What is the purpose of the kube-proxy?
The kube-proxy balances network requests to Pods.
The kube-proxy maintains network rules on nodes.
The kube-proxy ensures the cluster connectivity with the internet.
The kube-proxy maintains the DNS rules of the cluster.
The Answer Is:
B
Explanation:
The correct answer is B: kube-proxy maintains network rules on nodes. kube-proxy is a node component that implements part of the Kubernetes Service abstraction. It watches the Kubernetes API for Service and EndpointSlice/Endpoints changes, and then programs the node’s dataplane rules (commonly iptables or IPVS, depending on configuration) so that traffic sent to a Service virtual IP and port is correctly forwarded to one of the backing Pod endpoints.
This is how Kubernetes provides stable Service addresses even though Pod IPs are ephemeral. When Pods scale up/down or are replaced during a rollout, endpoints change; kube-proxy updates the node rules accordingly. From the perspective of a client, the Service name and ClusterIP remain stable, while the actual backend endpoints are load-distributed.
Option A is a tempting phrasing but incomplete: load distribution is an outcome of the forwarding rules, but kube-proxy’s primary role is maintaining the network forwarding rules that make Services work. Option C is incorrect because internet connectivity depends on cluster networking, routing, NAT, and often CNI configuration—not kube-proxy’s job description. Option D is incorrect because DNS is typically handled by CoreDNS; kube-proxy does not “maintain DNS rules.”
Operationally, kube-proxy failures often manifest as Service connectivity issues: Pod-to-Service traffic fails, ClusterIP routing breaks, NodePort behavior becomes inconsistent, or endpoints aren’t updated correctly. Modern Kubernetes environments sometimes replace kube-proxy with eBPF-based dataplanes, but in the classic architecture the correct statement remains: kube-proxy runs on each node and maintains the rules needed for Service traffic steering.
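As a hedged illustration of how to inspect what kube-proxy is doing (assumes node shell access, a kubeadm-style label on the kube-proxy Pods, iptables mode, and the default metrics bind on port 10249):

```shell
# Confirm kube-proxy is running on each node (label assumed from a typical kubeadm setup)
kubectl get pods -n kube-system -l k8s-app=kube-proxy -o wide

# On a node, in iptables mode, list the Service chains kube-proxy maintains
sudo iptables -t nat -L KUBE-SERVICES | head

# Ask kube-proxy which proxy mode it is using (port assumes the default metrics bind)
curl -s http://localhost:10249/proxyMode
```

If a Service is unreachable but its Endpoints look correct, checking these rules is a common way to confirm whether kube-proxy has programmed the node's dataplane.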
=========
Which of the following sentences is true about namespaces in Kubernetes?
You can create a namespace within another namespace in Kubernetes.
You can create two resources of the same kind and name in a namespace.
The default namespace exists when a new cluster is created.
All the objects in the cluster are namespaced by default.
The Answer Is:
C
Explanation:
The true statement is C: the default namespace exists when a new cluster is created. Namespaces are a Kubernetes mechanism for partitioning cluster resources into logical groups. When you set up a cluster, Kubernetes creates some initial namespaces (including default, and commonly kube-system, kube-public, and kube-node-lease). The default namespace is where resources go if you don’t specify a namespace explicitly.
Option A is false because namespaces are not hierarchical; Kubernetes does not support “namespaces inside namespaces.” Option B is false because within a given namespace, resource names must be unique per resource kind. You can’t have two Deployments with the same name in the same namespace. You can have a Deployment named web in one namespace and another Deployment named web in a different namespace—namespaces provide that scope boundary. Option D is false because not all objects are namespaced. Many resources are cluster-scoped (for example, Nodes, PersistentVolumes, ClusterRoles, ClusterRoleBindings, and StorageClasses). Namespaces apply only to namespaced resources.
Operationally, namespaces support multi-tenancy and environment separation (dev/test/prod), RBAC scoping, resource quotas, and policy boundaries. For example, you can grant a team access only to their namespace and enforce quotas that prevent them from consuming excessive CPU/memory. Namespaces also make organization and cleanup easier: deleting a namespace removes most namespaced resources inside it (subject to finalizers).
So, the verified correct statement is C: the default namespace exists upon cluster creation.
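As a short sketch: on a fresh cluster, `kubectl get namespaces` typically lists default, kube-system, kube-public, and kube-node-lease, and creating a new namespace needs only a minimal manifest (the name team-a is illustrative):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
```

Applying this with `kubectl apply -f`, then creating a Deployment named web in both team-a and default, demonstrates the scoping rule: names must be unique per kind only within a namespace.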
=========
In which framework do the developers no longer have to deal with capacity, deployments, scaling and fault tolerance, and OS?
Docker Swarm
Kubernetes
Mesos
Serverless
The Answer Is:
D
Explanation:
Serverless is the model where developers most directly avoid managing server capacity, OS operations, and much of the deployment/scaling/fault-tolerance mechanics, which is why D is correct. In serverless computing (commonly Function-as-a-Service, FaaS, and managed serverless container platforms), the provider abstracts away the underlying servers. You typically deploy code (functions) or a container image, define triggers (HTTP events, queues, schedules), and the platform automatically provisions the required compute, scales it based on demand, and handles much of the availability and fault tolerance behind the scenes.
It’s important to compare this to Kubernetes: Kubernetes does automate scheduling, self-healing, rolling updates, and scaling, but it still requires you (or your platform team) to design and operate cluster capacity, node pools, upgrades, runtime configuration, networking, and baseline reliability controls. Even in managed Kubernetes services, you still choose node sizes, scale policies, and operational configuration. Kubernetes reduces toil, but it does not eliminate infrastructure concerns in the same way serverless does.
Docker Swarm and Mesos are orchestration platforms that schedule workloads, but they still require managing the underlying capacity and OS-level concerns. They do not free developers from dealing with capacity and the operating system.
From a cloud native viewpoint, serverless is about consuming compute as an on-demand utility. Kubernetes can be a foundation for a serverless experience (for example, with event-driven autoscaling or serverless frameworks), but the pure framework that removes the most operational burden from developers is serverless.
=========
How can you monitor the progress of an update to a Deployment, DaemonSet, or StatefulSet?
kubectl rollout watch
kubectl rollout progress
kubectl rollout state
kubectl rollout status
The Answer Is:
D
Explanation:
To monitor rollout progress for Kubernetes workload updates (most commonly Deployments, and also StatefulSets and DaemonSets where applicable), the standard kubectl command is kubectl rollout status, which makes D correct.
Kubernetes manages updates declaratively through controllers. For a Deployment, an update typically creates a new ReplicaSet and gradually shifts replicas from the old to the new according to the strategy (e.g., RollingUpdate with maxUnavailable and maxSurge). For StatefulSets, updates may be ordered and respect stable identities, and for DaemonSets, an update replaces node-level Pods according to update strategy. In all cases, you often want a single command that tells you whether the controller has completed the update and whether the new replicas are available. kubectl rollout status queries the resource status and prints a progress view until completion or timeout.
The other commands listed are not the canonical kubectl subcommands. kubectl rollout watch, kubectl rollout progress, and kubectl rollout state are not standard rollout verbs in kubectl. The supported rollout verbs typically include status, history, undo, pause, and resume (depending on kubectl version and resource type).
Operationally, kubectl rollout status deployment/<name> is the typical invocation; it prints progress and blocks until the rollout completes, or returns an error if the rollout fails or exceeds its progress deadline.
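A brief command sketch (the workload names web, node-agent, and db are illustrative):

```shell
# Watch a Deployment rollout until it completes or times out
kubectl rollout status deployment/web

# The same verb works for the other workload kinds
kubectl rollout status daemonset/node-agent
kubectl rollout status statefulset/db

# Related supported rollout verbs
kubectl rollout history deployment/web
kubectl rollout undo deployment/web
```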
=========
Which of the following is a good habit for cloud native cost efficiency?
Follow an automated approach to cost optimization, including visibility and forecasting.
Follow manual processes for cost analysis, including visibility and forecasting.
Use only one cloud provider to simplify the cost analysis.
Keep your legacy workloads unchanged, to avoid cloud costs.
The Answer Is:
A
Explanation:
The correct answer is A. In cloud-native environments, costs are highly dynamic: autoscaling changes compute footprint, ephemeral environments come and go, and usage-based billing applies to storage, network egress, load balancers, and observability tooling. Because of this variability, automation is the most sustainable way to achieve cost efficiency. Automated visibility (dashboards, chargeback/showback), anomaly detection, and forecasting help teams understand where spend is coming from and how it changes over time. Automated optimization actions can include right-sizing requests/limits, enforcing TTLs on preview environments, scaling down idle clusters, and cleaning unused resources.
Manual processes (B) don’t scale as complexity grows. By the time someone reviews a spreadsheet or dashboard weekly, cost spikes may have already occurred. Automation enables fast feedback loops and guardrails, which is essential for preventing runaway spend caused by misconfiguration (e.g., excessive log ingestion, unbounded autoscaling, oversized node pools).
Option C is not a cost-efficiency “habit.” Single-provider strategies may simplify some billing views, but they can also reduce leverage and may not be feasible for resilience/compliance; it’s a business choice, not a best practice for cloud-native cost management. Option D is counterproductive: keeping legacy workloads unchanged often wastes money because cloud efficiency typically requires adapting workloads—right-sizing, adopting autoscaling, and using managed services appropriately.
In Kubernetes specifically, cost efficiency is tightly linked to resource management: accurate CPU/memory requests, limits where appropriate, cluster autoscaler tuning, and avoiding overprovisioning. Observability also matters because you can’t optimize what you can’t measure. Therefore, the best habit is an automated cost optimization approach with strong visibility and forecasting—A.
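Right-sizing starts with explicit requests and limits. A hedged fragment of a container spec (the numbers are purely illustrative, not recommendations; derive real values from observed usage):

```yaml
# Fragment of a Pod/Deployment container spec
resources:
  requests:
    cpu: "250m"        # what the scheduler reserves for this container
    memory: "256Mi"
  limits:
    memory: "512Mi"    # hard cap; exceeding it gets the container OOM-killed
```

Setting requests close to actual usage is what lets the cluster autoscaler and bin-packing reduce the node footprint, which is where most Kubernetes cost savings come from.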
=========
Which type of Service requires manual creation of Endpoints?
LoadBalancer
Services without selectors
NodePort
ClusterIP with selectors
The Answer Is:
B
Explanation:
A Kubernetes Service without selectors requires you to manage its backend endpoints manually, so B is correct. Normally, a Service uses a selector to match a set of Pods (by labels). Kubernetes then automatically maintains the backend list (historically Endpoints, now commonly EndpointSlice) by tracking which Pods match the selector and are Ready. This automation is one of the key reasons Services provide stable connectivity to dynamic Pods.
When you create a Service without a selector, Kubernetes has no way to know which Pods (or external IPs) should receive traffic. In that pattern, you explicitly create an Endpoints object (or EndpointSlices, depending on your approach and controller support) that maps the Service name to one or more IP:port tuples. This is commonly used to represent external services (e.g., a database running outside the cluster) while still providing a stable Kubernetes Service DNS name for in-cluster clients. Another use case is advanced migration scenarios where endpoints are controlled by custom controllers rather than label selection.
Why the other options are wrong: Service types like ClusterIP, NodePort, and LoadBalancer describe how a Service is exposed, but they do not inherently require manual endpoint management. A ClusterIP Service with selectors (D) is the standard case where endpoints are automatically created and updated. NodePort and LoadBalancer Services also typically use selectors and therefore inherit automatic endpoint management; the difference is in how traffic enters the cluster, not how backends are discovered.
Operationally, when using Services without selectors, you must ensure endpoint IPs remain correct, health is accounted for (often via external tooling), and you update endpoints when backends change. The key concept is: no selector → Kubernetes can’t auto-populate endpoints → you must provide them.
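A minimal sketch of the pattern (names and the IP are illustrative; the Endpoints object must match the Service name and namespace):

```yaml
# Service with no selector: Kubernetes will not auto-populate backends
apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  ports:
    - port: 5432
      targetPort: 5432
---
# Manually managed Endpoints mapping the Service to an external backend
apiVersion: v1
kind: Endpoints
metadata:
  name: external-db
subsets:
  - addresses:
      - ip: 10.0.40.15   # database running outside the cluster
    ports:
      - port: 5432
```

In-cluster clients can then reach the external database at the stable DNS name external-db:5432, and only the Endpoints object needs updating if the backend moves.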
=========
What does the "nodeSelector" within a PodSpec use to place Pods on the target nodes?
Annotations
IP Addresses
Hostnames
Labels
The Answer Is:
D
Explanation:
nodeSelector is a simple scheduling constraint that matches node labels, so the correct answer is D (Labels). In Kubernetes, nodes have key/value labels (for example, disktype=ssd, topology.kubernetes.io/zone=us-east-1a, kubernetes.io/os=linux). When you set spec.nodeSelector in a Pod template, you provide a map of required label key/value pairs. The kube-scheduler will then only consider nodes that have all those labels with matching values as eligible placement targets for that Pod.
This is different from annotations: annotations are also key/value metadata, but they are not intended for selection logic and are not used by the scheduler for nodeSelector. IP addresses and hostnames are not the mechanism used by nodeSelector either. While Kubernetes nodes do have hostnames and IPs, nodeSelector specifically operates on labels because labels are designed for selection, grouping, and placement constraints.
Operationally, nodeSelector is the most basic form of node placement control. It is commonly used to pin workloads to specialized hardware (GPU nodes), compliance zones, or certain OS/architecture pools. However, it has limitations: it only supports exact match on labels and cannot express more complex rules (like “in this set of zones” or “prefer but don’t require”). For that, Kubernetes offers node affinity (requiredDuringSchedulingIgnoredDuringExecution, preferredDuringSchedulingIgnoredDuringExecution) which supports richer expressions.
Still, the underlying mechanism is the same concept: the scheduler evaluates your Pod’s placement requirements against node metadata, and for nodeSelector, that metadata is labels. Therefore, the verified correct answer is D.
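A minimal sketch (the node name, label, and image are illustrative):

```yaml
# First label a node, e.g.: kubectl label nodes worker-1 disktype=ssd
apiVersion: v1
kind: Pod
metadata:
  name: fast-storage-app
spec:
  nodeSelector:
    disktype: ssd        # Pod is only schedulable on nodes carrying this label
  containers:
    - name: app
      image: nginx:1.25
```

If no node carries the disktype=ssd label, the Pod stays Pending, which is a common symptom to check when nodeSelector is in play.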
=========
In Kubernetes, what is the primary purpose of using annotations?
To control the access permissions for users and service accounts.
To provide a way to attach metadata to objects.
To specify the deployment strategy for applications.
To define the specifications for resource limits and requests.
The Answer Is:
B
Explanation:
Annotations in Kubernetes are a flexible mechanism for attaching non-identifying metadata to Kubernetes objects. Their primary purpose is to store additional information that is not used for object selection or grouping, which makes Option B the correct answer.
Unlike labels, which are designed to be used for selection, filtering, and grouping of resources (for example, by Services or Deployments), annotations are intended purely for informational or auxiliary purposes. They allow users, tools, and controllers to store arbitrary key–value data on objects without affecting Kubernetes’ core behavior. This makes annotations ideal for storing data such as build information, deployment timestamps, commit hashes, configuration hints, or ownership details.
Annotations are commonly consumed by external tools and controllers rather than by the Kubernetes scheduler or control plane for decision-making. For example, ingress controllers, service meshes, monitoring agents, and CI/CD systems often read annotations to enable or customize specific behaviors. Because annotations are not used for querying or selection, Kubernetes places no strict size or structure requirements on their values beyond general object size limits.
Option A is incorrect because access permissions are managed using Role-Based Access Control (RBAC), which relies on roles, role bindings, and service accounts—not annotations. Option C is incorrect because deployment strategies (such as RollingUpdate or Recreate) are defined in the specification of workload resources like Deployments, not through annotations. Option D is also incorrect because resource limits and requests are specified explicitly in the Pod or container spec under the resources field.
In summary, annotations provide a powerful and extensible way to associate metadata with Kubernetes objects without influencing scheduling, selection, or identity. They support integration, observability, and operational tooling while keeping core Kubernetes behavior predictable and stable. This design intent is clearly documented in Kubernetes metadata concepts, making Option B the correct and verified answer.
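A hedged sketch contrasting the two kinds of metadata (the example.com annotation keys are hypothetical tooling conventions, not Kubernetes-defined keys):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  labels:
    app: web                               # labels: used for selection and grouping
  annotations:
    example.com/commit: "9f2c1ab"          # annotations: non-identifying metadata
    example.com/build-url: "https://ci.example.com/builds/1234"
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web                             # selectors match labels, never annotations
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
```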
=========
In a cloud native environment, how do containerization and virtualization differ in terms of resource management?
Containerization uses hypervisors to manage resources, while virtualization does not.
Containerization shares the host OS, while virtualization runs a full OS for each instance.
Containerization consumes more memory than virtualization by default.
Containerization allocates resources per container, virtualization does not isolate them.
The Answer Is:
B
Explanation:
The fundamental difference between containerization and virtualization in a cloud native environment lies in how they manage and isolate resources, particularly with respect to the operating system. The correct description is that containerization shares the host operating system, while virtualization runs a full operating system for each instance, making option B the correct answer.
In virtualization, each virtual machine (VM) includes its own complete guest operating system running on top of a hypervisor. The hypervisor virtualizes hardware resources—CPU, memory, storage, and networking—and allocates them to each VM. Because every VM runs a full OS, virtualization introduces significant overhead in terms of memory usage, disk space, and startup time. However, it provides strong isolation between workloads, which is useful for running different operating systems or untrusted workloads on the same physical hardware.
In contrast, containerization operates at the operating system level rather than the hardware level. Containers share the host OS kernel and isolate applications using kernel features such as namespaces and control groups (cgroups). This design makes containers much lighter weight than virtual machines. Containers start faster, consume fewer resources, and allow higher workload density on the same infrastructure. Resource limits and isolation are still enforced, but without duplicating the entire operating system for each application instance.
Option A is incorrect because hypervisors are a core component of virtualization, not containerization. Option C is incorrect because containers generally consume less memory than virtual machines due to the absence of a full guest OS. Option D is incorrect because virtualization does isolate resources very strongly, while containers rely on OS-level isolation rather than hardware-level isolation.
In cloud native architectures, containerization is preferred for microservices and scalable workloads because of its efficiency and portability. Virtualization is still valuable for stronger isolation and heterogeneous operating systems. Therefore, Option B accurately captures the key resource management distinction between the two models.
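As a hedged illustration of OS-level isolation (assumes a Linux host with the Docker CLI and util-linux installed; flags shown are standard Docker options):

```shell
# Cap a container at half a CPU and 256 MiB of memory; limits are enforced
# by the host kernel's cgroups, not by a guest OS or hypervisor.
docker run -d --cpus="0.5" --memory="256m" nginx:1.25

# The kernel namespaces that isolate containers are visible on the host
lsns --type pid
```

Contrast this with a VM, where the same limits would be expressed as virtual hardware (vCPUs, RAM) allocated by the hypervisor to a full guest OS.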
=========
The Container Runtime Interface (CRI) defines the protocol for the communication between:
The kubelet and the container runtime.
The container runtime and etcd.
The kube-apiserver and the kubelet.
The container runtime and the image registry.
The Answer Is:
A
Explanation:
The CRI (Container Runtime Interface) defines how the kubelet talks to the container runtime, so A is correct. The kubelet is the node agent responsible for ensuring containers are running in Pods on that node. It needs a standardized way to request operations such as: create a Pod sandbox, pull an image, start/stop containers, execute commands, attach streams, and retrieve logs. CRI provides that contract so kubelet does not need runtime-specific integrations.
This interface is a key part of Kubernetes’ modular design. Different container runtimes implement the CRI, allowing Kubernetes to run with containerd, CRI-O, and other CRI-compliant runtimes. This separation of concerns lets Kubernetes focus on orchestration, while runtimes focus on executing containers according to the OCI runtime spec, managing images, and handling low-level container lifecycle.
Why the other options are incorrect:
etcd is the control plane datastore; container runtimes do not communicate with etcd via CRI.
kube-apiserver and kubelet communicate using Kubernetes APIs, but CRI is not their protocol; CRI is specifically kubelet ↔ runtime.
container runtime and image registry communicate using registry protocols (image pull/push APIs), but that is not CRI. CRI may trigger image pulls via runtime requests, yet the actual registry communication is separate.
Operationally, this distinction matters when debugging node issues. If Pods are stuck in “ContainerCreating” due to image pull failures or runtime errors, you often investigate kubelet logs and the runtime (containerd/CRI-O) logs. Kubernetes administrators also care about CRI streaming (exec/attach/logs streaming), runtime configuration, and compatibility across Kubernetes versions.
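For node-level debugging, crictl speaks to the runtime over the same CRI endpoint the kubelet uses. A hedged sketch (the socket path assumes containerd; adjust for CRI-O):

```shell
# Inspect the runtime directly, bypassing the Kubernetes API
crictl --runtime-endpoint unix:///run/containerd/containerd.sock pods
crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps
crictl --runtime-endpoint unix:///run/containerd/containerd.sock images
```

This is useful when Pods are stuck in ContainerCreating and you want to see what the runtime itself reports, independent of the API server.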
So, the verified answer is A: the kubelet and the container runtime.
=========