Which Kubernetes component is the smallest deployable unit of computing?
A. StatefulSet
B. Deployment
C. Pod
D. Container
The Answer Is:
C
Explanation:
In Kubernetes, the Pod is the smallest deployable and schedulable unit, making C correct. Kubernetes does not schedule individual containers directly; instead, it schedules Pods, each of which encapsulates one or more containers that must run together on the same node. This design supports both single-container Pods (the most common) and multi-container Pods (for sidecars, adapters, and co-located helper processes).
Pods provide shared context: containers in a Pod share the same network namespace (one IP address and port space) and can share storage volumes. This enables tight coupling where needed—for example, a service mesh proxy sidecar and the application container communicate via localhost, or a log-forwarding sidecar reads logs from a shared volume. Kubernetes manages lifecycle at the Pod level: kubelet ensures the containers defined in the PodSpec are running and uses probes to determine readiness and liveness.
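To make the shared context concrete, here is a minimal two-container Pod manifest in the spirit of the log-forwarding example above (the image names, volume name, and file paths are illustrative placeholders, not from any particular product):

apiVersion: v1
kind: Pod
metadata:
  name: web-with-log-forwarder
spec:
  containers:
  - name: app                       # main application container
    image: nginx:1.25
    volumeMounts:
    - name: logs                    # shared volume visible to both containers
      mountPath: /var/log/nginx
  - name: log-forwarder             # sidecar that reads what the app writes
    image: busybox:1.36
    command: ["sh", "-c", "tail -F /var/log/nginx/access.log"]
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx
  volumes:
  - name: logs
    emptyDir: {}                    # Pod-scoped scratch space shared by both containers

Both containers are scheduled onto the same node, share the Pod's IP address, and exchange data through the emptyDir volume, which is exactly the shared context described above.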
StatefulSet and Deployment are controllers that manage sets of Pods. A Deployment manages ReplicaSets for stateless workloads and provides rollout/rollback features; a StatefulSet provides stable identities, ordered operations, and stable storage for stateful replicas. These are higher-level constructs, not the smallest units.
Option D (“Container”) is smaller in an abstract sense, but it is not the smallest Kubernetes deployable unit because the Kubernetes API and scheduler operate at the Pod boundary. You don’t “kubectl apply” a bare container; you apply a Pod, or a controller whose Pod template defines the containers.
Understanding Pods as the atomic unit is crucial: Services select Pods, autoscalers scale Pods (replica counts), and scheduling decisions are made per Pod. That’s why Kubernetes documentation consistently refers to Pods as the fundamental building block for running workloads.
=========
What is a DaemonSet?
A. It’s a type of workload that ensures a specific set of nodes run a copy of a Pod.
B. It’s a type of workload responsible for maintaining a stable set of replica Pods running in any node.
C. It’s a type of workload that needs to be run periodically on a given schedule.
D. It’s a type of workload that provides guarantees about ordering, uniqueness, and identity of a set of Pods.
The Answer Is:
A
Explanation:
A DaemonSet ensures that a copy of a Pod runs on each node (or a selected subset of nodes), which matches option A and makes it correct. DaemonSets are ideal for node-level agents that should exist everywhere, such as log shippers, monitoring agents, CNI components, storage daemons, and security scanners.
DaemonSets differ from Deployments/ReplicaSets because their goal is not “N replicas anywhere,” but “one replica per node” (subject to node selection). When nodes are added to the cluster, the DaemonSet controller automatically schedules the DaemonSet Pod onto the new nodes. When nodes are removed, the Pods associated with those nodes are cleaned up. You can restrict placement using node selectors, affinity rules, or tolerations so that only certain nodes run the DaemonSet (for example, only Linux nodes, only GPU nodes, or only nodes with a dedicated label).
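As a sketch, a node-agent DaemonSet restricted to Linux nodes and tolerating the control-plane taint might look like this (the labels, image, and agent name are hypothetical):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent
spec:
  selector:
    matchLabels:
      app: node-agent
  template:
    metadata:
      labels:
        app: node-agent
    spec:
      nodeSelector:
        kubernetes.io/os: linux              # only schedule onto Linux nodes
      tolerations:
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule                   # also run on tainted control-plane nodes
      containers:
      - name: agent
        image: example.com/node-agent:1.0    # hypothetical node-level agent image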
Option B sounds like a ReplicaSet/Deployment behavior (stable set of replicas), not a DaemonSet. Option C describes CronJobs (scheduled, recurring run-to-completion workloads). Option D describes StatefulSets, which provide stable identity, ordering, and uniqueness guarantees for stateful replicas.
Operationally, DaemonSets matter because they often run critical cluster services. During maintenance and upgrades, DaemonSet update strategy determines how those node agents roll out across the fleet. Since DaemonSets can tolerate taints (like master/control-plane node taints), they can also be used to ensure essential agents run across all nodes, including special pools. Thus, the correct definition is A.
=========
In Kubernetes, which abstraction defines a logical set of Pods and a policy by which to access them?
A. Service Account
B. NetworkPolicy
C. Service
D. Custom Resource Definition
The Answer Is:
C
Explanation:
The correct answer is C: Service. A Kubernetes Service is an abstraction that provides stable access to a logical set of Pods. Pods are ephemeral: they can be rescheduled, recreated, and scaled, which changes their IP addresses over time. A Service solves this by providing a stable identity—typically a virtual IP (ClusterIP) and a DNS name—and a traffic-routing policy that directs requests to the current set of backend Pods.
Services commonly select Pods using labels via a selector (e.g., app=web). Kubernetes then maintains the backend endpoint list (Endpoints/EndpointSlices). The cluster networking layer routes traffic sent to the Service IP/port to one of the Pod endpoints, enabling load distribution across replicas. This is fundamental to microservices architectures: clients call the Service name, not individual Pods.
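A minimal Service matching that example might look like this (the names and ports are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web            # the logical set of Pods this Service fronts
  ports:
  - port: 80            # port exposed on the Service's ClusterIP
    targetPort: 8080    # port the selected Pods actually listen on

Clients inside the cluster then reach the backends through the stable name web (or web.<namespace>.svc.cluster.local) instead of tracking individual Pod IPs.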
Why the other options are incorrect:
A ServiceAccount is an identity for Pods to authenticate to the Kubernetes API; it doesn’t define a set of Pods nor traffic access policy.
A NetworkPolicy defines allowed network flows (who can talk to whom) but does not provide stable addressing or load-balanced access to Pods. It is a security policy, not an exposure abstraction.
A CustomResourceDefinition extends the Kubernetes API with new resource types; it’s unrelated to service discovery and traffic routing for a set of Pods.
Understanding Services is a core Kubernetes fundamental: they decouple backend Pod churn from client connectivity. Services also integrate with different exposure patterns via type (ClusterIP, NodePort, LoadBalancer, ExternalName) and can be paired with Ingress/Gateway for HTTP routing. But the essential definition in the question—“logical set of Pods and a policy to access them”—is exactly the textbook description of a Service.
Therefore, the verified correct answer is C.
=========
Which of the following options includes valid API versions?
A. alpha1v1, beta3v3, v2
B. alpha1, beta3, v2
C. v1alpha1, v2beta3, v2
D. v1alpha1, v2beta3, 2.0
The Answer Is:
C
Explanation:
Kubernetes API versions follow a consistent naming pattern that signals stability: stable versions are written v1, v2, and so on, while pre-release versions add a qualifier, as in v1alpha1 or v2beta3. Every string in option C (v1alpha1, v2beta3, v2) matches this pattern, so C is correct.
In Kubernetes, the “v” prefix is part of the standard for API versions. A stable API uses v1, v2, etc. Pre-release APIs include a stability marker: alpha (earliest, most changeable) and beta (more stable but still may change). The numeric suffix (e.g., alpha1, beta3) indicates iteration within that stability stage.
Option A is invalid because strings like alpha1v1 and beta3v3 do not match Kubernetes conventions (the v comes first, and alpha/beta are qualifiers after the version: v1alpha1). Option B is invalid because alpha1 and beta3 are missing the leading version prefix; Kubernetes API versions are not just “alpha1.” Option D includes 2.0, which looks like semantic versioning but is not the Kubernetes API version format. Kubernetes uses v2, not 2.0, for API versions.
Understanding this matters because API versions signal compatibility guarantees. Stable APIs are supported for a defined deprecation window, while alpha/beta APIs may change in incompatible ways and can be removed more easily. When authoring manifests, selecting the correct apiVersion ensures the API server accepts your resource and that controllers interpret fields correctly.
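The version string appears in a manifest's apiVersion field, optionally prefixed by an API group. The two fragments below use standard, stable resources; pre-release versions would appear in the same position (for example, <group>/v1alpha1), depending on which versions your cluster's API server serves:

apiVersion: v1        # core API group, stable version
kind: Service
---
apiVersion: apps/v1   # named group "apps", stable version v1
kind: Deployment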
Therefore, among the choices, C is the only option composed entirely of valid Kubernetes-style API version strings.
=========
What is the main role of the Kubernetes DNS within a cluster?
A. Acts as a DNS server for virtual machines that are running outside the cluster.
B. Provides DNS as a Service, allowing users to create zones and records for domains that they own.
C. Allows Pods running in dual stack to convert IPv6 calls into IPv4 calls.
D. Provides consistent DNS names for Pods and Services for workloads that need to communicate with each other.
The Answer Is:
D
Explanation:
Kubernetes DNS (commonly implemented by CoreDNS) provides service discovery inside the cluster by assigning stable, consistent DNS names to Services and (optionally) Pods, which makes D correct. In a Kubernetes environment, Pods are ephemeral—IP addresses can change when Pods restart or move between nodes. DNS-based discovery allows applications to communicate using stable names rather than hardcoded IPs.
For Services, Kubernetes creates DNS records like service-name.namespace.svc.cluster.local, which resolve to the Service’s virtual IP (ClusterIP) or, for headless Services, to the set of Pod endpoints. This supports both load-balanced communication (standard Service) and per-Pod addressing (headless Service, commonly used with StatefulSets). Kubernetes DNS is therefore a core building block that enables microservices to locate each other reliably.
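As a small illustration, the Pod below resolves a hypothetical Service named db in the default namespace (the Service name and namespace are assumptions for the example):

apiVersion: v1
kind: Pod
metadata:
  name: dns-client
spec:
  containers:
  - name: app
    image: busybox:1.36
    command: ["sh", "-c", "nslookup db.default.svc.cluster.local && sleep 3600"]

If a ClusterIP Service named db exists in the default namespace, the lookup returns its stable virtual IP no matter how often the backing Pods are replaced.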
Option A is incorrect because cluster DNS serves workloads inside the cluster, not external virtual machines. Option B describes a managed DNS hosting product (creating zones and records for domains you own), which is outside the scope of cluster DNS. Option C describes protocol translation, which is not the role of DNS: dual-stack support relates to IP families and networking configuration, not to DNS converting IPv6 calls into IPv4 calls.
In day-to-day Kubernetes operations, DNS reliability impacts everything: if DNS is unhealthy, Pods may fail to resolve Services, causing cascading outages. That’s why CoreDNS is typically deployed as a highly available add-on in kube-system, and why DNS caching and scaling are important for large clusters.
So the correct statement is D: Kubernetes DNS provides consistent DNS names so workloads can communicate reliably.
=========
Which of the following is a correct definition of a Helm chart?
A. A Helm chart is a collection of YAML files bundled in a tar.gz file and can be applied without decompressing it.
B. A Helm chart is a collection of JSON files and contains all the resource definitions to run an application on Kubernetes.
C. A Helm chart is a collection of YAML files that can be applied on Kubernetes by using the kubectl tool.
D. A Helm chart is similar to a package and contains all the resource definitions to run an application on Kubernetes.
The Answer Is:
D
Explanation:
A Helm chart is best described as a package for Kubernetes applications, containing the resource definitions (as templates) and metadata needed to install and manage an application—so D is correct. Helm is a package manager for Kubernetes; the chart is the packaging format. Charts include a Chart.yaml (metadata), a values.yaml (default configuration values), and a templates/ directory containing Kubernetes manifests written as templates. When you install a chart, Helm renders those templates into concrete Kubernetes YAML manifests by substituting values, then applies them to the cluster.
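For orientation, here is a minimal Chart.yaml with the conventional chart layout noted in comments (the chart name and version numbers are placeholders):

# mychart/Chart.yaml -- chart metadata
apiVersion: v2        # chart format used by Helm 3
name: mychart
version: 0.1.0        # version of the chart package itself
appVersion: "1.0.0"   # version of the application the chart deploys
# Alongside Chart.yaml, the chart directory conventionally contains:
#   values.yaml   -- default configuration values
#   templates/    -- Kubernetes manifests written as templates

Running helm install my-release ./mychart then renders templates/ against values.yaml (plus any overrides) and records the result as a tracked release that can later be upgraded or rolled back.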
Option A is misleading/incomplete. While charts are often distributed as a compressed tarball (.tgz), the defining feature is not “YAML bundled in tar.gz” but the packaging and templating model that supports install/upgrade/rollback. Option B is incorrect because Helm charts are not “collections of JSON files” by definition; Kubernetes resources can be expressed as YAML or JSON, but Helm charts overwhelmingly use templated YAML. Option C is incorrect because charts are not simply YAML applied by kubectl; Helm manages releases, tracks installed resources, and supports upgrades and rollbacks. Helm uses Kubernetes APIs under the hood, but the value of Helm is the lifecycle and packaging system, not “kubectl apply.”
In cloud-native application delivery, Helm helps standardize deployments across environments (dev/stage/prod) by externalizing configuration through values. It reduces copy/paste and supports reuse via dependencies and subcharts. Helm also supports versioning of application packages, allowing teams to upgrade predictably and roll back if needed—critical for production change management.
So, the correct and verified definition is D: a Helm chart is like a package containing the resource definitions needed to run an application on Kubernetes.
=========
What does vertical scaling an application deployment describe best?
A. Adding/removing applications to meet demand.
B. Adding/removing node instances to the cluster to meet demand.
C. Adding/removing resources to applications to meet demand.
D. Adding/removing application instances of the same application to meet demand.
The Answer Is:
C
Explanation:
Vertical scaling means changing the resources allocated to a single instance of an application (more or less CPU/memory), which is why C is correct. In Kubernetes terms, this corresponds to adjusting container resource requests and limits (for CPU and memory). Increasing resources can help a workload handle more load per Pod by giving it more compute or memory headroom; decreasing can reduce cost and improve cluster packing efficiency.
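Concretely, vertical scaling shows up in the container spec as requests and limits (the numbers below are arbitrary examples, not recommendations):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: app
        image: nginx:1.25
        resources:
          requests:
            cpu: 250m        # what the scheduler reserves for this container
            memory: 256Mi
          limits:
            cpu: "1"         # hard ceiling enforced at runtime
            memory: 512Mi

Raising or lowering these values is vertical scaling; changing spec.replicas instead would be horizontal scaling.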
This differs from horizontal scaling, which changes the number of instances (replicas). Option D describes horizontal scaling: adding/removing replicas of the same workload, typically managed by a Deployment and often automated via the Horizontal Pod Autoscaler (HPA). Option B describes scaling the infrastructure layer (nodes), which is cluster/node autoscaling (the Cluster Autoscaler in cloud environments). Option A is not a standard scaling definition.
In practice, vertical scaling in Kubernetes can be manual (edit the Deployment’s resource requests/limits) or automated using the Vertical Pod Autoscaler (VPA), which can recommend or apply new requests based on observed usage. A key nuance is that changing requests/limits often requires Pod restarts to take effect, so vertical scaling is less “instant” than HPA and can disrupt workloads if not planned. That’s why many production teams prefer horizontal scaling for traffic-driven workloads and use vertical scaling to right-size baseline resources or address memory- or CPU-bound behavior.
From a cloud-native architecture standpoint, understanding vertical vs horizontal scaling helps you design for elasticity: use vertical scaling to tune per-instance capacity; use horizontal scaling for resilience and throughput; and combine with node autoscaling to ensure the cluster has sufficient capacity. The definition the question is testing is simple: vertical scaling = change resources per application instance, which is option C.
=========
Which of the following is a valid PromQL query?
A. SELECT * from http_requests_total WHERE job=apiserver
B. http_requests_total WHERE (job="apiserver")
C. SELECT * from http_requests_total
D. http_requests_total{job="apiserver"}
The Answer Is:
D
Explanation:
Prometheus Query Language (PromQL) is not SQL: a query starts from a metric name and filters time series with label matchers enclosed in curly braces. Option D, http_requests_total{job="apiserver"}, follows exactly that form, so D is correct.
Conceptually, the query means “select the time series of the metric http_requests_total whose job label equals apiserver.” The key point is that PromQL filtering is expressed by attaching label matchers directly to the metric selector, never with SQL-style clauses.
Options A and C are invalid because they use SQL syntax (SELECT * FROM ...), which PromQL does not understand. Option B is invalid because PromQL has no WHERE keyword.
In Kubernetes observability, PromQL is central to building dashboards and alerts from cluster metrics. For example, you might compute rates from counters: rate(http_requests_total{job="apiserver"}[5m]), aggregate by labels: sum by (code) (...), or alert on error ratios. Understanding the selector and label-matcher model is foundational because Prometheus metrics are multi-dimensional—labels define the slices you can filter and aggregate on.
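For instance, with the Prometheus Operator installed (an assumption; plain Prometheus defines equivalent rules in its own rule files rather than through this CRD), the rate example above could be captured as a recording rule:

apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: apiserver-requests
spec:
  groups:
  - name: apiserver.rules
    rules:
    - record: job:http_requests:rate5m     # name of the precomputed series
      expr: sum by (job) (rate(http_requests_total{job="apiserver"}[5m]))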
So, within the provided options, D is the only one written in valid PromQL syntax, and it is therefore the verified correct answer.
=========
Which tools enable Kubernetes HorizontalPodAutoscalers to use custom, application-generated metrics to trigger scaling events?
A. Prometheus and the prometheus-adapter.
B. Graylog and graylog-autoscaler metrics.
C. Graylog and the kubernetes-adapter.
D. Grafana and Prometheus.
The Answer Is:
A
Explanation:
To scale on custom, application-generated metrics, the Horizontal Pod Autoscaler (HPA) needs those metrics exposed through the Kubernetes custom metrics (or external metrics) API. A common and Kubernetes-documented approach is Prometheus + prometheus-adapter, making A correct. Prometheus scrapes application metrics (for example, request rate, queue depth, in-flight requests) from /metrics endpoints. The prometheus-adapter then translates selected Prometheus time series into the Kubernetes Custom Metrics API so the HPA controller can fetch them and make scaling decisions.
Why not the other options: Grafana is a visualization tool; it does not provide the metrics-API translation layer that HPA requires, so “Grafana and Prometheus” is incomplete. Graylog is primarily a log management system; it is not a standard source of custom metrics for the Kubernetes metrics APIs. And the “kubernetes-adapter” in option C is not the name of a standard ecosystem component; the recognized adapter for Prometheus-backed custom metrics is prometheus-adapter.
This matters operationally because HPA is not limited to CPU/memory. CPU and memory use resource metrics (often from metrics-server), but modern autoscaling often needs application signals: message queue length, requests per second, latency, or business metrics. With Prometheus and prometheus-adapter, you can define HPA rules such as “scale to maintain queue depth under X” or “scale based on requests per second per pod.” This can produce better scaling behavior than CPU-based scaling alone, especially for I/O-bound services or workloads with uneven CPU profiles.
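Once prometheus-adapter exposes such a metric through the custom metrics API, an HPA can target it. A sketch follows; the metric name http_requests_per_second and the target value are assumptions that depend entirely on your adapter configuration:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web              # the workload being scaled
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Pods             # a custom metric reported per Pod
    pods:
      metric:
        name: http_requests_per_second
      target:
        type: AverageValue
        averageValue: "100"    # keep the per-Pod average at or below 100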
So the correct tooling combination in the provided choices is Prometheus and the prometheus-adapter, option A.
=========
Which of the following options is true about considerations for large Kubernetes clusters?
A. Kubernetes supports up to 1000 nodes and recommends no more than 1000 containers per node.
B. Kubernetes supports up to 5000 nodes and recommends no more than 500 Pods per node.
C. Kubernetes supports up to 5000 nodes and recommends no more than 110 Pods per node.
D. Kubernetes supports up to 50 nodes and recommends no more than 1000 containers per node.
The Answer Is:
C
Explanation:
The correct answer is C: the Kubernetes documentation on considerations for large clusters states that a cluster can have up to 5,000 nodes and recommends no more than 110 Pods per node (along with no more than 150,000 total Pods and 300,000 total containers). The 110-Pods-per-node figure is the kubelet default and reflects practical limits in kubelet, service routing, and per-node IP address management: with a typical /24 Pod CIDR per node (about 254 addresses), 110 Pods leaves more than two addresses per Pod, giving headroom for Pod churn.
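The per-node ceiling is enforced by the kubelet and is configurable; 110 is simply the default, as this KubeletConfiguration fragment shows:

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
maxPods: 110    # the default; raising it requires matching Pod CIDR and node capacity planning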
Why the other options are incorrect: A and D reference “containers per node,” which is not the standard sizing guidance (Kubernetes typically discusses Pods per node). B’s “500 Pods per node” is far above typical recommended limits for many environments and would stress IPAM, kubelet, and node resources significantly.
In large clusters, several considerations matter beyond the headline limits: API server and etcd performance, watch/list traffic, controller reconciliation load, CoreDNS scaling, and metrics/observability overhead. You must also plan for IP addressing (cluster CIDR sizing), node sizes (CPU/memory), and autoscaling behavior. On each node, kubelet and the container runtime must handle churn (starts/stops), logging, and volume operations. Networking implementations (kube-proxy, eBPF dataplanes) also have scaling characteristics.
Kubernetes provides patterns to keep systems stable at scale: request/limit discipline, Pod disruption budgets, topology spread constraints, namespaces and quotas, and careful observability sampling. But the exam-style fact this question targets is the published scalability figure and per-node Pod recommendation.
Therefore, the verified true statement among the options is C.
=========