What best describes cloud native service discovery?
It's a mechanism for applications and microservices to locate each other on a network.
It's a procedure for discovering the MAC address associated with a given IP address.
It's used for automatically assigning IP addresses to devices connected to the network.
It's a protocol that turns human-readable domain names into IP addresses on the Internet.
The Answer Is: A
Explanation:
Cloud native service discovery is fundamentally about how services and microservices find and connect to each other reliably in a dynamic environment, so A is correct. In cloud native systems (especially Kubernetes), instances are ephemeral: Pods can be created, destroyed, rescheduled, and scaled at any time. Hardcoding IPs breaks quickly. Service discovery provides stable names and lookup mechanisms so that one component can locate another even as underlying endpoints change.
In Kubernetes, service discovery is commonly achieved through Services (stable virtual IP + DNS name) and cluster DNS (CoreDNS). A Service selects a group of Pods via labels, and Kubernetes maintains the set of endpoints behind that Service. Clients connect to the Service name (DNS) and Kubernetes routes traffic to the current healthy Pods. For some workloads, headless Services provide DNS records that map directly to Pod IPs for per-instance discovery.
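As a sketch of the pattern described above, a minimal Service manifest might look like the following (names such as `my-app` are illustrative, not from any specific deployment):

```yaml
# A Service giving Pods labeled app: my-app a stable DNS name and virtual IP.
apiVersion: v1
kind: Service
metadata:
  name: my-app          # resolvable in-cluster as my-app.<namespace>.svc.cluster.local
spec:
  selector:
    app: my-app         # endpoints are the healthy Pods carrying this label
  ports:
    - port: 80          # stable port on the Service's virtual IP
      targetPort: 8080  # port the Pod containers actually listen on
---
# Headless variant: clusterIP: None makes DNS return the Pod IPs directly,
# enabling per-instance discovery.
apiVersion: v1
kind: Service
metadata:
  name: my-app-headless
spec:
  clusterIP: None
  selector:
    app: my-app
  ports:
    - port: 8080
```

Clients simply resolve the Service name through cluster DNS; as Pods come and go, the endpoint set behind the name is updated automatically.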
The other options describe different networking concepts: B is ARP (MAC discovery), C is DHCP (IP assignment), and D is DNS in a general internet sense. DNS is often used as a mechanism for service discovery, but cloud native service discovery is broader: it’s the overall mechanism enabling dynamic location of services, often implemented via DNS and/or environment variables and sometimes enhanced by service meshes.
So the best description remains A: a mechanism that allows applications and microservices to locate each other on a network in a dynamic environment.
What are the two essential operations that the kube-scheduler normally performs?
Pod eviction or starting
Resource monitoring and reporting
Filtering and scoring nodes
Starting and terminating containers
The Answer Is: C
Explanation:
The kube-scheduler is a core control plane component in Kubernetes responsible for assigning newly created Pods to appropriate nodes. Its primary responsibility is decision-making, not execution. To make an informed scheduling decision, the kube-scheduler performs two essential operations: filtering and scoring nodes.
The scheduling process begins when a Pod is created without a node assignment. The scheduler first evaluates all available nodes and applies a set of filtering rules. During this phase, nodes that do not meet the Pod’s requirements are eliminated. Filtering criteria include resource availability (CPU and memory requests), node selectors, node affinity rules, taints and tolerations, volume constraints, and other policy-based conditions. Any node that fails one or more of these checks is excluded from consideration.
Once filtering is complete, the scheduler moves on to the scoring phase. In this step, each remaining eligible node is assigned a score based on a collection of scoring plugins. These plugins evaluate factors such as resource utilization balance, affinity preferences, topology spread constraints, and custom scheduling policies. The purpose of scoring is to rank nodes according to how well they satisfy the Pod’s placement preferences. The node with the highest total score is selected as the best candidate.
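To make the two phases concrete, here is an illustrative Pod spec showing the kinds of constraints the scheduler evaluates (all names and values are hypothetical): resource requests and the required affinity rule feed filtering, while the preferred affinity rule feeds scoring.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
    - name: app
      image: nginx:1.25
      resources:
        requests:
          cpu: "500m"      # filtering: nodes without 500m allocatable CPU are excluded
          memory: "256Mi"  # filtering: likewise for memory
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:   # hard rule -> filtering phase
        nodeSelectorTerms:
          - matchExpressions:
              - key: disktype
                operator: In
                values: ["ssd"]
      preferredDuringSchedulingIgnoredDuringExecution:  # soft rule -> scoring phase
        - weight: 80
          preference:
            matchExpressions:
              - key: topology.kubernetes.io/zone
                operator: In
                values: ["zone-a"]
```

Nodes failing the `required` rule or lacking the requested resources never reach scoring; among the survivors, nodes in `zone-a` receive a higher score and are preferred.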
Option A is incorrect because Pod eviction is handled by other components such as the kubelet and controllers, and starting Pods is the responsibility of the kubelet. Option B is incorrect because resource monitoring and reporting are performed by components like metrics-server, not the scheduler. Option D is also incorrect because starting and terminating containers is entirely handled by the kubelet and the container runtime.
By separating filtering (eligibility) from scoring (preference), the kube-scheduler provides a flexible, extensible, and policy-driven scheduling mechanism. This design allows Kubernetes to support diverse workloads and advanced placement strategies while maintaining predictable scheduling behavior.
Therefore, the correct and verified answer is Option C: Filtering and scoring nodes, as documented in Kubernetes scheduling architecture.
Which mechanism allows extending the Kubernetes API?
ConfigMap
CustomResourceDefinition
MutatingAdmissionWebhook mechanism
Kustomize
The Answer Is: B
Explanation:
The correct answer is B: CustomResourceDefinition (CRD). Kubernetes is designed to be extensible. A CRD lets you define your own resource types (custom API objects) that behave like native Kubernetes resources: they can be created with YAML, stored in etcd, retrieved via the API server, and managed using kubectl. For example, operators commonly define CRDs such as Databases, RedisClusters, or Certificates to model higher-level application concepts.
A CRD extends the API by adding a new kind under a group/version (e.g., example.com/v1). You typically pair CRDs with a controller (often called an operator) that watches these custom objects and reconciles real-world resources (Deployments, StatefulSets, cloud resources) to match the desired state specified in the CRD instances. This is the same control-loop pattern used for built-in controllers—just applied to your custom domain.
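A minimal CRD sketch, using the hypothetical `example.com` group and `Database` kind mentioned above, might look like this:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: databases.example.com   # must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    kind: Database
    plural: databases
    singular: database
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:        # OpenAPI v3 validation for the custom objects
          type: object
          properties:
            spec:
              type: object
              properties:
                engine:
                  type: string
                replicas:
                  type: integer
```

Once applied, `kubectl get databases` works like any built-in resource; a companion controller would watch Database objects and reconcile real infrastructure toward the declared spec.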
Why the other options aren’t correct: ConfigMaps store configuration data but do not add new API types. A MutatingAdmissionWebhook can modify or validate requests for existing resources, but it doesn’t define new API kinds; it enforces policy or injects defaults. Kustomize is a manifest customization tool (patch/overlay) and doesn’t extend the Kubernetes API surface.
CRDs are foundational to much of the Kubernetes ecosystem: cert-manager, Argo, Istio, and many operators rely heavily on CRDs. They also support schema validation via OpenAPI v3 schemas, which improves safety and tooling (better error messages, IDE hints). Therefore, the mechanism for extending the Kubernetes API is CustomResourceDefinition, option B.
=========
During a team meeting, a developer mentions the significance of open collaboration in the cloud native ecosystem. Which statement accurately reflects principles of collaborative development and community stewardship?
Open source projects succeed when contributors focus on code quality without the overhead of community engagement.
Maintainers of open source projects act independently to make technical decisions without requiring input from contributors.
Community stewardship emphasizes guiding project growth but does not necessarily include sustainability considerations.
Community events and working groups foster collaboration by bringing people together to share knowledge and build connections.
The Answer Is: D
Explanation:
Open collaboration and community stewardship are foundational principles of the cloud native ecosystem, particularly within projects governed by organizations such as the Cloud Native Computing Foundation (CNCF). These principles emphasize that successful open source projects are not driven solely by code quality, but by healthy, inclusive, and sustainable communities.
Option D accurately reflects these principles. Community events, special interest groups, and working groups play a vital role in fostering collaboration. They provide structured and informal spaces where contributors, maintainers, and users can exchange ideas, share operational experiences, mentor new participants, and collectively guide the direction of projects. This collaborative approach helps ensure that projects evolve in ways that meet real-world needs and benefit from diverse perspectives.
Option A is incorrect because community engagement is not an “overhead” but a critical success factor. Kubernetes and other cloud native projects explicitly recognize that documentation, communication, governance, and contributor onboarding are just as important as writing high-quality code. Without active community participation, projects often struggle with adoption, contributor burnout, and long-term viability.
Option B is incorrect because modern open source governance values transparency and shared decision-making. While maintainers have responsibilities such as reviewing changes and ensuring project stability, they are expected to solicit feedback, encourage discussion, and incorporate contributor input through open processes. This approach builds trust and accountability within the community.
Option C is also incorrect because sustainability is a core aspect of community stewardship. Stewardship includes ensuring that projects can be maintained over time, preventing maintainer burnout, encouraging new contributors, and establishing governance models that support long-term health.
According to cloud native and Kubernetes documentation, strong communities enable innovation, resilience, and scalability—both technically and socially. By bringing people together through events and working groups, community stewardship reinforces collaboration and shared ownership, making option D the correct and fully verified answer.
QUESTION NO: 5 [Cloud Native Application Delivery]
What does SBOM stand for?
A. System Bill of Materials
B. Software Bill Operations Management
C. Security Baseline for Open Source Management
D. Software Bill of Materials
Answer: D
SBOM stands for Software Bill of Materials, a critical concept in modern cloud native application delivery and software supply chain security. An SBOM is a formal, structured inventory that lists all components included in a software artifact, such as libraries, frameworks, dependencies, and their versions. This includes both direct and transitive dependencies that are bundled into applications, containers, or container images.
In cloud native environments, applications are often built using numerous open source components and third-party libraries. While this accelerates development, it also increases the risk of hidden vulnerabilities. An SBOM provides transparency into what software is actually running in production, enabling organizations to quickly identify whether they are affected by newly disclosed vulnerabilities or license compliance issues.
Option A is incorrect because SBOM is specific to software, not systems or hardware materials. Option B is incorrect because it describes a management process rather than a standardized inventory of software components. Option C is incorrect because SBOM is not a security baseline or policy framework; instead, it is a factual record of software contents that supports security and compliance efforts.
SBOMs are especially important in containerized and Kubernetes-based workflows. Container images often bundle many dependencies into a single artifact, making it difficult to assess risk without a detailed inventory. By generating and distributing SBOMs alongside container images, teams can integrate vulnerability scanning, compliance checks, and risk assessment earlier in the delivery pipeline. This practice aligns with the principles of DevSecOps and shift-left security.
Kubernetes and cloud native security guidance emphasize SBOMs as a foundational element of software supply chain security. They support faster incident response, improved trust between software producers and consumers, and stronger governance across the lifecycle of applications. As a result, Software Bill of Materials is the correct and fully verified expansion of SBOM, making option D the accurate answer.
Which of the following scenarios would benefit the most from a service mesh architecture?
A few applications with hundreds of Pod replicas running in multiple clusters, each one providing multiple services.
Thousands of distributed applications running in a single cluster, each one providing multiple services.
Tens of distributed applications running in multiple clusters, each one providing multiple services.
Thousands of distributed applications running in multiple clusters, each one providing multiple services.
The Answer Is: D
Explanation:
A service mesh is most valuable when service-to-service communication becomes complex at large scale—many services, many teams, and often multiple clusters. That’s why D is the best fit: thousands of distributed applications across multiple clusters. In that scenario, the operational burden of securing, observing, and controlling east-west traffic grows dramatically. A service mesh (e.g., Istio, Linkerd) addresses this by introducing a dedicated networking layer (usually sidecar proxies such as Envoy) that standardizes capabilities across services without requiring each application to implement them consistently.
The common “mesh” value-adds are: mTLS for service identity and encryption, fine-grained traffic policy (retries, timeouts, circuit breaking), traffic shifting (canary, mirroring), and consistent telemetry (metrics, traces, access logs). Those features become increasingly beneficial as the number of services and cross-service calls rises, and as you add multi-cluster routing, failover, and policy management across environments. With thousands of applications, inconsistent libraries and configurations become a reliability and security risk; the mesh centralizes and standardizes these behaviors.
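As one example of mesh-managed traffic shifting, a canary rollout in Istio can be declared like this (service and subset names are illustrative; the `v1`/`v2` subsets would be defined in a companion DestinationRule):

```yaml
# Illustrative Istio VirtualService shifting 10% of traffic to a canary version.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews          # in-mesh service name
  http:
    - route:
        - destination:
            host: reviews
            subset: v1
          weight: 90   # 90% of requests stay on the stable version
        - destination:
            host: reviews
            subset: v2
          weight: 10   # 10% canary traffic
```

The sidecar proxies enforce this split for every caller without any application code changes, which is exactly the kind of cross-cutting consistency that becomes valuable at large scale.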
In smaller environments (A or C), you can often meet requirements with simpler approaches: Kubernetes Services, Ingress/Gateway, basic mTLS at the edge, and application-level libraries. A single large cluster (B) can still benefit from a mesh, but adding multiple clusters increases complexity: traffic management across clusters, identity trust domains, global observability correlation, and consistent policy enforcement. That’s where mesh architectures typically justify their additional overhead (extra proxies, control plane components, operational complexity).
So, the “most benefit” scenario is the largest, most distributed footprint—D.
=========
What is the Kubernetes object used for running a recurring workload?
Job
Batch
DaemonSet
CronJob
The Answer Is: D
Explanation:
A recurring workload in Kubernetes is implemented with a CronJob, so the correct choice is D. A CronJob is a controller that creates Jobs on a schedule defined in standard cron format (minute, hour, day of month, month, day of week). This makes CronJobs ideal for periodic tasks like backups, report generation, log rotation, and cleanup tasks.
A Job (option A) is run-to-completion but is typically a one-time execution; it ensures that a specified number of Pods successfully terminate. You can use a Job repeatedly, but something else must create it each time—CronJob is that built-in scheduler. Option B (“Batch”) is not a standard workload resource type (batch is an API group, not the object name used here). Option C (DaemonSet) ensures one Pod runs on every node (or selected nodes), which is not “recurring,” it’s “always present per node.”
CronJobs include operational controls that matter in real clusters. For example, concurrencyPolicy controls what happens if a scheduled run overlaps with a previous run (Allow, Forbid, Replace). startingDeadlineSeconds can handle missed schedules (e.g., if the controller was down). History limits (successfulJobsHistoryLimit, failedJobsHistoryLimit) help manage cleanup and troubleshooting. Each scheduled execution results in a Job with its own Pods, which can be inspected with kubectl get jobs and kubectl logs.
So the correct Kubernetes object for a recurring workload is CronJob (D): it provides native scheduling and creates Jobs automatically according to the defined cadence.
=========
Which cloud native tool keeps Kubernetes clusters in sync with sources of configuration (like Git repositories), and automates updates to configuration when there is new code to deploy?
Flux and ArgoCD
GitOps Toolkit
Linkerd and Istio
Helm and Kustomize
The Answer Is: A
Explanation:
Tools that continuously reconcile cluster state to match a Git repository’s desired configuration are GitOps controllers, and the best match here is Flux and ArgoCD, so A is correct. GitOps is the practice where Git is the source of truth for declarative system configuration. A GitOps tool continuously compares the desired state (manifests/Helm/Kustomize outputs stored in Git) with the actual state in the cluster and then applies changes to eliminate drift.
Flux and Argo CD both implement this reconciliation loop. They watch Git repositories, detect updates (new commits/tags), and apply the updated Kubernetes resources. They also surface drift and sync status, enabling auditable, repeatable deployments and easy rollbacks (revert Git). This model improves delivery velocity and security because changes flow through code review, and cluster changes can be restricted to the GitOps controller identity rather than ad-hoc human kubectl access.
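For illustration, an Argo CD Application expressing this reconciliation loop might look like the following (repository URL, paths, and names are hypothetical):

```yaml
# Argo CD Application keeping a namespace in sync with a Git repository.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/my-app-config   # Git = source of truth
    targetRevision: main
    path: deploy/overlays/prod
  destination:
    server: https://kubernetes.default.svc   # the local cluster's API server
    namespace: my-app
  syncPolicy:
    automated:
      prune: true       # delete cluster resources removed from Git
      selfHeal: true    # revert manual drift back to the Git-declared state
```

With `selfHeal` enabled, even an ad-hoc `kubectl edit` is reverted on the next reconciliation, keeping Git authoritative.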
Option B ("GitOps Toolkit") is related—Flux v2 is in fact built from the GitOps Toolkit, a set of composable controllers—but the question asks for a tool that keeps clusters in sync, and the recognized tools in this list are Flux and Argo CD. Option C lists service meshes (traffic/security/telemetry), not deployment synchronization tools. Option D lists packaging/templating tools; Helm and Kustomize help build manifests, but they do not, by themselves, continuously reconcile cluster state to a Git source.
In Kubernetes application delivery, GitOps tools become the deployment engine: CI builds artifacts, updates references in Git (image tags/digests), and the GitOps controller deploys those changes. This separation strengthens traceability and reduces configuration drift. Therefore, A is the verified correct answer.
=========
What Linux namespace is shared by default by containers running within a Kubernetes Pod?
Host Network
Network
Process ID
Process Name
The Answer Is: B
Explanation:
By default, containers in the same Kubernetes Pod share the network namespace, which means they share the same IP address and port space. Therefore, the correct answer is B (Network).
This shared network namespace is a key part of the Pod abstraction. Because all containers in a Pod share networking, they can communicate with each other over localhost and coordinate tightly, which is the basis for patterns like sidecars (service mesh proxies, log shippers, config reloaders). It also means containers must coordinate port usage: if two containers try to bind the same port on 0.0.0.0, they’ll conflict because they share the same port namespace.
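A minimal two-container Pod illustrating localhost communication between containers (names and the polling command are purely illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  containers:
    - name: web
      image: nginx:1.25
      ports:
        - containerPort: 80
    - name: log-shipper          # sidecar: same network namespace as "web"
      image: busybox:1.36
      # Reaches the nginx container over localhost, no Service required.
      command: ["sh", "-c",
        "while true; do wget -qO- http://localhost:80 > /dev/null; sleep 30; done"]
```

Because both containers share one IP and port space, only one of them may bind port 80; the sidecar reaches it simply as `localhost:80`.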
Option A (“Host Network”) is different: hostNetwork: true is an optional Pod setting that puts the Pod into the node’s network namespace, not the Pod’s shared namespace. It is not the default and is generally used sparingly due to security and port-collision risks. Option C (“Process ID”) is not shared by default in Kubernetes; PID namespace sharing requires explicitly enabling process namespace sharing (e.g., shareProcessNamespace: true). Option D (“Process Name”) is not a Linux namespace concept.
The Pod model also commonly implies shared storage volumes (if defined) and shared IPC namespace in some configurations, but the universally shared-by-default namespace across containers in the same Pod is the network namespace. This default behavior is why Kubernetes documentation explains a Pod as a “logical host” for one or more containers: the containers are co-located and share certain namespaces as if they ran on the same host.
So, the correct, verified answer is B: containers in the same Pod share the Network namespace by default.
=========
What happens with a regular Pod running in Kubernetes when a node fails?
A new Pod with the same UID is scheduled to another node after a while.
A new, near-identical Pod but with different UID is scheduled to another node.
By default, a Pod can only be scheduled to the same node when the node fails.
A new Pod is scheduled on a different node only if it is configured explicitly.
The Answer Is: B
Explanation:
B is correct: when a node fails, Kubernetes does not “move” the same Pod instance; instead, a new Pod object (new UID) is created to replace it—assuming the Pod is managed by a controller (Deployment/ReplicaSet, StatefulSet, etc.). A Pod is an API object with a unique identifier (UID) and is tightly associated with the node it’s scheduled to via spec.nodeName. If the node becomes unreachable, that original Pod cannot be restarted elsewhere because it was bound to that node.
Kubernetes’ high availability comes from controllers maintaining desired state. For example, a Deployment desires N replicas. If a node fails and the replicas on that node are lost, the controller will create replacement Pods, and the scheduler will place them onto healthy nodes. These replacement Pods will be “near-identical” in spec (same template), but they are still new instances with new UIDs and typically new IPs.
Why the other options are wrong:
A is incorrect because the UID does not remain the same—Kubernetes creates a new Pod object rather than reusing the old identity.
C is incorrect; pods are not restricted to the same node after failure. The whole point of orchestration is to reschedule elsewhere.
D is incorrect; rescheduling does not require special explicit configuration for typical controller-managed workloads. The controller behavior is standard. (If it’s a bare Pod without a controller, it will not be recreated automatically.)
This also ties to the difference between “regular Pod” vs controller-managed workloads: a standalone Pod is not self-healing by itself, while a Deployment/ReplicaSet provides that resilience. In typical production design, you run workloads under controllers specifically so node failure triggers replacement and restores replica count.
Therefore, the correct outcome is B.
=========
Which tool is used to streamline installing and managing Kubernetes applications?
apt
helm
service
brew
The Answer Is: B
Explanation:
Helm is the Kubernetes package manager used to streamline installing and managing applications, so B is correct. Helm packages Kubernetes resources into charts, which contain templates, default values, and metadata. When you install a chart, Helm renders templates into concrete manifests and applies them to the cluster. Helm also tracks a “release,” enabling upgrades, rollbacks, and consistent lifecycle operations across environments.
This is why Helm is widely used for complex applications that require multiple Kubernetes objects (Deployments/StatefulSets, Services, Ingresses, ConfigMaps, RBAC, CRDs). Rather than manually maintaining many YAML files per environment, teams can parameterize configuration with values and reuse the same chart across dev/stage/prod with different overrides.
Option A (apt) and option D (brew) are OS package managers (Debian/Ubuntu and macOS/Linuxbrew respectively), not Kubernetes application managers. Option C (service) is a Linux service manager command pattern and not relevant here.
In cloud-native delivery pipelines, Helm often integrates with GitOps and CI/CD: the pipeline builds an image, updates chart values (image tag/digest), and deploys via Helm or via GitOps controllers that render/apply Helm charts. Helm also supports chart repositories and versioning, making it easier to standardize deployments and manage dependencies.
So, the verified tool for streamlined Kubernetes app install/management is Helm (B).
=========