
You are using Deployment Manager to create a Google Kubernetes Engine cluster. Using the same Deployment Manager deployment, you also want to create a DaemonSet in the kube-system namespace of the cluster. You want a solution that uses the fewest possible services. What should you do?

A.

Add the cluster’s API as a new Type Provider in Deployment Manager, and use the new type to create the DaemonSet.

B.

Use the Deployment Manager Runtime Configurator to create a new Config resource that contains the DaemonSet definition.

C.

With Deployment Manager, create a Compute Engine instance with a startup script that uses kubectl to create the DaemonSet.

D.

In the cluster’s definition in Deployment Manager, add a metadata entry that has kube-system as the key and the DaemonSet manifest as the value.
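
For context, option A refers to Deployment Manager type providers, which expose an arbitrary REST API (such as a GKE cluster's Kubernetes API) as a deployable resource type. A rough sketch of registering one programmatically, assuming the google-api-python-client package; the provider name and descriptor URL are placeholders, and cluster authentication options are omitted:

```python
import google.auth
from googleapiclient import discovery

credentials, project = google.auth.default()
# Type providers are part of the Deployment Manager v2beta API surface.
dm = discovery.build("deploymentmanager", "v2beta", credentials=credentials)

type_provider = {
    "name": "gke-cluster-type",                               # hypothetical name
    "descriptorUrl": "https://CLUSTER_ENDPOINT/openapi/v2",   # placeholder endpoint
    # Credential and validation options for the cluster API omitted for brevity.
}
dm.typeProviders().insert(project=project, body=type_provider).execute()
```

Once such a type exists, the same Deployment Manager configuration can declare a resource of that type whose properties are the DaemonSet manifest targeting the kube-system namespace.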

You have an application that uses Cloud Spanner as a database backend to keep current state information about users. Cloud Bigtable logs all events triggered by users. You export Cloud Spanner data to Cloud Storage during daily backups. One of your analysts asks you to join data from Cloud Spanner and Cloud Bigtable for specific users. You want to complete this ad hoc request as efficiently as possible. What should you do?

A.

Create a Dataflow job that copies data from Cloud Bigtable and Cloud Storage for specific users.

B.

Create a Dataflow job that copies data from Cloud Bigtable and Cloud Spanner for specific users.

C.

Create a Cloud Dataproc cluster that runs a Spark job to extract data from Cloud Bigtable and Cloud Storage for specific users.

D.

Create two separate BigQuery external tables on Cloud Storage and Cloud Bigtable. Use the BigQuery console to join these tables through user fields, and apply appropriate filters.
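
For context, option D relies on BigQuery external (federated) tables plus a standard SQL join. A rough sketch using the google-cloud-bigquery client; the dataset, table, bucket path, and column names below are purely hypothetical:

```python
from google.cloud import bigquery

client = bigquery.Client()

# External table over the daily Cloud Spanner export files in Cloud Storage (CSV assumed).
spanner_export = bigquery.ExternalConfig("CSV")
spanner_export.source_uris = ["gs://example-spanner-exports/users-*.csv"]
spanner_export.autodetect = True
table = bigquery.Table("my-project.analytics.spanner_users")
table.external_data_configuration = spanner_export
client.create_table(table, exists_ok=True)

# A second external table over Cloud Bigtable (source format "BIGTABLE") would be
# defined similarly; the ad hoc join then runs as an ordinary query:
query = """
    SELECT s.user_id, s.state, e.event_type, e.event_time
    FROM `my-project.analytics.spanner_users` AS s
    JOIN `my-project.analytics.bigtable_events` AS e USING (user_id)
    WHERE s.user_id IN ('user-123', 'user-456')
"""
for row in client.query(query).result():
    print(row.user_id, row.event_type, row.event_time)
```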

A team of data scientists infrequently needs to use a Google Kubernetes Engine (GKE) cluster that you manage. They require GPUs for some long-running, non-restartable jobs. You want to minimize cost. What should you do?

A.

Enable node auto-provisioning on the GKE cluster.

B.

Create a VerticalPodAutoscaler for those workloads.

C.

Create a node pool with preemptible VMs and GPUs attached to those VMs.

D.

Create a node pool of instances with GPUs, and enable autoscaling on this node pool with a minimum size of 1.
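
For context, options C and D both revolve around adding a dedicated GPU node pool to the cluster. A rough sketch with the google-cloud-container client; project, location, cluster, pool, and accelerator values are hypothetical, and the preemptible flag marks what option C adds on top of option D's on-demand, autoscaled pool:

```python
from google.cloud import container_v1

client = container_v1.ClusterManagerClient()

node_pool = container_v1.NodePool(
    name="gpu-pool",
    initial_node_count=1,
    config=container_v1.NodeConfig(
        machine_type="n1-standard-8",
        preemptible=True,  # option C; omit for an on-demand pool as in option D
        accelerators=[
            container_v1.AcceleratorConfig(
                accelerator_count=1,
                accelerator_type="nvidia-tesla-p100",
            )
        ],
    ),
    autoscaling=container_v1.NodePoolAutoscaling(
        enabled=True, min_node_count=1, max_node_count=3
    ),
)

operation = client.create_node_pool(
    request=container_v1.CreateNodePoolRequest(
        parent="projects/my-project/locations/us-central1-a/clusters/data-science",
        node_pool=node_pool,
    )
)
print(operation.name)
```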

An application generates daily reports in a Compute Engine virtual machine (VM). The VM is in the project corp-iot-insights. Your team operates only in the project corp-aggregate-reports and needs a copy of the daily exports in the bucket corp-aggregate-reports-storage. You want to configure access so that the daily reports from the VM are available in the bucket corp-aggregate-reports-storage and use as few steps as possible while following Google-recommended practices. What should you do?

A.

Move both projects under the same folder.

B.

Grant the VM Service Account the role Storage Object Creator on corp-aggregate-reports-storage.

C.

Create a Shared VPC network between both projects. Grant the VM Service Account the role Storage Object Creator on corp-iot-insights.

D.

Make corp-aggregate-reports-storage public and create a folder with a pseudo-randomized suffix name. Share the folder with the IoT team.
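
For context, option B amounts to a single IAM binding on the destination bucket. A small sketch with the google-cloud-storage client, assuming a hypothetical service account email for the VM in corp-iot-insights:

```python
from google.cloud import storage

client = storage.Client(project="corp-aggregate-reports")
bucket = client.bucket("corp-aggregate-reports-storage")

# Hypothetical email of the service account attached to the reporting VM.
vm_sa = "serviceAccount:reports-vm@corp-iot-insights.iam.gserviceaccount.com"

policy = bucket.get_iam_policy(requested_policy_version=3)
policy.bindings.append({"role": "roles/storage.objectCreator", "members": {vm_sa}})
bucket.set_iam_policy(policy)
```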

Your company has multiple projects linked to a single billing account in Google Cloud. You need to visualize the costs with specific metrics that should be dynamically calculated based on company-specific criteria. You want to automate the process. What should you do?

A.

In the Google Cloud console, visualize the costs related to the projects in the Reports section.

B.

In the Google Cloud console, visualize the costs related to the projects in the Cost breakdown section.

C.

In the Google Cloud console, use the export functionality of the Cost table. Create a Looker Studio dashboard on top of the CSV export.

D.

Configure Cloud Billing data export to BigQuery for the billing account. Create a Looker Studio dashboard on top of the BigQuery export.
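
For context, option D hinges on the Cloud Billing BigQuery export, which Looker Studio (or any SQL client) can query to calculate company-specific metrics automatically. A sketch of such a query via the google-cloud-bigquery client; the dataset and export table name are placeholders for the standard usage cost export table:

```python
from google.cloud import bigquery

client = bigquery.Client()

# Net cost per project and service after credits (table name is a placeholder for
# the standard usage cost export, gcp_billing_export_v1_<BILLING_ACCOUNT_ID>).
query = """
    SELECT
      project.id AS project_id,
      service.description AS service,
      SUM(cost) + SUM(IFNULL((SELECT SUM(c.amount) FROM UNNEST(credits) c), 0)) AS net_cost
    FROM `my-project.billing.gcp_billing_export_v1_XXXXXX_XXXXXX_XXXXXX`
    GROUP BY project_id, service
    ORDER BY net_cost DESC
"""
for row in client.query(query).result():
    print(row.project_id, row.service, row.net_cost)
```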

You want to send and consume Cloud Pub/Sub messages from your App Engine application. The Cloud Pub/Sub API is currently disabled. You will use a service account to authenticate your application to the API. You want to make sure your application can use Cloud Pub/Sub. What should you do?

A.

Enable the Cloud Pub/Sub API in the API Library on the GCP Console.

B.

Rely on the automatic enablement of the Cloud Pub/Sub API when the Service Account accesses it.

C.

Use Deployment Manager to deploy your application. Rely on the automatic enablement of all APIs used by the application being deployed.

D.

Grant the App Engine Default service account the role of Cloud Pub/Sub Admin. Have your application enable the API on the first connection to Cloud Pub/Sub.
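
For context, enabling an API as in option A can also be done programmatically through the Service Usage API. A rough sketch with google-api-python-client, using whatever project the default credentials resolve to:

```python
import google.auth
from googleapiclient import discovery

credentials, project = google.auth.default()
serviceusage = discovery.build("serviceusage", "v1", credentials=credentials)

# Enable the Cloud Pub/Sub API on the current project.
serviceusage.services().enable(
    name=f"projects/{project}/services/pubsub.googleapis.com"
).execute()
```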

You are operating a Google Kubernetes Engine (GKE) cluster for your company where different teams can run non-production workloads. Your Machine Learning (ML) team needs access to Nvidia Tesla P100 GPUs to train their models. You want to minimize effort and cost. What should you do?

A.

Ask your ML team to add the “accelerator: gpu” annotation to their pod specification.

B.

Recreate all the nodes of the GKE cluster to enable GPUs on all of them.

C.

Create your own Kubernetes cluster on top of Compute Engine with nodes that have GPUs. Dedicate this cluster to your ML team.

D.

Add a new, GPU-enabled, node pool to the GKE cluster. Ask your ML team to add the cloud.google.com/gke-accelerator: nvidia-tesla-p100 nodeSelector to their pod specification.
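
For context, option D pairs a GPU-enabled node pool with a nodeSelector in the workload's pod specification, so only GPU-requesting pods are scheduled onto the expensive nodes. A sketch using the official Kubernetes Python client; the pod name and container image are hypothetical:

```python
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside the cluster

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="ml-training"),
    spec=client.V1PodSpec(
        node_selector={"cloud.google.com/gke-accelerator": "nvidia-tesla-p100"},
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="trainer",
                image="gcr.io/my-project/trainer:latest",  # hypothetical image
                resources=client.V1ResourceRequirements(limits={"nvidia.com/gpu": "1"}),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```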

You are planning to migrate your on-premises VMs to Google Cloud. You need to set up a landing zone in Google Cloud before migrating the VMs. You must ensure that all VMs in your production environment can communicate with each other through private IP addresses. You need to allow all VMs in your Google Cloud organization to accept connections on specific TCP ports. You want to follow Google-recommended practices, and you need to minimize your operational costs. What should you do?

A.

Create individual VPCs per Google Cloud project. Peer all the VPCs together. Apply organization policies on the organization level.

B.

Create individual VPCs for each Google Cloud project. Peer all the VPCs together. Apply hierarchical firewall policies on the organization level.

C.

Create a host VPC project with each production project as its service project. Apply organization policies on the organization level.

D.

Create a host VPC project with each production project as its service project. Apply hierarchical firewall policies on the organization level.
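
For context, options C and D both start from a Shared VPC layout: one host project whose network the production service projects consume. A rough sketch of wiring that up through the Compute Engine API with google-api-python-client; project IDs are hypothetical, and the organization-level hierarchical firewall policy for the allowed TCP ports would be configured separately:

```python
import google.auth
from googleapiclient import discovery

credentials, _ = google.auth.default()
compute = discovery.build("compute", "v1", credentials=credentials)

# Designate the host project for Shared VPC.
compute.projects().enableXpnHost(project="prod-host-project").execute()

# Attach a production project as a service project of the host.
compute.projects().enableXpnResource(
    project="prod-host-project",
    body={"xpnResource": {"id": "prod-service-project-1", "type": "PROJECT"}},
).execute()
```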

You need to configure optimal data storage for files stored in Cloud Storage for minimal cost. The files are used in a mission-critical analytics pipeline that is used continually. The users are in Boston, MA (United States). What should you do?

A.

Configure regional storage for the region closest to the users. Configure a Nearline storage class.

B.

Configure regional storage for the region closest to the users. Configure a Standard storage class.

C.

Configure dual-regional storage for the dual region closest to the users. Configure a Nearline storage class.

D.

Configure dual-regional storage for the dual region closest to the users. Configure a Standard storage class.
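
For context, the options differ only in location type and storage class. A sketch of creating a dual-region bucket with the google-cloud-storage client; the project and bucket names are hypothetical, and NAM4 (the predefined Iowa/South Carolina pair) is one plausible dual region near Boston:

```python
from google.cloud import storage

client = storage.Client(project="my-analytics-project")

bucket = storage.Bucket(client, name="analytics-pipeline-files")
bucket.storage_class = "STANDARD"  # or "NEARLINE", per the other options
client.create_bucket(bucket, location="NAM4")
```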

You have an object in a Cloud Storage bucket that you want to share with an external company. The object contains sensitive data. You want access to the content to be removed after four hours. The external company does not have a Google account to which you can grant specific user-based access privileges. You want to use the most secure method that requires the fewest steps. What should you do?

A.

Create a signed URL with a four-hour expiration and share the URL with the company.

B.

Set object access to ‘public’ and use object lifecycle management to remove the object after four hours.

C.

Configure the storage bucket as a static website and furnish the object’s URL to the company. Delete the object from the storage bucket after four hours.

D.

Create a new Cloud Storage bucket specifically for the external company to access. Copy the object to that bucket. Delete the bucket after four hours have passed.
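
For context, option A's signed URL takes only a few lines with the google-cloud-storage client. The bucket and object names below are hypothetical, and V4 signing assumes credentials that can sign, such as a service account key:

```python
import datetime
from google.cloud import storage

client = storage.Client()
blob = client.bucket("sensitive-exports").blob("daily-report.csv")  # hypothetical names

url = blob.generate_signed_url(
    version="v4",
    expiration=datetime.timedelta(hours=4),  # the URL stops working after four hours
    method="GET",
)
print(url)
```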