
Your company stores historical data in Cloud Storage. You need to ensure that all data is saved in a bucket for at least three years. What should you do?

A.

Enable Object Versioning.

B.

Change the bucket storage class to Archive.

C.

Set a bucket retention policy.

D.

Set temporary object holds.
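As a sketch of the retention-policy option (C), the google-cloud-storage client can set a bucket retention policy that prevents objects from being deleted or overwritten for three years. The bucket name is a placeholder:

```python
from google.cloud import storage

# Minimal sketch: apply a 3-year retention policy to a bucket.
# "historical-data" is a placeholder bucket name.
client = storage.Client()
bucket = client.get_bucket("historical-data")

# retention_period is expressed in seconds; objects cannot be deleted
# or overwritten until they reach this age.
bucket.retention_period = 3 * 365 * 24 * 60 * 60
bucket.patch()
```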

Your organization consists of two hundred employees on five different teams. The leadership team is concerned that any employee can move or delete all Looker dashboards saved in the Shared folder. You need to create an easy-to-manage solution that allows all five teams to view content in the Shared folder, while letting each team move or delete only its own team-specific dashboards. What should you do?

A.

1. Create Looker groups representing each of the five different teams, and add users to their corresponding group. 2. Create five subfolders inside the Shared folder. Grant each group the View access level to their corresponding subfolder.

B.

1. Move all team-specific content into the dashboard owner's personal folder. 2. Change the access level of the Shared folder to View for the All Users group. 3. Instruct each user to create content for their team in the user's personal folder.

C.

1. Change the access level of the Shared folder to View for the All Users group. 2. Create Looker groups representing each of the five different teams, and add users to their corresponding group. 3. Create five subfolders inside the Shared folder. Grant each group the Manage Access, Edit access level to their corresponding subfolder.

D.

1. Change the access level of the Shared folder to View for the All Users group. 2. Create five subfolders inside the Shared folder. Grant each team member the Manage Access, Edit access level to their corresponding subfolder.
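If the group-and-subfolder approach in option C were automated, the group and folder steps might look like the following sketch using the Looker Python SDK (looker_sdk). The group name, folder name, and Shared folder id are assumptions; granting the "Manage Access, Edit" level is then done through the subfolder's content access settings:

```python
import looker_sdk
from looker_sdk import models40 as models

# Assumes API credentials in looker.ini or environment variables.
sdk = looker_sdk.init40()

# Create a group per team ("team-alpha" is a placeholder name).
group = sdk.create_group(body=models.WriteGroup(name="team-alpha"))

# Create the team subfolder under Shared. Folder id "1" is commonly
# the Shared folder, but verify the id in your own instance.
folder = sdk.create_folder(
    body=models.CreateFolder(name="Team Alpha", parent_id="1")
)
# The group is then granted "Manage Access, Edit" on the subfolder via
# the folder's content access settings (UI or content-metadata-access
# API endpoints).
```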

You work for an ecommerce company that has a BigQuery dataset that contains customer purchase history, demographics, and website interactions. You need to build a machine learning (ML) model to predict which customers are most likely to make a purchase in the next month. You have limited engineering resources and need to minimize the ML expertise required for the solution. What should you do?

A.

Use BigQuery ML to create a logistic regression model for purchase prediction.

B.

Use Vertex AI Workbench to develop a custom model for purchase prediction.

C.

Use Colab Enterprise to develop a custom model for purchase prediction.

D.

Export the data to Cloud Storage, and use AutoML Tables to build a classification model for purchase prediction.
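As an illustration of option A, the entire training step in BigQuery ML can be a single SQL statement run through the BigQuery client. The dataset, table, and column names below are invented for the sketch:

```python
from google.cloud import bigquery

client = bigquery.Client()

# Train a logistic regression model entirely in SQL; no custom ML code.
client.query("""
    CREATE OR REPLACE MODEL `ecommerce.purchase_model`
    OPTIONS (model_type = 'logistic_reg',
             input_label_cols = ['purchased_next_month']) AS
    SELECT age, total_past_purchases, sessions_last_30d,
           purchased_next_month
    FROM `ecommerce.customer_features`
""").result()

# Batch prediction with ML.PREDICT on the same feature table.
rows = client.query("""
    SELECT customer_id, predicted_purchased_next_month_probs
    FROM ML.PREDICT(MODEL `ecommerce.purchase_model`,
                    TABLE `ecommerce.customer_features`)
""").result()
```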

Your retail organization stores sensitive application usage data in Cloud Storage. You need to encrypt the data without the operational overhead of managing encryption keys. What should you do?

A.

Use Google-managed encryption keys (GMEK).

B.

Use customer-managed encryption keys (CMEK).

C.

Use customer-supplied encryption keys (CSEK).

D.

Use customer-supplied encryption keys (CSEK) for the sensitive data and customer-managed encryption keys (CMEK) for the less sensitive data.
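Option A needs no key-management code at all: server-side encryption with Google-managed keys is the Cloud Storage default. A minimal sketch, with placeholder bucket and object names:

```python
from google.cloud import storage

# No key material or key configuration appears anywhere: objects are
# encrypted at rest with Google-managed keys by default, and Google
# handles rotation, so there is no operational overhead.
client = storage.Client()
bucket = client.bucket("retail-usage-data")
blob = bucket.blob("usage/2024-06-01.json")
blob.upload_from_filename("usage-2024-06-01.json")
```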

Your organization's website uses an on-premises MySQL as a backend database. You need to migrate the on-premises MySQL database to Google Cloud while maintaining MySQL features. You want to minimize administrative overhead and downtime. What should you do?

A.

Install MySQL on a Compute Engine virtual machine. Export the database files using the mysqldump command. Upload the files to Cloud Storage, and import them into the MySQL instance on Compute Engine.

B.

Use Database Migration Service to transfer the data to Cloud SQL for MySQL, and configure the on-premises MySQL database as the source.

C.

Use a Google-provided Dataflow template to replicate the MySQL database in BigQuery.

D.

Export the database tables to CSV files, and upload the files to Cloud Storage. Convert the MySQL schema to a Spanner schema, create a JSON manifest file, and run a Google-provided Dataflow template to load the data into Spanner.
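A heavily hedged sketch of option B's first step using the google-cloud-dms client: registering the on-premises MySQL server as a source connection profile. The host, project, and region values are placeholders, and the field names mirror the DMS REST API, so verify them against the client-library docs before use:

```python
from google.cloud import clouddms_v1

client = clouddms_v1.DataMigrationServiceClient()

# Register the on-premises MySQL server as a DMS source profile.
profile = clouddms_v1.ConnectionProfile(
    mysql=clouddms_v1.MySqlConnectionProfile(
        host="203.0.113.10",
        port=3306,
        username="migration",
        password="change-me",
    ),
)
operation = client.create_connection_profile(
    request={
        "parent": "projects/my-project/locations/us-central1",
        "connection_profile_id": "onprem-mysql",
        "connection_profile": profile,
    }
)
operation.result()  # wait for the long-running operation

# A Cloud SQL destination profile and a migration job (one-time or
# continuous) are created the same way before starting the migration.
```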

You are designing an application that will interact with several BigQuery datasets. You need to grant the application’s service account permissions that allow it to query and update tables within the datasets, and list all datasets in a project within your application. You want to follow the principle of least privilege. Which pre-defined IAM role(s) should you apply to the service account?

A.

roles/bigquery.jobUser and roles/bigquery.dataOwner

B.

roles/bigquery.connectionUser and roles/bigquery.dataViewer

C.

roles/bigquery.admin

D.

roles/bigquery.user and roles/bigquery.filteredDataViewer
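Illustrating option A: roles/bigquery.jobUser is granted at the project level through IAM, while ownership of the data can be scoped per dataset instead of project-wide. A sketch using the BigQuery client's dataset access entries; the project, dataset, and service-account names are placeholders:

```python
from google.cloud import bigquery

client = bigquery.Client()
dataset = client.get_dataset("my-project.sales_data")

# Grant the service account OWNER on just this dataset (the
# dataset-scoped equivalent of roles/bigquery.dataOwner).
# roles/bigquery.jobUser is granted separately at the project level.
entries = list(dataset.access_entries)
entries.append(
    bigquery.AccessEntry(
        role="OWNER",
        entity_type="userByEmail",
        entity_id="app-sa@my-project.iam.gserviceaccount.com",
    )
)
dataset.access_entries = entries
client.update_dataset(dataset, ["access_entries"])
```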

You are working with a small dataset in Cloud Storage that needs to be transformed and loaded into BigQuery for analysis. The transformation involves simple filtering and aggregation operations. You want to use the most efficient and cost-effective data manipulation approach. What should you do?

A.

Use Dataproc to create an Apache Hadoop cluster, perform the ETL process using Apache Spark, and load the results into BigQuery.

B.

Use BigQuery's SQL capabilities to load the data from Cloud Storage, transform it, and store the results in a new BigQuery table.

C.

Create a Cloud Data Fusion instance and visually design an ETL pipeline that reads data from Cloud Storage, transforms it using built-in transformations, and loads the results into BigQuery.

D.

Use Dataflow to perform the ETL process that reads the data from Cloud Storage, transforms it using Apache Beam, and writes the results to BigQuery.
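A sketch of option B: one load job plus one SQL statement covers the whole pipeline, with no cluster to provision. The URIs, table names, and columns are placeholders:

```python
from google.cloud import bigquery

client = bigquery.Client()

# Load the raw CSV files from Cloud Storage into a staging table.
load_job = client.load_table_from_uri(
    "gs://my-bucket/raw/orders-*.csv",
    "my-project.analytics.orders_raw",
    job_config=bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        skip_leading_rows=1,
        autodetect=True,
    ),
)
load_job.result()

# Filter and aggregate with plain SQL into the results table.
client.query("""
    CREATE OR REPLACE TABLE analytics.orders_by_region AS
    SELECT region, COUNT(*) AS orders, SUM(amount) AS revenue
    FROM analytics.orders_raw
    WHERE status = 'COMPLETE'
    GROUP BY region
""").result()
```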

You created a curated dataset of market trends in BigQuery that you want to share with multiple external partners. You want to control the rows and columns that each partner has access to. You want to follow Google-recommended practices. What should you do?

A.

Publish the dataset in Analytics Hub. Grant dataset-level access to each partner by using subscriptions.

B.

Create a separate Cloud Storage bucket for each partner. Export the dataset to each bucket and assign each partner to their respective bucket. Grant bucket-level access by using IAM roles.

C.

Grant each partner read access to the BigQuery dataset by using IAM roles.

D.

Create a separate project for each partner and copy the dataset into each project. Publish each dataset in Analytics Hub. Grant dataset-level access to each partner by using subscriptions.
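Analytics Hub handles the publishing and subscriptions; the per-partner row restriction itself can be expressed on the underlying table with a BigQuery row-access policy (column-level control would use policy tags instead). A sketch, with placeholder table, group, and column names:

```python
from google.cloud import bigquery

# Restrict a partner group to its own rows on the shared table before
# publishing it through Analytics Hub.
client = bigquery.Client()
client.query("""
    CREATE ROW ACCESS POLICY partner_a_rows
    ON `market.trends`
    GRANT TO ('group:partner-a@example.com')
    FILTER USING (partner_id = 'partner_a')
""").result()
```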

Your organization has a petabyte of application logs stored as Parquet files in Cloud Storage. You need to quickly perform a one-time SQL-based analysis of the files and join them to data that already resides in BigQuery. What should you do?

A.

Create a Dataproc cluster, and write a PySpark job to join the data from BigQuery to the files in Cloud Storage.

B.

Launch a Cloud Data Fusion environment, use plugins to connect to BigQuery and Cloud Storage, and use the SQL join operation to analyze the data.

C.

Create external tables over the files in Cloud Storage, and perform SQL joins to tables in BigQuery to analyze the data.

D.

Use the bq load command to load the Parquet files into BigQuery, and perform SQL joins to analyze the data.
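Option C can be expressed as a temporary external table definition attached to a single query, so nothing is loaded or copied. The bucket URI, table names, and join keys below are placeholders:

```python
from google.cloud import bigquery

client = bigquery.Client()

# Define a temporary external table over the Parquet files.
external_config = bigquery.ExternalConfig("PARQUET")
external_config.source_uris = ["gs://my-logs-bucket/parquet/*.parquet"]

# Attach the definition to one query and join to a native table.
job_config = bigquery.QueryJobConfig(
    table_definitions={"app_logs": external_config}
)
rows = client.query(
    """
    SELECT u.customer_id, COUNT(*) AS events
    FROM app_logs AS l
    JOIN analytics.users AS u ON u.user_id = l.user_id
    GROUP BY u.customer_id
    """,
    job_config=job_config,
).result()
```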

Your organization needs to store historical customer order data. The data will only be accessed once a month for analysis and must be readily available within a few seconds when it is accessed. You need to choose a storage class that minimizes storage costs while ensuring that the data can be retrieved quickly. What should you do?

A.

Store the data in Cloud Storage using Nearline storage.

B.

Store the data in Cloud Storage using Coldline storage.

C.

Store the data in Cloud Storage using Standard storage.

D.

Store the data in Cloud Storage using Archive storage.
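For reference, creating a bucket with the Nearline class (option A) is a short call with the Cloud Storage client. Nearline is priced for roughly once-a-month access but still serves reads with the same low latency as Standard; the bucket name and location are placeholders:

```python
from google.cloud import storage

# Create a bucket whose default storage class is Nearline.
client = storage.Client()
bucket = client.bucket("historical-orders")
bucket.storage_class = "NEARLINE"
client.create_bucket(bucket, location="us-central1")
```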