Company A would like to share data in Snowflake with Company B. Company B is not on the same cloud platform as Company A.

What is required to allow data sharing between these two companies?

A.

Create a pipeline to write shared data to a cloud storage location in the target cloud provider.

B.

Ensure that all views are persisted, as views cannot be shared across cloud platforms.

C.

Set up data replication to the region and cloud platform where the consumer resides.

D.

Company A and Company B must agree to use a single cloud platform; data sharing is only possible if both companies use the same cloud provider.
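
For context, the cross-cloud pattern behind these options relies on database replication: the provider replicates the shared database to its own account in the consumer's region and cloud platform, then shares from there. A minimal sketch, assuming hypothetical organization, account, and database names (myorg, provider_acct, consumer_acct, sales_db):

-- In the provider's primary account: enable replication to the provider's
-- account in the consumer's cloud platform and region
ALTER DATABASE sales_db ENABLE REPLICATION TO ACCOUNTS myorg.consumer_acct;

-- In the provider's account in the consumer's region: create and refresh the replica
CREATE DATABASE sales_db AS REPLICA OF myorg.provider_acct.sales_db;
ALTER DATABASE sales_db REFRESH;

-- A share is then created in this account so consumers in that region and cloud can access the data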

A media company needs a data pipeline that will ingest customer review data into a Snowflake table and apply some transformations. The company also needs to use Amazon Comprehend to do sentiment analysis and make the de-identified final data set available publicly to advertising companies that use different cloud providers in different regions.

The data pipeline needs to run continuously and efficiently as new records arrive in object storage, leveraging event notifications. The operational complexity, maintenance of the infrastructure (including platform upgrades and security), and the development effort should also be minimal.

Which design will meet these requirements?

A.

Ingest the data using COPY INTO and use streams and tasks to orchestrate transformations. Export the data into Amazon S3 to do model inference with Amazon Comprehend and ingest the data back into a Snowflake table. Then create a listing in the Snowflake Marketplace to make the data available to other companies.

B.

Ingest the data using Snowpipe and use streams and tasks to orchestrate transformations. Create an external function to do model inference with Amazon Comprehend and write the final records to a Snowflake table. Then create a listing in the Snowflake Marketplace to make the data available to other companies.

C.

Ingest the data into Snowflake using Amazon EMR and PySpark using the Snowflake Spark connector. Apply transformations using another Spark job. Develop a python program to do model inference by leveraging the Amazon Comprehend text analysis API. Then write the results to a Snowflake table and create a listing in the Snowflake Marketplace to make the data available to other companies.

D.

Ingest the data using Snowpipe and use streams and tasks to orchestrate transformations. Export the data into Amazon S3 to do model inference with Amazon Comprehend and ingest the data back into a Snowflake table. Then create a listing in the Snowflake Marketplace to make the data available to other companies.
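
As a point of comparison for options B and D, the Snowpipe-plus-streams-and-tasks pattern with an external function can be sketched roughly as follows. All object names, the API integration, and the endpoint URL here are hypothetical:

-- Raw landing table and continuous ingestion via event notifications
CREATE TABLE raw_reviews (v VARIANT);
CREATE PIPE reviews_pipe AUTO_INGEST = TRUE AS
  COPY INTO raw_reviews FROM @reviews_stage FILE_FORMAT = (TYPE = 'JSON');

-- Stream to track newly ingested rows
CREATE STREAM raw_reviews_stream ON TABLE raw_reviews;

-- External function that fronts Amazon Comprehend through an API integration
CREATE EXTERNAL FUNCTION detect_sentiment(review_text VARCHAR)
  RETURNS VARIANT
  API_INTEGRATION = comprehend_api_int
  AS 'https://abc123.execute-api.us-east-1.amazonaws.com/prod/detect-sentiment';

-- Task that transforms and scores new rows whenever the stream has data
CREATE TABLE curated_reviews (review_id VARCHAR, sentiment VARIANT);
CREATE TASK transform_reviews
  WAREHOUSE = transform_wh
  SCHEDULE = '5 MINUTE'
  WHEN SYSTEM$STREAM_HAS_DATA('RAW_REVIEWS_STREAM')
AS
  INSERT INTO curated_reviews
  SELECT v:review_id::VARCHAR, detect_sentiment(v:review_text::VARCHAR)
  FROM raw_reviews_stream;

ALTER TASK transform_reviews RESUME;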

What is a characteristic of event notifications in Snowpipe?

A.

The load history is stored in the metadata of the target table.

B.

Notifications identify the cloud storage event and the actual data in the files.

C.

Snowflake can process all older notifications when a paused pipe is resumed.

D.

When a pipe is paused, event messages received for the pipe enter a limited retention period.
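
For context, pausing and resuming a pipe, and inspecting its state, look like this (pipe name hypothetical). Snowflake documents that event messages received for a paused pipe are retained for a limited period, 14 days by default:

-- Pause the pipe; event messages received while paused enter a limited retention period
ALTER PIPE reviews_pipe SET PIPE_EXECUTION_PAUSED = TRUE;

-- Inspect execution state and the count of pending files
SELECT SYSTEM$PIPE_STATUS('reviews_pipe');

-- Resume; notifications still within the retention window are processed
ALTER PIPE reviews_pipe SET PIPE_EXECUTION_PAUSED = FALSE;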

When using the COPY INTO command with the CSV file format, how does the MATCH_BY_COLUMN_NAME parameter behave?

A.

It expects a header to be present in the CSV file, which is matched to a case-sensitive table column name.

B.

The parameter will be ignored.

C.

The command will return an error.

D.

The command will return a warning stating that the file has unmatched columns.
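
For illustration, the copy option in question appears in a statement like the following (all names hypothetical); how it behaves with TYPE = 'CSV' is exactly what this question tests, and it depends on the file format configuration:

COPY INTO target_table
  FROM @my_stage/data/
  FILE_FORMAT = (TYPE = 'CSV')
  MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE;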

A company is designing its serving layer for data that is in cloud storage. Multiple terabytes of the data will be used for reporting. Some data does not have a clear use case but could be useful for experimental analysis. This experimentation data changes frequently and is sometimes wiped out and replaced completely in a few days.

The company wants to centralize access control, provide a single point of connection for the end-users, and maintain data governance.

What solution meets these requirements while MINIMIZING costs, administrative effort, and development overhead?

A.

Import the data used for reporting into a Snowflake schema with native tables. Then create external tables pointing to the cloud storage folders used for the experimentation data. Then create two different roles with grants to the different datasets to match the different user personas, and grant these roles to the corresponding users.

B.

Import all the data in cloud storage to be used for reporting into a Snowflake schema with native tables. Then create a role that has access to this schema and manage access to the data through that role.

C.

Import all the data in cloud storage to be used for reporting into a Snowflake schema with native tables. Then create two different roles with grants to the different datasets to match the different user personas, and grant these roles to the corresponding users.

D.

Import the data used for reporting into a Snowflake schema with native tables. Then create views that have SELECT commands pointing to the cloud storage files for the experimentation data. Then create two different roles to match the different user personas, and grant these roles to the corresponding users.
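
As a sketch of the external-table approach referenced in options A and D (stage, file format, and object names are hypothetical), the experimentation data stays in cloud storage while access is still governed through Snowflake roles:

-- External table over the frequently replaced experimentation files
CREATE EXTERNAL TABLE experiments_ext
  LOCATION = @lake_stage/experiments/
  FILE_FORMAT = (TYPE = 'PARQUET')
  AUTO_REFRESH = TRUE;

-- Separate roles for the two user personas
CREATE ROLE reporting_role;
CREATE ROLE experiments_role;
GRANT SELECT ON TABLE reporting.sales TO ROLE reporting_role;
GRANT SELECT ON EXTERNAL TABLE experiments_ext TO ROLE experiments_role;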

A company has a source system that provides JSON records for various IoT operations. The JSON is loaded directly into a persistent table with a VARIANT field. The data is quickly growing to hundreds of millions of records, and performance is becoming an issue. There is a generic access pattern that filters on the create_date key within the VARIANT field.

What can be done to improve performance?

A.

Alter the target table to include additional fields pulled from the JSON records. This would include a create_date field with a datatype of TIMESTAMP. When this field is used in the filter, partition pruning will occur.

B.

Alter the target table to include additional fields pulled from the JSON records. This would include a create_date field with a datatype of VARCHAR. When this field is used in the filter, partition pruning will occur.

C.

Validate the size of the warehouse being used. If the record count is approaching hundreds of millions, an XL warehouse will be the minimum size required to process this amount of data.

D.

Incorporate the use of multiple tables partitioned by date ranges. When a user or process needs to query a particular date range, ensure the appropriate base table is used.
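
Option A amounts to materializing the key as a typed column so the optimizer can prune micro-partitions on it. A minimal sketch, assuming a hypothetical table iot_events with a VARIANT column payload:

-- Materialize the frequently filtered key as a typed column
ALTER TABLE iot_events ADD COLUMN create_date TIMESTAMP_NTZ;
UPDATE iot_events SET create_date = payload:create_date::TIMESTAMP_NTZ;

-- Optionally cluster on the new column to keep pruning effective as data grows
ALTER TABLE iot_events CLUSTER BY (create_date);

-- Filters on the typed column can now benefit from partition pruning
SELECT COUNT(*) FROM iot_events
WHERE create_date >= '2024-01-01'::TIMESTAMP_NTZ;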

A user can change object parameters using which of the following roles?

A.

ACCOUNTADMIN, SECURITYADMIN

B.

SYSADMIN, SECURITYADMIN

C.

ACCOUNTADMIN, USER with PRIVILEGE

D.

SECURITYADMIN, USER with PRIVILEGE
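
For context, object parameters are changed with ALTER statements, so the acting role needs the relevant privilege on the object rather than any particular admin role per se. A hypothetical example:

-- Requires ownership of, or the appropriate privilege on, the table
ALTER TABLE orders SET DATA_RETENTION_TIME_IN_DAYS = 7;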

An Architect uses COPY INTO with the ON_ERROR=SKIP_FILE option to bulk load CSV files into a table called TABLEA, using its table stage. One file named file5.csv fails to load. The Architect fixes the file and reloads it to the stage with the exact same file name it had previously.

Which commands should the Architect use to load only file5.csv file from the stage? (Choose two.)

A.

COPY INTO tablea FROM @%tablea RETURN_FAILED_ONLY = TRUE;

B.

COPY INTO tablea FROM @%tablea;

C.

COPY INTO tablea FROM @%tablea FILES = ('file5.csv');

D.

COPY INTO tablea FROM @%tablea FORCE = TRUE;

E.

COPY INTO tablea FROM @%tablea NEW_FILES_ONLY = TRUE;

F.

COPY INTO tablea FROM @%tablea MERGE = TRUE;
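
Note that these copy options can also be combined. For instance, a hedged illustration that targets the single staged file and bypasses load-history deduplication in case Snowflake still considers it loaded:

COPY INTO tablea FROM @%tablea FILES = ('file5.csv') FORCE = TRUE;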

What considerations need to be taken into account when using database cloning as a tool for data lifecycle management in a development environment? (Select TWO).

A.

Any pipes in the source are not cloned.

B.

Any pipes in the source referring to internal stages are not cloned.

C.

Any pipes in the source referring to external stages are not cloned.

D.

The clone inherits all granted privileges of all child objects in the source object, including the database.

E.

The clone inherits all granted privileges of all child objects in the source object, excluding the database.
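
For reference, a zero-copy clone of a production database for development is created as follows (names hypothetical); how pipes and privileges carry over to the clone is what these options distinguish:

-- Zero-copy clone of production for a development environment
CREATE DATABASE dev_db CLONE prod_db;

-- Optionally clone as of a point in time using Time Travel (here, 24 hours ago)
CREATE DATABASE dev_db_yesterday CLONE prod_db AT (OFFSET => -86400);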

An Architect needs to meet a company requirement to ingest files from the company's AWS storage accounts into the company's Snowflake Google Cloud Platform (GCP) account.

How can the ingestion of these files into the company's Snowflake account be initiated? (Select TWO).

A.

Configure the client application to call the Snowpipe REST endpoint when new files have arrived in Amazon S3 storage.

B.

Configure the client application to call the Snowpipe REST endpoint when new files have arrived in Amazon S3 Glacier storage.

C.

Create an AWS Lambda function to call the Snowpipe REST endpoint when new files have arrived in Amazon S3 storage.

D.

Configure AWS Simple Notification Service (SNS) to notify Snowpipe when new files have arrived in Amazon S3 storage.

E.

Configure the client application to issue a COPY INTO command to Snowflake when new files have arrived in Amazon S3 Glacier storage.
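
For context, a pipe driven by the Snowpipe REST API (rather than by cloud event notifications) is created without AUTO_INGEST; the client application or an AWS Lambda function then calls the REST endpoint (insertFiles) with the names of newly arrived files. A sketch with hypothetical names:

-- External stage over the S3 location, using a pre-configured storage integration
CREATE STAGE s3_stage
  URL = 's3://company-bucket/landing/'
  STORAGE_INTEGRATION = s3_int;

-- Pipe to be triggered via REST calls (no AUTO_INGEST)
CREATE PIPE rest_pipe AS
  COPY INTO landing_table FROM @s3_stage FILE_FORMAT = (TYPE = 'CSV');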