
NTO needs to extract 50 million records from a custom object every day from its Salesforce org. NTO is facing query timeout issues while extracting these records.

What should a data architect recommend in order to get around the timeout issue? (A sketch of the chunking approach follows the options.)

A.

Use a custom auto number and formula field and use that to chunk records while extracting data.

B.

Use the REST API to extract the data, as it automatically chunks records by 200.

C.

Use an ETL tool for the extraction of records.

D.

Ask Salesforce support to increase the query timeout value.
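
For context, option A is the classic workaround: ranged queries over a sequential number never scan the whole table, so no single call runs long enough to time out. A minimal sketch, assuming a hypothetical custom object Shipment__c, a hypothetical numeric formula field Record_Number__c (exposing a custom auto-number field as a sortable integer), and the open-source simple_salesforce client:

```python
# Minimal sketch of option A: extract in bounded ranges over a numeric
# formula field instead of issuing one huge, timeout-prone query.
from simple_salesforce import Salesforce

sf = Salesforce(username="user@example.com", password="***", security_token="***")

CHUNK_SIZE = 250_000  # size each range so a single query finishes well under the timeout

def extract_in_chunks(total_records: int):
    """Yield records one bounded range at a time."""
    for start in range(0, total_records, CHUNK_SIZE):
        soql = (
            "SELECT Id, Name FROM Shipment__c "
            f"WHERE Record_Number__c >= {start} "
            f"AND Record_Number__c < {start + CHUNK_SIZE}"
        )
        yield from sf.query_all(soql)["records"]

for record in extract_in_chunks(50_000_000):
    pass  # stream each record to the export target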

Northern Trail Outfitters has these simple requirements for a data export process:

The file format should be CSV.

The process should be scheduled to run once per week.

The export should be configurable through the Salesforce UI.

Which tool should a data architect leverage to accomplish these requirements? (A sketch of what the code-based alternatives entail follows the options.)

A.

Bulk API

B.

Data Export Wizard

C.

Third-party ETL tool

D.

Data Loader
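
One way to see why the Data Export Wizard fits is to look at what the code-based options (Bulk API, Data Loader, ETL) would make you build, host, and schedule yourself. A hedged sketch, assuming the simple_salesforce client and a cron-style scheduler outside Salesforce:

```python
# What options A/C/D imply: a hand-rolled weekly CSV export that you must
# secure and schedule yourself -- work the Data Export Wizard's Setup page
# handles with no code at all.
import csv
from simple_salesforce import Salesforce

sf = Salesforce(username="user@example.com", password="***", security_token="***")

def export_accounts_csv(path: str) -> None:
    records = sf.bulk.Account.query("SELECT Id, Name FROM Account")
    with open(path, "w", newline="") as fh:
        writer = csv.DictWriter(fh, fieldnames=["Id", "Name"])
        writer.writeheader()
        for rec in records:
            writer.writerow({"Id": rec["Id"], "Name": rec["Name"]})

export_accounts_csv("accounts_weekly.csv")  # cron entry: 0 2 * * 0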

Due to security requirements, Universal Containers needs to capture specific user actions, such as logins, logouts, file attachment downloads, and package installs. What is the recommended approach for defining a solution for this requirement? (A code sketch follows the answer choices.)

A.

Use Field Audit Trail to capture field changes.

B.

Use a custom object and trigger to capture changes.

C.

Use Event Monitoring to capture these user actions.

D.

Use a third-party AppExchange app to capture changes.
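
Option C maps to the EventLogFile sObject, which Event Monitoring (a licensed add-on) populates with one CSV log file per event type per day. A minimal sketch of reading it, assuming the simple_salesforce client and the requests library; the API version in the URL is an assumption:

```python
# Sketch of option C: query EventLogFile, then download each log's CSV body.
# Requires the Event Monitoring add-on license in the org.
import requests
from simple_salesforce import Salesforce

sf = Salesforce(username="user@example.com", password="***", security_token="***")

logs = sf.query(
    "SELECT Id, EventType, LogDate FROM EventLogFile "
    "WHERE EventType IN ('Login', 'Logout') ORDER BY LogDate DESC LIMIT 5"
)
for log in logs["records"]:
    # The LogFile field is a binary blob served from its own REST endpoint.
    url = (
        f"https://{sf.sf_instance}/services/data/v59.0"
        f"/sobjects/EventLogFile/{log['Id']}/LogFile"
    )
    body = requests.get(url, headers={"Authorization": f"Bearer {sf.session_id}"})
    print(log["EventType"], log["LogDate"], len(body.text), "bytes of CSV")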

Universal Containers (UC) has a Salesforce org with multiple automated processes defined for group membership processing. UC also has multiple admins on staff who perform manual adjustments to the role hierarchy. The automated tasks and manual tasks overlap daily, and UC is experiencing "lock errors" consistently.

What should a data architect recommend to mitigate these errors? (A sketch follows the options.)

A.

Enable granular locking.

B.

Remove SOQL statements from Apex loops.

C.

Enable sharing recalculations.

D.

Ask Salesforce support for additional CPU power.
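
Granular locking (option A) is an org-level feature enabled by Salesforce: it narrows share-table locks from the whole table to the affected portions of the group hierarchy, so automated and manual maintenance collide less often. It is not something you code, but a common client-side companion is to retry DML that hits a residual lock, sketched here with a hypothetical group-membership insert:

```python
# Companion pattern to option A: retry group-membership DML on lock
# contention with exponential backoff, since granular locking reduces but
# does not eliminate overlapping share-table updates.
import time
from simple_salesforce import Salesforce
from simple_salesforce.exceptions import SalesforceError

sf = Salesforce(username="user@example.com", password="***", security_token="***")

def add_member_with_retry(group_id: str, user_id: str, attempts: int = 5):
    for attempt in range(attempts):
        try:
            return sf.GroupMember.create(
                {"GroupId": group_id, "UserOrGroupId": user_id}
            )
        except SalesforceError as err:
            if "UNABLE_TO_LOCK_ROW" not in str(err) or attempt == attempts - 1:
                raise
            time.sleep(2 ** attempt)  # back off, then try the insert again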

Northern Trail Outfitters (NTO) has a variety of customers that include households, businesses, and individuals.

The following conditions exist within its system:

NTO has a total of five million customers.

Duplicate records exist and are replicated across many systems, including Salesforce.

Given these conditions, there is a lack of consistent presentation and clear identification of a customer record.

Which three options should a data architect choose to resolve the issues with the customer data? (A sketch of the master-data approach follows the options.)

A.

Create a unique global customer ID for each customer and store it in all systems for referential integrity.

B.

Use Salesforce Change Data Capture (CDC) to sync customer data across all systems to keep customer records in sync.

C.

Invest in a data deduplication tool to de-dupe and merge duplicate records across all systems.

D.

Duplicate customer records across the systems and provide a two-way sync of data between the systems.

E.

Create a customer master database external to Salesforce as the system of truth and sync the customer data with all systems.
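
Options A and C (with E supplying the master store) amount to classic master data management: compute a match key, collapse duplicates into one golden record, and stamp it with a global ID that every system shares. A minimal, library-free sketch; the name-plus-email match key is a stand-in for a real matching rule:

```python
# Toy golden-record builder: one global ID per de-duplicated customer.
import uuid

customers = [
    {"name": "Pat Jones ", "email": "PAT@EXAMPLE.COM", "source": "Salesforce"},
    {"name": "Pat Jones", "email": "pat@example.com", "source": "ERP"},
]

def match_key(rec: dict) -> tuple[str, str]:
    """Normalize the fields a matching rule would compare."""
    return (rec["name"].strip().lower(), rec["email"].strip().lower())

golden: dict[tuple[str, str], dict] = {}
for rec in customers:
    master = golden.setdefault(
        match_key(rec), {"global_id": str(uuid.uuid4()), "sources": []}
    )
    master["sources"].append(rec["source"])  # survivorship rules would go here

print(golden)  # one golden record, one global_id, two contributing systems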

Universal Containers has a Sales Cloud implementation for a sales team and an enterprise resource planning (ERP) system as the customer master. The sales team is complaining about duplicate accounts and data quality issues with account data.

Which two solutions should a data architect recommend to resolve the complaints? (A sync sketch follows the options.)

A.

Build a nightly batch job to de-dupe data, and merge account records.

B.

Integrate Salesforce with the ERP, and make the ERP the system of truth.

C.

Build a nightly sync job from ERP to Salesforce.

D.

Implement a de-dupe solution and establish account ownership in Salesforce.
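
Option B's "ERP as system of truth" pattern is usually implemented as an upsert keyed on an external ID, so reloading the same ERP account updates the existing Salesforce record instead of minting a duplicate. A sketch assuming a hypothetical external-ID field ERP_Account_Id__c and the simple_salesforce client:

```python
# Sketch of option B: the ERP is the account master; upsert into Salesforce
# on a hypothetical external-ID field so the nightly sync is idempotent.
from simple_salesforce import Salesforce

sf = Salesforce(username="user@example.com", password="***", security_token="***")

erp_accounts = [  # in practice, read from the ERP's API or a staging table
    {"erp_id": "A-1001", "name": "Acme Corp"},
    {"erp_id": "A-1002", "name": "Globex"},
]

for acct in erp_accounts:
    # Upsert keyed on the external ID: creates on first run, updates thereafter.
    sf.Account.upsert(f"ERP_Account_Id__c/{acct['erp_id']}", {"Name": acct["name"]})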

UC is planning a massive Salesforce implementation with large volumes of data. As part of the org's implementation, several roles, territories, groups, and sharing rules have been configured. The data architect has been tasked with loading all of the required data, including user data, in a timely manner.

What should a data architect do to minimize data load times due to system calculations? (A load-sequence sketch follows the options.)

A.

Enable Defer Sharing Calculations, and suspend sharing rule calculations.

B.

Load the data through Data Loader, and turn on parallel processing.

C.

Leverage the Bulk API and concurrent processing with multiple batches.

D.

Enable granular locking to avoid the “UNABLE_TO_LOCK_ROW” error.
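
Option A is a Setup-side switch (Defer Sharing Calculations), so the code change is only in sequencing: defer, load, then resume and recalculate once. A sketch of the load step in between, assuming the simple_salesforce bulk client and Contact records as a stand-in for the full data set:

```python
# The load that option A brackets: sharing recalculation is deferred before
# this runs and resumed (with one full recalc) after it finishes.
from simple_salesforce import Salesforce

sf = Salesforce(username="user@example.com", password="***", security_token="***")

# Step 1 (Setup): Defer Sharing Calculations -> suspend sharing rule calculation.
contacts = [{"LastName": f"Bulk{i}"} for i in range(50_000)]  # stand-in data
results = sf.bulk.Contact.insert(contacts, batch_size=10_000)
failures = [r for r in results if not r["success"]]
print(f"{len(results) - len(failures)} loaded, {len(failures)} failed")
# Step 2 (Setup): resume sharing calculations and run a full recalculation once.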