
What do you use the score.py file for?

A. Configure the deployment infrastructure
B. Execute the inference logic code
C. Define the required conda environment
D. Define the scaling strategy
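For context, the sketch below shows the shape a `score.py` typically takes: the OCI Data Science model-deployment convention expects a `load_model()` function that deserializes the artifact and a `predict()` function that runs the inference logic. The artifact filename `model.pkl` and the payload handling are assumptions for illustration.

```python
import json
import os
import pickle

MODEL_FILE = "model.pkl"  # assumed artifact filename


def load_model(model_dir=None):
    """Deserialize the model once, when the serving container starts."""
    model_dir = model_dir or os.path.dirname(os.path.abspath(__file__))
    with open(os.path.join(model_dir, MODEL_FILE), "rb") as f:
        return pickle.load(f)


def predict(data, model=None):
    """Execute the inference logic on the incoming payload."""
    model = model or load_model()
    rows = json.loads(data) if isinstance(data, str) else data
    return {"prediction": [model.predict(row) for row in rows]}
```

Note that `score.py` carries only the inference logic; infrastructure, scaling, and the conda environment are configured elsewhere in the deployment.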

You are preparing a configuration object necessary to create a Data Flow application. Which THREE parameter values should you provide?

A. The path to the archive.zip file
B. The local path to your PySpark script
C. The compartment of the Data Flow application
D. The bucket used to read/write the PySpark script in Object Storage
E. The display name of the application
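As a reference point, the fields below sketch a Data Flow application configuration, mirroring the field names of `oci.data_flow.models.CreateApplicationDetails` in the OCI Python SDK. All values are placeholders, and the exact set of required fields should be checked against the SDK documentation.

```python
# Sketch of a Data Flow application configuration (placeholder values only).
# Field names follow oci.data_flow.models.CreateApplicationDetails.
app_config = {
    "compartment_id": "ocid1.compartment.oc1..<placeholder>",       # compartment of the application
    "display_name": "pyspark-etl-demo",                             # display name of the application
    "file_uri": "oci://my-bucket@my-namespace/scripts/etl.py",      # PySpark script in Object Storage
    "archive_uri": "oci://my-bucket@my-namespace/deps/archive.zip", # optional dependency archive
    "language": "PYTHON",
    "spark_version": "3.2.1",
    "driver_shape": "VM.Standard2.1",
    "executor_shape": "VM.Standard2.1",
    "num_executors": 2,
}
```

Note that the script is referenced by its Object Storage URI, not a local path.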

You want to build a multistep machine learning workflow by using the Oracle Cloud Infrastructure (OCI) Data Science Pipeline feature. How would you configure the conda environment to run a pipeline step?

A. Configure a compute shape
B. Configure a block volume
C. Use command-line variables
D. Use environment variables
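In OCI Data Science jobs and pipeline steps, the conda environment is commonly selected through environment variables; the names `CONDA_ENV_TYPE` and `CONDA_ENV_SLUG` below follow the OCI documentation, but the slug value is a placeholder and should be verified for your tenancy.

```python
import os

# Environment variables a pipeline step can set to choose its conda
# environment. "service" selects an Oracle-provided environment;
# "published" selects one you have published yourself.
step_env = {
    "CONDA_ENV_TYPE": "service",               # "service" or "published"
    "CONDA_ENV_SLUG": "generalml_p38_cpu_v1",  # placeholder slug
}


def resolve_conda_env(env=None):
    """Read the conda selection the way the step runtime would."""
    env = env or os.environ
    return env.get("CONDA_ENV_TYPE", "service"), env.get("CONDA_ENV_SLUG")
```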

When preparing your model artifact to save it to the Oracle Cloud Infrastructure (OCI) Data Science model catalog, you create a score.py file. What is the purpose of the score.py file?

A. Configure the deployment infrastructure
B. Execute the inference logic code
C. Define the compute scaling strategy
D. Define the inference server dependencies

You are a data scientist designing an air traffic control model, and you choose to leverage Oracle AutoML. You understand that the Oracle AutoML pipeline consists of multiple stages and automatically operates in a certain sequence. What is the correct sequence for the Oracle AutoML pipeline?

A. Algorithm selection, Feature selection, Adaptive sampling, Hyperparameter tuning
B. Adaptive sampling, Algorithm selection, Feature selection, Hyperparameter tuning
C. Adaptive sampling, Feature selection, Algorithm selection, Hyperparameter tuning
D. Algorithm selection, Adaptive sampling, Feature selection, Hyperparameter tuning

As a data scientist, you are tasked with creating a model training job that is expected to take different hyperparameter values on every run. What is the most efficient way to set those parameters with Oracle Data Science Jobs?

A. Create a new job every time you need to run your code and pass the parameters as environment variables
B. Create a new job by setting the required parameters in your code and create a new job for every code change
C. Create your code to expect different parameters either as environment variables or as command-line arguments, which are set on every job run with different values
D. Create your code to expect different parameters as command-line arguments and create a new job every time you run the code
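A parameterized training entrypoint along these lines lets a single job definition be rerun with different hyperparameter values; the parameter and variable names (`--learning-rate`, `LEARNING_RATE`, etc.) are illustrative, not a fixed OCI convention.

```python
import argparse
import os


def parse_hyperparams(argv=None, env=None):
    """Resolve hyperparameters from CLI arguments, falling back to
    environment variables, so each job run can supply new values."""
    env = env or os.environ
    parser = argparse.ArgumentParser(description="training job entrypoint")
    parser.add_argument("--learning-rate", type=float,
                        default=float(env.get("LEARNING_RATE", 0.01)))
    parser.add_argument("--epochs", type=int,
                        default=int(env.get("EPOCHS", 10)))
    return parser.parse_args(argv)

# Each run overrides values without creating a new job, e.g.:
#   python train.py --learning-rate 0.001 --epochs 50
```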

Which of the following analytical and statistical techniques do data scientists commonly use?

A. Classification
B. Regression
C. Clustering
D. All of the above
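To make the three techniques concrete, here are deliberately tiny, library-free illustrations: classification assigns a discrete label, regression fits a continuous relationship, and clustering groups similar points. These toys stand in for what a data scientist would normally do with a library such as scikit-learn.

```python
def classify(x, threshold=0.5):
    """Classification: map an input to a discrete label."""
    return "positive" if x >= threshold else "negative"


def fit_line(xs, ys):
    """Regression: least-squares slope and intercept for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx


def cluster_1d(points, split):
    """Clustering (toy): partition points around a split value."""
    return ([p for p in points if p < split],
            [p for p in points if p >= split])
```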