
When comparing and contrasting the ReLU and sigmoid activation functions, which statement is true?

A. ReLU is a linear function while sigmoid is non-linear.
B. ReLU is less computationally efficient than sigmoid, but it is more accurate than sigmoid.
C. ReLU and sigmoid both have a range of 0 to 1.
D. ReLU is more computationally efficient, but sigmoid is better for predicting probabilities.
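For reference, a minimal pure-Python sketch contrasting the two activations: ReLU is a cheap `max(0, x)` with range [0, ∞) (piecewise linear, but non-linear overall), while sigmoid squashes inputs into (0, 1), which is why its output is often read as a probability.

```python
import math

def relu(x: float) -> float:
    # max(0, x): two linear pieces joined at 0, so non-linear as a whole
    return max(0.0, x)

def sigmoid(x: float) -> float:
    # squashes any real input into the open interval (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

# ReLU passes positives through unchanged and zeroes out negatives.
print(relu(-2.0), relu(3.0))   # 0.0 3.0
# Sigmoid output can be interpreted as a probability.
print(round(sigmoid(0.0), 3))  # 0.5
```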

In the context of evaluating a fine-tuned LLM for a text classification task, which experimental design technique ensures robust performance estimation when dealing with imbalanced datasets?

A. Single hold-out validation with a fixed test set.
B. Stratified k-fold cross-validation.
C. Bootstrapping with random sampling.
D. Grid search for hyperparameter tuning.
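To illustrate what "stratified" means here, the following is a minimal pure-Python sketch (not a library implementation): each class's indices are dealt round-robin across the k folds, so every fold keeps roughly the same class balance as the full, imbalanced dataset.

```python
from collections import defaultdict

def stratified_kfold(labels, k):
    """Assign sample indices to k folds while preserving class proportions."""
    folds = [[] for _ in range(k)]
    by_class = defaultdict(list)
    for idx, label in enumerate(labels):
        by_class[label].append(idx)
    # Deal each class's indices round-robin so folds stay balanced.
    for indices in by_class.values():
        for i, idx in enumerate(indices):
            folds[i % k].append(idx)
    return folds

# Imbalanced toy labels: 8 negatives, 2 positives (a 4:1 ratio).
labels = [0] * 8 + [1] * 2
folds = stratified_kfold(labels, k=2)
for fold in folds:
    # Each fold gets 4 negatives and 1 positive, matching the 4:1 ratio.
    print(sorted(labels[i] for i in fold))  # [0, 0, 0, 0, 1] twice
```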

What are the main advantages of instruction-tuned large language models over traditional small language models (< 300M parameters)? (Pick the 2 correct responses)

A. Trained without the need for labeled data.
B. Lower latency, higher throughput.
C. Easier to explain the predictions.
D. Cheaper computational costs during inference.
E. A single generic model can perform more than one task.

When should one use data clustering and visualization techniques such as t-SNE or UMAP?

A. When there is a need to handle missing values and impute them in the dataset.
B. When there is a need to perform regression analysis and predict continuous numerical values.
C. When there is a need to reduce the dimensionality of the data and visualize the clusters in a lower-dimensional space.
D. When there is a need to perform feature extraction and identify important variables in the dataset.
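As an illustration of dimensionality reduction for cluster visualization, here is a short sketch using scikit-learn's `TSNE` (assuming scikit-learn and NumPy are installed): two well-separated 10-dimensional clusters are embedded into 2 dimensions, where they can then be plotted.

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
# Two synthetic clusters in 10 dimensions (20 points each).
cluster_a = rng.normal(loc=0.0, scale=0.5, size=(20, 10))
cluster_b = rng.normal(loc=5.0, scale=0.5, size=(20, 10))
X = np.vstack([cluster_a, cluster_b])

# Reduce to 2 dimensions for plotting; perplexity must be < n_samples.
embedding = TSNE(n_components=2, perplexity=10,
                 init="random", random_state=0).fit_transform(X)
print(embedding.shape)  # (40, 2)
```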

You need to customize your LLM via prompt engineering, prompt learning, or parameter-efficient fine-tuning. Which framework helps you with all of these?

A. NVIDIA TensorRT
B. NVIDIA DALI
C. NVIDIA Triton
D. NVIDIA NeMo

You are working on developing an application to classify images of animals and need to train a neural model. However, you have a limited amount of labeled data. Which technique can you use to leverage the knowledge from a model pre-trained on a different task to improve the performance of your new model?

A. Dropout
B. Random initialization
C. Transfer learning
D. Early stopping
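A toy sketch of the transfer-learning idea: a hypothetical frozen feature extractor (standing in for a network pretrained on a different, data-rich task) is reused as-is, and only a small new classification head is trained on the limited labeled data for the new task.

```python
import math

# Hypothetical stand-in for a frozen, pretrained feature extractor; in a
# real setting this would be a network pretrained on a large dataset
# (e.g. a generic image corpus) with its weights kept fixed.
def pretrained_features(x):
    return [x, x * x]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_head(data, epochs=200, lr=0.5):
    """Train only a new linear head; the feature extractor is untouched."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            f = pretrained_features(x)              # frozen features
            pred = sigmoid(w[0] * f[0] + w[1] * f[1] + b)
            err = pred - y                          # logistic-loss gradient
            w = [wi - lr * err * fi for wi, fi in zip(w, f)]
            b -= lr * err
    return w, b

# A small labeled set for the new task: is x positive?
data = [(-2.0, 0), (-1.0, 0), (1.0, 1), (2.0, 1)]
w, b = train_head(data)
predict = lambda x: sigmoid(
    sum(wi * fi for wi, fi in zip(w, pretrained_features(x))) + b)
print(predict(2.0) > 0.5, predict(-2.0) < 0.5)  # True True
```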

In the evaluation of Natural Language Processing (NLP) systems, what do ‘validity’ and ‘reliability’ imply regarding the selection of evaluation metrics?

A. Validity involves the metric’s ability to predict future trends in data, and reliability refers to its capacity to integrate with multiple data sources.
B. Validity ensures the metric accurately reflects the intended property to measure, while reliability ensures consistent results over repeated measurements.
C. Validity is concerned with the metric’s computational cost, while reliability is about its applicability across different NLP platforms.
D. Validity refers to the speed of metric computation, whereas reliability pertains to the metric’s performance in high-volume data processing.

Which of the following options best describes the NeMo Guardrails platform?

A. Ensuring scalability and performance of large language models in pre-training and inference.
B. Developing and designing advanced machine learning models capable of interpreting and integrating various forms of data.
C. Ensuring the ethical use of artificial intelligence systems by monitoring and enforcing compliance with predefined rules and regulations.
D. Building advanced data factories for generative AI services in the context of language models.

Which Python library is specifically designed for working with large language models (LLMs)?

A. NumPy
B. Pandas
C. HuggingFace Transformers
D. Scikit-learn

You have access to training data but no access to test data. What evaluation method can you use to assess the performance of your AI model?

A. Cross-validation
B. Randomized controlled trial
C. Average entropy approximation
D. Greedy decoding
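When no separate test set exists, k-fold cross-validation rotates a validation fold out of the training data itself, so every sample is held out exactly once. A minimal index-splitting sketch:

```python
def kfold_indices(n, k):
    """Split indices 0..n-1 into k contiguous validation folds."""
    fold_size, remainder = divmod(n, k)
    folds, start = [], 0
    for i in range(k):
        # Spread any remainder over the first folds so sizes differ by at most 1.
        size = fold_size + (1 if i < remainder else 0)
        folds.append(list(range(start, start + size)))
        start += size
    return folds

# With 10 training samples and k=5, each sample is validated exactly once:
# train on the other 4 folds, evaluate on the held-out fold, average 5 scores.
folds = kfold_indices(10, 5)
for val_idx in folds:
    train_idx = [i for i in range(10) if i not in val_idx]
    # fit model on train_idx, score on val_idx here
print(folds)  # [[0, 1], [2, 3], [4, 5], [6, 7], [8, 9]]
```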