
What is a foundation model in the context of Large Language Models (LLMs)?

A. A model that sets the state-of-the-art results for any of the tasks that compose the General Language Understanding Evaluation (GLUE) benchmark.

B. Any model trained on vast quantities of data at scale whose goal is to serve as a starting point that can be adapted to a variety of downstream tasks.

C. Any model validated by the artificial intelligence safety institute as the foundation for building transformer-based applications.

D. Any model based on the foundational paper "Attention Is All You Need" that uses recurrent neural networks and convolution layers.
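Option B captures the standard definition: pretrain once at scale, then adapt to downstream tasks. As a hedged illustration of that adaptation step (assuming the Hugging Face transformers library; the checkpoint and label count are illustrative, not prescribed), a pretrained model can be reused with a new task-specific head:

```python
# Minimal sketch: adapting a pretrained foundation model to a downstream task.
# The checkpoint name and 2-class setup are illustrative assumptions.
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoint = "bert-base-uncased"  # general-purpose pretrained model
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

# Reuse the pretrained weights and attach a new 2-class head, e.g. for sentiment.
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

inputs = tokenizer("The upgrade went smoothly.", return_tensors="pt")
outputs = model(**inputs)      # logits for the downstream task
print(outputs.logits.shape)    # torch.Size([1, 2])
```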

What is 'chunking' in Retrieval-Augmented Generation (RAG)?

A. Rewriting blocks of text to fill a context window.

B. A method used in RAG to generate random text.

C. A concept in RAG that refers to the training of large language models.

D. A technique used in RAG to split text into meaningful segments.
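Option D reflects how chunking is normally used in RAG: documents are split into retrievable segments before indexing. A minimal, library-free sketch of fixed-size chunking with overlap (the sizes are illustrative choices, not a recommended setting):

```python
# Minimal sketch of fixed-size chunking with overlap for a RAG pipeline.
# Chunk size and overlap are illustrative; real pipelines often split on
# sentence or section boundaries instead of raw word counts.
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    words = text.split()
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(words), step):
        chunk = words[start:start + chunk_size]
        if chunk:
            chunks.append(" ".join(chunk))
    return chunks

document = "RAG systems index source documents so an LLM can ground its answers. " * 30
print(len(chunk_text(document)))  # number of overlapping segments to index
```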

What do we usually refer to as generative AI?

A. A branch of artificial intelligence that focuses on creating models that can generate new and original data.

B. A branch of artificial intelligence that focuses on the automatic generation of models for classification.

C. A branch of artificial intelligence that focuses on improving the efficiency of existing models.

D. A branch of artificial intelligence that focuses on analyzing and interpreting existing data.

What is the fundamental role of LangChain in an LLM workflow?

A. To act as a replacement for traditional programming languages.

B. To reduce the size of AI foundation models.

C. To orchestrate LLM components into complex workflows.

D. To directly manage the hardware resources used by LLMs.
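Option C matches LangChain's role as an orchestration layer: it wires prompts, models, and output parsers into a single runnable workflow. A hedged sketch using LangChain's pipe-style composition (module paths assume recent langchain_core / langchain_openai packages and may differ across versions):

```python
# Hedged sketch of LangChain-style orchestration; module layout assumes recent
# langchain_core / langchain_openai releases and may differ between versions.
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template(
    "Summarize the following support ticket in one sentence:\n\n{ticket}"
)
llm = ChatOpenAI(model="gpt-4o-mini")  # any chat model wrapper could sit here
parser = StrOutputParser()

# LangChain's job: chain the prompt, model, and parser into one workflow.
chain = prompt | llm | parser
print(chain.invoke({"ticket": "The app crashes when I upload a PDF."}))
```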

What distinguishes BLEU scores from ROUGE scores when evaluating natural language processing models?

A. BLEU scores determine the fluency of text generation, while ROUGE scores rate the uniqueness of generated text.

B. BLEU scores analyze syntactic structures, while ROUGE scores evaluate semantic accuracy.

C. BLEU scores evaluate the 'precision' of translations, while ROUGE scores focus on the 'recall' of summarized text.

D. BLEU scores measure model efficiency, whereas ROUGE scores assess computational complexity.
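The distinction in option C is precision versus recall. A toy unigram example makes it concrete (deliberately simplified: real BLEU uses clipped n-gram precision with a brevity penalty, and ROUGE has several variants):

```python
# Toy unigram illustration of the precision/recall split behind BLEU and ROUGE.
# Real BLEU adds clipped n-gram counts and a brevity penalty; ROUGE-N recall
# follows the shape shown here.
from collections import Counter

reference = "the cat sat on the mat".split()
candidate = "the cat on the mat".split()

overlap = sum((Counter(reference) & Counter(candidate)).values())

bleu_like_precision = overlap / len(candidate)  # matched words / candidate length
rouge_like_recall = overlap / len(reference)    # matched words / reference length

print(f"precision ~ {bleu_like_precision:.2f}, recall ~ {rouge_like_recall:.2f}")
# precision ~ 1.00, recall ~ 0.83
```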

Which of the following claims is correct about TensorRT and ONNX?

A. TensorRT is used for model deployment and ONNX is used for model interchange.

B. TensorRT is used for model deployment and ONNX is used for model creation.

C. TensorRT is used for model creation and ONNX is used for model interchange.

D. TensorRT is used for model creation and ONNX is used for model deployment.
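Option A reflects the usual division of labor: ONNX is a framework-neutral interchange format, while TensorRT builds an optimized engine for deployment. A hedged sketch assuming a PyTorch model (the trtexec flags reflect common usage and may vary by TensorRT version):

```python
# Hedged sketch: export a trained PyTorch model to ONNX (interchange format),
# then hand the ONNX file to TensorRT to build a deployment engine.
import torch
import torch.nn as nn

model = nn.Linear(128, 10).eval()   # stand-in for a trained model
dummy_input = torch.randn(1, 128)

# ONNX: framework-neutral model interchange.
torch.onnx.export(model, dummy_input, "model.onnx")

# TensorRT: build an optimized inference engine from the ONNX file for
# deployment, e.g. with the trtexec CLI (flags may vary by version):
#   trtexec --onnx=model.onnx --saveEngine=model.engine
```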

Which statement best describes diffusion models in generative AI?

A. Diffusion models are probabilistic generative models that progressively inject noise into data, then learn to reverse this process for sample generation.

B. Diffusion models are discriminative models that use gradient-based optimization algorithms to classify data points.

C. Diffusion models are unsupervised models that use clustering algorithms to group similar data points together.

D. Diffusion models are generative models that use a transformer architecture to learn the underlying probability distribution of the data.
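Option A describes the two halves of a diffusion model: a fixed forward process that gradually adds noise, and a learned reverse process that denoises. A toy sketch of the DDPM-style forward step (the linear beta schedule is illustrative, not a tuned choice):

```python
# Toy sketch of the DDPM-style forward (noising) process: progressively inject
# Gaussian noise into data; a separate network is trained to reverse this.
# The linear beta schedule below is illustrative, not a tuned choice.
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

def q_sample(x0: torch.Tensor, t: int) -> torch.Tensor:
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(a_bar_t) * x_0, (1 - a_bar_t) * I)."""
    a_bar = alphas_cumprod[t]
    noise = torch.randn_like(x0)
    return a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise

x0 = torch.randn(1, 3, 32, 32)    # stand-in for a training image
x_noisy = q_sample(x0, t=500)     # halfway through the noising process
```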

Your company has upgraded from a legacy LLM model to a new model that allows for larger sequences and higher token limits. What is the most likely result of upgrading to the new model?

A. The number of tokens is fixed for all existing language models, so there is no benefit to upgrading to higher token limits.

B. The newer model allows for larger context, so the outputs will improve without increasing inference time overhead.

C. The newer model allows the same context lengths, but the larger token limit will result in more comprehensive and longer outputs with more detail.

D. The newer model allows larger context, so outputs will improve, but you will likely incur longer inference times.
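Option D reflects the usual trade-off: self-attention cost grows roughly quadratically with sequence length, so longer usable context tends to mean longer inference. A back-of-the-envelope sketch (the hidden size and layer count are illustrative, not any specific model):

```python
# Back-of-the-envelope: the attention score computation scales quadratically
# with sequence length, so longer contexts cost disproportionately more.
# Hidden size and layer count are illustrative, not a specific model.
def attention_score_flops(seq_len: int, d_model: int = 4096, n_layers: int = 32) -> float:
    # QK^T and the attention-weighted V each cost ~seq_len^2 * d_model per layer.
    return 2 * (seq_len ** 2) * d_model * n_layers

for n in (2_048, 8_192, 32_768):
    print(f"{n:>6} tokens: ~{attention_score_flops(n):.2e} FLOPs in attention scores")
```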