Question # 4

You create a fine-tuning dedicated AI cluster to customize a foundational model with your custom training data. How many unit hours are required for fine-tuning if the cluster is active for 10 hours?

A. 25 unit hours
B. 40 unit hours
C. 20 unit hours
D. 30 unit hours

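For Question # 4, the computation is a one-liner once the cluster's unit sizing is known. A minimal sketch, assuming (as OCI's documentation has commonly stated; verify against the current docs) that a fine-tuning dedicated AI cluster consumes 2 units while active:

```python
# Assumption: a fine-tuning dedicated AI cluster is sized at 2 units.
# Unit hours = units per cluster x hours the cluster is active.
units_per_cluster = 2
hours_active = 10
print(units_per_cluster * hours_active, "unit hours")  # -> 20 unit hours
```
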
Question # 5

What is the purpose of Retrieval Augmented Generation (RAG) in text generation?

A. To generate text based only on the model's internal knowledge without external data
B. To generate text using extra information obtained from an external data source
C. To store text in an external database without using it for generation
D. To retrieve text from an external source and present it without any modifications

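To make the RAG idea in Question # 5 concrete, here is a minimal, self-contained sketch: a toy keyword retriever (a stand-in for real vector search) fetches external context, which is then injected into the prompt so the model answers from retrieved information rather than only its internal knowledge. The names retrieve and build_rag_prompt are illustrative, not a real library API.

```python
def retrieve(query: str, documents: list[str], top_k: int = 1) -> list[str]:
    """Toy retriever: rank documents by word overlap with the query."""
    terms = set(query.lower().split())
    ranked = sorted(documents,
                    key=lambda d: len(terms & set(d.lower().split())),
                    reverse=True)
    return ranked[:top_k]

def build_rag_prompt(query: str, documents: list[str]) -> str:
    # The retrieved passages are extra information from an external source,
    # prepended so the model generates an answer grounded in them.
    context = "\n".join(retrieve(query, documents))
    return f"Answer using the context below.\n\nContext:\n{context}\n\nQuestion: {query}"

docs = ["OCI Generative AI offers dedicated AI clusters for fine-tuning and hosting.",
        "Cats sleep for most of the day."]
print(build_rag_prompt("What does OCI Generative AI offer?", docs))
```
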
Question # 6

What do prompt templates use for templating in language model applications?

A. Python's list comprehension syntax
B. Python's str.format syntax
C. Python's lambda functions
D. Python's class and object structures

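Question # 6 points at Python's str.format-style placeholders. A minimal sketch of the same templating pattern in plain Python (LangChain's prompt templates use this {variable} placeholder syntax):

```python
# {language} and {text} are str.format-style placeholders.
template = "Translate the following text to {language}:\n\n{text}"
prompt = template.format(language="French", text="Hello, world!")
print(prompt)
```
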
Question # 7

What distinguishes the Cohere Embed v3 model from its predecessor in the OCI Generative AI service?

A. Support for tokenizing longer sentences
B. Improved retrievals for Retrieval Augmented Generation (RAG) systems
C. Emphasis on syntactic clustering of word embeddings
D. Capacity to translate text in over 100 languages

Question # 8

Which is a distinguishing feature of "Parameter-Efficient Fine-Tuning (PEFT)" as opposed to classic "Fine-tuning" in Large Language Model training?

A. PEFT involves only a few or new parameters and uses labeled, task-specific data.
B. PEFT modifies all parameters and is typically used when no training data exists.
C. PEFT does not modify any parameters but uses soft prompting with unlabeled data.
D. PEFT modifies all parameters and uses unlabeled, task-agnostic data.

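A minimal PyTorch sketch of the contrast in Question # 8: classic fine-tuning would update every base weight, while a PEFT approach freezes the base and trains only a few new parameters. The toy network and LoRA-style low-rank adapter below are illustrative assumptions, not any particular library's implementation.

```python
import torch.nn as nn

# Toy network standing in for an LLM.
base = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 512))
full = sum(p.numel() for p in base.parameters())  # classic fine-tuning trains all of these

# PEFT: freeze the base model; only the small adapter's weights are trainable.
for p in base.parameters():
    p.requires_grad = False
adapter = nn.Sequential(nn.Linear(512, 8, bias=False),   # low-rank "down" projection
                        nn.Linear(8, 512, bias=False))   # low-rank "up" projection
few = sum(p.numel() for p in adapter.parameters())

print(f"classic fine-tuning: {full:,} trainable params; PEFT adapter: {few:,}")
```
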
Question # 9

How do Dot Product and Cosine Distance differ in their application to comparing text embeddings in natural language processing?

A. Dot Product assesses the overall similarity in content, whereas Cosine Distance measures topical relevance.
B. Dot Product is used for semantic analysis, whereas Cosine Distance is used for syntactic comparisons.
C. Dot Product measures the magnitude and direction of vectors, whereas Cosine Distance focuses on the orientation regardless of magnitude.
D. Dot Product calculates the literal overlap of words, whereas Cosine Distance evaluates the stylistic similarity.

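A worked example for Question # 9: two vectors pointing the same way but with different magnitudes get a large dot product yet zero cosine distance, showing that cosine compares orientation only and ignores magnitude.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cosine_similarity(a, b):
    return dot(a, b) / (math.sqrt(dot(a, a)) * math.sqrt(dot(b, b)))

a = [1.0, 2.0]
b = [2.0, 4.0]  # same direction as a, twice the magnitude
print(dot(a, b))                     # 10.0 -- grows with vector magnitude
print(cosine_similarity(a, b))       # 1.0  -- orientation only
print(1 - cosine_similarity(a, b))   # cosine distance: 0.0 despite different magnitudes
```
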
Question # 10

What role does a "model endpoint" play in the inference workflow of the OCI Generative AI service?

A. Updates the weights of the base model during the fine-tuning process
B. Serves as a designated point for user requests and model responses
C. Evaluates the performance metrics of the custom models
D. Hosts the training data for fine-tuning custom models

Question # 11

Which is a key characteristic of Large Language Models (LLMs) without Retrieval Augmented Generation (RAG)?

A. They always use an external database for generating responses.
B. They rely on internal knowledge learned during pretraining on a large text corpus.
C. They cannot generate responses without fine-tuning.
D. They use vector databases exclusively to produce answers.

Question # 12

How does the structure of a vector database differ from that of a traditional relational database?

A. A vector database stores data in a linear or tabular format.
B. It is not optimized for high-dimensional spaces.
C. It is based on distances and similarities in a vector space.
D. It uses simple row-based data storage.

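A minimal sketch of the contrast in Question # 12: where a relational table matches rows by exact key or predicate, a vector store ranks items by similarity in a vector space. The tiny in-memory "store" and its 3-d embeddings below are made up for illustration.

```python
import math

def cosine_sim(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

store = {                      # id -> embedding (toy 3-d vectors)
    "doc_a": [0.9, 0.1, 0.0],
    "doc_b": [0.1, 0.9, 0.2],
    "doc_c": [0.8, 0.2, 0.1],
}
query = [1.0, 0.0, 0.0]
# Results are ordered by similarity in the vector space, not by key equality.
ranked = sorted(store, key=lambda k: cosine_sim(store[k], query), reverse=True)
print(ranked)  # ['doc_a', 'doc_c', 'doc_b']
```
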
Question # 13

Which is a distinctive feature of GPUs in Dedicated AI Clusters used for generative AI tasks?

A. GPUs are shared with other customers to maximize resource utilization.
B. The GPUs allocated for a customer’s generative AI tasks are isolated from other GPUs.
C. GPUs are used exclusively for storing large datasets, not for computation.
D. Each customer's GPUs are connected via a public Internet network for ease of access.

Question # 14

What is the purpose of Retrievers in LangChain?

A. To train Large Language Models
B. To retrieve relevant information from knowledge bases
C. To break down complex tasks into smaller steps
D. To combine multiple components into a single pipeline

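The retriever role in Question # 14 boils down to one responsibility: given a query, return relevant documents from a knowledge base. The stand-in below mirrors that interface; KeywordRetriever is a hypothetical class, not LangChain's actual implementation (real retrievers typically wrap a vector store).

```python
class KeywordRetriever:
    """Toy retriever: returns documents sharing words with the query."""
    def __init__(self, documents: list[str]):
        self.documents = documents

    def get_relevant_documents(self, query: str) -> list[str]:
        terms = set(query.lower().split())
        return [d for d in self.documents if terms & set(d.lower().split())]

retriever = KeywordRetriever([
    "Vector databases index embeddings for similarity search.",
    "Bread rises because of yeast.",
])
print(retriever.get_relevant_documents("how do vector databases index embeddings"))
```
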
Question # 15

What is the primary purpose of LangSmith Tracing?

A. To generate test cases for language models
B. To analyze the reasoning process of language models
C. To debug issues in language model outputs
D. To monitor the performance of language models

Question # 16

Which is a characteristic of T-Few fine-tuning for Large Language Models (LLMs)?

A. It updates all the weights of the model uniformly.
B. It selectively updates only a fraction of weights to reduce the number of parameters.
C. It selectively updates only a fraction of weights to reduce computational load and avoid overfitting.
D. It increases the training time as compared to Vanilla fine-tuning.

Question # 17

In which scenario is soft prompting appropriate compared to other training styles?

A. When there is a significant amount of labeled, task-specific data available
B. When the model needs to be adapted to perform well in a domain on which it was not originally trained
C. When there is a need to add learnable parameters to a Large Language Model (LLM) without task-specific training
D. When the model requires continued pretraining on unlabeled data

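A minimal PyTorch sketch of soft prompting as in Question # 17: a handful of learnable "virtual token" embeddings are added to the model's input while the LLM's own weights stay frozen, i.e., new learnable parameters without retraining the model itself. SoftPrompt is a hypothetical module name, and the dimensions are arbitrary.

```python
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    """Learnable virtual-token embeddings prepended to the input sequence."""
    def __init__(self, n_virtual_tokens: int, embed_dim: int):
        super().__init__()
        # These are the only trainable parameters; the LLM stays frozen.
        self.prompt = nn.Parameter(torch.randn(n_virtual_tokens, embed_dim) * 0.02)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        batch = input_embeds.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([prompt, input_embeds], dim=1)

soft = SoftPrompt(n_virtual_tokens=10, embed_dim=64)
x = torch.randn(2, 20, 64)  # (batch, seq_len, embed_dim) from the frozen model
print(soft(x).shape)        # torch.Size([2, 30, 64])
```
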
Question # 18

Which statement is true about Fine-tuning and Parameter-Efficient Fine-Tuning (PEFT)?

A. Fine-tuning requires training the entire model on new data, often leading to substantial computational costs, whereas PEFT involves updating only a small subset of parameters, minimizing computational requirements and data needs.
B. PEFT requires replacing the entire model architecture with a new one designed specifically for the new task, making it significantly more data-intensive than Fine-tuning.
C. Both Fine-tuning and PEFT require the model to be trained from scratch on new data, making them equally data and computationally intensive.
D. Fine-tuning and PEFT do not involve model modification; they differ only in the type of data used for training, with Fine-tuning requiring labeled data and PEFT using unlabeled data.

Question # 19

In the simplified workflow for managing and querying vector data, what is the role of indexing?

A. To convert vectors into a non-indexed format for easier retrieval
B. To map vectors to a data structure for faster searching, enabling efficient retrieval
C. To compress vector data for minimized storage usage
D. To categorize vectors based on their originating data type (text, images, audio)

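A minimal sketch of the indexing role in Question # 19, using a crude inverted-file (IVF-style) scheme: each vector is bucketed under its nearest centroid, so a query scans only one bucket instead of the whole collection. The centroids and vectors here are made up for illustration.

```python
import math

def nearest(centroids, v):
    return min(range(len(centroids)), key=lambda i: math.dist(centroids[i], v))

centroids = [[0.0, 0.0], [10.0, 10.0]]
vectors = {"a": [0.5, 0.2], "b": [9.5, 10.1], "c": [0.1, 0.9]}

index = {}  # centroid id -> bucket of vector ids (the searchable structure)
for vid, v in vectors.items():
    index.setdefault(nearest(centroids, v), []).append(vid)

query = [0.3, 0.4]
bucket = index[nearest(centroids, query)]  # only this bucket is scanned
print(bucket)  # ['a', 'c']
```
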
Question # 20

What does in-context learning in Large Language Models involve?

A. Pretraining the model on a specific domain
B. Training the model using reinforcement learning
C. Conditioning the model with task-specific instructions or demonstrations
D. Adding more layers to the model

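Question # 20's idea in miniature: in-context learning conditions the model with demonstrations inside the prompt at inference time; no weights are updated. A sketch of a few-shot prompt (the review texts are invented examples):

```python
examples = [
    ("The movie was fantastic!", "positive"),
    ("I want my money back.", "negative"),
]
demos = "\n".join(f"Review: {text}\nSentiment: {label}" for text, label in examples)
# The demonstrations condition the model; nothing is trained.
prompt = f"{demos}\nReview: The plot dragged on forever.\nSentiment:"
print(prompt)
```
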
Question # 21

How are fine-tuned customer models stored to enable strong data privacy and security in the OCI Generative AI service?

A. Shared among multiple customers for efficiency
B. Stored in Object Storage encrypted by default
C. Stored in an unencrypted form in Object Storage
D. Stored in Key Management service

Question # 22

What is the primary function of the "temperature" parameter in the OCI Generative AI Generation models?

A. Controls the randomness of the model's output, affecting its creativity
B. Specifies a string that tells the model to stop generating more content
C. Assigns a penalty to tokens that have already appeared in the preceding text
D. Determines the maximum number of tokens the model can generate per response

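A worked illustration for Question # 22: temperature rescales the logits before softmax, so low values sharpen the output distribution (more deterministic) and high values flatten it (more random, often read as more "creative"). The logits are arbitrary example values.

```python
import math

def softmax_with_temperature(logits, temperature):
    scaled = [x / temperature for x in logits]
    m = max(scaled)                          # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [round(e / total, 3) for e in exps]

logits = [2.0, 1.0, 0.1]
print(softmax_with_temperature(logits, 0.2))  # sharp: mass piles onto the top token
print(softmax_with_temperature(logits, 1.0))  # baseline distribution
print(softmax_with_temperature(logits, 2.0))  # flat: sampling becomes more random
```
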
Question # 23

Which technique involves prompting the Large Language Model (LLM) to emit intermediate reasoning steps as part of its response?

A. Step-Back Prompting
B. Chain-of-Thought
C. Least-to-Most Prompting
D. In-Context Learning

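The technique named in Question # 23 can be shown in a single prompt: a Chain-of-Thought cue explicitly asks the model to emit intermediate reasoning steps before the final answer. The word problem below is a stock example, not from any particular source.

```python
prompt = (
    "Q: A cafeteria had 23 apples. It used 20 and bought 6 more. "
    "How many apples are there now?\n"
    "A: Let's think step by step."  # cue that elicits intermediate reasoning steps
)
print(prompt)
```
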
Question # 24

What issue might arise from using small datasets with the Vanilla fine-tuning method in the OCI Generative AI service?

A. Overfitting
B. Underfitting
C. Data Leakage
D. Model Drift

Question # 25

What is the purpose of embeddings in natural language processing?

A. To increase the complexity and size of text data
B. To translate text into a different language
C. To create numerical representations of text that capture the meaning and relationships between words or phrases
D. To compress text data into smaller files for storage

Question # 26

Given the following prompts used with a Large Language Model, classify each as employing the Chain-of-Thought, Least-to-Most, or Step-Back prompting technique:

1. "Calculate the total number of wheels needed for 3 cars. Cars have 4 wheels each. Then, use the total number of wheels to determine how many sets of wheels we can buy with $200 if one set (4 wheels) costs $50."

2. "Solve a complex math problem by first identifying the formula needed, and then solve a simpler version of the problem before tackling the full question."

3. "To understand the impact of greenhouse gases on climate change, let’s start by defining what greenhouse gases are. Next, we’ll explore how they trap heat in the Earth’s atmosphere."

A. 1: Step-Back, 2: Chain-of-Thought, 3: Least-to-Most
B. 1: Least-to-Most, 2: Chain-of-Thought, 3: Step-Back
C. 1: Chain-of-Thought, 2: Step-Back, 3: Least-to-Most
D. 1: Chain-of-Thought, 2: Least-to-Most, 3: Step-Back
