

Question # 4

What is the purpose of the "stop sequence" parameter in the OCI Generative AI Generation models?

A.

It controls the randomness of the model's output, affecting its creativity.

B.

It specifies a string that tells the model to stop generating more content.

C.

It assigns a penalty to frequently occurring tokens to reduce repetitive text.

D.

It determines the maximum number of tokens the model can generate per response.

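For illustration, here is a minimal Python sketch of what a stop sequence does, simulated with a plain string check rather than an actual OCI Generative AI call; the function name and sample text are made up for the example.

```python
def apply_stop_sequence(generated_text: str, stop_sequence: str) -> str:
    """Truncate model output at the first occurrence of the stop sequence."""
    cut = generated_text.find(stop_sequence)
    return generated_text if cut == -1 else generated_text[:cut]

# Hypothetical raw model output that runs past the intended stopping point.
raw = "Sure, here is the summary.\nHuman: ask another question"
print(apply_stop_sequence(raw, "\nHuman:"))  # -> "Sure, here is the summary."
```
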
Question # 5

Which statement describes the difference between "Top k" and "Top p" in selecting the next token in the OCI Generative AI Generation models?

A.

"Top k" selects the next token based on its position in the list of probable tokens, whereas "Top p" selects based on the cumulative probability of the top tokens.

B.

"Top k" considers the sum of probabilities of the top tokens, whereas "Top p" selects from the "Top k" tokens sorted by probability.

C.

"Top k" and "Top p" both select from the same set of tokens but use different methods to prioritize them based on frequency.

D.

Top k and "Top p" are identical in their approach to token selection but differ in their application of penalties to tokens.

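To make the distinction concrete, here is a minimal Python sketch that filters a toy next-token distribution first by top k (a fixed-size cut) and then by top p (a cumulative-probability cut); the tokens and probabilities are invented for the example.

```python
# Toy next-token distribution (token -> probability); the values are made up.
probs = {"the": 0.40, "a": 0.25, "my": 0.15, "his": 0.10, "their": 0.06, "xyzzy": 0.04}

def top_k_candidates(probs, k):
    """Keep only the k most probable tokens."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    return dict(ranked[:k])

def top_p_candidates(probs, p):
    """Keep the smallest set of top tokens whose cumulative probability reaches p."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, total = {}, 0.0
    for token, prob in ranked:
        kept[token] = prob
        total += prob
        if total >= p:
            break
    return kept

print(top_k_candidates(probs, 3))    # fixed count: keeps 'the', 'a', 'my'
print(top_p_candidates(probs, 0.9))  # probability mass: keeps 'the', 'a', 'my', 'his'
```
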
Question # 6

In LangChain, which retriever search type is used to balance between relevancy and diversity?

A.

top k

B.

mmr

C.

similarity_score_threshold

D.

similarity

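For background, the "mmr" search type stands for maximal marginal relevance, which re-ranks candidates by trading query relevance against similarity to results already chosen. Below is a minimal, self-contained Python sketch of that greedy trade-off; the similarity scores and the lambda weight are invented, and this is not LangChain's internal implementation.

```python
def mmr_select(query_sim, pairwise_sim, k, lam=0.5):
    """Greedy MMR: pick k items balancing relevance (query_sim) and diversity.

    query_sim: {doc_id: similarity to the query}
    pairwise_sim: {(doc_a, doc_b): similarity between documents}
    lam: 1.0 = pure relevance, 0.0 = pure diversity.
    """
    selected = []
    candidates = set(query_sim)
    while candidates and len(selected) < k:
        def mmr_score(doc):
            redundancy = max(
                (pairwise_sim.get((doc, s), pairwise_sim.get((s, doc), 0.0)) for s in selected),
                default=0.0,
            )
            return lam * query_sim[doc] - (1 - lam) * redundancy
        best = max(candidates, key=mmr_score)
        selected.append(best)
        candidates.remove(best)
    return selected

# Toy similarities: d1 and d2 are near-duplicates, d3 is less relevant but different.
query_sim = {"d1": 0.9, "d2": 0.88, "d3": 0.6}
pairwise_sim = {("d1", "d2"): 0.95, ("d1", "d3"): 0.2, ("d2", "d3"): 0.25}
print(mmr_select(query_sim, pairwise_sim, k=2))  # -> ['d1', 'd3'], skipping the near-duplicate
```

In LangChain itself, this behavior is typically requested with something like vectorstore.as_retriever(search_type="mmr"), assuming an existing vector store.
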
Question # 7

How are fine-tuned customer models stored to enable strong data privacy and security in the OCI Generative AI service?

A.

Stored in Object Storage encrypted by default

B.

Shared among multiple customers for efficiency

C.

Stored in Key Management service

D.

Stored in an unencrypted form in Object Storage

Question # 8

You create a fine-tuning dedicated AI cluster to customize a foundational model with your custom training data. How many unit hours are required for fine-tuning if the cluster is active for 10 hours?

A.

10 unit hours

B.

30 unit hours

C.

15 unit hours

D.

40 unit hours

Question # 9

What distinguishes the Cohere Embed v3 model from its predecessor in the OCI Generative AI service?

A.

Improved retrievals for Retrieval Augmented Generation (RAG) systems

B.

Capacity to translate text in over 100 languages

C.

Support for tokenizing longer sentences

D.

Emphasis on syntactic clustering of word embeddings

Question # 10

Given the following prompts used with a Large Language Model, classify each as employing the Chain-of-Thought, Least-to-most, or Step-Back prompting technique.

1. Calculate the total number of wheels needed for 3 cars. Cars have 4 wheels each. Then, use the total number of wheels to determine how many sets of wheels we can buy with $200 if one set (4 wheels) costs $50.

2. Solve a complex math problem by first identifying the formula needed, and then solve a simpler version of the problem before tackling the full question.

3. To understand the impact of greenhouse gases on climate change, let's start by defining what greenhouse gases are. Next, we'll explore how they trap heat in the Earth's atmosphere.

A.

1: Step-Back, 2: Chain-of-Thought, 3: Least-to-most

B.

1: Least-to-most, 2: Chain-of-Thought, 3: Step-Back

C.

1: Chain-of-Thought, 2: Step-Back, 3: Least-to-most

D.

1: Chain-of-Thought, 2: Least-to-most, 3: Step-Back

Question # 11

What does a dedicated RDMA cluster network do during model fine-tuning and inference?

A.

It leads to higher latency in model inference.

B.

It enables the deployment of multiple fine-tuned models.

C.

It limits the number of fine-tuned models deployable on the same GPU cluster.

D.

It increases GPU memory requirements for model deployment.

Question # 12

Which is a cost-related benefit of using vector databases with Large Language Models (LLMs)?

A.

They require frequent manual updates, which increase operational costs.

B.

They offer real-time updated knowledge bases and are cheaper than fine-tuned LLMs.

C.

They increase the cost due to the need for real-time updates.

D.

They are more expensive but provide higher quality data.

Question # 13

What is the primary purpose of LangSmith Tracing?

A.

To monitor the performance of language models

B.

To generate test cases for language models

C.

To analyze the reasoning process of language models

D.

To debug issues in language model outputs

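For context, here is a minimal sketch of enabling LangSmith tracing around a toy function using the langsmith package's traceable decorator; the environment variable names follow the LangSmith documentation, the API key is a placeholder, and the traced function merely stands in for a real chain or model call.

```python
import os
from langsmith import traceable  # assumes the langsmith package is installed

# Tracing is typically switched on through environment variables.
os.environ["LANGCHAIN_TRACING_V2"] = "true"
# os.environ["LANGCHAIN_API_KEY"] = "<your LangSmith API key>"  # placeholder

@traceable(name="toy_chain")  # each call is recorded as a run in LangSmith
def toy_chain(question: str) -> str:
    # Placeholder for a real LLM call; the trace captures inputs, outputs, and latency.
    return f"echo: {question}"

print(toy_chain("What does LangSmith tracing record?"))
```
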