Question # 4

A Generative AI Engineer is tasked with improving RAG quality by addressing its inflammatory outputs.

Which action would be most effective in mitigating the problem of offensive text outputs?

A.

Increase the frequency of upstream data updates

B.

Inform the user of the expected RAG behavior

C.

Restrict access to the data sources to a limited number of users

D.

Curate upstream data properly that includes manual review before it is fed into the RAG system

Question # 5

A Generative AI Engineer just deployed an LLM application at a digital marketing company that assists with answering customer service inquiries.

Which metric should they monitor for their customer service LLM application in production?

A.

Number of customer inquiries processed per unit of time

B.

Energy usage per query

C.

Final perplexity scores for the training of the model

D.

HuggingFace Leaderboard values for the base LLM

Question # 6

A Generative AI Engineer is developing an LLM application that users can use to generate personalized birthday poems based on their names.

Which technique would be most effective in safeguarding the application, given the potential for malicious user inputs?

A.

Implement a safety filter that detects any harmful inputs and ask the LLM to respond that it is unable to assist

B.

Reduce the time that the users can interact with the LLM

C.

Ask the LLM to remind the user that the input is malicious but continue the conversation with the user

D.

Increase the amount of compute that powers the LLM to process input faster
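
To make option A concrete, here is a minimal sketch of a pre-generation safety filter, assuming a hypothetical call_llm() client and a naive keyword screen standing in for a real moderation or toxicity model:

BLOCKED_TERMS = {"insult", "attack", "slur"}  # stand-in list; use a moderation model in practice

def call_llm(prompt: str) -> str:
    # Placeholder for the deployed model / serving-endpoint client.
    raise NotImplementedError

def is_harmful(user_input: str) -> bool:
    # Naive keyword screen; a production filter would call a moderation or toxicity model.
    text = user_input.lower()
    return any(term in text for term in BLOCKED_TERMS)

def generate_poem(name: str, details: str) -> str:
    if is_harmful(name) or is_harmful(details):
        # Refuse without forwarding the harmful input to the LLM.
        return "I'm unable to assist with that request."
    prompt = f"Write a short, friendly birthday poem for {name}. Details: {details}"
    return call_llm(prompt)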

Question # 7

A Generative AI Engineer has been asked to build an LLM-based question-answering application. The application should take into account new documents that are frequently published. The engineer wants to build this application with the least development effort and have it operate at the lowest cost possible.

Which combination of chaining components and configuration meets these requirements?

A.

For the application, a prompt, a retriever, and an LLM are required. The retriever output is inserted into the prompt, which is given to the LLM to generate answers.

B.

The LLM needs to be frequently retrained with the new documents in order to provide the most up-to-date answers.

C.

For the question-answering application, prompt engineering and an LLM are required to generate answers.

D.

For the application, a prompt, an agent, and a fine-tuned LLM are required. The agent is used by the LLM to retrieve relevant content that is inserted into the prompt, which is given to the LLM to generate answers.
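
As an illustration of the chain in option A, here is a minimal sketch in plain Python, with retrieve() and llm() as hypothetical placeholders for a vector-search index and a hosted model endpoint. The key point is that the retriever output is inserted into the prompt, so newly published documents only need to be indexed, not used for retraining:

def retrieve(question: str) -> list[str]:
    # Placeholder: query a vector index that is kept in sync with newly
    # published documents, so the LLM itself never needs retraining.
    return ["<retrieved document chunk>"]

def llm(prompt: str) -> str:
    # Placeholder for a call to a hosted LLM endpoint.
    raise NotImplementedError

def answer(question: str) -> str:
    context = "\n".join(retrieve(question))
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )
    return llm(prompt)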

Question # 8

A Generative AI Engineer is creating an LLM-powered application that will need access to up-to-date news articles and stock prices.

The design requires using stock prices stored in Delta tables and finding the latest relevant news articles by searching the internet.

How should the Generative AI Engineer architect their LLM system?

A.

Use an LLM to summarize the latest news articles and look up stock tickers from the summaries to find stock prices.

B.

Query the Delta table for volatile stock prices and use an LLM to generate a search query to investigate potential causes of the stock volatility.

C.

Download and store news articles and stock price information in a vector store. Use a RAG architecture to retrieve and generate at runtime.

D.

Create an agent with tools for SQL querying of Delta tables and web searching, and provide the retrieved values to an LLM to generate the response.
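
A rough sketch of the agent-with-tools pattern from option D, assuming Spark is available. The table name finance.stock_prices, the tool functions, and the llm callable are all illustrative placeholders rather than a specific framework API:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

def query_stock_prices(ticker: str):
    # Tool 1: SQL query against the Delta table holding stock prices
    # (table and column names are illustrative).
    return spark.sql(
        "SELECT price, price_time FROM finance.stock_prices "
        f"WHERE ticker = '{ticker}' ORDER BY price_time DESC LIMIT 1"
    ).collect()

def search_news(query: str) -> list[str]:
    # Tool 2: placeholder for a web-search API returning article snippets.
    raise NotImplementedError

def agent_answer(question: str, ticker: str, llm) -> str:
    # Simplified agent step: call both tools, then let the LLM compose the
    # answer from the retrieved values (a real agent would pick tools itself).
    prices = query_stock_prices(ticker)
    articles = search_news(question)
    prompt = (
        f"Latest price rows: {prices}\n"
        f"Recent articles: {articles}\n"
        f"Question: {question}\n"
        "Answer using only the data above."
    )
    return llm(prompt)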

Question # 9

What is an effective method to preprocess prompts using custom code before sending them to an LLM?

A.

Directly modify the LLM’s internal architecture to include preprocessing steps

B.

It is better not to introduce custom code to preprocess prompts as the LLM has not been trained with examples of the preprocessed prompts

C.

Rather than preprocessing prompts, it’s more effective to postprocess the LLM outputs to align the outputs to desired outcomes

D.

Write an MLflow PyFunc model that has a separate function to process the prompts
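
A minimal sketch of option D: an MLflow PyFunc model whose predict() calls a separate prompt-preprocessing function before invoking the wrapped LLM. The _call_llm() helper is a hypothetical placeholder for the underlying model client:

import mlflow.pyfunc

class PreprocessedPromptModel(mlflow.pyfunc.PythonModel):
    def _preprocess(self, prompt: str) -> str:
        # Custom preprocessing kept in its own function:
        # normalize whitespace and prepend standing instructions.
        return "Answer concisely.\n" + " ".join(prompt.split())

    def _call_llm(self, prompt: str) -> str:
        # Placeholder for the call to the wrapped LLM.
        raise NotImplementedError

    def predict(self, context, model_input):
        # model_input is expected to be a DataFrame with a "prompt" column.
        prompts = [self._preprocess(p) for p in model_input["prompt"]]
        return [self._call_llm(p) for p in prompts]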

Question # 10

A Generative AI Engineer interfaces, via prompt/response, with an LLM that has been trained on customer calls inquiring about product availability. The LLM is designed to output only the term “In Stock” if the product is available or “Out of Stock” if not.

Which prompt will allow the engineer to obtain the correct call classification labels from the LLM?

A.

Respond with “In Stock” if the customer asks for a product.

B.

You will be given a customer call transcript where the customer asks about product availability. The outputs are either “In Stock” or “Out of Stock”. Format the output in JSON, for example: {“call_id”: “123”, “label”: “In Stock”}.

C.

Respond with “Out of Stock” if the customer asks for a product.

D.

You will be given a customer call transcript where the customer inquires about product availability. Respond with “In Stock” if the product is available or “Out of Stock” if not.
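
For illustration, the prompt in option D could be assembled per transcript roughly as follows; build_prompt() and the commented llm() call are hypothetical helpers, not a specific API:

def build_prompt(transcript: str) -> str:
    # Prompt text from option D, with the transcript appended.
    return (
        "You will be given a customer call transcript where the customer "
        "inquires about product availability. Respond with \"In Stock\" if the "
        "product is available or \"Out of Stock\" if not.\n\n"
        f"Transcript:\n{transcript}"
    )

# Usage (llm() is a placeholder for the served model):
# label = llm(build_prompt("Hi, do you still carry the blue backpack?"))
# assert label in {"In Stock", "Out of Stock"}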

Question # 11

A Generative AI Engineer has a provisioned throughput model serving endpoint as part of a RAG application and would like to monitor the serving endpoint’s incoming requests and outgoing responses. The current approach is to include a micro-service in between the endpoint and the user interface to write logs to a remote server.

Which Databricks feature should they use instead to perform the same task?

A.

Vector Search

B.

Lakeview

C.

DBSQL

D.

Inference Tables
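
For context on option D: once inference tables are enabled on the serving endpoint, the logged requests and responses land in a Delta table in Unity Catalog that can be queried directly. A rough sketch, where the table name is illustrative and the column names follow the typical payload schema (verify them in your workspace):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Illustrative Unity Catalog table name; the payload table is created
# automatically once inference tables are enabled on the endpoint.
payload = spark.table("main.rag_app.rag_endpoint_payload")

# Inspect recently logged requests and responses.
payload.select("request", "response", "status_code").limit(5).show(truncate=False)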

Question # 12

A Generative AI Engineer is building a RAG application that will rely on context retrieved from source documents that are currently in PDF format. These PDFs can contain both text and images. They want to develop a solution using the fewest lines of code.

Which Python package should be used to extract the text from the source documents?

A.

flask

B.

beautifulsoup

C.

unstructured

D.

numpy
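
A short example of extracting text with the unstructured package (option C); partition_pdf is part of its public API, though the file name here is illustrative and image/OCR handling depends on the extras installed (e.g., unstructured[pdf]):

from unstructured.partition.pdf import partition_pdf

# Partition the PDF into elements (titles, narrative text, tables, ...).
elements = partition_pdf(filename="source_document.pdf")

# Keep only the extracted text for downstream chunking and embedding.
text = "\n".join(el.text for el in elements if el.text)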

Question # 13

A Generative AI Engineer received the following business requirements for an external chatbot.

The chatbot needs to identify the type of question a user asks and route it to the appropriate model to answer it. For example, one user might ask about upcoming event details, while another might ask about purchasing tickets for a particular event.

What is an ideal workflow for such a chatbot?

A.

The chatbot should only look at previous event information

B.

There should be two different chatbots handling different types of user queries.

C.

The chatbot should be implemented as a multi-step LLM workflow. First, identify the type of question asked, then route the question to the appropriate model. If it’s an upcoming event question, send the query to a text-to-SQL model. If it’s about ticket purchasing, the customer should be redirected to a payment platform.

D.

The chatbot should only process payments
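
A minimal sketch of the multi-step workflow in option C, with classify_intent() (typically itself an LLM call) and the downstream handlers left as hypothetical placeholders:

def classify_intent(question: str) -> str:
    # Step 1: identify the question type. In practice this is usually an LLM
    # (or a lightweight classifier) returning e.g. "event_info" or "ticket_purchase".
    raise NotImplementedError

def answer_event_question(question: str) -> str:
    # Step 2a: send event questions to a text-to-SQL model over the events data.
    raise NotImplementedError

def handle(question: str) -> str:
    intent = classify_intent(question)
    if intent == "event_info":
        return answer_event_question(question)
    if intent == "ticket_purchase":
        # Step 2b: hand off to the payment platform rather than answering in-chat.
        return "Redirecting you to the ticket purchasing/payment platform..."
    return "Sorry, I can only help with event details or ticket purchases."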
