
Question # 4

HOTSPOT

You need to troubleshoot the ad-hoc query issue.

How should you complete the statement? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

Question # 5

What should you do to optimize the query experience for the business users?

A.

Enable V-Order.

B.

Create and update statistics.

C.

Run the VACUUM command.

D.

Introduce primary keys.
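
For orientation, options A and B both have notebook-level equivalents in Fabric Spark. The sketch below is a hedged illustration only, not the case study's answer; the table name sales_orders and its columns are assumed.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Option A-style setting: ask this Spark session to write V-Ordered Parquet
# files (a Fabric write optimization that benefits downstream SQL reads).
spark.conf.set("spark.sql.parquet.vorder.enabled", "true")

# Option B-style commands: compute table- and column-level statistics so the
# engine can build better plans for ad-hoc queries. Names are assumed.
spark.sql("ANALYZE TABLE sales_orders COMPUTE STATISTICS")
spark.sql("ANALYZE TABLE sales_orders COMPUTE STATISTICS FOR COLUMNS order_id, order_date")
```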

Question # 6

You need to resolve the sales data issue. The solution must minimize the amount of data transferred.

What should you do?

A.

Split the dataflow into two dataflows.

B.

Configure scheduled refresh for the dataflow.

C.

Configure incremental refresh for the dataflow. Set Store rows from the past to 1 Month.

D.

Configure incremental refresh for the dataflow. Set Refresh rows from the past to 1 Year.

E.

Configure incremental refresh for the dataflow. Set Refresh rows from the past to 1 Month.

Question # 7

You need to implement the solution for the book reviews.

What should you do?

A.

Create a Dataflow Gen2 dataflow.

B.

Create a shortcut.

C.

Enable external data sharing.

D.

Create a data pipeline.

Question # 8

You need to ensure that usage of the data in the Amazon S3 bucket meets the technical requirements.

What should you do?

A.

Create a workspace identity and enable high concurrency for the notebooks.

B.

Create a shortcut and ensure that caching is disabled for the workspace.

C.

Create a workspace identity and use the identity in a data pipeline.

D.

Create a shortcut and ensure that caching is enabled for the workspace.

Question # 9

You need to ensure that the data analysts can access the gold layer lakehouse.

What should you do?

A.

Add the DataAnalyst group to the Viewer role for WorkspaceA.

B.

Share the lakehouse with the DataAnalysts group and grant the Build reports on the default semantic model permission.

C.

Share the lakehouse with the DataAnalysts group and grant the Read all SQL Endpoint data permission.

D.

Share the lakehouse with the DataAnalysts group and grant the Read all Apache Spark permission.

Question # 10

You need to populate the MAR1 data in the bronze layer.

Which two types of activities should you include in the pipeline? Each correct answer presents part of the solution.

NOTE: Each correct selection is worth one point.

A.

ForEach

B.

Copy data

C.

WebHook

D.

Stored procedure
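
For context on how these two activity types fit together, the sketch below shows the shape of a pipeline definition in which a ForEach activity wraps a Copy data activity. It is written as a Python dict purely for illustration; every name and property is hypothetical rather than taken from the case study, and real pipelines are authored in the Fabric pipeline designer.

```python
# Hedged sketch of a pipeline definition: a ForEach activity iterating over a
# file list, running a Copy data activity per item. All names are hypothetical.
pipeline_definition = {
    "name": "Ingest_MAR1_Bronze",
    "activities": [
        {
            "name": "ForEachSourceFile",
            "type": "ForEach",  # iterates over a collection passed in as a parameter
            "typeProperties": {
                "items": "@pipeline().parameters.sourceFiles",
                "activities": [
                    {
                        "name": "CopyFileToBronze",
                        "type": "Copy",  # Copy data: lands each file unchanged in bronze
                        # source and sink settings omitted for brevity
                    }
                ],
            },
        }
    ],
}
```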

Question # 11

You need to create the product dimension.

How should you complete the Apache Spark SQL code? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.
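
The partially completed Spark SQL for this hotspot is not reproduced above, so no answer can be given here. As a generic, hedged illustration of the pattern a product-dimension build often follows in a lakehouse (every table and column name below is assumed, not the exam's):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Illustrative only: build a product dimension from deduplicated source rows,
# generating a surrogate key. All object names are assumed.
spark.sql("""
    CREATE TABLE IF NOT EXISTS dim_product (
        product_key  BIGINT,
        product_id   STRING,
        product_name STRING,
        category     STRING
    ) USING DELTA
""")

spark.sql("""
    INSERT INTO dim_product
    SELECT row_number() OVER (ORDER BY product_id) AS product_key,
           product_id,
           product_name,
           category
    FROM (SELECT DISTINCT product_id, product_name, category FROM products_raw)
""")
```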

Question # 12

You are implementing the following data entities in a Fabric environment:

Entity1: Available in a lakehouse and contains data that will be used as a core organization entity

Entity2: Available in a semantic model and contains data that meets organizational standards

Entity3: Available in a Microsoft Power BI report and contains data that is ready for sharing and reuse

Entity4: Available in a Power BI dashboard and contains approved data for executive-level decision making

Your company requires that specific governance processes be implemented for the data.

You need to apply endorsement badges to the entities based on each entity’s use case.

Which badge should you apply to each entity? To answer, drag the appropriate badges to the correct entities. Each badge may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.

NOTE: Each correct selection is worth one point.

Question # 13

You have a Fabric workspace that contains a lakehouse named Lakehouse1.

In an external data source, you have data files that are 500 GB each. A new file is added every day.

You need to ingest the data into Lakehouse1 without applying any transformations. The solution must meet the following requirements:

Trigger the process when a new file is added.

Provide the highest throughput.

Which type of item should you use to ingest the data?

A.

Event stream

B.

Dataflow Gen2

C.

Streaming dataset

D.

Data pipeline

Question # 14

You have a Fabric workspace named Workspace1 that contains a notebook named Notebook1.

In Workspace1, you create a new notebook named Notebook2.

You need to ensure that you can attach Notebook2 to the same Apache Spark session as Notebook1.

What should you do?

A.

Enable high concurrency for notebooks.

B.

Enable dynamic allocation for the Spark pool.

C.

Change the runtime version.

D.

Increase the number of executors.

Question # 15

You have a Fabric workspace named Workspace1 that contains a warehouse named Warehouse1.

You plan to deploy Warehouse1 to a new workspace named Workspace2.

As part of the deployment process, you need to verify whether Warehouse1 contains invalid references. The solution must minimize development effort.

What should you use?

A.

a database project

B.

a deployment pipeline

C.

a Python script

D.

a T-SQL script

Full Access
Question # 17

You have a Fabric workspace that contains a warehouse named DW1. DW1 is loaded by using a notebook named Notebook1.

You need to identify which version of Delta was used when Notebook1 was executed.

What should you use?

A.

Real-Time hub

B.

OneLake data hub

C.

the Admin monitoring workspace

D.

Fabric Monitor

E.

the Microsoft Fabric Capacity Metrics app
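
Whichever monitoring surface the answer expects, note that every commit to a Delta table records the engine that wrote it. From a notebook, DESCRIBE HISTORY surfaces this in its engineInfo column (for example, a value like Apache-Spark/3.4.1 Delta-Lake/2.4.0). A minimal sketch, with the table name assumed:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# DESCRIBE HISTORY returns one row per table commit; engineInfo names the
# Spark and Delta Lake versions of the writer. The table name is hypothetical.
history = spark.sql("DESCRIBE HISTORY load_staging_dw1")
history.select("version", "timestamp", "operation", "engineInfo").show(truncate=False)
```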

Question # 18

You have a Fabric workspace that contains a warehouse named Warehouse1.

While monitoring Warehouse1, you discover that query performance has degraded during the last 60 minutes.

You need to isolate all the queries that were run during the last 60 minutes. The results must include the usernames of the users who submitted the queries and the query statements.

What should you use?

A.

the Microsoft Fabric Capacity Metrics app

B.

views from the queryinsights schema

C.

Query activity

D.

the sys.dm_exec_requests dynamic management view
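
For reference, the queryinsights views in option B are plain T-SQL objects on the warehouse's SQL endpoint. A minimal sketch follows, held in a Python string only to keep all examples here in one language; the view and column names follow the documented queryinsights.exec_requests_history view, and the 60-minute window mirrors the scenario:

```python
# T-SQL to run against Warehouse1's SQL endpoint (for example, in the Fabric
# query editor); Python serves only as a container for the statement here.
LAST_HOUR_QUERIES = """
SELECT start_time,
       login_name,            -- who submitted the query
       command,               -- the submitted query statement
       total_elapsed_time_ms
FROM   queryinsights.exec_requests_history
WHERE  start_time >= DATEADD(MINUTE, -60, SYSUTCDATETIME())
ORDER BY start_time DESC;
"""
print(LAST_HOUR_QUERIES)
```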

Question # 19

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.

You have a KQL database that contains two tables named Stream and Reference. Stream contains streaming data in the following format.

Reference contains reference data in the following format.

Both tables contain millions of rows.

You have the following KQL queryset.

You need to reduce how long it takes to run the KQL queryset.

Solution: You add the make_list() function to the output columns.

Does this meet the goal?

A.

Yes

B.

No

Question # 20

You have a Fabric workspace that contains a data pipeline named Pipeline1 as shown in the exhibit.

(Click the Exhibit tab.)

What will occur the next time Pipeline1 runs?

A.

Both activities will run simultaneously.

B.

Both activities will be skipped.

C.

Execute procedure1 will run and Copy_kdi will be skipped.

D.

Copy_kdi will run and Execute procedure1 will be skipped.

E.

Execute procedure1 will run first, and then Copy_kdi will run.

F.

Copy_kdi will run first, and then Execute procedure1 will run.
