Question # 4

Your company is building a near real-time streaming pipeline to process JSON telemetry data from small appliances. You need to process messages arriving at a Pub/Sub topic, capitalize letters in the serial number field, and write results to BigQuery. You want to use a managed service and write a minimal amount of code for underlying transformations. What should you do?

A.

Use a Pub/Sub to BigQuery subscription, write results directly to BigQuery, and schedule a transformation query to run every five minutes.

B.

Use a Pub/Sub to Cloud Storage subscription, write a Cloud Run service that is triggered when objects arrive in the bucket, performs the transformations, and writes the results to BigQuery.

C.

Use the “Pub/Sub to BigQuery” Dataflow template with a UDF, and write the results to BigQuery.

D.

Use a Pub/Sub push subscription, write a Cloud Run service that accepts the messages, performs the transformations, and writes the results to BigQuery.
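
For context on option C: the “Pub/Sub to BigQuery” Dataflow template applies a user-supplied JavaScript UDF to each message before writing to BigQuery. The minimal Apache Beam sketch below illustrates the same transformation in Python rather than the template itself; the topic, table, and field names are hypothetical.

```python
# Illustrative Beam pipeline: read JSON telemetry from Pub/Sub, uppercase the
# serial number field, and append the result to an existing BigQuery table.
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def uppercase_serial(message: bytes) -> dict:
    """Parse one JSON telemetry message and capitalize its serial number."""
    record = json.loads(message.decode("utf-8"))
    record["serial_number"] = record["serial_number"].upper()
    return record


options = PipelineOptions(streaming=True)
with beam.Pipeline(options=options) as pipeline:
    (
        pipeline
        | "ReadFromPubSub" >> beam.io.ReadFromPubSub(
            topic="projects/my-project/topics/appliance-telemetry")
        | "UppercaseSerial" >> beam.Map(uppercase_serial)
        | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
            "my-project:telemetry.appliance_events",
            create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER,
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
        )
    )
```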

Question # 5

You have millions of customer feedback records stored in BigQuery. You want to summarize the data by using the large language model (LLM) Gemini. You need to plan and execute this analysis using the most efficient approach. What should you do?

A.

Query the BigQuery table from within a Python notebook, use the Gemini API to summarize the data within the notebook, and store the summaries in BigQuery.

B.

Use a BigQuery ML model to pre-process the text data, export the results to Cloud Storage, and use the Gemini API to summarize the pre-processed data.

C.

Create a BigQuery Cloud resource connection to a remote model in Vertex AI, and use Gemini to summarize the data.

D.

Export the raw BigQuery data to a CSV file, upload it to Cloud Storage, and use the Gemini API to summarize the data.
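
For context on option C: BigQuery can call Gemini through a remote model that is backed by a Vertex AI endpoint over a BigQuery Cloud resource connection, so the summarization runs where the data already lives. The sketch below is an assumption-labeled illustration; the project, dataset, connection, endpoint, and column names are hypothetical, and the exact CREATE MODEL options can vary by release.

```python
# Hedged sketch: create a remote model backed by Gemini, then summarize
# feedback rows with ML.GENERATE_TEXT without exporting data from BigQuery.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

# Remote model that points at a Gemini endpoint through a resource connection.
client.query(
    """
    CREATE OR REPLACE MODEL `my-project.feedback.gemini_model`
    REMOTE WITH CONNECTION `my-project.us.vertex_ai_connection`
    OPTIONS (ENDPOINT = 'gemini-1.5-flash-001')
    """
).result()

# Summarize feedback records in place.
summaries = client.query(
    """
    SELECT ml_generate_text_llm_result AS summary
    FROM ML.GENERATE_TEXT(
      MODEL `my-project.feedback.gemini_model`,
      (SELECT CONCAT('Summarize this feedback: ', feedback_text) AS prompt
       FROM `my-project.feedback.customer_feedback`),
      STRUCT(TRUE AS flatten_json_output))
    """
).result()
```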

Question # 6

You work for a financial services company that handles highly sensitive data. Due to regulatory requirements, your company is required to have complete and manual control of data encryption. Which type of keys should you recommend to use for data storage?

A.

Use customer-supplied encryption keys (CSEK).

B.

Use a dedicated third-party key management system (KMS) chosen by the company.

C.

Use Google-managed encryption keys (GMEK).

D.

Use customer-managed encryption keys (CMEK).
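
For context on option A: with customer-supplied encryption keys (CSEK), the company generates and holds the AES-256 key itself and must pass it on every request; Google keeps only a hash of the key for validation. A minimal sketch with the Cloud Storage Python client follows, using hypothetical bucket and object names.

```python
# Hedged CSEK sketch: upload an object encrypted with a key the company
# supplies and manages entirely outside of Google Cloud.
import os

from google.cloud import storage

# A 32-byte AES-256 key generated and stored by the company itself.
encryption_key = os.urandom(32)

client = storage.Client(project="my-project")
bucket = client.bucket("sensitive-financial-data")

# The same key must be supplied again for every later read of this object.
blob = bucket.blob("records/2024-q1.csv", encryption_key=encryption_key)
blob.upload_from_filename("2024-q1.csv")
```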

Question # 7

You are a Looker analyst. You need to add a new field to your Looker report that generates SQL that will run against your company's database. You do not have the Develop permission. What should you do?

A.

Create a new field in the LookML layer, refresh your report, and select your new field from the field picker.

B.

Create a calculated field using the Add a field option in Looker Studio, and add it to your report.

C.

Create a table calculation from the field picker in Looker, and add it to your report.

D.

Create a custom field from the field picker in Looker, and add it to your report.

Question # 8

You are migrating data from a legacy on-premises MySQL database to Google Cloud. The database contains various tables with different data types and sizes, including large tables with millions of rows and transactional data. You need to migrate this data while maintaining data integrity and minimizing downtime and cost. What should you do?

A.

Set up a Cloud Composer environment to orchestrate a custom data pipeline. Use a Python script to extract data from the MySQL database and load it to MySQL on Compute Engine.

B.

Export the MySQL database to CSV files, transfer the files to Cloud Storage by using Storage Transfer Service, and load the files into a Cloud SQL for MySQL instance.

C.

Use Database Migration Service to replicate the MySQL database to a Cloud SQL for MySQL instance.

D.

Use Cloud Data Fusion to migrate the MySQL database to MySQL on Compute Engine.

Question # 9

Your team needs to analyze large datasets stored in BigQuery to identify trends in user behavior. The analysis will involve complex statistical calculations, Python packages, and visualizations. You need to recommend a managed collaborative environment to develop and share the analysis. What should you recommend?

A.

Create a Colab Enterprise notebook and connect the notebook to BigQuery. Share the notebook with your team. Analyze the data and generate visualizations in Colab Enterprise.

B.

Create a statistical model by using BigQuery ML. Share the query with your team. Analyze the data and generate visualizations in Looker Studio.

C.

Create a Looker Studio dashboard and connect the dashboard to BigQuery. Share the dashboard with your team. Analyze the data and generate visualizations in Looker Studio.

D.

Connect Google Sheets to BigQuery by using Connected Sheets. Share the Google Sheet with your team. Analyze the data and generate visualizations in Google Sheets.

Question # 10

You are predicting customer churn for a subscription-based service. You have a 50 PB historical customer dataset in BigQuery that includes demographics, subscription information, and engagement metrics. You want to build a churn prediction model with minimal overhead. You want to follow the Google-recommended approach. What should you do?

A.

Export the data from BigQuery to a local machine. Use scikit-learn in a Jupyter notebook to build the churn prediction model.

B.

Use Dataproc to create a Spark cluster. Use the Spark MLlib within the cluster to build the churn prediction model.

C.

Create a Looker dashboard that is connected to BigQuery. Use LookML to predict churn.

D.

Use the BigQuery Python client library in a Jupyter notebook to query and preprocess the data in BigQuery. Use the CREATE MODEL statement in BigQuery ML to train the churn prediction model.

Question # 11

Your organization has a BigQuery dataset that contains sensitive employee information such as salaries and performance reviews. The payroll specialist in the HR department needs to have continuous access to aggregated performance data, but they do not need continuous access to other sensitive data. You need to grant the payroll specialist access to the performance data without granting them access to the entire dataset using the simplest and most secure approach. What should you do?

A.

Use authorized views to share query results with the payroll specialist.

B.

Create row-level and column-level permissions and policies on the table that contains performance data in the dataset. Provide the payroll specialist with the appropriate permission set.

C.

Create a table with the aggregated performance data. Use table-level permissions to grant access to the payroll specialist.

D.

Create a SQL query with the aggregated performance data. Export the results to an Avro file in a Cloud Storage bucket. Share the bucket with the payroll specialist.
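
For context on option A: an authorized view lets the payroll specialist query only the aggregated result while holding no access to the underlying sensitive dataset. A sketch with the BigQuery Python client follows; the project, dataset, table, and email names are hypothetical.

```python
# Hedged sketch: aggregated view in a shared dataset, authorized against the
# private HR dataset, with the specialist granted access to the shared dataset.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

# 1. Create the aggregated view in a dataset the specialist is allowed to read.
client.query(
    """
    CREATE OR REPLACE VIEW `my-project.hr_shared.avg_performance` AS
    SELECT department, AVG(performance_score) AS avg_score
    FROM `my-project.hr_private.employee_records`
    GROUP BY department
    """
).result()

# 2. Authorize the view against the sensitive dataset so it can read the
#    source table on the specialist's behalf.
view = client.get_table("my-project.hr_shared.avg_performance")
source_dataset = client.get_dataset("my-project.hr_private")
entries = list(source_dataset.access_entries)
entries.append(bigquery.AccessEntry(None, "view", view.reference.to_api_repr()))
source_dataset.access_entries = entries
client.update_dataset(source_dataset, ["access_entries"])

# 3. Grant the payroll specialist read access to the shared dataset only.
shared_dataset = client.get_dataset("my-project.hr_shared")
shared_entries = list(shared_dataset.access_entries)
shared_entries.append(
    bigquery.AccessEntry("READER", "userByEmail", "payroll.specialist@example.com")
)
shared_dataset.access_entries = shared_entries
client.update_dataset(shared_dataset, ["access_entries"])
```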

Question # 12

Your retail company wants to predict customer churn using historical purchase data stored in BigQuery. The dataset includes customer demographics, purchase history, and a label indicating whether the customer churned or not. You want to build a machine learning model to identify customers at risk of churning. You need to create and train a logistic regression model for predicting customer churn, using the customer_data table with the churned column as the target label. Which BigQuery ML query should you use?

(The four answer options are presented as SQL query images in the original question; the image content is not reproduced in this text.)

A.

Option A

B.

Option B

C.

Option C

D.

Option D
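
Because the answer choices above are SQL query images that did not survive extraction, the sketch below shows the general shape of a BigQuery ML logistic regression trained on the customer_data table with churned as the label, submitted through the Python client. Project and dataset names are hypothetical; this is an illustration, not a reproduction of any of the original options.

```python
# Hedged sketch: train a logistic regression churn model directly in
# BigQuery ML; all columns other than the label are used as features.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

client.query(
    """
    CREATE OR REPLACE MODEL `my-project.retail.churn_model`
    OPTIONS (
      model_type = 'LOGISTIC_REG',
      input_label_cols = ['churned']
    ) AS
    SELECT *
    FROM `my-project.retail.customer_data`
    """
).result()
```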

Question # 13

Your organization uses Dataflow pipelines to process real-time financial transactions. You discover that one of your Dataflow jobs has failed. You need to troubleshoot the issue as quickly as possible. What should you do?

A.

Set up a Cloud Monitoring dashboard to track key Dataflow metrics, such as data throughput, error rates, and resource utilization.

B.

Create a custom script to periodically poll the Dataflow API for job status updates, and send email alerts if any errors are identified.

C.

Navigate to the Dataflow Jobs page in the Google Cloud console. Use the job logs and worker logs to identify the error.

D.

Use the gcloud CLI tool to retrieve job metrics and logs, and analyze them for errors and performance bottlenecks.

Question # 14

You work for a home insurance company. You are frequently asked to create and save risk reports with charts for specific areas using a publicly available storm event dataset. You want to be able to quickly create and re-run risk reports when new data becomes available. What should you do?

A.

Export the storm event dataset as a CSV file. Import the file to Google Sheets, and use cell data in the worksheets to create charts.

B.

Copy the storm event dataset into your BigQuery project. Use BigQuery Studio to query and visualize the data in Looker Studio.

C.

Reference and query the storm event dataset using SQL in BigQuery Studio. Export the results to Google Sheets, and use cell data in the worksheets to create charts.

D.

Reference and query the storm event dataset using SQL in a Colab Enterprise notebook. Display the table results and document with Markdown, and use Matplotlib to create charts.
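
For context on option D: a Colab Enterprise notebook can query the public storm event dataset with SQL and render charts with Matplotlib, which makes the report easy to re-run whenever new data lands. The dataset table and column names below are illustrative assumptions.

```python
# Hedged notebook sketch: query the public storm data and chart the top states.
import matplotlib.pyplot as plt
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

df = client.query(
    """
    SELECT state, COUNT(*) AS event_count
    FROM `bigquery-public-data.noaa_historic_severe_storms.storms_2023`
    WHERE event_type = 'tornado'
    GROUP BY state
    ORDER BY event_count DESC
    LIMIT 10
    """
).to_dataframe()

df.plot.bar(x="state", y="event_count", legend=False)
plt.title("Tornado events by state, 2023")
plt.ylabel("Events")
plt.tight_layout()
plt.show()
```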

Question # 15

You manage a web application that stores data in a Cloud SQL database. You need to improve the read performance of the application by offloading read traffic from the primary database instance. You want to implement a solution that minimizes effort and cost. What should you do?

A.

Use Cloud CDN to cache frequently accessed data.

B.

Store frequently accessed data in a Memorystore instance.

C.

Migrate the database to a larger Cloud SQL instance.

D.

Enable automatic backups, and create a read replica of the Cloud SQL instance.

Question # 16

You are using your own data to demonstrate the capabilities of BigQuery to your organization’s leadership team. You need to perform a one-time load of the files stored on your local machine into BigQuery using as little effort as possible. What should you do?

A.

Write and execute a Python script using the BigQuery Storage Write API library.

B.

Create a Dataproc cluster, copy the files to Cloud Storage, and write an Apache Spark job using the spark-bigquery-connector.

C.

Execute the bq load command on your local machine.

D.

Create a Dataflow job using the Apache Beam FileIO and BigQueryIO connectors with a local runner.
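
For context: loading a local file into BigQuery is a one-call operation. The sketch below uses the BigQuery Python client to do programmatically roughly what a single `bq load --autodetect` invocation (option C) does from the command line. The file, dataset, and table names are hypothetical.

```python
# Hedged sketch: one-time load of a local CSV file into a BigQuery table.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,
    autodetect=True,  # infer the schema from the file
)

with open("local_data.csv", "rb") as source_file:
    load_job = client.load_table_from_file(
        source_file,
        "my-project.demo_dataset.demo_table",
        job_config=job_config,
    )

load_job.result()  # wait for the load to finish
```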

Question # 17

Your organization uses a BigQuery table that is partitioned by ingestion time. You need to remove data that is older than one year to reduce your organization’s storage costs. You want to use the most efficient approach while minimizing cost. What should you do?

A.

Create a scheduled query that periodically runs an UPDATE statement in SQL that sets the “deleted” column to “yes” for data that is more than one year old. Create a view that filters out rows that have been marked deleted.

B.

Create a view that filters out rows that are older than one year.

C.

Require users to specify a partition filter using the ALTER TABLE statement in SQL.

D.

Set the table partition expiration period to one year using the ALTER TABLE statement in SQL.
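
For context on option D: setting a partition expiration makes BigQuery automatically drop ingestion-time partitions once they are older than the configured age, with no scheduled queries to maintain. A minimal sketch follows; the table name is hypothetical, and the same statement can be run directly in the console.

```python
# Hedged sketch: expire ingestion-time partitions after one year.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

client.query(
    """
    ALTER TABLE `my-project.telemetry.events`
    SET OPTIONS (partition_expiration_days = 365)
    """
).result()
```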

Question # 18

You created a customer support application that sends several forms of data to Google Cloud. Your application is sending:

1. Audio files from phone interactions with support agents that will be accessed during trainings.

2. CSV files of users’ personally identifiable information (PII) that will be analyzed with SQL.

3. A large volume of small document files that will power other applications.

You need to select the appropriate tool for each data type given the required use case, while following Google-recommended practices. Which should you choose?

A.

1. Cloud Storage

2. Cloud SQL for PostgreSQL

3. Bigtable

B.

1. Filestore

2. Cloud SQL for PostgreSQL

3. Datastore

C.

1. Cloud Storage

2. BigQuery

3. Firestore

D.

1. Filestore

2. Bigtable

3. BigQuery

Question # 19

Your organization has several datasets in their data warehouse in BigQuery. Several analyst teams in different departments use the datasets to run queries. Your organization is concerned about the variability of their monthly BigQuery costs. You need to identify a solution that creates a fixed budget for costs associated with the queries run by each department. What should you do?

A.

Create a custom quota for each analyst in BigQuery.

B.

Create a single reservation by using BigQuery editions. Assign all analysts to the reservation.

C.

Assign each analyst to a separate project associated with their department. Create a single reservation by using BigQuery editions. Assign all projects to the reservation.

D.

Assign each analyst to a separate project associated with their department. Create a single reservation for each department by using BigQuery editions. Create assignments for each project in the appropriate reservation.

Question # 20

Your organization’s business analysts require near real-time access to streaming data. However, they are reporting that their dashboard queries are loading slowly. After investigating BigQuery query performance, you discover the slow dashboard queries perform several joins and aggregations.

You need to improve the dashboard loading time and ensure that the dashboard data is as up-to-date as possible. What should you do?

A.

Disable BigQuery query result caching.

B.

Modify the schema to use parameterized data types.

C.

Create a scheduled query to calculate and store intermediate results.

D.

Create materialized views.
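
For context on option D: a materialized view precomputes and stores an aggregation, and BigQuery refreshes it incrementally as new data arrives, so dashboard queries read the stored result instead of recomputing it on every load. A minimal sketch follows; the project, dataset, table, and column names are hypothetical.

```python
# Hedged sketch: materialized view that pre-aggregates streaming events by
# type and day for fast dashboard reads.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

client.query(
    """
    CREATE MATERIALIZED VIEW `my-project.analytics.daily_event_summary_mv` AS
    SELECT event_type, DATE(event_time) AS event_date, COUNT(*) AS event_count
    FROM `my-project.analytics.events`
    GROUP BY event_type, DATE(event_time)
    """
).result()
```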

Question # 21

You recently inherited a task for managing Dataflow streaming pipelines in your organization and noticed that proper access had not been provisioned to you. You need to request a Google-provided IAM role so you can restart the pipelines. You need to follow the principle of least privilege. What should you do?

A.

Request the Dataflow Developer role.

B.

Request the Dataflow Viewer role.

C.

Request the Dataflow Worker role.

D.

Request the Dataflow Admin role.
