

Google Professional Data Engineer Exam

Last Update: 22 hours ago | Total Questions: 383

The Google Professional Data Engineer Exam content is fully updated, with all current exam questions added 22 hours ago. Including Professional-Data-Engineer practice exam questions in your study plan goes far beyond basic test preparation.

You'll find that our Professional-Data-Engineer exam questions frequently feature detailed scenarios and practical problem-solving exercises that directly mirror industry challenges. Working through these Professional-Data-Engineer sample sets teaches you to manage your time and pace yourself, so you can finish any Google Professional Data Engineer Exam practice test comfortably within the allotted time.

Question # 4

You need to migrate a Redis database from an on-premises data center to a Memorystore for Redis instance. You want to follow Google-recommended practices and perform the migration with minimal cost, time, and effort. What should you do?

A.

Make a secondary instance of the Redis database on a Compute Engine instance, and then perform a live cutover.

B.

Write a shell script to migrate the Redis data, and create a new Memorystore for Redis instance.

C.

Create a Dataflow job to read the Redis database from the on-premises data center, and write the data to a Memorystore for Redis instance.

D.

Make an RDB backup of the Redis database, use the gsutil utility to copy the RDB file into a Cloud Storage bucket, and then import the RDB file into the Memorystore for Redis instance.
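For reference, the RDB-based path described in option D can be scripted in a few steps. A minimal sketch, assuming hypothetical bucket, instance, and region names, and that the on-premises host has redis-cli, gsutil, and gcloud installed and authenticated:

```python
# Sketch of an RDB-based migration (option D). All names are placeholders.
import subprocess

BUCKET = "gs://example-redis-migration"   # hypothetical bucket
INSTANCE = "example-memorystore"          # hypothetical Memorystore instance
REGION = "us-central1"                    # hypothetical region

# 1. Take an RDB snapshot on the on-premises Redis server.
subprocess.run(["redis-cli", "BGSAVE"], check=True)

# 2. Stage the dump file in Cloud Storage.
subprocess.run(
    ["gsutil", "cp", "/var/lib/redis/dump.rdb", f"{BUCKET}/dump.rdb"],
    check=True,
)

# 3. Import the RDB file into the Memorystore for Redis instance.
subprocess.run(
    ["gcloud", "redis", "instances", "import", f"{BUCKET}/dump.rdb",
     INSTANCE, f"--region={REGION}"],
    check=True,
)
```

In practice you would wait for the BGSAVE snapshot to finish (for example, by polling LASTSAVE) before copying the file.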

Question # 5

You are building an ELT solution in BigQuery by using Dataform. You need to perform uniqueness and null value checks on your final tables. What should you do to efficiently integrate these checks into your pipeline?

A.

Build Dataform assertions into your code.

B.

Write a Spark-based stored procedure.

C.

Build BigQuery user-defined functions (UDFs).

D.

Create Dataplex data quality tasks.
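For context, Dataform assertions (option A) are declared directly in a table's SQLX config block, so the uniqueness and null checks run as part of the pipeline itself. A minimal sketch with hypothetical table and column names:

```sqlx
config {
  type: "table",
  assertions: {
    uniqueKey: ["order_id"],              // fails if order_id values repeat
    nonNull: ["order_id", "customer_id"]  // fails on NULLs in these columns
  }
}

SELECT order_id, customer_id, order_total
FROM ${ref("stg_orders")}
```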

Question # 6

You are migrating a large number of files from a public HTTPS endpoint to Cloud Storage. The files are protected from unauthorized access using signed URLs. You created a TSV file that contains the list of object URLs and started a transfer job by using Storage Transfer Service. You notice that the job ran for a long time and eventually failed. Checking the logs of the transfer job reveals that the job was running fine until one point, and then it failed due to HTTP 403 errors on the remaining files. You verified that there were no changes to the source system. You need to fix the problem to resume the migration process. What should you do?

A.

Set up Cloud Storage FUSE, and mount the Cloud Storage bucket on a Compute Engine instance. Remove the completed files from the TSV file. Use a shell script to iterate through the TSV file and download the remaining URLs to the FUSE mount point.

B.

Update the file checksums in the TSV file from MD5 to SHA256. Remove the completed files from the TSV file and rerun the Storage Transfer Service job.

C.

Renew the TLS certificate of the HTTPS endpoint. Remove the completed files from the TSV file and rerun the Storage Transfer Service job.

D.

Create a new TSV file for the remaining files by generating signed URLs with a longer validity period. Split the TSV file into multiple smaller files and submit them as separate Storage Transfer Service jobs in parallel.
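In this scenario, signed URLs expiring mid-transfer is the classic cause of 403 errors that begin partway through an otherwise healthy job, which is what option D addresses. A rough sketch of rebuilding the Storage Transfer Service URL list, where generate_signed_url stands in for whatever signing mechanism the source system uses and remaining_files is a placeholder:

```python
# Sketch for option D: rebuild the URL-list TSV for the files that have not
# transferred yet, using fresh signed URLs with a longer validity period.
# generate_signed_url() and remaining_files are hypothetical placeholders.
import csv

VALIDITY_SECONDS = 7 * 24 * 3600  # assume a week covers the remaining transfer

def generate_signed_url(path: str, expires_in: int) -> str:
    """Stand-in for the source system's URL-signing mechanism."""
    raise NotImplementedError

remaining_files = ["data/part-0042.csv", "data/part-0043.csv"]  # placeholder

with open("remaining-urls.tsv", "w", newline="") as f:
    # Storage Transfer Service URL lists must start with this header line.
    f.write("TsvHttpData-1.0\n")
    writer = csv.writer(f, delimiter="\t")
    for path in remaining_files:
        writer.writerow([generate_signed_url(path, VALIDITY_SECONDS)])
```

Splitting the list into several smaller TSVs and submitting them as parallel jobs, as the option suggests, also limits how much work is lost if a URL expires again.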

Question # 7

You are using Cloud Bigtable to persist and serve stock market data for each of the major indices. To serve the trading application, you need to access only the most recent stock prices that are streaming in. How should you design your row key and tables to ensure that you can access the data with the simplest query?

A.

Create one unique table for all of the indices, and then use the index and timestamp as the row key design.

B.

Create one unique table for all of the indices, and then use a reverse timestamp as the row key design.

C.

For each index, have a separate table and use a timestamp as the row key design.

D.

For each index, have a separate table and use a reverse timestamp as the row key design.
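The reverse-timestamp pattern referenced in options B and D works because Bigtable sorts rows lexicographically by key: subtracting the timestamp from a large constant makes the newest sample sort first, so the latest price is the first row of a scan. A small illustration with a hypothetical key layout:

```python
# Illustration of a reverse-timestamp row key for Bigtable.
# The index#reversed_ts layout is hypothetical.
import time

MAX_MILLIS = 10**13 - 1  # constant larger than any epoch-millis we will see

def row_key(index: str, ts_millis: int) -> bytes:
    reversed_ts = MAX_MILLIS - ts_millis
    # Zero-pad so lexicographic order matches numeric order.
    return f"{index}#{reversed_ts:013d}".encode()

now = int(time.time() * 1000)
print(row_key("SP500", now))      # newest price sorts first
print(row_key("SP500", now - 1))  # one millisecond older -> sorts later
```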

Question # 8

You used Cloud Dataprep to create a recipe on a sample of data in a BigQuery table. You want to reuse this recipe on a daily upload of data with the same schema, after the load job with variable execution time completes. What should you do?

A.

Create a cron schedule in Cloud Dataprep.

B.

Create an App Engine cron job to schedule the execution of the Cloud Dataprep job.

C.

Export the recipe as a Cloud Dataprep template, and create a job in Cloud Scheduler.

D.

Export the Cloud Dataprep job as a Cloud Dataflow template, and incorporate it into a Cloud Composer job.
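Option D's approach, chaining the exported Dataflow template after the load step in Cloud Composer, might look roughly like the following Airflow DAG. The template path, project, and load task are hypothetical placeholders:

```python
# Hypothetical Composer (Airflow) DAG for option D: run the Dataprep recipe,
# exported as a Dataflow template, only after the variable-length load job
# completes. All names and paths are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.empty import EmptyOperator
from airflow.providers.google.cloud.operators.dataflow import (
    DataflowTemplatedJobStartOperator,
)

with DAG(
    dag_id="daily_dataprep_recipe",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # Placeholder for the daily load job with variable execution time
    # (a BigQuery load task in a real pipeline).
    load_done = EmptyOperator(task_id="wait_for_load")

    run_recipe = DataflowTemplatedJobStartOperator(
        task_id="run_exported_dataprep_template",
        template="gs://example-bucket/templates/dataprep_recipe",  # placeholder
        project_id="example-project",                              # placeholder
        location="us-central1",
    )

    load_done >> run_recipe
```

Because the template runs only when its upstream task succeeds, the variable execution time of the load job is handled by the dependency itself rather than by a fixed clock schedule.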

Question # 9

You are selecting services to write and transform JSON messages from Cloud Pub/Sub to BigQuery for a data pipeline on Google Cloud. You want to minimize service costs. You also want to monitor and accommodate input data volume that will vary in size with minimal manual intervention. What should you do?

A.

Use Cloud Dataproc to run your transformations. Monitor CPU utilization for the cluster. Resize the number of worker nodes in your cluster via the command line.

B.

Use Cloud Dataproc to run your transformations. Use the diagnose command to generate an operational output archive. Locate the bottleneck and adjust cluster resources.

C.

Use Cloud Dataflow to run your transformations. Monitor the job system lag with Stackdriver. Use the default autoscaling setting for worker instances.

D.

Use Cloud Dataflow to run your transformations. Monitor the total execution time for a sampling of jobs. Configure the job to use non-default Compute Engine machine types when needed.
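For the Dataflow route in option C, a streaming pipeline with default autoscaling needs no special tuning flags; Dataflow scales workers with input volume on its own. A minimal Apache Beam sketch with placeholder project, topic, table, and schema names:

```python
# Minimal Beam sketch for option C: stream JSON from Pub/Sub into BigQuery
# and let Dataflow's default autoscaling track input volume.
# All names and the schema are hypothetical placeholders.
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(
    streaming=True,
    runner="DataflowRunner",
    project="example-project",                # placeholder
    region="us-central1",
    temp_location="gs://example-bucket/tmp",  # placeholder
)

with beam.Pipeline(options=options) as p:
    (
        p
        | "ReadMessages" >> beam.io.ReadFromPubSub(
            topic="projects/example-project/topics/example-topic")
        | "ParseJson" >> beam.Map(lambda msg: json.loads(msg.decode("utf-8")))
        | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
            "example-project:example_dataset.events",
            schema="user:STRING,event:STRING,ts:TIMESTAMP",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
        )
    )
```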

Question # 10

You are developing an application on Google Cloud that will automatically generate subject labels for users’ blog posts. You are under competitive pressure to add this feature quickly, and you have no additional developer resources. No one on your team has experience with machine learning. What should you do?

A.

Call the Cloud Natural Language API from your application. Process the generated Entity Analysis as labels.

B.

Call the Cloud Natural Language API from your application. Process the generated Sentiment Analysis as labels.

C.

Build and train a text classification model using TensorFlow. Deploy the model using Cloud Machine Learning Engine. Call the model from your application and process the results as labels.

D.

Build and train a text classification model using TensorFlow. Deploy the model using a Kubernetes Engine cluster. Call the model from your application and process the results as labels.
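For the pretrained-API route in options A and B, calling the Natural Language API takes only a few lines and requires no machine learning expertise. A sketch using the google-cloud-language client, where the post text and the salience cutoff are hypothetical placeholders:

```python
# Sketch of entity analysis with the Cloud Natural Language API (option A):
# the entities the API returns can be used directly as subject labels.
# post_text and the salience threshold are placeholders.
from google.cloud import language_v1

client = language_v1.LanguageServiceClient()

post_text = "Example blog post about training for a marathon in Boston."
document = language_v1.Document(
    content=post_text,
    type_=language_v1.Document.Type.PLAIN_TEXT,
)

response = client.analyze_entities(request={"document": document})

# Keep the more prominent entities as subject labels.
labels = [e.name for e in response.entities if e.salience >= 0.05]
print(labels)
```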
