
Question # 4

Which cloud platforms support calling an external function?

A.

GCP

B.

AWS & AZURE

C.

AWS only

D.

AWS, GCP, AZURE

Question # 5

To help manage stage storage costs, a Data Engineer recommended monitoring stage files and removing them from the stages once the data has been loaded and the files are no longer needed. Which option can he choose to remove these files, either during data loading or afterwards?

A.

He can choose to remove stage files during data loading (using the COPY INTO <table> command).

B.

Files no longer needed can be removed using the PURGE=TRUE command.

C.

Files no longer needed can be removed using the REMOVE command.

D.

A script can be used during and after data loading with the DELETE command.

· During the load, specify the PURGE copy option in the COPY INTO <table> command.

· After the load completes, use the REMOVE command to remove the files in the stage.

Removing files ensures they aren’t inadvertently loaded again. It also improves load performance, because it reduces the number of files that COPY commands must scan to verify whether existing files in a stage were loaded already.
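
For illustration, a minimal sketch of both removal approaches, assuming a hypothetical table MYTABLE and internal stage MY_STAGE:

-- Remove successfully loaded files during the load itself (PURGE copy option)
COPY INTO mytable
  FROM @my_stage
  FILE_FORMAT = (TYPE = CSV)
  PURGE = TRUE;

-- Remove remaining files explicitly after the load completes
REMOVE @my_stage PATTERN = '.*[.]csv';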

Question # 6

Which one is not a core benefit of micro-partitioning?

A.

Snowflake micro-partitions are derived automatically; they do not need to be explicitly defined up front or maintained by users.

B.

Enables extremely efficient DML and fine-grained pruning for faster queries.

C.

Micro-partitions can overlap in their range of values, which helps with data skewing.

D.

Columns are stored independently within micro-partitions, often referred to as columnar storage.

E.

Columns are also compressed individually within micro-partitions.

Question # 7

Charles, a Lead Data Engineer with the ACCOUNTADMIN role, wants to configure Time Travel for one of the schema's objects. He set up the MIN_DATA_RETENTION_TIME_IN_DAYS parameter with value 79 at the account level, but he figured out that DATA_RETENTION_TIME_IN_DAYS is already set with value 81 at the account level. What would be the effective minimum data retention period for an object?

A.

90

B.

81

C.

79

D.

There is no such MIN_DATA_RETENTION_TIME_IN_DAYS parameter

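For context, a minimal sketch of how the two account-level parameters from the question are set (values taken from the question):

ALTER ACCOUNT SET MIN_DATA_RETENTION_TIME_IN_DAYS = 79;
ALTER ACCOUNT SET DATA_RETENTION_TIME_IN_DAYS = 81;
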
Question # 8

Mark the incorrect statement when a Data Engineer implements automated continuous data loading using cloud messaging.

A.

Automated Snowpipe uses event notifications to determine when new files arrive in monitored cloud storage and are ready to load.

B.

When a pipe is paused, event messages received for the pipe enter a limited retention period. The period is 14 days by default. If a pipe is paused for longer than 14 days, it is considered stale.

C.

Notifications identify the cloud storage event and include a list of the file names. They do not include the actual data in the files.

D.

Triggering automated Snowpipe data loads using S3 event messages is supported by Snowflake accounts hosted on cloud platforms like AWS, GCP, or Azure.

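For context, a minimal sketch of a pipe that relies on cloud event notifications, assuming a hypothetical stage MY_S3_STAGE and target table MYTABLE:

CREATE PIPE my_auto_pipe
  AUTO_INGEST = TRUE
  AS
  COPY INTO mytable
    FROM @my_s3_stage
    FILE_FORMAT = (TYPE = JSON);
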
Question # 9

Bob, a Lead Data Engineer, wants to get the function definition and queried the statement below to check whether this function is secure enough to use in his script or not.

select is_secure from information_schema.functions where function_name = 'JOHNFUNCTION';

From the query output he is sure that the function is a secure UDF. What are the ways provided by Snowflake to get the function definition of a secure UDF?

A.

He can get the secure UDF definition using the GET_DDL utility function.

B.

The UDF definition, or text, is visible to users via Query Profile (in the web interface).

C.

The SHOW FUNCTIONS command

D.

Declaring a UDF as “secure” hides the definition from Bob, and all the required definition commands will throw an error.

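For reference, a minimal sketch of the two lookup approaches named in the options, assuming JOHNFUNCTION takes no arguments:

-- Retrieve the function DDL (the argument list must be included in the name)
SELECT GET_DDL('FUNCTION', 'JOHNFUNCTION()');

-- List the function and its properties
SHOW FUNCTIONS LIKE 'JOHNFUNCTION';
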
Question # 10

The Snowflake web interface can be used to create users with no passwords or to remove passwords from existing users?

A.

TRUE

B.

FALSE

Question # 11

Jackie, a Data Engineer, advised her data team members about one of the roles, highlighting the following points:

1. Avoid Using the Role for Automated Scripts

2. Avoid Using the Role to Create Objects

Which system-defined or custom role is she mentioning?

A.

SYSADMIN

B.

SECURITYADMIN

C.

CUSTOM Role

D.

USERADMIN

E.

ACCOUNTADMIN

Question # 12

Regular views do not cache data, and therefore cannot improve performance by caching?

A.

TRUE

B.

FALSE

Question # 13

To view/monitor the clustering metadata for a table, Snowflake provides which of the following system functions?

A.

SYSTEM$CLUSTERING_DEPTH_KEY

B.

SYSTEM$CLUSTERING_KEY_INFORMATION (including clustering depth)

C.

SYSTEM$CLUSTERING_DEPTH

D.

SYSTEM$CLUSTERING_INFORMATION (including clustering depth)

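For illustration, a minimal sketch of querying clustering metadata, assuming a hypothetical table MYTABLE with a candidate clustering column COL1:

-- Clustering information for the table, including the clustering depth
SELECT SYSTEM$CLUSTERING_INFORMATION('mytable', '(col1)');

-- Average clustering depth for the specified column
SELECT SYSTEM$CLUSTERING_DEPTH('mytable', '(col1)');
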
Question # 14

The Snowpipe API provides REST endpoints for fetching load reports. One of the endpoints, named insertReport, retrieves a report of files submitted via the insertFiles endpoint whose contents were recently ingested into a table. A success response (200) contains information about files that have recently been added to the table. The response looks like the following:

{
  "pipe": "SNOWTESTDB.SFTESTSCHEMA.SFpipe",
  "completeResult": true,
  "nextBeginMark": "1_16",
  "files": [
    {
      "path": "data4859992083898.csv",
      "stageLocation": "s3://mybucket/",
      "fileSize": 89,
      "timeReceived": "2022-01-31T04:47:41.453Z",
      "lastInsertTime": "2022-01-31T04:48:28.575Z",
      "rowsInserted": 1,
      "rowsParsed": 1,
      "errorsSeen": 0,
      "errorLimit": 1,
      "complete": true,
      "status": "????"
    }
  ]
}

Which one is the correct value of the status string in the response body?

A.

LOADED

B.

LOADED_SUCCESS

C.

LOAD_SUCCESS

D.

SUCCESS

Question # 15

A Data Engineer is looking for a quick tool for understanding the mechanics of queries and needs to know more about the performance or behaviour of a particular query.

Which Snowflake feature can help him spot typical mistakes in SQL query expressions and identify potential performance bottlenecks and improvement opportunities?

A.

Query Optimizer

B.

Performance Metadata table

C.

Query Profile

D.

Query Designer

Question # 16

Snowflake does not treat the inner transaction as nested; instead, the inner transaction is a separate transaction. What is the term used for these transactions?

A.

Scoped transactions

B.

Inner Transaction

C.

Nested Scope Transaction

D.

Atomic Transaction

E.

Enclosed Transaction

Question # 17

Let us say you have a list of 50 source files that need to be loaded into a Snowflake internal stage. All these source system files are already Brotli-compressed. Which statement is correct with respect to compression of staged files?

A.

Even though the source files are already compressed, Snowflake applies default gzip2 compression to optimize the storage cost.

B.

Snowflake automatically detects Brotli compression and will skip further compression of all 50 files.

C.

Auto-detection is not yet supported for Brotli-compressed files; when staging or loading Brotli-compressed files, you must explicitly specify the compression method that was used.

D.

When staging 50 compressed files in a Snowflake stage, the files are automatically compressed using gzip.

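For illustration, a minimal sketch of explicitly declaring Brotli compression when staging and loading, assuming a hypothetical internal stage MY_STAGE and table MYTABLE:

-- Stage the already Brotli-compressed files, naming the compression explicitly
PUT file:///tmp/data/*.br @my_stage SOURCE_COMPRESSION = BROTLI;

-- Load them, again naming the compression in the file format
COPY INTO mytable
  FROM @my_stage
  FILE_FORMAT = (TYPE = CSV COMPRESSION = BROTLI);
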
Question # 18

Which of the below concepts/functions helps while implementing advanced Column-level Security?

A.

CURRENT_ROLE

B.

INVOKER_ROLE

C.

Role Hierarchy

D.

CURRENT_CLIENT

Question # 19

A SQL UDF evaluates an arbitrary SQL expression and returns the result(s) of the expression. Which value type can it return?

A.

Single Value

B.

A Set of Rows

C.

Scalar or tabular, depending on the input SQL expression

D.

Regex

Question # 20

Which privilege is required on an object (i.e. a user or role) so that the USERADMIN role can modify the object's properties?

A.

OPERATE

B.

MANAGE GRANTS

C.

OWNERSHIP

D.

MODIFY

Question # 21

Which of the following statements is/are incorrect regarding Fail-safe data recovery?

A.

Data stored in temporary tables is not recoverable after the table is dropped as they do not have fail-safe.

B.

Historical data in transient tables can be recovered by Snowflake due to operational failure after the Time Travel retention period ends, using Fail-safe.

C.

Long-lived tables, such as fact tables, should always be defined as permanent to ensure they are fully protected by Fail-safe.

D.

Short-lived tables (i.e. <1 day), such as ETL work tables, can be defined as transient to eliminate Fail-safe costs.

E.

If downtime and the time required to reload lost data are factors, permanent tables, even with their added Fail-safe costs, may offer a better overall solution than transient tables.

Question # 22

Robert, a Data Engineer, found that a pipe became stale because it was paused for longer than the limited retention period for event messages received for the pipe (14 days by default), and the previous pipe owner also transferred ownership of this pipe to Robert's role while the pipe was paused. How can Robert resume this stale pipe in this case?

A.

The pipe needs to be recreated in this scenario, as it is already past the 14-day period and stale.

B.

He can apply the system function SYSTEM$PIPE_STALE_RESUME with an ALTER PIPE statement.

C.

Robert can use the SYSTEM$PIPE_FORCE_RESUME function to resume this stale pipe.

D.

select system$pipe_force_resume('mydb.myschema.stalepipe', 'staleness_check_override, ownership_transfer_check_override');

E.

ALTER PIPES ... RESUME statement will resume the pipe.

Question # 23

Which ones are the false statements about Materialized Views?

A.

Snowflake does not allow standard DML (e.g. INSERT, UPDATE, DELETE) on materialized views.

B.

Snowflake does not allow users to truncate materialized views.

C.

Materialized views are first-class account objects.

D.

A materialized view can also be used as the data source for a subquery.

E.

Materialized views can be secure views.

F.

Clustering a subset of the materialized views on a table tends to be more cost-effective than clustering the table itself.

Question # 24

Which property can be used with ALTER USER command to temporarily disable MFA for the user so that they can log in?

A.

HOURS_TO_BYPASS_MFA

B.

SECS_TO_BYPASS_MFA

C.

MINS_TO_SKIP_MFA

D.

MINS_TO_BYPASS_MFA

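For reference, a minimal sketch of the ALTER USER usage the question is aiming at, assuming a hypothetical user JSMITH:

-- Allow the user to log in without MFA for the next 30 minutes
ALTER USER jsmith SET MINS_TO_BYPASS_MFA = 30;
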
Question # 25

The smaller the average depth, the better clustered the table is with regards to the specified column?

A.

TRUE

B.

FALSE

Question # 26

While creating an external function, which database object requires at least ACCOUNTADMIN privileges?

A.

STORAGE Integration

B.

SECURITY Integration

C.

API Integration

D.

None of the above required.

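For context, a minimal sketch of the integration object involved when creating external functions; the integration name, AWS role ARN, and gateway URL below are placeholders:

CREATE API INTEGRATION my_api_integration
  API_PROVIDER = aws_api_gateway
  API_AWS_ROLE_ARN = 'arn:aws:iam::123456789012:role/my_api_role'
  API_ALLOWED_PREFIXES = ('https://xyz.execute-api.us-west-2.amazonaws.com/prod/')
  ENABLED = TRUE;
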
Question # 27

You can execute zero, one, or more transactions inside a stored procedure?

A.

TRUE

B.

FALSE

Question # 28

A Data Engineer designed data pipelines using Snowpipe to load data files into Snowflake tables. What will happen in case a few files with the same name but modified data are queued for reloading?

A.

Data will be reloaded as the files are modified and their associated metadata has also changed, but Snowflake implicitly handles deduplication.

B.

The eTag is changed for the files even though they have the same name, so data will be duplicated in Snowflake tables.

C.

Snowpipe uses file loading metadata associated with each table object, so no metadata is available to prevent duplication.

D.

Snowpipe uses file loading metadata associated with each pipe object to prevent reloading the same files (and duplicating data) in a table.

Question # 29

Tasks may optionally use table streams to provide a convenient way to continuously process new or changed data. A task can transform new or changed rows that a stream surfaces. Each time a task is scheduled to run, it can verify whether a stream contains change data for a table and either consume the change data or skip the current run if no change data exists. Which System Function can be used by Data engineer to verify whether a stream contains changed data for a table?

A.

SYSTEM$STREAM_HAS_CHANGE_DATA

B.

SYSTEM$STREAM_CDC_DATA

C.

SYSTEM$STREAM_HAS_DATA

D.

SYSTEM$STREAM_DELTA_DATA

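For illustration, a minimal sketch of a task gated on a stream, assuming a hypothetical warehouse MYWH, stream MYSTREAM, and target table MYTABLE:

CREATE TASK mytask
  WAREHOUSE = mywh
  SCHEDULE = '5 MINUTE'
WHEN
  SYSTEM$STREAM_HAS_DATA('MYSTREAM')
AS
  INSERT INTO mytable SELECT * FROM mystream;
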
Question # 30

Harry is using Snowflake Enterprise Edition and has decided to scale the cluster in/out in automatic mode. He needs to configure some warehouses in multi-cluster mode and some of them in standard mode as per his needs.

If Harry is using Snowflake Enterprise Edition (or a higher edition), all his warehouses should be configured as multi-cluster warehouses only.

A.

TRUE

B.

FALSE

Question # 31

Select the Correct statements with regard to using Federated authentication/SSO?

A.

Snowflake supports using MFA in conjunction with SSO to provide additional levels of security.

B.

Snowflake supports multiple audience values (i.e. Audience or Audience Restriction Fields) in the SAML 2.0 assertion from the identity provider to Snowflake.

C.

Snowflake supports SSO with private connectivity to the Snowflake service for Snowflake accounts on Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform.

D.

Snowflake supports using SSO with organizations, and you can use the corresponding URL in the SAML2 security integration.

Question # 32

Which are the supported programming languages for creating UDTFs?

A.

Python

B.

Node.javascript

C.

Javascript

D.

Java

E.

Perl

Question # 33

In one of your created schemas, you have been required to create internal stages. What are the incorrect considerations you can notice from the below options? [Select All that Apply]

A.

User stages can be altered or dropped just like Table Stage.

B.

Table stage type is designed to store files that are staged and managed by one or more users but only loaded into a single table.

C.

A named internal stage type can store files that are staged and managed by one or more users and loaded into one or more tables.

D.

A table stage is available for each table created in Snowflake.


Consider the best type of stage for specific data files. Each option provides benefits and potential drawbacks.

User Stages

Each user has a Snowflake stage allocated to them by default for storing files. This stage is a convenient option if your files will only be accessed by a single user, but need to be copied into multiple tables.

User stages have the following characteristics and limitations:

User stages are referenced using @~; e.g. use LIST @~ to list the files in a user stage.

Unlike named stages, user stages cannot be altered or dropped.

User stages do not support setting file format options. Instead, you must specify file format and copy options as part of the COPY INTO <table> command.

This option is not appropriate if:

Multiple users require access to the files.

The current user does not have INSERT privileges on the tables the data will be loaded into.

Table Stages

Each table has a Snowflake stage allocated to it by default for storing files. This stage is a convenient option if your files need to be accessible to multiple users and only need to be copied into a single table.

Table stages have the following characteristics and limitations:

Table stages have the same name as the table; e.g. a table named mytable has a stage referenced as @%mytable.

Unlike named stages, table stages cannot be altered or dropped.

Table stages do not support transforming data while loading it (i.e. using a query as the source for the COPY command).

Note that a table stage is not a separate database object; rather, it is an implicit stage tied to the table itself. A table stage has no grantable privileges of its own. To stage files to a table stage, list the files, query them on the stage, or drop them, you must be the table owner (have the role with the OWNERSHIP privilege on the table).

This option is not appropriate if you need to copy the data in the files into multiple tables.

Named Stages

Named stages are database objects that provide the greatest degree of flexibility for data loading:

Users with the appropriate privileges on the stage can load data into any table.

Because the stage is a database object, the security/access rules that apply to all objects apply. The privileges to use a stage can be granted or revoked from roles. In addition, ownership of the stage can be transferred to another role.

If you plan to stage data files that will be loaded only by you, or will be loaded only into a single table, then you may prefer to simply use either your user stage or the stage for the table into which you will be loading data.

Named stages are optional but recommended when you plan regular data loads that could involve multiple users and/or tables.
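
For illustration, a minimal sketch of how each internal stage type is referenced, assuming a hypothetical table MYTABLE and named stage MY_NAMED_STAGE:

-- User stage: allocated per user, referenced with @~
LIST @~;

-- Table stage: implicit per-table stage, referenced with @%<table_name>
LIST @%mytable;

-- Named stage: a separate, grantable database object
CREATE STAGE my_named_stage;
LIST @my_named_stage;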

Question # 34

By default, a newly-created Custom role is not assigned to any user, nor granted to any other role?

A.

TRUE

B.

FALSE

Question # 35

To troubleshoot a data load failure in one of your COPY statements, a Data Engineer has executed a COPY statement with the VALIDATION_MODE copy option set to RETURN_ALL_ERRORS, with reference to the set of files he had attempted to load. Which functions below can facilitate analysis of the problematic records on top of the results produced? [Select 2]

A.

RESULT_SCAN

B.

LAST_QUERY_ID

C.

Rejected_record

D.

LOAD_ERROR

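For illustration, a minimal sketch of pairing these functions after a validation run, assuming a hypothetical table MYTABLE and stage MY_STAGE:

-- Validate the staged files without loading them
COPY INTO mytable
  FROM @my_stage
  VALIDATION_MODE = RETURN_ALL_ERRORS;

-- Analyze the problematic records returned by the previous statement
SELECT * FROM TABLE(RESULT_SCAN(LAST_QUERY_ID()));
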
Question # 36

Elon, a Data Engineer, needs to split semi-structured elements from the source files and load them as arrays into separate columns.

Source File:

+----------------------------------------------------------------------+
| $1                                                                   |
|----------------------------------------------------------------------|
| {"mac_address": {"host1": "197.128.1.1","host2": "197.168.0.1"}},    |
| {"mac_address": {"host1": "197.168.2.1","host2": "197.168.3.1"}}     |
+----------------------------------------------------------------------+

Output: Splitting the Machine Address as below.

| COL1     | COL2     |
|----------+----------|
| [        | [        |
| "197",   | "197",   |
| "128",   | "168",   |
| "1",     | "0",     |
| "1"      | "1"      |
| ]        | ]        |
| [        | [        |
| "197",   | "197",   |
| "168",   | "168",   |
| "2",     | "3",     |
| "1"      | "1"      |
| ]        | ]        |
Which Snowflake function can Elon use to transform this semi-structured data into the output format?

A.

CONVERT_TO_ARRAY

B.

SPLIT

C.

GROUP_BY_CONNECT

D.

NEST

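For illustration, a minimal sketch using SPLIT, assuming the source rows have been loaded into a variant column SRC of a hypothetical table MACHINE_DATA:

SELECT SPLIT(src:mac_address.host1::string, '.') AS col1,
       SPLIT(src:mac_address.host2::string, '.') AS col2
FROM machine_data;
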
Question # 37

For SQL UDFs, the invoker of the function need not have access to the objects referenced in the function definition, but only needs the privilege to use the function?

A.

TRUE

B.

FALSE

Question # 38

Which Role inherits the privileges of the USERADMIN role via the system role hierarchy?

A.

SYSADMIN

B.

SECURITYADMIN

C.

PUBLIC

D.

CUSTOM ROLE

Question # 39

Which connector creates the RECORD_CONTENT and RECORD_METADATA columns in the existing Snowflake table while connecting to Snowflake?

A.

Python Connector

B.

Spark Connector

C.

Node.js connector

D.

Kafka Connector

Question # 40

A UDTF, also called a table function, returns zero, one, or multiple rows for each input row?

A.

YES

B.

NO

Question # 41

John, a Data Engineer, has a technical requirement to refresh the external tables' metadata periodically or in auto mode. Which approach can John take to meet this technical specification?

A.

John can use the AUTO_REFRESH parameter if the underlying external cloud host supports this for external tables.

B.

He can create a task that executes an ALTER EXTERNAL TABLE ... REFRESH statement every 5 minutes.

C.

External tables cannot be scheduled via Snowflake tasks; 3rd-party tools/scripts provided by the external cloud storage provider need to be used.

D.

Snowflake implicitly takes care of this infrastructure need, as the underlying warehouse layer internally manages the refresh. No action is needed from John.

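For illustration, a minimal sketch of the task-based approach, assuming a hypothetical external table EXT_TABLE and warehouse MYWH:

CREATE TASK refresh_ext_table
  WAREHOUSE = mywh
  SCHEDULE = '5 MINUTE'
AS
  ALTER EXTERNAL TABLE ext_table REFRESH;
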
Question # 42

How can the following relational data be transformed into semi-structured data using the LEAST amount of operational overhead?

A.

Use the to_json function

B.

Use the PARSE_JSON function to produce a VARIANT value

C.

Use the OBJECT_CONSTRUCT function to return a Snowflake object

D.

Use the TO_VARIANT function to convert each of the relational columns to VARIANT.

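For context, a minimal sketch of the functions named in the options, assuming a hypothetical table CUSTOMER with columns ID and NAME:

-- Build a semi-structured OBJECT from all columns of each row
SELECT OBJECT_CONSTRUCT(*) FROM customer;

-- Convert a single relational column to VARIANT
SELECT TO_VARIANT(name) FROM customer;
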
Question # 43

Which functions will compute a 'fingerprint' over an entire table, query result, or window to quickly detect changes to table contents or query results? (Select TWO).

A.

HASH (*)

B.

HASH_AGG(*)

C.

HASH_AGG(<expr>, <expr>)

D.

HASH_AGG_COMPARE (*)

E.

HASH COMPARE(*)

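For illustration, a minimal sketch of fingerprinting with HASH_AGG, assuming a hypothetical table MYTABLE with a numeric column COL1:

-- Fingerprint of the entire table
SELECT HASH_AGG(*) FROM mytable;

-- Fingerprint of an arbitrary query result
SELECT HASH_AGG(*) FROM (SELECT * FROM mytable WHERE col1 > 100);
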
Question # 44

A Data Engineer is working on a continuous data pipeline which receives data from Amazon Kinesis Firehose and loads the data into a staging table which will later be used in the data transformation process. The average file size is 300-500 MB.

The Engineer needs to ensure that Snowpipe is performant while minimizing costs.

How can this be achieved?

A.

Increase the size of the virtual warehouse used by Snowpipe.

B.

Split the files before loading them and set the SIZE_LIMIT option to 250 MB.

C.

Change the file compression size and increase the frequency of the Snowpipe loads

D.

Decrease the buffer size to trigger delivery of files sized between 100 to 250 MB in Kinesis Firehose

Question # 45

At what isolation level are Snowflake streams?

A.

Snapshot

B.

Repeatable read

C.

Read committed

D.

Read uncommitted

Question # 46

A database contains a table and a stored procedure defined as.

No other operations are affecting the log_table.

What will be the outcome of the procedure call?

A.

The log_table contains zero records and the stored procedure returned 1 as a return value

B.

The log_table contains one record and the stored procedure returned 1 as a return value

C.

The log_table contains one record and the stored procedure returned NULL as a return value

D.

The log_table contains zero records and the stored procedure returned NULL as a return value

Question # 47

A table is loaded using Snowpipe and truncated afterwards. Later, a Data Engineer finds that the table needs to be reloaded, but the metadata of the pipe will not allow the same files to be loaded again.

How can this issue be solved using the LEAST amount of operational overhead?

A.

Wait until the metadata expires and then reload the file using Snowpipe

B.

Modify the file by adding a blank row to the bottom and re-stage the file

C.

Set the FORCE=TRUE option in the Snowpipe COPY INTO command

D.

Recreate the pipe by using the create or replace pipe command

Question # 48

While running an external function, the following error message is received:

Error: function received the wrong number of rows

What is causing this to occur?

A.

External functions do not support multiple rows

B.

Nested arrays are not supported in the JSON response

C.

The JSON returned by the remote service is not constructed correctly

D.

The return message did not produce the same number of rows that it received

Question # 49

Which system role is recommended for a custom role hierarchy to be ultimately assigned to?

A.

ACCOUNTADMIN

B.

SECURITYADMIN

C.

SYSADMIN

D.

USERADMIN

Question # 50

A Data Engineer enables a result cache at the session level with the following command:

ALTER SESSION SET USE_CACHED_RESULT = TRUE;

The Engineer then runs the following select query twice without delay:

The underlying table does not change between executions.

What are the results of both runs?

A.

The first and second run returned the same results because sample is deterministic

B.

The first and second run returned the same results, because the specific SEED value was provided.

C.

The first and second run returned different results because the query is evaluated each time it is run.

D.

The first and second run returned different results because the query uses * instead of an explicit column list.

Question # 51

A new customer table is created by a data pipeline in a Snowflake schema where MANAGED ACCESS is enabled.

…. can grant access to the CUSTOMER table? (Select THREE.)

A.

The role that owns the schema

B.

The role that owns the database

C.

The role that owns the customer table

D.

The SYSADMIN role

E.

The SECURITYADMIN role

F.

The USERADMIN role with the manage grants privilege

Question # 52

The following is returned from SYSTEM$CLUSTERING_INFORMATION() for a table named ORDERS with a date column named O_ORDERDATE:

What does the total_constant_partition_count value indicate about this table?

A.

The table is clustered very well on O_ORDERDATE, as there are 493 micro-partitions that could not be significantly improved by reclustering.

B.

The table is not clustered well on O_ORDERDATE, as there are 493 micro-partitions where the range of values in that column overlaps with every other micro-partition in the table.

C.

The data in O_ORDERDATE does not change very often, as there are 493 micro-partitions containing rows where that column has not been modified since the row was created.

D.

The data in O_ORDERDATE has a very low cardinality, as there are 493 micro-partitions where there is only a single distinct value in that column for all rows in the micro-partition.

Question # 53

A Data Engineer ran a stored procedure containing various transactions. During the execution, the session abruptly disconnected, preventing one transaction from committing or rolling back. The transaction was left in a detached state and created a lock on resources.

... must the Engineer take to immediately run a new transaction?

A.

Call the system function SYSTEM$ABORT_TRANSACTION.

B.

Call the system function SYSTEM$CANCEL_TRANSACTION.

C.

Set the LOCK_TIMEOUT to FALSE in the stored procedure.

D.

Set the transaction abort on error to true in the stored procedure.

Question # 54

A Data Engineer executes a complex query and wants to make use of Snowflake's query results caching capabilities to reuse the results.

Which conditions must be met? (Select THREE).

A.

The results must be reused within 72 hours.

B.

The query must be executed using the same virtual warehouse.

C.

The USED_CACHED_RESULT parameter must be included in the query.

D.

The table structure contributing to the query result cannot have changed

E.

The new query must have the same syntax as the previously executed query.

F.

The micro-partitions cannot have changed due to changes to other data in the table

Question # 55

Database XYZ has the data_retention_time_in_days parameter set to 7 days and table xyz.public.ABC has the data_retention_time_in_days set to 10 days.

A Developer accidentally dropped the database containing this single table 8 days ago and just discovered the mistake.

How can the table be recovered?

A.

undrop database xyz;

B.

create table abc_restore as select * from xyz.public.abc at (offset => -60*60*24*8);

C.

create table abc_restore clone xyz.public.abc at (offset => -3600*24*8);

D.

Create a Snowflake Support case to restore the database and table from Fail-safe.

Question # 56

A Data Engineer is implementing a near real-time ingestion pipeline to load data into Snowflake using the Snowflake Kafka connector. There will be three Kafka topics created.

…… Snowflake objects are created automatically when the Kafka connector starts? (Select THREE)

A.

Tables

B.

Tasks

C.

Pipes

D.

internal stages

E.

External stages

F.

Materialized views

Full Access