

Question # 4

On a cluster running CDH 5.0 or above, you use the hadoop fs -put command to write a 300 MB file into a previously empty directory using an HDFS block size of 64 MB. Just after this command has finished writing 200 MB of this file, what would another user see when they look in the directory?

A.

The directory will appear to be empty until the entire file write is completed on the cluster

B.

They will see the file with a ._COPYING_ extension on its name. If they view the file, they will see the contents of the file up to the last completed block (as each 64 MB block is written, that block becomes available)

C.

They will see the file with a ._COPYING_ extension on its name. If they attempt to view the file, they will get a ConcurrentFileAccessException until the entire file write is completed on the cluster

D.

They will see the file with its original name. If they attempt to view the file, they will get a ConcurrentFileAccessException until the entire file write is completed on the cluster
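A quick way to observe this behavior on a test cluster (the directory and file names below are purely illustrative, not part of the question):

    hadoop fs -mkdir -p /user/test/upload
    hadoop fs -put largefile.dat /user/test/upload/
    # From a second session, while the put is still in progress:
    hadoop fs -ls /user/test/upload/
    # The in-flight file appears under a temporary name such as
    # largefile.dat._COPYING_ and is renamed once the copy completes.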

Question # 5

You are running a Hadoop cluster with a NameNode on host mynamenode. What are two ways to determine available HDFS space in your cluster?

A.

Run hdfs fs -du / and locate the DFS Remaining value

B.

Run hdfs dfsadmin -report and locate the DFS Remaining value

C.

Run hdfs dfs / and subtract NDFS Used from configured Capacity

D.

Connect to http://mynamenode:50070/dfshealth.jsp and locate the DFS remaining value
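As background, a hedged sketch of checking remaining HDFS capacity on a cluster whose NameNode runs on mynamenode (the report's exact field layout varies slightly between releases):

    hdfs dfsadmin -report
    # Configured Capacity: ...
    # DFS Remaining: ...
    # The same figures are shown on the NameNode web UI, e.g.
    # http://mynamenode:50070/dfshealth.jsp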

Question # 6

Which command does Hadoop offer to discover missing or corrupt HDFS data?

A.

Hdfs fs -du

B.

Hdfs fsck

C.

Dskchk

D.

The map-only checksum

E.

Hadoop does not provide any tools to discover missing or corrupt data; there is no need because three replicas are kept for each data block
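For context, a minimal sketch of a filesystem check run against the root of HDFS (the extra flags are optional and only expand the report):

    hdfs fsck / -files -blocks -locations
    # Summarizes missing, corrupt, and under-replicated blocks,
    # and ends with a HEALTHY or CORRUPT status line.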

Question # 7

On a cluster running MapReduce v2 (MRv2) on YARN, a MapReduce job is given a directory of 10 plain text files as its input directory. Each file is made up of 3 HDFS blocks. How many Mappers will run?

A.

We cannot say; the number of Mappers is determined by the ResourceManager

B.

We cannot say; the number of Mappers is determined by the developer

C.

30

D.

3

E.

10

F.

We cannot say; the number of Mappers is determined by the ApplicationMaster
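Background for this scenario (a reasoning sketch under default settings, not an answer key): with MRv2 and the default input formats, one map task is launched per input split, and for splittable plain text files a split normally corresponds to one HDFS block, so the map count is driven by the total block count of the input, i.e. 10 files × 3 blocks per file = 30 splits.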

Question # 8

Your cluster's mapred-site.xml includes the following parameters:

mapreduce.map.memory.mb = 4096

mapreduce.reduce.memory.mb = 8192

and your cluster's yarn-site.xml includes the following parameter:

yarn.nodemanager.vmem-pmem-ratio = 2.1

What is the maximum amount of virtual memory allocated for each map task before YARN will kill its Container?

A.

4 GB

B.

17.2 GB

C.

8.9 GB

D.

8.2 GB

E.

24.6 GB
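Background on how these parameters interact (a sketch of the standard YARN virtual-memory check, not an answer key): the NodeManager enforces a per-container virtual memory ceiling equal to the container's requested physical memory multiplied by yarn.nodemanager.vmem-pmem-ratio, so for a map container the limit works out as

    mapreduce.map.memory.mb × yarn.nodemanager.vmem-pmem-ratio = 4096 MB × 2.1 = 8601.6 MB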

Question # 9

Table schemas in Hive are:

A.

Stored as metadata on the NameNode

B.

Stored along with the data in HDFS

C.

Stored in the Metastore

D.

Stored in ZooKeeper
