Oracle 1Z0-928
Oracle Cloud Platform Big Data Management 2018 Associate
1. Which function is NOT part of Kerberos?
A. Kerberos offers strong authentication features.
B. Kerberos authorizes SQL access for Hive and Impala.
C. Kerberos builds on symmetric key cryptography and requires a trusted third party.
D. Kerberos works on the basis of tickets.
Answer: B
2. You have a 3-rack Hadoop cluster and have configured HDFS with the replication factor set to 3. All racks and nodes are up and running. Which is the correct data replica layout by default?
A. One replica on one node in local Rack1, another replica on a node in remote Rack2, and the last replica on the fastest node in the cluster.
B. One replica on one node in local Rack1, another replica on a node in remote Rack2, and the last replica on a different node in the same remote Rack2.
C. One replica on one node in local Rack1, another replica on a node in remote Rack2, and the last replica on a different node in local Rack1.
D. One replica on one node in local Rack1, another replica on a node in remote Rack2, and the last replica on a node in remote Rack3.
Answer: B
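You can verify a file's replication factor over the WebHDFS REST API. The following is a minimal sketch only: the NameNode hostname, port, and file path are assumptions (WebHDFS listens on 9870 in Hadoop 3.x and 50070 in Hadoop 2.x).

# Check the replication factor and block size of an HDFS file via WebHDFS.
# Sketch only: NAMENODE and PATH are placeholder assumptions.
import requests

NAMENODE = "http://namenode:9870"      # assumed WebHDFS endpoint
PATH = "/user/demo/events.log"         # hypothetical file

resp = requests.get(f"{NAMENODE}/webhdfs/v1{PATH}", params={"op": "GETFILESTATUS"})
resp.raise_for_status()
status = resp.json()["FileStatus"]
print("replication factor:", status["replication"])
print("block size (bytes):", status["blockSize"])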
3. Which method of Kafka deployment on Oracle Cloud does NOT provide SSH access?
A. Oracle Event Hub Cloud Service Dedicated associated with Big Data Cloud
B. Oracle Event Hub Cloud Service (Multi-Tenant Version)
C. Oracle Event Hub Cloud Service-Dedicated (Kafka as a service)
D. Customer-installed Kafka on Oracle Cloud Infrastructure
Answer: B
4. You need to enforce network encryption, as Kerberos authentication does not protect data as it travels through the network. Because Hadoop is a distributed system, data must be transmitted over the network across machines. Which is NOT a network communication protocol used for network encryption?
A. Direct TCP/IP
B. HTTPS
C. SSL
D. Hadoop RPC
Answer: A
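Wire encryption in Hadoop is enabled through standard configuration properties covering RPC, block data transfer, and HTTP. Below is a minimal sketch that generates such a config fragment; the property names are standard Hadoop settings, but the output file name and the choice to generate XML programmatically are illustrative assumptions.

# Generate a minimal Hadoop config fragment enabling wire encryption.
import xml.etree.ElementTree as ET

props = {
    "hadoop.rpc.protection": "privacy",     # encrypt Hadoop RPC (SASL)
    "dfs.encrypt.data.transfer": "true",    # encrypt HDFS block data transfer
    "dfs.http.policy": "HTTPS_ONLY",        # serve web UIs/WebHDFS over HTTPS
}

configuration = ET.Element("configuration")
for name, value in props.items():
    prop = ET.SubElement(configuration, "property")
    ET.SubElement(prop, "name").text = name
    ET.SubElement(prop, "value").text = value

ET.ElementTree(configuration).write("wire-encryption-site.xml")  # illustrative file name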
5. Big Data Cloud has tools for easy provisioning, managing, and monitoring of Apache Hadoop clusters. Which component does Oracle Big Data Cloud use as the platform stack for cluster orchestration and provisioning?
A. Cloudera Manager
B. Apache Ambari
C. MapR Control System
D. Oracle Enterprise Manager
Answer: B
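Ambari exposes the cluster it orchestrates through a REST API, which is one way to see it at work behind the provisioning stack. A minimal sketch, assuming a reachable Ambari host and placeholder credentials:

# List clusters managed by Ambari via its REST API.
# Sketch only: host, port, and credentials are assumptions.
import requests

AMBARI = "http://ambari-host:8080"
resp = requests.get(
    f"{AMBARI}/api/v1/clusters",
    auth=("admin", "admin"),                 # placeholder credentials
    headers={"X-Requested-By": "ambari"},    # required by Ambari for write calls; harmless on a GET
)
resp.raise_for_status()
for item in resp.json()["items"]:
    print(item["Clusters"]["cluster_name"])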
6. You have data in large files that you need to copy from your Hadoop HDFS to other storage providers. You decided to use the Oracle Big Data Cloud Service distributed copy utility odcp. As odcp is compatible with Cloudera Distribution Including Apache Hadoop, which four are supported when copying files?
A. Secure WebHDFS (SWebHDFS)
B. Apache Hadoop Distributed File System (HDFS)
C. Apache Flume
D. Hypertext Transfer Protocol (HTTP)
E. Oracle Cloud Infrastructure Object Storage
F. Apache Sqoop
Answer: ABDE
Explanation:
Reference: https://docs.oracle.com/en/cloud/paas/big-data-cloud/csbdi/copy-data-odcp.html#GUID-4049DB4F-2E9A-4050-AB6F-B8F99918059F
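odcp is a command-line utility, so a copy job can be driven from a script. The following is a sketch only: the source and destination URIs are hypothetical, and the exact flags should be taken from the odcp documentation referenced above.

# Invoke the odcp distributed-copy utility from Python.
import subprocess

src = "hdfs:///user/demo/logs/"          # hypothetical HDFS source (supported scheme)
dst = "swift://archive.default/logs/"    # hypothetical object storage target (supported scheme)

result = subprocess.run(["odcp", src, dst], capture_output=True, text=True)
print(result.stdout)
result.check_returncode()                # raise if the copy failed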
7. ABC Media receives thousands of files every day from many sources. Each text-formatted file is typically 1-2 MB in size. They need to store all these files for at least two years. They heard about Hadoop and the HDFS filesystem, and want to take advantage of its cost-effective storage for the vast number of files. Which two recommendations could you provide to the customer to maintain the effectiveness of HDFS with the growing number of files?
A. Consider breaking down files into smaller files before ingesting.
B. Consider adding additional NameNodes to increase data storage capacity.
C. Reduce the memory available for the NameNode, as 1-2 MB files don't need a lot of memory.
D. Consider concatenating files after ingesting.
E. Use compression to free up space.
Answer: DE
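The small-files problem stems from each file and block costing NameNode memory, so compacting many small files into a few large, compressed ones is the standard remedy. A minimal Spark sketch, assuming hypothetical landing and archive paths and an arbitrary target file count:

# Consolidate many small text files into a few large, compressed ones,
# easing NameNode memory pressure. Paths and sizing are assumptions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("compact-small-files").getOrCreate()

df = spark.read.text("hdfs:///landing/daily/*.txt")   # thousands of 1-2 MB files
(df.coalesce(16)                                      # target a handful of large files
   .write.option("compression", "gzip")               # compress to save space
   .text("hdfs:///archive/daily-compacted/"))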
8. During provisioning, which can you create in order to integrate Big Data Cloud with other Oracle PaaS services?
A. Attachments
B. Associations
C. Couplings
D. Data Pipelines
Answer: B
9. You have easily and successfully created clusters with the Oracle Big Data Cloud wizard. You want to create a cluster that will be very specific to the needs of your business. How would you customize Oracle Big Data Cloud clusters during provisioning?
A. by using Stack Manager
B. by using Oracle Enterprise Manager
C. by using the Platform Service Manager UI
D. by using a bootstrap script
Answer: D
Explanation:
Reference: https://docs.oracle.com/en/cloud/paas/big-data-compute-cloud/csspc/using-oracle-big-data-cloud.pdf
10. What is the optimal way in Event Hub Cloud Service to stream data into Object Storage?
A. Use block storage as a temporary data landing zone.
B. Use an external database system to push the data to the object store.
C. Use Kafka connectors.
D. It is not possible to stream data to the object store.
Answer: C
Explanation:
Reference: https://cloud.oracle.com/event-hub
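Kafka connectors are registered with a running Kafka Connect worker through its REST API. The sketch below shows that registration call; the Connect host, topic, and every config value are placeholders, and the connector class is hypothetical since it depends on which object-storage sink you deploy.

# Register an object-storage sink connector via the Kafka Connect REST API.
import requests

CONNECT = "http://connect-host:8083"     # assumed Kafka Connect worker
payload = {
    "name": "events-to-object-storage",
    "config": {
        "connector.class": "com.example.ObjectStorageSinkConnector",  # hypothetical class
        "topics": "events",
        "tasks.max": "2",
        "flush.size": "10000",
    },
}
resp = requests.post(f"{CONNECT}/connectors", json=payload)
resp.raise_for_status()
print(resp.json()["name"], "created")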
11. When you create a cluster, how many permanent nodes must you have before you can have edge nodes?
A. One
B. Two
C. Five
D. Four
E. Three
Answer: D
Explanation:
Reference: https://docs.oracle.com/en/cloud/paas/big-data-cloud/csbdi/add-nodes-cluster.html#GUID-B6EDF7FB-A94A-4356-AF5E-C9307721F772
12. A company is using Oracle Big Data Cloud Service and has Hive tables owned by user A. The tables are being dropped by user B. How should you implement authorization in Hadoop?
A. Role-based separation is not possible because Hadoop is a file system.
B. Configure Apache RecordService.
C. Implement role-based security using Sentry.
D. Implement role-based security using Kerberos.
Answer: C
Explanation:
Reference: https://www.cloudera.com/documentation/enterprise/latest/topics/sg_hive_sql.html
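Sentry enforces role-based privileges through standard Hive SQL GRANT statements issued over HiveServer2, so user B could be restricted to read-only access. A minimal PyHive sketch; the host, role, and group names are assumptions:

# Grant table privileges through Sentry using Hive SQL statements.
# Sketch only: host, role, and group names are placeholders.
from pyhive import hive

conn = hive.connect(host="hiveserver2-host", port=10000,
                    auth="KERBEROS", kerberos_service_name="hive")
cur = conn.cursor()

cur.execute("CREATE ROLE analyst")
cur.execute("GRANT SELECT ON DATABASE sales TO ROLE analyst")  # read-only access
cur.execute("GRANT ROLE analyst TO GROUP analysts")            # map role to an OS/LDAP group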
13. Which statement is true about the Big Data File System in Oracle Big Data Cloud?
A. It is a transparent distributed cache layer.
B. It supports HDFS as a data lake concept.
C. It is a block storage layer.
D. It offers block storage with NVMe-based SSDs.
Answer: A
Explanation:
Reference: https://docs.oracle.com/en/cloud/paas/big-data-compute-cloud/csspc/big-data-file-system-bdfs.html
14. You are in a project with teams from other departments of the company. You need to share not only creative ideas but also data with the other departments. In Big Data Cloud Service, you need to copy data in very large files from HDFS on your cluster to their cloud storage. Which utility is most efficient for copying large data files?
A. odcp
B. ftp
C. fastcopy
D. scp
Answer: A
Explanation:
Reference: https://docs.oracle.com/en/cloud/paas/big-data-cloud/csbdi/copy-data-odcp.html#GUID-AE8587AF-6538-43A6-A2F3-52D63E287788
15. Oracle Big Data Cloud provides a ready-to-use, open source notebook utility you can use to create and code interactive exploratory analytical visualizations. What is the name of that notebook utility?
A. Jupyter
B. Beaker Notebook
C. Zeppelin Notebook
D. Oracle Analytics Cloud
Answer: C
16. Oracle Data Integrator for Big Data provides customers with enterprise big data integration. Which component does Oracle Data Integrator for Big Data use to give you the ability to solve your most complex and time-sensitive data transformation and data movement challenges?
A. RDDs
B. Knowledge modules
C. Predefined MapReduce jobs for data transformation
D. Package scripts
Answer: B
Explanation:
Reference: http://www.oracle.com/us/products/middleware/data-integration/odieebdds-2464372.pdf
17. What is the difference between permanent nodes and edge nodes?
A. Permanent nodes cannot be stopped, whereas you can start and stop edge nodes.
B. Permanent nodes are for the life of the cluster, whereas edge nodes are temporary for the duration of processing the data.
C. Permanent nodes contain your Hadoop data, but edge nodes do not have Hadoop data.
D. Permanent nodes contain your Hadoop data, but edge nodes give you the "edge" in processing your data with more processors.
Answer: C
Explanation:
Reference: https://docs.oracle.com/en/cloud/paas/big-data-cloud/csbdi/using-oracle-big-data-cloud-service.pdf
18. What is the result of the flatMap() function in Spark?
A. It always returns a new RDD by passing the supplied function used to filter the results.
B. It always returns a new RDD that contains elements in the source dataset and the argument.
C. It always returns an RDD with 0, 1, or more elements.
D. It always returns an RDD with a size identical to the input RDD.
Answer: C
Explanation:
Reference: https://backtobazics.com/big-data/spark/apache-spark-flatmap-example/
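flatMap applies a function that can emit zero or more output elements per input element, so the result's size need not match the input's. A minimal PySpark sketch (app name and sample data are arbitrary):

# flatMap maps each input element to zero or more output elements,
# so the result size can differ from the input (here: 3 lines -> 6 words).
from pyspark import SparkContext

sc = SparkContext("local", "flatmap-demo")
lines = sc.parallelize(["big data", "", "oracle cloud big data"])

words = lines.flatMap(lambda line: line.split())   # "" yields 0 elements
print(words.collect())   # ['big', 'data', 'oracle', 'cloud', 'big', 'data']
print(lines.count(), "->", words.count())          # 3 -> 6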