Amazon DAS-C01 Practice Test - Questions Answers, Page 13

A company using Amazon QuickSight Enterprise edition has thousands of dashboards, analyses, and datasets. The company struggles to manage and assign permissions for granting users access to various items within QuickSight. The company wants to make it easier to implement sharing and permissions management. Which solution should the company implement to simplify permissions management?

A. Use QuickSight folders to organize dashboards, analyses, and datasets. Assign individual users permissions to these folders.

B. Use QuickSight folders to organize dashboards, analyses, and datasets. Assign group permissions by using these folders.

C. Use AWS IAM resource-based policies to assign group permissions to QuickSight items.

D. Use QuickSight user management APIs to provision group permissions based on dashboard naming conventions.

Suggested answer: B

Explanation:

Organizing assets into QuickSight folders and granting permissions to groups on those folders lets users inherit access through group membership, instead of managing per-user permissions on thousands of individual dashboards, analyses, and datasets.

Reference: https://awscli.amazonaws.com/v2/documentation/api/latest/reference/quicksight/update-folder-permissions.html
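
For illustration, here is a minimal boto3 sketch of option B: create a QuickSight group and grant it access on a folder so that everything placed in the folder inherits the grant. The account ID, group name, folder ID, and action list are hypothetical placeholders, not values from the question.

```python
import boto3

quicksight = boto3.client("quicksight", region_name="us-east-1")
account_id = "111122223333"  # hypothetical AWS account ID

# Create a group for the users who need access (Enterprise edition supports groups).
group = quicksight.create_group(
    AwsAccountId=account_id,
    Namespace="default",
    GroupName="marketing-analysts",  # hypothetical group name
)
group_arn = group["Group"]["Arn"]

# Grant the group viewer-level access on a folder; dashboards, analyses, and
# datasets added to the folder inherit these permissions.
quicksight.update_folder_permissions(
    AwsAccountId=account_id,
    FolderId="sales-dashboards",  # hypothetical folder ID
    GrantPermissions=[
        {
            "Principal": group_arn,
            "Actions": ["quicksight:DescribeFolder"],
        }
    ],
)
```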

A company launched a service that produces millions of messages every day and uses Amazon Kinesis Data Streams as the streaming service.

The company uses the Kinesis SDK to write data to Kinesis Data Streams. A few months after launch, a data analyst found that write performance is significantly reduced. The data analyst investigated the metrics and determined that Kinesis is throttling the write requests. The data analyst wants to address this issue without significant changes to the architecture. Which actions should the data analyst take to resolve this issue? (Choose two.)

A. Increase the Kinesis Data Streams retention period to reduce throttling.

B. Replace the Kinesis API-based data ingestion mechanism with Kinesis Agent.

C. Increase the number of shards in the stream using the UpdateShardCount API.

D. Choose partition keys in a way that results in a uniform record distribution across shards.

E. Customize the application code to include retry logic to improve performance.

Suggested answer: C, D

Explanation:

The retention period only controls how long records are stored, so option A does not affect write throttling. Adding shards raises the stream's aggregate write capacity, and evenly distributed partition keys prevent individual hot shards from being throttled.
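
A minimal boto3 sketch of options C and D: scale the stream out with UpdateShardCount and use a high-cardinality partition key so records spread evenly across shards. The stream name, shard count, and record contents are hypothetical.

```python
import json
import uuid
import boto3

kinesis = boto3.client("kinesis")

# Option C: increase the shard count (UNIFORM_SCALING splits shards evenly).
kinesis.update_shard_count(
    StreamName="vehicle-events",  # hypothetical stream name
    TargetShardCount=8,
    ScalingType="UNIFORM_SCALING",
)

# Option D: use a random, high-cardinality partition key instead of a constant
# or skewed key, so writes are spread uniformly and no single shard is throttled.
record = {"vehicle_id": "truck-42", "speed_kmh": 88}
kinesis.put_record(
    StreamName="vehicle-events",
    Data=json.dumps(record).encode("utf-8"),
    PartitionKey=str(uuid.uuid4()),
)
```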

A transportation company uses IoT sensors attached to trucks to collect vehicle data for its global delivery fleet. The company currently sends the sensor data in small .csv files to Amazon S3. The files are then loaded into a 10-node Amazon Redshift cluster with two slices per node and queried using both Amazon Athena and Amazon Redshift. The company wants to optimize the files to reduce the cost of querying and also improve the speed of data loading into the Amazon Redshift cluster.

Which solution meets these requirements?

A. Use AWS Glue to convert all the files from .csv to a single large Apache Parquet file. COPY the file into Amazon Redshift and query the file with Athena from Amazon S3.

B. Use Amazon EMR to convert each .csv file to Apache Avro. COPY the files into Amazon Redshift and query the file with Athena from Amazon S3.

C. Use AWS Glue to convert the files from .csv to a single large Apache ORC file. COPY the file into Amazon Redshift and query the file with Athena from Amazon S3.

D. Use AWS Glue to convert the files from .csv to Apache Parquet to create 20 Parquet files. COPY the files into Amazon Redshift and query the files with Athena from Amazon S3.

Suggested answer: D
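
As a hedged sketch of option D, the Spark snippet below (as it might appear inside an AWS Glue job) converts the small .csv files into 20 Parquet files so the Redshift COPY can load one file per slice in parallel (10 nodes x 2 slices), while the columnar format reduces Athena scan costs. Bucket names, paths, and the table name are hypothetical.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("csv-to-parquet").getOrCreate()

# Read the small .csv sensor files and rewrite them as 20 Parquet files.
df = spark.read.option("header", "true").csv("s3://sensor-data/raw/")       # hypothetical path
df.repartition(20).write.mode("overwrite").parquet("s3://sensor-data/parquet/")

# Redshift then loads the 20 files in parallel, one per slice, e.g.:
#   COPY vehicle_stats
#   FROM 's3://sensor-data/parquet/'
#   IAM_ROLE 'arn:aws:iam::111122223333:role/RedshiftCopyRole'
#   FORMAT AS PARQUET;
```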

A data analytics specialist is building an automated ETL ingestion pipeline using AWS Glue to ingest compressed files that have been uploaded to an Amazon S3 bucket. The ingestion pipeline should support incremental data processing. Which AWS Glue feature should the data analytics specialist use to meet this requirement?

A. Workflows

B. Triggers

C. Job bookmarks

D. Classifiers

Suggested answer: C

Explanation:

Job bookmarks persist state about previously processed data, so each job run processes only the files that have arrived since the last run, which is exactly the incremental-processing behavior required. Workflows and triggers only orchestrate and schedule jobs, and classifiers determine schemas during crawling.

Reference: https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/build-an-etl-service-pipeline-to-load-data-incrementally-from-amazon-s3-to-amazon-redshift-using-aws-glue.html
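
A minimal AWS Glue ETL script sketch showing how job bookmarks enable incremental processing: with bookmarks enabled on the job (--job-bookmark-option job-bookmark-enable) and a transformation_ctx set on the source, each run reads only the S3 objects that were not processed by a previous run. The bucket paths and context names are hypothetical.

```python
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# transformation_ctx is the handle the bookmark uses to track processed input.
frame = glue_context.create_dynamic_frame.from_options(
    connection_type="s3",
    connection_options={"paths": ["s3://ingest-bucket/compressed/"]},  # hypothetical
    format="json",
    transformation_ctx="source_s3",
)

glue_context.write_dynamic_frame.from_options(
    frame=frame,
    connection_type="s3",
    connection_options={"path": "s3://processed-bucket/output/"},      # hypothetical
    format="parquet",
    transformation_ctx="sink_s3",
)

# Committing the job advances the bookmark so the next run skips these files.
job.commit()
```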

A company developed a new elections reporting website that uses Amazon Kinesis Data Firehose to deliver full logs from AWS WAF to an Amazon S3 bucket. The company is now seeking a low-cost option to perform this infrequent data analysis with visualizations of logs in a way that requires minimal development effort. Which solution meets these requirements?

A. Use an AWS Glue crawler to create and update a table in the Glue Data Catalog from the logs. Use Athena to perform ad-hoc analyses and use Amazon QuickSight to develop data visualizations.

B. Create a second Kinesis Data Firehose delivery stream to deliver the log files to Amazon OpenSearch Service (Amazon Elasticsearch Service). Use Amazon ES to perform text-based searches of the logs for ad-hoc analyses and use OpenSearch Dashboards (Kibana) for data visualizations.

C. Create an AWS Lambda function to convert the logs into .csv format. Then add the function to the Kinesis Data Firehose transformation configuration. Use Amazon Redshift to perform ad-hoc analyses of the logs using SQL queries and use Amazon QuickSight to develop data visualizations.

D. Create an Amazon EMR cluster and use Amazon S3 as the data source. Create an Apache Spark job to perform ad-hoc analyses and use Amazon QuickSight to develop data visualizations.

Suggested answer: A

Explanation:

Athena and QuickSight are serverless and pay-per-query, which suits infrequent analysis with minimal development effort; keeping an Amazon EMR cluster running and writing Spark jobs adds both cost and development overhead.
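
A hedged boto3 sketch of option A: crawl the WAF logs that Firehose delivered to S3 to build a Glue Data Catalog table, then run an ad-hoc Athena query over it (QuickSight can reuse the same table as a data source). The crawler name, IAM role, database, table, and bucket paths are all hypothetical.

```python
import boto3

glue = boto3.client("glue")
athena = boto3.client("athena")

# Catalog the delivered log files so Athena can query them in place.
glue.create_crawler(
    Name="waf-logs-crawler",
    Role="arn:aws:iam::111122223333:role/GlueCrawlerRole",   # hypothetical role
    DatabaseName="waf_logs",
    Targets={"S3Targets": [{"Path": "s3://waf-log-bucket/firehose/"}]},
)
glue.start_crawler(Name="waf-logs-crawler")

# Ad-hoc Athena query over the crawled table (no cluster to provision or run).
athena.start_query_execution(
    QueryString="SELECT action, COUNT(*) AS requests "
                "FROM waf_logs.firehose GROUP BY action",
    ResultConfiguration={"OutputLocation": "s3://athena-results-bucket/"},
)
```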

A bank operates in a regulated environment. The compliance requirements for the country in which the bank operates say that customer data for each state should only be accessible by the bank’s employees located in the same state. Bank employees in one state should NOT be able to access data for customers who have provided a home address in a different state.

The bank’s marketing team has hired a data analyst to gather insights from customer data for a new campaign being launched in certain states. Currently, data linking each customer account to its home state is stored in a tabular .csv file within a single Amazon S3 folder in a private S3 bucket. The total size of the S3 folder is 2 GB uncompressed. Due to the country’s compliance requirements, the marketing team is not able to access this folder.

The data analyst is responsible for ensuring that the marketing team gets one-time access to customer data for their campaign analytics project, while being subject to all the compliance requirements and controls.

Which solution should the data analyst implement to meet the desired requirements with the LEAST amount of setup effort?

A. Re-arrange data in Amazon S3 to store customer data about each state in a different S3 folder within the same bucket. Set up S3 bucket policies to provide marketing employees with appropriate data access under compliance controls. Delete the bucket policies after the project.

B. Load tabular data from Amazon S3 to an Amazon EMR cluster using s3DistCp. Implement a custom Hadoop-based row-level security solution on the Hadoop Distributed File System (HDFS) to provide marketing employees with appropriate data access under compliance controls. Terminate the EMR cluster after the project.

C. Load tabular data from Amazon S3 to Amazon Redshift with the COPY command. Use the built-in row-level security feature in Amazon Redshift to provide marketing employees with appropriate data access under compliance controls. Delete the Amazon Redshift tables after the project.

D. Load tabular data from Amazon S3 to Amazon QuickSight Enterprise edition by directly importing it as a data source. Use the built-in row-level security feature in Amazon QuickSight to provide marketing employees with appropriate data access under compliance controls. Delete Amazon QuickSight data sources after the project is complete.

Suggested answer: D

Explanation:

The dataset is only 2 GB and the marketing team needs one-time access for analytics, so importing it directly into QuickSight Enterprise edition and applying QuickSight's built-in row-level security requires the least setup effort; provisioning an Amazon Redshift or EMR cluster involves substantially more work for a temporary project.
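
A hedged boto3 sketch of option D: import the customer .csv into SPICE and attach a row-level security (RLS) rules dataset so each marketing employee only sees rows for their own state. The ARNs, IDs, and column names are hypothetical; the RLS rules dataset referenced here is assumed to be a small, separately uploaded CSV that maps QuickSight users or groups to the state values they may see.

```python
import boto3

quicksight = boto3.client("quicksight")
account_id = "111122223333"  # hypothetical

quicksight.create_data_set(
    AwsAccountId=account_id,
    DataSetId="customer-campaign-data",
    Name="Customer campaign data",
    ImportMode="SPICE",  # 2 GB of uncompressed .csv fits in SPICE
    PhysicalTableMap={
        "customers": {
            "S3Source": {
                "DataSourceArn": "arn:aws:quicksight:us-east-1:111122223333:datasource/customers-s3",
                "InputColumns": [
                    {"Name": "account_id", "Type": "STRING"},
                    {"Name": "home_state", "Type": "STRING"},
                ],
            }
        }
    },
    # Built-in row-level security: the rules dataset restricts which home_state
    # values each principal can read.
    RowLevelPermissionDataSet={
        "Arn": "arn:aws:quicksight:us-east-1:111122223333:dataset/state-rls-rules",
        "PermissionPolicy": "GRANT_ACCESS",
    },
)
```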

A transport company wants to track vehicular movements by capturing geolocation records. The records are 10 B in size and up to 10,000 records are captured each second. Data transmission delays of a few minutes are acceptable, considering unreliable network conditions. The transport company decided to use Amazon Kinesis Data Streams to ingest the data. The company is looking for a reliable mechanism to send data to Kinesis Data Streams while maximizing the throughput efficiency of the Kinesis shards. Which solution will meet the company’s requirements?

A. Kinesis Agent

B. Kinesis Producer Library (KPL)

C. Kinesis Data Firehose

D. Kinesis SDK

Suggested answer: B

Explanation:

The KPL buffers, aggregates, and batches many small records into larger Kinesis records, which maximizes per-shard throughput for tiny (10 B) records, and its asynchronous retries tolerate the acceptable transmission delay of a few minutes.

Reference: https://docs.aws.amazon.com/streams/latest/dev/developing-producers-with-sdk.html
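
The KPL itself is a Java library, so the Python sketch below is only a conceptual stand-in using the standard PutRecords API: it shows the batching effect the KPL provides automatically (many tiny records sent per call) to keep shard throughput efficient. The stream name and record fields are hypothetical, and the real KPL additionally aggregates multiple user records into one Kinesis record and retries failures, which this sketch does not do.

```python
import json
import boto3

kinesis = boto3.client("kinesis")

def send_batch(records):
    """Send up to 500 small geolocation records in a single PutRecords call."""
    kinesis.put_records(
        StreamName="vehicle-geolocation",  # hypothetical stream
        Records=[
            {
                "Data": json.dumps(r).encode("utf-8"),
                "PartitionKey": r["vehicle_id"],
            }
            for r in records
        ],
    )

# Example usage: buffer 500 readings, then flush them in one call.
send_batch(
    [{"vehicle_id": f"truck-{i}", "lat": 47.6, "lon": -122.3} for i in range(500)]
)
```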

A power utility company is deploying thousands of smart meters to obtain real-time updates about power consumption. The company is using Amazon Kinesis Data Streams to collect the data streams from smart meters. The consumer application uses the Kinesis Client Library (KCL) to retrieve the stream data. The company has only one consumer application.

The company observes an average of 1 second of latency from the moment that a record is written to the stream until the record is read by a consumer application. The company must reduce this latency to 500 milliseconds. Which solution meets these requirements?

A. Use enhanced fan-out in Kinesis Data Streams.

B. Increase the number of shards for the Kinesis data stream.

C. Reduce the propagation delay by overriding the KCL default settings.

D. Develop consumers by using Amazon Kinesis Data Firehose.

Suggested answer: C

Explanation:

The KCL defaults are set to follow the best practice of polling every 1 second, which results in average propagation delays that are typically below 1 second. Overriding the polling interval (at the cost of more GetRecords calls) brings the delay below the 500-millisecond target.

Reference: https://docs.aws.amazon.com/streams/latest/dev/kinesis-low-latency.html
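
The KCL consumer is typically configured in Java or through a properties file, where the relevant setting behind option C is idleTimeBetweenReadsInMillis (default 1000 ms). The boto3 loop below is only a conceptual illustration of polling more frequently; the stream name, shard ID, and 500 ms interval are hypothetical.

```python
import time
import boto3

kinesis = boto3.client("kinesis")

shard_iterator = kinesis.get_shard_iterator(
    StreamName="smart-meter-readings",       # hypothetical stream
    ShardId="shardId-000000000000",          # hypothetical shard
    ShardIteratorType="LATEST",
)["ShardIterator"]

while True:
    response = kinesis.get_records(ShardIterator=shard_iterator, Limit=1000)
    for record in response["Records"]:
        print(record["SequenceNumber"], record["Data"])  # placeholder processing
    shard_iterator = response["NextShardIterator"]
    time.sleep(0.5)  # poll every 500 ms instead of the 1 s KCL default
```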

A company has a marketing department and a finance department. The departments are storing data in Amazon S3 in their own AWS accounts in AWS Organizations. Both departments use AWS Lake Formation to catalog and secure their data.

The departments have some databases and tables that share common names.

The marketing department needs to securely access some tables from the finance department. Which two steps are required for this process? (Choose two.)

A. The finance department grants Lake Formation permissions for the tables to the external account for the marketing department.

B. The finance department creates cross-account IAM permissions to the table for the marketing department role.

C. The marketing department creates an IAM role that has permissions to the Lake Formation tables.

Suggested answer: A, C

Explanation:

Cross-account sharing in Lake Formation is done with Lake Formation grants rather than cross-account IAM permissions: the finance (producer) account grants Lake Formation permissions on its tables to the marketing account, and the marketing (consumer) account then creates an IAM role and grants that role access to the shared tables.

Granting Lake Formation Permissions

Creating an IAM role (AWS CLI)

Reference: https://docs.aws.amazon.com/lake-formation/latest/dg/lake-formation-permissions.html
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user.html
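
A hedged boto3 sketch of the two steps: the finance account grants Lake Formation SELECT on its table to the marketing account, and the marketing account then grants the shared table to a role its analysts assume (the role itself would be created with IAM). The account IDs, database, table, and role names are hypothetical, and in practice each call runs under credentials for its own account.

```python
import boto3

lakeformation = boto3.client("lakeformation")

# Step 1 (run in the finance / producer account): grant the table to the
# external marketing account.
lakeformation.grant_permissions(
    Principal={"DataLakePrincipalIdentifier": "222233334444"},  # marketing account ID
    Resource={
        "Table": {
            "CatalogId": "111122223333",                        # finance account ID
            "DatabaseName": "finance_db",
            "Name": "transactions",
        }
    },
    Permissions=["SELECT"],
)

# Step 2 (run in the marketing / consumer account): grant the shared table to
# the IAM role the marketing analysts use.
lakeformation.grant_permissions(
    Principal={
        "DataLakePrincipalIdentifier": "arn:aws:iam::222233334444:role/MarketingAnalyst"
    },
    Resource={
        "Table": {
            "CatalogId": "111122223333",
            "DatabaseName": "finance_db",
            "Name": "transactions",
        }
    },
    Permissions=["SELECT"],
)
```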

A data analyst is designing a solution to interactively query datasets with SQL using a JDBC connection. Users will join data stored in Amazon S3 in Apache ORC format with data stored in Amazon OpenSearch Service (Amazon Elasticsearch Service) and Amazon Aurora MySQL.

Which solution will provide the MOST up-to-date results?

A. Use AWS Glue jobs to ETL data from Amazon ES and Aurora MySQL to Amazon S3. Query the data with Amazon Athena.

B. Use Amazon DMS to stream data from Amazon ES and Aurora MySQL to Amazon Redshift. Query the data with Amazon Redshift.

C. Query all the datasets in place with Apache Spark SQL running on an AWS Glue developer endpoint.

D. Query all the datasets in place with Apache Presto running on Amazon EMR.

Suggested answer: D

Explanation:

Presto on Amazon EMR exposes a JDBC interface for interactive SQL and has connectors for Hive (ORC data in Amazon S3), Elasticsearch/OpenSearch, and MySQL, so it can join all three sources in place and return the most up-to-date results without first copying the data.
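
A hedged sketch of option D using the presto-python-client for illustration: a single federated query joins ORC data in S3 (Hive catalog), Aurora MySQL (MySQL connector), and OpenSearch/Elasticsearch (Elasticsearch connector) in place. The host, port, catalog, schema, and table names are hypothetical, and the connectors are assumed to already be configured on the EMR cluster.

```python
import prestodb  # presto-python-client

conn = prestodb.dbapi.connect(
    host="emr-master.example.internal",  # hypothetical EMR master node
    port=8889,                           # default Presto port on EMR
    user="hadoop",
    catalog="hive",
    schema="default",
)
cursor = conn.cursor()
cursor.execute(
    """
    SELECT c.customer_id, o.order_total, s.last_session
    FROM hive.sales.orders_orc AS o               -- ORC data in Amazon S3
    JOIN mysql.appdb.customers AS c               -- Aurora MySQL connector
      ON o.customer_id = c.customer_id
    JOIN elasticsearch.default.sessions AS s      -- OpenSearch/ES connector
      ON c.customer_id = s.customer_id
    """
)
rows = cursor.fetchall()
```

The same query can be issued from any JDBC client pointed at the Presto coordinator, which is what makes this option suitable for interactive use.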