DAS-C01: AWS Certified Data Analytics - Specialty

Vendor: Amazon

Exam questions: 214

Learners: 2,370

The AWS Certified Data Analytics – Specialty (DAS-C01) exam is a crucial certification for anyone aiming to advance their career in data analytics on AWS. This page is your ultimate resource for DAS-C01 practice tests shared by individuals who have successfully passed the exam. These practice tests provide real-world scenarios and invaluable insights to help you ace your preparation.

Why Use DAS-C01 Practice Test?

  • Real Exam Experience: Our practice test accurately replicates the format and difficulty of the actual AWS DAS-C01 exam, providing you with a realistic preparation experience.

  • Identify Knowledge Gaps: Practicing with these tests helps you identify areas where you need more study, allowing you to focus your efforts effectively.

  • Boost Confidence: Regular practice with exam-like questions builds your confidence and reduces test anxiety.

  • Track Your Progress: Monitor your performance over time to see your improvement and adjust your study plan accordingly.

Key Features of DAS-C01 Practice Test:

  • Up-to-Date Content: Our community ensures that the questions are regularly updated to reflect the latest exam objectives and technology trends.

  • Detailed Explanations: Each question comes with detailed explanations, helping you understand the correct answers and learn from any mistakes.

  • Comprehensive Coverage: The practice test covers all key topics of the AWS DAS-C01 exam, including data collection, processing, analysis, and visualization.

  • Customizable Practice: Create your own practice sessions based on specific topics or difficulty levels to tailor your study experience to your needs.

Exam number: DAS-C01

Exam name: AWS Certified Data Analytics – Specialty

Length of test: 180 minutes

Exam format: Multiple-choice and multiple-response questions.

Exam language: English

Number of questions in the actual exam: Maximum of 65 questions

Passing score: 750/1000

Use the member-shared AWS DAS-C01 Practice Test to ensure you’re fully prepared for your certification exam. Start practicing today and take a significant step towards achieving your certification goals!

Related questions

An online gaming company is using an Amazon Kinesis Data Analytics SQL application with a Kinesis data stream as its source. The source sends three non-null fields to the application: player_id, score, and us_5_digit_zip_code. A data analyst has a .csv mapping file that maps a small number of us_5_digit_zip_code values to a territory code. The data analyst needs to include the territory code, if one exists, as an additional output of the Kinesis Data Analytics application.

How should the data analyst meet this requirement while minimizing costs?

A. Store the contents of the mapping file in an Amazon DynamoDB table. Preprocess the records as they arrive in the Kinesis Data Analytics application with an AWS Lambda function that fetches the mapping and supplements each record to include the territory code, if one exists. Change the SQL query in the application to include the new field in the SELECT statement.

B. Store the mapping file in an Amazon S3 bucket and configure the reference data column headers for the .csv file in the Kinesis Data Analytics application. Change the SQL query in the application to include a join to the file’s S3 Amazon Resource Name (ARN), and add the territory code field to the SELECT columns.

C. Store the mapping file in an Amazon S3 bucket and configure it as a reference data source for the Kinesis Data Analytics application. Change the SQL query in the application to include a join to the reference table and add the territory code field to the SELECT columns.

D. Store the contents of the mapping file in an Amazon DynamoDB table. Change the Kinesis Data Analytics application to send its output to an AWS Lambda function that fetches the mapping and supplements each record to include the territory code, if one exists. Forward the record from the Lambda function to the original application destination.
Suggested answer: C
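For context on answer C: a Kinesis Data Analytics SQL application can load an S3 object as an in-application reference table through the v1 API. The boto3 sketch below shows the general shape of that call; the application, bucket, file, role, and table names are hypothetical placeholders, and the application's SQL would then LEFT JOIN the source stream to the reference table.

```python
import boto3

# Minimal sketch (hypothetical names): register mapping.csv in S3 as a
# reference data source for a Kinesis Data Analytics SQL (v1) application.
kda = boto3.client("kinesisanalytics")

kda.add_application_reference_data_source(
    ApplicationName="my-kda-app",
    CurrentApplicationVersionId=1,  # must match the application's current version
    ReferenceDataSource={
        "TableName": "ZIP_TO_TERRITORY",  # in-application reference table name
        "S3ReferenceDataSource": {
            "BucketARN": "arn:aws:s3:::my-bucket",
            "FileKey": "mapping.csv",
            "ReferenceRoleARN": "arn:aws:iam::123456789012:role/kda-s3-read",
        },
        "ReferenceSchema": {
            "RecordFormat": {
                "RecordFormatType": "CSV",
                "MappingParameters": {
                    "CSVMappingParameters": {
                        "RecordRowDelimiter": "\n",
                        "RecordColumnDelimiter": ",",
                    }
                },
            },
            "RecordColumns": [
                {"Name": "us_5_digit_zip_code", "SqlType": "VARCHAR(5)"},
                {"Name": "territory_code", "SqlType": "VARCHAR(16)"},
            ],
        },
    },
)
```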

A large company receives files from external parties in Amazon EC2 throughout the day. At the end of the day, the files are combined into a single file, compressed into a gzip file, and uploaded to Amazon S3. The total size of all the files is close to 100 GB daily. Once the files are uploaded to Amazon S3, an AWS Batch program executes a COPY command to load the files into an Amazon Redshift cluster. Which program modification will accelerate the COPY process?

A. Upload the individual files to Amazon S3 and run the COPY command as soon as the files become available.

B. Split the number of files so they are equal to a multiple of the number of slices in the Amazon Redshift cluster. Gzip and upload the files to Amazon S3. Run the COPY command on the files.

C. Split the number of files so they are equal to a multiple of the number of compute nodes in the Amazon Redshift cluster. Gzip and upload the files to Amazon S3. Run the COPY command on the files.

D. Apply sharding by breaking up the files so the distkey columns with the same values go to the same file. Gzip and upload the sharded files to Amazon S3. Run the COPY command on the files.
Suggested answer: B

Explanation:


Reference: https://docs.aws.amazon.com/redshift/latest/dg/t_splitting-data-files.html
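Answer B works because Redshift's COPY loads files in parallel across slices, so a file count that is a multiple of the slice count keeps every slice busy. A rough sketch follows, issuing the load through the Redshift Data API once the gzip parts are uploaded under a common key prefix; the slice count, cluster, bucket, table, and role names are all hypothetical.

```python
import boto3

# Sketch (hypothetical names): the daily ~100 GB file has been split into a
# multiple of the cluster's total slice count, gzipped, and uploaded under
# s3://my-bucket/daily/part_*.
SLICES = 8              # total slices; depends on node count and type (assumption)
NUM_FILES = SLICES * 4  # any multiple of the slice count works

copy_sql = """
COPY daily_events
FROM 's3://my-bucket/daily/part_'  -- key prefix matches every split file
IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy'
GZIP;
"""

boto3.client("redshift-data").execute_statement(
    ClusterIdentifier="my-cluster",
    Database="analytics",
    DbUser="loader",
    Sql=copy_sql,
)
```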


A large university has adopted a strategic goal of increasing diversity among enrolled students. The data analytics team is creating a dashboard with data visualizations to enable stakeholders to view historical trends. All access must be authenticated using Microsoft Active Directory. All data in transit and at rest must be encrypted. Which solution meets these requirements?

A. Amazon QuickSight Standard edition configured to perform identity federation using SAML 2.0 and the default encryption settings.

B. Amazon QuickSight Enterprise edition configured to perform identity federation using SAML 2.0 and the default encryption settings.

C. Amazon QuickSight Standard edition using AD Connector to authenticate using Active Directory. Configure Amazon QuickSight to use customer-provided keys imported into AWS KMS.

D. Amazon QuickSight Enterprise edition using AD Connector to authenticate using Active Directory. Configure Amazon QuickSight to use customer-provided keys imported into AWS KMS.
Suggested answer: D

Explanation:


Reference: https://docs.aws.amazon.com/quicksight/latest/user/WhatsNew.html


A company is using an AWS Lambda function to run Amazon Athena queries against a cross-account AWS Glue Data Catalog. A query returns the following error:

HIVE_METASTORE_ERROR

The error message states that the response payload size exceeds the maximum allowed size. The queried table is already partitioned, and the data is stored in an Amazon S3 bucket in the Apache Hive partition format. Which solution will resolve this error?

A. Modify the Lambda function to upload the query response payload as an object into the S3 bucket. Include an S3 object presigned URL as the payload in the Lambda function response.

B. Run the MSCK REPAIR TABLE command on the queried table.

C. Create a separate folder in the S3 bucket. Move the data files that need to be queried into that folder. Create an AWS Glue crawler that points to the folder instead of the S3 bucket.

D. Check the schema of the queried table for any characters that Athena does not support. Replace any unsupported characters with characters that Athena supports.
Suggested answer: C

Explanation:


Reference: https://docs.aws.amazon.com/athena/latest/ug/tables-location-format.html
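One way to realize answer C programmatically is a Glue crawler scoped to the dedicated folder rather than the whole bucket. A minimal boto3 sketch follows; the crawler, role, database, and path names are hypothetical.

```python
import boto3

# Sketch (hypothetical names): crawl only the folder holding the data that
# needs to be queried, keeping the resulting table's metadata small enough
# for the cross-account Data Catalog response.
glue = boto3.client("glue")

glue.create_crawler(
    Name="queried-data-crawler",
    Role="arn:aws:iam::123456789012:role/glue-crawler-role",
    DatabaseName="analytics_db",
    Targets={
        # Point at the dedicated folder instead of the bucket root
        "S3Targets": [{"Path": "s3://my-bucket/queried-data/"}]
    },
)
glue.start_crawler(Name="queried-data-crawler")
```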


A company has an encrypted Amazon Redshift cluster. The company recently enabled Amazon Redshift audit logs and needs to ensure that the audit logs are also encrypted at rest. The logs are retained for 1 year. The auditor queries the logs once a month.

What is the MOST cost-effective way to meet these requirements?

A. Encrypt the Amazon S3 bucket where the logs are stored by using AWS Key Management Service (AWS KMS). Copy the data into the Amazon Redshift cluster from Amazon S3 on a daily basis. Query the data as required.

B. Disable encryption on the Amazon Redshift cluster, configure audit logging, and encrypt the Amazon Redshift cluster. Use Amazon Redshift Spectrum to query the data as required.

C. Enable default encryption on the Amazon S3 bucket where the logs are stored by using AES-256 encryption. Copy the data into the Amazon Redshift cluster from Amazon S3 on a daily basis. Query the data as required.

D. Enable default encryption on the Amazon S3 bucket where the logs are stored by using AES-256 encryption. Use Amazon Redshift Spectrum to query the data as required.
Suggested answer: A
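As a rough illustration of answer A, default encryption with a KMS key can be applied to the log bucket before audit logging is pointed at it. The sketch below uses hypothetical bucket, key, and cluster identifiers; bucket policy and key policy permissions for the logging service are assumed to be in place.

```python
import boto3

# Sketch (hypothetical names): default-encrypt the audit-log bucket with
# SSE-KMS, then enable Redshift audit logging into that bucket.
s3 = boto3.client("s3")
s3.put_bucket_encryption(
    Bucket="redshift-audit-logs",
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": "arn:aws:kms:us-east-1:123456789012:key/example-key-id",
                }
            }
        ]
    },
)

boto3.client("redshift").enable_logging(
    ClusterIdentifier="my-cluster",
    BucketName="redshift-audit-logs",
    S3KeyPrefix="audit/",
)
```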

A company needs to implement a near-real-time messaging system for hotel inventory. The messages are collected from 1,000 data sources and contain hotel inventory data. The data is then processed and distributed to 20 HTTP endpoint destinations. The range of data size for messages is 2-500 KB.

The messages must be delivered to each destination in order. The performance of a single destination HTTP endpoint should not impact the performance of the delivery for other destinations. Which solution meets these requirements with the LOWEST latency from message ingestion to delivery?

A. Create an Amazon Kinesis data stream, and ingest the data for each source into the stream. Create 30 AWS Lambda functions to read these messages and send the messages to each destination endpoint.

B. Create an Amazon Kinesis data stream, and ingest the data for each source into the stream. Create a single enhanced fan-out AWS Lambda function to read these messages and send the messages to each destination endpoint. Register the function as an enhanced fan-out consumer.

C. Create an Amazon Kinesis Data Firehose delivery stream, and ingest the data for each source into the stream. Configure Kinesis Data Firehose to deliver the data to an Amazon S3 bucket. Invoke an AWS Lambda function with an S3 event notification to read these messages and send the messages to each destination endpoint.

D. Create an Amazon Kinesis data stream, and ingest the data for each source into the stream. Create 20 enhanced fan-out AWS Lambda functions to read these messages and send the messages to each destination endpoint. Register the 20 functions as enhanced fan-out consumers.
Suggested answer: B

Explanation:


Reference: https://docs.aws.amazon.com/lambda/latest/dg/with-kinesis.html
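For reference, registering a Lambda function as an enhanced fan-out consumer (as in answer B) amounts to creating a stream consumer and then mapping the function to the consumer ARN rather than the stream ARN. A boto3 sketch with hypothetical stream, consumer, and function names:

```python
import boto3

# Sketch (hypothetical names): create an enhanced fan-out consumer on the
# stream, then wire a Lambda function to it so records are pushed over
# HTTP/2 with dedicated per-consumer throughput.
kinesis = boto3.client("kinesis")
consumer = kinesis.register_stream_consumer(
    StreamARN="arn:aws:kinesis:us-east-1:123456789012:stream/hotel-inventory",
    ConsumerName="delivery-consumer",
)

boto3.client("lambda").create_event_source_mapping(
    # The consumer ARN (not the stream ARN) enables enhanced fan-out
    EventSourceArn=consumer["Consumer"]["ConsumerARN"],
    FunctionName="deliver-to-endpoints",
    StartingPosition="LATEST",
)
```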


A large marketing company needs to store all of its streaming logs and create near-real-time dashboards. The dashboards will be used to help the company make critical business decisions and must be highly available.

Which solution meets these requirements?

Become a Premium Member for full access

A company wants to research user turnover by analyzing the past 3 months of user activities. With millions of users, 1.5 TB of uncompressed data is generated each day. A 30-node Amazon Redshift cluster with 2.56 TB of solid state drive (SSD) storage for each node is required to meet the query performance goals.

The company wants to run an additional analysis on a year’s worth of historical data to examine trends indicating which features are most popular. This analysis will be done once a week. What is the MOST cost-effective solution?

A. Increase the size of the Amazon Redshift cluster to 120 nodes so it has enough storage capacity to hold 1 year of data. Then use Amazon Redshift for the additional analysis.

B. Keep the data from the last 90 days in Amazon Redshift. Move data older than 90 days to Amazon S3 and store it in Apache Parquet format partitioned by date. Then use Amazon Redshift Spectrum for the additional analysis.

C. Keep the data from the last 90 days in Amazon Redshift. Move data older than 90 days to Amazon S3 and store it in Apache Parquet format partitioned by date. Then provision a persistent Amazon EMR cluster and use Apache Presto for the additional analysis.

D. Resize the cluster node type to the dense storage node type (DS2) for an additional 16 TB of storage capacity on each individual node in the Amazon Redshift cluster. Then use Amazon Redshift for the additional analysis.
Suggested answer: B
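To make answer B concrete: older rows can be unloaded to S3 as date-partitioned Parquet and then queried in place through a Spectrum external schema. The sketch below issues both statements through the Redshift Data API; the cluster, table, column, bucket, and role names are hypothetical.

```python
import boto3

# Sketch (hypothetical names): archive data older than 90 days as
# date-partitioned Parquet, then expose it via a Spectrum external schema.
rsd = boto3.client("redshift-data")

def run(sql: str) -> None:
    rsd.execute_statement(
        ClusterIdentifier="my-cluster",
        Database="analytics",
        DbUser="admin",
        Sql=sql,
    )

run("""
UNLOAD ('SELECT * FROM user_activity
         WHERE event_date < DATEADD(day, -90, CURRENT_DATE)')
TO 's3://my-archive-bucket/user_activity/'
IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-spectrum'
FORMAT AS PARQUET
PARTITION BY (event_date);
""")

run("""
CREATE EXTERNAL SCHEMA IF NOT EXISTS spectrum
FROM DATA CATALOG DATABASE 'archive_db'
IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-spectrum'
CREATE EXTERNAL DATABASE IF NOT EXISTS;
""")
```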

A bank is building an Amazon S3 data lake. The bank wants a single data repository for customer data needs, such as personalized recommendations. The bank needs to use Amazon Kinesis Data Firehose to ingest customers' personal information, bank accounts, and transactions in near real time from a transactional relational database.

All personally identifiable information (PII) that is stored in the S3 bucket must be masked. The bank has enabled versioning for the S3 bucket.

Which solution will meet these requirements?

Become a Premium Member for full access

A marketing company has an application that stores event data in an Amazon RDS database. The company is replicating this data to Amazon Redshift for reporting and business intelligence (BI) purposes. New event data is continuously generated and ingested into the RDS database throughout the day and captured by a change data capture (CDC) replication task in AWS Database Migration Service (AWS DMS). The company requires that the new data be replicated to Amazon Redshift in near-real time.

Which solution meets these requirements?

Become a Premium Member for full access