Amazon SAA-C03 Practice Test - Questions Answers, Page 20

A company has two applications: a sender application that sends messages with payloads to be processed and a processing application intended to receive the messages with payloads. The company wants to implement an AWS service to handle messages between the two applications.

The sender application can send about 1,000 messages each hour. The messages may take up to 2 days to be processed. If the messages fail to process, they must be retained so that they do not impact the processing of any remaining messages.

Which solution meets these requirements and is the MOST operationally efficient?

A. Set up an Amazon EC2 instance running a Redis database. Configure both applications to use the instance. Store, process, and delete the messages, respectively.
B. Use an Amazon Kinesis data stream to receive the messages from the sender application. Integrate the processing application with the Kinesis Client Library (KCL).
C. Integrate the sender and processor applications with an Amazon Simple Queue Service (Amazon SQS) queue. Configure a dead-letter queue to collect the messages that failed to process.
D. Subscribe the processing application to an Amazon Simple Notification Service (Amazon SNS) topic to receive notifications to process. Integrate the sender application to write to the SNS topic.
Suggested answer: C

Explanation:

Amazon SQS decouples the sender and processing applications, and a queue's message retention period can be set as high as 14 days, comfortably covering the 2-day processing window. A dead-letter queue isolates messages that repeatedly fail to process so they do not block the remaining messages.

https://aws.amazon.com/blogs/compute/building-loosely-coupled-scalable-c-applications-with-amazon-sqs-and-amazon-sns/
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-dead-letter-queues.html
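
As a minimal sketch of answer C, the two queues could be created with boto3. The queue names are assumptions for illustration; the redrive policy is what routes repeatedly failed messages into the dead-letter queue:

```python
import json
import boto3

sqs = boto3.client("sqs")  # assumes credentials and region are configured

# Dead-letter queue that collects messages that fail processing.
dlq_url = sqs.create_queue(QueueName="payload-dlq")["QueueUrl"]
dlq_arn = sqs.get_queue_attributes(
    QueueUrl=dlq_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# Main queue: retain messages for 4 days (processing may take up to 2 days)
# and move a message to the DLQ after 3 failed receive attempts.
sqs.create_queue(
    QueueName="payload-queue",
    Attributes={
        "MessageRetentionPeriod": str(4 * 24 * 60 * 60),
        "RedrivePolicy": json.dumps(
            {"deadLetterTargetArn": dlq_arn, "maxReceiveCount": "3"}
        ),
    },
)
```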

A company has an AWS account used for software engineering. The AWS account has access to the company's on-premises data center through a pair of AWS Direct Connect connections. All non-VPC traffic routes to the virtual private gateway.

A development team recently created an AWS Lambda function through the console. The development team needs to allow the function to access a database that runs in a private subnet in the company's data center. Which solution will meet these requirements?

A. Configure the Lambda function to run in the VPC with the appropriate security group.
B. Set up a VPN connection from AWS to the data center. Route the traffic from the Lambda function through the VPN.
C. Update the route tables in the VPC to allow the Lambda function to access the on-premises data center through Direct Connect.
D. Create an Elastic IP address. Configure the Lambda function to send traffic through the Elastic IP address without an elastic network interface.
Suggested answer: A

Explanation:

https://docs.aws.amazon.com/lambda/latest/dg/configuration-vpc.html#vpc-managing-eni
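
A hedged example of answer A using the update_function_configuration API; the function name, subnet ID, and security group ID are placeholders:

```python
import boto3

lambda_client = boto3.client("lambda")

# Attach the function to private subnets whose route tables point at the
# virtual private gateway, so its traffic can reach the on-premises
# database over the existing Direct Connect connections.
lambda_client.update_function_configuration(
    FunctionName="my-function",
    VpcConfig={
        "SubnetIds": ["subnet-0123456789abcdef0"],
        "SecurityGroupIds": ["sg-0123456789abcdef0"],
    },
)
```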

A company has a legacy data processing application that runs on Amazon EC2 instances. Data is processed sequentially, but the order of results does not matter. The application uses a monolithic architecture. The only way that the company can scale the application to meet increased demand is to increase the size of the instances.

The company's developers have decided to rewrite the application to use a microservices architecture on Amazon Elastic Container Service (Amazon ECS). What should a solutions architect recommend for communication between the microservices?

A. Create an Amazon Simple Queue Service (Amazon SQS) queue. Add code to the data producers, and send data to the queue. Add code to the data consumers to process data from the queue.
B. Create an Amazon Simple Notification Service (Amazon SNS) topic. Add code to the data producers, and publish notifications to the topic. Add code to the data consumers to subscribe to the topic.
C. Create an AWS Lambda function to pass messages. Add code to the data producers to call the Lambda function with a data object. Add code to the data consumers to receive a data object that is passed from the Lambda function.
D. Create an Amazon DynamoDB table. Enable DynamoDB Streams. Add code to the data producers to insert data into the table. Add code to the data consumers to use the DynamoDB Streams API to detect new table entries and retrieve the data.
Suggested answer: A

Explanation:

A FIFO queue has limited throughput (300 messages per second without batching, or 3,000 messages per second with batching of up to 10 messages per operation), does not allow duplicate messages in the queue (exactly-once delivery), preserves message order, and requires a queue name ending in .fifo. Because the order of results does not matter for this workload, a standard SQS queue avoids these constraints while decoupling the producer and consumer microservices.
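
A minimal producer/consumer sketch for answer A with boto3; the queue name and the process() helper are hypothetical placeholders:

```python
import boto3

sqs = boto3.client("sqs")
queue_url = sqs.get_queue_url(QueueName="processing-queue")["QueueUrl"]  # assumed name

def process(body: str) -> None:
    # Placeholder for the consumer microservice's business logic.
    print("processing", body)

# Producer microservice: enqueue a unit of work.
sqs.send_message(QueueUrl=queue_url, MessageBody='{"record_id": 42}')

# Consumer microservice: long-poll, process, then delete on success.
# A message that is never deleted becomes visible again and is retried.
resp = sqs.receive_message(
    QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=20
)
for msg in resp.get("Messages", []):
    process(msg["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```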

A hospital wants to create digital copies for its large collection of historical written records. The hospital will continue to add hundreds of new documents each day. The hospital's data team will scan the documents and will upload the documents to the AWS Cloud.

A solutions architect must implement a solution to analyze the documents, extract the medical information, and store the documents so that an application can run SQL queries on the data. The solution must maximize scalability and operational efficiency.

Which combination of steps should the solutions architect take to meet these requirements? (Select TWO.)

A. Write the document information to an Amazon EC2 instance that runs a MySQL database.
B. Write the document information to an Amazon S3 bucket. Use Amazon Athena to query the data.
C. Create an Auto Scaling group of Amazon EC2 instances to run a custom application that processes the scanned files and extracts the medical information.
D. Create an AWS Lambda function that runs when new documents are uploaded. Use Amazon Rekognition to convert the documents to raw text. Use Amazon Transcribe Medical to detect and extract relevant medical information from the text.
E. Create an AWS Lambda function that runs when new documents are uploaded. Use Amazon Textract to convert the documents to raw text. Use Amazon Comprehend Medical to detect and extract relevant medical information from the text.
Suggested answer: B, E

Explanation:

This solution meets the requirements of creating digital copies for a large collection of historical written records, analyzing the documents, extracting the medical information, and storing the documents so that an application can run SQL queries on the data. Writing the document information to an Amazon S3 bucket provides scalable and durable storage for the scanned files, and Amazon Athena provides serverless, interactive SQL analysis on data stored in S3. An AWS Lambda function that runs when new documents are uploaded provides event-driven, serverless processing of the scanned files. Amazon Textract converts the documents to raw text using accurate optical character recognition (OCR), including extraction of structured data such as tables and forms. Amazon Comprehend Medical is a natural language processing (NLP) service that uses machine learning pre-trained to understand and extract health data from medical text.

Option A is incorrect because writing the document information to an Amazon EC2 instance that runs a MySQL database increases infrastructure overhead and complexity, and it may not handle large volumes of data. Option C is incorrect because an Auto Scaling group of EC2 instances running a custom application also increases infrastructure overhead and does not leverage the managed AI services Textract and Comprehend Medical. Option D is incorrect because Amazon Rekognition provides image and video analysis but does not support OCR or extraction of structured data from documents, and Amazon Transcribe Medical is a speech-to-text service for medical conversations, not a text analysis service.
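
A sketch of how options B and E could fit together in a single Lambda handler, assuming single-page image uploads (the synchronous Textract API does not accept multi-page PDFs), text within Comprehend Medical's size limit, and an assumed results bucket name:

```python
import json
import boto3

textract = boto3.client("textract")
medical = boto3.client("comprehendmedical")
s3 = boto3.client("s3")

def handler(event, context):
    # Triggered by the S3 upload event for a newly scanned document.
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]

    # OCR the scanned page with Textract.
    ocr = textract.detect_document_text(
        Document={"S3Object": {"Bucket": bucket, "Name": key}}
    )
    text = " ".join(
        b["Text"] for b in ocr["Blocks"] if b["BlockType"] == "LINE"
    )

    # Extract medical entities with Comprehend Medical.
    entities = medical.detect_entities_v2(Text=text)["Entities"]

    # Store the result as JSON in a results bucket for Athena to query.
    s3.put_object(
        Bucket="medical-records-results",  # assumed bucket name
        Key=key + ".json",
        Body=json.dumps({"text": text, "entities": entities}, default=str),
    )
```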

Reference:
https://aws.amazon.com/s3/
https://aws.amazon.com/athena/
https://aws.amazon.com/lambda/
https://aws.amazon.com/textract/
https://aws.amazon.com/comprehend/medical/


A solutions architect is optimizing a website for an upcoming musical event. Videos of the performances will be streamed in real time and then will be available on demand. The event is expected to attract a global online audience. Which service will improve the performance of both the real-time and on-demand streaming?

A. Amazon CloudFront
B. AWS Global Accelerator
C. Amazon Route 53
D. Amazon S3 Transfer Acceleration
Suggested answer: A

Explanation:

You can use CloudFront to deliver video on demand (VOD) or live streaming video using any HTTP origin. One way you can set up video workflows in the cloud is by using CloudFront together with AWS Media Services. https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/on-demand-streaming-video.html
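
For illustration, a distribution for the on-demand assets could be created with boto3 along these lines; the origin bucket domain name is a placeholder, and the cache policy ID shown is assumed to be the managed CachingOptimized policy documented by AWS:

```python
import time
import boto3

cloudfront = boto3.client("cloudfront")

# ID of the managed "CachingOptimized" cache policy (per CloudFront docs).
CACHING_OPTIMIZED = "658327ea-f89d-4fab-a63d-7e88639e58f6"

cloudfront.create_distribution(
    DistributionConfig={
        "CallerReference": str(time.time()),  # any unique string
        "Comment": "VOD delivery for event recordings",
        "Enabled": True,
        "Origins": {
            "Quantity": 1,
            "Items": [
                {
                    "Id": "vod-origin",
                    "DomainName": "vod-bucket.s3.amazonaws.com",  # assumed bucket
                    "S3OriginConfig": {"OriginAccessIdentity": ""},
                }
            ],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": "vod-origin",
            "ViewerProtocolPolicy": "redirect-to-https",
            "CachePolicyId": CACHING_OPTIMIZED,
        },
    }
)
```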

A company wants to migrate its MySQL database from on premises to AWS. The company recently experienced a database outage that significantly impacted the business. To ensure this does not happen again, the company wants a reliable database solution on AWS that minimizes data loss and stores every transaction on at least two nodes. Which solution meets these requirements?

A. Create an Amazon RDS DB instance with synchronous replication to three nodes in three Availability Zones.
B. Create an Amazon RDS MySQL DB instance with Multi-AZ functionality enabled to synchronously replicate the data.
C. Create an Amazon RDS MySQL DB instance and then create a read replica in a separate AWS Region that synchronously replicates the data.
D. Create an Amazon EC2 instance with a MySQL engine installed that triggers an AWS Lambda function to synchronously replicate the data to an Amazon RDS MySQL DB instance.
Suggested answer: B

Explanation:

Q: What does Amazon RDS manage on my behalf?

Amazon RDS manages the work involved in setting up a relational database: from provisioning the infrastructure capacity you request to installing the database software. Once your database is up and running, Amazon RDS automates common administrative tasks such as performing backups and patching the software that powers your database. With optional Multi-AZ deployments, Amazon RDS also manages synchronous data replication across Availability Zones with automatic failover.

https://aws.amazon.com/rds/faqs/
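
A hedged sketch of answer B; the identifier and sizing are illustrative, and in practice the password would come from AWS Secrets Manager rather than source code:

```python
import boto3

rds = boto3.client("rds")

# Multi-AZ keeps a synchronous standby replica in a second Availability
# Zone, so every committed transaction exists on at least two nodes.
rds.create_db_instance(
    DBInstanceIdentifier="app-mysql",    # assumed identifier
    Engine="mysql",
    DBInstanceClass="db.m6g.large",      # assumed instance class
    AllocatedStorage=100,
    MasterUsername="admin",
    MasterUserPassword="change-me-now",  # placeholder; use Secrets Manager
    MultiAZ=True,
)
```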


An ecommerce company hosts its analytics application in the AWS Cloud. The application generates about 300 MB of data each month. The data is stored in JSON format. The company is evaluating a disaster recovery solution to back up the data. The data must be accessible in milliseconds if it is needed, and the data must be kept for 30 days. Which solution meets these requirements MOST cost-effectively?

A. Amazon OpenSearch Service (Amazon Elasticsearch Service)
B. Amazon S3 Glacier
C. Amazon S3 Standard
D. Amazon RDS for PostgreSQL
Suggested answer: C
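
The 30-day retention requirement maps naturally to an S3 lifecycle expiration rule; a sketch with an assumed bucket name:

```python
import boto3

s3 = boto3.client("s3")

# Expire backup objects once the 30-day retention window has passed.
s3.put_bucket_lifecycle_configuration(
    Bucket="analytics-dr-backups",  # assumed bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-after-30-days",
                "Filter": {"Prefix": ""},  # apply to the whole bucket
                "Status": "Enabled",
                "Expiration": {"Days": 30},
            }
        ]
    },
)
```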

A company has a Windows-based application that must be migrated to AWS. The application requires the use of a shared Windows file system attached to multiple Amazon EC2 Windows instances that are deployed across multiple Availability Zones.

What should a solutions architect do to meet this requirement?

A. Configure AWS Storage Gateway in volume gateway mode. Mount the volume to each Windows instance.
B. Configure Amazon FSx for Windows File Server. Mount the Amazon FSx file system to each Windows instance.
C. Configure a file system by using Amazon Elastic File System (Amazon EFS). Mount the EFS file system to each Windows instance.
D. Configure an Amazon Elastic Block Store (Amazon EBS) volume with the required size. Attach each EC2 instance to the volume. Mount the file system within the volume to each Windows instance.
Suggested answer: B
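
A sketch of creating the Multi-AZ file system for answer B; the subnet IDs, directory ID, and sizing are placeholders. Each Windows instance can then map the share over SMB using the file system's DNS name (for example, with net use):

```python
import boto3

fsx = boto3.client("fsx")

# Multi-AZ FSx for Windows file system joined to the company directory,
# so instances in multiple Availability Zones share one SMB file system.
fsx.create_file_system(
    FileSystemType="WINDOWS",
    StorageCapacity=1024,  # GiB, illustrative
    SubnetIds=["subnet-aaaa1111", "subnet-bbbb2222"],
    WindowsConfiguration={
        "ActiveDirectoryId": "d-1234567890",   # assumed directory ID
        "DeploymentType": "MULTI_AZ_1",
        "PreferredSubnetId": "subnet-aaaa1111",
        "ThroughputCapacity": 32,              # MB/s, illustrative
    },
)
```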

A solutions architect is creating a new Amazon CloudFront distribution for an application. Some of the information submitted by users is sensitive. The application uses HTTPS but needs another layer of security. The sensitive information should be protected throughout the entire application stack, and access to the information should be restricted to certain applications. Which action should the solutions architect take?

A. Configure a CloudFront signed URL.
B. Configure a CloudFront signed cookie.
C. Configure a CloudFront field-level encryption profile.
D. Configure CloudFront and set the Origin Protocol Policy setting to HTTPS Only for the Viewer Protocol Policy.
Suggested answer: C

Explanation:

https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/field-level-encryption.html

"With Amazon CloudFront, you can enforce secure end-to-end connections to origin servers by using HTTPS. Field-level encryption adds an additional layer of security that lets you protect specific data throughout system processing so that only certain applications can see it."
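
A hedged sketch of registering a public key and creating the field-level encryption profile with boto3; the key material, names, provider ID, and field pattern are all illustrative:

```python
import boto3

cloudfront = boto3.client("cloudfront")

# Register the RSA public key. Only the authorized application holding the
# matching private key can decrypt the protected fields. PEM is a placeholder.
key = cloudfront.create_public_key(
    PublicKeyConfig={
        "CallerReference": "fle-key-1",
        "Name": "sensitive-fields-key",
        "EncodedKey": "-----BEGIN PUBLIC KEY-----\n...\n-----END PUBLIC KEY-----",
    }
)

# Profile that encrypts matching POST form fields at the edge.
cloudfront.create_field_level_encryption_profile(
    FieldLevelEncryptionProfileConfig={
        "Name": "sensitive-fields-profile",
        "CallerReference": "fle-profile-1",
        "EncryptionEntities": {
            "Quantity": 1,
            "Items": [
                {
                    "PublicKeyId": key["PublicKey"]["Id"],
                    "ProviderId": "example-provider",  # illustrative
                    "FieldPatterns": {"Quantity": 1, "Items": ["ssn"]},
                }
            ],
        },
    }
)
```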

A company is planning to move its data to an Amazon S3 bucket. The data must be encrypted when it is stored in the S3 bucket. Additionally, the encryption key must be automatically rotated every year. Which solution will meet these requirements with the LEAST operational overhead?

A. Move the data to the S3 bucket. Use server-side encryption with Amazon S3 managed encryption keys (SSE-S3). Use the built-in key rotation behavior of SSE-S3 encryption keys.
B. Create an AWS Key Management Service (AWS KMS) customer managed key. Enable automatic key rotation. Set the S3 bucket's default encryption behavior to use the customer managed KMS key. Move the data to the S3 bucket.
C. Create an AWS Key Management Service (AWS KMS) customer managed key. Set the S3 bucket's default encryption behavior to use the customer managed KMS key. Move the data to the S3 bucket. Manually rotate the KMS key every year.
D. Encrypt the data with customer key material before moving the data to the S3 bucket. Create an AWS Key Management Service (AWS KMS) key without key material. Import the customer key material into the KMS key. Enable automatic key rotation.
Suggested answer: B

Explanation:

SSE-S3 is free and uses AWS owned CMKs (CMK = Customer Master Key). The encryption key is owned and managed by AWS and is shared among many accounts. Rotation is automatic, on a schedule that AWS does not explicitly define.

SSE-KMS has two flavors:
- AWS managed CMK: a free CMK generated only for your account. You can view its policies and audit its usage, but you cannot manage it. Rotation is automatic, once every 1,095 days (3 years).
- Customer managed CMK: a key that you create and manage yourself. Rotation is not enabled by default, but if you enable it, the key is rotated automatically every year. This variant can also use key material that you import; a key created with imported material supports only manual rotation, not automatic rotation.

SSE-C uses a customer-provided key that you fully manage outside of AWS. AWS will not rotate it.

This solution meets the requirements of moving data to an Amazon S3 bucket, encrypting the data when it is stored in the S3 bucket, and automatically rotating the encryption key every year with the least operational overhead. AWS Key Management Service (AWS KMS) is a service that enables you to create and manage encryption keys for your data. A customer managed key is a symmetric encryption key that you create and manage in AWS KMS. You can enable automatic key rotation for a customer managed key, which means that AWS KMS generates new cryptographic material for the key every year. You can set the S3 bucket's default encryption behavior to use the customer managed KMS key, which means that any object that is uploaded to the bucket without specifying an encryption method will be encrypted with that key.

Option A is incorrect because using server-side encryption with Amazon S3 managed encryption keys (SSE-S3) does not allow you to control or manage the encryption keys. SSE-S3 uses a unique key for each object, and encrypts that key with a master key that is regularly rotated by S3. However, you cannot enable or disable key rotation for SSE-S3 keys, or specify the rotation interval. Option C is incorrect because manually rotating the KMS key every year can increase the operational overhead and complexity, and it may not meet the requirement of rotating the key every year if you forget or delay the rotation process. Option D is incorrect because encrypting the data with customer key material before moving the data to the S3 bucket can increase the operational overhead and complexity, and it may not provide consistent encryption for all objects in the bucket. Creating a KMS key without key material and importing the customer key material into the KMS key can enable you to use your own source of random bits to generate your KMS keys, but it does not support automatic key rotation.

https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html
https://docs.aws.amazon.com/kms/latest/developerguide/rotate-keys.html
https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucket-encryption.html
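
A minimal sketch of answer B with boto3; the bucket name is assumed:

```python
import boto3

kms = boto3.client("kms")
s3 = boto3.client("s3")

# Customer managed key with yearly automatic rotation.
key_id = kms.create_key(Description="S3 data key")["KeyMetadata"]["KeyId"]
kms.enable_key_rotation(KeyId=key_id)

# Default bucket encryption: every new object is encrypted with the key.
s3.put_bucket_encryption(
    Bucket="company-data-bucket",  # assumed bucket name
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": key_id,
                },
                "BucketKeyEnabled": True,  # reduces KMS request costs
            }
        ]
    },
)
```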

