Amazon SAA-C03 Practice Test - Questions Answers, Page 5

A company's application integrates with multiple software-as-a-service (SaaS) sources for data collection. The company runs Amazon EC2 instances to receive the data and to upload the data to an Amazon S3 bucket for analysis. The same EC2 instance that receives and uploads the data also sends a notification to the user when an upload is complete. The company has noticed slow application performance and wants to improve the performance as much as possible. Which solution will meet these requirements with the LEAST operational overhead?

A.
Create an Auto Scaling group so that EC2 instances can scale out. Configure an S3 event notification to send events to an Amazon Simple Notification Service (Amazon SNS) topic when the upload to the S3 bucket is complete.
B.
Create an Amazon AppFlow flow to transfer data between each SaaS source and the S3 bucket. Configure an S3 event notification to send events to an Amazon Simple Notification Service (Amazon SNS) topic when the upload to the S3 bucket is complete.
C.
Create an Amazon EventBridge (Amazon CloudWatch Events) rule for each SaaS source to send output data. Configure the S3 bucket as the rule's target. Create a second EventBridge (CloudWatch Events) rule to send events when the upload to the S3 bucket is complete. Configure an Amazon Simple Notification Service (Amazon SNS) topic as the second rule's target.
D.
Create a Docker container to use instead of an EC2 instance. Host the containerized application on Amazon Elastic Container Service (Amazon ECS). Configure Amazon CloudWatch Container Insights to send events to an Amazon Simple Notification Service (Amazon SNS) topic when the upload to the S3 bucket is complete.
Suggested answer: B

Explanation:

Amazon AppFlow is a fully managed integration service that enables you to securely transfer data between Software-as-a-Service (SaaS) applications like Salesforce, SAP, Zendesk, Slack, and ServiceNow, and AWS services like Amazon S3 and Amazon Redshift, in just a few clicks. Because AppFlow replaces the EC2 instances that receive and upload the data, it removes that operational overhead entirely.

Reference: https://aws.amazon.com/appflow/
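
For illustration, the S3-to-SNS half of this answer can be wired up with a short boto3 call. This is a minimal sketch; the bucket name and topic ARN are hypothetical, and the SNS topic's access policy must separately allow the S3 service principal to publish to it.

    import boto3

    s3 = boto3.client("s3")

    # Publish an event to the SNS topic whenever an object upload completes.
    s3.put_bucket_notification_configuration(
        Bucket="example-analysis-bucket",  # hypothetical bucket
        NotificationConfiguration={
            "TopicConfigurations": [
                {
                    "TopicArn": "arn:aws:sns:us-east-1:123456789012:upload-complete",
                    "Events": ["s3:ObjectCreated:*"],
                }
            ]
        },
    )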

A company runs a highly available image-processing application on Amazon EC2 instances in a single VPC. The EC2 instances run inside several subnets across multiple Availability Zones. The EC2 instances do not communicate with each other. However, the EC2 instances download images from Amazon S3 and upload images to Amazon S3 through a single NAT gateway. The company is concerned about data transfer charges. What is the MOST cost-effective way for the company to avoid Regional data transfer charges?

A.
Launch the NAT gateway in each Availability Zone
B.
Replace the NAT gateway with a NAT instance
C.
Deploy a gateway VPC endpoint for Amazon S3
D.
Provision an EC2 Dedicated Host to run the EC2 instances
Suggested answer: C

Explanation:

A gateway VPC endpoint for Amazon S3 routes S3 traffic from the VPC over the AWS private network instead of through the NAT gateway, so the instances' downloads and uploads no longer incur NAT gateway data processing or Regional data transfer charges. Gateway endpoints for S3 are offered at no additional cost, making this the most cost-effective option.
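
As a minimal sketch of the fix (VPC ID, route table ID, and Region are hypothetical), a gateway endpoint for S3 can be created with boto3:

    import boto3

    ec2 = boto3.client("ec2")

    # Gateway endpoints attach to route tables; S3-bound traffic from the
    # associated subnets then bypasses the NAT gateway entirely.
    ec2.create_vpc_endpoint(
        VpcEndpointType="Gateway",
        VpcId="vpc-0123456789abcdef0",            # hypothetical VPC ID
        ServiceName="com.amazonaws.us-east-1.s3",
        RouteTableIds=["rtb-0123456789abcdef0"],  # hypothetical route table
    )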
A company has an on-premises application that generates a large amount of time-sensitive data that is backed up to Amazon S3. The application has grown, and there are user complaints about internet bandwidth limitations. A solutions architect needs to design a long-term solution that allows for timely backups to Amazon S3 with minimal impact on internet connectivity for internal users. Which solution meets these requirements?

A.
Establish AWS VPN connections and proxy all traffic through a VPC gateway endpoint
B.
Establish a new AWS Direct Connect connection and direct backup traffic through this new connection.
C.
Order daily AWS Snowball devices. Load the data onto the Snowball devices and return the devices to AWS each day.
D.
Submit a support ticket through the AWS Management Console. Request the removal of S3 service limits from the account.
Suggested answer: B

Explanation:

To address the issue of bandwidth limitations on the company's on-premises application, and to minimize the impact on internal user connectivity, a new AWS Direct Connect connection should be established to direct backup traffic through this new connection. This solution will offer a secure, high-speed connection between the company's data center and AWS, which will allow the company to transfer data quickly without consuming internet bandwidth.

Reference: AWS Direct Connect documentation: https://aws.amazon.com/directconnect/
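
Ordering a Direct Connect connection is mostly a physical and console-driven workflow, but the initial connection request can also be made through the API. A hedged sketch, with a hypothetical Direct Connect location code and connection name:

    import boto3

    dx = boto3.client("directconnect")

    # Requests a dedicated 1 Gbps connection; AWS then issues a Letter of
    # Authorization, and the cross-connect and virtual interfaces must be
    # set up before backup traffic can flow over the link.
    dx.create_connection(
        location="EqDC2",                  # hypothetical location code
        bandwidth="1Gbps",
        connectionName="backup-traffic-dx",
    )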


A company has an Amazon S3 bucket that contains critical data. The company must protect the data from accidental deletion. Which combination of steps should a solutions architect take to meet these requirements? (Choose two.)

A.
Enable versioning on the S3 bucket.
B.
Enable MFA Delete on the S3 bucket.
C.
Create a bucket policy on the S3 bucket.
D.
Enable default encryption on the S3 bucket.
E.
Create a lifecycle policy for the objects in the S3 bucket.
Suggested answer: A, B

Explanation:

To protect data in an S3 bucket from accidental deletion, versioning should be enabled, which lets you preserve, retrieve, and restore every version of every object in the bucket. Additionally, enabling MFA (multi-factor authentication) Delete on the S3 bucket adds an extra layer of protection by requiring an authentication code from an MFA device, in addition to the user's access keys, to permanently delete objects in the bucket.

Reference: AWS S3 Versioning documentation: https://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html

AWS S3 MFA Delete documentation: https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMFADelete.html
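
For illustration, both steps map to the same boto3 call. The bucket name and MFA device are hypothetical; note that MFA Delete can only be enabled by the bucket owner (the root account) and requires a current MFA code.

    import boto3

    s3 = boto3.client("s3")

    # Step 1: enable versioning (any authorized principal can do this).
    s3.put_bucket_versioning(
        Bucket="example-critical-bucket",
        VersioningConfiguration={"Status": "Enabled"},
    )

    # Step 2: enable MFA Delete; the MFA parameter is the device serial
    # number (or ARN) followed by a space and the current 6-digit code.
    s3.put_bucket_versioning(
        Bucket="example-critical-bucket",
        VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
        MFA="arn:aws:iam::123456789012:mfa/root-device 123456",  # hypothetical
    )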


A company has a data ingestion workflow that consists of the following:

An Amazon Simple Notification Service (Amazon SNS) topic for notifications about new data deliveries

An AWS Lambda function to process the data and record metadata

The company observes that the ingestion workflow fails occasionally because of network connectivity issues. When such a failure occurs, the Lambda function does not ingest the corresponding data unless the company manually reruns the job. Which combination of actions should a solutions architect take to ensure that the Lambda function ingests all data in the future? (Select TWO.)

A.
Configure the Lambda function in multiple Availability Zones.
B.
Create an Amazon Simple Queue Service (Amazon SQS) queue, and subscribe it to the SNS topic.
C.
Increase the CPU and memory that are allocated to the Lambda function.
D.
Increase provisioned throughput for the Lambda function.
E.
Modify the Lambda function to read from an Amazon Simple Queue Service (Amazon SQS) queue.
Suggested answer: B, E

Explanation:

To ensure that the Lambda function ingests all data in the future despite occasional network connectivity issues, the following actions should be taken: create an Amazon Simple Queue Service (Amazon SQS) queue and subscribe it to the SNS topic, and modify the Lambda function to read from that queue. The queue durably buffers each notification, so a message that fails to process is retained and retried rather than lost, and the Lambda function catches up automatically once connectivity is restored.
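
A minimal boto3 sketch of both actions (queue, topic, and function names are hypothetical, and the queue's access policy must separately allow SNS to send messages to it):

    import boto3

    sqs = boto3.client("sqs")
    sns = boto3.client("sns")
    lam = boto3.client("lambda")

    # Create the durable buffer between SNS and Lambda.
    queue_url = sqs.create_queue(QueueName="ingestion-queue")["QueueUrl"]
    queue_arn = sqs.get_queue_attributes(
        QueueUrl=queue_url, AttributeNames=["QueueArn"]
    )["Attributes"]["QueueArn"]

    # Action B: subscribe the queue to the existing SNS topic.
    sns.subscribe(
        TopicArn="arn:aws:sns:us-east-1:123456789012:new-data",
        Protocol="sqs",
        Endpoint=queue_arn,
    )

    # Action E: have the Lambda function poll the queue instead of being
    # invoked directly by SNS; failed batches stay in the queue and retry.
    lam.create_event_source_mapping(
        EventSourceArn=queue_arn,
        FunctionName="ingest-data",
        BatchSize=10,
    )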


A company has an application that provides marketing services to stores. The services are based on previous purchases by store customers. The stores upload transaction data to the company through SFTP, and the data is processed and analyzed to generate new marketing offers. Some of the files can exceed 200 GB in size.

Recently, the company discovered that some of the stores have uploaded files that contain personally identifiable information (PII) that should not have been included. The company wants administrators to be alerted if PII is shared again. The company also wants to automate remediation.

What should a solutions architect do to meet these requirements with the LEAST development effort?

A.
Use an Amazon S3 bucket as a secure transfer point. Use Amazon Inspector to scan the objects in the bucket. If objects contain PII, trigger an S3 Lifecycle policy to remove the objects that contain PII.
B.
Use an Amazon S3 bucket as a secure transfer point. Use Amazon Macie to scan the objects in the bucket. If objects contain PII, use Amazon Simple Notification Service (Amazon SNS) to trigger a notification to the administrators to remove the objects that contain PII.
C.
Implement custom scanning algorithms in an AWS Lambda function. Trigger the function when objects are loaded into the bucket. If objects contain PII, use Amazon Simple Notification Service (Amazon SNS) to trigger a notification to the administrators to remove the objects that contain PII.
D.
Implement custom scanning algorithms in an AWS Lambda function. Trigger the function when objects are loaded into the bucket. If objects contain PII, use Amazon Simple Email Service (Amazon SES) to trigger a notification to the administrators and trigger an S3 Lifecycle policy to remove the objects that contain PII.
Suggested answer: B

Explanation:

Amazon Macie is a fully managed data security and data privacy service that uses machine learning and pattern matching to discover and protect sensitive data, such as PII, in Amazon S3. Amazon Inspector assesses EC2 instances and container workloads for software vulnerabilities and does not scan S3 objects for PII, so option A is not viable. Using Macie to scan the bucket and Amazon SNS to alert administrators meets the requirements with the least development effort.
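
For illustration, a hedged boto3 sketch of the Macie scan plus an EventBridge rule that forwards Macie findings to SNS (the account ID, bucket, job, and topic names are hypothetical):

    import json

    import boto3

    macie = boto3.client("macie2")
    events = boto3.client("events")

    macie.enable_macie()

    # Scan the transfer bucket for sensitive data such as PII every day.
    macie.create_classification_job(
        jobType="SCHEDULED",
        scheduleFrequency={"dailySchedule": {}},
        name="pii-scan",
        s3JobDefinition={
            "bucketDefinitions": [
                {"accountId": "123456789012", "buckets": ["example-sftp-bucket"]}
            ]
        },
    )

    # Alert administrators by routing Macie findings to an SNS topic.
    events.put_rule(
        Name="macie-findings-to-sns",
        EventPattern=json.dumps(
            {"source": ["aws.macie"], "detail-type": ["Macie Finding"]}
        ),
    )
    events.put_targets(
        Rule="macie-findings-to-sns",
        Targets=[{"Id": "sns", "Arn": "arn:aws:sns:us-east-1:123456789012:pii-alerts"}],
    )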
A company needs guaranteed Amazon EC2 capacity in three specific Availability Zones in a specific AWS Region for an upcoming event that will last 1 week. What should the company do to guarantee the EC2 capacity?

A.
Purchase Reserved Instances that specify the Region needed
B.
Create an On-Demand Capacity Reservation that specifies the Region needed
C.
Purchase Reserved Instances that specify the Region and three Availability Zones needed
D.
Create an On-Demand Capacity Reservation that specifies the Region and three Availability Zones needed
Suggested answer: D

Explanation:

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-capacity-reservations.html

Reserved Instances are not suitable here: you would have to pay for a whole 1-year or 3-year term, which is not cost-effective for a 1-week event. An On-Demand Capacity Reservation reserves capacity in specific Availability Zones, takes effect immediately, and can be cancelled as soon as the event ends. Capacity Reservations are created per Availability Zone, so option B, which specifies only the Region, would not guarantee capacity in the three required zones.
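
A minimal sketch of the reservation calls (instance type, count, AZs, and end date are hypothetical):

    import boto3
    from datetime import datetime, timezone

    ec2 = boto3.client("ec2")

    # One zonal reservation per Availability Zone; with EndDateType
    # 'limited', the capacity is released automatically after the event.
    for az in ["us-east-1a", "us-east-1b", "us-east-1c"]:
        ec2.create_capacity_reservation(
            InstanceType="m5.large",
            InstancePlatform="Linux/UNIX",
            AvailabilityZone=az,
            InstanceCount=10,
            EndDateType="limited",
            EndDate=datetime(2025, 7, 8, tzinfo=timezone.utc),
        )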

A company's website uses an Amazon EC2 instance store for its catalog of items. The company wants to make sure that the catalog is highly available and that the catalog is stored in a durable location. What should a solutions architect do to meet these requirements?

A.
Move the catalog to Amazon ElastiCache for Redis.
B.
Deploy a larger EC2 instance with a larger instance store.
C.
Move the catalog from the instance store to Amazon S3 Glacier Deep Archive.
D.
Move the catalog to an Amazon Elastic File System (Amazon EFS) file system.
Suggested answer: D

Explanation:

Moving the catalog to an Amazon Elastic File System (Amazon EFS) file system provides both high availability and durability. Amazon EFS is a fully managed, highly available, and durable file system that is built to scale on demand. With Amazon EFS, the catalog data can be stored and accessed from multiple EC2 instances in different Availability Zones, ensuring high availability, and files are automatically stored redundantly within and across multiple Availability Zones, making it a durable storage option. An instance store, by contrast, is ephemeral and its data is lost when the instance stops or fails.
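
For illustration, a hedged boto3 sketch that creates the file system and one mount target per Availability Zone (subnet and security group IDs are hypothetical); each EC2 instance then mounts the file system over NFS:

    import boto3

    efs = boto3.client("efs")

    fs = efs.create_file_system(
        CreationToken="catalog-fs",  # idempotency token
        PerformanceMode="generalPurpose",
        Encrypted=True,
    )

    # A mount target in each Availability Zone lets instances in any AZ
    # read and write the same durable copy of the catalog.
    for subnet in ["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"]:
        efs.create_mount_target(
            FileSystemId=fs["FileSystemId"],
            SubnetId=subnet,
            SecurityGroups=["sg-0123456789abcdef0"],
        )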


A company stores call transcript files on a monthly basis. Users access the files randomly within 1 year of the call but infrequently after 1 year. The company wants to optimize its solution by giving users the ability to query and retrieve files that are less than 1 year old as quickly as possible. A delay in retrieving older files is acceptable. Which solution will meet these requirements MOST cost-effectively?

A.
Store individual files with tags in Amazon S3 Glacier Instant Retrieval. Query the tags to retrieve the files from S3 Glacier Instant Retrieval.
B.
Store individual files in Amazon S3 Intelligent-Tiering. Use S3 Lifecycle policies to move the files to S3 Glacier Flexible Retrieval after 1 year. Query and retrieve the files that are in Amazon S3 by using Amazon Athena. Query and retrieve the files that are in S3 Glacier by using S3 Glacier Select.
C.
Store individual files with tags in Amazon S3 Standard storage. Store search metadata for each archive in Amazon S3 Standard storage. Use S3 Lifecycle policies to move the files to S3 Glacier Instant Retrieval after 1 year. Query and retrieve the files by searching for metadata from Amazon S3.
D.
Store individual files in Amazon S3 Standard storage. Use S3 Lifecycle policies to move the files to S3 Glacier Deep Archive after 1 year. Store search metadata in Amazon RDS. Query the files from Amazon RDS. Retrieve the files from S3 Glacier Deep Archive.
Suggested answer: B

Explanation:

"For archive data that needs immediate access, such as medical images, news media assets, or genomics data, choose the S3 Glacier Instant Retrieval storage class, an archive storage class that delivers the lowest cost storage with milliseconds retrieval. For archive data that does not require immediate access but needs the flexibility to retrieve large sets of data at no cost, such as backup or disaster recovery use cases, choose S3 Glacier Flexible Retrieval (formerly S3 Glacier), with retrieval in minutes or free bulk retrievals in 5-12 hours." https://aws.amazon.com/about-aws/whats-new/2021/11/amazon-s3-glacier-instant-retrieval-storage-class/

A company has a production workload that runs on 1,000 Amazon EC2 Linux instances. The workload is powered by third-party software. The company needs to patch the third-party software on all EC2 instances as quickly as possible to remediate a critical security vulnerability.

What should a solutions architect do to meet these requirements?

A.
Create an AWS Lambda function to apply the patch to all EC2 instances.
B.
Configure AWS Systems Manager Patch Manager to apply the patch to all EC2 instances.
C.
Schedule an AWS Systems Manager maintenance window to apply the patch to all EC2 instances.
D.
Use AWS Systems Manager Run Command to run a custom command that applies the patch to all EC2 instances.
Suggested answer: D

Explanation:

AWS Systems Manager Run Command lets you remotely and safely run a custom command, such as a vendor-supplied patch script, across all 1,000 instances at once, which is the fastest way to remediate the vulnerability. Patch Manager focuses on operating system and supported package patching rather than arbitrary third-party software, and a maintenance window would schedule the patch for later rather than apply it as quickly as possible.
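
A hedged sketch of the Run Command call (the tag, patch script path, and rollout thresholds are hypothetical):

    import boto3

    ssm = boto3.client("ssm")

    # Run the vendor patch on every tagged instance, rolling out to 10% of
    # the fleet at a time and stopping if more than 1% of commands fail.
    ssm.send_command(
        Targets=[{"Key": "tag:Workload", "Values": ["production"]}],
        DocumentName="AWS-RunShellScript",
        Parameters={"commands": ["sudo /opt/vendor/apply-patch.sh"]},
        MaxConcurrency="10%",
        MaxErrors="1%",
    )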