
Amazon SAA-C03 Practice Test - Questions Answers, Page 44


A company is creating an application that runs on containers in a VPC. The application stores and accesses data in an Amazon S3 bucket. During the development phase, the application will store and access 1 TB of data in Amazon S3 each day. The company wants to minimize costs and wants to prevent traffic from traversing the internet whenever possible.

Which solution will meet these requirements?

A. Enable S3 Intelligent-Tiering for the S3 bucket.
B. Enable S3 Transfer Acceleration for the S3 bucket.
C. Create a gateway VPC endpoint for Amazon S3. Associate this endpoint with all route tables in the VPC.
D. Create an interface endpoint for Amazon S3 in the VPC. Associate this endpoint with all route tables in the VPC.
Suggested answer: C

Explanation:

A gateway VPC endpoint for Amazon S3 enables private connections between the VPC and Amazon S3 that do not require an internet gateway or NAT device. This minimizes costs and prevents traffic from traversing the internet. A gateway VPC endpoint uses a prefix list as the route target in a VPC route table to route traffic privately to Amazon S3. Associating the endpoint with all route tables in the VPC ensures that all subnets can access Amazon S3 through the endpoint.
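The setup above can be sketched as the parameters such a request would take, in the shape accepted by boto3's `ec2` `create_vpc_endpoint` call. This is an illustrative sketch only: the VPC ID, Region, and route table IDs are placeholders, not values from the question.

```python
import json

# Illustrative parameters for creating a gateway VPC endpoint for Amazon S3.
# The VPC ID, Region, and route table IDs below are placeholders.
endpoint_request = {
    "VpcEndpointType": "Gateway",
    "VpcId": "vpc-0123456789abcdef0",             # placeholder VPC
    "ServiceName": "com.amazonaws.us-east-1.s3",  # S3 in the VPC's Region
    # Associating every route table ensures all subnets reach S3 privately.
    "RouteTableIds": ["rtb-0aaa111122223333a", "rtb-0bbb444455556666b"],
}

print(json.dumps(endpoint_request, indent=2))
```

Because the endpoint is a route-table target rather than an ENI, no instance-side changes are needed; the same S3 API calls simply stop traversing the internet.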

Option A is incorrect because S3 Intelligent-Tiering is a storage class that optimizes storage costs by automatically moving objects between two access tiers based on changing access patterns. It does not affect the network traffic between the VPC and Amazon S3.

Option B is incorrect because S3 Transfer Acceleration is a feature that enables fast, easy, and secure transfers of files over long distances between clients and an S3 bucket. It does not prevent traffic from traversing the internet.

Option D is incorrect because an interface VPC endpoint for Amazon S3 is powered by AWS PrivateLink, which requires an elastic network interface (ENI) with a private IP address in each subnet. This adds complexity and cost to the solution compared with a gateway endpoint, which has no additional charge.

Reference:

https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-s3.html

https://docs.aws.amazon.com/AmazonS3/latest/userguide/storage-class-intro.html#sc-dynamic-data-access

https://docs.aws.amazon.com/AmazonS3/latest/userguide/transfer-acceleration.html

https://aws.amazon.com/blogs/architecture/choosing-your-vpc-endpoint-strategy-for-amazon-s3/

A company stores data in PDF format in an Amazon S3 bucket. The company must follow a legal requirement to retain all new and existing data in Amazon S3 for 7 years.

Which solution will meet these requirements with the LEAST operational overhead?

A. Turn on the S3 Versioning feature for the S3 bucket. Configure S3 Lifecycle to delete the data after 7 years. Configure multi-factor authentication (MFA) delete for all S3 objects.
B. Turn on S3 Object Lock with governance retention mode for the S3 bucket. Set the retention period to expire after 7 years. Recopy all existing objects to bring the existing data into compliance.
C. Turn on S3 Object Lock with compliance retention mode for the S3 bucket. Set the retention period to expire after 7 years. Recopy all existing objects to bring the existing data into compliance.
D. Turn on S3 Object Lock with compliance retention mode for the S3 bucket. Set the retention period to expire after 7 years. Use S3 Batch Operations to bring the existing data into compliance.
Suggested answer: C

Explanation:

S3 Object Lock enables a write-once-read-many (WORM) model for objects stored in Amazon S3. It can help prevent objects from being deleted or overwritten for a fixed amount of time or indefinitely. S3 Object Lock has two retention modes: governance mode and compliance mode. Compliance mode provides the highest level of protection and prevents any user, including the root user, from deleting or modifying an object version until the retention period expires. To use S3 Object Lock, a new bucket with Object Lock enabled must be created, and a default retention period can optionally be configured for objects placed in the bucket. To bring existing objects into compliance, they must be recopied into the bucket with a retention period specified.
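Applying a compliance-mode retention period to an object can be sketched as the parameters of boto3's `s3` `put_object_retention` call. This is a hedged sketch: the bucket name, key, and the specific retain-until date are placeholders, not values from the question.

```python
import json
from datetime import datetime, timezone

# Sketch of put_object_retention parameters that lock one object version
# in compliance mode. Bucket, key, and the 7-years-out date are placeholders.
retention_request = {
    "Bucket": "example-legal-archive",
    "Key": "records/2023/statement.pdf",
    "Retention": {
        "Mode": "COMPLIANCE",  # cannot be shortened or removed, even by root
        "RetainUntilDate": datetime(2030, 12, 31, tzinfo=timezone.utc).isoformat(),
    },
}

print(json.dumps(retention_request, indent=2))
```

Recopying an existing object with such a retention setting is what brings pre-existing data under the lock.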

Option A is incorrect because S3 Versioning and S3 Lifecycle do not provide WORM protection for objects. Moreover, MFA delete only applies to deleting object versions, not modifying them.

Option B is incorrect because governance mode allows users with special permissions to override or remove the retention settings or delete the object if necessary. This does not meet the legal requirement of retaining all data for 7 years.

Option D is incorrect because S3 Batch Operations cannot be used to apply compliance mode retention periods to existing objects. S3 Batch Operations can only apply governance mode retention periods or legal holds.

Reference:

https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock.html

https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock-overview.html

https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock-console.html

https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock-managing.html

An image hosting company uploads its large assets to Amazon S3 Standard buckets. The company uses multipart upload in parallel by using S3 APIs and overwrites objects if the same object is uploaded again. For the first 30 days after upload, the objects will be accessed frequently. The objects will be used less frequently after 30 days, but the access patterns for each object will be inconsistent. The company must optimize its S3 storage costs while maintaining high availability and resiliency of stored assets.

Which combination of actions should a solutions architect recommend to meet these requirements? (Select TWO.)

A. Move assets to S3 Intelligent-Tiering after 30 days.
B. Configure an S3 Lifecycle policy to clean up incomplete multipart uploads.
C. Configure an S3 Lifecycle policy to clean up expired object delete markers.
D. Move assets to S3 Standard-Infrequent Access (S3 Standard-IA) after 30 days.
E. Move assets to S3 One Zone-Infrequent Access (S3 One Zone-IA) after 30 days.
Suggested answer: A, B

Explanation:

S3 Intelligent-Tiering is a storage class that automatically moves data to the most cost-effective access tier based on access frequency, without performance impact, retrieval fees, or operational overhead. It is ideal for data with unknown or changing access patterns, such as the company's assets. By moving assets to S3 Intelligent-Tiering after 30 days, the company can optimize its storage costs while maintaining high availability and resilience of stored assets.

S3 Lifecycle is a feature that enables you to manage your objects so that they are stored cost-effectively throughout their lifecycle. You can create lifecycle rules to define actions that Amazon S3 applies to a group of objects. One of the actions is to abort incomplete multipart uploads, which can occur when an upload is interrupted. By configuring an S3 Lifecycle policy to clean up incomplete multipart uploads, the company can reduce its storage costs and avoid paying for parts that are not used.
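The two recommended actions can be combined in one Lifecycle configuration, sketched below in the shape boto3's `put_bucket_lifecycle_configuration` expects. The rule IDs and the 7-day abort window are assumptions for illustration.

```python
import json

# Sketch of an S3 Lifecycle configuration combining both answers:
# transition to Intelligent-Tiering after 30 days, and abort incomplete
# multipart uploads. Rule IDs and the 7-day window are placeholders.
lifecycle_config = {
    "Rules": [
        {
            "ID": "transition-to-intelligent-tiering",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # empty prefix = whole bucket
            "Transitions": [{"Days": 30, "StorageClass": "INTELLIGENT_TIERING"}],
        },
        {
            "ID": "abort-incomplete-multipart-uploads",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},
            "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
        },
    ]
}

print(json.dumps(lifecycle_config, indent=2))
```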

Option C is incorrect because expired object delete markers are automatically deleted by Amazon S3 and do not incur any storage costs. Therefore, configuring an S3 Lifecycle policy to clean up expired object delete markers will not have any effect on the company's storage costs.

Option D is incorrect because S3 Standard-IA is a storage class for data that is accessed less frequently but requires rapid access when needed. It has a lower storage cost than S3 Standard, but it has a higher retrieval cost and a minimum storage duration charge of 30 days. Therefore, moving assets to S3 Standard-IA after 30 days may not optimize the company's storage costs if the assets are still accessed occasionally.

Option E is incorrect because S3 One Zone-IA is a storage class for data that is accessed less frequently but requires rapid access when needed. It has a lower storage cost than S3 Standard-IA, but it stores data in only one Availability Zone and has less resilience than other storage classes. It also has a higher retrieval cost and a minimum storage duration charge of 30 days. Therefore, moving assets to S3 One Zone-IA after 30 days may not optimize the company's storage costs if the assets are still accessed occasionally or require high availability.

Reference:

https://docs.aws.amazon.com/AmazonS3/latest/userguide/storage-class-intro.html

https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lifecycle-mgmt.html

https://docs.aws.amazon.com/AmazonS3/latest/userguide/delete-or-empty-bucket.html#delete-bucket-considerations

https://docs.aws.amazon.com/AmazonS3/latest/userguide/mpuoverview.html

A company designed a stateless two-tier application that uses Amazon EC2 in a single Availability Zone and an Amazon RDS Multi-AZ DB instance. New company management wants to ensure the application is highly available.

What should a solutions architect do to meet this requirement?

A. Configure the application to use Multi-AZ EC2 Auto Scaling and create an Application Load Balancer.
B. Configure the application to take snapshots of the EC2 instances and send them to a different AWS Region.
C. Configure the application to use Amazon Route 53 latency-based routing to feed requests to the application.
D. Configure Amazon Route 53 rules to handle incoming requests and create a Multi-AZ Application Load Balancer.
Suggested answer: A

Explanation:

https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-add-availability-zone.html

A company hosts an online shopping application that stores all orders in an Amazon RDS for PostgreSQL Single-AZ DB instance. Management wants to eliminate single points of failure and has asked a solutions architect to recommend an approach to minimize database downtime without requiring any changes to the application code.

Which solution meets these requirements?

A. Convert the existing database instance to a Multi-AZ deployment by modifying the database instance and specifying the Multi-AZ option.
B. Create a new RDS Multi-AZ deployment. Take a snapshot of the current RDS instance and restore the new Multi-AZ deployment with the snapshot.
C. Create a read-only replica of the PostgreSQL database in another Availability Zone. Use Amazon Route 53 weighted record sets to distribute requests across the databases.
D. Place the RDS for PostgreSQL database in an Amazon EC2 Auto Scaling group with a minimum group size of two. Use Amazon Route 53 weighted record sets to distribute requests across instances.
Suggested answer: A

Explanation:

https://aws.amazon.com/rds/features/multi-az/ To convert an existing Single-AZ DB Instance to a Multi-AZ deployment, use the 'Modify' option corresponding to your DB Instance in the AWS Management Console.
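The in-place conversion above can be sketched as the arguments boto3's `rds` `modify_db_instance` call would take. The instance identifier is a placeholder, and deferring the change to the maintenance window is one reasonable choice, not the only one.

```python
# Sketch of modify_db_instance arguments that convert a Single-AZ RDS
# instance to Multi-AZ in place. The identifier is a placeholder.
modify_request = {
    "DBInstanceIdentifier": "orders-postgres",
    "MultiAZ": True,
    "ApplyImmediately": False,  # defer to the next maintenance window
}

print(modify_request)
```

Because the instance endpoint does not change, no application code changes are required, which is what the question asks for.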

A company stores data in Amazon S3. According to regulations, the data must not contain personally identifiable information (PII). The company recently discovered that S3 buckets have some objects that contain PII. The company needs to automatically detect PII in S3 buckets and to notify the company's security team.

Which solution will meet these requirements?

A. Use Amazon Macie. Create an Amazon EventBridge rule to filter the SensitiveData event type from Macie findings and to send an Amazon Simple Notification Service (Amazon SNS) notification to the security team.
B. Use Amazon GuardDuty. Create an Amazon EventBridge rule to filter the CRITICAL event type from GuardDuty findings and to send an Amazon Simple Notification Service (Amazon SNS) notification to the security team.
C. Use Amazon Macie. Create an Amazon EventBridge rule to filter the SensitiveData:S3Object/Personal event type from Macie findings and to send an Amazon Simple Queue Service (Amazon SQS) notification to the security team.
D. Use Amazon GuardDuty. Create an Amazon EventBridge rule to filter the CRITICAL event type from GuardDuty findings and to send an Amazon Simple Queue Service (Amazon SQS) notification to the security team.
Suggested answer: A

Explanation:

Amazon Macie discovers and reports sensitive data, such as PII, in Amazon S3. Macie can send its findings to Amazon EventBridge, which is a serverless event bus that makes it easy to connect applications using data from a variety of sources. You can create an EventBridge rule that filters the SensitiveData event type from Macie findings and sends an Amazon SNS notification to the security team. Amazon SNS is a fully managed messaging service that enables you to send messages to subscribers or other applications.
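The filtering step can be sketched as an EventBridge event pattern. This is an illustrative sketch: the prefix-style match on the finding type is an assumption about how one might scope the rule, not an exact value from the text.

```python
import json

# Sketch of an EventBridge event pattern matching Macie sensitive-data
# findings. The prefix filter is an illustrative assumption.
event_pattern = {
    "source": ["aws.macie"],
    "detail-type": ["Macie Finding"],
    "detail": {"type": [{"prefix": "SensitiveData"}]},
}

print(json.dumps(event_pattern, indent=2))
```

A rule with this pattern would then list an SNS topic (subscribed to by the security team) as its target.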

Reference: https://docs.aws.amazon.com/macie/latest/userguide/macie-findings.html#macie-findings-eventbridge

A company provides an API interface to customers so the customers can retrieve their financial information. The company expects a large number of requests during peak usage times of the year.

The company requires the API to respond consistently with low latency to ensure customer satisfaction. The company needs to provide a compute host for the API.

Which solution will meet these requirements with the LEAST operational overhead?

A. Use an Application Load Balancer and Amazon Elastic Container Service (Amazon ECS).
B. Use Amazon API Gateway and AWS Lambda functions with provisioned concurrency.
C. Use an Application Load Balancer and an Amazon Elastic Kubernetes Service (Amazon EKS) cluster.
D. Use Amazon API Gateway and AWS Lambda functions with reserved concurrency.
Suggested answer: B

Explanation:

Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. AWS Lambda is a serverless compute service that lets you run code without provisioning or managing servers. Lambda scales automatically based on the incoming requests, but it may take some time to initialize new instances of your function if there is a sudden increase in demand. This may result in high latency or cold starts for your API. To avoid this, you can use provisioned concurrency, which ensures that your function is initialized and ready to respond at any time. Provisioned concurrency also helps you achieve consistent low latency for your API by reducing the impact of scaling on performance.
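Enabling provisioned concurrency can be sketched as the arguments boto3's `lambda` `put_provisioned_concurrency_config` call would take. The function name, alias, and concurrency level below are placeholders for illustration.

```python
# Sketch of put_provisioned_concurrency_config arguments that keep a pool
# of execution environments initialized. Name, alias, and the level of 100
# are placeholders.
pc_request = {
    "FunctionName": "customer-api-handler",
    "Qualifier": "live",  # the alias that API Gateway invokes
    "ProvisionedConcurrentExecutions": 100,  # environments kept warm
}

print(pc_request)
```

Provisioned concurrency applies to a version or alias, so pointing the API Gateway integration at that alias is what makes peak-time requests hit pre-initialized environments.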

Reference: https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api-develop-integrations-lambda.html https://docs.aws.amazon.com/lambda/latest/dg/configuration-concurrency.html

A company seeks a storage solution for its application. The solution must be highly available and scalable. The solution also must function as a file system, be mountable by multiple Linux instances in AWS and on premises through native protocols, and have no minimum size requirements. The company has set up a Site-to-Site VPN for access from its on-premises network to its VPC.

Which storage solution meets these requirements?

A. Amazon FSx Multi-AZ deployments
B. Amazon Elastic Block Store (Amazon EBS) Multi-Attach volumes
C. Amazon Elastic File System (Amazon EFS) with multiple mount targets
D. Amazon Elastic File System (Amazon EFS) with a single mount target and multiple access points
Suggested answer: C

Explanation:

Amazon EFS is a fully managed file system that can be mounted by multiple Linux instances in AWS and on premises through the native NFS protocol. Amazon EFS has no minimum size requirements and can scale up and down automatically as files are added and removed. Amazon EFS also supports high availability and durability by allowing multiple mount targets in different Availability Zones within a Region. Amazon EFS meets all the requirements of the question, while the other options do not.
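Mounting such a file system over NFS can be sketched by composing the standard mount command. The file system ID, Region, and mount point are placeholders; on premises, the mount target's address must be reachable over the Site-to-Site VPN.

```python
# Composing the NFSv4.1 mount command for an EFS file system.
# File system ID, Region, and mount point are placeholders.
fs_id = "fs-0123456789abcdef0"
region = "us-east-1"
mount_point = "/mnt/efs"
nfs_options = "nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2"

mount_cmd = (
    f"sudo mount -t nfs4 -o {nfs_options} "
    f"{fs_id}.efs.{region}.amazonaws.com:/ {mount_point}"
)
print(mount_cmd)
```

The same command shape works from EC2 instances in any Availability Zone with a mount target, which is what gives option C its high availability.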

Reference:

https://aws.amazon.com/efs/

https://docs.aws.amazon.com/wellarchitected/latest/performance-efficiency-pillar/storage-architecture-selection.html

https://aws.amazon.com/blogs/storage/from-on-premises-to-aws-hybrid-cloud-architecture-for-network-file-shares/

A company hosts a website on Amazon EC2 instances behind an Application Load Balancer (ALB). The website serves static content. Website traffic is increasing, and the company is concerned about a potential increase in cost.

What should a solutions architect do to reduce the cost of the website?

A. Create an Amazon CloudFront distribution to cache static files at edge locations.
B. Create an Amazon ElastiCache cluster. Connect the ALB to the ElastiCache cluster to serve cached files.
C. Create an AWS WAF web ACL and associate it with the ALB. Add a rule to the web ACL to cache static files.
D. Create a second ALB in an alternative AWS Region. Route user traffic to the closest Region to minimize data transfer costs.
Suggested answer: A

Explanation:

Amazon CloudFront is a content delivery network (CDN) that can improve the performance and reduce the cost of serving static content from a website. CloudFront can cache static files at edge locations closer to the users, reducing the latency and data transfer costs. CloudFront can also integrate with Amazon S3 as the origin for the static content, eliminating the need for EC2 instances to host the website. CloudFront meets all the requirements of the question, while the other options do not.
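The caching behavior can be sketched as the cache-relevant fields of a distribution configuration. This is a partial, illustrative sketch, not a complete DistributionConfig: the origin name and TTL values are placeholders.

```python
# Partial sketch of the cache-relevant fields of a CloudFront distribution
# configuration. Origin domain and TTLs are illustrative placeholders.
cache_config = {
    "Origins": [{
        "Id": "alb-origin",
        "DomainName": "my-alb-123456.us-east-1.elb.amazonaws.com",
    }],
    "DefaultCacheBehavior": {
        "TargetOriginId": "alb-origin",
        "ViewerProtocolPolicy": "redirect-to-https",
        "MinTTL": 0,
        "DefaultTTL": 86400,     # keep static files at the edge for a day
        "MaxTTL": 31536000,
    },
}

print(cache_config["DefaultCacheBehavior"]["DefaultTTL"])
```

Longer TTLs on static files mean more requests are served from edge caches and fewer reach the ALB and EC2 instances, which is where the cost reduction comes from.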

Reference:

https://aws.amazon.com/blogs/architecture/architecting-a-low-cost-web-content-publishing-system/

https://nodeployfriday.com/posts/static-website-hosting/

https://aws.amazon.com/cloudfront/

A company uses multiple vendors to distribute digital assets that are stored in Amazon S3 buckets. The company wants to ensure that its vendor AWS accounts have the minimum access that is needed to download objects in these S3 buckets.

Which solution will meet these requirements with the LEAST operational overhead?

A. Design a bucket policy that has anonymous read permissions and permissions to list all buckets.
B. Design a bucket policy that gives read-only access to users. Specify IAM entities as principals.
C. Create a cross-account IAM role that has a read-only access policy specified for the IAM role.
D. Create a user policy and vendor user groups that give read-only access to vendor users.
Suggested answer: C

Explanation:

A cross-account IAM role is a way to grant users from one AWS account access to resources in another AWS account. The cross-account IAM role can have a read-only access policy attached to it, which allows the users to download objects from the S3 buckets without modifying or deleting them. The cross-account IAM role also reduces the operational overhead of managing multiple IAM users and policies in each account. The cross-account IAM role meets all the requirements of the question, while the other options do not.
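Such a role rests on two policy documents, sketched below: a trust policy naming the vendor account as the principal allowed to assume the role, and a permissions policy limited to downloading objects. The account ID and bucket name are placeholders for illustration.

```python
import json

# Sketch of the trust policy for a cross-account role. The vendor account
# ID is a placeholder.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:root"},  # vendor account
        "Action": "sts:AssumeRole",
    }],
}

# Sketch of the read-only permissions policy attached to the role. The
# bucket name is a placeholder.
read_only_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::example-digital-assets",
            "arn:aws:s3:::example-digital-assets/*",
        ],
    }],
}

print(json.dumps(trust_policy, indent=2))
```

Vendors assume the role with their own credentials, so the company manages one role per level of access instead of per-vendor IAM users.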

Reference:

https://docs.aws.amazon.com/AmazonS3/latest/userguide/example-walkthroughs-managing-access-example2.html

https://aws.amazon.com/blogs/storage/setting-up-cross-account-amazon-s3-access-with-s3-access-points/

https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user_externalid.html

Total 918 questions