Amazon SAA-C03 Practice Test - Questions Answers, Page 56

A company uses Amazon Elastic Kubernetes Service (Amazon EKS) to run a container application. The EKS cluster stores sensitive information in the Kubernetes secrets object. The company wants to ensure that the information is encrypted. Which solution will meet these requirements with the LEAST operational overhead?

A. Use the container application to encrypt the information by using AWS Key Management Service (AWS KMS).
B. Enable secrets encryption in the EKS cluster by using AWS Key Management Service (AWS KMS).
C. Implement an AWS Lambda function to encrypt the information by using AWS Key Management Service (AWS KMS).
D. Use AWS Systems Manager Parameter Store to encrypt the information by using AWS Key Management Service (AWS KMS).
Suggested answer: B

Explanation:

Option B allows the company to encrypt the Kubernetes secrets object in the EKS cluster with the least operational overhead. By enabling secrets encryption in the EKS cluster, the company can use AWS Key Management Service (AWS KMS) to generate and manage the encryption keys that encrypt and decrypt secrets at rest. This is a simple and secure way to protect sensitive information in EKS clusters. Reference:

Encrypting Kubernetes secrets with AWS KMS

Kubernetes Secrets
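
As a rough illustration of option B, the sketch below uses boto3 to turn on envelope encryption of Kubernetes secrets for an existing EKS cluster. The cluster name and KMS key ARN are hypothetical placeholders, not values from the question.

import boto3

eks = boto3.client("eks", region_name="us-east-1")

# Hypothetical cluster name and KMS key ARN, used purely for illustration.
CLUSTER_NAME = "my-eks-cluster"
KMS_KEY_ARN = "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID"

# Enable envelope encryption of Kubernetes secrets with the customer managed KMS key.
# New secrets are encrypted at rest; existing secrets are encrypted once rewritten.
response = eks.associate_encryption_config(
    clusterName=CLUSTER_NAME,
    encryptionConfig=[
        {
            "resources": ["secrets"],
            "provider": {"keyArn": KMS_KEY_ARN},
        }
    ],
)
print(response["update"]["status"])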

A company runs a web application on Amazon EC2 instances in an Auto Scaling group that has a target group. The company designed the application to work with session affinity (sticky sessions) for a better user experience.

The application must be available publicly over the internet as an endpoint. A WAF must be applied to the endpoint for additional security. Session affinity (sticky sessions) must be configured on the endpoint. Which combination of steps will meet these requirements? (Select TWO.)

A. Create a public Network Load Balancer. Specify the application target group.
B. Create a Gateway Load Balancer. Specify the application target group.
C. Create a public Application Load Balancer. Specify the application target group.
D. Create a second target group. Add Elastic IP addresses to the EC2 instances.
E. Create a web ACL in AWS WAF. Associate the web ACL with the endpoint.
Suggested answer: C, E

Explanation:

C and E are the correct answers because they allow the company to create a public endpoint for its web application that supports session affinity (sticky sessions) and has a WAF applied for additional security. By creating a public Application Load Balancer, the company can distribute incoming traffic across multiple EC2 instances in an Auto Scaling group and specify the application target group. By creating a web ACL in AWS WAF and associating it with the Application Load Balancer, the company can protect its web application from common web exploits. By enabling session stickiness on the Application Load Balancer, the company can ensure that subsequent requests from a user during a session are routed to the same target. Reference:

Application Load Balancers

AWS WAF

Target Groups for Your Application Load Balancers

How Application Load Balancer Works with Sticky Sessions
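
For options C and E, one possible boto3 sketch is shown below: enable load-balancer-generated cookie stickiness on the existing target group, then associate a WAF web ACL with the public Application Load Balancer. All ARNs are hypothetical placeholders.

import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")
wafv2 = boto3.client("wafv2", region_name="us-east-1")

# Hypothetical ARNs for illustration only.
TARGET_GROUP_ARN = "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/app-tg/abc123"
ALB_ARN = "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/public-alb/def456"
WEB_ACL_ARN = "arn:aws:wafv2:us-east-1:111122223333:regional/webacl/app-acl/ghi789"

# Enable session affinity (sticky sessions) using the load balancer generated cookie.
elbv2.modify_target_group_attributes(
    TargetGroupArn=TARGET_GROUP_ARN,
    Attributes=[
        {"Key": "stickiness.enabled", "Value": "true"},
        {"Key": "stickiness.type", "Value": "lb_cookie"},
        {"Key": "stickiness.lb_cookie.duration_seconds", "Value": "86400"},
    ],
)

# Associate the regional web ACL with the Application Load Balancer endpoint.
wafv2.associate_web_acl(WebACLArn=WEB_ACL_ARN, ResourceArn=ALB_ARN)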

A company runs an application on Amazon EC2 instances. The company needs to implement a disaster recovery (DR) solution for the application. The DR solution needs to have a recovery time objective (RTO) of less than 4 hours. The DR solution also needs to use the fewest possible AWS resources during normal operations.

Which solution will meet these requirements in the MOST operationally efficient way?

A. Create Amazon Machine Images (AMIs) to back up the EC2 instances. Copy the AMIs to a secondary AWS Region. Automate infrastructure deployment in the secondary Region by using AWS Lambda and custom scripts.
B. Create Amazon Machine Images (AMIs) to back up the EC2 instances. Copy the AMIs to a secondary AWS Region. Automate infrastructure deployment in the secondary Region by using AWS CloudFormation.
C. Launch EC2 instances in a secondary AWS Region. Keep the EC2 instances in the secondary Region active at all times.
D. Launch EC2 instances in a secondary Availability Zone. Keep the EC2 instances in the secondary Availability Zone active at all times.
Suggested answer: B

Explanation:

Option B allows the company to implement a disaster recovery (DR) solution that meets a recovery time objective (RTO) of less than 4 hours while using the fewest possible AWS resources during normal operations. By creating Amazon Machine Images (AMIs) to back up the EC2 instances and copying the AMIs to a secondary AWS Region, the company creates point-in-time backups of the application and stores them in a different geographical location. By automating infrastructure deployment in the secondary Region with AWS CloudFormation, the company can quickly launch a stack of resources from a template in case of a disaster. This is a cost-effective and operationally efficient way to implement a DR solution for EC2 instances. Reference:

Amazon Machine Images (AMI)

Copying an AMI

AWS CloudFormation

Working with Stacks
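
A minimal sketch of the backup half of option B: copy a daily AMI from us-east-1 into a secondary Region, where a CloudFormation stack could later launch from it during a recovery. The AMI ID is a hypothetical placeholder.

import boto3

# Copy an AMI that already exists in us-east-1 into a secondary Region (us-west-2 here).
ec2_secondary = boto3.client("ec2", region_name="us-west-2")

SOURCE_AMI_ID = "ami-0123456789abcdef0"  # hypothetical AMI ID

response = ec2_secondary.copy_image(
    Name="app-server-dr-backup",
    SourceImageId=SOURCE_AMI_ID,
    SourceRegion="us-east-1",
    Description="Daily DR copy of the application AMI",
)

# The copied AMI ID would be passed as a parameter to the CloudFormation template
# that rebuilds the stack in the secondary Region during failover.
print(response["ImageId"])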

A company stores its data on premises. The amount of data is growing beyond the company's available capacity.

The company wants to migrate its data from the on-premises location to an Amazon S3 bucket. The company needs a solution that will automatically validate the integrity of the data after the transfer. Which solution will meet these requirements?

A. Order an AWS Snowball Edge device. Configure the Snowball Edge device to perform the online data transfer to an S3 bucket.
B. Deploy an AWS DataSync agent on premises. Configure the DataSync agent to perform the online data transfer to an S3 bucket.
C. Create an Amazon S3 File Gateway on premises. Configure the S3 File Gateway to perform the online data transfer to an S3 bucket.
D. Configure an accelerator in Amazon S3 Transfer Acceleration on premises. Configure the accelerator to perform the online data transfer to an S3 bucket.
Suggested answer: B

Explanation:

Option B allows the company to migrate its data from the on-premises location to an Amazon S3 bucket and automatically validate the integrity of the data after the transfer. By deploying an AWS DataSync agent on premises, the company can use a fully managed data transfer service that makes it easy to move large amounts of data to and from AWS. By configuring the DataSync agent to perform the online data transfer to an S3 bucket, the company can take advantage of DataSync features such as encryption, compression, bandwidth throttling, and data validation. DataSync automatically verifies data integrity at both the source and the destination after each transfer task. Reference:

AWS DataSync

Deploying an Agent for AWS DataSync

How AWS DataSync Works
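
A hedged sketch of option B with boto3: once the DataSync agent is deployed on premises and the source and destination locations exist, a task can be created with verification enabled so that DataSync checks data integrity after the transfer. The location ARNs are hypothetical placeholders.

import boto3

datasync = boto3.client("datasync", region_name="us-east-1")

# Hypothetical location ARNs created beforehand (on-premises NFS source, S3 destination).
SOURCE_LOCATION_ARN = "arn:aws:datasync:us-east-1:111122223333:location/loc-source-example"
DEST_LOCATION_ARN = "arn:aws:datasync:us-east-1:111122223333:location/loc-s3-example"

task = datasync.create_task(
    SourceLocationArn=SOURCE_LOCATION_ARN,
    DestinationLocationArn=DEST_LOCATION_ARN,
    Name="onprem-to-s3-migration",
    Options={
        # Verify all data in the destination after the transfer completes.
        "VerifyMode": "POINT_IN_TIME_CONSISTENT",
    },
)

datasync.start_task_execution(TaskArn=task["TaskArn"])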

A company has a large workload that runs every Friday evening. The workload runs on Amazon EC2 instances that are in two Availability Zones in the us-east-1 Region. Normally, the company must run no more than two instances at all times. However, the company wants to scale up to six instances each Friday to handle a regularly repeating increased workload.

Which solution will meet these requirements with the LEAST operational overhead?

A. Create a reminder in Amazon EventBridge to scale the instances.
B. Create an Auto Scaling group that has a scheduled action.
C. Create an Auto Scaling group that uses manual scaling.
D. Create an Auto Scaling group that uses automatic scaling.
Suggested answer: B

Explanation:

An Auto Scaling group is a collection of EC2 instances that share similar characteristics and can be scaled in or out automatically based on demand. An Auto Scaling group can have a scheduled action, which is a configuration that tells the group to scale to a specific size at a specific time. This way, the company can scale up to six instances each Friday evening to handle the increased workload, and scale down to two instances at other times to save costs. This solution meets the requirements with the least operational overhead, as it does not require manual intervention or custom scripts.

1 explains how to create a scheduled action for an Auto Scaling group.

2 describes the concept and benefits of an Auto Scaling group.
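
Option B might look like the boto3 sketch below: two scheduled actions on the Auto Scaling group, one that scales out to six instances on Friday evenings and one that scales back to two afterwards. The group name and cron schedules are assumptions for illustration.

import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

ASG_NAME = "friday-workload-asg"  # hypothetical Auto Scaling group name

# Scale out to six instances every Friday at 18:00 UTC.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName=ASG_NAME,
    ScheduledActionName="friday-scale-out",
    Recurrence="0 18 * * 5",
    MinSize=2,
    MaxSize=6,
    DesiredCapacity=6,
)

# Scale back to the normal two instances every Saturday at 02:00 UTC.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName=ASG_NAME,
    ScheduledActionName="saturday-scale-in",
    Recurrence="0 2 * * 6",
    MinSize=2,
    MaxSize=2,
    DesiredCapacity=2,
)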

A company uses an on-premises network-attached storage (NAS) system to provide file shares to its high performance computing (HPC) workloads. The company wants to migrate its latency-sensitive HPC workloads and its storage to the AWS Cloud. The company must be able to provide NFS and SMB multi-protocol access from the file system.

Which solution will meet these requirements with the LEAST latency? (Select TWO.)

A. Deploy compute optimized EC2 instances into a cluster placement group.
B. Deploy compute optimized EC2 instances into a partition placement group.
C. Attach the EC2 instances to an Amazon FSx for Lustre file system.
D. Attach the EC2 instances to an Amazon FSx for OpenZFS file system.
E. Attach the EC2 instances to an Amazon FSx for NetApp ONTAP file system.
Suggested answer: A, E

Explanation:

A cluster placement group is a logical grouping of EC2 instances within a single Availability Zone that are placed close together to minimize network latency. This is suitable for latency-sensitive HPC workloads that require high network performance. A compute optimized EC2 instance is an instance type that has a high ratio of vCPUs to memory, which is ideal for compute-intensive applications. Amazon FSx for NetApp ONTAP is a fully managed service that provides NFS and SMB multi-protocol access from the file system, as well as features such as data deduplication, compression, thin provisioning, and snapshots. This solution will meet the requirements with the least latency, as it leverages the low-latency network and storage performance of AWS.

1 explains how cluster placement groups work and their benefits.

2 describes the characteristics and use cases of compute optimized EC2 instances.

3 provides an overview of Amazon FSx for NetApp ONTAP and its features.
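
To illustrate option A, the sketch below creates a cluster placement group and launches compute optimized instances into it; those instances would then mount the FSx for NetApp ONTAP file system (option E) over NFS or SMB. The AMI ID, subnet, and instance count are hypothetical placeholders.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Hypothetical identifiers for illustration only.
AMI_ID = "ami-0123456789abcdef0"
SUBNET_ID = "subnet-0123456789abcdef0"

# A cluster placement group packs instances close together in one Availability Zone
# to minimize network latency between HPC nodes.
ec2.create_placement_group(GroupName="hpc-cluster-pg", Strategy="cluster")

# Launch compute optimized instances into the placement group. After boot, each
# instance would mount the FSx for NetApp ONTAP endpoint over NFS (or SMB).
ec2.run_instances(
    ImageId=AMI_ID,
    InstanceType="c6i.32xlarge",
    MinCount=4,
    MaxCount=4,
    SubnetId=SUBNET_ID,
    Placement={"GroupName": "hpc-cluster-pg"},
)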

A solutions architect needs to copy files from an Amazon S3 bucket to an Amazon Elastic File System (Amazon EFS) file system and another S3 bucket. The files must be copied continuously. New files are added to the original S3 bucket consistently. The copied files should be overwritten only if the source file changes.

Which solution will meet these requirements with the LEAST operational overhead?

A. Create an AWS DataSync location for both the destination S3 bucket and the EFS file system. Create a task for the destination S3 bucket and the EFS file system. Set the transfer mode to transfer only data that has changed.
B. Create an AWS Lambda function. Mount the file system to the function. Set up an S3 event notification to invoke the function when files are created and changed in Amazon S3. Configure the function to copy files to the file system and the destination S3 bucket.
C. Create an AWS DataSync location for both the destination S3 bucket and the EFS file system. Create a task for the destination S3 bucket and the EFS file system. Set the transfer mode to transfer all data.
D. Launch an Amazon EC2 instance in the same VPC as the file system. Mount the file system. Create a script to routinely synchronize all objects that changed in the origin S3 bucket to the destination S3 bucket and the mounted file system.
Suggested answer: A

Explanation:

AWS DataSync is a service that makes it easy to move large amounts of data between AWS storage services and on-premises storage systems. AWS DataSync can copy files from an S3 bucket to an EFS file system and another S3 bucket continuously, as well as overwrite only the files that have changed in the source. This solution will meet the requirements with the least operational overhead, as it does not require any code development or manual intervention.

4 explains how to create AWS DataSync locations for different storage services.

5 describes how to create and configure AWS DataSync tasks for data transfer.

6 discusses the different transfer modes that AWS DataSync supports.
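
One possible boto3 sketch of option A: create a DataSync location for the EFS file system and a scheduled task that transfers only data that has changed from the source S3 bucket; a second task with an S3 destination location would cover the other bucket. The ARNs are hypothetical placeholders.

import boto3

datasync = boto3.client("datasync", region_name="us-east-1")

# Hypothetical ARNs; the source S3 location would be created with create_location_s3.
SOURCE_S3_LOCATION_ARN = "arn:aws:datasync:us-east-1:111122223333:location/loc-src-bucket"
EFS_FILESYSTEM_ARN = "arn:aws:elasticfilesystem:us-east-1:111122223333:file-system/fs-0123456789abcdef0"
SUBNET_ARN = "arn:aws:ec2:us-east-1:111122223333:subnet/subnet-0123456789abcdef0"
SECURITY_GROUP_ARN = "arn:aws:ec2:us-east-1:111122223333:security-group/sg-0123456789abcdef0"

# Destination location for the EFS file system.
efs_location = datasync.create_location_efs(
    EfsFilesystemArn=EFS_FILESYSTEM_ARN,
    Ec2Config={"SubnetArn": SUBNET_ARN, "SecurityGroupArns": [SECURITY_GROUP_ARN]},
)

# Scheduled task that copies only data that has changed since the last run and
# overwrites destination files when the source file changes.
datasync.create_task(
    SourceLocationArn=SOURCE_S3_LOCATION_ARN,
    DestinationLocationArn=efs_location["LocationArn"],
    Name="s3-to-efs-changed-only",
    Options={"TransferMode": "CHANGED", "OverwriteMode": "ALWAYS"},
    Schedule={"ScheduleExpression": "rate(1 hour)"},
)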

A company deployed a serverless application that uses Amazon DynamoDB as a database layer. The application has experienced a large increase in users. The company wants to improve database response time from milliseconds to microseconds and to cache requests to the database.

Which solution will meet these requirements with the LEAST operational overhead?

A. Use DynamoDB Accelerator (DAX).
B. Migrate the database to Amazon Redshift.
C. Migrate the database to Amazon RDS.
D. Use Amazon ElastiCache for Redis.
Suggested answer: A

Explanation:

DynamoDB Accelerator (DAX) is a fully managed, highly available caching service built for Amazon DynamoDB. DAX delivers up to a 10 times performance improvement, from milliseconds to microseconds, even at millions of requests per second. DAX does all the heavy lifting required to add in-memory acceleration to DynamoDB tables, without requiring developers to manage cache invalidation, data population, or cluster management. The application logic does not need to be modified because DAX is compatible with existing DynamoDB API calls. This solution meets the requirements with the least operational overhead, as it does not require any code development or manual intervention.

1 provides an overview of Amazon DynamoDB Accelerator (DAX) and its benefits.

2 explains how to use DAX with DynamoDB for in-memory acceleration.
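
A rough sketch of option A: provision a DAX cluster in front of the existing DynamoDB tables with boto3. The IAM role, subnet group, and node type are hypothetical placeholders; application reads then go through the DAX endpoint using the same DynamoDB API calls.

import boto3

dax = boto3.client("dax", region_name="us-east-1")

# Hypothetical IAM role that lets DAX access the DynamoDB tables, plus a
# pre-created subnet group in the application VPC.
DAX_ROLE_ARN = "arn:aws:iam::111122223333:role/DAXServiceRole"

cluster = dax.create_cluster(
    ClusterName="app-dax",
    NodeType="dax.r5.large",
    ReplicationFactor=3,          # one primary node plus two read replicas
    IamRoleArn=DAX_ROLE_ARN,
    SubnetGroupName="app-dax-subnet-group",
)

# The application keeps issuing the same GetItem/PutItem/Query calls; it only
# switches its client endpoint to the DAX cluster discovery endpoint shown here.
print(cluster["Cluster"]["ClusterDiscoveryEndpoint"])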

A company hosts multiple applications on AWS for different product lines. The applications use different compute resources, including Amazon EC2 instances and Application Load Balancers. The applications run in different AWS accounts under the same organization in AWS Organizations across multiple AWS Regions. Teams for each product line have tagged each compute resource in the individual accounts.

The company wants more details about the cost for each product line from the consolidated billing feature in Organizations.

Which combination of steps will meet these requirements? (Select TWO.)

A. Select a specific AWS generated tag in the AWS Billing console.
B. Select a specific user-defined tag in the AWS Billing console.
C. Select a specific user-defined tag in the AWS Resource Groups console.
D. Activate the selected tag from each AWS account.
E. Activate the selected tag from the Organizations management account.
Suggested answer: B, E

Explanation:

User-defined tags are key-value pairs that can be applied to AWS resources to categorize and track them. User-defined tags can also be used to allocate costs and create detailed billing reports in the AWS Billing console. To use user-defined tags for cost allocation, the tags must be activated from the Organizations management account, which is the root account that has full control over all the member accounts in the organization. Once activated, the user-defined tags will appear as columns in the cost allocation report, and can be used to filter and group costs by product line. This solution will meet the requirements with the least operational overhead, as it leverages the existing tagging strategy and does not require any code development or manual intervention.

1 explains how to use user-defined tags for cost allocation.

2 describes how to access and manage member accounts from the Organizations management account.

3 discusses how to create and view cost allocation reports in the AWS Billing console.
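
For options B and E, a possible boto3 sketch run from the Organizations management account is shown below, assuming the Cost Explorer UpdateCostAllocationTagsStatus API is available in the SDK version in use and a user-defined tag key such as product-line was applied by the teams; both names are illustrative assumptions.

import boto3

# Cost allocation tags are activated from the Organizations management (payer) account.
ce = boto3.client("ce", region_name="us-east-1")

# Hypothetical user-defined tag key that the product line teams applied to resources.
TAG_KEY = "product-line"

# Activate the tag so it appears as a cost allocation dimension in the Billing
# console and in cost allocation reports, allowing costs to be grouped by product line.
ce.update_cost_allocation_tags_status(
    CostAllocationTagsStatus=[{"TagKey": TAG_KEY, "Status": "Active"}]
)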

A company uses Amazon EC2 instances and Amazon Elastic Block Store (Amazon EBS) volumes to run an application. The company creates one snapshot of each EBS volume every day to meet compliance requirements. The company wants to implement an architecture that prevents the accidental deletion of EBS volume snapshots. The solution must not change the administrative rights of the storage administrator user.

Which solution will meet these requirements with the LEAST administrative effort?

A. Create an IAM role that has permission to delete snapshots. Attach the role to a new EC2 instance. Use the AWS CLI from the new EC2 instance to delete snapshots.
B. Create an IAM policy that denies snapshot deletion. Attach the policy to the storage administrator user.
C. Add tags to the snapshots. Create retention rules in Recycle Bin for EBS snapshots that have the tags.
D. Lock the EBS snapshots to prevent deletion.
Suggested answer: D

Explanation:

EBS snapshots are point-in-time backups of EBS volumes that can be used to restore data or create new volumes. EBS snapshots can be protected from accidental deletion with the EBS snapshot lock feature. While a snapshot is locked, no user can delete it; the lock specifies a lock mode and a lock duration, and the snapshot can be deleted only after the lock expires or is released. This solution meets the requirements with the least administrative effort, as it does not require any code development or changes to the storage administrator's permissions.

1 explains how to lock and unlock EBS snapshots using EBS Snapshot Lock.

2 describes the concept and benefits of EBS snapshots.
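
A hedged sketch of option D, assuming the EC2 LockSnapshot API available in recent SDK versions: lock each daily snapshot for a retention period so it cannot be deleted while the lock is in effect. The snapshot ID, lock mode, and duration are illustrative placeholders.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

SNAPSHOT_ID = "snap-0123456789abcdef0"  # hypothetical snapshot ID

# Lock the snapshot in governance mode for 30 days. While the lock is active the
# snapshot cannot be deleted; the storage administrator's other permissions are
# left unchanged.
ec2.lock_snapshot(
    SnapshotId=SNAPSHOT_ID,
    LockMode="governance",
    LockDuration=30,  # days
)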
