Amazon SAA-C03 Practice Test - Questions & Answers, Page 46

A company hosts an internal serverless application on AWS by using Amazon API Gateway and AWS Lambda. The company's employees report issues with high latency when they begin using the application each day. The company wants to reduce latency.

Which solution will meet these requirements?

A. Increase the API Gateway throttling limit.
B. Set up scheduled scaling to increase Lambda provisioned concurrency before employees begin to use the application each day.
C. Create an Amazon CloudWatch alarm to initiate a Lambda function as a target for the alarm at the beginning of each day.
D. Increase the Lambda function memory.
Suggested answer: B

Explanation:

AWS Lambda is a serverless compute service that lets you run code without provisioning or managing servers. Lambda scales automatically based on the incoming requests, but it may take some time to initialize new instances of your function if there is a sudden increase in demand. This may result in high latency or cold starts for your application. To avoid this, you can use provisioned concurrency, which ensures that your function is initialized and ready to respond at any time. You can also set up a scheduled scaling policy that increases the provisioned concurrency before employees begin to use the application each day, and decreases it when the demand is low.

Reference: https://docs.aws.amazon.com/lambda/latest/dg/configuration-concurrency.html
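
A minimal boto3 sketch of option B, assuming a function named internal-app published behind a prod alias; the capacity values and cron schedules are illustrative:

```python
import boto3

autoscaling = boto3.client("application-autoscaling")
resource_id = "function:internal-app:prod"  # format: function:<name>:<alias>

# Register the alias's provisioned concurrency as a scalable target.
autoscaling.register_scalable_target(
    ServiceNamespace="lambda",
    ResourceId=resource_id,
    ScalableDimension="lambda:function:ProvisionedConcurrency",
    MinCapacity=0,
    MaxCapacity=100,
)

# Warm up 100 execution environments before the workday starts...
autoscaling.put_scheduled_action(
    ServiceNamespace="lambda",
    ScheduledActionName="warm-up-before-work",
    ResourceId=resource_id,
    ScalableDimension="lambda:function:ProvisionedConcurrency",
    Schedule="cron(30 7 ? * MON-FRI *)",
    ScalableTargetAction={"MinCapacity": 100, "MaxCapacity": 100},
)

# ...and scale back down in the evening to avoid paying for idle capacity.
autoscaling.put_scheduled_action(
    ServiceNamespace="lambda",
    ScheduledActionName="scale-down-after-work",
    ResourceId=resource_id,
    ScalableDimension="lambda:function:ProvisionedConcurrency",
    Schedule="cron(0 19 ? * MON-FRI *)",
    ScalableTargetAction={"MinCapacity": 0, "MaxCapacity": 0},
)
```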

A company wants to migrate 100 GB of historical data from an on-premises location to an Amazon S3 bucket. The company has a 100 megabits per second (Mbps) internet connection on premises. The company needs to encrypt the data in transit to the S3 bucket. The company will store new data directly in Amazon S3.

Which solution will meet these requirements with the LEAST operational overhead?

A. Use the s3 sync command in the AWS CLI to move the data directly to an S3 bucket.
B. Use AWS DataSync to migrate the data from the on-premises location to an S3 bucket.
C. Use AWS Snowball to move the data to an S3 bucket.
D. Set up an IPsec VPN from the on-premises location to AWS. Use the s3 cp command in the AWS CLI to move the data directly to an S3 bucket.
Suggested answer: B

Explanation:

AWS DataSync is a data transfer service that makes it easy for you to move large amounts of data online between on-premises storage and AWS storage services over the internet or AWS Direct Connect. DataSync automatically encrypts your data in transit using TLS encryption, and verifies data integrity during transfer using checksums. DataSync can transfer data up to 10 times faster than open-source tools, and reduces operational overhead by simplifying and automating tasks such as scheduling, monitoring, and resuming transfers.

Reference: https://aws.amazon.com/datasync/
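
A minimal boto3 sketch of the DataSync flow, assuming a DataSync agent is already deployed on premises; the agent ARN, NFS share, bucket, and IAM role are placeholders:

```python
import boto3

datasync = boto3.client("datasync")

# Source: the on-premises NFS share, reached through the deployed agent.
source = datasync.create_location_nfs(
    ServerHostname="fileserver.corp.example.com",
    Subdirectory="/exports/historical-data",
    OnPremConfig={"AgentArns": [
        "arn:aws:datasync:us-east-1:111122223333:agent/agent-0abc"
    ]},
)

# Destination: the S3 bucket, accessed through a bucket-access role.
destination = datasync.create_location_s3(
    S3BucketArn="arn:aws:s3:::historical-data-bucket",
    S3Config={"BucketAccessRoleArn":
              "arn:aws:iam::111122223333:role/datasync-s3-access"},
)

# DataSync encrypts in transit with TLS and verifies integrity by default.
task = datasync.create_task(
    SourceLocationArn=source["LocationArn"],
    DestinationLocationArn=destination["LocationArn"],
    Name="migrate-historical-data",
)
datasync.start_task_execution(TaskArn=task["TaskArn"])
```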

A company wants to implement a backup strategy for Amazon EC2 data and multiple Amazon S3 buckets. Because of regulatory requirements, the company must retain backup files for a specific time period. The company must not alter the files for the duration of the retention period.

Which solution will meet these requirements?

A. Use AWS Backup to create a backup vault that has a vault lock in governance mode. Create the required backup plan.
B. Use Amazon Data Lifecycle Manager to create the required automated snapshot policy.
C. Use Amazon S3 File Gateway to create the backup. Configure the appropriate S3 Lifecycle management.
D. Use AWS Backup to create a backup vault that has a vault lock in compliance mode. Create the required backup plan.
Suggested answer: D

Explanation:

AWS Backup is a fully managed service that allows you to centralize and automate data protection of AWS services across compute, storage, and database. AWS Backup Vault Lock is an optional feature of a backup vault that can help you enhance the security and control over your backup vaults. When a lock is active in Compliance mode and the grace time is over, the vault configuration cannot be altered or deleted by a customer, account/data owner, or AWS. This ensures that your backups are available for you until they reach the expiration of their retention periods and meet the regulatory requirements.

Reference: https://docs.aws.amazon.com/aws-backup/latest/devguide/vault-lock.html
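
A boto3 sketch of option D; the vault name and retention values are illustrative assumptions:

```python
import boto3

backup = boto3.client("backup")

backup.create_backup_vault(BackupVaultName="regulatory-backups")

# Setting ChangeableForDays selects compliance mode (omit it for
# governance mode). Once the grace period ends, the lock cannot be
# altered or removed by anyone, including the root user or AWS.
backup.put_backup_vault_lock_configuration(
    BackupVaultName="regulatory-backups",
    MinRetentionDays=365,    # recovery points cannot be deleted earlier
    MaxRetentionDays=2555,   # ~7 years; cannot be retained longer
    ChangeableForDays=3,     # grace period before the lock becomes immutable
)
```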

A company is subscribed to the AWS Business Support plan. Compliance rules require the company to check on AWS infrastructure health before deployments can proceed. The company needs a programmatic and automated way to check on infrastructure health at the beginning of new deployments.

Which solution will meet these requirements?

A. Use the AWS Trusted Advisor API at the start of each deployment. Pause all new deployments if the API returns any issues.
B. Use the AWS Health API at the start of each deployment. Pause all new deployments if the API returns any issues.
C. Query the AWS Support API at the start of each deployment. Pause all new deployments if the API returns any open issues.
D. Send an API call to each workload ahead of deployment. Pause the deployments if the API call fails.
Suggested answer: B

Explanation:

The AWS Health API provides programmatic access to the AWS Health information that is presented in the AWS Personal Health Dashboard. You can use the API operations to get information about AWS Health events that affect your AWS services and resources. You can also use the API to enable or disable health-based insights for your organization. You can use the AWS Health API at the start of each deployment to check on AWS infrastructure health and pause all new deployments if the API returns any issues.

Reference: https://docs.aws.amazon.com/health/latest/APIReference/Welcome.html
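
A boto3 sketch of such a pre-deployment gate; the filter values are illustrative, and the AWS Health API requires at least a Business Support plan (which the company has) and is served from us-east-1:

```python
import boto3

health = boto3.client("health", region_name="us-east-1")

# Look for open or upcoming infrastructure issues before deploying.
response = health.describe_events(
    filter={
        "eventStatusCodes": ["open", "upcoming"],
        "eventTypeCategories": ["issue"],
    }
)

if response["events"]:
    raise SystemExit("Open AWS Health issues found - pausing deployment.")
print("No open AWS Health issues - proceeding with deployment.")
```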

A company needs to migrate a MySQL database from its on-premises data center to AWS within 2 weeks. The database is 20 TB in size. The company wants to complete the migration with minimal downtime.

Which solution will migrate the database MOST cost-effectively?

A. Order an AWS Snowball Edge Storage Optimized device. Use AWS Database Migration Service (AWS DMS) with AWS Schema Conversion Tool (AWS SCT) to migrate the database with replication of ongoing changes. Send the Snowball Edge device to AWS to finish the migration and continue the ongoing replication.
B. Order an AWS Snowmobile vehicle. Use AWS Database Migration Service (AWS DMS) with AWS Schema Conversion Tool (AWS SCT) to migrate the database with ongoing changes. Send the Snowmobile vehicle back to AWS to finish the migration and continue the ongoing replication.
C. Order an AWS Snowball Edge Compute Optimized with GPU device. Use AWS Database Migration Service (AWS DMS) with AWS Schema Conversion Tool (AWS SCT) to migrate the database with ongoing changes. Send the Snowball device to AWS to finish the migration and continue the ongoing replication.
D. Order a 1 Gbps dedicated AWS Direct Connect connection to establish a connection with the data center. Use AWS Database Migration Service (AWS DMS) with AWS Schema Conversion Tool (AWS SCT) to migrate the database with replication of ongoing changes.
Suggested answer: A

Explanation:

This answer is correct because it meets the requirements of migrating a 20 TB MySQL database within 2 weeks with minimal downtime and cost-effectively. The AWS Snowball Edge Storage Optimized device has up to 80 TB of usable storage space, which is enough to fit the database. The AWS Database Migration Service (AWS DMS) can migrate data from MySQL to Amazon Aurora, Amazon RDS for MySQL, or MySQL on Amazon EC2 with minimal downtime by continuously replicating changes from the source to the target. The AWS Schema Conversion Tool (AWS SCT) can convert the source schema and code to a format compatible with the target database. By using these services together, the company can migrate the database to AWS with minimal downtime and cost. The Snowball Edge device can be shipped back to AWS to finish the migration and continue the ongoing replication until the database is fully migrated.

https://docs.aws.amazon.com/snowball/latest/developer-guide/device-differences.html

https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.MySQL.html

https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Source.MySQL.htm
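
As a sketch of the ongoing-replication piece: once the Snowball Edge bulk load lands in AWS, a DMS task with migration type cdc could replicate the changes captured since the export. All ARNs and the table mapping below are placeholders:

```python
import boto3

dms = boto3.client("dms")

dms.create_replication_task(
    ReplicationTaskIdentifier="mysql-ongoing-replication",
    SourceEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:onprem-mysql",
    TargetEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:rds-mysql",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:111122223333:rep:replication-instance",
    MigrationType="cdc",  # change data capture only; the bulk load came via Snowball
    # Include every schema and table; narrow this in a real migration.
    TableMappings='{"rules":[{"rule-type":"selection","rule-id":"1",'
                  '"rule-name":"1","object-locator":{"schema-name":"%",'
                  '"table-name":"%"},"rule-action":"include"}]}',
)
```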

A company wants to host a scalable web application on AWS. The application will be accessed by users from different geographic regions of the world. Application users will be able to download and upload unique data up to gigabytes in size. The development team wants a cost-effective solution to minimize upload and download latency and maximize performance.

What should a solutions architect do to accomplish this?

A. Use Amazon S3 with Transfer Acceleration to host the application.
B. Use Amazon S3 with Cache-Control headers to host the application.
C. Use Amazon EC2 with Auto Scaling and Amazon CloudFront to host the application.
D. Use Amazon EC2 with Auto Scaling and Amazon ElastiCache to host the application.
Suggested answer: C

Explanation:

This answer is correct because it meets the requirements of hosting a scalable web application that can handle large data transfers from different geographic regions. Amazon EC2 provides scalable compute capacity for hosting web applications. Auto Scaling can automatically adjust the number of EC2 instances based on demand and traffic patterns. Amazon CloudFront is a content delivery network (CDN) that caches static and dynamic content at edge locations closer to users, reducing latency and improving performance. CloudFront also accelerates uploads by accepting PUT and POST requests at nearby edge locations and carrying them to the origin over the AWS global network.

https://docs.aws.amazon.com/autoscaling/ec2/userguide/what-is-amazon-ec2-auto-scaling.html

https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Introduction.html

https://aws.amazon.com/s3/transfer-acceleration/
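
A boto3 sketch of fronting the Auto Scaling group's load balancer with CloudFront; the ALB domain name is a placeholder, and the legacy ForwardedValues settings are one simple way to pass unique per-user requests through uncached:

```python
import boto3

cloudfront = boto3.client("cloudfront")

cloudfront.create_distribution(
    DistributionConfig={
        "CallerReference": "webapp-dist-001",  # any unique string
        "Comment": "Global edge acceleration for the web application",
        "Enabled": True,
        "Origins": {
            "Quantity": 1,
            "Items": [{
                "Id": "app-alb",
                # Placeholder: the ALB in front of the Auto Scaling group
                "DomainName": "my-alb-123456789.us-east-1.elb.amazonaws.com",
                "CustomOriginConfig": {
                    "HTTPPort": 80,
                    "HTTPSPort": 443,
                    "OriginProtocolPolicy": "https-only",
                },
            }],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": "app-alb",
            "ViewerProtocolPolicy": "redirect-to-https",
            # Allow uploads (PUT/POST) as well as downloads through the edge.
            "AllowedMethods": {
                "Quantity": 7,
                "Items": ["GET", "HEAD", "OPTIONS", "PUT", "POST",
                          "PATCH", "DELETE"],
                "CachedMethods": {"Quantity": 2, "Items": ["GET", "HEAD"]},
            },
            # Forward query strings and cookies so unique per-user objects
            # are not served from cache incorrectly.
            "MinTTL": 0,
            "ForwardedValues": {
                "QueryString": True,
                "Cookies": {"Forward": "all"},
            },
        },
    }
)
```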

A company runs an application using Amazon ECS. The application creates resized versions of an original image and then makes Amazon S3 API calls to store the resized images in Amazon S3.

How can a solutions architect ensure that the application has permission to access Amazon S3?

A. Update the S3 role in AWS IAM to allow read/write access from Amazon ECS, and then relaunch the container.
B. Create an IAM role with S3 permissions, and then specify that role as the taskRoleArn in the task definition.
C. Create a security group that allows access from Amazon ECS to Amazon S3, and update the launch configuration used by the ECS cluster.
D. Create an IAM user with S3 permissions, and then relaunch the Amazon EC2 instances for the ECS cluster while logged in as this account.
Suggested answer: B

Explanation:

This answer is correct because it allows the application to access Amazon S3 by using an IAM role that is associated with the ECS task. The task role grants permissions to the containers running in the task, and can be used to make AWS API calls from the application code. The taskRoleArn is a parameter in the task definition that specifies the IAM role to use for the task.

https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html

https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_TaskDefinition.html
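
A boto3 sketch of option B; the role ARNs, image, and family name are placeholders:

```python
import boto3

ecs = boto3.client("ecs")

ecs.register_task_definition(
    family="image-resizer",
    # taskRoleArn is the role the application code assumes for its S3 calls;
    # executionRoleArn is what the ECS agent uses to pull images and ship logs.
    taskRoleArn="arn:aws:iam::111122223333:role/image-resizer-s3-access",
    executionRoleArn="arn:aws:iam::111122223333:role/ecsTaskExecutionRole",
    networkMode="awsvpc",
    requiresCompatibilities=["FARGATE"],
    cpu="256",
    memory="512",
    containerDefinitions=[{
        "name": "resizer",
        "image": "111122223333.dkr.ecr.us-east-1.amazonaws.com/resizer:latest",
        "essential": True,
    }],
)
```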

A solutions architect is implementing a complex Java application with a MySQL database. The Java application must be deployed on Apache Tomcat and must be highly available.

What should the solutions architect do to meet these requirements?

A. Deploy the application in AWS Lambda. Configure an Amazon API Gateway API to connect with the Lambda functions.
B. Deploy the application by using AWS Elastic Beanstalk. Configure a load-balanced environment and a rolling deployment policy.
C. Migrate the database to Amazon ElastiCache. Configure the ElastiCache security group to allow access from the application.
D. Launch an Amazon EC2 instance. Install a MySQL server on the EC2 instance. Configure the application on the server. Create an AMI. Use the AMI to create a launch template with an Auto Scaling group.
Suggested answer: B

Explanation:

AWS Elastic Beanstalk provides an easy and quick way to deploy, manage, and scale applications. It supports a variety of platforms, including Java and Apache Tomcat. By using Elastic Beanstalk, the solutions architect can upload the Java application and configure the environment to run Apache Tomcat. A load-balanced environment distributes traffic across multiple instances in multiple Availability Zones for high availability, and a rolling deployment policy updates instances in batches so the application stays available during deployments.
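
A boto3 sketch of option B; the application name and solution stack string are assumptions (list the currently available stacks with list_available_solution_stacks):

```python
import boto3

eb = boto3.client("elasticbeanstalk")

eb.create_environment(
    ApplicationName="java-app",
    EnvironmentName="java-app-prod",
    # Placeholder stack name; stack versions change over time.
    SolutionStackName="64bit Amazon Linux 2 v4.7.0 running Tomcat 9 Corretto 11",
    OptionSettings=[
        # Load-balanced, multi-instance environment for high availability.
        {"Namespace": "aws:elasticbeanstalk:environment",
         "OptionName": "EnvironmentType", "Value": "LoadBalanced"},
        {"Namespace": "aws:autoscaling:asg",
         "OptionName": "MinSize", "Value": "2"},
        # Rolling deployments update instances in batches, avoiding downtime.
        {"Namespace": "aws:elasticbeanstalk:command",
         "OptionName": "DeploymentPolicy", "Value": "Rolling"},
    ],
)
```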

A 4-year-old media company is using the AWS Organizations all features feature set to organize its AWS accounts. According to the company's finance team, the billing information on the member accounts must not be accessible to anyone, including the root user of the member accounts.

Which solution will meet these requirements?

A. Add all finance team users to an IAM group. Attach an AWS managed policy named Billing to the group.
B. Attach an identity-based policy to deny access to the billing information to all users, including the root user.
C. Create a service control policy (SCP) to deny access to the billing information. Attach the SCP to the root organizational unit (OU).
D. Convert from the Organizations all features feature set to the Organizations consolidated billing feature set.
Suggested answer: C

Explanation:

Service control policies (SCPs) are an integral part of AWS Organizations and let you set fine-grained guardrails on the organizational units (OUs) in your organization. SCPs provide central control over the maximum permissions available to member accounts, including each member account's root user.

By creating an SCP that denies access to billing information and attaching it to the root OU, you explicitly deny that access for every account in the organization. SCPs can restrict access to any AWS service or action, including billing-related actions.

Because SCPs apply at the OU level, denying billing access at the root OU ensures that no member account, including its root user, can view the billing information.
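
A boto3 sketch of option C, run from the management account; the root ID is a placeholder, and the aws-portal actions shown are the classic billing permission set:

```python
import json
import boto3

org = boto3.client("organizations")

scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": ["aws-portal:ViewBilling", "aws-portal:ViewAccount",
                   "aws-portal:ViewPaymentMethods", "aws-portal:ViewUsage"],
        "Resource": "*",
    }],
}

policy = org.create_policy(
    Name="DenyBillingAccess",
    Description="Block billing information in all member accounts",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)

# Attaching at the root OU applies the deny to every member account,
# including their root users (SCPs never affect the management account).
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="r-examplerootid",  # placeholder root ID
)
```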

A company has two VPCs named Management and Production. The Management VPC uses VPNs through a customer gateway to connect to a single device in the data center. The Production VPC uses a virtual private gateway with AWS Direct Connect connections. The Management and Production VPCs both use a single VPC peering connection to allow communication between the two VPCs.

What should a solutions architect do to mitigate any single point of failure in this architecture?

A. Add a set of VPNs between the Management and Production VPCs.
B. Add a second virtual private gateway and attach it to the Management VPC.
C. Add a second set of VPNs to the Management VPC from a second customer gateway device.
D. Add a second VPC peering connection between the Management VPC and the Production VPC.
Suggested answer: C

Explanation:

This answer is correct because it provides redundancy for the VPN connection between the Management VPC and the data center. If one customer gateway device or one VPN tunnel becomes unavailable, the traffic can still flow over the second customer gateway device and the second VPN tunnel. This way, the single point of failure in the VPN connection is mitigated.

https://docs.aws.amazon.com/vpn/latest/s2svpn/vpn-redundant-connection.html

https://www.trendmicro.com/cloudoneconformity/knowledge-base/aws/VPC/vpn-tunnel-redundancy.html
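
A boto3 sketch of option C; the second device's public IP, its ASN, and the virtual private gateway ID are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Register the second on-premises device as a new customer gateway.
cgw = ec2.create_customer_gateway(
    BgpAsn=65001,
    PublicIp="203.0.113.20",  # public IP of the second device
    Type="ipsec.1",
)

# Create a second Site-to-Site VPN to the Management VPC's existing
# virtual private gateway; each VPN connection itself has two tunnels.
ec2.create_vpn_connection(
    CustomerGatewayId=cgw["CustomerGateway"]["CustomerGatewayId"],
    Type="ipsec.1",
    VpnGatewayId="vgw-0123456789abcdef0",
)
```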
