Amazon SAA-C03 Practice Test - Questions Answers, Page 46
Question 451

A company hosts an internal serverless application on AWS by using Amazon API Gateway and AWS Lambda. The company's employees report issues with high latency when they begin using the application each day. The company wants to reduce latency.
Which solution will meet these requirements?
Explanation:
AWS Lambda is a serverless compute service that lets you run code without provisioning or managing servers. Lambda scales automatically based on the incoming requests, but it may take some time to initialize new instances of your function if there is a sudden increase in demand. This may result in high latency or cold starts for your application. To avoid this, you can use provisioned concurrency, which ensures that your function is initialized and ready to respond at any time. You can also set up a scheduled scaling policy that increases the provisioned concurrency before employees begin to use the application each day, and decreases it when the demand is low.
Reference: https://docs.aws.amazon.com/lambda/latest/dg/configuration-concurrency.html
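A minimal sketch of the scheduled provisioned-concurrency approach using Application Auto Scaling with boto3. The function name, alias, capacity values, and cron schedules are illustrative assumptions, not values from the question.

```python
# Sketch: schedule provisioned concurrency for a Lambda alias so the function
# is warm before employees start work. Function/alias names, capacities, and
# schedules are hypothetical.
import boto3

autoscaling = boto3.client("application-autoscaling")
resource_id = "function:report-generator:live"  # hypothetical function:alias

# Register the alias's provisioned concurrency as a scalable target.
autoscaling.register_scalable_target(
    ServiceNamespace="lambda",
    ResourceId=resource_id,
    ScalableDimension="lambda:function:ProvisionedConcurrency",
    MinCapacity=1,
    MaxCapacity=100,
)

# Warm the function before the workday begins (08:00 UTC)...
autoscaling.put_scheduled_action(
    ServiceNamespace="lambda",
    ResourceId=resource_id,
    ScalableDimension="lambda:function:ProvisionedConcurrency",
    ScheduledActionName="scale-up-morning",
    Schedule="cron(0 8 ? * MON-FRI *)",
    ScalableTargetAction={"MinCapacity": 100, "MaxCapacity": 100},
)

# ...and scale back down in the evening to control cost.
autoscaling.put_scheduled_action(
    ServiceNamespace="lambda",
    ResourceId=resource_id,
    ScalableDimension="lambda:function:ProvisionedConcurrency",
    ScheduledActionName="scale-down-evening",
    Schedule="cron(0 20 ? * MON-FRI *)",
    ScalableTargetAction={"MinCapacity": 1, "MaxCapacity": 1},
)
```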
Question 452

A company wants to migrate 100 GB of historical data from an on-premises location to an Amazon S3 bucket. The company has a 100 megabits per second (Mbps) internet connection on premises. The company needs to encrypt the data in transit to the S3 bucket. The company will store new data directly in Amazon S3.
Which solution will meet these requirements with the LEAST operational overhead?
Explanation:
AWS DataSync is a data transfer service that makes it easy for you to move large amounts of data online between on-premises storage and AWS storage services over the internet or AWS Direct Connect. DataSync automatically encrypts your data in transit using TLS encryption, and verifies data integrity during transfer using checksums. DataSync can transfer data up to 10 times faster than open-source tools, and reduces operational overhead by simplifying and automating tasks such as scheduling, monitoring, and resuming transfers.
Reference: https://aws.amazon.com/datasync/
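A minimal sketch of a one-time DataSync transfer from an on-premises NFS share to S3 with boto3. The agent ARN, NFS server, bucket, and IAM role ARN are illustrative assumptions.

```python
# Sketch: transfer the 100 GB of historical data from an on-premises NFS export
# to S3 with AWS DataSync. All names/ARNs below are hypothetical.
import boto3

datasync = boto3.client("datasync")

# Source: the on-premises NFS export, reached through a deployed DataSync agent.
source = datasync.create_location_nfs(
    ServerHostname="files.corp.example.com",
    Subdirectory="/export/historical",
    OnPremConfig={"AgentArns": ["arn:aws:datasync:us-east-1:111122223333:agent/agent-0123456789abcdef0"]},
)

# Destination: the S3 bucket, accessed through an IAM role that DataSync assumes.
destination = datasync.create_location_s3(
    S3BucketArn="arn:aws:s3:::historical-data-bucket",
    S3Config={"BucketAccessRoleArn": "arn:aws:iam::111122223333:role/DataSyncS3Role"},
)

# Create and run the task; DataSync encrypts data in transit with TLS and
# verifies integrity with checksums automatically.
task = datasync.create_task(
    SourceLocationArn=source["LocationArn"],
    DestinationLocationArn=destination["LocationArn"],
    Name="migrate-historical-data",
)
datasync.start_task_execution(TaskArn=task["TaskArn"])
```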
Question 453

A company wants to implement a backup strategy for Amazon EC2 data and multiple Amazon S3 buckets. Because of regulatory requirements, the company must retain backup files for a specific time period. The company must not alter the files for the duration of the retention period.
Which solution will meet these requirements?
Explanation:
AWS Backup is a fully managed service that allows you to centralize and automate data protection of AWS services across compute, storage, and database. AWS Backup Vault Lock is an optional feature of a backup vault that can help you enhance the security and control over your backup vaults. When a lock is active in Compliance mode and the grace time is over, the vault configuration cannot be altered or deleted by a customer, account/data owner, or AWS. This ensures that your backups are available for you until they reach the expiration of their retention periods and meet the regulatory requirements.
Reference: https://docs.aws.amazon.com/aws-backup/latest/devguide/vault-lock.html
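A minimal sketch of applying AWS Backup Vault Lock with boto3. The vault name, retention bounds, and grace period are illustrative assumptions.

```python
# Sketch: lock a backup vault so recovery points cannot be deleted or altered
# before their retention period expires. Values below are hypothetical.
import boto3

backup = boto3.client("backup")

backup.put_backup_vault_lock_configuration(
    BackupVaultName="regulated-backups",
    MinRetentionDays=365,    # recovery points must be kept at least this long
    MaxRetentionDays=3650,   # and no longer than this
    ChangeableForDays=3,     # grace period; after it expires the lock becomes immutable
)
```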
Question 454

A company is subscribed to the AWS Business Support plan. Compliance rules require the company to check on AWS infrastructure health before deployments can proceed. The company needs a programmatic and automated way to check on infrastructure health at the beginning of new deployments.
Which solution will meet these requirements?
Explanation:
The AWS Health API provides programmatic access to the AWS Health information that is presented in the AWS Personal Health Dashboard. You can use the API operations to get information about AWS Health events that affect your AWS services and resources, and you can enable organizational view to aggregate events across all accounts in your AWS organization. Access to the AWS Health API requires a Business, Enterprise On-Ramp, or Enterprise Support plan, which the company already has. The company can call the AWS Health API at the start of each deployment to check infrastructure health and pause new deployments if the API returns any open issues.
Reference: https://docs.aws.amazon.com/health/latest/APIReference/Welcome.html
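A minimal sketch of a pre-deployment gate that queries the AWS Health API for open issues with boto3. The "fail the deployment" behavior is an illustrative assumption; the Health API is a global service called through its us-east-1 endpoint.

```python
# Sketch: check AWS Health for open issues before starting a deployment and
# abort if any are found. The abort behavior is hypothetical pipeline logic.
import boto3

health = boto3.client("health", region_name="us-east-1")

response = health.describe_events(
    filter={
        "eventTypeCategories": ["issue"],
        "eventStatusCodes": ["open"],
    }
)

open_issues = response["events"]
if open_issues:
    for event in open_issues:
        print(f"Open AWS Health issue: {event['arn']} ({event['service']})")
    raise SystemExit("AWS infrastructure issues detected; pausing deployment.")
print("No open AWS Health issues; deployment can proceed.")
```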
Question 455

A company needs to migrate a MySQL database from its on-premises data center to AWS within 2 weeks. The database is 20 TB in size. The company wants to complete the migration with minimal downtime.
Which solution will migrate the database MOST cost-effectively?
Explanation:
This answer is correct because it meets the requirements of migrating a 20 TB MySQL database within 2 weeks, with minimal downtime, and cost-effectively. An AWS Snowball Edge Storage Optimized device provides up to 80 TB of usable storage, which is enough to hold the database. AWS Database Migration Service (AWS DMS) can migrate data from MySQL to Amazon Aurora, Amazon RDS for MySQL, or MySQL on Amazon EC2 with minimal downtime by continuously replicating changes from the source to the target, and the AWS Schema Conversion Tool (AWS SCT) can convert the source schema and code to a format compatible with the target database. After the Snowball Edge device is shipped back to AWS and its data is loaded into the target, AWS DMS continues replicating ongoing changes over the network until cutover, so the database is fully migrated with minimal downtime and cost.
https://docs.aws.amazon.com/snowball/latest/developer-guide/device-differences.html
https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.MySQL.html
https://docs.aws.amazon.com/SchemaConversionTool/latest/userguide/CHAP_Source.MySQL.htm
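A minimal sketch of the change-data-capture (CDC) step with boto3, which keeps the target in sync after the Snowball Edge bulk load is imported. The endpoint and replication instance ARNs and the table mappings are illustrative assumptions.

```python
# Sketch: create an AWS DMS task that replicates ongoing changes from the
# on-premises MySQL source to the AWS target after the bulk load arrives on
# the Snowball Edge device. ARNs below are hypothetical.
import json
import boto3

dms = boto3.client("dms")

table_mappings = {
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-all",
            "object-locator": {"schema-name": "%", "table-name": "%"},
            "rule-action": "include",
        }
    ]
}

dms.create_replication_task(
    ReplicationTaskIdentifier="mysql-cdc-after-snowball",
    SourceEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:SRCMYSQL",
    TargetEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:TGTAURORA",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:111122223333:rep:REPLINSTANCE",
    MigrationType="cdc",  # replicate changes only; the bulk data arrived on the device
    TableMappings=json.dumps(table_mappings),
)
```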
Question 456

A company wants to host a scalable web application on AWS. The application will be accessed by users from different geographic regions of the world. Application users will be able to download and upload unique data up to gigabytes in size. The development team wants a cost-effective solution to minimize upload and download latency and maximize performance.
What should a solutions architect do to accomplish this?
Explanation:
This answer is correct because it meets the requirements of hosting a scalable web application that can handle large data transfers from users in different geographic regions. Amazon EC2 provides scalable compute capacity for hosting web applications, and Auto Scaling can automatically adjust the number of EC2 instances based on demand and traffic patterns. Amazon CloudFront is a content delivery network (CDN) that caches static and dynamic content at edge locations closer to users, reducing latency and improving performance. For the gigabyte-scale uploads and downloads, Amazon S3 Transfer Acceleration routes transfers between clients and the S3 bucket through the nearest CloudFront edge location, speeding up long-distance transfers.
https://docs.aws.amazon.com/autoscaling/ec2/userguide/what-is-amazon-ec2-auto-scaling.html
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Introduction.html
https://aws.amazon.com/s3/transfer-acceleration/
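A minimal sketch of enabling S3 Transfer Acceleration and uploading through the accelerated endpoint with boto3. The bucket and object names are illustrative assumptions.

```python
# Sketch: turn on Transfer Acceleration for the upload bucket, then send an
# upload through the accelerated endpoint. Names below are hypothetical.
import boto3
from botocore.config import Config

s3 = boto3.client("s3")
s3.put_bucket_accelerate_configuration(
    Bucket="user-media-uploads",
    AccelerateConfiguration={"Status": "Enabled"},
)

# Clients then upload/download via the accelerated endpoint, which routes
# traffic over the nearest CloudFront edge location.
s3_accelerated = boto3.client(
    "s3",
    config=Config(s3={"use_accelerate_endpoint": True}),
)
s3_accelerated.upload_file("render.bin", "user-media-uploads", "uploads/render.bin")
```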
Question 457

A company runs an application using Amazon ECS. The application creates resized versions of an original image and then makes Amazon S3 API calls to store the resized images in Amazon S3.
How can a solutions architect ensure that the application has permission to access Amazon S3?
Explanation:
This answer is correct because it allows the application to access Amazon S3 by using an IAM role that is associated with the ECS task. The task role grants permissions to the containers running in the task, and can be used to make AWS API calls from the application code. The taskRoleArn is a parameter in the task definition that specifies the IAM role to use for the task.
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html
https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_TaskDefinition.html
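A minimal sketch of registering a task definition whose taskRoleArn grants the containers S3 access, using boto3. The role, image, and family names are illustrative assumptions; the referenced IAM role would carry a policy allowing the required S3 actions.

```python
# Sketch: register an ECS task definition that attaches an IAM task role so the
# application container can call the S3 API. Names/ARNs below are hypothetical.
import boto3

ecs = boto3.client("ecs")

ecs.register_task_definition(
    family="image-resizer",
    # The task role is what grants the application code permission to call S3.
    taskRoleArn="arn:aws:iam::111122223333:role/ImageResizerS3AccessRole",
    containerDefinitions=[
        {
            "name": "resizer",
            "image": "111122223333.dkr.ecr.us-east-1.amazonaws.com/resizer:latest",
            "memory": 512,
            "essential": True,
        }
    ],
)
```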
Question 458

A solutions architect is implementing a complex Java application with a MySQL database. The Java application must be deployed on Apache Tomcat and must be highly available.
What should the solutions architect do to meet these requirements?
Explanation:
AWS Elastic Beanstalk provides an easy and quick way to deploy, manage, and scale applications. It supports a variety of platforms, including Java on Apache Tomcat. By using Elastic Beanstalk, the solutions architect can upload the Java application and configure the environment to run Apache Tomcat. For high availability, the environment can be deployed as a load-balanced, Auto Scaling environment that spans multiple Availability Zones, with the MySQL database running on Amazon RDS in a Multi-AZ deployment.
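A minimal sketch of creating a load-balanced Elastic Beanstalk environment on a Tomcat platform with boto3. The application name, solution stack string, and option values are illustrative assumptions; the currently supported stacks can be retrieved with list_available_solution_stacks() before choosing one.

```python
# Sketch: create a highly available (load-balanced, multi-instance) Elastic
# Beanstalk environment running Tomcat. Names and the stack string are hypothetical.
import boto3

eb = boto3.client("elasticbeanstalk")

eb.create_application(ApplicationName="orders-app")

eb.create_environment(
    ApplicationName="orders-app",
    EnvironmentName="orders-prod",
    SolutionStackName="64bit Amazon Linux 2 v4.6.1 running Tomcat 9 Corretto 11",
    OptionSettings=[
        # Load-balanced environment spread across multiple instances/AZs for HA.
        {"Namespace": "aws:elasticbeanstalk:environment",
         "OptionName": "EnvironmentType", "Value": "LoadBalanced"},
        {"Namespace": "aws:autoscaling:asg", "OptionName": "MinSize", "Value": "2"},
        {"Namespace": "aws:autoscaling:asg", "OptionName": "MaxSize", "Value": "4"},
    ],
)
```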
Question 459

A 4-year-old media company is using the AWS Organizations all features feature set to organize its AWS accounts. According to the company's finance team, the billing information on the member accounts must not be accessible to anyone, including the root user of the member accounts.
Which solution will meet these requirements?
Explanation:
Service control policies (SCPs): SCPs are an integral part of AWS Organizations and allow you to set fine-grained permission guardrails on the organizational units (OUs) in your organization. They define the maximum permissions available to member accounts, and they apply to every principal in those accounts, including the root user.
Denying access to billing information: By creating an SCP that denies billing-related actions and attaching it to the root OU, you explicitly deny access to billing information for all accounts in the organization.
Granular control: Because the deny is attached at the root OU and SCPs are evaluated for every principal in a member account, no one in a member account, including the root user, can view the billing information.
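A minimal sketch of creating and attaching such an SCP with boto3. The policy name, root ID, and the exact action set are illustrative assumptions; "aws-portal:*" covers the legacy billing console actions, and additional fine-grained billing actions may also need to be denied.

```python
# Sketch: create an SCP that denies billing access and attach it to the
# organization root so it applies to all member accounts. Names/IDs are hypothetical.
import json
import boto3

org = boto3.client("organizations")

deny_billing = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyBillingAccess",
            "Effect": "Deny",
            "Action": ["aws-portal:*"],
            "Resource": "*",
        }
    ],
}

policy = org.create_policy(
    Content=json.dumps(deny_billing),
    Description="Block member accounts (including root users) from viewing billing data",
    Name="DenyBillingInformation",
    Type="SERVICE_CONTROL_POLICY",
)

# Attach at the root so the deny applies to every OU and member account beneath it.
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="r-examplerootid",  # hypothetical organization root ID
)
```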
Question 460

A company has two VPCs named Management and Production. The Management VPC uses VPNs through a customer gateway to connect to a single device in the data center. The Production VPC uses a virtual private gateway with AWS Direct Connect connections. The Management and Production VPCs both use a single VPC peering connection to allow communication between the two VPCs.
What should a solutions architect do to mitigate any single point of failure in this architecture?
Explanation:
This answer is correct because it provides redundancy for the VPN connection between the Management VPC and the data center. If one customer gateway device or one VPN tunnel becomes unavailable, the traffic can still flow over the second customer gateway device and the second VPN tunnel. This way, the single point of failure in the VPN connection is mitigated.
https://docs.aws.amazon.com/vpn/latest/s2svpn/vpn-redundant-connection.html
https://www.trendmicro.com/cloudoneconformity/knowledge-base/aws/VPC/vpn-tunnel-redundancy.html
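A minimal sketch of adding the redundant path with boto3: a second customer gateway for a second on-premises VPN device and a second Site-to-Site VPN connection terminating on the same virtual private gateway. The public IP, BGP ASN, and gateway IDs are illustrative assumptions.

```python
# Sketch: add a second customer gateway and a second VPN connection to the
# Management VPC's virtual private gateway. Values below are hypothetical.
import boto3

ec2 = boto3.client("ec2")

# Second customer gateway pointing at the second on-premises VPN device.
cgw = ec2.create_customer_gateway(
    BgpAsn=65010,
    PublicIp="203.0.113.20",
    Type="ipsec.1",
)

# Second VPN connection (two more tunnels) on the same virtual private gateway,
# removing the single on-premises device as a point of failure.
ec2.create_vpn_connection(
    CustomerGatewayId=cgw["CustomerGateway"]["CustomerGatewayId"],
    Type="ipsec.1",
    VpnGatewayId="vgw-0123456789abcdef0",
    Options={"StaticRoutesOnly": False},
)
```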