Amazon SAA-C03 Practice Test - Questions Answers, Page 49

A company has developed a new video game as a web application. The application is in a three-tier architecture in a VPC with Amazon RDS for MySQL in the database layer. Several players will compete concurrently online. The game's developers want to display a top-10 scoreboard in near-real time and offer the ability to stop and restore the game while preserving the current scores.

What should a solutions architect do to meet these requirements?

A. Set up an Amazon ElastiCache for Memcached cluster to cache the scores for the web application to display.
B. Set up an Amazon ElastiCache for Redis cluster to compute and cache the scores for the web application to display.
C. Place an Amazon CloudFront distribution in front of the web application to cache the scoreboard in a section of the application.
D. Create a read replica on Amazon RDS for MySQL to run queries to compute the scoreboard and serve the read traffic to the web application.
Suggested answer: B

Explanation:

This answer is correct because it meets both requirements: displaying a top-10 scoreboard in near-real time and stopping and restoring the game while preserving the current scores. Amazon ElastiCache for Redis is a fast in-memory data store that provides sub-millisecond latency for internet-scale real-time applications. Redis data structures such as sorted sets and hashes can store and rank the players' scores, and commands such as ZADD and ZRANGE update and retrieve the scores efficiently. Redis persistence features such as snapshots and append-only files (AOF) enable point-in-time recovery of the data, which supports stopping and restoring the game while preserving the current scores.

https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/WhatIs.html

https://redis.io/topics/data-types

https://redis.io/topics/persistence
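
As an illustration of the sorted-set approach, here is a minimal sketch using the redis-py client; the endpoint hostname and key name are placeholders, and it assumes an ElastiCache for Redis cluster is reachable from the application.

```python
# Minimal leaderboard sketch with redis-py; endpoint and key are placeholders.
import redis

# Replace with your ElastiCache for Redis configuration endpoint.
r = redis.Redis(host="my-game-cache.xxxxxx.use1.cache.amazonaws.com", port=6379)

def record_score(player: str, score: int) -> None:
    # ZADD keeps the sorted set ordered by score automatically.
    r.zadd("leaderboard", {player: score})

def top_10():
    # ZREVRANGE returns the highest-scoring members first, with their scores.
    return r.zrevrange("leaderboard", 0, 9, withscores=True)

record_score("player1", 4200)
record_score("player2", 3900)
print(top_10())
```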

A company wants to share accounting data with an external auditor. The data is stored in an Amazon RDS DB instance that resides in a private subnet. The auditor has its own AWS account and requires its own copy of the database.

What is the MOST secure way for the company to share the database with the auditor?

A. Create a read replica of the database. Configure IAM standard database authentication to grant the auditor access.
B. Export the database contents to text files. Store the files in an Amazon S3 bucket. Create a new IAM user for the auditor. Grant the user access to the S3 bucket.
C. Copy a snapshot of the database to an Amazon S3 bucket. Create an IAM user. Share the user's keys with the auditor to grant access to the object in the S3 bucket.
D. Create an encrypted snapshot of the database. Share the snapshot with the auditor. Allow access to the AWS Key Management Service (AWS KMS) encryption key.
Suggested answer: D

Explanation:

This answer is correct because it meets the requirements of sharing the database with the auditor in a secure way. You can create an encrypted snapshot of the database by using AWS Key Management Service (AWS KMS) to encrypt the snapshot with a customer managed key. You can share the snapshot with the auditor by modifying the permissions of the snapshot and specifying the AWS account ID of the auditor. You can also allow access to the AWS KMS encryption key by adding a key policy statement that grants permissions to the auditor's account. This way, you can ensure that only the auditor can access and restore the snapshot in their own AWS account.

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ShareSnapshot.html

https://docs.aws.amazon.com/kms/latest/developerguide/key-policies.html#key-policy-default-allow-root-enable-iam
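
A hedged boto3 sketch of the sharing flow follows; the snapshot identifier, auditor account ID, and key policy statement are placeholders, not values from the question.

```python
# Sketch: share an encrypted RDS snapshot with another account (boto3).
# All identifiers below are placeholders.
import boto3

rds = boto3.client("rds")
AUDITOR_ACCOUNT = "111122223333"  # hypothetical auditor account ID

# Grant the auditor's account permission to copy/restore the snapshot.
rds.modify_db_snapshot_attribute(
    DBSnapshotIdentifier="accounting-db-snapshot",
    AttributeName="restore",
    ValuesToAdd=[AUDITOR_ACCOUNT],
)

# The customer managed KMS key's policy must also allow the auditor's
# account to use the key; a statement like this would be merged into it.
key_policy_statement = {
    "Sid": "AllowAuditorUseOfKey",
    "Effect": "Allow",
    "Principal": {"AWS": f"arn:aws:iam::{AUDITOR_ACCOUNT}:root"},
    "Action": ["kms:Decrypt", "kms:DescribeKey", "kms:CreateGrant"],
    "Resource": "*",
}
```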

A retail company uses a regional Amazon API Gateway API for its public REST APIs. The API Gateway endpoint is a custom domain name that points to an Amazon Route 53 alias record. A solutions architect needs to create a solution that has minimal effects on customers and minimal data loss to release the new version of APIs.

Which solution will meet these requirements?

A. Create a canary release deployment stage for API Gateway. Deploy the latest API version. Point an appropriate percentage of traffic to the canary stage. After API verification, promote the canary stage to the production stage.
B. Create a new API Gateway endpoint with a new version of the API in OpenAPI YAML file format. Use the import-to-update operation in merge mode into the API in API Gateway. Deploy the new version of the API to the production stage.
C. Create a new API Gateway endpoint with a new version of the API in OpenAPI JSON file format. Use the import-to-update operation in overwrite mode into the API in API Gateway. Deploy the new version of the API to the production stage.
D. Create a new API Gateway endpoint with new versions of the API definitions. Create a custom domain name for the new API Gateway API. Point the Route 53 alias record to the new API Gateway API custom domain name.
Suggested answer: A

Explanation:

This answer is correct because it meets the requirements of releasing the new version of APIs with minimal effects on customers and minimal data loss. A canary release deployment is a software development strategy in which a new version of an API is deployed for testing purposes, and the base version remains deployed as a production release for normal operations on the same stage.

In a canary release deployment, total API traffic is separated at random into a production release and a canary release with a preconfigured ratio. Typically, the canary release receives a small percentage of API traffic and the production release takes up the rest. The updated API features are visible only to traffic through the canary. You can adjust the canary traffic percentage to optimize test coverage or performance. By keeping canary traffic small and the selection random, most users are not adversely affected at any time by potential bugs in the new version, and no single user is adversely affected all the time.

After the test metrics pass your requirements, you can promote the canary release to the production release and disable the canary from the deployment. This makes the new features available in the production stage.

https://docs.aws.amazon.com/apigateway/latest/developerguide/canary-release.html
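
To make the mechanics concrete, here is a hedged boto3 sketch of creating a canary on the prod stage and later promoting it; the API ID, traffic percentage, and patch operations are illustrative assumptions rather than the only valid promotion path.

```python
# Sketch: API Gateway canary release with boto3; identifiers are placeholders.
import boto3

apigw = boto3.client("apigateway")
API_ID = "a1b2c3d4e5"  # hypothetical REST API ID

# Deploy the new version as a canary receiving 10% of traffic on "prod".
deployment = apigw.create_deployment(
    restApiId=API_ID,
    stageName="prod",
    canarySettings={"percentTraffic": 10.0, "useStageCache": False},
)

# After verification, make the canary deployment the stage's deployment
# and remove the canary settings.
apigw.update_stage(
    restApiId=API_ID,
    stageName="prod",
    patchOperations=[
        {"op": "replace", "path": "/deploymentId", "value": deployment["id"]},
        {"op": "remove", "path": "/canarySettings"},
    ],
)
```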

A company has a web application for travel ticketing. The application is based on a database that runs in a single data center in North America. The company wants to expand the application to serve a global user base. The company needs to deploy the application to multiple AWS Regions. Average latency must be less than 1 second on updates to the reservation database.

The company wants to have separate deployments of its web platform across multiple Regions. However, the company must maintain a single primary reservation database that is globally consistent.

Which solution should a solutions architect recommend to meet these requirements?

A. Convert the application to use Amazon DynamoDB. Use a global table for the center reservation table. Use the correct Regional endpoint in each Regional deployment.
B. Migrate the database to an Amazon Aurora MySQL database. Deploy Aurora Read Replicas in each Region. Use the correct Regional endpoint in each Regional deployment for access to the database.
C. Migrate the database to an Amazon RDS for MySQL database. Deploy MySQL read replicas in each Region. Use the correct Regional endpoint in each Regional deployment for access to the database.
D. Migrate the application to an Amazon Aurora Serverless database. Deploy instances of the database to each Region. Use the correct Regional endpoint in each Regional deployment to access the database. Use AWS Lambda functions to process event streams in each Region to synchronize the databases.
Suggested answer: B

Explanation:

Amazon Aurora MySQL supports cross-Region replication through Aurora Global Database, which keeps a single primary writer while replicating to secondary Regions with typical latency below 1 second. This meets the requirement for a globally consistent primary reservation database with sub-second update latency, without application changes.

https://aws.amazon.com/rds/aurora/global-database/

https://aws.amazon.com/blogs/architecture/using-amazon-aurora-global-database-for-low-latency-without-application-changes/
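
A hedged boto3 sketch of the Global Database pattern the linked posts describe; the cluster identifiers, ARN, and Regions are placeholders, and this is one way to set up the cross-Region replicas, not the question's prescribed steps.

```python
# Sketch: promote an existing Aurora MySQL cluster into a Global Database
# and add a secondary Region for low-latency reads. IDs are placeholders.
import boto3

# Create the global cluster from the existing primary in us-east-1.
rds_primary = boto3.client("rds", region_name="us-east-1")
rds_primary.create_global_cluster(
    GlobalClusterIdentifier="reservations-global",
    SourceDBClusterIdentifier=(
        "arn:aws:rds:us-east-1:123456789012:cluster:reservations"
    ),
)

# Attach a secondary cluster in another Region; its reader endpoint serves
# that Region's web deployment while all writes go to the single primary.
rds_secondary = boto3.client("rds", region_name="eu-west-1")
rds_secondary.create_db_cluster(
    DBClusterIdentifier="reservations-eu",
    Engine="aurora-mysql",
    GlobalClusterIdentifier="reservations-global",
)
```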

An application uses an Amazon RDS MySQL DB instance. The RDS database is becoming low on disk space. A solutions architect wants to increase the disk space without downtime.

Which solution meets these requirements with the LEAST amount of effort?

A. Enable storage autoscaling in RDS.
B. Increase the RDS database instance size.
C. Change the RDS database instance storage type to Provisioned IOPS.
D. Back up the RDS database, increase the storage capacity, restore the database, and stop the previous instance.
Suggested answer: A

Explanation:

Amazon RDS storage autoscaling automatically expands the allocated storage when free space runs low, with no downtime and no manual intervention, making it the option with the least effort.

https://aws.amazon.com/about-aws/whats-new/2019/06/rds-storage-auto-scaling/
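
Enabling autoscaling is a single API call; a hedged sketch with a placeholder instance identifier and storage ceiling:

```python
# Sketch: enable RDS storage autoscaling by setting a maximum storage limit.
# The instance identifier and limit are placeholders.
import boto3

rds = boto3.client("rds")
rds.modify_db_instance(
    DBInstanceIdentifier="app-mysql-db",
    MaxAllocatedStorage=1000,  # autoscale up to 1,000 GiB as space runs low
    ApplyImmediately=True,
)
```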

A company wants to use high-performance computing and artificial intelligence to improve its fraud prevention and detection technology. The company requires distributed processing to complete a single workload as quickly as possible.

Which solution will meet these requirements?

A. Use Amazon Elastic Kubernetes Service (Amazon EKS) and multiple containers.
B. Use AWS ParallelCluster and the Message Passing Interface (MPI) libraries.
C. Use an Application Load Balancer and Amazon EC2 instances.
D. Use AWS Lambda functions.
Suggested answer: B

Explanation:

AWS ParallelCluster is a service that allows you to create and manage high-performance computing (HPC) clusters on AWS. It supports multiple schedulers, including AWS Batch, which can run distributed workloads across multiple EC2 instances.

MPI is a standard for message passing between processes in parallel computing. It provides functions for sending and receiving data, synchronizing processes, and managing communication groups.

By using AWS ParallelCluster and MPI libraries, you can take advantage of the following benefits:

You can easily create and configure HPC clusters that meet your specific requirements, such as instance type, number of nodes, network configuration, and storage options.

You can leverage the scalability and elasticity of AWS to run large-scale parallel workloads without worrying about provisioning or managing servers.

You can use MPI libraries to optimize the performance and efficiency of your parallel applications by enabling inter-process communication and data exchange.

You can choose from a variety of MPI implementations that are compatible with AWS ParallelCluster, such as Open MPI, Intel MPI, and MPICH.
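
As a minimal illustration of MPI-style distributed processing, this mpi4py sketch splits a workload across ranks and reduces the partial results; it assumes mpi4py and an MPI runtime are installed on the cluster nodes, and the arithmetic is a placeholder for real fraud-scoring work.

```python
# fraud_sum.py -- minimal MPI sketch with mpi4py; run with, for example:
#   mpirun -n 8 python fraud_sum.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's ID
size = comm.Get_size()   # total number of processes

# Each rank processes a disjoint slice of the workload (placeholder math).
local_total = sum(i * i for i in range(rank, 1_000_000, size))

# Combine the partial results on rank 0.
grand_total = comm.reduce(local_total, op=MPI.SUM, root=0)
if rank == 0:
    print(f"aggregate across {size} ranks: {grand_total}")
```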

A company needs to connect several VPCs in the us-east-1 Region that span hundreds of AWS accounts. The company's networking team has its own AWS account to manage the cloud network.

What is the MOST operationally efficient solution to connect the VPCs?

A. Set up VPC peering connections between each VPC. Update each associated subnet's route table.
B. Configure a NAT gateway and an internet gateway in each VPC to connect each VPC through the internet.
C. Create an AWS Transit Gateway in the networking team's AWS account. Configure static routes from each VPC.
D. Deploy VPN gateways in each VPC. Create a transit VPC in the networking team's AWS account to connect to each VPC.
Suggested answer: C

Explanation:

AWS Transit Gateway is a highly scalable and centralized hub for connecting multiple VPCs, on-premises networks, and remote networks. It simplifies network connectivity by providing a single entry point and reducing the number of connections required. In this scenario, deploying an AWS Transit Gateway in the networking team's AWS account allows for efficient management and control over the network connectivity across multiple VPCs.
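
A hedged sketch of the central pieces from the networking account; all resource IDs and CIDR blocks are placeholders, and sharing the transit gateway with the other accounts via AWS RAM is noted but not shown.

```python
# Sketch: create a transit gateway, attach a VPC, and add a static route.
# All IDs and CIDR blocks are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

tgw = ec2.create_transit_gateway(Description="central-network-hub")
tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

# Attach one spoke VPC (repeated per VPC; VPCs in other accounts first
# receive the transit gateway through an AWS RAM resource share).
ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId=tgw_id,
    VpcId="vpc-0123456789abcdef0",
    SubnetIds=["subnet-0123456789abcdef0"],
)

# Static route in the spoke VPC's route table toward the other VPCs.
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",
    DestinationCidrBlock="10.0.0.0/8",
    TransitGatewayId=tgw_id,
)
```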

A company's data platform uses an Amazon Aurora MySQL database. The database has multiple read replicas and multiple DB instances across different Availability Zones. Users have recently reported errors from the database that indicate that there are too many connections. The company wants to reduce the failover time by 20% when a read replica is promoted to primary writer.

Which solution will meet this requirement?

A. Switch from Aurora to Amazon RDS with Multi-AZ cluster deployment.
B. Use Amazon RDS Proxy in front of the Aurora database.
C. Switch to Amazon DynamoDB with DynamoDB Accelerator (DAX) for read connections.
D. Switch to Amazon Redshift with relocation capability.
Suggested answer: B

Explanation:

Amazon RDS Proxy is a service that provides a fully managed, highly available database proxy for Amazon RDS and Aurora databases. It allows you to pool and share database connections, reduce database load, and improve application scalability and availability.

By using Amazon RDS Proxy in front of your Aurora database, you can achieve the following benefits:

You can reduce the number of connections to your database and avoid errors that indicate that there are too many connections. Amazon RDS Proxy handles the connection management and multiplexing for you, so you can use fewer database connections and resources.

You can reduce the failover time by 20% when a read replica is promoted to primary writer. Amazon RDS Proxy automatically detects failures and routes traffic to the new primary instance without requiring changes to your application code or configuration. According to a benchmark test, using Amazon RDS Proxy reduced the failover time from 66 seconds to 53 seconds, which is a 20% improvement.

You can improve the security and compliance of your database access. Amazon RDS Proxy integrates with AWS Secrets Manager and AWS Identity and Access Management (IAM) to enable secure and granular authentication and authorization for your database connections.
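
A hedged sketch of provisioning a proxy in front of an Aurora cluster with boto3; the ARNs, subnet IDs, and names are placeholders.

```python
# Sketch: create an RDS Proxy for an Aurora MySQL cluster and register it.
# All ARNs and identifiers are placeholders.
import boto3

rds = boto3.client("rds")

rds.create_db_proxy(
    DBProxyName="aurora-app-proxy",
    EngineFamily="MYSQL",
    Auth=[{
        "AuthScheme": "SECRETS",
        "SecretArn": "arn:aws:secretsmanager:us-east-1:123456789012:secret:db-creds",
        "IAMAuth": "DISABLED",
    }],
    RoleArn="arn:aws:iam::123456789012:role/rds-proxy-secrets-role",
    VpcSubnetIds=["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],
)

# Point the proxy at the Aurora cluster; the application then connects to
# the proxy endpoint instead of the cluster endpoint.
rds.register_db_proxy_targets(
    DBProxyName="aurora-app-proxy",
    DBClusterIdentifiers=["aurora-app-cluster"],
)
```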

A global marketing company has applications that run in the ap-southeast-2 Region and the eu-west-1 Region. Applications that run in a VPC in eu-west-1 need to communicate securely with databases that run in a VPC in ap-southeast-2.

Which network design will meet these requirements?

A. Create a VPC peering connection between the eu-west-1 VPC and the ap-southeast-2 VPC. Create an inbound rule in the eu-west-1 application security group that allows traffic from the database server IP addresses in the ap-southeast-2 security group.
B. Configure a VPC peering connection between the ap-southeast-2 VPC and the eu-west-1 VPC. Update the subnet route tables. Create an inbound rule in the ap-southeast-2 database security group that references the security group ID of the application servers in eu-west-1.
C. Configure a VPC peering connection between the ap-southeast-2 VPC and the eu-west-1 VPC. Update the subnet route tables. Create an inbound rule in the ap-southeast-2 database security group that allows traffic from the eu-west-1 application server IP addresses.
D. Create a transit gateway with a peering attachment between the eu-west-1 VPC and the ap-southeast-2 VPC. After the transit gateways are properly peered and routing is configured, create an inbound rule in the database security group that references the security group ID of the application servers in eu-west-1.
Suggested answer: C

Explanation:

'You cannot reference the security group of a peer VPC that's in a different Region. Instead, use the CIDR block of the peer VPC.' https://docs.aws.amazon.com/vpc/latest/peering/vpc-peering-security-groups.html
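Because a cross-Region peering connection cannot reference the remote security group ID, the inbound rule must use the peer VPC's CIDR range. A hedged boto3 sketch, with placeholder IDs, port, and CIDR:

```python
# Sketch: allow MySQL traffic from the eu-west-1 application VPC into the
# ap-southeast-2 database security group. IDs and CIDRs are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="ap-southeast-2")

ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # database security group
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3306,
        "ToPort": 3306,
        # CIDR of the peered eu-west-1 VPC; remote security group IDs
        # cannot be referenced across Regions.
        "IpRanges": [{"CidrIp": "10.1.0.0/16"}],
    }],
)
```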

A company needs to store contract documents. A contract lasts for 5 years. During the 5-year period, the company must ensure that the documents cannot be overwritten or deleted. The company needs to encrypt the documents at rest and rotate the encryption keys automatically every year.

Which combination of steps should a solutions architect take to meet these requirements with the LEAST operational overhead? (Select TWO.)

A. Store the documents in Amazon S3. Use S3 Object Lock in governance mode.
B. Store the documents in Amazon S3. Use S3 Object Lock in compliance mode.
C. Use server-side encryption with Amazon S3 managed encryption keys (SSE-S3). Configure key rotation.
D. Use server-side encryption with AWS Key Management Service (AWS KMS) customer managed keys. Configure key rotation.
E. Use server-side encryption with AWS Key Management Service (AWS KMS) customer provided (imported) keys. Configure key rotation.
Suggested answer: B, D

Explanation:

S3 Object Lock in compliance mode ensures that no user, including the root user of the AWS account, can overwrite or delete a protected object version during the retention period; governance mode, by contrast, can be bypassed by users with special permissions, so it does not guarantee the 5-year protection. For encryption, consider using the default aws/s3 KMS key if you're uploading or accessing S3 objects using AWS Identity and Access Management (IAM) principals in the same AWS account as the AWS KMS key and you don't want to manage policies for the key. Consider using a customer managed key if you want to create, rotate, disable, or define access controls for the key, or grant cross-account access to your S3 objects. Customer managed KMS keys support automatic rotation once a year, which matches the yearly rotation requirement. https://repost.aws/knowledge-center/s3-object-encryption-keys
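
A hedged sketch of the two steps together; the bucket name, key description, and retention values are placeholders.

```python
# Sketch: a bucket with Object Lock in compliance mode plus a customer
# managed KMS key with automatic annual rotation. Names are placeholders.
import boto3

s3 = boto3.client("s3")
kms = boto3.client("kms")

# Object Lock must be enabled when the bucket is created.
s3.create_bucket(Bucket="contract-documents", ObjectLockEnabledForBucket=True)

# Default retention: compliance mode for 5 years -- no one, including root,
# can overwrite or delete protected object versions during that period.
s3.put_object_lock_configuration(
    Bucket="contract-documents",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Years": 5}},
    },
)

# Create a customer managed key and turn on automatic yearly rotation.
key_id = kms.create_key(Description="contract-docs-key")["KeyMetadata"]["KeyId"]
kms.enable_key_rotation(KeyId=key_id)

# Encrypt new objects in the bucket with that key by default.
s3.put_bucket_encryption(
    Bucket="contract-documents",
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": key_id,
            },
        }],
    },
)
```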
