Amazon SAA-C03 Practice Test - Questions Answers, Page 55

A company runs a three-tier application in two AWS Regions. The web tier, the application tier, and the database tier run on Amazon EC2 instances. The company uses Amazon RDS for Microsoft SQL Server Enterprise for the database tier. The database tier is experiencing high load when weekly and monthly reports are run. The company wants to reduce the load on the database tier.

Which solution will meet these requirements with the LEAST administrative effort?

A. Create read replicas. Configure the reports to use the new read replicas.
B. Convert the RDS database to Amazon DynamoDB. Configure the reports to use DynamoDB.
C. Modify the existing RDS DB instances by selecting a larger instance size.
D. Modify the existing RDS DB instances and put the instances into an Auto Scaling group.
Suggested answer: A

Explanation:

Option A allows the company to create read replicas of its RDS database and reduce the load on the database tier. By creating read replicas, the company can offload read traffic from the primary DB instance to one or more replicas. By configuring the reports to use the new read replicas, the company can improve the performance and availability of the database tier. Reference:

Working with Read Replicas

Read Replicas for Amazon RDS for SQL Server
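
As a rough illustration of option A with boto3 (all identifiers here are hypothetical), a read replica can be created from the primary DB instance, and the reporting jobs can then be pointed at the replica's endpoint:

    import boto3

    rds = boto3.client("rds", region_name="us-east-1")

    # Create a read replica of the primary SQL Server DB instance.
    rds.create_db_instance_read_replica(
        DBInstanceIdentifier="reporting-replica-1",
        SourceDBInstanceIdentifier="primary-sqlserver-db",
    )

    # Once available, the weekly and monthly reports connect to the
    # replica's endpoint instead of the primary instance's endpoint.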

A company runs a website that stores images of historical events. Website users need the ability to search and view images based on the year that the event in the image occurred. On average, users request each image only once or twice a year. The company wants a highly available solution to store and deliver the images to users.

Which solution will meet these requirements MOST cost-effectively?

A. Store images in Amazon Elastic Block Store (Amazon EBS). Use a web server that runs on Amazon EC2.
B. Store images in Amazon Elastic File System (Amazon EFS). Use a web server that runs on Amazon EC2.
C. Store images in Amazon S3 Standard. Use S3 Standard to directly deliver images by using a static website.
D. Store images in Amazon S3 Standard-Infrequent Access (S3 Standard-IA). Use S3 Standard-IA to directly deliver images by using a static website.
Suggested answer: C

Explanation:

Option C allows the company to store and deliver images to users in a highly available and cost-effective way. By storing images in Amazon S3 Standard, the company can use a durable, scalable, and secure object storage service that offers high availability and performance. By using S3 Standard to directly deliver images through a static website, the company can avoid running web servers and reduce operational overhead. S3 Standard also offers low storage pricing, and data transfer within the same AWS Region is free. Reference:

Amazon S3 Storage Classes

Hosting a Static Website on Amazon S3
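
For illustration, enabling static website hosting on the image bucket might look like the following boto3 sketch (the bucket name and document keys are hypothetical):

    import boto3

    s3 = boto3.client("s3")

    # Turn on static website hosting for the bucket that stores the images.
    s3.put_bucket_website(
        Bucket="historical-images-bucket",
        WebsiteConfiguration={
            "IndexDocument": {"Suffix": "index.html"},
            "ErrorDocument": {"Key": "error.html"},
        },
    )

    # Images are then served from the bucket's website endpoint, e.g.
    # http://historical-images-bucket.s3-website-us-east-1.amazonaws.com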


A solutions architect needs to review a company's Amazon S3 buckets to discover personally identifiable information (PII). The company stores the PII data in the us-east-1 Region and the us-west-2 Region.

Which solution will meet these requirements with the LEAST operational overhead?

A. Configure Amazon Macie in each Region. Create a job to analyze the data that is in Amazon S3.
B. Configure AWS Security Hub for all Regions. Create an AWS Config rule to analyze the data that is in Amazon S3.
C. Configure Amazon Inspector to analyze the data that is in Amazon S3.
D. Configure Amazon GuardDuty to analyze the data that is in Amazon S3.
Suggested answer: A

Explanation:

Option A allows the solutions architect to review the S3 buckets and discover personally identifiable information (PII) with the least operational overhead. Amazon Macie is a fully managed data security and data privacy service that uses machine learning and pattern matching to discover and protect sensitive data in AWS. Because Macie is a Regional service, enabling it in each Region where the data resides covers both buckets, and Macie provides insights into the type, location, and sensitivity level of the data. Reference:

Amazon Macie

Analyzing data with Amazon Macie
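
A minimal sketch of option A with boto3, assuming a hypothetical account ID and bucket name; because Macie is Regional, the same job would be created again in us-west-2:

    import boto3

    macie = boto3.client("macie2", region_name="us-east-1")

    # One-time sensitive data discovery job over this Region's bucket.
    macie.create_classification_job(
        jobType="ONE_TIME",
        name="pii-discovery-us-east-1",
        s3JobDefinition={
            "bucketDefinitions": [
                {"accountId": "111122223333", "buckets": ["company-data-us-east-1"]}
            ]
        },
    )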

A company is building an ecommerce application and needs to store sensitive customer information. The company needs to give customers the ability to complete purchase transactions on the website. The company also needs to ensure that sensitive customer data is protected, even from database administrators.

Which solution meets these requirements?

A. Store sensitive data in an Amazon Elastic Block Store (Amazon EBS) volume. Use EBS encryption to encrypt the data. Use an IAM instance role to restrict access.
B. Store sensitive data in Amazon RDS for MySQL. Use AWS Key Management Service (AWS KMS) client-side encryption to encrypt the data.
C. Store sensitive data in Amazon S3. Use AWS Key Management Service (AWS KMS) server-side encryption to encrypt the data. Use S3 bucket policies to restrict access.
D. Store sensitive data in Amazon FSx for Windows Server. Mount the file share on application servers. Use Windows file permissions to restrict access.
Suggested answer: B

Explanation:

Option B allows the company to store sensitive customer information in a managed AWS service and give customers the ability to complete purchase transactions on the website. By using AWS Key Management Service (AWS KMS) client-side encryption, the application encrypts the data before sending it to Amazon RDS for MySQL. This ensures that sensitive customer data is protected, even from database administrators, because only the application has access to the encryption keys. Reference:

Using Encryption with Amazon RDS for MySQL

Encrypting Amazon RDS Resources
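
A minimal sketch of client-side encryption using the AWS Encryption SDK for Python (pip install aws-encryption-sdk); the KMS key ARN is a hypothetical placeholder, and the application encrypts values before they ever reach the database:

    import aws_encryption_sdk
    from aws_encryption_sdk import CommitmentPolicy

    client = aws_encryption_sdk.EncryptionSDKClient(
        commitment_policy=CommitmentPolicy.REQUIRE_ENCRYPT_REQUIRE_DECRYPT
    )
    key_provider = aws_encryption_sdk.StrictAwsKmsMasterKeyProvider(key_ids=[
        "arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"
    ])

    # Encrypt in the application; only ciphertext is written to MySQL,
    # so database administrators never see the plaintext.
    ciphertext, _header = client.encrypt(
        source=b"sensitive customer data",
        key_provider=key_provider,
    )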

A solutions architect needs to ensure that API calls to Amazon DynamoDB from Amazon EC2 instances in a VPC do not travel across the internet.

Which combination of steps should the solutions architect take to meet this requirement? (Choose two.)

A. Create a route table entry for the endpoint.
B. Create a gateway endpoint for DynamoDB.
C. Create an interface endpoint for Amazon EC2.
D. Create an elastic network interface for the endpoint in each of the subnets of the VPC.
E. Create a security group entry in the endpoint's security group to provide access.
Suggested answer: B, E

Explanation:

B and E are the correct answers because they allow the solutions architect to ensure that API calls to Amazon DynamoDB from Amazon EC2 instances in a VPC do not travel across the internet. By creating a gateway endpoint for DynamoDB, the solutions architect can enable private connectivity between the VPC and DynamoDB. By creating a security group entry in the endpoint's security group to provide access, the solutions architect can control which EC2 instances can communicate with DynamoDB through the endpoint. Reference:

Gateway Endpoints

Controlling Access to Services with VPC Endpoints
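
For illustration, a gateway endpoint for DynamoDB might be created as follows (the VPC and route table IDs are hypothetical); the RouteTableIds parameter adds the route entry that keeps DynamoDB traffic off the internet:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Gateway endpoint for DynamoDB, associated with the route table of
    # the subnets where the EC2 instances run.
    ec2.create_vpc_endpoint(
        VpcEndpointType="Gateway",
        VpcId="vpc-0123456789abcdef0",
        ServiceName="com.amazonaws.us-east-1.dynamodb",
        RouteTableIds=["rtb-0123456789abcdef0"],
    )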

A company has a service that reads and writes large amounts of data from an Amazon S3 bucket in the same AWS Region. The service is deployed on Amazon EC2 instances within the private subnet of a VPC. The service communicates with Amazon S3 through a NAT gateway in the public subnet.

However, the company wants a solution that will reduce the data output costs.

Which solution will meet these requirements MOST cost-effectively?

A. Provision a dedicated EC2 NAT instance in the public subnet. Configure the route table for the private subnet to use the elastic network interface of this instance as the destination for all S3 traffic.
B. Provision a dedicated EC2 NAT instance in the private subnet. Configure the route table for the public subnet to use the elastic network interface of this instance as the destination for all S3 traffic.
C. Provision a VPC gateway endpoint. Configure the route table for the private subnet to use the gateway endpoint as the route for all S3 traffic.
D. Provision a second NAT gateway. Configure the route table for the private subnet to use this NAT gateway as the destination for all S3 traffic.
Suggested answer: C

Explanation:

Option C allows the company to reduce the data output costs for accessing Amazon S3 from Amazon EC2 instances in a VPC. By provisioning a VPC gateway endpoint, the company can enable private connectivity between the VPC and S3. By configuring the route table for the private subnet to use the gateway endpoint as the route for all S3 traffic, the company can avoid the NAT gateway's data processing and data transfer charges. Reference:

VPC Endpoints for Amazon S3

VPC Endpoints Pricing
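
A sketch of option C with boto3 (the VPC and route table IDs are hypothetical); once the route table is updated, S3 traffic bypasses the NAT gateway and its per-GB charges:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Gateway endpoint for S3, attached to the private subnet's route table.
    ec2.create_vpc_endpoint(
        VpcEndpointType="Gateway",
        VpcId="vpc-0123456789abcdef0",
        ServiceName="com.amazonaws.us-east-1.s3",
        RouteTableIds=["rtb-0fedcba9876543210"],
    )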

A company runs multiple Amazon EC2 Linux instances in a VPC across two Availability Zones. The instances host applications that use a hierarchical directory structure. The applications need to read and write rapidly and concurrently to shared storage.

What should a solutions architect do to meet these requirements?

A. Create an Amazon S3 bucket. Allow access from all the EC2 instances in the VPC.
B. Create an Amazon Elastic File System (Amazon EFS) file system. Mount the EFS file system from each EC2 instance.
C. Create a file system on a Provisioned IOPS SSD (io2) Amazon Elastic Block Store (Amazon EBS) volume. Attach the EBS volume to all the EC2 instances.
D. Create file systems on Amazon Elastic Block Store (Amazon EBS) volumes that are attached to each EC2 instance. Synchronize the EBS volumes across the different EC2 instances.
Suggested answer: B

Explanation:

Option B allows the EC2 instances to read and write rapidly and concurrently to shared storage across two Availability Zones. Amazon EFS provides a scalable, elastic, and highly available file system that can be mounted from multiple EC2 instances. Amazon EFS supports high levels of throughput and IOPS with consistent low latencies, and it supports NFSv4 lock upgrading and downgrading, which enables high levels of concurrency. Reference:

Amazon EFS Features

Using Amazon EFS with Amazon EC2
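
A rough sketch of option B with boto3, assuming hypothetical subnet IDs for the two Availability Zones:

    import boto3

    efs = boto3.client("efs", region_name="us-east-1")

    # Create the shared file system once.
    fs = efs.create_file_system(PerformanceMode="generalPurpose", Encrypted=True)

    # One mount target per Availability Zone so instances in either AZ
    # can mount the same file system.
    for subnet_id in ["subnet-0aaa1111bbb22222c", "subnet-0ddd3333eee44444f"]:
        efs.create_mount_target(FileSystemId=fs["FileSystemId"], SubnetId=subnet_id)

    # On each instance (with amazon-efs-utils installed):
    #   sudo mount -t efs <FileSystemId>:/ /mnt/shared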

A company is using AWS Key Management Service (AWS KMS) keys to encrypt AWS Lambda environment variables. A solutions architect needs to ensure that the required permissions are in place to decrypt and use the environment variables.

Which steps must the solutions architect take to implement the correct permissions? (Choose two.)

A. Add AWS KMS permissions in the Lambda resource policy.
B. Add AWS KMS permissions in the Lambda execution role.
C. Add AWS KMS permissions in the Lambda function policy.
D. Allow the Lambda execution role in the AWS KMS key policy.
E. Allow the Lambda resource policy in the AWS KMS key policy.
Suggested answer: B, D

Explanation:

B and D are the correct answers because they ensure that the Lambda execution role has the permissions to decrypt and use the environment variables, and that the AWS KMS key policy allows the Lambda execution role to use the key. The Lambda execution role is an IAM role that grants the Lambda function permission to access AWS resources, such as AWS KMS. The AWS KMS key policy is a resource-based policy that controls access to the key. By adding AWS KMS permissions in the Lambda execution role and allowing the Lambda execution role in the AWS KMS key policy, the solutions architect can implement the correct permissions for encrypting and decrypting environment variables. Reference:

AWS Lambda Execution Role

Using AWS KMS keys in AWS Lambda
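
As an illustration of the two statements (the role name and key ARN are hypothetical), the decrypt permission can be attached to the execution role as an inline policy; a matching Allow statement for the role would also be added to the key policy:

    import json
    import boto3

    iam = boto3.client("iam")

    # Inline policy letting the Lambda execution role decrypt with the key.
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "kms:Decrypt",
            "Resource": "arn:aws:kms:us-east-1:111122223333:key/"
                        "1234abcd-12ab-34cd-56ef-1234567890ab",
        }],
    }
    iam.put_role_policy(
        RoleName="my-function-execution-role",
        PolicyName="AllowKmsDecrypt",
        PolicyDocument=json.dumps(policy),
    )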

A company wants to use an AWS CloudFormation stack for its application in a test environment. The company stores the CloudFormation template in an Amazon S3 bucket that blocks public access. The company wants to grant CloudFormation access to the template in the S3 bucket based on specific user requests to create the test environment. The solution must follow security best practices.

Which solution will meet these requirements?

A. Create a gateway VPC endpoint for Amazon S3. Configure the CloudFormation stack to use the S3 object URL.
B. Create an Amazon API Gateway REST API that has the S3 bucket as the target. Configure the CloudFormation stack to use the API Gateway URL.
C. Create a presigned URL for the template object. Configure the CloudFormation stack to use the presigned URL.
D. Allow public access to the template object in the S3 bucket. Block the public access after the test environment is created.
Suggested answer: C

Explanation:

Option C allows CloudFormation to access the template in the S3 bucket without granting public access or creating additional resources. A presigned URL is a URL that is signed with the credentials of an IAM user or role that has permission to access the object. Anyone who receives the presigned URL can use it, but it expires after a specified time. By creating a presigned URL for the template object and configuring the CloudFormation stack to use it, the company can grant CloudFormation access to the template based on specific user requests and follow security best practices. Reference:

Using Amazon S3 Presigned URLs

Using Amazon S3 Buckets
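
A minimal sketch of option C with boto3, assuming a hypothetical bucket and template key; the presigned URL is generated on request and handed to CloudFormation:

    import boto3

    s3 = boto3.client("s3")
    cfn = boto3.client("cloudformation")

    # Presigned URL for the template object, valid for one hour.
    template_url = s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": "cfn-templates-bucket", "Key": "test-env.yaml"},
        ExpiresIn=3600,
    )

    # CloudFormation fetches the template through the expiring URL.
    cfn.create_stack(StackName="test-environment", TemplateURL=template_url)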

A company runs an application on AWS. The application receives inconsistent amounts of usage. The application uses AWS Direct Connect to connect to an on-premises MySQL-compatible database. The on-premises database consistently uses a minimum of 2 GiB of memory.

The company wants to migrate the on-premises database to a managed AWS service. The company wants to use auto scaling capabilities to manage unexpected workload increases.

Which solution will meet these requirements with the LEAST administrative overhead?

A. Provision an Amazon DynamoDB database with default read and write capacity settings.
B. Provision an Amazon Aurora database with a minimum capacity of 1 Aurora capacity unit (ACU).
C. Provision an Amazon Aurora Serverless v2 database with a minimum capacity of 1 Aurora capacity unit (ACU).
D. Provision an Amazon RDS for MySQL database with 2 GiB of memory.
Suggested answer: C

Explanation:

Option C allows the company to migrate the on-premises database to a managed AWS service that supports auto scaling and has the least administrative overhead. Amazon Aurora Serverless v2 is a configuration of Amazon Aurora that automatically scales compute capacity based on workload demand, and it can scale from hundreds to hundreds of thousands of transactions in a fraction of a second. One Aurora capacity unit (ACU) provides approximately 2 GiB of memory, so a minimum capacity of 1 ACU matches the on-premises database's 2 GiB baseline. Aurora Serverless v2 supports MySQL-compatible databases and connectivity over AWS Direct Connect. Reference:

Amazon Aurora Serverless v2

Connecting to an Amazon Aurora DB Cluster
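
A rough sketch of option C with boto3 (identifiers are hypothetical); the scaling range starts at 1 ACU, roughly 2 GiB of memory:

    import boto3

    rds = boto3.client("rds", region_name="us-east-1")

    # MySQL-compatible Aurora cluster with Serverless v2 scaling.
    rds.create_db_cluster(
        DBClusterIdentifier="app-cluster",
        Engine="aurora-mysql",
        MasterUsername="admin",
        ManageMasterUserPassword=True,
        ServerlessV2ScalingConfiguration={"MinCapacity": 1.0, "MaxCapacity": 16.0},
    )

    # Cluster members use the special "db.serverless" instance class.
    rds.create_db_instance(
        DBInstanceIdentifier="app-cluster-instance-1",
        DBClusterIdentifier="app-cluster",
        DBInstanceClass="db.serverless",
        Engine="aurora-mysql",
    )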
