Amazon SAA-C03 Practice Test - Questions Answers, Page 63

A company is running a legacy system on an Amazon EC2 instance. The application code cannot be modified, and the system cannot run on more than one instance. A solutions architect must design a resilient solution that can improve the recovery time for the system.

What should the solutions architect recommend to meet these requirements?

A. Enable termination protection for the EC2 instance.
B. Configure the EC2 instance for Multi-AZ deployment.
C. Create an Amazon CloudWatch alarm to recover the EC2 instance in case of failure.
D. Launch the EC2 instance with two Amazon Elastic Block Store (Amazon EBS) volumes that use RAID configurations for storage redundancy.
Suggested answer: C

Explanation:

To design a resilient solution that can improve the recovery time for the system, a solutions architect should recommend creating an Amazon CloudWatch alarm to recover the EC2 instance in case of failure. This solution has the following benefits:

It allows the EC2 instance to be automatically recovered when a system status check failure occurs, such as loss of network connectivity, loss of system power, software issues on the physical host, or hardware issues on the physical host that impact network reachability [1].

It preserves the instance ID, private IP addresses, Elastic IP addresses, and all instance metadata of the original instance. A recovered instance is identical to the original instance, except for any data that is in memory, which is lost during the recovery process [1].

It does not require any modification of the application code or the EC2 instance configuration. The solutions architect can create a CloudWatch alarm by using the AWS Management Console, the AWS CLI, or the CloudWatch API [2].

[1]: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-recover.html

[2]: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-recover.html#ec2-instance-recover-create-alarm
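
For illustration, a recovery alarm of this kind can be created with a few lines of boto3. This is a minimal sketch; the instance ID, Region, and alarm thresholds are placeholders, not values from the question:

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Alarm on the EC2 system status check. The recover action moves the
# instance to healthy underlying hardware while preserving its instance
# ID, private IP addresses, Elastic IP addresses, and instance metadata.
cloudwatch.put_metric_alarm(
    AlarmName="recover-legacy-instance",
    Namespace="AWS/EC2",
    MetricName="StatusCheckFailed_System",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Maximum",
    Period=60,
    EvaluationPeriods=2,
    Threshold=1.0,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:automate:us-east-1:ec2:recover"],
)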

A company is running a photo hosting service in the us-east-1 Region. The service enables users across multiple countries to upload and view photos. Some photos are heavily viewed for months, and others are viewed for less than a week. The application allows uploads of up to 20 MB for each photo. The service uses the photo metadata to determine which photos to display to each user.

Which solution provides the appropriate user access MOST cost-effectively?

A. Store the photos in Amazon DynamoDB. Turn on DynamoDB Accelerator (DAX) to cache frequently viewed items.
B. Store the photos in the Amazon S3 Intelligent-Tiering storage class. Store the photo metadata and its S3 location in DynamoDB.
C. Store the photos in the Amazon S3 Standard storage class. Set up an S3 Lifecycle policy to move photos older than 30 days to the S3 Standard-Infrequent Access (S3 Standard-IA) storage class. Use the object tags to keep track of metadata.
D. Store the photos in the Amazon S3 Glacier storage class. Set up an S3 Lifecycle policy to move photos older than 30 days to the S3 Glacier Deep Archive storage class. Store the photo metadata and its S3 location in Amazon OpenSearch Service.
Suggested answer: B

Explanation:

This solution provides the appropriate user access most cost-effectively because it uses the Amazon S3 Intelligent-Tiering storage class, which automatically optimizes storage costs by moving data to the most cost-effective access tier when access patterns change, without performance impact or operational overhead [1]. This storage class is ideal for data with unknown, changing, or unpredictable access patterns, such as photos that are heavily viewed for months or viewed for less than a week. By storing the photo metadata and its S3 location in DynamoDB, the application can quickly query and retrieve the relevant photos for each user. DynamoDB is a fast, scalable, and fully managed NoSQL database service that supports key-value and document data models [2].
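
As a rough sketch of option B, the application can upload each photo directly into the Intelligent-Tiering storage class and record its metadata in DynamoDB. The bucket name, table name, and attribute names below are hypothetical:

import boto3

s3 = boto3.client("s3")
dynamodb = boto3.resource("dynamodb")

# Upload the photo into Intelligent-Tiering; S3 then moves it between
# access tiers automatically as its access pattern changes.
with open("12345.jpg", "rb") as photo:
    s3.put_object(
        Bucket="photo-hosting-bucket",
        Key="photos/12345.jpg",
        Body=photo,
        StorageClass="INTELLIGENT_TIERING",
    )

# Store the metadata and the S3 location for fast per-user queries.
dynamodb.Table("PhotoMetadata").put_item(
    Item={
        "PhotoId": "12345",
        "S3Location": "s3://photo-hosting-bucket/photos/12345.jpg",
        "Country": "DE",
        "UploadedAt": "2024-01-15T09:30:00Z",
    }
)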

The DNS provider that hosts a company's domain name records is experiencing outages that cause service disruption for a website running on AWS. The company needs to migrate to a more resilient managed DNS service and wants the service to run on AWS.

What should a solutions architect do to rapidly migrate the DNS hosting service?

A. Create an Amazon Route 53 public hosted zone for the domain name. Import the zone file containing the domain records hosted by the previous provider.
B. Create an Amazon Route 53 private hosted zone for the domain name. Import the zone file containing the domain records hosted by the previous provider.
C. Create a Simple AD directory in AWS. Enable zone transfer between the DNS provider and AWS Directory Service for Microsoft Active Directory for the domain records.
D. Create an Amazon Route 53 Resolver inbound endpoint in the VPC. Specify the IP addresses that the provider's DNS will forward DNS queries to. Configure the provider's DNS to forward DNS queries for the domain to the IP addresses that are specified in the inbound endpoint.
Suggested answer: A

Explanation:

To migrate the DNS hosting service to a more resilient managed DNS service on AWS, the company should use Amazon Route 53, a highly available and scalable cloud DNS web service. Route 53 can host the public DNS records for the company's domain name and provide reliable, secure DNS resolution. To migrate rapidly, the company should create a public hosted zone for the domain name in Route 53, which serves as a container for the domain's DNS records, and then import the zone file hosted by the previous provider, which is a text file that defines the DNS records for the domain. This way, the company can quickly transfer the existing DNS records to Route 53 without creating them manually. After importing the zone file, the company should update the domain registrar to use the name servers that Route 53 assigns to the hosted zone. This ensures that DNS queries for the domain name are routed to Route 53 and resolved by the imported records.
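
The zone file import itself is a Route 53 console feature, but the surrounding steps can be scripted. A minimal sketch with boto3, using a placeholder domain name:

import time

import boto3

route53 = boto3.client("route53")

# Create the public hosted zone that will contain the imported records.
zone = route53.create_hosted_zone(
    Name="example.com",
    CallerReference=str(time.time()),  # must be unique per request
)

# Route 53 assigns four name servers to the zone; these must be set at
# the domain registrar so that queries are answered by Route 53.
print(zone["DelegationSet"]["NameServers"])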

A company is building a microservices-based application that will be deployed on Amazon Elastic Kubernetes Service (Amazon EKS). The microservices will interact with each other. The company wants to ensure that the application is observable to identify performance issues in the future.

Which solution will meet these requirements?

A. Configure the application to use Amazon ElastiCache to reduce the number of requests that are sent to the microservices.
B. Configure Amazon CloudWatch Container Insights to collect metrics from the EKS clusters. Configure AWS X-Ray to trace the requests between the microservices.
C. Configure AWS CloudTrail to review the API calls. Build an Amazon QuickSight dashboard to observe the microservice interactions.
D. Use AWS Trusted Advisor to understand the performance of the application.
Suggested answer: B

Explanation:

This solution meets the requirements because it makes the performance and behavior of the microservices-based application on Amazon EKS observable. Amazon CloudWatch Container Insights is a feature that collects, aggregates, and summarizes metrics and logs from containerized applications and microservices. Container Insights integrates with Amazon EKS and Kubernetes to provide metrics at the cluster, node, pod, and service level. You can use Container Insights to monitor the CPU, memory, disk, and network utilization of your EKS clusters and identify bottlenecks, latency spikes, and other issues.

AWS X-Ray is a service that collects data about requests that your application serves, and provides tools that you can use to view, filter, and gain insights into that data. X-Ray integrates with Amazon EKS and Kubernetes to trace the requests that your microservices make to downstream AWS resources, microservices, databases, and web APIs. You can use X-Ray to analyze the root cause of errors, faults, and performance issues, and to visualize the service map of your application.

Using Container Insights

AWS X-Ray
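
Once Container Insights and X-Ray are enabled, both data sources can be queried programmatically. A sketch with boto3, where the cluster name is a placeholder:

import datetime

import boto3

cloudwatch = boto3.client("cloudwatch")
xray = boto3.client("xray")

end = datetime.datetime.utcnow()
start = end - datetime.timedelta(hours=1)

# Read a Container Insights metric for the EKS cluster.
cpu = cloudwatch.get_metric_statistics(
    Namespace="ContainerInsights",
    MetricName="pod_cpu_utilization",
    Dimensions=[{"Name": "ClusterName", "Value": "my-eks-cluster"}],
    StartTime=start,
    EndTime=end,
    Period=300,
    Statistics=["Average"],
)

# Pull recent X-Ray trace summaries to inspect requests between services.
traces = xray.get_trace_summaries(StartTime=start, EndTime=end)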

A company hosts a three-tier web application in the AWS Cloud. A Multi-AZ Amazon RDS for MySQL server forms the database layer. Amazon ElastiCache forms the cache layer. The company wants a caching strategy that adds or updates data in the cache when a customer adds an item to the database. The data in the cache must always match the data in the database.

Which solution will meet these requirements?

A. Implement the lazy loading caching strategy.
B. Implement the write-through caching strategy.
C. Implement the adding TTL caching strategy.
D. Implement the AWS AppConfig caching strategy.
Suggested answer: B

Explanation:

A write-through caching strategy adds or updates data in the cache whenever data is written to the database. This ensures that the data in the cache is always consistent with the data in the database. A write-through caching strategy also reduces the cache miss penalty, as data is always available in the cache when it is requested. However, a write-through caching strategy can increase the write latency, as data has to be written to both the cache and the database. A write-through caching strategy is suitable for applications that require high data consistency and low read latency.
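
A minimal write-through sketch, assuming a redis-py client pointed at the ElastiCache endpoint and a PEP 249 style MySQL connection (endpoint, table, and key names are hypothetical):

import json

import redis

cache = redis.Redis(host="my-cluster.cache.amazonaws.com", port=6379)

def add_item(db_connection, item_id, item):
    # Write-through: persist to the database first, then write the same
    # value to the cache, so the two always hold matching data.
    with db_connection.cursor() as cursor:
        cursor.execute(
            "INSERT INTO items (id, body) VALUES (%s, %s)",
            (item_id, json.dumps(item)),
        )
    db_connection.commit()
    cache.set(f"item:{item_id}", json.dumps(item))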

A lazy loading caching strategy only loads data into the cache when it is requested, and updates the cache when there is a cache miss. This can result in stale data in the cache, as data is not updated in the cache when it is changed in the database. A lazy loading caching strategy is suitable for applications that can tolerate some data inconsistency and have a low cache miss rate.

An adding TTL caching strategy assigns a time-to-live (TTL) value to each data item in the cache, and removes the data from the cache when the TTL expires. This can help prevent stale data in the cache, as data is periodically refreshed from the database. However, an adding TTL caching strategy can also increase the cache miss rate, as data can be evicted from the cache before it is requested. An adding TTL caching strategy is suitable for applications that have a high cache hit rate and can tolerate some data inconsistency.

An AWS AppConfig caching strategy is not a valid option, as AWS AppConfig is a service that enables customers to quickly deploy validated configurations to applications of any size and scale. AWS AppConfig does not provide a caching layer for web applications.

A company is running its production and nonproduction environment workloads in multiple AWS accounts. The accounts are in an organization in AWS Organizations. The company needs to design a solution that will prevent the modification of cost usage tags.

Which solution will meet these requirements?

A. Create a custom AWS Config rule to prevent tag modification except by authorized principals.
B. Create a custom trail in AWS CloudTrail to prevent tag modification.
C. Create a service control policy (SCP) to prevent tag modification except by authorized principals.
D. Create custom Amazon CloudWatch logs to prevent tag modification.
Suggested answer: C

Explanation:

This solution meets the requirements because it uses SCPs to restrict the actions that can be performed on cost usage tags in the organization. SCPs are a type of organization policy that you can use to manage permissions in your organization. SCPs specify the maximum permissions for an organization, organizational unit (OU), or account. You can use SCPs to enforce consistent tag policies across your organization and prevent unauthorized or accidental changes to your tags. You can also create exceptions for authorized principals, such as administrators or auditors, who need to modify tags for legitimate purposes.

Service control policies (SCPs) - AWS Organizations

Tag policies - AWS Organizations
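
A sketch of such an SCP, created with boto3 and attached afterward to the relevant OU or accounts. The role name and policy scope are placeholders; a real policy would usually cover tagging actions across more services:

import json

import boto3

organizations = boto3.client("organizations")

# Deny tag changes on EC2 resources unless the request comes from an
# authorized role.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": ["ec2:CreateTags", "ec2:DeleteTags"],
            "Resource": "*",
            "Condition": {
                "StringNotLike": {
                    "aws:PrincipalARN": "arn:aws:iam::*:role/TagAdminRole"
                }
            },
        }
    ],
}

organizations.create_policy(
    Name="deny-tag-modification",
    Description="Prevent cost usage tag changes except by TagAdminRole",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)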

A research company runs experiments that are powered by a simulation application and a visualization application. The simulation application runs on Linux and outputs intermediate data to an NFS share every 5 minutes. The visualization application is a Windows desktop application that displays the simulation output and requires an SMB file system.

The company maintains two synchronized file systems. This strategy is causing data duplication and inefficient resource usage. The company needs to migrate the applications to AWS without making code changes to either application.

Which solution will meet these requirements?

A. Migrate both applications to AWS Lambda. Create an Amazon S3 bucket to exchange data between the applications.
B. Migrate both applications to Amazon Elastic Container Service (Amazon ECS). Configure Amazon FSx File Gateway for storage.
C. Migrate the simulation application to Linux Amazon EC2 instances. Migrate the visualization application to Windows EC2 instances. Configure Amazon Simple Queue Service (Amazon SQS) to exchange data between the applications.
D. Migrate the simulation application to Linux Amazon EC2 instances. Migrate the visualization application to Windows EC2 instances. Configure Amazon FSx for NetApp ONTAP for storage.
Suggested answer: D

Explanation:

This solution will meet the requirements because Amazon FSx for NetApp ONTAP is a fully managed service that provides highly reliable, scalable, and feature-rich file storage built on NetApp's popular ONTAP file system. FSx for ONTAP supports both NFS and SMB protocols, which means it can be accessed by both Linux and Windows applications without code changes. FSx for ONTAP also eliminates data duplication and inefficient resource usage by automatically tiering infrequently accessed data to a lower-cost storage tier and by providing storage efficiency features such as deduplication and compression. FSx for ONTAP also integrates with other AWS services such as Amazon S3, AWS Backup, and AWS CloudFormation. By migrating the applications to Amazon EC2 instances, the company can leverage the scalability, security, and performance of AWS compute resources.
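
A rough sketch of provisioning the shared file system with boto3; the subnet IDs and sizing values are placeholders:

import boto3

fsx = boto3.client("fsx")

# Create a Multi-AZ FSx for ONTAP file system. After adding a storage
# virtual machine and a volume, the Linux instances mount the same data
# over NFS and the Windows instances map it over SMB.
fsx.create_file_system(
    FileSystemType="ONTAP",
    StorageCapacity=1024,  # GiB
    SubnetIds=["subnet-aaaa1111", "subnet-bbbb2222"],
    OntapConfiguration={
        "DeploymentType": "MULTI_AZ_1",
        "PreferredSubnetId": "subnet-aaaa1111",
        "ThroughputCapacity": 128,  # MBps
    },
)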

A company is deploying an application in three AWS Regions by using an Application Load Balancer. Amazon Route 53 will be used to distribute traffic between these Regions. Which Route 53 configuration should a solutions architect use to provide the MOST high-performing experience?

A. Create an A record with a latency policy.
B. Create an A record with a geolocation policy.
C. Create a CNAME record with a failover policy.
D. Create a CNAME record with a geoproximity policy.
Suggested answer: A

Explanation:

To provide the most high-performing experience for the users of the application, a solutions architect should use a latency routing policy for the Route 53 A record. This policy allows Route 53 to route traffic to the AWS Region that provides the lowest possible latency for the users [1]. A latency routing policy can also improve the availability of the application, as Route 53 can automatically route traffic to another Region if the primary Region becomes unavailable [2].

[1]: https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html#routing-policy-latency

[2]: https://aws.amazon.com/route53/faqs/#Latency_Based_Routing
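
A sketch of the latency records with boto3: one alias A record per Region, each pointing at that Region's load balancer. The hosted zone ID, record name, and ALB values are placeholders:

import boto3

route53 = boto3.client("route53")

for region, alb_dns, alb_zone_id in [
    ("us-east-1", "alb-use1.example.elb.amazonaws.com", "Z35SXDOTRQ7X7K"),
    ("eu-west-1", "alb-euw1.example.elb.amazonaws.com", "Z32O12XQLNTSW2"),
]:
    route53.change_resource_record_sets(
        HostedZoneId="Z123EXAMPLE",
        ChangeBatch={
            "Changes": [
                {
                    "Action": "UPSERT",
                    "ResourceRecordSet": {
                        "Name": "app.example.com",
                        "Type": "A",
                        "SetIdentifier": region,  # unique per latency record
                        "Region": region,         # Region used for latency routing
                        "AliasTarget": {
                            "HostedZoneId": alb_zone_id,
                            "DNSName": alb_dns,
                            "EvaluateTargetHealth": True,
                        },
                    },
                }
            ]
        },
    )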

A company has an application that delivers on-demand training videos to students around the world. The application also allows authorized content developers to upload videos. The data is stored in an Amazon S3 bucket in the us-east-2 Region.

The company has created an S3 bucket in the eu-west-2 Region and an S3 bucket in the ap-southeast-1 Region. The company wants to replicate the data to the new S3 buckets. The company needs to minimize latency for developers who upload videos and students who stream videos near eu-west-2 and ap-southeast-1.

Which combination of steps will meet these requirements with the FEWEST changes to the application? (Select TWO.)

A. Configure one-way replication from the us-east-2 S3 bucket to the eu-west-2 S3 bucket. Configure one-way replication from the us-east-2 S3 bucket to the ap-southeast-1 S3 bucket.
B. Configure one-way replication from the us-east-2 S3 bucket to the eu-west-2 S3 bucket. Configure one-way replication from the eu-west-2 S3 bucket to the ap-southeast-1 S3 bucket.
C. Configure two-way (bidirectional) replication among the S3 buckets that are in all three Regions.
D. Create an S3 Multi-Region Access Point. Modify the application to use the Amazon Resource Name (ARN) of the Multi-Region Access Point for video streaming. Do not modify the application for video uploads.
E. Create an S3 Multi-Region Access Point. Modify the application to use the Amazon Resource Name (ARN) of the Multi-Region Access Point for video streaming and uploads.
Suggested answer: A, E

Explanation:

These two steps will meet the requirements with the fewest changes to the application because they will enable the company to replicate the data to the new S3 buckets and minimize latency for both video streaming and uploads. One-way replication from the us-east-2 S3 bucket to the other two S3 buckets will ensure that the data is synchronized across all three Regions. The company can use S3 Cross-Region Replication (CRR) to automatically copy objects across buckets in different AWS Regions. CRR can help the company achieve lower latency and compliance requirements by keeping copies of the data in different Regions.

Creating an S3 Multi-Region Access Point and modifying the application to use its ARN will allow the company to access the data through a single global endpoint. An S3 Multi-Region Access Point is a globally unique name that can be used to access objects stored in S3 buckets across multiple Regions. It automatically routes requests to the closest S3 bucket with the lowest latency. By using an S3 Multi-Region Access Point, the company can simplify the application architecture and improve the performance and reliability of the application.

Replicating objects

Multi-Region Access Points in Amazon S3
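
A sketch of both steps with boto3. The bucket names, account ID, and replication role are hypothetical, and a second rule or call would cover the ap-southeast-1 destination:

import boto3

s3 = boto3.client("s3")
s3control = boto3.client("s3control")

# One-way replication from the us-east-2 source bucket to eu-west-2.
s3.put_bucket_replication(
    Bucket="photos-us-east-2",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::111122223333:role/s3-replication-role",
        "Rules": [
            {
                "ID": "to-eu-west-2",
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {},
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {"Bucket": "arn:aws:s3:::photos-eu-west-2"},
            }
        ],
    },
)

# Create the Multi-Region Access Point over all three buckets; the
# application then uses its ARN for both uploads and streaming.
s3control.create_multi_region_access_point(
    AccountId="111122223333",
    ClientToken="mrap-request-1",
    Details={
        "Name": "photos-mrap",
        "Regions": [
            {"Bucket": "photos-us-east-2"},
            {"Bucket": "photos-eu-west-2"},
            {"Bucket": "photos-ap-southeast-1"},
        ],
    },
)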

An analytics company uses Amazon VPC to run its multi-tier services. The company wants to use RESTful APIs to offer a web analytics service to millions of users. Users must be verified by using an authentication service to access the APIs.

Which solution will meet these requirements with the MOST operational efficiency?

A. Configure an Amazon Cognito user pool for user authentication. Implement Amazon API Gateway REST APIs with a Cognito authorizer.
B. Configure an Amazon Cognito identity pool for user authentication. Implement Amazon API Gateway HTTP APIs with a Cognito authorizer.
C. Configure an AWS Lambda function to handle user authentication. Implement Amazon API Gateway REST APIs with a Lambda authorizer.
D. Configure an IAM user to handle user authentication. Implement Amazon API Gateway HTTP APIs with an IAM authorizer.
Suggested answer: A

Explanation:

This solution will meet the requirements with the most operational efficiency because:

Amazon Cognito user pools provide a secure and scalable user directory that can store and manage user profiles, and handle user sign-up, sign-in, and access control. User pools can also integrate with social identity providers and enterprise identity providers via SAML or OIDC. User pools can issue JSON Web Tokens (JWTs) that can be used to authenticate users and authorize API requests.

Amazon API Gateway REST APIs enable you to create and deploy APIs that expose your backend services to your clients. REST APIs support multiple authorization mechanisms, including Cognito user pools, IAM, and Lambda (custom) authorizers. A Cognito authorizer uses a Cognito user pool as the identity source. When a client makes a request to a REST API method that is configured with a Cognito authorizer, API Gateway verifies the JWTs that are issued by the user pool and grants access based on the token's claims and the authorizer's configuration.

By using Cognito user pools and API Gateway REST APIs with a Cognito authorizer, you can achieve a high level of security, scalability, and performance for your web analytics service. You can also leverage the built-in features of Cognito and API Gateway, such as user management, token validation, caching, throttling, and monitoring, without having to implement them yourself. This reduces the operational overhead and complexity of your solution.

Amazon Cognito User Pools

Amazon API Gateway REST APIs

Use API Gateway Lambda authorizers
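
A sketch of attaching a Cognito authorizer to an existing REST API with boto3; the API ID and user pool ARN are placeholders:

import boto3

apigateway = boto3.client("apigateway")

# Methods configured with this authorizer require a valid JWT issued by
# the user pool in the Authorization header.
apigateway.create_authorizer(
    restApiId="a1b2c3d4e5",
    name="cognito-authorizer",
    type="COGNITO_USER_POOLS",
    providerARNs=[
        "arn:aws:cognito-idp:us-east-1:111122223333:userpool/us-east-1_EXAMPLE"
    ],
    identitySource="method.request.header.Authorization",
)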
