Amazon SAA-C03 Practice Test - Questions Answers, Page 86

A company needs to give a globally distributed development team secure access to the company's AWS resources in a way that complies with security policies.

The company currently uses an on-premises Active Directory for internal authentication. The company uses AWS Organizations to manage multiple AWS accounts that support multiple projects.

The company needs a solution to integrate with the existing infrastructure to provide centralized identity management and access control.

Which solution will meet these requirements with the LEAST operational overhead?

A. Set up AWS Directory Service to create an AWS Managed Microsoft AD directory. Establish a trust relationship with the on-premises Active Directory. Use IAM roles that are assigned to Active Directory groups to access AWS resources within the company's AWS accounts.

B. Create an IAM user for each developer. Manually manage permissions for each IAM user based on each user's involvement with each project. Enforce multi-factor authentication (MFA) as an additional layer of security.

C. Use AD Connector in AWS Directory Service to connect to the on-premises Active Directory. Integrate AD Connector with AWS IAM Identity Center. Configure permission sets to give each AD group access to specific AWS accounts and resources.

D. Use Amazon Cognito to deploy an identity federation solution. Integrate the identity federation solution with the on-premises Active Directory. Use Amazon Cognito to provide access tokens for developers to access AWS accounts and resources.
Suggested answer: C

Explanation:

Using AD Connector with AWS IAM Identity Center (formerly AWS Single Sign-On) allows the company to leverage its existing on-premises Active Directory for centralized identity management and access control. AD Connector acts as a proxy to the on-premises AD without requiring additional infrastructure or complex setup. This solution integrates seamlessly with AWS, allowing the development team to use their existing AD credentials to access AWS resources across multiple accounts managed by AWS Organizations. The permissions for AWS resources can be managed centrally through IAM Identity Center by configuring permission sets.

This solution provides:

Least operational overhead: AD Connector is fully managed, and IAM Identity Center allows centralized management of permissions across accounts.

Secure access: The solution complies with security policies by using existing AD authentication mechanisms.

Option A (AWS Managed AD): Setting up a fully managed AWS AD and establishing a trust is more complex and involves additional operational overhead.

Option B (IAM Users): Manually managing IAM users and permissions is less scalable and increases operational complexity.

Option D (Cognito): Amazon Cognito is more suited for user-facing applications rather than internal identity management for AWS resources.

AWS Reference:

AD Connector with IAM Identity Center

AWS IAM Identity Center
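
To make option C concrete, here is a minimal boto3 sketch of the permission-set step in IAM Identity Center. The instance ARN, managed policy, account ID, and identity store group ID are hypothetical placeholders, not values taken from the question.

```python
import boto3

sso_admin = boto3.client("sso-admin")

INSTANCE_ARN = "arn:aws:sso:::instance/ssoins-EXAMPLE"  # hypothetical

# Create a permission set that defines what members of an AD group may do.
ps = sso_admin.create_permission_set(
    InstanceArn=INSTANCE_ARN,
    Name="DeveloperAccess",
    Description="Developer access for project accounts",
    SessionDuration="PT8H",
)
ps_arn = ps["PermissionSet"]["PermissionSetArn"]

# Attach an AWS managed policy to the permission set.
sso_admin.attach_managed_policy_to_permission_set(
    InstanceArn=INSTANCE_ARN,
    PermissionSetArn=ps_arn,
    ManagedPolicyArn="arn:aws:iam::aws:policy/PowerUserAccess",
)

# Map an AD group (visible through AD Connector) to a member account.
sso_admin.create_account_assignment(
    InstanceArn=INSTANCE_ARN,
    TargetId="111122223333",                 # hypothetical member account ID
    TargetType="AWS_ACCOUNT",
    PermissionSetArn=ps_arn,
    PrincipalType="GROUP",
    PrincipalId="a1b2c3d4-group-id-example",  # hypothetical identity store group ID
)
```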

A company wants to improve the availability and performance of its hybrid application. The application consists of a stateful TCP-based workload hosted on Amazon EC2 instances in different AWS Regions and a stateless UDP-based workload hosted on premises.

Which combination of actions should a solutions architect take to improve availability and performance? (Select TWO.)

A. Create an accelerator using AWS Global Accelerator. Add the load balancers as endpoints.

B. Create an Amazon CloudFront distribution with an origin that uses Amazon Route 53 latency-based routing to route requests to the load balancers.

C. Configure two Application Load Balancers in each Region. The first will route to the EC2 endpoints, and the second will route to the on-premises endpoints.

D. Configure a Network Load Balancer in each Region to address the EC2 endpoints. Configure a Network Load Balancer in each Region that routes to the on-premises endpoints.

E. Configure a Network Load Balancer in each Region to address the EC2 endpoints. Configure an Application Load Balancer in each Region that routes to the on-premises endpoints.
Suggested answer: A, D

Explanation:

For improving availability and performance of the hybrid application, the following solutions are optimal:

AWS Global Accelerator (Option A): Global Accelerator provides high availability and improves performance by using the AWS global network to route user traffic to the nearest healthy endpoint (across AWS Regions). By adding the Network Load Balancers as endpoints, Global Accelerator ensures that traffic is routed efficiently to the closest endpoint, improving both availability and performance.

Network Load Balancer (Option D): The stateful TCP-based workload hosted on Amazon EC2 instances and the stateless UDP-based workload hosted on-premises are best served by Network Load Balancers (NLBs). NLBs are designed to handle TCP and UDP traffic with ultra-low latency and can route traffic to both EC2 and on-premises endpoints.

Option B (CloudFront and Route 53): CloudFront is better suited for HTTP/HTTPS workloads, not for TCP/UDP-based applications.

Option C (ALB): Application Load Balancers do not support the stateless UDP-based workload, making NLBs the better choice for both TCP and UDP.

AWS Reference:

AWS Global Accelerator

Network Load Balancer
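
As a sketch of how options A and D fit together, the following boto3 calls create an accelerator, a TCP listener, and a Regional endpoint group that points at a Network Load Balancer. All names, ports, and ARNs are hypothetical, and the Global Accelerator API is served only from the us-west-2 Region.

```python
import uuid
import boto3

ga = boto3.client("globalaccelerator", region_name="us-west-2")

acc = ga.create_accelerator(
    Name="hybrid-app",
    IpAddressType="IPV4",
    Enabled=True,
    IdempotencyToken=str(uuid.uuid4()),
)["Accelerator"]

listener = ga.create_listener(
    AcceleratorArn=acc["AcceleratorArn"],
    Protocol="TCP",  # UDP listeners are also supported for the on-premises workload
    PortRanges=[{"FromPort": 443, "ToPort": 443}],
    IdempotencyToken=str(uuid.uuid4()),
)["Listener"]

# One endpoint group per Region; the endpoint is that Region's NLB.
ga.create_endpoint_group(
    ListenerArn=listener["ListenerArn"],
    EndpointGroupRegion="us-east-1",
    EndpointConfigurations=[
        {
            "EndpointId": "arn:aws:elasticloadbalancing:us-east-1:111122223333:"
                          "loadbalancer/net/my-nlb/0123456789abcdef",  # hypothetical
            "Weight": 128,
        },
    ],
    IdempotencyToken=str(uuid.uuid4()),
)
```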

A company runs a production database on Amazon RDS for MySQL. The company wants to upgrade the database version for security compliance reasons. Because the database contains critical data, the company wants a quick solution to upgrade and test functionality without losing any data.

Which solution will meet these requirements with the LEAST operational overhead?

A. Create an RDS manual snapshot. Upgrade to the new version of Amazon RDS for MySQL.

B. Use native backup and restore. Restore the data to the upgraded new version of Amazon RDS for MySQL.

C. Use AWS Database Migration Service (AWS DMS) to replicate the data to the upgraded new version of Amazon RDS for MySQL.

D. Use Amazon RDS Blue/Green Deployments to deploy and test production changes.
Suggested answer: D

Explanation:

Amazon RDS Blue/Green Deployments is the ideal solution for upgrading the database version with minimal operational overhead and no data loss. Blue/Green Deployments allows you to create a separate, fully managed 'green' environment with the upgraded database version. You can test the new version in the green environment while the 'blue' environment continues serving production traffic. Once testing is complete, you can seamlessly switch traffic to the green environment without downtime.

This solution provides:

Fast, non-disruptive upgrade: Traffic is only switched to the new environment after testing, ensuring zero data loss.

Minimal operational overhead: AWS handles the infrastructure management, reducing manual intervention.

Option A (Manual snapshot): This requires manual intervention and involves more operational overhead.

Option B (Native backup/restore): This approach is more labor-intensive and slower than Blue/Green Deployments.

Option C (DMS): AWS DMS adds unnecessary complexity for a simple version upgrade when Blue/Green Deployments can handle the task more efficiently.

AWS Reference:

Amazon RDS Blue/Green Deployments
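
A minimal boto3 sketch of option D follows. The source database ARN and target engine version are hypothetical; in practice you would validate the green environment before switching over.

```python
import boto3

rds = boto3.client("rds")

# Create the green environment as a managed copy of the production (blue) database.
bg = rds.create_blue_green_deployment(
    BlueGreenDeploymentName="mysql-upgrade",
    Source="arn:aws:rds:us-east-1:111122223333:db:prod-mysql",  # hypothetical
    TargetEngineVersion="8.0.36",  # the version being upgraded to
)["BlueGreenDeployment"]

# ... test the green environment, then promote it to production:
rds.switchover_blue_green_deployment(
    BlueGreenDeploymentIdentifier=bg["BlueGreenDeploymentIdentifier"],
    SwitchoverTimeout=300,  # seconds
)
```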

A digital image processing company wants to migrate its on-premises monolithic application to the AWS Cloud. The company processes thousands of images and generates large files as part of the processing workflow.

The company needs a solution to manage the growing number of image processing jobs. The solution must also reduce the manual tasks in the image processing workflow. The company does not want to manage the underlying infrastructure of the solution.

Which solution will meet these requirements with the LEAST operational overhead?

A. Use Amazon Elastic Container Service (Amazon ECS) with Amazon EC2 Spot Instances to process the images. Configure Amazon Simple Queue Service (Amazon SQS) to orchestrate the workflow. Store the processed files in Amazon Elastic File System (Amazon EFS).

B. Use AWS Batch jobs to process the images. Use AWS Step Functions to orchestrate the workflow. Store the processed files in an Amazon S3 bucket.

C. Use AWS Lambda functions and Amazon EC2 Spot Instances to process the images. Store the processed files in Amazon FSx.

D. Deploy a group of Amazon EC2 instances to process the images. Use AWS Step Functions to orchestrate the workflow. Store the processed files in an Amazon Elastic Block Store (Amazon EBS) volume.
Suggested answer: B

Explanation:

For processing thousands of images and generating large files while minimizing manual tasks and operational overhead, using AWS Batch is the best solution. AWS Batch allows you to run large-scale, parallel, and managed batch computing jobs without needing to manage the underlying infrastructure.

AWS Batch: Automates the image processing jobs, dynamically allocating the necessary resources based on the job requirements, which reduces operational overhead.

AWS Step Functions: Orchestrates the entire image processing workflow, ensuring that tasks are executed in the correct sequence, improving manageability.

Amazon S3: Stores the processed files, providing scalable and cost-effective storage.

Option A (ECS with EC2 Spot Instances): While cost-effective, managing ECS and Spot Instances involves more operational effort.

Option C (Lambda with EC2 Spot): Lambda functions have size and duration limitations, making them less suited for large image processing tasks.

Option D (EC2 with Step Functions): Managing EC2 instances involves more overhead than using AWS Batch.

AWS Reference:

AWS Batch

AWS Step Functions
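
To illustrate option B, here is a hedged sketch of a one-state Step Functions workflow that submits an AWS Batch job and waits for it to finish via the .sync integration. The queue, job definition, and role ARNs are hypothetical.

```python
import json
import boto3

definition = {
    "StartAt": "ProcessImage",
    "States": {
        "ProcessImage": {
            "Type": "Task",
            # .sync pauses the workflow until the Batch job completes.
            "Resource": "arn:aws:states:::batch:submitJob.sync",
            "Parameters": {
                "JobName": "process-image",
                "JobQueue": "arn:aws:batch:us-east-1:111122223333:job-queue/images",
                "JobDefinition": "arn:aws:batch:us-east-1:111122223333:job-definition/resize:1",
            },
            "End": True,
        }
    },
}

sfn = boto3.client("stepfunctions")
sfn.create_state_machine(
    name="image-pipeline",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::111122223333:role/StepFunctionsBatchRole",  # hypothetical
)
```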

A company is developing an application in the AWS Cloud. The application's HTTP API contains critical information that is published in Amazon API Gateway. The critical information must be accessible from only a limited set of trusted IP addresses that belong to the company's internal network.

Which solution will meet these requirements?

A. Set up an API Gateway private integration to restrict access to a predefined set of IP addresses.

B. Create a resource policy for the API that denies access to any IP address that is not specifically allowed.

C. Directly deploy the API in a private subnet. Create a network ACL. Set up rules to allow the traffic from specific IP addresses.

D. Modify the security group that is attached to API Gateway to allow inbound traffic from only the trusted IP addresses.
Suggested answer: B

Explanation:

Amazon API Gateway supports resource policies, which allow you to control access to your API by specifying the IP addresses or ranges that can access the API. By creating a resource policy that explicitly denies access to any IP address outside the allowed set, you can ensure that only trusted IP addresses (such as those from your internal network) can access the critical information in your API. This approach provides fine-grained access control without the need for additional infrastructure or complex configurations.

Option A (Private integration): API Gateway private integrations are for creating private APIs that are only accessible within a VPC, but this solution is about restricting access to certain IP addresses.

Option C (Private subnet and ACLs): Deploying the API in a private subnet and using network ACLs adds unnecessary complexity and isn't the best fit for HTTP APIs.

Option D (Security group): API Gateway doesn't have a security group because it isn't a resource inside a VPC. Instead, resource policies are the correct mechanism for controlling IP-based access.

AWS Reference:

Controlling Access to API Gateway with Resource Policies
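
The following is a sketch of the kind of resource policy option B describes: an allow statement for the API plus an explicit deny for any caller outside a trusted CIDR range. The CIDR block shown is a documentation range, not a value from the question.

```python
import json

# Resource policy attached to the API Gateway API (sketch).
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "execute-api:Invoke",
            "Resource": "execute-api:/*",
        },
        {
            # Explicit deny overrides the allow for any non-trusted source IP.
            "Effect": "Deny",
            "Principal": "*",
            "Action": "execute-api:Invoke",
            "Resource": "execute-api:/*",
            "Condition": {
                "NotIpAddress": {"aws:SourceIp": ["203.0.113.0/24"]}  # trusted corporate range
            },
        },
    ],
}
print(json.dumps(policy, indent=2))
```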

A company runs its databases on Amazon RDS for PostgreSQL. The company wants a secure solution to manage the master user password by rotating the password every 30 days. Which solution will meet these requirements with the LEAST operational overhead?

A. Use Amazon EventBridge to schedule a custom AWS Lambda function to rotate the password every 30 days.

B. Use the modify-db-instance command in the AWS CLI to change the password.

C. Integrate AWS Secrets Manager with Amazon RDS for PostgreSQL to automate password rotation.

D. Integrate AWS Systems Manager Parameter Store with Amazon RDS for PostgreSQL to automate password rotation.
Suggested answer: C

Explanation:

AWS Secrets Manager can integrate directly with Amazon RDS for automatic and seamless password rotation. Secrets Manager handles the complexity of password management, including generating strong passwords and rotating them at a defined interval (e.g., every 30 days). It also automatically updates the connection information for RDS, minimizing operational overhead.

Option A (Lambda with EventBridge): While possible, this requires custom coding and operational management of Lambda, which introduces additional complexity.

Option B (Manual password change): Using the modify-db-instance command requires manual intervention and is not automated, leading to more operational effort.

Option D (Parameter Store): Systems Manager Parameter Store is less specialized for password management than Secrets Manager and does not have built-in automated rotation for RDS credentials.

AWS Reference:

AWS Secrets Manager Rotation for RDS
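
As a sketch of option C, the call below turns on automatic rotation for a database secret every 30 days. The secret name and rotation function ARN are hypothetical; AWS publishes rotation function templates for RDS for PostgreSQL that can be deployed and referenced here.

```python
import boto3

sm = boto3.client("secretsmanager")

sm.rotate_secret(
    SecretId="prod/postgres/master",                      # hypothetical secret name
    RotationLambdaARN=(
        "arn:aws:lambda:us-east-1:111122223333:function:"
        "rds-postgres-rotation"                           # hypothetical rotation function
    ),
    RotationRules={"AutomaticallyAfterDays": 30},         # the 30-day requirement
)
```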

A company wants to implement new security compliance requirements for its development team to limit the use of approved Amazon Machine Images (AMIs).

The company wants to provide access to only the approved operating system and software for all its Amazon EC2 instances. The company wants the solution to have the least amount of lead time for launching EC2 instances.

Which solution will meet these requirements?

A. Create a portfolio by using AWS Service Catalog that includes only EC2 instances launched with approved AMIs. Ensure that all required software is preinstalled on the AMIs. Create the necessary permissions for developers to use the portfolio.

B. Create an AMI that contains the approved operating system and software by using EC2 Image Builder. Give developers access to that AMI to launch the EC2 instances.

C. Create an AMI that contains the approved operating system. Tell the developers to use the approved AMI. Create an Amazon EventBridge rule to run an AWS Systems Manager script when a new EC2 instance is launched. Configure the script to install the required software from a repository.

D. Create an AWS Config rule to detect the launch of EC2 instances with an AMI that is not approved. Associate a remediation rule to terminate those instances and launch the instances again with the approved AMI. Use AWS Systems Manager to automatically install the approved software on the launch of an EC2 instance.
Suggested answer: A

Explanation:

AWS Service Catalog is designed to allow organizations to manage a catalog of approved products (including AMIs) that users can deploy. By creating a portfolio that contains only EC2 instances launched with preapproved AMIs, the company can enforce compliance with the approved operating system and software for all EC2 instances. Service Catalog also streamlines the process of launching EC2 instances, reducing the lead time while ensuring that developers use only the approved configurations.

Option B (EC2 Image Builder): While EC2 Image Builder helps in creating and managing AMIs, it doesn't provide the enforcement mechanism that Service Catalog does.

Option C (EventBridge rule and Systems Manager script): This solution is reactive and involves more operational complexity compared to Service Catalog.

Option D (AWS Config rule): This option is reactive (it terminates non-compliant instances after launch) and introduces additional operational overhead.

AWS Reference:

AWS Service Catalog
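
A minimal boto3 sketch of option A: create a portfolio, register a product whose CloudFormation template pins the approved AMI, and associate the two. The template URL and names are hypothetical.

```python
import uuid
import boto3

sc = boto3.client("servicecatalog")

portfolio = sc.create_portfolio(
    DisplayName="ApprovedEC2",
    ProviderName="PlatformTeam",
    IdempotencyToken=str(uuid.uuid4()),
)["PortfolioDetail"]

product = sc.create_product(
    Name="ApprovedLinuxInstance",
    Owner="PlatformTeam",
    ProductType="CLOUD_FORMATION_TEMPLATE",
    ProvisioningArtifactParameters={
        "Name": "v1",
        "Info": {
            # Hypothetical template that hard-codes the approved AMI ID.
            "LoadTemplateFromURL": "https://s3.amazonaws.com/templates/approved-ec2.yaml"
        },
        "Type": "CLOUD_FORMATION_TEMPLATE",
    },
    IdempotencyToken=str(uuid.uuid4()),
)["ProductViewDetail"]["ProductViewSummary"]

sc.associate_product_with_portfolio(
    ProductId=product["ProductId"],
    PortfolioId=portfolio["Id"],
)
```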

A company needs a solution to automate email ingestion. The company needs to automatically parse email messages, look for email attachments, and save any attachments to an Amazon S3 bucket in near real time. Email volume varies significantly from day to day.

Which solution will meet these requirements?

A. Set up email receiving in Amazon Simple Email Service (Amazon SES). Create a rule set and a receipt rule. Create an AWS Lambda function that Amazon SES can invoke to process the email bodies and attachments.

B. Set up email content filtering in Amazon Simple Email Service (Amazon SES). Create a content filtering rule based on sender, recipient, message body, and attachments.

C. Set up email receiving in Amazon Simple Email Service (Amazon SES). Configure Amazon SES and S3 Event Notifications to process the email bodies and attachments.

D. Create an AWS Lambda function to process the email bodies and attachments. Use Amazon EventBridge to invoke the Lambda function. Configure an EventBridge rule to listen for incoming emails.
Suggested answer: A

Explanation:

Amazon SES (Simple Email Service) allows for the automatic ingestion of incoming emails. By setting up email receiving in SES and creating a rule set with a receipt rule, you can configure SES to invoke an AWS Lambda function whenever an email is received. The Lambda function can then process the email body and attachments, saving any attachments to an Amazon S3 bucket. This solution is highly scalable, cost-effective, and provides near real-time processing of emails with minimal operational overhead.

Option B (Content filtering): This only filters emails based on content and does not provide the functionality to save attachments to S3.

Option C (S3 Event Notifications): While SES can store emails in S3, SES with Lambda offers more flexibility for processing attachments in real-time.

Option D (EventBridge rule): EventBridge cannot directly listen for incoming emails, making this solution incorrect.

AWS Reference:

Receiving Email with Amazon SES

Invoking Lambda from SES
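
To show the processing side of option A, here is a hedged sketch of the Lambda handler. It assumes the receipt rule also includes an S3 action that stores the raw message (a common pattern, because the direct Lambda invocation carries only message metadata). Bucket names are hypothetical.

```python
import email
import boto3

s3 = boto3.client("s3")
INBOUND_BUCKET = "inbound-email"            # hypothetical: written by the SES S3 action
ATTACHMENT_BUCKET = "processed-attachments"  # hypothetical destination

def handler(event, context):
    # The SES receipt event carries the message ID, which the S3 action
    # used as the object key for the raw message.
    message_id = event["Records"][0]["ses"]["mail"]["messageId"]
    raw = s3.get_object(Bucket=INBOUND_BUCKET, Key=message_id)["Body"].read()

    msg = email.message_from_bytes(raw)
    for part in msg.walk():
        filename = part.get_filename()
        if filename:  # only MIME parts with a filename are attachments
            s3.put_object(
                Bucket=ATTACHMENT_BUCKET,
                Key=f"{message_id}/{filename}",
                Body=part.get_payload(decode=True),
            )
```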

A marketing team wants to build a campaign for an upcoming multi-sport event. The team has news reports from the past five years in PDF format. The team needs a solution to extract insights about the content and the sentiment of the news reports. The solution must use Amazon Textract to process the news reports.

Which solution will meet these requirements with the LEAST operational overhead?

A. Provide the extracted insights to Amazon Athena for analysis. Store the extracted insights and analysis in an Amazon S3 bucket.

B. Store the extracted insights in an Amazon DynamoDB table. Use Amazon SageMaker to build a sentiment model.

C. Provide the extracted insights to Amazon Comprehend for analysis. Save the analysis to an Amazon S3 bucket.

D. Store the extracted insights in an Amazon S3 bucket. Use Amazon QuickSight to visualize and analyze the data.
Suggested answer: C

Explanation:

Amazon Textract can extract text from the PDFs, and Amazon Comprehend is the most suitable service to analyze the extracted text for sentiment and insights. Comprehend offers a fully managed, low-operational overhead solution for analyzing text data. The results can then be stored in an Amazon S3 bucket, ensuring scalability and easy access.

Option A: Athena is for querying structured data and is not suitable for sentiment analysis.

Option B: SageMaker adds complexity and is not necessary when Comprehend can handle sentiment analysis natively.

Option D: QuickSight is used for visualization and analytics, but it does not provide sentiment analysis.

AWS Reference:

Amazon Comprehend

Amazon Textract
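
A hedged sketch of option C follows: Textract's asynchronous API (required for PDFs stored in S3) extracts the text, and Comprehend scores its sentiment. The bucket and key are hypothetical, and production code would typically use an SNS completion notification and result pagination rather than this simple polling loop.

```python
import time
import boto3

textract = boto3.client("textract")
comprehend = boto3.client("comprehend")

# Start asynchronous text detection on a PDF in S3 (hypothetical location).
job = textract.start_document_text_detection(
    DocumentLocation={"S3Object": {"Bucket": "news-reports", "Name": "2021/event.pdf"}}
)

# Poll until the job finishes (a sketch; SNS notification is the robust approach).
while True:
    result = textract.get_document_text_detection(JobId=job["JobId"])
    if result["JobStatus"] in ("SUCCEEDED", "FAILED"):
        break
    time.sleep(5)

text = " ".join(b["Text"] for b in result["Blocks"] if b["BlockType"] == "LINE")

# detect_sentiment accepts up to 5,000 bytes per call, so truncate conservatively.
sentiment = comprehend.detect_sentiment(Text=text[:4500], LanguageCode="en")
print(sentiment["Sentiment"], sentiment["SentimentScore"])
```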

An ecommerce company is migrating its on-premises workload to the AWS Cloud. The workload currently consists of a web application and a backend Microsoft SQL database for storage.

The company expects a high volume of customers during a promotional event. The new infrastructure in the AWS Cloud must be highly available and scalable.

Which solution will meet these requirements with the LEAST administrative overhead?

A. Migrate the web application to two Amazon EC2 instances across two Availability Zones behind an Application Load Balancer. Migrate the database to Amazon RDS for Microsoft SQL Server with read replicas in both Availability Zones.

B. Migrate the web application to an Amazon EC2 instance that runs in an Auto Scaling group across two Availability Zones behind an Application Load Balancer. Migrate the database to two EC2 instances across separate AWS Regions with database replication.

C. Migrate the web application to Amazon EC2 instances that run in an Auto Scaling group across two Availability Zones behind an Application Load Balancer. Migrate the database to Amazon RDS with Multi-AZ deployment.

D. Migrate the web application to three Amazon EC2 instances across three Availability Zones behind an Application Load Balancer. Migrate the database to three EC2 instances across three Availability Zones.
Suggested answer: C

Explanation:

To ensure high availability and scalability, the web application should run in an Auto Scaling group across two Availability Zones behind an Application Load Balancer (ALB). The database should be migrated to Amazon RDS with Multi-AZ deployment, which ensures fault tolerance and automatic failover in case of an AZ failure. This setup minimizes administrative overhead while meeting the company's requirements for high availability and scalability.

Option A: Read replicas are typically used for scaling read operations, and Multi-AZ provides better availability for a transactional database.

Option B: Replicating across AWS Regions adds unnecessary complexity for a single web application.

Option D: Running the database on self-managed EC2 instances, even across three Availability Zones, adds the administrative overhead of patching, replication, and failover that Amazon RDS handles automatically.

AWS Reference:

Auto Scaling Groups

Amazon RDS Multi-AZ
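
As a sketch of the database half of option C, the single flag MultiAZ=True below is what provisions the synchronous standby with automatic failover. The identifier, instance class, and sizing are hypothetical, and credentials handling is simplified.

```python
import boto3

rds = boto3.client("rds")

rds.create_db_instance(
    DBInstanceIdentifier="ecommerce-sql",   # hypothetical
    Engine="sqlserver-se",                  # SQL Server Standard Edition
    LicenseModel="license-included",
    DBInstanceClass="db.m5.xlarge",
    AllocatedStorage=200,
    MasterUsername="admin",
    ManageMasterUserPassword=True,          # let Secrets Manager hold the password
    MultiAZ=True,                           # synchronous standby in a second AZ
)
```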
