Amazon SAA-C03 Practice Test - Questions Answers, Page 91

A company is developing a social media application that must scale to meet demand spikes and process messages in order.

Which AWS services meet these requirements?

A. ECS with Fargate, RDS, and SQS for decoupling.

B. ECS with Fargate, RDS, and SNS for decoupling.

C. DynamoDB, Lambda, DynamoDB Streams, and Step Functions.

D. Elastic Beanstalk, RDS, and SNS for decoupling.
Suggested answer: A

Explanation:

Option A combines ECS with Fargate for scalability, RDS for relational data, and SQS FIFO queues for decoupling with guaranteed message ordering.

Option B uses SNS, which does not maintain message order on standard topics.

Option C suits serverless workflows but provides no relational data store.

Option D relies on Elastic Beanstalk, which offers less flexibility for scaling.
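
A minimal sketch of the ordering mechanism behind option A: publishing to an SQS FIFO queue with a message group ID, which preserves order within the group. The queue URL and payload below are hypothetical.

```python
import boto3

sqs = boto3.client("sqs")

# Hypothetical FIFO queue URL; FIFO queue names must end in ".fifo".
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/social-feed.fifo"

# Messages that share a MessageGroupId are delivered in order within the group;
# MessageDeduplicationId (or content-based deduplication) prevents duplicates.
sqs.send_message(
    QueueUrl=QUEUE_URL,
    MessageBody='{"event": "post_created", "post_id": "42"}',
    MessageGroupId="user-1234",
    MessageDeduplicationId="post-42-created",
)
```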

A company wants to implement a data lake in the AWS Cloud. The company must ensure that only specific teams have access to sensitive data in the data lake. The company must have row-level access control for the data lake.

Which solution will meet these requirements?

A. Use Amazon RDS to store the data. Use IAM roles and permissions for data governance and access control.

B. Use Amazon Redshift to store the data. Use IAM roles and permissions for data governance and access control.

C. Use Amazon S3 to store the data. Use AWS Lake Formation for data governance and access control.

D. Use AWS Glue Catalog to store the data. Use AWS Glue DataBrew for data governance and access control.
Suggested answer: C

Explanation:


A. RDS: Suitable for relational databases but provides no native support for data lakes or row-level access control.

B. Redshift: Primarily an analytics warehouse, not designed for data lake governance.

C. S3 + Lake Formation: Native data lake storage with granular access control, including row-level permissions through data filters.

D. Glue Catalog + DataBrew: Focused on metadata management and data preparation, not row-level access control.
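
A hedged sketch of how option C enforces row-level access: Lake Formation data cells filters attach a row filter expression to a catalog table, and the filter is then granted to specific teams. The account ID, database, table, and filter names below are hypothetical.

```python
import boto3

lf = boto3.client("lakeformation")

# Hypothetical names; this filter limits grantees to rows where region = 'EU'.
lf.create_data_cells_filter(
    TableData={
        "TableCatalogId": "123456789012",  # AWS account that owns the catalog
        "DatabaseName": "datalake_db",
        "TableName": "transactions",
        "Name": "eu-rows-only",
        "RowFilter": {"FilterExpression": "region = 'EU'"},
        "ColumnWildcard": {},  # all columns; use ColumnNames to restrict columns too
    }
)
```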

A company hosts a multi-tier inventory reporting application on AWS. The company needs a cost-effective solution to generate inventory reports on demand. Admin users need to have the ability to generate new reports. Reports take approximately 5-10 minutes to finish. The application must send reports to the email address of the admin user who generates each report.

Which solution will meet these requirements?

A. Use Amazon Elastic Container Service (Amazon ECS) to host the report generation code. Use an Amazon API Gateway HTTP API to invoke the code. Use Amazon Simple Email Service (Amazon SES) to send the reports to admin users.

B. Use Amazon EventBridge to invoke a scheduled AWS Lambda function to generate the reports. Use Amazon Simple Notification Service (Amazon SNS) to send the reports to admin users.

C. Use Amazon Elastic Kubernetes Service (Amazon EKS) to host the report generation code. Use an Amazon API Gateway REST API to invoke the code. Use Amazon Simple Notification Service (Amazon SNS) to send the reports to admin users.

D. Create an AWS Lambda function to generate the reports. Use a function URL to invoke the function. Use Amazon Simple Email Service (Amazon SES) to send the reports to admin users.
Suggested answer: D

Explanation:


A. ECS + API Gateway: Overly complex and costly for an intermittent, on-demand workload.

B. EventBridge + SNS: Scheduled invocation does not fit on-demand report generation.

C. EKS + API Gateway: Overkill for this use case, with high operational overhead.

D. Lambda + SES: The most cost-effective solution for generating and emailing reports on demand. Reports that finish in 5-10 minutes fit within Lambda's 15-minute maximum timeout.
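
A minimal sketch of option D, assuming a Python Lambda function behind a function URL. The sender address and query parameter are hypothetical, and the sender must be a verified SES identity.

```python
import boto3

ses = boto3.client("ses")

def generate_report():
    # Stand-in for the real 5-10 minute generation job.
    return "inventory report contents"

def handler(event, context):
    # Function URL request (payload v2.0); the "email" parameter is hypothetical.
    admin_email = event["queryStringParameters"]["email"]
    ses.send_email(
        Source="reports@example.com",  # must be an SES-verified identity
        Destination={"ToAddresses": [admin_email]},
        Message={
            "Subject": {"Data": "Inventory report"},
            "Body": {"Text": {"Data": generate_report()}},
        },
    )
    return {"statusCode": 200, "body": "Report emailed"}
```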

A company that has multiple AWS accounts maintains an on-premises Microsoft Active Directory. The company needs a solution to implement Single Sign-On for its employees. The company wants to use AWS IAM Identity Center.

The solution must meet the following requirements:

Allow users to access AWS accounts and third-party applications by using existing Active Directory credentials.

Enforce multi-factor authentication (MFA) to access AWS accounts.

Centrally manage permissions to access AWS accounts and applications.

Which solution will meet these requirements?

A. Create an IAM identity provider for Active Directory in each AWS account. Ensure that Active Directory users and groups access AWS accounts directly through IAM roles. Use IAM Identity Center to enforce MFA in each account for all users.

B. Use AWS Directory Service to create a new AWS Managed Microsoft AD Active Directory. Configure IAM Identity Center in each account to use the new AWS Managed Microsoft AD Active Directory as the identity source. Use IAM Identity Center to enforce MFA for all users.

C. Use IAM Identity Center with the existing Active Directory as the identity source. Enforce MFA for all users. Use AWS Organizations and Active Directory groups to manage access permissions for AWS accounts and application access.

D. Use AWS Lambda functions to periodically synchronize Active Directory users and groups with IAM users and groups in each AWS account. Use IAM roles and policies to manage application access. Create a second Lambda function to enforce MFA.
Suggested answer: C

Explanation:


A. IAM identity provider per account: Requires per-account configuration and does not provide centralized management.

B. AWS Managed Microsoft AD: Creates a new directory, which is unnecessary when an on-premises Active Directory already exists.

C. IAM Identity Center + existing AD: Integrates the existing Active Directory for SSO, enforces MFA, and centralizes permissions across accounts.

D. Lambda synchronization: Adds custom code and complexity instead of using IAM Identity Center's built-in capabilities.
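
A hedged sketch of the centralized permission management in option C: once IAM Identity Center uses the existing AD as its identity source (via AD Connector or a trust), an AD group can be assigned to any member account with a permission set through one API call. All ARNs and IDs below are hypothetical.

```python
import boto3

sso_admin = boto3.client("sso-admin")

# Assign a synced AD group to a member account with a permission set.
sso_admin.create_account_assignment(
    InstanceArn="arn:aws:sso:::instance/ssoins-EXAMPLE",
    TargetId="111122223333",  # member AWS account ID
    TargetType="AWS_ACCOUNT",
    PermissionSetArn="arn:aws:sso:::permissionSet/ssoins-EXAMPLE/ps-EXAMPLE",
    PrincipalType="GROUP",
    PrincipalId="ad-group-guid-EXAMPLE",  # Identity Store ID of the synced AD group
)
```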

A company runs an order management application on AWS. The application allows customers to place orders and pay with a credit card. The company uses an Amazon CloudFront distribution to deliver the application.

A security team has set up logging for all incoming requests. The security team needs a solution to generate an alert if any user modifies the logging configuration.

Which solutions will meet these requirements? (Select TWO.)

A. Configure an Amazon EventBridge rule that is invoked when a user creates or modifies a CloudFront distribution. Add the AWS Lambda function as a target of the EventBridge rule.

B. Create an Application Load Balancer (ALB). Enable AWS WAF rules for the ALB. Configure an AWS Config rule to detect security violations.

C. Create an AWS Lambda function to detect changes in CloudFront distribution logging. Configure the Lambda function to use Amazon Simple Notification Service (Amazon SNS) to send notifications to the security team.

D. Set up Amazon GuardDuty. Configure GuardDuty to monitor findings from the CloudFront distribution. Create an AWS Lambda function to address the findings.

E. Create a private API in Amazon API Gateway. Use AWS WAF rules to protect the private API from common security problems.
Suggested answer: A, C

Explanation:


A. EventBridge rule: Detects CloudFront distribution changes in near real time and invokes the Lambda function for evaluation.

B. ALB + Config: Targets ALB security violations, not CloudFront logging changes.

C. Lambda + SNS: Evaluates the logging configuration change and notifies the security team.

D. GuardDuty: Focuses on threat detection, not configuration changes.

E. API Gateway + WAF: Unrelated to CloudFront logging changes.
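
A hedged sketch of the EventBridge rule from option A. CloudFront management events are recorded by CloudTrail in us-east-1, so the rule lives there; the target Lambda ARN is hypothetical, and the resource-based permission the function needs to be invoked by EventBridge is omitted for brevity.

```python
import json

import boto3

events = boto3.client("events", region_name="us-east-1")

# Match CloudTrail-recorded configuration changes to any distribution.
pattern = {
    "source": ["aws.cloudfront"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["cloudfront.amazonaws.com"],
        "eventName": ["CreateDistribution", "UpdateDistribution"],
    },
}

events.put_rule(Name="cloudfront-config-change", EventPattern=json.dumps(pattern))

# Hypothetical Lambda ARN; the function inspects the change and alerts the
# security team (e.g., through SNS as in option C).
events.put_targets(
    Rule="cloudfront-config-change",
    Targets=[{"Id": "alert-fn", "Arn": "arn:aws:lambda:us-east-1:111122223333:function:alert"}],
)
```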


A company recently migrated a data warehouse to AWS. The company has an AWS Direct Connect connection to AWS. Company users query the data warehouse by using a visualization tool. The average size of the queries that the data warehouse returns is 50 MB. The average visualization that the visualization tool produces is 500 KB in size. The result sets that the data warehouse returns are not cached.

The company wants to optimize costs for data transfers between the data warehouse and the company.

Which solution will meet this requirement?

A. Host the visualization tool on premises. Connect to the data warehouse directly through the internet.

B. Host the visualization tool in the same AWS Region as the data warehouse. Access the visualization tool through the internet.

C. Host the visualization tool on premises. Connect to the data warehouse through the Direct Connect connection.

D. Host the visualization tool in the same AWS Region as the data warehouse. Access the visualization tool through the Direct Connect connection.
Suggested answer: D

Explanation:

A. On-premises tool via internet: Every 50 MB result set crosses the internet, incurring high data transfer costs.

B. In-Region tool via internet: Keeps query results inside AWS but delivers visualizations over the internet instead of the existing Direct Connect connection.

C. On-premises tool via Direct Connect: The full 50 MB result sets still traverse Direct Connect for every query.

D. In-Region tool via Direct Connect: Only the ~500 KB visualizations cross Direct Connect, roughly 1/100th of the data, minimizing transfer costs.
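
The cost argument reduces to simple arithmetic, assuming results are uncached and every query produces one visualization:

```python
# Per-query transfer volume under the stated averages (no caching).
query_mb = 50.0      # result set returned by the data warehouse
viz_mb = 500 / 1024  # ~0.49 MB visualization

print(f"In-Region hosting moves ~{query_mb / viz_mb:.0f}x less data "
      f"across the Direct Connect link per query")
# -> roughly 102x less data per query
```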


How can trade data from DynamoDB be ingested into an S3 data lake for near real-time analysis?

A. Use DynamoDB Streams to invoke a Lambda function that writes to S3.

B. Use DynamoDB Streams to invoke a Lambda function that writes to Data Firehose, which writes to S3.

C. Enable Kinesis Data Streams on DynamoDB. Configure it to invoke a Lambda function that writes to S3.

D. Enable Kinesis Data Streams on DynamoDB. Use Data Firehose to write to S3.
Suggested answer: A

Explanation:

Option A is the simplest solution: DynamoDB Streams invokes a Lambda function that writes each change to S3 in near real time.

Options B, C, and D add unnecessary components (Data Firehose or Kinesis Data Streams) for this requirement.
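
A minimal sketch of the Lambda function in option A, assuming the table's stream is configured with a view type that includes new images; the bucket name and key layout are hypothetical.

```python
import json

import boto3

s3 = boto3.client("s3")
BUCKET = "trade-data-lake"  # hypothetical bucket name

def handler(event, context):
    # Batches of change records arrive from the DynamoDB stream.
    for record in event["Records"]:
        if record["eventName"] not in ("INSERT", "MODIFY"):
            continue
        new_image = record["dynamodb"]["NewImage"]  # DynamoDB-typed attribute map
        key = f"trades/{record['eventID']}.json"
        s3.put_object(Bucket=BUCKET, Key=key, Body=json.dumps(new_image))
```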

How can DynamoDB data be made available for long-term analytics with minimal operational overhead?

A.

Configure DynamoDB incremental exports to S3.

A.

Configure DynamoDB incremental exports to S3.

Answers
B.

Configure DynamoDB Streams to write records to S3.

B.

Configure DynamoDB Streams to write records to S3.

Answers
C.

Configure EMR to copy DynamoDB data to S3.

C.

Configure EMR to copy DynamoDB data to S3.

Answers
D.

Configure EMR to copy DynamoDB data to HDFS.

D.

Configure EMR to copy DynamoDB data to HDFS.

Answers
Suggested answer: A

Explanation:

Option A is the most automated and cost-efficient solution: incremental exports are a managed feature with no infrastructure to operate.

Option B requires custom code, because DynamoDB Streams cannot write to S3 directly.

Options C and D add the operational overhead of EMR clusters, and option D lands the data in HDFS rather than S3.
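
A hedged sketch of option A using the DynamoDB export API; the table ARN, bucket, and time window are hypothetical, and the table must have point-in-time recovery enabled for exports to work.

```python
from datetime import datetime, timedelta, timezone

import boto3

ddb = boto3.client("dynamodb")
now = datetime.now(timezone.utc)

# Export only the changes from the last day to the data lake bucket.
ddb.export_table_to_point_in_time(
    TableArn="arn:aws:dynamodb:us-east-1:111122223333:table/trades",
    S3Bucket="analytics-data-lake",
    ExportFormat="DYNAMODB_JSON",
    ExportType="INCREMENTAL_EXPORT",
    IncrementalExportSpecification={
        "ExportFromTime": now - timedelta(days=1),
        "ExportToTime": now,
        "ExportViewType": "NEW_AND_OLD_IMAGES",
    },
)
```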

A company runs a Microsoft Windows SMB file share on-premises to support an application. The company wants to migrate the application to AWS. The company wants to share storage across multiple Amazon EC2 instances.

Which solutions will meet these requirements with the LEAST operational overhead? (Select TWO.)

A. Create an Amazon Elastic File System (Amazon EFS) file system with elastic throughput.

B. Create an Amazon FSx for NetApp ONTAP file system.

C. Use Amazon Elastic Block Store (Amazon EBS) to create a self-managed Windows file share on the instances.

D. Create an Amazon FSx for Windows File Server file system.

E. Create an Amazon FSx for OpenZFS file system.
Suggested answer: A, D

Explanation:

A. Amazon EFS: Fully managed, scalable shared file storage with minimal operational overhead, provided the application can use NFS after migration.

B. Amazon FSx for NetApp ONTAP: Suited to workloads that need NetApp-specific features.

C. Amazon EBS: A self-managed file share adds significant operational overhead.

D. Amazon FSx for Windows File Server: A fully managed native SMB file share, the best fit for Windows workloads.

E. Amazon FSx for OpenZFS: Targets Linux and Unix workloads.
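
A hedged sketch of provisioning option D; the subnet, security group, and directory IDs are hypothetical. The resulting file system exposes a native SMB share that multiple EC2 instances can map, with AWS operating the Windows file servers.

```python
import boto3

fsx = boto3.client("fsx")

fsx.create_file_system(
    FileSystemType="WINDOWS",
    StorageCapacity=300,  # GiB
    SubnetIds=["subnet-0abc1234def567890"],
    SecurityGroupIds=["sg-0abc1234def567890"],
    WindowsConfiguration={
        "DeploymentType": "SINGLE_AZ_2",
        "ThroughputCapacity": 32,             # MB/s
        "ActiveDirectoryId": "d-1234567890",  # AWS Managed Microsoft AD to join
    },
)
```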


A solutions architect needs to implement a solution that can handle up to 5,000 messages per second. The solution must publish messages as events to multiple consumers. The messages are up to 500 KB in size. The message consumers need to have the ability to use multiple programming languages to consume the messages with minimal latency. The solution must retain published messages for more than 3 months. The solution must enforce strict ordering of the messages.

Which solution will meet these requirements?

A. Publish messages to an Amazon Kinesis Data Streams data stream. Enable enhanced fan-out. Ensure that consumers ingest the data stream by using dedicated throughput.

B. Publish messages to an Amazon Simple Notification Service (Amazon SNS) topic. Ensure that consumers use an Amazon Simple Queue Service (Amazon SQS) FIFO queue to subscribe to the topic.

C. Publish messages to Amazon EventBridge. Allow each consumer to create rules to deliver messages to the consumer's own target.

D. Publish messages to an Amazon Simple Notification Service (Amazon SNS) topic. Ensure that consumers use Amazon Data Firehose to subscribe to the topic.
Suggested answer: A

Explanation:

A. Kinesis Data Streams: Handles high throughput, records up to 1 MB, strict per-shard ordering, multiple consumers via enhanced fan-out, and retention configurable up to 365 days.

B. SNS + SQS FIFO: The maximum message size is 256 KB and SQS retains messages for at most 14 days, so neither the size nor the retention requirement is met.

C. EventBridge: Provides no ordering guarantee and limits events to 256 KB.

D. SNS + Firehose: Not designed for strict ordering or messages of this size.
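
A hedged sketch of the three properties option A depends on, with a hypothetical stream name and ARN:

```python
import boto3

kinesis = boto3.client("kinesis")
STREAM = "events-stream"  # hypothetical stream name

# Ordering: records sharing a partition key land on the same shard and are
# read back in order. Individual records can be up to 1 MB.
kinesis.put_record(
    StreamName=STREAM,
    Data=b'{"event": "example", "payload": "up to 1 MB"}',
    PartitionKey="user-1234",
)

# Retention: raise the 24-hour default to 365 days (8760 hours), which
# covers the 3-month requirement.
kinesis.increase_stream_retention_period(StreamName=STREAM, RetentionPeriodHours=8760)

# Enhanced fan-out: each registered consumer receives its own dedicated
# 2 MB/s of read throughput per shard, keeping latency low.
kinesis.register_stream_consumer(
    StreamARN="arn:aws:kinesis:us-east-1:111122223333:stream/events-stream",
    ConsumerName="analytics-service",
)
```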

