Amazon SAA-C03 Practice Test - Questions Answers, Page 85

A company has a large data workload that runs for 6 hours each day. The company cannot lose any data while the process is running. A solutions architect is designing an Amazon EMR cluster configuration to support this critical data workload.

Which solution will meet these requirements MOST cost-effectively?

A. Configure a long-running cluster that runs the primary node and core nodes on On-Demand Instances and the task nodes on Spot Instances.

B. Configure a transient cluster that runs the primary node and core nodes on On-Demand Instances and the task nodes on Spot Instances.

C. Configure a transient cluster that runs the primary node on an On-Demand Instance and the core nodes and task nodes on Spot Instances.

D. Configure a long-running cluster that runs the primary node on an On-Demand Instance, the core nodes on Spot Instances, and the task nodes on Spot Instances.
Suggested answer: B

Explanation:

For cost-effectiveness and high availability in Amazon EMR workloads, the best approach is to configure a transient cluster (which runs for the duration of the job and then terminates) with On-Demand Instances for the primary and core nodes, and Spot Instances for the task nodes. Here's why:

Primary and core nodes on On-Demand Instances: These nodes are critical because they manage the cluster and store data on HDFS. Running them on On-Demand Instances ensures stability and that no data is lost, as Spot Instances can be interrupted.

Task nodes on Spot Instances: Task nodes handle additional processing and can be used with Spot Instances to reduce costs. Spot Instances are much cheaper but can be interrupted, which is fine for non-critical tasks as the framework can handle retries.

A transient cluster is more cost-effective than a long-running cluster for workloads that only run for 6 hours a day. Transient clusters automatically terminate after the workload completes, saving costs by not keeping the cluster running when it's not needed.

Option A: A long-running cluster may result in unnecessary costs when the cluster isn't being used.

Option C: Running core nodes on Spot Instances risks data loss if the Spot Instances are interrupted, violating the requirement for zero data loss.

Option D: Running both core and task nodes on Spot Instances is highly risky for data-critical workloads.
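
As an illustration, here is a minimal boto3 sketch of answer B: a transient cluster with the primary and core nodes on On-Demand capacity and the task nodes on Spot. The cluster name, instance types, release label, S3 script path, and IAM role names are placeholders, not values from the question.

```python
import boto3

emr = boto3.client("emr", region_name="us-east-1")

response = emr.run_job_flow(
    Name="daily-batch-cluster",
    ReleaseLabel="emr-6.15.0",          # placeholder release label
    Applications=[{"Name": "Spark"}],
    Instances={
        "InstanceGroups": [
            # Primary and core nodes on On-Demand so HDFS data cannot be lost to interruptions
            {"Name": "primary", "InstanceRole": "MASTER",
             "Market": "ON_DEMAND", "InstanceType": "m5.xlarge", "InstanceCount": 1},
            {"Name": "core", "InstanceRole": "CORE",
             "Market": "ON_DEMAND", "InstanceType": "m5.xlarge", "InstanceCount": 2},
            # Task nodes on Spot: an interruption only delays work, it does not affect HDFS
            {"Name": "task", "InstanceRole": "TASK",
             "Market": "SPOT", "InstanceType": "m5.xlarge", "InstanceCount": 4},
        ],
        # Transient behavior: terminate the cluster as soon as the submitted steps finish
        "KeepJobFlowAliveWhenNoSteps": False,
        "TerminationProtected": False,
    },
    Steps=[{
        "Name": "daily-job",
        "ActionOnFailure": "TERMINATE_CLUSTER",
        "HadoopJarStep": {
            "Jar": "command-runner.jar",
            # Hypothetical Spark script location
            "Args": ["spark-submit", "s3://example-bucket/jobs/daily_job.py"],
        },
    }],
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)
print("Cluster ID:", response["JobFlowId"])
```

Because KeepJobFlowAliveWhenNoSteps is False, nothing is billed outside the daily 6-hour run: the cluster terminates automatically when the step completes.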

AWS Reference:

Amazon EMR Cluster Management

Using Spot Instances in EMR

A company runs an application that stores and shares photos. Users upload the photos to an Amazon S3 bucket. Every day, users upload approximately 150 photos. The company wants to design a solution that creates a thumbnail of each new photo and stores the thumbnail in a second S3 bucket.

Which solution will meet these requirements MOST cost-effectively?

A. Configure an Amazon EventBridge scheduled rule to invoke a script every minute on a long-running Amazon EMR cluster. Configure the script to generate thumbnails for the photos that do not have thumbnails. Configure the script to upload the thumbnails to the second S3 bucket.

B. Configure an Amazon EventBridge scheduled rule to invoke a script every minute on a memory-optimized Amazon EC2 instance that is always on. Configure the script to generate thumbnails for the photos that do not have thumbnails. Configure the script to upload the thumbnails to the second S3 bucket.

C. Configure an S3 event notification to invoke an AWS Lambda function each time a user uploads a new photo to the application. Configure the Lambda function to generate a thumbnail and to upload the thumbnail to the second S3 bucket.

D. Configure S3 Storage Lens to invoke an AWS Lambda function each time a user uploads a new photo to the application. Configure the Lambda function to generate a thumbnail and to upload the thumbnail to a second S3 bucket.
Suggested answer: C

Explanation:

The most cost-effective and scalable solution for generating thumbnails when photos are uploaded to an S3 bucket is to use S3 event notifications to trigger an AWS Lambda function. This approach avoids the need for a long-running EC2 instance or EMR cluster, making it highly cost-effective because Lambda only charges for the time it takes to process each event.

S3 Event Notifications: Automatically triggers the Lambda function when a new photo is uploaded to the S3 bucket.

AWS Lambda: A serverless compute service that scales automatically and only charges for execution time, which makes it the most economical choice when dealing with periodic events like photo uploads.

The Lambda function can generate the thumbnail and upload it to a second S3 bucket, fulfilling the requirement efficiently.

Option A and Option B (EMR or EC2 with scheduled scripts): These are less cost-effective as they involve continuously running infrastructure, which incurs unnecessary costs.

Option D (S3 Storage Lens): S3 Storage Lens is a tool for storage analytics and is not designed for event-based photo processing.
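
A minimal sketch of the Lambda function in answer C, assuming the Pillow imaging library is packaged with the function (for example, in a layer) and the destination bucket name is supplied through a hypothetical THUMBNAIL_BUCKET environment variable.

```python
import io
import os

import boto3
from PIL import Image  # assumes Pillow is bundled with the function or provided by a layer

s3 = boto3.client("s3")
THUMBNAIL_BUCKET = os.environ["THUMBNAIL_BUCKET"]  # hypothetical env var for the second bucket


def handler(event, context):
    # S3 event notifications deliver one or more records per invocation
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        # Download the newly uploaded photo
        original = s3.get_object(Bucket=bucket, Key=key)["Body"].read()

        # Resize it to a 128x128 thumbnail
        image = Image.open(io.BytesIO(original))
        image.thumbnail((128, 128))
        buffer = io.BytesIO()
        image.save(buffer, format="JPEG")

        # Store the thumbnail in the second bucket under the same key
        s3.put_object(
            Bucket=THUMBNAIL_BUCKET,
            Key=key,
            Body=buffer.getvalue(),
            ContentType="image/jpeg",
        )
```

At roughly 150 photos per day, the function runs only a few seconds of billed time daily, which is why this pattern is far cheaper than any always-on compute option.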

AWS Reference:

Amazon S3 Event Notifications

AWS Lambda Pricing

A company uses Amazon RDS for PostgreSQL to run its applications in the us-east-1 Region. The company also uses machine learning (ML) models to forecast annual revenue based on near real-time reports. The reports are generated by using the same RDS for PostgreSQL database. The database performance slows during business hours. The company needs to improve database performance.

Which solution will meet these requirements MOST cost-effectively?

A. Create a cross-Region read replica. Configure the reports to be generated from the read replica.

B. Activate Multi-AZ DB instance deployment for RDS for PostgreSQL. Configure the reports to be generated from the standby database.

C. Use AWS Database Migration Service (AWS DMS) to logically replicate data to a new database. Configure the reports to be generated from the new database.

D. Create a read replica in us-east-1. Configure the reports to be generated from the read replica.
Suggested answer: D

Explanation:

To improve the performance of the primary RDS PostgreSQL database during business hours and reduce the load, the best solution is to create a read replica in the same region (us-east-1). This will offload the read-heavy operations (like generating reports) to the replica, reducing the burden on the primary instance, which improves overall performance. Additionally, read replicas provide near real-time replication, making them ideal for real-time reporting use cases.

Option A (cross-Region read replica): This adds unnecessary latency for real-time reporting and increased costs due to cross-region data transfer.

Option B (Multi-AZ): Multi-AZ deployments are for high availability and disaster recovery but won't offload the read traffic, as the standby database cannot serve read requests.

Option C (AWS DMS replication): This adds complexity and is not as cost-effective as using an RDS read replica for the same region.
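
A minimal boto3 sketch of answer D: create an in-Region read replica and retrieve its endpoint for the reporting queries. The instance identifiers and instance class are hypothetical.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Create a read replica of the primary PostgreSQL instance in the same Region.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="reporting-replica",        # hypothetical replica name
    SourceDBInstanceIdentifier="app-postgres-primary",  # hypothetical primary name
    DBInstanceClass="db.r6g.large",                  # size the replica for the reporting load
)

# Wait until the replica is available, then point the reporting/ML jobs at its endpoint.
waiter = rds.get_waiter("db_instance_available")
waiter.wait(DBInstanceIdentifier="reporting-replica")

endpoint = rds.describe_db_instances(
    DBInstanceIdentifier="reporting-replica"
)["DBInstances"][0]["Endpoint"]["Address"]
print("Point reporting queries at:", endpoint)
```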

AWS Reference:

Amazon RDS Read Replicas

Amazon RDS Performance Best Practices

A company has developed a non-production application that is composed of multiple microservices for each of the company's business units. A single development team maintains all the microservices.

The current architecture uses a static web frontend and a Java-based backend that contains the application logic. The architecture also uses a MySQL database that the company hosts on an Amazon EC2 instance.

The company needs to ensure that the application is secure and available globally.

Which solution will meet these requirements with the LEAST operational overhead?

A. Use Amazon CloudFront and AWS Amplify to host the static web frontend. Refactor the microservices to use AWS Lambda functions that the microservices access by using Amazon API Gateway. Migrate the MySQL database to an Amazon EC2 Reserved Instance.

B. Use Amazon CloudFront and Amazon S3 to host the static web frontend. Refactor the microservices to use AWS Lambda functions that the microservices access by using Amazon API Gateway. Migrate the MySQL database to Amazon RDS for MySQL.

C. Use Amazon CloudFront and Amazon S3 to host the static web frontend. Refactor the microservices to use AWS Lambda functions that are in a target group behind a Network Load Balancer. Migrate the MySQL database to Amazon RDS for MySQL.

D. Use Amazon S3 to host the static web frontend. Refactor the microservices to use AWS Lambda functions that are in a target group behind an Application Load Balancer. Migrate the MySQL database to an Amazon EC2 Reserved Instance.
Suggested answer: B

Explanation:

This solution offers the least operational overhead while meeting the security and global availability requirements:

Amazon CloudFront and S3: Hosting the static frontend on S3 and serving it via CloudFront provides low-latency global distribution, high availability, and security. S3 is a cost-effective and serverless option for hosting static assets, and CloudFront ensures that the application is cached closer to the users, reducing latency globally.

AWS Lambda and API Gateway: Refactoring the microservices to use Lambda functions with API Gateway allows for a fully serverless, scalable, and highly available backend. This reduces the need for managing EC2 instances, as Lambda automatically scales to meet demand and only charges for the actual usage.

RDS for MySQL: Migrating the MySQL database from an EC2 instance to Amazon RDS significantly reduces operational overhead. RDS manages backups, patching, and scaling, and it offers high availability options (e.g., Multi-AZ).

Option A and D involve using EC2 Reserved Instances for the database, which requires more operational maintenance than using RDS.

Option C suggests using a Network Load Balancer with Lambda, which adds unnecessary complexity for this use case.
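
To illustrate the refactoring in answer B, here is a minimal sketch of what one microservice might look like as a Lambda function invoked through API Gateway with proxy integration. The response shape follows the standard proxy-integration contract; the handler logic and payload are placeholders, not part of the question.

```python
import json


def handler(event, context):
    # With proxy integration, API Gateway passes the HTTP method, path,
    # query string, and body to the function in "event".
    name = (event.get("queryStringParameters") or {}).get("name", "world")

    # The function must return a proxy-integration response object.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

Each microservice becomes one or more such functions behind API Gateway routes, so there are no servers to patch or scale for the backend.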

AWS Reference:

Amazon S3 and CloudFront Integration

AWS Lambda with API Gateway

Amazon RDS for MySQL

A company is designing a new internal web application in the AWS Cloud. The new application must securely retrieve and store multiple employee usernames and passwords from an AWS managed service. Which solution will meet these requirements with the LEAST operational overhead?

A. Store the employee credentials in AWS Systems Manager Parameter Store. Use AWS CloudFormation and the BatchGetSecretValue API to retrieve usernames and passwords from Parameter Store.

B. Store the employee credentials in AWS Secrets Manager. Use AWS CloudFormation and AWS Batch with the BatchGetSecretValue API to retrieve the usernames and passwords from Secrets Manager.

C. Store the employee credentials in AWS Systems Manager Parameter Store. Use AWS CloudFormation and AWS Batch with the BatchGetSecretValue API to retrieve the usernames and passwords from Parameter Store.

D. Store the employee credentials in AWS Secrets Manager. Use AWS CloudFormation and the BatchGetSecretValue API to retrieve the usernames and passwords from Secrets Manager.
Suggested answer: D

Explanation:

AWS Secrets Manager is the best solution for securely storing and managing sensitive information, such as usernames and passwords. Secrets Manager provides automatic rotation, fine-grained access control, and encryption of credentials. It is designed to integrate easily with other AWS services, such as CloudFormation, to automate the retrieval of secrets via the BatchGetSecretValue API.

Secrets Manager has a lower operational overhead than manually managing credentials, and it offers features like automatic secret rotation that reduce the need for human intervention.

Option A and C (Parameter Store): While Systems Manager Parameter Store can store secrets, Secrets Manager provides more specialized capabilities for securely managing and rotating credentials with less operational overhead.

Option B and C (AWS Batch): Introducing AWS Batch unnecessarily complicates the solution. Secrets Manager already provides simple API calls for retrieving secrets without needing an additional service.
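
A minimal boto3 sketch of answer D, retrieving several credentials in one call with the BatchGetSecretValue API (available in recent SDK versions). The secret names and the JSON structure of each secret are hypothetical.

```python
import json

import boto3

secrets = boto3.client("secretsmanager")

# Fetch multiple employee credential secrets in a single call.
# Each secret is assumed to store a JSON document such as
# {"username": "...", "password": "..."}.
response = secrets.batch_get_secret_value(
    SecretIdList=["app/employee/alice", "app/employee/bob"]  # hypothetical secret names
)

credentials = {}
for secret in response["SecretValues"]:
    credentials[secret["Name"]] = json.loads(secret["SecretString"])

# Secrets that could not be retrieved are listed in response["Errors"].
for error in response.get("Errors", []):
    print("Failed to read", error["SecretId"], error["Message"])
```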

AWS Reference:

AWS Secrets Manager

Secrets Manager with CloudFormation

A company hosts an application on AWS. The application gives users the ability to upload photos and store the photos in an Amazon S3 bucket. The company wants to use Amazon CloudFront and a custom domain name to upload the photo files to the S3 bucket in the eu-west-1 Region.

Which solution will meet these requirements? (Select TWO.)

A. Use AWS Certificate Manager (ACM) to create a public certificate in the us-east-1 Region. Use the certificate in CloudFront.

B. Use AWS Certificate Manager (ACM) to create a public certificate in eu-west-1. Use the certificate in CloudFront.

C. Configure Amazon S3 to allow uploads from CloudFront. Configure S3 Transfer Acceleration.

D. Configure Amazon S3 to allow uploads from CloudFront origin access control (OAC).

E. Configure Amazon S3 to allow uploads from CloudFront. Configure an Amazon S3 website endpoint.
Answers
Suggested answer: A, C

Explanation:

To upload photos to an S3 bucket using Amazon CloudFront with a custom domain name, the following components are required:

ACM in us-east-1 (Option A): When using CloudFront with HTTPS, the SSL/TLS certificate must be created in the us-east-1 Region. AWS Certificate Manager (ACM) handles the provisioning, management, and renewal of public certificates, making this a cost-effective and low-maintenance solution.

S3 Transfer Acceleration (Option C): Transfer Acceleration allows faster uploads to S3 from CloudFront by routing traffic through AWS's edge locations. This significantly speeds up the data upload process, especially for users that are geographically distant from the S3 bucket's region.

Option B (ACM in eu-west-1): CloudFront only supports certificates created in us-east-1.

Option D and E (OAC and website endpoint): These are not ideal for handling secure uploads or efficient data transfers in this case.
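
A minimal boto3 sketch of the two selected options: requesting the public certificate in us-east-1 for CloudFront (option A) and enabling Transfer Acceleration on the upload bucket in eu-west-1 (option C). The domain name and bucket name are placeholders.

```python
import boto3

# CloudFront only accepts ACM certificates from us-east-1, regardless of the
# Region that hosts the S3 bucket (eu-west-1 in this scenario).
acm = boto3.client("acm", region_name="us-east-1")
cert = acm.request_certificate(
    DomainName="photos.example.com",   # hypothetical custom domain
    ValidationMethod="DNS",
)
print("Certificate ARN for the CloudFront distribution:", cert["CertificateArn"])

# Enable S3 Transfer Acceleration on the upload bucket in eu-west-1.
s3 = boto3.client("s3", region_name="eu-west-1")
s3.put_bucket_accelerate_configuration(
    Bucket="example-photo-uploads",    # hypothetical bucket name
    AccelerateConfiguration={"Status": "Enabled"},
)
```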

AWS Reference:

Using ACM with CloudFront

Amazon S3 Transfer Acceleration

A company maintains its accounting records in a custom application that runs on Amazon EC2 instances. The company needs to migrate the data to an AWS managed service for development and maintenance of the application data. The solution must require minimal operational support and provide immutable, cryptographically verifiable logs of data changes.

Which solution will meet these requirements MOST cost-effectively?

A. Copy the records from the application into an Amazon Redshift cluster.

B. Copy the records from the application into an Amazon Neptune cluster.

C. Copy the records from the application into an Amazon Timestream database.

D. Copy the records from the application into an Amazon Quantum Ledger Database (Amazon QLDB) ledger.
Suggested answer: D

Explanation:

Amazon QLDB is the most cost-effective and suitable service for maintaining immutable, cryptographically verifiable logs of data changes. QLDB provides a fully managed ledger database with a built-in cryptographic hash chain, making it ideal for recording changes to accounting records, ensuring data integrity and security.

QLDB reduces operational overhead by offering fully managed services, so there's no need for server management, and it's built specifically to ensure immutability and verifiability, making it the best fit for the given requirements.

Option A (Redshift): Redshift is designed for analytics and not for immutable, cryptographically verifiable logs.

Option B (Neptune): Neptune is a graph database, which is not suitable for this use case.

Option C (Timestream): Timestream is a time series database optimized for time-stamped data, but it does not provide immutable or cryptographically verifiable logs.
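
A minimal sketch of answer D, assuming the pyqldb driver (version 3 or later) is installed: create the ledger with boto3, then write records with PartiQL once the ledger is active. The ledger name, table name, and record fields are hypothetical.

```python
import boto3
from pyqldb.driver.qldb_driver import QldbDriver  # Amazon QLDB driver: pip install pyqldb

# Create the ledger. STANDARD permissions mode enables fine-grained IAM policies;
# the journal itself is append-only and cryptographically chained by the service.
qldb = boto3.client("qldb")
qldb.create_ledger(
    Name="accounting-records",      # hypothetical ledger name
    PermissionsMode="STANDARD",
    DeletionProtection=True,
)

# Once the ledger is ACTIVE and a "Records" table has been created,
# insert documents with PartiQL; every revision is recorded in the verifiable journal.
driver = QldbDriver(ledger_name="accounting-records")
driver.execute_lambda(
    lambda txn: txn.execute_statement(
        "INSERT INTO Records ?",
        {"account": "1001", "amount": 250.00, "type": "credit"},  # hypothetical record
    )
)
```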

AWS Reference:

Amazon QLDB

How QLDB Works

A company hosts a video streaming web application in a VPC. The company uses a Network Load Balancer (NLB) to handle TCP traffic for real-time data processing. There have been unauthorized attempts to access the application.

The company wants to improve application security with minimal architectural change to prevent unauthorized attempts to access the application.

Which solution will meet these requirements?

A. Implement a series of AWS WAF rules directly on the NLB to filter out unauthorized traffic.

B. Recreate the NLB with a security group to allow only trusted IP addresses.

C. Deploy a second NLB in parallel with the existing NLB configured with a strict IP address allow list.

D. Use AWS Shield Advanced to provide enhanced DDoS protection and prevent unauthorized access attempts.
Suggested answer: D

A company runs its workloads on Amazon Elastic Container Service (Amazon ECS). The container images that the ECS task definition uses need to be scanned for Common Vulnerabilities and Exposures (CVEs). New container images that are created also need to be scanned.

Which solution will meet these requirements with the FEWEST changes to the workloads?

A.

Use Amazon Elastic Container Registry (Amazon ECR) as a private image repository to store the container images. Specify scan on push filters for the ECR basic scan.

A.

Use Amazon Elastic Container Registry (Amazon ECR) as a private image repository to store the container images. Specify scan on push filters for the ECR basic scan.

Answers
B.

Store the container images in an Amazon S3 bucket. Use Amazon Macie to scan the images. Use an S3 Event Notification to initiate a Made scan for every event with an s3:ObjeclCreated:Put event type

B.

Store the container images in an Amazon S3 bucket. Use Amazon Macie to scan the images. Use an S3 Event Notification to initiate a Made scan for every event with an s3:ObjeclCreated:Put event type

Answers
C.

Deploy the workloads to Amazon Elastic Kubernetes Service (Amazon EKS). Use Amazon Elastic Container Registry (Amazon ECR) as a private image repository. Specify scan on push filters for the ECR enhanced scan.

C.

Deploy the workloads to Amazon Elastic Kubernetes Service (Amazon EKS). Use Amazon Elastic Container Registry (Amazon ECR) as a private image repository. Specify scan on push filters for the ECR enhanced scan.

Answers
D.

Store the container images in an Amazon S3 bucket that has versioning enabled. Configure an S3 Event Notification for s3:ObjectCrealed:* events to invoke an AWS Lambda function. Configure the Lambda function to initiate an Amazon Inspector scan.

D.

Store the container images in an Amazon S3 bucket that has versioning enabled. Configure an S3 Event Notification for s3:ObjectCrealed:* events to invoke an AWS Lambda function. Configure the Lambda function to initiate an Amazon Inspector scan.

Answers
Suggested answer: A
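
Answer A keeps the ECS workloads unchanged and relies on ECR basic scanning. As a minimal boto3 sketch, the repository below is created with scan on push enabled, and CVE findings are read back after an image is pushed; the repository name and image tag are hypothetical.

```python
import boto3

ecr = boto3.client("ecr")

# Create a private repository with basic scan-on-push enabled, so every new
# image pushed by the build pipeline is scanned for CVEs automatically.
ecr.create_repository(
    repositoryName="my-ecs-app",                      # hypothetical repository name
    imageScanningConfiguration={"scanOnPush": True},
)

# After an image has been pushed, read the CVE findings for a given tag.
findings = ecr.describe_image_scan_findings(
    repositoryName="my-ecs-app",
    imageId={"imageTag": "latest"},
)
for finding in findings["imageScanFindings"]["findings"]:
    print(finding["severity"], finding["name"])
```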

A company has an application that runs on an Amazon Elastic Kubernetes Service (Amazon EKS) cluster on Amazon EC2 instances. The application has a UI that uses Amazon DynamoDB and data services that use Amazon S3 as part of the application deployment.

The company must ensure that the EKS Pods for the UI can access only Amazon DynamoDB and that the EKS Pods for the data services can access only Amazon S3. The company uses AWS Identity and Access Management (IAM).

Which solution meets these requirements?

A. Create separate IAM policies for Amazon S3 and DynamoDB access with the required permissions. Attach both IAM policies to the EC2 instance profile. Use role-based access control (RBAC) to control access to Amazon S3 or DynamoDB for the respective EKS Pods.

B. Create separate IAM policies for Amazon S3 and DynamoDB access with the required permissions. Attach the Amazon S3 IAM policy directly to the EKS Pods for the data services and the DynamoDB policy to the EKS Pods for the UI.

C. Create separate Kubernetes service accounts for the UI and data services to assume an IAM role. Attach the AmazonS3FullAccess policy to the data services account and the AmazonDynamoDBFullAccess policy to the UI service account.

D. Create separate Kubernetes service accounts for the UI and data services to assume an IAM role. Use IAM Roles for Service Accounts (IRSA) to provide access to the EKS Pods for the UI to Amazon S3 and the EKS Pods for the data services to DynamoDB.
Suggested answer: A