
Amazon SAA-C03 Practice Test - Questions Answers, Page 90


A company hosts a database that runs on an Amazon RDS instance deployed to multiple Availability Zones. A periodic script negatively affects a critical application by querying the database. How can application performance be improved with minimal costs?

A. Add functionality to the script to identify the instance with the fewest active connections and query that instance.

B. Create a read replica of the database. Configure the script to query only the read replica.

C. Instruct the development team to manually export new entries at the end of the day.

D. Use Amazon ElastiCache to cache the common queries the script runs.
Suggested answer: B

Explanation:

Option A introduces complexity and does not scale well.

Option B creates a read replica, offloading read traffic from the primary RDS instance without impacting the critical application.

Option C is manual and inefficient.

Option D might help for caching frequently queried data but is not ideal for ad-hoc reporting.

Therefore, Option B is the best choice.
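As a sketch of option B, the replica is a single RDS API call and the script is then repointed at the replica's endpoint. Instance identifiers and the endpoint hostname below are hypothetical:

```python
# Parameters for creating an RDS read replica of the Multi-AZ primary.
# In practice this dict would be passed to
# boto3.client("rds").create_db_instance_read_replica(**replica_params).
replica_params = {
    "DBInstanceIdentifier": "reporting-replica",     # new replica's name (hypothetical)
    "SourceDBInstanceIdentifier": "app-primary-db",  # existing primary (hypothetical)
}

# The periodic script then connects to the replica's endpoint instead of the
# primary's, so its queries no longer compete with the critical application.
REPLICA_ENDPOINT = "reporting-replica.abc123.us-east-1.rds.amazonaws.com"  # hypothetical
```

The critical application keeps using the primary endpoint unchanged; only the script's connection string moves to the replica.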

How can a company detect and notify security teams about PII in S3 buckets?

A. Use Amazon Macie. Create an EventBridge rule for SensitiveData findings and send an SNS notification.

B. Use Amazon GuardDuty. Create an EventBridge rule for CRITICAL findings and send an SNS notification.

C. Use Amazon Macie. Create an EventBridge rule for SensitiveData:S3Object/Personal findings and send an SQS notification.

D. Use Amazon GuardDuty. Create an EventBridge rule for CRITICAL findings and send an SQS notification.
Suggested answer: A

Explanation:

Amazon Macie is purpose-built for detecting PII in S3.

Option A uses EventBridge to filter SensitiveData findings and notify via SNS, meeting the requirements.

Options B and D involve GuardDuty, which is not designed for PII detection.

Option C uses SQS, which is less suitable for immediate notifications.
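The core of option A is an EventBridge event pattern that matches Macie's SensitiveData finding types and routes them to an SNS topic. A minimal sketch; the topic ARN is hypothetical:

```python
import json

# EventBridge event pattern matching Amazon Macie sensitive-data findings.
# In practice: events.put_rule(Name="macie-pii", EventPattern=json.dumps(event_pattern))
# followed by events.put_targets(...) pointing the rule at the SNS topic,
# which notifies the security team.
event_pattern = {
    "source": ["aws.macie"],
    "detail-type": ["Macie Finding"],
    "detail": {"type": [{"prefix": "SensitiveData"}]},  # matches e.g. SensitiveData:S3Object/Personal
}
SNS_TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:pii-alerts"  # hypothetical
print(json.dumps(event_pattern))
```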

A company runs HPC workloads requiring high IOPS.

Which combination of steps will meet these requirements? (Select TWO)

A. Use Amazon EFS as a high-performance file system.

B. Use Amazon FSx for Lustre as a high-performance file system.

C. Create an Auto Scaling group of EC2 instances. Use Reserved Instances. Configure a spread placement group. Use AWS Batch for analytics.

D. Use Mountpoint for Amazon S3 as a high-performance file system.

E. Create an Auto Scaling group of EC2 instances. Use mixed instance types and a cluster placement group. Use Amazon EMR for analytics.
Suggested answer: B, E

Explanation:

Option B: FSx for Lustre is designed for HPC workloads with high IOPS.

Option E: A cluster placement group ensures low-latency networking for HPC analytics workloads.

Option A: Amazon EFS is not optimized for HPC.

Option D: Mountpoint for S3 does not meet high IOPS needs.
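The placement-group piece of option E is a single EC2 API call; the group name below is hypothetical:

```python
# Parameters for a cluster placement group, which packs instances close
# together on low-latency network hardware within one Availability Zone.
# In practice: boto3.client("ec2").create_placement_group(**pg_params),
# then reference GroupName in the Auto Scaling group's launch configuration.
pg_params = {
    "GroupName": "hpc-cluster-pg",  # hypothetical
    "Strategy": "cluster",          # vs. "spread", which separates instances
}
```

Note the contrast with option C: a spread placement group deliberately places instances on distinct hardware, which protects against correlated failures but gives up the low-latency networking HPC needs.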

A company has developed an API using Amazon API Gateway REST API and AWS Lambda. How can latency be reduced for users worldwide?

A. Deploy the REST API as an edge-optimized API endpoint. Enable caching. Enable content encoding to compress data in transit.

B. Deploy the REST API as a Regional API endpoint. Enable caching. Enable content encoding to compress data in transit.

C. Deploy the REST API as an edge-optimized API endpoint. Enable caching. Configure reserved concurrency for Lambda functions.

D. Deploy the REST API as a Regional API endpoint. Enable caching. Configure reserved concurrency for Lambda functions.
Suggested answer: A

Explanation:

Edge-optimized API endpoints route requests through CloudFront, reducing latency for global users.

Option A correctly implements edge-optimization, caching, and compression to minimize latency.

Options B and D do not use edge optimization, leading to higher latency for global users.
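All three pieces of option A map to API Gateway settings. A minimal sketch; the API name and cache size are hypothetical choices:

```python
# Settings for an edge-optimized REST API with compression and stage caching.
# In practice: apigw.create_rest_api(**api_params) creates the API, and the
# stage cache is enabled via create_deployment / update_stage
# (cacheClusterEnabled, cacheClusterSize).
api_params = {
    "name": "global-api",                           # hypothetical
    "endpointConfiguration": {"types": ["EDGE"]},   # routed through CloudFront
    "minimumCompressionSize": 1024,                 # compress responses >= 1 KB
}
stage_cache = {
    "cacheClusterEnabled": True,
    "cacheClusterSize": "0.5",  # smallest cache size, in GB (hypothetical choice)
}
```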

How can a law firm make files publicly readable while preventing modifications or deletions until a specific future date?

A. Upload files to an Amazon S3 bucket configured for static website hosting. Grant read-only IAM permissions to any AWS principals.

B. Create an S3 bucket. Enable S3 Versioning. Use S3 Object Lock with a retention period. Create a CloudFront distribution. Use a bucket policy to restrict access.

C. Create an S3 bucket. Enable S3 Versioning. Configure an event trigger with AWS Lambda to restore modified objects from a private S3 bucket.

D. Upload files to an S3 bucket for static website hosting. Use S3 Object Lock with a retention period. Grant read-only IAM permissions.
Suggested answer: B

Explanation:

Option B ensures the use of S3 Object Lock and Versioning to meet compliance for immutability. CloudFront enhances performance while a bucket policy ensures secure access.

Option A lacks immutability safeguards.

Option C introduces unnecessary complexity.

Option D misses out on additional security benefits offered by CloudFront.
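The immutability piece of option B looks roughly like the following. Object Lock must be enabled when the bucket is created (`create_bucket(..., ObjectLockEnabledForBucket=True)`, which also enables Versioning); bucket name, key, and date below are hypothetical:

```python
from datetime import datetime, timezone

# Per-object retention settings for S3 Object Lock.
# In practice: boto3.client("s3").put_object_retention(**retention_params),
# or set a bucket-level default with put_object_lock_configuration.
retention_params = {
    "Bucket": "law-firm-public-files",   # hypothetical
    "Key": "filings/case-001.pdf",       # hypothetical
    "Retention": {
        "Mode": "COMPLIANCE",  # cannot be shortened or removed by any user
        "RetainUntilDate": datetime(2026, 1, 1, tzinfo=timezone.utc),
    },
}
```

COMPLIANCE mode fits the "until a specific future date" requirement; GOVERNANCE mode would allow privileged users to bypass the lock.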

A media company hosts a web application on AWS for uploading videos. Only authenticated users should upload within a specified time frame after authentication.

Which solution will meet these requirements with the LEAST operational overhead?

A. Configure the application to generate IAM temporary security credentials for authenticated users.

B. Create an AWS Lambda function that generates pre-signed URLs when a user authenticates.

C. Develop a custom authentication service that integrates with Amazon Cognito to control and log direct S3 bucket access through the application.

D. Use AWS Security Token Service (AWS STS) to assume a pre-defined IAM role that grants authenticated users temporary permissions to upload videos directly to the S3 bucket.
Suggested answer: B

Explanation:

Option B: Pre-signed URLs provide temporary, authenticated access to S3, limiting uploads to the time frame specified. This solution is lightweight, efficient, and easy to implement.

Option A requires the management of IAM temporary credentials, adding complexity.

Option C involves unnecessary development effort.

Option D introduces more complexity with STS and roles than pre-signed URLs.
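As a sketch of option B, the Lambda function returns a time-limited pre-signed PUT URL after the user authenticates; bucket, key, and expiry below are hypothetical:

```python
# Inputs for generating a pre-signed upload URL.
# In practice, inside the Lambda handler:
#   url = boto3.client("s3").generate_presigned_url(
#       "put_object", Params=presign_params, ExpiresIn=EXPIRES_IN_SECONDS)
# The client PUTs the video to that URL; after expiry the URL is useless.
presign_params = {
    "Bucket": "media-uploads",               # hypothetical
    "Key": "videos/user-123/clip.mp4",       # hypothetical, typically per-user
    "ContentType": "video/mp4",
}
EXPIRES_IN_SECONDS = 900  # the "specified time frame" after authentication
```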

A company needs to ingest and analyze telemetry data from vehicles at scale for machine learning and reporting.

Which solution will meet these requirements?

A. Use Amazon Timestream for LiveAnalytics to store data points. Grant Amazon SageMaker permission to access the data. Use Amazon QuickSight to visualize the data.

B. Use Amazon DynamoDB to store data points. Use DynamoDB Connector to ingest data into Amazon EMR for processing. Use Amazon QuickSight to visualize the data.

C. Use Amazon Neptune to store data points. Use Amazon Kinesis Data Streams to ingest data into a Lambda function for processing. Use Amazon QuickSight to visualize the data.

D. Use Amazon Timestream for LiveAnalytics to store data points. Grant Amazon SageMaker permission to access the data. Use Amazon Athena to visualize the data.
Suggested answer: A

Explanation:

Amazon Timestream is purpose-built for storing and analyzing time-series data like telemetry.

Option A leverages Timestream, SageMaker for ML, and QuickSight for visualization, meeting all requirements with minimal complexity.

Option B involves more complex DynamoDB-EMR integration.

Option C uses Neptune, which is designed for graph databases, not telemetry data.

Option D incorrectly uses Athena for visualization instead of QuickSight.
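A single telemetry data point in Timestream's record shape might look like this; database, table, and dimension names are hypothetical:

```python
import time

# One vehicle telemetry record for Timestream.
# In practice: boto3.client("timestream-write").write_records(
#     DatabaseName="telemetry", TableName="vehicles", Records=[record])
record = {
    "Dimensions": [{"Name": "vehicle_id", "Value": "VIN-0001"}],  # hypothetical
    "MeasureName": "speed_kmh",
    "MeasureValue": "87.5",
    "MeasureValueType": "DOUBLE",
    "Time": str(int(time.time() * 1000)),  # millisecond epoch timestamp
}
```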

A company runs an application on EC2 instances that need access to RDS credentials stored in AWS Secrets Manager.

Which solution meets this requirement?

A. Create an IAM role, and attach the role to each EC2 instance profile. Use an identity-based policy to grant the role access to the secret.

B. Create an IAM user, and attach the user to each EC2 instance profile. Use a resource-based policy to grant the user access to the secret.

C. Create a resource-based policy for the secret. Use EC2 Instance Connect to access the secret.

D. Create an identity-based policy for the secret. Grant direct access to the EC2 instances.
Suggested answer: A

Explanation:

Option A uses an IAM role attached to the EC2 instance profile, enabling secure and automated access to Secrets Manager. This is the recommended approach.

Option B uses IAM users, which is less secure and harder to manage.

Option C is not practical: EC2 Instance Connect provides interactive SSH access, not programmatic access to secrets.

Option D is not valid because identity-based policies attach to IAM principals, not directly to EC2 instances.
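The identity-based policy attached to the instance role in option A might look like this; the secret ARN is hypothetical:

```python
import json

# Identity-based policy granting the EC2 instance role read access to one
# secret. Attached to the role in the instance profile; the application on
# the instance then calls secretsmanager.get_secret_value(SecretId=...)
# using the role's automatically rotated credentials.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "secretsmanager:GetSecretValue",
        "Resource": "arn:aws:secretsmanager:us-east-1:123456789012:secret:rds-creds-AbCdEf",  # hypothetical
    }],
}
print(json.dumps(policy, indent=2))
```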

A company needs a cloud-based solution for backup, recovery, and archiving while retaining encryption key material control.

Which combination of solutions will meet these requirements? (Select TWO)

A. Create an AWS Key Management Service (AWS KMS) key without key material. Import the company's key material into the KMS key.

B. Create an AWS KMS encryption key that contains key material generated by AWS KMS.

C. Store the data in Amazon S3 Standard-Infrequent Access (S3 Standard-IA). Use S3 Bucket Keys with AWS KMS keys.

D. Store the data in an Amazon S3 Glacier storage class. Use server-side encryption with customer-provided keys (SSE-C).

E. Store the data in AWS Snowball devices. Use server-side encryption with AWS KMS keys (SSE-KMS).
Suggested answer: A, D

Explanation:

Option A allows importing your own encryption keys into AWS KMS, ensuring control over key material.

Option D uses S3 Glacier with SSE-C, where the customer controls the encryption keys, meeting compliance needs.

Option B uses AWS-managed key material, violating the requirement for key material control.

Options C and E rely on AWS-generated KMS key material, so the company does not retain control of the key material.
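The KMS side of option A starts with a key whose material origin is external; the description below is hypothetical:

```python
# Parameters for a KMS key created WITHOUT key material, so the company's
# own material can be imported. In practice:
#   kms = boto3.client("kms")
#   key = kms.create_key(**key_params)
#   params = kms.get_parameters_for_import(
#       KeyId=..., WrappingAlgorithm=..., WrappingKeySpec="RSA_2048")
#   kms.import_key_material(KeyId=..., ImportToken=...,
#                           EncryptedKeyMaterial=...)
key_params = {
    "Origin": "EXTERNAL",            # no AWS-generated key material
    "KeySpec": "SYMMETRIC_DEFAULT",
    "Description": "Backup/archive key with imported key material",  # hypothetical
}
```

Because the material is imported, the company can delete it on its own schedule, which is the essence of retaining key-material control.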

A website uses EC2 instances with Auto Scaling and EFS. How can the company optimize costs?

A. Reconfigure the Auto Scaling group to set a desired number of instances. Turn off scheduled scaling.

B. Create a new launch template version that uses larger EC2 instances.

C. Reconfigure the Auto Scaling group to use a target tracking scaling policy.

D. Replace the EFS volume with instance store volumes.
Suggested answer: C

Explanation:

Option C ensures dynamic scaling based on demand using a target tracking scaling policy, optimizing costs.

Option A results in over-provisioning, leading to higher costs.

Option B increases costs by using larger instances.

Option D is not feasible as instance store volumes are ephemeral and unsuitable for shared storage like EFS.
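A target tracking policy for option C might look like this; the Auto Scaling group name and target value are hypothetical choices:

```python
# Target tracking scaling policy that keeps the group's average CPU near a
# target, adding instances under load and removing them when the site is idle.
# In practice: boto3.client("autoscaling").put_scaling_policy(**policy_params)
policy_params = {
    "AutoScalingGroupName": "web-asg",          # hypothetical
    "PolicyName": "cpu-target-tracking",
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingConfiguration": {
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization",
        },
        "TargetValue": 50.0,  # keep average CPU around 50% (hypothetical target)
    },
}
```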

Total 918 questions