Amazon SAA-C03 Practice Test - Questions Answers, Page 71

A company runs a critical data analysis job each week before the first day of the work week. The job requires at least 1 hour to complete the analysis. The job is stateful and cannot tolerate interruptions. The company needs a solution to run the job on AWS.

Which solution will meet these requirements?

A. Create a container for the job. Schedule the job to run as an AWS Fargate task on an Amazon Elastic Container Service (Amazon ECS) cluster by using Amazon EventBridge Scheduler.
B. Configure the job to run in an AWS Lambda function. Create a scheduled rule in Amazon EventBridge to invoke the Lambda function.
C. Configure an Auto Scaling group of Amazon EC2 Spot Instances that run Amazon Linux. Configure a crontab entry on the instances to run the analysis.
D. Configure an AWS DataSync task to run the job. Configure a cron expression to run the task on a schedule.
Suggested answer: A

Explanation:

Understanding the Requirement: The job is stateful, cannot tolerate interruptions, and needs to run reliably for at least one hour each week.

Analysis of Options:

AWS Fargate with Amazon ECS and EventBridge: This option provides a serverless compute engine for containers that can run stateful tasks reliably. Using EventBridge Scheduler, the job can be triggered automatically at the specified time without manual intervention.

AWS Lambda with EventBridge: Lambda functions are not suitable for long-running stateful jobs since they have a maximum execution time of 15 minutes.

EC2 Spot Instances: Spot Instances can be interrupted, making them unsuitable for a stateful job that cannot tolerate interruptions.

AWS DataSync: This service is primarily for moving large amounts of data and is not designed to run stateful analysis jobs.

Best Option for Reliable, Scheduled Execution:

The Fargate task on ECS with EventBridge Scheduler meets all requirements, providing the necessary reliability and scheduling capabilities without interruption risks.

Amazon ECS

AWS Fargate

Amazon EventBridge
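
For illustration only, the boto3 sketch below creates a weekly EventBridge Scheduler schedule that launches the containerized job as a Fargate task. The ARNs, subnet and security group IDs, and the cron expression are placeholder assumptions to replace with real values.

```python
import boto3

scheduler = boto3.client("scheduler")

# Placeholder values -- replace with your own cluster, task definition,
# IAM role, and VPC networking details.
CLUSTER_ARN = "arn:aws:ecs:us-east-1:111122223333:cluster/analysis-cluster"
TASK_DEF_ARN = "arn:aws:ecs:us-east-1:111122223333:task-definition/weekly-analysis:1"
SCHEDULER_ROLE_ARN = "arn:aws:iam::111122223333:role/scheduler-ecs-run-task-role"

scheduler.create_schedule(
    Name="weekly-analysis-job",
    # Run every Sunday at 22:00 UTC, before the first work day of the week.
    ScheduleExpression="cron(0 22 ? * SUN *)",
    FlexibleTimeWindow={"Mode": "OFF"},
    Target={
        "Arn": CLUSTER_ARN,
        "RoleArn": SCHEDULER_ROLE_ARN,
        "EcsParameters": {
            "TaskDefinitionArn": TASK_DEF_ARN,
            "TaskCount": 1,
            "LaunchType": "FARGATE",
            "NetworkConfiguration": {
                "awsvpcConfiguration": {
                    "Subnets": ["subnet-0123456789abcdef0"],
                    "SecurityGroups": ["sg-0123456789abcdef0"],
                    "AssignPublicIp": "ENABLED",
                }
            },
        },
    },
)
```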

A company runs workloads in the AWS Cloud. The company wants to centrally collect security data to assess security across the entire company and to improve workload protection.

Which solution will meet these requirements with the LEAST development effort?

A. Configure a data lake in AWS Lake Formation. Use AWS Glue crawlers to ingest the security data into the data lake.
B. Configure an AWS Lambda function to collect the security data in .csv format. Upload the data to an Amazon S3 bucket.
C. Configure a data lake in Amazon Security Lake to collect the security data. Upload the data to an Amazon S3 bucket.
D. Configure an AWS Database Migration Service (AWS DMS) replication instance to load the security data into an Amazon RDS cluster.
Suggested answer: C

Explanation:

Understanding the Requirement: The company wants to centrally collect security data with minimal development effort to assess and improve security across all workloads.

Analysis of Options:

Amazon Security Lake: This is a purpose-built service for centralizing security data from AWS services and third-party sources into a data lake, normalizing it into the Open Cybersecurity Schema Framework (OCSF). It provides native integrations and requires minimal development effort to set up.

AWS Lake Formation with AWS Glue: While this can be used to create a data lake, it requires more development effort to set up and configure Glue crawlers for ingestion.

AWS Lambda with S3: This approach involves custom development to collect and process security data before storing it in S3, which requires more effort.

AWS DMS to RDS: AWS Database Migration Service is typically used for database migrations and is not suited for collecting and analyzing security data.

Best Option for Minimal Development Effort:

Amazon Security Lake provides the least development effort for setting up a centralized repository for security data. It simplifies data ingestion and management, making it the most efficient solution for this use case.

Amazon Security Lake

AWS Lake Formation

AWS Glue

A company is storing petabytes of data in Amazon S3 Standard. The data is stored in multiple S3 buckets and is accessed with varying frequency. The company does not know access patterns for all the data. The company needs to implement a solution for each S3 bucket to optimize the cost of S3 usage.

Which solution will meet these requirements with the MOST operational efficiency?

A. Create an S3 Lifecycle configuration with a rule to transition the objects in the S3 bucket to S3 Intelligent-Tiering.
B. Use the S3 storage class analysis tool to determine the correct tier for each object in the S3 bucket. Move each object to the identified storage tier.
C. Create an S3 Lifecycle configuration with a rule to transition the objects in the S3 bucket to S3 Glacier Instant Retrieval.
D. Create an S3 Lifecycle configuration with a rule to transition the objects in the S3 bucket to S3 One Zone-Infrequent Access (S3 One Zone-IA).
Suggested answer: A

Explanation:

Understanding the Requirement: The company has petabytes of data in S3 Standard across multiple buckets with varying access frequencies. They do not know the access patterns and need a cost-optimized storage solution with minimal operational effort.

Analysis of Options:

S3 Intelligent-Tiering: This storage class automatically moves objects between access tiers (Frequent Access, Infrequent Access, and Archive Instant Access) based on changing access patterns. It incurs a small per-object monitoring and automation charge but eliminates the need to manually move data between storage classes.

S3 Storage Class Analysis Tool: While useful for determining access patterns, this tool requires manual intervention to move objects to the appropriate storage class, which increases operational overhead.

S3 Glacier Instant Retrieval: This storage class is designed for data that is rarely accessed but requires instant retrieval when needed. It may not be suitable for data with unknown and varying access patterns.

S3 One Zone-IA: This is a lower-cost option for infrequently accessed data that is stored in a single Availability Zone. It is less resilient than the multi-AZ storage classes and is only appropriate when access patterns are known to be infrequent.

Best Option for Operational Efficiency:

S3 Intelligent-Tiering provides the best balance of cost savings and operational efficiency. It adjusts to access patterns automatically, so the company pays appropriate storage costs for data whose access frequency is unknown, without any manual tiering work.

Amazon S3 Intelligent-Tiering

Managing your storage lifecycle
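
As a minimal sketch, the lifecycle rule below transitions every object in a bucket to S3 Intelligent-Tiering. The bucket name is a placeholder, and the same rule would be applied to each of the company's buckets.

```python
import boto3

s3 = boto3.client("s3")

# Placeholder bucket name -- apply the same rule to each bucket.
BUCKET = "example-analytics-bucket"

s3.put_bucket_lifecycle_configuration(
    Bucket=BUCKET,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "transition-all-objects-to-intelligent-tiering",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # empty prefix applies to every object
                "Transitions": [
                    {"Days": 0, "StorageClass": "INTELLIGENT_TIERING"}
                ],
            }
        ]
    },
)
```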

A company is planning to migrate data to an Amazon S3 bucket. The data must be encrypted at rest within the S3 bucket. The encryption key must be rotated automatically every year.

Which solution will meet these requirements with the LEAST operational overhead?

A. Migrate the data to the S3 bucket. Use server-side encryption with Amazon S3 managed keys (SSE-S3). Use the built-in key rotation behavior of SSE-S3 encryption keys.
B. Create an AWS Key Management Service (AWS KMS) customer managed key. Enable automatic key rotation. Set the S3 bucket's default encryption behavior to use the customer managed KMS key. Migrate the data to the S3 bucket.
C. Create an AWS Key Management Service (AWS KMS) customer managed key. Set the S3 bucket's default encryption behavior to use the customer managed KMS key. Migrate the data to the S3 bucket. Manually rotate the KMS key every year.
D. Use customer key material to encrypt the data. Migrate the data to the S3 bucket. Create an AWS Key Management Service (AWS KMS) key without key material. Import the customer key material into the KMS key. Enable automatic key rotation.
Suggested answer: B

Explanation:

Understanding the Requirement: The data must be encrypted at rest with automatic key rotation every year, with minimal operational overhead.

Analysis of Options:

SSE-S3: This option encrypts data with S3 managed keys, and AWS rotates those keys on its own schedule. The customer cannot configure or verify an annual rotation schedule, so it does not clearly satisfy the explicit yearly rotation requirement.

AWS KMS with Customer Managed Key (automatic rotation): This option offers full control over encryption keys, with AWS KMS handling automatic key rotation, minimizing operational overhead.

AWS KMS with Customer Managed Key (manual rotation): This requires manual intervention for key rotation, increasing operational overhead.

Customer Key Material: KMS keys that use imported key material do not support automatic key rotation, and managing imported key material adds significant operational overhead, so this option cannot meet the requirement.

Best Option for Minimal Operational Overhead:

AWS KMS with a customer managed key and automatic rotation provides the needed security and key rotation with minimal operational effort. Setting the S3 bucket's default encryption to use this key ensures all data is encrypted as required.

AWS Key Management Service (KMS)

Amazon S3 default encryption
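
A minimal boto3 sketch of option B, assuming a placeholder bucket name: create a customer managed KMS key, enable automatic yearly rotation, and set the key as the bucket's default encryption.

```python
import boto3

kms = boto3.client("kms")
s3 = boto3.client("s3")

# Create a customer managed key and turn on automatic (yearly) rotation.
key = kms.create_key(Description="Key for encrypting migrated S3 data")
key_id = key["KeyMetadata"]["KeyId"]
kms.enable_key_rotation(KeyId=key_id)

# Placeholder bucket name -- set SSE-KMS with the new key as the bucket default.
s3.put_bucket_encryption(
    Bucket="example-migration-bucket",
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": key_id,
                },
                "BucketKeyEnabled": True,  # reduces KMS request costs
            }
        ]
    },
)
```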

A company wants to build a map of its IT infrastructure to identify and enforce policies on resources that pose security risks. The company's security team must be able to query data in the IT infrastructure map and quickly identify security risks.

Which solution will meet these requirements with the LEAST operational overhead?

A. Use Amazon RDS to store the data. Use SQL to query the data to identify security risks.
B. Use Amazon Neptune to store the data. Use SPARQL to query the data to identify security risks.
C. Use Amazon Redshift to store the data. Use SQL to query the data to identify security risks.
D. Use Amazon DynamoDB to store the data. Use PartiQL to query the data to identify security risks.
Suggested answer: B

Explanation:

Understanding the Requirement: The company needs to map its IT infrastructure to identify and enforce security policies, with the ability to quickly query and identify security risks.

Analysis of Options:

Amazon RDS: While suitable for relational data, it is not optimized for handling complex relationships and querying those relationships, which is essential for an IT infrastructure map.

Amazon Neptune: A graph database service designed for handling highly connected data. It uses SPARQL to query graph data efficiently, making it ideal for mapping IT infrastructure and identifying relationships that pose security risks.

Amazon Redshift: A data warehouse solution optimized for complex queries on large datasets but not specifically for graph data.

Amazon DynamoDB: A NoSQL database that uses PartiQL for querying, but it is not optimized for complex relationships in graph data.

Best Option for Mapping and Querying IT Infrastructure:

Amazon Neptune provides the most suitable solution with the least operational overhead. It is purpose-built for graph data and enables efficient querying of complex relationships to identify security risks.

Amazon Neptune

Querying with SPARQL
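
The sketch below shows what a SPARQL query against Neptune's SPARQL endpoint could look like, assuming IAM database authentication is disabled. The endpoint and the RDF predicates (for example, infra:isPubliclyAccessible) are hypothetical and depend entirely on how the infrastructure map is modeled.

```python
import requests

# Hypothetical Neptune cluster endpoint -- replace with your own.
SPARQL_ENDPOINT = (
    "https://my-neptune-cluster.cluster-abc123.us-east-1"
    ".neptune.amazonaws.com:8182/sparql"
)

# Example query: find publicly exposed resources that can assume a role
# granting administrative access. The predicates are illustrative only.
query = """
PREFIX infra: <http://example.com/infrastructure#>
SELECT ?resource ?role
WHERE {
  ?resource infra:isPubliclyAccessible true .
  ?resource infra:assumesRole ?role .
  ?role     infra:hasAdminAccess true .
}
"""

response = requests.post(SPARQL_ENDPOINT, data={"query": query}, timeout=30)
response.raise_for_status()
for binding in response.json()["results"]["bindings"]:
    print(binding["resource"]["value"], binding["role"]["value"])
```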

A company wants to add its existing AWS usage cost to its operations cost dashboard. A solutions architect needs to recommend a solution that will give the company access to its usage cost programmatically. The company must be able to access cost data for the current year and forecast costs for the next 12 months.

Which solution will meet these requirements with the LEAST operational overhead?

A. Access usage cost-related data by using the AWS Cost Explorer API with pagination.
B. Access usage cost-related data by using downloadable AWS Cost Explorer report .csv files.
C. Configure AWS Budgets actions to send usage cost data to the company through FTP.
D. Create AWS Budgets reports for usage cost data. Send the data to the company through SMTP.
Suggested answer: A

Explanation:

Understanding the Requirement: The company needs programmatic access to its AWS usage costs for the current year and cost forecasts for the next 12 months, with minimal operational overhead.

Analysis of Options:

AWS Cost Explorer API: Provides programmatic access to detailed usage and cost data, including forecast costs. It supports pagination for handling large datasets, making it an efficient solution.

Downloadable AWS Cost Explorer report csv files: While useful, this method requires manual handling of files and does not provide real-time access.

AWS Budgets actions via FTP: AWS Budgets actions apply IAM or SCP policies or stop resources when thresholds are breached; they do not export detailed usage cost data, and FTP delivery is not a supported mechanism.

AWS Budgets reports via SMTP: AWS Budgets reports are emailed summaries. They do not provide programmatic access, forecasting detail, or the granularity needed to feed a dashboard.

Best Option for Minimal Operational Overhead:

AWS Cost Explorer API provides direct, programmatic access to cost data, including detailed usage and forecasting, with minimal setup and operational effort. It is the most efficient solution for integrating cost data into an operational cost dashboard.

AWS Cost Explorer API

AWS Cost and Usage Reports
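
A minimal boto3 sketch of the Cost Explorer API calls involved: get_cost_and_usage with pagination for the current year's costs, and get_cost_forecast for the next 12 months. The dates, granularity, and metrics shown are illustrative.

```python
from datetime import date

import boto3

ce = boto3.client("ce")  # Cost Explorer

start_of_year = date.today().replace(month=1, day=1).isoformat()
today = date.today().isoformat()

# Month-by-month usage cost for the current year, following pagination tokens.
results = []
token = None
while True:
    kwargs = {
        "TimePeriod": {"Start": start_of_year, "End": today},
        "Granularity": "MONTHLY",
        "Metrics": ["UnblendedCost"],
    }
    if token:
        kwargs["NextPageToken"] = token
    page = ce.get_cost_and_usage(**kwargs)
    results.extend(page["ResultsByTime"])
    token = page.get("NextPageToken")
    if not token:
        break

# Cost forecast for the next 12 months.
next_year = date.today().replace(year=date.today().year + 1).isoformat()
forecast = ce.get_cost_forecast(
    TimePeriod={"Start": today, "End": next_year},
    Metric="UNBLENDED_COST",
    Granularity="MONTHLY",
)
print(forecast["Total"]["Amount"], forecast["Total"]["Unit"])
```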

A company wants to create a mobile app that allows users to stream slow-motion video clips on their mobile devices. Currently, the app captures video clips and uploads the video clips in raw format into an Amazon S3 bucket. The app retrieves these video clips directly from the S3 bucket. However, the videos are large in their raw format.

Users are experiencing issues with buffering and playback on mobile devices. The company wants to implement solutions to maximize the performance and scalability of the app while minimizing operational overhead.

Which combination of solutions will meet these requirements? (Select TWO.)

A. Deploy Amazon CloudFront for content delivery and caching.
B. Use AWS DataSync to replicate the video files across AWS Regions in other S3 buckets.
C. Use Amazon Elastic Transcoder to convert the video files to more appropriate formats.
D. Deploy an Auto Scaling group of Amazon EC2 instances in Local Zones for content delivery and caching.
E. Deploy an Auto Scaling group of Amazon EC2 instances to convert the video files to more appropriate formats.
Suggested answer: A, C

Explanation:

Understanding the Requirement: The mobile app captures and uploads raw video clips to S3, but users experience buffering and playback issues due to the large size of these videos.

Analysis of Options:

Amazon CloudFront: A content delivery network (CDN) that can cache and deliver content globally with low latency. It helps reduce buffering by delivering content from edge locations closer to the users.

AWS DataSync: Primarily used for data transfer and replication across AWS Regions, which does not directly address the video size and buffering issue.

Amazon Elastic Transcoder: A media transcoding service that can convert raw video files into formats and resolutions more suitable for streaming, reducing the size and improving playback performance.

EC2 Instances in Local Zones: While this could provide content delivery and caching, it involves more operational overhead compared to using CloudFront.

EC2 Instances for Transcoding: Involves setting up and maintaining infrastructure, leading to higher operational overhead compared to using Elastic Transcoder.

Best Combination of Solutions:

Deploy Amazon CloudFront: This optimizes the performance by caching content at edge locations, reducing latency and buffering for users.

Use Amazon Elastic Transcoder: This reduces the file size and converts videos into formats better suited for streaming on mobile devices.

Amazon CloudFront

Amazon Elastic Transcoder
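
As an illustration, the boto3 call below submits an Elastic Transcoder job, assuming a pipeline already exists that reads the raw uploads and writes mobile-friendly renditions to a bucket served by CloudFront. The pipeline ID, object keys, and preset ID are placeholders.

```python
import boto3

transcoder = boto3.client("elastictranscoder")

# Placeholder pipeline and preset IDs -- the pipeline defines the input and
# output S3 buckets; the preset defines the mobile-friendly output format.
PIPELINE_ID = "1111111111111-abcde1"
MOBILE_PRESET_ID = "0000000000000-000000"  # e.g. a 720p MP4 system preset

transcoder.create_job(
    PipelineId=PIPELINE_ID,
    Input={"Key": "raw/clip-0001.mov"},
    Outputs=[
        {
            "Key": "streams/clip-0001-720p.mp4",
            "PresetId": MOBILE_PRESET_ID,
        }
    ],
)
```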

A company's application is running on Amazon EC2 instances within an Auto Scaling group behind an Elastic Load Balancing (ELB) load balancer. Based on the application's history, the company anticipates a spike in traffic during a holiday each year. A solutions architect must design a strategy to ensure that the Auto Scaling group proactively increases capacity to minimize any performance impact on application users.

Which solution will meet these requirements?

A. Create an Amazon CloudWatch alarm to scale up the EC2 instances when CPU utilization exceeds 90%.
B. Create a recurring scheduled action to scale up the Auto Scaling group before the expected period of peak demand.
C. Increase the minimum and maximum number of EC2 instances in the Auto Scaling group during the peak demand period.
D. Configure an Amazon Simple Notification Service (Amazon SNS) notification to send alerts when there are autoscaling:EC2_INSTANCE_LAUNCH events.
Suggested answer: B

Explanation:

Understanding the Requirement: The company anticipates a spike in traffic during a holiday and wants to ensure the Auto Scaling group can handle the increased load without impacting performance.

Analysis of Options:

CloudWatch Alarm: This reacts to spikes based on metrics like CPU utilization but does not proactively scale before the anticipated demand.

Recurring Scheduled Action: This allows the Auto Scaling group to scale up based on a known schedule, ensuring additional capacity is available before the expected spike.

Increase Min/Max Instances: This could result in unnecessary costs by maintaining higher capacity even when not needed.

SNS Notification: Alerts on scaling events but does not proactively manage scaling to prevent performance issues.

Best Solution for Proactive Scaling:

Create a recurring scheduled action: This approach ensures that the Auto Scaling group scales up before the peak demand, providing the necessary capacity proactively without manual intervention.

Scheduled Scaling for Auto Scaling
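
A minimal sketch of the recurring scheduled actions, assuming a placeholder Auto Scaling group name and example dates and capacities for the holiday peak.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Scale up before the anticipated holiday spike (placeholder values).
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-app-asg",
    ScheduledActionName="holiday-peak-scale-up",
    Recurrence="0 6 20 12 *",  # every year on December 20 at 06:00
    TimeZone="America/New_York",
    MinSize=10,
    MaxSize=40,
    DesiredCapacity=20,
)

# Scale back down after the holiday period.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-app-asg",
    ScheduledActionName="holiday-peak-scale-down",
    Recurrence="0 6 2 1 *",  # every year on January 2 at 06:00
    TimeZone="America/New_York",
    MinSize=2,
    MaxSize=10,
    DesiredCapacity=2,
)
```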

A company is hosting a high-traffic static website on Amazon S3 with an Amazon CloudFront distribution that has a default TTL of 0 seconds. The company wants to implement caching to improve performance for the website. However, the company also wants to ensure that stale content is not served for more than a few minutes after a deployment.

Which combination of caching methods should a solutions architect implement to meet these requirements? (Select TWO.)

A. Set the CloudFront default TTL to 2 minutes.
B. Set a default TTL of 2 minutes on the S3 bucket.
C. Add a Cache-Control private directive to the objects in Amazon S3.
D. Create an AWS Lambda@Edge function to add an Expires header to HTTP responses. Configure the function to run on viewer response.
E. Add a Cache-Control max-age directive of 24 hours to the objects in Amazon S3. On deployment, create a CloudFront invalidation to clear any changed files from edge caches.
Suggested answer: A, E

Explanation:

Understanding the Requirement: The company wants to improve caching to enhance website performance while ensuring that stale content is not served for more than a few minutes after a deployment.

Analysis of Options:

Set CloudFront TTL: Setting a short TTL (e.g., 2 minutes) ensures that cached content is refreshed frequently, reducing the risk of serving stale content.

S3 Bucket TTL: Amazon S3 has no bucket-level TTL setting. Cache duration is controlled by CloudFront TTLs and by Cache-Control headers on the objects, so this option is not viable.

Cache-Control private: This directive allows only private caches (for example, browsers) to store the response and prevents shared caches such as CloudFront from caching it, so it would not improve performance.

Lambda@Edge: While this can add headers dynamically, it adds complexity and operational overhead.

Cache-Control max-age and CloudFront Invalidation: Setting a longer max-age for objects ensures they are cached longer, reducing load on the origin. Invalidation ensures that updated content is refreshed immediately after deployment.

Best Combination of Caching Methods:

Set the CloudFront default TTL to 2 minutes: This balances caching and freshness of content.

Add a Cache-Control max-age directive of 24 hours and use CloudFront invalidation: This ensures efficient caching while providing a mechanism to clear outdated content immediately after a deployment.

Amazon CloudFront Caching

Invalidating Files in CloudFront
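
A sketch of the two pieces working together, with placeholder bucket, key, and distribution ID: objects are uploaded with a 24-hour Cache-Control max-age, and each deployment issues a CloudFront invalidation for the changed paths.

```python
import time

import boto3

s3 = boto3.client("s3")
cloudfront = boto3.client("cloudfront")

# On deployment, upload changed objects with a 24-hour max-age so CloudFront
# and browsers cache them aggressively (bucket and key are placeholders).
with open("dist/index.html", "rb") as body:
    s3.put_object(
        Bucket="example-static-site",
        Key="index.html",
        Body=body,
        ContentType="text/html",
        CacheControl="max-age=86400",
    )

# Then invalidate the changed paths so edge caches stop serving the old
# versions within minutes (distribution ID is a placeholder).
cloudfront.create_invalidation(
    DistributionId="E1234567890ABC",
    InvalidationBatch={
        "Paths": {"Quantity": 1, "Items": ["/index.html"]},
        "CallerReference": str(time.time()),  # must be unique per request
    },
)
```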

A company that uses AWS Organizations runs 150 applications across 30 different AWS accounts. The company used AWS Cost and Usage Reports to create a new report in the management account. The report is delivered to an Amazon S3 bucket that is replicated to a bucket in the data collection account.

The company's senior leadership wants to view a custom dashboard that provides NAT gateway costs each day starting at the beginning of the current month.

Which solution will meet these requirements?

A. Share an Amazon QuickSight dashboard that includes the requested table visual. Configure QuickSight to use AWS DataSync to query the new report.
B. Share an Amazon QuickSight dashboard that includes the requested table visual. Configure QuickSight to use Amazon Athena to query the new report.
C. Share an Amazon CloudWatch dashboard that includes the requested table visual. Configure CloudWatch to use AWS DataSync to query the new report.
D. Share an Amazon CloudWatch dashboard that includes the requested table visual. Configure CloudWatch to use Amazon Athena to query the new report.
Suggested answer: B

Explanation:

Understanding the Requirement: Senior leadership wants a custom dashboard displaying NAT gateway costs daily, starting from the beginning of the current month.

Analysis of Options:

QuickSight with DataSync: While QuickSight is suitable for dashboards, DataSync is a data transfer service; it cannot query report data or act as a QuickSight data source.

QuickSight with Athena: QuickSight can visualize data queried by Athena, which is designed to analyze data directly from S3.

CloudWatch with DataSync: CloudWatch is primarily for monitoring metrics, not for creating detailed cost analysis dashboards.

CloudWatch with Athena: CloudWatch dashboards cannot use Athena query results as a native data source for this kind of cost table, so this combination also does not fit the requirement.

Best Solution for Visualization and Querying:

Amazon QuickSight with Athena: This combination allows for powerful data visualization and querying capabilities. QuickSight can create dynamic dashboards, while Athena efficiently queries the cost and usage report data stored in S3.

Amazon QuickSight

Amazon Athena
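
As a sketch, the Athena query below sums daily NAT gateway cost since the start of the current month from the Cost and Usage Report table. The database, table, and column names follow common CUR naming but are assumptions that must match the actual report definition; QuickSight would then use this query (or the underlying table) as its dataset.

```python
import boto3

athena = boto3.client("athena")

# Daily NAT gateway cost since the start of the current month. The database,
# table, column names, and 'NatGateway' usage-type match are assumptions.
QUERY = """
SELECT
    date_trunc('day', line_item_usage_start_date) AS usage_day,
    sum(line_item_unblended_cost)                 AS nat_gateway_cost
FROM cur_database.cur_table
WHERE line_item_usage_type LIKE '%NatGateway%'
  AND line_item_usage_start_date >= date_trunc('month', current_date)
GROUP BY 1
ORDER BY 1
"""

athena.start_query_execution(
    QueryString=QUERY,
    QueryExecutionContext={"Database": "cur_database"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
```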
