
Amazon SAA-C03 Practice Test - Questions Answers, Page 81

A company currently stores 5 TB of data in on-premises block storage systems. The company's current storage solution provides limited space for additional data. The company runs applications on premises that must be able to retrieve frequently accessed data with low latency. The company requires a cloud-based storage solution.

Which solution will meet these requirements with the MOST operational efficiency?

A. Use Amazon S3 File Gateway. Integrate S3 File Gateway with the on-premises applications to store and directly retrieve files by using the SMB file system.

B. Use an AWS Storage Gateway Volume Gateway with cached volumes as iSCSI targets.

C. Use an AWS Storage Gateway Volume Gateway with stored volumes as iSCSI targets.

D. Use an AWS Storage Gateway Tape Gateway. Integrate Tape Gateway with the on-premises applications to store virtual tapes in Amazon S3.
Suggested answer: B

Explanation:

The company needs a cloud-based storage solution for frequently accessed data with low latency, while retaining their current on-premises infrastructure for some data storage. AWS Storage Gateway's Volume Gateway with cached volumes is the most appropriate solution for this scenario.

Detailed Explanation:

AWS Storage Gateway - Volume Gateway (Cached Volumes):

Volume Gateway with cached volumes stores the full data set in the AWS Cloud while keeping the most recently and frequently accessed data cached locally on premises. This ensures low-latency access to active data while providing scalability for the rest of the data in the cloud.

The cached volume option stores the primary data in Amazon S3 but caches frequently accessed data locally, ensuring fast access. This configuration is well-suited for applications that require fast access to frequently used data but can tolerate cloud-based storage for the rest.

Since the company is facing limited on-premises storage, cached volumes provide an ideal solution, as they reduce the need for additional on-premises storage infrastructure.

Why Not the Other Options?:

Option A (S3 File Gateway): S3 File Gateway provides a file-based interface (SMB/NFS) for storing data directly in S3. While it is great for file storage, the company's need for block-level storage with iSCSI targets makes Volume Gateway a better fit.

Option C (Volume Gateway - Stored Volumes): Stored volumes keep all the data on-premises and asynchronously back up to AWS. This would not address the company's storage limitations since they would still need substantial on-premises storage.

Option D (Tape Gateway): Tape Gateway is designed for archiving and backup, not for frequently accessed low-latency data.

AWS Reference:

AWS Storage Gateway - Volume Gateway
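
As a rough illustration of how the cached volume would be provisioned, the boto3 sketch below creates a cached iSCSI volume on an already-activated Volume Gateway. The gateway ARN, target name, network interface address, and volume size are placeholders, not values from the scenario.

```python
import boto3

# Sketch: after deploying the Volume Gateway appliance on premises and
# allocating a local cache disk, a cached volume can be created so that
# applications mount it as an iSCSI target while the primary data lives in S3.
sgw = boto3.client("storagegateway", region_name="us-east-1")

response = sgw.create_cached_iscsi_volume(
    GatewayARN="arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-EXAMPLE",  # placeholder
    VolumeSizeInBytes=5 * 1024**4,          # 5 TiB volume to hold the existing data set
    TargetName="app-data",                  # becomes part of the iSCSI target IQN
    NetworkInterfaceId="10.0.0.25",         # gateway appliance interface the initiators connect to
    ClientToken="app-data-volume-001",      # idempotency token
)

print(response["VolumeARN"], response["TargetARN"])
```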

A company has a three-tier web application that processes orders from customers. The web tier consists of Amazon EC2 instances behind an Application Load Balancer. The processing tier consists of EC2 instances. The company decoupled the web tier and processing tier by using Amazon Simple Queue Service (Amazon SQS). The storage layer uses Amazon DynamoDB.

At peak times some users report order processing delays and halts. The company has noticed that during these delays, the EC2 instances are running at 100% CPU usage, and the SQS queue fills up. The peak times are variable and unpredictable.

The company needs to improve the performance of the application.

Which solution will meet these requirements?

A. Use scheduled scaling for Amazon EC2 Auto Scaling to scale out the processing tier instances for the duration of peak usage times. Use the CPU Utilization metric to determine when to scale.

B. Use Amazon ElastiCache for Redis in front of the DynamoDB backend tier. Use target utilization as a metric to determine when to scale.

C. Add an Amazon CloudFront distribution to cache the responses for the web tier. Use HTTP latency as a metric to determine when to scale.

D. Use an Amazon EC2 Auto Scaling target tracking policy to scale out the processing tier instances. Use the ApproximateNumberOfMessages attribute to determine when to scale.
Suggested answer: D

Explanation:

The issue in this case is related to the processing tier, where EC2 instances are overwhelmed at peak times, causing delays. Option D, using an Amazon EC2 Auto Scaling target tracking policy based on the ApproximateNumberOfMessages in the SQS queue, is the best solution.

Detailed Explanation:

Auto Scaling with Target Tracking:

Target tracking policies dynamically scale out or in based on a specific metric. For this use case, you can monitor the ApproximateNumberOfMessages in the SQS queue. When the number of messages (orders) in the queue increases, the Auto Scaling group will scale out more EC2 instances to handle the additional load, ensuring that the queue doesn't build up and cause delays.

This solution is ideal for handling variable and unpredictable peak times, as Auto Scaling can automatically adjust based on real-time load rather than scheduled times.

Why Not the Other Options?:

Option A (Scheduled Scaling): Scheduled scaling works well for predictable peak times, but this company experiences unpredictable peak usage, making scheduled scaling less effective.

Option B (ElastiCache for Redis): Adding a caching layer would help if DynamoDB were the bottleneck, but in this case, the issue is CPU overload on EC2 instances in the processing tier.

Option C (CloudFront): CloudFront would help cache static content from the web tier, but it wouldn't resolve the issue of the processing tier's overloaded EC2 instances.

AWS Reference:

Amazon EC2 Auto Scaling Target Tracking

Amazon SQS ApproximateNumberOfMessages
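
A minimal sketch of how such a policy could be attached with boto3 is shown below; the Auto Scaling group name, queue name, and target value of 100 messages are assumptions for illustration only.

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Sketch: target tracking policy driven by the SQS queue depth.
# "order-processing-asg" and "orders-queue" are placeholder names.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="order-processing-asg",
    PolicyName="scale-on-queue-depth",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "CustomizedMetricSpecification": {
            "MetricName": "ApproximateNumberOfMessagesVisible",
            "Namespace": "AWS/SQS",
            "Dimensions": [{"Name": "QueueName", "Value": "orders-queue"}],
            "Statistic": "Average",
        },
        # Scale out whenever the queue holds more than ~100 unprocessed messages.
        "TargetValue": 100.0,
    },
)
```

In practice, AWS also documents a backlog-per-instance custom metric for SQS-driven scaling so that the target adjusts with fleet size; the queue-level metric above is the simplest form of the idea.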

An online gaming company hosts its platform on Amazon EC2 instances behind Network Load Balancers (NLBs) across multiple AWS Regions. The NLBs can route requests to targets over the internet. The company wants to improve the customer playing experience by reducing end-to-end load time for its global customer base.

Which solution will meet these requirements?

A. Create Application Load Balancers (ALBs) in each Region to replace the existing NLBs. Register the existing EC2 instances as targets for the ALBs in each Region.

B. Configure Amazon Route 53 to route equally weighted traffic to the NLBs in each Region.

C. Create additional NLBs and EC2 instances in other Regions where the company has large customer bases.

D. Create a standard accelerator in AWS Global Accelerator. Configure the existing NLBs as target endpoints.
Suggested answer: D

Explanation:

The company wants to reduce end-to-end load time for its global customer base. AWS Global Accelerator provides a network optimization service that reduces latency by routing traffic to the nearest AWS edge locations, improving the user experience for globally distributed customers.

Detailed Explanation:

AWS Global Accelerator:

Global Accelerator improves the performance of your applications by routing traffic through AWS's global network infrastructure. This reduces the number of hops and latency compared to using the public internet.

By creating a standard accelerator and configuring the existing NLBs as target endpoints, Global Accelerator ensures that traffic from users around the world is routed to the nearest AWS edge location and then through optimized paths to the NLBs in each region. This significantly improves end-to-end load time for global customers.

Why Not the Other Options?:

Option A (ALBs instead of NLBs): ALBs are designed for HTTP/HTTPS traffic and provide layer 7 features, but they wouldn't solve the latency issue for a global customer base. The key problem here is latency, and Global Accelerator is specifically designed to address that.

Option B (Route 53 weighted routing): Route 53 can route traffic to different regions, but it doesn't optimize network performance. It simply balances traffic between endpoints without improving latency.

Option C (Additional NLBs in more regions): This could potentially improve latency but would require setting up infrastructure in multiple regions. Global Accelerator is a simpler and more efficient solution that leverages AWS's existing global network.

AWS Reference:

AWS Global Accelerator

By using AWS Global Accelerator with the existing NLBs, the company can optimize global traffic routing and improve the customer experience by minimizing latency. Therefore, Option D is the correct answer.
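
A hedged boto3 sketch of this setup might look like the following: create the standard accelerator, add a TCP listener, and register one endpoint group per Region that points at that Region's existing NLB. The NLB ARNs, ports, and names are placeholders. Note that the Global Accelerator control-plane API is served from the us-west-2 Region regardless of where the endpoints live.

```python
import boto3

# The Global Accelerator control-plane API is served from us-west-2,
# regardless of where the endpoints live.
ga = boto3.client("globalaccelerator", region_name="us-west-2")

accelerator = ga.create_accelerator(
    Name="gaming-platform",
    IpAddressType="IPV4",
    Enabled=True,
    IdempotencyToken="gaming-platform-001",
)["Accelerator"]

listener = ga.create_listener(
    AcceleratorArn=accelerator["AcceleratorArn"],
    PortRanges=[{"FromPort": 443, "ToPort": 443}],
    Protocol="TCP",
    ClientAffinity="SOURCE_IP",
    IdempotencyToken="gaming-platform-listener-001",
)["Listener"]

# One endpoint group per Region, each pointing at that Region's existing NLB.
# The NLB ARNs below are placeholders.
for region, nlb_arn in {
    "us-east-1": "arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/net/game-nlb/abc",
    "eu-west-1": "arn:aws:elasticloadbalancing:eu-west-1:123456789012:loadbalancer/net/game-nlb/def",
}.items():
    ga.create_endpoint_group(
        ListenerArn=listener["ListenerArn"],
        EndpointGroupRegion=region,
        EndpointConfigurations=[{"EndpointId": nlb_arn, "Weight": 128}],
        IdempotencyToken=f"gaming-platform-{region}-001",
    )
```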

A company has 15 employees. The company stores employee start dates in an Amazon DynamoDB table. The company wants to send an email message to each employee on the day of the employee's work anniversary.

Which solution will meet these requirements with the MOST operational efficiency?

A. Create a script that scans the DynamoDB table and uses Amazon Simple Notification Service (Amazon SNS) to send email messages to employees when necessary. Use a cron job to run this script every day on an Amazon EC2 instance.

B. Create a script that scans the DynamoDB table and uses Amazon Simple Queue Service (Amazon SQS) to send email messages to employees when necessary. Use a cron job to run this script every day on an Amazon EC2 instance.

C. Create an AWS Lambda function that scans the DynamoDB table and uses Amazon Simple Notification Service (Amazon SNS) to send email messages to employees when necessary. Schedule this Lambda function to run every day.

D. Create an AWS Lambda function that scans the DynamoDB table and uses Amazon Simple Queue Service (Amazon SQS) to send email messages to employees when necessary. Schedule this Lambda function to run every day.
Suggested answer: C

Explanation:

AWS Lambda for Operational Efficiency:

AWS Lambda is a serverless compute service that allows you to run code without provisioning or managing servers. It automatically scales based on the number of invocations and eliminates the need to maintain and monitor EC2 instances, making it far more operationally efficient compared to running a cron job on EC2.

By using Lambda, you pay only for the compute time that your function uses. This is especially beneficial when dealing with lightweight tasks, such as scanning a DynamoDB table and sending email messages once a day.

Amazon DynamoDB:

DynamoDB is a highly scalable, fully managed NoSQL database. The table stores employee start dates, and scanning the table to find the employees who have a work anniversary on the current day is a lightweight operation. Lambda can easily perform this operation using the DynamoDB Scan API or queries, depending on how the data is structured.

Amazon SNS for Email Notifications:

Amazon Simple Notification Service (SNS) is a fully managed messaging service that supports sending notifications to a variety of endpoints, including email. SNS is well-suited for sending out email messages to employees, as it can handle the fan-out messaging pattern (sending the same message to multiple recipients).

In this scenario, once Lambda identifies employees who have their work anniversaries, it can use SNS to send the email notifications efficiently. SNS integrates seamlessly with Lambda, and sending emails via SNS is a common pattern for this type of use case.

Event Scheduling:

To automate this daily task, you can schedule the Lambda function using Amazon EventBridge (formerly CloudWatch Events). EventBridge can trigger the Lambda function on a daily schedule (cron-like scheduling). This avoids the complexity and operational overhead of manually setting up cron jobs on EC2 instances.

Why Not EC2 or SQS?:

Option A & B suggest running a cron job on an Amazon EC2 instance. This approach requires you to manage, scale, and patch the EC2 instance, which increases operational overhead. Lambda is a better choice because it automatically scales and doesn't require server management.

Amazon Simple Queue Service (SQS) is ideal for decoupling distributed systems but isn't necessary in this context because the goal is to send notifications to employees on their work anniversaries. SQS adds unnecessary complexity for this straightforward use case, where SNS is the simpler and more efficient solution.

AWS Reference:

AWS Lambda

Amazon SNS

Amazon DynamoDB

Amazon EventBridge

Summary:

Using AWS Lambda combined with Amazon SNS to send notifications, and scheduling the function with Amazon EventBridge to run daily, is the most operationally efficient solution. It leverages AWS serverless technologies, which reduce the need for infrastructure management and provide automatic scaling. Therefore, Option C is the correct and optimal choice.
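
To make the pattern concrete, here is a minimal Lambda handler sketch, assuming a table named Employees with email and ISO-formatted start_date attributes and an SNS topic ARN passed in as an environment variable; these names are illustrative, not part of the original scenario.

```python
import os
from datetime import datetime, timezone

import boto3

dynamodb = boto3.resource("dynamodb")
sns = boto3.client("sns")

# Assumed schema: table "Employees" with "email" and ISO-formatted "start_date"
# attributes, and an SNS topic ARN supplied via an environment variable.
TABLE_NAME = os.environ.get("TABLE_NAME", "Employees")
TOPIC_ARN = os.environ["TOPIC_ARN"]


def handler(event, context):
    today = datetime.now(timezone.utc)
    table = dynamodb.Table(TABLE_NAME)

    # A full scan is acceptable here: the table holds only 15 items.
    employees = table.scan()["Items"]

    for employee in employees:
        start_date = datetime.fromisoformat(employee["start_date"])
        if (start_date.month, start_date.day) == (today.month, today.day):
            years = today.year - start_date.year
            sns.publish(
                TopicArn=TOPIC_ARN,
                Subject="Happy work anniversary!",
                Message=f"Congratulations {employee['email']} on {years} years with the company.",
            )
```

Note that a single SNS topic fans a message out to every subscribed address; delivering a personalized message to only one employee would need SNS message filtering, one topic per employee, or a service such as Amazon SES, which the sketch leaves out.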


A company has a web application that has thousands of users. The application uses 8-10 user-uploaded images to generate AI images. Users can download the generated AI images once every 6 hours. The company also has a premium user option that gives users the ability to download the generated AI images at any time.

The company uses the user-uploaded images to run AI model training twice a year. The company needs a storage solution to store the images.

Which storage solution meets these requirements MOST cost-effectively?

A. Move uploaded images to Amazon S3 Glacier Deep Archive. Move premium user-generated AI images to S3 Standard. Move non-premium user-generated AI images to S3 Standard-Infrequent Access (S3 Standard-IA).

B. Move uploaded images to Amazon S3 Glacier Deep Archive. Move all generated AI images to S3 Glacier Flexible Retrieval.

C. Move uploaded images to Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA). Move premium user-generated AI images to S3 Standard. Move non-premium user-generated AI images to S3 Standard-Infrequent Access (S3 Standard-IA).

D. Move uploaded images to Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA). Move all generated AI images to S3 Glacier Flexible Retrieval.
Suggested answer: C

Explanation:

S3 One Zone-IA:

Suitable for infrequently accessed data that doesn't require multiple Availability Zone resilience.

Cost-effective for storing user-uploaded images that are only used for AI model training twice a year.

S3 Standard:

Ideal for frequently accessed data with high durability and availability.

Store premium user-generated AI images here to ensure they are readily available for download at any time.

S3 Standard-IA:

Cost-effective storage for data that is accessed less frequently but still requires rapid retrieval.

Store non-premium user-generated AI images here, as these images are only downloaded once every 6 hours, making it a good balance between cost and accessibility.

Cost-Effectiveness: This solution optimizes storage costs by categorizing data based on access patterns and durability requirements, ensuring that each type of data is stored in the most cost-effective manner.

AWS Reference:

Amazon S3 Storage Classes

S3 One Zone-IA
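
A short boto3 sketch of writing each category of object directly to its storage class follows; the bucket name, key prefixes, and file names are assumptions for illustration.

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "example-image-bucket"  # placeholder

# Source images used only for the twice-yearly model training runs.
s3.put_object(
    Bucket=BUCKET,
    Key="uploads/user-123/photo-1.jpg",
    Body=open("photo-1.jpg", "rb"),
    StorageClass="ONEZONE_IA",
)

# Generated images for premium users, downloadable at any time.
s3.put_object(
    Bucket=BUCKET,
    Key="generated/premium/user-456/image-1.png",
    Body=open("image-1.png", "rb"),
    StorageClass="STANDARD",
)

# Generated images for non-premium users, downloadable once every 6 hours.
s3.put_object(
    Bucket=BUCKET,
    Key="generated/standard/user-789/image-1.png",
    Body=open("image-1.png", "rb"),
    StorageClass="STANDARD_IA",
)
```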

A company is migrating its on-premises Oracle database to an Amazon RDS for Oracle database. The company needs to retain data for 90 days to meet regulatory requirements. The company must also be able to restore the database to a specific point in time for up to 14 days.

Which solution will meet these requirements with the LEAST operational overhead?

A. Create Amazon RDS automated backups. Set the retention period to 90 days.

B. Create an Amazon RDS manual snapshot every day. Delete manual snapshots that are older than 90 days.

C. Use the Amazon Aurora Clone feature for Oracle to create a point-in-time restore. Delete clones that are older than 90 days.

D. Create a backup plan that has a retention period of 90 days by using AWS Backup for Amazon RDS.
Suggested answer: D

Explanation:

AWS Backup is the most appropriate solution for managing backups with minimal operational overhead while meeting the regulatory requirement to retain data for 90 days and enabling point-in-time restore for up to 14 days.

AWS Backup: AWS Backup provides a centralized backup management solution that supports automated backup scheduling, retention management, and compliance reporting across AWS services, including Amazon RDS. By creating a backup plan, you can define a retention period (in this case, 90 days) and automate the backup process.

Point-in-Time Restore (PITR): Amazon RDS supports point-in-time restore for up to 35 days with automated backups. By using AWS Backup in conjunction with RDS, you ensure that your backup strategy meets the requirement for restoring data to a specific point in time within the last 14 days.

Why Not Other Options?:

Option A (RDS Automated Backups): While RDS automated backups support PITR, they do not directly support retention beyond 35 days without manual intervention.

Option B (Manual Snapshots): Manually creating and managing snapshots is operationally intensive and less automated compared to AWS Backup.

Option C (Aurora Clones): Aurora Clone is a feature specific to Amazon Aurora and is not applicable to Amazon RDS for Oracle.

AWS Reference:

AWS Backup - Overview of AWS Backup and its capabilities.

Amazon RDS Automated Backups - Information on how RDS automated backups work and their limitations.
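
The sketch below shows roughly how such a plan could be defined with boto3, assuming the Default backup vault, a placeholder IAM role, and a placeholder database ARN. It uses one snapshot rule for the 90-day regulatory retention and a separate continuous-backup rule for the 14-day point-in-time restore window, since continuous backups are limited to at most 35 days of retention.

```python
import boto3

backup = boto3.client("backup", region_name="us-east-1")

# Sketch: one snapshot rule for the 90-day regulatory retention and one
# continuous-backup rule for 14-day point-in-time restore. Vault name,
# role ARN, and database ARN are placeholders.
plan = backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "oracle-rds-retention",
        "Rules": [
            {
                "RuleName": "daily-snapshots-90-days",
                "TargetBackupVaultName": "Default",
                "ScheduleExpression": "cron(0 3 * * ? *)",   # every day at 03:00 UTC
                "Lifecycle": {"DeleteAfterDays": 90},
            },
            {
                "RuleName": "continuous-pitr-14-days",
                "TargetBackupVaultName": "Default",
                "ScheduleExpression": "cron(0 3 * * ? *)",
                "EnableContinuousBackup": True,
                "Lifecycle": {"DeleteAfterDays": 14},
            },
        ],
    }
)

backup.create_backup_selection(
    BackupPlanId=plan["BackupPlanId"],
    BackupSelection={
        "SelectionName": "oracle-db",
        "IamRoleArn": "arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole",
        "Resources": ["arn:aws:rds:us-east-1:123456789012:db:oracle-prod"],
    },
)
```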

A company is building an application in the AWS Cloud. The application is hosted on Amazon EC2 instances behind an Application Load Balancer (ALB). The company uses Amazon Route 53 for the DNS.

The company needs a managed solution with proactive engagement to detect against DDoS attacks.

Which solution will meet these requirements?

A. Enable AWS Config. Configure an AWS Config managed rule that detects DDoS attacks.

B. Enable AWS WAF on the ALB. Create an AWS WAF web ACL with rules to detect and prevent DDoS attacks. Associate the web ACL with the ALB.

C. Store the ALB access logs in an Amazon S3 bucket. Configure Amazon GuardDuty to detect and take automated preventative actions for DDoS attacks.

D. Subscribe to AWS Shield Advanced. Configure hosted zones in Route 53. Add ALB resources as protected resources.
Suggested answer: D

Explanation:

AWS Shield Advanced is designed to provide enhanced protection against DDoS attacks with proactive engagement and response capabilities, making it the best solution for this scenario.

AWS Shield Advanced: This service provides advanced protection against DDoS attacks. It includes detailed attack diagnostics, 24/7 access to the AWS DDoS Response Team (DRT), and financial protection against DDoS-related scaling charges. Shield Advanced also integrates with Route 53 and the Application Load Balancer (ALB) to ensure comprehensive protection for your web applications.

Route 53 and ALB Protection: By adding your Route 53 hosted zones and ALB resources to AWS Shield Advanced, you ensure that these components are covered under the enhanced protection plan. Shield Advanced actively monitors traffic and provides real-time attack mitigation, minimizing the impact of DDoS attacks on your application.

Why Not Other Options?:

Option A (AWS Config): AWS Config is a configuration management service and does not provide DDoS protection or detection capabilities.

Option B (AWS WAF): While AWS WAF can help mitigate some types of attacks, it does not provide the comprehensive DDoS protection and proactive engagement offered by Shield Advanced.

Option C (GuardDuty): GuardDuty is a threat detection service that identifies potentially malicious activity within your AWS environment, but it is not specifically designed to provide DDoS protection.

AWS Reference:

AWS Shield Advanced - Overview of AWS Shield Advanced and its DDoS protection capabilities.

Integrating AWS Shield Advanced with Route 53 and ALB - Detailed guidance on how to protect Route 53 and ALB with AWS Shield Advanced.
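
As a rough boto3 sketch, subscribing to Shield Advanced, protecting the ALB and the hosted zone, and registering emergency contacts for proactive engagement could look like the following; every ARN, email address, and phone number is a placeholder. Proactive engagement additionally relies on associating Route 53 health checks with the protected resources, which the sketch omits.

```python
import boto3

# Shield is a global service; its API is served from us-east-1.
shield = boto3.client("shield", region_name="us-east-1")

# Subscribe the account to Shield Advanced.
shield.create_subscription()

# Register the ALB and the Route 53 hosted zone as protected resources.
shield.create_protection(
    Name="web-alb",
    ResourceArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/web-alb/abc123",
)

shield.create_protection(
    Name="public-hosted-zone",
    ResourceArn="arn:aws:route53:::hostedzone/Z0EXAMPLE",
)

# Proactive engagement lets the Shield Response Team contact the listed people
# directly during a detected DDoS event.
shield.associate_proactive_engagement_details(
    EmergencyContactList=[
        {
            "EmailAddress": "secops@example.com",
            "PhoneNumber": "+15555550100",
            "ContactNotes": "24/7 on-call",
        }
    ]
)
```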

A company has stored millions of objects across multiple prefixes in an Amazon S3 bucket by using the Amazon S3 Glacier Deep Archive storage class. The company needs to delete all data older than 3 years except for a subset of data that must be retained. The company has identified the data that must be retained and wants to implement a serverless solution.

Which solution will meet these requirements?

A. Use S3 Inventory to list all objects. Use the AWS CLI to create a script that runs on an Amazon EC2 instance that deletes objects from the inventory list.

B. Use AWS Batch to delete objects older than 3 years except for the data that must be retained.

C. Provision an AWS Glue crawler to query objects older than 3 years. Save the manifest file of old objects. Create a script to delete objects in the manifest.

D. Enable S3 Inventory. Create an AWS Lambda function to filter and delete objects. Invoke the Lambda function with S3 Batch Operations to delete objects by using the inventory reports.
Suggested answer: D

Explanation:

To meet the requirement of deleting objects older than 3 years while retaining certain data, this solution leverages serverless technologies to minimize operational overhead.

S3 Inventory: S3 Inventory provides a flat file that lists all the objects in an S3 bucket and their metadata, which can be configured to include data such as the last modified date. This inventory can be generated daily or weekly.

AWS Lambda Function: A Lambda function can be created to process the S3 Inventory report, filtering out the objects that need to be retained and identifying those that should be deleted.

S3 Batch Operations: S3 Batch Operations can execute tasks such as object deletion at scale. By invoking the Lambda function through S3 Batch Operations, you can automate the process of deleting the identified objects, ensuring that the solution is serverless and requires minimal operational management.

Why Not Other Options?:

Option A (AWS CLI script on EC2): Running a script on an EC2 instance adds unnecessary operational overhead and is not serverless.

Option B (AWS Batch): AWS Batch is designed for running large-scale batch computing workloads, which is overkill for this scenario.

Option C (AWS Glue + script): AWS Glue is more suited for ETL tasks, and this approach would add unnecessary complexity compared to the serverless Lambda solution.

AWS Reference:

Amazon S3 Inventory - Information on how to set up and use S3 Inventory.

S3 Batch Operations - Documentation on how to perform bulk operations on S3 objects using S3 Batch Operations.
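
A minimal sketch of the Lambda function that S3 Batch Operations would invoke per object is shown below. It assumes the manifest handed to the batch job already contains only objects older than 3 years (for example, by filtering the S3 Inventory report beforehand) and that the retained keys are known; the hardcoded key set is purely illustrative.

```python
from urllib.parse import unquote_plus

import boto3

s3 = boto3.client("s3")

# Keys that must be retained despite being older than 3 years. In a real setup
# this set could be loaded from a manifest object in S3; it is hardcoded here
# only to keep the sketch self-contained.
RETAINED_KEYS = {"archives/contracts/2019/master-agreement.pdf"}


def handler(event, context):
    """Handler invoked by S3 Batch Operations for each task in the manifest."""
    results = []
    for task in event["tasks"]:
        key = unquote_plus(task["s3Key"])                 # keys arrive URL-encoded
        bucket = task["s3BucketArn"].split(":::")[-1]     # arn:aws:s3:::bucket-name

        if key in RETAINED_KEYS:
            result_string = "Retained"
        else:
            s3.delete_object(Bucket=bucket, Key=key)
            result_string = "Deleted"

        results.append(
            {"taskId": task["taskId"], "resultCode": "Succeeded", "resultString": result_string}
        )

    return {
        "invocationSchemaVersion": event["invocationSchemaVersion"],
        "treatMissingKeysAs": "PermanentFailure",
        "invocationId": event["invocationId"],
        "results": results,
    }
```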

A company runs several websites on AWS for its different brands. Each website generates tens of gigabytes of web traffic logs each day. A solutions architect needs to design a scalable solution to give the company's developers the ability to analyze traffic patterns across all the company's websites. This analysis by the developers will occur on demand once a week over the course of several months. The solution must support queries with standard SQL.

Which solution will meet these requirements MOST cost-effectively?

A. Store the logs in Amazon S3. Use Amazon Athena for analysis.

B. Store the logs in Amazon RDS. Use a database client for analysis.

C. Store the logs in Amazon OpenSearch Service. Use OpenSearch Service for analysis.

D. Store the logs in an Amazon EMR cluster. Use a supported open-source framework for SQL-based analysis.
Suggested answer: A

Explanation:

This solution is the most cost-effective and scalable for analyzing large amounts of web traffic logs.

Amazon S3: Storing the logs in Amazon S3 is highly scalable, durable, and cost-effective. S3 is designed to handle large-scale data storage, which is ideal for storing tens of gigabytes of log data generated daily by multiple websites.

Amazon Athena: Athena is a serverless, interactive query service that allows you to analyze data in S3 using standard SQL. It works directly with the data stored in S3, so there's no need to load the data into a database, which saves on costs and reduces complexity. Athena charges based on the amount of data scanned by queries, making it a cost-effective solution for on-demand analysis that only occurs once a week.

Why Not Other Options?:

Option B (Amazon RDS): Storing logs in a relational database like Amazon RDS would be more expensive due to the storage and I/O costs associated with RDS. Additionally, it would require more management overhead.

Option C (Amazon OpenSearch Service): OpenSearch is a good option for full-text search and analytics on log data, but it might be more costly and complex to manage compared to the simplicity and cost-effectiveness of Athena for periodic SQL-based queries.

Option D (Amazon EMR): While EMR can handle large-scale data processing, it involves more operational overhead and might be overkill for the type of ad-hoc, SQL-based analysis required here. Additionally, EMR costs can be higher due to the need to maintain a cluster.

AWS Reference:

Amazon S3 - Information on how to store and manage data in Amazon S3.

Amazon Athena - Documentation on using Amazon Athena for querying data stored in S3 using SQL.
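
A brief boto3 sketch of the weekly ad-hoc analysis is shown below; the database, table, column names, and result bucket are assumptions, and the table itself would be defined once up front (for example with a CREATE EXTERNAL TABLE statement or an AWS Glue crawler) before queries like this can run.

```python
import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Sketch: run an ad-hoc SQL query against web logs already stored in S3.
# Database, table, bucket names, and the log schema are placeholders.
query = """
    SELECT brand, request_path, COUNT(*) AS hits
    FROM web_traffic_logs
    WHERE log_date >= date_add('day', -7, current_date)
    GROUP BY brand, request_path
    ORDER BY hits DESC
    LIMIT 50
"""

execution = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "weblogs"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/weekly-analysis/"},
)

print(execution["QueryExecutionId"])
```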

A company is migrating a legacy application from an on-premises data center to AWS. The application relies on hundreds of cron jobs that run between 1 and 20 minutes on different recurring schedules throughout the day.

The company wants a solution to schedule and run the cron jobs on AWS with minimal refactoring. The solution must support running the cron jobs in response to an event in the future.

Which solution will meet these requirements?

A. Create a container image for the cron jobs. Use Amazon EventBridge Scheduler to create a recurring schedule. Run the cron job tasks as AWS Lambda functions.

B. Create a container image for the cron jobs. Use AWS Batch on Amazon Elastic Container Service (Amazon ECS) with a scheduling policy to run the cron jobs.

C. Create a container image for the cron jobs. Use Amazon EventBridge Scheduler to create a recurring schedule. Run the cron job tasks on AWS Fargate.

D. Create a container image for the cron jobs. Create a workflow in AWS Step Functions that uses a Wait state to run the cron jobs at a specified time. Use the RunTask action to run the cron job tasks on AWS Fargate.
Suggested answer: C

Explanation:

This solution is the most suitable for running cron jobs on AWS with minimal refactoring, while also supporting the possibility of running jobs in response to future events.

Container Image for Cron Jobs: By containerizing the cron jobs, you can package the environment and dependencies required to run the jobs, ensuring consistency and ease of deployment across different environments.

Amazon EventBridge Scheduler: EventBridge Scheduler allows you to create a recurring schedule that can trigger tasks (like running your cron jobs) at specific times or intervals. It provides fine-grained control over scheduling and integrates seamlessly with AWS services.

AWS Fargate: Fargate is a serverless compute engine for containers that removes the need to manage EC2 instances. It allows you to run containers without worrying about the underlying infrastructure. Fargate is ideal for running jobs that can vary in duration, like cron jobs, as it scales automatically based on the task's requirements.

Why Not Other Options?:

Option A (Lambda): AWS Lambda has a maximum execution duration of 15 minutes, so it cannot reliably run the cron jobs that take up to 20 minutes.

Option B (AWS Batch on ECS): AWS Batch is more suitable for batch processing and workloads that require complex job dependencies or orchestration, which might be more than what is needed for simple cron jobs.

Option D (Step Functions with Wait State): While Step Functions provide orchestration capabilities, this approach would introduce unnecessary complexity and overhead compared to the straightforward scheduling with EventBridge and running on Fargate.

AWS Reference:

Amazon EventBridge Scheduler - Details on how to schedule tasks using Amazon EventBridge Scheduler.

AWS Fargate - Information on how to run containers in a serverless manner using AWS Fargate.
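
A minimal boto3 sketch of one such schedule follows; the cluster, task definition, IAM role, and subnet are placeholders, and each legacy cron entry would map to its own schedule.

```python
import boto3

scheduler = boto3.client("scheduler", region_name="us-east-1")

# Sketch: one EventBridge Scheduler schedule that runs a containerized cron job
# as a Fargate task on a recurring cron expression. Cluster, task definition,
# role, and network values are placeholders.
scheduler.create_schedule(
    Name="nightly-report-job",
    ScheduleExpression="cron(15 2 * * ? *)",   # every day at 02:15 UTC
    FlexibleTimeWindow={"Mode": "OFF"},
    Target={
        "Arn": "arn:aws:ecs:us-east-1:123456789012:cluster/cron-jobs",
        "RoleArn": "arn:aws:iam::123456789012:role/eventbridge-scheduler-ecs-role",
        "EcsParameters": {
            "TaskDefinitionArn": "arn:aws:ecs:us-east-1:123456789012:task-definition/nightly-report:1",
            "LaunchType": "FARGATE",
            "NetworkConfiguration": {
                "awsvpcConfiguration": {
                    "Subnets": ["subnet-0123456789abcdef0"],
                    "AssignPublicIp": "DISABLED",
                }
            },
        },
    },
)
```

EventBridge Scheduler also supports one-time schedules with an at() expression, which covers the requirement to run a job at a known point in the future.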
