Amazon SAA-C03 Practice Test - Questions Answers, Page 71
Question 701

A company runs a critical data analysis job each week before the first day of the work week. The job requires at least 1 hour to complete the analysis. The job is stateful and cannot tolerate interruptions. The company needs a solution to run the job on AWS.
Which solution will meet these requirements?
Explanation:
Understanding the Requirement: The job is stateful, cannot tolerate interruptions, and needs to run reliably for at least one hour each week.
Analysis of Options:
AWS Fargate with Amazon ECS and EventBridge: This option provides a serverless compute engine for containers that can run stateful tasks reliably. Using EventBridge Scheduler, the job can be triggered automatically at the specified time without manual intervention.
AWS Lambda with EventBridge: Lambda functions are not suitable for long-running stateful jobs since they have a maximum execution time of 15 minutes.
EC2 Spot Instances: Spot Instances can be interrupted, making them unsuitable for a stateful job that cannot tolerate interruptions.
AWS DataSync: This service is primarily for moving large amounts of data and is not designed to run stateful analysis jobs.
Best Option for Reliable, Scheduled Execution:
The Fargate task on ECS with EventBridge Scheduler meets all requirements, providing the necessary reliability and scheduling capabilities without interruption risks.
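A minimal sketch of creating such a schedule with the boto3 EventBridge Scheduler client, assuming an existing ECS cluster, Fargate task definition, subnet, and an IAM role that allows the scheduler to run the task (all ARNs and IDs below are placeholders):

```python
import boto3

scheduler = boto3.client("scheduler")

# Placeholder ARNs -- replace with the real cluster, task definition,
# subnet, and scheduler execution role.
CLUSTER_ARN = "arn:aws:ecs:us-east-1:111122223333:cluster/analysis-cluster"
TASK_DEF_ARN = "arn:aws:ecs:us-east-1:111122223333:task-definition/weekly-analysis:1"
ROLE_ARN = "arn:aws:iam::111122223333:role/scheduler-ecs-run-task"

# Run the stateful analysis task every Sunday at 22:00 UTC, before the work week starts.
scheduler.create_schedule(
    Name="weekly-data-analysis",
    ScheduleExpression="cron(0 22 ? * SUN *)",
    FlexibleTimeWindow={"Mode": "OFF"},
    Target={
        "Arn": CLUSTER_ARN,
        "RoleArn": ROLE_ARN,
        "EcsParameters": {
            "TaskDefinitionArn": TASK_DEF_ARN,
            "LaunchType": "FARGATE",
            "NetworkConfiguration": {
                "awsvpcConfiguration": {
                    "Subnets": ["subnet-0123456789abcdef0"],
                    "AssignPublicIp": "DISABLED",
                }
            },
        },
    },
)
```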
Amazon ECS
AWS Fargate
Amazon EventBridge
Question 702

A company runs workloads in the AWS Cloud. The company wants to centrally collect security data to assess security across the entire company and to improve workload protection.
Which solution will meet these requirements with the LEAST development effort?
Explanation:
Understanding the Requirement: The company wants to centrally collect security data with minimal development effort to assess and improve security across all workloads.
Analysis of Options:
Amazon Security Lake: This is a purpose-built service for centralizing security data from across AWS services and third-party sources into a data lake. It provides native integration and requires minimal development effort to set up.
AWS Lake Formation with AWS Glue: While this can be used to create a data lake, it requires more development effort to set up and configure Glue crawlers for ingestion.
AWS Lambda with S3: This approach involves custom development to collect and process security data before storing it in S3, which requires more effort.
AWS DMS to RDS: AWS Database Migration Service is typically used for database migrations and is not suited for collecting and analyzing security data.
Best Option for Minimal Development Effort:
Amazon Security Lake provides the least development effort for setting up a centralized repository for security data. It simplifies data ingestion and management, making it the most efficient solution for this use case.
Amazon Security Lake
AWS Lake Formation
AWS Glue
Question 703

A company is storing petabytes of data in Amazon S3 Standard. The data is stored in multiple S3 buckets and is accessed with varying frequency. The company does not know access patterns for all the data. The company needs to implement a solution for each S3 bucket to optimize the cost of S3 usage.
Which solution will meet these requirements with the MOST operational efficiency?
Explanation:
Understanding the Requirement: The company has petabytes of data in S3 Standard across multiple buckets with varying access frequencies. They do not know the access patterns and need a cost-optimized storage solution with minimal operational effort.
Analysis of Options:
S3 Intelligent-Tiering: This storage class automatically moves objects between access tiers based on changing access patterns. It incurs a small per-object monitoring and automation charge but eliminates the need to manually move data between storage classes.
S3 Storage Class Analysis Tool: While useful for determining access patterns, this tool requires manual intervention to move objects to the appropriate storage class, which increases operational overhead.
S3 Glacier Instant Retrieval: This storage class is designed for data that is rarely accessed but requires instant retrieval when needed. It may not be suitable for data with unknown and varying access patterns.
S3 One Zone-IA: This is a lower-cost option for infrequently accessed data stored in a single Availability Zone. It offers lower availability and resilience than multi-AZ storage classes and requires knowledge of access patterns to use effectively.
Best Option for Operational Efficiency:
S3 Intelligent-Tiering provides the best balance of cost savings and operational efficiency. It adapts to access patterns automatically, without manual intervention, so the company pays only for the storage tier each object needs and avoids overpaying for infrequently accessed data.
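One low-effort way to move existing and future objects into S3 Intelligent-Tiering is an S3 Lifecycle rule on each bucket. A boto3 sketch, with the bucket name as a placeholder:

```python
import boto3

s3 = boto3.client("s3")

# Transition all objects in the bucket to Intelligent-Tiering as soon as possible.
# Repeat (or script) this rule for each bucket that holds the data.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-analytics-bucket",  # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "move-to-intelligent-tiering",
                "Status": "Enabled",
                "Filter": {},  # empty filter applies the rule to every object
                "Transitions": [
                    {"Days": 0, "StorageClass": "INTELLIGENT_TIERING"}
                ],
            }
        ]
    },
)
```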
Amazon S3 Intelligent-Tiering
Managing your storage lifecycle
Question 704

A company is planning to migrate data to an Amazon S3 bucket. The data must be encrypted at rest within the S3 bucket. The encryption key must be rotated automatically every year.
Which solution will meet these requirements with the LEAST operational overhead?
Explanation:
Understanding the Requirement: The data must be encrypted at rest with automatic key rotation every year, with minimal operational overhead.
Analysis of Options:
SSE-S3: This option provides encryption with S3 managed keys and automatic key rotation but offers less control and flexibility compared to KMS keys.
AWS KMS with Customer Managed Key (automatic rotation): This option offers full control over encryption keys, with AWS KMS handling automatic key rotation, minimizing operational overhead.
AWS KMS with Customer Managed Key (manual rotation): This requires manual intervention for key rotation, increasing operational overhead.
Customer Key Material: This involves more complex management, including importing key material and setting up automatic rotation, which increases operational overhead.
Best Option for Minimal Operational Overhead:
AWS KMS with a customer managed key and automatic rotation provides the needed security and key rotation with minimal operational effort. Setting the S3 bucket's default encryption to use this key ensures all data is encrypted as required.
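A minimal boto3 sketch of this setup, assuming a placeholder bucket name: create the customer managed key, enable automatic rotation, and set the bucket's default encryption to use it.

```python
import boto3

kms = boto3.client("kms")
s3 = boto3.client("s3")

# Create a customer managed KMS key and turn on automatic annual rotation.
key = kms.create_key(Description="S3 data-at-rest encryption key")
key_arn = key["KeyMetadata"]["Arn"]
kms.enable_key_rotation(KeyId=key_arn)

# Point the bucket's default encryption at the customer managed key.
s3.put_bucket_encryption(
    Bucket="example-migration-bucket",  # placeholder bucket name
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": key_arn,
                },
                "BucketKeyEnabled": True,  # optional: reduces KMS request costs
            }
        ]
    },
)
```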
AWS Key Management Service (KMS)
Amazon S3 default encryption
Question 705

A company wants to build a map of its IT infrastructure to identify and enforce policies on resources that pose security risks. The company's security team must be able to query data in the IT infrastructure map and quickly identify security risks.
Which solution will meet these requirements with the LEAST operational overhead?
Explanation:
Understanding the Requirement: The company needs to map its IT infrastructure to identify and enforce security policies, with the ability to quickly query and identify security risks.
Analysis of Options:
Amazon RDS: While suitable for relational data, it is not optimized for handling complex relationships and querying those relationships, which is essential for an IT infrastructure map.
Amazon Neptune: A graph database service designed for handling highly connected data. It uses SPARQL to query graph data efficiently, making it ideal for mapping IT infrastructure and identifying relationships that pose security risks.
Amazon Redshift: A data warehouse solution optimized for complex queries on large datasets but not specifically for graph data.
Amazon DynamoDB: A NoSQL database that uses PartiQL for querying, but it is not optimized for complex relationships in graph data.
Best Option for Mapping and Querying IT Infrastructure:
Amazon Neptune provides the most suitable solution with the least operational overhead. It is purpose-built for graph data and enables efficient querying of complex relationships to identify security risks.
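As a rough illustration of querying such a map, the sketch below sends a SPARQL query to a Neptune cluster's HTTPS endpoint using the requests library. The endpoint, the prefix, and the predicate names (for example, an "exposedToInternet" flag) are all hypothetical and depend on how the infrastructure graph is actually modeled.

```python
import requests

# Hypothetical Neptune SPARQL endpoint (port 8182, /sparql path).
NEPTUNE_SPARQL_URL = (
    "https://my-neptune-cluster.cluster-abc123.us-east-1"
    ".neptune.amazonaws.com:8182/sparql"
)

# Hypothetical schema: find resources flagged as internet-exposed and the
# security groups attached to them.
query = """
PREFIX infra: <http://example.com/infra#>
SELECT ?resource ?securityGroup
WHERE {
  ?resource infra:exposedToInternet true .
  ?resource infra:attachedSecurityGroup ?securityGroup .
}
"""

response = requests.post(
    NEPTUNE_SPARQL_URL,
    data={"query": query},
    headers={"Accept": "application/sparql-results+json"},
)
print(response.json())
```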
Amazon Neptune
Querying with SPARQL
Question 706

A company wants to add its existing AWS usage cost to its operations cost dashboard. A solutions architect needs to recommend a solution that will give the company access to its usage cost programmatically. The company must be able to access cost data for the current year and forecast costs for the next 12 months.
Which solution will meet these requirements with the LEAST operational overhead?
Explanation:
Understanding the Requirement: The company needs programmatic access to its AWS usage costs for the current year and cost forecasts for the next 12 months, with minimal operational overhead.
Analysis of Options:
AWS Cost Explorer API: Provides programmatic access to detailed usage and cost data, including forecast costs. It supports pagination for handling large datasets, making it an efficient solution.
Downloadable AWS Cost Explorer report .csv files: While useful, this method requires manual handling of files and does not provide on-demand programmatic access.
AWS Budgets actions via FTP: This is less suitable as it involves setting up FTP transfers and does not provide the same level of detail and real-time access as the API.
AWS Budgets reports via SMTP: Similar to FTP, this method involves additional setup and lacks the real-time access and detail provided by the API.
Best Option for Minimal Operational Overhead:
AWS Cost Explorer API provides direct, programmatic access to cost data, including detailed usage and forecasting, with minimal setup and operational effort. It is the most efficient solution for integrating cost data into an operational cost dashboard.
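A boto3 sketch of the two relevant calls: actual cost for the year to date and a forecast for the following 12 months. The dates are placeholders and would normally be computed at runtime.

```python
import boto3

ce = boto3.client("ce")  # Cost Explorer

# Actual cost for the current year to date, grouped by month.
actuals = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-01-01", "End": "2024-07-01"},  # placeholder dates
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
)

# Forecast for the next 12 months.
forecast = ce.get_cost_forecast(
    TimePeriod={"Start": "2024-07-01", "End": "2025-07-01"},  # placeholder dates
    Metric="UNBLENDED_COST",
    Granularity="MONTHLY",
)

print(actuals["ResultsByTime"])
print(forecast["Total"])
```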
AWS Cost Explorer API
AWS Cost and Usage Reports
Question 707

A company wants to create a mobile app that allows users to stream slow-motion video clips on their mobile devices. Currently, the app captures video clips and uploads the video clips in raw format into an Amazon S3 bucket. The app retrieves these video clips directly from the S3 bucket. However, the videos are large in their raw format.
Users are experiencing issues with buffering and playback on mobile devices. The company wants to implement solutions to maximize the performance and scalability of the app while minimizing operational overhead.
Which combination of solutions will meet these requirements? (Select TWO.)
Explanation:
Understanding the Requirement: The mobile app captures and uploads raw video clips to S3, but users experience buffering and playback issues due to the large size of these videos.
Analysis of Options:
Amazon CloudFront: A content delivery network (CDN) that can cache and deliver content globally with low latency. It helps reduce buffering by delivering content from edge locations closer to the users.
AWS DataSync: Primarily used for transferring and replicating data between storage systems and AWS storage services; it does not address the video size or buffering issues.
Amazon Elastic Transcoder: A media transcoding service that can convert raw video files into formats and resolutions more suitable for streaming, reducing the size and improving playback performance.
EC2 Instances in Local Zones: While this could provide content delivery and caching, it involves more operational overhead compared to using CloudFront.
EC2 Instances for Transcoding: Involves setting up and maintaining infrastructure, leading to higher operational overhead compared to using Elastic Transcoder.
Best Combination of Solutions:
Deploy Amazon CloudFront: This optimizes the performance by caching content at edge locations, reducing latency and buffering for users.
Use Amazon Elastic Transcoder: This reduces the file size and converts videos into formats better suited for streaming on mobile devices.
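A hedged sketch of submitting a transcoding job with the boto3 Elastic Transcoder client; the pipeline ID, object keys, and preset ID are placeholders (Elastic Transcoder provides system presets for common mobile-friendly outputs). CloudFront would then serve the transcoded objects from the output bucket.

```python
import boto3

transcoder = boto3.client("elastictranscoder")

# Placeholder identifiers -- the pipeline links the raw-upload bucket to the
# output bucket that CloudFront uses as its origin.
PIPELINE_ID = "1111111111111-abcde1"       # placeholder pipeline ID
MOBILE_PRESET_ID = "1351620000001-000020"  # placeholder system preset ID

# Convert a raw upload into a smaller, mobile-friendly rendition.
transcoder.create_job(
    PipelineId=PIPELINE_ID,
    Input={"Key": "raw/clip-0001.mov"},
    Outputs=[
        {
            "Key": "transcoded/clip-0001.mp4",
            "PresetId": MOBILE_PRESET_ID,
        }
    ],
)
```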
Amazon CloudFront
Amazon Elastic Transcoder
Question 708

A company's application is running on Amazon EC2 instances within an Auto Scaling group behind an Elastic Load Balancing (ELB) load balancer. Based on the application's history, the company anticipates a spike in traffic during a holiday each year. A solutions architect must design a strategy to ensure that the Auto Scaling group proactively increases capacity to minimize any performance impact on application users.
Which solution will meet these requirements?
Explanation:
Understanding the Requirement: The company anticipates a spike in traffic during a holiday and wants to ensure the Auto Scaling group can handle the increased load without impacting performance.
Analysis of Options:
CloudWatch Alarm: This reacts to spikes based on metrics like CPU utilization but does not proactively scale before the anticipated demand.
Recurring Scheduled Action: This allows the Auto Scaling group to scale up based on a known schedule, ensuring additional capacity is available before the expected spike.
Increase Min/Max Instances: This could result in unnecessary costs by maintaining higher capacity even when not needed.
SNS Notification: Alerts on scaling events but does not proactively manage scaling to prevent performance issues.
Best Solution for Proactive Scaling:
Create a recurring scheduled action: This approach ensures that the Auto Scaling group scales up before the peak demand, providing the necessary capacity proactively without manual intervention.
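A boto3 sketch of recurring scheduled actions (Unix cron syntax) that add capacity before the holiday each year and remove it afterwards. The group name, dates, and sizes are placeholders.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Scale out ahead of the anticipated holiday traffic, every year.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-app-asg",
    ScheduledActionName="holiday-scale-out",
    Recurrence="0 0 23 12 *",   # 00:00 UTC on December 23, every year
    MinSize=6,
    MaxSize=20,
    DesiredCapacity=10,
)

# Scale back in after the holiday period ends.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-app-asg",
    ScheduledActionName="holiday-scale-in",
    Recurrence="0 0 27 12 *",   # 00:00 UTC on December 27, every year
    MinSize=2,
    MaxSize=10,
    DesiredCapacity=2,
)
```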
Scheduled Scaling for Auto Scaling
Question 709

A company is hosting a high-traffic static website on Amazon S3 with an Amazon CloudFront distribution that has a default TTL of 0 seconds. The company wants to implement caching to improve performance for the website. However, the company also wants to ensure that stale content is not served for more than a few minutes after a deployment.
Which combination of caching methods should a solutions architect implement to meet these requirements? (Select TWO.)
Explanation:
Understanding the Requirement: The company wants to improve caching to enhance website performance while ensuring that stale content is not served for more than a few minutes after a deployment.
Analysis of Options:
Set CloudFront TTL: Setting a short TTL (e.g., 2 minutes) ensures that cached content is refreshed frequently, reducing the risk of serving stale content.
S3 Bucket TTL: This would not control the cache duration for the CloudFront distribution.
Cache-Control private: This directive tells shared caches such as CloudFront not to cache the object, so it would reduce caching rather than improve it.
Lambda@Edge: While this can add headers dynamically, it adds complexity and operational overhead.
Cache-Control max-age and CloudFront Invalidation: Setting a longer max-age for objects ensures they are cached longer, reducing load on the origin. Invalidation ensures that updated content is refreshed immediately after deployment.
Best Combination of Caching Methods:
Set the CloudFront default TTL to 2 minutes: This balances caching and freshness of content.
Add a Cache-Control max-age directive of 24 hours and use CloudFront invalidation: This ensures efficient caching while providing a mechanism to clear outdated content immediately after a deployment.
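A boto3 sketch of the Cache-Control and invalidation pieces, with the bucket name and distribution ID as placeholders: objects are uploaded with a 24-hour max-age header, and changed paths are invalidated after each deployment so fresh content is served immediately.

```python
import boto3
import time

s3 = boto3.client("s3")
cloudfront = boto3.client("cloudfront")

# Upload (or re-upload) an object with a 24-hour Cache-Control max-age header.
with open("index.html", "rb") as body:
    s3.put_object(
        Bucket="example-static-site-bucket",  # placeholder bucket name
        Key="index.html",
        Body=body,
        ContentType="text/html",
        CacheControl="max-age=86400",         # 24 hours
    )

# After a deployment, invalidate changed paths so stale content is cleared.
cloudfront.create_invalidation(
    DistributionId="E1234567890ABC",          # placeholder distribution ID
    InvalidationBatch={
        "Paths": {"Quantity": 1, "Items": ["/*"]},
        "CallerReference": str(time.time()),  # must be unique per request
    },
)
```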
Amazon CloudFront Caching
Invalidating Files in CloudFront
Question 710

A company that uses AWS Organizations runs 150 applications across 30 different AWS accounts. The company used AWS Cost and Usage Reports to create a new report in the management account. The report is delivered to an Amazon S3 bucket that is replicated to a bucket in the data collection account.
The company's senior leadership wants to view a custom dashboard that provides NAT gateway costs each day starting at the beginning of the current month.
Which solution will meet these requirements?
Explanation:
Understanding the Requirement: Senior leadership wants a custom dashboard displaying NAT gateway costs daily, starting from the beginning of the current month.
Analysis of Options:
QuickSight with DataSync: While QuickSight is suitable for dashboards, DataSync is not designed for querying and analyzing data reports.
QuickSight with Athena: QuickSight can visualize data queried by Athena, which is designed to analyze data directly from S3.
CloudWatch with DataSync: CloudWatch is primarily for monitoring metrics, not for creating detailed cost analysis dashboards.
CloudWatch with Athena: CloudWatch dashboards are built around metrics and cannot directly visualize Athena query results, so this combination does not meet the requirement for a custom cost dashboard.
Best Solution for Visualization and Querying:
Amazon QuickSight with Athena: This combination allows for powerful data visualization and querying capabilities. QuickSight can create dynamic dashboards, while Athena efficiently queries the cost and usage report data stored in S3.
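A sketch of the Athena side, assuming the Cost and Usage Report has been exposed to Athena as a table (the database, table, result bucket, and partition values below are placeholders; the column names follow the standard Athena-friendly CUR schema). QuickSight can then use this query, or a dataset built on it, as the dashboard source.

```python
import boto3

athena = boto3.client("athena")

# Daily NAT gateway cost for the current month, assuming the CUR is available
# in Athena as cur_db.cur_table (placeholder names).
query = """
SELECT DATE(line_item_usage_start_date) AS usage_day,
       SUM(line_item_unblended_cost) AS nat_gateway_cost
FROM cur_db.cur_table
WHERE line_item_usage_type LIKE '%NatGateway%'
  AND year = '2024' AND month = '7'          -- placeholder partition values
GROUP BY DATE(line_item_usage_start_date)
ORDER BY usage_day
"""

athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "cur_db"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
```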
Amazon QuickSight
Amazon Athena