Amazon SAA-C03 Practice Test - Questions Answers, Page 76

A company's near-real-time streaming application is running on AWS. As the data is ingested, a job runs on the data and takes 30 minutes to complete. The workload frequently experiences high latency due to large amounts of incoming data. A solutions architect needs to design a scalable and serverless solution to enhance performance.

Which combination of steps should the solutions architect take? (Select TWO.)

A. Use Amazon Kinesis Data Firehose to ingest the data.
B. Use AWS Lambda with AWS Step Functions to process the data.
C. Use AWS Database Migration Service (AWS DMS) to ingest the data.
D. Use Amazon EC2 instances in an Auto Scaling group to process the data.
E. Use AWS Fargate with Amazon Elastic Container Service (Amazon ECS) to process the data.
Suggested answer: A, E

Explanation:

Understanding the Requirement: The company needs to design a scalable and serverless solution for a near-real-time streaming application that experiences high latency due to large amounts of incoming data. The job processing takes about 30 minutes.

Analysis of Options:

Amazon Kinesis Data Firehose: Provides a fully managed service for real-time data streaming and ingestion, allowing for seamless data delivery to destinations such as Amazon S3, Redshift, and Elasticsearch.

AWS Lambda with AWS Step Functions: Suitable for orchestration and lightweight processing, but Lambda's 15-minute maximum execution time cannot accommodate the 30-minute job.

AWS DMS: Primarily used for database migration, not for real-time data ingestion in this context.

Amazon EC2 in Auto Scaling Group: Provides scalability but involves managing servers, which is not serverless and adds operational overhead.

AWS Fargate with ECS: Offers a serverless compute engine for containers, allowing easy scaling and management without managing the underlying infrastructure.

Best Solution:

Amazon Kinesis Data Firehose: For ingesting the streaming data efficiently.

AWS Fargate with ECS: For processing the data in a scalable and serverless manner.

Amazon Kinesis Data Firehose

AWS Fargate
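To make the suggested combination concrete, here is a minimal boto3 sketch that pushes a record into a Kinesis Data Firehose delivery stream and launches the long-running job as a Fargate task. The stream, cluster, task definition, and network IDs are hypothetical placeholders, not values from the question.

```python
import boto3

firehose = boto3.client("firehose")
ecs = boto3.client("ecs")

# Ingest a record into the delivery stream (name is hypothetical).
firehose.put_record(
    DeliveryStreamName="ingest-stream",
    Record={"Data": b'{"event": "example"}\n'},
)

# Run the 30-minute job as a Fargate task. Unlike Lambda, Fargate tasks
# have no 15-minute execution ceiling. All identifiers are hypothetical.
ecs.run_task(
    cluster="processing-cluster",
    launchType="FARGATE",
    taskDefinition="data-job:1",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],
            "assignPublicIp": "DISABLED",
        }
    },
)
```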

A company uses an Amazon CloudFront distribution to serve content pages for its website. The company needs to ensure that clients use a TLS certificate when accessing the company's website. The company wants to automate the creation and renewal of the TLS certificates.

Which solution will meet these requirements with the MOST operational efficiency?

A. Use a CloudFront security policy to create a certificate.
B. Use a CloudFront origin access control (OAC) to create a certificate.
C. Use AWS Certificate Manager (ACM) to create a certificate. Use DNS validation for the domain.
D. Use AWS Certificate Manager (ACM) to create a certificate. Use email validation for the domain.
Suggested answer: C

Explanation:

Understanding the Requirement: The company needs to ensure clients use a TLS certificate when accessing the website and automate the creation and renewal of TLS certificates.

Analysis of Options:

CloudFront Security Policy: Not applicable for creating certificates.

CloudFront Origin Access Control (OAC): Controls access to origins, not relevant for TLS certificate creation.

AWS Certificate Manager (ACM) with DNS Validation: Provides automated certificate management, including creation and renewal. Once the validation CNAME record is in place, ACM renews the certificate without manual intervention.

AWS Certificate Manager (ACM) with Email Validation: Requires manual intervention to approve validation emails, which increases operational effort.

Best Solution:

AWS Certificate Manager (ACM) with DNS Validation: Ensures automated and efficient certificate management with the least operational effort.

AWS Certificate Manager (ACM)

DNS Validation in ACM
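As a sketch of the suggested answer, a single ACM call requests a DNS-validated certificate; the domain names are hypothetical. Certificates attached to a CloudFront distribution must be issued in us-east-1, and if the hosted zone is in Route 53, ACM can create the validation CNAME and renew the certificate with no further action.

```python
import boto3

# CloudFront requires ACM certificates from the us-east-1 Region.
acm = boto3.client("acm", region_name="us-east-1")

response = acm.request_certificate(
    DomainName="www.example.com",            # hypothetical domain
    SubjectAlternativeNames=["example.com"],
    ValidationMethod="DNS",                  # automated validation/renewal
)
print(response["CertificateArn"])
```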

A company's solutions architect is designing an AWS multi-account solution that uses AWS Organizations. The solutions architect has organized the company's accounts into organizational units (OUs).

The solutions architect needs a solution that will identify any changes to the OU hierarchy. The solution also needs to notify the company's operations team of any changes.

Which solution will meet these requirements with the LEAST operational overhead?

A. Provision the AWS accounts by using AWS Control Tower. Use account drift notifications to identify the changes to the OU hierarchy.
B. Provision the AWS accounts by using AWS Control Tower. Use AWS Config aggregated rules to identify the changes to the OU hierarchy.
C. Use AWS Service Catalog to create accounts in Organizations. Use an AWS CloudTrail organization trail to identify the changes to the OU hierarchy.
D. Use AWS CloudFormation templates to create accounts in Organizations. Use the drift detection operation on a stack to identify the changes to the OU hierarchy.
Suggested answer: A

Explanation:

Understanding the Requirement: The company needs to monitor and notify changes to the OU hierarchy with minimal operational overhead.

Analysis of Options:

AWS Control Tower with Account Drift Notifications: AWS Control Tower provides automated account provisioning and governance, including drift detection and notifications for changes in the OU hierarchy.

AWS Control Tower with AWS Config: AWS Config provides resource configuration tracking but is more complex compared to drift notifications directly available in Control Tower.

AWS Service Catalog with CloudTrail: While CloudTrail tracks changes, setting up notification mechanisms involves more operational overhead.

AWS CloudFormation with Drift Detection: Suitable for tracking configuration changes but less efficient for monitoring OU hierarchy changes compared to Control Tower's built-in features.

Best Solution:

AWS Control Tower with Account Drift Notifications: Provides a streamlined and efficient way to detect and notify changes in the OU hierarchy with minimal operational overhead.

AWS Control Tower

AWS Control Tower Drift Detection
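On the notification side, Control Tower publishes drift alerts to SNS topics in the audit account. A minimal boto3 sketch of subscribing the operations team, assuming the default topic naming convention (verify the exact topic in your landing zone); the account ID, Region, and email address are hypothetical:

```python
import boto3

sns = boto3.client("sns")

# Default-style Control Tower notification topic in the audit account
# (hypothetical ARN; confirm the name in your landing zone).
topic_arn = (
    "arn:aws:sns:us-east-1:111122223333:"
    "aws-controltower-AggregateSecurityNotifications"
)

# Deliver drift notifications to the operations team.
sns.subscribe(TopicArn=topic_arn, Protocol="email",
              Endpoint="ops-team@example.com")
```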

A company is designing the architecture for a new mobile app that uses the AWS Cloud. The company uses organizational units (OUs) in AWS Organizations to manage its accounts. The company wants to tag Amazon EC2 instances with data sensitivity by using values of sensitive and nonsensitive. IAM identities must not be able to delete a tag or create instances without a tag.

Which combination of steps will meet these requirements? (Select TWO.)

A. In Organizations, create a new tag policy that specifies the data sensitivity tag key and the required values. Enforce the tag values for the EC2 instances. Attach the tag policy to the appropriate OU.
B. In Organizations, create a new service control policy (SCP) that specifies the data sensitivity tag key and the required tag values. Enforce the tag values for the EC2 instances. Attach the SCP to the appropriate OU.
C. Create a tag policy to deny running instances when a tag key is not specified. Create another tag policy that prevents identities from deleting tags. Attach the tag policies to the appropriate OU.
D. Create a service control policy (SCP) to deny creating instances when a tag key is not specified. Create another SCP that prevents identities from deleting tags. Attach the SCPs to the appropriate OU.
E. Create an AWS Config rule to check if EC2 instances use the data sensitivity tag and the specified values. Configure an AWS Lambda function to delete the resource if a noncompliant resource is found.
Suggested answer: A, D

Explanation:

To meet the requirements for tagging and preventing instance creation or deletion without proper tags, the company can use a combination of AWS Organizations tag policies and service control policies (SCPs).

Tag Policies: These enforce specific tag values across resources. Creating a tag policy with required values (e.g., sensitive, non-sensitive) and attaching it to the appropriate organizational unit (OU) ensures consistency in tagging.

SCPs: SCPs can be used to enforce compliance by preventing instance creation without a tag and preventing tag deletion. These policies control actions at the account level across the organization.

Key AWS features:

Tag Policies help standardize tags across accounts, and SCPs enforce governance by restricting actions that violate the policies.

AWS Documentation: AWS best practices recommend using tag policies and SCPs to enforce compliance across multiple accounts within AWS Organizations.
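A minimal boto3 sketch of the SCP half (option D); the DataSensitivity tag key, policy name, and OU ID are hypothetical:

```python
import json
import boto3

orgs = boto3.client("organizations")

# Deny launching untagged instances and deny tag deletion.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyRunInstancesWithoutTag",
            "Effect": "Deny",
            "Action": "ec2:RunInstances",
            "Resource": "arn:aws:ec2:*:*:instance/*",
            "Condition": {"Null": {"aws:RequestTag/DataSensitivity": "true"}},
        },
        {
            "Sid": "DenyTagDeletion",
            "Effect": "Deny",
            "Action": "ec2:DeleteTags",
            "Resource": "*",
        },
    ],
}

policy = orgs.create_policy(
    Name="require-data-sensitivity-tag",   # hypothetical name
    Description="Block untagged EC2 launches and tag deletion",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)
orgs.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-abcd-12345678",            # hypothetical OU ID
)
```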

A company runs multiple workloads on virtual machines (VMs) in an on-premises data center. The company is expanding rapidly. The on-premises data center is not able to scale fast enough to meet business needs. The company wants to migrate the workloads to AWS.

The migration is time sensitive. The company wants to use a lift-and-shift strategy for non-critical workloads.

Which combination of steps will meet these requirements? (Select THREE.)

A. Use the AWS Schema Conversion Tool (AWS SCT) to collect data about the VMs.
B. Use AWS Application Migration Service. Install the AWS Replication Agent on the VMs.
C. Complete the initial replication of the VMs. Launch test instances to perform acceptance tests on the VMs.
D. Stop all operations on the VMs. Launch a cutover instance.
E. Use AWS App2Container (A2C) to collect data about the VMs.
F. Use AWS Database Migration Service (AWS DMS) to migrate the VMs.
Suggested answer: B, C, D

Explanation:

AWS Application Migration Service (AWS MGN) is the recommended tool for a lift-and-shift strategy, especially for time-sensitive migrations. It automates the replication of on-premises VMs to AWS, minimizing the effort required for migration and testing.

Key steps:

Replication with AWS MGN: The AWS Replication Agent is installed on the VMs to continuously replicate data to AWS, allowing you to manage migration easily.

Testing and Cutover: Initial replication allows for testing in AWS before performing the final cutover, ensuring that the migration process is smooth and data integrity is maintained.

AWS Documentation: AWS MGN is recommended for migrating virtual machines to the cloud with minimal downtime and disruption.
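A minimal boto3 sketch of the test and cutover steps (options C and D), assuming the AWS Replication Agent is installed and initial replication has finished; the source server ID is hypothetical:

```python
import boto3

mgn = boto3.client("mgn")

# Source server IDs come from describe_source_servers() once the
# Replication Agent is reporting in; this one is hypothetical.
server_ids = ["s-1234567890abcdef0"]

# Launch test instances for acceptance testing.
mgn.start_test(sourceServerIDs=server_ids)

# After tests pass and operations on the source VMs are stopped,
# launch the cutover instances.
mgn.start_cutover(sourceServerIDs=server_ids)
```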

A company's application is deployed on Amazon EC2 instances and uses AWS Lambda functions for an event-driven architecture. The company uses nonproduction development environments in a different AWS account to test new features before the company deploys the features to production.

The production instances show constant usage because of customers in different time zones. The company uses nonproduction instances only during business hours on weekdays. The company does not use the nonproduction instances on the weekends. The company wants to optimize the costs to run its application on AWS.

Which solution will meet these requirements MOST cost-effectively?

A. Use On-Demand Instances for the production instances. Use Dedicated Hosts for the nonproduction instances on weekends only.
B. Use Reserved Instances for the production instances and the nonproduction instances. Shut down the nonproduction instances when not in use.
C. Use Compute Savings Plans for the production instances. Use On-Demand Instances for the nonproduction instances. Shut down the nonproduction instances when not in use.
D. Use Dedicated Hosts for the production instances. Use EC2 Instance Savings Plans for the nonproduction instances.
Suggested answer: C

Explanation:

Compute Savings Plans offer the most flexible and cost-effective solution for the production instances, as they provide significant savings (up to 66%) for both EC2 and AWS Lambda usage, while allowing flexibility in the type of instance family, size, and even region. For nonproduction instances, using On-Demand Instances ensures you only pay for the instances when they are running, and shutting them down during off-hours further optimizes cost.

Key AWS features:

Compute Savings Plans: Provide savings based on consistent usage, making it ideal for production environments with steady load.

On-Demand Instances: Suitable for nonproduction environments that are used intermittently. Shutting them down when not in use avoids unnecessary costs.

AWS Documentation: According to AWS's cost optimization best practices, using a combination of Savings Plans for production and On-Demand Instances for nonproduction environments that are used sparingly results in optimal cost savings.
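The shutdown half of the answer is easy to automate. A minimal boto3 sketch, assuming the nonproduction instances carry a hypothetical Environment=nonprod tag and the function runs on a schedule (for example, an EventBridge rule each weekday evening):

```python
import boto3

ec2 = boto3.client("ec2")

def stop_nonprod_instances():
    # Find running instances tagged as nonproduction (tag is hypothetical).
    reservations = ec2.describe_instances(
        Filters=[
            {"Name": "tag:Environment", "Values": ["nonprod"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )["Reservations"]
    instance_ids = [
        i["InstanceId"] for r in reservations for i in r["Instances"]
    ]
    if instance_ids:
        ec2.stop_instances(InstanceIds=instance_ids)

stop_nonprod_instances()
```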

A company runs database workloads on AWS that are the backend for the company's customer portals. The company runs a Multi-AZ database cluster on Amazon RDS for PostgreSQL.

The company needs to implement a 30-day backup retention policy. The company currently has both automated RDS backups and manual RDS backups. The company wants to maintain both types of existing RDS backups that are less than 30 days old.

Which solution will meet these requirements MOST cost-effectively?

A. Configure the RDS backup retention policy to 30 days for automated backups by using AWS Backup. Manually delete manual backups that are older than 30 days.
B. Disable RDS automated backups. Delete automated backups and manual backups that are older than 30 days. Configure the RDS backup retention policy to 30 days for automated backups.
C. Configure the RDS backup retention policy to 30 days for automated backups. Manually delete manual backups that are older than 30 days.
D. Disable RDS automated backups. Delete automated backups and manual backups that are older than 30 days automatically by using AWS CloudFormation. Configure the RDS backup retention policy to 30 days for automated backups.
Suggested answer: A

Explanation:

Setting the RDS backup retention policy to 30 days for automated backups through AWS Backup allows the company to retain backups cost-effectively. Manual backups, however, are not automatically managed by RDS's retention policy, so they need to be manually deleted if they are older than 30 days to avoid unnecessary storage costs.

Key AWS features:

Automated Backups: Can be configured with a retention policy of up to 35 days, ensuring that older automated backups are deleted automatically.

Manual Backups: These are not subject to the automated retention policy and must be manually managed to avoid extra costs.

AWS Documentation: AWS recommends using backup retention policies for automated backups while manually managing manual backups.
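A minimal boto3 sketch of both halves of the answer; the instance identifier is hypothetical, and for a Multi-AZ DB cluster the retention setting goes through modify_db_cluster instead:

```python
from datetime import datetime, timedelta, timezone
import boto3

rds = boto3.client("rds")

# Keep automated backups for 30 days (identifier is hypothetical).
rds.modify_db_instance(
    DBInstanceIdentifier="customer-portal-db",
    BackupRetentionPeriod=30,
    ApplyImmediately=True,
)

# Manual snapshots are not covered by the retention policy, so delete
# those older than 30 days explicitly.
cutoff = datetime.now(timezone.utc) - timedelta(days=30)
for snap in rds.describe_db_snapshots(SnapshotType="manual")["DBSnapshots"]:
    if snap["SnapshotCreateTime"] < cutoff:
        rds.delete_db_snapshot(
            DBSnapshotIdentifier=snap["DBSnapshotIdentifier"]
        )
```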

A company is building a web application that serves a content management system. The content management system runs on Amazon EC2 instances behind an Application Load Balancer (ALB). The EC2 instances run in an Auto Scaling group across multiple Availability Zones. Users are constantly adding and updating files, blogs, and other website assets in the content management system.

A solutions architect must implement a solution in which all the EC2 instances share up-to-date website content with the least possible lag time.

Which solution meets these requirements?

A. Update the EC2 user data in the Auto Scaling group lifecycle policy to copy the website assets from the EC2 instance that was launched most recently. Configure the ALB to make changes to the website assets only in the newest EC2 instance.
B. Copy the website assets to an Amazon Elastic File System (Amazon EFS) file system. Configure each EC2 instance to mount the EFS file system locally. Configure the website hosting application to reference the website assets that are stored in the EFS file system.
C. Copy the website assets to an Amazon S3 bucket. Ensure that each EC2 instance downloads the website assets from the S3 bucket to the attached Amazon Elastic Block Store (Amazon EBS) volume. Run the S3 sync command once each hour to keep files up to date.
D. Restore an Amazon Elastic Block Store (Amazon EBS) snapshot with the website assets. Attach the EBS snapshot as a secondary EBS volume when a new EC2 instance is launched. Configure the website hosting application to reference the website assets that are stored in the secondary EBS volume.
Suggested answer: B

Explanation:

Understanding the Requirement: The company needs all EC2 instances to share up-to-date website content with minimal lag time, running behind an Application Load Balancer.

Analysis of Options:

EC2 User Data with ALB: Complex and not scalable as it requires updating each instance manually.

Amazon EFS: Provides a scalable, shared file storage solution that can be mounted by multiple EC2 instances, ensuring all instances have access to the same up-to-date content.

Amazon S3 with EC2 Sync: Involves periodic synchronization which introduces lag and complexity.

Amazon EBS Snapshots: Not suitable for dynamic and frequent updates required by a content management system.

Best Solution:

Amazon EFS: Ensures all EC2 instances have access to a consistent and up-to-date set of website assets with minimal lag time, meeting the requirements effectively.

Amazon Elastic File System (EFS)

Mounting EFS File Systems on EC2 Instances
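A minimal boto3 sketch of the EFS setup; the creation token, subnet IDs, and security group ID are hypothetical. Each instance then mounts the same file system, so writes from one instance are visible to all the others almost immediately.

```python
import boto3

efs = boto3.client("efs")

# Create the shared file system for the website assets.
fs = efs.create_file_system(
    CreationToken="cms-assets",          # hypothetical token
    PerformanceMode="generalPurpose",
)

# One mount target per Availability Zone used by the Auto Scaling group
# (subnet and security group IDs are hypothetical).
for subnet_id in ["subnet-0123456789aaaaaaa", "subnet-0123456789bbbbbbb"]:
    efs.create_mount_target(
        FileSystemId=fs["FileSystemId"],
        SubnetId=subnet_id,
        SecurityGroups=["sg-0123456789abcdef0"],
    )

# Each instance mounts the file system at boot, e.g. from user data:
#   mount -t efs <FileSystemId>:/ /var/www/assets
```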

A company needs to optimize the cost of its Amazon EC2 instances. The company also needs to change the type and family of its EC2 instances every 2-3 months.

What should the company do to meet these requirements?

A. Purchase Partial Upfront Reserved Instances for a 3-year term.
B. Purchase a No Upfront Compute Savings Plan for a 1-year term.
C. Purchase All Upfront Reserved Instances for a 1-year term.
D. Purchase an All Upfront EC2 Instance Savings Plan for a 1-year term.
Suggested answer: B

Explanation:

Understanding the Requirements: The company needs to optimize costs and has the flexibility to change EC2 instance types and families frequently (every 2-3 months).

Savings Plans Overview: Savings Plans offer significant savings over On-Demand pricing, with the flexibility to use any instance type and family within a region.

No Upfront Compute Savings Plan: This plan allows for cost optimization without any upfront payment, offering flexibility to change instance types and families.

Term Selection: A 1-year term is appropriate for balancing cost savings and flexibility given the frequent changes in instance types.

Conclusion: A No Upfront Compute Savings Plan for a 1-year term provides the needed flexibility and cost savings without the commitment and inflexibility of Reserved Instances.

Reference

AWS Savings Plans: AWS Savings Plans

AWS Cost Management Documentation: AWS Cost Management
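As an illustration only, the boto3 savingsplans client can search for matching offerings and purchase one; the hourly commitment below is a hypothetical value, and the parameters should be checked against the current API reference before use.

```python
import boto3

sp = boto3.client("savingsplans")

# 1-year (31,536,000 seconds) No Upfront Compute Savings Plan offerings.
offerings = sp.describe_savings_plans_offerings(
    planTypes=["Compute"],
    paymentOptions=["No Upfront"],
    durations=[31536000],
)["searchResults"]

# Purchase with a USD-per-hour commitment (value is hypothetical).
sp.create_savings_plan(
    savingsPlanOfferingId=offerings[0]["offeringId"],
    commitment="10.0",
)
```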

A company uses a Microsoft SQL Server database. The company's applications are connected to the database. The company wants to migrate to an Amazon Aurora PostgreSQL database with minimal changes to the application code.

Which combination of steps will meet these requirements? (Select TWO.)

A. Use the AWS Schema Conversion Tool (AWS SCT) to rewrite the SQL queries in the applications.
B. Enable Babelfish on Aurora PostgreSQL to run the SQL queries from the applications.
C. Migrate the database schema and data by using the AWS Schema Conversion Tool (AWS SCT) and AWS Database Migration Service (AWS DMS).
D. Use Amazon RDS Proxy to connect the applications to Aurora PostgreSQL.
E. Use AWS Database Migration Service (AWS DMS) to rewrite the SQL queries in the applications.
Suggested answer: B, C

Explanation:

Requirement Analysis: The goal is to migrate from Microsoft SQL Server to Amazon Aurora PostgreSQL with minimal application code changes.

Babelfish for Aurora PostgreSQL: Babelfish allows Aurora PostgreSQL to understand SQL Server queries natively, reducing the need for application code changes.

AWS Schema Conversion Tool (SCT): This tool helps in converting the database schema from SQL Server to PostgreSQL.

AWS Database Migration Service (DMS): DMS can be used to migrate data from SQL Server to Aurora PostgreSQL seamlessly.

Combined Approach: Enabling Babelfish addresses the SQL query compatibility, while SCT and DMS handle the schema and data migration.

Reference

Babelfish for Aurora PostgreSQL: Babelfish Documentation

AWS SCT and DMS: AWS Database Migration Service
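A minimal boto3 sketch of enabling Babelfish (option B) through a cluster parameter group; the group name is hypothetical, and the parameter group family should match the Aurora PostgreSQL version being deployed.

```python
import boto3

rds = boto3.client("rds")

# Babelfish is turned on via a DB cluster parameter group.
rds.create_db_cluster_parameter_group(
    DBClusterParameterGroupName="babelfish-enabled",   # hypothetical name
    DBParameterGroupFamily="aurora-postgresql15",      # match your version
    Description="Aurora PostgreSQL with Babelfish for T-SQL clients",
)
rds.modify_db_cluster_parameter_group(
    DBClusterParameterGroupName="babelfish-enabled",
    Parameters=[
        {
            "ParameterName": "rds.babelfish_status",
            "ParameterValue": "on",
            "ApplyMethod": "pending-reboot",
        }
    ],
)
# Create the Aurora PostgreSQL cluster with this parameter group; SQL
# Server applications then connect over the TDS port (default 1433).
```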
