Amazon SAA-C03 Practice Test - Questions Answers, Page 57

A company maintains an Amazon RDS database that maps users to cost centers. The company has accounts in an organization in AWS Organizations. The company needs a solution that will tag all resources that are created in a specific AWS account in the organization. The solution must tag each resource with the cost center ID of the user who created the resource.

Which solution will meet these requirements?

A. Move the specific AWS account to a new organizational unit (OU) in Organizations from the management account. Create a service control policy (SCP) that requires all existing resources to have the correct cost center tag before the resources are created. Apply the SCP to the new OU.

B. Create an AWS Lambda function to tag the resources after the Lambda function looks up the appropriate cost center from the RDS database. Configure an Amazon EventBridge rule that reacts to AWS CloudTrail events to invoke the Lambda function.

C. Create an AWS CloudFormation stack to deploy an AWS Lambda function. Configure the Lambda function to look up the appropriate cost center from the RDS database and to tag resources. Create an Amazon EventBridge scheduled rule to invoke the CloudFormation stack.

D. Create an AWS Lambda function to tag the resources with a default value. Configure an Amazon EventBridge rule that reacts to AWS CloudTrail events to invoke the Lambda function when a resource is missing the cost center tag.
Suggested answer: B

Explanation:

AWS Lambda is a serverless compute service that lets you run code without provisioning or managing servers. Lambda can be used to tag resources with the cost center ID of the user who created the resource, by querying the RDS database that maps users to cost centers. Amazon EventBridge is a serverless event bus service that enables event-driven architectures. EventBridge can be configured to react to AWS CloudTrail events, which are recorded API calls made by or on behalf of the AWS account. EventBridge can invoke the Lambda function when a resource is created in the specific AWS account, passing the user identity and resource information as parameters. This solution will meet the requirements, as it enables automatic tagging of resources based on the user and cost center mapping.

1 provides an overview of AWS Lambda and its benefits.

2 provides an overview of Amazon EventBridge and its benefits.

3 explains the concept and benefits of AWS CloudTrail events.
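
As an illustration of option B, here is a minimal sketch of the Lambda handler for this pattern. The EventBridge event shape follows the standard CloudTrail-via-EventBridge format; the table and column names, the environment variables, and the focus on EC2 RunInstances events are assumptions for the example, not details from the question.

```python
import os

import boto3
import pymysql  # assumed to be packaged with the function or in a layer

ec2 = boto3.client("ec2")


def lookup_cost_center(user_arn):
    """Look up the cost center ID for a user in the RDS database."""
    conn = pymysql.connect(
        host=os.environ["DB_HOST"],
        user=os.environ["DB_USER"],
        password=os.environ["DB_PASSWORD"],
        database=os.environ["DB_NAME"],
    )
    try:
        with conn.cursor() as cur:
            cur.execute(
                "SELECT cost_center_id FROM user_cost_centers WHERE user_arn = %s",
                (user_arn,),
            )
            row = cur.fetchone()
        return row[0] if row else None
    finally:
        conn.close()


def handler(event, context):
    # EventBridge wraps the CloudTrail record in the "detail" field.
    detail = event["detail"]
    user_arn = detail["userIdentity"]["arn"]

    # Example for EC2 RunInstances; other services report the created
    # resource IDs in their own responseElements structures.
    instance_ids = [
        item["instanceId"]
        for item in detail["responseElements"]["instancesSet"]["items"]
    ]

    cost_center = lookup_cost_center(user_arn)
    if cost_center:
        ec2.create_tags(
            Resources=instance_ids,
            Tags=[{"Key": "CostCenter", "Value": cost_center}],
        )
```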

A company is migrating its multi-tier on-premises application to AWS. The application consists of a single-node MySQL database and a multi-node web tier. The company must minimize changes to the application during the migration. The company wants to improve application resiliency after the migration.

Which combination of steps will meet these requirements? (Select TWO.)

A. Migrate the web tier to Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer.

B. Migrate the database to Amazon EC2 instances in an Auto Scaling group behind a Network Load Balancer.

C. Migrate the database to an Amazon RDS Multi-AZ deployment.

D. Migrate the web tier to an AWS Lambda function.

E. Migrate the database to an Amazon DynamoDB table.
Suggested answer: A, C

Explanation:

An Auto Scaling group is a collection of EC2 instances that share similar characteristics and can be scaled in or out automatically based on demand. An Auto Scaling group can be placed behind an Application Load Balancer, which is a type of Elastic Load Balancing load balancer that distributes incoming traffic across multiple targets in multiple Availability Zones. This solution will improve the resiliency of the web tier by providing high availability, scalability, and fault tolerance. An Amazon RDS Multi-AZ deployment is a configuration that automatically creates a primary database instance and synchronously replicates the data to a standby instance in a different Availability Zone. When a failure occurs, Amazon RDS automatically fails over to the standby instance without manual intervention. This solution will improve the resiliency of the database tier by providing data redundancy, backup support, and availability. This combination of steps will meet the requirements with minimal changes to the application during the migration.

1 describes the concept and benefits of an Auto Scaling group.

2 provides an overview of Application Load Balancers and their benefits.

3 explains how Amazon RDS Multi-AZ deployments work and their benefits.
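
For option C, Multi-AZ is a single flag on the RDS instance. A minimal boto3 sketch, with placeholder identifiers and sizes:

```python
import boto3

rds = boto3.client("rds")

# Create a MySQL instance with a synchronous standby in a second AZ.
rds.create_db_instance(
    DBInstanceIdentifier="app-db",   # placeholder name
    Engine="mysql",
    DBInstanceClass="db.m6g.large",  # placeholder size
    AllocatedStorage=100,
    MasterUsername="admin",
    ManageMasterUserPassword=True,   # RDS stores the password in Secrets Manager
    MultiAZ=True,                    # enables automatic failover to the standby
)
```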

A company runs an SMB file server in its data center. The file server stores large files that the company frequently accesses for up to 7 days after the file creation date. After 7 days, the company needs to be able to access the files with a maximum retrieval time of 24 hours.

Which solution will meet these requirements?

A. Use AWS DataSync to copy data that is older than 7 days from the SMB file server to AWS.

B. Create an Amazon S3 File Gateway to increase the company's storage space. Create an S3 Lifecycle policy to transition the data to S3 Glacier Deep Archive after 7 days.

C. Create an Amazon FSx File Gateway to increase the company's storage space. Create an Amazon S3 Lifecycle policy to transition the data after 7 days.

D. Configure access to Amazon S3 for each user. Create an S3 Lifecycle policy to transition the data to S3 Glacier Flexible Retrieval after 7 days.
Suggested answer: B

Explanation:

Amazon S3 File Gateway is a service that provides a file-based interface to Amazon S3, which appears as a network file share. It enables you to store and retrieve Amazon S3 objects through standard file storage protocols such as SMB. S3 File Gateway also caches frequently accessed data locally for low-latency access, which covers the first 7 days of frequent access. An S3 Lifecycle policy lets you define rules that automate the management of objects throughout their lifecycle, including transitioning them to different storage classes based on age. S3 Glacier Deep Archive is the storage class with the lowest cost for long-term data archiving, with a standard retrieval time of 12 hours (48 hours for bulk retrievals), which satisfies the company's 24-hour maximum. This solution meets the requirements: the company stores large files in S3 with SMB file access, and the files move to S3 Glacier Deep Archive after 7 days for cost savings.

1 provides an overview of Amazon S3 File Gateway and its benefits.

2 explains how to use S3 Lifecycle policy to manage object storage lifecycle.

3 describes the features and use cases of S3 Glacier Deep Archive storage class.
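
The lifecycle rule from option B can be expressed in a few lines of boto3; the bucket name is a placeholder:

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="smb-file-gateway-bucket",  # placeholder: the File Gateway's backing bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "deep-archive-after-7-days",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply the rule to every object
                "Transitions": [
                    {"Days": 7, "StorageClass": "DEEP_ARCHIVE"},
                ],
            }
        ]
    },
)
```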

A company has a three-tier environment on AWS that ingests sensor data from its users' devices. The traffic flows through a Network Load Balancer (NLB), then to Amazon EC2 instances for the web tier, and finally to EC2 instances for the application tier that makes database calls.

What should a solutions architect do to improve the security of data in transit to the web tier?

A. Configure a TLS listener and add the server certificate on the NLB.

B. Configure AWS Shield Advanced and enable AWS WAF on the NLB.

C. Change the load balancer to an Application Load Balancer and attach AWS WAF to it.

D. Encrypt the Amazon Elastic Block Store (Amazon EBS) volumes on the EC2 instances by using AWS Key Management Service (AWS KMS).
Suggested answer: A

Explanation:

Configuring a TLS listener with a server certificate on the NLB encrypts the sensor data in transit before it reaches the web tier. The AWS Well-Architected Framework security question SEC 9, 'How do you protect your data in transit?', lists these best practices:

Implement secure key and certificate management: Store encryption keys and certificates securely and rotate them at appropriate time intervals while applying strict access control; for example, by using a certificate management service, such as AWS Certificate Manager (ACM).

Enforce encryption in transit: Enforce your defined encryption requirements based on appropriate standards and recommendations to help you meet your organizational, legal, and compliance requirements.

Automate detection of unintended data access: Use tools such as GuardDuty to automatically detect attempts to move data outside of defined boundaries based on data classification level, for example, to detect a trojan that is copying data to an unknown or untrusted network using the DNS protocol.

Authenticate network communications: Verify the identity of communications by using protocols that support authentication, such as Transport Layer Security (TLS) or IPsec.

https://wa.aws.amazon.com/wat.question.SEC_9.en.html
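
A minimal sketch of option A with boto3, assuming the NLB and target group already exist and the server certificate is managed in ACM (all ARNs are placeholders):

```python
import boto3

elbv2 = boto3.client("elbv2")

# Terminate TLS at the NLB using an ACM-managed server certificate.
elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/net/ingest-nlb/abc123",
    Protocol="TLS",
    Port=443,
    Certificates=[
        {"CertificateArn": "arn:aws:acm:us-east-1:111122223333:certificate/example-id"}
    ],
    DefaultActions=[
        {
            "Type": "forward",
            "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/web-tier/def456",
        }
    ],
)
```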


A company wants to migrate its on-premises Microsoft SQL Server Enterprise edition database to AWS. The company's online application uses the database to process transactions. The data analysis team uses the same production database to run reports for analytical processing. The company wants to reduce operational overhead by moving to managed services wherever possible.

Which solution will meet these requirements with the LEAST operational overhead?

A. Migrate to Amazon RDS for Microsoft SQL Server. Use read replicas for reporting purposes.

B. Migrate to Microsoft SQL Server on Amazon EC2. Use Always On read replicas for reporting purposes.

C. Migrate to Amazon DynamoDB. Use DynamoDB on-demand replicas for reporting purposes.

D. Migrate to Amazon Aurora MySQL. Use Aurora read replicas for reporting purposes.
Suggested answer: A

Explanation:

Amazon RDS for Microsoft SQL Server is a fully managed service that offers SQL Server 2014, 2016, 2017, and 2019 editions while offloading database administration tasks such as backups, patching, and scaling. Amazon RDS supports read replicas, which are read-only copies of the primary database that can be used for reporting purposes without affecting the performance of the online application. This solution will meet the requirements with the least operational overhead, as it does not require any code changes or manual intervention.

1 provides an overview of Amazon RDS for Microsoft SQL Server and its benefits.

2 explains how to create and use read replicas with Amazon RDS.
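
Adding the reporting replica from option A is a single API call; note that RDS for SQL Server read replicas require the Enterprise edition, which the company already uses. Instance names and class are placeholders:

```python
import boto3

rds = boto3.client("rds")

# Create a read-only copy of the production instance for the analysts.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="sqlserver-reporting",   # placeholder replica name
    SourceDBInstanceIdentifier="sqlserver-prod",  # placeholder source name
    DBInstanceClass="db.r6i.xlarge",              # placeholder size
)
```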

A company has deployed its newest product on AWS. The product runs in an Auto Scaling group behind a Network Load Balancer. The company stores the product's objects in an Amazon S3 bucket.

The company recently experienced malicious attacks against its systems. The company needs a solution that continuously monitors for malicious activity in the AWS account, workloads, and access patterns to the S3 bucket. The solution must also report suspicious activity and display the information on a dashboard.

Which solution will meet these requirements?

A. Configure Amazon Macie to monitor and report findings to AWS Config.

B. Configure Amazon Inspector to monitor and report findings to AWS CloudTrail.

C. Configure Amazon GuardDuty to monitor and report findings to AWS Security Hub.

D. Configure AWS Config to monitor and report findings to Amazon EventBridge.
Suggested answer: C

Explanation:

Amazon GuardDuty is a threat detection service that continuously monitors for malicious activity and unauthorized behavior across the AWS account and workloads. GuardDuty analyzes data sources such as AWS CloudTrail event logs, Amazon VPC Flow Logs, and DNS logs to identify potential threats such as compromised instances, reconnaissance, port scanning, and data exfiltration. GuardDuty can report its findings to AWS Security Hub, which is a service that provides a comprehensive view of the security posture of the AWS account and workloads. Security Hub aggregates, organizes, and prioritizes security alerts from multiple AWS services and partner solutions, and displays them on a dashboard. This solution will meet the requirements, as it enables continuous monitoring, reporting, and visualization of malicious activity in the AWS account, workloads, and access patterns to the S3 bucket.

1 provides an overview of Amazon GuardDuty and its benefits.

2 explains how GuardDuty generates and reports findings based on threat detection.

3 provides an overview of AWS Security Hub and its benefits.

4 describes how Security Hub collects and displays findings from multiple sources on a dashboard.
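
A sketch of how option C could be enabled with boto3. GuardDuty findings flow into Security Hub automatically once both services and their integration are enabled:

```python
import boto3

guardduty = boto3.client("guardduty")
securityhub = boto3.client("securityhub")

# Turn on GuardDuty, including monitoring of S3 data-access patterns.
guardduty.create_detector(
    Enable=True,
    DataSources={"S3Logs": {"Enable": True}},
)

# Turn on Security Hub, which aggregates GuardDuty findings (among
# others) and displays them on its dashboard.
securityhub.enable_security_hub(EnableDefaultStandards=True)
```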

A company is developing an application that will run on a production Amazon Elastic Kubernetes Service (Amazon EKS) cluster. The EKS cluster has managed node groups that are provisioned with On-Demand Instances.

The company needs a dedicated EKS cluster for development work. The company will use the development cluster infrequently to test the resiliency of the application. The EKS cluster must manage all the nodes.

Which solution will meet these requirements MOST cost-effectively?

A. Create a managed node group that contains only Spot Instances.

B. Create two managed node groups. Provision one node group with On-Demand Instances. Provision the second node group with Spot Instances.

C. Create an Auto Scaling group that has a launch configuration that uses Spot Instances. Configure the user data to add the nodes to the EKS cluster.

D. Create a managed node group that contains only On-Demand Instances.
Suggested answer: A

Explanation:

Spot Instances are EC2 instances that are available at up to a 90% discount compared to On-Demand prices. Spot Instances are suitable for stateless, fault-tolerant, and flexible workloads that can tolerate interruptions, such as the company's infrequently used development cluster. EC2 can reclaim Spot Instances when the demand for On-Demand capacity increases, but it provides a two-minute interruption notice before termination. EKS managed node groups automate the provisioning and lifecycle management of nodes for EKS clusters, so the cluster manages all the nodes, as required. Managed node groups can use Spot Instances to reduce costs and scale the cluster based on demand, and they support features such as Capacity Rebalancing and the capacity-optimized allocation strategy to improve the availability and resilience of Spot Instances. This solution meets the requirements most cost-effectively, as it uses the lowest-priced EC2 capacity and does not require any manual intervention.

1 explains how to create and use managed node groups with EKS.

2 describes how to use Spot Instances with managed node groups.

3 provides an overview of Spot Instances and their benefits.
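
A minimal sketch of option A with boto3; the cluster name, subnets, and node role ARN are placeholders. Listing several instance types diversifies the Spot pools, which improves the odds of obtaining capacity:

```python
import boto3

eks = boto3.client("eks")

eks.create_nodegroup(
    clusterName="dev-cluster",                            # placeholder
    nodegroupName="dev-spot-nodes",
    capacityType="SPOT",                                  # Spot-only managed node group
    instanceTypes=["m5.large", "m5a.large", "m4.large"],  # diversify Spot pools
    subnets=["subnet-aaaa1111", "subnet-bbbb2222"],       # placeholders
    nodeRole="arn:aws:iam::111122223333:role/eksNodeRole",  # placeholder
    scalingConfig={"minSize": 1, "maxSize": 4, "desiredSize": 1},
)
```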

A global company runs its applications in multiple AWS accounts in AWS Organizations. The company's applications use multipart uploads to upload data to multiple Amazon S3 buckets across AWS Regions. The company wants to report on incomplete multipart uploads for cost compliance purposes.

Which solution will meet these requirements with the LEAST operational overhead?

A. Configure AWS Config with a rule to report the incomplete multipart upload object count.

B. Create a service control policy (SCP) to report the incomplete multipart upload object count.

C. Configure S3 Storage Lens to report the incomplete multipart upload object count.

D. Create an S3 Multi-Region Access Point to report the incomplete multipart upload object count.
Suggested answer: C

Explanation:

S3 Storage Lens is a cloud storage analytics feature that provides organization-wide visibility into object storage usage and activity across multiple AWS accounts in AWS Organizations. S3 Storage Lens can report the incomplete multipart upload object count as one of the metrics that it collects and displays on an interactive dashboard in the S3 console. S3 Storage Lens can also export metrics in CSV or Parquet format to an S3 bucket for further analysis. This solution will meet the requirements with the least operational overhead, as it does not require any code development or policy changes.

1 explains how to use S3 Storage Lens to gain insights into S3 storage usage and activity.

2 describes the concept and benefits of multipart uploads.
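
A sketch of an organization-wide Storage Lens configuration (option C) in boto3. The account ID, organization ARN, and configuration ID are placeholders; the free metrics tier already includes incomplete-multipart-upload metrics:

```python
import boto3

s3control = boto3.client("s3control")

s3control.put_storage_lens_configuration(
    ConfigId="org-wide-lens",
    AccountId="111122223333",  # placeholder: the delegated admin account
    StorageLensConfiguration={
        "Id": "org-wide-lens",
        "IsEnabled": True,
        "AccountLevel": {"BucketLevel": {}},  # default (free) metrics
        # Aggregate metrics across every account in the organization.
        "AwsOrg": {
            "Arn": "arn:aws:organizations::111122223333:organization/o-exampleorgid"
        },
    },
)
```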

A company has deployed its application on Amazon EC2 instances with an Amazon RDS database. The company used the principle of least privilege to configure the database access credentials. The company's security team wants to protect the application and the database from SQL injection and other web-based attacks.

Which solution will meet these requirements with the LEAST operational overhead?

A. Use security groups and network ACLs to secure the database and application servers.

B. Use AWS WAF to protect the application. Use RDS parameter groups to configure the security settings.

C. Use AWS Network Firewall to protect the application and the database.

D. Use different database accounts in the application code for different functions. Avoid granting excessive privileges to the database users.
Suggested answer: B

Explanation:

AWS WAF is a web application firewall that helps protect web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources. AWS WAF lets users create rules that block, allow, or count web requests based on customizable web security rules. One of the rule types is the SQL injection match rule, which inspects components of a web request, such as the query string or the body, for malicious SQL code and blocks matching requests. By using AWS WAF to protect the application, the company can prevent SQL injection and other web-based attacks from reaching the application and the database.

RDS parameter groups are collections of parameters that define how a database instance operates. Users can modify the parameters in a parameter group to change the behavior and performance of the database. By using RDS parameter groups to configure the security settings, the company can enforce best practices such as disabling remote root login, requiring SSL connections, and limiting the maximum number of connections.

The other options are not correct because they do not effectively protect the application and the database from SQL injection and other web-based attacks. Using security groups and network ACLs to secure the database and application servers is not sufficient because they only filter traffic at the network layer, not at the application layer. Using AWS Network Firewall to protect the application and the database is not necessary because it is a stateful firewall service that provides network protection for VPCs, not for individual applications or databases. Using different database accounts in the application code for different functions is a good practice, but it does not prevent SQL injection attacks from exploiting vulnerabilities in the application code.

AWS WAF

How AWS WAF works

Working with IP match conditions

Working with DB parameter groups

Amazon RDS security best practices
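
A sketch of option B's WAF portion using the wafv2 API: a web ACL that attaches the AWS managed SQL injection rule group and is then associated with the load balancer in front of the application. This assumes an Application Load Balancer fronts the EC2 instances, since AWS WAF attaches to resources such as ALBs rather than to instances directly; the names and ARNs are placeholders.

```python
import boto3

wafv2 = boto3.client("wafv2")

response = wafv2.create_web_acl(
    Name="app-protection",
    Scope="REGIONAL",  # regional scope covers ALBs and API Gateway
    DefaultAction={"Allow": {}},
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "appProtection",
    },
    Rules=[
        {
            "Name": "sqli",
            "Priority": 0,
            # AWS managed rule group that detects SQL injection patterns.
            "Statement": {
                "ManagedRuleGroupStatement": {
                    "VendorName": "AWS",
                    "Name": "AWSManagedRulesSQLiRuleSet",
                }
            },
            "OverrideAction": {"None": {}},
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "sqli",
            },
        }
    ],
)

# Attach the web ACL to the load balancer in front of the application.
wafv2.associate_web_acl(
    WebACLArn=response["Summary"]["ARN"],
    ResourceArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/web/abc123",  # placeholder
)
```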

A research company uses on-premises devices to generate data for analysis. The company wants to use the AWS Cloud to analyze the data. The devices generate .csv files and support writing the data to an SMB file share. Company analysts must be able to use SQL commands to query the data. The analysts will run queries periodically throughout the day.

Which combination of steps will meet these requirements MOST cost-effectively? (Select THREE.)

A. Deploy an AWS Storage Gateway on premises in Amazon S3 File Gateway mode.

B. Deploy an AWS Storage Gateway on premises in Amazon FSx File Gateway mode.

C. Set up an AWS Glue crawler to create a table based on the data that is in Amazon S3.

D. Set up an Amazon EMR cluster with EMR File System (EMRFS) to query the data that is in Amazon S3. Provide access to analysts.

E. Set up an Amazon Redshift cluster to query the data that is in Amazon S3. Provide access to analysts.

F. Set up Amazon Athena to query the data that is in Amazon S3. Provide access to analysts.
Suggested answer: A, C, F

Explanation:

To meet the requirements of the use case in a cost-effective way, the following steps are recommended:

Deploy an AWS Storage Gateway on premises in Amazon S3 File Gateway mode. This will allow the company to write the .csv files generated by the devices to an SMB file share, which will be stored as objects in Amazon S3 buckets. AWS Storage Gateway is a hybrid cloud storage service that integrates on-premises environments with AWS storage. Amazon S3 File Gateway mode provides a seamless way to connect to Amazon S3 and access a virtually unlimited amount of cloud storage [1].

Set up an AWS Glue crawler to create a table based on the data that is in Amazon S3. This will enable the company to use standard SQL to query the data stored in Amazon S3 buckets. AWS Glue is a serverless data integration service that simplifies data preparation and analysis. AWS Glue crawlers can automatically discover and classify data from various sources and create metadata tables in the AWS Glue Data Catalog [2]. The Data Catalog is a central repository that stores information about data sources and how to access them [3].

Set up Amazon Athena to query the data that is in Amazon S3. This will provide the company analysts with a serverless and interactive query service that can analyze data directly in Amazon S3 using standard SQL. Amazon Athena is integrated with the AWS Glue Data Catalog, so users can easily point Athena at the data source tables defined by the crawlers. Amazon Athena charges only for the queries that are run, and offers a pay-per-query pricing model, which makes it a cost-effective option for periodic queries [4].

The other options are not correct because they are either not cost-effective or not suitable for the use case. Deploying an AWS Storage Gateway on premises in Amazon FSx File Gateway mode is not correct because this mode provides low-latency access to fully managed Windows file shares in AWS, which is not required for the use case. Setting up an Amazon EMR cluster with EMR File System (EMRFS) to query the data that is in Amazon S3 is not correct because this option involves setting up and managing a cluster of EC2 instances, which adds complexity and cost to the solution. Setting up an Amazon Redshift cluster to query the data that is in Amazon S3 is not correct because this option also involves provisioning and managing a cluster of nodes, which adds overhead and cost to the solution.

What is AWS Storage Gateway?

What is AWS Glue?

AWS Glue Data Catalog

What is Amazon Athena?
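
Steps C and F can be sketched in a few boto3 calls. The crawler role, bucket paths, database name, and resulting table name are assumptions for illustration:

```python
import boto3

glue = boto3.client("glue")
athena = boto3.client("athena")

# Crawl the .csv objects written by the File Gateway and register a table.
glue.create_crawler(
    Name="sensor-csv-crawler",
    Role="arn:aws:iam::111122223333:role/GlueCrawlerRole",  # placeholder
    DatabaseName="sensor_data",
    Targets={"S3Targets": [{"Path": "s3://research-sensor-bucket/csv/"}]},
)
glue.start_crawler(Name="sensor-csv-crawler")

# Once the crawler has populated the Data Catalog, analysts query with SQL.
# The table name ("csv" here) is derived by the crawler from the S3 path.
athena.start_query_execution(
    QueryString="SELECT device_id, AVG(reading) AS avg_reading "
                "FROM sensor_data.csv GROUP BY device_id",
    QueryExecutionContext={"Database": "sensor_data"},
    ResultConfiguration={
        "OutputLocation": "s3://research-sensor-bucket/athena-results/"
    },
)
```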
