Amazon SAA-C03 Practice Test - Questions Answers, Page 4

A company that hosts its web application on AWS wants to ensure that all Amazon EC2 instances, Amazon RDS DB instances, and Amazon Redshift clusters are configured with tags. The company wants to minimize the effort of configuring and operating this check.

What should a solutions architect do to accomplish this?

A. Use AWS Config rules to define and detect resources that are not properly tagged.
B. Use Cost Explorer to display resources that are not properly tagged. Tag those resources manually.
C. Write API calls to check all resources for proper tag allocation. Periodically run the code on an EC2 instance.
D. Write API calls to check all resources for proper tag allocation. Schedule an AWS Lambda function through Amazon CloudWatch to periodically run the code.
Suggested answer: A

Explanation:

To ensure that all Amazon EC2 instances, Amazon RDS DB instances, and Amazon Redshift clusters are configured with tags, a solutions architect should use AWS Config rules to define and detect resources that are not properly tagged. AWS Config rules are customizable rules that AWS Config uses to evaluate AWS resource configurations for compliance with best practices and company policies. Using AWS Config rules minimizes the effort of configuring and operating this check because it automates the identification of non-compliant resources and the notification of the responsible teams.

Reference: AWS Config Developer Guide: AWS Config Rules (https://docs.aws.amazon.com/config/latest/developerguide/evaluate-config_use-managed-rules.html)
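For illustration only, a minimal sketch of how this check might be deployed with the AWS Config managed rule REQUIRED_TAGS; the rule name, tag keys, and resource scope below are example assumptions, not part of the question:

```python
import boto3

config = boto3.client("config")

# Deploy the AWS managed rule REQUIRED_TAGS so that AWS Config flags
# EC2 instances, RDS DB instances, and Redshift clusters missing the tags.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "required-tags-check",  # hypothetical rule name
        "Scope": {
            "ComplianceResourceTypes": [
                "AWS::EC2::Instance",
                "AWS::RDS::DBInstance",
                "AWS::Redshift::Cluster",
            ]
        },
        "Source": {"Owner": "AWS", "SourceIdentifier": "REQUIRED_TAGS"},
        # Tag keys to require; example values only.
        "InputParameters": '{"tag1Key": "Environment", "tag2Key": "CostCenter"}',
    }
)
```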


A development team needs to host a website that will be accessed by other teams. The website contents consist of HTML, CSS, client-side JavaScript, and images. Which method is the MOST cost-effective for hosting the website?

A. Containerize the website and host it in AWS Fargate.
B. Create an Amazon S3 bucket and host the website there.
C. Deploy a web server on an Amazon EC2 instance to host the website.
D. Configure an Application Load Balancer with an AWS Lambda target that uses the Express.js framework.
Suggested answer: B

Explanation:

In static websites, the web pages returned by the server are prebuilt.

They use simple languages such as HTML, CSS, or JavaScript.

There is no server-side processing of content in static websites: the server returns the pages unchanged, so static websites are fast, and there is no interaction with databases.

They are also less costly, because the host does not need to support server-side processing in different languages.

In dynamic websites, the web pages are not prebuilt; they are built at runtime according to the user's request. These sites use server-side scripting languages such as PHP, Node.js, ASP.NET, and many more supported by the server. They are therefore slower than static websites, but updates and interaction with databases are possible.
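A minimal sketch of enabling static website hosting on an S3 bucket with boto3; the bucket name, document names, and page body are placeholders for the example:

```python
import boto3

s3 = boto3.client("s3")
bucket = "example-team-website"  # placeholder bucket name

# Turn on static website hosting; S3 then serves index.html directly,
# with no server-side processing to run or pay for.
s3.put_bucket_website(
    Bucket=bucket,
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)

# Upload a page with the correct content type.
s3.put_object(
    Bucket=bucket,
    Key="index.html",
    Body=b"<html><body><h1>Team site</h1></body></html>",
    ContentType="text/html",
)
```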

A company runs an online marketplace web application on AWS. The application serves hundreds of thousands of users during peak hours. The company needs a scalable, near-real-time solution to share the details of millions of financial transactions with several other internal applications. Transactions also need to be processed to remove sensitive data before being stored in a document database for low-latency retrieval. What should a solutions architect recommend to meet these requirements?

A. Store the transactions data in Amazon DynamoDB. Set up a rule in DynamoDB to remove sensitive data from every transaction upon write. Use DynamoDB Streams to share the transactions data with other applications.
B. Stream the transactions data into Amazon Kinesis Data Firehose to store data in Amazon DynamoDB and Amazon S3. Use AWS Lambda integration with Kinesis Data Firehose to remove sensitive data. Other applications can consume the data stored in Amazon S3.
C. Stream the transactions data into Amazon Kinesis Data Streams. Use AWS Lambda integration to remove sensitive data from every transaction and then store the transactions data in Amazon DynamoDB. Other applications can consume the transactions data off the Kinesis data stream.
D. Store the batched transactions data in Amazon S3 as files. Use AWS Lambda to process every file and remove sensitive data before updating the files in Amazon S3. The Lambda function then stores the data in Amazon DynamoDB. Other applications can consume transaction files stored in Amazon S3.
Suggested answer: C

Explanation:

Kinesis Data Firehose can send data records to various destinations, including Amazon Simple Storage Service (Amazon S3), Amazon Redshift, Amazon OpenSearch Service, and any HTTP endpoint that is owned by you or by any of your third-party service providers. The following are the supported destinations:

* Amazon OpenSearch Service

* Amazon S3

* Datadog

* Dynatrace

* Honeycomb

* HTTP Endpoint

* Logic Monitor

* MongoDB Cloud

* New Relic

* Splunk

* Sumo Logic

Amazon DynamoDB is not among the supported destinations, which rules out option B: Kinesis Data Firehose cannot deliver records to DynamoDB directly, so a Kinesis data stream with an AWS Lambda consumer is needed to transform each transaction and store it in DynamoDB.

https://docs.aws.amazon.com/firehose/latest/dev/create-name.html

https://aws.amazon.com/kinesis/data-streams/

Amazon Kinesis Data Streams (KDS) is a massively scalable and durable real-time data streaming service. KDS can continuously capture gigabytes of data per second from hundreds of thousands of sources such as website clickstreams, database event streams, financial transactions, social media feeds, IT logs, and location-tracking events.
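A rough sketch of the Lambda consumer described in option C; the table name and sensitive field names are invented for the example, and the event shape assumes the standard Kinesis-to-Lambda integration:

```python
import base64
import json

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("transactions")  # hypothetical table name


def handler(event, context):
    """Triggered by a Kinesis data stream; scrubs sensitive fields,
    then stores the cleaned transaction in DynamoDB."""
    for record in event["Records"]:
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))

        # Remove sensitive attributes before persisting (example field names).
        for sensitive_field in ("card_number", "ssn"):
            payload.pop(sensitive_field, None)

        table.put_item(Item=payload)
```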

A company hosts its multi-tier applications on AWS. For compliance, governance, auditing, and security, the company must track configuration changes on its AWS resources and record a history of API calls made to these resources. What should a solutions architect do to meet these requirements?

A. Use AWS CloudTrail to track configuration changes and AWS Config to record API calls.
B. Use AWS Config to track configuration changes and AWS CloudTrail to record API calls.
C. Use AWS Config to track configuration changes and Amazon CloudWatch to record API calls.
D. Use AWS CloudTrail to track configuration changes and Amazon CloudWatch to record API calls.
Suggested answer: B

Explanation:

AWS Config is a fully managed service that allows the company to assess, audit, and evaluate the configurations of its AWS resources. It provides a detailed inventory of the resources in use and tracks changes to resource configurations. AWS Config can detect configuration changes and alert the company when changes occur. It also provides a historical view of changes, which is essential for compliance and governance purposes. AWS CloudTrail is a fully managed service that provides a detailed history of API calls made to the company's AWS resources. It records all API activity in the AWS account, including who made the API call, when the call was made, and what resources were affected by the call. This information is critical for security and auditing purposes, as it allows the company to investigate any suspicious activity that might occur on its AWS resources.
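For illustration, a short sketch of how the two services are queried with boto3; the resource ID and the time window are placeholders:

```python
from datetime import datetime, timedelta

import boto3

config = boto3.client("config")
cloudtrail = boto3.client("cloudtrail")

# AWS Config: configuration change history for one EC2 instance.
history = config.get_resource_config_history(
    resourceType="AWS::EC2::Instance",
    resourceId="i-0123456789abcdef0",  # placeholder instance ID
)

# AWS CloudTrail: API calls recorded against the same resource.
events = cloudtrail.lookup_events(
    LookupAttributes=[
        {"AttributeKey": "ResourceName", "AttributeValue": "i-0123456789abcdef0"}
    ],
    StartTime=datetime.utcnow() - timedelta(days=7),
    EndTime=datetime.utcnow(),
)

for e in events["Events"]:
    print(e["EventName"], e["EventTime"])
```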


A company is preparing to launch a public-facing web application in the AWS Cloud. The architecture consists of Amazon EC2 instances within a VPC behind an Elastic Load Balancer (ELB). A third-party service is used for the DNS. The company's solutions architect must recommend a solution to detect and protect against large-scale DDoS attacks. Which solution meets these requirements?

A. Enable Amazon GuardDuty on the account.
B. Enable Amazon Inspector on the EC2 instances.
C. Enable AWS Shield and assign Amazon Route 53 to it.
D. Enable AWS Shield Advanced and assign the ELB to it.
Suggested answer: D

Explanation:

AWS Shield Advanced provides expanded detection and mitigation of large-scale DDoS attacks for protected resources such as Elastic Load Balancing load balancers, Amazon CloudFront distributions, Amazon Route 53 hosted zones, and AWS Global Accelerator accelerators. Assigning the ELB to Shield Advanced therefore meets the requirement even though DNS is hosted by a third party.

https://aws.amazon.com/shield/faqs/
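A hedged sketch of putting the load balancer under Shield Advanced protection; the protection name and ARN are placeholders, and the account must already hold a Shield Advanced subscription:

```python
import boto3

shield = boto3.client("shield")

# One-time subscription to Shield Advanced (carries a monthly fee).
# shield.create_subscription()

# Protect the Application Load Balancer against large-scale DDoS attacks.
shield.create_protection(
    Name="web-app-alb-protection",  # hypothetical protection name
    ResourceArn=(
        "arn:aws:elasticloadbalancing:us-east-1:111122223333:"
        "loadbalancer/app/web-app/1234567890abcdef"  # placeholder ALB ARN
    ),
)
```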

A company is building an application in the AWS Cloud. The application will store data in Amazon S3 buckets in two AWS Regions. The company must use an AWS Key Management Service (AWS KMS) customer managed key to encrypt all data that is stored in the S3 buckets. The data in both S3 buckets must be encrypted and decrypted with the same KMS key. The data and the key must be stored in each of the two Regions. Which solution will meet these requirements with the LEAST operational overhead?

A. Create an S3 bucket in each Region. Configure the S3 buckets to use server-side encryption with Amazon S3 managed encryption keys (SSE-S3). Configure replication between the S3 buckets.
B. Create a customer managed multi-Region KMS key. Create an S3 bucket in each Region. Configure replication between the S3 buckets. Configure the application to use the KMS key with client-side encryption.
C. Create a customer managed KMS key and an S3 bucket in each Region. Configure the S3 buckets to use server-side encryption with Amazon S3 managed encryption keys (SSE-S3). Configure replication between the S3 buckets.
D. Create a customer managed KMS key and an S3 bucket in each Region. Configure the S3 buckets to use server-side encryption with AWS KMS keys (SSE-KMS). Configure replication between the S3 buckets.
Suggested answer: B

Explanation:

AWS KMS multi-Region keys are a set of interoperable KMS keys that share the same key ID and key material across AWS Regions. Because the primary key and its replica are effectively the same key, data encrypted under the key in one Region can be decrypted with the related key in the other Region, which satisfies the requirement that both the data and the key exist in each of the two Regions. A customer managed multi-Region key used with client-side encryption, combined with replication between the S3 buckets, therefore meets the requirements with the least operational overhead.

https://docs.aws.amazon.com/kms/latest/developerguide/multi-region-keys-overview.html
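A minimal sketch of creating the multi-Region key and replicating it into the second Region; the Region names and description are placeholders:

```python
import boto3

kms = boto3.client("kms", region_name="us-east-1")

# Create the primary multi-Region customer managed key.
primary = kms.create_key(
    Description="Primary key for cross-Region S3 data",  # example description
    MultiRegion=True,
)
key_id = primary["KeyMetadata"]["KeyId"]

# Replicate it into the second Region; the replica shares the same key
# material and key ID, so data encrypted in one Region can be decrypted
# in the other.
kms.replicate_key(KeyId=key_id, ReplicaRegion="us-west-2")
```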

A company recently launched a variety of new workloads on Amazon EC2 instances in its AWS account. The company needs to create a strategy to access and administer the instances remotely and securely. The company needs to implement a repeatable process that works with native AWS services and follows the AWS Well-Architected Framework. Which solution will meet these requirements with the LEAST operational overhead?

A. Use the EC2 serial console to directly access the terminal interface of each instance for administration.
B. Attach the appropriate IAM role to each existing instance and new instance. Use AWS Systems Manager Session Manager to establish a remote SSH session.
C. Create an administrative SSH key pair. Load the public key into each EC2 instance. Deploy a bastion host in a public subnet to provide a tunnel for administration of each instance.
D. Establish an AWS Site-to-Site VPN connection. Instruct administrators to use their local on-premises machines to connect directly to the instances by using SSH keys across the VPN tunnel.
Suggested answer: B

Explanation:

https://docs.aws.amazon.com/systems-manager/latest/userguide/setup-launch-managed-instance.html
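As a rough sketch, the per-instance setup amounts to granting the instance role the Systems Manager managed policy and confirming that instances register as managed instances; the role name below is a placeholder:

```python
import boto3

iam = boto3.client("iam")
ssm = boto3.client("ssm")

# Grant the instance role the managed policy Session Manager requires.
iam.attach_role_policy(
    RoleName="ec2-admin-role",  # hypothetical role already attached to the instances
    PolicyArn="arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore",
)

# Verify the instances have registered with Systems Manager; once they appear
# here, administrators can open sessions without SSH keys, bastion hosts,
# or open inbound ports.
for info in ssm.describe_instance_information()["InstanceInformationList"]:
    print(info["InstanceId"], info["PingStatus"])
```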

A company is hosting a static website on Amazon S3 and is using Amazon Route 53 for DNS. The website is experiencing increased demand from around the world. The company must decrease latency for users who access the website. Which solution meets these requirements MOST cost-effectively?

A. Replicate the S3 bucket that contains the website to all AWS Regions. Add Route 53 geolocation routing entries.
B. Provision accelerators in AWS Global Accelerator. Associate the supplied IP addresses with the S3 bucket. Edit the Route 53 entries to point to the IP addresses of the accelerators.
C. Add an Amazon CloudFront distribution in front of the S3 bucket. Edit the Route 53 entries to point to the CloudFront distribution.
D. Enable S3 Transfer Acceleration on the bucket. Edit the Route 53 entries to point to the new endpoint.
Suggested answer: C

Explanation:

Amazon CloudFront is a content delivery network (CDN) that caches content at edge locations around the world, providing low latency and high transfer speeds to users accessing the content. Adding a CloudFront distribution in front of the S3 bucket will cache the static website's content at edge locations worldwide, decreasing latency for users who access the website. This is also the most cost-effective of the options, because CloudFront serves repeat requests from its edge caches and reduces the number of requests that reach the S3 origin.
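A hedged sketch of pointing the Route 53 record at the CloudFront distribution; the hosted zone ID, domain name, and distribution domain name are placeholders, while Z2FDTNDATAQYW2 is the fixed hosted zone ID used for CloudFront alias targets:

```python
import boto3

route53 = boto3.client("route53")

# Point the site's A record at the CloudFront distribution as an alias record.
route53.change_resource_record_sets(
    HostedZoneId="Z0000000EXAMPLE",  # placeholder hosted zone for example.com
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "www.example.com",
                    "Type": "A",
                    "AliasTarget": {
                        "HostedZoneId": "Z2FDTNDATAQYW2",      # CloudFront's fixed zone ID
                        "DNSName": "d1234abcd.cloudfront.net",  # placeholder distribution
                        "EvaluateTargetHealth": False,
                    },
                },
            }
        ]
    },
)
```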


A company maintains a searchable repository of items on its website. The data is stored in an Amazon RDS for MySQL database table that contains more than 10 million rows. The database has 2 TB of General Purpose SSD storage. There are millions of updates against this data every day through the company's website. The company has noticed that some insert operations are taking 10 seconds or longer. The company has determined that the database storage performance is the problem. Which solution addresses this performance issue?

A. Change the storage type to Provisioned IOPS SSD.
B. Change the DB instance to a memory optimized instance class.
C. Change the DB instance to a burstable performance instance class.
D. Enable Multi-AZ RDS read replicas with MySQL native asynchronous replication.
Suggested answer: A

Explanation:

https://aws.amazon.com/ebs/features/

"Provisioned IOPS volumes are backed by solid-state drives (SSDs) and are the highest performance EBS volumes designed for your critical, I/O intensive database applications. These volumes are ideal for both IOPS-intensive and throughput-intensive workloads that require extremely low latency."

A company has thousands of edge devices that collectively generate 1 TB of status alerts each day.

Each alert is approximately 2 KB in size. A solutions architect needs to implement a solution to ingest and store the alerts for future analysis. The company wants a highly available solution. However, the company needs to minimize costs and does not want to manage additional infrastructure. Additionally, the company wants to keep 14 days of data available for immediate analysis and archive any data older than 14 days.

What is the MOST operationally efficient solution that meets these requirements?

A. Create an Amazon Kinesis Data Firehose delivery stream to ingest the alerts. Configure the Kinesis Data Firehose stream to deliver the alerts to an Amazon S3 bucket. Set up an S3 Lifecycle configuration to transition data to Amazon S3 Glacier after 14 days.
B. Launch Amazon EC2 instances across two Availability Zones and place them behind an Elastic Load Balancer to ingest the alerts. Create a script on the EC2 instances that will store the alerts in an Amazon S3 bucket. Set up an S3 Lifecycle configuration to transition data to Amazon S3 Glacier after 14 days.
C. Create an Amazon Kinesis Data Firehose delivery stream to ingest the alerts. Configure the Kinesis Data Firehose stream to deliver the alerts to an Amazon Elasticsearch Service (Amazon ES) cluster. Set up the Amazon ES cluster to take manual snapshots every day and delete data from the cluster that is older than 14 days.
D. Create an Amazon Simple Queue Service (Amazon SQS) standard queue to ingest the alerts, and set the message retention period to 14 days. Configure consumers to poll the SQS queue, check the age of each message, and analyze the message data as needed. If the message is 14 days old, the consumer should copy the message to an Amazon S3 bucket and delete the message from the SQS queue.
Suggested answer: A

Explanation:

https://aws.amazon.com/kinesis/data-firehose/features/
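A rough sketch of the ingest-and-archive pipeline from option A; the stream name, bucket, and role ARN are placeholders:

```python
import boto3

firehose = boto3.client("firehose")
s3 = boto3.client("s3")

# Firehose delivery stream that batches incoming alerts into the S3 bucket.
firehose.create_delivery_stream(
    DeliveryStreamName="edge-device-alerts",  # placeholder stream name
    DeliveryStreamType="DirectPut",
    ExtendedS3DestinationConfiguration={
        "RoleARN": "arn:aws:iam::111122223333:role/firehose-delivery-role",  # placeholder
        "BucketARN": "arn:aws:s3:::edge-device-alerts-bucket",               # placeholder
    },
)

# Lifecycle rule: keep 14 days of data available in S3, then transition to Glacier.
s3.put_bucket_lifecycle_configuration(
    Bucket="edge-device-alerts-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-after-14-days",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},
                "Transitions": [{"Days": 14, "StorageClass": "GLACIER"}],
            }
        ]
    },
)
```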
