Amazon SAA-C03 Practice Test - Questions Answers, Page 80
A company has an application that serves clients that are deployed in more than 20,000 retail storefront locations around the world. The application consists of backend web services that are exposed over HTTPS on port 443. The application is hosted on Amazon EC2 instances behind an Application Load Balancer (ALB). The retail locations communicate with the web application over the public internet. The company allows each retail location to register the IP address that the retail location has been allocated by its local ISP.

The company's security team recommends increasing the security of the application endpoint by restricting access to only the IP addresses registered by the retail locations.

What should a solutions architect do to meet these requirements?

A. Associate an AWS WAF web ACL with the ALB. Use IP rule sets on the ALB to filter traffic. Update the IP addresses in the rule to include the registered IP addresses.

B. Deploy AWS Firewall Manager to manage the ALB. Configure firewall rules to restrict traffic to the ALB. Modify the firewall rules to include the registered IP addresses.

C. Store the IP addresses in an Amazon DynamoDB table. Configure an AWS Lambda authorization function on the ALB to validate that incoming requests are from the registered IP addresses.

D. Configure the network ACL on the subnet that contains the public interface of the ALB. Update the ingress rules on the network ACL with entries for each of the registered IP addresses.
Suggested answer: A

Explanation:

AWS WAF (Web Application Firewall): AWS WAF allows you to create custom rules to block or allow web requests based on conditions that you specify.

Web ACL (Access Control List):

Create a web ACL and associate it with the ALB.

Use IP rule sets to specify the IP addresses of the retail locations that are allowed to access the application.

Security and Flexibility:

AWS WAF provides a scalable way to manage access control, ensuring that only traffic from registered IP addresses is allowed.

You can dynamically update the IP rule sets to add or remove IP addresses as needed.

Operational Simplicity: Using AWS WAF with a web ACL is straightforward and integrates seamlessly with the ALB, providing an efficient solution for managing access control based on IP addresses.

References: AWS WAF; How AWS WAF Works
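
For illustration, a minimal boto3 sketch of this pattern: an IP set holding the registered addresses, a web ACL that blocks by default and allows only that set, and the association with the ALB. The names, ARN, and addresses below are placeholders, not values from the question.

```python
import boto3

# Hypothetical identifiers for illustration only.
ALB_ARN = "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/retail-alb/abc123"
REGISTERED_CIDRS = ["203.0.113.10/32", "198.51.100.25/32"]  # one /32 per storefront

wafv2 = boto3.client("wafv2")

# Create an IP set holding the registered storefront addresses.
ip_set = wafv2.create_ip_set(
    Name="retail-storefront-ips",
    Scope="REGIONAL",            # ALBs use REGIONAL scope (CLOUDFRONT is for CloudFront)
    IPAddressVersion="IPV4",
    Addresses=REGISTERED_CIDRS,
)["Summary"]

# Create a web ACL that blocks by default and allows only the IP set.
web_acl = wafv2.create_web_acl(
    Name="retail-allowlist-acl",
    Scope="REGIONAL",
    DefaultAction={"Block": {}},  # deny anything not explicitly allowed
    Rules=[{
        "Name": "allow-registered-ips",
        "Priority": 0,
        "Statement": {"IPSetReferenceStatement": {"ARN": ip_set["ARN"]}},
        "Action": {"Allow": {}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "AllowRegisteredIPs",
        },
    }],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "RetailAllowlistACL",
    },
)["Summary"]

# Attach the web ACL to the ALB.
wafv2.associate_web_acl(WebACLArn=web_acl["ARN"], ResourceArn=ALB_ARN)
```

As storefronts register or change addresses, the list can be refreshed with update_ip_set, which replaces the address list under an optimistic LockToken.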

A company hosts its application on several Amazon EC2 instances inside a VPC. The company creates a dedicated Amazon S3 bucket for each customer to store their relevant information in Amazon S3.

The company wants to ensure that the application running on EC2 instances can securely access only the S3 buckets that belong to the company's AWS account.

Which solution will meet these requirements with the LEAST operational overhead?

A. Create a gateway endpoint for Amazon S3 that is attached to the VPC. Update the IAM instance profile policy to provide access to only the specific buckets that the application needs.

B. Create a NAT gateway in a public subnet with a security group that allows access to only Amazon S3. Update the route tables to use the NAT gateway.

C. Create a gateway endpoint for Amazon S3 that is attached to the VPC. Update the IAM instance profile policy with a Deny action and the following condition key:

D. Create a NAT gateway in a public subnet. Update route tables to use the NAT gateway. Assign bucket policies for all buckets with a Deny action and the following condition key:
Suggested answer: A
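
For illustration, a minimal boto3 sketch of option A: an S3 gateway endpoint plus an instance profile policy scoped to specific buckets. All identifiers below are placeholders.

```python
import json
import boto3

# Placeholder identifiers for illustration.
VPC_ID = "vpc-0abc1234"
ROUTE_TABLE_IDS = ["rtb-0abc1234"]
ROLE_NAME = "app-instance-role"        # role behind the EC2 instance profile
BUCKET_ARNS = ["arn:aws:s3:::customer-bucket-1", "arn:aws:s3:::customer-bucket-2"]

ec2 = boto3.client("ec2")
iam = boto3.client("iam")

# A gateway endpoint keeps S3 traffic on the AWS network (no NAT gateway needed).
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId=VPC_ID,
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=ROUTE_TABLE_IDS,
)

# Instance profile policy: allow access only to the specific buckets.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
        "Resource": BUCKET_ARNS + [f"{arn}/*" for arn in BUCKET_ARNS],
    }],
}
iam.put_role_policy(
    RoleName=ROLE_NAME,
    PolicyName="app-s3-access",
    PolicyDocument=json.dumps(policy),
)
```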

A company stores user data in AWS. The data is used continuously with peak usage during business hours. Access patterns vary, with some data not being used for months at a time. A solutions architect must choose a cost-effective solution that provides the highest level of durability while maintaining high availability.

Which storage solution meets these requirements?

A. Amazon S3 Standard

B. Amazon S3 Intelligent-Tiering

C. Amazon S3 Glacier Deep Archive

D. Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA)
Suggested answer: B

Explanation:

Amazon S3 Intelligent-Tiering is the most cost-effective solution for this scenario, providing both high availability and durability while adjusting automatically to changing access patterns. It moves data across two access tiers: one optimized for frequent access and another for infrequent access, based on usage patterns. This tiering ensures that the company avoids paying for unused storage while also keeping frequently accessed data in a more accessible tier.

Key AWS references and benefits of S3 Intelligent-Tiering:

High Durability and Availability: Amazon S3 offers 99.999999999% durability and 99.9% availability for objects stored, ensuring data is always protected.

Automatic Tiering: Data is automatically moved between tiers based on access patterns, making it ideal for workloads with unpredictable or variable access patterns.

No Retrieval Fees: Unlike S3 One Zone-IA or Glacier, there are no retrieval fees, making this more cost-effective in scenarios where access patterns vary over time.

AWS Documentation: According to the AWS Well-Architected Framework under the Cost Optimization Pillar, S3 Intelligent-Tiering is recommended for storage when access patterns change over time, as it minimizes costs while maintaining availability.
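
For illustration, a short boto3 sketch (bucket and key names are placeholders): new objects can be written directly to Intelligent-Tiering, and existing objects can be transitioned with a lifecycle rule.

```python
import boto3

s3 = boto3.client("s3")

# Write new objects straight into Intelligent-Tiering.
s3.put_object(
    Bucket="user-data-bucket",
    Key="profiles/user-123.json",
    Body=b'{"example": true}',
    StorageClass="INTELLIGENT_TIERING",
)

# Alternatively, transition existing objects with a lifecycle rule.
s3.put_bucket_lifecycle_configuration(
    Bucket="user-data-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "to-intelligent-tiering",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # apply to all objects
            "Transitions": [{"Days": 0, "StorageClass": "INTELLIGENT_TIERING"}],
        }],
    },
)
```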

A medical company wants to perform transformations on a large amount of clinical trial data that comes from several customers. The company must extract the data from a relational database that contains the customer data. Then the company will transform the data by using a series of complex rules. The company will load the data to Amazon S3 when the transformations are complete.

All data must be encrypted where it is processed before the company stores the data in Amazon S3. All data must be encrypted by using customer-specific keys.

Which solution will meet these requirements with the LEAST amount of operational effort?

A. Create one AWS Glue job for each customer. Attach a security configuration to each job that uses server-side encryption with Amazon S3 managed keys (SSE-S3) to encrypt the data.

B. Create one Amazon EMR cluster for each customer. Attach a security configuration to each cluster that uses client-side encryption with a custom client-side root key (CSE-Custom) to encrypt the data.

C. Create one AWS Glue job for each customer. Attach a security configuration to each job that uses client-side encryption with AWS KMS managed keys (CSE-KMS) to encrypt the data.

D. Create one Amazon EMR cluster for each customer. Attach a security configuration to each cluster that uses server-side encryption with AWS KMS keys (SSE-KMS) to encrypt the data.
Suggested answer: C

Explanation:

AWS Glue jobs are designed for extract, transform, and load (ETL) operations, which are perfect for transforming clinical trial data. AWS Glue integrates with AWS Key Management Service (KMS), allowing for customer-specific encryption keys, fulfilling the encryption requirement with minimal operational effort. Client-side encryption with AWS KMS ensures that the data is encrypted before it is sent to S3, aligning with the security needs specified in the scenario.

Key aspects:

AWS Glue: This managed ETL service simplifies data transformation, reduces operational overhead, and integrates seamlessly with KMS.

CSE-KMS: Client-side encryption with KMS ensures that the data is encrypted with customer-specific keys before it is processed or stored in S3, offering robust security.

Minimal Operational Overhead: Compared to managing an EMR cluster, AWS Glue automates much of the process, making it a lower-effort solution.

AWS Documentation: According to the AWS Well-Architected Framework, encryption with AWS KMS offers strong security controls that meet the needs of industries requiring high levels of confidentiality.
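
For illustration, a hedged boto3 sketch of the per-customer pattern: one security configuration per customer key, attached to that customer's Glue job. The key ARN, role, and script location are placeholders. Note that in the Glue API the S3 output encryption modes are SSE-KMS and SSE-S3, while the CSE-KMS mode applies to job bookmark encryption; the sketch wires the customer-specific key through both.

```python
import boto3

glue = boto3.client("glue")

# Placeholder per-customer values.
CUSTOMER = "acme"
CUSTOMER_KEY_ARN = "arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"
GLUE_ROLE = "arn:aws:iam::111122223333:role/glue-etl-role"

# One security configuration per customer, wired to that customer's KMS key.
glue.create_security_configuration(
    Name=f"{CUSTOMER}-sec-config",
    EncryptionConfiguration={
        "S3Encryption": [{"S3EncryptionMode": "SSE-KMS", "KmsKeyArn": CUSTOMER_KEY_ARN}],
        "JobBookmarksEncryption": {
            "JobBookmarksEncryptionMode": "CSE-KMS",
            "KmsKeyArn": CUSTOMER_KEY_ARN,
        },
    },
)

# One ETL job per customer, attached to that customer's security configuration.
glue.create_job(
    Name=f"{CUSTOMER}-clinical-etl",
    Role=GLUE_ROLE,
    Command={
        "Name": "glueetl",
        "ScriptLocation": f"s3://etl-scripts/{CUSTOMER}/transform.py",
    },
    SecurityConfiguration=f"{CUSTOMER}-sec-config",
)
```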

A company uses an Amazon DynamoDB table to store data that the company receives from devices. The DynamoDB table supports a customer-facing website to display recent activity on customer devices. The company configured the table with provisioned throughput for writes and reads.

The company wants to calculate performance metrics for customer device data on a daily basis. The solution must have minimal effect on the table's provisioned read and write capacity.

Which solution will meet these requirements?

A. Use an Amazon Athena SQL query with the Amazon Athena DynamoDB connector to calculate performance metrics on a recurring schedule.

B. Use an AWS Glue job with the AWS Glue DynamoDB export connector to calculate performance metrics on a recurring schedule.

C. Use an Amazon Redshift COPY command to calculate performance metrics on a recurring schedule.

D. Use an Amazon EMR job with an Apache Hive external table to calculate performance metrics on a recurring schedule.
Suggested answer: A

Explanation:

Amazon Athena provides a cost-effective, serverless way to query data without affecting the performance of DynamoDB. By using the Athena DynamoDB connector, the company can perform the necessary SQL queries without consuming read capacity on the DynamoDB table, which is essential for minimizing impact on provisioned throughput.

Key benefits:

Minimal Impact on Provisioned Capacity: Athena queries do not directly impact DynamoDB's read capacity, making it ideal for running analytics without affecting the customer-facing workloads.

Cost-Effective: Athena is a serverless solution, meaning you pay only for the queries you run, making it highly cost-effective compared to running a dedicated cluster like Amazon EMR or Redshift.

AWS Documentation: The use of Athena to query DynamoDB through its connector aligns with AWS's best practices for performance efficiency and cost optimization.
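
For illustration, a minimal boto3 sketch that assumes the Athena DynamoDB connector has already been deployed and registered as a data source named dynamodb; the catalog, table, and output location are placeholders. A recurring schedule could invoke this, for example with Amazon EventBridge Scheduler.

```python
import boto3

athena = boto3.client("athena")

# Assumes the Athena DynamoDB connector is deployed and registered as a
# data source named "dynamodb"; names below are placeholders.
query = """
    SELECT device_id,
           avg(metric_value) AS avg_value,
           max(metric_value) AS max_value
    FROM "dynamodb"."default"."device_activity"
    WHERE reading_date = current_date
    GROUP BY device_id
"""

execution = athena.start_query_execution(
    QueryString=query,
    ResultConfiguration={"OutputLocation": "s3://metrics-results-bucket/daily/"},
)
print("Query started:", execution["QueryExecutionId"])
```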

An ecommerce company runs several internal applications in multiple AWS accounts. The company uses AWS Organizations to manage its AWS accounts.

A security appliance in the company's networking account must inspect interactions between applications across AWS accounts.

Which solution will meet these requirements?

A. Deploy a Network Load Balancer (NLB) in the networking account to send traffic to the security appliance. Configure the application accounts to send traffic to the NLB by using an interface VPC endpoint in the application accounts.

B. Deploy an Application Load Balancer (ALB) in the application accounts to send traffic directly to the security appliance.

C. Deploy a Gateway Load Balancer (GWLB) in the networking account to send traffic to the security appliance. Configure the application accounts to send traffic to the GWLB by using an interface GWLB endpoint in the application accounts.

D. Deploy an interface VPC endpoint in the application accounts to send traffic directly to the security appliance.
Suggested answer: C

Explanation:

The Gateway Load Balancer (GWLB) is specifically designed to route traffic through a security appliance in a hub-and-spoke model, making it the ideal solution for inspecting traffic between multiple AWS accounts. GWLB enables you to simplify, scale, and deploy third-party virtual appliances transparently, and it can work across multiple VPCs or accounts using interface endpoints (Gateway Load Balancer Endpoints).

Key AWS features:

Traffic Inspection: The GWLB allows the centralized security appliance to inspect traffic between different VPCs, making it suitable for inspecting inter-account interactions.

Interface VPC Endpoints: By using interface endpoints in the application accounts, traffic can securely and efficiently be routed to the security appliance in the networking account.

AWS Documentation: The use of GWLB aligns with AWS's best practices for centralized network security, simplifying architecture and reducing operational complexity.
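
For illustration, a minimal boto3 sketch of the hub-and-spoke wiring: a GWLB and endpoint service in the networking account, and a GWLB endpoint in an application account. All identifiers are placeholders, and cross-account use would additionally require allow-listing the application account on the endpoint service.

```python
import boto3

# Networking account: create the Gateway Load Balancer in front of the appliance.
elbv2 = boto3.client("elbv2")
gwlb = elbv2.create_load_balancer(
    Name="inspection-gwlb",
    Type="gateway",
    Subnets=["subnet-0abc1234"],          # placeholder subnet in the networking account
)["LoadBalancers"][0]

# Expose the GWLB as an endpoint service that other accounts can connect to.
ec2 = boto3.client("ec2")
service = ec2.create_vpc_endpoint_service_configuration(
    GatewayLoadBalancerArns=[gwlb["LoadBalancerArn"]],
    AcceptanceRequired=False,
)["ServiceConfiguration"]

# Application account: create a GWLB endpoint that routes traffic for inspection.
app_ec2 = boto3.client("ec2")  # in practice, a session in the application account
app_ec2.create_vpc_endpoint(
    VpcEndpointType="GatewayLoadBalancer",
    VpcId="vpc-0def5678",                 # placeholder application VPC
    ServiceName=service["ServiceName"],
    SubnetIds=["subnet-0def5678"],
)
```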

A company uses Amazon EC2 instances and stores data on Amazon Elastic Block Store (Amazon EBS) volumes. The company must ensure that all data is encrypted at rest by using AWS Key Management Service (AWS KMS). The company must be able to control rotation of the encryption keys.

Which solution will meet these requirements with the LEAST operational overhead?

A. Create a customer managed key. Use the key to encrypt the EBS volumes.

B. Use an AWS managed key to encrypt the EBS volumes. Use the key to configure automatic key rotation.

C. Create an external KMS key with imported key material. Use the key to encrypt the EBS volumes.

D. Use an AWS owned key to encrypt the EBS volumes.
Suggested answer: A

Explanation:

To meet the requirement of controlling key rotation with minimal operational overhead, creating a customer managed key (CMK) in AWS KMS is the optimal solution. With CMKs, you can define custom key rotation policies, ensuring that you retain control over the key lifecycle, including enabling automatic key rotation every year.

Key AWS features:

Custom Key Management: A customer managed key allows you to control the key policies, lifecycle, and enable key rotation for compliance.

Least Operational Overhead: Using a customer managed key simplifies encryption management while offering more flexibility than AWS managed or owned keys.

AWS Documentation: The AWS Well-Architected Framework recommends customer managed keys for environments where key control and flexibility are required.
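
For illustration, a minimal boto3 sketch of option A; the Availability Zone and volume size are placeholders.

```python
import boto3

kms = boto3.client("kms")
ec2 = boto3.client("ec2")

# Customer managed key: the company owns the key policy and rotation schedule.
key = kms.create_key(Description="EBS encryption key")["KeyMetadata"]

# Opt in to automatic annual rotation (available for customer managed keys).
kms.enable_key_rotation(KeyId=key["KeyId"])

# Encrypt a new EBS volume with the customer managed key.
ec2.create_volume(
    AvailabilityZone="us-east-1a",       # placeholder AZ and size
    Size=100,
    Encrypted=True,
    KmsKeyId=key["Arn"],
)
```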

A weather forecasting company collects temperature readings from various sensors on a continuous basis. An existing data ingestion process collects the readings and aggregates the readings into larger Apache Parquet files. Then the process encrypts the files by using client-side encryption with KMS managed keys (CSE-KMS). Finally, the process writes the files to an Amazon S3 bucket with separate prefixes for each calendar day.

The company wants to run occasional SQL queries on the data to take sample moving averages for a specific calendar day.

Which solution will meet these requirements MOST cost-effectively?

A. Configure Amazon Athena to read the encrypted files. Run SQL queries on the data directly in Amazon S3.

B. Use Amazon S3 Select to run SQL queries on the data directly in Amazon S3.

C. Configure Amazon Redshift to read the encrypted files. Use Redshift Spectrum and Redshift query editor v2 to run SQL queries on the data directly in Amazon S3.

D. Configure Amazon EMR Serverless to read the encrypted files. Use Apache SparkSQL to run SQL queries on the data directly in Amazon S3.
Suggested answer: A

Explanation:

Amazon Athena is a serverless query service that allows you to run SQL queries directly on data stored in Amazon S3 without the need for a data warehouse. It is cost-effective because you only pay for the queries you run, and it can handle Apache Parquet files efficiently. Additionally, Athena integrates with KMS, making it suitable for querying encrypted data.

Key AWS features:

Cost-Effective: Athena charges only for the data scanned by the queries, making it a more cost-effective solution compared to Redshift or EMR for occasional queries.

Direct S3 Querying: Athena supports querying data directly in S3, including Parquet files, without needing to move the data.

AWS Documentation: Athena's compatibility with encrypted Parquet files in S3 makes it the ideal choice for this scenario, reducing both cost and complexity.
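
For illustration, a hedged sketch of the table definition and a moving-average query; the bucket, columns, and names are placeholders. The has_encrypted_data table property tells Athena that the underlying objects are client-side encrypted, and partitioning on the daily prefix keeps each query's scan small.

```python
import boto3

athena = boto3.client("athena")

# Table DDL for CSE-KMS encrypted Parquet; 'has_encrypted_data' marks the objects
# as client-side encrypted. Bucket, columns, and names are placeholders.
ddl = """
    CREATE EXTERNAL TABLE IF NOT EXISTS readings (
        sensor_id string,
        temperature double,
        reading_time timestamp
    )
    PARTITIONED BY (dt string)
    STORED AS PARQUET
    LOCATION 's3://sensor-data-bucket/readings/'
    TBLPROPERTIES ('has_encrypted_data' = 'true')
"""
athena.start_query_execution(
    QueryString=ddl,
    ResultConfiguration={"OutputLocation": "s3://athena-results-bucket/"},
)
# Partitions for each daily prefix would then be loaded (e.g., MSCK REPAIR TABLE).

# A moving-average query over a single day's partition keeps scan costs low.
query = """
    SELECT sensor_id,
           avg(temperature) OVER (
               PARTITION BY sensor_id
               ORDER BY reading_time
               ROWS BETWEEN 9 PRECEDING AND CURRENT ROW
           ) AS moving_avg
    FROM readings
    WHERE dt = '2024-06-01'
"""
athena.start_query_execution(
    QueryString=query,
    ResultConfiguration={"OutputLocation": "s3://athena-results-bucket/"},
)
```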

A company is planning to migrate a legacy application to AWS. The application currently uses NFS to communicate with an on-premises storage solution to store application data. The application cannot be modified to use any protocol other than NFS for this purpose.

Which storage solution should a solutions architect recommend for use after the migration?

A. AWS DataSync

B. Amazon Elastic Block Store (Amazon EBS)

C. Amazon Elastic File System (Amazon EFS)

D. Amazon EMR File System (Amazon EMRFS)
Suggested answer: C

Explanation:

Amazon Elastic File System (EFS) is the ideal solution for migrating legacy applications that require NFS (Network File System) communication. EFS provides fully managed, scalable NFS storage in the cloud, and it supports the standard NFS protocols, allowing the legacy application to continue using NFS without modification after migration to AWS.

Key AWS features:

NFS Support: EFS natively supports the NFSv4 protocol, which makes it the best solution for workloads that rely on NFS communication.

Scalability and Availability: EFS automatically scales as application demands grow, making it a highly available and reliable storage solution.

AWS Documentation: According to AWS's best practices for file storage, EFS is recommended for any workloads requiring NFS support in a cloud environment.
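
For illustration, a minimal boto3 sketch of provisioning EFS for the migrated application; the creation token, subnet, and security group IDs are placeholders.

```python
import boto3

efs = boto3.client("efs")

# Create the file system; identifiers below are placeholders.
fs = efs.create_file_system(
    CreationToken="legacy-app-fs",
    PerformanceMode="generalPurpose",
    Encrypted=True,
)

# One mount target per subnet lets instances in that AZ reach EFS over NFS.
efs.create_mount_target(
    FileSystemId=fs["FileSystemId"],
    SubnetId="subnet-0abc1234",
    SecurityGroups=["sg-0abc1234"],      # must allow inbound NFS (TCP 2049)
)

# The application then mounts it with its existing NFS client, e.g.:
#   mount -t nfs4 -o nfsvers=4.1 fs-12345678.efs.us-east-1.amazonaws.com:/ /mnt/data
```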

A company runs an environment where data is stored in an Amazon S3 bucket. The objects are accessed frequently throughout the day. The company has strict data encryption requirements for data that is stored in the S3 bucket. The company currently uses AWS Key Management Service (AWS KMS) for encryption.

The company wants to optimize costs associated with encrypting S3 objects without making additional calls to AWS KMS.

Which solution will meet these requirements?

A. Use server-side encryption with Amazon S3 managed keys (SSE-S3).

B. Use an S3 Bucket Key for server-side encryption with AWS KMS keys (SSE-KMS) on the new objects.

C. Use client-side encryption with AWS KMS customer managed keys.

D. Use server-side encryption with customer-provided keys (SSE-C) stored in AWS KMS.
Suggested answer: B

Explanation:

An S3 Bucket Key reduces the cost of using AWS KMS for server-side encryption by decreasing the number of requests made to KMS. By enabling S3 Bucket Key, the company can meet its encryption requirements with KMS keys while optimizing costs by reducing the number of KMS API requests.

Key AWS features:

Cost Optimization: S3 Bucket Keys reduce the frequency of KMS calls, optimizing the cost associated with encryption while still using AWS KMS for key management.

Compliance with KMS Encryption: This solution continues to meet the strict encryption requirements of the company by using KMS managed keys.

AWS Documentation: Using an S3 Bucket Key is recommended for organizations looking to optimize encryption costs without compromising security.
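
For illustration, a minimal boto3 sketch that makes SSE-KMS with an S3 Bucket Key the bucket default; the bucket name and key ARN are placeholders.

```python
import boto3

s3 = boto3.client("s3")

# Set SSE-KMS with an S3 Bucket Key as the default encryption for new objects.
s3.put_bucket_encryption(
    Bucket="frequently-accessed-bucket",
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": "arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab",
            },
            "BucketKeyEnabled": True,  # S3 derives data keys locally, cutting KMS calls
        }],
    },
)
```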
