Amazon SAA-C03 Practice Test - Questions Answers, Page 78

A company is planning to deploy its application on an Amazon Aurora PostgreSQL Serverless v2 cluster. The application will receive large amounts of traffic. The company wants to optimize the storage performance of the cluster as the load on the application increases.

Which solution will meet these requirements MOST cost-effectively?

A. Configure the cluster to use the Aurora Standard storage configuration.
B. Configure the cluster storage type as Provisioned IOPS.
C. Configure the cluster storage type as General Purpose.
D. Configure the cluster to use the Aurora I/O-Optimized storage configuration.
Suggested answer: D

Explanation:

Aurora I/O-Optimized: This storage configuration is designed to provide consistent high performance for Aurora databases. It automatically scales IOPS as the workload increases, without needing to provision IOPS separately.

Cost-Effectiveness: With Aurora I/O-Optimized, you pay a predictable price for storage and compute with no separate charges for read and write I/O, making it the most cost-effective choice for I/O-intensive applications with growing or unpredictable I/O demands.

Implementation:

During the creation of the Aurora PostgreSQL Serverless v2 cluster, select the I/O-Optimized storage configuration.

The storage system will automatically handle scaling and performance optimization based on the application load.
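As a rough sketch, the cluster creation could look like the following boto3 calls. The identifiers, engine version, and capacity range are hypothetical placeholders; "aurora-iopt1" is the storage type value that selects Aurora I/O-Optimized, and Serverless v2 capacity is delivered through a db.serverless instance.

```python
import boto3

rds = boto3.client("rds")

# Create the Serverless v2 cluster with I/O-Optimized storage.
rds.create_db_cluster(
    DBClusterIdentifier="app-cluster",          # hypothetical name
    Engine="aurora-postgresql",
    EngineVersion="15.4",                       # placeholder version
    MasterUsername="postgres",
    ManageMasterUserPassword=True,              # let RDS manage the secret
    StorageType="aurora-iopt1",                 # Aurora I/O-Optimized
    ServerlessV2ScalingConfiguration={
        "MinCapacity": 0.5,
        "MaxCapacity": 64,                      # scales up as traffic grows
    },
)

# Add a Serverless v2 instance to the cluster.
rds.create_db_instance(
    DBInstanceIdentifier="app-cluster-instance-1",
    DBClusterIdentifier="app-cluster",
    Engine="aurora-postgresql",
    DBInstanceClass="db.serverless",
)
```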

Operational Efficiency: This configuration reduces the need for manual tuning and ensures optimal performance without additional administrative overhead.

Amazon Aurora I/O-Optimized

A company is preparing to store confidential data in Amazon S3. For compliance reasons, the data must be encrypted at rest. Encryption key usage must be logged for auditing purposes. Keys must be rotated every year.

Which solution meets these requirements and is the MOST operationally efficient?

A. Server-side encryption with customer-provided keys (SSE-C)
B. Server-side encryption with Amazon S3 managed keys (SSE-S3)
C. Server-side encryption with AWS KMS keys (SSE-KMS) with manual rotation
D. Server-side encryption with AWS KMS keys (SSE-KMS) with automatic rotation
Suggested answer: D

Explanation:

SSE-KMS: Server-side encryption with AWS Key Management Service (SSE-KMS) provides robust encryption of data at rest, integrated with AWS KMS for key management and auditing.

Automatic Key Rotation: By enabling automatic rotation for the KMS keys, the system ensures that keys are rotated annually without manual intervention, meeting compliance requirements.

Logging and Auditing: AWS KMS automatically logs all key usage and management actions in AWS CloudTrail, providing the necessary audit logs.

Implementation:

Create a KMS key with automatic rotation enabled.

Configure the S3 bucket to use SSE-KMS with the created KMS key.

Ensure CloudTrail is enabled for logging KMS key usage.
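A minimal boto3 sketch of these steps; the bucket name is a hypothetical placeholder.

```python
import boto3

kms = boto3.client("kms")
s3 = boto3.client("s3")

# Create a customer managed key and turn on yearly automatic rotation.
key = kms.create_key(Description="S3 confidential data key")
key_id = key["KeyMetadata"]["KeyId"]
kms.enable_key_rotation(KeyId=key_id)       # rotates key material yearly

# Point the bucket's default encryption at the key (SSE-KMS).
s3.put_bucket_encryption(
    Bucket="example-confidential-bucket",   # hypothetical bucket name
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": key_id,
            },
            "BucketKeyEnabled": True,       # reduces KMS request costs
        }]
    },
)
```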

Operational Efficiency: This solution provides encryption, automatic key management, and auditing in a seamless, fully managed way, reducing operational overhead.

AWS KMS Automatic Key Rotation

Amazon S3 Server-Side Encryption

A company has several on-premises Internet Small Computer Systems Interface (iSCSI) network storage servers. The company wants to reduce the number of these servers by moving to the AWS Cloud. A solutions architect must provide low-latency access to frequently used data and reduce the dependency on on-premises servers with a minimal number of infrastructure changes.

Which solution will meet these requirements?

A. Deploy an Amazon S3 File Gateway.
B. Deploy Amazon Elastic Block Store (Amazon EBS) storage with backups to Amazon S3.
C. Deploy an AWS Storage Gateway volume gateway that is configured with stored volumes.
D. Deploy an AWS Storage Gateway volume gateway that is configured with cached volumes.
Suggested answer: D

Explanation:

Storage Gateway Volume Gateway (Cached Volumes): This configuration allows you to store your primary data in Amazon S3 while retaining frequently accessed data locally in a cache for low-latency access.

Low-Latency Access: Frequently accessed data is cached locally on-premises, providing low-latency access while the less frequently accessed data is stored cost-effectively in Amazon S3.

Implementation:

Deploy a Storage Gateway appliance on-premises or in a virtual environment.

Configure it as a volume gateway with cached volumes.

Create volumes and configure your applications to use these volumes.
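A sketch of the volume-creation step with boto3, assuming the gateway has already been deployed and activated; the gateway ARN, target name, and IP address are hypothetical placeholders.

```python
import boto3
import uuid

sgw = boto3.client("storagegateway")

# ARN of an activated volume gateway (hypothetical placeholder).
gateway_arn = "arn:aws:storagegateway:us-east-1:111122223333:gateway/sgw-12345678"

sgw.create_cached_iscsi_volume(
    GatewayARN=gateway_arn,
    VolumeSizeInBytes=500 * 1024**3,    # 500 GiB volume backed by S3
    TargetName="app-volume-1",          # becomes part of the iSCSI target IQN
    NetworkInterfaceId="10.0.0.25",     # gateway IP the initiators connect to
    ClientToken=str(uuid.uuid4()),      # idempotency token
)
```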

Minimal Infrastructure Changes: This solution integrates seamlessly with existing on-premises infrastructure, requiring minimal changes and reducing dependency on on-premises storage servers.

AWS Storage Gateway Volume Gateway

Volume Gateway Cached Volumes

A marketing company receives a large amount of new clickstream data in Amazon S3 from a marketing campaign. The company needs to analyze the clickstream data in Amazon S3 quickly. Then the company needs to determine whether to process the data further in the data pipeline.

Which solution will meet these requirements with the LEAST operational overhead?

A. Create external tables in a Spark catalog. Configure jobs in AWS Glue to query the data.
B. Configure an AWS Glue crawler to crawl the data. Configure Amazon Athena to query the data.
C. Create external tables in a Hive metastore. Configure Spark jobs in Amazon EMR to query the data.
D. Configure an AWS Glue crawler to crawl the data. Configure Amazon Kinesis Data Analytics to use SQL to query the data.
Suggested answer: B

Explanation:

AWS Glue Crawler: AWS Glue is a fully managed ETL (Extract, Transform, Load) service that makes it easy to prepare and load data for analytics. A Glue crawler can automatically discover new data and schema in Amazon S3, making it easy to keep the data catalog up-to-date.

Crawling the Data:

Set up an AWS Glue crawler to scan the S3 bucket containing the clickstream data.

The crawler will automatically detect the schema and create/update the tables in the AWS Glue Data Catalog.

Amazon Athena:

Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL.

Once the data catalog is updated by the Glue crawler, use Athena to query the clickstream data directly in S3.
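A minimal boto3 sketch of the crawler and query setup; the bucket, IAM role, database, and table names are hypothetical placeholders.

```python
import boto3

glue = boto3.client("glue")
athena = boto3.client("athena")

# Crawl the clickstream data and register it in the Glue Data Catalog.
glue.create_crawler(
    Name="clickstream-crawler",
    Role="arn:aws:iam::111122223333:role/GlueCrawlerRole",  # placeholder role
    DatabaseName="clickstream_db",
    Targets={"S3Targets": [{"Path": "s3://example-clickstream-bucket/raw/"}]},
)
glue.start_crawler(Name="clickstream-crawler")

# Once the crawler has populated the catalog, query the data with Athena.
athena.start_query_execution(
    QueryString=(
        "SELECT page, COUNT(*) AS hits "
        "FROM clicks GROUP BY page ORDER BY hits DESC LIMIT 10"
    ),
    QueryExecutionContext={"Database": "clickstream_db"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
```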

Operational Efficiency: This solution leverages fully managed services, reducing operational overhead. Glue crawlers automate data cataloging, and Athena provides a serverless, pay-per-query model for quick data analysis without the need to set up or manage infrastructure.

AWS Glue

Amazon Athena

A company has applications that run on Amazon EC2 instances in a VPC. One of the applications needs to call the Amazon S3 API to store and read objects. According to the company's security regulations, no traffic from the applications is allowed to travel across the internet.

Which solution will meet these requirements?

A. Configure an S3 gateway endpoint.
B. Create an S3 bucket in a private subnet.
C. Create an S3 bucket in the same AWS Region as the EC2 instances.
D. Configure a NAT gateway in the same subnet as the EC2 instances.
Suggested answer: A

Explanation:

VPC Endpoint for S3: A gateway endpoint for Amazon S3 enables you to privately connect your VPC to S3 without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection.

Configuration Steps:

In the VPC console, navigate to 'Endpoints' and create a new endpoint.

Select the service name for S3 (com.amazonaws.region.s3).

Choose the VPC and the subnets where your EC2 instances are running.

Update the route tables for the selected subnets to include a route pointing to the endpoint.
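A minimal boto3 sketch of the endpoint creation; the VPC and route table IDs are hypothetical placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a gateway endpoint for S3 and attach it to the subnets' route tables.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0abc1234def567890",              # placeholder VPC ID
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],    # S3 routes are added automatically
)
```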

Security Compliance: By configuring an S3 gateway endpoint, all traffic between the VPC and S3 stays within the AWS network, complying with the company's security regulations to avoid internet traversal.

VPC Endpoints for Amazon S3

A company wants to isolate its workloads by creating an AWS account for each workload. The company needs a solution that centrally manages networking components for the workloads. The solution also must create accounts with automatic security controls (guardrails).

Which solution will meet these requirements with the LEAST operational overhead?

A. Use AWS Control Tower to deploy accounts. Create a networking account that has a VPC with private subnets and public subnets. Use AWS Resource Access Manager (AWS RAM) to share the subnets with the workload accounts.
B. Use AWS Organizations to deploy accounts. Create a networking account that has a VPC with private subnets and public subnets. Use AWS Resource Access Manager (AWS RAM) to share the subnets with the workload accounts.
C. Use AWS Control Tower to deploy accounts. Deploy a VPC in each workload account. Configure each VPC to route through an inspection VPC by using a transit gateway attachment.
D. Use AWS Organizations to deploy accounts. Deploy a VPC in each workload account. Configure each VPC to route through an inspection VPC by using a transit gateway attachment.
Suggested answer: A

Explanation:

AWS Control Tower: Provides a managed service to set up and govern a secure, multi-account AWS environment based on AWS best practices. It automates the setup of AWS Organizations and applies security controls (guardrails).

Networking Account:

Create a centralized networking account that includes a VPC with both private and public subnets.

This centralized VPC will manage and control the networking resources.

AWS Resource Access Manager (AWS RAM):

Use AWS RAM to share the subnets from the networking account with the other workload accounts.

This allows different workload accounts to utilize the shared networking resources without the need to manage their own VPCs.
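A minimal boto3 sketch of the subnet-sharing step, run from the networking account; the subnet ARN and account ID are hypothetical placeholders.

```python
import boto3

ram = boto3.client("ram")

# Share the centralized VPC's subnets with a workload account.
ram.create_resource_share(
    name="shared-subnets",
    resourceArns=[
        "arn:aws:ec2:us-east-1:111122223333:subnet/subnet-0abc1234def567890",
    ],
    principals=["222233334444"],        # a workload account (or an OU ARN)
    allowExternalPrincipals=False,      # restrict sharing to the organization
)
```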

Operational Efficiency: Using AWS Control Tower simplifies the setup and governance of multiple AWS accounts, while AWS RAM facilitates centralized management of networking resources, reducing operational overhead and ensuring consistent security and compliance.

AWS Control Tower

AWS Resource Access Manager

A company's SAP application has a backend SQL Server database in an on-premises environment. The company wants to migrate its on-premises application and database server to AWS. The company needs an instance type that meets the high demands of its SAP database. On-premises performance data shows that both the SAP application and the database have high memory utilization.

Which solution will meet these requirements?

A. Use the compute optimized instance family for the application. Use the memory optimized instance family for the database.
B. Use the storage optimized instance family for both the application and the database.
C. Use the memory optimized instance family for both the application and the database.
D. Use the high performance computing (HPC) optimized instance family for the application. Use the memory optimized instance family for the database.
Suggested answer: C

Explanation:

Memory Optimized Instances: These instances are designed to deliver fast performance for workloads that process large data sets in memory. They are ideal for high-performance databases like SAP and applications with high memory utilization.

High Memory Utilization: Both the SAP application and the SQL Server database have high memory demands as per the on-premises performance data. Memory optimized instances provide the necessary memory capacity and performance.

Instance Types:

For the SAP application, using a memory optimized instance ensures the application has sufficient memory to handle the high workload efficiently.

For the SQL Server database, memory optimized instances ensure optimal database performance with high memory throughput.
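As an illustrative sketch, launching one of these components on a memory optimized instance might look like the following; the AMI ID and instance size are hypothetical placeholders chosen for illustration, not an SAP sizing recommendation.

```python
import boto3

ec2 = boto3.client("ec2")

# Launch the database server on a memory optimized (R-family) instance.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",    # placeholder AMI
    InstanceType="r6i.8xlarge",         # memory optimized family
    MinCount=1,
    MaxCount=1,
)
```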

Operational Efficiency: Using the same instance family for both the application and the database simplifies management and ensures both components meet performance requirements.

Amazon EC2 Instance Types

SAP on AWS

A company plans to rehost an application to Amazon EC2 instances that use Amazon Elastic Block Store (Amazon EBS) as the attached storage.

A solutions architect must design a solution to ensure that all newly created Amazon EBS volumes are encrypted by default. The solution must also prevent the creation of unencrypted EBS volumes.

Which solution will meet these requirements?

A. Configure the EC2 account attributes to always encrypt new EBS volumes.
B. Use AWS Config. Configure the encrypted-volumes identifier. Apply the default AWS Key Management Service (AWS KMS) key.
C. Configure AWS Systems Manager to create encrypted copies of the EBS volumes. Reconfigure the EC2 instances to use the encrypted volumes.
D. Create a customer managed key in AWS Key Management Service (AWS KMS). Configure AWS Migration Hub to use the key when the company migrates workloads.
Suggested answer: A

Explanation:

EC2 Account Attributes: Amazon EC2 allows you to set account attributes to automatically encrypt new EBS volumes. This ensures that all new volumes created in your account are encrypted by default.

Configuration Steps:

Go to the EC2 Dashboard.

Select 'Account Attributes' and then 'EBS encryption'.

Enable default EBS encryption and select the default AWS KMS key or a customer-managed key.
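A minimal boto3 sketch of these steps; the KMS key ARN is a hypothetical placeholder, and the customer managed key step is optional.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Enable default encryption for all new EBS volumes in this Region.
ec2.enable_ebs_encryption_by_default()

# Optional: use a customer managed key instead of the AWS managed default.
ec2.modify_ebs_default_kms_key_id(
    KmsKeyId="arn:aws:kms:us-east-1:111122223333:key/"
             "1234abcd-12ab-34cd-56ef-1234567890ab"   # placeholder key ARN
)

# Verify the setting.
print(ec2.get_ebs_encryption_by_default()["EbsEncryptionByDefault"])  # True
```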

Prevention of Unencrypted Volumes: By setting this account attribute, you ensure that it is not possible to create unencrypted EBS volumes in that AWS Region, thereby enforcing compliance with security requirements.

Operational Efficiency: This solution requires minimal configuration changes and provides automatic enforcement of encryption policies, reducing operational overhead.

Amazon EC2 Default EBS Encryption

A weather forecasting company needs to process hundreds of gigabytes of data with sub-millisecond latency. The company has a high performance computing (HPC) environment in its data center and wants to expand its forecasting capabilities.

A solutions architect must identify a highly available cloud storage solution that can handle large amounts of sustained throughput. Files that are stored in the solution should be accessible to thousands of compute instances that will simultaneously access and process the entire dataset.

What should the solutions architect do to meet these requirements?

A. Use Amazon FSx for Lustre scratch file systems.
B. Use Amazon FSx for Lustre persistent file systems.
C. Use Amazon Elastic File System (Amazon EFS) with Bursting Throughput mode.
D. Use Amazon Elastic File System (Amazon EFS) with Provisioned Throughput mode.
Suggested answer: B

Explanation:

Amazon FSx for Lustre: Lustre is a high-performance file system designed for workloads that require fast storage with sustained high throughput and low latency. It integrates with Amazon S3, making it suitable for HPC environments.

Persistent File Systems:

Persistent Storage: Suitable for long-term storage and recurrent use, providing durability and availability.

High Throughput and Low Latency: Persistent Lustre file systems can handle large amounts of data with sub-millisecond latency, meeting the needs of high-performance computing workloads.

Simultaneous Access: FSx for Lustre allows thousands of compute instances to access and process large datasets concurrently, ensuring that the high volume of data is handled efficiently.

Highly Available: FSx for Lustre is designed to provide high availability and is managed by AWS, reducing the operational burden.
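A minimal boto3 sketch of creating a persistent file system; the subnet ID and sizing values are hypothetical placeholders.

```python
import boto3

fsx = boto3.client("fsx")

# Create a durable, high-throughput Lustre file system.
fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=12000,                  # GiB; persistent SSD grows in 2400 GiB steps
    SubnetIds=["subnet-0abc1234def567890"], # placeholder subnet
    LustreConfiguration={
        "DeploymentType": "PERSISTENT_2",   # durable deployment type
        "PerUnitStorageThroughput": 250,    # MB/s per TiB of storage
    },
)
```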

Amazon FSx for Lustre

High-Performance Computing on AWS

A company plans to run a high performance computing (HPC) workload on Amazon EC2 instances. The workload requires low-latency network performance and high network throughput with tightly coupled node-to-node communication.

Which solution will meet these requirements?

A. Configure the EC2 instances to be part of a cluster placement group.
B. Launch the EC2 instances with Dedicated Instance tenancy.
C. Launch the EC2 instances as Spot Instances.
D. Configure an On-Demand Capacity Reservation when the EC2 instances are launched.
Suggested answer: A

Explanation:

Cluster Placement Group: This type of placement group is designed to provide low-latency network performance and high throughput by grouping instances within a single Availability Zone. It is ideal for applications that require tightly coupled node-to-node communication.

Configuration:

When launching EC2 instances, specify the option to launch them in a cluster placement group.

This ensures that the instances are physically located close to each other, reducing latency and increasing network throughput.
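A minimal boto3 sketch of these steps; the AMI ID, instance type, and instance count are hypothetical placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Group instances on closely located hardware in one Availability Zone.
ec2.create_placement_group(GroupName="hpc-cluster", Strategy="cluster")

# Launch the tightly coupled nodes into the placement group.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",    # placeholder AMI
    InstanceType="c6in.16xlarge",       # placeholder network-optimized type
    MinCount=8,
    MaxCount=8,
    Placement={"GroupName": "hpc-cluster"},
)
```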

Benefits:

Low-Latency Communication: Instances in a cluster placement group benefit from enhanced networking capabilities, enabling low-latency communication.

High Network Throughput: The network performance within a cluster placement group is optimized for high throughput, which is essential for HPC workloads.

Placement Groups

High Performance Computing on AWS
