
Amazon SAA-C03 Practice Test - Questions Answers, Page 64

A company is designing a tightly coupled high-performance computing (HPC) environment in the AWS Cloud. The company needs to include features that will optimize the HPC environment for networking and storage.

Which combination of solutions will meet these requirements? (Select TWO.)

A. Create an accelerator in AWS Global Accelerator. Configure custom routing for the accelerator.
B. Create an Amazon FSx for Lustre file system. Configure the file system with scratch storage.
C. Create an Amazon CloudFront distribution. Configure the viewer protocol policy to be HTTP and HTTPS.
D. Launch Amazon EC2 instances. Attach an Elastic Fabric Adapter (EFA) to the instances.
E. Create an AWS Elastic Beanstalk deployment to manage the environment.

Suggested answer: B, D

Explanation:

These two solutions will optimize the HPC environment for networking and storage. Amazon FSx for Lustre is a fully managed service that provides cost-effective, high-performance, scalable storage for compute workloads. It is built on the world's most popular high-performance file system, Lustre, which is designed for applications that require fast storage, such as HPC and machine learning. By configuring the file system with scratch storage, you can achieve sub-millisecond latencies, up to hundreds of GBs/s of throughput, and millions of IOPS. Scratch file systems are ideal for temporary storage and shorter-term processing of data. Data is not replicated and does not persist if a file server fails. For more information, see Amazon FSx for Lustre.

Elastic Fabric Adapter (EFA) is a network interface for Amazon EC2 instances that enables customers to run applications requiring high levels of inter-node communications at scale on AWS. Its custom-built operating system (OS) bypass hardware interface enhances the performance of inter-instance communications, which is critical to scaling HPC and machine learning applications. EFA provides a low-latency, low-jitter channel for inter-instance communications, enabling your tightly-coupled HPC or distributed machine learning applications to scale to thousands of cores. EFA uses the libfabric interface and libfabric APIs for communications, which are supported by most HPC programming models. For more information, see Elastic Fabric Adapter.
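
For illustration only, a minimal boto3 sketch of the two correct options, using placeholder subnet, security group, and AMI IDs rather than values from the question:

```python
import boto3

fsx = boto3.client("fsx")
ec2 = boto3.client("ec2")

# Option B: scratch (SCRATCH_2) FSx for Lustre file system for short-lived,
# high-throughput HPC working data.
fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=2400,                        # GiB, illustrative size
    SubnetIds=["subnet-0abc1234"],               # placeholder subnet ID
    LustreConfiguration={"DeploymentType": "SCRATCH_2"},
)

# Option D: EC2 instance with an Elastic Fabric Adapter for OS-bypass,
# low-latency inter-node communication.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",             # placeholder HPC AMI
    InstanceType="c5n.18xlarge",                 # EFA-capable instance type
    MinCount=1,
    MaxCount=1,
    NetworkInterfaces=[{
        "DeviceIndex": 0,
        "SubnetId": "subnet-0abc1234",           # placeholder subnet ID
        "InterfaceType": "efa",                  # attach an EFA instead of a standard ENI
        "Groups": ["sg-0abc1234"],               # placeholder security group
    }],
)
```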

The other solutions are not suitable for optimizing the HPC environment for networking and storage. AWS Global Accelerator is a networking service that helps you improve the availability, performance, and security of your public applications by using the AWS global network. It provides two global static public IPs, deterministic routing, fast failover, and TCP termination at the edge for your application endpoints. However, it does not support OS-bypass capabilities or high-performance file systems that are required for HPC and machine learning applications. For more information, see AWS Global Accelerator.

Amazon CloudFront is a content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency, high transfer speeds, all within a developer-friendly environment. CloudFront is integrated with AWS services such as Amazon S3, Amazon EC2, AWS Elemental Media Services, AWS Shield, AWS WAF, and AWS Lambda@Edge. However, CloudFront is not designed for HPC and machine learning applications that require high levels of inter-node communications and fast storage. For more information, see [Amazon CloudFront].

AWS Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services developed with Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker on familiar servers such as Apache, Nginx, Passenger, and IIS. You can simply upload your code and Elastic Beanstalk automatically handles the deployment, from capacity provisioning, load balancing, auto-scaling to application health monitoring. However, Elastic Beanstalk is not optimized for HPC and machine learning applications that require OS-bypass capabilities and high-performance file systems. For more information, see [AWS Elastic Beanstalk].

A solutions architect wants to use the following JSON text as an identity-based policy to grant specific permissions:

Which IAM principals can the solutions architect attach this policy to? (Select TWO.)

A. Role
B. Group
C. Organization
D. Amazon Elastic Container Service (Amazon ECS) resource
E. Amazon EC2 resource

Suggested answer: A, B

Explanation:

This JSON text is an identity-based policy that grants specific permissions. The IAM principals that the solutions architect can attach this policy to are a role and a group, because identity-based policies can be attached only to IAM identities (users, groups, or roles). Identity-based policies are permissions policies that you attach to IAM identities and that explicitly state what the identity is allowed (or denied) to do. They are different from resource-based policies, which define permissions around a specific resource. Resource-based policies are attached to a resource, such as an Amazon S3 bucket, and can also specify a principal, which is the entity that is allowed or denied access to the resource. An organization is not an IAM principal; it is a construct of AWS Organizations, which lets you manage multiple AWS accounts centrally. Amazon ECS resources and Amazon EC2 resources are not IAM principals either; they are AWS resources that can have resource-based policies attached to them.

Identity-based policies and resource-based policies

AWS Organizations

Amazon ECS task role

Amazon EC2 instance profile
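
For illustration only, a minimal boto3 sketch that attaches an identity-based policy to a role and to a group; the policy document below is hypothetical because the question's JSON is not reproduced on this page:

```python
import json
import boto3

iam = boto3.client("iam")

# Hypothetical identity-based policy document (not the question's actual JSON).
policy_doc = json.dumps({
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow", "Action": "s3:ListAllMyBuckets", "Resource": "*"}],
})

# Identity-based policies attach to IAM identities such as roles and groups.
iam.put_role_policy(RoleName="ExampleRole", PolicyName="ExamplePolicy",
                    PolicyDocument=policy_doc)
iam.put_group_policy(GroupName="ExampleGroup", PolicyName="ExamplePolicy",
                     PolicyDocument=policy_doc)
```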

A company has a three-tier environment on AWS that ingests sensor data from its users' devices. The traffic flows through a Network Load Balancer (NLB), then to Amazon EC2 instances for the web tier, and finally to EC2 instances for the application tier that makes database calls.

What should a solutions architect do to improve the security of data in transit to the web tier?

A. Configure a TLS listener and add the server certificate on the NLB.
B. Configure AWS Shield Advanced and enable AWS WAF on the NLB.
C. Change the load balancer to an Application Load Balancer and attach AWS WAF to it.
D. Encrypt the Amazon Elastic Block Store (Amazon EBS) volume on the EC2 instances using AWS Key Management Service (AWS KMS).

Suggested answer: A

Explanation:

Terminating TLS on the NLB with a server certificate encrypts traffic between clients and the web tier, which protects data in transit. This maps to the AWS Well-Architected security question SEC 9: How do you protect your data in transit?

Best Practices:

Implement secure key and certificate management: Store encryption keys and certificates securely and rotate them at appropriate time intervals while applying strict access control; for example, by using a certificate management service, such as AWS Certificate Manager (ACM).

Enforce encryption in transit: Enforce your defined encryption requirements based on appropriate standards and recommendations to help you meet your organizational, legal, and compliance requirements.

Automate detection of unintended data access: Use tools such as GuardDuty to automatically detect attempts to move data outside of defined boundaries based on data classification level, for example, to detect a trojan that is copying data to an unknown or untrusted network using the DNS protocol.

Authenticate network communications: Verify the identity of communications by using protocols that support authentication, such as Transport Layer Security (TLS) or IPsec.

https://wa.aws.amazon.com/wat.question.SEC_9.en.html
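
For illustration only, a minimal boto3 sketch of option A, using placeholder load balancer, certificate, and target group ARNs:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Option A: add a TLS listener with an ACM server certificate to the existing NLB,
# so traffic from clients to the web tier is encrypted in transit.
elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/net/sensor-nlb/0123456789abcdef",  # placeholder
    Protocol="TLS",
    Port=443,
    Certificates=[{"CertificateArn": "arn:aws:acm:us-east-1:111122223333:certificate/example-cert-id"}],  # placeholder
    DefaultActions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/web-tier/0123456789abcdef",  # placeholder
    }],
)
```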



A company has a web application that includes an embedded NoSQL database. The application runs on Amazon EC2 instances behind an Application Load Balancer (ALB). The instances run in an Amazon EC2 Auto Scaling group in a single Availability Zone.

A recent increase in traffic requires the application to be highly available and the database to be eventually consistent.

Which solution will meet these requirements with the LEAST operational overhead?

A. Replace the ALB with a Network Load Balancer. Maintain the embedded NoSQL database with its replication service on the EC2 instances.
B. Replace the ALB with a Network Load Balancer. Migrate the embedded NoSQL database to Amazon DynamoDB by using AWS Database Migration Service (AWS DMS).
C. Modify the Auto Scaling group to use EC2 instances across three Availability Zones. Maintain the embedded NoSQL database with its replication service on the EC2 instances.
D. Modify the Auto Scaling group to use EC2 instances across three Availability Zones. Migrate the embedded NoSQL database to Amazon DynamoDB by using AWS Database Migration Service (AWS DMS).

Suggested answer: D

Explanation:

This solution will meet the requirements of high availability and eventual consistency with the least operational overhead. By modifying the Auto Scaling group to use EC2 instances across three Availability Zones, the web application can handle the increase in traffic and tolerate the failure of one or two Availability Zones. By migrating the embedded NoSQL database to Amazon DynamoDB, the company can benefit from a fully managed, scalable, and reliable NoSQL database service that supports eventual consistency. AWS Database Migration Service (AWS DMS) is a cloud service that makes it easy to migrate relational databases, data warehouses, NoSQL databases, and other types of data stores. AWS DMS can migrate the embedded NoSQL database to Amazon DynamoDB with minimal downtime and zero data loss.
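
For illustration only, a minimal boto3 sketch of an eventually consistent read against a hypothetical DynamoDB table that the DMS migration would create:

```python
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("AppData")      # hypothetical table produced by the DMS migration

# DynamoDB reads are eventually consistent by default (ConsistentRead=False),
# which matches the requirement for an eventually consistent database.
response = table.get_item(
    Key={"id": "item-0001"},           # hypothetical key schema
    ConsistentRead=False,
)
item = response.get("Item")
```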

A company is building a shopping application on AWS. The application offers a catalog that changes once each month and needs to scale with traffic volume. The company wants the lowest possible latency from the application. Data from each user's shopping cart needs to be highly available. User session data must be available even if the user is disconnected and reconnects.

What should a solutions architect do to ensure that the shopping cart data is preserved at all times?

A. Configure an Application Load Balancer to enable the sticky sessions feature (session affinity) for access to the catalog in Amazon Aurora.
B. Configure Amazon ElastiCache for Redis to cache catalog data from Amazon DynamoDB and shopping cart data from the user's session.
C. Configure Amazon OpenSearch Service to cache catalog data from Amazon DynamoDB and shopping cart data from the user's session.
D. Configure an Amazon EC2 instance with Amazon Elastic Block Store (Amazon EBS) storage for the catalog and shopping cart. Configure automated snapshots.

Suggested answer: B

Explanation:

To ensure that the shopping cart data is preserved at all times, a solutions architect should configure Amazon ElastiCache for Redis to cache catalog data from Amazon DynamoDB and shopping cart data from the user's session. This solution has the following benefits:

It offers the lowest possible latency from the application, as ElastiCache for Redis is a blazing fast in-memory data store that provides sub-millisecond latency to power internet-scale real-time applications (1).

It scales with traffic volume, as ElastiCache for Redis supports horizontal scaling by adding more nodes or shards to the cluster, and vertical scaling by changing the node type (2).

It is highly available, as ElastiCache for Redis supports replication across multiple Availability Zones and automatic failover in case of a primary node failure (3).

It preserves user session data even if the user is disconnected and reconnects, as ElastiCache for Redis can store session data, such as user login information and shopping cart contents, in a persistent and durable manner using snapshots or AOF (append-only file) persistence (4).

1: https://aws.amazon.com/elasticache/redis/

2: https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/Scaling.html

3: https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/Replication.html

4: https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/backups.html
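
For illustration only, a minimal Python (redis-py) sketch that stores and reads back a user's cart under a session key against a hypothetical ElastiCache for Redis endpoint:

```python
import redis  # redis-py client

# Hypothetical ElastiCache for Redis primary endpoint.
r = redis.Redis(host="my-cache.abc123.ng.0001.use1.cache.amazonaws.com", port=6379)

# Store the user's cart under a session key with a TTL so it survives
# disconnects and reconnects.
r.hset("cart:session-1234", mapping={"sku-001": 2, "sku-042": 1})
r.expire("cart:session-1234", 86400)   # keep the cart for 24 hours

# Read the cart back when the user reconnects.
cart = r.hgetall("cart:session-1234")
```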

A company is designing a new web service that will run on Amazon EC2 instances behind an Elastic Load Balancing (ELB) load balancer. However, many of the web service clients can only reach IP addresses authorized on their firewalls.

What should a solutions architect recommend to meet the clients' needs?

A. A Network Load Balancer with an associated Elastic IP address.
B. An Application Load Balancer with an associated Elastic IP address.
C. An A record in an Amazon Route 53 hosted zone pointing to an Elastic IP address.
D. An EC2 instance with a public IP address running as a proxy in front of the load balancer.

Suggested answer: A

Explanation:

A Network Load Balancer can be assigned one Elastic IP address for each Availability Zone it uses. This allows the clients to reach the load balancer using static IP addresses that can be authorized on their firewalls. An Application Load Balancer cannot be assigned an Elastic IP address. An A record in an Amazon Route 53 hosted zone pointing to an Elastic IP address would not work because the load balancer would still use its own IP address as the source of the requests forwarded to the web service. An EC2 instance with a public IP address running as a proxy in front of the load balancer would add unnecessary complexity and cost, and would not provide the same scalability and availability as a Network Load Balancer.

Reference: Network Load Balancers - Elastic Load Balancing, IP address type section; How to assign Elastic IP to Application Load Balancer in AWS?, answer section.
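
For illustration only, a minimal boto3 sketch of option A, allocating an Elastic IP and pinning it to the NLB's subnet mapping (placeholder subnet ID):

```python
import boto3

ec2 = boto3.client("ec2")
elbv2 = boto3.client("elbv2")

# Allocate one Elastic IP per Availability Zone the NLB will use.
eip = ec2.allocate_address(Domain="vpc")

# Create the NLB and pin the Elastic IP to its subnet mapping, giving clients
# a static address they can authorize on their firewalls.
elbv2.create_load_balancer(
    Name="web-service-nlb",
    Type="network",
    Scheme="internet-facing",
    SubnetMappings=[{
        "SubnetId": "subnet-0abc1234",           # placeholder subnet ID
        "AllocationId": eip["AllocationId"],
    }],
)
```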

A company runs a Java-based job on an Amazon EC2 instance. The job runs every hour and takes 10 seconds to run. The job runs on a scheduled interval and consumes 1 GB of memory. The CPU utilization of the instance is low except for short surges during which the job uses the maximum CPU available. The company wants to optimize the costs to run the job.

Which solution will meet these requirements?

A. Use AWS App2Container (A2C) to containerize the job. Run the job as an Amazon Elastic Container Service (Amazon ECS) task on AWS Fargate with 0.5 virtual CPU (vCPU) and 1 GB of memory.
B. Copy the code into an AWS Lambda function that has 1 GB of memory. Create an Amazon EventBridge scheduled rule to run the code each hour.
C. Use AWS App2Container (A2C) to containerize the job. Install the container in the existing Amazon Machine Image (AMI). Ensure that the schedule stops the container when the task finishes.
D. Configure the existing schedule to stop the EC2 instance at the completion of the job and restart the EC2 instance when the next job starts.

Suggested answer: B

Explanation:

AWS Lambda is a serverless compute service that allows you to run code without provisioning or managing servers. You can create Lambda functions in various languages, including Java, and specify the amount of memory allocated to your function; CPU is allocated in proportion to the memory. Lambda charges you only for the compute time you consume, which is calculated from the number of requests and the duration of your code execution. You can use Amazon EventBridge to trigger your Lambda function on a schedule, such as every hour, by using cron or rate expressions. This solution optimizes the cost of running the job because you do not pay for idle time or unused resources, unlike running the job on an EC2 instance.

Reference: AWS Lambda - FAQs, General Information section; Tutorial: Schedule AWS Lambda functions using EventBridge, Introduction section; Schedule expressions using rate or cron - AWS Lambda, Introduction section.
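
For illustration only, a minimal boto3 sketch of option B, wiring a hypothetical hourly EventBridge rule to a hypothetical Lambda function:

```python
import boto3

events = boto3.client("events")
lambda_client = boto3.client("lambda")

# Hourly EventBridge rule that triggers the Lambda function containing the job code.
events.put_rule(Name="hourly-job", ScheduleExpression="rate(1 hour)", State="ENABLED")
events.put_targets(
    Rule="hourly-job",
    Targets=[{"Id": "job-lambda",
              "Arn": "arn:aws:lambda:us-east-1:111122223333:function:hourly-job"}],  # placeholder
)

# Allow EventBridge to invoke the function.
lambda_client.add_permission(
    FunctionName="hourly-job",
    StatementId="allow-eventbridge",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn="arn:aws:events:us-east-1:111122223333:rule/hourly-job",  # placeholder
)
```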

A media company stores movies in Amazon S3. Each movie is stored in a single video file that ranges from 1 GB to 10 GB in size.

The company must be able to provide the streaming content of a movie within 5 minutes of a user purchase. There is higher demand for movies that are less than 20 years old than for movies that are more than 20 years old. The company wants to minimize hosting service costs based on demand.

Which solution will meet these requirements?

A. Store all media content in Amazon S3. Use S3 Lifecycle policies to move media data into the Infrequent Access tier when the demand for a movie decreases.
B. Store newer movie video files in S3 Standard. Store older movie video files in S3 Standard-Infrequent Access (S3 Standard-IA). When a user orders an older movie, retrieve the video file by using standard retrieval.
C. Store newer movie video files in S3 Intelligent-Tiering. Store older movie video files in S3 Glacier Flexible Retrieval. When a user orders an older movie, retrieve the video file by using expedited retrieval.
D. Store newer movie video files in S3 Standard. Store older movie video files in S3 Glacier Flexible Retrieval. When a user orders an older movie, retrieve the video file by using bulk retrieval.

Suggested answer: C

Explanation:

This solution will meet the requirements of minimizing hosting service costs based on demand and providing the streaming content of a movie within 5 minutes of a user purchase. S3 Intelligent-Tiering is a storage class that automatically optimizes storage costs by moving data to the most cost-effective access tier when access patterns change. It is suitable for data with unknown, changing, or unpredictable access patterns, such as newer movies that may have higher demand. S3 Glacier Flexible Retrieval is a storage class that provides low-cost storage for archive data that is retrieved asynchronously. It offers flexible data retrieval options from minutes to hours, and free bulk retrievals in 5-12 hours. It is ideal for backup, disaster recovery, and offsite data storage needs. By using expedited retrieval, the user can access the older movie video file in 1-5 minutes, which meets the requirement of 5 minutes.
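
For illustration only, a minimal boto3 sketch of an expedited restore from S3 Glacier Flexible Retrieval, using a hypothetical bucket and object key:

```python
import boto3

s3 = boto3.client("s3")

# Expedited restore of an older movie stored in S3 Glacier Flexible Retrieval;
# expedited retrievals typically complete in 1-5 minutes.
s3.restore_object(
    Bucket="movie-archive",                       # hypothetical bucket
    Key="older/movie-1998.mp4",                   # hypothetical object key
    RestoreRequest={
        "Days": 1,                                # keep the restored copy available for 1 day
        "GlacierJobParameters": {"Tier": "Expedited"},
    },
)
```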

A company copies 200 TB of data from a recent ocean survey onto AWS Snowball Edge Storage Optimized devices. The company has a high performance computing (HPC) cluster that is hosted on AWS to look for oil and gas deposits. A solutions architect must provide the cluster with consistent sub-millisecond latency and high-throughput access to the data on the Snowball Edge Storage Optimized devices. The company is sending the devices back to AWS.

Which solution will meet these requirements?

A. Create an Amazon S3 bucket. Import the data into the S3 bucket. Configure an AWS Storage Gateway file gateway to use the S3 bucket. Access the file gateway from the HPC cluster instances.
B. Create an Amazon S3 bucket. Import the data into the S3 bucket. Configure an Amazon FSx for Lustre file system, and integrate it with the S3 bucket. Access the FSx for Lustre file system from the HPC cluster instances.
C. Create an Amazon S3 bucket and an Amazon Elastic File System (Amazon EFS) file system. Import the data into the S3 bucket. Copy the data from the S3 bucket to the EFS file system. Access the EFS file system from the HPC cluster instances.
D. Create an Amazon FSx for Lustre file system. Import the data directly into the FSx for Lustre file system. Access the FSx for Lustre file system from the HPC cluster instances.

Suggested answer: B

Explanation:

To provide the HPC cluster with consistent sub-millisecond latency and high-throughput access to the data on the Snowball Edge Storage Optimized devices, a solutions architect should configure an Amazon FSx for Lustre file system, and integrate it with an Amazon S3 bucket. This solution has the following benefits:

It allows the HPC cluster to access the data on the Snowball Edge devices using a POSIX-compliant file system that is optimized for fast processing of large datasets (1).

It enables the data to be imported from the Snowball Edge devices into the S3 bucket by using the AWS Snow Family Console or the AWS CLI (2). The data can then be accessed from the FSx for Lustre file system by using the S3 integration feature (3).

It supports high availability and durability of the data, because the FSx for Lustre file system can automatically copy the data to and from the S3 bucket (3). The data can also be accessed from other AWS services or applications by using the S3 API (4).

1: https://aws.amazon.com/fsx/lustre/

2: https://docs.aws.amazon.com/snowball/latest/developer-guide/using-adapter.html

3: https://docs.aws.amazon.com/fsx/latest/LustreGuide/create-fs-linked-data-repo.html

4: https://docs.aws.amazon.com/fsx/latest/LustreGuide/export-data-repo.html
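
For illustration only, a minimal boto3 sketch of option B, creating a Lustre file system linked to a hypothetical S3 bucket that received the Snowball import:

```python
import boto3

fsx = boto3.client("fsx")

# FSx for Lustre file system linked to the S3 bucket that received the Snowball
# import, so the HPC cluster sees the survey data as a high-performance POSIX
# file system. Capacity here is illustrative; a real deployment would be sized
# for the ~200 TB dataset.
fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=12000,                        # GiB, illustrative size
    SubnetIds=["subnet-0abc1234"],                # placeholder subnet ID
    LustreConfiguration={
        "DeploymentType": "PERSISTENT_1",
        "PerUnitStorageThroughput": 200,          # MB/s per TiB
        "ImportPath": "s3://ocean-survey-data",   # hypothetical bucket with the imported data
        "ExportPath": "s3://ocean-survey-data/processed",
    },
)
```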

A company's marketing data is uploaded from multiple sources to an Amazon S3 bucket. A series of data preparation jobs aggregate the data for reporting. The data preparation jobs need to run at regular intervals in parallel. A few jobs need to run in a specific order later.

The company wants to remove the operational overhead of job error handling, retry logic, and state management.

Which solution will meet these requirements?

A. Use an AWS Lambda function to process the data as soon as the data is uploaded to the S3 bucket. Invoke other Lambda functions at regularly scheduled intervals.
B. Use Amazon Athena to process the data. Use Amazon EventBridge Scheduler to invoke Athena on a regular interval.
C. Use AWS Glue DataBrew to process the data. Use an AWS Step Functions state machine to run the DataBrew data preparation jobs.
D. Use AWS Data Pipeline to process the data. Schedule Data Pipeline to process the data once at midnight.

Suggested answer: C

Explanation:

AWS Glue DataBrew is a visual data preparation tool that allows you to easily clean, normalize, and transform your data without writing any code. You can create and run data preparation jobs on your data stored in Amazon S3, Amazon Redshift, or other data sources. AWS Step Functions is a service that lets you coordinate multiple AWS services into serverless workflows. You can use Step Functions to orchestrate your DataBrew jobs, define the order and parallelism of execution, handle errors and retries, and monitor the state of your workflow. By using AWS Glue DataBrew and AWS Step Functions, you can meet the requirements of the company with minimal operational overhead, as you do not need to write any code, manage any servers, or deal with complex dependencies.

AWS Glue DataBrew

AWS Step Functions

Orchestrate AWS Glue DataBrew jobs using AWS Step Functions
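
For illustration only, a minimal boto3 sketch of option C, registering a hypothetical Step Functions state machine that runs two DataBrew jobs in parallel and then a dependent job; Step Functions supplies the error handling, retry logic, and state management:

```python
import json
import boto3

sfn = boto3.client("stepfunctions")

# Hypothetical workflow: two DataBrew preparation jobs run in parallel, then a
# dependent aggregation job runs afterward.
definition = {
    "StartAt": "PrepareInParallel",
    "States": {
        "PrepareInParallel": {
            "Type": "Parallel",
            "Branches": [
                {"StartAt": "JobA", "States": {"JobA": {
                    "Type": "Task",
                    "Resource": "arn:aws:states:::databrew:startJobRun.sync",
                    "Parameters": {"Name": "prep-job-a"},      # hypothetical DataBrew job
                    "End": True}}},
                {"StartAt": "JobB", "States": {"JobB": {
                    "Type": "Task",
                    "Resource": "arn:aws:states:::databrew:startJobRun.sync",
                    "Parameters": {"Name": "prep-job-b"},      # hypothetical DataBrew job
                    "End": True}}},
            ],
            "Next": "AggregateJob",
        },
        "AggregateJob": {
            "Type": "Task",
            "Resource": "arn:aws:states:::databrew:startJobRun.sync",
            "Parameters": {"Name": "aggregate-job"},           # hypothetical DataBrew job
            "End": True,
        },
    },
}

sfn.create_state_machine(
    name="databrew-prep-workflow",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::111122223333:role/StepFunctionsDataBrewRole",  # placeholder role
)
```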
