Amazon SAA-C03 Practice Test - Questions Answers, Page 64
List of questions
Question 631
A company is designing a tightly coupled high performance computing (HPC) environment in the AWS Cloud. The company needs to include features that will optimize the HPC environment for networking and storage.
Which combination of solutions will meet these requirements? (Select TWO.)
Explanation:
These two solutions will optimize the HPC environment for networking and storage. Amazon FSx for Lustre is a fully managed service that provides cost-effective, high-performance, scalable storage for compute workloads. It is built on the world's most popular high-performance file system, Lustre, which is designed for applications that require fast storage, such as HPC and machine learning. By configuring the file system with scratch storage, you can achieve sub-millisecond latencies, hundreds of GB/s of throughput, and millions of IOPS. Scratch file systems are ideal for temporary storage and shorter-term processing of data. Data is not replicated and does not persist if a file server fails. For more information, see Amazon FSx for Lustre.
Elastic Fabric Adapter (EFA) is a network interface for Amazon EC2 instances that enables customers to run applications requiring high levels of inter-node communication at scale on AWS. Its custom-built operating system (OS) bypass hardware interface enhances the performance of inter-instance communications, which is critical to scaling HPC and machine learning applications. EFA provides a low-latency, low-jitter channel for inter-instance communications, enabling your tightly coupled HPC or distributed machine learning applications to scale to thousands of cores. EFA uses the libfabric interface and libfabric APIs for communications, which are supported by most HPC programming models. For more information, see Elastic Fabric Adapter.
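As a rough illustration of how these two features are provisioned together, the boto3 sketch below creates a scratch FSx for Lustre file system and launches an EFA-enabled compute node. The subnet, security group, AMI, and instance type are placeholders, not values from the question.

```python
import boto3

# Placeholders -- substitute your own subnet, security group, and AMI.
SUBNET_ID = "subnet-0123456789abcdef0"
SECURITY_GROUP_ID = "sg-0123456789abcdef0"
HPC_AMI_ID = "ami-0123456789abcdef0"

# Scratch deployment type: highest throughput per TiB, no replication,
# intended for short-lived, high-performance working storage.
fsx = boto3.client("fsx")
fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=1200,  # GiB
    SubnetIds=[SUBNET_ID],
    SecurityGroupIds=[SECURITY_GROUP_ID],
    LustreConfiguration={"DeploymentType": "SCRATCH_2"},
)

# Launch a compute node with an Elastic Fabric Adapter for
# low-latency, OS-bypass inter-node communication.
ec2 = boto3.client("ec2")
ec2.run_instances(
    ImageId=HPC_AMI_ID,
    InstanceType="c5n.18xlarge",  # an EFA-capable instance type
    MinCount=1,
    MaxCount=1,
    NetworkInterfaces=[
        {
            "DeviceIndex": 0,
            "InterfaceType": "efa",
            "SubnetId": SUBNET_ID,
            "Groups": [SECURITY_GROUP_ID],
        }
    ],
)
```

In practice the EFA instances would also be placed in a cluster placement group and all mount the same Lustre file system.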
The other solutions are not suitable for optimizing the HPC environment for networking and storage. AWS Global Accelerator is a networking service that helps you improve the availability, performance, and security of your public applications by using the AWS global network. It provides two global static public IPs, deterministic routing, fast failover, and TCP termination at the edge for your application endpoints. However, it does not support the OS-bypass capabilities or high-performance file systems that HPC and machine learning applications require. For more information, see AWS Global Accelerator.
Amazon CloudFront is a content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency and high transfer speeds, all within a developer-friendly environment. CloudFront is integrated with AWS services such as Amazon S3, Amazon EC2, AWS Elemental Media Services, AWS Shield, AWS WAF, and AWS Lambda@Edge. However, CloudFront is not designed for HPC and machine learning applications that require high levels of inter-node communication and fast storage. For more information, see Amazon CloudFront.
AWS Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services developed with Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker on familiar servers such as Apache, Nginx, Passenger, and IIS. You simply upload your code, and Elastic Beanstalk automatically handles the deployment, from capacity provisioning, load balancing, and auto scaling to application health monitoring. However, Elastic Beanstalk is not optimized for HPC and machine learning applications that require OS-bypass capabilities and high-performance file systems. For more information, see AWS Elastic Beanstalk.
Question 632
A solutions architect wants to use the following JSON text as an identity-based policy to grant specific permissions:
Which IAM principals can the solutions architect attach this policy to? (Select TWO.)
Explanation:
This JSON text is an identity-based policy that grants specific permissions. The IAM principals that the solutions architect can attach this policy to are a role and a group, because identity-based policies written in JSON can be attached to IAM identities: users, groups, and roles. Identity-based policies are permissions policies that you attach to IAM identities and that explicitly state what the identity is allowed (or denied) to do [1]. They differ from resource-based policies, which define the permissions around a specific resource [1]. Resource-based policies are attached to a resource, such as an Amazon S3 bucket or an Amazon EC2 instance, and can also specify a principal, which is the entity that is allowed or denied access to the resource [1]. An organization is not an IAM principal but a feature of AWS Organizations that lets you manage multiple AWS accounts centrally [2]. An Amazon ECS resource and an Amazon EC2 resource are not IAM principals either; they are AWS resources that can have resource-based policies attached to them [3][4].
1: Identity-based policies and resource-based policies
2: AWS Organizations
3: Amazon ECS task role
4: Amazon EC2 instance profile
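To make the two valid attachment targets concrete, here is a minimal boto3 sketch that creates a customer managed policy from such a JSON document and attaches it to a role and to a group. The policy body, role name, and group name are illustrative stand-ins, not the ones from the question.

```python
import json

import boto3

iam = boto3.client("iam")

# Illustrative identity-based policy document (stand-in for the one
# shown in the question).
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "s3:ListBucket", "Resource": "*"}
    ],
}

policy = iam.create_policy(
    PolicyName="example-identity-policy",  # hypothetical name
    PolicyDocument=json.dumps(policy_document),
)
policy_arn = policy["Policy"]["Arn"]

# Identity-based policies attach to IAM identities: users, groups, and roles.
iam.attach_role_policy(RoleName="example-role", PolicyArn=policy_arn)
iam.attach_group_policy(GroupName="example-group", PolicyArn=policy_arn)
```

There is no equivalent attach call for an organization or for an ECS or EC2 resource, which is why those choices are incorrect.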
Question 633
A company has a three-tier environment on AWS that ingests sensor data from its users' devices. The traffic flows through a Network Load Balancer (NLB), then to Amazon EC2 instances for the web tier, and finally to EC2 instances for the application tier that makes database calls.
What should a solutions architect do to improve the security of data in transit to the web tier?
Explanation:
From the AWS Well-Architected Framework, SEC 9: How do you protect your data in transit?
Best Practices:
Implement secure key and certificate management: Store encryption keys and certificates securely and rotate them at appropriate time intervals while applying strict access control; for example, by using a certificate management service, such as AWS Certificate Manager (ACM).
Enforce encryption in transit: Enforce your defined encryption requirements based on appropriate standards and recommendations to help you meet your organizational, legal, and compliance requirements.
Automate detection of unintended data access: Use tools such as GuardDuty to automatically detect attempts to move data outside of defined boundaries based on data classification level, for example, to detect a trojan that is copying data to an unknown or untrusted network using the DNS protocol.
Authenticate network communications: Verify the identity of communications by using protocols that support authentication, such as Transport Layer Security (TLS) or IPsec.
https://wa.aws.amazon.com/wat.question.SEC_9.en.html
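Applied to this scenario, enforcing encryption in transit means terminating TLS on the NLB with a certificate from ACM. A minimal boto3 sketch, assuming placeholder ARNs for the load balancer, target group, and certificate:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Placeholder ARNs -- substitute the real NLB, target group, and ACM cert.
NLB_ARN = "arn:aws:elasticloadbalancing:region:acct:loadbalancer/net/example/abc"
TARGET_GROUP_ARN = "arn:aws:elasticloadbalancing:region:acct:targetgroup/web/abc"
CERTIFICATE_ARN = "arn:aws:acm:region:acct:certificate/abc"

# A TLS listener on the NLB encrypts data in transit to the web tier
# and uses a certificate managed (and rotated) by ACM.
elbv2.create_listener(
    LoadBalancerArn=NLB_ARN,
    Protocol="TLS",
    Port=443,
    Certificates=[{"CertificateArn": CERTIFICATE_ARN}],
    SslPolicy="ELBSecurityPolicy-TLS13-1-2-2021-06",
    DefaultActions=[{"Type": "forward", "TargetGroupArn": TARGET_GROUP_ARN}],
)
```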
Question 634
A company has a web application that includes an embedded NoSQL database. The application runs on Amazon EC2 instances behind an Application Load Balancer (ALB). The instances run in an Amazon EC2 Auto Scaling group in a single Availability Zone.
A recent increase in traffic requires the application to be highly available and the database to be eventually consistent.
Which solution will meet these requirements with the LEAST operational overhead?
Explanation:
This solution will meet the requirements of high availability and eventual consistency with the least operational overhead. By modifying the Auto Scaling group to use EC2 instances across three Availability Zones, the web application can handle the increase in traffic and tolerate the failure of one or two Availability Zones. By migrating the embedded NoSQL database to Amazon DynamoDB, the company can benefit from a fully managed, scalable, and reliable NoSQL database service that supports eventual consistency. AWS Database Migration Service (AWS DMS) is a cloud service that makes it easy to migrate relational databases, data warehouses, NoSQL databases, and other types of data stores. AWS DMS can migrate the embedded NoSQL database to Amazon DynamoDB with minimal downtime and zero data loss.
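A minimal sketch of the two pieces, assuming hypothetical subnet IDs, group name, and table name: spread the Auto Scaling group across three Availability Zones, then read from DynamoDB with its default eventually consistent reads.

```python
import boto3

# Spread the web tier across three Availability Zones
# (one placeholder subnet per AZ).
autoscaling = boto3.client("autoscaling")
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="web-asg",  # hypothetical group name
    VPCZoneIdentifier="subnet-aaa111,subnet-bbb222,subnet-ccc333",
    MinSize=3,
    MaxSize=9,
)

# DynamoDB reads are eventually consistent by default, which matches
# the stated consistency requirement.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("app-data")  # hypothetical table name
item = table.get_item(Key={"pk": "item-1"})  # ConsistentRead defaults to False
```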
Question 635
A company is building a shopping application on AWS. The application offers a catalog that changes once each month and needs to scale with traffic volume. The company wants the lowest possible latency from the application. Data from each user's shopping cart needs to be highly available. User session data must be available even if the user is disconnected and reconnects.
What should a solutions architect do to ensure that the shopping cart data is preserved at all times?
Explanation:
To ensure that the shopping cart data is preserved at all times, a solutions architect should configure Amazon ElastiCache for Redis to cache catalog data from Amazon DynamoDB and shopping cart data from the user's session. This solution has the following benefits:
It offers the lowest possible latency from the application, as ElastiCache for Redis is a blazing fast in-memory data store that provides sub-millisecond latency to power internet-scale real-time applications [1].
It scales with traffic volume, as ElastiCache for Redis supports horizontal scaling by adding more nodes or shards to the cluster, and vertical scaling by changing the node type [2].
It is highly available, as ElastiCache for Redis supports replication across multiple Availability Zones and automatic failover in case of a primary node failure [3].
It preserves user session data even if the user is disconnected and reconnects, as ElastiCache for Redis can store session data, such as user login information and shopping cart contents, in a persistent and durable manner using snapshots or AOF (append-only file) persistence [4].
1: https://aws.amazon.com/elasticache/redis/
2: https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/Scaling.html
3: https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/Replication.html
4: https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/backups.html
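For illustration, a minimal redis-py sketch of the session/cart pattern described above, assuming a hypothetical ElastiCache endpoint. The cart is keyed by session ID and given a TTL so it survives a disconnect and reconnect within the expiry window.

```python
import redis

# Hypothetical ElastiCache for Redis endpoint (in-transit encryption on).
r = redis.Redis(
    host="my-cluster.xxxxxx.ng.0001.use1.cache.amazonaws.com",
    port=6379,
    ssl=True,
)

session_id = "session:user-42"

# Store the shopping cart as a hash keyed by the session ID.
r.hset(session_id, mapping={"sku-123": 2, "sku-456": 1})

# Keep the cart for 7 days so a disconnected user can reconnect
# and find it intact.
r.expire(session_id, 7 * 24 * 3600)

# On reconnect, the cart is read straight back from Redis.
cart = r.hgetall(session_id)
```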
Question 636
A company is designing a new web service that will run on Amazon EC2 instances behind an Elastic Load Balancing (ELB) load balancer. However, many of the web service clients can only reach IP addresses authorized on their firewalls.
What should a solutions architect recommend to meet the clients' needs?
Explanation:
A Network Load Balancer can be assigned one Elastic IP address for each Availability Zone it uses [1]. This allows the clients to reach the load balancer through static IP addresses that they can authorize on their firewalls. An Application Load Balancer cannot be assigned an Elastic IP address [2]. An A record in an Amazon Route 53 hosted zone pointing to an Elastic IP address would not work, because the load balancer would still use its own IP address as the source of the requests it forwards to the web service. An EC2 instance with a public IP address running as a proxy in front of the load balancer would add unnecessary complexity and cost, and would not provide the same scalability and availability as a Network Load Balancer.
1: Network Load Balancers - Elastic Load Balancing, IP address type section
2: How to assign Elastic IP to Application Load Balancer in AWS?, answer section
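A hedged boto3 sketch of the recommended setup: allocate one Elastic IP per Availability Zone and attach each to the Network Load Balancer through subnet mappings. The subnet IDs and load balancer name are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")
elbv2 = boto3.client("elbv2")

# One subnet per Availability Zone the NLB will serve (placeholders).
subnet_ids = ["subnet-aaa111", "subnet-bbb222"]

# Allocate one Elastic IP per AZ; these static addresses are what the
# clients authorize on their firewalls.
allocations = [ec2.allocate_address(Domain="vpc") for _ in subnet_ids]

elbv2.create_load_balancer(
    Name="web-service-nlb",  # hypothetical name
    Type="network",
    SubnetMappings=[
        {"SubnetId": subnet, "AllocationId": alloc["AllocationId"]}
        for subnet, alloc in zip(subnet_ids, allocations)
    ],
)
```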
Question 637
A company runs a Java-based job on an Amazon EC2 instance. The job runs every hour and takes 10 seconds to complete. The job runs on a scheduled interval and consumes 1 GB of memory. The CPU utilization of the instance is low except for short surges during which the job uses the maximum CPU available. The company wants to optimize the costs to run the job.
Which solution will meet these requirements?
Explanation:
AWS Lambda is a serverless compute service that allows you to run code without provisioning or managing servers. You can create Lambda functions in various languages, including Java, and specify the amount of memory allocated to your function (CPU scales with memory). Lambda charges you only for the compute you consume, calculated from the number of requests and the duration of your code's execution. You can use Amazon EventBridge to trigger your Lambda function on a schedule, such as every hour, using cron or rate expressions. This solution optimizes the cost of running the job, because you do not pay for idle time or unused resources, unlike running the job on an EC2 instance.
1: AWS Lambda - FAQs, General Information section
2: Tutorial: Schedule AWS Lambda functions using EventBridge, Introduction section
3: Schedule expressions using rate or cron - AWS Lambda, Introduction section
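As a sketch, the EventBridge side of this looks like the following with boto3. The function name and ARN are placeholders; the function itself would contain the ported Java job.

```python
import boto3

events = boto3.client("events")
lambda_client = boto3.client("lambda")

FUNCTION_ARN = "arn:aws:lambda:us-east-1:123456789012:function:hourly-job"

# Run once per hour; a cron expression such as "cron(0 * * * ? *)"
# would work equally well.
rule = events.put_rule(
    Name="hourly-job-schedule",
    ScheduleExpression="rate(1 hour)",
)

# Allow EventBridge to invoke the function, then wire it up as the target.
lambda_client.add_permission(
    FunctionName="hourly-job",
    StatementId="allow-eventbridge",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn=rule["RuleArn"],
)
events.put_targets(
    Rule="hourly-job-schedule",
    Targets=[{"Id": "hourly-job-target", "Arn": FUNCTION_ARN}],
)
```

At 10 seconds of execution per hour, the job is billed for roughly 240 seconds of compute per day instead of a continuously running instance.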
Question 638
A media company stores movies in Amazon S3. Each movie is stored in a single video file that ranges from 1 GB to 10 GB in size.
The company must be able to provide the streaming content of a movie within 5 minutes of a user purchase. There is higher demand for movies that are less than 20 years old than for movies that are more than 20 years old. The company wants to minimize hosting service costs based on demand.
Which solution will meet these requirements?
Explanation:
This solution will meet the requirements of minimizing hosting costs based on demand while providing the streaming content of a movie within 5 minutes of a user purchase. S3 Intelligent-Tiering is a storage class that automatically optimizes storage costs by moving data to the most cost-effective access tier when access patterns change. It is suitable for data with unknown, changing, or unpredictable access patterns, such as newer movies that may see higher demand [1]. S3 Glacier Flexible Retrieval is a storage class that provides low-cost storage for archive data that is retrieved asynchronously. It offers flexible retrieval options from minutes to hours, plus free bulk retrievals in 5-12 hours, and is ideal for backup, disaster recovery, and offsite data storage needs [2]. By using expedited retrieval, the user can access an older movie's video file in 1-5 minutes, which meets the 5-minute requirement [3].
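To make the mechanics concrete, a boto3 sketch with hypothetical bucket and key names: newer titles land in S3 Intelligent-Tiering at upload time, while a purchase of an older title in Glacier Flexible Retrieval triggers an expedited restore.

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "movie-catalog"  # hypothetical bucket

# Newer movies: Intelligent-Tiering optimizes cost as demand shifts.
s3.put_object(
    Bucket=BUCKET,
    Key="new/blockbuster-2023.mp4",
    Body=b"<video bytes>",  # stand-in for the real file contents
    StorageClass="INTELLIGENT_TIERING",
)

# Older movies sit in Glacier Flexible Retrieval; on purchase, request
# an expedited restore, which typically completes in 1-5 minutes.
s3.restore_object(
    Bucket=BUCKET,
    Key="classics/noir-1962.mp4",
    RestoreRequest={
        "Days": 1,  # how long the restored copy stays available
        "GlacierJobParameters": {"Tier": "Expedited"},
    },
)
```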
Question 639
A company copies 200 TB of data from a recent ocean survey onto AWS Snowball Edge Storage Optimized devices. The company has a high performance computing (HPC) cluster that is hosted on AWS to look for oil and gas deposits. A solutions architect must provide the cluster with consistent sub-millisecond latency and high-throughput access to the data on the Snowball Edge Storage Optimized devices. The company is sending the devices back to AWS.
Which solution will meet these requirements?
Explanation:
To provide the HPC cluster with consistent sub-millisecond latency and high-throughput access to the data on the Snowball Edge Storage Optimized devices, a solutions architect should configure an Amazon FSx for Lustre file system, and integrate it with an Amazon S3 bucket. This solution has the following benefits:
It allows the HPC cluster to access the data on the Snowball Edge devices using a POSIX-compliant file system that is optimized for fast processing of large datasets [1].
It enables the data to be imported from the Snowball Edge devices into the S3 bucket using the AWS Snow Family Console or the AWS CLI [2]. The data can then be accessed from the FSx for Lustre file system using the S3 integration feature [3].
It supports high availability and durability of the data, as the FSx for Lustre file system can automatically copy the data to and from the S3 bucket [3]. The data can also be accessed from other AWS services or applications using the S3 API [4].
1: https://aws.amazon.com/fsx/lustre/
2: https://docs.aws.amazon.com/snowball/latest/developer-guide/using-adapter.html
3: https://docs.aws.amazon.com/fsx/latest/LustreGuide/create-fs-linked-data-repo.html
4: https://docs.aws.amazon.com/fsx/latest/LustreGuide/export-data-repo.html
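A hedged boto3 sketch of the file system creation, assuming the Snowball data has already been imported into a bucket named survey-data (a placeholder). Linking the file system to the bucket via ImportPath makes the S3 objects visible as files, which are lazily loaded into Lustre on first access, so the file system only needs to be sized for the working set rather than the full 200 TB.

```python
import boto3

fsx = boto3.client("fsx")

# Link the Lustre file system to the S3 bucket that received the
# Snowball Edge import; objects appear as files and are loaded into
# the file system on first access.
fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=12000,  # GiB; size for the cluster's working set
    SubnetIds=["subnet-0123456789abcdef0"],  # placeholder subnet
    LustreConfiguration={
        "DeploymentType": "SCRATCH_2",
        "ImportPath": "s3://survey-data",          # read survey data from S3
        "ExportPath": "s3://survey-data/results",  # write results back to S3
    },
)
```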
Question 640
A company's marketing data is uploaded from multiple sources to an Amazon S3 bucket. A series of data preparation jobs aggregate the data for reporting. The data preparation jobs need to run at regular intervals in parallel. A few jobs need to run in a specific order later.
The company wants to remove the operational overhead of job error handling, retry logic, and state management.
Which solution will meet these requirements?
Explanation:
AWS Glue DataBrew is a visual data preparation tool that allows you to easily clean, normalize, and transform your data without writing any code. You can create and run data preparation jobs on your data stored in Amazon S3, Amazon Redshift, or other data sources. AWS Step Functions is a service that lets you coordinate multiple AWS services into serverless workflows. You can use Step Functions to orchestrate your DataBrew jobs, define the order and parallelism of execution, handle errors and retries, and monitor the state of your workflow. By using AWS Glue DataBrew and AWS Step Functions, you can meet the requirements of the company with minimal operational overhead, as you do not need to write any code, manage any servers, or deal with complex dependencies.
AWS Glue DataBrew
AWS Step Functions
Orchestrate AWS Glue DataBrew jobs using AWS Step Functions
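For illustration, a minimal Step Functions definition (Amazon States Language, built here as a Python dict) that runs two hypothetical DataBrew jobs in parallel and then one job that must run afterwards. The `.sync` integration makes Step Functions wait for each job to finish, and the built-in Retry blocks handle transient failures; job names and the role ARN are placeholders.

```python
import json

import boto3

# Hypothetical DataBrew job names; the jobs must already exist.
definition = {
    "StartAt": "PrepareInParallel",
    "States": {
        "PrepareInParallel": {
            "Type": "Parallel",
            "Branches": [
                {
                    "StartAt": "CleanClickstream",
                    "States": {
                        "CleanClickstream": {
                            "Type": "Task",
                            "Resource": "arn:aws:states:::databrew:startJobRun.sync",
                            "Parameters": {"Name": "clean-clickstream"},
                            "Retry": [{"ErrorEquals": ["States.ALL"], "MaxAttempts": 2}],
                            "End": True,
                        }
                    },
                },
                {
                    "StartAt": "CleanCampaigns",
                    "States": {
                        "CleanCampaigns": {
                            "Type": "Task",
                            "Resource": "arn:aws:states:::databrew:startJobRun.sync",
                            "Parameters": {"Name": "clean-campaigns"},
                            "Retry": [{"ErrorEquals": ["States.ALL"], "MaxAttempts": 2}],
                            "End": True,
                        }
                    },
                },
            ],
            "Next": "AggregateForReporting",
        },
        # This job needs the parallel jobs' output, so it runs last.
        "AggregateForReporting": {
            "Type": "Task",
            "Resource": "arn:aws:states:::databrew:startJobRun.sync",
            "Parameters": {"Name": "aggregate-reporting"},
            "End": True,
        },
    },
}

boto3.client("stepfunctions").create_state_machine(
    name="marketing-data-prep",  # hypothetical name
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/stepfunctions-databrew",  # placeholder
)
```

An EventBridge schedule rule can then start this state machine at the required regular intervals, leaving error handling, retries, and state management to Step Functions.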