Amazon SAP-C02 Practice Test - Questions Answers, Page 43

A company deploys a new web application. As part of the setup, the company configures AWS WAF to log to Amazon S3 through Amazon Kinesis Data Firehose. The company develops an Amazon Athena query that runs once daily to return AWS WAF log data from the previous 24 hours. The volume of daily logs is constant. However, over time, the same query is taking more time to run.

A solutions architect needs to design a solution to prevent the query time from continuing to increase. The solution must minimize operational overhead.

Which solution will meet these requirements?

A. Create an AWS Lambda function that consolidates each day's AWS WAF logs into one log file.

B. Reduce the amount of data scanned by configuring AWS WAF to send logs to a different S3 bucket each day.

C. Update the Kinesis Data Firehose configuration to partition the data in Amazon S3 by date and time. Create external tables for Amazon Redshift. Configure Amazon Redshift Spectrum to query the data source.

D. Modify the Kinesis Data Firehose configuration and Athena table definition to partition the data by date and time. Change the Athena query to view the relevant partitions.

Suggested answer: D

Explanation:

The best solution is to modify the Kinesis Data Firehose configuration and Athena table definition to partition the data by date and time. This will reduce the amount of data scanned by Athena and improve the query performance. Changing the Athena query to view the relevant partitions will also help to filter out unnecessary data. This solution requires minimal operational overhead as it does not involve creating additional resources or changing the log format. Reference: AWS WAF Developer Guide; Amazon Kinesis Data Firehose User Guide; Amazon Athena User Guide
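
To illustrate the partition pruning, here is a minimal Boto3 sketch that runs the daily query against only the previous day's partition. The database, table, column names, and S3 output location are assumptions for the example, not values taken from the question.

import boto3
from datetime import datetime, timedelta, timezone

athena = boto3.client("athena", region_name="us-east-1")

# Filter on the partition keys so Athena scans one day of logs instead of
# the whole history. Table and column names are hypothetical.
yesterday = datetime.now(timezone.utc) - timedelta(days=1)
query = f"""
    SELECT httprequest.clientip, action, COUNT(*) AS requests
    FROM waf_logs
    WHERE year = '{yesterday:%Y}' AND month = '{yesterday:%m}' AND day = '{yesterday:%d}'
    GROUP BY httprequest.clientip, action
"""

response = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "waf_log_db"},                  # hypothetical database
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
print(response["QueryExecutionId"])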

A company is developing a web application that runs on Amazon EC2 instances in an Auto Scaling group behind a public-facing Application Load Balancer (ALB). Only users from a specific country are allowed to access the application. The company needs the ability to log the access requests that have been blocked. The solution should require the least possible maintenance.

Which solution meets these requirements?

A. Create an IPSet containing a list of IP ranges that belong to the specified country. Create an AWS WAF web ACL. Configure a rule to block any requests that do not originate from an IP range in the IPSet. Associate the rule with the web ACL. Associate the web ACL with the ALB.

B. Create an AWS WAF web ACL. Configure a rule to block any requests that do not originate from the specified country. Associate the rule with the web ACL. Associate the web ACL with the ALB.

C. Configure AWS Shield to block any requests that do not originate from the specified country. Associate AWS Shield with the ALB.

D. Create a security group rule that allows ports 80 and 443 from IP ranges that belong to the specified country. Associate the security group with the ALB.

Suggested answer: B

Explanation:

The best solution is to create an AWS WAF web ACL and configure a rule to block any requests that do not originate from the specified country. This will ensure that only users from the allowed country can access the application. AWS WAF also provides logging capabilities that can capture the access requests that have been blocked. This solution requires the least possible maintenance as it does not involve updating IP ranges or security group rules. Reference: AWS WAF Developer Guide; AWS Shield Developer Guide
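
For illustration, the following Boto3 sketch creates such a web ACL with a geo match rule; the ACL name, metric names, and the allowed country code are assumptions. The web ACL would still need to be associated with the ALB (for example, with the wafv2 associate_web_acl call) and have logging enabled.

import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

# The default action blocks everything; a geo match rule explicitly allows
# traffic from the permitted country. Blocked requests can then be captured
# through AWS WAF logging.
response = wafv2.create_web_acl(
    Name="country-allow-acl",                      # hypothetical name
    Scope="REGIONAL",                              # REGIONAL scope is required for an ALB
    DefaultAction={"Block": {}},
    Rules=[
        {
            "Name": "allow-from-country",
            "Priority": 0,
            "Statement": {"GeoMatchStatement": {"CountryCodes": ["DE"]}},  # assumed country
            "Action": {"Allow": {}},
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "AllowFromCountry",
            },
        }
    ],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "CountryAllowAcl",
    },
)
print(response["Summary"]["ARN"])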

A company is migrating an on-premises application and a MySQL database to AWS. The application processes highly sensitive data, and new data is constantly updated in the database. The data must not be transferred over the internet. The company also must encrypt the data in transit and at rest.

The database is 5 TB in size. The company already has created the database schema in an Amazon RDS for MySQL DB instance. The company has set up a 1 Gbps AWS Direct Connect connection to AWS. The company also has set up a public VIF and a private VIF. A solutions architect needs to design a solution that will migrate the data to AWS with the least possible downtime.

Which solution will meet these requirements?

A. Perform a database backup. Copy the backup files to an AWS Snowball Edge Storage Optimized device. Import the backup to Amazon S3. Use server-side encryption with Amazon S3 managed encryption keys (SSE-S3) for encryption at rest. Use TLS for encryption in transit. Import the data from Amazon S3 to the DB instance.

B. Use AWS Database Migration Service (AWS DMS) to migrate the data to AWS. Create a DMS replication instance in a private subnet. Create VPC endpoints for AWS DMS. Configure a DMS task to copy data from the on-premises database to the DB instance by using full load plus change data capture (CDC). Use the AWS Key Management Service (AWS KMS) default key for encryption at rest. Use TLS for encryption in transit.

C. Perform a database backup. Use AWS DataSync to transfer the backup files to Amazon S3. Use server-side encryption with Amazon S3 managed encryption keys (SSE-S3) for encryption at rest. Use TLS for encryption in transit. Import the data from Amazon S3 to the DB instance.

D. Use Amazon S3 File Gateway. Set up a private connection to Amazon S3 by using AWS PrivateLink. Perform a database backup. Copy the backup files to Amazon S3. Use server-side encryption with Amazon S3 managed encryption keys (SSE-S3) for encryption at rest. Use TLS for encryption in transit. Import the data from Amazon S3 to the DB instance.

Suggested answer: B

Explanation:

The best solution is to use AWS Database Migration Service (AWS DMS) to migrate the data to AWS. AWS DMS is a web service that can migrate data from various sources to various targets, including MySQL databases. AWS DMS can perform full load and change data capture (CDC) migrations, which means that it can copy the existing data and also capture the ongoing changes to keep the source and target databases in sync. This minimizes the downtime during the migration process. AWS DMS also supports encryption at rest and in transit by using AWS Key Management Service (AWS KMS) and TLS, respectively. This ensures that the data is protected during the migration. AWS DMS can also leverage AWS Direct Connect to transfer the data over a private connection, avoiding the internet. This solution meets all the requirements of the company. Reference: AWS Database Migration Service Documentation; Migrating Data to Amazon RDS for MySQL or MariaDB; Using SSL to Encrypt a Connection to a DB Instance
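
A minimal Boto3 sketch of a full-load-plus-CDC replication task follows; the ARNs and the table-mapping rule are placeholders.

import boto3

dms = boto3.client("dms", region_name="us-east-1")

# Full load copies the existing 5 TB, then CDC applies ongoing changes so
# that cutover downtime stays minimal. All ARNs are placeholders.
response = dms.create_replication_task(
    ReplicationTaskIdentifier="mysql-migration-task",
    SourceEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:SOURCE",
    TargetEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:TARGET",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:INSTANCE",
    MigrationType="full-load-and-cdc",
    TableMappings='{"rules":[{"rule-type":"selection","rule-id":"1",'
                  '"rule-name":"1","object-locator":{"schema-name":"%",'
                  '"table-name":"%"},"rule-action":"include"}]}',
)
print(response["ReplicationTask"]["Status"])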

A company has mounted sensors to collect information about environmental parameters such as humidity and light throughout all the company's factories. The company needs to stream and analyze the data in the AWS Cloud in real time. If any of the parameters fall out of acceptable ranges, the factory operations team must receive a notification immediately.

Which solution will meet these requirements?

A. Stream the data to an Amazon Kinesis Data Firehose delivery stream. Use AWS Step Functions to consume and analyze the data in the Kinesis Data Firehose delivery stream. Use Amazon Simple Notification Service (Amazon SNS) to notify the operations team.

B. Stream the data to an Amazon Managed Streaming for Apache Kafka (Amazon MSK) cluster. Set up a trigger in Amazon MSK to invoke an AWS Fargate task to analyze the data. Use Amazon Simple Email Service (Amazon SES) to notify the operations team.

C. Stream the data to an Amazon Kinesis data stream. Create an AWS Lambda function to consume the Kinesis data stream and to analyze the data. Use Amazon Simple Notification Service (Amazon SNS) to notify the operations team.

D. Stream the data to an Amazon Kinesis Data Analytics application. Use an automatically scaled and containerized service in Amazon Elastic Container Service (Amazon ECS) to consume and analyze the data. Use Amazon Simple Email Service (Amazon SES) to notify the operations team.

Suggested answer: C

Explanation:

The best solution is to stream the data to an Amazon Kinesis data stream and create an AWS Lambda function to consume the Kinesis data stream and to analyze the data. Amazon Kinesis is a web service that can collect, process, and analyze real-time streaming data from various sources, such as sensors. AWS Lambda is a serverless computing service that can run code in response to events, such as incoming data from a Kinesis data stream. By using AWS Lambda, the company can avoid provisioning or managing servers and scale automatically based on the demand. Amazon Simple Notification Service (Amazon SNS) is a web service that enables applications to send and receive notifications from the cloud. By using Amazon SNS, the company can notify the operations team immediately if any of the parameters fall out of acceptable ranges. This solution meets all the requirements of the company.
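
As a sketch of the consumer side, the following Lambda handler assumes a Kinesis event source mapping, a hypothetical ALERT_TOPIC_ARN environment variable, and made-up acceptable ranges; it publishes an SNS notification whenever a reading falls outside its range.

import base64
import json
import os

import boto3

sns = boto3.client("sns")
TOPIC_ARN = os.environ["ALERT_TOPIC_ARN"]          # hypothetical environment variable

# Acceptable ranges are assumptions for this sketch.
LIMITS = {"humidity": (30.0, 60.0), "light": (100.0, 1000.0)}

def handler(event, context):
    """Invoked by the Kinesis event source mapping with a batch of records."""
    for record in event["Records"]:
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        for parameter, (low, high) in LIMITS.items():
            value = payload.get(parameter)
            if value is not None and not low <= value <= high:
                sns.publish(
                    TopicArn=TOPIC_ARN,
                    Subject="Sensor reading out of range",
                    Message=f"{parameter}={value} at factory "
                            f"{payload.get('factory_id', 'unknown')}",
                )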

A company is deploying AWS Lambda functions that access an Amazon RDS for PostgreSQL database. The company needs to launch the Lambda functions in a QA environment and in a production environment.

The company must not expose credentials within application code and must rotate passwords automatically.

Which solution will meet these requirements?

A. Store the database credentials for both environments in AWS Systems Manager Parameter Store. Encrypt the credentials by using an AWS Key Management Service (AWS KMS) key. Within the application code of the Lambda functions, pull the credentials from the Parameter Store parameter by using the AWS SDK for Python (Boto3). Add a role to the Lambda functions to provide access to the Parameter Store parameter.

B. Store the database credentials for both environments in AWS Secrets Manager with a distinct entry for the QA environment and the production environment. Turn on rotation. Provide a reference to the Secrets Manager entry as an environment variable for the Lambda functions.

C. Store the database credentials for both environments in AWS Key Management Service (AWS KMS). Turn on rotation. Provide a reference to the credentials that are stored in AWS KMS as an environment variable for the Lambda functions.

D. Create separate S3 buckets for the QA environment and the production environment. Turn on server-side encryption with AWS KMS keys (SSE-KMS) for the S3 buckets. Use an object naming pattern that gives each Lambda function's application code the ability to pull the correct credentials for the function's corresponding environment. Grant each Lambda function's execution role access to Amazon S3.

Suggested answer: B

Explanation:

The best solution is to store the database credentials for both environments in AWS Secrets Manager with distinct key entry for the QA environment and the production environment. AWS Secrets Manager is a web service that can securely store, manage, and retrieve secrets, such as database credentials. AWS Secrets Manager also supports automatic rotation of secrets by using Lambda functions or built-in rotation templates. By storing the database credentials for both environments in AWS Secrets Manager, the company can avoid exposing credentials within application code and rotate passwords automatically. By providing a reference to the Secrets Manager key as an environment variable for the Lambda functions, the company can easily access the credentials from the code by using the AWS SDK. This solution meets all the requirements of the company.
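
A minimal sketch of the Lambda side follows, assuming a hypothetical DB_SECRET_ID environment variable that holds the environment-specific secret name (one value in QA, another in production).

import json
import os

import boto3

secrets = boto3.client("secretsmanager")

# The secret name is injected per environment, so no credentials appear in
# the application code and rotation is handled entirely by Secrets Manager.
SECRET_ID = os.environ["DB_SECRET_ID"]             # hypothetical variable name

def get_db_credentials():
    """Fetch the current database credentials at invocation time."""
    response = secrets.get_secret_value(SecretId=SECRET_ID)
    secret = json.loads(response["SecretString"])
    return secret["username"], secret["password"]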

An e-commerce company is revamping its IT infrastructure and is planning to use AWS services. The company's CIO has asked a solutions architect to design a simple, highly available, and loosely coupled order processing application. The application is responsible for receiving and processing orders before storing them in an Amazon DynamoDB table. The application has a sporadic traffic pattern and should be able to scale during marketing campaigns to process the orders with minimal delays.

Which of the following is the MOST reliable approach to meet the requirements?

A. Receive the orders in an Amazon EC2-hosted database and use EC2 instances to process them.

B. Receive the orders in an Amazon SQS queue and invoke an AWS Lambda function to process them.

C. Receive the orders using the AWS Step Functions program and launch an Amazon ECS container to process them.

D. Receive the orders in Amazon Kinesis Data Streams and use Amazon EC2 instances to process them.

Suggested answer: B

Explanation:

The best option is to use Amazon SQS and AWS Lambda to create a serverless order processing application. Amazon SQS is a fully managed message queue service that can decouple the order receiving and processing components, making the application more scalable and fault-tolerant. AWS Lambda is a serverless compute service that can automatically scale to handle the incoming messages from the SQS queue and process them according to the business logic. AWS Lambda can also integrate with Amazon DynamoDB to store the processed orders in a fast and flexible NoSQL database. This approach eliminates the need to provision, manage, or scale any servers or containers, and reduces the operational overhead and cost.
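
As a sketch of this pattern, the following Lambda handler assumes an SQS event source mapping on the orders queue and a hypothetical ORDERS_TABLE environment variable; it writes each order into the DynamoDB table.

import json
import os

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table(os.environ["ORDERS_TABLE"])     # hypothetical table name variable

def handler(event, context):
    """Invoked by the SQS event source mapping with a batch of order messages."""
    for record in event["Records"]:
        order = json.loads(record["body"])
        # put_item keyed on the order ID is safe to repeat if SQS redelivers a message.
        table.put_item(Item={
            "order_id": order["order_id"],
            "customer_id": order["customer_id"],
            "total": str(order["total"]),              # stored as a string to avoid float issues
            "status": "RECEIVED",
        })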

Option A is not reliable because using an EC2-hosted database to receive the orders introduces a single point of failure and a scalability bottleneck. EC2 instances also require more management and configuration than serverless services.

Option C is not reliable because using AWS Step Functions to receive the orders adds unnecessary complexity and cost to the application. AWS Step Functions is a service that coordinates multiple AWS services into a serverless workflow, but it is not designed to handle high-volume, sporadic, or unpredictable traffic patterns. AWS Step Functions also charges per state transition, which can be expensive for a large number of orders. Launching an ECS container to process each order also requires more resources and management than invoking a Lambda function.

Option D is not reliable because using Amazon Kinesis Data Streams to receive the orders is not suitable for this use case. Amazon Kinesis Data Streams is a service that enables real-time processing of streaming data at scale, but it is not meant for asynchronous message queuing. Amazon Kinesis Data Streams requires consumers to poll the data from the stream, which can introduce latency and complexity. Amazon Kinesis Data Streams also charges per shard hour, which can be expensive for a sporadic traffic pattern.

Reference: Amazon SQS; AWS Lambda; Amazon DynamoDB; AWS Step Functions; Amazon ECS

A company maintains information on premises in approximately 1 million .csv files that are hosted on a VM. The data initially is 10 TB in size and grows at a rate of 1 TB each week. The company needs to automate backups of the data to the AWS Cloud.

Backups of the data must occur daily. The company needs a solution that applies custom filters to back up only a subset of the data that is located in designated source directories. The company has set up an AWS Direct Connect connection.

Which solution will meet the backup requirements with the LEAST operational overhead?

A. Use the Amazon S3 CopyObject API operation with multipart upload to copy the existing data to Amazon S3. Use the CopyObject API operation to replicate new data to Amazon S3 daily.

B. Create a backup plan in AWS Backup to back up the data to Amazon S3. Schedule the backup plan to run daily.

C. Install the AWS DataSync agent as a VM that runs on the on-premises hypervisor. Configure a DataSync task to replicate the data to Amazon S3 daily.

D. Use an AWS Snowball Edge device for the initial backup. Use AWS DataSync for incremental backups to Amazon S3 daily.

Suggested answer: C

Explanation:

AWS DataSync is an online data transfer service that is designed to help customers get their data to and from AWS quickly, easily, and securely. Using DataSync, you can copy data from your on-premises NFS or SMB shares directly to Amazon S3, Amazon EFS, or Amazon FSx for Windows File Server. DataSync uses a purpose-built, parallel transfer protocol for speeds up to 10x faster than open source tools. DataSync also has built-in verification of data both in flight and at rest, so you can be confident that your data was transferred successfully. DataSync allows you to apply filters to select which files or folders to transfer, based on file name, size, or modification time. You can also schedule your DataSync tasks to run daily, weekly, or monthly, or on demand. DataSync is integrated with AWS Direct Connect, so you can take advantage of your existing private connection to AWS. DataSync is also a fully managed service, so you do not need to provision, configure, or maintain any infrastructure for data transfer.
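
A Boto3 sketch of such a task is shown below; the location ARNs, include patterns, and schedule are placeholders, and the include filter is assumed to cover the designated source directories.

import boto3

datasync = boto3.client("datasync", region_name="us-east-1")

# The source location points at the on-premises share through the DataSync
# agent VM; the destination is an S3 location. Both ARNs are placeholders.
response = datasync.create_task(
    SourceLocationArn="arn:aws:datasync:us-east-1:123456789012:location/loc-onprem",
    DestinationLocationArn="arn:aws:datasync:us-east-1:123456789012:location/loc-s3",
    Name="daily-csv-backup",
    # Custom filter: only the designated source directories are transferred.
    Includes=[{"FilterType": "SIMPLE_PATTERN", "Value": "/exports/finance*|/exports/sales*"}],
    # Run once per day; each run transfers only new or changed files.
    Schedule={"ScheduleExpression": "cron(0 2 * * ? *)"},
)
print(response["TaskArn"])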

Option A is incorrect because the Amazon S3 CopyObject API operation does not support filtering or scheduling, and it would require you to write and maintain custom scripts to automate the backup process.

Option B is incorrect because AWS Backup does not support filtering or transferring data from on-premises sources to Amazon S3. AWS Backup is a fully managed backup service that makes it easy to centralize and automate the backup of data across AWS services.

Option D is incorrect because AWS Snowball Edge is a physical device that is used for offline data transfer when network bandwidth is limited or unavailable. It is not suitable for daily backups or incremental transfers. AWS Snowball Edge also does not support filtering or scheduling.

Reference: Considering four different replication options for data in Amazon S3; Protect your file and backup archives using AWS DataSync and Amazon S3 Glacier; AWS DataSync FAQs

A company hosts an intranet web application on Amazon EC2 instances behind an Application Load Balancer (ALB). Currently, users authenticate to the application against an internal user database.

The company needs to authenticate users to the application by using an existing AWS Directory Service for Microsoft Active Directory directory. All users with accounts in the directory must have access to the application.

Which solution will meet these requirements?

A. Create a new app client in the directory. Create a listener rule for the ALB. Specify the authenticate-oidc action for the listener rule. Configure the listener rule with the appropriate issuer, client ID and secret, and endpoint details for the Active Directory service. Configure the new app client with the callback URL that the ALB provides.

B. Configure an Amazon Cognito user pool. Configure the user pool with a federated identity provider (IdP) that has metadata from the directory. Create an app client. Associate the app client with the user pool. Create a listener rule for the ALB. Specify the authenticate-cognito action for the listener rule. Configure the listener rule to use the user pool and app client.

C. Add the directory as a new IAM identity provider (IdP). Create a new IAM role that has an entity type of SAML 2.0 federation. Configure a role policy that allows access to the ALB. Configure the new role as the default authenticated user role for the IdP. Create a listener rule for the ALB. Specify the authenticate-oidc action for the listener rule.

D. Enable AWS IAM Identity Center (AWS Single Sign-On). Configure the directory as an external identity provider (IdP) that uses SAML. Use the automatic provisioning method. Create a new IAM role that has an entity type of SAML 2.0 federation. Configure a role policy that allows access to the ALB. Attach the new role to all groups. Create a listener rule for the ALB. Specify the authenticate-cognito action for the listener rule.

Suggested answer: A

Explanation:

The correct solution is to use the authenticate-oidc action for the ALB listener rule and configure it with the details of the AWS Directory Service for Microsoft Active Directory directory. This way, the ALB can use OpenID Connect (OIDC) to authenticate users against the directory and grant them access to the intranet web application. The app client in the directory is used to register the ALB as an OIDC client and provide the necessary credentials and endpoints. The callback URL is the URL that the ALB redirects the user to after a successful authentication. This solution does not require any additional services or roles, and it leverages the existing directory accounts for all users.
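
For illustration, the listener rule could be created with Boto3 as in the sketch below; the listener ARN, target group ARN, endpoint URLs, and client credentials are placeholders that would come from the directory's app client in a real deployment. The callback URL to register on the app client is https://<ALB DNS name>/oauth2/idpresponse.

import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

# Authenticate first, then forward authenticated requests to the target group.
response = elbv2.create_rule(
    ListenerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/intranet/abc/def",
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/*"]}],
    Actions=[
        {
            "Type": "authenticate-oidc",
            "Order": 1,
            "AuthenticateOidcConfig": {
                "Issuer": "https://idp.example.corp",
                "AuthorizationEndpoint": "https://idp.example.corp/authorize",
                "TokenEndpoint": "https://idp.example.corp/token",
                "UserInfoEndpoint": "https://idp.example.corp/userinfo",
                "ClientId": "alb-app-client-id",            # from the directory app client
                "ClientSecret": "alb-app-client-secret",    # from the directory app client
            },
        },
        {
            "Type": "forward",
            "Order": 2,
            "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/intranet/123",
        },
    ],
)
print(response["Rules"][0]["RuleArn"])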

The other solutions are incorrect because they either use the wrong action for the ALB listener rule, or they involve unnecessary or incompatible services or roles. For example:

Solution B is incorrect because it uses Amazon Cognito user pool, which is a separate user directory service that does not integrate with AWS Directory Service for Microsoft Active Directory. To use this solution, the company would have to migrate or synchronize their users from the directory to the user pool, which is not required by the question. Moreover, the authenticate-cognito action for the ALB listener rule only works with Amazon Cognito user pools, not with federated identity providers (IdPs) that have metadata from the directory.

Solution C is incorrect because it uses IAM as an identity provider (IdP), which is not compatible with AWS Directory Service for Microsoft Active Directory. IAM can only be used as an IdP for web identity federation, which allows users to sign in with social media or other third-party IdPs, not with Active Directory. Moreover, the authenticate-oidc action for the ALB listener rule requires an OIDC IdP, not a SAML 2.0 federation IdP, which is what IAM provides.

Solution D is incorrect because it uses AWS IAM Identity Center (AWS Single Sign-On), which is a service that simplifies the management of SSO access to multiple AWS accounts and business applications. This service is not needed for the scenario in the question, which only involves a single intranet web application. Moreover, the authenticate-cognito action for the ALB listener rule does not work with external IdPs that use SAML, such as AWS IAM Identity Center.

Reference: Authenticate users using an Application Load Balancer; What is AWS Directory Service for Microsoft Active Directory?; Using OpenID Connect for user authentication

A company is launching a new online game on Amazon EC2 instances. The game must be available globally. The company plans to run the game in three AWS Regions: us-east-1, eu-west-1, and ap-southeast-1. The game's leaderboards, player inventory, and event status must be available across Regions.

A solutions architect must design a solution that will give any Region the ability to scale to handle the load of all Regions. Additionally, users must automatically connect to the Region that provides the least latency.

Which solution will meet these requirements with the LEAST operational overhead?

A. Create an EC2 Spot Fleet. Attach the Spot Fleet to a Network Load Balancer (NLB) in each Region. Create an AWS Global Accelerator IP address that points to the NLB. Create an Amazon Route 53 latency-based routing entry for the Global Accelerator IP address. Save the game metadata to an Amazon RDS for MySQL DB instance in each Region. Set up a read replica in the other Regions.

B. Create an Auto Scaling group for the EC2 instances. Attach the Auto Scaling group to a Network Load Balancer (NLB) in each Region. For each Region, create an Amazon Route 53 entry that uses geoproximity routing and points to the NLB in that Region. Save the game metadata to MySQL databases on EC2 instances in each Region. Set up replication between the database EC2 instances in each Region.

C. Create an Auto Scaling group for the EC2 instances. Attach the Auto Scaling group to a Network Load Balancer (NLB) in each Region. For each Region, create an Amazon Route 53 entry that uses latency-based routing and points to the NLB in that Region. Save the game metadata to an Amazon DynamoDB global table.

D. Use EC2 Global View. Deploy the EC2 instances to each Region. Attach the instances to a Network Load Balancer (NLB). Deploy a DNS server on an EC2 instance in each Region. Set up custom logic on each DNS server to redirect the user to the Region that provides the lowest latency. Save the game metadata to an Amazon Aurora global database.

Suggested answer: C

Explanation:

The best option is to use an Auto Scaling group, a Network Load Balancer, Amazon Route 53, and Amazon DynamoDB to create a scalable, highly available, and low-latency online game application. An Auto Scaling group can automatically adjust the number of EC2 instances based on the demand and traffic in each Region. A Network Load Balancer can distribute the incoming traffic across the EC2 instances and handle millions of requests per second. Amazon Route 53 can use latency-based routing to direct the users to the Region that provides the best performance. Amazon DynamoDB global tables can replicate the game metadata across multiple Regions, ensuring consistency and availability of the data. This approach minimizes the operational overhead and cost, as it leverages fully managed services and avoids custom logic or replication.
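
As a sketch of the latency-based routing piece, the following Boto3 snippet upserts one alias record per Region against the NLB in that Region; the hosted zone IDs, domain name, and NLB DNS names are placeholders.

import boto3

route53 = boto3.client("route53")

# One latency record per Region, all sharing the same record name. Zone IDs
# and DNS names below are placeholders for the regional NLBs.
NLBS = {
    "us-east-1": ("ZNLBUSE1EXAMPLE", "game-nlb-use1.elb.us-east-1.amazonaws.com"),
    "eu-west-1": ("ZNLBEUW1EXAMPLE", "game-nlb-euw1.elb.eu-west-1.amazonaws.com"),
    "ap-southeast-1": ("ZNLBAPSE1EXAMPLE", "game-nlb-apse1.elb.ap-southeast-1.amazonaws.com"),
}

changes = [
    {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "play.example-game.com",
            "Type": "A",
            "SetIdentifier": region,            # one record set per Region
            "Region": region,                   # enables latency-based routing
            "AliasTarget": {
                "HostedZoneId": zone_id,
                "DNSName": dns_name,
                "EvaluateTargetHealth": True,
            },
        },
    }
    for region, (zone_id, dns_name) in NLBS.items()
]

route53.change_resource_record_sets(
    HostedZoneId="ZEXAMPLEHOSTEDZONE",          # hosted zone for example-game.com
    ChangeBatch={"Changes": changes},
)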

Option A is not optimal because using an EC2 Spot Fleet can introduce the risk of losing the EC2 instances if the Spot price exceeds the bid price, which can affect the availability and performance of the game. Using AWS Global Accelerator can improve the network performance, but it is not necessary if Amazon Route 53 can already route the users to the closest Region. Using Amazon RDS for MySQL can store the game metadata, but it requires setting up read replicas and managing the replication lag across Regions, which can increase the complexity and cost.

Option B is not optimal because using geoproximity routing can direct the users to the Region that is geographically closer, but it does not account for the network latency or performance. Using MySQL databases on EC2 instances can store the game metadata, but it requires managing the EC2 instances, the database software, the backups, the patches, and the replication across Regions, which can increase the operational overhead and cost.

Option D is not optimal because using EC2 Global View is not a valid service. Using a custom DNS server on an EC2 instance can redirect the user to the Region that provides the lowest latency, but it requires developing and maintaining the custom logic, as well as managing the EC2 instance, which can increase the operational overhead and cost. Using Amazon Aurora global database can store the game metadata, but it is more expensive and complex than using Amazon DynamoDB global tables.

Reference: Auto Scaling groups; Network Load Balancer; Amazon Route 53; Amazon DynamoDB global tables

A solutions architect is preparing to deploy a new security tool into several previously unused AWS Regions. The solutions architect will deploy the tool by using an AWS CloudFormation stack set. The stack set's template contains an IAM role that has a custom name. Upon creation of the stack set, no stack instances are created successfully.

What should the solutions architect do to deploy the stacks successfully?

A. Enable the new Regions in all relevant accounts. Specify the CAPABILITY_NAMED_IAM capability during the creation of the stack set.

B. Use the Service Quotas console to request a quota increase for the number of CloudFormation stacks in each new Region in all relevant accounts. Specify the CAPABILITY_IAM capability during the creation of the stack set.

C. Specify the CAPABILITY_NAMED_IAM capability and the SELF_MANAGED permissions model during the creation of the stack set.

D. Specify an administration role ARN and the CAPABILITY_IAM capability during the creation of the stack set.

Suggested answer: A

Explanation:

The CAPABILITY_NAMED_IAM capability is required when creating or updating CloudFormation stacks that contain IAM resources with custom names. This capability acknowledges that the template might create IAM resources that have broad permissions or affect other resources in the AWS account. The stack set's template contains an IAM role that has a custom name, so this capability is needed. Enabling the new Regions in all relevant accounts is also necessary to deploy the stack set across multiple Regions and accounts.
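
A minimal Boto3 sketch follows; the template URL, account ID, and Region names are placeholders.

import boto3

cloudformation = boto3.client("cloudformation", region_name="us-east-1")

# CAPABILITY_NAMED_IAM acknowledges the custom-named IAM role in the template.
cloudformation.create_stack_set(
    StackSetName="security-tool",
    TemplateURL="https://s3.amazonaws.com/example-bucket/security-tool.yaml",
    Capabilities=["CAPABILITY_NAMED_IAM"],
)

# Stack instances can only be created in Regions that are enabled in the
# target accounts, so the new (opt-in) Regions must be enabled first.
cloudformation.create_stack_instances(
    StackSetName="security-tool",
    Accounts=["123456789012"],
    Regions=["af-south-1", "me-south-1"],
)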

Option B is incorrect because the Service Quotas console is used to view and manage the quotas for AWS services, not for CloudFormation stacks. The number of stacks per Region per account is not a service quota that can be increased.

Option C is incorrect because the SELF_MANAGED permissions model is used when the administrator wants to retain full permissions to manage stack sets and stack instances. This model does not affect the creation of the stack set or the requirement for the CAPABILITY_NAMED_IAM capability.

Option D is incorrect because an administration role ARN is optional when creating a stack set. It is used to specify a role that CloudFormation assumes to create stack instances in the target accounts. It does not affect the creation of the stack set or the requirement for the CAPABILITY_NAMED_IAM capability.

Reference: AWS CloudFormation stack sets; Acknowledging IAM resources in AWS CloudFormation templates; AWS CloudFormation stack set permissions
