
Amazon SAA-C03 Practice Test - Questions Answers, Page 66


A company is deploying an application that processes streaming data in near-real time. The company plans to use Amazon EC2 instances for the workload. The network architecture must be configurable to provide the lowest possible latency between nodes.

Which combination of network solutions will meet these requirements? (Select TWO)

A. Enable and configure enhanced networking on each EC2 instance.
B. Group the EC2 instances in separate accounts.
C. Run the EC2 instances in a cluster placement group.
D. Attach multiple elastic network interfaces to each EC2 instance.
E. Use Amazon Elastic Block Store (Amazon EBS) optimized instance types.
Suggested answer: A, C

Explanation:

These options are the most suitable ways to configure the network architecture to provide the lowest possible latency between nodes. Option A enables and configures enhanced networking on each EC2 instance, which is a feature that improves the network performance of the instance by providing higher bandwidth, lower latency, and lower jitter. Enhanced networking uses single root I/O virtualization (SR-IOV) or Elastic Fabric Adapter (EFA) to provide direct access to the network hardware. You can enable and configure enhanced networking by choosing a supported instance type and a compatible operating system, and installing the required drivers.

Option C runs the EC2 instances in a cluster placement group, which is a logical grouping of instances within a single Availability Zone that are placed close together on the same underlying hardware. Cluster placement groups provide the lowest network latency and the highest network throughput among the placement group options. You can run the EC2 instances in a cluster placement group by creating a placement group and launching the instances into it.
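As a minimal sketch of option C (the group name, AMI ID, and instance type below are placeholder assumptions), creating a cluster placement group and launching instances into it with boto3 might look like this:

```python
# Sketch: create a cluster placement group and launch instances into it.
# The AMI ID, instance type, and group name are placeholder assumptions.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# Cluster placement groups pack instances close together in one AZ.
ec2.create_placement_group(GroupName="low-latency-cluster", Strategy="cluster")

resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="c5n.9xlarge",        # an ENA-capable instance type
    MinCount=2,
    MaxCount=2,
    Placement={"GroupName": "low-latency-cluster"},
)

# Verify enhanced networking (ENA) is enabled on a launched instance.
instance_id = resp["Instances"][0]["InstanceId"]
attr = ec2.describe_instance_attribute(InstanceId=instance_id, Attribute="enaSupport")
print("ENA enabled:", attr["EnaSupport"]["Value"])
```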

Option B is not suitable because grouping the EC2 instances in separate accounts does not provide the lowest possible latency between nodes. Separate accounts are used to isolate and organize resources for different purposes, such as security, billing, or compliance. However, they do not affect the network performance or proximity of the instances. Moreover, grouping the EC2 instances in separate accounts would incur additional costs and complexity, and it would require setting up cross-account networking and permissions.

Option D is not suitable because attaching multiple elastic network interfaces to each EC2 instance does not provide the lowest possible latency between nodes. Elastic network interfaces are virtual network interfaces that can be attached to EC2 instances to provide additional network capabilities, such as multiple IP addresses, multiple subnets, or enhanced security. However, they do not affect the network performance or proximity of the instances. Moreover, attaching multiple elastic network interfaces to each EC2 instance would consume additional resources and limit the instance type choices.

Option E is not suitable because using Amazon EBS optimized instance types does not provide the lowest possible latency between nodes. Amazon EBS optimized instance types are instances that provide dedicated bandwidth for Amazon EBS volumes, which are block storage volumes that can be attached to EC2 instances. EBS optimized instance types improve the performance and consistency of the EBS volumes, but they do not affect the network performance or proximity of the instances. Moreover, using EBS optimized instance types would incur additional costs and may not be necessary for the streaming data workload.

Reference:

Enhanced networking on Linux

Placement groups

Elastic network interfaces

Amazon EBS-optimized instances

A company wants to run its payment application on AWS. The application receives payment notifications from mobile devices. Payment notifications require a basic validation before they are sent for further processing.

The backend processing application is long running and requires compute and memory to be adjusted. The company does not want to manage the infrastructure.

Which solution will meet these requirements with the LEAST operational overhead?

A. Create an Amazon Simple Queue Service (Amazon SQS) queue. Integrate the queue with an Amazon EventBridge rule to receive payment notifications from mobile devices. Configure the rule to validate payment notifications and send the notifications to the backend application. Deploy the backend application on Amazon Elastic Kubernetes Service (Amazon EKS) Anywhere. Create a standalone cluster.
B. Create an Amazon API Gateway API. Integrate the API with an AWS Step Functions state machine to receive payment notifications from mobile devices. Invoke the state machine to validate payment notifications and send the notifications to the backend application. Deploy the backend application on Amazon Elastic Kubernetes Service (Amazon EKS). Configure an EKS cluster with self-managed nodes.
C. Create an Amazon Simple Queue Service (Amazon SQS) queue. Integrate the queue with an Amazon EventBridge rule to receive payment notifications from mobile devices. Configure the rule to validate payment notifications and send the notifications to the backend application. Deploy the backend application on Amazon EC2 Spot Instances. Configure a Spot Fleet with a default allocation strategy.
D. Create an Amazon API Gateway API. Integrate the API with AWS Lambda to receive payment notifications from mobile devices. Invoke a Lambda function to validate payment notifications and send the notifications to the backend application. Deploy the backend application on Amazon Elastic Container Service (Amazon ECS). Configure Amazon ECS with an AWS Fargate launch type.
Suggested answer: D

Explanation:

This option is the best solution because it allows the company to run its payment application on AWS with minimal operational overhead and infrastructure management. By using Amazon API Gateway, the company can create a secure and scalable API to receive payment notifications from mobile devices. By using AWS Lambda, the company can run a serverless function to validate the payment notifications and send them to the backend application. Lambda handles the provisioning, scaling, and security of the function, reducing the operational complexity and cost. By using Amazon ECS with AWS Fargate, the company can run the backend application on a fully managed container service that scales the compute resources automatically and does not require any EC2 instances to manage. Fargate allocates the right amount of CPU and memory for each container and adjusts them as needed.
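A hypothetical sketch of the validation Lambda in option D (the field names, queue URL, and the SQS hand-off to the Fargate service are illustrative assumptions, not part of the question):

```python
# Hypothetical sketch of the validation Lambda behind API Gateway.
# Field names ("deviceId", "amount") and the queue URL are assumptions;
# forwarding via an SQS queue that the Fargate service polls is one
# common pattern, not the only option.
import json
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-west-2.amazonaws.com/123456789012/payments"  # placeholder

def handler(event, context):
    body = json.loads(event.get("body") or "{}")

    # Basic validation before handing off for further processing.
    if not body.get("deviceId") or not isinstance(body.get("amount"), (int, float)):
        return {"statusCode": 400, "body": json.dumps({"error": "invalid notification"})}

    # Forward the validated notification to the backend application.
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(body))
    return {"statusCode": 202, "body": json.dumps({"status": "accepted"})}
```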

A) Create an Amazon Simple Queue Service (Amazon SQS) queue. Integrate the queue with an Amazon EventBridge rule to receive payment notifications from mobile devices. Configure the rule to validate payment notifications and send the notifications to the backend application. Deploy the backend application on Amazon Elastic Kubernetes Service (Amazon EKS) Anywhere. Create a standalone cluster. This option is not optimal because it requires the company to manage the Kubernetes cluster that runs the backend application. Amazon EKS Anywhere is a deployment option that allows the company to create and operate Kubernetes clusters on-premises or in other environments outside AWS. The company would need to provision, configure, scale, patch, and monitor the cluster nodes, which can increase the operational overhead and complexity. Moreover, the company would need to ensure the connectivity and security between the AWS services and the EKS Anywhere cluster, which can also add challenges and risks.

B) Create an Amazon API Gateway API. Integrate the API with an AWS Step Functions state machine to receive payment notifications from mobile devices. Invoke the state machine to validate payment notifications and send the notifications to the backend application. Deploy the backend application on Amazon Elastic Kubernetes Service (Amazon EKS). Configure an EKS cluster with self-managed nodes. This option is not ideal because it requires the company to manage the EC2 instances that host the Kubernetes cluster that runs the backend application. Amazon EKS is a fully managed service that runs Kubernetes on AWS, but it still requires the company to manage the worker nodes that run the containers. The company would need to provision, configure, scale, patch, and monitor the EC2 instances, which can increase the operational overhead and infrastructure costs. Moreover, using AWS Step Functions to validate the payment notifications may be unnecessary and complex, as the validation logic can be implemented in a simpler way with Lambda or other services.

C) Create an Amazon Simple Queue Service (Amazon SQS) queue. Integrate the queue with an Amazon EventBridge rule to receive payment notifications from mobile devices. Configure the rule to validate payment notifications and send the notifications to the backend application. Deploy the backend application on Amazon EC2 Spot Instances. Configure a Spot Fleet with a default allocation strategy. This option is not cost-effective because it requires the company to manage the EC2 instances that run the backend application. The company would need to provision, configure, scale, patch, and monitor the EC2 instances, which can increase the operational overhead and infrastructure costs. Moreover, using Spot Instances can introduce the risk of interruptions, as Spot Instances are reclaimed by AWS when the demand for On-Demand Instances increases. The company would need to handle the interruptions gracefully and ensure the availability and reliability of the backend application.

1. Amazon API Gateway - Amazon Web Services

2. AWS Lambda - Amazon Web Services

3. Amazon Elastic Container Service - Amazon Web Services

4. AWS Fargate - Amazon Web Services

An ecommerce company runs applications in AWS accounts that are part of an organization in AWS Organizations. The applications run on Amazon Aurora PostgreSQL databases across all the accounts. The company needs to prevent malicious activity and must identify abnormal failed and incomplete login attempts to the databases.

Which solution will meet these requirements in the MOST operationally efficient way?

A. Attach service control policies (SCPs) to the root of the organization to identify the failed login attempts.
B. Enable the Amazon RDS Protection feature in Amazon GuardDuty for the member accounts of the organization.
C. Publish the Aurora general logs to a log group in Amazon CloudWatch Logs. Export the log data to a central Amazon S3 bucket.
D. Publish all the Aurora PostgreSQL database events in AWS CloudTrail to a central Amazon S3 bucket.
Suggested answer: C

Explanation:

This option is the most operationally efficient way to meet the requirements because it allows the company to monitor and analyze the database login activity across all the accounts in the organization. By publishing the Aurora general logs to a log group in Amazon CloudWatch Logs, the company can enable the logging of the database connections, disconnections, and failed authentication attempts. By exporting the log data to a central Amazon S3 bucket, the company can store the log data in a durable and cost-effective way and use other AWS services or tools to perform further analysis or alerting on the log data. For example, the company can use Amazon Athena to query the log data in Amazon S3, or use Amazon SNS to send notifications based on the log data.
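As a rough illustration (the log group name and bucket are placeholder assumptions), exporting a day of Aurora log data from CloudWatch Logs to the central S3 bucket could look like this with boto3:

```python
# Sketch (assumed names): export the last 24 hours of Aurora PostgreSQL
# logs from CloudWatch Logs to a central S3 bucket for analysis.
# The destination bucket must have a policy that allows CloudWatch Logs
# to write to it.
import time
import boto3

logs = boto3.client("logs", region_name="us-west-2")

day_ms = 24 * 60 * 60 * 1000
now_ms = int(time.time() * 1000)

logs.create_export_task(
    taskName="aurora-login-audit-export",
    logGroupName="/aws/rds/cluster/payments-db/postgresql",  # placeholder log group
    fromTime=now_ms - day_ms,
    to=now_ms,
    destination="central-db-audit-logs",                     # central S3 bucket
    destinationPrefix="aurora/postgresql",
)
```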

A) Attach service control policies (SCPs) to the root of the organization to identify the failed login attempts. This option is not effective because SCPs are not designed to identify the failed login attempts, but to restrict the actions that the users and roles can perform in the member accounts of the organization. SCPs are applied to the AWS API calls, not to the database login attempts. Moreover, SCPs do not provide any logging or analysis capabilities for the database activity.

B) Enable the Amazon RDS Protection feature in Amazon GuardDuty for the member accounts of the organization. This option is not optimal because the Amazon RDS Protection feature in Amazon GuardDuty is not available for Aurora PostgreSQL databases, but only for Amazon RDS for MySQL and Amazon RDS for MariaDB databases. Moreover, the Amazon RDS Protection feature does not monitor the database login attempts, but the network and API activity related to the RDS instances.

D) Publish all the Aurora PostgreSQL database events in AWS CloudTrail to a central Amazon S3 bucket. This option is not sufficient because AWS CloudTrail does not capture the database login attempts, but only the AWS API calls made by or on behalf of the Aurora PostgreSQL database. For example, AWS CloudTrail can record the events such as creating, modifying, or deleting the database instances, clusters, or snapshots, but not the events such as connecting, disconnecting, or failing to authenticate to the database.

1. Working with Amazon Aurora PostgreSQL - Amazon Aurora

2. Working with log groups and log streams - Amazon CloudWatch Logs

3. Exporting Log Data to Amazon S3 - Amazon CloudWatch Logs

4. Amazon GuardDuty FAQs

5. Logging Amazon RDS API Calls with AWS CloudTrail - Amazon Relational Database Service

A company has an organization in AWS Organizations that has all features enabled. The company requires that all API calls and logins in any existing or new AWS account must be audited. The company needs a managed solution to prevent additional work and to minimize costs. The company also needs to know when any AWS account is not compliant with the AWS Foundational Security Best Practices (FSBP) standard.

Which solution will meet these requirements with the LEAST operational overhead?

A. Deploy an AWS Control Tower environment in the Organizations management account. Enable AWS Security Hub and AWS Control Tower Account Factory in the environment.
B. Deploy an AWS Control Tower environment in a dedicated Organizations member account. Enable AWS Security Hub and AWS Control Tower Account Factory in the environment.
C. Use AWS Managed Services (AMS) Accelerate to build a multi-account landing zone (MALZ). Submit an RFC to self-service provision Amazon GuardDuty in the MALZ.
D. Use AWS Managed Services (AMS) Accelerate to build a multi-account landing zone (MALZ). Submit an RFC to self-service provision AWS Security Hub in the MALZ.
Suggested answer: A

Explanation:

AWS Control Tower is a fully managed service that simplifies the setup and governance of a secure, compliant, multi-account AWS environment. It establishes a landing zone that is based on best-practices blueprints, and it enables governance using controls you can choose from a pre-packaged list. The landing zone is a well-architected, multi-account baseline that follows AWS best practices. Controls implement governance rules for security, compliance, and operations.

AWS Security Hub is a service that provides a comprehensive view of your security posture across your AWS accounts. It aggregates, organizes, and prioritizes security alerts and findings from multiple AWS services, such as Amazon GuardDuty, Amazon Inspector, Amazon Macie, AWS Firewall Manager, and AWS IAM Access Analyzer, as well as from AWS Partner solutions. AWS Security Hub continuously monitors your environment using automated compliance checks based on the AWS best practices and industry standards, such as the AWS Foundational Security Best Practices (FSBP) standard.

AWS Control Tower Account Factory is a feature that automates the provisioning of new AWS accounts that are preconfigured to meet your business, security, and compliance requirements.

By deploying an AWS Control Tower environment in the Organizations management account, you can leverage the existing organization structure and policies, and enable AWS Security Hub and AWS Control Tower Account Factory in the environment. This way, you can audit all API calls and logins in any existing or new AWS account, monitor the compliance status of each account with the FSBP standard, and provision new accounts with ease and consistency. This solution meets the requirements with the least operational overhead, as you do not need to manage any infrastructure, perform any data migration, or submit any requests for changes.
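A minimal sketch, assuming the FSBP standard ARN format and region shown below, of enabling the standard and listing failed compliance findings with boto3:

```python
# Sketch (region and filter values are assumptions): enable the FSBP
# standard in Security Hub and list findings that are failing controls.
import boto3

sh = boto3.client("securityhub", region_name="us-west-2")

# Assumed FSBP standard ARN for this region.
fsbp_arn = ("arn:aws:securityhub:us-west-2::standards/"
            "aws-foundational-security-best-practices/v/1.0.0")
sh.batch_enable_standards(
    StandardsSubscriptionRequests=[{"StandardsArn": fsbp_arn}]
)

# Surface accounts and resources that are failing FSBP controls.
findings = sh.get_findings(
    Filters={"ComplianceStatus": [{"Value": "FAILED", "Comparison": "EQUALS"}]},
    MaxResults=10,
)
for finding in findings["Findings"]:
    print(finding["AwsAccountId"], finding["Title"])
```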

AWS Control Tower

AWS Security Hub

AWS Control Tower Account Factory

A company has a business-critical application that runs on Amazon EC2 instances. The application stores data in an Amazon DynamoDB table. The company must be able to revert the table to any point within the last 24 hours.

Which solution meets these requirements with the LEAST operational overhead?

A. Configure point-in-time recovery for the table.
B. Use AWS Backup for the table.
C. Use an AWS Lambda function to make an on-demand backup of the table every hour.
D. Turn on streams on the table to capture a log of all changes to the table in the last 24 hours. Store a copy of the stream in an Amazon S3 bucket.
Suggested answer: A

Explanation:

Point-in-time recovery (PITR) for DynamoDB is a feature that enables you to restore your table data to any point in time during the last 35 days. PITR helps protect your table from accidental write or delete operations, such as a test script writing to a production table or a user issuing a wrong command. PITR is easy to use, fully managed, fast, and scalable. You can enable PITR with a single click in the DynamoDB console or with a simple API call. You can restore a table to a new table using the console, the AWS CLI, or the DynamoDB API. PITR does not consume any provisioned table capacity and has no impact on the performance or availability of your production applications. PITR meets the requirements of the company with the least operational overhead, as it does not require any manual backup creation, scheduling, or maintenance. It also provides per-second granularity for restoring the table to any point within the last 24 hours.
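A minimal sketch (the table names and restore time are placeholder assumptions) of enabling PITR and restoring to a new table with boto3:

```python
# Sketch (table names and timestamp are placeholders): enable PITR and
# restore the table to a point within the recovery window.
from datetime import datetime, timedelta, timezone
import boto3

ddb = boto3.client("dynamodb", region_name="us-west-2")

# Enable continuous backups with point-in-time recovery.
ddb.update_continuous_backups(
    TableName="orders",
    PointInTimeRecoverySpecification={"PointInTimeRecoveryEnabled": True},
)

# Restore to a new table as it was 6 hours ago (per-second granularity).
ddb.restore_table_to_point_in_time(
    SourceTableName="orders",
    TargetTableName="orders-restored",
    RestoreDateTime=datetime.now(timezone.utc) - timedelta(hours=6),
)
```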

Point-in-time recovery for DynamoDB - Amazon DynamoDB

Amazon DynamoDB point-in-time recovery (PITR)

Enable Point-in-Time Recovery (PITR) for DynamoDB global tables

Restoring a DynamoDB table to a point in time - Amazon DynamoDB

Point-in-time recovery: How it works - Amazon DynamoDB

A company has multiple AWS accounts with applications deployed in the us-west-2 Region. Application logs are stored within Amazon S3 buckets in each account. The company wants to build a centralized log analysis solution that uses a single S3 bucket. Logs must not leave us-west-2, and the company wants to incur minimal operational overhead.

Which solution meets these requirements and is MOST cost-effective?

A. Create an S3 Lifecycle policy that copies the objects from one of the application S3 buckets to the centralized S3 bucket.
B. Use S3 Same-Region Replication to replicate logs from the S3 buckets to another S3 bucket in us-west-2. Use this S3 bucket for log analysis.
C. Write a script that uses the PutObject API operation every day to copy the entire contents of the buckets to another S3 bucket in us-west-2. Use this S3 bucket for log analysis.
D. Write AWS Lambda functions in these accounts that are triggered every time logs are delivered to the S3 buckets (s3:ObjectCreated:* event). Copy the logs to another S3 bucket in us-west-2. Use this S3 bucket for log analysis.
Suggested answer: B

Explanation:

This solution meets the following requirements:

It is cost-effective, as it only charges for the storage and data transfer of the replicated objects, and does not require any additional AWS services or custom scripts. S3 Same-Region Replication (SRR) is a feature that automatically replicates objects across S3 buckets within the same AWS Region. SRR can help you aggregate logs from multiple sources to a single destination for analysis and auditing. SRR also preserves the metadata, encryption, and access control of the source objects.

It is operationally efficient, as it does not require any manual intervention or scheduling. SRR replicates objects as soon as they are uploaded to the source bucket, ensuring that the destination bucket always has the latest log data. SRR also handles any updates or deletions of the source objects, keeping the destination bucket in sync. SRR can be enabled with a few clicks in the S3 console or with a simple API call.

It is secure, as it does not allow the logs to leave the us-west-2 Region. SRR only replicates objects within the same AWS Region, ensuring that the data sovereignty and compliance requirements are met. SRR also supports encryption of the source and destination objects, using either server-side encryption with AWS KMS or S3-managed keys, or client-side encryption.
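A minimal sketch (bucket names and the IAM role ARN are placeholders) of configuring SRR on one source bucket with boto3; cross-account setups additionally need a destination bucket policy that trusts the replication role:

```python
# Sketch (bucket names and role ARN are placeholders): configure
# Same-Region Replication from an application log bucket to the central
# analysis bucket. Versioning must be enabled on both buckets.
import boto3

s3 = boto3.client("s3", region_name="us-west-2")

for bucket in ("app-logs-account-a", "central-log-analysis"):
    s3.put_bucket_versioning(
        Bucket=bucket, VersioningConfiguration={"Status": "Enabled"}
    )

s3.put_bucket_replication(
    Bucket="app-logs-account-a",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/s3-replication",  # placeholder
        "Rules": [{
            "ID": "logs-to-central",
            "Priority": 1,
            "Status": "Enabled",
            "Filter": {},  # replicate the whole bucket
            "DeleteMarkerReplication": {"Status": "Disabled"},
            "Destination": {"Bucket": "arn:aws:s3:::central-log-analysis"},
        }],
    },
)
```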

Same-Region Replication - Amazon Simple Storage Service

How do I replicate objects across S3 buckets in the same AWS Region?

Centralized Logging on AWS | AWS Solutions | AWS Solutions Library

A solutions architect is designing a shared storage solution for a web application that is deployed across multiple Availability Zones. The web application runs on Amazon EC2 instances that are in an Auto Scaling group. The company plans to make frequent changes to the content. The solution must have strong consistency in returning the new content as soon as the changes occur.

Which solutions meet these requirements? (Select TWO)

A. Use AWS Storage Gateway Volume Gateway Internet Small Computer Systems Interface (iSCSI) block storage that is mounted to the individual EC2 instances.
B. Create an Amazon Elastic File System (Amazon EFS) file system. Mount the EFS file system on the individual EC2 instances.
C. Create a shared Amazon Elastic Block Store (Amazon EBS) volume. Mount the EBS volume on the individual EC2 instances.
D. Use AWS DataSync to perform continuous synchronization of data between EC2 hosts in the Auto Scaling group.
E. Create an Amazon S3 bucket to store the web content. Set the metadata for the Cache-Control header to no-cache. Use Amazon CloudFront to deliver the content.
Suggested answer: B, E

Explanation:

These options are the most suitable ways to design a shared storage solution for a web application that is deployed across multiple Availability Zones and requires strong consistency. Option B uses Amazon Elastic File System (Amazon EFS) as a shared file system that can be mounted on multiple EC2 instances in different Availability Zones. Amazon EFS provides high availability, durability, scalability, and performance for file-based workloads. It also supports strong consistency, which means that any changes made to the file system are immediately visible to all clients.

Option E uses Amazon S3 as a shared object store that can store the web content and serve it through Amazon CloudFront, a content delivery network (CDN). Amazon S3 provides high availability, durability, scalability, and performance for object-based workloads. It also supports strong consistency for read-after-write and list operations, which means that any changes made to the objects are immediately visible to all clients. By setting the metadata for the Cache-Control header to no-cache, the web content can be prevented from being cached by the browsers or the CDN edge locations, ensuring that the latest content is always delivered to the users.
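A minimal sketch of option E's upload step (the bucket, key, and content are placeholder assumptions), setting the Cache-Control metadata with boto3:

```python
# Sketch (bucket/key are placeholders): upload web content with a
# Cache-Control: no-cache header so CloudFront and browsers revalidate
# the object on each request instead of serving a stale copy.
import boto3

s3 = boto3.client("s3", region_name="us-west-2")

s3.put_object(
    Bucket="web-content-bucket",   # placeholder bucket
    Key="index.html",
    Body=b"<html><body>Updated content</body></html>",
    ContentType="text/html",
    CacheControl="no-cache",       # forces revalidation on each request
)
```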

Option A is not suitable because using AWS Storage Gateway Volume Gateway as a shared storage solution for a web application is not efficient or scalable. AWS Storage Gateway Volume Gateway is a hybrid cloud storage service that provides block storage volumes that can be mounted on-premises or on EC2 instances as iSCSI devices. It is useful for migrating or backing up data to AWS, but it is not designed for serving web content or providing strong consistency. Moreover, using Volume Gateway would incur additional costs and complexity, and it would not leverage the native AWS storage services.

Option C is not suitable because creating a shared Amazon EBS volume and mounting it on multiple EC2 instances is not possible or reliable. Amazon EBS is a block storage service that provides persistent and high-performance volumes for EC2 instances. However, EBS volumes can only be attached to one EC2 instance at a time, and they are constrained to a single Availability Zone. Therefore, creating a shared EBS volume for a web application that is deployed across multiple Availability Zones is not feasible. Moreover, EBS volumes do not support strong consistency, which means that any changes made to the volume may not be immediately visible to other clients.

Option D is not suitable because using AWS DataSync to perform continuous synchronization of data between EC2 hosts in the Auto Scaling group is not efficient or scalable. AWS DataSync is a data transfer service that helps you move large amounts of data to and from AWS storage services. It is useful for migrating or archiving data, but it is not designed for serving web content or providing strong consistency. Moreover, using DataSync would incur additional costs and complexity, and it would not leverage the native AWS storage services.

Reference:

What Is Amazon Elastic File System?

What Is Amazon Simple Storage Service?

What Is Amazon CloudFront?

What Is AWS Storage Gateway?

What Is Amazon Elastic Block Store?

What Is AWS DataSync?

A company runs a container application on a Kubernetes cluster in the company's data center. The application uses Advanced Message Queuing Protocol (AMQP) to communicate with a message queue. The data center cannot scale fast enough to meet the company's expanding business needs. The company wants to migrate the workloads to AWS.

Which solution will meet these requirements with the LEAST operational overhead?

A. Migrate the container application to Amazon Elastic Container Service (Amazon ECS). Use Amazon Simple Queue Service (Amazon SQS) to retrieve the messages.
B. Migrate the container application to Amazon Elastic Kubernetes Service (Amazon EKS). Use Amazon MQ to retrieve the messages.
C. Use highly available Amazon EC2 instances to run the application. Use Amazon MQ to retrieve the messages.
D. Use AWS Lambda functions to run the application. Use Amazon Simple Queue Service (Amazon SQS) to retrieve the messages.
Suggested answer: B

Explanation:

This option is the best solution because it allows the company to migrate the container application to AWS with minimal changes and leverage a managed service to run the Kubernetes cluster and the message queue. By using Amazon EKS, the company can run the container application on a fully managed Kubernetes control plane that is compatible with the existing Kubernetes tools and plugins. Amazon EKS handles the provisioning, scaling, patching, and security of the Kubernetes cluster, reducing the operational overhead and complexity. By using Amazon MQ, the company can use a fully managed message broker service that supports AMQP and other popular messaging protocols. Amazon MQ handles the administration, maintenance, and scaling of the message broker, ensuring high availability, durability, and security of the messages.
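A hypothetical consumer sketch, assuming an Amazon MQ for RabbitMQ broker (which speaks AMQP 0-9-1) and the pika client library; the endpoint, credentials, and queue name are placeholders:

```python
# Sketch (broker endpoint, credentials, and queue name are placeholders):
# a minimal AMQP 0-9-1 consumer using pika against an Amazon MQ for
# RabbitMQ broker endpoint (amqps on port 5671).
import pika

url = "amqps://appuser:secret@b-1234-example.mq.us-west-2.amazonaws.com:5671/"
connection = pika.BlockingConnection(pika.URLParameters(url))
channel = connection.channel()

def on_message(ch, method, properties, body):
    print("received:", body)
    ch.basic_ack(delivery_tag=method.delivery_tag)  # acknowledge after processing

channel.basic_consume(queue="orders", on_message_callback=on_message)
channel.start_consuming()
```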

A) Migrate the container application to Amazon Elastic Container Service (Amazon ECS). Use Amazon Simple Queue Service (Amazon SQS) to retrieve the messages. This option is not optimal because it requires the company to change the container orchestration platform from Kubernetes to ECS, which can introduce additional complexity and risk. Moreover, it requires the company to change the messaging protocol from AMQP to SQS, which can also affect the application logic and performance. Amazon ECS and Amazon SQS are both fully managed services that simplify the deployment and management of containers and messages, but they may not be compatible with the existing application architecture and requirements.

C) Use highly available Amazon EC2 instances to run the application. Use Amazon MQ to retrieve the messages. This option is not ideal because it requires the company to manage the EC2 instances that host the container application. The company would need to provision, configure, scale, patch, and monitor the EC2 instances, which can increase the operational overhead and infrastructure costs. Moreover, the company would need to install and maintain the Kubernetes software on the EC2 instances, which can also add complexity and risk. Amazon MQ is a fully managed message broker service that supports AMQP and other popular messaging protocols, but it cannot compensate for the lack of a managed Kubernetes service.

D) Use AWS Lambda functions to run the application. Use Amazon Simple Queue Service (Amazon SQS) to retrieve the messages. This option is not feasible because AWS Lambda does not support running container applications directly. Lambda functions are executed in a sandboxed environment that is isolated from other functions and resources. To run container applications on Lambda, the company would need to use a custom runtime or a wrapper library that emulates the container API, which can introduce additional complexity and overhead. Moreover, Lambda functions have limitations in terms of available CPU, memory, and runtime, which may not suit the application needs. Amazon SQS is a fully managed message queue service that supports asynchronous communication, but it does not support AMQP or other messaging protocols.

1. Amazon Elastic Kubernetes Service - Amazon Web Services

2. Amazon MQ - Amazon Web Services

3. Amazon Elastic Container Service - Amazon Web Services

4. AWS Lambda FAQs - Amazon Web Services

A company hosts a database that runs on an Amazon RDS instance that is deployed to multiple Availability Zones. The company periodically runs a script against the database to report new entries that are added to the database. The script that runs against the database negatively affects the performance of a critical application. The company needs to improve application performance with minimal costs.

Which solution will meet these requirements with the LEAST operational overhead?

A. Add functionality to the script to identify the instance that has the fewest active connections. Configure the script to read from that instance to report the total new entries.
B. Create a read replica of the database. Configure the script to query only the read replica to report the total new entries.
C. Instruct the development team to manually export the new entries for the day in the database at the end of each day.
D. Use Amazon ElastiCache to cache the common queries that the script runs against the database.
Suggested answer: B

Explanation:

A read replica is a copy of the primary database that supports read-only queries. By creating a read replica, you can offload the read workload from the primary database and improve its performance. The script can query the read replica without affecting the critical application that uses the primary database. This solution also has the least operational overhead, as you do not need to modify the script, export the data manually, or manage a cache cluster.

Reference:
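A minimal sketch (the instance identifiers are placeholder assumptions) of creating the read replica and finding its endpoint with boto3:

```python
# Sketch (identifiers are placeholders): create a read replica and point
# the reporting script at the replica's endpoint.
import boto3

rds = boto3.client("rds", region_name="us-west-2")

rds.create_db_instance_read_replica(
    DBInstanceIdentifier="reports-replica",
    SourceDBInstanceIdentifier="prod-db",   # the Multi-AZ primary
)

# Wait until the replica is available, then read its endpoint.
waiter = rds.get_waiter("db_instance_available")
waiter.wait(DBInstanceIdentifier="reports-replica")

desc = rds.describe_db_instances(DBInstanceIdentifier="reports-replica")
endpoint = desc["DBInstances"][0]["Endpoint"]["Address"]
print("point the reporting script at:", endpoint)
```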

Working with PostgreSQL, MySQL, and MariaDB Read Replicas

Amazon RDS Performance Insights

A company has an organization in AWS Organizations. The company runs Amazon EC2 instances across four AWS accounts in the root organizational unit (OU). There are three nonproduction accounts and one production account. The company wants to prohibit users from launching EC2 instances of a certain size in the nonproduction accounts. The company has created a service control policy (SCP) to deny access to launch instances that use the prohibited types.

Which solutions to deploy the SCP will meet these requirements? (Select TWO.)

A. Attach the SCP to the root OU for the organization.
B. Attach the SCP to the three nonproduction Organizations member accounts.
C. Attach the SCP to the Organizations management account.
D. Create an OU for the production account. Attach the SCP to the OU. Move the production member account into the new OU.
E. Create an OU for the required accounts. Attach the SCP to the OU. Move the nonproduction member accounts into the new OU.
Suggested answer: B, E

Explanation:

SCPs are a type of organization policy that you can use to manage permissions in your organization. SCPs offer central control over the maximum available permissions for all accounts in your organization. SCPs help you to ensure your accounts stay within your organization's access control guidelines [1].

To apply an SCP to a specific set of accounts, you need to create an OU for those accounts and attach the SCP to the OU. This way, the SCP affects only the member accounts in that OU and not the other accounts in the organization. If you attach the SCP to the root OU, it will apply to all accounts in the organization, including the production account, which is not the desired outcome. If you attach the SCP to the management account, it will have no effect, as SCPs do not affect users or roles in the management account [1].

Therefore, the best solutions to deploy the SCP are B and E. Option B attaches the SCP directly to the three nonproduction accounts, while option E creates a separate OU for the nonproduction accounts and attaches the SCP to the OU. Both options will achieve the same result of restricting the EC2 instance types in the nonproduction accounts, but option E might be more scalable and manageable if there are more accounts or policies to be applied in the future [2].
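A minimal sketch of option E (all IDs, account numbers, the policy name, and the prohibited instance type are placeholder assumptions) using boto3:

```python
# Sketch (IDs, names, and the instance-type list are placeholders):
# create a nonproduction OU, move the accounts into it, and attach an
# SCP that denies launching the prohibited instance types.
import json
import boto3

org = boto3.client("organizations")

scp_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": "ec2:RunInstances",
        "Resource": "arn:aws:ec2:*:*:instance/*",
        "Condition": {"StringEquals": {"ec2:InstanceType": ["p4d.24xlarge"]}},
    }],
}

policy = org.create_policy(
    Name="deny-large-instances",
    Description="Block prohibited EC2 instance sizes",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp_document),
)

ou = org.create_organizational_unit(ParentId="r-examplerootid", Name="nonprod")
for account_id in ("111111111111", "222222222222", "333333333333"):
    org.move_account(
        AccountId=account_id,
        SourceParentId="r-examplerootid",
        DestinationParentId=ou["OrganizationalUnit"]["Id"],
    )

org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId=ou["OrganizationalUnit"]["Id"],
)
```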

1. Service control policies (SCPs) - AWS Organizations

2. Best Practices for AWS Organizations Service Control Policies in a Multi-Account Environment
