
Amazon SAP-C02 Practice Test - Questions Answers, Page 40


An online retail company hosts its stateful web-based application and MySQL database in an on-premises data center on a single server. The company wants to increase its customer base by conducting more marketing campaigns and promotions. In preparation, the company wants to migrate its application and database to AWS to increase the reliability of its architecture.

Which solution should provide the HIGHEST level of reliability?

A.
Migrate the database to an Amazon RDS MySQL Multi-AZ DB instance. Deploy the application in an Auto Scaling group on Amazon EC2 instances behind an Application Load Balancer. Store sessions in Amazon Neptune.
B.
Migrate the database to Amazon Aurora MySQL. Deploy the application in an Auto Scaling group on Amazon EC2 instances behind an Application Load Balancer. Store sessions in an Amazon ElastiCache for Redis replication group.
C.
Migrate the database to Amazon DocumentDB (with MongoDB compatibility). Deploy the application in an Auto Scaling group on Amazon EC2 instances behind a Network Load Balancer. Store sessions in Amazon Kinesis Data Firehose.
D.
Migrate the database to an Amazon RDS MariaDB Multi-AZ DB instance. Deploy the application in an Auto Scaling group on Amazon EC2 instances behind an Application Load Balancer. Store sessions in Amazon ElastiCache for Memcached.
Suggested answer: B

Explanation:

What is Amazon Aurora?

What is Auto Scaling?

What is Amazon ElastiCache?
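
As an illustration of the session-offloading pattern behind option B, here is a minimal sketch using the redis-py client against an ElastiCache for Redis replication group; the endpoint hostname, key naming, and TTL are illustrative assumptions:

    import json
    import redis

    # Connect to the replication group's primary endpoint (placeholder hostname).
    r = redis.Redis(host="sessions.abc123.ng.0001.use1.cache.amazonaws.com", port=6379)

    def save_session(session_id, data, ttl_seconds=1800):
        # Store the session with a TTL so abandoned sessions expire automatically.
        r.setex(f"session:{session_id}", ttl_seconds, json.dumps(data))

    def load_session(session_id):
        raw = r.get(f"session:{session_id}")
        return json.loads(raw) if raw else None

Because sessions live in Redis rather than on any single EC2 instance, the Auto Scaling group can replace or scale instances without logging customers out, which is what makes the web tier reliable.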

A company is building an application on AWS. The application sends logs to an Amazon Elasticsearch Service (Amazon ES) cluster for analysis. All data must be stored within a VPC.

Some of the company's developers work from home. Other developers work from three different company office locations. The developers need to access Amazon ES to analyze and visualize logs directly from their local development machines.

Which solution will meet these requirements?

A.
Configure and set up an AWS Client VPN endpoint. Associate the Client VPN endpoint with a subnet in the VPC. Configure a Client VPN self-service portal. Instruct the developers to connect by using the client for Client VPN.
B.
Create a transit gateway, and connect it to the VPC. Create an AWS Site-to-Site VPN. Create an attachment to the transit gateway. Instruct the developers to connect by using an OpenVPN client.
C.
Create a transit gateway, and connect it to the VPC. Order an AWS Direct Connect connection. Set up a public VIF on the Direct Connect connection. Associate the public VIF with the transit gateway. Instruct the developers to connect to the Direct Connect connection.
D.
Create and configure a bastion host in a public subnet of the VPC. Configure the bastion host security group to allow SSH access from the company CIDR ranges. Instruct the developers to connect by using SSH.
Suggested answer: A

Explanation:

What is AWS Client VPN?

Creating a Client VPN endpoint

Associating a target network with a Client VPN endpoint

Configuring a self-service portal
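
For reference, a hedged boto3 sketch of the setup in option A; the CIDR blocks, certificate ARNs, and IDs are placeholders, and mutual certificate authentication is just one of the supported authentication options:

    import boto3

    ec2 = boto3.client("ec2")

    # Create the Client VPN endpoint with the self-service portal enabled.
    endpoint = ec2.create_client_vpn_endpoint(
        ClientCidrBlock="10.100.0.0/22",
        ServerCertificateArn="arn:aws:acm:us-east-1:111122223333:certificate/EXAMPLE",
        AuthenticationOptions=[{
            "Type": "certificate-authentication",
            "MutualAuthentication": {
                "ClientRootCertificateChainArn": "arn:aws:acm:us-east-1:111122223333:certificate/EXAMPLE-CA"
            },
        }],
        ConnectionLogOptions={"Enabled": False},
        SelfServicePortal="enabled",
    )
    endpoint_id = endpoint["ClientVpnEndpointId"]

    # Associate the endpoint with a subnet in the VPC, then authorize access
    # to the VPC CIDR so developers can reach the Amazon ES cluster.
    ec2.associate_client_vpn_target_network(
        ClientVpnEndpointId=endpoint_id, SubnetId="subnet-0abc1234"
    )
    ec2.authorize_client_vpn_ingress(
        ClientVpnEndpointId=endpoint_id,
        TargetNetworkCidr="10.0.0.0/16",
        AuthorizeAllGroups=True,
    )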

A research center is migrating to the AWS Cloud and has moved its on-premises 1 PB object storage to an Amazon S3 bucket. One hundred scientists are using this object storage to store their work-related documents. Each scientist has a personal folder on the object store. All the scientists are members of a single IAM user group.

The research center's compliance officer is worried that scientists will be able to access each other's work. The research center has a strict obligation to report on which scientist accesses which documents.

The team that is responsible for these reports has little AWS experience and wants a ready-to-use solution that minimizes operational overhead.

Which combination of actions should a solutions architect take to meet these requirements? (Select TWO.)

A.
Create an identity policy that grants the user read and write access. Add a condition that specifies that the S3 paths must be prefixed with ${aws:username}. Apply the policy on the scientists' IAM user group.
B.
Configure a trail with AWS CloudTrail to capture all object-level events in the S3 bucket. Store the trail output in another S3 bucket. Use Amazon Athena to query the logs and generate reports.
C.
Enable S3 server access logging. Configure another S3 bucket as the target for log delivery. Use Amazon Athena to query the logs and generate reports.
D.
Create an S3 bucket policy that grants read and write access to users in the scientists' IAM user group.
E.
Configure a trail with AWS CloudTrail to capture all object-level events in the S3 bucket and write the events to Amazon CloudWatch. Use the Amazon Athena CloudWatch connector to query the logs and generate reports.
Suggested answer: A, B

Explanation:

Identity-based policies

Policy variables

IAM groups

Object-level logging

Creating a trail that applies to all regions

What is Amazon Athena?
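
As a sketch of how options A and B fit together, the following boto3 snippet attaches a ${aws:username}-scoped inline policy to the group and turns on S3 object-level (data) events for an existing trail; the bucket, group, and trail names are placeholders:

    import json
    import boto3

    iam = boto3.client("iam")

    # Identity policy that confines each scientist to the S3 prefix matching
    # their IAM user name.
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "ListOwnPrefixOnly",
                "Effect": "Allow",
                "Action": "s3:ListBucket",
                "Resource": "arn:aws:s3:::research-docs",
                "Condition": {"StringLike": {"s3:prefix": ["${aws:username}/*"]}},
            },
            {
                "Sid": "ReadWriteOwnObjects",
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:PutObject"],
                "Resource": "arn:aws:s3:::research-docs/${aws:username}/*",
            },
        ],
    }
    iam.put_group_policy(
        GroupName="scientists",
        PolicyName="per-user-prefix-access",
        PolicyDocument=json.dumps(policy),
    )

    # Record object-level (data) events for the bucket on an existing trail.
    cloudtrail = boto3.client("cloudtrail")
    cloudtrail.put_event_selectors(
        TrailName="research-audit-trail",
        EventSelectors=[{
            "ReadWriteType": "All",
            "IncludeManagementEvents": True,
            "DataResources": [
                {"Type": "AWS::S3::Object", "Values": ["arn:aws:s3:::research-docs/"]}
            ],
        }],
    )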

A company wants to migrate its website from an on-premises data center onto AWS. At the same time, it wants to migrate the website to a containerized microservice-based architecture to improve the availability and cost efficiency. The company's security policy states that privileges and network permissions must be configured according to best practice, using least privilege.

A solutions architect has created a containerized architecture that meets the security requirements and has deployed the application to an Amazon ECS cluster.

What steps are required after the deployment to meet the requirements? (Choose two.)

A.
Create tasks using the bridge network mode.
B.
Create tasks using the awsvpc network mode.
C.
Apply security groups to Amazon EC2 instances, and use IAM roles for EC2 instances to access other resources.
D.
Apply security groups to the tasks, and pass IAM credentials into the container at launch time to access other resources.
E.
Apply security groups to the tasks, and use IAM roles for tasks to access other resources.
Suggested answer: B, E

Explanation:

awsvpc network mode

Task networking with the awsvpc network mode

Security groups for your VPC

IAM roles for tasks

Best practices for managing AWS access keys
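
A minimal boto3 sketch of options B and E: registering a task definition with the awsvpc network mode and a task role, then launching the task with its own security group (all ARNs, names, and IDs are placeholders):

    import boto3

    ecs = boto3.client("ecs")

    # Task definition with awsvpc networking and a task-scoped IAM role.
    ecs.register_task_definition(
        family="web-service",
        networkMode="awsvpc",
        taskRoleArn="arn:aws:iam::111122223333:role/web-service-task-role",
        containerDefinitions=[{
            "name": "web",
            "image": "111122223333.dkr.ecr.us-east-1.amazonaws.com/web:latest",
            "memory": 512,
            "portMappings": [{"containerPort": 8080, "protocol": "tcp"}],
        }],
    )

    # Run the task in specific subnets with a security group applied to the
    # task's own elastic network interface, not to the host instance.
    ecs.run_task(
        cluster="web-cluster",
        taskDefinition="web-service",
        networkConfiguration={"awsvpcConfiguration": {
            "subnets": ["subnet-0abc1234"],
            "securityGroups": ["sg-0def5678"],
        }},
    )

Because each awsvpc task receives its own elastic network interface and its own IAM role, both network permissions and privileges are scoped to the task rather than to the shared EC2 host, which is the least-privilege outcome the security policy requires.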

A company is running a serverless application that consists of several AWS Lambda functions and Amazon DynamoDB tables. The company has created new functionality that requires the Lambda functions to access an Amazon Neptune DB cluster. The Neptune DB cluster is located in three subnets in a VPC.

Which of the possible solutions will allow the Lambda functions to access the Neptune DB cluster and DynamoDB tables? (Select TWO.)

A.
Create three public subnets in the Neptune VPC, and route traffic through an internet gateway. Host the Lambda functions in the three new public subnets.
B.
Create three private subnets in the Neptune VPC, and route internet traffic through a NAT gateway. Host the Lambda functions in the three new private subnets.
C.
Host the Lambda functions outside the VPC. Update the Neptune security group to allow access from the IP ranges of the Lambda functions.
D.
Host the Lambda functions outside the VPC. Create a VPC endpoint for the Neptune database, and have the Lambda functions access Neptune over the VPC endpoint.
E.
Create three private subnets in the Neptune VPC. Host the Lambda functions in the three new isolated subnets. Create a VPC endpoint for DynamoDB, and route DynamoDB traffic to the VPC endpoint.
Suggested answer: B, E

Explanation:

Configuring a Lambda function to access resources in a VPC

Working with VPCs and subnets

NAT gateways

Accessing Amazon Neptune from AWS Lambda

VPC endpoints for DynamoDB
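
A hedged boto3 sketch of the mechanics behind option E (with the private-subnet placement that option B also relies on); the function name, subnet, security group, VPC, Region, and route table IDs are placeholder assumptions:

    import boto3

    # Attach the existing Lambda function to private subnets in the Neptune VPC.
    lambda_client = boto3.client("lambda")
    lambda_client.update_function_configuration(
        FunctionName="graph-writer",
        VpcConfig={
            "SubnetIds": ["subnet-0aaa1111", "subnet-0bbb2222", "subnet-0ccc3333"],
            "SecurityGroupIds": ["sg-0abc1234"],
        },
    )

    # Gateway VPC endpoint so DynamoDB traffic never leaves the VPC.
    ec2 = boto3.client("ec2")
    ec2.create_vpc_endpoint(
        VpcEndpointType="Gateway",
        VpcId="vpc-0abc1234",
        ServiceName="com.amazonaws.us-east-1.dynamodb",
        RouteTableIds=["rtb-0abc1234"],
    )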



A company is planning to migrate an on-premises data center to AWS. The company currently hosts the data center on Linux-based VMware VMs. A solutions architect must collect information about network dependencies between the VMs. The information must be in the form of a diagram that details host IP addresses, hostnames, and network connection information.

Which solution will meet these requirements?

A.
Use AWS Application Discovery Service. Select an AWS Migration Hub home AWS Region. Install the AWS Application Discovery Agent on the on-premises servers for data collection. Grant permissions to Application Discovery Service to use the Migration Hub network diagrams.
B.
Use the AWS Application Discovery Service Agentless Collector for server data collection. Export the network diagrams from the AWS Migration Hub in .png format.
C.
Install the AWS Application Migration Service agent on the on-premises servers for data collection. Use AWS Migration Hub data in Workload Discovery on AWS to generate network diagrams.
D.
Install the AWS Application Migration Service agent on the on-premises servers for data collection. Export data from AWS Migration Hub in .csv format into an Amazon CloudWatch dashboard to generate network diagrams.
Suggested answer: B

Explanation:

To gather network dependency information from an on-premises VMware environment, the AWS Application Discovery Service Agentless Collector is the best fit. It is deployed as a virtual appliance in VMware vCenter and collects server inventory, performance data, and network connection details, including host IP addresses, hostnames, and the connections between servers, without requiring an agent to be installed on each VM. The collected data flows into AWS Migration Hub, which can visualize the dependencies between components as network diagrams and export them in .png format. This provides the solutions architect with the required dependency diagram while keeping the collection effort low, and it gives the company the thorough understanding of its on-premises environment that is essential for a successful migration to AWS.

AWS Documentation on Application Discovery Service: This provides detailed guidance on how to use the Application Discovery Service, including the installation and configuration of the Discovery Agent.

AWS Migration Hub User Guide: Offers insights on how to integrate Application Discovery Service data with Migration Hub for comprehensive migration planning and tracking.

AWS Solutions Architect Professional Learning Path: Contains advanced topics and best practices for migrating complex on-premises environments to AWS, emphasizing the use of AWS services and tools for effective migration planning and execution.
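
The diagram export itself is performed in the Migration Hub console, but a short boto3 sketch can confirm that the collector is reporting data; it assumes the call is made in the Migration Hub home Region, and the attribute keys shown are the ones Application Discovery Service returns for SERVER configurations:

    import boto3

    # Application Discovery Service client in the Migration Hub home Region.
    discovery = boto3.client("discovery", region_name="us-west-2")

    # List discovered servers and print their hostnames and configuration IDs.
    servers = discovery.list_configurations(configurationType="SERVER")
    for item in servers.get("configurations", []):
        print(item.get("server.hostName"), item.get("server.configurationId"))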

A company is planning a migration from an on-premises data center to the AWS Cloud. The company plans to use multiple AWS accounts that are managed in an organization in AWS Organizations. The company will create a small number of accounts initially and will add accounts as needed. A solutions architect must design a solution that turns on AWS CloudTrail in all AWS accounts in the organization.

What is the MOST operationally efficient solution that meets these requirements?

A.
Create an AWS Lambda function that creates a new CloudTrail trail in all AWS accounts in the organization. Invoke the Lambda function daily by using a scheduled rule in Amazon EventBridge.
B.
Create a new CloudTrail trail in the organization's management account. Configure the trail to log all events for all AWS accounts in the organization.
C.
Create a new CloudTrail trail in all AWS accounts in the organization. Create new trails whenever a new account is created.
D.
Create an AWS Systems Manager Automation runbook that creates a CloudTrail trail in all AWS accounts in the organization. Invoke the automation by using Systems Manager State Manager.
Suggested answer: B

Explanation:

The most operationally efficient solution for turning on AWS CloudTrail across multiple AWS accounts managed within an AWS Organization is to create a single CloudTrail trail in the organization's management account and configure it to log events for all accounts within the organization. This approach leverages CloudTrail's ability to consolidate logs from all accounts in an organization, thereby simplifying management, reducing overhead, and ensuring consistent logging across accounts. This method eliminates the need for manual intervention in each account, making it an operationally efficient choice for organizations planning to scale their AWS usage.

AWS CloudTrail Documentation: Provides detailed instructions on setting up CloudTrail, including how to configure it for an organization.

AWS Organizations Documentation: Offers insights into best practices for managing multiple AWS accounts and how services like CloudTrail integrate with AWS Organizations.

AWS Best Practices for Security and Governance: Guides on how to effectively use AWS services to maintain a secure and well-governed AWS environment, with a focus on centralized logging and monitoring.
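
A minimal boto3 sketch of option B, run from the management account; it assumes the destination bucket already carries a CloudTrail-friendly bucket policy and that trusted access for CloudTrail is enabled in AWS Organizations (names are placeholders):

    import boto3

    cloudtrail = boto3.client("cloudtrail")

    # One multi-Region organization trail covers every current and future
    # account in the organization.
    cloudtrail.create_trail(
        Name="org-trail",
        S3BucketName="org-trail-logs-111122223333",
        IsMultiRegionTrail=True,
        IsOrganizationTrail=True,
    )
    cloudtrail.start_logging(Name="org-trail")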

A company's factory and automation applications are running in a single VPC. More than 23 applications run on a combination of Amazon EC2, Amazon Elastic Container Service (Amazon ECS), and Amazon RDS.

The company has software engineers spread across three teams. One of the three teams owns each application, and each team is responsible for the cost and performance of all of its applications. Team resources have tags that represent their application and team. The teams use IAM access for daily activities.

The company needs to determine which costs on the monthly AWS bill are attributable to each application or team. The company also must be able to create reports to compare costs from the last 12 months and to help forecast costs for the next 12 months. A solutions architect must recommend an AWS Billing and Cost Management solution that provides these cost reports.

Which combination of actions will meet these requirements? (Select THREE.)

A.
Activate the user-defined cost allocation tags that represent the application and the team.
B.
Activate the AWS generated cost allocation tags that represent the application and the team.
C.
Create a cost category for each application in Billing and Cost Management.
D.
Activate IAM access to Billing and Cost Management.
E.
Create a cost budget.
F.
Enable Cost Explorer.
Suggested answer: A, C, F

Explanation:

To attribute AWS costs to specific applications or teams and enable detailed cost analysis and forecasting, the solutions architect should recommend the following actions:

A. Activating user-defined cost allocation tags for resources associated with each application and team allows for detailed tracking of costs by these identifiers.

C. Creating a cost category for each application within AWS Billing and Cost Management enables the organization to group costs according to application, facilitating detailed reporting and analysis.

F. Enabling Cost Explorer is essential for analyzing and visualizing AWS spending over time. It provides the capability to view historical costs and forecast future expenses, supporting the company's requirement for cost comparison and forecasting.

AWS Billing and Cost Management Documentation: Covers the activation of cost allocation tags, creation of cost categories, and the use of Cost Explorer for cost management.

AWS Tagging Strategies: Provides best practices for implementing tagging strategies that support cost allocation and reporting.

AWS Cost Explorer Documentation: Details how to use Cost Explorer to analyze and forecast AWS costs.
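
For illustration, a boto3 sketch of actions A and F: activating the user-defined tags and running a Cost Explorer query grouped by team. The tag keys and date range are assumptions, and the matching cost categories (action C) are most easily defined in the Billing console:

    import boto3

    ce = boto3.client("ce")

    # Activate the user-defined cost allocation tags the teams already apply.
    ce.update_cost_allocation_tags_status(CostAllocationTagsStatus=[
        {"TagKey": "application", "Status": "Active"},
        {"TagKey": "team", "Status": "Active"},
    ])

    # Cost Explorer query: the last 12 months of cost, grouped by team.
    report = ce.get_cost_and_usage(
        TimePeriod={"Start": "2023-07-01", "End": "2024-07-01"},
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        GroupBy=[{"Type": "TAG", "Key": "team"}],
    )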

A company is using an organization in AWS Organizations to manage AWS accounts. For each new project, the company creates a new linked account. After the creation of a new account, the root user signs in to the new account and creates a service request to increase the service quota for Amazon EC2 instances. A solutions architect needs to automate this process.

Which solution will meet these requirements with the LEAST operational overhead?

A.
Create an Amazon EventBridge rule to detect creation of a new account. Send the event to an Amazon Simple Notification Service (Amazon SNS) topic that invokes an AWS Lambda function. Configure the Lambda function to run the request-service-quota-increase command to request a service quota increase for EC2 instances.
B.
Create a Service Quotas request template in the management account. Configure the desired service quota increases for EC2 instances.
C.
Create an AWS Config rule in the management account to set the service quota for EC2 instances.
D.
Create an Amazon EventBridge rule to detect creation of a new account. Send the event to an Amazon Simple Notification Service (Amazon SNS) topic that invokes an AWS Lambda function. Configure the Lambda function to run the create-case command to request a service quota increase for EC2 instances.
Suggested answer: A

Explanation:

Automating the process of increasing service quotas for Amazon EC2 instances in new AWS accounts with minimal operational overhead can be effectively achieved by using Amazon EventBridge, Amazon SNS, and AWS Lambda. An EventBridge rule can detect the creation of a new account and trigger an SNS topic, which in turn invokes a Lambda function. This function can then programmatically request a service quota increase for EC2 instances using the AWS Service Quotas API. This approach streamlines the process, reduces manual intervention, and ensures that new accounts are automatically configured with the desired service quotas.

Amazon EventBridge Documentation: Provides guidance on setting up event rules for detecting AWS account creation.

AWS Lambda Documentation: Details how to create and configure Lambda functions to perform automated tasks, such as requesting service quota increases.

AWS Service Quotas Documentation: Offers information on managing and requesting increases for AWS service quotas programmatically.
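
A hedged sketch of the Lambda function from option A. The quota code shown (L-1216C47A, commonly listed for running On-Demand Standard instances) is an example that should be verified with list-service-quotas, and in practice the function would assume a role in the newly created account so that the increase applies there:

    import boto3

    def handler(event, context):
        # Invoked through SNS by an EventBridge rule that matches the
        # account-creation event from AWS Organizations.
        quotas = boto3.client("service-quotas")
        quotas.request_service_quota_increase(
            ServiceCode="ec2",
            QuotaCode="L-1216C47A",  # example quota code; verify before use
            DesiredValue=256.0,
        )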

A company needs to gather data from an experiment in a remote location that does not have internet connectivity. During the experiment, sensors that are connected to a local network will generate 6 TB of data in a proprietary format over the course of 1 week. The sensors can be configured to upload their data files to an FTP server periodically, but the sensors do not have their own FTP server. The sensors also do not support other protocols. The company needs to collect the data centrally and move the data to object storage in the AWS Cloud as soon as possible after the experiment.

Which solution will meet these requirements?

A.
Order an AWS Snowball Edge Compute Optimized device. Connect the device to the local network. Configure AWS DataSync with a target bucket name, and upload the data over NFS to the device. After the experiment, return the device to AWS so that the data can be loaded into Amazon S3.
B.
Order an AWS Snowcone device, including an Amazon Linux 2 AMI. Connect the device to the local network. Launch an Amazon EC2 instance on the device. Create a shell script that periodically downloads data from each sensor. After the experiment, return the device to AWS so that the data can be loaded as an Amazon Elastic Block Store (Amazon EBS) volume.
C.
Order an AWS Snowcone device, including an Amazon Linux 2 AMI. Connect the device to the local network. Launch an Amazon EC2 instance on the device. Install and configure an FTP server on the EC2 instance. Configure the sensors to upload data to the EC2 instance. After the experiment, return the device to AWS so that the data can be loaded into Amazon S3.
D.
Order an AWS Snowcone device. Connect the device to the local network. Configure the device to use Amazon FSx. Configure the sensors to upload data to the device. Configure AWS DataSync on the device to synchronize the uploaded data with an Amazon S3 bucket. Return the device to AWS so that the data can be loaded as an Amazon Elastic Block Store (Amazon EBS) volume.
Suggested answer: C

Explanation:

For collecting data from remote sensors without internet connectivity, using an AWS Snowcone device with an Amazon EC2 instance running an FTP server presents a practical solution. This setup allows the sensors to upload data to the EC2 instance via FTP, and after the experiment, the Snowcone device can be returned to AWS for data ingestion into Amazon S3. This approach minimizes operational complexity and ensures efficient data transfer to AWS for further processing or storage.
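
As one possible way to run the FTP server on the EC2 instance from option C, here is a sketch using the third-party pyftpdlib package (pip install pyftpdlib); the user name, password, and data directory are placeholders:

    from pyftpdlib.authorizers import DummyAuthorizer
    from pyftpdlib.handlers import FTPHandler
    from pyftpdlib.servers import FTPServer

    # Grant the sensor account read/write permissions on the data directory.
    authorizer = DummyAuthorizer()
    authorizer.add_user("sensor", "CHANGE_ME", "/data/experiment", perm="elradfmwMT")

    handler = FTPHandler
    handler.authorizer = authorizer

    # Listen on all interfaces so sensors on the local network can connect.
    server = FTPServer(("0.0.0.0", 21), handler)
    server.serve_forever()

The sensors upload their files to this instance over FTP during the experiment, and the device is then returned to AWS so the collected data can be imported into Amazon S3.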
