Amazon SAP-C01 Practice Test - Questions Answers, Page 13

A company has built a high performance computing (HPC) cluster in AWS for a tightly coupled workload that generates a large number of shared files stored in Amazon EFS. The cluster was performing well when the number of Amazon EC2 instances in the cluster was 100. However, when the company increased the cluster size to 1,000 EC2 instances, overall performance was well below expectations. Which collection of design choices should a solutions architect make to achieve the maximum performance from the HPC cluster? (Choose three.)

A. Ensure the HPC cluster is launched within a single Availability Zone.
B. Launch the EC2 instances and attach elastic network interfaces in multiples of four.
C. Select EC2 instance types with an Elastic Fabric Adapter (EFA) enabled.
D. Ensure the cluster is launched across multiple Availability Zones.
E. Replace Amazon EFS with multiple Amazon EBS volumes in a RAID array.
F. Replace Amazon EFS with Amazon FSx for Lustre.
Suggested answer: A, C, F
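
For reference, a minimal boto3 sketch of provisioning the FSx for Lustre file system from option F; the subnet, security group, and capacity values are placeholders rather than details from the question.

import boto3

fsx = boto3.client("fsx")

# Provision a scratch Lustre file system for the cluster nodes to mount in place of EFS.
# Subnet, security group, and capacity values are placeholders.
response = fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=1200,                      # GiB; minimum size for SCRATCH_2
    SubnetIds=["subnet-0abc1234"],
    SecurityGroupIds=["sg-0abc1234"],
    LustreConfiguration={"DeploymentType": "SCRATCH_2"},
)
print(response["FileSystem"]["FileSystemId"])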

A Solutions Architect is building a solution for updating user metadata that is initiated by web servers. The solution needs to rapidly scale from hundreds to tens of thousands of jobs in less than 30 seconds. The solution must be asynchronous, always available, and must minimize costs.

Which strategies should the Solutions Architect use to meet these requirements?

A. Create an AWS SWF worker that will update user metadata. Update the web application to start a new workflow for every job.
B. Create an AWS Lambda function that will update user metadata. Create an Amazon SQS queue and configure it as an event source for the Lambda function. Update the web application to send jobs to the queue.
C. Create an AWS Lambda function that will update user metadata. Create AWS Step Functions that will trigger the Lambda function. Update the web application to initiate Step Functions for every job.
D. Create an Amazon SQS queue. Create an AMI with a worker to check the queue and update user metadata. Configure an Amazon EC2 Auto Scaling group with the new AMI. Update the web application to send jobs to the queue.
Suggested answer: B
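
The following is a minimal boto3 sketch of option B; the queue name and the Lambda function name (update-user-metadata) are hypothetical.

import boto3

sqs = boto3.client("sqs")
aws_lambda = boto3.client("lambda")

# Create the queue that the web application will send jobs to.
queue_url = sqs.create_queue(QueueName="user-metadata-jobs")["QueueUrl"]
queue_arn = sqs.get_queue_attributes(
    QueueUrl=queue_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# Configure the queue as an event source for an existing Lambda function
# (the function name is hypothetical).
aws_lambda.create_event_source_mapping(
    EventSourceArn=queue_arn,
    FunctionName="update-user-metadata",
    BatchSize=10,
)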

A company’s security compliance requirements state that all Amazon EC2 images must be scanned for vulnerabilities and must pass a CVE assessment. A solutions architect is developing a mechanism to create security-approved AMIs that can be used by developers. Any new AMIs should go through an automated assessment process and be marked as approved before developers can use them. The approved images must be scanned every 30 days to ensure compliance.

Which combination of steps should the solutions architect take to meet these requirements while following best practices? (Choose two.)

A. Use the AWS Systems Manager EC2 agent to run the CVE assessment on the EC2 instances launched from the AMIs that need to be scanned.
B. Use AWS Lambda to write automatic approval rules. Store the approved AMI list in AWS Systems Manager Parameter Store. Use Amazon EventBridge to trigger an AWS Systems Manager Automation document on all EC2 instances every 30 days.
C. Use Amazon Inspector to run the CVE assessment on the EC2 instances launched from the AMIs that need to be scanned.
D. Use AWS Lambda to write automatic approval rules. Store the approved AMI list in AWS Systems Manager Parameter Store. Use a managed AWS Config rule for continuous scanning on all EC2 instances, and use AWS Systems Manager Automation documents for remediation.
E. Use AWS CloudTrail to run the CVE assessment on the EC2 instances launched from the AMIs that need to be scanned.
Suggested answer: B, C
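
A minimal boto3 sketch of the two building blocks in option B; the parameter name, AMI IDs, and rule name are placeholders, and the rule would still need a target (for example, the Systems Manager Automation document) attached separately.

import boto3

ssm = boto3.client("ssm")
events = boto3.client("events")

# Record the approved AMI IDs in Parameter Store (name and values are placeholders).
ssm.put_parameter(
    Name="/ami/approved-list",
    Value="ami-0abc1234,ami-0def5678",
    Type="StringList",
    Overwrite=True,
)

# Schedule the re-assessment every 30 days; a target such as the Systems Manager
# Automation document would then be attached with put_targets.
events.put_rule(
    Name="ami-cve-rescan",
    ScheduleExpression="rate(30 days)",
)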

An organization is setting up a highly scalable application using Elastic Beanstalk.

They are using Elastic Load Balancing (ELB) as well as a Virtual Private Cloud (VPC) with public and private subnets. They have the following requirements:

- All the EC2 instances should have a private IP

- All the EC2 instances should receive data via the ELB.

Which of these will not be needed in this setup?

A. Launch the EC2 instances with only the public subnet.
B. Create routing rules which will route all inbound traffic from ELB to the EC2 instances.
C. Configure ELB and NAT as a part of the public subnet only.
D. Create routing rules which will route all outbound traffic from the EC2 instances through NAT.
Suggested answer: A

Explanation:

The Amazon Virtual Private Cloud (Amazon VPC) allows the user to define a virtual networking environment in a private, isolated section of the Amazon Web Services (AWS) cloud. The user has complete control over the virtual networking environment. If the organization wants the Amazon EC2 instances to have a private IP address, it should create a public and a private subnet for the VPC in each Availability Zone (this is an AWS Elastic Beanstalk requirement). The organization should add its public resources, such as the ELB and NAT, to the public subnet, and AWS Elastic Beanstalk will assign them unique Elastic IP addresses (a static, public IP address). The organization should launch the Amazon EC2 instances in a private subnet so that AWS Elastic Beanstalk assigns them non-routable private IP addresses. The organization should then configure route tables with the following rules: route all inbound traffic from the ELB to the EC2 instances, and route all outbound traffic from the EC2 instances through the NAT.

Reference: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/AWSHowTo-vpc.html
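
A minimal boto3 sketch of the outbound routing rule described above, assuming the NAT gateway already exists in the public subnet; the route table and NAT gateway IDs are placeholders.

import boto3

ec2 = boto3.client("ec2")

# Route all outbound traffic from the private (instance) subnet through the
# NAT gateway in the public subnet. IDs are placeholders.
ec2.create_route(
    RouteTableId="rtb-0abc1234",          # route table of the private subnet
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId="nat-0abc1234",
)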

The CFO of a company wants to allow one of his employees to view only the AWS usage report page.

Which of the below mentioned IAM policy statements allows the user to have access to the AWS usage report page?

A. "Effect": "Allow", "Action": ["Describe"], "Resource": "Billing"
B. "Effect": "Allow", "Action": ["aws-portal:ViewBilling"], "Resource": "*"
C. "Effect": "Allow", "Action": ["aws-portal:ViewUsage"], "Resource": "*"
D. "Effect": "Allow", "Action": ["AccountUsage"], "Resource": "*"
Suggested answer: C

Explanation:

AWS Identity and Access Management is a web service which allows organizations to manage users and user permissions for various AWS services. If the CFO wants to allow only AWS usage report page access, the policy for that IAM user will be as given below:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "aws-portal:ViewUsage"
      ],
      "Resource": "*"
    }
  ]
}

Reference: http://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/billing-permissions-ref.html
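
A minimal boto3 sketch of attaching the policy above as an inline policy; the user name and policy name are placeholders.

import boto3
import json

iam = boto3.client("iam")

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": ["aws-portal:ViewUsage"], "Resource": "*"}
    ],
}

# Attach the policy inline to the employee's IAM user (names are placeholders).
iam.put_user_policy(
    UserName="finance-analyst",
    PolicyName="ViewUsageReportOnly",
    PolicyDocument=json.dumps(policy),
)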

A company runs an application in the cloud that consists of a database and a website. Users can post data to the website, have the data processed, and have the data sent back to them in an email. Data is stored in a MySQL database running on an Amazon EC2 instance. The database is running in a VPC with two private subnets. The website is running on Apache Tomcat in a single EC2 instance in a different VPC with one public subnet. There is a single VPC peering connection between the database and website VPC.

The website has suffered several outages during the last month due to high traffic.

Which actions should a solutions architect take to increase the reliability of the application? (Choose three.)

A. Place the Tomcat server in an Auto Scaling group with multiple EC2 instances behind an Application Load Balancer.
B. Provision an additional VPC peering connection.
C. Migrate the MySQL database to Amazon Aurora with one Aurora Replica.
D. Provision two NAT gateways in the database VPC.
E. Move the Tomcat server to the database VPC.
F. Create an additional public subnet in a different Availability Zone in the website VPC.
Suggested answer: A, C, F
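
A minimal boto3 sketch of option A, assuming a launch template and an Application Load Balancer target group already exist; all names, subnet IDs, and the target group ARN are placeholders.

import boto3

autoscaling = boto3.client("autoscaling")

# Run the Tomcat tier as an Auto Scaling group spanning two Availability Zones,
# registered with an existing ALB target group (all identifiers are placeholders).
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="tomcat-web",
    LaunchTemplate={"LaunchTemplateName": "tomcat-web", "Version": "$Latest"},
    MinSize=2,
    MaxSize=6,
    VPCZoneIdentifier="subnet-0aaa1111,subnet-0bbb2222",
    TargetGroupARNs=["arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/tomcat/abc123"],
)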

A company runs an application that gives users the ability to search for videos and related information by using keywords that are curated from content providers. The application data is stored in an on-premises Oracle database that is 800 GB in size.

The company wants to migrate the data to an Amazon Aurora MySQL DB instance. A solutions architect plans to use the AWS Schema Conversion Tool and AWS Database Migration Service (AWS DMS) for the migration. During the migration, the existing database must serve ongoing requests. The migration must be completed with minimum downtime. Which solution will meet these requirements?

A. Create primary key indexes, secondary indexes, and referential integrity constraints in the target database before starting the migration process.
B. Use AWS DMS to run the conversion report for Oracle to Aurora MySQL. Remediate any issues. Then use AWS DMS to migrate the data.
C. Use the M5 or C5 DMS replication instance type for ongoing replication.
D. Turn off automatic backups and logging of the target database until the migration and cutover processes are complete.
Suggested answer: A

Explanation:

Reference: https://docs.aws.amazon.com/dms/latest/sbs/chap-rdsoracle2aurora.html
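
For context, a minimal boto3 sketch of a DMS task that performs a full load plus ongoing change data capture, which is what lets the source Oracle database keep serving requests during the migration; the endpoint and replication instance ARNs are placeholders.

import boto3
import json

dms = boto3.client("dms")

table_mappings = {
    "rules": [{
        "rule-type": "selection",
        "rule-id": "1",
        "rule-name": "include-all",
        "object-locator": {"schema-name": "%", "table-name": "%"},
        "rule-action": "include",
    }]
}

# Full load plus CDC keeps the source database in service while ongoing changes
# replicate to Aurora MySQL (all ARNs are placeholders).
dms.create_replication_task(
    ReplicationTaskIdentifier="oracle-to-aurora",
    SourceEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:source",
    TargetEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:target",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:111122223333:rep:instance",
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps(table_mappings),
)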

One of your AWS Data Pipeline activities has failed consecutively and has entered a hard failure state after retrying three times. You want to try it again. Is it possible to increase the number of automatic retries to more than three?

A. Yes, you can increase the number of automatic retries to 6.
B. Yes, you can increase the number of automatic retries to an indefinite number.
C. No, you cannot increase the number of automatic retries.
D. Yes, you can increase the number of automatic retries to 10.
Suggested answer: D

Explanation:

In AWS Data Pipeline, an activity fails if all of its activity attempts return with a failed state. By default, an activity retries three times before entering a hard failure state. You can increase the number of automatic retries to 10. However, the system does not allow indefinite retries.

Reference:

https://aws.amazon.com/datapipeline/faqs/
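
A trimmed boto3 sketch of raising the retry ceiling on an activity by setting its maximumRetries field; the pipeline ID and object names are placeholders, and a real activity definition would need additional fields such as its schedule and resources.

import boto3

datapipeline = boto3.client("datapipeline")

# Set maximumRetries on the activity to the allowed maximum of 10
# (pipeline ID and object names are placeholders; other required fields omitted).
datapipeline.put_pipeline_definition(
    pipelineId="df-0123456789ABCDEF",
    pipelineObjects=[{
        "id": "MyCopyActivity",
        "name": "MyCopyActivity",
        "fields": [
            {"key": "type", "stringValue": "CopyActivity"},
            {"key": "maximumRetries", "stringValue": "10"},
        ],
    }],
)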

Your application is not highly available, and your on-premises server cannot access the mount target because the Availability Zone (AZ) in which the mount target exists is unavailable. Which of the following actions is recommended?

A. The application must implement the checkpoint logic and recreate the mount target.
B. The application must implement the shutdown logic and delete the mount target in the AZ.
C. The application must implement the delete logic and connect to a different mount target in the same AZ.
D. The application must implement the restart logic and connect to a mount target in a different AZ.
Suggested answer: D

Explanation:

To make sure that there is continuous availability between your on-premises data center and your Amazon Virtual Private Cloud (VPC), it is suggested that you configure two AWS Direct Connect connections. If your application is not highly available and your on-premises server cannot access the mount target because the AZ in which the mount target exists becomes unavailable, your application should implement restart logic and connect to a mount target in a different AZ.

Reference: http://docs.aws.amazon.com/efs/latest/ug/performance.html#performance-onpremises
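
A minimal boto3 sketch of the lookup step in that restart logic: list the file system's mount targets and pick one in a different Availability Zone; the file system ID and the failed AZ name are placeholders.

import boto3

efs = boto3.client("efs")

# Find a mount target outside the failed Availability Zone and reconnect to it
# (file system ID and AZ name are placeholders).
targets = efs.describe_mount_targets(FileSystemId="fs-0abc1234")["MountTargets"]
healthy = [t for t in targets if t["AvailabilityZoneName"] != "us-east-1a"]
print(healthy[0]["IpAddress"])  # remount the NFS export against this IP address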

An organization has created multiple components of a single application for compartmentalization. Currently all the components are hosted on a single EC2 instance. For security reasons, the organization wants to implement two separate SSL certificates for the separate modules, although it is already using a VPC.

How can the organization achieve this with a single instance?

A. You have to launch two instances each in a separate subnet and allow VPC peering for a single IP.
B. Create a VPC instance which will have multiple network interfaces with multiple elastic IP addresses.
C. Create a VPC instance which will have both the ACL and the security group attached to it and have separate rules for each IP address.
D. Create a VPC instance which will have multiple subnets attached to it and each will have a separate IP address.
Suggested answer: B

Explanation:

A Virtual Private Cloud (VPC) is a virtual network dedicated to the user's AWS account. It enables the user to launch AWS resources into a virtual network that the user has defined. With VPC the user can specify multiple private IP addresses for his instances.

The number of network interfaces and private IP addresses that a user can specify for an instance depends on the instance type. With each network interface, the organization can associate an EIP. This helps when the user wants to host multiple websites on a single EC2 instance by using multiple SSL certificates on a single server and associating each certificate with a specific EIP. It also helps in scenarios such as operating network appliances (for example, firewalls or load balancers) that have multiple private IP addresses for each network interface.

Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/MultipleIP.html
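
A minimal boto3 sketch of option B: add a second network interface to the instance and give it its own Elastic IP, so each SSL certificate can be bound to a distinct address; all resource IDs are placeholders.

import boto3

ec2 = boto3.client("ec2")

# Create a second network interface and attach it to the instance (IDs are placeholders).
eni = ec2.create_network_interface(
    SubnetId="subnet-0abc1234", Groups=["sg-0abc1234"]
)["NetworkInterface"]
ec2.attach_network_interface(
    NetworkInterfaceId=eni["NetworkInterfaceId"],
    InstanceId="i-0abc1234",
    DeviceIndex=1,
)

# Allocate an Elastic IP and associate it with the new interface.
eip = ec2.allocate_address(Domain="vpc")
ec2.associate_address(
    AllocationId=eip["AllocationId"],
    NetworkInterfaceId=eni["NetworkInterfaceId"],
)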
