Amazon SAP-C01 Practice Test - Questions Answers, Page 49

A company’s application is increasingly popular and is experiencing latency because of high-volume reads on the database server. The service has the following properties:

A highly available REST API hosted in one region using an Application Load Balancer (ALB) with Auto Scaling.
A MySQL database hosted on an Amazon EC2 instance in a single Availability Zone.

The company wants to reduce latency, increase in-region database read performance, and have multi-region disaster recovery capabilities that can perform a live recovery automatically without any data or performance loss (HA/DR).

Which deployment strategy will meet these requirements?

A.
Use AWS CloudFormation StackSets to deploy the API layer in two regions. Migrate the database to an Amazon Aurora with MySQL database cluster with multiple read replicas in one region and a read replica in a different region than the source database cluster. Use Amazon Route 53 health checks to trigger a DNS failover to the standby region if the health checks to the primary load balancer fail. In the event of Route 53 failover, promote the cross-region database replica to be the master and build out new read replicas in the standby region.
B.
Use Amazon ElastiCache for Redis Multi-AZ with an automatic failover to cache the database read queries. Use AWS OpsWorks to deploy the API layer, cache layer, and existing database layer in two regions. In the event of failure, use Amazon Route 53 health checks on the database to trigger a DNS failover to the standby region if the health checks in the primary region fail. Back up the MySQL database frequently, and in the event of a failure in an active region, copy the backup to the standby region and restore the standby database.
C.
Use AWS CloudFormation StackSets to deploy the API layer in two regions. Add the database to an Auto Scaling group. Add a read replica to the database in the second region. Use Amazon Route 53 health checks on the database to trigger a DNS failover to the standby region if the health checks in the primary region fail. Promote the cross-region database replica to be the master and build out new read replicas in the standby region.
D.
Use Amazon ElastiCache for Redis Multi-AZ with an automatic failover to cache the database read queries. Use AWS OpsWorks to deploy the API layer, cache layer, and existing database layer in two regions. Use Amazon Route 53 health checks on the ALB to trigger a DNS failover to the standby region if the health checks in the primary region fail. Back up the MySQL database frequently, and in the event of a failure in an active region, copy the backup to the standby region and restore the standby database.
Suggested answer: A
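For illustration only, here is a minimal boto3 sketch of the Route 53 failover piece of answer A: an UPSERT of a PRIMARY/SECONDARY alias record pair pointing at the two regional ALBs. The hosted zone ID, record name, ALB DNS names, ALB canonical zone IDs, and health check ID are all hypothetical placeholders.

import boto3

route53 = boto3.client("route53")

HOSTED_ZONE_ID = "Z0000000EXAMPLE"   # placeholder public hosted zone
RECORD_NAME = "api.example.com."

def failover_record(set_id, role, alb_dns, alb_zone_id, health_check_id=None):
    # Build one half of a Route 53 failover pair as an alias to an ALB.
    record = {
        "Name": RECORD_NAME,
        "Type": "A",
        "SetIdentifier": set_id,
        "Failover": role,                 # "PRIMARY" or "SECONDARY"
        "AliasTarget": {
            "HostedZoneId": alb_zone_id,  # the ALB's canonical hosted zone ID
            "DNSName": alb_dns,
            "EvaluateTargetHealth": True,
        },
    }
    if health_check_id:
        record["HealthCheckId"] = health_check_id
    return {"Action": "UPSERT", "ResourceRecordSet": record}

route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={"Changes": [
        failover_record("primary", "PRIMARY",
                        "primary-alb.us-east-1.elb.amazonaws.com",
                        "Z35SXDOTRQ7X7K",  # example zone ID for us-east-1 ALBs
                        health_check_id="placeholder-health-check-id"),
        failover_record("standby", "SECONDARY",
                        "standby-alb.us-west-2.elb.amazonaws.com",
                        "Z1H1FL5HABSF5"),  # example zone ID for us-west-2 ALBs
    ]},
)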

In Amazon ElastiCache, which of the following statements is correct?

A.
When you launch an ElastiCache cluster into an Amazon VPC private subnet, every cache node is assigned a public IP address within that subnet.
B.
You cannot use ElastiCache in a VPC that is configured for dedicated instance tenancy.
C.
If your AWS account supports only the EC2-VPC platform, ElastiCache will never launch your cluster in a VPC.
D.
ElastiCache is not fully integrated with Amazon Virtual Private Cloud (VPC).
Suggested answer: B

Explanation:

The VPC must allow non-dedicated EC2 instances. You cannot use ElastiCache in a VPC that is configured for dedicated instance tenancy.

Reference: http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/AmazonVPC.EC.html
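As a hedged illustration of this constraint, the boto3 snippet below checks a VPC's tenancy attribute before you try to place an ElastiCache subnet group in it; the VPC ID is a placeholder.

import boto3

ec2 = boto3.client("ec2")

# Look up the tenancy attribute of the candidate VPC (placeholder ID).
vpc = ec2.describe_vpcs(VpcIds=["vpc-0123456789abcdef0"])["Vpcs"][0]

if vpc["InstanceTenancy"] == "dedicated":
    raise SystemExit("This VPC uses dedicated tenancy; ElastiCache cannot be used here.")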

A company has an application written using an in-house software framework. The framework installation takes 30 minutes and is performed with a user data script. The company's developers deploy changes to the application frequently, and the framework installation is becoming a bottleneck in this process.

Which of the following would speed up this process?

A.
Create a pipeline to build a custom AMI with the framework installed and use this AMI as a baseline for application deployments.
B.
Employ a user data script to install the framework but compress the installation files to make them smaller.
C.
Create a pipeline to parallelize the installation tasks and call this pipeline from a user data script.
D.
Configure an AWS OpsWorks cookbook that installs the framework instead of employing user data. Use this cookbook as a base for all deployments.
Suggested answer: C

Explanation:

Reference: https://aws.amazon.com/codepipeline/features/?nc=sn&loc=2
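For context, option A's golden-AMI pattern can be sketched in a few lines of boto3: bake an image from a builder instance that already has the framework installed, so later deployments launch from the image instead of re-running the 30-minute install. The instance ID and names are hypothetical.

import boto3

ec2 = boto3.client("ec2")

# Create an AMI from a "builder" instance (placeholder ID) that already
# has the in-house framework installed.
resp = ec2.create_image(
    InstanceId="i-0123456789abcdef0",
    Name="app-framework-baseline-v1",
    Description="Base image with the in-house framework pre-installed",
    NoReboot=False,  # reboot for a consistent filesystem snapshot
)

image_id = resp["ImageId"]
ec2.get_waiter("image_available").wait(ImageIds=[image_id])
print("Baseline AMI ready for deployments:", image_id)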

During a security audit of a Service team’s application, a Solutions Architect discovers that a username and password for an Amazon RDS database and a set of AWS IAM user credentials can be viewed in the AWS Lambda function code. The Lambda function uses the username and password to run queries on the database, and it uses the IAM credentials to call AWS services in a separate management account. The Solutions Architect is concerned that the credentials could grant inappropriate access to anyone who can view the Lambda code. The management account and the Service team’s account are in separate AWS Organizations organizational units (OUs).

Which combination of changes should the Solutions Architect make to improve the solution’s security? (Choose two.)

A.
Configure Lambda to assume a role in the management account with appropriate access to AWS.
B.
Configure Lambda to use the stored database credentials in AWS Secrets Manager and enable automatic rotation.
C.
Create a Lambda function to rotate the credentials every hour by deploying a new Lambda version with the updated credentials.
D.
Use an SCP on the management account’s OU to prevent IAM users from accessing resources in the Service team’s account.
E.
Enable AWS Shield Advanced on the management account to shield sensitive resources from unauthorized IAM access.
Suggested answer: B, D
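A minimal sketch of the pattern behind answer B, assuming a secret named service-team/rds-mysql whose value is JSON with username and password keys (both names are hypothetical). The Lambda handler fetches the credentials at runtime instead of embedding them in code, and automatic rotation keeps the stored values current.

import json

import boto3

secrets = boto3.client("secretsmanager")

def get_db_credentials(secret_id="service-team/rds-mysql"):
    # Fetch the current version of the secret; rotation updates it in place,
    # so nothing sensitive ever lives in the function code or its config.
    value = json.loads(secrets.get_secret_value(SecretId=secret_id)["SecretString"])
    return value["username"], value["password"]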

In Amazon Cognito, your mobile app authenticates with the Identity Provider (IdP) using the provider's SDK. Once the end user is authenticated with the IdP, the OAuth or OpenID Connect token returned from the IdP is passed by your app to Amazon Cognito, which returns a new _____ for the user and a set of temporary, limited-privilege AWS credentials.

A.
Cognito Key Pair
B.
Cognito API
C.
Cognito ID
D.
Cognito SDK
Suggested answer: C

Explanation:

Your mobile app authenticates with the identity provider (IdP) using the provider's SDK. Once the end user is authenticated with the IdP, the OAuth or OpenID Connect token returned from the IdP is passed by your app to Amazon Cognito, which returns a new Cognito ID for the user and a set of temporary, limited-privilege AWS credentials.

Reference: http://aws.amazon.com/cognito/faqs/
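A hedged boto3 sketch of that exchange follows; the identity pool ID and the IdP token are placeholders, and Google is used only as an example provider.

import boto3

cognito = boto3.client("cognito-identity", region_name="us-east-1")

# Token previously obtained from the IdP via its own SDK (placeholder).
logins = {"accounts.google.com": "<OIDC-id-token-from-the-IdP>"}

# Step 1: exchange the IdP token for a new Cognito ID.
identity = cognito.get_id(
    IdentityPoolId="us-east-1:11111111-2222-3333-4444-555555555555",
    Logins=logins,
)

# Step 2: obtain temporary, limited-privilege AWS credentials for that ID.
creds = cognito.get_credentials_for_identity(
    IdentityId=identity["IdentityId"],
    Logins=logins,
)["Credentials"]  # AccessKeyId, SecretKey, SessionToken, Expiration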

A large global company wants to migrate a stateless mission-critical application to AWS. The application is based on IBM WebSphere (application and integration middleware), IBM MQ (messaging middleware), and IBM DB2 (database software) on a z/OS operating system.

How should the Solutions Architect migrate the application to AWS?

A.
Re-host WebSphere-based applications on Amazon EC2 behind a load balancer with Auto Scaling. Re-platform the IBM MQ to an Amazon EC2-based MQ. Re-platform the z/OS-based DB2 to Amazon RDS DB2.
B.
Re-host WebSphere-based applications on Amazon EC2 behind a load balancer with Auto Scaling. Re-platform the IBM MQ to an Amazon MQ. Re-platform z/OS-based DB2 to Amazon EC2-based DB2.
C.
Orchestrate and deploy the application by using AWS Elastic Beanstalk. Re-platform the IBM MQ to Amazon SQS. Re-platform the z/OS-based DB2 to Amazon RDS DB2.
D.
Use the AWS Server Migration Service to migrate the IBM WebSphere and IBM DB2 to an Amazon EC2-based solution. Re-platform the IBM MQ to an Amazon MQ.
Suggested answer: B

Explanation:

Reference:

https://aws.amazon.com/blogs/database/aws-database-migration-service-and-aws-schema-conversion-tool-now-support-ibm-db2-as-a-source/

https://aws.amazon.com/quickstart/architecture/ibm-mq/

A company uses Amazon S3 to store documents that should be accessible only to an Amazon EC2 instance in a specific virtual private cloud (VPC). The company fears that a malicious insider with access to this instance could also set up an EC2 instance in another VPC to access these documents.

Which of the following solutions will provide the required protection?

A.
Use an S3 VPC endpoint and an S3 bucket policy to limit access to this VPC endpoint.
B.
Use EC2 instance profiles and an S3 bucket policy to limit access to the role attached to the instance profile.
C.
Use S3 client-side encryption and store the key in the instance metadata.
D.
Use S3 server-side encryption and protect the key with an encryption context.
Suggested answer: B
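As a rough sketch of answer B (an assumption-laden example, not an official solution), the policy below denies every S3 action on the bucket unless the caller is the role attached to the instance profile. Bucket name, account ID, and role name are placeholders; note that for assumed-role sessions the aws:PrincipalArn key resolves to the role's ARN.

import json

import boto3

s3 = boto3.client("s3")

BUCKET = "example-docs-bucket"                                # placeholder
ROLE_ARN = "arn:aws:iam::111122223333:role/DocsInstanceRole"  # placeholder

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyAllExceptInstanceRole",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
            f"arn:aws:s3:::{BUCKET}",
            f"arn:aws:s3:::{BUCKET}/*",
        ],
        "Condition": {"ArnNotEquals": {"aws:PrincipalArn": ROLE_ARN}},
    }],
}

s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))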

When you put objects in Amazon S3, what is the indication that an object was successfully stored?

A.
An HTTP 200 result code and MD5 checksum, taken together, indicate that the operation was successful.
B.
Amazon S3 is engineered for 99.999999999% durability. Therefore there is no need to confirm that data was inserted.
C.
A success code is inserted into the S3 object metadata.
D.
Each S3 account has a special bucket named _s3_logs. Success codes are written to this bucket with a timestamp and checksum.
Suggested answer: A
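A small boto3 sketch of that verification: for a single-part PUT without SSE-KMS, the ETag in the response is the hex MD5 digest of the payload, so the two can be compared directly. Bucket and key are placeholders.

import hashlib

import boto3

s3 = boto3.client("s3")
body = b"hello, durable world"

resp = s3.put_object(Bucket="example-bucket", Key="greeting.txt", Body=body)

# HTTP 200 plus a matching MD5/ETag indicates the object was stored intact.
assert resp["ResponseMetadata"]["HTTPStatusCode"] == 200
assert resp["ETag"].strip('"') == hashlib.md5(body).hexdigest()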

You would like to create a mirror image of your production environment in another region for disaster recovery purposes. Which of the following AWS resources do not need to be recreated in the second region? (Choose two.)

A.
Route 53 Record Sets
B.
IAM Roles
C.
Elastic IP Addresses (EIP)
D.
EC2 Key Pairs
E.
Launch configurations
F.
Security Groups
Suggested answer: A, B

Explanation:

As the referenced documentation indicates, new Elastic IP addresses must be allocated in the second region; the existing ones cannot be reused. Elastic IP addresses are static IP addresses designed for dynamic cloud computing. Unlike traditional static IP addresses, however, Elastic IP addresses enable you to mask instance or Availability Zone failures by programmatically remapping your public IP addresses to instances in your account in a particular region. For DR, you can also pre-allocate some IP addresses for the most critical systems so that their IP addresses are already known before disaster strikes. This can simplify the execution of the DR plan.

Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/resources.html
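As a hedged sketch of that pre-allocation step, the snippet below reserves Elastic IPs in an assumed DR region (us-west-2) for two hypothetical critical systems, so the addresses are known before any failover.

import boto3

ec2_dr = boto3.client("ec2", region_name="us-west-2")  # assumed DR region

for system in ("bastion", "mq-broker"):  # hypothetical critical systems
    addr = ec2_dr.allocate_address(
        Domain="vpc",
        TagSpecifications=[{
            "ResourceType": "elastic-ip",
            "Tags": [{"Key": "dr-role", "Value": system}],
        }],
    )
    # Record these in the DR runbook; they stay reserved until released.
    print(system, "->", addr["PublicIp"], addr["AllocationId"])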

A customer has established an AWS Direct Connect connection to AWS. The link is up and routes are being advertised from the customer's end; however, the customer is unable to connect from EC2 instances inside its VPC to servers residing in its datacenter.

Which of the following options provide a viable solution to remedy this situation? (Choose two.)

A.
Add a route to the route table with an IPsec VPN connection as the target.
B.
Enable route propagation to the virtual private gateway (VGW).
C.
Enable route propagation to the customer gateway (CGW).
D.
Modify the route table of all instances using the 'route' command.
E.
Modify the instances' VPC subnet route table by adding a route back to the customer's on-premises environment.
Suggested answer: B, E
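A minimal boto3 sketch of answers B and E; the route table ID, virtual private gateway ID, and on-premises CIDR are placeholders.

import boto3

ec2 = boto3.client("ec2")

ROUTE_TABLE_ID = "rtb-0123456789abcdef0"  # the subnet's route table (placeholder)
VGW_ID = "vgw-0123456789abcdef0"          # virtual private gateway (placeholder)

# Answer B: propagate the BGP routes learned over Direct Connect into the
# route table automatically.
ec2.enable_vgw_route_propagation(RouteTableId=ROUTE_TABLE_ID, GatewayId=VGW_ID)

# Answer E: alternatively, add a static route back to the datacenter.
ec2.create_route(
    RouteTableId=ROUTE_TABLE_ID,
    DestinationCidrBlock="10.50.0.0/16",  # assumed on-premises CIDR
    GatewayId=VGW_ID,
)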