Amazon SAP-C01 Practice Test - Questions Answers, Page 28
Complete this statement: "When you load your table directly from an Amazon_____ table, you have the option to control the amount of provisioned throughput you consume."

A. RDS
B. DataPipeline
C. DynamoDB
D. S3
Suggested answer: C

Explanation:

When you load your table directly from an Amazon DynamoDB table, you have the option to control the amount of Amazon DynamoDB provisioned throughput you consume.

Reference: http://docs.aws.amazon.com/redshift/latest/dg/t_Loading_tables_with_the_COPY_command.html
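The throughput control the explanation refers to is the COPY command's READRATIO option, which caps the percentage of the DynamoDB table's provisioned read throughput the load may consume. A minimal sketch of assembling such a statement, using made-up table and IAM role names:

```python
# Hypothetical sketch: build a Redshift COPY statement that loads from a
# DynamoDB table while capping provisioned-throughput usage via READRATIO.
# Table and role names are placeholders, not taken from the question.

def build_dynamodb_copy(redshift_table: str, dynamodb_table: str,
                        iam_role_arn: str, read_ratio_percent: int) -> str:
    """Return a COPY command that consumes at most the given percentage
    of the DynamoDB table's provisioned read throughput."""
    if not 0 < read_ratio_percent <= 200:
        # Redshift documents READRATIO values up to 200 percent
        raise ValueError("READRATIO must be between 1 and 200")
    return (
        f"COPY {redshift_table} "
        f"FROM 'dynamodb://{dynamodb_table}' "
        f"IAM_ROLE '{iam_role_arn}' "
        f"READRATIO {read_ratio_percent};"
    )

sql = build_dynamodb_copy("scores", "PlayerScores",
                          "arn:aws:iam::123456789012:role/RedshiftCopy", 50)
print(sql)
```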

A company is creating a sequel for a popular online game. A large number of users from all over the world will play the game within the first week after launch. Currently, the game consists of the following components deployed in a single AWS Region:

Amazon S3 bucket that stores game assets

Amazon DynamoDB table that stores player scores

A solutions architect needs to design a multi-Region solution that will reduce latency, improve reliability, and require the least effort to implement. What should the solutions architect do to meet these requirements?

A. Create an Amazon CloudFront distribution to serve assets from the S3 bucket. Configure S3 Cross-Region Replication. Create a new DynamoDB table in a new Region. Use the new table as a replica target for DynamoDB global tables.
B. Create an Amazon CloudFront distribution to serve assets from the S3 bucket. Configure S3 Same-Region Replication. Create a new DynamoDB table in a new Region. Configure asynchronous replication between the DynamoDB tables by using AWS Database Migration Service (AWS DMS) with change data capture (CDC).
C. Create another S3 bucket in a new Region, and configure S3 Cross-Region Replication between the buckets. Create an Amazon CloudFront distribution and configure origin failover with two origins accessing the S3 buckets in each Region. Configure DynamoDB global tables by enabling Amazon DynamoDB Streams, and add a replica table in a new Region.
D. Create another S3 bucket in the same Region, and configure S3 Same-Region Replication between the buckets. Create an Amazon CloudFront distribution and configure origin failover with two origins accessing the S3 buckets. Create a new DynamoDB table in a new Region. Use the new table as a replica target for DynamoDB global tables.
Suggested answer: D

Explanation:

Reference: https://aws.amazon.com/blogs/publicsector/how-to-meet-business-data-resiliency-s3-cross-region-replication/
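As an illustrative sketch of the S3 Cross-Region Replication piece that several options depend on, here is the configuration dict that boto3's `put_bucket_replication` accepts; the bucket names and role ARN are invented for this example:

```python
# Hypothetical S3 Cross-Region Replication configuration, expressed as the
# dict boto3's s3.put_bucket_replication takes. Names are placeholders.

def build_crr_config(role_arn: str, dest_bucket: str) -> dict:
    return {
        "Role": role_arn,  # IAM role S3 assumes to copy objects
        "Rules": [{
            "ID": "replicate-game-assets",
            "Status": "Enabled",
            "Priority": 1,
            "Filter": {},  # empty filter -> replicate every object
            "DeleteMarkerReplication": {"Status": "Disabled"},
            "Destination": {"Bucket": f"arn:aws:s3:::{dest_bucket}"},
        }],
    }

config = build_crr_config(
    "arn:aws:iam::123456789012:role/s3-crr-role", "game-assets-eu-west-1")
# s3.put_bucket_replication(Bucket="game-assets-us-east-1",
#                           ReplicationConfiguration=config)
```

The destination bucket must live in the other Region and have versioning enabled for replication to work.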

A company wants to migrate its on-premises data center to the AWS Cloud. This includes thousands of virtualized Linux and Microsoft Windows servers, SAN storage, Java and PHP applications with MySQL, and Oracle databases. There are many departmental services hosted either in the same data center or externally. The technical documentation is incomplete and outdated. A solutions architect needs to understand the current environment and estimate the cloud resource costs after the migration. Which tools or services should the solutions architect use to plan the cloud migration? (Choose three.)

A. AWS Application Discovery Service
B. AWS SMS
C. AWS X-Ray
D. AWS Cloud Adoption Readiness Tool (CART)
E. Amazon Inspector
F. AWS Migration Hub
Suggested answer: B, C, F

A company is adding a new approved external vendor that only supports IPv6 connectivity. The company’s backend systems sit in the private subnet of an Amazon VPC. The company uses a NAT gateway to allow these systems to communicate with external vendors over IPv4. Company policy requires systems that communicate with external vendors to use a security group that limits access to only approved external vendors. The virtual private cloud (VPC) uses the default network ACL.

The Systems Operator successfully assigns IPv6 addresses to each of the backend systems. The Systems Operator also updates the outbound security group to include the IPv6 CIDR of the external vendor (destination). The systems within the VPC are able to ping one another successfully over IPv6. However, these systems are unable to communicate with the external vendor. What changes are required to enable communication with the external vendor?

A. Create an IPv6 NAT instance. Add a route for destination 0.0.0.0/0 pointing to the NAT instance.
B. Enable IPv6 on the NAT gateway. Add a route for destination ::/0 pointing to the NAT gateway.
C. Enable IPv6 on the internet gateway. Add a route for destination 0.0.0.0/0 pointing to the IGW.
D. Create an egress-only internet gateway. Add a route for destination ::/0 pointing to the gateway.
Suggested answer: D

Explanation:

Reference: https://docs.aws.amazon.com/vpc/latest/userguide/egress-only-internet-gateway.html
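The route that answer D adds can be sketched as the parameters boto3's `ec2.create_route` would take; the resource IDs below are placeholders:

```python
# Hypothetical sketch of answer D's route: send all outbound IPv6 traffic
# (::/0) through an egress-only internet gateway, which permits
# outbound-initiated connections only. IDs are placeholders.

def build_ipv6_egress_route(route_table_id: str, eigw_id: str) -> dict:
    return {
        "RouteTableId": route_table_id,
        "DestinationIpv6CidrBlock": "::/0",       # all IPv6 destinations
        "EgressOnlyInternetGatewayId": eigw_id,   # not a NAT gateway
    }

params = build_ipv6_egress_route("rtb-0123456789abcdef0",
                                 "eigw-0123456789abcdef0")
# ec2.create_route(**params)
```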

A company is deploying a public-facing global application on AWS using Amazon CloudFront. The application communicates with an external system. A solutions architect needs to ensure the data is secured during end-to-end transit and at rest.

Which combination of steps will satisfy these requirements? (Choose three.)

A. Create a public certificate for the required domain in AWS Certificate Manager and deploy it to CloudFront, an Application Load Balancer, and Amazon EC2 instances.
B. Acquire a public certificate from a third-party vendor and deploy it to CloudFront, an Application Load Balancer, and Amazon EC2 instances.
C. Provision Amazon EBS encrypted volumes using AWS KMS and ensure explicit encryption of data when writing to Amazon EBS.
D. Provision Amazon EBS encrypted volumes using AWS KMS.
E. Use SSL or encrypt data while communicating with the external system using a VPN.
F. Communicate with the external system using plaintext and use the VPN to encrypt the data in transit.
Suggested answer: A, C, E
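The at-rest half of the answer (KMS-encrypted EBS) can be sketched as the parameters boto3's `ec2.create_volume` would take; the Availability Zone and key ARN are invented for illustration:

```python
# Hypothetical sketch: parameters for a KMS-encrypted EBS volume via
# boto3's ec2.create_volume. The KMS key ARN is a placeholder.

def build_encrypted_volume(az: str, size_gib: int, kms_key_arn: str) -> dict:
    return {
        "AvailabilityZone": az,
        "Size": size_gib,
        "VolumeType": "gp3",
        "Encrypted": True,        # data at rest is encrypted
        "KmsKeyId": kms_key_arn,  # customer-managed KMS key
    }

params = build_encrypted_volume(
    "us-east-1a", 100,
    "arn:aws:kms:us-east-1:123456789012:key/1111-2222")
# ec2.create_volume(**params)
```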

A company has an on-premises Microsoft SQL Server database that writes a nightly 200 GB export to a local drive. The company wants to move the backups to more robust cloud storage on Amazon S3. The company has set up a 10 Gbps AWS Direct Connect connection between the on-premises data center and AWS.

Which solution meets these requirements MOST cost-effectively?

A. Create a new S3 bucket. Deploy an AWS Storage Gateway file gateway within the VPC that is connected to the Direct Connect connection. Create a new SMB file share. Write nightly database exports to the new SMB file share.
B. Create an Amazon FSx for Windows File Server Single-AZ file system within the VPC that is connected to the Direct Connect connection. Create a new SMB file share. Write nightly database exports to an SMB file share on the Amazon FSx file system. Enable nightly backups.
C. Create an Amazon FSx for Windows File Server Multi-AZ file system within the VPC that is connected to the Direct Connect connection. Create a new SMB file share. Write nightly database exports to an SMB file share on the Amazon FSx file system. Enable nightly backups.
D. Create a new S3 bucket. Deploy an AWS Storage Gateway volume gateway within the VPC that is connected to the Direct Connect connection. Create a new SMB file share. Write nightly database exports to the new SMB file share on the volume gateway, and automate copies of this data to an S3 bucket.
Suggested answer: B

Explanation:

Reference: https://aws.amazon.com/blogs/storage/accessing-smb-file-shares-remotely-with-amazon-fsx-for-windows-fileserver/

A life sciences company is using a combination of open source tools to manage data analysis workflows and Docker containers running on servers in its on-premises data center to process genomics data. Sequencing data is generated and stored on a local storage area network (SAN), and then the data is processed. The research and development teams are running into capacity issues and have decided to re-architect their genomics analysis platform on AWS to scale based on workload demands and reduce the turnaround time from weeks to days.

The company has a high-speed AWS Direct Connect connection. Sequencers will generate around 200 GB of data for each genome, and individual jobs can take several hours to process the data with ideal compute capacity. The end result will be stored in Amazon S3. The company is expecting 10-15 job requests each day.

Which solution meets these requirements?

A. Use regularly scheduled AWS Snowball Edge devices to transfer the sequencing data into AWS. When AWS receives the Snowball Edge device and the data is loaded into Amazon S3, use S3 events to trigger an AWS Lambda function to process the data.
B. Use AWS Data Pipeline to transfer the sequencing data to Amazon S3. Use S3 events to trigger an Amazon EC2 Auto Scaling group to launch custom-AMI EC2 instances running the Docker containers to process the data.
C. Use AWS DataSync to transfer the sequencing data to Amazon S3. Use S3 events to trigger an AWS Lambda function that starts an AWS Step Functions workflow. Store the Docker images in Amazon Elastic Container Registry (Amazon ECR) and trigger AWS Batch to run the container and process the sequencing data.
D. Use an AWS Storage Gateway file gateway to transfer the sequencing data to Amazon S3. Use S3 events to trigger an AWS Batch job that executes on Amazon EC2 instances running the Docker containers to process the data.
Suggested answer: A
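Several options here hinge on an S3 event triggering a Lambda function. A minimal sketch of such a handler, which pulls the bucket and object key out of the event so a downstream job could process that genome's data (the job-submission call itself is left as a comment):

```python
# Hypothetical S3-event-triggered Lambda handler: extract (bucket, key)
# pairs from the event. Bucket and key names below are invented.

def handler(event: dict, context=None) -> list:
    jobs = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # downstream processing would be kicked off here,
        # e.g. an AWS Batch job submission for this object
        jobs.append((bucket, key))
    return jobs

# Trimmed shape of an S3 put event, for local testing:
sample_event = {"Records": [{"s3": {
    "bucket": {"name": "genomics-raw"},
    "object": {"key": "run42/genome.fastq.gz"}}}]}
print(handler(sample_event))  # -> [('genomics-raw', 'run42/genome.fastq.gz')]
```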

AWS CloudFormation ______ are special actions you use in your template to assign values to properties that are not available until runtime.

A. intrinsic functions
B. properties declarations
C. output functions
D. conditions declarations
Suggested answer: A

Explanation:

AWS CloudFormation intrinsic functions are special actions you use in your template to assign values to properties not available until runtime. Each function is declared with a name enclosed in quotation marks (""), a single colon, and its parameters.

Reference: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-fuctions-structure.html
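As an illustration (the resource and its properties are invented), a template fragment using two common intrinsic functions, Ref and Fn::GetAtt, expressed here as a Python dict rather than YAML:

```python
# A minimal CloudFormation template fragment as a Python dict, showing two
# intrinsic functions: Ref resolves a resource's ID at runtime, and
# Fn::GetAtt resolves a runtime attribute. The resource is hypothetical.

template = {
    "Resources": {
        "WebServer": {
            "Type": "AWS::EC2::Instance",
            "Properties": {"ImageId": "ami-00000000",
                           "InstanceType": "t3.micro"},
        },
    },
    "Outputs": {
        # Instance ID is unknown until the stack is created
        "InstanceId": {"Value": {"Ref": "WebServer"}},
        # Public DNS name likewise only exists at runtime
        "PublicDns": {"Value": {"Fn::GetAtt": ["WebServer", "PublicDnsName"]}},
    },
}
```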

A hedge fund company is developing a new web application to handle trades. Traders around the world will use the application. The application will handle hundreds of thousands of transactions, especially during overlapping work hours between Europe and the United States.

According to the company’s disaster recovery plan, the data that is generated must be replicated to a second AWS Region. Each transaction item will be less than 100 KB in size. The company wants to simplify the CI/CD pipeline as much as possible. Which combination of steps will meet these requirements MOST cost-effectively? (Choose two.)

A. Deploy the application in multiple Regions. Use Amazon Route 53 latency-based routing to route users to the nearest deployment.
B. Provision an Amazon Aurora global database to persist data. Use Amazon ElastiCache to improve response time.
C. Provision an Amazon CloudFront domain with the website as an origin. Restrict access to geographies where the usage is expected.
D. Provision an Amazon DynamoDB global table. Use DynamoDB Accelerator (DAX) to improve response time.
E. Provision an Amazon Aurora multi-master cluster to persist data. Use Amazon ElastiCache to improve response time.
Suggested answer: A, B

Explanation:

Reference: https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html
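The latency-based routing in option A can be sketched as the change batch boto3's `route53.change_resource_record_sets` would take for one regional endpoint; the domain and target names are invented:

```python
# Hypothetical sketch of a Route 53 latency-based record: one such record
# is created per Region, and Route 53 answers with the lowest-latency one.
# Domain, Region, and target are placeholders.

def latency_record(name: str, region: str, target: str) -> dict:
    return {"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": name,
            "Type": "CNAME",
            "SetIdentifier": f"{region}-endpoint",  # must be unique per record
            "Region": region,                       # enables latency routing
            "TTL": 60,
            "ResourceRecords": [{"Value": target}],
        },
    }]}

batch = latency_record("trade.example.com", "eu-west-1",
                       "alb-eu.example.com")
# route53.change_resource_record_sets(HostedZoneId="Z...", ChangeBatch=batch)
```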

Auto Scaling requests are signed with a _________ signature calculated from the request and the user’s private key.

A. SSL
B. AES-256
C. HMAC-SHA1
D. X.509
Suggested answer: C
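How an HMAC-SHA1 request signature works in general can be shown with the Python standard library; the string-to-sign below is illustrative, not the exact canonicalization the legacy query API used:

```python
# General HMAC-SHA1 signing, as used by legacy AWS query-API signatures:
# hash the string-to-sign with the secret key, then base64-encode.
# The string-to-sign format here is a simplified illustration.

import base64
import hashlib
import hmac

def sign(secret_key: bytes, string_to_sign: bytes) -> str:
    digest = hmac.new(secret_key, string_to_sign, hashlib.sha1).digest()
    return base64.b64encode(digest).decode("ascii")

signature = sign(b"my-secret-key",
                 b"GET\nautoscaling.amazonaws.com\n/\nAction=DescribeAutoScalingGroups")
print(signature)
```

The server recomputes the same HMAC with its copy of the secret key and accepts the request only if the signatures match.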
Total 906 questions