Amazon SAP-C01 Practice Test - Questions Answers, Page 29

A company collects a steady stream of 10 million data records from 100,000 sources each day. These records are written to an Amazon RDS MySQL DB. A query must produce the daily average of a data source over the past 30 days. There are twice as many reads as writes. Queries to the collected data are for one source ID at a time.

How can the Solutions Architect improve the reliability and cost effectiveness of this solution?

A. Use Amazon Aurora with MySQL in a Multi-AZ mode. Use four additional read replicas.
B. Use Amazon DynamoDB with the source ID as the partition key and the timestamp as the sort key. Use a Time to Live (TTL) to delete data after 30 days.
C. Use Amazon DynamoDB with the source ID as the partition key. Use a different table each day.
D. Ingest data into Amazon Kinesis using a retention period of 30 days. Use AWS Lambda to write data records to Amazon ElastiCache for read access.
Suggested answer: B

Explanation:

Reference: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Introduction.html
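The table design in answer B can be sketched in a few lines. This is an illustrative sketch, not code from the question: the attribute names (`source_id`, `timestamp`, `ttl`) are assumptions, and DynamoDB's TTL feature expects the expiry attribute to hold an epoch-seconds number.

```python
import time

TTL_DAYS = 30  # retention window from the question

def make_record(source_id, value, now=None):
    """Build a DynamoDB-style item: source ID as the partition key,
    timestamp as the sort key, and a TTL attribute set 30 days out."""
    now = int(now if now is not None else time.time())
    return {
        "source_id": source_id,             # partition key
        "timestamp": now,                   # sort key
        "value": value,
        "ttl": now + TTL_DAYS * 24 * 3600,  # DynamoDB TTL: epoch seconds
    }

record = make_record("sensor-42", 19.5, now=1_700_000_000)
print(record["ttl"] - record["timestamp"])  # 2592000 seconds = 30 days
```

A query for one source ID over the past 30 days then becomes a single-partition `Query` on `source_id` with a range condition on `timestamp`, and TTL deletes expired items without consuming write capacity.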

Identify a true statement about using an IAM role to grant permissions to applications running on Amazon EC2 instances.

A. When AWS credentials are rotated, developers have to update only the root Amazon EC2 instance that uses their credentials.
B. When AWS credentials are rotated, developers have to update only the Amazon EC2 instance on which the password policy was applied and which uses their credentials.
C. When AWS credentials are rotated, you don't have to manage credentials and you don't have to worry about long-term security risks.
D. When AWS credentials are rotated, you must manage credentials and you should consider precautions for long-term security risks.
Suggested answer: C

Explanation:

Using IAM roles to grant permissions to applications that run on EC2 instances requires a bit of extra configuration. Because role credentials are temporary and rotated automatically, you don't have to manage credentials, and you don't have to worry about long-term security risks.

Reference: http://docs.aws.amazon.com/IAM/latest/UserGuide/role-usecase-ec2app.html
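The temporary credentials behind answer C are served by the EC2 instance metadata service and rotated automatically. The sketch below parses a sample of that JSON document offline (all values are fake) to show the shape of what an application, or an SDK on its behalf, retrieves and refreshes.

```python
import json
from datetime import datetime, timezone

# Sample of the JSON the EC2 instance metadata service returns at
# /latest/meta-data/iam/security-credentials/<role-name> (values are fake).
sample = """{
  "AccessKeyId": "ASIAEXAMPLE",
  "SecretAccessKey": "example-secret",
  "Token": "example-session-token",
  "Expiration": "2024-01-01T12:00:00Z"
}"""

creds = json.loads(sample)

def is_expired(creds, now):
    """True once the temporary credentials have passed their Expiration."""
    expiry = datetime.fromisoformat(creds["Expiration"].replace("Z", "+00:00"))
    return now >= expiry

print(is_expired(creds, datetime(2024, 1, 1, 13, 0, tzinfo=timezone.utc)))  # True
```

Because the SDKs fetch and refresh these credentials transparently, no long-term keys are ever stored on the instance, which is exactly the point of answer C.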

A company is planning to migrate an application from on-premises to AWS. The application currently uses an Oracle database and the company can tolerate a brief downtime of 1 hour when performing the switch to the new infrastructure. As part of the migration, the database engine will be changed to MySQL. A Solutions Architect needs to determine which AWS services can be used to perform the migration while minimizing the amount of work and time required. Which of the following will meet the requirements?

A. Use AWS SCT to generate the schema scripts and apply them on the target prior to migration. Use AWS DMS to analyze the current schema and provide a recommendation for the optimal database engine. Then, use AWS DMS to migrate to the recommended engine. Use AWS SCT to identify what embedded SQL code in the application can be converted and what has to be done manually.
B. Use AWS SCT to generate the schema scripts and apply them on the target prior to migration. Use AWS DMS to begin moving data from the on-premises database to AWS. After the initial copy, continue to use AWS DMS to keep the databases in sync until cutting over to the new database. Use AWS SCT to identify what embedded SQL code in the application can be converted and what has to be done manually.
C. Use AWS DMS to help identify the best target deployment between installing the database engine on Amazon EC2 directly or moving to Amazon RDS. Then, use AWS DMS to migrate to the platform. Use AWS Application Discovery Service to identify what embedded SQL code in the application can be converted and what has to be done manually.
D. Use AWS DMS to begin moving data from the on-premises database to AWS. After the initial copy, continue to use AWS DMS to keep the databases in sync until cutting over to the new database. Use AWS Application Discovery Service to identify what embedded SQL code in the application can be converted and what has to be done manually.
Suggested answer: B

A user has launched a dedicated EBS-backed instance with EC2. You are curious where the EBS volume for this instance will be created. Which statement is correct about the EBS volume's creation?

A. The EBS volume will not be created on the same tenant hardware assigned to the dedicated instance
B. AWS does not allow a dedicated EBS-backed instance launch
C. The EBS volume will be created on the same tenant hardware assigned to the dedicated instance
D. The user can specify where the EBS volume will be created
Suggested answer: A

Explanation:

The dedicated instances are Amazon EC2 instances that run in a Virtual Private Cloud (VPC) on hardware that is dedicated to a single customer. When a user launches an Amazon EBS-backed dedicated instance, the EBS volume does not run on single-tenant hardware.

Reference: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/dedicated-instance.html

You require the ability to analyze a customer's clickstream data on a website so they can do behavioral analysis. Your customer needs to know what sequence of pages and ads their customers clicked on. This data will be used in real time to modify the page layouts as customers click through the site to increase stickiness and advertising click-through. Which option meets the requirements for capturing and analyzing this data?

A. Log clicks in weblogs by URL store to Amazon S3, and then analyze with Elastic MapReduce
B. Push web clicks by session to Amazon Kinesis and analyze behavior using Kinesis workers
C. Write click events directly to Amazon Redshift and then analyze with SQL
D. Publish web clicks by session to an Amazon SQS queue then periodically drain these events to Amazon RDS and analyze with SQL.
Suggested answer: B

Explanation:

Reference: http://www.slideshare.net/AmazonWebServices/aws-webcast-introduction-to-amazon-kinesis
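Why answer B preserves the click sequence: Kinesis routes each record by the MD5 hash of its partition key, so using the session ID as the partition key keeps all of a session's clicks on one shard, in order. A minimal simulation of that routing, with an illustrative shard count and key names:

```python
import hashlib

NUM_SHARDS = 4  # illustrative stream size

def shard_for(partition_key, num_shards=NUM_SHARDS):
    """Mimic Kinesis routing: MD5 of the partition key, read as a
    128-bit integer, mapped onto equal hash-key ranges (one per shard)."""
    h = int(hashlib.md5(partition_key.encode()).hexdigest(), 16)
    return h * num_shards // 2**128

# All clicks for one session land on the same shard, preserving order.
clicks = [("session-abc", "/home"), ("session-abc", "/ad/42"), ("session-xyz", "/home")]
for session, page in clicks:
    print(session, page, "-> shard", shard_for(session))
```

Kinesis workers (consumers) then read each shard in sequence and can update page layouts in near real time, which the batch-oriented options A, C, and D cannot do.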

A company has a new security policy. The policy requires the company to log any event that retrieves data from Amazon S3 buckets. The company must save these audit logs in a dedicated S3 bucket. The company created the audit logs S3 bucket in an AWS account that is designated for centralized logging. The S3 bucket has a bucket policy that allows write-only cross-account access. A solutions architect must ensure that all S3 object-level access is being logged for current S3 buckets and future S3 buckets. Which solution will meet these requirements?

A. Enable server access logging for all current S3 buckets. Use the audit logs S3 bucket as a destination for audit logs.
B. Enable replication between all current S3 buckets and the audit logs S3 bucket. Enable S3 Versioning in the audit logs S3 bucket.
C. Configure S3 Event Notifications for all current S3 buckets to invoke an AWS Lambda function every time objects are accessed. Store Lambda logs in the audit logs S3 bucket.
D. Enable AWS CloudTrail, and use the audit logs S3 bucket to store logs. Enable data event logging for S3 event sources, current S3 buckets, and future S3 buckets.
Suggested answer: D

Explanation:

Reference: https://docs.aws.amazon.com/awscloudtrail/latest/userguide/best-practices-security.html
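For answer D, object-level logging is configured with a CloudTrail event selector. The fragment below shows the selector shape accepted by the CloudTrail `PutEventSelectors` API; specifying `arn:aws:s3` with no bucket name covers all current and future buckets in the account (the trail itself and its destination bucket are assumed to exist already).

```python
# Event selector (per the CloudTrail PutEventSelectors API) that logs
# S3 object-level ("data") events for every current and future bucket:
# the bare prefix "arn:aws:s3" matches all buckets.
event_selectors = [
    {
        "ReadWriteType": "All",
        "IncludeManagementEvents": True,
        "DataResources": [
            {"Type": "AWS::S3::Object", "Values": ["arn:aws:s3"]}
        ],
    }
]
```

Attached to a trail (for example with `aws cloudtrail put-event-selectors`), this captures GetObject and similar data events, which server access logging (A) delivers only on a best-effort basis and options B and C do not capture at all.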

An organization is setting up a web application with the JEE stack. The application uses the JBoss app server and MySQL DB. The application has a logging module that logs all activities whenever a business function of the JEE application is called. The logging activity takes some time due to the large size of the log file.

If the organization wants to set up a scalable infrastructure, which of the options below will help achieve this?

A. Host the log files on EBS with PIOPS, which will have higher I/O.
B. Host logging and the app server on separate servers such that they are both in the same zone.
C. Host logging and the app server on the same instance so that the network latency will be shorter.
D. Create a separate module for logging and, using SQS, compartmentalize the module such that all calls to logging are asynchronous.
Suggested answer: D

Explanation:

The organization can always launch multiple EC2 instances in the same region across multiple AZs for HA and DR. AWS architecture practice recommends compartmentalizing functionality so that components can run in parallel without affecting the performance of the main application. In this scenario, logging takes a long time due to the large size of the log file. Thus, it is recommended that the organization separate logging into its own module and make asynchronous calls to it. This way the application can scale as required, and its performance will not bear the impact of logging.

Reference: http://www.awsarchitectureblog.com/2014/03/aws-and-compartmentalization.html
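The decoupling in answer D can be simulated in-process with a queue and a worker thread standing in for SQS and the separate logging module (names and messages here are illustrative): the business function enqueues and returns immediately, and the worker absorbs the slow write.

```python
import queue
import threading

log_queue = queue.Queue()  # stand-in for the SQS queue in answer D

def business_function(event):
    """Main application path: enqueue the log entry and return
    immediately instead of waiting for the slow log write."""
    log_queue.put(f"processed {event}")
    return "ok"

def log_worker(sink):
    """Separate logging module: drains the queue asynchronously."""
    while True:
        msg = log_queue.get()
        if msg is None:  # shutdown sentinel
            break
        sink.append(msg)  # stand-in for the slow write to the log file

sink = []
worker = threading.Thread(target=log_worker, args=(sink,))
worker.start()
for e in ("order-1", "order-2"):
    business_function(e)
log_queue.put(None)
worker.join()
print(sink)  # ['processed order-1', 'processed order-2']
```

With SQS in place of the in-process queue, the logging consumers can scale out independently of the app servers, which is what makes the design scalable.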

In Amazon Cognito what is a silent push notification?

A. It is a push message that is received by your application on a user's device that will not be seen by the user.
B. It is a push message that is received by your application on a user's device that will return the user's geolocation.
C. It is a push message that is received by your application on a user's device that will not be heard by the user.
D. It is a push message that is received by your application on a user's device that will return the user's authentication credentials.
Suggested answer: A

Explanation:

Amazon Cognito uses the Amazon Simple Notification Service (SNS) to send silent push notifications to devices. A silent push notification is a push message that is received by your application on a user's device that will not be seen by the user.

Reference: http://aws.amazon.com/cognito/faqs/

Identify a correct statement about the expiration date of the "Letter of Authorization and Connecting Facility Assignment (LOA-CFA)," which lets you complete the Cross Connect step of setting up your AWS Direct Connect.

A. If the cross connect is not completed within 90 days, the authority granted by the LOA-CFA expires.
B. If the virtual interface is not created within 72 days, the LOA-CFA becomes outdated.
C. If the cross connect is not completed within a user-defined time, the authority granted by the LOA-CFA expires.
D. If the cross connect is not completed within the specified duration from the appropriate provider, the LOA-CFA expires.
Suggested answer: A

Explanation:

An AWS Direct Connect location provides access to AWS in the region it is associated with. You can establish connections with AWS Direct Connect locations in multiple regions, but a connection in one region does not provide connectivity to other regions. Note: If the cross connect is not completed within 90 days, the authority granted by the LOA-CFA expires.

Reference: http://docs.aws.amazon.com/directconnect/latest/UserGuide/Colocation.html

For AWS CloudFormation, which stack state refuses UpdateStack calls?

A. UPDATE_ROLLBACK_FAILED
B. UPDATE_ROLLBACK_COMPLETE
C. UPDATE_COMPLETE
D. CREATE_COMPLETE
Suggested answer: A

Explanation:

You cannot update a stack that is in the UPDATE_ROLLBACK_FAILED state. However, you can continue rolling it back to return it to a working state (UPDATE_ROLLBACK_COMPLETE), after which you can try the update again.

Reference: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks-continueupdaterollback.html
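A small lookup table makes the state logic concrete. The states and verdicts below follow the CloudFormation documentation cited above; the helper name is illustrative.

```python
# Illustrative subset of stack states and whether UpdateStack is accepted.
UPDATABLE = {
    "CREATE_COMPLETE": True,
    "UPDATE_COMPLETE": True,
    "UPDATE_ROLLBACK_COMPLETE": True,
    "UPDATE_ROLLBACK_FAILED": False,  # must ContinueUpdateRollback first
}

def can_update(state):
    """True if a stack in this state accepts UpdateStack calls."""
    return UPDATABLE.get(state, False)

print(can_update("UPDATE_ROLLBACK_FAILED"))    # False
print(can_update("UPDATE_ROLLBACK_COMPLETE"))  # True
```

In practice, recovery from UPDATE_ROLLBACK_FAILED is `aws cloudformation continue-update-rollback --stack-name <name>`, which moves the stack to UPDATE_ROLLBACK_COMPLETE so updates are accepted again.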

Total 906 questions