
Amazon SAP-C01 Practice Test - Questions Answers, Page 81


Cognito Sync is an AWS service that you can use to synchronize user profile data across mobile devices without requiring your own backend. When the device is online, you can synchronize data. If you also set up push sync, what does it allow you to do?

A. Notify other devices that a user profile is available across multiple devices
B. Synchronize user profile data with less latency
C. Notify other devices immediately that an update is available
D. Synchronize online data faster
Suggested answer: C

Explanation:

Cognito Sync is an AWS service that you can use to synchronize user profile data across mobile devices without requiring your own backend. When the device is online, you can synchronize data, and if you have also set up push sync, notify other devices immediately that an update is available.

Reference: http://docs.aws.amazon.com/cognito/devguide/sync/
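
As an illustrative sketch only, push sync can be enabled on an identity pool with boto3; the identity pool ID, SNS platform application ARN, and IAM role ARN below are hypothetical placeholders, not values from this question.

import boto3

# Hypothetical identifiers; replace with values from your own identity pool setup.
IDENTITY_POOL_ID = "us-east-1:11111111-2222-3333-4444-555555555555"
PLATFORM_APP_ARN = "arn:aws:sns:us-east-1:123456789012:app/GCM/MyMobileApp"
PUSH_SYNC_ROLE_ARN = "arn:aws:iam::123456789012:role/CognitoPushSyncRole"

cognito_sync = boto3.client("cognito-sync", region_name="us-east-1")

# Enable push sync so other devices are notified immediately when a dataset changes.
cognito_sync.set_identity_pool_configuration(
    IdentityPoolId=IDENTITY_POOL_ID,
    PushSync={
        "ApplicationArns": [PLATFORM_APP_ARN],
        "RoleArn": PUSH_SYNC_ROLE_ARN,
    },
)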

A company hosts a web application on AWS in the us-east-1 Region. The application servers are distributed across three Availability Zones behind an Application Load Balancer. The database is a MySQL database hosted on an Amazon EC2 instance. A solutions architect needs to design a cross-Region data recovery solution using AWS services with an RTO of less than 5 minutes and an RPO of less than 1 minute. The solutions architect is deploying application servers in us-west-2, and has configured Amazon Route 53 health checks and DNS failover to us-west-2.

Which additional step should the solutions architect take?

A. Migrate the database to an Amazon RDS for MySQL instance with a cross-Region read replica in us-west-2.
B. Migrate the database to an Amazon Aurora global database with the primary in us-east-1 and the secondary in us-west-2.
C. Migrate the database to an Amazon RDS for MySQL instance with a Multi-AZ deployment.
D. Create a MySQL standby database on an Amazon EC2 instance in us-west-2.
Suggested answer: B
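
As a rough sketch (all identifiers are hypothetical), the migrated Aurora cluster in us-east-1 could be promoted to a global database and a secondary cluster added in us-west-2 with boto3; a production setup would also add DB instances and choose a matching engine version.

import boto3

rds_east = boto3.client("rds", region_name="us-east-1")
rds_west = boto3.client("rds", region_name="us-west-2")

# Wrap the existing primary cluster in a global database.
rds_east.create_global_cluster(
    GlobalClusterIdentifier="sales-global",
    SourceDBClusterIdentifier="arn:aws:rds:us-east-1:123456789012:cluster:sales-primary",
)

# Add a read-only secondary cluster in us-west-2; Aurora's storage-level
# replication typically keeps replica lag around one second, supporting the
# sub-minute RPO, and the secondary can be promoted quickly for the RTO target.
rds_west.create_db_cluster(
    DBClusterIdentifier="sales-secondary",
    Engine="aurora-mysql",
    GlobalClusterIdentifier="sales-global",
)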

A Solutions Architect must update an application environment within AWS Elastic Beanstalk using a blue/green deployment methodology. The Solutions Architect creates an environment that is identical to the existing application environment and deploys the application to the new environment. What should be done next to complete the update?

A. Redirect to the new environment using Amazon Route 53
B. Select the Swap Environment URLs option
C. Replace the Auto Scaling launch configuration
D. Update the DNS records to point to the green environment
Suggested answer: B

Explanation:

Reference:

https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.CNAMESwap.html
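
A minimal sketch of the same CNAME swap done with boto3 (the environment names are hypothetical):

import boto3

eb = boto3.client("elasticbeanstalk", region_name="us-east-1")

# Swap the CNAMEs of the blue (existing) and green (new) environments so
# traffic shifts to the new environment without editing DNS records directly.
eb.swap_environment_cnames(
    SourceEnvironmentName="my-app-blue",
    DestinationEnvironmentName="my-app-green",
)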

A company is migrating an application to AWS. It wants to use fully managed services as much as possible during the migration. The company needs to store large, important documents within the application with the following requirements:

The data must be highly durable and available.

The data must always be encrypted at rest and in transit.

The encryption key must be managed by the company and rotated periodically.

Which of the following solutions should the Solutions Architect recommend?

A. Deploy the storage gateway to AWS in file gateway mode. Use Amazon EBS volume encryption using an AWS KMS key to encrypt the storage gateway volumes.
B. Use Amazon S3 with a bucket policy to enforce HTTPS for connections to the bucket and to enforce server-side encryption and AWS KMS for object encryption.
C. Use Amazon DynamoDB with SSL to connect to DynamoDB. Use an AWS KMS key to encrypt DynamoDB objects at rest.
D. Deploy instances with Amazon EBS volumes attached to store this data. Use EBS volume encryption using an AWS KMS key to encrypt the data.

Suggested answer: B

Explanation:

Amazon S3 is a fully managed service that is designed for high durability and availability. A bucket policy can deny requests that are not made over HTTPS (aws:SecureTransport) and deny uploads that do not request server-side encryption with AWS KMS, and the objects can be encrypted with a customer managed KMS key that the company rotates periodically.
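
A sketch of such a bucket policy applied with boto3 is shown below; the bucket name is a placeholder. The first statement denies any request that is not made over HTTPS, and the second denies object uploads that do not request SSE-KMS.

import json
import boto3

BUCKET = "example-important-documents"  # hypothetical bucket name

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [f"arn:aws:s3:::{BUCKET}", f"arn:aws:s3:::{BUCKET}/*"],
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        },
        {
            "Sid": "DenyUnencryptedUploads",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
            "Condition": {
                "StringNotEquals": {"s3:x-amz-server-side-encryption": "aws:kms"}
            },
        },
    ],
}

boto3.client("s3").put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))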

A solutions architect has been assigned to migrate a 50 TB Oracle data warehouse that contains sales data from on premises to Amazon Redshift. Major updates to the sales data occur on the final calendar day of the month. For the remainder of the month, the data warehouse only receives minor daily updates and is primarily used for reading and reporting. Because of this, the migration process must start on the first day of the month and must be complete before the next set of updates occurs. This provides approximately 30 days to complete the migration and ensure that the minor daily changes have been synchronized with the Amazon Redshift data warehouse. Because the migration cannot impact normal business network operations, the bandwidth allocated to the migration for moving data over the internet is 50 Mbps. The company wants to keep data migration costs low.

Which steps will allow the solutions architect to perform the migration within the specified timeline?

A. Install Oracle database software on an Amazon EC2 instance. Configure VPN connectivity between AWS and the company’s data center. Configure the Oracle database running on Amazon EC2 to join the Oracle Real Application Clusters (RAC). When the Oracle database on Amazon EC2 finishes synchronizing, create an AWS DMS ongoing replication task to migrate the data from the Oracle database on Amazon EC2 to Amazon Redshift. Verify the data migration is complete and perform the cut over to Amazon Redshift.
B. Create an AWS Snowball import job. Export a backup of the Oracle data warehouse. Copy the exported data to the Snowball device. Return the Snowball device to AWS. Create an Amazon RDS for Oracle database and restore the backup file to that RDS instance. Create an AWS DMS task to migrate the data from the RDS for Oracle database to Amazon Redshift. Copy daily incremental backups from Oracle in the data center to the RDS for Oracle database over the internet. Verify the data migration is complete and perform the cut over to Amazon Redshift.
C. Install Oracle database software on an Amazon EC2 instance. To minimize the migration time, configure VPN connectivity between AWS and the company’s data center by provisioning a 1 Gbps AWS Direct Connect connection. Configure the Oracle database running on Amazon EC2 to be a read replica of the data center Oracle database. Start the synchronization process between the company’s on-premises data center and the Oracle database on Amazon EC2. When the Oracle database on Amazon EC2 is synchronized with the on-premises database, create an AWS DMS ongoing replication task to migrate the data from the Oracle database read replica that is running on Amazon EC2 to Amazon Redshift. Verify the data migration is complete and perform the cut over to Amazon Redshift.
D. Create an AWS Snowball import job. Configure a server in the company’s data center with an extraction agent. Use AWS SCT to manage the extraction agent and convert the Oracle schema to an Amazon Redshift schema. Create a new project in AWS SCT using the registered data extraction agent. Create a local task and an AWS DMS task in AWS SCT with replication of ongoing changes. Copy data to the Snowball device and return the Snowball device to AWS. Allow AWS DMS to copy data from Amazon S3 to Amazon Redshift. Verify that the data migration is complete and perform the cut over to Amazon Redshift.

Suggested answer: D

Explanation:

At 50 Mbps, transferring 50 TB over the internet would take roughly 93 days, far longer than the 30-day window, so the bulk data must ship on a Snowball device. AWS SCT with a registered extraction agent converts the schema and extracts the data to the device, and the AWS DMS task replicates ongoing changes so the minor daily updates stay synchronized until the cutover to Amazon Redshift.
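
The timing constraint behind that choice can be checked with a quick back-of-the-envelope calculation in Python (decimal units assumed):

# Rough transfer-time estimate for 50 TB over a 50 Mbps link.
data_bits = 50 * 10**12 * 8          # 50 TB expressed in bits
link_bps = 50 * 10**6                # 50 Mbps
seconds = data_bits / link_bps
days = seconds / 86_400
print(f"~{days:.0f} days")           # ~93 days, well past the 30-day window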

Attempts, one of the three types of items associated with a scheduled pipeline in AWS Data Pipeline, provide robust data management. Which of the following statements is NOT true about Attempts?

A. Attempts provide robust data management.
B. AWS Data Pipeline retries a failed operation until the count of retries reaches the maximum number of allowed retry attempts.
C. An AWS Data Pipeline Attempt object compiles the pipeline components to create a set of actionable instances.
D. AWS Data Pipeline Attempt objects track the various attempts, results, and failure reasons if applicable.
Suggested answer: C

Explanation:

Attempts, one of the three types of items associated with a scheduled pipeline in AWS Data Pipeline, provide robust data management. AWS Data Pipeline retries a failed operation, and continues to do so until the task reaches the maximum number of allowed retry attempts. Attempt objects track the various attempts, results, and failure reasons if applicable; essentially, an attempt is an instance with a counter. AWS Data Pipeline performs retries using the same resources from the previous attempts, such as Amazon EMR clusters and EC2 instances.

Reference: http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-how-tasks-scheduled.html
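
For illustration only, the retry limit that drives these attempts is set per object with the maximumRetries field when the pipeline definition is uploaded; the pipeline ID and activity below are hypothetical, and a complete definition would need additional objects (schedule, resources, inputs, outputs).

import boto3

datapipeline = boto3.client("datapipeline", region_name="us-east-1")

# A single activity object; AWS Data Pipeline creates attempt objects for it
# and retries failures up to maximumRetries times.
copy_activity = {
    "id": "MyCopyActivity",
    "name": "MyCopyActivity",
    "fields": [
        {"key": "type", "stringValue": "CopyActivity"},
        {"key": "maximumRetries", "stringValue": "3"},
    ],
}

datapipeline.put_pipeline_definition(
    pipelineId="df-0123456789ABCDEFGHIJ",   # hypothetical pipeline ID
    pipelineObjects=[copy_activity],
)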

By default, temporary security credentials for an IAM user are valid for a maximum of 12 hours, but you can request a duration as long as _________ hours.

A. 24
B. 36
C. 10
D. 48
Suggested answer: B

Explanation:

By default, temporary security credentials for an IAM user are valid for a maximum of 12 hours, but you can request a duration as short as 15 minutes or as long as 36 hours.

Reference: http://docs.aws.amazon.com/STS/latest/UsingSTS/CreatingSessionTokens.html
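
A minimal boto3 sketch of requesting the 36-hour maximum; the call must be made with an IAM user's long-term credentials (root credentials are limited to shorter sessions).

import boto3

sts = boto3.client("sts")

# 36 hours = 129,600 seconds, the maximum DurationSeconds for GetSessionToken
# when called by an IAM user (the default is 43,200 seconds, i.e. 12 hours).
response = sts.get_session_token(DurationSeconds=129_600)
creds = response["Credentials"]
print(creds["AccessKeyId"], creds["Expiration"])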

A benefits enrollment company is hosting a 3-tier web application running in a VPC on AWS, which includes a NAT (Network Address Translation) instance in the public web tier. There is enough provisioned capacity for the expected workload for the new fiscal year benefit enrollment period, plus some extra overhead. Enrollment proceeds nicely for two days, and then the web tier becomes unresponsive. Upon investigation using CloudWatch and other monitoring tools, it is discovered that there is an extremely large and unanticipated amount of inbound traffic coming from a set of 15 specific IP addresses over port 80, from a country where the benefits company has no customers. The web tier instances are so overloaded that benefit enrollment administrators cannot even SSH into them.

Which activity would be useful in defending against this attack?

A. Create a custom route table associated with the web tier and block the attacking IP addresses from the IGW (Internet Gateway)
B. Change the EIP (Elastic IP Address) of the NAT instance in the web tier subnet and update the Main Route Table with the new EIP
C. Create 15 Security Group rules to block the attacking IP addresses over port 80
D. Create an inbound NACL (Network Access control list) associated with the web tier subnet with deny rules to block the attacking IP addresses
Suggested answer: D

Explanation:

Network ACLs, unlike security groups, support explicit deny rules, so an inbound NACL associated with the web tier subnet can drop traffic from the 15 attacking IP addresses at the subnet boundary before it reaches the overloaded instances; security groups cannot deny traffic, and route tables cannot filter by source address. As a complementary control, use AWS Identity and Access Management (IAM) to control who in your organization has permission to create and manage security groups and network ACLs (NACLs). Isolate the responsibilities and roles for better defense; for example, you can give only your network administrators or security admins permission to manage them and restrict other roles.
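
For illustration, a deny entry for one attacking address could be added to the web tier subnet's network ACL as sketched below; the NACL ID and address are hypothetical placeholders, and one such rule per attacking /32, numbered below the existing allow rules, blocks the traffic at the subnet boundary.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

attacking_ips = ["203.0.113.10"]  # placeholder; the scenario has 15 addresses

for i, ip in enumerate(attacking_ips):
    # Deny inbound TCP/80 from the attacker; NACL rules are evaluated in
    # ascending rule-number order, so these must precede the allow rules.
    ec2.create_network_acl_entry(
        NetworkAclId="acl-0123456789abcdef0",  # hypothetical NACL ID
        RuleNumber=90 + i,
        Protocol="6",               # TCP
        RuleAction="deny",
        Egress=False,
        CidrBlock=f"{ip}/32",
        PortRange={"From": 80, "To": 80},
    )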

A company has multiple AWS accounts hosting IT applications. An Amazon CloudWatch Logs agent is installed on all Amazon EC2 instances. The company wants to aggregate all security events in a centralized AWS account dedicated to log storage.

Security Administrators need to perform near-real-time gathering and correlating of events across multiple AWS accounts. Which solution satisfies these requirements?

A. Create a Log Audit IAM role in each application AWS account with permissions to view CloudWatch Logs, configure an AWS Lambda function to assume the Log Audit role, and perform an hourly export of CloudWatch Logs data to an Amazon S3 bucket in the logging AWS account.
B. Configure CloudWatch Logs streams in each application AWS account to forward events to CloudWatch Logs in the logging AWS account. In the logging AWS account, subscribe an Amazon Kinesis Data Firehose stream to Amazon CloudWatch Events, and use the stream to persist log data in Amazon S3.
C. Create Amazon Kinesis Data Streams in the logging account, subscribe the stream to CloudWatch Logs streams in each application AWS account, configure an Amazon Kinesis Data Firehose delivery stream with the Data Streams as its source, and persist the log data in an Amazon S3 bucket inside the logging AWS account.
D. Configure CloudWatch Logs agents to publish data to an Amazon Kinesis Data Firehose stream in the logging AWS account, use an AWS Lambda function to read messages from the stream and push messages to Data Firehose, and persist the data in Amazon S3.
Suggested answer: C

Explanation:

Reference:

https://noise.getoto.net/2018/03/03/central-logging-in-multi-account-environments/
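
As a sketch of the logging-account side (names, account IDs, and ARNs are placeholders): create a CloudWatch Logs destination that fronts the Kinesis data stream and allow the application accounts to use it; each application account then calls put_subscription_filter with this destination's ARN to stream its log groups in near real time.

import json
import boto3

logs = boto3.client("logs", region_name="us-east-1")

# Destination in the logging account that forwards to the Kinesis data stream.
destination = logs.put_destination(
    destinationName="CentralSecurityEvents",
    targetArn="arn:aws:kinesis:us-east-1:111111111111:stream/security-events",
    roleArn="arn:aws:iam::111111111111:role/CWLtoKinesisRole",
)

# Allow an application account to subscribe its log groups to this destination.
access_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "222222222222"},
        "Action": "logs:PutSubscriptionFilter",
        "Resource": destination["destination"]["arn"],
    }],
}
logs.put_destination_policy(
    destinationName="CentralSecurityEvents",
    accessPolicy=json.dumps(access_policy),
)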

True or False: "In the context of Amazon ElastiCache, from the application's point of view, connecting to the cluster configuration endpoint is no different than connecting directly to an individual cache node."

A. True, from the application's point of view, connecting to the cluster configuration endpoint is no different than connecting directly to an individual cache node since each has a unique node identifier.
B. True, from the application's point of view, connecting to the cluster configuration endpoint is no different than connecting directly to an individual cache node.
C. False, you can connect to a cache node, but not to a cluster configuration endpoint.
D. False, you can connect to a cluster configuration endpoint, but not to a cache node.
Suggested answer: B

Explanation:

This is true. From the application's point of view, connecting to the cluster configuration endpoint is no different than connecting directly to an individual cache node. In the process of connecting to cache nodes, the application resolves the configuration endpoint's DNS name. Because the configuration endpoint maintains CNAME entries for all of the cache nodes, the DNS name resolves to one of the nodes; the client can then connect to that node.

Reference: http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/AutoDiscovery.HowAutoDiscoveryWorks.html
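
A small sketch illustrating the point (the cluster ID is hypothetical): the configuration endpoint returned by the ElastiCache API resolves through DNS just like a node endpoint, so a client can connect to it the same way it would connect to an individual node.

import socket
import boto3

elasticache = boto3.client("elasticache", region_name="us-east-1")

cluster = elasticache.describe_cache_clusters(
    CacheClusterId="my-memcached-cluster",   # hypothetical Memcached cluster ID
    ShowCacheNodeInfo=True,
)["CacheClusters"][0]

config_endpoint = cluster["ConfigurationEndpoint"]["Address"]
node_endpoints = [n["Endpoint"]["Address"] for n in cluster["CacheNodes"]]

# The configuration endpoint's DNS name resolves to one of the cache nodes, so
# connecting to it looks the same to the application as connecting to a node.
print(config_endpoint, "->", socket.gethostbyname(config_endpoint))
print(node_endpoints)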
