Amazon DOP-C01 Practice Test - Questions Answers, Page 39

A company has developed an AWS Lambda function that handles orders received through an API. The company is using AWS CodeDeploy to deploy the Lambda function as the final stage of a CI/CD pipeline. A DevOps Engineer has noticed there are intermittent failures of the ordering API for a few seconds after deployment. After some investigation, the DevOps Engineer believes the failures are due to database changes not having fully propagated before the Lambda function begins executing.

How should the DevOps Engineer overcome this?

A. Add a BeforeAllowTraffic hook to the AppSpec file that tests and waits for any necessary database changes before traffic can flow to the new version of the Lambda function
B. Add an AfterAllowTraffic hook to the AppSpec file that forces traffic to wait for any pending database changes before allowing the new version of the Lambda function to respond
C. Add a BeforeInstall hook to the AppSpec file that tests and waits for any necessary database changes before deploying the new version of the Lambda function
D. Add a ValidateService hook to the AppSpec file that inspects incoming traffic and rejects the payload if dependent services, such as the database, are not yet ready
Suggested answer: A
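
For context, a BeforeAllowTraffic hook for a Lambda deployment is itself a Lambda function named in the AppSpec file, and it must report a result back to CodeDeploy before any traffic shifts to the new version. A minimal Python sketch, where the readiness check (database_migrations_applied) is a hypothetical placeholder:

```python
import boto3

codedeploy = boto3.client("codedeploy")

def database_migrations_applied():
    # Hypothetical application-specific check that the database
    # changes have fully propagated and are visible.
    return True

def handler(event, context):
    # CodeDeploy passes these IDs so the hook can report its result.
    deployment_id = event["DeploymentId"]
    execution_id = event["LifecycleEventHookExecutionId"]

    status = "Succeeded" if database_migrations_applied() else "Failed"

    codedeploy.put_lifecycle_event_hook_execution_status(
        deploymentId=deployment_id,
        lifecycleEventHookExecutionId=execution_id,
        status=status,
    )
```

Reporting Failed keeps traffic on the old version, which is what prevents the intermittent ordering-API errors described here.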

You have a web application that is currently running on a collection of micro instance types in a single AZ behind a single load balancer. You have an Auto Scaling group configured to scale from 2 to 64 instances. When reviewing your CloudWatch metrics, you see that your Auto Scaling group is sometimes running all 64 micro instances. The web application reads from and writes to a DynamoDB backend configured with 800 Write Capacity Units and 800 Read Capacity Units. Your customers are complaining that they are experiencing long load times when viewing your website. You have investigated the DynamoDB CloudWatch metrics: you are under the provisioned read and write capacity units, and there is no throttling. How do you scale your service to improve the load times and ensure the principles of high availability?

A. Change your Auto Scaling group configuration to include multiple AZs.
B. Change your Auto Scaling group configuration to include multiple AZs, and increase the number of Read Capacity Units in your DynamoDB table by a factor of three, because you will need to be calling DynamoDB from three AZs.
C. Add a second load balancer to your Auto Scaling group so that you can support more inbound connections per second.
D. Change your Auto Scaling group configuration to use larger instances and include multiple AZs instead of one.
Suggested answer: D
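
As an illustration, spreading the group across multiple AZs is a single API call; the move to larger instances would come from a new launch configuration or launch template attached to the group. A sketch with hypothetical group and AZ names:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Spread the group across three Availability Zones; the larger
# instance type itself is defined in the group's launch template
# or launch configuration.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="web-asg",  # hypothetical group name
    AvailabilityZones=["us-east-1a", "us-east-1b", "us-east-1c"],
    MinSize=2,
    MaxSize=64,
)
```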

A company's DevOps team launches a WorkSpace using Amazon WorkSpaces for each new user. Recently, the Security team said that WorkSpaces for these new users are not consistently being tagged. Company policy requires that all WorkSpaces be tagged with USERNAME automatically upon creation.

Which combination of steps should the DevOps Engineer take to address this requirement? (Choose two.)

A. Add an AWS Lambda function policy allowing cloudtrail.amazonaws.com to use the lambda:InvokeFunction action.
B. Create a new Amazon CloudWatch Events event pattern rule based on Amazon WorkSpaces with an AWS API Call via CloudTrail event type. Select the CreateWorkspaces operation, and target an AWS Lambda function that will tag the WorkSpace.
C. Ensure AWS CloudTrail is enabled in all Regions where WorkSpaces are created.
D. Enable custom tagging for Amazon WorkSpaces from the directory details.
E. Create a new Amazon CloudWatch Events scheduled event rule based on Amazon WorkSpaces with an interval of 1 minute. Target an AWS Lambda function that will tag the WorkSpace.
Suggested answer: B, C
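
For reference, the Lambda target of such a rule can read the user name from the CloudTrail-sourced event and tag the new WorkSpace. A sketch; the event field paths below follow the CreateWorkspaces request/response shape but should be treated as an assumption to verify:

```python
import boto3

workspaces = boto3.client("workspaces")

def handler(event, context):
    # CloudTrail-sourced events carry the original API request
    # and response elements.
    detail = event["detail"]
    username = detail["requestParameters"]["workspaces"][0]["userName"]
    workspace_id = detail["responseElements"]["pendingRequests"][0]["workspaceId"]

    # Apply the USERNAME tag required by company policy.
    workspaces.create_tags(
        ResourceId=workspace_id,
        Tags=[{"Key": "USERNAME", "Value": username}],
    )
```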

A company is deploying a new mobile game on AWS for its customers around the world. The Development team uses AWS Code services and must meet the following requirements:

- Clients need to send/receive real-time playing data from the backend frequently and with minimal latency
- Game data must meet the data residency requirement

Which strategy can a DevOps Engineer implement to meet their needs?

A. Deploy the backend application to multiple regions. Any update to the code repository triggers a two-stage build-and-deployment pipeline. A successful deployment in one region invokes an AWS Lambda function to copy the build artifacts to an Amazon S3 bucket in another region. After the artifact is copied, it triggers a deployment pipeline in the new region.
B. Deploy the backend application to multiple Availability Zones in a single region. Create an Amazon CloudFront distribution to serve the application backend to global customers. Any update to the code repository triggers a two-stage build-and-deployment pipeline. The pipeline deploys the backend application to all Availability Zones.
C. Deploy the backend application to multiple regions. Use AWS Direct Connect to serve the application backend to global customers. Any update to the code repository triggers a two-stage build-and-deployment pipeline in the region. After a successful deployment in the region, the pipeline continues to deploy the artifact to another region.
D. Deploy the backend application to multiple regions. Any update to the code repository triggers a two-stage build-and-deployment pipeline in the region. After a successful deployment in the region, the pipeline invokes the pipeline in another region and passes the build artifact location. The pipeline uses the artifact location and deploys applications in the new region.
Suggested answer: A

Explanation:

Reference:

https://docs.aws.amazon.com/codepipeline/latest/userguide/integrations-actiontype.html#integrations-invoke
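
As a sketch of the cross-region artifact copy in the suggested answer (bucket and key names are hypothetical), the Lambda function invoked after a successful deployment could do:

```python
import boto3

def handler(event, context):
    # Copy the build artifact into the second region's bucket so the
    # deployment pipeline there can pick it up as its source.
    s3 = boto3.client("s3", region_name="eu-west-1")
    s3.copy_object(
        Bucket="game-artifacts-eu-west-1",
        Key="builds/latest.zip",
        CopySource={"Bucket": "game-artifacts-us-east-1", "Key": "builds/latest.zip"},
    )
```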

Due to compliance regulations, management has asked you to provide a system that allows for cost-effective long-term storage of your application logs and provides a way for support staff to view the logs more quickly. Currently your log system archives logs automatically to Amazon S3 every hour, and support staff must wait for these logs to appear in Amazon S3, because they do not currently have access to the systems to view live logs. What method should you use to become compliant while also providing a faster way for support staff to have access to logs?

A. Update Amazon S3 lifecycle policies to archive old logs to Amazon Glacier, and add a new policy to push all log entries to Amazon SQS for ingestion by the support team.
B. Update Amazon S3 lifecycle policies to archive old logs to Amazon Glacier, and use or write a service to also stream your application logs to CloudWatch Logs.
C. Update Amazon Glacier lifecycle policies to pull new logs from Amazon S3, and in the Amazon EC2 console, enable the CloudWatch Logs Agent on all of your application servers.
D. Update Amazon S3 lifecycle policies to archive old logs to Amazon Glacier. Enable Amazon S3 partial uploads on your Amazon S3 bucket, and trigger an Amazon SNS notification when a partial upload occurs.
E. Use or write a service to stream your application logs to CloudWatch Logs. Use an Amazon Elastic MapReduce cluster to live stream your logs from CloudWatch Logs for ingestion by the support team, and create a Hadoop job to push the logs to S3 in five-minute chunks.
Suggested answer: B
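
For illustration, the Glacier transition in the suggested answer is a lifecycle rule on the archive bucket. A sketch with a hypothetical bucket name, prefix, and transition threshold:

```python
import boto3

s3 = boto3.client("s3")

# Transition archived log objects to Glacier after 30 days for
# cost-effective long-term retention; the threshold is illustrative.
s3.put_bucket_lifecycle_configuration(
    Bucket="app-log-archive",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "logs-to-glacier",
                "Status": "Enabled",
                "Filter": {"Prefix": "logs/"},
                "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
            }
        ]
    },
)
```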

A Developer is maintaining a fleet of 50 Amazon EC2 Linux servers. The servers are part of an Amazon EC2 Auto Scaling group, and also use Elastic Load Balancing for load balancing. Occasionally, some application servers are being terminated after failing ELB HTTP health checks. The Developer would like to perform a root cause analysis on the issue, but the server is terminated before its application logs can be accessed. How can log collection be automated?

A. Use Auto Scaling lifecycle hooks to put instances in a Pending:Wait state. Create an Amazon CloudWatch alarm for EC2 Instance Terminate Successful and trigger an AWS Lambda function that executes an SSM Run Command script to collect logs, push them to Amazon S3, and complete the lifecycle action once logs are collected.
B. Use Auto Scaling lifecycle hooks to put instances in a Terminating:Wait state. Create a Config rule for EC2 Instance-terminate Lifecycle Action and trigger a step function that executes a script to collect logs, push them to Amazon S3, and complete the lifecycle action once logs are collected.
C. Use Auto Scaling lifecycle hooks to put instances in a Terminating:Wait state. Create an Amazon CloudWatch subscription filter for EC2 Instance Terminate Successful and trigger a CloudWatch agent that executes a script to collect logs, push them to Amazon S3, and complete the lifecycle action once logs are collected.
D. Use Auto Scaling lifecycle hooks to put instances in a Terminating:Wait state. Create an Amazon CloudWatch Events rule for EC2 Instance-terminate Lifecycle Action and trigger an AWS Lambda function that executes an SSM Run Command script to collect logs, push them to Amazon S3, and complete the lifecycle action once logs are collected.
Suggested answer: D
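
To make the suggested answer concrete, the Lambda function behind the CloudWatch Events rule can run an SSM command on the terminating instance and then release the lifecycle hook. A sketch; the log path and destination bucket are hypothetical:

```python
import boto3

ssm = boto3.client("ssm")
autoscaling = boto3.client("autoscaling")

def handler(event, context):
    detail = event["detail"]
    instance_id = detail["EC2InstanceId"]

    # Run a log-collection script on the instance while it is held
    # in the Terminating:Wait state.
    ssm.send_command(
        InstanceIds=[instance_id],
        DocumentName="AWS-RunShellScript",
        Parameters={
            "commands": ["aws s3 cp /var/log/app/ s3://log-archive-bucket/ --recursive"]
        },
    )

    # Release the instance from the wait state.
    autoscaling.complete_lifecycle_action(
        LifecycleHookName=detail["LifecycleHookName"],
        AutoScalingGroupName=detail["AutoScalingGroupName"],
        LifecycleActionToken=detail["LifecycleActionToken"],
        LifecycleActionResult="CONTINUE",
    )
```

Completing the lifecycle action immediately after send_command is a simplification; in practice the hook would be completed only after confirming the upload finished.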

You manage a web advertising platform on a single AWS account. This platform produces real-time ad-click data that you store as objects in an Amazon S3 bucket called "click-data." Your advertising partners want to use Amazon Elastic MapReduce in their own AWS accounts to do analytics on the ad-click data. They have asked for immediate access to the ad-click data so that they can run analytics. Which two choices are required to facilitate secure access to this data? (Choose two.)

A. Create a cross-account IAM role with a trust policy that contains partner AWS account IDs and a unique external ID.
B. Create a new IAM group for AWS Data Pipeline users with a trust policy that contains partner AWS account IDs.
C. Configure an Amazon S3 bucket policy for the "click-data" bucket that allows Read-Only access to the objects, and associate this policy with an IAM role.
D. Configure the Amazon S3 bucket access control list to allow access to the partners' Amazon Elastic MapReduce cluster.
E. Configure AWS Data Pipeline in the partner AWS accounts to use the Web Identity Federation API to access data in the "click-data" bucket.
F. Configure AWS Data Pipeline to transfer the data from the "click-data" bucket to the partner's Amazon Elastic MapReduce cluster.
Suggested answer: A, C
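
For reference, the cross-account role in option A pairs a trust policy (partner account plus external ID) with a read-only permissions policy on the bucket. A sketch with a hypothetical account ID, external ID, and role name:

```python
import json
import boto3

iam = boto3.client("iam")

# Trust policy: only the partner account, presenting the agreed
# external ID, may assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
            "Action": "sts:AssumeRole",
            "Condition": {"StringEquals": {"sts:ExternalId": "partner-ext-id"}},
        }
    ],
}

iam.create_role(
    RoleName="PartnerClickDataReadOnly",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)
```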

Your company has multiple applications running on AWS. Your company wants to develop a tool that notifies on-call teams immediately via email when an alarm is triggered in your environment. You have multiple on-call teams that work different shifts, and the tool should handle notifying the correct teams at the correct times. How should you implement this solution?

A. Create an Amazon SNS topic and an Amazon SQS queue. Configure the Amazon SQS queue as a subscriber to the Amazon SNS topic. Configure CloudWatch alarms to notify this topic when an alarm is triggered. Create an Amazon EC2 Auto Scaling group with both minimum and desired instances configured to 0. Worker nodes in this group spawn when messages are added to the queue. Workers then use Amazon Simple Email Service to send messages to your on-call teams.
B. Create an Amazon SNS topic and configure your on-call team email addresses as subscribers. Use the AWS SDK tools to integrate your application with Amazon SNS and send messages to this new topic. Notifications will be sent to on-call users when a CloudWatch alarm is triggered.
C. Create an Amazon SNS topic and configure your on-call team email addresses as subscribers. Create a secondary Amazon SNS topic for alarms and configure your CloudWatch alarms to notify this topic when triggered. Create an HTTP subscriber to this topic that notifies your application via HTTP POST when an alarm is triggered. Use the AWS SDK tools to integrate your application with Amazon SNS and send messages to the first topic so that on-call engineers receive alerts.
D. Create an Amazon SNS topic for each on-call group, and configure each of these with the team member emails as subscribers. Create another Amazon SNS topic and configure your CloudWatch alarms to notify this topic when triggered. Create an HTTP subscriber to this topic that notifies your application via HTTP POST when an alarm is triggered. Use the AWS SDK tools to integrate your application with Amazon SNS and send messages to the correct team topic when on shift.
Suggested answer: D
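
To sketch the per-team topics in the suggested answer (team names and addresses are hypothetical):

```python
import boto3

sns = boto3.client("sns")

# One topic per on-call group; the application later publishes alerts
# to whichever team's topic is on shift.
teams = {
    "oncall-team-a": ["alice@example.com"],
    "oncall-team-b": ["bob@example.com"],
}

for topic_name, emails in teams.items():
    topic_arn = sns.create_topic(Name=topic_name)["TopicArn"]
    for email in emails:
        sns.subscribe(TopicArn=topic_arn, Protocol="email", Endpoint=email)
```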

You want to build a new search tool feature for your monitoring system that will allow your information security team to quickly audit all API calls in your AWS accounts. What combination of AWS services can you use to develop and automate the backend processes supporting this tool? (Choose three.)

A. Create an Amazon CloudSearch domain for API call logs. Configure the search domain so that it can be used to index API call logs for the search tool.
B. Use AWS CloudTrail to store API call logs in an Amazon S3 bucket. Configure an Amazon Simple Notification Service topic called "log-notification" that notifies subscribers when new logs are available. Subscribe an Amazon SQS queue to the topic.
C. Use Amazon CloudWatch to ship AWS CloudTrail logs to your monitoring system.
D. Create an AWS Elastic Beanstalk application in worker role mode that uses an Amazon Simple Email Service (SES) domain to facilitate batch processing new API call log files retrieved from an Amazon S3 bucket into a search index.
E. Use AWS CloudTrail to store API call logs in an Amazon S3 bucket. Configure Amazon Simple Email Service (SES) to notify subscribers when new logs are available. Subscribe an Amazon SQS queue to the email domain.
F. Create Amazon CloudWatch custom metrics for the API call logs. Configure a CloudWatch search domain so that it can be used to index API call logs for the search tool.
G. Create an AWS Elastic Beanstalk application in worker role mode that uses an Amazon SQS queue to facilitate batch processing new API call log files retrieved from an Amazon S3 bucket into a search index.
Suggested answer: A, B, G
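
The worker in option G would fetch each new CloudTrail log object named in the queued notification and index its events into the CloudSearch domain. A sketch; the domain endpoint, index fields, and message shape are assumptions to adapt:

```python
import gzip
import json
import boto3

s3 = boto3.client("s3")
# Hypothetical CloudSearch domain document endpoint.
cloudsearch = boto3.client(
    "cloudsearchdomain",
    endpoint_url="https://doc-apilogs-xxxx.us-east-1.cloudsearch.amazonaws.com",
)

def process_notification(body):
    # S3 event (delivered via SNS/SQS) naming the new CloudTrail log object.
    record = json.loads(body)["Records"][0]["s3"]
    obj = s3.get_object(Bucket=record["bucket"]["name"], Key=record["object"]["key"])
    events = json.loads(gzip.decompress(obj["Body"].read()))["Records"]

    # Index a few searchable fields per API call.
    batch = [
        {
            "type": "add",
            "id": e["eventID"],
            "fields": {"event_name": e["eventName"], "event_time": e["eventTime"]},
        }
        for e in events
    ]
    cloudsearch.upload_documents(
        documents=json.dumps(batch), contentType="application/json"
    )
```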

You need to perform ad-hoc analysis on log data, including searching quickly for specific error codes and reference numbers. Which should you evaluate first?

A. AWS Elasticsearch Service
B. AWS RedShift
C. AWS EMR
D. AWS DynamoDB
Suggested answer: A

Explanation:

Amazon Elasticsearch Service (Amazon ES) is a managed service that makes it easy to deploy, operate, and scale Elasticsearch clusters in the AWS cloud. Elasticsearch is a popular open-source search and analytics engine for use cases such as log analytics, real-time application monitoring, and clickstream analytics.

Reference: http://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/what-is-amazon-elasticsearch-service.html
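
For a feel of the ad-hoc searches involved, a query for a specific error code against a hypothetical Amazon ES domain endpoint and index might look like:

```python
import requests

# Hypothetical domain endpoint and index; real domains also require
# an access policy or signed requests.
ES_ENDPOINT = "https://search-logs-xxxx.us-east-1.es.amazonaws.com"

resp = requests.get(
    f"{ES_ENDPOINT}/logs/_search",
    json={"query": {"match": {"message": "ERR-4031"}}},
    timeout=10,
)
for hit in resp.json()["hits"]["hits"]:
    print(hit["_source"]["message"])
```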
