Amazon DOP-C01 Practice Test - Questions Answers, Page 31

You are building a game high score table in DynamoDB. You will store each user's highest score for each game, with many games, all of which have relatively similar usage levels and numbers of players. You need to be able to look up the highest score for any game. What is the best DynamoDB key structure?

A. HighestScore as the hash / only key.
B. GameID as the hash key, HighestScore as the range key.
C. GameID as the hash / only key.
D. GameID as the range / only key.
Suggested answer: B

Explanation:

Since access and storage are uniform across games, and you need the scores ordered within each game (to read the highest value), the hash (partition) key should be GameID, with HighestScore as the range (sort) key.
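As a hedged illustration of answer B (the table and game names here are hypothetical), a minimal boto3 sketch of the key structure and the top-score lookup:

```python
import boto3

dynamodb = boto3.client("dynamodb")

# GameID as the partition (hash) key and HighestScore as the sort (range)
# key keeps each game's scores stored in order within its partition.
dynamodb.create_table(
    TableName="GameHighScores",
    AttributeDefinitions=[
        {"AttributeName": "GameID", "AttributeType": "S"},
        {"AttributeName": "HighestScore", "AttributeType": "N"},
    ],
    KeySchema=[
        {"AttributeName": "GameID", "KeyType": "HASH"},
        {"AttributeName": "HighestScore", "KeyType": "RANGE"},
    ],
    BillingMode="PAY_PER_REQUEST",
)

# Highest score for one game: read the partition in descending sort-key
# order and stop after the first item. (In practice, wait for the table
# to become ACTIVE before querying.)
top = dynamodb.query(
    TableName="GameHighScores",
    KeyConditionExpression="GameID = :g",
    ExpressionAttributeValues={":g": {"S": "asteroids"}},
    ScanIndexForward=False,
    Limit=1,
)
print(top["Items"])
```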

Two teams are working together on different portions of an architecture and are using AWS CloudFormation to manage their resources. One team administers operating system-level updates and patches, while the other team manages application-level dependencies and updates. The Application team must use the most recent AMI when creating new instances and deploying the application. What is the MOST scalable method for linking these two teams and processes?

A. The Operating System team uses CloudFormation to create new versions of their AMIs and lists the Amazon Resource Names (ARNs) of the AMIs in an encrypted Amazon S3 object as part of the stack output section. The Application team uses a cross-stack reference to load the encrypted S3 object and obtain the most recent AMI ARNs.
B. The Operating System team uses a CloudFormation stack to create an AWS CodePipeline pipeline that builds new AMIs, then places the latest AMI ARNs in an encrypted Amazon S3 object as part of the pipeline output. The Application team uses a cross-stack reference within their own CloudFormation template to get that S3 object location and obtain the most recent AMI ARNs to use when deploying their application.
C. The Operating System team uses a CloudFormation stack to create an AWS CodePipeline pipeline that builds new AMIs. The team then places the AMI ARNs as parameters in AWS Systems Manager Parameter Store as part of the pipeline output. The Application team specifies a parameter of type SSM in their CloudFormation stack to obtain the most recent AMI ARN from the Parameter Store.
D. The Operating System team maintains a nested stack that includes both the operating system and Application team templates. The Operating System team uses a stack update to deploy updates to the application stack whenever the Application team changes the application code.
Suggested answer: C
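
As a hedged sketch of answer C's mechanism (stack, parameter, and resource names are hypothetical): a CloudFormation parameter of type AWS::SSM::Parameter::Value<AWS::EC2::Image::Id> makes CloudFormation resolve the AMI ID from Parameter Store at each stack create or update, so the Application team always picks up the latest value the pipeline wrote:

```python
import json

import boto3

# Minimal template: the SSM parameter type tells CloudFormation to look up
# the value stored under the given Parameter Store key (written by the OS
# team's pipeline) and validate it as an EC2 image ID.
template = {
    "Parameters": {
        "LatestAmiId": {
            "Type": "AWS::SSM::Parameter::Value<AWS::EC2::Image::Id>",
            "Default": "/os-team/latest-ami-id",  # hypothetical key
        }
    },
    "Resources": {
        "AppInstance": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": {"Ref": "LatestAmiId"},
                "InstanceType": "t3.micro",
            },
        }
    },
}

boto3.client("cloudformation").create_stack(
    StackName="application-stack",
    TemplateBody=json.dumps(template),
)
```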

A company runs a three-tier web application in its production environment, which is built on a single AWS CloudFormation template made up of Amazon EC2 instances behind an ELB Application Load Balancer. The instances run in an EC2 Auto Scaling group across multiple Availability Zones. Data is stored in an Amazon RDS Multi-AZ DB instance with read replicas. Amazon Route 53 manages the application’s public DNS record.

A DevOps Engineer must create a workflow to mitigate a failed software deployment by rolling back changes in the production environment when a software cutover occurs for new application software. What steps should the Engineer perform to meet these requirements with the LEAST amount of downtime?

A. Use CloudFormation to deploy an additional staging environment and configure the Route 53 DNS with weighted records. During cutover, change the Route 53 A record weights to achieve an even traffic distribution between the two environments. Validate the traffic in the new environment and immediately terminate the old environment if tests are successful.
B. Use a single AWS Elastic Beanstalk environment to deploy the staging and production environments. Update the environment by uploading the ZIP file with the new application code. Swap the Elastic Beanstalk environment CNAME. Validate the traffic in the new environment and immediately terminate the old environment if tests are successful.
C. Use a single AWS Elastic Beanstalk environment and an AWS OpsWorks environment to deploy the staging and production environments. Update the environment by uploading the ZIP file with the new application code into the Elastic Beanstalk environment deployed with the OpsWorks stack. Validate the traffic in the new environment and immediately terminate the old environment if tests are successful.
D. Use AWS CloudFormation to deploy an additional staging environment, and configure the Route 53 DNS with weighted records. During cutover, increase the weight distribution to have more traffic directed to the new staging environment as workloads are successfully validated. Keep the old production environment in place until the new staging environment handles all traffic.
Suggested answer: D
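
As a hedged sketch of the weighted cutover in answer D (zone ID, record name, and ALB DNS names are hypothetical), each environment gets a weighted record and the weights shift as validation succeeds:

```python
import boto3

route53 = boto3.client("route53")

def set_weight(identifier, target_dns, weight):
    # Weighted records for the same name let you shift traffic gradually
    # from the old stack to the new one during cutover.
    route53.change_resource_record_sets(
        HostedZoneId="Z123EXAMPLE",
        ChangeBatch={"Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com",
                "Type": "CNAME",
                "SetIdentifier": identifier,
                "Weight": weight,
                "TTL": 60,
                "ResourceRecords": [{"Value": target_dns}],
            },
        }]},
    )

# Start with 10% of traffic on the new environment, then raise it as
# workloads are validated; the old environment stays in place until the
# new one handles all traffic.
set_weight("production", "old-alb.us-east-1.elb.amazonaws.com", 90)
set_weight("staging", "new-alb.us-east-1.elb.amazonaws.com", 10)
```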

The Development team at an online retailer has moved to a Business support plan and wants to take advantage of the AWS Health Dashboard and the AWS Health API to automate remediation actions for issues with the health of AWS resources. The first use case is to respond to AWS detecting an IAM access key that is listed on a public code repository site. The automated response will be to delete the IAM access key and send a notification to the Security team. How should this be achieved?

A. Create an AWS Lambda function to delete the IAM access key. Send AWS CloudTrail logs to Amazon CloudWatch Logs. Create a CloudWatch Logs metric filter for the AWS_RISK_CREDENTIALS_EXPOSED event with two actions: first, run the Lambda function; second, use Amazon SNS to send a notification to the Security team.
B. Create an AWS Lambda function to delete the IAM access key. Create an AWS Config rule for changes to aws.health and the AWS_RISK_CREDENTIALS_EXPOSED event with two actions: first, run the Lambda function; second, use Amazon SNS to send a notification to the Security team.
C. Use AWS Step Functions to create a function to delete the IAM access key, and then use Amazon SNS to send a notification to the Security team. Create an AWS Personal Health Dashboard rule for the AWS_RISK_CREDENTIALS_EXPOSED event; set the target of the Personal Health Dashboard rule to Step Functions.
D. Use AWS Step Functions to create a function to delete the IAM access key, and then use Amazon SNS to send a notification to the Security team. Create an Amazon CloudWatch Events rule with an aws.health event source and the AWS_RISK_CREDENTIALS_EXPOSED event; set the target of the CloudWatch Events rule to Step Functions.
Suggested answer: D
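
As a hedged sketch of answer D (rule name, account, and ARNs are hypothetical; AWS_RISK_CREDENTIALS_EXPOSED is the event type code AWS Health emits for exposed keys):

```python
import json

import boto3

events = boto3.client("events")

# Match AWS Health events for exposed credentials.
events.put_rule(
    Name="exposed-key-response",
    EventPattern=json.dumps({
        "source": ["aws.health"],
        "detail-type": ["AWS Health Event"],
        "detail": {"eventTypeCode": ["AWS_RISK_CREDENTIALS_EXPOSED"]},
    }),
)

# Send matching events to the Step Functions state machine that deletes
# the key and notifies the Security team via SNS.
events.put_targets(
    Rule="exposed-key-response",
    Targets=[{
        "Id": "remediation",
        "Arn": "arn:aws:states:us-east-1:123456789012:stateMachine:DeleteExposedKey",
        "RoleArn": "arn:aws:iam::123456789012:role/events-invoke-stepfunctions",
    }],
)
```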

If a variable is assigned in the 'vars' section of a playbook, where is the proper place to override that variable?

A. Inventory group var
B. playbook host_vars
C. role defaults
D. extra vars
Suggested answer: D

Explanation:

In Ansible's variable precedence, extra vars passed on the command line with -e/--extra-vars always have the highest precedence, so they override a value assigned in a playbook's vars section.

Reference: http://docs.ansible.com/ansible/playbooks_variables.html#variable-precedence-where-should-iput-a-variable
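
For example (the playbook and variable names are hypothetical), running `ansible-playbook site.yml -e "app_env=prod"` overrides any app_env value set in the playbook's vars section, because extra vars beat every other variable source.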

You are hired as the new head of operations for a SaaS company. Your CTO has asked you to make debugging any part of your entire operation simpler and as fast as possible. She complains that she has no idea what is going on in the complex, service-oriented architecture, because the developers just log to disk, and it's very hard to find errors in logs on so many services. How can you best meet this requirement and satisfy your CTO?

A. Copy all log files into AWS S3 using a cron job on each instance. Use an S3 Notification Configuration on the PutBucket event and publish events to AWS Lambda. Use the Lambda to analyze logs as soon as they come in and flag issues.
B. Begin using CloudWatch Logs on every service. Stream all Log Groups into S3 objects. Use AWS EMR cluster jobs to perform ad-hoc MapReduce analysis and write new queries when needed.
C. Copy all log files into AWS S3 using a cron job on each instance. Use an S3 Notification Configuration on the PutBucket event and publish events to AWS Kinesis. Use Apache Spark on AWS EMR to perform at-scale stream processing queries on the log chunks and flag issues.
D. Begin using CloudWatch Logs on every service. Stream all Log Groups into an AWS Elasticsearch Service Domain running Kibana 4 and perform log analysis on a search cluster.
Suggested answer: D

Explanation:

Elasticsearch and Kibana are the search and visualization layers of the ELK stack (Elasticsearch, Logstash, Kibana), which is designed specifically for real-time, ad hoc log analysis and aggregation. All other answers introduce extra delay or require predefined queries. Amazon Elasticsearch Service is a managed service that makes it easy to deploy, operate, and scale Elasticsearch in the AWS Cloud. Elasticsearch is a popular open-source search and analytics engine for use cases such as log analytics, real-time application monitoring, and clickstream analytics.

Reference: https://aws.amazon.com/elasticsearch-service/
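
As a hedged sketch of answer D's plumbing (the log group, function name, and account are hypothetical): CloudWatch Logs can stream a log group through a forwarding Lambda function into an Amazon Elasticsearch Service domain, which is what the console's streaming wizard configures:

```python
import boto3

logs = boto3.client("logs")

# Subscribe one service's log group to a Lambda function that forwards
# every event into the Elasticsearch domain backing Kibana.
logs.put_subscription_filter(
    logGroupName="/services/checkout",
    filterName="stream-to-elasticsearch",
    filterPattern="",  # empty pattern forwards all log events
    destinationArn=(
        "arn:aws:lambda:us-east-1:123456789012:function:LogsToElasticsearch"
    ),
)
```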

Your API requires the ability to stay online during AWS regional failures. Your API does not store any state; it only aggregates data from other sources, so you do not have a database. What is a simple but effective way to achieve this uptime goal?

A. Use a CloudFront distribution to serve up your API. Even if the region your API is in goes down, the edge locations CloudFront uses will be fine.
B. Use an ELB and a cross-zone ELB deployment to create redundancy across datacenters. Even if a region fails, the other AZ will stay online.
C. Create a Route53 Weighted Round Robin record, and if one region goes down, have that region redirect to the other region.
D. Create a Route53 Latency Based Routing Record with Failover and point it to two identical deployments of your stateless API in two different regions. Make sure both regions use Auto Scaling Groups behind ELBs.
Suggested answer: D

Explanation:

Latency-based records distribute requests when both regions are healthy, and the failover component enables fallback between regions. By adding the ELB and Auto Scaling group, the system in the surviving region can expand to meet 100% of demand instead of its original fraction whenever failover occurs.

Reference: http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover.html
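
As a hedged sketch of answer D (zone ID, record name, ELB DNS names, and health check IDs are hypothetical), one latency record per region, each tied to a health check so Route 53 stops answering with a failed region:

```python
import boto3

route53 = boto3.client("route53")

# Two latency records for the same name, one per region. Route 53 normally
# answers with the lowest-latency healthy endpoint; if a region's health
# check fails, all traffic fails over to the other region.
for region, elb_dns, health_check_id in [
    ("us-east-1", "api-use1.elb.amazonaws.com", "11111111-aaaa-bbbb-cccc-000000000001"),
    ("eu-west-1", "api-euw1.elb.amazonaws.com", "11111111-aaaa-bbbb-cccc-000000000002"),
]:
    route53.change_resource_record_sets(
        HostedZoneId="Z123EXAMPLE",
        ChangeBatch={"Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "api.example.com",
                "Type": "CNAME",
                "SetIdentifier": region,
                "Region": region,
                "TTL": 60,
                "HealthCheckId": health_check_id,
                "ResourceRecords": [{"Value": elb_dns}],
            },
        }]},
    )
```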

Which services can be used as optional components when setting up a new trail in CloudTrail?

A. KMS, SNS and SES
B. CloudWatch, S3 and SNS
C. KMS, CloudWatch and SNS
D. KMS, S3 and CloudWatch
Suggested answer: C

Explanation:

Key Management Service: The use of AWS KMS is an optional element of CloudTrail, but it allows additional encryption to be added to your log files when stored on S3.

Simple Notification Service: Amazon SNS is also an optional component for CloudTrail, but it allows you to create notifications; for example, when a new log file is delivered to S3, SNS could notify someone or a team via email. It could also be used in conjunction with CloudWatch when metric thresholds have been reached.

CloudWatch Logs: This is another optional component; CloudTrail can deliver its logs to AWS CloudWatch Logs as well as S3 so that specific monitoring can take place.

Reference:

https://cloudacademy.com/amazon-web-services/aws-cloudtrail-introduction-course/how-does-aws-cloudtrail-work.html
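
As a hedged sketch of the three optional integrations (bucket, topic, log group, role, and key names are hypothetical; only the S3 bucket is mandatory for a trail):

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

cloudtrail.create_trail(
    Name="audit-trail",
    S3BucketName="my-cloudtrail-bucket",  # required log file destination
    SnsTopicName="trail-log-delivery",    # optional: notify on each delivery
    # Optional: mirror events into CloudWatch Logs for metric filters/alarms.
    CloudWatchLogsLogGroupArn=(
        "arn:aws:logs:us-east-1:123456789012:log-group:cloudtrail-logs:*"
    ),
    CloudWatchLogsRoleArn="arn:aws:iam::123456789012:role/cloudtrail-to-cwlogs",
    KmsKeyId="alias/cloudtrail-key",      # optional: SSE-KMS for the log files
)
```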

What is the scope of AWS IAM?

A. Global
B. Availability Zone
C. Region
D. Placement Group
Suggested answer: A

Explanation:

IAM resources are all global; there is no regional constraint.

Reference: https://aws.amazon.com/iam/faqs/

Within an IAM policy, can you add an IfExists condition at the end of a Null condition?

A. Yes, you can add an IfExists condition at the end of a Null condition but not in all Regions.
B. Yes, you can add an IfExists condition at the end of a Null condition depending on the condition.
C. No, you cannot add an IfExists condition at the end of a Null condition.
D. Yes, you can add an IfExists condition at the end of a Null condition.
Suggested answer: C

Explanation:

Within an IAM policy, IfExists can be added to the end of any condition operator except the Null condition. It indicates that the conditional comparison should happen if the policy key is present in the context of the request, and be ignored otherwise.

Reference: http://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements.html
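
As a hedged illustration (the action and keys below are hypothetical examples): IfExists is valid on operators such as StringEquals, but the Null operator already tests for key presence, so a "NullIfExists" operator would be rejected:

```python
import json

# StringEqualsIfExists: compare only when the key appears in the request
# context; a missing key does not fail the condition. Null: test whether
# the key is present at all -- appending IfExists here is not allowed.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "ec2:RunInstances",
        "Resource": "*",
        "Condition": {
            "StringEqualsIfExists": {"ec2:InstanceType": ["t2.micro", "t3.micro"]},
            "Null": {"aws:TokenIssueTime": "true"},
            # "NullIfExists" is not a valid condition operator.
        },
    }],
}
print(json.dumps(policy, indent=2))
```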
