
Amazon DOP-C01 Practice Test - Questions Answers, Page 11


A company that uses electronic health records is running a fleet of Amazon EC2 instances with an Amazon Linux operating system. As part of patient privacy requirements, the company must ensure continuous patch compliance for the operating system and the applications running on the EC2 instances.

How can the deployments of the operating system and application patches be automated using a default and custom repository?

A.
Use AWS Systems Manager to create a new patch baseline including the custom repository. Execute the AWS-RunPatchBaseline document using Run Command to verify and install patches.
B.
Use AWS Direct Connect to integrate the corporate repository and deploy the patches using Amazon CloudWatch scheduled events, then use the CloudWatch dashboard to create reports.
C.
Use yum-config-manager to add the custom repository under /etc/yum.repos.d and run yum-config-manager --enable to activate the repository.
D.
Use AWS Systems Manager to create a new patch baseline including the corporate repository. Execute the AWS-AmazonLinuxDefaultPatchBaseline document using Run Command to verify and install patches.
Suggested answer: A
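The suggested answer boils down to a single Run Command invocation of the AWS-RunPatchBaseline document. The sketch below builds the request parameters for such a call; the tag key/value are hypothetical, and in practice the dict would be passed to boto3's `ssm.send_command(**params)`:

```python
# Sketch: assemble parameters for an SSM Run Command invocation of the
# AWS-RunPatchBaseline document. The "PatchGroup" tag and "ehr-fleet" value
# are hypothetical; real code would pass the dict to ssm.send_command().

def build_patch_command(operation: str, tag_key: str, tag_value: str) -> dict:
    """Return send_command parameters for AWS-RunPatchBaseline.

    operation: "Scan" verifies compliance only; "Install" applies patches.
    """
    if operation not in ("Scan", "Install"):
        raise ValueError("operation must be 'Scan' or 'Install'")
    return {
        "DocumentName": "AWS-RunPatchBaseline",
        # Target instances by tag instead of hard-coded instance IDs.
        "Targets": [{"Key": f"tag:{tag_key}", "Values": [tag_value]}],
        "Parameters": {"Operation": [operation]},
    }

params = build_patch_command("Install", "PatchGroup", "ehr-fleet")
```

Running the same call with `"Scan"` first gives a compliance report without changing the instances, which is how "verify and install" splits into two passes.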

A healthcare services company is concerned about the growing costs of software licensing for an application for monitoring patient wellness. The company wants to create an audit process to ensure that the application is running exclusively on Amazon EC2 Dedicated Hosts. A DevOps Engineer must create a workflow to audit the application to ensure compliance. What steps should the Engineer take to meet this requirement with the LEAST administrative overhead?

A.
Use AWS Systems Manager Configuration Compliance. Use calls to the put-compliance-items API action to scan and build a database of noncompliant EC2 instances based on their host placement configuration. Use an Amazon DynamoDB table to store these instance IDs for fast access. Generate a report through Systems Manager by calling the list-compliance-summaries API action.
B.
Use custom Java code running on an EC2 instance. Set up EC2 Auto Scaling for the instance depending on the number of instances to be checked. Send the list of noncompliant EC2 instance IDs to an Amazon SQS queue. Set up another worker instance to process instance IDs from the SQS queue and write them to Amazon DynamoDB. Use an AWS Lambda function to terminate noncompliant instance IDs obtained from the queue, and send them to an Amazon SNS email topic for distribution.
C.
Use AWS Config. Identify all EC2 instances to be audited by enabling Config Recording on all Amazon EC2 resources for the region. Create a custom AWS Config rule that triggers an AWS Lambda function by using the "config-rule-change-triggered" blueprint. Modify the Lambda evaluateCompliance() function to verify host placement and return a NON_COMPLIANT result if the instance is not running on an EC2 Dedicated Host. Use the AWS Config report to address noncompliant instances.
D.
Use AWS CloudTrail. Identify all EC2 instances to be audited by analyzing all calls to the EC2 RunCommand API action. Invoke an AWS Lambda function that analyzes the host placement of the instance. Store the EC2 instance ID of noncompliant resources in an Amazon RDS MySQL DB instance. Generate a report by querying the RDS instance and exporting the query results to a CSV text file.
Suggested answer: C
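The core of the custom Config rule in option C is the compliance decision: an instance passes only if its placement tenancy is "host" (a Dedicated Host). A minimal sketch of that logic, using a simplified configuration-item shape rather than the full AWS Config schema:

```python
# Sketch of the compliance check inside a custom AWS Config rule Lambda.
# The configuration-item dict here is a simplified stand-in for the real
# AWS Config payload; only the fields the check needs are assumed.

def evaluate_compliance(configuration_item: dict) -> str:
    """Return an AWS Config compliance verdict for one resource."""
    if configuration_item.get("resourceType") != "AWS::EC2::Instance":
        return "NOT_APPLICABLE"
    tenancy = (
        configuration_item.get("configuration", {})
        .get("placement", {})
        .get("tenancy")
    )
    # Dedicated Host placement reports tenancy "host"; "default" and
    # "dedicated" (Dedicated Instances) are both noncompliant here.
    return "COMPLIANT" if tenancy == "host" else "NON_COMPLIANT"
```

In a real rule, the Lambda would report this verdict back via the Config `put_evaluations` API; the sketch keeps only the decision so it stands alone.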

You are hosting multiple environments in multiple regions and would like to use Amazon Inspector for regular security assessments on your AWS resources across all regions. Which statement about Amazon Inspector's operation across regions is true?

A.
Amazon Inspector is a global service that is not region-bound. You can include AWS resources from multiple regions in the same assessment target.
B.
Amazon Inspector is hosted within AWS regions behind a public endpoint. All regions are isolated from each other, and the telemetry and findings for all assessments performed within a region remain in that region and are not distributed by the service to other Amazon Inspector locations.
C.
Amazon Inspector is hosted in each supported region. Telemetry data and findings are shared across regions to provide complete assessment reports.
D.
Amazon Inspector is hosted in each supported region separately. You have to create assessment targets using the same name and tags in each region and Amazon Inspector will run against each assessment target in each region.
Suggested answer: B

Explanation:

At this time, Amazon Inspector supports assessment services for EC2 instances in only the following AWS Regions: US West (Oregon), US East (N. Virginia), EU (Ireland), Asia Pacific (Seoul), Asia Pacific (Mumbai), Asia Pacific (Tokyo), and Asia Pacific (Sydney). Amazon Inspector is hosted within AWS regions behind a public endpoint. All regions are isolated from each other, and the telemetry and findings for all assessments performed within a region remain in that region and are not distributed by the service to other Amazon Inspector locations.

Reference:

https://docs.aws.amazon.com/inspector/latest/userguide/inspector_supported_os_regions.html#inspector_supportedregions

A company requires that all logs are captured for everything that runs in the company’s AWS account. The account has multiple VPCs with Amazon EC2 instances, Application Load Balancers, Amazon RDS MySQL databases, and AWS WAF rules that are configured. The logs must be protected from deletion. The company also needs a daily visual analysis of log anomalies from the previous day. Which combination of actions should a DevOps engineer take to meet these requirements? (Choose three.)

A.
Configure an AWS Lambda function to send all Amazon CloudWatch logs to an Amazon S3 bucket. Create a dashboard report in Amazon QuickSight.
B.
Configure AWS CloudTrail to send all logs to Amazon Inspector. Create a dashboard report in Amazon QuickSight.
C.
Configure Amazon S3 MFA Delete on the logging S3 bucket.
D.
Configure an Amazon S3 Object Lock legal hold on the logging S3 bucket.
E.
Configure AWS Artifact to send all logs to the logging Amazon S3 bucket. Create a dashboard report in Amazon QuickSight.
F.
Deploy the Amazon CloudWatch agent to all EC2 instances.
Suggested answer: A, D, F
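Option A's Lambda sits behind a CloudWatch Logs subscription, which delivers log events gzip-compressed and base64-encoded under `event["awslogs"]["data"]`. Decoding that envelope is the first step before the function writes the batch to S3; a minimal sketch of just the decode:

```python
import base64
import gzip
import json

# Sketch: unpack the payload a CloudWatch Logs subscription delivers to a
# Lambda function. The subscription wraps the log events in gzip and then
# base64; the event shape below is the documented "awslogs" envelope.

def decode_log_events(event: dict) -> list:
    """Return the list of log events carried in one subscription delivery."""
    payload = base64.b64decode(event["awslogs"]["data"])
    body = json.loads(gzip.decompress(payload))
    return body.get("logEvents", [])
```

After decoding, the function would serialize the events and `put_object` them into the Object Lock-protected bucket; that half is omitted since it needs live AWS credentials.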

You need to know when you spend $1,000 or more on AWS. What is the easiest way for you to see that notification?

A.
Amazon CloudWatch Events tied to API calls; when certain thresholds are exceeded, publish to SNS.
B.
Scrape the billing page periodically and pump into Kinesis.
C.
AWS CloudWatch Metrics + Billing Alarm + Lambda event subscription. When a threshold is exceeded, email the manager.
D.
Scrape the billing page periodically and publish to SNS.
Suggested answer: C

Explanation:

Even if you're careful to stay within the free tier, it's a good idea to create a billing alarm to notify you if you exceed the limits of the free tier. Billing alarms can help to protect you against unknowingly accruing charges if you inadvertently use a service outside of the free tier or if traffic exceeds your expectations.

Reference: http://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/freetier-alarms.html
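The billing alarm in option C watches the EstimatedCharges metric (published in USD in the AWS/Billing namespace, us-east-1 only). A sketch of the alarm parameters one might pass to boto3's `cloudwatch.put_metric_alarm(**alarm)` — the SNS topic ARN is hypothetical:

```python
# Sketch: parameters for a CloudWatch billing alarm that notifies an SNS
# topic once estimated charges reach the threshold. The topic ARN is a
# placeholder; real code would call cloudwatch.put_metric_alarm(**alarm).

def build_billing_alarm(threshold_usd: float, topic_arn: str) -> dict:
    return {
        "AlarmName": f"billing-over-{int(threshold_usd)}-usd",
        "Namespace": "AWS/Billing",
        "MetricName": "EstimatedCharges",
        "Dimensions": [{"Name": "Currency", "Value": "USD"}],
        "Statistic": "Maximum",
        "Period": 21600,  # billing data refreshes only a few times a day
        "EvaluationPeriods": 1,
        "Threshold": threshold_usd,
        "ComparisonOperator": "GreaterThanOrEqualToThreshold",
        "AlarmActions": [topic_arn],
    }

alarm = build_billing_alarm(1000, "arn:aws:sns:us-east-1:123456789012:billing-alerts")
```

Note that billing metrics must first be enabled in the account's Billing preferences before the metric exists to alarm on.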

A company gives its employees limited rights to AWS. DevOps engineers have the ability to assume an administrator role. For tracking purposes, the security team wants to receive a near-real-time notification when the administrator role is assumed. How should this be accomplished?

A.
Configure AWS Config to publish logs to an Amazon S3 bucket. Use Amazon Athena to query the logs and send a notification to the security team when the administrator role is assumed.
B.
Configure Amazon GuardDuty to monitor when the administrator role is assumed and send a notification to the security team.
C.
Create an Amazon EventBridge (Amazon CloudWatch Events) event rule using an AWS Management Console sign-in events event pattern that publishes a message to an Amazon SNS topic if the administrator role is assumed.
D.
Create an Amazon EventBridge (Amazon CloudWatch Events) event rule using an AWS API call that uses an AWS CloudTrail event pattern to trigger an AWS Lambda function that publishes a message to an Amazon SNS topic if the administrator role is assumed.
Suggested answer: C

Explanation:

Reference: https://docs.aws.amazon.com/eventbridge/latest/userguide/user-guide.pdf
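Whichever option is chosen, the heart of the rule is an EventBridge event pattern. As one illustration, a CloudTrail-based pattern matching AssumeRole calls for a specific role (the mechanism option D describes; the role ARN below is hypothetical) might look like this:

```python
import json

# Sketch: an EventBridge event pattern that matches CloudTrail-recorded
# AssumeRole calls for one role. The ARN is a placeholder; the JSON string
# would be supplied as EventPattern in an events.put_rule() call.

def admin_assume_role_pattern(role_arn: str) -> str:
    pattern = {
        "source": ["aws.sts"],
        "detail-type": ["AWS API Call via CloudTrail"],
        "detail": {
            "eventName": ["AssumeRole"],
            "requestParameters": {"roleArn": [role_arn]},
        },
    }
    return json.dumps(pattern)
```

The rule's target would then be the SNS topic (or a Lambda in front of it) that notifies the security team in near-real time.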

A company has migrated its container-based applications to Amazon EKS and wants to establish automated email notifications. The notifications sent to each email address are for specific activities related to EKS components. The solution will include Amazon SNS topics and an AWS Lambda function to evaluate incoming log events and publish messages to the correct SNS topic. Which logging solution will support these requirements?

A.
Enable Amazon CloudWatch Logs to log the EKS components. Create a CloudWatch subscription filter for each component with Lambda as the subscription feed destination.
B.
Enable Amazon CloudWatch Logs to log the EKS components. Create CloudWatch Logs Insights queries linked to Amazon CloudWatch Events events that trigger Lambda.
C.
Enable Amazon S3 logging for the EKS components. Configure an Amazon CloudWatch subscription filter for each component with Lambda as the subscription feed destination.
D.
Enable Amazon S3 logging for the EKS components. Configure S3 PUT Object event notifications with AWS Lambda as the destination.
Suggested answer: A
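Option A means one subscription filter per EKS component, each feeding the evaluating Lambda. A sketch of the parameters for one such filter — the cluster name, component name, and Lambda ARN are hypothetical, and each dict would go to `logs.put_subscription_filter(**params)`:

```python
# Sketch: build put_subscription_filter parameters that route one EKS
# component's log events to a Lambda function. EKS control-plane logs land
# in the /aws/eks/<cluster>/cluster log group; names and the Lambda ARN
# below are placeholders.

def build_subscription_filter(cluster: str, component: str, lambda_arn: str) -> dict:
    return {
        "logGroupName": f"/aws/eks/{cluster}/cluster",
        "filterName": f"{component}-to-lambda",
        # Quoted-term filter pattern: only events mentioning this component.
        "filterPattern": f'"{component}"',
        "destinationArn": lambda_arn,
    }

filters = [
    build_subscription_filter("prod-cluster", component,
                              "arn:aws:lambda:us-east-1:123456789012:function:log-router")
    for component in ("api", "authenticator", "scheduler")
]
```

The Lambda then inspects each delivered event and publishes to whichever SNS topic matches the activity, which is the fan-out the question asks for.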

You are building out a layer in a software stack on AWS that needs to be able to scale out to react to increased demand as fast as possible. You are running the code on EC2 instances in an Auto Scaling Group behind an ELB. Which application code deployment method should you use?

A.
SSH into new instances that come online, and deploy new code onto the system by pulling it from an S3 bucket, which is populated by code that you refresh from source control on new pushes.
B.
Bake an AMI when deploying new versions of code, and use that AMI for the Auto Scaling Launch Configuration.
C.
Create a Dockerfile when preparing to deploy a new version to production and publish it to S3. Use UserData in the Auto Scaling Launch configuration to pull down the Dockerfile from S3 and run it when new instances launch.
D.
Create a new Auto Scaling Launch Configuration with UserData scripts configured to pull the latest code at all times.
Suggested answer: B

Explanation:

The bootstrapping process can be slower if you have a complex application or multiple applications to install. Managing a fleet of applications with several build tools and dependencies can be a challenging task during rollouts. Furthermore, your deployment service should be designed to do faster rollouts to take advantage of Auto Scaling.

Reference:

https://d0.awsstatic.com/whitepapers/overview-of-deployment-options-on-aws.pdf

A DevOps Engineer has several legacy applications that all generate different log formats. The Engineer must standardize the formats before writing them to Amazon S3 for querying and analysis. How can this requirement be met at the LOWEST cost?

A.
Have the application send its logs to an Amazon EMR cluster and normalize the logs before sending them to Amazon S3
B.
Have the application send its logs to Amazon QuickSight, then use the Amazon QuickSight SPICE engine to normalize the logs. Do the analysis directly from Amazon QuickSight
C.
Keep the logs in Amazon S3 and use Amazon Redshift Spectrum to normalize the logs in place
D.
Use Amazon Kinesis Agent on each server to upload the logs and have Amazon Kinesis Data Firehose use an AWS Lambda function to normalize the logs before writing them to Amazon S3
Suggested answer: D
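The normalization step in option D is a Firehose transformation Lambda: Firehose hands the function a batch of base64-encoded records, and the function returns each record transformed, with a status. A sketch under the assumption of one hypothetical legacy format ("timestamp|level|message") alongside already-JSON logs:

```python
import base64
import json

# Sketch of a Kinesis Data Firehose transformation Lambda. The record
# envelope (recordId / data / result) is the documented Firehose contract;
# the "timestamp|level|message" legacy format is a hypothetical example.

def normalize(raw: str) -> dict:
    """Coerce one log line into a single JSON shape."""
    raw = raw.strip()
    if raw.startswith("{"):  # already JSON, pass through parsed
        return json.loads(raw)
    ts, level, message = raw.split("|", 2)
    return {"timestamp": ts, "level": level, "message": message}

def lambda_handler(event, context):
    output = []
    for record in event["records"]:
        raw = base64.b64decode(record["data"]).decode("utf-8")
        normalized = json.dumps(normalize(raw)) + "\n"
        output.append({
            "recordId": record["recordId"],
            "result": "Ok",
            "data": base64.b64encode(normalized.encode("utf-8")).decode("utf-8"),
        })
    return {"records": output}
```

Because Firehose batches and compresses on the way to S3, this per-record function is typically the only compute the pipeline needs, which is what keeps the cost low.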

Which deployment method, when using AWS Auto Scaling Groups and Auto Scaling Launch Configurations, enables the shortest time to live for individual servers?

A.
Pre-baking AMIs with all code and configuration on deploys.
B.
Using a Dockerfile bootstrap on instance launch.
C.
Using UserData bootstrapping scripts.
D.
Using AWS EC2 Run Commands to dynamically SSH into fleets.
Suggested answer: A

Explanation:

Note that the bootstrapping process can be slower if you have a complex application or multiple applications to install. Managing a fleet of applications with several build tools and dependencies can be a challenging task during rollouts. Furthermore, your deployment service should be designed to do faster rollouts to take advantage of Auto Scaling. Prebaking is a process of embedding a significant portion of your application artifacts within your base AMI. During the deployment process you can customize application installations by using EC2 instance artifacts such as instance tags, instance metadata, and Auto Scaling groups.

Reference:

https://d0.awsstatic.com/whitepapers/overview-of-deployment-options-on-aws.pdf

Total 557 questions