Amazon DOP-C01 Practice Test - Questions Answers, Page 50

Which of the following is an invalid variable name in Ansible?

A. host1st_ref
B. host-first-ref
C. Host1stRef
D. host_first_ref
Suggested answer: B

Explanation:

Variable names can contain letters, numbers, and underscores, and should always start with a letter. Hyphens are not allowed, which makes host-first-ref invalid. Other invalid examples include 'host first ref' (contains spaces) and '1st_host_ref' (starts with a digit).

Reference: http://docs.ansible.com/ansible/playbooks_variables.html#what-makes-a-valid-variable-name
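
For illustration, here is a minimal Python sketch of the naming rule described above (letters, digits, and underscores, starting with a letter); the regex is an assumption based on that description, not Ansible's own validator:

```python
import re

# Approximation of the rule quoted in the explanation: letters, digits,
# and underscores only, and the name must start with a letter.
VALID_NAME = re.compile(r"^[a-zA-Z][a-zA-Z0-9_]*$")

for name in ["host1st_ref", "host-first-ref", "Host1stRef", "host_first_ref"]:
    print(name, "->", "valid" if VALID_NAME.match(name) else "invalid")

# Only "host-first-ref" is rejected: hyphens are not permitted.
```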

A company wants to automatically re-create its infrastructure using AWS CloudFormation as part of the company's quality assurance (QA) pipeline. For each QA run, a new VPC must be created in a single account, resources must be deployed into the VPC, and tests must be run against this new infrastructure. The company policy states that all VPCs must be peered with a central management VPC to allow centralized logging. The company has existing CloudFormation templates to deploy its VPC and associated resources.

Which combination of steps will achieve the goal in a way that is automated and repeatable? (Choose two.)

A. Create an AWS Lambda function that is invoked by an Amazon CloudWatch Events rule when a CreateVpcPeeringConnection API call is made. The Lambda function should check the source of the peering request, accept the request, and update the route tables for the management VPC to allow traffic to go over the peering connection.
B. In the CloudFormation template: invoke a custom resource to generate unique VPC CIDR ranges for the VPC and subnets, create a peering connection to the management VPC, and update route tables to allow traffic to the management VPC.
C. In the CloudFormation template: use the Fn::Cidr function to allocate an unused CIDR range for the VPC and subnets, create a peering connection to the management VPC, and update route tables to allow traffic to the management VPC.
D. Modify the CloudFormation template to include a mappings object that includes a list of /16 CIDR ranges for each account where the stack will be deployed.
E. Use CloudFormation StackSets to deploy the VPC and associated resources to multiple AWS accounts, using a custom resource to allocate unique CIDR ranges. Create peering connections from each VPC to the central management VPC and accept those connections in the management VPC.
Suggested answer: A, B
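
To make answer A concrete, below is a hedged Python sketch of a Lambda handler that accepts a peering request aimed at the management VPC and adds a return route. The VPC and route table IDs are hypothetical placeholders, and the exact shape of the CloudTrail-sourced event should be verified before relying on it:

```python
import boto3

ec2 = boto3.client("ec2")

MANAGEMENT_VPC_ID = "vpc-0123456789abcdef0"          # hypothetical
MANAGEMENT_ROUTE_TABLE_ID = "rtb-0123456789abcdef0"  # hypothetical

def handler(event, context):
    # CloudWatch Events delivers the CloudTrail record for the
    # CreateVpcPeeringConnection call under event["detail"].
    pcx = event["detail"]["responseElements"]["vpcPeeringConnection"]

    # Only accept requests that target the management VPC.
    if pcx["accepterVpcInfo"]["vpcId"] != MANAGEMENT_VPC_ID:
        return

    pcx_id = pcx["vpcPeeringConnectionId"]
    ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

    # Route the requester's CIDR over the new peering connection.
    ec2.create_route(
        RouteTableId=MANAGEMENT_ROUTE_TABLE_ID,
        DestinationCidrBlock=pcx["requesterVpcInfo"]["cidrBlock"],
        VpcPeeringConnectionId=pcx_id,
    )
```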

A company uses federated access for its AWS environment. The company creates and manages IAM roles by using AWS CloudFormation from a CI/CD pipeline. All changes should be made to the IAM roles through the pipeline. The company’s security team discovers that out-of-band changes are being made to the IAM roles. The security team needs a way to detect when these out-of-band changes occur. What should a DevOps engineer do to meet this requirement?

A. Use Amazon Inspector rules to detect and notify when an AWS CloudFormation stack has a configuration change.
B. Use AWS Trusted Advisor to detect and notify when an AWS CloudFormation stack has a configuration change.
C. Use AWS CloudTrail to detect and notify when an AWS CloudFormation stack has a configuration change.
D. Use an AWS Config rule to detect and notify when AWS CloudFormation drift detection identifies a configuration change.
Suggested answer: C

Explanation:

Reference: https://aws.amazon.com/blogs/mt/how-to-track-configuration-changes-to-cloudformation-stacks-using-aws-config/
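
For context, option D refers to CloudFormation drift detection, which can also be triggered programmatically. A hedged sketch, assuming a stack named iam-roles-stack (hypothetical):

```python
import time
import boto3

cfn = boto3.client("cloudformation")

# Start a drift detection run for the stack that manages the IAM roles.
detection_id = cfn.detect_stack_drift(StackName="iam-roles-stack")["StackDriftDetectionId"]

# Poll until the run completes.
while True:
    status = cfn.describe_stack_drift_detection_status(
        StackDriftDetectionId=detection_id
    )
    if status["DetectionStatus"] != "DETECTION_IN_PROGRESS":
        break
    time.sleep(5)

# "DRIFTED" indicates out-of-band changes to stack-managed resources.
print(status["StackDriftStatus"])
```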

A company has mandated a global encryption-at-rest policy. A DevOps engineer has been tasked to ensure that new data uploaded to both new and existing Amazon S3 buckets is encrypted at rest across the company’s AWS Organizations organization. There are a number of legacy applications deployed on AWS that use Amazon S3 and do not store data encrypted at rest. These applications MUST continue to operate. The engineer must ensure S3 encryption at rest across the organization without requiring an application code change.

How should this be accomplished with MINIMAL effort?

A. Develop an AWS Lambda function that lists all Amazon S3 buckets in a given account and applies default encryption to all S3 buckets that either do not have it enabled or have an S3 bucket policy that does not explicitly deny put-object requests without server-side encryption. Deploy the Lambda function, along with an Amazon EventBridge (Amazon CloudWatch Events) scheduled rule, with AWS CloudFormation StackSets to all accounts within the organization.
B. Enable the AWS Config s3-bucket-server-side-encryption-enabled managed rule, which checks for S3 buckets that either do not have S3 default encryption enabled or have an S3 bucket policy that does not explicitly deny put-object requests without server-side encryption. Add the AWS-EnableS3BucketEncryption remediation action to the AWS Config rule to enable default encryption on any S3 buckets that are not compliant. Use AWS Config organizations integration to deploy the rule across all accounts in the organization.
C. Enable an AWS Config custom rule that checks for S3 buckets that do not have a bucket policy denying access to s3:PutObject unless the x-amz-server-side-encryption S3 condition is met with an AES256 value or x-amz-server-side-encryption is not present. Add a custom remediation action to the AWS Config rule that will apply the bucket policy if the S3 bucket is non-compliant. Use AWS Config organizations integration to deploy the rule across all accounts in the organization.
D. Write an SCP that denies access to s3:PutObject unless either the x-amz-server-side-encryption S3 condition is met with an AES256 value or x-amz-server-side-encryption is not present. Apply the SCP to the root of the organization to enforce the policy across the entire organization.
Suggested answer: B
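
The remediation that answer B automates boils down to enabling S3 default encryption per bucket. A minimal boto3 sketch, with a hypothetical bucket name:

```python
import boto3

s3 = boto3.client("s3")

# Enable SSE-S3 (AES256) default encryption; new objects are then
# encrypted at rest without any application code change.
s3.put_bucket_encryption(
    Bucket="legacy-app-bucket",  # hypothetical
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
        ]
    },
)
```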

You have recently deployed an application on EC2 instances behind an ELB. After a couple of weeks, customers complain that they are receiving errors from the application. You want to diagnose the errors using the ELB access logs, but the access logs are empty. What is the reason for this?

A. You do not have the appropriate permissions to access the logs
B. You do not have your CloudWatch metrics correctly configured
C. ELB Access logs are only available for a maximum of one week
D. Access logging is an optional feature of Elastic Load Balancing that is disabled by default
Suggested answer: D

Explanation:

Elastic Load Balancing provides access logs that capture detailed information about requests sent to your load balancer. Each log contains information such as the time the request was received, the client's IP address, latencies, request paths, and server responses. You can use these access logs to analyze traffic patterns and to troubleshoot issues. Access logging is an optional feature of Elastic Load Balancing that is disabled by default. After you enable access logging for your load balancer, Elastic Load Balancing captures the logs and stores them in the Amazon S3 bucket that you specify. You can disable access logging at any time.
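
A hedged sketch of enabling access logging on a Classic Load Balancer with boto3; the load balancer and bucket names are hypothetical, and the target bucket must have a policy that allows ELB to write to it:

```python
import boto3

elb = boto3.client("elb")

elb.modify_load_balancer_attributes(
    LoadBalancerName="my-load-balancer",  # hypothetical
    LoadBalancerAttributes={
        "AccessLog": {
            "Enabled": True,
            "S3BucketName": "my-elb-access-logs",  # hypothetical
            "EmitInterval": 60,                    # publish logs every 60 minutes
            "S3BucketPrefix": "prod",
        }
    },
)
```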

A company is building a web and mobile application that uses a serverless architecture powered by AWS Lambda and Amazon API Gateway. The company wants to fully automate the backend Lambda deployment based on code that is pushed to the appropriate environment branch in an AWS CodeCommit repository.

The deployment must have the following:

Separate environment pipelines for testing and production.

Automatic deployment that occurs for test environments only.

Which steps should be taken to meet these requirements?

A. Configure a new AWS CodePipeline service. Create a CodeCommit repository for each environment. Set up CodePipeline to retrieve the source code from the appropriate repository. Set up the deployment step to deploy the Lambda functions with AWS CloudFormation.
B. Create two AWS CodePipeline configurations for test and production environments. Configure the production pipeline to have a manual approval step. Create a CodeCommit repository for each environment. Set up each CodePipeline to retrieve the source code from the appropriate repository. Set up the deployment step to deploy the Lambda functions with AWS CloudFormation.
C. Create two AWS CodePipeline configurations for test and production environments. Configure the production pipeline to have a manual approval step. Create one CodeCommit repository with a branch for each environment. Set up each CodePipeline to retrieve the source code from the appropriate branch in the repository. Set up the deployment step to deploy the Lambda functions with AWS CloudFormation.
D. Create an AWS CodeBuild configuration for test and production environments. Configure the production pipeline to have a manual approval step. Create one CodeCommit repository with a branch for each environment. Push the Lambda function code to an Amazon S3 bucket. Set up the deployment step to deploy the Lambda functions from the S3 bucket.
Suggested answer: C
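
To illustrate the branch-per-environment source that answer C describes, here is a hedged, abbreviated Python sketch of the CodeCommit source stage each pipeline would use; only the stage definition is shown, and the repository and branch names are hypothetical:

```python
def source_stage(branch_name):
    # Stage definition in the shape expected by codepipeline.create_pipeline;
    # both pipelines share one repository and differ only in BranchName.
    return {
        "name": "Source",
        "actions": [
            {
                "name": "CodeCommitSource",
                "actionTypeId": {
                    "category": "Source",
                    "owner": "AWS",
                    "provider": "CodeCommit",
                    "version": "1",
                },
                "configuration": {
                    "RepositoryName": "backend-app",  # hypothetical
                    "BranchName": branch_name,
                },
                "outputArtifacts": [{"name": "SourceOutput"}],
            }
        ],
    }

test_source = source_stage("test")  # auto-deploys on push to the test branch
prod_source = source_stage("prod")  # prod pipeline adds a manual approval stage
```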

Which of the following is not a valid source for OpsWorks custom cookbook repositories?

A. HTTP(S)
B. Git
C. AWS EBS
D. Subversion
Suggested answer: C

Explanation:

Linux stacks can install custom cookbooks from any of the following repository types: HTTP or Amazon S3 archives, which can be either public or private (Amazon S3 is typically the preferred option for a private archive), and Git and Subversion repositories, which provide source control and the ability to have multiple versions. Amazon EBS is not a supported cookbook repository source.

Reference: http://docs.aws.amazon.com/opsworks/latest/userguide/workingcookbook-installingcustomenable.html
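
A hedged boto3 sketch of pointing an OpsWorks stack at a custom cookbook repository; all names and ARNs are hypothetical placeholders. Valid source types include git, svn, archive, and s3, and EBS is notably absent:

```python
import boto3

opsworks = boto3.client("opsworks")

opsworks.create_stack(
    Name="demo-stack",  # hypothetical
    Region="us-east-1",
    ServiceRoleArn="arn:aws:iam::111122223333:role/aws-opsworks-service-role",  # hypothetical
    DefaultInstanceProfileArn="arn:aws:iam::111122223333:instance-profile/aws-opsworks-ec2-role",  # hypothetical
    UseCustomCookbooks=True,
    CustomCookbooksSource={
        "Type": "git",  # git, svn, archive, or s3; EBS is not an option
        "Url": "https://github.com/example/cookbooks.git",  # hypothetical
    },
)
```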

A user has attached an EBS volume to a running Linux instance as a "/dev/sdf" device. The user is unable to see the attached device when he runs the command "df -h". What is the possible reason for this?

A. The volume is not in the same AZ of the instance
B. The volume is not formatted
C. The volume is not attached as a root device
D. The volume is not mounted
Suggested answer: D

Explanation:

When a user creates an EBS volume and attaches it to an instance as a device, the device must still be mounted (for example, with the mount command) before it is usable. The df -h command lists only mounted file systems, so an attached but unmounted volume will not appear in its output.

Your company operates a website for promoters to sell tickets for entertainment events. You are using a load balancer in front of an Auto Scaling group of web servers. Promotion of popular events can cause surges of website visitors. During scale-out at these times, newly launched instances are unable to complete their configuration quickly enough, leading to user disappointment. Which options should you choose to improve scaling yet minimize costs? (Choose two.)

A. Create an AMI with the application pre-configured. Create a new Auto Scaling launch configuration using this new AMI, and configure the Auto Scaling group to launch with this AMI.
B. Use Auto Scaling pre-warming to launch instances before they are required. Configure pre-warming to use the CPU trend CloudWatch metric for the group.
C. Publish a custom CloudWatch metric from your application on the number of tickets sold, and create an Auto Scaling policy based on this.
D. Use the history of past scaling events for similar event sales to predict future scaling requirements. Use the Auto Scaling scheduled scaling feature to vary the size of the fleet.
E. Configure an Amazon S3 bucket for website hosting. Upload into the bucket an HTML holding page with its 'x-amz-website-redirect-location' metadata property set to the load balancer endpoint. Configure Elastic Load Balancing to redirect to the holding page when the load on web servers is above a certain level.
Suggested answer: A, D
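
Answer D's scheduled scaling can be set up ahead of an anticipated surge. A hedged boto3 sketch with hypothetical names and times:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Grow the fleet shortly before tickets for a popular event go on sale,
# so instances are fully configured before the traffic arrives.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-asg",            # hypothetical
    ScheduledActionName="pre-sale-scale-out",  # hypothetical
    StartTime="2024-06-01T17:00:00Z",          # hypothetical sale time
    MinSize=10,
    MaxSize=40,
    DesiredCapacity=20,
)
```

Combined with a pre-baked AMI (answer A), newly launched instances need little or no post-boot configuration.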

A user has created a new EBS volume from an existing snapshot and wants to mount the volume on the instance to which it is attached. Which of the options below is a required step before the user can mount the volume?

A. Run a cyclic check on the device for data consistency
B. Create the file system of the volume
C. Resize the volume as per the original snapshot size
D. No step is required. The user can directly mount the device
Suggested answer: D

Explanation:

When a user is trying to mount a blank EBS volume, the user must first create a file system on the volume (for example, with mkfs). If the volume was created from an existing snapshot, the user does not need to create a file system, as doing so would wipe out the existing data; the volume can be mounted directly.
