
Amazon DOP-C01 Practice Test - Questions Answers, Page 44


Your application stores sensitive information on an EBS volume attached to your EC2 instance. How can you protect your information? (Choose two.)

A. Unmount the EBS volume, take a snapshot and encrypt the snapshot. Re-mount the Amazon EBS volume.
B. It is not possible to encrypt an EBS volume; you must use a lifecycle policy to transfer data to S3 for encryption.
C. Copy the unencrypted snapshot and check the box to encrypt the new snapshot. Volumes restored from this encrypted snapshot will also be encrypted.
D. Create and mount a new, encrypted Amazon EBS volume. Move the data to the new volume. Delete the old Amazon EBS volume.
Suggested answer: C, D

Explanation:

These steps are described in the AWS documentation.

To migrate data between encrypted and unencrypted volumes

1) Create your destination volume (encrypted or unencrypted, depending on your need).

2) Attach the destination volume to the instance that hosts the data to migrate.

3) Make the destination volume available by following the procedures in Making an Amazon EBS Volume Available for Use.

For Linux instances, you can create a mount point at /mnt/destination and mount the destination volume there.

4) Copy the data from your source directory to the destination volume. It may be most convenient to use a bulk-copy utility for this.

To encrypt a volume's data by means of snapshot copying

1) Create a snapshot of your unencrypted EBS volume. This snapshot is also unencrypted.

2) Copy the snapshot while applying encryption parameters. The resulting target snapshot is encrypted.

3) Restore the encrypted snapshot to a new volume, which is also encrypted.
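The snapshot-copy route in answer C can be sketched with boto3-style calls. This is a minimal sketch, not a complete implementation: the EC2 client is injected as a parameter, the volume ID and Availability Zone suffix are placeholders, and waiters and error handling are omitted.

```python
def encrypt_via_snapshot_copy(ec2, volume_id, region, kms_key_id=None):
    """Snapshot an unencrypted volume, copy the snapshot with encryption
    enabled, and restore an encrypted volume from the copy (answer C)."""
    snap_id = ec2.create_snapshot(VolumeId=volume_id)["SnapshotId"]
    params = {
        "SourceSnapshotId": snap_id,
        "SourceRegion": region,
        "Encrypted": True,  # the "check the box to encrypt" step
    }
    if kms_key_id:
        params["KmsKeyId"] = kms_key_id  # defaults to the aws/ebs key if omitted
    enc_snap_id = ec2.copy_snapshot(**params)["SnapshotId"]
    # Volumes restored from an encrypted snapshot are themselves encrypted.
    vol = ec2.create_volume(
        SnapshotId=enc_snap_id,
        AvailabilityZone=region + "a",  # placeholder AZ
    )
    return vol["VolumeId"]
```

Because the client is injected, the flow can be exercised with a stub before pointing it at a real boto3 `ec2` client.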

You have just come from your Chief Information Security Officer's (CISO) office with the instructions to provide an audit report of all AWS network rules used by the organization's Amazon EC2 instances. You have discovered that a single Describe-Security-Groups API call will return all of an account's security groups and rules within a region. You create the following pseudo-code to create the required report:

- Parse "aws ec2 describe-security-groups" output

- For each security group

- Create report of ingress and egress rules

Which two additional pieces of logic should you include to meet the CISO's requirements? (Choose two.)

A. Parse security groups in each region.
B. Parse security groups in each Availability Zone and region.
C. Evaluate VPC network access control lists.
D. Evaluate AWS CloudTrail logs.
E. Evaluate Elastic Load Balancing access control lists.
F. Parse CloudFront access control lists.
Suggested answer: A, C
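The pseudo-code extended with answers A and C might look roughly like the sketch below. `format_sg_report` is a hypothetical helper, and the report format is illustrative only; the boto3 import is deferred into the loop function so the formatting logic has no AWS dependency.

```python
def format_sg_report(groups):
    """Flatten describe_security_groups output into simple report lines."""
    lines = []
    for g in groups:
        for rule in g.get("IpPermissions", []):
            lines.append(f"{g['GroupId']} ingress {rule.get('IpProtocol')}")
        for rule in g.get("IpPermissionsEgress", []):
            lines.append(f"{g['GroupId']} egress {rule.get('IpProtocol')}")
    return lines

def audit_all_regions():
    import boto3  # deferred so the formatter above stands alone
    ec2 = boto3.client("ec2")
    regions = [r["RegionName"] for r in ec2.describe_regions()["Regions"]]
    for region in regions:  # answer A: one pass per region
        client = boto3.client("ec2", region_name=region)
        groups = client.describe_security_groups()["SecurityGroups"]
        yield region, format_sg_report(groups)
        # Answer C: network ACLs require their own call, e.g.
        # client.describe_network_acls()
```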

You want to pass queue messages that are 1GB each. How should you achieve this?

A. Use Kinesis as a buffer stream for message bodies. Store the checkpoint id for the placement in the Kinesis Stream in SQS.
B. Use the Amazon SQS Extended Client Library for Java and Amazon S3 as a storage mechanism for message bodies.
C. Use SQS's support for message partitioning and multi-part uploads on Amazon S3.
D. Use AWS EFS as a shared pool storage medium. Store filesystem pointers to the files on disk in the SQS message bodies.
Suggested answer: B

Explanation:

You can manage Amazon SQS messages with Amazon S3. This is especially useful for storing and retrieving messages with a message size of up to 2 GB. To manage Amazon SQS messages with Amazon S3, use the Amazon SQS Extended Client Library for Java.

Reference: http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/s3-messages.html
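The Extended Client Library itself is Java-only; the sketch below re-implements the same claim-check pattern in Python to show the mechanics. The 256 KB threshold is SQS's real maximum message size; the bucket name and key scheme are placeholders, and the S3 client is injected so the logic can be tested locally.

```python
import json
import uuid

SQS_MAX_BYTES = 256 * 1024  # SQS maximum message size

def prepare_message(body: bytes, s3, bucket: str):
    """Return the payload to send over SQS: inline if small enough,
    otherwise an S3 pointer (the 'claim check') to the stored body."""
    if len(body) <= SQS_MAX_BYTES:
        return body.decode("utf-8", errors="replace")
    key = f"sqs-payloads/{uuid.uuid4()}"  # placeholder key scheme
    s3.put_object(Bucket=bucket, Key=key, Body=body)
    return json.dumps({"s3_bucket": bucket, "s3_key": key})
```

The consumer reverses the check: if the message parses as a pointer, it fetches the real body from S3.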

What is web identity federation?

A. Use of an identity provider like Google or Facebook to become an AWS IAM User.
B. Use of an identity provider like Google or Facebook to exchange for temporary AWS security credentials.
C. Use of AWS IAM User tokens to log in as a Google or Facebook user.
D. Use of AWS STS Tokens to log in as a Google or Facebook user.
Suggested answer: B

Explanation:

Users of your app can sign in using a well-known identity provider (IdP), such as Login with Amazon, Facebook, Google, or any other OpenID Connect (OIDC)-compatible IdP, receive an authentication token, and then exchange that token for temporary security credentials in AWS that map to an IAM role with permissions to use the resources in your AWS account.

Reference: http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_oidc.html
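The token exchange in answer B can be sketched as below: the app hands the IdP's token to STS and gets a temporary credential triple back. The role ARN and token values are placeholders, and the STS client is injected; with boto3 it would be `boto3.client("sts")`.

```python
def credentials_from_sts_response(resp):
    """Pull the temporary credential triple out of an STS response."""
    c = resp["Credentials"]
    return c["AccessKeyId"], c["SecretAccessKey"], c["SessionToken"]

def federated_credentials(sts, role_arn, idp_token):
    """Exchange an IdP-issued OIDC token for temporary AWS credentials."""
    resp = sts.assume_role_with_web_identity(
        RoleArn=role_arn,
        RoleSessionName="web-user",      # placeholder session name
        WebIdentityToken=idp_token,      # token issued by Google/Facebook/etc.
    )
    return credentials_from_sts_response(resp)
```

Note that the caller never becomes an IAM User (answer A); the credentials are temporary and scoped to the assumed role.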

There is a very serious outage at AWS. EC2 is not affected, but your EC2 instance deployment scripts stopped working in the region with the outage. What might be the issue?

A. The AWS Console is down, so your CLI commands do not work.
B. S3 is unavailable, so you can't create EBS volumes from a snapshot you use to deploy new volumes.
C. AWS turns off the DeployCode API call when there are major outages, to protect from system floods.
D. None of the other answers make sense. If EC2 is not affected, it must be some other issue.
Suggested answer: B

Explanation:

S3 stores all snapshots. If S3 is unavailable, snapshots are unavailable. Amazon EC2 also uses Amazon S3 to store snapshots (backup copies) of the data volumes. You can use snapshots for recovering data quickly and reliably in case of application or system failures. You can also use snapshots as a baseline to create multiple new data volumes, expand the size of an existing data volume, or move data volumes across multiple Availability Zones, thereby making your data usage highly scalable. For more information about using data volumes and snapshots, see Amazon Elastic Block Store.

Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AmazonS3.html

What method should you use to author automation if you want a script to wait for a CloudFormation stack to finish creating?

A. Event subscription using SQS.
B. Event subscription using SNS.
C. Poll using ListStacks / list-stacks
D. Poll using GetStackStatus / get-stack-status
Suggested answer: C

Explanation:

Event-driven systems are good for IFTTT-style logic, but only polling will make a script wait for completion. ListStacks / list-stacks is a real method; GetStackStatus / get-stack-status is not.

Reference: http://docs.aws.amazon.com/cli/latest/reference/cloudformation/list-stacks.html
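The polling loop from answer C can be sketched as below. The status fetcher is injected so the loop can be tested without AWS; with boto3 it would filter the stack out of `cloudformation.list_stacks()["StackSummaries"]`. The terminal-status set shown is a subset chosen for illustration.

```python
import time

# Illustrative subset of terminal stack statuses
TERMINAL = {"CREATE_COMPLETE", "CREATE_FAILED", "ROLLBACK_COMPLETE"}

def wait_for_stack(fetch_status, interval=0.0, max_polls=100):
    """Poll until the stack reaches a terminal status; return that status."""
    for _ in range(max_polls):
        status = fetch_status()
        if status in TERMINAL:
            return status
        time.sleep(interval)  # back off between list-stacks calls
    raise TimeoutError("stack did not reach a terminal status")
```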

An Amazon EC2 instance with no internet access is running in a Virtual Private Cloud (VPC) and needs to download an object from a restricted Amazon S3 bucket. When the DevOps Engineer tries to gain access to the object, an AccessDenied error is received. What are the possible causes for this error? (Choose three.)

A. The S3 bucket default encryption is enabled.
B. There is an error in the S3 bucket policy.
C. There is an error in the VPC endpoint policy.
D. The object has been moved to Amazon Glacier.
E. There is an error in the IAM role configuration.
F. S3 versioning is enabled.
Suggested answer: B, C, E

Explanation:

Reference: https://aws.amazon.com/premiumsupport/knowledge-center/s3-403-upload-bucket/
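To illustrate answer C, a gateway endpoint policy like the hedged example below would return AccessDenied for any request through the endpoint to a bucket other than the (placeholder-named) allowed one:

```json
{
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-allowed-bucket/*"
    }
  ]
}
```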

A company uses a complex system that consists of networking, IAM policies, and multiple three-tier applications. Requirements are still being defined for a new system, so the number of AWS components present in the final design is not known. The DevOps Engineer needs to begin defining AWS resources using AWS CloudFormation to automate and version-control the new infrastructure. What is the best practice for using CloudFormation to create new environments?

A. Manually construct the networking layer using Amazon VPC and then define all other resources using CloudFormation.
B. Create a single template to encompass all resources that are required for the system so there is only one template to version-control.
C. Create multiple separate templates for each logical part of the system, use cross-stack references in CloudFormation, and maintain several templates in version control.
D. Create many separate templates for each logical part of the system, and provide the outputs from one to the next using an Amazon EC2 instance running SDK for granular control.
Suggested answer: C
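A cross-stack reference (answer C) ties separate templates together via exported outputs. The fragment below is a hedged illustration with made-up logical and export names: one stack exports a VPC ID, another imports it.

```yaml
# network.yaml -- exports the VPC ID for other stacks (illustrative names)
Outputs:
  VpcId:
    Value: !Ref Vpc
    Export:
      Name: network-VpcId

# app.yaml -- consumes it via a cross-stack reference
Resources:
  AppSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: App tier
      VpcId: !ImportValue network-VpcId
```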

You are building a Docker image with the following Dockerfile. How many layers will the resulting image have?

FROM scratch

CMD /app/hello.sh

A. 2
B. 4
C. 1
D. 3
Suggested answer: D

Explanation:

FROM scratch

CMD /app/hello.sh

The image contains all the layers from the base image (only one in this case, since we're building from scratch), plus a new layer with the CMD instruction, and a read-write container layer.

Reference: https://docs.docker.com/engine/userguide/storagedriver/imagesandcontainers/#sharingpromotes-smaller-images

A company has an application deployed using Amazon ECS with data stored in an Amazon DynamoDB table. The company wants the application to fail over to another Region in a disaster recovery scenario. The application must also efficiently recover from any accidental data loss events. The RPO for the application is 1 hour and the RTO is 2 hours. Which highly available solution should a DevOps engineer recommend?

A. Change the configuration of the existing DynamoDB table. Enable this as a global table and specify the second Region that will be used. Enable DynamoDB point-in-time recovery.
B. Enable DynamoDB Streams for the table and create an AWS Lambda function to write the stream data to an S3 bucket in the second Region. Schedule a job for every 2 hours to use AWS Data Pipeline to restore the database to the failover Region.
C. Export the DynamoDB table every 2 hours using AWS Data Pipeline to an Amazon S3 bucket in the second Region. Use Data Pipeline in the second Region to restore the export from S3 into the second DynamoDB table.
D. Use AWS DMS to replicate the data every hour. Set the original DynamoDB table as the source and the new DynamoDB table as the target.
Suggested answer: B
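The streams-to-S3 leg of answer B could look roughly like the handler below, which copies each DynamoDB Stream record into the DR Region's bucket. The bucket name and key scheme are placeholders, and the S3 client is passed in so the handler can be exercised without AWS; a deployed Lambda would build it with boto3.

```python
import json

def handler(event, s3, bucket="dr-backup-bucket"):
    """Persist each DynamoDB Stream record to S3 so the failover Region
    can replay them; returns the number of records written."""
    records = event.get("Records", [])
    for record in records:
        key = f"stream/{record['eventID']}.json"  # placeholder key scheme
        s3.put_object(Bucket=bucket, Key=key,
                      Body=json.dumps(record["dynamodb"]).encode())
    return len(records)
```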
Total 557 questions