Amazon DOP-C02 Practice Test - Questions Answers, Page 23

A company gives its employees limited rights to AWS. DevOps engineers have the ability to assume an administrator role. For tracking purposes, the security team wants to receive a near-real-time notification when the administrator role is assumed.

How should this be accomplished?

A.
Configure AWS Config to publish logs to an Amazon S3 bucket. Use Amazon Athena to query the logs and send a notification to the security team when the administrator role is assumed.
B.
Configure Amazon GuardDuty to monitor when the administrator role is assumed and send a notification to the security team.
C.
Create an Amazon EventBridge event rule using an AWS Management Console sign-in events event pattern that publishes a message to an Amazon SNS topic if the administrator role is assumed.
D.
Create an Amazon EventBridge event rule using an AWS CloudTrail event pattern for the AWS API call to invoke an AWS Lambda function that publishes a message to an Amazon SNS topic if the administrator role is assumed.
Suggested answer: D

Explanation:

* Create an Amazon EventBridge Rule Using an AWS CloudTrail Event Pattern:

AWS CloudTrail logs API calls made in your account, including actions performed by roles.

Create an EventBridge rule that matches CloudTrail events where the AssumeRole API call is made to assume the administrator role.

* Invoke an AWS Lambda Function:

Configure the EventBridge rule to trigger a Lambda function whenever the rule's conditions are met.

The Lambda function will handle the logic to send a notification.

* Publish a Message to an Amazon SNS Topic:

The Lambda function will publish a message to an SNS topic to notify the security team.

Subscribe the security team's email address to this SNS topic to receive real-time notifications.

Example EventBridge rule pattern:

{
  "source": ["aws.sts"],
  "detail-type": ["AWS API Call via CloudTrail"],
  "detail": {
    "eventSource": ["sts.amazonaws.com"],
    "eventName": ["AssumeRole"],
    "requestParameters": {
      "roleArn": ["arn:aws:iam:::role/AdministratorRole"]
    }
  }
}

Example Lambda function (Node.js) to publish to SNS:

const AWS = require('aws-sdk');
const sns = new AWS.SNS();

exports.handler = async (event) => {
  const params = {
    Message: `Administrator role assumed: ${JSON.stringify(event.detail)}`,
    TopicArn: 'arn:aws:sns:<region>:<account-id>:<sns-topic>'
  };
  await sns.publish(params).promise();
};

Creating EventBridge Rules

Using AWS Lambda with Amazon SNS

A company has a fleet of Amazon EC2 instances that run Linux in a single AWS account. The company is using an AWS Systems Manager Automation task across the EC2 instances.

During the most recent patch cycle, several EC2 instances went into an error state because of insufficient available disk space. A DevOps engineer needs to ensure that the EC2 instances have sufficient available disk space during the patching process in the future.

Which combination of steps will meet these requirements? (Select TWO.)

A.
Ensure that the Amazon CloudWatch agent is installed on all EC2 instances.
B.
Create a cron job that is installed on each EC2 instance to periodically delete temporary files.
C.
Create an Amazon CloudWatch log group for the EC2 instances. Configure a cron job that is installed on each EC2 instance to write the available disk space to a CloudWatch log stream for the relevant EC2 instance.
D.
Create an Amazon CloudWatch alarm to monitor available disk space on all EC2 instances. Add the alarm as a safety control to the Systems Manager Automation task.
E.
Create an AWS Lambda function to periodically check for sufficient available disk space on all EC2 instances by evaluating each EC2 instance's respective Amazon CloudWatch log stream.
Suggested answer: A, D

Explanation:

* Ensure that the Amazon CloudWatch agent is installed on all EC2 instances:

The Amazon CloudWatch agent collects and logs metrics and sends them to Amazon CloudWatch.

To install the CloudWatch agent:

Download the CloudWatch agent package.

Install the agent on your EC2 instances.

Configure the agent to collect disk space metrics.
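As a reference point, a minimal CloudWatch agent configuration that collects disk usage (published as disk_used_percent in the CWAgent namespace) might look like the following sketch; the measurement and resource selections are illustrative:

```json
{
  "metrics": {
    "metrics_collected": {
      "disk": {
        "measurement": ["used_percent"],
        "resources": ["*"]
      }
    }
  }
}
```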

* Create an Amazon CloudWatch alarm to monitor available disk space on all EC2 instances. Add the alarm as a safety control to the Systems Manager Automation task:

Create CloudWatch alarms to monitor the available disk space and trigger notifications or actions when the disk space falls below a defined threshold.

Add the CloudWatch alarm to the Systems Manager Automation task to halt or fail the task if disk space is insufficient.

To create the alarm:

Navigate to the CloudWatch console and create a new alarm.

Set the metric to monitor (e.g., disk space utilization).

Define the threshold and notification actions.
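Since this environment is managed as code, the alarm could also be declared in CloudFormation. The following is a sketch only; the alarm name, threshold, and the need for instance-specific dimensions are assumptions:

```yaml
LowDiskSpaceAlarm:
  Type: AWS::CloudWatch::Alarm
  Properties:
    AlarmName: low-available-disk-space
    Namespace: CWAgent            # metrics published by the CloudWatch agent
    MetricName: disk_used_percent
    # Dimensions matching the instance, path, device, and filesystem
    # that the agent reports would be required here.
    Statistic: Average
    Period: 300
    EvaluationPeriods: 1
    Threshold: 90                 # alarm when the disk is more than 90 percent used
    ComparisonOperator: GreaterThanThreshold
```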

Amazon CloudWatch agent

Creating Amazon CloudWatch alarms

A company is migrating from its on-premises data center to AWS. The company currently uses a custom on-premises CI/CD pipeline solution to build and package software.

The company wants its software packages and dependent public repositories to be available in AWS CodeArtifact to facilitate the creation of application-specific pipelines.

Which combination of steps should the company take to update the CI/CD pipeline solution and to configure CodeArtifact with the LEAST operational overhead? (Select TWO.)

A.
Update the CI/CD pipeline to create a VM image that contains newly packaged software. Use AWS Import/Export to make the VM image available as an Amazon EC2 AMI. Launch the AMI with an attached IAM instance profile that allows CodeArtifact actions. Use AWS CLI commands to publish the packages to a CodeArtifact repository.
B.
Create an AWS Identity and Access Management Roles Anywhere trust anchor. Create an IAM role that allows CodeArtifact actions and that has a trust relationship on the trust anchor. Update the on-premises CI/CD pipeline to assume the new IAM role and to publish the packages to CodeArtifact.
C.
Create a new Amazon S3 bucket. Generate a presigned URL that allows the PutObject request. Update the on-premises CI/CD pipeline to use the presigned URL to publish the packages from the on-premises location to the S3 bucket. Create an AWS Lambda function that runs when packages are created in the bucket through a put command. Configure the Lambda function to publish the packages to CodeArtifact.
D.
For each public repository, create a CodeArtifact repository that is configured with an external connection. Configure the dependent repositories as upstream public repositories.
E.
Create a CodeArtifact repository that is configured with a set of external connections to the public repositories. Configure the external connections to be downstream of the repository.
Suggested answer: B, D

Explanation:

* Create an AWS Identity and Access Management Roles Anywhere trust anchor. Create an IAM role that allows CodeArtifact actions and that has a trust relationship on the trust anchor. Update the on-premises CI/CD pipeline to assume the new IAM role and to publish the packages to CodeArtifact:

Roles Anywhere allows on-premises servers to assume IAM roles, making it easier to integrate on-premises environments with AWS services.

Steps:

Create a trust anchor in IAM.

Create an IAM role with permissions for CodeArtifact actions (e.g., publishing packages).

Update the CI/CD pipeline to assume this role using the trust anchor.
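On the build servers, Roles Anywhere credentials are typically obtained through the aws_signing_helper credential process. A sketch of an AWS CLI profile follows; the certificate and key paths and the ARNs are placeholders:

```ini
[profile codeartifact-publisher]
credential_process = aws_signing_helper credential-process --certificate /path/to/cert.pem --private-key /path/to/key.pem --trust-anchor-arn <trust-anchor-arn> --profile-arn <profile-arn> --role-arn <role-arn>
```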

* For each public repository, create a CodeArtifact repository that is configured with an external connection. Configure the dependent repositories as upstream public repositories:

A CodeArtifact repository can have an external connection to a public repository (for example, npmjs or Maven Central). Packages requested through the connection are fetched from the public repository and cached in the CodeArtifact domain.

Steps:

Create a CodeArtifact repository for each required public repository and add the corresponding external connection.

Configure those repositories as upstream repositories of the repository that the application pipelines consume, so dependencies resolve through CodeArtifact with no additional infrastructure to operate.

IAM Roles Anywhere

CodeArtifact external connections

CodeArtifact upstream repositories

A company operates sensitive workloads across the AWS accounts that are in the company's organization in AWS Organizations. The company uses an IP address range to delegate IP addresses for Amazon VPC CIDR blocks and all non-cloud hardware.

The company needs a solution that prevents principals that are outside the company's IP address range from performing AWS actions in the organization's accounts.

Which solution will meet these requirements?

A.
Configure AWS Firewall Manager for the organization. Create an AWS Network Firewall policy that allows only source traffic from the company's IP address range. Set the policy scope to all accounts in the organization.
B.
In Organizations, create an SCP that denies source IP addresses that are outside of the company's IP address range. Attach the SCP to the organization's root.
C.
Configure Amazon GuardDuty for the organization. Create a GuardDuty trusted IP address list for the company's IP range. Activate the trusted IP list for the organization.
D.
In Organizations, create an SCP that allows source IP addresses that are inside of the company's IP address range. Attach the SCP to the organization's root.
Suggested answer: B

Explanation:

https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_examples_aws_deny-ip.html
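The linked AWS example can be adapted into an SCP such as the following sketch; the CIDR ranges are placeholders for the company's IP address range, and the aws:ViaAWSService condition avoids breaking AWS services that make calls on the principal's behalf:

```json
{
  "Version": "2012-10-17",
  "Statement": {
    "Sid": "DenyAccessFromOutsideCorporateRange",
    "Effect": "Deny",
    "Action": "*",
    "Resource": "*",
    "Condition": {
      "NotIpAddress": {
        "aws:SourceIp": ["192.0.2.0/24", "203.0.113.0/24"]
      },
      "BoolIfExists": {
        "aws:ViaAWSService": "false"
      }
    }
  }
}
```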

A company uses AWS Organizations to manage its AWS accounts. The organization root has a child OU that is named Department. The Department OU has a child OU that is named Engineering. The default FullAWSAccess policy is attached to the root, the Department OU, and the Engineering OU.

The company has many AWS accounts in the Engineering OU. Each account has an administrative IAM role with the AdministratorAccess IAM policy attached. The default FullAWSAccess policy is also attached to each account.

A DevOps engineer plans to remove the FullAWSAccess policy from the Department OU. The DevOps engineer will replace the policy with a policy that contains an Allow statement for all Amazon EC2 API operations.

What will happen to the permissions of the administrative IAM roles as a result of this change?

A.
All API actions on all resources will be allowed.
B.
All API actions on EC2 resources will be allowed. All other API actions will be denied.
C.
All API actions on all resources will be denied.
D.
All API actions on EC2 resources will be denied. All other API actions will be allowed.
Suggested answer: B

Explanation:

* Impact of Removing FullAWSAccess and Adding Policy for EC2 Actions:

The FullAWSAccess policy allows all actions on all resources by default. Removing this policy from the Department OU will limit the permissions that accounts within this OU inherit from the parent OU.

Adding a policy that allows only Amazon EC2 API operations will restrict the permissions to EC2 actions only.

* Permissions of Administrative IAM Roles:

The administrative IAM roles in the Engineering OU have the AdministratorAccess policy attached, which grants full access to all AWS services and resources.

Since SCPs are restrictions that apply at the organizational level, removing FullAWSAccess and replacing it with a policy allowing only EC2 actions means that for all accounts in the Engineering OU:

They will have full access to EC2 actions due to the new SCP.

They will be denied any action that is not covered by the SCP; hence, non-EC2 API actions will be denied.

* Conclusion:

All API actions on EC2 resources will be allowed.

All other API actions will be denied due to the absence of a broader allow policy.
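The replacement SCP described above would look roughly like the following sketch; combined with SCP evaluation logic (an action must be allowed at every level of the hierarchy), it limits the accounts to EC2 actions regardless of the IAM policies inside the accounts:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ec2:*",
      "Resource": "*"
    }
  ]
}
```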

A company is running a custom-built application that processes records. All the components run on Amazon EC2 instances that run in an Auto Scaling group. Each record's processing is a multistep sequential action that is compute-intensive. Each step is always completed in 5 minutes or less.

A limitation of the current system is that if any step fails, the application has to reprocess the record from the beginning. The company wants to update the architecture so that the application must reprocess only the failed steps.

What is the MOST operationally efficient solution that meets these requirements?

A.
Create a web application to write records to Amazon S3. Use S3 Event Notifications to publish to an Amazon Simple Notification Service (Amazon SNS) topic. Use an EC2 instance to poll Amazon SNS and start processing. Save intermediate results to Amazon S3 to pass on to the next step.
B.
Perform the processing steps by using logic in the application. Convert the application code to run in a container. Use AWS Fargate to manage the container instances. Configure the container to invoke itself to pass the state from one step to the next.
C.
Create a web application to pass records to an Amazon Kinesis data stream. Decouple the processing by using the Kinesis data stream and AWS Lambda functions.
D.
Create a web application to pass records to AWS Step Functions. Decouple the processing into Step Functions tasks and AWS Lambda functions.
Suggested answer: D

Explanation:

* Use AWS Step Functions to Orchestrate Processing:

AWS Step Functions lets you build distributed applications by combining AWS Lambda functions or other AWS services into workflows.

Decoupling the processing into Step Functions tasks enables you to retry individual steps without reprocessing the entire record.

* Architectural Steps:

Create a web application to pass records to AWS Step Functions:

The web application can be a simple frontend that receives input and triggers the Step Functions workflow.

Define a Step Functions state machine:

Each step in the state machine represents a processing stage. If a step fails, Step Functions can retry the step based on defined conditions.

Use AWS Lambda functions:

Lambda functions can be used to handle each processing step. These functions can be stateless and handle specific tasks, reducing the complexity of error handling and reprocessing logic.

* Operational Efficiency:

Using Step Functions and Lambda improves operational efficiency by providing built-in error handling, retries, and state management.

This architecture scales automatically and isolates failures to individual steps, ensuring only failed steps are retried.
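A minimal Amazon States Language sketch of this pattern, with hypothetical Lambda function ARNs, shows how a Retry block confines reprocessing to the failed step:

```json
{
  "StartAt": "StepOne",
  "States": {
    "StepOne": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:<region>:<account-id>:function:step-one",
      "Retry": [
        {
          "ErrorEquals": ["States.TaskFailed"],
          "IntervalSeconds": 5,
          "MaxAttempts": 3,
          "BackoffRate": 2.0
        }
      ],
      "Next": "StepTwo"
    },
    "StepTwo": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:<region>:<account-id>:function:step-two",
      "End": true
    }
  }
}
```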

AWS Step Functions

Building Workflows with Step Functions

A company has an organization in AWS Organizations. A DevOps engineer needs to maintain multiple AWS accounts that belong to different OUs in the organization. All resources, including IAM policies and Amazon S3 policies within an account, are deployed through AWS CloudFormation. All templates and code are maintained in an AWS CodeCommit repository. Recently, some developers have not been able to access an S3 bucket from some accounts in the organization.

The following policy is attached to the S3 bucket.

What should the DevOps engineer do to resolve this access issue?

A.
Modify the S3 bucket policy. Turn off the S3 Block Public Access setting on the S3 bucket. In the S3 policy, add the aws:SourceAccount condition. Add the AWS account IDs of all developers who are experiencing the issue.
B.
Verify that no IAM permissions boundaries are denying developers access to the S3 bucket. Make the necessary changes to IAM permissions boundaries. Use an AWS Config recorder in the individual developer accounts that are experiencing the issue to revert any changes that are blocking access. Commit the fix back into the CodeCommit repository. Invoke deployment through CloudFormation to apply the changes.
C.
Configure an SCP that stops anyone from modifying IAM resources in developer OUs. In the S3 policy, add the aws:SourceAccount condition. Add the AWS account IDs of all developers who are experiencing the issue. Commit the fix back into the CodeCommit repository. Invoke deployment through CloudFormation to apply the changes.
D.
Ensure that no SCP is blocking access for developers to the S3 bucket. Ensure that no IAM policy permissions boundaries are denying access to developer IAM users. Make the necessary changes to the SCP and IAM policy permissions boundaries in the CodeCommit repository. Invoke deployment through CloudFormation to apply the changes.
Suggested answer: D

Explanation:

Verify No SCP Blocking Access:

Ensure that no Service Control Policy (SCP) is blocking access for developers to the S3 bucket. SCPs are applied at the organization or organizational unit (OU) level in AWS Organizations and can restrict what actions users and roles in the affected accounts can perform.

Verify No IAM Policy Permissions Boundaries Blocking Access:

IAM permissions boundaries can limit the maximum permissions that a user or role can have. Verify that these boundaries are not restricting access to the S3 bucket.

Make Necessary Changes to SCP and IAM Policy Permissions Boundaries:

Adjust the SCPs and IAM permissions boundaries if they are found to be the cause of the access issue. Make sure these changes are reflected in the code maintained in the AWS CodeCommit repository.

Invoke Deployment Through CloudFormation:

Commit the updated policies to the CodeCommit repository.

Use AWS CloudFormation to deploy the changes across the relevant accounts and resources to ensure that the updated permissions are applied consistently.

By ensuring no SCPs or IAM policy permissions boundaries are blocking access and making necessary changes if they are, the DevOps engineer can resolve the access issue for developers trying to access the S3 bucket.

AWS SCPs

IAM Permissions Boundaries

Deploying CloudFormation Templates

A company is developing a web application's infrastructure using AWS CloudFormation. The database engineering team maintains the database resources in a CloudFormation template, and the software development team maintains the web application resources in a separate CloudFormation template. As the scope of the application grows, the software development team needs to use resources maintained by the database engineering team. However, both teams have their own review and lifecycle management processes that they want to keep. Both teams also require resource-level change-set reviews. The software development team would like to deploy changes to this template using their CI/CD pipeline.

Which solution will meet these requirements?

A.
Create a stack export from the database CloudFormation template and import those references into the web application CloudFormation template.
B.
Create a CloudFormation nested stack to make cross-stack resource references and parameters available in both stacks.
C.
Create a CloudFormation stack set to make cross-stack resource references and parameters available in both stacks.
D.
Create input parameters in the web application CloudFormation template and pass resource names and IDs from the database stack.
Suggested answer: A

Explanation:

* Stack Export and Import:

Use the Export feature in CloudFormation to share outputs from one stack (e.g., database resources) and use them as inputs in another stack (e.g., web application resources).

* Steps to Create Stack Export:

Define the resources in the database CloudFormation template and use the Outputs section to export necessary values.

Outputs:
  DBInstanceEndpoint:
    Value: !GetAtt DBInstance.Endpoint.Address
    Export:
      Name: DBInstanceEndpoint

* Steps to Import into Web Application Stack:

In the web application CloudFormation template, use the ImportValue function to import these exported values.

Resources:
  MyResource:
    Type: 'AWS::SomeResourceType'
    Properties:
      SomeProperty: !ImportValue DBInstanceEndpoint

* Resource-Level Change-Set Reviews:

Both teams can continue using their respective review processes, as changes to each stack are managed independently.

Use CloudFormation change sets to preview changes before deploying.

By exporting resources from the database stack and importing them into the web application stack, both teams can maintain their separate review and lifecycle management processes while sharing necessary resources.

AWS CloudFormation Export

AWS CloudFormation ImportValue

A company uses Amazon RDS for all databases in its AWS accounts. The company uses AWS Control Tower to build a landing zone that has an audit and logging account. All databases must be encrypted at rest for compliance reasons. The company's security engineer needs to receive notification about any noncompliant databases that are in the company's accounts.

Which solution will meet these requirements with the MOST operational efficiency?

A.
Use AWS Control Tower to activate the optional detective control (guardrail) to determine whether the RDS storage is encrypted. Create an Amazon Simple Notification Service (Amazon SNS) topic in the company's audit account. Create an Amazon EventBridge rule to filter noncompliant events from the AWS Control Tower control (guardrail) to notify the SNS topic. Subscribe the security engineer's email address to the SNS topic.
B.
Use AWS CloudFormation StackSets to deploy AWS Lambda functions to every account. Write the Lambda function code to determine whether the RDS storage is encrypted in the account the function is deployed to. Send the findings as an Amazon CloudWatch metric to the management account. Create an Amazon Simple Notification Service (Amazon SNS) topic. Create a CloudWatch alarm that notifies the SNS topic when metric thresholds are met. Subscribe the security engineer's email address to the SNS topic.
C.
Create a custom AWS Config rule in every account to determine whether the RDS storage is encrypted. Create an Amazon Simple Notification Service (Amazon SNS) topic in the audit account. Create an Amazon EventBridge rule to filter noncompliant events from the AWS Control Tower control (guardrail) to notify the SNS topic. Subscribe the security engineer's email address to the SNS topic.
D.
Launch an Amazon EC2 instance. Run an hourly cron job by using the AWS CLI to determine whether the RDS storage is encrypted in each AWS account. Store the results in an RDS database. Notify the security engineer by sending email messages from the EC2 instance when noncompliance is detected.
Suggested answer: A

Explanation:

Activate AWS Control Tower Guardrail:

Use AWS Control Tower to activate a detective guardrail that checks whether RDS storage is encrypted.

Create SNS Topic for Notifications:

Set up an Amazon Simple Notification Service (SNS) topic in the audit account to receive notifications about non-compliant databases.

Create EventBridge Rule to Filter Non-compliant Events:

Create an Amazon EventBridge rule that filters events related to the guardrail's findings on non-compliant RDS instances.

Configure the rule to send notifications to the SNS topic when non-compliant events are detected.

Subscribe Security Engineer's Email to SNS Topic:

Subscribe the security engineer's email address to the SNS topic to receive notifications when non-compliant databases are detected.

By using AWS Control Tower to activate a detective guardrail and setting up SNS notifications for non-compliant events, the company can efficiently monitor and ensure that all RDS databases are encrypted at rest.
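Because Control Tower detective controls are implemented as AWS Config rules, the EventBridge rule can match Config compliance-change events. A sketch of such an event pattern follows; a rule-name filter could be added to scope it to the RDS encryption control:

```json
{
  "source": ["aws.config"],
  "detail-type": ["Config Rules Compliance Change"],
  "detail": {
    "newEvaluationResult": {
      "complianceType": ["NON_COMPLIANT"]
    }
  }
}
```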

AWS Control Tower Guardrails

Amazon SNS

Amazon EventBridge

A DevOps engineer has created an AWS CloudFormation template that deploys an application on Amazon EC2 instances. The EC2 instances run Amazon Linux. The application is deployed to the EC2 instances by using shell scripts that contain user data. The EC2 instances have an IAM instance profile that has an IAM role with the AmazonSSMManagedInstanceCore managed policy attached.

The DevOps engineer has modified the user data in the CloudFormation template to install a new version of the application. The engineer has also applied the stack update. However, the application was not updated on the running EC2 instances. The engineer needs to ensure that the changes to the application are installed on the running EC2 instances.

Which combination of steps will meet these requirements? (Select TWO.)

A.
Configure the user data content to use the Multipurpose Internet Mail Extensions (MIME) multipart format. Set the scripts-user parameter to always in the text/cloud-config section.
B.
Refactor the user data commands to use the cfn-init helper script. Update the user data to install and configure the cfn-hup and cfn-init helper scripts to monitor and apply the metadata changes.
C.
Configure an EC2 launch template for the EC2 instances. Create a new EC2 Auto Scaling group. Associate the Auto Scaling group with the EC2 launch template. Use the AutoScalingScheduledAction update policy for the Auto Scaling group.
D.
Refactor the user data commands to use an AWS Systems Manager document (SSM document). Add an AWS CLI command in the user data to use Systems Manager Run Command to apply the SSM document to the EC2 instances.
E.
Refactor the user data commands to use an AWS Systems Manager document (SSM document). Use Systems Manager State Manager to create an association between the SSM document and the EC2 instances.
Suggested answer: B, E

Explanation:

Refactor User Data to Use cfn-init and cfn-hup:

cfn-init helps to bootstrap the instance, installing packages and starting services.

cfn-hup is a daemon that can monitor metadata changes and re-apply configurations when necessary.

Example user data script with cfn-init:

#!/bin/bash
yum update -y
# Install the CloudFormation helper scripts
yum install -y aws-cfn-bootstrap
# ${AWS::StackName} and ${AWS::Region} are resolved by Fn::Sub in the template
/opt/aws/bin/cfn-init -v --stack ${AWS::StackName} --resource WebServer --region ${AWS::Region}
# Start cfn-hup so that later metadata changes are detected and re-applied
/opt/aws/bin/cfn-hup
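
cfn-hup itself is configured by two small files that the user data must write before starting the daemon. A minimal sketch, assuming the instance resource is named WebServer as in the user data example (stack and Region values are placeholders):

```
# /etc/cfn/cfn-hup.conf
[main]
stack=<stack-name-or-id>
region=<region>
interval=1

# /etc/cfn/hooks.d/cfn-auto-reloader.conf
[cfn-auto-reloader-hook]
triggers=post.update
path=Resources.WebServer.Metadata.AWS::CloudFormation::Init
action=/opt/aws/bin/cfn-init -v --stack <stack-name-or-id> --resource WebServer --region <region>
```

With these files in place, a stack update that changes the AWS::CloudFormation::Init metadata causes cfn-hup to re-run cfn-init on the running instances, which is why the application update is applied without replacing them.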

Use Systems Manager State Manager:

State Manager can automatically apply an AWS Systems Manager document to instances at regular intervals, ensuring configurations are kept up-to-date.

Steps:

Create an SSM document that installs and configures your application.

Use State Manager to associate this document with your EC2 instances.

Example SSM document:

{
  "schemaVersion": "2.2",
  "description": "Install My Application",
  "mainSteps": [
    {
      "action": "aws:runShellScript",
      "name": "installApplication",
      "inputs": {
        "runCommand": [
          "yum install -y my-application"
        ]
      }
    }
  ]
}

Create State Manager association:

aws ssm create-association --name "InstallMyApplication" --targets "Key=InstanceIds,Values=<instance-id>" --document-version '$LATEST'

Using cfn-init and cfn-hup

AWS Systems Manager State Manager

Total 252 questions