Amazon DOP-C02 Practice Test - Questions Answers, Page 24

A company's DevOps team manages a set of AWS accounts that are in an organization in AWS Organizations.

The company needs a solution that ensures that all Amazon EC2 instances use approved AMIs that the DevOps team manages. The solution also must remediate the usage of AMIs that are not approved. The individual account administrators must not be able to remove the restriction to use approved AMIs.

Which solution will meet these requirements?

A.
Use AWS CloudFormation StackSets to deploy an Amazon EventBridge rule to each account. Configure the rule to react to AWS CloudTrail events for Amazon EC2 and to send a notification to an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe the DevOps team to the SNS topic.
B.
Use AWS CloudFormation StackSets to deploy the approved-amis-by-id AWS Config managed rule to each account. Configure the rule with the list of approved AMIs. Configure the rule to run the AWS-StopEC2Instance AWS Systems Manager Automation runbook for the noncompliant EC2 instances.
C.
Create an AWS Lambda function that processes AWS CloudTrail events for Amazon EC2. Configure the Lambda function to send a notification to an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe the DevOps team to the SNS topic. Deploy the Lambda function in each account in the organization. Create an Amazon EventBridge rule in each account. Configure the EventBridge rules to react to AWS CloudTrail events for Amazon EC2 and to invoke the Lambda function.
D.
Enable AWS Config across the organization. Create a conformance pack that uses the approved-amis-by-id AWS Config managed rule with the list of approved AMIs. Deploy the conformance pack across the organization. Configure the rule to run the AWS-StopEC2Instance AWS Systems Manager Automation runbook for the noncompliant EC2 instances.
Suggested answer: D

Explanation:

Enable AWS Config Across the Organization:

AWS Config provides a detailed view of the configuration of AWS resources in your AWS account. It can be used to assess, audit, and evaluate the configurations of your resources.

Enabling AWS Config across the organization ensures that all accounts are monitored for compliance.

Create a Conformance Pack Using the approved-amis-by-id AWS Config Managed Rule:

A conformance pack is a collection of AWS Config rules and remediation actions that can be easily deployed across an organization.

The approved-amis-by-id managed rule checks whether running instances are using approved AMIs.

Deploy the Conformance Pack Across the Organization:

Deploying the conformance pack across the organization ensures that all accounts adhere to the policy of using only approved AMIs.

The conformance pack can be deployed via the AWS Management Console, CLI, or SDKs.

Configure the Rule to Run the AWS-StopEC2Instance AWS Systems Manager Automation Runbook for Non-Compliant EC2 Instances:

The AWS-StopEC2Instance runbook can be configured to automatically stop any EC2 instances that are found to be non-compliant (i.e., not using approved AMIs).

This remediation action ensures that any unauthorized instances are promptly stopped, enforcing the policy without manual intervention.

By following these steps, the solution ensures that all EC2 instances across the organization use approved AMIs, and any non-compliant instances are remediated automatically.
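As a rough illustration of answer D, a minimal boto3 sketch follows. The pack name, AMI IDs, account ID, and remediation role ARN are illustrative assumptions, not values given in the question.

```python
# Sketch only: deploy an organization-wide conformance pack that checks AMIs
# against an approved list and stops noncompliant instances. All names, IDs,
# and ARNs below are placeholders.
import boto3

TEMPLATE = """
Resources:
  ApprovedAmisRule:
    Type: AWS::Config::ConfigRule
    Properties:
      ConfigRuleName: approved-amis-by-id
      InputParameters:
        amiIds: ami-0abc1234def567890,ami-0123456789abcdef0
      Source:
        Owner: AWS
        SourceIdentifier: APPROVED_AMIS_BY_ID
  StopNoncompliantInstances:
    Type: AWS::Config::RemediationConfiguration
    DependsOn: ApprovedAmisRule
    Properties:
      ConfigRuleName: approved-amis-by-id
      TargetType: SSM_DOCUMENT
      TargetId: AWS-StopEC2Instance
      Automatic: true
      MaximumAutomaticAttempts: 3
      RetryAttemptSeconds: 60
      Parameters:
        AutomationAssumeRole:
          StaticValue:
            Values:
              - arn:aws:iam::111122223333:role/ConfigRemediationRole
        InstanceId:
          ResourceValue:
            Value: RESOURCE_ID
"""

# Deploying from the management (or delegated administrator) account pushes
# the pack to every account in the organization, and member administrators
# cannot remove it.
boto3.client("config").put_organization_conformance_pack(
    OrganizationConformancePackName="approved-ami-enforcement",
    TemplateBody=TEMPLATE,
)
```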

AWS Config Conformance Packs

AWS Config Managed Rules

AWS Systems Manager Automation Runbooks

A company uses containers for its applications. The company learns that some container images are missing required security configurations.

A DevOps engineer needs to implement a solution to create a standard base image. The solution must publish the base image weekly to the us-west-2 Region, us-east-2 Region, and eu-central-1 Region.

Which solution will meet these requirements?

A.
Create an EC2 Image Builder pipeline that uses a container recipe to build the image. Configure the pipeline to distribute the image to an Amazon Elastic Container Registry (Amazon ECR) repository in us-west-2. Configure ECR replication from us-west-2 to us-east-2 and from us-east-2 to eu-central-1. Configure the pipeline to run weekly.
B.
Create an AWS CodePipeline pipeline that uses an AWS CodeBuild project to build the image. Use AWS CodeDeploy to publish the image to an Amazon Elastic Container Registry (Amazon ECR) repository in us-west-2. Configure ECR replication from us-west-2 to us-east-2 and from us-east-2 to eu-central-1. Configure the pipeline to run weekly.
C.
Create an EC2 Image Builder pipeline that uses a container recipe to build the image. Configure the pipeline to distribute the image to Amazon Elastic Container Registry (Amazon ECR) repositories in all three Regions. Configure the pipeline to run weekly.
D.
Create an AWS CodePipeline pipeline that uses an AWS CodeBuild project to build the image. Use AWS CodeDeploy to publish the image to Amazon Elastic Container Registry (Amazon ECR) repositories in all three Regions. Configure the pipeline to run weekly.
Suggested answer: C

Explanation:

Create an EC2 Image Builder Pipeline that Uses a Container Recipe to Build the Image:

EC2 Image Builder simplifies the creation, maintenance, validation, and sharing of container images.

By using a container recipe, you can define the base image, components, and validation tests for your container image.

Configure the Pipeline to Distribute the Image to Amazon Elastic Container Registry (Amazon ECR) Repositories in All Three Regions:

Amazon ECR provides a secure, scalable, and reliable container registry.

Configuring the pipeline to distribute the image to ECR repositories in us-west-2, us-east-2, and eu-central-1 ensures that the image is available in all required regions.

Configure the Pipeline to Run Weekly:

Setting the pipeline to run on a weekly schedule ensures that the base image is regularly updated and published, incorporating any new security configurations or updates.

By using EC2 Image Builder to automate the creation and distribution of the container image, the solution ensures that the base image is consistently maintained and available across multiple regions with minimal management overhead.
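A minimal boto3 sketch of answer C follows. The recipe and infrastructure configuration ARNs, the repository name, and the cron expression are illustrative assumptions.

```python
# Sketch only: one distribution configuration can target ECR repositories in
# several Regions, and the pipeline schedule handles the weekly cadence.
import boto3

imagebuilder = boto3.client("imagebuilder", region_name="us-west-2")

# Distribute the built container image to ECR in all three required Regions.
dist = imagebuilder.create_distribution_configuration(
    name="base-image-distribution",
    distributions=[
        {
            "region": region,
            "containerDistributionConfiguration": {
                "targetRepository": {
                    "service": "ECR",
                    "repositoryName": "company/base-image",
                }
            },
        }
        for region in ["us-west-2", "us-east-2", "eu-central-1"]
    ],
)

# Tie the container recipe to the distribution configuration and run the
# pipeline every Sunday at 00:00 UTC.
imagebuilder.create_image_pipeline(
    name="weekly-base-image",
    containerRecipeArn="arn:aws:imagebuilder:us-west-2:111122223333:container-recipe/base-image/1.0.0",
    infrastructureConfigurationArn="arn:aws:imagebuilder:us-west-2:111122223333:infrastructure-configuration/base-infra",
    distributionConfigurationArn=dist["distributionConfigurationArn"],
    schedule={
        "scheduleExpression": "cron(0 0 ? * sun)",
        "pipelineExecutionStartCondition": "EXPRESSION_MATCH_ONLY",
    },
)
```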

EC2 Image Builder

Amazon ECR

Setting Up EC2 Image Builder Pipelines

A company uses AWS Organizations to manage its AWS accounts. A DevOps engineer must ensure that all users who access the AWS Management Console are authenticated through the company's corporate identity provider (IdP).

Which combination of steps will meet these requirements? (Select TWO.)

A.
Use Amazon GuardDuty with a delegated administrator account. Use GuardDuty to enforce denial of IAM user logins.
B.
Use AWS IAM Identity Center to configure identity federation with SAML 2.0.
C.
Create a permissions boundary in AWS IAM Identity Center to deny password logins for IAM users.
D.
Create IAM groups in the Organizations management account to apply consistent permissions for all IAM users.
E.
Create an SCP in Organizations to deny password creation for IAM users.
Suggested answer: B, E

Explanation:

* Step 1: Using AWS IAM Identity Center for SAML-based Identity Federation

To ensure that all users accessing the AWS Management Console are authenticated via the corporate identity provider (IdP), the best approach is to set up identity federation with AWS IAM Identity Center (formerly AWS SSO) using SAML 2.0.

Action: Use AWS IAM Identity Center to configure identity federation with the corporate IdP that supports SAML 2.0.

Why: SAML 2.0 integration enables single sign-on (SSO) for users, allowing them to authenticate through the corporate IdP and gain access to AWS resources.

This corresponds to Option B: Use AWS IAM Identity Center to configure identity federation with SAML 2.0.

* Step 2: Creating an SCP to Deny Password Logins for IAM Users

To enforce that IAM users do not create passwords or access the Management Console directly without going through the corporate IdP, you can create a Service Control Policy (SCP) in AWS Organizations that denies password creation for IAM users.

Action: Create an SCP that denies password creation for IAM users.

Why: This ensures that users cannot set passwords for their IAM user accounts, forcing them to use federated access through the corporate IdP for console login.

This corresponds to Option E: Create an SCP in Organizations to deny password creation for IAM users.
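A minimal sketch of the SCP from Option E, created and attached with boto3. The policy name, root ID, and the exact set of denied IAM actions are illustrative assumptions.

```python
# Sketch only: deny IAM console password creation/updates org-wide so that
# console access must flow through the federated IdP.
import json
import boto3

SCP = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyIamUserPasswords",
        "Effect": "Deny",
        "Action": [
            "iam:CreateLoginProfile",   # blocks creating a console password
            "iam:UpdateLoginProfile",   # blocks setting one later
        ],
        "Resource": "*",
    }],
}

org = boto3.client("organizations")
policy = org.create_policy(
    Name="deny-iam-user-passwords",
    Description="Force console access through the corporate IdP",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(SCP),
)

# Attaching at the organization root applies the deny to every member account.
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="r-examplerootid",
)
```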

A company recently migrated its application to an Amazon Elastic Kubernetes Service (Amazon EKS) cluster that uses Amazon EC2 instances. The company configured the application to automatically scale based on CPU utilization.

The application produces memory errors when it experiences heavy loads. The application also does not scale out enough to handle the increased load. The company needs to collect and analyze memory metrics for the application over time.

Which combination of steps will meet these requirements? (Select THREE.)

A.
Attach the CloudWatchAgentServerPolicy managed IAM policy to the IAM instance profile that the cluster uses.
B.
Attach the CloudWatchAgentServerPolicy managed IAM policy to a service account role for the cluster.
C.
Collect performance metrics by deploying the unified Amazon CloudWatch agent to the existing EC2 instances in the cluster. Add the agent to the AMI for any new EC2 instances that are added to the cluster.
D.
Collect performance logs by deploying the AWS Distro for OpenTelemetry collector as a DaemonSet.
E.
Analyze the pod_memory_utilization Amazon CloudWatch metric in the ContainerInsights namespace by using the Service dimension.
F.
Analyze the node_memory_utilization Amazon CloudWatch metric in the ContainerInsights namespace by using the ClusterName dimension.
Suggested answer: A, C, E

Explanation:

* Step 1: Attaching the CloudWatchAgentServerPolicy to the IAM Role

The CloudWatch agent needs permissions to collect and send metrics, including memory metrics, to Amazon CloudWatch. You can attach the CloudWatchAgentServerPolicy managed IAM policy to the IAM instance profile or service account role to grant these permissions.

Action: Attach the CloudWatchAgentServerPolicy managed IAM policy to the IAM instance profile that the EKS cluster uses.

Why: This ensures the CloudWatch agent has the necessary permissions to collect memory metrics.

This corresponds to Option A: Attach the CloudWatchAgentServerPolicy managed IAM policy to the IAM instance profile that the cluster uses.

* Step 2: Deploying the CloudWatch Agent to EC2 Instances

To collect memory metrics from the EC2 instances running in the EKS cluster, the CloudWatch agent needs to be deployed on these instances. The agent collects system-level metrics, including memory usage.

Action: Deploy the unified Amazon CloudWatch agent to the existing EC2 instances in the EKS cluster. Update the Amazon Machine Image (AMI) for future instances to include the CloudWatch agent.

Why: The CloudWatch agent allows you to collect detailed memory metrics from the EC2 instances, which is not enabled by default.

This corresponds to Option C: Collect performance metrics by deploying the unified Amazon CloudWatch agent to the existing EC2 instances in the cluster. Add the agent to the AMI for any new EC2 instances that are added to the cluster.

* Step 3: Analyzing Memory Metrics Using Container Insights

After collecting the memory metrics, you can analyze them using the pod_memory_utilization metric in Amazon CloudWatch Container Insights. This metric provides visibility into the memory usage of the containers (pods) in the EKS cluster.

Action: Analyze the pod_memory_utilization CloudWatch metric in the Container Insights namespace by using the Service dimension.

Why: This provides detailed insights into memory usage at the container level, which helps diagnose memory-related issues.

This corresponds to Option E: Analyze the pod_memory_utilization Amazon CloudWatch metric in the Container Insights namespace by using the Service dimension.
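A short boto3 sketch of the analysis step in Option E. The cluster, namespace, and service names are illustrative assumptions.

```python
# Sketch only: pull a week of pod_memory_utilization from Container Insights
# to study memory behavior over time.
from datetime import datetime, timedelta, timezone
import boto3

cloudwatch = boto3.client("cloudwatch")
now = datetime.now(timezone.utc)

stats = cloudwatch.get_metric_statistics(
    Namespace="ContainerInsights",
    MetricName="pod_memory_utilization",
    Dimensions=[
        {"Name": "ClusterName", "Value": "prod-cluster"},
        {"Name": "Namespace", "Value": "default"},
        {"Name": "Service", "Value": "web-app"},
    ],
    StartTime=now - timedelta(days=7),
    EndTime=now,
    Period=300,  # 5-minute granularity
    Statistics=["Average", "Maximum"],
)

# Print the time series in chronological order to spot sustained growth or
# spikes under load.
for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], round(point["Average"], 2), round(point["Maximum"], 2))
```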

A company has developed a static website hosted on an Amazon S3 bucket. The website is deployed using AWS CloudFormation. The CloudFormation template defines an S3 bucket and a custom resource that copies content into the bucket from a source location.

The company has decided that it needs to move the website to a new location, so the existing CloudFormation stack must be deleted and re-created. However, CloudFormation reports that the stack could not be deleted cleanly.

What is the MOST likely cause and how can the DevOps engineer mitigate this problem for this and future versions of the website?

A.
Deletion has failed because the S3 bucket has an active website configuration. Modify the CloudFormation template to remove the WebsiteConfiguration property from the S3 bucket resource.
B.
Deletion has failed because the S3 bucket is not empty. Modify the custom resource's AWS Lambda function code to recursively empty the bucket when RequestType is Delete.
C.
Deletion has failed because the custom resource does not define a deletion policy. Add a DeletionPolicy property to the custom resource definition with a value of RemoveOnDeletion.
D.
Deletion has failed because the S3 bucket is not empty. Modify the S3 bucket resource in the CloudFormation template to add a DeletionPolicy property with a value of Empty.
Suggested answer: B

Explanation:

Step 1: Understanding the Deletion Failure

The most likely reason why the CloudFormation stack failed to delete is that the S3 bucket was not empty. AWS CloudFormation cannot delete an S3 bucket that contains objects, so if the website files are still in the bucket, the deletion will fail.

Issue: The S3 bucket is not empty during deletion, preventing the stack from being deleted.

Step 2: Modifying the Custom Resource to Handle Deletion

To mitigate this issue, you can modify the Lambda function associated with the custom resource to automatically empty the S3 bucket when the stack is being deleted. By adding logic to handle the RequestType: Delete event, the function can recursively delete all objects in the bucket before allowing the stack to be deleted.

Action: Modify the Lambda function to recursively delete the objects in the S3 bucket when RequestType is set to Delete.

Why: This ensures that the S3 bucket is empty before CloudFormation tries to delete it, preventing the stack deletion failure.

This corresponds to Option B: Deletion has failed because the S3 bucket is not empty. Modify the custom resource's AWS Lambda function code to recursively empty the bucket when RequestType is Delete.
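A minimal sketch of the Delete-handling logic from Option B, assuming the custom resource's Lambda function is defined inline in the template (which makes the cfnresponse helper module available) and receives the bucket name as a resource property.

```python
# Sketch only: CloudFormation custom resource handler that empties the bucket
# on stack deletion so the bucket itself can then be deleted.
import boto3
import cfnresponse  # available when the function is defined inline (ZipFile)

s3 = boto3.resource("s3")

def handler(event, context):
    try:
        bucket = s3.Bucket(event["ResourceProperties"]["BucketName"])
        if event["RequestType"] == "Delete":
            # Delete every object version and delete marker, so this also
            # works if versioning was ever enabled on the bucket.
            bucket.object_versions.delete()
        elif event["RequestType"] in ("Create", "Update"):
            pass  # the existing copy-from-source logic would run here
        cfnresponse.send(event, context, cfnresponse.SUCCESS, {})
    except Exception as exc:
        print(f"Custom resource failed: {exc}")
        cfnresponse.send(event, context, cfnresponse.FAILED, {})
```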

A company uses an AWS CodeCommit repository to store its source code and corresponding unit tests. The company has configured an AWS CodePipeline pipeline that includes an AWS CodeBuild project that runs when code is merged to the main branch of the repository.

The company wants the CodeBuild project to run the unit tests. If the unit tests pass, the CodeBuild project must tag the most recent commit.

How should the company configure the CodeBuild project to meet these requirements?

A.
Configure the CodeBuild project to use native Git to clone the CodeCommit repository. Configure the project to run the unit tests. Configure the project to use native Git to create a tag and to push the Git tag to the repository if the code passes the unit tests.
B.
Configure the CodeBuild project to use native Git to clone the CodeCommit repository. Configure the project to run the unit tests. Configure the project to use AWS CLI commands to create a new repository tag in the repository if the code passes the unit tests.
C.
Configure the CodeBuild project to use AWS CLI commands to copy the code from the CodeCommit repository. Configure the project to run the unit tests. Configure the project to use AWS CLI commands to create a new Git tag in the repository if the code passes the unit tests.
D.
Configure the CodeBuild project to use AWS CLI commands to copy the code from the CodeCommit repository. Configure the project to run the unit tests. Configure the project to use AWS CLI commands to create a new repository tag in the repository if the code passes the unit tests.
Suggested answer: A

Explanation:

Step 1: Using Native Git in CodeBuild

To meet the requirement of running unit tests and tagging the most recent commit if the tests pass, the CodeBuild project should be configured to use native Git to clone the CodeCommit repository. Native Git support allows full functionality for managing the repository, including the ability to create and push tags.

Action: Configure the CodeBuild project to use native Git to clone the repository and run the tests.

Why: Using native Git provides flexibility for managing tags and other repository operations after the tests are successfully executed.

Step 2: Tagging the Most Recent Commit

Once the unit tests pass, the CodeBuild project can use native Git to create a tag for the most recent commit and push that tag to the repository. This ensures that the tagged commit is linked to the test results.

Action: Configure the project to use native Git to create and push a tag to the repository if the tests pass.

Why: This ensures the correct commit is tagged automatically, streamlining the workflow.

This corresponds to Option A: Configure the CodeBuild project to use native Git to clone the CodeCommit repository. Configure the project to run the unit tests. Configure the project to use native Git to create a tag and to push the Git tag to the repository if the code passes the unit tests.
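A rough sketch of a CodeBuild project for Option A, with the tagging logic in an inline buildspec. The project name, repository URL, test command, service role, and tag format are illustrative assumptions.

```python
# Sketch only: the buildspec runs the unit tests and, only if the build is
# still succeeding, creates and pushes a Git tag with native Git.
import boto3

BUILDSPEC = """
version: 0.2
env:
  git-credential-helper: yes   # lets native Git authenticate to CodeCommit
phases:
  build:
    commands:
      - python -m pytest tests/   # a failing test fails the build and skips tagging
  post_build:
    commands:
      - |
        if [ "$CODEBUILD_BUILD_SUCCEEDING" = "1" ]; then
          git tag "build-$CODEBUILD_BUILD_NUMBER"
          git push origin "build-$CODEBUILD_BUILD_NUMBER"
        fi
"""

boto3.client("codebuild").create_project(
    name="unit-test-and-tag",
    source={
        "type": "CODECOMMIT",
        "location": "https://git-codecommit.us-east-1.amazonaws.com/v1/repos/app-repo",
        "buildspec": BUILDSPEC,
        "gitCloneDepth": 0,  # full clone so Git metadata is available for tagging
    },
    artifacts={"type": "NO_ARTIFACTS"},
    environment={
        "type": "LINUX_CONTAINER",
        "image": "aws/codebuild/standard:7.0",
        "computeType": "BUILD_GENERAL1_SMALL",
    },
    serviceRole="arn:aws:iam::111122223333:role/CodeBuildServiceRole",
)
```

Note the CODEBUILD_BUILD_SUCCEEDING guard: post_build runs even after a failed build phase, so the check keeps failed test runs from being tagged.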

A company has an organization in AWS Organizations for its multi-account environment. A DevOps engineer is developing an AWS CodeArtifact based strategy for application package management across the organization. Each application team at the company has its own account in the organization. Each application team also has limited access to a centralized shared services account.

Each application team needs full access to download, publish, and grant access to its own packages. Some common library packages that the application teams use must also be shared with the entire organization.

Which combination of steps will meet these requirements with the LEAST administrative overhead? (Select THREE.)

A.
Create a domain in each application team's account. Grant each application team's account full read access and write access to the application team's domain.
B.
Create a domain in the shared services account. Grant the organization read access and CreateRepository access.
C.
Create a repository in each application team's account. Grant each application team's account full read access and write access to its own repository.
D.
Create a repository in the shared services account. Grant the organization read access to the repository in the shared services account. Set the repository as the upstream repository in each application team's repository.
E.
For teams that require shared packages, create resource-based policies that allow read access to the repository from other application teams' accounts.
F.
Set the other application teams' repositories as upstream repositories.
Suggested answer: B, D, E

Explanation:

* Step 1: Creating a Centralized Domain in the Shared Services Account

To manage application package dependencies across multiple accounts, the most efficient solution is to create a centralized domain in the shared services account. This allows all application teams to access and manage package repositories within the same domain, ensuring consistency and centralization.

Action: Create a domain in the shared services account.

Why: A single, centralized domain reduces the need for redundant management in each application team's account.

This corresponds to Option B: Create a domain in the shared services account. Grant the organization read access and CreateRepository access.

* Step 2: Sharing Repositories Across Teams with Upstream Configurations

To share common library packages across the organization, each application team's repository can point to the shared services repository as an upstream repository. This enables teams to access shared packages without managing them individually in each team's account.

Action: Create a repository in the shared services account and set it as the upstream repository for each application team.

Why: Upstream repositories allow package sharing while maintaining individual team repositories for managing their own packages.

This corresponds to Option D: Create a repository in the shared services account. Grant the organization read access to the repository in the shared services account. Set the repository as the upstream repository in each application team's repository.

* Step 3: Using Resource-Based Policies for Cross-Account Access

For teams that need to share their packages with other application teams, resource-based policies can be applied to grant the necessary permissions. These policies allow cross-account access without having to manage permissions at the individual account level.

Action: Create resource-based policies that allow read access to the repositories across application teams.

Why: This simplifies management by centralizing permissions in the shared services account while allowing cross-team collaboration.

This corresponds to Option E: For teams that require shared packages, create resource-based policies that allow read access to the repository from other application teams' accounts.
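A condensed boto3 sketch of Options B, D, and E together. The domain, repository, account, and organization IDs are illustrative assumptions.

```python
# Sketch only: centralized CodeArtifact domain and shared repository in the
# shared services account, consumed as an upstream by a team repository.
import json
import boto3

codeartifact = boto3.client("codeartifact")

# Shared services account: one domain (Option B) and one shared repository
# (Option D).
codeartifact.create_domain(domain="company-packages")
codeartifact.create_repository(domain="company-packages", repository="shared-libs")

# Option B: let every account in the organization authenticate to the domain
# and create its own repositories in it.
codeartifact.put_domain_permissions_policy(
    domain="company-packages",
    policyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "codeartifact:GetAuthorizationToken",
                "codeartifact:CreateRepository",
            ],
            "Resource": "*",
            "Condition": {"StringEquals": {"aws:PrincipalOrgID": "o-exampleorgid"}},
        }],
    }),
)

# Team account: create the team repository in the shared domain with the
# shared repository as its upstream (Option D). domainOwner points at the
# shared services account.
codeartifact.create_repository(
    domain="company-packages",
    domainOwner="111122223333",
    repository="team-a",
    upstreams=[{"repositoryName": "shared-libs"}],
)

# Option E: a team that shares packages grants read access on its repository
# to another team's account.
codeartifact.put_repository_permissions_policy(
    domain="company-packages",
    domainOwner="111122223333",
    repository="team-a",
    policyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::444455556666:root"},
            "Action": ["codeartifact:ReadFromRepository"],
            "Resource": "*",
        }],
    }),
)
```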

A company is refactoring applications to use AWS. The company identifies an internal web application that needs to make Amazon S3 API calls in a specific AWS account.

The company wants to use its existing identity provider (IdP) auth.company.com for authentication. The IdP supports only OpenID Connect (OIDC). A DevOps engineer needs to secure the web application's access to the AWS account.

Which combination of steps will meet these requirements? (Select THREE.)

A.
Configure AWS IAM Identity Center. Configure an IdP. Upload the IdP metadata from the existing IdP.
B.
Create an IAM IdP by using the provider URL, audience, and signature from the existing IdP.
C.
Create an IAM role that has a policy that allows the necessary S3 actions. Configure the role's trust policy to allow the OIDC IdP to assume the role if the sts.amazon.com:aud context key is appid_from_idp.
D.
Create an IAM role that has a policy that allows the necessary S3 actions. Configure the role's trust policy to allow the OIDC IdP to assume the role if the auth.company.com:aud context key is appid_from_idp.
E.
Configure the web application to use the AssumeRoleWithWebIdentity API operation to retrieve temporary credentials. Use the temporary credentials to make the S3 API calls.
F.
Configure the web application to use the GetFederationToken API operation to retrieve temporary credentials. Use the temporary credentials to make the S3 API calls.
Suggested answer: B, D, E

Explanation:

Step 1: Creating an Identity Provider in IAM

You first need to configure AWS to trust the external identity provider (IdP), which in this case supports OpenID Connect (OIDC). The IdP will handle the authentication, and AWS will handle the authorization based on the IdP's token.

Action: Create an IAM identity provider (IdP) in AWS using the existing provider's URL, audience, and signature. This step is essential for establishing trust between AWS and the external IdP.

Why: This allows AWS to accept tokens from your external IdP (auth.company.com) for authentication.

So, this corresponds to Option B: Create an IAM IdP by using the provider URL, audience, and signature from the existing IdP.

Step 2: Creating an IAM Role with Specific Permissions

Next, you need to create an IAM role with a trust policy that allows the external IdP to assume it when certain conditions are met. Specifically, the trust policy needs to allow the role to be assumed based on the context key auth.company.com:aud (the audience claim in the token).

Action: Create an IAM role that has the necessary permissions (e.g., Amazon S3 access). The role's trust policy should specify the OIDC IdP as the trusted entity and validate the audience claim (auth.company.com:aud), which comes from the token provided by the IdP.

Why: This step ensures that only the specified web application authenticated via OIDC can assume the IAM role to make API calls.

This corresponds to Option D: Create an IAM role that has a policy that allows the necessary S3 actions. Configure the role's trust policy to allow the OIDC IdP to assume the role if the auth.company.com:aud context key is appid_from_idp.

Step 3: Using Temporary Credentials via the AssumeRoleWithWebIdentity API

To securely make Amazon S3 API calls, the web application will need temporary credentials. The web application can use the AssumeRoleWithWebIdentity API call to assume the IAM role configured in the previous step and obtain temporary AWS credentials. These credentials can then be used to interact with Amazon S3.

Action: The web application must be configured to call the AssumeRoleWithWebIdentity API operation, passing the OIDC token from the IdP to obtain temporary credentials.

Why: This allows the web application to authenticate via the external IdP and then authorize access to AWS resources securely using short-lived credentials.

This corresponds to Option E: Configure the web application to use the AssumeRoleWithWebIdentity API operation to retrieve temporary credentials. Use the temporary credentials to make the S3 API calls.

Summary of Selected Answers:

B: Create an IAM IdP by using the provider URL, audience, and signature from the existing IdP.

D: Create an IAM role that has a policy that allows the necessary S3 actions. Configure the role's trust policy to allow the OIDC IdP to assume the role if the auth.company.com:aud context key is appid_from_idp.

E: Configure the web application to use the AssumeRoleWithWebIdentity API operation to retrieve temporary credentials. Use the temporary credentials to make the S3 API calls.

This setup enables the web application to use OpenID Connect (OIDC) for authentication and securely interact with Amazon S3 in a specific AWS account using short-lived credentials obtained through AWS Security Token Service (STS).
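A minimal boto3 sketch of the three steps. The thumbprint, ARNs, bucket name, and token value are illustrative assumptions, and the S3 permissions shown should be scoped down further in practice.

```python
# Sketch only: register the OIDC IdP, create a role trusted on the audience
# claim, and exchange an IdP token for temporary S3 credentials.
import json
import boto3

iam = boto3.client("iam")

# Option B: register the corporate OIDC IdP with IAM.
provider = iam.create_open_id_connect_provider(
    Url="https://auth.company.com",
    ClientIDList=["appid_from_idp"],  # the audience the web app presents
    ThumbprintList=["9999999999999999999999999999999999999999"],  # placeholder
)

# Option D: trust policy keyed on the audience claim from the IdP token.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Federated": provider["OpenIDConnectProviderArn"]},
        "Action": "sts:AssumeRoleWithWebIdentity",
        "Condition": {"StringEquals": {"auth.company.com:aud": "appid_from_idp"}},
    }],
}
iam.create_role(
    RoleName="WebAppS3Role",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)
iam.put_role_policy(
    RoleName="WebAppS3Role",
    PolicyName="s3-access",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
            "Resource": ["arn:aws:s3:::app-bucket", "arn:aws:s3:::app-bucket/*"],
        }],
    }),
)

# Option E: the web app trades its OIDC token for temporary credentials.
creds = boto3.client("sts").assume_role_with_web_identity(
    RoleArn="arn:aws:iam::111122223333:role/WebAppS3Role",
    RoleSessionName="web-app",
    WebIdentityToken="<OIDC token issued by auth.company.com>",
)["Credentials"]

s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```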

A company uses an organization in AWS Organizations to manage several AWS accounts that the company's developers use. The company requires all data to be encrypted in transit.

Multiple Amazon S3 buckets that were created in developer accounts allow unencrypted connections. A DevOps engineer must enforce encryption of data in transit for all existing S3 buckets that are created in accounts in the organization.

Which solution will meet these requirements?

A.
Use AWS CloudFormation StackSets to deploy an AWS Network Firewall firewall to each account. Route all outbound requests from the AWS environment through the firewall. Deploy a policy to block access to all outbound requests on port 80.
B.
Use AWS CloudFormation StackSets to deploy an AWS Network Firewall firewall to each account. Route all inbound requests to the AWS environment through the firewall. Deploy a policy to block access to all inbound requests on port 80.
C.
Turn on AWS Config for the organization. Deploy a conformance pack that uses the s3-bucket-ssl-requests-only managed rule and an AWS Systems Manager Automation runbook. Use a runbook that adds a bucket policy statement to deny access to an S3 bucket when the value of the aws:SecureTransport condition key is false.
D.
Turn on AWS Config for the organization. Deploy a conformance pack that uses the s3-bucket-ssl-requests-only managed rule and an AWS Systems Manager Automation runbook. Use a runbook that adds a bucket policy statement to deny access to an S3 bucket when the value of the s3:x-amz-server-side-encryption-aws-kms-key-id condition key is null.
Suggested answer: C

Explanation:

Step 1: Enabling AWS Config for the Organization

The first step is to enable AWS Config across the AWS organization. AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. By enabling AWS Config, you can ensure that all S3 buckets within the organization are tracked and evaluated according to compliance rules.

Action: Turn on AWS Config for all AWS accounts in the organization.

Why: AWS Config will help monitor all resources (like S3 buckets) in real time to detect whether they are compliant with security policies.

Step 2: Deploying a Conformance Pack with Managed Rules

After AWS Config is enabled, you need to deploy a conformance pack that contains the s3-bucket-ssl-requests-only managed rule. This rule enforces that all S3 buckets only allow requests that use Secure Sockets Layer (SSL) connections (HTTPS).

Action: Deploy a conformance pack that uses the s3-bucket-ssl-requests-only rule. This rule ensures that only SSL connections (for encrypted data in transit) are allowed when accessing S3.

Why: This rule guarantees that data is encrypted in transit by enforcing SSL connections to the S3 buckets.

Step 3: Using an AWS Systems Manager Automation Runbook

To automatically remediate the compliance issues, such as S3 buckets allowing non-SSL requests, a Systems Manager Automation runbook is deployed. The runbook will automatically add a bucket policy that denies access to any requests that do not use SSL.

Action: Use a Systems Manager Automation runbook that adds a bucket policy statement to deny access when the aws:SecureTransport condition key is false.

Why: This ensures that all S3 buckets across the organization comply with the policy of enforcing encrypted data in transit.

This corresponds to Option C: Turn on AWS Config for the organization. Deploy a conformance pack that uses the s3-bucket-ssl-requests-only managed rule and an AWS Systems Manager Automation runbook. Use a runbook that adds a bucket policy statement to deny access to an S3 bucket when the value of the aws:SecureTransport condition key is false.
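For reference, a sketch of the bucket policy statement such a remediation runbook would apply, here written with boto3. The bucket name is an illustrative assumption.

```python
# Sketch only: deny every S3 action on the bucket when the request did not
# arrive over HTTPS, enforcing encryption in transit.
import json
import boto3

BUCKET = "example-website-bucket"

deny_insecure_transport = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyInsecureTransport",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
            f"arn:aws:s3:::{BUCKET}",
            f"arn:aws:s3:::{BUCKET}/*",
        ],
        # aws:SecureTransport is false for any non-HTTPS request.
        "Condition": {"Bool": {"aws:SecureTransport": "false"}},
    }],
}

boto3.client("s3").put_bucket_policy(
    Bucket=BUCKET,
    Policy=json.dumps(deny_insecure_transport),
)
```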

A company is using AWS CodeDeploy to automate software deployment. The deployment must meet these requirements:

* A number of instances must be available to serve traffic during the deployment. Traffic must be balanced across those instances, and the instances must automatically heal in the event of failure.

* A new fleet of instances must be launched for deploying a new revision automatically, with no manual provisioning.

* Traffic must be rerouted to the new environment, to half of the new instances at a time. The deployment should succeed if traffic is rerouted to at least half of the instances; otherwise, it should fail.

* Before routing traffic to the new fleet of instances, the temporary files generated during the deployment process must be deleted.

* At the end of a successful deployment, the original instances in the deployment group must be deleted immediately to reduce costs.

How can a DevOps engineer meet these requirements?

A.
Use an Application Load Balancer and an in-place deployment. Associate the Auto Scaling group with the deployment group. Use the Automatically copy Auto Scaling group option, and use CodeDeployDefault.OneAtATime as the deployment configuration. Instruct AWS CodeDeploy to terminate the original instances in the deployment group, and use the AllowTraffic hook within appspec.yml to delete the temporary files.
B.
Use an Application Load Balancer and a blue/green deployment. Associate the Auto Scaling group and Application Load Balancer target group with the deployment group. Use the Automatically copy Auto Scaling group option, create a custom deployment configuration with minimum healthy hosts defined as 50%, and assign the configuration to the deployment group. Instruct AWS CodeDeploy to terminate the original instances in the deployment group, and use the BeforeBlockTraffic hook within appspec.yml to delete the temporary files.
C.
Use an Application Load Balancer and a blue/green deployment. Associate the Auto Scaling group and the Application Load Balancer target group with the deployment group. Use the Automatically copy Auto Scaling group option, and use CodeDeployDefault.HalfAtATime as the deployment configuration. Instruct AWS CodeDeploy to terminate the original instances in the deployment group, and use the BeforeAllowTraffic hook within appspec.yml to delete the temporary files.
D.
Use an Application Load Balancer and an in-place deployment. Associate the Auto Scaling group and Application Load Balancer target group with the deployment group. Use the Automatically copy Auto Scaling group option, and use CodeDeployDefault.AllAtOnce as a deployment configuration. Instruct AWS CodeDeploy to terminate the original instances in the deployment group, and use the BlockTraffic hook within appspec.yml to delete the temporary files.
Suggested answer: C

Explanation:

Step 1: Use a Blue/Green Deployment Strategy

A blue/green deployment strategy is necessary to meet the requirement of launching a new fleet of instances for each deployment and ensuring availability. In a blue/green deployment, the new version (green environment) is deployed to a separate set of instances, while the old version (blue environment) remains active. After testing the new version, traffic can be gradually shifted.

Action: Use AWS CodeDeploy's blue/green deployment configuration.

Why: Blue/green deployment minimizes downtime and ensures that traffic is shifted only to healthy instances.

Step 2: Use an Application Load Balancer and Auto Scaling Group

The Application Load Balancer (ALB) is essential to balance traffic across multiple instances, and Auto Scaling ensures the deployment scales automatically to meet demand.

Action: Associate the Auto Scaling group and Application Load Balancer target group with the deployment group.

Why: This configuration ensures that traffic is evenly distributed and that instances automatically scale based on traffic load.

Step 3: Use the CodeDeployDefault.HalfAtATime Deployment Configuration

The company requires that traffic be rerouted to half of the instances at a time and that the deployment succeed only if at least half of the instances receive traffic. The predefined CodeDeployDefault.HalfAtATime configuration enforces exactly this threshold.

Action: Use CodeDeployDefault.HalfAtATime as the deployment configuration, which requires 50% of the instances to be healthy.

Why: This ensures that the deployment continues only if at least 50% of the new instances are healthy.

Step 4: Clean Temporary Files Using Hooks

Before routing traffic to the new environment, the temporary files generated during the deployment must be deleted. This can be achieved using the BeforeAllowTraffic hook in the appspec.yml file.

Action: Use the BeforeAllowTraffic lifecycle event hook to clean up temporary files before routing traffic to the new environment.

Why: This ensures that the environment is clean before the new instances start serving traffic.

Step 5: Terminate Original Instances After Deployment

After a successful deployment, AWS CodeDeploy can automatically terminate the original instances (blue environment) to save costs.

Action: Instruct AWS CodeDeploy to terminate the original instances after the new instances are healthy.

Why: This helps in cost reduction by removing unused instances after the deployment.

This corresponds to Option C: Use an Application Load Balancer and a blue/green deployment. Associate the Auto Scaling group and the Application Load Balancer target group with the deployment group. Use the Automatically copy Auto Scaling group option, and use CodeDeployDefault.HalfAtATime as the deployment configuration. Instruct AWS CodeDeploy to terminate the original instances in the deployment group, and use the BeforeAllowTraffic hook within appspec.yml to delete the temporary files.
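A boto3 sketch of a deployment group matching Option C. The application, group, role, Auto Scaling group, and target group names are illustrative assumptions; the BeforeAllowTraffic cleanup commands themselves would live in the application's appspec.yml.

```python
# Sketch only: blue/green deployment group behind an ALB target group, green
# fleet copied from the Auto Scaling group, traffic shifted half at a time,
# and blue instances terminated immediately on success.
import boto3

boto3.client("codedeploy").create_deployment_group(
    applicationName="web-app",
    deploymentGroupName="web-app-bluegreen",
    serviceRoleArn="arn:aws:iam::111122223333:role/CodeDeployServiceRole",
    # Reroute traffic to half of the new instances at a time; at least half
    # must succeed for the deployment to succeed.
    deploymentConfigName="CodeDeployDefault.HalfAtATime",
    autoScalingGroups=["web-app-asg"],
    deploymentStyle={
        "deploymentType": "BLUE_GREEN",
        "deploymentOption": "WITH_TRAFFIC_CONTROL",
    },
    loadBalancerInfo={"targetGroupInfoList": [{"name": "web-app-tg"}]},
    blueGreenDeploymentConfiguration={
        # Launch the green fleet automatically by copying the Auto Scaling group.
        "greenFleetProvisioningOption": {"action": "COPY_AUTO_SCALING_GROUP"},
        "deploymentReadyOption": {"actionOnTimeout": "CONTINUE_DEPLOYMENT"},
        # Delete the original (blue) instances immediately after success.
        "terminateBlueInstancesOnDeploymentSuccess": {
            "action": "TERMINATE",
            "terminationWaitTimeInMinutes": 0,
        },
    },
)
```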
