
Amazon DOP-C02 Practice Test - Questions Answers, Page 25


A company has deployed a new platform that runs on Amazon Elastic Kubernetes Service (Amazon EKS). The new platform hosts web applications that users frequently update. The application developers build the Docker images for the applications and deploy the Docker images manually to the platform.

The platform usage has increased to more than 500 users every day. Frequent updates, building the updated Docker images for the applications, and deploying the Docker images on the platform manually have all become difficult to manage.

The company needs to receive an Amazon Simple Notification Service (Amazon SNS) notification if Docker image scanning returns any HIGH or CRITICAL findings for operating system or programming language package vulnerabilities.

Which combination of steps will meet these requirements? (Select TWO.)

A.
Create an AWS CodeCommit repository to store the Dockerfile and Kubernetes deployment files. Create a pipeline in AWS CodePipeline. Use an Amazon S3 event to invoke the pipeline when a newer version of the Dockerfile is committed. Add a step to the pipeline to initiate the AWS CodeBuild project.
B.
Create an AWS CodeCommit repository to store the Dockerfile and Kubernetes deployment files. Create a pipeline in AWS CodePipeline. Use an Amazon EventBridge event to invoke the pipeline when a newer version of the Dockerfile is committed. Add a step to the pipeline to initiate the AWS CodeBuild project.
C.
Create an AWS CodeBuild project that builds the Docker images and stores the Docker images in an Amazon Elastic Container Registry (Amazon ECR) repository. Turn on basic scanning for the ECR repository. Create an Amazon EventBridge rule that monitors Amazon GuardDuty events. Configure the EventBridge rule to send an event to an SNS topic when the finding-severity-counts parameter is more than 0 at a CRITICAL or HIGH level.
D.
Create an AWS CodeBuild project that builds the Docker images and stores the Docker images in an Amazon Elastic Container Registry (Amazon ECR) repository. Turn on enhanced scanning for the ECR repository. Create an Amazon EventBridge rule that monitors ECR image scan events. Configure the EventBridge rule to send an event to an SNS topic when the finding-severity-counts parameter is more than 0 at a CRITICAL or HIGH level.
E.
Create an AWS CodeBuild project that scans the Dockerfile. Configure the project to build the Docker images and store the Docker images in an Amazon Elastic Container Registry (Amazon ECR) repository if the scan is successful. Configure an SNS topic to provide notification if the scan returns any vulnerabilities.
Suggested answer: B, D

Explanation:

* Step 1: Automate Docker Image Deployment using AWS CodePipeline

The first challenge is the manual process of building and deploying Docker images. To address this, you can use AWS CodePipeline to automate the process. AWS CodePipeline integrates with CodeCommit (for source code and Dockerfile storage) and CodeBuild (to build Docker images and store them in Amazon Elastic Container Registry (Amazon ECR)).

Action: Create an AWS CodeCommit repository to store the Dockerfile and Kubernetes deployment files. Then, create a pipeline in AWS CodePipeline that triggers on new commits via an Amazon EventBridge event.

Why: This automation significantly reduces the manual effort of building and deploying Docker images when updates are made to the codebase.

This corresponds to Option B: Create an AWS CodeCommit repository to store the Dockerfile and Kubernetes deployment files. Create a pipeline in AWS CodePipeline. Use an Amazon EventBridge event to invoke the pipeline when a newer version of the Dockerfile is committed. Add a step to the pipeline to initiate the AWS CodeBuild project.
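For illustration, a minimal boto3 sketch of the EventBridge rule that starts the pipeline on new commits. All ARNs, names, and the branch are placeholder assumptions; the target role must be allowed to call codepipeline:StartPipelineExecution.

```python
import json
import boto3

events = boto3.client("events")

# Placeholder ARNs and names; the rule fires when the main branch of the
# CodeCommit repository is updated.
pattern = {
    "source": ["aws.codecommit"],
    "detail-type": ["CodeCommit Repository State Change"],
    "resources": ["arn:aws:codecommit:us-east-1:111122223333:platform-repo"],
    "detail": {
        "event": ["referenceCreated", "referenceUpdated"],
        "referenceName": ["main"],
    },
}
events.put_rule(Name="start-pipeline-on-commit", EventPattern=json.dumps(pattern))

# The target role must allow codepipeline:StartPipelineExecution.
events.put_targets(
    Rule="start-pipeline-on-commit",
    Targets=[{
        "Id": "pipeline",
        "Arn": "arn:aws:codepipeline:us-east-1:111122223333:platform-pipeline",
        "RoleArn": "arn:aws:iam::111122223333:role/EventBridgeStartPipelineRole",
    }],
)
```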

* Step 2: Enabling Enhanced Scanning on Amazon ECR and Monitoring Vulnerabilities

To scan for vulnerabilities in Docker images, Amazon ECR provides both basic and enhanced scanning options. Enhanced scanning offers deeper and more frequent scans and integrates with Amazon EventBridge to send notifications based on findings.

Action: Turn on enhanced scanning for the Amazon ECR repository where the Docker images are stored. Use Amazon EventBridge to monitor image scan events and trigger an Amazon SNS notification if any HIGH or CRITICAL vulnerabilities are found.

Why: Enhanced scanning provides a detailed analysis of operating system and programming language package vulnerabilities, which can trigger notifications in real-time.

This corresponds to Option D: Create an AWS CodeBuild project that builds the Docker images and stores the Docker images in an Amazon Elastic Container Registry (Amazon ECR) repository. Turn on enhanced scanning for the ECR repository. Create an Amazon EventBridge rule that monitors ECR image scan events. Configure the EventBridge rule to send an event to an SNS topic when the finding-severity-counts parameter is more than 0 at a CRITICAL or HIGH level.
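A hedged sketch of the notification rule in boto3. The finding-severity-counts field comes from the question and the ECR scan-event documentation, but the exact source and detail-type used here (aws.inspector2, "Inspector2 Scan") are assumptions to verify against current documentation, and the topic ARN is a placeholder.

```python
import json
import boto3

events = boto3.client("events")

# Match scan-complete events that report at least one CRITICAL or HIGH finding.
pattern = {
    "source": ["aws.inspector2"],
    "detail-type": ["Inspector2 Scan"],
    "detail": {
        "$or": [
            {"finding-severity-counts": {"CRITICAL": [{"numeric": [">", 0]}]}},
            {"finding-severity-counts": {"HIGH": [{"numeric": [">", 0]}]}},
        ],
    },
}
events.put_rule(Name="ecr-scan-critical-or-high", EventPattern=json.dumps(pattern))

# Fan the matching events out to the security team's SNS topic.
events.put_targets(
    Rule="ecr-scan-critical-or-high",
    Targets=[{
        "Id": "sns",
        "Arn": "arn:aws:sns:us-east-1:111122223333:image-scan-alerts",
    }],
)
```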

A company uses an organization in AWS Organizations to manage multiple AWS accounts. The company needs an automated process across all AWS accounts to isolate any compromised Amazon EC2 instances when the instances receive a specific tag.

Which combination of steps will meet these requirements? (Select TWO.)

A.
Use AWS CloudFormation StackSets to deploy the CloudFormation stacks in all AWS accounts.
B.
Create an SCP that has a Deny statement for the ec2:* action with a condition of 'aws:RequestTag/isolation': false.
C.
Attach the SCP to the root of the organization.
D.
Create an AWS CloudFormation template that creates an EC2 instance role that has no IAM policies attached. Configure the template to have a security group that has an explicit Deny rule on all traffic. Use the CloudFormation template to create an AWS Lambda function that attaches the IAM role to instances. Configure the Lambda function to add a network ACL. Set up an Amazon EventBridge rule to invoke the Lambda function when a specific tag is applied to a compromised EC2 instance.
E.
Create an AWS CloudFormation template that creates an EC2 instance role that has no IAM policies attached. Configure the template to have a security group that has no inbound rules or outbound rules. Use the CloudFormation template to create an AWS Lambda function that attaches the IAM role to instances. Configure the Lambda function to replace any existing security groups with the new security group. Set up an Amazon EventBridge rule to invoke the Lambda function when a specific tag is applied to a compromised EC2 instance.
Suggested answer: A, E

Explanation:

* Step 1: Deploy the Automation Solution using CloudFormation StackSets

To automate the process across multiple AWS accounts within an organization, you can use AWS CloudFormation StackSets. StackSets allow you to deploy CloudFormation templates to multiple accounts within an organization, ensuring consistent infrastructure and automation.

Action: Use AWS CloudFormation StackSets to deploy the necessary resources across all AWS accounts. This includes deploying the Lambda function and security groups that will isolate compromised EC2 instances.

Why: StackSets make it easy to deploy and manage resources across multiple AWS accounts, reducing the operational overhead.

This corresponds to Option A: Use AWS CloudFormation StackSets to deploy the CloudFormation stacks in all AWS accounts.
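A sketch of that deployment with boto3, assuming service-managed StackSets permissions with trusted access already enabled for the organization; the stack set name, template URL, OU ID, and Region are placeholders.

```python
import boto3

cfn = boto3.client("cloudformation")

# Create the stack set once in the management (or delegated admin) account.
cfn.create_stack_set(
    StackSetName="ec2-isolation",
    TemplateURL="https://example-bucket.s3.amazonaws.com/isolation.yaml",
    PermissionModel="SERVICE_MANAGED",
    AutoDeployment={"Enabled": True, "RetainStacksOnAccountRemoval": False},
    Capabilities=["CAPABILITY_NAMED_IAM"],
)

# Deploy a stack instance into every account under the target OU, in one Region.
cfn.create_stack_instances(
    StackSetName="ec2-isolation",
    DeploymentTargets={"OrganizationalUnitIds": ["ou-abcd-11111111"]},
    Regions=["us-east-1"],
)
```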

* Step 2: Isolate EC2 Instances using Lambda and Security Groups

When an EC2 instance is compromised, it needs to be isolated from the network. This can be done by creating a security group with no inbound or outbound rules and attaching it to the instance. A Lambda function can handle this process and can be triggered automatically by an Amazon EventBridge rule when a specific tag (for example, 'isolation') is applied to the compromised instance.

Action: Create a Lambda function that attaches an isolated security group (with no inbound or outbound rules) to the compromised EC2 instances. Set up an EventBridge rule to trigger the Lambda function when the 'isolation' tag is applied to the instance.

Why: This automates the isolation process, ensuring that any compromised instances are immediately cut off from the network, reducing the potential damage from the compromise.

This corresponds to Option E: Create an AWS CloudFormation template that creates an EC2 instance role that has no IAM policies attached. Configure the template to have a security group that has no inbound rules or outbound rules. Use the CloudFormation template to create an AWS Lambda function that attaches the IAM role to instances. Configure the Lambda function to replace any existing security groups with the new security group. Set up an Amazon EventBridge rule to invoke the Lambda function when a specific tag is applied to a compromised EC2 instance.
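A minimal sketch of the isolation function, assuming the EventBridge rule forwards CloudTrail CreateTags events; the quarantine security group ID is a placeholder, and steps such as snapshotting the instance for forensics are omitted.

```python
import boto3

ec2 = boto3.client("ec2")

# Hypothetical quarantine security group with no inbound or outbound rules.
ISOLATION_SG = "sg-0123456789abcdef0"

def handler(event, context):
    """Invoked by an EventBridge rule matching CloudTrail CreateTags events
    that carry the isolation tag. Replaces every security group on the
    tagged instance with the quarantine group."""
    items = event["detail"]["requestParameters"]["resourcesSet"]["items"]
    for resource in items:
        instance_id = resource["resourceId"]
        if not instance_id.startswith("i-"):
            continue  # CreateTags can target non-instance resources
        # Overwriting the group list detaches all existing security groups.
        ec2.modify_instance_attribute(
            InstanceId=instance_id,
            Groups=[ISOLATION_SG],
        )
```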

A DevOps team has created a custom Lambda rule in AWS Config. The rule monitors Amazon Elastic Container Registry (Amazon ECR) policy statements for ecr:* actions. When a noncompliant repository is detected, Amazon EventBridge uses Amazon Simple Notification Service (Amazon SNS) to route the notification to a security team.

When the custom AWS Config rule is evaluated, the AWS Lambda function fails to run.

Which solution will resolve the issue?

A.
Modify the Lambda function's resource policy to grant AWS Config permission to invoke the function.
B.
Modify the SNS topic policy to include configuration changes for EventBridge to publish to the SNS topic.
C.
Modify the Lambda function's execution role to include configuration changes for custom AWS Config rules.
D.
Modify all the ECR repository policies to grant AWS Config access to the necessary ECR API actions.
Suggested answer: A

Explanation:

Step 1: Understanding Lambda Permissions and AWS Config

The custom AWS Config rule evaluates resources and invokes an AWS Lambda function when a compliance check is triggered. For AWS Config to invoke the Lambda function, it requires permission to do so.

Issue: The Lambda function fails to run because AWS Config doesn't have permission to invoke it.

Action: Modify the resource-based policy of the Lambda function to grant AWS Config permission to invoke the Lambda function.

Why: Without this permission, AWS Config cannot trigger the Lambda function, which is why the evaluation fails.

This corresponds to Option A: Modify the Lambda function's resource policy to grant AWS Config permission to invoke the function.
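The fix is a single resource-policy statement on the function. A boto3 sketch, with the function name and account ID as placeholders:

```python
import boto3

lam = boto3.client("lambda")

# Grant AWS Config permission to invoke the rule's Lambda function.
lam.add_permission(
    FunctionName="ecr-policy-compliance-check",
    StatementId="AllowConfigInvoke",
    Action="lambda:InvokeFunction",
    Principal="config.amazonaws.com",
    SourceAccount="111122223333",  # guards against confused-deputy invocations
)
```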

A company's organization in AWS Organizations has a single OU. The company runs Amazon EC2 instances in the OU accounts. The company needs to limit the use of each EC2 instance's credentials to the specific EC2 instance that the credential is assigned to. A DevOps engineer must configure security for the EC2 instances.

Which solution will meet these requirements?

A.
Create an SCP that specifies the VPC CIDR block. Configure the SCP to check whether the value of the aws:VpcSourceIp condition key is in the specified block. In the same SCP check, check whether the values of the aws:EC2InstanceSourcePrivateIPv4 and aws:SourceVpc condition keys are the same. Deny access if either condition is false. Apply the SCP to the OU.
B.
Create an SCP that checks whether the values of the aws:EC2InstanceSourceVPC and aws:SourceVpc condition keys are the same. Deny access if the values are not the same. In the same SCP check, check whether the values of the aws:EC2InstanceSourcePrivateIPv4 and aws:VpcSourceIp condition keys are the same. Deny access if the values are not the same. Apply the SCP to the OU.
C.
Create an SCP that includes a list of acceptable VPC values and checks whether the value of the aws:SourceVpc condition key is in the list. In the same SCP check, define a list of acceptable IP address values and check whether the value of the aws:VpcSourceIp condition key is in the list. Deny access if either condition is false. Apply the SCP to each account in the organization.
D.
Create an SCP that checks whether the values of the aws:EC2InstanceSourceVPC and aws:VpcSourceIp condition keys are the same. Deny access if the values are not the same. In the same SCP check, check whether the values of the aws:EC2InstanceSourcePrivateIPv4 and aws:SourceVpc condition keys are the same. Deny access if the values are not the same. Apply the SCP to each account in the organization.
Suggested answer: B

Explanation:

Step 1: Using Service Control Policies (SCPs) for EC2 Security

To limit the use of EC2 instance credentials to the specific EC2 instance they are assigned to, you can create a Service Control Policy (SCP) that verifies specific conditions, such as whether the EC2 instance's source VPC and private IP match expected values.

Action: Create an SCP that checks whether the values of the aws:EC2InstanceSourceVPC and aws:SourceVpc condition keys are the same. Deny access if they are not.

Why: This ensures that credentials cannot be used outside the designated EC2 instance or VPC.

Step 2: Further Validation with Private IPs

The SCP should also verify that the private IP the request comes from matches the private IP of the EC2 instance that the credentials were issued to. If the addresses do not match, access should be denied.

Action: In the same SCP, check whether the values of the aws:EC2InstanceSourcePrivateIPv4 and aws:VpcSourceIp condition keys are the same. Deny access if they are not.

Why: This ensures that the credentials are only used from within the specific EC2 instance and its associated VPC.

This corresponds to Option B: Create an SCP that checks whether the values of the aws:EC2InstanceSourceVPC and aws:SourceVpc condition keys are the same. Deny access if the values are not the same. In the same SCP check, check whether the values of the aws:EC2InstanceSourcePrivateIPv4 and aws:VpcSourceIp condition keys are the same. Deny access if the values are not the same. Apply the SCP to the OU.
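A hedged sketch of such an SCP, created and attached from the management account with boto3. The condition keys mirror the option text, and the Null check on ec2:SourceInstanceARN is an added assumption that scopes the deny to requests signed with instance-role credentials; verify all keys against current IAM documentation before use.

```python
import json
import boto3

org = boto3.client("organizations")

# Minimal sketch of the SCP from option B; OU ID is a placeholder.
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": "*",
        "Resource": "*",
        "Condition": {
            # Deny when the request's VPC / source IP differ from the
            # instance that the credentials were issued to.
            "StringNotEquals": {
                "aws:EC2InstanceSourceVPC": "${aws:SourceVpc}",
                "aws:EC2InstanceSourcePrivateIPv4": "${aws:VpcSourceIp}",
            },
            # Assumption: limit the deny to instance-role credentials only.
            "Null": {"ec2:SourceInstanceARN": "false"},
        },
    }],
}

resp = org.create_policy(
    Name="restrict-ec2-credentials",
    Description="Deny instance-role credentials used outside their instance",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)
org.attach_policy(
    PolicyId=resp["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-abcd-11111111",
)
```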

A DevOps engineer uses AWS CodeBuild to frequently produce software packages. The CodeBuild project builds large Docker images that the DevOps engineer can use across multiple builds. The DevOps engineer wants to improve build performance and minimize costs.

Which solution will meet these requirements?

A.
Store the Docker images in an Amazon Elastic Container Registry (Amazon ECR) repository. Implement a local Docker layer cache for CodeBuild.
B.
Cache the Docker images in an Amazon S3 bucket that is available across multiple build hosts. Expire the cache by using an S3 Lifecycle policy.
C.
Store the Docker images in an Amazon Elastic Container Registry (Amazon ECR) repository. Modify the CodeBuild project runtime configuration to always use the most recent image version.
D.
Create custom AMIs that contain the cached Docker images. In the CodeBuild build, launch Amazon EC2 instances from the custom AMIs.
Suggested answer: A

Explanation:

Step 1: Storing Docker Images in Amazon ECR

Docker images can be large, and storing them in a centralized, scalable location can greatly reduce build times. Amazon Elastic Container Registry (Amazon ECR) is a fully managed container registry that stores, manages, and deploys Docker container images.

Action: Store the Docker images in an ECR repository.

Why: Storing Docker images in ECR ensures that Docker images can be reused across multiple builds, improving build performance by avoiding the need to rebuild the images from scratch.

Step 2: Implementing Docker Layer Caching in CodeBuild

Docker layer caching is essential for improving performance in continuous integration pipelines. CodeBuild supports local caching of Docker layers, which speeds up builds that reuse Docker images across multiple runs.

Action: Implement Docker layer caching within the CodeBuild project.

Why: This improves performance by allowing frequently used Docker layers to be cached locally, avoiding the need to pull or build the layers every time.

This corresponds to Option A: Store the Docker images in an Amazon Elastic Container Registry (Amazon ECR) repository. Implement a local Docker layer cache for CodeBuild.
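For reference, enabling the local Docker layer cache on an existing project with boto3 (the project name is a placeholder). Docker image builds also require privileged mode on the build environment, which is configured separately.

```python
import boto3

codebuild = boto3.client("codebuild")

# Enable local Docker layer caching on an existing project. Local caches
# live on the build host, so they help most for frequent builds that land
# on a warm host.
codebuild.update_project(
    name="docker-image-builder",
    cache={
        "type": "LOCAL",
        "modes": ["LOCAL_DOCKER_LAYER_CACHE"],
    },
)
```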

A company uses an Amazon Aurora PostgreSQL global database that has two secondary AWS Regions. A DevOps engineer has configured the database parameter group to guarantee an RPO of 60 seconds. Write operations on the primary cluster are occasionally blocked because of the RPO setting.

The DevOps engineer needs to reduce the frequency of blocked write operations.

Which solution will meet these requirements?

A.
Add an additional secondary cluster to the global database.
B.
Enable write forwarding for the global database.
C.
Remove one of the secondary clusters from the global database.
D.
Configure synchronous replication for the global database.
Suggested answer: C

Explanation:

Step 1: Reducing Replication Lag in Aurora Global Databases

In Amazon Aurora global databases, write operations on the primary cluster can be delayed due to the time it takes to replicate to secondary clusters, especially when there are multiple secondary Regions involved.

Issue: The write operations are occasionally blocked because of the RPO setting, which guarantees replication within 60 seconds.

Action: Remove one of the secondary clusters from the global database.

Why: Fewer secondary clusters reduce the overall replication lag, improving write performance and reducing the frequency of blocked writes.

This corresponds to Option C: Remove one of the secondary clusters from the global database.
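For illustration, detaching one secondary cluster with boto3; the identifiers are placeholders. The managed RPO itself is governed by the rds.global_db_rpo parameter in the DB cluster parameter group.

```python
import boto3

rds = boto3.client("rds")

# Detach a secondary cluster from the global database. The detached
# cluster becomes a standalone regional cluster and stops contributing
# to the RPO calculation.
rds.remove_from_global_cluster(
    GlobalClusterIdentifier="platform-global",
    DbClusterIdentifier=(
        "arn:aws:rds:eu-west-1:111122223333:cluster:platform-secondary-2"
    ),
)
```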

A company uses AWS WAF to protect its cloud infrastructure. A DevOps engineer needs to give an operations team the ability to analyze log messages from AWS WAF. The operations team needs to be able to create alarms for specific patterns in the log output.

Which solution will meet these requirements with the LEAST operational overhead?

A.
Create an Amazon CloudWatch Logs log group. Configure the appropriate AWS WAF web ACL to send log messages to the log group. Instruct the operations team to create CloudWatch metric filters.
B.
Create an Amazon OpenSearch Service cluster and appropriate indexes. Configure an Amazon Kinesis Data Firehose delivery stream to stream log data to the indexes. Use OpenSearch Dashboards to create filters and widgets.
C.
Create an Amazon S3 bucket for the log output. Configure AWS WAF to send log outputs to the S3 bucket. Instruct the operations team to create AWS Lambda functions that detect each desired log message pattern. Configure the Lambda functions to publish to an Amazon Simple Notification Service (Amazon SNS) topic.
D.
Create an Amazon S3 bucket for the log output. Configure AWS WAF to send log outputs to the S3 bucket. Use Amazon Athena to create an external table definition that fits the log message pattern. Instruct the operations team to write SQL queries and to create Amazon CloudWatch metric filters for the Athena queries.
Suggested answer: A

Explanation:

Step 1: Sending AWS WAF Logs to CloudWatch Logs

AWS WAF allows you to log requests that are evaluated against your web ACLs. These logs can be sent directly to CloudWatch Logs, which enables real-time monitoring and analysis.

Action: Configure the AWS WAF web ACL to send log messages to a CloudWatch Logs log group.

Why: This allows the operations team to view the logs in real time and analyze patterns using CloudWatch metric filters.

Step 2: Creating CloudWatch Metric Filters

CloudWatch metric filters can be used to search for specific patterns in log data. The operations team can create filters for certain log patterns and set up alarms based on these filters.

Action: Instruct the operations team to create CloudWatch metric filters to detect patterns in the WAF log output.

Why: Metric filters allow the team to trigger alarms based on specific patterns without needing to manually search through logs.

This corresponds to Option A: Create an Amazon CloudWatch Logs log group. Configure the appropriate AWS WAF web ACL to send log messages to the log group. Instruct the operations team to create CloudWatch metric filters.
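A sketch of the filter-and-alarm setup with boto3. The log group name is a placeholder (AWS WAF requires web ACL log groups to be named with the aws-waf-logs- prefix), and the filter pattern and threshold are example values only.

```python
import boto3

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

# Count blocked requests in the WAF log group.
logs.put_metric_filter(
    logGroupName="aws-waf-logs-web-acl",
    filterName="blocked-requests",
    filterPattern='{ $.action = "BLOCK" }',
    metricTransformations=[{
        "metricName": "BlockedRequests",
        "metricNamespace": "WAFLogs",
        "metricValue": "1",
    }],
)

# Alarm when more than 100 blocked requests occur within five minutes.
cloudwatch.put_metric_alarm(
    AlarmName="waf-blocked-requests-high",
    Namespace="WAFLogs",
    MetricName="BlockedRequests",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=100,
    ComparisonOperator="GreaterThanThreshold",
)
```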

A company releases a new application in a new AWS account. The application includes an AWS Lambda function that processes messages from an Amazon Simple Queue Service (Amazon SQS) standard queue. The Lambda function stores the results in an Amazon S3 bucket for further downstream processing. The Lambda function needs to process the messages within a specific period of time after the messages are published. The Lambda function has a batch size of 10 messages and takes a few seconds to process a batch of messages.

As load increases on the application's first day of service, messages in the queue accumulate at a greater rate than the Lambda function can process the messages. Some messages miss the required processing timelines. The logs show that many messages in the queue have data that is not valid. The company needs to meet the timeline requirements for messages that have valid data.

Which solution will meet these requirements?

A.
Increase the Lambda function's batch size. Change the SQS standard queue to an SQS FIFO queue. Request a Lambda concurrency increase in the AWS Region.
B.
Reduce the Lambda function's batch size. Increase the SQS message throughput quota. Request a Lambda concurrency increase in the AWS Region.
C.
Increase the Lambda function's batch size. Configure S3 Transfer Acceleration on the S3 bucket. Configure an SQS dead-letter queue.
D.
Keep the Lambda function's batch size the same. Configure the Lambda function to report failed batch items. Configure an SQS dead-letter queue.
Suggested answer: D

Explanation:

Step 1: Handling Invalid Data with Failed Batch Items

The Lambda function is processing batches of messages, and some messages contain invalid data, causing processing delays. Lambda provides the capability to report failed batch items, which allows valid messages to be processed while skipping invalid ones. This functionality ensures that the valid messages are processed within the required timeline.

Action: Keep the Lambda function's batch size the same and configure it to report failed batch items.

Why: By reporting failed batch items, the Lambda function can skip invalid messages and continue processing valid ones, ensuring that they meet the processing timeline.

Step 2: Using an SQS Dead-Letter Queue (DLQ)

Configuring a dead-letter queue (DLQ) for SQS ensures that messages with invalid data, or those that cannot be processed successfully, are moved to the DLQ. This prevents such messages from clogging the queue and allows the system to focus on processing valid messages.

Action: Configure an SQS dead-letter queue for the main queue.

Why: A DLQ helps isolate problematic messages, preventing them from continuously reappearing in the queue and causing processing delays for valid messages.

Step 3: Maintaining the Lambda Function's Batch Size

Keeping the current batch size allows the Lambda function to continue processing multiple messages at once. By addressing the failed items separately, there's no need to increase or reduce the batch size.

Action: Maintain the Lambda function's current batch size.

Why: Changing the batch size is unnecessary if the invalid messages are properly handled by reporting failed items and using a DLQ.

This corresponds to Option D: Keep the Lambda function's batch size the same. Configure the Lambda function to report failed batch items. Configure an SQS dead-letter queue.
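A minimal handler sketch that reports failed batch items. The process function is a hypothetical stand-in for the validation and S3 logic, and the queue's event source mapping must include ReportBatchItemFailures in its FunctionResponseTypes for the return value to take effect.

```python
def handler(event, context):
    """Partial batch response for an SQS-triggered Lambda function.

    Only the message IDs listed in batchItemFailures are retried; once a
    message exceeds the queue's maxReceiveCount, SQS moves it to the
    dead-letter queue, so valid messages keep flowing.
    """
    failures = []
    for record in event["Records"]:
        try:
            process(record["body"])
        except ValueError:
            # Invalid data: report this item so the rest of the batch succeeds.
            failures.append({"itemIdentifier": record["messageId"]})
    return {"batchItemFailures": failures}

def process(body):
    # Hypothetical placeholder for message validation and the S3 write.
    ...
```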


A DevOps engineer is setting up an Amazon Elastic Container Service (Amazon ECS) blue/green deployment for an application by using AWS CodeDeploy and AWS CloudFormation. During the deployment window, the application must be highly available and CodeDeploy must shift 10% of traffic to a new version of the application every minute until all traffic is shifted.

Which configuration should the DevOps engineer add in the CloudFormation template to meet these requirements?

A.
Add an AppSpec file with the CodeDeployDefault.ECSLinear10PercentEvery1Minutes deployment configuration.
B.
Add the AWS::CodeDeployBlueGreen transform and the AWS::CodeDeploy::BlueGreen hook parameter with the CodeDeployDefault.ECSLinear10PercentEvery1Minutes deployment configuration.
C.
Add an AppSpec file with the ECSCanary10Percent5Minutes deployment configuration.
D.
Add the AWS::CodeDeployBlueGreen transform and the AWS::CodeDeploy::BlueGreen hook parameter with the ECSCanary10Percent5Minutes deployment configuration.
Suggested answer: B

Explanation:

Step 1: Using AWS CloudFormation with ECS Blue/Green Deployments

The requirement is to implement an ECS blue/green deployment where traffic is shifted gradually. AWS CodeDeploy supports such blue/green deployments with predefined configurations, like CodeDeployDefault.ECSLinear10PercentEvery1Minutes, which shifts 10% of traffic every minute.

Action: Use the AWS::CodeDeployBlueGreen transform and the appropriate hooks in the CloudFormation template. The CodeDeployDefault.ECSLinear10PercentEvery1Minutes deployment configuration meets the requirement of shifting 10% of traffic every minute.

Why: The transform and hook parameters in CloudFormation are essential for configuring the blue/green deployment with the desired traffic-shifting behavior.

This corresponds to Option B: Add the AWS::CodeDeployBlueGreen transform and the AWS::CodeDeploy::BlueGreen hook parameter with the CodeDeployDefault.ECSLinear10PercentEvery1Minutes deployment configuration.
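A trimmed sketch of the template sections that option B adds, written here as a Python dict for brevity (the ECS service, task definition, and ECSAttributes mappings are omitted). A StepPercentage of 10 with a one-minute bake time approximates the linear 10-percent-every-minute shift; the logical IDs are placeholders.

```python
import json

# Sketch of the blue/green transform and hook sections only.
template_fragment = {
    "Transform": ["AWS::CodeDeployBlueGreen"],
    "Hooks": {
        "CodeDeployBlueGreenHook": {
            "Type": "AWS::CodeDeploy::BlueGreen",
            "Properties": {
                "TrafficRoutingConfig": {
                    "Type": "TimeBasedLinear",
                    "TimeBasedLinear": {
                        "StepPercentage": 10,  # shift 10% per step
                        "BakeTimeMins": 1,     # wait one minute between steps
                    },
                },
                "Applications": [{
                    "Target": {
                        "Type": "AWS::ECS::Service",
                        "LogicalID": "Service",  # placeholder logical ID
                    },
                    # ECSAttributes (task definitions, task sets, traffic
                    # routing targets) would be listed here.
                }],
            },
        }
    },
}
print(json.dumps(template_fragment, indent=2))
```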

A company is migrating its container-based workloads to an AWS Organizations multi-account environment. The environment consists of application workload accounts that the company uses to deploy and run the containerized workloads. The company has also provisioned a shared services account for shared workloads in the organization.

The company must follow strict compliance regulations. All container images must receive security scanning before they are deployed to any environment. Images can be consumed by downstream deployment mechanisms after the images pass a scan with no critical vulnerabilities. Pre-scan and post-scan images must be isolated from one another so that a deployment can never use pre-scan images.

A DevOps engineer needs to create a strategy to centralize this process.

Which combination of steps will meet these requirements with the LEAST administrative overhead? (Select TWO.)

A.
Create Amazon Elastic Container Registry (Amazon ECR) repositories in the shared services account: one repository for each pre-scan image and one repository for each post-scan image. Configure Amazon ECR image scanning to run on new image pushes to the pre-scan repositories. Use resource-based policies to grant the organization write access to the pre-scan repositories and read access to the post-scan repositories.
B.
Create pre-scan Amazon Elastic Container Registry (Amazon ECR) repositories in each account that publishes container images. Create repositories for post-scan images in the shared services account. Configure Amazon ECR image scanning to run on new image pushes to the pre-scan repositories. Use resource-based policies to grant the organization read access to the post-scan repositories.
C.
Configure image replication for each image from the image's pre-scan repository to the image's post-scan repository.
D.
Create a pipeline in AWS CodePipeline for each pre-scan repository. Create a source stage that runs when new images are pushed to the pre-scan repositories. Create a stage that uses AWS CodeBuild as the action provider. Write a buildspec.yaml definition that determines the image scanning status and pushes images without critical vulnerabilities to the post-scan repositories.
E.
Create an AWS Lambda function. Create an Amazon EventBridge rule that reacts to image scanning completed events and invokes the Lambda function. Write function code that determines the image scanning status and pushes images without critical vulnerabilities to the post-scan repositories.
Suggested answer: A, C

Explanation:

* Step 1: Centralizing Image Scanning in a Shared Services Account

The first requirement is to centralize the image scanning process, ensuring that pre-scan and post-scan images are stored separately. This can be achieved by creating separate pre-scan and post-scan repositories in the shared services account, with the appropriate resource-based policies to control access.

Action: Create separate ECR repositories for pre-scan and post-scan images in the shared services account. Configure resource-based policies to allow write access to the pre-scan repositories and read access to the post-scan repositories.

Why: This ensures that images are isolated before and after the scan, following the compliance requirements.

This corresponds to Option A: Create Amazon Elastic Container Registry (Amazon ECR) repositories in the shared services account: one repository for each pre-scan image and one repository for each post-scan image. Configure Amazon ECR image scanning to run on new image pushes to the pre-scan repositories. Use resource-based policies to grant the organization write access to the pre-scan repositories and read access to the post-scan repositories.
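A sketch of the read-side (pull) policy on a post-scan repository with boto3; the repository name and organization ID are placeholders. The pre-scan repositories would get a similar policy granting the push actions (ecr:PutImage and the layer-upload calls) instead.

```python
import json
import boto3

ecr = boto3.client("ecr")

# Allow any principal in the organization to pull from the post-scan repo.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "OrgPull",
        "Effect": "Allow",
        "Principal": "*",
        "Action": [
            "ecr:GetDownloadUrlForLayer",
            "ecr:BatchGetImage",
            "ecr:BatchCheckLayerAvailability",
        ],
        # Restrict the wildcard principal to members of the organization.
        "Condition": {"StringEquals": {"aws:PrincipalOrgID": "o-exampleorgid"}},
    }],
}
ecr.set_repository_policy(
    repositoryName="post-scan/web-app",
    policyText=json.dumps(policy),
)
```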

* Step 2: Replication between Pre-Scan and Post-Scan Repositories

To automate the transfer of images from the pre-scan repositories to the post-scan repositories (after they pass the security scan), you can configure image replication between the two repositories.

Action: Set up image replication between the pre-scan and post-scan repositories to move images that have passed the security scan.

Why: Replication ensures that only scanned and compliant images are available for deployment, streamlining the process with minimal administrative overhead.

This corresponds to Option C: Configure image replication for each image from the image's pre-scan repository to the image's post-scan repository.
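A hedged sketch of the replication setup. Note that ECR replication is configured at the registry level with repository prefix filters rather than per image, so this assumes the pre-scan and post-scan repositories live in different registries (for example, different Regions or accounts); all IDs and prefixes are placeholders.

```python
import boto3

ecr = boto3.client("ecr")

# Replicate every repository whose name starts with "pre-scan" to the
# destination registry that holds the post-scan copies.
ecr.put_replication_configuration(
    replicationConfiguration={
        "rules": [{
            "destinations": [{
                "region": "us-west-2",
                "registryId": "111122223333",
            }],
            "repositoryFilters": [{
                "filter": "pre-scan",
                "filterType": "PREFIX_MATCH",
            }],
        }]
    }
)
```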
