Amazon DOP-C02 Practice Test - Questions Answers, Page 22

A company has an application that stores data that includes personally identifiable information (PII) in an Amazon S3 bucket. All data is encrypted with AWS Key Management Service (AWS KMS) customer managed keys. All AWS resources are deployed from an AWS CloudFormation template.

A DevOps engineer needs to set up a development environment for the application in a different AWS account. The data in the development environment's S3 bucket needs to be updated once a week from the production environment's S3 bucket.

The company must not move PII from the production environment without anonymizing the PII first. The data in each environment must be encrypted with different KMS customer managed keys.

Which combination of steps should the DevOps engineer take to meet these requirements? (Select TWO.)

A.
Activate Amazon Macie on the S3 bucket in the production account. Create an AWS Step Functions state machine to initiate a discovery job and redact all PII before copying files to the S3 bucket in the development account. Give the state machine tasks decrypt permissions on the KMS key in the production account. Give the state machine tasks encrypt permissions on the KMS key in the development account.
B.
Set up S3 replication between the production S3 bucket and the development S3 bucket. Activate Amazon Macie on the development S3 bucket. Create an AWS Step Functions state machine to initiate a discovery job and redact all PII as the files are copied to the development S3 bucket. Give the state machine tasks encrypt and decrypt permissions on the KMS key in the development account.
C.
Set up an S3 Batch Operations job to copy files from the production S3 bucket to the development S3 bucket. In the development account, configure an AWS Lambda function to redact all PII. Configure S3 Object Lambda to use the Lambda function for S3 GET requests. Give the Lambda function's IAM role encrypt and decrypt permissions on the KMS key in the development account.
D.
Create a development environment from the CloudFormation template in the development account. Schedule an Amazon EventBridge rule to start the AWS Step Functions state machine once a week.
E.
Create a development environment from the CloudFormation template in the development account. Schedule a cron job on an Amazon EC2 instance to run once a week to start the S3 Batch Operations job.
Suggested answer: A, D

Explanation:

Activate Amazon Macie on the Production S3 Bucket:

Macie can identify and protect sensitive data such as PII.

Create a Step Functions state machine to automate data discovery and redaction before copying it to the development environment.

Example Step Functions state machine (a minimal sketch; the Macie classification job is started through the Step Functions AWS SDK service integration):

{
  "Comment": "Anonymize PII and copy data",
  "StartAt": "MacieDiscoveryJob",
  "States": {
    "MacieDiscoveryJob": {
      "Type": "Task",
      "Resource": "arn:aws:states:::aws-sdk:macie2:createClassificationJob",
      "End": true
    }
  }
}

Create a Development Environment from CloudFormation Template:

Deploy the development environment in a new account using the existing CloudFormation template.

Schedule an EventBridge rule to start the Step Functions state machine on a weekly basis.

EventBridge rule example:

{
  "ScheduleExpression": "rate(7 days)",
  "Targets": [
    {
      "Id": "AnonymizeAndCopyData",
      "Arn": "arn:aws:states:<region>:<account-id>:stateMachine:AnonymizeAndCopyData"
    }
  ]
}
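
For reference, a minimal boto3 sketch that creates the same weekly schedule, assuming the state machine and an IAM role that EventBridge can assume already exist (all names and ARNs here are hypothetical):

import boto3

events = boto3.client('events')

# Create (or update) a rule that fires once a week.
events.put_rule(
    Name='WeeklyAnonymizeAndCopy',
    ScheduleExpression='rate(7 days)',
    State='ENABLED',
)

# Point the rule at the Step Functions state machine.
# RoleArn must allow events.amazonaws.com to call states:StartExecution.
events.put_targets(
    Rule='WeeklyAnonymizeAndCopy',
    Targets=[
        {
            'Id': 'AnonymizeAndCopyData',
            'Arn': 'arn:aws:states:us-east-1:111122223333:stateMachine:AnonymizeAndCopyData',
            'RoleArn': 'arn:aws:iam::111122223333:role/EventBridgeInvokeStepFunctions',
        }
    ],
)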

By using Macie for data anonymization and Step Functions for automation, you ensure PII is properly handled before data transfer between environments.

Amazon Macie

AWS Step Functions

AWS CloudFormation Templates

A DevOps engineer needs to implement integration tests into an existing AWS CodePipeline CI/CD workflow for an Amazon Elastic Container Service (Amazon ECS) service. The CI/CD workflow retrieves new application code from an AWS CodeCommit repository and builds a container image. The CI/CD workflow then uploads the container image to Amazon Elastic Container Registry (Amazon ECR) with a new image tag version.

The integration tests must ensure that new versions of the service endpoint are reachable and that various API methods return successful response data. The DevOps engineer has already created an ECS cluster to test the service.

Which combination of steps will meet these requirements with the LEAST management overhead? (Select THREE.)

A.
Add a deploy stage to the pipeline. Configure Amazon ECS as the action provider.
B.
Add a deploy stage to the pipeline. Configure AWS CodeDeploy as the action provider.
C.
Add an appspec.yml file to the CodeCommit repository.
D.
Update the image build pipeline stage to output an imagedefinitions.json file that references the new image tag.
E.
Create an AWS Lambda function that runs connectivity checks and API calls against the service. Integrate the Lambda function with CodePipeline by using a Lambda action stage.
F.
Write a script that runs integration tests against the service. Upload the script to an Amazon S3 bucket. Integrate the script in the S3 bucket with CodePipeline by using an S3 action stage.
Suggested answer: A, D, E

Explanation:

* Add a Deploy Stage to the Pipeline, Configure Amazon ECS as the Action Provider:

By adding a deploy stage to the pipeline and configuring Amazon ECS as the action provider, the pipeline can automatically deploy the new container image to the ECS cluster.

This ensures that the service is updated with the new image tag, making the new version of the service endpoint reachable.

* Update the Image Build Pipeline Stage to Output an imagedefinitions.json File that References the New Image Tag:

The imagedefinitions.json file provides the necessary information about the container images and their tags for the ECS task definitions.

Updating the pipeline to output this file ensures that the correct image version is deployed.

Example imagedefinitions.json:

[
  {
    "name": "container-name",
    "imageUri": "123456789012.dkr.ecr.region.amazonaws.com/my-repo:my-tag"
  }
]

Reference: CodePipeline ECS Deployment

* Create an AWS Lambda Function that Runs Connectivity Checks and API Calls against the Service. Integrate the Lambda Function with CodePipeline by Using a Lambda Action Stage:

The Lambda function can perform the necessary integration tests by making connectivity checks and API calls to the deployed service endpoint.

Integrating this Lambda function into CodePipeline ensures that these tests are run automatically after deployment, providing near-real-time feedback on the new deployment's health.

Example Lambda function integration:

actions:
  - name: TestService
    actionTypeId:
      category: Invoke
      owner: AWS
      provider: Lambda
      version: '1'
    runOrder: 2
    configuration:
      FunctionName: testServiceFunction
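
The Lambda function itself might look like the following minimal sketch. It assumes the service URL arrives through the action's UserParameters; the endpoint paths and function name are hypothetical. Note that a Lambda function invoked by CodePipeline must report success or failure back to the pipeline:

import json
import urllib.request

import boto3

codepipeline = boto3.client('codepipeline')

def lambda_handler(event, context):
    job_id = event['CodePipeline.job']['id']
    # The service endpoint is passed as the action's UserParameters.
    url = event['CodePipeline.job']['data']['actionConfiguration']['configuration']['UserParameters']
    try:
        # Connectivity check: the endpoint must be reachable and return 200.
        with urllib.request.urlopen(f'{url}/health', timeout=10) as resp:
            assert resp.status == 200
        # API check: a sample method must return successful response data.
        with urllib.request.urlopen(f'{url}/api/items', timeout=10) as resp:
            assert json.loads(resp.read())
        codepipeline.put_job_success_result(jobId=job_id)
    except Exception as exc:
        codepipeline.put_job_failure_result(
            jobId=job_id,
            failureDetails={'type': 'JobFailed', 'message': str(exc)},
        )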

These steps ensure that the CI/CD workflow deploys the new container image to ECS, updates the image references, and performs integration tests, meeting the requirements with minimal management overhead.

A company wants to use AWS Systems Manager documents to bootstrap physical laptops for developers. The bootstrap code is stored in GitHub. A DevOps engineer has already created a Systems Manager activation and has installed the Systems Manager agent, with the registration code and activation ID, on all the laptops.

Which set of steps should be taken next?

A.
Configure the Systems Manager document to use the AWS-RunShellScript command to copy the files from GitHub to Amazon S3, then use the aws:downloadContent plugin with a sourceType of S3.
B.
Configure the Systems Manager document to use the aws:configurePackage plugin with an install action and point to the Git repository.
C.
Configure the Systems Manager document to use the aws:downloadContent plugin with a sourceType of GitHub and sourceInfo with the repository details.
D.
Configure the Systems Manager document to use the aws:softwareInventory plugin and run the script from the Git repository.
Suggested answer: C

Explanation:

Configure the Systems Manager Document to Use the aws:downloadContent Plugin with a sourceType of GitHub and sourceInfo with the Repository Details:

The aws:downloadContent plugin can download content from various sources, including GitHub, which is necessary for bootstrapping the laptops with the code stored in the GitHub repository. An example Systems Manager document:

schemaVersion: '2.2'
description: Download and run bootstrap script from GitHub
mainSteps:
  - action: aws:downloadContent
    name: downloadBootstrapScript
    inputs:
      sourceType: GitHub
      sourceInfo: '{"owner": "my-org", "repository": "my-repo", "path": "scripts/bootstrap.sh", "getOptions": "branch:main"}'
      destinationPath: /tmp/bootstrap.sh
  - action: aws:runShellScript
    name: runBootstrapScript
    inputs:
      runCommand:
        - chmod +x /tmp/bootstrap.sh
        - /tmp/bootstrap.sh

This setup ensures that the bootstrap code is downloaded from GitHub and executed on the laptops using Systems Manager.
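
To run the document against the registered laptops, a minimal boto3 sketch (the document name and tag key are hypothetical; hybrid-activated machines appear as managed instances):

import boto3

ssm = boto3.client('ssm')

# Run the custom document on all managed instances tagged as developer laptops.
ssm.send_command(
    DocumentName='BootstrapDeveloperLaptop',
    Targets=[{'Key': 'tag:Role', 'Values': ['developer-laptop']}],
)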

AWS Systems Manager aws-downloadContent Plugin

Running Commands Using Systems Manager

A company hired a penetration tester to simulate an internal security breach. The tester performed port scans on the company's Amazon EC2 instances. The company's security measures did not detect the port scans.

The company needs a solution that automatically provides notification when port scans are performed on EC2 instances. The company creates and subscribes to an Amazon Simple Notification Service (Amazon SNS) topic.

What should the company do next to meet the requirement?

A.
Ensure that Amazon GuardDuty is enabled. Create an Amazon CloudWatch alarm for detected EC2 and port scan findings. Connect the alarm to the SNS topic.
B.
Ensure that Amazon Inspector is enabled. Create an Amazon EventBridge event for detected network reachability findings that indicate port scans. Connect the event to the SNS topic.
C.
Ensure that Amazon Inspector is enabled. Create an Amazon EventBridge event for detected CVEs that cause open port vulnerabilities. Connect the event to the SNS topic.
D.
Ensure that AWS CloudTrail is enabled. Create an AWS Lambda function to analyze the CloudTrail logs for unusual amounts of traffic from an IP address range. Connect the Lambda function to the SNS topic.
Suggested answer: A

Explanation:

* Ensure that Amazon GuardDuty is Enabled:

Amazon GuardDuty is a threat detection service that continuously monitors for malicious activity and unauthorized behavior.

It can detect port scans and generate findings for these events.

* Create an Amazon CloudWatch Alarm for Detected EC2 and Port Scan Findings:

Configure GuardDuty to monitor for port scans and other threats.

Create a CloudWatch alarm that triggers when GuardDuty detects port scan activities.

* Connect the Alarm to the SNS Topic:

The CloudWatch alarm should be configured to send notifications to the SNS topic subscribed by the security team.

This setup ensures that the security team receives near-real-time notifications when a port scan is detected on the EC2 instances.

Example configuration steps:

Enable GuardDuty and ensure it is monitoring the relevant AWS accounts.

Create a CloudWatch alarm:

{
  "AlarmName": "GuardDutyPortScanAlarm",
  "MetricName": "ThreatIntelIndicator",
  "Namespace": "AWS/GuardDuty",
  "Statistic": "Sum",
  "Dimensions": [
    {
      "Name": "FindingType",
      "Value": "Recon:EC2/Portscan"
    }
  ],
  "Period": 300,
  "EvaluationPeriods": 1,
  "Threshold": 1,
  "ComparisonOperator": "GreaterThanOrEqualToThreshold",
  "AlarmActions": ["arn:aws:sns:region:account-id:SecurityAlerts"]
}
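
The same alarm expressed as a boto3 call, as a minimal sketch (the SNS topic ARN is hypothetical):

import boto3

cloudwatch = boto3.client('cloudwatch')

# Alarm on GuardDuty port-scan findings and notify the SNS topic.
cloudwatch.put_metric_alarm(
    AlarmName='GuardDutyPortScanAlarm',
    MetricName='ThreatIntelIndicator',
    Namespace='AWS/GuardDuty',
    Statistic='Sum',
    Dimensions=[{'Name': 'FindingType', 'Value': 'Recon:EC2/Portscan'}],
    Period=300,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator='GreaterThanOrEqualToThreshold',
    AlarmActions=['arn:aws:sns:us-east-1:111122223333:SecurityAlerts'],
)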

Amazon GuardDuty

Creating CloudWatch Alarms for GuardDuty Findings

A company uses Amazon EC2 as its primary compute platform. A DevOps team wants to audit the company's EC2 instances to check whether any prohibited applications have been installed on the EC2 instances.

Which solution will meet these requirements with the MOST operational efficiency?

A.
Configure AWS Systems Manager on each instance. Use AWS Systems Manager Inventory. Use Systems Manager resource data sync to synchronize and store findings in an Amazon S3 bucket. Create an AWS Lambda function that runs when new objects are added to the S3 bucket. Configure the Lambda function to identify prohibited applications.
B.
Configure AWS Systems Manager on each instance. Use Systems Manager Inventory. Create AWS Config rules that monitor changes from Systems Manager Inventory to identify prohibited applications.
C.
Configure AWS Systems Manager on each instance. Use Systems Manager Inventory. Filter a trail in AWS CloudTrail for Systems Manager Inventory events to identify prohibited applications.
D.
Designate Amazon CloudWatch Logs as the log destination for all application instances. Run an automated script across all instances to create an inventory of installed applications. Configure the script to forward the results to CloudWatch Logs. Create a CloudWatch alarm that uses filter patterns to search log data to identify prohibited applications.
Suggested answer: A

Explanation:

* Configure AWS Systems Manager on Each Instance:

AWS Systems Manager provides a unified interface for managing AWS resources. Install the Systems Manager agent on each EC2 instance to enable inventory management and other features.

* Use AWS Systems Manager Inventory:

Systems Manager Inventory collects metadata about your instances and the software installed on them. This data includes information about applications, network configurations, and more.

Enable Systems Manager Inventory on all EC2 instances to gather detailed information about installed applications.

* Use Systems Manager Resource Data Sync to Synchronize and Store Findings in an Amazon S3 Bucket:

Resource Data Sync aggregates inventory data from multiple accounts and regions into a single S3 bucket, making it easier to query and analyze the data.

Configure Resource Data Sync to automatically transfer inventory data to an S3 bucket for centralized storage.

* Create an AWS Lambda Function that Runs When New Objects are Added to the S3 Bucket:

Use an S3 event to trigger a Lambda function whenever new inventory data is added to the S3 bucket.

The Lambda function can parse the inventory data and check for the presence of prohibited applications.

* Configure the Lambda Function to Identify Prohibited Applications:

The Lambda function should be programmed to scan the inventory data for any known prohibited applications and generate alerts or take appropriate actions if such applications are found.

Example Lambda function in Python:

import json
import boto3

def lambda_handler(event, context):
    s3 = boto3.client('s3')
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = event['Records'][0]['s3']['object']['key']
    response = s3.get_object(Bucket=bucket, Key=key)
    inventory_data = json.loads(response['Body'].read().decode('utf-8'))
    prohibited_apps = ['app1', 'app2']
    for instance in inventory_data['Instances']:
        for app in instance['Applications']:
            if app['Name'] in prohibited_apps:
                # Send notification or take action
                print(f"Prohibited application found: {app['Name']} on instance {instance['InstanceId']}")
    return {'statusCode': 200, 'body': json.dumps('Check completed')}

By leveraging AWS Systems Manager Inventory, Resource Data Sync, and Lambda, this solution provides an efficient and automated way to audit EC2 instances for prohibited applications.
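
For the resource data sync step, a minimal boto3 sketch (the sync and bucket names are hypothetical, and the bucket policy must allow Systems Manager to write to the bucket):

import boto3

ssm = boto3.client('ssm')

# Aggregate inventory data from this account and Region into one S3 bucket.
ssm.create_resource_data_sync(
    SyncName='InventoryAuditSync',
    S3Destination={
        'BucketName': 'inventory-audit-bucket',
        'SyncFormat': 'JsonSerDe',
        'Region': 'us-east-1',
    },
)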

AWS Systems Manager Inventory

AWS Systems Manager Resource Data Sync

S3 Event Notifications

AWS Lambda


A company has an AWS Control Tower landing zone. The company's DevOps team creates a workload OU. A development OU and a production OU are nested under the workload OU. The company grants users full access to the company's AWS accounts to deploy applications.

The DevOps team needs to allow only a specific management IAM role to manage the IAM roles and policies of any AWS accounts in only the production OU.

Which combination of steps will meet these requirements? (Select TWO.)

A.
Create an SCP that denies full access with a condition to exclude the management IAM role for the organization root.
B.
Ensure that the FullAWSAccess SCP is applied at the organization root.
C.
Create an SCP that allows IAM-related actions. Attach the SCP to the development OU.
D.
Create an SCP that denies IAM-related actions with a condition to exclude the management IAM role. Attach the SCP to the workload OU.
E.
Create an SCP that denies IAM-related actions with a condition to exclude the management IAM role. Attach the SCP to the production OU.
Suggested answer: B, E

Explanation:

You need to understand how SCP inheritance works in AWS. The way it works for Deny policies is different that allow policies.

Allow polices are passing down to children ONLY if they don't have an allow policy.

Deny policies always pass down to children.

That's why there is always an SCP set to the Root to allow everything by default. If you limit this policy, the whole organization will be limited, not matter what other policies are saying for the other OUs. So it's not A. It's not D because it restricts the wrong OU.
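
A minimal sketch of the deny SCP from option E as a boto3 call (the management role name and OU ID are hypothetical):

import json

import boto3

org = boto3.client('organizations')

# Deny IAM-related actions for every principal except the management role.
scp = {
    'Version': '2012-10-17',
    'Statement': [
        {
            'Effect': 'Deny',
            'Action': 'iam:*',
            'Resource': '*',
            'Condition': {
                'StringNotLike': {
                    'aws:PrincipalArn': 'arn:aws:iam::*:role/ManagementRole'
                }
            },
        }
    ],
}

policy = org.create_policy(
    Name='DenyIamExceptManagementRole',
    Description='Only the management role may manage IAM in production accounts',
    Type='SERVICE_CONTROL_POLICY',
    Content=json.dumps(scp),
)

# Attach the SCP to the production OU only.
org.attach_policy(
    PolicyId=policy['Policy']['PolicySummary']['Id'],
    TargetId='ou-xxxx-produxxx',
)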

A company has an application that runs on Amazon EC2 instances behind an Application Load Balancer (ALB). The EC2 instances are in multiple Availability Zones. The application was misconfigured in a single Availability Zone, which caused a partial outage of the application.

A DevOps engineer made changes to ensure that the unhealthy EC2 instances in one Availability Zone do not affect the healthy EC2 instances in the other Availability Zones. The DevOps engineer needs to test the application's failover and shift where the ALB sends traffic. During failover, the ALB must avoid sending traffic to the Availability Zone where the failure has occurred.

Which solution will meet these requirements?

A.
Turn off cross-zone load balancing on the ALB. Use Amazon Route 53 Application Recovery Controller to start a zonal shift away from the Availability Zone.
B.
Turn off cross-zone load balancing on the ALB's target group. Use Amazon Route 53 Application Recovery Controller to start a zonal shift away from the Availability Zone.
C.
Create an Amazon Route 53 Application Recovery Controller resource set that uses the DNS hostname of the ALB. Start a zonal shift for the resource set away from the Availability Zone.
D.
Create an Amazon Route 53 Application Recovery Controller resource set that uses the ARN of the ALB's target group. Create a readiness check that uses the ElbV2TargetGroupsCanServeTraffic rule.
Suggested answer: B

Explanation:

* Turn off cross-zone load balancing on the ALB's target group:

Cross-zone load balancing distributes traffic evenly across all registered targets in all enabled Availability Zones. Turning this off will ensure that each target group only handles requests from its respective Availability Zone.

To disable cross-zone load balancing:

Go to the Amazon EC2 console.

Navigate to Load Balancers and select the ALB.

Choose the Target Groups tab, select the target group, and then select the Group details tab.

Click on Edit and turn off Cross-zone load balancing.

* Use Amazon Route 53 Application Recovery Controller to start a zonal shift away from the Availability Zone:

Amazon Route 53 Application Recovery Controller provides the ability to control traffic flow to ensure high availability and disaster recovery.

By using Route 53 Application Recovery Controller, you can perform a zonal shift to redirect traffic away from the unhealthy Availability Zone.

To start a zonal shift:

Configure Route 53 Application Recovery Controller by creating a cluster and control panel.

Create routing controls to manage traffic shifts between Availability Zones.

Use the routing control to shift traffic away from the affected Availability Zone.
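
A minimal boto3 sketch of the two steps, using the ARC zonal shift API (all ARNs and the zone ID are hypothetical):

import boto3

elbv2 = boto3.client('elbv2')
zonal_shift = boto3.client('arc-zonal-shift')

# Turn off cross-zone load balancing at the target group level.
elbv2.modify_target_group_attributes(
    TargetGroupArn='arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/app-tg/0123456789abcdef',
    Attributes=[{'Key': 'load_balancing.cross_zone.enabled', 'Value': 'false'}],
)

# Shift traffic away from the impaired Availability Zone for one hour.
zonal_shift.start_zonal_shift(
    resourceIdentifier='arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/my-alb/0123456789abcdef',
    awayFrom='use1-az1',
    expiresIn='1h',
    comment='Test failover away from the impaired AZ',
)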

Disabling cross-zone load balancing

Route 53 Application Recovery Controller

A software team is using AWS CodePipeline to automate its Java application release pipeline. The pipeline consists of a source stage, then a build stage, and then a deploy stage. Each stage contains a single action that has a runOrder value of 1.

The team wants to integrate unit tests into the existing release pipeline. The team needs a solution that deploys only the code changes that pass all unit tests.

Which solution will meet these requirements?

A.
Modify the build stage. Add a test action that has a runOrder value of 1. Use AWS CodeDeploy as the action provider to run unit tests.
B.
Modify the build stage. Add a test action that has a runOrder value of 2. Use AWS CodeBuild as the action provider to run unit tests.
C.
Modify the deploy stage. Add a test action that has a runOrder value of 1. Use AWS CodeDeploy as the action provider to run unit tests.
D.
Modify the deploy stage. Add a test action that has a runOrder value of 2. Use AWS CodeBuild as the action provider to run unit tests.
Suggested answer: B

Explanation:

* Modify the Build Stage to Add a Test Action with a RunOrder Value of 2:

The build stage in AWS CodePipeline can have multiple actions. By adding a test action with a runOrder value of 2, the test action will execute after the initial build action completes.

* Use AWS CodeBuild as the Action Provider to Run Unit Tests:

AWS CodeBuild is a fully managed build service that compiles source code, runs tests, and produces software packages.

Using CodeBuild to run unit tests ensures that the tests are executed in a controlled environment and that only the code changes that pass the unit tests proceed to the deploy stage.

Example configuration in CodePipeline:

{
  "name": "BuildStage",
  "actions": [
    {
      "name": "Build",
      "actionTypeId": {
        "category": "Build",
        "owner": "AWS",
        "provider": "CodeBuild",
        "version": "1"
      },
      "runOrder": 1
    },
    {
      "name": "Test",
      "actionTypeId": {
        "category": "Test",
        "owner": "AWS",
        "provider": "CodeBuild",
        "version": "1"
      },
      "runOrder": 2
    }
  ]
}

By integrating the unit tests into the build stage and ensuring they run after the build process, the pipeline guarantees that only code changes passing all unit tests are deployed.

AWS CodePipeline

AWS CodeBuild

Using CodeBuild with CodePipeline

A company has set up AWS CodeArtifact repositories with public upstream repositories. The company's development team consumes open source dependencies from the repositories in the company's internal network.

The company's security team recently discovered a critical vulnerability in the most recent version of a package that the development team consumes. The security team has produced a patched version to fix the vulnerability. The company needs to prevent the vulnerable version from being downloaded. The company also needs to allow the security team to publish the patched version.

Which combination of steps will meet these requirements? (Select TWO.)

A.
Update the status of the affected CodeArtifact package version to unlisted.
B.
Update the status of the affected CodeArtifact package version to deleted.
C.
Update the status of the affected CodeArtifact package version to archived.
D.
Update the CodeArtifact package origin control settings to allow direct publishing and to block upstream operations.
E.
Update the CodeArtifact package origin control settings to block direct publishing and to allow upstream operations.
Suggested answer: B, D

Explanation:

Update the status of the affected CodeArtifact package version to deleted:

Deleting the vulnerable package version prevents it from being available for download by any users or systems, ensuring that the compromised version is not consumed.

Update the CodeArtifact package origin control settings to allow direct publishing and to block upstream operations:

By allowing direct publishing, the security team can publish the patched version of the package directly to the CodeArtifact repository.

Blocking upstream operations prevents the repository from automatically fetching and serving the vulnerable package version from upstream public repositories.

By deleting the vulnerable version and configuring the origin control settings to allow direct publishing and block upstream operations, the company ensures that only the patched version is available and the vulnerable version cannot be downloaded.
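
A minimal boto3 sketch of both steps, assuming an npm package (the domain, repository, package, and version names are hypothetical):

import boto3

codeartifact = boto3.client('codeartifact')

# Delete the vulnerable version so it can no longer be downloaded.
codeartifact.delete_package_versions(
    domain='my-domain',
    repository='shared-packages',
    format='npm',
    package='vulnerable-package',
    versions=['1.2.3'],
)

# Allow the security team to publish directly and block upstream fetches,
# so the public (vulnerable) version cannot be pulled back in.
codeartifact.put_package_origin_configuration(
    domain='my-domain',
    repository='shared-packages',
    format='npm',
    package='vulnerable-package',
    restrictions={'publish': 'ALLOW', 'upstream': 'BLOCK'},
)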

Managing Package Versions in CodeArtifact

Package Origin Controls in CodeArtifact

A company deploys an application to Amazon EC2 instances. The application runs Amazon Linux 2 and uses AWS CodeDeploy. The application has the following file structure for its code repository:

The appspec.yml file has the following contents in the files section:

What will the result be for the deployment of the config.txt file?

A.
The config.txt file will be deployed to only /var/www/html/config/config.txt.
B.
The config.txt file will be deployed to /usr/local/src/config.txt and to /var/www/html/config/config.txt.
C.
The config.txt file will be deployed to only /usr/local/src/config.txt.
D.
The config.txt file will be deployed to /usr/local/src/config.txt and to /var/www/html/application/web/config.txt.
Suggested answer: C

Explanation:

Deployment of config.txt file based on the appspec.yml:

The appspec.yml file specifies that config/config.txt should be copied to /usr/local/src/config.txt.

The source: / directive in the appspec.yml indicates that the entire directory structure starting from the root of the application source should be copied to the specified destination, which is /var/www/html.

Result of the Deployment:

The config.txt file will be specifically deployed to /usr/local/src/config.txt as per the explicit file mapping.

The entire directory structure including application/web will be copied to /var/www/html, but this does not include config/config.txt since it has a specific destination defined.

Thus, the config.txt file will be deployed only to /usr/local/src/config.txt.

Therefore, the correct answer is:

C. The config.txt file will be deployed to only /usr/local/src/config.txt.

AWS CodeDeploy AppSpec File Reference

AWS CodeDeploy Deployment Process
