Amazon SCS-C02 Practice Test - Questions Answers, Page 32


A company manages multiple AWS accounts using AWS Organizations. The company's security team notices that some member accounts are not sending AWS CloudTrail logs to a centralized Amazon S3 logging bucket. The security team wants to ensure there is at least one trail configured for all existing accounts and for any account that is created in the future.

Which set of actions should the security team implement to accomplish this?

A. Create a new trail and configure it to send CloudTrail logs to Amazon S3. Use Amazon EventBridge to send a notification if a trail is deleted or stopped.
B. Deploy an AWS Lambda function in every account to check if there is an existing trail and create a new trail, if needed.
C. Edit the existing trail in the Organizations management account and apply it to the organization.
D. Create an SCP to deny the cloudtrail:Delete* and cloudtrail:Stop* actions. Apply the SCP to all accounts.
Suggested answer: C

Explanation:

The correct answer is C. Edit the existing trail in the Organizations management account and apply it to the organization.

The reason is that this is the simplest and most effective way to ensure that there is at least one trail configured for all existing accounts and for any account that is created in the future. According to the AWS documentation, ''If you have created an organization in AWS Organizations, you can create a trail that logs all events for all AWS accounts in that organization. This is sometimes called an organization trail.'' The documentation also states that ''The management account for the organization can edit an existing trail in their account, and apply it to an organization, making it an organization trail. Organization trails log events for the management account and all member accounts in the organization.'' Therefore, by editing the existing trail in the management account and applying it to the organization, the security team can ensure that all accounts send CloudTrail logs to a centralized S3 logging bucket.
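
As an illustration, the management account can convert its existing trail with a single UpdateTrail call. The following is a minimal boto3 sketch, assuming credentials for the Organizations management account; the trail and bucket names are hypothetical:

```python
# A minimal sketch, assuming boto3 credentials for the Organizations
# management account; the trail and bucket names are hypothetical.
import boto3

cloudtrail = boto3.client("cloudtrail")

# Applying the existing trail to the organization makes it an organization
# trail that logs events for all current and future member accounts.
cloudtrail.update_trail(
    Name="management-account-trail",        # hypothetical trail name
    S3BucketName="central-logging-bucket",  # hypothetical centralized bucket
    IsMultiRegionTrail=True,
    IsOrganizationTrail=True,
)
```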

The other options are incorrect because:

A) Create a new trail and configure it to send CloudTrail logs to Amazon S3. Use Amazon EventBridge to send a notification if a trail is deleted or stopped. This option is not sufficient to ensure that there is at least one trail configured for all accounts, because it does not prevent users from deleting or stopping the trail in their accounts. Even if EventBridge sends a notification, the security team would have to manually restore or restart the trail, which is not efficient or scalable.

B) Deploy an AWS Lambda function in every account to check if there is an existing trail and create a new trail, if needed. This option is not optimal because it requires deploying and maintaining a Lambda function in every account, which adds complexity and cost. Moreover, it does not prevent users from deleting or stopping the trail after it is created by the Lambda function.

D) Create an SCP to deny the cloudtrail:Delete* and cloudtrail:Stop* actions. Apply the SCP to all accounts. This option is not sufficient to ensure that there is at least one trail configured for all accounts, because it does not create or apply a trail in the first place. It only prevents users from deleting or stopping an existing trail, but it does not guarantee that a trail exists in every account.

A company's data scientists want to create artificial intelligence and machine learning (AI/ML) training models by using Amazon SageMaker. The training models will use large datasets in an Amazon S3 bucket. The datasets contain sensitive information.

On average, the data scientists need 30 days to train models. The S3 bucket has been secured appropriately. The company's data retention policy states that all data that is older than 45 days must be removed from the S3 bucket.

Which action should a security engineer take to enforce this data retention policy?

A. Configure an S3 Lifecycle rule on the S3 bucket to delete objects after 45 days.
B. Create an AWS Lambda function to check the last-modified date of the S3 objects and delete objects that are older than 45 days. Create an S3 event notification to invoke the Lambda function for each PutObject operation.
C. Create an AWS Lambda function to check the last-modified date of the S3 objects and delete objects that are older than 45 days. Create an Amazon EventBridge rule to invoke the Lambda function each month.
D. Configure S3 Intelligent-Tiering on the S3 bucket to automatically transition objects to another storage class.
Suggested answer: A

Explanation:

The correct answer is A. Configure an S3 Lifecycle rule on the S3 bucket to delete objects after 45 days.

The reason is that this is the simplest and most effective way to enforce the data retention policy. According to the AWS documentation, ''To manage your objects so that they are stored cost effectively throughout their lifecycle, configure their Amazon S3 Lifecycle. An S3 Lifecycle configuration is a set of rules that define actions that Amazon S3 applies to a group of objects. There are two types of actions: Transition actions and Expiration actions.'' The documentation also states that ''Expiration actions define when objects expire. Amazon S3 deletes expired objects on your behalf.'' Therefore, by configuring an S3 Lifecycle rule on the S3 bucket to delete objects after 45 days, the security engineer can ensure that the data is removed from the S3 bucket according to the company's policy.
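
For instance, such a rule can be applied with one boto3 call. This is a minimal sketch, assuming the rule should cover every object; the bucket name is hypothetical:

```python
# A minimal sketch, assuming boto3; the bucket name is hypothetical.
import boto3

s3 = boto3.client("s3")

# An expiration action deletes objects 45 days after creation, so the
# retention policy is enforced without any custom code.
s3.put_bucket_lifecycle_configuration(
    Bucket="sagemaker-training-datasets",  # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-after-45-days",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to every object
                "Expiration": {"Days": 45},
            }
        ]
    },
)
```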

The other options are incorrect because:

B) Create an AWS Lambda function to check the last-modified date of the S3 objects and delete objects that are older than 45 days. Create an S3 event notification to invoke the Lambda function for each PutObject operation. This option is not optimal because it requires deploying and maintaining a Lambda function, which adds complexity and cost. Moreover, it does not guarantee that the data is deleted exactly after 45 days, since the Lambda function is triggered only when a new object is put into the S3 bucket. If there are no new objects for a long period of time, the Lambda function will not run and the data will not be deleted.

C) Create an AWS Lambda function to check the last-modified date of the S3 objects and delete objects that are older than 45 days. Create an Amazon EventBridge rule to invoke the Lambda function each month. This option is not optimal because it requires deploying and maintaining a Lambda function, which adds complexity and cost. Moreover, it does not guarantee that the data is deleted exactly after 45 days, since the Lambda function is triggered only once a month. Objects that pass the 45-day threshold between runs will not be deleted until the next monthly invocation.

D) Configure S3 Intelligent-Tiering on the S3 bucket to automatically transition objects to another storage class. This option is not sufficient to enforce the data retention policy, because it does not delete the data from the S3 bucket. It only moves the data to a less expensive storage class based on access patterns. According to the AWS documentation, ''S3 Intelligent-Tiering optimizes storage costs by automatically moving data between two access tiers, frequent access and infrequent access, when access patterns change.'' However, this feature does not expire or delete the data after a certain period of time.

A company that operates in a hybrid cloud environment must meet strict compliance requirements. The company wants to create a report that includes evidence from on-premises workloads alongside evidence from AWS resources. A security engineer must implement a solution to collect, review, and manage the evidence to demonstrate compliance with company policy.

Which solution will meet these requirements?

A. Create an assessment in AWS Audit Manager from a prebuilt framework or a custom framework. Upload manual evidence from the on-premises workloads. Add the evidence to the assessment. Generate an assessment report after Audit Manager collects the necessary evidence from the AWS resources.
B. Install the Amazon CloudWatch agent on the on-premises workloads. Use AWS Config to deploy a conformance pack from a sample conformance pack template or a custom YAML template. Generate an assessment report after AWS Config identifies noncompliant workloads and resources.
C. Set up the appropriate security standard in AWS Security Hub. Upload manual evidence from the on-premises workloads. Wait for Security Hub to collect the evidence from the AWS resources. Download the list of controls as a .csv file.
D. Install the Amazon CloudWatch agent on the on-premises workloads. Create a CloudWatch dashboard to monitor the on-premises workloads and the AWS resources. Run a query on the workloads and resources. Download the results.
Suggested answer: A

Explanation:

The reason is that this solution will meet the requirements of collecting, reviewing, and managing the evidence from both on-premises and AWS resources to demonstrate compliance with company policy. According to the AWS documentation, ''AWS Audit Manager helps you continuously audit your AWS usage to simplify how you manage risk and compliance with regulations and industry standards. AWS Audit Manager makes it easier to evaluate whether your policies, procedures, and activities---also known as controls---are operating as intended.'' The documentation also states that ''In addition to the evidence that Audit Manager collects from your AWS environment, you can also upload and centrally manage evidence from your on-premises or multicloud environment.'' Therefore, by creating an assessment in AWS Audit Manager, the security engineer can use a prebuilt or custom framework that contains the relevant controls for the company policy, upload manual evidence from the on-premises workloads, and add the evidence to the assessment. After Audit Manager collects the necessary evidence from the AWS resources, the security engineer can generate an assessment report that includes all the evidence from both sources.
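
As a rough boto3 sketch of this workflow (all identifiers are hypothetical placeholders, and an assessment must already exist), manual evidence uploaded to S3 can be attached to a control, and a combined report can then be generated:

```python
# A rough sketch, assuming boto3 and an existing Audit Manager assessment;
# all identifiers below are hypothetical placeholders.
import boto3

auditmanager = boto3.client("auditmanager")

# Attach manual evidence (already uploaded to S3) from the on-premises
# workloads to a control in the assessment.
auditmanager.batch_import_evidence_to_assessment_control(
    assessmentId="11111111-2222-3333-4444-555555555555",  # hypothetical
    controlSetId="example-control-set",                   # hypothetical
    controlId="aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",     # hypothetical
    manualEvidence=[
        {"s3ResourcePath": "s3://onprem-evidence/firewall-review.pdf"}
    ],
)

# Once Audit Manager has collected evidence from the AWS resources,
# generate the combined assessment report.
auditmanager.create_assessment_report(
    name="quarterly-compliance-report",  # hypothetical
    assessmentId="11111111-2222-3333-4444-555555555555",
)
```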

The other options are incorrect because:

B) Install the Amazon CloudWatch agent on the on-premises workloads. Use AWS Config to deploy a conformance pack from a sample conformance pack template or a custom YAML template. Generate an assessment report after AWS Config identifies noncompliant workloads and resources. This option is not sufficient to meet the requirements, because it does not collect or manage the evidence from both sources. It only monitors and evaluates the configuration compliance of the workloads and resources using AWS Config rules. According to the AWS documentation, ''A conformance pack is a collection of AWS Config rules and remediation actions that can be easily deployed as a single entity in an account and a Region or across an organization in AWS Organizations.'' However, a conformance pack does not provide a way to upload or include manual evidence from the on-premises workloads, nor does it generate an assessment report that contains all the evidence.

C) Set up the appropriate security standard in AWS Security Hub. Upload manual evidence from the on-premises workloads. Wait for Security Hub to collect the evidence from the AWS resources. Download the list of controls as a .csv file. This option is not optimal to meet the requirements, because it does not provide a comprehensive or audit-ready report that contains all the evidence. It only provides a list of controls and their compliance status in a .csv file format. According to the AWS documentation, ''Security Hub provides you with a comprehensive view of your security state within AWS and helps you check your environment against security industry standards and best practices.'' However, Security Hub does not provide a way to upload or include manual evidence from the on-premises workloads, nor does it generate an assessment report that contains all the evidence.

D) Install the Amazon CloudWatch agent on the on-premises workloads. Create a CloudWatch dashboard to monitor the on-premises workloads and the AWS resources. Run a query on the workloads and resources. Download the results. This option is not sufficient to meet the requirements, because it does not collect or manage the evidence from both sources. It only monitors and analyzes the metrics and logs of the workloads and resources using CloudWatch. According to the AWS documentation, ''Amazon CloudWatch is a monitoring and observability service built for DevOps engineers, developers, site reliability engineers (SREs), and IT managers.'' However, CloudWatch does not provide a way to upload or include manual evidence from the on-premises workloads, nor does it generate an assessment report that contains all the evidence.

A company that uses AWS Organizations is using AWS IAM Identity Center (AWS Single Sign-On) to administer access to AWS accounts. A security engineer is creating a custom permission set in IAM Identity Center. The company will use the permission set across multiple accounts. An AWS managed policy and a customer managed policy are attached to the permission set. The security engineer has full administrative permissions and is operating in the management account.

When the security engineer attempts to assign the permission set to an IAM Identity Center user who has access to multiple accounts, the assignment fails.

What should the security engineer do to resolve this failure?

A. Create the customer managed policy in every account where the permission set is assigned. Give the customer managed policy the same name and same permissions in each account.
B. Remove either the AWS managed policy or the customer managed policy from the permission set. Create a second permission set that includes the removed policy. Apply the permission sets separately to the user.
C. Evaluate the logic of the AWS managed policy and the customer managed policy. Resolve any policy conflicts in the permission set before deployment.
D. Do not add the new permission set to the user. Instead, edit the user's existing permission set to include the AWS managed policy and the customer managed policy.
Suggested answer: A

Explanation:

https://docs.aws.amazon.com/singlesignon/latest/userguide/howtocmp.html

'Before you assign your permission set with IAM policies, you must prepare your member account. The name of an IAM policy in your member account must be a case-sensitive match to the name of the policy in your management account. IAM Identity Center fails to assign the permission set if the policy doesn't exist in your member account.'
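
A minimal boto3 sketch of the fix follows, assuming one session per member account plus a session for the management account; the policy name, policy document, and ARNs are hypothetical:

```python
# A minimal sketch, assuming boto3; names, ARNs, and the policy document
# are hypothetical. The customer managed policy must exist, with the same
# name and path, in every account where the permission set is assigned.
import json
import boto3

# Step 1: create the identically named policy in each member account
# (one session per account, e.g. via assumed roles).
iam = boto3.client("iam")  # session for a member account
iam.create_policy(
    PolicyName="DataTeamBoundary",  # must match the reference exactly
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{"Effect": "Allow",
                       "Action": "s3:ListAllMyBuckets",
                       "Resource": "*"}],
    }),
)

# Step 2: reference the policy by name from the permission set.
sso_admin = boto3.client("sso-admin")  # session for the management account
sso_admin.attach_customer_managed_policy_reference_to_permission_set(
    InstanceArn="arn:aws:sso:::instance/ssoins-EXAMPLE",  # hypothetical
    PermissionSetArn="arn:aws:sso:::permissionSet/ssoins-EXAMPLE/ps-EXAMPLE",
    CustomerManagedPolicyReference={"Name": "DataTeamBoundary", "Path": "/"},
)
```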

A company is running an application on Amazon EC2 instances in an Auto Scaling group. The application stores logs locally. A security engineer noticed that logs were lost after a scale-in event. The security engineer needs to recommend a solution to ensure the durability and availability of log data. All logs must be kept for a minimum of 1 year for auditing purposes. What should the security engineer recommend?

A. Within the Auto Scaling lifecycle, add a hook to create and attach an Amazon Elastic Block Store (Amazon EBS) log volume each time an EC2 instance is created. When the instance is terminated, the EBS volume can be reattached to another instance for log review.
B. Create an Amazon Elastic File System (Amazon EFS) file system and add a command in the user data section of the Auto Scaling launch template to mount the EFS file system during EC2 instance creation. Configure a process on the instance to copy the logs once a day from an instance Amazon Elastic Block Store (Amazon EBS) volume to a directory in the EFS file system.
C. Add an Amazon CloudWatch agent into the AMI used in the Auto Scaling group. Configure the CloudWatch agent to send the logs to Amazon CloudWatch Logs for review.
D. Within the Auto Scaling lifecycle, add a lifecycle hook at the terminating state transition and alert the engineering team by using a lifecycle notification to Amazon Simple Notification Service (Amazon SNS). Configure the hook to remain in the Terminating:Wait state for 1 hour to allow manual review of the security logs prior to instance termination.
Suggested answer: C

Explanation:

Option C is the best solution to ensure the durability and availability of log data from EC2 instances in an Auto Scaling group. By using an Amazon CloudWatch agent, the logs can be sent to Amazon CloudWatch Logs, which is a fully managed service that can store, monitor, and analyze log data. CloudWatch Logs also allows you to set retention policies for your log groups, so you can keep the logs for a minimum of 1 year for auditing purposes. CloudWatch Logs also supports encryption, access control, and compliance features to protect your log data.
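
For example, the 1-year retention requirement could be enforced with a retention policy on the log group; a minimal boto3 sketch follows, with a hypothetical log group name:

```python
# A minimal sketch, assuming boto3; the log group name is hypothetical.
import boto3

logs = boto3.client("logs")

# Retain the application logs for 365 days to satisfy the
# one-year audit requirement.
logs.put_retention_policy(
    logGroupName="/app/frontend",  # hypothetical log group
    retentionInDays=365,
)
```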

A company uses Amazon EC2 instances to host frontend services behind an Application Load Balancer. Amazon Elastic Block Store (Amazon EBS) volumes are attached to the EC2 instances. The company uses Amazon S3 buckets to store large files for images and music.

The company has implemented a security architecture on AWS to prevent, identify, and isolate potential ransomware attacks. The company now wants to further reduce risk.

A security engineer must develop a disaster recovery solution that can recover to normal operations if an attacker bypasses preventive and detective controls. The solution must meet an RPO of 1 hour.

Which solution will meet these requirements?

A. Use AWS Backup to create backups of the EC2 instances and S3 buckets every hour. Create AWS CloudFormation templates that replicate existing architecture components. Use AWS CodeCommit to store the CloudFormation templates alongside application configuration code.
B. Use AWS Backup to create backups of the EBS volumes and S3 objects every day. Use Amazon Security Lake to create a centralized data lake for AWS CloudTrail logs and VPC flow logs. Use the logs for automated response.
C. Use Amazon Security Lake to create a centralized data lake for AWS CloudTrail logs and VPC flow logs. Use the logs for automated response. Enable AWS Security Hub to establish a single location for recovery procedures. Create AWS CloudFormation templates that replicate existing architecture components. Use AWS CodeCommit to store the CloudFormation templates alongside application configuration code.
D. Create EBS snapshots every 4 hours. Enable Amazon GuardDuty Malware Protection. Create automation to immediately restore the most recent snapshot for any EC2 instances that produce an Execution:EC2/MaliciousFile finding in GuardDuty.
Suggested answer: A

Explanation:

The correct answer is A because it meets the RPO of 1 hour by creating backups of the EC2 instances and S3 buckets every hour. It also uses AWS CloudFormation templates to replicate the existing architecture components and AWS CodeCommit to store the templates and the application configuration code. This way, the security engineer can quickly restore the environment in case of a ransomware attack.

The other options are incorrect because they do not meet the RPO of 1 hour or they do not provide a complete disaster recovery solution. Option B only creates backups of the EBS volumes and S3 objects every day, which is not frequent enough to meet the RPO. Option C does not create any backups of the EC2 instances or the S3 buckets, which are essential for the frontend services. Option D only creates EBS snapshots every 4 hours, which is also not frequent enough to meet the RPO. Additionally, option D relies on Amazon GuardDuty to detect and respond to ransomware attacks, which may not be effective if the attacker bypasses the preventive and detective controls.
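
As an illustration, an hourly backup rule might look like the following boto3 sketch; the plan name, vault name, and lifecycle setting are hypothetical assumptions:

```python
# A minimal sketch, assuming boto3 and an existing backup vault;
# the plan name, vault name, and lifecycle are hypothetical.
import boto3

backup = boto3.client("backup")

# An hourly schedule keeps the recovery point objective at 1 hour.
backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "hourly-ransomware-recovery",  # hypothetical
        "Rules": [
            {
                "RuleName": "hourly",
                "TargetBackupVaultName": "central-vault",   # hypothetical
                "ScheduleExpression": "cron(0 * ? * * *)",  # top of every hour
                "Lifecycle": {"DeleteAfterDays": 35},       # hypothetical
            }
        ],
    }
)
```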

A company has an application that runs on Amazon EC2 instances behind an Application Load Balancer (ALB). The instances are in an Amazon EC2 Auto Scaling group and are attached to Amazon Elastic Block Store (Amazon EBS) volumes.

A security engineer needs to preserve all forensic evidence from one of the instances.

Which order of steps should the security engineer use to meet this requirement?

A. Take an EBS volume snapshot of the instance and store the snapshot in an Amazon S3 bucket. Take a memory snapshot of the instance and store the snapshot in an S3 bucket. Detach the instance from the Auto Scaling group. Deregister the instance from the ALB. Stop the instance.
B. Take a memory snapshot of the instance and store the snapshot in an Amazon S3 bucket. Stop the instance. Take an EBS volume snapshot of the instance and store the snapshot in an S3 bucket. Detach the instance from the Auto Scaling group. Deregister the instance from the ALB.
C. Detach the instance from the Auto Scaling group. Deregister the instance from the ALB. Take an EBS volume snapshot of the instance and store the snapshot in an Amazon S3 bucket. Take a memory snapshot of the instance and store the snapshot in an S3 bucket. Stop the instance.
D. Detach the instance from the Auto Scaling group. Deregister the instance from the ALB. Stop the instance. Take a memory snapshot of the instance and store the snapshot in an Amazon S3 bucket. Take an EBS volume snapshot of the instance and store the snapshot in an S3 bucket.
Suggested answer: B

Explanation:

The correct answer is B because it preserves the forensic evidence from the instance in the correct order. The first step is to take a memory snapshot of the instance and store it in an S3 bucket, as memory data is volatile and can be lost when the instance is stopped. The second step is to stop the instance, which will prevent any further changes to the EBS volume. The third step is to take an EBS volume snapshot of the instance and store it in an S3 bucket, which will capture the disk state of the instance. The last two steps are to detach the instance from the Auto Scaling group and deregister it from the ALB, which will isolate the instance from the rest of the application.

The other options are incorrect because they do not preserve the forensic evidence in the correct order. Option A takes the EBS volume snapshot before the memory snapshot, which can result in inconsistent data. Option C detaches and deregisters the instance before taking any snapshots, which can affect the availability of the application. Option D stops the instance before taking the memory snapshot, which can cause the loss of memory data.
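
A rough boto3 sketch of the AWS-side steps in this order follows; all IDs and names are hypothetical, and the memory snapshot itself requires an in-guest memory-acquisition tool, which is not shown:

```python
# A rough sketch of the AWS-side steps, assuming boto3; all IDs and names
# are hypothetical. The memory snapshot must be captured first with an
# in-guest memory-acquisition tool, which is not shown here.
import boto3

ec2 = boto3.client("ec2")
autoscaling = boto3.client("autoscaling")
elbv2 = boto3.client("elbv2")

instance_id = "i-0123456789abcdef0"  # hypothetical

# After the memory snapshot: stop the instance to freeze the disk state.
ec2.stop_instances(InstanceIds=[instance_id])

# Snapshot the attached EBS volume to preserve the disk evidence.
ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",  # hypothetical
    Description="Forensic evidence - incident 1234",  # hypothetical
)

# Isolate the instance from the application. (In practice, the group's
# ReplaceUnhealthy process may need to be suspended so the ASG does not
# replace the stopped instance before it is detached.)
autoscaling.detach_instances(
    InstanceIds=[instance_id],
    AutoScalingGroupName="frontend-asg",  # hypothetical
    ShouldDecrementDesiredCapacity=True,
)
elbv2.deregister_targets(
    TargetGroupArn=(
        "arn:aws:elasticloadbalancing:us-east-1:111122223333:"
        "targetgroup/frontend/0123456789abcdef"  # hypothetical
    ),
    Targets=[{"Id": instance_id}],
)
```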

An AWS Lambda function was misused to alter data, and a security engineer must identify who invoked the function and what output was produced. The engineer cannot find any logs created by the Lambda function in Amazon CloudWatch Logs.

Which of the following explains why the logs are not available?

A. The execution role for the Lambda function did not grant permissions to write log data to CloudWatch Logs.
B. The Lambda function was invoked by using Amazon API Gateway, so the logs are not stored in CloudWatch Logs.
C. The execution role for the Lambda function did not grant permissions to write to the Amazon S3 bucket where CloudWatch Logs stores the logs.
D. The version of the Lambda function that was invoked was not current.
Suggested answer: A
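
Explanation:

The correct answer is A. A Lambda function writes its logs to CloudWatch Logs only if its execution role grants the logs:CreateLogGroup, logs:CreateLogStream, and logs:PutLogEvents actions; without them, no log group or log stream is ever created, so no logs appear. A minimal boto3 sketch of granting the missing permissions follows (the role and policy names are hypothetical); attaching the AWS managed policy AWSLambdaBasicExecutionRole achieves the same result.

```python
# A minimal sketch, assuming boto3; the role and policy names are
# hypothetical. These are the logging permissions that the AWS managed
# policy AWSLambdaBasicExecutionRole also provides.
import json
import boto3

iam = boto3.client("iam")

iam.put_role_policy(
    RoleName="misused-function-role",  # hypothetical execution role
    PolicyName="allow-cloudwatch-logging",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents",
            ],
            "Resource": "arn:aws:logs:*:*:*",
        }],
    }),
)
```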

A company hosts an application on Amazon EC2 instances. The application also uses Amazon S3 and Amazon Simple Queue Service (Amazon SQS). The application is behind an Application Load Balancer (ALB) and scales with AWS Auto Scaling.

The company's security policy requires the use of least privilege access, which has been applied to all existing AWS resources. A security engineer needs to implement private connectivity to AWS services.

Which combination of steps should the security engineer take to meet this requirement? (Select THREE.)

A. Use an interface VPC endpoint for Amazon SQS.
B. Configure a connection to Amazon S3 through AWS Transit Gateway.
C. Use a gateway VPC endpoint for Amazon S3.
D. Modify the IAM role applied to the EC2 instances in the Auto Scaling group to allow outbound traffic to the interface endpoints.
E. Modify the endpoint policies on all VPC endpoints. Specify the SQS and S3 resources that the application uses.
F. Configure a connection to Amazon S3 through AWS Firewall Manager.
Suggested answer: A, C, E

Explanation:

The correct answer is A, C, and E because they provide the most secure and efficient way to implement private connectivity to AWS services. Using interface VPC endpoints for Amazon SQS and gateway VPC endpoints for Amazon S3 allows the application to access these services without using public IP addresses or internet gateways. Modifying the endpoint policies on all VPC endpoints enables the security engineer to specify the SQS and S3 resources that the application uses and restrict access to other resources.

The other options are incorrect because they do not provide private connectivity to AWS services or they introduce unnecessary complexity or cost. Option B is incorrect because AWS Transit Gateway is used to connect multiple VPCs and on-premises networks, not to connect to AWS services. Option D is incorrect because modifying the IAM role applied to the EC2 instances is not sufficient to allow outbound traffic to the interface endpoints. The security group and route table associated with the interface endpoints also need to be configured. Option F is incorrect because AWS Firewall Manager is used to centrally manage firewall rules across multiple accounts and resources, not to connect to AWS services.
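
A minimal boto3 sketch of these endpoints follows, assuming the us-east-1 Region; all resource IDs and the bucket ARNs in the endpoint policy (illustrating option E's restriction) are hypothetical:

```python
# A minimal sketch, assuming boto3 in us-east-1; all resource IDs and the
# bucket ARNs in the endpoint policy are hypothetical.
import json
import boto3

ec2 = boto3.client("ec2")

# Interface endpoint for Amazon SQS (an ENI with a private IP in the VPC).
ec2.create_vpc_endpoint(
    VpcId="vpc-0123456789abcdef0",              # hypothetical
    VpcEndpointType="Interface",
    ServiceName="com.amazonaws.us-east-1.sqs",
    SubnetIds=["subnet-0123456789abcdef0"],     # hypothetical
    SecurityGroupIds=["sg-0123456789abcdef0"],  # hypothetical
    PrivateDnsEnabled=True,
)

# Gateway endpoint for Amazon S3 (a route-table entry, no ENI), with an
# endpoint policy that limits access to the application's buckets (option E).
ec2.create_vpc_endpoint(
    VpcId="vpc-0123456789abcdef0",
    VpcEndpointType="Gateway",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],    # hypothetical
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::app-media-bucket",    # hypothetical
                "arn:aws:s3:::app-media-bucket/*",
            ],
        }],
    }),
)
```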

A company suspects that an attacker has exploited an overly permissive role to export credentials from Amazon EC2 instance metadata. The company uses Amazon GuardDuty and AWS Audit Manager. The company has enabled AWS CloudTrail logging and Amazon CloudWatch logging for all of its AWS accounts.

A security engineer must determine if the credentials were used to access the company's resources from an external account.

Which solution will provide this information?

A. Review GuardDuty findings to find InstanceCredentialExfiltration events.
B. Review assessment reports in the Audit Manager console to find InstanceCredentialExfiltration events.
C. Review CloudTrail logs for GetSessionToken API calls to AWS Security Token Service (AWS STS) that come from an account ID from outside the company.
D. Review CloudWatch logs for GetSessionToken API calls to AWS Security Token Service (AWS STS) that come from an account ID from outside the company.
Suggested answer: A

Explanation:

The correct answer is A because GuardDuty can detect and alert on EC2 instance credential exfiltration events. These events indicate that the credentials obtained from the EC2 instance metadata service are being used from an IP address that is owned by a different AWS account than the one that owns the instance. GuardDuty can also provide details such as the source and destination IP addresses, the AWS account ID of the attacker, and the API calls made using the exfiltrated credentials.

The other options are incorrect because they do not provide the information needed to determine if the credentials were used to access the company's resources from an external account. Option B is incorrect because Audit Manager does not generate InstanceCredentialExfiltration events. Audit Manager is a service that helps you continuously audit your AWS usage to simplify how you assess risk and compliance with regulations and industry standards. Option C is incorrect because CloudTrail logs do not show the account ID of the caller for GetSessionToken API calls to AWS STS. CloudTrail logs show the account ID of the identity whose credentials were used to call the API. Option D is incorrect because CloudWatch logs do not show the GetSessionToken API calls to AWS STS by default. CloudWatch logs can show the API calls made by AWS Lambda functions, Amazon API Gateway, and other AWS services that integrate with CloudWatch.
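
For example, the relevant findings could be retrieved with a filtered ListFindings call; a minimal boto3 sketch with a hypothetical detector ID:

```python
# A minimal sketch, assuming boto3 and an existing GuardDuty detector;
# the detector ID is hypothetical.
import boto3

guardduty = boto3.client("guardduty")

# Filter findings to credential-exfiltration events, which indicate that
# instance credentials were used from outside the owning account.
response = guardduty.list_findings(
    DetectorId="12abc34d567e8fa901bc2d34eexample",  # hypothetical
    FindingCriteria={
        "Criterion": {
            "type": {
                "Eq": [
                    "UnauthorizedAccess:IAMUser/"
                    "InstanceCredentialExfiltration.OutsideAWS"
                ]
            }
        }
    },
)
print(response["FindingIds"])
```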
