Amazon SCS-C02 Practice Test - Questions Answers, Page 31

A company wants to implement host-based security for Amazon EC2 instances and containers in Amazon Elastic Container Registry (Amazon ECR). The company has deployed AWS Systems Manager Agent (SSM Agent) on the EC2 instances. All the company's AWS accounts are in one organization in AWS Organizations. The company will analyze the workloads for software vulnerabilities and unintended network exposure. The company will push any findings to AWS Security Hub, which the company has configured for the organization.

The company must deploy the solution to all member accounts, including new accounts, automatically. When new workloads come online, the solution must scan the workloads.

Which solution will meet these requirements?

A.
Use SCPs to configure scanning of EC2 instances and ECR containers for all accounts in the organization.
B.
Configure a delegated administrator for Amazon GuardDuty for the organization. Create an Amazon EventBridge rule to initiate analysis of ECR containers
C.
Configure a delegated administrator for Amazon Inspector for the organization. Configure automatic scanning for new member accounts.
D.
Configure a delegated administrator for Amazon Inspector for the organization. Create an AWS Config rule to initiate analysis of ECR containers
Suggested answer: C

Explanation:

To implement host-based security for Amazon EC2 instances and containers in Amazon ECR with minimal operational overhead and ensure automatic deployment and scanning for new workloads, the recommended solution is to configure a delegated administrator for Amazon Inspector within the AWS Organizations structure. By enabling Amazon Inspector for the organization and configuring it to automatically scan new member accounts, the company can ensure that all EC2 instances and ECR containers are analyzed for software vulnerabilities and unintended network exposure. Amazon Inspector will automatically assess the workloads and push findings to AWS Security Hub, providing centralized security monitoring and compliance checking. This approach ensures that as new accounts or workloads are added, they are automatically included in the security assessments, maintaining a consistent security posture across the organization with minimal manual intervention.
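
For illustration, a minimal boto3 sketch of this setup might look like the following; the account ID is a placeholder, and the two calls would run from the organization's management account and the delegated administrator account, respectively.

```python
import boto3

# Run from the AWS Organizations management account: delegate Amazon
# Inspector administration to the security tooling account (placeholder ID).
inspector = boto3.client("inspector2", region_name="us-east-1")
inspector.enable_delegated_admin_account(
    delegatedAdminAccountId="111122223333"
)

# Run from the delegated administrator account: automatically enable
# EC2 and ECR scanning for every new member account in the organization.
inspector.update_organization_configuration(
    autoEnable={"ec2": True, "ecr": True}
)
```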

A company has secured the AWS account root user for its AWS account by following AWS best practices. The company also has enabled AWS CloudTrail, which is sending its logs to Amazon S3. A security engineer wants to receive notification in near-real time if a user uses the AWS account root user credentials to sign in to the AWS Management Console.

Which solutions will provide this notification? (Select TWO.)

A.
Use AWS Trusted Advisor and its security evaluations for the root account. Configure an Amazon EventBridge event rule that is invoked by the Trusted Advisor API. Configure the rule to target an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe any required endpoints to the SNS topic so that these endpoints can receive notification.
B.
Use AWS IAM Access Analyzer. Create an Amazon CloudWatch Logs metric filter to evaluate log entries from Access Analyzer that detect a successful root account login. Create an Amazon CloudWatch alarm that monitors whether a root login has occurred. Configure the CloudWatch alarm to notify an Amazon Simple Notification Service (Amazon SNS) topic when the alarm enters the ALARM state. Subscribe any required endpoints to this SNS topic so that these endpoints can receive notification.
C.
Configure AWS CloudTrail to send its logs to Amazon CloudWatch Logs. Configure a metric filter on the CloudWatch Logs log group used by CloudTrail to evaluate log entries for successful root account logins. Create an Amazon CloudWatch alarm that monitors whether a root login has occurred. Configure the CloudWatch alarm to notify an Amazon Simple Notification Service (Amazon SNS) topic when the alarm enters the ALARM state. Subscribe any required endpoints to this SNS topic so that these endpoints can receive notification.
D.
Configure AWS CloudTrail to send log notifications to an Amazon Simple Notification Service (Amazon SNS) topic. Create an AWS Lambda function that parses the CloudTrail notification for root login activity and notifies a separate SNS topic that contains the endpoints that should receive notification. Subscribe the Lambda function to the SNS topic that is receiving log notifications from CloudTrail.
E.
Configure an Amazon EventBridge event rule that runs when Amazon CloudWatch API calls are recorded for a successful root login. Configure the rule to target an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe any required endpoints to the SNS topic so that these endpoints can receive notification.
Suggested answer: C, E

Explanation:

To receive near-real-time notifications of AWS account root user sign-ins, the most effective solutions involve the use of AWS CloudTrail logs, Amazon CloudWatch Logs, and Amazon EventBridge.

Solution C involves configuring AWS CloudTrail to send logs to Amazon CloudWatch Logs and then setting up a CloudWatch Logs metric filter to detect successful root account logins. When such logins are detected, a CloudWatch alarm can be configured to trigger and notify an Amazon Simple Notification Service (Amazon SNS) topic, which in turn can send notifications to the required endpoints. This solution provides an efficient way to monitor and alert on root account usage without requiring custom parsing or handling of log data.
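
As a sketch of Solution C, the metric filter and alarm could be created with boto3 roughly as follows; the log group name, SNS topic ARN, and metric names are placeholders, and the filter pattern is the common CIS-benchmark-style pattern for root usage.

```python
import boto3

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

# Metric filter on the CloudTrail log group that matches console activity
# performed with the root identity (excluding AWS service events).
logs.put_metric_filter(
    logGroupName="CloudTrail/DefaultLogGroup",
    filterName="RootAccountUsage",
    filterPattern=(
        '{ $.userIdentity.type = "Root" '
        '&& $.userIdentity.invokedBy NOT EXISTS '
        '&& $.eventType != "AwsServiceEvent" }'
    ),
    metricTransformations=[{
        "metricName": "RootAccountUsageCount",
        "metricNamespace": "CloudTrailMetrics",
        "metricValue": "1",
    }],
)

# Alarm that notifies an existing SNS topic whenever the metric
# records at least one root login in a five-minute period.
cloudwatch.put_metric_alarm(
    AlarmName="RootAccountUsageAlarm",
    MetricName="RootAccountUsageCount",
    Namespace="CloudTrailMetrics",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:root-login-alerts"],
)
```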

Solution E uses Amazon EventBridge to monitor for specific AWS API calls, such as SignIn events that indicate a successful root account login. By configuring an EventBridge rule to trigger on these events, notifications can be sent directly to an SNS topic, which then distributes the alerts to the necessary endpoints. This approach leverages native AWS event patterns and provides a streamlined mechanism for detecting and alerting on root account activity.
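
A minimal sketch of Solution E is shown below. Root console sign-in events are global events that CloudTrail delivers in us-east-1, so the rule is assumed to be created there; the topic ARN is a placeholder.

```python
import boto3
import json

events = boto3.client("events", region_name="us-east-1")

# Rule that matches a successful console sign-in by the root user.
events.put_rule(
    Name="root-console-login",
    EventPattern=json.dumps({
        "detail-type": ["AWS Console Sign In via CloudTrail"],
        "detail": {
            "userIdentity": {"type": ["Root"]},
            "responseElements": {"ConsoleLogin": ["Success"]},
        },
    }),
    State="ENABLED",
)

# Route matching events to an SNS topic that fans out to the endpoints.
events.put_targets(
    Rule="root-console-login",
    Targets=[{
        "Id": "sns-notify",
        "Arn": "arn:aws:sns:us-east-1:111122223333:root-login-alerts",
    }],
)
```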

Both solutions offer automation, scalability, and the ability to integrate with other AWS services, ensuring that stakeholders are promptly alerted to critical security events involving the root user.

A company has AWS accounts that are in an organization in AWS Organizations. A security engineer needs to set up AWS Security Hub in a dedicated account for security monitoring.

The security engineer must ensure that Security Hub automatically manages all existing accounts and all new accounts that are added to the organization. Security Hub also must receive findings from all AWS Regions.

Which combination of actions will meet these requirements with the LEAST operational overhead? (Select TWO.)

A.
Configure a finding aggregation Region for Security Hub. Link the other Regions to the aggregation Region.
B.
Create an AWS Lambda function that routes events from other Regions to the dedicated Security Hub account. Create an Amazon EventBridge rule to invoke the Lambda function.
C.
Turn on the option to automatically enable accounts for Security Hub.
D.
Create an SCP that denies the securityhub:DisableSecurityHub permission. Attach the SCP to the organization's root account.
E.
Configure services in other Regions to write events to an AWS CloudTrail organization trail. Configure Security Hub to read events from the trail.
Suggested answer: A, C

Explanation:

To set up AWS Security Hub for centralized security monitoring across all accounts in an AWS Organization with the least operational overhead, the best actions to take are:

Solution A: Configure a finding aggregation Region for Security Hub. This allows Security Hub to aggregate findings from multiple regions into a single designated region, simplifying monitoring and analysis. By centralizing findings, the security team can have a unified view of security alerts and compliance statuses across all accounts and regions, enhancing the efficiency of security operations.

Solution C: Turn on the option to automatically enable accounts for Security Hub within the AWS Organization. This ensures that as new accounts are created and added to the organization, they are automatically enrolled in Security Hub, and their findings are included in the centralized monitoring. This automation reduces the manual effort required to manage account enrollment and ensures comprehensive coverage of security monitoring across the organization.

These actions collectively ensure that Security Hub is effectively configured to manage security findings across all accounts and regions, providing a comprehensive and automated approach to security monitoring with minimal manual intervention.
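
For reference, a hedged boto3 sketch of both actions, run in the delegated administrator account in the Region chosen for aggregation (us-east-1 is assumed here):

```python
import boto3

securityhub = boto3.client("securityhub", region_name="us-east-1")

# Aggregate findings from every other Region into this Region; linking
# mode ALL_REGIONS picks up future Regions automatically.
securityhub.create_finding_aggregator(RegionLinkingMode="ALL_REGIONS")

# Automatically enable Security Hub for new accounts that join the organization.
securityhub.update_organization_configuration(AutoEnable=True)
```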

A company needs a solution to protect critical data from being permanently deleted. The data is stored in Amazon S3 buckets.

The company needs to replicate the S3 objects from the company's primary AWS Region to a secondary Region to meet disaster recovery requirements. The company must also ensure that users who have administrator access cannot permanently delete the data in the secondary Region.

Which solution will meet these requirements?

A.
Configure AWS Backup to perform cross-Region S3 backups. Select a backup vault in the secondary Region. Enable AWS Backup Vault Lock in governance mode for the backups in the secondary Region
B.
Implement S3 Object Lock in compliance mode in the primary Region. Configure S3 replication to replicate the objects to an S3 bucket in the secondary Region.
C.
Configure S3 replication to replicate the objects to an S3 bucket in the secondary Region. Create an S3 bucket policy to deny the s3:ReplicateDelete action on the S3 bucket in the secondary Region
D.
Configure S3 replication to replicate the objects to an S3 bucket in the secondary Region. Configure S3 object versioning on the S3 bucket in the secondary Region.
Suggested answer: B

Explanation:

Implementing S3 Object Lock in compliance mode on the primary Region and configuring S3 replication to a secondary Region ensures the immutability of S3 objects, preventing them from being deleted or altered. This setup meets the requirement of protecting critical data from permanent deletion, even by users with administrative access. The replicated objects in the secondary Region inherit the Object Lock from the primary, ensuring consistent protection across Regions and aligning with disaster recovery requirements.
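
A minimal boto3 sketch of this setup follows; bucket names and the replication role ARN are placeholders, the primary Region is assumed to be us-east-1, and the seven-year compliance retention is only an example value.

```python
import boto3

s3 = boto3.client("s3", region_name="us-east-1")

# Object Lock can only be enabled at bucket creation; versioning is
# turned on automatically when Object Lock is enabled.
s3.create_bucket(
    Bucket="critical-data-primary",
    ObjectLockEnabledForBucket=True,
)

# Default retention in compliance mode: no user, including administrators,
# can delete or overwrite locked object versions during the retention period.
s3.put_object_lock_configuration(
    Bucket="critical-data-primary",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Years": 7}},
    },
)

# Replicate to a bucket in the secondary Region; that bucket must also
# have versioning and Object Lock enabled for retention to carry over.
s3.put_bucket_replication(
    Bucket="critical-data-primary",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::111122223333:role/s3-replication-role",
        "Rules": [{
            "ID": "dr-replication",
            "Status": "Enabled",
            "Priority": 1,
            "Filter": {},
            "DeleteMarkerReplication": {"Status": "Disabled"},
            "Destination": {"Bucket": "arn:aws:s3:::critical-data-secondary"},
        }],
    },
)
```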

A company is storing data in Amazon S3 Glacier. A security engineer implemented a new vault lock policy for 10 TB of data and called the initiate-vault-lock operation 12 hours ago. The audit team identified a typo in the policy that is allowing unintended access to the vault.

What is the MOST cost-effective way to correct this error?

A.
Call the abort-vault-lock operation. Update the policy. Call the initiate-vault-lock operation again.
B.
Copy the vault data to a new S3 bucket. Delete the vault. Create a new vault with the data.
C.
Update the policy to keep the vault lock in place
D.
Update the policy. Call the initiate-vault-lock operation again to apply the new policy.
Suggested answer: A

Explanation:

The most cost-effective way to correct a typo in a vault lock policy during the 24-hour initiation period is to call the abort-vault-lock operation. This action stops the vault lock process, allowing the security engineer to correct the policy and re-initiate the vault lock with the corrected policy. This approach avoids the need for data transfer or creating a new vault, thus minimizing costs and operational overhead.
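
The sequence might look like the following boto3 sketch; the vault name, account ID, and policy contents are placeholders, not the company's actual policy.

```python
import boto3
import json

glacier = boto3.client("glacier")

# Abort the in-progress lock. This is only possible while the lock is in
# the InProgress state, i.e., within 24 hours of initiate-vault-lock.
glacier.abort_vault_lock(accountId="-", vaultName="critical-archive")

# Re-initiate the lock with the corrected policy (illustrative example).
corrected_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyDeleteBeforeRetention",
        "Principal": "*",
        "Effect": "Deny",
        "Action": "glacier:DeleteArchive",
        "Resource": "arn:aws:glacier:us-east-1:111122223333:vaults/critical-archive",
        "Condition": {"NumericLessThan": {"glacier:ArchiveAgeInDays": "365"}},
    }],
}
response = glacier.initiate_vault_lock(
    accountId="-",
    vaultName="critical-archive",
    policy={"Policy": json.dumps(corrected_policy)},
)

# After verifying the policy, finish the lock with the returned lock ID.
glacier.complete_vault_lock(
    accountId="-",
    vaultName="critical-archive",
    lockId=response["lockId"],
)
```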

A company uses HTTP Live Streaming (HLS) to stream live video content to paying subscribers by using Amazon CloudFront. HLS splits the video content into chunks so that the user can request the right chunk based on different conditions. Because the video events last for several hours, the total video is made up of thousands of chunks.

The origin URL is not disclosed, and every user is forced to access the CloudFront URL. The company has a web application that authenticates the paying users against an internal repository and a CloudFront key pair that is already issued.

What is the simplest and MOST effective way to protect the content?

A.
Develop the application to use the CloudFront key pair to create signed URLs that users will use to access the content.
B.
Develop the application to use the CloudFront key pair to set the signed cookies that users will use to access the content.
C.
Develop the application to issue a security token that Lambda@Edge will receive to authenticate and authorize access to the content
D.
Keep the CloudFront URL encrypted inside the application, and use AWS KMS to resolve the URL on-the-fly after the user is authenticated.
Suggested answer: B

Explanation:

Utilizing CloudFront signed cookies is the simplest and most effective way to protect HLS video content for paying subscribers. Signed cookies provide access control for multiple files, such as video chunks in HLS streaming, without the need to generate a signed URL for each video chunk. This method simplifies the process for long video events with thousands of chunks, enhancing user experience while ensuring content protection.
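
As a sketch, the web application might mint the three CloudFront cookies after authenticating a subscriber roughly as follows, using the cryptography library; the distribution domain, resource path, and key pair ID are placeholders. A single custom policy with a wildcard Resource covers every chunk of the event.

```python
import base64
import json
import time

from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

CLOUDFRONT_RESOURCE = "https://d111111abcdef8.cloudfront.net/videos/event1/*"
KEY_PAIR_ID = "K2JCJMDEHXQW5F"  # placeholder CloudFront key pair ID

def _cf_b64(data: bytes) -> str:
    # CloudFront's URL-safe base64 variant: + -> -, = -> _, / -> ~.
    return (
        base64.b64encode(data)
        .decode("utf-8")
        .replace("+", "-")
        .replace("=", "_")
        .replace("/", "~")
    )

def make_signed_cookies(private_key_pem: bytes, expires_in: int = 3600) -> dict:
    # Custom policy: grant access to all chunks under the wildcard path
    # until the expiry time.
    policy = json.dumps({
        "Statement": [{
            "Resource": CLOUDFRONT_RESOURCE,
            "Condition": {
                "DateLessThan": {"AWS:EpochTime": int(time.time()) + expires_in}
            },
        }]
    }, separators=(",", ":"))

    key = serialization.load_pem_private_key(private_key_pem, password=None)
    # CloudFront requires an RSA-SHA1 signature over the policy document.
    signature = key.sign(policy.encode("utf-8"), padding.PKCS1v15(), hashes.SHA1())

    # The application sets these three cookies in its authenticated response.
    return {
        "CloudFront-Policy": _cf_b64(policy.encode("utf-8")),
        "CloudFront-Signature": _cf_b64(signature),
        "CloudFront-Key-Pair-Id": KEY_PAIR_ID,
    }
```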

A company runs workloads in the us-east-1 Region. The company has never deployed resources to other AWS Regions and does not have any multi-Region resources.

The company needs to replicate its workloads and infrastructure to the us-west-1 Region.

A security engineer must implement a solution that uses AWS Secrets Manager to store secrets in both Regions. The solution must use AWS Key Management Service (AWS KMS) to encrypt the secrets. The solution must minimize latency and must be able to work if only one Region is available.

The security engineer uses Secrets Manager to create the secrets in us-east-1.

What should the security engineer do next to meet the requirements?

A.
Encrypt the secrets in us-east-1 by using an AWS managed KMS key. Replicate the secrets to us-west-1. Encrypt the secrets in us-west-1 by using a new AWS managed KMS key in us-west-1.
B.
Encrypt the secrets in us-east-1 by using an AWS managed KMS key. Configure resources in us-west-1 to call the Secrets Manager endpoint in us-east-1.
C.
Encrypt the secrets in us-east-1 by using a customer managed KMS key. Configure resources in us-west-1 to call the Secrets Manager endpoint in us-east-1.
D.
Encrypt the secrets in us-east-1 by using a customer managed KMS key. Replicate the secrets to us-west-1. Encrypt the secrets in us-west-1 by using the customer managed KMS key from us-east-1.
Suggested answer: D

Explanation:

To ensure minimal latency and regional availability of secrets, encrypting secrets in us-east-1 with a customer-managed KMS key and then replicating them to us-west-1 for encryption with the same key is the optimal approach. This method leverages customer-managed KMS keys for enhanced control and ensures that secrets are available in both regions, adhering to disaster recovery principles and minimizing latency by using regional endpoints.
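
In practice, "the customer managed KMS key from us-east-1" maps to a multi-Region key whose replica lives in us-west-1, so each Region can decrypt the secret locally. A hedged boto3 sketch, with placeholder key and secret names and an example secret value:

```python
import boto3

# Create a multi-Region customer managed key in the primary Region and
# replicate it to the secondary Region.
kms_east = boto3.client("kms", region_name="us-east-1")
key = kms_east.create_key(MultiRegion=True, Description="secrets-dr-key")
key_id = key["KeyMetadata"]["KeyId"]

kms_east.replicate_key(KeyId=key_id, ReplicaRegion="us-west-1")

# Create the secret in us-east-1, encrypted with the customer managed key.
secrets_east = boto3.client("secretsmanager", region_name="us-east-1")
secrets_east.create_secret(
    Name="app/db-credentials",
    SecretString='{"username":"app","password":"example"}',
    KmsKeyId=key_id,
)

# Replicate the secret; the replica is encrypted with the key's
# us-west-1 replica (multi-Region keys share the same key ID).
secrets_east.replicate_secret_to_regions(
    SecretId="app/db-credentials",
    AddReplicaRegions=[{"Region": "us-west-1", "KmsKeyId": key_id}],
)
```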

A company operates a web application that runs on Amazon EC2 instances. The application listens on port 80 and port 443. The company uses an Application Load Balancer (ALB) with AWS WAF to terminate SSL and to forward traffic to the application instances only on port 80.

The ALB is in public subnets that are associated with a network ACL that is named NACL1. The application instances are in dedicated private subnets that are associated with a network ACL that is named NACL2. An Amazon RDS for PostgreSQL DB instance that uses port 5432 is in a dedicated private subnet that is associated with a network ACL that is named NACL3. All the network ACLs currently allow all inbound and outbound traffic.

Which set of network ACL changes will increase the security of the application while ensuring functionality?

A.
Make the following changes to NACL3:
* Add a rule that allows inbound traffic on port 5432 from NACL2.
* Add a rule that allows outbound traffic on ports 1024-65535 to NACL2.
* Remove the default rules that allow all inbound and outbound traffic.
B.
Make the following changes to NACL3:
* Add a rule that allows inbound traffic on port 5432 from the CIDR blocks of the application instance subnets.
* Add a rule that allows outbound traffic on ports 1024-65535 to the application instance subnets.
* Remove the default rules that allow all inbound and outbound traffic.
C.
Make the following changes to NACL2:
* Add a rule that allows outbound traffic on port 5432 to the CIDR blocks of the RDS subnets.
* Remove the default rules that allow all inbound and outbound traffic.
D.
Make the following changes to NACL2:
* Add a rule that allows inbound traffic on port 5432 from the CIDR blocks of the RDS subnets.
* Add a rule that allows outbound traffic on port 5432 to the RDS subnets.
Suggested answer: B

Explanation:

For increased security while ensuring functionality, adjusting NACL3 to allow inbound traffic on port 5432 from the CIDR blocks of the application instance subnets, and allowing outbound traffic on ephemeral ports (1024-65535) back to those subnets, creates a secure path for database access. Removing the default allow-all rules enhances security by implementing the principle of least privilege, ensuring that only necessary traffic is permitted.
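
A sketch of the two new NACL3 entries with boto3; the NACL ID and subnet CIDR are placeholders, and the default allow-all entries would be removed separately with delete_network_acl_entry.

```python
import boto3

ec2 = boto3.client("ec2")

NACL3 = "acl-0123456789abcdef0"   # placeholder NACL ID for the RDS subnets
APP_SUBNET_CIDR = "10.0.2.0/24"   # placeholder application-subnet CIDR

# Inbound: allow PostgreSQL traffic from the application subnets only.
ec2.create_network_acl_entry(
    NetworkAclId=NACL3,
    RuleNumber=100,
    Protocol="6",          # TCP
    RuleAction="allow",
    Egress=False,
    CidrBlock=APP_SUBNET_CIDR,
    PortRange={"From": 5432, "To": 5432},
)

# Outbound: allow return traffic on ephemeral ports back to the app subnets.
ec2.create_network_acl_entry(
    NetworkAclId=NACL3,
    RuleNumber=100,
    Protocol="6",
    RuleAction="allow",
    Egress=True,
    CidrBlock=APP_SUBNET_CIDR,
    PortRange={"From": 1024, "To": 65535},
)
```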

A company has hundreds of AWS accounts in an organization in AWS Organizations. The company operates out of a single AWS Region. The company has a dedicated security tooling AWS account in the organization. The security tooling account is configured as the organization's delegated administrator for Amazon GuardDuty and AWS Security Hub. The company has configured the environment to automatically enable GuardDuty and Security Hub for existing AWS accounts and new AWS accounts.

The company is performing control tests on specific GuardDuty findings to make sure that the company's security team can detect and respond to security events. The security team launched an Amazon EC2 instance and attempted to run DNS requests against a test domain, example.com, to generate a DNS finding. However, the GuardDuty finding was never created in the Security Hub delegated administrator account.

Why was the finding not created in the Security Hub delegated administrator account?

A.
VPC flow logs were not turned on for the VPC where the EC2 instance was launched.
B.
The VPC where the EC2 instance was launched had the DHCP option configured for a custom OpenDNS resolver.
C.
The GuardDuty integration with Security Hub was never activated in the AWS account where the finding was generated.
D.
Cross-Region aggregation in Security Hub was not configured.
Suggested answer: C

Explanation:

The correct answer is C. The GuardDuty integration with Security Hub was never activated in the AWS account where the finding was generated.

The reason is that Security Hub does not automatically receive findings from GuardDuty unless the integration is activated in each AWS account. According to the AWS documentation, "The Amazon GuardDuty integration with Security Hub enables you to send findings from GuardDuty to Security Hub. Security Hub can then include those findings in its analysis of your security posture." However, this integration is not enabled by default and requires manual activation in each AWS account. The documentation also states that "You must activate the integration in each AWS account that you want to send findings from GuardDuty to Security Hub."

Therefore, even though the company has configured the security tooling account as the delegated administrator for GuardDuty and Security Hub, and has enabled these services for existing and new AWS accounts, it still needs to activate the GuardDuty integration with Security Hub in each account. Otherwise, the findings from GuardDuty will not be sent to Security Hub and will not be visible in the delegated administrator account.
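
For illustration, the integration can be activated per account with a call such as the following boto3 sketch (run in each member account, for example through an assumed role):

```python
import boto3

session = boto3.session.Session(region_name="us-east-1")
securityhub = session.client("securityhub")

# Turn on the GuardDuty -> Security Hub integration for this account
# and Region by enabling import of the GuardDuty product's findings.
region = session.region_name
securityhub.enable_import_findings_for_product(
    ProductArn=f"arn:aws:securityhub:{region}::product/aws/guardduty"
)
```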

The other options are incorrect because:

A) VPC flow logs are not required for GuardDuty to generate DNS findings. GuardDuty uses VPC flow logs as one of the data sources for network connection findings, but not for DNS findings. According to the AWS documentation, "GuardDuty uses VPC Flow Logs as a data source for network connection findings."

B) The VPC DHCP option configured for a custom OpenDNS resolver does not affect GuardDuty's ability to generate DNS findings. GuardDuty uses DNS logs as one of the data sources for DNS findings, regardless of the DNS resolver used by the VPC. According to the AWS documentation, "GuardDuty uses DNS logs as a data source for DNS activity findings."

D) Cross-Region aggregation in Security Hub is not relevant for this scenario, since the company operates out of a single AWS Region. Cross-Region aggregation in Security Hub allows you to aggregate security findings from multiple Regions into a single Region, where you can view and manage them. However, this feature is not needed if the company only uses one Region. According to the AWS documentation, "Cross-Region aggregation enables you to aggregate security findings from multiple Regions into a single Region."

A security engineer is configuring attribute-based access control (ABAC) to allow only specific principals to put objects into an Amazon S3 bucket. The principals already have access to Amazon S3.

The security engineer needs to configure a bucket policy that allows principals to put objects into the S3 bucket only if the value of the Team tag on the object matches the value of the Team tag that is associated with the principal. During testing, the security engineer notices that a principal can still put objects into the S3 bucket when the tag values do not match.

Which combination of factors are causing the PutObject operation to succeed when the tag values are different? (Select TWO.)

A.
The principal's identity-based policy grants access to put objects into the S3 bucket with no conditions.
B.
The principal's identity-based policy overrides the condition because the identity-based policy contains an explicit allow.
C.
The S3 bucket's resource policy does not deny access to put objects.
D.
The S3 bucket's resource policy cannot allow actions to the principal.
E.
The bucket policy does not apply to principals in the same zone of trust.
Suggested answer: A, B

Explanation:

The correct answer is A and B. The principal's identity-based policy grants access to put objects into the S3 bucket with no conditions, and the principal's identity-based policy overrides the condition because the identity-based policy contains an explicit allow.

The reason is that when evaluating access requests, AWS uses a combination of resource-based policies (such as bucket policies) and identity-based policies (such as IAM user policies) to determine whether to allow or deny the action. According to the AWS documentation, "If an explicit allow exists in either the resource-based policy or the identity-based policy, then AWS allows access to the resource." Therefore, even if the bucket policy has a condition that checks the tag values, it will not be effective if the principal's identity-based policy has an explicit allow for the PutObject action without any conditions. The explicit allow in the identity-based policy will override the condition in the bucket policy and grant access to the principal.
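
Because an explicit deny always overrides an allow, the usual remediation is to restate the condition as a Deny in the bucket policy, as in this hedged sketch; the bucket name is a placeholder, and policy variables such as aws:PrincipalTag/Team resolve per request.

```python
import boto3
import json

BUCKET = "team-data-bucket"  # placeholder bucket name

# An explicit Deny cannot be overridden by an allow in the identity-based
# policy, so denying PutObject whenever the object's Team tag differs from
# the principal's Team tag enforces the ABAC rule.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyPutWithMismatchedTeamTag",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:PutObject",
        "Resource": f"arn:aws:s3:::{BUCKET}/*",
        "Condition": {
            "StringNotEquals": {
                "s3:RequestObjectTag/Team": "${aws:PrincipalTag/Team}"
            }
        },
    }],
}

boto3.client("s3").put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```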

The other options are incorrect because:

C) The S3 bucket's resource policy does not deny access to put objects. This is not a factor that causes the PutObject operation to succeed when the tag values are different. The bucket policy can either allow or deny access based on conditions, but it cannot prevent an explicit allow in the identity-based policy from taking effect.

D) The S3 bucket's resource policy cannot allow actions to the principal. This is not true. The bucket policy can allow actions to specific principals by using the Principal element in the policy statement. According to the AWS documentation, "The Principal element specifies the user (IAM user, federated user, or assumed-role user), AWS account, AWS service, or other principal entity that is allowed or denied access to a resource."

E) The bucket policy does not apply to principals in the same zone of trust. This is not true. The bucket policy applies to any principal that is specified in the Principal element, regardless of whether they are in the same zone of trust or not. A zone of trust is a logical boundary that defines who can access a resource and under what conditions. According to the AWS documentation, "A zone of trust can be as small as a single resource (for example, an Amazon S3 object) or as large as an entire AWS account."
