Amazon SCS-C02 Practice Test - Questions Answers, Page 29

A security engineer must use AWS Key Management Service (AWS KMS) to design a key management solution for a set of Amazon Elastic Block Store (Amazon

EBS) volumes that contain sensitive data. The solution needs to ensure that the key material automatically expires in 90 days.

Which solution meets these criteria?

A. A customer managed CMK that uses customer provided key material
B. A customer managed CMK that uses AWS provided key material
C. An AWS managed CMK
D. Operating system-native encryption that uses GnuPG
Suggested answer: A

Explanation:

https://awscli.amazonaws.com/v2/documentation/api/latest/reference/kms/import-key-material.html

aws kms import-key-material \

--key-id 1234abcd-12ab-34cd-56ef-1234567890ab \

--encrypted-key-material fileb://EncryptedKeyMaterial.bin \

--import-token fileb://ImportToken.bin \

--expiration-model KEY_MATERIAL_EXPIRES \

--valid-to 2021-09-21T19:00:00Z

The correct answer is A. A customer managed CMK that uses customer provided key material.

A customer managed CMK is a KMS key that you create, own, and manage in your AWS account. You have full control over the key configuration, permissions, rotation, and deletion. You can use a customer managed CMK to encrypt and decrypt data in AWS services that are integrated with AWS KMS, such as Amazon EBS1.

A customer managed CMK can use either AWS provided key material or customer provided key material. AWS provided key material is generated by AWS KMS and never leaves the service unencrypted. Customer provided key material is generated outside of AWS KMS and imported into a customer managed CMK. You can specify an expiration date for the imported key material, after which the CMK becomes unusable until you reimport new key material2.

To meet the criteria of automatically expiring the key material in 90 days, you need to use customer provided key material and set the expiration date accordingly. This way, you can ensure that the data encrypted with the CMK will not be accessible after 90 days unless you reimport new key material and re-encrypt the data.
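As a concrete sketch of this approach, the parameters for a kms:ImportKeyMaterial call with a 90-day expiration could be built as below. The key ID is the illustrative one from the CLI example above; in practice the EncryptedKeyMaterial and ImportToken values come from a prior kms:GetParametersForImport call and are omitted here.

```python
from datetime import datetime, timedelta, timezone

def import_params(key_id: str, days: int = 90) -> dict:
    """Build parameters for kms:ImportKeyMaterial so that the imported
    key material automatically expires after the given number of days."""
    valid_to = datetime.now(timezone.utc) + timedelta(days=days)
    return {
        "KeyId": key_id,
        # EncryptedKeyMaterial and ImportToken (from GetParametersForImport)
        # are omitted in this sketch.
        "ExpirationModel": "KEY_MATERIAL_EXPIRES",
        "ValidTo": valid_to,
    }

params = import_params("1234abcd-12ab-34cd-56ef-1234567890ab")
```

After the ValidTo timestamp passes, AWS KMS deletes the imported key material and the CMK becomes unusable until new material is reimported.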

The other options are incorrect for the following reasons:

B) A customer managed CMK that uses AWS provided key material does not expire automatically. You can enable automatic rotation of the key material every year, but this does not prevent access to the data encrypted with the previous key material. You would need to manually delete the CMK and its backing key material to make the data inaccessible3.

C) An AWS managed CMK is a KMS key that is created, owned, and managed by an AWS service on your behalf. You have limited control over the key configuration, permissions, rotation, and deletion. You cannot use an AWS managed CMK to encrypt data in other AWS services or applications. You also cannot set an expiration date for the key material of an AWS managed CMK4.

D) Operating system-native encryption that uses GnuPG is not a solution that uses AWS KMS. GnuPG is a command line tool that implements the OpenPGP standard for encrypting and signing data. It does not integrate with Amazon EBS or other AWS services. It also does not provide a way to automatically expire the key material used for encryption5.

1: Customer Managed Keys - AWS Key Management Service; 2: Importing Key Material in AWS Key Management Service (AWS KMS); 3: Rotating Customer Master Keys - AWS Key Management Service; 4: AWS Managed Keys - AWS Key Management Service; 5: The GNU Privacy Guard

A company's security team needs to receive a notification whenever an AWS access key has not been rotated in 90 or more days. A security engineer must develop a solution that provides these notifications automatically.

Which solution will meet these requirements with the LEAST amount of effort?

A. Deploy an AWS Config managed rule to run on a periodic basis of 24 hours. Select the access-keys-rotated managed rule, and set the maxAccessKeyAge parameter to 90 days. Create an Amazon EventBridge (Amazon CloudWatch Events) rule with an event pattern that matches the compliance type of NON_COMPLIANT from AWS Config for the managed rule. Configure EventBridge (CloudWatch Events) to send an Amazon Simple Notification Service (Amazon SNS) notification to the security team.
B. Create a script to export a .csv file from the AWS Trusted Advisor check for IAM access key rotation. Load the script into an AWS Lambda function that will upload the .csv file to an Amazon S3 bucket. Create an Amazon Athena table query that runs when the .csv file is uploaded to the S3 bucket. Publish the results for any keys older than 90 days by using an invocation of an Amazon Simple Notification Service (Amazon SNS) notification to the security team.
C. Create a script to download the IAM credentials report on a periodic basis. Load the script into an AWS Lambda function that will run on a schedule through Amazon EventBridge (Amazon CloudWatch Events). Configure the Lambda script to load the report into memory and to filter the report for records in which the key was last rotated at least 90 days ago. If any records are detected, send an Amazon Simple Notification Service (Amazon SNS) notification to the security team.
D. Create an AWS Lambda function that queries the IAM API to list all the users. Iterate through the users by using the ListAccessKeys operation. Verify that the value in the CreateDate field is not at least 90 days old. Send an Amazon Simple Notification Service (Amazon SNS) notification to the security team if the value is at least 90 days old. Create an Amazon EventBridge (Amazon CloudWatch Events) rule to schedule the Lambda function to run each day.
Suggested answer: A
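The EventBridge side of option A can be sketched as an event pattern. The field names below follow the documented "Config Rules Compliance Change" event shape, but treat the exact structure as an assumption to verify against the EventBridge console before use.

```python
import json

# Sketch of an EventBridge event pattern that matches when the
# access-keys-rotated AWS Config rule evaluates a resource as NON_COMPLIANT.
event_pattern = {
    "source": ["aws.config"],
    "detail-type": ["Config Rules Compliance Change"],
    "detail": {
        "configRuleName": ["access-keys-rotated"],
        "newEvaluationResult": {"complianceType": ["NON_COMPLIANT"]},
    },
}

print(json.dumps(event_pattern, indent=2))
```

The matching rule's target would then be the security team's SNS topic, so no custom code runs in the notification path at all.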

A company has a guideline that mandates the encryption of all Amazon S3 bucket data in transit. A security engineer must implement an S3 bucket policy that denies any S3 operations if data is not encrypted.

Which S3 bucket policy will meet this requirement?

A.
B.
C.
D.
Suggested answer: B

Explanation:

https://aws.amazon.com/blogs/security/how-to-use-bucket-policies-and-apply-defense-in-depth-to-help-secure-your-amazon-s3-data/
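The policy options themselves are not reproduced above, but the pattern the blog post describes is a Deny statement conditioned on aws:SecureTransport being false. A minimal sketch (the bucket name is a placeholder):

```python
# Sketch of an S3 bucket policy that denies every S3 action when the
# request does not arrive over TLS (aws:SecureTransport is "false").
BUCKET = "example-bucket"  # placeholder bucket name

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                f"arn:aws:s3:::{BUCKET}",       # bucket-level operations
                f"arn:aws:s3:::{BUCKET}/*",     # object-level operations
            ],
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        }
    ],
}
```

Because an explicit Deny overrides any Allow, this statement blocks plaintext HTTP requests regardless of what other statements grant.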

A company hosts a public website on an Amazon EC2 instance. HTTPS traffic must be able to access the website. The company uses SSH for management of the web server.

The website is on the subnet 10.0.1.0/24. The management subnet is 192.168.100.0/24. A security engineer must create a security group for the EC2 instance.

Which combination of steps should the security engineer take to meet these requirements in the MOST secure manner? (Select TWO.)

A. Allow port 22 from source 0.0.0.0/0.
B. Allow port 443 from source 0.0.0.0/0.
C. Allow port 22 from 192.168.100.0/24.
D. Allow port 22 from 10.0.1.0/24.
E. Allow port 443 from 10.0.1.0/24.
Suggested answer: B, C

Explanation:

The correct answers are B and C.

B) Allow port 443 from source 0.0.0.0/0.

This is correct because port 443 is used for HTTPS traffic, which must be able to access the website from any source IP address.

C) Allow port 22 from 192.168.100.0/24.

This is correct because port 22 is used for SSH, which is the management protocol for the web server. The management subnet is 192.168.100.0/24, so only this subnet should be allowed to access port 22.

A) Allow port 22 from source 0.0.0.0/0.

This is incorrect because it would allow anyone to access port 22, which is a security risk. SSH should be restricted to the management subnet only.

D) Allow port 22 from 10.0.1.0/24.

This is incorrect because it would allow the website subnet to access port 22, which is unnecessary and a security risk. SSH should be restricted to the management subnet only.

E) Allow port 443 from 10.0.1.0/24.

This is incorrect because it would limit the HTTPS traffic to the website subnet only, which defeats the purpose of having a public website.
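The two allowed rules can be sketched as the IpPermissions structure that a boto3 ec2.authorize_security_group_ingress call accepts (the security group ID itself is omitted; this is a sketch of the rule shapes, not a full deployment):

```python
# Sketch of the two ingress rules for the web server's security group.
ingress_rules = [
    {   # HTTPS open to the internet so the public website is reachable
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    },
    {   # SSH restricted to the management subnet only
        "IpProtocol": "tcp",
        "FromPort": 22,
        "ToPort": 22,
        "IpRanges": [{"CidrIp": "192.168.100.0/24"}],
    },
]
```

Security groups are default-deny for inbound traffic, so nothing else needs to be blocked explicitly.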

A company has hundreds of AWS accounts in an organization in AWS Organizations. The company operates out of a single AWS Region. The company has a dedicated security tooling AWS account in the organization. The security tooling account is configured as the organization's delegated administrator for Amazon GuardDuty and AWS Security Hub. The company has configured the environment to automatically enable GuardDuty and Security Hub for existing AWS accounts and new AWS accounts.

The company is performing control tests on specific GuardDuty findings to make sure that the company's security team can detect and respond to security events. The security team launched an Amazon EC2 instance and attempted to run DNS requests against a test domain, example.com, to generate a DNS finding. However, the GuardDuty finding was never created in the Security Hub delegated administrator account.

Why was the finding not created in the Security Hub delegated administrator account?

A. VPC flow logs were not turned on for the VPC where the EC2 instance was launched.
B. The VPC where the EC2 instance was launched had the DHCP option configured for a custom OpenDNS resolver.
C. The GuardDuty integration with Security Hub was never activated in the AWS account where the finding was generated.
D. Cross-Region aggregation in Security Hub was not configured.
Suggested answer: C

Explanation:

The correct answer is C. The GuardDuty integration with Security Hub was never activated in the AWS account where the finding was generated.

According to the AWS documentation1, GuardDuty findings are automatically sent to Security Hub only if the GuardDuty integration with Security Hub is enabled in the same account and Region. This means that the security tooling account, which is the delegated administrator for both GuardDuty and Security Hub, must enable the GuardDuty integration with Security Hub in each member account and Region where GuardDuty is enabled. Otherwise, the findings from GuardDuty will not be visible in Security Hub.

The other options are incorrect because:

VPC flow logs are not required for GuardDuty to generate DNS findings. GuardDuty uses VPC DNS logs, which are automatically enabled for all VPCs, to detect malicious or unauthorized DNS activity.

The DHCP option configured for a custom OpenDNS resolver does not affect GuardDuty's ability to generate DNS findings. GuardDuty uses its own threat intelligence sources to identify malicious domains, regardless of the DNS resolver used by the EC2 instance.

Cross-Region aggregation in Security Hub is not relevant for this scenario, because the company operates out of a single AWS Region. Cross-Region aggregation allows Security Hub to aggregate findings from multiple Regions into a single Region.

1: Managing GuardDuty accounts with AWS Organizations; Amazon GuardDuty Findings; How Amazon GuardDuty Works; Cross-Region aggregation in AWS Security Hub
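Activating the integration is a single Security Hub call per account and Region. The sketch below builds its parameters; the Region and account are placeholders, and the aws/guardduty product ARN format follows the documented convention, so verify it before relying on it.

```python
# Sketch of the parameters for securityhub:EnableImportFindingsForProduct,
# which turns on the GuardDuty -> Security Hub findings flow in one
# account/Region. Region is a placeholder value.
region = "us-east-1"  # placeholder Region

# Security Hub product ARNs for AWS-provided integrations have no
# account ID component.
product_arn = f"arn:aws:securityhub:{region}::product/aws/guardduty"

request = {"ProductArn": product_arn}
# With boto3 this would be invoked roughly as:
#   boto3.client("securityhub", region_name=region)\
#        .enable_import_findings_for_product(**request)
```

Running this in each member account (or confirming it was auto-enabled) is what lets GuardDuty findings appear in the delegated administrator's Security Hub view.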

A company has a group of Amazon EC2 instances in a single private subnet of a VPC with no internet gateway attached. A security engineer has installed the Amazon CloudWatch agent on all instances in that subnet to capture logs from a specific application. To ensure that the logs flow securely, the company's networking team has created VPC endpoints for CloudWatch monitoring and CloudWatch logs. The networking team has attached the endpoints to the VPC.

The application is generating logs. However, when the security engineer queries CloudWatch, the logs do not appear.

Which combination of steps should the security engineer take to troubleshoot this issue? (Choose three.)

A. Ensure that the EC2 instance profile that is attached to the EC2 instances has permissions to create log streams and write logs.
B. Create a metric filter on the logs so that they can be viewed in the AWS Management Console.
C. Check the CloudWatch agent configuration file on each EC2 instance to make sure that the CloudWatch agent is collecting the proper log files.
D. Check the VPC endpoint policies of both VPC endpoints to ensure that the EC2 instances have permissions to use them.
E. Create a NAT gateway in the subnet so that the EC2 instances can communicate with CloudWatch.
F. Ensure that the security groups allow all the EC2 instances to communicate with each other to aggregate logs before sending.
Suggested answer: A, C, D

Explanation:

The possible steps to troubleshoot this issue are:

A) Ensure that the EC2 instance profile that is attached to the EC2 instances has permissions to create log streams and write logs. This is a necessary step because the CloudWatch agent uses the credentials from the instance profile to communicate with CloudWatch1.

C) Check the CloudWatch agent configuration file on each EC2 instance to make sure that the CloudWatch agent is collecting the proper log files. This is a necessary step because the CloudWatch agent needs to know which log files to monitor and send to CloudWatch2.

D) Check the VPC endpoint policies of both VPC endpoints to ensure that the EC2 instances have permissions to use them. This is a necessary step because the VPC endpoint policies control which principals can access the AWS services through the endpoints3.

The other options are incorrect because:

B) Creating a metric filter on the logs is not a troubleshooting step, but a way to extract metric data from the logs. Metric filters do not affect the visibility of the logs in the AWS Management Console.

E) Creating a NAT gateway in the subnet is not a solution, because the EC2 instances do not need internet access to communicate with CloudWatch through the VPC endpoints. A NAT gateway would also incur additional costs.

F) Ensuring that the security groups allow all the EC2 instances to communicate with each other is not a necessary step, because the CloudWatch agent does not require log aggregation before sending. Each EC2 instance can send its own logs independently to CloudWatch.

1: IAM Roles for Amazon EC2; 2: CloudWatch Agent Configuration File: Logs Section; 3: Using Amazon VPC Endpoints; Metric Filters; NAT Gateways; CloudWatch Agent
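For step A, the instance profile needs roughly the statement below. The log group path is a placeholder; scoping the Resource to the application's log groups keeps the grant tighter than a wildcard over all of CloudWatch Logs.

```python
# Sketch of the instance-profile policy statement the CloudWatch agent
# needs to create log streams and push log events. The log group name
# (/app/*) is a placeholder for the application's actual log groups.
logs_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents",
                "logs:DescribeLogStreams",
            ],
            "Resource": "arn:aws:logs:*:*:log-group:/app/*",
        }
    ],
}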


A company is using Amazon Elastic Container Service (Amazon ECS) to run its container-based application on AWS. The company needs to ensure that the container images contain no severe vulnerabilities. The company also must ensure that only specific IAM roles and specific AWS accounts can access the container images.

Which solution will meet these requirements with the LEAST management overhead?

A. Pull images from the public container registry. Publish the images to Amazon Elastic Container Registry (Amazon ECR) repositories with scan on push configured in a centralized AWS account. Use a CI/CD pipeline to deploy the images to different AWS accounts. Use identity-based policies to restrict access to which IAM principals can access the images.
B. Pull images from the public container registry. Publish the images to a private container registry that is hosted on Amazon EC2 instances in a centralized AWS account. Deploy host-based container scanning tools to EC2 instances that run Amazon ECS. Restrict access to the container images by using basic authentication over HTTPS.
C. Pull images from the public container registry. Publish the images to Amazon Elastic Container Registry (Amazon ECR) repositories with scan on push configured in a centralized AWS account. Use a CI/CD pipeline to deploy the images to different AWS accounts. Use repository policies and identity-based policies to restrict access to which IAM principals and accounts can access the images.
D. Pull images from the public container registry. Publish the images to AWS CodeArtifact repositories in a centralized AWS account. Use a CI/CD pipeline to deploy the images to different AWS accounts. Use repository policies and identity-based policies to restrict access to which IAM principals and accounts can access the images.
Suggested answer: C

Explanation:

The correct answer is C. Pull images from the public container registry. Publish the images to Amazon Elastic Container Registry (Amazon ECR) repositories with scan on push configured in a centralized AWS account. Use a CI/CD pipeline to deploy the images to different AWS accounts. Use repository policies and identity-based policies to restrict access to which IAM principals and accounts can access the images.

This solution meets the requirements because:

Amazon ECR is a fully managed container registry service that supports Docker and OCI images and artifacts1. It integrates with Amazon ECS and other AWS services to simplify the development and deployment of container-based applications.

Amazon ECR provides image scanning on push, which uses the Common Vulnerabilities and Exposures (CVEs) database from the open-source Clair project to detect software vulnerabilities in container images2. The scan results are available in the AWS Management Console, AWS CLI, or AWS SDKs2.

Amazon ECR supports cross-account access to repositories, which allows sharing images across multiple AWS accounts3. This can be achieved by using repository policies, which are resource-based policies that specify which IAM principals and accounts can access the repositories and what actions they can perform4. Additionally, identity-based policies can be used to control which IAM roles in each account can access the repositories5.

The other options are incorrect because:

A) This option does not use repository policies to restrict cross-account access to the images, which is a requirement. Identity-based policies alone are not sufficient to control access to Amazon ECR repositories5.

B) This option does not use Amazon ECR, which is a fully managed service that provides image scanning and cross-account access features. Hosting a private container registry on EC2 instances would require more management overhead and additional security measures.

D) This option uses AWS CodeArtifact, which is a fully managed artifact repository service that supports Maven, npm, NuGet, PyPI, and generic package formats6. However, AWS CodeArtifact does not support Docker or OCI container images, which are required for Amazon ECS applications.
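The cross-account piece of option C can be sketched as an ECR repository policy. Both the account ID and the role name below are hypothetical placeholders for the consuming account's deployment role.

```python
# Sketch of an ECR repository policy granting a specific role in another
# account pull-only access. The account ID (222233334444) and role name
# (AppDeployRole) are hypothetical.
repo_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowCrossAccountPull",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::222233334444:role/AppDeployRole"
            },
            "Action": [
                # The minimum set needed to docker-pull an image
                "ecr:GetDownloadUrlForLayer",
                "ecr:BatchGetImage",
                "ecr:BatchCheckLayerAvailability",
            ],
        }
    ],
}
```

Pairing this resource-based policy with identity-based policies in each account is what restricts access to both specific principals and specific accounts, as the question requires.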

A company is using an Amazon CloudFront distribution to deliver content from two origins. One origin is a dynamic application that is hosted on Amazon EC2 instances. The other origin is an Amazon S3 bucket for static assets.

A security analysis shows that HTTPS responses from the application do not comply with a security requirement to provide an X-Frame-Options HTTP header to prevent frame-related cross-site scripting attacks. A security engineer must make the full stack compliant by adding the missing HTTP header to the responses.

Which solution will meet these requirements?

A. Create a Lambda@Edge function. Include code to add the X-Frame-Options header to the response. Configure the function to run in response to the CloudFront origin response event.
B. Create a Lambda@Edge function. Include code to add the X-Frame-Options header to the response. Configure the function to run in response to the CloudFront viewer request event.
C. Update the CloudFront distribution by adding X-Frame-Options to custom headers in the origin settings.
D. Customize the EC2 hosted application to add the X-Frame-Options header to the responses that are returned to CloudFront.
Suggested answer: A

Explanation:

The correct answer is A because it allows the security engineer to add the X-Frame-Options header to the HTTPS responses from the application origin without modifying the origin itself. A Lambda@Edge function is a Lambda function that runs in response to CloudFront events, such as viewer request, origin request, origin response, or viewer response. By configuring the function to run in response to the origin response event, the security engineer can modify the response headers that CloudFront receives from the origin before sending them to the viewer1. The function can include code to add the X-Frame-Options header with the desired value, such as DENY or SAMEORIGIN, to prevent frame-related cross-site scripting attacks2.

The other options are incorrect because they are either less efficient or less secure than option A. Option B is incorrect because configuring the Lambda@Edge function to run in response to the viewer request event is not optimal, as it adds latency to the request processing and does not modify the response headers that CloudFront receives from the origin. Option C is incorrect because adding X-Frame-Options to custom headers in the origin settings does not affect the response headers that CloudFront sends to the viewer. Custom headers are only used to send additional information to the origin when CloudFront forwards a request3. Option D is incorrect because customizing the EC2 hosted application to add the X-Frame-Options header to the responses requires changing the origin code, which may not be feasible or desirable for the security engineer.
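A minimal sketch of such an origin-response handler, exercised locally with a CloudFront-shaped test event (the event structure below follows the documented Lambda@Edge response format):

```python
# Sketch of a Lambda@Edge origin-response handler that injects the
# X-Frame-Options header before CloudFront caches and returns the response.
def handler(event, context):
    response = event["Records"][0]["cf"]["response"]
    # Lambda@Edge header maps are keyed by lowercase header name and hold
    # a list of {"key", "value"} pairs.
    response["headers"]["x-frame-options"] = [
        {"key": "X-Frame-Options", "value": "SAMEORIGIN"}
    ]
    return response

# Local smoke test with a minimal CloudFront origin-response event
event = {"Records": [{"cf": {"response": {"status": "200", "headers": {}}}}]}
out = handler(event, None)
```

Because the function runs on origin response, the added header is also stored in the CloudFront cache, so cached hits carry it without re-invoking the function.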

An application has been built with Amazon EC2 instances that retrieve messages from Amazon SQS. Recently, IAM changes were made and the instances can no longer retrieve messages.

What actions should be taken to troubleshoot the issue while maintaining least privilege? (Select TWO.)

A. Configure and assign an MFA device to the role used by the instances.
B. Verify that the SQS resource policy does not explicitly deny access to the role used by the instances.
C. Verify that the access key attached to the role used by the instances is active.
D. Attach the AmazonSQSFullAccess managed policy to the role used by the instances.
E. Verify that the role attached to the instances contains policies that allow access to the queue.
Suggested answer: B, E

Explanation:

The correct answer is B and E. To troubleshoot the issue, the security engineer should verify that the SQS resource policy does not explicitly deny access to the role used by the instances, and that the role attached to the instances contains policies that allow access to the queue. These actions will ensure that the instances have the necessary permissions to retrieve messages from Amazon SQS, while maintaining the principle of least privilege.

The other options are incorrect because they are either unnecessary or overly permissive. Option A is incorrect because configuring and assigning an MFA device to the role used by the instances is not required to access Amazon SQS. MFA is an optional security feature that adds an extra layer of protection on top of the user name and password1. Option C is incorrect because verifying that the access key attached to the role used by the instances is active is not relevant to the issue. Access keys are used to make programmatic requests to AWS services, not to retrieve messages from Amazon SQS2. Option D is incorrect because attaching the AmazonSQSFullAccess managed policy to the role used by the instances is overly permissive and violates the principle of least privilege. This policy grants full access to all Amazon SQS actions and resources, which may expose the instances to unnecessary risks3.
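For contrast with the overly broad AmazonSQSFullAccess policy, a least-privilege statement for this workload might look like the sketch below (the Region, account ID, and queue name in the ARN are placeholders):

```python
# Sketch of a least-privilege policy for the instance role: only the
# actions needed to receive and delete messages, scoped to one queue.
# The queue ARN is a placeholder.
sqs_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "sqs:ReceiveMessage",
                "sqs:DeleteMessage",
                "sqs:GetQueueAttributes",
            ],
            "Resource": "arn:aws:sqs:us-east-1:111122223333:app-queue",
        }
    ],
}
```

If the instances still cannot retrieve messages with a policy like this attached, the remaining place to look is an explicit Deny in the queue's resource policy, which overrides any Allow.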

A company runs an online game on AWS. When players sign up for the game, their username and password credentials are stored in an Amazon Aurora database.

The number of users has grown to hundreds of thousands of players. The number of requests for password resets and login assistance has become a burden for the company's customer service team.

The company needs to implement a solution to give players another way to log in to the game. The solution must remove the burden of password resets and login assistance while securely protecting each player's credentials.

Which solution will meet these requirements?

A. When a new player signs up, use an AWS Lambda function to automatically create an IAM access key and a secret access key. Program the Lambda function to store the credentials on the player's device. Create IAM keys for existing players.
B. Migrate the player credentials from the Aurora database to AWS Secrets Manager. When a new player signs up, create a key-value pair in Secrets Manager for the player's user ID and password.
C. Configure Amazon Cognito user pools to federate access to the game with third-party identity providers (IdPs), such as social IdPs. Migrate the game's authentication mechanism to Cognito.
D. Instead of using usernames and passwords for authentication, issue API keys to new and existing players. Create an Amazon API Gateway API to give the game client access to the game's functionality.
Suggested answer: C

Explanation:

The best solution to meet the company's requirements of offering an alternative login method while securely protecting player credentials and reducing the burden of password resets is to use Amazon Cognito with user pools. Amazon Cognito provides a fully managed service that facilitates the authentication, authorization, and user management for web and mobile applications. By configuring Amazon Cognito user pools to federate access with third-party Identity Providers (IdPs), such as social media platforms or Google, the company can allow users to sign in through these external IdPs, thereby eliminating the need for traditional username and password logins. This not only enhances user convenience but also offloads the responsibility of managing user credentials and the associated challenges like password resets to Amazon Cognito, thereby reducing the burden on the company's customer service team. Additionally, Amazon Cognito integrates seamlessly with other AWS services and follows best practices for security and compliance, ensuring that the player's credentials are protected.

Total 327 questions