ExamGecko

Amazon SCS-C02 Practice Test - Questions Answers, Page 6

An Incident Response team is investigating an IAM access key leak that resulted in Amazon EC2 instances being launched. The company did not discover the incident until many months later. The Director of Information Security wants to implement new controls that will alert when similar incidents happen in the future.

Which controls should the company implement to achieve this? (Select TWO.)

A. Enable VPC Flow Logs in all VPCs. Create a scheduled AWS Lambda function that downloads and parses the logs, and sends an Amazon SNS notification for violations.
B. Use AWS CloudTrail to create a trail, and apply it to all Regions. Specify an Amazon S3 bucket to receive all the CloudTrail log files.
C. Add the following bucket policy to the company's AWS CloudTrail bucket to prevent log tampering:
{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Deny",
    "Action": "s3:PutObject",
    "Principal": "*",
    "Resource": "arn:aws:s3:::cloudtrail/AWSLogs/111122223333/*"
  }
}
Create an Amazon S3 data event for any PutObject attempts, which sends notifications to an Amazon SNS topic.
D. Create a Security Auditor role with permissions to access Amazon CloudWatch Logs in all Regions. Ship the logs to an Amazon S3 bucket and create a lifecycle policy to transition the logs to Amazon S3 Glacier.
E. Verify that Amazon GuardDuty is enabled in all Regions, and create an Amazon CloudWatch Events rule for Amazon GuardDuty findings. Add an Amazon SNS topic as the rule's target.
Suggested answer: A, E
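Option E's alerting pipeline hinges on an event pattern that matches GuardDuty findings. A minimal sketch of that pattern, built here as a plain Python dict so it can be inspected before being supplied to `aws events put-rule`; the SNS topic ARN is a placeholder, not from the question:

```python
import json

# Event pattern that matches all Amazon GuardDuty findings.
# The same JSON would be supplied to `aws events put-rule --event-pattern`.
guardduty_pattern = {
    "source": ["aws.guardduty"],
    "detail-type": ["GuardDuty Finding"],
}

# Placeholder target: the SNS topic that receives matched findings.
rule_target = {
    "Id": "guardduty-to-sns",
    "Arn": "arn:aws:sns:us-east-1:111122223333:security-alerts",  # hypothetical ARN
}

print(json.dumps(guardduty_pattern))
```

Because GuardDuty emits findings as CloudWatch Events (now EventBridge) events, this one rule plus an SNS target covers the "alert when it happens again" requirement without any log-parsing code.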

A company's Security Engineer is copying all application logs to centralized Amazon S3 buckets. Currently, each of the company's applications is in its own AWS account, and logs are pushed into S3 buckets associated with each account. The Engineer will deploy an AWS Lambda function into each account that copies the relevant log files to the centralized S3 bucket.

The Security Engineer is unable to access the log files in the centralized S3 bucket. The Engineer's IAM user policy from the centralized account looks like this:

The centralized S3 bucket policy looks like this:

Why is the Security Engineer unable to access the log files?

A. The S3 bucket policy does not explicitly allow the Security Engineer access to the objects in the bucket.
B. The object ACLs are not being updated to allow the users within the centralized account to access the objects.
C. The Security Engineer's IAM policy does not grant permissions to read objects in the S3 bucket.
D. The s3:PutObject and s3:PutObjectAcl permissions should be applied at the S3 bucket level.
Suggested answer: C
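Answer C points at a missing read grant. Since the actual policy documents are not reproduced above, here is a hedged sketch of the statement the Engineer's IAM policy would need; the bucket name is a placeholder:

```python
import json

# Sketch of the missing grant: the Engineer's IAM policy must allow reading
# objects in the centralized bucket. Bucket name is hypothetical.
read_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::central-log-bucket",    # for ListBucket
                "arn:aws:s3:::central-log-bucket/*",  # for GetObject
            ],
        }
    ],
}
print(json.dumps(read_policy, indent=2))
```

Note that s3:ListBucket applies to the bucket ARN while s3:GetObject applies to the object ARNs, which is why both Resource entries appear.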

Example.com is hosted on Amazon EC2 instances behind an Application Load Balancer (ALB). Third-party host intrusion detection system (HIDS) agents that capture the traffic of the EC2 instance are running on each host. The company must ensure they are using privacy-enhancing technologies for users, without losing the assurance the third-party solution offers.

What is the MOST secure way to meet these requirements?

A. Enable TLS pass-through on the ALB, and handle decryption at the server using Elliptic Curve Diffie-Hellman (ECDHE) cipher suites.
B. Create a listener on the ALB that uses encrypted connections with Elliptic Curve Diffie-Hellman (ECDHE) cipher suites, and pass the traffic in the clear to the server.
C. Create a listener on the ALB that uses encrypted connections with Elliptic Curve Diffie-Hellman (ECDHE) cipher suites, and use encrypted connections to the servers that do not enable Perfect Forward Secrecy (PFS).
D. Create a listener on the ALB that does not enable Perfect Forward Secrecy (PFS) cipher suites, and use encrypted connections to the servers using Elliptic Curve Diffie-Hellman (ECDHE) cipher suites.
Suggested answer: D

Explanation:

Answer D is the most secure way to meet the requirements. TLS provides encryption and authentication for data in transit, and the ALB distributes incoming traffic across multiple EC2 instances. ECDHE cipher suites provide Perfect Forward Secrecy (PFS), which ensures that past TLS traffic stays secure even if the certificate's private key is later leaked. Because the HIDS agents must inspect traffic on each host, the connection from the ALB terminates at the instance, where the agent can capture it before or after decryption. Creating a listener on the ALB that does not enable PFS cipher suites, while using ECDHE cipher suites on the encrypted connections to the servers, keeps the traffic encrypted on both legs without breaking the third-party solution. The other options either send traffic in the clear, weaken the server-side connection, or are incompatible with the HIDS agents.
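As a quick illustration of what ECDHE (forward-secret) cipher suites look like in practice, Python's `ssl` module can list the suites a default TLS context offers; this inspects the local OpenSSL configuration only and makes no network connection:

```python
import ssl

# Inspect the cipher suites offered by a default client TLS context.
ctx = ssl.create_default_context()
names = [c["name"] for c in ctx.get_ciphers()]

# ECDHE-based suites (and all TLS 1.3 suites) use ephemeral key exchange,
# which is what provides Perfect Forward Secrecy.
forward_secret = [n for n in names if "ECDHE" in n or n.startswith("TLS_")]
print(f"{len(forward_secret)} of {len(names)} offered suites are forward-secret")
```

On any modern OpenSSL build, the default context offers only forward-secret key exchange, which is why disabling PFS (as in the ALB listener of answer D) requires deliberately choosing a non-default policy.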

A company's Chief Security Officer has requested that a Security Analyst review and improve the security posture of each company AWS account. The Security Analyst decides to do this by improving AWS account root user security.

Which actions should the Security Analyst take to meet these requirements? (Select THREE.)

A. Delete the access keys for the account root user in every account.
B. Create an admin IAM user with administrative privileges and delete the account root user in every account.
C. Implement a strong password to help protect account-level access to the Management Console by the account root user.
D. Enable multi-factor authentication (MFA) on every account root user in all accounts.
E. Create a custom IAM policy to limit permissions to required actions for the account root user and attach the policy to the account root user.
F. Attach an IAM role to the account root user to make use of the automated credential rotation in AWS STS.
Suggested answer: A, C, D

Explanation:

These are the actions that improve AWS account root user security. The account root user has complete access to all AWS resources and services in an account, so protecting it from unauthorized or accidental use is a baseline best practice. Deleting the root user's access keys in every account prevents programmatic access by the root user, which reduces the risk of compromise or misuse. Implementing a strong password protects console access to the account. Enabling MFA on every root user adds an extra layer of security by requiring a verification code in addition to the password. The remaining options are invalid: the root user cannot be deleted (B), and IAM policies and roles cannot be attached to the root user (E and F).
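One way to audit these settings across accounts is the IAM credential report, a CSV whose first data row is the `<root_account>` user. The sketch below checks the root row of a sample report; the CSV content is made up for illustration (real reports come from `aws iam get-credential-report` and contain more columns):

```python
import csv
import io

# Hypothetical excerpt of an IAM credential report.
sample = """user,mfa_active,access_key_1_active,access_key_2_active
<root_account>,false,true,false
alice,true,false,false
"""

root = next(row for row in csv.DictReader(io.StringIO(sample))
            if row["user"] == "<root_account>")

findings = []
if root["mfa_active"] != "true":
    findings.append("root user has no MFA")
if "true" in (root["access_key_1_active"], root["access_key_2_active"]):
    findings.append("root user has active access keys")
print(findings)
```

Run against each account's real report, an empty `findings` list would confirm that actions A, C (password policy is checked separately), and D are in place.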

A Security Architect has been asked to review an existing security architecture and identify why the application servers cannot successfully initiate a connection to the database servers. The following summary describes the architecture:

1. An Application Load Balancer, an internet gateway, and a NAT gateway are configured in the public subnet.
2. Database, application, and web servers are configured on three different private subnets.
3. The VPC has two route tables: one for the public subnet and one for all other subnets. The route table for the public subnet has a 0.0.0.0/0 route to the internet gateway. The route table for all other subnets has a 0.0.0.0/0 route to the NAT gateway. All private subnets can route to each other.
4. Each subnet has a network ACL implemented that limits all inbound and outbound connectivity to only the required ports and protocols.
5. There are three Security Groups (SGs): database, application, and web. Each group limits all inbound and outbound connectivity to the minimum required.

Which of the following accurately reflects the access control mechanisms the Architect should verify?

A. Outbound SG configuration on database servers; inbound SG configuration on application servers; inbound and outbound network ACL configuration on the database subnet; inbound and outbound network ACL configuration on the application server subnet.
B. Inbound SG configuration on database servers; outbound SG configuration on application servers; inbound and outbound network ACL configuration on the database subnet; inbound and outbound network ACL configuration on the application server subnet.
C. Inbound and outbound SG configuration on database servers; inbound and outbound SG configuration on application servers; inbound network ACL configuration on the database subnet; outbound network ACL configuration on the application server subnet.
D. Inbound SG configuration on database servers; outbound SG configuration on application servers; inbound network ACL configuration on the database subnet; outbound network ACL configuration on the application server subnet.
Suggested answer: B

Explanation:

Security groups and network ACLs are the two access control mechanisms involved here. Security groups are stateful: return traffic for an allowed connection is automatically permitted, so only the initiating direction needs a rule. Network ACLs are stateless: both the request and the response must be explicitly allowed. Because the application servers initiate the connection, the Architect should verify the outbound security group rules on the application servers and the inbound security group rules on the database servers. Because network ACLs are stateless, both the inbound and outbound rules on the database subnet and on the application server subnet must be verified, so that both the request and the return traffic are allowed. The other options either reverse the security group directions or omit rules that the stateless network ACLs require.
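The stateful/stateless distinction can be sketched in a few lines: a stateful filter remembers allowed flows and admits their return traffic automatically, while a stateless filter evaluates every packet against its rules. Both classes below are toy models for illustration, not AWS APIs:

```python
# Toy model of a stateful security group: remembers allowed outbound flows.
class StatefulSG:
    def __init__(self, outbound_allowed):
        self.outbound_allowed = outbound_allowed
        self.flows = set()  # connections already allowed out

    def allow_out(self, dst, port):
        ok = (dst, port) in self.outbound_allowed
        if ok:
            self.flows.add((dst, port))
        return ok

    def allow_return(self, src, port):
        # Return traffic is admitted if the flow was allowed out earlier.
        return (src, port) in self.flows


# Toy model of a stateless network ACL: every packet checked against rules.
class StatelessACL:
    def __init__(self, inbound, outbound):
        self.inbound, self.outbound = inbound, outbound

    def allow_in(self, src, port):
        return (src, port) in self.inbound

    def allow_out(self, dst, port):
        return (dst, port) in self.outbound


app_sg = StatefulSG(outbound_allowed={("db", 3306)})
print(app_sg.allow_out("db", 3306))     # True: outbound rule present
print(app_sg.allow_return("db", 3306))  # True: stateful, return admitted

db_acl = StatelessACL(inbound={("app", 3306)}, outbound=set())
print(db_acl.allow_in("app", 3306))   # True: request admitted
print(db_acl.allow_out("app", 1024))  # False: return traffic needs its own rule
```

The last line is exactly why the explanation requires verifying both ACL directions on both subnets: without an outbound ACL rule covering the ephemeral return port, the stateless filter silently drops the database's reply.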

A Security Engineer receives alerts that an Amazon EC2 instance on a public subnet is under an SFTP brute force attack from a specific IP address, which is a known malicious bot.

What should the Security Engineer do to block the malicious bot?

A. Add a deny rule to the public VPC security group to block the malicious IP.
B. Add the malicious IP to AWS WAF blacklisted IPs.
C. Configure Linux iptables or Windows Firewall to block any traffic from the malicious IP.
D. Modify the hosted zone in Amazon Route 53 and create a DNS sinkhole for the malicious IP.
Suggested answer: C

Explanation:

Configuring the host firewall is what the Security Engineer should do to block the malicious bot. Security groups support only allow rules, not deny rules, so option A is not possible. AWS WAF inspects web traffic (HTTP/HTTPS) on supported resources and does not apply to SFTP connections, so option B does not help. A DNS sinkhole in Route 53 changes what the company's domain names resolve to; it cannot stop an attacker who is connecting directly to the instance's IP address, so option D is ineffective. Linux iptables or Windows Firewall, by contrast, can drop all traffic from the known malicious source IP at the host, which blocks the brute force attack.
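For the iptables route, the fix is a single DROP rule keyed on the source address. The snippet below only assembles the command string so the syntax can be reviewed before running it as root on the instance; 203.0.113.10 is a documentation placeholder, not the real attacker IP:

```python
# Assemble an iptables rule that drops all traffic from one source IP.
# 203.0.113.10 is from the RFC 5737 documentation range, used as a placeholder.
malicious_ip = "203.0.113.10"
rule = f"iptables -I INPUT -s {malicious_ip} -j DROP"
print(rule)
```

Inserting the rule at the top of the INPUT chain (`-I`) ensures it is evaluated before any earlier accept rules for port 22.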

A System Administrator is unable to start an Amazon EC2 instance in the eu-west-1 Region using an IAM role. The same System Administrator is able to start an EC2 instance in the eu-west-2 and eu-west-3 Regions. The SystemAdministrator access policy attached to the System Administrator IAM role allows unconditional access to all AWS services and resources within the account.

Which configuration caused this issue?

A) An SCP is attached to the account with the following permission statement:

B)

A permission boundary policy is attached to the System Administrator role with the following permission statement:

C)

A permission boundary is attached to the System Administrator role with the following permission statement:

D)

An SCP is attached to the account with the following statement:

A. Option A
B. Option B
C. Option C
D. Option D
Suggested answer: B

A company manages three separate AWS accounts for its production, development, and test environments. Each Developer is assigned a unique IAM user under the development account. A new application hosted on an Amazon EC2 instance in the development account requires read access to the archived documents stored in an Amazon S3 bucket in the production account.

How should access be granted?

A. Create an IAM role in the production account and allow EC2 instances in the development account to assume that role using the trust policy. Provide read access for the required S3 bucket to this role.
B. Use a custom identity broker to allow Developer IAM users to temporarily access the S3 bucket.
C. Create a temporary IAM user for the application to use in the production account.
D. Create a temporary IAM user in the production account and provide read access to Amazon S3. Generate the temporary IAM user's access key and secret key and store these on the EC2 instance used by the application in the development account.
Suggested answer: A

Explanation:

https://aws.amazon.com/premiumsupport/knowledge-center/cross-account-access-s3/
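Answer A involves two policy documents on the role in the production account: a trust policy naming the development account as principal, and a permissions policy granting S3 read access. A sketch of both, with placeholder account IDs and bucket name:

```python
import json

# Trust policy: lets principals in the development account assume the role.
# The account ID 222222222222 is a placeholder for the development account.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::222222222222:root"},
        "Action": "sts:AssumeRole",
    }],
}

# Permissions policy: read-only access to the archive bucket (name is made up).
read_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::prod-archive-bucket",
            "arn:aws:s3:::prod-archive-bucket/*",
        ],
    }],
}
print(json.dumps(trust_policy))
```

The EC2 instance's own role in the development account would additionally need permission to call sts:AssumeRole on the production role, so no long-lived keys are ever stored on the instance.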

A company is operating a website using Amazon CloudFront. CloudFront serves some content from Amazon S3 and other content from web servers running on EC2 instances behind an Application Load Balancer (ALB). Amazon DynamoDB is used as the data store. The company already uses AWS Certificate Manager (ACM) to store a public TLS certificate that can optionally secure connections between the website users and CloudFront. The company has a new requirement to enforce end-to-end encryption in transit.

Which combination of steps should the company take to meet this requirement? (Select THREE.)

A. Update the CloudFront distribution, configuring it to optionally use HTTPS when connecting to origins on Amazon S3.
B. Update the web application configuration on the web servers to use HTTPS instead of HTTP when connecting to DynamoDB.
C. Update the CloudFront distribution to redirect HTTP connections to HTTPS.
D. Configure the web servers on the EC2 instances to listen using HTTPS using the public ACM TLS certificate. Update the ALB to connect to the target group using HTTPS.
E. Update the ALB listener to listen using HTTPS using the public ACM TLS certificate. Update the CloudFront distribution to connect to the HTTPS listener.
F. Create a TLS certificate. Configure the web servers on the EC2 instances to use HTTPS only with that certificate. Update the ALB to connect to the target group using HTTPS.
Suggested answer: B, C, E

Explanation:

To enforce end-to-end encryption in transit, the company should do the following:

Update the web application configuration on the web servers to use HTTPS instead of HTTP when connecting to DynamoDB. This ensures that the data is encrypted when it travels from the web servers to the data store.

Update the CloudFront distribution to redirect HTTP requests to HTTPS. This ensures that the viewers always use HTTPS when they access the website through CloudFront.

Update the ALB listener to use HTTPS with the public ACM TLS certificate, and update the CloudFront distribution to connect to the HTTPS listener. This ensures that the data is encrypted when it travels from CloudFront to the ALB.
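The CloudFront side of steps C and E comes down to two settings, shown here as the fragment of a distribution config they would appear in. Field names follow the CloudFront API's DistributionConfig shape; the origin Id and port values are illustrative:

```python
# Fragment of a CloudFront DistributionConfig covering steps C and E.
distribution_fragment = {
    "DefaultCacheBehavior": {
        # Step C: viewers arriving over HTTP are redirected to HTTPS.
        "ViewerProtocolPolicy": "redirect-to-https",
    },
    "Origins": {"Items": [{
        "Id": "alb-origin",  # placeholder origin id
        "CustomOriginConfig": {
            # Step E: CloudFront connects only to the ALB's HTTPS listener.
            "OriginProtocolPolicy": "https-only",
            "HTTPSPort": 443,
        },
    }]},
}
print(distribution_fragment["DefaultCacheBehavior"]["ViewerProtocolPolicy"])
```

The viewer protocol policy governs the user-to-CloudFront leg, while the origin protocol policy governs the CloudFront-to-ALB leg; both must be locked down to get encryption on every hop in front of the load balancer.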

A company has a web-based application using Amazon CloudFront and running on Amazon Elastic Container Service (Amazon ECS) behind an Application Load Balancer (ALB). The ALB is terminating TLS and balancing load across the ECS service tasks. A security engineer needs to design a solution to ensure that application content is accessible only through CloudFront and is never accessible directly.

How should the security engineer build the MOST secure solution?

A. Add an origin custom header. Set the viewer protocol policy to HTTP and HTTPS. Set the origin protocol policy to HTTPS only. Update the application to validate the CloudFront custom header.
B. Add an origin custom header. Set the viewer protocol policy to HTTPS only. Set the origin protocol policy to match viewer. Update the application to validate the CloudFront custom header.
C. Add an origin custom header. Set the viewer protocol policy to redirect HTTP to HTTPS. Set the origin protocol policy to HTTP only. Update the application to validate the CloudFront custom header.
D. Add an origin custom header. Set the viewer protocol policy to redirect HTTP to HTTPS. Set the origin protocol policy to HTTPS only. Update the application to validate the CloudFront custom header.
Suggested answer: D

Explanation:

To ensure that application content is accessible only through CloudFront and not directly, the security engineer should do the following:

Add an origin custom header. This is a header that CloudFront adds to the requests it sends to the origin, which viewers cannot see or modify.

Set the viewer protocol policy to redirect HTTP to HTTPS. This ensures that the viewers always use HTTPS when they access the website through CloudFront.

Set the origin protocol policy to HTTPS only. This ensures that CloudFront always uses HTTPS when it connects to the origin.

Update the application to validate the CloudFront custom header. This means that the application checks if the request has the custom header and only responds if it does. Otherwise, it denies or ignores the request. This prevents users from bypassing CloudFront and accessing the content directly on the origin.
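The validation step in each option can be sketched as a tiny request check in the application: reject any request that lacks the shared-secret header CloudFront injects. The header name and secret value below are placeholders; in production the secret would come from a secrets store and be rotated:

```python
import hmac

# Placeholder name/value for the origin custom header CloudFront adds.
CUSTOM_HEADER = "x-origin-verify"
SHARED_SECRET = "rotate-me-regularly"  # hypothetical; keep in a secrets store

def from_cloudfront(headers: dict) -> bool:
    """Return True only if the request carries the expected custom header."""
    supplied = headers.get(CUSTOM_HEADER, "")
    # hmac.compare_digest avoids leaking the secret via timing differences.
    return hmac.compare_digest(supplied, SHARED_SECRET)

print(from_cloudfront({CUSTOM_HEADER: "rotate-me-regularly"}))  # True
print(from_cloudfront({}))                                      # False
```

A request that reaches the ALB directly, bypassing CloudFront, never carries the header and is denied; only CloudFront (which adds the header to every origin request) passes the check.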

Total 327 questions