Amazon SCS-C01 Practice Test - Questions Answers, Page 32

Which technique can be used to integrate AWS IAM (Identity and Access Management) with an on-premises LDAP (Lightweight Directory Access Protocol) directory service? Please select:

A. Use an IAM policy that references the LDAP account identifiers and the AWS credentials.

B. Use SAML (Security Assertion Markup Language) to enable single sign-on between AWS and LDAP.

C. Use AWS Security Token Service from an identity broker to issue short-lived AWS credentials.

D. Use IAM roles to automatically rotate the IAM credentials when LDAP credentials are updated.
Suggested answer: B

Explanation:

The AWS Security Blog provides the following guidance on this topic: the whitepaper Single Sign-On: Integrating AWS, OpenLDAP, and Shibboleth will help you integrate your existing LDAP-based user directory with AWS. When you integrate your existing directory with AWS, your users can access AWS by using their existing credentials. This means that your users don't need to maintain yet another user name and password just to access AWS resources.

Options A, C, and D are all invalid because in this sort of configuration you have to use SAML to enable single sign-on. For more information on integrating AWS with LDAP for single sign-on, please visit the following URL:

https://aws.amazon.com/blogs/security/new-whitepaper-single-sign-on-integrating-aws-openldap-and-shibboleth/

The correct answer is: Use SAML (Security Assertion Markup Language) to enable single sign-on between AWS and LDAP.
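As a sketch of what the SAML integration looks like on the IAM side, the federated role that LDAP users assume carries a trust policy allowing sts:AssumeRoleWithSAML from the SAML identity provider that fronts the directory. The account ID and provider name below are placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::123456789012:saml-provider/MyLDAPProvider"
      },
      "Action": "sts:AssumeRoleWithSAML",
      "Condition": {
        "StringEquals": { "SAML:aud": "https://signin.aws.amazon.com/saml" }
      }
    }
  ]
}
```

The SAML:aud condition restricts the assertion to the AWS sign-in endpoint, which is the usual pattern for console single sign-on.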

You have an EBS volume attached to an EC2 instance which uses KMS for encryption. Someone has now gone ahead and deleted the customer master key (CMK) which was used for the EBS encryption. What should be done to ensure the data can be decrypted?

Please select:

A. Create a new Customer Key using KMS and attach it to the existing volume

B. You cannot decrypt the data that was encrypted under the CMK, and the data is not recoverable.

C. Request AWS Support to recover the key

D. Use AWS Config to recover the key
Suggested answer: B

Explanation:

Deleting a customer master key (CMK) in AWS Key Management Service (AWS KMS) is destructive and potentially dangerous. It deletes the key material and all metadata associated with the CMK, and is irreversible. After a CMK is deleted, you can no longer decrypt the data that was encrypted under that CMK, which means that data becomes unrecoverable. You should delete a CMK only when you are sure that you don't need to use it anymore. If you are not sure, consider disabling the CMK instead of deleting it. You can re-enable a disabled CMK if you need to use it again later, but you cannot recover a deleted CMK.

https://docs.aws.amazon.com/kms/latest/developerguide/deleting-keys.html

Option A is incorrect because creating a new CMK and attaching it to the existing volume will not allow the data to be decrypted; you cannot attach customer master keys after the volume is encrypted. Options C and D are invalid because once the key has been deleted, you cannot recover it. For more information on EBS encryption with KMS, please visit the following URL:

https://docs.aws.amazon.com/kms/latest/developerguide/services-ebs.html

The correct answer is: You cannot decrypt the data that was encrypted under the CMK, and the data is not recoverable.
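The safety net KMS provides is the mandatory waiting period: ScheduleKeyDeletion only accepts a pending window of 7 to 30 days, during which the key is unusable but the deletion can still be cancelled. A minimal local sketch of that rule (no AWS calls; the real operation is kms:ScheduleKeyDeletion):

```python
from datetime import datetime, timedelta, timezone

def schedule_key_deletion(pending_window_days, now=None):
    """Sketch of KMS's deletion waiting period: only a 7-30 day window
    is accepted; the key is unusable during the window but the deletion
    can still be cancelled until the returned date."""
    if not 7 <= pending_window_days <= 30:
        raise ValueError("PendingWindowInDays must be between 7 and 30")
    now = now or datetime.now(timezone.utc)
    return now + timedelta(days=pending_window_days)
```

Choosing the maximum window (30 days) gives the longest chance to notice a mistaken deletion, which is why disabling a key first is the recommended alternative.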

You work as an administrator for a company. The company hosts a number of resources using AWS. There is an incident of suspicious API activity which occurred 11 days ago. The Security Admin has asked you to retrieve the API activity from that point in time. How can this be achieved? Please select:

A. Search the CloudWatch logs for the suspicious activity which occurred 11 days ago

B. Search the CloudTrail event history for the API events which occurred 11 days ago.

C. Search the CloudWatch metrics for the suspicious activity which occurred 11 days ago

D. Use AWS Config to get the API calls which were made 11 days ago.
Suggested answer: B

Explanation:

The CloudTrail event history allows you to view events recorded for the past 90 days, so the API calls made 11 days ago can be retrieved directly from the event history. Options A and C are invalid because CloudWatch logs and metrics do not record account API activity.

Option D is invalid because AWS Config is a configuration service and does not monitor API activity. For more information on AWS CloudTrail, please visit the following URL:

https://docs.aws.amazon.com/awscloudtrail/latest/userguide/how-cloudtrail-works.html

Note:

This question assumes that the customer has the CloudTrail service available.

CloudTrail event history is enabled by default for all customers and provides visibility into recent account activity without the need to configure a trail. When event history was first extended to all customers it covered only the past seven days; it now retains 90 days of management events, so activity from 11 days ago is still searchable. For retention beyond 90 days, configure a trail to deliver the logs to an S3 bucket.

https://aws.amazon.com/blogs/aws/new-amazon-web-services-extends-cloudtrail-to-all-aws-customers/

The correct answer is: Search the CloudTrail event history for the API events which occurred 11 days ago.
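Once the events are retrieved (in practice via the CloudTrail LookupEvents API, which pages through the 90-day event history), narrowing them to a specific day is a simple timestamp filter. A local sketch over CloudTrail-style records with made-up event names:

```python
from datetime import datetime, timedelta, timezone

def events_on_day(events, days_ago, now=None):
    """Filter CloudTrail-style records (each with an ISO-8601 'eventTime')
    to those that occurred exactly `days_ago` days before `now` (UTC)."""
    now = now or datetime.now(timezone.utc)
    target = (now - timedelta(days=days_ago)).date()
    return [e for e in events
            if datetime.fromisoformat(e["eventTime"]).date() == target]
```

The real LookupEvents call takes StartTime/EndTime parameters that express the same window server-side.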

You need to ensure that the CloudTrail logs which are being delivered to your AWS account are encrypted. How can this be achieved in the easiest way possible? Please select:

A. Don't do anything since CloudTrail logs are automatically encrypted.

B. Enable S3-SSE for the underlying bucket which receives the log files

C. Enable S3-KMS for the underlying bucket which receives the log files

D. Enable KMS encryption for the logs which are sent to Cloudwatch
Suggested answer: A

Explanation:

The AWS documentation mentions the following:

By default, the log files delivered by CloudTrail to your bucket are encrypted by Amazon server-side encryption with Amazon S3-managed encryption keys (SSE-S3). Options B, C, and D are all invalid because all logs are already encrypted by default when CloudTrail delivers them to S3 buckets. For more information on AWS CloudTrail log encryption, please visit the following URL:

https://docs.aws.amazon.com/awscloudtrail/latest/userguide/encrypting-cloudtrail-log-files-with-aws-kms.html

The correct answer is: Don't do anything since CloudTrail logs are automatically encrypted.
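For reference, SSE-S3 shows up as the AES256 algorithm in a bucket's default encryption configuration; this is roughly the shape the S3 GetBucketEncryption API returns for such a bucket:

```json
{
  "ServerSideEncryptionConfiguration": {
    "Rules": [
      {
        "ApplyServerSideEncryptionByDefault": { "SSEAlgorithm": "AES256" }
      }
    ]
  }
}
```

A bucket configured for SSE-KMS would instead report "SSEAlgorithm": "aws:kms" together with a KMSMasterKeyID.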

You have a requirement to serve up private content using the keys available with CloudFront. How can this be achieved? Please select:

A. Add the keys to the backend distribution.

B. Add the keys to the S3 bucket

C. Create pre-signed URLs

D. Use AWS Access keys
Suggested answer: C

Explanation:

Options A and B are invalid because you will not add keys to either the backend distribution or the S3 bucket. Option D is invalid because access keys are used for programmatic access to AWS resources. You can use CloudFront key pairs to create a trusted pre-signed URL which can be distributed to users.

From the documentation topic Specifying the AWS Accounts That Can Create Signed URLs and Signed Cookies (Trusted Signers):

• Creating CloudFront Key Pairs for Your Trusted Signers

• Reformatting the CloudFront Private Key (.NET and Java Only)

• Adding Trusted Signers to Your Distribution

• Verifying that Trusted Signers Are Active (Optional)

• Rotating CloudFront Key Pairs

To create signed URLs or signed cookies, you need at least one AWS account that has an active CloudFront key pair. This account is known as a trusted signer. The trusted signer has two purposes:

• As soon as you add the AWS account ID for your trusted signer to your distribution, CloudFront starts to require that users use signed URLs or signed cookies to access your objects.

• When you create signed URLs or signed cookies, you use the private key from the trusted signer's key pair to sign a portion of the URL or the cookie. When someone requests a restricted object, CloudFront compares the signed portion of the URL or cookie with the unsigned portion to verify that the URL or cookie hasn't been tampered with. CloudFront also verifies that the URL or cookie is valid, meaning, for example, that the expiration date and time hasn't passed.

For more information on CloudFront private trusted content, please visit the following URL:

https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-trusted-s

The correct answer is: Create pre-signed URLs.
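To make the "sign a portion of the URL" step concrete, here is a stdlib-only sketch of two pieces of a CloudFront signed URL: the canned policy document, and CloudFront's URL-safe base64 variant used to carry it in a query string. The actual RSA signature over the policy (made with the trusted signer's private key) is omitted, and the URL and expiry are placeholders:

```python
import base64
import json

def cloudfront_safe_b64(data):
    """CloudFront's URL-safe base64: '+' -> '-', '=' -> '_', '/' -> '~',
    so the encoded policy survives inside a query string."""
    return (base64.b64encode(data).decode()
            .replace("+", "-").replace("=", "_").replace("/", "~"))

def canned_policy(url, expires_epoch):
    """Canned policy document for a CloudFront signed URL: one resource
    plus an expiry; the real URL also carries a signature of this JSON."""
    return json.dumps({
        "Statement": [{
            "Resource": url,
            "Condition": {"DateLessThan": {"AWS:EpochTime": expires_epoch}},
        }]
    }, separators=(",", ":"))
```

In practice an SDK helper (such as a CloudFront URL signer) performs the signing and assembles the final query string from these components.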

Your company currently has a set of EC2 instances hosted in a VPC. The IT Security department suspects a possible DDoS attack on the instances. What can you do to zero in on the IP addresses which are receiving a flurry of requests? Please select:

A. Use VPC Flow Logs to get the IP addresses accessing the EC2 Instances

B. Use AWS CloudTrail to get the IP addresses accessing the EC2 Instances

C. Use AWS Config to get the IP addresses accessing the EC2 Instances

D. Use AWS Trusted Advisor to get the IP addresses accessing the EC2 Instances
Suggested answer: A

Explanation:

With VPC Flow Logs you can get the list of IP addresses which are hitting the instances in your VPC. You can then use the information in the logs to see which external IP addresses are sending a flurry of requests, which could be the potential threat of a DDoS attack.

Option B is incorrect: CloudTrail records AWS API calls for your account, whereas VPC Flow Logs record network traffic for VPCs, subnets, network interfaces, etc. As per AWS:

VPC Flow Logs is a feature that enables you to capture information about the IP traffic going to and from network interfaces in your VPC, whereas AWS CloudTrail is a service that captures API calls and delivers the log files to an Amazon S3 bucket that you specify.

Option C is invalid because AWS Config is a configuration service and will not be able to get the IP addresses.

Option D is invalid because Trusted Advisor is a recommendation service and will not be able to get the IP addresses. For more information on VPC Flow Logs, please visit the following URL:

https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/flow-logs.html

The correct answer is: Use VPC Flow Logs to get the IP addresses accessing the EC2 Instances.
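Zeroing in on the noisy sources from flow log records is a matter of counting the srcaddr field. A sketch over default-format (version 2) flow log lines, where srcaddr is the fourth space-separated field; the addresses below are documentation examples, not real traffic:

```python
from collections import Counter

def top_sources(flow_log_lines, n=3):
    """Count source addresses in default-format (v2) VPC Flow Log records:
    version account-id interface-id srcaddr dstaddr srcport dstport ..."""
    counts = Counter()
    for line in flow_log_lines:
        fields = line.split()
        if len(fields) > 3:
            counts[fields[3]] += 1
    return counts.most_common(n)
```

In practice the same aggregation is usually done with a CloudWatch Logs Insights query over the flow log group rather than by hand.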

You are building a system to distribute confidential training videos to employees. Using CloudFront, what method could be used to serve content that is stored in S3, but not publicly accessible from S3 directly? Please select:

A. Create an Origin Access Identity (OAI) for CloudFront and grant access to the objects in your S3 bucket to that OAI.

B. Add the CloudFront account security group "amazon-cf/amazon-cf-sg" to the appropriate S3 bucket policy.

C. Create an Identity and Access Management (IAM) User for CloudFront and grant access to the objects in your S3 bucket to that IAM User.

D. Create a S3 bucket policy that lists the CloudFront distribution ID as the Principal and the target bucket as the Amazon Resource Name (ARN).
Suggested answer: A

Explanation:

You can optionally secure the content in your Amazon S3 bucket so users can access it through CloudFront but cannot access it directly by using Amazon S3 URLs. This prevents anyone from bypassing CloudFront and using the Amazon S3 URL to get content that you want to restrict access to. This step isn't required to use signed URLs, but we recommend it. To require that users access your content through CloudFront URLs, you perform the following tasks:

Create a special CloudFront user called an origin access identity.

Give the origin access identity permission to read the objects in your bucket.

Remove permission for anyone else to use Amazon S3 URLs to read the objects.

Options B, C, and D are all invalid because the right way is to create an Origin Access Identity (OAI) for CloudFront and grant access accordingly. For more information on serving private content via CloudFront, please visit the following URL:

https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/PrivateContent.html

The correct answer is: Create an Origin Access Identity (OAI) for CloudFront and grant access to the objects in your S3 bucket to that OAI.
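The "give the origin access identity permission to read the objects" step translates into a bucket policy whose principal is the OAI. A minimal sketch, where the bucket name and OAI ID are placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E1EXAMPLEID"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-training-videos/*"
    }
  ]
}
```

Combined with removing any public-read grants on the bucket, this leaves CloudFront as the only read path to the objects.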

A company has an existing AWS account and a set of critical resources hosted in that account. The employee who was in charge of the root account has left the company. What must now be done to secure the account? Choose 3 answers from the options given below.

Please select:

A. Change the access keys for all IAM users.

B. Delete all custom created IAM policies

C. Delete the access keys for the root account

D. Confirm MFA to a secure device

E. Change the password for the root account

F. Change the password for all IAM users
Suggested answer: C, D, E

Explanation:

If the root account has a chance of being compromised, then you have to carry out the below steps:

1. Delete the access keys for the root account

2. Confirm MFA to a secure device

3. Change the password for the root account

This will ensure the employee who has left has no chance to compromise the resources in AWS.

Option A is invalid because this would hamper the working of the current IAM users.

Option B is invalid because this could hamper the current working of services in your AWS account.

Option F is invalid because this would hamper the working of the current IAM users. For more information on the IAM root user, please visit the following URL:

https://docs.aws.amazon.com/IAM/latest/UserGuide/id_root-user.html

The correct answers are: Delete the access keys for the root account; Confirm MFA to a secure device; Change the password for the root account.


A company developed an incident response plan 18 months ago. Regular implementations of the response plan are carried out, but no changes have been made to the response plan since its creation. Which of the following is a correct statement regarding the plan?

Please select:

A. It places too much emphasis on already implemented security controls.

B. The response plan is not implemented on a regular basis

C. The response plan does not cater to new services

D. The response plan is complete in its entirety
Suggested answer: C

Explanation:

The issue here is that the incident response plan does not cater to newly created services. AWS keeps changing and adding new services, and hence the response plan must cater to these new services. Options A and B are invalid because we don't know either for a fact.

Option D is invalid because we know that the response plan is not complete, since it does not cater to new features of AWS. For more information on incident response plans, please visit the following URL:

https://aws.amazon.com/blogs/publicsector/building-a-cloud-specific-incident-response-plan/

The correct answer is: The response plan does not cater to new services.

Your application currently uses customer keys which are generated via AWS KMS in the US East region. You now want to use the same set of keys from the EU-Central region. How can this be accomplished? Please select:

A. Export the keys from the US East region and import them into the EU-Central region

B. Use key rotation and rotate the existing keys to the EU-Central region

C. Use the backing key from the US East region and use it in the EU-Central region

D. This is not possible since keys from KMS are region specific
Suggested answer: D

Explanation:

Option A is invalid because keys cannot be exported and imported across regions.

Option B is invalid because key rotation cannot be used to move keys between regions.

Option C is invalid because the backing key cannot be used outside its own region. This is mentioned in the AWS documentation:

What geographic region are my keys stored in? Keys are only stored and used in the region in which they are created. They cannot be transferred to another region. For example, keys created in the EU-Central (Frankfurt) region are only stored and used within the EU-Central (Frankfurt) region.

For more information on KMS, please visit the following URL:

https://aws.amazon.com/kms/faqs/

The correct answer is: This is not possible since keys from KMS are region specific.
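The region binding is visible in the key's ARN itself, whose fourth colon-separated component names the only region where the key can be used. A small sketch (the ARN below is a documentation-style example, not a real key):

```python
def kms_key_region(key_arn):
    """Extract the home region from a KMS key ARN of the form
    arn:aws:kms:<region>:<account-id>:key/<key-id>."""
    parts = key_arn.split(":")
    if len(parts) < 6 or parts[2] != "kms":
        raise ValueError("not a KMS key ARN")
    return parts[3]
```

A key whose ARN names us-east-1 can therefore never satisfy a request issued against the eu-central-1 KMS endpoint.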

Total 590 questions