
Amazon SCS-C01 Practice Test - Questions Answers, Page 29


Your company has a set of 1000 EC2 Instances defined in an AWS Account. They want to effectively automate several administrative tasks on these instances. Which of the following would be an effective way to achieve this? Please select:

A. Use the AWS Systems Manager Parameter Store
B. Use the AWS Systems Manager Run Command
C. Use AWS Inspector
D. Use AWS Config
Suggested answer: B

Explanation:

The AWS Documentation mentions the following:

AWS Systems Manager Run Command lets you remotely and securely manage the configuration of your managed instances. A managed instance is any Amazon EC2 instance or on-premises machine in your hybrid environment that has been configured for Systems Manager. Run Command enables you to automate common administrative tasks and perform ad hoc configuration changes at scale. You can use Run Command from the AWS console, the AWS Command Line Interface, AWS Tools for Windows PowerShell, or the AWS SDKs. Run Command is offered at no additional cost.

Option A is invalid because the Parameter Store is used to store configuration parameters and secrets, not to run administrative tasks.

Option C is invalid because that service is used to scan EC2 instances for vulnerabilities.

Option D is invalid because that service is used to track configuration changes. For more information on executing remote commands, please visit the below URL:

https://docs.aws.amazon.com/systems-manager/latest/userguide/execute-remote-commands.html

The correct answer is: Use the AWS Systems Manager Run Command
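As a sketch, a Run Command invocation can target all instances carrying a given tag instead of listing 1000 instance IDs. The helper below only builds the SendCommand request payload; the tag key/value and shell command are illustrative assumptions, and sending it would be a single boto3 `ssm.send_command(**params)` call (omitted here).

```python
def build_run_command_request(tag_key, tag_value, commands):
    """Build an SSM SendCommand request that targets instances by tag.

    Targeting by tag scales to any fleet size; AWS-RunShellScript is the
    managed document for running shell commands on Linux instances.
    """
    return {
        "DocumentName": "AWS-RunShellScript",
        "Targets": [{"Key": f"tag:{tag_key}", "Values": [tag_value]}],
        "Parameters": {"commands": commands},
        # Throttle the rollout so the whole fleet is not hit at once.
        "MaxConcurrency": "10%",
        "MaxErrors": "5%",
    }

# Example: patch all instances tagged Environment=Prod (illustrative values).
params = build_run_command_request("Environment", "Prod", ["yum -y update"])
# A boto3 client would send it with: boto3.client("ssm").send_command(**params)
```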

You want to launch an EC2 Instance with your own key pair in AWS. How can you achieve this?

Choose 3 answers from the options given below.

Please select:

A. Use a third-party tool to create the key pair
B. Create a new key pair using the AWS CLI
C. Import the public key into EC2
D. Import the private key into EC2
Suggested answer: A, B, C

Explanation:

This is given in the AWS Documentation under Creating a Key Pair:

You can use Amazon EC2 to create your key pair. For more information, see Creating a Key Pair Using Amazon EC2. Alternatively, you can use a third-party tool to create a key pair and then import the public key to Amazon EC2. For more information, see Importing Your Own Public Key to Amazon EC2.

Option B is correct because you can use the AWS CLI to create a new key pair:

https://docs.aws.amazon.com/cli/latest/userguide/cli-ec2-keypairs.html

Option D is invalid because it is the public key, not the private key, that is imported into EC2. For more information on EC2 key pairs, please visit the below URL:

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html

The correct answers are: Use a third-party tool to create the key pair, Create a new key pair using the AWS CLI, Import the public key into EC2
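A minimal sketch of the import path: generate the key pair with a third-party tool (for example `ssh-keygen`), then hand only the public half to EC2. The helper below just assembles the ImportKeyPair request parameters; the key material shown is a placeholder, and a real call would be `boto3.client("ec2").import_key_pair(**params)`.

```python
def build_import_key_pair_request(key_name, public_key_openssh):
    """Build an EC2 ImportKeyPair request from an OpenSSH public key string.

    Only the public key ever leaves your machine; the private key is
    never uploaded to AWS.
    """
    return {
        "KeyName": key_name,
        # boto3 expects raw bytes and base64-encodes them on the wire.
        "PublicKeyMaterial": public_key_openssh.encode("ascii"),
    }

# Placeholder key material; a real key would come from e.g. `ssh-keygen -t rsa`.
params = build_import_key_pair_request("my-key", "ssh-rsa AAAAB3NzaC1yc2E user@host")
```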

You have a set of keys defined using the AWS KMS service. You want to stop using a couple of the keys, but are not sure which services are currently using them. Which of the following would be a safe option to take the keys out of use?

Please select:

A. Delete the keys, since there is a 7-day waiting period before deletion anyway
B. Disable the keys
C. Set an alias for the key
D. Change the key material for the key
Suggested answer: B

Explanation:

Option A is invalid because once the waiting period ends, the deletion is irreversible. Options C and D are invalid because they do not address whether the keys are still in use.

The AWS Documentation mentions the following:

Deleting a customer master key (CMK) in AWS Key Management Service (AWS KMS) is destructive and potentially dangerous. It deletes the key material and all metadata associated with the CMK, and is irreversible. After a CMK is deleted, you can no longer decrypt the data that was encrypted under that CMK, which means that data becomes unrecoverable. You should delete a CMK only when you are sure that you don't need to use it anymore. If you are not sure, consider disabling the CMK instead of deleting it. You can re-enable a disabled CMK if you need to use it again later, but you cannot recover a deleted CMK.

For more information on deleting keys from KMS, please visit the below URL:

https://docs.aws.amazon.com/kms/latest/developerguide/deleting-keys.html

The correct answer is: Disable the keys
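One way to gain confidence that a key is unused before disabling it is to scan CloudTrail for recent cryptographic operations against it. The filter below is a sketch over already-fetched CloudTrail event records (field names follow the CloudTrail event format; fetching the events, e.g. with boto3's `lookup_events`, is left out):

```python
# KMS API calls that indicate a key is actively used for cryptography,
# as opposed to management calls like DescribeKey.
CRYPTO_EVENTS = {"Encrypt", "Decrypt", "GenerateDataKey", "ReEncrypt"}

def key_usage_events(events, key_arn):
    """Return CloudTrail events showing cryptographic use of the given key."""
    used = []
    for event in events:
        if event.get("eventName") not in CRYPTO_EVENTS:
            continue
        resources = event.get("resources", [])
        if any(r.get("ARN") == key_arn for r in resources):
            used.append(event)
    return used

# If key_usage_events(...) comes back empty over a long enough window,
# disabling the key (kms disable-key) is the safe, reversible next step.
```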

You are building a large-scale confidential documentation web server on AWS, and all of the documentation for it will be stored on S3. One of the requirements is that it cannot be publicly accessible from S3 directly, and you will need to use CloudFront to accomplish this. Which of the methods listed below would satisfy the requirements as outlined? Choose an answer from the options below. Please select:

A. Create an Identity and Access Management (IAM) user for CloudFront and grant access to the objects in your S3 bucket to that IAM user.
B. Create an Origin Access Identity (OAI) for CloudFront and grant access to the objects in your S3 bucket to that OAI.
C. Create individual policies for each bucket the documents are stored in, and in that policy grant access to only CloudFront.
D. Create an S3 bucket policy that lists the CloudFront distribution ID as the Principal and the target bucket as the Amazon Resource Name (ARN).
Suggested answer: B

Explanation:

If you want to use CloudFront signed URLs or signed cookies to provide access to objects in your Amazon S3 bucket, you probably also want to prevent users from accessing your Amazon S3 objects using Amazon S3 URLs. If users access your objects directly in Amazon S3, they bypass the controls provided by CloudFront signed URLs or signed cookies, for example, control over the date and time after which a user can no longer access your content and control over which IP addresses can be used to access content. In addition, if users access objects both through CloudFront and directly by using Amazon S3 URLs, CloudFront access logs are less useful because they're incomplete.

Option A is invalid because you need to create an Origin Access Identity for CloudFront, not an IAM user. Options C and D are invalid because policies alone will not fulfil the requirement.

For more information on Origin Access Identity, please see the below link:

http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html

The correct answer is: Create an Origin Access Identity (OAI) for CloudFront and grant access to the objects in your S3 bucket to that OAI.
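For illustration, the bucket policy that grants the OAI read access looks like the following; the OAI ID and bucket name are placeholders. It is built here as a Python dict so it can be serialized with `json.dumps` and attached with `put_bucket_policy`:

```python
import json

def oai_read_policy(bucket, oai_id):
    """Bucket policy allowing only a CloudFront OAI to read objects.

    The OAI is referenced through its special CloudFront principal ARN;
    both bucket and oai_id here are illustrative placeholders.
    """
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {
                    "AWS": f"arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity {oai_id}"
                },
                "Action": "s3:GetObject",
                "Resource": f"arn:aws:s3:::{bucket}/*",
            }
        ],
    }

policy_json = json.dumps(oai_read_policy("confidential-docs", "E2EXAMPLE"))
```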


Your company makes use of S3 buckets for storing data. There is a company policy that all services should have logging enabled. How can you ensure that logging is always enabled for S3 buckets created in the AWS account? Please select:

A. Use AWS Inspector to inspect all S3 buckets and enable logging for those where it is not enabled
B. Use AWS Config Rules to check whether logging is enabled for buckets
C. Use AWS CloudWatch metrics to check whether logging is enabled for buckets
D. Use AWS CloudWatch logs to check whether logging is enabled for buckets
Suggested answer: B

Explanation:

This is given in the AWS Documentation as an example rule in AWS Config (example rule with a configuration change trigger):

1. You add the AWS Config managed rule, S3_BUCKET_LOGGING_ENABLED, to your account to check whether your Amazon S3 buckets have logging enabled.
2. The trigger type for the rule is configuration changes. AWS Config runs the evaluations for the rule when an Amazon S3 bucket is created, changed, or deleted.
3. When a bucket is updated, the configuration change triggers the rule and AWS Config evaluates whether the bucket is compliant with the rule.

Option A is invalid because AWS Inspector cannot be used to scan all buckets.

Options C and D are invalid because CloudWatch cannot be used to check whether logging is enabled for buckets. For more information on Config Rules, please see the below link:

https://docs.aws.amazon.com/config/latest/developerguide/evaluate-config-rules.html

The correct answer is: Use AWS Config Rules to check whether logging is enabled for buckets
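Step 1 above amounts to a single PutConfigRule call. The helper below builds that request for the S3_BUCKET_LOGGING_ENABLED managed rule (the rule name is a placeholder; sending it would be `boto3.client("config").put_config_rule(ConfigRule=rule)`):

```python
def s3_logging_config_rule(rule_name="s3-bucket-logging-enabled"):
    """Build a PutConfigRule payload for the AWS managed rule
    S3_BUCKET_LOGGING_ENABLED, scoped to S3 buckets so it is evaluated
    whenever a bucket's configuration changes."""
    return {
        "ConfigRuleName": rule_name,
        "Source": {
            "Owner": "AWS",  # AWS managed rule, no custom Lambda needed
            "SourceIdentifier": "S3_BUCKET_LOGGING_ENABLED",
        },
        "Scope": {"ComplianceResourceTypes": ["AWS::S3::Bucket"]},
    }

rule = s3_logging_config_rule()
```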

A security engineer must ensure that all infrastructure launched in the company AWS account is monitored for deviation from compliance rules, specifically that all EC2 instances are launched from one of a specified list of AMIs and that all attached EBS volumes are encrypted. Infrastructure not in compliance should be terminated. What combination of steps should the engineer implement? Select 2 answers from the options given below.

Please select:

A. Set up a CloudWatch event based on Trusted Advisor metrics
B. Trigger a Lambda function from a scheduled CloudWatch event that terminates non-compliant infrastructure.
C. Set up a CloudWatch event based on Amazon Inspector findings
D. Monitor compliance with AWS Config Rules triggered by configuration changes
E. Trigger a CLI command from a CloudWatch event that terminates the infrastructure
Suggested answer: B, D

Explanation:

You can use AWS Config to monitor for such events.

Option A is invalid because you cannot set up CloudWatch events based on Trusted Advisor checks.

Option C is invalid because Amazon Inspector cannot be used to check whether instances are launched from a specific AMI. Option E is invalid because triggering a CLI command is not the preferred option; instead, you should use Lambda functions for all automation purposes. For more information on Config Rules, please see the below link:

https://docs.aws.amazon.com/config/latest/developerguide/evaluate-config-rules.html

These events can then trigger a Lambda function to terminate instances. For more information on CloudWatch events, please see the below link:

https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/WhatIsCloudWatchEvents.html


The correct answers are: Trigger a Lambda function from a scheduled CloudWatch event that terminates non-compliant infrastructure., Monitor compliance with AWS Config Rules triggered by configuration changes
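The termination logic inside such a Lambda can be kept pure and simple. The sketch below checks an instance description against an approved-AMI list and the EBS-encryption requirement; the field names are simplified assumptions loosely following the EC2 DescribeInstances/DescribeVolumes shapes, and the wiring to boto3 and `terminate_instances` is omitted:

```python
def is_compliant(instance, approved_amis):
    """Return True if the instance uses an approved AMI and every
    attached EBS volume is encrypted."""
    if instance["ImageId"] not in approved_amis:
        return False
    return all(vol["Encrypted"] for vol in instance.get("Volumes", []))

def instances_to_terminate(instances, approved_amis):
    """Instance IDs that violate either compliance rule."""
    return [
        i["InstanceId"]
        for i in instances
        if not is_compliant(i, approved_amis)
    ]

# Inside the Lambda handler, the returned IDs would be passed to
# ec2.terminate_instances(InstanceIds=...).
```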


A company has external vendors that must deliver files to the company. These vendors have cross-account access that gives them permission to upload objects to one of the company's S3 buckets. What combination of steps must the vendor follow to successfully deliver a file to the company?

Select 2 answers from the options given below

Please select:

A. Attach an IAM role to the bucket that grants the bucket owner full permissions to the object
B. Add a grant to the object's ACL giving full permissions to the bucket owner.
C. Encrypt the object with a KMS key controlled by the company.
D. Add a bucket policy to the bucket that grants the bucket owner full permissions to the object
E. Upload the file to the company's S3 bucket
Suggested answer: B, E

Explanation:

This scenario is given in the AWS Documentation

A bucket owner can enable other AWS accounts to upload objects. These objects are owned by the accounts that created them. The bucket owner does not own objects that were not created by the bucket owner. Therefore, for the bucket owner to grant access to these objects, the object owner must first grant permission to the bucket owner using an object ACL. The bucket owner can then delegate those permissions via a bucket policy. In this example, the bucket owner delegates permission to users in its own account.

Options A and D are invalid because the bucket owner does not own the vendor's objects; only the object owner can grant access, and it does so through the object's ACL, not through an IAM role or bucket policy.

Option C is not required since encryption is not part of the requirement. For more information on this scenario, please see the below link:

https://docs.aws.amazon.com/AmazonS3/latest/dev/example-walkthroughs-managing-access-example3.html

The correct answers are: Add a grant to the object's ACL giving full permissions to the bucket owner., Upload the file to the company's S3 bucket
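Both steps can be collapsed into one PutObject request by using the canned ACL bucket-owner-full-control. The helper below just builds the request parameters (the bucket, key, and body values are placeholders); a vendor-side boto3 client would send it with `s3.put_object(**params)`:

```python
def vendor_upload_request(bucket, key, body):
    """PutObject parameters for a cross-account delivery: the canned
    ACL grants the bucket owner full control of the new object, so no
    separate PutObjectAcl call is needed."""
    return {
        "Bucket": bucket,
        "Key": key,
        "Body": body,
        "ACL": "bucket-owner-full-control",
    }

# Illustrative bucket/key names.
params = vendor_upload_request("company-inbox", "reports/q1.csv", b"col1,col2")
```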

An application running on EC2 instances in a VPC must access sensitive data in the data center. The access must be encrypted in transit and have consistent low latency. Which hybrid architecture will meet these requirements? Please select:

A. Expose the data with a public HTTPS endpoint.
B. A VPN between the VPC and the data center over a Direct Connect connection
C. A VPN between the VPC and the data center.
D. A Direct Connect connection between the VPC and data center
Suggested answer: B

Explanation:

Since a connection with consistently low latency is required, you should use Direct Connect. For encryption, you can make use of a VPN over that connection.

Option A is invalid because exposing an HTTPS endpoint will not carry all traffic between the VPC and the data center. Option C is invalid because a VPN over the internet cannot guarantee low latency, which is a key requirement.

Option D is invalid because Direct Connect alone does not encrypt the traffic.

For more information on the connection options please see the below Link:

https://aws.amazon.com/answers/networking/aws-multiple-vpc-vpn-connection-sharing

The correct answer is: A VPN between the VPC and the data center over a Direct Connect connection

A company has several Customer Master Keys (CMK), some of which have imported key material.

Each CMK must be rotated annually.

What two methods can the security team use to rotate each key? Select 2 answers from the options given below. Please select:

A. Enable automatic key rotation for a CMK
B. Import new key material to an existing CMK
C. Use the CLI or console to explicitly rotate an existing CMK
D. Import new key material to a new CMK; point the key alias to the new CMK.
E. Delete an existing CMK and a new default CMK will be created.
Suggested answer: A, D

Explanation:

The AWS Documentation mentions the following

Automatic key rotation is available for all customer managed CMKs with KMS-generated key material. It is not available for CMKs that have imported key material (the value of the Origin field is External), but you can rotate these CMKs manually.

Rotating Keys Manually

You might want to create a new CMK and use it in place of a current CMK instead of enabling automatic key rotation. When the new CMK has different cryptographic material than the current CMK, using the new CMK has the same effect as changing the backing key in an existing CMK. The process of replacing one CMK with another is known as manual key rotation. When you begin using the new CMK, be sure to keep the original CMK enabled so that AWS KMS can decrypt data that the original CMK encrypted. When decrypting data, KMS identifies the CMK that was used to encrypt the data, and it uses the same CMK to decrypt the data. As long as you keep both the original and new CMKs enabled, AWS KMS can decrypt any data that was encrypted by either CMK.

Option B is invalid because importing new key material alone is not enough; you also need to point the key alias to the new key.

Option C is invalid because an existing CMK cannot be explicitly rotated via the CLI or console.

Option E is invalid because deleting existing keys will not guarantee the creation of a new default CMK. For more information on key rotation, please see the below link:

https://docs.aws.amazon.com/kms/latest/developerguide/rotate-keys.html

The correct answers are: Enable automatic key rotation for a CMK, Import new key material to a new CMK; point the key alias to the new CMK.
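For a CMK with imported material, manual rotation boils down to a short sequence of KMS API calls. The sketch below only lays out that sequence as data; the alias name and key ID are placeholders, and actually importing material involves a GetParametersForImport/ImportKeyMaterial exchange that is elided here:

```python
def manual_rotation_plan(alias, new_key_id):
    """Ordered KMS API calls for manually rotating a CMK with imported
    key material: create a new external-origin key, import material into
    it (wrapping details elided), then repoint the alias so callers pick
    up the new key transparently. The old CMK stays enabled so data it
    encrypted can still be decrypted."""
    return [
        ("create_key", {"Origin": "EXTERNAL", "Description": f"rotation of {alias}"}),
        ("import_key_material", {"KeyId": new_key_id}),  # wrapped material elided
        ("update_alias", {"AliasName": alias, "TargetKeyId": new_key_id}),
    ]

plan = manual_rotation_plan("alias/app-data", "new-key-id-placeholder")
```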

A new application will be deployed on EC2 instances in private subnets. The application will transfer sensitive data to and from an S3 bucket. Compliance requirements state that the data must not traverse the public internet. Which solution meets the compliance requirement?

Please select:

A. Access the S3 bucket through a proxy server
B. Access the S3 bucket through a NAT gateway.
C. Access the S3 bucket through a VPC endpoint for S3
D. Access the S3 bucket through the SSL-protected S3 endpoint
Suggested answer: C

Explanation:

The AWS Documentation mentions the following

A VPC endpoint enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by PrivateLink without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Instances in your VPC do not require public IP addresses to communicate with resources in the service. Traffic between your VPC and the other service does not leave the Amazon network.

Option A is invalid because a proxy server still routes the traffic over the public internet.

Options B and D are invalid because with a NAT gateway or the public S3 endpoint the traffic still traverses the internet, and the requirement is that it must not. For more information on VPC endpoints, please see the below link:

https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-endpoints.html

The correct answer is: Access the S3 bucket through a VPC endpoint for S3
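Creating a gateway endpoint for S3 is a single CreateVpcEndpoint call. The helper below builds the request parameters (the VPC ID, route table ID, and region are placeholders); it would be sent with `boto3.client("ec2").create_vpc_endpoint(**params)`:

```python
def s3_gateway_endpoint_request(vpc_id, route_table_ids, region="us-east-1"):
    """CreateVpcEndpoint parameters for an S3 gateway endpoint. The listed
    route tables get an entry sending S3-bound traffic to the endpoint, so
    instances in those subnets reach S3 without touching the internet."""
    return {
        "VpcEndpointType": "Gateway",
        "VpcId": vpc_id,
        "ServiceName": f"com.amazonaws.{region}.s3",
        "RouteTableIds": route_table_ids,
    }

params = s3_gateway_endpoint_request("vpc-placeholder", ["rtb-placeholder"])
```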

Total 590 questions