
Amazon SCS-C01 Practice Test - Questions Answers, Page 35


Question 341


A company has set up the following structure to ensure that their S3 buckets always have logging enabled:

[Diagram: an AWS Config rule monitors S3 bucket configuration changes and invokes a Lambda function that re-enables logging]

If there is any change to the configuration of an S3 bucket, a Config rule is checked. If logging has been disabled, a Lambda function is invoked, and this Lambda function re-enables logging on the S3 bucket. An issue is now being encountered with this flow: you have verified that the Lambda function is being invoked, but when logging is disabled for the bucket, the Lambda function does not enable it again. Which of the following could be the issue? Please select:

A. The AWS Config rule is not configured properly
B. The AWS Lambda function does not have appropriate permissions for the bucket
C. The AWS Lambda function should use Node.js instead of Python
D. You need to also use API Gateway to invoke the Lambda function
Suggested answer: B

Explanation:

The most probable cause is that the Lambda function has not been granted the appropriate permissions on the S3 bucket to make the required change. Option A is invalid because this is a permissions issue rather than a problem with how the Config rule is configured.

Option C is invalid because changing the runtime language will not address the core issue.

Option D is invalid because you do not need API Gateway to invoke the Lambda function in this flow. For more information on accessing resources from a Lambda function, please refer to the below URL:

https://docs.aws.amazon.com/lambda/latest/dg/accessing-resources.html

The correct answer is: The AWS Lambda function does not have appropriate permissions for the bucket. Submit your Feedback/Queries to our Experts
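
To make the permission requirement concrete, below is a minimal, hypothetical boto3 sketch of such a remediation Lambda function. The way the bucket name is read from the event, the target logging bucket, and the prefix are all assumptions; the key point is that the function's execution role must allow s3:PutBucketLogging on the bucket, otherwise the call below fails with AccessDenied and logging is never re-enabled.

import boto3

s3 = boto3.client("s3")

def lambda_handler(event, context):
    # Hypothetical: the non-compliant bucket name is taken from the event payload.
    bucket = event["detail"]["resourceId"]

    # Re-enable server access logging. This call fails with AccessDenied
    # if the execution role lacks s3:PutBucketLogging on the bucket.
    s3.put_bucket_logging(
        Bucket=bucket,
        BucketLoggingStatus={
            "LoggingEnabled": {
                "TargetBucket": "central-access-logs",  # hypothetical log-delivery bucket
                "TargetPrefix": bucket + "/",
            }
        },
    )
    return {"bucket": bucket, "logging": "re-enabled"}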


Question 342


Your company has a set of EC2 Instances defined in AWS. They need to ensure that all traffic packets are monitored and inspected for any security threats. How can this be achieved? Choose 2 answers from the options given below. Please select:

A. Use a host-based intrusion detection system
B. Use a third-party firewall installed on a central EC2 instance
C. Use VPC Flow Logs
D. Use Network Access Control List logging
Suggested answer: A, B

Explanation:

If you want to inspect the packets themselves, you need to use custom, host-based software. A diagram representation of this is given in the AWS Security Best Practices whitepaper:

[Diagram: packet inspection architecture from the AWS Security Best Practices whitepaper]

Option C is invalid because VPC Flow Logs capture traffic metadata only and cannot conduct packet inspection. Option D is invalid because network ACLs filter on IP addresses, protocols, and ports and likewise cannot inspect packet contents.

For more information, please refer to the AWS Security Best Practices whitepaper.

The correct answers are: Use a host-based intrusion detection system and Use a third-party firewall installed on a central EC2 instance. Submit your Feedback/Queries to our Experts


Question 343


Your company hosts a large number of EC2 Instances in AWS. There are strict security rules governing the EC2 Instances. During a potential security breach, you need to ensure quick investigation of the underlying EC2 Instance. Which of the following services can help you quickly provision a test environment to investigate the breached instance? Please select:

A. AWS CloudWatch
B. AWS CloudFormation
C. AWS CloudTrail
D. AWS Config
Suggested answer: B

Explanation:

The AWS Security Best Practices whitepaper mentions the following:

Unique to AWS, security practitioners can use CloudFormation to quickly create a new, trusted environment in which to conduct deeper investigation. The CloudFormation template can preconfigure instances in an isolated environment that contains all the necessary tools forensic teams need to determine the cause of the incident. This cuts down on the time it takes to gather necessary tools, isolates systems under examination, and ensures that the team is operating in a clean room.

Option A is incorrect since CloudWatch is a monitoring and logging service and cannot be used to provision a test environment. Option C is incorrect since CloudTrail is an API logging service and cannot be used to provision a test environment. Option D is incorrect since AWS Config is a configuration-tracking service and cannot be used to provision a test environment. For more information on AWS security best practices, please refer to the below URL:

https://d1.awsstatic.com/whitepapers/architecture/AWS-Security-Pillar.pdf

The correct answer is: AWS CloudFormation. Submit your Feedback/Queries to our Experts
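
As an illustrative sketch only, an incident-response runbook could launch such a pre-built forensics template programmatically with boto3. The template URL, parameter name, and snapshot ID below are hypothetical placeholders rather than values from any AWS sample.

import boto3

cfn = boto3.client("cloudformation")

# Launch a pre-authored forensics "clean room" stack: for example, an isolated VPC
# with no internet egress plus an analysis instance that mounts a copy of the suspect volume.
response = cfn.create_stack(
    StackName="forensics-cleanroom",
    TemplateURL="https://s3.amazonaws.com/example-bucket/forensics-cleanroom.yaml",  # hypothetical
    Parameters=[
        {"ParameterKey": "SuspectSnapshotId", "ParameterValue": "snap-0123456789abcdef0"},  # hypothetical
    ],
    Capabilities=["CAPABILITY_NAMED_IAM"],
)
print("Provisioning investigation environment:", response["StackId"])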


Question 344


Your company has a set of EBS volumes defined in AWS. The security mandate is that all EBS volumes are encrypted. What can be done to notify the IT admin staff if there are any unencrypted volumes in the account? Please select:

A. Use AWS Inspector to inspect all the EBS volumes
B. Use AWS Config to check for unencrypted EBS volumes
C. Use AWS GuardDuty to check for the unencrypted EBS volumes
D. Use AWS Lambda to check for the unencrypted EBS volumes
Suggested answer: B

Explanation:

The encrypted-volumes managed rule for AWS Config can be used to check for unencrypted volumes. The rule checks whether EBS volumes that are in an attached state are encrypted. If you specify the ID of a KMS key for encryption using the kmsId parameter, the rule checks whether the EBS volumes in an attached state are encrypted with that KMS key.

Options A and C are incorrect since these services cannot be used to check for unencrypted EBS volumes. Option D is incorrect because, even though this is possible, implementing the solution with just the Lambda service would be far more difficult than using the managed Config rule. For more information on AWS Config and encrypted volumes, please refer to the below URL:

https://docs.aws.amazon.com/config/latest/developerguide/encrypted-volumes.html

The correct answer is: Use AWS Config to check for unencrypted EBS volumes. Submit your Feedback/Queries to our Experts
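
For illustration, a sketch of enabling this managed rule with boto3 is shown below; the rule name is arbitrary, and notifying the IT admin staff would additionally require routing the rule's compliance-change events to an SNS topic (not shown here).

import boto3

config = boto3.client("config")

# Enable the AWS-managed ENCRYPTED_VOLUMES rule, which flags attached,
# unencrypted EBS volumes as non-compliant.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "ebs-volumes-must-be-encrypted",  # arbitrary rule name
        "Source": {"Owner": "AWS", "SourceIdentifier": "ENCRYPTED_VOLUMES"},
        "Scope": {"ComplianceResourceTypes": ["AWS::EC2::Volume"]},
    }
)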


Question 345


Your company uses AWS KMS for management of its customer keys. From time to time, there is a requirement to delete existing keys as part of housekeeping activities. What can be done during the deletion process to verify that the key is no longer being used?

Please select:

A. Use CloudTrail to see if any KMS API request has been issued against existing keys
B. Use key policies to see the access level for the keys
C. Rotate the keys once before deletion to see if other services are using the keys
D. Change the IAM policy for the keys to see if other services are using the keys
Suggested answer: A

Explanation:

The AWS documentation mentions the following:

You can use a combination of AWS CloudTrail, Amazon CloudWatch Logs, and Amazon Simple Notification Service (Amazon SNS) to create an alarm that notifies you of AWS KMS API requests that attempt to use a customer master key (CMK) that is pending deletion. If you receive a notification from such an alarm, you might want to cancel deletion of the CMK to give yourself more time to determine whether you want to delete it.

Options B and D are incorrect because neither key policies nor IAM policies can be used to check whether the keys are being used.

Option C is incorrect since rotation will not help you check whether the keys are being used.

For more information on deleting keys, please refer to the below URL:

https://docs.aws.amazon.com/kms/latest/developerguide/deleting-keys-creating-cloudwatch-alarm.html

The correct answer is: Use CloudTrail to see if any KMS API request has been issued against existing keys. Submit your Feedback/Queries to our Experts
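
A hedged boto3 sketch of that CloudTrail/CloudWatch/SNS combination is shown below. The log group name, metric names, and SNS topic ARN are hypothetical, and the filter pattern follows the error-message approach described in the linked KMS documentation.

import boto3

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

# Hypothetical CloudWatch Logs group that CloudTrail already delivers to.
LOG_GROUP = "CloudTrail/DefaultLogGroup"

# Count KMS API calls that fail because the key they reference is pending deletion.
logs.put_metric_filter(
    logGroupName=LOG_GROUP,
    filterName="KMSKeyPendingDeletionUsage",
    filterPattern='{ ($.eventSource = "kms.amazonaws.com") && ($.errorMessage = "* is pending deletion.") }',
    metricTransformations=[{
        "metricName": "KMSKeyPendingDeletionErrorCount",
        "metricNamespace": "CloudTrailMetrics",
        "metricValue": "1",
    }],
)

# Alarm and notify a (hypothetical) SNS topic if the key is used even once.
cloudwatch.put_metric_alarm(
    AlarmName="kms-key-pending-deletion-in-use",
    Namespace="CloudTrailMetrics",
    MetricName="KMSKeyPendingDeletionErrorCount",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:notify-key-admins"],
)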


Question 346


You have a bucket and a VPC defined in AWS. You need to ensure that the bucket can only be accessed by the VPC endpoint. How can you accomplish this? Please select:

A. Modify the security groups for the VPC to allow access to the S3 bucket
B. Modify the route tables to allow access for the VPC endpoint
C. Modify the IAM policy for the bucket to allow access for the VPC endpoint
D. Modify the bucket policy for the bucket to allow access for the VPC endpoint
Suggested answer: D

Explanation:

This is mentioned in the AWS documentation:

Restricting Access to a Specific VPC Endpoint

The following is an example of an S3 bucket policy that restricts access to a specific bucket, examplebucket, only from the VPC endpoint with the ID vpce-1a2b3c4d. The policy denies all access to the bucket if the specified endpoint is not being used. The aws:sourceVpce condition is used to specify the endpoint. The aws:sourceVpce condition does not require an ARN for the VPC endpoint resource, only the VPC endpoint ID. For more information about using conditions in a policy, see Specifying Conditions in a Policy.

[Example bucket policy from the AWS documentation: deny all S3 actions on examplebucket unless the request uses VPC endpoint vpce-1a2b3c4d]
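
Since the screenshot of the policy is not reproduced here, the following is a representative boto3 sketch of such a policy; the bucket name and endpoint ID are the example values from the documentation excerpt above.

import json
import boto3

s3 = boto3.client("s3")

# Deny every S3 action on the bucket unless the request arrives through
# the specified VPC endpoint.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "Access-to-specific-VPCE-only",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
            "arn:aws:s3:::examplebucket",
            "arn:aws:s3:::examplebucket/*",
        ],
        "Condition": {"StringNotEquals": {"aws:sourceVpce": "vpce-1a2b3c4d"}},
    }],
}

s3.put_bucket_policy(Bucket="examplebucket", Policy=json.dumps(policy))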

Options A and B are incorrect because neither security groups nor route tables can be used to restrict access to the bucket via the VPC endpoint; here you specifically need to change the bucket policy. Option C is incorrect because it is the bucket policy that needs to be changed, not the IAM policy. For more information on example bucket policies for VPC endpoints, please refer to the below URL:

https://docs.aws.amazon.com/AmazonS3/latest/dev/example-bucket-policies-vpc-endpoint.html

The correct answer is: Modify the bucket policy for the bucket to allow access for the VPC endpoint. Submit your Feedback/Queries to our Experts


Question 347


In order to encrypt data in transit for a connection to an AWS RDS instance, which of the following would you implement? Please select:

A. Transparent data encryption
B. SSL from your application
C. Data keys from AWS KMS
D. Data keys from CloudHSM
Suggested answer: B

Explanation:

This is mentioned in the AWS documentation:

You can use SSL from your application to encrypt a connection to a DB instance running MySQL, MariaDB, Amazon Aurora, SQL Server, Oracle, or PostgreSQL. Option A is incorrect since transparent data encryption is used for data at rest, not in transit. Options C and D are incorrect since those keys are used for encryption of data at rest. For more information on working with RDS and SSL, please refer to the below URL:

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.SSL.html

The correct answer is: SSL from your application. Submit your Feedback/Queries to our Experts
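
As a sketch only, connecting from a Python application to an RDS MySQL instance over SSL/TLS might look like the following. The endpoint, credentials, and certificate path are placeholders, PyMySQL is just one client-library choice, and the CA file is assumed to be the RDS certificate bundle downloaded from AWS.

import pymysql

# Placeholders: RDS endpoint, credentials, and the downloaded RDS CA bundle path.
connection = pymysql.connect(
    host="mydb.abcdefgh1234.us-east-1.rds.amazonaws.com",
    user="appuser",
    password="example-password",
    database="appdb",
    ssl={"ca": "/opt/certs/rds-ca-bundle.pem"},  # requests an encrypted (TLS) connection
)

with connection.cursor() as cursor:
    # A non-empty Ssl_cipher value confirms the session is actually encrypted.
    cursor.execute("SHOW STATUS LIKE 'Ssl_cipher'")
    print(cursor.fetchone())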


Question 348


Which of the following is the responsibility of the customer? Choose 2 answers from the options given below. Please select:

A. Management of the Edge locations
B. Encryption of data at rest
C. Protection of data in transit
D. Decommissioning of old storage devices
Suggested answer: B, C

Explanation:

Below is a snapshot of the Shared Responsibility Model:

[Diagram: AWS Shared Responsibility Model, in which AWS is responsible for security of the cloud (including edge locations and decommissioning of storage devices) and the customer is responsible for security in the cloud (including encryption of data at rest and protection of data in transit)]

For more information on AWS security best practices, please refer to the below URL:

https://d1.awsstatic.com/whitepapers/Security/AWS_Security_Best_Practices.pdf

The correct answers are: Encryption of data at rest and Protection of data in transit. Submit your Feedback/Queries to our Experts


Question 349


A DevOps team is currently looking at the security aspect of their CI/CD pipeline. They are making use of AWS resources for their infrastructure. They want to ensure that the EC2 Instances don't have any high-severity security vulnerabilities, and they want to ensure a complete DevSecOps process. How can this be achieved?

Please select:

A. Use AWS Config to check the state of the EC2 instance for any sort of security issues
B. Use AWS Inspector APIs in the pipeline for the EC2 Instances
C. Use AWS Trusted Advisor APIs in the pipeline for the EC2 Instances
D. Use AWS Security Groups to ensure no vulnerabilities are present
Suggested answer: B

Explanation:

Amazon Inspector offers a programmatic way to find security defects or misconfigurations in your operating systems and applications. Because you can use API calls to access both the processing of assessments and the results of your assessments, integration of the findings into workflow and notification systems is simple. DevOps teams can integrate Amazon Inspector into their CI/CD pipelines and use it to identify any pre-existing issues or when new issues are introduced.

Options A, C, and D are all incorrect since these services cannot check for security vulnerabilities on the EC2 instances; that can only be done by the AWS Inspector service.

For more information on AWS security best practices, please refer to the below URL:

https://d1.awsstatic.com/whitepapers/Security/AWS_Security_Best_Practices.pdf

The correct answer is: Use AWS Inspector APIs in the pipeline for the EC2 Instances. Submit your Feedback/Queries to our Experts
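
A hedged sketch of how a pipeline stage might call Amazon Inspector (Classic) through boto3 is shown below; the assessment template ARN is a placeholder, and a real pipeline would poll for run completion before evaluating findings.

import boto3

inspector = boto3.client("inspector")

# Start an assessment run against a pre-configured assessment template
# (placeholder ARN) as a stage in the CI/CD pipeline.
run = inspector.start_assessment_run(
    assessmentTemplateArn="arn:aws:inspector:us-east-1:111122223333:target/0-abcdefgh/template/0-ijklmnop",
    assessmentRunName="pipeline-security-scan",
)

# After the run completes (polling omitted for brevity), fail the build
# if any high-severity findings were produced.
findings = inspector.list_findings(
    assessmentRunArns=[run["assessmentRunArn"]],
    filter={"severities": ["High"]},
)
if findings["findingArns"]:
    raise SystemExit("High-severity vulnerabilities found; failing the pipeline stage.")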


Question 350


You want to track access requests for a particular S3 bucket. How can you achieve this in the easiest possible way? Please select:

A. Enable server access logging for the bucket
B. Enable CloudWatch metrics for the bucket
C. Enable CloudWatch Logs for the bucket
D. Enable AWS Config for the S3 bucket
Suggested answer: A

Explanation:

The AWS documentation mentions the following:

To track requests for access to your bucket, you can enable access logging. Each access log record provides details about a single access request, such as the requester, bucket name, request time, request action, response status, and error code, if any.

Options B and C are incorrect; CloudWatch is used for metrics and logging and cannot be used to track individual access requests. Option D is incorrect since AWS Config is used for configuration management, not for tracking S3 bucket requests. For more information on S3 server access logs, please refer to the below URL:

https://docs.aws.amazon.com/AmazonS3/latest/dev/ServerLogs.html

The correct answer is: Enable server access logging for the bucket. Submit your Feedback/Queries to our Experts
