Amazon SCS-C01 Practice Test - Questions Answers, Page 28

Question list
Search
Search

List of questions

Search

Related questions











You have set up a set of applications across 2 VPCs. You have also set up VPC peering. The applications are still not able to communicate across the peering connection. Which network troubleshooting steps should be taken to resolve the issue?

Please select:

A. Ensure the applications are hosted in a public subnet
B. Check to see if the VPC has an Internet gateway attached.
C. Check to see if the VPC has a NAT gateway attached.
D. Check the Route tables for the VPCs
Suggested answer: D

Explanation:

After the VPC peering connection is established, you need to ensure that the route tables in both VPCs are updated so that traffic can flow between the VPCs. Options A, B and C are invalid because public subnets and an Internet gateway can help with Internet access, but not with VPC peering.

For more information on VPC peering routing, please visit the below URL:

.com/AmazonVPC/latest/Peeri

The correct answer is: Check the Route tables for the VPCs

Submit your Feedback/Queries to our Experts
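The routing fix in the suggested answer can be sketched as the parameters of an EC2 CreateRoute call. This is a minimal illustration, not the exam's own material: the CIDR blocks and the peering-connection ID (`pcx-11112222`) are hypothetical placeholders.

```python
import json

def peering_route(peer_cidr, pcx_id):
    """Build the parameters for an EC2 CreateRoute call that sends
    traffic destined for the peer VPC through the peering connection."""
    return {
        "DestinationCidrBlock": peer_cidr,  # CIDR of the *other* VPC
        "VpcPeeringConnectionId": pcx_id,   # placeholder peering connection ID
    }

# Both sides need a route: VPC A (10.0.0.0/16) routes to VPC B
# (172.31.0.0/16) via the peering connection, and vice versa.
route_in_vpc_a = peering_route("172.31.0.0/16", "pcx-11112222")
route_in_vpc_b = peering_route("10.0.0.0/16", "pcx-11112222")
print(json.dumps(route_in_vpc_a))
```

Note that the route must be added to the route tables on both sides; a one-sided route is a common reason peered instances still cannot communicate.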

A company requires that data stored in AWS be encrypted at rest. Which of the following approaches achieve this requirement? Select 2 answers from the options given below. Please select:

A. When storing data in Amazon EBS, use only EBS-optimized Amazon EC2 instances.
B. When storing data in EBS, encrypt the volume by using AWS KMS.
C. When storing data in Amazon S3, use object versioning and MFA Delete.
D. When storing data in Amazon EC2 Instance Store, encrypt the volume by using KMS.
E. When storing data in S3, enable server-side encryption.
Suggested answer: B, E

Explanation:

The AWS Documentation mentions the following

To create an encrypted Amazon EBS volume, select the appropriate box in the Amazon EBS section of the Amazon EC2 console. You can use a custom customer master key (CMK) by choosing one from the list that appears below the encryption box. If you do not specify a custom CMK, Amazon EBS uses the AWS-managed CMK for Amazon EBS in your account. If there is no AWS-managed CMK for Amazon EBS in your account, Amazon EBS creates one.

Data protection refers to protecting data while in transit (as it travels to and from Amazon S3) and at rest (while it is stored on disks in Amazon S3 data centers). You can protect data in transit by using SSL or by using client-side encryption. You have the following options for protecting data at rest in Amazon S3:

• Use Server-Side Encryption - You request Amazon S3 to encrypt your object before saving it on disks in its data centers and decrypt it when you download the objects.

• Use Client-Side Encryption - You can encrypt data client-side and upload the encrypted data to Amazon S3. In this case, you manage the encryption process, the encryption keys, and related tools.

Option A is invalid because using EBS-optimized Amazon EC2 instances alone will not encrypt data at rest. Option C is invalid because versioning and MFA Delete do not encrypt data at rest for S3 objects. Option D is invalid because the EC2 instance store is ephemeral storage and cannot be encrypted by using KMS. For more information on EBS encryption, please visit the below URL:

https://docs.aws.amazon.com/kms/latest/developerguide/services-ebs.html

For more information on S3 encryption, please visit the below URL:

https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingEncryption.html

The correct answers are: When storing data in EBS, encrypt the volume by using AWS KMS. When storing data in S3, enable server-side encryption.
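The two correct answers can be sketched as configuration documents: a default-encryption rule for an S3 bucket (the payload an S3 PutBucketEncryption call accepts) and the parameters for creating an encrypted EBS volume. The availability zone and volume size are illustrative placeholders.

```python
import json

# S3: default server-side encryption for the bucket, using SSE-KMS
# with the account's AWS-managed key (no KMSMasterKeyID given).
s3_encryption_config = {
    "Rules": [
        {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "aws:kms"}}
    ]
}

# EBS: encryption at rest is a flag on volume creation; with no
# KmsKeyId specified, the AWS-managed CMK for EBS is used.
ebs_volume_params = {
    "AvailabilityZone": "us-east-1a",  # placeholder AZ
    "Size": 100,                       # GiB, placeholder size
    "Encrypted": True,
}
print(json.dumps(s3_encryption_config))
```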

You need to ensure that objects in an S3 bucket are available in another region. This is because of the criticality of the data that is hosted in the S3 bucket. How can you achieve this in the easiest way possible? Please select:

A. Enable cross region replication for the bucket
B. Write a script to copy the objects to another bucket in the destination region
C. Create an S3 snapshot in the destination region
D. Enable versioning which will copy the objects to the destination region
Suggested answer: A

Explanation:

Option B is partially correct, but it is a big maintenance overhead to create and maintain a script when the functionality is already available in S3. Option C is invalid because snapshots are not available in S3. Option D is invalid because versioning will not replicate objects. The AWS Documentation mentions the following: Cross-region replication is a bucket-level configuration that enables automatic, asynchronous copying of objects across buckets in different AWS Regions. For more information on cross region replication in the Simple Storage Service, please visit the below URL:

https://docs.aws.amazon.com/AmazonS3/latest/dev/crr.html

The correct answer is: Enable cross region replication for the bucket
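A cross-region replication setup can be sketched as the replication configuration document attached to the source bucket. Note that versioning must be enabled on both buckets for CRR to work; the role ARN and destination bucket name below are hypothetical placeholders.

```python
# Replication configuration for the source bucket (the payload an S3
# PutBucketReplication call accepts). Both buckets must have
# versioning enabled; ARNs below are placeholders.
replication_config = {
    "Role": "arn:aws:iam::123456789012:role/crr-role",  # placeholder role
    "Rules": [
        {
            "Status": "Enabled",
            "Prefix": "",  # empty prefix: replicate every object
            "Destination": {"Bucket": "arn:aws:s3:::my-dr-bucket"},
        }
    ],
}
```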

You want to ensure that you keep a check on the Active EBS Volumes, Active snapshots and Elastic IP addresses you use so that you don't go beyond the service limit. Which of the below services can help in this regard? Please select:

A. AWS Cloudwatch
B. AWS EC2
C. AWS Trusted Advisor
D. AWS SNS
Suggested answer: C

Explanation:

Trusted Advisor includes a Service Limits check that monitors usage of resources such as active EBS volumes, active EBS snapshots and Elastic IP addresses against the corresponding service limits.

Option A is invalid because even though CloudWatch can monitor resources, it does not check usage against service limits.

Option B is invalid because this is the Elastic Compute Cloud service itself. Option D is invalid because SNS can send notifications but cannot check on service limits. For more information on Trusted Advisor monitoring, please visit the below URL:

https://aws.amazon.com/premiumsupport/ta-faqs/

The correct answer is: AWS Trusted Advisor

A company has a legacy application that outputs all logs to a local text file. Logs from all applications running on AWS must be continually monitored for security related messages. What can be done to allow the company to deploy the legacy application on Amazon EC2 and still meet the monitoring requirement? Please select:

A. Create a Lambda function that mounts the EBS volume with the logs and scans the logs for security incidents. Trigger the function every 5 minutes with a scheduled CloudWatch event.
B. Send the local text log files to CloudWatch Logs and configure a CloudWatch metric filter. Trigger CloudWatch alarms based on the metrics.
C. Install the Amazon Inspector agent on any EC2 instance running the legacy application. Generate CloudWatch alerts based on any Amazon Inspector findings.
D. Export the local text log files to CloudTrail. Create a Lambda function that queries the CloudTrail logs for security incidents using Athena.
Suggested answer: B

Explanation:

One can send the log files to CloudWatch Logs. Log files can also be sent from on-premises servers.

You can then specify metric filters to search the logs for specific values, and create alarms based on these metrics. Option A is invalid because it would be a long, overdrawn process to achieve this requirement. Option C is invalid because Amazon Inspector cannot be used to monitor for security-related messages. Option D is invalid because files cannot be exported to AWS CloudTrail. For more information on the CloudWatch Logs agent, please visit the below URL:

https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/QuickStartEC2Instance.html

The correct answer is: Send the local text log files to CloudWatch Logs and configure a CloudWatch metric filter. Trigger CloudWatch alarms based on the metrics.

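The two pieces of the suggested answer can be sketched as configuration fragments: an agent entry that ships the local log file to a log group, and a metric filter that turns matches into a metric an alarm can watch. The log path, group name, namespace and filter pattern are illustrative assumptions, not values from the question.

```python
# CloudWatch Logs agent entry: ship the legacy app's local text log
# file to a log group (one stream per instance).
agent_log_config = {
    "file_path": "/var/log/legacy-app.log",   # placeholder local log file
    "log_group_name": "legacy-app-logs",
    "log_stream_name": "{instance_id}",
}

# Metric filter: count log lines matching security-related terms so a
# CloudWatch alarm can fire on the resulting metric.
metric_filter = {
    "filterName": "SecurityMessages",
    "filterPattern": '?ERROR ?"SECURITY"',  # match either term (illustrative)
    "metricTransformations": [
        {
            "metricName": "SecurityMessageCount",
            "metricNamespace": "LegacyApp",
            "metricValue": "1",  # each matching line adds 1 to the metric
        }
    ],
}
```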

Every application in a company's portfolio has a separate AWS account for development and production. The security team wants to prevent the root user and all IAM users in the production accounts from accessing a specific set of unneeded services. How can they control this functionality?

Please select:

A. Create a Service Control Policy that denies access to the services. Assemble all production accounts in an organizational unit. Apply the policy to that organizational unit.
B. Create a Service Control Policy that denies access to the services. Apply the policy to the root account.
C. Create an IAM policy that denies access to the services. Associate the policy with an IAM group and enlist all users and the root users in this group.
D. Create an IAM policy that denies access to the services. Create a Config Rule that checks that all users have the policy assigned. Trigger a Lambda function that adds the policy when found missing.
Suggested answer: A

Explanation:

As an administrator of the master account of an organization, you can restrict which AWS services and individual API actions the users and roles in each member account can access. This restriction even overrides the administrators of member accounts in the organization. When AWS Organizations blocks access to a service or API action for a member account, a user or role in that account can't access any prohibited service or API action, even if an administrator of a member account explicitly grants such permissions in an IAM policy. Organization permissions overrule account permissions.

Option B is invalid because service control policies cannot be applied to the root account at the account level. Options C and D are invalid because IAM policies alone cannot restrict the root user, so they would not suffice for the requirement.

For more information, please visit the below URL:

https://docs.aws.amazon.com/IAM/latest/UserGimanage attach-policy.html

The correct answer is: Create a Service Control Policy that denies access to the services. Assemble all production accounts in an organizational unit. Apply the policy to that organizational unit.
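A minimal SCP for the suggested answer might look like the following. The denied services here (DynamoDB and SQS) are hypothetical stand-ins for whatever the "unneeded services" are; attached to the production OU, the policy also constrains each member account's root user, which plain IAM policies cannot do.

```python
# Sketch of a Service Control Policy denying a set of services.
# Service names are placeholders for the company's unneeded services.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyUnneededServices",
            "Effect": "Deny",
            "Action": ["dynamodb:*", "sqs:*"],  # placeholder service actions
            "Resource": "*",
        }
    ],
}
```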

An application running on EC2 instances in a VPC must call an external web service via TLS (port 443). The instances run in public subnets.

Which configurations below allow the application to function and minimize the exposure of the instances? Select 2 answers from the options given below. Please select:

A. A network ACL with a rule that allows outgoing traffic on port 443.
B. A network ACL with rules that allow outgoing traffic on port 443 and incoming traffic on ephemeral ports
C. A network ACL with rules that allow outgoing traffic on port 443 and incoming traffic on port 443.
D. A security group with a rule that allows outgoing traffic on port 443
E. A security group with rules that allow outgoing traffic on port 443 and incoming traffic on ephemeral ports.
F. A security group with rules that allow outgoing traffic on port 443 and incoming traffic on port 443.
Suggested answer: B, D

Explanation:

Since the traffic needs to flow outbound from the instance to a web service on port 443, the outbound rules on both the network ACL and the security group need to allow outbound traffic on port 443. Incoming traffic should be allowed on ephemeral ports so that the operating system on the instance can complete the connection on whatever return port is chosen. Option A is invalid because this rule alone is not enough; you also need to allow incoming traffic on ephemeral ports. Option C is invalid because you need to allow incoming traffic on ephemeral ports, not on port 443. Options E and F are invalid because they open additional inbound ports on the security group which are not required (security groups are stateful, so return traffic is allowed automatically). For more information on VPC Security Groups, please visit the below URL:

https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_SecurityGroups.html

The correct answers are: A network ACL with rules that allow outgoing traffic on port 443 and incoming traffic on ephemeral ports, A security group with a rule that allows outgoing traffic on port 443
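The stateless-versus-stateful distinction above can be sketched as rule sets. The NACL needs both directions spelled out; the security group only needs the outbound rule. Rule numbers and CIDR ranges are illustrative.

```python
# Stateless network ACL: outbound 443 to the web service AND inbound
# ephemeral ports for the return traffic must both be allowed.
nacl_rules = [
    {"RuleNumber": 100, "Egress": True, "Protocol": "6",  # 6 = TCP
     "PortRange": {"From": 443, "To": 443},
     "CidrBlock": "0.0.0.0/0", "RuleAction": "allow"},
    {"RuleNumber": 100, "Egress": False, "Protocol": "6",
     "PortRange": {"From": 1024, "To": 65535},  # ephemeral return ports
     "CidrBlock": "0.0.0.0/0", "RuleAction": "allow"},
]

# Stateful security group: only the outbound rule is needed; replies
# to outbound connections are allowed automatically.
sg_egress = {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
             "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}
```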


A company is deploying a new web application on AWS. Based on their other web applications, they anticipate being the target of frequent DDoS attacks. Which steps can the company use to protect their application? Select 2 answers from the options given below.

Please select:

A. Associate the EC2 instances with a security group that blocks traffic from blacklisted IP addresses.
B. Use an ELB Application Load Balancer and Auto Scaling group to scale to absorb application layer traffic.
C. Use Amazon Inspector on the EC2 instances to examine incoming traffic and discard malicious traffic.
D. Use CloudFront and AWS WAF to prevent malicious traffic from reaching the application
E. Enable GuardDuty to block malicious traffic from reaching the application
Suggested answer: B, D

Explanation:

AWS recommends mitigating DDoS attacks by combining services such as CloudFront, AWS WAF, ELB and Auto Scaling.

Option A is invalid because security groups only support allow rules and cannot block traffic from specific blacklisted IP addresses.

Option C is invalid because Amazon Inspector cannot be used to examine traffic.

Option E is invalid because GuardDuty can detect threats against EC2 instances but does not block DDoS attacks against the entire application. For more information on DDoS mitigation from AWS, please visit the below URL:

https://aws.amazon.com/answers/networking/aws-ddos-attack-mitigation/

The correct answers are: Use an ELB Application Load Balancer and Auto Scaling group to scale to absorb application layer traffic., Use CloudFront and AWS WAF to prevent malicious traffic from reaching the application
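One concrete WAF building block for this architecture is a rate-based rule that blocks source IPs exceeding a request threshold. The rule name and limit below are illustrative assumptions, not part of the question.

```python
# Sketch of a WAF rate-based rule: block any source IP that exceeds
# the request limit within the evaluation window. Values are
# illustrative placeholders.
rate_based_rule = {
    "Name": "throttle-floods",
    "RateKey": "IP",      # aggregate request counts per source IP
    "RateLimit": 2000,    # max requests per 5-minute window per IP
}
```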

You are working in the media industry and you have created a web application where users will be able to upload photos they create to your website. This web application must be able to call the S3 API in order to be able to function. Where should you store your API credentials whilst maintaining the maximum level of security?

Please select:

A. Save the API credentials to your PHP files.
B. Don't save your API credentials, instead create a role in IAM and assign this role to an EC2 instance when you first create it.
C. Save your API credentials in a public Github repository.
D. Pass API credentials to the instance using instance userdata.
Suggested answer: B

Explanation:

Applications must sign their API requests with AWS credentials. Therefore, if you are an application developer, you need a strategy for managing credentials for your applications that run on EC2 instances. For example, you can securely distribute your AWS credentials to the instances, enabling the applications on those instances to use your credentials to sign requests, while protecting your credentials from other users. However, it's challenging to securely distribute credentials to each instance, especially those that AWS creates on your behalf, such as Spot Instances or instances in Auto Scaling groups. You must also be able to update the credentials on each instance when you rotate your AWS credentials.

IAM roles are designed so that your applications can securely make API requests from your instances, without requiring you to manage the security credentials that the applications use. Options A, C and D are invalid because storing or passing raw AWS credentials to an application is not a recommended or secure practice. For more information on IAM Roles, please visit the below URL:

http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html

The correct answer is: Don't save your API credentials. Instead create a role in IAM and assign this role to an EC2 instance when you first create it
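The key piece of the IAM-role approach is the trust policy that lets the EC2 service assume the role; the application then picks up automatically rotated temporary credentials from the instance, with no stored API keys. This is a minimal sketch; the role itself would also need a permissions policy granting the S3 actions the application uses.

```python
# Trust policy for an EC2 instance role: allows the EC2 service to
# assume the role on the instance's behalf.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}
```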

A company has a set of resources defined in AWS. It is mandated that all API calls to the resources be monitored. All API calls must also be stored for lookup purposes, and any log data older than 6 months must be archived. Which of the following meets these requirements? Choose 2 answers from the options given below. Each answer forms part of the solution. Please select:

A. Enable CloudTrail logging in all accounts into S3 buckets
B. Enable CloudTrail logging in all accounts into Amazon Glacier
C. Ensure a lifecycle policy is defined on the S3 bucket to move the data to EBS volumes after 6 months.
D. Ensure a lifecycle policy is defined on the S3 bucket to move the data to Amazon Glacier after 6 months.
Suggested answer: A, D

Explanation:

CloudTrail publishes the trail of API logs to an S3 bucket.

Option B is invalid because CloudTrail cannot deliver logs directly into Glacier.

Option C is invalid because lifecycle policies cannot be used to move data to EBS volumes. For more information on CloudTrail logging, please visit the below URL:

https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-find-log-files.html

You can then use lifecycle policies to transfer data to Amazon Glacier after 6 months. For more information on S3 lifecycle policies, please visit the below URL:

https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html

The correct answers are: Enable CloudTrail logging in all accounts into S3 buckets. Ensure a lifecycle policy is defined on the bucket to move the data to Amazon Glacier after 6 months.

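The archival half of the answer can be sketched as an S3 lifecycle rule transitioning objects to Glacier after roughly 6 months (180 days). The rule ID and the `AWSLogs/` prefix (CloudTrail's default delivery prefix) are assumptions for illustration.

```python
# Lifecycle configuration for the CloudTrail bucket: archive log
# objects to Glacier after ~6 months. Rule ID and prefix are
# illustrative placeholders.
lifecycle_config = {
    "Rules": [
        {
            "ID": "archive-cloudtrail",
            "Status": "Enabled",
            "Filter": {"Prefix": "AWSLogs/"},  # CloudTrail's default prefix
            "Transitions": [{"Days": 180, "StorageClass": "GLACIER"}],
        }
    ],
}
```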

Total 590 questions