Amazon SCS-C01 Practice Test - Questions Answers, Page 40

A company has a requirement to create a DynamoDB table. The company's software architect has provided the following CLI command for the DynamoDB table.

Which of the following has been taken care of from a security perspective in the above command?

Please select:

A. Since the ID is hashed, it ensures security of the underlying table.
B. The above command ensures data encryption at rest for the Customer table
C. The above command ensures data encryption in transit for the Customer table
D. The right throughput has been specified from a security perspective
Suggested answer: B

Explanation:

The above command with the "--sse-specification Enabled=true" parameter ensures that the data for the DynamoDB table is encrypted at rest. Options A, C and D are all invalid because this command is specifically used to ensure data encryption at rest. For more information on DynamoDB encryption, please visit the URL:

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/encryption.tutorial.html

The correct answer is: The above command ensures data encryption at rest for the Customer table
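
The original command is not reproduced in this dump. A minimal sketch of the kind of create-table call the question describes, assuming a Customer table with a hash key named ID and provisioned throughput (both hinted at by options A and D; all values are illustrative):

aws dynamodb create-table \
    --table-name Customer \
    --attribute-definitions AttributeName=ID,AttributeType=S \
    --key-schema AttributeName=ID,KeyType=HASH \
    --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5 \
    --sse-specification Enabled=true    # server-side encryption at rest: the only security control in the command

Only the last flag is a security control; the key type and the throughput values have no bearing on security.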

You need to establish a secure backup and archiving solution for your company, using AWS.

Documents should be immediately accessible for three months and available for five years for compliance reasons. Which AWS service fulfills these requirements in the most cost-effective way?

A. Upload data to S3 and use lifecycle policies to move the data into Glacier for long-term archiving.
B. Upload the data on EBS, use lifecycle policies to move EBS snapshots into S3 and later into Glacier for long-term archiving.
C. Use Direct Connect to upload data to S3 and use IAM policies to move the data into Glacier for long-term archiving.
D. Use Storage Gateway to store data to S3 and use lifecycle policies to move the data into Redshift for long-term archiving.
Suggested answer: A

Explanation:

Amazon Glacier is a secure, durable, and extremely low-cost cloud storage service for data archiving and long-term backup. Customers can reliably store large or small amounts of data for as little as $0.004 per gigabyte per month, a significant saving compared to on-premises solutions.

With Amazon S3 lifecycle policies you can create transition actions in which you define when objects transition to another Amazon S3 storage class. For example, you may choose to transition objects to the STANDARD_IA (IA, for infrequent access) storage class 30 days after creation, or archive objects to the GLACIER storage class one year after creation. Option B is invalid because lifecycle policies are not available for EBS volumes. Option C is invalid because IAM policies cannot be used to move data to Glacier. Option D is invalid because lifecycle policies are not used to move data to Redshift. For more information on S3 lifecycle policies, please visit the URL: http://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html

The correct answer is: Upload data to S3 and use lifecycle policies to move the data into Glacier for long-term archiving.
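
A minimal sketch of such a lifecycle configuration, assuming an illustrative bucket name and rule ID: objects stay in S3 Standard (immediately accessible) for the first 90 days, then transition to Glacier, and expire after five years.

# bucket name, rule ID and expiration window are illustrative
aws s3api put-bucket-lifecycle-configuration \
    --bucket example-archive-bucket \
    --lifecycle-configuration '{
      "Rules": [{
        "ID": "archive-after-90-days",
        "Status": "Enabled",
        "Filter": {"Prefix": ""},
        "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
        "Expiration": {"Days": 1825}
      }]
    }'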


What is the result of the following bucket policy?

A. It will allow all access to the bucket mybucket
B. It will allow the user mark from AWS account number 111111111 all access to the bucket but deny everyone else all access to the bucket
C. It will deny all access to the bucket mybucket
D. None of these
Suggested answer: C

Explanation:

The policy consists of two statements: one allows the user mark access to the bucket, and the other denies all users access to the bucket. The explicit deny overrides the allow, and hence no user will have access to the bucket. Options A, B and D are all invalid because this policy denies all access to the bucket mybucket. For examples of S3 bucket policies, please refer to the below link: http://docs.aws.amazon.com/AmazonS3/latest/dev/example-bucket-policies.html

The correct answer is: It will deny all access to the bucket mybucket
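
The policy itself is not reproduced in this dump. A minimal sketch of the kind of policy the explanation describes, with an illustrative (padded) account ID and the bucket name from the answer options:

aws s3api put-bucket-policy --bucket mybucket --policy '{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {"AWS": "arn:aws:iam::111111111111:user/mark"},
      "Action": "s3:*",
      "Resource": ["arn:aws:s3:::mybucket", "arn:aws:s3:::mybucket/*"]
    },
    {
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": ["arn:aws:s3:::mybucket", "arn:aws:s3:::mybucket/*"]
    }
  ]
}'

Because an explicit Deny always wins in policy evaluation, the second statement blocks even the user mark, which is why the result is that all access is denied.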

A company is planning on using AWS EC2 and AWS CloudFront for their web application. For which one of the below attacks is usage of CloudFront most suited? Please select:

A. Cross-site scripting
B. SQL injection
C. DDoS attacks
D. Malware attacks
Suggested answer: C

Explanation:

AWS's summary of CloudFront's security capabilities shows that CloudFront is most prominent for mitigating DDoS attacks.

Options A, B and D are invalid because CloudFront is specifically used to protect sites against DDoS attacks. For more information on security with CloudFront, please refer to the below link:

https://d1.awsstatic.com/whitepapers/Security/Secure_content_delivery_with_CloudFront_whitepaper.pdf

The correct answer is: DDoS attacks

Your company is planning on using AWS EC2 and ELB for deploying their web applications. The security policy mandates that all traffic should be encrypted. Which of the following options will ensure that this requirement is met? Choose 2 answers from the options below.

Please select:

A. Ensure the load balancer listens on port 80
B. Ensure the load balancer listens on port 443
C. Ensure the HTTPS listener sends requests to the instances on port 443
D. Ensure the HTTPS listener sends requests to the instances on port 80
Suggested answer: B, C

Explanation:

The AWS Documentation mentions the following

You can create a load balancer that listens on both the HTTP (80) and HTTPS (443) ports. If you specify that the HTTPS listener sends requests to the instances on port 80, the load balancer terminates the requests and communication from the load balancer to the instances is not encrypted. If the HTTPS listener sends requests to the instances on port 443, communication from the load balancer to the instances is encrypted. Option A is invalid because there is a need for secure traffic, so port 80 should not be used. Option D is invalid because for the HTTPS listener you need to use port 443. For more information on HTTPS with ELB, please refer to the below link:

https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-create-https-ssl-load-balancer.html

The correct answers are: Ensure the load balancer listens on port 443, Ensure the HTTPS listener sends requests to the instances on port 443
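
A minimal sketch with a classic load balancer, assuming an illustrative name, subnet, security group and certificate ARN: an HTTPS listener on port 443 that also forwards to the instances over HTTPS on port 443 keeps traffic encrypted on both legs.

# all resource names, IDs and the certificate ARN are illustrative
aws elb create-load-balancer \
    --load-balancer-name example-secure-lb \
    --listeners "Protocol=HTTPS,LoadBalancerPort=443,InstanceProtocol=HTTPS,InstancePort=443,SSLCertificateId=arn:aws:acm:us-east-1:123456789012:certificate/example" \
    --subnets subnet-0123456789abcdef0 \
    --security-groups sg-0123456789abcdef0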

Your company is hosting a set of EC2 Instances in AWS. They want to have the ability to detect if any port scans occur on their AWS EC2 Instances. Which of the following can help in this regard? Please select:

A. Use AWS Inspector to continuously inspect the instances for port scans
B. Use AWS Trusted Advisor to notify of any malicious port scans
C. Use AWS Config to notify of any malicious port scans
D. Use AWS Guard Duty to monitor any malicious port scans
Suggested answer: D

Explanation:

The AWS blogs mention the following to support the use of AWS GuardDuty: GuardDuty voraciously consumes multiple data streams, including several threat intelligence feeds, staying aware of malicious addresses, devious domains, and more importantly, learning to accurately identify malicious or unauthorized behavior in your AWS accounts. In combination with information gleaned from your VPC Flow Logs, AWS CloudTrail event logs, and DNS logs, this allows GuardDuty to detect many different types of dangerous and mischievous behavior including probes for known vulnerabilities, port scans and probes, and access from unusual locations. On the AWS side, it looks for suspicious AWS account activity such as unauthorized deployments, unusual CloudTrail activity, patterns of access to AWS API functions, and attempts to exceed multiple service limits. GuardDuty will also look for compromised EC2 instances talking to malicious entities or services, data exfiltration attempts, and instances that are mining cryptocurrency.

Options A, B and C are invalid because these services cannot be used to detect port scans. For more information on Amazon GuardDuty, please refer to the below link:

https://aws.amazon.com/blogs/aws/amazon-guardduty-continuous-security-monitoring-threat-detection/

The correct answer is: Use AWS Guard Duty to monitor any malicious port scans
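
A minimal sketch of enabling GuardDuty from the CLI; the detector ID placeholder is illustrative, and once enabled, port-scan activity surfaces as findings such as Recon:EC2/Portscan.

aws guardduty create-detector --enable                       # turn on GuardDuty in the current region
aws guardduty list-findings --detector-id <detector-id>      # findings include port scans and probes against your instances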

You have an Amazon VPC that has a private subnet and a public subnet in which you have a NAT instance server. You have created a group of EC2 instances that configure themselves at startup by downloading a bootstrapping script from S3 that deploys an application via GIT.

Which one of the following setups would give us the highest level of security?

Choose the correct answer from the options given below.

Please select:

A. EC2 instances in our public subnet, no EIPs, route outgoing traffic via the IGW
B. EC2 instances in our public subnet, assigned EIPs, and route outgoing traffic via the NAT
C. EC2 instance in our private subnet, assigned EIPs, and route our outgoing traffic via our IGW
D. EC2 instances in our private subnet, no EIPs, route outgoing traffic via the NAT
Suggested answer: D

Explanation:

In the NAT instance pattern, EC2 instances are most secure when placed in the private subnet, like a database server, with no EIP attached and all outbound traffic routed via the NAT instance.

Options A and B are invalid because the instances need to be in the private subnet. Option C is invalid because, since the instance needs to be in the private subnet, you should not attach an EIP to the instance. For more information on NAT instances, please refer to the below link: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_NAT_Instance.html

The correct answer is: EC2 instances in our private subnet, no EIPs, route outgoing traffic via the NAT
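
A minimal sketch, assuming illustrative route table and instance IDs: the private subnet's route table sends all Internet-bound traffic to the NAT instance in the public subnet, so the application instances never need public IPs or EIPs.

# route the private subnet's Internet-bound traffic through the NAT instance (IDs are illustrative)
aws ec2 create-route \
    --route-table-id rtb-0abc1234def567890 \
    --destination-cidr-block 0.0.0.0/0 \
    --instance-id i-0abc1234def567890

# a NAT instance must have source/destination checks disabled
aws ec2 modify-instance-attribute --instance-id i-0abc1234def567890 --no-source-dest-check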

In your LAMP application, you have some developers that say they would like access to your logs.

However, since you are using an AWS Auto Scaling group, your instances are constantly being recreated. What would you do to make sure that these developers can access these log files? Choose the correct answer from the options below. Please select:

A. Give only the necessary access to the Apache servers so that the developers can gain access to the log files.
B. Give root access to your Apache servers to the developers.
C. Give read-only access to your developers to the Apache servers.
D. Set up a central logging server that you can use to archive your logs; archive these logs to an S3 bucket for developer-access.
Suggested answer: D

Explanation:

One important security aspect is to never give developers access to the actual servers, hence options A, B and C are wrong from a security perspective. The best option is to have a central logging server that can be used to archive logs; these logs can then be stored in S3. For more information on S3, please refer to the below link: https://aws.amazon.com/documentation/s3/

The correct answer is: Set up a central logging server that you can use to archive your logs; archive these logs to an S3 bucket for developer-access.
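
A minimal sketch, assuming an illustrative log bucket name and IAM group: the central logging host ships the Apache logs to S3, and the developers get a read-only policy scoped to that bucket.

# ship Apache logs from the central logging host to the archive bucket (names are illustrative)
aws s3 sync /var/log/apache2/ s3://example-app-log-archive/$(hostname)/

# grant the developers group read-only access to the log archive
aws iam put-group-policy --group-name developers --policy-name log-archive-read-only --policy-document '{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["s3:GetObject", "s3:ListBucket"],
    "Resource": ["arn:aws:s3:::example-app-log-archive", "arn:aws:s3:::example-app-log-archive/*"]
  }]
}'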

Your company is planning on developing an application in AWS. This is a web based application. The application users will use their Facebook or Google identities for authentication. You want to have the ability to manage user profiles without having to add extra coding to manage this. Which of the below would assist in this?

Please select:

A. Create an OIDC identity provider in AWS
B. Create a SAML provider in AWS
C. Use AWS Cognito to manage the user profiles
D. Use IAM users to manage the user profiles
Suggested answer: A

Explanation:

The AWS Documentation mentions the following

OIDC identity providers are entities in IAM that describe an identity provider (IdP) service that supports the OpenID Connect (OIDC) standard. You use an OIDC identity provider when you want to establish trust between an OIDC-compatible IdP (such as Google, Salesforce, and many others) and your AWS account. This is useful if you are creating a mobile app or web application that requires access to AWS resources, but you don't want to create custom sign-in code or manage your own user identities. Option B is invalid because SAML is used for federated authentication with SAML-based enterprise IdPs. Options C and D are invalid because you need to use the OIDC identity provider in AWS. For more information on OIDC identity providers, please refer to the below link:

https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_create_oidc.html

The correct answer is: Create an OIDC identity provider in AWS
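
A minimal sketch of creating such a provider in IAM, with the Google issuer URL and placeholder client ID and thumbprint values that are purely illustrative:

aws iam create-open-id-connect-provider \
    --url https://accounts.google.com \
    --client-id-list 1234567890-example.apps.googleusercontent.com \
    --thumbprint-list 0000000000000000000000000000000000000000    # server certificate thumbprint, placeholder value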

Your company is planning on developing an application in AWS. This is a web based application. The application users will use their Facebook or Google identities for authentication. You want to have the ability to manage user profiles without having to add extra coding to manage this. Which of the below would assist in this?

Please select:

A. Create an OIDC identity provider in AWS
B. Create a SAML provider in AWS
C. Use AWS Cognito to manage the user profiles
D. Use IAM users to manage the user profiles
Suggested answer: C

Explanation:

The AWS Documentation mentions the following

A user pool is a user directory in Amazon Cognito. With a user pool, your users can sign in to your web or mobile app through Amazon Cognito. Your users can also sign in through social identity providers like Facebook or Amazon, and through SAML identity providers. Whether your users sign in directly or through a third party, all members of the user pool have a directory profile that you can access through an SDK. User pools provide:

Sign-up and sign-in services.

A built-in, customizable web UI to sign in users.

Social sign-in with Facebook, Google, and Login with Amazon, as well as sign-in with SAML identity providers from your user pool.

User directory management and user profiles.

Security features such as multi-factor authentication (MFA), checks for compromised credentials, account takeover protection, and phone and email verification.

Customized workflows and user migration through AWS Lambda triggers.

Options A and B are invalid because these are not used to manage users and their profiles. Option D is invalid because this would be a maintenance overhead. For more information on Cognito user identity pools, please refer to the below link:

https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-identity-pools.html

The correct answer is: Use AWS Cognito to manage the user profiles
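
A minimal sketch of creating a Cognito user pool and an app client from the CLI; the pool and client names are illustrative, and wiring up Facebook and Google as identity providers on the pool is a separate configuration step not shown here.

# create the user pool that will hold the user directory and profiles (names are illustrative)
aws cognito-idp create-user-pool --pool-name example-web-app-users

# create an app client for the web application, federating the social identity providers
aws cognito-idp create-user-pool-client \
    --user-pool-id <user-pool-id-from-previous-call> \
    --client-name example-web-app-client \
    --supported-identity-providers Facebook Google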
