ExamGecko

Amazon SAP-C01 Practice Test - Questions Answers, Page 18


Question 171


DynamoDB uses _____ only as a transport protocol, not as a storage format.

A. WDDX
B. XML
C. SGML
D. JSON
Suggested answer: D

Explanation:

DynamoDB uses JSON only as a transport protocol, not as a storage format. The AWS SDKs use JSON to send data to DynamoDB, and DynamoDB responds with JSON, but DynamoDB does not store data persistently in JSON format.

Reference: http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Programming.LowLevelAPI.html
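The low-level API's JSON wire format can be sketched by building a request payload by hand. This is a minimal illustration only; the table name and item attributes below are hypothetical:

```python
import json

# Low-level DynamoDB requests travel as JSON with explicit type
# descriptors ("S" = string, "N" = number). DynamoDB parses this JSON
# on arrival and stores the data in its own internal format.
def build_put_item_payload(table_name, item):
    return json.dumps({"TableName": table_name, "Item": item})

payload = build_put_item_payload(
    "Music",  # hypothetical table name
    {
        "Artist": {"S": "No One You Know"},
        "SongTitle": {"S": "Call Me Today"},
        "Year": {"N": "2024"},  # numbers travel as JSON strings
    },
)
print(payload)
```

The typed attribute-value maps make clear that JSON here is a serialization for the request, not how the item is persisted.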


Question 172


A company is using AWS CloudFormation to deploy its infrastructure. The company is concerned that, if a production CloudFormation stack is deleted, important data stored in Amazon RDS databases or Amazon EBS volumes might also be deleted.

How can the company prevent users from accidentally deleting data in this way?

A. Modify the CloudFormation templates to add a DeletionPolicy attribute to RDS and EBS resources.
B. Configure a stack policy that disallows the deletion of RDS and EBS resources.
C. Modify IAM policies to deny deleting RDS and EBS resources that are tagged with an “aws:cloudformation:stack-name” tag.
D. Use AWS Config rules to prevent deleting RDS and EBS resources.
Suggested answer: A

Explanation:

With the DeletionPolicy attribute you can preserve, or in some cases back up, a resource when its stack is deleted. You specify a DeletionPolicy attribute for each resource that you want to control. If a resource has no DeletionPolicy attribute, AWS CloudFormation deletes the resource by default. To keep a resource when its stack is deleted, specify Retain for that resource. You can use Retain for any resource. For example, you can retain a nested stack, Amazon S3 bucket, or EC2 instance so that you can continue to use or modify those resources after you delete their stacks.

Reference:

https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-deletionpolicy.html
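As a sketch, a DeletionPolicy is attached per resource in the template. The fragment below is illustrative only; the logical IDs and property values are hypothetical:

```yaml
Resources:
  AppDatabase:
    Type: AWS::RDS::DBInstance
    # Snapshot: create a final DB snapshot before the resource is removed
    DeletionPolicy: Snapshot
    Properties:
      AllocatedStorage: '20'
      DBInstanceClass: db.t3.micro
      Engine: mysql
      MasterUsername: admin
      MasterUserPassword: '{{resolve:ssm-secure:/app/db/password:1}}'
  DataVolume:
    Type: AWS::EC2::Volume
    # Retain: keep the EBS volume when the stack is deleted
    DeletionPolicy: Retain
    Properties:
      AvailabilityZone: us-east-1a
      Size: 100
```

Retain keeps the resource itself; Snapshot (supported for resources such as RDS instances and EBS volumes) keeps a point-in-time copy while still deleting the resource.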


Question 173


A company is running a containerized application in the AWS Cloud. The application runs on Amazon Elastic Container Service (Amazon ECS) on a set of Amazon EC2 instances. The EC2 instances run in an Auto Scaling group. The company uses Amazon Elastic Container Registry (Amazon ECR) to store its container images. When a new image version is uploaded, the new image version receives a unique tag. The company needs a solution that inspects new image versions for common vulnerabilities and exposures. The solution must automatically delete new image tags that have Critical or High severity findings. The solution also must notify the development team when such a deletion occurs.

Which solution meets these requirements?

A. Configure scan on push on the repository. Use Amazon EventBridge (Amazon CloudWatch Events) to invoke an AWS Step Functions state machine when a scan is complete for images that have Critical or High severity findings. Use the Step Functions state machine to delete the image tag for those images and to notify the development team through Amazon Simple Notification Service (Amazon SNS).
B. Configure scan on push on the repository. Configure scan results to be pushed to an Amazon Simple Queue Service (Amazon SQS) queue. Invoke an AWS Lambda function when a new message is added to the SQS queue. Use the Lambda function to delete the image tag for images that have Critical or High severity findings. Notify the development team by using Amazon Simple Email Service (Amazon SES).
C. Schedule an AWS Lambda function to start a manual image scan every hour. Configure Amazon EventBridge (Amazon CloudWatch Events) to invoke another Lambda function when a scan is complete. Use the second Lambda function to delete the image tag for images that have Critical or High severity findings. Notify the development team by using Amazon Simple Notification Service (Amazon SNS).
D. Configure periodic image scan on the repository. Configure scan results to be added to an Amazon Simple Queue Service (Amazon SQS) queue. Invoke an AWS Step Functions state machine when a new message is added to the SQS queue. Use the Step Functions state machine to delete the image tag for images that have Critical or High severity findings. Notify the development team by using Amazon Simple Email Service (Amazon SES).
Suggested answer: A

Explanation:

Configuring scan on push causes Amazon ECR to scan each image as it is uploaded. When a scan completes, ECR emits an event to Amazon EventBridge (Amazon CloudWatch Events), which can invoke a Step Functions state machine to delete the offending image tag and publish a notification through Amazon SNS. ECR does not push scan results directly to an SQS queue, and scheduled or periodic scans would leave vulnerable tags available until the next scan runs.

Reference: https://docs.aws.amazon.com/AmazonECR/latest/userguide/image-scanning.html
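The decision step of that workflow can be sketched as a small pure function. The event shape below follows the documented ECR "scan completed" EventBridge event with its finding-severity-counts map, but treat the exact field names, repository name, and tag as assumptions for illustration:

```python
# Severities that should trigger deletion of the image tag.
BLOCKED_SEVERITIES = {"CRITICAL", "HIGH"}

def should_delete(event):
    """Return True if the scan event reports any Critical or High finding."""
    counts = event.get("detail", {}).get("finding-severity-counts", {})
    return any(counts.get(sev, 0) > 0 for sev in BLOCKED_SEVERITIES)

sample_event = {
    "detail-type": "ECR Image Scan",
    "detail": {
        "repository-name": "player-service",  # hypothetical repository
        "image-tags": ["v1.4.2"],             # hypothetical tag
        "finding-severity-counts": {"CRITICAL": 1, "MEDIUM": 3},
    },
}
print(should_delete(sample_event))  # → True
```

In the actual state machine, a True result would feed a task that calls ECR's BatchDeleteImage for the flagged tag and an SNS publish task for the notification.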


Question 174


You need to develop and run some new applications on AWS, and you know that Elastic Beanstalk and CloudFormation can both help as a deployment mechanism for a broad range of AWS resources. Which of the following statements is TRUE when describing the differences between Elastic Beanstalk and CloudFormation?

A. AWS Elastic Beanstalk introduces two concepts: The template, a JSON or YAML-format, text-based file
B. Elastic Beanstalk supports AWS CloudFormation application environments as one of the AWS resource types.
C. Elastic Beanstalk automates and simplifies the task of repeatedly and predictably creating groups of related resources that power your applications. CloudFormation does not.
D. You can design and script custom resources in CloudFormation
Suggested answer: D

Explanation:

These services are designed to complement each other. AWS Elastic Beanstalk provides an environment to easily deploy and run applications in the cloud. It is integrated with developer tools and provides a one-stop experience for you to manage the lifecycle of your applications.

AWS CloudFormation is a convenient provisioning mechanism for a broad range of AWS resources. It supports the infrastructure needs of many different types of applications, such as existing enterprise applications, legacy applications, applications built using a variety of AWS resources, and container-based solutions (including those built using AWS Elastic Beanstalk).

AWS CloudFormation supports Elastic Beanstalk application environments as one of the AWS resource types. This allows you, for example, to create and manage an AWS Elastic Beanstalk-hosted application along with an RDS database to store the application data. In addition to RDS instances, any other supported AWS resource can be added to the group as well.

Reference: https://aws.amazon.com/cloudformation/faqs
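The "Elastic Beanstalk environment as a CloudFormation resource" relationship can be sketched in a template fragment. This is illustrative only; the logical IDs are made up and the exact solution stack name varies by region and date:

```yaml
Resources:
  WebApp:
    Type: AWS::ElasticBeanstalk::Application
  WebAppEnv:
    Type: AWS::ElasticBeanstalk::Environment
    Properties:
      ApplicationName: !Ref WebApp
      # Hypothetical solution stack; list valid names with
      # `aws elasticbeanstalk list-available-solution-stacks`
      SolutionStackName: '64bit Amazon Linux 2 v3.5.0 running PHP 8.1'
```

Note the direction of the relationship: CloudFormation manages Beanstalk environments as resources, not the other way around, which is why option B is false and option D (custom resources are a CloudFormation feature) is true.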


Question 175


You create an Amazon Elastic File System (EFS) file system and mount targets for the file system in your Virtual Private Cloud (VPC). Identify the initial permissions you can grant to the group root of your file system.

A. write-execute-modify
B. read-execute
C. read-write-modify
D. read-write
Suggested answer: B

Explanation:

In Amazon EFS, when a file system and mount targets are created in your VPC, you can mount the remote file system locally on your Amazon Elastic Compute Cloud (EC2) instance. You can grant permissions to the users of your file system.

The initial permissions mode allowed for Amazon EFS is:
read-write-execute permissions for the owner root
read-execute permissions for the group root
read-execute permissions for others

Reference: http://docs.aws.amazon.com/efs/latest/ug/accessing-fs-nfs-permissions.html
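Those three permission sets together correspond to the familiar Unix mode 755. A small sketch using Python's standard `stat` module makes the mapping explicit:

```python
import stat

# owner root: read-write-execute; group root: read-execute;
# others: read-execute — i.e., the classic 755 directory mode.
mode = (
    stat.S_IRUSR | stat.S_IWUSR | stat.S_IXUSR  # owner: rwx
    | stat.S_IRGRP | stat.S_IXGRP               # group root: r-x
    | stat.S_IROTH | stat.S_IXOTH               # others: r-x
)
print(oct(mode))                           # → 0o755
print(stat.filemode(mode | stat.S_IFDIR))  # → drwxr-xr-x
```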


Question 176


A company hosts a game player-matching service on a public-facing, physical, on-premises instance that all users are able to access over the internet. All traffic to the instance uses UDP. The company wants to migrate the service to AWS and provide a high level of security. A solutions architect needs to design a solution for the player-matching service using AWS. Which combination of steps should the solutions architect take to meet these requirements? (Choose three.)

A. Use a Network Load Balancer (NLB) in front of the player-matching instance. Use a friendly DNS entry in Amazon Route 53 pointing to the NLB’s Elastic IP address.
B. Use an Application Load Balancer (ALB) in front of the player-matching instance. Use a friendly DNS entry in Amazon Route 53 pointing to the ALB’s internet-facing fully qualified domain name (FQDN).
C. Define an AWS WAF rule to explicitly drop non-UDP traffic, and associate the rule with the load balancer.
D. Configure a network ACL rule to block all non-UDP traffic. Associate the network ACL with the subnets that hold the load balancer instances.
E. Use Amazon CloudFront with an Elastic Load Balancer as an origin.
F. Enable AWS Shield Advanced on all public-facing resources.
Suggested answer: A, D, F

Explanation:

An Application Load Balancer operates at layer 7 and supports only HTTP/HTTPS, so it cannot forward UDP traffic; a Network Load Balancer supports UDP listeners and can be assigned Elastic IP addresses for a friendly DNS entry. A network ACL can restrict the subnets to UDP-only traffic, and AWS Shield Advanced adds DDoS protection for the public-facing resources. AWS WAF inspects HTTP requests only, and CloudFront does not serve UDP traffic.

Question 177


An organization, whose AWS account ID is 999988887777, has created 50 IAM users. All the users are added to the same group, ABC. If the organization has enabled console login for each IAM user, which AWS login URL will the IAM users use?

A. https://999988887777.aws.amazon.com/ABC/
B. https://signin.aws.amazon.com/ABC/
C. https://ABC.signin.aws.amazon.com/999988887777/console/
D. https://999988887777.signin.aws.amazon.com/console/
Suggested answer: D

Explanation:

AWS Identity and Access Management is a web service which allows organizations to manage users and user permissions for various AWS services. Once the organization has created the IAM users, they will have a separate AWS console URL to log in to the AWS console. The console login URL for the IAM user will be https://AWS_Account_ID.signin.aws.amazon.com/console/. It uses only the AWS account ID and does not depend on the group or user ID.

Reference: http://docs.aws.amazon.com/IAM/latest/UserGuide/AccountAlias.html
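The URL pattern is simple enough to sketch as a one-line helper; the function name is ours, not an AWS API:

```python
# The IAM console sign-in URL depends only on the AWS account ID
# (or an account alias, if one is configured) — never on group or
# user names.
def signin_url(account_id_or_alias: str) -> str:
    return f"https://{account_id_or_alias}.signin.aws.amazon.com/console/"

print(signin_url("999988887777"))
# → https://999988887777.signin.aws.amazon.com/console/
```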


Question 178


Your company has an on-premises multi-tier PHP web application, which recently experienced downtime due to a large burst in web traffic caused by a company announcement. Over the coming days, you are expecting similar announcements to drive similar unpredictable bursts, and you are looking for ways to quickly improve your infrastructure's ability to handle unexpected increases in traffic. The application currently consists of two tiers: a web tier, which consists of a load balancer and several Linux Apache web servers, and a database tier, which consists of a Linux server hosting a MySQL database. Which scenario below will provide full site functionality, while helping to improve the ability of your application in the short timeframe required?

A. Failover environment: Create an S3 bucket and configure it for website hosting. Migrate your DNS to Route 53 using zone file import, and leverage Route 53 DNS failover to fail over to the S3-hosted website.
B. Hybrid environment: Create an AMI, which can be used to launch web servers in EC2. Create an Auto Scaling group, which uses the AMI to scale the web tier based on incoming traffic. Leverage Elastic Load Balancing to balance traffic between on-premises web servers and those hosted in AWS.
C. Offload traffic from on-premises environment: Set up a CloudFront distribution, and configure CloudFront to cache objects from a custom origin. Choose to customize your object cache behavior, and select a TTL that objects should exist in cache.
D. Migrate to AWS: Use VM Import/Export to quickly convert an on-premises web server to an AMI. Create an Auto Scaling group, which uses the imported AMI to scale the web tier based on incoming traffic. Create an RDS read replica and set up replication between the RDS instance and the on-premises MySQL server to migrate the database.
Suggested answer: C

Explanation:

You can have CloudFront sit in front of your on-prem web environment, via a custom origin (the origin doesn’t have to be in AWS). This would protect against unexpected bursts in traffic by letting CloudFront handle the traffic that it can out of cache, thus hopefully removing some of the load from your on-prem web servers.
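As a rough sketch, the custom-origin distribution from option C might look like the template fragment below. Everything here is illustrative: the logical ID, domain name, and TTL are hypothetical values, not the company's configuration:

```yaml
Resources:
  SiteDistribution:
    Type: AWS::CloudFront::Distribution
    Properties:
      DistributionConfig:
        Enabled: true
        Origins:
          - Id: on-prem-origin            # hypothetical origin id
            DomainName: www.example.com   # the on-premises web tier
            CustomOriginConfig:
              OriginProtocolPolicy: match-viewer
        DefaultCacheBehavior:
          TargetOriginId: on-prem-origin
          ViewerProtocolPolicy: allow-all
          DefaultTTL: 300   # cache objects for 5 minutes by default
          ForwardedValues:
            QueryString: false
```

The TTL is the knob that trades freshness for offload: a longer TTL keeps more requests in CloudFront's cache and further off the on-premises servers.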


Question 179


A company’s data center is connected to the AWS Cloud over a minimally used 10 Gbps AWS Direct Connect connection with a private virtual interface to its virtual private cloud (VPC). The company's internet connection is 200 Mbps, and the company has a 150 TB dataset that is created each Friday. The data must be transferred and available in Amazon S3 on Monday morning. Which is the LEAST expensive way to meet the requirements while allowing for data transfer growth?

A. Order two 80 TB AWS Snowball appliances. Offload the data to the appliances and ship them to AWS. AWS will copy the data from the Snowball appliances to Amazon S3.
B. Create a VPC endpoint for Amazon S3. Copy the data to Amazon S3 by using the VPC endpoint, forcing the transfer to use the Direct Connect connection.
C. Create a VPC endpoint for Amazon S3. Set up a reverse proxy farm behind a Classic Load Balancer in the VPC. Copy the data to Amazon S3 using the proxy.
D. Create a public virtual interface on a Direct Connect connection, and copy the data to Amazon S3 over the connection.
Suggested answer: C

Question 180


You want to define permissions for a role in an IAM policy. Which of the following configuration formats should you use?

A. An XML document written in the IAM Policy Language
B. An XML document written in a language of your choice
C. A JSON document written in the IAM Policy Language
D. A JSON document written in a language of your choice
Suggested answer: C

Explanation:

You define the permissions for a role in an IAM policy. An IAM policy is a JSON document written in the IAM Policy Language.

Reference: http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_terms-and-concepts.html
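A minimal example of such a policy document, built as a Python dict and serialized to JSON (the bucket name is hypothetical):

```python
import json

# A role's permissions policy is a JSON document written in the
# IAM Policy Language: a Version plus a list of Statements, each
# with an Effect, Action(s), and Resource(s).
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-bucket",      # hypothetical bucket
                "arn:aws:s3:::example-bucket/*",
            ],
        }
    ],
}
print(json.dumps(policy, indent=2))
```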

Total 906 questions