Amazon SAP-C02 Practice Test - Questions Answers, Page 29


A company wants to manage the costs associated with a group of 20 applications that are infrequently used, but are still business-critical, by migrating to AWS. The applications are a mix of Java and Node.js spread across different instance clusters. The company wants to minimize costs while standardizing by using a single deployment methodology.

Most of the applications are part of month-end processing routines with a small number of concurrent users, but they are occasionally run at other times. Average application memory consumption is less than 1 GB, though some applications use as much as 2.5 GB of memory during peak processing. The most important application in the group is a billing report written in Java that accesses multiple data sources and often runs for several hours.

Which is the MOST cost-effective solution?

A.
Deploy a separate AWS Lambda function for each application. Use AWS CloudTrail logs and Amazon CloudWatch alarms to verify completion of critical jobs.
B.
Deploy Amazon ECS containers on Amazon EC2 with Auto Scaling configured for memory utilization of 75%. Deploy an ECS task for each application being migrated with ECS task scaling. Monitor services and hosts by using Amazon CloudWatch.
C.
Deploy AWS Elastic Beanstalk for each application with Auto Scaling to ensure that all requests have sufficient resources. Monitor each AWS Elastic Beanstalk deployment by using CloudWatch alarms.
D.
Deploy a new Amazon EC2 instance cluster that co-hosts all applications by using EC2 Auto Scaling and Application Load Balancers. Scale cluster size based on a custom metric set on instance memory utilization. Purchase 3-year Reserved Instance reservations equal to the GroupMaxSize parameter of the Auto Scaling group.
Suggested answer: B

Explanation:

AWS Lambda functions can run for a maximum of 15 minutes, so the billing report that runs for several hours rules out option A. ECS on EC2 (option B) standardizes deployment across the Java and Node.js applications while scaling on memory:

https://docs.aws.amazon.com/whitepapers/latest/serverless-architectures-lambda/timeout.html
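
For reference, answer B's memory-based scaling can be wired up with Application Auto Scaling. A minimal boto3 sketch, assuming a hypothetical cluster named apps and a service named billing-report:

```python
import boto3

# Application Auto Scaling manages ECS service scaling; the cluster and
# service names below are hypothetical placeholders.
autoscaling = boto3.client("application-autoscaling")

# Register the ECS service's desired count as a scalable target.
autoscaling.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId="service/apps/billing-report",
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=1,
    MaxCapacity=10,
)

# Target tracking on average memory utilization, per the 75% in answer B.
autoscaling.put_scaling_policy(
    PolicyName="memory-75",
    ServiceNamespace="ecs",
    ResourceId="service/apps/billing-report",
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 75.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageMemoryUtilization"
        },
    },
)
```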

During an audit, a security team discovered that a development team was putting IAM user secret access keys in their code and then committing it to an AWS CodeCommit repository. The security team wants to automatically find and remediate instances of this security vulnerability.

Which solution will ensure that the credentials are appropriately secured automatically?

A.
Run a script nightly using AWS Systems Manager Run Command to search for credentials on the development instances. If found, use AWS Secrets Manager to rotate the credentials.
B.
Use a scheduled AWS Lambda function to download and scan the application code from CodeCommit. If credentials are found, generate new credentials and store them in AWS KMS.
C.
Configure Amazon Macie to scan for credentials in CodeCommit repositories. If credentials are found, trigger an AWS Lambda function to disable the credentials and notify the user.
D.
Configure a CodeCommit trigger to invoke an AWS Lambda function to scan new code submissions for credentials. If credentials are found, disable them in AWS IAM and notify the user.
Suggested answer: D

Explanation:

CodeCommit uses Amazon S3 (and DynamoDB) on the back end, but the repositories are not stored in buckets that you can see or point Macie to, which rules out option C. In fact, there are even solutions describing how to copy your repo from CodeCommit into S3 just to back it up: https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-event-driven-backups-from-codecommit-to-amazon-s3-using-codebuild-and-cloudwatch-events.html
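
A minimal sketch of answer D's flow, assuming Python and boto3; the event shape comes from the CodeCommit trigger, and the notification step is omitted:

```python
import re
import boto3

codecommit = boto3.client("codecommit")
iam = boto3.client("iam")

# AWS access key IDs follow a well-known pattern; matching the key ID is the
# reliable signal that a credential pair was committed.
ACCESS_KEY_RE = re.compile(r"(?<![A-Z0-9])(AKIA[0-9A-Z]{16})")

def handler(event, context):
    """Invoked by a CodeCommit trigger on new pushes (answer D)."""
    for record in event["Records"]:
        repo = record["eventSourceARN"].split(":")[5]
        for ref in record["codecommit"]["references"]:
            commit_id = ref["commit"]
            # Diff the pushed commit against its parent to scan only new code.
            commit = codecommit.get_commit(repositoryName=repo, commitId=commit_id)["commit"]
            parents = commit.get("parents", [])
            diff = codecommit.get_differences(
                repositoryName=repo,
                beforeCommitSpecifier=parents[0] if parents else commit_id,
                afterCommitSpecifier=commit_id,
            )
            for d in diff["differences"]:
                after = d.get("afterBlob")
                if not after:
                    continue
                blob = codecommit.get_blob(repositoryName=repo, blobId=after["blobId"])
                text = blob["content"].decode("utf-8", errors="ignore")
                for key_id in ACCESS_KEY_RE.findall(text):
                    # Map the key ID to its IAM user and disable it.
                    user = iam.get_access_key_last_used(AccessKeyId=key_id)["UserName"]
                    iam.update_access_key(UserName=user, AccessKeyId=key_id, Status="Inactive")
                    # Notification (e.g., via SNS) omitted for brevity.
```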

A company has an application that generates reports and stores them in an Amazon S3 bucket. When a user accesses their report, the application generates a signed URL to allow the user to download the report. The company's security team has discovered that the files are public and that anyone can download them without authentication. The company has suspended the generation of new reports until the problem is resolved.

Which set of actions will immediately remediate the security issue without impacting the application's normal workflow?

A.
Create an AWS Lambda function that applies a deny all policy for users who are not authenticated. Create a scheduled event to invoke the Lambda function.
B.
Review the AWS Trusted Advisor bucket permissions check and implement the recommended actions.
C.
Run a script that puts a private ACL on all of the objects in the bucket.
D.
Use the Block Public Access feature in Amazon S3 to set the IgnorePublicAcls option to TRUE on the bucket.
Suggested answer: D

Explanation:

https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-control-block-public-access.html
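
Answer D amounts to a single API call. A sketch with boto3, assuming a placeholder bucket name; IgnorePublicAcls makes S3 ignore the public ACLs without rewriting them, so the application's signed-URL workflow keeps working:

```python
import boto3

s3 = boto3.client("s3")

# Ignoring public ACLs stops them from granting public access without
# modifying the objects, which is why this remediates immediately.
s3.put_public_access_block(
    Bucket="amzn-s3-demo-reports",  # placeholder bucket name
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": False,
        "IgnorePublicAcls": True,   # the setting answer D calls out
        "BlockPublicPolicy": False,
        "RestrictPublicBuckets": False,
    },
)
```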

A large company recently experienced an unexpected increase in Amazon RDS and Amazon DynamoDB costs. The company needs to increase visibility into details of AWS Billing and Cost Management. There are various accounts associated with AWS Organizations, including many development and production accounts. There is no consistent tagging strategy across the organization, but there are guidelines in place that require all infrastructure to be deployed using AWS CloudFormation with consistent tagging. Management requires cost center numbers and project ID numbers for all existing and future DynamoDB tables and RDS instances.

Which strategy should the solutions architect provide to meet these requirements?

A.
Use Tag Editor to tag existing resources. Create cost allocation tags to define the cost center and project ID, and allow 24 hours for tags to propagate to existing resources.
B.
Use an AWS Config rule to alert the finance team of untagged resources. Create a centralized AWS Lambda-based solution to tag untagged RDS databases and DynamoDB resources every hour using a cross-account role.
C.
Use Tag Editor to tag existing resources. Create cost allocation tags to define the cost center and project ID. Use SCPs to restrict the creation of resources that do not have the cost center and project ID on the resource.
D.
Create cost allocation tags to define the cost center and project ID, and allow 24 hours for tags to propagate to existing resources. Update existing federated roles to restrict privileges to provision resources that do not include the cost center and project ID on the resource.
Suggested answer: C

Explanation:

Using Tag Editor to remediate untagged resources is a best practice (page 14 of the AWS Tagging Best Practices whitepaper). However, that is where answer A stops: it doesn't address the requirement that "Management requires cost center numbers and project ID numbers for all existing and future DynamoDB tables and RDS instances." That is where answer C comes in, addressing that requirement with SCPs in the company's AWS organization. AWS Tagging Best Practices -

https://d1.awsstatic.com/whitepapers/aws-tagging-best-practices.pdf
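
A sketch of the SCP that answer C describes, assuming dynamodb:CreateTable and rds:CreateDBInstance honor the aws:RequestTag condition keys and that the tag keys are named CostCenter and ProjectID (both hypothetical):

```python
import json
import boto3

organizations = boto3.client("organizations")

# Deny creation of DynamoDB tables and RDS instances unless both tags are
# supplied on the request; the Null condition matches a missing tag key.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": ["dynamodb:CreateTable", "rds:CreateDBInstance"],
            "Resource": "*",
            "Condition": {
                "Null": {
                    "aws:RequestTag/CostCenter": "true",
                    "aws:RequestTag/ProjectID": "true",
                }
            },
        }
    ],
}

# Creating the policy; attaching it to a root or OU is a separate
# attach_policy call.
organizations.create_policy(
    Content=json.dumps(scp),
    Description="Require cost center and project ID tags",
    Name="require-cost-tags",
    Type="SERVICE_CONTROL_POLICY",
)
```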

A company is serving files to its customers through an SFTP server that is accessible over the internet. The SFTP server is running on a single Amazon EC2 instance with an Elastic IP address attached. Customers connect to the SFTP server through its Elastic IP address and use SSH for authentication. The EC2 instance also has an attached security group that allows access from all customer IP addresses.

A solutions architect must implement a solution to improve availability, minimize the complexity of infrastructure management, and minimize the disruption to customers who access files. The solution must not change the way customers connect.

Which solution will meet these requirements?

A.
Disassociate the Elastic IP address from the EC2 instance. Create an Amazon S3 bucket to be used for SFTP file hosting. Create an AWS Transfer Family server. Configure the Transfer Family server with a publicly accessible endpoint. Associate the SFTP Elastic IP address with the new endpoint. Point the Transfer Family server to the S3 bucket. Sync all files from the SFTP server to the S3 bucket.
B.
Disassociate the Elastic IP address from the EC2 instance. Create an Amazon S3 bucket to be used for SFTP file hosting. Create an AWS Transfer Family server. Configure the Transfer Family server with a VPC-hosted, internet-facing endpoint. Associate the SFTP Elastic IP address with the new endpoint. Attach the security group with customer IP addresses to the new endpoint. Point the Transfer Family server to the S3 bucket. Sync all files from the SFTP server to the S3 bucket.
C.
Disassociate the Elastic IP address from the EC2 instance. Create a new Amazon Elastic File System (Amazon EFS) file system to be used for SFTP file hosting. Create an AWS Fargate task definition to run an SFTP server. Specify the EFS file system as a mount in the task definition. Create a Fargate service by using the task definition, and place a Network Load Balancer (NLB) in front of the service. When configuring the service, attach the security group with customer IP addresses to the tasks that run the SFTP server. Associate the Elastic IP address with the NLB. Sync all files from the SFTP server to the S3 bucket.
D.
Disassociate the Elastic IP address from the EC2 instance. Create a multi-attach Amazon Elastic Block Store (Amazon EBS) volume to be used for SFTP file hosting. Create a Network Load Balancer (NLB) with the Elastic IP address attached. Create an Auto Scaling group with EC2 instances that run an SFTP server. Define in the Auto Scaling group that instances that are launched should attach the new multi-attach EBS volume. Configure the Auto Scaling group to automatically add instances behind the NLB. Configure the Auto Scaling group to use the security group that allows customer IP addresses for the EC2 instances that the Auto Scaling group launches. Sync all files from the SFTP server to the new multi-attach EBS volume.
Suggested answer: B

Explanation:

https://aws.amazon.com/premiumsupport/knowledge-center/aws-sftp-endpoint-type/
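
A sketch of answer B's Transfer Family server with boto3; all resource IDs are placeholders, and user setup is omitted:

```python
import boto3

transfer = boto3.client("transfer")

# A VPC-hosted, internet-facing endpoint: attaching the existing Elastic IP
# via AddressAllocationIds is what keeps the customer-facing address stable.
transfer.create_server(
    Protocols=["SFTP"],
    IdentityProviderType="SERVICE_MANAGED",
    EndpointType="VPC",
    EndpointDetails={
        "VpcId": "vpc-0123456789abcdef0",
        "SubnetIds": ["subnet-0123456789abcdef0"],
        "AddressAllocationIds": ["eipalloc-0123456789abcdef0"],  # the SFTP Elastic IP
        "SecurityGroupIds": ["sg-0123456789abcdef0"],  # existing customer-IP rules
    },
)
```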

A solutions architect is creating an application that stores objects in an Amazon S3 bucket. The solutions architect must deploy the application in two AWS Regions that will be used simultaneously. The objects in the two S3 buckets must remain synchronized with each other.

Which combination of steps will meet these requirements with the LEAST operational overhead?

(Select THREE.)

A.
Create an S3 Multi-Region Access Point. Change the application to refer to the Multi-Region Access Point.
B.
Configure two-way S3 Cross-Region Replication (CRR) between the two S3 buckets.
C.
Modify the application to store objects in each S3 bucket.
D.
Create an S3 Lifecycle rule for each S3 bucket to copy objects from one S3 bucket to the other S3 bucket.
E.
Enable S3 Versioning for each S3 bucket.
F.
Configure an event notification for each S3 bucket to invoke an AWS Lambda function to copy objects from one S3 bucket to the other S3 bucket.
Suggested answer: A, B, E

Explanation:

https://docs.aws.amazon.com/AmazonS3/latest/userguide/MultiRegionAccessPointRequestRouting.html

https://stackoverflow.com/questions/60947157/aws-s3-replication-without-versioning#:~:text=The%20automated%20Same%20Region%20Replication,is%20replicated%20between%20S3%20buckets.
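
A sketch of answers E and B with boto3 (bucket names and the replication role ARN are placeholders; answer A's Multi-Region Access Point would be created separately, for example with the S3 console or the s3control API):

```python
import boto3

s3 = boto3.client("s3")

buckets = ["app-bucket-us-east-1", "app-bucket-eu-west-1"]  # placeholder names

# CRR requires S3 Versioning on both source and destination (answer E).
for bucket in buckets:
    s3.put_bucket_versioning(
        Bucket=bucket,
        VersioningConfiguration={"Status": "Enabled"},
    )

# One direction of the two-way replication (answer B); run again with the
# buckets swapped for the reverse direction.
s3.put_bucket_replication(
    Bucket=buckets[0],
    ReplicationConfiguration={
        "Role": "arn:aws:iam::111122223333:role/s3-replication",
        "Rules": [
            {
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {},  # replicate the whole bucket
                "DeleteMarkerReplication": {"Status": "Enabled"},
                "Destination": {"Bucket": f"arn:aws:s3:::{buckets[1]}"},
            }
        ],
    },
)
```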

A solutions architect is designing an application to accept timesheet entries from employees on their mobile devices. Timesheets will be submitted weekly, with most of the submissions occurring on Friday. The data must be stored in a format that allows payroll administrators to run monthly reports. The infrastructure must be highly available and scale to match the rate of incoming data and reporting requests.

Which combination of steps meets these requirements while minimizing operational overhead?

(Select TWO.)

A.
Deploy the application to Amazon EC2 On-Demand Instances with load balancing across multiple Availability Zones. Use scheduled Amazon EC2 Auto Scaling to add capacity before the high volume of submissions on Fridays.
B.
Deploy the application in a container using Amazon Elastic Container Service (Amazon ECS) with load balancing across multiple Availability Zones. Use scheduled Service Auto Scaling to add capacity before the high volume of submissions on Fridays.
C.
Deploy the application front end to an Amazon S3 bucket served by Amazon CloudFront. Deploy the application backend using Amazon API Gateway with an AWS Lambda proxy integration.
D.
Store the timesheet submission data in Amazon Redshift. Use Amazon QuickSight to generate the reports using Amazon Redshift as the data source.
E.
Store the timesheet submission data in Amazon S3. Use Amazon Athena and Amazon QuickSight to generate the reports using Amazon S3 as the data source.
Suggested answer: C, E

Explanation:

https://aws.amazon.com/blogs/architecture/create-dynamic-contact-forms-for-s3-static-websites-using-aws-lambda-amazon-api-gateway-and-amazon-ses/
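
For the reporting half (answer E), a sketch of the kind of Athena query the payroll administrators would run, assuming a hypothetical timesheets database and entries table over the S3 data:

```python
import boto3

athena = boto3.client("athena")

# Monthly payroll rollup over timesheet objects in S3. The database, table,
# and output location are placeholders; QuickSight would normally issue this
# through its Athena data source instead.
athena.start_query_execution(
    QueryString="""
        SELECT employee_id, SUM(hours) AS total_hours
        FROM timesheets.entries
        WHERE month = '2023-04'
        GROUP BY employee_id
    """,
    QueryExecutionContext={"Database": "timesheets"},
    ResultConfiguration={"OutputLocation": "s3://amzn-s3-demo-athena-results/"},
)
```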

A company wants to send data from its on-premises systems to Amazon S3 buckets. The company created the S3 buckets in three different accounts. The company must send the data privately, without the data traveling across the internet. The company has no existing dedicated connectivity to AWS.

Which combination of steps should a solutions architect take to meet these requirements? (Select TWO.)

A.
Establish a networking account in the AWS Cloud. Create a private VPC in the networking account. Set up an AWS Direct Connect connection with a private VIF between the on-premises environment and the private VPC.
B.
Establish a networking account in the AWS Cloud. Create a private VPC in the networking account. Set up an AWS Direct Connect connection with a public VIF between the on-premises environment and the private VPC.
C.
Create an Amazon S3 interface endpoint in the networking account.
D.
Create an Amazon S3 gateway endpoint in the networking account.
E.
Establish a networking account in the AWS Cloud. Create a private VPC in the networking account. Peer VPCs from the accounts that host the S3 buckets with the VPC in the networking account.
Suggested answer: A, C

Explanation:

https://docs.aws.amazon.com/AmazonS3/latest/userguide/privatelink-interface-endpoints.html#types-of-vpc-endpoints-for-s3

https://aws.amazon.com/premiumsupport/knowledge-center/s3-bucket-access-direct-connect/

Use a private IP address over Direct Connect (with an interface VPC endpoint)

To access Amazon S3 using a private IP address over Direct Connect, perform the following steps:

...

3. Create a private virtual interface for your connection.

...

5. Create an interface VPC endpoint for Amazon S3 in a VPC that is associated with the virtual private gateway. The VGW must connect to a Direct Connect private virtual interface. This interface VPC endpoint resolves to a private IP address even if you enable a VPC endpoint for S3.
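
Step 5 as a boto3 sketch; all IDs are placeholders, and the Region in the service name must match the S3 buckets:

```python
import boto3

ec2 = boto3.client("ec2")

# An interface VPC endpoint for S3 in the VPC that the Direct Connect
# private VIF reaches, giving on-premises hosts a private IP for S3.
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    SubnetIds=["subnet-0123456789abcdef0"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
)
```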

A company operates quick-service restaurants. The restaurants follow a predictable model with high sales traffic for 4 hours daily. Sales traffic is lower outside of those peak hours.

The point of sale and management platform is deployed in the AWS Cloud and has a backend that is based on Amazon DynamoDB. The database table uses provisioned throughput mode with 100,000 RCUs and 80,000 WCUs to match known peak resource consumption.

The company wants to reduce its DynamoDB cost and minimize the operational overhead for the IT staff.

Which solution meets these requirements MOST cost-effectively?

A.
Reduce the provisioned RCUs and WCUs.
B.
Change the DynamoDB table to use on-demand capacity.
C.
Enable DynamoDB auto scaling for the table.
D.
Purchase 1-year reserved capacity that is sufficient to cover the peak load for 4 hours each day.
Suggested answer: C

Explanation:

https://aws.amazon.com/blogs/database/amazon-dynamodb-auto-scaling-performance-and-cost-optimization-at-any-scale/

"As you can see, there are compelling reasons to use DynamoDB auto scaling with actively changing traffic. Auto scaling responds quickly and simplifies capacity management, which lowers costs by scaling your table's provisioned capacity and reducing operational overhead."

A company manages hundreds of AWS accounts centrally in an organization in AWS Organizations.

The company recently started to allow product teams to create and manage their own S3 access points in their accounts. The S3 access points can be accessed only within VPCs, not on the internet.

What is the MOST operationally efficient way to enforce this requirement?

A.
Set the S3 access point resource policy to deny the s3:CreateAccessPoint action unless the s3:AccessPointNetworkOrigin condition key evaluates to VPC.
B.
Create an SCP at the root level in the organization to deny the s3:CreateAccessPoint action unless the s3:AccessPointNetworkOrigin condition key evaluates to VPC.
C.
Use AWS CloudFormation StackSets to create a new IAM policy in each AWS account that allows the s3:CreateAccessPoint action only if the s3:AccessPointNetworkOrigin condition key evaluates to VPC.
D.
Set the S3 bucket policy to deny the s3:CreateAccessPoint action unless the s3:AccessPointNetworkOrigin condition key evaluates to VPC.
Suggested answer: B

Explanation:

https://aws.amazon.com/s3/features/access-points/

https://aws.amazon.com/blogs/storage/managing-amazon-s3-access-with-vpc-endpoints-and-s3-access-points/
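
A sketch of answer B's SCP, created with boto3 (attaching it to the organization root with attach_policy is a separate call):

```python
import json
import boto3

organizations = boto3.client("organizations")

# Deny creating S3 access points whose network origin is anything other
# than VPC, which enforces VPC-only access points organization-wide.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": "s3:CreateAccessPoint",
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {"s3:AccessPointNetworkOrigin": "VPC"}
            },
        }
    ],
}

organizations.create_policy(
    Content=json.dumps(scp),
    Description="S3 access points must be VPC-only",
    Name="s3-access-points-vpc-only",
    Type="SERVICE_CONTROL_POLICY",
)
```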
