
Amazon SAP-C02 Practice Test - Questions Answers, Page 30


A company is collecting a large amount of data from a fleet of IoT devices. Data is stored as Optimized Row Columnar (ORC) files in the Hadoop Distributed File System (HDFS) on a persistent Amazon EMR cluster. The company's data analytics team queries the data by using SQL in Apache Presto deployed on the same EMR cluster. Queries scan large amounts of data, always run for less than 15 minutes, and run only between 5 PM and 10 PM.

The company is concerned about the high cost associated with the current solution. A solutions architect must propose the most cost-effective solution that will allow SQL data queries.

Which solution will meet these requirements?

A.
Store data in Amazon S3. Use Amazon Redshift Spectrum to query the data.
B.
Store data in Amazon S3. Use the AWS Glue Data Catalog and Amazon Athena to query the data.
C.
Store data in the EMR File System (EMRFS). Use Presto in Amazon EMR to query the data.
D.
Store data in Amazon Redshift. Use Amazon Redshift to query the data.
Suggested answer: B

Explanation:

Amazon Athena is serverless, and charges apply only to the data scanned by each query, so it fits a workload that runs for only a few hours each evening. Redshift Spectrum requires a running Amazon Redshift cluster, and a persistent EMR cluster accrues cost even when idle. Athena can query the ORC files directly once they are stored in Amazon S3 and cataloged in the AWS Glue Data Catalog. (https://stackoverflow.com/questions/50250114/athena-vs-redshift-spectrum)
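As a minimal sketch of the suggested answer, the existing ORC files can be registered as an external Athena table over their S3 location. The database, table, column, and bucket names below are hypothetical placeholders; the DDL shape (`STORED AS ORC` plus an S3 `LOCATION`) is standard Athena/Hive syntax.

```python
# Sketch: registering existing ORC files in S3 so Athena can query them with SQL.
# All identifiers (database, table, columns, bucket) are hypothetical placeholders.
def build_orc_table_ddl(database: str, table: str, s3_location: str) -> str:
    """Return Athena/Hive DDL for an external table over ORC files in S3."""
    return (
        f"CREATE EXTERNAL TABLE IF NOT EXISTS {database}.{table} (\n"
        "  device_id string,\n"
        "  reading double,\n"
        "  recorded_at timestamp\n"
        ")\n"
        "STORED AS ORC\n"
        f"LOCATION '{s3_location}'"
    )

ddl = build_orc_table_ddl("iot_analytics", "sensor_readings",
                          "s3://example-bucket/orc-data/")
print(ddl)
```

Once the table exists in the Glue Data Catalog, the analytics team pays only for the bytes each evening query scans, with no cluster to keep running.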

An environmental company is deploying sensors in major cities throughout a country to measure air quality. The sensors connect to AWS IoT Core to ingest time series data readings. The company stores the data in Amazon DynamoDB. For business continuity, the company must have the ability to ingest and store data in two AWS Regions.

Which solution will meet these requirements?

A.
Create an Amazon Route 53 alias failover routing policy with values for AWS IoT Core data endpoints in both Regions. Migrate the data to Amazon Aurora global tables.
B.
Create a domain configuration for AWS IoT Core in each Region. Create an Amazon Route 53 latency-based routing policy. Use AWS IoT Core data endpoints in both Regions as values. Migrate the data to Amazon MemoryDB for Redis and configure cross-Region replication.
C.
Create a domain configuration for AWS IoT Core in each Region. Create an Amazon Route 53 health check that evaluates domain configuration health. Create a failover routing policy with values for the domain name from the AWS IoT Core domain configurations. Update the DynamoDB table to a global table.
D.
Create an Amazon Route 53 latency-based routing policy. Use AWS IoT Core data endpoints in both Regions as values. Configure DynamoDB streams and cross-Region data replication.
Suggested answer: C

Explanation:

https://aws.amazon.com/solutions/implementations/disaster-recovery-for-aws-iot/

A company is migrating an application to AWS. It wants to use fully managed services as much as possible during the migration. The company needs to store large, important documents within the application with the following requirements:

1. The data must be highly durable and available.
2. The data must always be encrypted at rest and in transit.
3. The encryption key must be managed by the company and rotated periodically.

Which of the following solutions should the solutions architect recommend?

A.
Deploy the storage gateway to AWS in file gateway mode. Use Amazon EBS volume encryption with an AWS KMS key to encrypt the storage gateway volumes.
B.
Use Amazon S3 with a bucket policy to enforce HTTPS for connections to the bucket and to enforce server-side encryption and AWS KMS for object encryption.
C.
Use Amazon DynamoDB with SSL to connect to DynamoDB. Use an AWS KMS key to encrypt DynamoDB objects at rest.
D.
Deploy instances with Amazon EBS volumes attached to store this data. Use EBS volume encryption with an AWS KMS key to encrypt the data.
Suggested answer: B
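A minimal sketch of the bucket policy from option B: one statement denies any request that is not made over TLS (using the real `aws:SecureTransport` condition key), and another denies uploads that do not request SSE-KMS (using the real `s3:x-amz-server-side-encryption` condition key). The bucket name is a placeholder.

```python
import json

# Sketch of a bucket policy matching option B: deny non-HTTPS access and deny
# uploads that are not encrypted with SSE-KMS. Bucket name is a placeholder.
def build_secure_bucket_policy(bucket: str) -> dict:
    arn = f"arn:aws:s3:::{bucket}"
    return {
        "Version": "2012-10-17",
        "Statement": [
            {   # Reject any request that does not use TLS (encryption in transit).
                "Sid": "DenyInsecureTransport",
                "Effect": "Deny",
                "Principal": "*",
                "Action": "s3:*",
                "Resource": [arn, f"{arn}/*"],
                "Condition": {"Bool": {"aws:SecureTransport": "false"}},
            },
            {   # Reject PUTs that do not request SSE-KMS (encryption at rest).
                "Sid": "DenyUnencryptedUploads",
                "Effect": "Deny",
                "Principal": "*",
                "Action": "s3:PutObject",
                "Resource": f"{arn}/*",
                "Condition": {
                    "StringNotEquals": {
                        "s3:x-amz-server-side-encryption": "aws:kms"
                    }
                },
            },
        ],
    }

print(json.dumps(build_secure_bucket_policy("example-docs-bucket"), indent=2))
```

Pairing this policy with a customer managed KMS key satisfies the requirement that the company manages and periodically rotates the encryption key.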

A solutions architect must update an application environment within AWS Elastic Beanstalk using a blue/green deployment methodology. The solutions architect creates an environment that is identical to the existing application environment and deploys the application to the new environment.

What should be done next to complete the update?

A.
Redirect to the new environment using Amazon Route 53
B.
Select the Swap Environment URLs option
C.
Replace the Auto Scaling launch configuration
D.
Update the DNS records to point to the green environment
Suggested answer: B

Explanation:

https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.CNAMESwap.html

A company has a complex web application that leverages Amazon CloudFront for global scalability and performance. Over time, users report that the web application is slowing down. The company's operations team reports that the CloudFront cache hit ratio has been dropping steadily. The cache metrics report indicates that query strings on some URLs are inconsistently ordered and are specified sometimes in mixed-case letters and sometimes in lowercase letters.

Which set of actions should the solutions architect take to increase the cache hit ratio as quickly as possible?

A.
Deploy a Lambda@Edge function to sort parameters by name and force them to be lowercase. Select the CloudFront viewer request trigger to invoke the function.
B.
Update the CloudFront distribution to disable caching based on query string parameters.
C.
Deploy a reverse proxy after the load balancer to post-process the emitted URLs in the application to force the URL strings to be lowercase.
D.
Update the CloudFront distribution to specify case-insensitive query string processing.
Suggested answer: A

Explanation:

Amazon CloudFront considers the case of parameter names and values when caching based on query string parameters, so inconsistent query strings cause CloudFront to treat mixed-case or misordered requests as distinct cache entries and forward them to the origin. Triggering a Lambda@Edge function on the viewer request event to sort parameters by name and force them to be lowercase is the best choice.

https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/QueryStringParameters.html#query-string-parameters-optimizing-caching

https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/lambda-cloudfront-trigger-events.html

https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/lambda-examples.html#lambda-examples-normalize-query-string-parameters
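The normalization logic itself is simple; the sketch below illustrates it in plain Python (a real Lambda@Edge viewer-request handler would apply the same transformation to the `querystring` field of the CloudFront request event). Lowercasing and sorting collapses every equivalent URL variant onto a single cache key.

```python
from urllib.parse import parse_qsl, urlencode

# Illustration of the normalization a Lambda@Edge viewer-request function
# performs: lowercase parameter names and values, then sort by name, so that
# equivalent URLs map to a single CloudFront cache entry.
def normalize_query_string(qs: str) -> str:
    pairs = [(k.lower(), v.lower())
             for k, v in parse_qsl(qs, keep_blank_values=True)]
    pairs.sort(key=lambda kv: kv[0])
    return urlencode(pairs)

# Both variants now produce the same cache key component:
print(normalize_query_string("Size=L&color=RED"))   # -> color=red&size=l
print(normalize_query_string("color=red&size=l"))   # -> color=red&size=l
```

Because the function runs on the viewer request (before the cache lookup), the cache hit ratio improves immediately without any application changes.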

A solutions architect has implemented a SAML 2.0 federated identity solution with their company's on-premises identity provider (IdP) to authenticate users' access to the AWS environment. When the solutions architect tests authentication through the federated identity web portal, access to the AWS environment is granted. However, when test users attempt to authenticate through the federated identity web portal, they are not able to access the AWS environment.

Which items should the solutions architect check to ensure identity federation is properly configured? (Select THREE)

A.
The IAM user's permissions policy has allowed the use of SAML federation for that user.
B.
The IAM roles created for the federated users' or federated groups' trust policy have set the SAML provider as the principal.
C.
Test users are not in the AWSFederatedUsers group in the company's IdP.
D.
The web portal calls the AWS STS AssumeRoleWithSAML API with the ARN of the SAML provider, the ARN of the IAM role, and the SAML assertion from the IdP.
E.
The on-premises IdP's DNS hostname is reachable from the AWS environment VPCs.
F.
The company's IdP defines SAML assertions that properly map users or groups in the company to IAM roles with appropriate permissions.
Suggested answer: B, D, F
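Item D names the three parameters the web portal must supply. As a sketch, the request shape below shows them together; the ARNs and assertion are hypothetical placeholders, and with boto3 the same keyword arguments would be passed to `sts_client.assume_role_with_saml()`.

```python
# Sketch of the AssumeRoleWithSAML request the web portal must make (item D).
# The ARNs and the assertion are hypothetical placeholders.
def build_assume_role_with_saml_request(role_arn: str,
                                        provider_arn: str,
                                        saml_assertion_b64: str) -> dict:
    return {
        "RoleArn": role_arn,                  # IAM role the federated user assumes
        "PrincipalArn": provider_arn,         # SAML provider registered in IAM
        "SAMLAssertion": saml_assertion_b64,  # base64-encoded assertion from the IdP
    }

request = build_assume_role_with_saml_request(
    "arn:aws:iam::111122223333:role/FederatedAnalysts",
    "arn:aws:iam::111122223333:saml-provider/CorpIdP",
    "PHNhbWxwOlJlc3BvbnNlPi4uLg==",
)
print(sorted(request))
```

If item B is misconfigured (the role's trust policy does not name the SAML provider as principal), this call fails even with a valid assertion, which matches the observed symptom for the test users.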

A company has automated the nightly retraining of its machine learning models by using AWS Step Functions. The workflow consists of multiple steps that use AWS Lambda. Each step can fail for various reasons, and any failure causes a failure of the overall workflow.

A review reveals that the retraining has failed multiple nights in a row without the company noticing the failure. A solutions architect needs to improve the workflow so that notifications are sent for all types of failures in the retraining process.

Which combination of steps should the solutions architect take to meet these requirements? (Select THREE)

A.
Create an Amazon Simple Notification Service (Amazon SNS) topic with a subscription of type "Email" that targets the team's mailing list.
B.
Create a task named "Email" that forwards the input arguments to the SNS topic.
C.
Add a Catch field to all Task, Map, and Parallel states that have a statement of "ErrorEquals": [ "States.ALL" ] and "Next": "Email".
D.
Add a new email address to Amazon Simple Email Service (Amazon SES). Verify the email address.
E.
Create a task named "Email" that forwards the input arguments to the SES email address.
F.
Add a Catch field to all Task, Map, and Parallel states that have a statement of "ErrorEquals": [ "States.Runtime" ] and "Next": "Email".
Suggested answer: A, B, C

Explanation:

A. Create an Amazon Simple Notification Service (Amazon SNS) topic with a subscription of type "Email" that targets the team's mailing list. This creates a topic for sending notifications and subscribes the team's mailing list to it. C. Add a Catch field to all Task, Map, and Parallel states with "ErrorEquals": [ "States.ALL" ] and "Next": "Email". This ensures that any error in any step of the workflow triggers the "Email" task. (Option F's "States.Runtime" matcher would catch only runtime errors, not all failure types.) B. Create a task named "Email" that forwards the input arguments to the SNS topic, so the team receives an email notification whenever any step fails.
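The Catch field from option C can be sketched as a small transformation over an Amazon States Language (ASL) state definition. The state and Lambda names below are hypothetical placeholders; `States.ALL` is the real ASL wildcard that matches every error name.

```python
import json

# Sketch: attach the Catch field from option C to an ASL state so that every
# error routes to the "Email" notification task. Names are placeholders.
def with_catch_all(state: dict, handler: str = "Email") -> dict:
    """Return a copy of an ASL state that routes all errors to `handler`."""
    patched = dict(state)
    patched["Catch"] = [{"ErrorEquals": ["States.ALL"], "Next": handler}]
    return patched

train_state = {
    "Type": "Task",
    "Resource": "arn:aws:lambda:us-east-1:111122223333:function:RetrainModel",
    "Next": "EvaluateModel",
}
print(json.dumps(with_catch_all(train_state), indent=2))
```

Applying this to every Task, Map, and Parallel state guarantees no failure mode bypasses the notification path.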

A company is building an image service on the web that will allow users to upload and search random photos. At peak usage, up to 10,000 users worldwide will upload their images. The service will then overlay text on the uploaded images, which will then be published on the company website.

Which design should a solutions architect implement?

A.
Store the uploaded images in Amazon Elastic File System (Amazon EFS). Send application log information about each image to Amazon CloudWatch Logs. Create a fleet of Amazon EC2 instances that use CloudWatch Logs to determine which images need to be processed. Place processed images in another directory in Amazon EFS. Enable Amazon CloudFront and configure the origin to be one of the EC2 instances in the fleet.
B.
Store the uploaded images in an Amazon S3 bucket and configure an S3 bucket event notification to send a message to Amazon Simple Notification Service (Amazon SNS). Create a fleet of Amazon EC2 instances behind an Application Load Balancer (ALB) to pull messages from Amazon SNS to process the images and place them in Amazon Elastic File System (Amazon EFS). Use Amazon CloudWatch metrics for the SNS message volume to scale out EC2 instances. Enable Amazon CloudFront and configure the origin to be the ALB in front of the EC2 instances.
C.
Store the uploaded images in an Amazon S3 bucket and configure an S3 bucket event notification to send a message to an Amazon Simple Queue Service (Amazon SQS) queue. Create a fleet of Amazon EC2 instances to pull messages from the SQS queue to process the images and place them in another S3 bucket. Use Amazon CloudWatch metrics for queue depth to scale out EC2 instances. Enable Amazon CloudFront and configure the origin to be the S3 bucket that contains the processed images.
D.
Store the uploaded images on a shared Amazon Elastic Block Store (Amazon EBS) volume mounted to a fleet of Amazon EC2 Spot instances. Create an Amazon DynamoDB table that contains information about each uploaded image and whether it has been processed. Use an Amazon EventBridge rule to scale out EC2 instances. Enable Amazon CloudFront and configure the origin to reference an Elastic Load Balancer in front of the fleet of EC2 instances.
Suggested answer: C

Explanation:

Option C (store the uploaded images in an S3 bucket and use an S3 event notification with an SQS queue) is the most suitable design. Amazon S3 provides highly scalable and durable storage for the uploaded images. Configuring S3 event notifications to send messages to an SQS queue decouples image processing from the upload process. A fleet of EC2 instances can pull messages from the SQS queue, process the images, and store them in another S3 bucket. Scaling the EC2 instances based on SQS queue depth using CloudWatch metrics ensures efficient use of resources. Enabling Amazon CloudFront with the origin set to the S3 bucket containing the processed images improves the global availability and performance of image delivery.
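The "scale on queue depth" step can be sketched as a backlog-per-instance calculation. This is an illustration only: the numbers are hypothetical, and in practice the queue's `ApproximateNumberOfMessagesVisible` CloudWatch metric would drive an Auto Scaling policy rather than custom code.

```python
import math

# Illustration of scaling on SQS queue depth (option C): keep a fixed backlog
# of messages per instance, clamped to fleet limits. Numbers are hypothetical.
def desired_instance_count(queue_depth: int,
                           messages_per_instance: int = 100,
                           min_instances: int = 1,
                           max_instances: int = 50) -> int:
    wanted = math.ceil(queue_depth / messages_per_instance)
    return max(min_instances, min(max_instances, wanted))

print(desired_instance_count(0))      # quiet period -> 1 (minimum fleet)
print(desired_instance_count(2500))   # backlog of 2,500 messages -> 25
```

Because SQS absorbs upload spikes, the fleet can lag behind demand briefly without losing work, which is exactly the decoupling the explanation describes.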

A company is planning to migrate an Amazon RDS for Oracle database to an RDS for PostgreSQL DB instance in another AWS account. A solutions architect needs to design a migration strategy that will require no downtime and that will minimize the amount of time necessary to complete the migration. The migration strategy must replicate all existing data and any new data that is created during the migration. The target database must be identical to the source database at completion of the migration process. All applications currently use an Amazon Route 53 CNAME record as their endpoint for communication with the RDS for Oracle DB instance. The RDS for Oracle DB instance is in a private subnet.

Which combination of steps should the solutions architect take to meet these requirements? (Select THREE)

A.
Create a new RDS for PostgreSQL DB instance in the target account. Use the AWS Schema Conversion Tool (AWS SCT) to migrate the database schema from the source database to the target database.
B.
Use the AWS Schema Conversion Tool (AWS SCT) to create a new RDS for PostgreSQL DB instance in the target account with the schema and initial data from the source database.
C.
Configure VPC peering between the VPCs in the two AWS accounts to provide connectivity to both DB instances from the target account. Configure the security groups that are attached to each DB instance to allow traffic on the database port from the VPC in the target account.
D.
Temporarily allow the source DB instance to be publicly accessible to provide connectivity from the VPC in the target account. Configure the security groups that are attached to each DB instance to allow traffic on the database port from the VPC in the target account.
E.
Use AWS Database Migration Service (AWS DMS) in the target account to perform a full load plus change data capture (CDC) migration from the source database to the target database. When the migration is complete, change the CNAME record to point to the target DB instance endpoint.
F.
Use AWS Database Migration Service (AWS DMS) in the target account to perform a change data capture (CDC) migration from the source database to the target database. When the migration is complete, change the CNAME record to point to the target DB instance endpoint.
Suggested answer: A, C, E
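Option E maps to a DMS replication task with migration type `full-load-and-cdc` (a real value of the `MigrationType` parameter). The sketch below shows the task parameters; the ARNs and table-mapping rule are hypothetical placeholders, and with boto3 the same dictionary would be passed to `dms_client.create_replication_task()`.

```python
# Sketch of the replication-task parameters for option E: full load of existing
# data plus ongoing change data capture. ARNs are hypothetical placeholders.
def build_dms_task(source_endpoint_arn: str,
                   target_endpoint_arn: str,
                   instance_arn: str) -> dict:
    return {
        "ReplicationTaskIdentifier": "oracle-to-postgresql",
        "SourceEndpointArn": source_endpoint_arn,
        "TargetEndpointArn": target_endpoint_arn,
        "ReplicationInstanceArn": instance_arn,
        # full-load-and-cdc copies all existing data, then keeps replicating new
        # changes, so source and target stay identical with no downtime.
        "MigrationType": "full-load-and-cdc",
        # Example selection rule including every schema and table.
        "TableMappings": '{"rules": [{"rule-type": "selection", "rule-id": "1", '
                         '"rule-name": "1", "object-locator": '
                         '{"schema-name": "%", "table-name": "%"}, '
                         '"rule-action": "include"}]}',
    }

task = build_dms_task(
    "arn:aws:dms:us-east-1:111122223333:endpoint:source-oracle",
    "arn:aws:dms:us-east-1:111122223333:endpoint:target-postgresql",
    "arn:aws:dms:us-east-1:111122223333:rep:migration-instance",
)
```

A CDC-only task (option F) would miss the existing data, which is why the full-load-plus-CDC variant is required here.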

A company is using AWS Organizations to manage multiple accounts. Due to regulatory requirements, the company wants to restrict specific member accounts to certain AWS Regions, where they are permitted to deploy resources. Tagging of the resources in the accounts must be enforced based on a group standard and centrally managed with minimal configuration.

What should a solutions architect do to meet these requirements?

A.
Create an AWS Config rule in the specific member accounts to limit Regions, and apply a tag policy.
B.
From the AWS Billing and Cost Management console in the management account, disable Regions for the specific member accounts and apply a tag policy on the root.
C.
Associate the specific member accounts with the root. Apply a tag policy and an SCP using conditions to limit Regions.
D.
Associate the specific member accounts with a new OU. Apply a tag policy and an SCP using conditions to limit Regions.
Suggested answer: D

Explanation:

https://aws.amazon.com/es/blogs/mt/implement-aws-resource-tagging-strategy-using-aws-tag-policies-and-service-control-policies-scps/
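The SCP condition referred to in option D typically uses the real global condition key `aws:RequestedRegion`. The sketch below builds such a policy; the Region list is a hypothetical example, and a production policy would usually also exempt global services such as IAM.

```python
import json

# Sketch of an SCP (attached to the new OU in option D) that denies all actions
# outside an allowed-Region list. The Region list is a hypothetical example.
def build_region_restriction_scp(allowed_regions: list) -> dict:
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyOutsideAllowedRegions",
            "Effect": "Deny",
            "Action": "*",
            "Resource": "*",
            "Condition": {
                # Deny any request whose target Region is not in the list.
                "StringNotEquals": {"aws:RequestedRegion": allowed_regions}
            },
        }],
    }

scp = build_region_restriction_scp(["eu-west-1", "eu-central-1"])
print(json.dumps(scp, indent=2))
```

Attaching this SCP and a tag policy to one OU keeps the controls centrally managed: adding an account to the OU applies both with no per-account configuration.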

Total 492 questions