
Amazon SAP-C02 Practice Test - Questions Answers, Page 45


A data analytics company has an Amazon Redshift cluster that consists of several reserved nodes. The cluster is experiencing unexpected bursts of usage because a team of employees is compiling a deep audit analysis report. The queries to generate the report are complex read queries and are CPU intensive.

Business requirements dictate that the cluster must be able to service read and write queries at all times. A solutions architect must devise a solution that accommodates the bursts of usage.

Which solution meets these requirements MOST cost-effectively?

A. Provision an Amazon EMR cluster. Offload the complex data processing tasks.

B. Deploy an AWS Lambda function to add capacity to the Amazon Redshift cluster by using a classic resize operation when the cluster's CPU metrics in Amazon CloudWatch reach 80%.

C. Deploy an AWS Lambda function to add capacity to the Amazon Redshift cluster by using an elastic resize operation when the cluster's CPU metrics in Amazon CloudWatch reach 80%.

D. Turn on the Concurrency Scaling feature for the Amazon Redshift cluster.

Suggested answer: C

Explanation:

The best solution is to deploy an AWS Lambda function that adds capacity to the Amazon Redshift cluster by using an elastic resize operation when the cluster's CPU metrics in Amazon CloudWatch reach 80%. Elastic resize lets the cluster scale up or down quickly by adding or removing nodes within minutes, which improves the performance of the complex read queries and reduces cost by scaling back down when demand decreases. This is more cost-effective than a classic resize operation, which takes much longer and places the cluster in read-only mode for an extended period, and more suitable than Amazon EMR, which is designed for big data processing rather than data warehousing. Reference: Amazon Redshift Documentation, Resizing clusters in Amazon Redshift, Amazon EMR Documentation
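To illustrate the mechanics, here is a minimal boto3 sketch of a Lambda handler that could be invoked by the CloudWatch alarm (for example, through an SNS action) to perform the elastic resize. The cluster identifier and target node count are assumptions for illustration, not values from the question.

```python
# Minimal sketch: resize an Amazon Redshift cluster with an elastic resize
# when invoked by a CloudWatch alarm action. Identifiers are assumed.
import boto3

redshift = boto3.client("redshift")

CLUSTER_ID = "analytics-cluster"   # assumed cluster identifier
TARGET_NODES = 8                   # assumed burst capacity

def lambda_handler(event, context):
    cluster = redshift.describe_clusters(ClusterIdentifier=CLUSTER_ID)["Clusters"][0]
    current_nodes = cluster["NumberOfNodes"]

    if current_nodes < TARGET_NODES:
        # Classic=False requests an elastic resize, which completes in minutes
        # instead of the hours a classic resize can take.
        redshift.resize_cluster(
            ClusterIdentifier=CLUSTER_ID,
            NumberOfNodes=TARGET_NODES,
            Classic=False,
        )
    return {"currentNodes": current_nodes, "targetNodes": TARGET_NODES}
```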

A company is deploying a third-party firewall appliance solution from AWS Marketplace to monitor and protect traffic that leaves the company's AWS environments. The company wants to deploy this appliance into a shared services VPC and route all outbound internet-bound traffic through the appliances.

A solutions architect needs to recommend a deployment method that prioritizes reliability and minimizes failover time between firewall appliances within a single AWS Region. The company has set up routing from the shared services VPC to other VPCs.

Which steps should the solutions architect recommend to meet these requirements? (Select THREE.)

A. Deploy two firewall appliances into the shared services VPC, each in a separate Availability Zone.

B. Create a new Network Load Balancer in the shared services VPC. Create a new target group, and attach it to the new Network Load Balancer. Add each of the firewall appliance instances to the target group.

C. Create a new Gateway Load Balancer in the shared services VPC. Create a new target group, and attach it to the new Gateway Load Balancer. Add each of the firewall appliance instances to the target group.

D. Create a VPC interface endpoint. Add a route to the route table in the shared services VPC. Designate the new endpoint as the next hop for traffic that enters the shared services VPC from other VPCs.

E. Deploy two firewall appliances into the shared services VPC, each in the same Availability Zone.

F. Create a VPC Gateway Load Balancer endpoint. Add a route to the route table in the shared services VPC. Designate the new endpoint as the next hop for traffic that enters the shared services VPC from other VPCs.

Suggested answer: A, C, F

Explanation:

The best solution is to deploy two firewall appliances into the shared services VPC, each in a separate Availability Zone, and create a Gateway Load Balancer to distribute traffic to them. A Gateway Load Balancer is designed for high-performance, high-availability deployments of third-party network virtual appliances such as firewalls. It operates at the network layer, maintains flow stickiness and symmetry to a specific appliance instance, and uses the GENEVE protocol to encapsulate traffic between the load balancer and the appliances. To route traffic from other VPCs to the Gateway Load Balancer, a Gateway Load Balancer endpoint is required. This VPC endpoint provides private connectivity between the appliances in the shared services VPC and the workloads in the other VPCs, and it must be set as the next hop in the shared services VPC route table. This design prioritizes reliability and minimizes failover time between firewall appliances within a single AWS Region. Reference: What is a Gateway Load Balancer? (Elastic Load Balancing documentation)
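As a minimal sketch of the routing step, the following boto3 call points a route table's default route at an existing Gateway Load Balancer endpoint; the route table and endpoint IDs are placeholder assumptions.

```python
# Minimal sketch: send outbound traffic through the firewall appliances by
# making a Gateway Load Balancer endpoint the next hop for the default route.
import boto3

ec2 = boto3.client("ec2")

ROUTE_TABLE_ID = "rtb-0123456789abcdef0"     # assumed route table in the shared services VPC
GWLB_ENDPOINT_ID = "vpce-0123456789abcdef0"  # assumed Gateway Load Balancer endpoint ID

ec2.create_route(
    RouteTableId=ROUTE_TABLE_ID,
    DestinationCidrBlock="0.0.0.0/0",
    VpcEndpointId=GWLB_ENDPOINT_ID,  # GWLB endpoints are valid next hops for VPC routes
)
```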

An online survey company runs its application in the AWS Cloud. The application is distributed and consists of microservices that run in an automatically scaled Amazon Elastic Container Service (Amazon ECS) cluster. The ECS cluster is a target for an Application Load Balancer (ALB). The ALB is a custom origin for an Amazon CloudFront distribution.

The company has a survey that contains sensitive data. The sensitive data must be encrypted when it moves through the application. The application's data-handling microservice is the only microservice that should be able to decrypt the data.

Which solution will meet these requirements?

A. Create a symmetric AWS Key Management Service (AWS KMS) key that is dedicated to the data-handling microservice. Create a field-level encryption profile and a configuration. Associate the KMS key and the configuration with the CloudFront cache behavior.

B. Create an RSA key pair that is dedicated to the data-handling microservice. Upload the public key to the CloudFront distribution. Create a field-level encryption profile and a configuration. Add the configuration to the CloudFront cache behavior.

C. Create a symmetric AWS Key Management Service (AWS KMS) key that is dedicated to the data-handling microservice. Create a Lambda@Edge function. Program the function to use the KMS key to encrypt the sensitive data.

D. Create an RSA key pair that is dedicated to the data-handling microservice. Create a Lambda@Edge function. Program the function to use the private key of the RSA key pair to encrypt the sensitive data.

Suggested answer: B

Explanation:

The best solution is to create an RSA key pair that is dedicated to the data-handling microservice and upload the public key to the CloudFront distribution. Then, create a field-level encryption profile and a configuration, and add the configuration to the CloudFront cache behavior. This solution ensures that the sensitive data is encrypted at the edge locations of CloudFront, close to the end users, and remains encrypted throughout the application stack. Only the data-handling microservice, which has access to the private key of the RSA key pair, can decrypt the data. This solution does not require any additional resources or code changes, and leverages the built-in feature of CloudFront field-level encryption. For more information, see Using field-level encryption to help protect sensitive data.
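For illustration, a hedged boto3 sketch of the key-registration and profile-creation steps follows. The key file, field name, and profile names are assumptions, and generating the RSA key pair itself (for example with OpenSSL) is assumed to happen separately.

```python
# Minimal sketch: register the microservice's RSA public key with CloudFront
# and create a field-level encryption profile that encrypts an assumed
# "sensitive_answer" form field with it.
import boto3

cloudfront = boto3.client("cloudfront")

# PEM-encoded public key of the RSA key pair; only the data-handling
# microservice holds the matching private key.
with open("public_key.pem") as f:
    encoded_key = f.read()

public_key = cloudfront.create_public_key(
    PublicKeyConfig={
        "CallerReference": "data-handling-key-v1",
        "Name": "data-handling-public-key",
        "EncodedKey": encoded_key,
        "Comment": "Public key for field-level encryption",
    }
)["PublicKey"]

cloudfront.create_field_level_encryption_profile(
    FieldLevelEncryptionProfileConfig={
        "CallerReference": "data-handling-profile-v1",
        "Name": "sensitive-survey-fields",
        "Comment": "Encrypt sensitive survey fields at the edge",
        "EncryptionEntities": {
            "Quantity": 1,
            "Items": [
                {
                    "PublicKeyId": public_key["Id"],
                    "ProviderId": "data-handling-service",  # assumed provider label
                    "FieldPatterns": {"Quantity": 1, "Items": ["sensitive_answer"]},
                }
            ],
        },
    }
)
```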

A company needs to store and process image data that will be uploaded from mobile devices using a custom mobile app. Usage peaks between 8 AM and 5 PM on weekdays, with thousands of uploads per minute. The app is rarely used at any other time. A user is notified when image processing is complete.

Which combination of actions should a solutions architect take to ensure image processing can scale to handle the load? (Select THREE.)

A. Upload files from the mobile software directly to Amazon S3. Use S3 event notifications to create a message in an Amazon MQ queue.

B. Upload files from the mobile software directly to Amazon S3. Use S3 event notifications to create a message in an Amazon Simple Queue Service (Amazon SQS) standard queue.

C. Invoke an AWS Lambda function to perform image processing when a message is available in the queue.

D. Invoke an S3 Batch Operations job to perform image processing when a message is available in the queue.

E. Send a push notification to the mobile app by using Amazon Simple Notification Service (Amazon SNS) when processing is complete.

F. Send a push notification to the mobile app by using Amazon Simple Email Service (Amazon SES) when processing is complete.

Suggested answer: B, C, E

Explanation:

The best solution is to upload files from the mobile app directly to Amazon S3, use S3 event notifications to create a message in an Amazon Simple Queue Service (Amazon SQS) standard queue, and invoke an AWS Lambda function to perform image processing when a message is available in the queue. This combination scales to handle the load: Amazon S3 can store any amount of data and absorb concurrent uploads, Amazon SQS buffers messages and delivers them reliably, and AWS Lambda runs code without provisioning servers and scales automatically with demand. The user is notified when processing is complete by sending a push notification to the mobile app through Amazon Simple Notification Service (Amazon SNS). This approach is more cost-effective than Amazon MQ, a managed message broker service for Apache ActiveMQ that requires a dedicated broker instance, and more appropriate than S3 Batch Operations, which is designed for bulk operations across existing objects rather than event-driven, per-upload processing. Amazon Simple Email Service (Amazon SES) sends email and does not deliver mobile push notifications. Reference: Amazon S3 Documentation, Amazon SQS Documentation, AWS Lambda Documentation, Amazon SNS Documentation
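The following is a minimal sketch of the processing Lambda function under assumed names: it is triggered by the SQS queue, parses the embedded S3 event notification, runs a placeholder processing step, and publishes a completion message to an assumed SNS topic used for mobile push.

```python
# Minimal sketch: SQS-triggered Lambda that processes newly uploaded images
# and publishes a completion notification. Names and ARNs are assumptions.
import json
import boto3

s3 = boto3.client("s3")
sns = boto3.client("sns")

TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:image-processing-complete"  # assumed topic

def process_image(data: bytes) -> None:
    """Placeholder for the actual image-processing logic."""
    pass

def lambda_handler(event, context):
    for record in event["Records"]:              # SQS records
        s3_event = json.loads(record["body"])    # S3 event notification JSON
        for s3_record in s3_event.get("Records", []):
            bucket = s3_record["s3"]["bucket"]["name"]
            key = s3_record["s3"]["object"]["key"]

            obj = s3.get_object(Bucket=bucket, Key=key)
            process_image(obj["Body"].read())

            sns.publish(
                TopicArn=TOPIC_ARN,
                Message=json.dumps({"image": key, "status": "processed"}),
            )
```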

A company has an application that analyzes and stores image data on premises. The application receives millions of new image files every day. Files are an average of 1 MB in size. The files are analyzed in batches of 1 GB. When the application analyzes a batch, the application zips the images together. The application then archives the images as a single file on an on-premises NFS server for long-term storage.

The company has a Microsoft Hyper-V environment on premises and has compute capacity available. The company does not have storage capacity and wants to archive the images on AWS. The company needs the ability to retrieve archived data within 1 week of a request.

The company has a 10 Gbps AWS Direct Connect connection between its on-premises data center and AWS. The company needs to set bandwidth limits and schedule archived images to be copied to AWS during non-business hours.

Which solution will meet these requirements MOST cost-effectively?

A. Deploy an AWS DataSync agent on a new GPU-based Amazon EC2 instance. Configure the DataSync agent to copy the batch of files from the on-premises NFS server to Amazon S3 Glacier Instant Retrieval. After the successful copy, delete the data from the on-premises storage.

B. Deploy an AWS DataSync agent as a Hyper-V VM on premises. Configure the DataSync agent to copy the batch of files from the on-premises NFS server to Amazon S3 Glacier Deep Archive. After the successful copy, delete the data from the on-premises storage.

C. Deploy an AWS DataSync agent on a new general purpose Amazon EC2 instance. Configure the DataSync agent to copy the batch of files from the on-premises NFS server to Amazon S3 Standard. After the successful copy, delete the data from the on-premises storage. Create an S3 Lifecycle rule to transition objects from S3 Standard to S3 Glacier Deep Archive after 1 day.

D. Deploy an AWS Storage Gateway Tape Gateway on premises in the Hyper-V environment. Connect the Tape Gateway to AWS. Use automatic tape creation. Specify an Amazon S3 Glacier Deep Archive pool. Eject the tape after the batch of images is copied.

Suggested answer: B

Explanation:

Deploy DataSync Agent:

Install the AWS DataSync agent as a VM in your Hyper-V environment. This agent facilitates the data transfer between your on-premises storage and AWS.

Configure Source and Destination:

Set up the source location to point to your on-premises NFS server where the image batches are stored.

Configure the destination location to be an Amazon S3 bucket with the Glacier Deep Archive storage class. This storage class is cost-effective for long-term storage with retrieval times of up to 12 hours.

Create DataSync Tasks:

Create and configure DataSync tasks to manage the data transfer. Schedule these tasks to run during non-business hours to minimize bandwidth usage during peak times. The tasks will handle the copying of data batches from the NFS server to the S3 bucket.

Set Bandwidth Limits:

In the DataSync configuration, set bandwidth limits to control the amount of data being transferred at any given time. This ensures that your network's performance is not adversely affected during business hours.

Delete On-Premises Data:

After successfully copying the data to S3 Glacier Deep Archive, configure the DataSync task to delete the data from your on-premises NFS server. This helps manage storage capacity on-premises and ensures data is securely archived on AWS.

This approach leverages AWS DataSync for efficient, secure, and automated data transfer, and S3 Glacier Deep Archive for cost-effective long-term storage.
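A minimal boto3 sketch of the DataSync task described above follows; the location ARNs, bandwidth cap, and cron schedule are assumptions, and the S3 location is assumed to have been created with the DEEP_ARCHIVE storage class.

```python
# Minimal sketch: a DataSync task from the on-premises NFS location to the
# S3 (Glacier Deep Archive) location, throttled and scheduled off-hours.
import boto3

datasync = boto3.client("datasync")

SOURCE_LOCATION_ARN = "arn:aws:datasync:us-east-1:111122223333:location/loc-nfs-example"  # assumed
DEST_LOCATION_ARN = "arn:aws:datasync:us-east-1:111122223333:location/loc-s3-example"     # assumed

datasync.create_task(
    SourceLocationArn=SOURCE_LOCATION_ARN,
    DestinationLocationArn=DEST_LOCATION_ARN,
    Name="archive-image-batches",
    Options={
        "BytesPerSecond": 125_000_000,            # assumed cap of ~1 Gbps on the 10 Gbps link
        "VerifyMode": "ONLY_FILES_TRANSFERRED",
    },
    Schedule={"ScheduleExpression": "cron(0 1 * * ? *)"},  # nightly, outside business hours
)
```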

Reference

AWS DataSync Overview

AWS Storage Blog on DataSync Migration

Amazon S3 Transfer Acceleration Documentation

A company has a web application that uses Amazon API Gateway, AWS Lambda, and Amazon DynamoDB. A recent marketing campaign has increased demand. Monitoring software reports that many requests have significantly longer response times than before the marketing campaign.

A solutions architect enabled Amazon CloudWatch Logs for API Gateway and noticed that errors are occurring on 20% of the requests. In CloudWatch, the Lambda function's Throttles metric represents 1% of the requests and the Errors metric represents 10% of the requests. Application logs indicate that, when errors occur, there is a call to DynamoDB.

What change should the solutions architect make to improve the current response times as the web application becomes more popular?

A. Increase the concurrency limit of the Lambda function.

B. Implement DynamoDB auto scaling on the table.

C. Increase the API Gateway throttle limit.

D. Re-create the DynamoDB table with a better-partitioned primary index.

Suggested answer: B

Explanation:

Enable DynamoDB Auto Scaling:

Navigate to the DynamoDB console and select the table experiencing high demand.

Go to the 'Capacity' tab and enable auto scaling for both read and write capacity units. Auto scaling adjusts the provisioned throughput capacity automatically in response to actual traffic patterns, ensuring the table can handle the increased load.

Configure Auto Scaling Policies:

Set the minimum and maximum capacity units to define the range within which auto scaling can adjust the provisioned throughput.

Specify target utilization percentages for read and write operations, typically around 70%, to maintain a balance between performance and cost.

Monitor and Adjust:

Use Amazon CloudWatch to monitor the auto scaling activity and ensure it is effectively handling the increased demand.

Adjust the auto scaling settings if necessary to better match the traffic patterns and application requirements.

By enabling DynamoDB auto scaling, you ensure that the database can handle the fluctuating traffic volumes without manual intervention, improving response times and reducing errors.
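As a hedged sketch of what the console's auto scaling toggle sets up, the following boto3 calls register the table's read and write capacity as scalable targets with target-tracking policies at roughly 70% utilization; the table name and capacity bounds are assumptions.

```python
# Minimal sketch: enable DynamoDB auto scaling for a provisioned table via
# Application Auto Scaling. Table name and capacity limits are assumed.
import boto3

autoscaling = boto3.client("application-autoscaling")

TABLE = "table/SurveyResponses"   # assumed table resource ID

for dimension, metric in [
    ("dynamodb:table:ReadCapacityUnits", "DynamoDBReadCapacityUtilization"),
    ("dynamodb:table:WriteCapacityUnits", "DynamoDBWriteCapacityUtilization"),
]:
    autoscaling.register_scalable_target(
        ServiceNamespace="dynamodb",
        ResourceId=TABLE,
        ScalableDimension=dimension,
        MinCapacity=5,      # assumed floor
        MaxCapacity=500,    # assumed ceiling for campaign bursts
    )
    autoscaling.put_scaling_policy(
        PolicyName=f"{metric}-target-tracking",
        ServiceNamespace="dynamodb",
        ResourceId=TABLE,
        ScalableDimension=dimension,
        PolicyType="TargetTrackingScaling",
        TargetTrackingScalingPolicyConfiguration={
            "TargetValue": 70.0,
            "PredefinedMetricSpecification": {"PredefinedMetricType": metric},
        },
    )
```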

Reference

AWS Compute Blog on Using API Gateway as a Proxy for DynamoDB

AWS Database Blog on DynamoDB Accelerator (DAX)

A company wants to migrate virtual Microsoft workloads from an on-premises data center to AWS. The company has successfully tested a few sample workloads on AWS. The company also has created an AWS Site-to-Site VPN connection to a VPC. A solutions architect needs to generate a total cost of ownership (TCO) report for the migration of all the workloads from the data center.

Simple Network Management Protocol (SNMP) has been enabled on each VM in the data center. The company cannot add more VMs in the data center and cannot install additional software on the VMs. The discovery data must be automatically imported into AWS Migration Hub.

Which solution will meet these requirements?

A. Use the AWS Application Migration Service agentless service and the AWS Migration Hub Strategy Recommendations to generate the TCO report.

B. Launch a Windows Amazon EC2 instance. Install the Migration Evaluator agentless collector on the EC2 instance. Configure Migration Evaluator to generate the TCO report.

C. Launch a Windows Amazon EC2 instance. Install the Migration Evaluator agentless collector on the EC2 instance. Configure Migration Hub to generate the TCO report.

D. Use the AWS Migration Readiness Assessment tool inside the VPC. Configure Migration Evaluator to generate the TCO report.

Suggested answer: A

Explanation:

AWS Application Migration Service:

AWS Application Migration Service (MGN) facilitates the migration of virtual machines (VMs) to AWS without installing additional software on the VMs. This agentless service helps in seamlessly migrating workloads to AWS.

AWS Migration Hub Strategy Recommendations:

AWS Migration Hub Strategy Recommendations offer insights and guidance for planning and implementing migration strategies. It helps in generating a Total Cost of Ownership (TCO) report by automatically importing discovery data from the VMs.

Generating the TCO Report:

The combined use of AWS Application Migration Service and Migration Hub Strategy Recommendations enables the automatic import of discovery data and the generation of an accurate TCO report, ensuring a smooth and cost-effective migration process.

Reference

AWS Migration Hub Strategy Recommendations (AWS Documentation).

A startup company recently migrated a large ecommerce website to AWS. The website has experienced a 70% increase in sales. Software engineers are using a private GitHub repository to manage code. The DevOps team is using Jenkins for builds and unit testing. The engineers need to receive notifications for bad builds and zero downtime during deployments. The engineers also need to ensure any changes to production are seamless for users and can be rolled back in the event of a major issue.

The software engineers have decided to use AWS CodePipeline to manage their build and deployment process.

Which solution will meet these requirements?

A. Use GitHub websockets to trigger the CodePipeline pipeline. Use the Jenkins plugin for AWS CodeBuild to conduct unit testing. Send alerts to an Amazon SNS topic for any bad builds. Deploy in an in-place, all-at-once deployment configuration using AWS CodeDeploy.

B. Use GitHub webhooks to trigger the CodePipeline pipeline. Use the Jenkins plugin for AWS CodeBuild to conduct unit testing. Send alerts to an Amazon SNS topic for any bad builds. Deploy in a blue/green deployment using AWS CodeDeploy.

C. Use GitHub websockets to trigger the CodePipeline pipeline. Use AWS X-Ray for unit testing and static code analysis. Send alerts to an Amazon SNS topic for any bad builds. Deploy in a blue/green deployment using AWS CodeDeploy.

D. Use GitHub webhooks to trigger the CodePipeline pipeline. Use AWS X-Ray for unit testing and static code analysis. Send alerts to an Amazon SNS topic for any bad builds. Deploy in an in-place, all-at-once deployment configuration using AWS CodeDeploy.

Suggested answer: B

Explanation:

GitHub Webhooks to Trigger CodePipeline:

Configure GitHub webhooks to trigger the AWS CodePipeline pipeline. This ensures that every code push to the repository automatically triggers the pipeline, initiating the build and deployment process.

Unit Testing with Jenkins and AWS CodeBuild:

Use Jenkins integrated with the AWS CodeBuild plugin to perform unit testing. Jenkins will manage the build process, and the results will be handled by CodeBuild.

Notifications for Bad Builds:

Configure Amazon SNS (Simple Notification Service) to send alerts for any failed builds. This keeps the engineering team informed of build issues immediately, allowing for quick resolutions.

Blue/Green Deployment with AWS CodeDeploy:

Utilize AWS CodeDeploy with a blue/green deployment strategy. This method reduces downtime and risk by running two identical production environments (blue and green). CodeDeploy shifts traffic between these environments, allowing you to test in the new environment (green) while the old environment (blue) remains live. If issues arise, you can quickly roll back to the previous environment.

This solution provides seamless, zero-downtime deployments, and the ability to quickly roll back changes if necessary, fulfilling the requirements of the startup company.
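A minimal boto3 sketch of the blue/green deployment group described above follows; the application, role, Auto Scaling group, and target group names are assumptions.

```python
# Minimal sketch: a CodeDeploy deployment group that performs blue/green
# deployments behind the existing load balancer, copying the current Auto
# Scaling group for the green fleet and terminating the blue fleet afterward.
import boto3

codedeploy = boto3.client("codedeploy")

codedeploy.create_deployment_group(
    applicationName="ecommerce-web",                       # assumed application name
    deploymentGroupName="production-blue-green",
    serviceRoleArn="arn:aws:iam::111122223333:role/CodeDeployServiceRole",  # assumed role
    autoScalingGroups=["ecommerce-web-asg"],               # assumed Auto Scaling group
    deploymentStyle={
        "deploymentType": "BLUE_GREEN",
        "deploymentOption": "WITH_TRAFFIC_CONTROL",
    },
    blueGreenDeploymentConfiguration={
        "terminateBlueInstancesOnDeploymentSuccess": {
            "action": "TERMINATE",
            "terminationWaitTimeInMinutes": 60,
        },
        "deploymentReadyOption": {"actionOnTimeout": "CONTINUE_DEPLOYMENT"},
        "greenFleetProvisioningOption": {"action": "COPY_AUTO_SCALING_GROUP"},
    },
    loadBalancerInfo={"targetGroupInfoList": [{"name": "ecommerce-web-tg"}]},  # assumed target group
)
```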

Reference

AWS DevOps Blog on Integrating Jenkins with AWS CodeBuild and CodeDeploy

Plain English Guide to AWS CodePipeline with GitHub

Jenkins Plugin for AWS CodePipeline

A medical company is running a REST API on a set of Amazon EC2 instances. The EC2 instances run in an Auto Scaling group behind an Application Load Balancer (ALB). The ALB runs in three public subnets, and the EC2 instances run in three private subnets. The company has deployed an Amazon CloudFront distribution that has the ALB as the only origin.

Which solution should a solutions architect recommend to enhance the origin security?

A. Store a random string in AWS Secrets Manager. Create an AWS Lambda function for automatic secret rotation. Configure CloudFront to inject the random string as a custom HTTP header for the origin request. Create an AWS WAF web ACL rule with a string match rule for the custom header. Associate the web ACL with the ALB.

B. Create an AWS WAF web ACL rule with an IP match condition of the CloudFront service IP address ranges. Associate the web ACL with the ALB. Move the ALB into the three private subnets.

C. Store a random string in AWS Systems Manager Parameter Store. Configure Parameter Store automatic rotation for the string. Configure CloudFront to inject the random string as a custom HTTP header for the origin request. Inspect the value of the custom HTTP header, and block access in the ALB.

D. Configure AWS Shield Advanced. Create a security group policy to allow connections from CloudFront service IP address ranges. Add the policy to AWS Shield Advanced, and attach the policy to the ALB.

Suggested answer: A

Explanation:

Store Secret in AWS Secrets Manager:

Create a random string in AWS Secrets Manager to be used as a custom HTTP header value.

Set Up Automatic Rotation:

Implement a Lambda function to handle automatic rotation of the secret in AWS Secrets Manager, ensuring the header value remains secure.

Configure CloudFront Custom Header:

In the CloudFront distribution settings, configure an origin custom header with the name and value from AWS Secrets Manager. This header will be included in requests forwarded to the ALB.

Create AWS WAF Web ACL:

Create a Web ACL in AWS WAF with a string match rule to allow requests that include the custom header with the correct value.

Associate the Web ACL with the ALB to filter incoming traffic based on the custom header.

By using this method, you can ensure that only requests coming through CloudFront (which injects the custom header) can reach the ALB, enhancing origin security.
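As an illustrative sketch only (the names and header are assumptions), the following boto3 calls create a regional web ACL that blocks requests by default and allows only requests carrying the secret value in an assumed X-Origin-Verify header, which CloudFront would be configured to add as an origin custom header.

```python
# Minimal sketch: a regional AWS WAF web ACL for the ALB that allows requests
# only when they carry the shared secret in an assumed custom header.
import boto3

secrets = boto3.client("secretsmanager")
wafv2 = boto3.client("wafv2")

# Assumed secret name holding the random string that CloudFront injects.
secret_value = secrets.get_secret_value(SecretId="cloudfront/origin-verify")["SecretString"]

wafv2.create_web_acl(
    Name="alb-origin-verify",
    Scope="REGIONAL",                      # regional scope is required for ALB association
    DefaultAction={"Block": {}},
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "albOriginVerify",
    },
    Rules=[
        {
            "Name": "allow-cloudfront-header",
            "Priority": 0,
            "Action": {"Allow": {}},
            "Statement": {
                "ByteMatchStatement": {
                    "SearchString": secret_value.encode(),
                    "FieldToMatch": {"SingleHeader": {"Name": "x-origin-verify"}},
                    "TextTransformations": [{"Priority": 0, "Type": "NONE"}],
                    "PositionalConstraint": "EXACTLY",
                }
            },
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "allowCloudfrontHeader",
            },
        }
    ],
)
```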

A company has multiple lines of business (LOBs) that roll up to the parent company. The company has asked its solutions architect to develop a solution with the following requirements:

* Produce a single AWS invoice for all of the AWS accounts used by its LOBs.

* The costs for each LOB account should be broken out on the invoice.

* Provide the ability to restrict services and features in the LOB accounts, as defined by the company's governance policy.

* Each LOB account should be delegated full administrator permissions regardless of the governance policy.

Which combination of steps should the solutions architect take to meet these requirements? (Select TWO.)

A. Use AWS Organizations to create an organization in the parent account for each LOB. Then invite each LOB account to the appropriate organization.

B. Use AWS Organizations to create a single organization in the parent account. Then invite each LOB's AWS account to join the organization.

C. Implement service quotas to define the services and features that are permitted, and apply the quotas to each LOB, as appropriate.

D. Create an SCP that allows only approved services and features. Then apply the policy to the LOB accounts.

E. Enable consolidated billing in the parent account's billing console, and link the LOB accounts.

Suggested answer: B, E

Explanation:

Create AWS Organization:

In the AWS Management Console, navigate to AWS Organizations and create a new organization in the parent account.

Invite LOB Accounts:

Invite each Line of Business (LOB) account to join the organization. This allows centralized management and governance of all accounts.

Enable Consolidated Billing:

Enable consolidated billing in the billing console of the parent account. Link all LOB accounts to ensure a single consolidated invoice that breaks down costs per account.

Apply Service Control Policies (SCPs):

Implement Service Control Policies (SCPs) to define the services and features permitted for each LOB account as per the governance policy, while still delegating full administrative permissions to the LOB accounts.

By consolidating billing and using AWS Organizations, the company can achieve centralized billing and governance while maintaining independent administrative control for each LOB account.
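A hedged boto3 sketch of these steps follows; the account IDs and the list of approved services in the SCP are illustrative assumptions.

```python
# Minimal sketch: create a single organization with all features enabled
# (which includes consolidated billing), invite each LOB account, and attach
# an SCP that limits services per the governance policy.
import json
import boto3

org = boto3.client("organizations")

# All-features mode enables both consolidated billing and SCPs.
org.create_organization(FeatureSet="ALL")

for account_id in ["111111111111", "222222222222"]:        # assumed LOB account IDs
    org.invite_account_to_organization(Target={"Id": account_id, "Type": "ACCOUNT"})

scp = org.create_policy(
    Name="approved-services-only",
    Description="Allow only services approved by the governance policy",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps({
        "Version": "2012-10-17",
        "Statement": [
            # Assumed list of approved services for illustration.
            {"Effect": "Allow", "Action": ["ec2:*", "s3:*", "rds:*"], "Resource": "*"}
        ],
    }),
)["Policy"]["PolicySummary"]

org.attach_policy(PolicyId=scp["Id"], TargetId="333333333333")   # assumed LOB account to govern
```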

