
Amazon SAP-C01 Practice Test - Questions Answers, Page 39


You are looking to migrate your Development (Dev) and Test environments to AWS. You have decided to use separate AWS accounts to host each environment. You plan to link each account's bill to a Master AWS account using Consolidated Billing.

To make sure you keep within budget, you would like to implement a way for administrators in the Master account to have access to stop, delete, and/or terminate resources in both the Dev and Test accounts. Identify which option will allow you to achieve this goal.

A. Create IAM users in the Master account with full Admin permissions. Create cross-account roles in the Dev and Test accounts that grant the Master account access to the resources in the account by inheriting permissions from the Master account.
B. Create IAM users and a cross-account role in the Master account that grants full Admin permissions to the Dev and Test accounts.
C. Create IAM users in the Master account. Create cross-account roles in the Dev and Test accounts that have full Admin permissions and grant the Master account access.
D. Link the accounts using Consolidated Billing. This will give IAM users in the Master account access to resources in the Dev and Test accounts.
Suggested answer: C

Explanation:

Bucket Owner Granting Cross-Account Permission to Objects It Does Not Own

In this example scenario, you own a bucket and you have enabled other AWS accounts to upload objects. That is, your bucket can have objects that other AWS accounts own.

Now, suppose as a bucket owner, you need to grant cross-account permission on objects, regardless of who the owner is, to a user in another account. For example, that user could be a billing application that needs to access object metadata. There are two core issues:

The bucket owner has no permissions on objects created by other AWS accounts. So for the bucket owner to grant permissions on objects it does not own, the object owner (the AWS account that created the objects) must first grant permission to the bucket owner. The bucket owner can then delegate those permissions.

The bucket owner account can delegate permissions to users in its own account, but it cannot delegate permissions to other AWS accounts, because cross-account delegation is not supported. In this scenario, the bucket owner can create an AWS Identity and Access Management (IAM) role with permission to access objects, and grant another AWS account permission to assume the role temporarily, enabling it to access objects in the bucket.

Background: Cross-Account Permissions and Using IAM Roles

IAM roles enable several scenarios to delegate access to your resources, and cross-account access is one of the key scenarios. In this example, the bucket owner, Account A, uses an IAM role to temporarily delegate object access cross-account to users in another AWS account, Account C. Each IAM role you create has two policies attached to it:

A trust policy identifying another AWS account that can assume the role.

An access policy defining what permissions (for example, s3:GetObject) are allowed when someone assumes the role. For a list of permissions you can specify in a policy, see Specifying Permissions in a Policy.

The AWS account identified in the trust policy then grants its user permission to assume the role. The user can then do the following to access objects: assume the role and, in response, get temporary security credentials; then, using those temporary security credentials, access the objects in the bucket.

For more information about IAM roles, go to Roles (Delegation and Federation) in the IAM User Guide. The following is a summary of the walkthrough steps:

The Account A administrator user attaches a bucket policy granting Account B conditional permission to upload objects.

The Account A administrator creates an IAM role, establishing trust with Account C, so users in that account can access Account A. The access policy attached to the role limits what a user in Account C can do when the user accesses Account A.

The Account B administrator uploads an object to the bucket owned by Account A, granting full-control permission to the bucket owner.

The Account C administrator creates a user and attaches a user policy that allows the user to assume the role.

A user in Account C first assumes the role, which returns the user temporary security credentials. Using those temporary credentials, the user then accesses objects in the bucket.

For this example, you need three accounts (Account A, Account B, and Account C), each with its own administrator user. Per IAM guidelines (see About Using an Administrator User to Create Resources and Grant Permissions), we do not use the account root credentials in this walkthrough. Instead, you create an administrator user in each account and use those credentials when creating resources and granting them permissions.
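
The role pattern described above is also what suggested answer C relies on: a role in the Dev or Test account with a trust policy naming the Master account and an attached admin access policy, assumed through STS from the Master account. The following is a minimal boto3 sketch of that pattern; the account IDs and role name are hypothetical placeholders, not values from the question.

```python
import json
import boto3

MASTER_ACCOUNT_ID = "111111111111"   # hypothetical Master account ID
DEV_ACCOUNT_ID = "222222222222"      # hypothetical Dev account ID
ROLE_NAME = "MasterAdminAccess"      # hypothetical role name

# --- Run in the Dev (or Test) account: create the cross-account role ---
iam = boto3.client("iam")

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::{MASTER_ACCOUNT_ID}:root"},
        "Action": "sts:AssumeRole",
    }],
}

iam.create_role(
    RoleName=ROLE_NAME,
    AssumeRolePolicyDocument=json.dumps(trust_policy),  # trust policy
)
iam.attach_role_policy(
    RoleName=ROLE_NAME,
    PolicyArn="arn:aws:iam::aws:policy/AdministratorAccess",  # access policy
)

# --- Run in the Master account: assume the role for temporary credentials ---
sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn=f"arn:aws:iam::{DEV_ACCOUNT_ID}:role/{ROLE_NAME}",
    RoleSessionName="master-admin-session",
)["Credentials"]

# Use the temporary credentials to act in the Dev account,
# for example to stop or terminate EC2 instances that exceed the budget.
ec2 = boto3.client(
    "ec2",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```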

A company ingests and processes streaming market data. The data rate is constant. A nightly process that calculates aggregate statistics is run, and each execution takes about 4 hours to complete. The statistical analysis is not mission critical to the business, and previous data points are picked up on the next execution if a particular run fails. The current architecture uses a pool of Amazon EC2 Reserved Instances with 1-year reservations running full time to ingest and store the streaming data in attached Amazon EBS volumes. On-Demand EC2 instances are launched each night to perform the nightly processing, accessing the stored data from NFS shares on the ingestion servers, and terminating the nightly processing servers when complete. The Reserved Instance reservations are expiring, and the company needs to determine whether to purchase new reservations or implement a new design.

Which is the most cost-effective design?

A. Update the ingestion process to use Amazon Kinesis Data Firehose to save data to Amazon S3. Use a fleet of On-Demand EC2 instances that launches each night to perform the batch processing of the S3 data and terminates when the processing completes.
B. Update the ingestion process to use Amazon Kinesis Data Firehose to save data to Amazon S3. Use AWS Batch to perform nightly processing with a Spot market bid of 50% of the On-Demand price.
C. Update the ingestion process to use a fleet of EC2 Reserved Instances behind a Network Load Balancer with 3-year leases. Use Batch with Spot Instances with a maximum bid of 50% of the On-Demand price for the nightly processing.
D. Update the ingestion process to use Amazon Kinesis Data Firehose to save data to Amazon Redshift. Use an AWS Lambda function scheduled to run nightly with Amazon CloudWatch Events to query Amazon Redshift to generate the daily statistics.
Suggested answer: D

In the context of Amazon ElastiCache CLI, which of the following commands can you use to view all ElastiCache instance events for the past 24 hours?

A. elasticache-events --duration 24
B. elasticache-events --duration 1440
C. elasticache-describe-events --duration 24
D. elasticache describe-events --source-type cache-cluster --duration 1440
Suggested answer: D

Explanation:

In Amazon ElastiCache, the command "aws elasticache describe-events --source-type cache-cluster --duration 1440" is used to list the cache-cluster events for the past 24 hours (1440 minutes).

Reference: http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/ECEvents.Viewing.html
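
For reference, a minimal boto3 equivalent of the same query might look like the sketch below (it assumes default credentials and Region are already configured):

```python
import boto3

elasticache = boto3.client("elasticache")

# List cache-cluster events from the last 24 hours (1440 minutes)
response = elasticache.describe_events(
    SourceType="cache-cluster",
    Duration=1440,
)
for event in response["Events"]:
    print(event["Date"], event["SourceIdentifier"], event["Message"])
```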

A company is migrating its on-premises systems to AWS. The user environment consists of the following systems:

Windows and Linux virtual machines running on VMware.
Physical servers running Red Hat Enterprise Linux.

The company wants to be able to perform the following steps before migrating to AWS:

Identify dependencies between on-premises systems.

Group systems together into applications to build migration plans.

Review performance data using Amazon Athena to ensure that Amazon EC2 instances are right-sized.

How can these requirements be met?

A. Populate the AWS Application Discovery Service import template with information from an on-premises configuration management database (CMDB). Upload the completed import template to Amazon S3, then import the data into Application Discovery Service.
B. Install the AWS Application Discovery Service Discovery Agent on each of the on-premises systems. Allow the Discovery Agent to collect data for a period of time.
C. Install the AWS Application Discovery Service Discovery Connector on each of the on-premises systems and in VMware vCenter. Allow the Discovery Connector to collect data for one week.
D. Install the AWS Application Discovery Service Discovery Agent on the physical on-premises servers. Install the AWS Application Discovery Service Discovery Connector in VMware vCenter. Allow the Discovery Agent to collect data for a period of time.
Suggested answer: C

If no explicit deny is found while applying IAM's Policy Evaluation Logic, the enforcement code looks for any ______ instructions that would apply to the request.

A. "cancel"
B. "suspend"
C. "allow"
D. "valid"
Suggested answer: C

Explanation:

If an explicit deny is not found among the applicable policies for a specific request, IAM's Policy Evaluation Logic checks for any "allow" instructions to determine whether the request can be successfully completed.

Reference: http://docs.aws.amazon.com/IAM/latest/UserGuide/AccessPolicyLanguage_EvaluationLogic.html
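
The ordering can be illustrated with a small, standalone sketch (this is not the real IAM evaluation engine, only the deny-then-allow logic described above):

```python
def evaluate(statements, action):
    """Illustrative policy evaluation: explicit deny wins, then any allow."""
    allowed = False
    for stmt in statements:
        if action in stmt["Action"]:
            if stmt["Effect"] == "Deny":
                return "Deny"        # explicit deny always wins
            if stmt["Effect"] == "Allow":
                allowed = True       # remember an applicable allow
    return "Allow" if allowed else "Deny"  # default is an implicit deny

# Example: an explicit deny overrides a matching allow
policy = [
    {"Effect": "Allow", "Action": ["s3:GetObject"]},
    {"Effect": "Deny", "Action": ["s3:GetObject"]},
]
print(evaluate(policy, "s3:GetObject"))   # Deny
print(evaluate(policy, "s3:PutObject"))   # Deny (no applicable allow)
```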

Your company has its HQ in Tokyo and branch offices all over the world, and it uses logistics software with a multi-regional deployment on AWS in Japan, Europe, and the USA. The logistics software has a 3-tier architecture and currently uses MySQL 5.6 for data persistence. Each region has deployed its own database.

In the HQ region you run an hourly batch process reading data from every region to compute cross-regional reports that are sent by email to all offices. This batch process must be completed as fast as possible to quickly optimize logistics. How do you build the database architecture in order to meet the requirements?

A. For each regional deployment, use RDS MySQL with a master in the region and a read replica in the HQ region
B. For each regional deployment, use MySQL on EC2 with a master in the region and send hourly EBS snapshots to the HQ region
C. For each regional deployment, use RDS MySQL with a master in the region and send hourly RDS snapshots to the HQ region
D. For each regional deployment, use MySQL on EC2 with a master in the region and use S3 to copy data files hourly to the HQ region
E. Use Direct Connect to connect all regional MySQL deployments to the HQ region and reduce network latency for the batch process
Suggested answer: A
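
Option A relies on RDS MySQL cross-region read replicas so the hourly batch in the HQ region can read every regional dataset locally. A minimal boto3 sketch of creating one such replica from the HQ region follows; the instance identifiers, account number, instance class, and regions are hypothetical placeholders. The same call would be repeated for each regional master, after which the batch process reads from the local replicas instead of querying remote regions every hour.

```python
import boto3

# RDS client in the HQ Region (Tokyo), where the read replica will live
rds_hq = boto3.client("rds", region_name="ap-northeast-1")

rds_hq.create_db_instance_read_replica(
    DBInstanceIdentifier="eu-logistics-replica",   # hypothetical replica name
    # ARN of the regional master in the EU deployment (hypothetical)
    SourceDBInstanceIdentifier=(
        "arn:aws:rds:eu-west-1:123456789012:db:eu-logistics-master"
    ),
    DBInstanceClass="db.r5.large",
    SourceRegion="eu-west-1",   # lets boto3 presign the cross-region request
)
```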

Mike is appointed as a Cloud Consultant at ABC.com. ABC.com has the following VPCs set up in the US East Region:

A VPC with CIDR block 10.10.0.0/16, with a subnet in that VPC with CIDR block 10.10.1.0/24
A VPC with CIDR block 10.40.0.0/16, with a subnet in that VPC with CIDR block 10.40.1.0/24

ABC.com is trying to establish a network connection between these two subnets: the subnet with CIDR block 10.10.1.0/24 and the subnet with CIDR block 10.40.1.0/24.

Which one of the following solutions should Mike recommend to ABC.com?

A. Create 2 Virtual Private Gateways and configure one with each VPC.
B. Create 2 Internet Gateways, and attach one to each VPC.
C. Create a VPC Peering connection between both VPCs.
D. Create one EC2 instance in each subnet, assign Elastic IPs to both instances, and set up a Site-to-Site VPN connection between both EC2 instances.
Suggested answer: C

Explanation:

A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them using private IP addresses. EC2 instances in either VPC can communicate with each other as if they are within the same network. You can create a VPC peering connection between your own VPCs, or with a VPC in another AWS account within a single region. AWS uses the existing infrastructure of a VPC to create a VPC peering connection; it is neither a gateway nor a VPN connection, and does not rely on a separate piece of physical hardware.

Reference: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-peering.html
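
As a rough boto3 sketch of the recommendation (the VPC, route table, and peering IDs below are hypothetical placeholders), the peering connection is created, accepted, and then referenced from each side's route table:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Request and accept the peering connection (same account, same Region)
peering = ec2.create_vpc_peering_connection(
    VpcId="vpc-aaaa1111",      # VPC with CIDR 10.10.0.0/16 (hypothetical ID)
    PeerVpcId="vpc-bbbb2222",  # VPC with CIDR 10.40.0.0/16 (hypothetical ID)
)
pcx_id = peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]
ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

# Route each side's traffic for the other CIDR block over the peering link
ec2.create_route(
    RouteTableId="rtb-aaaa1111",            # hypothetical route table ID
    DestinationCidrBlock="10.40.0.0/16",
    VpcPeeringConnectionId=pcx_id,
)
ec2.create_route(
    RouteTableId="rtb-bbbb2222",            # hypothetical route table ID
    DestinationCidrBlock="10.10.0.0/16",
    VpcPeeringConnectionId=pcx_id,
)
```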

A company wants to host its website on AWS using serverless architecture design patterns for global customers. The company has outlined its requirements as follows:

The website should be responsive.

The website should offer minimal latency.

The website should be highly available.

Users should be able to authenticate through social identity providers such as Google, Facebook, and Amazon.

There should be baseline DDoS protections for spikes in traffic.

How can the design requirements be met?

A. Use Amazon CloudFront with Amazon ECS for hosting the website. Use AWS Secrets Manager to provide user management and authentication functions. Use ECS Docker containers to build an API.
B. Use Amazon Route 53 latency routing with an Application Load Balancer and AWS Fargate in different regions for hosting the website. Use Amazon Cognito to provide user management and authentication functions. Use Amazon EKS containers to build an API.
C. Use Amazon CloudFront with Amazon S3 for hosting static web resources. Use Amazon Cognito to provide user management and authentication functions. Use Amazon API Gateway with AWS Lambda to build an API.
D. Use AWS Direct Connect with Amazon CloudFront and Amazon S3 for hosting static web resources. Use Amazon Cognito to provide user management and authentication functions. Use AWS Lambda to build an API.
Suggested answer: C

Which of the following is NOT an advantage of using AWS Direct Connect?

A. AWS Direct Connect provides users access to public and private resources by using two different connections while maintaining network separation between the public and private environments.
B. AWS Direct Connect provides a more consistent network experience than Internet-based connections.
C. AWS Direct Connect makes it easy to establish a dedicated network connection from your premises to AWS.
D. AWS Direct Connect reduces your network costs.
Suggested answer: A

Explanation:

AWS Direct Connect makes it easy to establish a dedicated network connection from your premises to AWS. Using AWS Direct Connect, you can establish private connectivity between AWS and your datacenter, office, or colocation environment, which in many cases can reduce your network costs, increase bandwidth throughput, and provide a more consistent network experience than Internet-based connections. By using industry standard 802.1q VLANs, this dedicated connection can be partitioned into multiple virtual interfaces. This allows you to use the same connection to access public resources such as objects stored in Amazon S3 using public IP address space, and private resources such as Amazon EC2 instances running within an Amazon Virtual Private Cloud (VPC) using private IP space, while maintaining network separation between the public and private environments.

Reference: http://aws.amazon.com/directconnect/#details
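
The 802.1Q partitioning described above can be sketched with boto3 as two virtual interfaces on a single connection; the connection ID, VLANs, ASN, virtual private gateway ID, addresses, and prefixes below are hypothetical placeholders rather than values from this question:

```python
import boto3

dx = boto3.client("directconnect")
CONNECTION_ID = "dxcon-exampleid"   # hypothetical Direct Connect connection

# Private virtual interface (VLAN 101) for VPC resources such as EC2
dx.create_private_virtual_interface(
    connectionId=CONNECTION_ID,
    newPrivateVirtualInterface={
        "virtualInterfaceName": "to-vpc",
        "vlan": 101,
        "asn": 65000,
        "virtualGatewayId": "vgw-exampleid",   # hypothetical VGW ID
    },
)

# Public virtual interface (VLAN 102) for public endpoints such as Amazon S3
dx.create_public_virtual_interface(
    connectionId=CONNECTION_ID,
    newPublicVirtualInterface={
        "virtualInterfaceName": "to-public-aws",
        "vlan": 102,
        "asn": 65000,
        "amazonAddress": "192.0.2.1/30",       # documentation addresses
        "customerAddress": "192.0.2.2/30",
        "routeFilterPrefixes": [{"cidr": "198.51.100.0/24"}],
    },
)
```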

A company is running an application in the AWS Cloud. The application consists of microservices that run on a fleet of Amazon EC2 instances in multiple Availability Zones behind an Application Load Balancer. The company recently added a new REST API that was implemented in Amazon API Gateway. Some of the older microservices that run on EC2 instances need to call this new API. The company does not want the API to be accessible from the public internet and does not want proprietary data to traverse the public internet. What should a solutions architect do to meet these requirements?

A. Create an AWS Site-to-Site VPN connection between the VPC and the API Gateway. Use API Gateway to generate a unique API key for each microservice. Configure the API methods to require the key.
B. Create an interface VPC endpoint for API Gateway, and set an endpoint policy to only allow access to the specific API. Add a resource policy to API Gateway to only allow access from the VPC endpoint. Change the API Gateway endpoint type to private.
C. Modify the API Gateway to use IAM authentication. Update the IAM policy for the IAM role that is assigned to the EC2 instances to allow access to the API Gateway. Move the API Gateway into a new VPC. Deploy a transit gateway and connect the VPCs.
D. Create an accelerator in AWS Global Accelerator, and connect the accelerator to the API Gateway. Update the route table for all VPC subnets with a route to the created Global Accelerator endpoint IP address. Add an API key for each service to use for authentication.
Suggested answer: B
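
Answer B pairs the private API Gateway endpoint type with a resource policy that only admits requests arriving through a specific interface VPC endpoint for execute-api. A minimal boto3 sketch of that configuration is below; the API ID, VPC endpoint ID, Region, and account number are hypothetical placeholders, and it assumes the API currently uses the REGIONAL endpoint type.

```python
import json
import boto3

apigw = boto3.client("apigateway", region_name="us-east-1")

API_ID = "a1b2c3d4e5"              # hypothetical REST API ID
VPC_ENDPOINT_ID = "vpce-0abc1234"  # hypothetical execute-api interface endpoint

# Resource policy: only allow invocations that come through the VPC endpoint
resource_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": "execute-api:Invoke",
        "Resource": f"arn:aws:execute-api:us-east-1:123456789012:{API_ID}/*",
        "Condition": {"StringEquals": {"aws:SourceVpce": VPC_ENDPOINT_ID}},
    }],
}

# Attach the resource policy and switch the endpoint type to PRIVATE
apigw.update_rest_api(
    restApiId=API_ID,
    patchOperations=[
        {"op": "replace", "path": "/policy",
         "value": json.dumps(resource_policy)},
        {"op": "replace", "path": "/endpointConfiguration/types/REGIONAL",
         "value": "PRIVATE"},
    ],
)
```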