Amazon SAP-C01 Practice Test - Questions Answers, Page 27

A company has decided to move some workloads onto AWS to create a grid environment to run market analytics. The grid will consist of many similar instances, spun up by a job-scheduling function. Each time a large analytics workload is completed, a new VPC is deployed along with the job scheduler and grid nodes. Multiple grids could be running in parallel. Key requirements are:

Grid instances must communicate with Amazon S3 to retrieve data to be processed.

Grid instances must communicate with Amazon DynamoDB to track intermediate data.

The job scheduler needs only to communicate with the Amazon EC2 API to start new grid nodes.

A key requirement is that the environment has no access to the internet, either directly or via the on-premises proxy. However, the application needs to be able to communicate seamlessly with Amazon S3, Amazon DynamoDB, and the Amazon EC2 API, without reconfiguration for each new deployment. Which of the following should the Solutions Architect do to achieve this target architecture? (Choose three.)

A.
Enable VPC endpoints for Amazon S3 and DynamoDB.
B.
Disable Private DNS Name Support.
C.
Configure the application on the grid instances to use the private DNS name of the Amazon S3 endpoint.
D.
Populate the on-premises DNS server with the private IP addresses of the EC2 endpoint.
E.
Enable an interface VPC endpoint for EC2.
F.
Configure Amazon S3 endpoint policy to permit access only from the grid nodes.
Suggested answer: A, C, E

Explanation:

Reference: https://aws.amazon.com/premiumsupport/knowledge-center/connect-s3-vpc-endpoint/

https://docs.aws.amazon.com/vpc/latest/userguide/vpce-interface.html
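
As a rough illustration of how answers A and E could be automated for each new VPC deployment, the following boto3 sketch creates the gateway and interface endpoints; the region, VPC, route table, subnet, and security group IDs are placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

# Gateway endpoints for S3 and DynamoDB (answer A): traffic is routed
# via the subnet route tables, with no internet access required.
for service in ("s3", "dynamodb"):
    ec2.create_vpc_endpoint(
        VpcEndpointType="Gateway",
        VpcId="vpc-0123456789abcdef0",            # placeholder VPC ID
        ServiceName=f"com.amazonaws.us-east-1.{service}",
        RouteTableIds=["rtb-0123456789abcdef0"],  # placeholder route table
    )

# Interface endpoint for the EC2 API (answer E). Enabling private DNS
# lets the job scheduler keep using the default ec2.<region>.amazonaws.com
# name, avoiding per-deployment reconfiguration.
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.ec2",
    SubnetIds=["subnet-0123456789abcdef0"],       # placeholder subnet
    SecurityGroupIds=["sg-0123456789abcdef0"],    # placeholder SG
    PrivateDnsEnabled=True,
)
```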

With Amazon Elastic MapReduce (Amazon EMR) you can analyze and process vast amounts of data. The cluster is managed using an open-source framework called Hadoop. You have set up an application to run Hadoop jobs. The application reads data from DynamoDB and generates a temporary file of 100 TB. The whole process runs for 30 minutes, and the output of the job is stored in S3. Which of the following options is the most cost-effective solution in this case?

A.
Use Spot Instances to run Hadoop jobs and configure them with EBS volumes for persistent data storage.
B.
Use Spot Instances to run Hadoop jobs and configure them with ephemeral storage for output file storage.
C.
Use an On-Demand Instance to run Hadoop jobs and configure it with EBS volumes for persistent storage.
D.
Use an On-Demand Instance to run Hadoop jobs and configure it with ephemeral storage for output file storage.
Suggested answer: B

Explanation:

Amazon EC2 Spot Instances let the user bid on spare EC2 capacity and run instances whenever the bid exceeds the current Spot price. The Spot pricing model complements the On-Demand and Reserved Instance models and, depending on the application, can be the most cost-effective way to obtain compute capacity. The main challenge with a Spot Instance is data persistence, because the instance can be terminated whenever the Spot price exceeds the bid. In this scenario the Hadoop job is short-lived (30 minutes) and fetches its input from a persistent DynamoDB table, so even if an instance is terminated there is no data loss and the job can simply be re-run. Since the 100 TB output file is large and temporary, storing it on ephemeral (instance store) storage yields the best cost savings.

Reference: http://aws.amazon.com/ec2/purchasing-options/spot-instances/
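
For illustration, a transient EMR cluster using Spot capacity might be launched as below with boto3; the cluster name, release label, instance types and counts are assumptions, and the default EMR roles are presumed to already exist:

```python
import boto3

emr = boto3.client("emr", region_name="us-east-1")  # assumed region

# Transient cluster: Spot capacity for the short-lived Hadoop job, with
# instance-store (ephemeral) volumes holding the large temporary output.
response = emr.run_job_flow(
    Name="hadoop-dynamodb-job",                # placeholder name
    ReleaseLabel="emr-6.10.0",                 # assumed EMR release
    Instances={
        "InstanceGroups": [
            {"Name": "master", "Market": "SPOT",
             "InstanceRole": "MASTER", "InstanceType": "m5.xlarge",
             "InstanceCount": 1},
            {"Name": "core", "Market": "SPOT",
             "InstanceRole": "CORE", "InstanceType": "m5.xlarge",
             "InstanceCount": 10},
        ],
        "KeepJobFlowAliveWhenNoSteps": False,  # terminate when the job ends
    },
    JobFlowRole="EMR_EC2_DefaultRole",         # default roles assumed to exist
    ServiceRole="EMR_DefaultRole",
)
print(response["JobFlowId"])
```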

A company has a legacy application running on servers on premises. To increase the application's reliability, the company wants to gain actionable insights from application logs. A Solutions Architect has been given the following requirements for the solution:

Aggregate logs using AWS.

Automate log analysis for errors.

Notify the Operations team when errors go beyond a specified threshold.

What solution meets the requirements?

A.
Install Amazon Kinesis Agent on servers, send logs to Amazon Kinesis Data Streams, and use Amazon Kinesis Data Analytics to identify errors; create an Amazon CloudWatch alarm to notify the Operations team of errors.
B.
Install an AWS X-Ray agent on servers, send logs to AWS Lambda and analyze them to identify errors, and use Amazon CloudWatch Events to notify the Operations team of errors.
C.
Install Logstash on servers, send logs to Amazon S3, and use Amazon Athena to identify errors; use sendmail to notify the Operations team of errors.
D.
Install the Amazon CloudWatch agent on servers, send logs to Amazon CloudWatch Logs, and use metric filters to identify errors; create a CloudWatch alarm to notify the Operations team of errors.
Suggested answer: D

Explanation:

Reference: https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html

https://docs.aws.amazon.com/kinesis-agent-windows/latest/userguide/what-is-kinesis-agent-windows.html
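
A minimal boto3 sketch of answer D might look like the following; the log group name, metric namespace, threshold, and SNS topic ARN are placeholders:

```python
import boto3

logs = boto3.client("logs", region_name="us-east-1")        # assumed region
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Metric filter: count log events containing "ERROR".
logs.put_metric_filter(
    logGroupName="/legacy/application",        # placeholder log group
    filterName="error-count",
    filterPattern="ERROR",
    metricTransformations=[{
        "metricName": "ApplicationErrors",
        "metricNamespace": "LegacyApp",        # placeholder namespace
        "metricValue": "1",
    }],
)

# Alarm: notify the Operations team via an SNS topic when errors exceed
# the threshold (topic ARN is a placeholder).
cloudwatch.put_metric_alarm(
    AlarmName="legacy-app-error-rate",
    MetricName="ApplicationErrors",
    Namespace="LegacyApp",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=10.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
)
```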

A customer has a 10 Gbps AWS Direct Connect connection to an AWS Region where they have a web application hosted on Amazon Elastic Compute Cloud (EC2). The application has dependencies on an on-premises mainframe database that uses a BASE (Basically Available, Soft state, Eventual consistency) consistency model rather than an ACID (Atomicity, Consistency, Isolation, Durability) model. The application is exhibiting undesirable behavior because the database is not able to handle the volume of writes.

How can you reduce the load on your on-premises database resources in the most cost-effective way?

A.
Use Amazon Elastic MapReduce (EMR) S3DistCp as a synchronization mechanism between the on-premises database and a Hadoop cluster on AWS.
B.
Modify the application to write to an Amazon SQS queue and develop a worker process to flush the queue to the on-premises database.
C.
Modify the application to use DynamoDB to feed an EMR cluster which uses a map function to write to the on-premises database.
D.
Provision an RDS read replica database on AWS to handle the writes and synchronize the two databases using Data Pipeline.
Suggested answer: B

Explanation:

Because the mainframe database follows a BASE consistency model, the application can tolerate eventually consistent writes. Writing to an Amazon SQS queue buffers bursts of write traffic, and a worker process can flush the queue to the on-premises database at a rate the mainframe can sustain, reducing its load without provisioning costly new infrastructure.

Reference: https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/welcome.html
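
A minimal sketch of the queue-and-worker pattern, assuming boto3 and a hypothetical apply_to_mainframe helper standing in for the on-premises write path:

```python
import json
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")  # assumed region
queue_url = sqs.create_queue(QueueName="mainframe-writes")["QueueUrl"]


def apply_to_mainframe(record):
    """Hypothetical stand-in for the on-premises mainframe write."""
    print("applying", record)


# Application side: enqueue the write instead of hitting the mainframe.
sqs.send_message(QueueUrl=queue_url,
                 MessageBody=json.dumps({"account": "42", "delta": 100}))

# Worker side: drain the queue at a rate the mainframe can sustain.
while True:
    resp = sqs.receive_message(QueueUrl=queue_url,
                               MaxNumberOfMessages=10, WaitTimeSeconds=20)
    for msg in resp.get("Messages", []):
        apply_to_mainframe(json.loads(msg["Body"]))
        sqs.delete_message(QueueUrl=queue_url,
                           ReceiptHandle=msg["ReceiptHandle"])
```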

A user has created a VPC with public and private subnets, using CIDR 20.0.0.0/16. The private subnet uses CIDR 20.0.1.0/24 and the public subnet uses CIDR 20.0.0.0/24. The user plans to host a web server in the public subnet (port 80) and a DB server in the private subnet (port 3306), and is configuring the security group of the NAT instance. Which of the following entries is not required in the NAT's security group for the database servers to connect to the internet for software updates?

A.
For Outbound allow Destination: 0.0.0.0/0 on port 443
B.
For Inbound allow Source: 20.0.1.0/24 on port 80
C.
For Inbound allow Source: 20.0.0.0/24 on port 80
D.
For Outbound allow Destination: 0.0.0.0/0 on port 80
Suggested answer: C

Explanation:

A user can create subnets within a VPC and launch instances inside them. When public and private subnets host the web server and DB server respectively, the instances in the private subnet must connect to the internet through the NAT instance. The NAT must first be able to receive traffic on ports 80 and 443 from the private subnet, so allow inbound ports 80 and 443 from 20.0.1.0/24. To route this traffic onward to the internet, allow outbound ports 80 and 443 with destination 0.0.0.0/0. The NAT's security group does not need any entry for the public subnet CIDR, which is why option C is not required.

Reference: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Scenario2.html
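
For illustration, the required NAT security group entries could be created with boto3 as below (the security group ID is a placeholder); note that no rule references the public subnet 20.0.0.0/24, which is why entry C is not required:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region
nat_sg = "sg-0123456789abcdef0"                     # placeholder NAT SG ID

# Inbound: accept HTTP/HTTPS only from the private subnet (20.0.1.0/24).
ec2.authorize_security_group_ingress(
    GroupId=nat_sg,
    IpPermissions=[
        {"IpProtocol": "tcp", "FromPort": port, "ToPort": port,
         "IpRanges": [{"CidrIp": "20.0.1.0/24"}]}
        for port in (80, 443)
    ],
)

# Outbound: forward HTTP/HTTPS to the internet.
ec2.authorize_security_group_egress(
    GroupId=nat_sg,
    IpPermissions=[
        {"IpProtocol": "tcp", "FromPort": port, "ToPort": port,
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}
        for port in (80, 443)
    ],
)
```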

A video processing company wants to build a machine learning (ML) model by using 600 TB of compressed data that is stored as thousands of files in the company’s on-premises network attached storage system. The company does not have the necessary compute resources on premises for ML experiments and wants to use AWS.

The company needs to complete the data transfer to AWS within 3 weeks. The data transfer will be a one-time transfer. The data must be encrypted in transit. The measured upload speed of the company’s internet connection is 100 Mbps, and multiple departments share the connection.

Which solution will meet these requirements MOST cost-effectively?

A.
Order several AWS Snowball Edge Storage Optimized devices by using the AWS Management Console. Configure the devices with a destination S3 bucket. Copy the data to the devices. Ship the devices back to AWS.
B.
Set up a 10 Gbps AWS Direct Connect connection between the company location and the nearest AWS Region. Transfer the data over a VPN connection into the Region to store the data in Amazon S3.
C.
Create a VPN connection between the on-premises network storage and the nearest AWS Region. Transfer the data over the VPN connection.
D.
Deploy an AWS Storage Gateway file gateway on premises. Configure the file gateway with a destination S3 bucket. Copy the data to the file gateway.
Suggested answer: A

Explanation:

At 100 Mbps on a shared connection, transferring 600 TB would take well over a year, so neither a VPN nor a file gateway over the existing link can meet the 3-week deadline. A 10 Gbps Direct Connect connection typically takes weeks to provision and is expensive for a one-time transfer. Ordering several AWS Snowball Edge Storage Optimized devices moves the data offline within the deadline, keeps it encrypted, and is the most cost-effective option.

Reference: https://aws.amazon.com/snowball/

A company runs its containerized batch jobs on Amazon ECS. The jobs are scheduled by submitting a container image, a task definition, and the relevant data to an Amazon S3 bucket. Container images may be unique per job. Running the jobs as quickly as possible is of utmost importance, so submitting job artifacts to the S3 bucket triggers the job to run immediately. Sometimes there may be no jobs running at all. However, jobs of any size can be submitted with no prior warning to the IT Operations team. Job definitions include CPU and memory resource requirements.

What solution will allow the batch jobs to complete as quickly as possible after being scheduled?

A.
Schedule the jobs on an Amazon ECS cluster using the Amazon EC2 launch type. Use Service Auto Scaling to increase or decrease the number of running tasks to suit the number of running jobs.
B.
Schedule the jobs directly on EC2 instances. Use Reserved Instances for the baseline minimum load, and use On-Demand Instances in an Auto Scaling group to scale up the platform based on demand.
C.
Schedule the jobs on an Amazon ECS cluster using the Fargate launch type. Use Service Auto Scaling to increase or decrease the number of running tasks to suit the number of running jobs.
D.
Schedule the jobs on an Amazon ECS cluster using the Fargate launch type. Use Spot Instances in an Auto Scaling group to scale the platform based on demand. Use Service Auto Scaling to increase or decrease the number of running tasks to suit the number of running jobs.
Suggested answer: C
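
As a sketch of answer C, a job could be started on Fargate as below with boto3; the cluster name, task definition, and subnet ID are placeholders:

```python
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")  # assumed region

# Launch a batch job on Fargate: no EC2 capacity to pre-provision, so a
# freshly submitted job can start without waiting for instances to boot.
ecs.run_task(
    cluster="batch-jobs",                       # placeholder cluster name
    launchType="FARGATE",
    taskDefinition="analytics-job:1",           # placeholder task definition
    count=1,
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],  # placeholder subnet
            "assignPublicIp": "DISABLED",
        }
    },
)
```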

A read-only news reporting site with a combined web and application tier and a database tier receives large and unpredictable traffic demands, and must respond to these traffic fluctuations automatically. Which AWS services should be used to meet these requirements?

A.
Stateless instances for the web and application tier synchronized using ElastiCache Memcached in an autoscaling group monitored with CloudWatch, and RDS with read replicas.
B.
Stateful instances for the web and application tier in an autoscaling group monitored with CloudWatch, and RDS with read replicas.
C.
Stateful instances for the web and application tier in an autoscaling group monitored with CloudWatch, and multi-AZ RDS.
D.
Stateless instances for the web and application tier synchronized using ElastiCache Memcached in an autoscaling group monitored with CloudWatch, and multi-AZ RDS.
Suggested answer: A
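
For illustration, the Memcached tier that keeps the web/application instances stateless could be provisioned as below with boto3; the cluster ID, node type, and node count are assumptions:

```python
import boto3

elasticache = boto3.client("elasticache", region_name="us-east-1")  # assumed

# Memcached cluster that holds shared session state, so web/app instances
# stay stateless and can be added or removed freely by Auto Scaling.
elasticache.create_cache_cluster(
    CacheClusterId="news-sessions",        # placeholder cluster ID
    Engine="memcached",
    CacheNodeType="cache.t3.micro",        # assumed node size
    NumCacheNodes=2,
)
```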

A user has created an AWS AMI. The user wants the AMI to be available only to his friend and to no one else. How can the user manage this?

A.
Share the AMI with the community and set up an approval workflow before anyone launches it.
B.
It is not possible to share the AMI with a selected user.
C.
Share the AMI with the friend's AWS account ID.
D.
Share the AMI with the friend's AWS login ID.
Suggested answer: C

Explanation:

In Amazon Web Services, a user who has created an AMI can share it with specific friends or colleagues by adding their AWS account IDs to the AMI's launch permissions. Once the AMI is shared, the other user can access it in the console under private AMIs.

Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/sharingamis-explicit.html
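
A minimal boto3 sketch of sharing an AMI with a single account (both IDs below are placeholders):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

# Grant launch permission for the AMI to one specific AWS account ID.
ec2.modify_image_attribute(
    ImageId="ami-0123456789abcdef0",                   # placeholder AMI ID
    LaunchPermission={"Add": [{"UserId": "123456789012"}]},
)
```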

A customer has a website which shows all the deals available across the market. The site normally runs on 5 large EC2 instances. However, in the week before the Thanksgiving vacation the load grows to require almost 20 large instances, and during that period it varies over the day based on office hours. Which of the following solutions is cost-effective and also helps the website achieve better performance?

A.
Set up 10 instances to run during the pre-vacation period and scale up only during office hours by launching 10 more instances using an Auto Scaling schedule.
B.
Keep only 10 instances running and manually launch 10 instances every day during office hours.
C.
During the pre-vacation period, set up 20 instances to run continuously.
D.
During the pre-vacation period, keep 15 instances running and use Auto Scaling to scale the remaining 5 instances up and down based on a network I/O policy.
Suggested answer: D

Explanation:

AWS provides on-demand, scalable infrastructure. Amazon EC2 lets the user launch On-Demand Instances, and the organization should create an AMI of the running instance. When the load varies, arrives at unpredictable times, and exceeds routine traffic, the recommended approach is to launch some instances beforehand and configure Auto Scaling with policies that scale up and down based on EC2 metrics such as network I/O or CPU utilization. If all 10 additional instances were managed solely by the Auto Scaling policy, a sudden spike could leave users waiting while instances launch, giving suboptimal performance. This is why it is recommended to keep an additional 5 instances running and let Auto Scaling manage the remaining 5, balancing cost effectiveness and performance.
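
A sketch of answer D's configuration with boto3, assuming a hypothetical Auto Scaling group name and target value:

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")  # assumed

# Baseline of 15 instances, ceiling of 20 for the pre-vacation period.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="deals-web-asg",      # placeholder group name
    MinSize=15,
    MaxSize=20,
)

# Target-tracking policy on network output: scales the remaining 5
# instances in and out with traffic during the day.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="deals-web-asg",
    PolicyName="network-io-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageNetworkOut",
        },
        "TargetValue": 500_000_000.0,  # assumed bytes-per-instance target
    },
)
```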
