Amazon SAP-C01 Practice Test - Questions Answers, Page 23

A company has a photo-sharing social networking application. To provide a consistent experience for users, the company performs image processing on user-uploaded photos before publishing them on the application. The image processing is implemented with a set of Python libraries.

The current architecture is as follows:

The image processing Python code runs in a single Amazon EC2 instance and stores the processed images in an Amazon S3 bucket named ImageBucket. The front-end application, hosted in another bucket, loads the images from ImageBucket to display to users.

With plans for global expansion, the company wants to change its existing architecture so that it can scale for increased demand on the application and reduce management complexity as the application scales. Which combination of changes should a solutions architect make? (Choose two.)

A. Place the image processing EC2 instance into an Auto Scaling group.
B. Use AWS Lambda to run the image processing tasks.
C. Use Amazon Rekognition for image processing.
D. Use Amazon CloudFront in front of ImageBucket.
E. Deploy the applications in an Amazon ECS cluster and apply Service Auto Scaling.
Suggested answer: B, D
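
To make option B concrete, here is a minimal sketch of an S3-triggered Lambda handler, assuming Pillow is bundled with the function; the source bucket in the event, the processed/ prefix, and the thumbnail step are illustrative stand-ins for the company's real Python processing, not details from the question.

```python
import io

import boto3
from PIL import Image  # Pillow, packaged with the function or supplied via a layer

s3 = boto3.client("s3")

def handler(event, context):
    # Triggered by an S3 ObjectCreated event on the upload bucket
    record = event["Records"][0]["s3"]
    src_bucket = record["bucket"]["name"]
    key = record["object"]["key"]

    # Fetch the uploaded photo
    body = s3.get_object(Bucket=src_bucket, Key=key)["Body"].read()
    image = Image.open(io.BytesIO(body)).convert("RGB")

    # Stand-in for the real Python image processing
    image.thumbnail((1024, 1024))

    buffer = io.BytesIO()
    image.save(buffer, format="JPEG")

    # Publish the processed image for the front end (behind CloudFront) to load;
    # "ImageBucket" is the name used in the question.
    s3.put_object(Bucket="ImageBucket", Key=f"processed/{key}",
                  Body=buffer.getvalue(), ContentType="image/jpeg")
```

Because Lambda scales per upload and CloudFront caches ImageBucket content at the edge, neither piece requires server management as demand grows.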

A company wants to analyze log data using date ranges with a custom application running on AWS. The application generates about 10 GB of data every day, which is expected to grow. A Solutions Architect is tasked with storing the data in Amazon S3 and using Amazon Athena to analyze the data.

Which combination of steps will ensure optimal performance as the data grows? (Choose two.)

A. Store each object in Amazon S3 with a random string at the front of each key.
B. Store the data in multiple S3 buckets.
C. Store the data in Amazon S3 in a columnar format, such as Apache Parquet or Apache ORC.
D. Store the data in Amazon S3 in objects that are smaller than 10 MB.
E. Store the data using Apache Hive partitioning in Amazon S3 using a key that includes a date, such as dt=2019-02.
Suggested answer: C, E
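
As an illustration of options C and E together, here is a minimal boto3 sketch that defines an Athena table over Parquet data laid out with Hive-style dt= partitions; the bucket, database, and column names are illustrative assumptions.

```python
import boto3

athena = boto3.client("athena")

# External table over columnar Parquet data, partitioned by date (dt=YYYY-MM)
ddl = """
CREATE EXTERNAL TABLE IF NOT EXISTS logs (
  request_id string,
  status int,
  message string
)
PARTITIONED BY (dt string)
STORED AS PARQUET
LOCATION 's3://example-log-bucket/logs/'
"""

athena.start_query_execution(
    QueryString=ddl,
    QueryExecutionContext={"Database": "default"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
# Objects would be written under keys such as
# s3://example-log-bucket/logs/dt=2019-02/part-0000.parquet and registered
# with MSCK REPAIR TABLE logs (or ALTER TABLE logs ADD PARTITION ...).
```

With this layout, a date-range query scans only the matching dt= partitions, and the columnar format limits the scan to the referenced columns, so performance holds as the data grows.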

A company has a VPC with two domain controllers running Active Directory in the default configuration. The VPC DHCP options set is configured to use the IP addresses of the two domain controllers. There is a VPC interface endpoint defined, but instances within the VPC are not able to resolve the private endpoint addresses.

Which strategies would resolve this issue? (Choose two.)

A. Define an outbound Amazon Route 53 Resolver. Set a conditional forward rule for the Active Directory domain to the Active Directory servers. Update the VPC DHCP options set to AmazonProvidedDNS.
B. Update the DNS service on the Active Directory servers to forward all non-authoritative queries to the VPC Resolver.
C. Define an inbound Amazon Route 53 Resolver. Set a conditional forward rule for the Active Directory domain to the Active Directory servers. Update the VPC DHCP options set to AmazonProvidedDNS.
D. Update the DNS service on the client instances to split DNS queries between the Active Directory servers and the VPC Resolver.
E. Update the DNS service on the Active Directory servers to forward all queries to the VPC Resolver.
Suggested answer: A, B
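
To make option A concrete, here is a minimal boto3 sketch of an outbound Route 53 Resolver endpoint plus a conditional forwarding rule that sends queries for the AD domain back to the domain controllers; the subnet, security group, VPC IDs, domain name, and server IPs are illustrative placeholders.

```python
import boto3

r53r = boto3.client("route53resolver")

# Outbound endpoint (needs IPs in at least two subnets)
endpoint = r53r.create_resolver_endpoint(
    CreatorRequestId="ad-outbound-1",
    Name="ad-outbound",
    Direction="OUTBOUND",
    SecurityGroupIds=["sg-0123456789abcdef0"],
    IpAddresses=[{"SubnetId": "subnet-aaaa1111"}, {"SubnetId": "subnet-bbbb2222"}],
)

# Conditional forward rule: AD domain queries go to the domain controllers
rule = r53r.create_resolver_rule(
    CreatorRequestId="ad-forward-1",
    Name="corp-example-com",
    RuleType="FORWARD",
    DomainName="corp.example.com",  # the Active Directory domain (assumed)
    TargetIps=[{"Ip": "10.0.0.10", "Port": 53}, {"Ip": "10.0.1.10", "Port": 53}],
    ResolverEndpointId=endpoint["ResolverEndpoint"]["Id"],
)

r53r.associate_resolver_rule(
    ResolverRuleId=rule["ResolverRule"]["Id"],
    VPCId="vpc-0123456789abcdef0",
)
```

With the DHCP options set switched to AmazonProvidedDNS, instances then resolve interface-endpoint names through the VPC resolver while AD names follow the forwarding rule.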

A company runs applications on Amazon EC2 instances. The company plans to begin using an Auto Scaling group for the instances. As part of this transition, a solutions architect must ensure that Amazon CloudWatch Logs automatically collects logs from all new instances. The new Auto Scaling group will use a launch template that includes the Amazon Linux 2 AMI and no key pair.

Which solution meets these requirements?

A. Create an Amazon CloudWatch agent configuration for the workload. Store the CloudWatch agent configuration in an Amazon S3 bucket. Write an EC2 user data script to fetch the configuration file from Amazon S3. Configure the CloudWatch agent on the instance during initial boot.
B. Create an Amazon CloudWatch agent configuration for the workload in AWS Systems Manager Parameter Store. Create a Systems Manager document that installs and configures the CloudWatch agent by using the configuration. Create an Amazon EventBridge (Amazon CloudWatch Events) rule on the default event bus with a Systems Manager Run Command target that runs the document whenever an instance enters the running state.
C. Create an Amazon CloudWatch agent configuration for the workload. Create an AWS Lambda function to install and configure the CloudWatch agent by using AWS Systems Manager Session Manager. Include the agent configuration inside the Lambda package. Create an AWS Config custom rule to identify changes to the EC2 instances and invoke the Lambda function.
D. Create an Amazon CloudWatch agent configuration for the workload. Save the CloudWatch agent configuration as part of an AWS Lambda deployment package. Use AWS CloudTrail to capture EC2 tagging events and initiate agent installation. Use AWS CodeBuild to configure the CloudWatch agent on the instances that run the workload.
Suggested answer: A

Explanation:

Reference: https://docs.aws.amazon.com/prescriptive-guidance/latest/implementing-logging-monitoring-cloudwatch/installcloudwatch-systems-manager.html
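To make option A concrete, here is a minimal sketch of a launch template whose user data script pulls the CloudWatch agent configuration from S3 and starts the agent at first boot; the bucket, key, AMI ID, and profile name are illustrative assumptions, and the instance profile must allow s3:GetObject on the config object.

```python
import base64

import boto3

USER_DATA = """#!/bin/bash
yum install -y amazon-cloudwatch-agent
aws s3 cp s3://example-config-bucket/cw-agent.json /opt/aws/amazon-cloudwatch-agent/etc/cw-agent.json
/opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m ec2 -c file:/opt/aws/amazon-cloudwatch-agent/etc/cw-agent.json -s
"""

ec2 = boto3.client("ec2")
ec2.create_launch_template(
    LaunchTemplateName="workload-with-cw-agent",
    LaunchTemplateData={
        "ImageId": "ami-0abcdef1234567890",  # Amazon Linux 2 AMI (placeholder ID)
        "InstanceType": "t3.micro",
        # Launch template user data must be base64-encoded
        "UserData": base64.b64encode(USER_DATA.encode()).decode(),
        "IamInstanceProfile": {"Name": "cw-agent-instance-profile"},
    },
)
```

Every instance the Auto Scaling group launches from this template configures itself, so no per-instance action or key pair is needed.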

A Solutions Architect is designing a highly available and reliable solution for a cluster of Amazon EC2 instances. The Solutions Architect must ensure that any EC2 instance within the cluster recovers automatically after a system failure. The solution must ensure that the recovered instance maintains the same IP address.

How can these requirements be met?

A. Create an AWS Lambda script to restart any EC2 instances that shut down unexpectedly.
B. Create an Auto Scaling group for each EC2 instance that has a minimum and maximum size of 1.
C. Create a new t2.micro instance to monitor the cluster instances. Configure the t2.micro instance to issue an aws ec2 reboot-instances command upon failure.
D. Create an Amazon CloudWatch alarm for the StatusCheckFailed_System metric, and then configure an EC2 action to recover the instance.
Suggested answer: D

Explanation:

Reference: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-recover.html
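As an illustration of option D, here is a minimal boto3 sketch of the alarm; the region and instance ID are placeholders. The built-in recover action restores the instance on new hardware while preserving its instance ID and IP addresses.

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="recover-i-0123456789abcdef0",
    Namespace="AWS/EC2",
    MetricName="StatusCheckFailed_System",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Maximum",
    Period=60,
    EvaluationPeriods=2,
    Threshold=1.0,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    # Built-in EC2 recover action; the region in the ARN must match the instance
    AlarmActions=["arn:aws:automate:us-east-1:ec2:recover"],
)
```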

A bank is designing an online customer service portal where customers can chat with customer service agents. The portal must maintain a 15-minute RPO or RTO in case of a regional disaster. Banking regulations require that all customer service chat transcripts be preserved on durable storage for at least 7 years, that chat conversations be encrypted in flight, and that transcripts be encrypted at rest. The Data Loss Prevention team requires that data at rest be encrypted using a key that the team controls, rotates, and revokes.

Which design meets these requirements?

A. The chat application logs each chat message into Amazon CloudWatch Logs. A scheduled AWS Lambda function invokes a CloudWatch Logs CreateExportTask every 5 minutes to export chat transcripts to Amazon S3. The S3 bucket is configured for cross-region replication to the backup region. Separate AWS KMS keys are specified for the CloudWatch Logs group and the S3 bucket.
B. The chat application logs each chat message into two different Amazon CloudWatch Logs groups in two different regions, with the same AWS KMS key applied. Both CloudWatch Logs groups are configured to export logs into an Amazon Glacier vault with a 7-year vault lock policy with a KMS key specified.
C. The chat application logs each chat message into Amazon CloudWatch Logs. A subscription filter on the CloudWatch Logs group feeds into an Amazon Kinesis Data Firehose which streams the chat messages into an Amazon S3 bucket in the backup region. Separate AWS KMS keys are specified for the CloudWatch Logs group and the Kinesis Data Firehose.
D. The chat application logs each chat message into Amazon CloudWatch Logs. The CloudWatch Logs group is configured to export logs into an Amazon Glacier vault with a 7-year vault lock policy. Glacier cross-region replication mirrors chat archives to the backup region. Separate AWS KMS keys are specified for the CloudWatch Logs group and the Amazon Glacier vault.
Suggested answer: A

Explanation:

Reference: https://docs.aws.amazon.com/AmazonS3/latest/dev/replication.html
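To make option A concrete, here is a minimal sketch of the scheduled Lambda that exports the last five minutes of transcripts from CloudWatch Logs to the replicated S3 bucket; the log group and bucket names are illustrative assumptions.

```python
import time

import boto3

logs = boto3.client("logs")

def handler(event, context):
    now_ms = int(time.time() * 1000)
    logs.create_export_task(
        taskName=f"chat-export-{now_ms}",
        logGroupName="/app/chat-transcripts",
        fromTime=now_ms - 5 * 60 * 1000,  # the previous five-minute window
        to=now_ms,
        destination="example-chat-archive",  # S3 bucket with CRR and a DLP-owned KMS key
        destinationPrefix="transcripts",
    )
    # Note: only one export task can be active per account at a time, so the
    # five-minute schedule must allow each task to finish before the next run.
```

The five-minute export cadence keeps the design within the 15-minute RPO, while S3 cross-region replication and the team-controlled KMS key on the bucket satisfy the disaster-recovery and encryption-at-rest requirements.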

How can multiple compute resources be used on the same pipeline in AWS Data Pipeline?

A. You can use multiple compute resources on the same pipeline by defining multiple cluster objects in your definition file and associating the cluster to use for each activity via its runsOn field.
B. You can use multiple compute resources on the same pipeline by defining multiple cluster definition files.
C. You can use multiple compute resources on the same pipeline by defining multiple clusters for your activity.
D. You cannot use multiple compute resources on the same pipeline.
Suggested answer: A

Explanation:

Multiple compute resources can be used on the same pipeline in AWS Data Pipeline by defining multiple cluster objects in your definition file and associating the cluster to use for each activity via its runsOn field. This allows pipelines to combine AWS and on-premises resources, or to use a mix of instance types for their activities.

Reference:

https://aws.amazon.com/datapipeline/faqs/
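
As an illustration, here is a minimal boto3 sketch of a pipeline definition in which two EmrCluster objects are declared and each activity selects its cluster through a runsOn reference; the object IDs and pipeline ID are placeholders, and the objects are trimmed to the runsOn wiring (a real definition would also carry schedules, steps, and cluster sizing fields).

```python
import boto3

dp = boto3.client("datapipeline")

objects = [
    # Two compute resources in the same definition file
    {"id": "EmrSmall", "name": "EmrSmall",
     "fields": [{"key": "type", "stringValue": "EmrCluster"}]},
    {"id": "EmrLarge", "name": "EmrLarge",
     "fields": [{"key": "type", "stringValue": "EmrCluster"}]},
    # Each activity picks its cluster via runsOn
    {"id": "LightActivity", "name": "LightActivity",
     "fields": [{"key": "type", "stringValue": "EmrActivity"},
                {"key": "runsOn", "refValue": "EmrSmall"}]},
    {"id": "HeavyActivity", "name": "HeavyActivity",
     "fields": [{"key": "type", "stringValue": "EmrActivity"},
                {"key": "runsOn", "refValue": "EmrLarge"}]},
]

dp.put_pipeline_definition(pipelineId="df-0123456789ABCDEF",
                           pipelineObjects=objects)
```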

A user has created a VPC with CIDR 20.0.0.0/16 and has created one subnet with CIDR 20.0.0.0/16 in this VPC. The user is trying to create another subnet in the same VPC with CIDR 20.0.0.1/24. What will happen in this scenario?

A. The VPC will modify the first subnet CIDR automatically to allow the second subnet IP range.
B. The second subnet will be created.
C. It will throw a CIDR overlaps error.
D. It is not possible to create a subnet with the same CIDR as the VPC.
Suggested answer: C

Explanation:

A Virtual Private Cloud (VPC) is a virtual network dedicated to the user's AWS account. A user can create subnets within a VPC and launch instances inside them. A subnet may be as large as the VPC itself; however, once a subnet consumes the entire VPC range, no other subnet can be created, because any additional subnet's CIDR would overlap the first one.

Reference: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Subnets.html
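The behavior can be reproduced with a short boto3 sketch (written with 20.0.0.0/24, a valid form of the /24 in the question): the second create_subnet call fails with the InvalidSubnet.Conflict error code.

```python
import boto3
from botocore.exceptions import ClientError

ec2 = boto3.client("ec2")

vpc_id = ec2.create_vpc(CidrBlock="20.0.0.0/16")["Vpc"]["VpcId"]

# First subnet: same size as the VPC, which is allowed
ec2.create_subnet(VpcId=vpc_id, CidrBlock="20.0.0.0/16")

try:
    # Second subnet overlaps the first, so the call fails
    ec2.create_subnet(VpcId=vpc_id, CidrBlock="20.0.0.0/24")
except ClientError as err:
    print(err.response["Error"]["Code"])  # expected: InvalidSubnet.Conflict
```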

A company is using an Amazon EMR cluster to run its big data jobs. The cluster's jobs are invoked by AWS Step Functions Express Workflows that consume various Amazon Simple Queue Service (Amazon SQS) queues. The workload of this solution is variable and unpredictable. Amazon CloudWatch metrics show that the cluster's peak utilization is only 25% at times and that the cluster sits idle the rest of the time. A solutions architect must optimize the costs of the cluster without negatively impacting the time it takes to run the various jobs.

What is the MOST cost-effective solution that meets these requirements?

A. Modify the EMR cluster by turning on automatic scaling of the core nodes and task nodes with a custom policy that is based on cluster utilization. Purchase Reserved Instance capacity to cover the master node.
B. Modify the EMR cluster to use an instance fleet of Dedicated On-Demand Instances for the master node and core nodes, and to use Spot Instances for the task nodes. Define target capacity for each node type to cover the load.
C. Purchase Reserved Instances for the master node and core nodes. Terminate all existing task nodes in the EMR cluster.
D. Modify the EMR cluster to use capacity-optimized Spot Instances and a diversified task fleet. Define target capacity for each node type with a mix of On-Demand Instances and Spot Instances.
Suggested answer: D

Explanation:

Reference: https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-instance-fleet.html
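As a concrete sketch of option D, here is what a diversified, capacity-optimized task instance fleet could look like; the instance types, capacities, and names are illustrative assumptions, and the rest of the cluster definition (release label, roles, master and core fleets) is omitted.

```python
# One task fleet mixing On-Demand and capacity-optimized Spot across
# several instance pools to reduce the chance of Spot interruption.
task_fleet = {
    "Name": "task-fleet",
    "InstanceFleetType": "TASK",
    "TargetOnDemandCapacity": 2,   # baseline capacity that never runs on Spot
    "TargetSpotCapacity": 8,       # burst capacity bought at Spot prices
    "InstanceTypeConfigs": [       # diversification across instance pools
        {"InstanceType": "m5.xlarge", "WeightedCapacity": 1},
        {"InstanceType": "m5a.xlarge", "WeightedCapacity": 1},
        {"InstanceType": "r5.xlarge", "WeightedCapacity": 1},
    ],
    "LaunchSpecifications": {
        "SpotSpecification": {
            "AllocationStrategy": "capacity-optimized",
            "TimeoutDurationMinutes": 10,
            "TimeoutAction": "SWITCH_TO_ON_DEMAND",  # protect job run times
        }
    },
}
# task_fleet would be passed as one element of Instances={"InstanceFleets": [...]}
# in boto3.client("emr").run_job_flow(...).
```

The SWITCH_TO_ON_DEMAND timeout action keeps jobs from stalling when Spot capacity is unavailable, which is how the design avoids impacting job run times while still cutting cost.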

A solutions architect needs to migrate 50 TB of NFS data to Amazon S3. The files are on several NFS file servers on the corporate network. These are dense file systems containing tens of millions of small files. The system operators have configured the file interface on an AWS Snowball Edge device and are using a shell script to copy data. Developers report that copying the data to the Snowball Edge device is very slow. The solutions architect suspects this may be related to the overhead of encrypting all the small files and transporting them over the network.

Which changes can be made to speed up the data transfer?

A. Cluster two Snowball Edge devices together to increase the throughput of the devices.
B. Change the solution to use the S3 Adapter instead of the file interface on the Snowball Edge device.
C. Increase the number of parallel copy jobs to increase the throughput of the Snowball Edge device.
D. Connect directly to the USB interface on the Snowball Edge device and copy the files locally.
Suggested answer: B
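
To make option B concrete, here is a minimal sketch of boto3 pointed at the S3 adapter on the Snowball Edge; the adapter address and port, the local credentials, and the bucket and file names are illustrative assumptions taken from nothing in the question (the real values come from the device's connection details and the snowballEdge client).

```python
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://192.0.2.10:8443",   # S3 adapter on the device (assumed address)
    aws_access_key_id="DEVICE_ACCESS_KEY",    # placeholder local credentials
    aws_secret_access_key="DEVICE_SECRET_KEY",
    verify=False,  # or point verify at the device's self-signed certificate bundle
)

# Unlike the file interface, many such calls can run in parallel workers,
# which is where the throughput gain for tens of millions of small files comes from.
s3.upload_file("/mnt/nfs/photos/img-000001.jpg",
               "example-import-bucket", "photos/img-000001.jpg")
```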