ExamGecko

Amazon SAP-C01 Practice Test - Questions Answers, Page 9

List of questions

Question 81


A solutions architect is implementing federated access to AWS for users of the company’s mobile application. Due to regulatory and security requirements, the application must use a custom-built solution for authenticating users and must use IAM roles for authorization.

Which of the following actions would enable authentication and authorization and satisfy the requirements? (Choose two.)

A. Use a custom-built SAML-compatible solution for authentication and AWS SSO for authorization.
B. Create a custom-built LDAP connector using Amazon API Gateway and AWS Lambda for authentication. Store authorization tokens in Amazon DynamoDB, and validate authorization requests using another Lambda function that reads the credentials from DynamoDB.
C. Use a custom-built OpenID Connect-compatible solution with AWS SSO for authentication and authorization.
D. Use a custom-built SAML-compatible solution that uses LDAP for authentication and uses a SAML assertion to perform authorization to the IAM identity provider.
E. Use a custom-built OpenID Connect-compatible solution for authentication and use Amazon Cognito for authorization.
Suggested answer: A, C
asked 16/09/2024

Question 82


An organization is hosting a scalable web application using AWS and has configured ELB and Auto Scaling to make the application scalable. Which of the following statements does not need to be followed for ELB when the application is hosted in a VPC?

A. The ELB and all the instances should be in the same subnet.
B. Configure the security group rules and network ACLs to allow traffic to be routed between the subnets in the VPC.
C. The internet-facing ELB should have a route table associated with the internet gateway.
D. The internet-facing ELB should be only in a public subnet.
Suggested answer: A

Explanation:

Amazon Virtual Private Cloud (Amazon VPC) allows the user to define a virtual networking environment in a private, isolated section of the Amazon Web Services (AWS) cloud. The user has complete control over the virtual networking environment.

Within this virtual private cloud, the user can launch AWS resources such as an ELB and EC2 instances. Two types of ELB are available with VPC: internet-facing and internal (private). An internet-facing ELB must be placed in a public subnet. After creating the public subnet, the user should associate its route table with the internet gateway so that the load balancer in the subnet can connect to the internet. The ELB and the instances can be in separate subnets; however, to allow communication between the instances and the ELB, the user must configure the security group rules and network ACLs to allow traffic to be routed between the subnets in the VPC.

Reference: http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/CreateVPCForELB.html
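The public-versus-private distinction in the explanation comes down to a route-table check. As a minimal sketch (route tables modeled as plain dicts with invented IDs, not a real AWS API call), a subnet is "public" when its route table sends 0.0.0.0/0 to an internet gateway:

```python
# Sketch: a subnet counts as "public" if its route table has a default route
# (0.0.0.0/0) whose target is an internet gateway (igw-*). Dict shapes and
# resource IDs below are illustrative, not real AWS objects.

def is_public_subnet(route_table: dict) -> bool:
    """Return True if any route sends 0.0.0.0/0 to an internet gateway."""
    return any(
        route.get("DestinationCidrBlock") == "0.0.0.0/0"
        and route.get("GatewayId", "").startswith("igw-")
        for route in route_table.get("Routes", [])
    )

# Public subnet: default route points at an internet gateway.
public_rt = {"Routes": [
    {"DestinationCidrBlock": "10.0.0.0/16", "GatewayId": "local"},
    {"DestinationCidrBlock": "0.0.0.0/0", "GatewayId": "igw-0abc123"},
]}

# Private subnet: default route points at a NAT gateway instead.
private_rt = {"Routes": [
    {"DestinationCidrBlock": "10.0.0.0/16", "GatewayId": "local"},
    {"DestinationCidrBlock": "0.0.0.0/0", "NatGatewayId": "nat-0def456"},
]}
```

This is the association the explanation describes: placing the ELB in a subnet is not enough; the subnet's route table must carry the internet-gateway route.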


Question 83


A web startup runs its very successful social news application on Amazon EC2 with an Elastic Load Balancer, an Auto Scaling group of Java/Tomcat application servers, and DynamoDB as the data store. The main web application runs best on m2.xlarge instances because it is highly memory-bound. Each new deployment requires semi-automated creation and testing of a new AMI for the application servers, which takes quite a while and is therefore done only once per week. Recently, a new chat feature was implemented in Node.js and needs to be integrated into the architecture. First tests show that the new component is CPU-bound. Because the company has some experience with Chef, it decided to streamline the deployment process and use AWS OpsWorks as an application lifecycle tool to simplify management of the application and reduce deployment cycles. What configuration in AWS OpsWorks is necessary to integrate the new chat module in the most cost-efficient and flexible way?

A. Create one AWS OpsWorks stack, create one AWS OpsWorks layer, and create one custom recipe.
B. Create one AWS OpsWorks stack, create two AWS OpsWorks layers, and create one custom recipe.
C. Create two AWS OpsWorks stacks, create two AWS OpsWorks layers, and create one custom recipe.
D. Create two AWS OpsWorks stacks, create two AWS OpsWorks layers, and create two custom recipes.
Suggested answer: B

Question 84


You have recently joined a startup company building sensors to measure street noise and air quality in urban areas. The company has been running a pilot deployment of around 100 sensors for 3 months. Each sensor uploads 1 KB of sensor data every minute to a backend hosted on AWS.

During the pilot, you measured a peak of 10 IOPS on the database, and you stored an average of 3 GB of sensor data per month in the database. The current deployment consists of a load-balanced, auto-scaled ingestion layer using EC2 instances and a PostgreSQL RDS database with 500 GB of standard storage. The pilot is considered a success, and your CEO has managed to get the attention of some potential investors. The business plan requires a deployment of at least 100K sensors, which the backend needs to support. You also need to store sensor data for at least two years to be able to compare year-over-year improvements.

To secure funding, you have to make sure that the platform meets these requirements and leaves room for further scaling. Which setup will meet the requirements?

A. Add an SQS queue to the ingestion layer to buffer writes to the RDS instance.
B. Ingest data into a DynamoDB table and move old data to a Redshift cluster.
C. Replace the RDS instance with a 6-node Redshift cluster with 96 TB of storage.
D. Keep the current architecture but upgrade RDS storage to 3 TB and 10K provisioned IOPS.
Suggested answer: C

Explanation:

The pilot solution is being scaled up by a factor of 1,000, which means it will require 72 TB of storage to retain 24 months' worth of data. This rules out RDS as a possible database solution, which leaves Redshift. DynamoDB is more cost-effective and scales better for ingestion than EC2 instances in an Auto Scaling group. A similar example solution from AWS can be used for reference.
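The storage arithmetic behind the answer can be checked directly (figures taken from the question; the 24 months comes from the two-year retention requirement):

```python
# Back-of-the-envelope check of the scaling arithmetic in the explanation.
pilot_sensors = 100
target_sensors = 100_000
scale_factor = target_sensors // pilot_sensors          # 1000x more sensors

pilot_storage_gb_per_month = 3                          # measured in the pilot
retention_months = 24                                   # two-year requirement

monthly_gb = pilot_storage_gb_per_month * scale_factor  # 3,000 GB = 3 TB/month
total_tb = monthly_gb * retention_months / 1000         # total over two years
print(total_tb)  # -> 72.0
```

72 TB comfortably exceeds RDS storage limits of the time and fits within the 96 TB Redshift cluster in option C.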


Question 85


You have subscribed to the AWS Business and Enterprise support plan.

Your business has a backlog of problems, and you need about 20 of your IAM users to open technical support cases. How many users can open technical support cases under the AWS Business and Enterprise support plan?

A. 5 users
B. 10 users
C. Unlimited
D. 1 user
Suggested answer: C

Explanation:

In the context of AWS support, the Business and Enterprise support plans allow an unlimited number of users to open technical support cases (supported by AWS Identity and Access Management (IAM)).

Reference: https://aws.amazon.com/premiumsupport/faqs/


Question 86


What RAID method is used on the Cloud Block Storage back-end to implement a very high level of reliability and performance?

A. RAID 1 (Mirror)
B. RAID 5 (Blocks striped, distributed parity)
C. RAID 10 (Blocks mirrored and striped)
D. RAID 2 (Bit level striping)
Suggested answer: C

Explanation:

Cloud Block Storage back-end storage volumes employ the RAID 10 method to provide a very high level of reliability and performance.

Reference: http://www.rackspace.com/knowledge_center/product-faq/cloud-block-storage


Question 87


A user is planning to launch multiple EC2 instances identical to a currently running instance.

Which of the following parameters is not copied by Amazon EC2 in the launch wizard when the user selects the option "Launch more like this"?

A. Termination protection
B. Tenancy setting
C. Storage
D. Shutdown behavior
Suggested answer: C

Explanation:

The Amazon EC2 console provides a "Launch more like this" wizard option that enables the user to use a current instance as a template for launching other instances. This option automatically populates the Amazon EC2 launch wizard with certain configuration details from the selected instance.

The following configuration details are copied from the selected instance into the launch wizard:

- AMI ID
- Instance type
- Availability Zone, or the VPC and subnet in which the selected instance is located
- Public IPv4 address. If the selected instance currently has a public IPv4 address, the new instance receives a public IPv4 address, regardless of the selected instance's default public IPv4 address setting. For more information about public IPv4 addresses, see Public IPv4 Addresses and External DNS Hostnames.
- Placement group, if applicable
- IAM role associated with the instance, if applicable
- Shutdown behavior setting (stop or terminate)
- Termination protection setting (true or false)
- CloudWatch monitoring (enabled or disabled)
- Amazon EBS-optimization setting (true or false)
- Tenancy setting, if launching into a VPC (shared or dedicated)
- Kernel ID and RAM disk ID, if applicable
- User data, if specified
- Tags associated with the instance, if applicable
- Security groups associated with the instance

The following configuration details are not copied from your selected instance; instead, the wizard applies their default settings or behavior:

- (VPC only) Number of network interfaces: The default is one network interface, which is the primary network interface (eth0).
- Storage: The default storage configuration is determined by the AMI and the instance type.

Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/launching-instance.html
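The copy behavior described above can be modeled as a simple filter. This is an illustrative sketch with invented attribute names, not the EC2 API itself: settings in the copied set carry over from the source instance, and everything else falls back to wizard defaults:

```python
# Illustrative model of "Launch more like this": copied settings carry over;
# the rest get wizard defaults. Attribute names here are invented shorthand.
COPIED = {
    "ami_id", "instance_type", "availability_zone", "placement_group",
    "iam_role", "shutdown_behavior", "termination_protection",
    "monitoring", "ebs_optimized", "tenancy", "kernel_id", "ramdisk_id",
    "user_data", "tags", "security_groups",
}
NOT_COPIED_DEFAULTS = {
    "network_interfaces": 1,                      # primary interface (eth0) only
    "storage": "AMI/instance-type default",       # storage is NOT copied
}

def launch_more_like_this(source: dict) -> dict:
    """Build the new launch config: copied settings plus wizard defaults."""
    config = {k: v for k, v in source.items() if k in COPIED}
    config.update(NOT_COPIED_DEFAULTS)
    return config

src = {"instance_type": "m5.large", "storage": "500 GiB gp3", "tenancy": "dedicated"}
new = launch_more_like_this(src)
```

Note how the source's custom storage configuration is discarded, which is exactly why option C (Storage) is the setting that is not copied.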


Question 88


A gaming company created a game leaderboard by using a Multi-AZ deployment of an Amazon RDS database. The number of users is growing, and the queries to get individual player rankings are getting slower over time. The company expects a surge in users for an upcoming version and wants to optimize the design for scalability and performance. Which solution will meet these requirements?

A. Migrate the database to Amazon DynamoDB. Store the leaderboard data in two different tables. Use Apache HiveQL JOIN statements to build the leaderboard.
B. Keep the leaderboard data in the RDS DB instance. Provision a Multi-AZ deployment of an Amazon ElastiCache for Redis cluster.
C. Stream the leaderboard data by using Amazon Kinesis Data Firehose with an Amazon S3 bucket as the destination. Query the S3 bucket by using Amazon Athena for the leaderboard.
D. Add a read-only replica to the RDS DB instance. Add an RDS Proxy database proxy.
Suggested answer: D

Explanation:

A read replica offloads the read-heavy player-ranking queries from the primary instance, and RDS Proxy pools and reuses database connections so the database can absorb the expected surge in users.

Reference: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/rds-proxy.html
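A minimal sketch of the read/write split that a read replica enables (endpoint names are invented): read-only statements go to the replica endpoint, everything else to the primary:

```python
# Sketch of application-side read/write routing with a read replica.
# Endpoint hostnames below are placeholders, not real RDS endpoints.
PRIMARY = "leaderboard.cluster-example.rds.amazonaws.com"
READ_REPLICA = "leaderboard-ro.cluster-example.rds.amazonaws.com"

def endpoint_for(sql: str) -> str:
    """Route SELECT statements to the replica; all writes go to the primary."""
    return READ_REPLICA if sql.lstrip().upper().startswith("SELECT") else PRIMARY
```

In practice the routing would live in the application's data-access layer or connection factory, while RDS Proxy sits in front of each endpoint to multiplex connections.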


Question 89


An IoT company has rolled out a fleet of sensors for monitoring temperatures in remote locations. Each device connects to AWS IoT Core and sends a message every 30 seconds, updating an Amazon DynamoDB table. A System Administrator uses AWS IoT to verify that the devices are still sending messages to AWS IoT Core, but the database is not updating. What should a Solutions Architect check to determine why the database is not being updated?

A. Verify the AWS IoT Device Shadow service is subscribed to the appropriate topic and is executing the AWS Lambda function.
B. Verify that AWS IoT monitoring shows that the appropriate AWS IoT rules are being executed, and that the AWS IoT rules are enabled with the correct rule actions.
C. Check the AWS IoT Fleet indexing service and verify that the thing group has the appropriate IAM role to update DynamoDB.
D. Verify that AWS IoT things are using MQTT instead of MQTT over WebSocket, then check that the provisioning has the appropriate policy attached.
Suggested answer: D

Question 90


A large company runs workloads in VPCs that are deployed across hundreds of AWS accounts. Each VPC consists of public subnets and private subnets that span across multiple Availability Zones. NAT gateways are deployed in the public subnets and allow outbound connectivity to the internet from the private subnets.

A solutions architect is working on a hub-and-spoke design. All private subnets in the spoke VPCs must route traffic to the internet through an egress VPC. The solutions architect already has deployed a NAT gateway in an egress VPC in a central AWS account.

Which set of additional steps should the solutions architect take to meet these requirements?

A. Create peering connections between the egress VPC and the spoke VPCs. Configure the required routing to allow access to the internet.
B. Create a transit gateway, and share it with the existing AWS accounts. Attach existing VPCs to the transit gateway. Configure the required routing to allow access to the internet.
C. Create a transit gateway in every account. Attach the NAT gateway to the transit gateways. Configure the required routing to allow access to the internet.
D. Create an AWS PrivateLink connection between the egress VPC and the spoke VPCs. Configure the required routing to allow access to the internet.
Suggested answer: B

Explanation:

A transit gateway created in the central account and shared with the other accounts through AWS Resource Access Manager provides transitive routing from hundreds of spoke VPCs to the central egress VPC. VPC peering cannot be used here because peering does not support edge-to-edge routing through a NAT gateway in the peered VPC.

Reference: https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html

Total 906 questions
