
Amazon SOA-C02 Practice Test - Questions Answers, Page 34


A SysOps administrator needs to configure an Amazon S3 bucket to host a web application. The SysOps administrator has created the S3 bucket and has copied the static files for the web application to the S3 bucket.

The company has a policy that all S3 buckets must not be public.

What should the SysOps administrator do to meet these requirements?

A. Create an Amazon CloudFront distribution. Configure the S3 bucket as an origin with an origin access identity (OAI). Give the OAI the s3:GetObject permission in the S3 bucket policy.
B. Configure static website hosting in the S3 bucket. Use Amazon Route 53 to create a DNS CNAME to point to the S3 website endpoint.
C. Create an Application Load Balancer (ALB). Change the protocol to HTTPS in the ALB listener configuration. Forward the traffic to the S3 bucket.
D. Create an accelerator in AWS Global Accelerator. Set up a listener configuration for port 443. Set the endpoint type to forward the traffic to the S3 bucket.
Suggested answer: A

Explanation:

To host a web application in an S3 bucket while adhering to the policy that prohibits public S3 buckets:

Amazon CloudFront: Set up a CloudFront distribution and designate the S3 bucket as its origin. This allows the web application to be served via CloudFront, which can handle web traffic at scale and provide additional features such as HTTPS delivery.

Origin Access Identity (OAI): Create an OAI for the CloudFront distribution and configure the S3 bucket policy to grant the s3:GetObject permission to the OAI. This allows only CloudFront to access the content in the S3 bucket, keeping the bucket private from direct public access.

Security and Performance: This configuration ensures that the web application is only accessible through CloudFront, enhancing security and performance. It also complies with the company's policy against public S3 buckets by controlling access strictly through CloudFront.

This method leverages CloudFront's capabilities to securely serve web applications from S3, maintaining privacy and compliance with organizational policies.
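As a rough sketch of the bucket-policy step, the following boto3 snippet grants s3:GetObject to an OAI; the bucket name and OAI ID are hypothetical placeholders, not values from the question.

```python
import json
import boto3

s3 = boto3.client("s3")

bucket = "example-webapp-bucket"  # hypothetical bucket name
oai_id = "E2EXAMPLE1OAI"          # hypothetical OAI ID from the distribution

# Allow only the CloudFront OAI to read objects; the bucket stays private.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowCloudFrontOAIReadOnly",
            "Effect": "Allow",
            "Principal": {
                "AWS": (
                    "arn:aws:iam::cloudfront:user/"
                    f"CloudFront Origin Access Identity {oai_id}"
                )
            },
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
        }
    ],
}

s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```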

A company recently deployed an application in production. The production environment currently runs on a single Amazon EC2 instance that hosts both the web application and a MariaDB database. Company policy states that all IT production environments must be highly available.

What should a SysOps administrator do to meet this requirement?

A. Migrate the database from the EC2 instance to an Amazon RDS for MariaDB Multi-AZ DB instance. Run the application on EC2 instances that are in an Auto Scaling group that extends across multiple Availability Zones. Place the EC2 instances behind a load balancer.
B. Migrate the database from the EC2 instance to an Amazon RDS for MariaDB Multi-AZ DB instance. Use AWS Application Migration Service to convert the application into an AWS Lambda function. Specify the Multi-AZ option for the Lambda function.
C. Copy the database to a different EC2 instance in a different Availability Zone. Use AWS Backup to create Amazon Machine Images (AMIs) of the application EC2 instance and the database EC2 instance. Create an AWS Lambda function that performs health checks every minute. In case of failure, configure the Lambda function to launch a new EC2 instance from the AMIs that AWS Backup created.
D. Migrate the database to a different EC2 instance. Place the application EC2 instance in an Auto Scaling group that extends across multiple Availability Zones. Create an Amazon Machine Image (AMI) from the database EC2 instance. Use the AMI to launch a second database EC2 instance in a different Availability Zone. Put the second database EC2 instance in the stopped state. Use the second database EC2 instance as a standby.
Suggested answer: A

Explanation:

To make the production environment highly available in accordance with company policy:

Database Migration: Move the MariaDB database from a single EC2 instance to Amazon RDS for MariaDB configured for Multi-AZ. This setup ensures high availability of the database with synchronous replication to a standby instance in a different Availability Zone.

Application Scalability: Deploy the application on EC2 instances within an Auto Scaling group. Configure the Auto Scaling group to operate across multiple Availability Zones to ensure that the application remains available even if one zone becomes unavailable.

Load Balancing: Place the EC2 instances behind an Elastic Load Balancer (ELB). The load balancer will distribute incoming application traffic across the multiple, geographically dispersed EC2 instances, further enhancing the availability and fault tolerance of the application.

This solution leverages AWS managed services to increase the reliability and availability of both the application and database layers, adhering to best practices for deploying critical production environments on AWS.
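For the database half of this answer, a minimal boto3 sketch might look like the following; the identifiers are hypothetical, and ManageMasterUserPassword assumes a recent boto3 release. Setting MultiAZ=True is what provisions the synchronous standby in a second Availability Zone.

```python
import boto3

rds = boto3.client("rds")

# Hypothetical identifiers; MultiAZ=True keeps a synchronous standby
# replica in a second Availability Zone and fails over automatically.
rds.create_db_instance(
    DBInstanceIdentifier="prod-mariadb",
    Engine="mariadb",
    DBInstanceClass="db.m5.large",
    AllocatedStorage=100,
    MasterUsername="admin",
    ManageMasterUserPassword=True,  # let RDS store the password in Secrets Manager
    MultiAZ=True,
)
```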

A SysOps administrator maintains the security and compliance of a company's AWS account. To ensure that the company's Amazon EC2 instances follow company policy, the SysOps administrator wants to terminate any EC2 instance that does not contain a department tag. Noncompliant resources must be terminated in near real time.

Which solution will meet these requirements?

A. Create an AWS Config rule with the required-tags managed rule to identify noncompliant resources. Configure automatic remediation to run the AWS-TerminateEC2Instance automation runbook to terminate noncompliant resources.
B. Create a new Amazon EventBridge rule to monitor when new EC2 instances are created. Send the event to an Amazon Simple Notification Service (Amazon SNS) topic for automatic remediation.
C. Ensure all users who can create EC2 instances also have the permissions to use the ec2:CreateTags and ec2:DescribeTags actions. Change the instance's shutdown behavior to terminate.
D. Ensure AWS Systems Manager Compliance is configured to manage the EC2 instances. Call the AWS-StopEC2Instances automation runbook to stop noncompliant resources.
Suggested answer: A

Explanation:

To enforce compliance with tagging policies in real-time:

AWS Config Setup: Implement an AWS Config rule to continuously monitor and evaluate EC2 instances for compliance with the tagging requirements. The required-tags managed rule can be configured to specifically check for the presence of a 'department' tag.

Automatic Remediation: Configure AWS Config to automatically execute the AWS-TerminateEC2Instance Systems Manager Automation document as a remediation action. This runbook will terminate any EC2 instance identified as noncompliant due to missing required tags.

Operational Efficiency: This setup allows for the enforcement of company tagging policies automatically and in near real-time, reducing the manual overhead of monitoring and ensuring compliance.

This method provides an efficient and effective solution to ensure that all EC2 instances meet the company's tagging requirements and that any noncompliant instances are dealt with promptly.
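A minimal boto3 sketch of this setup, assuming hypothetical rule and role names, could look like this:

```python
import boto3

config = boto3.client("config")

# Managed rule that flags EC2 instances missing a "department" tag.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "ec2-department-tag-required",
        "Source": {"Owner": "AWS", "SourceIdentifier": "REQUIRED_TAGS"},
        "InputParameters": '{"tag1Key": "department"}',
        "Scope": {"ComplianceResourceTypes": ["AWS::EC2::Instance"]},
    }
)

# Automatic remediation: terminate noncompliant instances with the
# AWS-TerminateEC2Instance runbook. The role ARN is a placeholder.
config.put_remediation_configurations(
    RemediationConfigurations=[
        {
            "ConfigRuleName": "ec2-department-tag-required",
            "TargetType": "SSM_DOCUMENT",
            "TargetId": "AWS-TerminateEC2Instance",
            "Automatic": True,
            "MaximumAutomaticAttempts": 3,
            "RetryAttemptSeconds": 60,
            "Parameters": {
                "InstanceId": {"ResourceValue": {"Value": "RESOURCE_ID"}},
                "AutomationAssumeRole": {
                    "StaticValue": {
                        "Values": ["arn:aws:iam::111122223333:role/ConfigRemediationRole"]
                    }
                },
            },
        }
    ]
)
```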

A company has deployed an application on AWS. The application runs on a fleet of Linux Amazon EC2 instances that are in an Auto Scaling group. The Auto Scaling group is configured to use launch templates. The launch templates launch Amazon Elastic Block Store (Amazon EBS)-backed EC2 instances that use General Purpose SSD (gp3) EBS volumes for primary storage.

A SysOps administrator needs to implement a solution to ensure that all the EC2 instances can share the same underlying files. The solution also must ensure that the data is consistent.

Which solution will meet these requirements?

A. Create an Amazon Elastic File System (Amazon EFS) file system. Create a new launch template version that includes user data that mounts the EFS file system. Update the Auto Scaling group to use the new launch template version to cycle in newer EC2 instances and to terminate the older EC2 instances.
B. Enable Multi-Attach on the EBS volumes. Create a new launch template version that includes user data that mounts the EBS volume. Update the Auto Scaling group to use the new template version to cycle in newer EC2 instances and to terminate the older EC2 instances.
C. Create a cron job that synchronizes the data between the EBS volumes for all the EC2 instances in the Auto Scaling group. Create a lifecycle hook during instance launch to configure the cron job on all the EC2 instances. Rotate out the older EC2 instances.
D. Create a new launch template version that creates an Amazon Elastic File System (Amazon EFS) file system. Update the Auto Scaling group to use the new template version to cycle in newer EC2 instances and to terminate the older EC2 instances.
Suggested answer: A

Explanation:

The requirement to share the same underlying files among EC2 instances with data consistency is best met by Amazon Elastic File System (Amazon EFS), which supports concurrent access from multiple EC2 instances. A new launch template version should include user data that mounts the EFS file system on each instance the Auto Scaling group launches. Older instances can then be cycled out so that all instances use the new configuration. Option A is correct and provides the necessary solution while ensuring data consistency and availability. For implementation guidance, refer to the AWS documentation on integrating Amazon EFS with EC2.
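One way to script the new launch template version is sketched below with boto3; the template name, file system ID, and Auto Scaling group name are hypothetical placeholders.

```python
import base64
import boto3

ec2 = boto3.client("ec2")
autoscaling = boto3.client("autoscaling")

# User data that mounts a (hypothetical) EFS file system at boot.
user_data = """#!/bin/bash
yum install -y amazon-efs-utils
mkdir -p /mnt/shared
mount -t efs fs-0123456789abcdef0:/ /mnt/shared
echo 'fs-0123456789abcdef0:/ /mnt/shared efs _netdev 0 0' >> /etc/fstab
"""

# New launch template version based on the latest one.
ec2.create_launch_template_version(
    LaunchTemplateName="app-fleet",
    SourceVersion="$Latest",
    LaunchTemplateData={"UserData": base64.b64encode(user_data.encode()).decode()},
)

# Point the Auto Scaling group at the new version and cycle instances.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="app-fleet-asg",
    LaunchTemplate={"LaunchTemplateName": "app-fleet", "Version": "$Latest"},
)
autoscaling.start_instance_refresh(AutoScalingGroupName="app-fleet-asg")
```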

A SysOps administrator is re-architecting an application. The SysOps administrator has moved the database from a public subnet, where the database used a public endpoint, into a private subnet to restrict access from the public network. After this change, an AWS Lambda function that requires read access to the database cannot connect to the database. The SysOps administrator must resolve this issue without compromising security.

Which solution meets these requirements?

A. Create an AWS PrivateLink interface endpoint for the Lambda function. Connect to the database using its private endpoint.
B. Connect the Lambda function to the database VPC. Connect to the database using its private endpoint.
C. Attach an IAM role to the Lambda function with read permissions to the database.
D. Move the database to a public subnet. Use security groups for secure access.
Suggested answer: B

Explanation:

To resolve the issue of an AWS Lambda function being unable to connect to a database that has been moved to a private subnet, the Lambda function must be connected to the same VPC as the database. This is done by configuring the Lambda function with VPC access: specify the VPC, subnets, and security groups for the function so that it can reach the database through its private endpoint. Option B is correct because it directly addresses the issue without compromising security. The AWS documentation on configuring VPC access for Lambda provides guidance on this setup.
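A minimal boto3 sketch of that configuration change, with hypothetical function, subnet, and security group IDs, might be:

```python
import boto3

lam = boto3.client("lambda")

# Attach the function to the database VPC. The function's execution
# role also needs permission to manage ENIs (for example, via the
# AWSLambdaVPCAccessExecutionRole managed policy).
lam.update_function_configuration(
    FunctionName="read-reports",
    VpcConfig={
        "SubnetIds": ["subnet-0aaa1111bbb22222c", "subnet-0ddd3333eee44444f"],
        "SecurityGroupIds": ["sg-0123456789abcdef0"],
    },
)
```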

A company is running Amazon RDS for PostgreSQL Multi-AZ DB clusters. The company uses an AWS CloudFormation template to create the databases individually with a default size of 100 GB. The company creates the databases every Monday and deletes the databases every Friday.

Occasionally, the databases run low on disk space and initiate an Amazon CloudWatch alarm. A SysOps administrator must prevent the databases from running low on disk space in the future.

Which solution will meet these requirements with the FEWEST changes to the application?

A. Modify the CloudFormation template to use Amazon Aurora PostgreSQL as the DB engine.
B. Modify the CloudFormation template to use Amazon DynamoDB as the database. Activate storage auto scaling during creation of the tables.
C. Modify the CloudFormation template to activate storage auto scaling on the existing DB instances.
D. Create a CloudWatch alarm to monitor DB instance storage space. Configure the alarm to invoke the VACUUM command.
Suggested answer: C

Explanation:

To prevent Amazon RDS for PostgreSQL Multi-AZ DB instances from running low on disk space, enabling storage auto scaling is the most straightforward solution. This feature automatically increases the storage capacity of the DB instance when it approaches its limit, preventing the database from running out of space and triggering CloudWatch alarms. Option C is the least intrusive and most effective solution because it only requires a modification to the existing CloudFormation template to enable storage auto scaling. For reference, see the AWS documentation on managing RDS storage automatically.
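In a CloudFormation template this is the MaxAllocatedStorage property on the AWS::RDS::DBInstance resource; the equivalent API call is sketched below with boto3, using a hypothetical instance identifier and ceiling.

```python
import boto3

rds = boto3.client("rds")

# Setting MaxAllocatedStorage enables storage auto scaling: RDS grows
# the volume automatically toward this ceiling when free space runs low.
rds.modify_db_instance(
    DBInstanceIdentifier="weekly-postgres",
    MaxAllocatedStorage=500,  # ceiling in GiB; hypothetical value
    ApplyImmediately=True,
)
```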

A SysOps administrator manages a company's Amazon S3 buckets. The SysOps administrator has identified 5 GB of incomplete multipart uploads in an S3 bucket in the company's AWS account. The SysOps administrator needs to reduce the number of incomplete multipart upload objects in the S3 bucket.

Which solution will meet this requirement?

A. Create an S3 Lifecycle rule on the S3 bucket to delete expired object delete markers or incomplete multipart uploads.
B. Require users that perform uploads of files into Amazon S3 to use the S3 TransferUtility.
C. Enable S3 Versioning on the S3 bucket that contains the incomplete multipart uploads.
D. Create an S3 Object Lambda Access Point to delete incomplete multipart uploads.
Suggested answer: A

Explanation:

To manage incomplete multipart uploads in an Amazon S3 bucket, creating an S3 Lifecycle rule that targets these uploads is the most effective method. The rule can be configured to automatically abort incomplete multipart uploads and delete the associated parts, which cleans up unused data and reduces storage costs. Option A is correct as it directly addresses the requirement. For setup details, see the AWS documentation on Amazon S3 Lifecycle configuration.
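A minimal sketch of such a rule with boto3, assuming a hypothetical bucket name and a seven-day window:

```python
import boto3

s3 = boto3.client("s3")

# Abort any multipart upload that has not completed within 7 days;
# S3 then deletes the orphaned parts and stops billing for them.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "abort-stale-multipart-uploads",
                "Status": "Enabled",
                "Filter": {},  # empty filter applies to the whole bucket
                "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
            }
        ]
    },
)
```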

A team of developers is using several Amazon S3 buckets as centralized repositories. Users across the world upload large sets of files to these repositories. The development team's applications later process these files.

A SysOps administrator sets up a new S3 bucket, DOC-EXAMPLE-BUCKET, to support a new workload. The new S3 bucket also receives regular uploads of large sets of files from users worldwide. When the new S3 bucket is put into production, the upload performance from certain geographic areas is lower than the upload performance that the existing S3 buckets provide.

What should the SysOps administrator do to remediate this issue?

A. Provision an Amazon ElastiCache for Redis cluster for the new S3 bucket. Provide the developers with the configuration endpoint of the cluster for use in their API calls.
B. Add the new S3 bucket to a new Amazon CloudFront distribution. Provide the developers with the domain name of the new distribution for use in their API calls.
C. Enable S3 Transfer Acceleration for the new S3 bucket. Verify that the developers are using the DOC-EXAMPLE-BUCKET.s3-accelerate.amazonaws.com endpoint name in their API calls.
D. Use S3 multipart upload for the new S3 bucket. Verify that the developers are using Region-specific S3 endpoint names such as DOC-EXAMPLE-BUCKET.s3.[Region].amazonaws.com in their API calls.
Suggested answer: C

Explanation:

To improve upload performance globally for an Amazon S3 bucket, enabling S3 Transfer Acceleration is the best solution. This feature optimizes transfers to S3 by using Amazon CloudFront's globally distributed edge locations: uploads are first routed to a nearby AWS edge location and then travel to S3 over an optimized network path. Option C is correct, and the developers should use the bucket's s3-accelerate endpoint in their API calls. For more details, consult the AWS documentation on S3 Transfer Acceleration.
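As a sketch, enabling the feature and routing uploads through the accelerate endpoint with boto3 could look like this (DOC-EXAMPLE-BUCKET is the placeholder name from the question; the file and key are hypothetical):

```python
import boto3
from botocore.config import Config

# One-time setup: turn on Transfer Acceleration for the bucket.
s3 = boto3.client("s3")
s3.put_bucket_accelerate_configuration(
    Bucket="DOC-EXAMPLE-BUCKET",
    AccelerateConfiguration={"Status": "Enabled"},
)

# Uploads must then use the <bucket>.s3-accelerate.amazonaws.com endpoint,
# which this client configuration selects automatically.
accel = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
accel.upload_file("large-dataset.tar.gz", "DOC-EXAMPLE-BUCKET",
                  "uploads/large-dataset.tar.gz")
```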

A company is planning to host an application on a set of Amazon EC2 instances that are distributed across multiple Availability Zones. The application must be able to scale to millions of requests each second.

A SysOps administrator must design a solution to distribute the traffic to the EC2 instances. The solution must be optimized to handle sudden and volatile traffic patterns while using a single static IP address for each Availability Zone.

Which solution will meet these requirements?

A. Amazon Simple Queue Service (Amazon SQS) queue
B. Application Load Balancer
C. AWS Global Accelerator
D. Network Load Balancer
Suggested answer: D

Explanation:

'Network Load Balancer is optimized to handle sudden and volatile traffic patterns while using a single static IP address per Availability Zone.' https://aws.amazon.com/elasticloadbalancing/network-load-balancer/

For an application that must scale to millions of requests per second and requires a single static IP address in each Availability Zone, a Network Load Balancer (NLB) is the most suitable option. NLBs are designed for high-performance, low-latency networking, and they support one static IP address per Availability Zone, making them ideal for sudden and volatile traffic patterns. Option D is the correct choice. AWS provides extensive documentation on the NLB capabilities and configurations that suit these requirements.
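A sketch of creating such an NLB with one static Elastic IP per Availability Zone follows; all IDs are hypothetical placeholders.

```python
import boto3

elbv2 = boto3.client("elbv2")

# SubnetMappings pins a pre-allocated Elastic IP (a static address)
# to the load balancer in each Availability Zone.
elbv2.create_load_balancer(
    Name="app-nlb",
    Type="network",
    Scheme="internet-facing",
    SubnetMappings=[
        {"SubnetId": "subnet-0aaa1111bbb22222c",
         "AllocationId": "eipalloc-0123456789abcdef0"},
        {"SubnetId": "subnet-0ddd3333eee44444f",
         "AllocationId": "eipalloc-0aaa9876543210fff"},
    ],
)
```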

A company has an AWS Site-to-Site VPN connection between on-premises resources and resources that are hosted in a VPC. A SysOps administrator launches an Amazon EC2 instance that has only a private IP address into a private subnet in the VPC. The EC2 instance runs Microsoft Windows Server.

A security group for the EC2 instance has rules that allow inbound traffic from the on-premises network over the VPN connection. The on-premises environment contains a third-party network firewall. Rules in the third-party network firewall allow Remote Desktop Protocol (RDP) traffic from the on-premises users to flow over the VPN connection.

The on-premises users are unable to connect to the EC2 instance and receive a timeout error.

What should the SysOps administrator do to troubleshoot this issue?

A. Create Amazon CloudWatch logs for the EC2 instance to check for blocked traffic.
B. Create Amazon CloudWatch logs for the Site-to-Site VPN connection to check for blocked traffic.
C. Create VPC flow logs for the EC2 instance's elastic network interface to check for rejected traffic.
D. Instruct users to use EC2 Instance Connect as a connection method.
Suggested answer: C

Explanation:

To troubleshoot connectivity to an EC2 instance that is not reachable over RDP from the on-premises network, VPC flow logs are the most direct and useful tool. Flow logs capture information about the IP traffic going to and from network interfaces in a VPC, which shows whether traffic to the EC2 instance is being accepted or rejected. Setting up flow logs for the instance's elastic network interface will pinpoint any blocked or dropped traffic that could be causing the timeout error. Option C is the correct action because it directly inspects the traffic flow, which is crucial for resolving connectivity issues. The AWS documentation on VPC flow logs provides further details.
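A sketch of enabling such a flow log on the instance's network interface, with hypothetical IDs, log group name, and role ARN:

```python
import boto3

ec2 = boto3.client("ec2")

# Capture rejected traffic on the instance's elastic network interface
# and deliver the records to CloudWatch Logs for inspection.
ec2.create_flow_logs(
    ResourceType="NetworkInterface",
    ResourceIds=["eni-0123456789abcdef0"],
    TrafficType="REJECT",
    LogDestinationType="cloud-watch-logs",
    LogGroupName="/vpc/flow-logs/rdp-troubleshooting",
    DeliverLogsPermissionArn="arn:aws:iam::111122223333:role/FlowLogsRole",
)
```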
