Amazon SOA-C02 Practice Test - Questions Answers, Page 33


A company hosts an internet-facing web application on Amazon EC2 instances. The company is replacing the application with a new AWS Lambda function. During a transition period, the company must route some traffic to the legacy application and some traffic to the new Lambda function. The company needs to use the URL path of the request to determine the routing.

Which solution will meet these requirements?

A. Configure a Gateway Load Balancer to use the URL path to direct traffic to the legacy application and the new Lambda function.

B. Configure a Network Load Balancer to use the URL path to direct traffic to the legacy application and the new Lambda function.

C. Configure a Network Load Balancer to use a regular expression to match the URL path to direct traffic to the new Lambda function.

D. Configure an Application Load Balancer to use the URL path to direct traffic to the legacy application and the new Lambda function.
Suggested answer: D

Explanation:

To route traffic based on the URL path during a transition period where both an EC2-based legacy application and a new AWS Lambda function are in use:

Use of Application Load Balancer (ALB): ALBs support advanced request routing based on the URL path, among other criteria. This capability allows the ALB to evaluate the URL path of incoming requests and route them appropriately to either the legacy EC2 instances or the Lambda function.

Path-Based Routing Rules: Configure the ALB with rules that specify which URL paths should be directed to the EC2 instances and which should be routed to the Lambda function. For example, requests to /legacy/* might go to the EC2 instances, while /new/* could be directed to the Lambda function.

Integration with Lambda: ALBs can directly invoke Lambda functions in response to HTTP requests, making them ideal for scenarios where both server-based and serverless components are used in tandem.

This setup not only facilitates a smooth transition by enabling simultaneous operation of both components but also leverages the native capabilities of ALBs to manage traffic based on application requirements effectively.
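
As an illustration, the snippet below is a minimal sketch (using boto3) of a path-based listener rule that forwards requests matching an assumed /new/* prefix to a Lambda target group, while the listener's default rule continues to send everything else to the legacy EC2 target group. The listener and target group ARNs are placeholders, and the Lambda target group is assumed to already have the Lambda function registered.

```python
import boto3

# Minimal sketch: add a path-based routing rule to an existing ALB listener.
# The ARNs and the /new/* path prefix are placeholders.
elbv2 = boto3.client("elbv2")

elbv2.create_rule(
    ListenerArn=(
        "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
        "listener/app/web-alb/0123456789abcdef/0123456789abcdef"
    ),
    Priority=10,
    Conditions=[
        {"Field": "path-pattern", "PathPatternConfig": {"Values": ["/new/*"]}}
    ],
    Actions=[
        {
            "Type": "forward",
            "TargetGroupArn": (
                "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
                "targetgroup/lambda-tg/0123456789abcdef"
            ),
        }
    ],
)
```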

A SysOps administrator manages policies for many AWS member accounts in an AWS Organizations structure. Administrators on other teams have access to the account root user credentials of the member accounts. The SysOps administrator must prevent all teams, including their administrators, from using Amazon DynamoDB. The solution must not affect the ability of the teams to access other AWS services.

Which solution will meet these requirements?

A. In all member accounts, configure IAM policies that deny access to all DynamoDB resources for all users, including the root user.

B. Create a service control policy (SCP) in the management account to deny all DynamoDB actions. Apply the SCP to the root of the organization.

C. In all member accounts, configure IAM policies that deny AmazonDynamoDBFullAccess to all users, including the root user.

D. Remove the default service control policy (SCP) in the management account. Create a replacement SCP that includes a single statement that denies all DynamoDB actions.
Suggested answer: B

Explanation:

To prevent all teams within an AWS Organizations structure from using Amazon DynamoDB while allowing access to other AWS services, the most effective solution is to use a Service Control Policy (SCP). SCPs apply at the organization, organizational unit (OU), or account level and can override individual IAM policies, including the root user's permissions:

B: Create a service control policy (SCP) in the management account to deny all DynamoDB actions. Apply the SCP to the root of the organization. This policy will effectively block DynamoDB actions across all member accounts without affecting the ability to access other AWS services. SCPs are powerful tools for centrally managing permissions in AWS Organizations and can enforce policy compliance across all accounts. Further information on SCPs and their usage can be found in the AWS documentation on Service Control Policies.
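
The following is a minimal sketch, using boto3 from the management account, of an SCP that denies all DynamoDB actions and is attached to the organization root; the policy name and root ID are placeholders.

```python
import json
import boto3

# Minimal sketch: create a deny-all-DynamoDB SCP and attach it to the
# organization root. The root ID below is a placeholder.
org = boto3.client("organizations")

scp_document = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Deny", "Action": "dynamodb:*", "Resource": "*"}
    ],
}

policy = org.create_policy(
    Name="DenyDynamoDB",
    Description="Block all DynamoDB actions in member accounts",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp_document),
)

org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="r-examplerootid",  # placeholder: the organization root ID
)
```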

A company hosts a production MySQL database on an Amazon Aurora single-node DB cluster. The database is queried heavily for reporting purposes. The DB cluster is experiencing periods of performance degradation because of high CPU utilization and maximum connections errors. A SysOps administrator needs to improve the stability of the database.

Which solution will meet these requirements?

A. Create an Aurora Replica node. Create an Auto Scaling policy to scale replicas based on CPU utilization. Ensure that all reporting requests use the read-only connection string.

B. Create a second Aurora MySQL single-node DB cluster in a second Availability Zone. Ensure that all reporting requests use the connection string for this additional node.

C. Create an AWS Lambda function that caches reporting requests. Ensure that all reporting requests call the Lambda function.

D. Create a multi-node Amazon ElastiCache cluster. Ensure that all reporting requests use the ElastiCache cluster. Use the database if the data is not in the cache.
Suggested answer: A

Explanation:

To alleviate performance degradation on a heavily queried Amazon Aurora DB cluster:

A: Create an Aurora Replica node and implement an Auto Scaling policy based on CPU utilization. Ensure all reporting requests use the read-only connection string to redirect read queries to the replica. This setup alleviates the load on the primary DB instance by balancing read traffic, which can significantly improve stability during periods of high demand. Aurora Replicas are ideal for scaling read operations and can improve the performance of the primary instance by offloading read requests. More details on Aurora Replicas and their benefits can be found in the AWS documentation on Amazon Aurora Replicas.
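
A minimal sketch of the scaling portion with boto3 is shown below; the cluster identifier, capacity range, and CPU target are placeholders, and Application Auto Scaling is assumed to manage the replica count for the cluster.

```python
import boto3

# Minimal sketch: let Application Auto Scaling manage the number of Aurora
# Replicas for the cluster based on average reader CPU utilization.
autoscaling = boto3.client("application-autoscaling")

autoscaling.register_scalable_target(
    ServiceNamespace="rds",
    ResourceId="cluster:reporting-aurora-cluster",
    ScalableDimension="rds:cluster:ReadReplicaCount",
    MinCapacity=1,
    MaxCapacity=5,
)

autoscaling.put_scaling_policy(
    PolicyName="reporting-replica-cpu-target",
    ServiceNamespace="rds",
    ResourceId="cluster:reporting-aurora-cluster",
    ScalableDimension="rds:cluster:ReadReplicaCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "RDSReaderAverageCPUUtilization"
        },
    },
)

# Reporting applications should then connect to the cluster's reader (read-only)
# endpoint rather than the cluster writer endpoint.
```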

A SysOps administrator is responsible for managing a fleet of Amazon EC2 instances. These EC2 instances upload build artifacts to a third-party service. The third-party service recently implemented a strict IP allow list that requires all build uploads to come from a single IP address.

What change should the SysOps administrator make to the existing build fleet to comply with this new requirement?

A. Move all of the EC2 instances behind a NAT gateway and provide the gateway IP address to the service.

B. Move all of the EC2 instances behind an internet gateway and provide the gateway IP address to the service.

C. Move all of the EC2 instances into a single Availability Zone and provide the Availability Zone IP address to the service.

D. Move all of the EC2 instances to a peered VPC and provide the VPC IP address to the service.
Suggested answer: A

Explanation:

To ensure all EC2 instances upload build artifacts through a single IP address:

A: Move all of the EC2 instances behind a NAT gateway. Provide the IP address of the NAT gateway to the third-party service for the allow list. A NAT gateway enables instances in a private subnet to connect to services outside AWS (such as a third-party service) but prevents the internet from initiating connections with those instances. Using a NAT gateway standardizes all outgoing traffic to use a single IP address. More information can be found in the AWS documentation on NAT gateways.
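
A minimal sketch with boto3 follows, under the assumption that the build fleet sits in a private subnet and the NAT gateway is created in a public subnet; all resource IDs are placeholders.

```python
import boto3

# Minimal sketch: allocate an Elastic IP, create a NAT gateway in a public
# subnet, and route the private (build fleet) subnet's internet-bound traffic
# through it so all uploads share one public IP address.
ec2 = boto3.client("ec2")

allocation = ec2.allocate_address(Domain="vpc")

nat_gateway = ec2.create_nat_gateway(
    SubnetId="subnet-0public1234567890",          # placeholder public subnet
    AllocationId=allocation["AllocationId"],
)

ec2.get_waiter("nat_gateway_available").wait(
    NatGatewayIds=[nat_gateway["NatGateway"]["NatGatewayId"]]
)

ec2.create_route(
    RouteTableId="rtb-0private1234567890",         # placeholder private route table
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat_gateway["NatGateway"]["NatGatewayId"],
)
```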

A SysOps administrator needs to monitor a process that runs on Linux Amazon EC2 instances. If the process stops, the process must restart automatically. The Amazon CloudWatch agent is already installed on all the EC2 instances.

Which solution will meet these requirements?

A. Add a procstat monitoring configuration to the CloudWatch agent for the process. Create an Amazon EventBridge event rule that initiates an AWS Systems Manager Automation runbook to restart the process after the process stops.

B. Add a StatsD monitoring configuration to the CloudWatch agent for the process. Create a CloudWatch alarm that initiates an AWS Systems Manager Automation runbook to restart the process after the process stops.

C. Add a StatsD monitoring configuration to the CloudWatch agent for the process. Create an Amazon EventBridge event rule that initiates an AWS Systems Manager Automation runbook to restart the process after the process stops.

D. Add a procstat monitoring configuration to the CloudWatch agent for the process. Create a CloudWatch alarm that initiates an AWS Systems Manager Automation runbook to restart the process after the process stops.
Suggested answer: A

Explanation:

https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Agent-procstat-process-metrics.html
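
For context, the snippet below is a minimal sketch of the procstat portion of a CloudWatch agent configuration, assuming a process matched by the pattern my-worker and the default agent configuration path on Amazon Linux. The agent then publishes procstat metrics such as pid_count, which the downstream automation reacts to when the process stops.

```python
import json

# Minimal sketch: a CloudWatch agent configuration fragment that collects
# procstat metrics for a process matched by the pattern "my-worker".
# The process pattern and configuration path are assumptions.
agent_config = {
    "metrics": {
        "metrics_collected": {
            "procstat": [
                {
                    "pattern": "my-worker",
                    "measurement": ["pid_count"],
                }
            ]
        }
    }
}

with open(
    "/opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json", "w"
) as config_file:
    json.dump(agent_config, config_file, indent=2)
```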


A SysOps administrator is preparing to deploy an application to Amazon EC2 instances that are in an Auto Scaling group. The application requires dependencies to be installed. Application updates are issued weekly.

The SysOps administrator needs to implement a solution to incorporate the application updates on a regular basis. The solution also must conduct a vulnerability scan during Amazon Machine Image (AMI) creation.

What is the MOST operationally efficient solution that meets these requirements?

A. Create a script that uses Packer. Schedule a cron job to run the script.

B. Install the application and its dependencies on an EC2 instance. Create an AMI of the EC2 instance.

C. Use EC2 Image Builder with a custom recipe to install the application and its dependencies.

D. Invoke the EC2 CreateImage API operation by using an Amazon EventBridge scheduled rule.
Suggested answer: C

Explanation:

To efficiently manage application deployments and updates on Amazon EC2 instances within an Auto Scaling group, along with ensuring security through vulnerability scans:

EC2 Image Builder: This AWS service automates the creation, management, and deployment of customized, secure, and up-to-date 'golden' server images. By using EC2 Image Builder, you can automate the installation of software, patches, and security configurations.

Custom Recipes: Define a custom recipe in EC2 Image Builder that includes steps to install the application and its dependencies. Additionally, configure the recipe to perform vulnerability scans as part of the image creation process.

Automated Pipeline: Set up an Image Builder pipeline that triggers on a regular schedule (e.g., weekly) to incorporate the latest application updates and security patches into the AMI. The new AMIs can then be automatically used by the Auto Scaling group to launch updated and secure instances.

This solution not only streamlines the management of application deployments and updates but also ensures that all instances launched by the Auto Scaling group meet the latest security and compliance standards, minimizing operational overhead and enhancing security.
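
The call below is a minimal sketch (boto3, placeholder ARNs) of a pipeline that rebuilds the AMI on a weekly schedule from a recipe that is assumed to install the application, its dependencies, and a vulnerability-scanning component.

```python
import uuid
import boto3

# Minimal sketch: an EC2 Image Builder pipeline that rebuilds the AMI weekly.
# All ARNs are placeholders; the recipe and infrastructure configuration are
# assumed to already exist.
imagebuilder = boto3.client("imagebuilder")

imagebuilder.create_image_pipeline(
    clientToken=str(uuid.uuid4()),
    name="weekly-app-ami-pipeline",
    imageRecipeArn=(
        "arn:aws:imagebuilder:us-east-1:123456789012:image-recipe/app-recipe/1.0.0"
    ),
    infrastructureConfigurationArn=(
        "arn:aws:imagebuilder:us-east-1:123456789012:"
        "infrastructure-configuration/app-infra"
    ),
    schedule={
        "scheduleExpression": "cron(0 6 ? * MON *)",  # every Monday, 06:00 UTC
        "pipelineExecutionStartCondition": "EXPRESSION_MATCH_ONLY",
    },
    imageTestsConfiguration={"imageTestsEnabled": True},
)
```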

A company has a public web application that experiences rapid traffic increases after advertisements appear on local television. The application runs on Amazon EC2 instances that are in an Auto Scaling group. The Auto Scaling group is not keeping up with the traffic surges after an advertisement runs. The company often needs to scale out to 100 EC2 instances during the traffic surges.

The instance startup times are lengthy because of a boot process that creates machine-specific data caches that are unique to each instance. The exact timing of when the advertisements will appear on television is not known. A SysOps administrator must implement a solution so that the application can function properly during the traffic surges.

Which solution will meet these requirements?

A. Create a warm pool. Keep enough instances in the Stopped state to meet the increased demand.

B. Start 100 instances. Allow the boot process to finish running. Store this data on the instance store volume before stopping the instances.

C. Increase the value of the instance warmup time in the scaling policy.

D. Use predictive scaling for the Auto Scaling group.
Suggested answer: A

Explanation:

To address the issue of slow startup times during unexpected traffic surges, a warm pool for the Auto Scaling group is an effective solution:

Warm Pool Concept: A warm pool allows you to maintain a set of pre-initialized or partially initialized EC2 instances that are not actively serving traffic but can be quickly brought online when needed.

Management of Instances: Instances in the warm pool can be kept in a stopped state and then started much more quickly than launching new instances, as the machine-specific data caches are already created.

Scalability and Responsiveness: During a surge in traffic, especially unpredictable ones like those triggered by advertisements, instances from the warm pool can be rapidly activated to handle the increased load, ensuring that the application remains responsive without the typical delays associated with boot processes.

This method significantly reduces the time to scale out by utilizing pre-warmed instances, enhancing the application's ability to cope with sudden and substantial increases in traffic.
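
A minimal sketch of configuring such a warm pool with boto3 is shown below; the Auto Scaling group name and capacity figures are placeholders.

```python
import boto3

# Minimal sketch: attach a warm pool of pre-initialized, stopped instances to
# the existing Auto Scaling group so scale-out skips the lengthy boot process.
autoscaling = boto3.client("autoscaling")

autoscaling.put_warm_pool(
    AutoScalingGroupName="web-app-asg",   # placeholder group name
    PoolState="Stopped",                  # keep warmed instances stopped to save cost
    MinSize=100,                          # enough pre-initialized instances for a surge
    MaxGroupPreparedCapacity=120,
)
```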

A company is running production workloads that use a Multi-AZ deployment of an Amazon RDS for MySQL db.m6g.xlarge (general purpose) standard DB instance. Users report that they are frequently encountering a 'too many connections' error. A SysOps administrator observes that the number of connections on the database is high.

The SysOps administrator needs to resolve this issue while keeping code changes to a minimum.

Which solution will meet these requirements MOST cost-effectively?

A. Modify the RDS for MySQL DB instance to a larger instance size.

B. Migrate the RDS for MySQL DB instance to Amazon DynamoDB.

C. Configure RDS Proxy. Modify the application configuration file to use the RDS Proxy endpoint.

D. Modify the RDS for MySQL DB instance to a memory optimized DB instance.
Suggested answer: C

Explanation:

For the issue of 'too many connections' on a MySQL database, using RDS Proxy offers a streamlined solution:

RDS Proxy Setup: RDS Proxy sits between your application and the database. It pools and efficiently manages database connections, which reduces the number of direct connections to the database.

Connection Management: By handling connection pooling, RDS Proxy can help mitigate issues related to connection overhead and limits, such as the 'too many connections' error, by allowing the database to serve more requests from a smaller and more stable number of connections.

Minimal Code Changes: Integrating RDS Proxy requires changes only to the database connection settings in the application's configuration files to point to the RDS Proxy endpoint instead of directly to the database. This minimizes the amount of code change needed and leverages RDS Proxy to handle connection scaling and management more efficiently.

This approach enhances database performance and scalability by efficiently managing connections without the need for significant application changes or database resizing.
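
A minimal sketch of provisioning the proxy with boto3 follows; the secret, IAM role, subnet IDs, and DB instance identifier are placeholders, and the application only needs its connection string changed to the proxy endpoint.

```python
import boto3

# Minimal sketch: create an RDS Proxy in front of the existing RDS for MySQL
# instance and register the instance as its target.
rds = boto3.client("rds")

rds.create_db_proxy(
    DBProxyName="mysql-app-proxy",
    EngineFamily="MYSQL",
    Auth=[
        {
            "AuthScheme": "SECRETS",
            "SecretArn": (
                "arn:aws:secretsmanager:us-east-1:123456789012:"
                "secret:mysql-app-credentials"       # placeholder secret
            ),
            "IAMAuth": "DISABLED",
        }
    ],
    RoleArn="arn:aws:iam::123456789012:role/rds-proxy-secrets-role",
    VpcSubnetIds=["subnet-0aaa1111", "subnet-0bbb2222"],
)

rds.register_db_proxy_targets(
    DBProxyName="mysql-app-proxy",
    DBInstanceIdentifiers=["production-mysql-instance"],
)

# The application then connects to the proxy endpoint (returned by
# describe_db_proxies) instead of the database endpoint.
```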

A development team created and deployed a new AWS Lambda function 15 minutes ago. Although the function was invoked many times, Amazon CloudWatch Logs is not showing any log messages.

What is one cause of this?

A. The developers did not enable log messages for this Lambda function.

B. The Lambda function's role does not include permissions to create CloudWatch Logs items.

C. The Lambda function raises an exception before the first log statement has been reached.

D. The Lambda function creates local log files that have to be shipped to CloudWatch Logs first before becoming visible.
Suggested answer: B

Explanation:

If AWS Lambda function logs are not appearing in Amazon CloudWatch, it is typically due to insufficient permissions:

IAM Role Permissions: The execution role assigned to the Lambda function must have the necessary permissions to interact with CloudWatch Logs. This includes actions like logs:CreateLogGroup, logs:CreateLogStream, and logs:PutLogEvents.

Check and Update Role: Verify that the IAM role used by the Lambda function includes a policy granting these permissions. If not, update the role to include these permissions.

Log Group and Stream: With the appropriate permissions, the Lambda function will be able to create or use a log group and stream in CloudWatch Logs and publish log messages accordingly.

Ensuring the Lambda function has the correct permissions is essential for diagnostics and monitoring, allowing log data to be captured and reviewed in CloudWatch Logs.
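
For example, the sketch below attaches the AWS managed policy AWSLambdaBasicExecutionRole (which grants logs:CreateLogGroup, logs:CreateLogStream, and logs:PutLogEvents) to the function's execution role; the role name is a placeholder.

```python
import boto3

# Minimal sketch: grant the Lambda execution role permission to write to
# CloudWatch Logs by attaching the AWS managed basic execution policy.
iam = boto3.client("iam")

iam.attach_role_policy(
    RoleName="my-lambda-execution-role",  # placeholder role name
    PolicyArn="arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole",
)
```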

A company manages a set of accounts on AWS by using AWS Organizations. The company's security team wants to use a native AWS service to regularly scan all AWS accounts against the Center for Internet Security (CIS) AWS Foundations Benchmark.

What is the MOST operationally efficient way to meet these requirements?

A. Designate a central security account as the AWS Security Hub administrator account. Create a script that sends an invitation from the Security Hub administrator account and accepts the invitation from the member account. Run the script every time a new account is created. Configure Security Hub to run the CIS AWS Foundations Benchmark scans.

B. Run the CIS AWS Foundations Benchmark across all accounts by using Amazon Inspector.

C. Designate a central security account as the Amazon GuardDuty administrator account. Create a script that sends an invitation from the GuardDuty administrator account and accepts the invitation from the member account. Run the script every time a new account is created. Configure GuardDuty to run the CIS AWS Foundations Benchmark scans.

D. Designate an AWS Security Hub administrator account. Configure new accounts in the organization to automatically become member accounts. Enable CIS AWS Foundations Benchmark scans.
Suggested answer: D

Explanation:

To ensure comprehensive and automated security scanning across multiple AWS accounts:

Security Hub Administrator Account: Designate one account within AWS Organizations as the Security Hub administrator account. This centralizes security findings management.

Automate Account Association: Configure Security Hub to automatically associate new accounts in the organization as member accounts. This ensures all new and existing accounts are continuously monitored under the same security policies.

Enable CIS Benchmark Scans: Within Security Hub, enable the CIS AWS Foundations Benchmark standard. This automatically scans all member accounts against this set of security best practices and compliance standards.

This configuration provides an operationally efficient and scalable way to manage security and compliance across an extensive AWS environment, leveraging the native integration of AWS services.
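
A minimal sketch with boto3 is shown below; the administrator account ID is a placeholder, and the standards ARN assumes the CIS v1.2.0 ruleset (adjust to the benchmark version you intend to use).

```python
import boto3

# Minimal sketch of the three steps described above. Calls 1 is made from the
# Organizations management account; calls 2 and 3 are made from the delegated
# Security Hub administrator account.
securityhub = boto3.client("securityhub")

# 1. Delegate a central security account as the Security Hub administrator.
securityhub.enable_organization_admin_account(AdminAccountId="111122223333")

# 2. Automatically enroll new organization accounts as Security Hub members.
securityhub.update_organization_configuration(AutoEnable=True)

# 3. Enable the CIS AWS Foundations Benchmark standard.
securityhub.batch_enable_standards(
    StandardsSubscriptionRequests=[
        {
            "StandardsArn": (
                "arn:aws:securityhub:::ruleset/cis-aws-foundations-benchmark/v/1.2.0"
            )
        }
    ]
)
```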
