Amazon SOA-C02 Practice Test - Questions Answers, Page 32
List of questions
Question 311

A SysOps administrator needs to configure the Amazon Route 53 hosted zone for example.com and www.example.com to point to an Application Load Balancer (ALB). Which combination of actions should the SysOps administrator take to meet these requirements? (Select TWO.)
Explanation:
An A record typically points to an IP address. However, in the case of an Application Load Balancer (ALB), you cannot use an A record with a hard-coded IP address, because the IP addresses of an ALB can change over time. Instead, you create an alias record that points to the DNS name of the ALB. An alias record is a Route 53 extension to DNS that allows you to route traffic to selected AWS resources, such as an ALB, by using a friendly DNS name, such as example.com, instead of the resource's IP address. To meet the requirement, create an alias A record for example.com and a second alias A record for www.example.com, both pointing to the ALB.
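As an illustrative sketch (the ALB DNS name is a placeholder, and the `AliasTarget.HostedZoneId` shown is the canonical hosted zone ID used for ALBs in us-east-1, not the ID of the example.com hosted zone), a Route 53 change batch for one of the two records might look like:

```json
{
  "Comment": "Alias example.com to the ALB; repeat with Name=www.example.com",
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "example.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z35SXDOTRQ7X7K",
          "DNSName": "my-alb-123456789.us-east-1.elb.amazonaws.com",
          "EvaluateTargetHealth": true
        }
      }
    }
  ]
}
```

The batch would be applied with `aws route53 change-resource-record-sets --hosted-zone-id <example.com zone ID> --change-batch file://change.json`.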
Question 312

A SysOps administrator deployed a three-tier web application to a QA environment and is now evaluating the high availability of the application. The SysOps administrator notices that, when they simulate an unavailable Availability Zone, the application fails to respond. The application stores data in Amazon RDS and Amazon DynamoDB.
How should the SysOps administrator resolve this issue?
Explanation:
To improve the high availability of an application that utilizes Amazon RDS and experiences failure when an Availability Zone becomes unavailable:
Multi-AZ Deployment for RDS: Enable Multi-AZ deployments for your Amazon RDS instance. This setting ensures that RDS automatically provisions and maintains a synchronous standby replica in a different Availability Zone.
Automatic Failover: In the event of a primary RDS instance failure, RDS will automatically failover to the standby so that database operations can resume quickly with minimal disruption.
High Availability Configuration: This configuration not only enhances the robustness of the database component but also ensures that the application remains operational even if one Availability Zone is experiencing issues.
Enabling Multi-AZ for RDS is crucial for maintaining high availability and ensuring that the application remains resilient in the face of AZ disruptions.
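As a minimal CloudFormation sketch (the resource name, engine, instance class, and storage size are illustrative assumptions), Multi-AZ is enabled on an RDS instance with the `MultiAZ` property:

```yaml
AppDatabase:
  Type: AWS::RDS::DBInstance
  Properties:
    Engine: mysql
    DBInstanceClass: db.t3.medium
    AllocatedStorage: "100"
    MasterUsername: admin
    MasterUserPassword: "{{resolve:secretsmanager:app-db-secret:SecretString:password}}"
    MultiAZ: true   # provisions and maintains a synchronous standby in a second AZ
```

With `MultiAZ: true`, RDS handles replication and failover automatically; no application-side change is needed beyond reconnecting through the same DB endpoint.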
Question 313

A company wants to reduce costs for jobs that can be completed at any time. The jobs currently run by using multiple Amazon EC2 On-Demand Instances, and the jobs take slightly less than 2 hours to complete. If a job fails for any reason, it must be restarted from the beginning.
Which solution will meet these requirements MOST cost-effectively?
Explanation:
To reduce costs effectively for jobs that are flexible in their scheduling and have a clear, predictable runtime:
Spot Instances with Defined Duration (Spot Blocks): Spot Instances offer significant discounts compared to On-Demand pricing. For workloads like the described jobs that have a predictable duration (slightly less than 2 hours), requesting Spot Instances with a defined duration (also known as Spot Blocks) is ideal. This option allows you to request Spot Instances that AWS will not interrupt because of price changes during the specified duration.
Cost Efficiency: This method ensures that the instances will run for the duration required to complete the jobs without interruption, unless AWS experiences an exceptional shortage of capacity. The cost savings compared to On-Demand Instances can be substantial, especially for regular, predictable workloads.
Risk Mitigation: Although Spot Instances can be interrupted, using them with a defined duration reduces the risk of interruptions within the set time frame, making them suitable for jobs that can tolerate a restart in rare cases of interruption after the block time expires.
This strategy combines cost savings with the performance requirements of the jobs, making it an optimal choice for tasks that are not time-critical but need completion within a predictable timeframe.
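For reference, a defined-duration Spot request was expressed with the `BlockDurationMinutes` parameter of `RequestSpotInstances` (the AMI ID, instance type, count, and price below are placeholders); note that AWS has since retired Spot Blocks for new workloads, so a current account would instead run standard Spot Instances and simply restart an interrupted job from the beginning:

```json
{
  "SpotPrice": "0.05",
  "InstanceCount": 3,
  "BlockDurationMinutes": 120,
  "LaunchSpecification": {
    "ImageId": "ami-0123456789abcdef0",
    "InstanceType": "c5.large"
  }
}
```

Setting `BlockDurationMinutes` to 120 covers the slightly-under-2-hour job runtime described in the question.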
Question 314

A company runs a worker process on three Amazon EC2 instances. The instances are in an Auto Scaling group that is configured to use a simple scaling policy. The instances process messages from an Amazon Simple Queue Service (Amazon SQS) queue.
Random periods of increased messages are causing a decrease in the performance of the worker process. A SysOps administrator must scale the instances to accommodate the increased number of messages.
Which solution will meet these requirements?
Explanation:
To manage scaling of EC2 instances in response to variable SQS message loads effectively:
Monitor SQS Queue Size: Utilize Amazon CloudWatch to monitor the number of visible messages in the SQS queue. This metric gives an indication of the workload that needs to be processed by the worker instances.
Metric Math Expression: Create a CloudWatch metric math expression that calculates the approximate number of messages visible per instance. This provides a more precise scaling metric, ensuring that each instance in the Auto Scaling group has a manageable load.
Target Tracking Scaling Policy: Implement a target tracking scaling policy based on this metric math expression. Configure the Auto Scaling group to automatically adjust its size to maintain a target value for the average number of SQS messages per instance. This approach ensures that the EC2 instances scale up during high traffic periods and scale down when the message load decreases.
This solution optimizes resource utilization and cost while maintaining performance by ensuring that the worker processes are neither overwhelmed nor idle.
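The metric math described above can be sketched as a target tracking configuration (the queue name, Auto Scaling group name, and target value are illustrative assumptions; `GroupInServiceInstances` requires group metrics collection to be enabled on the Auto Scaling group):

```json
{
  "TargetValue": 100,
  "CustomizedMetricSpecification": {
    "Metrics": [
      {
        "Id": "m1",
        "Label": "Visible messages in the queue",
        "MetricStat": {
          "Metric": {
            "Namespace": "AWS/SQS",
            "MetricName": "ApproximateNumberOfMessagesVisible",
            "Dimensions": [{ "Name": "QueueName", "Value": "worker-queue" }]
          },
          "Stat": "Sum"
        },
        "ReturnData": false
      },
      {
        "Id": "m2",
        "Label": "In-service instances",
        "MetricStat": {
          "Metric": {
            "Namespace": "AWS/AutoScaling",
            "MetricName": "GroupInServiceInstances",
            "Dimensions": [{ "Name": "AutoScalingGroupName", "Value": "worker-asg" }]
          },
          "Stat": "Average"
        },
        "ReturnData": false
      },
      {
        "Id": "e1",
        "Expression": "m1 / m2",
        "Label": "Messages per instance",
        "ReturnData": true
      }
    ]
  }
}
```

A configuration like this would be passed to `aws autoscaling put-scaling-policy --policy-type TargetTrackingScaling --target-tracking-configuration file://config.json`, after which the group scales to keep roughly 100 visible messages per instance.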
Question 315

A company's security policy states that connecting to Amazon EC2 instances is not permitted through SSH and RDP. If access is required, authorized staff can connect to instances by using AWS Systems Manager Session Manager.
Users report that they are unable to connect to one specific Amazon EC2 instance that is running Ubuntu and has AWS Systems Manager Agent (SSM Agent) pre-installed. These users are able to use Session Manager to connect to other instances in the same subnet, and they are in an IAM group that has Session Manager permissions for all instances.
What should a SysOps administrator do to resolve this issue?
Explanation:
If users are unable to connect to a specific Ubuntu EC2 instance using AWS Systems Manager Session Manager while other instances are accessible, the issue is likely due to IAM permissions:
Instance Profile Permissions: Ensure that the EC2 instance has the necessary IAM permissions to interact with Systems Manager. The AmazonSSMManagedInstanceCore managed policy includes permissions required for the SSM Agent on the instance to communicate with the AWS Systems Manager service.
Attach Managed Policy: Attach the AmazonSSMManagedInstanceCore policy to the IAM role that is associated with the Ubuntu instance's instance profile. This step is crucial as it authorizes the instance to use Systems Manager services, including Session Manager.
Verify Configuration and Connectivity: After updating the instance profile, verify that users can connect via Session Manager. This solution does not require any changes to network security settings like security groups.
By ensuring that the instance has the appropriate IAM permissions, you resolve issues related to access control and Systems Manager functionality, allowing authorized personnel to connect securely without using SSH or RDP.
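A minimal sketch of the required role and instance profile in CloudFormation (the resource names are assumptions; the managed policy ARN is the standard one):

```yaml
SsmInstanceRole:
  Type: AWS::IAM::Role
  Properties:
    AssumeRolePolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Effect: Allow
          Principal:
            Service: ec2.amazonaws.com
          Action: sts:AssumeRole
    ManagedPolicyArns:
      - arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore

SsmInstanceProfile:
  Type: AWS::IAM::InstanceProfile
  Properties:
    Roles:
      - !Ref SsmInstanceRole
```

Attaching `SsmInstanceProfile` to the Ubuntu instance lets the pre-installed SSM Agent register with Systems Manager, after which the instance appears as a managed node and Session Manager connections succeed.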
Question 316

A fleet of servers must send local logs to Amazon CloudWatch. How should the servers be configured to meet this requirement?
Explanation:
To send local logs from a fleet of servers to Amazon CloudWatch:
Install the Unified CloudWatch Agent: The unified CloudWatch agent is capable of collecting both logs and metrics from servers. This agent supports various operating systems and can be configured according to specific logging requirements.
Configuration of the Agent: The agent's configuration involves specifying which log files to monitor and how they should be processed. This configuration can be done manually or through the AWS Systems Manager for automated deployment across multiple servers.
Send Logs to CloudWatch: Once configured and running, the CloudWatch agent will continuously monitor the specified log files and send the log data to Amazon CloudWatch Logs, allowing you to view, search, and set alarms on log data.
This setup ensures a robust and scalable way to manage log data across a fleet of servers, leveraging AWS native services for seamless integration and management.
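A minimal agent configuration illustrating the log-collection section (the log file path and log group name are hypothetical), typically stored at `/opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json`:

```json
{
  "logs": {
    "logs_collected": {
      "files": {
        "collect_list": [
          {
            "file_path": "/var/log/app/application.log",
            "log_group_name": "fleet-application-logs",
            "log_stream_name": "{instance_id}"
          }
        ]
      }
    }
  }
}
```

Using `{instance_id}` as the log stream name gives each server in the fleet its own stream within the shared log group.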
Question 317

A company has 50 AWS accounts and wants to create an identical Amazon VPC in each account. Any changes the company makes to the VPCs in the future must be implemented on every VPC.
What is the MOST operationally efficient method to deploy and update the VPCs in each account?
Explanation:
To deploy and manage an identical Amazon VPC configuration across multiple AWS accounts efficiently:
AWS CloudFormation Template: Create a CloudFormation template that defines the VPC configuration. This template should include all necessary resources like subnets, route tables, internet gateways, etc.
Use CloudFormation StackSets: Utilize AWS CloudFormation StackSets to manage the deployment of the VPC template across the 50 AWS accounts. StackSets allow you to specify management and target accounts, automate deployments, and ensure consistency across all accounts.
Updating VPCs: When updates are required, modify the CloudFormation template and update the stack set. This will automatically apply the changes to all VPCs in the target accounts, ensuring uniformity and reducing operational overhead.
This method provides a centralized, consistent, and scalable way to manage resources across multiple AWS accounts, greatly simplifying the administration and ensuring compliance with organizational standards.
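A minimal sketch of such a template (CIDR ranges and logical names are illustrative assumptions):

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Description: Standard VPC deployed to all accounts via StackSets
Resources:
  Vpc:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16
      EnableDnsSupport: true
      EnableDnsHostnames: true
  PublicSubnet:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref Vpc
      CidrBlock: 10.0.0.0/24
```

The template would then be registered once with `aws cloudformation create-stack-set` and rolled out to the 50 accounts with `create-stack-instances`; later edits to the template are propagated with a single `update-stack-set` call.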
Question 318

A company runs a web application on three Amazon EC2 instances behind an Application Load Balancer (ALB). Web traffic increases significantly during the same 9-hour period every day and causes a decrease in the application's performance. A SysOps administrator must scale the application ahead of the changes in demand to accommodate the increased traffic.
Which solution will meet these requirements?
Explanation:
For predictable, significant traffic increases during a specific time period every day:
EC2 Auto Scaling Group: Set up an Auto Scaling group for the EC2 instances running the web application. This group automatically adjusts the number of instances based on policies defined.
Scheduled Scaling Policy: Use a scheduled scaling policy to pre-emptively increase the number of instances before the expected increase in traffic each day. Scheduled scaling allows you to specify the scaling actions to occur at specific times, based on known or expected demand patterns.
Attach to ALB: Ensure the Auto Scaling group is attached to the Application Load Balancer, which will distribute incoming traffic across the dynamically adjusted pool of EC2 instances.
This approach ensures that the application scales up resources ahead of the expected load, maintaining performance and user experience without manual intervention.
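The scheduled actions can be sketched in CloudFormation as follows (the group name, capacities, and times are assumptions; `Recurrence` uses UTC cron syntax, and scaling out shortly before the peak gives instances time to warm up):

```yaml
ScaleOutBeforePeak:
  Type: AWS::AutoScaling::ScheduledAction
  Properties:
    AutoScalingGroupName: !Ref WebServerGroup
    DesiredCapacity: 9
    Recurrence: "45 7 * * *"   # 15 minutes before the daily 9-hour peak

ScaleInAfterPeak:
  Type: AWS::AutoScaling::ScheduledAction
  Properties:
    AutoScalingGroupName: !Ref WebServerGroup
    DesiredCapacity: 3
    Recurrence: "0 17 * * *"   # return to baseline after the peak ends
```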
Question 319

A company deployed a new web application on multiple Amazon EC2 instances behind an Application Load Balancer (ALB). The EC2 instances run in an Auto Scaling group. Users report that they are frequently being prompted to log in.
What should a SysOps administrator do to resolve this issue?
Explanation:
When users behind an ALB are repeatedly prompted to log in, the usual cause is that session state is stored locally on each EC2 instance. Because the ALB distributes requests across the Auto Scaling group, consecutive requests from the same user can land on different instances, each of which has no record of the user's session and forces a new login. Enabling sticky sessions (session affinity) on the ALB target group routes all of a user's requests to the same instance for the duration of the session, resolving the repeated prompts. A more scalable long-term approach is to move session state to a shared store such as Amazon ElastiCache or Amazon DynamoDB.
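A common fix for repeated login prompts behind an ALB is to enable target group stickiness so that each user's requests keep reaching the same instance; a hedged CloudFormation sketch (the resource name and cookie duration are assumptions):

```yaml
WebTargetGroup:
  Type: AWS::ElasticLoadBalancingV2::TargetGroup
  Properties:
    Port: 80
    Protocol: HTTP
    VpcId: !Ref Vpc
    TargetGroupAttributes:
      - Key: stickiness.enabled
        Value: "true"
      - Key: stickiness.type
        Value: lb_cookie           # ALB-generated cookie (AWSALB)
      - Key: stickiness.lb_cookie.duration_seconds
        Value: "3600"              # keep affinity for one hour
```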
Question 320

A SysOps administrator manages the caching of an Amazon CloudFront distribution that serves pages of a website. The SysOps administrator needs to configure the distribution so that the TTL of individual pages can vary. The TTL of the individual pages must remain within the maximum TTL and the minimum TTL that are set for the distribution.
Which solution will meet these requirements?
Explanation:
To allow the TTL (Time to Live) of individual pages to vary while adhering to the maximum and minimum TTL settings configured for the Amazon CloudFront distribution, setting cache behaviors directly at the origin is most effective:
Use Cache-Control Headers: By configuring the Cache-Control: max-age directive in the HTTP headers of the objects served from the origin, you can specify how long an object should be cached by CloudFront before it is considered stale.
Integration with CloudFront: When CloudFront receives a request for an object, it checks the cache-control header to determine the TTL for that specific object. This allows individual objects to have their own TTL settings, as long as they are within the globally set minimum and maximum TTL values for the distribution.
Operational Efficiency: This method does not require any additional AWS services or modifications to the distribution settings. It leverages HTTP standard practices, ensuring compatibility and ease of management.
Implementing the TTL management through cache-control headers at the origin provides precise control over caching behavior, aligning with varying content freshness requirements without complex configurations.
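For example, an origin response that allows CloudFront to cache a particular page for five minutes (the body and duration are illustrative) would carry a header like:

```http
HTTP/1.1 200 OK
Content-Type: text/html
Cache-Control: max-age=300

<html>...</html>
```

If a page's `max-age` falls outside the distribution's configured bounds, CloudFront clamps the effective TTL: values below the minimum TTL are raised to the minimum, and values above the maximum TTL are capped at the maximum.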