Amazon SAA-C03 Practice Test - Questions Answers, Page 69
Question 681
A company website hosted on Amazon EC2 instances processes classified data. The application writes data to Amazon Elastic Block Store (Amazon EBS) volumes. The company needs to ensure that all data that is written to the EBS volumes is encrypted at rest.
Which solution will meet this requirement?
Explanation:
The simplest and most effective way to ensure that all data written to the EBS volumes is encrypted at rest is to create the EBS volumes as encrypted volumes. You can do this by selecting the encryption option when you create a new EBS volume, or by snapshotting an existing unencrypted volume, copying the snapshot with encryption enabled, and restoring a new volume from the encrypted copy. You can also specify the AWS KMS key that you want to use for encryption, or use the default AWS managed key. When you attach the encrypted EBS volumes to the EC2 instances, the data is automatically encrypted and decrypted by the EC2 host. This solution does not require any additional IAM roles, tags, or policies.
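As an illustration, a minimal boto3 sketch of creating an encrypted volume might look like the following (the Region, Availability Zone, and key alias are placeholders, not from the question):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is illustrative

# Create an EBS volume with encryption at rest enabled.
# If KmsKeyId is omitted, the default AWS managed key (aws/ebs) is used.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=100,                      # GiB
    VolumeType="gp3",
    Encrypted=True,
    KmsKeyId="alias/my-ebs-key",   # hypothetical customer-managed key alias
)
print(volume["VolumeId"], volume["Encrypted"])
```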
Amazon EBS encryption
Creating an encrypted EBS volume
Encrypting an unencrypted EBS volume
Question 682
A company's website hosted on Amazon EC2 instances processes classified data stored in Amazon S3. Due to security concerns, the company requires a private and secure connection between its EC2 resources and Amazon S3.
Which solution meets these requirements?
Explanation:
This solution meets the following requirements:
It is private and secure, as it allows the EC2 instances to access the S3 bucket without using the public internet. A VPC endpoint is a gateway that enables you to create a private connection between your VPC and another AWS service, such as S3, within the same Region. A VPC endpoint for S3 provides secure and direct access to S3 buckets and objects using private IP addresses from your VPC. You can also use VPC endpoint policies and S3 bucket policies to control the access to the S3 resources based on the endpoint, the IAM user, the IAM role, or the source IP address.
It is simple and scalable, as it does not require any additional AWS services, gateways, or NAT devices. A VPC endpoint for S3 is a fully managed service that scales automatically with the network traffic. You can create a VPC endpoint for S3 with a few clicks in the VPC console or with a simple API call. You can also use the same VPC endpoint to access multiple S3 buckets in the same Region.
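For example, a gateway endpoint for S3 could be created with a boto3 call along these lines (the VPC and route-table IDs are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# Create a gateway VPC endpoint for S3 in the same Region as the bucket.
response = ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],  # S3 routes are added here
)
print(response["VpcEndpoint"]["VpcEndpointId"])
```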
VPC Endpoints - Amazon Virtual Private Cloud
Gateway VPC endpoints - Amazon Virtual Private Cloud
Using Amazon S3 with interface VPC endpoints - Amazon Simple Storage Service
Using Amazon S3 with gateway VPC endpoints - Amazon Simple Storage Service
Question 683
A company has a three-tier environment on AWS that ingests sensor data from its users' devices. The traffic flows through a Network Load Balancer (NLB), then to Amazon EC2 instances for the web tier, and finally to EC2 instances for the application tier that make database calls.
What should a solutions architect do to improve the security of data in transit to the web tier?
Explanation:
A: How do you protect your data in transit?
Best Practices:
Implement secure key and certificate management: Store encryption keys and certificates securely and rotate them at appropriate time intervals while applying strict access control; for example, by using a certificate management service, such as AWS Certificate Manager (ACM).
Enforce encryption in transit: Enforce your defined encryption requirements based on appropriate standards and recommendations to help you meet your organizational, legal, and compliance requirements.
Automate detection of unintended data access: Use tools such as GuardDuty to automatically detect attempts to move data outside of defined boundaries based on data classification level, for example, to detect a trojan that is copying data to an unknown or untrusted network using the DNS protocol.
Authenticate network communications: Verify the identity of communications by using protocols that support authentication, such as Transport Layer Security (TLS) or IPsec.
https://wa.aws.amazon.com/wat.question.SEC_9.en.html
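As an example of enforcing encryption in transit per the practices above, a minimal boto3 sketch that adds a TLS listener with an ACM certificate to the NLB might look like this (all ARNs are placeholders):

```python
import boto3

elbv2 = boto3.client("elbv2")

# Add a TLS listener to the Network Load Balancer so traffic to the
# web tier is encrypted in transit. ARNs below are placeholders.
elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:...:loadbalancer/net/web-nlb/...",
    Protocol="TLS",
    Port=443,
    Certificates=[{"CertificateArn": "arn:aws:acm:...:certificate/..."}],
    SslPolicy="ELBSecurityPolicy-TLS13-1-2-2021-06",
    DefaultActions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:...:targetgroup/web/...",
    }],
)
```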
Question 684
A company wants to use NAT gateways in its AWS environment. The company's Amazon EC2 instances in private subnets must be able to connect to the public internet through the NAT gateways.
Which solution will meet these requirements?
Explanation:
A public NAT gateway enables instances in a private subnet to send outbound traffic to the internet, while preventing the internet from initiating connections with the instances. A public NAT gateway requires an elastic IP address and a route to the internet gateway for the VPC. A private NAT gateway enables instances in a private subnet to connect to other VPCs or on-premises networks through a transit gateway or a virtual private gateway. A private NAT gateway does not require an elastic IP address or an internet gateway. Both private and public NAT gateways map the source private IPv4 address of the instances to the private IPv4 address of the NAT gateway, but in the case of a public NAT gateway, the internet gateway then maps the private IPv4 address of the public NAT gateway to the elastic IP address associated with the NAT gateway. When sending response traffic to the instances, whether it's a public or private NAT gateway, the NAT gateway translates the address back to the original source IP address.
Creating public NAT gateways in the same private subnets as the EC2 instances (option A) is not a valid solution, as the NAT gateways would not have a route to the internet gateway. Creating private NAT gateways in the same private subnets as the EC2 instances (option B) is also not a valid solution, as the instances would not be able to access the internet through the private NAT gateways. Creating private NAT gateways in public subnets in the same VPCs as the EC2 instances (option D) is not a valid solution either, as the internet gateway would drop the traffic from the private NAT gateways.
Therefore, the only valid solution is to create public NAT gateways in public subnets in the same VPCs as the EC2 instances (option C), as this would allow the instances to access the internet through the public NAT gateways and the internet gateway.
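A minimal boto3 sketch of that setup, assuming placeholder subnet and route-table IDs, might look like this:

```python
import boto3

ec2 = boto3.client("ec2")

# Allocate an Elastic IP and create a public NAT gateway in a PUBLIC subnet.
eip = ec2.allocate_address(Domain="vpc")
natgw = ec2.create_nat_gateway(
    SubnetId="subnet-0aaa...",            # public subnet (routes to the IGW)
    AllocationId=eip["AllocationId"],
    ConnectivityType="public",
)["NatGateway"]

# Wait until the NAT gateway is usable before adding routes to it.
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[natgw["NatGatewayId"]])

# Route the private subnet's internet-bound traffic through the NAT gateway.
ec2.create_route(
    RouteTableId="rtb-0bbb...",           # route table of the PRIVATE subnet
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=natgw["NatGatewayId"],
)
```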
NAT gateways - Amazon Virtual Private Cloud
NAT gateway use cases - Amazon Virtual Private Cloud
Amazon Web Services -- Introduction to NAT Gateways
What is AWS NAT Gateway? - KnowledgeHut
Question 685
A company needs a solution to prevent AWS CloudFormation stacks from deploying AWS Identity and Access Management (IAM) resources that include an inline policy or '*' in the statement. The solution must also prohibit deployment of Amazon EC2 instances with public IP addresses. The company has AWS Control Tower enabled in its organization in AWS Organizations.
Which solution will meet these requirements?
Explanation:
A service control policy (SCP) is a type of policy that you can use to manage permissions in your organization. SCPs offer central control over the maximum available permissions for all accounts in your organization, allowing you to ensure your accounts stay within your organization's access control guidelines. SCPs are available only in an organization that has all features enabled. SCPs do not grant permissions; instead, they specify the maximum permissions for an organization or organizational unit (OU). SCPs limit permissions that identity-based policies or resource-based policies grant to entities (users or roles) within the account, but do not grant permissions to entities. You can use SCPs to restrict the actions that the root user in an account can perform. You can also use SCPs to prevent users or roles in any account from creating or modifying certain AWS resources, such as EC2 instances with public IP addresses or IAM resources with inline policies or '*'. For example, you can create an SCP that denies the ec2:RunInstances action if the request includes the AssociatePublicIpAddress parameter set to true. You can also create an SCP that denies the iam:PutUserPolicy and iam:PutRolePolicy actions if the request includes a policy document that contains '*'. By attaching these SCPs to your organization or OUs, you can prevent the deployment of AWS CloudFormation stacks that violate these rules.
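A minimal sketch of the first of those SCPs, created and attached with boto3 (the OU ID is a placeholder), might look like this:

```python
import boto3
import json

org = boto3.client("organizations")

# Deny launching EC2 instances that request a public IP at launch time.
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": "ec2:RunInstances",
        "Resource": "arn:aws:ec2:*:*:network-interface/*",
        "Condition": {"Bool": {"ec2:AssociatePublicIpAddress": "true"}},
    }],
}

policy = org.create_policy(
    Name="DenyPublicIpAtLaunch",
    Description="Blocks EC2 launches that associate a public IP",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-xxxx-xxxxxxxx",  # hypothetical OU ID
)
```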
AWS Control Tower proactive controls are guardrails that enforce preventive policies on your accounts and resources. Proactive controls are implemented as AWS CloudFormation hooks that evaluate resources before they are provisioned. However, AWS Control Tower does not provide a built-in proactive guardrail to block EC2 instances with public IP addresses or IAM resources with inline policies or '*'. You would have to create your own custom guardrails using AWS CloudFormation templates and SCPs, which is essentially the same as option D. Therefore, option A is not correct.
AWS Control Tower detective controls are guardrails that detect and alert on policy violations in your accounts and resources. Detective guardrails are implemented as AWS Config rules and Amazon CloudWatch alarms. Detective guardrails do not block or remediate noncompliant resources; they only notify you of the issues. Therefore, option B is not correct.
AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. AWS Config continuously monitors and records your AWS resource configurations and allows you to automate the evaluation of recorded configurations against desired configurations. AWS Config rules can be custom AWS Lambda functions that AWS Config invokes to evaluate your AWS resource configurations. You can use AWS Config rules to check for compliance with your policies, such as ensuring that EC2 instances have public IP addresses disabled or that IAM resources do not have inline policies or '*'. However, AWS Config rules alone cannot prevent the deployment of AWS CloudFormation stacks that violate these policies; they can only report the compliance status. You would need to use another service, such as AWS Systems Manager Session Manager, to run automation scripts to delete or modify the noncompliant resources. This would require additional configuration and permissions, and may not be the most efficient or secure way to enforce your policies. Therefore, option C is not correct.
Service Control Policies
AWS Control Tower Guardrails
AWS Config
Question 686
A company has a mobile app for customers. The app's data is sensitive and must be encrypted at rest. The company uses AWS Key Management Service (AWS KMS).
The company needs a solution that prevents the accidental deletion of KMS keys. The solution must use Amazon Simple Notification Service (Amazon SNS) to send an email notification to administrators when a user attempts to delete a KMS key.
Which solution will meet these requirements with the LEAST operational overhead?
Explanation:
This solution meets the requirements with the least operational overhead because it uses AWS services that are fully managed and scalable. The EventBridge rule can detect the DeleteKey operation from the AWS KMS API and trigger the Systems Manager Automation runbook, which can execute a predefined workflow to cancel the key deletion. The EventBridge rule can also publish an SNS message to the topic that sends an email notification to the administrators. This way, the company can prevent the accidental deletion of KMS keys and notify the administrators of any attempts to delete them.
Option A is not a valid solution because AWS Config rules are used to evaluate the configuration of AWS resources, not to cancel the deletion of KMS keys. Option B is not a valid solution because it requires creating and maintaining a custom Lambda function that has logic to prevent KMS key deletion, which adds operational overhead. Option D is not a valid solution because it only notifies the administrators of the DeleteKey operation, but does not cancel it.
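A minimal boto3 sketch of the EventBridge rule and SNS target might look like the following. The topic ARN and account ID are placeholders, and note that the KMS API call CloudTrail actually records for key deletion is ScheduleKeyDeletion:

```python
import boto3
import json

events = boto3.client("events")

# Match KMS key-deletion attempts recorded by CloudTrail.
rule_arn = events.put_rule(
    Name="detect-kms-key-deletion",
    EventPattern=json.dumps({
        "source": ["aws.kms"],
        "detail-type": ["AWS API Call via CloudTrail"],
        "detail": {
            "eventSource": ["kms.amazonaws.com"],
            "eventName": ["ScheduleKeyDeletion"],
        },
    }),
)["RuleArn"]

# Fan the event out to an SNS topic that emails the administrators.
# (A Systems Manager Automation runbook target would be added the same way.)
events.put_targets(
    Rule="detect-kms-key-deletion",
    Targets=[{
        "Id": "notify-admins",
        "Arn": "arn:aws:sns:us-east-1:123456789012:kms-deletion-alerts",
    }],
)
```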
Using Amazon EventBridge rules to trigger Systems Manager Automation workflows - AWS Systems Manager
Using Amazon SNS for system-to-administrator communications - Amazon Simple Notification Service
Deleting AWS KMS keys - AWS Key Management Service
Question 687
A solutions architect is designing a user authentication solution for a company. The solution must invoke two-factor authentication for users that log in from inconsistent geographical locations, IP addresses, or devices. The solution must also be able to scale up to accommodate millions of users.
Which solution will meet these requirements?
Explanation:
Amazon Cognito user pools provide a secure and scalable user directory for user authentication and management. User pools support various authentication methods, such as username and password, email and password, phone number and password, and social identity providers. User pools also support multi-factor authentication (MFA), which adds an extra layer of security by requiring users to provide a verification code or a biometric factor in addition to their credentials. User pools can also enable risk-based adaptive authentication, which dynamically adjusts the authentication challenge based on the risk level of the sign-in attempt. For example, if a user tries to sign in from an unfamiliar device or location, the user pool can require a stronger authentication factor, such as SMS or email verification code. This feature helps to protect user accounts from unauthorized access and reduce the friction for legitimate users. User pools can scale up to millions of users and integrate with other AWS services, such as Amazon SNS, Amazon SES, AWS Lambda, and AWS KMS.
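As a sketch, a user pool with MFA and risk-based adaptive authentication could be created with boto3 as follows (the pool name is illustrative; adaptive MFA challenges require MfaConfiguration set to OPTIONAL):

```python
import boto3

cognito = boto3.client("cognito-idp")

# Create a user pool with MFA enabled and advanced security (risk-based
# adaptive authentication) enforced.
pool = cognito.create_user_pool(
    PoolName="customer-auth-pool",
    MfaConfiguration="OPTIONAL",   # challenged adaptively, per risk level
    UserPoolAddOns={"AdvancedSecurityMode": "ENFORCED"},
)
print(pool["UserPool"]["Id"])
```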
Amazon Cognito identity pools provide a way to federate identities from multiple identity providers, such as user pools, social identity providers, and corporate identity providers. Identity pools allow users to access AWS resources with temporary, limited-privilege credentials. Identity pools do not provide user authentication or management features, such as MFA or adaptive authentication. Therefore, option B is not correct.
AWS Identity and Access Management (IAM) is a service that helps to manage access to AWS resources. IAM users are entities that represent people or applications that need to interact with AWS. IAM users can be authenticated with a password or an access key. IAM users can also enable MFA for their own accounts, if an IAM policy grants them actions such as iam:EnableMFADevice on their own user. However, IAM users are not suitable for user authentication for web or mobile applications, as they are intended for administrative purposes. IAM users also do not support adaptive authentication based on risk factors. Therefore, option C is not correct.
AWS IAM Identity Center (AWS Single Sign-On) is a service that enables users to sign in to multiple AWS accounts and applications with a single set of credentials. AWS SSO supports various identity sources, such as AWS SSO directory, AWS Managed Microsoft AD, and external identity providers. AWS SSO also supports MFA for user authentication, which can be configured in the permission sets that define the level of access for each user. However, AWS SSO does not support adaptive authentication based on risk factors. Therefore, option D is not correct.
Amazon Cognito User Pools
Adding Multi-Factor Authentication (MFA) to a User Pool
Risk-Based Adaptive Authentication
Amazon Cognito Identity Pools
IAM Users
Enabling MFA Devices
AWS Single Sign-On
How AWS SSO Works
Question 688
A company has an Amazon S3 data lake. The company needs a solution that transforms the data from the data lake and loads the data into a data warehouse every day. The data warehouse must have massively parallel processing (MPP) capabilities.
Data analysts then need to create and train machine learning (ML) models by using SQL commands on the data. The solution must use serverless AWS services wherever possible.
Which solution will meet these requirements?
Explanation:
AWS Glue is a fully managed extract, transform, and load (ETL) service that makes it easy to prepare and load your data for analytics. AWS Glue can automatically discover your data in Amazon S3 and catalog it, so you can query and search the data using SQL. AWS Glue can also run serverless ETL jobs using Apache Spark and Python to transform and load your data into various destinations, such as Amazon Redshift, Amazon Athena, or Amazon Aurora. AWS Glue is a serverless service, so you only pay for the resources consumed by the jobs, and you don't need to provision or manage any infrastructure.
Amazon Redshift is a fully managed, petabyte-scale data warehouse service that enables you to use standard SQL and your existing business intelligence (BI) tools to analyze your data. Amazon Redshift also supports massively parallel processing (MPP), which means it can distribute and execute queries across multiple nodes in parallel, delivering fast performance and scalability. Amazon Redshift Serverless is a new option that automatically scales query compute capacity based on the queries being run, so you don't need to manage clusters or capacity. You only pay for the query processing time and the storage consumed by your data.
Amazon Redshift ML is a feature that enables you to create, train, and deploy machine learning (ML) models using familiar SQL commands. Amazon Redshift ML can automatically discover the best model and hyperparameters for your data, and store the model in Amazon SageMaker, a fully managed service that provides a comprehensive set of tools for building, training, and deploying ML models. You can then use SQL functions to apply the model to your data in Amazon Redshift and generate predictions.
The combination of AWS Glue, Amazon Redshift Serverless, and Amazon Redshift ML meets the requirements of the question, as it provides a serverless, scalable, and SQL-based solution to transform, load, and analyze the data from the Amazon S3 data lake, and to create and train ML models on the data.
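As an example, after AWS Glue loads the transformed data, a model could be trained from SQL through the Redshift Data API. This is a sketch under assumed names; the workgroup, table, columns, and S3 bucket are placeholders:

```python
import boto3

rsd = boto3.client("redshift-data")

# Train a Redshift ML model with plain SQL against a Redshift Serverless
# workgroup. Redshift ML hands the training job off to SageMaker.
create_model_sql = """
CREATE MODEL demand_forecast
FROM (SELECT sensor_id, reading, label FROM training_data)
TARGET label
FUNCTION predict_demand
IAM_ROLE default
SETTINGS (S3_BUCKET 'my-redshift-ml-artifacts');
"""

rsd.execute_statement(
    WorkgroupName="analytics-wg",   # Redshift Serverless workgroup
    Database="dev",
    Sql=create_model_sql,
)
```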
Option A is not correct, because Amazon EMR is not a serverless service. Amazon EMR is a managed service that simplifies running Apache Spark, Apache Hadoop, and other big data frameworks on AWS. Amazon EMR requires you to launch and configure clusters of EC2 instances to run your ETL jobs, which adds complexity and cost compared to AWS Glue.
Option B is not correct, because Amazon Aurora Serverless is not a data warehouse service, and it does not support MPP. Amazon Aurora Serverless is an on-demand, auto-scaling configuration for Amazon Aurora, a relational database service that is compatible with MySQL and PostgreSQL. Amazon Aurora Serverless can automatically adjust the database capacity based on the traffic, but it does not distribute the data and queries across multiple nodes like Amazon Redshift does. Amazon Aurora Serverless is more suitable for transactional workloads than analytical workloads.
Option D is not correct, because Amazon Athena is not a data warehouse service, and it does not support MPP. Amazon Athena is an interactive query service that enables you to analyze data in Amazon S3 using standard SQL. Amazon Athena is serverless, so you only pay for the queries you run, and you don't need to load the data into a database. However, Amazon Athena does not store the data in a columnar format, compress the data, or optimize the query execution plan like Amazon Redshift does. Amazon Athena is more suitable for ad-hoc queries than complex analytics and ML.
AWS Glue
Amazon Redshift
Amazon Redshift Serverless
Amazon Redshift ML
Amazon EMR
Amazon Aurora Serverless
Amazon Athena
Question 689
A company runs containers in a Kubernetes environment in the company's local data center. The company wants to use Amazon Elastic Kubernetes Service (Amazon EKS) and other AWS managed services. Data must remain local to the company's data center and cannot be stored in any remote site or cloud, to maintain compliance.
Which solution will meet these requirements?
Explanation:
AWS Outposts is a fully managed service that delivers AWS infrastructure and services to virtually any on-premises or edge location for a consistent hybrid experience. AWS Outposts supports Amazon EKS, which is a managed service that makes it easy to run Kubernetes on AWS and on-premises. By installing an AWS Outposts rack in the company's data center, the company can run containers in a Kubernetes environment using Amazon EKS and other AWS managed services, while keeping the data locally in the company's data center and meeting the compliance requirements. AWS Outposts also provides a seamless connection to the local AWS Region for access to a broad range of AWS services.
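For illustration, an EKS local cluster whose Kubernetes control plane runs on the Outpost could be created with boto3 roughly as follows (the role ARN, Outpost ARN, and subnet ID are placeholders):

```python
import boto3

eks = boto3.client("eks")

# Create an EKS local cluster on the Outpost in the company's data center,
# so the control plane and workloads stay on premises.
eks.create_cluster(
    name="on-prem-eks",
    roleArn="arn:aws:iam::123456789012:role/eks-cluster-role",
    resourcesVpcConfig={"subnetIds": ["subnet-0outpost..."]},  # Outpost subnet
    outpostConfig={
        "outpostArns": ["arn:aws:outposts:us-east-1:123456789012:outpost/op-0abc..."],
        "controlPlaneInstanceType": "m5.large",
    },
)
```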
Option A is not a valid solution because AWS Local Zones are not deployed in the company's data center, but in large metropolitan areas closer to end users. AWS Local Zones are owned, managed, and operated by AWS, and they provide low-latency access to the public internet and the local AWS Region. Option B is not a valid solution because AWS Snowmobile is a service that transports exabytes of data to AWS using a 45-foot long ruggedized shipping container pulled by a semi-trailer truck. AWS Snowmobile is not designed for running containers or AWS managed services on-premises, but for large-scale data migration. Option D is not a valid solution because AWS Snowball Edge Storage Optimized is a device that provides 80 TB of HDD or 210 TB of NVMe storage capacity for data transfer and edge computing. AWS Snowball Edge Storage Optimized does not support Amazon EKS or other AWS managed services, and it is not suitable for running containers in a Kubernetes environment.
AWS Outposts - Amazon Web Services
Amazon EKS on AWS Outposts - Amazon EKS
AWS Local Zones - Amazon Web Services
AWS Snowmobile - Amazon Web Services
AWS Snowball Edge Storage Optimized - Amazon Web Services
Question 690
A social media company has workloads that collect and process data. The workloads store the data in on-premises NFS storage. The data store cannot scale fast enough to meet the company's expanding business needs. The company wants to migrate the current data store to AWS.
Which solution will meet these requirements MOST cost-effectively?
Explanation:
This solution meets the requirements most cost-effectively because it enables the company to migrate its on-premises NFS data store to AWS without changing the existing applications or workflows. AWS Storage Gateway is a hybrid cloud storage service that provides seamless and secure integration between on-premises and AWS storage. Amazon S3 File Gateway is a type of AWS Storage Gateway that provides a file interface to Amazon S3, with local caching for low-latency access. By setting up an Amazon S3 File Gateway, the company can store and retrieve files as objects in Amazon S3 using standard file protocols such as NFS. The company can also use an Amazon S3 Lifecycle policy to automatically transition the data to the appropriate storage class based on the frequency of access and the cost of storage. For example, the company can use S3 Standard for frequently accessed data, S3 Standard-Infrequent Access (S3 Standard-IA) or S3 One Zone-Infrequent Access (S3 One Zone-IA) for less frequently accessed data, and S3 Glacier or S3 Glacier Deep Archive for long-term archival data.
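As a sketch, the lifecycle policy behind that tiering might be configured with boto3 as follows (the bucket name, prefix, and transition days are illustrative assumptions):

```python
import boto3

s3 = boto3.client("s3")

# Tier objects written through the S3 File Gateway down to cheaper storage
# classes as they age.
s3.put_bucket_lifecycle_configuration(
    Bucket="sensor-data-store",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "tier-down-aging-data",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},          # apply to all objects
            "Transitions": [
                {"Days": 30,  "StorageClass": "STANDARD_IA"},
                {"Days": 365, "StorageClass": "DEEP_ARCHIVE"},
            ],
        }],
    },
)
```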
Option A is not a valid solution because AWS Storage Gateway Volume Gateway is a type of AWS Storage Gateway that provides a block interface to Amazon S3, with local caching for low-latency access. Volume Gateway is not suitable for migrating an NFS data store, as it requires attaching the volumes to EC2 instances or on-premises servers using the iSCSI protocol. Option C is not a valid solution because Amazon Elastic File System (Amazon EFS) is a fully managed elastic NFS file system that is designed for workloads that require high availability, scalability, and performance. Amazon EFS Standard-Infrequent Access (Standard-IA) is a storage class within Amazon EFS that is optimized for infrequently accessed files, with a lower price per GB and a higher price per access. Using Amazon EFS Standard-IA for migrating an NFS data store would not be cost-effective, as it would incur higher access charges and require additional configuration to enable lifecycle management. Option D is not a valid solution because Amazon EFS One Zone-Infrequent Access (One Zone-IA) is a storage class within Amazon EFS that is optimized for infrequently accessed files that do not require the availability and durability of Amazon EFS Standard or Standard-IA. Amazon EFS One Zone-IA stores data in a single Availability Zone, which reduces the cost by 47% compared to Amazon EFS Standard-IA, but also increases the risk of data loss in the event of an Availability Zone failure. Using Amazon EFS One Zone-IA for migrating an NFS data store would not be cost-effective, as it would incur higher access charges and require additional configuration to enable lifecycle management. It would also compromise the availability and durability of the data.
AWS Storage Gateway - Amazon Web Services
Amazon S3 File Gateway - AWS Storage Gateway
Object Lifecycle Management - Amazon Simple Storage Service
AWS Storage Gateway Volume Gateway - AWS Storage Gateway
Amazon Elastic File System - Amazon Web Services
Using EFS storage classes - Amazon Elastic File System