Amazon SAA-C03 Practice Test - Questions Answers, Page 59
A company has a new mobile app. Anywhere in the world, users can see local news on topics they choose. Users also can post photos and videos from inside the app.

Users access content often in the first minutes after the content is posted. New content quickly replaces older content, and then the older content disappears. The local nature of the news means that users consume 90% of the content within the AWS Region where it is uploaded.

Which solution will optimize the user experience by providing the LOWEST latency for content uploads?

A. Upload and store content in Amazon S3. Use Amazon CloudFront for the uploads.
B. Upload and store content in Amazon S3. Use S3 Transfer Acceleration for the uploads.
C. Upload content to Amazon EC2 instances in the Region that is closest to the user. Copy the data to Amazon S3.
D. Upload and store content in Amazon S3 in the Region that is closest to the user. Use multiple distributions of Amazon CloudFront.
Suggested answer: B

Explanation:

The most suitable solution for optimizing the user experience by providing the lowest latency for content uploads is to upload and store content in Amazon S3 and use S3 Transfer Acceleration for the uploads. This solution will enable the company to leverage the AWS global network and edge locations to speed up the data transfer between the users and the S3 buckets.

Amazon S3 is a storage service that provides scalable, durable, and highly available object storage for any type of data. Amazon S3 allows users to store and retrieve data from anywhere on the web, and offers various features such as encryption, versioning, lifecycle management, and replication [1].

S3 Transfer Acceleration is a feature of Amazon S3 that helps users transfer data to and from S3 buckets more quickly. S3 Transfer Acceleration works by using optimized network paths and Amazon's backbone network to accelerate data transfer speeds. Users can enable S3 Transfer Acceleration for their buckets and use a distinct URL to access them, such as <bucket>.s3-accelerate.amazonaws.com [2].
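
As a rough illustration of how an upload client would use the feature, the following sketch (AWS SDK for Python, with a hypothetical bucket name and object key) enables Transfer Acceleration on a bucket and then uploads through the accelerate endpoint:

import boto3
from botocore.config import Config

s3 = boto3.client("s3")

# One-time setup: turn on Transfer Acceleration for the bucket.
s3.put_bucket_accelerate_configuration(
    Bucket="example-media-uploads",  # hypothetical bucket name
    AccelerateConfiguration={"Status": "Enabled"},
)

# Uploads sent through the accelerate endpoint enter the AWS network at the
# nearest edge location instead of traveling over the public internet all the
# way to the bucket's Region.
s3_accel = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
s3_accel.upload_file("photo.jpg", "example-media-uploads", "uploads/photo.jpg")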

The other options are not correct because they either do not provide the lowest latency or are not suitable for the use case.

Uploading and storing content in Amazon S3 and using Amazon CloudFront for the uploads is not correct because this solution is not designed for optimizing uploads, but rather for optimizing downloads. Amazon CloudFront is a content delivery network (CDN) that helps users distribute their content globally with low latency and high transfer speeds. CloudFront works by caching the content at edge locations around the world, so that users can access it quickly and easily from anywhere [3].

Uploading content to Amazon EC2 instances in the Region that is closest to the user and copying the data to Amazon S3 is not correct because this solution adds unnecessary complexity and cost to the process. Amazon EC2 is a computing service that provides scalable and secure virtual servers in the cloud. Users can launch, stop, or terminate EC2 instances as needed, and choose from various instance types, operating systems, and configurations [4].

Uploading and storing content in Amazon S3 in the Region that is closest to the user and using multiple distributions of Amazon CloudFront is not correct because this solution is not cost-effective or efficient for the use case. As mentioned above, Amazon CloudFront is a CDN that helps users distribute their content globally with low latency and high transfer speeds. However, creating multiple CloudFront distributions for each Region would incur additional charges and management overhead, and would not be necessary since 90% of the content is consumed within the same Region where it is uploaded [3].

What Is Amazon Simple Storage Service? - Amazon Simple Storage Service

Amazon S3 Transfer Acceleration - Amazon Simple Storage Service

What Is Amazon CloudFront? - Amazon CloudFront

What Is Amazon EC2? - Amazon Elastic Compute Cloud

A company has applications that run on Amazon EC2 instances. The EC2 instances connect to Amazon RDS databases by using an IAM role that has associated policies. The company wants to use AWS Systems Manager to patch the EC2 instances without disrupting the running applications.

Which solution will meet these requirements?

A. Create a new IAM role. Attach the AmazonSSMManagedInstanceCore policy to the new IAM role. Attach the new IAM role to the EC2 instances and the existing IAM role.
B. Create an IAM user. Attach the AmazonSSMManagedInstanceCore policy to the IAM user. Configure Systems Manager to use the IAM user to manage the EC2 instances.
C. Enable Default Host Configuration Management in Systems Manager to manage the EC2 instances.
D. Remove the existing policies from the existing IAM role. Add the AmazonSSMManagedInstanceCore policy to the existing IAM role.
Suggested answer: C

Explanation:

The most suitable solution for the company's requirements is to enable Default Host Configuration Management in Systems Manager to manage the EC2 instances. This solution will allow the company to patch the EC2 instances without disrupting the running applications and without manually creating or modifying IAM roles or users.

Default Host Configuration Management is a feature of AWS Systems Manager that enables Systems Manager to manage EC2 instances automatically as managed instances. A managed instance is an EC2 instance that is configured for use with Systems Manager. The benefits of managing instances with Systems Manager include the following:

Connect to EC2 instances securely using Session Manager.

Perform automated patch scans using Patch Manager.

View detailed information about instances using Systems Manager Inventory.

Track and manage instances using Fleet Manager.

Keep SSM Agent up to date automatically.

Default Host Configuration Management makes it possible to manage EC2 instances without having to manually create an IAM instance profile. Instead, Default Host Configuration Management creates and applies a default IAM role to ensure that Systems Manager has permissions to manage all instances in the Region and account where it is activated. If the permissions provided are not sufficient for the use case, the default IAM role can be modified or replaced with a custom role [1].
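
A minimal sketch of turning the feature on programmatically is shown below, using the AWS SDK for Python. The service-setting ID and the default role name follow the Default Host Management Configuration documentation, but treat them as assumptions to verify for your account and Region:

import boto3

ssm = boto3.client("ssm")

# Point the Systems Manager service setting for this Region/account at an IAM
# role that SSM can assume to manage EC2 instances by default.
ssm.update_service_setting(
    SettingId="/ssm/managed-instance/default-ec2-instance-management-role",
    SettingValue="service-role/AWSSystemsManagerDefaultEC2InstanceManagementRole",
)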

The other options are not correct because they either have more operational overhead or do not meet the requirements.

Creating a new IAM role, attaching the AmazonSSMManagedInstanceCore policy to the new IAM role, and attaching the new IAM role and the existing IAM role to the EC2 instances is not correct because this solution requires manual creation and management of IAM roles, which adds complexity and cost to the solution. The AmazonSSMManagedInstanceCore policy is a managed policy that grants permissions for Systems Manager core functionality [2].

Creating an IAM user, attaching the AmazonSSMManagedInstanceCore policy to the IAM user, and configuring Systems Manager to use the IAM user to manage the EC2 instances is not correct because this solution requires manual creation and management of IAM users, which adds complexity and cost to the solution. An IAM user is an identity within an AWS account that has specific permissions for a single person or application [3].

Removing the existing policies from the existing IAM role and adding the AmazonSSMManagedInstanceCore policy to the existing IAM role is not correct because this solution may disrupt the running applications that rely on the existing policies for accessing RDS databases. An IAM role is an identity within an AWS account that has specific permissions for a service or entity [4].

AWS managed policy: AmazonSSMManagedInstanceCore

IAM users

IAM roles

Default Host Management Configuration - AWS Systems Manager

A company hosts a data lake on Amazon S3. The data lake ingests data in Apache Parquet format from various data sources. The company uses multiple transformation steps to prepare the ingested data. The steps include filtering of anomalies, normalizing of data to standard date and time values, and generation of aggregates for analyses.

The company must store the transformed data in S3 buckets that data analysts access. The company needs a prebuilt solution for data transformation that does not require code. The solution must provide data lineage and data profiling. The company needs to share the data transformation steps with employees throughout the company.

Which solution will meet these requirements?

A. Configure an AWS Glue Studio visual canvas to transform the data. Share the transformation steps with employees by using AWS Glue jobs.
B. Configure Amazon EMR Serverless to transform the data. Share the transformation steps with employees by using EMR Serverless jobs.
C. Configure AWS Glue DataBrew to transform the data. Share the transformation steps with employees by using DataBrew recipes.
D. Create Amazon Athena tables for the data. Write Athena SQL queries to transform the data. Share the Athena SQL queries with employees.
Suggested answer: C

Explanation:

The most suitable solution for the company's requirements is to configure AWS Glue DataBrew to transform the data and share the transformation steps with employees by using DataBrew recipes. This solution will provide a prebuilt solution for data transformation that does not require code, and will also provide data lineage and data profiling. The company can easily share the data transformation steps with employees throughout the company by using DataBrew recipes.

AWS Glue DataBrew is a visual data preparation tool that makes it easy for data analysts and data scientists to clean and normalize data for analytics or machine learning up to 80% faster. Users can upload their data from various sources, such as Amazon S3, Amazon RDS, Amazon Redshift, Amazon Aurora, or the Glue Data Catalog, and use a point-and-click interface to apply over 250 built-in transformations. Users can also preview the results of each transformation step and see how it affects the quality and distribution of the data [1].

A DataBrew recipe is a reusable set of transformation steps that can be applied to one or more datasets. Users can create recipes from scratch or use existing ones from the DataBrew recipe library. Users can also export, import, or share recipes with other users or groups within their AWS account or organization [2].
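
For example, a published recipe can be retrieved and reviewed by any authorized user in the account. The sketch below uses the AWS SDK for Python with a hypothetical recipe name:

import boto3

databrew = boto3.client("databrew")

# Publish a new version of an existing recipe so analysts across the company
# can apply the same transformation steps to their datasets.
databrew.publish_recipe(
    Name="news-content-prep",  # hypothetical recipe name
    Description="Filter anomalies, normalize timestamps, generate aggregates",
)

# Any authorized user can then inspect the shared steps.
recipe = databrew.describe_recipe(Name="news-content-prep")
for step in recipe["Steps"]:
    print(step["Action"]["Operation"])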

DataBrew also provides data lineage and data profiling features that help users understand and improve their data quality. Data lineage shows the source and destination of each dataset and how it is transformed by each recipe step. Data profiling shows various statistics and metrics about each dataset, such as column data types, value distributions, missing or invalid values, and outliers.

A company has an application that uses an Amazon DynamoDB table for storage. A solutions architect discovers that many requests to the table are not returning the latest data. The company's users have not reported any other issues with database performance. Latency is in an acceptable range.

Which design change should the solutions architect recommend?

A. Add read replicas to the table.
B. Use a global secondary index (GSI).
C. Request strongly consistent reads for the table.
D. Request eventually consistent reads for the table.
Suggested answer: C

Explanation:

The most suitable design change for the company's application is to request strongly consistent reads for the table. This change will ensure that the requests to the table return the latest data, reflecting the updates from all prior write operations.

Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. DynamoDB supports two types of read consistency: eventually consistent reads and strongly consistent reads. By default, DynamoDB uses eventually consistent reads, unless users specify otherwise [1].

Eventually consistent reads are reads that may not reflect the results of a recently completed write operation. The response might not include the changes because of the latency of propagating the data to all replicas. If users repeat their read request after a short time, the response should return the updated data. Eventually consistent reads are suitable for applications that do not require up-to-date data or can tolerate eventual consistency [1].

Strongly consistent reads are reads that return a result that reflects all writes that received a successful response prior to the read. Users can request a strongly consistent read by setting the ConsistentRead parameter to true in their read operations, such as GetItem, Query, or Scan. Strongly consistent reads are suitable for applications that require up-to-date data or cannot tolerate eventual consistency [1].
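
A minimal sketch of the difference, using the AWS SDK for Python with a hypothetical table and key:

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("PlayerProfiles")  # hypothetical table name

# Default behavior: eventually consistent read, which may return stale data.
maybe_stale = table.get_item(Key={"PlayerId": "p-123"})

# Strongly consistent read: reflects all writes acknowledged before the read.
latest = table.get_item(Key={"PlayerId": "p-123"}, ConsistentRead=True)

Note that strongly consistent reads consume more read capacity than eventually consistent reads and are not available on global secondary indexes.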

The other options are not correct because they do not address the issue of read consistency or are not relevant for the use case.

Adding read replicas to the table is not correct because this option is not supported by DynamoDB. Read replicas are copies of a primary database instance that can serve read-only traffic and improve availability and performance. Read replicas are available for some relational database services, such as Amazon RDS or Amazon Aurora, but not for DynamoDB [2].

Using a global secondary index (GSI) is not correct because this option is not related to read consistency. A GSI is an index that has a partition key and an optional sort key that are different from those on the base table. A GSI allows users to query the data in different ways, with eventual consistency [3].

Requesting eventually consistent reads for the table is not correct because this option is already the default behavior of DynamoDB and does not solve the problem of requests not returning the latest data.

Read consistency - Amazon DynamoDB

Working with read replicas - Amazon Relational Database Service

Working with global secondary indexes - Amazon DynamoDB

A company plans to migrate to AWS and use Amazon EC2 On-Demand Instances for its application. During the migration testing phase, a technical team observes that the application takes a long time to launch and to load memory before it becomes fully productive.

Which solution will reduce the launch time of the application during the next testing phase?

A. Launch two or more EC2 On-Demand Instances. Turn on auto scaling features and make the EC2 On-Demand Instances available during the next testing phase.
B. Launch EC2 Spot Instances to support the application and to scale the application so it is available during the next testing phase.
C. Launch the EC2 On-Demand Instances with hibernation turned on. Configure EC2 Auto Scaling warm pools during the next testing phase.
D. Launch EC2 On-Demand Instances with Capacity Reservations. Start additional EC2 instances during the next testing phase.
Suggested answer: C

Explanation:

The solution that will reduce the launch time of the application during the next testing phase is to launch the EC2 On-Demand Instances with hibernation turned on and configure EC2 Auto Scaling warm pools. This solution allows the application to resume from a hibernated state instead of starting from scratch, which can save time and resources. Hibernation preserves the memory (RAM) state of the EC2 instances to the root EBS volume and then stops the instances. When the instances are resumed, they restore their memory state from the EBS volume and become productive quickly. EC2 Auto Scaling warm pools can be used to maintain a pool of pre-initialized instances that are ready to scale out when needed. Warm pools can also support hibernated instances, which can further reduce the launch time and cost of scaling out.
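
A rough sketch of both pieces, using the AWS SDK for Python with hypothetical AMI and Auto Scaling group names:

import boto3

ec2 = boto3.client("ec2")
autoscaling = boto3.client("autoscaling")

# Launch an On-Demand Instance with hibernation enabled. Hibernation requires a
# supported instance type and an encrypted root EBS volume large enough to hold
# the instance's RAM contents.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical AMI ID
    InstanceType="m5.large",
    MinCount=1,
    MaxCount=1,
    HibernationOptions={"Configured": True},
)

# Keep a warm pool of pre-initialized, hibernated instances in front of an
# existing Auto Scaling group so scale-out does not repeat the slow startup.
autoscaling.put_warm_pool(
    AutoScalingGroupName="app-asg",  # hypothetical Auto Scaling group name
    MinSize=2,
    PoolState="Hibernated",
)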

The other solutions are not as effective as the first one because they either do not reduce the launch time, do not guarantee availability, or do not use On-Demand Instances as required.

Launching two or more EC2 On-Demand Instances with auto scaling features does not reduce the launch time of the application, as each instance still has to go through the initialization process.

Launching EC2 Spot Instances does not guarantee availability, as Spot Instances can be interrupted by AWS at any time when there is a higher demand for capacity.

Launching EC2 On-Demand Instances with Capacity Reservations does not reduce the launch time of the application, as it only ensures that there is enough capacity available for the instances, but does not pre-initialize them.

Hibernating your instance - Amazon Elastic Compute Cloud

Warm pools for Amazon EC2 Auto Scaling - Amazon EC2 Auto Scaling

A company has deployed a multiplayer game for mobile devices. The game requires live location tracking of players based on latitude and longitude. The data store for the game must support rapid updates and retrieval of locations.

The game uses an Amazon RDS for PostgreSQL DB instance with read replicas to store the location data. During peak usage periods, the database is unable to maintain the performance that is needed for reading and writing updates. The game's user base is increasing rapidly.

What should a solutions architect do to improve the performance of the data tier?

A. Take a snapshot of the existing DB instance. Restore the snapshot with Multi-AZ enabled.
B. Migrate from Amazon RDS to Amazon OpenSearch Service with OpenSearch Dashboards.
C. Deploy Amazon DynamoDB Accelerator (DAX) in front of the existing DB instance. Modify the game to use DAX.
D. Deploy an Amazon ElastiCache for Redis cluster in front of the existing DB instance. Modify the game to use Redis.
Suggested answer: D

Explanation:

The solution that will improve the performance of the data tier is to deploy an Amazon ElastiCache for Redis cluster in front of the existing DB instance and modify the game to use Redis. This solution will enable the game to store and retrieve the location data of the players in a fast and scalable way, as Redis is an in-memory data store that supports geospatial data types and commands. By using ElastiCache for Redis, the game can reduce the load on the RDS for PostgreSQL DB instance, which is not optimized for high-frequency updates and queries of location data. ElastiCache for Redis also supports replication, sharding, and auto scaling to handle the increasing user base of the game.
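
As an illustration of the geospatial commands involved, the sketch below uses the redis-py client against a hypothetical ElastiCache endpoint (GEOSEARCH requires Redis engine version 6.2 or later):

import redis

r = redis.Redis(
    host="game-locations.xxxxxx.clustercfg.use1.cache.amazonaws.com",  # hypothetical endpoint
    port=6379,
    decode_responses=True,
)

# Record a player's latest position: longitude, latitude, member name.
r.geoadd("player:locations", (-122.4194, 37.7749, "player-42"))

# Find every player within 5 km of a point.
nearby = r.geosearch(
    "player:locations",
    longitude=-122.4194,
    latitude=37.7749,
    radius=5,
    unit="km",
)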

The other solutions are not as effective as the first one because they either do not improve the performance, do not fit the access pattern, or do not leverage caching.

Taking a snapshot of the existing DB instance and restoring it with Multi-AZ enabled will not improve the performance of the data tier, as it only provides high availability and durability, but not scalability or low latency.

Migrating from Amazon RDS to Amazon OpenSearch Service with OpenSearch Dashboards will not improve the performance of the data tier, as OpenSearch Service is mainly designed for full-text search and analytics, not for high-frequency, low-latency reads and writes of rapidly changing location data in the way an in-memory store is.

Deploying Amazon DynamoDB Accelerator (DAX) in front of the existing DB instance and modifying the game to use DAX will not improve the performance of the data tier, as DAX is only compatible with DynamoDB, not with RDS for PostgreSQL.

Amazon ElastiCache for Redis

Geospatial Data Support - Amazon ElastiCache for Redis

Amazon RDS for PostgreSQL

Amazon OpenSearch Service

Amazon DynamoDB Accelerator (DAX)

A manufacturing company runs its report generation application on AWS. The application generates each report in about 20 minutes. The application is built as a monolith that runs on a single Amazon EC2 instance. The application requires frequent updates to its tightly coupled modules. The application becomes complex to maintain as the company adds new features.

Each time the company patches a software module, the application experiences downtime. Report generation must restart from the beginning after any interruptions. The company wants to redesign the application so that the application can be flexible, scalable, and gradually improved. The company wants to minimize application downtime.

Which solution will meet these requirements?

A. Run the application on AWS Lambda as a single function with maximum provisioned concurrency.
B. Run the application on Amazon EC2 Spot Instances as microservices with a Spot Fleet default allocation strategy.
C. Run the application on Amazon Elastic Container Service (Amazon ECS) as microservices with service auto scaling.
D. Run the application on AWS Elastic Beanstalk as a single application environment with an all-at-once deployment strategy.
Suggested answer: C

Explanation:

The solution that will meet the requirements is to run the application on Amazon Elastic Container Service (Amazon ECS) as microservices with service auto scaling. This solution will allow the application to be flexible, scalable, and gradually improved, as well as minimize application downtime. By breaking down the monolithic application into microservices, the company can decouple the modules and update them independently, without affecting the whole application. By running the microservices on Amazon ECS, the company can leverage the benefits of containerization, such as portability, efficiency, and isolation. By enabling service auto scaling, the company can adjust the number of containers running for each microservice based on demand, ensuring optimal performance and cost. Amazon ECS also supports various deployment strategies, such as rolling update or blue/green deployment, that can reduce or eliminate downtime during updates.
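
A minimal sketch of enabling service auto scaling with a target-tracking policy, using the AWS SDK for Python and hypothetical cluster and service names:

import boto3

appscaling = boto3.client("application-autoscaling")
resource_id = "service/report-cluster/report-generator"  # hypothetical cluster/service

# Register the ECS service's desired task count as a scalable target.
appscaling.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=2,
    MaxCapacity=20,
)

# Track average CPU utilization at 60% and let Application Auto Scaling adjust
# the number of running tasks.
appscaling.put_scaling_policy(
    PolicyName="cpu-target-tracking",
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
    },
)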

The other solutions are not as effective as the first one because they either do not meet the requirements or introduce new challenges.

Running the application on AWS Lambda as a single function with maximum provisioned concurrency will not meet the requirements, as it will not break down the monolith into microservices, nor will it reduce the complexity of maintenance. Lambda functions are also limited by execution time (15 minutes), memory size (10 GB), and concurrency quotas, which may not be sufficient for the report generation application.

Running the application on Amazon EC2 Spot Instances as microservices with a Spot Fleet default allocation strategy will not meet the requirements, as it will introduce the risk of interruptions due to spot price fluctuations. Spot Instances are not guaranteed to be available or stable, and may be reclaimed by AWS at any time with a two-minute warning. This may cause report generation to fail or restart from scratch.

Running the application on AWS Elastic Beanstalk as a single application environment with an all-at-once deployment strategy will not meet the requirements, as it will not break down the monolith into microservices, nor will it minimize application downtime. The all-at-once deployment strategy will deploy updates to all instances simultaneously, causing a brief outage for the application.

Amazon Elastic Container Service

Microservices on AWS

Service Auto Scaling - Amazon Elastic Container Service

AWS Lambda

Amazon EC2 Spot Instances

AWS Elastic Beanstalk

A company needs a solution to prevent photos with unwanted content from being uploaded to the company's web application. The solution must not involve training a machine learning (ML) model.

Which solution will meet these requirements?

A. Create and deploy a model by using Amazon SageMaker Autopilot. Create a real-time endpoint that the web application invokes when new photos are uploaded.
B. Create an AWS Lambda function that uses Amazon Rekognition to detect unwanted content. Create a Lambda function URL that the web application invokes when new photos are uploaded.
C. Create an Amazon CloudFront function that uses Amazon Comprehend to detect unwanted content. Associate the function with the web application.
D. Create an AWS Lambda function that uses Amazon Rekognition Video to detect unwanted content. Create a Lambda function URL that the web application invokes when new photos are uploaded.
Suggested answer: B

Explanation:

The solution that will meet the requirements is to create an AWS Lambda function that uses Amazon Rekognition to detect unwanted content, and create a Lambda function URL that the web application invokes when new photos are uploaded. This solution does not involve training a machine learning model, as Amazon Rekognition is a fully managed service that provides pre-trained computer vision models for image and video analysis. Amazon Rekognition can detect unwanted content such as explicit or suggestive adult content, violence, weapons, drugs, and more. By using AWS Lambda, the company can create a serverless function that can be triggered by an HTTP request from the web application. The Lambda function can use the Amazon Rekognition API to analyze the uploaded photos and return a response indicating whether they contain unwanted content or not.
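
A minimal sketch of such a Lambda function is shown below, using the AWS SDK for Python. The event shape (an S3 bucket and key passed by the web application) is an assumption for illustration:

import boto3

rekognition = boto3.client("rekognition")

def handler(event, context):
    """Check a newly uploaded photo that the web application has placed in S3."""
    bucket = event["bucket"]  # hypothetical event fields
    key = event["key"]

    response = rekognition.detect_moderation_labels(
        Image={"S3Object": {"Bucket": bucket, "Name": key}},
        MinConfidence=80,
    )
    labels = [label["Name"] for label in response["ModerationLabels"]]

    # An empty label list means Rekognition found no unsafe content above the
    # confidence threshold.
    return {"allowed": not labels, "labels": labels}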

The other solutions are not as effective as the first one because they either involve training a machine learning model, do not support image analysis, or do not work with photos.

Creating and deploying a model by using Amazon SageMaker Autopilot involves training a machine learning model, which is not required for the scenario. Amazon SageMaker Autopilot is a service that automatically creates, trains, and tunes the best machine learning models for classification or regression based on the data provided by the user.

Creating an Amazon CloudFront function that uses Amazon Comprehend to detect unwanted content does not support image analysis, as Amazon Comprehend is a natural language processing service that analyzes text, not images. Amazon Comprehend can extract insights and relationships from text such as language, sentiment, entities, topics, and more.

Creating an AWS Lambda function that uses Amazon Rekognition Video to detect unwanted content does not work with photos, as Amazon Rekognition Video is designed for analyzing video streams, not static images. Amazon Rekognition Video can detect activities, objects, faces, celebrities, text, and more in video streams.

Amazon Rekognition

AWS Lambda

Detecting unsafe content - Amazon Rekognition

Amazon SageMaker Autopilot

Amazon Comprehend

A company has an on-premises data center that is running out of storage capacity. The company wants to migrate its storage infrastructure to AWS while minimizing bandwidth costs. The solution must allow for immediate retrieval of data at no additional cost.

How can these requirements be met?

A. Deploy Amazon S3 Glacier Vault and enable expedited retrieval. Enable provisioned retrieval capacity for the workload.
B. Deploy AWS Storage Gateway using cached volumes. Use Storage Gateway to store data in Amazon S3 while retaining copies of frequently accessed data subsets locally.
C. Deploy AWS Storage Gateway using stored volumes to store data locally. Use Storage Gateway to asynchronously back up point-in-time snapshots of the data to Amazon S3.
D. Deploy AWS Direct Connect to connect with the on-premises data center. Configure AWS Storage Gateway to store data locally. Use Storage Gateway to asynchronously back up point-in-time snapshots of the data to Amazon S3.
Suggested answer: B

Explanation:

The solution that will meet the requirements is to deploy AWS Storage Gateway using cached volumes and use Storage Gateway to store data in Amazon S3 while retaining copies of frequently accessed data subsets locally. This solution will allow the company to migrate its storage infrastructure to AWS while minimizing bandwidth costs, as it will only transfer data that is not cached locally. The solution will also allow for immediate retrieval of data at no additional cost, as the cached volumes will provide low-latency access to the most recently used data. The data stored in Amazon S3 will be durable, scalable, and secure.
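
After a Volume Gateway is deployed and activated, cached volumes can be created against it. The sketch below uses the AWS SDK for Python with hypothetical gateway and network values:

import boto3

sgw = boto3.client("storagegateway")

# Create a 1 TiB cached volume: the full data set lives in Amazon S3, and only
# frequently accessed blocks are kept in the gateway's local cache.
sgw.create_cached_iscsi_volume(
    GatewayARN="arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-EXAMPLE",  # hypothetical
    VolumeSizeInBytes=1024**4,  # 1 TiB
    TargetName="app-data",
    NetworkInterfaceId="10.0.1.25",  # IP address of the gateway appliance
    ClientToken="app-data-volume-1",
)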

The other solutions are not as effective as the first one because they either do not meet the requirements or introduce additional costs or complexity.

Deploying an Amazon S3 Glacier vault and enabling expedited retrieval will not meet the requirements, as retrievals are not immediate and incur additional per-GB charges. Amazon S3 Glacier is a low-cost storage service for data archiving and backup, but it has longer retrieval times than Amazon S3. Expedited retrieval is a feature that allows faster access to data, but it charges a higher fee per GB retrieved. Provisioned retrieval capacity is a feature that reserves dedicated capacity for expedited retrievals, but it also charges a monthly fee per provisioned capacity unit.

Deploying AWS Storage Gateway using stored volumes to store data locally and using Storage Gateway to asynchronously back up point-in-time snapshots of the data to Amazon S3 will not meet the requirements, as it will not migrate the storage infrastructure to AWS, but only create backups. Stored volumes are volumes that store the primary data locally and back up snapshots to Amazon S3. This solution will not reduce the storage capacity needed on premises, nor will it leverage the benefits of cloud storage.

Deploying AWS Direct Connect to connect with the on-premises data center and configuring AWS Storage Gateway to store data locally with asynchronous point-in-time snapshots to Amazon S3 will also not meet the requirements, for the same reason: it only creates backups. AWS Direct Connect is a service that establishes a dedicated network connection between the on-premises data center and AWS, which can reduce network costs and increase bandwidth. However, this solution will still not reduce the storage capacity needed on premises, nor will it leverage the benefits of cloud storage.

AWS Storage Gateway

Cached volumes - AWS Storage Gateway

Amazon S3 Glacier

Retrieving archives from Amazon S3 Glacier vaults - Amazon Simple Storage Service

Stored volumes - AWS Storage Gateway

AWS Direct Connect

A company's application runs on Amazon EC2 instances that are in multiple Availability Zones. The application needs to ingest real-time data from third-party applications.

The company needs a data ingestion solution that places the ingested raw data in an Amazon S3 bucket.

Which solution will meet these requirements?

A. Create Amazon Kinesis data streams for data ingestion. Create Amazon Kinesis Data Firehose delivery streams to consume the Kinesis data streams. Specify the S3 bucket as the destination of the delivery streams.
B. Create database migration tasks in AWS Database Migration Service (AWS DMS). Specify replication instances of the EC2 instances as the source endpoints. Specify the S3 bucket as the target endpoint. Set the migration type to migrate existing data and replicate ongoing changes.
C. Create and configure AWS DataSync agents on the EC2 instances. Configure DataSync tasks to transfer data from the EC2 instances to the S3 bucket.
D. Create an AWS Direct Connect connection to the application for data ingestion. Create Amazon Kinesis Data Firehose delivery streams to consume direct PUT operations from the application. Specify the S3 bucket as the destination of the delivery streams.
Suggested answer: A

Explanation:

The solution that will meet the requirements is to create Amazon Kinesis data streams for data ingestion, create Amazon Kinesis Data Firehose delivery streams to consume the Kinesis data streams, and specify the S3 bucket as the destination of the delivery streams. This solution will allow the company's application to ingest real-time data from third-party applications and place the ingested raw data in an S3 bucket. Amazon Kinesis data streams are scalable and durable streams that can capture and store data from hundreds of thousands of sources. Amazon Kinesis Data Firehose is a fully managed service that can deliver streaming data to destinations such as S3, Amazon Redshift, Amazon OpenSearch Service, and Splunk. Amazon Kinesis Data Firehose can also transform and compress the data before delivering it to S3.
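
A minimal sketch of the Firehose delivery stream that reads from an existing Kinesis data stream and writes the raw records to the S3 bucket, using the AWS SDK for Python. The stream name, role ARNs, and bucket ARN are hypothetical:

import boto3

firehose = boto3.client("firehose")

# Deliver records from an existing Kinesis data stream to the raw-data bucket.
firehose.create_delivery_stream(
    DeliveryStreamName="raw-events-to-s3",
    DeliveryStreamType="KinesisStreamAsSource",
    KinesisStreamSourceConfiguration={
        "KinesisStreamARN": "arn:aws:kinesis:us-east-1:123456789012:stream/raw-events",
        "RoleARN": "arn:aws:iam::123456789012:role/firehose-read-kinesis",
    },
    ExtendedS3DestinationConfiguration={
        "RoleARN": "arn:aws:iam::123456789012:role/firehose-write-s3",
        "BucketARN": "arn:aws:s3:::example-raw-data-bucket",
        "Prefix": "ingest/",
    },
)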

The other solutions are not as effective as the first one because they either do not support real-time data ingestion, do not work with third-party applications, or are not an ingestion mechanism at all.

Creating database migration tasks in AWS Database Migration Service (AWS DMS) will not support real-time data ingestion from third-party applications, as AWS DMS is mainly designed for migrating relational databases, not streaming data. AWS DMS also requires replication instances, source endpoints, and target endpoints to be compatible with specific database engines and versions.

Creating and configuring AWS DataSync agents on the EC2 instances will not work with third-party applications, as AWS DataSync is a service that transfers data between on-premises storage systems and AWS storage services, not between applications. AWS DataSync also requires installing agents on the source or destination servers.

Creating an AWS Direct Connect connection to the application for data ingestion is not an ingestion mechanism for third-party applications, as AWS Direct Connect establishes a dedicated private network connection between an on-premises network and AWS, not between arbitrary applications and storage services. AWS Direct Connect also requires a physical connection at an AWS Direct Connect location.

Amazon Kinesis

Amazon Kinesis Data Firehose

AWS Database Migration Service

AWS DataSync

AWS Direct Connect

