Amazon SAA-C03 Practice Test - Questions Answers, Page 67
A company wants to migrate an on-premises legacy application to AWS. The application ingests customer order files from an on-premises enterprise resource planning (ERP) system. The application then uploads the files to an SFTP server. The application uses a scheduled job that checks for order files every hour.

The company already has an AWS account that has connectivity to the on-premises network. The new application on AWS must support integration with the existing ERP system. The new application must be secure and resilient and must use the SFTP protocol to process orders from the ERP system immediately.

Which solution will meet these requirements?

A.
Create an AWS Transfer Family SFTP internet-facing server in two Availability Zones. Use Amazon S3 storage. Create an AWS Lambda function to process order files. Use S3 Event Notifications to send s3:ObjectCreated:* events to the Lambda function.
B.
Create an AWS Transfer Family SFTP internet-facing server in one Availability Zone. Use Amazon Elastic File System (Amazon EFS) storage. Create an AWS Lambda function to process order files. Use a Transfer Family managed workflow to invoke the Lambda function.
C.
Create an AWS Transfer Family SFTP internal server in two Availability Zones. Use Amazon Elastic File System (Amazon EFS) storage. Create an AWS Step Functions state machine to process order files. Use Amazon EventBridge Scheduler to invoke the state machine to periodically check Amazon EFS for order files.
D.
Create an AWS Transfer Family SFTP internal server in two Availability Zones. Use Amazon S3 storage. Create an AWS Lambda function to process order files. Use a Transfer Family managed workflow to invoke the Lambda function.
Suggested answer: D

Explanation:

This solution meets the requirements because it uses the following components and features:

AWS Transfer Family SFTP internal server: This allows the application to securely transfer order files from the on-premises ERP system to AWS using the SFTP protocol over a private connection. The internal server is deployed in two Availability Zones for high availability and fault tolerance.

Amazon S3 storage: This provides scalable, durable, and cost-effective object storage for the order files. Amazon S3 also supports encryption at rest and in transit, as well as lifecycle policies and versioning for data protection and compliance.

AWS Lambda function: This enables the application to process the order files in a serverless manner, without provisioning or managing servers. The Lambda function can perform any custom logic or transformation on the order files, such as validating, parsing, or enriching the data.

Transfer Family managed workflow: This simplifies the orchestration of the file processing tasks by triggering the Lambda function as soon as a file is uploaded to the SFTP server. The managed workflow also provides error handling, retry policies, and logging capabilities.
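
The following is a minimal boto3 sketch of this wiring, assuming a VPC-hosted server; the ARNs, step name, and timeout are placeholders rather than values from the question:

```python
import boto3

transfer = boto3.client("transfer")

# Placeholder ARNs for illustration only.
LAMBDA_ARN = "arn:aws:lambda:us-east-1:123456789012:function:process-order-file"
WORKFLOW_ROLE_ARN = "arn:aws:iam::123456789012:role/transfer-workflow-role"

# Managed workflow with one custom step that invokes the Lambda function
# as soon as a file upload completes.
workflow = transfer.create_workflow(
    Description="Process ERP order files on upload",
    Steps=[
        {
            "Type": "CUSTOM",
            "CustomStepDetails": {
                "Name": "ProcessOrderFile",
                "Target": LAMBDA_ARN,
                "TimeoutSeconds": 300,
            },
        }
    ],
)

# Attach the workflow to an internal (VPC endpoint) SFTP server so orders
# are processed immediately instead of on an hourly schedule.
server = transfer.create_server(
    Protocols=["SFTP"],
    EndpointType="VPC",
    WorkflowDetails={
        "OnUpload": [
            {
                "WorkflowId": workflow["WorkflowId"],
                "ExecutionRole": WORKFLOW_ROLE_ARN,
            }
        ]
    },
)
print(server["ServerId"])
```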

A company needs to provide customers with secure access to its data. The company processes customer data and stores the results in an Amazon S3 bucket.

All the data is subject to strong regulations and security requirements. The data must be encrypted at rest. Each customer must be able to access only their data from their AWS account. Company employees must not be able to access the data.

Which solution will meet these requirements?

A.
Provision an AWS Certificate Manager (ACM) certificate for each customer. Encrypt the data client-side. In the private certificate policy, deny access to the certificate for all principals except an IAM role that the customer provides.
B.
Provision a separate AWS Key Management Service (AWS KMS) key for each customer. Encrypt the data server-side. In the S3 bucket policy, deny decryption of data for all principals except an IAM role that the customer provides.
C.
Provision a separate AWS Key Management Service (AWS KMS) key for each customer. Encrypt the data server-side. In each KMS key policy, deny decryption of data for all principals except an IAM role that the customer provides.
D.
Provision an AWS Certificate Manager (ACM) certificate for each customer. Encrypt the data client-side. In the public certificate policy, deny access to the certificate for all principals except an IAM role that the customer provides.
Suggested answer: C

Explanation:

The correct solution is to provision a separate AWS KMS key for each customer and encrypt the data server-side. This way, the company can use the S3 encryption feature to protect the data at rest and delegate the control of the encryption keys to the customers. The customers can then use their own IAM roles to access and decrypt their data. The company employees will not be able to access the data because they are not authorized by the KMS key policies. The other options are incorrect because:

Options A and D use ACM certificates to encrypt the data client-side. ACM provisions and manages TLS certificates for encrypting data in transit; it is not a data-encryption key management service, so there is no certificate policy that can control decryption of objects at rest. The company would also have to manage a certificate for each customer, which is neither scalable nor secure.

Option B uses a separate KMS key for each customer but relies on the S3 bucket policy to control decryption. This is not effective because a bucket policy governs access to S3 API actions, not to KMS keys; permission to decrypt is evaluated against the KMS key policy and the caller's IAM policies. Denying decryption in the bucket policy therefore does not stop company employees from using the keys, and it is also harder to express per-customer isolation for objects in a single bucket.
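
As an illustration, a per-customer key could be created with a key policy along these lines; the account ID, role ARN, and action lists are placeholder assumptions, and whichever principal writes the objects would additionally need permissions such as kms:GenerateDataKey:

```python
import json
import boto3

kms = boto3.client("kms")

# Placeholder principals for illustration only.
ACCOUNT_ROOT = "arn:aws:iam::123456789012:root"
CUSTOMER_ROLE_ARN = "arn:aws:iam::111122223333:role/customer-data-access"

key_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # Let the owning account administer the key (but not use it
            # for decryption, which stays with the customer role).
            "Sid": "AllowKeyAdministration",
            "Effect": "Allow",
            "Principal": {"AWS": ACCOUNT_ROOT},
            "Action": [
                "kms:Create*", "kms:Describe*", "kms:Enable*",
                "kms:Put*", "kms:Disable*", "kms:ScheduleKeyDeletion",
                "kms:CancelKeyDeletion", "kms:TagResource",
            ],
            "Resource": "*",
        },
        {
            # Only the customer-provided role can decrypt.
            "Sid": "AllowCustomerDecrypt",
            "Effect": "Allow",
            "Principal": {"AWS": CUSTOMER_ROLE_ARN},
            "Action": ["kms:Decrypt", "kms:DescribeKey"],
            "Resource": "*",
        },
    ],
}

key = kms.create_key(
    Description="Per-customer data key",
    Policy=json.dumps(key_policy),
)
print(key["KeyMetadata"]["KeyId"])
```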

S3 encryption

KMS key policies

ACM certificates

A company wants to rearchitect a large-scale web application to a serverless microservices architecture. The application uses Amazon EC2 instances and is written in Python.

The company selected one component of the web application to test as a microservice. The component supports hundreds of requests each second. The company wants to create and test the microservice on an AWS solution that supports Python. The solution must also scale automatically and require minimal infrastructure and minimal operational support.

Which solution will meet these requirements?

A.
Use a Spot Fleet with auto scaling of EC2 instances that run the most recent Amazon Linux operating system.
B.
Use an AWS Elastic Beanstalk web server environment that has high availability configured.
C.
Use Amazon Elastic Kubernetes Service (Amazon EKS). Launch Auto Scaling groups of self-managed EC2 instances.
D.
Use an AWS Lambda function that runs custom developed code.
Suggested answer: D

Explanation:

AWS Lambda is a serverless compute service that lets you run code without provisioning or managing servers. You can use Lambda to create and test microservices that are written in Python or other supported languages. Lambda scales automatically to handle the number of requests per second. You only pay for the compute time you consume. Lambda also integrates with other AWS services, such as Amazon API Gateway, Amazon S3, Amazon DynamoDB, and Amazon SQS, to enable event-driven architectures. Lambda has minimal infrastructure and operational overhead, as you do not need to manage servers, operating systems, patches, or scaling policies.

The other options are not serverless solutions; they require more infrastructure and operational support and do not scale as transparently with the request rate. A Spot Fleet is a collection of EC2 instances that run on spare capacity at low prices. However, Spot Instances can be interrupted by AWS at any time, which can affect the availability and performance of your microservice. AWS Elastic Beanstalk is a service that automates the deployment and management of web applications on EC2 instances. However, you still need to provision, configure, and monitor the underlying EC2 instances and load balancers. Amazon EKS is a service that runs Kubernetes on AWS. However, you still need to create, configure, and manage the EC2 instances that form the Kubernetes cluster nodes, and you need to install and update the Kubernetes software and tools.
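
A minimal sketch of such a Python microservice handler, assuming an Amazon API Gateway proxy integration in front of the function:

```python
import json

def lambda_handler(event, context):
    # API Gateway proxy integrations deliver the request body as a JSON
    # string; tolerate an empty body.
    body = json.loads(event.get("body") or "{}")

    # Placeholder business logic for the extracted component.
    result = {"orderId": body.get("orderId"), "status": "processed"}

    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(result),
    }
```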

What is AWS Lambda?

Building Lambda functions with Python

Create a layer for a Lambda Python function

AWS Lambda -- Function in Python

How do I call my AWS Lambda function from a local python script?

A company stores critical data in Amazon DynamoDB tables in the company's AWS account. An IT administrator accidentally deleted a DynamoDB table. The deletion caused a significant loss of data and disrupted the company's operations. The company wants to prevent this type of disruption in the future.

Which solution will meet this requirement with the LEAST operational overhead?

A.
Configure a trail in AWS CloudTrail. Create an Amazon EventBridge rule for delete actions. Create an AWS Lambda function to automatically restore deleted DynamoDB tables.
B.
Create a backup and restore plan for the DynamoDB tables. Recover the DynamoDB tables manually.
C.
Configure deletion protection on the DynamoDB tables.
D.
Enable point-in-time recovery on the DynamoDB tables.
Suggested answer: C

Explanation:

Deletion protection is a feature of DynamoDB that prevents accidental deletion of tables. When deletion protection is enabled, you cannot delete a table unless you explicitly disable the setting first. This adds an extra layer of security and reduces the risk of data loss and operational disruption. Deletion protection is easy to enable and disable by using the AWS Management Console, the AWS CLI, or the DynamoDB API. This solution has the least operational overhead because you do not need to create, manage, or invoke any additional resources or services.
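
For example, deletion protection can be turned on for an existing table with a single boto3 call; the table name here is a placeholder:

```python
import boto3

dynamodb = boto3.client("dynamodb")

# With DeletionProtectionEnabled set, DeleteTable requests fail until the
# flag is explicitly disabled again.
dynamodb.update_table(
    TableName="Orders",  # placeholder table name
    DeletionProtectionEnabled=True,
)
```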

Using deletion protection to protect your table

Preventing Accidental Table Deletion in DynamoDB

Amazon DynamoDB now supports table deletion protection

A solutions architect is designing an AWS Identity and Access Management (IAM) authorization model for a company's AWS account. The company has designated five specific employees to have full access to AWS services and resources in the AWS account.

The solutions architect has created an IAM user for each of the five designated employees and has created an IAM user group.

Which solution will meet these requirements?

A.
Attach the AdministratorAccess resource-based policy to the IAM user group. Place each of the five designated employee IAM users in the IAM user group.
B.
Attach the SystemAdministrator identity-based policy to the IAM user group. Place each of the five designated employee IAM users in the IAM user group.
C.
Attach the AdministratorAccess identity-based policy to the IAM user group. Place each of the five designated employee IAM users in the IAM user group.
D.
Attach the SystemAdministrator resource-based policy to the IAM user group. Place each of the five designated employee IAM users in the IAM user group.
Suggested answer: C

Explanation:

This solution meets the requirements because it uses the following components and features:

AdministratorAccess identity-based policy: This is an AWS managed policy that provides full access to AWS services and resources. By attaching this policy to the IAM user group, the solutions architect can grant the permissions needed for the designated employees to perform any task in the AWS account.

IAM user group: This is a collection of IAM users that share common permissions. By creating a user group and adding the five designated employees as members, the solutions architect can simplify the management of permissions and reduce the risk of human errors or inconsistencies.

IAM users: These are identities that represent the designated employees in AWS. By creating an IAM user for each employee and requiring them to sign in with their own credentials, the solutions architect can enhance the security and accountability of the AWS account.
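
A short boto3 sketch of this setup; the group name and user names are placeholders, while the policy ARN is the AWS managed AdministratorAccess policy:

```python
import boto3

iam = boto3.client("iam")

GROUP_NAME = "account-administrators"  # placeholder group name
ADMIN_POLICY_ARN = "arn:aws:iam::aws:policy/AdministratorAccess"

# Create the group and attach the AWS managed identity-based policy.
iam.create_group(GroupName=GROUP_NAME)
iam.attach_group_policy(GroupName=GROUP_NAME, PolicyArn=ADMIN_POLICY_ARN)

# Place each designated employee's IAM user in the group
# (user names are placeholders).
for user_name in ["user1", "user2", "user3", "user4", "user5"]:
    iam.add_user_to_group(GroupName=GROUP_NAME, UserName=user_name)
```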

A company stores text files in Amazon S3. The text files include customer chat messages, date and time information, and customer personally identifiable information (PII).

The company needs a solution to provide samples of the conversations to an external service provider for quality control. The external service provider needs to randomly pick sample conversations up to the most recent conversation. The company must not share the customer PII with the external service provider. The solution must scale when the number of customer conversations increases.

Which solution will meet these requirements with the LEAST operational overhead?

A.
Create an Object Lambda Access Point. Create an AWS Lambda function that redacts the PII when the function reads the file. Instruct the external service provider to access the Object Lambda Access Point.
B.
Create a batch process on an Amazon EC2 instance that regularly reads all new files, redacts the PII from the files, and writes the redacted files to a different S3 bucket. Instruct the external service provider to access the bucket that does not contain the PII.
C.
Create a web application on an Amazon EC2 instance that presents a list of the files, redacts the PII from the files, and allows the external service provider to download new versions of the files that have the PII redacted.
D.
Create an Amazon DynamoDB table. Create an AWS Lambda function that reads only the data in the files that does not contain PII. Configure the Lambda function to store the non-PII data in the DynamoDB table when a new file is written to Amazon S3. Grant the external service provider access to the DynamoDB table.
Suggested answer: A

Explanation:

The correct solution is to create an Object Lambda Access Point and an AWS Lambda function that redacts the PII when the function reads the file. This way, the company can use the S3 Object Lambda feature to modify the S3 object content on the fly, without creating a copy or changing the original object. The external service provider can access the Object Lambda Access Point and get the redacted version of the file. This solution has the least operational overhead because it does not require any additional storage, processing, or synchronization. The solution also scales automatically with the number of customer conversations and the demand from the external service provider. The other options are incorrect because:

Option B is using a batch process on an EC2 instance to read, redact, and write the files to a different S3 bucket. This solution has more operational overhead because it requires managing the EC2 instance, the batch process, and the additional S3 bucket. It also introduces latency and inconsistency between the original and the redacted files.

Option C is using a web application on an EC2 instance to present, redact, and download the files. This solution has more operational overhead because it requires managing the EC2 instance, the web application, and the download process. It also exposes the original files to the web application, which increases the risk of leaking the PII.

Option D is using a DynamoDB table and a Lambda function to store the non-PII data from the files. This solution has more operational overhead because it requires managing the DynamoDB table, the Lambda function, and the data transformation. It also changes the format and the structure of the original files, which may affect the quality control process.
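
A minimal sketch of the redacting handler, assuming the standard S3 Object Lambda event shape and a naive regex in place of a real PII detector (Amazon Comprehend, for example):

```python
import re
import urllib.request

import boto3

s3 = boto3.client("s3")

# Naive email pattern used as a stand-in for real PII detection.
PII_PATTERN = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def lambda_handler(event, context):
    ctx = event["getObjectContext"]

    # Fetch the original object through the presigned URL that S3 Object
    # Lambda supplies in the event; the stored object is never modified.
    with urllib.request.urlopen(ctx["inputS3Url"]) as response:
        original = response.read().decode("utf-8")

    redacted = PII_PATTERN.sub("[REDACTED]", original)

    # Stream the transformed content back to the requester.
    s3.write_get_object_response(
        Body=redacted.encode("utf-8"),
        RequestRoute=ctx["outputRoute"],
        RequestToken=ctx["outputToken"],
    )
    return {"statusCode": 200}
```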

S3 Object Lambda

Object Lambda Access Point

Lambda function

A company stores multiple Amazon Machine Images (AMIs) in an AWS account to launch its Amazon EC2 instances. The AMIs contain critical data and configurations that are necessary for the company's operations. The company wants to implement a solution that will recover accidentally deleted AMIs quickly and efficiently.

Which solution will meet these requirements with the LEAST operational overhead?

A.
Create Amazon Elastic Block Store (Amazon EBS) snapshots of the AMIs. Store the snapshots in a separate AWS account.
B.
Copy all AMIs to another AWS account periodically.
C.
Create a retention rule in Recycle Bin.
D.
Upload the AMIs to an Amazon S3 bucket that has Cross-Region Replication.
Suggested answer: C

Explanation:

Recycle Bin is a data recovery feature that enables you to restore accidentally deleted Amazon EBS snapshots and EBS-backed AMIs. When you use Recycle Bin, deleted resources are retained there for a period that you specify before being permanently deleted. You can restore a resource from the Recycle Bin at any time before its retention period expires. This solution has the least operational overhead, as you do not need to create, copy, or upload any additional resources. You can also manage tags and permissions for AMIs in the Recycle Bin, and AMIs in the Recycle Bin do not incur any additional charges.
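
For instance, a retention rule for EBS-backed AMIs might be created as follows; the 14-day retention period is an assumed value:

```python
import boto3

rbin = boto3.client("rbin")

# Retain deleted EBS-backed AMIs for 14 days so they can be restored
# before permanent deletion.
rule = rbin.create_rule(
    ResourceType="EC2_IMAGE",
    RetentionPeriod={
        "RetentionPeriodValue": 14,
        "RetentionPeriodUnit": "DAYS",
    },
    Description="Recover accidentally deleted AMIs",
)
print(rule["Identifier"])
```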

Recover AMIs from the Recycle Bin

Recover an accidentally deleted Linux AMI

A company's developers want a secure way to gain SSH access on the company's Amazon EC2 instances that run the latest version of Amazon Linux. The developers work remotely and in the corporate office.

The company wants to use AWS services as a part of the solution. The EC2 instances are hosted in a VPC private subnet and access the internet through a NAT gateway that is deployed in a public subnet.

What should a solutions architect do to meet these requirements MOST cost-effectively?

A.
Create a bastion host in the same subnet as the EC2 instances. Grant the ec2:CreateVpnConnection IAM permission to the developers. Install EC2 Instance Connect so that the developers can connect to the EC2 instances.
B.
Create an AWS Site-to-Site VPN connection between the corporate network and the VPC. Instruct the developers to use the Site-to-Site VPN connection to access the EC2 instances when the developers are on the corporate network. Instruct the developers to set up another VPN connection for access when they work remotely.
C.
Create a bastion host in the public subnet of the VPC. Configure the security groups and SSH keys of the bastion host to only allow connections and SSH authentication from the developers' corporate and remote networks. Instruct the developers to connect through the bastion host by using SSH to reach the EC2 instances.
D.
Attach the AmazonSSMManagedInstanceCore IAM policy to an IAM role that is associated with the EC2 instances. Instruct the developers to use AWS Systems Manager Session Manager to access the EC2 instances.
Suggested answer: D

Explanation:

AWS Systems Manager Session Manager is a service that enables you to securely connect to your EC2 instances without using SSH keys or bastion hosts. You can use Session Manager to access your instances through the AWS Management Console, the AWS CLI, or the AWS SDKs. Session Manager uses IAM policies and roles to control who can access which instances. By attaching the AmazonSSMManagedInstanceCore IAM policy to an IAM role that is associated with the EC2 instances, you grant the SSM Agent the permissions it needs to register the instances with Systems Manager. You also need to attach another IAM policy to the developers' IAM users or roles that allows them to start sessions to the instances. Session Manager uses the AWS Systems Manager Agent (SSM Agent), which is installed by default on Amazon Linux 2 and other supported Linux distributions. Session Manager also encrypts all session data between your client and your instances, and can stream session logs to Amazon S3, Amazon CloudWatch Logs, or both for auditing purposes. This solution is the most cost-effective, as it does not require any additional resources, such as bastion hosts or extra VPN connections. It also simplifies the security and management of SSH access, as it eliminates the need for SSH keys, open inbound ports, or firewall rules.
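
A brief sketch of the setup; the role name is a placeholder, and developers would then connect with the Session Manager plugin rather than SSH:

```python
import boto3

iam = boto3.client("iam")

# Attach the managed policy that SSM Agent needs to the instance role
# (role name is a placeholder).
iam.attach_role_policy(
    RoleName="ec2-ssm-instance-role",
    PolicyArn="arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore",
)

# Developers then start sessions without SSH keys or open ports, e.g.:
#   aws ssm start-session --target i-0123456789abcdef0
```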

What is AWS Systems Manager?

Setting up Session Manager

Getting started with Session Manager

Controlling access to Session Manager

Logging Session Manager activity

A company runs a highly available web application on Amazon EC2 instances behind an Application Load Balancer. The company uses Amazon CloudWatch metrics.

As the traffic to the web application increases, some EC2 instances become overloaded with many outstanding requests. The CloudWatch metrics show that the number of requests processed and the time to receive the responses from some EC2 instances are both higher compared to other EC2 instances. The company does not want new requests to be forwarded to the EC2 instances that are already overloaded.

Which solution will meet these requirements?

A.
Use the round robin routing algorithm based on the RequestCountPerTarget and ActiveConnectionCount CloudWatch metrics.
B.
Use the least outstanding requests algorithm based on the RequestCountPerTarget and ActiveConnectionCount CloudWatch metrics.
C.
Use the round robin routing algorithm based on the RequestCount and TargetResponseTime CloudWatch metrics.
D.
Use the least outstanding requests algorithm based on the RequestCount and TargetResponseTime CloudWatch metrics.
Suggested answer: D

Explanation:

The least outstanding requests (LOR) algorithm is a load balancing algorithm that distributes incoming requests to the target with the fewest outstanding requests. This helps to avoid overloading any single target and improves the overall performance and availability of the web application. The LOR algorithm can use the RequestCount and TargetResponseTime CloudWatch metrics to determine the number of outstanding requests and the response time of each target. These metrics measure the number of requests processed by each target and the time elapsed after the request leaves the load balancer until a response from the target is received by the load balancer, respectively. By using these metrics, the LOR algorithm can route new requests to the targets that are less busy and more responsive, and avoid sending requests to the targets that are already overloaded or slow. This solution meets the requirements of the company.
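
Switching a target group to least outstanding requests is a single attribute change; the target group ARN below is a placeholder:

```python
import boto3

elbv2 = boto3.client("elbv2")

TARGET_GROUP_ARN = (
    "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
    "targetgroup/web/0123456789abcdef"  # placeholder ARN
)

# Replace the default round robin algorithm with least outstanding
# requests so new requests avoid already busy targets.
elbv2.modify_target_group_attributes(
    TargetGroupArn=TARGET_GROUP_ARN,
    Attributes=[
        {
            "Key": "load_balancing.algorithm.type",
            "Value": "least_outstanding_requests",
        }
    ],
)
```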

Application Load Balancer now supports Least Outstanding Requests algorithm for load balancing requests

Target groups for your Application Load Balancers

Elastic Load Balancing - Application Load Balancers

A pharmaceutical company is developing a new drug. The volume of data that the company generates has grown exponentially over the past few months. The company's researchers regularly require a subset of the entire dataset to be immediately available with minimal lag. However, the entire dataset does not need to be accessed on a daily basis. All the data currently resides in on-premises storage arrays, and the company wants to reduce ongoing capital expenses.

Which storage solution should a solutions architect recommend to meet these requirements?

A.
Run AWS DataSync as a scheduled cron job to migrate the data to an Amazon S3 bucket on an ongoing basis.
B.
Deploy an AWS Storage Gateway file gateway with an Amazon S3 bucket as the target storage. Migrate the data to the Storage Gateway appliance.
C.
Deploy an AWS Storage Gateway volume gateway with cached volumes with an Amazon S3 bucket as the target storage. Migrate the data to the Storage Gateway appliance.
D.
Configure an AWS Site-to-Site VPN connection from the on-premises environment to AWS. Migrate data to an Amazon Elastic File System (Amazon EFS) file system.
Suggested answer: C

Explanation:

AWS Storage Gateway is a hybrid cloud storage service that allows you to seamlessly integrate your on-premises applications with AWS cloud storage. Volume Gateway is a type of Storage Gateway that presents cloud-backed iSCSI block storage volumes to your on-premises applications. Volume Gateway operates in either cache mode or stored mode. In cache mode, your primary data is stored in Amazon S3, while retaining your frequently accessed data locally in the cache for low latency access. In stored mode, your primary data is stored locally and your entire dataset is available for low latency access on premises while also asynchronously getting backed up to Amazon S3.

For the pharmaceutical company's use case, cache mode is the most suitable option, as it meets the following requirements:

It reduces the need to scale the on-premises storage infrastructure, as most of the data is stored in Amazon S3, which is scalable, durable, and cost-effective.

It provides low latency access to the subset of the data that the researchers regularly require, as it is cached locally in the Storage Gateway appliance.

It does not require the entire dataset to be accessed on a daily basis, as it is stored in Amazon S3 and can be retrieved on demand.

It offers flexible data protection and recovery options, as it allows taking point-in-time copies of the volumes using AWS Backup, which are stored in AWS as Amazon EBS snapshots.

Therefore, the solutions architect should recommend deploying an AWS Storage Gateway volume gateway with cached volumes with an Amazon S3 bucket as the target storage and migrating the data to the Storage Gateway appliance.
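
As a sketch, a cached volume could be created on an existing appliance like this; the gateway ARN, target name, size, and network interface are placeholder assumptions:

```python
import boto3

sgw = boto3.client("storagegateway")

GATEWAY_ARN = (
    "arn:aws:storagegateway:us-east-1:123456789012:"
    "gateway/sgw-12345678"  # placeholder ARN
)

# Cached mode: primary data is stored in Amazon S3 while the appliance
# keeps frequently accessed data in its local cache.
volume = sgw.create_cached_iscsi_volume(
    GatewayARN=GATEWAY_ARN,
    VolumeSizeInBytes=1024**4,  # 1 TiB, assumed size
    TargetName="research-data",
    NetworkInterfaceId="10.0.0.10",  # appliance interface IP
    ClientToken="create-research-volume-1",
)
print(volume["VolumeARN"])
```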

Volume Gateway | Amazon Web Services

How Volume Gateway works (architecture) - AWS Storage Gateway

AWS Storage Volume Gateway - Cached volumes - Stack Overflow
