Amazon SAA-C03 Practice Test - Questions Answers, Page 43

A company has a mobile chat application with a data store based in Amazon DynamoDB. Users would like new messages to be read with as little latency as possible. A solutions architect needs to design an optimal solution that requires minimal application changes.

Which method should the solutions architect select?

A. Configure Amazon DynamoDB Accelerator (DAX) for the new messages table. Update the code to use the DAX endpoint.
B. Add DynamoDB read replicas to handle the increased read load. Update the application to point to the read endpoint for the read replicas.
C. Double the number of read capacity units for the new messages table in DynamoDB. Continue to use the existing DynamoDB endpoint.
D. Add an Amazon ElastiCache for Redis cache to the application stack. Update the application to point to the Redis cache endpoint instead of DynamoDB.

Suggested answer: A

Explanation:

https://aws.amazon.com/premiumsupport/knowledge-center/dynamodb-high-latency/

Amazon DynamoDB Accelerator (DAX) is a fully managed in-memory cache for DynamoDB that improves the performance of DynamoDB tables by up to 10 times and provides microsecond-level response times at any scale. It is compatible with DynamoDB API operations and requires minimal code changes to use. By configuring DAX for the new messages table, the solution can reduce the latency for reading new messages with minimal application changes.
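
As a sketch of the "minimal application changes" point (a hypothetical example, not from the original question): the DAX SDK client is a drop-in replacement for the DynamoDB client, so only the client construction changes. This assumes the amazondax Python package and made-up cluster endpoint and table names:

import amazondax

# Before: boto3.resource("dynamodb") -- reads go straight to DynamoDB.
# After: the same Table/get_item calls, now served from the DAX cache.
dax = amazondax.AmazonDaxClient.resource(
    endpoint_url="daxs://my-dax-cluster.abc123.dax-clusters.us-east-1.amazonaws.com"  # hypothetical
)
table = dax.Table("Messages")  # hypothetical table name

response = table.get_item(Key={"ChatId": "room-42", "MessageId": "1001"})
print(response.get("Item"))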

b) Add DynamoDB read replicas to handle the increased read load. Update the application to point to the read endpoint for the read replicas. This solution will not work, as DynamoDB does not support read replicas as a feature. Read replicas are available for Amazon RDS, not for DynamoDB.

c) Double the number of read capacity units for the new messages table in DynamoDB. Continue to use the existing DynamoDB endpoint. This solution will not meet the requirement of reading new messages with as little latency as possible, as increasing the read capacity units will only increase the throughput of DynamoDB, not the performance or latency.

d) Add an Amazon ElastiCache for Redis cache to the application stack. Update the application to point to the Redis cache endpoint instead of DynamoDB. This solution will not meet the requirement of minimal application changes, as adding ElastiCache for Redis will require significant code changes to implement caching logic, such as querying cache first, updating cache after writing to DynamoDB, and invalidating cache when needed.

Reference URL: https://aws.amazon.com/dynamodb/dax/

A company needs to integrate with a third-party data feed. The data feed sends a webhook to notify an external service when new data is ready for consumption. A developer wrote an AWS Lambda function to retrieve data when the company receives a webhook callback. The developer must make the Lambda function available for the third party to call.

Which solution will meet these requirements with the MOST operational efficiency?


A. Create a function URL for the Lambda function. Provide the Lambda function URL to the third party for the webhook.
B. Deploy an Application Load Balancer (ALB) in front of the Lambda function. Provide the ALB URL to the third party for the webhook.
C. Create an Amazon Simple Notification Service (Amazon SNS) topic. Attach the topic to the Lambda function. Provide the public hostname of the SNS topic to the third party for the webhook.
D. Create an Amazon Simple Queue Service (Amazon SQS) queue. Attach the queue to the Lambda function. Provide the public hostname of the SQS queue to the third party for the webhook.

Suggested answer: A

Explanation:

A function URL is a dedicated HTTPS endpoint for a Lambda function that can be used to invoke the function directly, without any additional infrastructure. Lambda generates and manages the endpoint (of the form https://<url-id>.lambda-url.<region>.on.aws). By creating a function URL for the Lambda function, the solution can make the Lambda function available for the third party to call with the most operational efficiency.
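
For illustration, a function URL can be created with one API call. The sketch below uses boto3 with a hypothetical function name, and AuthType NONE so the third party can call the URL without SigV4 signing (NONE also requires a resource-based policy permitting public invocation):

import boto3

lambda_client = boto3.client("lambda")

# Create the public HTTPS endpoint for the function.
config = lambda_client.create_function_url_config(
    FunctionName="webhook-handler",  # hypothetical
    AuthType="NONE",
)

# Allow unauthenticated invocation through the function URL.
lambda_client.add_permission(
    FunctionName="webhook-handler",
    StatementId="AllowPublicFunctionUrl",
    Action="lambda:InvokeFunctionUrl",
    Principal="*",
    FunctionUrlAuthType="NONE",
)

print("Webhook URL for the third party:", config["FunctionUrl"])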

b) Deploy an Application Load Balancer (ALB) in front of the Lambda function. Provide the ALB URL to the third party for the webhook. This solution will not meet the requirement of the most operational efficiency, as it involves creating and managing an additional resource (ALB) that is not necessary for invoking a Lambda function over HTTPS.

c) Create an Amazon Simple Notification Service (Amazon SNS) topic. Attach the topic to the Lambda function. Provide the public hostname of the SNS topic to the third party for the webhook. This solution will not work, as Amazon SNS topics do not have public hostnames that can be used as webhooks. SNS topics are used to publish messages to subscribers, not to receive messages from external sources.

d) Create an Amazon Simple Queue Service (Amazon SQS) queue. Attach the queue to the Lambda function. Provide the public hostname of the SQS queue to the third party for the webhook. This solution will not work, as Amazon SQS queues do not have public hostnames that can be used as webhooks. SQS queues are used to send, store, and receive messages between AWS services, not to receive messages from external sources.

Reference URL: https://docs.aws.amazon.com/lambda/latest/dg/lambda-api-permissions-ref.html

A company wants to move from many standalone AWS accounts to a consolidated, multi-account architecture. The company plans to create many new AWS accounts for different business units. The company needs to authenticate access to these AWS accounts by using a centralized corporate directory service.

Which combination of actions should a solutions architect recommend to meet these requirements? (Select TWO.)

A. Create a new organization in AWS Organizations with all features turned on. Create the new AWS accounts in the organization.
B. Set up an Amazon Cognito identity pool. Configure AWS IAM Identity Center (AWS Single Sign-On) to accept Amazon Cognito authentication.
C. Configure a service control policy (SCP) to manage the AWS accounts. Add AWS IAM Identity Center (AWS Single Sign-On) to AWS Directory Service.
D. Create a new organization in AWS Organizations. Configure the organization's authentication mechanism to use AWS Directory Service directly.
E. Set up AWS IAM Identity Center (AWS Single Sign-On) in the organization. Configure IAM Identity Center, and integrate it with the company's corporate directory service.

Suggested answer: A, E

Explanation:

AWS Organizations is a service that helps users centrally manage and govern multiple AWS accounts. It allows users to create organizational units (OUs) to group accounts based on business needs or other criteria. It also allows users to define and attach service control policies (SCPs) to OUs or accounts to restrict the actions that can be performed by the accounts. By creating a new organization in AWS Organizations with all features turned on, the solution can consolidate and manage the new AWS accounts for different business units.

AWS IAM Identity Center (formerly known as AWS Single Sign-On) is a service that provides single sign-on access for all of your AWS accounts and cloud applications. It connects with Microsoft Active Directory through AWS Directory Service to allow users in that directory to sign in to a personalized AWS access portal using their existing Active Directory user names and passwords. From the AWS access portal, users have access to all the AWS accounts and cloud applications that they have permissions for. By setting up IAM Identity Center in the organization and integrating it with the company's corporate directory service, the solution can authenticate access to these AWS accounts using a centralized corporate directory service.
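
As a rough sketch of the two recommended actions (hypothetical names; the directory integration itself is completed in the IAM Identity Center settings rather than through this API):

import boto3

org = boto3.client("organizations")

# "ALL" turns on all features (SCPs, integration with IAM Identity
# Center), not just consolidated billing.
org.create_organization(FeatureSet="ALL")

# Create member accounts for the business units.
org.create_account(
    Email="aws+bu-a@example.com",   # hypothetical
    AccountName="business-unit-a",  # hypothetical
)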

b) Set up an Amazon Cognito identity pool. Configure AWS IAM Identity Center (AWS Single Sign-On) to accept Amazon Cognito authentication. This solution will not meet the requirement of authenticating access to these AWS accounts by using a centralized corporate directory service, as Amazon Cognito is a service that provides user sign-up, sign-in, and access control for web and mobile applications, not for corporate directory services.

c) Configure a service control policy (SCP) to manage the AWS accounts. Add AWS IAM Identity Center (AWS Single Sign-On) to AWS Directory Service. This solution will not work, as SCPs are used to restrict the actions that can be performed by the accounts in an organization, not to manage the accounts themselves. Also, IAM Identity Center cannot be added to AWS Directory Service, as it is a separate service that connects with Microsoft Active Directory through AWS Directory Service.

d) Create a new organization in AWS Organizations. Configure the organization's authentication mechanism to use AWS Directory Service directly. This solution will not work, as AWS Organizations does not have an authentication mechanism that can use AWS Directory Service directly. AWS Organizations relies on IAM Identity Center to provide single sign-on access for the accounts in an organization.

Reference URL: https://docs.aws.amazon.com/organizations/latest/userguide/orgs_integrate_services.html

A group requires permissions to list an Amazon S3 bucket and delete objects from that bucket. An administrator has created the following IAM policy to provide access to the bucket and applied that policy to the group. The group is not able to delete objects in the bucket. The company follows least-privilege access rules.

Which statement should a solutions architect add to the policy to correct bucket access?

(Options A, B, C, and D are IAM policy documents that were presented as images and are not reproduced here.)

A. Option A
B. Option B
C. Option C
D. Option D

Suggested answer: D

Explanation:

s3:ListBucket is a bucket-level action and must be granted on the bucket ARN, while s3:DeleteObject is an object-level action and must be granted on the object ARNs (the bucket ARN with a /* suffix). The original policy granted s3:DeleteObject only on the bucket ARN, so delete requests were denied. Option D adds a statement that scopes s3:DeleteObject to the objects in the bucket:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:DeleteObject"
      ],
      "Resource": "arn:aws:s3:::<bucket-name>"
    },
    {
      "Effect": "Allow",
      "Action": "s3:DeleteObject",
      "Resource": "arn:aws:s3:::<bucket-name>/*"
    }
  ]
}

A solutions architect is designing a REST API in Amazon API Gateway for a cash payback service. The application requires 1 GB of memory and 2 GB of storage for its computation resources. The application will require that the data is in a relational format.

Which additional combination of AWS services will meet these requirements with the LEAST administrative effort? (Select TWO.)

A. Amazon EC2
B. AWS Lambda
C. Amazon RDS
D. Amazon DynamoDB
E. Amazon Elastic Kubernetes Service (Amazon EKS)

Suggested answer: B, C

Explanation:

AWS Lambda is a service that lets users run code without provisioning or managing servers. It automatically scales and manages the underlying compute resources for the code. It supports multiple languages, such as Java, Python, Node.js, and Go. By using AWS Lambda for the REST API, the solution can meet the requirements of 1 GB of memory and minimal administrative effort.

Amazon RDS is a service that makes it easy to set up, operate, and scale a relational database in the cloud. It provides cost-efficient and resizable capacity while automating time-consuming administration tasks such as hardware provisioning, database setup, patching and backups. It supports multiple database engines, such as MySQL, PostgreSQL, Oracle, and SQL Server. By using Amazon RDS for the data store, the solution can meet the requirements of 2 GB of storage and a relational format.
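
A minimal sketch of how the two services fit together, assuming a MySQL-family RDS engine, the pymysql library, and hypothetical environment variables and table names; API Gateway would invoke this handler through a proxy integration:

import json
import os

import pymysql

# Created outside the handler so warm Lambda invocations reuse the connection.
conn = pymysql.connect(
    host=os.environ["DB_HOST"],          # hypothetical: the RDS endpoint
    user=os.environ["DB_USER"],
    password=os.environ["DB_PASSWORD"],
    database="payback",                  # hypothetical database name
    connect_timeout=5,
)

def handler(event, context):
    with conn.cursor() as cur:
        cur.execute("SELECT customer_id, balance FROM cashback LIMIT 10")
        rows = cur.fetchall()
    return {"statusCode": 200, "body": json.dumps(rows, default=str)}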

a) Amazon EC2. This solution will not meet the requirement of minimal administrative effort, as Amazon EC2 is a service that provides virtual servers in the cloud that users have to configure and manage themselves. It requires users to choose an instance type, an operating system, a security group, and other options.

d) Amazon DynamoDB. This solution will not meet the requirement of a relational format, as Amazon DynamoDB is a service that provides a key-value and document database that delivers single-digit millisecond performance at any scale. It is a non-relational (NoSQL) database that does not store data in a relational format or support SQL joins.

e) Amazon Elastic Kubernetes Service (Amazon EKS). This solution will not meet the requirement of minimal administrative effort, as Amazon EKS manages only the Kubernetes control plane; users still have to configure and manage their workloads. It requires users to create clusters, node groups, pods, services, and other Kubernetes resources.

Reference URL: https://aws.amazon.com/lambda/

A company has resources across multiple AWS Regions and accounts. A newly hired solutions architect discovers that a previous employee did not provide details about the resource inventory. The solutions architect needs to build and map the relationship details of the various workloads across all accounts.

Which solution will meet these requirements in the MOST operationally efficient way?

A. Use AWS Systems Manager Inventory to generate a map view from the detailed view report.
B. Use AWS Step Functions to collect workload details. Build architecture diagrams of the workloads manually.
C. Use Workload Discovery on AWS to generate architecture diagrams of the workloads.
D. Use AWS X-Ray to view the workload details. Build architecture diagrams with relationships.

Suggested answer: C

Explanation:

Workload Discovery on AWS (formerly called AWS Perspective) is a tool that visualizes AWS Cloud workloads. It maintains an inventory of the AWS resources across your accounts and Regions, maps relationships between them, and displays them in a web UI. It also allows you to query AWS Cost and Usage Reports, search for resources, save and export architecture diagrams, and more. By using Workload Discovery on AWS, the solution can build and map the relationship details of the various workloads across all accounts with the least operational effort.

a) Use AWS Systems Manager Inventory to generate a map view from the detailed view report. This solution will not meet the requirement of building and mapping the relationship details of the various workloads across all accounts, as AWS Systems Manager Inventory is a feature that collects metadata from your managed instances and stores it in a central Amazon S3 bucket. It does not provide a map view or architecture diagrams of the workloads.

b) Use AWS Step Functions to collect workload details. Build architecture diagrams of the workloads manually. This solution will not meet the requirement of the least operational effort, as it involves creating and managing state machines to orchestrate the collection of workload details, and building architecture diagrams manually.

d) Use AWS X-Ray to view the workload details. Build architecture diagrams with relationships. This solution will not meet the requirement of the least operational effort, as it involves instrumenting your applications with X-Ray SDKs to collect workload details, and building architecture diagrams manually.

Reference URL: https://aws.amazon.com/solutions/implementations/workload-discovery-on-aws/

A company's applications run on Amazon EC2 instances in Auto Scaling groups. The company notices that its applications experience sudden traffic increases on random days of the week. The company wants to maintain application performance during sudden traffic increases.

Which solution will meet these requirements MOST cost-effectively?

A. Use manual scaling to change the size of the Auto Scaling group.
B. Use predictive scaling to change the size of the Auto Scaling group.
C. Use dynamic scaling to change the size of the Auto Scaling group.
D. Use scheduled scaling to change the size of the Auto Scaling group.

Suggested answer: C

Explanation:

Dynamic scaling is a type of autoscaling that automatically adjusts the number of EC2 instances in an Auto Scaling group based on demand or load. It uses CloudWatch alarms to trigger scaling actions when a specified metric crosses a threshold. It can scale out (add instances) or scale in (remove instances) as needed. By using dynamic scaling, the solution can maintain application performance during sudden traffic increases most cost-effectively.
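
As a concrete illustration (hypothetical group and policy names), a target tracking policy is the simplest form of dynamic scaling; the group then adds or removes instances on its own to hold the metric near the target:

import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="app-asg",  # hypothetical
    PolicyName="keep-cpu-at-50",     # hypothetical
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,  # scale out/in to keep average CPU near 50%
    },
)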

a) Use manual scaling to change the size of the Auto Scaling group. This solution will not meet the requirement of maintaining application performance during sudden traffic increases, as manual scaling requires users to manually increase or decrease the number of instances through a CLI or console. It does not respond automatically to changes in demand or load.

b) Use predictive scaling to change the size of the Auto Scaling group. This solution will not meet the requirement of most cost-effectiveness, as predictive scaling uses machine learning and artificial intelligence tools to evaluate traffic loads and anticipate when more or fewer resources are needed. It performs scheduled scaling actions based on the prediction, which may not match the actual demand or load at any given time. Predictive scaling is more suitable for scenarios where there are predictable traffic patterns or known changes in traffic loads.

d) Use scheduled scaling to change the size of the Auto Scaling group. This solution will not meet the requirement of maintaining application performance during sudden traffic increases, as scheduled scaling performs scaling actions at specific times that users schedule. It does not respond automatically to changes in demand or load. Scheduled scaling is more suitable for scenarios where there are predictable traffic drops or spikes at specific times of the day.

Reference URL: https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scale-based-on-demand.html

A company has an on-premises server that uses an Oracle database to process and store customer information. The company wants to use an AWS database service to achieve higher availability and to improve application performance. The company also wants to offload reporting from its primary database system.

Which solution will meet these requirements in the MOST operationally efficient way?

A. Use AWS Database Migration Service (AWS DMS) to create an Amazon RDS DB instance in multiple AWS Regions. Point the reporting functions toward a separate DB instance from the primary DB instance.
B. Use Amazon RDS in a Single-AZ deployment to create an Oracle database. Create a read replica in the same zone as the primary DB instance. Direct the reporting functions to the read replica.
C. Use Amazon RDS deployed in a Multi-AZ cluster deployment to create an Oracle database. Direct the reporting functions to use the reader instance in the cluster deployment.
D. Use Amazon RDS deployed in a Multi-AZ instance deployment to create an Amazon Aurora database. Direct the reporting functions to the reader instances.

Suggested answer: D

Explanation:

Amazon Aurora is a fully managed relational database that is compatible with MySQL and PostgreSQL. It provides up to five times better performance than MySQL and up to three times better performance than PostgreSQL. It also provides high availability and durability by replicating data across multiple Availability Zones and continuously backing up data to Amazon S3. By using Amazon RDS deployed in a Multi-AZ instance deployment to create an Amazon Aurora database, the solution can achieve higher availability and improve application performance.

Amazon Aurora supports read replicas, which are separate instances that share the same underlying storage as the primary instance. Read replicas can be used to offload read-only queries from the primary instance and improve performance. Read replicas can also be used for reporting functions. By directing the reporting functions to the reader instances, the solution can offload reporting from its primary database system.
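
For illustration (hypothetical cluster identifier), the reporting functions would connect to the cluster's reader endpoint, which load-balances connections across the available replicas; boto3 exposes both endpoints via describe_db_clusters:

import boto3

rds = boto3.client("rds")

resp = rds.describe_db_clusters(DBClusterIdentifier="reporting-cluster")  # hypothetical
cluster = resp["DBClusters"][0]

print("Writer endpoint (application writes):", cluster["Endpoint"])
print("Reader endpoint (reporting queries):", cluster["ReaderEndpoint"])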

a) Use AWS Database Migration Service (AWS DMS) to create an Amazon RDS DB instance in multiple AWS Regions. Point the reporting functions toward a separate DB instance from the primary DB instance. This solution will not meet the requirement of using an AWS database service, as AWS DMS is a service that helps users migrate databases to AWS, not a database service itself. It also involves creating multiple DB instances in different Regions, which may increase complexity and cost.

b) Use Amazon RDS in a Single-AZ deployment to create an Oracle database. Create a read replica in the same zone as the primary DB instance. Direct the reporting functions to the read replica. This solution will not meet the requirement of achieving higher availability, as a Single-AZ deployment does not provide failover protection in case of an Availability Zone outage. It also involves using Oracle as the database engine, which may not provide better performance than Aurora.

c) Use Amazon RDS deployed in a Multi-AZ cluster deployment to create an Oracle database. Direct the reporting functions to use the reader instance in the cluster deployment. This solution will not meet the requirement of improving application performance, as Oracle may not provide better performance than Aurora. It also involves using a Multi-AZ cluster deployment, which RDS supports only for MySQL and PostgreSQL, not for Oracle.

Reference URL: https://aws.amazon.com/rds/aurora/

A law firm needs to share information with the public. The information includes hundreds of files that must be publicly readable. Modifications or deletions of the files by anyone before a designated future date are prohibited.

Which solution will meet these requirements in the MOST secure way?

A. Upload all files to an Amazon S3 bucket that is configured for static website hosting. Grant read-only IAM permissions to any AWS principals that access the S3 bucket until the designated date.
B. Create a new Amazon S3 bucket with S3 Versioning enabled. Use S3 Object Lock with a retention period in accordance with the designated date. Configure the S3 bucket for static website hosting. Set an S3 bucket policy to allow read-only access to the objects.
C. Create a new Amazon S3 bucket with S3 Versioning enabled. Configure an event trigger to run an AWS Lambda function in case of object modification or deletion. Configure the Lambda function to replace the objects with the original versions from a private S3 bucket.
D. Upload all files to an Amazon S3 bucket that is configured for static website hosting. Select the folder that contains the files. Use S3 Object Lock with a retention period in accordance with the designated date. Grant read-only IAM permissions to any AWS principals that access the S3 bucket.

Suggested answer: B

Explanation:

Amazon S3 is a service that provides object storage in the cloud. It can be used to store and serve static web content, such as HTML, CSS, JavaScript, images, and videos. By creating a new Amazon S3 bucket and configuring it for static website hosting, the solution can share information with the public.

Amazon S3 Versioning is a feature that keeps multiple versions of an object in the same bucket. It helps protect objects from accidental deletion or overwriting by preserving, retrieving, and restoring every version of every object stored in an S3 bucket. Enabling S3 Versioning on the new bucket also satisfies a prerequisite for S3 Object Lock.

Amazon S3 Object Lock is a feature that allows users to store objects using a write-once-read-many (WORM) model. It can help prevent objects from being deleted or overwritten for a fixed amount of time or indefinitely. It requires S3 Versioning to be enabled on the bucket. By using S3 Object Lock with a retention period in accordance with the designated date, the solution can prohibit modifications or deletions of the files by anyone before that date.

Amazon S3 bucket policies are JSON documents that define access permissions for a bucket and its objects. They can be used to grant or deny access to specific users or groups based on conditions such as IP address, time of day, or source bucket. By setting an S3 bucket policy to allow read-only access to the objects, the solution can ensure that the files are publicly readable.
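
A sketch of how these pieces combine, using boto3 with a hypothetical bucket name and a one-year retention period standing in for the designated date (note that the bucket's Block Public Access settings must also permit the public-read policy):

import json

import boto3

s3 = boto3.client("s3")
bucket = "law-firm-public-files"  # hypothetical

# Object Lock can only be enabled at creation time, and it requires
# (and automatically enables) S3 Versioning.
s3.create_bucket(Bucket=bucket, ObjectLockEnabledForBucket=True)

# Compliance mode: no one, including the root user, can delete or
# overwrite locked object versions until the retention period ends.
s3.put_object_lock_configuration(
    Bucket=bucket,
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 365}},
    },
)

# Public read-only access to the objects.
s3.put_bucket_policy(
    Bucket=bucket,
    Policy=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
        }],
    }),
)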

a) Upload all files to an Amazon S3 bucket that is configured for static website hosting. Grant read-only IAM permissions to any AWS principals that access the S3 bucket until the designated date. This solution will not meet the requirement of prohibiting modifications or deletions of the files by anyone before a designated future date, as IAM permissions only apply to AWS principals, not to public users. It also does not use any feature to prevent accidental or intentional deletion or overwriting of the files.

c) Create a new Amazon S3 bucket with S3 Versioning enabled. Configure an event trigger to run an AWS Lambda function in case of object modification or deletion. Configure the Lambda function to replace the objects with the original versions from a private S3 bucket. This solution will not meet the requirement of prohibiting modifications or deletions of the files by anyone before a designated future date, as it only reacts to object modification or deletion events after they occur. It also involves creating and managing an additional resource (Lambda function) and a private S3 bucket.

d) Upload all files to an Amazon S3 bucket that is configured for static website hosting. Select the folder that contains the files. Use S3 Object Lock with a retention period in accordance with the designated date. Grant read-only IAM permissions to any AWS principals that access the S3 bucket. This solution will not meet the requirement of prohibiting modifications or deletions of the files by anyone before a designated future date, as it does not enable S3 Versioning on the bucket, which is required for using S3 Object Lock. It also does not allow read-only access to public users.

Reference URL: https://docs.aws.amazon.com/AmazonS3/latest/userguide/WebsiteHosting.html

A company is looking for a solution that can store video archives in AWS from old news footage. The company needs to minimize costs and will rarely need to restore these files. When the files are needed, they must be available in a maximum of five minutes.

What is the MOST cost-effective solution?

A. Store the video archives in Amazon S3 Glacier and use Expedited retrievals.
B. Store the video archives in Amazon S3 Glacier and use Standard retrievals.
C. Store the video archives in Amazon S3 Standard-Infrequent Access (S3 Standard-IA).
D. Store the video archives in Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA).

Suggested answer: A

Explanation:

Amazon S3 Glacier is a storage class that provides secure, durable, and extremely low-cost storage for data archiving and long-term backup. It is designed for data that is rarely accessed and for which retrieval times of several hours are suitable. By storing the video archives in Amazon S3 Glacier, the solution can minimize costs.

Amazon S3 Glacier offers three options for data retrieval: Expedited, Standard, and Bulk. Expedited retrievals typically return data in 1-5 minutes and are suitable for active archive use cases. Standard retrievals typically complete within 3-5 hours and are suitable for less urgent needs. Bulk retrievals typically complete within 5-12 hours and are the lowest-cost retrieval option. By using Expedited retrievals, the solution can meet the requirement of restoring the files in a maximum of five minutes.
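
For illustration (hypothetical bucket and key names), an archived object is restored with restore_object, and the retrieval tier is selected per request; Expedited typically makes the temporary copy available within 1-5 minutes:

import boto3

s3 = boto3.client("s3")

s3.restore_object(
    Bucket="news-footage-archive",   # hypothetical
    Key="1998/broadcast-0142.mov",   # hypothetical
    RestoreRequest={
        "Days": 1,  # how long the restored copy remains available
        "GlacierJobParameters": {"Tier": "Expedited"},
    },
)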

b) Store the video archives in Amazon S3 Glacier and use Standard retrievals. This solution will not meet the requirement of restoring the files in a maximum of five minutes, as Standard retrievals typically complete within 3-5 hours.

c) Store the video archives in Amazon S3 Standard-Infrequent Access (S3 Standard-IA). This solution will not meet the requirement of minimizing costs, as S3 Standard-IA is a storage class that provides low-cost storage for data that is accessed less frequently but requires rapid access when needed. It has a higher storage cost than S3 Glacier.

d) Store the video archives in Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA). This solution will not meet the requirement of minimizing costs, as S3 One Zone-IA is a storage class that provides low-cost storage for data that is accessed less frequently but requires rapid access when needed. It has a higher storage cost than S3 Glacier.

Reference URL: https://aws.amazon.com/s3/glacier/
