Amazon SAA-C03 Practice Test - Questions Answers, Page 88

A consulting company provides professional services to customers worldwide. The company provides solutions and tools for customers to expedite gathering and analyzing data on AWS. The company needs to centrally manage and deploy a common set of solutions and tools for customers to use for self-service purposes.

Which solution will meet these requirements?

A. Create AWS CloudFormation templates for the customers.

B. Create AWS Service Catalog products for the customers.

C. Create AWS Systems Manager templates for the customers.

D. Create AWS Config items for the customers.

Suggested answer: B

Explanation:

AWS Service Catalog allows organizations to centrally manage commonly deployed IT services and offers self-service deployment capabilities to customers. By creating Service Catalog products, the consulting company can package their solutions and tools for easy reuse by customers while maintaining central control over configuration and access. This provides a standardized and automated solution with the least operational overhead for managing and deploying solutions across different customers.

Option A (CloudFormation): CloudFormation templates are useful but don't provide the same level of management and user-friendly self-service capabilities as Service Catalog.

Option C (Systems Manager): Systems Manager is more focused on managing infrastructure and doesn't offer the same self-service capabilities.

Option D (AWS Config): AWS Config is used for tracking resource configurations, not for deploying solutions.

AWS Reference:

AWS Service Catalog
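
For illustration, a minimal boto3 sketch of how the consulting company might register one of its CloudFormation-based solutions as a Service Catalog product and group it into a shareable portfolio. The template URL, product name, and portfolio details are placeholders, not values from the question.

    import boto3

    # Assumed placeholder values; replace with the company's own details.
    TEMPLATE_URL = "https://example-bucket.s3.amazonaws.com/tools/data-analysis.yaml"

    servicecatalog = boto3.client("servicecatalog")

    # Register the CloudFormation template as a Service Catalog product.
    product = servicecatalog.create_product(
        Name="Data Analysis Toolkit",
        Owner="Consulting Company",
        Description="Self-service tooling for gathering and analyzing data",
        ProductType="CLOUD_FORMATION_TEMPLATE",
        ProvisioningArtifactParameters={
            "Name": "v1.0",
            "Info": {"LoadTemplateFromURL": TEMPLATE_URL},
            "Type": "CLOUD_FORMATION_TEMPLATE",
        },
    )

    # Group the product into a portfolio that can be shared with customer accounts.
    portfolio = servicecatalog.create_portfolio(
        DisplayName="Customer Self-Service Tools",
        ProviderName="Consulting Company",
    )

    servicecatalog.associate_product_with_portfolio(
        ProductId=product["ProductViewDetail"]["ProductViewSummary"]["ProductId"],
        PortfolioId=portfolio["PortfolioDetail"]["Id"],
    )

Customers would then launch the product through the Service Catalog console or API, while the company keeps central control over the underlying template versions.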

A company stores customer data in a multitenant Amazon S3 bucket. Each customer's data is stored in a prefix that is unique to the customer. The company needs to migrate data for specific customers to a new, dedicated S3 bucket that is in the same AWS Region as the source bucket. The company must preserve object metadata such as creation date and version IDs.

After the migration is finished, the company must delete the source data for the migrated customers from the original multitenant S3 bucket.

Which combination of solutions will meet these requirements with the LEAST overhead? (Select THREE.)

A. Create a new S3 bucket as a destination bucket. Enable versioning on the new bucket.

B. Use S3 batch operations to copy objects from the specified prefixes to the destination bucket.

C. Use the S3 CopyObject API, and create a script to copy data to the destination S3 bucket.

D. Configure S3 Same-Region Replication (SRR) to replicate existing data from the specified prefixes in the source bucket to the destination bucket.

E. Configure AWS DataSync to migrate data from the specified prefixes in the source bucket to the destination bucket.

F. Use an S3 Lifecycle policy to delete objects from the source bucket after the data is migrated to the destination bucket.

Suggested answer: A, B, F

Explanation:

The combination of these solutions provides an efficient and automated way to migrate data while preserving metadata and ensuring cleanup:

Create a new S3 bucket with versioning enabled (Option A) to preserve object metadata like version IDs during migration.

Use S3 batch operations (Option B) to efficiently copy data from specific prefixes in the source bucket to the destination bucket, ensuring minimal overhead.

Use an S3 Lifecycle policy (Option F) to automatically delete the data from the source bucket after it has been migrated, reducing manual intervention.

Option C (CopyObject API): This approach would require more manual scripting and effort.

Option D (Same-Region Replication): SRR is designed for ongoing replication, not for one-time migrations.

Option E (DataSync): DataSync adds more complexity than necessary for this task.

AWS Reference:

S3 Batch Operations

S3 Lifecycle Policies
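
As a rough sketch of options A, B, and F with boto3: enable versioning on the new bucket, submit an S3 Batch Operations copy job driven by a CSV manifest of the customer's objects, and add a lifecycle rule that expires the migrated prefix in the source bucket. The account ID, bucket names, role ARN, manifest location, and ETag are placeholders.

    import boto3

    ACCOUNT_ID = "111122223333"                     # placeholder
    SOURCE_BUCKET = "multitenant-bucket"            # placeholder
    DEST_BUCKET = "customer-a-dedicated-bucket"     # placeholder
    CUSTOMER_PREFIX = "customer-a/"                 # placeholder

    s3 = boto3.client("s3")
    s3control = boto3.client("s3control")

    # Option A: versioning on the destination bucket so version IDs are preserved.
    s3.put_bucket_versioning(
        Bucket=DEST_BUCKET,
        VersioningConfiguration={"Status": "Enabled"},
    )

    # Option B: S3 Batch Operations copy job. The CSV manifest lists the
    # objects under the customer's prefix (one "bucket,key" entry per line).
    s3control.create_job(
        AccountId=ACCOUNT_ID,
        ConfirmationRequired=False,
        Priority=10,
        RoleArn=f"arn:aws:iam::{ACCOUNT_ID}:role/BatchOperationsRole",  # placeholder
        Operation={"S3PutObjectCopy": {"TargetResource": f"arn:aws:s3:::{DEST_BUCKET}"}},
        Manifest={
            "Spec": {"Format": "S3BatchOperations_CSV_20180820", "Fields": ["Bucket", "Key"]},
            "Location": {
                "ObjectArn": f"arn:aws:s3:::{SOURCE_BUCKET}/manifests/customer-a.csv",
                "ETag": "replace-with-manifest-etag",  # placeholder
            },
        },
        Report={
            "Enabled": True,
            "Bucket": f"arn:aws:s3:::{SOURCE_BUCKET}",
            "Prefix": "batch-reports",
            "Format": "Report_CSV_20180820",
            "ReportScope": "AllTasks",
        },
    )

    # Option F: lifecycle rule that expires the migrated prefix in the source bucket.
    s3.put_bucket_lifecycle_configuration(
        Bucket=SOURCE_BUCKET,
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "expire-migrated-customer-a",
                    "Status": "Enabled",
                    "Filter": {"Prefix": CUSTOMER_PREFIX},
                    "Expiration": {"Days": 1},
                    "NoncurrentVersionExpiration": {"NoncurrentDays": 1},
                }
            ]
        },
    )

In practice the lifecycle rule would be added only after the batch job's completion report confirms that all objects were copied successfully.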

A media company is using video conversion tools that run on Amazon EC2 instances. The video conversion tools run on a combination of Windows EC2 instances and Linux EC2 instances. Each video file is tens of gigabytes in size. The video conversion tools must process the video files in the shortest possible amount of time. The company needs a single, centralized file storage solution that can be mounted on all the EC2 instances that host the video conversion tools.

Which solution will meet these requirements?

A. Deploy Amazon FSx for Windows File Server with hard disk drive (HDD) storage.

B. Deploy Amazon FSx for Windows File Server with solid state drive (SSD) storage.

C. Deploy Amazon Elastic File System (Amazon EFS) with Max I/O performance mode.

D. Deploy Amazon Elastic File System (Amazon EFS) with General Purpose performance mode.

Suggested answer: C

Explanation:

Amazon EFS with Max I/O performance mode is designed for workloads that require high levels of parallelism, such as video processing across multiple EC2 instances. EFS provides shared file storage that can be mounted on both Windows and Linux EC2 instances, and the Max I/O mode ensures the best performance for handling large files and concurrent access across multiple instances.

Options A and B (FSx for Windows File Server): FSx for Windows File Server is optimized for Windows workloads and would not be ideal for Linux instances or high-throughput, parallel workloads.

Option D (EFS General Purpose mode): General Purpose mode offers lower latency but does not scale to the levels of aggregate throughput and IOPS that large, highly concurrent workloads require.

AWS Reference:

Amazon EFS Performance Modes
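
For reference, a minimal boto3 sketch of creating an EFS file system in Max I/O performance mode (option C). The creation token, subnet ID, and security group ID are placeholders; each EC2 instance would then mount the file system through a mount target in its Availability Zone.

    import boto3

    efs = boto3.client("efs")

    # Create the shared file system in Max I/O performance mode.
    file_system = efs.create_file_system(
        CreationToken="video-conversion-shared-storage",  # placeholder token
        PerformanceMode="maxIO",
        Encrypted=True,
    )

    # In practice, wait until the file system state is "available" before this step.
    # One mount target per Availability Zone that hosts conversion instances.
    efs.create_mount_target(
        FileSystemId=file_system["FileSystemId"],
        SubnetId="subnet-0123456789abcdef0",          # placeholder
        SecurityGroups=["sg-0123456789abcdef0"],      # placeholder
    )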

A company has customers located across the world. The company wants to use automation to secure its systems and network infrastructure. The company's security team must be able to track and audit all incremental changes to the infrastructure.

Which solution will meet these requirements?

A. Use AWS Organizations to set up the infrastructure. Use AWS Config to track changes.

B. Use AWS CloudFormation to set up the infrastructure. Use AWS Config to track changes.

C. Use AWS Organizations to set up the infrastructure. Use AWS Service Catalog to track changes.

D. Use AWS CloudFormation to set up the infrastructure. Use AWS Service Catalog to track changes.

Suggested answer: B

Explanation:

AWS CloudFormation allows for the automated, repeatable setup of infrastructure, reducing human error and ensuring consistency. AWS Config provides the ability to track changes in the infrastructure, ensuring that all changes are logged and auditable, which satisfies the requirement for tracking incremental changes.

Options A and C (AWS Organizations): AWS Organizations manages multiple accounts, but it is not designed for infrastructure setup or change tracking.

Option D (Service Catalog): Service Catalog is used for deploying products, not for setting up infrastructure or tracking changes.

AWS Reference:

AWS Config

AWS CloudFormation
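
As a hedged sketch of the change-tracking half of option B, the snippet below turns on an AWS Config recorder with boto3 so that incremental changes to the CloudFormation-provisioned resources are recorded and auditable. The role ARN and delivery bucket are placeholders.

    import boto3

    config = boto3.client("config")

    # Record configuration changes for all supported resource types.
    config.put_configuration_recorder(
        ConfigurationRecorder={
            "name": "default",
            "roleARN": "arn:aws:iam::111122223333:role/ConfigRecorderRole",  # placeholder
            "recordingGroup": {
                "allSupported": True,
                "includeGlobalResourceTypes": True,
            },
        }
    )

    # Deliver configuration snapshots and change history to an S3 bucket.
    config.put_delivery_channel(
        DeliveryChannel={
            "name": "default",
            "s3BucketName": "example-config-history-bucket",  # placeholder
        }
    )

    config.start_configuration_recorder(ConfigurationRecorderName="default")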

A company has a static website that is hosted on Amazon CloudFront in front of Amazon S3. The static website uses a database backend. The company notices that the website does not reflect updates that have been made in the website's Git repository. The company checks the continuous integration and continuous delivery (CI/CD) pipeline between the Git repository and Amazon S3. The company verifies that the webhooks are configured properly and that the CI/CD pipeline is sending messages that indicate successful deployments.

A solutions architect needs to implement a solution that displays the updates on the website.

Which solution will meet these requirements?

A. Add an Application Load Balancer.

B. Add Amazon ElastiCache for Redis or Memcached to the database layer of the web application.

C. Invalidate the CloudFront cache.

D. Use AWS Certificate Manager (ACM) to validate the website's SSL certificate.

Suggested answer: C

Explanation:

Amazon CloudFront is a content delivery network (CDN) service that caches copies of your content at edge locations around the world. This helps improve performance by serving content from the edge nearest to the user. However, when the content in Amazon S3 (your origin) is updated, those updates may not immediately reflect on the website if they are cached at the CloudFront edge locations.

The issue described in the question suggests that the CI/CD pipeline is functioning correctly, and updates are being deployed to S3. However, since CloudFront caches this content, the edge locations may still be serving outdated content, causing the updates to not be reflected on the website.

To resolve this issue, you need to invalidate the CloudFront cache. By invalidating the cache, CloudFront will remove the outdated content and retrieve the latest version from the S3 origin.

AWS documentation on this process:

CloudFront cache invalidation allows you to clear items from the cache so that CloudFront retrieves the latest version from the origin. You can create invalidation requests via the AWS Management Console, AWS CLI, or SDKs.

AWS CloudFront Documentation

Why the other options are incorrect:

A. Add an Application Load Balancer: ALBs are used to distribute incoming application traffic and are not relevant to caching or serving content from CloudFront.

B. Add Amazon ElastiCache for Redis or Memcached: This would help in caching database queries but has no relation to static website content hosted on CloudFront and S3.

D. Use AWS Certificate Manager (ACM): ACM is used for managing SSL/TLS certificates and is unrelated to the issue of content not being updated on CloudFront.
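
For illustration, a minimal boto3 sketch of option C: invalidating the CloudFront cache after a deployment so the next request pulls the updated objects from the S3 origin. The distribution ID is a placeholder, and the wildcard path invalidates every cached object.

    import time
    import boto3

    cloudfront = boto3.client("cloudfront")

    # Invalidate all cached paths; a narrower path list reduces invalidation cost.
    cloudfront.create_invalidation(
        DistributionId="E1ABCDEFGHIJKL",  # placeholder distribution ID
        InvalidationBatch={
            "Paths": {"Quantity": 1, "Items": ["/*"]},
            # CallerReference must be unique per invalidation request.
            "CallerReference": f"deploy-{int(time.time())}",
        },
    )

A common refinement is to run this call as the final step of the CI/CD pipeline so that every successful deployment automatically clears the stale cache.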

A company is migrating applications to AWS from an on-premises Microsoft Active Directory environment that the company manages. The company deploys the applications in multiple AWS accounts. The company uses AWS Organizations to manage the accounts centrally.

The company's security team needs a single sign-on solution across all the company's AWS accounts. The company must continue to manage users and groups that are in the on-premises Active Directory.

Which solution will meet these requirements?

A. Create an Enterprise Edition Active Directory in AWS Directory Service for Microsoft Active Directory. Configure the Active Directory to be the identity source for AWS IAM Identity Center.

B. Enable AWS IAM Identity Center. Configure a two-way forest trust relationship to connect the company's self-managed Active Directory with IAM Identity Center by using AWS Directory Service for Microsoft Active Directory.

C. Use AWS Directory Service and create a two-way trust relationship with the company's self-managed Active Directory.

D. Deploy an identity provider (IdP) on Amazon EC2. Link the IdP as an identity source within AWS IAM Identity Center.

Suggested answer: B

Explanation:

The company is looking for a solution that provides single sign-on (SSO) across multiple AWS accounts while continuing to manage users and groups in their on-premises Active Directory (AD). AWS IAM Identity Center (formerly AWS SSO) is the recommended solution for this type of requirement.

AWS IAM Identity Center provides a centralized identity management solution, enabling single sign-on across multiple AWS accounts and other cloud applications. It can integrate with on-premises Active Directory to leverage existing users and groups.

By configuring a two-way forest trust relationship between AWS Directory Service for Microsoft Active Directory and the company's on-premises Active Directory, users can be authenticated by their on-premises AD and still access AWS resources through IAM Identity Center. This solution allows centralized management of AWS accounts within AWS Organizations.

The two-way trust allows mutual access between the on-premises AD and the AWS Directory Service. This means that users and groups in the on-premises AD can be used for authentication in AWS IAM Identity Center while maintaining the existing identity management system.

AWS Reference:

AWS IAM Identity Center Documentation

AWS Directory Service for Microsoft Active Directory Trust Relationships

AWS Directory Service Integration with IAM Identity Center

Why the other options are incorrect:

A. Create an Enterprise Edition Active Directory in AWS Directory Service: This would require setting up a new directory and managing it in AWS, which adds unnecessary overhead. The requirement is to continue using the existing on-premises AD, making this option unsuitable.

C. Use AWS Directory Service and create a two-way trust relationship: While this approach establishes a trust between on-premises AD and AWS Directory Service, it does not address the single sign-on (SSO) requirements across multiple AWS accounts through IAM Identity Center.

D. Deploy an identity provider (IdP) on Amazon EC2: This is more complex than necessary and introduces more management overhead. AWS IAM Identity Center natively supports integration with on-premises Active Directory without requiring a custom IdP.
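
A hedged sketch of the trust-relationship part of option B with boto3: creating a two-way forest trust from an AWS Managed Microsoft AD directory to the on-premises domain, which IAM Identity Center can then use as its identity source. The directory ID, domain name, trust password, and DNS addresses are placeholders.

    import boto3

    ds = boto3.client("ds")

    # Two-way forest trust between AWS Managed Microsoft AD and the on-premises AD.
    ds.create_trust(
        DirectoryId="d-1234567890",                  # placeholder AWS Managed AD ID
        RemoteDomainName="corp.example.com",         # placeholder on-premises domain
        TrustPassword="use-a-strong-shared-secret",  # placeholder shared secret
        TrustDirection="Two-Way",
        TrustType="Forest",
        ConditionalForwarderIpAddrs=["10.0.0.10", "10.0.0.11"],  # placeholder on-premises DNS
    )

The same trust password must be configured on the on-premises domain, and IAM Identity Center is then pointed at the AWS Managed Microsoft AD directory as its identity source.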

A company is designing a microservice-based architecture for a new application on AWS. Each microservice will run on its own set of Amazon EC2 instances. Each microservice will need to interact with multiple AWS services such as Amazon S3 and Amazon Simple Queue Service (Amazon SQS).

The company wants to manage permissions for each EC2 instance based on the principle of least privilege.

Which solution will meet this requirement?

A. Assign an IAM user to each microservice. Use access keys stored within the application code to authenticate AWS service requests.

B. Create a single IAM role that has permission to access all AWS services. Associate the IAM role with all EC2 instances that run the microservices.

C. Use AWS Organizations to create a separate account for each microservice. Manage permissions at the account level.

D. Create individual IAM roles based on the specific needs of each microservice. Associate the IAM roles with the appropriate EC2 instances.

Suggested answer: D

Explanation:

When designing a microservice architecture where each microservice interacts with different AWS services, it's essential to follow the principle of least privilege. This means granting each microservice only the permissions it needs to perform its tasks, reducing the risk of unauthorized access or accidental actions.

The recommended approach is to create individual IAM roles with policies that grant each microservice the specific permissions it requires. Then, these roles should be associated with the EC2 instances that run the corresponding microservice. By doing so, each EC2 instance will assume its specific IAM role, and permissions will be automatically managed by AWS.

IAM roles provide temporary credentials via the instance metadata service, eliminating the need to hard-code credentials in your application code, which enhances security.

AWS Reference:

IAM Roles for Amazon EC2 explains how EC2 instances can use IAM roles to securely access AWS services without managing long-term credentials.

Best Practices for IAM includes recommendations for implementing the least privilege principle and using IAM roles effectively.

Why the other options are incorrect:

A. Assign an IAM user to each microservice: This requires managing long-term credentials (access keys), which should be avoided. Storing keys in application code is insecure and creates a maintenance burden.

B. Create a single IAM role: This violates the principle of least privilege, as a single role with broad permissions across all services is less secure.

C. Use AWS Organizations: This approach adds unnecessary complexity. Managing permissions at the account level for each microservice is excessive for this use case and doesn't adhere to the principle of least privilege.
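
As an illustration of option D, the sketch below uses boto3 to create one narrowly scoped role for a hypothetical microservice that reads and writes a specific S3 bucket and SQS queue, then exposes the role to EC2 through an instance profile. All names and ARNs are placeholders.

    import json
    import boto3

    iam = boto3.client("iam")

    ROLE_NAME = "orders-service-role"  # placeholder per-microservice role name

    # Trust policy: only EC2 may assume this role.
    assume_role_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }],
    }

    iam.create_role(
        RoleName=ROLE_NAME,
        AssumeRolePolicyDocument=json.dumps(assume_role_policy),
    )

    # Least-privilege inline policy scoped to this microservice's resources.
    least_privilege_policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:PutObject"],
                "Resource": "arn:aws:s3:::orders-service-bucket/*",  # placeholder
            },
            {
                "Effect": "Allow",
                "Action": ["sqs:SendMessage", "sqs:ReceiveMessage", "sqs:DeleteMessage"],
                "Resource": "arn:aws:sqs:us-east-1:111122223333:orders-queue",  # placeholder
            },
        ],
    }

    iam.put_role_policy(
        RoleName=ROLE_NAME,
        PolicyName="orders-service-least-privilege",
        PolicyDocument=json.dumps(least_privilege_policy),
    )

    # Instance profile that is attached to the microservice's EC2 instances.
    iam.create_instance_profile(InstanceProfileName=ROLE_NAME)
    iam.add_role_to_instance_profile(InstanceProfileName=ROLE_NAME, RoleName=ROLE_NAME)

Each microservice gets its own copy of this pattern with only the permissions it needs, and the instances receive temporary credentials automatically through the instance metadata service.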

A media company hosts its video processing workload on AWS. The workload uses Amazon EC2 instances in an Auto Scaling group to handle varying levels of demand. The workload stores the original videos and the processed videos in an Amazon S3 bucket.

The company wants to ensure that the video processing workload is scalable. The company wants to prevent failed processing attempts because of resource constraints. The architecture must be able to handle sudden spikes in video uploads without impacting the processing capability.

Which solution will meet these requirements with the LEAST overhead?

A. Migrate the workload from Amazon EC2 instances to AWS Lambda functions. Configure an Amazon S3 event notification to invoke the Lambda functions when a new video is uploaded. Configure the Lambda functions to process videos directly and to save processed videos back to the S3 bucket.

B. Migrate the workload from Amazon EC2 instances to AWS Lambda functions. Use Amazon S3 to invoke an Amazon Simple Notification Service (Amazon SNS) topic when a new video is uploaded. Subscribe the Lambda functions to the SNS topic. Configure the Lambda functions to process the videos asynchronously and to save processed videos back to the S3 bucket.

C. Configure an Amazon S3 event notification to send a message to an Amazon Simple Queue Service (Amazon SQS) queue when a new video is uploaded. Configure the existing Auto Scaling group to poll the SQS queue, process the videos, and save processed videos back to the S3 bucket.

D. Configure an Amazon S3 upload trigger to invoke an AWS Step Functions state machine when a new video is uploaded. Configure the state machine to orchestrate the video processing workflow by placing a job message in the Amazon SQS queue. Configure the job message to invoke the EC2 instances to process the videos. Save processed videos back to the S3 bucket.

Suggested answer: C

Explanation:

This solution addresses the scalability needs of the workload while preventing failed processing attempts due to resource constraints.

Amazon S3 event notifications can be used to trigger a message to an SQS queue whenever a new video is uploaded.

The existing Auto Scaling group of EC2 instances can poll the SQS queue, ensuring that the EC2 instances only process videos when there is a job in the queue.

SQS decouples the video upload and processing steps, allowing the system to handle sudden spikes in video uploads without overloading EC2 instances.

The use of Auto Scaling ensures that the EC2 instances can scale in or out based on the demand, maintaining cost efficiency while avoiding processing failures due to insufficient resources.

AWS Reference:

S3 Event Notifications details how to configure notifications for S3 events.

Amazon SQS is a fully managed message queuing service that decouples components of the system.

Auto Scaling EC2 explains how to manage automatic scaling of EC2 instances based on demand.

Why the other options are incorrect:

A. AWS Lambda functions: While Lambda can handle some workloads, video processing is often resource-intensive and long-running, making EC2 a more suitable solution.

B. Using SNS with Lambda: Similar to A, Lambda is not ideal for large-scale video processing due to its time and memory limitations.

D. AWS Step Functions: While a valid orchestration solution, this introduces more complexity and overhead compared to the simpler SQS-based solution.
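
A rough sketch of option C with boto3: point S3 event notifications at an SQS queue, then have each EC2 instance in the Auto Scaling group run a long-polling worker loop. The queue ARN/URL, bucket name, and process_video stub are placeholders, and the queue policy that allows S3 to send messages is assumed to exist.

    import boto3

    s3 = boto3.client("s3")
    sqs = boto3.client("sqs")

    QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/111122223333/video-jobs"  # placeholder
    QUEUE_ARN = "arn:aws:sqs:us-east-1:111122223333:video-jobs"                # placeholder

    # Send an SQS message for every new upload in the source bucket.
    s3.put_bucket_notification_configuration(
        Bucket="video-uploads-bucket",  # placeholder
        NotificationConfiguration={
            "QueueConfigurations": [
                {"QueueArn": QUEUE_ARN, "Events": ["s3:ObjectCreated:*"]}
            ]
        },
    )

    def process_video(message_body):
        # Placeholder for the actual video conversion logic.
        print("processing", message_body)

    def worker_loop():
        """Long-poll the queue on each EC2 instance and process one video at a time."""
        while True:
            response = sqs.receive_message(
                QueueUrl=QUEUE_URL,
                MaxNumberOfMessages=1,
                WaitTimeSeconds=20,  # long polling avoids empty-receive churn
            )
            for message in response.get("Messages", []):
                process_video(message["Body"])
                # Delete only after successful processing so failures are retried.
                sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=message["ReceiptHandle"])

The Auto Scaling group can additionally scale on the ApproximateNumberOfMessagesVisible metric of the queue so that capacity grows with the upload backlog.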

A company uses a set of Amazon EC2 instances to host a website. The website uses an Amazon S3 bucket to store images and media files.

The company wants to automate website infrastructure creation to deploy the website to multiple AWS Regions. The company also wants to provide the EC2 instances access to the S3 bucket so the instances can store and access data by using AWS Identity and Access Management (IAM).

Which solution will meet these requirements MOST securely?

A. Create an AWS CloudFormation template for the web server EC2 instances. Save an IAM access key in the UserData section of the AWS::EC2::Instance entity in the CloudFormation template.

B. Create a file that contains an IAM secret access key and access key ID. Store the file in a new S3 bucket. Create an AWS CloudFormation template. In the template, create a parameter to specify the location of the S3 object that contains the access key and access key ID.

C. Create an IAM role and an IAM access policy that allows the web server EC2 instances to access the S3 bucket. Create an AWS CloudFormation template for the web server EC2 instances that contains an IAM instance profile entity that references the IAM role and the IAM access policy.

D. Create a script that retrieves an IAM secret access key and access key ID from IAM and stores them on the web server EC2 instances. Include the script in the UserData section of the AWS::EC2::Instance entity in an AWS CloudFormation template.

Suggested answer: C

Explanation:

The most secure solution for allowing EC2 instances to access an S3 bucket is by using IAM roles. An IAM role can be created with an access policy that grants the required permissions (e.g., to read and write to the S3 bucket). The IAM role is then associated with the EC2 instances through an IAM instance profile.

By associating the role with the instances, the EC2 instances can securely assume the role and receive temporary credentials via the instance metadata service. This avoids the need to store credentials (such as access keys) on the instances or within the application, enhancing security and reducing the risk of credentials being exposed.

AWS CloudFormation can be used to automate the creation of the entire infrastructure, including EC2 instances, IAM roles, and associated policies.

AWS Reference:

IAM Roles for EC2 Instances outlines the use of IAM roles for secure access to AWS services.

AWS CloudFormation User Guide details how to create and manage resources using CloudFormation templates.

Why the other options are incorrect:

A. Save IAM access key in UserData: This is insecure because it involves storing long-term credentials in the instance user data, which can be exposed.

B. Store access keys in S3: This is also insecure, as it involves managing and distributing long-term credentials, which should be avoided.

D. Retrieve access keys via a script: This approach is unnecessarily complex and less secure than using IAM roles, which provide temporary credentials automatically.
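
A hedged sketch of option C: a minimal CloudFormation template, expressed here as a Python dictionary and deployed with boto3, that defines an IAM role scoped to the S3 bucket, wraps it in an instance profile, and attaches it to the web server instance. The AMI ID, bucket name, and stack name are placeholders.

    import json
    import boto3

    BUCKET = "example-website-media-bucket"  # placeholder

    template = {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "WebServerRole": {
                "Type": "AWS::IAM::Role",
                "Properties": {
                    "AssumeRolePolicyDocument": {
                        "Version": "2012-10-17",
                        "Statement": [{
                            "Effect": "Allow",
                            "Principal": {"Service": "ec2.amazonaws.com"},
                            "Action": "sts:AssumeRole",
                        }],
                    },
                    "Policies": [{
                        "PolicyName": "S3MediaAccess",
                        "PolicyDocument": {
                            "Version": "2012-10-17",
                            "Statement": [{
                                "Effect": "Allow",
                                "Action": ["s3:GetObject", "s3:PutObject"],
                                "Resource": f"arn:aws:s3:::{BUCKET}/*",
                            }],
                        },
                    }],
                },
            },
            "WebServerInstanceProfile": {
                "Type": "AWS::IAM::InstanceProfile",
                "Properties": {"Roles": [{"Ref": "WebServerRole"}]},
            },
            "WebServerInstance": {
                "Type": "AWS::EC2::Instance",
                "Properties": {
                    "ImageId": "ami-0123456789abcdef0",  # placeholder AMI
                    "InstanceType": "t3.micro",
                    "IamInstanceProfile": {"Ref": "WebServerInstanceProfile"},
                },
            },
        },
    }

    cloudformation = boto3.client("cloudformation")
    cloudformation.create_stack(
        StackName="website-infrastructure",  # placeholder
        TemplateBody=json.dumps(template),
        Capabilities=["CAPABILITY_IAM"],     # required because the stack creates IAM resources
    )

Because the same template can be deployed to any Region, this also satisfies the requirement to automate infrastructure creation across multiple AWS Regions without embedding credentials anywhere.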

A company creates operations data and stores the data in an Amazon S3 bucket for the company's annual audit. An external consultant needs to access an annual report that is stored in the S3 bucket. The external consultant needs to access the report for 7 days.

The company must implement a solution to allow the external consultant access to only the report.

Which solution will meet these requirements with the MOST operational efficiency?

A. Create a new S3 bucket that is configured to host a public static website. Migrate the operations data to the new S3 bucket. Share the S3 website URL with the external consultant.

B. Enable public access to the S3 bucket for 7 days. Remove access to the S3 bucket when the external consultant completes the audit.

C. Create a new IAM user that has access to the report in the S3 bucket. Provide the access keys to the external consultant. Revoke the access keys after 7 days.

D. Generate a presigned URL that has the required access to the location of the report in the S3 bucket. Share the presigned URL with the external consultant.

Suggested answer: D

Explanation:

A presigned URL allows temporary access to a specific object in an S3 bucket without needing to make the bucket public or creating and managing additional IAM users. The URL is time-limited, and permissions are granted only to the specific object (in this case, the annual report), making it a highly secure and operationally efficient solution.

With a presigned URL, the consultant can access the report for the specified duration (7 days), after which the URL will expire automatically, removing the need for manual intervention to revoke access.

AWS Reference:

Amazon S3 Presigned URLs explain how to generate a presigned URL to grant temporary access to S3 objects.

Best Practices for S3 Security emphasize using presigned URLs for sharing temporary access to S3 objects securely.

Why the other options are incorrect:

A. Public static website: This approach involves making the S3 bucket publicly accessible, which is unnecessary and insecure for sensitive data.

B. Enable public access: Granting public access to the entire bucket, even temporarily, is a security risk and violates best practices.
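
For illustration, a minimal boto3 sketch of option D. A presigned URL is signed with the caller's credentials and expires automatically; with Signature Version 4 the maximum lifetime is 7 days (604,800 seconds), which matches the consultant's access window. The bucket and key names are placeholders.

    import boto3

    s3 = boto3.client("s3")

    SEVEN_DAYS_IN_SECONDS = 7 * 24 * 60 * 60  # 604800, the SigV4 maximum

    # Grant read access to only the report object, expiring automatically.
    url = s3.generate_presigned_url(
        ClientMethod="get_object",
        Params={
            "Bucket": "operations-data-bucket",   # placeholder
            "Key": "reports/annual-report.pdf",   # placeholder
        },
        ExpiresIn=SEVEN_DAYS_IN_SECONDS,
    )

    print(url)  # share this URL with the external consultant

Note that a URL signed with temporary credentials (for example, an assumed role) also stops working when those credentials expire, so long-lived presigned URLs are typically generated with long-term IAM credentials.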
