Amazon SAA-C03 Practice Test - Questions Answers, Page 65

A company wants to analyze and troubleshoot Access Denied errors and Unauthorized errors that are related to IAM permissions. The company has AWS CloudTrail turned on. Which solution will meet these requirements with the LEAST effort?

A. Use AWS Glue and write custom scripts to query CloudTrail logs for the errors.
B. Use AWS Batch and write custom scripts to query CloudTrail logs for the errors.
C. Search CloudTrail logs with Amazon Athena queries to identify the errors.
D. Search CloudTrail logs with Amazon QuickSight. Create a dashboard to identify the errors.
Suggested answer: C

Explanation:

This solution meets the following requirements:

It is the least effort, as it does not require any additional AWS services, custom scripts, or data processing steps. Amazon Athena is a serverless interactive query service that allows you to analyze data in Amazon S3 using standard SQL. You can use Athena to query CloudTrail logs directly from the S3 bucket where they are stored, without any data loading or transformation. You can also use the AWS Management Console, the AWS CLI, or the Athena API to run and manage your queries.

It is effective, as it allows you to filter, aggregate, and join CloudTrail log data using SQL syntax. You can use various SQL functions and operators to specify the criteria for identifying Access Denied and Unauthorized errors, such as the error code, the user identity, the event source, the event name, the event time, and the resource ARN. You can also use subqueries, views, and common table expressions to simplify and optimize your queries.

It is flexible, as it allows you to customize and save your queries for future use. You can also export the query results to other formats, such as CSV or JSON, or integrate them with other AWS services, such as Amazon QuickSight, for further analysis and visualization.
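For example, a minimal boto3 sketch of such a query might look like the following, assuming a CloudTrail table (here called cloudtrail_logs) has already been defined in the Glue Data Catalog; the database name, results bucket, and Region are placeholders.

```python
import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Placeholder query: find recent Access Denied and unauthorized-operation events.
QUERY = """
SELECT eventtime, useridentity.arn, eventsource, eventname, errorcode, errormessage
FROM cloudtrail_logs
WHERE errorcode IN ('AccessDenied', 'Client.UnauthorizedOperation')
ORDER BY eventtime DESC
LIMIT 100
"""

response = athena.start_query_execution(
    QueryString=QUERY,
    QueryExecutionContext={"Database": "default"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results-bucket/cloudtrail/"},
)
print("Started Athena query:", response["QueryExecutionId"])
```

You could then poll get_query_execution until the query completes and read the results from the output location or the get_query_results API.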

Querying AWS CloudTrail Logs - Amazon Athena

Analyzing Data in S3 using Amazon Athena | AWS Big Data Blog

Troubleshoot IAM permission access denied or unauthorized errors | AWS re:Post

A company is building an Amazon Elastic Kubernetes Service (Amazon EKS) cluster for its workloads. All secrets that are stored in Amazon EKS must be encrypted in the Kubernetes etcd key-value store.

Which solution will meet these requirements?

A. Create a new AWS Key Management Service (AWS KMS) key. Use AWS Secrets Manager to manage, rotate, and store all secrets in Amazon EKS.
B. Create a new AWS Key Management Service (AWS KMS) key. Enable Amazon EKS KMS secrets encryption on the Amazon EKS cluster.
C. Create the Amazon EKS cluster with default options. Use the Amazon Elastic Block Store (Amazon EBS) Container Storage Interface (CSI) driver as an add-on.
D. Create a new AWS Key Management Service (AWS KMS) key with the aws/ebs alias. Enable default Amazon Elastic Block Store (Amazon EBS) volume encryption for the account.
Suggested answer: B

Explanation:

This option is the most secure and simple way to encrypt the secrets that are stored in Amazon EKS. AWS Key Management Service (AWS KMS) is a service that allows you to create and manage encryption keys that can be used to encrypt your data. Amazon EKS KMS secrets encryption is a feature that enables you to use a KMS key to encrypt the secrets that are stored in the Kubernetes etcd key-value store. This provides an additional layer of protection for your sensitive data, such as passwords, tokens, and keys. You can create a new KMS key or use an existing one, and then enable the Amazon EKS KMS secrets encryption on the Amazon EKS cluster. You can also use IAM policies to control who can access or use the KMS key.
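As a minimal sketch of this approach, the following boto3 calls create a new KMS key and enable secrets encryption on an existing cluster; the cluster name is a placeholder.

```python
import boto3

eks = boto3.client("eks")
kms = boto3.client("kms")

# Create a new customer managed KMS key for envelope encryption of secrets.
key_arn = kms.create_key(Description="EKS secrets encryption key")["KeyMetadata"]["Arn"]

# Enable Kubernetes secrets encryption on an existing cluster.
# "my-eks-cluster" is a placeholder cluster name.
eks.associate_encryption_config(
    clusterName="my-eks-cluster",
    encryptionConfig=[{"resources": ["secrets"], "provider": {"keyArn": key_arn}}],
)
```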

Option A is not correct because using AWS Secrets Manager to manage, rotate, and store all secrets in Amazon EKS is not necessary or efficient. AWS Secrets Manager is a service that helps you securely store, retrieve, and rotate your secrets, such as database credentials, API keys, and passwords. You can use it to manage secrets that are used by your applications or services outside of Amazon EKS, but it is not designed to encrypt the secrets that are stored in the Kubernetes etcd key-value store. Moreover, using AWS Secrets Manager would incur additional costs and complexity, and it would not leverage the native Kubernetes secrets management capabilities.

Option C is not correct because using the Amazon EBS Container Storage Interface (CSI) driver as an add-on does not encrypt the secrets that are stored in Amazon EKS. The Amazon EBS CSI driver is a plugin that allows you to use Amazon EBS volumes as persistent storage for your Kubernetes pods. It is useful for providing durable and scalable storage for your applications, but it does not affect the encryption of the secrets that are stored in the Kubernetes etcd key-value store. Moreover, using the Amazon EBS CSI driver would require additional configuration and resources, and it would not provide the same level of security as using a KMS key.

Option D is not correct because creating a new AWS KMS key with the alias aws/ebs and enabling default Amazon EBS volume encryption for the account does not encrypt the secrets that are stored in Amazon EKS. The alias aws/ebs is a reserved alias that is used by AWS to create a default KMS key for your account. This key is used to encrypt the Amazon EBS volumes that are created in your account, unless you specify a different KMS key. Enabling default Amazon EBS volume encryption for the account is a setting that ensures that all new Amazon EBS volumes are encrypted by default. However, these features do not affect the encryption of the secrets that are stored in the Kubernetes etcd key-value store. Moreover, using the default KMS key or the default encryption setting would not provide the same level of control and security as using a custom KMS key and enabling the Amazon EKS KMS secrets encryption feature.

Reference:

Encrypting secrets used in Amazon EKS

What Is AWS Key Management Service?

What Is AWS Secrets Manager?

Amazon EBS CSI driver

Encryption at rest

A company built an application with Docker containers and needs to run the application in the AWS Cloud. The company wants to use a managed service to host the application.

The solution must scale in and out appropriately according to demand on the individual container services. The solution also must not result in additional operational overhead or infrastructure to manage.

Which solutions will meet these requirements? (Select TWO)

A. Use Amazon Elastic Container Service (Amazon ECS) with AWS Fargate.
B. Use Amazon Elastic Kubernetes Service (Amazon EKS) with AWS Fargate.
C. Provision an Amazon API Gateway API. Connect the API to AWS Lambda to run the containers.
D. Use Amazon Elastic Container Service (Amazon ECS) with Amazon EC2 worker nodes.
E. Use Amazon Elastic Kubernetes Service (Amazon EKS) with Amazon EC2 worker nodes.
Suggested answer: A, B

Explanation:

These options are the best solutions because they allow the company to run the application with Docker containers in the AWS Cloud using a managed service that scales automatically and does not require any infrastructure to manage. By using AWS Fargate, the company can launch and run containers without having to provision, configure, or scale clusters of EC2 instances. Fargate allocates the right amount of compute resources for each container and scales them up or down as needed. By using Amazon ECS or Amazon EKS, the company can choose the container orchestration platform that suits its needs. Amazon ECS is a fully managed service that integrates with other AWS services and simplifies the deployment and management of containers. Amazon EKS is a managed service that runs Kubernetes on AWS and provides compatibility with existing Kubernetes tools and plugins.
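As a minimal boto3 sketch of option A, the following registers a Fargate-compatible task definition and creates a service with the FARGATE launch type; the cluster name, container image, subnet, and security group values are placeholders.

```python
import boto3

ecs = boto3.client("ecs")

# Register a minimal task definition for Fargate (names and image are placeholders).
task_def = ecs.register_task_definition(
    family="demo-app",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="256",
    memory="512",
    containerDefinitions=[{"name": "app", "image": "nginx:latest", "essential": True}],
)

# Create a service that runs the task on Fargate; no EC2 instances to manage.
ecs.create_service(
    cluster="demo-cluster",
    serviceName="demo-service",
    taskDefinition=task_def["taskDefinition"]["taskDefinitionArn"],
    desiredCount=2,
    launchType="FARGATE",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],
            "securityGroups": ["sg-0123456789abcdef0"],
            "assignPublicIp": "ENABLED",
        }
    },
)
```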

C) Provision an Amazon API Gateway API. Connect the API to AWS Lambda to run the containers. This option is not feasible because AWS Lambda does not support running Docker containers directly. Lambda functions are executed in a sandboxed environment that is isolated from other functions and resources. To run Docker containers on Lambda, the company would need to use a custom runtime or a wrapper library that emulates the Docker API, which can introduce additional complexity and overhead.

D) Use Amazon Elastic Container Service (Amazon ECS) with Amazon EC2 worker nodes. This option is not optimal because it requires the company to manage the EC2 instances that host the containers. The company would need to provision, configure, scale, patch, and monitor the EC2 instances, which can increase the operational overhead and infrastructure costs.

E) Use Amazon Elastic Kubernetes Service (Amazon EKS) with Amazon EC2 worker nodes. This option is not ideal because it requires the company to manage the EC2 instances that host the containers. The company would need to provision, configure, scale, patch, and monitor the EC2 instances, which can increase the operational overhead and infrastructure costs.

AWS Fargate - Amazon Web Services

Amazon Elastic Container Service - Amazon Web Services

Amazon Elastic Kubernetes Service - Amazon Web Services

AWS Lambda FAQs - Amazon Web Services

A company uses Amazon S3 as its data lake. The company has a new partner that must use SFTP to upload data files. A solutions architect needs to implement a highly available SFTP solution that minimizes operational overhead.

Which solution will meet these requirements?

A. Use AWS Transfer Family to configure an SFTP-enabled server with a publicly accessible endpoint. Choose the S3 data lake as the destination.
B. Use Amazon S3 File Gateway as an SFTP server. Expose the S3 File Gateway endpoint URL to the new partner.
C. Launch an Amazon EC2 instance in a private subnet in a VPC. Instruct the new partner to upload files to the EC2 instance by using a VPN. Run a cron job script on the EC2 instance to upload files to the S3 data lake.
D. Launch Amazon EC2 instances in a private subnet in a VPC. Place a Network Load Balancer (NLB) in front of the EC2 instances. Create an SFTP listener port for the NLB. Share the NLB hostname with the new partner. Run a cron job script on the EC2 instances to upload files to the S3 data lake.
Suggested answer: A

Explanation:

This option is the most cost-effective and simple way to enable SFTP access to the S3 data lake. AWS Transfer Family is a fully managed service that supports secure file transfers over SFTP, FTPS, and FTP protocols. You can create an SFTP-enabled server with a public endpoint and associate it with your S3 bucket. You can also use AWS Identity and Access Management (IAM) roles and policies to control access to your S3 data lake. The service scales automatically to handle any volume of file transfers and provides high availability and durability. You do not need to provision, manage, or patch any servers or load balancers.
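A minimal boto3 sketch of this setup might look like the following; the IAM role ARN, bucket path, user name, and SSH public key are placeholders.

```python
import boto3

transfer = boto3.client("transfer")

# Create a managed SFTP endpoint backed directly by Amazon S3.
server = transfer.create_server(
    Protocols=["SFTP"],
    Domain="S3",
    EndpointType="PUBLIC",
    IdentityProviderType="SERVICE_MANAGED",
)

# Map a partner user to a prefix in the data lake bucket.
# Role ARN, bucket name, and SSH key are placeholders.
transfer.create_user(
    ServerId=server["ServerId"],
    UserName="partner-upload",
    Role="arn:aws:iam::123456789012:role/TransferS3AccessRole",
    HomeDirectory="/my-data-lake-bucket/partner-uploads",
    SshPublicKeyBody="ssh-rsa AAAA... partner-key",
)
```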

Option B is not correct because Amazon S3 File Gateway is not an SFTP server. It is a hybrid cloud storage service that provides a local file system interface to S3. You can use it to store and retrieve files as objects in S3 using standard file protocols such as NFS and SMB. However, it does not support SFTP protocol, and it requires deploying a file gateway appliance on-premises or on EC2.

Option C is not cost-effective or scalable because it requires launching and managing an EC2 instance in a private subnet and setting up a VPN connection for the new partner. This would incur additional costs for the EC2 instance, the VPN connection, and the data transfer. It would also introduce complexity and security risks to the solution. Moreover, it would require running a cron job script on the EC2 instance to upload files to the S3 data lake, which is not efficient or reliable.

Option D is not cost-effective or scalable because it requires launching and managing multiple EC2 instances in a private subnet and placing an NLB in front of them. This would incur additional costs for the EC2 instances, the NLB, and the data transfer. It would also introduce complexity and security risks to the solution. Moreover, it would require running a cron job script on the EC2 instances to upload files to the S3 data lake, which is not efficient or reliable.

Reference:

What Is AWS Transfer Family?

What Is Amazon S3 File Gateway?

What Is Amazon EC2?

What Is Amazon Virtual Private Cloud?

What Is a Network Load Balancer?

A company has Amazon EC2 instances that run nightly batch jobs to process data. The EC2 instances run in an Auto Scaling group that uses On-Demand billing. If a job fails on one instance, another instance will reprocess the job. The batch jobs run between 12:00 AM and 6:00 AM local time every day.

Which solution will provide EC2 instances to meet these requirements MOST cost-effectively?

A. Purchase a 1-year Savings Plan for Amazon EC2 that covers the instance family of the Auto Scaling group that the batch job uses.
B. Purchase a 1-year Reserved Instance for the specific instance type and operating system of the instances in the Auto Scaling group that the batch job uses.
C. Create a new launch template for the Auto Scaling group. Set the instances to Spot Instances. Set a policy to scale out based on CPU usage.
D. Create a new launch template for the Auto Scaling group. Increase the instance size. Set a policy to scale out based on CPU usage.
Suggested answer: C

Explanation:

This option is the most cost-effective solution because it leverages the Spot Instances, which are unused EC2 instances that are available at up to 90% discount compared to On-Demand prices. Spot Instances can be interrupted by AWS when the demand for On-Demand instances increases, but since the batch jobs are fault-tolerant and can be reprocessed by another instance, this is not a major issue. By using a launch template, the company can specify the configuration of the Spot Instances, such as the instance type, the operating system, and the user data. By using an Auto Scaling group, the company can automatically scale the number of Spot Instances based on the CPU usage, which reflects the load of the batch jobs. This way, the company can optimize the performance and the cost of the EC2 instances for the nightly batch jobs.
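As a minimal boto3 sketch of option C, the following creates a Spot-based launch template, an Auto Scaling group that uses it, and a CPU-based target tracking scaling policy; the AMI ID, instance type, subnet, and sizing values are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")
autoscaling = boto3.client("autoscaling")

# Launch template that requests Spot capacity (AMI ID and names are placeholders).
lt = ec2.create_launch_template(
    LaunchTemplateName="batch-spot-template",
    LaunchTemplateData={
        "ImageId": "ami-0123456789abcdef0",
        "InstanceType": "c5.large",
        "InstanceMarketOptions": {"MarketType": "spot"},
    },
)

# Auto Scaling group that uses the Spot launch template.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="nightly-batch-asg",
    LaunchTemplate={
        "LaunchTemplateId": lt["LaunchTemplate"]["LaunchTemplateId"],
        "Version": "$Latest",
    },
    MinSize=0,
    MaxSize=10,
    VPCZoneIdentifier="subnet-0123456789abcdef0",
)

# Target-tracking policy that scales out on average CPU utilization.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="nightly-batch-asg",
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
        "TargetValue": 70.0,
    },
)
```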

A) Purchase a 1-year Savings Plan for Amazon EC2 that covers the instance family of the Auto Scaling group that the batch job uses. This option is not optimal because it requires a commitment to a consistent amount of compute usage per hour for a one-year term, regardless of the instance type, size, region, or operating system. This can limit the flexibility and scalability of the Auto Scaling group and result in overpaying for unused compute capacity. Moreover, Savings Plans do not provide a capacity reservation, which means the company still needs to reserve capacity with On-Demand Capacity Reservations and pay lower prices with Savings Plans.

B) Purchase a 1-year Reserved Instance for the specific instance type and operating system of the instances in the Auto Scaling group that the batch job uses. This option is not ideal because it requires a commitment to a specific instance configuration for a one-year term, which can reduce the flexibility and scalability of the Auto Scaling group and result in overpaying for unused compute capacity. Moreover, Reserved Instances do not provide a capacity reservation, which means the company still needs to reserve capacity with On-Demand Capacity Reservations and pay lower prices with Reserved Instances.

D) Create a new launch template for the Auto Scaling group Increase the instance size Set a policy to scale out based on CPU usage. This option is not cost-effective because it does not take advantage of the lower prices of Spot Instances. Increasing the instance size can improve the performance of the batch jobs, but it can also increase the cost of the On-Demand instances. Moreover, scaling out based on CPU usage can result in launching more instances than needed, which can also increase the cost of the system.

Spot Instances - Amazon Elastic Compute Cloud

Launch templates - Amazon Elastic Compute Cloud

Auto Scaling groups - Amazon EC2 Auto Scaling

Savings Plans - Amazon EC2 Reserved Instances and Other AWS Reservation Models

A company has stored 10 TB of log files in Apache Parquet format in an Amazon S3 bucket. The company occasionally needs to use SQL to analyze the log files. Which solution will meet these requirements MOST cost-effectively?

A. Create an Amazon Aurora MySQL database. Migrate the data from the S3 bucket into Aurora by using AWS Database Migration Service (AWS DMS). Issue SQL statements to the Aurora database.
B. Create an Amazon Redshift cluster. Use Redshift Spectrum to run SQL statements directly on the data in the S3 bucket.
C. Create an AWS Glue crawler to store and retrieve table metadata from the S3 bucket. Use Amazon Athena to run SQL statements directly on the data in the S3 bucket.
D. Create an Amazon EMR cluster. Use Apache Spark SQL to run SQL statements directly on the data in the S3 bucket.
Suggested answer: C

Explanation:

AWS Glue is a serverless data integration service that can crawl, catalog, and prepare data for analysis. AWS Glue can automatically discover the schema and partitioning of the data stored in Apache Parquet format in S3, and create a table in the AWS Glue Data Catalog. Amazon Athena is a serverless interactive query service that can run SQL queries directly on data in S3, without requiring any data loading or transformation. Athena can use the table metadata from the AWS Glue Data Catalog to query the data in S3. By using AWS Glue and Athena, you can analyze the log files in S3 most cost-effectively, as you only pay for the resources consumed by the crawler and the queries, and you do not need to provision or manage any servers or clusters.
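A minimal boto3 sketch of this workflow might look like the following; the crawler role, database, table, and bucket names are placeholders.

```python
import boto3

glue = boto3.client("glue")
athena = boto3.client("athena")

# Crawl the Parquet log files so their schema lands in the Glue Data Catalog.
glue.create_crawler(
    Name="parquet-logs-crawler",
    Role="arn:aws:iam::123456789012:role/GlueCrawlerRole",
    DatabaseName="logs_db",
    Targets={"S3Targets": [{"Path": "s3://my-log-bucket/parquet/"}]},
)
glue.start_crawler(Name="parquet-logs-crawler")

# Once the crawler has created the table, query the data in place with Athena.
athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) AS hits FROM parquet_logs GROUP BY status",
    QueryExecutionContext={"Database": "logs_db"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results-bucket/logs/"},
)
```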

AWS Glue

Amazon Athena

Analyzing Data in S3 using Amazon Athena

A company has an AWS Direct Connect connection from its on-premises location to an AWS account. The AWS account has 30 different VPCs in the same AWS Region. The VPCs use private virtual interfaces (VIFs). Each VPC has a CIDR block that does not overlap with other networks under the company's control.

The company wants to centrally manage the networking architecture while still allowing each VPC to communicate with all other VPCs and on-premises networks.

Which solution will meet these requirements with the LEAST amount of operational overhead?

A. Create a transit gateway and associate the Direct Connect connection with a new transit VIF. Turn on the transit gateway's route propagation feature.
B. Create a Direct Connect gateway. Recreate the private VIFs to use the new gateway. Associate each VPC by creating new virtual private gateways.
C. Create a transit VPC. Connect the Direct Connect connection to the transit VPC. Create a peering connection between all other VPCs in the Region. Update the route tables.
D. Create AWS Site-to-Site VPN connections from on premises to each VPC. Ensure that both VPN tunnels are UP for each connection. Turn on the route propagation feature.
Suggested answer: A

Explanation:

This solution meets the following requirements:

It is operationally efficient, as it only requires one transit gateway and one transit VIF to connect the Direct Connect connection to all the VPCs in the same AWS Region. The transit gateway acts as a regional network hub that simplifies the network management and reduces the number of VIFs and gateways needed.

It is scalable, as it can support up to 5000 attachments per transit gateway, which can include VPCs, VPNs, Direct Connect gateways, and peering connections. The transit gateway can also be connected to other transit gateways in different Regions or accounts using peering connections, enabling cross-Region and cross-account connectivity.

It is flexible, as it allows each VPC to communicate with all other VPCs and on-premises networks using dynamic routing protocols such as Border Gateway Protocol (BGP). The transit gateway's route propagation feature automatically propagates the routes from the attached VPCs and VPNs to the transit gateway route table, eliminating the need to manually update the route tables.
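As a minimal boto3 sketch of option A, the following creates the transit gateway, a Direct Connect gateway (which sits between the transit VIF and the transit gateway), the transit VIF, and the association; the connection ID, VLAN, and ASN values are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")
dx = boto3.client("directconnect")

# Regional hub with automatic route propagation from attachments.
tgw = ec2.create_transit_gateway(
    Options={"DefaultRouteTableAssociation": "enable", "DefaultRouteTablePropagation": "enable"}
)["TransitGateway"]

# A Direct Connect gateway connects the transit VIF to the transit gateway.
dxgw = dx.create_direct_connect_gateway(
    directConnectGatewayName="corp-dx-gateway", amazonSideAsn=64512
)["directConnectGateway"]

# New transit VIF on the existing connection (connection ID, VLAN, ASN are placeholders).
dx.create_transit_virtual_interface(
    connectionId="dxcon-0123456789abcdef0",
    newTransitVirtualInterface={
        "virtualInterfaceName": "corp-transit-vif",
        "vlan": 101,
        "asn": 65000,
        "directConnectGatewayId": dxgw["directConnectGatewayId"],
    },
)

# Associate the Direct Connect gateway with the transit gateway.
dx.create_direct_connect_gateway_association(
    directConnectGatewayId=dxgw["directConnectGatewayId"],
    gatewayId=tgw["TransitGatewayId"],
)
```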

Transit Gateways - Amazon Virtual Private Cloud

Working with transit gateways - AWS Direct Connect

Amazon VPC-to-Amazon VPC connectivity options - Amazon Virtual Private Cloud Connectivity Options

A company has a mobile game that reads most of its metadata from an Amazon RDS DB instance. As the game increased in popularity, developers noticed slowdowns related to the game's metadata load times. Performance metrics indicate that simply scaling the database will not help. A solutions architect must explore all options that include capabilities for snapshots, replication, and sub-millisecond response times.

What should the solutions architect recommend to solve these issues?

A. Migrate the database to Amazon Aurora with Aurora Replicas.
B. Migrate the database to Amazon DynamoDB with global tables.
C. Add an Amazon ElastiCache for Redis layer in front of the database.
D. Add an Amazon ElastiCache for Memcached layer in front of the database.
Suggested answer: C

Explanation:

This option is the most suitable way to improve the game's metadata load times without migrating the database. Amazon ElastiCache for Redis is a fully managed, in-memory data store that provides sub-millisecond latency and high throughput for read-intensive workloads. You can use it as a caching layer in front of your RDS DB instance to store frequently accessed metadata and reduce the load on the database. You can also take advantage of Redis features such as snapshots, replication, and data persistence to ensure data durability and availability. ElastiCache for Redis scales automatically to meet your demand and integrates with other AWS services such as CloudFormation, CloudWatch, and IAM.
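A minimal cache-aside sketch in Python might look like the following; the cache endpoint, key naming, TTL, and the load_from_rds callback are placeholders for the game's own metadata query.

```python
import json
import redis  # pip install redis

# Placeholder ElastiCache for Redis endpoint.
cache = redis.Redis(host="my-cache.xxxxxx.use1.cache.amazonaws.com", port=6379)


def get_item_metadata(item_id, load_from_rds):
    """Cache-aside read: try Redis first, fall back to the RDS metadata query."""
    key = f"metadata:{item_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)

    # load_from_rds is a placeholder callable that runs the SELECT against RDS.
    metadata = load_from_rds(item_id)
    if metadata is not None:
        cache.set(key, json.dumps(metadata), ex=300)  # keep hot metadata for 5 minutes
    return metadata
```

Because reads are served from memory after the first miss, the database sees far fewer metadata queries, and Redis replication and snapshots cover the stated durability requirements.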

Option A is not optimal because migrating the database to Amazon Aurora with Aurora Replicas would incur additional costs and complexity. Amazon Aurora is a relational database service that provides high performance, availability, and compatibility with MySQL and PostgreSQL. Aurora Replicas are read-only copies of the primary database that can be used for scaling read capacity and enhancing availability. However, migrating the database to Aurora would require modifying the application code, testing the compatibility, and performing the data migration. Moreover, Aurora Replicas may not provide sub-millisecond response times as ElastiCache for Redis does.

Option B is not optimal because migrating the database to Amazon DynamoDB with global tables would incur additional costs and complexity. Amazon DynamoDB is a NoSQL database service that provides fast and flexible data access for any scale. Global tables are a feature of DynamoDB that enables you to replicate your data across multiple AWS Regions for high availability and performance. However, migrating the database to DynamoDB would require changing the data model, modifying the application code, and performing the data migration. Moreover, global tables may not be necessary for the game's metadata, as they are mainly used for cross-region data access and disaster recovery.

Option D is not optimal because adding an Amazon ElastiCache for Memcached layer in front of the database would not provide the same capabilities as ElastiCache for Redis. Amazon ElastiCache for Memcached is another fully managed, in-memory data store that provides high performance and scalability for caching workloads. However, Memcached does not support snapshots, replication, or data persistence, which means that the cached data may be lost in case of a node failure or a cache eviction. Moreover, Memcached does not integrate with other AWS services as well as Redis does. Therefore, ElastiCache for Redis is a better choice for this scenario.

Reference:

What Is Amazon ElastiCache for Redis?

What Is Amazon Aurora?

What Is Amazon DynamoDB?

What Is Amazon ElastiCache for Memcached?

A company uses AWS Organizations to run workloads within multiple AWS accounts. A tagging policy adds department tags to AWS resources when the company creates tags.

An accounting team needs to determine spending on Amazon EC2 consumption. The accounting team must determine which departments are responsible for the costs, regardless of AWS account. The accounting team has access to AWS Cost Explorer for all AWS accounts within the organization and needs to access all reports from Cost Explorer.

Which solution meets these requirements in the MOST operationally efficient way?

A. From the Organizations management account billing console, activate a user-defined cost allocation tag named department. Create one cost report in Cost Explorer grouping by tag name, and filter by EC2.
B. From the Organizations management account billing console, activate an AWS-defined cost allocation tag named department. Create one cost report in Cost Explorer grouping by tag name, and filter by EC2.
C. From the Organizations member account billing console, activate a user-defined cost allocation tag named department. Create one cost report in Cost Explorer grouping by the tag name, and filter by EC2.
D. From the Organizations member account billing console, activate an AWS-defined cost allocation tag named department. Create one cost report in Cost Explorer grouping by tag name, and filter by EC2.
Suggested answer: B

Explanation:

This solution meets the following requirements:

It is operationally efficient, as it only requires one activation of the cost allocation tag and one creation of the cost report from the management account, which has access to all the member accounts' data and billing preferences.

It is consistent, as it uses the AWS-defined cost allocation tag named department, which is automatically applied to resources when the company creates tags using the tagging policy enforced by AWS Organizations. This ensures that the tag name and value are the same across all the resources and accounts, and avoids any discrepancies or errors that might arise from user-defined tags.

It is informative, as it creates one cost report in Cost Explorer grouping by the tag name, and filters by EC2. This allows the accounting team to see the breakdown of EC2 consumption and costs by department, regardless of the AWS account. The team can also use other features of Cost Explorer, such as charts, filters, and forecasts, to analyze and optimize the spending.
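A minimal boto3 sketch of the equivalent Cost Explorer API call, run from the management account, might look like the following; the time period is a placeholder.

```python
import boto3

ce = boto3.client("ce")  # Cost Explorer API, called from the management account

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-01-01", "End": "2024-02-01"},  # placeholder month
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "TAG", "Key": "department"}],
    Filter={
        "Dimensions": {
            "Key": "SERVICE",
            "Values": ["Amazon Elastic Compute Cloud - Compute"],
        }
    },
)

# Print EC2 cost per department tag value across all member accounts.
for group in response["ResultsByTime"][0]["Groups"]:
    print(group["Keys"][0], group["Metrics"]["UnblendedCost"]["Amount"])
```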

Using AWS cost allocation tags - AWS Billing

User-defined cost allocation tags - AWS Billing

Cost Tagging and Reporting with AWS Organizations

A company is creating an application. The company stores data from tests of the application in multiple on-premises locations.

The company needs to connect the on-premises locations to VPCs in an AWS Region in the AWS Cloud. The number of accounts and VPCs will increase during the next year. The network architecture must simplify the administration of new connections and must provide the ability to scale.

Which solution will meet these requirements with the LEAST administrative overhead?

A. Create a peering connection between the VPCs. Create a VPN connection between the VPCs and the on-premises locations.
B. Launch an Amazon EC2 instance. On the instance, include VPN software that uses a VPN connection to connect all VPCs and on-premises locations.
C. Create a transit gateway. Create VPC attachments for the VPC connections. Create VPN attachments for the on-premises connections.
D. Create an AWS Direct Connect connection between the on-premises locations and a central VPC. Connect the central VPC to other VPCs by using peering connections.
Suggested answer: C

Explanation:

A transit gateway is a network transit hub that enables you to connect your VPCs and on-premises networks in a centralized and scalable way. You can create VPC attachments to connect your VPCs to the transit gateway, and VPN attachments to connect your on-premises networks to the transit gateway over the internet. The transit gateway acts as a router between the attached networks, and simplifies the administration of new connections by reducing the number of peering or VPN connections required. You can also use transit gateway route tables to control the routing of traffic between the attached networks. By creating a transit gateway and using VPC and VPN attachments, you can meet the requirements of the company with the least administrative overhead.
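A minimal boto3 sketch of this design might look like the following; the VPC, subnet, customer gateway IP, and ASN values are placeholders, and the VPC attachment call would be repeated for each VPC.

```python
import boto3

ec2 = boto3.client("ec2")

tgw_id = ec2.create_transit_gateway()["TransitGateway"]["TransitGatewayId"]

# Attach a VPC to the transit gateway (repeat per VPC; IDs are placeholders).
ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId=tgw_id,
    VpcId="vpc-0123456789abcdef0",
    SubnetIds=["subnet-0123456789abcdef0"],
)

# Attach an on-premises location over Site-to-Site VPN (public IP and ASN are placeholders).
cgw = ec2.create_customer_gateway(BgpAsn=65010, PublicIp="203.0.113.10", Type="ipsec.1")
ec2.create_vpn_connection(
    CustomerGatewayId=cgw["CustomerGateway"]["CustomerGatewayId"],
    Type="ipsec.1",
    TransitGatewayId=tgw_id,
)
```

Adding a new account or VPC later only requires another attachment to the same transit gateway, which is what keeps the administrative overhead low as the environment grows.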

AWS Transit Gateway

Transit gateway attachments

Transit gateway route tables
