Amazon DBS-C01 Practice Test - Questions & Answers, Page 7

A company has a 200 GB Amazon RDS Multi-AZ DB instance with an RPO of 6 hours.

To meet the company’s disaster recovery policies, the database backup needs to be copied into another Region. The company requires the solution to be cost-effective and operationally efficient.

What should a Database Specialist do to copy the database backup into a different Region?

A. Use Amazon RDS automated snapshots and use AWS Lambda to copy the snapshot into another Region
B. Use Amazon RDS automated snapshots every 6 hours and use Amazon S3 cross-Region replication to copy the snapshot into another Region
C. Create an AWS Lambda function to take an Amazon RDS snapshot every 6 hours and use a second Lambda function to copy the snapshot into another Region
D. Create a cross-Region read replica for Amazon RDS in another Region and take an automated snapshot of the read replica
Suggested answer: C

Explanation:


Automated system snapshots cannot be scheduled on a fixed 6-hour interval, so a script (here, Lambda) must control the snapshot cadence to meet the RPO, with a second function handling the cross-Region copy.

https://aws.amazon.com/blogs/database/automating-cross-region-cross-account-snapshot-copies-with-the-snapshot-tool-for-amazon-aurora/
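A minimal sketch of option C using boto3. The Regions, instance identifier, and event payload shape are assumptions; the first function would be triggered by a 6-hour EventBridge schedule, the second when the snapshot completes.

import boto3
from datetime import datetime, timezone

SOURCE_REGION = "us-east-1"   # assumption: primary Region
TARGET_REGION = "us-west-2"   # assumption: DR Region
DB_INSTANCE_ID = "prod-db"    # assumption: placeholder instance identifier

def take_snapshot(event, context):
    # Lambda 1: invoked every 6 hours by an EventBridge schedule.
    rds = boto3.client("rds", region_name=SOURCE_REGION)
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d-%H%M")
    rds.create_db_snapshot(
        DBSnapshotIdentifier=f"{DB_INSTANCE_ID}-{stamp}",
        DBInstanceIdentifier=DB_INSTANCE_ID,
    )

def copy_snapshot(event, context):
    # Lambda 2: copies a completed snapshot into the DR Region.
    # Assumption: the snapshot ARN and name are passed in the event payload.
    rds = boto3.client("rds", region_name=TARGET_REGION)
    rds.copy_db_snapshot(
        SourceDBSnapshotIdentifier=event["snapshot_arn"],
        TargetDBSnapshotIdentifier=event["snapshot_name"] + "-dr",
        SourceRegion=SOURCE_REGION,
    )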

An Amazon RDS EBS-optimized instance with Provisioned IOPS (PIOPS) storage is using less than half of its allocated IOPS over the course of several hours under constant load. The RDS instance exhibits multi-second read and write latency, and uses all of its maximum bandwidth for read throughput, yet the instance uses less than half of its CPU and RAM resources.

What should a Database Specialist do in this situation to increase performance and return latency to sub-second levels?

A. Increase the size of the DB instance storage
B. Change the underlying EBS storage type to General Purpose SSD (gp2)
C. Disable EBS optimization on the DB instance
D. Change the DB instance to an instance class with a higher maximum bandwidth
Suggested answer: D

Explanation:


The instance is saturating its EBS bandwidth (read throughput at maximum) while IOPS, CPU, and RAM are underutilized, so the bottleneck is the instance's dedicated bandwidth to storage, and the fix is an instance class with a higher maximum bandwidth.

https://docs.amazonaws.cn/en_us/AmazonRDS/latest/UserGuide/CHAP_BestPractices.html
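A hedged example of the change with boto3; the identifier and target class are placeholders.

import boto3

rds = boto3.client("rds")
rds.modify_db_instance(
    DBInstanceIdentifier="prod-db",    # assumption: placeholder identifier
    DBInstanceClass="db.r5.4xlarge",   # assumption: any class with higher EBS bandwidth
    ApplyImmediately=True,             # apply now instead of the next maintenance window
)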

After restoring an Amazon RDS snapshot from 3 days ago, a company’s Development team cannot connect to the restored RDS DB instance.

What is the likely cause of this problem?

A. The restored DB instance does not have Enhanced Monitoring enabled
B. The production DB instance is using a custom parameter group
C. The restored DB instance is using the default security group
D. The production DB instance is using a custom option group
Suggested answer: C

Explanation:


When a DB instance is restored from a snapshot, it is associated with the default VPC security group unless another is specified, and the default group typically does not allow the inbound database traffic the team needs.

https://aws.amazon.com/premiumsupport/knowledge-center/rds-cannot-connect/

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_RestoreFromSnapshot.html
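A hedged example of reattaching the correct security group with boto3; both identifiers are placeholders.

import boto3

rds = boto3.client("rds")
rds.modify_db_instance(
    DBInstanceIdentifier="restored-db",            # assumption: placeholder identifier
    VpcSecurityGroupIds=["sg-0123456789abcdef0"],  # assumption: the group the production DB uses
    ApplyImmediately=True,
)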

A gaming company has implemented a leaderboard in AWS using a Sorted Set data structure within Amazon ElastiCache for Redis. The ElastiCache cluster has been deployed with cluster mode disabled and has a replication group deployed with two additional replicas. The company is planning for a worldwide gaming event and is anticipating a higher write load than what the current cluster can handle.

Which method should a Database Specialist use to scale the ElastiCache cluster ahead of the upcoming event?

A. Enable cluster mode on the existing ElastiCache cluster and configure separate shards for the Sorted Set across all nodes in the cluster.
B. Increase the size of the ElastiCache cluster nodes to a larger instance size.
C. Create an additional ElastiCache cluster and load-balance traffic between the two clusters.
D. Use the EXPIRE command and set a higher time to live (TTL) after each call to increment a given key.
Suggested answer: B

Explanation:
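With cluster mode disabled, every write is handled by the single primary node and the replicas serve only reads; a single leaderboard Sorted Set also cannot be split across shards without application changes. Scaling the node type up is therefore the way to add write capacity before the event.

As a hedged illustration with boto3; the replication group ID is a placeholder and the node type is an arbitrary larger size.

import boto3

elasticache = boto3.client("elasticache")
elasticache.modify_replication_group(
    ReplicationGroupId="leaderboard-rg",  # assumption: placeholder replication group ID
    CacheNodeType="cache.r6g.2xlarge",    # assumption: any larger node type
    ApplyImmediately=True,
)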


An ecommerce company has tasked a Database Specialist with creating a reporting dashboard that visualizes critical business metrics pulled from the core production database running on Amazon Aurora. Data read by the dashboard should be available within 100 milliseconds of an update.

The Database Specialist needs to review the current configuration of the Aurora DB cluster and develop a cost-effective solution. The solution needs to accommodate the unpredictable read workload from the reporting dashboard without any impact on the write availability and performance of the DB cluster.

Which solution meets these requirements?

A. Turn on the serverless option in the DB cluster so it can automatically scale based on demand.
B. Provision a clone of the existing DB cluster for the new Application team.
C. Create a separate DB cluster for the new workload, refresh from the source DB cluster, and set up ongoing replication using AWS DMS change data capture (CDC).
D. Add an automatic scaling policy to the DB cluster to add Aurora Replicas to the cluster based on CPU consumption.
Suggested answer: A

Explanation:
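With the serverless option, capacity scales automatically with the unpredictable dashboard demand, so no read capacity has to be pre-provisioned and the writer is not manually resized.

As a hedged sketch only: if the cluster were on Aurora Serverless v2 (an assumption; this question predates v2, and Serverless v1 cannot simply be switched on for an existing provisioned cluster), the scaling range could be set with boto3 like this, using a placeholder cluster identifier.

import boto3

rds = boto3.client("rds")
rds.modify_db_cluster(
    DBClusterIdentifier="core-aurora-cluster",  # assumption: placeholder identifier
    ServerlessV2ScalingConfiguration={"MinCapacity": 0.5, "MaxCapacity": 16},
)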


A retail company is about to migrate its online and mobile store to AWS. The company’s CEO has strategic plans to grow the brand globally. A Database Specialist has been challenged to provide predictable read and write database performance with minimal operational overhead.

What should the Database Specialist do to meet these requirements?

A. Use Amazon DynamoDB global tables to synchronize transactions
B. Use Amazon EMR to copy the orders table data across Regions
C. Use Amazon Aurora Global Database to synchronize all transactions
D. Use Amazon DynamoDB Streams to replicate all DynamoDB transactions and sync them
Suggested answer: A

Explanation:


https://aws.amazon.com/dynamodb/features/

With global tables, your globally distributed applications can access data locally in the selected regions to get single-digit millisecond read and write performance.

Not Aurora Global Database: per https://aws.amazon.com/rds/aurora/global-database/?nc1=h_ls, it "lets you easily scale database reads across the world and place your applications close to your users." It scales reads globally, but writes still go through a single primary Region, so it cannot deliver the predictable global write performance that DynamoDB global tables provide.
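A hedged example of adding a replica Region to an existing table with boto3 (global tables version 2019.11.21); the table name is a placeholder, and the table is assumed to already have DynamoDB Streams enabled.

import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")
# Add a replica Region to an existing table (global tables version 2019.11.21).
dynamodb.update_table(
    TableName="orders",  # assumption: placeholder table name
    ReplicaUpdates=[{"Create": {"RegionName": "eu-west-1"}}],
)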

A company is closing one of its remote data centers. This site runs a 100 TB on-premises data warehouse solution. The company plans to use the AWS Schema Conversion Tool (AWS SCT) and AWS DMS for the migration to AWS. The site network bandwidth is 500 Mbps. A Database Specialist wants to migrate the on-premises data using Amazon S3 as the data lake and Amazon Redshift as the data warehouse. This move must take place during a 2-week period when source systems are shut down for maintenance. The data should stay encrypted at rest and in transit.

Which approach has the least risk and the highest likelihood of a successful data transfer?

A. Set up a VPN tunnel for encrypting data over the network from the data center to AWS. Leverage AWS SCT and apply the converted schema to Amazon Redshift. Once complete, start an AWS DMS task to move the data from the source to Amazon S3. Use AWS Glue to load the data from Amazon S3 to Amazon Redshift.
B. Leverage AWS SCT and apply the converted schema to Amazon Redshift. Start an AWS DMS task with two AWS Snowball Edge devices to copy data from on-premises to Amazon S3 with AWS KMS encryption. Use AWS DMS to finish copying data to Amazon Redshift.
C. Leverage AWS SCT and apply the converted schema to Amazon Redshift. Once complete, use a fleet of 10 TB dedicated encrypted drives using the AWS Import/Export feature to copy data from on-premises to Amazon S3 with AWS KMS encryption. Use AWS Glue to load the data to Amazon Redshift.
D. Set up a VPN tunnel for encrypting data over the network from the data center to AWS. Leverage a native database export feature to export the data and compress the files. Use the aws s3 cp command with multipart upload to upload these files to Amazon S3 with AWS KMS encryption. Once complete, load the data to Amazon Redshift using AWS Glue.
Suggested answer: B

Explanation:


https://aws.amazon.com/blogs/database/new-aws-dms-and-aws-snowball-integration-enables-mass-database-migrations-and-migrations-of-large-databases/
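A quick back-of-the-envelope check shows why a network-only transfer (options A and D) is too risky: 100 TB over a sustained 500 Mbps link needs more time than the 2-week window allows.

# Transfer time for 100 TB over a 500 Mbps link.
data_bits = 100e12 * 8              # 100 TB expressed in bits
link_bps = 500e6                    # 500 Mbps
days = data_bits / link_bps / 86400
print(f"{days:.1f} days")           # ~18.5 days, longer than the 14-day window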

A company has multiple applications serving data from a secure on-premises database. The company is migrating all applications and databases to the AWS Cloud. The IT Risk and Compliance department requires that auditing be enabled on all secure databases to capture all logins, logouts, failed logins, permission changes, and database schema changes. A Database Specialist has recommended Amazon Aurora MySQL as the migration target, leveraging the Advanced Auditing feature in Aurora.

Which events need to be specified in the Advanced Auditing configuration to satisfy the minimum auditing requirements? (Choose three.)

A. CONNECT
B. QUERY_DCL
C. QUERY_DDL
D. QUERY_DML
E. TABLE
F. QUERY
Suggested answer: A, B, C

Explanation:


CONNECT captures successful logins, logouts, and failed logins; QUERY_DCL captures permission changes (GRANT, REVOKE); QUERY_DDL captures database schema changes.
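A hedged sketch of enabling those events on the cluster's custom parameter group with boto3; the group name is a placeholder.

import boto3

rds = boto3.client("rds")
rds.modify_db_cluster_parameter_group(
    DBClusterParameterGroupName="aurora-audit-params",  # assumption: placeholder group name
    Parameters=[
        {"ParameterName": "server_audit_logging",
         "ParameterValue": "1", "ApplyMethod": "immediate"},
        {"ParameterName": "server_audit_events",
         "ParameterValue": "CONNECT,QUERY_DCL,QUERY_DDL", "ApplyMethod": "immediate"},
    ],
)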

A gaming company has recently acquired a successful iOS game, which is particularly popular during the holiday season. The company has decided to add a leaderboard to the game that uses Amazon DynamoDB. The application load is expected to ramp up over the holiday season.

Which solution will meet these requirements at the lowest cost?

A. DynamoDB Streams
B. DynamoDB with DynamoDB Accelerator
C. DynamoDB with on-demand capacity mode
D. DynamoDB with provisioned capacity mode with Auto Scaling
Suggested answer: C

Explanation:


On-demand capacity mode charges per request and scales instantly with traffic, so a spiky, hard-to-forecast workload pays only for what it uses with no capacity planning.

Reference: https://aws.amazon.com/blogs/database/running-spiky-workloads-and-optimizing-costs-by-more-than-90-using-amazon-dynamodb-on-demand-capacity-mode/?nc1=b_rp
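A hedged example of switching an existing table to on-demand with boto3; the table name is a placeholder.

import boto3

dynamodb = boto3.client("dynamodb")
# Switch the table's billing mode to on-demand (pay per request).
dynamodb.update_table(
    TableName="leaderboard",  # assumption: placeholder table name
    BillingMode="PAY_PER_REQUEST",
)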

A company’s Security department established new requirements that state internal users must connect to an existing Amazon RDS for SQL Server DB instance using their corporate Active Directory (AD) credentials. A Database Specialist must make the modifications needed to fulfill this requirement.

Which combination of actions should the Database Specialist take? (Choose three.)

A. Disable Transparent Data Encryption (TDE) on the RDS SQL Server DB instance.
B. Modify the RDS SQL Server DB instance to use the directory for Windows authentication. Create appropriate new logins.
C. Use the AWS Management Console to create an AWS Managed Microsoft AD. Create a trust relationship with the corporate AD.
D. Stop the RDS SQL Server DB instance, modify it to use the directory for Windows authentication, and start it again. Create appropriate new logins.
E. Use the AWS Management Console to create an AD Connector. Create a trust relationship with the corporate AD.
F. Configure the AWS Managed Microsoft AD domain controller Security Group.
Suggested answer: B, C, F

Explanation:


Windows authentication for RDS for SQL Server requires an AWS Managed Microsoft AD with a trust relationship to the corporate AD (C), modifying the DB instance to use that directory and creating the Windows logins (B), and security group rules allowing traffic between the instance and the directory's domain controllers (F). The instance does not need to be stopped for this change.

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_SQLServerWinAuth.html
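A hedged sketch of step B with boto3; the instance identifier, directory ID, and IAM role name are placeholders.

import boto3

rds = boto3.client("rds")
rds.modify_db_instance(
    DBInstanceIdentifier="sqlserver-db",            # assumption: placeholder identifier
    Domain="d-1234567890",                          # assumption: AWS Managed Microsoft AD directory ID
    DomainIAMRoleName="rds-directoryservice-role",  # assumption: role granting Directory Service access
    ApplyImmediately=True,
)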
