
Amazon DBS-C01 Practice Test - Questions Answers, Page 21

A company runs hundreds of Microsoft SQL Server databases on Windows servers in its on-premises data center. A database specialist needs to migrate these databases to Linux on AWS.

Which combination of steps should the database specialist take to meet this requirement? (Choose three.)

A. Install AWS Systems Manager Agent on the on-premises servers. Use Systems Manager Run Command to install the Windows to Linux replatforming assistant for Microsoft SQL Server Databases.

B. Use AWS Systems Manager Run Command to install and configure the AWS Schema Conversion Tool on the on-premises servers.

C. On the Amazon EC2 console, launch EC2 instances and select a Linux AMI that includes SQL Server. Install and configure AWS Systems Manager Agent on the EC2 instances.

D. On the AWS Management Console, set up Amazon RDS for SQL Server DB instances with Linux as the operating system. Install AWS Systems Manager Agent on the DB instances by using an options group.

E. Open the Windows to Linux replatforming assistant tool. Enter configuration details of the source and destination databases. Start migration.

F. On the AWS Management Console, set up AWS Database Migration Service (AWS DMS) by entering details of the source SQL Server database and the destination SQL Server database on AWS. Start migration.
Suggested answer: A, C, E

Explanation:


https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/replatform-sql-server.html

https://d1.awsstatic.com/events/reinvent/2019/REPEAT_1_Leverage_automation_to_replatform_SQL_Server_to_Linux_WIN322-R1.pdf
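
For illustration, a minimal boto3 sketch of step A, assuming the on-premises servers are already registered as SSM managed instances; the instance ID and the PowerShell command are placeholders, not the actual replatforming assistant invocation:

import boto3

ssm = boto3.client("ssm", region_name="us-east-1")

response = ssm.send_command(
    InstanceIds=["mi-0123456789abcdef0"],  # hypothetical managed-instance ID
    DocumentName="AWS-RunPowerShellScript",
    Parameters={
        "commands": [
            # placeholder for installing the replatforming assistant
            "Write-Output 'install the Windows to Linux replatforming assistant here'"
        ]
    },
)
print(response["Command"]["CommandId"])  # track the command's progress with this ID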

A company is running a blogging platform. A security audit determines that the Amazon RDS DB instance that is used by the platform is not configured to encrypt the data at rest. The company must encrypt the DB instance within 30 days.

What should a database specialist do to meet this requirement with the LEAST amount of downtime?

A. Create a read replica of the DB instance, and enable encryption. When the read replica is available, promote the read replica and update the endpoint that is used by the application. Delete the unencrypted DB instance.

B. Take a snapshot of the DB instance. Make an encrypted copy of the snapshot. Restore the encrypted snapshot. When the new DB instance is available, update the endpoint that is used by the application. Delete the unencrypted DB instance.

C. Create a new encrypted DB instance. Perform an initial data load, and set up logical replication between the two DB instances. When the new DB instance is in sync with the source DB instance, update the endpoint that is used by the application. Delete the unencrypted DB instance.

D. Convert the DB instance to an Amazon Aurora DB cluster, and enable encryption. When the DB cluster is available, update the endpoint that is used by the application to the cluster endpoint. Delete the unencrypted DB instance.
Suggested answer: C

Explanation:


https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/encrypt-an-existing-amazon-rds-for-postgresql-db-instance.html

When the new, encrypted copy of the DB instance becomes available, you can point your applications to the new database. However, if your project doesn’t allow for significant downtime for this activity, you need an alternate approach that helps minimize the downtime. This pattern uses the AWS Database Migration Service (AWS DMS) to migrate and continuously replicate the data so that the cutover to the new, encrypted database can be done with minimal downtime.
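
As a point of comparison, a minimal boto3 sketch of the snapshot-copy approach (option B), which incurs more downtime than the DMS-based answer; all identifiers are hypothetical, and copying a snapshot with a KMS key is what produces the encrypted copy:

import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Copy the unencrypted snapshot; specifying a KMS key encrypts the copy.
rds.copy_db_snapshot(
    SourceDBSnapshotIdentifier="blog-db-snapshot",
    TargetDBSnapshotIdentifier="blog-db-snapshot-encrypted",
    KmsKeyId="alias/aws/rds",
)
rds.get_waiter("db_snapshot_available").wait(
    DBSnapshotIdentifier="blog-db-snapshot-encrypted"
)

# Restore the encrypted snapshot as a new DB instance.
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="blog-db-encrypted",
    DBSnapshotIdentifier="blog-db-snapshot-encrypted",
)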

An ecommerce company uses a backend application that stores data in an Amazon DynamoDB table.

The backend application runs in a private subnet in a VPC and must connect to this table.

The company must minimize any network latency that results from network connectivity issues, even during periods of heavy application usage. A database administrator also needs the ability to use a private connection to connect to the DynamoDB table from the application.

Which solution will meet these requirements?

A. Use network ACLs to ensure that any outgoing or incoming connections to any port except DynamoDB are deactivated. Encrypt API calls by using TLS.

B. Create a VPC endpoint for DynamoDB in the application's VPC. Use the VPC endpoint to access the table.

C. Create an AWS Lambda function that has access to DynamoDB. Restrict outgoing access only to this Lambda function from the application.

D. Use a VPN to route all communication to DynamoDB through the company's own corporate network infrastructure.
Suggested answer: B

Explanation:


https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/vpc-endpoints-dynamodb.html
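
A minimal boto3 sketch of answer B, assuming hypothetical VPC and route table IDs; a gateway endpoint for DynamoDB adds a route so traffic from the private subnet never leaves the AWS network:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",                 # hypothetical VPC
    ServiceName="com.amazonaws.us-east-1.dynamodb",
    RouteTableIds=["rtb-0123456789abcdef0"],       # route table of the private subnet
)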

A company's database specialist is building an Amazon RDS for Microsoft SQL Server DB instance to store hundreds of records in CSV format. A customer service tool uploads the records to an Amazon S3 bucket.

An employee who previously worked at the company already created a custom stored procedure to map the necessary CSV fields to the database tables. The database specialist needs to implement a solution that reuses this previous work and minimizes operational overhead.

Which solution will meet these requirements?

A. Create an Amazon S3 event to invoke an AWS Lambda function. Configure the Lambda function to parse the .csv file and use a SQL client library to run INSERT statements to load the data into the tables.

B. Write a custom .NET app that is hosted on Amazon EC2. Configure the .NET app to load the .csv file and call the custom stored procedure to insert the data into the tables.

C. Download the .csv file from Amazon S3 to the RDS D drive by using an AWS msdb stored procedure. Call the custom stored procedure to insert the data from the RDS D drive into the tables.

D. Create an Amazon S3 event to invoke AWS Step Functions to parse the .csv file and call the custom stored procedure to insert the data into the tables.
Suggested answer: C

Explanation:


Step 1: Download S3 files. Amazon RDS for SQL Server comes with several custom stored procedures and functions, which are located in the msdb database. The stored procedure to download files from S3 is rds_download_from_s3. The syntax for this stored procedure is shown here:

exec msdb.dbo.rds_download_from_s3
    @s3_arn_of_file='arn:aws:s3:::<bucket_name>/<file_name>',
    @rds_file_path='D:\S3\<custom_folder_name>\<file_name>',
    @overwrite_file=1;
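
A sketch of both steps of answer C from a SQL client, assuming pyodbc and hypothetical endpoint, credentials, file names, and a stand-in name (dbo.load_csv_records) for the employee's custom stored procedure:

import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=mydb.xxxxxxxx.us-east-1.rds.amazonaws.com;"  # hypothetical endpoint
    "DATABASE=sales;UID=admin;PWD=secret",               # hypothetical credentials
    autocommit=True,
)
cur = conn.cursor()

# Step 1: download the CSV from S3 to the RDS instance's D: drive.
cur.execute(
    "exec msdb.dbo.rds_download_from_s3 "
    "@s3_arn_of_file='arn:aws:s3:::my-bucket/records.csv', "
    "@rds_file_path='D:\\S3\\uploads\\records.csv', @overwrite_file=1;"
)

# Step 2: call the custom stored procedure to map the CSV fields into the tables.
cur.execute("exec dbo.load_csv_records @file_path='D:\\S3\\uploads\\records.csv';")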

A company hosts a 2 TB Oracle database in its on-premises data center. A database specialist is migrating the database from on premises to an Amazon Aurora PostgreSQL database on AWS.

The database specialist identifies a compatibility problem: Oracle stores metadata in its data dictionary in uppercase, but PostgreSQL stores the metadata in lowercase. The database specialist must resolve this problem to complete the migration.

What is the MOST operationally efficient solution that meets these requirements?

A. Override the default uppercase format of Oracle schema by encasing object names in quotation marks during creation.

B. Use AWS Database Migration Service (AWS DMS) mapping rules with rule-action set to convert-lowercase.

C. Use the AWS Schema Conversion Tool conversion agent to convert the metadata from uppercase to lowercase.

D. Use an AWS Glue job that is attached to an AWS Database Migration Service (AWS DMS) replication task to convert the metadata from uppercase to lowercase.
Suggested answer: B

Explanation:


https://aws.amazon.com/premiumsupport/knowledge-center/dms-mapping-oracle-postgresql/
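
A sketch of the convert-lowercase transformation rule as DMS table-mapping JSON, attached to a replication task with boto3; the endpoint and replication instance ARNs are hypothetical:

import json

import boto3

table_mappings = {
    "rules": [
        {
            "rule-type": "transformation",
            "rule-id": "1",
            "rule-name": "lowercase-schemas",
            "rule-action": "convert-lowercase",
            "rule-target": "schema",
            "object-locator": {"schema-name": "%"},
        }
    ]
}

dms = boto3.client("dms", region_name="us-east-1")
dms.create_replication_task(
    ReplicationTaskIdentifier="oracle-to-aurora-postgresql",
    SourceEndpointArn="arn:aws:dms:...:endpoint:SOURCE",    # hypothetical
    TargetEndpointArn="arn:aws:dms:...:endpoint:TARGET",    # hypothetical
    ReplicationInstanceArn="arn:aws:dms:...:rep:INSTANCE",  # hypothetical
    MigrationType="full-load",
    TableMappings=json.dumps(table_mappings),
)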

A financial company is running an Amazon Redshift cluster for one of its data warehouse solutions.

The company needs to generate connection logs, user logs, and user activity logs. The company also must make these logs available for future analysis.

Which combination of steps should a database specialist take to meet these requirements? (Choose two.)

A. Edit the database configuration of the cluster by enabling audit logging. Direct the logging to a specified log group in Amazon CloudWatch Logs.

B. Edit the database configuration of the cluster by enabling audit logging. Direct the logging to a specified Amazon S3 bucket.

C. Modify the cluster by enabling continuous delivery of AWS CloudTrail logs to Amazon S3.

D. Create a new parameter group with the enable_user_activity_logging parameter set to true. Configure the cluster to use the new parameter group.

E. Modify the system table to enable logging for each user.
Suggested answer: A, D

Explanation:


Amazon CloudWatch Logs retains log data indefinitely by default, and CloudWatch Logs Insights can be used to analyze and query the logs.

https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AnalyzingLogData.html

https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html

"Log retention – By default, logs are kept indefinitely and never expire. You can adjust the retention policy for each log group, keeping the indefinite retention, or choosing a retention period between 10 years and one day."

https://docs.aws.amazon.com/redshift/latest/mgmt/db-auditing.html
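
A minimal boto3 sketch of both steps, assuming hypothetical cluster and parameter group names; answer A turns on audit logging, and answer D sets the cluster parameter that user activity logging additionally requires:

import boto3

redshift = boto3.client("redshift", region_name="us-east-1")

# Answer A: deliver connection, user, and user-activity logs to CloudWatch Logs.
redshift.enable_logging(
    ClusterIdentifier="dw-cluster",
    LogDestinationType="cloudwatch",
    LogExports=["connectionlog", "userlog", "useractivitylog"],
)

# Answer D: user activity logging also requires this parameter to be true.
redshift.modify_cluster_parameter_group(
    ParameterGroupName="dw-cluster-params",
    Parameters=[
        {"ParameterName": "enable_user_activity_logging", "ParameterValue": "true"}
    ],
)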

A company plans to migrate a MySQL-based application from an on-premises environment to AWS.

The application performs database joins across several tables and uses indexes for faster query response times. The company needs the database to be highly available with automatic failover.

Which solution on AWS will meet these requirements with the LEAST operational overhead?

A. Deploy an Amazon RDS DB instance with a read replica.

B. Deploy an Amazon RDS Multi-AZ DB instance.

C. Deploy Amazon DynamoDB global tables.

D. Deploy multiple Amazon RDS DB instances. Use Amazon Route 53 DNS with failover health checks configured.
Suggested answer: B

Explanation:
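
RDS for MySQL natively supports the application's joins and indexes, and a Multi-AZ deployment maintains a synchronous standby in another Availability Zone with automatic failover and no extra infrastructure to manage. A minimal boto3 sketch, with hypothetical identifiers and credentials:

import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.create_db_instance(
    DBInstanceIdentifier="app-mysql",
    DBInstanceClass="db.m5.large",
    Engine="mysql",
    MultiAZ=True,  # synchronous standby in another AZ with automatic failover
    AllocatedStorage=100,
    MasterUsername="admin",
    MasterUserPassword="change-me",  # hypothetical; keep real credentials in Secrets Manager
)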


A social media company is using Amazon DynamoDB to store user profile data and user activity data. Developers are reading and writing the data, causing the size of the tables to grow significantly.

Developers have started to face performance bottlenecks with the tables.

Which solution should a database specialist recommend to read items the FASTEST without consuming all the provisioned throughput for the tables?

A. Use the Scan API operation in parallel with many workers to read all the items. Use the Query API operation to read multiple items that have a specific partition key and sort key. Use the GetItem API operation to read a single item.

B. Use the Scan API operation with a filter expression that allows multiple items to be read. Use the Query API operation to read multiple items that have a specific partition key and sort key. Use the GetItem API operation to read a single item.

C. Use the Scan API operation with a filter expression that allows multiple items to be read. Use the Query API operation to read a single item that has a specific primary key. Use the BatchGetItem API operation to read multiple items.

D. Use the Scan API operation in parallel with many workers to read all the items. Use the Query API operation to read a single item that has a specific primary key. Use the BatchGetItem API operation to read multiple items.
Suggested answer: B

Explanation:


https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/SQLtoNoSQL.ReadData.html
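
A sketch contrasting the three read operations from answer B, assuming a hypothetical table and key schema; Query and GetItem touch only the requested items, while Scan reads (and consumes throughput for) the entire table before the filter is applied:

import boto3
from boto3.dynamodb.conditions import Attr, Key

table = boto3.resource("dynamodb", region_name="us-east-1").Table("UserActivity")

# Query: read the items that share a partition key (optionally bounded by sort key).
activity = table.query(KeyConditionExpression=Key("user_id").eq("u-123"))

# GetItem: the cheapest way to read one item by its full primary key.
profile = table.get_item(Key={"user_id": "u-123", "sk": "PROFILE"})

# Scan with a filter expression: still reads the whole table; use it sparingly.
recent = table.scan(FilterExpression=Attr("last_login").gte("2024-01-01"))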

A pharmaceutical company's drug search API is using an Amazon Neptune DB cluster. A bulk uploader process automatically updates the information in the database a few times each week. A few weeks ago during a bulk upload, a database specialist noticed that the database started to respond frequently with a ThrottlingException error. The problem also occurred with subsequent uploads.

The database specialist must create a solution to prevent ThrottlingException errors for the database. The solution must minimize the downtime of the cluster.

Which solution meets these requirements?

A. Create a read replica that uses a larger instance size than the primary DB instance. Fail over the primary DB instance to the read replica.

B. Add a read replica to each Availability Zone. Use an instance for the read replica that is the same size as the primary DB instance. Keep the traffic between the API and the database within the Availability Zone.

C. Create a read replica that uses a larger instance size than the primary DB instance. Offload the reads from the primary DB instance.

D. Take the latest backup, and restore it in a DB cluster of a larger size. Point the application to the newly created DB cluster.
Suggested answer: C

Explanation:


https://docs.aws.amazon.com/neptune/latest/userguide/manage-console-add-replicas.html

Neptune replicas connect to the same storage volume as the primary DB instance and support only read operations. Neptune replicas can offload read workloads from the primary DB instance.
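
A minimal boto3 sketch of answer C, with hypothetical identifiers; in Neptune, a read replica is added by creating a DB instance in the existing cluster, and it shares the cluster's storage volume:

import boto3

neptune = boto3.client("neptune", region_name="us-east-1")

neptune.create_db_instance(
    DBInstanceIdentifier="drug-search-replica",
    DBInstanceClass="db.r5.4xlarge",  # assumed larger than the primary instance
    Engine="neptune",
    DBClusterIdentifier="drug-search-cluster",
)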

A global company is developing an application across multiple AWS Regions. The company needs a database solution with low latency in each Region and automatic disaster recovery. The database must be deployed in an active-active configuration with automatic data synchronization between Regions.

Which solution will meet these requirements with the LOWEST latency?

A. Amazon RDS with cross-Region read replicas

B. Amazon DynamoDB global tables

C. Amazon Aurora global database

D. Amazon Athena and Amazon S3 with S3 Cross Region Replication
Suggested answer: B

Explanation:
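
DynamoDB global tables replicate data across Regions in a multi-active (active-active) configuration, so each Region serves local reads and writes with low latency, whereas Aurora global database secondary Regions are read-only until a failover. A minimal boto3 sketch, assuming the current (2019.11.21) global tables version and a hypothetical existing table:

import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Add a replica Region to an existing table, turning it into a global table.
dynamodb.update_table(
    TableName="app-data",
    ReplicaUpdates=[{"Create": {"RegionName": "eu-west-1"}}],
)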

