Google Professional Cloud Database Engineer Practice Test - Questions Answers, Page 5


Your company wants you to migrate their Oracle, MySQL, Microsoft SQL Server, and PostgreSQL relational databases to Google Cloud. You need a fully managed, flexible database solution when possible. What should you do?

A. Migrate all the databases to Cloud SQL.
B. Migrate the Oracle, MySQL, and Microsoft SQL Server databases to Cloud SQL, and migrate the PostgreSQL databases to Compute Engine.
C. Migrate the MySQL, Microsoft SQL Server, and PostgreSQL databases to Compute Engine, and migrate the Oracle databases to Bare Metal Solution for Oracle.
D. Migrate the MySQL, Microsoft SQL Server, and PostgreSQL databases to Cloud SQL, and migrate the Oracle databases to Bare Metal Solution for Oracle.
Suggested answer: D

Your team uses thousands of connected IoT devices to collect device maintenance data for your oil and gas customers in real time. You want to design inspection routines, device repair, and replacement schedules based on insights gathered from the data produced by these devices. You need a managed solution that is highly scalable, supports a multi-cloud strategy, and offers low latency for these IoT devices. What should you do?

A. Use Firestore with Looker.
B. Use Cloud Spanner with Data Studio.
C. Use MongoDB Atlas with Charts.
D. Use Bigtable with Looker.
Suggested answer: C

Explanation:

This scenario sounds like Bigtable at first: large amounts of data from many devices, analyzed in real time, and arguably even multi-cloud given its HBase compatibility. But Bigtable does not support SQL queries, so on its own it is not compatible with Looker; Firestore with Looker has the same problem. Spanner with Data Studio is at least a compatible pairing, but it does not fit this use case, not least because it is Google-native rather than multi-cloud. MongoDB Atlas, by contrast, is a managed solution (just not managed by Google) that is compatible with its own reporting tool (Charts), is specifically designed for this type of workload, and can run on any cloud.

Your application follows a microservices architecture and uses a single large Cloud SQL instance, which is starting to have performance issues as your application grows. In the Cloud Monitoring dashboard, the CPU utilization looks normal. You want to follow Google-recommended practices to resolve and prevent these performance issues while avoiding any major refactoring. What should you do?

A. Use Cloud Spanner instead of Cloud SQL.
B. Increase the number of CPUs for your instance.
C. Increase the storage size for the instance.
D. Use many smaller Cloud SQL instances.
Suggested answer: D

Explanation:

https://cloud.google.com/sql/docs/mysql/best-practices#data-arch
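The linked best-practice guidance (use many smaller instances rather than one large one) maps naturally onto microservices: give each service its own right-sized Cloud SQL instance. A minimal sketch, where the service names, tier, and region are placeholders rather than anything from the question:

```shell
# Hypothetical: one right-sized Cloud SQL instance per microservice,
# instead of a single large shared instance. All names are placeholders.
REGION="us-central1"
TIER="db-custom-2-8192"   # 2 vCPUs, 8 GB RAM

if command -v gcloud >/dev/null 2>&1; then
  for svc in orders inventory billing; do   # placeholder service names
    gcloud sql instances create "mysql-${svc}" \
        --database-version=MYSQL_8_0 \
        --tier="$TIER" \
        --region="$REGION"
  done
fi
```

Splitting by service also isolates noisy neighbors, so one service's load no longer degrades the others.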

You need to perform a one-time migration of data from a running Cloud SQL for MySQL instance in the us-central1 region to a new Cloud SQL for MySQL instance in the us-east1 region. You want to follow Google-recommended practices to minimize performance impact on the currently running instance. What should you do?

A. Create and run a Dataflow job that uses JdbcIO to copy data from one Cloud SQL instance to another.
B. Create two Datastream connection profiles, and use them to create a stream from one Cloud SQL instance to another.
C. Create a SQL dump file in Cloud Storage using a temporary instance, and then use that file to import into a new instance.
D. Create a CSV file by running the SQL statement SELECT...INTO OUTFILE, copy the file to a Cloud Storage bucket, and import it into a new instance.
Suggested answer: C

Explanation:

https://cloud.google.com/sql/docs/mysql/import-export#serverless
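The linked page covers serverless exports, which offload the dump to a temporary instance so the running primary is not impacted. A sketch of the export/import flow with gcloud, where the instance, database, and bucket names are all placeholders:

```shell
# Placeholder names -- substitute your own instances and bucket.
SOURCE_INSTANCE="mysql-central"   # running instance in us-central1
TARGET_INSTANCE="mysql-east"      # new instance in us-east1
DUMP_URI="gs://example-migration-bucket/dump.sql.gz"

if command -v gcloud >/dev/null 2>&1; then
  # --offload performs a serverless export: a temporary instance reads
  # the data, so the export does not consume resources on the primary.
  gcloud sql export sql "$SOURCE_INSTANCE" "$DUMP_URI" \
      --database=appdb --offload

  # Import the dump into the new us-east1 instance.
  gcloud sql import sql "$TARGET_INSTANCE" "$DUMP_URI" \
      --database=appdb
fi
```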

You are running a mission-critical application on a Cloud SQL for PostgreSQL database with a multi-zonal setup. The primary and read replica instances are in the same region but in different zones. You need to ensure that you split the application load between both instances. What should you do?

A. Use Cloud Load Balancing for load balancing between the Cloud SQL primary and read replica instances.
B. Use PgBouncer to set up database connection pooling between the Cloud SQL primary and read replica instances.
C. Use HTTP(S) Load Balancing for database connection pooling between the Cloud SQL primary and read replica instances.
D. Use the Cloud SQL Auth proxy for database connection pooling between the Cloud SQL primary and read replica instances.
Suggested answer: B

Explanation:

https://severalnines.com/blog/how-achieve-postgresql-high-availability-pgbouncer/

https://cloud.google.com/blog/products/databases/using-haproxy-to-scale-read-only-workloads-on-cloud-sql-for-postgresql

This answer is correct because PgBouncer is a lightweight connection pooler for PostgreSQL that can help you distribute requests between the Cloud SQL primary and read replica instances. PgBouncer also improves performance and scalability by reducing the overhead of creating new connections and by reusing existing ones. You can install PgBouncer on a Compute Engine instance and configure it to connect to the Cloud SQL instances using private IP addresses or the Cloud SQL Auth proxy.
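A minimal sketch of a pgbouncer.ini that exposes separate write and read pools, assuming placeholder private IPs for the primary (10.0.0.5) and the read replica (10.0.0.6); the application would send writes to appdb_rw and reads to appdb_ro:

```shell
# Hypothetical pgbouncer.ini: two logical database entries route writes
# to the primary and reads to the replica. All names/IPs are placeholders.
cat > pgbouncer.ini <<'EOF'
[databases]
; writes -> Cloud SQL primary (placeholder private IP)
appdb_rw = host=10.0.0.5 port=5432 dbname=appdb
; reads -> Cloud SQL read replica in the other zone (placeholder IP)
appdb_ro = host=10.0.0.6 port=5432 dbname=appdb

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
max_client_conn = 1000
default_pool_size = 50
EOF
```

Transaction pooling keeps the number of server-side connections small even with many microservice clients.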

Your organization deployed a new version of a critical application that uses Cloud SQL for MySQL with high availability (HA) and binary logging enabled to store transactional information. The latest release of the application had an error that caused massive data corruption in your Cloud SQL for MySQL database. You need to minimize data loss. What should you do?

A. Open the Google Cloud Console, navigate to SQL > Backups, and select the last version of the automated backup before the corruption.
B. Reload the Cloud SQL for MySQL database using the LOAD DATA command to load data from CSV files that were used to initialize the instance.
C. Perform a point-in-time recovery of your Cloud SQL for MySQL database, selecting a date and time before the data was corrupted.
D. Fail over to the Cloud SQL for MySQL HA instance. Use that instance to recover the transactions that occurred before the corruption.
Suggested answer: C

Explanation:

Because binary logging is enabled, you can identify a point in time at which the data was still good and recover to that point. https://cloud.google.com/sql/docs/mysql/backup-recovery/pitr#perform_the_point-in-time_recovery_using_binary_log_positions
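In Cloud SQL, a point-in-time recovery is performed by cloning the instance to a timestamp before the corruption (binary logging must already be enabled, as it is in this scenario). A sketch with placeholder instance names and timestamp:

```shell
# Placeholder names and timestamp -- substitute your own values.
SOURCE_INSTANCE="mysql-ha-prod"            # corrupted HA instance
RECOVERED_INSTANCE="mysql-ha-recovered"    # new clone to recover into
RECOVERY_POINT="2024-01-15T09:30:00.000Z"  # last known-good time

if command -v gcloud >/dev/null 2>&1; then
  # Clone the instance at a timestamp just before the corruption;
  # the clone replays the binary logs up to that point.
  gcloud sql instances clone "$SOURCE_INSTANCE" "$RECOVERED_INSTANCE" \
      --point-in-time "$RECOVERY_POINT"
fi
```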

You plan to use Database Migration Service to migrate data from a PostgreSQL on-premises instance to Cloud SQL. You need to identify the prerequisites for creating and automating the task. What should you do? (Choose two.)

A. Drop or disable all users except database administration users.
B. Disable all foreign key constraints on the source PostgreSQL database.
C. Ensure that all PostgreSQL tables have a primary key.
D. Shut down the database before the Database Migration Service task is started.
E. Ensure that pglogical is installed on the source PostgreSQL database.
Suggested answer: C, E

Explanation:

https://cloud.google.com/database-migration/docs/postgres/faq
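Both prerequisites can be verified on the source instance before creating the migration job. A sketch of the pre-flight queries (written to a file here; run it with psql against your source, filling in your own host and database names):

```shell
# Hypothetical pre-flight checks for the two DMS prerequisites:
# tables without a primary key, and pglogical availability.
cat > dms_preflight.sql <<'EOF'
-- Tables lacking a primary key (these need one before migration).
SELECT ns.nspname, tab.relname
FROM pg_class tab
JOIN pg_namespace ns ON ns.oid = tab.relnamespace
WHERE tab.relkind = 'r'
  AND ns.nspname NOT IN ('pg_catalog', 'information_schema')
  AND NOT EXISTS (
    SELECT 1 FROM pg_index i
    WHERE i.indrelid = tab.oid AND i.indisprimary
  );

-- Is the pglogical extension available on this server?
SELECT name, default_version
FROM pg_available_extensions
WHERE name = 'pglogical';
EOF
# e.g.: psql "host=SOURCE_HOST dbname=appdb" -f dms_preflight.sql
```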

You are using Compute Engine on Google Cloud and your data center to manage a set of MySQL databases in a hybrid configuration. You need to create replicas to scale reads and to offload part of the management operation. What should you do?

A. Use external server replication.
B. Use Database Migration Service.
C. Use Cloud SQL for MySQL external replica.
D. Use the mysqldump utility and binary logs.
Suggested answer: C

Explanation:

An external replica is a read-only copy of your Cloud SQL instance that runs on a server outside Cloud SQL, such as a Compute Engine instance or an on-premises database server. External replicas let you scale reads and offload management operations from your data center to Google Cloud, and they can also serve disaster recovery, migration, or reporting purposes.

To create an external replica, you configure your Cloud SQL instance to replicate to one or more servers outside Cloud SQL. You need to enable access on the Cloud SQL instance for the IP address of the external replica, create a replication user, and export the data from the Cloud SQL primary and import it into the external replica.
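On the external server, the replica is then pointed at the Cloud SQL primary with standard MySQL replication commands. A sketch, where the IP address, user, and binlog coordinates are all placeholders (on MySQL 8.0.23+, the equivalent CHANGE REPLICATION SOURCE TO syntax is preferred):

```shell
# Hypothetical replication setup to run on the external MySQL server
# (Compute Engine or on-premises). All values are placeholders.
cat > configure_external_replica.sql <<'EOF'
CHANGE MASTER TO
  MASTER_HOST = '10.0.0.5',             -- Cloud SQL primary IP (placeholder)
  MASTER_USER = 'repl_user',            -- replication user created on the primary
  MASTER_PASSWORD = 'change-me',
  MASTER_LOG_FILE = 'mysql-bin.000001', -- coordinates from the exported dump
  MASTER_LOG_POS = 4;
START SLAVE;
EOF
# e.g.: mysql -h localhost -u root -p < configure_external_replica.sql
```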

Your company is shutting down their data center and migrating several MySQL and PostgreSQL databases to Google Cloud. Your database operations team is severely constrained by ongoing production releases and the lack of capacity for additional on-premises backups. You want to ensure that the scheduled migrations happen with minimal downtime and that the Google Cloud databases stay in sync with the on-premises data changes until the applications can cut over.

What should you do? (Choose two.)

A. Use an external read replica to migrate the databases to Cloud SQL.
B. Use a read replica to migrate the databases to Cloud SQL.
C. Use Database Migration Service to migrate the databases to Cloud SQL.
D. Use a cross-region read replica to migrate the databases to Cloud SQL.
E. Use replication from an external server to migrate the databases to Cloud SQL.
Suggested answer: C, E

Your company is migrating the existing infrastructure for a highly transactional application to Google Cloud. You have several databases in a MySQL database instance and need to decide how to transfer the data to Cloud SQL. You need to minimize the downtime for the migration of your 500 GB instance. What should you do?

A. Create a Cloud SQL for MySQL instance for your databases, and configure Datastream to stream your database changes to Cloud SQL. Select the Backfill historical data check box on your stream configuration to initiate Datastream to backfill any data that is out of sync between the source and destination. Delete your stream when all changes are moved to Cloud SQL for MySQL, and update your application to use the new instance.
B. Create a migration job using Database Migration Service. Set the migration job type to Continuous, and allow the databases to complete the full dump phase and start sending data in change data capture (CDC) mode. Wait for the replication delay to minimize, initiate a promotion of the new Cloud SQL instance, and wait for the migration job to complete. Update your application connections to the new instance.
C. Create a migration job using Database Migration Service. Set the migration job type to One-time, and perform this migration during a maintenance window. Stop all write workloads to the source database and initiate the dump. Wait for the dump to be loaded into the Cloud SQL destination database and the destination database to be promoted to the primary database. Update your application connections to the new instance.
D. Use the mysqldump utility to manually initiate a backup of MySQL during the application maintenance window. Move the files to Cloud Storage, and import each database into your Cloud SQL instance. Continue to dump each database until all the databases are migrated. Update your application connections to the new instance.
Suggested answer: B

Explanation:

https://cloud.google.com/datastream/docs/overview
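The continuous-migration flow in answer B can be sketched with the Database Migration Service gcloud surface. The job and connection-profile names below are placeholders, and the exact flag spelling is an assumption to verify against your gcloud version:

```shell
# Placeholder names -- substitute your own job, region, and profiles.
JOB="mysql-migration-job"
REGION="us-central1"

if command -v gcloud >/dev/null 2>&1; then
  # Continuous job: full dump first, then ongoing CDC replication.
  gcloud database-migration migration-jobs create "$JOB" \
      --region="$REGION" \
      --type=CONTINUOUS \
      --source=src-profile \
      --destination=dest-profile

  # Once replication lag is minimal, promote the Cloud SQL destination
  # to a standalone primary and cut the application over.
  gcloud database-migration migration-jobs promote "$JOB" \
      --region="$REGION"
fi
```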

Total 132 questions