ExamGecko

Google Professional Cloud Database Engineer Practice Test - Questions Answers, Page 11


You are migrating a telehealth care company's on-premises data center to Google Cloud. The migration plan specifies:

PostgreSQL databases must be migrated to a multi-region backup configuration with cross-region replicas to allow restore and failover in multiple scenarios.

MySQL databases handle personally identifiable information (PII) and require data residency compliance at the regional level.

You want to set up the environment with minimal administrative effort. What should you do?

A. Set up Cloud Logging and Cloud Monitoring with Cloud Functions to send an alert every time a new database instance is created, and manually validate the region.
B. Set up different organizations for each database type, and apply policy constraints at the organization level.
C. Set up Pub/Sub to ingest data from Cloud Logging, send an alert every time a new database instance is created, and manually validate the region.
D. Set up different projects for PostgreSQL and MySQL databases, and apply organizational policy constraints at a project level.
Suggested answer: D
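As a sketch of how a project-level constraint enforces regional data residency, the policy file below (project name and location value group are hypothetical) restricts the MySQL project to a single region via the `gcp.resourceLocations` constraint:

```yaml
# Hypothetical policy file (policy.yaml): restrict resources in the
# MySQL/PII project to one region for data residency compliance.
constraint: constraints/gcp.resourceLocations
listPolicy:
  allowedValues:
    - in:us-east1-locations
```

It could then be applied with something like `gcloud resource-manager org-policies set-policy policy.yaml --project=mysql-pii-project`, leaving the PostgreSQL project free to use a multi-region backup configuration.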

You have a Cloud SQL instance (DB-1) with two cross-region read replicas (DB-2 and DB-3). During a business continuity test, the primary instance (DB-1) was taken offline and a replica (DB-2) was promoted. The test has concluded and you want to return to the pre-test configuration. What should you do?

A. Bring DB-1 back online.
B. Delete DB-1, and re-create DB-1 as a read replica in the same region as DB-1.
C. Delete DB-2 so that DB-1 automatically reverts to the primary instance.
D. Create DB-4 as a read replica in the same region as DB-1, and promote DB-4 to primary.
Suggested answer: D

Explanation:

If you need to have the primary instance in the zone that had the outage, you can do a failback. A failback performs the same steps as the failover, only in the opposite direction, to reroute traffic back to the original instance. To perform a failback, use the procedure in Initiating failover. https://cloud.google.com/sql/docs/mysql/high-availability#failback
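As an illustrative sketch of the failback (instance names and region are hypothetical), the sequence is to create a new replica of the promoted primary in the original region, then promote it once it has caught up:

```shell
# 1. Create DB-4 as a read replica of DB-2 in DB-1's original region.
gcloud sql instances create db-4 \
    --master-instance-name=db-2 \
    --region=us-central1

# 2. After replication catches up, promote DB-4 to be the new primary.
gcloud sql instances promote-replica db-4
```

Applications are then re-pointed to DB-4, restoring the pre-test topology of a primary in the original region.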

Your team is building a new inventory management application that will require read and write database instances in multiple Google Cloud regions around the globe. Your database solution requires 99.99% availability and global transactional consistency. You need a fully managed backend relational database to store inventory changes. What should you do?

A. Use Bigtable.
B. Use Firestore.
C. Use Cloud SQL for MySQL.
D. Use Cloud Spanner.
Suggested answer: D

Explanation:

Cloud Spanner is the only option that is a fully managed relational database with read-write replicas in multiple regions and global transactional consistency, and its multi-region configurations carry a 99.999% availability SLA, exceeding the 99.99% requirement. Bigtable and Firestore are not relational, and Cloud SQL does not support multi-region read-write deployments.

You are the database administrator of a Cloud SQL for PostgreSQL instance that has pgaudit disabled. Users are complaining that their queries are taking longer to execute and performance has degraded over the past few months. You need to collect and analyze query performance data to help identify slow-running queries. What should you do?

A. View Cloud SQL operations to view historical query information.
B. Write a Logs Explorer query to identify database queries with high execution times.
C. Review application logs to identify database calls.
D. Use the Query Insights dashboard to identify high execution times.
Suggested answer: D

Explanation:

Query Insights for Cloud SQL helps you detect, diagnose, and prevent query performance problems. Its dashboard shows database load broken down by query, user, and client address, and lists the top queries by execution time, so you can identify slow-running queries without enabling pgaudit or writing custom log queries.

You are configuring a brand new PostgreSQL database instance in Cloud SQL. Your application team wants to have an optimal and highly available environment with automatic failover to avoid any unplanned outage. What should you do?

A. Create one regional Cloud SQL instance with a read replica in another region.
B. Create one regional Cloud SQL instance in one zone with a standby instance in another zone in the same region.
C. Create two read-write Cloud SQL instances in two different zones with a standby instance in another region.
D. Create two read-write Cloud SQL instances in two different regions with a standby instance in another zone.
Suggested answer: B

Explanation:

This answer is correct because it meets the requirements of having an optimal and highly available environment with automatic failover. According to the Google Cloud documentation, a regional Cloud SQL instance is an instance that has a primary server in one zone and a standby server in another zone within the same region. The primary and standby servers are kept in sync using synchronous replication, which ensures zero data loss and minimal downtime in case of a zonal outage or an instance failure. If the primary server becomes unavailable, Cloud SQL automatically fails over to the standby server, which becomes the new primary server.
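As an illustrative sketch (instance name, region, and machine tier are hypothetical), a regional HA PostgreSQL instance is created by setting the availability type to REGIONAL:

```shell
# Create a regional (HA) Cloud SQL for PostgreSQL instance with a
# primary in one zone and an automatic standby in another zone.
gcloud sql instances create my-pg-instance \
    --database-version=POSTGRES_14 \
    --region=us-central1 \
    --availability-type=REGIONAL \
    --tier=db-custom-2-7680
```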

During an internal audit, you realized that one of your Cloud SQL for MySQL instances does not have high availability (HA) enabled. You want to follow Google-recommended practices to enable HA on your existing instance. What should you do?

A. Create a new Cloud SQL for MySQL instance, enable HA, and use the export and import option to migrate your data.
B. Create a new Cloud SQL for MySQL instance, enable HA, and use Cloud Data Fusion to migrate your data.
C. Use the gcloud instances patch command to update your existing Cloud SQL for MySQL instance.
D. Shut down your existing Cloud SQL for MySQL instance, and enable HA.
Suggested answer: C

Explanation:

Creating a new instance and migrating data can be time-consuming and disruptive to your application's availability. Shutting down the existing instance is not a recommended approach, as it will cause downtime for your application.

The recommended approach is to use the `gcloud sql instances patch` command to enable high availability on your existing Cloud SQL for MySQL instance. Setting the availability type to REGIONAL provisions a standby instance in another zone within the same region and enables automatic failover.

By following this approach, you can ensure minimal downtime, and your application can continue to operate during the process.
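As a minimal sketch (the instance name is hypothetical), enabling HA on an existing instance is a single patch operation:

```shell
# Convert an existing zonal instance to a regional (HA) instance.
gcloud sql instances patch my-mysql-instance \
    --availability-type=REGIONAL
```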

You are managing a set of Cloud SQL databases in Google Cloud. Regulations require that database backups reside in the region where the database is created. You want to minimize operational costs and administrative effort. What should you do?

A. Configure the automated backups to use a regional Cloud Storage bucket as a custom location.
B. Use the default configuration for the automated backups location.
C. Disable automated backups, and create an on-demand backup routine to a regional Cloud Storage bucket.
D. Disable automated backups, and configure serverless exports to a regional Cloud Storage bucket.
Suggested answer: A

Explanation:

https://cloud.google.com/sql/docs/mysql/backup-recovery/backing-up#locationbackups You can use a custom location for on-demand and automated backups. For a complete list of valid location values, see Instance locations in the Cloud SQL documentation.
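As an illustrative sketch (instance name and region are hypothetical), the automated backup location can be pinned to the instance's own region:

```shell
# Keep automated backups in the same region as the instance
# to satisfy the regional data-residency requirement.
gcloud sql instances patch my-instance \
    --backup-location=us-central1
```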

Your ecommerce application connecting to your Cloud SQL for SQL Server is expected to have additional traffic due to the holiday weekend. You want to follow Google-recommended practices to set up alerts for CPU and memory metrics so you can be notified by text message at the first sign of potential issues. What should you do?

A. Use a Cloud Function to pull CPU and memory metrics from your Cloud SQL instance and to call a custom service to send alerts.
B. Use Error Reporting to monitor CPU and memory metrics and to configure SMS notification channels.
C. Use Cloud Logging to set up a log sink for CPU and memory metrics and to configure a sink destination to send a message to Pub/Sub.
D. Use Cloud Monitoring to set up an alerting policy for CPU and memory metrics and to configure SMS notification channels.
Suggested answer: D

Explanation:

Cloud Monitoring collects metrics, events, and metadata from Google Cloud, Amazon Web Services (AWS), hosted uptime probes, and application instrumentation. Using the BindPlane service, you can also collect this data from over 150 common application components, on-premises systems, and hybrid cloud systems. Alerting policies in Cloud Monitoring can notify you through channels that include SMS, which matches the requirement to be notified by text message.
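As a sketch of such a policy (display names and the 80% threshold are illustrative), an alerting-policy condition on the Cloud SQL CPU utilization metric could look like this:

```json
{
  "displayName": "Cloud SQL CPU > 80%",
  "combiner": "OR",
  "conditions": [{
    "displayName": "CPU utilization",
    "conditionThreshold": {
      "filter": "metric.type=\"cloudsql.googleapis.com/database/cpu/utilization\" AND resource.type=\"cloudsql_database\"",
      "comparison": "COMPARISON_GT",
      "thresholdValue": 0.8,
      "duration": "300s"
    }
  }]
}
```

The policy would then be attached to an SMS notification channel so on-call staff are texted when the condition fires.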

You finished migrating an on-premises MySQL database to Cloud SQL. You want to ensure that the daily export of a table, which was previously a cron job running on the database server, continues. You want the solution to minimize cost and operations overhead. What should you do?

A. Use Cloud Scheduler and Cloud Functions to run the daily export.
B. Create a streaming Dataflow job to export the table.
C. Set up Cloud Composer, and create a task to export the table daily.
D. Run the cron job on a Compute Engine instance to continue the export.
Suggested answer: A

Explanation:

https://cloud.google.com/blog/topics/developers-practitioners/scheduling-cloud-sql-exports-using-cloud-functions-and-cloud-scheduler
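As an illustrative sketch (instance, bucket, database, and table names are hypothetical), the export that the scheduled Cloud Function would trigger is equivalent to:

```shell
# Daily single-table export from Cloud SQL to Cloud Storage.
gcloud sql export sql my-instance \
    "gs://my-backup-bucket/orders-$(date +%F).sql" \
    --database=shop \
    --table=orders
```

Cloud Scheduler invokes the function on a cron schedule, and the function calls the Cloud SQL Admin API to start this export, so no always-on VM is needed.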

Your organization needs to migrate a critical, on-premises MySQL database to Cloud SQL for MySQL. The on-premises database is on a version of MySQL that is supported by Cloud SQL and uses the InnoDB storage engine. You need to migrate the database while preserving transactions and minimizing downtime. What should you do?

A. Use Database Migration Service to connect to your on-premises database, and choose continuous replication. After the on-premises database is migrated, promote the Cloud SQL for MySQL instance, and connect applications to your Cloud SQL instance.
B. Build a Cloud Data Fusion pipeline for each table to migrate data from the on-premises MySQL database to Cloud SQL for MySQL. Schedule downtime to run each Cloud Data Fusion pipeline. Verify that the migration was successful. Re-point the applications to the Cloud SQL for MySQL instance.
C. Pause the on-premises applications. Use the mysqldump utility to dump the database content in compressed format. Run gsutil -m to move the dump file to Cloud Storage. Use the Cloud SQL for MySQL import option. After the import operation is complete, re-point the applications to the Cloud SQL for MySQL instance.
D. Pause the on-premises applications. Use the mysqldump utility to dump the database content in CSV format. Run gsutil -m to move the dump file to Cloud Storage. Use the Cloud SQL for MySQL import option. After the import operation is complete, re-point the applications to the Cloud SQL for MySQL instance.
Suggested answer: A

Explanation:

https://cloud.google.com/database-migration/docs/mysql/configure-source-database

To migrate the database while preserving transactions and minimizing downtime, you should use Database Migration Service. This service will allow you to migrate the database in a way that is transparent to your users and applications. It will also allow you to test the migration before you make it live, so that you can be sure that everything will work as expected.

Total 132 questions