Google Professional Cloud Database Engineer Practice Test - Questions Answers, Page 6


Your company uses the Cloud SQL out-of-disk recommender to analyze the storage utilization trends of production databases over the last 30 days. Your database operations team uses these recommendations to proactively monitor storage utilization and implement corrective actions. You receive a recommendation that the instance is likely to run out of disk space. What should you do to address this storage alert?

A.
Normalize the database to the third normal form.
B.
Compress the data using a different compression algorithm.
C.
Manually or automatically increase the storage capacity.
D.
Create another schema to load older data.
Suggested answer: C

Explanation:

https://cloud.google.com/sql/docs/mysql/instance-settings#storage-capacity-2ndgen
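As a sketch of the recommended fix, storage capacity can be increased automatically or manually with gcloud; the instance name and size below are placeholders for illustration:

```shell
# Enable automatic storage increases so Cloud SQL grows the disk
# before it runs out of space ("prod-mysql" is a placeholder name).
gcloud sql instances patch prod-mysql --storage-auto-increase

# Or increase capacity manually to a specific size
# (Cloud SQL storage can only grow, never shrink).
gcloud sql instances patch prod-mysql --storage-size=500GB
```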

You are managing a mission-critical Cloud SQL for PostgreSQL instance. Your application team is running important transactions on the database when another DBA starts an on-demand backup. You want to verify the status of the backup. What should you do?

A.
Check the cloudsql.googleapis.com/postgres.log instance log.
B.
Perform the gcloud sql operations list command.
C.
Use Cloud Audit Logs to verify the status.
D.
Use the Google Cloud Console.
Suggested answer: B

Explanation:

https://cloud.google.com/sql/docs/postgres/backup-recovery/backups#troubleshooting-backups Under Troubleshooting: Issue: 'You can't see the current operation's status.' The Google Cloud console reports only success or failure when the operation is done. It isn't designed to show warnings or other updates. Run the gcloud sql operations list command to list all operations for the given Cloud SQL instance.
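A minimal sketch of the command in question; the instance name is a placeholder:

```shell
# List recent operations (including the running backup) for the instance.
gcloud sql operations list --instance=prod-pg --limit=10

# Optionally block until a specific operation from the list completes.
gcloud sql operations wait OPERATION_ID
```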

You support a consumer inventory application that runs on a multi-region instance of Cloud Spanner. A customer opened a support ticket to complain about slow response times. You notice a Cloud Monitoring alert about high CPU utilization. You want to follow Google-recommended practices to address the CPU performance issue. What should you do first?

A.
Increase the number of processing units.
B.
Modify the database schema, and add additional indexes.
C.
Shard data required by the application into multiple instances.
D.
Decrease the number of processing units.
Suggested answer: A

Explanation:

For high CPU utilization, as described in the question, see: https://cloud.google.com/spanner/docs/identify-latency-point#:~:text=Check%20the%20CPU%20utilization%20of%20the%20instance.%20If%20the%20CPU%20utilization%20of%20the%20instance%20is%20above%20the%20recommended%20level%2C%20you%20should%20manually%20add%20more%20nodes%2C%20or%20set%20up%20auto%20scaling. 'Check the CPU utilization of the instance. If the CPU utilization of the instance is above the recommended level, you should manually add more nodes, or set up auto scaling.' Reviewing indexes and the schema comes later, after a specific slow query has been identified. Refer: https://cloud.google.com/spanner/docs/troubleshooting-performance-regressions#review-schema
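As a sketch, adding compute is a single gcloud call; the instance name and target size are placeholders (1,000 processing units correspond to one node):

```shell
# Scale the Spanner instance up to reduce CPU utilization.
gcloud spanner instances update game-instance --processing-units=2000
```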

Your company uses Bigtable for a user-facing application that displays a low-latency real-time dashboard. You need to recommend the optimal storage type for this read-intensive database. What should you do?

A.
Recommend solid-state drives (SSD).
B.
Recommend splitting the Bigtable instance into two instances in order to load balance the concurrent reads.
C.
Recommend hard disk drives (HDD).
D.
Recommend mixed storage types.
Suggested answer: A

Explanation:

If you plan to store extensive historical data for a large number of remote-sensing devices and then use the data to generate daily reports, the cost savings for HDD storage might justify the performance tradeoff. On the other hand, if you plan to use the data to display a real-time dashboard, it probably would not make sense to use HDD storage: reads would be much more frequent in this case, and reads that are not scans are much slower with HDD storage.
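Note that the storage type is fixed at instance creation and cannot be changed later. A hedged sketch, with placeholder names and zone:

```shell
# Create a Bigtable instance backed by SSD storage for low-latency reads.
gcloud bigtable instances create dashboard-bt \
  --display-name="Dashboard" \
  --cluster-storage-type=SSD \
  --cluster-config=id=dashboard-c1,zone=us-central1-b,nodes=3
```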

Your organization has a critical business app that is running with a Cloud SQL for MySQL backend database. Your company wants to build the most fault-tolerant and highly available solution possible. You need to ensure that the application database can survive a zonal and regional failure with a primary region of us-central1 and the backup region of us-east1. What should you do?

A.
Provision a Cloud SQL for MySQL instance in us-central1-a. Create a multiple-zone instance in us-west1-b. Create a read replica in us-east1-c.
B.
Provision a Cloud SQL for MySQL instance in us-central1-a. Create a multiple-zone instance in us-central1-b. Create a read replica in us-east1-b.
C.
Provision a Cloud SQL for MySQL instance in us-central1-a. Create a multiple-zone instance in us-east-b. Create a read replica in us-east1-c.
D.
Provision a Cloud SQL for MySQL instance in us-central1-a. Create a multiple-zone instance in us-east1-b. Create a read replica in us-central1-b.
Suggested answer: B

Explanation:

https://cloud.google.com/sql/docs/sqlserver/intro-to-cloud-sql-disaster-recovery

A high-availability (multi-zone) configuration within the primary region us-central1 protects against a zonal failure, while a cross-region read replica in us-east1 protects against a regional failure. Only option B keeps the standby in the primary region and the replica in the backup region.
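A minimal sketch of this topology with gcloud; instance names and machine tier are placeholders:

```shell
# Regional (multi-zone) HA primary in us-central1 survives a zonal failure.
gcloud sql instances create prod-mysql \
  --database-version=MYSQL_8_0 \
  --tier=db-n1-standard-4 \
  --region=us-central1 \
  --availability-type=REGIONAL

# Cross-region read replica in us-east1 survives a regional failure.
gcloud sql instances create prod-mysql-dr \
  --master-instance-name=prod-mysql \
  --region=us-east1
```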

You are building an Android game that needs to store data on a Google Cloud serverless database. The database will log user activity, store user preferences, and receive in-game updates. The target audience resides in developing countries that have intermittent internet connectivity. You need to ensure that the game can synchronize game data to the backend database whenever an internet network is available. What should you do?

A.
Use Firestore.
B.
Use Cloud SQL with an external (public) IP address.
C.
Use an in-app embedded database.
D.
Use Cloud Spanner.
Suggested answer: A

Explanation:

https://firebase.google.com/docs/firestore

You released a popular mobile game and are using a 50 TB Cloud Spanner instance to store game data in a PITR-enabled production environment. When you analyzed the game statistics, you realized that some players are exploiting a loophole to gather more points to get on the leaderboard. Another DBA accidentally ran an emergency bugfix script that corrupted some of the data in the production environment. You need to determine the extent of the data corruption and restore the production environment. What should you do? (Choose two.)

A.
If the corruption is significant, use backup and restore, and specify a recovery timestamp.
B.
If the corruption is significant, perform a stale read and specify a recovery timestamp. Write the results back.
C.
If the corruption is significant, use import and export.
D.
If the corruption is insignificant, use backup and restore, and specify a recovery timestamp.
E.
If the corruption is insignificant, perform a stale read and specify a recovery timestamp. Write the results back.
Suggested answer: A, E

Explanation:

https://cloud.google.com/spanner/docs/pitr#ways-to-recover

To recover the entire database, backup or export the database specifying a timestamp in the past and then restore or import it to a new database. This is typically used to recover from data corruption issues when you have to revert the entire database to a point-in-time before the corruption occurred.

This describes the significant-corruption case (answer A).

To recover a portion of the database, perform a stale read specifying a query-condition and timestamp in the past, and then write the results back into the live database. This is typically used for surgical operations on a live database. For example, if you accidentally delete a particular row or incorrectly update a subset of data, you can recover it with this method.

This describes the insignificant-corruption case (answer E).

https://cloud.google.com/spanner/docs/pitr https://cloud.google.com/spanner/docs/backup/restore-backup
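As a hedged sketch of both recovery paths; instance, database, backup names, timestamp, and the example query are placeholders:

```shell
# Significant corruption: create a backup at a pre-corruption timestamp
# within the PITR retention window, then restore it to a new database.
gcloud spanner backups create pre-corruption-bk \
  --instance=game-instance \
  --database=game-db \
  --retention-period=7d \
  --version-time=2024-05-01T10:00:00Z

# Insignificant corruption: stale-read the affected rows as of that
# timestamp, then write the results back into the live database.
gcloud spanner databases execute-sql game-db \
  --instance=game-instance \
  --read-timestamp=2024-05-01T10:00:00Z \
  --sql="SELECT PlayerId, Score FROM Leaderboard WHERE PlayerId = 42"
```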

You are starting a large CSV import into a Cloud SQL for MySQL instance that has many open connections. You checked memory and CPU usage, and sufficient resources are available. You want to follow Google-recommended practices to ensure that the import will not time out. What should you do?

A.
Close idle connections or restart the instance before beginning the import operation.
B.
Increase the amount of memory allocated to your instance.
C.
Ensure that the service account has the Storage Admin role.
D.
Increase the number of CPUs for the instance to ensure that it can handle the additional import operation.
Suggested answer: A

Explanation:

https://cloud.google.com/sql/docs/mysql/import-export#troubleshooting
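A sketch of the recommended sequence; instance, bucket, database, and table names are placeholders:

```shell
# Restart the instance to clear idle connections before the large import
# (alternatively, terminate idle sessions from the MySQL client).
gcloud sql instances restart prod-mysql

# Start the CSV import from Cloud Storage.
gcloud sql import csv prod-mysql gs://my-bucket/inventory.csv \
  --database=inventory \
  --table=items
```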

You are migrating your data center to Google Cloud. You plan to migrate your applications to Compute Engine and your Oracle databases to Bare Metal Solution for Oracle. You must ensure that the applications in different projects can communicate securely and efficiently with the Oracle databases. What should you do?

A.
Set up a Shared VPC, configure multiple service projects, and create firewall rules.
B.
Set up Serverless VPC Access.
C.
Set up Private Service Connect.
D.
Set up Traffic Director.
Suggested answer: A

Explanation:

https://medium.com/google-cloud/shared-vpc-in-google-cloud-64527e0a409e#:~:text=Unlike%20VPC%20peering%2C%20Shared%20VPC%20connects%20projects%20within%20the%20same%20organization.&text=There%20are%20a%20lot%20of,between%20VPCs%20in%20different%20projects.
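A hedged sketch of the Shared VPC setup; project IDs, network name, port, and source range are placeholders:

```shell
# Enable Shared VPC on the host project and attach a service project.
gcloud compute shared-vpc enable host-project-id

gcloud compute shared-vpc associated-projects add app-project-1 \
  --host-project=host-project-id

# Firewall rule allowing application subnets to reach the Oracle
# listener port over the shared network.
gcloud compute firewall-rules create allow-oracle-1521 \
  --project=host-project-id \
  --network=shared-net \
  --allow=tcp:1521 \
  --source-ranges=10.0.0.0/16
```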

You are running an instance of Cloud Spanner as the backend of your ecommerce website. You learn that the quality assurance (QA) team has doubled the number of their test cases. You need to create a copy of your Cloud Spanner database in a new test environment to accommodate the additional test cases. You want to follow Google-recommended practices. What should you do?

A.
Use Cloud Functions to run the export in Avro format.
B.
Use Cloud Functions to run the export in text format.
C.
Use Dataflow to run the export in Avro format.
D.
Use Dataflow to run the export in text format.
Suggested answer: C

Explanation:

https://cloud.google.com/spanner/docs/import-export-overview#file-format
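As a sketch, the export runs as a Google-provided Dataflow template job; the job name, bucket, instance, and database below are placeholders:

```shell
# Launch the Spanner-to-Avro Dataflow export template.
gcloud dataflow jobs run spanner-export-qa \
  --gcs-location=gs://dataflow-templates/latest/Cloud_Spanner_to_GCS_Avro \
  --region=us-central1 \
  --parameters=instanceId=prod-spanner,databaseId=ecommerce-db,outputDir=gs://my-bucket/spanner-export
```

Avro preserves the database schema and data types, which is why it is the format supported for re-importing the export into a new Spanner database for the test environment.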

Total 132 questions