ExamGecko

Google Professional Cloud Database Engineer Practice Test - Questions Answers, Page 12

Your company is developing a global ecommerce website on Google Cloud. Your development team is working on a shopping cart service that is durable and elastically scalable with live traffic. Business disruptions from unplanned downtime are expected to be less than 5 minutes per month. In addition, the application needs to have very low latency writes. You need a data storage solution that has high write throughput and provides 99.99% uptime. What should you do?

A. Use Cloud SQL for data storage.
B. Use Cloud Spanner for data storage.
C. Use Memorystore for data storage.
D. Use Bigtable for data storage.
Suggested answer: B

Explanation:

Cloud Spanner is a highly scalable, reliable, and fully managed relational database service that runs on Google's infrastructure. It is designed to handle large amounts of data and remain highly available even in the face of failures, which makes it suitable for applications such as ecommerce websites. Spanner is a good choice for this scenario because it supports high write throughput with low-latency writes and offers 99.99% uptime.
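As a sketch of option B, a regional Spanner instance for the shopping cart service could be provisioned from the command line. The instance name, region, and node count below are hypothetical, not from the question:

```shell
# Create a regional Cloud Spanner instance (hypothetical names/sizing)
gcloud spanner instances create cart-db \
    --config=regional-us-east1 \
    --description="Shopping cart service" \
    --nodes=3
```

Node count can be scaled up or down later without downtime, which matches the elasticity requirement in the question.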

Your organization has hundreds of Cloud SQL for MySQL instances. You want to follow Google-recommended practices to optimize platform costs. What should you do?

A. Use Query Insights to identify idle instances.
B. Remove inactive user accounts.
C. Run the Recommender API to identify overprovisioned instances.
D. Build indexes on heavily accessed tables.
Suggested answer: C

Explanation:

The Cloud SQL overprovisioned instance recommender helps you detect instances that are unnecessarily large for a given workload, and provides recommendations on how to resize such instances and reduce cost. https://cloud.google.com/sql/docs/mysql/recommender-sql-overprovisioned
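A minimal sketch of option C, assuming the Recommender API is enabled on the project; the project ID and location are hypothetical, and the recommender ID should be checked against the current documentation:

```shell
# List overprovisioned-instance recommendations for Cloud SQL
gcloud recommender recommendations list \
    --project=my-project \
    --location=us-central1 \
    --recommender=google.cloudsql.instance.OverprovisionedRecommender
```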

Your organization is running a critical production database on a virtual machine (VM) on Compute Engine. The VM has an ext4-formatted persistent disk for data files. The database will soon run out of storage space. You need to implement a solution that avoids downtime. What should you do?

A. In the Google Cloud Console, increase the size of the persistent disk, and use the resize2fs command to extend the disk.
B. In the Google Cloud Console, increase the size of the persistent disk, and use the fdisk command to verify that the new space is ready to use.
C. In the Google Cloud Console, create a snapshot of the persistent disk, restore the snapshot to a new larger disk, unmount the old disk, mount the new disk, and restart the database service.
D. In the Google Cloud Console, create a new persistent disk attached to the VM, and configure the database service to move the files to the new disk.
Suggested answer: A

Explanation:

https://cloud.google.com/compute/docs/disks/resize-persistent-disk#resize_partitions
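The same online resize can be done from the CLI. The sketch below assumes a non-partitioned data disk attached as /dev/sdb; all names and sizes are hypothetical:

```shell
# Grow the persistent disk while the VM keeps running
gcloud compute disks resize data-disk --size=500GB --zone=us-central1-a

# On the VM: grow the ext4 filesystem online (no repartitioning
# needed because the disk has no partition table in this sketch)
sudo resize2fs /dev/sdb
```

Because ext4 supports online resizing, the database never has to be stopped, which is what makes option A downtime-free.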

You want to migrate your on-premises PostgreSQL database to Compute Engine. You need to migrate this database with the minimum downtime possible. What should you do?

A. Perform a full backup of your on-premises PostgreSQL, and then, in the migration window, perform an incremental backup.
B. Create a read replica on Cloud SQL, and then promote it to a read/write standalone instance.
C. Use Database Migration Service to migrate your database.
D. Create a hot standby on Compute Engine, and use PgBouncer to switch over the connections.
Suggested answer: D

Explanation:

PgBouncer is a lightweight PostgreSQL connection pooler that maintains a pool of connections for each database and user combination, either creating a new server connection for a client or reusing an existing one; it can handle several thousand client connections at a time. With a hot standby on Compute Engine kept in sync via replication, PgBouncer lets you switch client connections over to the new primary at cutover, minimizing downtime. https://medium.com/google-cloud/increasing-cloud-sql-postgresql-max-connections-w-pgbouncer-kubernetes-engine-49b0b2894820
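A minimal PgBouncer configuration illustrating the cutover idea in option D: clients connect to PgBouncer, and the `host` entry is repointed at the Compute Engine standby after promotion. Hosts, ports, and database names below are hypothetical:

```ini
[databases]
; repoint this host at the promoted Compute Engine standby at cutover
appdb = host=10.128.0.5 port=5432 dbname=appdb

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
max_client_conn = 2000
default_pool_size = 50
```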

You have an application that sends banking events to Bigtable cluster-a in us-east. You decide to add cluster-b in us-central1. Cluster-a replicates data to cluster-b. You need to ensure that Bigtable continues to accept read and write requests if one of the clusters becomes unavailable and that requests are routed automatically to the other cluster. What deployment strategy should you use?

A. Use the default app profile with single-cluster routing.
B. Use the default app profile with multi-cluster routing.
C. Create a custom app profile with multi-cluster routing.
D. Create a custom app profile with single-cluster routing.
Suggested answer: C

Explanation:

https://cloud.google.com/bigtable/docs/app-profiles#default-app-profile

The question states that a single cluster existed first, then a second cluster was added. Google's documentation states: ''if you created the instance with one cluster, the default app profile uses single-cluster routing. This ensures that adding additional clusters later does not change the behavior of your existing applications.'' Simply adding a second cluster does not change the default profile from single-cluster routing to multi-cluster routing. Since you need multi-cluster routing, you need a custom app profile, so C is correct.
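As a sketch of option C, a custom app profile with multi-cluster routing can be created with `gcloud`; the instance and profile names below are hypothetical:

```shell
# Custom app profile routing requests to any available cluster,
# so traffic fails over automatically if cluster-a or cluster-b is down
gcloud bigtable app-profiles create multi-cluster-profile \
    --instance=banking-instance \
    --route-any \
    --description="Automatic failover between cluster-a and cluster-b"
```

The application then specifies this app profile ID when connecting, instead of relying on the default profile.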

Your organization works with sensitive data that requires you to manage your own encryption keys. You are working on a project that stores that data in a Cloud SQL database. You need to ensure that stored data is encrypted with your keys. What should you do?

A. Export data periodically to a Cloud Storage bucket protected by Customer-Supplied Encryption Keys.
B. Use Cloud SQL Auth proxy.
C. Connect to Cloud SQL using a connection that has SSL encryption.
D. Use customer-managed encryption keys with Cloud SQL.
Suggested answer: D
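As a sketch of option D, a Cloud SQL instance can be created with a customer-managed encryption key (CMEK) from Cloud KMS; the project, region, key ring, and key names below are hypothetical, and the Cloud SQL service account must first be granted access to the key:

```shell
# Create a Cloud SQL instance encrypted with a customer-managed key
gcloud sql instances create secure-instance \
    --database-version=POSTGRES_14 \
    --region=us-central1 \
    --disk-encryption-key=projects/my-project/locations/us-central1/keyRings/my-ring/cryptoKeys/my-key
```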

Your team is building an application that stores and analyzes streaming time series financial data. You need a database solution that can perform time series-based scans with sub-second latency. The solution must scale into the hundreds of terabytes and be able to write up to 10k records per second and read up to 200 MB per second. What should you do?

A. Use Firestore.
B. Use Bigtable.
C. Use BigQuery.
D. Use Cloud Spanner.
Suggested answer: B

Explanation:

Financial data, such as transaction histories, stock prices, and currency exchange rates.

https://cloud.google.com/bigtable/docs/overview#what-its-good-for

With SSD:

Reads - up to 10,000 rows per second

Writes - up to 10,000 rows per second

Scans - up to 220 MB/s

https://cloud.google.com/bigtable/docs/performance#typical-workloads
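To make time series-based scans efficient in Bigtable, the row key typically encodes the series identifier followed by a timestamp component, so that rows for one instrument are contiguous. A sketch using the `cbt` CLI, with hypothetical project, instance, table, and key names (the reversed-timestamp suffix is a common pattern for most-recent-first scans, not something stated in the question):

```shell
# Create a table and column family for quotes (hypothetical names)
cbt -project my-project -instance ts-instance createtable prices
cbt -project my-project -instance ts-instance createfamily prices quote

# Row key = symbol prefix + reversed timestamp, so recent rows
# for a symbol sort first and range scans stay cheap
cbt -project my-project -instance ts-instance \
    set prices 'GOOG#9223372035854775807' quote:price=102.5
```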

You are designing a new gaming application that uses a highly transactional relational database to store player authentication and inventory data in Google Cloud. You want to launch the game in multiple regions. What should you do?

A. Use Cloud Spanner to deploy the database.
B. Use Bigtable with clusters in multiple regions to deploy the database.
C. Use BigQuery to deploy the database.
D. Use Cloud SQL with a regional read replica to deploy the database.
Suggested answer: A

Explanation:

Cloud Spanner is a fully managed, mission-critical, relational database service that offers transactional consistency at global scale, automatic, synchronous replication for high availability, and support for two SQL dialects: Google Standard SQL (ANSI 2011 with extensions) and PostgreSQL.
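For a multi-region launch, Spanner offers multi-region instance configurations. A sketch, with a hypothetical instance name and sizing (`nam3` is one of Spanner's North American multi-region configurations; pick whichever configuration matches your launch regions):

```shell
# Multi-region Cloud Spanner instance for the game backend
gcloud spanner instances create game-db \
    --config=nam3 \
    --description="Player auth and inventory" \
    --nodes=3
```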

You are designing a database strategy for a new web application in one region. You need to minimize write latency. What should you do?

A. Use Cloud SQL with cross-region replicas.
B. Use high availability (HA) Cloud SQL with multiple zones.
C. Use zonal Cloud SQL without high availability (HA).
D. Use Cloud Spanner in a regional configuration.
Suggested answer: D

Explanation:

A regional Cloud Spanner configuration keeps all replicas within a single region, which minimizes write latency for an application served from that region while still replicating synchronously across zones. https://cloud.google.com/spanner/docs/instance-configurations

You are running a large, highly transactional application on Oracle Real Application Cluster (RAC) that is multi-tenant and uses shared storage. You need a solution that ensures high-performance throughput and a low-latency connection between applications and databases. The solution must also support existing Oracle features and provide ease of migration to Google Cloud. What should you do?

A. Migrate to Compute Engine.
B. Migrate to Bare Metal Solution for Oracle.
C. Migrate to Google Kubernetes Engine (GKE).
D. Migrate to Google Cloud VMware Engine.
Suggested answer: B

Explanation:

Oracle software is neither licensed nor supported on Compute Engine. The only Google Cloud platform that supports Oracle RAC and all existing Oracle features, with low-latency connectivity to applications running in Google Cloud, is Bare Metal Solution.

Total 132 questions