ExamGecko

Google Professional Cloud Database Engineer Practice Test - Questions Answers, Page 7

You need to redesign the architecture of an application that currently uses Cloud SQL for PostgreSQL. The users of the application complain about slow query response times. You want to enhance your application architecture to offer sub-millisecond query latency. What should you do?

A. Configure Firestore, and modify your application to offload queries.
B. Configure Bigtable, and modify your application to offload queries.
C. Configure Cloud SQL for PostgreSQL read replicas to offload queries.
D. Configure Memorystore, and modify your application to offload queries.
Suggested answer: D

Explanation:

Sub-millisecond query latency on Google Cloud points to Memorystore, the managed in-memory (Redis or Memcached) service. Bigtable delivers single-digit-millisecond latency, not sub-millisecond, and as a wide-column NoSQL store it is a poor fit for offloading queries from a relational Cloud SQL for PostgreSQL database. Read replicas (option C) add read capacity but still serve from disk-backed PostgreSQL, so they cannot guarantee sub-millisecond responses.
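As a hedged illustration of how an application might offload reads to Memorystore, the cache-aside sketch below uses a plain dict as a stand-in for a Redis client; the function and key names are hypothetical, not part of the question:

```python
# Cache-aside sketch: check the cache first, fall back to the database,
# then populate the cache so later reads are served from memory.
# `cache` stands in for a Memorystore (Redis) client; any object with
# get() and item assignment works, so a dict-backed fake is used here.

def get_user(cache, db_lookup, user_id):
    key = f"user:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return cached                  # cache hit: in-memory, sub-millisecond path
    row = db_lookup(user_id)           # cache miss: query Cloud SQL
    cache[key] = row                   # populate for subsequent reads
    return row

# Dict-backed stand-in for Redis, plus a fake database lookup that
# records how often the "database" is actually hit.
fake_cache = {}
calls = []

def fake_db(user_id):
    calls.append(user_id)
    return {"id": user_id, "name": "alice"}

first = get_user(fake_cache, fake_db, 42)   # miss: hits the database
second = get_user(fake_cache, fake_db, 42)  # hit: served from the cache
```

With a real Memorystore client, `fake_cache` would be replaced by a Redis connection and a TTL would typically be set on each key to bound staleness.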

You need to migrate existing databases from Microsoft SQL Server 2016 Standard Edition on a single Windows Server 2019 Datacenter Edition to a single Cloud SQL for SQL Server instance. During the discovery phase of your project, you notice that your on-premises server peaks at around 25,000 read IOPS. You need to ensure that your Cloud SQL instance is sized appropriately to maximize read performance. What should you do?

A. Create a SQL Server 2019 Standard on Standard machine type with 4 vCPUs, 15 GB of RAM, and 800 GB of solid-state drive (SSD).
B. Create a SQL Server 2019 Standard on High Memory machine type with at least 16 vCPUs, 104 GB of RAM, and 200 GB of SSD.
C. Create a SQL Server 2019 Standard on High Memory machine type with 16 vCPUs, 104 GB of RAM, and 4 TB of SSD.
D. Create a SQL Server 2019 Enterprise on High Memory machine type with 16 vCPUs, 104 GB of RAM, and 500 GB of SSD.
Suggested answer: C

Explanation:

Zonal SSD persistent disk performance scales with disk size at roughly 30 read IOPS per GB, so sustaining 25,000 read IOPS requires at least 25,000 / 30 ≈ 834 GB of SSD. Option C, with 4 TB of SSD, is the only answer that exceeds this threshold. https://cloud.google.com/compute/docs/disks/performance
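The sizing arithmetic can be checked directly; the 30-IOPS-per-GB figure below is the documented per-GB read rate for zonal SSD persistent disk (note that per-instance IOPS is also capped by vCPU count, which this simple check ignores):

```python
import math

READ_IOPS_PER_GB = 30     # zonal SSD persistent disk: ~30 read IOPS per GB
required_iops = 25_000    # observed on-premises peak read IOPS

# Minimum SSD size needed to sustain the required read IOPS.
min_ssd_gb = math.ceil(required_iops / READ_IOPS_PER_GB)

# Of the answer choices, only the 4 TB option exceeds the minimum.
options_gb = {"A": 800, "B": 200, "C": 4096, "D": 500}
sufficient = [k for k, gb in options_gb.items() if gb >= min_ssd_gb]
```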

You are managing a small Cloud SQL instance for developers to do testing. The instance is not critical and has a recovery point objective (RPO) of several days. You want to minimize ongoing costs for this instance. What should you do?

A. Take no backups, and turn off transaction log retention.
B. Take one manual backup per day, and turn off transaction log retention.
C. Turn on automated backup, and turn off transaction log retention.
D. Turn on automated backup, and turn on transaction log retention.
Suggested answer: C

Explanation:

Automated backups are sufficient for an RPO of several days, and turning off transaction log retention disables point-in-time recovery, avoiding the cost of storing the logs. Manual backups (option B) persist until explicitly deleted and accumulate storage costs.

https://cloud.google.com/sql/docs/mysql/backup-recovery/backups
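A minimal sketch of this configuration with the gcloud CLI; the instance name is a placeholder, and the point-in-time-recovery flag shown applies to PostgreSQL and SQL Server instances (MySQL uses `--no-enable-bin-log` instead):

```shell
# Enable automated daily backups but disable transaction log retention
# (point-in-time recovery) to minimize storage costs on a dev instance.
gcloud sql instances patch dev-instance \
    --backup-start-time=03:00 \
    --no-enable-point-in-time-recovery
```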

You manage a meeting booking application that uses Cloud SQL. During an important launch, the Cloud SQL instance went through a maintenance event that resulted in a downtime of more than 5 minutes and adversely affected your production application. You need to immediately address the maintenance issue to prevent any unplanned events in the future. What should you do?

A. Set your production instance's maintenance window to non-business hours.
B. Migrate the Cloud SQL instance to Cloud Spanner to avoid any future disruptions due to maintenance.
C. Contact Support to understand why your Cloud SQL instance had a downtime of more than 5 minutes.
D. Use Cloud Scheduler to schedule a maintenance window of no longer than 5 minutes.
Suggested answer: A

Explanation:

Cloud SQL lets you define a maintenance window so that disruptive updates are applied only during hours you choose. Setting the window to non-business hours is the immediate, low-effort fix; migrating to Spanner (option B) is a major re-architecture, and Cloud Scheduler (option D) has no control over Cloud SQL maintenance.

You are designing a highly available (HA) Cloud SQL for PostgreSQL instance that will be used by 100 databases. Each database contains 80 tables that were migrated from your on-premises environment to Google Cloud. The applications that use these databases are located in multiple regions in the US, and you need to ensure that read and write operations have low latency. What should you do?

A. Deploy 2 Cloud SQL instances in the us-central1 region with HA enabled, and create read replicas in us-east1 and us-west1.
B. Deploy 2 Cloud SQL instances in the us-central1 region, and create read replicas in us-east1 and us-west1.
C. Deploy 4 Cloud SQL instances in the us-central1 region with HA enabled, and create read replicas in us-central1, us-east1, and us-west1.
D. Deploy 4 Cloud SQL instances in the us-central1 region, and create read replicas in us-central1, us-east1 and us-west1.
Suggested answer: A

Explanation:

With 100 databases of 80 tables each (8,000 tables in total), splitting the load across 2 HA-enabled instances keeps each instance within Cloud SQL's per-instance limits, while cross-region read replicas in us-east1 and us-west1 serve low-latency reads to applications in those regions. HA is required for the availability goal, which rules out options B and D.

https://cloud.google.com/sql/docs/mysql/quotas#table_limit

You work in the logistics department. Your data analysis team needs daily extracts from Cloud SQL for MySQL to train a machine learning model. The model will be used to optimize next-day routes. You need to export the data in CSV format. You want to follow Google-recommended practices. What should you do?

A. Use Cloud Scheduler to trigger a Cloud Function that will run a select * from table(s) query to call the cloudsql.instances.export API.
B. Use Cloud Scheduler to trigger a Cloud Function through Pub/Sub to call the cloudsql.instances.export API.
C. Use Cloud Composer to orchestrate an export by calling the cloudsql.instances.export API.
D. Use Cloud Composer to execute a select * from table(s) query and export results.
Suggested answer: B

Explanation:

The Google-recommended pattern for scheduled exports is Cloud Scheduler publishing to a Pub/Sub topic that triggers a Cloud Function, which calls the Cloud SQL Admin API export method. Running a select * yourself (options A and D) bypasses the managed export path, and Cloud Composer (option C) is heavier-weight than this use case requires.

https://cloud.google.com/blog/topics/developers-practitioners/scheduling-cloud-sql-exports-using-cloud-functions-and-cloud-scheduler
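A hedged sketch of the Cloud Function body: it builds the request for the Cloud SQL Admin API's `instances.export` method for a CSV export to Cloud Storage. The project, instance, bucket, database, and table names are placeholders, and the actual API call (via `googleapiclient`) is stubbed out here so the request-building logic stands alone:

```python
from datetime import date

def build_export_request(bucket, database, table):
    """Build the body for a Cloud SQL Admin API instances.export call
    that writes a CSV extract of one table to Cloud Storage."""
    uri = f"gs://{bucket}/{database}/{date.today().isoformat()}/{table}.csv"
    return {
        "exportContext": {
            "kind": "sql#exportContext",
            "fileType": "CSV",
            "uri": uri,
            "databases": [database],
            "csvExportOptions": {
                "selectQuery": f"SELECT * FROM {table}",
            },
        },
    }

# In the real Cloud Function, this body would be passed to
# sqladmin.instances().export(project=..., instance=..., body=body).
body = build_export_request("daily-exports", "logistics", "routes")
```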

You are choosing a database backend for a new application. The application will ingest data points from IoT sensors. You need to ensure that the application can scale up to millions of requests per second with sub-10ms latency and store up to 100 TB of history. What should you do?

A. Use Cloud SQL with read replicas for throughput.
B. Use Firestore, and rely on automatic serverless scaling.
C. Use Memorystore for Memcached, and add nodes as necessary to achieve the required throughput.
D. Use Bigtable, and add nodes as necessary to achieve the required throughput.
Suggested answer: D

Explanation:

Bigtable scales horizontally to millions of requests per second with single-digit-millisecond latency and petabytes of storage, which comfortably covers 100 TB of history. Memorystore (https://cloud.google.com/memorystore/docs/redis/redis-overview) is an in-memory cache and cannot economically hold 100 TB, and neither Cloud SQL nor Firestore is designed for this write throughput.
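Throughput at this scale also depends heavily on row key design; a common Bigtable pattern for time-series IoT data keys rows by sensor ID plus a reversed timestamp, so the most recent readings for a sensor are contiguous and sort first. A small sketch (the key format and sensor names are illustrative, not prescribed by the question):

```python
# Upper bound on epoch-milliseconds, used to reverse the sort order so
# that newer readings produce lexicographically smaller row keys.
MAX_TS = 10**13

def row_key(sensor_id, ts_millis):
    """Bigtable row key: sensor ID plus zero-padded reversed timestamp.
    A prefix scan on the sensor ID then returns newest readings first."""
    reversed_ts = MAX_TS - ts_millis
    return f"{sensor_id}#{reversed_ts:013d}"

# A newer reading sorts before an older one for the same sensor.
k_old = row_key("sensor-17", 1_700_000_000_000)
k_new = row_key("sensor-17", 1_700_000_500_000)
```

Prefixing with the sensor ID also spreads writes across tablets when sensor IDs are well distributed, avoiding the hotspotting that a plain timestamp prefix would cause.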

You are designing a payments processing application on Google Cloud. The application must continue to serve requests and avoid any user disruption if a regional failure occurs. You need to use AES-256 to encrypt data in the database, and you want to control where you store the encryption key. What should you do?

A. Use Cloud Spanner with a customer-managed encryption key (CMEK).
B. Use Cloud Spanner with default encryption.
C. Use Cloud SQL with a customer-managed encryption key (CMEK).
D. Use Bigtable with default encryption.
Suggested answer: A

Explanation:

Default encryption already uses AES-256, but the question requires control over where the encryption key is stored, which only a customer-managed encryption key (CMEK) in Cloud KMS provides. Cloud Spanner in a multi-region configuration also continues serving through a regional failure, which a Cloud SQL instance (option C) cannot guarantee.

You are managing a Cloud SQL for MySQL environment in Google Cloud. You have deployed a primary instance in Zone A and a read replica instance in Zone B, both in the same region. You are notified that the replica instance in Zone B was unavailable for 10 minutes. You need to ensure that the read replica instance is still working. What should you do?

A. Use the Google Cloud Console or gcloud CLI to manually create a new clone database.
B. Use the Google Cloud Console or gcloud CLI to manually create a new failover replica from backup.
C. Verify that the new replica is created automatically.
D. Start the original primary instance and resume replication.
Suggested answer: C

Explanation:

Recovery process: once Zone B becomes available again, Cloud SQL automatically recovers the impacted read replica in three steps:

1. Synchronization: Cloud SQL compares the data in the recovered read replica with the primary instance in Zone A. If any data diverged during the unavailability period, Cloud SQL synchronizes the replica with the primary to ensure consistency.

2. Catch-up replication: the recovered replica applies the changes recorded in the primary instance's binary logs (binlogs) during its unavailability to bring itself up to date.

3. Resuming read traffic: once synchronization and catch-up replication are complete, the replica in Zone B resumes normal operation, serving read traffic and staying updated with subsequent changes from the primary.
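To confirm the replica recovered, its state and replication status can be checked; the instance name below is a placeholder:

```shell
# Check that the replica instance is RUNNABLE again after the zonal outage.
gcloud sql instances describe replica-b --format="value(state)"

# From a MySQL session on the replica, inspect replication status;
# Seconds_Behind_Master should trend back to 0 as catch-up completes.
mysql -e "SHOW SLAVE STATUS\G"
```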

You are migrating an on-premises application to Google Cloud. The application requires a high availability (HA) PostgreSQL database to support business-critical functions. Your company's disaster recovery strategy requires a recovery time objective (RTO) and recovery point objective (RPO) within 30 minutes of failure. You plan to use a Google Cloud managed service. What should you do to maximize uptime for your application?

A. Deploy Cloud SQL for PostgreSQL in a regional configuration. Create a read replica in a different zone in the same region and a read replica in another region for disaster recovery.
B. Deploy Cloud SQL for PostgreSQL in a regional configuration with HA enabled. Take periodic backups, and use this backup to restore to a new Cloud SQL for PostgreSQL instance in another region during a disaster recovery event.
C. Deploy Cloud SQL for PostgreSQL in a regional configuration with HA enabled. Create a cross-region read replica, and promote the read replica as the primary node for disaster recovery.
D. Migrate the PostgreSQL database to multi-regional Cloud Spanner so that a single region outage will not affect your application. Update the schema to support Cloud Spanner data types, and refactor the application.
Suggested answer: C

Explanation:

The HA configuration handles zonal failures automatically, and a cross-region read replica can be promoted to primary during a regional outage, meeting the 30-minute RTO and RPO. Restoring from backup in another region (option B) would likely exceed the RTO, and migrating to Cloud Spanner (option D) requires schema changes and application refactoring.
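During a regional disaster-recovery event, the cross-region replica is promoted with a single command; the instance name is a placeholder, and because promotion is one-way it should be run only during an actual failover or a planned DR test:

```shell
# Promote the cross-region read replica to a standalone primary instance.
gcloud sql instances promote-replica dr-replica-east
```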

Total 132 questions