Google Professional Cloud Database Engineer Practice Test - Questions Answers, Page 9

You are configuring a brand new Cloud SQL for PostgreSQL database instance in Google Cloud. Your application team wants you to deploy one primary instance, one standby instance, and one read replica instance. You need to ensure that you are following Google-recommended practices for high availability. What should you do?

A. Configure the primary instance in zone A, the standby instance in zone C, and the read replica in zone B, all in the same region.
B. Configure the primary and standby instances in zone A and the read replica in zone B, all in the same region.
C. Configure the primary instance in one region, the standby instance in a second region, and the read replica in a third region.
D. Configure the primary, standby, and read replica instances in zone A, all in the same region.
Suggested answer: A

Explanation:

In the high availability configuration, the standby instance must be in a different zone from the primary within the same region; placing the read replica in a third zone further limits the impact of a single zonal outage.

https://cloud.google.com/sql/docs/postgres/high-availability#failover-overview
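
A minimal gcloud sketch of this layout (instance names, machine type, and region are hypothetical): --availability-type=REGIONAL creates the primary with a standby, --secondary-zone pins the standby's zone, and the read replica is created in a third zone.

gcloud sql instances create prod-pg-primary \
    --database-version=POSTGRES_14 \
    --tier=db-custom-4-16384 \
    --region=us-central1 \
    --zone=us-central1-a \
    --secondary-zone=us-central1-c \
    --availability-type=REGIONAL

gcloud sql instances create prod-pg-replica \
    --master-instance-name=prod-pg-primary \
    --tier=db-custom-4-16384 \
    --region=us-central1 \
    --zone=us-central1-b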

You are running a transactional application on Cloud SQL for PostgreSQL in Google Cloud. The database is running in a high availability configuration within one region. You have encountered issues with data and want to restore to the last known pristine version of the database. What should you do?

A. Create a clone database from a read replica database, and restore the clone in the same region.
B. Create a clone database from a read replica database, and restore the clone into a different zone.
C. Use the Cloud SQL point-in-time recovery (PITR) feature. Restore the copy from two hours ago to a new database instance.
D. Use the Cloud SQL database import feature. Import last week's dump file from Cloud Storage.
Suggested answer: C

Explanation:

Importing last week's dump file would be slow for a large database and would discard every change made since the dump was taken. Point-in-time recovery restores the instance to a specific moment just before the data issue occurred, into a new instance, so the last known pristine state is recovered with minimal data loss.
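
A minimal sketch of PITR via gcloud (instance names and timestamp are hypothetical); the clone command restores the source instance's state as of the given RFC 3339 timestamp into a new instance:

gcloud sql instances clone prod-pg-primary prod-pg-restored \
    --point-in-time='2024-05-01T07:00:00.000Z'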

Your organization has a busy transactional Cloud SQL for MySQL instance. Your analytics team needs access to the data so they can build monthly sales reports. You need to provide data access to the analytics team without adversely affecting performance. What should you do?

A. Create a read replica of the database, provide the database IP address, username, and password to the analytics team, and grant read access to required tables to the team.
B. Create a read replica of the database, enable the cloudsql.iam_authentication flag on the replica, and grant read access to required tables to the analytics team.
C. Enable the cloudsql.iam_authentication flag on the primary database instance, and grant read access to required tables to the analytics team.
D. Provide the database IP address, username, and password of the primary database instance to the analytics team, and grant read access to required tables to the team.
Suggested answer: B

Explanation:

'Read replicas do not have the cloudsql.iam_authentication flag enabled automatically when it is enabled on the primary instance.' https://cloud.google.com/sql/docs/postgres/replication/create-replica#configure_iam_replicas
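
A minimal sketch (instance names hypothetical). Note that on Cloud SQL for MySQL the flag is spelled cloudsql_iam_authentication, while the cloudsql.iam_authentication spelling in the cited documentation applies to PostgreSQL; either way the flag must be set on the replica explicitly because it is not inherited from the primary.

gcloud sql instances create sales-mysql-replica \
    --master-instance-name=sales-mysql-primary \
    --region=us-central1

gcloud sql instances patch sales-mysql-replica \
    --database-flags=cloudsql_iam_authentication=on

gcloud sql users create analyst@example.com \
    --instance=sales-mysql-replica \
    --type=cloud_iam_user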

Your organization stores marketing data such as customer preferences and purchase history on Bigtable. The consumers of this database are predominantly data analysts and operations users. You receive a service ticket from the database operations department citing poor database performance between 9 AM-10 AM every day. The application team has confirmed no latency from their logs. A new cohort of pilot users that is testing a dataset loaded from a third-party data provider is experiencing poor database performance. Other users are not affected. You need to troubleshoot the issue. What should you do?

A. Isolate the data analysts and operations user groups to use different Bigtable instances.
B. Check the Cloud Monitoring table/bytes_used metric from Bigtable.
C. Use Key Visualizer for Bigtable.
D. Add more nodes to the Bigtable cluster.
Suggested answer: C

Explanation:

Key Visualizer generates heatmaps of Bigtable access patterns broken down by row key over time, so you can see whether the pilot users' newly loaded dataset is creating a hotspot during the 9 AM-10 AM window.

https://cloud.google.com/bigtable/docs/performance#troubleshooting

Your company is developing a new global transactional application that must be ACID-compliant and have 99.999% availability. You are responsible for selecting the appropriate Google Cloud database to serve as a datastore for this new application. What should you do?

A. Use Firestore.
B. Use Cloud Spanner.
C. Use Cloud SQL.
D. Use Bigtable.
Suggested answer: B
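
A minimal sketch of provisioning such a datastore (instance name, node count, and configuration are hypothetical); Cloud Spanner provides ACID transactions and offers a 99.999% availability SLA when deployed in a multi-region configuration:

gcloud spanner instances create global-txn-db \
    --config=nam-eur-asia1 \
    --description="Global transactional application" \
    --nodes=3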

You want to migrate your PostgreSQL database from another cloud provider to Cloud SQL. You plan on using Database Migration Service and need to assess the impact of any known limitations. What should you do? (Choose two.)

A. Identify whether the database has over 512 tables.
B. Identify all tables that do not have a primary key.
C. Identify all tables that do not have at least one foreign key.
D. Identify whether the source database is encrypted using the pgcrypto extension.
E. Identify whether the source database uses customer-managed encryption keys (CMEK).
Suggested answer: B, E
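
Tables without primary keys are the limitation that is easiest to inventory directly on the source database before the migration. A minimal sketch using psql (connection details hypothetical):

psql "host=SOURCE_HOST dbname=SOURCE_DB user=postgres" -c "
SELECT t.table_schema, t.table_name
FROM information_schema.tables t
LEFT JOIN information_schema.table_constraints c
  ON c.table_schema = t.table_schema
 AND c.table_name = t.table_name
 AND c.constraint_type = 'PRIMARY KEY'
WHERE t.table_type = 'BASE TABLE'
  AND t.table_schema NOT IN ('pg_catalog', 'information_schema')
  AND c.constraint_name IS NULL;"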

Your organization is running a Firestore-backed Firebase app that serves the same top ten news stories on a daily basis to a large global audience. You want to optimize content delivery while decreasing cost and latency. What should you do?

A. Enable serializable isolation in the Firebase app.
B. Deploy a US multi-region Firestore location.
C. Build a Firestore bundle, and deploy bundles to Cloud CDN.
D. Create a Firestore index on the news story date.
Suggested answer: C

Explanation:

Firestore bundles let you pre-package the results of common queries, such as the daily top ten stories, and serve them as static files from Cloud CDN. Serving cached bundles to a global audience reduces both Firestore read costs and latency; changing the isolation level or adding an index would do neither.

You are managing a Cloud SQL for PostgreSQL instance in Google Cloud. You need to test the high availability of your Cloud SQL instance by performing a failover. You want to use the gcloud command.

What should you do?

A. Use gcloud sql instances failover <PrimaryInstanceName>.
B. Use gcloud sql instances failover <ReplicaInstanceName>.
C. Use gcloud sql instances promote-replica <PrimaryInstanceName>.
D. Use gcloud sql instances promote-replica <ReplicaInstanceName>.
Suggested answer: A
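
The failover command is run against the HA primary instance; promote-replica applies to read replicas, not HA standbys. A worked example with a hypothetical instance name:

gcloud sql instances failover my-ha-primary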

You use Python scripts to generate weekly SQL reports to assess the state of your databases and determine whether you need to reorganize tables or run statistics. You want to automate this report but need to minimize operational costs and overhead. What should you do?

A. Create a VM in Compute Engine, and run a cron job.
B. Create a Cloud Composer instance, and create a directed acyclic graph (DAG).
C. Create a Cloud Function, and call the Cloud Function using Cloud Scheduler.
D. Create a Cloud Function, and call the Cloud Function from a Cloud Tasks queue.
Suggested answer: C

Explanation:

Cloud Scheduler triggers actions at regular fixed intervals, whereas Cloud Tasks triggers actions based on how the individual task object is configured.

Reference: https://cloud.google.com/tasks/docs/comp-tasks-sched
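
A minimal sketch of this setup (function name, source path, schedule, URI, and service account are all hypothetical); the existing Python report script is deployed behind an HTTP-triggered Cloud Function, and a Cloud Scheduler job invokes it once a week:

gcloud functions deploy weekly-sql-report \
    --runtime=python311 \
    --trigger-http \
    --entry-point=run_report \
    --source=./report \
    --no-allow-unauthenticated

gcloud scheduler jobs create http weekly-sql-report-job \
    --location=us-central1 \
    --schedule="0 6 * * 1" \
    --uri="https://us-central1-PROJECT_ID.cloudfunctions.net/weekly-sql-report" \
    --http-method=POST \
    --oidc-service-account-email=scheduler-invoker@PROJECT_ID.iam.gserviceaccount.com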

Your company is using Cloud SQL for MySQL with an internal (private) IP address and wants to replicate some tables into BigQuery in near-real time for analytics and machine learning. You need to ensure that replication is fast and reliable and uses Google-managed services. What should you do?

A. Develop a custom data replication service to send data into BigQuery.
B. Use Cloud SQL federated queries.
C. Use Database Migration Service to replicate tables into BigQuery.
D. Use Datastream to capture changes, and use Dataflow to write those changes to BigQuery.
Suggested answer: D

Explanation:

"Datastream is a serverless and easy-to-use Change Data Capture (CDC) and replication service that allows you to synchronize data across heterogeneous databases, storage systems, and applications reliably and with minimal latency. Datastream supports change data streaming from Oracle and MySQL databases to Google Cloud Storage (GCS). The service offers streamlined integration with Dataflow templates to power up to date materialized views in BigQuery for analytics, replicate their databases into Cloud SQL or Cloud Spanner for database synchronization, or leverage the event stream directly from GCS to realize event-driven architectures."
