Google Professional Cloud Database Engineer Practice Test - Questions Answers, Page 13
You are choosing a new database backend for an existing application. The current database is running PostgreSQL on an on-premises VM and is managed by a database administrator and operations team. The application data is relational and has light traffic. You want to minimize costs and the migration effort for this application. What should you do?

A. Migrate the existing database to Firestore.
B. Migrate the existing database to Cloud SQL for PostgreSQL.
C. Migrate the existing database to Cloud Spanner.
D. Migrate the existing database to PostgreSQL running on Compute Engine.
Suggested answer: B

Explanation:

You could migrate to Spanner using its PostgreSQL dialect, but the light traffic does not justify Spanner, and it would not be the cheapest option. A like-for-like migration to a Compute Engine VM is possible, but it defeats the goal of minimizing migration effort and keeps the operational burden. The cheapest and easiest path is Database Migration Service to Cloud SQL for PostgreSQL.

Your organization is currently updating an existing corporate application that is running in another public cloud to access managed database services in Google Cloud. The application will remain in the other public cloud while the database is migrated to Google Cloud. You want to follow Google-recommended practices for authentication. You need to minimize user disruption during the migration. What should you do?

A. Use workload identity federation to impersonate a service account.
B. Ask existing users to set their Google password to match their corporate password.
C. Migrate the application to Google Cloud, and use Identity and Access Management (IAM).
D. Use Google Workspace Password Sync to replicate passwords into Google Cloud.
Suggested answer: A

Explanation:

Setting or syncing passwords creates user disruption, which eliminates B and D. C requires moving the application to Google Cloud, which the scenario rules out. That leaves A. From Google's documentation: ''Traditionally, applications running outside Google Cloud can use service account keys to access Google Cloud resources. However, service account keys are powerful credentials, and can present a security risk if they are not managed correctly. With identity federation, you can use Identity and Access Management (IAM) to grant external identities IAM roles, including the ability to impersonate service accounts. This approach eliminates the maintenance and security burden associated with service account keys.'' https://cloud.google.com/iam/docs/workload-identity-federation
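As a concrete illustration, the federation setup the documentation describes can be sketched with gcloud. AWS is used as the example external cloud; the pool, provider, project number, service account, and AWS account ID below are all hypothetical, and the flag set is abbreviated:

```shell
# Sketch: let a workload in another cloud impersonate a service account via
# workload identity federation (all names and IDs are hypothetical).

# 1. Create a workload identity pool.
gcloud iam workload-identity-pools create corp-app-pool \
    --location=global \
    --display-name="Corporate app pool"

# 2. Register the external cloud as an identity provider in the pool.
gcloud iam workload-identity-pools providers create-aws aws-provider \
    --location=global \
    --workload-identity-pool=corp-app-pool \
    --account-id=123456789012

# 3. Allow identities from the pool to impersonate the database client
#    service account -- no service account keys are ever downloaded.
gcloud iam service-accounts add-iam-policy-binding \
    db-client@my-project.iam.gserviceaccount.com \
    --role=roles/iam.workloadIdentityUser \
    --member="principalSet://iam.googleapis.com/projects/1234567890/locations/global/workloadIdentityPools/corp-app-pool/*"
```

The application in the other cloud then exchanges its native credential for short-lived Google credentials, so users never touch a Google password.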

You are configuring the networking of a Cloud SQL instance. The only application that connects to this database resides on a Compute Engine VM in the same project as the Cloud SQL instance. The VM and the Cloud SQL instance both use the same VPC network, and both have an external (public) IP address and an internal (private) IP address. You want to improve network security. What should you do?

A. Disable and remove the internal IP address assignment.
B. Disable both the external IP address and the internal IP address, and instead rely on Private Google Access.
C. Specify an authorized network with the CIDR range of the VM.
D. Disable and remove the external IP address assignment.
Suggested answer: D

Explanation:

An internal IP is always more secure than an external one, so removing the internal addresses does not make sense; eliminate A. Private Google Access applies to VM instances that have only internal IP addresses, so disabling the internal IPs and relying on Private Google Access does not make sense either; eliminate B. Specifying an authorized network when both resources are already on the same VPC network does not make sense; eliminate C. The way to improve network security is to disable the external IP addresses, since they are not needed.
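The fix in answer D can be sketched with gcloud. The instance, VM, and zone names here are hypothetical, and the Cloud SQL instance must already have its private IP configured before the public one is dropped:

```shell
# Sketch: remove the external IPs from both sides (names are hypothetical).

# Drop the public IP from the Cloud SQL instance; traffic continues over
# the private IP on the shared VPC network.
gcloud sql instances patch my-instance --no-assign-ip

# Drop the external IP from the Compute Engine VM by deleting its
# default access config.
gcloud compute instances delete-access-config app-vm \
    --zone=us-central1-a \
    --access-config-name="External NAT"
```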

You are managing two different applications: Order Management and Sales Reporting. Both applications interact with the same Cloud SQL for MySQL database. The Order Management application reads and writes to the database 24/7, but the Sales Reporting application is read-only. Both applications need the latest data. You need to ensure that the performance of the Order Management application is not affected by the Sales Reporting application. What should you do?

A. Create a read replica for the Sales Reporting application.
B. Create two separate databases in the instance, and perform dual writes from the Order Management application.
C. Use a Cloud SQL federated query for the Sales Reporting application.
D. Queue up all the requested reports in PubSub, and execute the reports at night.
Suggested answer: A
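The suggested answer maps to a single gcloud command. The instance names below are hypothetical:

```shell
# Sketch: create a read replica for the Sales Reporting application so its
# queries never touch the read/write primary (names are hypothetical).
gcloud sql instances create sales-reporting-replica \
    --master-instance-name=order-management-db
```

Pointing the reporting application at the replica isolates its read load while asynchronous replication keeps the data near-current.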

You are the DBA of an online tutoring application that runs on a Cloud SQL for PostgreSQL database. You are testing the implementation of the cross-regional failover configuration. The database in region R1 fails over successfully to region R2, and the database becomes available for the application to process data. During testing, certain scenarios of the application work as expected in region R2, but a few scenarios fail with database errors. The application-related database queries, when executed in isolation from Cloud SQL for PostgreSQL in region R2, work as expected. The application performs completely as expected when the database fails back to region R1. You need to identify the cause of the database errors in region R2. What should you do?

A. Determine whether the versions of Cloud SQL for PostgreSQL in regions R1 and R2 are different.
B. Determine whether the database patches of Cloud SQL for PostgreSQL in regions R1 and R2 are different.
C. Determine whether the failover of Cloud SQL for PostgreSQL from region R1 to region R2 is in progress or has completed successfully.
D. Determine whether Cloud SQL for PostgreSQL in region R2 is a near-real-time copy of region R1 but not an exact copy.
Suggested answer: D

Explanation:

Verify that the replica has processed all the transactions it has received from the primary. This ensures that when promoted, the replica reflects all transactions that were received before the primary became unavailable. https://cloud.google.com/sql/docs/postgres/replication/cross-region-replicas#verify_failover_criteria

Your company wants to migrate its MySQL, PostgreSQL, and Microsoft SQL Server on-premises databases to Google Cloud. You need a solution that provides near-zero downtime, requires no application changes, and supports change data capture (CDC). What should you do?

A. Use the native export and import functionality of the source database.
B. Create a database on Google Cloud, and use database links to perform the migration.
C. Create a database on Google Cloud, and use Dataflow for database migration.
D. Use Database Migration Service.
Suggested answer: D

Explanation:

Simplify migrations to the cloud. Available now for MySQL and PostgreSQL, with SQL Server and Oracle migrations in preview.

* Migrate to Cloud SQL and AlloyDB for PostgreSQL from on-premises, Google Cloud, or other clouds

* Replicate data continuously for minimal downtime migrations

* Serverless and easy to set up
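A minimal DMS flow can be sketched with gcloud. The resource names, region, and host below are hypothetical, and the flags shown are an abbreviated subset (the full connectivity setup has more required options):

```shell
# Sketch: continuous (CDC) migration of a MySQL source with Database
# Migration Service. Names, region, and host are hypothetical.

# Describe the on-premises source as a connection profile.
gcloud database-migration connection-profiles create mysql src-mysql \
    --region=us-central1 \
    --host=203.0.113.10 --port=3306 \
    --username=dms-user --prompt-for-password

# Create a continuous migration job into a Cloud SQL destination profile,
# which replicates ongoing changes for a near-zero-downtime cutover.
gcloud database-migration migration-jobs create mysql-to-cloudsql \
    --region=us-central1 \
    --type=CONTINUOUS \
    --source=src-mysql \
    --destination=dest-cloudsql
```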

Your DevOps team is using Terraform to deploy applications and Cloud SQL databases. After every new application change is rolled out, the environment is torn down and recreated, and the persistent database layer is lost. You need to prevent the database from being dropped. What should you do?

A. Set Terraform deletion_protection to true.
B. Rerun terraform apply.
C. Create a read replica.
D. Use point-in-time-recovery (PITR) to recover the database.
Suggested answer: A

Explanation:

From Google's documentation: ''For stateful resources, such as databases, ensure that deletion protection is enabled.'' The syntax is: lifecycle { prevent_destroy = true }. https://cloud.google.com/docs/terraform/best-practices-for-terraform#stateful-resources
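As a sketch in Terraform (the resource name, version, and tier are hypothetical), the Cloud SQL resource exposes a deletion_protection argument, and the lifecycle block from the quoted best practice adds a second, plan-time guard:

```hcl
# Sketch: protect a Cloud SQL instance from being destroyed on teardown.
# Name, version, region, and tier are hypothetical.
resource "google_sql_database_instance" "app_db" {
  name             = "app-db"
  database_version = "POSTGRES_14"
  region           = "us-central1"

  settings {
    tier = "db-g1-small"
  }

  # Terraform refuses to delete the instance while this is true.
  deletion_protection = true

  lifecycle {
    # Any plan that would destroy or replace this resource fails fast.
    prevent_destroy = true
  }
}
```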

You want to migrate an existing on-premises application to Google Cloud. Your application supports semi-structured data ingested from 100,000 sensors, and each sensor sends 10 readings per second from manufacturing plants. You need to make this data available for real-time monitoring and analysis. What should you do?

A. Deploy the database using Cloud SQL.
B. Use BigQuery, and load data in batches.
C. Deploy the database using Bigtable.
D. Deploy the database using Cloud Spanner.
Suggested answer: C

Explanation:

Bigtable is a scalable, fully managed, high-performance NoSQL database service that can handle semi-structured data and support real-time monitoring and analysis. Cloud SQL is a relational database service that does not support semi-structured data. BigQuery is a data warehouse service that is optimized for batch processing and analytics, not real-time monitoring. Cloud Spanner is a relational database service that supports semi-structured data with the JSON data type, but it is more expensive and complex than Bigtable for this use case.
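To make the fit concrete, here is a sketch of a Bigtable layout for this workload using the cbt CLI. The project, instance, table, column family, and row key scheme are all hypothetical:

```shell
# Sketch: a Bigtable table for sensor readings (all names are hypothetical).
cbt -project my-project -instance sensor-data createtable readings "families=r"

# Row keys prefix the sensor ID so one sensor's readings stay contiguous,
# while writes from 100,000 sensors spread across the keyspace instead of
# hotspotting on a pure-timestamp key.
cbt -project my-project -instance sensor-data set readings \
    sensor-00042#20240101120000 r:value=23.7
```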

You are a DBA of Cloud SQL for PostgreSQL. You want the applications to have password-less authentication for read and write access to the database. Which authentication mechanism should you use?

A. Use Identity and Access Management (IAM) authentication.
B. Use Managed Active Directory authentication.
C. Use Cloud SQL federated queries.
D. Use PostgreSQL database's built-in authentication.
Suggested answer: A

Explanation:

https://cloud.google.com/sql/docs/postgres/authentication
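As a sketch, IAM authentication is enabled with a database flag, after which IAM principals can be added as database users. The instance and service account names are hypothetical:

```shell
# Sketch: password-less IAM authentication for Cloud SQL for PostgreSQL.
# Instance and service account names are hypothetical.

# Enable IAM database authentication. Note: --database-flags replaces the
# instance's existing flag list, so include any flags already set.
gcloud sql instances patch my-instance \
    --database-flags=cloudsql.iam_authentication=on

# Add a service account as an IAM database user; the application then logs
# in with a short-lived IAM token instead of a stored password.
gcloud sql users create app-sa@my-project.iam.gserviceaccount.com \
    --instance=my-instance \
    --type=cloud_iam_service_account
```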

You have deployed a Cloud SQL for SQL Server instance. In addition, you created a cross-region read replica for disaster recovery (DR) purposes. Your company requires you to maintain and monitor a recovery point objective (RPO) of less than 5 minutes. You need to verify that your cross-region read replica meets the allowed RPO. What should you do?

A. Use Cloud SQL instance monitoring.
B. Use the Cloud Monitoring dashboard with available metrics from Cloud SQL.
C. Use Cloud SQL logs.
D. Use the SQL Server Always On Availability Group dashboard.
Suggested answer: D

Explanation:

Note that you cannot create a read replica in Cloud SQL for SQL Server unless you use an Enterprise edition, which is also a requirement for configuring a SQL Server Availability Group. That is not a coincidence: that is how Cloud SQL for SQL Server creates its read replicas. To inspect the replication state and lag, use the Always On Availability Group dashboard in SSMS. https://cloud.google.com/sql/docs/sqlserver/replication/manage-replicas#promote-replica

Total 132 questions