ExamGecko
Google Professional Cloud Database Engineer Practice Test - Questions and Answers, Page 10


You are designing a physician portal app in Node.js. This application will be used in hospitals and clinics that might have intermittent internet connectivity. If a connectivity failure occurs, the app should be able to query the cached data. You need to ensure that the application has scalability, strong consistency, and multi-region replication. What should you do?

A. Use Firestore and ensure that the PersistenceEnabled option is set to true.
B. Use Memorystore for Memcached.
C. Use Pub/Sub to synchronize the changes from the application to Cloud Spanner.
D. Use Table.read with the exactStaleness option to perform a read of rows in Cloud Spanner.
Suggested answer: A

Explanation:

https://firebase.google.com/docs/firestore/manage-data/enable-offline

You manage a production MySQL database running on Cloud SQL at a retail company. You perform routine maintenance on Sunday at midnight when traffic is slow, but you want to skip routine maintenance during the year-end holiday shopping season. You need to ensure that your production system is available 24/7 during the holidays. What should you do?

A. Define a maintenance window on Sundays between 12 AM and 1 AM, and deny maintenance periods between November 1 and January 15.
B. Define a maintenance window on Sundays between 12 AM and 5 AM, and deny maintenance periods between November 1 and February 15.
C. Build a Cloud Composer job to start a maintenance window on Sundays between 12 AM and 1 AM, and deny maintenance periods between November 1 and January 15.
D. Create a Cloud Scheduler job to start maintenance at 12 AM on Sundays. Pause the Cloud Scheduler job between November 1 and January 15.
Suggested answer: A

Explanation:

"Deny maintenance period. A block of days in which Cloud SQL does not schedule maintenance. Deny maintenance periods can be up to 90 days long." https://cloud.google.com/sql/docs/mysql/maintenance
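The 90-day limit is the deciding detail between options A and B. A quick date calculation (the specific years below are illustrative) shows that only the November 1 through January 15 deny period fits within the limit:

```python
from datetime import date

MAX_DENY_DAYS = 90  # Cloud SQL deny maintenance periods can be up to 90 days long

def deny_period_days(start: date, end: date) -> int:
    """Length of a deny maintenance period, counting both endpoints."""
    return (end - start).days + 1

# Option A: November 1 through January 15
option_a = deny_period_days(date(2023, 11, 1), date(2024, 1, 15))
# Option B: November 1 through February 15
option_b = deny_period_days(date(2023, 11, 1), date(2024, 2, 15))

print(option_a, option_a <= MAX_DENY_DAYS)  # 76 True
print(option_b, option_b <= MAX_DENY_DAYS)  # 107 False
```

Option B's deny period runs 107 days, which exceeds the 90-day maximum, so it cannot be configured even though it would otherwise cover the holiday season.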

You want to migrate an on-premises 100 TB Microsoft SQL Server database to Google Cloud over a 1 Gbps network link. You are allowed 48 hours of downtime to migrate this database. What should you do? (Choose two.)

A. Use a change data capture (CDC) migration strategy.
B. Move the physical database servers from on-premises to Google Cloud.
C. Keep the network bandwidth at 1 Gbps, and then perform an offline data migration.
D. Increase the network bandwidth to 2 Gbps, and then perform an offline data migration.
E. Increase the network bandwidth to 10 Gbps, and then perform an offline data migration.
Suggested answer: A, E

Explanation:

https://cloud.google.com/architecture/migration-to-google-cloud-transferring-your-large-datasets#online_versus_offline_transfer
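The suggested answers follow from a simple transfer-time estimate. Assuming a fully utilized link and 1 TB = 1,000 GB (idealized; real throughput is lower due to protocol overhead), 100 TB cannot move over a 1 Gbps or 2 Gbps link inside the 48-hour window, which is why the offline option requires 10 Gbps:

```python
def transfer_hours(size_tb: float, link_gbps: float) -> float:
    """Idealized transfer time for size_tb terabytes over a fully utilized link."""
    size_gigabits = size_tb * 1000 * 8  # 1 TB = 1,000 GB = 8,000 gigabits
    return size_gigabits / link_gbps / 3600

for gbps in (1, 2, 10):
    print(f"{gbps} Gbps: {transfer_hours(100, gbps):.1f} hours")
# 1 Gbps:  ~222.2 hours (over 9 days)
# 2 Gbps:  ~111.1 hours
# 10 Gbps: ~22.2 hours (fits within the 48-hour window)
```

A CDC strategy (option A) sidesteps the bulk-transfer window entirely by replicating changes continuously, so only the final cutover counts against the downtime budget.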

You need to provision several hundred Cloud SQL for MySQL instances for multiple project teams over a one-week period. You must ensure that all instances adhere to company standards such as instance naming conventions, database flags, and tags. What should you do?

A. Automate instance creation by writing a Dataflow job.
B. Automate instance creation by setting up Terraform scripts.
C. Create the instances using the Google Cloud Console UI.
D. Create clones from a template Cloud SQL instance.
Suggested answer: B

Your organization is migrating 50 TB Oracle databases to Bare Metal Solution for Oracle. Database backups must be available for quick restore. You also need to have backups available for 5 years. You need to design a cost-effective architecture that meets a recovery time objective (RTO) of 2 hours and recovery point objective (RPO) of 15 minutes. What should you do?

A. Create the database on a Bare Metal Solution server with the database running on flash storage. Keep a local backup copy on all flash storage. Keep backups older than one day stored in Actifio OnVault storage.
B. Create the database on a Bare Metal Solution server with the database running on flash storage. Keep a local backup copy on standard storage. Keep backups older than one day stored in Actifio OnVault storage.
C. Create the database on a Bare Metal Solution server with the database running on flash storage. Keep a local backup copy on standard storage. Use the Oracle Recovery Manager (RMAN) backup utility to move backups older than one day to a Coldline Storage bucket.
D. Create the database on a Bare Metal Solution server with the database running on flash storage. Keep a local backup copy on all flash storage. Use the Oracle Recovery Manager (RMAN) backup utility to move backups older than one day to an Archive Storage bucket.
Suggested answer: B

Explanation:

This answer meets the RTO and RPO requirements by using flash storage for the database and standard storage for the local backup copy. It also meets the cost-effectiveness requirement by using Actifio OnVault storage, a low-cost, high-performance, and scalable storage solution that integrates with the Google Cloud Backup and DR Service.

You are a DBA on a Cloud Spanner instance with multiple databases. You need to assign these privileges to all members of the application development team on a specific database:

Can read tables, views, and DDL

Can write rows to the tables

Can add columns and indexes

Cannot drop the database

What should you do?

A. Assign the Cloud Spanner Database Reader and Cloud Spanner Backup Writer roles.
B. Assign the Cloud Spanner Database Admin role.
C. Assign the Cloud Spanner Database User role.
D. Assign the Cloud Spanner Admin role.
Suggested answer: C

Explanation:

https://cloud.google.com/spanner/docs/iam#spanner.databaseUser

Your project is using Bigtable to store data that should not be accessed from the public internet under any circumstances, even if the requestor has a valid service account key. You need to secure access to this data. What should you do?

A. Use Identity and Access Management (IAM) for Bigtable access control.
B. Use VPC Service Controls to create a trusted network for the Bigtable service.
C. Use customer-managed encryption keys (CMEK).
D. Use Google Cloud Armor to add IP addresses to an allowlist.
Suggested answer: B

Explanation:

"Users can define a security perimeter around Google Cloud resources such as Cloud Storage buckets, Bigtable instances, and BigQuery datasets to constrain data within a VPC and control the flow of data." https://cloud.google.com/vpc-service-controls

Your organization has a ticketing system that needs an online marketing analytics and reporting application. You need to select a relational database that can manage hundreds of terabytes of data to support this new application. Which database should you use?

A. Cloud SQL
B. BigQuery
C. Cloud Spanner
D. Bigtable
Suggested answer: B

You are designing for a write-heavy application. During testing, you discover that the write workloads are performant in a regional Cloud Spanner instance but slow down by an order of magnitude in a multi-regional instance. You want to make the write workloads faster in a multi-regional instance. What should you do?

A. Place the bulk of the read and write workloads closer to the default leader region.
B. Use staleness of at least 15 seconds.
C. Add more read-write replicas.
D. Keep the total CPU utilization under 45% in each region.
Suggested answer: A

Explanation:

https://cloud.google.com/spanner/docs/instance-configurations#multi-region-best-practices

For optimal performance, follow these best practices:

- Design a schema that prevents hotspots and other performance issues.
- For optimal write latency, place compute resources for write-heavy workloads within or close to the default leader region.
- For optimal read performance outside of the default leader region, use staleness of at least 15 seconds.
- To avoid single-region dependency for your workloads, place critical compute resources in at least two regions. A good option is to place them next to the two different read-write regions so that any single-region outage will not impact all of your application.
- Provision enough compute capacity to keep high-priority total CPU utilization under 45% in each region.

Your company wants to migrate an Oracle-based application to Google Cloud. The application team currently uses Oracle Recovery Manager (RMAN) to back up the database to tape for long-term retention (LTR). You need a cost-effective backup and restore solution that meets a 2-hour recovery time objective (RTO) and a 15-minute recovery point objective (RPO). What should you do?

A. Migrate the Oracle databases to Bare Metal Solution for Oracle, and store backups on tapes on-premises.
B. Migrate the Oracle databases to Bare Metal Solution for Oracle, and use Actifio to store backup files on Cloud Storage using the Nearline Storage class.
C. Migrate the Oracle databases to Bare Metal Solution for Oracle, and back up the Oracle databases to Cloud Storage using the Standard Storage class.
D. Migrate the Oracle databases to Compute Engine, and store backups on tapes on-premises.
Suggested answer: B

Explanation:

https://www.actifio.com/solutions/cloud/google/

Total 132 questions