Question 357 - Professional Data Engineer discussion


You are migrating your on-premises data warehouse to BigQuery. As part of the migration, you want to facilitate cross-team collaboration to get the most value out of the organization's data. You need to design an architecture that would allow teams within the organization to securely publish, discover, and subscribe to read-only data in a self-service manner. You need to minimize costs while also maximizing data freshness. What should you do?

A.
Create authorized datasets to publish shared data in the subscribing team's project.
B.
Create a new dataset for sharing in each individual team's project. Grant the subscribing team the bigquery.dataViewer role on the dataset.
C.
Use BigQuery Data Transfer Service to copy datasets to a centralized BigQuery project for sharing.
D.
Use Analytics Hub to facilitate data sharing.
Suggested answer: D

Explanation:

To allow teams to securely publish, discover, and subscribe to read-only data in a self-service manner, while minimizing cost and maximizing data freshness, Analytics Hub is the best choice. Here's why:

Self-Service Publishing and Discovery:

Analytics Hub lets publishing teams organize shared datasets into data exchanges and listings that other teams can browse and subscribe to on their own, without per-request IAM administration.

Subscribing to a listing creates a linked dataset in the subscriber's project: a read-only, live reference to the shared data.

Minimal Cost:

A linked dataset is not a copy, so no additional storage is consumed; subscribers pay only for the queries they run.

Copying datasets with the BigQuery Data Transfer Service (option C) duplicates storage for every consumer and adds transfer-scheduling overhead.

Maximum Freshness:

Queries against a linked dataset read the publisher's live tables, so subscribers always see current data. Copied datasets are only as fresh as the most recent transfer run.

Authorized datasets (option A) grant access to data in the publisher's project, not the subscriber's, and neither option A nor per-subscriber bigquery.dataViewer grants (option B) provide a catalog for teams to discover data, so they scale poorly as the number of teams grows.

Steps to Implement:

Create a Data Exchange:

In the publishing team's project, create an Analytics Hub data exchange to hold the team's listings.

Publish Listings:

Create a listing for each dataset to be shared, pointing it at the BigQuery dataset and adding a description so other teams can discover it.

Grant Subscriber Access:

Give subscribing teams the Analytics Hub subscriber role on the exchange or on individual listings.

Subscribe:

Subscribing teams add the listing as a linked dataset in their own project and query it read-only, with no data copies.

Analytics Hub Documentation

BigQuery Documentation
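The question's requirement to minimize costs can be illustrated with a rough back-of-envelope comparison between physically copying a dataset to each consuming team and sharing it in place. The dataset size, price per GB, and subscriber count below are purely hypothetical figures for illustration, not Google Cloud list prices:

```python
# Hypothetical illustration: monthly storage cost of giving each subscribing
# team its own copy of a shared dataset, versus sharing one copy in place.
# All figures are assumptions chosen for illustration only.

def monthly_storage_cost(size_gb: float, price_per_gb: float, copies: int) -> float:
    """Cost of storing the original dataset plus `copies` duplicates."""
    return size_gb * price_per_gb * (1 + copies)

SIZE_GB = 10_000      # hypothetical size of the shared dataset
PRICE_PER_GB = 0.02   # hypothetical $/GB/month
SUBSCRIBERS = 5       # hypothetical number of subscribing teams

copy_per_team = monthly_storage_cost(SIZE_GB, PRICE_PER_GB, SUBSCRIBERS)
shared_in_place = monthly_storage_cost(SIZE_GB, PRICE_PER_GB, 0)

print(f"copy per team:   ${copy_per_team:,.2f}/month")
print(f"shared in place: ${shared_in_place:,.2f}/month")
```

With one copy per subscribing team, storage cost grows linearly with the number of consumers, while sharing a single copy in place keeps it constant regardless of how many teams subscribe.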

asked 18/09/2024
Armindo Malafaia Neto