Google Associate Data Practitioner Practice Test - Questions Answers, Page 2

Question 11

You work for an ecommerce company that has a BigQuery dataset that contains customer purchase history, demographics, and website interactions. You need to build a machine learning (ML) model to predict which customers are most likely to make a purchase in the next month. You have limited engineering resources and need to minimize the ML expertise required for the solution. What should you do?

A. Use BigQuery ML to create a logistic regression model for purchase prediction.
B. Use Vertex AI Workbench to develop a custom model for purchase prediction.
C. Use Colab Enterprise to develop a custom model for purchase prediction.
D. Export the data to Cloud Storage, and use AutoML Tables to build a classification model for purchase prediction.

Suggested answer: A
Explanation:

Using BigQuery ML is the best solution in this case because:

Ease of use: BigQuery ML allows users to build machine learning models using SQL, which requires minimal ML expertise.

Integrated platform: Since the data already exists in BigQuery, there's no need to move it to another service, saving time and engineering resources.

Logistic regression: This is an appropriate model for binary classification tasks like predicting the likelihood of a customer making a purchase in the next month.
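
To illustrate how little ML-specific code this option requires, here is a minimal sketch that trains and scores the model entirely in SQL through the BigQuery Python client. The dataset, table, and column names are hypothetical placeholders, not part of the question.

```python
# Minimal sketch of option A. Dataset, table, and column names are placeholders.
from google.cloud import bigquery

client = bigquery.Client()

# Train a logistic regression model with plain SQL (BigQuery ML).
client.query("""
CREATE OR REPLACE MODEL `ecommerce.purchase_propensity`
OPTIONS (
  model_type = 'logistic_reg',
  input_label_cols = ['purchased_next_month']
) AS
SELECT age_bracket, sessions_last_30d, purchases_last_90d, purchased_next_month
FROM `ecommerce.customer_features`;
""").result()

# Score customers with ML.PREDICT, again using only SQL.
rows = client.query("""
SELECT customer_id, predicted_purchased_next_month_probs
FROM ML.PREDICT(MODEL `ecommerce.purchase_propensity`,
                (SELECT * FROM `ecommerce.customer_features`));
""").result()
```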

Question 12

You are designing a pipeline to process data files that arrive in Cloud Storage by 3:00 am each day. Data processing is performed in stages, where the output of one stage becomes the input of the next. Each stage takes a long time to run. Occasionally a stage fails, and you have to address the problem. You need to ensure that the final output is generated as quickly as possible. What should you do?

A. Design a Spark program that runs under Dataproc. Code the program to wait for user input when an error is detected. Rerun the last action after correcting any stage output data errors.
B. Design the pipeline as a set of PTransforms in Dataflow. Restart the pipeline after correcting any stage output data errors.
C. Design the workflow as a Cloud Workflow instance. Code the workflow to jump to a given stage based on an input parameter. Rerun the workflow after correcting any stage output data errors.
D. Design the processing as a directed acyclic graph (DAG) in Cloud Composer. Clear the state of the failed task after correcting any stage output data errors.

Suggested answer: D
Explanation:

Using Cloud Composer to design the processing pipeline as a Directed Acyclic Graph (DAG) is the most suitable approach because:

Fault tolerance: Cloud Composer (based on Apache Airflow) allows for handling failures at specific stages. You can clear the state of a failed task and rerun it without reprocessing the entire pipeline.

Stage-based processing: DAGs are ideal for workflows with interdependent stages where the output of one stage serves as input to the next.

Efficiency: This approach minimizes downtime and ensures that only failed stages are rerun, leading to faster final output generation.
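
As a rough sketch of what option D looks like in practice, the DAG below defines three dependent stages in Cloud Composer (Apache Airflow). The task names and commands are illustrative only; the key point is that clearing the state of a failed task in the Airflow UI reruns just that task and its downstream tasks, not the whole pipeline.

```python
# Illustrative Cloud Composer (Airflow 2.x) DAG with three dependent stages.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_staged_processing",
    schedule_interval="0 3 * * *",   # input files arrive in Cloud Storage by 3:00 am
    start_date=datetime(2025, 1, 1),
    catchup=False,
) as dag:
    stage_1 = BashOperator(task_id="stage_1", bash_command="echo 'process raw files'")
    stage_2 = BashOperator(task_id="stage_2", bash_command="echo 'transform stage 1 output'")
    stage_3 = BashOperator(task_id="stage_3", bash_command="echo 'produce final output'")

    # Output of one stage feeds the next; only cleared/failed tasks are rerun.
    stage_1 >> stage_2 >> stage_3
```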

Question 13

Another team in your organization is requesting access to a BigQuery dataset. You need to share the dataset with the team while minimizing the risk of unauthorized copying of data. You also want to create a reusable framework in case you need to share this data with other teams in the future. What should you do?

A. Create authorized views in the team's Google Cloud project that is only accessible by the team.
B. Create a private exchange using Analytics Hub with data egress restriction, and grant access to the team members.
C. Enable domain restricted sharing on the project. Grant the team members the BigQuery Data Viewer IAM role on the dataset.
D. Export the dataset to a Cloud Storage bucket in the team's Google Cloud project that is only accessible by the team.

Suggested answer: B
Explanation:

Using Analytics Hub to create a private exchange with data egress restrictions ensures controlled sharing of the dataset while minimizing the risk of unauthorized copying. This approach allows you to provide secure, managed access to the dataset without giving direct access to the raw data. The egress restriction ensures that data cannot be exported or copied outside the designated boundaries. Additionally, this solution provides a reusable framework that simplifies future data sharing with other teams or projects while maintaining strict data governance.
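
A rough sketch of option B with the Analytics Hub Python client is shown below. The project, location, dataset, and display names are placeholders, and the exact message and field names for the egress restriction (for example RestrictedExportConfig) are assumptions that should be verified against the current google-cloud-bigquery-analyticshub library.

```python
# Sketch only: resource names and the restricted-export fields are assumptions to verify.
from google.cloud import bigquery_analyticshub_v1 as analyticshub

client = analyticshub.AnalyticsHubServiceClient()
parent = "projects/my-project/locations/us"

# Create a private exchange for internal sharing.
exchange = client.create_data_exchange(
    parent=parent,
    data_exchange_id="internal_private_exchange",
    data_exchange=analyticshub.DataExchange(display_name="Internal private exchange"),
)

# Publish the dataset as a listing with data egress restricted.
listing = analyticshub.Listing(
    display_name="Sales dataset",
    bigquery_dataset=analyticshub.Listing.BigQueryDatasetSource(
        dataset="projects/my-project/datasets/sales"
    ),
    restricted_export_config=analyticshub.Listing.RestrictedExportConfig(enabled=True),
)
client.create_listing(parent=exchange.name, listing_id="sales_listing", listing=listing)
```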

Question 14

Your company has developed a website that allows users to upload and share video files. These files are most frequently accessed and shared when they are initially uploaded. Over time, the files are accessed and shared less frequently, although some old video files may remain very popular.

You need to design a storage system that is simple and cost-effective. What should you do?

A. Create a single-region bucket with Autoclass enabled.
B. Create a single-region bucket. Configure a Cloud Scheduler job that runs every 24 hours and changes the storage class based on upload date.
C. Create a single-region bucket with custom Object Lifecycle Management policies based on upload date.
D. Create a single-region bucket with Archive as the default storage class.

Suggested answer: C
Explanation:

Creating a single-region bucket with custom Object Lifecycle Management policies based on upload date is the most appropriate solution. This approach allows you to automatically transition objects to less expensive storage classes as their access frequency decreases over time. For example, frequently accessed files can remain in the Standard storage class initially, then transition to Nearline, Coldline, or Archive storage as their popularity wanes. This strategy ensures a cost-effective and efficient storage system while maintaining simplicity by automating the lifecycle management of video files.
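
The sketch below shows what such a lifecycle configuration might look like with the Cloud Storage Python client. The bucket name and age thresholds are illustrative; the rules move objects to progressively colder classes based on their age (time since upload).

```python
# Illustrative Object Lifecycle Management rules; bucket name and ages are placeholders.
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("my-video-uploads")

# Standard (default) -> Nearline -> Coldline -> Archive as objects age.
bucket.add_lifecycle_set_storage_class_rule("NEARLINE", age=30)
bucket.add_lifecycle_set_storage_class_rule("COLDLINE", age=90)
bucket.add_lifecycle_set_storage_class_rule("ARCHIVE", age=365)
bucket.patch()  # persist the lifecycle configuration on the bucket
```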

Question 15

You recently inherited a task for managing Dataflow streaming pipelines in your organization and noticed that proper access had not been provisioned to you. You need to request a Google-provided IAM role so you can restart the pipelines. You need to follow the principle of least privilege. What should you do?

A. Request the Dataflow Developer role.
B. Request the Dataflow Viewer role.
C. Request the Dataflow Worker role.
D. Request the Dataflow Admin role.

Suggested answer: A
Explanation:

The Dataflow Developer role provides the necessary permissions to manage Dataflow streaming pipelines, including the ability to restart pipelines. This role adheres to the principle of least privilege, as it grants only the permissions required to manage and operate Dataflow jobs without unnecessary administrative access. Other roles, such as Dataflow Admin, would grant broader permissions, which are not needed in this scenario.

Question 16

You need to create a new data pipeline. You want a serverless solution that meets the following requirements:

* Data is streamed from Pub/Sub and is processed in real-time.

* Data is transformed before being stored.

* Data is stored in a location that will allow it to be analyzed with SQL using Looker.

Which Google Cloud services should you recommend for the pipeline?

A. 1. Dataproc Serverless, 2. Bigtable
B. 1. Cloud Composer, 2. Cloud SQL for MySQL
C. 1. BigQuery, 2. Analytics Hub
D. 1. Dataflow, 2. BigQuery

Suggested answer: D
Explanation:

To build a serverless data pipeline that processes data in real-time from Pub/Sub, transforms it, and stores it for SQL-based analysis using Looker, the best solution is to use Dataflow and BigQuery. Dataflow is a fully managed service for real-time data processing and transformation, while BigQuery is a serverless data warehouse that supports SQL-based querying and integrates seamlessly with Looker for data analysis and visualization. This combination meets the requirements for real-time streaming, transformation, and efficient storage for analytical queries.
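
A minimal Apache Beam sketch of this pipeline is shown below: it reads from Pub/Sub, applies a transformation, and streams the results into BigQuery, where Looker can query them with SQL. The subscription, table, and field names are placeholders, and the Dataflow runner options are omitted.

```python
# Illustrative streaming pipeline; run on Dataflow by adding --runner=DataflowRunner,
# project, region, and temp_location options. Names below are placeholders.
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

def transform(message: bytes) -> dict:
    record = json.loads(message.decode("utf-8"))
    record["amount_usd"] = round(record["amount_cents"] / 100, 2)  # example transformation
    return record

options = PipelineOptions(streaming=True)

with beam.Pipeline(options=options) as p:
    (
        p
        | "ReadFromPubSub" >> beam.io.ReadFromPubSub(
            subscription="projects/my-project/subscriptions/events-sub")
        | "Transform" >> beam.Map(transform)
        | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
            "my-project:analytics.events",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
        )
    )
```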

Question 17

Your team wants to create a monthly report to analyze inventory data that is updated daily. You need to aggregate the inventory counts by using only the most recent month of data, and save the results to be used in a Looker Studio dashboard. What should you do?

A. Create a materialized view in BigQuery that uses the SUM() function and the DATE_SUB() function.
B. Create a saved query in the BigQuery console that uses the SUM() function and the DATE_SUB() function. Re-run the saved query every month, and save the results to a BigQuery table.
C. Create a BigQuery table that uses the SUM() function and the _PARTITIONDATE filter.
D. Create a BigQuery table that uses the SUM() function and the DATE_DIFF() function.

Suggested answer: A
Explanation:

Creating a materialized view in BigQuery with the SUM() function and the DATE_SUB() function is the best approach. Materialized views allow you to pre-aggregate and cache query results, making them efficient for repeated access, such as monthly reporting. By using the DATE_SUB() function, you can filter the inventory data to include only the most recent month. This approach ensures that the aggregation is up-to-date with minimal latency and provides efficient integration with Looker Studio for dashboarding.
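
The aggregation behind this answer follows the pattern sketched below, run here through the BigQuery Python client. The table and column names are placeholders, and wrapping the query in CREATE MATERIALIZED VIEW is subject to BigQuery's materialized-view SQL restrictions.

```python
# Sketch of the monthly aggregation; table and column names are placeholders.
from google.cloud import bigquery

client = bigquery.Client()

monthly_inventory_sql = """
SELECT
  product_id,
  SUM(inventory_count) AS total_inventory
FROM `warehouse.daily_inventory`
WHERE snapshot_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 1 MONTH)
GROUP BY product_id
"""

for row in client.query(monthly_inventory_sql).result():
    print(row.product_id, row.total_inventory)
```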

Question 18

You have a BigQuery dataset containing sales data. This data is actively queried for the first 6 months. After that, the data is not queried but needs to be retained for 3 years for compliance reasons. You need to implement a data management strategy that meets access and compliance requirements, while keeping cost and administrative overhead to a minimum. What should you do?

A. Use BigQuery long-term storage for the entire dataset. Set up a Cloud Run function to delete the data from BigQuery after 3 years.
B. Partition a BigQuery table by month. After 6 months, export the data to Coldline storage. Implement a lifecycle policy to delete the data from Cloud Storage after 3 years.
C. Set up a scheduled query to export the data to Cloud Storage after 6 months. Write a stored procedure to delete the data from BigQuery after 3 years.
D. Store all data in a single BigQuery table without partitioning or lifecycle policies.

Suggested answer: B
Explanation:

Partitioning the BigQuery table by month allows efficient querying of recent data for the first 6 months, reducing query costs. After 6 months, exporting the data to Coldline storage minimizes storage costs for data that is rarely accessed but needs to be retained for compliance. Implementing a lifecycle policy in Cloud Storage automates the deletion of the data after 3 years, ensuring compliance while reducing administrative overhead. This approach balances cost efficiency and compliance requirements effectively.
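
The two halves of this approach might look like the sketch below: a month-partitioned BigQuery table for the actively queried data, and a Coldline bucket whose lifecycle rule deletes exported objects after roughly three years. Table, column, and bucket names are placeholders.

```python
# Sketch of option B; table, column, and bucket names are placeholders.
from google.cloud import bigquery, storage

# Month-partitioned sales table keeps the first 6 months cheap to query.
bq = bigquery.Client()
bq.query("""
CREATE TABLE IF NOT EXISTS `sales.transactions`
(
  transaction_id STRING,
  amount NUMERIC,
  sale_date DATE
)
PARTITION BY DATE_TRUNC(sale_date, MONTH);
""").result()

# Coldline bucket for exports older than 6 months; delete automatically after ~3 years.
gcs = storage.Client()
bucket = gcs.get_bucket("sales-compliance-archive")
bucket.add_lifecycle_delete_rule(age=3 * 365)
bucket.patch()
```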

Question 19

You have created a LookML model and dashboard that shows daily sales metrics for five regional managers to use. You want to ensure that the regional managers can only see sales metrics specific to their region. You need an easy-to-implement solution. What should you do?

A. Create a sales_region user attribute, and assign each manager's region as the value of their user attribute. Add an access_filter Explore filter on the region_name dimension by using the sales_region user attribute.
B. Create five different Explores with the sql_always_filter Explore filter applied on the region_name dimension. Set each region_name value to the corresponding region for each manager.
C. Create separate Looker dashboards for each regional manager. Set the default dashboard filter to the corresponding region for each manager.
D. Create separate Looker instances for each regional manager. Copy the LookML model and dashboard to each instance. Provision viewer access to the corresponding manager.

Suggested answer: A
Explanation:

Using a sales_region user attribute is the best solution because it allows you to dynamically filter data based on each manager's assigned region. By adding an access_filter Explore filter on the region_name dimension that references the sales_region user attribute, each manager sees only the sales metrics specific to their region. This approach is easy to implement, scalable, and avoids duplicating dashboards or Explores, making it both efficient and maintainable.

Question 20

You need to design a data pipeline that ingests data from CSV, Avro, and Parquet files into Cloud Storage. The data includes raw user input. You need to remove all malicious SQL injections before storing the data in BigQuery. Which data manipulation methodology should you choose?

A. EL
B. ELT
C. ETL
D. ETLT

Suggested answer: C
Explanation:

The ETL (Extract, Transform, Load) methodology is the best approach for this scenario because it allows you to extract data from the files, transform it by applying the necessary data cleansing (including removing malicious SQL injections), and then load the sanitized data into BigQuery. By transforming the data before loading it into BigQuery, you ensure that only clean and safe data is stored, which is critical for security and data quality.
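
The sketch below illustrates the "transform before load" step that distinguishes ETL here: rows are sanitized in the pipeline and only then loaded into BigQuery. The sanitization rule, file, table, and field names are illustrative only; production input validation should be more thorough.

```python
# Illustrative ETL step: sanitize raw CSV rows, then load the clean rows into BigQuery.
import csv
import re

from google.cloud import bigquery

SQL_INJECTION_PATTERN = re.compile(r"('|--|;|\b(DROP|DELETE|INSERT|UNION)\b)", re.IGNORECASE)

def sanitize(value: str) -> str:
    """Strip characters and keywords commonly used in SQL injection attempts."""
    return SQL_INJECTION_PATTERN.sub("", value)

def extract_and_transform(path: str) -> list[dict]:
    with open(path, newline="") as f:
        return [{k: sanitize(v) for k, v in row.items()} for row in csv.DictReader(f)]

client = bigquery.Client()
clean_rows = extract_and_transform("user_input.csv")                       # Extract + Transform
client.load_table_from_json(clean_rows, "app.user_input_clean").result()   # Load
```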
