Google Associate Data Practitioner Practice Test - Questions and Answers
Question 1

Your company currently uses an on-premises network file system (NFS) and is migrating data to Google Cloud. You want to be able to control how much bandwidth is used by the data migration while capturing detailed reporting on the migration status. What should you do?
Use a Transfer Appliance.
Use Cloud Storage FUSE.
Use Storage Transfer Service.
Use gcloud storage commands.
Storage Transfer Service is the best solution for migrating data from an on-premises NFS to Google Cloud. On-premises transfers run through transfer agents, and you can set a bandwidth limit on the agent pool to control how much network capacity the migration consumes, while the service provides detailed transfer logs and job-level status for monitoring the migration. Storage Transfer Service is purpose-built for large-scale migrations and supports scheduling, monitoring, and error handling, making it an efficient and reliable choice for this use case.
Question 2

Your company uses Looker as its primary business intelligence platform. You want to use LookML to visualize the profit margin for each of your company's products in your Looker Explores and dashboards. You need to implement a solution quickly and efficiently. What should you do?
Create a derived table that pre-calculates the profit margin for each product, and include it in the Looker model.
Define a new measure that calculates the profit margin by using the existing revenue and cost fields.
Create a new dimension that categorizes products based on their profit margin ranges (e.g., high, medium, low).
Apply a filter to only show products with a positive profit margin.
Defining a new measure in LookML to calculate the profit margin using the existing revenue and cost fields is the most efficient and straightforward solution. This approach allows you to dynamically compute the profit margin directly within your Looker Explores and dashboards without needing to pre-calculate or create additional tables. The measure can be defined using LookML syntax, such as:
measure: profit_margin {
  type: number
  # Assumes revenue and cost are existing measures in the view; NULLIF guards against division by zero
  sql: (${revenue} - ${cost}) / NULLIF(${revenue}, 0) ;;
  value_format: "0.0%"
}
This method is quick to implement and integrates seamlessly into your existing Looker model, enabling accurate visualization of profit margins across your products.
Question 3

You are a data analyst working with sensitive customer data in BigQuery. You need to ensure that only authorized personnel within your organization can query this data, while following the principle of least privilege. What should you do?
Enable access control by using IAM roles.
Encrypt the data by using customer-managed encryption keys (CMEK).
Update dataset privileges by using the SQL GRANT statement.
Export the data to Cloud Storage, and use signed URLs to authorize access.
Using IAM roles to enable access control in BigQuery is the best approach to ensure that only authorized personnel can query the sensitive customer data. IAM allows you to define granular permissions at the project, dataset, or table level, ensuring that users have only the access they need in accordance with the principle of least privilege. For example, you can assign roles like roles/bigquery.dataViewer to allow read-only access or roles/bigquery.dataEditor for more advanced permissions. This approach provides centralized and manageable access control, which is critical for protecting sensitive data.
Question 4

Your retail company wants to predict customer churn using historical purchase data stored in BigQuery. The dataset includes customer demographics, purchase history, and a label indicating whether the customer churned or not. You want to build a machine learning model to identify customers at risk of churning. You need to create and train a logistic regression model for predicting customer churn, using the customer_data table with the churned column as the target label. Which BigQuery ML query should you use?
Options A-D: candidate BigQuery ML queries (provided as images; not reproduced here).
In BigQuery ML, when creating a logistic regression model to predict customer churn, the correct query should:
Exclude the target column (churned) from the feature columns, since it is the value being predicted rather than an input feature.
Rename the target column to label, because BigQuery ML expects the training target to be in a column named label by default.
The chosen query satisfies these requirements:
SELECT * EXCEPT(churned), churned AS label: Excludes churned from features and renames it to label.
The OPTIONS(model_type='logistic_reg') specifies that a logistic regression model is being trained.
This setup ensures the model is correctly trained using the features in the dataset while targeting the churned column for predictions.
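For reference, since the answer options are not reproduced above, a minimal sketch of the kind of CREATE MODEL statement described is shown below. The dataset name (mydataset) and model name (churn_model) are placeholders; only the customer_data table and the churned column come from the question.

CREATE OR REPLACE MODEL mydataset.churn_model
OPTIONS (model_type = 'logistic_reg') AS
SELECT
  -- Keep every column except the target as a feature
  * EXCEPT(churned),
  -- BigQuery ML uses a column named label as the training target by default
  churned AS label
FROM mydataset.customer_data;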
Question 5

Your company has several retail locations. Your company tracks the total number of sales made at each location each day. You want to use SQL to calculate the weekly moving average of sales by location to identify trends for each store. Which query should you use?
Options A-D: candidate SQL queries (provided as images; not reproduced here).
To calculate the weekly moving average of sales by location:
The query must use PARTITION BY store_id in the window definition so the moving average is computed separately for each store.
The ORDER BY date ensures the sales are evaluated chronologically.
The ROWS BETWEEN 6 PRECEDING AND CURRENT ROW specifies a rolling window of 7 rows (1 week if each row represents daily data).
The AVG(total_sales) computes the average sales over the defined rolling window.
The chosen query combines all of these elements.
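A minimal sketch of such a query appears below. The table name (daily_sales) is a placeholder; store_id, date, and total_sales follow the column names used in the explanation above.

SELECT
  store_id,
  date,
  -- Average of the current day plus the six preceding days, per store
  AVG(total_sales) OVER (
    PARTITION BY store_id
    ORDER BY date
    ROWS BETWEEN 6 PRECEDING AND CURRENT ROW
  ) AS weekly_moving_avg
FROM daily_sales;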
Question 6

Your company is building a near real-time streaming pipeline to process JSON telemetry data from small appliances. You need to process messages arriving at a Pub/Sub topic, capitalize letters in the serial number field, and write results to BigQuery. You want to use a managed service and write a minimal amount of code for underlying transformations. What should you do?
Use a Pub/Sub to BigQuery subscription, write results directly to BigQuery, and schedule a transformation query to run every five minutes.
Use a Pub/Sub to Cloud Storage subscription, write a Cloud Run service that is triggered when objects arrive in the bucket, performs the transformations, and writes the results to BigQuery.
Use the "Pub/Sub to BigQuery" Dataflow template with a UDF, and write the results to BigQuery.
Use a Pub/Sub push subscription, write a Cloud Run service that accepts the messages, performs the transformations, and writes the results to BigQuery.
Using the "Pub/Sub to BigQuery" Dataflow template with a UDF (user-defined function) is the optimal choice because it combines near real-time processing, minimal code, and scalability. The UDF, a small JavaScript function supplied to the template, implements the custom transformation of uppercasing the serial number field, while the managed Dataflow pipeline handles reading from Pub/Sub and writing to BigQuery.
Question 7

You want to process and load a daily sales CSV file stored in Cloud Storage into BigQuery for downstream reporting. You need to quickly build a scalable data pipeline that transforms the data while providing insights into data quality issues. What should you do?
Create a batch pipeline in Cloud Data Fusion by using a Cloud Storage source and a BigQuery sink.
Load the CSV file as a table in BigQuery, and use scheduled queries to run SQL transformation scripts.
Load the CSV file as a table in BigQuery. Create a batch pipeline in Cloud Data Fusion by using a BigQuery source and sink.
Create a batch pipeline in Dataflow by using the Cloud Storage CSV file to BigQuery batch template.
Using Cloud Data Fusion to create a batch pipeline with a Cloud Storage source and a BigQuery sink is the best solution because:
Scalability: Cloud Data Fusion is a scalable, fully managed data integration service.
Data transformation: It provides a visual interface to design pipelines, enabling quick transformation of data.
Data quality insights: Cloud Data Fusion includes built-in tools for monitoring and addressing data quality issues during the pipeline creation and execution process.
Question 8

You manage a Cloud Storage bucket that stores temporary files created during data processing. These temporary files are only needed for seven days, after which they are no longer needed. To reduce storage costs and keep your bucket organized, you want to automatically delete these files once they are older than seven days. What should you do?
Set up a Cloud Scheduler job that invokes a weekly Cloud Run function to delete files older than seven days.
Configure a Cloud Storage lifecycle rule that automatically deletes objects older than seven days.
Develop a batch process using Dataflow that runs weekly and deletes files based on their age.
Create a Cloud Run function that runs daily and deletes files older than seven days.
Configuring a Cloud Storage lifecycle rule to automatically delete objects older than seven days is the best solution because:
Built-in feature: Cloud Storage lifecycle rules are specifically designed to manage object lifecycles, such as automatically deleting or transitioning objects based on age.
No additional setup: It requires no external services or custom code, reducing complexity and maintenance.
Cost-effective: It directly achieves the goal of deleting files after seven days without incurring additional compute costs.
Question 9

You work for a healthcare company that has a large on-premises data system containing patient records with personally identifiable information (PII) such as names, addresses, and medical diagnoses. You need a standardized managed solution that de-identifies PII across all your data feeds prior to ingestion to Google Cloud. What should you do?
Use Cloud Run functions to create a serverless data cleaning pipeline. Store the cleaned data in BigQuery.
Use Cloud Data Fusion to transform the data. Store the cleaned data in BigQuery.
Load the data into BigQuery, and inspect the data by using SQL queries. Use Dataflow to transform the data and remove any errors.
Use Apache Beam to read the data and perform the necessary cleaning and transformation operations. Store the cleaned data in BigQuery.
Using Cloud Data Fusion is the best solution for this scenario because:
Standardized managed solution: Cloud Data Fusion provides a visual interface for building data pipelines and includes prebuilt connectors and transformations for data cleaning and de-identification.
Compliance: It ensures sensitive data such as PII is de-identified prior to ingestion into Google Cloud, adhering to regulatory requirements for healthcare data.
Ease of use: Cloud Data Fusion is designed for transforming and preparing data, making it a managed and user-friendly tool for this purpose.
Question 10

You manage a large amount of data in Cloud Storage, including raw data, processed data, and backups. Your organization is subject to strict compliance regulations that mandate data immutability for specific data types. You want to use an efficient process to reduce storage costs while ensuring that your storage strategy meets retention requirements. What should you do?
Configure lifecycle management rules to transition objects to appropriate storage classes based on access patterns. Set up Object Versioning for all objects to meet immutability requirements.
Move objects to different storage classes based on their age and access patterns. Use Cloud Key Management Service (Cloud KMS) to encrypt specific objects with customer-managed encryption keys (CMEK) to meet immutability requirements.
Create a Cloud Run function to periodically check object metadata, and move objects to the appropriate storage class based on age and access patterns. Use object holds to enforce immutability for specific objects.
Use object holds to enforce immutability for specific objects, and configure lifecycle management rules to transition objects to appropriate storage classes based on age and access patterns.
Using object holds and lifecycle management rules is the most efficient and compliant strategy for this scenario because:
Immutability: Object holds (temporary or event-based) ensure that objects cannot be deleted or overwritten, meeting strict compliance regulations for data immutability.
Cost efficiency: Lifecycle management rules automatically transition objects to more cost-effective storage classes based on their age and access patterns.
Compliance and automation: This approach ensures compliance with retention requirements while reducing manual effort, leveraging built-in Cloud Storage features.