Google Professional Data Engineer Practice Test - Questions Answers, Page 8

The CUSTOM tier for Cloud Machine Learning Engine allows you to specify the number of which types of cluster nodes?

A. Workers
B. Masters, workers, and parameter servers
C. Workers and parameter servers
D. Parameter servers
Suggested answer: C

Explanation:

The CUSTOM tier is not a set tier, but rather enables you to use your own cluster specification. When you use this tier, set values to configure your processing cluster according to these guidelines:

You must set TrainingInput.masterType to specify the type of machine to use for your master node.

You may set TrainingInput.workerCount to specify the number of workers to use.

You may set TrainingInput.parameterServerCount to specify the number of parameter servers to use.

You can specify the type of machine for the master node, but you can't specify more than one master node.
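For illustration, here is a minimal sketch of a CUSTOM-tier training request submitted through the Python client library; the project, bucket, module, machine types, and counts below are placeholder assumptions, not values from the question.

from googleapiclient import discovery

# Sketch of a CUSTOM-tier job spec; all names and sizes are illustrative.
training_input = {
    'scaleTier': 'CUSTOM',
    'masterType': 'complex_model_m',        # required: machine type for the single master
    'workerType': 'complex_model_m',
    'parameterServerType': 'large_model',
    'workerCount': 9,                       # optional: number of workers
    'parameterServerCount': 3,              # optional: number of parameter servers
    'packageUris': ['gs://my-bucket/trainer-0.1.tar.gz'],
    'pythonModule': 'trainer.task',
    'region': 'us-central1',
}

ml = discovery.build('ml', 'v1')
ml.projects().jobs().create(
    parent='projects/my-project',
    body={'jobId': 'my_custom_tier_job', 'trainingInput': training_input},
).execute()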

Reference: https://cloud.google.com/ml-engine/docs/trainingoverview#job_configuration_parameters

Which software libraries are supported by Cloud Machine Learning Engine?

A. Theano and TensorFlow
B. Theano and Torch
C. TensorFlow
D. TensorFlow and Torch
Suggested answer: C

Explanation:

Cloud ML Engine mainly does two things:

Enables you to train machine learning models at scale by running TensorFlow training applications in the cloud.

Hosts those trained models for you in the cloud so that you can use them to get predictions about new data.

Reference: https://cloud.google.com/ml-engine/docs/technical-overview#what_it_does

Which TensorFlow function can you use to configure a categorical column if you don't know all of the possible values for that column?

A. categorical_column_with_vocabulary_list
B. categorical_column_with_hash_bucket
C. categorical_column_with_unknown_values
D. sparse_column_with_keys
Suggested answer: B

Explanation:

If you know the set of all possible feature values of a column and there are only a few of them, you can use categorical_column_with_vocabulary_list. Each key in the list is assigned an auto-incremented ID starting from 0.

What if we don't know the set of possible values in advance? That is not a problem: we can use categorical_column_with_hash_bucket instead. Each value that appears in the feature column (occupation, in the tutorial's example) is hashed to an integer ID as it is encountered during training.
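As a quick sketch using the tf.feature_column API (the column names and bucket size are illustrative), the two approaches look like this:

import tensorflow as tf

# Known, small vocabulary: each listed value gets a fixed integer ID.
eye_color = tf.feature_column.categorical_column_with_vocabulary_list(
    'eye_color', vocabulary_list=['brown', 'blue', 'green'])

# Unknown vocabulary: each value is hashed into one of hash_bucket_size buckets.
occupation = tf.feature_column.categorical_column_with_hash_bucket(
    'occupation', hash_bucket_size=1000)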

Reference: https://www.tensorflow.org/tutorials/wide

Which of the following statements about the Wide & Deep Learning model are true? (Select 2 answers.)

A. The wide model is used for memorization, while the deep model is used for generalization.
B. A good use for the wide and deep model is a recommender system.
C. The wide model is used for generalization, while the deep model is used for memorization.
D. A good use for the wide and deep model is a small-scale linear regression problem.
Suggested answer: A, B

Explanation:

Can we teach computers to learn like humans do, by combining the power of memorization and generalization? It's not an easy question to answer, but by jointly training a wide linear model (for memorization) alongside a deep neural network (for generalization), one can combine the strengths of both to bring us one step closer. At Google, we call it Wide & Deep Learning. It's useful for generic large-scale regression and classification problems with sparse inputs (categorical features with a large number of possible feature values), such as recommender systems, search, and ranking problems.
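A minimal sketch of how the two parts are combined with TensorFlow's canned wide-and-deep estimator; the feature columns, embedding dimension, and layer sizes are illustrative assumptions:

import tensorflow as tf

occupation = tf.feature_column.categorical_column_with_hash_bucket(
    'occupation', hash_bucket_size=1000)
age = tf.feature_column.numeric_column('age')

model = tf.estimator.DNNLinearCombinedClassifier(
    linear_feature_columns=[occupation],           # wide part: memorization
    dnn_feature_columns=[
        age,
        tf.feature_column.embedding_column(occupation, dimension=8),
    ],
    dnn_hidden_units=[100, 50],                    # deep part: generalization
)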

Reference: https://research.googleblog.com/2016/06/wide-deep-learning-better-togetherwith.html

To run a TensorFlow training job on your own computer using Cloud Machine Learning Engine, what would your command start with?

A. gcloud ml-engine local train
B. gcloud ml-engine jobs submit training
C. gcloud ml-engine jobs submit training local
D. You can't run a TensorFlow program on your own computer using Cloud ML Engine.
Suggested answer: A

Explanation:

Running a training job locally with gcloud ml-engine local train is especially useful for testing distributed models, as it allows you to validate that you are properly interacting with the Cloud ML Engine cluster configuration.
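A typical invocation might look like the following sketch; the package path, module name, and trainer arguments after the bare -- are placeholders for your own training application:

gcloud ml-engine local train \
  --module-name trainer.task \
  --package-path trainer/ \
  --distributed \
  -- \
  --train-files gs://my-bucket/data/train.csv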

Reference: https://cloud.google.com/sdk/gcloud/reference/ml-engine/local/train

If you want to create a machine learning model that predicts the price of a particular stock based on its recent price history, what type of estimator should you use?

A. Unsupervised learning
B. Regressor
C. Classifier
D. Clustering estimator
Suggested answer: B

Explanation:

Regression is the supervised learning task for modeling and predicting continuous, numeric variables. Examples include predicting real-estate prices, stock price movements, or student test scores.

Classification is the supervised learning task for modeling and predicting categorical variables.

Examples include predicting employee churn, email spam, financial fraud, or student letter grades.

Clustering is an unsupervised learning task for finding natural groupings of observations (i.e. clusters) based on the inherent structure within your dataset. Examples include customer segmentation, grouping similar items in e-commerce, and social network analysis.
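As a sketch, a canned TensorFlow regressor for a continuous target such as a stock price could be set up like this; the feature column name, window length, and layer sizes are illustrative assumptions:

import tensorflow as tf

# 30 recent prices as a numeric feature; the model predicts one continuous value.
features = [tf.feature_column.numeric_column('recent_prices', shape=[30])]
model = tf.estimator.DNNRegressor(feature_columns=features, hidden_units=[64, 32])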

Reference: https://elitedatascience.com/machine-learning-algorithms

Suppose you have a dataset of images that are each labeled as to whether or not they contain a human face. To create a neural network that recognizes human faces in images using this labeled dataset, what approach would likely be the most effective?

A. Use K-means Clustering to detect faces in the pixels.
B. Use feature engineering to add features for eyes, noses, and mouths to the input data.
C. Use deep learning by creating a neural network with multiple hidden layers to automatically detect features of faces.
D. Build a neural network with an input layer of pixels, a hidden layer, and an output layer with two categories.
Suggested answer: C

Explanation:

Traditional machine learning relies on shallow nets, composed of one input and one output layer, and at most one hidden layer in between. More than three layers (including input and output) qualifies as "deep" learning. So deep is a strictly defined, technical term that means more than one hidden layer.

In deep-learning networks, each layer of nodes trains on a distinct set of features based on the previous layer's output. The further you advance into the neural net, the more complex the features your nodes can recognize, since they aggregate and recombine features from the previous layer.

A neural network with only one hidden layer would be unable to automatically recognize high-level features of faces, such as eyes, because it wouldn't be able to "build" these features using previous hidden layers that detect low-level features, such as lines.

Feature engineering is difficult to perform on raw image data.

K-means Clustering is an unsupervised learning method used to categorize unlabeled data.
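For illustration only, a small tf.keras sketch of a network with multiple hidden layers for this kind of binary image classification; the image size and layer widths are assumptions, not part of the question:

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(64, 64)),    # input layer of raw pixels
    tf.keras.layers.Dense(256, activation='relu'),    # low-level features (edges, lines)
    tf.keras.layers.Dense(128, activation='relu'),    # mid-level features
    tf.keras.layers.Dense(64, activation='relu'),     # high-level features (eyes, noses)
    tf.keras.layers.Dense(2, activation='softmax'),   # two categories: face / no face
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')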

Reference: https://deeplearning4j.org/neuralnet-overview

What are two of the characteristics of using online prediction rather than batch prediction?

A. It is optimized to handle a high volume of data instances in a job and to run more complex models.
B. Predictions are returned in the response message.
C. Predictions are written to output files in a Cloud Storage location that you specify.
D. It is optimized to minimize the latency of serving predictions.
Suggested answer: B, D

Explanation:

Online prediction:
- Optimized to minimize the latency of serving predictions.
- Predictions are returned in the response message.

Batch prediction:
- Optimized to handle a high volume of instances in a job and to run more complex models.
- Predictions are written to output files in a Cloud Storage location that you specify.
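To illustrate what "returned in the response message" means, here is a minimal online-prediction call through the Python client library; the project, model, and instance fields are placeholders:

from googleapiclient import discovery

service = discovery.build('ml', 'v1')
response = service.projects().predict(
    name='projects/my-project/models/my_model',
    body={'instances': [{'age': 25, 'occupation': 'engineer'}]},
).execute()

# Online prediction: results come back directly in the response.
print(response['predictions'])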

Reference: https://cloud.google.com/ml-engine/docs/predictionoverview#online_prediction_versus_batch_prediction

Which of these are examples of a value in a sparse vector? (Select 2 answers.)

A. [0, 5, 0, 0, 0, 0]
B. [0, 0, 0, 1, 0, 0, 1]
C. [0, 1]
D. [1, 0, 0, 0, 0, 0, 0]
Suggested answer: C, D

Explanation:

Categorical features in linear models are typically translated into a sparse vector in which each possible value has a corresponding index or id. For example, if there are only three possible eye colors you can represent 'eye_color' as a length 3 vector: 'brown' would become [1, 0, 0], 'blue' would become [0, 1, 0] and 'green' would become [0, 0, 1]. These vectors are called "sparse" because they may be very long, with many zeros, when the set of possible values is very large (such as all English words).

[0, 0, 0, 1, 0, 0, 1] is not a sparse vector because it has two 1s in it. A sparse vector contains only a single 1.

[0, 5, 0, 0, 0, 0] is not a sparse vector because it has a 5 in it. Sparse vectors only contain 0s and 1s.
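As a small illustration of how such one-hot vectors are produced (assuming TensorFlow 2.x eager execution; the mapping of colors to indices follows the eye-color example above):

import tensorflow as tf

# 'brown' -> 0, 'blue' -> 1, 'green' -> 2
color_ids = tf.constant([0, 1, 2])
print(tf.one_hot(color_ids, depth=3))
# [[1. 0. 0.]
#  [0. 1. 0.]
#  [0. 0. 1.]]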

Reference: https://www.tensorflow.org/tutorials/linear#feature_columns_and_transformations

How can you get a neural network to learn about relationships between categories in a categorical feature?

A. Create a multi-hot column
B. Create a one-hot column
C. Create a hash bucket
D. Create an embedding column
Suggested answer: D

Explanation:

There are two problems with one-hot encoding. First, it has high dimensionality, meaning that instead of having just one value, like a continuous feature, it has many values, or dimensions. This makes computation more time-consuming, especially if a feature has a very large number of categories. The second problem is that it doesn't encode any relationships between the categories: they are completely independent of each other, so the network has no way of knowing which ones are similar to each other.

Both of these problems can be solved by representing a categorical feature with an embedding column. The idea is that each category has a smaller vector with, let's say, 5 values in it. But unlike a one-hot vector, the values are not usually 0. The values are weights, similar to the weights that are used for basic features in a neural network. The difference is that each category has a set of weights (5 of them in this case).

You can think of each value in the embedding vector as a feature of the category. So, if two categories are very similar to each other, then their embedding vectors should be very similar too.
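A minimal sketch with the tf.feature_column API; the column name, bucket size, and embedding dimension are illustrative:

import tensorflow as tf

# Categorical column whose full vocabulary is unknown in advance.
occupation = tf.feature_column.categorical_column_with_hash_bucket(
    'occupation', hash_bucket_size=1000)

# Embedding column: each category learns a dense vector of 5 weights,
# so related categories can end up with similar vectors.
occupation_embedded = tf.feature_column.embedding_column(occupation, dimension=5)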

Reference: https://cloudacademy.com/google/introduction-to-google-cloud-machine-learningengine-course/a-wide-and-deep-model.html
