Question 94 - Professional Machine Learning Engineer discussion


You work at a subscription-based company. You have trained an ensemble of trees and neural networks to predict customer churn, which is the likelihood that customers will not renew their yearly subscription. The average prediction is a 15% churn rate, but for a particular customer the model predicts that they are 70% likely to churn. The customer has a product usage history of 30%, is located in New York City, and became a customer in 1997. You need to explain the difference between the actual prediction, a 70% churn rate, and the average prediction. You want to use Vertex Explainable AI. What should you do?

A. Train local surrogate models to explain individual predictions.
B. Configure sampled Shapley explanations on Vertex Explainable AI.
C. Configure integrated gradients explanations on Vertex Explainable AI.
D. Measure the effect of each feature as the weight of the feature multiplied by the feature value.
Suggested answer: B

Explanation:

Option A is incorrect because training local surrogate models is not a feature of Vertex Explainable AI; it is a general technique for interpreting black-box models. Local surrogate models are simpler, interpretable models that approximate the behavior of the original model in the neighborhood of a specific input [1].
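
For illustration only (this is the general technique, not anything provided by Vertex Explainable AI), a minimal LIME-style local surrogate sketch in Python; predict_fn and the numeric feature vector are hypothetical placeholders for the black-box churn model and one encoded customer:

import numpy as np
from sklearn.linear_model import Ridge

def local_surrogate(predict_fn, instance, n_samples=500, scale=0.1, seed=0):
    """Fit a weighted linear surrogate around one instance (LIME-style sketch)."""
    rng = np.random.default_rng(seed)
    # Perturb the instance with Gaussian noise to probe the model locally.
    perturbed = instance + rng.normal(0.0, scale, size=(n_samples, instance.shape[0]))
    preds = predict_fn(perturbed)  # black-box churn probabilities
    # Weight samples by proximity to the original instance.
    dists = np.linalg.norm(perturbed - instance, axis=1)
    weights = np.exp(-(dists ** 2) / (2 * scale ** 2))
    surrogate = Ridge(alpha=1.0).fit(perturbed, preds, sample_weight=weights)
    return surrogate.coef_  # local feature effects around this customer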

Option B is correct because sampled Shapley explanations attribute the difference between the actual prediction and a baseline (such as the average prediction) to the individual input features. They are based on the Shapley value, a game-theoretic concept that measures how much each feature contributes to the prediction [2]. Vertex Explainable AI recommends sampled Shapley for non-differentiable models such as ensembles of trees, and it works with tabular data like the churn features in this scenario [3].
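
A minimal sketch of how this could be configured with the Vertex AI Python SDK, assuming a custom tabular churn model; the project, feature names, bucket path, and container image below are placeholders, not values from the question:

from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")  # placeholders

# Describe the model's inputs and outputs so attributions map back to features.
metadata = aiplatform.explain.ExplanationMetadata(
    inputs={"usage_history": {}, "location": {}, "customer_since": {}},
    outputs={"churn_probability": {}},
)

# Sampled Shapley: approximate Shapley values using a number of feature permutations.
parameters = aiplatform.explain.ExplanationParameters(
    {"sampled_shapley_attribution": {"path_count": 25}}
)

model = aiplatform.Model.upload(
    display_name="churn-ensemble",
    artifact_uri="gs://my-bucket/churn-model/",  # placeholder
    serving_container_image_uri="us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-0:latest",
    explanation_metadata=metadata,
    explanation_parameters=parameters,
)

endpoint = model.deploy(machine_type="n1-standard-4")
response = endpoint.explain(
    instances=[{"usage_history": 0.3, "location": "New York City", "customer_since": 1997}]
)

The returned feature attributions approximately sum to the difference between the instance's prediction and the baseline prediction, which is exactly the 70% vs. 15% gap the question asks to explain when the baseline represents an average customer.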

Option C is incorrect because integrated gradients is not suitable for this model. Integrated gradients computes the gradients of the prediction with respect to the input features along a path from a baseline input to the actual input [4]. It therefore requires a differentiable model; Vertex Explainable AI supports it only for differentiable models such as neural networks, so it cannot be applied to an ensemble that includes trees [3].
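
For intuition only, a toy sketch of the integrated-gradients idea on a simple differentiable function (a Riemann-sum approximation of the path integral); this is not Vertex AI code, and the gradient step is exactly what breaks down for tree ensembles, which have no gradients:

import numpy as np

W = np.array([2.0, -1.0, 0.5])  # toy logistic-regression weights

def toy_model(x):
    return 1.0 / (1.0 + np.exp(-x @ W))

def toy_gradient(x):
    p = toy_model(x)
    return p * (1.0 - p) * W  # closed-form gradient of the sigmoid output

def integrated_gradients(x, baseline, steps=50):
    # Average the gradients along the straight path from baseline to x,
    # then scale by (x - baseline); requires the model to be differentiable.
    alphas = np.linspace(0.0, 1.0, steps)
    grads = np.mean([toy_gradient(baseline + a * (x - baseline)) for a in alphas], axis=0)
    return (x - baseline) * grads

x = np.array([0.3, 1.0, 0.0])  # hypothetical encoded features
baseline = np.zeros(3)
print(integrated_gradients(x, baseline))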

Option D is incorrect because measuring the effect of each feature as the feature's weight multiplied by its value is not a valid way to explain the difference between the actual prediction and the average prediction for this model. This method assumes the model is linear and additive, which is not the case for an ensemble of trees and neural networks, and it ignores interactions between features and the non-linearity of the model [5].
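
A small numeric sketch of why this fails: with even one interaction term, weight-times-value attributions no longer add up to the model's prediction, let alone explain the gap to the average prediction (toy numbers only, not from the question):

# Toy non-linear model: churn = 0.2*usage + 0.1*tenure + 0.4*(usage * tenure)
def churn(usage, tenure):
    return 0.2 * usage + 0.1 * tenure + 0.4 * usage * tenure

usage, tenure = 0.3, 0.9
prediction = churn(usage, tenure)
naive = {"usage": 0.2 * usage, "tenure": 0.1 * tenure}  # weight * value per feature
print(f"model prediction: {prediction:.3f}")                       # 0.258
print(f"sum of weight*value attributions: {sum(naive.values()):.3f}")  # 0.150, misses the interaction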

References:

1. Local surrogate models
2. Shapley value
3. Vertex Explainable AI overview
4. Integrated gradients
5. Feature importance

asked 18/09/2024 by Ali Danial