Category "shap"

SHAP explainer identifies wrong framework

I want to use the SHAP DeepExplainer on the Braindecode Shallow_FBCSP-Model, which is based on PyTorch. The training and testing work perfectly fine on the mod
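
A minimal sketch of the usual DeepExplainer pattern for a PyTorch network; the toy nn.Sequential model and random tensors below are stand-ins, not the actual Braindecode model from the question:

```python
import torch
import torch.nn as nn
import shap

# Toy stand-in for the PyTorch network; any nn.Module is handled the same way.
model = nn.Sequential(nn.Linear(20, 16), nn.ReLU(), nn.Linear(16, 4))
model.eval()

X = torch.randn(128, 20)          # fake data in place of the real EEG trials
background = X[:64]               # background sample the explainer integrates over

explainer = shap.DeepExplainer(model, background)
shap_values = explainer.shap_values(X[64:74])   # attributions for a handful of test trials
```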

Shap value plotting error on Databricks but works locally

I want to do a simple shap analysis and plot a shap.force_plot. I noticed that it works without any issues locally in a .ipynb file, but fails on Databricks wit
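
Since the snippet is cut off, here is only a generic sketch of the force_plot call; the xgboost regressor and synthetic data are placeholders, and the matplotlib=True fallback is one commonly suggested way to render the plot without depending on the notebook's JavaScript support:

```python
import shap
import xgboost as xgb
from sklearn.datasets import make_regression

X, y = make_regression(n_samples=200, n_features=5, random_state=0)
model = xgb.XGBRegressor(n_estimators=50).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Static matplotlib rendering; avoids relying on the environment's JS support.
shap.force_plot(explainer.expected_value, shap_values[0], X[0], matplotlib=True)
```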

fastshap: Error in UseMethod("explain") : no applicable method for 'explain' applied to an object of class "xgb.Booster"

After fitting an xgboost model (model_n), I try to run the code below to obtain SHAP values, where trainval is a dataframe with my training data without the Y variabe

Shap - The color bar is not displayed in the summary plot

When displaying summary_plot, the color bar does not show. shap.summary_plot(shap_values, X_train) I have tried changing plot_size. When the plot is higher th
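
A hedged workaround sketch: defer rendering with show=False and resize the figure before showing it, so the color bar has room to draw. shap_values and X_train are assumed to be the objects from the question:

```python
import matplotlib.pyplot as plt
import shap

shap.summary_plot(shap_values, X_train, show=False)   # defer rendering so the figure can be adjusted
plt.gcf().set_size_inches(10, 8)                      # enlarge the figure before drawing
plt.tight_layout()
plt.show()
```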

SHAP: XGBoost and LightGBM difference in shap_values calculation

I have this code in Visual Studio Code: import pandas as pd import numpy as np import shap import matplotlib.pyplot as plt import xgboost as xgb from sklearn.m
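
A sketch that computes TreeExplainer values for both libraries on the same synthetic data so the outputs can be compared side by side; the models and data below are illustrative, not the question's code:

```python
import numpy as np
import shap
import xgboost as xgb
import lightgbm as lgb
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=300, n_features=6, random_state=0)

xgb_model = xgb.XGBClassifier(n_estimators=50, max_depth=3).fit(X, y)
lgb_model = lgb.LGBMClassifier(n_estimators=50, max_depth=3).fit(X, y)

# Both explainers work in the models' raw margin (log-odds) space for classifiers,
# but the returned shapes can differ (e.g. a single array vs. one array per class,
# depending on the library and shap version).
sv_xgb = shap.TreeExplainer(xgb_model).shap_values(X)
sv_lgb = shap.TreeExplainer(lgb_model).shap_values(X)

print(np.shape(sv_xgb), np.shape(sv_lgb))
```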

SHAP Summary Plot and Mean Values displaying together

I used the following Python code for a SHAP summary_plot: explainer = shap.TreeExplainer(model2) shap_values = explainer.shap_values(X_sampled) shap.summary_plot
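
A sketch of how the beeswarm summary plot and the mean-|SHAP| view are usually produced from the same values; shap_values and X_sampled are assumed to be the objects from the question:

```python
import numpy as np
import shap

shap.summary_plot(shap_values, X_sampled)                     # beeswarm of per-sample values
shap.summary_plot(shap_values, X_sampled, plot_type="bar")    # mean(|SHAP|) per feature

# The numbers behind the bar plot, if you want to annotate them yourself:
mean_abs = np.abs(shap_values).mean(axis=0)
```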

SHAP for a single data point, instead of average prediction of entire dataset

I am trying to explain a regression model based on LightGBM using SHAP. I'm using the shap.TreeExplainer(<lightgbm model>).shap_values(X) method to get
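
A sketch of scoping the explanation to one row rather than the whole dataset; model and X are placeholders for the fitted LightGBM regressor and feature frame from the question:

```python
import shap

explainer = shap.TreeExplainer(model)

row = X.iloc[[0]]                        # keep 2-D shape: one row, all features
sv_single = explainer.shap_values(row)   # contributions for this one prediction

# expected_value + sum of this row's SHAP values reconstructs the single prediction,
# rather than an average over the dataset.
shap.force_plot(explainer.expected_value, sv_single[0], X.iloc[0])
```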

Is there a way to customize the feature order in a SHAP beeswarm plot?

I'm wondering if there's a way to change the order in which the features in a SHAP beeswarm plot are displayed. The docs describe "transforms" like using shap_values.
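
A sketch of the order argument of shap.plots.beeswarm; the index-array form of order is an assumption about the newer Explanation-based API and may depend on the shap version:

```python
import numpy as np
import shap
import xgboost as xgb
from sklearn.datasets import load_diabetes

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = xgb.XGBRegressor(n_estimators=50).fit(X, y)

shap_values = shap.Explainer(model)(X)   # Explanation object (newer shap API)

# Default ordering is mean |SHAP| per feature; the docs' "transforms" express this:
shap.plots.beeswarm(shap_values, order=shap_values.abs.mean(0))

# A custom ordering can (depending on the shap version) be passed as an index array,
# e.g. alphabetical by feature name:
shap.plots.beeswarm(shap_values, order=np.argsort(shap_values.feature_names))
```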

SHAP local_accuracy

When calculating local_accuracy from metrics.py I got the following error: NameError: name 'pickle' is not defined from shap.benchmark import metrics metrics.l

Custom features in beeswarm plot of shap

I have a causal inference model with featurizer=PolynomialFeatures(degree=3), which includes a degree-3 polynomial in the X variable. I get the plot for interpretab

Difference between Shapley values and SHAP for interpretable machine learning

The paper regarding the shap package gives a formula for the Shapley values in (4) and apparently for SHAP values in (8). Still, I don't really understand the dif
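
For reference, the two formulas being contrasted are roughly the following (a hedged reconstruction, not a quote of the paper's exact equations (4) and (8)): the classic Shapley value over a value function v, and SHAP's choice of v as the model's conditional expectation:

```latex
% Classic Shapley value of feature i over the feature set F,
% with a value function v defined on subsets S:
\phi_i = \sum_{S \subseteq F \setminus \{i\}}
         \frac{|S|!\,(|F|-|S|-1)!}{|F|!}\,
         \bigl[ v(S \cup \{i\}) - v(S) \bigr]

% SHAP keeps the same weighting but takes the value function to be
% the model's conditional expectation given the features in S:
v(S) = \mathbb{E}\bigl[ f(x) \mid x_S \bigr]
```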

SHAP function throws exception in plotting method

samples.zip The sample zipped folder contains: model.pkl and x_test.csv. To reproduce the problems, follow these steps: use lin2 = joblib.load('model.pkl') to loa
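
Because the snippet is truncated, the remaining steps are guessed here; the generic shap.Explainer and waterfall call are assumptions about what the reproduction might look like, not the question's own code:

```python
import joblib
import pandas as pd
import shap

# Reproduction sketch following the question's steps; file names come from samples.zip.
lin2 = joblib.load('model.pkl')
x_test = pd.read_csv('x_test.csv')

explainer = shap.Explainer(lin2, x_test)   # generic entry point; picks a suitable explainer
sv = explainer(x_test)
shap.plots.waterfall(sv[0])                # a plotting call of the kind that raises the exception
```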

How are SHAP's feature contributions calculated for models with word embeddings as output?

In a typical Shapley value estimation for a numerical regression task, there is a clear way in which the marginal contribution of an input feature i to the fina
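
The snippet is cut off, but the contrast it sets up can be sketched as follows (an illustration, not the question's own notation): with a scalar regression output the marginal contribution is a single number, while with an embedding output it is a vector, so some extra reduction has to be chosen:

```latex
% Scalar regression output: the marginal contribution of feature i to a coalition S
% is a single number, the change in the prediction:
\Delta_i(S) = f\bigl(S \cup \{i\}\bigr) - f(S) \in \mathbb{R}

% Word-embedding output: the same difference is a vector in R^d, so per-dimension
% attributions or some aggregation (e.g. a norm) must be chosen:
\Delta_i(S) = f\bigl(S \cup \{i\}\bigr) - f(S) \in \mathbb{R}^{d}
```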