summary
LIME[1] and SHAP[2] are two approaches to extracting feature importances from any machine learning model: the former fits an interpretable surrogate model (typically a sparse linear model) locally around each prediction, while the latter uses a game-theoretic method based on Shapley values.
The idea of this project is to solve a classification task with several different models, compute a LIME and a SHAP explanation for each, and then compare these with a "baseline explanation" obtained directly from a decision tree or random forest model. The comparison aims to find possible links between a model's performance and the similarity of its explanations to the baseline, and to determine how "easily", if at all, LIME and SHAP independently arrive at similar importances for a given model.
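As a concrete illustration of the pipeline, the sketch below trains a random forest as the baseline explainer and a second classifier to be explained, aggregates LIME and SHAP local explanations into global importance vectors, and compares them with the baseline via rank correlation. The dataset (breast cancer), the choice of logistic regression as the model under inspection, the mean-absolute-weight aggregation, and the use of Spearman correlation as the similarity measure are all illustrative assumptions, not part of the project specification.

```python
# Minimal sketch, assuming scikit-learn, lime, shap and scipy are installed.
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer
from scipy.stats import spearmanr
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline explanation: impurity-based feature importances of a random forest.
baseline = RandomForestClassifier(random_state=0).fit(X_train, y_train)
baseline_importances = baseline.feature_importances_

# Model under inspection (any classifier exposing predict_proba would do).
model = LogisticRegression(max_iter=5000).fit(X_train, y_train)

# LIME: local surrogate explanations, aggregated by mean absolute weight over a
# sample of test instances to obtain a global importance vector.
lime_explainer = LimeTabularExplainer(X_train, mode="classification")
n_features = X.shape[1]
sample = X_test[:20]
lime_importances = np.zeros(n_features)
for row in sample:
    exp = lime_explainer.explain_instance(row, model.predict_proba,
                                          num_features=n_features)
    for idx, weight in exp.as_map()[1]:
        lime_importances[idx] += abs(weight)
lime_importances /= len(sample)

# SHAP: model-agnostic KernelExplainer with a small background set; mean |SHAP|
# per feature gives a global importance vector.
background = X_train[:50]
shap_explainer = shap.KernelExplainer(model.predict_proba, background)
sv = shap_explainer.shap_values(sample, nsamples=100)
# Older shap versions return a list of per-class arrays, newer ones a 3-D array.
sv_pos = sv[1] if isinstance(sv, list) else sv[:, :, 1]
shap_importances = np.abs(sv_pos).mean(axis=0)

# Similarity to the baseline, measured here as Spearman rank correlation.
print("LIME vs baseline:", spearmanr(lime_importances, baseline_importances).correlation)
print("SHAP vs baseline:", spearmanr(shap_importances, baseline_importances).correlation)
print("LIME vs SHAP:    ", spearmanr(lime_importances, shap_importances).correlation)
```

In this sketch the baseline, LIME and SHAP importances all live in the same per-feature space, so any rank- or vector-similarity measure could replace Spearman correlation; the project leaves that choice open.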
deliverables