Counterfactual Explanations for Machine Learning: Challenges Revisited


Sahil Verma, John P. Dickerson, Keegan Hines

CoRR abs/2106.07756, 2021

Counterfactual explanations (CFEs) are an emerging technique under the umbrella of interpretability of machine learning (ML) models. They provide "what if" feedback of the form "if an input datapoint were x′ instead of x, then an ML model's output would be y′ instead of y." Counterfactual explainability for ML models has yet to see widespread adoption in industry. In this short paper, we posit reasons for this slow uptake. Leveraging recent work outlining desirable properties of CFEs and our experience running the ML wing of a model monitoring startup, we identify outstanding obstacles hindering CFE deployment in industry.
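For readers unfamiliar with the mechanics behind the "what if" form above, the Python sketch below illustrates one common way such an x′ can be found for a binary classifier, in the spirit of gradient-based counterfactual search (e.g., Wachter et al., 2017). It is an illustration only, not the method discussed in this paper; the helper name find_counterfactual and the parameters margin, lam, and lr are hypothetical choices made for this sketch.

# Illustrative counterfactual search for a linear classifier (not this paper's method).
# Objective: 0.5*(f(x') - target_logit)^2 + 0.5*lam*||x' - x||^2, minimized by gradient descent.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
clf = LogisticRegression().fit(X, y)
w, b = clf.coef_[0], clf.intercept_[0]  # p(y=1 | x) = sigmoid(w @ x + b)

def find_counterfactual(x, target, margin=2.0, lam=0.1, lr=0.05, steps=1000):
    """Return x' close to x that the model classifies as `target` (hypothetical helper)."""
    target_logit = margin if target == 1 else -margin     # desired decision score
    x_cf = x.astype(float).copy()
    for _ in range(steps):
        f = w @ x_cf + b                                   # current decision score
        grad_pred = (f - target_logit) * w                 # gradient of 0.5*(f - target_logit)^2
        grad_dist = lam * (x_cf - x)                       # gradient of 0.5*lam*||x' - x||^2
        x_cf -= lr * (grad_pred + grad_dist)
        if clf.predict(x_cf.reshape(1, -1))[0] == target:
            break                                          # stop once the predicted label flips
    return x_cf

x = X[0]
y_orig = clf.predict(x.reshape(1, -1))[0]
x_cf = find_counterfactual(x, target=1 - y_orig)
print("original prediction:      ", y_orig)
print("counterfactual prediction:", clf.predict(x_cf.reshape(1, -1))[0])
print("feature changes (x' - x): ", x_cf - x)

The quadratic trade-off between flipping the prediction and staying close to x is the simplest formulation; practical CFE methods layer constraints such as actionability, plausibility, and sparsity on top of this basic search.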

@article{mlexplanations-corr210607756,
  author = {Verma, Sahil and Dickerson, John P. and Hines, Keegan},
  eprint = {2106.07756},
  eprinttype = {arXiv},
  journal = {CoRR},
  title = {Counterfactual Explanations for Machine Learning: {C}hallenges Revisited},
  url = {https://arxiv.org/abs/2106.07756},
  volume = {abs/2106.07756},
  year = 2021
}

Publication

— authors

Sahil Verma, John P. Dickerson, Keegan Hines

— status

published

— sort

other

Venue

— journal

CoRR

— volume

abs/2106.07756

— publication date

2021

URLs

original page | original PDF | open access PDF

BibTeX

— BibTeX ID
mlexplanations-corr210607756
— BibTeX category
article

Files

Open Access PDF: 2106.07756.pdf
