Explanation in artificial intelligence: Insights from the social sciences


Tim Miller

Artificial Intelligence 267, pp. 1–38, February 2019

There has been a recent resurgence in the area of explainable artificial intelligence as researchers and practitioners seek to provide more transparency to their algorithms. Much of this research is focused on explicitly explaining decisions or actions to a human observer, and it should not be controversial to say that looking at how humans explain to each other can serve as a useful starting point for explanation in artificial intelligence. However, it is fair to say that most work in explainable artificial intelligence uses only the researchers' intuition of what constitutes a `good' explanation. There exist vast and valuable bodies of research in philosophy, psychology, and cognitive science of how people define, generate, select, evaluate, and present explanations, which argues that people employ certain cognitive biases and social expectations towards the explanation process. This paper argues that the field of explainable artificial intelligence can build on this existing research, and reviews relevant papers from philosophy, cognitive psychology/science, and social psychology, which study these topics. It draws out some important findings, and discusses ways that these can be infused with work on explainable artificial intelligence.

Keywords: Explanation, Explainability, Interpretability, Explainable AI, Transparency

@article{xaisocialsciences-ai267,
  author = {Miller, Tim},
  doi = {10.1016/j.artint.2018.07.007},
  issn = {0004-3702},
  journal = {Artificial Intelligence},
  keywords = {Explanation, Explainability, Interpretability, Explainable AI, Transparency},
  pages = {1--38},
  title = {Explanation in artificial intelligence: Insights from the social sciences},
  url = {https://www.sciencedirect.com/science/article/pii/S0004370218305988},
  volume = 267,
  year = 2019
}

Publication

— authors

Tim Miller

— status

published

— type

journal article

Publication venue

— journal

Artificial Intelligence

— volume

267

— pages

1–38

— publication date

February 2019

URL

original page

Identifiers

— DOI

10.1016/j.artint.2018.07.007

— print ISSN

0004-3702

BibTeX

— BibTeX ID
xaisocialsciences-ai267
— BibTeX category
article
