Quod erat demonstrandum? Towards a typology of the concept of explanation for the design of explainable AI

Federico Cabitza, Andrea Campagner, Gianclaudio Malgieri, Chiara Natali, David Michael Schneeberger, Karl Stöger, Andreas Holzinger (Corresponding author)

Publications: Contribution to journal · Article · Peer Reviewed

Abstract

In this paper, we present a fundamental framework for defining different types of explanations of AI systems and the criteria for evaluating their quality. Starting from a structural view of how explanations can be constructed, i.e., in terms of an explanandum (what needs to be explained), multiple explanantia (explanations, clues, or pieces of information that explain), and a relationship linking explanandum and explanantia, we propose an explanandum-based typology and point to other possible typologies based on how explanantia are presented and how they relate to explananda. We also highlight two broad and complementary perspectives for defining possible quality criteria for assessing explainability: epistemological and psychological (cognitive). These definitional attempts aim to support the three main functions that we believe should attract the interest and further research of XAI scholars: clear inventories, clear verification criteria, and clear validation methods.
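As a loose, purely illustrative sketch of the structural view described above (not part of the paper itself), an explanation can be modeled as a simple record with the three components named in the abstract; all field names and example values below are hypothetical.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Explanation:
    explanandum: str        # what needs to be explained
    explanantia: List[str]  # the clues or pieces of information that explain
    relation: str           # how the explanantia are linked to the explanandum

# Purely illustrative instance; all values are invented for demonstration.
example = Explanation(
    explanandum="the model classified this loan application as high risk",
    explanantia=["large weight on debt-to-income ratio", "short credit history"],
    relation="statistical association between input features and the prediction",
)
print(f"{example.explanandum} <- [{example.relation}] <- {example.explanantia}")
```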

Original language: English
Article number: 118888
Pages (from-to): 1-16
Number of pages: 16
Journal: Expert Systems With Applications
Volume: 213
Issue number: A
Early online date: 24 Sept 2022
DOI: 10.1016/j.eswa.2022.118888
Publication status: Published - 1 Mar 2023

Austrian Fields of Science 2012

  • 505015 Legal informatics
  • 505002 Data protection
  • 505010 Medical law

Keywords

  • Artificial intelligence
  • Explainable AI
  • Explanations
  • Machine learning
  • Taxonomy
  • XAI
