Information that matters: Exploring information needs of people affected by algorithmic decisions

Publications: Contribution to journal › Article › Peer Reviewed

Abstract

Every AI system that makes decisions about people has a group of stakeholders who are personally affected by these decisions. However, explanations of AI systems rarely address the information needs of this stakeholder group, who often are AI novices. This creates a gap between the information conveyed and the information that matters to those impacted by the system’s decisions, such as domain experts and decision subjects. To address this, we present the “XAI Novice Question Bank”, an extension of the XAI Question Bank (Liao et al., 2020) containing a catalog of information needs from AI novices in two use cases: employment prediction and health monitoring. The catalog covers the categories of data, system context, system usage, and system specifications. We gathered information needs through task-based interviews in which participants asked questions about two AI systems to decide on their adoption and received verbal explanations in response. Our analysis showed that participants’ confidence increased after receiving explanations, but that their understanding faced challenges. These included difficulties in locating information and in assessing their own understanding, as well as attempts to outsource understanding. Additionally, participants’ prior perceptions of the systems’ risks and benefits influenced their information needs. Participants who perceived high risks sought explanations about the intentions behind a system’s deployment, while those who perceived low risks tended instead to ask about the system’s operation. Our work aims to support the inclusion of AI novices in explainability efforts by highlighting their information needs, aims, and challenges. We summarize our findings as five key implications that can inform the design of future explanations for lay stakeholder audiences.
Original language: English
Article number: 103380
Number of pages: 18
Journal: International Journal of Human-Computer Studies
Volume: 193
Early online date: 26 Sept 2024
DOIs
Publication status: Published - Jan 2025

Austrian Fields of Science 2012

  • 102013 Human-computer interaction
  • 102001 Artificial intelligence

Keywords

  • Explainable AI
  • Understanding
  • Information needs
  • Affected stakeholders
  • Question-driven explanations
  • Qualitative methods
