TY - JOUR
T1 - Information that matters: Exploring information needs of people affected by algorithmic decisions
AU - Schmude, Timothée
AU - Koesten, Laura
AU - Möller, Torsten
AU - Tschiatschek, Sebastian
N1 - Publisher Copyright:
© 2024 The Authors
PY - 2024/9/26
Y1 - 2024/9/26
AB - Every AI system that makes decisions about people has a group of stakeholders who are personally affected by these decisions. However, explanations of AI systems rarely address the information needs of this stakeholder group, who are often AI novices. This creates a gap between the information conveyed and the information that matters to those impacted by the system’s decisions, such as domain experts and decision subjects. To address this, we present the “XAI Novice Question Bank”, an extension of the XAI Question Bank (Liao et al., 2020) containing a catalog of information needs from AI novices in two use cases: employment prediction and health monitoring. The catalog covers the categories of data, system context, system usage, and system specifications. We gathered information needs through task-based interviews in which participants asked questions about two AI systems to decide on their adoption and received verbal explanations in response. Our analysis showed that participants’ confidence increased after receiving explanations, but that their understanding faced challenges, including difficulties in locating information, difficulties in assessing their own understanding, and attempts to outsource understanding. Additionally, participants’ prior perceptions of the systems’ risks and benefits influenced their information needs: participants who perceived high risks sought explanations about the intentions behind a system’s deployment, while those who perceived low risks asked instead about the system’s operation. Our work aims to support the inclusion of AI novices in explainability efforts by highlighting their information needs, aims, and challenges. We summarize our findings as five key implications that can inform the design of future explanations for lay stakeholder audiences.
KW - Explainable AI
KW - Understanding
KW - Information needs
KW - Affected stakeholders
KW - Question-driven explanations
KW - Qualitative methods
UR - http://www.scopus.com/inward/record.url?scp=85205992227&partnerID=8YFLogxK
U2 - 10.1016/j.ijhcs.2024.103380
DO - 10.1016/j.ijhcs.2024.103380
M3 - Article
SN - 1071-5819
VL - 193
JO - International Journal of Human-Computer Studies
JF - International Journal of Human-Computer Studies
M1 - 103380
ER -