Explainability’s gain is optimality’s loss? How explanations bias decision-making

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Decisions in organizations are about evaluating alternatives and choosing the one that best serves organizational goals. To the extent that the evaluation of alternatives can be formulated as a predictive task with appropriate metrics, machine learning algorithms are increasingly used to improve the efficiency of the process. Explanations facilitate communication between the algorithm and the human decision-maker, making it easier for the latter to interpret the former's predictions and decide on their basis. The causal-model semantics of feature-based explanations, however, induce leakage from the decision-maker's prior beliefs. Our findings from a field experiment demonstrate empirically how this leads to confirmation bias and a disparate impact on the decision-maker's confidence in the predictions. Such differences can lead to sub-optimal and biased decision outcomes.
Original language: English
Title of host publication: AIES '22
Subtitle of host publication: Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society
Publisher: ACM Electronic Library
Pages: 778–787
Number of pages: 9
ISBN (Print): 978-1-4503-9247-1
Publication status: Published - Jul 2022
