The duet of representations and how explanations exacerbate it

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review


An algorithm effects a causal representation of the relations between features and labels in the human's perception. Such a representation may conflict with the human's prior beliefs. Explanations can direct the human's attention to the conflicting feature and away from other relevant features, leading to causal overattribution and adversely affecting the human's information processing. In a field experiment, we implemented an XGBoost-trained model as a decision-making aid for counselors at a public employment service to predict candidates' risk of long-term unemployment. The treatment group of counselors was additionally shown SHAP explanations of the model's predictions. The results show that the quality of the human's decision-making is worse when a feature on which the human holds a conflicting prior belief is displayed as part of the explanation.

Original language: English
Title of host publication: Explainable Artificial Intelligence - 1st World Conference, xAI 2023, Proceedings
Editors: Luca Longo
Publisher: Springer Science and Business Media Deutschland GmbH
Number of pages: 17
ISBN (Print): 9783031440663
Publication status: Published - Oct 2023
Event: 1st World Conference on eXplainable Artificial Intelligence, xAI 2023 - Lisbon, Portugal
Duration: 26 Jul 2023 – 28 Jul 2023

Publication series

Name: Communications in Computer and Information Science
Volume: 1902 CCIS
ISSN (Print): 1865-0929
ISSN (Electronic): 1865-0937


Conference: 1st World Conference on eXplainable Artificial Intelligence, xAI 2023

Keywords

  • biases
  • causal representations
  • communication
  • conflict
  • epistemic standpoint
  • explanations
  • human-AI interaction
  • information processing
  • prior beliefs
  • salience

