In AI we Trust: Harnessing Artificial Intelligence to Mitigate Anchoring Bias

Filipa de Almeida, Kim Fortuin, Ian J. Scott

Research output: Working paper › Preprint

Abstract

In an effort to mitigate the inherent biases of human decision-making, organizations are increasingly turning to Artificial Intelligence (AI) for support. However, unlocking the full potential of AI-generated advice hinges on users' willingness to trust and rely upon it. This research addresses that issue by exploring user receptivity to AI-based recommendations through two experiments.

The first experiment investigated user preference for advice sourced from AI versus human experts. Results revealed "algorithm appreciation" for objective tasks, such as stock price predictions. Conversely, for subjective tasks such as employee performance evaluations, the data indicated a distinct "algorithm aversion." This differential reliance was entirely mediated by user trust in the source of the advice. Self-confidence in one's own decision-making abilities also emerged as a significant factor.

The second study built upon these findings by examining user behavior when both AI and human advice options were available. It also investigated whether users' choices differed when deciding for themselves versus for others. While "algorithm appreciation" persisted for the objective stock price prediction task, it did not extend to the other scenarios.

These findings offer valuable insights for organizations integrating Big Data and AI-powered recommendations into their decision-making processes. The research suggests that AI can be a powerful tool to augment human decision-making, fostering more informed and potentially more objective choices. However, the success of such implementations hinges on user trust and confidence in the AI system. Understanding the factors that influence user reliance on AI advice, as revealed in these studies, enables organizations to tailor their strategies and maximize the benefits of AI-driven decision support.
Original language: English
Publisher: Social Science Research Network (SSRN), Elsevier
Number of pages: 2
Publication status: Published - 19 Apr 2024

Keywords

  • Artificial Intelligence
  • Human-AI interaction
  • Anchoring
  • Biased Decision Making
  • Trust
