Uncertainty-Based Rejection in Machine Learning: Implications for Model Development and Interpretability

Research output: Contribution to journal › Article › peer-review

6 Citations (Scopus)
152 Downloads (Pure)

Abstract

Uncertainty is present in every prediction of Machine Learning (ML) models. Uncertainty Quantification (UQ) is therefore highly relevant, in particular for safety-critical applications. Prior research has focused on developing methods to quantify uncertainty; less attention has been given to how knowledge of uncertainty can be leveraged during model development. This work focuses on putting UQ into practice, closing the gap between quantifying uncertainty and using it in the ML pipeline, and giving insights into how UQ can improve model development and its interpretability. We identified three main research questions: (1) How can UQ contribute to choosing the most suitable model for a given classification task? (2) Can UQ be used to combine different models in a principled manner? (3) Can visualization techniques improve UQ’s interpretability? These questions are answered by applying several uncertainty quantification methods to both a simulated dataset and a real-world Human Activity Recognition (HAR) dataset. Our results show that uncertainty quantification can increase model robustness and interpretability.
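
The article itself details the rejection mechanisms it evaluates; as a rough illustration only, the general idea of an uncertainty-based rejection option can be sketched by thresholding the predictive entropy of a classifier's class probabilities. The threshold value (0.8) and the helper names below are illustrative assumptions, not taken from the article. Predictions whose entropy exceeds the threshold are withheld (labelled -1) rather than guessed, trading coverage for reliability.

    import numpy as np

    def predictive_entropy(probs):
        # Shannon entropy of each row of class probabilities
        # (probs has shape [n_samples, n_classes] and rows sum to 1).
        eps = 1e-12  # guard against log(0)
        return -np.sum(probs * np.log(probs + eps), axis=1)

    def predict_with_rejection(probs, threshold):
        # Predict the argmax class, but abstain (label -1)
        # whenever the predictive entropy exceeds the threshold.
        preds = probs.argmax(axis=1)
        uncertainty = predictive_entropy(probs)
        preds[uncertainty > threshold] = -1
        return preds, uncertainty

    probs = np.array([[0.95, 0.03, 0.02],   # confident -> kept
                      [0.40, 0.35, 0.25]])  # ambiguous -> rejected
    preds, unc = predict_with_rejection(probs, threshold=0.8)
    print(preds)  # [ 0 -1]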

Original language: English
Article number: 396
Journal: Electronics (Switzerland)
Volume: 11
Issue number: 3
Publication status: Published - 1 Feb 2022

Keywords

  • Artificial intelligence
  • Human activity recognition
  • Interpretability
  • Machine learning
  • Rejection option
  • Uncertainty quantification

