Explainability meets uncertainty quantification: Insights from feature-based model fusion on multimodal time series

Duarte Folgado, Marília Barandas, Lorenzo Famiglini, Ricardo Santos, Federico Cabitza, Hugo Gamboa

Research output: Contribution to journal › Article › peer-review

3 Citations (Scopus)
44 Downloads (Pure)

Abstract

Feature importance evaluation is one of the prevalent approaches to interpreting Machine Learning (ML) models. A drawback of using these methods on high-dimensional datasets is that they often produce high-dimensional explanation outputs that hinder human analysis. This is especially true when explaining multimodal ML models, where the problem's complexity is further exacerbated by the inclusion of multiple data modalities and an increase in the overall number of features. This work proposes a novel approach to lower the complexity of feature-based explanations. The proposed approach is based on uncertainty quantification techniques, allowing for a principled way of reducing the number of modalities required to explain the model's predictions. We evaluated our method on three multimodal datasets comprising physiological time series. Results show that the proposed method can reduce the complexity of the explanations while maintaining a high level of accuracy in the predictions. This study illustrates an innovative example of the intersection between the disciplines of uncertainty quantification and explainable artificial intelligence.
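To illustrate the general idea described in the abstract, the sketch below shows one possible way to aggregate per-feature attributions (e.g., SHAP values) into modality-level scores and to use predictive uncertainty to decide how many modalities an explanation should include. This is a minimal, hypothetical example, not the authors' implementation; the selection rule, the `feature_to_modality` mapping, and the toy numbers are assumptions introduced purely for illustration.

```python
# Minimal sketch, assuming per-feature attributions are already available
# (e.g., SHAP values) and that each feature belongs to one data modality.
# NOT the paper's method: the entropy-based selection rule is hypothetical.
import numpy as np

def modality_importance(attributions, feature_to_modality):
    """Sum absolute per-feature attributions within each modality."""
    scores = {}
    for j, value in enumerate(attributions):
        modality = feature_to_modality[j]
        scores[modality] = scores.get(modality, 0.0) + abs(value)
    return scores

def predictive_entropy(probs):
    """Shannon entropy of the predicted class distribution (in nats)."""
    probs = np.clip(np.asarray(probs, dtype=float), 1e-12, 1.0)
    return float(-(probs * np.log(probs)).sum())

def select_modalities(attributions, feature_to_modality, probs, max_entropy):
    """Keep the top-k modalities, with k growing as uncertainty grows."""
    scores = modality_importance(attributions, feature_to_modality)
    ranked = sorted(scores, key=scores.get, reverse=True)
    # Hypothetical rule: confident predictions get a compact explanation
    # (few modalities); uncertain ones keep more modalities visible.
    fraction = predictive_entropy(probs) / max_entropy
    k = max(1, int(round(fraction * len(ranked))))
    return ranked[:k]

# Toy usage: 4 features drawn from two physiological modalities.
attr = [0.40, 0.05, -0.30, 0.02]
mapping = {0: "ECG", 1: "EDA", 2: "ECG", 3: "EDA"}
print(select_modalities(attr, mapping, probs=[0.9, 0.1], max_entropy=np.log(2)))
```

In this toy case the prediction is fairly confident, so only the highest-scoring modality ("ECG") is retained in the explanation; a more uncertain prediction would keep both modalities.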
Original language: English
Article number: 101955
Number of pages: 17
Journal: Information Fusion
Volume: 100
DOIs
Publication status: Published - Dec 2023

Keywords

  • Complexity
  • Explainable AI
  • Feature-based explanations
  • Multimodal
  • SHAP
  • Uncertainty quantification

