Abstract
Feature importance evaluation is among the most widely used approaches to interpreting Machine Learning (ML) models. A drawback of these methods on high-dimensional datasets is that they often produce high-dimensional explanation outputs that hinder human analysis. This is especially true when explaining multimodal ML models, where the problem is exacerbated by the presence of multiple data modalities and the resulting increase in the overall number of features. This work proposes a novel approach to reducing the complexity of feature-based explanations. The proposed approach builds on uncertainty quantification techniques, allowing for a principled way of reducing the number of modalities required to explain the model's predictions. We evaluated our method on three multimodal datasets comprising physiological time series. Results show that the proposed method can reduce the complexity of the explanations while maintaining a high level of accuracy in the predictions. This study illustrates an innovative example of the intersection between the disciplines of uncertainty quantification and explainable artificial intelligence.
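The abstract does not spell out the algorithm, so the sketch below is only a rough illustration of the general idea under our own assumptions: SHAP-style per-feature attributions are aggregated per modality, a bootstrap over the explained samples provides an uncertainty estimate for each modality's aggregate importance, and modalities are retained based on their lower confidence bound. The function names, the bootstrap procedure, and the selection rule are illustrative assumptions, not the published method.

```python
import numpy as np

def modality_importance_with_uncertainty(shap_values, modality_slices,
                                         n_boot=1000, alpha=0.05, seed=0):
    """Aggregate per-feature attributions into per-modality importances with
    bootstrap confidence intervals over the explained samples.

    shap_values     : array of shape (n_samples, n_features), e.g. SHAP values
    modality_slices : dict mapping modality name -> sequence of feature indices
    """
    rng = np.random.default_rng(seed)
    n_samples = shap_values.shape[0]
    summary = {}
    for name, idx in modality_slices.items():
        # Per-sample modality importance: total absolute attribution of its features.
        per_sample = np.abs(shap_values[:, idx]).sum(axis=1)
        # Bootstrap over samples to quantify uncertainty of the mean importance.
        boot = rng.choice(per_sample, size=(n_boot, n_samples), replace=True).mean(axis=1)
        lo, hi = np.quantile(boot, [alpha / 2, 1 - alpha / 2])
        summary[name] = {"mean": per_sample.mean(), "ci": (lo, hi)}
    return summary

def select_modalities(summary, keep_fraction=0.8):
    """Keep the smallest set of modalities whose lower-bound importance
    covers `keep_fraction` of the total lower-bound importance."""
    ranked = sorted(summary.items(), key=lambda kv: kv[1]["ci"][0], reverse=True)
    total = sum(v["ci"][0] for _, v in ranked)
    kept, acc = [], 0.0
    for name, stats in ranked:
        kept.append(name)
        acc += stats["ci"][0]
        if total > 0 and acc / total >= keep_fraction:
            break
    return kept

if __name__ == "__main__":
    # Toy example (hypothetical data): 200 explained samples, three modalities.
    rng = np.random.default_rng(1)
    shap_vals = rng.normal(size=(200, 12))
    slices = {"ecg": range(0, 4), "eda": range(4, 10), "resp": range(10, 12)}
    summary = modality_importance_with_uncertainty(shap_vals, slices)
    print(select_modalities(summary, keep_fraction=0.8))
```

Ranking by the lower confidence bound is one conservative choice: a modality whose apparent importance could plausibly be noise is less likely to be included in the reduced explanation.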
| Original language | English |
|---|---|
| Article number | 101955 |
| Number of pages | 17 |
| Journal | Information Fusion |
| Volume | 100 |
| DOIs | |
| Publication status | Published - Dec 2023 |
Keywords
- Complexity
- Explainable AI
- Feature-based explanations
- Multimodal
- SHAP
- Uncertainty quantification