TY - JOUR
T1 - Uncertainty-Based Rejection in Machine Learning: Implications for Model Development and Interpretability
AU - Barandas, Marília
AU - Folgado, Duarte
AU - Santos, Ricardo
AU - Simão, Raquel
AU - Gamboa, Hugo
N1 - POCI-01-0247-FEDER-033479
PY - 2022/2/1
Y1 - 2022/2/1
N2 - Uncertainty is present in every prediction of Machine Learning (ML) models. Uncertainty Quantification (UQ) is therefore highly relevant, particularly for safety-critical applications. Prior research has focused on developing methods to quantify uncertainty; however, less attention has been given to leveraging knowledge of uncertainty during model development. This work focuses on putting UQ into practice, closing the gap regarding its utility in the ML pipeline and giving insights into how UQ can be used to improve model development and its interpretability. We identified three main research questions: (1) How can UQ contribute to choosing the most suitable model for a given classification task? (2) Can UQ be used to combine different models in a principled manner? (3) Can visualization techniques improve UQ’s interpretability? These questions are answered by applying several methods of uncertainty quantification to both a simulated dataset and a real-world Human Activity Recognition (HAR) dataset. Our results show that uncertainty quantification can increase model robustness and interpretability.
AB - Uncertainty is present in every prediction of Machine Learning (ML) models. Uncertainty Quantification (UQ) is therefore highly relevant, particularly for safety-critical applications. Prior research has focused on developing methods to quantify uncertainty; however, less attention has been given to leveraging knowledge of uncertainty during model development. This work focuses on putting UQ into practice, closing the gap regarding its utility in the ML pipeline and giving insights into how UQ can be used to improve model development and its interpretability. We identified three main research questions: (1) How can UQ contribute to choosing the most suitable model for a given classification task? (2) Can UQ be used to combine different models in a principled manner? (3) Can visualization techniques improve UQ’s interpretability? These questions are answered by applying several methods of uncertainty quantification to both a simulated dataset and a real-world Human Activity Recognition (HAR) dataset. Our results show that uncertainty quantification can increase model robustness and interpretability.
KW - Artificial intelligence
KW - Human activity recognition
KW - Interpretability
KW - Machine learning
KW - Rejection option
KW - Uncertainty quantification
UR - http://www.scopus.com/inward/record.url?scp=85123506978&partnerID=8YFLogxK
U2 - 10.3390/electronics11030396
DO - 10.3390/electronics11030396
M3 - Article
AN - SCOPUS:85123506978
SN - 2079-9292
VL - 11
JO - Electronics (Switzerland)
JF - Electronics (Switzerland)
IS - 3
M1 - 396
ER -