Quantified Explainability: Convolutional Neural Network Focus Assessment in Arrhythmia Detection

Research output: Contribution to journal › Article › peer-review


Abstract

In clinical practice, every decision should be reliable and explainable to the stakeholders. The high accuracy of deep learning (DL) models poses a great advantage, but the fact that they function as black boxes hinders their clinical application. Hence, explainability methods have become important, as they provide explanations for the decisions of DL models. In this study, two datasets of electrocardiogram (ECG) image representations of six heartbeats were built: one given the label of the last heartbeat and the other given the label of the first heartbeat. Each dataset was used to train one neural network. Finally, we applied well-known explainability methods to the resulting networks to explain their classifications. Explainability methods produce attribution maps in which pixel intensities are proportional to their importance for the classification task. We then developed a metric to quantify how much the models focus on the heartbeat of interest. The classification models achieved testing accuracy scores of 93.66% and 91.72%. The models focused around the heartbeat of interest, with values of the focus metric ranging between 8.8% and 32.4%. Future work will investigate the importance of regions outside the region of interest, as well as the contribution of specific ECG waves to the classification.
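The abstract's focus metric is not defined here; a plausible minimal sketch, assuming the metric is the fraction of total attribution mass that falls inside the region of interest (the heartbeat whose label the network predicts), could look like this. The function name `focus_score` and the boolean-mask interface are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def focus_score(attribution: np.ndarray, region_mask: np.ndarray) -> float:
    """Hypothetical focus metric: fraction of total absolute attribution
    that falls inside the region of interest (e.g. the labeled heartbeat).
    Returns a value in [0, 1]; higher means more focused on the region."""
    attr = np.abs(attribution)
    total = attr.sum()
    if total == 0.0:
        return 0.0  # degenerate map with no attribution anywhere
    return float(attr[region_mask].sum() / total)

# Toy example: an attribution map over an image split into 6 heartbeat
# columns, with all importance placed on the last heartbeat.
attribution = np.zeros((10, 6))
attribution[:, 5] = 1.0
mask = np.zeros((10, 6), dtype=bool)
mask[:, 5] = True  # region of interest: the last heartbeat
print(focus_score(attribution, mask))  # → 1.0
```

A mass-fraction definition like this is invariant to the overall scale of the attribution map, which makes scores comparable across explainability methods that produce maps with different magnitudes.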
Original language: English
Pages (from-to): 124-138
Number of pages: 15
Journal: BioMedInformatics
Volume: 2
Issue number: 1
DOIs
Publication status: Published - 17 Jan 2022
