Towards complementary explanations using deep neural networks

Wilson Silva, Kelwin Fernandes, Maria J. Cardoso, Jaime S. Cardoso

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

6 Citations (Scopus)

Abstract

Interpretability is a fundamental requirement for the acceptance of machine learning models in highly regulated areas. Recently, deep neural networks have gained the attention of the scientific community due to their high accuracy on a wide range of classification problems. However, they are still seen as black-box models in which it is hard to understand the reasons behind the labels they generate. This paper proposes a deep model with monotonic constraints that generates complementary explanations for its decisions, both in terms of style and depth. Furthermore, an objective framework for the evaluation of the explanations is presented. Our method is tested on two biomedical datasets and improves on traditional models in the quality of the explanations generated.

Original language: English
Title of host publication: Understanding and Interpreting Machine Learning in Medical Image Computing Applications - First International Workshops MLCN 2018, DLF 2018, and iMIMIC 2018, Held in Conjunction with MICCAI 2018, Proceedings
Editors: Zeike Taylor, Mauricio Reyes, M. Jorge Cardoso, Carlos A. Silva, Danail Stoyanov, Lena Maier-Hein, Sergio Pereira, Seyed Mostafa Kia, Ipek Oguz, Bennett Landman, Anne Martel, Edouard Duchesnay, Tommy Lofstedt, Andre F. Marquand, Raphael Meier
Publisher: Springer Verlag
Pages: 133-140
Number of pages: 8
ISBN (Print): 9783030026271
DOI: https://doi.org/10.1007/978-3-030-02628-8_15
Publication status: Published - 2018
Event: 1st International Workshop on Machine Learning in Clinical Neuroimaging, MLCN 2018, 1st International Workshop on Deep Learning Fails, DLF 2018, and 1st International Workshop on Interpretability of Machine Intelligence in Medical Image Computing, iMIMIC 2018, held in conjunction with the 21st International Conference on Medical Imaging and Computer-Assisted Intervention, MICCAI 2018 - Granada, Spain
Duration: 16 Sep 2018 → 20 Sep 2018

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 11038 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 1st International Workshop on Machine Learning in Clinical Neuroimaging, MLCN 2018, 1st International Workshop on Deep Learning Fails, DLF 2018, and 1st International Workshop on Interpretability of Machine Intelligence in Medical Image Computing, iMIMIC 2018, held in conjunction with the 21st International Conference on Medical Imaging and Computer-Assisted Intervention, MICCAI 2018
Country: Spain
City: Granada
Period: 16/09/18 → 20/09/18

Keywords

  • Aesthetics evaluation
  • Deep neural networks
  • Dermoscopy
  • Explanations
  • Interpretable machine learning


Cite this

    Silva, W., Fernandes, K., Cardoso, M. J., & Cardoso, J. S. (2018). Towards complementary explanations using deep neural networks. In Z. Taylor, M. Reyes, M. J. Cardoso, C. A. Silva, D. Stoyanov, L. Maier-Hein, S. Pereira, S. M. Kia, I. Oguz, B. Landman, A. Martel, E. Duchesnay, T. Lofstedt, A. F. Marquand, ... R. Meier (Eds.), Understanding and Interpreting Machine Learning in Medical Image Computing Applications - First International Workshops MLCN 2018, DLF 2018, and iMIMIC 2018, Held in Conjunction with MICCAI 2018, Proceedings (pp. 133-140). (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 11038 LNCS). Springer Verlag. https://doi.org/10.1007/978-3-030-02628-8_15