Aligning Artificial Neural Networks and Ontologies towards Explainable AI

Research output: Chapter in Book/Report/Conference proceeding · Conference contribution · peer-reviewed

30 Citations (Scopus)

Abstract

Neural networks have been key to solving a wide variety of problems. However, neural network models are still regarded as black boxes, since they do not provide any human-interpretable evidence as to why they output a certain result. We address this issue by leveraging ontologies and building small classifiers that map a neural network model’s internal state to concepts from an ontology, enabling the generation of symbolic justifications for the output of neural network models. Using an image classification problem as a testing ground, we discuss how to map the internal state of a neural network to the concepts of an ontology, examine whether the results obtained by the established mappings match our understanding of the mapped concepts, and analyze the justifications obtained through this method.
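The core idea of the paper can be pictured as a small probing classifier trained on a hidden layer's activations to predict whether an ontology concept holds for a given input. The sketch below is illustrative only and is not the authors' implementation: the PyTorch ResNet backbone, the choice of layer3, and a concept such as "Wing" are assumptions made for the example.

    # Illustrative sketch: probe one hidden layer of an image classifier for an
    # ontology concept. Backbone, layer choice, and concept labels are assumed.
    import torch
    import torch.nn as nn
    import torchvision.models as models

    backbone = models.resnet18(weights=None)   # stand-in for the trained model
    activations = {}

    def hook(_module, _inputs, output):
        # Store the flattened internal state of the chosen layer.
        activations["layer3"] = output.flatten(1).detach()

    backbone.layer3.register_forward_hook(hook)

    # Small concept classifier: hidden state -> "does the concept hold?"
    # Input dimension depends on the layer and image size (256*14*14 for
    # a 224x224 input to resnet18's layer3).
    probe = nn.Linear(256 * 14 * 14, 1)
    optimizer = torch.optim.Adam(probe.parameters(), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()

    def train_step(images, concept_labels):
        """concept_labels[i] = 1 if image i instantiates the concept (e.g. Wing)."""
        with torch.no_grad():
            backbone(images)                   # forward pass fills `activations`
        logits = probe(activations["layer3"]).squeeze(1)
        loss = loss_fn(logits, concept_labels.float())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

The trained probe's predictions over the ontology's concepts can then be combined into a symbolic justification for the network's output, in the spirit described in the abstract.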

Original language: English
Title of host publication: 35th AAAI Conference on Artificial Intelligence, AAAI 2021
Publisher: Association for the Advancement of Artificial Intelligence
Pages: 4932-4940
Number of pages: 9
ISBN (Electronic): 9781713835974
Publication status: Published - 2021
Event: 35th AAAI Conference on Artificial Intelligence, AAAI 2021 - Virtual, Online
Duration: 2 Feb 2021 - 9 Feb 2021

Publication series

Name: AAAI Conference on Artificial Intelligence
Publisher: Association for the Advancement of Artificial Intelligence
Volume: 35
ISSN (Print): 2159-5399
ISSN (Electronic): 2374-3468

Conference

Conference: 35th AAAI Conference on Artificial Intelligence, AAAI 2021
City: Virtual, Online
Period: 2/02/21 - 9/02/21
