Employing AI for better understanding our morals

Luís Moniz Pereira, António Barata Lopes

Research output: Chapter in Book/Report/Conference proceeding › Chapter › peer-review

Abstract

Having addressed the prerequisite issues for a justified and contextualized computational morality, the absence of radically new problems arising from the co-presence of agents of different natures, and the difficulties inherent in creating moral algorithms, it is time to present the research we have conducted. This research considers both the programming aspects proper and the need for protocols regulating competition among companies or countries. Its aim is a benevolent AI that contributes to the fair distribution of the benefits of development and attempts to block the tendency towards the concentration of wealth and power. Our approach denounces and avoids the statistical models used to solve moral dilemmas, because they are “blind” and risk perpetuating mistakes. Instead, we use an approach in which counterfactual reasoning plays a fundamental role and, considering morality primarily a matter of groups, we present conclusions from studies involving the pairs egoism/altruism, collaboration/competition, and acknowledgment of error/apology. These are basic elements of most moral systems, and the studies make it possible to draw generalizable and programmable conclusions for attaining group sustainability and greater global benefit, regardless of the groups' constituents.
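Purely as an illustration of the kind of group-level study the abstract mentions (acknowledgment of error followed by apology as a mechanism for sustaining cooperation), the following is a minimal Python sketch, not the authors' model: two reciprocating agents play a noisy iterated Prisoner's Dilemma, and allowing an apology after an unintended defection prevents retaliation spirals and raises total group payoff. The payoff values, apology cost, noise rate, and round count are all illustrative assumptions.

import random

# Standard Prisoner's Dilemma payoffs for the row player: T > R > P > S (illustrative values)
R, S, T, P = 3, 0, 5, 1
APOLOGY_COST = 0.5   # small cost paid by the apologising agent (assumption)
NOISE = 0.05         # probability that an intended cooperation comes out as a defection
ROUNDS = 200

def payoff(a, b):
    """Return the row player's payoff for moves a, b in {'C', 'D'}."""
    return {('C', 'C'): R, ('C', 'D'): S, ('D', 'C'): T, ('D', 'D'): P}[(a, b)]

def play(apology_enabled, seed=0):
    """Two reciprocators play a noisy iterated PD; return the total group payoff."""
    rng = random.Random(seed)
    grudge = [False, False]   # grudge[i]: player i retaliates in the next round
    total = 0.0
    for _ in range(ROUNDS):
        intended = ['D' if grudge[i] else 'C' for i in range(2)]
        # Execution noise: an intended cooperation may slip into a defection.
        moves = ['D' if m == 'C' and rng.random() < NOISE else m for m in intended]
        total += payoff(moves[0], moves[1]) + payoff(moves[1], moves[0])
        for i in range(2):
            j = 1 - i
            if moves[j] == 'D':
                if apology_enabled and intended[j] == 'C':
                    # The accidental defector apologises; the partner forgives.
                    total -= APOLOGY_COST
                    grudge[i] = False
                else:
                    grudge[i] = True   # an unforgiven defection triggers retaliation
            else:
                grudge[i] = False
    return total

if __name__ == "__main__":
    without = sum(play(False, s) for s in range(30)) / 30
    with_ap = sum(play(True, s) for s in range(30)) / 30
    print(f"mean group payoff without apology: {without:.1f}")
    print(f"mean group payoff with apology:    {with_ap:.1f}")

Running the script typically shows a higher mean group payoff when apology is enabled, which is the qualitative point at stake: acknowledging and repairing error sustains cooperation within the group.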

Original language: English
Title of host publication: Machine Ethics
Place of publication: Cham
Publisher: Springer
Pages: 121-134
Number of pages: 14
ISBN (Electronic): 978-3-030-39630-5
ISBN (Print): 978-3-030-39629-9
DOIs
Publication status: Published - 2020

Publication series

Name: Studies in Applied Philosophy, Epistemology and Rational Ethics
Publisher: Springer
Volume: 53
ISSN (Print): 2192-6255
ISSN (Electronic): 2192-6263
