Bayesian Learning

Research output: Chapter in Book/Report/Conference proceeding › Chapter › peer-review

2 Citations (Scopus)

Abstract

Bayesian learning (Tipping, 2004; Barber, 2012) is the name commonly used to identify a set of computational methods for supervised learning based on Bayes' Theorem. Broadly speaking, Bayes' Theorem deals with the modification of our perception of the probability of an event as a consequence of the occurrence of one or more facts. For instance, what probability would you assign right now to the event "somebody stole my car"? Of course, this can depend on many different factors, but on a normal day, one may argue that this probability is generally rather low. Now, imagine that you go looking for your car, and it is not in the place where you remember parking it. What is now the probability of the event "somebody stole my car"? The fact that the car is not where it was parked clearly changes the probability that it was stolen. This property is general: the realization of some events can modify the probability of others. This property can be exploited to tackle Machine Learning tasks, for instance classification: data, interpreted as events, can be used to change the probability that a given observation belongs to a given class. Before studying this mechanism in detail, let us first present Bayes' Theorem and its most immediate use in Machine Learning.
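The update described in the abstract can be sketched numerically. The following is a minimal illustration of Bayes' Theorem applied to the stolen-car example; all the probability values are invented for illustration and do not come from the chapter.

```python
def posterior(prior, likelihood, likelihood_complement):
    """Bayes' Theorem: P(H | E) = P(E | H) P(H) / P(E),
    where P(E) = P(E | H) P(H) + P(E | not H) P(not H)."""
    evidence = likelihood * prior + likelihood_complement * (1.0 - prior)
    return likelihood * prior / evidence

# Hypothetical numbers (assumptions, not from the source):
prior_stolen = 0.001            # P(stolen) on a normal day
p_missing_if_stolen = 0.95      # P(car missing | stolen)
p_missing_if_not_stolen = 0.01  # P(car missing | not stolen), e.g. towed or misremembered

p_stolen_given_missing = posterior(
    prior_stolen, p_missing_if_stolen, p_missing_if_not_stolen
)
print(f"P(stolen | car missing) = {p_stolen_given_missing:.4f}")
```

Observing that the car is missing raises the probability of theft from 0.1% to roughly 8.7%: the event "car missing" substantially modifies our belief, exactly the mechanism that Bayesian classification exploits when data events update class probabilities.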

Original language: English
Title of host publication: Natural Computing Series
Place of Publication: Cham, Switzerland
Publisher: Springer, Cham
Pages: 259-270
Number of pages: 12
ISBN (Electronic): 978-3-031-17922-8
ISBN (Print): 978-3-031-17921-1, 978-3-031-17924-2
DOIs
Publication status: Published - 13 Jan 2023

Publication series

Name: Natural Computing Series
ISSN (Print): 1619-7127
