Delta rule and backpropagation

Research output: Chapter in Book/Report/Conference proceeding › Entry for encyclopedia/dictionary (peer-reviewed)


Assuming that the reader is already familiar with the general concept of an Artificial Neural Network and with the Perceptron learning rule, this paper introduces the Delta learning rule as a basis for the Backpropagation learning rule. After discussing why multi-layer Artificial Neural Networks are needed to solve non-linearly separable problems, the paper describes all the mathematical steps that lead from the simple gradient descent formulation to the Backpropagation algorithm, still one of the most widely used methods for training feed-forward multi-layer Artificial Neural Networks. The paper concludes by discussing overfitting in feed-forward multi-layer Artificial Neural Networks and by presenting some heuristics and ideas for appropriate parameter setting.
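The Delta (Widrow-Hoff) rule the abstract refers to can be illustrated with a short sketch: gradient descent on the squared error of a single linear unit y = w · x, updating each weight by η · (t − y) · xᵢ. The toy data set, learning rate, and function names below are illustrative assumptions, not taken from the chapter itself.

```python
# Minimal sketch of the Delta (Widrow-Hoff) learning rule for a single
# linear unit y = w . x, trained by gradient descent on squared error.
# Data set, learning rate, and epoch count are illustrative assumptions.

def delta_rule_train(samples, eta=0.05, epochs=200):
    """samples: list of (inputs, target) pairs; returns the learned weights."""
    n = len(samples[0][0])
    w = [0.0] * n
    for _ in range(epochs):
        for x, t in samples:
            y = sum(wi * xi for wi, xi in zip(w, x))  # linear unit output
            err = t - y
            # Delta rule: step each weight along the negative error gradient
            w = [wi + eta * err * xi for wi, xi in zip(w, x)]
    return w

# Toy data generated from the target function y = 2*x1 - 1*x2 (an assumption)
data = [([1.0, 0.0], 2.0), ([0.0, 1.0], -1.0),
        ([1.0, 1.0], 1.0), ([2.0, 1.0], 3.0)]
weights = delta_rule_train(data)
print([round(wi, 2) for wi in weights])  # weights approach [2.0, -1.0]
```

Backpropagation, as the abstract notes, extends exactly this gradient computation through the hidden layers of a multi-layer network via the chain rule.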

Original language: English
Title of host publication: Encyclopedia of Bioinformatics and Computational Biology
Subtitle of host publication: ABC of Bioinformatics
Editors: Shoba Ranganathan, Michael Gribskov, Kenta Nakai, Christian Schönbach
Number of pages: 13
ISBN (Electronic): 9780128114322
ISBN (Print): 9780128114148
Publication status: Published - 2019


Keywords:

  • Artificial neural networks
  • Backpropagation learning rule
  • Delta learning rule
  • Multi-layer neural networks
  • Non-linearly separable problems

