Genetic programming for stacked generalization

Research output: Contribution to journal › Article › peer-review

12 Citations (Scopus)
9 Downloads (Pure)


In machine learning, ensemble techniques are widely used to improve the performance of both classification and regression systems. They combine the models generated by different learning algorithms, typically trained on different data subsets or with different parameters, to obtain more accurate models. Ensemble strategies range from simple voting rules to more complex and effective stacked approaches, which adopt a meta-learner, i.e., a further learning algorithm trained on the predictions provided by the individual algorithms making up the ensemble. This paper aims to exploit some of the most recent genetic programming (GP) advances in the context of stacked generalization. In particular, we investigate how the evolutionary demes despeciation initialization technique, ϵ-lexicase selection, geometric-semantic operators, and the semantic stopping criterion can be effectively used to improve the performance of GP-based systems for stacked generalization (a.k.a. stacking). The experiments, performed on a broad set of synthetic and real-world regression problems, confirm the effectiveness of the proposed approach.
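The two-level structure described above (base learners whose predictions become the training inputs of a meta-learner) can be sketched in a few lines. This is a minimal, hypothetical illustration of plain stacked generalization with linear models, not the paper's GP-based method; all names and the synthetic data are assumptions for the example.

```python
import numpy as np

# Hypothetical stacking sketch (NOT the paper's GP-based approach):
# two linear base regressors, each seeing a different feature subset,
# feed a linear meta-learner trained on their predictions.

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = 2.0 * X[:, 0] - X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.1, size=200)

def fit_linear(X, y):
    """Ordinary least squares with an intercept column."""
    A = np.column_stack([X, np.ones(len(X))])
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return w

def predict_linear(w, X):
    A = np.column_stack([X, np.ones(len(X))])
    return A @ w

# Level 0: base models trained on different data (feature) subsets.
w1 = fit_linear(X[:, :2], y)   # sees features 0 and 1
w2 = fit_linear(X[:, 1:], y)   # sees features 1 and 2
P = np.column_stack([predict_linear(w1, X[:, :2]),
                     predict_linear(w2, X[:, 1:])])

# Level 1: the meta-learner is trained on the base predictions.
w_meta = fit_linear(P, y)
y_hat = predict_linear(w_meta, P)

mse = float(np.mean((y_hat - y) ** 2))
print(f"stacked MSE: {mse:.4f}")
```

In a stacking system such as the one the paper studies, the meta-learner at level 1 would instead be evolved by genetic programming rather than fitted by least squares.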

Original language: English
Article number: 100913
Pages (from-to): 1-14
Number of pages: 14
Journal: Swarm and Evolutionary Computation
Early online date: 26 May 2021
Publication status: Published - Aug 2021


  • Ensemble Learning
  • Genetic Programming
  • Stacked Generalization
  • Stacking


