TY - JOUR
T1 - Genetic programming for stacked generalization
AU - Bakurov, Illya
AU - Castelli, Mauro
AU - Gau, Olivier
AU - Fontanella, Francesco
AU - Vanneschi, Leonardo
N1 - info:eu-repo/grantAgreement/FCT/3599-PPCDT/DSAIPA%2FDS%2F0113%2F2019/PT#
info:eu-repo/grantAgreement/FCT/3599-PPCDT/DSAIPA%2FDS%2F0022%2F2018/PT#
Bakurov, I., Castelli, M., Gau, O., Fontanella, F., & Vanneschi, L. (2021). Genetic programming for stacked generalization. Swarm and Evolutionary Computation, 65, 1-14. [100913]. https://doi.org/10.1016/j.swevo.2021.100913
PY - 2021/8
Y1 - 2021/8
AB - In machine learning, ensemble techniques are widely used to improve the performance of both classification and regression systems. They combine the models generated by different learning algorithms, typically trained on different data subsets or with different parameters, to obtain more accurate models. Ensemble strategies range from simple voting rules to more complex and effective stacked approaches, which rely on a meta-learner, i.e., a further learning algorithm trained on the predictions provided by the individual algorithms making up the ensemble. This paper exploits some of the most recent genetic programming advances in the context of stacked generalization. In particular, we investigate how the evolutionary demes despeciation initialization technique, ϵ-lexicase selection, geometric-semantic operators, and the semantic stopping criterion can be effectively used to improve the performance of GP-based systems for stacked generalization (a.k.a. stacking). The experiments, performed on a broad set of synthetic and real-world regression problems, confirm the effectiveness of the proposed approach.
KW - Ensemble Learning
KW - Genetic Programming
KW - Stacked Generalization
KW - Stacking
UR - http://www.scopus.com/inward/record.url?scp=85108112139&partnerID=8YFLogxK
UR - https://www.webofscience.com/wos/woscc/full-record/WOS:000692828700013
U2 - 10.1016/j.swevo.2021.100913
DO - 10.1016/j.swevo.2021.100913
M3 - Article
AN - SCOPUS:85108112139
SN - 2210-6502
VL - 65
SP - 1
EP - 14
JO - Swarm and Evolutionary Computation
JF - Swarm and Evolutionary Computation
M1 - 100913
ER -