With the advances of Machine Learning, the number of predictive models that can be used in a given situation has grown enormously, and scientists wishing to use Machine Learning must spend a significant amount of time searching for, testing and tuning those models. This inevitably affects research quality. Many scientists are currently working on approaches that automate this process by devising algorithms that tune, select or combine multiple models for a specific application; examples include ensemble methods, hyper-heuristics and meta-learning algorithms. There has been great progress in this direction, but typical approaches lack a unifying structure on which such ensemble, hyper-heuristic or meta-learning algorithms can be built. In this thesis we present a new meta-learning method based on Geometric Semantic Genetic Programming. The key novelty of this approach is the use of semantics as an intermediate representation for working with models of different nature. We show that the approach is general and can be applied to any model; in particular, we apply it to regression problems and validate our hypotheses experimentally on datasets from real-life problems.
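To make the central idea concrete, the following minimal sketch (not the thesis implementation; model names, data and the fixed crossover weight are illustrative assumptions) shows how the "semantics" of a regression model can be taken as its vector of outputs on the training cases, giving models of different nature a common representation on which geometric semantic operators can act:

```python
import numpy as np

# Hypothetical training inputs for a regression task (illustrative only).
X = np.array([[0.0], [1.0], [2.0], [3.0]])

# Two models of different nature: a linear rule and a quadratic rule.
model_a = lambda x: 2.0 * x[:, 0] + 1.0
model_b = lambda x: x[:, 0] ** 2

# The "semantics" of a model is its vector of outputs on the training cases,
# which gives heterogeneous models a common, comparable representation.
sem_a = model_a(X)  # [1., 3., 5., 7.]
sem_b = model_b(X)  # [0., 1., 4., 9.]

# Geometric semantic crossover (regression form): a convex combination of the
# parents, so the offspring's semantics lies on the segment joining theirs.
r = 0.5  # weight in [0, 1]; normally drawn at random, fixed here for clarity
offspring = lambda x: r * model_a(x) + (1.0 - r) * model_b(x)

print(offspring(X))  # offspring semantics: [0.5, 2.0, 4.5, 8.0]
```

Because the operator acts only on output vectors, it never needs to inspect the internal structure of the combined models, which is what makes the semantic representation model-agnostic.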
- Qualification: Doctor of Philosophy
- Award date: 23 Jul 2019
- Publication status: Published - 23 Jul 2019
- Meta learning