Abstract
Hyperparameter optimization in machine learning (ML) deals with the problem of empirically learning an optimal algorithm configuration from data, usually formulated as a black-box optimization problem. In this work, we propose a zero-shot method to meta-learn symbolic default hyperparameter configurations that are expressed in terms of properties of the dataset. This enables a much faster, but still data-dependent, configuration of the ML algorithm compared to standard hyperparameter optimization approaches. In the past, symbolic and static default values have usually been obtained as hand-crafted heuristics. We propose an approach to learning such symbolic configurations as formulas of dataset properties from a large set of prior evaluations on multiple datasets, by optimizing over a grammar of expressions using an evolutionary algorithm. We evaluate our method on surrogate empirical performance models as well as on real data across 6 ML algorithms on more than 100 datasets and demonstrate that our method indeed finds viable symbolic defaults.
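To make the idea concrete, below is a minimal, hypothetical Python sketch (not the authors' implementation) of how a symbolic default can be represented as an expression over dataset meta-features and perturbed by a simple mutation operator, as an evolutionary search over an expression grammar would do. The meta-feature names `n`, `p`, `n_classes`, the operator set, and all function names are illustrative assumptions.

```python
# Minimal sketch (assumed names, not the authors' code): a symbolic default
# hyperparameter as a formula over dataset meta-features, plus a simple
# mutation step over a small expression grammar.
import math
import random

# Dataset meta-features a formula may reference (hypothetical choices).
META_FEATURES = ["n", "p", "n_classes"]

# Grammar: an expression is a meta-feature, a constant, or an operator
# applied to sub-expressions.
UNARY_OPS = {"sqrt": math.sqrt, "log": lambda x: math.log(max(x, 1e-12))}
BINARY_OPS = {"add": lambda a, b: a + b,
              "mul": lambda a, b: a * b,
              "div": lambda a, b: a / b if b != 0 else 1.0}

def random_expr(depth=2):
    """Sample a random expression tree from the grammar."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(META_FEATURES + [round(random.uniform(0.1, 10.0), 2)])
    if random.random() < 0.5:
        return (random.choice(list(UNARY_OPS)), random_expr(depth - 1))
    return (random.choice(list(BINARY_OPS)), random_expr(depth - 1), random_expr(depth - 1))

def evaluate(expr, meta):
    """Evaluate an expression tree on one dataset's meta-features."""
    if isinstance(expr, str):
        return meta[expr]
    if isinstance(expr, (int, float)):
        return expr
    if len(expr) == 2:
        op, a = expr
        return UNARY_OPS[op](evaluate(a, meta))
    op, a, b = expr
    return BINARY_OPS[op](evaluate(a, meta), evaluate(b, meta))

def mutate(expr, depth=2):
    """Point mutation: replace a random subtree with a freshly sampled one."""
    if isinstance(expr, (str, int, float)) or random.random() < 0.3:
        return random_expr(depth)
    return tuple([expr[0]] + [mutate(sub, depth - 1) for sub in expr[1:]])

# Example: a candidate symbolic default such as sqrt(p), instantiated on one
# (hypothetical) dataset with 1000 instances and 64 features.
candidate = ("sqrt", "p")
print(evaluate(candidate, {"n": 1000, "p": 64, "n_classes": 2}))  # -> 8.0
print(mutate(candidate))
```

In the approach described in the abstract, candidates like these would be scored against prior evaluations (or surrogate performance models) across many datasets, and the evolutionary algorithm would keep the formulas that transfer best.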
Original language | English
---|---
Title of host publication | GECCO 2021 Companion - Proceedings of the 2021 Genetic and Evolutionary Computation Conference Companion
Subtitle of host publication | Proceedings of the Genetic and Evolutionary Computation Conference Companion
Publisher | Association for Computing Machinery, Inc
Pages | 151-152
Number of pages | 2
ISBN (Electronic) | 978-1-4503-8351-6
Publication status | Published - Jul 2021 |
Bibliographical note
2-page GECCO poster paper; the full-length original submission is available as a preprint on arXiv: https://arxiv.org/abs/2106.05767v2

Keywords
- hyperparameter optimization
- metalearning