Abstract
Boosting combines weak (biased) learners to obtain effective learning algorithms for classification and prediction. In this paper, we show a connection between boosting and kernel-based methods, highlighting both theoretical and practical applications. In the ℓ2 context, we show that boosting with a weak learner defined by a kernel K is equivalent to estimation with a special boosting kernel. The number of boosting iterations can then be modeled as a continuous hyperparameter, and fit (along with other parameters) using standard techniques. We then generalize the boosting kernel to a broad new class of boosting approaches for general weak learners, including those based on the ℓ1, hinge and Vapnik losses. We develop fast hyperparameter tuning for this class, which has a wide range of applications including robust regression and classification. We illustrate several applications using synthetic and real data.
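The ℓ2 equivalence described above can be sketched numerically. For L2-boosting with a linear kernel smoother S = K(K + λI)⁻¹ as the weak learner, the fit after m rounds on the training data is (I − (I − S)^m)y; diagonalizing S turns m into a continuous hyperparameter, since the shrinkage factors 1 − (1 − d)^m are defined for non-integer m. The kernel choice, bandwidth, and λ below are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Minimal sketch of L2-boosting with a kernel ridge smoother as weak learner.
# After m rounds the boosted fit is (I - (I - S)^m) y, where
# S = K (K + lam * I)^{-1}. Diagonalizing S lets m vary continuously.

rng = np.random.default_rng(0)
n = 50
x = np.linspace(0.0, 1.0, n)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(n)

# Gaussian kernel matrix (illustrative choice)
K = np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.02)
lam = 1.0
S = K @ np.linalg.inv(K + lam * np.eye(n))

def boosted_fit(m):
    """Training-set fit after m boosting rounds; m may be non-integer."""
    d, U = np.linalg.eigh((S + S.T) / 2)   # symmetrize for numerical safety
    shrink = 1.0 - (1.0 - d) ** m          # eigenvalues of I - (I - S)^m
    return U @ (shrink * (U.T @ y))

def train_err(m):
    return np.mean((boosted_fit(m) - y) ** 2)

# More rounds shrink the residual toward zero: training error decreases in m,
# so m plays the role of a regularization hyperparameter to be tuned.
assert train_err(10.0) < train_err(1.0)
```

Because the eigenvalues of S lie in [0, 1), the factors (1 − d)^m decay monotonically in m, which is what makes treating m as a continuous, tunable parameter well posed.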
| Original language | English |
|---|---|
| Pages (from-to) | 1951-1974 |
| Number of pages | 24 |
| Journal | Machine Learning |
| Volume | 108 |
| Issue number | 11 |
| Publication status | Published - 1 Nov 2019 |
Keywords
- Boosting
- Kernel-based methods
- Reproducing kernel Hilbert spaces
- Robust estimation
- Weak learners