Boosting as a kernel-based method

Aleksandr Y. Aravkin (Corresponding author), Giulio Bottegal, Gianluigi Pillonetto

Research output: Contribution to journal › Article › Academic › peer-review

1 Citation (Scopus)
1 Download (Pure)


Boosting combines weak (biased) learners to obtain effective learning algorithms for classification and prediction. In this paper, we show a connection between boosting and kernel-based methods, highlighting both theoretical and practical applications. In the ℓ2 context, we show that boosting with a weak learner defined by a kernel K is equivalent to estimation with a special boosting kernel. The number of boosting iterations can then be modeled as a continuous hyperparameter, and fit (along with other parameters) using standard techniques. We then generalize the boosting kernel to a broad new class of boosting approaches for general weak learners, including those based on the ℓ1, hinge and Vapnik losses. We develop fast hyperparameter tuning for this class, which has a wide range of applications including robust regression and classification. We illustrate several applications using synthetic and real data.
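The equivalence described in the abstract can be sketched numerically: for ℓ2 boosting with a kernel ridge weak learner, iteratively refitting the residual for a fixed number of iterations matches a single estimation with a modified ("boosting") smoother. The sketch below assumes a Gaussian kernel and illustrative parameter names (lam, nu); these choices are ours, not the paper's.

```python
import numpy as np

# Minimal sketch of L2 boosting with a kernel-based weak learner.
# The weak learner is kernel ridge regression with smoother matrix
# S = K (K + lam*I)^{-1}; after nu boosting rounds the fit equals
# (I - (I - S)^nu) y, i.e. estimation with a single "boosted" smoother.
# Kernel choice and parameter values here are assumptions for illustration.

rng = np.random.default_rng(0)
n = 50
x = np.linspace(0.0, 1.0, n)
y = np.sin(2 * np.pi * x) + 0.3 * rng.standard_normal(n)

# Gaussian kernel Gram matrix (an assumed choice of K)
K = np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.02)
lam = 5.0
S = K @ np.linalg.inv(K + lam * np.eye(n))  # weak learner's smoother

# Iterative L2 boosting: repeatedly fit the current residual
nu = 20
fit = np.zeros(n)
for _ in range(nu):
    fit = fit + S @ (y - fit)

# Closed form: single application of the boosted smoother I - (I - S)^nu
closed = (np.eye(n) - np.linalg.matrix_power(np.eye(n) - S, nu)) @ y

assert np.allclose(fit, closed)
```

Since the closed form depends on nu only through a matrix power, nu can be treated as a tunable (even continuous, via an eigendecomposition of S) hyperparameter, which is the practical point the abstract highlights.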

Original language: English
Pages (from-to): 1951-1974
Number of pages: 24
Journal: Machine Learning
Issue number: 11
Publication status: Published - 1 Nov 2019


  • Boosting
  • Kernel-based methods
  • Reproducing kernel Hilbert spaces
  • Robust estimation
  • Weak learners


