Boosting as a kernel-based method

Aleksandr Y. Aravkin (Corresponding author), Giulio Bottegal, Gianluigi Pillonetto

Research output: Contribution to journal › Article › Academic › peer-review

Abstract

Boosting combines weak (biased) learners to obtain effective learning algorithms for classification and prediction. In this paper, we show a connection between boosting and kernel-based methods, highlighting both theoretical and practical applications. In the ℓ2 context, we show that boosting with a weak learner defined by a kernel K is equivalent to estimation with a special boosting kernel. The number of boosting iterations can then be modeled as a continuous hyperparameter, and fit (along with other parameters) using standard techniques. We then generalize the boosting kernel to a broad new class of boosting approaches for general weak learners, including those based on the ℓ1, hinge and Vapnik losses. We develop fast hyperparameter tuning for this class, which has a wide range of applications including robust regression and classification. We illustrate several applications using synthetic and real data.
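
To make the ℓ2 equivalence concrete, below is a minimal numerical sketch (not the authors' code; the Gaussian kernel, toy data, and regularization weight lam are assumptions made for illustration). It runs m rounds of ℓ2-boosting with a kernel ridge regression weak learner and checks that the result coincides with a single linear-smoother estimate, which is the boosting-kernel viewpoint described in the abstract.

```python
# Minimal sketch of the l2 boosting-kernel equivalence (illustrative only;
# kernel, data, and lam are invented for this demo, not taken from the paper).
import numpy as np

rng = np.random.default_rng(0)
n, lam, m = 50, 1.0, 25

# Toy 1-D regression data and a Gaussian kernel (assumed choices)
x = np.linspace(0, 1, n)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(n)
K = np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.05)

# Weak learner: kernel ridge regression smoother S = K (K + lam I)^{-1}
S = K @ np.linalg.inv(K + lam * np.eye(n))

# l2-boosting: repeatedly refit the weak learner to the current residuals
f = np.zeros(n)
for _ in range(m):
    f = f + S @ (y - f)

# Closed form: the residual contracts by (I - S) each round, so after m
# rounds the fit is a single linear smoother, (I - (I - S)^m) y
f_closed = (np.eye(n) - np.linalg.matrix_power(np.eye(n) - S, m)) @ y

print(np.allclose(f, f_closed))  # expected: True
```

In this closed form the iteration count m enters only through the matrix power (I - S)^m, which extends naturally to non-integer m via the eigendecomposition of S; this is consistent with the abstract's point that the number of boosting iterations can be treated as a continuous hyperparameter and tuned alongside the other kernel parameters.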

Language: English
Journal: Machine Learning (Springer; ISSN 0885-6125)
DOI: 10.1007/s10994-019-05797-z
State: E-pub ahead of print, 17 May 2019

Keywords

  • Boosting
  • Kernel-based methods
  • Reproducing kernel Hilbert spaces
  • Robust estimation
  • Weak learners

Cite this

Aravkin, A. Y., Bottegal, G., & Pillonetto, G. (2019). Boosting as a kernel-based method. Machine Learning. https://doi.org/10.1007/s10994-019-05797-z