Speeding up algorithm selection using average ranking and active testing by introducing runtime

S.M. Abdulrahman, P. Brazdil, J.N. van Rijn, J. Vanschoren

Research output: Contribution to journal › Article › Academic › peer-review

1 Citation (Scopus)
32 Downloads (Pure)

Abstract

Algorithm selection methods can be sped up substantially by incorporating multi-objective measures that give preference to algorithms that are both promising and fast to evaluate. In this paper, we introduce such a measure, A3R, and incorporate it into two algorithm selection techniques: average ranking and active testing. Average ranking combines the algorithm rankings observed on prior datasets to identify the best algorithms for a new dataset. Active testing iteratively selects algorithms to be tested on the new dataset, learning from each new evaluation to intelligently choose the next best candidate. We show how both methods can be upgraded to use A3R, which combines accuracy and runtime. It is necessary to strike the right balance between the two, as otherwise time is wasted on less informative tests; this balance is set by a parameter of the A3R function that trades off accuracy against runtime. Our results demonstrate that the upgraded versions of average ranking and active testing achieve much better mean interval loss values than their accuracy-based counterparts.
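
The abstract does not spell out how A3R is defined. The sketch below is a minimal, hypothetical illustration of the idea it describes, assuming the measure rewards accuracy and penalises runtime through a tunable exponent, and assuming a simple mean-rank aggregation for average ranking; the function names, the parameter `p`, and the toy numbers are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of an accuracy/runtime trade-off measure in the spirit
# of A3R, plus an average-ranking step that uses it to order candidate
# algorithms. The exact formula and parameter values are assumptions; the
# paper's own definition may differ in detail.

def a3r_score(accuracy, runtime, ref_accuracy=1.0, ref_runtime=1.0, p=1.0):
    """Trade accuracy against runtime: higher accuracy helps, longer runtime
    hurts, and p controls how strongly runtime is penalised (p = 0 reduces
    to pure accuracy)."""
    return (accuracy / ref_accuracy) / (runtime / ref_runtime) ** p


def average_ranking(per_dataset_scores):
    """Combine per-dataset scores into one ranking: rank the algorithms on
    every prior dataset, then sort them by their mean rank. Assumes every
    dataset scored the same set of algorithms."""
    algorithms = list(next(iter(per_dataset_scores.values())).keys())
    rank_sums = {a: 0.0 for a in algorithms}
    for scores in per_dataset_scores.values():
        ordered = sorted(algorithms, key=lambda a: scores[a], reverse=True)
        for rank, algo in enumerate(ordered, start=1):
            rank_sums[algo] += rank
    n = len(per_dataset_scores)
    return sorted(algorithms, key=lambda a: rank_sums[a] / n)


# Toy usage: two prior datasets, three algorithms, scores computed with the
# assumed A3R-style measure (accuracy, runtime in seconds).
prior = {
    "d1": {"rf": a3r_score(0.90, 120, p=0.1),
           "svm": a3r_score(0.91, 900, p=0.1),
           "nb": a3r_score(0.80, 2, p=0.1)},
    "d2": {"rf": a3r_score(0.85, 200, p=0.1),
           "svm": a3r_score(0.86, 1500, p=0.1),
           "nb": a3r_score(0.78, 3, p=0.1)},
}
print(average_ranking(prior))  # algorithms ordered by mean A3R-based rank
```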

Original language: English
Pages (from-to): 79-108
Number of pages: 30
Journal: Machine Learning
Volume: 107
Issue number: 1
DOI: 10.1007/s10994-017-5687-8
Publication status: Published - 8 Jan 2018

Keywords

  • Active testing
  • Algorithm selection
  • Average ranking
  • Loss curves
  • Mean interval loss
  • Meta-learning
  • Ranking of algorithms

Cite this

Abdulrahman, S.M.; Brazdil, P.; van Rijn, J.N.; Vanschoren, J. Speeding up algorithm selection using average ranking and active testing by introducing runtime. In: Machine Learning. 2018; Vol. 107, No. 1, pp. 79-108.
@article{55d55a86c2a04f0284c0112e8343b4cf,
title = "Speeding up algorithm selection using average ranking and active testing by introducing runtime",
abstract = "Algorithm selection methods can be speeded-up substantially by incorporating multi-objective measures that give preference to algorithms that are both promising and fast to evaluate. In this paper, we introduce such a measure, A3R, and incorporate it into two algorithm selection techniques: average ranking and active testing. Average ranking combines algorithm rankings observed on prior datasets to identify the best algorithms for a new dataset. The aim of the second method is to iteratively select algorithms to be tested on the new dataset, learning from each new evaluation to intelligently select the next best candidate. We show how both methods can be upgraded to incorporate a multi-objective measure A3R that combines accuracy and runtime. It is necessary to establish the correct balance between accuracy and runtime, as otherwise time will be wasted by conducting less informative tests. The correct balance can be set by an appropriate parameter setting within function A3R that trades off accuracy and runtime. Our results demonstrate that the upgraded versions of Average Ranking and Active Testing lead to much better mean interval loss values than their accuracy-based counterparts.",
keywords = "Active testing, Algorithm selection, Average ranking, Loss curves, Mean interval loss, Meta-learning, Ranking of algorithms",
author = "S.M. Abdulrahman and P. Brazdil and {van Rijn}, J.N. and J. Vanschoren",
year = "2018",
month = "1",
day = "8",
doi = "10.1007/s10994-017-5687-8",
language = "English",
volume = "107",
pages = "79--108",
journal = "Machine Learning",
issn = "0885-6125",
publisher = "Springer",
number = "1",

}

TY - JOUR

T1 - Speeding up algorithm selection using average ranking and active testing by introducing runtime

AU - Abdulrahman, S.M.

AU - Brazdil, P.

AU - van Rijn, J.N.

AU - Vanschoren, J.

PY - 2018/1/8

Y1 - 2018/1/8

AB - Algorithm selection methods can be speeded-up substantially by incorporating multi-objective measures that give preference to algorithms that are both promising and fast to evaluate. In this paper, we introduce such a measure, A3R, and incorporate it into two algorithm selection techniques: average ranking and active testing. Average ranking combines algorithm rankings observed on prior datasets to identify the best algorithms for a new dataset. The aim of the second method is to iteratively select algorithms to be tested on the new dataset, learning from each new evaluation to intelligently select the next best candidate. We show how both methods can be upgraded to incorporate a multi-objective measure A3R that combines accuracy and runtime. It is necessary to establish the correct balance between accuracy and runtime, as otherwise time will be wasted by conducting less informative tests. The correct balance can be set by an appropriate parameter setting within function A3R that trades off accuracy and runtime. Our results demonstrate that the upgraded versions of Average Ranking and Active Testing lead to much better mean interval loss values than their accuracy-based counterparts.

KW - Active testing

KW - Algorithm selection

KW - Average ranking

KW - Loss curves

KW - Mean interval loss

KW - Meta-learning

KW - Ranking of algorithms

UR - http://www.scopus.com/inward/record.url?scp=85032356146&partnerID=8YFLogxK

U2 - 10.1007/s10994-017-5687-8

DO - 10.1007/s10994-017-5687-8

M3 - Article

AN - SCOPUS:85032356146

VL - 107

SP - 79

EP - 108

JO - Machine Learning

JF - Machine Learning

SN - 0885-6125

IS - 1

ER -