Speeding up algorithm selection using average ranking and active testing by introducing runtime

S.M. Abdulrahman, P. Brazdil, J.N. van Rijn, J. Vanschoren

Research output: Contribution to journal › Article › Academic › peer-review

27 Citations (Scopus)
88 Downloads (Pure)

Abstract

Algorithm selection methods can be sped up substantially by incorporating multi-objective measures that give preference to algorithms that are both promising and fast to evaluate. In this paper, we introduce such a measure, A3R, and incorporate it into two algorithm selection techniques: average ranking and active testing. Average ranking combines algorithm rankings observed on prior datasets to identify the best algorithms for a new dataset. The aim of the second method is to iteratively select algorithms to be tested on the new dataset, learning from each new evaluation to intelligently select the next best candidate. We show how both methods can be upgraded to incorporate the multi-objective measure A3R, which combines accuracy and runtime. It is necessary to establish the correct balance between accuracy and runtime, as otherwise time will be wasted by conducting less informative tests. The correct balance can be set by an appropriate parameter setting within the function A3R that trades off accuracy and runtime. Our results demonstrate that the upgraded versions of average ranking and active testing lead to much better mean interval loss values than their accuracy-based counterparts.
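As a rough illustration of the idea described in the abstract, the sketch below computes a combined accuracy/runtime score in the spirit of A3R: a candidate's accuracy ratio over a reference algorithm, discounted by its relative runtime raised to a small exponent that controls the trade-off. The function name, argument names, and the exact functional form are assumptions for illustration, not the paper's verbatim definition.

```python
def a3r(acc_candidate, acc_ref, time_candidate, time_ref, p=1/4):
    """Hypothetical A3R-style score: accuracy gain of a candidate over a
    reference algorithm, discounted by relative runtime.

    The exponent p sets the accuracy/runtime balance: p close to 0 ignores
    runtime, larger p penalizes slow algorithms more heavily.
    """
    accuracy_ratio = acc_candidate / acc_ref
    runtime_penalty = (time_candidate / time_ref) ** p
    return accuracy_ratio / runtime_penalty

# A slightly less accurate but much faster algorithm can outscore a
# slower, more accurate one under this measure:
fast = a3r(acc_candidate=0.85, acc_ref=0.90, time_candidate=2.0, time_ref=200.0)
slow = a3r(acc_candidate=0.90, acc_ref=0.90, time_candidate=200.0, time_ref=200.0)
print(fast > slow)  # the fast algorithm wins despite lower accuracy
```

With p chosen too large, tests would be ranked almost purely by speed; too small, and runtime is ignored, which is the balancing problem the abstract refers to.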

Original language: English
Pages (from-to): 79-108
Number of pages: 30
Journal: Machine Learning
Volume: 107
Issue number: 1
DOIs
Publication status: Published - 8 Jan 2018

Keywords

  • Active testing
  • Algorithm selection
  • Average ranking
  • Loss curves
  • Mean interval loss
  • Meta-learning
  • Ranking of algorithms
