Abstract
Algorithm selection methods can be sped up substantially by incorporating multi-objective measures that give preference to algorithms that are both promising and fast to evaluate. In this paper, we introduce such a measure, A3R, and incorporate it into two algorithm selection techniques: average ranking and active testing. Average ranking combines the algorithm rankings observed on prior datasets to identify the best algorithms for a new dataset. The second method iteratively selects algorithms to be tested on the new dataset, learning from each new evaluation to intelligently select the next best candidate. We show how both methods can be upgraded to incorporate A3R, a multi-objective measure that combines accuracy and runtime. It is necessary to establish the correct balance between accuracy and runtime, as otherwise time will be wasted on less informative tests. This balance can be set by an appropriate parameter within A3R that trades off accuracy against runtime. Our results demonstrate that the upgraded versions of average ranking and active testing achieve much better mean interval loss values than their accuracy-based counterparts.
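The abstract describes, but does not reproduce, the A3R trade-off between accuracy and runtime, nor the average-ranking aggregation. The following is a minimal, hypothetical sketch of both ideas; the function names, the exact ratio form, and the default exponent `p` are illustrative assumptions, not the paper's actual definitions.

```python
def a3r_score(acc, acc_ref, time, time_ref, p=1 / 64):
    """Trade off accuracy gain against runtime cost (higher is better).

    Illustrative form only (assumed from the abstract's description):
    the accuracy ratio relative to a reference algorithm is divided by
    the runtime ratio raised to a small exponent `p`.  `p` controls how
    strongly runtime is penalised; p = 0 ignores runtime entirely.
    """
    return (acc / acc_ref) / (time / time_ref) ** p


def average_ranking(rankings):
    """Combine per-dataset rankings into one average ranking.

    `rankings` maps dataset -> {algorithm: rank}.  Returns the list of
    algorithms sorted by mean rank (lower mean rank = better).
    """
    totals, counts = {}, {}
    for per_dataset in rankings.values():
        for alg, rank in per_dataset.items():
            totals[alg] = totals.get(alg, 0) + rank
            counts[alg] = counts.get(alg, 0) + 1
    return sorted(totals, key=lambda a: totals[a] / counts[a])
```

For example, with `rankings = {"d1": {"rf": 1, "svm": 2, "knn": 3}, "d2": {"rf": 1, "svm": 2, "knn": 3}}`, `average_ranking` returns `["rf", "svm", "knn"]`; and `a3r_score` rewards an algorithm that matches the reference's accuracy in a fraction of the time with a score above 1.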
Original language | English |
---|---|
Pages (from-to) | 79-108 |
Number of pages | 30 |
Journal | Machine Learning |
Volume | 107 |
Issue number | 1 |
DOIs | 10.1007/s10994-017-5687-8 |
Publication status | Published - 8 Jan 2018 |
Keywords
- Active testing
- Algorithm selection
- Average ranking
- Loss curves
- Mean interval loss
- Meta-learning
- Ranking of algorithms
Cite this
Speeding up algorithm selection using average ranking and active testing by introducing runtime. / Abdulrahman, S.M.; Brazdil, P.; van Rijn, J.N.; Vanschoren, J.
In: Machine Learning, Vol. 107, No. 1, 08.01.2018, p. 79-108.
Research output: Contribution to journal › Article › Academic › peer-review
TY - JOUR
T1 - Speeding up algorithm selection using average ranking and active testing by introducing runtime
AU - Abdulrahman, S.M.
AU - Brazdil, P.
AU - van Rijn, J.N.
AU - Vanschoren, J.
PY - 2018/1/8
Y1 - 2018/1/8
KW - Active testing
KW - Algorithm selection
KW - Average ranking
KW - Loss curves
KW - Mean interval loss
KW - Meta-learning
KW - Ranking of algorithms
UR - http://www.scopus.com/inward/record.url?scp=85032356146&partnerID=8YFLogxK
U2 - 10.1007/s10994-017-5687-8
DO - 10.1007/s10994-017-5687-8
M3 - Article
AN - SCOPUS:85032356146
VL - 107
SP - 79
EP - 108
JO - Machine Learning
JF - Machine Learning
SN - 0885-6125
IS - 1
ER -