Abstract
We present a cross-benchmark comparison of learning-to-rank methods using two evaluation measures: the Normalized Winning Number and the Ideal Winning Number. Evaluation results of 87 learning-to-rank methods on 20 datasets show that ListNet, SmoothRank, FenchelRank, FSMRank, LRUF and LARF are Pareto optimal learning-to-rank methods, listed in increasing order of Normalized Winning Number and decreasing order of Ideal Winning Number.
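The following is a minimal sketch, not taken from the paper itself, of how the two measures and the Pareto front could be computed, assuming the commonly used definitions: a method's Winning Number counts the (dataset, competitor) pairs on which it scores higher, its Ideal Winning Number counts the pairs on which both methods were evaluated at all, and the Normalized Winning Number is their ratio; Pareto optimality is assumed to be taken over the (Normalized Winning Number, Ideal Winning Number) pair. The method names, scores, and function names below are illustrative only.

```python
# Hypothetical example data: scores[method][dataset] = evaluation score
# (e.g. NDCG@10). A missing entry means the method was not evaluated on
# that dataset. All numbers are made up for illustration.
scores = {
    "MethodA": {"D1": 0.45, "D2": 0.50},
    "MethodB": {"D1": 0.42, "D2": 0.55, "D3": 0.60},
    "MethodC": {"D1": 0.48, "D3": 0.58},
}

def winning_numbers(scores):
    """Return {method: (WN, IWN, NWN)} under the assumed definitions:
    WN  = number of (dataset, other method) pairs the method outperforms,
    IWN = number of such pairs where both methods have a score,
    NWN = WN / IWN (0 if the method shares no dataset with any other)."""
    results = {}
    for m, m_scores in scores.items():
        wn = iwn = 0
        for other, o_scores in scores.items():
            if other == m:
                continue
            for dataset, s in m_scores.items():
                if dataset in o_scores:
                    iwn += 1
                    if s > o_scores[dataset]:
                        wn += 1
        results[m] = (wn, iwn, wn / iwn if iwn else 0.0)
    return results

def pareto_front(results):
    """Methods not dominated in the (NWN, IWN) plane: no other method is
    at least as good on both measures and strictly better on one."""
    front = []
    for m, (_, iwn, nwn) in results.items():
        dominated = any(
            o_nwn >= nwn and o_iwn >= iwn and (o_nwn > nwn or o_iwn > iwn)
            for o, (_, o_iwn, o_nwn) in results.items()
            if o != m
        )
        if not dominated:
            front.append(m)
    return front

res = winning_numbers(scores)
print(res)
print("Pareto optimal:", pareto_front(res))
```

Under this reading, a high Ideal Winning Number reflects broad evaluation coverage across benchmarks, while a high Normalized Winning Number reflects how often a method wins where it was actually compared, which is why the Pareto front contains a trade-off ordering of methods rather than a single winner.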
Original language | English |
---|---|
Title of host publication | Proceedings of the 1st International Workshop on LEARning Next gEneration Rankers |
Editors | N. Ferro, C. Lucchese, M. Maistro, R. Perego |
Place of Publication | Aachen |
Pages | 3-3 |
Number of pages | 1 |
Publication status | Published - 27 Nov 2017 |
Event | 1st International Workshop on LEARning Next gEneration Rankers (LEARNER 2017), Amsterdam, Netherlands. Duration: 1 Oct 2017 → 1 Oct 2017. http://learner2017.dei.unipd.it/ |
Publication series
Name | CEUR Workshop Proceedings |
---|---|
Volume | 2007 |
ISSN (Print) | 1613-0073 |
Workshop
Workshop | 1st International Workshop on LEARning Next gEneration Rankers (LEARNER 2017), October 1, 2017, Amsterdam, Netherlands |
---|---|
Abbreviated title | LEARNER 2017 |
Country/Territory | Netherlands |
City | Amsterdam |
Period | 1/10/17 → 1/10/17 |
Internet address | http://learner2017.dei.unipd.it/ |