Abstract
Comparative experimentation is important for studying reproducibility in recommender systems. This is particularly true in areas without well-established methodologies, such as fairness-aware recommendation. In this paper, we describe fairness-aware enhancements to our recommender systems experimentation tool librec-auto. These enhancements include metrics for various classes of fairness definitions, an extension of the experimental model to support result re-ranking together with a library of associated re-ranking algorithms, and additional support for experiment automation and reporting. The associated demo will help attendees move quickly to configuring and running their own experiments with librec-auto.
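As a rough illustration of the kind of workflow the demo targets, the sketch below prepares a small study directory and invokes librec-auto on it from Python. The XML element names, the metric and re-ranker identifiers, and the exact command-line form are assumptions for illustration only; the librec-auto documentation defines the actual configuration schema and sub-commands.

```python
# Minimal sketch of launching a librec-auto study from Python.
# Element names and CLI arguments below are illustrative assumptions,
# not taken from the paper; check the librec-auto docs for the real schema.
import subprocess
from pathlib import Path

# Hypothetical study layout: a directory holding an XML configuration that
# names the data, the base algorithm, fairness metrics, and a re-ranking stage.
study_dir = Path("my-fairness-study")
study_dir.mkdir(exist_ok=True)

config_xml = """<librec-auto>
  <!-- All element names here are placeholders for illustration. -->
  <data>ratings.csv</data>
  <alg>biasedmf</alg>
  <metric>ndcg,provider-fairness</metric>   <!-- fairness metric: assumption -->
  <rerank>fair-rerank</rerank>              <!-- re-ranking stage: assumption -->
</librec-auto>
"""
(study_dir / "config.xml").write_text(config_xml)

# Run the study; "python -m librec_auto run <study>" mirrors the package's
# module-style invocation, but verify the exact sub-commands in the docs.
subprocess.run(["python", "-m", "librec_auto", "run", str(study_dir)], check=True)
```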
Original language | English
---|---
Title of host publication | RecSys 2020 - 14th ACM Conference on Recommender Systems
Pages | 594-596
Number of pages | 3
ISBN (Electronic) | 9781450375832
DOIs | |
Publication status | Published - 2020
Keywords
- Experimentation
- Fairness
- Librec
- Recommender Systems Frameworks
- Reranking