Comparative experimentation is important for studying reproducibility in recommender systems. This is particularly true in areas without well-established methodologies, such as fairness-aware recommendation. In this paper, we describe fairness-aware enhancements to our recommender systems experimentation tool librec-auto. These enhancements include metrics for various classes of fairness definitions, extension of the experimental model to support result re-ranking and a library of associated re-ranking algorithms, and additional support for experiment automation and reporting. The associated demo will help attendees move quickly to configuring and running their own experiments with librec-auto.
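The re-ranking stage described above can be illustrated with a common pattern from the fairness-aware recommendation literature: greedily rebuilding a top-k list so that items from a protected group receive at least a minimum share of positions. This is a generic sketch, not librec-auto's actual API; all function and parameter names here are hypothetical.

```python
# Illustrative greedy fairness re-ranker (hypothetical names, not
# librec-auto's API): rebuild a top-k list so that protected-group
# items hold at least a minimum share of the positions, otherwise
# following the original relevance order.

def rerank(ranked_items, is_protected, k, min_share):
    """ranked_items: item ids in descending relevance order.
    is_protected: callable mapping item id -> bool (group membership).
    k: length of the output list.
    min_share: minimum fraction of positions for protected items.
    """
    protected = [i for i in ranked_items if is_protected(i)]
    unprotected = [i for i in ranked_items if not is_protected(i)]
    result = []
    n_prot = 0          # protected items placed so far
    p = u = 0           # cursors into the two relevance-ordered pools
    for pos in range(min(k, len(ranked_items))):
        # Share constraint over the prefix ending at this position.
        need = min_share * (pos + 1)
        take_protected = (n_prot < need and p < len(protected)) \
            or u >= len(unprotected)
        if take_protected and p < len(protected):
            result.append(protected[p])
            p += 1
            n_prot += 1
        else:
            result.append(unprotected[u])
            u += 1
    return result
```

For example, re-ranking items `[1, 2, 3, 4, 5, 6]` with even-numbered items as the protected group, `k=4`, and `min_share=0.5` yields `[2, 1, 4, 3]`: the protected items are promoted just enough to satisfy the share constraint at every prefix, while relative relevance order is preserved within each group.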
Title of host publication: RecSys 2020 - 14th ACM Conference on Recommender Systems
Number of pages: 3
Publication status: Published - 2020
Keywords: Recommender Systems Frameworks