Abstract
Recently there has been growing interest in fairness-aware recommender systems, including fairness in providing consistent performance across different users or groups of users. A recommender system could be considered unfair if its recommendations do not fairly represent the tastes of a certain group of users while other groups receive recommendations that are consistent with their preferences. In this paper, we use a metric called miscalibration to measure how responsive a recommendation algorithm is to users' true preferences, and we consider how various algorithms may result in different degrees of miscalibration for different users. In particular, we conjecture that popularity bias, a well-known phenomenon in recommendation, is one important factor leading to miscalibration. Our experimental results on two real-world datasets show that there is a connection between how different user groups are affected by algorithmic popularity bias and their level of interest in popular items. Moreover, we show that the more a group is affected by algorithmic popularity bias, the more miscalibrated its recommendations are.
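The abstract does not spell out how miscalibration is computed; in this line of work it is typically measured, following Steck's calibrated-recommendations metric, as the KL divergence between the distribution of item categories (e.g., genres) in a user's profile and in their recommendation list. The sketch below is a minimal illustration under that assumption; the function and variable names (`miscalibration`, `item_genres`, etc.) are hypothetical and not taken from the paper.

```python
import numpy as np

def genre_distribution(items, item_genres, genres):
    """Distribution over genres induced by a list of items.

    item_genres maps an item id to the set of genres it belongs to;
    each item spreads unit mass uniformly over its genres.
    """
    index = {g: i for i, g in enumerate(genres)}
    p = np.zeros(len(genres))
    for item in items:
        gs = item_genres[item]
        for g in gs:
            p[index[g]] += 1.0 / len(gs)
    total = p.sum()
    return p / total if total > 0 else p

def miscalibration(profile_items, recommended_items, item_genres, eps=1e-3):
    """KL(p || q~): divergence between the genre distribution of the user's
    profile (p) and that of the recommendation list (q), with q smoothed
    toward p so the divergence stays finite."""
    genres = sorted({g for gs in item_genres.values() for g in gs})
    p = genre_distribution(profile_items, item_genres, genres)
    q = genre_distribution(recommended_items, item_genres, genres)
    q_smooth = (1 - eps) * q + eps * p
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q_smooth[mask])))

# Hypothetical toy data: three items tagged with genres.
item_genres = {1: {"action"}, 2: {"drama", "romance"}, 3: {"action", "drama"}}
print(miscalibration(profile_items=[1, 2, 3],
                     recommended_items=[1, 3],
                     item_genres=item_genres))
```

A score of 0 would mean the recommendation list mirrors the user's profile distribution exactly; larger values indicate greater miscalibration, which is the per-user quantity the paper aggregates over user groups.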
Original language | English |
---|---|
Title of host publication | RecSys 2020 - 14th ACM Conference on Recommender Systems |
Pages | 726-731 |
Number of pages | 6 |
ISBN (Electronic) | 9781450375832 |
DOIs | |
Publication status | Published - 2020 |
Keywords
- Algorithmic bias
- Calibration
- Popularity bias amplification
- Recommender systems