Bias disparity in collaborative recommendation: algorithmic evaluation and comparison

Masoud Mansoury, Bamshad Mobasher, Robin Burke, Mykola Pechenizkiy

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

7 Citations (Scopus)
132 Downloads (Pure)

Abstract

Research on fairness in machine learning has recently been extended to recommender systems. One of the factors that may impact fairness is bias disparity, the degree to which a group's preferences for various item categories fail to be reflected in the recommendations the group receives. In some cases, biases in the original data may be amplified or reversed by the underlying recommendation algorithm. In this paper, we explore how different recommendation algorithms reflect the tradeoff between ranking quality and bias disparity. Our experiments include neighborhood-based, model-based, and trust-aware recommendation algorithms.
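
The abstract describes bias disparity as the gap between a group's category preferences in the input data and in the recommendations it receives. As a rough, hypothetical sketch of that idea (the function names, data representation, and exact formula below are illustrative assumptions, not the paper's definition), a preference-ratio comparison could be computed as follows in Python:

def preference_ratio(interactions, group_users, category_items):
    # Fraction of the group's (user, item) pairs whose item falls in the category.
    group_pairs = [(u, i) for (u, i) in interactions if u in group_users]
    if not group_pairs:
        return 0.0
    hits = sum(1 for _, i in group_pairs if i in category_items)
    return hits / len(group_pairs)

def bias_disparity(profiles, recommendations, group_users, category_items):
    # Relative change of the preference ratio from input profiles to recommendations;
    # negative values mean the category is under-represented in the recommendations.
    pr_data = preference_ratio(profiles, group_users, category_items)
    pr_rec = preference_ratio(recommendations, group_users, category_items)
    if pr_data == 0.0:
        return 0.0  # the group showed no interest in this category to begin with
    return (pr_rec - pr_data) / pr_data

# Toy usage: one group of two users and one item category.
profiles = {("u1", "a"), ("u1", "b"), ("u2", "a"), ("u2", "c")}
recommendations = {("u1", "b"), ("u1", "c"), ("u2", "c"), ("u2", "d")}
print(bias_disparity(profiles, recommendations, {"u1", "u2"}, {"a", "b"}))

In this toy example the group's preference ratio for the category drops from 0.75 in the profiles to 0.25 in the recommendations, giving a disparity of about -0.67, i.e. the category is under-served relative to the group's stated preferences.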

Original language: English
Title of host publication: Proceedings of the Workshop on Recommendation in Multi-stakeholder Environments co-located with the 13th ACM Conference on Recommender Systems (RecSys 2019)
Editors: Robin Burke, Himan Abdollahpouri, Edward Malthouse
Publisher: CEUR-WS.org
Number of pages: 7
Publication status: Published - 1 Jan 2019
Event: 2019 Workshop on Recommendation in Multi-Stakeholder Environments, RMSE 2019 - Copenhagen, Denmark
Duration: 20 Sept 2019 → …

Publication series

Name: CEUR Workshop Proceedings
Publisher: CEUR-WS.org
Volume: 2440
ISSN (Print): 1613-0073

Conference

Conference: 2019 Workshop on Recommendation in Multi-Stakeholder Environments, RMSE 2019
Country/Territory: Denmark
City: Copenhagen
Period: 20/09/19 → …

Keywords

  • Bias disparity
  • Fairness
  • Recommender systems
  • Trust ratings
