How playlist evaluation compares to track evaluations in music recommender systems

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

Abstract

Most recommendation evaluations in the music domain focus on algorithmic performance: how well a recommendation algorithm can predict a user's liking of an individual track. However, individual track ratings might not fully reflect the user's liking of the whole recommendation list. Previous work has shown that subjective measures such as perceived diversity and familiarity of the recommendations, as well as the peak-end effect, can influence the user's overall (holistic) evaluation of the list. In this study, we investigate how individual track evaluation compares to holistic playlist evaluation in music recommender systems, in particular how playlist attractiveness relates to individual track ratings and other subjective measures (perceived diversity) or objective measures (objective familiarity, the peak-end effect, and the occurrence of good recommendations in the list). We explore this relation in a within-subjects online user experiment in which the recommendations for each condition are generated by different algorithms. We found that individual track ratings cannot fully predict playlist evaluations, as other factors such as perceived diversity and the recommendation approach can influence playlist attractiveness to a larger extent. In addition, including only the highest and last track ratings (peak-end) predicts playlist attractiveness as well as including all track evaluations. Our results imply that it is important to consider which evaluation metric to use when evaluating recommendation approaches.
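To make the peak-end comparison concrete, the sketch below contrasts a playlist summary built from only the highest and last track ratings with one built from all track ratings. This is an illustrative Python example, not the paper's analysis pipeline; the function names, the example rating values, and the 1-5 rating scale are assumptions for illustration only.

```python
# Illustrative sketch (assumed, not the authors' implementation):
# summarizing a playlist from per-track ratings in two ways.

def peak_end_score(track_ratings):
    """Summarize a playlist by its highest ('peak') and final ('end') track rating."""
    return (max(track_ratings) + track_ratings[-1]) / 2

def all_tracks_score(track_ratings):
    """Summarize a playlist by the mean of all track ratings."""
    return sum(track_ratings) / len(track_ratings)

ratings = [3, 5, 2, 4]  # hypothetical per-track ratings on a 1-5 scale
print(peak_end_score(ratings))    # (5 + 4) / 2 = 4.5
print(all_tracks_score(ratings))  # 14 / 4 = 3.5
```

The study's finding is that a peak-end style summary predicts playlist attractiveness about as well as a summary using every track evaluation; the snippet only shows how the two kinds of summaries differ in what they aggregate.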

Original language: English
Title of host publication: IntRS 2019 Interfaces and Human Decision Making for Recommender Systems 2019
Subtitle of host publication: Proceedings of the 6th Joint Workshop on Interfaces and Human Decision Making for Recommender Systems co-located with the 13th ACM Conference on Recommender Systems (RecSys 2019), Copenhagen, Denmark, September 19, 2019
Editors: P. Brusilovsky, M. de Gemmis, A. Felfernig, P. Lops, J. O'Donovan, G. Semeraro, M.C. Willemsen
Publisher: CEUR-WS.org
Pages: 1-9
Number of pages: 9
Publication status: Published - 1 Jan 2019
Event: 6th Joint Workshop on Interfaces and Human Decision Making for Recommender Systems, IntRS 2019 - Copenhagen, Denmark
Duration: 19 Sept 2019 → …

Publication series

Name: CEUR Workshop Proceedings
Publisher: CEUR-WS.org
Volume: 2450
ISSN (Print): 1613-0073

Conference

Conference: 6th Joint Workshop on Interfaces and Human Decision Making for Recommender Systems, IntRS 2019
Country/Territory: Denmark
City: Copenhagen
Period: 19/09/19 → …

Keywords

  • Playlist
  • Recommender systems
  • Track evaluation
  • User-centric evaluation
