Is it as Bad as it Looks? Judgments of Quantitative Scores Depend on their Presentation Format

Christophe Lembregts (Corresponding author), Jeroen J.L. Schepers, Arne De Keyser

Research output: Contribution to journal › Article › Academic › peer-review

1 Citation (Scopus)
13 Downloads (Pure)

Abstract

Firms like Uber, Amazon, and TripAdvisor have popularized the rating of people, goods, and services. These entities receive scores (e.g., through online reviews) in a variety of presentation formats: incremental (a raw score per episode; e.g., 5–5–2), cumulative (updated average scores; e.g., 5–5–4), or a combination thereof. This article focuses on prevalent situations in which a score deviates from prior scores and examines how the presentation format of the scores impacts decision makers’ (e.g., consumers, managers) evaluations of the entity scored. Across a wide variety of settings, nine experiments document that when a generally well-performing (poorly performing) entity suddenly receives a negative (positive) score, overall performance will be perceived as less negative (positive) when shown in a cumulative format compared with an incremental or combined format. This effect appears to be stronger when the deviating episode is more representative (e.g., due to higher recency or internal attribution). The authors also find evidence for their proposed explanation: a cumulative format distorts individuals’ perceptions of the underlying raw score of the deviating episode. These findings imply that presenting scores in alternative formats may affect marketing outcomes (e.g., customer churn, product choice, technology adoption, new product success, and user engagement on peer-to-peer platforms).

Original language: English
Pages (from-to): 937-954
Number of pages: 18
Journal: Journal of Marketing Research
Volume: 61
Issue number: 5
DOIs
Publication status: Published - Oct 2024

Funding

The authors gratefully acknowledge the research assistance and input provided by Paul Kievits, Erik Kemperman, Remco Vos, Jeroen Bakens, Dan Schley, and Katrien Verleye, as well as the valuable comments from the review teams that handled (previous versions of) this article. A previous version of this article was presented at KU Leuven and at the 12th AMA SERVSIG conference in Glasgow; the authors thank the participants at these events for their constructive comments. The authors express their gratitude for the financial support given to the first author by the Erasmus Research Institute of Management.

Keywords

  • framing
  • judgment and decision making
  • online platforms
  • online reviews
  • quantitative information
  • ratings
  • reputation management
  • scoring
