Abstract
Even though it is a controversial matter, research (e.g., publications, projects, researchers) is regularly evaluated based on some form of scientific impact. In particular, citation counts and metrics building on them (e.g., impact factor, h-index) are established for this purpose, despite a lack of evidence that they are reasonable and despite researchers rightfully criticizing their use. Several ideas aim to tackle such problems by proposing to abandon metrics-based evaluations or by suggesting new methods that cover other properties, for instance, Altmetrics or Article Recommendation Platforms (ARPs). ARPs are particularly interesting, since they encourage their community to decide which publications are important, for instance, based on recommendations, post-publication reviews, comments, or discussions. In this paper, we report a comparative analysis of 11 ARPs, which utilize human expertise to assess the quality, correctness, and potential importance of a publication. We compare the different properties, pros, and cons of the ARPs, and discuss their adoption potential for computer science. We find that some of the platforms' features are challenging to understand, but they reinforce the trend of involving humans instead of metrics for evaluating research.
| Original language | English |
|---|---|
| Title of host publication | Proceedings - 2021 ACM/IEEE Joint Conference on Digital Libraries, JCDL 2021 |
| Editors | J. Stephen Downie, Dana McKay, Hussein Suleman, David M. Nichols, Faryaneh Poursardar |
| Publisher | IEEE Press |
| Pages | 1-10 |
| Number of pages | 10 |
| ISBN (Electronic) | 9781665417709 |
| DOIs | |
| Publication status | Published - 2021 |
Bibliographical note
DBLP License: DBLP's bibliographic metadata records provided through http://dblp.org/ are distributed under a Creative Commons CC0 1.0 Universal Public Domain Dedication. Although the bibliographic metadata records are provided consistent with CC0 1.0 Dedication, the content described by the metadata records is not. Content may be subject to copyright, rights of privacy, rights of publicity and other restrictions.
Keywords
- Quality Assessment
- Peer Review
- Post Publication
- Computer Science
- Recommendation Service Platforms