Abstract
Even though it is a controversial matter, research (e.g., publications, projects, researchers) is regularly evaluated based on some form of scientific impact. In particular, citation counts and metrics built on them (e.g., impact factor, h-index) are established for this purpose, despite a lack of evidence that they are reasonable and despite researchers rightfully criticizing their use. Several ideas aim to tackle such problems by proposing to abandon metrics-based evaluations or by suggesting new methods that cover other properties, for instance, through Altmetrics or Article Recommendation Platforms (ARPs). ARPs are particularly interesting, since they encourage their community to decide which publications are important, for instance, based on recommendations, post-publication reviews, comments, or discussions. In this paper, we report a comparative analysis of 11 ARPs, which utilize human expertise to assess the quality, correctness, and potential importance of a publication. We compare the different properties, pros, and cons of the ARPs, and discuss their adoption potential for computer science. We find that some of the platforms' features are challenging to understand, but they reinforce the trend of involving humans instead of metrics in evaluating research.
Original language | English
---|---
Title | Proceedings - 2021 ACM/IEEE Joint Conference on Digital Libraries, JCDL 2021
Editors | J. Stephen Downie, Dana McKay, Hussein Suleman, David M. Nichols, Faryaneh Poursardar
Publisher | IEEE Press
Pages | 1-10
Number of pages | 10
Electronic ISBN | 9781665417709
DOIs | 
Status | Published - 2021