TY - GEN
T1 - Incorporating Altmetrics to Support Selection and Assessment of Publications During Literature Analyses
AU - Shakeel, Yusra
AU - Alchokr, Rand
AU - Krüger, Jacob
AU - Leich, Thomas
AU - Saake, Gunter
N1 - DBLP License: DBLP's bibliographic metadata records provided through http://dblp.org/ are distributed under a Creative Commons CC0 1.0 Universal Public Domain Dedication. Although the bibliographic metadata records are provided consistent with CC0 1.0 Dedication, the content described by the metadata records is not. Content may be subject to copyright, rights of privacy, rights of publicity and other restrictions.
PY - 2022/6/13
Y1 - 2022/6/13
N2 - Background. The constantly increasing number of scientific publications poses challenges for researchers to monitor, select, and assess the publications relevant for their own research. Several guidelines for assessing publications manually during a literature analysis exist, with researchers proposing (semi-)automated techniques to facilitate such assessments. Aims. Still, research indicates that current techniques require further improvements to facilitate the analysis of large sets of publications. In this paper, we propose a semi-automatic technique with which we aim to improve in this direction by facilitating the selection and assessment of publications. Method. Our technique uses publicly available data of a publication, namely citation counts, article-level metrics, venue metrics, and altmetrics, to guide an analyst in assessing its relevance and impact. To evaluate the feasibility of our technique and the included metrics, we performed an experimental analysis to automatically assign ratings to the retrieved publications. Results. The results indicate that our technique can help an analyst in assessing publications, and reduce manual effort. Through our technique, we achieve an average accuracy of 53 % with a recall of 71 %. While precision (14 %) and F1-score (21 %) are—not surprisingly, due to the high number of irrelevant results returned by automatic searches in digital libraries—low, we see an improvement of these values for more recent reviews for which we could collect more complete data. However, some manual effort is still required for the final selection of papers. Conclusions. While it is not possible to achieve full automation for selecting and quality assessing publications, we can see that our metrics-based technique can be a helpful means to provide an initial rating for the analyst. Also, incorporating altmetrics seems to be a promising addition to rate comparably recent publications, helping researchers to further facilitate the execution of literature analyses.
KW - Altmetrics
KW - Literature analysis
KW - PlumX
KW - Quality assessment
UR - http://www.scopus.com/inward/record.url?scp=85132368307&partnerID=8YFLogxK
U2 - 10.1145/3530019.3530038
DO - 10.1145/3530019.3530038
M3 - Conference contribution
T3 - ACM International Conference Proceeding Series
SP - 180
EP - 189
BT - Proceedings of the ACM International Conference on Evaluation and Assessment in Software Engineering, EASE 2022
PB - Association for Computing Machinery, Inc
ER -