An Exploratory Study on Confusion in Code Reviews

Felipe Ebert (Corresponding author), Fernando Castor, Nicole Novielli, Alexander Serebrenik

    Research output: Contribution to journal › Article › Academic › peer-review

    26 Citations (Scopus)

    Abstract

    Context: Code review is a widely used technique for the systematic examination of code changes, aimed at increasing software quality. Code reviews provide several benefits for the project, including finding bugs, knowledge transfer, and assurance of adherence to project guidelines and coding style. However, code reviews have a major cost: they can delay the merge of the code change and thus impact the overall development process. This cost can be even higher if developers do not understand something, i.e., when developers face confusion during the code review.

    Objective: This paper studies the phenomenon of confusion in code reviews. Understanding confusion is an important starting point for reducing the cost of code reviews and enhancing the effectiveness of this practice, and hence, for improving the development process.

    Method: We conducted two complementary studies. The first one aimed at identifying the reasons for confusion in code reviews, its impacts, and the coping strategies developers use to deal with it. Then, we surveyed developers to identify the most frequently experienced reasons for confusion, and conducted a systematic mapping study of solutions proposed for those reasons in the scientific literature.

    Results: From the first study, we built a framework with 30 reasons for confusion, 14 impacts, and 13 coping strategies. The results of the systematic mapping study show 38 articles addressing the most frequent reasons for confusion. From those articles, we identified 19 different solutions for confusion proposed in the literature and established nine impacts related to the most frequent reasons for confusion.

    Conclusions: Based on the solutions identified in the mapping study, or the lack thereof, we propose an actionable guideline for developers on how to cope with confusion during code reviews; we also make several suggestions on how tool builders can support code reviews. Additionally, we propose a research agenda for researchers studying code reviews.
    Original language: English
    Article number: 12
    Number of pages: 48
    Journal: Empirical Software Engineering
    Volume: 26
    Issue number: 1
    DOIs
    Publication status: Published - 28 Jan 2021

    Keywords

    • code review
    • confusion
    • card sorting
    • survey
    • systematic mapping study
    • software engineering
    • empirical software engineering

