Defect prediction approaches use various features of the software product or process to prioritize testing, analysis, and general quality assurance activities. Such approaches require the availability of a project's historical data, making them inapplicable in early phases. To cope with this problem, researchers have proposed cross-project and even cross-company prediction models, which use training material from other projects to build the model. Despite such advances, there is limited knowledge of how long, as a project evolves, it remains convenient to keep using data from other projects, and when, instead, it becomes convenient to switch to a local prediction model. This paper empirically investigates, using historical data from four open source projects, how the performance of various kinds of defect prediction approaches (within-project prediction, local and global cross-project prediction, and mixed, i.e., injected local cross, prediction) varies over time. The results of the study are part of a long-term investigation towards supporting the customization of defect prediction models over a project's history.
|Status||Published - 29 May 2018|