An empirical evaluation of NASA-MDP data sets using a genetic defect-proneness prediction framework
| Field | Value |
| --- | --- |
| Authors | |
| Format | Original article |
| Publication date | 2016 |
| Description | In software engineering, software quality is an important research area. The automated generation of learning schemes plays an important role and offers an efficient way to detect defects in software projects, avoiding high costs and long delivery times. This study carries out an empirical evaluation of two versions, with different levels of noise, of the NASA-MDP data sets. The main objective of this paper is to determine the stability of our framework. In all, 864 learning schemes were studied (8 data preprocessors × 6 attribute selectors × 18 learning algorithms; a minimal enumeration sketch follows this record). According to the statistical tests, the framework produced stable results across the analyzed versions: the evaluation and prediction phases performed similarly, and their performance remained stable between the versions of the data sets. This means that the differences between versions did not affect the performance of our framework. |
| Country | Costa Rica |
| Institution | Universidad de Costa Rica |
| Repository | Kérwá |
| OAI Identifier | oai:kerwa.ucr.ac.cr:10669/73872 |
| Online access | http://ieeexplore.ieee.org/document/7942359/ https://hdl.handle.net/10669/73872 |
| Keywords | Prediction models; Learning schemes; Software metrics; Statistical analysis; Empirical procedure |
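The abstract describes each learning scheme as the combination of one data preprocessor, one attribute selector, and one learning algorithm, giving 8 × 6 × 18 = 864 schemes. The sketch below only illustrates how such a scheme space can be enumerated; the component names are hypothetical placeholders, not the ones used in the study.

```python
from itertools import product

# Hypothetical component labels standing in for the 8 data preprocessors,
# 6 attribute selectors, and 18 learning algorithms combined in the study.
preprocessors = [f"preprocessor_{i}" for i in range(1, 9)]       # 8 data preprocessors
attribute_selectors = [f"selector_{i}" for i in range(1, 7)]     # 6 attribute selectors
learning_algorithms = [f"algorithm_{i}" for i in range(1, 19)]   # 18 learning algorithms

# Each learning scheme is one (preprocessor, selector, algorithm) combination.
learning_schemes = list(product(preprocessors, attribute_selectors, learning_algorithms))

# 8 x 6 x 18 = 864 schemes, matching the count reported in the abstract.
assert len(learning_schemes) == 864

for scheme in learning_schemes:
    # Placeholder for the evaluation and prediction phases that the framework
    # would apply to each scheme on both versions of the NASA-MDP data sets.
    pass
```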