Are found defects an indicator of software correctness? An investigation in a controlled case study
(2004) ISSRE 2004: Proceedings of the 15th International Symposium on Software Reliability Engineering, pp. 91-100
- Abstract
- In quality assurance programs, we want indicators of software quality, especially software correctness. The number of defects found during inspection and testing is often used as the basis for indicators of software correctness. However, there is a paradox in this approach, since it is the remaining defects, not the found ones, that impact software correctness negatively. To investigate the validity of using found defects or other product or process metrics as indicators of software correctness, a controlled case study was launched. 57 sets of 10 different programs from the PSP course were assessed using acceptance test suites for each program. In the analysis, the number of defects found during the acceptance test is compared to the number of defects found during development, code size, the share of development time spent on testing, etc. A correlation analysis leads to three conclusions: 1) fewer defects remain in larger programs, 2) more defects remain when a larger share of development effort is spent on testing, and 3) no correlation exists between found defects and correctness. We interpret these observations as follows: 1) the smaller programs do not fulfill the expected requirements, 2) a large share of effort spent on testing indicates a "hacker" approach to software development, and 3) more research is needed to elaborate on this issue.
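The core of the analysis summarized above is a correlation between per-program defect counts. As a minimal sketch of that kind of check — assuming hypothetical counts, not the study's actual data — one can compute the Pearson correlation between defects found during development and defects remaining at acceptance test:

```python
# Hypothetical sketch of the correlation check described in the abstract.
# The counts below are illustrative only, not data from the study.
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# One pair of counts per program set (hypothetical):
found_in_development = [12, 7, 30, 15, 22, 9, 18]
found_at_acceptance  = [3, 5, 1, 4, 2, 6, 3]

r = pearson(found_in_development, found_at_acceptance)
print(f"r = {r:.2f}")
```

A coefficient near zero would echo the paper's third finding: defects found during development are not, by themselves, a useful predictor of remaining defects.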
Please use this url to cite or link to this publication:
https://lup.lub.lu.se/record/614273
- author
- Runeson, Per LU ; Jonsson, Mans Holmstedt and Scheja, Fredrik
- organization
- publishing date
- 2004
- type
- Chapter in Book/Report/Conference proceeding
- publication status
- published
- subject
- keywords
- Capture-recapture models (CRC), Software correctness, Controlled case study, Personal software process (PSP)
- host publication
- Proceedings - International Symposium on Software Reliability Engineering, ISSRE
- pages
- 91 - 100
- publisher
- IEEE - Institute of Electrical and Electronics Engineers Inc.
- conference name
- ISSRE 2004 Proceedings; 15th International Symposium on Software Reliability Engineering
- conference location
- Saint-Malo, France
- conference dates
- 2004-11-02 - 2004-11-05
- external identifiers
- wos:000225734400009
- other:CODEN: PSSRFV
- scopus:16244397099
- ISSN
- 1071-9458
- DOI
- 10.1109/ISSRE.2004.9
- language
- English
- LU publication?
- yes
- id
- 5d762580-2067-476f-bb4f-8212748956ed (old id 614273)
- date added to LUP
- 2016-04-01 16:28:54
- date last changed
- 2022-01-28 20:01:23
@inproceedings{5d762580-2067-476f-bb4f-8212748956ed,
  abstract  = {{In quality assurance programs, we want indicators of software quality, especially software correctness. The number of defects found during inspection and testing is often used as the basis for indicators of software correctness. However, there is a paradox in this approach, since it is the remaining defects, not the found ones, that impact software correctness negatively. To investigate the validity of using found defects or other product or process metrics as indicators of software correctness, a controlled case study was launched. 57 sets of 10 different programs from the PSP course were assessed using acceptance test suites for each program. In the analysis, the number of defects found during the acceptance test is compared to the number of defects found during development, code size, the share of development time spent on testing, etc. A correlation analysis leads to three conclusions: 1) fewer defects remain in larger programs, 2) more defects remain when a larger share of development effort is spent on testing, and 3) no correlation exists between found defects and correctness. We interpret these observations as follows: 1) the smaller programs do not fulfill the expected requirements, 2) a large share of effort spent on testing indicates a "hacker" approach to software development, and 3) more research is needed to elaborate on this issue.}},
  author    = {{Runeson, Per and Jonsson, Mans Holmstedt and Scheja, Fredrik}},
  booktitle = {{Proceedings - International Symposium on Software Reliability Engineering, ISSRE}},
  issn      = {{1071-9458}},
  keywords  = {{Capture-recapture models (CRC); Software correctness; Controlled case study; Personal software process (PSP)}},
  language  = {{eng}},
  pages     = {{91--100}},
  publisher = {{IEEE - Institute of Electrical and Electronics Engineers Inc.}},
  title     = {{Are found defects an indicator of software correctness? An investigation in a controlled case study}},
  url       = {{http://dx.doi.org/10.1109/ISSRE.2004.9}},
  doi       = {{10.1109/ISSRE.2004.9}},
  year      = {{2004}},
}