Prospects and Limitations for Cross-Study Analyses – A Study on an Experiment Series
(2003) pp. 133–142

Abstract
- In software engineering research, experiments are conducted to evaluate new methods or techniques. The experimentation as such is beginning to mature, but little effort is spent on learning across different studies, except for a few meta-analyses. Meta-analysis can be applied to a set of experiments with the same design. This paper discusses learning across a set of experimental studies on fault detection techniques, conducted in very similar environments, although with different hypotheses. Four experiments have been conducted applying Usage-Based Reading (UBR), hence establishing a point of reference for other techniques. In the different experiments, UBR is compared to Checklist-Based Reading (CBR), two variants of UBR and Usage-Based Testing (UBT). We present an approach to analysis across different experimental studies, and identify a set of issues for discussion on whether the approach is feasible for further use in empirical software engineering.
Please use this url to cite or link to this publication:
https://lup.lub.lu.se/record/708275
- author
- Runeson, Per and Thelin, Thomas
- organization
- publishing date
- 2003
- type
- Chapter in Book/Report/Conference proceeding
- publication status
- published
- subject
- host publication
- 2nd Workshop in Workshop Series on Empirical Software Engineering
- pages
- 133 - 142
- language
- English
- LU publication?
- yes
- id
- 6013118c-162b-4f25-949e-e43f8461fa0e (old id 708275)
- date added to LUP
- 2016-04-04 14:16:59
- date last changed
- 2021-04-29 09:44:30
@inproceedings{6013118c-162b-4f25-949e-e43f8461fa0e,
  abstract  = {{In software engineering research, experiments are conducted to evaluate new methods or techniques. The experimentation as such is beginning to mature, but little effort is spent on learning across different studies, except for a few meta-analyses. Meta-analysis can be applied to a set of experiments with the same design. This paper discusses learning across a set of experimental studies on fault detection techniques, conducted in very similar environments, although with different hypotheses. Four experiments have been conducted applying Usage-Based Reading (UBR), hence establishing a point of reference for other techniques. In the different experiments, UBR is compared to Checklist-Based Reading (CBR), two variants of UBR and Usage-Based Testing (UBT). We present an approach to analysis across different experimental studies, and identify a set of issues for discussion on whether the approach is feasible for further use in empirical software engineering.}},
  author    = {{Runeson, Per and Thelin, Thomas}},
  booktitle = {{2nd Workshop in Workshop Series on Empirical Software Engineering}},
  language  = {{eng}},
  pages     = {{133--142}},
  title     = {{Prospects and Limitations for Cross-Study Analyses – A Study on an Experiment Series}},
  year      = {{2003}},
}