Prospects and Limitations for Cross-Study Analyses – A Study on an Experiment Series

Runeson, Per and Thelin, Thomas (2003). In 2nd Workshop in Workshop Series on Empirical Software Engineering, pp. 133–142
Abstract
In software engineering research, experiments are conducted to evaluate new methods or techniques. The experimentation as such is beginning to mature, but little effort is spent on learning across different studies, except for a few meta-analyses. Meta-analysis can be applied to a set of experiments with the same design. This paper discusses learning across a set of experimental studies on fault detection techniques, conducted in very similar environments, although with different hypotheses. Four experiments have been conducted applying Usage-Based Reading (UBR), hence establishing a point of reference for other techniques. In the different experiments, UBR is compared to Checklist-Based Reading (CBR), two variants of UBR and Usage-Based Testing (UBT). We present an approach to analysis across different experimental studies, and identify a set of issues for discussion on whether the approach is feasible for further use in empirical software engineering.
type: Chapter in Book/Report/Conference proceeding
publication status: published
in: 2nd Workshop in Workshop Series on Empirical Software Engineering
pages: 133–142
language: English
LU publication?: yes
id: 6013118c-162b-4f25-949e-e43f8461fa0e (old id 708275)
date added to LUP: 2007-12-12 12:33:32
date last changed: 2016-04-16 12:20:07
@inproceedings{6013118c-162b-4f25-949e-e43f8461fa0e,
  abstract     = {In software engineering research, experiments are conducted to evaluate new methods or techniques. The experimentation as such is beginning to mature, but little effort is spent on learning across different studies, except for a few meta-analyses. Meta-analysis can be applied to a set of experiments with the same design. This paper discusses learning across a set of experimental studies on fault detection techniques, conducted in very similar environments, although with different hypotheses. Four experiments have been conducted applying Usage-Based Reading (UBR), hence establishing a point of reference for other techniques. In the different experiments, UBR is compared to Checklist-Based Reading (CBR), two variants of UBR and Usage-Based Testing (UBT). We present an approach to analysis across different experimental studies, and identify a set of issues for discussion on whether the approach is feasible for further use in empirical software engineering.},
  author       = {Runeson, Per and Thelin, Thomas},
  booktitle    = {2nd Workshop in Workshop Series on Empirical Software Engineering},
  language     = {eng},
  pages        = {133--142},
  title        = {Prospects and Limitations for Cross-Study Analyses – A Study on an Experiment Series},
  year         = {2003},
}