
Unfolding the phenomenon of interrater agreement: a multicomponent approach for in-depth examination was proposed.

Slaug, Björn; Schilling, Oliver; Helle, Tina; Iwarsson, Susanne; Carlsson, Gunilla and Brandt, Åse (2012) In Journal of Clinical Epidemiology 65(9). p. 1016-1025
Abstract
OBJECTIVE:

The overall objective was to unfold the phenomenon of interrater agreement: to identify potential sources of variation in agreement data and to explore how they can be statistically accounted for. The ultimate aim was to propose recommendations for in-depth examination of agreement to improve the reliability of assessment instruments.



STUDY DESIGN AND SETTING:

Using a sample in which 10 rater pairs had assessed the presence/absence of 188 environmental barriers with a systematic rating form, a raters×items data set was generated (N=1,880). In addition to common agreement indices, relative shares of agreement variation were calculated. Multilevel regression analysis was carried out, using rater and item characteristics as predictors of agreement variation.

RESULTS:

Following a conceptual decomposition, the agreement variation was statistically disentangled into relative shares. The raters accounted for 6-11%, the items for 32-33%, and the residual for 57-60% of the variation. Multilevel regression analysis showed barrier prevalence and raters' familiarity with standardized instruments to have the strongest impact on agreement.
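The kind of decomposition described above can be illustrated with a minimal sketch. This is not the authors' actual analysis: the data below are simulated, and the rater-pair and item effects are invented for illustration. Only the 10 × 188 shape (N = 1,880) mirrors the study design. The sketch shows how a raters×items matrix of binary agreement indicators can be split into rater, item, and residual shares of variation via a balanced two-way sum-of-squares decomposition.

```python
# Illustrative sketch (simulated data, not the study's analysis): decompose
# variation in a binary agreement matrix into rater-pair, item, and residual
# shares. The 10 x 188 shape mirrors the study design (N = 1,880).
import numpy as np

rng = np.random.default_rng(0)
n_pairs, n_items = 10, 188

# Simulated agreement indicators: 1 = the two raters in a pair agree on an item.
pair_effect = rng.normal(0, 0.3, size=(n_pairs, 1))   # hypothetical rater-pair leniency
item_effect = rng.normal(0, 0.6, size=(1, n_items))   # hypothetical item difficulty
latent = 1.0 + pair_effect + item_effect + rng.normal(0, 0.8, size=(n_pairs, n_items))
agree = (latent > 0).astype(float)                    # N = 1,880 binary observations

# Balanced two-way (main-effects) sum-of-squares decomposition, ANOVA style:
# SS_total = SS_pairs + SS_items + SS_residual for one observation per cell.
grand = agree.mean()
ss_total = ((agree - grand) ** 2).sum()
ss_pairs = n_items * ((agree.mean(axis=1) - grand) ** 2).sum()
ss_items = n_pairs * ((agree.mean(axis=0) - grand) ** 2).sum()
ss_resid = ss_total - ss_pairs - ss_items

shares = {k: v / ss_total for k, v in
          [("raters", ss_pairs), ("items", ss_items), ("residual", ss_resid)]}
print({k: round(v, 3) for k, v in shares.items()})  # the three shares sum to 1
```

In a balanced design with one observation per cell, the residual term equals the sum of squared interaction residuals and is therefore non-negative; the three shares partition the total variation exactly, which is what makes the "relative shares" reported in the abstract well defined.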



CONCLUSION:

Supported by a conceptual analysis, we propose in-depth examination of agreement variation as a strategy for increasing the level of interrater agreement. By identifying and limiting the most important sources of disagreement, instrument reliability can ultimately be improved.
type: Contribution to journal
publication status: published
in: Journal of Clinical Epidemiology
volume: 65
issue: 9
pages: 1016-1025
publisher: Elsevier
external identifiers:
  • wos:000307486300014
  • pmid:22742912
  • scopus:84864282727
ISSN: 1878-5921
DOI: 10.1016/j.jclinepi.2012.02.016
language: English
LU publication?: yes
id: fa7b85af-c92a-4c17-a5c1-aa42ecbbcf80 (old id 2858876)
alternative location: http://www.ncbi.nlm.nih.gov/pubmed/22742912?dopt=Abstract
date added to LUP: 2012-07-04 20:51:48
date last changed: 2017-10-01 03:14:14
@article{fa7b85af-c92a-4c17-a5c1-aa42ecbbcf80,
  abstract     = {OBJECTIVE: The overall objective was to unfold the phenomenon of interrater agreement: to identify potential sources of variation in agreement data and to explore how they can be statistically accounted for. The ultimate aim was to propose recommendations for in-depth examination of agreement to improve the reliability of assessment instruments. STUDY DESIGN AND SETTING: Using a sample in which 10 rater pairs had assessed the presence/absence of 188 environmental barriers with a systematic rating form, a raters×items data set was generated (N=1,880). In addition to common agreement indices, relative shares of agreement variation were calculated. Multilevel regression analysis was carried out, using rater and item characteristics as predictors of agreement variation. RESULTS: Following a conceptual decomposition, the agreement variation was statistically disentangled into relative shares. The raters accounted for 6-11%, the items for 32-33%, and the residual for 57-60% of the variation. Multilevel regression analysis showed barrier prevalence and raters' familiarity with standardized instruments to have the strongest impact on agreement. CONCLUSION: Supported by a conceptual analysis, we propose in-depth examination of agreement variation as a strategy for increasing the level of interrater agreement. By identifying and limiting the most important sources of disagreement, instrument reliability can ultimately be improved.},
  author       = {Slaug, Björn and Schilling, Oliver and Helle, Tina and Iwarsson, Susanne and Carlsson, Gunilla and Brandt, Åse},
  doi          = {10.1016/j.jclinepi.2012.02.016},
  issn         = {1878-5921},
  journal      = {Journal of Clinical Epidemiology},
  language     = {eng},
  number       = {9},
  pages        = {1016--1025},
  publisher    = {Elsevier},
  title        = {Unfolding the phenomenon of interrater agreement: a multicomponent approach for in-depth examination was proposed},
  url          = {http://dx.doi.org/10.1016/j.jclinepi.2012.02.016},
  volume       = {65},
  year         = {2012},
}