
Optimising transparency, reliability and replicability: annotation principles and inter-coder agreement in the quantification of evaluative expressions

Fuoli, Matteo and Hommerberg, Charlotte (2015). In Corpora 10(3), pp. 315–349.
Abstract
Manual corpus annotation facilitates exhaustive and detailed corpus-based analyses of evaluation that would not be possible with purely automatic techniques. However, manual annotation is a complex and subjective process. Most studies adopting this approach have paid insufficient attention to the methodological challenges involved in manually annotating evaluation - especially concerning transparency, reliability and replicability. This article illustrates a procedure for annotating evaluative expressions in text that facilitates more transparent, reliable and replicable analyses. The method is demonstrated through a case study analysis of APPRAISAL (Martin and White, 2005) in a small-size specialised corpus of CEO letters published by the British energy company, BP, and four competitors before and after the Deepwater Horizon oil spill of 2010. Drawing on Fuoli and Paradis's (2014) model of trust-repair discourse, we examine how ATTITUDE and ENGAGEMENT resources are strategically deployed by BP's CEO in the attempt to repair stakeholders' trust after the accident.
author: Fuoli, Matteo and Hommerberg, Charlotte
publishing date: 2015
type: Contribution to journal
publication status: published
keywords: evaluation, APPRAISAL theory, manual corpus annotation, inter-coder agreement, reliability, transparency, replicability, trust-repair, BP Deepwater Horizon oil spill
in: Corpora
volume: 10
issue: 3
pages: 315–349
publisher: Edinburgh University Press
external identifiers:
  • wos:000364637700004
  • scopus:84947944275
ISSN: 1755-1676
DOI: 10.3366/cor.2015.0080
language: English
LU publication?: yes
id: 7cf6c60a-dd06-4e5d-bdc8-de6b95ba70dd (old id 8380485)
date added to LUP: 2015-06-23 11:38:07
date last changed: 2017-09-17 03:48:23
@article{7cf6c60a-dd06-4e5d-bdc8-de6b95ba70dd,
  abstract     = {Manual corpus annotation facilitates exhaustive and detailed corpus-based analyses of evaluation that would not be possible with purely automatic techniques. However, manual annotation is a complex and subjective process. Most studies adopting this approach have paid insufficient attention to the methodological challenges involved in manually annotating evaluation - especially concerning transparency, reliability and replicability. This article illustrates a procedure for annotating evaluative expressions in text that facilitates more transparent, reliable and replicable analyses. The method is demonstrated through a case study analysis of APPRAISAL (Martin and White, 2005) in a small-size specialised corpus of CEO letters published by the British energy company, BP, and four competitors before and after the Deepwater Horizon oil spill of 2010. Drawing on Fuoli and Paradis's (2014) model of trust-repair discourse, we examine how ATTITUDE and ENGAGEMENT resources are strategically deployed by BP's CEO in the attempt to repair stakeholders' trust after the accident.},
  author       = {Fuoli, Matteo and Hommerberg, Charlotte},
  doi          = {10.3366/cor.2015.0080},
  issn         = {1755-1676},
  journal      = {Corpora},
  keywords     = {evaluation, APPRAISAL theory, manual corpus annotation, inter-coder agreement, reliability, transparency, replicability, trust-repair, BP Deepwater Horizon oil spill},
  language     = {eng},
  number       = {3},
  pages        = {315--349},
  publisher    = {Edinburgh University Press},
  title        = {Optimising transparency, reliability and replicability: annotation principles and inter-coder agreement in the quantification of evaluative expressions},
  url          = {https://doi.org/10.3366/cor.2015.0080},
  volume       = {10},
  year         = {2015},
}