Optimising transparency, reliability and replicability: annotation principles and inter-coder agreement in the quantification of evaluative expressions
(2015) In Corpora 10(3), pp. 315-349.
- Abstract
- Manual corpus annotation facilitates exhaustive and detailed corpus-based analyses of evaluation that would not be possible with purely automatic techniques. However, manual annotation is a complex and subjective process. Most studies adopting this approach have paid insufficient attention to the methodological challenges involved in manually annotating evaluation - especially concerning transparency, reliability and replicability. This article illustrates a procedure for annotating evaluative expressions in text that facilitates more transparent, reliable and replicable analyses. The method is demonstrated through a case study analysis of APPRAISAL (Martin and White, 2005) in a small-size specialised corpus of CEO letters published by the British energy company, BP, and four competitors before and after the Deepwater Horizon oil spill of 2010. Drawing on Fuoli and Paradis's (2014) model of trust-repair discourse, we examine how ATTITUDE and ENGAGEMENT resources are strategically deployed by BP's CEO in the attempt to repair stakeholders' trust after the accident.
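- A central concern of the article is quantifying inter-coder agreement for manual annotation. As a purely illustrative sketch (not the authors' own procedure, which the article should be consulted for), the Python snippet below shows how agreement between two annotators could be measured with Cohen's kappa, one standard coefficient for categorical coding; the label set (ATTITUDE subtypes from Martin and White, 2005) and the example data are hypothetical.

```python
# Illustrative sketch: Cohen's kappa for two coders' parallel annotations.
# Labels and data are hypothetical, not taken from the article.
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two equal-length sequences of category labels."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed proportion of items on which the two coders agree.
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement expected from each coder's marginal label distribution.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[lab] * freq_b[lab] for lab in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical ATTITUDE labels assigned by two annotators to the same ten spans.
coder_1 = ["affect", "judgement", "judgement", "appreciation", "affect",
           "affect", "judgement", "appreciation", "affect", "judgement"]
coder_2 = ["affect", "judgement", "affect", "appreciation", "affect",
           "judgement", "judgement", "appreciation", "affect", "judgement"]

print(f"kappa = {cohens_kappa(coder_1, coder_2):.2f}")  # kappa = 0.69
```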
Please use this url to cite or link to this publication:
https://lup.lub.lu.se/record/8380485
- author
- Fuoli, Matteo and Hommerberg, Charlotte
- organization
- publishing date
- 2015
- type
- Contribution to journal
- publication status
- published
- subject
- keywords
- evaluation, APPRAISAL theory, manual corpus annotation, inter-coder agreement, reliability, transparency, replicability, trust-repair, BP Deepwater Horizon oil spill
- in
- Corpora
- volume
- 10
- issue
- 3
- pages
- 315 - 349
- publisher
- Edinburgh University Press
- external identifiers
- wos:000364637700004
- scopus:84947944275
- ISSN
- 1755-1676
- DOI
- 10.3366/cor.2015.0080
- language
- English
- LU publication?
- yes
- id
- 7cf6c60a-dd06-4e5d-bdc8-de6b95ba70dd (old id 8380485)
- date added to LUP
- 2016-04-01 10:33:14
- date last changed
- 2022-03-19 21:55:31
@article{7cf6c60a-dd06-4e5d-bdc8-de6b95ba70dd,
  abstract  = {{Manual corpus annotation facilitates exhaustive and detailed corpus-based analyses of evaluation that would not be possible with purely automatic techniques. However, manual annotation is a complex and subjective process. Most studies adopting this approach have paid insufficient attention to the methodological challenges involved in manually annotating evaluation - especially concerning transparency, reliability and replicability. This article illustrates a procedure for annotating evaluative expressions in text that facilitates more transparent, reliable and replicable analyses. The method is demonstrated through a case study analysis of APPRAISAL (Martin and White, 2005) in a small-size specialised corpus of CEO letters published by the British energy company, BP, and four competitors before and after the Deepwater Horizon oil spill of 2010. Drawing on Fuoli and Paradis's (2014) model of trust-repair discourse, we examine how ATTITUDE and ENGAGEMENT resources are strategically deployed by BP's CEO in the attempt to repair stakeholders' trust after the accident.}},
  author    = {{Fuoli, Matteo and Hommerberg, Charlotte}},
  issn      = {{1755-1676}},
  keywords  = {{evaluation; APPRAISAL theory; manual corpus annotation; inter-coder agreement; reliability; transparency; replicability; trust-repair; BP Deepwater Horizon oil spill}},
  language  = {{eng}},
  number    = {{3}},
  pages     = {{315--349}},
  publisher = {{Edinburgh University Press}},
  series    = {{Corpora}},
  title     = {{Optimising transparency, reliability and replicability: annotation principles and inter-coder agreement in the quantification of evaluative expressions}},
  url       = {{https://lup.lub.lu.se/search/files/8642534/FUOLI_HOMMERBERG_Optimizing_transparency_reliability_and_replicability_MANUSCRIPT.pdf}},
  doi       = {{10.3366/cor.2015.0080}},
  volume    = {{10}},
  year      = {{2015}},
}