
Lund University Publications


Manual annotation of evaluative language expressions : bridging discourse and corpus approaches

Fuoli, Matteo and Glynn, Dylan (2013) Evaluative Language and Corpus Linguistics Workshop - Corpus Linguistics Conference
Abstract
The analysis of evaluation in text poses significant methodological challenges, which mainly arise from the fact that: a) evaluative meanings can be expressed through an open-ended range of lexico-grammatical resources; b) they can span multiple words; c) context and co-text play a key role in determining the evaluative meaning of words or phrases; d) the interpretation of evaluation in text depends on the reader’s/analyst’s reading position and is, therefore, necessarily subjective.

These complexities have so far seriously limited the development of corpus-driven descriptions of this phenomenon. This study addresses these methodological issues through the application of multivariate usage-feature / profile-based analysis (Geeraerts et al., 1994; Gries, 2003). This method bridges quantitative and qualitative perspectives and has been successfully applied to the description of various semantic and morphosyntactic phenomena. The method relies on the use of manual annotation software and is based on explicit criteria for the identification of evaluative items, including inter-annotator agreement metrics to control for rater bias. The results of the annotation are modelled with multivariate statistics. This step identifies structural patterns in the data and tests the accuracy of the description with predictive modelling.
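The abstract mentions inter-annotator agreement metrics as a control for rater bias. As a purely illustrative sketch (the abstract does not specify which metric the authors use), Cohen's kappa is one common chance-corrected agreement measure for two annotators labelling the same items; the annotator labels below are hypothetical:

```python
# Sketch of Cohen's kappa for two annotators labelling the same items.
# Illustrative only: the paper does not state which agreement metric it applies.
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two annotators."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: proportion of items given identical labels.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected chance agreement, from each annotator's label distribution.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Two hypothetical annotators tagging spans as evaluative (E) or neutral (N).
a = ["E", "E", "N", "E", "N", "N", "E", "N"]
b = ["E", "N", "N", "E", "N", "N", "E", "E"]
print(cohens_kappa(a, b))  # 0.5: moderate agreement beyond chance
```

A kappa of 1 indicates perfect agreement and 0 indicates agreement no better than chance, which is why such metrics are preferred over raw percent agreement when assessing annotation reliability.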
author: Fuoli, Matteo; Glynn, Dylan
publishing date: 2013
type: Contribution to conference
publication status: published
conference name: Evaluative Language and Corpus Linguistics Workshop - Corpus Linguistics Conference
conference location: Lancaster, United Kingdom
conference dates: 2013-07-22 - 2013-07-26
language: English
LU publication?: yes
id: d9b5d789-8462-40c3-8584-2c8132c2fe5a (old id 4015949)
date added to LUP: 2016-04-04 13:22:02
date last changed: 2018-11-21 21:13:30
@misc{d9b5d789-8462-40c3-8584-2c8132c2fe5a,
  abstract     = {{The analysis of evaluation in text poses significant methodological challenges, which mainly arise from the fact that: a) evaluative meanings can be expressed through an open-ended range of lexico-grammatical resources; b) they can span multiple words; c) context and co-text play a key role in determining the evaluative meaning of words or phrases; d) the interpretation of evaluation in text depends on the reader’s/analyst’s reading position and is, therefore, necessarily subjective.

These complexities have so far seriously limited the development of corpus-driven descriptions of this phenomenon. This study addresses these methodological issues through the application of multivariate usage-feature / profile-based analysis (Geeraerts et al., 1994; Gries, 2003). This method bridges quantitative and qualitative perspectives and has been successfully applied to the description of various semantic and morphosyntactic phenomena. The method relies on the use of manual annotation software and is based on explicit criteria for the identification of evaluative items, including inter-annotator agreement metrics to control for rater bias. The results of the annotation are modelled with multivariate statistics. This step identifies structural patterns in the data and tests the accuracy of the description with predictive modelling.}},
  author       = {{Fuoli, Matteo and Glynn, Dylan}},
  language     = {{eng}},
  title        = {{Manual annotation of evaluative language expressions : bridging discourse and corpus approaches}},
  url          = {{https://lup.lub.lu.se/search/files/6102451/4015951.pdf}},
  year         = {{2013}},
}