
Unshared Task : (Dis)agreement in Online Debates

Skeppstedt, Maria ; Sahlgren, Magnus ; Paradis, Carita and Kerren, Andreas (2016) 3rd Workshop on Argument Mining (ArgMining '16) p. 154-159
Abstract
Topic-independent expressions for conveying agreement and disagreement were annotated in a corpus of web forum debates, in order to evaluate a classifier trained to detect these two categories. Among the 175 expressions annotated in the evaluation set, 163 were unique, which shows that there is large variation in expressions used. This variation might be one of the reasons why the task of automatically detecting the categories was difficult. F-scores of 0.44 and 0.37 were achieved by a classifier trained on 2,000 debate sentences for detecting sentence-level agreement and disagreement.
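The abstract describes detecting topic-independent expressions of agreement and disagreement at the sentence level. As a minimal illustration of that task setup (not the study's actual classifier, features, or annotated expressions), a cue-based sentence labeller might look like the following sketch; the cue lists are invented for illustration:

```python
# Hypothetical sketch: sentence-level (dis)agreement detection via
# topic-independent cue expressions. The cue lists below are invented
# examples, NOT the 163 unique expressions annotated in the study.
AGREE_CUES = ["i agree", "exactly", "good point", "you are right"]
DISAGREE_CUES = ["i disagree", "not true", "you are wrong", "nonsense"]

def classify_sentence(sentence: str) -> str:
    """Return 'agreement', 'disagreement', or 'none' for one sentence."""
    text = sentence.lower()
    # Disagreement cues are checked first, so a sentence containing
    # both kinds of cue is labelled as disagreement.
    if any(cue in text for cue in DISAGREE_CUES):
        return "disagreement"
    if any(cue in text for cue in AGREE_CUES):
        return "agreement"
    return "none"

print(classify_sentence("Exactly, that is my point."))   # agreement
print(classify_sentence("That is simply not true."))     # disagreement
print(classify_sentence("The debate started at noon."))  # none
```

The large variation the paper reports (163 unique expressions among 175 annotated) suggests why a fixed cue list of this kind would cover few cases, which motivates training a statistical classifier on annotated sentences instead.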
author
Skeppstedt, Maria ; Sahlgren, Magnus ; Paradis, Carita and Kerren, Andreas
organization
publishing date
2016
type
Chapter in Book/Report/Conference proceeding
publication status
published
subject
keywords
argumentation mining, online debates, classifier, agreement, disagreement, stance, corpus, annotation
host publication
The 54th Annual Meeting of the Association for Computational Linguistics : Proceedings of the 3rd Workshop on Argument Mining
pages
154 - 159
publisher
Association for Computational Linguistics
conference name
3rd Workshop on Argument Mining (ArgMining '16)
conference location
Berlin, Germany
conference dates
2016-08-07 - 2016-08-12
ISBN
978-1-945626-17-3
language
English
LU publication?
yes
id
83f5d4b0-71de-445a-bec5-792d6e3583b7
alternative location
http://www.aclweb.org/anthology/W/W16/W16-28.pdf#page=166
date added to LUP
2016-11-28 20:51:20
date last changed
2019-03-08 02:28:45
@inproceedings{83f5d4b0-71de-445a-bec5-792d6e3583b7,
  abstract     = {Topic-independent expressions for conveying agreement and disagreement were annotated in a corpus of web forum debates, in order to evaluate a classifier trained to detect these two categories. Among the 175 expressions annotated in the evaluation set, 163 were unique, which shows that there is large variation in expressions used. This variation might be one of the reasons why the task of automatically detecting the categories was difficult. F-scores of 0.44 and 0.37 were achieved by a classifier trained on 2,000 debate sentences for detecting sentence-level agreement and disagreement.},
  author       = {Skeppstedt, Maria and Sahlgren, Magnus and Paradis, Carita and Kerren, Andreas},
  booktitle    = {The 54th Annual Meeting of the Association for Computational Linguistics : Proceedings of the 3rd Workshop on Argument Mining},
  isbn         = {978-1-945626-17-3},
  language     = {eng},
  pages        = {154--159},
  publisher    = {Association for Computational Linguistics},
  title        = {Unshared Task : (Dis)agreement in Online Debates},
  url          = {http://www.aclweb.org/anthology/W/W16/W16-28.pdf#page=166},
  year         = {2016},
}