Using semantic role labeling to predict answer types
(2014) The 7th International Workshop on Exploiting Semantic Annotations in Information Retrieval, ESAIR '14, pp. 29-31
- Abstract
- Most question answering systems feature a step to predict an expected answer type given a question. Li and Roth \cite{li2002learning} proposed an oft-cited taxonomy to categorize the answer types as well as an annotated data set. While offering a framework compatible with supervised learning, this method builds on a fixed and rigid model that has to be updated when the question-answering domain changes. More recently, Pinchak and Lin \cite{pinchak2006} designed a dynamic method using a syntactic model of the answers that proved more versatile. They used syntactic dependencies to model the question context and evaluated the performance on an English corpus. However, syntactic properties may vary across languages, and techniques applicable to English may fail with other languages. In this paper, we present a method for constructing a probability-based answer type model for each different question. We adapted and reproduced the original experiment of Pinchak and Lin \cite{pinchak2006} on a Chinese corpus and extended their model to semantic dependencies. Our model evaluates the probability that a candidate answer fits into the semantic context of a given question. We carried out an evaluation on a set of questions either drawn from the NTCIR corpus \cite{ntcir2005} or created manually.
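The idea of scoring how well a candidate answer fits a question's semantic context can be illustrated with a minimal count-based sketch. This is not the authors' exact formulation: the dependency triples, the (predicate, role) context representation, and the add-one smoothing below are illustrative assumptions, not taken from the paper.

```python
from collections import defaultdict

# Hypothetical corpus of (predicate, role, argument) semantic dependencies,
# as might be extracted by a semantic role labeler. Illustrative data only.
triples = [
    ("win", "A0", "team"), ("win", "A0", "player"),
    ("win", "A0", "team"), ("win", "A1", "match"),
    ("eat", "A0", "cat"),
]

# Count how often each argument word fills each (predicate, role) context.
context_counts = defaultdict(lambda: defaultdict(int))
for pred, role, arg in triples:
    context_counts[(pred, role)][arg] += 1

def fit_probability(candidate, contexts):
    """Estimate how well a candidate answer fits the question's semantic
    contexts: product of smoothed P(candidate | context) estimates
    (add-one smoothing over the observed vocabulary, a simplification)."""
    vocab = {arg for counts in context_counts.values() for arg in counts}
    p = 1.0
    for ctx in contexts:
        counts = context_counts[ctx]
        total = sum(counts.values()) + len(vocab)
        p *= (counts[candidate] + 1) / total
    return p

# "Who won the match?" -> the answer fills the A0 role of "win".
print(fit_probability("team", [("win", "A0")]) >
      fit_probability("cat", [("win", "A0")]))  # prints True
```

With these toy counts, "team" has been observed as the A0 of "win" while "cat" has not, so the model ranks "team" higher for that question context.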
Please use this url to cite or link to this publication:
https://lup.lub.lu.se/record/5115015
- author
- Li, Zuyao; Exner, Peter and Nugues, Pierre
- publishing date
- 2014
- type
- Chapter in Book/Report/Conference proceeding
- publication status
- published
- host publication
- ESAIR '14 Proceedings of the 7th International Workshop on Exploiting Semantic Annotations in Information Retrieval
- pages
- 29 - 31
- publisher
- Association for Computing Machinery (ACM)
- conference name
- the 7th International Workshop on Exploiting Semantic Annotations in Information Retrieval, ESAIR ’14
- conference location
- Shanghai, China
- conference dates
- 2014-11-03 - 2014-11-07
- external identifiers
- scopus:84978496759
- ISBN
- 978-1-4503-1365-0
- DOI
- 10.1145/2663712.2666186
- language
- English
- LU publication?
- yes
- id
- a7ed0a46-1fed-4cb4-bb80-387f3d3c1d30 (old id 5115015)
- alternative location
- http://humanities.uva.nl/~kamps/esair14/presentations/poster7.pdf
- date added to LUP
- 2016-04-04 13:34:15
- date last changed
- 2022-01-30 17:05:49
@inproceedings{a7ed0a46-1fed-4cb4-bb80-387f3d3c1d30,
  abstract  = {{Most question answering systems feature a step to predict an expected answer type given a question. Li and Roth \cite{li2002learning} proposed an oft-cited taxonomy to categorize the answer types as well as an annotated data set. While offering a framework compatible with supervised learning, this method builds on a fixed and rigid model that has to be updated when the question-answering domain changes. More recently, Pinchak and Lin \cite{pinchak2006} designed a dynamic method using a syntactic model of the answers that proved more versatile. They used syntactic dependencies to model the question context and evaluated the performance on an English corpus. However, syntactic properties may vary across languages, and techniques applicable to English may fail with other languages. In this paper, we present a method for constructing a probability-based answer type model for each different question. We adapted and reproduced the original experiment of Pinchak and Lin \cite{pinchak2006} on a Chinese corpus and extended their model to semantic dependencies. Our model evaluates the probability that a candidate answer fits into the semantic context of a given question. We carried out an evaluation on a set of questions either drawn from the NTCIR corpus \cite{ntcir2005} or created manually.}},
  author    = {{Li, Zuyao and Exner, Peter and Nugues, Pierre}},
  booktitle = {{ESAIR '14 Proceedings of the 7th International Workshop on Exploiting Semantic Annotations in Information Retrieval}},
  isbn      = {{978-1-4503-1365-0}},
  language  = {{eng}},
  pages     = {{29--31}},
  publisher = {{Association for Computing Machinery (ACM)}},
  title     = {{Using semantic role labeling to predict answer types}},
  url       = {{http://dx.doi.org/10.1145/2663712.2666186}},
  doi       = {{10.1145/2663712.2666186}},
  year      = {{2014}},
}