On Using Active Learning and Self-Training when Mining Performance Discussions on Stack Overflow
21st International Conference on Evaluation and Assessment in Software Engineering (EASE'17), 2017
- Abstract
- Abundant data is the key to successful machine learning. However, supervised learning requires annotated data that are often hard to obtain. In a classification task with limited resources, Active Learning (AL) promises to guide annotators to examples that bring the most value for a classifier. AL can be successfully combined with self-training, i.e., extending a training set with the unlabelled examples for which a classifier is the most certain. We report our experiences on using AL in a systematic manner to train an SVM classifier for Stack Overflow posts discussing performance of software components. We show that the training examples deemed as the most valuable to the classifier are also the most difficult for humans to annotate. Despite carefully evolved annotation criteria, we report low inter-rater agreement, but we also propose mitigation strategies. Finally, based on one annotator's work, we show that self-training can improve the classification accuracy. We conclude the paper by discussing implications for future text miners aspiring to use AL and self-training.
Please use this url to cite or link to this publication:
https://lup.lub.lu.se/record/847799fd-a909-4212-97cb-4c5cc319bae4
- author
- Borg, Markus ; Lennerstad, Iben ; Ros, Rasmus and Bjarnason, Elizabeth
- organization
- publishing date
- 2017-06-01
- type
- Chapter in Book/Report/Conference proceeding
- publication status
- published
- subject
- host publication
- EASE'17 Proceedings of the 21st International Conference on Evaluation and Assessment in Software Engineering
- pages
- 6 pages
- publisher
- Association for Computing Machinery (ACM)
- conference name
- 21st International Conference on Evaluation and Assessment in Software Engineering (EASE'17)
- conference location
- Karlskrona, Sweden
- conference dates
- 2017-06-15 - 2017-06-16
- external identifiers
- scopus:85025467713
- ISBN
- 978-1-4503-4804-1
- DOI
- 10.1145/3084226.3084273
- language
- English
- LU publication?
- yes
- id
- 847799fd-a909-4212-97cb-4c5cc319bae4
- date added to LUP
- 2017-06-28 08:23:04
- date last changed
- 2023-09-07 07:55:03
@inproceedings{847799fd-a909-4212-97cb-4c5cc319bae4,
  abstract  = {{Abundant data is the key to successful machine learning. However, supervised learning requires annotated data that are often hard to obtain. In a classification task with limited resources, Active Learning (AL) promises to guide annotators to examples that bring the most value for a classifier. AL can be successfully combined with self-training, i.e., extending a training set with the unlabelled examples for which a classifier is the most certain. We report our experiences on using AL in a systematic manner to train an SVM classifier for Stack Overflow posts discussing performance of software components. We show that the training examples deemed as the most valuable to the classifier are also the most difficult for humans to annotate. Despite carefully evolved annotation criteria, we report low inter-rater agreement, but we also propose mitigation strategies. Finally, based on one annotator's work, we show that self-training can improve the classification accuracy. We conclude the paper by discussing implications for future text miners aspiring to use AL and self-training.}},
  author    = {{Borg, Markus and Lennerstad, Iben and Ros, Rasmus and Bjarnason, Elizabeth}},
  booktitle = {{EASE'17 Proceedings of the 21st International Conference on Evaluation and Assessment in Software Engineering}},
  isbn      = {{978-1-4503-4804-1}},
  language  = {{eng}},
  month     = {{06}},
  publisher = {{Association for Computing Machinery (ACM)}},
  title     = {{On Using Active Learning and Self-Training when Mining Performance Discussions on Stack Overflow}},
  url       = {{http://dx.doi.org/10.1145/3084226.3084273}},
  doi       = {{10.1145/3084226.3084273}},
  year      = {{2017}},
}