Comparison of standard resampling methods for performance estimation of artificial neural network ensembles
(2007) Third International Conference on Computational Intelligence in Medicine and Healthcare
- Abstract
- Estimation of the generalization performance for classification within the medical applications domain is always an important task. In this study we focus on artificial neural network ensembles as the machine learning technique. We present a numerical comparison of five common resampling techniques: k-fold cross-validation (CV), holdout with three different cutoffs, and the bootstrap, evaluated on five different data sets. The results show that CV together with holdout $0.25$ and $0.50$ are the best resampling strategies for estimating the true performance of ANN ensembles. The bootstrap, using the .632+ rule, is too optimistic, while holdout $0.75$ underestimates the true performance.
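The three families of resampling strategies compared in the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' code: the split sizes, the seed, and the `err_632` helper are assumptions for demonstration, and only the plain .632 combination is shown (the paper uses the refined .632+ rule, which additionally weights by the relative overfitting rate).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100                      # hypothetical data set size
idx = rng.permutation(n)

# Holdout with cutoff 0.25: hold out 25% of the samples for testing.
cut = int(0.25 * n)
test_idx, train_idx = idx[:cut], idx[cut:]

# k-fold CV (k = 5): each sample lands in exactly one validation fold.
folds = np.array_split(idx, 5)

# Bootstrap: draw n indices with replacement; the out-of-bag (OOB)
# samples (never drawn) supply the held-out error estimate.
boot = rng.integers(0, n, size=n)
oob = np.setdiff1d(np.arange(n), boot)

def err_632(err_train, err_oob):
    """Plain .632 estimator: blend resubstitution and OOB error."""
    return 0.368 * err_train + 0.632 * err_oob
```

With holdout $0.25$, 25 of the 100 samples end up in the test split; the five CV folds partition all 100 samples; and the OOB set is disjoint from the bootstrap sample by construction.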
Please use this url to cite or link to this publication:
https://lup.lub.lu.se/record/593195
- author
- Green, Michael and Ohlsson, Mattias
- publishing date
- 2007
- type
- Chapter in Book/Report/Conference proceeding
- publication status
- published
- keywords
- performance estimation, k-fold cross validation, bootstrap, artificial neural networks
- host publication
- Third International Conference on Computational Intelligence in Medicine and Healthcare
- editor
- Ifeachor, Emmanuel
- pages
- 6 pages
- conference name
- Third International Conference on Computational Intelligence in Medicine and Healthcare
- conference dates
- 2007-07-25 - 2007-07-27
- language
- English
- id
- 06a42779-0a76-4c80-8d24-84f384f01135 (old id 593195)
@inproceedings{06a42779-0a76-4c80-8d24-84f384f01135,
  abstract  = {{Estimation of the generalization performance for classification within the medical applications domain is always an important task. In this study we focus on artificial neural network ensembles as the machine learning technique. We present a numerical comparison of five common resampling techniques: k-fold cross-validation (CV), holdout with three different cutoffs, and the bootstrap, evaluated on five different data sets. The results show that CV together with holdout $0.25$ and $0.50$ are the best resampling strategies for estimating the true performance of ANN ensembles. The bootstrap, using the .632+ rule, is too optimistic, while holdout $0.75$ underestimates the true performance.}},
  author    = {{Green, Michael and Ohlsson, Mattias}},
  booktitle = {{Third International Conference on Computational Intelligence in Medicine and Healthcare}},
  editor    = {{Ifeachor, Emmanuel}},
  keywords  = {{performance estimation; k-fold cross validation; bootstrap; artificial neural networks}},
  language  = {{eng}},
  title     = {{Comparison of standard resampling methods for performance estimation of artificial neural network ensembles}},
  url       = {{https://lup.lub.lu.se/search/files/6336536/593198.ps}},
  year      = {{2007}},
}