Extracting Knowledge from Neural Networks in Image Processing
(2003), pp. 107–127
- Abstract
- Despite their success story, artificial neural networks have one major disadvantage compared to other techniques: the inability to explain comprehensively how a trained neural network reaches its output; neural networks are not only (incorrectly) seen as a “magic tool” but possibly even more as a mysterious “black box.” Although much research has already been done to “open the box,” there is a notable hiatus in known publications on the analysis of neural networks. So far, mainly sensitivity analysis and rule extraction methods have been used to analyze neural networks. However, these can only be applied in a limited subset of the problem domains where neural network solutions are encountered. In this chapter we propose a more widely applicable method which, for a given problem domain, involves identifying basic functions with which users in that domain are already familiar, and describing trained neural networks, or parts thereof, in terms of those basic functions. This provides a comprehensible description of the neural network’s function and, depending on the chosen basic functions, it may also provide insight into the neural network’s inner “reasoning.” To illustrate our method, the elements of a feedforward-backpropagation neural network that has been trained to detect edges in images are described in terms of differential operators of various orders and with various angles of operation. The results are then compared with image filters known from the literature, which we analyzed in the same way.
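The core idea of the abstract can be illustrated with a minimal sketch (this is not the chapter's actual procedure, and the operator set and the "trained" kernel below are hypothetical): a learned convolution kernel is compared against a small basis of familiar differential operators via normalized cross-correlation, and described by its best match.

```python
import numpy as np

# A small set of "basic functions" familiar from the image-filter
# literature; the chapter's actual set covers various orders and
# angles of operation, which this sketch does not.
BASES = {
    "sobel_x (1st-order, 0 deg)": np.array([[-1, 0, 1],
                                            [-2, 0, 2],
                                            [-1, 0, 1]], float),
    "sobel_y (1st-order, 90 deg)": np.array([[-1, -2, -1],
                                             [ 0,  0,  0],
                                             [ 1,  2,  1]], float),
    "laplacian (2nd-order)": np.array([[0,  1, 0],
                                       [1, -4, 1],
                                       [0,  1, 0]], float),
}

def describe_kernel(kernel):
    """Return (name, similarity) of the basis operator most similar
    to a trained kernel, using normalized cross-correlation."""
    k = kernel - kernel.mean()
    k = k / (np.linalg.norm(k) or 1.0)
    scores = {}
    for name, b in BASES.items():
        b0 = b - b.mean()
        b0 = b0 / np.linalg.norm(b0)
        scores[name] = abs(float(np.sum(k * b0)))
    best = max(scores, key=scores.get)
    return best, scores[best]

# A hypothetical "trained" first-layer kernel: a noisy vertical
# edge detector, as such a network might learn.
trained = np.array([[-0.9,  0.1, 1.1],
                    [-2.1,  0.0, 1.9],
                    [-1.0, -0.1, 1.0]])
name, score = describe_kernel(trained)
```

Here the noisy kernel correlates most strongly with the first-order horizontal-derivative (Sobel x) template, so it would be described in those familiar terms; the chapter performs a richer decomposition of this kind over whole network elements.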
Please use this URL to cite or link to this publication:
https://lup.lub.lu.se/record/602467
- author
- van der Zwaag, B.J. ; Slump, C.H. and Spaanenburg, Lambert LU
- organization
- publishing date
- 2003
- type
- Chapter in Book/Report/Conference proceeding
- publication status
- published
- subject
- keywords
- image processing, neural networks
- host publication
- Innovations in Knowledge Engineering
- editor
- Jain, R.K.
- pages
- 107 - 127
- publisher
- World Scientific Publishing
- language
- English
- LU publication?
- yes
- id
- d37afe92-6296-44eb-8ea6-8c52ec8eb67a (old id 602467)
- date added to LUP
- 2016-04-04 10:51:17
- date last changed
- 2021-01-25 10:54:21
@inbook{d37afe92-6296-44eb-8ea6-8c52ec8eb67a,
  abstract  = {{Despite their success-story, artificial neural networks have one major disadvantage compared to other techniques: the inability to explain comprehensively how a trained neural network reaches its output; neural networks are not only (incorrectly) seen as a “magic tool” but possibly even more as a mysterious “black box.” Although much research has already been done to “open the box,” there is a notable hiatus in known publications on analysis of neural networks. So far, mainly sensitivity analysis and rule extraction methods have been used to analyze neural networks. However, these can only be applied in a limited subset of the problem domains where neural network solutions are encountered. In this chapter we propose a wider applicable method which, for a given problem domain, involves identifying basic functions with which users in that domain are already familiar, and describing trained neural networks, or parts thereof, in terms of those basic functions. This will provide a comprehensible description of the neural network’s function and, depending on the chosen base functions, it may also provide an insight into the neural network’s inner “reasoning.” To illustrate our method, the elements of a feedforward-backpropagation neural network, that has been trained to detect edges in images, are described in terms of differential operators of various orders and with various angles of operation. The results are then compared with image filters known from literature, which we analyzed in the same way.}},
  author    = {{van der Zwaag, B.J. and Slump, C.H. and Spaanenburg, Lambert}},
  booktitle = {{Innovations in Knowledge Engineering}},
  editor    = {{Jain, R.K.}},
  keywords  = {{image processing; neural networks}},
  language  = {{eng}},
  pages     = {{107--127}},
  publisher = {{World Scientific Publishing}},
  title     = {{Extracting Knowledge from Neural Networks in Image Processing}},
  year      = {{2003}},
}