
Lund University Publications


Analysis of neural networks through base functions

VanderZwaag, B J; Slump, C H and Spaanenburg, Lambert (2002) SNN/STW workshop "Lerende Oplossingen", 2002
Abstract
Problem statement. Despite their success story, neural networks have one major disadvantage compared to other techniques: the inability to explain comprehensibly how a trained neural network reaches its output. Neural networks are not only (incorrectly) seen as a “magic tool” but possibly even more as a mysterious “black box” [1]. This is an important aspect of the functionality of any technology, as users will want to know “how it works” before trusting it completely.

Although much research has already been done to “open the box,” there is a notable gap in the published literature on the analysis of neural networks. So far, mainly sensitivity analysis and rule extraction methods have been used to analyze neural networks. However, these can only be applied in a limited subset of the problem domains where neural network solutions are encountered.

Research goal and approach. We therefore propose a method that, for a given problem domain, involves identifying basic functions with which users in that domain are already familiar, and describing trained neural networks, or parts thereof, in terms of those basic functions. This provides a comprehensible description of the neural network’s function and, depending on the chosen basic functions, may also offer insight into the network’s inner “reasoning.”
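The decomposition idea above can be sketched as a least-squares projection of a trained network’s weights onto a set of base functions the domain user already knows. The learned kernel values and the choice of basis below are illustrative assumptions, not figures from the poster:

```python
import numpy as np

# Hypothetical 3x3 kernel learned by a neural-network edge detector
# (assumed values, roughly a noisy horizontal-gradient filter).
learned = np.array([[-0.9,  0.1,  1.1],
                    [-2.1,  0.0,  1.9],
                    [-1.0, -0.1,  0.9]])

# Familiar base functions from digital image processing (assumed basis).
sobel_x   = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
sobel_y   = sobel_x.T
laplacian = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], float)
basis = [sobel_x, sobel_y, laplacian]

# Least-squares projection: find coefficients c so that
# sum_i c[i] * basis[i] best approximates the learned kernel.
A = np.stack([b.ravel() for b in basis], axis=1)          # 9 x 3 design matrix
c, *_ = np.linalg.lstsq(A, learned.ravel(), rcond=None)
approx = sum(ci * b for ci, b in zip(c, basis))

print("coefficients:", np.round(c, 3))
print("residual norm:", round(float(np.linalg.norm(learned - approx)), 3))
```

A dominant coefficient on one base filter with a small residual would let a practitioner read the network’s function as “essentially a Sobel-x operator,” which is exactly the kind of domain-familiar description the method aims for.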

Relevance. Domain-specific analysis of neural networks through base functions will not only provide insight into the internal and external behavior of neural networks and show their possible limitations in particular applications, but it will also lower the acceptability threshold for future users unfamiliar with neural networks. Further, domain-specific neural network analysis methods that utilize domain-specific base functions can also be used to optimize neural network systems. An analysis in terms of base functions may even make clear how to (re)construct a superior system using those base functions, thus using the neural network merely as a construction advisor. If users do not want to trust a neural network for any reason whatsoever, they may still trust a non-neural system that would have been nearly impossible to construct without using a neural network as an advisor.

Initial results. As an example, the poster shows that an edge detector realized by a neural network can be analyzed in terms of differential filter operators, which are common in the digital image processing domain (for more details, see [2]). The same analysis was applied to some well-known image filters, enabling comparison of conventional edge detectors known from the literature with the neural network edge detectors. Our comparison differs from more commonly used comparison methods in that it was based directly on the detectors’ filter operations rather than on their performance on a given (benchmark) example. The latter is a more indirect method of comparison and does not provide any insight into the neural network’s functionality.
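One way to compare filters directly, rather than through benchmark scores, is to compare their frequency responses. The sketch below contrasts a hypothetical learned kernel (assumed values) with the classical Sobel operator; it is an illustration of the idea, not the poster’s actual procedure:

```python
import numpy as np

# Hypothetical kernel learned by a neural-network edge detector (assumed).
learned = np.array([[-1.1, 0.0, 0.9],
                    [-1.9, 0.1, 2.1],
                    [-0.9, 0.0, 1.0]])
# Classical horizontal Sobel operator from the image-processing literature.
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)

def freq_response(kernel, size=32):
    """Zero-pad the kernel and return its 2-D magnitude frequency response."""
    padded = np.zeros((size, size))
    padded[:kernel.shape[0], :kernel.shape[1]] = kernel
    return np.abs(np.fft.fft2(padded))

h_learned = freq_response(learned)
h_sobel = freq_response(sobel_x)

# Normalized correlation between the two magnitude responses:
# values near 1 mean the filters perform nearly the same operation.
similarity = float(np.vdot(h_learned, h_sobel)) / (
    np.linalg.norm(h_learned) * np.linalg.norm(h_sobel))
print(f"frequency-response similarity: {similarity:.3f}")
```

A high similarity here characterizes what the network computes, independently of any particular test image, which is the distinction the paragraph above draws against benchmark-based comparison.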
author
VanderZwaag, B J; Slump, C H and Spaanenburg, Lambert
publishing date
2002
type
Chapter in Book/Report/Conference proceeding
publication status
published
subject
host publication
SNN/STW workshop "Lerende Oplossingen"
conference name
SNN/STW workshop "Lerende Oplossingen", 2002
conference location
Nijmegen
conference dates
2002-06-14 - 2002-06-14
language
English
LU publication?
no
id
02d4a820-74ba-42bf-8877-3b8935e88d8e (old id 603942)
date added to LUP
2016-04-04 13:00:16
date last changed
2018-11-21 21:11:44
@inproceedings{02d4a820-74ba-42bf-8877-3b8935e88d8e,
  author       = {{VanderZwaag, B J and Slump, C H and Spaanenburg, Lambert}},
  booktitle    = {{SNN/STW workshop "Lerende Oplossingen"}},
  language     = {{eng}},
  title        = {{Analysis of neural networks through base functions}},
  year         = {{2002}},
}