Lund University Publications

Explorations of the mean field theory learning algorithm

Peterson, Carsten and Hartman, Eric (1989). In Neural Networks 2(6), pp. 475–494
Abstract

The mean field theory (MFT) learning algorithm is elaborated and explored with respect to a variety of tasks. MFT is benchmarked against the back-propagation learning algorithm (BP) on two different feature recognition problems: two-dimensional mirror symmetry and multidimensional statistical pattern classification. We find that while the two algorithms are very similar with respect to generalization properties, MFT normally requires a substantially smaller number of training epochs than BP. Since the MFT model is bidirectional, rather than feed-forward, its use can be extended naturally from purely functional mappings to a content addressable memory. A network with N visible and N hidden units can store up to approximately 4N patterns with good content-addressability. We stress an implementational advantage for MFT: it is natural for VLSI circuitry.
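
As a reading aid, here is a minimal sketch of the two-phase mean-field learning procedure the abstract describes, written as a deterministic ("mean field") Boltzmann machine in Python. It assumes the standard formulation: units settle to the fixed point v_i = tanh((1/T) * sum_j w_ij v_j), and weights receive a contrastive Hebbian update, clamped-phase correlations minus free-phase correlations. All names, layer sizes, and the learning rate below are illustrative assumptions, not details taken from the paper.

    import numpy as np

    def settle(w, v, clamped, T=1.0, n_sweeps=50):
        """Relax unclamped units to the mean-field fixed point
        v_i = tanh((1/T) * sum_j w_ij * v_j); clamped units stay fixed."""
        for _ in range(n_sweeps):
            v = np.where(clamped, v, np.tanh((w @ v) / T))
        return v

    def mft_update(w, v_clamped, v_free, eta=0.05):
        """Contrastive Hebbian update: clamped-phase correlations
        minus free-phase correlations."""
        dw = eta * (np.outer(v_clamped, v_clamped) - np.outer(v_free, v_free))
        np.fill_diagonal(dw, 0.0)  # no self-connections
        return w + dw

    # Hypothetical toy step: 4 input, 2 output, 3 hidden units.
    n_in, n_out, n_hid = 4, 2, 3
    n = n_in + n_out + n_hid
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.1, size=(n, n))
    w = 0.5 * (w + w.T)
    np.fill_diagonal(w, 0.0)  # symmetric weights, no self-coupling

    x = np.sign(rng.normal(size=n_in))  # example +/-1 input pattern
    t = np.array([1.0, -1.0])           # example target for the outputs

    v0 = np.zeros(n)
    v0[:n_in] = x
    v0[n_in:n_in + n_out] = t

    # Clamped phase: inputs and targets fixed, hidden units settle.
    clamped = np.zeros(n, dtype=bool)
    clamped[:n_in + n_out] = True
    v_c = settle(w, v0.copy(), clamped)

    # Free phase: only inputs fixed; outputs and hidden units settle.
    free = np.zeros(n, dtype=bool)
    free[:n_in] = True
    v_f = settle(w, np.where(free, v0, 0.0), free)

    w = mft_update(w, v_c, v_f)  # one contrastive learning step

Repeating such steps over a training set, and reading the output units after a free-phase settling, is what the paper benchmarks against back-propagation; the bidirectional (symmetric) connectivity is also what lets the same network be run as a content addressable memory.
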
author
Peterson, Carsten and Hartman, Eric
publishing date
1989
type
Contribution to journal
publication status
published
subject
keywords
Bidirectional, Content addressable memory, Generalization, Learning algorithm, Mean field theory, Neural network
in
Neural Networks
volume
2
issue
6
pages
20 pages
publisher
Elsevier
external identifiers
  • scopus:0024901271
ISSN
0893-6080
DOI
10.1016/0893-6080(89)90045-2
language
English
LU publication?
no
id
4dfa588d-1062-4561-a708-17992994cd24
date added to LUP
2019-05-15 07:57:28
date last changed
2021-01-03 11:09:55
@article{4dfa588d-1062-4561-a708-17992994cd24,
  abstract     = {{The mean field theory (MFT) learning algorithm is elaborated and explored with respect to a variety of tasks. MFT is benchmarked against the back-propagation learning algorithm (BP) on two different feature recognition problems: two-dimensional mirror symmetry and multidimensional statistical pattern classification. We find that while the two algorithms are very similar with respect to generalization properties, MFT normally requires a substantially smaller number of training epochs than BP. Since the MFT model is bidirectional, rather than feed-forward, its use can be extended naturally from purely functional mappings to a content addressable memory. A network with N visible and N hidden units can store up to approximately 4N patterns with good content-addressability. We stress an implementational advantage for MFT: it is natural for VLSI circuitry.}},
  author       = {{Peterson, Carsten and Hartman, Eric}},
  issn         = {{0893-6080}},
  keywords     = {{Bidirectional; Content addressable memory; Generalization; Learning algorithm; Mean field theory; Neural network}},
  language     = {{eng}},
  number       = {{6}},
  pages        = {{475--494}},
  publisher    = {{Elsevier}},
  journal      = {{Neural Networks}},
  title        = {{Explorations of the mean field theory learning algorithm}},
  url          = {{http://dx.doi.org/10.1016/0893-6080(89)90045-2}},
  doi          = {{10.1016/0893-6080(89)90045-2}},
  volume       = {{2}},
  year         = {{1989}},
}