
Explorations of the mean field theory learning algorithm
Author
Carsten Peterson
Summary, in English
The mean field theory (MFT) learning algorithm is elaborated and explored with respect to a variety of tasks. MFT is benchmarked against the back-propagation learning algorithm (BP) on two different feature recognition problems: two-dimensional mirror symmetry and multidimensional statistical pattern classification. We find that while the two algorithms are very similar with respect to generalization properties, MFT normally requires a substantially smaller number of training epochs than BP. Since the MFT model is bidirectional, rather than feed-forward, its use can be extended naturally from purely functional mappings to a content-addressable memory. A network with N visible and N hidden units can store up to approximately 4N patterns with good content-addressability. We stress an implementational advantage for MFT: it is natural for VLSI circuitry.
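As a rough illustration of the machinery the summary refers to, the MFT algorithm replaces stochastic Boltzmann-machine sampling with deterministic mean field equations, V_i = tanh((1/T) Σ_j w_ij V_j), solved by iteration in a clamped and a free phase, followed by a contrastive weight update. The sketch below is a minimal interpretation under these assumptions, not the paper's implementation; all function names and parameters are hypothetical.

```python
import numpy as np

def mft_settle(w, v0, clamped, T=1.0, iters=50):
    """Iterate the mean field equations V_i = tanh((1/T) * sum_j w_ij V_j)
    for the unclamped units; clamped units keep their initial values."""
    v = v0.copy()
    for _ in range(iters):
        v_new = np.tanh((w @ v) / T)
        v[~clamped] = v_new[~clamped]  # only free units are updated
    return v

def mft_weight_update(w, v_plus, v_minus, eta=0.1):
    """Contrastive update: unit correlations in the clamped ("plus") phase
    minus correlations in the free ("minus") phase."""
    dw = eta * (np.outer(v_plus, v_plus) - np.outer(v_minus, v_minus))
    np.fill_diagonal(dw, 0.0)  # no self-connections
    return w + dw
```

Because the connection matrix is symmetric and updates use correlations in both directions, the same network that learns an input-output mapping can also be run with partial patterns clamped, which is what makes the content-addressable-memory use described above natural.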
Publication year
1989
Language
English
Pages
475-494
Publication/Journal/Series
Neural Networks
Volume
2
Issue
6
Document type
Journal article
Publisher
Elsevier
Subject
- Other Physics Topics
Keywords
- Bidirectional
- Content addressable memory
- Generalization
- Learning algorithm
- Mean field theory
- Neural network
Status
Published
ISBN/ISSN/Other
- ISSN: 0893-6080