

Explorations of the mean field theory learning algorithm

Authors

  • Carsten Peterson
  • Eric Hartman

Summary, in English

The mean field theory (MFT) learning algorithm is elaborated and explored with respect to a variety of tasks. MFT is benchmarked against the back-propagation learning algorithm (BP) on two different feature recognition problems: two-dimensional mirror symmetry and multidimensional statistical pattern classification. We find that while the two algorithms are very similar with respect to generalization properties, MFT normally requires a substantially smaller number of training epochs than BP. Since the MFT model is bidirectional, rather than feed-forward, its use can be extended naturally from purely functional mappings to a content addressable memory. A network with N visible and N hidden units can store up to approximately 4N patterns with good content-addressability. We stress an implementational advantage for MFT: it is natural for VLSI circuitry.
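The bidirectional, settling character of MFT described above can be illustrated with the deterministic mean field equations, in which each unit relaxes toward V_i = tanh((Σ_j w_ij V_j)/T). The sketch below is a minimal demonstration of content-addressable recall under those equations, not the paper's learning algorithm: the Hebbian outer-product weights, temperature, and iteration count are assumptions chosen for a toy example.

```python
import numpy as np

# Deterministic mean-field relaxation for a symmetric network:
# iterate V_i = tanh( (sum_j W_ij V_j) / T ) for the free units,
# keeping clamped units fixed (content-addressable recall).
def mft_relax(W, V_init, clamped, T=0.5, iters=100):
    V = V_init.astype(float).copy()
    free = ~clamped
    for _ in range(iters):
        V[free] = np.tanh((W[free] @ V) / T)
    return V

# Toy demo: store one +/-1 pattern with a Hebbian outer product
# (an illustrative assumption, not the MFT learning rule itself).
p = np.array([1.0, 1.0, -1.0, -1.0])
W = np.outer(p, p) / p.size
np.fill_diagonal(W, 0.0)

# Clamp the first two units to the pattern; start the rest at 0.
clamped = np.array([True, True, False, False])
V0 = np.where(clamped, p, 0.0)
V = mft_relax(W, V0, clamped)
```

After relaxation the free units settle with the same signs as the stored pattern, recovering it from the clamped partial cue.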

Publication year

1989

Language

English

Pages

475-494

Publication/Journal/Series

Neural Networks

Volume

2

Issue

6

Document type

Journal article

Publisher

Elsevier

Subject

  • Other Physics Topics

Keywords

  • Bidirectional
  • Content addressable memory
  • Generalization
  • Learning algorithm
  • Mean field theory
  • Neural network

Status

Published

ISBN/ISSN/Other

  • ISSN: 0893-6080