The importance of being synchronous in neural networks
- Authors: Behzad Kamgar-Parsi; Behrooz Kamgar-Parsi
- Publisher: Elsevier Science
- Year: 1988
- Language: English
- Size: 72 KB
- Volume: 1
- Category: Article
- ISSN: 0893-6080
Synopsis
Recurrent neural networks have been used to solve various problems such as combinatorial optimization [1], encoding information in a network with the help of a master network [2], etc. The approach is to cast the problem in terms of an energy function that is then minimized by the corresponding network as it spontaneously evolves from some randomly selected initial state to states of lower energy. The energy function typically has many minima that represent valid solutions to the problem. Deeper minima correspond to good solutions and the deepest minimum to the best solution. The results obtained with networks of analog neurons are found, in practice, to be of much better quality than those obtained with networks of digital (two-state) neurons [1,3]. These two types of networks differ from each other in two respects:
(i) In a network composed of N neurons, the domain of operation of the analog network is the entire volume of the N-dimensional unit cube in the state space. The domain of operation of the digital network, on the other hand, is restricted to the corners of this cube.
(ii) The dynamics of the analog network are synchronous, while the neurons in the digital network are updated asynchronously.
Deeper minima, in general, have larger basins of attraction (a fact that we further confirm here), and analog networks, because they operate in the entire volume of the unit cube, take full advantage of this property. Previously, it was believed that this is the crucial advantage of analog networks over digital networks for finding good solutions [1,3]. Here we show that synchronous updating is equally important in making analog networks more effective. Synchronous updating introduces an element of "collective decision-making" in analog networks, a factor that is not strongly present in digital networks.
We further show that by analyzing the stability of the dynamical fixed points (i.e., the minima of the energy) of the network one can obtain useful information about the adjustable parameters of the energy function, and that by suitably choosing the parameter values one can render the shallower minima (i.e., poor solutions) unstable, so that they are not "found" by the network, thus improving the performance of the network.
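The contrast the synopsis draws can be sketched in code. The following is a minimal illustration, not code from the paper: a small Hopfield-style network with energy E(s) = -1/2 sᵀWs, minimized once by asynchronous digital (two-state) updates and once by synchronous analog dynamics that move through the interior of the unit cube. All names, parameter values, and the tanh activation are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Small symmetric weight matrix with zero diagonal (standard Hopfield setup).
N = 8
W = rng.standard_normal((N, N))
W = (W + W.T) / 2
np.fill_diagonal(W, 0.0)

def energy(s):
    """Energy function E(s) = -1/2 * s^T W s."""
    return -0.5 * s @ W @ s

def async_digital(s, steps=200):
    """Digital (two-state) neurons updated one at a time, in random order.
    Each flip can only lower (or keep) the energy."""
    s = s.copy()
    for _ in range(steps):
        i = rng.integers(N)
        s[i] = 1.0 if W[i] @ s >= 0 else -1.0  # threshold rule
    return s

def sync_analog(v, steps=200, gain=2.0, dt=0.1):
    """Analog neurons: all activations updated together at each step,
    so the state evolves through the interior of the unit cube."""
    u = v.copy()
    for _ in range(steps):
        u += dt * (-u + W @ np.tanh(gain * u))  # synchronous update
    return np.sign(np.tanh(gain * u))           # read out nearest corner

s0 = rng.choice([-1.0, 1.0], size=N)
print("async digital energy:", energy(async_digital(s0)))
print("sync  analog  energy:", energy(sync_analog(0.1 * s0)))
```

The asynchronous rule commits each neuron individually to a corner of the cube, whereas the synchronous analog dynamics let all neurons adjust together before any hard decision is made, which is the "collective decision-making" the synopsis refers to.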