Bounds on the complexity of recurrent neural network implementations of finite state machines
Authors: Bill G. Horne; Don R. Hush
- Publisher: Elsevier Science
- Year: 1996
- Language: English
- File size: 823 KB
- Volume: 9
- Category: Article
- ISSN: 0893-6080
Synopsis
In this paper the efficiency of recurrent neural network implementations of m-state finite state machines will be explored. Specifically, it will be shown that the node complexity for the unrestricted case can be bounded above by O(√m). It will also be shown that the node complexity is O(√(m log m)) when the weights and thresholds are restricted to the set {-1, 1}, and O(m) when the fan-in is restricted to two. Matching lower bounds will be provided for each of these upper bounds, assuming that the state of the FSM can be encoded in a subset of the nodes of size ⌈log m⌉.
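As a rough illustration of the object being counted (this is not the paper's construction), the sketch below simulates a small FSM with hard-threshold units using a naive one-hot state encoding. That scheme needs on the order of m·k units per step for an m-state machine over a k-letter alphabet, which is exactly the kind of cost the paper improves on: the unrestricted O(√m) bound lies far below it. The 3-state transition table `delta` and the helper `fsm_network_step` are invented here for illustration only.

```python
import numpy as np

# Hypothetical 3-state FSM over the binary alphabet {0, 1}; the transition
# table below is arbitrary and only serves as an illustration.
delta = [[1, 2],
         [2, 0],
         [0, 1]]
m, k = 3, 2  # number of states, alphabet size

def fsm_network_step(state_onehot, sym_onehot):
    """One recurrent step built from hard-threshold (Heaviside) units.

    Layer 1: one AND-unit per (state, symbol) pair, firing iff the machine
             is in that state and reads that symbol (weights 1, threshold 2).
    Layer 2: one OR-unit per next state, firing iff any AND-unit whose
             transition leads to it fired (weights 1, threshold 1).
    """
    and_units = np.zeros((m, k), dtype=int)
    for i in range(m):
        for a in range(k):
            # Threshold unit: fires when the weighted sum reaches 2.
            and_units[i, a] = int(state_onehot[i] + sym_onehot[a] >= 2)
    next_onehot = np.zeros(m, dtype=int)
    for i in range(m):
        for a in range(k):
            if and_units[i, a]:
                next_onehot[delta[i][a]] = 1
    return next_onehot

# Drive the network with an input string and check it against a direct
# simulation of the transition table.
string = [1, 0, 1, 1, 0]
state_net = np.eye(m, dtype=int)[0]  # start in state 0, one-hot encoded
state_ref = 0
for sym in string:
    state_net = fsm_network_step(state_net, np.eye(k, dtype=int)[sym])
    state_ref = delta[state_ref][sym]
assert state_net[state_ref] == 1 and state_net.sum() == 1
print("network state (one-hot):", state_net, "| direct simulation:", state_ref)
```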
SIMILAR VOLUMES
We formalize a notion of loading information into connectionist networks that characterizes the training of feed-forward neural networks. This problem is NP-complete, so we look for tractable subcases of the problem by placing constraints on the network architecture. The focus of these constraints is
Within the framework of the hierarchical hybrid control theory presented in (Caines and Wei, IEEE Trans. Automat. Control 43 (4) (1998) 501-508), one may pose the following hybrid modelling problem: for a given finite state machine M, find a state space D, a state space partition, and a controlled O