Automaticity I: Properties of a Measure of Descriptional Complexity
By Jeffrey Shallit; Yuri Breitbart
- Publisher
- Elsevier Science
- Year
- 1996
- Language
- English
- File size
- 668 KB
- Volume
- 53
- Category
- Article
- ISSN
- 0022-0000
Synopsis
Let Σ and Δ be nonempty alphabets with Σ finite. Let f be a function mapping Σ* to Δ. We explore the notion of automaticity, which attempts to model how ``close'' f is to a finite-state function. Formally, the automaticity of f is a function A_f(n) which counts the minimum number of states in any deterministic finite automaton that computes f correctly on all strings of length ≤ n (its behavior on longer strings is not specified). We define A_L(n) for languages L to be A_{χ_L}(n), where χ_L is the characteristic function of L. The same or similar notions were examined previously by Trakhtenbrot, Grinberg and Korshunov, Karp, Breitbart, Gabarró, Dwork and Stockmeyer, and Kaneps and Freivalds. Karp proved that if L ⊆ Σ* is not regular, then A_L(n) ≥ (n+3)/2 infinitely often. We prove that this lower bound is best possible. We obtain results on the growth rate of A_f(n). If |Σ| = k ≥ 2 and |Δ| = l < ∞, then A_f(n) ≤ C(1+o(1)) k^(n+2)/n for C = (log_k l)/(k−1)². Also, for almost all functions f and any ε > 0 we have A_f(n) > (1−ε) C k^(n+1)/n for all sufficiently large n. We also obtain bounds on N_L(n), the nondeterministic automaticity function. This is similar to A_f(n), except that it counts the number of states in the minimal NFA, and it is defined for languages L ⊆ Σ*. For |Σ| = k ≥ 2, we have N_L(n) = O(k^(n/2)). Also, for almost all languages L and every ε > 0 we have N_L(n) > (1−ε) k^(n/2)/√(k−1) for all sufficiently large n. We prove some incomparability results between the automaticity measure and those defined earlier by Gabarró and others. Finally, we examine the notion of automaticity as applied to sequences.
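The definition of A_L(n) can be bracketed computationally. The function `automaticity_bounds` and the level-wise refinement below are an illustrative sketch, not taken from the paper: for two strings x, y of the same length m ≤ n, any DFA that computes χ_L correctly on all strings of length ≤ n must send x and y to distinct states whenever χ_L(xz) ≠ χ_L(yz) for some extension z with |xz| ≤ n, so the largest per-level class count is a lower bound on A_L(n); the automaton whose states are all the per-level classes is correct on length ≤ n, so the sum of the class counts is a (typically loose) upper bound.

```python
# Sketch (assumed helper, not from the paper): bracket A_L(n) for a
# language L over a finite alphabet via level-wise signature classes.
from itertools import product

def automaticity_bounds(chi, alphabet, n):
    """Return (lower, upper) bounds on A_L(n), where chi: str -> bool
    is the characteristic function of L."""
    counts = []
    for m in range(n + 1):
        sigs = set()
        for x in map("".join, product(alphabet, repeat=m)):
            # Signature of x: chi on every extension xz with |xz| <= n.
            # Same-length strings with different signatures must occupy
            # distinct states in any DFA correct on length <= n.
            sig = tuple(
                chi(x + "".join(z))
                for r in range(n - m + 1)
                for z in product(alphabet, repeat=r)
            )
            sigs.add(sig)
        counts.append(len(sigs))
    # max: states forced at a single level; sum: states of the
    # level-quotient automaton, which is correct on length <= n.
    return max(counts), sum(counts)
```

For a regular language such as "even number of 1s", the lower bound already matches the two-state minimal DFA, so A_L(n) = 2 for every n, while the upper bound grows linearly and is far from tight; for non-regular L, Karp's theorem guarantees the true value exceeds (n+3)/2 infinitely often.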
SIMILAR VOLUMES
In this paper, we prove that every recursively enumerable language can be generated by a scattered context grammar with a reduced number of both nonterminals and context-sensing productions.
A statistical measure of complexity utilising the concept of entropy or information is proposed. Our approach in this study is to use a nonextensive entropy instead of an extensive (additive) Shannon entropy in the definition, but it can be characterised as a difference between the qth-order Rényi entropy a…