The Dataflow Time and Space Complexity of FFTs
Authors: A.P.W. Bohm; R.E. Hiromoto
- Publisher
- Elsevier Science
- Year
- 1993
- Language
- English
- File size
- 747 KB
- Volume
- 18
- Category
- Article
- ISSN
- 0743-7315
No payment or registration required. For personal study only.
Synopsis
In this paper we analyze the performance of a recursive and an iterative fast Fourier transform algorithm, written in Id and run on MINT, a simulator for the Monsoon dataflow machine. Our complexity measures are: the number of instructions executed, the critical path length of the dataflow graph, and the storage occupancy. Using a set of simple functions, we calibrate loop and divide-and-conquer behavior of Monsoon and compare it with the "tagged token dataflow architecture." We discuss the issues in explicit resource management in a functional language, introduce the language features required for this, and show results of our resource managed FFT programs. © 1993 Academic Press, Inc.
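The synopsis contrasts a recursive and an iterative FFT formulation. As a rough illustration of the recursive divide-and-conquer structure being analyzed (a generic radix-2 Cooley-Tukey sketch in Python, not the paper's Id code), the recursion splits the input into even- and odd-indexed halves and recombines them with twiddle factors:

```python
import cmath

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT (divide-and-conquer sketch).
    Assumes len(x) is a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    # Conquer the even- and odd-indexed subsequences independently.
    even = fft(x[0::2])
    odd = fft(x[1::2])
    # Combine with twiddle factors e^{-2*pi*i*k/n}.
    result = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        result[k] = even[k] + t
        result[k + n // 2] = even[k] - t
    return result
```

The two recursive calls at each level are independent, which is what exposes the parallelism that a dataflow machine like Monsoon can exploit; the critical path is the depth of this recursion plus the combine loops.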
SIMILAR VOLUMES
The frequency moments of a sequence containing m_i elements of type i, 1 ≤ i ≤ n, are the numbers F_k = Σ_{i=1}^{n} m_i^k. We consider the space complexity of randomized algorithms that approximate the numbers F_k, when the elements of the sequence are given one by one and cannot be stored. Surprisingly, it
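To make the definition above concrete (the abstract concerns streaming approximation; this exact computation with unrestricted storage is only an illustration, and the helper name is hypothetical):

```python
from collections import Counter

def frequency_moment(seq, k):
    """Exact k-th frequency moment F_k = sum over types i of m_i^k,
    where m_i is the number of occurrences of element i in seq."""
    counts = Counter(seq)
    return sum(m ** k for m in counts.values())
```

For example, F_0 is the number of distinct elements, F_1 is the sequence length, and F_2 (the "repeat rate") grows with skew in the distribution.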
A processor is balanced in carrying out a computation if its computing time equals its I/O time. When the computation bandwidth of a processor is increased, as when multiple processors are incorporated to form an array, the critical question is to what degree the processor's memory must be enlarged
A data and error complexity analysis of two algorithms for fast Fourier transforms is presented. The results show that the algorithm using decimation in time is "equivalent" to the one using decimation in frequency. This is also supported by the numerical experiments described.