On the global stabilization of locally convergent algorithms
by E. Polak
- Publisher: Elsevier Science
- Year: 1976
- Language: English
- Size: 476 KB
- Volume: 12
- Category: Article
- ISSN: 0005-1098
Synopsis
It is possible to summarize the schemes, used in the past to stabilize algorithms, in the form of algorithm models. These models serve as patterns for subsequent applications.
Summary: There are a number of algorithms in the literature which both theoretically and empirically are known to be only locally convergent. These include such well-known algorithms as secant, Newton, quasi-Newton and primal-dual algorithms. Locally, these algorithms tend to be highly efficient. Consequently, it is very desirable to find ways of extending or modifying these algorithms so that they become globally convergent while retaining their attractive local properties. This paper describes a set of techniques which have recently emerged for stabilizing such algorithms and illustrates their application by means of a number of examples.
SIMILAR VOLUMES
This article proves that the stability of the shifts of a refinable function vector ensures the convergence of the corresponding cascade algorithm in Sobolev space to which the refinable function vector belongs. An example of Hermite interpolants is presented to illustrate the result.  2002 Elsevie
In multiprocessor systems, iterative algorithms can be implemented synchronously or asynchronously. Unfortunately, few guidelines exist to make a choice. In this paper, we compare the execution times of an asynchronous iterative algorithm and of its synchronous counterpart. Synchronization overhead
The Delayed-x LMS algorithm is a simplified version of the Filtered-x LMS algorithm, in which the model C of the secondary path C from the adaptive filter output to the error sensor is represented by a pure delay of k samples (the delayed model D) in order to reduce system complexity. However, the s
In this paper, we consider the rate of convergence of the parameter estimation error and the cost function for the stochastic gradient-type algorithm. The problem is solved in the case of the minimum-variance stochastic adaptive control. It is proven that the cost function has the rate of convergenc