A new family of conjugate gradient methods
by Zhen-Jun Shi; Jinhua Guo
- Publisher: Elsevier Science
- Year: 2009
- Language: English
- File size: 659 KB
- Volume: 224
- Category: Article
- ISSN: 0377-0427
Synopsis
In this paper we develop a new class of conjugate gradient methods for unconstrained optimization problems. A new nonmonotone line search technique is proposed to guarantee the global convergence of these conjugate gradient methods under some mild conditions. In particular, the Polak-Ribière-Polyak and Liu-Storey conjugate gradient methods are special cases of the new class. By estimating the local Lipschitz constant of the derivative of the objective function, we can find an adequate step size and substantially decrease the number of function evaluations at each iteration. Numerical results show that these new conjugate gradient methods are effective in minimizing large-scale non-convex non-quadratic functions.
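The abstract names three ingredients: a conjugate gradient direction update (with Polak-Ribière-Polyak and Liu-Storey as special cases), a nonmonotone line search, and a step size derived from an estimate of the local Lipschitz constant. The sketch below shows how such pieces typically fit together, using the classical PRP update with a Grippo-Lampariello-Lucidi-style nonmonotone Armijo test. It is an illustration under stated assumptions, not the authors' exact scheme: the parameters `M`, `sigma`, and `rho`, the secant-based Lipschitz estimate, and the restart safeguard are all generic choices.

```python
import numpy as np

def nonmonotone_cg(f, grad, x0, max_iter=1000, tol=1e-6,
                   M=10, sigma=1e-4, rho=0.5):
    """PRP conjugate gradient with a nonmonotone Armijo backtracking
    line search. Illustrative sketch only: M (memory length), sigma,
    and rho are hypothetical tuning parameters, and the Lipschitz-based
    initial step is a common heuristic, not the paper's exact rule."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                       # start with steepest descent
    f_hist = [f(x)]              # recent values for the nonmonotone test
    L = 1.0                      # running estimate of the local Lipschitz constant
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        # Initial trial step from the Lipschitz estimate; for a quadratic
        # model with curvature L this minimizes the model along d.
        alpha = -g.dot(d) / (L * d.dot(d))
        # Nonmonotone Armijo test: compare against the maximum of the
        # last M function values instead of f(x_k) alone.
        f_ref = max(f_hist[-M:])
        while f(x + alpha * d) > f_ref + sigma * alpha * g.dot(d):
            alpha *= rho
        x_new = x + alpha * d
        g_new = grad(x_new)
        # Update the Lipschitz estimate from the observed gradient change.
        L = max(np.linalg.norm(g_new - g) / (alpha * np.linalg.norm(d)), 1e-8)
        # PRP coefficient: beta = g_{k+1}^T (g_{k+1} - g_k) / ||g_k||^2.
        beta = g_new.dot(g_new - g) / g.dot(g)
        d = -g_new + beta * d
        if g_new.dot(d) >= 0:    # safeguard: restart if not a descent direction
            d = -g_new
        x, g = x_new, g_new
        f_hist.append(f(x))
    return x

if __name__ == "__main__":
    # Smoke test on the 10-dimensional Rosenbrock function.
    def rosen(x):
        return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1 - x[:-1]) ** 2)

    def rosen_grad(x):
        g = np.zeros_like(x)
        g[:-1] = -400.0 * x[:-1] * (x[1:] - x[:-1] ** 2) - 2.0 * (1 - x[:-1])
        g[1:] += 200.0 * (x[1:] - x[:-1] ** 2)
        return g

    print(nonmonotone_cg(rosen, rosen_grad, np.zeros(10)))
```

In this arrangement the Lipschitz estimate supplies the initial trial step, so the backtracking loop typically accepts it after few (often zero) reductions; that is the mechanism the abstract credits for substantially reducing function evaluations per iteration.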