Nonlinear optimal controls
- Authors
- A.A. Goldstein; J.S. Meditch
- Publisher
- Elsevier Science
- Year
- 1970
- Language
- English
- File size
- 296 KB
- Volume
- 4
- Category
- Article
- ISSN
- 0022-0000
Synopsis
In this paper we state a necessary condition for optimality for unconstrained controls. For certain systems a constructive proof is given showing the existence of controls satisfying this condition.

I. STATIONARY CONTROLS

For the system

$\dot{x}(t) = f(x(t), u(t), t)$  (1)

on the fixed time interval $[0, T]$ with fixed boundary conditions $x(0)$ and $x(T) \in E^n$, assume that there exists a continuous control $u$, $u(t) \in E^r$, such that

$\int_0^T g(u(t)) \, dt$

is minimized. Assume further that $f$ has partial Jacobians with respect to $x$ and $u$, denoted respectively by $J_x(x(t), u(t), t)$ and $J_u(x(t), u(t), t)$, and let $X_u(t)$ denote the fundamental matrix of the system
$\dot{y}(t) = J_x(x(t), u(t), t) \, y(t)$  (2)

with $u$ fixed, where $x(t)$ is the corresponding solution of (1), and $x_0(t)$ the solution of (2).
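As a concrete illustration (not taken from the paper), the setup above can be sketched numerically: integrate the state equation (1) forward under a fixed control, accumulate the cost $\int_0^T g(u(t))\,dt$, and propagate the fundamental matrix of the linearized system (2) by solving $\dot{X} = J_x X$ with $X(0) = I$. The scalar dynamics $f(x,u,t) = -x + u$, the cost $g(u) = u^2$, the constant control, and the step count are all illustrative assumptions.

```python
# Hedged sketch: forward-Euler integration of the state equation (1),
# the cost functional, and the fundamental matrix of the variational
# system (2). The choices f(x,u,t) = -x + u, g(u) = u**2, and the
# control u(t) = 1 are assumptions for the example, not from the paper.

def f(x, u, t):          # right-hand side of (1), scalar for simplicity
    return -x + u

def J_x(x, u, t):        # partial Jacobian of f with respect to x
    return -1.0

def g(u):                # integrand of the cost functional
    return u * u

def simulate(x0, u, T, n=1000):
    """Return (x(T), cost, X_u(T)) for the control u on [0, T]."""
    dt = T / n
    x, cost, X = x0, 0.0, 1.0   # X is the fundamental matrix, X(0) = I
    for k in range(n):
        t = k * dt
        uk = u(t)
        cost += g(uk) * dt             # Riemann sum for the integral of g(u)
        X += J_x(x, uk, t) * X * dt    # variational system (2): X' = J_x X
        x += f(x, uk, t) * dt          # state equation (1)
    return x, cost, X

xT, cost, XT = simulate(x0=0.0, u=lambda t: 1.0, T=1.0)
# With these choices, xT is near 1 - e^{-1} and XT near e^{-1}.
```

A production version would use an adaptive ODE solver rather than fixed-step Euler; the point here is only to make the objects in the text (state, cost, fundamental matrix) concrete.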
We denote by $U$ the Hilbert space of $r$-tuples of square-integrable functions $u$ on $[0, T]$, with norm $\| u \|^2 = \int_0^T | u(t) |^2 \, dt$, where $| u(t) |^2 = [u(t), u(t)]$. By $L$ we denote the Hilbert space of $n$-tuples of such functions normed as above.
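The norm on $U$ can be illustrated with a simple Riemann-sum approximation; the sample control $u(t) = (\sin t, \cos t)$ and the grid size below are assumptions for the example, chosen so the exact value of $\|u\|^2$ over $[0, T]$ is known to be $T$.

```python
# Hedged sketch: approximate the U-norm ||u||^2 = integral of |u(t)|^2 dt
# for an r-tuple control, where |u(t)|^2 = [u(t), u(t)] is the Euclidean
# inner product. The control and grid size are illustrative assumptions.
import math

def u(t):                      # an r-tuple control with r = 2 (assumption)
    return (math.sin(t), math.cos(t))

def norm_squared(u, T, n=100_000):
    """Riemann-sum approximation of ||u||^2 on [0, T]."""
    dt = T / n
    total = 0.0
    for k in range(n):
        ut = u(k * dt)
        total += sum(c * c for c in ut) * dt   # |u(t)|^2 = [u(t), u(t)]
    return total

val = norm_squared(u, T=2.0)   # sin^2 + cos^2 = 1, so this is close to T = 2
```

The same loop with $n$-tuples in place of $r$-tuples computes the norm on $L$.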