Bellman's dynamic programming equation for the optimal index and control law for stochastic control problems is a parabolic or elliptic partial differential equation frequently defined in an unbounded domain. Existing methods of solution require bounded domain approximations, the application of sing
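For context, the dynamic programming equation referred to above is, in its stationary (elliptic) case, commonly written in the following standard Hamilton–Jacobi–Bellman form; this is a generic textbook sketch, not quoted from the article, with placeholder drift $b(x,u)$, diffusion matrix $\sigma(x,u)$, running cost $f(x,u)$, and discount rate $\beta$:

```latex
% Stationary HJB equation for the value function V(x) on an (often unbounded) domain.
% b, \sigma, f, and \beta are generic placeholders, not notation from the article.
\[
  \beta\, V(x) \;=\; \min_{u}\Bigl[\, f(x,u)
    \;+\; b(x,u)\cdot\nabla V(x)
    \;+\; \tfrac{1}{2}\,\operatorname{tr}\!\bigl(\sigma(x,u)\,\sigma(x,u)^{\top}\,\nabla^{2}V(x)\bigr) \Bigr].
\]
```

The time-dependent version adds a $\partial V/\partial t$ term, which gives the parabolic case mentioned in the abstract.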
Maximum entropy solutions and moment problem in unbounded domains
- Author
- A. Tagliani
- Publisher
- Elsevier Science
- Year
- 2003
- Language
- English
- File size
- 324 KB
- Volume
- 16
- Category
- Article
- ISSN
- 0893-9659
SIMILAR VOLUMES
Solution of the Stochastic control probl
Prentiss Robinson; John Moore · Article · 1973 · Elsevier Science · English · 541 KB
On the behavior of solutions to the Neum
A. I. Ibragimov; E. M. Landis · Article · 1997 · Springer US · English · 587 KB
A finite element solution of diffraction
A. Jami; M. Polyzakis · Article · 1981 · Elsevier Science · English · 1013 KB
Elliptic and parabolic problems in unbou
Patrick Guidotti · Article · 2004 · John Wiley and Sons · English · 181 KB
## Abstract
We consider elliptic and parabolic problems in unbounded domains. We give general existence and regularity results in Besov spaces and semi-explicit representation formulas via operator-valued fundamental solutions which turn out to be a powerful tool to derive a series of qualitative r
The total variation of solutions of para
R. M. Redheffer; W. Walter · Article · 1974 · Springer · English · 566 KB
Maximum-entropy and Bayesian methods in
Article · 1990 · Springer Netherlands · English · 179 KB