On piecewise deterministic Markov control processes: Control of jumps and of risk processes in insurance
✍ Author: Manfred Schäl
- Publisher
- Elsevier Science
- Year
- 1998
- Language
- English
- File size
- 887 KB
- Volume
- 22
- Category
- Article
- ISSN
- 0167-6687
✦ Synopsis
Dynamic programming for piecewise deterministic Markov processes is studied where only the jumps, but not the deterministic flow, can be controlled. In this setting one can dispense with relaxed controls, and there exists an optimal stationary policy of feedback form. Further, a piecewise deterministic Markov model for the control of dividend pay-out and reinsurance is introduced. This model can be transformed into a model with uncontrolled flow. It is shown that a classical solution to the Bellman equation exists and that a non-relaxed optimal policy of feedback form can be obtained via the Bellman equation. Lipschitz continuity of the one-dimensional vector field defining the controlled flow is replaced by strict positivity.
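To indicate the kind of equation involved, a generic discounted Bellman (Hamilton–Jacobi–Bellman) equation for a piecewise deterministic Markov process in which only the jump mechanism is controlled can be sketched as follows. This is an illustrative template, not the paper's exact notation: the vector field f, value function v, action set A(x), jump intensity λ, post-jump kernel Q, reward rate r, and discount rate β are all assumed symbols.

```latex
% Generic HJB equation for a PDMP with controlled jumps only.
% All symbols are illustrative assumptions, not the paper's notation:
%   f(x)            - the (uncontrolled) vector field driving the flow,
%   v               - the value function, differentiable along the flow,
%   a \in A(x)      - the jump-control action,
%   \lambda(x,a)    - controlled jump intensity,
%   Q(dy \mid x,a)  - controlled post-jump distribution,
%   r(x,a)          - running reward, \beta > 0 the discount rate.
\[
  \beta\, v(x) \;=\; \sup_{a \in A(x)}
  \Bigl\{ r(x,a) \;+\; f(x)\, v'(x)
  \;+\; \lambda(x,a) \int \bigl( v(y) - v(x) \bigr)\, Q(dy \mid x,a) \Bigr\}.
\]
```

Between jumps the state drifts along the uncontrolled flow (the f(x) v'(x) term), while the supremum acts only through the jump intensity and the post-jump distribution, which is what allows an optimal non-relaxed feedback policy in this setting.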