Search results for: bellman zadehs principle

Number of results: 157398

Journal: :SIAM J. Control and Optimization 2013
Yann-Shin Aaron Chen Xin Guo

This paper analyzes a class of impulse control problems for multidimensional jump diffusions in the finite time horizon. Following the basic mathematical setup from Stroock and Varadhan [Multidimensional Diffusion Processes, Springer-Verlag, Heidelberg, 2006], this paper first establishes rigorously an appropriate form of the dynamic programming principle. It then shows that the value function ...

Journal: :Math. Meth. of OR 2005
Ralf Korn Olaf Menkens

We consider the determination of portfolio processes yielding the highest worst-case bound for the expected utility from final wealth if the stock price may have uncertain (down) jumps. The optimal portfolios are derived as solutions of non-linear differential equations which themselves are consequences of a Bellman principle for worst-case bounds. A particular application of our setting is to mode...

2013
Mishari Al-Foraih Paul V. Johnson Geoff Evatt Peter W. Duck

Private sector operators of response services such as ambulance, fire, or police are often regulated by targets on the distribution of response times. This may result in inefficient overstaffing to ensure those targets are met. In this paper, we use a network chain of M/M/K queues to model the arrival and completion of jobs on the system so that quantities such as the expected total time wa...
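As a minimal illustration of the kind of queueing quantity the abstract above mentions (the expected waiting time in a single M/M/K node), here is a sketch using the standard Erlang C formula; the arrival rate, service rate, and server count are hypothetical, and this is not the paper's network-chain model.

```python
from math import factorial

def erlang_c(lam, mu, k):
    """Probability that an arriving job must wait in an M/M/K queue."""
    a = lam / mu  # offered load in Erlangs
    assert a < k, "queue must be stable (lambda < k * mu)"
    top = (a ** k / factorial(k)) * (k / (k - a))
    bottom = sum(a ** n / factorial(n) for n in range(k)) + top
    return top / bottom

def expected_wait(lam, mu, k):
    """Mean time a job spends in the queue before service starts."""
    return erlang_c(lam, mu, k) / (k * mu - lam)

# Hypothetical numbers: 4 calls/hour, mean service 15 min (mu = 4/hour), 2 servers.
w = expected_wait(lam=4.0, mu=4.0, k=2)
```

For k = 1 the formula reduces to the familiar M/M/1 result, which gives a quick sanity check.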

Journal: :SIAM J. Financial Math. 2010
Volker Krätschmer John Schoenmakers

In this paper we consider the optimal stopping problem for general dynamic monetary utility functionals. Sufficient conditions for the Bellman principle and the existence of optimal stopping times are provided. Particular attention is paid to representations which allow for a numerical treatment in real situations. To this end, generalizations of standard evaluation methods like policy iterati...
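The backward induction underlying a Bellman principle for optimal stopping can be sketched on a toy example: the value process satisfies V_T = g(X_T) and V_t = max(g(X_t), E[V_{t+1} | X_t]). The grid, payoff, and symmetric random walk below are assumptions for illustration, not the paper's monetary utility functionals.

```python
import numpy as np

# Toy optimal stopping of a symmetric +-1 random walk by backward induction:
# V_T = g(X_T),  V_t = max(g(X_t), E[V_{t+1} | X_t]).
T = 3
prices = np.arange(-3, 4, dtype=float)        # state grid
payoff = np.maximum(1.0 - prices, 0.0)        # example payoff g(x) = (1 - x)^+

V = payoff.copy()                             # terminal condition V_T = g
for t in range(T - 1, -1, -1):
    cont = np.empty_like(V)
    cont[1:-1] = 0.5 * (V[2:] + V[:-2])       # E[V_{t+1} | X_t] for the walk
    cont[0], cont[-1] = V[1], V[-2]           # reflecting boundary (assumption)
    V = np.maximum(payoff, cont)              # stop if payoff beats continuation
```

By construction the resulting value dominates the payoff everywhere, and it exceeds it strictly wherever continuing is optimal.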

2015
Maurizio Falcone Dante Kalise Axel Kröner

In this paper we consider a semi-Lagrangian scheme for minimum time problems with L-penalization. The minimum time function of the penalized control problem can be characterized as the solution of a Hamilton-Jacobi Bellman (HJB) equation. Furthermore, the minimum time converges with respect to the penalization parameter to the minimum time of the non-penalized problem. To solve the control prob...

2015
Thuy T. T. Le

We introduce a new formulation of the minimum time problem in which we employ the signed minimum time function, positive outside the target, negative in its interior, and zero on its boundary. Under some standard assumptions, we prove the so-called Bridge Dynamic Programming Principle (BDPP), which is a relation between the value functions defined on the complement of the target and in its inte...

2006
Mou-Hsiung Chang Tao Pang Moustapha Pemy

This paper treats a finite time horizon optimal control problem in which the controlled state dynamics is governed by a general system of stochastic functional differential equations with a bounded memory. An infinite-dimensional HJB equation is derived using a Bellman-type dynamic programming principle. It is shown that the value function is the unique viscosity solution of the HJB equation. I...

Journal: :SIAM J. Scientific Computing 2016
Dante Kalise Axel Kröner Karl Kunisch

The numerical realization of the dynamic programming principle for continuous-time optimal control leads to nonlinear Hamilton-Jacobi-Bellman equations which require the minimization of a nonlinear mapping over the set of admissible controls. This minimization is often performed by comparison over a finite number of elements of the control set. In this paper we demonstrate the importance of an ...
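The "comparison over a finite number of elements of the control set" described above can be sketched with a toy value iteration; the dynamics, running cost, grids, and discount factor below are made-up illustrations, not the paper's scheme.

```python
import numpy as np

# Toy discounted control problem on a 1-D grid: at each state, the minimum
# over controls is taken by direct comparison over a finite control set.
xs = np.linspace(-1.0, 1.0, 101)           # state grid
controls = np.array([-1.0, 0.0, 1.0])      # finite control set
dt = 0.05
gamma = np.exp(-dt)                        # discount factor over one step

def step(x, u):
    """Simple illustrative dynamics x' = x + u*dt, clipped to the domain."""
    return np.clip(x + dt * u, -1.0, 1.0)

V = np.zeros_like(xs)
for _ in range(500):
    # For each control: running cost plus discounted interpolated value
    # at the successor state; then minimize by comparison across controls.
    Q = np.stack([dt * xs**2 + gamma * np.interp(step(xs, u), xs, V)
                  for u in controls])
    V = Q.min(axis=0)
```

With running cost x^2, staying at the origin is free, so the computed value vanishes at x = 0 and grows away from it.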

2010

In many applications (engineering, management, economy) one is led to control problems for stochastic systems: more precisely, the state of the system is assumed to be described by the solution of stochastic differential equations and the control enters the coefficients of the equation. Using the dynamic programming principle R. E. Bellman [6] explained why, at least heuristically, the optimal cos...
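Bellman's dynamic programming principle, as invoked above, says that the optimal cost from a state equals the best one-step cost plus the optimal cost from the successor state. It can be sketched on a small deterministic shortest-path example (the graph and costs are invented for illustration):

```python
# Bellman recursion on a tiny DAG with target node "D":
# V(x) = min over edges (x -> y) of [ cost(x, y) + V(y) ],  V(D) = 0.
edges = {
    "A": {"B": 1.0, "C": 4.0},
    "B": {"C": 2.0, "D": 6.0},
    "C": {"D": 3.0},
    "D": {},  # target node
}

def value(node, memo=None):
    """Optimal cost-to-go from `node` to "D" via the Bellman recursion."""
    if memo is None:
        memo = {}
    if node == "D":
        return 0.0
    if node not in memo:
        memo[node] = min(c + value(nxt, memo)
                         for nxt, c in edges[node].items())
    return memo[node]
```

Here the optimal route A -> B -> C -> D costs 1 + 2 + 3 = 6, beating the direct A -> C -> D route of cost 7.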

2014
A. Kröner K. Kunisch H. Zidani

An optimal finite-time horizon feedback control problem for (semi-linear) wave equations is presented. The feedback law can be derived from the dynamic programming principle and requires solving the evolutionary Hamilton-Jacobi-Bellman (HJB) equation. Classical discretization methods based on finite elements lead to approximated problems governed by ODEs in high dimensional spaces which makes ...
