Stochastic optimal control of a two-dimensional dynamical system
Citation (SM ISO690:2012):
LEFEBVRE, Mario. Stochastic optimal control of a two-dimensional dynamical system. In: Electronics, Communications and Computing, Edition 10, 23-26 October 2019, Chişinău. Chişinău, Republic of Moldova: 2019, Edition 10, p. 38. ISBN 978-9975-108-84-3.
Electronics, Communications and Computing
Edition 10, 2019
Conference "Electronics, Communications and Computing"
Edition 10, Chişinău, Moldova, 23-26 October 2019

Pp. 38-38

Lefebvre Mario
 
Polytechnique Montréal
 
 
Available in IBN: 7 November 2019


Abstract

We consider a controlled two-dimensional dynamical system for the state variables x(t) and y(t), driven by a standard Brownian motion B(t), in which k is a positive constant, u(t) is the control variable and v[x(t), y(t)] is a positive function. Our aim is to find the value u of the control variable that minimizes the expected value of the cost criterion

J = \int_0^{T} \left\{ \tfrac{1}{2}\, q[x(t), y(t)]\, u^{2}(t) + \lambda \right\} dt,

where q[x(t), y(t)] is a positive function and \lambda is a negative constant. Hence, the aim is to maximize the expected survival time of the process in the interval (0, d), taking the quadratic control costs into account. This type of optimal control problem, for which the final time T is a random variable, has been termed LQG homing by Whittle (1982). These problems are generally very difficult to solve explicitly, especially in two or more dimensions. LQG homing problems have been considered, in particular, by Lefebvre and Zitouni (2014) and by Makasu (2013), who explicitly solved a two-dimensional problem. In this paper, exact and explicit solutions are found in particular cases by making use of the method of similarity solutions to solve the partial differential equation satisfied by the value function.
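The abstract does not reproduce the system equations, so the following is only a minimal LaTeX sketch of how the value function and its partial differential equation typically arise in an LQG homing problem of this type. The dynamics written in the comments (a drift f[x, y] for x(t), with the control term k u(t) and the noise coefficient v[x, y] placed in the equation for y(t)) are illustrative assumptions, not the author's exact model; only the cost structure and the survival interval (0, d) follow the text above.

\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Illustrative sketch only. Assumed (not confirmed) dynamics:
%   dx(t) = f[x(t), y(t)] dt,
%   dy(t) = k u(t) dt + sqrt(v[x(t), y(t)]) dB(t).
The value function is
\[
  F(x, y) = \inf_{u}\, \mathbb{E}\!\left[ \int_0^{T}
    \Bigl( \tfrac{1}{2}\, q[x(t), y(t)]\, u^{2}(t) + \lambda \Bigr) dt
    \;\Big|\; x(0) = x,\ y(0) = y \right],
\]
% where T is the first exit time of x(t) from the interval (0, d).
and dynamic programming gives the Hamilton--Jacobi--Bellman equation
\[
  0 = \min_{u} \Bigl\{ \tfrac{1}{2}\, q\, u^{2} + \lambda
      + f\, F_{x} + k\, u\, F_{y} + \tfrac{1}{2}\, v\, F_{yy} \Bigr\},
  \qquad u^{*} = -\frac{k}{q}\, F_{y},
\]
so that $F$ satisfies the nonlinear partial differential equation
\[
  \lambda + f\, F_{x} - \frac{k^{2}}{2q}\, F_{y}^{2}
  + \tfrac{1}{2}\, v\, F_{yy} = 0 .
\]
% A similarity solution F(x, y) = G(z(x, y)) reduces this PDE to an
% ordinary differential equation in particular cases.
\end{document}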

Keywords
dynamic programming, Brownian motion, first-passage time, partial differential equations, error function