SM ISO 690:2012 — LEFEBVRE, Mario. Stochastic optimal control of a two-dimensional dynamical system. In: Electronics, Communications and Computing, 10th edition, 23-26 October 2019, Chişinău. Chişinău, Republic of Moldova, 2019, p. 38. ISBN 9789975108843.
Electronics, Communications and Computing, 10th edition, 2019

Conferința "Electronics, Communications and Computing" 10, Chişinău, Moldova, 2326 octombrie 2019  


Page: 38



Abstract
We consider a controlled two-dimensional dynamical system in which k is a positive constant, u(t) is the control variable, v[x(t), y(t)] is a positive function, and B(t) is a standard Brownian motion. Our aim is to find the value u of the control variable that minimizes the expected value of a cost criterion defined up to the random final time T, in which q[x(t), y(t)] is a positive function multiplying the quadratic control costs and the constant running cost is negative. Hence, the aim is to maximize the expected survival time in the interval (0, d), taking the quadratic control costs into account. This type of optimal control problem, for which the final time is a random variable, was termed LQG homing by Whittle (1982). Such problems are generally very difficult to solve explicitly, especially in two or more dimensions. LQG homing problems have been considered, in particular, by Lefebvre and Zitouni (2014) and by Makasu (2013), who solved a two-dimensional problem explicitly. In this paper, exact and explicit solutions are found in particular cases by using the method of similarity solutions to solve the partial differential equation satisfied by the value function.
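The displayed equations of the abstract were lost in extraction and are not recoverable from this record. Purely as an illustrative sketch, an LQG homing problem of the kind described (quadratic control costs, negative running cost, random exit time) can be written as follows; the specific drift/diffusion structure of the system and the symbol λ for the negative constant are assumptions, not taken from the paper:

```latex
% Assumed illustrative form only; the paper's actual equations
% are not reproduced in this record.
\begin{aligned}
  \mathrm{d}x(t) &= y(t)\,\mathrm{d}t, \\
  \mathrm{d}y(t) &= k\,u(t)\,\mathrm{d}t
                  + \{v[x(t),y(t)]\}^{1/2}\,\mathrm{d}B(t),
\end{aligned}
\qquad
J(u) = \mathbb{E}\!\left[\int_0^{T}
  \Bigl(\tfrac{1}{2}\,q[x(t),y(t)]\,u^{2}(t) + \lambda\Bigr)
  \mathrm{d}t\right],
\quad \lambda < 0,
```

where T denotes the first time the process leaves the interval (0, d). With λ < 0, minimizing J(u) trades a longer expected survival time against the accumulated quadratic control cost, which matches the objective stated in the abstract.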

Keywords: dynamic programming, Brownian motion, first-passage time, partial differential equations, error function

