Survival maximization for time series
Citation (SM ISO690:2012):
LEFEBVRE, Mario. Survival maximization for time series. In: Mathematics and Information Technologies: Research and Education, Ed. 2021, 1-3 July 2021, Chişinău. Chișinău, Republic of Moldova: 2021, pp. 50-51.
Mathematics and Information Technologies: Research and Education 2021
Conference "Mathematics and Information Technologies: Research and Education"
2021, Chişinău, Moldova, 1-3 July 2021



Pages 50-51

Lefebvre Mario
 
Polytechnique Montréal
 
 
Available in IBN: 30 June 2021


Abstract

We consider the controlled discrete-time, continuous-state stochastic process {X_n, n = 0, 1, ...} defined by Eq. (1) for n ∈ {0, 1, ...}, where μ, α ∈ ℝ, b > 0, u_n ∈ {−1, 0, 1} is the control variable, and {ε_1, ε_2, ...} are independent and identically distributed continuous random variables with zero mean and finite variance σ². Moreover, ε_{n+1} is independent of {X_0, ..., X_n}. The process {X_n, n = 0, 1, ...} is therefore a controlled autoregressive process. If α ≠ 0, it is an AR(1) process; when α = 0, it reduces to an AR(0) process, that is, a white noise process. In particular, ε_{n+1} can have a Gaussian distribution, so that {ε_1, ε_2, ...} is Gaussian white noise, or be uniformly distributed on the interval [−c, c]. If α ∈ (0, 1), then {X_n, n = 0, 1, ...} can be considered as a discrete version of a controlled Ornstein-Uhlenbeck process.

Assume that X_0 = x ∈ [−d, d] and define T(x) as the first-passage time of the process out of the interval (−d, d). Our aim is to find the control u*_n that minimizes the expected value of the cost function, in which q > 0 and λ ≠ 0 are constants. Hence, when λ > 0, the optimizer tries to minimize the (expected) time spent by the controlled process in the continuation region C := (−d, d), while the aim is to maximize the expected survival time in C when λ < 0. In both cases, the quadratic control costs must of course be taken into account. This type of problem is known as homing.

To solve homing problems in continuous time, one can make use of dynamic programming to obtain the equation satisfied by the value function F(x), where the infimum defining F(x) is taken over all admissible values of the control variable in the time interval [0, T(x)). In discrete time, the control u(t) is replaced by u_n, for n ∈ {0, ..., T(x) − 1}. It can be shown that, in the continuous homing problem, it is sometimes possible to transform the non-linear partial differential equation satisfied by F(x) into a linear equation, which is actually the Kolmogorov backward equation satisfied by a certain mathematical expectation for the corresponding uncontrolled process. However, in discrete time it is not possible to reduce the optimal control problem to a purely probabilistic problem.

In this paper, the dynamic programming equation satisfied by the value function F(x) will first be given. Then, the cases when the parameter α in Eq. (1) is equal to 0, 1/2 or 1 will be considered. The optimal control will be computed explicitly, either exactly or approximately, in particular problems.
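
The displayed formulas referred to in the abstract are not reproduced above. As a non-authoritative sketch consistent with the quantities the abstract names (μ, α, b, u_n, ε_{n+1}, q, λ, d), Eq. (1) and the associated first-passage time, cost function and value function could be written as follows; the exact coefficients and notation are assumptions, not the paper's own displays.

% Sketch under assumptions; the precise definitions are those of the full paper.
\begin{align}
X_{n+1} &= \mu + \alpha X_n + b\, u_n + \epsilon_{n+1}, \qquad n = 0, 1, \ldots \tag{1} \\
T(x)    &= \min\{\, n \ge 1 : X_n \notin (-d, d) \mid X_0 = x \,\}, \notag \\
J(x)    &= \sum_{n=0}^{T(x)-1} \bigl( q\, u_n^{2} + \lambda \bigr), \notag \\
F(x)    &= \inf_{u_0,\, u_1,\, \ldots}\, E\bigl[ J(x) \bigr]. \notag
\end{align}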
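
To make the dynamic programming step concrete, the following Python sketch (not taken from the paper) solves the discrete-time dynamic programming equation by value iteration on a discretized state grid, under the model and cost assumed in the sketch above, with Gaussian noise and λ > 0; every parameter value and the discretization scheme are illustrative assumptions.

import numpy as np

# Hypothetical illustration of the dynamic programming equation assumed above:
#   F(x) = min_{u in {-1, 0, 1}} { q*u^2 + lambda + E[ F(mu + alpha*x + b*u + eps) ] }  for |x| < d,
#   F(x) = 0                                                                            for |x| >= d,
# with eps ~ N(0, sigma^2). Parameter values below are assumptions, not the paper's.
mu, alpha, b = 0.0, 0.5, 1.0
q, lam = 1.0, 1.0            # lam > 0: the optimizer wants to leave C = (-d, d) quickly
d, sigma = 2.0, 1.0
controls = (-1.0, 0.0, 1.0)

# State grid covering C plus a margin, so post-transition states outside C are represented.
x_grid = np.linspace(-d - 4 * sigma, d + 4 * sigma, 801)
inside = np.abs(x_grid) < d

# Simple discretization of the Gaussian noise on a finite set of nodes.
eps_nodes = np.linspace(-4 * sigma, 4 * sigma, 41)
eps_probs = np.exp(-eps_nodes ** 2 / (2 * sigma ** 2))
eps_probs /= eps_probs.sum()

F = np.zeros_like(x_grid)        # boundary condition: no further cost once outside C
for _ in range(500):             # value-iteration sweeps
    F_new = np.zeros_like(F)
    for i, x in enumerate(x_grid):
        if not inside[i]:
            continue
        best = np.inf
        for u in controls:
            x_next = mu + alpha * x + b * u + eps_nodes
            cont = np.interp(x_next, x_grid, F)          # continuation values (0 beyond the grid edges)
            best = min(best, q * u ** 2 + lam + eps_probs @ cont)
        F_new[i] = best
    gap = np.max(np.abs(F_new - F))
    F = F_new
    if gap < 1e-8:
        break

# Approximate value and one-step-optimal control at the starting point x = 0.
x0 = 0.0
one_step = [q * u ** 2 + lam + eps_probs @ np.interp(mu + alpha * x0 + b * u + eps_nodes, x_grid, F)
            for u in controls]
print("F(0) ~", float(F[int(np.argmin(np.abs(x_grid - x0)))]))
print("control chosen at x = 0:", controls[int(np.argmin(one_step))])

Taking lam negative in the same recursion corresponds to the survival-maximization case discussed in the abstract, although the iteration then typically needs more sweeps and a wider grid to converge reliably.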