
37 publications found for this scholar

Upload date

March 31, 2021

[Journal Article] Representation of asymptotic values for nonexpansive stochastic control systems

Stochastic Processes and their Applications, 2019, 129(2): 634–673

February 1, 2019

Abstract

In ergodic stochastic problems one studies the limit of the value function Vλ of the associated discounted cost functional with infinite time horizon as the discount factor λ tends to zero. Such problems have been studied extensively in the literature, and the assumptions used there guarantee that λVλ converges uniformly to a constant as λ→0. The objective of this work is to study these problems under a weaker assumption, the nonexpansivity assumption, under which the limit function is not necessarily constant. Our discussion goes beyond the stochastic control problem with infinite time horizon and also covers Vλ given by a second-order Hamilton–Jacobi–Bellman equation that is not necessarily associated with a stochastic control problem. On the other hand, the stochastic control case generalizes earlier works considerably by considering cost functionals defined through a backward stochastic differential equation with infinite time horizon, and we give an explicit representation formula for the limit of λVλ as λ→0.

Keywords: Stochastic nonexpansivity condition; Limit value; BSDE
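For orientation, in the classical (forward) formulation the discounted value function and the Abelian limit studied in the abstract read, schematically, as follows (the notation is ours, not the paper's; the paper replaces the integral cost functional by one defined through a BSDE):

```latex
% Discounted value function with discount factor \lambda > 0,
% running cost f and controlled diffusion X^{x,u} started from x:
V_\lambda(x) \;=\; \inf_{u}\, \mathbb{E}\!\left[ \int_0^{\infty} e^{-\lambda s}\, f\big(X_s^{x,u}, u_s\big)\, ds \right],
\qquad
\ell(x) \;=\; \lim_{\lambda \downarrow 0} \lambda\, V_\lambda(x).
```

Under classical ergodicity assumptions the limit ℓ is a constant; under the nonexpansivity assumption it may genuinely depend on x.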


Upload date

March 30, 2021

[Journal Article] Stochastic differential games with reflection and related obstacle problems for Isaacs equations

Acta Mathematicae Applicatae Sinica, English Series, 2011, 27(): 647

September 9, 2011

Abstract

In this paper we first investigate zero-sum two-player stochastic differential games with reflection, with the help of the theory of Reflected Backward Stochastic Differential Equations (RBSDEs). We establish the dynamic programming principle for the upper and the lower value functions of this kind of stochastic differential game with reflection in a straightforward way. The upper and the lower value functions are then proved to be the unique viscosity solutions of the associated upper and lower Hamilton–Jacobi–Bellman–Isaacs equations with obstacles, respectively. The method differs significantly from those used for control problems with reflection, and the new techniques developed here are of independent interest. Furthermore, we prove a new estimate for RBSDEs that is sharper than the one in the paper of El Karoui, Kapoudjian, Pardoux, Peng and Quenez (1997); it turns out to be very useful because it allows us to estimate the Lp-distance of the solutions of two different RBSDEs by the p-th power of the distance of the initial values of the driving forward equations. We also show that the unique viscosity solution of the approximating Isaacs equation constructed by the penalization method converges to the viscosity solution of the Isaacs equation with obstacle.
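Schematically, the sharper estimate described above takes the following form (our notation, not the paper's; $Y^{t,x}$ denotes the first component of the RBSDE solution driven by the forward diffusion started from $x$ at time $t$):

```latex
% L^p-distance of two RBSDE solutions controlled by the p-th power
% of the distance of the initial values of the driving forward SDEs:
\mathbb{E}\Big[\, \sup_{t \le s \le T} \big|\, Y_s^{t,x} - Y_s^{t,x'} \big|^{p} \,\Big]
\;\le\; C_p\, |x - x'|^{p}.
```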


Upload date

March 30, 2021

[Journal Article] Stochastic representation for solutions of Isaacs’ type integral–partial differential equations

Stochastic Processes and their Applications, 2011, 121(12): 2715–2750

December 1, 2011

Abstract

In this paper we study integral–partial differential equations of Isaacs’ type by means of zero-sum two-player stochastic differential games (SDGs) with jump-diffusion dynamics. The results of Fleming and Souganidis (1989) [9] and of Biswas (2009) [3] are extended: we investigate a controlled stochastic system driven by a Brownian motion and a Poisson random measure, with nonlinear cost functionals defined by controlled backward stochastic differential equations (BSDEs). Furthermore, unlike in the two papers cited above, the admissible control processes of the two players are allowed to depend on all events from the past. This quite natural generalization permits the players to use this earlier information, and it makes it more convenient to establish the dynamic programming principle (DPP). However, the cost functionals are then no longer deterministic, and hence the upper and the lower value functions become, a priori, random fields. We use a new method to prove that the upper and the lower value functions are in fact deterministic. On the other hand, thanks to BSDE methods (Peng, 1997) [18], we can directly prove a DPP for the upper and the lower value functions, and also that both functions are the unique viscosity solutions of the upper and the lower integral–partial differential equations of Hamilton–Jacobi–Bellman–Isaacs type, respectively. Moreover, the existence of the value of the game is obtained in this more general setting under Isaacs’ condition.

Keywords: Stochastic differential games; Poisson random measure; Value function; Backward stochastic differential equations; Dynamic programming principle; Integral–partial differential operators; Viscosity solution
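Isaacs’ condition, under which the value of the game exists, is the requirement that the upper and lower Hamiltonians coincide; schematically (jump terms suppressed, notation ours):

```latex
% Isaacs' condition: sup-inf and inf-sup Hamiltonians agree
\sup_{u \in U}\, \inf_{v \in V}\, H(t, x, p, A, u, v)
\;=\;
\inf_{v \in V}\, \sup_{u \in U}\, H(t, x, p, A, u, v).
```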


Upload date

March 30, 2021

[Journal Article] Regularity Properties for General HJB Equations: A Backward Stochastic Differential Equation Method

SIAM J. Control Optim., 2012, 50(3): 1466–1501. https://epubs.siam.org/doi/abs/10.1137/110828629

June 19, 2012

Abstract

In this work we investigate regularity properties of a large class of Hamilton–Jacobi–Bellman (HJB) equations with or without obstacles, which can be interpreted stochastically in the form of a stochastic control system in which the nonlinear cost functional is defined with the help of a backward stochastic differential equation (BSDE) or a reflected BSDE. More precisely, we first prove that the unique viscosity solution $V(t,x)$ of an HJB equation over the time interval $[0,T]$, with or without an obstacle, and with terminal condition at time $T$, is jointly Lipschitz in $(t,x)$ for $t$ running over any compact subinterval of $[0,T)$. Second, for the case that $V$ solves an HJB equation without an obstacle or with an upper obstacle, it is shown under appropriate assumptions that $V(t,x)$ is jointly semiconcave in $(t,x)$. These results extend earlier ones by Buckdahn, Cannarsa, and Quincampoix [Nonlinear Differential Equations Appl., 17 (2010), pp. 715--728]. Our approach embeds their idea of time change into a BSDE analysis. We also provide an elementary counterexample showing that, in general, when $V$ solves an HJB equation with a lower obstacle, semiconcavity does not hold.
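Recall the standard definition used here (not restated in the abstract): $V$ is jointly semiconcave in $(t,x)$ if for some constant $C \ge 0$ and all $\lambda \in [0,1]$,

```latex
% joint semiconcavity of V in (t,x) with linear modulus
\lambda\, V(t,x) + (1-\lambda)\, V(t',x')
- V\big(\lambda (t,x) + (1-\lambda)(t',x')\big)
\;\le\; C\, \lambda (1-\lambda)\, \big( |t - t'|^2 + |x - x'|^2 \big)
```

for all $(t,x)$, $(t',x')$ in the domain.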


Upload date

March 30, 2021

[Journal Article] Stochastic maximum principle in the mean-field controls

Automatica, 2012, 48(2): 366–373

February 1, 2012

Abstract

In Buckdahn, Djehiche, Li, and Peng (2009), the authors obtained mean-field Backward Stochastic Differential Equations (BSDEs) in a natural way as the limit of a high-dimensional system of forward and backward SDEs corresponding to a large number of “particles” (or “agents”). The objective of the present paper is to deepen the investigation of such mean-field BSDEs by studying their stochastic maximum principle. The paper studies the stochastic maximum principle (SMP) for mean-field controls, which differs from the classical one. It deduces an SMP in integral form and obtains, under additional assumptions, necessary conditions as well as sufficient conditions for the optimality of a control. As an application, a linear-quadratic stochastic control problem of mean-field type is studied.
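In the simplest special case, a mean-field BSDE is one whose driver depends on the expectation of the solution; schematically (a sketch in our notation, not the general form of the cited papers):

```latex
% mean-field BSDE: driver f depends on E[Y_s], E[Z_s] as well as (Y_s, Z_s)
Y_t \;=\; \xi \;+\; \int_t^T f\big(s,\; \mathbb{E}[Y_s],\; \mathbb{E}[Z_s],\; Y_s,\; Z_s\big)\, ds
\;-\; \int_t^T Z_s\, dW_s, \qquad 0 \le t \le T.
```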


Collaborators

  • No collaborators yet