Journal of Mathematics (数学杂志), 2015, Vol. 35, Issue (1): 23-34
CONVERSE COMPARISON THEOREM FOR BSDE DRIVEN BY LÉVY PROCESS AND JENSEN'S INEQUALITY
LI Biao1, XU Jing2, ZHANG Bo3    
1. School of Finance, Zhongnan University of Economics and Law, Wuhan 430073, China;
2. School of Econ. and Business Administration, Chongqing University, Chongqing 400030, China;
3. School of Statistics, Renmin University of China, Beijing 100872, China
Abstract: In this paper, we study the converse comparison theorem for backward stochastic differential equations (BSDEs, for short) driven by 1-dimensional Lévy processes. Using a method similar to that for the converse comparison theorem under the $g$-expectation, we prove the converse comparison theorem under the $f$-expectation. Moreover, we provide a necessary and sufficient condition for Jensen's inequality to hold under the $f$-expectation, the nonlinear expectation defined by BSDEs driven by Lévy processes.
Key words: converse comparison theorem     Lévy process     Jensen's inequality    
1 Introduction

BSDEs were first introduced by Pardoux and Peng [12]. In 1997, Peng introduced the so-called $g$-expectation, a kind of nonlinear expectation, based on the solution of a BSDE driven by a Brownian motion. This expectation is called nonlinear because it violates the linearity of the usual (linear) expectation while retaining its other properties. Since then, many researchers in related fields have explored the properties of BSDEs and related $g$-expectations, see [1, 4], for example. Among the results they obtained, the comparison theorem of real-valued BSDEs turns out to be one of the cornerstone results in this theory. This comparison theorem was first established by Peng [13] in the one-dimensional case and later on generalized by many authors, see [11]. It allows one to compare the solutions of two real-valued BSDEs whenever we can compare the terminal conditions and the generators. Recently the comparison theorem was extended to the multidimensional case by Hu et al. [6]. A converse problem is also of interest: if we can compare the solutions of two BSDEs with the same terminal condition, can we compare the generators? The result of Chen [3] can be thought of as the first step toward solving this problem, and it was further developed in [2] and [8], among others.

The comparison theorem and the related converse comparison theorem are applicable to mathematical finance. These results give a necessary and sufficient condition for the wealth process to be nonnegative, and also yield several of the classical properties of utilities, see [5]. In economics and finance, the nonlinear $g$-expectation can describe the attitude of the investor, e.g., risk preference or risk aversion. In [10], the author introduced the $g$-EU theory. The validity of this theory depends on Jensen's inequality for the conditional $g$-expectation. In [2], Briand gives a counterexample showing that the $g$-expectation need not satisfy Jensen's inequality for most convex functions. This raises a natural question: under what conditions does Jensen's inequality hold? Jiang [9] and Hu [7] explored the validity of Jensen's inequality for the $g$-expectation and provided necessary and sufficient conditions for it.

BSDEs and related $g$-expectations appear in many financial problems, especially in contingent claim valuation. A more realistic case is a financial market with jumps, where jumps are caused by natural accidents, policy interference and so on. In a market with jumps, a jump-diffusion process more aptly describes the price of the risky asset, and hence BSDEs driven by Lévy processes are used in contingent claim valuation. The existence and uniqueness of the solution for this kind of equation was first dealt with in [1]. In [15], the author studied BSDEs with jumps and established several basic properties, including a comparison theorem. That study also introduced a nonlinear expectation, called the $f$-expectation, related to the BSDE studied there. However, to the best of our knowledge, the converse comparison theorem for this kind of equation and the conditional Jensen's inequality corresponding to this nonlinear expectation have not been obtained so far. Therefore, our first purpose in this paper is to establish a converse comparison theorem for BSDEs driven by Lévy processes. Based on this converse comparison theorem, we provide a necessary and sufficient condition for Jensen's inequality to hold.

The rest of the paper is organized as follows. In Section 2, we recall the basic definitions and notation for BSDEs driven by Lévy processes. In Section 3, we prove the converse comparison theorem for this kind of BSDE. In Section 4, we give the necessary and sufficient condition for Jensen's inequality under the $f$-expectation. The last section summarizes the conclusions of this work and discusses future work.

2 BSDEs Driven by Lévy Process

This section sets up the notation, terminology, and assumptions that will be in force for the rest of this work. We fix a finite time horizon $ T < \infty $ and the natural filtration $\mathcal {F}= \{\mathcal {F}_t\}_{t \in [0, T]}$, which is generated by two mutually independent processes $\{B_t\}_{t \in [0, T]}$ and $\{N_t\}_{t \in [0, T]}$. Here, $\{B_t\}_{t \in [0, T]}$ is a $d$-dimensional Brownian motion and $\{N_t\}_{t \in [0, T]}$ is a Lévy process with Poisson random measure $\mu(ds, dx)$ defined on $[0, T]\times R^*$, where $ R^*= R\setminus\{0\}$. The compensator of the Poisson random measure is $\nu (ds, dx) = \lambda (dx)ds$, and $\widetilde\mu (ds, dx)= \mu(ds, dx)-\nu (ds, dx)$ is the compensated measure: for every $A \in \mathcal{B}( R^*)$ with $\lambda (A)< \infty$, the process $\widetilde\mu ((0, t]\times A)$ is a martingale. Here $\lambda$ is a $\sigma$-finite measure on $\mathcal{B}( R^*)$ such that $\int_{ R^* } {(1 \wedge x^2 )} \lambda (dx) < \infty$. We shall now introduce some basic spaces that we will be working with.

$ \begin{eqnarray*} && \mathcal {S}^2 = \left\{ {\text {càdlàg process} \ \ \psi;\ \ E\left( {\mathop {\sup}\limits_{0 \le t \le T} \left| {\psi_t}\right|^2} \right) < \infty } \right\}, \\ && \mathcal {L}^2 (\widetilde\mu) = \left\{ {\mathcal {P} \otimes \mathcal{B} (R^*)}-\mbox {measurable process} \ \ v;\ \ E\left( {\int_0^T {\int_{R^* } {\left| v_s (x) \right|^2 \lambda (dx)ds} } } \right) < \infty \right\}, \end{eqnarray*} $

both consisting of $\mathcal {F}_t$-progressively measurable, $R$-valued processes, where $\mathcal {P}$ denotes the $\sigma$-field of predictable sets on $[0, T] \times \Omega, $ and

$ \mathcal {L}^2 (B) = \left\{ {\mbox{predictable process} \ \ \theta;\ \ E\left( {\int_0^T {\parallel {\theta_s} \parallel^2 ds} } \right) < \infty } \right\}, $

where $\mathcal {L}^2 (B)$ denotes the space of square-integrable predictable processes with respect to the Brownian filtration.
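The two driving noises fixed at the start of this section can be sketched numerically. The snippet below is an illustration we add here (not part of the original text): for simulability we assume $\lambda$ is a finite measure with unit jump sizes, so the compensated jump part is simply a compensated Poisson process; the rate value is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(42)

# Discretize the two independent drivers on [0, T]: a Brownian motion B and
# the compensated jump measure. Assuming lambda(R^*) = rate < infinity with
# unit jump sizes, the compensated integral reduces to N_t - rate*t.
T, n_steps, n_paths = 1.0, 100, 20_000
dt = T / n_steps
rate = 3.0  # illustrative total mass of lambda

dB = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))   # Brownian increments
dN = rng.poisson(rate * dt, (n_paths, n_steps))         # jump counts per step
dM = dN - rate * dt                                     # compensated (martingale) increments

B_T = dB.sum(axis=1)   # terminal Brownian values
M_T = dM.sum(axis=1)   # terminal compensated-jump values

# Martingale property: both terminal means should be ~ 0,
# with Var(B_T) ~ T and Var(M_T) ~ rate*T.
```

Both sample means are close to zero, reflecting the martingale property of $B$ and of $\widetilde\mu((0, t]\times A)$ noted above.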

Let $\eta \in L^2(\mathcal {F}_T)$ be the terminal condition. We consider

$ f: \Omega \times [0, T] \times R \times R^d \times{ \mathcal {L}}^2(R^*, {\mathcal{B}(R^*)}, \lambda;R)\rightarrow R. $

Definition 2.1  A solution of the equation

$ \begin{equation} \label{21} Y_t = \eta + \int_t^T {f(s, Y_s, Z_s, U_s )} ds - \int_t^T {Z_s dB_s } - \int_t^T {\int_{R^* } {U_s } } (x)\widetilde\mu (ds, dx) \end{equation} $ (2.1)

with parameters $(f, \eta)$ is a triple of processes $(Y_t, Z_t, U_t) \in {\mathcal {S}}^2 \times {\mathcal {L}}^2 (B) \times {\mathcal {L}}^2 (\widetilde\mu )$ satisfying that equation for any $t \in [0, T].$

Next we consider the existence and uniqueness of the solution.

Lemma 2.2 (see [15])  There exists a unique solution of equation (2.1) in ${\mathcal {S}}^2 \times {\mathcal {L}}^2 (B) \times {\mathcal {L}}^2 (\widetilde\mu )$, provided the generator $f$ satisfies the following conditions:

(ⅰ) $E\left[\int_0^T {\left| {f(s, 0, 0, 0)} \right|^2 ds} \right] < \infty$;

(ⅱ) $f$ is Lipschitz continuous with respect to $y$ and $z$, with Lipschitz constant $K$;

(ⅲ) there exist constants $-1 < C_1 \leq 0$ and $C_2 \geq 0$ such that for any

$ y \in R, \ z \in R^d, \ u, u' \in {\mathcal {L}}^2(R^*, {\mathcal{B}}(R^*), \lambda; R ), $

we have

$ f(t, y, z, u) - f(t, y, z, u') \le \int_{R^* } {(u(x) - u'(x))} \gamma _t^{y, z, u, u'} (x)\lambda (dx), $

where $ \gamma _t^{y, z, u, u'}: \ \ \Omega \times [0, T] \times R^* \rightarrow R $ is ${\mathcal {P}} \times {\mathcal{B}(R^*)}$-measurable and satisfies

$ C_1(1 \wedge |x|) \leq \gamma _t^{y, z, u, u'}(x) \leq C_2(1 \wedge |x|). $

The last condition (ⅲ) implies that $f$ is Lipschitz continuous in $u$: writing $C=\max(-C_1, C_2)$ and $A = C\left(\int_{R^* }(1\wedge|x|)^2\lambda(dx)\right)^{1/2}$ (by the Cauchy-Schwarz inequality), we have

$ \begin{eqnarray*} \left| {f(t, y, z, u) - f(t, y, z, u')} \right| &\le& C\int_{R^* } {\left| {u(x) - u'(x)} \right|(1 \wedge \left| x \right|)} \lambda (dx)\\ &\le& A\left(\int_{R^* } {\left| {u(x) - u'(x)} \right|} ^2 \lambda (dx)\right)^{\frac{1}{2}}. \end{eqnarray*} $

We shall now recall a comparison theorem for BSDE driven by a Lévy process, see [15]. Let $(Y^i, Z^i, U^i)$ $(i=1, 2)$ be two solutions of our equation (2.1) associated respectively with $(\eta^1, f^1)$ and $(\eta^2, f^2).$ Then, we have the following

Theorem 2.3(see [15])  Assume that the conditions in Lemma 2.2 are fulfilled for $\eta^1$, $\eta^2$, $f^1$ and $f^2.$ If $\eta^1\leq \eta^2$ a.s., and $f^1(t, Y_t^1, Z_t^1, U_t^1)\leq f^2(t, Y_t^1, Z_t^1, U_t^1)$, $dt\times dP$ a.e., then $Y_t^1\leq Y_t^2$ for $t\in [0, T]$ a.s.

Definition 2.4  Consider a BSDE driven by a Lévy process with generator $f$, satisfying

(ⅰ) $f(s, y, 0, 0)=0, \forall y \in R$;

(ⅱ) $f$ is Lipschitz continuous in $y, z $;

(ⅲ) $f$ satisfies the third condition in Lemma 2.2.

For any fixed $\eta \in L^2(\Omega, \mathcal {F}_T, P)$, we denote the unique solution of the related BSDE by $(Y^{\eta, f, T}_t, Z^{\eta, f, T}_t, U^{\eta, f, T}_t)$. Let $ \varepsilon_f (\eta ) = Y_0^{\eta, f, T}$ denote the initial value of the solution. Then $ \varepsilon_f$ is a nonlinear expectation, called the $f$-expectation.
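As a simple sanity check on this definition (an example we add here, not part of the original text): if the generator vanishes, $f \equiv 0$, equation (2.1) becomes

$ Y_t = \eta - \int_t^T {Z_s dB_s } - \int_t^T {\int_{R^* } {U_s } } (x)\widetilde\mu (ds, dx), $

and taking conditional expectations gives $Y_t = E[\eta \mid \mathcal {F}_t]$, so that $\varepsilon_f(\eta)=E[\eta]$. Thus the $f$-expectation contains the classical (linear) expectation as a special case; it is a nonzero, nonlinear generator that makes $\varepsilon_f$ genuinely nonlinear.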

Lemma 2.5  Let $f$ satisfy the conditions in Definition 2.4. Then, for any $\xi\in L^2(\Omega, \mathcal {F}_T, P)$ and any $t \in [0, T]$, there exists a unique random variable $\eta \in L^2(\Omega, \mathcal {F}_t, P)$ such that

$ \varepsilon_f[I_A \xi]= \varepsilon_f[I_A \eta], \ \ \forall A \in \mathcal {F}_t. $

This $\eta$ is called the conditional $f$-expectation of $\xi$ and is denoted by $ \varepsilon_f[\xi \mid \mathcal {F}_t]$. Moreover, $ \varepsilon_f[\xi \mid \mathcal {F}_t]=Y^{\xi, f, T}_t$, the value at time $t$ of the solution of equation (2.1) with terminal condition $\xi$.

3 Converse Comparison Theorem

In this section, we present one of the main results of this paper, namely, the converse comparison theorem for the solutions of BSDEs driven by Lévy processes.

Suppose the triple $(Y, Z, U)$ is the solution of the BSDE (2.1) and the generator $f$ satisfies the following

Assumption 3.1  For all $ (y, p, q)$, almost every sample path $t \mapsto f(t, y, p, q)$ is continuous.

To establish our converse comparison theorem, we need the following two lemmas.

Lemma 3.2  Let $(Y, Z, U)$ be the solution of BSDE (2.1). We then have

$ \begin{eqnarray*} & &E\left[\sup\limits_{t\leq s\leq T}e^{\beta s}|Y_s|^2\mid \mathcal {F}_t\right] +E\left[\int_t^T e^{\beta s}Z_s^2ds\mid \mathcal {F}_t\right] + E\left[\int_t^T\int_{R^*}e^{\beta s}U_s^2 \lambda(dx)ds\mid \mathcal {F}_t\right]\\ &\leq&C E\left[e^{\beta T}\eta^2+\left(\int_t^T e^{\beta s/2}|f(s, 0, 0, 0)|ds\right)^2\mid \mathcal {F}_t\right], \end{eqnarray*} $

where $C$ is a positive constant and $\beta=2K+4K^2$ (recall that $K$ is the Lipschitz constant of $f$ with respect to $y$ and $z$).

Proof  For any constant $\beta$, it follows from Itô's formula that

$ \begin{eqnarray}\label{ito 1} & &e^{\beta s}Y_s^2+\int_s^T e^{\beta u} Z_u^2du+\int_s^T\int_{R^*} e^{\beta u}U_u^2(x)\mu(dx, du)\nonumber\\ &=& e^{\beta T}\eta^2-\beta\int_s^T e^{\beta u}|Y_u|^2du+2\int_s^T e^{\beta u} Y_u f(u, Y_u, Z_u, U_u) du \nonumber\\ &&-2\int_s^T e^{\beta u} Y_u Z_u dB_u-2 \int_s^T\int_{R^*}e^{\beta u}Y_u U_u(x)\widetilde{\mu}(du, dx). \end{eqnarray} $ (3.1)

Also,

$ \begin{eqnarray}\label{ito 2} & &2\int_s^T e^{\beta u} Y_u f(u, Y_u, Z_u, U_u) du \nonumber \\ &\leq&2\int_s^T e^{\beta u} |Y_u| |f(u, Y_u, Z_u, U_u)| du\nonumber\\ &\leq&2\int_s^T e^{\beta u} |Y_u| |f(u, Y_u, Z_u, U_u)-f(u, 0, 0, 0)| du+2\int_s^T e^{\beta u} |Y_u| |f(u, 0, 0, 0)| du\nonumber\\ &\leq& 2K \int_s^T e^{\beta u} |Y_u|\left[|Y_u|+|Z_u|+\left(\int_{R^*}|U_u(x)|^2\lambda(dx)\right)^{1/2}\right] du+2\int_s^T e^{\beta u} |Y_u| |f(u, 0, 0, 0)| du\nonumber\\ &\leq &2K \int_s^T e^{\beta u} |Y_u|^2du+4K^2\int_s^T e^{\beta u} |Y_u|^2du+\dfrac{1}{2}\int_s^T e^{\beta u} |Z_u|^2du \nonumber \\ &&+\dfrac{1}{2}\int_s^T \int_{R^*} e^{\beta u} |U_u(x)|^2\lambda(dx) du +2\int_s^T e^{\beta u} |Y_u| |f(u, 0, 0, 0)| du\ . \end{eqnarray} $ (3.2)

Taking $\beta=4K^2+2K$, we see from (3.1) and (3.2) that

$ \begin{eqnarray}\label{ito 3} & &e^{\beta s}|Y_s|^2+\dfrac{1}{2}\int_s^T e^{\beta u} |Z_u|^2du+ \int_s^T \int_{R^*} e^{\beta u} |U_u(x)|^2\mu(dx, du)\nonumber \\ && -\dfrac{1}{2} \int_s^T \int_{R^*} e^{\beta u} |U_u(x)|^2\lambda(dx) du\nonumber\\ &\leq &e^{\beta T}\eta^2-2\int_s^T e^{\beta u} Y_u Z_u dB_u-2 \int_s^T \int_{R^*} e^{\beta u} |U_u(x)|^2 \widetilde{\mu}(dx, du) \nonumber \\ &&+2\int_s^T e^{\beta u} |Y_u| |f(u, 0, 0, 0)| du . \end{eqnarray} $ (3.3)

In (3.3), let $s=t$ and take the conditional expectation to get

$ \begin{eqnarray}\label{ito 4} & &e^{\beta t}Y_t^2+\dfrac{1}{2}E\left[\int_t^T e^{\beta u} Z_u^2 du\mid \mathcal {F}_t\right]+\dfrac{1}{2}E\left[\int_t^T \int_{R^*}e^{\beta u} U_u^2(x) \lambda (dx)du\mid \mathcal {F}_t\right]\nonumber\\ &\leq&E[e^{\beta T}\eta^2\mid \mathcal {F}_t]+2E\left[\int_t^Te^{\beta u}|Y_u||f(u, 0, 0, 0)|du\mid \mathcal {F}_t\right]. \end{eqnarray} $ (3.4)

It follows from equation (3.1) that

$ \begin{eqnarray*} e^{\beta s}Y_s^2 &\leq &e^{\beta T}\eta^2-\beta\int_s^Te^{\beta u}Y_u^2du+2\int_s^Te^{\beta u} Y_u f(u, Y_u, Z_u, U_u)du\\ & &-2\int_s^T e^{\beta u} Y_u Z_u dB_u-2\int_s^T\int_{R^*}e^{\beta u}Y_u U_u(x) \widetilde{\mu}(du, dx)\ . \end{eqnarray*} $

The following estimate holds by equation (3.2).

$ \begin{eqnarray*} e^{\beta s}Y_s^2 &\leq&e^{\beta T}\eta^2-2\int_s^T e^{\beta u} Y_u Z_u dB_u-2\int_s^T\int_{R^*}e^{\beta u}Y_u U_u(x) \widetilde{\mu}(du, dx)\\ & &+2\int_t^Te^{\beta u}|Y_u||f(u, 0, 0, 0)|du+\dfrac{1}{2} \int_t^T\int_{R^*} e^{\beta u} U_u^2(x)\lambda (dx)du\ . \end{eqnarray*} $

Therefore

$ \begin{eqnarray*} \sup\limits_{t\leq s \leq T}e^{\beta s}Y_s^2 &\leq&e^{\beta T}\eta^2 -2\int_t^T e^{\beta u} Y_u Z_u dB_u-2\int_t^T\int_{R^*}e^{\beta u}Y_u U_u(x) \widetilde{\mu}(du, dx)\\ & &+ 2\sup\limits_{t\leq s \leq T}\left| \int _t^s e^{\beta u}Y_u Z_u dB_u\right|+2\sup\limits_{t\leq s\leq T}\left| \int_t^s \int_{R^*}e^{\beta u}Y_u U_u(x) \widetilde{\mu}(du, dx) \right|\\ & &+\dfrac{1}{2} \int_t^T\int_{R^*} e^{\beta u} U_u^2(x)\lambda (dx)du+2\int_t^Te^{\beta u}|Y_u||f(u, 0, 0, 0)|du\, \end{eqnarray*} $

and consequently

$ \begin{eqnarray*} E\left[\sup\limits_{t\leq s \leq T}e^{\beta s}Y_s^2\mid \mathcal {F}_t\right] &\leq &E[e^{\beta T}\eta^2\mid \mathcal {F}_t] +2E\left[\sup\limits_{t\leq s \leq T}\left| \int _t^s e^{\beta u}Y_u Z_u dB_u\right|\mid \mathcal {F}_t\right]\\ & &+2E\left[\sup\limits_{t\leq s\leq T}\left| \int_t^s \int_{R^*}e^{\beta u}Y_u U_u(x) \widetilde{\mu}(du, dx) \right|\mid\mathcal {F}_t\right]\\ & &+\dfrac{1}{2} E\left[\int_t^T\int_{R^*} e^{\beta u} U_u^2(x)\lambda (dx)du\mid \mathcal {F}_t\right]\\ & &+2E\left[\int_t^Te^{\beta u}|Y_u||f(u, 0, 0, 0)|du\mid \mathcal {F}_t\right]\ . \end{eqnarray*} $

By the Burkholder-Davis-Gundy inequality,

$ \begin{eqnarray} E\left[\sup\limits_{t\leq s \leq T}e^{\beta s}Y_s^2\mid \mathcal {F}_t\right] &\leq& E[e^{\beta T}\eta^2\mid \mathcal {F}_t]+2CE\left[\left(\int_t^T e^{2\beta u}Y_u^2 Z_u^2 du\right)^{1/2}\mid \mathcal {F}_t\right]\nonumber\\ &&+\dfrac{1}{2}E\left[\int_t^T\int_{R^*}e^{\beta u}|U_u(x)|^2\lambda(dx) du\mid \mathcal {F}_t \right] \nonumber\\ && +2E\left[\int_t^Te^{\beta u}|Y_u||f(u, 0, 0, 0)|du\mid \mathcal {F}_t\right]\nonumber\\ && +2CE\left[\left|\int_t^T \int_{R^*}e^{2\beta u}Y_u^2 U_u^{2}\mu(du, dx)\right|^{1/2}\mid \mathcal {F}_t\right]\nonumber\\ &\leq &E[e^{\beta T}\eta^2\mid \mathcal {F}_t]+\dfrac{1}{8}E\left[\sup\limits_{t\leq s \leq T}e^{\beta s}Y_s^2\mid \mathcal {F}_t\right]+CE\left[\int_t^T|Z_u|^2e^{\beta u} du\mid \mathcal {F}_t\right]\nonumber\\ &&+\dfrac{1}{4}E\left[\sup\limits_{t\leq s \leq T}e^{\beta s}Y_s^2\mid \mathcal {F}_t\right]+CE\left[\int_t^T\int_{R^*}e^{\beta u}|U_u(x)|^2\lambda(dx)du\mid \mathcal {F}_t\right]\nonumber\\ &&+CE\left[\left(\int_t ^T e^{\beta u/2}|f(u, 0, 0, 0)|du\right)^2\mid \mathcal {F}_t \right]\ . \end{eqnarray} $ (3.5)

Then, by estimates (3.4) and (3.5), we have

$ \begin{eqnarray*} & &E\left[\sup\limits_{t\leq s \leq T}e^{\beta s}Y_s^2\mid \mathcal {F}_t\right]+ E\left[\int_t ^T e^{\beta u}|Z_u|^2 du\mid \mathcal {F}_t\right]+ E\left[\int_t ^T \int_{R^*} e^{\beta u}U_u^2(x)\lambda(dx) du \mid \mathcal {F}_t\right]\\ &\leq& CE\left[e^{\beta T}\eta ^2+\left( \int_t^T e^{\beta s/2 }|f(s, 0, 0, 0)|ds\right)^2 \mid \mathcal {F}_t\right]. \end{eqnarray*} $

For any fixed $p, q\in R$, put

$ \begin{eqnarray*} X_s^{t, x} &= &x+p(B_s-B_t)+q\int_t^s\int_{R^*} 1\wedge |x| \ \widetilde{\mu}(du, dx), \\ ^nY_s&=& y+\left[p(B_{t+\frac{1}{n}}-B_t)+q\int_t^{t+\frac{1}{n}}\int_{R^*} 1\wedge |x| \ \widetilde{\mu}(du, dx)\right]\\ & &+\int_s^{t+\frac{1}{n}}f(u, {^nY_u}, {^nZ_u}, {^nU_u})du\\ & &-\int_s^{t+\frac{1}{n}} {^nZ_u} dB_u-\int_s^{t+\frac{1}{n}}\int_{R^*} {^nU_u(x)} \widetilde{\mu}(du, dx)\ . \end{eqnarray*} $

Lemma 3.3  Suppose that $E[\sup_{0\leq t\leq T}|f(t, 0, 0, 0)|^2]$ is finite. Then

$ n({^nY_t}-y)\longrightarrow f(t, y, p, q), \ \ {\rm in}\ L^2\ {\rm sense}, \ \ {\rm as}\ n\rightarrow \infty . $

Proof  For any fixed $(t, y, p, q)$ and $t\leq s \leq t+\dfrac{1}{n}$, put

$ \begin{eqnarray*} {\widetilde{^nY_s}}&=& {^nY_s}-[p(B_{s}-B_t)+q\int_t^{s}\int_{R^*} 1\wedge |x| \widetilde{\mu}(du, dx)], \\ {\widetilde{^nZ_s}}&=&{^nZ_s}-p, \\ {\widetilde{^nU_s}}&=&{^nU_s}-q. \end{eqnarray*} $

Then $d{ \widetilde{^nY_s}}=d{^nY_s}-pd(B_{s}-B_t)-qd\int_t^{s}\int_{R^*} 1\wedge |x| \widetilde{\mu}(du, dx)$, that is

$ \begin{eqnarray*} { \widetilde{^nY_s}}&=&\int_s^{t+\frac{1}{n}}f(u, {^nY_u}, {^nZ_u}, {^nU_u})du-\int_s^{t+\frac{1}{n}}({^nZ_s}-p) dB_s\\ &&-\int_s^{t+\frac{1}{n}}\int_{R^*}({^nU_u(x)}-q)\widetilde{\mu}(du, dx)\\ &=&\int_s^{t+\frac{1}{n}}f(u, {\widetilde{^nY_u}}+p(B_u-B_t)+q\int_t^{u}\int_{R^*} 1\wedge |x| \widetilde{\mu}(dv, dx), \\ && {\widetilde{^nZ_u}}+p, {\widetilde{^nU_u}}+q)du\\ & &-\int_s^{t+\frac{1}{n}}\widetilde{{^nZ_s}} dB_s -\int_s^{t+\frac{1}{n}}\int_{R^*}\widetilde{{^nU_u}}(x)\widetilde{\mu}(du, dx). \end{eqnarray*} $

By Lemma 3.2 and the finiteness of $E[\sup_{0\leq t\leq T}|f(t, 0, 0, 0)|^2]$, we have the following estimate

$ \begin{eqnarray} & &E\left[\sup\limits_{t\leq s \leq t+\frac{1}{n}}|{\widetilde{^nY_s}}|^2\mid \mathcal {F}_t\right]+E\left[\int_t ^{t+\frac{1}{n}}|{\widetilde{^nZ_s}}|^2ds\mid \mathcal {F}_t\right] +E\left[\int_t ^{t+\frac{1}{n}}\int_{R^*}|{\widetilde{^nU_s}}|^2\lambda(dx)ds\mid \mathcal {F}_t\right]\nonumber\\ &\leq& Ce^{\beta/n}E\left[\left(\int_t^{t+\frac{1}{n}}f\left(u, y+p(B_u-B_t)+q\int_t^{u}\int_{R^*} 1\wedge |x|\, \widetilde{\mu}(dv, dx), p, q\right)du\right)^2\mid \mathcal {F}_t\right]\nonumber\\ &\leq& Ce^{\beta/n}E\left[\left(\int_t^{t+\frac{1}{n}}\left|f\left(u, y+p(B_u-B_t)+q\int_t^{u}\int_{R^*}1\wedge |x|\, \widetilde{\mu}(dv, dx), p, q\right)-f(u, 0, 0, 0)\right|+|f(u, 0, 0, 0)|\, du\right)^2\mid \mathcal {F}_t\right]\nonumber\\ &\leq& CKe^{\beta/n}\dfrac{1}{n}E\left[\int_t^{t+\frac{1}{n}} \left[\left|y+p(B_u-B_t)+q\int_t^{u}\int_{R^*}1\wedge|x|\, \widetilde{\mu}(dv, dx)\right|^2+q^2+p^2\right]du +\int_t^{t+\frac{1}{n}}|f(u, 0, 0, 0)|^2du\mid \mathcal {F}_t\right] \end{eqnarray} $ (3.6)
$ \begin{eqnarray} &\leq &CKe^{\beta/n}\dfrac{1}{n}\left(\dfrac{p^2+q^2+y^2}{n}+\dfrac{p^2}{2}\dfrac{1}{n^2}+Kq^2 \dfrac{1}{n}\right)+\dfrac{1}{n}Ce^{\beta/n}E\left[\int_t^{t+\frac{1}{n}}|f(u, 0, 0, 0)|^2du\mid \mathcal {F}_t\right]\nonumber\\ &\leq &C_{y, p, q} \dfrac{1}{n^2}, \end{eqnarray} $ (3.7)

where $C_{y, p, q}$ is a constant depending on $y, p$ and $q$.

Since

$ \begin{eqnarray*} n({^nY_t}-y)&=& nE\left[\int_t^{t+\frac{1}{n}}f(u, {^nY_u}, {^nZ_u}, {^nU_u})du\mid \mathcal {F}_t\right]\\ &=&nE\left[\int_t^{t+\frac{1}{n}}f\left(u, y+p(B_u-B_t)+q\int_t^{u}\int_{R^*}1\wedge|x|\, \widetilde{\mu}(dv, dx), p, q\right)du\mid \mathcal {F}_t\right]+R_n, \end{eqnarray*} $

where

$ \begin{eqnarray*} R_n&=&nE\left[\int_t^{t+\frac{1}{n}}f\left(u, y+{\widetilde{^nY_u}}+p(B_u-B_t)+q\int_t^{u}\int_{R^*} 1\wedge |x|\, \widetilde{\mu}(dv, dx), {\widetilde{^nZ_u}}+p, {\widetilde{^nU_u}}+q\right)du\right.\\ &&\left.-\int_t^{t+\frac{1}{n}}f\left(u, y+p(B_u-B_t)+q\int_t^{u}\int_{R^*} 1\wedge|x|\,\widetilde{\mu}(dv, dx), p, q\right)du\mid \mathcal {F}_t\right], \end{eqnarray*} $

we then get

$ |R_n|\leq nKE\left[\int_t^{t+\frac{1}{n}}|{\widetilde{^nY_u}}|+|{\widetilde{^nZ_u}}| +\left(\int_{R^*}|{\widetilde{^nU_u}}|^2\lambda(dx)\right)^{1/2}du\mid \mathcal {F}_t\right]. $

Now, by inequality (3.6),

$ \begin{eqnarray*} E[|R_n|^2]&\leq&n^2 K^2 E\left[\left(\int_t^{t+\frac{1}{n}}|{\widetilde{^nY_u}}|+|{\widetilde{^nZ_u}}| +\left(\int_{R^*}|{\widetilde{^nU_u}}|^2\lambda(dx)\right)^{1/2}du\right)^2\right]\\ &\leq &nK^2E\left[\int_t^{t+\frac{1}{n}}\left(|{\widetilde{^nY_u}}|^2+|{\widetilde{^nZ_u}}|^2 +\int_{R^*}|{\widetilde{^nU_u}}|^2\lambda(dx)\right)du\right]\\ &\leq &nK^2E\left[\dfrac{1}{n}\sup\limits_{t\leq s\leq t+\frac{1}{n}}|{\widetilde{^nY_s}}|^2+\int_t^{t+\frac{1}{n}}|{\widetilde{^nZ_u}}|^2du +\int_t^{t+\frac{1}{n}}\int_{R^*}|{\widetilde{^nU_u}}|^2\lambda(dx)du\right]\\ &\leq &C_{y, p, q}(\dfrac{1}{n^2}+\dfrac{1}{n})\\ &\longrightarrow&0. \end{eqnarray*} $

Let

$ A_n=nE\left[\int_t^{t+\frac{1}{n}}f\left(u, y+p(B_u-B_t)+q\int_t^{u}\int_{R^*}1\wedge|x|\, \widetilde{\mu}(dv, dx), p, q\right)du\mid \mathcal {F}_t\right]-f(t, y, p, q) . $

Then

$ \begin{eqnarray*} |A_n|&\leq&nE\left[\int_t^{t+\frac{1}{n}}\left|f\left(u, y+p(B_u-B_t)+q\int_t^{u}\int_{R^*} 1\wedge|x|\,\widetilde{\mu}(dv, dx), p, q\right)-f(u, y, p, q)\right|du\mid \mathcal {F}_t\right]\\ & &+nE\left[\int_t^{t+\frac{1}{n}}|f(t, y, p, q)-f(u, y, p, q)|du\mid \mathcal {F}_t\right], \end{eqnarray*} $

and

$ \begin{eqnarray*} E[|A_n|^2]&\leq &KE\left[\int_t^{t+\frac{1}{n}}\left(p^2(B_u-B_t)^2+q^2\left(\int_t^u\int_{R^*}1\wedge|x|\widetilde{\mu}(dv, dx) \right)^2+p^2+q^2\right)du\right]\\ & &+K E\left[\int_t^{t+\frac{1}{n}}|f(t, y, p, q)-f(u, y, p, q)|^2du\right]\\ &=&I+II, \end{eqnarray*} $

where

$ \begin{eqnarray*} I&=&KE\left[\int_t^{t+\frac{1}{n}}\left(p^2(B_u-B_t)^2+q^2\left(\int_t^u\int_{R^*}1\wedge|x|\widetilde{\mu}(dv, dx) \right)^2+p^2+q^2\right)du\right]\\ II&=&K E\left[\int_t^{t+\frac{1}{n}}|f(t, y, p, q)-f(u, y, p, q)|^2du\right]. \end{eqnarray*} $

Obviously $ I\leq K\dfrac{p^2}{2}\dfrac{1}{n^2}+K\dfrac{q^2}{2}\dfrac{1}{n}+(p^2+q^2)\dfrac{1}{n}\longrightarrow 0. $ By Assumption 3.1 and dominated convergence, $II\rightarrow 0$, and hence the result holds.

Now, here is our converse comparison theorem.

Theorem 3.4  Let $f_1(t, y, p, q)$ and $f_2(t, y, p, q)$ satisfy the conditions in Lemma 2.2 and in Definition 2.4. For any $\eta \in L^2(\mathcal {F}_T)$, denote $Y_t^1(\eta)=Y_t^{\eta, f_1, T}$ and $Y_t^2(\eta)=Y_t^{\eta, f_2, T}$. If $Y_t^1(\eta)\leq Y_t^2(\eta)$ $P$-a.s. for all $t\in [0, T]$, then for all $(t, y, p, q)$, $f_1(t, y, p, q)\leq f_2(t, y, p, q)$ holds $P$-a.s.

Proof  For any fixed $(t, y, p, q)$, define

$ \eta_n:=y+p(B_{t+\frac{1}{n}}-B_t)+q\int_t^{t+\frac{1}{n}}\int_{R^*}1\wedge|x|\widetilde{\mu}(ds, dx). $

By Lemma 3.3, we know $ n(Y_t^i(\eta_n)-y)\longrightarrow f_i(t, y, p, q), i=1, 2 $ in $L^2$ sense.

On the other hand, $ n(Y_t^1(\eta_n)-y)\leq n(Y_t^2(\eta_n)-y), $ so

$ f_1(t, y, p, q)\leq f_2(t, y, p, q), P-{\rm a.s.}. $

By Assumption 3.1, for all $(t, y, p, q)$, $f_1(t, y, p, q)\leq f_2(t, y, p, q)$ holds $P$-a.s.

4 Jensen's Inequality

In this section, we give a necessary and sufficient condition for Jensen's inequality to hold under the $f$-expectation defined in Section 2. Indeed, we appeal to the converse comparison theorem of Section 3 to prove this basic inequality.

Theorem 4.1  Suppose $f$ satisfies the conditions in Lemma 2.2, Definition 2.4, and Assumption 3.1. Then the following statements are equivalent:

(1) $P$-a.s., $f$ does not depend on $y$, and for all $\lambda \in R$ and all $(t, p, q)$,

$ f(t, \lambda p, \lambda q)\geq \lambda f(t, p, q); $

(2) the conditional Jensen's inequality holds; that is, for any $\eta\in L^2(\mathcal {F}_T)$ and any convex function $\psi$ defined on $R$, the following inequality holds: $ \varepsilon_f[\psi(\eta)\mid \mathcal {F}_t]\geq \psi(\varepsilon_f[\eta\mid \mathcal {F}_t])$, $P$-a.s.

Proof  $(1)\Longrightarrow (2)$ First consider the affine function $\varphi(x)=\lambda x+u$ with $\lambda\neq0$. Let $(Y, Z, U)$ denote the solution of the following BSDE

$ Y_t=\eta+\int_t^T f(s, Z_s, U_s) ds-\int_t^TZ_sdB_s-\int_t^T\int_{R^*}U_s(x)\widetilde{\mu}(ds, dx), $

and $(Y', Z', U')$ the solution of the following BSDE

$ Y'_t=\varphi(\eta)+\int_t^T f'(s, Z'_s, U'_s) ds-\int_t^TZ'_sdB_s-\int_t^T\int_{R^*}U'_s(x)\widetilde{\mu}(ds, dx), $

where $f'(t, p, q)=\lambda f(t, \frac{p}{\lambda}, \frac{q}{\lambda})$. Obviously, $ Y'_t=\lambda Y_t+u, Z'=\lambda Z, U'=\lambda U $ and

$ Y_t=\varepsilon_f[\eta\mid \mathcal {F}_t], Y'_t=\varepsilon_{f'}[\varphi(\eta)\mid \mathcal {F}_t]. $

Since condition (1) (applied with $p/\lambda$ and $q/\lambda$) gives $f(t, p, q)\geq f'(t, p, q)$ $P$-a.s., the comparison theorem (Theorem 2.3) yields

$ \varepsilon_f[\varphi(\eta)\mid \mathcal {F}_t]\geq \varepsilon_{f'}[\varphi(\eta)\mid \mathcal {F}_t], P-{\rm a.s.}, $

and $ \varepsilon_{f'}[\varphi(\eta)\mid \mathcal {F}_t]=Y'_t=\lambda Y_t+u=\lambda\varepsilon_f[\eta\mid \mathcal {F}_t]+u=\varphi(\varepsilon_f[\eta\mid \mathcal {F}_t]). $ For any convex function $\psi$, there exists a countable set $D$ in $R^2$ such that

$\begin{eqnarray*} \psi(x)=\sup\limits_{(\lambda, u)\in D}(\lambda x+u).\end{eqnarray*} $

For any $(\lambda, u)\in D$,

$\begin{eqnarray*} \varepsilon_f[\psi(\eta)\mid \mathcal {F}_t]=\varepsilon_f[\sup\limits_{(\lambda, u)\in D}(\lambda \eta +u)\mid \mathcal {F}_t]\geq \lambda \varepsilon_f[\eta\mid \mathcal {F}_t]+u, P-{\rm a.s.}.\end{eqnarray*} $

Since $D$ is a countable set,

$\begin{eqnarray*} \varepsilon_f[\psi (\eta)\mid \mathcal {F}_t]\geq \sup\limits_{(\lambda, u)\in D}[\lambda \varepsilon_f[\eta\mid \mathcal {F}_t]+u]=\psi(\varepsilon_f[\eta\mid \mathcal {F}_t]), P-{\rm a.s.}.\end{eqnarray*} $
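As an added illustration of the countable supremum representation invoked above (our example, not in the original): for $\psi(x)=x^2$ one may take the countable set $D=\{(\lambda, -\lambda^2/4): \lambda\in Q\}$, since

$\begin{eqnarray*} \sup\limits_{\lambda\in Q}\left(\lambda x-\dfrac{\lambda^2}{4}\right)=x^2,\end{eqnarray*} $

the unconstrained supremum over $\lambda\in R$ being attained at $\lambda=2x$ and approached along rational $\lambda$ by continuity.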

$(2)\Longrightarrow (1)$ Let $(Y, Z, U)$ be the solution of the following BSDE:

$ Y_t=\eta+\int_t^T f(s, Y_s, Z_s, U_s) ds-\int_t^TZ_sdB_s-\int_t^T\int_{R^*}U_s(x)\widetilde{\mu}(ds, dx), $

and $(Y', Z', U')$ be the solution of the following BSDE

$ Y'_t=\varphi(\eta)+\int_t^T f'(s, Y'_s, Z'_s, U'_s) ds-\int_t^TZ'_sdB_s-\int_t^T\int_{R^*}U'_s(x)\widetilde{\mu}(ds, dx), $

where $f'(t, y, p, q)=\lambda f(t, \dfrac{y-u}{\lambda}, \frac{p}{\lambda}, \frac{q}{\lambda})$, and $\varphi(x)=\lambda x+u$ is a convex function.

As above, we have

$ \varepsilon_f[\varphi(\eta)\mid \mathcal {F}_t]\geq \varphi(\varepsilon_f[\eta\mid \mathcal {F}_t])=\lambda Y_t+u=Y'_t=\varepsilon_{f'}[\varphi(\eta)\mid \mathcal {F}_t]. $

By Theorem 3.4, $\forall (t, y, p, q)$, we have $f(t, y, p, q)\geq f'(t, y, p, q)$, $P-$a.s., that is

$ f(t, \lambda y+u, \lambda p, \lambda q)\geq \lambda f(t, y, p, q). $

Let $\lambda=1$. Then $f(t, y+u, p, q)\geq f(t, y, p, q)$ for every $u\in R$; applying this with $u$ and $-u$ shows that $f$ does not depend on $y$, and then $f(t, \lambda p, \lambda q)\geq \lambda f(t, p, q)$ follows immediately.
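As a numerical sanity check of the implication $(1)\Rightarrow(2)$ in the degenerate case (an added illustration; all parameter choices below are ours): when $f\equiv 0$, condition (1) holds trivially and $\varepsilon_f$ reduces to the classical expectation, so Jensen's inequality of statement (2) must hold. A Monte Carlo sketch with a jump-diffusion terminal value $\eta = B_T + (N_T-\lambda T)$:

```python
import numpy as np

rng = np.random.default_rng(0)

# With f = 0, the f-expectation is the usual expectation, so statement (2)
# of Theorem 4.1 reduces to the classical Jensen inequality
#   E[psi(eta)] >= psi(E[eta])  for convex psi.
# Illustrative terminal value: eta = B_T + (N_T - lam*T), built from a
# Brownian motion and a compensated unit-jump Poisson process.
n, T, lam = 100_000, 1.0, 2.0
B_T = rng.normal(0.0, np.sqrt(T), n)
N_T = rng.poisson(lam * T, n)
eta = B_T + (N_T - lam * T)          # mean 0, variance T + lam*T = 3

psi = lambda x: x ** 2               # a convex function

lhs = psi(eta).mean()                # Monte Carlo estimate of E[psi(eta)]
rhs = psi(eta.mean())                # psi(E[eta])
assert lhs >= rhs                    # Jensen's inequality
```

Here `lhs` is close to the variance $T+\lambda T=3$ while `rhs` is close to $0$, so the inequality holds with a wide margin, as the theorem predicts for this generator.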

5 Discussion

BSDEs driven by Lévy processes are widely used in mathematical finance, especially in contingent claim valuation in incomplete markets. In this paper, we established the converse comparison theorem for BSDEs driven by Lévy processes, and via this theorem we obtained a necessary and sufficient condition for Jensen's inequality under the $f$-expectation. These two results generalize the corresponding results for BSDEs driven by Brownian motion, and they can be used in contingent claim valuation, as mentioned in Section 1. Many elementary problems remain open for the $f$-expectation, such as the up-crossing inequality.

References
[1] Barles G, Buckdahn R, Pardoux E. Backward stochastic differential equations and integral-partial differential equations[J]. Stoch. Stoch. Rep., 1997, 60(1-2): 57–83.
[2] Briand P, Coquet F, Hu Y, Memin J, Peng S. A converse comparison theorem for BSDEs and related properties of g-expectation[J]. Elect. Comm. Probab., 2000, 5: 101–117. DOI:10.1214/ECP.v5-1025
[3] Chen Z. A property of backward stochastic differential equations[J]. C.R. Acad. Sci. Paris Ser. I. Math., 1998, 326(4): 483–488. DOI:10.1016/S0764-4442(97)89796-0
[4] Chen Z, Peng S. A non-linear decomposition theorem in L1[R]. Hong Kong: Pacific Rim. Conference on Math. , 1998.
[5] El Karoui N, Peng S, Quenez M C. Backward stochastic differential equations in finance[J]. Math. Finance, 1997, 7(1): 1–71. DOI:10.1111/mafi.1997.7.issue-1
[6] Hu Y, Peng S. On the comparison theorem of multidimensional BSDEs[J]. C. R. Acad. Sci. Paris, Ser. I, 2006, 343: 135–140. DOI:10.1016/j.crma.2006.05.019
[7] Hu Y. On Jensen's inequality for g-expectation and for nonlinear expectation[J]. Arc. Math., 2005, 85: 572–580. DOI:10.1007/s00013-005-1440-9
[8] Jiang L. Converse comparison theorems for backward stochastic differential equations[J]. Stat. Probability Letters, 2005, 71: 173–183. DOI:10.1016/j.spl.2004.10.032
[9] Jiang L, Chen Z. On Jensen's inequality for g-expectation[J]. Chin. Ann. Math., 2004, 25B: 401–412.
[10] Li Baoming. Jensen inequality of g-expectation and its applications[J]. Journal of Shandong University, 2000, 4(35): 413–417.
[11] Pardoux E, Peng S. Backward stochastic differential equations and quasi-linear parabolic partial differential equations[C]. Stochastic Partial Differential Equations and Their Applications[A]. Charlotte, NC, 1991, Lecture Notes in Control and Inform Sci. , Berlin: Springer, 1992, 176: 200-217.
[12] Pardoux E, Peng S. Adapted solution of a backward stochastic differential equation[J]. Systems Control Letters, 1990, 14(1): 55–61. DOI:10.1016/0167-6911(90)90082-6
[13] Peng S. Stochastic Hamilton-Jacobi-Bellman equations[J]. SIAM J. Control Optim., 1992, 30(2): 284–304. DOI:10.1137/0330018
[14] Peng S. Backward SDE and related g-expectation[A]. Karoui N El, Mazliak L (Eds). Backward stochastic differential equation[C]. Pitman Res. Notes Math. Ser. , Harlow: Longman, 1997, 364: 141-159.
[15] Royer M. Backward stochastic differential equations with jumps and related non-linear expectations[J]. Stochastic Processes Appl., 2006, 116: 1358–1376. DOI:10.1016/j.spa.2006.02.009
[16] Tang S, Li X. Necessary conditions for optimal control of stochastic systems with random jumps[J]. SIAM J. Control Optim., 1994, 32(5): 1447–1475. DOI:10.1137/S0363012992233858