Journal of Mathematics (数学杂志), 2014, Vol. 34, Issue 6: 1073-1084
CONVERGENCE AND STABILITY OF NUMERICAL METHODS FOR STOCHASTIC DIFFERENTIAL DELAY EQUATION
CHENG Sheng-min1, ZHOU Shao-bo2    
1. Department of Basic Teaching, Zhengzhou Huaxin College, Xinzheng 451100, China;
2. School of Mathematics and Statistics, Huazhong University of Science and Technology, Wuhan 430074, China
Abstract: This paper investigates the convergence and mean-square stability of the balanced method for stochastic differential delay equations. By applying the semi-martingale convergence theorem, a condition for the asymptotic and mean-square stability of the exact solution that is weaker than the traditional one is given, and it is shown that the balanced method reproduces the mean-square stability of the exact solution of the stochastic differential delay equation.
Key words: balanced method     stochastic differential delay equation     convergence     mean-square stability     asymptotic stability    
1 Introduction

In many fields of science, the combined dynamical effects of noise and delay have received increasing attention. Stochastic differential delay equations (SDDEs) are often used to describe dynamical systems that depend not only on present states but also on their past history under noise disturbance. Unfortunately, explicit solutions can rarely be obtained for SDDEs. Therefore, it is necessary to establish effective numerical methods for such systems. Recently, numerical methods for SDDEs as well as for stochastic differential equations (SDEs) have received a great deal of attention [11, 18]. Many efficient numerical methods have been proposed for solving different types of SDEs. The majority of these methods are explicit or semi-implicit Euler-Maruyama (EM for short) methods [3, 5, 6, 11-13, 23]. For example, Liu [7] studied the convergence and mean-square (MS) stability of the semi-implicit Euler method for linear SDDEs. Wu [21] showed that the backward EM method reproduces the stability of nonlinear SDDEs using the martingale technique. Mao [16] considered the exponential stability of equidistant EM approximations for SDDEs. Zhang [22] established the strong convergence with order $\gamma=0.5$ and the MS-stability of the split-step backward Euler method. Buckwar [2] introduced the convergence of numerical methods for SDDEs.

For the EM method, the implicit part is restricted to the drift term (drift-implicit), which is well adapted to stiff systems with small stochastic noise intensity or additive noise. The method is unsuitable for solving stiff SDEs in which the stochastic term (or diffusion term) plays an essential role, since strongly explicit methods work unreliably and produce large errors for step sizes that are not very small. In order to improve the stability properties of numerical methods for solving stiff SDEs, Milstein [17] proposed the implicit balanced method, a specific method with a kind of balance between approximating stochastic terms in the numerical scheme. Several authors constructed modified implicit methods (see [19-20]). Wang [20] discussed the convergence and MS-stability of several split-step backward balanced Milstein methods for stiff stochastic differential systems. Wang [19] considered the MS-stability of the Milstein method for SDDEs. Gan [5] dealt with asymptotic stability by using the discretized Itô formula developed by Appleby [1]. To the authors' best knowledge, there is little work on the convergence and stability of the balanced method for stochastic differential delay equations.

In the paper, we consider a scalar test equation of the form

$\left\{ \begin{array}{ll} dx(t)=[ax(t)+bx(t-\tau)]dt+[cx(t)+dx(t-\tau)]dw(t), \ \ t \geq 0, \\ x(t)=\xi(t), \ \ t\in [-\tau, 0], \end{array} \right.$ (1.1)

where $a, b, c, d\in R$, $\tau$ is a positive fixed delay, and $w(t)$ is a one-dimensional standard Wiener process. $\xi(t)$ is a $C([-\tau, 0];R)$-valued initial segment with $E[\|\xi\|^2]<\infty$, where $\|\xi\|=\sup\limits_{-\tau\leq t\leq 0}{|\xi(t)|}.$ We shall investigate the convergence and the MS-stability of eq. (1.1). Compared with the traditional conditions, a weaker condition for the asymptotic and mean-square stability of the exact solution is given by applying the semi-martingale convergence theorem in the next section. Section 3 proves that the numerical solution converges to the exact solution with strong order 0.5. In Section 4, we shall show that the balanced method reproduces mean-square stability.
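For intuition, the test equation (1.1) can be simulated by a simple Euler-Maruyama discretization. The following is a minimal sketch; the function name `simulate_sdde` and the coefficient values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def simulate_sdde(a, b, c, d, xi, tau, T, h, rng):
    """Euler-Maruyama path of dx = [a x(t) + b x(t-tau)]dt
    + [c x(t) + d x(t-tau)]dw(t), with history x(t) = xi(t) on [-tau, 0].
    Assumes tau = m*h for an integer m."""
    m = int(round(tau / h))
    N = int(round(T / h))
    # X[0..m] holds the history on [-tau, 0]; X[m] = x(0).
    X = np.empty(N + m + 1)
    for k in range(m + 1):
        X[k] = xi(-tau + k * h)
    for n in range(m, N + m):
        dw = rng.normal(0.0, np.sqrt(h))
        drift = a * X[n] + b * X[n - m]
        diff = c * X[n] + d * X[n - m]
        X[n + 1] = X[n] + drift * h + diff * dw
    return X

rng = np.random.default_rng(0)
path = simulate_sdde(a=-3.0, b=0.5, c=0.3, d=0.2,
                     xi=lambda t: 1.0, tau=1.0, T=10.0, h=0.01, rng=rng)
```

The chosen coefficients satisfy the stability condition (2.1) of the next section, so a typical path decays toward zero.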

2 Stability of Analytical Solution

Throughout this paper, unless otherwise specified, let $|x|$ be the Euclidean norm of $x\in R^n$. If $A$ is a vector or matrix, its transpose is denoted by $A^T$. If $A$ is a matrix, its trace norm is denoted by $|A|=\sqrt{{\rm trace}(A^TA)}$, while its operator norm is denoted by $\|A\|=\sup\{|Ax|:|x|=1\}$ (without any confusion with $\|\varphi\|$). Let $\tau>0$; we denote by $C([-\tau, 0];R^n)$ the family of continuous functions from $[-\tau, 0]$ to $R^n$ with the norm $\|\varphi\|=\sup\limits_{-\tau\leq\theta\leq0}|\varphi(\theta)|.$

Let $(\Omega, {\cal F}, \{{\cal F}_t\}_{t\geq 0}, P)$ be a complete probability space with a filtration $\{ {\cal F}_t\}_{t\geq 0}, $ satisfying the usual conditions (i.e., it is increasing and right continuous and ${\cal F}_0$ contains all P-null sets). Let $p>0, $ denote by $L_{{\cal F}_t}^p([-\tau, 0];R^n)$ the family of all ${\cal F}_t$-measurable and $C([-\tau, 0];R^n)$-valued random variables $\xi$ such that $E\|\xi\|^p<+\infty.$

Under the above assumptions, eq. (1.1) has a unique strong solution and has the following properties (see [17]).

Lemma 2.1 For any given $T>0$, there exist positive constants $C_1, C_2$ and $M$ such that the solution of eq. (1.1) satisfies

$\begin{aligned} &E(\sup\limits_{-\tau\leq t\leq T}|x(t)|^2)\leq C_1[1+E\|\xi\|^2], \ \ t\in[-\tau, T], \\ &E|x(t)-x(s)|^2\leq C_2(t-s), \ \ 0\leq s<t\leq T, \ t-s<1, \\ &E|ax(t)+bx(t-\tau)|^2\leq \sqrt{2M(1+E\|\xi\|)}, \ \ t\in[0, T].\end{aligned}$

In order to establish a result of stability under a weaker condition compared to the traditional condition, the following semi-martingale convergence theorem [14] plays an important role.

Lemma 2.2 Let $\{A_t\}_{t\geq 0}$ and $\{U_t\}_{t\geq 0}$ be two continuous adapted increasing processes on $t\geq 0$ with $A_0=U_0=0$ a.s. Let $\{M_t\}_{t\geq 0}$ be a real-valued continuous local martingale with $M_0=0$ a.s. Let $\zeta$ be a nonnegative ${\cal F}_0$-measurable random variable. Define

$X_t=\zeta+A_t-U_t+M_t \ \ {\rm for } \ \ t\geq 0.$

If $X_t$ is nonnegative, then

$\{\lim\limits_{t\rightarrow \infty}A_t<\infty\} \subset \{\lim\limits_{t\rightarrow \infty}X_t<\infty\} \cap \{\lim\limits_{t\rightarrow \infty}U_t<\infty\}\ \ {\rm a.s.}, $

where $C\subset D$ a.s. means $P(C\cap D^c)=0.$ In particular, if $\lim\limits_{t\rightarrow\infty}A_t<\infty\ \ {\rm a.s.}, $ then for almost all $\omega \in \Omega$

$\lim\limits_{t\rightarrow \infty}X_t(\omega)<\infty, \lim\limits_{t\rightarrow \infty}U_t(\omega)<\infty$

and

$-\infty<\lim\limits_{t\rightarrow \infty}M(t, \omega)<\infty.$

That is, all three processes $X(t), U(t)$ and $M(t)$ converge to finite random variables.

Theorem 2.3 Suppose that the condition

$2a+c^2+2|b+cd|+d^2<0$ (2.1)

holds. Then the solution of eq. (1.1) is mean-square stable, i.e., $\limsup\limits_{t\rightarrow\infty}E[|x(t)|^2]=0.$ Moreover, the solution of eq. (1.1) is also asymptotically stable, i.e., $\limsup\limits_{t\rightarrow\infty}|x(t)|=0$ a.s.

Proof Define $V(x)=x^2.$ By the Itô formula and the inequality $2ab\leq a^2+b^2$, we compute

$ \quad {\rm d}V(x(t))\\ =(2x(t)[ax(t)+bx(t-\tau)]+[cx(t)+dx(t-\tau)]^2){\rm d}t+2x(t)[cx(t)+dx(t-\tau)]dw(t)\\ =[(2a+c^2)x^2(t)+d^2x^2(t-\tau)+2(b+cd)x(t)x(t-\tau)]{\rm d}t+2x(t)[cx(t)+dx(t-\tau)]dw(t)\\ \leq[(2a+c^2)x^2(t)+d^2x^2(t-\tau)+|b+cd|(x^2(t)+x^2(t-\tau))]{\rm d}t\\ \quad +2x(t)[cx(t)+dx(t-\tau)]dw(t)\\ =[(2a+c^2+|b+cd|)x^2(t)+(d^2+|b+cd|)x^2(t-\tau)]{\rm d}t+2x(t)[cx(t)+dx(t-\tau)]dw(t). $

Since ${\rm d}[{\rm e}^{\varepsilon t}V(x(t))]= {\rm e}^{\varepsilon t}[\varepsilon V(x(t)){\rm d}t+{\rm d}V(x(t))]$ for any $\varepsilon >0, $ integrating from $0$ to $t$ and using the above estimate for ${\rm d}V$ give

$\quad {\rm e}^{\varepsilon t}V(x(t))\\ \leq V(x(0))+\int^t_0{\rm e}^{\varepsilon s}[(\varepsilon+2a+c^2+|b+cd|)x^2(s)+(d^2+|b+cd|)x^2(s-\tau)]{\rm d}s\\ \quad +2\int^t_0{\rm e}^{\varepsilon s}x(s)[cx(s)+dx(s-\tau)]dw(s)\\ =V(x(0))+\int^t_0(\varepsilon+2a+c^2+|b+cd|){\rm e}^{\varepsilon s}x^2(s){\rm d}s+{\rm e}^{\varepsilon\tau}\int^{t-\tau}_{-\tau}(d^2+|b+cd|){\rm e}^{\varepsilon s}x^2(s){\rm d}s\\ \quad +2\int^t_0{\rm e}^{\varepsilon s}x(s)[cx(s)+dx(s-\tau)]dw(s)\\ \leq V(x(0))+{\rm e}^{\varepsilon\tau}\int^0_{-\tau}(d^2+|b+cd|){\rm e}^{\varepsilon s}x^2(s){\rm d}s\\ \quad +\int^t_0[\varepsilon+2a+c^2+|b+cd|+{\rm e}^{\varepsilon\tau}(d^2+|b+cd|)]{\rm e}^{\varepsilon s}x^2(s){\rm d}s\\ \quad +2\int^t_0{\rm e}^{\varepsilon s}x(s)[cx(s)+dx(s-\tau)]dw(s).$ (2.2)

Since $2a+c^2+2|b+cd|+d^2<0, $ by continuity we may choose $\varepsilon>0$ so small that

$\varepsilon+2a+c^2+|b+cd|+{\rm e}^{\varepsilon\tau}(d^2+|b+cd|)<0.$

The drift integral in (2.2) is then nonpositive, so applying Lemma 2.2 to the right-hand side of (2.2) shows that $\lim\limits_{t\rightarrow\infty}{\rm e}^{\varepsilon t}x^2(t)$ is finite almost surely. That is,

$\limsup\limits_{t\rightarrow\infty}|x(t)|^2=0 \ \ {\rm a.s.}$

Taking expectations in (2.2) gives a constant $M^{\prime}$ such that $E[{\rm e}^{\varepsilon t}V(x(t))]\leq M^{\prime}$, i.e., $E|x(t)|^2\leq M^{\prime}{\rm e}^{-\varepsilon t}$, and hence $\limsup\limits_{t\rightarrow\infty}E|x(t)|^2=0.$

Remark 2.4 In Theorem 2.3, condition $ 2a+c^2+2|b+cd|+d^2<0 $ is weaker than the traditional condition (see [6-7, 19])

$2a+2|b|+(|c|+|d|)^2<0.$ (2.3)

That is, $ 2a+2|b|+(|c|+|d|)^2<0 $ implies that $ 2a+c^2+2|b+cd|+d^2<0. $ In fact,

$ \quad 2a+c^2+2|b+cd|+d^2\\ =2a+2|b|+2|b+cd|+c^2+2|c||d|+d^2-2|b|-2|c||d|\\ =2a+2|b|+(|c|+|d|)^2-(2|b|+2|c||d|-2|b+cd|)\\ \leq2a+2|b|+(|c|+|d|)^2, $

noting that $|b+cd|\leq|b|+|c||d|.$
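The inequality chain of Remark 2.4 can be checked numerically. The sketch below is illustrative; the function names and the sample coefficients $(a,b,c,d)=(-1.5,1,1,-1)$, for which (2.1) holds while (2.3) fails, are assumptions chosen for demonstration.

```python
def cond_new(a, b, c, d):
    """Left-hand side of the new condition (2.1): 2a + c^2 + 2|b + cd| + d^2."""
    return 2 * a + c**2 + 2 * abs(b + c * d) + d**2

def cond_trad(a, b, c, d):
    """Left-hand side of the traditional condition (2.3): 2a + 2|b| + (|c| + |d|)^2."""
    return 2 * a + 2 * abs(b) + (abs(c) + abs(d))**2

# Remark 2.4: cond_new <= cond_trad always, so (2.3) < 0 implies (2.1) < 0.
# Sample coefficients where the new condition is strictly weaker:
a, b, c, d = -1.5, 1.0, 1.0, -1.0
```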

3 Convergence of Stochastic Differential Delay Equations

In this section, we apply the balanced implicit method (see [4, 16]) to eq. (1.1), which yields the numerical scheme

$X_{n+1}=X_n+(aX_n+bX_{n-m})h+(cX_n+dX_{n-m})\Delta\omega_n+C(X_n)(X_n-X_{n+1}),$ (3.1)

where $h>0$ is a stepsize satisfying $\tau=mh$ for a positive integer $m.$ Let $t_n=nh$ $(n\geq -m)$; $X_n$ is an approximation to $x(t_n)$, and for $t_n\leq 0$ we set $X_n=\xi(t_n)$. Moreover, the increments $\Delta\omega_n=\omega(t_{n+1})-\omega(t_n), n=0, 1, 2, \cdots, $ are independent $N(0, h)$-distributed Gaussian random variables. $C(X_n)$ is given by

$ C(X_n)=C_0(X_n)h+C_1(X_n)|\Delta\omega_n|=c_{0n}h+c_{1n}|\Delta\omega_n|, $

where $c_{0n}=C_0(X_n)$ and $c_{1n}=C_1(X_n)$ are called control functions, and are often chosen as constants. The control functions must satisfy the following conditions.

Assumption 3.1 The $c_{0n}$ and $c_{1n}$ are bounded functions. For any real numbers $\alpha_0 \in [0, \overline{\alpha}_0]$ and $\alpha_1\geq0, $ where $\overline{\alpha}_0\geq h$ for all step sizes $h$ considered and $(t, x)\in[0, \infty)\times R, $ the quantity $ M(x) =1 + \alpha_0c_{0n}(x)+\alpha_1c_{1n}(x)$ has an inverse and satisfies $|(M(x))^{-1}|\leq C <\infty, $ where $C$ is a positive constant.
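For the scalar equation (1.1), scheme (3.1) can be solved explicitly for $X_{n+1}$. The following one-step sketch assumes constant control functions $c_{0n}=c_0$, $c_{1n}=c_1$; the function name `balanced_step` is illustrative.

```python
def balanced_step(Xn, Xnm, a, b, c, d, c0, c1, h, dw):
    """One step of the scalar balanced scheme (3.1):
    X_{n+1} = X_n + (a X_n + b X_{n-m})h + (c X_n + d X_{n-m})dw
              + C(X_n)(X_n - X_{n+1}),
    with C(X_n) = c0*h + c1*|dw|, solved for X_{n+1}."""
    C = c0 * h + c1 * abs(dw)
    drift = (a * Xn + b * Xnm) * h
    diff = (c * Xn + d * Xnm) * dw
    return Xn + (drift + diff) / (1.0 + C)
```

With $c_0=c_1=0$ the step reduces to the explicit Euler-Maruyama step, as in case (ⅳ) of Theorem 4.2 below.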

This section proves the convergence of scheme (3.1), which involves two useful definitions. Note that the various constants throughout the section are denoted by the same letter $C$, except for $c_{0n}$ and $c_{1n}.$

Definition 3.2 (see [4]) The local error of the above approximation of the solution $x(t)$ of eq. (1.1) is the sequence of random variables $\delta_{n+1}=x(t_{n+1})-X(t_{n+1}), n=0, 1, \cdots, N-1.$ Here $X(t_{n+1})$ is the value obtained after just one step of eq. (3.1) started from the exact value $x(t_n)$, i.e.,

$ \begin{aligned} X(t_{n+1})&= x(t_{n})+(ax(t_{n})+bx(t_{n}-\tau))h+(cx(t_{n})+dx(t_{n}-\tau))\Delta\omega_n\\&\quad +C(x(t_{n}))(x(t_{n})-X(t_{n+1})). \end{aligned}$

The global error of the above approximation of the solution $x(t)$ of eq. (1.1) is the sequence of random variables $\varepsilon_{n}=x(t_{n})-X_n, n=1, \cdots, N.$

We shall prove that approximation (3.1) is convergent with order $\gamma=\frac{1}{2}, $ which relies on the following result established by Buckwar (see [4]).

Lemma 3.3 Suppose that the method defined by (3.1) is such that the estimates

$\max\limits_{0\leq n\leq N}|E(\delta_{n+1})|\leq Ch^{p_1}\ \ {\rm as} \ \ h\rightarrow0,$ (3.2)
$\max\limits_{0\leq n\leq N}(E(\delta_{n+1})^2)^{1/2}\leq Ch^{p_2}\ \ {\rm as} \ \ h\rightarrow0$ (3.3)

hold with $p_2\geq \frac{1}{2}$ and $p_1\geq p_2+\frac{1}{2}, $ where the constant $C$ does not depend on $h$, but may depend on $T$ and the initial data. That is, method (3.1) is consistent with order $p_1$ in the mean and with order $p_2$ in the mean-square sense. Then approximation (3.1) for eq. (1.1) is convergent with order $p=p_2-\frac{1}{2}.$ That is, convergence is in the mean-square sense and

$\max\limits_{0\leq n\leq N}(E|\varepsilon_{n}|^2)^{1/2}\leq C h^{p}\ \ {\rm as } \ \ h\rightarrow0, \ \ p=p_2-\frac{1}{2}.$ (3.4)

Theorem 3.4 The numerical solution produced by the method (3.1) converges to the exact solution of eq. (1.1) on the mesh-point in the mean-square sense with strong order $0.5, $ i.e. there exists a positive constant $C$ such that

$\max\limits_{0\leq n\leq N}(E|\varepsilon_{n}|^2)^{1/2}\leq Ch^{1/2}\ \ {\rm as} \ \ h\rightarrow0.$ (3.5)

Proof To prove (3.5), we first show that estimate (3.2) holds for the balanced method (3.1) with $p_1=3/2.$ For this purpose, consider the local Euler approximation step defined by

$X^E(t_{n+1})=x(t_{n})+(ax(t_{n})+bx(t_{n}-\tau))h+(cx(t_{n})+dx(t_{n}-\tau))\Delta \omega_n, $

for $n=0, 1, 2, \cdots, N-1.$ Noting that $|E[x(t_{n+1})-X^E(t_{n+1})]|\leq Ch^2$ (see [15]), one can deduce that

$\begin{aligned} |E(\delta_{n+1})|&=|E[x(t_{n+1})-X(t_{n+1})]|\\&\leq|E[x(t_{n+1})-X^E(t_{n+1})]|+|E[X^E(t_{n+1})-X(t_{n+1})]|\\&\leq Ch^2+|E[X^E(t_{n+1})-X(t_{n+1})]|. \end{aligned}$ (3.6)

Writing $C_n\equiv C(x(t_n)), $ we compute

$\quad |E[X^E(t_{n+1})-X(t_{n+1})]|\\ =|E(1-\frac{1}{1+C_n})[(ax(t_{n})+bx(t_{n}-\tau))h+(cx(t_{n})+dx(t_{n}-\tau))\Delta \omega_n]|\\ =|E[\frac{C_n}{1+C_n}(ax(t_{n})+bx(t_{n}-\tau))h+\frac{C_n}{1+C_n}(cx(t_{n})+dx(t_{n}-\tau))\Delta \omega_n]|.$ (3.7)

Exploiting the symmetry property of $\Delta \omega_n$ (note that $C_n$ depends on $\Delta\omega_n$ only through $|\Delta\omega_n|$), we obtain

$|E[\frac{C_n}{1+C_n}(cx(t_{n})+dx(t_{n}-\tau))\Delta \omega_n]| =|E[(cx(t_{n})+dx(t_{n}-\tau))E(\frac{C_n}{1+C_n}\Delta \omega_n|{\cal{F}}_{t_n})]|=0.$

Thus (3.7) reduces to

$|E[X^E(t_{n+1})-X(t_{n+1})]|=|E[\frac{C_n}{1+C_n}(ax(t_{n})+bx(t_{n}-\tau))h]|.$

Since

$E[|C_n|\mid{\cal{F}}_{t_n}]=E[\,|c_{0n}h+c_{1n}|\Delta \omega_n||\mid{\cal{F}}_{t_n}] \leq |c_{0n}|h+\frac{2}{\sqrt{2\pi}}|c_{1n}|h^{1/2}, $

noting that $x(t_n)$ and $x(t_n-\tau)$ are ${\cal{F}}_{t_n}$-measurable, and making use of Assumption 3.1 and Lemma 2.1, we compute

$\quad |E[\frac{C_n}{1+C_n}(ax(t_{n})+bx(t_{n}-\tau))h]|\\ \leq E[\frac{|C_n|}{|1+C_n|}|ax(t_{n})+bx(t_{n}-\tau)|h]\\ \leq ChE[|C_n||ax(t_{n})+bx(t_{n}-\tau)|]\\ =ChE[|ax(t_{n})+bx(t_{n}-\tau)|E(|C_n|\mid{\cal{F}}_{t_n})] \\ \leq Ch^{3/2}E[|ax(t_{n})+bx(t_n-\tau)|] \leq Ch^{3/2}.$ (3.8)

Substituting for (3.8) into (3.6) yields

$\max\limits_{0\leq n\leq N}|E\delta_{n+1}|\leq Ch^2+Ch^{3/2}\leq Ch^{3/2}.$

Therefore (3.2) holds for $p_1=3/2.$ In the following, we shall prove that (3.3) holds. Applying the inequality $(a+b)^2\leq 2(a^2+b^2), $ we have

$\begin{aligned} E|\delta_{n+1}|^2&=E[|x(t_{n+1})-X(t_{n+1})|^2]\\&\leq2E[|x(t_{n+1})-X^E(t_{n+1})|^2]+2E[|X^E(t_{n+1})-X(t_{n+1})|^2]\\&\leq Ch^2+2E[|X^E(t_{n+1})-X(t_{n+1})|^2], \end{aligned}$ (3.9)

noting that $E[|x(t_{n+1})-X^E(t_{n+1})|^2|{\cal{F}}_{t_n}]\leq Ch^2$ (see [15]). Applying Assumption 3.1, we compute

$\quad E[|X^E(t_{n+1})-X(t_{n+1})|^2]\\ =E[|1-\frac{1}{1+C_n}|^2|(ax(t_{n})+bx(t_{n}-\tau))h+(cx(t_{n})+dx(t_{n}-\tau))\Delta \omega_n|^2]\\ = E[|\frac{C_n}{1+C_n}|^2[|(ax(t_{n})+bx(t_{n}-\tau))h+(cx(t_{n})+dx(t_{n}-\tau))\Delta \omega_n|^2]]\\ \leq2hE[|\frac{C_n}{1+C_n}(ax(t_{n})+bx(t_{n}-\tau))|^2] +2E[|\frac{C_n}{1+C_n}(cx(t_{n})+dx(t_{n}-\tau))\Delta \omega_n|^2]\\ \leq2C^2hE[|C_n(ax(t_{n})+bx(t_{n}-\tau))|^2] +2C^2E[|C_n(cx(t_{n})+dx(t_{n}-\tau))\Delta \omega_n|^2]\\ \leq2C^2hE[|ax(t_{n})+bx(t_{n}-\tau)|^2E[|C_n|^2|{\cal{F}}_{t_n}]]\\ \quad +2C^2E[|cx(t_{n})+dx(t_{n}-\tau)|^2E[|C_n\Delta \omega_n|^2|{\cal{F}}_{t_n}]].$ (3.10)

Making use of the properties

$ E[(\Delta \omega_n)^{2i}]=(2i-1)!!h^i, E[(\Delta \omega_n)^{2i-1}]=0, i=1, 2, \cdots, \\ E[|\Delta \omega_n|^{2i-1}]=\frac{2^i}{\sqrt{2\pi}}(i-1)!h^{\frac{2i-1}{2}}, i=1, 2, \cdots, $

where $(2i-1)!!=(2i-1)(2i-3)\cdots 3\cdot1, (i-1)!=(i-1)(i-2)\cdot\cdot\cdot 2\cdot 1, $ we compute

$\begin{aligned} E[|C_n|^2|{\cal{F}}_{t_n}]&=E[(c_{0n}h+c_{1n}|\Delta \omega_n|)^2|{\cal{F}}_{t_n}]\\&\leq c^2_{0n}h^2+2c_{0n}c_{1n}hE[|\Delta \omega_n||{\cal{F}}_{t_n}]+E[c^2_{1n}|\Delta \omega_n|^2|{\cal{F}}_{t_n}]\\&\leq c^2_{0n}h^2+\frac{4}{\sqrt{2\pi}}c_{0n}c_{1n}h^{3/2}+c^2_{1n}h\leq Ch,\end{aligned}$ (3.11)
$\begin{aligned}E[|C_n\Delta\omega_n|^2|{\cal{F}}_{t_n}]&=E[(c_{0n}h\Delta \omega_n+c_{1n}|\Delta \omega_n|\Delta \omega_n)^2|{\cal{F}}_{t_n}]\\&\leq c^2_{0n}h^2E[(\Delta \omega_n)^2|{\cal{F}}_{t_n}] +2c_{0n}c_{1n}hE[|\Delta \omega_n|^3|{\cal{F}}_{t_n}]+c^2_{1n}E[(\Delta \omega_n)^4|{\cal{F}}_{t_n}]\\&\leq c^2_{0n}h^3+\frac{8}{\sqrt{2\pi}}c_{0n}c_{1n}h^{5/2}+3c^2_{1n}h^2\\&\leq Ch^2.\end{aligned}$ (3.12)
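The Gaussian moment identities used above admit closed forms that are easy to check numerically. A small sketch (function names are illustrative assumptions):

```python
import math

def odd_abs_moment(i, h):
    """E|dw|^{2i-1} = 2^i / sqrt(2*pi) * (i-1)! * h^{(2i-1)/2} for dw ~ N(0, h)."""
    return 2**i / math.sqrt(2 * math.pi) * math.factorial(i - 1) * h**((2 * i - 1) / 2)

def even_moment(i, h):
    """E[dw^{2i}] = (2i-1)!! * h^i for dw ~ N(0, h)."""
    df = 1
    for k in range(2 * i - 1, 0, -2):
        df *= k  # double factorial (2i-1)!!
    return df * h**i
```

For instance, $i=1$ recovers $E|\Delta\omega_n|=\frac{2}{\sqrt{2\pi}}h^{1/2}$ and $E[(\Delta\omega_n)^2]=h$, the values used in (3.11) and (3.12).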

Substituting for (3.11)-(3.12) into (3.10), and making use of Lemma 2.1 yields

$E|X^E(t_{n+1})-X(t_{n+1})|^2 \leq Ch^2E[|ax(t_{n})+bx(t_{n}-\tau)|^2]+Ch^2E[|cx(t_{n})+dx(t_{n}-\tau)|^2]\leq C h^2. $

By this and (3.9), condition (3.3) holds with $p_2=1.$ Lemma 3.3 then implies $ \max\limits_{0\leq n\leq N}(E|\varepsilon_{n}|^2)^{1/2}\leq Ch^{1/2}.$
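The strong order $0.5$ can be observed empirically by comparing balanced-method solutions at several step sizes against a fine-grid path driven by the same Brownian increments. This is a rough sketch under stated assumptions: the coefficients, control constants, sample size, and step sizes are illustrative, and the estimated order is subject to Monte Carlo noise.

```python
import numpy as np

def balanced_path(a, b, c, d, c0, c1, xi, tau, T, h, dW):
    """Balanced-method path on a grid of size h; dW holds the T/h
    Brownian increments for this grid."""
    m = int(round(tau / h)); N = int(round(T / h))
    X = np.empty(N + m + 1)
    X[:m + 1] = [xi(-tau + k * h) for k in range(m + 1)]
    for n in range(N):
        C = c0 * h + c1 * abs(dW[n])
        i = n + m
        X[i + 1] = X[i] + ((a * X[i] + b * X[i - m]) * h
                           + (c * X[i] + d * X[i - m]) * dW[n]) / (1 + C)
    return X[m:]  # values on [0, T]

rng = np.random.default_rng(42)
a, b, c, d, c0, c1 = -3.0, 0.5, 0.3, 0.2, 3.0, 0.3
tau, T = 1.0, 2.0
h_ref = 2.0**-9              # fine reference grid
hs = [2.0**-3, 2.0**-4, 2.0**-5]
M = 100                      # sample paths per step size
errs = []
for h in hs:
    err = 0.0
    for _ in range(M):
        dW_ref = rng.normal(0.0, np.sqrt(h_ref), int(T / h_ref))
        x_ref = balanced_path(a, b, c, d, c0, c1, lambda t: 1.0, tau, T, h_ref, dW_ref)
        k = int(round(h / h_ref))
        dW = dW_ref.reshape(-1, k).sum(axis=1)  # coarsen the same Brownian path
        x = balanced_path(a, b, c, d, c0, c1, lambda t: 1.0, tau, T, h, dW)
        err += (x[-1] - x_ref[-1])**2
    errs.append(float(np.sqrt(err / M)))
```

Halving $h$ should shrink the root-mean-square endpoint error by roughly $\sqrt{2}$, consistent with order $1/2$.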

4 Mean-Square Stability of Numerical Scheme

In this section, we investigate the mean-square stability of the balanced method. We present the main result of the paper as follows.

Definition 4.1 Under condition (2.3), a numerical method is said to be mean-square stable (MS-stable) if there exists an $h_{\{a, b, c, d, c_{0n}, c_{1n}\}}>0$ such that any application of the method to problem (1.1) generates numerical approximations $X_n$ satisfying $\lim\limits_{n\rightarrow\infty}E|X_n|^2=0$ for all $h\in(0, h_{\{a, b, c, d, c_{0n}, c_{1n}\}}).$

Theorem 4.2 Assume that $C<1$ and condition (2.3) is satisfied.

(ⅰ) If $c_{0n}=-a-|b|, c_{1n}>0$, then the balanced method is MS-stable for $h<\min\{h_1^*, 1\}, $ where $h_1^*=\frac{(1/C^2-1)^2}{(4/\sqrt{2\pi}c_{1n}+(|d|+|c|)^2+c^2_{1n})^2}.$

(ⅱ) If $c_{1n}<0, a+c_{0n}>|b|, $ then the balanced method is MS-stable for $h<h_2^*, $ where $h^*_2=\frac{1-C^2}{C^2[2a+2c_{0n}+(|d|+|c|)^2+c^2_{1n}+2|b|+(|a+c_{0n}|+|b|)^2]}.$

(ⅲ) If $c_{1n}<0, c_{0n}<0, $ then the balanced method is MS-stable for $h<\min\{h^*_3, 1\}, $ where $\displaystyle h_3^*=\frac{1/C^2-1}{(|a+c_{0n}|+|b|+|c_{1n}|)^2}.$

(ⅳ) If $c_{1n}=0, c_{0n}=0, $ then the method is MS-stable for $h<\min\{1, \frac{-(2a+2|b|+(|d|+|c|)^2)}{(|a|+|b|)^2}\}; $ or if $c_{1n}=0, c_{0n}=0$ and $2a+2|b|+(|d|+|c|)^2+(|a|+|b|)^2<0, $ then the method is MS-stable for $h<1.$ In this case the balanced method (3.1) reduces to the EM method.
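The step-size thresholds of Theorem 4.2 are explicit and can be evaluated directly. The sketch below mirrors the formulas for $h_1^*$, $h_2^*$, $h_3^*$; the function names and the example parameter values in the test are assumptions for illustration.

```python
import math

def h1_star(C, c1, c, d):
    """Case (i): c0n = -a - |b|, c1n > 0."""
    return ((1 / C**2 - 1)**2
            / (4 / math.sqrt(2 * math.pi) * c1 + (abs(d) + abs(c))**2 + c1**2)**2)

def h2_star(C, a, b, c, d, c0, c1):
    """Case (ii): c1n < 0, a + c0n > |b|."""
    denom = C**2 * (2 * a + 2 * c0 + (abs(d) + abs(c))**2 + c1**2
                    + 2 * abs(b) + (abs(a + c0) + abs(b))**2)
    return (1 - C**2) / denom

def h3_star(C, a, b, c0, c1):
    """Case (iii): c1n < 0, c0n < 0."""
    return (1 / C**2 - 1) / (abs(a + c0) + abs(b) + abs(c1))**2
```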

Proof According to (3.1), we obtain

$X_{n+1}=\frac{1+ah+c\Delta\omega_n+C_n}{1+C_n}X_n+\frac{bh+d\Delta\omega_n}{1+C_n}X_{n-m},$ (4.1)

and

$ \begin{aligned} X^2_{n+1}&=\frac{(1+ah+c \Delta\omega_n+c_{0n}h+c_{1n}|\Delta\omega_n|)^2}{(1+C_n)^2}X^2_n +\frac{(bh+d \Delta\omega_n)^2}{(1+C_n)^2}X^2_{n-m}\\&\quad +\frac{2(1+ah+c \Delta\omega_n+c_{0n}h+c_{1n}|\Delta\omega_n|)(bh+d \Delta\omega_n)}{(1+C_n)^2}X_nX_{n-m}\\&=\frac{(1+ah+c \Delta\omega_n+c_{0n}h+c_{1n}|\Delta\omega_n|)^2}{(1+C_n)^2}X^2_n +\frac{(bh+d \Delta\omega_n)^2}{(1+C_n)^2}X^2_{n-m}\\&\quad +\frac{bh+b(a+c_{0n})h^2+bc_{1n}h|\Delta\omega_n|+dc(\Delta\omega_n)^2}{(1+C_n)^2}2X_nX_{n-m}\\&\quad +\frac{(bch+d+d(a+c_{0n})h)\Delta\omega_n+dc_{1n}|\Delta\omega_n|\Delta\omega_n}{(1+C_n)^2}2X_nX_{n-m}. \end{aligned} $

Taking expectation yields

$\begin{aligned} EX^2_{n+1}&\leq E[\frac{(1+ah+c\Delta\omega_n+c_{0n}h+c_{1n}|\Delta\omega_n|)^2}{(1+C_n)^2}X^2_n] +E[\frac{(bh+d\Delta\omega_n)^2}{(1+C_n)^2}X^2_{n-m}] \\&\quad +E[\frac{bh+b(a+c_{0n})h^2+bc_{1n}h|\Delta \omega_n|+dc(\Delta\omega_n)^2}{(1+C_n)^2}2X_nX_{n-m}]\\&\quad +E[\frac{(bch+d+d(a+c_{0n})h)\Delta\omega_n+dc_{1n}|\Delta\omega_n|\Delta\omega_n}{(1+C_n)^2}2X_nX_{n-m}]. \end{aligned}$ (4.2)

By the symmetry property of $\Delta\omega_n, $ then

$E[\frac{(bch+d+d(a+c_{0n})h)\Delta\omega_n+dc_{1n}|\Delta\omega_n|\Delta\omega_n}{(1+C_n)^2}2X_nX_{n-m}]=0.$

Noting that $E[\Delta \omega_n]=0, E[|\Delta \omega_n|]=\frac{2}{\sqrt{2\pi}}h^{1/2}, E[(\Delta \omega_n)^2]=h, E[|\Delta \omega_n|\Delta \omega_n]=0, $ compute

$\quad E[\frac{bh+b(a+c_{0n})h^2+bc_{1n}h|\Delta\omega_n|+dc(\Delta\omega_n)^2}{(1+C_n)^2}2X_nX_{n-m}]\\ \leq|bh+b(a+c_{0n})h^2|E|\frac{2X_nX_{n-m}}{(1+C_n)^2}| +|bc_{1n}|hE[\frac{2X_nX_{n-m}}{(1+C_n)^2}|\Delta\omega_n|]\\ \quad +|dc|E[|\frac{2X_nX_{n-m}}{(1+C_n)^2}(\Delta\omega_n)^2|] \\ \leq C^2|bh+b(a+c_{0n})h^2|(EX^2_n+EX^2_{n-m}) +|bc_{1n}|C^2hE[2|X_nX_{n-m}|E(|\Delta\omega_n||{\cal{F}}_{t_n})] \\ \quad +|dc|C^2E(2|X_nX_{n-m}|E[(\Delta\omega_n)^2|{\cal{F}}_{t_n}]) \\ \leq C^2[|bh+b(a+c_{0n})h^2|+\frac{2}{\sqrt{2\pi}}|bc_{1n}|h^{3/2}+|dc|h](EX^2_n+EX^2_{n-m})\\ \leq 2C^2[|bh+b(a+c_{0n})h^2|+\frac{2}{\sqrt{2\pi}}|bc_{1n}|h^{3/2}+|dc|h]\max\{EX^2_n, EX^2_{n-m}\}.$ (4.3)

Compute

$\quad E(1+(a+c_{0n})h+c\Delta \omega_n+c_{1n}|\Delta \omega_n|)^2 \\ = E[1+(a+c_{0n})^2h^2+c^2(\Delta \omega_n)^2+c^2_{1n}|\Delta \omega_n|^2 +2(a+c_{0n})h+2c\Delta \omega_n+2c_{1n}|\Delta \omega_n|\\ \quad +2(a+c_{0n})ch\Delta \omega_n+2(a+c_{0n})c_{1n}h|\Delta \omega_n|+2cc_{1n}|\Delta \omega_n|\Delta \omega_n] \\ =1+(a+c_{0n})^2h^2+c^2h+c^2_{1n}h +2(a+c_{0n})h+\frac{4}{\sqrt{2\pi}}c_{1n}\sqrt{h} +\frac{4}{\sqrt{2\pi}}(a+c_{0n})c_{1n}h^{3/2}\\ =1+\frac{4}{\sqrt{2\pi}}c_{1n}\sqrt{h}+(c^2+c^2_{1n}+2(a+c_{0n}))h +\frac{4}{\sqrt{2\pi}}(a+c_{0n})c_{1n}h^{3/2}+(a+c_{0n})^2h^2$ (4.4)

and

$E(bh+d \Delta \omega_n)^2= b^2h^2+d^2h.$ (4.5)

Substituting for (4.3)-(4.5) into (4.2), the result is

$ \begin{aligned} EX^2_{n+1}&\leq C^2[1+\frac{4}{\sqrt{2\pi}}c_{1n}h^{1/2}+(2a+2c_{0n}+(|d|+|c|)^2+c^2_{1n})h\\&\quad +\frac{4}{\sqrt{2\pi}}((a+c_{0n})c_{1n}+|bc_{1n}|)h^{3/2} +((a+c_{0n})^2+b^2)h^2\\&\quad +2|b||1+(a+c_{0n})h|h]\max\{EX^2_n, EX^2_{n-m}\}. \end{aligned}$

Let

$ \begin{aligned} P&=C^2[1+\frac{4}{\sqrt{2\pi}}c_{1n}h^{1/2}+(2a+2c_{0n}+(|d|+|c|)^2+c^2_{1n})h\\&\quad +\frac{4}{\sqrt{2\pi}}((a+c_{0n})c_{1n}+|bc_{1n}|)h^{3/2}+((a+c_{0n})^2+b^2)h^2 +2|b||1+(a+c_{0n})h|h]. \end{aligned} $

By recursive calculation we conclude that $E|X_n|^2\rightarrow0(n\rightarrow\infty)$ if $P<1.$ That is, if

$C^2[1+\frac{4}{\sqrt{2\pi}}c_{1n}h^{1/2}+(2a+2c_{0n}+(|d|+|c|)^2+c^2_{1n})h\\ +\frac{4}{\sqrt{2\pi}}((a+c_{0n})c_{1n}+|bc_{1n}|)h^{3/2}+((a+c_{0n})^2+b^2)h^2\\ +2|b||1+(a+c_{0n})h|h]<1,$ (4.6)

then method (3.1) is mean-square stable.

(ⅰ) If $c_{0n}=-a-|b|$ and $c_{1n}>0$, then under condition (2.3) (which forces $a<-|b|$) we have $c_{0n}>0$ and $a+c_{0n}=-|b|<0, $ so $1+(a+c_{0n})h>0$ for $h<|b|^{-1}$, and the left-hand side of (4.6) reduces to

$\quad C^2[1+\frac{4}{\sqrt{2\pi}}c_{1n}h^{1/2}+(2a+2c_{0n}+2|b|+(|d|+|c|)^2+c^2_{1n})h\\ \quad +\frac{4}{\sqrt{2\pi}}((a+c_{0n})c_{1n}+|b||c_{1n}|)h^{3/2}+(a+c_{0n}+|b|)^2h^2]\\ =C^2[1+\frac{4}{\sqrt{2\pi}}c_{1n}h^{1/2}+((|d|+|c|)^2+c^2_{1n})h]\\ \leq C^2[1+(\frac{4}{\sqrt{2\pi}}c_{1n}+(|d|+|c|)^2+c^2_{1n})h^{1/2}].$ (4.7)

It is obvious from the above that for $h<\min\{h_1^*, 1, |b|^{-1}\}$

$C^2[1+(\frac{4}{\sqrt{2\pi}}c_{1n}+(|d|+|c|)^2+c^2_{1n})h^{1/2}]<1, $

which implies that (4.6) holds, therefore $E|X_n|^2\rightarrow0.$

(ⅱ) If $c_{1n}<0$ and $a+c_{0n}>|b|, $ then $(a+c_{0n})c_{1n}+|bc_{1n}|=(a+c_{0n})c_{1n}-|b|c_{1n}=c_{1n}(a+c_{0n}-|b|)<0, $ so the left-hand side of (4.6) is bounded by

$C^2[1+(2a+2c_{0n}+(|d|+|c|)^2+c^2_{1n}+2|b|)h+(|a+c_{0n}|+|b|)^2h^2],$ (4.8)

and (4.6) holds whenever (4.8) is less than $1.$ For $h<\min\{1, h_2^*\}$,

$ C^2[1+(2a+2c_{0n}+(|d|+|c|)^2+c^2_{1n}+2|b|+(|a+c_{0n}|+|b|)^2)h]<1, $

which implies that (4.8) is less than $1$; hence $E|X_n|^2\rightarrow0.$

(ⅲ) If $c_{1n}<0$ and $c_{0n}<0, $ then condition (2.3) implies

$2a+2c_{0n}+(|d|+|c|)^2+2|b|<0, $

therefore

$\begin{aligned} P& \leq C^2[1+(2a+2c_{0n}+(|d|+|c|)^2+2|b|)h+c^2_{1n}h\\&\quad+2(|a+c_{0n}||c_{1n}|+|bc_{1n}|)h^{3/2}+(|a+c_{0n}|+|b|)^2h^2]\\&\leq C^2[1+c^2_{1n}h+2(|a+c_{0n}||c_{1n}|+|bc_{1n}|)h^{3/2}+(|a+c_{0n}|+|b|)^2h^2] \\& \leq C^2[1+[c^2_{1n}+2(|a+c_{0n}|+|b|)|c_{1n}|+(|a+c_{0n}|+|b|)^2]h]\\&=C^2[1+(|a+c_{0n}|+|b|+|c_{1n}|)^2h]. \end{aligned}$ (4.9)

If $C^2[1+(|a+c_{0n}|+|b|+|c_{1n}|)^2h]<1, $ then by (4.9) $P<1.$ This holds for $h<\min\{h^*_3, 1\}, $ so (4.6) holds and $E|X_n|^2\rightarrow0.$

(ⅳ) If $c_{1n}=0, c_{0n}=0, $ then $C_n=0$ and one may take $C=1, $ so (4.6) reduces to

$(2a+(|d|+|c|)^2)h+(a^2+b^2)h^2+2|b||1+ah|h<0.$ (4.10)

If $h<\min\{1, \frac{-(2a+2|b|+(|d|+|c|)^2)}{(|a|+|b|)^2}\}, $ then (4.10) holds. Alternatively, if

$2a+2|b|+(|d|+|c|)^2+(|a|+|b|)^2<0, $

then (4.10) holds for $h<1.$
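The MS-stability of the balanced scheme can be observed by a Monte Carlo estimate of $E|X_n|^2$. This is a rough empirical check, not a proof: the coefficients (chosen to satisfy (2.3), with $c_{0n}=-a-|b|$ and $c_{1n}>0$ as in case (ⅰ)), step size, and sample size are illustrative assumptions.

```python
import numpy as np

def ms_estimate(a, b, c, d, c0, c1, tau, T, h, M, seed=0):
    """Monte Carlo estimate of E|X_n|^2 at t = T for the balanced
    scheme (3.1), started from the constant history xi(t) = 1."""
    rng = np.random.default_rng(seed)
    m = int(round(tau / h)); N = int(round(T / h))
    X = np.ones((M, N + m + 1))  # row = one sample path
    for n in range(N):
        dW = rng.normal(0.0, np.sqrt(h), M)
        C = c0 * h + c1 * np.abs(dW)
        i = n + m
        X[:, i + 1] = X[:, i] + ((a * X[:, i] + b * X[:, i - m]) * h
                                 + (c * X[:, i] + d * X[:, i - m]) * dW) / (1 + C)
    return float(np.mean(X[:, -1]**2))

# a = -4, b = 0.5: condition (2.3) gives -8 + 1 + 0.25 < 0; c0 = -a - |b| = 3.5
ms_T1 = ms_estimate(-4.0, 0.5, 0.3, 0.2, c0=3.5, c1=0.5, tau=1.0, T=1.0, h=0.01, M=2000)
ms_T5 = ms_estimate(-4.0, 0.5, 0.3, 0.2, c0=3.5, c1=0.5, tau=1.0, T=5.0, h=0.01, M=2000)
```

The second moment should decay monotonically in $T$, consistent with $\lim_{n\to\infty}E|X_n|^2=0$.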

References
[1] Appleby J A D, Berkolaiko G, Rodkina A. Non-exponential stability and decay rates in nonlinear stochastic difference equations with unbounded noises[J]. Stochastics, 2009, 81: 99–127.
[2] Baker C T H, Buckwar E. Exponential stability in p-th mean of solutions and of convergent Euler-type solutions of stochastic delay differential equations[J]. Comput. Appl. Math., 2005, 184: 404–427. DOI:10.1016/j.cam.2005.01.018
[3] Cen L, Zhou S. Convergence of numerical solutions to neutral stochastic delay differential equations with Markovian switching and Poisson jump[J]. Appl. Math., 2010, 23: 219–227.
[4] Buckwar E. Introduction to the numerical analysis of stochastic delay differential equations[J]. Comput. Appl. Math., 2000, 125: 297–307. DOI:10.1016/S0377-0427(00)00475-1
[5] Hu L, Gan S, Wang X. Asymptotical stability of the balanced methods for stochastic jump-diffusion differential equations[J]. Comput. Appl. Math., 2013, 238: 126–143. DOI:10.1016/j.cam.2012.08.026
[6] Li R, Hou Y. Convergence and stability of numerical solutions to SDDEs with Markovian switching[J]. Comput. Appl. Math., 2006, 175: 1080–1091.
[7] Liu M, Cao W, Fan Z. Convergence and stability of the semi-implicit Euler method for a linear stochastic differential delay equation[J]. Comput. Appl. Math., 2004, 170: 255–268. DOI:10.1016/j.cam.2004.01.040
[8] Greame D C, Higham D J. Asymptotic stability of a jump-diffusion equation and its numerical application[J]. SIAM Sci. Comput., 2008, 31(2): 1141–1155.
[9] Higham D J. A-stability and stochastic mean-square stability[J]. BIT, 2000, 40(2): 404–409.
[10] Higham D J. Mean-square and asymptotic stability of the stochastic theta method[J]. SIAM Numer. Anal., 2000, 38: 753–769. DOI:10.1137/S003614299834736X
[11] Higham D J, Mao X, Stuart A M. Exponential mean-square stability of numerical solutions to stochastic differential equations[J]. LMS Comput. Math., 2003, 6: 297–313. DOI:10.1112/S1461157000000462
[12] Higham D J, Mao X, Yuan C. Preserving exponential mean-square stability in the simulation of hybrid stochastic differential equations[J]. Numer. Math., 2007, 108: 295–325. DOI:10.1007/s00211-007-0113-y
[13] Higham D J, Mao X, Yuan C. Almost sure and moment exponential stability in the numerical simulation of stochastic differential equations[J]. SIAM Numer. Anal., 2007, 45(2): 592–609. DOI:10.1137/060658138
[14] Mao X. Stochastic differential equations and applications[M]. West Sussex, England: Horwood, 1997.
[15] Mao X, Yuan C. Stochastic differential equations with Markovian switching[M]. Singapore: Imperial College Press, 2006.
[16] Mao X. Exponential stability of equidistant Euler-Maruyama approximations of stochastic differential delay equations[J]. Comput. Appl. Math., 2007, 200: 297–316. DOI:10.1016/j.cam.2005.11.035
[17] Milstein G N, Platen E, Schurz H. Balanced implicit methods for stiff stochastic systems[J]. SIAM Numer. Anal., 1998, 35: 1010–1019. DOI:10.1137/S0036142994273525
[18] Rathinasamy A, Balachandran K. Mean-square stability of semi-implicit Euler method for linear stochastic differential equations with multiple delays and Markovian switching[J]. Comput. Appl. Math., 2006, 206: 968–979.
[19] Wang Z, Zhang C. An analysis of stability of Milstein method for stochastic differential equations with delay[J]. Comput. Math. Appl., 2006, 51: 1445–1452. DOI:10.1016/j.camwa.2006.01.004
[20] Wang P, Liu Z. Split-step backward balanced Milstein methods for stiff stochastic systems[J]. Appl. Numer. Math., 2009, 59: 1198–1213. DOI:10.1016/j.apnum.2008.06.001
[21] Wu F, Mao X. Almost sure stability of Euler-Maruyama approximations for stochastic delay differential equations[J]. Numerische Mathematik.
[22] Zhang H, Gan S, Hu L. The split-step backward Euler method for linear stochastic delay differential equations[J]. Comput. Appl. Math., 2009, 225: 558–568. DOI:10.1016/j.cam.2008.08.032
[23] Zhou S, Wu F. Convergence of numerical solutions to neutral stochastic delay differential equations with Markovian switching[J]. Comput. Appl. Math., 2009, 229(1): 85–96. DOI:10.1016/j.cam.2008.10.013