In many fields of science, the combined dynamical effects of noise and delay have received increasing attention. Stochastic differential delay equations (SDDEs) are often used to describe dynamical systems that depend not only on their present state but also on their past history, under noise disturbance. Unfortunately, explicit solutions can rarely be obtained for SDDEs, so it is necessary to establish effective numerical methods for such systems. Recently, numerical methods for SDDEs, as well as for stochastic differential equations (SDEs), have received a great deal of attention [11, 18]. Many efficient numerical methods have been proposed for solving different types of SDEs. The majority of these are explicit or semi-implicit Euler-Maruyama (EM) methods [3, 5, 6, 11-13, 23]. For example, Liu [7] studied the convergence and mean-square (MS) stability of the semi-implicit Euler method for linear SDDEs. Wu [21] showed that the backward EM method reproduces the stability of nonlinear SDDEs, using the martingale technique. Mao [16] considered the exponential stability of equidistant EM approximations for SDDEs. Zhang [22] established strong convergence with order $\gamma=0.5$ and MS-stability of the split-step backward Euler method. Buckwar [2] introduced the convergence of numerical methods for SDDEs.
For the EM method, the implicit part is restricted to the drift term (drift-implicit), which is well adapted to stiff systems with small stochastic noise intensity or additive noise. The method is unsuitable for solving stiff SDEs in which the stochastic term (or diffusion term) plays an essential role, since strong explicit methods work unreliably and produce large errors for step sizes that are not too small. In order to improve the stability properties of numerical methods for stiff SDEs, Milstein [17] proposed the implicit balanced method, a scheme built around a kind of balance in the approximation of the stochastic terms. Several authors have constructed modified implicit methods (see [19-20]). Wang [19] discussed the convergence and MS-stability of several split-step backward balanced Milstein methods for stiff stochastic differential systems, and also considered the MS-stability of the Milstein method for SDDEs. Gan [5] dealt with asymptotic stability by using the discretized Itô formula developed by Appleby [1]. To the authors' best knowledge, there is little work on the convergence and stability of the balanced method for stochastic differential delay equations.
In this paper, we consider a scalar test equation of the form
$$ {\rm d}x(t)=[ax(t)+bx(t-\tau)]\,{\rm d}t+[cx(t)+dx(t-\tau)]\,{\rm d}w(t),\ \ t\geq 0;\qquad x(t)=\xi(t),\ \ t\in[-\tau, 0],\eqno(1.1)$$
where $a, b, c, d\in R$, $\tau$ is a positive fixed delay, $w(t)$ is a one-dimensional standard Wiener process, and $\xi(t)$ is a $C([-\tau, 0];R)$-valued initial segment with $E[\|\xi\|^2]<\infty$, where $\|\xi\|=\sup\limits_{-\tau\leq t\leq 0}{|\xi(t)|}.$ We investigate the convergence and MS-stability of numerical methods for eq. (1.1). Compared with the traditional condition, a weaker condition for the asymptotic and mean-square stability of the exact solution is given in the next section by applying the semimartingale convergence theorem. Section 3 proves that the numerical solution converges to the exact solution with strong order 0.5. In Section 4, we show that the balanced method reproduces mean-square stability.
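To make the setting concrete, eq. (1.1) can be simulated directly. The following sketch applies the explicit Euler-Maruyama scheme to the linear test equation; the function name and the constant initial segment are illustrative choices, not part of the paper.

```python
import numpy as np

def em_sddes_path(a, b, c, d, tau, xi, T, m, rng):
    """Simulate one Euler-Maruyama path of the linear test SDDE
    dx(t) = (a x(t) + b x(t - tau)) dt + (c x(t) + d x(t - tau)) dw(t)
    with initial segment xi(t) on [-tau, 0] and step h = tau / m."""
    h = tau / m
    N = int(round(T / h))
    # X[k] approximates x((k - m) * h); indices 0..m hold the initial segment.
    X = np.empty(N + m + 1)
    for k in range(m + 1):
        X[k] = xi((k - m) * h)
    for n in range(N):
        i = n + m                       # index of x(t_n)
        dw = rng.normal(0.0, np.sqrt(h))
        drift = a * X[i] + b * X[i - m]
        diff = c * X[i] + d * X[i - m]
        X[i + 1] = X[i] + drift * h + diff * dw
    return X[m:]                        # values at t_0, t_1, ..., t_N
```

For coefficients satisfying the stability condition of Section 2 (e.g. $a=-4$, $b=1$, $c=d=0.5$), sample paths decay towards zero.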
Throughout this paper, unless otherwise specified, let $|x|$ denote the Euclidean norm of $x\in R^n$. If $A$ is a vector or matrix, its transpose is denoted by $A^T$. If $A$ is a matrix, its trace norm is denoted by $|A|=\sqrt{{\rm trace}(A^TA)}$, while its operator norm is denoted by $\|A\|=\sup\{|Ax|:|x|=1\}$ (without any confusion with $\|\varphi\|$). For $\tau>0$, denote by $C([-\tau, 0];R^n)$ the family of continuous functions from $[-\tau, 0]$ to $R^n$ with the norm $\|\varphi\|=\sup\limits_{-\tau\leq\theta\leq0}|\varphi(\theta)|.$
Let $(\Omega, {\cal F}, \{{\cal F}_t\}_{t\geq 0}, P)$ be a complete probability space with a filtration $\{{\cal F}_t\}_{t\geq 0}$ satisfying the usual conditions (i.e., it is increasing and right continuous and ${\cal F}_0$ contains all $P$-null sets). For $p>0$, denote by $L_{{\cal F}_t}^p([-\tau, 0];R^n)$ the family of all ${\cal F}_t$-measurable and $C([-\tau, 0];R^n)$-valued random variables $\xi$ such that $E\|\xi\|^p<+\infty.$
Under the above assumptions, eq. (1.1) has a unique strong solution, which has the following properties (see [17]).
Lemma 2.1 For any given $T>0$, there exist positive constants $C_1, C_2$ and $M$ such that the solution of eq. (1.1) satisfies
In order to establish stability under a condition weaker than the traditional one, the following semimartingale convergence theorem [14] plays an important role.
Lemma 2.2 Let $\{A_t\}_{t\geq 0}$ and $\{U_t\}_{t\geq 0}$ be two continuous adapted increasing processes on $t\geq 0$ with $A_0=U_0=0\ \ {\rm a.s.}$ Let $\{M_t\}_{t\geq 0}$ be a real-valued continuous local martingale with $M_0=0\ \ {\rm a.s.}$ Let $\zeta$ be a nonnegative ${\cal F}_0$-measurable random variable. Define
If $X_t$ is nonnegative, then
where $C\subset D$ a.s. means $P(C\cap D^c)=0.$ In particular, if $\lim\limits_{t\rightarrow\infty}A_t<\infty\ \ {\rm a.s.}, $ then for almost all $\omega \in \Omega$
and
That is, all three processes $X_t, A_t$ and $M_t$ converge to finite random variables.
Theorem 2.3 Suppose that the condition
$$2a+c^2+2|b+cd|+d^2<0$$
holds. Then the solution of eq. (1.1) is mean-square stable, i.e., $\limsup\limits_{t\rightarrow\infty}E[|x(t)|^2]=0.$ Moreover, the solution of eq. (1.1) is also asymptotically stable, i.e., $\limsup\limits_{t\rightarrow\infty}|x(t)|=0\ \ {\rm a.s.}$
Proof Define $V(x)=x^2.$ By the Itô formula and the inequality $2ab\leq a^2+b^2$, we compute
Since ${\rm d}[{\rm e}^{\varepsilon t}V(x(t))]= {\rm e}^{\varepsilon t}[\varepsilon V(x){\rm d}t+{\rm d}V(x)]$ for $\varepsilon >0$, integrating yields
Since $ 2a+c^2+2|b+cd|+d^2<0, $ choose $\varepsilon<-2a-c^2-2|b+cd|-d^2, $ then
Lemma 2.2 implies that there exists an a.s. finite random variable $M_0$ such that ${\rm e}^{\varepsilon t}x^2\leq M_0\ \ {\rm a.s.}$ That is,
Taking expectations in (2.2), we have $E[{\rm e}^{\varepsilon t}V(x(t))]\leq M^{\prime}$, i.e., $\limsup\limits_{t\rightarrow\infty}E|x(t)|^2=0.$
Remark 2.4 In Theorem 2.3, the condition $2a+c^2+2|b+cd|+d^2<0$ is weaker than the traditional condition (see [6-7, 19])
$$2a+2|b|+(|c|+|d|)^2<0.$$
That is, $2a+2|b|+(|c|+|d|)^2<0$ implies $2a+c^2+2|b+cd|+d^2<0.$ In fact, noting that $|b+cd|\leq|b|+|c||d|$, we have
$$2a+c^2+2|b+cd|+d^2\leq 2a+2|b|+2|c||d|+c^2+d^2=2a+2|b|+(|c|+|d|)^2.$$
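As a quick numerical sanity check of this remark, one can sample random coefficient tuples and verify that every tuple satisfying the traditional condition also satisfies the weaker one, while the converse fails for some tuples (a sketch; the sampling range is arbitrary).

```python
import numpy as np

rng = np.random.default_rng(1)
a, b, c, d = rng.uniform(-5, 5, size=(4, 100000))

traditional = 2*a + 2*np.abs(b) + (np.abs(c) + np.abs(d))**2 < 0
weaker = 2*a + c**2 + 2*np.abs(b + c*d) + d**2 < 0

# Every sampled tuple satisfying the traditional condition must also
# satisfy the weaker condition of Theorem 2.3 ...
implication_holds = bool(np.all(weaker[traditional]))
# ... while some tuples are covered only by the weaker condition.
weaker_only = int(np.sum(weaker & ~traditional))
```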
In this section, we apply the balanced implicit method (see [4, 16]) to eq. (1.1). This yields the numerical scheme
$$X_{n+1}=X_n+(aX_n+bX_{n-m})h+(cX_n+dX_{n-m})\Delta\omega_n+C(X_n)(X_{n}-X_{n+1}),\eqno(3.1)$$
where $h>0$ is a stepsize which satisfies $\tau=mh$ for a positive integer $m.$ Let $t_n=nh\ (n\geq 0).$ $X_n$ is an approximation to $x(t_n)$; if $t_n\leq 0$, we have $X_n=\xi(t_n)$. Moreover, the increments $\Delta\omega_n=\omega(t_{n+1})-\omega(t_n), n=0, 1, 2, \cdots,$ are independent $N(0, h)$-distributed Gaussian random variables, ${\cal{F}}_{t_{n+1}}$-measurable and independent of ${\cal{F}}_{t_n}.$ $C(X_n)$ is given by
$$C(X_n)=C_0(X_n)h+C_1(X_n)|\Delta\omega_n|,$$
where $c_{0n}=C_0(X_n), c_{1n}=C_1(X_n)$ are called control functions, which are often chosen as constants. The control functions must satisfy the following conditions.
Assumption 3.1 The $c_{0n}$ and $c_{1n}$ represent bounded functions. For any real numbers $\alpha_0 \in [0, \overline{\alpha}_0]$ and $\alpha_1\geq0$, where $\overline{\alpha}_0\geq h$ for all step sizes $h$ considered, and for $(t, x)\in[0, \infty)\times R$, the matrix $M(x) =1 + \alpha_0c_{0n}(x)+\alpha_1c_{1n}(x)$ has an inverse and satisfies $|(M(x))^{-1}|\leq C <\infty$, where $C$ is a positive constant.
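A minimal sketch of one trajectory of scheme (3.1), assuming constant control functions $c_{0n}=c_0$, $c_{1n}=c_1$ and the control term $C(X_n)=c_0h+c_1|\Delta\omega_n|$, may look as follows; since the equation is scalar and linear, the implicit relation is solved by a single division (the function name is hypothetical).

```python
import numpy as np

def balanced_sddes_path(a, b, c, d, tau, xi, T, m, c0, c1, rng):
    """One path of the balanced scheme for the linear test SDDE with
    constant controls and C(X_n) = c0*h + c1*|dw_n|.  The implicit
    relation (1 + C_n) X_{n+1} = (1 + C_n) X_n + drift*h + diff*dw
    is solved directly."""
    h = tau / m
    N = int(round(T / h))
    X = np.empty(N + m + 1)
    for k in range(m + 1):                  # initial segment on [-tau, 0]
        X[k] = xi((k - m) * h)
    for n in range(N):
        i = n + m                           # index of X_n
        dw = rng.normal(0.0, np.sqrt(h))
        Cn = c0 * h + c1 * abs(dw)          # control term C(X_n)
        assert 1.0 + Cn != 0.0              # scalar M(x) must be invertible
        drift = a * X[i] + b * X[i - m]
        diff = c * X[i] + d * X[i - m]
        X[i + 1] = X[i] + (drift * h + diff * dw) / (1.0 + Cn)
    return X[m:]                            # values at t_0, ..., t_N
```

With the choice $c_0=-a-|b|$, $c_1>0$ and stable coefficients, sample paths decay to zero.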
This section proves the convergence of scheme (3.1), which involves two useful definitions. Note that the various constants throughout this section will be denoted by the same letter $C$, except for $c_{0n}$ and $c_{1n}.$
Definition 3.2 (see [4]) The local error of the above approximation of the solution $x(t)$ of eq. (1.1) is the sequence of random variables $\delta_{n+1}=x(t_{n+1})-X(t_{n+1}), n=0, 1, \cdots, N-1.$ Here $X(t_{n+1})$ is the value obtained after just one step of eq. (3.1), i.e.,
The global error of the above approximation of the solution $x(t)$ of eq. (1.1) is the sequence of random variables $\varepsilon_{n}=x(t_{n})-X_n, n=1, \cdots, N.$
We shall prove that approximation (3.1) is convergent with order $\gamma=\frac{1}{2}$, which relies on the following result established by Buckwar (see [3]).
Lemma 3.3 Suppose that the method defined by (3.1) is such that the estimates
hold with $p_2\geq \frac{1}{2}$ and $p_1\geq p_2+\frac{1}{2}, $ where the constant $C$ does not depend on $h$, but may depend on $T$ and the initial data. That is, method (3.1) is consistent with order $p_1$ in the mean and with order $p_2$ in the mean-square sense. Then approximation (3.1) for eq. (1.1) is convergent with order $p=p_2-\frac{1}{2}.$ That is, convergence is in the mean-square sense and
Theorem 3.4 The numerical solution produced by method (3.1) converges to the exact solution of eq. (1.1) at the mesh-points in the mean-square sense with strong order $0.5$, i.e., there exists a positive constant $C$ such that
Proof To prove (3.5), we first show that estimate (3.2) holds for the balanced method (3.1) with $p_1=3/2.$ For this purpose, consider the local Euler approximation step defined by
$$X^E(t_{n+1})=x(t_n)+(ax(t_n)+bx(t_n-\tau))h+(cx(t_n)+dx(t_n-\tau))\Delta\omega_n,$$
$n=0, 1, 2, \cdots, N-1.$ Noting that $|E[x(t_{n+1})-X^E(t_{n+1})]|\leq Ch^2$ (see [15]), one can deduce for $n=0, 1, 2, \cdots, N-1$ that
Denoting $C_n\equiv C(X_n)$, we compute
Exploiting the symmetry property of $\Delta \omega_n$, we obtain
(3.7) reduces to
Since
noting that $x(t_n)$ and $x(t_n-\tau)$ are ${\cal{F}}_{t_n}$-measurable, and making use of Assumption 3.1 and Lemma 2.1, we compute
Substituting (3.8) into (3.6) yields
Therefore (3.2) holds with $p_1=3/2.$ In the following, we prove that (3.3) holds. Applying the inequality $(a+b)^2\leq 2(a^2+b^2)$, we have
noting that $E[|x(t_{n+1})-X^E_{n+1}|^2|{\cal{F}}_{t_n}]\leq Ch^2$ (see [15]). Applying Assumption 3.1 and computing,
Making use of the properties
where $(2i-1)!!=(2i-1)(2i-3)\cdots 3\cdot1$ and $(i-1)!=(i-1)(i-2)\cdots 2\cdot 1$, we compute
Substituting (3.11)-(3.12) into (3.10) and making use of Lemma 2.1 yields
By this and (3.9), condition (3.3) holds. Lemma 3.3 then implies $\max\limits_{1\leq n\leq N}(E|\varepsilon_{n}|^2)^{1/2}\leq Ch^{1/2}.$
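The order-$\frac12$ claim can be probed empirically by coupling coarse approximations to a fine-step reference driven by the same Brownian increments. The sketch below does this for illustrative coefficients and constant control functions (all parameter values are ad hoc, assuming the control term $C(X_n)=c_0h+c_1|\Delta\omega_n|$); halving $h$ should shrink the root-mean-square endpoint error by roughly $\sqrt{2}$.

```python
import numpy as np

def balanced_endpoint(a, b, c, d, m, dW, x0, c0, c1):
    """Balanced-method endpoint for the linear test SDDE with tau = 1,
    constant initial segment x0 on [-1, 0], step h = 1/m and
    prescribed Brownian increments dW (one per step)."""
    h = 1.0 / m
    N = len(dW)
    X = np.full(N + m + 1, float(x0))   # indices 0..m hold the segment
    for n in range(N):
        i = n + m
        Cn = c0 * h + c1 * abs(dW[n])   # control term C(X_n)
        drift = a * X[i] + b * X[i - m]
        diff = c * X[i] + d * X[i - m]
        X[i + 1] = X[i] + (drift * h + diff * dW[n]) / (1.0 + Cn)
    return X[-1]

a, b, c, d = -2.0, 0.5, 0.3, 0.2        # ad hoc stable coefficients
c0, c1 = 1.5, 0.2                       # ad hoc constant controls
T, m_ref, paths = 4, 256, 300
coarse = [8, 16, 32]
sq = {m: 0.0 for m in coarse}
rng = np.random.default_rng(2)
for _ in range(paths):
    dW_ref = rng.normal(0.0, np.sqrt(1.0 / m_ref), size=m_ref * T)
    x_ref = balanced_endpoint(a, b, c, d, m_ref, dW_ref, 1.0, c0, c1)
    for m in coarse:
        r = m_ref // m
        dW = dW_ref.reshape(-1, r).sum(axis=1)  # same path at step 1/m
        sq[m] += (balanced_endpoint(a, b, c, d, m, dW, 1.0, c0, c1) - x_ref) ** 2
rms = {m: float(np.sqrt(sq[m] / paths)) for m in coarse}
```

The errors `rms[8] > rms[16] > rms[32]` shrink as $h$ decreases, consistent with strong order $\frac12$.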
In this section, we investigate the mean-square stability of the balanced method. We now present the main result of the paper.
Definition 4.1 Under condition (2.3), a numerical method is said to be mean-square stable (MS-stable) if there exists an $h_{\{a, b, c, d, c_{0n}, c_{1n}\}}>0$ such that any application of the method to problem (1.1) generates numerical approximations $X_n$ satisfying $\lim\limits_{n\rightarrow\infty}E|X_n|^2=0$ for all $h\in(0, h_{\{a, b, c, d, c_{0n}, c_{1n}\}}).$
Theorem 4.2 Assume that $C<1$ and condition (2.3) is satisfied.
(ⅰ) If $c_{0n}=-a-|b|, c_{1n}>0$, then the balanced method is MS-stable for $h<\min\{h_1^*, 1\}, $ where $h_1^*=\frac{(1/C^2-1)^2}{(4/\sqrt{2\pi}c_{1n}+(|d|+|c|)^2+c^2_{1n})^2}.$
(ⅱ) If $c_{1n}<0, a+c_{0n}>|b|, $ then the balanced method is MS-stable for $h<h_2^*, $ where $h^*_2=\frac{1-C^2}{C^2[2a+2c_{0n}+(|d|+|c|)^2+c^2_{1n}+2|b|+(|a+c_{0n}|+|b|)^2]}.$
(ⅲ) If $c_{1n}<0, c_{0n}<0, $ then the balanced method is MS-stable for $h<\min\{h^*_3, 1\}, $ where $\displaystyle h_3^*=\frac{1/C^2-1}{(|a+c_{0n}|+|b|+|c_{1n}|)^2}.$
(ⅳ) If $c_{1n}=0, c_{0n}=0, $ then the method is MS-stable for $h<\min\{1, \frac{-(2a+2|b|+(|d|+|c|)^2)}{(|a|+|b|)^2}\}.$ Alternatively, if $c_{1n}=0, c_{0n}=0$ and $2a+2|b|+(|d|+|c|)^2+(|a|+|b|)^2<0, $ then the method is MS-stable for $h<1.$ In this case the balanced method (3.1) reduces to the EM method.
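For concreteness, the three step-size bounds of Theorem 4.2 can be evaluated numerically; the helper names and the sample values of $C, a, b, c, d, c_{0n}, c_{1n}$ below are illustrative only.

```python
import math

def h1_star(C, c1n, c, d):
    """Case (i) bound, with c0n = -a - |b| and c1n > 0."""
    num = (1.0 / C**2 - 1.0) ** 2
    den = (4.0 / math.sqrt(2.0 * math.pi) * c1n
           + (abs(d) + abs(c)) ** 2 + c1n**2) ** 2
    return num / den

def h2_star(C, a, b, c, d, c0n, c1n):
    """Case (ii) bound, with c1n < 0 and a + c0n > |b|."""
    den = C**2 * (2*a + 2*c0n + (abs(d) + abs(c))**2 + c1n**2
                  + 2*abs(b) + (abs(a + c0n) + abs(b))**2)
    return (1.0 - C**2) / den

def h3_star(C, a, b, c0n, c1n):
    """Case (iii) bound, with c1n < 0 and c0n < 0."""
    return (1.0 / C**2 - 1.0) / (abs(a + c0n) + abs(b) + abs(c1n)) ** 2
```

For instance, with $C=0.8$, $a=-2$, $b=0.5$, $c_{0n}=c_{1n}=-0.5$, case (iii) gives $h_3^*=0.5625/3.5^2\approx 0.046$.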
Proof According to (3.1), we obtain
Taking expectation yields
By the symmetry property of $\Delta\omega_n$, we obtain
Noting that $E[\Delta \omega_n]=0, E[|\Delta \omega_n|]=\frac{2}{\sqrt{2\pi}}h^{1/2}, E[(\Delta \omega_n)^2]=h$ and $E[|\Delta \omega_n|\Delta \omega_n]=0$, we compute
Compute
Substituting (4.3)-(4.5) into (4.2), we obtain
Let
By recursive calculation we conclude that $E|X_n|^2\rightarrow0$ as $n\rightarrow\infty$ if $P<1.$ That is, if
then method (3.1) is mean-square stable.
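The derivation above repeatedly uses elementary moment identities of the Gaussian increment $\Delta\omega_n\sim N(0, h)$, including the even moments $E[(\Delta\omega_n)^{2i}]=(2i-1)!!\,h^i$ appearing in Section 3. A quick check, by Monte Carlo for the first four identities and by Gauss-Hermite quadrature (exact for polynomial integrands) for the even moments, may look as follows; sample size and seed are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)
h = 0.01
dw = rng.normal(0.0, np.sqrt(h), size=1_000_000)

m1 = dw.mean()                      # E[dw]      = 0
m_abs = np.abs(dw).mean()           # E[|dw|]    = 2/sqrt(2*pi) * h^(1/2)
m2 = (dw ** 2).mean()               # E[dw^2]    = h
m_mix = (np.abs(dw) * dw).mean()    # E[|dw| dw] = 0, by odd symmetry

def gaussian_even_moment(i, h, nodes=20):
    """E[dw^(2i)] for dw ~ N(0, h) via Gauss-Hermite quadrature,
    exact for polynomial integrands of degree < 2 * nodes."""
    x, w = np.polynomial.hermite.hermgauss(nodes)
    # For Z ~ N(0, 1): E[g(Z)] = (1/sqrt(pi)) * sum_j w_j * g(sqrt(2) * x_j)
    return (w @ (np.sqrt(2.0) * x) ** (2 * i)) / np.sqrt(np.pi) * h ** i

def double_factorial_odd(i):
    """(2i-1)!! = (2i-1)(2i-3)...3*1."""
    out = 1
    for k in range(1, 2 * i, 2):
        out *= k
    return out
```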
(ⅰ) If $c_{0n}=-a-|b|$ and $c_{1n}>0$, then under condition (2.3) it is obvious that $c_{0n}>0$, $a+c_{0n}=-|b|<0$, and $1+(a+c_{0n})h>0$ for $h<|b|^{-1}$; hence (4.6) reduces to
It is obvious from the above that for $h<\min\{h_1^*, 1, |b|^{-1}\}$
which implies that (4.6) holds, therefore $E|X_n|^2\rightarrow0.$
(ⅱ) If $c_{1n}<0$ and $a+c_{0n}>|b|, $ then $(a+c_{0n})c_{1n}+|bc_{1n}|=(a+c_{0n})c_{1n}-|b|c_{1n}<0, $ and
which implies (4.6) holds, therefore for $h<\min\{1, h_2^*\}$
which implies that (4.8) holds, and hence $E|X_n|^2\rightarrow0.$
(ⅲ) If $c_{1n}<0, c_{0n}<0, $ then $(a+c_{0n})c_{1n}>0, $ and condition (2.3) implies
therefore
If $C^2[1+(|a+c_{0n}|+|b|+|c_{1n}|)^2h]<1, $ then by (4.9) we have $P<1.$ For $h<\min\{h^*_3, 1\}$ this inequality holds, so (4.6) is satisfied and $E|X_n|^2\rightarrow0.$
(ⅳ) If $c_{1n}=0, c_{0n}=0, $ then $K=1, $ and (4.6) reduces to
If $h<\min\{1, \frac{-(2a+2|b|+(|d|+|c|)^2)}{(|a|+|b|)^2}\}, $ then (4.10) holds. Alternatively, if
then (4.10) holds for $h<1.$
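As an empirical illustration of case (i), simulating many paths of scheme (3.1) with $c_{0n}=-a-|b|$ and $c_{1n}>0$ for a coefficient set satisfying condition (2.3), the sample mean square decays to zero; all numerical values below are illustrative, and the control term is taken as $C(X_n)=c_{0n}h+c_{1n}|\Delta\omega_n|$.

```python
import numpy as np

# Illustrative parameters: for a, b, c, d = -4, 1, 0.5, 0.5 we have
# 2a + c^2 + 2|b + cd| + d^2 = -5 < 0, so the exact solution is MS-stable.
a, b, c, d, tau = -4.0, 1.0, 0.5, 0.5, 1.0
c0, c1 = -a - abs(b), 0.5             # case (i): c0 = -a - |b|, c1 > 0
m = 20                                # steps per delay interval
h = tau / m
paths, N = 10000, 10 * m              # 10000 paths on t in [0, 10]
rng = np.random.default_rng(4)

X = np.ones((paths, N + m + 1))       # constant initial segment xi(t) = 1
for n in range(N):
    i = n + m
    dw = rng.normal(0.0, np.sqrt(h), size=paths)
    Cn = c0 * h + c1 * np.abs(dw)     # control term C(X_n)
    drift = a * X[:, i] + b * X[:, i - m]
    diff = c * X[:, i] + d * X[:, i - m]
    X[:, i + 1] = X[:, i] + (drift * h + diff * dw) / (1.0 + Cn)

ms = (X[:, m:] ** 2).mean(axis=0)     # sample estimate of E|X_n|^2
```

The estimated second moments `ms` decay monotonically towards zero along the mesh, mirroring $\lim_{n\rightarrow\infty}E|X_n|^2=0$.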