Stability of stochastic differential delay equations has been studied widely; the reader is referred, for example, to Arnold [1], Friedman [2], and Mao [3]. The study of the stability of stochastic neural networks, however, was initiated by Liao and Mao [4], [5]. Recently, Hu, Liao and Mao [6] discussed the stability of stochastic Hopfield neural networks, and Zhang, Xu and Deng [7] studied the exponential stability of stochastic reaction-diffusion neural networks with time-varying delays.
Blythe, Mao and Liao [8] studied the almost sure stability of stochastic Hopfield neural networks of the form
Zhang and Xu [9] further discussed the almost sure exponential stability of stochastic recurrent neural networks with time-varying delays of the form (1.1).
This paper discusses the moment exponential stability of stochastic recurrent neural networks with time-varying delays of the form
where $D=\text{diag}(d_{1}, \cdots, d_{n})$, $A=(a_{ij})_{n\times n}$, $B=(b_{ij})_{n\times n}$,
$W(t)=(W_{1}(t), \cdots, W_{m}(t))^{T}$ is an $m$-dimensional Brownian motion defined on a complete probability space $(\Omega, F, P)$ with the natural filtration $\{F_{t}\}_{t\geq0}$ (i.e., $F_{t}=\sigma\{W(s):\ 0\leq s\leq t\}$), and $\xi(s)$ ($-\overline{\tau}\leq s\leq0$) is a continuous $R^{n}$-valued stochastic process such that every $\xi(s)$ is $F_{0}$-measurable and $E|\xi(s)|^{2} < \infty$. Here $n$ denotes the number of neurons in the network, $x_{i}(t)$ is the state of the $i$th neuron at time $t$, and $f_{j}(x_{j}(t))$, $g_{j}(x_{j}(t))$ denote the activation functions of the $j$th neuron at time $t$. The constant $a_{ij}$ is the connection weight of the $j$th neuron on the $i$th neuron at time $t$, while $b_{ij}$ is the connection weight of the $j$th neuron on the $i$th neuron at time $t-\tau_{j}$. The constant $d_{i}>0$ represents the rate with which the $i$th neuron resets its potential to the resting state in isolation when disconnected from the network and external inputs, and $\sigma_{ij}$ denotes the intensity of the stochastic perturbation.
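To see how a network of this type behaves, the following Euler–Maruyama sketch simulates a system of the assumed form $dx(t)=[-Dx(t)+Af(x(t))+Bg(x(t-\tau))]\,dt+\sigma x(t)\,dW(t)$ with a scalar Brownian motion; the matrices, the linear noise term and the $\tanh$ activations are illustrative assumptions, not the data of Eq. (1.1):

```python
import numpy as np

# Hypothetical Euler-Maruyama simulation of a stochastic delay recurrent
# neural network of the assumed form
#   dx(t) = [-D x(t) + A f(x(t)) + B g(x(t - tau))] dt + sigma x(t) dW(t).
# All parameter values below are illustrative, not taken from the paper.
rng = np.random.default_rng(0)

n, dt, T = 2, 1e-3, 5.0
tau = np.array([0.1, 0.1])              # constant delays tau_j
D = np.diag([3.0, 2.0])                 # positive decay rates d_i
A = np.array([[0.2, -0.1], [0.1, 0.2]])
B = np.array([[0.1, 0.05], [-0.05, 0.1]])
sigma = 0.1                             # linear noise intensity (assumption)
f = g = np.tanh                         # activations satisfying (H1)/(H2)

steps = int(round(T / dt))
lag = np.rint(tau / dt).astype(int)     # delays expressed in time steps
hist = int(lag.max())
x = np.zeros((hist + steps + 1, n))
x[: hist + 1] = 0.5                     # constant initial segment xi(s)

for k in range(hist, hist + steps):
    xd = np.array([x[k - lag[j], j] for j in range(n)])  # delayed states
    drift = -D @ x[k] + A @ f(x[k]) + B @ g(xd)
    dW = rng.normal(0.0, np.sqrt(dt))   # scalar Brownian increment
    x[k + 1] = x[k] + drift * dt + sigma * x[k] * dW

print(np.abs(x[-1]))
```

With these decay rates dominating the connection weights, the trajectory decays toward the zero equilibrium, in line with the stability results of this paper.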
Denote by $|\cdot|$ the Euclidean norm. If $A$ is a vector or matrix, its transpose is denoted by $A^{T}$. If $A$ is a matrix, denote by $\|A\|$ its operator norm, i.e. $\|A\|=\sup\{|Ax|:\ |x|=1\}$.
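A quick numerical sanity check of these definitions: the operator norm $\|A\|=\sup\{|Ax|:\ |x|=1\}$ equals the largest singular value of $A$, which the following sketch confirms by comparing a Monte Carlo supremum over unit vectors with NumPy's exact value (the sample matrix is arbitrary):

```python
import numpy as np

# The operator norm ||A|| = sup{|Ax| : |x| = 1} equals the largest
# singular value of A; sample matrix for illustration only.
A = np.array([[1.0, 2.0], [3.0, 4.0]])

# Monte Carlo supremum over random unit vectors.
rng = np.random.default_rng(1)
xs = rng.normal(size=(10000, 2))
xs /= np.linalg.norm(xs, axis=1, keepdims=True)
sup_est = np.max(np.linalg.norm(xs @ A.T, axis=1))

op_norm = np.linalg.norm(A, 2)          # exact: largest singular value
print(op_norm, sup_est)                 # sup_est approaches op_norm from below
```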
Assume that $f$, $g$ and $\sigma$ are locally Lipschitz continuous and satisfy the linear growth condition. It is then known that Eq. (1.1) has a unique global solution on $t\geq0$, denoted by $x(t; \xi)$. Moreover, for the stability analysis in this paper, assume also that $f(0)=0$, $g(0)=0$ and $\sigma(0, 0, t)\equiv0$, so that Eq. (1.1) admits the equilibrium solution $x(t; 0)=0$.
Let $C^{2, 1}(R^{n}\times R_{+}; R_{+})$ denote the family of all nonnegative functions $V(x, t)$ on $R^{n}\times R_{+}$ which are continuously twice differentiable in $x$ and once differentiable in $t$. For each $V\in C^{2, 1}(R^{n}\times R_{+}; R_{+})$, define an operator $\mathcal{L}V$, associated with the stochastic delay recurrent neural network (1.1), from $R^{n}\times R^{n}\times R_{+}$ to $R$ by
where $V_{t}(x, t)=\frac{\partial V(x, t)}{\partial t}$, $V_{x}(x, t)=(\frac{\partial V(x, t)}{\partial x_{1}}, \cdots, \frac{\partial V(x, t)}{\partial x_{n}})^{T}$, $V_{xx}=(\frac{\partial^{2}V(x, t)}{\partial x_{i}\partial x_{j} })_{n\times n}$.
Let $C(R^{n}, R_{+})$ denote the family of all continuous functions from $R^{n}$ to $R_{+}$, and let $C([-\overline{\tau}, 0], R^{n})$ denote the family of all continuous functions from $[-\overline{\tau}, 0]$ to $R^{n}$, where $\overline{\tau}=\max\{\tau_{i}:\ 1\leq i\leq n\}$.
In order to prove our results, we need the following semi-martingale convergence theorem established by Liptser and Shiryayev [10].
Lemma 2.1 Let $A(t)$ and $U(t)$ be two continuous adapted increasing processes on $t\geq0$ with $A(0)=U(0)=0$ a.s. Let $M(t)$ be a real-valued continuous local martingale with $M(0)=0$ a.s. Let $\zeta$ be a nonnegative $F_{0}$-measurable random variable with $E\zeta < \infty$. Define $X(t)=\zeta+A(t)-U(t)+M(t)\ \ \text{for}\ t\geq0.$ If $X(t)$ is nonnegative, then $\{\lim\limits_{t\rightarrow \infty}A(t) < \infty\}\subset \{\lim\limits_{t\rightarrow \infty}X(t) < \infty\}\cap \{\lim\limits_{t\rightarrow \infty}U(t) < \infty\}$ a.s., where $B\subset D$ a.s. means $P(B\cap D^{c})=0$. In particular, if $\lim\limits_{t\rightarrow \infty}A(t) < \infty$ a.s., then for almost all $\omega\in \Omega$
that is, both $X(t)$ and $U(t)$ converge to finite random variables.
Lemma 2.2 Let $M(t)$ be a real-valued continuous local martingale with $M(0)=0$ a.s. Let $\zeta$ be a nonnegative $F_{0}$-measurable random variable with $E\zeta < \infty$. Define $X(t)=\zeta+M(t)\ \ \text{for}\ t\geq0.$ If $X(t)$ is nonnegative, then $X(t)$ is sample bounded a.s., $EX(t)\leq E \zeta$, and $M(t)$ converges a.s. as $t\rightarrow\infty$.
Proof Let $\{\tau_{n}\}$ be a sequence of stopping times increasing to infinity such that $M(t\wedge \tau_{n})$ is a martingale. By Fatou's lemma and the dominated convergence theorem, one has that
The sample bound of $X(t)$ and convergence of $M(t)$ can be derived from Lemma 2.1.
Because each differential equation in a stochastic delay neural network has its own characteristics, it is desirable to obtain stability criteria that make full use of these characteristics.
Lemma 2.3 Assume that there exist functions $V\in C^{2, 1}(R^{n}\times R_{+}; R_{+})$, $\phi_{i}\in C(R; R_{+})$, $\psi_{i}\in C(R; R_{+})\ (1\leq i\leq n)$, and $3n$ constants $\lambda_{i}>\mu_{i}>0, \ \rho_{i}>0\ (1\leq i\leq n)$ such that
Then, for every $\xi\in C([-\overline{\tau}, 0];R^{n})\bigcap L_{F_{0}}^{2}(\Omega, C)$, the solution of Eq. (1.1) has the property
where $\gamma$ is the root of the equation
Proof Fix initial data $\xi\in C([-\overline{\tau}, 0];R^{n})\bigcap L_{F_{0}}^{2}(\Omega, C)$ arbitrarily and write simply $x(t; \xi)=x(t)$. Define
which obviously belongs to $C^{2, 1}(R^{n}\times R_{+}; R_{+})$. We can compute
Using conditions (2.1) and (2.2), we have
The Itô formula shows that for any $t\geq0$
On the other hand, we have
This implies
It then follows from (2.4) and (2.5) that
Furthermore, we have that
where
which is a nonnegative semi-martingale, and Lemma 2.2 shows $EX(t)\leq E \zeta$. It therefore follows from (2.6) that
which implies
as required. The proof is complete.
To obtain our results, we impose the following assumptions:
(H1) There exist positive constants $\alpha_{i}$ such that $|f_{i}(x_{i})|\leq \alpha_{i}|x_{i}|, \ i=1, 2, \cdots, n$.
(H2) There exist positive constants $\beta_{i}$ such that $|g_{i}(x_{i})|\leq \beta_{i}|x_{i}|, \ i=1, 2, \cdots, n$.
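The standard activation choices satisfy these bounds; for instance $\tanh$ meets (H1)/(H2) with $\alpha_{i}=\beta_{i}=1$, as a quick numerical check confirms (the shifted sigmoid is an extra illustration, not used in the paper):

```python
import numpy as np

# tanh vanishes at 0 and satisfies |tanh(x)| <= 1 * |x| for all x, so it
# meets the linear-growth assumptions (H1)/(H2) with alpha_i = beta_i = 1.
xs = np.linspace(-10.0, 10.0, 100001)
assert np.all(np.abs(np.tanh(xs)) <= np.abs(xs) + 1e-15)

# A sigmoid shifted to vanish at 0 also works: |s(x) - 1/2| <= |x| / 4,
# since the sigmoid's derivative is bounded by 1/4.
s = lambda x: 1.0 / (1.0 + np.exp(-x))
assert np.all(np.abs(s(xs) - 0.5) <= np.abs(xs) / 4.0 + 1e-15)
print("activation bounds hold")
```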
Theorem 2.4 Let (H2) hold. Assume there exist symmetric nonnegative-definite matrices $C_{1}$, $C_{2}$, $C_{3}=\text{diag}(\delta_{1}, \cdots, \delta_{n})$ and $C_{4}$ such that
for all $(x, y, t)\in R^{n}\times R^{n}\times R_{+}$. Assume also that there exists a positive-definite diagonal matrix $G=\text{diag}(g_{1}, \cdots, g_{n})$ such that the symmetric matrix
is negative-definite, where $\bar D=\text{diag}(g_{1}\beta_{1}^{2}, \cdots, g_{n}\beta_{n}^{2})$. Let $-\lambda=\lambda_{\max}(H)$, the largest eigenvalue of $H$, so that $\lambda>0$. Then, for every $\xi\in C([-\overline{\tau}, 0];R^{n})\bigcap L_{F_{0}}^{2}(\Omega, C)$, the moment Lyapunov exponent of the solution of Eq. (1.1) can be estimated as
where $\gamma>0$ is the root of the equation
Proof Let $V(x, t)=|x|^{2}$. Then the operator $\mathcal{L}V$ has the form
Compute, by the hypotheses,
It is easy to see from the construction of $H$ that $\lambda\leq g_{i}$ for all $1\leq i\leq n$. Using (H2) one can then derive that
In order to apply Lemma 2.3, define $\phi_{i}, \ \psi_{i}\in C(R; R_{+})$ by
It is obvious that
Moreover,
By Lemma 2.3, for every $\xi\in C([-\overline{\tau}, 0];R^{n})\bigcap L_{F_{0}}^{2}(\Omega, C)$, the solution of Eq. (1.1) has the property
and the required assertion (2.9) follows. The proof is completed.
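As a practical aside, the negative-definiteness of $H$ and the constant $\lambda=-\lambda_{\max}(H)$ in Theorem 2.4 can be verified numerically once $D$, $A$, $B$, $G$ and the $C_{i}$ are fixed; a minimal sketch on an illustrative symmetric matrix (not the $H$ of any particular network):

```python
import numpy as np

# Checking negative-definiteness of a symmetric matrix H via its largest
# eigenvalue; the entries below are illustrative only.
H = np.array([[-2.0, 0.3], [0.3, -1.5]])
assert np.allclose(H, H.T)              # eigvalsh assumes symmetry

eigs = np.linalg.eigvalsh(H)            # real eigenvalues, ascending order
lam = -eigs[-1]                         # lambda = -lambda_max(H)
print(eigs[-1] < 0, lam)                # lambda_max < 0 and lambda > 0
                                        # together mean H is negative-definite
```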
In the following, we shall make use of the characteristics of recurrent networks to obtain further results.
Corollary 2.5 Let (2.8) and (H2) hold. Assume that there exist nonnegative numbers $\theta_{i}$ and $\delta_{i}$ such that
is negative-definite, where $\bar D=\text{diag}(g_{1}\beta_{1}, \cdots, g_{n}\beta_{n})$. Let $-\bar{\lambda}=\lambda_{\max}(\bar H)$, the largest eigenvalue of $\bar H$, so that $\bar{\lambda}>0$. If
then the stochastic delay recurrent neural network (1.1) is moment exponentially stable. Moreover, for every $\xi\in C([-\overline{\tau}, 0];R^{n})\bigcap L_{F_{0}}^{2}(\Omega, C)$, the moment Lyapunov exponent of the solution of Eq. (1.1) can be estimated as
where $\gamma>0$ is the root of the equation (2.10), as long as the $\lambda$ in (2.10) is determined by
Proof Set $C_{1}=\text{diag}(\theta_{1}, \theta_{2}, \cdots, \theta_{n})$, $C_{2}=0$, $C_{3}=\text{diag}(\delta_{1}, \delta_{2}, \cdots, \delta_{n})$. Then (2.11) can be written as
In view of Theorem 2.4, it is sufficient to verify that the matrix $H$ defined there is negative-definite. To do so, for any $x, y\in R^{n}$, compute
where $\lambda$ is defined by (2.14) and is positive due to (2.12). The proof is therefore completed.
Lemma 2.6 (see [9]) Let $P$ be an invertible matrix. Then
Corollary 2.7 Let (H1), (H2) and (2.11) hold. Assume there exist an invertible matrix $P$ and a positive-definite diagonal matrix $G=\text{diag}(g_{1}, \cdots, g_{n})$ such that
is negative-definite, where $\bar D=\text{diag}(g_{1}\beta_{1}^{2}, \cdots, g_{n}\beta_{n}^{2})$, $L=\text{diag}(\alpha_{1}, \cdots, \alpha_{n})$. Let $-\bar{\mu}=\lambda_{\max}(H_{1})$, the largest eigenvalue of $H_{1}$, so that $\bar{\mu}>0$. If
then the stochastic delay recurrent neural network (1.1) is moment exponentially stable. Moreover, the moment Lyapunov exponent can be estimated by (2.13), as long as the $\lambda$ in (2.10) is determined by
Proof Choose $C_{4}=PP^{T}+(P^{-1}AL)^{T}(P^{-1}AL)$. The conclusion of this corollary follows from Corollary 2.5. The proof is completed.
Corollary 2.8 Let (H1), (H2) and (2.11) hold. Assume there exist two positive-definite diagonal matrices $P=\text{diag}(\varepsilon_{1}, \cdots, \varepsilon_{n})$ and $G=\text{diag}(g_{1}, \cdots, g_{n})$ such that
is negative-definite, where $\bar D=\text{diag}(g_{1}\beta_{1}^{2}, \cdots, g_{n}\beta_{n}^{2})$, $L=\text{diag}(\alpha_{1}, \cdots, \alpha_{n})$. Let $-\bar{\nu}=\lambda_{\max}(H_{2})$, the largest eigenvalue of $H_{2}$, so that $\bar{\nu}>0$. If
then the stochastic delay recurrent neural network (1.1) is moment exponentially stable, and the moment Lyapunov exponent can be estimated as in (2.13).
Proof Choose $P=\text{diag}(\sqrt{\varepsilon_{1}}, \cdots, \sqrt{\varepsilon_{n}})$. The conclusion of this corollary follows from Corollary 2.7. The proof is completed.
Example 1 Consider a two-dimensional stochastic delay recurrent neural network
where $W(t)$ is a real-valued scalar Brownian motion, and $\tau_{1}$ and $\tau_{2}$ are both positive numbers,
while
It is easily shown that (H1) is satisfied with $\alpha_{1}=0.4, \ \alpha_{2}=0.3$ and (H2) is satisfied with $\beta_{1}=\beta_{2}=1$, respectively. To apply Theorem 2.4, note that in this example
Choosing $P=\text{diag}(0.2, 0.2)$ in Corollary 2.7, so that $P^{-1}=\text{diag}(5, 5)$, we have
Now, choose $G=\text{diag}(3.815, 1.9375)$. The matrix $H$ defined in Theorem 2.4 becomes
Compute $\lambda_{\max}(H)=-0.1512$, which means that $H$ is negative-definite. By Theorem 2.4, the stochastic delay recurrent neural network (3.1) is moment exponentially stable. To estimate the moment Lyapunov exponent, compute, by (2.10), the $\gamma$ satisfying
If both $\tau_{1}$ and $\tau_{2}$ are $0.1$, then $\bar{\tau}=0.1$ and (3.2) has a unique root $\gamma=0.2166$. Therefore, Theorem 2.4 shows that the moment Lyapunov exponent of the solution of network (3.1) should not be greater than $-0.1083$.
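The root $\gamma$ of a scalar transcendental equation such as (2.10) is easily obtained by bisection. The sketch below assumes the equation has the common form $\lambda=\gamma+\mu e^{\gamma\overline{\tau}}$, whose residual $h(\gamma)$ is strictly increasing; the values of $\lambda$, $\mu$ and $\overline{\tau}$ are illustrative, not those of this example:

```python
import math

# Bisection for the root of an increasing transcendental equation of the
# assumed form  lambda = gamma + mu * exp(gamma * tau_bar);
# lambda, mu, tau_bar values are illustrative, not from the paper.
lam, mu, tau_bar = 0.5, 0.2, 0.1

def h(gamma: float) -> float:
    # residual of the assumed equation; strictly increasing in gamma
    return gamma + mu * math.exp(gamma * tau_bar) - lam

lo, hi = 0.0, lam                       # h(0) = mu - lam < 0 when mu < lam
while hi - lo > 1e-12:
    mid = 0.5 * (lo + hi)
    if h(mid) < 0.0:
        lo = mid
    else:
        hi = mid

gamma = 0.5 * (lo + hi)
print(gamma)                            # unique positive root
```

`scipy.optimize.brentq` would do the same job in one call; bisection is shown here only to keep the sketch dependency-free.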
If $A=0$ in (3.1), the model reduces to a stochastic delay Hopfield neural network. Choosing $G=\text{diag}(3.875, 1.98)$, we have
Compute $\lambda_{\max}(H)=-0.1957$, which means that $H$ is negative-definite. By Theorem 2.4, the stochastic delay Hopfield neural network (3.1) is moment exponentially stable. To estimate the moment Lyapunov exponent, compute, by (2.10), the $\gamma$ satisfying
If both $\tau_{1}$ and $\tau_{2}$ are $0.1$, then $\bar{\tau}=0.1$ and (3.3) has a unique root $\gamma=0.2798$. Therefore, Theorem 2.4 shows that the moment Lyapunov exponent of the solution of the stochastic delay Hopfield neural network (3.1) should not be greater than $-0.1399$.