数学杂志  2014, Vol. 34 Issue (3): 487-496
EXPONENTIAL STABILITY OF A CLASS OF STOCHASTIC DELAY RECURRENT NEURAL NETWORK
PAN Qing-fei1, ZHANG Zi-fang2    
1. College Of Civil Engineering and Architecture, Sanming University, Sanming 365004, China;
2. Department of Math. and Physics, Huaihai Institute of Technology, Lianyungang 222005, China
Abstract: The moment exponential stability of a class of stochastic delay recurrent neural networks is discussed by means of a nonnegative semi-martingale convergence theorem and the Lyapunov functional method. New algebraic criteria for the moment exponential stability of stochastic delay recurrent neural networks are derived; these criteria are simple and practical. An example is also given for illustration.
Key words: stochastic recurrent neural network     time-varying delay     moment exponential stability     Lyapunov exponent    
1 Introduction

The stability of stochastic differential delay equations has been studied widely; the reader is referred, for example, to Arnold [1], Friedman [2], and Mao [3]. The study of the stability of stochastic neural networks, however, was initiated by Liao and Mao [4, 5]. Recently, Hu, Liao and Mao [6] discussed the stability of stochastic Hopfield neural networks, and Zhang, Xu and Deng [7] studied the exponential stability of stochastic reaction-diffusion neural networks with time-varying delays.

Blythe, Mao and Liao [8] studied the almost sure stability of stochastic Hopfield neural networks of the form

$ \begin{eqnarray*} dx(t)=(-Dx(t)+Bg(x_{\tau}(t)))dt+\sigma(x(t), x_{\tau}(t), t)dW(t). \end{eqnarray*} $

Zhang and Xu [9] further discussed the almost sure exponential stability of stochastic recurrent neural networks with time-varying delays of the form (1.1) below.

This paper discusses the moment exponential stability of stochastic recurrent neural networks with time-varying delays of the form

$ \begin{eqnarray} &&dx(t)=(-Dx(t)+Af(x(t))+Bg(x_{\tau}(t)))dt+\sigma(x(t), x_{\tau}(t), t)dW(t), \nonumber\\ &&x(s)=\xi(s)\ \ \text{on}\ -\overline{\tau}\leq s\leq0, \label{2} \end{eqnarray} $ (1.1)

where $D=\text{diag}(d_{1}, \cdots, d_{n})$, $A=(a_{ij})_{n\times n}$, $B=(b_{ij})_{n\times n}$,

$ \begin{eqnarray*} &&f(x)=(f_{1}(x_{1}), f_{2}(x_{2}), \cdots, f_{n}(x_{n}))^{T}, \\ && g(x_{\tau}(t))=(g_{1}(x_{1}(t-\tau_{1})), g_{2}(x_{2}(t-\tau_{2})), \cdots, g_{n}(x_{n}(t-\tau_{n})))^{T}, \\ &&\sigma(x(t), x_{\tau}(t), t)=(\sigma_{ij}(x(t), x_{\tau}(t), t))_{n\times m}, \ x_{\tau}(t)=(x_{1}(t-\tau_{1}), x_{2}(t-\tau_{2}), \cdots, x_{n}(t-\tau_{n}))^{T}, \end{eqnarray*} $

$W(t)=(W_{1}(t), \cdots, W_{m}(t))^{T}$ is an $m$-dimensional Brownian motion defined on a complete probability space $(\Omega, F, P)$ with the natural filtration $\{F_{t}\}_{t\geq0}$ (i.e., $F_{t}=\sigma\{W(s):\ 0\leq s\leq t\}$), and $\xi(s)$ ($-\overline{\tau}\leq s\leq0$) is a continuous $R^{n}$-valued stochastic process such that every $\xi(s)$ is $F_{0}$-measurable and $E|\xi(s)|^{2} < \infty$. Here $n$ denotes the number of neurons in the network, $x_{i}(t)$ corresponds to the state of the $i$th neuron at time $t$, $f_{j}(x_{j}(t))$ and $g_{j}(x_{j}(t))$ denote the activation functions of the $j$th neuron at time $t$, $a_{ij}$ denotes the constant connection weight of the $j$th neuron on the $i$th neuron at time $t$, $b_{ij}$ denotes the constant connection weight of the $j$th neuron on the $i$th neuron at time $t-\tau_{j}$, $d_{i}>0$ represents the rate with which the $i$th neuron resets its potential to the resting state in isolation when disconnected from the network and external inputs, and $\sigma_{ij}$ denotes the intensity of the stochastic perturbation.

Denote by $|\cdot|$ the Euclidean norm. If $A$ is a vector or matrix, its transpose is denoted by $A^{T}$. If $A$ is a matrix, denote by $\|A\|$ its operator norm, i.e. $\|A\|=\sup\{|Ax|:\ |x|=1\}$.

Assume that $f$, $g$ and $\sigma$ are locally Lipschitz continuous and satisfy the linear growth condition. It is then known that Eq. (1.1) has a unique global solution on $t\geq0$, denoted by $x(t; \xi)$. Moreover, for the stability purpose of this paper, assume also that $f(0)=0$, $g(0)=0$ and $\sigma(0, 0, t)\equiv0$, so that Eq. (1.1) admits the equilibrium solution $x(t; 0)=0$.

2 Main Results

Let $C^{2, 1}(R^{n}\times R_{+}; R_{+})$ denote the family of all nonnegative functions $V(x, t)$ on $R^{n}\times R_{+}$ which are continuously twice differentiable in $x$ and once differentiable in $t$. For each $V\in C^{2, 1}(R^{n}\times R_{+}; R_{+})$, define an operator $\mathcal{L}V$, associated with the stochastic delay recurrent neural network (1.1), from $R^{n}\times R^{n}\times R_{+}$ to $R$ by

$ \mathcal{L}V=V_{t}(x, t)+V_{x}(x, t)[-Dx+Af(x)+Bg(y)]+ \frac{1}{2}{\rm trace}[\sigma^{T}(x, y, t)V_{xx}\sigma(x, y, t)], $

where $V_{t}(x, t)=\frac{\partial V(x, t)}{\partial t}$, $V_{x}(x, t)=(\frac{\partial V(x, t)}{\partial x_{1}}, \cdots, \frac{\partial V(x, t)}{\partial x_{n}})$, $V_{xx}=(\frac{\partial^{2}V(x, t)}{\partial x_{i}\partial x_{j} })_{n\times n}$.

Let $C(R^{n}, R_{+})$ denote the family of all continuous functions from $R^{n}$ to $R_{+}$, and let $C([-\overline{\tau}, 0], R^{n})$ denote the family of continuous functions from $[-\overline{\tau}, 0]$ to $R^{n}$, where $\overline{\tau}=\max\{\tau_{i}:\ 1\leq i\leq n\}$.

In order to prove our results, we need the following semi-martingale convergence theorem established by Liptser and Shiryayev [10].

Lemma 2.1   Let $A(t)$ and $U(t)$ be two continuous adapted increasing processes on $t\geq0$ with $A(0)=U(0)=0$ a.s.. Let $M(t)$ be a real-valued continuous local martingale with $M(0)=0$ a.s.. Let $\zeta$ be a nonnegative $F_{0}$-measurable random variable with $E\zeta < \infty$. Define $X(t)=\zeta+A(t)-U(t)+M(t)\ \ \text{for}\ t\geq0.$ If $X(t)$ is nonnegative, then $\{\lim\limits_{t\rightarrow \infty}A(t) < \infty\}\subset \{\lim\limits_{t\rightarrow \infty}X(t) < \infty\}\cap \{\lim\limits_{t\rightarrow \infty}U(t) < \infty\}$ a.s., where $B\subset D$ a.s. means $P(B\cap D^{c})=0$. In particular, if $\lim\limits_{t\rightarrow \infty}A(t) < \infty$ a.s., then for almost all $\omega\in \Omega$

$ \lim\limits_{t\rightarrow \infty}X(t, \omega)<\infty\ \text{and}\ \lim\limits_{t\rightarrow \infty}U(t, \omega)<\infty, $

that is, both $X(t)$ and $U(t)$ converge to finite random variables.

Lemma 2.2   Let $M(t)$ be a real-valued continuous local martingale with $M(0)=0$ a.s. Let $\zeta$ be a nonnegative $F_{0}$-measurable random variable with $E\zeta < \infty$. Define $X(t)=\zeta+M(t)\ \ \text{for}\ t\geq0.$ If $X(t)$ is nonnegative, then $X(t)$ is sample bounded a.s., $EX(t)\leq E \zeta$, and $M(t)$ converges a.s. as $t\rightarrow\infty$.

Proof   Let $\{t_{n}\}$ be a sequence of stopping times increasing to infinity such that $M(t\wedge t_{n})$ is a martingale. By Fatou's lemma, one has

$ \begin{eqnarray*} EX(t)=EX(\lim\limits_{n\rightarrow \infty}t\wedge t_{n})\leq\underline{\lim}_{n\rightarrow \infty}EX(t\wedge t_{n}) \leq\underline{\lim}_{n\rightarrow \infty}E\zeta=E\zeta. \end{eqnarray*} $

The sample bound of $X(t)$ and convergence of $M(t)$ can be derived from Lemma 2.1.

Because each differential equation in a stochastic delay neural network has its own characteristics, it is desirable to obtain stability criteria that make full use of these characteristics.

Lemma 2.3   Assume that there exist functions $V\in C^{2, 1}(R^{n}\times R_{+}; R_{+})$, $\phi_{i}\in C(R; R_{+})$, $\psi_{i}\in C(R; R_{+})\ (1\leq i\leq n)$ and $3n$ constants $\lambda_{i}>\mu_{i}>0, \ \rho_{i}>0\ (1\leq i\leq n)$ such that

$ \begin{eqnarray} \mathcal{L}V(x, y, t)\leq \sum\limits_{i=1}^{n}(-\lambda_{i}\phi_{i}(x_{i})+\mu_{i}\psi_{i}(y_{i})), \ (x, y, t)\in R^{n}\times R^{n}\times R_{+}, \label{3} \end{eqnarray} $ (2.1)
$ V(x, t)\leq \sum\limits_{i=1}^{n}\rho_{i}\phi_{i}(x_{i}), \ (x, t)\in R^{n}\times R_{+}, $ (2.2)
$ \psi_{i}(x_{i})\leq \phi_{i}(x_{i}), \ x_{i}\in R. $ (2.3)

Then, for every $\xi\in C([-\overline{\tau}, 0];R^{n})\bigcap L_{F_{0}}^{2}(\Omega, C)$, the solution of eq.(1.1) has the property

$ \lim\limits_{t\rightarrow\infty}\sup\frac{1}{t}\log(EV(x(t;\xi), t))\leq-\gamma\ \ {\rm a.s.}, $

where $\gamma$ is the unique positive root of the equation

$ \gamma=\min\limits_{1\leq i\leq n}\frac{\lambda_{i}-\mu_{i}e^{\gamma\overline{\tau}}}{\rho_{i}}. $

Proof   Fix initial data $\xi\in C([-\overline{\tau}, 0];R^{n})\bigcap L_{F_{0}}^{2}(\Omega, C)$ arbitrarily and write simply $x(t; \xi)=x(t)$. Define

$ U(x, t)=e^{\gamma t}V(x, t)\ \ \text{for}\ (x, t)\in R^{n}\times R_{+}, $

which is in $C^{2, 1}(R^{n}\times R_{+}; R_{+})$ obviously. We can compute

$ \begin{eqnarray*} \mathcal{L}U(x, y, t)&=&\gamma e^{\gamma t}V(x, t)+e^{\gamma t}V_{t}(x, t)+e^{\gamma t}V_{x}(x, t)[-Dx+Af(x)+Bg(y)]\\ &&+\frac{1}{2}e^{\gamma t}[\sigma^{T}(x, y, t)V_{xx}\sigma(x, y, t)]\\ &=&e^{\gamma t}[\gamma V(x, t)+\mathcal{L}V(x, y, t)]. \end{eqnarray*} $

Using conditions (2.1) and (2.2), we have

$ \mathcal{L}U(x, y, t)\leq e^{\gamma t}[-\sum\limits_{i=1}^{n} (\lambda_{i}-\rho_{i}\gamma)\phi_{i}(x_{i})+\sum\limits_{i=1}^{n}\mu_{i}\psi_{i}(y_{i})]. $

Itô's formula shows that for any $t\geq0$,

$ \begin{eqnarray} &&e^{\gamma t}V(x(t), t)=V(x(0), 0)+\int_{0}^{t}\mathcal{L}U(x(s), x_{\tau}(s), s)ds \nonumber\\ &&+\int_{0}^{t}e^{\gamma s}V_{x}(x(s), s)\sigma(x(s), x_{\tau}(s), s)dW(s) \nonumber\\ &\leq& V(\xi(0), 0)-\sum\limits_{i=1}^{n}(\lambda_{i}-\rho_{i}\gamma)\int_{0}^{t}e^{\gamma s}\phi_{i}(x_{i}(s))ds \nonumber\\ &&+\sum\limits_{i=1}^{n}\mu_{i}\int_{0}^{t}e^{\gamma s}\psi_{i}(x_{i}(s-\tau_{i}))ds+\int_{0}^{t}e^{\gamma s}V_{x}(x(s), s)\sigma(x(s), x_{\tau}(s), s)dW(s). \label{6} \end{eqnarray} $ (2.4)

On the other hand, we have

$ \begin{eqnarray*} &&\int_{t-\tau_{i}}^{t}e^{\gamma s}\psi_{i}(x_{i}(s))ds=\int_{-\tau_{i}}^{t}e^{\gamma s}\psi_{i}(x_{i}(s))ds-\int_{0}^{t}e^{\gamma (s-\tau_{i})}\psi_{i}(x_{i}(s-\tau_{i}))ds\\ &\leq &\int_{-\bar{\tau}}^{t}e^{\gamma s}\psi_{i}(x_{i}(s))ds-e^{-\gamma \tau_{i}}\int_{0}^{t}e^{\gamma s}\psi_{i}(x_{i}(s-\tau_{i}))ds \\ &\leq &\int_{-\bar{\tau}}^{t}e^{\gamma s}\psi_{i}(x_{i}(s))ds-e^{-\gamma \bar{\tau}}\int_{0}^{t}e^{\gamma s}\psi_{i}(x_{i}(s-\tau_{i}))ds. \end{eqnarray*} $

This implies

$ \begin{eqnarray} &&\int_{0}^{t}e^{\gamma s}\psi_{i}(x_{i}(s-\tau_{i}))ds\leq e^{\gamma \bar{\tau}}\int_{-\bar{\tau}}^{t}e^{\gamma s}\psi_{i}(x_{i}(s))ds-e^{\gamma \bar \tau}\int_{t-\tau_{i}}^{t}e^{\gamma s}\psi_{i}(x_{i}(s))ds. \label{7} \end{eqnarray} $ (2.5)

It then follows from (2.4) and (2.5) that

$ \begin{eqnarray*} &&e^{\gamma t}V(x(t), t)\leq V(\xi(0), 0)-\sum\limits_{i=1}^{n}(\lambda_{i}-\gamma\rho_{i}-\mu_{i} e^{\gamma \bar{\tau}})\int_{0}^{t}e^{\gamma s}\phi_{i}(x_{i}(s))ds+e^{\gamma \bar \tau}\sum\limits_{i=1}^{n}\mu_{i}\int_{-\bar \tau}^{0}e^{\gamma s}\psi_{i}(\xi_{i}(s))ds\\ &&-e^{\gamma \bar{\tau}}\sum\limits_{i=1}^{n}\mu_{i}\int_{t-\tau_{i}}^{t}e^{\gamma s}\psi_{i}(x_{i}(s))ds+\int_{0}^{t}e^{\gamma s}V_{x}(x(s), s)\sigma(x(s), x_{\tau}(s), s)dW(s). \end{eqnarray*} $

Furthermore, we have that

$ \begin{eqnarray} e^{\gamma t}V(x(t), t)+e^{\gamma \bar{\tau}}\sum\limits_{i=1}^{n}\mu_{i}\int_{t-\tau_{i}}^{t}e^{\gamma s}\psi_{i}(x_{i}(s))ds\leq X(t), \label{8} \end{eqnarray} $ (2.6)

where

$ \begin{eqnarray*} &&X(t)=\zeta+\int_{0}^{t}e^{\gamma s}V_{x}(x(s), s)\sigma(x(s), x_{\tau}(s), s)dW(s), \\ &&\zeta=V(\xi(0), 0)+e^{\gamma \bar{\tau}}\sum\limits_{i=1}^{n}\mu_{i}\int_{-\bar{\tau}}^{0}e^{\gamma s}\psi_{i}(\xi_{i}(s))ds, \end{eqnarray*} $

which is a nonnegative semi-martingale, and Lemma 2.2 shows $EX(t)\leq E \zeta$. It therefore follows from (2.6) that

$ E[e^{\gamma t}V(x(t), t)]=e^{\gamma t}EV(x(t), t)\leq E \zeta, $

which implies

$ \limsup\limits_{t\rightarrow\infty}\frac{1}{t}\log(EV(x(t;\xi), t))\leq-\gamma $

as required. The proof is complete.
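The rate $\gamma$ in Lemma 2.3 is defined only implicitly, so in practice it must be found numerically. The sketch below (with hypothetical constants $\lambda_{i}$, $\mu_{i}$, $\rho_{i}$ and delay bound $\bar{\tau}$, not taken from the paper) solves the fixed-point equation by bisection; a unique positive root exists because the right-hand side is positive at $\gamma=0$ and strictly decreasing in $\gamma$.

```python
import math

def decay_rate(lam, mu, rho, tau_bar, upper=50.0, tol=1e-10):
    """Bisection for gamma = min_i (lam[i] - mu[i]*e^{gamma*tau_bar}) / rho[i].

    Assumes lam[i] > mu[i] > 0 and rho[i] > 0, so the right-hand side is
    positive at gamma = 0 and strictly decreasing in gamma; hence the
    equation has a unique positive root.
    """
    def rhs(gamma):
        return min((l - m * math.exp(gamma * tau_bar)) / r
                   for l, m, r in zip(lam, mu, rho))

    lo, hi = 0.0, upper  # gamma - rhs(gamma) is negative at 0, positive at upper
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mid - rhs(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return lo

# Hypothetical two-neuron data with delay bound 0.1 (illustration only).
gamma = decay_rate(lam=[4.0, 2.0], mu=[3.9, 1.8], rho=[1.0, 1.0], tau_bar=0.1)
```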

To obtain our results, we impose the following assumptions:

(H1) There exist positive constants $\alpha_{i}$ such that $|f_{i}(x_{i})|\leq \alpha_{i}|x_{i}|, \ i=1, 2, \cdots, n$.

(H2) There exist positive constants $\beta_{i}$ such that $|g_{i}(x_{i})|\leq \beta_{i}|x_{i}|, \ i=1, 2, \cdots, n$.

Theorem 2.4   Let (H2) hold. Assume there exist symmetric nonnegative-definite matrices $C_{1}$, $C_{2}$, $C_{3}=\text{diag}(\delta_{1}, \cdots, \delta_{n})$ and $C_{4}$ such that

$ {\rm{trace}}[{\sigma ^T}(x, y, t)\sigma (x, y, t)] \le {x^T}{C_1}x + {g^T}(y){C_2}g(y) + {y^T}{C_3}y, $ (2.7)
$ x^{T}Af(x)+f^{T}(x)A^{T}x\leq x^{T}C_{4}x $ (2.8)

for all $(x, y, t)\in R^{n}\times R^{n}\times R_{+}$. Assume also that there exists a positive-definite diagonal matrix $G=\text{diag}(g_{1}, \cdots, g_{n})$ such that the symmetric matrix

$ H:= \left(\begin{array}{cc} -2D+C_{1}+C_{3}+C_{4}+\bar D&B\\ B^{T}&-G+C_{2} \end{array} \right) $

is negative-definite, where $\bar D=\text{diag}(g_{1}\beta_{1}^{2}, \cdots, g_{n}\beta_{n}^{2})$. Let $-\lambda=\lambda_{\max}(H)$ denote the largest eigenvalue of $H$, so that $\lambda>0$. Then, for every $\xi\in C([-\overline{\tau}, 0];R^{n})\bigcap L_{F_{0}}^{2}(\Omega, C)$, the moment Lyapunov exponent of the solution of eq.(1.1) can be estimated as

$ \begin{eqnarray} \limsup\limits_{t\rightarrow\infty}\frac{1}{t}\log(E|x(t;\xi)|)\leq-\frac{\gamma}{2}, \label{11} \end{eqnarray} $ (2.9)

where $\gamma>0$ is the root of the equation

$ \begin{eqnarray} \gamma=\min\limits_{1\leq i\leq n }\{(\lambda+\delta_{i}+g_{i}\beta_{i}^{2})- (\delta_{i}+g_{i}\beta_{i}^{2}-\lambda\beta_{i}^{2})e^{\gamma\bar \tau }\}. \label{12} \end{eqnarray} $ (2.10)

Proof   Let $V(x, t)=|x|^{2}$. Then the operator $\mathcal{L}V$ has the form

$ \mathcal{L}V=2x^{T}[-Dx+Af(x)+Bg(y)]+{\rm trace}[\sigma^{T}(x, y, t)\sigma(x, y, t)]. $

Compute, by the hypotheses,

$ \begin{eqnarray*} \mathcal{L}V(x, y, t)&\leq& -2x^{T}Dx+x^{T}C_{4}x+x^{T}Bg(y)+g^{T}(y)B^{T}x+x^{T}C_{1}x+ g^{T}(y)C_{2}g(y)+y^{T}C_{3}y \\ &=&(x^{T}\ g^{T}(y))H\left(\begin{array}{c} x\\g(y)\end{array}\right)-x^{T}(C_{3}+\bar D)x+y^{T}C_{3}y+g^{T}(y)Gg(y) \\ &\leq&-\lambda(|x|^{2}+|g(y)|^{2})-x^{T}(C_{3}+\bar D)x+y^{T}C_{3}y+g^{T}(y)Gg(y) \\ &=&-\sum\limits_{i=1}^{n}(\lambda+\delta_{i}+g_{i}\beta_{i}^{2})x_{i}^{2}+ \sum\limits_{i=1}^{n}(\delta_{i}y_{i}^{2}+(g_{i}-\lambda)g_{i}^{2}(y_{i})). \end{eqnarray*} $

It is easy to see from the construction of $H$ that $\lambda\leq g_{i}$ for all $1\leq i\leq n$. Using H2 one can then derive that

$ \mathcal{L}V(x, y, t)\leq \sum\limits_{i=1}^{n}(-(\lambda+\delta_{i}+g_{i}\beta_{i}^{2})x_{i}^{2}+ (\delta_{i}y_{i}^{2}+(g_{i}-\lambda)\beta_{i}^{2}y_{i}^{2})). $

In order to apply Lemma 2.3, define $\phi_{i}, \ \psi_{i}\in C(R; R_{+})$ and constants $\lambda_{i}, \mu_{i}$ by

$ \phi_{i}(x_{i})=x_{i}^{2}, \ \psi_{i}(y_{i})=y_{i}^{2}, \ \lambda_{i}= \lambda+\delta_{i}+g_{i}\beta_{i}^{2}, \ \mu_{i}=\delta_{i}+(g_{i}-\lambda)\beta_{i}^{2}. $

It is obvious that

$ V(x, t)=|x|^{2}=\sum\limits_{i=1}^{n}\phi_{i}(x_{i}), \ \psi_{i}=\phi_{i}, \ \rho_{i}=1, \ \lambda_{i}>\mu_{i}. $

Moreover,

$ \mathcal{L}V(x, y, t)\leq \sum\limits_{i=1}^{n}(-\lambda_{i}\phi_{i}(x_{i})+\mu_{i}\psi_{i}(y_{i})). $

By Lemma 2.3, for every $\xi\in C([-\overline{\tau}, 0];R^{n})\bigcap L_{F_{0}}^{2}(\Omega, C)$, the solution of eq.(1.1) has the property

$ \begin{eqnarray*} \limsup\limits_{t\rightarrow\infty}\frac{1}{t}\log(E|x(t;\xi)|^{2})\leq-\gamma \end{eqnarray*} $

and, since $(E|x(t;\xi)|)^{2}\leq E|x(t;\xi)|^{2}$, the required assertion (2.9) follows. The proof is completed.

In the following, we shall make use of the characteristics of recurrent networks to obtain further results.

Corollary 2.5   Let (2.8) and (H2) hold. Assume that there exist nonnegative numbers $\theta_{i}$ and $\delta_{i}$ such that

$ \begin{eqnarray} {\rm trace}[\sigma^{T}(x, y, t)\sigma(x, y, t)]\leq \sum\limits_{i=1}^{n}[\theta_{i}x_{i}^{2}+\delta_{i}y_{i}^{2}] \label{13} \end{eqnarray} $ (2.11)

for all $(x, y, t)\in R^{n}\times R^{n}\times R_{+}$. Assume also that there exists a positive-definite diagonal matrix $G=\text{diag}(g_{1}, \cdots, g_{n})$ such that the symmetric matrix

$ \bar H:= \left(\begin{array}{cc} -2D+C_{4}+\bar D&B\\ B^{T}&-G \end{array} \right) $

is negative-definite, where $\bar D=\text{diag}(g_{1}\beta_{1}^{2}, \cdots, g_{n}\beta_{n}^{2})$. Let $-\bar{\lambda}=\lambda_{\max}(\bar H)$ denote the largest eigenvalue of $\bar H$, so $\bar{\lambda}>0$. If

$ \begin{eqnarray} \theta_{i}+\delta_{i}<\bar \lambda, \ 1\leq i\leq n, \label{14} \end{eqnarray} $ (2.12)

then the stochastic delay recurrent neural network (1.1) is moment exponentially stable. Moreover, for every $\xi\in C([-\overline{\tau}, 0];R^{n})\bigcap L_{F_{0}}^{2}(\Omega, C)$, the moment Lyapunov exponent of the solution of eq.(1.1) can be estimated as

$ \begin{eqnarray} \limsup\limits_{t\rightarrow\infty}\frac{1}{t}\log(E|x(t;\xi)|)\leq-\frac{\gamma}{2}, \label{15} \end{eqnarray} $ (2.13)

where $\gamma>0$ is the root of the equation (2.10), as long as the $\lambda$ in (2.10) is determined by

$ \begin{eqnarray} \lambda=\min\limits_{1\leq i\leq n}[\bar \lambda-(\theta_{i}+\delta_{i})]. \label{16} \end{eqnarray} $ (2.14)

Proof   Set $C_{1}=\text{diag}(\theta_{1}, \theta_{2}, \cdots, \theta_{n})$, $C_{2}=0$, $C_{3}=\text{diag}(\delta_{1}, \delta_{2}, \cdots, \delta_{n})$. Then (2.11) can be written as

$ {\rm trace}[\sigma^{T}(x, y, t)\sigma(x, y, t)]\leq x^{T}C_{1}x+g^{T}(y)C_{2}g(y)+y^{T}C_{3}y. $

In view of Theorem 2.4, it is sufficient to verify that the matrix $H$ defined there is negative-definite. To do so, for any $x, y\in R^{n}$, compute

$ \begin{eqnarray*} (x^{T}\ y^{T})H\left(\begin{array}{c} x\\y\end{array}\right)&=&(x^{T}\ y^{T}) \left(\begin{array}{cc} -2D+C_{1}+C_{3}+C_{4}+\bar D&B\\ B^{T}&-G+C_{2}\end{array}\right) \left(\begin{array}{c} x\\y\end{array}\right)\\ &=&(x^{T}\ y^{T})\bar H\left(\begin{array}{c} x\\y\end{array}\right)+(x^{T}\ y^{T}) \left(\begin{array}{cc} C_{1}+C_{3}&0\\ 0&C_{2}\end{array}\right) \left(\begin{array}{c} x\\y\end{array}\right)\\ &\leq&-\bar \lambda(|x|^{2}+|y|^{2})+\sum\limits_{i=1}^{n}(\theta_{i}+\delta_{i})x_{i}^{2}\\ &\leq&-\lambda(|x|^{2}+|y|^{2}), \end{eqnarray*} $

where $\lambda$ is defined by (2.14) and is positive due to (2.12). The proof is therefore completed.

Lemma 2.6   (see [9]) Let $P$ be an invertible matrix; then

$ \begin{eqnarray} 2x^{T}Ay\leq x^{T}PP^{T}x+y^{T}(P^{-1}A)^{T}(P^{-1}A)y. \label{17} \end{eqnarray} $ (2.15)
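Lemma 2.6 follows from expanding $(P^{T}x-P^{-1}Ay)^{T}(P^{T}x-P^{-1}Ay)\geq0$. As a quick numerical sanity check (with an arbitrarily chosen invertible $P$ and matrix $A$, purely for illustration):

```python
import random

def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def mat_mul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(len(N)))
             for j in range(len(N[0]))] for i in range(len(M))]

def transpose(M):
    return [list(col) for col in zip(*M)]

def inv2(M):
    # Explicit inverse of an invertible 2x2 matrix.
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

random.seed(0)
P = [[1.0, 0.3], [0.0, 2.0]]   # invertible: det = 2 (arbitrary choice)
A = [[0.5, -1.2], [0.7, 0.4]]  # arbitrary matrix
PPT = mat_mul(P, transpose(P))
PinvA = mat_mul(inv2(P), A)

lhs_ok = True
for _ in range(1000):
    x = [random.uniform(-5.0, 5.0) for _ in range(2)]
    y = [random.uniform(-5.0, 5.0) for _ in range(2)]
    lhs = 2.0 * dot(x, mat_vec(A, y))
    # x^T P P^T x + y^T (P^{-1}A)^T (P^{-1}A) y = x^T P P^T x + |P^{-1}A y|^2
    w = mat_vec(PinvA, y)
    rhs = dot(x, mat_vec(PPT, x)) + dot(w, w)
    lhs_ok = lhs_ok and lhs <= rhs + 1e-9
```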

Corollary 2.7   Let (H1), (H2) and (2.11) hold. Assume that there exist an invertible matrix $P$ and a positive-definite diagonal matrix $G=\text{diag}(g_{1}, \cdots, g_{n})$ such that

$ \begin{eqnarray*} H_{1}:= \left(\begin{array}{cc} -2D+PP^{T}+(P^{-1}AL)^{T}(P^{-1}AL)+\bar D&B\\ B^{T}&-G \end{array} \right) \end{eqnarray*} $

is negative-definite, where $\bar D=\text{diag}(g_{1}\beta_{1}^{2}, \cdots, g_{n}\beta_{n}^{2})$, $L=\text{diag}(\alpha_{1}, \cdots, \alpha_{n})$. Let $-\bar{\mu}=\lambda_{\max}(H_{1})$ denote the largest eigenvalue of $H_{1}$, so $\bar{\mu}>0$. If

$ \begin{eqnarray} \theta_{i}+\delta_{i}<\bar{\mu}\ \ (1\leq i\leq n), \label{18} \end{eqnarray} $ (2.16)

then the stochastic delay recurrent neural network (1.1) is moment exponentially stable. Moreover, the moment Lyapunov exponent can be estimated by (2.13), as long as the $\lambda$ in (2.10) is determined by

$ \begin{eqnarray} \lambda=\min\limits_{1\leq i\leq n}[\bar \mu-(\theta_{i}+\delta_{i})]. \label{19} \end{eqnarray} $ (2.17)

Proof   Choose $C_{4}=PP^{T}+(P^{-1}AL)^{T}(P^{-1}AL)$. The conclusion of this corollary follows from Corollary 2.5. The proof is completed.

Corollary 2.8   Let (H1), (H2) and (2.11) hold. Assume that there exist two positive-definite diagonal matrices $P=\text{diag}(\varepsilon_{1}, \cdots, \varepsilon_{n})$ and $G=\text{diag}(g_{1}, \cdots, g_{n})$ such that

$ \begin{eqnarray*} H_{2}:= \left(\begin{array}{cc} -2D+PP^{T}+(AL)^{T}P^{-1}(AL)+\bar D&B\\ B^{T}&-G \end{array} \right) \end{eqnarray*} $

is negative-definite, where $\bar D=\text{diag}(g_{1}\beta_{1}^{2}, \cdots, g_{n}\beta_{n}^{2})$, $L=\text{diag}(\alpha_{1}, \cdots, \alpha_{n})$. Let $-\bar{\nu}=\lambda_{\max}(H_{2})$ denote the largest eigenvalue of $H_{2}$, so $\bar{\nu}>0$. If

$ \begin{eqnarray} \theta_{i}+\delta_{i}<\bar{\nu}\ \ (1\leq i\leq n), \label{20} \end{eqnarray} $ (2.18)

then the stochastic delay recurrent neural network (1.1) is moment exponentially stable. Moreover, the moment Lyapunov exponent can be estimated by (2.13), as long as the $\lambda$ in (2.10) is determined by

$ \begin{eqnarray} \lambda=\min\limits_{1\leq i\leq n}[\bar \nu-(\theta_{i}+\delta_{i})]. \label{21} \end{eqnarray} $ (2.19)

Proof   Take the matrix $P$ in Corollary 2.7 to be $\text{diag}(\sqrt{\varepsilon_{1}}, \cdots, \sqrt{\varepsilon_{n}})$. The conclusion then follows from Corollary 2.7. The proof is completed.

3 Example

Example 1  Consider a two-dimensional stochastic delay recurrent neural network

$ \begin{eqnarray} dx(t)=[-Dx(t)+Af(x(t))+Bg(x_{\tau}(t))]dt+B_{1}g(x_{\tau}(t))dW(t), \label{22} \end{eqnarray} $ (3.1)

where $W(t)$ is a real-valued scalar Brownian motion, $\tau_{1}$ and $\tau_{2}$ are both positive numbers,

$ \begin{eqnarray*} A=\left(\begin{array}{cc}0.1&-0.1\\0.1&0.1\end{array}\right), \ B=\left(\begin{array}{cc}2&-2\\1&1\end{array}\right), \ D=\left(\begin{array}{cc}4&0\\0&2\end{array}\right), \ \sigma(x, y, t)=(0.2y_{2}, 0.5y_{1})^{T}, \end{eqnarray*} $

while

$ f_{i}(x_{i})=(\alpha_{i}x_{i}\wedge1)\vee(-1)\ \text{with}\ \alpha_{1}=0.4, \ \alpha_{2}=0.3, \ g_{i}(y_{i})=\frac{1-e^{-y_{i}}}{1+e^{-y_{i}}}, \ i=1, 2. $
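As a numerical sanity check (not part of the original argument), the linear bounds that (H1) and (H2) require of these particular activations can be verified on a grid of sample points:

```python
import math

ALPHA = [0.4, 0.3]

def f(i, x):
    # Saturating activation f_i(x) = (alpha_i * x, capped to [-1, 1]) from Example 1.
    return max(min(ALPHA[i] * x, 1.0), -1.0)

def g(y):
    # g(y) = (1 - e^{-y}) / (1 + e^{-y}), which equals tanh(y / 2).
    return (1.0 - math.exp(-y)) / (1.0 + math.exp(-y))

grid = [k * 0.1 for k in range(-100, 101)]  # sample points in [-10, 10]
# (H1): |f_i(x)| <= alpha_i |x|; (H2): |g(y)| <= |y| with beta_1 = beta_2 = 1.
ok = all(abs(f(i, x)) <= ALPHA[i] * abs(x) + 1e-12
         for i in range(2) for x in grid) and \
     all(abs(g(y)) <= abs(y) + 1e-12 for y in grid)
```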

It is easily shown that (H1) is satisfied with $\alpha_{1}=0.4, \ \alpha_{2}=0.3$ and (H2) is satisfied with $\beta_{1}=\beta_{2}=1$. To apply Theorem 2.4, note that in this example

$ \begin{eqnarray*} C_{1}=C_{2}=0, \ C_{3}=\text{diag}(0.25, 0.04), \ L=\text{diag}(0.4, 0.3), \ \sigma^{T}(x, y, t)\sigma(x, y, t)=0.25y_{1}^{2}+0.04y_{2}^{2}. \end{eqnarray*} $

Choosing $P=\text{diag}(0.2, 0.2)$ in Lemma 2.6, so that $P^{-1}=\text{diag}(5, 5)$, we have

$ \begin{eqnarray*} &&x^{T}Af(x)+f^{T}(x)A^{T}x\leq x^{T}(PP^{T}+(P^{-1}AL)^{T}(P^{-1}AL))x, \\ &&C_{4}=PP^{T}+(P^{-1}AL)^{T}(P^{-1}AL)=\text{diag}(0.12, 0.085), \ \bar D=G, \\ &&-2D+C_{1}+C_{3}+C_{4}=\text{diag}(-7.63, -3.875). \end{eqnarray*} $

Now, choose $G=\text{diag}(3.815, 1.9375)$. The matrix $H$ defined in Theorem 2.4 becomes

$ \begin{eqnarray*} H=\left(\begin{array}{cccc}-3.815&0&2&-2\\0&-1.9375&1&1\\2&1&-3.815&0 \\-2&1&0&-1.9375\end{array}\right). \end{eqnarray*} $

Compute $\lambda_{\max}(H)=-0.1512$, which means that $H$ is negative-definite. By Theorem 2.4, the stochastic delay recurrent neural network (3.1) is moment exponentially stable. To estimate the moment Lyapunov exponent, compute, by (2.10), that $\gamma$ satisfies

$ \begin{eqnarray} \gamma=4.2162-3.9138e^{\bar{\tau}\gamma}. \label{23} \end{eqnarray} $ (3.2)

If both $\tau_{1}$ and $\tau_{2}$ are $0.1$, then $\bar{\tau}=0.1$ and (3.2) has a unique root $\gamma=0.2166$. Therefore, Theorem 2.4 shows that the moment Lyapunov exponent of the solution of network (3.1) is not greater than $-0.1083$.
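These numbers can be checked with a short computation. The sketch below estimates $\lambda_{\max}(H)$ by power iteration on the shifted matrix $H+8I$ (Gershgorin's theorem gives $\lambda_{\min}(H)>-8$, so the shifted matrix is positive definite and its dominant eigenvalue is $\lambda_{\max}(H)+8$), then solves (3.2) by bisection; the shift and iteration counts are ad hoc choices.

```python
import math

# The matrix H from Example 1 (with G = diag(3.815, 1.9375)).
H = [[-3.815, 0.0, 2.0, -2.0],
     [0.0, -1.9375, 1.0, 1.0],
     [2.0, 1.0, -3.815, 0.0],
     [-2.0, 1.0, 0.0, -1.9375]]

def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def lambda_max(M, shift, iters=5000):
    """Largest eigenvalue of a symmetric M via power iteration on M + shift*I.

    The shift must make M + shift*I positive definite, so that its dominant
    eigenvalue is lambda_max(M) + shift.
    """
    v = [1.0, 0.5, 0.25, 0.125]  # fixed 4-dimensional start vector
    for _ in range(iters):
        w = [wi + shift * vi for wi, vi in zip(mat_vec(M, v), v)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    # Rayleigh quotient of the unshifted matrix at the converged vector.
    return sum(vi * wi for vi, wi in zip(v, mat_vec(M, v)))

lam = lambda_max(H, shift=8.0)  # negative, so H is negative-definite

def root_of_eq(lo=0.0, hi=2.0, tol=1e-12):
    # Bisection for gamma = 4.2162 - 3.9138 * exp(0.1 * gamma), i.e. (3.2).
    res = lambda gamma: gamma - (4.2162 - 3.9138 * math.exp(0.1 * gamma))
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if res(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return lo

gamma = root_of_eq()
```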

If $A=0$ in (3.1), the model (3.1) reduces to a stochastic delay Hopfield neural network. Choosing $G=\text{diag}(3.875, 1.98)$, we have

$ \begin{eqnarray*} H=\left(\begin{array}{cccc}-3.875&0&2&-2\\0&-1.98&1&1\\2&1&-3.875&0 \\-2&1&0&-1.98\end{array}\right). \end{eqnarray*} $

Compute $\lambda_{\max}(H)=-0.1957$, which means that $H$ is negative-definite. By Theorem 2.4, the stochastic delay Hopfield neural network (3.1) is moment exponentially stable. To estimate the moment Lyapunov exponent, compute, by (2.10), that $\gamma$ satisfies

$ \begin{eqnarray} \gamma=4.3207-3.9293e^{\bar{\tau}\gamma}. \label{24} \end{eqnarray} $ (3.3)

If both $\tau_{1}$ and $\tau_{2}$ are $0.1$, then $\bar{\tau}=0.1$ and (3.3) has a unique root $\gamma=0.2798$. Therefore, Theorem 2.4 shows that the moment Lyapunov exponent of the solution of the stochastic delay Hopfield neural network (3.1) is not greater than $-0.1399$.

References
[1] Arnold L. Stochastic differential equations: theory and applications[M]. New York: Wiley, 1972.
[2] Friedman A. Stochastic differential equations and applications[M]. New York: Academic Press, 1976.
[3] Mao X. Exponential stability of stochastic differential equations[M]. New York: Marcel Dekker, 1994.
[4] Liao X, Mao X. Exponential stability and instability of stochastic neural networks[J]. Stochast. Anal. Appl., 1996, 14(2): 165–185. DOI:10.1080/07362999608809432
[5] Liao X, Mao X. Stability of stochastic neural networks[J]. Neural, Parallel Sci. Comput., 1996, 4(2): 205–224.
[6] Hu S, Liao X, Mao X. Stochastic Hopfield neural networks[J]. J. Phys. A: Math. Gen., 2003, 36: 2235–2249. DOI:10.1088/0305-4470/36/9/303
[7] Zhang Zifang, Xu Daiyi, Deng Jin. Exponential stability of stochastic reaction-diffusion neural network with time-varying delays[J]. Chinese J. Engin. Math., 2008, 25(2): 219–223.
[8] Blythe S, Mao X, Liao X. Stability of stochastic delay neural networks[J]. J. Franklin Institute, 2001, 338: 481–495. DOI:10.1016/S0016-0032(01)00016-3
[9] Zhang Zifang, Xu Daiyi. A note on stability of stochastic delay neural networks[J]. Chinese J. Engineering Math., 2010, 27(4): 720–730.
[10] Liptser R Sh, Shiryayev A N. Theory of martingales[M]. Dordrecht: Kluwer Academic Publishers, 1986.