Stochastic differential equations are well known to model problems from many areas of science and engineering, and quite often the future state of such systems depends not only on the present state but also on its past history (delay), which leads to stochastic functional differential equations with delay. In recent years there has been increasing interest in stochastic functional differential equations with infinite delay (ISFDEs for short) under less restrictive conditions. The existence and uniqueness of solutions to ISFDEs were discussed in [1-4], and stability properties such as robustness, attraction and pathwise estimation of solutions to ISFDEs were studied in [5-13]. It is well known that, in order for a stochastic differential equation to have a unique global solution for any given initial value, the coefficients of the equation are generally required to satisfy the linear growth condition together with either the local Lipschitz condition or a non-Lipschitz condition. In both classes of conditions, the linear growth condition plays an important role in suppressing the growth of the solution and avoiding explosion in finite time. However, such results are limited in application, since the coefficients of many important systems do not satisfy the linear growth condition. It is therefore important to find conditions that guarantee the existence of global solutions when the coefficients have nonlinear growth.
Motivated by results such as [3] and [9], this paper considers a class of stochastic functional differential equations with infinite delay whose coefficients are polynomials or are controlled by polynomial functions. We mainly examine the existence and uniqueness of global solutions of such equations, the boundedness of their moments, and the boundedness of their moment time averages.
In this paper, we consider the stochastic functional differential equation with infinite delay
where
Assumption 1.1 Both $f$ and $g$ are locally Lipschitz continuous.
Denote a solution to eq.(1.1) by $x(t)$. If $x(t)$ is defined on $(-\infty, +\infty)$, we call it a global solution. To show the dependence on the initial data $\xi$, we write $x(t)=x(t, \xi)$. This paper aims to find conditions on the coefficients under which there exists a unique global solution $x(t, \xi)$ to eq.(1.1) and this solution has the properties
and
where $\alpha\geq0$ and $p>0$ are suitable parameters, and $K_{p}$ and $K_{\alpha+p}^{*}$ are positive constants independent of $\xi$.
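Properties of this kind are, respectively, a moment boundedness property and a moment average boundedness property; in a typical formulation (recorded here only for orientation, and not necessarily the exact displays (1.2) and (1.3)), they read
\[
\limsup_{t\rightarrow+\infty}E|x(t, \xi)|^{p}\leq K_{p}
\quad\mbox{and}\quad
\limsup_{t\rightarrow+\infty}\frac{1}{t}\int_{0}^{t}E|x(s, \xi)|^{\alpha+p}\,ds\leq K_{\alpha+p}^{*}.
\]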
In order to examine the above problems, a general result is given in Section 3. In Section 4 this general result is discussed in detail, and two classes of conditions are provided which guarantee a unique global solution to eq.(1.1) together with the moment boundedness of this solution.
First, we give some concepts, notations and stipulations which will be used in this paper. Let $\{\Omega, \mathcal{F}, P\}$ be a complete probability space with a filtration $\{\mathcal{F}_t\}_{t\geq0}$ satisfying the usual conditions (i.e., it is right continuous and $\mathcal{F}_0$ contains all $P$-null sets). Let $W(t)$ $(t\geq0)$ be an $m$-dimensional Brownian motion defined on this probability space. Denote by $C((-\infty, 0];\mathbb{R}^{n})$ the family of continuous functions from $(-\infty, 0]$ to $\mathbb{R}^{n}$, and by $BC((-\infty, 0];\mathbb{R}^{n})$ the family of bounded continuous functions from $(-\infty, 0]$ to $\mathbb{R}^{n}$ with the norm
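(presumably the usual supremum norm)
\[
\|\varphi\|=\sup_{-\infty<\theta\leq0}|\varphi(\theta)|,
\]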
which forms a Banach space. If $A$ is a vector or matrix, its transpose is denoted by $A^{T}$. If $A$ is a matrix, denote its trace norm and operator norm by $|A|$ and $\|A\|$, respectively. Denote the Euclidean norm of $x\in \mathbb{R}^{n}$ by $|x|$. Let $\mathbb{R}_{+}^{n}=\{(x_1, \cdots, x_n)^T: x_i\geq0\ \mbox{for every }i=1, \cdots, n\}$ and $\mathbb{R}_{++}^{n}=\{(x_1, \cdots, x_n)^T: x_i>0\ \mbox{for every }i=1, \cdots, n\}$. For any $d=(d_{1}, \cdots, d_n)^T\in \mathbb{R}_{+}^{n}$, define
where $\mbox{diag}(d_1, \cdots, d_n)$ represents the $n\times n$ matrix whose entries are all zero except those on the diagonal, which are $d_1, \cdots, d_n$. For a positive definite matrix $Q$, let $\lambda_{\min}(Q)$ be its smallest eigenvalue. For $\alpha>0$, we write $h(x)=o(|x|^{\alpha})$ if $\lim\limits_{|x|\rightarrow+\infty}{h(x)}/{|x|^{\alpha}}=0$; throughout this paper, the notation $o(|x|^{\alpha})$ is always understood under the condition $|x|\rightarrow+\infty$. Let $L^{p}((-\infty, 0];\mathbb{R}^{n})$ denote the family of all functions $l:(-\infty, 0]\rightarrow\mathbb{R}^{n}$ such that
The sign function $\mbox{sgn}(x)$ will be used several times in this paper, so we recall its definition.
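The usual convention, which we take here for definiteness (applied componentwise when the argument is a vector, an assumption on our part), is
\[
\mbox{sgn}(x)=
\begin{cases}
1, & x>0,\\
0, & x=0,\\
-1, & x<0.
\end{cases}
\]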
For convenience of reference, several elementary results (see [14]), which will be used frequently, are given as lemmas below.
Lemma 2.1 For any $x, y, \alpha\geq0, \beta, \varepsilon>0$,
in particular, when $\varepsilon=1$,
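One elementary inequality of this type, consistent with the stated parameter ranges and recorded here only for concreteness (it need not be the exact display from [14]), is the weighted Young inequality
\[
x^{\alpha}y^{\beta}\leq\frac{\alpha}{\alpha+\beta}\,\varepsilon\, x^{\alpha+\beta}+\frac{\beta}{\alpha+\beta}\,\varepsilon^{-\alpha/\beta}\,y^{\alpha+\beta},
\]
which reduces, for $\varepsilon=1$, to $x^{\alpha}y^{\beta}\leq\frac{\alpha}{\alpha+\beta}x^{\alpha+\beta}+\frac{\beta}{\alpha+\beta}y^{\alpha+\beta}$.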
Lemma 2.2 For any $x, y\in \mathbb{R}^{n}, 0 < \delta < 1$,
Lemma 2.3 Let $h(x)\in C(\mathbb{R}^{n};\mathbb{R})$ and $\alpha, a>0$. If $h(x)=o(|x|^{\alpha})$ as $|x|\rightarrow\infty$, then
In addition, throughout this paper, $\mbox{const}$ represents a positive constant whose precise value or expression is not important; $\mbox{I}(x)\leq \mbox{const}$ always means that $\mbox{I}(x)$ $(x\in \mathbb{R}^{n})$ is bounded above. Hence Lemma 2.3 can be rewritten as
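a pointwise bound, presumably of the standard form
\[
h(x)\leq a|x|^{\alpha}+\mbox{const},\qquad x\in\mathbb{R}^{n};
\]
indeed, $h(x)\leq a|x|^{\alpha}$ for all sufficiently large $|x|$ by the definition of $o(|x|^{\alpha})$, while on the remaining bounded set $h$ is bounded above by continuity.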
Note that the notation $o(|x|^{\alpha})$ includes the continuity of the function involved.
Lemma 2.4 (see [9]) Let
for any $p>0$. Then for any $q>p, \varphi\in L^{q}((-\infty, 0];\mathbb{R}^{n})$.
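For example, if $L^{p}((-\infty, 0];\mathbb{R}^{n})$ is defined through the condition $\int_{-\infty}^{0}|l(\theta)|^{p}\,d\theta<+\infty$ and the hypothesis is read as $\varphi\in BC((-\infty, 0];\mathbb{R}^{n})\cap L^{p}((-\infty, 0];\mathbb{R}^{n})$ (a standard setting, which we assume here), the conclusion follows from
\[
\int_{-\infty}^{0}|\varphi(\theta)|^{q}\,d\theta\leq\Big(\sup_{-\infty<\theta\leq0}|\varphi(\theta)|\Big)^{q-p}\int_{-\infty}^{0}|\varphi(\theta)|^{p}\,d\theta<+\infty.
\]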
Let $\mathcal{M}_{0}$ denote the set of all probability measures $\mu$ on $(-\infty, 0]$. For any $\varepsilon\geq0$, define
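Following [9], a natural reading (assumed here) is
\[
\mu_{\varepsilon}:=\int_{-\infty}^{0}e^{-\varepsilon\theta}\mu(d\theta),\qquad
\mathcal{M}_{\varepsilon}:=\{\mu\in\mathcal{M}_{0}:\ \mu_{\varepsilon}<+\infty\},
\]
which is consistent with Lemma 2.5 below.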
Lemma 2.5 (see [9]) Fix $\varepsilon_{0}>0$. For $\varepsilon\in[0, \varepsilon_{0}]$, $\mu_{\varepsilon}$ is continuous and nondecreasing in $\varepsilon$ and satisfies $\mu_{\varepsilon_0}\geq \mu_{\varepsilon}\geq \mu_{0}=1$, and $\mathcal{M}_{\varepsilon_0}\subseteq\mathcal{M}_{\varepsilon}\subseteq\mathcal{M}_{0}$.
Let $C^{1, 2}(\mathbb{R}_{+}\times\mathbb{R}^{n};\mathbb{R}_{+})$ denote the family of all nonnegative functions $V(t, x)$ on $\mathbb{R}_{+}\times\mathbb{R}^{n}$ which are continuously differentiable in $t$ and twice continuously differentiable in $x$, and define
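(presumably the standard partial-derivative notation, as in [15])
\[
V_{t}(t, x)=\frac{\partial V(t, x)}{\partial t},\quad
V_{x}(t, x)=\Big(\frac{\partial V(t, x)}{\partial x_{1}}, \cdots, \frac{\partial V(t, x)}{\partial x_{n}}\Big),\quad
V_{xx}(t, x)=\Big(\frac{\partial^{2} V(t, x)}{\partial x_{i}\partial x_{j}}\Big)_{n\times n}.
\]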
For eq.(1.1), define an operator $\mathcal {L}V$ from $\mathbb{R}_{+}\times\mathbb{R}^{n}\times BC((-\infty, 0];\mathbb{R}^{n})$ to $\mathbb{R}$ by
If $x(t)$ is a solution to eq.(1.1), then by the Itô formula (see [15]), we have
where $LV(t, x(t))=\mathcal {L}V(t, x(t), x_{t}).$
In this paper, let
where $Q\in \mathbb{R}^{n\times n}$ is a positive definite matrix and $p>0$. Define
Clearly, we have
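(presumably a two-sided estimate of the type obtained when $V(t, x)=(x^{T}Qx)^{p/2}$, a form we assume here only for illustration)
\[
\lambda_{\min}^{p/2}(Q)\,|x|^{p}\leq V(t, x)\leq\|Q\|^{p/2}|x|^{p},
\]
which allows moment bounds on $V$ to be converted into moment bounds on $|x|$.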
By (2.3),
The following lemma plays a key role in this paper.
Lemma 3.1 Under Assumption 1.1, if there exist constants $\alpha\geq0$, $a, \varepsilon, p, K_{0}, K_{j}, \alpha_{j}>0$, probability measures $\mu_{j}\in\mathcal{M}_{\varepsilon}$ $(1\leq j\leq N, j\in \mathbb{N})$, and a positive definite matrix $Q$ such that the function $V$ defined in (2.5) satisfies, for all $\varphi\in BC((-\infty, 0];\mathbb{R}^{n})$,
then for any initial data
where $\hat{\alpha}=\min\{\alpha_{1}, \cdots, \alpha_N\}$, there exists a unique global solution $x(t, \xi)$ to eq.(1.1) and this solution satisfies (1.2) and (1.3), where $K_{p}$ and $K_{\alpha+p}^{*}$ are positive constants independent of $\xi$.
Proof First, note that condition (3.1) includes the following three inequalities
For any given initial data $\xi\in BC((-\infty, 0];\mathbb{R}^{n})\cap L^{\hat{\alpha}}((-\infty, 0];\mathbb{R}^{n}), $ we will divide the whole proof into three steps.
Step 1 Let us first show the existence of the global solution $x(t, \xi)$. Under Assumption 1.1, eq.(1.1) admits a unique maximal local solution $x(t)$ for $-\infty<t<\sigma$, where $\sigma$ is the explosion time. Define the stopping time
Since $\xi$ is bounded, for all sufficiently large $k$ we have $V(\theta, x(\theta))\leq k$ for $-\infty < \theta\leq0$, and thus $\sigma_{k}\geq0$. If $\sigma < +\infty$, then $x(t)$ may explode as $t\rightarrow\sigma$. Hence
shows that $\sigma_{k}\leq\sigma$. Thus, we may assume $0\leq\sigma_{k}\leq\sigma$ for all $k\in \mathbb{N}$. Obviously, $\sigma_{k}$ is increasing and $\sigma_{k}\rightarrow\sigma_{+\infty}\leq\sigma$ as $k\rightarrow+\infty$ a.s. If we can show $\sigma_{+\infty}=+\infty$ a.s., then $\sigma=+\infty$ a.s., which implies that $x(t)$ is a global solution. It is therefore sufficient to prove that, for any $t>0$, $P(\sigma_{k}\leq t)\rightarrow0$ as $k\rightarrow+\infty$.
Fix $t>0$. Now we prove that $P(\sigma_{k}\leq t)\rightarrow0$ as $k\rightarrow+\infty$. First note that if $\sigma_{k}<+\infty$, then by the continuity of $x(t)$ we have $V(\sigma_{k}, x(\sigma_{k}))\geq k$. Hence, by (2.4) and (3.2) and Lemmas 2.4 and 2.5, we can compute that
where $K_{t}$ is a positive constant independent of $k$. Therefore we have
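(presumably, by the usual Chebyshev-type step, noting that $V(\sigma_{k}, x(\sigma_{k}))\geq k$ on $\{\sigma_{k}\leq t\}$)
\[
P(\sigma_{k}\leq t)\leq\frac{1}{k}E\big[V(\sigma_{k}\wedge t, x(\sigma_{k}\wedge t))\big]\leq\frac{K_{t}}{k}\rightarrow0\quad (k\rightarrow+\infty),
\]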
which shows that $x(t)=x(t, \xi)$ is a global solution to eq.(1.1).
Step 2 Let us now show inequality (1.2). Applying the Itô formula to $e^{\varepsilon t}V(t, x(t))$ and using (2.4), (3.3), and Lemmas 2.4 and 2.5 yields
where $c$ is a positive constant independent of $t$ and $K=\varepsilon^{-1}K_{0}$ is a positive constant independent of $\xi$. Hence, we have $\limsup\limits_{t\rightarrow+\infty}EV(t, x(t))\leq K.$ Then the required assertion (1.2) follows from (2.7).
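For instance, if the preceding estimate takes the typical form
\[
e^{\varepsilon t}EV(t, x(t))\leq c+\varepsilon^{-1}K_{0}\,e^{\varepsilon t}
\]
(assumed here only for illustration), then dividing by $e^{\varepsilon t}$ gives $EV(t, x(t))\leq ce^{-\varepsilon t}+K$, and letting $t\rightarrow+\infty$ yields the bound above.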
Step 3 Finally, let us show assertion (1.3). Using (3.4), Lemmas 2.4 and 2.5, we obtain that
where $c_{1}$ is a positive constant independent of $t$. Assertion (1.3) follows directly. The proof is therefore completed.
Denote the left-hand side of (3.1) by $\Phi$ and establish the inequality
and $\alpha\geq0, K_{j}, \alpha_{j}, a, p>0$. By Lemma 2.3,
This, together with (3.6), yields
Substituting this into (3.5) shows that condition (3.1) is satisfied. To obtain (3.5) and (3.6), some conditions must be imposed on the coefficients $f$ and $g$; these conditions are considered in the next section.
Recall that $\Phi$ denotes the left-hand side of (3.1). By (2.8),
We first list the conditions that we will need.
$(\mbox{H}_{1})$ There exist constants $\alpha, \kappa, \bar{\kappa}>0$, a probability measure $\mu\in \mathcal{M}_{\varepsilon}$ on $(-\infty, 0]$, a positive definite matrix $Q$, and a function $h(x)\in C(\mathbb{R}^{n};\mathbb{R})$ such that
$(\mbox{H}_{2})$ There exist constants $\beta, \lambda, \bar{\lambda}>0$, a probability measure $\nu\in \mathcal{M}_{\varepsilon}$ on $(-\infty, 0]$, and a function $h(x)\in C(\mathbb{R}^{n};\mathbb{R})$ such that
$(\mbox{H}_{3})$ There exist constants $a, \beta, \sigma>0$, a probability measure $\bar{\nu}\in \mathcal{M}_{\varepsilon}$ on $(-\infty, 0]$, a positive definite matrix $Q$, and a function $h(x)\in C(\mathbb{R}^{n};\mathbb{R})$ such that
$(\mbox{F}_{1})$ There exist constants $a, \alpha, \sigma>0$, a probability measure $\bar{\nu}\in \mathcal{M}_{\varepsilon}$ on $(-\infty, 0]$, a positive definite matrix $Q$, and a function $h(x)\in C(\mathbb{R}^{n};\mathbb{R})$ such that
$(\mbox{F}_{2})$ There exist constants $\beta, \lambda, \bar{\lambda}>0$, a probability measure $\nu\in \mathcal{M}_{\varepsilon}$ on $(-\infty, 0]$, and a function $h(x)\in C(\mathbb{R}^{n};\mathbb{R})$ such that
The continuity of $h(x)$ is important in all these conditions.
Now we can state one of our main results in this paper.
Theorem 4.1 Under Assumption 1.1, if conditions $(\mbox{H}_{1})$-$(\mbox{H}_{3})$ hold, $\alpha\leq 2\beta$ and
where $q$ and $R$ are as defined in (2.6), then for any given initial data $\xi\in BC((-\infty, 0];\mathbb{R}^{n})\cap L^{\hat{\alpha}}((-\infty, 0];\mathbb{R}^{n})$, there exists a unique global solution $x(t, \xi)$ to eq.(1.1). If $p\in (0, 2)$ satisfies
then the solution $x(t, \xi)$ has properties (1.2) and (1.3), except that $\alpha$ is replaced by $2\beta$.
Proof Let $V$ be as defined in (2.5), let $p\in(0, 2)$, and let $\varepsilon>0$ be sufficiently small. We now estimate $\textrm{I}_{1}$-$\textrm{I}_{4}$ in turn. First, by condition $(\textrm{H}_{1})$ and Lemma 2.1,
Next, by condition $(\textrm{H}_{2})$ and Lemma 2.2, for any $u, \delta\in(0, 1)$,
Noting that $p < 2$, by condition $(\textrm{H}_{3})$ we have
It is easy to see that $I_{4}=o(|\varphi(0)|^{2\beta+p})$. Then substituting (4.4)-(4.6) into (4.1) yields
whose form is similar to (3.5), where
We now consider (4.8) in different cases. First, let condition (4.3) hold. If $\alpha<2\beta$, then by (4.8),
Therefore
Suppose $\lambda, \bar{\lambda}>0$ (otherwise the computation is direct). Choosing $\delta=\lambda/(\lambda+\bar{\lambda})\in(0, 1)$, minimizing the right-hand side of the above formula, and using (4.3), we obtain
Since $\varepsilon$ is sufficiently small, we may take $u$ sufficiently close to 1 so that $\bar{a}>0$. Therefore, the form of (4.9) is similar to (3.6).
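The choice $\delta=\lambda/(\lambda+\bar{\lambda})$ is the natural minimizer in this type of computation. For instance, if the $\delta$-dependent part of the right-hand side has the model form $\delta^{-1}\lambda^{2}+(1-\delta)^{-1}\bar{\lambda}^{2}$ (assumed here only for illustration), then setting the derivative with respect to $\delta$ to zero gives $\lambda^{2}/\delta^{2}=\bar{\lambda}^{2}/(1-\delta)^{2}$, i.e., $\delta=\lambda/(\lambda+\bar{\lambda})$, and the corresponding minimum value is
\[
\frac{\lambda^{2}}{\delta}+\frac{\bar{\lambda}^{2}}{1-\delta}\bigg|_{\delta=\lambda/(\lambda+\bar{\lambda})}=(\lambda+\bar{\lambda})^{2}.
\]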
If $\alpha=2\beta$, then by (4.8),
and $\bar{a}$ is as defined in (4.9). Choosing again $\delta=\lambda/(\lambda+\bar{\lambda})$ and using (4.3), we get
Then we also have $\tilde{a}>0$, and the form of (4.10) is similar to (3.6). Thus, by Lemma 3.1, for any given initial data $\xi\in BC((-\infty, 0];\mathbb{R}^{n})\cap L^{\hat{\alpha}}((-\infty, 0];\mathbb{R}^{n})$, there exists a unique global solution $x(t, \xi)$ to eq.(1.1) and this solution satisfies (1.2) and (1.3) except that $\alpha$ is replaced by $2\beta$.
If condition (4.2) holds and $p>0$ is sufficiently small, then condition (4.3) holds. Therefore, for any $\xi\in BC((-\infty, 0];\mathbb{R}^{n})\cap L^{\hat{\alpha}}((-\infty, 0];\mathbb{R}^{n})$, there exists a unique global solution $x(t, \xi)$ to eq.(1.1). The proof is completed.
If we impose condition $(\mbox{F}_{1})$ on the function $f$, we have the following result.
Theorem 4.2 Under Assumption 1.1, if conditions $(\mbox{F}_{1})$ and $(\mbox{F}_{2})$ hold, $p\geq2$, $\alpha\geq2\beta-2$ and
where $R$ is as defined in (2.6), $m=\|Q\|[1+R(p-2)]$, then for any initial data $\xi\in BC((-\infty, 0];\mathbb{R}^{n})\cap L^{\hat{\alpha}}((-\infty, 0];\mathbb{R}^{n})$, there exists a unique global solution $x(t, \xi)$ to eq.(1.1) and this solution satisfies (1.2) and (1.3).
Proof The proof is similar to that of Theorem 4.1, so we omit it.