The class of stationary Gaussian processes is one of the most widely used families of stochastic processes for modeling problems in many branches of the natural and social sciences. The asymptotic properties of stationary Gaussian processes have recently received increasing attention. The limit behavior of stationary Gaussian sequences is well established; see Csáki and Gonchigdanzan [1] and Dudziński [2]. Kratz and Rootzén [3] studied the convergence of extremes of mean square differentiable stationary Gaussian processes and gave bounds on the rate of convergence of the distribution of the maximum. Piterbarg [4] studied the joint distribution of the maxima of a stationary Gaussian process over continuous time and over uniformly spaced discrete time points, and proved that they are asymptotically completely dependent or asymptotically independent under appropriate restrictions. Tan and Hashorva [5] extended this result. Tan [6] obtained an almost sure limit theorem (ASLT) for the maxima of stationary Gaussian processes under some mild conditions.
The ASLT was first introduced independently by Brosamler [7] and Schatte [8] for partial sums. Lacey and Philipp [9] proved the ASLT for partial sums using a method different from those of Brosamler [7] and Schatte [8]. Zhang [10] obtained an ASLT for the maximum of a Gaussian sequence under some conditions on the correlation. Fahrner and Stadtmüller [11] and Cheng et al. [12] independently proved the ASLT, under certain conditions, for the maxima of sequences of independent and identically distributed random variables. Furthermore, Zhang [13] studied the ASLT for the maxima of sequences of independent random variables.
Let $\left\{ X(t), t \geq 0 \right\}$ be a continuous, mean square differentiable stationary Gaussian process with covariance function $r(t)\hat = {\rm{E}}X(s)X(t + s)$ satisfying the following condition:
$$r(t) = 1 - \frac{\lambda}{2}\,t^{2} + o(t^{2}) \quad \text{as } t \to 0, \eqno(1.1)$$
where $\lambda = - r''(0)$. Next, set $M(T) = \max \left\{ X(t), 0 \le t \le T \right\}$ and let $N_u (T)$ be the number of upcrossings of the level $u$ by $\left\{ X(t), 0 \le t \le T \right\}$, so that by Rice's formula (see Lindgren and Leadbetter [14])
$${\rm{E}}N_u (T) = T\mu (u), \qquad \mu (u) \hat = \frac{\sqrt \lambda }{2\pi }\, e^{ - u^2 /2}.$$
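As a numerical sanity check on Rice's formula (an illustration only, not part of the source argument), the following sketch simulates the toy stationary Gaussian process $X(t) = \xi_1\cos t + \xi_2\sin t$ with independent standard normal $\xi_1, \xi_2$, whose covariance is $r(t) = \cos t$ (so $\lambda = -r''(0) = 1$); this process is chosen purely for simulability and does not satisfy the decay conditions imposed below.

```python
import numpy as np

# Monte Carlo check of Rice's formula E N_u(T) = T * sqrt(lambda)/(2*pi) * exp(-u^2/2)
# for the toy process X(t) = xi1*cos(t) + xi2*sin(t), with r(t) = cos(t), lambda = 1.
rng = np.random.default_rng(0)
u, T = 1.0, 2 * np.pi
n_paths, n_grid = 5000, 1000

t = np.linspace(0.0, T, n_grid)
xi = rng.standard_normal((n_paths, 2))
X = xi[:, :1] * np.cos(t) + xi[:, 1:] * np.sin(t)  # one sample path per row

# count discrete upcrossings of the level u along each path
upcrossings = ((X[:, :-1] < u) & (X[:, 1:] >= u)).sum(axis=1)

empirical = upcrossings.mean()
rice = T * np.sqrt(1.0) / (2 * np.pi) * np.exp(-u ** 2 / 2)  # here equals exp(-1/2)
print(empirical, rice)
```

With these parameters the empirical mean upcrossing count and $T\mu(u) = e^{-1/2} \approx 0.607$ agree to Monte Carlo accuracy.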
When ${\rm{E}}N_{u_T} (T) = T\mu (u_T ) \to \tau$ for some constant $\tau>0$, then
$${\rm P}\left( M(T) \le u_T \right) \to e^{ - \tau } \quad \text{as } T \to \infty \eqno(1.2)$$
and
$${\rm P}\left( a_T \left( M(T) - b_T \right) \le x \right) \to \exp \left( - e^{ - x} \right), \quad x \in \mathbb{R}. \eqno(1.3)$$
Here the normalizing constants are defined for all large $T$ by
$$a_T = \sqrt {2\ln T}, \qquad b_T = \sqrt {2\ln T} + \frac{\ln \left( \sqrt \lambda / (2\pi) \right)}{\sqrt {2\ln T} }. \eqno(1.4)$$
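For orientation, here is the standard computation (supplied here as background, not reproduced from the source) linking the level $u_T$ to these constants: solving $T\mu(u_T) = \tau$ with Rice's formula gives

```latex
T\,\frac{\sqrt{\lambda}}{2\pi}\, e^{-u_T^2/2} = \tau
\quad\Longleftrightarrow\quad
u_T^2 = 2\ln T + 2\ln\frac{\sqrt{\lambda}}{2\pi\tau},
```

so that $u_T^2 \sim 2\ln T$, a fact used repeatedly in the proofs below; expanding the square root yields $u_T = \sqrt{2\ln T} + \ln\!\left(\sqrt{\lambda}/(2\pi\tau)\right)/\sqrt{2\ln T} + o\left((\ln T)^{-1/2}\right)$, which matches the shape of $b_T$.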
Tan [6] obtained the following ASLT for the maximum $M(T)$ of a continuous, mean square differentiable stationary Gaussian process $\left\{ X(t), t \ge 0 \right\}$ with weight function $1/t$:
Theorem 1.1 Let $\left\{ {X(t), t \ge 0} \right\}$ be a continuous mean square differentiable stationary Gaussian process with covariance function $r(\cdot)$ satisfying (1.1) and
for some constant $c>0$ and
for some constant $\varepsilon > 0$. Then
(ⅰ) if $T\mu (u_T ) \to \tau$ for $0 < \tau < \infty$, then
$$\lim_{T \to \infty } \frac{1}{\ln T}\int_1^T \frac{1}{t}\, I\left( M(t) \le u_t \right) {\rm d}t = e^{ - \tau } \quad {\rm a.s.};$$
(ⅱ) if $a_T, b_T$ are defined as in (1.4), then
$$\lim_{T \to \infty } \frac{1}{\ln T}\int_1^T \frac{1}{t}\, I\left( a_t \left( M(t) - b_t \right) \le x \right) {\rm d}t = \exp \left( - e^{ - x} \right) \quad {\rm a.s.} \ \text{for all } x \in \mathbb{R}.$$
This result is a continuous-time version of the ASLT for the maxima of stationary Gaussian sequences in [1].
In this paper, we extend the ASLT for the maxima of a continuous, mean square differentiable stationary Gaussian process $\left\{ X(t), t \ge 0 \right\}$ to a weight function different from that in Tan [6]. The rest of the paper is organized as follows. The main result is stated in Section 2. Some preliminary lemmas and the proof of the main result are given in Section 3. The proofs of Lemma 3.1 and Lemma 3.2 are collected in the Appendix.
Theorem 2.1 Let $\left\{ {X(t), t \ge 0} \right\}$ be a continuous mean square differentiable stationary Gaussian process with covariance function $r(\cdot)$ satisfying (1.1), (1.5) and
Suppose $0 < \beta < \frac{1}{2}$ and set
(ⅰ) If $T\mu (u_T ) \to \tau$ for $0 < \tau < \infty$, then
$$\lim_{T \to \infty } \frac{1}{W_T}\int_1^T w_t\, I\left( M(t) \le u_t \right) {\rm d}t = e^{ - \tau } \quad {\rm a.s.} \eqno(2.3)$$
(ⅱ) If $a_T, b_T$ are defined as in (1.4), then
$$\lim_{T \to \infty } \frac{1}{W_T}\int_1^T w_t\, I\left( a_t \left( M(t) - b_t \right) \le x \right) {\rm d}t = \exp \left( - e^{ - x} \right) \quad {\rm a.s.} \ \text{for all } x \in \mathbb{R}. \eqno(2.4)$$
Remark 2.1 Theorem 2.1 remains valid if we replace the weight function $w_t$ by any $w_t^*$ such that $0 \le w_t^ * \le w_t$ and $\displaystyle\int_1^\infty {w_t^ * } {\rm d}t = \infty$.
Remark 2.2 The lower limit of the integrals in (2.3), (2.4) and Remark 2.1 can be replaced by any positive constant.
The following lemmas will be useful in the proof of Theorem 2.1.
Lemma 3.1 Let $\left\{ \xi (t), t \ge 0 \right\}$ be a real-valued random process with continuous and bounded sample paths. Suppose $w_t, W_T$ are defined as in (2.2) and that
$${\mathop{\rm Var}\nolimits} \left( \int_1^T w_t\, \xi (t)\, {\rm d}t \right) \ll \frac{W_T^2}{\left( \ln W_T \right)^{1 + \varepsilon }}$$
for some $\varepsilon > 0$. Here $f(T) \ll g(T)$ denotes that there exists a constant $c > 0$ such that $f(T) \le cg(T)$ for sufficiently large $T$; the symbol $c$ stands for a generic positive constant which may differ from one occurrence to another. Then we have
$$\lim_{T \to \infty } \frac{1}{W_T}\int_1^T w_t \left( \xi (t) - {\rm E}\xi (t) \right) {\rm d}t = 0 \quad {\rm a.s.}$$
Proof See the Appendix.
Lemma 3.2 Suppose $\left\{ X(t), t \ge 0 \right\}$ is a continuous, mean square differentiable stationary Gaussian process with covariance function $r( \cdot )$ satisfying conditions (1.1), (1.5) and (2.1). Let $q = u_t^{ - 1} \left( \ln t \right)^{ - \beta (1 + \varepsilon )}$; then we have
for some constant $\delta > 0$.
Lemma 3.3 (see Tan [6]) Let $\left\{ X(t), t\ge 0 \right\}$ be a stationary Gaussian process with covariance function $r( \cdot )$ satisfying condition (1.1), and suppose $T\mu (u_T ) \to \tau$ for $0 < \tau < \infty$. Then for sufficiently large $s$ and $t$ with $s<t$, we have
Lemma 3.4 Let $\left\{ X(t), t \ge 0 \right\}$ be a stationary Gaussian process with covariance function $r( \cdot )$ satisfying (1.1), (1.5), (2.1) and $T\mu (u_T ) \to \tau$ for $0 < \tau < \infty$. Set $q = u_t^{ - 1} \left( \ln t \right)^{ - \beta (1 + \varepsilon )}$. Then for sufficiently large $s$ and $t$ with $s<t$, we have
Proof Using Lemmas 3.2 and 3.3, the proof of Lemma 3.4 is similar to that of Lemma 3.5 in Tan [6].
Proof of Theorem 2.1 Case (ⅰ) Let
Notice that $\eta (t)$ is a real-valued random process with continuous and bounded sample paths and ${\mathop{\rm Var}\nolimits} \left( {\eta (t)} \right) < 1$. First, we estimate ${\mathop{\rm Var}\nolimits} \left( {\displaystyle\int_1^T {w_t \eta (t){\rm d} t} } \right)$. Clearly
Note that by Lemmas 3.3 and 3.4, for $s<t$, we have
Consequently
For the second and first terms, we have
here
By an elementary calculation following Wu and Chen [15], we obtain
Since $0<\beta<\frac{1}{2}$, we have $\frac{1 - 2\beta }{2\beta } > 0$. Set $\varepsilon \hat = \frac{1 - 2\beta }{2\beta }$; then $\frac{1}{2\beta } = 1 + \varepsilon$. Thus
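The substitution $\varepsilon = \frac{1-2\beta}{2\beta}$ is elementary; as a throwaway numerical check (illustration only):

```python
# Check the identity used in the proof: with eps := (1 - 2*beta)/(2*beta),
# one has eps > 0 and 1/(2*beta) = 1 + eps for every beta in (0, 1/2).
for beta in (0.05, 0.1, 0.25, 0.4, 0.49):
    eps = (1 - 2 * beta) / (2 * beta)
    assert eps > 0                                 # beta < 1/2 forces eps > 0
    assert abs(1 / (2 * beta) - (1 + eps)) < 1e-12
print("identity 1/(2*beta) = 1 + eps verified on sample betas")
```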
So we obtain
It remains only to estimate the term $S_{T, 3}$ in (3.3). Using (3.5), we get
Thus, we can conclude from (3.3), (3.4), (3.6), (3.7) that
Next, note that since $3\beta (1 + \varepsilon ) > 0$, the condition $r(T)(\ln T)^{1 + 3\beta (1 + \varepsilon )} = O(1)$ implies $r(T)\ln T = O\left( (\ln T)^{ - 3\beta (1 + \varepsilon )} \right) = o(1)$. From (1.3) we have
Clearly, we have
Now, the result of the theorem follows by Lemma 3.1 and (3.8).
Case (ⅱ) Case (ⅱ) is a special case of Case (ⅰ), obtained by taking $u_t = x/a_t + b_t$ for each fixed $x$, so that $t\mu (u_t ) \to e^{ - x}$.
Proof of Lemma 3.1
Set
Clearly as $T \to \infty$,
Now, we prove as $T \to \infty$ that
Let $[T]_k = \inf \left\{ [T] : W_{[T]} > \exp \left( k^{1 - \eta } \right) \right\}$ for some $0 < \eta < \frac{\varepsilon }{1 + \varepsilon }$; then $W_{[T]_k } \ge \exp \left( k^{1 - \eta } \right)$ and $W_{[T]_k - 1} < \exp \left( k^{1 - \eta } \right)$. By (3.5), we get
that is
We have
Since $\eta < \frac{\varepsilon }{1 + \varepsilon }$ implies $1 - \eta > \frac{1}{1 + \varepsilon }$, i.e., $(1 - \eta )(1 + \varepsilon ) > 1$, for sufficiently large $k$ we get
This implies
Obviously, for any given $[T]$ there is an integer $k$ such that $[T]_k < [T] \le [T]_{k + 1}$; hence, as $T \to \infty$,
From $\frac{W_{[T]_{k + 1} } }{W_{[T]_k } } \sim \frac{\exp \left( \left( k + 1 \right)^{1 -\eta } \right)}{\exp \left( k^{1 -\eta } \right)} = \exp \left( k^{1 -\eta } \left( \left( 1 + \frac{1}{k} \right)^{1 - \eta } - 1 \right) \right) \sim \exp \left( \left( 1 - \eta \right)k^{ - \eta } \right) \to 1$, (4.3) holds.
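The asymptotic equivalence above, $\exp\left(k^{1-\eta}\left((1+1/k)^{1-\eta}-1\right)\right) \sim \exp\left((1-\eta)k^{-\eta}\right) \to 1$, can also be checked numerically; the sketch below (with the arbitrary illustrative choice $\eta = 0.25$) compares the exact exponent with its claimed approximation.

```python
import math

def exact_exponent(k, eta):
    # exponent in exp(k^(1-eta) * ((1 + 1/k)^(1-eta) - 1))
    return k ** (1 - eta) * ((1 + 1 / k) ** (1 - eta) - 1)

eta = 0.25  # any eta in (0, 1); chosen only for illustration
ratios = []
for k in (10 ** 2, 10 ** 4, 10 ** 6):
    exact = exact_exponent(k, eta)
    approx = (1 - eta) * k ** (-eta)   # the claimed asymptotic (1 - eta) * k^(-eta)
    ratios.append(math.exp(exact))
    print(k, math.exp(exact), exact / approx)

# the ratio exp(exact exponent) decreases toward 1 as k grows
assert ratios[0] > ratios[1] > ratios[2] > 1.0
```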
Now, the result of Lemma 3.1 follows by (4.1), (4.2) and (4.3).
Proof of Lemma 3.2 Let $\upsilon (\delta ) = \sup _{\delta < iq} \left\{ r(iq) \right\}$. Since (1.1) holds and $\left\{ X(t), t \ge 0 \right\}$ is a stationary Gaussian process, we have $\upsilon (\delta ) < 1$. Further, let $\alpha$ satisfy $0 < \alpha < \frac{1 - \upsilon (\delta )}{1 + \upsilon (\delta )}$. For all sufficiently large $t$, we split the sum in (3.1) at $t^{\alpha}$ as
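For background, this splitting rests on the normal comparison inequality (see Leadbetter, Lindgren and Rootzén; we state the standard stationary form for the grid $\{iq\}$ rather than quoting the source's display): with $n = \lfloor t/q \rfloor$,

```latex
\left| \mathrm{P}\Big( \max_{1 \le i \le n} X(iq) \le u \Big) - \Phi(u)^{\,n} \right|
\;\le\; c\, n \sum_{k=1}^{n} |r(kq)|
\exp\!\left( - \frac{u^{2}}{1 + |r(kq)|} \right),
```

so bounding a sum of exactly this type (with $u = u_t$) is what Lemma 3.2 provides.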
Using the facts $u_t^2 \sim 2\ln t$ and $q = u_t^{ - 1} \left( \ln t \right)^{ - \beta (1 + \varepsilon )}$, we have
Since $\alpha < \frac{{1 - \upsilon (\delta )}}{{1 + \upsilon (\delta )}}$, we get as $t \to \infty$ that
uniformly for $s \in (0, t]$. Noting that $r(t)\left( \ln t \right)^{1 + 3\beta (1 + \varepsilon )} = O(1)$ and $u_t^2 \sim 2\ln t$, we get
and as $t \to \infty$, we have
The result of Lemma 3.2 follows by (4.4), (4.5) and (4.6).