J. Math. (PRC), 2015, Vol. 34, Issue (6): 1363-1371
AN EXTENSION OF THE ALMOST SURE LIMIT THEOREM FOR THE MAXIMA OF SMOOTH STATIONARY GAUSSIAN PROCESS
WU Xin-ye, WU Qun-ying     
College of Science, Guilin University of Technology, Guilin 541004, China
Abstract: In this paper, we study the limit behavior of the maxima of continuous mean square differentiable stationary Gaussian processes. Using a weight function different from that in Tan (2013), we obtain an almost sure limit theorem for the maxima of continuous mean square differentiable stationary Gaussian processes under some mild conditions, which extends the corresponding results in Tan (2013).
Key words: stationary Gaussian process; almost sure; limit theorem; maximum; weight function
1 Introduction

The class of stationary Gaussian processes is one of the most widely used families of stochastic processes for modeling problems in many branches of the natural and social sciences. The asymptotic properties of stationary Gaussian processes have recently received increasing attention. The limit behavior of stationary Gaussian sequences is well established; see Csáki and Gonchigdanzan [1] and Dudziński [2]. Kratz and Rootzén [3] studied the convergence of extremes of mean square differentiable stationary Gaussian processes and gave bounds on the rate of convergence of the distribution of the maximum. Piterbarg [4] studied the joint distribution of the maxima of a stationary Gaussian process in continuous time and on a uniform grid of discrete time points, and proved that they are asymptotically completely dependent or asymptotically independent under appropriate restrictions. Tan and Hashorva [5] extended this result. Tan [6] obtained an almost sure limit theorem (ASLT) for the maxima of stationary Gaussian processes under some mild conditions.

The ASLT was first introduced independently by Brosamler [7] and Schatte [8] for partial sums. Lacey and Philipp [9] proved the ASLT for partial sums using a method different from those of Brosamler [7] and Schatte [8]. Zhang [10] obtained an ASLT for the maximum of a Gaussian sequence under some conditions on the correlation. Fahrner and Stadtmüller [11] and Cheng et al. [12] independently proved the ASLT under certain conditions for the maxima of sequences of independent and identically distributed random variables. Furthermore, Zhang [13] studied the ASLT for the maxima of sequences of independent random variables.

Let $\left\{ {X(t), t \geq 0} \right\}$ be a continuous mean square differentiable stationary Gaussian process with covariance function $r(t)\hat = {\rm{E}}X(s)X(t + s)$ satisfying the following condition

$\begin{equation} r(t) = 1 - \frac{\lambda }{2}\left| t \right|^2 + o(\left| t \right|^2 ), {\kern 1pt} t \to 0, \label{1} \end{equation}$ (1.1)

where $\lambda = - r''(0)$. Next, set $M(T) = \max \left\{ {X(t), 0 \le t \le T} \right\}$ and let $N_u (T)$ be the number of upcrossings of the level $u$ by $\left\{ {X(t), 0 \le t \le T} \right\}$, so that by Rice's formula (see, Lindgren and Leadbetter [14]

$\begin{equation} \mu = \mu(u) = {\rm{E}}N_u(1) = \frac{1}{2\pi}\lambda^{1/2}{\rm{e}}^{-u^2/2}. \label{2} \end{equation}$ (1.2)
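For instance, if $\lambda = 1$ and $u = 3$, then $\mu = \frac{1}{2\pi}{\rm{e}}^{-4.5} \approx 1.77 \times 10^{-3}$, so on an interval of length $T = 1000$ one expects about $1.77$ upcrossings of the level $u$.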

If ${\rm{E}}N_{u_T} (T) = T\mu (u_T ) \to \tau$ for some constant $\tau>0$, then

${\mathop{\rm P}\nolimits} \left( {M(T) \le u_T } \right) \to {\rm{e}}^{ - \tau }, \quad T \to \infty $

and

$\begin{equation}\label{3} {\rm P}\left( {a_T \left( {M(T) - b_T } \right) \le x} \right) \to \exp ( - {\rm{e}}^{ - x} ), T \to \infty. \end{equation}$ (1.3)

Here the normalizing constants are defined for all large $T$ by

${{a}_{T}}=\sqrt{2\ln T}, \quad {{b}_{T}}={{a}_{T}}+a_{T}^{-1}\ln \left( \frac{{{\lambda }^{1/2}}}{2\pi } \right).$ (1.4)
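For completeness, the constants in (1.4) can be read off from (1.2): solving $T\mu (u_T ) = \tau$ exactly gives

$T\frac{\lambda^{1/2}}{2\pi}{\rm{e}}^{-u_T^2/2} = \tau \Longleftrightarrow u_T^2 = 2\ln T + 2\ln\frac{\lambda^{1/2}}{2\pi\tau} = a_T^2\left( 1 + \frac{2}{a_T^2}\ln\frac{\lambda^{1/2}}{2\pi\tau} \right),$

so that $u_T = a_T + a_T^{-1}\ln\frac{\lambda^{1/2}}{2\pi\tau} + O(a_T^{-3})$. Taking $\tau = {\rm{e}}^{-x}$ yields $u_T = b_T + xa_T^{-1} + O(a_T^{-3})$, which is exactly the level appearing in (1.3).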

Tan [6] obtained the ASLT for the maximum $M(T)$ of the continuous mean square differentiable stationary Gaussian process $\left\{ {X(t), t \ge 0} \right\}$ with weight function $1/t$, which reads as follows:

Theorem 1.1  Let $\left\{ {X(t), t \ge 0} \right\}$ be a continuous mean square differentiable stationary Gaussian process with covariance function $r(\cdot)$ satisfying (1.1) and

$\begin{equation}\label{5} r''(t) - r''(0) \le ct^2, t\geq 0 \end{equation}$ (1.5)

for some constant $c>0$ and

$ r(t)(\ln t)(\ln \ln t)^{3(1 + \varepsilon )} = O(1)$

for some constant $\varepsilon > 0$. Then

(ⅰ) if $T\mu (u_T ) \to \tau$ for $0 < \tau < \infty$,

$\mathop {\lim }\limits_{T \to \infty } \frac{1}{{\ln T}}\int_1^T {\frac{1}{t}} I\left( {\mathop {\max }\limits_{1 \le s \le t} X(s) \le u_t } \right){\rm d} t = {\rm{e}}^{ - \tau } \quad {\text{a.s.}}.$

(ⅱ) if $a_t, b_t$ are defined as in (1.4),

$\mathop {\lim }\limits_{T \to \infty } \frac{1}{{\ln T}}\int_1^T {\frac{1}{t}} I\left( {a_t \left( {\mathop {\max }\limits_{1 \le s \le t} X(s) - b_t } \right) \le x} \right){\rm d} t = \exp \left( { - {\rm{e}}^{ - x} } \right) \quad {\text{a.s.}}.$

This result is a continuous version of the ASLT for the maximum of stationary Gaussian sequences in [1].

In this paper, we extend the ASLT for the maxima of the continuous mean square differentiable stationary Gaussian process $\left\{ {X(t), t \ge 0} \right\}$ by using a weight function different from that in Tan [6]. The rest of the paper is organized as follows. The main result is stated in Section 2. Some preliminary lemmas and the proof of the main result are given in Section 3. The proofs of Lemma 3.1 and Lemma 3.2 are collected in the Appendix.

2 Main Result

Theorem 2.1  Let $\left\{ {X(t), t \ge 0} \right\}$ be a continuous mean square differentiable stationary Gaussian process with covariance function $r(\cdot)$ satisfying (1.1), (1.5) and

$\begin{equation}\label{6} r(t)(\ln t)^{1 + 3\beta (1 + \varepsilon )} = O(1) \end{equation}$ (2.1)

for some constant $\varepsilon > 0$. Suppose $0 < \beta < \frac{1}{2}$ and set

$\begin{equation}\label{7} w_t = \frac{{\exp \left( {(\ln t)^\beta } \right)}}{t}, \quad W_T = \int_1^T {w_t } {\rm d} t. \end{equation}$ (2.2)

(ⅰ) If $T\mu (u_T ) \to \tau$ for $0 < \tau < \infty$, then

$\begin{equation}\label{8} \mathop {\lim }\limits_{T \to \infty } \frac{1}{{W_T }}\int_1^T {w_t } I\left( {\mathop {\max }\limits_{1 \le s \le t} X(s) \le u_t } \right){\rm d} t = {\mathop{\rm e}\nolimits} ^{ - \tau } \quad {\text{a.s.}}. \end{equation}$ (2.3)

(ⅱ) If $a_t, b_t$ are defined as in (1.4), then

$\begin{equation}\label{9} \mathop {\lim }\limits_{T \to \infty } \frac{1}{{W_T }}\int_1^T {w_t } I\left( {a_t \left( {\mathop {\max }\limits_{1 \le s \le t} X(s) - b_t } \right) \le x} \right){\rm d} t = \exp \left( { - {\mathop{\rm e}\nolimits} ^{ - x} } \right) \quad \text{a.s.}. \end{equation}$ (2.4)
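The substitution $y = \ln t$ makes the size of the weight in (2.2) transparent:

$W_T = \int_1^T {\rm{e}}^{(\ln t)^\beta}\frac{{\rm d} t}{t} = \int_0^{\ln T} {\rm{e}}^{y^\beta}{\rm d} y,$

which diverges faster than the $\ln T$ produced by the weight $1/t$ of Theorem 1.1, so the averages in (2.3) and (2.4) are nondegenerate; precise asymptotics for $W_T$ are given in (3.5) below.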

Remark 2.1  Theorem 2.1 remains valid if we replace the weight function $w_t$ by any $w_t^*$ such that $0 \le w_t^ * \le w_t$ and $\displaystyle\int_1^\infty {w_t^ * } {\rm d}t = \infty$, with $W_T$ replaced by $W_T^* = \displaystyle\int_1^T w_t^*\,{\rm d}t$.
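For example, $w_t^* = 1/t$ satisfies $0 \le w_t^* \le w_t$ and $\displaystyle\int_1^\infty t^{-1}{\rm d}t = \infty$, so under the conditions of Theorem 2.1 the logarithmic average with weight $1/t$ used in Theorem 1.1 is covered as a special case.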

Remark 2.2  The lower limit of integral in (2.3), (2.4) and Remark 2.1 can be replaced by any positive constant.
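The statement of Theorem 2.1 (ⅰ) can also be examined numerically. The following is a minimal simulation sketch, assuming the illustrative covariance $r(t) = {\rm{e}}^{-t^2/2}$ (so that $\lambda = -r''(0) = 1$ and (1.1) holds, while the rapid decay of $r$ takes care of (2.1)) and the illustrative parameters $\tau = 1$, $\beta = 0.4$ and lower integration limit $t_0 = 10$ (Remark 2.2 allows any positive lower limit). It simulates one sample path by circulant embedding and evaluates the weighted average on the left-hand side of (2.3):

```python
import numpy as np

rng = np.random.default_rng(0)

# Grid and target covariance r(t) = exp(-t^2/2), so lambda = -r''(0) = 1.
dt, T = 0.05, 2000.0
n = int(T / dt) + 1
grid = np.arange(n) * dt
r = np.exp(-grid**2 / 2)

# Circulant embedding (Dietrich-Newsam): eigenvalues of the circulant
# extension of the covariance; tiny negative values (if any) are clipped.
c = np.concatenate([r, r[-2:0:-1]])
lam = np.clip(np.fft.fft(c).real, 0, None)
m = len(c)
z = rng.standard_normal(m) + 1j * rng.standard_normal(m)
X = np.fft.fft(np.sqrt(lam / m) * z).real[:n]  # one stationary path on the grid

# Level u_t solving t * mu(u_t) = tau via Rice's formula (1.2) with lambda = 1.
tau, beta = 1.0, 0.4
t = grid[1:]                      # drop t = 0
M = np.maximum.accumulate(X)[1:]  # running max over [0, t]; the difference
                                  # from [1, t] is negligible here
t0 = 10.0                         # start above 2*pi*tau so the log is positive
mask = t >= t0
u = np.sqrt(2 * np.log(t[mask] / (2 * np.pi * tau)))
w = np.exp(np.log(t[mask]) ** beta) / t[mask]
ind = (M[mask] <= u).astype(float)

W = np.sum(w) * dt                # W_T by the rectangle rule
avg = np.sum(w * ind) * dt / W    # left-hand side of (2.3)
print(f"weighted average = {avg:.3f},  exp(-tau) = {np.exp(-tau):.3f}")
```

Since logarithmic-type averages converge very slowly, a single path of length $T = 2000$ can only be expected to agree roughly with ${\rm{e}}^{-\tau} \approx 0.368$; averaging over several independent paths, or increasing $T$, should bring the output closer to the limit.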

3 Proof

The following lemmas will be useful in the proof of Theorem 2.1.

Lemma 3.1  Let $\left\{ {\xi (t), t \ge 0} \right\}$ be a real-valued random process with continuous and bounded sample paths. If $w_t, W_T$ are defined as in (2.2), and

$\begin{equation}\label{10} {\mathop{\rm Var}\nolimits} \left( {\int_1^T {w_t \xi (t){\rm d} t} } \right) \ll \left( {W_T } \right)^2 \left( {\ln W_T } \right)^{ - (1 + \varepsilon )} \end{equation}$ (3.1)

for some $\varepsilon > 0$, where $f(T) \ll g(T)$ denotes that there exists a constant $c > 0$ such that $f(T) \le cg(T)$ for sufficiently large $T$ (the symbol $c$ stands for a generic positive constant which may differ from one place to another), then we have

$\begin{equation}\label{11} \mathop {\lim }\limits_{T \to \infty } \frac{1}{{W_T }}\int_1^T {w_t \left( {\xi (t) - {\rm{E}}\xi (t)} \right)} {\rm d} t = 0 \quad {\text{a.s.}}. \end{equation}$ (3.2)

Proof  See the Appendix.

Lemma 3.2  Suppose $\left\{ {X(t), t \ge 0} \right\}$ is a continuous mean square differentiable stationary Gaussian process with covariance function $r( \cdot )$ satisfying conditions (1.1), (1.5) and (2.1), and let $q = u_t^{ - 1} \left( {\ln t} \right)^{ - \beta (1 + \varepsilon )}$. Then we have

$\mathop {\sup }\limits_{s \in (0, t)} \frac{s}{q}\sum\limits_{\delta \le iq \le t} {\left| {r(iq)} \right|} \exp \left( { - \frac{{u_s^2 + u_t^2 }}{{2\left( {1 + \left| {r(iq)} \right|} \right)}}} \right) \ll \left( {\ln t} \right)^{ - \beta (1 + \varepsilon )}$

for some constant $\delta > 0$.

Proof  See the Appendix.

Lemma 3.3 (see Tan [6])  Let $\left\{ X(t), t\ge 0 \right\}$ be a stationary Gaussian process with covariance function $r( \cdot )$ satisfying condition (1.1), and suppose $T\mu (u_T ) \to \tau$ for some $0 < \tau < \infty$. For large enough $s$ and $t$ with $s<t$, we have

${\rm{E}}\left| I\left( M\left( [1, t] \right)\le {{u}_{t}} \right)-I\left( M\left( [s, t] \right)\le {{u}_{t}} \right) \right|\ll \frac{s}{t}.$

Lemma 3.4  Let $\left\{ {X(t), t \ge 0} \right\}$ be a stationary Gaussian process with covariance function $r( \cdot )$ satisfying (1.1), (1.5), (2.1) and $T\mu (u_T ) \to \tau$ for some $0 < \tau < \infty$. Set $q = u_t^{ - 1} \left( {\ln t} \right)^{ - \beta (1 + \varepsilon )}$. For large enough $s$ and $t$ with $s<t$, we have

$\left| \text{Cov}\left( I\left( M\left( [1, s] \right)\le {{u}_{s}} \right), I\left( M\left( [s, t] \right)\le {{u}_{t}} \right) \right) \right|\ll {{s}^{-1}}{{\left( \ln s \right)}^{-1/2}}+{{\left( \ln t \right)}^{-\beta (1+\varepsilon )}}.$

Proof  Using Lemma 3.2 and Lemma 3.3, the proof of Lemma 3.4 is similar to that of Lemma 3.5 of Tan [6].

Proof of Theorem 2.1  Case (ⅰ)  Let

$\eta (t) = I\left( {M\left( {[1, t]} \right) \le u_t } \right) - {\rm{P}}\left( {M\left( {[1, t]} \right) \le u_t } \right).$

Notice that $\eta (t)$ is a real-valued random process with continuous and bounded sample paths and ${\mathop{\rm Var}\nolimits} \left( {\eta (t)} \right) < 1$. First, we estimate ${\mathop{\rm Var}\nolimits} \left( {\displaystyle\int_1^T {w_t \eta (t){\rm d} t} } \right)$. Clearly

${\mathop{\rm Var}\nolimits} \left( {\int_1^T {w_t \eta (t){\rm d} t} } \right) \le {\mathop{\rm E}\nolimits} \left( {\int_1^T {w_t \eta (t){\rm d} t} } \right)^2 = 2\iint_{1\leq s<t\leq T}w_sw_t{\rm E}\left( {\eta (s)\eta (t)} \right){\rm d} t{\rm d} s.$

Note that by Lemmas 3.3 and 3.4, for $s<t$, we have

$\begin{array}{*{35}{l}} {}&\left| \text{E}\left( \eta (s)\eta (t) \right) \right| \\ =&\left| \text{Cov}\left( I\left( M\left( [1, s] \right)\le {{u}_{s}} \right), I\left( M\left( [1, t] \right)\le {{u}_{t}} \right) \right) \right| \\ \le &\left| \text{Cov}\left( I\left( M\left( [1, s] \right)\le {{u}_{s}} \right), I\left( M\left( [1, t] \right)\le {{u}_{t}} \right)-I\left( M\left( [s, t] \right)\le {{u}_{t}} \right) \right) \right| \\ {}&+\left| \text{Cov}\left( I\left( M\left( [1, s] \right)\le {{u}_{s}} \right), I\left( M\left( [s, t] \right)\le {{u}_{t}} \right) \right) \right| \\ \ll &\text{E}\left| I\left( M\left( [1, t] \right)\le {{u}_{t}} \right)-I\left( M\left( [s, t] \right)\le {{u}_{t}} \right) \right| +\left| \text{Cov}\left( I\left( M\left( [1, s] \right)\le {{u}_{s}} \right), I\left( M\left( [s, t] \right)\le {{u}_{t}} \right) \right) \right| \\ \ll &\frac{s}{t}+{{s}^{-1}}{{\left( \ln s \right)}^{-1/2}}+{{\left( \ln t \right)}^{-\beta (1+\varepsilon )}}. \\ \end{array}$

Consequently

$\begin{align} &\text{Var}\left( \int_{1}^{T}{{{w}_{t}}\eta (t)\text{d}t} \right) \\ &\ll \iint_{1\le s<t\le T}{{w}_{s}}{{w}_{t}}\frac{s}{t}\text{d}t\text{d}s +\iint_{1\le s<t\le T}\frac{{{\text{e}}^{{{(\ln s)}^{\beta }}}}{{\text{e}}^{{{(\ln t)}^{\beta }}}}}{{{s}^{2}}{{\left( \ln s \right)}^{1/2}}t}\text{d}t\text{d}s +\iint_{1\le s<t\le T}\frac{{{\text{e}}^{{{(\ln s)}^{\beta }}}}{{\text{e}}^{{{(\ln t)}^{\beta }}}}}{st{{\left( \ln t \right)}^{\beta (1+\varepsilon )}}}\text{d}t\text{d}s \\ &\hat{=}{{S}_{T, 1}}+{{S}_{T, 2}}+{{S}_{T, 3}}. \\ \end{align}$ (3.3)

For the second and first terms, noting that $\int_1^\infty {\text{e}}^{(\ln s)^\beta} s^{-2}(\ln s)^{-1/2}\,{\rm d}s < \infty$ since ${\text{e}}^{(\ln s)^\beta} = o\left( s^{1/2} \right)$ as $s \to \infty$, we have

$\begin{align} &{{S}_{T, 2}}=\iint_{1\le s<t\le T}\frac{{{\text{e}}^{{{(\ln s)}^{\beta }}}}{{\text{e}}^{{{(\ln t)}^{\beta }}}}}{{{s}^{2}}{{\left( \ln s \right)}^{1/2}}t}\text{d}t\text{d}s=\int_{1}^{T}{\frac{{{\text{e}}^{{{(\ln t)}^{\beta }}}}}{t}}\left( \int_{1}^{t}{\frac{{{\text{e}}^{{{(\ln s)}^{\beta }}}}}{{{s}^{2}}{{\left( \ln s \right)}^{1/2}}}\text{d}s} \right)\text{d}t \\ &\ll \int_{1}^{T}{\frac{{{\text{e}}^{{{(\ln t)}^{\beta }}}}}{t}}\text{d}t\ll {{W}_{T}}\ll \frac{W_{T}^{2}}{{{\left( \ln {{W}_{T}} \right)}^{1+\varepsilon }}} \\ \end{align}$ (3.4)

and

$\begin{eqnarray*} S_{T, 1}&=&\displaystyle\iint_{1\le s<t\le T} w_sw_t\frac{s}{t}{\rm d} t{\rm d} s\\ &\le&\displaystyle\iint_{1\le s<t\le T, \frac{s}{t}\le{\left( {\ln W_T } \right)^{ - 2} }} w_sw_t\frac{s}{t}{\rm d} t{\rm d} s+\iint_{1\le s<t\le T, \frac{s}{t}>{\left( {\ln W_T } \right)^{ - 2} }} w_sw_t{\rm d} t{\rm d} s\\ &\hat =& \displaystyle S_{T, 1}^{(1)} + S_{T, 1}^{(2)}, \end{eqnarray*}$

here

$\begin{array}{l} S_{T, 1}^{(1)} =\displaystyle\iint_{1\le s<t\le T, \frac{s}{t}\le{\left( {\ln W_T } \right)^{ - 2} }} w_sw_t\frac{s}{t}{\rm d} t{\rm d} s\\ \qquad\ll\displaystyle\iint_{1\le s<t\le T, \frac{s}{t}\le{\left( {\ln W_T } \right)^{ - 2} }} w_sw_t\frac{1}{{\left( {\ln W_T } \right)^2 }} {\rm d} t{\rm d} s\\ \qquad\ll \displaystyle\frac{{W_T^2 }}{{\left( {\ln W_T } \right)^2 }} \ll \frac{{W_T^2 }}{{\left( {\ln W_T } \right)^{1 + \varepsilon } }}. \end{array}$

By Wu and Chen [15], an elementary calculation gives

$\begin{equation}\label{14} W_T \sim \frac{1}{\beta }\left( {\ln T} \right)^{1 - \beta } \exp \left( \left( \ln T \right)^\beta \right), \quad \ln W_T \sim \left( {\ln T} \right)^\beta, \quad \ln \ln W_T \sim \beta\ln \ln T. \end{equation}$ (3.5)
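These relations can also be checked directly: with $y = \ln t$ we have $W_T = \displaystyle\int_0^{\ln T} {\rm{e}}^{y^\beta}{\rm d}y$, and since

$\frac{\rm d}{{\rm d}y}\left( \frac{y^{1-\beta}}{\beta}{\rm{e}}^{y^\beta} \right) = {\rm{e}}^{y^\beta}\left( 1 + \frac{1-\beta}{\beta}y^{-\beta} \right) = {\rm{e}}^{y^\beta}\left( 1 + o(1) \right), \quad y \to \infty,$

L'Hôpital's rule gives the first relation, and taking logarithms once and twice gives the other two.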

From $0<\beta<\frac{1}{2}$, we know that $\frac{{1 - 2\beta }}{{2\beta }} > 0$. Set $\varepsilon \hat = \frac{{1 - 2\beta }}{{2\beta }}$, then $\frac{1}{{2\beta }} = 1 + \varepsilon$. Thus

$\begin{array}{l} S_{T, 1}^{(2)} =\displaystyle\iint_{1\leq s<t\leq T, \frac{s}{t}>{\left( {\ln W_T } \right)^{ - 2} }} w_s w_t{\rm d} t{\rm d} s \ll \int_1^T {w_s } \int_s^{s\left( {\ln W_T } \right)^2 } {\frac{{{\text{e}}^{\ln ^\beta T} }}{t}} {\rm d} t{\rm d} s\\ \qquad\ll \frac{{W_T }}{{\left( {\ln W_T } \right)^{\frac{{1 - \beta }}{\beta }} }}\displaystyle\int_1^T {w_s } \ln \ln W_T {\rm d} s = \frac{{W_T ^2 \ln \ln W_T }}{{\left( {\ln W_T } \right)^{\frac{{1 - \beta }}{\beta }} }} = \frac{{W_T ^2 }}{{\left( {\ln W_T } \right)^{\frac{1}{{2\beta }}} }} \cdot \frac{{\ln \ln W_T }}{{\left( {\ln W_T } \right)^{\frac{{1 - 2\beta }}{{2\beta }}} }}\\ \qquad \ll \displaystyle\frac{{W_T ^2 }}{{\left( {\ln W_T } \right)^{\frac{1}{{2\beta }}} }} = \frac{{W_T ^2 }}{{\left( {\ln W_T } \right)^{1 + \varepsilon } }}. \end{array}$

So we obtain

$\begin{equation}\label{15} S_{T, 1} =\iint_{1\leq s<t\leq T}w_sw_t\frac{s}{t}{\rm d} t{\rm d} s \ll W_T ^2 \left( {\ln W_T } \right)^{ - (1 + \varepsilon )}. \end{equation}$ (3.6)

It remains only to estimate the term $S_{T, 3}$ in (3.3). Using (3.5), we get

$\begin{equation}\label{16} \begin{array}{l} S_{T, 3} =\displaystyle\iint_{1\leq s<t\leq T}{\frac{{{\text{e}}^{(\ln s)^\beta } {\text{e}}^{(\ln t)^\beta } }}{{st\left( {\ln t} \right)^{\beta (1 + \varepsilon )} }}} {\rm d} t{\rm d} s= \displaystyle\int_1^T {\frac{{{\text{e}}^{(\ln t)^\beta } }}{{t\left( {\ln t} \right)^{\beta (1 + \varepsilon )} }}} \int_1^t {\frac{{{\text{e}}^{(\ln s)^\beta } }}{s}} {\rm d} s{\rm d} t\\ \qquad \ll \displaystyle\int_1^T {\frac{{{\text{e}}^{(\ln t)^\beta } }}{{t\left( {\ln t} \right)^{\beta (1 + \varepsilon )} }}} \left( {\ln t} \right)^{1 - \beta } {\text{e}}^{(\ln t)^\beta } {\rm d} t= \displaystyle\int_0^{\ln T} {y^{1 - 2\beta - \beta \varepsilon } } {\text{e}}^{2y^\beta } {\rm d} y\\ \qquad\ll \displaystyle\int_0^{\ln T} {\left( {\left( {2 - 3\beta - \beta \varepsilon } \right)y^{1 - 3\beta - \beta \varepsilon } {\text{e}}^{2y^\beta } + 2\beta y^{1 - 2\beta - \beta \varepsilon } {\text{e}}^{2y^\beta } } \right)} {\rm d} y\\ \qquad= \displaystyle\int_0^{\ln T} {{\rm d}\left( {y^{2 - 3\beta - \beta \varepsilon } {\text{e}}^{2y^\beta } } \right)}\ll \displaystyle\left( {\ln T} \right)^{2 - 3\beta - \beta \varepsilon } {\text{e}}^{2\left( {\ln T} \right)^\beta }\ll \displaystyle\frac{{W_T^2 }}{{\left( {\ln W_T } \right)^{1 + \varepsilon } }}, \end{array} \end{equation}$ (3.7)

where the first inequality uses $\int_1^t {\text{e}}^{(\ln s)^\beta} s^{-1}{\rm d}s = W_t \ll (\ln t)^{1-\beta}{\text{e}}^{(\ln t)^\beta}$ by (3.5), and $2 - 3\beta - \beta\varepsilon = \frac{3 - 4\beta}{2} > 0$.

Thus, we can conclude from (3.3), (3.4), (3.6), (3.7) that

${\mathop{\rm Var}\nolimits} \left( {\int_1^T {w_t \eta (t){\rm d} t} } \right) \ll \frac{{W_T^2 }}{{\left( {\ln W_T } \right)^{1 + \varepsilon } }}.$

Next, note that $r(T)(\ln T)^{1 + 3\beta (1 + \varepsilon )} = O(1)$ implies $r(T)(\ln T) = o(1)$. From (1.3) we have

$\mathop {\lim }\limits_{t \to \infty } P\left( {M[1, t] \le u_t } \right) = \mathop {\lim }\limits_{t \to \infty } P\left( {M[0, t] \le u_t } \right) = {\mathop{\rm e}\nolimits} ^{ - \tau }.$

Hence, since $W_T \to \infty$, we obtain

$\begin{equation}\label{17} \mathop {\lim }\limits_{T \to \infty } \frac{1}{{W_T }}\int_1^T {w_t } P\left( {M\left( {[1, t]} \right) \le u_t } \right){\rm d} t = {\text e} ^{ - \tau }. \end{equation}$ (3.8)

Now, the result of the theorem follows by Lemma 3.1 and (3.8).

Case (ⅱ)  Case (ⅱ) is a special case of Case (ⅰ): taking $u_t = b_t + xa_t^{-1}$, it follows from (1.2) and (1.4) that $t\mu (u_t ) \to {\rm{e}}^{-x}$, so Case (ⅰ) applies with $\tau = {\rm{e}}^{-x}$.

4 Appendix

Proof of Lemma 3.1

Set

$\begin{equation}\label{18} \begin{array}{l} \quad\displaystyle\frac{1}{{W_T }}\int_1^T {w_t } \left( {\xi (t) - {\mathop{\rm E}\nolimits} \xi (t)} \right){\rm d} t\\ = \displaystyle\frac{{W_{[T]} }}{{W_T }} \cdot \frac{1}{{W_{[T]} }}\sum\limits_{k = 2}^{[T]} {\int_{k - 1}^k {w_t } \left( {\xi (t) - {\mathop{\rm E}\nolimits} \xi (t)} \right){\rm d} t} + \frac{1}{{W_T }}\int_{[T]}^T {w_t } \left( {\xi (t) - {\mathop{\rm E}\nolimits} \xi (t)} \right){\rm d} t\\ \hat = \displaystyle\frac{{W_{[T]} }}{{W_T }}\mu _{[T]} + \mu '_{[T]}. \end{array} \end{equation}$ (4.1)

Clearly as $T \to \infty$,

$\begin{equation}\label{19} \mu '_{[T]} = \frac{1}{{W_T }}\int_{[T]}^T {w_t } \left( {\xi (t) - {\mathop{\rm E}\nolimits} \xi (t)} \right){\rm d} t \to 0 \quad {\text{a.s.}}. \end{equation}$ (4.2)

Now, we prove as $T \to \infty$ that

$ \mu _{[T]} = \frac{1}{{W_{[T]} }}\sum\limits_{k = 2}^{[T]} {\int_{k - 1}^k {w_t } \left( {\xi (t) - {\mathop{\rm E}\nolimits} \xi (t) } \right){\rm d} t} \to 0\quad {\text{a.s.}}.$

Let $[T]_k = \inf \left\{ {[T]: W_{[T]} > \exp \left( {k^{1 - \eta } } \right)} \right\}$ for some $0 < \eta < \frac{\varepsilon }{{1 + \varepsilon }}$; then $W_{[T]_k } \ge \exp \left( {k^{1 - \eta } } \right)$ and $W_{[T]_k - 1} < \exp \left( {k^{1 - \eta } } \right)$. By (3.5), we get

$1 \le \frac{{W_{[T]_k } }}{{\exp \left( {k^{1 - \eta } } \right)}} \sim \frac{{W_{[T]_k - 1} }}{{\exp \left( {k^{1 - \eta } } \right)}} < 1, $

that is

$W_{[T]_k } \sim \exp \left( {k^{1 - \eta } } \right).$

We have

$\begin{array}{ll} \sum\limits_{k = 3}^\infty {{\rm E}\left( {\mu _{[T]_k }^2 } \right)} &= \sum\limits_{k = 3}^\infty {\frac{1}{{W_{[T]_k }^2 }}} {\mathop{\rm Var}} \left( {\sum\limits_{j = 2}^{[T]_k } {\int_{j - 1}^j {w_t \left( {\xi (t) - {\rm E}\xi (t)} \right){\rm d} t} } } \right)= \sum\limits_{k = 3}^\infty {\frac{1}{{W_{[T]_k }^2}}} {\mathop{\rm Var}} \left( {\int_1^{[T]_k } {w_t \xi (t){\rm d} t} } \right)\\ &\ll \sum\limits_{k = 3}^\infty {\frac{1}{{W_{[T]_k }^2 }}} \cdot W_{[T]_k }^2\cdot \left( {\ln W_{[T]_k } } \right)^{ - (1 + \varepsilon )}\sim \sum\limits_{k = 3}^\infty {\frac{1}{{k^{(1 - \eta )(1 + \varepsilon )} }}}.\\ \end{array}$

Since $\eta < \frac{\varepsilon }{{1 + \varepsilon }}$ implies $1 - \eta > \frac{1}{{1 + \varepsilon }}$ and hence $(1 - \eta )(1 + \varepsilon ) > 1$, we get

$\sum\limits_{k = 3}^\infty {\frac{1}{{k^{(1 - \eta )(1 + \varepsilon )} }}} < \infty.$

This implies

$\sum\limits_{k = 3}^\infty {\mu _{[T]_k }^2 } < \infty \quad {\text{a.s.}},$ and hence $\mu _{[T]_k } \to 0$ a.s. as $k \to \infty$.

For any given $[T]$ there is an integer $k$ such that $[T]_k < [T] \le [T]_{k + 1}$, and we have, as $T \to \infty$,

$\begin{equation}\label{20} \begin{array}{ll} \mu _{[T]} &= \displaystyle\frac{1}{{W_{[T]} }}\sum\limits_{j = 2}^{[T]} {\int_{j - 1}^j {w_t } \left( {\xi (t) - {\mathop{\rm E}\nolimits} \xi (t)} \right){\rm d} t}\\&\le \displaystyle\frac{1}{{W_{[T]_k } }}\left| {\sum\limits_{j = 2}^{[T]_k } {\int_{j - 1}^j {w_t } \left( {\xi (t) - {\mathop{\rm E}\nolimits} \xi (t)} \right){\rm d} t} } \right| + \frac{1}{{W_{[T]_k } }}\sum\limits_{j = [T]_k + 1}^{[T]_{k + 1} } {\int_{j - 1}^j {w_t } \left( {\xi (t) - {\mathop{\rm E}\nolimits} \xi (t)} \right){\rm d} t}\\&\le \displaystyle\left| {\mu _{[T]_k } } \right| + \frac{1}{{W_{[T]_k } }}\left| {W_{[T]_{k + 1} } - W_{[T]_k + 1} } \right| \le \left| {\mu _{[T]_k } } \right| + \left| {\frac{{W_{[T]_{k + 1} } }}{{W_{[T]_k } }} - 1} \right| \to 0 \quad{\text{a.s.}}.\\ \end{array} \end{equation}$ (4.3)

From $\frac{{W_{[T]_{k + 1} } }}{{W_{[T]_k } }} \sim \frac{{\exp \left( {\left( {k + 1} \right)^{1 -\eta } } \right)}}{{\exp \left( {k^{1 -\eta } } \right)}} = \exp \left( {k^{1 -\eta } \left( \left( {1 + \frac{1}{k}} \right)^{1 - \eta } - 1 \right)} \right) \sim \exp \left( {\left( {1 - \eta } \right)k^{ - \eta } } \right) \to 1$, (4.3) holds.

Now, the result of Lemma 3.1 follows by (4.1), (4.2) and (4.3).

Proof of Lemma 3.2  Let $\upsilon (\delta ) = \sup _{\delta \le iq} \left\{ \left| {r(iq)} \right| \right\}$. Since $\left\{ {X(t), t \ge 0} \right\}$ is a stationary Gaussian process whose covariance function satisfies (1.1) and (2.1), we have $\upsilon (\delta ) < 1$. Further, let $\alpha$ satisfy $0 < \alpha < \frac{{1 - \upsilon (\delta )}}{{1 + \upsilon (\delta )}}$. For all sufficiently large $t$, we split the sum in Lemma 3.2 at $t^{\alpha}$ as

$\begin{equation}\label{21} \begin{array}{l} \quad\mathop {\sup }\limits_{s \in (0, t)} \frac{s}{q}\sum\limits_{\delta \le iq \le t} {\left| {r(iq)} \right|} \exp \left( { - \frac{{u_s^2 + u_t^2 }}{{2\left( {1 + \left| {r(iq)} \right|} \right)}}} \right)\\ = \mathop {\sup }\limits_{s \in (0, t)} \frac{s}{q}\sum\limits_{\delta \le iq \le t^\alpha } {\left| {r(iq)} \right|} \exp \left( { - \frac{{u_s^2 + u_t^2 }}{{2\left( {1 + \left| {r(iq)} \right|} \right)}}} \right)+ \mathop {\sup }\limits_{s \in (0, t)} \frac{s}{q}\sum\limits_{t^\alpha \le iq \le t} {\left| {r(iq)} \right|} \exp \left( { - \frac{{u_s^2 + u_t^2 }}{{2\left( {1 + \left| {r(iq)} \right|} \right)}}} \right)\\ \hat = B_{t, 1} + B_{t, 2}. \end{array} \end{equation}$ (4.4)

Using the facts $u_t^2 \sim 2\ln t$ and $q = u_t^{ - 1} \left( {\ln t} \right)^{ - \beta (1 + \varepsilon )}$ we have

$\begin{array}{ll} B_{t, 1} &= \mathop {\sup }\limits_{s \in (0, t)} \frac{s}{q}\sum\limits_{\delta \le iq \le t^\alpha } {\left| {r(iq)} \right|} \exp \left( { - \frac{{u_s^2 + u_t^2 }}{{2\left( {1 + \left| {r(iq)} \right|} \right)}}} \right)\le \frac{{st^\alpha }}{{q^2 }}\exp \left( { - \frac{{u_s^2 + u_t^2 }}{{2\left( {1 + \upsilon (\delta )} \right)}}} \right)\\&\ll \frac{1}{{q^2 }}t^{\alpha - \frac{1}{{1 + \upsilon (\delta )}}} s^{1 - \frac{1}{{1 + \upsilon (\delta )}}}\ll t^{1 + \alpha - \frac{2}{{1 + \upsilon (\delta )}}} (\ln t)^2 (\ln t)^{2\beta (1 + \varepsilon )}.\\ \end{array}$

Since $\alpha < \frac{{1 - \upsilon (\delta )}}{{1 + \upsilon (\delta )}}$, the exponent $1 + \alpha - \frac{2}{{1 + \upsilon (\delta )}}$ is negative, so the logarithmic factors are absorbed and, as $t \to \infty$,

$\begin{equation}\label{22} B_{t, 1} \ll t^{1 + \alpha - \frac{2}{{1 + \upsilon (\delta )}}} (\ln t)^{2 + 2\beta (1 + \varepsilon )} \to 0 \end{equation}$ (4.5)

uniformly for $s \in (0, t]$. Noticing that $r(t)\left( {\ln t} \right)^{1 + 3\beta (1 + \varepsilon )} = O(1)$ and $u_t^2 \sim 2\ln t$, we get, for $t^\alpha \le iq \le t$,

$\left| {r(iq)} \right| \ll \frac{1}{{\left( {\ln (iq)} \right)^{1 + 3\beta (1 + \varepsilon )} }} \le \frac{1}{{\left( {\ln (t^\alpha )} \right)^{1 + 3\beta (1 + \varepsilon )} }} \ll \frac{1}{{\left( {\ln t} \right)^{1 + 3\beta (1 + \varepsilon )} }}$

and as $t \to \infty$, we have

$u_t^2 \left| {r(iq)} \right| \ll \ln t \cdot \frac{1}{{\left( {\ln t} \right)^{1 + 3\beta (1 + \varepsilon )} }} = \frac{1}{{\left( {\ln t} \right)^{3\beta (1 + \varepsilon )} }} \to 0.$

Consequently

$\begin{equation}\label{23} \begin{array}{ll} B_{t, 2} &= \frac{s}{q}\sum\limits_{t^\alpha \le iq \le t} {\left| {r(iq)} \right|} \exp \left( { - \frac{{u_s^2 + u_t^2 }}{{2\left( {1 + \left| {r(iq)} \right|} \right)}}} \right)\\&= \frac{s}{q}\sum\limits_{t^\alpha \le iq \le t} {\left| {r(iq)} \right|} \exp \left( { - \frac{{u_s^2 + u_t^2 }}{2}} \right)\exp \left( {\frac{{\left( {u_s^2 + u_t^2 } \right)\left| {r(iq)} \right|}}{{2\left( {1 + \left| {r(iq)} \right|} \right)}}} \right)\\&\ll \frac{s}{q}\sum\limits_{t^\alpha \le iq \le t} {\left| {r(iq)} \right|} \exp \left( { - \frac{{u_s^2 + u_t^2 }}{2}} \right)\exp \frac{{\left( {u_s^2 + u_t^2 } \right)\left| {r(iq)} \right|}}{2}\\&\ll \frac{s}{q}\sum\limits_{t^\alpha \le iq \le t} {\left| {r(iq)} \right|} \exp \left( { - \frac{{u_s^2 + u_t^2 }}{2}} \right)\exp \left( {u_t^2 \left| {r(iq)} \right|} \right)\\&\ll \frac{s}{q}\sum\limits_{t^\alpha \le iq \le t} {\left| {r(iq)} \right|} \exp \left( { - \frac{{u_s^2 + u_t^2 }}{2}} \right)\\&\le \frac{{st}}{{q^2 }}t^{ - 1} s^{ - 1} \left( {\ln t} \right)^{ - 1 - 3\beta (1 + \varepsilon) } O(1)\\&\ll \left( {\ln t} \right)^{ - \beta (1 + \varepsilon )}.\\ \end{array} \end{equation}$ (4.6)

The result of Lemma 3.2 follows by (4.4), (4.5) and (4.6).

References
[1] Csáki E, Gonchigdanzan K. Almost sure limit theorems for the maximum of stationary Gaussian sequences[J]. Stat. Probab. Lett., 2002, 58(2): 195–203. DOI:10.1016/S0167-7152(02)00128-1
[2] Dudziński M. The almost sure central limit theorems in the joint version for the maxima and sums of certain stationary Gaussian sequences[J]. Stat. Probab. Lett., 2008, 78(4): 347–357. DOI:10.1016/j.spl.2007.07.007
[3] Kratz M F, Rootzén H. On the rate of convergence for extremes of mean square differentiable stationary normal processes[J]. J. Appl. Probab., 1997, 34: 908–923. DOI:10.1017/S0021900200101615
[4] Piterbarg V I. Discrete and continuous time for large extremes of Gaussian processes[J]. Extremes, 2004, 7(2): 161–177. DOI:10.1007/s10687-005-6198-8
[5] Tan Zhongquan, Hashorva E. On Piterbarg max-discretization theorem for standardized maximum of stationary Gaussian processes[J]. Method. Comput. Appl. Prob., 2013, doi: 10.1007/s11009-012-9305-8.
[6] Tan Zhongquan. An almost sure limit theorem for the maximum of smooth stationary Gaussian sequences[J]. Stat. Probab. Lett., 2013, 83: 2135–2141. DOI:10.1016/j.spl.2013.05.034
[7] Brosamler G A. An almost everywhere central limit theorem[J]. Math. Proc. Cambridge Phil. Soc., 1988, 104(03): 561–574. DOI:10.1017/S0305004100065750
[8] Schatte P. On strong versions of the central limit theorem[J]. Math. Nachrichten, 1988, 137(1): 249–256. DOI:10.1002/(ISSN)1522-2616
[9] Lacey M T, Philipp W. A note on the almost sure central limit theorem[J]. Stat. Probab. Lett., 1990, 9(3): 201–205. DOI:10.1016/0167-7152(90)90056-D
[10] Zhang Ling. Almost sure limit theorems for the maximum of Gaussian sequences[J]. J. Math., 2006, 26(4): 373–378.
[11] Fahrner I, Stadtmüller U. On almost sure max-limit theorems[J]. Stat. Probab. Lett., 1998, 37(3): 229–236. DOI:10.1016/S0167-7152(97)00121-1
[12] Cheng Shihong, Peng Liang, Qi Yongcheng. Almost sure convergence in extreme value theory[J]. Math. Nachrichten, 1998, 190(1): 43–50. DOI:10.1002/(ISSN)1522-2616
[13] Zhang Ling. On almost sure max-limit theorems for the independent random variables[J]. J. Math. (PRC), 2007, 27(2): 145–148.
[14] Leadbetter M R, Lindgren G, Rootzén H. Extremes and related properties of random sequences and processes[M]. New York: Springer-Verlag, 1983.
[15] Wu Qunying, Chen Pingyan. An improved result in almost sure central limit theorem for selfnormalized products of partial sums [J]. J. Inequal. Appl. , 2013, 129, doi: 10.1186/1029-242X-2013-129.