Let $ \{X(t); t\geq 0\} $ be a standard $ \alpha $-fractional Brownian motion with $ 0<\alpha<1 $ and $ X(0) = 0 $. Then $ \{X(t); t\geq 0\} $ has covariance function
$$ E(X(s)X(t)) = \frac{1}{2}\left(s^{2\alpha}+t^{2\alpha}-|t-s|^{2\alpha}\right), \quad s, t\geq 0, $$
and the representation
$$ X(t) = \frac{1}{k_{\alpha}}\int_{R^1}\left\{|x-t|^{(2\alpha-1)/2}-|x|^{(2\alpha-1)/2}\right\}dB(x), $$
where
(i) $ k_{\alpha}^2 = \int_{R^1}\left\{|x-1|^{(2\alpha-1)/2}-|x|^{(2\alpha-1)/2}\right\}^2dx, $
(ii) $ \{B(t); -\infty<t<+\infty\} $ is a Brownian motion,
(iii) $ \frac{1}{k_\alpha}\left\{|x-t|^{(2\alpha-1)/2}-|x|^{(2\alpha-1)/2}\right\} $ is interpreted to be $ I_{(0, t]} $ when $ \alpha = \frac{1}{2}. $
$ \{X(t); t\geq 0\} $ has stationary increments with $ E(X(s+t)-X(s))^2 = t^{2\alpha} $ for $ s, t\geq 0 $, and it is a standard Brownian motion when $ \alpha = \frac{1}{2}. $
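These normalizations are consistent: by the Itô isometry and the substitution $ x = tu $,
$$ EX(t)^2 = \frac{1}{k_\alpha^2}\int_{R^1}\left\{|x-t|^{(2\alpha-1)/2}-|x|^{(2\alpha-1)/2}\right\}^2dx = \frac{t^{2\alpha}}{k_\alpha^2}\int_{R^1}\left\{|u-1|^{(2\alpha-1)/2}-|u|^{(2\alpha-1)/2}\right\}^2du = t^{2\alpha}, $$
and the increment variance follows directly from the covariance function:
$$ E(X(s+t)-X(s))^2 = (s+t)^{2\alpha}-\left((s+t)^{2\alpha}+s^{2\alpha}-t^{2\alpha}\right)+s^{2\alpha} = t^{2\alpha}. $$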
Let $ C_0[0, 1] $ be the space of continuous functions from $ [0, 1] $ to $ R $ with value zero at the origin, endowed with the usual norm $ \|f\| = \sup\limits_{0\le t\le 1}|f(t)| $, and
Then $ H $ is a Hilbert space with respect to the scalar product
Define a mapping $ I:C_0[0, 1]\to [0, \infty] $ by
The limit set associated with functional laws of the iterated logarithm for $ \{X(t); t\geq 0\} $ is $ K_\alpha $, the subset of functions in $ C_0[0, 1] $ of the form
$$ f(t) = \frac{1}{k_{\alpha}}\int_{R^1}\left\{|x-t|^{(2\alpha-1)/2}-|x|^{(2\alpha-1)/2}\right\}g(x)dx, $$
where the function $ g(x) $ ranges over the unit ball of $ L^2(R^1) $, so that $ \int_{R^1}g^2(s)ds\le 1 $. The subset $ K $ of $ C_0[0, 1] $ is defined by
For $ 0<h<1 $, $ 0\le s\le 1, 0\leq t\leq 1 $, let
In [1], Monrad and Rootzén gave a Chung-type functional law of the iterated logarithm for fractional Brownian motion, as follows: for any $ f\in K $ with $ \langle f, f\rangle<1 $,
where $ \gamma(f) $ is a constant satisfying $ 2^{-1/2}c^{\alpha}(1-\langle f, f\rangle)^{-\alpha}\le \gamma(f)\le 2^{-1/2}C^{\alpha}(1-\langle f, f\rangle)^{-\alpha} $, and $ c, C $ denote the positive constants in (2.13) of [1].
Inspired by the arguments of Monrad and Rootzén, in the present paper we obtain a liminf result for Lévy's modulus of continuity of fractional Brownian motion. The main result is stated as follows.
Theorem 1.1 For each $ f\in K $ with $ \langle f, f\rangle<1 $, we have
where $ b(f) $ is a constant satisfying
where $ c $ and $ C $ denote the positive constants in (2.13) of [1].
Remark When $ \alpha = \frac{1}{2} $, $ \{X(t); t\geq 0\} $ is a standard Brownian motion; in this case $ c = C = \frac{\pi^2}{8} $, and the result in Theorem 1.1 gives the exact approximation rate for the modulus of continuity of Brownian motion.
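In fact, in this case the two bounds on the constant coincide: taking $ \alpha = \frac{1}{2} $ and $ c = C = \frac{\pi^2}{8} $ in bounds of the form given for $ \gamma(f) $ above (which we assume $ b(f) $ also satisfies),
$$ 2^{-1/2}c^{\alpha}(1-\langle f, f\rangle)^{-\alpha} = \frac{1}{\sqrt{2}}\cdot\frac{\pi}{2\sqrt{2}}\cdot(1-\langle f, f\rangle)^{-1/2} = \frac{\pi}{4}(1-\langle f, f\rangle)^{-1/2}, $$
so the constant is determined exactly. Moreover, by (iii), when $ \alpha = \frac{1}{2} $ the kernel becomes $ I_{(0, t]} $, and $ K_\alpha $ reduces to the classical Strassen set $ \left\{f\in C_0[0, 1]: f(t) = \int_0^t g(x)dx, \ \int_0^1 g^2(x)dx\le 1\right\} $.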
Our proofs are based on the following lemmas.
In order to prove (3.1) below, we need the following Lemma 2.1.
Lemma 2.1 (see (3.14) of [2]) Let $ \{X(t);t\ge 0\} $ be the fractional Brownian motion above and set $ \sigma^2(u) = E(X(t+u)-X(t))^2 $. Then for any $ \varepsilon>0 $ there exists a positive constant $ k_0 = k_0(\varepsilon) $ such that
for any $ T>0 $, $ 0<u\le T $, and $ x\ge x_0 $ for some $ x_0>0 $.
In order to prove (3.2) below, we need the following Lemma 2.2 and Lemma 2.3.
Lemma 2.2 (see Lemma 2.3 in [3]) Let $ 0<\alpha<1 $, $ 0\le q_0<1 $, and fix $ q $ with $ q_0<q<\alpha $. Let $ d_k = k^{k+(1-r)} $ and $ s_k = k^{-k} $ for $ k\ge 1 $, where $ 0<r<1 $. Let
where $ I_k = (s_kd_{k-1}, s_kd_k] $. Let $ 0<\beta<r $. Then, for
there is a constant $ C'>0 $ depending only on $ \alpha $ such that uniformly in $ t, u, k $,
Lemma 2.3 Let $ \{\Gamma(t): t\ge 0\} $ be a centred Gaussian process with stationary increments and $ \Gamma(0) = 0 $, and set $ \sigma^2(u) = E(\Gamma(t+u)-\Gamma(t))^2 $. Let $ T>0 $. Then for any $ \varepsilon>0 $ and all $ x $ large enough,
where $ C_1>0 $ is a constant.
Proof This conclusion follows from page 49 of [4].
We only need to show the following two claims:
3.1 The Proof of (3.1)
Let $ h_n = n^{-d} $, $ \rho(h) = h(\log h^{-1})^{-3-\frac{1}{\alpha}} $. We further set $ k_n = [\frac{1}{\rho(h_{n})}] $, $ t_i = i\rho(h_{n}), i = 0, 1, \cdots, k_n $. Then
For any $ 0<\varepsilon <1 $, choose $ \delta>0 $ such that $ \eta = -\delta+ \langle f, f\rangle+\frac{1-\langle f, f\rangle }{(1-\varepsilon)^{1/\alpha}}>1 $; such a choice is possible because $ (1-\varepsilon)^{-1/\alpha}>1 $ implies $ \langle f, f\rangle+\frac{1-\langle f, f\rangle}{(1-\varepsilon)^{1/\alpha}}>1 $. Then we have
By Proposition 4.2 in [1], we have for any $ \delta>0 $ and $ n $ large enough
By Corollary 2.2 in [1],
Thus
Choose $ d>(\eta-1)^{-1} $, then
which implies, by the Borel-Cantelli lemma,
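The convergence needed for this Borel–Cantelli step can be checked explicitly. Since $ h_n = n^{-d} $, we have $ \rho(h_n) = n^{-d}(d\log n)^{-3-\frac{1}{\alpha}} $ and hence $ k_n\sim n^{d}(d\log n)^{3+\frac{1}{\alpha}} $; assuming, as the preceding estimates indicate, that each of the $ k_n+1 $ points $ t_i $ contributes a probability of order at most $ h_n^{\eta} $, we obtain
$$ \sum_{n}k_nh_n^{\eta}\asymp\sum_{n}n^{-d(\eta-1)}(d\log n)^{3+\frac{1}{\alpha}}<\infty, $$
since $ d(\eta-1)>1 $ by the choice $ d>(\eta-1)^{-1} $.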
On the other hand, for any $ \delta>0 $,
By Lemma 2.1, we have that for any $ \varepsilon>0 $, there exists a positive constant $ k_0 = k_0(\varepsilon) $ such that
Taking into account that $ \log h_n^{-1}\rightarrow\infty $ as $ n\rightarrow\infty $, we have
By the Borel-Cantelli lemma,
By (3.3)–(3.5), we get
Note that $ h_n $ is ultimately strictly decreasing to $ 0 $, so for any small $ h>0 $ there is a unique $ n $ such that $ h\in(h_{n+1}, h_{n}]. $ Let $ \phi_{t, h}(s) = \frac{X(t+hs)-X(t)}{l(h)} $, $ s\in [0, 1] $, $ t\in[0, 1] $. We define
By the definition of the infimum, for any $ \varepsilon>0 $ there exists $ h_n'\in (h_{n+1}, h_{n}] $ such that $ \xi_n\geq \xi(h_n')-\varepsilon. $
For any $ r\in[0, 1] $, let $ x = \frac{rh_{n+1}}{h_n'} $. Then we have $ 0\leq x\leq 1 $ and $ h_n'x = rh_{n+1} $.
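From the definition of $ \phi_{t, h} $ and the identity $ h_n'x = rh_{n+1} $, we then have the scaling relation
$$ \phi_{t, h_{n+1}}(r) = \frac{X(t+h_{n+1}r)-X(t)}{l(h_{n+1})} = \frac{X(t+h_n'x)-X(t)}{l(h_n')}\cdot\frac{l(h_n')}{l(h_{n+1})} = \phi_{t, h_n'}(x)\,\frac{l(h_n')}{l(h_{n+1})}, $$
which relates $ \phi_{t, h_{n+1}} $ on $ [0, 1] $ to $ \phi_{t, h_n'} $.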
Noting that
By (3.6)–(3.9), we have
Since $ \liminf\limits_{h\to 0}\xi(h)\geq\liminf\limits_{n\to \infty}\xi_n\geq\liminf\limits_{n\to \infty}\xi(h'_n)-\varepsilon $ and $ \varepsilon>0 $ is arbitrary, the proof is complete.
3.2 The Proof of (3.2)
Note that
then it is sufficient to show that
where $ h_n = \frac{1}{n} $.
For $ r = 1, 2, 3, \cdots, $ we define
for $ 0\le t\le 1 $, where $ d_r = r^{r+(1-\gamma)} $, $ s_r = r^{-r} $, and $ 0<\gamma<1 $. Then $ \{Z_r(\cdot)\} $, $ r = 1, 2, \cdots $, are independent and
where $ {Y_r(s_r, \cdot)} $ is as in Lemma 2.2.
In order to prove this, we need to prove that for any $ \varepsilon>0 $,
and
First of all, we prove (3.15).
Using the same argument as in Lemma 2.2, we have
where $ q, \delta $ are as in Lemma 2.2. For any $ \varepsilon>0 $, we have, by Lemma 2.3,
Taking $ n $ sufficiently large so that $ \frac{C''n^{\delta/2}\varepsilon^2}{2+\varepsilon}>1 $, we get (3.15) by the definition of the sequence $ \{h_n:n\ge 1\} $.
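A minimal sketch of the summability here, assuming the bound obtained from Lemma 2.3 is of order $ \exp\left(-\frac{C''n^{\delta/2}\varepsilon^2}{2+\varepsilon}\log h_n^{-1}\right) $ for the $ n $-th probability: since $ h_n = \frac{1}{n} $, this bound equals
$$ n^{-C''n^{\delta/2}\varepsilon^2/(2+\varepsilon)}, $$
and once the exponent exceeds $ 1 $ (it in fact tends to $ \infty $), the series converges and the Borel–Cantelli lemma yields (3.15).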
Second, we prove (3.14).
For any $ \varepsilon>0 $, choose $ \delta>0 $ such that $ \eta' = \frac{1-\langle f, f\rangle}{(1+\varepsilon)^{1/\alpha}}+\langle f, f\rangle+\delta<1 $; such a choice is possible because $ (1+\varepsilon)^{-1/\alpha}<1 $ implies $ \frac{1-\langle f, f\rangle}{(1+\varepsilon)^{1/\alpha}}+\langle f, f\rangle<1 $. Let $ \beta = 2^{-1/2}C^{\alpha}(1-\langle f, f\rangle)^{-\alpha} $; then
By Proposition 4.2 in [1], we have for any $ \delta>0 $ and $ n $ large enough,
Similarly to the proof of (3.15), we have the following estimate for $ I_2 $:
Thus $ \sum\limits_{n = 1}^\infty I_2<\infty $. The proof of (3.14) is complete.