Self-similarity has become an important feature of stochastic models in various scientific areas, including hydrology, telecommunications, turbulence, image processing and finance. The best known and most widely used process exhibiting the self-similarity property is fractional Brownian motion (fBm for short). The fBm with Hurst index $H\in(0, 1)$ is a zero-mean Gaussian process $\{B^{H}(t), t\geq0\}$ with $B^{H}(0)=0$ and covariance
for all $s, t\geq 0$. Surveys of the fBm can be found in Biagini et al. [4], Chen and Xiao [5], Mishura [10], Nualart [11], Wang and Wang [15], Yan [16] and the references therein.
By Decreusefond and Üstünel [6], for $H>\frac{1}{2}$, $B^{H}$ admits the following integral representation with respect to the standard Brownian motion $B$:
where the kernel $K_H$ is given by
with the normalizing constant $c_H>0$ given by
Many authors have studied approximations of the fBm. For example, Delgado and Jolis [7] proved that $B^H$ can be approximated in law by means of some processes constructed from the standard Poisson process. In Li and Dai [8], a special approximation to the one-parameter fractional Brownian motion is constructed using a two-parameter Poisson process. Mishura and Banna [12] found an approximation of fractional Brownian motion by Wiener integrals.
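As a small numerical illustration of the representation recalled above, the following sketch evaluates the kernel by quadrature and checks the isometry $\int_{0}^{t}K_{H}(t, s)^{2}\,ds=t^{2H}$. It assumes the commonly used form of the kernel for $H>\frac{1}{2}$, namely $K_{H}(t, s)=c_{H}(H-\frac{1}{2})s^{\frac{1}{2}-H}\int_{s}^{t}u^{H-\frac{1}{2}}(u-s)^{H-\frac{3}{2}}du$, with $c_{H}$ taken as in Theorem 2.2 below; if the displayed normalization differs, only the constant in the code needs to change.

```python
# Sketch only: numerical evaluation of the Volterra kernel K_H and a check of the
# isometry Var B^H(t) = \int_0^t K_H(t, s)^2 ds = t^(2H), for H > 1/2.
# The kernel form and the constant c_H are the assumptions stated above.
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def c_const(H):
    # c_H = sqrt( 2 H Gamma(3/2 - H) / ( Gamma(H + 1/2) Gamma(2 - 2H) ) )
    return np.sqrt(2 * H * gamma(1.5 - H) / (gamma(H + 0.5) * gamma(2 - 2 * H)))

def K(H, t, s):
    # K_H(t, s) = c_H (H - 1/2) s^(1/2 - H) \int_s^t u^(H - 1/2) (u - s)^(H - 3/2) du, 0 < s < t
    if s <= 0.0 or s >= t:
        return 0.0
    inner, _ = quad(lambda u: u ** (H - 0.5) * (u - s) ** (H - 1.5), s, t)
    return c_const(H) * (H - 0.5) * s ** (0.5 - H) * inner

H, t = 0.7, 1.5
variance, _ = quad(lambda s: K(H, t, s) ** 2, 0.0, t, limit=200)
print(variance, t ** (2 * H))  # the two values should agree up to quadrature error
```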
On the other hand, many authors have proposed to use more general self-similar Gaussian processes and random fields as stochastic models. Such applications have raised many interesting theoretical questions about self-similar Gaussian processes and fields in general. Therefore, several generalizations of the fBm have been introduced, such as the fractional Brownian sheet.
Recall that the fractional Brownian sheet can also be defined by a Wiener integral with respect to the Brownian sheet $\{B(t, s), (t, s)\in[0, T]\times[0, S]\}$ (see, for example, Bardina et al. [1])
where $\alpha, \beta \in(\frac12, 1)$, and the kernels $K_{\alpha}, K_{\beta}$ are defined as above. Note that this process is a two-parameter centered Gaussian process, starting from $(0, 0)$, and its covariance is given by
It was proved in Bardina et al. [1] that the fractional Brownian sheet can be weakly approximated by discrete processes constructed from the Poisson process in the space of continuous functions. Tudor [9] generalized this approximation to the Besov space. Wang et al. [13, 14] showed that the fractional Brownian sheet can be approximated in distribution by random walks and by martingale-differences sequences, respectively, in the Skorokhod space. We refer to Bardina and Florit [3], Bardina and Jolis [2], and the references therein for more information about weak approximations of the fractional Brownian sheet and of multidimensional-parameter processes.
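To visualize the object being approximated, the following sketch simulates $W^{\alpha, \beta}$ on a finite grid. It relies only on the fact that the covariance above factorizes into the product of two one-parameter fBm covariances; the grid, the Hurst indices and the jitter constant are chosen here purely for illustration.

```python
# Sketch only: simulate the fractional Brownian sheet on a grid via a Cholesky factor
# of its covariance, which is the product of two one-parameter fBm covariances.
import numpy as np

def fbm_cov(grid, H):
    # R_H(t, t') = (1/2)(t^{2H} + t'^{2H} - |t - t'|^{2H})
    t = np.asarray(grid)[:, None]
    tp = np.asarray(grid)[None, :]
    return 0.5 * (t ** (2 * H) + tp ** (2 * H) - np.abs(t - tp) ** (2 * H))

def simulate_sheet(t_grid, s_grid, alpha, beta, rng):
    # Covariance of the flattened sheet is the Kronecker product of the two 1-D covariances
    C = np.kron(fbm_cov(t_grid, alpha), fbm_cov(s_grid, beta))
    L = np.linalg.cholesky(C + 1e-10 * np.eye(C.shape[0]))  # small jitter for numerical safety
    return (L @ rng.standard_normal(C.shape[0])).reshape(len(t_grid), len(s_grid))

rng = np.random.default_rng(0)
t_grid = np.linspace(0.05, 1.0, 20)  # the grid avoids 0, where the covariance matrix is singular
s_grid = np.linspace(0.05, 1.0, 20)
W = simulate_sheet(t_grid, s_grid, alpha=0.7, beta=0.6, rng=rng)
print(W.shape)  # one sample of W^{alpha, beta} on the 20 x 20 grid
```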
Motivated by the above results, in this paper we consider the approximation of the fractional Brownian sheet with $\alpha, \beta\in (\frac{1}{2}, 1)$ by Wiener integrals.
More precisely, we consider the following problem. Let $T>0, S>0$ be two fixed numbers and consider the rectangle $[0, T]\times[0, S]$. Now, let the mapping $a:[0, T]\times[0, S]\to\mathbb{R}$ be a nonrandom measurable function belonging to the space $L_2([0, T]\times[0, S])$ of square integrable functions, that is, let $a(t, s)$ be a function such that the stochastic integral $\displaystyle\int_{0}^{t}\int_{0}^{s}a(u, v)B(du, dv)$, $(t, s)\in[0, T]\times[0, S]$, is well defined with respect to the Brownian sheet $\{B(t, s), (t, s)\in[0, T]\times[0, S]\}$. The problem is to find
The paper is organized as follows. In Section 2, we obtain an approximation of a fractional Brownian sheet by a power function with a positive index. In Section 3, we construct an approximation of a fractional Brownian sheet by a power function with a negative index, i.e., $a(t, s)=kt^{\frac{1}{2}-\alpha}s^{\frac{1}{2}-\beta}$, where $k>0$, $\alpha, \beta\in(\frac{1}{2}, 1)$, and find the point at which the corresponding function attains its minimum value.
Let $W^{\alpha, \beta}=\{W^{\alpha, \beta}(t, s), (t, s)\in\mathbb{R}^{2}_+\}$ be a fractional Brownian sheet with Hurst indices $\alpha, \beta\in(\frac{1}{2}, 1)$, let the numbers $T, S>0$ be fixed, and let $a(t, s)\in L_2([0, T]\times[0, S])$ be a measurable function. Let $\{M(t, s), (t, s)\in[0, T]\times[0, S]\}$ be a square integrable martingale of the form
In this section, we will evaluate
where $A\subset L_2([0, T]\times[0, S])$ is some class of functions.
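Throughout, we use the standard Wiener isometry for integrals with respect to the Brownian sheet, together with the variance of the fractional Brownian sheet: for $a\in L_2([0, T]\times[0, S])$ and $(t, s)\in[0, T]\times[0, S]$,
$$\mathbb{E}\Big(\int_{0}^{t}\int_{0}^{s}a(u, v)B(du, dv)\Big)^{2}=\int_{0}^{t}\int_{0}^{s}a^{2}(u, v)\,du\,dv, \qquad \mathbb{E}\big(W^{\alpha, \beta}(t, s)\big)^{2}=t^{2\alpha}s^{2\beta}.$$
In particular, the quantity to be minimized in (2.1) is finite for every $a\in L_2([0, T]\times[0, S])$.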
Lemma 2.1 If the Lebesgue measure of the set $A=\{(t, s)\in[0, T]\times[0, S]: a(t, s) < 0\}$ is positive, then the minimum in (2.1) cannot be attained at the function $a$.
Proof
Since the kernels $K_{\alpha}$ and $K_{\beta}$ are nonnegative, it is clear that if one replaces $a(u, v)$ by $-a(u, v)$ at the points $(u, v)$ where $a(u, v) < 0$, then the right-hand side of (2.2) does not increase. This completes the proof.
Theorem 2.2 Among all functions $a\in L_2([0, T]\times[0, S])$ such that $a(t, s)t^{\frac{1}{2}-\alpha}s^{\frac{1}{2}-\beta}$ is nondecreasing with respect to $t$ and $s$, the minimum in (2.1) is attained at the function $a(t, s)=c_{\alpha}c_{\beta}t^{\alpha-\frac{1}{2}}s^{\beta-\frac{1}{2}}$, where $c_{\alpha}$ and $c_{\beta}$ are given by $c_{\alpha}=\sqrt{\frac{2\alpha\Gamma(\frac{3}{2}-\alpha)}{\Gamma(\alpha+\frac{1}{2})\Gamma(2-2\alpha)}}$, $c_{\beta}=\sqrt{\frac{2\beta\Gamma(\frac{3}{2}-\beta)}{\Gamma(\beta+\frac{1}{2})\Gamma(2-2\beta)}}$, $\alpha, \beta \in (\frac{1}{2}, 1)$.
Proof Let $\varphi(t, s)$ denote the right-hand side of equation (2.2), that is,
Differentiating the right-hand side of (2.3) partially with respect to $t$, we get
Next, differentiating the right-hand side of (2.4) partially with respect to $s$, we get
Changing the variables $u=tx$, $v=sy$ in the integral, we obtain
Let $b(t, s)=t^{\frac{1}{2}-\alpha}s^{\frac{1}{2}-\beta}a(t, s)$; then $a(t, s)=t^{\alpha-\frac{1}{2}}s^{\beta-\frac{1}{2}}b(t, s)$, and $\frac{\partial^2{\varphi}}{\partial{t}\partial{s}}$ takes the form
where
Similarly, if we differentiate (2.3) first with respect to $s$ and then with respect to $t$, we also obtain equation (2.5), because the variables $s$ and $t$ play symmetric roles in (2.5). Hence the order of partial differentiation with respect to $t$ and $s$ does not matter.
Suppose that the function $b(t, s)$ is nondecreasing with respect to $t$ and $s$, so that $b(tx, sy)\leq b(t, sy)\leq b(t, s)$ for $x, y\in(0, 1)$. Thus
or
Next, we consider the discriminant of the quadratic polynomial $x^2-2c_{\alpha}c_{\beta}x+2\alpha\cdot2\beta$, which is given by
The bound $c^{2}_{H} < 2H$ is easy, since
Thus the discriminant $D$ is negative, whence $\psi(t, s)\geq0$, and the minimal value of $\psi(t, s)$ is attained at $b(t, s)=c_{\alpha}c_{\beta}$.
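As a side remark (not part of the proof), both the bound $c^{2}_{H}<2H$ and the negativity of $D$ can be confirmed numerically; a minimal sketch, with the grid chosen only for illustration:

```python
# Sketch only: numerical check that c_H^2 < 2H on (1/2, 1), and hence that
# D/4 = c_alpha^2 c_beta^2 - 2*alpha*2*beta < 0 for all alpha, beta in (1/2, 1).
import numpy as np
from scipy.special import gamma

def c_sq(H):
    # c_H^2 = 2 H Gamma(3/2 - H) / ( Gamma(H + 1/2) Gamma(2 - 2H) )
    return 2 * H * gamma(1.5 - H) / (gamma(H + 0.5) * gamma(2 - 2 * H))

Hs = np.linspace(0.51, 0.99, 49)
assert np.all(c_sq(Hs) < 2 * Hs)              # c_H^2 < 2H
A, B = np.meshgrid(Hs, Hs)
assert np.all(c_sq(A) * c_sq(B) < 4 * A * B)  # discriminant is negative
print("both inequalities hold on the grid")
```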
Now we show that $b(t, s)=c_{\alpha}c_{\beta}$ also minimizes $\varphi(t, s)$. Further, we obtain
By the assumption on $b(t, s)$, the function $a$ has the form $a(t, s)=t^{\alpha-\frac{1}{2}}s^{\beta-\frac{1}{2}}b(t, s)$. Because $b(t, s)=c_{\alpha}c_{\beta}$ is a constant, we set $a(t, s)=kt^{\alpha-\frac{1}{2}}s^{\beta-\frac{1}{2}}$ and substitute it into (2.3).
since
and
Hence, when $k=c_{\alpha}c_{\beta}$, $\varphi(t, s)$ attains its minimal value $2\alpha\cdot2\beta-c^{2}_{\alpha}c^{2}_{\beta}$. Thus the minimum among all $a(t, s)$ such that $b(t, s)=a(t, s)t^{\frac{1}{2}-\alpha}s^{\frac{1}{2}-\beta}$ is nondecreasing is attained at $b(t, s)=c_{\alpha}c_{\beta}$.
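For the reader's convenience, the elementary quadratic fact behind this last step is the following: for $g(k)=k^{2}-2c_{\alpha}c_{\beta}k+2\alpha\cdot2\beta$ (a notation introduced only for this remark),
$$g'(k)=2k-2c_{\alpha}c_{\beta}=0\ \Longleftrightarrow\ k=c_{\alpha}c_{\beta}, \qquad g(c_{\alpha}c_{\beta})=2\alpha\cdot2\beta-c^{2}_{\alpha}c^{2}_{\beta}>0,$$
the last inequality holding because $c^{2}_{\alpha}<2\alpha$ and $c^{2}_{\beta}<2\beta$.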
From Lemma 2.1 we know that the square integrable function $a(t, s)$ should be positive, and in Theorem 2.2 we obtained an approximation of a fractional Brownian sheet by a power function with a positive index. In this section, we construct an approximation of a fractional Brownian sheet by a power function with a negative index, that is, $a(t, s)=kt^{\frac{1}{2}-\alpha}s^{\frac{1}{2}-\beta}$, where $k>0$, $\alpha, \beta\in(\frac{1}{2}, 1)$. In fact, if $k\leq0$, then $a(t, s)\leq0$, while the kernel of a fractional Brownian sheet is positive. So it is unreasonable to use $a(t, s)\leq0$ to approximate the kernel of a fractional Brownian sheet.
Let
then we need to evaluate
where $A=\{a(t, s)=kt^{\frac{1}{2}-\alpha}s^{\frac{1}{2}-\beta}, k>0\}\subset L_2([0, T]\times[0, S])$ is some class of functions.
Lemma 3.1 (1) The function $f(t, s, k)$ admits the following representation:
(2) For all $k\in \mathbb{R}_+$,
Proof By straightforward calculations, we have
According to representation (1.1), we have
This completes the proof of assertion (1).
Assertion (2). Differentiating the function $f$ with respect to $t$, we have
Setting $x=kt^{1-2\alpha}$, we consider the discriminant
of the following equation
The denominator $c^{2}_{\alpha}c^{2}_{\beta}(2-2\beta)$ of the right-hand side of equation (3.4) is positive, so we need to consider only the numerator. We have
since $(2\beta)^2(2-2\beta) < c_{\beta}^2 < 2\beta$, and the distance between $(2\beta)^2(2-2\beta)$ and $c_{\beta}^2$ is greater than the distance between $c_{\beta}^2$ and $2\beta$. Hence $\frac{D}{4} < 0$, so equation (3.5) has no roots with respect to $t$ in $\mathbb{R}_+$. We conclude that $\frac{\partial f}{\partial t}$ is positive, and hence $f(t, s, k)$ is nondecreasing in $t$. Similarly, $f(t, s, k)$ is nondecreasing in $s$. From the above discussion, we get
Now, from Lemma 3.1, we easily obtain the following main result.
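As a side check, the inequality chain $(2\beta)^{2}(2-2\beta)<c_{\beta}^{2}<2\beta$ used in the proof above can be verified numerically; a minimal sketch, with the grid chosen only for illustration:

```python
# Sketch only: numerical check of (2*beta)^2 (2 - 2*beta) < c_beta^2 < 2*beta on (1/2, 1).
import numpy as np
from scipy.special import gamma

def c_sq(H):
    # c_H^2 = 2 H Gamma(3/2 - H) / ( Gamma(H + 1/2) Gamma(2 - 2H) )
    return 2 * H * gamma(1.5 - H) / (gamma(H + 0.5) * gamma(2 - 2 * H))

betas = np.linspace(0.505, 0.995, 99)
lower = (2 * betas) ** 2 * (2 - 2 * betas)
assert np.all(lower < c_sq(betas)) and np.all(c_sq(betas) < 2 * betas)
print("the inequality chain holds on the grid")
```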
Theorem 3.2 Let $A=\{a(t, s)=kt^{\frac{1}{2}-\alpha}s^{\frac{1}{2}-\beta}, k>0, \alpha, \beta\in(\frac{1}{2}, 1)\}$. Then
where $k^{*}=\frac{4(2-2\alpha)(2-2\beta)\cdot\alpha\cdot\beta}{T^{1-2\alpha}S^{1-2\beta}c_{\alpha}c_{\beta}}$.
Proof First, we determine the value of the constant $k$ that minimizes $\max\limits_{{0\leq t\leq T}\atop{0\leq s \leq S}}f(t, s, k)$.
By assertion (2) of Lemma 3.1, we have
So it remains to find the value of $k$ at which the minimum of $f(T, S, k)$ is attained, that is,
Now, differentiating $f(T, S, k)$ with respect to $k$, we obtain
Then we have
which makes the derivative $\frac{\partial f}{\partial k}$ equal to zero. If $k>k^{*}$, then $\frac{\partial f}{\partial k}>0$, that is, $f(T, S, k)$ is increasing; if $k < k^{*}$, then $\frac{\partial f}{\partial k} < 0$, that is, $f(T, S, k)$ is decreasing. Thus the minimum of $f(T, S, k)$ is attained at $k=k^{*}$. Hence
This completes the proof.
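To illustrate the constant in Theorem 3.2, the following small helper evaluates $k^{*}$ for given parameters; the sample values of $\alpha$, $\beta$, $T$, $S$ are chosen only for illustration.

```python
# Sketch only: evaluate
# k* = 4 (2 - 2*alpha)(2 - 2*beta) alpha beta / ( T^(1 - 2*alpha) S^(1 - 2*beta) c_alpha c_beta )
import numpy as np
from scipy.special import gamma

def c_const(H):
    # c_H = sqrt( 2 H Gamma(3/2 - H) / ( Gamma(H + 1/2) Gamma(2 - 2H) ) )
    return np.sqrt(2 * H * gamma(1.5 - H) / (gamma(H + 0.5) * gamma(2 - 2 * H)))

def k_star(alpha, beta, T, S):
    num = 4 * (2 - 2 * alpha) * (2 - 2 * beta) * alpha * beta
    den = T ** (1 - 2 * alpha) * S ** (1 - 2 * beta) * c_const(alpha) * c_const(beta)
    return num / den

print(k_star(alpha=0.7, beta=0.6, T=1.0, S=1.0))  # example values only
```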