数学杂志  2023, Vol. 43 Issue (2): 126-134
MODERATE DEVIATIONS FOR EMPIRICAL PERIODOGRAM OF LINEAR RANDOM FIELDS
ZHANG Shi-ling    
School of Mathematics and Statistics, Wuhan University, Wuhan 430072, China
Abstract: In this paper, we prove moderate deviations for quadratic forms and empirical periodograms of linear random fields. The main assumptions on the linear random fields are a Logarithmic Sobolev Inequality for the driving random variables and some integrability conditions on the spectral density. As statistical applications, we give the moderate deviation estimates of the least squares and the Yule-Walker estimators for unilateral autoregression stationary fields. The results above generalize the results for linear random processes in [8].
Keywords: linear random fields     moderate deviation principle     empirical periodogram    
1 Introduction

Consider a linear random field $ X=\{X_{j, k}; (j, k)\in\mathbb Z^2\} $ defined on a probability space $ (\Omega, \mathcal F, \mathbb P) $ by

$ \begin{equation} X_{j, k}:=\sum\limits_{r, s\in \mathbb Z} a_{r, s}\xi_{j+r, k+s}, \end{equation} $ (1.1)

where $ \{a_{r, s}; (r, s)\in \mathbb Z^2\} $ is a sequence of constants, the innovations $ \{\xi_{r, s}; (r, s)\in\mathbb Z^2\} $ are independent and identically distributed (i.i.d.) centered square integrable random variables, with common law $ \mathcal L (\xi_{0, 0})=\mu $, and the variance Var $ (\xi_{0, 0})=1 $. Assume that

$ \begin{equation} \sum\limits_{(r, s)\in \mathbb Z^2} a_{r, s}^2<\infty. \end{equation} $ (1.2)

Under condition (1.2), $ X_{j, k} $ in (1.1) is well defined; see Lemma A.1 in [1]. The family $ \{X_{j, k}; (j, k)\in \mathbb Z^2\} $ is strictly stationary with spectral density given by

$ \begin{equation} f({{\mathit{\boldsymbol{ t }}}}):=\left|\sum\limits_{{\mathit{\boldsymbol{ u }}}\in \mathbb Z^2}a_{{\mathit{\boldsymbol{ u }}}} e^{i {\mathit{\boldsymbol{ u }}}\cdot {\mathit{\boldsymbol{ t }}}}\right|^2, \end{equation} $ (1.3)

where $ {\mathit{\boldsymbol{ u }}}\cdot {\mathit{\boldsymbol{ t }}}:=u_1t_1+u_2t_2 $ for any $ {\mathit{\boldsymbol{ u }}}=(u_1, u_2)\in \mathbb Z^2 $ and $ {\mathit{\boldsymbol{ t }}}=(t_1, t_2)\in {\mathit{\boldsymbol{ I }}} $, and $ {\mathit{\boldsymbol{ I }}} $ is the torus identified with $ [-\pi, \pi)^2 $ in the usual way.
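As a concrete illustration, the following minimal Python sketch evaluates the spectral density (1.3) for a hypothetical finite-support coefficient family (all other $ a_{r, s} $ vanish); the coefficients are illustrative only.

```python
import numpy as np

# Hypothetical finite-support coefficients a_{r,s}; every other a_{r,s} is 0.
a = {(0, 0): 1.0, (1, 0): 0.5, (0, 1): 0.3}

def spectral_density(t1, t2):
    """f(t) = |sum_u a_u e^{i u.t}|^2 as in (1.3)."""
    z = sum(c * np.exp(1j * (r * t1 + s * t2)) for (r, s), c in a.items())
    return np.abs(z) ** 2

# f is real, non-negative and even; at t = 0 it equals (1 + 0.5 + 0.3)^2.
print(spectral_density(0.0, 0.0))  # ≈ 3.24
```

Since the coefficients are real, $ f $ is even, $ f({\mathit{\boldsymbol{ t }}})=f(-{\mathit{\boldsymbol{ t }}}) $.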

The empirical periodogram is one of the main tools in the estimation of the unknown spectral density $ f $. It is defined by

$ \begin{equation} \mathcal I_{{\mathit{\boldsymbol{ n }}}}({\mathit{\boldsymbol{ t }}}):=\frac{1}{(2\pi)^2 n^2} \left|\sum\limits_{{\mathit{\boldsymbol{1}}}\le {\mathit{\boldsymbol{ u }}}\le {\mathit{\boldsymbol{ n }}}} e^{i{\mathit{\boldsymbol{ u }}}\cdot {\mathit{\boldsymbol{ t }}}} X_{{\mathit{\boldsymbol{ u }}}}\right|^2, \end{equation} $ (1.4)

where $ {\mathit{\boldsymbol{ t }}}=(t_1, t_2)\in {\mathit{\boldsymbol{ I }}} $, $ {\mathit{\boldsymbol{ n }}}=(n, n) $, and $ {\mathit{\boldsymbol{1}}}\le {\mathit{\boldsymbol{ u }}}\le {\mathit{\boldsymbol{ n }}} $ means that $ 1\le u_1\le n $ and $ 1\le u_2\le n $ for $ {\mathit{\boldsymbol{ u }}}=(u_1, u_2)\in \mathbb Z^2 $.
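At the Fourier frequencies $ {\mathit{\boldsymbol{ t }}}=2\pi(j, k)/n $, the double sum in (1.4) is a two-dimensional discrete Fourier transform, so the periodogram of an observed field can be computed with an FFT. A minimal sketch, using a white-noise field only as a stand-in for an observation of $ X $:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
X = rng.standard_normal((n, n))  # white-noise field, a stand-in for X_{j,k}

# Empirical periodogram (1.4) at the Fourier frequencies t = 2*pi*(j, k)/n:
# for a real field, |sum_u e^{i u.t} X_u|^2 = |FFT2(X)|^2 there (the index
# shift u = 1..n versus 0..n-1 only changes a phase, killed by |.|^2).
I_n = np.abs(np.fft.fft2(X)) ** 2 / ((2 * np.pi) ** 2 * n ** 2)

# Parseval: the average over the n^2 Fourier frequencies equals the
# sample second moment of X divided by (2*pi)^2.
print(I_n.mean())
```

For unit-variance white noise this average is close to $ 1/(2\pi)^2 $, the constant spectral density of the innovations.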

Limit theorems for the empirical periodogram $ \mathcal I_{{\mathit{\boldsymbol{ n }}}} $ usually involve the following quadratic forms:

$ \Phi_{{\mathit{\boldsymbol{ n }}}}:=\left(\frac{1}{n^2} \sum\limits_{{\mathit{\boldsymbol{1}}}\le {\mathit{\boldsymbol{ k }}}\le {\mathit{\boldsymbol{ n }}}} X_{{\mathit{\boldsymbol{ k }}}} X_{{\mathit{\boldsymbol{ k }}}+{\mathit{\boldsymbol{ l }}}}\right)_{{\mathit{\boldsymbol{ l }}}\in \mathbb Z^2}. $
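Each entry of $ \Phi_{{\mathit{\boldsymbol{ n }}}} $ is a lag-$ {\mathit{\boldsymbol{ l }}} $ sample autocovariance. A small sketch (products $ X_{{\mathit{\boldsymbol{ k }}}}X_{{\mathit{\boldsymbol{ k }}}+{\mathit{\boldsymbol{ l }}}} $ that leave the observed window are simply dropped here, one common convention):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
X = rng.standard_normal((n, n))  # stand-in for an observed field

def phi_n(X, l1, l2):
    """Lag-(l1, l2) entry of Phi_n: (1/n^2) sum_{1<=k<=n} X_k X_{k+l},
    dropping products that leave the observed window (one common convention)."""
    n = X.shape[0]
    return (X[: n - l1, : n - l2] * X[l1:, l2:]).sum() / n ** 2

# For white noise: the lag-(0,0) entry is the sample second moment (≈ 1),
# off-diagonal lags are near 0.
print(phi_n(X, 0, 0), phi_n(X, 1, 0))
```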

There is an abundant literature on limit theorems for $ \mathcal I_{{\mathit{\boldsymbol{ n }}}} $ and $ \Phi_{{\mathit{\boldsymbol{ n }}}} $; we refer the reader to Rosenblatt [2] and the references therein.

The main purpose of this paper is to establish the moderate deviation principle (MDP in short) for the empirical periodogram $ \mathcal I_{{\mathit{\boldsymbol{ n }}}} $ and the quadratic forms $ \Phi_{{\mathit{\boldsymbol{ n }}}} $. To this end, we impose some conditions, such as $ L^q({\mathit{\boldsymbol{ I }}}, d{\mathit{\boldsymbol{ t }}}) $-integrability of the spectral density $ f $ and a Logarithmic Sobolev Inequality (LSI in short) for the law $ \mu $ of the driving random variable $ \xi $. Moreover, as statistical applications, we provide the moderate deviation estimates of the least squares and the Yule-Walker estimators for unilateral autoregression stationary fields.

This paper is organized as follows. In Section 2, we establish the MDP for the quadratic forms and the empirical periodogram, and give a statistical application based on our MDP results. Some lemmas on dependent stationary fields and the proofs of the main results are given in Section 3.

2 Moderate Deviation Principles for the Linear Random Field
2.1 MDP for Quadratic Form

Assume the following conditions:

(H1) The law $ \mu $ of the driving random variable $ \xi $ satisfies an LSI, i.e., there exists a constant $ C>0 $ such that

$ \begin{equation} {\rm Ent}_{\mu}(h^2)\le 2C\mathbb E_{\mu}(|\nabla h|^2) \end{equation} $ (2.1)

for every smooth $ h $ such that $ \mathbb E_{\mu}(h^2\log^{+} h^2)<\infty $, where

$ \begin{equation} {\rm Ent}_{\mu}(h^2)=\mathbb E_{\mu}(h^2\log h^2) -\mathbb E_{\mu}(h^2) \log \mathbb E_{\mu}(h^2). \end{equation} $ (2.2)

(H2) The spectral density function $ f $ is in $ L^q({\mathit{\boldsymbol{ I }}}, d{\mathit{\boldsymbol{ t }}}) $, where $ 2<q\le +\infty $.

(H3) The sequence of positive numbers $ \{b_n\}_{n\in \mathbb N} $ satisfies that $ b_n\rightarrow \infty $ and $ b_n n^{1/q-1/2}\rightarrow 0 $ as $ n\rightarrow \infty. $ Here $ q $ is the constant appearing in (H2).

Remark The LSI (2.1) implies that there exists some constant $ \delta>0 $ such that

$ \begin{equation} \mathbb E_{\mu}\left(e^{\delta |x|^2} \right)<\infty. \end{equation} $ (2.3)

See [3, Chapter 3] for more details on the LSI.

Let

$ \kappa_4=\frac{\mathbb E (\xi_{{\mathit{\boldsymbol{0}}}}^4)-3[\mathbb E(\xi_{{\mathit{\boldsymbol{0}}}}^2)]^2}{ [\mathbb E(\xi_{{\mathit{\boldsymbol{0}}}}^2)]^2} $

be the cumulant of order $ 4 $ of the random variable $ \xi_{{\mathit{\boldsymbol{0}}}} $.
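For Gaussian innovations $ \kappa_4=0 $, while for Rademacher ($ \pm1 $) innovations $ \kappa_4=-2 $. A quick empirical check of the definition:

```python
import numpy as np

def kappa4(xi):
    """Sample version of kappa_4 = (E xi^4 - 3 [E xi^2]^2) / [E xi^2]^2."""
    m2 = np.mean(xi ** 2)
    m4 = np.mean(xi ** 4)
    return (m4 - 3 * m2 ** 2) / m2 ** 2

rng = np.random.default_rng(2)
k4_gauss = kappa4(rng.standard_normal(10 ** 6))     # ≈ 0 for Gaussian xi
k4_rade = kappa4(rng.choice([-1.0, 1.0], 10 ** 6))  # exactly -2 for Rademacher
print(k4_gauss, k4_rade)
```

Note that $ \kappa_4=-2 $ is exactly the boundary value excluded by the condition $ \kappa_4>-2 $ appearing in the rate function (2.6) below.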

Theorem 2.1 Assume (H1)-(H3) hold. Then for every $ \lambda=(\lambda_{{\mathit{\boldsymbol{ k }}}})_{{\mathit{\boldsymbol{0}}}\le {\mathit{\boldsymbol{ k }}}\le {\mathit{\boldsymbol{ m }}}}\in \mathbb R^{(m+1)^2} $,

$ \begin{align} \lim\limits_{n \rightarrow \infty}\frac{1}{b_n^4}\log \mathbb E \exp\left(\frac{b_n^2}{n} \sum\limits_{{\mathit{\boldsymbol{0}}}\le {\mathit{\boldsymbol{ l }}}\le {\mathit{\boldsymbol{ m }}}} \lambda_{{\mathit{\boldsymbol{ l }}}}\sum\limits_{{\mathit{\boldsymbol{1}}}\le {\mathit{\boldsymbol{ k }}}\le {\mathit{\boldsymbol{ n }}}}(X_{{\mathit{\boldsymbol{ k }}}} X_{{\mathit{\boldsymbol{ k }}}+{\mathit{\boldsymbol{ l }}}}- \mathbb EX_{{\mathit{\boldsymbol{ k }}}}X_{{\mathit{\boldsymbol{ k }}}+{\mathit{\boldsymbol{ l }}}}) \right) =\frac{1}{2}\sum\limits_{{\mathit{\boldsymbol{0}}}\le {\mathit{\boldsymbol{ k }}}, {\mathit{\boldsymbol{ l }}}\le {\mathit{\boldsymbol{ m }}}} \lambda_{{\mathit{\boldsymbol{ k }}}}\Sigma^2_{{\mathit{\boldsymbol{ k }}}, {\mathit{\boldsymbol{ l }}}} \lambda_{{\mathit{\boldsymbol{ l }}}}, \end{align} $ (2.4)

where $ \Sigma^2_{{\mathit{\boldsymbol{ k }}}, {\mathit{\boldsymbol{ l }}}}, {\mathit{\boldsymbol{0}}}\le {\mathit{\boldsymbol{ k }}}, {\mathit{\boldsymbol{ l }}}\le {\mathit{\boldsymbol{ m }}} $ is given by

$ \begin{align} \Sigma^2_{{\mathit{\boldsymbol{ k }}}, {\mathit{\boldsymbol{ l }}}}=&\frac{1}{(2\pi)^2}\int_{{\mathit{\boldsymbol{ I }}}} 2 \cos ({\mathit{\boldsymbol{ k }}}\cdot {\mathit{\boldsymbol{ t }}}) \cos ({\mathit{\boldsymbol{ l }}} \cdot {\mathit{\boldsymbol{ t }}})f^2({\mathit{\boldsymbol{ t }}})d{\mathit{\boldsymbol{ t }}}\\ &+\kappa_4 \left(\frac{1}{(2\pi)^2} \int_{{\mathit{\boldsymbol{ I }}}} f({\mathit{\boldsymbol{ t }}})\cos({\mathit{\boldsymbol{ k }}} \cdot {\mathit{\boldsymbol{ t }}})d{\mathit{\boldsymbol{ t }}}\right)\left(\frac{1}{(2\pi)^2} \int_{{\mathit{\boldsymbol{ I }}}} f({\mathit{\boldsymbol{ t }}})\cos({\mathit{\boldsymbol{ l }}} \cdot {\mathit{\boldsymbol{ t }}})d{\mathit{\boldsymbol{ t }}}\right). \end{align} $ (2.5)

In particular,

$ \left(\frac{1}{nb_{n}^2 } \sum\limits_{{\mathit{\boldsymbol{1}}}\le {\mathit{\boldsymbol{ k }}}\le {\mathit{\boldsymbol{ n }}}} (X_{{\mathit{\boldsymbol{ k }}}} X_{{\mathit{\boldsymbol{ k }}}+{\mathit{\boldsymbol{ l }}}}- \mathbb EX_{{\mathit{\boldsymbol{ k }}}}X_{{\mathit{\boldsymbol{ k }}}+{\mathit{\boldsymbol{ l }}}})\right)_{{\mathit{\boldsymbol{0}}}\le {\mathit{\boldsymbol{ l }}}\le {\mathit{\boldsymbol{ m }}}} $

satisfies the LDP on $ \mathbb R^{(m+1)^2} $ with speed $ b_n^4 $ and with the rate function given by

$ I({\mathit{\boldsymbol{ z }}})=\sup\limits_{\lambda\in \mathbb R^{(m+1)^2}}\left\{ \sum\limits_{{\mathit{\boldsymbol{0}}}\le {\mathit{\boldsymbol{ k }}}\le {\mathit{\boldsymbol{ m }}}} \lambda_{{\mathit{\boldsymbol{ k }}}} z_{{\mathit{\boldsymbol{ k }}}}- \frac{1}{2}\sum\limits_{{\mathit{\boldsymbol{0}}}\le {\mathit{\boldsymbol{ k }}}, {\mathit{\boldsymbol{ l }}}\le {\mathit{\boldsymbol{ m }}}}\lambda_{{\mathit{\boldsymbol{ k }}}}\Sigma^2_{{\mathit{\boldsymbol{ k }}}, {\mathit{\boldsymbol{ l }}}} \lambda_{{\mathit{\boldsymbol{ l }}}}\right\}, \ \ {\mathit{\boldsymbol{ z }}}=(z_{{\mathit{\boldsymbol{ k }}}})_{{\mathit{\boldsymbol{0}}}\le {\mathit{\boldsymbol{ k }}}\le {\mathit{\boldsymbol{ m }}}}\in \mathbb R^{(m+1)^2}. $

As a consequence of Theorem 2.1, we have the following MDP by the contraction principle ([4, Theorem 4.2.1]).

Corollary 2.2 Assume (H1)-(H3) hold. For all $ {\mathit{\boldsymbol{ l }}}\ge {\mathit{\boldsymbol{0}}} $, $ \frac{1}{nb_{n}^2 } \sum\limits_{{\mathit{\boldsymbol{1}}}\le {\mathit{\boldsymbol{ k }}}\le {\mathit{\boldsymbol{ n }}}} (X_{{\mathit{\boldsymbol{ k }}}} X_{{\mathit{\boldsymbol{ k }}}+{\mathit{\boldsymbol{ l }}}}- \mathbb EX_{{\mathit{\boldsymbol{ k }}}}X_{{\mathit{\boldsymbol{ k }}}+{\mathit{\boldsymbol{ l }}}}) $ satisfies the LDP on $ \mathbb R $ with speed $ b_n^4 $ and with the rate function given by

$ I^{{\mathit{\boldsymbol{ l }}}}(z)=\frac12\frac{z^2}{\frac{1}{(2\pi)^2}\int_{{\mathit{\boldsymbol{ I }}}} 2 \cos^2 ({\mathit{\boldsymbol{ l }}} \cdot {\mathit{\boldsymbol{ t }}}) f^2({\mathit{\boldsymbol{ t }}})d{\mathit{\boldsymbol{ t }}}+\kappa_4 \left(\frac{1}{(2\pi)^2} \int_{{\mathit{\boldsymbol{ I }}}} f({\mathit{\boldsymbol{ t }}})\cos({\mathit{\boldsymbol{ l }}} \cdot {\mathit{\boldsymbol{ t }}})d{\mathit{\boldsymbol{ t }}}\right)^2 } $

with the convention that $ a/0=+\infty $ for all $ a>0 $ and $ 0/0:=0 $.

2.2 MDP for the Empirical Periodogram

From Theorem 2.1 and the projective limit method (see [4]), we obtain the functional-type MDP for

$ \mathcal L_{n}({\mathit{\boldsymbol{ t }}})=\frac{n}{b_n^2}\left(\mathcal I_{{\mathit{\boldsymbol{ n }}}}({\mathit{\boldsymbol{ t }}}) -\mathbb E \mathcal I_{{\mathit{\boldsymbol{ n }}}}({\mathit{\boldsymbol{ t }}}) \right). $

Theorem 2.3 Assume (H1)-(H3) hold. Let $ 1\le p<2 $ and let $ p'\in(2, +\infty] $ be the conjugate exponent, i.e., $ 1/p+1/p'=1 $. Assume that

$ b_nn^{1/q+1/p'-1/2}\rightarrow 0, \ \ \frac{1}{p'}+\frac1q<\frac12. $

Then $ (\mathcal L_{n})_{n\ge1} $ satisfies the LDP on $ (L^p({\mathit{\boldsymbol{ I }}}, d{\mathit{\boldsymbol{ t }}}), \sigma(L^p({\mathit{\boldsymbol{ I }}}, d{\mathit{\boldsymbol{ t }}}), L^{p'}({\mathit{\boldsymbol{ I }}}, d{\mathit{\boldsymbol{ t }}})) ) $ with speed $ b_n^4 $ and with the rate function given by

$ \begin{equation} J(\eta) = \left\{ \begin{aligned} & \frac{1}{(2\pi)^2}\int_{{\mathit{\boldsymbol{ I }}}}\frac{\eta^2({{\mathit{\boldsymbol{ t }}}})}{4f^2({\mathit{\boldsymbol{ t }}})}d{\mathit{\boldsymbol{ t }}}-\frac{\kappa_4}{2+\kappa_4}\left(\frac1{(2\pi)^2}\int_{{\mathit{\boldsymbol{ I }}}}\frac{\eta({\mathit{\boldsymbol{ t }}})}{2f({\mathit{\boldsymbol{ t }}})}d{\mathit{\boldsymbol{ t }}} \right)^2, \\ & \ \ \ \ \ \ \ \ \text{if} \ \kappa_4>-2, \ \eta \ \text{is even}, \ \eta d{\mathit{\boldsymbol{ t }}} \ll f d{\mathit{\boldsymbol{ t }}} \ \ \text{and} \ \ \frac{\eta}{f}\in L^2({\mathit{\boldsymbol{ I }}}, d{\mathit{\boldsymbol{ t }}});\\ & +\infty, \ \ \ \text{otherwise}, \end{aligned} \ \right. \end{equation} $ (2.6)

where "$ \eta $ is even" means that $ \eta({\mathit{\boldsymbol{ t }}})=\eta(-{\mathit{\boldsymbol{ t }}}) $ for all $ {\mathit{\boldsymbol{ t }}}\in {\mathit{\boldsymbol{ I }}} $.

As a consequence of Theorem 2.3, we have the following MDP.

Corollary 2.4 Under the assumptions of Theorem 2.3, we have that for all $ h\in L^{p'}({\mathit{\boldsymbol{ I }}}, d{\mathit{\boldsymbol{ t }}}) $,

$ \limsup\limits_{n\rightarrow \infty}\frac{1}{b_n^4}\log \mathbb E \left(e^{b_n^4 \frac{1}{(2\pi)^2}\int_{{\mathit{\boldsymbol{ I }}}} h({\mathit{\boldsymbol{ t }}})\mathcal L_n({\mathit{\boldsymbol{ t }}})d{\mathit{\boldsymbol{ t }}} } \right)=\frac12\sigma^2(h), $

where

$ \sigma^2(h):=\frac{1}{(2\pi)^2} \int_{{\mathit{\boldsymbol{ I }}}} 2\tilde h^2({\mathit{\boldsymbol{ t }}})f^2({\mathit{\boldsymbol{ t }}})d{\mathit{\boldsymbol{ t }}}+\kappa_4\left(\frac{1}{(2\pi)^2}\int_{{\mathit{\boldsymbol{ I }}}} h({\mathit{\boldsymbol{ t }}})f({\mathit{\boldsymbol{ t }}}) d{\mathit{\boldsymbol{ t }}}\right)^2 $

with $ \tilde h({\mathit{\boldsymbol{ t }}})=(h({\mathit{\boldsymbol{ t }}})+h(-{\mathit{\boldsymbol{ t }}}))/2 $. In particular, $ \frac{1}{(2\pi)^2}\int_{{\mathit{\boldsymbol{ I }}}} h({\mathit{\boldsymbol{ t }}})\mathcal L_n({\mathit{\boldsymbol{ t }}})d{\mathit{\boldsymbol{ t }}} $ satisfies the LDP on $ \mathbb R $ with speed $ b_n^4 $ and with the rate function given by $ I_h(z):=\frac{z^2}{2\sigma^2(h)}. $

2.3 Unilateral Autoregression Stationary Field

Consider the unilateral autoregression process:

$ \begin{align} X_{s, t}=\rho X_{s-1, t}+\varepsilon_{s, t}, \ \ \ s, t \in \mathbb N, \end{align} $ (2.7)

where the sequence $ \{\varepsilon_{s, t}\}_{s, t\in \mathbb Z} $ is i.i.d. with common law $ \mu $ satisfying an LSI, $ \mathbb E (\varepsilon_{s, t})=0, \mathbb E(\varepsilon_{s, t}^2)=1 $, and $ \rho\in(-1, 1) $ is the unknown parameter. Assume that the sequence $ \{X_{0, k}\}_{k\in \mathbb N} $ is i.i.d. with common law that of $ \sum_{k=0}^{\infty}\rho^k\varepsilon_{-k, 0} $, and is independent of $ \{\varepsilon_{s, t}\}_{s, t\ge1} $. Then $ \{ X_{s, t}\}_{s, t\in \mathbb N} $ is a centered stationary field, a special case of the unilateral autoregression fields; see [5] and [6].

Let $ \hat \rho_n $ be the least squares estimator of $ \rho $ given by

$ \hat\rho_n=\frac{\sum _{{\mathit{\boldsymbol{1}}}\le (i, j)\le {\mathit{\boldsymbol{ n }}}} X_{i, j}X_{i-1, j}}{\sum _{{\mathit{\boldsymbol{1}}}\le (i, j)\le {\mathit{\boldsymbol{ n }}}} X_{i-1, j-1}^2} $

and let $ \tilde \rho_n $ be the Yule-Walker estimator of $ \rho $ given by

$ \tilde \rho_n=\frac{\sum _{{\mathit{\boldsymbol{0}}}\le (i, j)\le {\mathit{\boldsymbol{ n }}}} X_{i, j}X_{i-1, j}}{\sum _{{\mathit{\boldsymbol{0}}}\le (i, j)\le {\mathit{\boldsymbol{ n }}}} X_{i-1, j-1}^2}. $
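As a sanity check of consistency, the following simulation sketch generates the field (2.7) (each column in $ t $ is an independent AR(1) in $ s $) and evaluates a least-squares ratio; here the denominator is taken as $ \sum X_{i-1, j}^2 $, the standard least-squares form, and the sample size, $ \rho $, and the seed are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)
n, rho = 400, 0.6  # arbitrary sample size and true parameter

# Simulate (2.7): each column in t is an independent AR(1) in s,
# started from its stationary law sum_k rho^k eps_{-k}.
X = np.empty((n + 1, n))
X[0] = rng.standard_normal(n) / np.sqrt(1 - rho ** 2)
for s in range(1, n + 1):
    X[s] = rho * X[s - 1] + rng.standard_normal(n)

# Least-squares ratio over 1 <= (i, j) <= n, with denominator
# sum X_{i-1,j}^2 (the standard least-squares form).
rho_hat = (X[1:] * X[:-1]).sum() / (X[:-1] ** 2).sum()
print(rho_hat)  # close to the true rho = 0.6
```

The Yule-Walker variant differs only in the index range of the sums, which does not affect the limit.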

It is well known that $ \hat\rho_n $ and $ \tilde \rho_n $ are consistent and asymptotically normal, see [5] and [6]. Moreover, Bercu et al. [7] showed that the Yule-Walker estimator enjoys a better LDP than the least squares estimator.

Using Theorem 2.1 and the same proof as that of Proposition 3.1 in [8], one can establish the following MDP for the least squares and the Yule-Walker estimators; the two estimators share the same MDP. We omit the proof.

Proposition 2.5 Let $ (b_n) $ be a sequence of positive numbers satisfying that $ b_n\rightarrow \infty $ and $ b_n n^{1/q-1/2}\rightarrow 0 $ as $ n\rightarrow \infty. $ Then $ \frac{n}{b_n^2}(\hat\rho_n-\rho) $ (or $ \frac{n}{b_n^2}(\tilde\rho_n-\rho) $) satisfies the LDP on $ \mathbb R $ with speed $ b_n^4 $ and with the rate function given by

$ I(x)=\frac{x^2}{2(1-\rho^2)}. $
3 Proofs of Main Theorems

The proofs of Theorem 2.1 and Theorem 2.3 are similar to those in [8]. Here, we only sketch the proof of Theorem 2.1. First, we give some lemmas.

3.1 Several Lemmas

In this part, we give several facts concerning Toeplitz matrices, the Fejér approximation, and the MDP of $ m $-dependent stationary fields.

For an $ n\times n $ matrix $ A $, denote the usual operator norm by $ \|A\|=\sup\limits_{x\in \mathbb R^n, x\neq 0} \frac{|Ax|}{|x|} $. For any even function $ h\in L^1({\mathit{\boldsymbol{ I }}}, d{\mathit{\boldsymbol{ t }}}) $, $ T_{{\mathit{\boldsymbol{ n }}}}(h)=(\hat r_{{\mathit{\boldsymbol{ k }}}-{\mathit{\boldsymbol{ l }}}}(h))_{{\mathit{\boldsymbol{1}}}\le {\mathit{\boldsymbol{ k }}}, {\mathit{\boldsymbol{ l }}} \le {\mathit{\boldsymbol{ n }}}} $ is the Toeplitz matrix associated with $ h $, where $ \hat r_{{\mathit{\boldsymbol{ k }}}}(h) $ is the $ {\mathit{\boldsymbol{ k }}} $th Fourier coefficient of $ h $ given by

$ \begin{equation} \hat r_{{\mathit{\boldsymbol{ k }}}}(h)=\frac{1}{(2\pi)^2} \int_{{\mathit{\boldsymbol{ I }}}} e^{i {\mathit{\boldsymbol{ k }}}\cdot {\mathit{\boldsymbol{ t }}}} h({\mathit{\boldsymbol{ t }}})d{\mathit{\boldsymbol{ t }}}, \ \ \ \forall {\mathit{\boldsymbol{ k }}}\in \mathbb Z^2. \end{equation} $ (3.1)
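For intuition, $ T_{{\mathit{\boldsymbol{ n }}}}(h) $ can be assembled entry by entry from numerically computed Fourier coefficients (3.1), approximating the normalized torus integral by a grid average. For $ h\equiv1 $, $ \hat r_{{\mathit{\boldsymbol{0}}}}=1 $ and every other coefficient vanishes, so $ T_{{\mathit{\boldsymbol{ n }}}}(1) $ is the identity:

```python
import numpy as np
from itertools import product

def fourier_coeff(h, k1, k2, grid=64):
    """Numerical k-th Fourier coefficient (3.1); the normalized integral over
    the torus is approximated by a grid average."""
    t = np.linspace(-np.pi, np.pi, grid, endpoint=False)
    T1, T2 = np.meshgrid(t, t, indexing="ij")
    return (np.exp(1j * (k1 * T1 + k2 * T2)) * h(T1, T2)).mean()

def toeplitz_2d(h, n):
    """n^2 x n^2 matrix T_n(h) with entries r_{k-l}(h), k, l in {1,...,n}^2."""
    idx = list(product(range(n), repeat=2))
    return np.array([[fourier_coeff(h, k1 - l1, k2 - l2)
                      for (l1, l2) in idx] for (k1, k2) in idx])

# h = 1 has r_0 = 1 and every other Fourier coefficient 0, so T_n(1) = Id.
T = toeplitz_2d(lambda t1, t2: np.ones_like(t1), 2)
print(np.allclose(T, np.eye(4)))  # True
```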

Lemma 3.1 [9, Lemma 1] If $ f\in L^q({\mathit{\boldsymbol{ I }}}, d{\mathit{\boldsymbol{ t }}}) $, where $ 1\le q\le \infty $, then for all $ {\mathit{\boldsymbol{ n }}}\ge {\mathit{\boldsymbol{1}}} $, we have

$ \|T_{{\mathit{\boldsymbol{ n }}}}(f)\|_{q}\le n^{2/q}\|f\|_{q}. $

Lemma 3.2 [9, Theorem 1] Let $ f_{k}\in L^{q_k}({\mathit{\boldsymbol{ I }}}, d{\mathit{\boldsymbol{ t }}}) $ with $ q_{k}\ge1 $ for $ k=1, \cdots, p $ and $ \sum\limits_{k=1}^p (1/q_k)\le 1 $. Then

$ \lim\limits_{n\rightarrow \infty} \frac{1}{n^2} \text{tr}( \prod\limits_{k=1}^p T_{{\mathit{\boldsymbol{ n }}}}(f_k) )=\hat r_{{\mathit{\boldsymbol{0}}}} (\prod\limits_{k=1}^p f_k ). $

Let $ m $ be a given positive integer. A random field $ (Z_{{\mathit{\boldsymbol{ n }}}})_{{\mathit{\boldsymbol{ n }}} \in \mathbb Z^2} $ is called $ m $-dependent if, for any finite sets $ A, B\subset \mathbb Z^2 $ with $ d(A, B)>m $, the $ \sigma $-algebras $ \sigma\{Z_{k, l}; (k, l)\in A\} $ and $ \sigma\{Z_{k, l}; (k, l)\in B\} $ are independent. Here

$ d(A, B)=\min\limits_{\left(k_A, l_A\right)\in A, \atop \left(k_B, l_B\right)\in B}\big\{\max\{|k_A-k_B|, |l_A-l_B| \}\big\}. $
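The distance $ d $ is simply the Chebyshev (max-coordinate) distance between index sets; a direct transcription:

```python
def d(A, B):
    """Max-coordinate (Chebyshev) distance between finite subsets of Z^2."""
    return min(max(abs(ka - kb), abs(la - lb))
               for (ka, la) in A for (kb, lb) in B)

# Sets with d(A, B) > m generate independent sigma-algebras
# when the field is m-dependent.
print(d({(0, 0)}, {(2, 5)}))          # 5
print(d({(0, 0), (0, 1)}, {(3, 0)}))  # 3
```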

See [10] for the study of the strong large number law for $ m $-dependent random fields.

The following lemma generalizes the MDP result for $ m $-dependent stationary processes in [11].

Lemma 3.3 Let $ (Z_{{\mathit{\boldsymbol{ n }}}})_{{\mathit{\boldsymbol{ n }}}\in \mathbb Z^2} $ be a centered $ m $-dependent stationary random field in $ \mathbb R^N $, such that

$ \begin{equation} \mathbb E\left(e^{\alpha |Z_{{\mathit{\boldsymbol{0}}}}| } \right)<\infty, \ \ \ \ \text{for some } \alpha>0. \end{equation} $ (3.2)

Then for all $ \lambda\in \mathbb R^N $,

$ \begin{align} \lim\limits_{n\rightarrow \infty}\frac{1}{b_n^4}\log\mathbb E(e^{\frac{ b_n^2}{n} \sum\limits_{{\mathit{\boldsymbol{1}}}\le {\mathit{\boldsymbol{ k }}}\le {\mathit{\boldsymbol{ n }}}} \langle \lambda, Z_{{\mathit{\boldsymbol{ k }}}} \rangle } ) = \frac{1}2 ( \mathbb E \langle \lambda, Z_{{\mathit{\boldsymbol{0}}}}\rangle^2+\sum\limits_{{\mathit{\boldsymbol{0}}} < d({\mathit{\boldsymbol{ k }}}, {\mathit{\boldsymbol{0}}}) \le {\mathit{\boldsymbol{ m }}}}\mathbb E \langle \lambda, Z_{{\mathit{\boldsymbol{0}}}}\rangle \langle \lambda, Z_{{\mathit{\boldsymbol{ k }}}}\rangle ). \end{align} $ (3.3)

Proof The proof is inspired by the method of the proof of [11, Theorem 1.1]; here, we give a sketch. First, we prove the lemma in the case $ m=1 $. Fix an integer $ p>1 $; for each $ n\ge1 $, write $ n=k_np+r_n, $ where $ k_n $ and $ r_n $ are non-negative integers with $ 0\le r_n\le p-1 $. Define

$ Y_{r, s}=\sum\limits_{(r-1)p<j<rp, \atop (s-1)p<k<sp} Z_{j, k}, \ \ \ \ r, s=1, 2, \cdots. $

Then, $ \{Y_{r, s}\}_{(r, s)\in \mathbb N^2} $ is an independent and identically distributed sequence. By the moderate deviation principle for i.i.d. random variables (see [4, Theorem 3.7.1]), we have

$ \begin{align} & \lim\limits_{n \rightarrow \infty}\frac{1}{b_n^4}\log \mathbb E \exp\{\frac{b_n^2}{n} \sum\limits_{1\le r\le k_n, \atop 1\le s\le k_n} \langle \lambda, Y_{r, s}\rangle \} \\ =&\frac{1}{2p^2 }((p-1)^2\mathbb E \langle \lambda, Z_{0, 0}\rangle^2 + (p-2)^2 \sum\limits_{i, j=\pm1} \mathbb E\langle\lambda, Z_{0, 0}\rangle \langle \lambda, Z_{i, j}\rangle \\ &+ (p-1)(p-2) \sum\limits_{i=\pm1} \mathbb E\langle \lambda, Z_{0, 0}\rangle\langle \lambda, Z_{i, 0}+ Z_{0, i}\rangle ). \end{align} $ (3.4)

Similarly,

$ \begin{align} & \lim\limits_{n \rightarrow \infty}\frac{1}{b_n^4}\log \mathbb E \exp\big\{\frac{b_n^2}{n} \sum\limits_{l=1}^{k_n}\sum\limits_{1\le j, s\le pk_n, \text{with} \atop j=pl \text{ or } s=pl }\langle \lambda, Z_{j, s}\rangle \big\} \\ =&\lim\limits_{n \rightarrow \infty} \frac{1}{2n^2}\left(k^2_n(2p-1) \mathbb E \langle \lambda, Z_{0, 0}\rangle^2 + \big( 2k_n(k_n-1)+1 \big) \sum\limits_{i=\pm1} \mathbb E\langle\lambda, Z_{0, 0}\rangle \langle \lambda, Z_{i, i}\rangle \right. \\ &\left. + 2k_n(k_n-1) \sum\limits_{i=\pm1} \mathbb E\langle\lambda, Z_{0, 0}\rangle \langle \lambda, Z_{i, -i}\rangle +k_n(k_np-1) \sum\limits_{i=\pm1}\mathbb E\langle\lambda, Z_{0, 0}\rangle \langle \lambda, Z_{i, 0}+Z_{0, i} \rangle \right) \\ =& \frac{1}{2p^2}\Big((2p-1)\mathbb E \langle \lambda, Z_{0, 0}\rangle^2+ 2\sum\limits_{i, j=\pm1} \mathbb E\langle\lambda, Z_{0, 0}\rangle \langle \lambda, Z_{i, j}\rangle + p\sum\limits_{i=\pm1}\mathbb E\langle\lambda, Z_{0, 0}\rangle \langle \lambda, Z_{i, 0}+Z_{0, i} \rangle \Big). \end{align} $ (3.5)

By (H3) and (3.2), we have

$ \begin{align} \lim\limits_{n \rightarrow \infty}\frac{1}{b_n^4}\log \mathbb E \exp\{\frac{b_n^2}{n} \sum\limits_{1\le j, s\le n, \text{with} \atop j> pk_n \text{ or } s>pk_n }\langle \lambda, Z_{j, s} \rangle \}=0. \end{align} $ (3.6)

Let $ S_{{\mathit{\boldsymbol{ n }}}}=\sum\limits_{{\mathit{\boldsymbol{1}}}\le {\mathit{\boldsymbol{ k }}}\le {\mathit{\boldsymbol{ n }}}}Z_{{\mathit{\boldsymbol{ k }}}} $ and notice that

$ \begin{align} S_{{\mathit{\boldsymbol{ n }}}}= \sum\limits_{1\le r\le k_n, \atop 1\le s\le k_n} Y_{r, s}+ \sum\limits_{l=1}^{k_n}\sum\limits_{1\le j, s\le pk_n, \text{with} \atop j=pl \text{ or } s=pl } Z_{j, s}+ \sum\limits_{1\le j, s\le n, \text{with} \atop j> pk_n \text{ or } s>pk_n } Z_{j, s}. \end{align} $ (3.7)
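The decomposition (3.7) splits $ \{1, \dots, n\}^2 $ into the interiors of the $ p\times p $ blocks, the grid lines $ j=lp $ or $ s=lp $ inside the $ pk_n $ square, and the boundary strip beyond $ pk_n $. A quick numerical sanity check that these three index sets partition the square (with arbitrary illustrative values $ n=17 $, $ p=5 $):

```python
# Check the decomposition (3.7): block interiors, the grid lines j = l*p or
# s = l*p (within the p*k_n square), and the boundary strip beyond p*k_n
# partition {1, ..., n}^2.  n and p are arbitrary illustrative choices.
n, p = 17, 5
k_n, r_n = divmod(n, p)  # n = k_n * p + r_n

blocks = {(j, k) for r in range(1, k_n + 1) for s in range(1, k_n + 1)
          for j in range((r - 1) * p + 1, r * p)
          for k in range((s - 1) * p + 1, s * p)}
grid = {(j, k) for j in range(1, p * k_n + 1) for k in range(1, p * k_n + 1)
        if j % p == 0 or k % p == 0}
boundary = {(j, k) for j in range(1, n + 1) for k in range(1, n + 1)
            if j > p * k_n or k > p * k_n}

full = {(j, k) for j in range(1, n + 1) for k in range(1, n + 1)}
print(blocks | grid | boundary == full)                      # True
print(len(blocks) + len(grid) + len(boundary) == len(full))  # True
```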

Then for any $ a_1, a_2, a_3>1 $ with $ 1/a_1+1/a_2+1/a_3=1 $, by the Hölder inequality, we have

$ \begin{align} \mathbb E \exp\big\{\frac{b_n^2}{n} \langle \lambda, S_{{\mathit{\boldsymbol{ n }}}}\rangle \big\} \le& \big\{\mathbb E \exp\{a_1\frac{b_n^2}{n} \sum\limits_{1\le r\le k_n, \atop 1\le s\le k_n} \langle \lambda, Y_{r, s}\rangle \} \big\} ^{1/a_1} \{\mathbb E \exp\{a_2\frac{b_n^2}{n} \sum\limits_{l=1}^{k_n}\sum\limits_{1\le j, s\le pk_n, \text{with} \atop j=pl \text{ or } s=pl }\langle \lambda, Z_{j, s} \rangle \} \}^{1/a_2} \\ & \cdot \big \{ \mathbb E \exp\{a_3\frac{b_n^2}{n} \sum\limits_{1\le j, s\le n, \text{with} \atop j> pk_n \text{ or } s>pk_n }\langle \lambda, Z_{j, s} \rangle \} \big \}^{1/a_3}. \end{align} $ (3.8)

By (3.4)–(3.6), we know that

$ \begin{align} & \limsup\limits_{n \rightarrow \infty}\frac{1}{b_n^4}\log \mathbb E \exp\left\{\frac{b_n^2}{n} \langle \lambda, S_{{\mathit{\boldsymbol{ n }}}}\rangle \right\} \\ \le & \frac{a_1}{2p^2 }\Big((p-1)^2\mathbb E \langle \lambda, Z_{0, 0}\rangle^2 + (p-2)^2 \sum\limits_{i, j=\pm1} \mathbb E\langle\lambda, Z_{0, 0}\rangle \langle \lambda, Z_{i, j}\rangle \\ &+ (p-1)(p-2) \sum\limits_{i=\pm1} \mathbb E\langle \lambda, Z_{0, 0}\rangle\langle \lambda, Z_{i, 0}+ Z_{0, i}\rangle \Big)\\ & + \frac{a_2}{2p^2}\Big((2p-1)\mathbb E \langle \lambda, Z_{0, 0}\rangle^2+ 2\sum\limits_{i, j=\pm1} \mathbb E\langle\lambda, Z_{0, 0}\rangle \langle \lambda, Z_{i, j}\rangle + p\sum\limits_{i=\pm1}\mathbb E\langle\lambda, Z_{0, 0}\rangle \langle \lambda, Z_{i, 0}+Z_{0, i} \rangle \Big). \end{align} $ (3.9)

Letting first $ p\rightarrow \infty $ and then $ a_1\rightarrow 1 $ in (3.9), we obtain

$ \begin{align} & \limsup\limits_{n \rightarrow \infty}\frac{1}{b_n^4}\log \mathbb E \exp\left\{\frac{b_n^2}{n} \langle \lambda, S_{{\mathit{\boldsymbol{ n }}}}\rangle \right\} \le \frac12\big(\mathbb E \langle \lambda, Z_{{\mathit{\boldsymbol{0}}}}\rangle^2 +\sum\limits_{{\mathit{\boldsymbol{0}}} < d({\mathit{\boldsymbol{ k }}}, {\mathit{\boldsymbol{0}}}) \le {\mathit{\boldsymbol{1}}}} \mathbb E \langle \lambda, Z_{{\mathit{\boldsymbol{0}}}} \rangle \langle \lambda, Z_{{\mathit{\boldsymbol{ k }}}} \rangle \big). \end{align} $ (3.10)

Similarly, from (3.7),

$ \begin{align} \mathbb E \exp\{\frac{b_n^2}{a_1 n} \sum\limits_{1\le r\le k_n, \atop 1\le s\le k_n} \langle \lambda, Y_{r, s}\rangle \} \le& \{ \mathbb E \exp\{\frac{b_n^2}{n} \langle \lambda, S_{{\mathit{\boldsymbol{ n }}}}\rangle \} \} ^{1/a_1} \{\mathbb E \exp\{-\frac{a_2}{a_1} \frac{b_n^2}{n} \sum\limits_{l=1}^{k_n}\sum\limits_{1\le j, s\le pk_n, \text{with} \atop j=pl \text{ or } s=pl }\langle \lambda, Z_{j, s}\rangle \} \}^{1/a_2} \\ & \cdot \{ \mathbb E \exp\{-\frac{a_3}{a_1}\frac{b_n^2}{n} \sum\limits_{1\le j, s\le n, \text{with} \atop j> pk_n \text { or } s>pk_n }\langle \lambda, Z_{j, s} \rangle \} \}^{1/a_3}. \end{align} $ (3.11)

This, together with the previous computations, implies

$ \begin{align} \liminf\limits_{n \rightarrow \infty}\frac{1}{b_n^4}\log \mathbb E \exp\left\{\frac{b_n^2}{n} \langle \lambda, S_{{\mathit{\boldsymbol{ n }}}}\rangle \right\} \ge \frac12\big(\mathbb E \langle \lambda, Z_{{\mathit{\boldsymbol{0}}}}\rangle^2 +\sum\limits_{{\mathit{\boldsymbol{0}}} < d({\mathit{\boldsymbol{ k }}}, {\mathit{\boldsymbol{0}}}) \le {\mathit{\boldsymbol{1}}}} \mathbb E \langle \lambda, Z_{{\mathit{\boldsymbol{0}}}} \rangle \langle \lambda, Z_{{\mathit{\boldsymbol{ k }}}} \rangle\big). \end{align} $ (3.12)

Hence, we have (3.3).

For general $ m>1 $, (3.3) can be proved by the argument in the proof of Theorem 1.1 of [11]. We omit the details here.

The proof is complete.

3.2 The Proof of Theorem 2.1

First, we approximate the linear random field by a linear random field of finite range $ 2N $, which satisfies the MDP.

Let $ X_{j, k}^N $ be the Fejér approximation of $ X_{j, k} $, that is

$ \begin{equation} X_{j, k}^N=\sum\limits_{r, s\in \mathbb Z} a_{r, s}^N\xi_{j+r, k+s}, \end{equation} $ (3.13)

where $ a_{r, s}^N=a_{r, s}\left(1-\frac{|r|}{N}\right)\left(1-\frac{|s|}{N}\right)\mathbf{1}_{\{|r|\le N, |s|\le N\}} $.
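The tapered coefficients can be computed directly; since the weight $ (1-|r|/N) $ vanishes at $ |r|=N $, keeping only $ |r|, |s|<N $ is equivalent to the indicator in the definition. A small sketch with hypothetical coefficients:

```python
def fejer_taper(a, N):
    """Fejer-tapered coefficients a^N_{r,s}; the weight vanishes at |r| = N,
    so keeping |r|, |s| < N matches the indicator in the definition."""
    return {(r, s): c * (1 - abs(r) / N) * (1 - abs(s) / N)
            for (r, s), c in a.items() if abs(r) < N and abs(s) < N}

# Hypothetical coefficients: large lags are shrunk and truncated, so the
# tapered field X^N has finite range 2N.
a = {(0, 0): 1.0, (1, 0): 0.5, (3, 2): 0.2}
print(fejer_taper(a, 2))  # {(0, 0): 1.0, (1, 0): 0.25}
```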

Set

$ Q_{n}^N=(Q_n^{N, {\mathit{\boldsymbol{ l }}}})=\left( \frac{1}{n b_n^2} Z_{{\mathit{\boldsymbol{ n }}}}^{N, {{\mathit{\boldsymbol{ l }}}}} \right)_{{\mathit{\boldsymbol{0}}}\le {\mathit{\boldsymbol{ l }}} \le {\mathit{\boldsymbol{ m }}}} \text{ and } \ Q_{n}=(Q_n^{ {\mathit{\boldsymbol{ l }}}})=\left( \frac{1}{n b_n^2} Z_{{\mathit{\boldsymbol{ n }}}}^{{{\mathit{\boldsymbol{ l }}}}} \right)_{{\mathit{\boldsymbol{0}}}\le {\mathit{\boldsymbol{ l }}} \le {\mathit{\boldsymbol{ m }}}}, $

where

$ Z_{{\mathit{\boldsymbol{ n }}}}^{N, {{\mathit{\boldsymbol{ l }}}}}=\sum\limits_{{{\mathit{\boldsymbol{1}}}\le {\mathit{\boldsymbol{ k }}}\le{\mathit{\boldsymbol{ n }}} }} (X_{{\mathit{\boldsymbol{ k }}}}^NX^N_{{\mathit{\boldsymbol{ k }}}+{\mathit{\boldsymbol{ l }}}}-\mathbb EX_{{\mathit{\boldsymbol{ k }}}}^NX^N_{{\mathit{\boldsymbol{ k }}}+{\mathit{\boldsymbol{ l }}}} )\ \ \ \text{and } Z_{{\mathit{\boldsymbol{ n }}}}^{{{\mathit{\boldsymbol{ l }}}}}=\sum\limits_{{{\mathit{\boldsymbol{1}}}\le {\mathit{\boldsymbol{ k }}}\le{\mathit{\boldsymbol{ n }}} }} (X_{{\mathit{\boldsymbol{ k }}}}X_{{\mathit{\boldsymbol{ k }}}+{\mathit{\boldsymbol{ l }}}}-\mathbb EX_{{\mathit{\boldsymbol{ k }}}}X_{{\mathit{\boldsymbol{ k }}}+{\mathit{\boldsymbol{ l }}}} ). $

Then the family $ \left\{(X^N_{{\mathit{\boldsymbol{ k }}}} X^N_{{\mathit{\boldsymbol{ k }}}+{\mathit{\boldsymbol{ l }}}})_{{\mathit{\boldsymbol{0}}}\le {{\mathit{\boldsymbol{ l }}}}\le {{\mathit{\boldsymbol{ m }}}}}\in \mathbb R^{(m+1)^2}, {\mathit{\boldsymbol{ k }}}\in \mathbb Z^2\right\} $ is a $ 2N $-dependent stationary field. By (2.3), for all $ N\ge1 $, there exists a constant $ \eta>0 $ such that $ \mathbb E \big(e^{\eta |X^N_{{\mathit{\boldsymbol{ k }}}} X^N_{{\mathit{\boldsymbol{ k }}}+{\mathit{\boldsymbol{ l }}}}|} \big)<\infty $.

By Lemma 3.3, we get that for any $ N $ fixed, and for all $ \lambda\in \mathbb R^{(m+1)^2} $,

$ \begin{align} \lim\limits_{n \rightarrow \infty}\frac{1}{b_n^4}\log \mathbb E \exp\left\{ {b_n^4} \langle \lambda, Q_n^N\rangle \right\}=\frac12\langle \lambda, \Sigma^{2, N}\lambda\rangle, \end{align} $ (3.14)

where $ \Sigma^{2, N} $ is the covariance matrix given in Lemma 3.3. Then, by the Gärtner-Ellis theorem, we know that $ Q_n^N $ satisfies the MDP on $ \mathbb R^{(m+1)^2} $ with the good rate function $ I_N({\mathit{\boldsymbol{ x }}})=\sup\limits_{\lambda\in \mathbb R^{(m+1)^2}}\left\{\langle \lambda, {\mathit{\boldsymbol{ x }}}\rangle -\frac{1}{2}\langle \lambda, \Sigma^{2, N}\lambda\rangle \right\}. $ Furthermore, by [2, Theorem 4.6.2], $ \Sigma^{2, N}_{{\mathit{\boldsymbol{ k }}}, {\mathit{\boldsymbol{ l }}}} $ can be expressed as (2.5) with $ f $ replaced by $ f^N $, where $ f^N({\mathit{\boldsymbol{ t }}})= |\sum\limits_{{\mathit{\boldsymbol{ u }}}\in \mathbb Z^2}a_{{\mathit{\boldsymbol{ u }}}}^N e^{i {\mathit{\boldsymbol{ u }}}\cdot {\mathit{\boldsymbol{ t }}}}|^2. $

It remains to show that the approximation is good in the sense of the MDP and to establish the convergence of the rate functions. These steps are the same as in the proof of Theorem 2.1 in [8], using the techniques of the LSI; we omit the details here.

The proof is complete.

References
[1]
Sang Hailin, Xiao Yimin. Exact moderate and large deviations for linear random fields[J]. Journal of Applied Probability, 2018, 55(2): 431-449. DOI:10.1017/jpr.2018.28
[2]
Rosenblatt M. Gaussian and non-Gaussian linear time series and random fields[M]. New York: Springer-Verlag, 2000.
[3]
Azéma J, Émery M, Ledoux M, Yor M. Séminaire de Probabilités XXXIII[M]. Berlin Heidelberg: Springer-Verlag, 1999.
[4]
Dembo A, Zeitouni O. Large deviations techniques and applications (2nd ed.)[M]. Berlin Heidelberg: Springer-Verlag, 2010.
[5]
Tjøstheim D. Statistical spatial series modeling[J]. Advances in Applied Probability, 1978, 10(1): 130-154. DOI:10.2307/1426722
[6]
Tjøstheim D. Statistical spatial series modeling II: Some further results on unilateral lattice processes[J]. Advances in Applied Probability, 1983, 15(3): 562-584. DOI:10.2307/1426619
[7]
Bercu B, Gamboa F, Rouault A. Large deviations for quadratic forms of stationary Gaussian processes[J]. Stochastic Processes and Their Applications, 1997, 71(1): 75-90. DOI:10.1016/S0304-4149(97)00071-9
[8]
Djellout H, Guillin A, Wu Liming. Moderate deviations of empirical periodogram and non-linear functionals of moving average processes[J]. Annales de l'Institut Henri Poincare (B) Probability and Statistics, 2006, 42(4): 393-416. DOI:10.1016/j.anihpb.2005.04.006
[9]
Avram F. On bilinear forms in Gaussian random variables on Toeplitz matrices[J]. Probability Theory and Related Fields, 1988, 79(1): 37-45. DOI:10.1007/BF00319101
[10]
Móricz F, Stadtmüller U, Thalmaier M. Strong laws for blockwise $\mathcal M$-dependent random fields[J]. Journal of Theoretical Probability, 2008, 21(3): 660-671.
[11]
Chen Xia. Moderate deviations for $m$-dependent random variables with Banach space values[J]. Statistics & Probability Letters, 1997, 35(2): 123-134.