The theory and application of linear models with Markov-type dependent errors have recently attracted increasing research attention. For the case in which the errors form a homogeneous Markov chain, see Maller [1], Pere [2] and Fuller [3]; for the non-homogeneous case, see Azrak and M\'{e}lard [4], Carsoule and Franses [5], Dahlhaus [6], and Kwoun and Yajima [7]. It is well known that, compared with a homogeneous Markov chain, the limiting behavior of a non-homogeneous Markov chain is much more difficult to handle. To simplify the model, we consider testing the homogeneity of the error process in the following linear model
where $x_t\in R^d$ are deterministic regressor vectors, $\beta$ is a $d$-dimensional unknown parameter, and $\{\varepsilon_t\}$ is a Markov chain generated by the following recursion
where $\theta\in R$ is an unknown parameter, $\varphi_t(\theta)$ is a real-valued function on a compact set $\Theta$ that contains the true value $\theta_0$ as an interior point, and the $\eta_t$ are i.i.d. mean-zero random variables (rvs) with finite variance $\sigma^2$ (also to be estimated).
Clearly, the errors $\{\varepsilon_t\}$ form a non-homogeneous Markov chain when the coefficient $\varphi_t(\theta)$ depends on $t$. This paper discusses testing the homogeneity of the Markov chain $\{\varepsilon_t\}$ based on the quasi-maximum likelihood estimators (QMLEs) of the unknown parameters. The limiting distribution of the likelihood ratio test statistic is obtained by martingale-difference techniques.
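To fix ideas, the following minimal Python sketch simulates data from (1.1)-(1.2). The particular coefficient function $\varphi_t(\theta)$, the randomly drawn regressors and all numerical values are illustrative assumptions for this sketch only and are not part of the model above (in particular, the paper treats the $x_t$ as deterministic).
\begin{verbatim}
import numpy as np

def simulate(n, beta, theta, sigma, rng):
    """Simulate y_t = x_t^T beta + eps_t with eps_t = phi_t(theta)*eps_{t-1} + eta_t."""
    d = len(beta)
    x = rng.normal(size=(n, d))                      # regressors (drawn randomly here for illustration)
    eta = rng.normal(scale=sigma, size=n)            # i.i.d. mean-zero innovations eta_t
    phi = theta * (1.0 + 1.0 / np.arange(1, n + 1))  # hypothetical time-varying phi_t(theta)
    eps = np.zeros(n)
    for t in range(1, n):
        eps[t] = phi[t] * eps[t - 1] + eta[t]        # non-homogeneous Markov error recursion
    return x @ beta + eps, x

rng = np.random.default_rng(0)
y, x = simulate(n=200, beta=np.array([1.0, -0.5]), theta=0.4, sigma=1.0, rng=rng)
\end{verbatim}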
The log-likelihood of $y_2, y_3, \cdots, y_n$ conditional on $y_1$ is defined by [1]
We maximize (2.1) to obtain the QML estimators, denoted by $\hat{\beta}_n, \hat{\theta}_n$ and $\hat{\sigma}^2_n$ (when they exist). These estimators satisfy [1]
where $\varphi'_t(\theta_0)=\displaystyle\frac{d}{d\theta}\varphi_t(\theta)|_{\theta=\theta_0}\neq0$, and
Write the ``true'' model as
By (2.5)
Define $\displaystyle\prod^{-1}_{i=0}\varphi_{t-i}(\theta_0):=1$. Then, from (2.7), we have
We need the following conditions.
(A1) $X_n:=\displaystyle\sum\limits_{t=2}^{n}x_tx^T_t$ is positive definite for sufficiently large $n$ and
where $\displaystyle Z_n:=\frac{1}{2}\sum\limits_{t=2}^{n}(x_tx^T_{t-1}+x_{t-1}x^T_t)$, $ |\lambda|_{\max}(\cdot)$ denotes the maximum absolute eigenvalue of a symmetric matrix, and, throughout, the matrix square root may be taken from any square-root decomposition of the form
(A2) There is a constant $\alpha>0$ such that
for any $t\in \{1, 2, \cdots, n\}$ and $\theta\in \Theta$.
(A3) $E\eta^4_1<\infty$ and $\varphi_t(\theta)$, $\varphi'_t(\theta):=\frac{d\varphi_t(\theta)}{d\theta}$, $\varphi''_t(\theta):=\frac{d\varphi'_t(\theta)}{d\theta}$ are bounded on $\Theta$, uniformly in $t$.
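As a concrete illustration of (A3) only (a hypothetical family, not taken from the text above): for $\varphi_t(\theta)=\theta(1+t^{-1})$ one has $\varphi'_t(\theta)=1+t^{-1}$ and $\varphi''_t(\theta)=0$, all bounded uniformly in $t\geq1$ on a compact $\Theta$, so (A3) holds for this family whenever $E\eta^4_1<\infty$; whether (A1) and (A2) hold depends on the regressors and on the bounds displayed above.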
Remark 2.1 Condition (A1) is often imposed to ensure the existence of the estimators in linear models with Markov-type errors; see, e.g., Maller [1], Hu [8], and Xu and Hu [9]. Condition (A2) was used in [8, 9], and the first condition in (A2) was used by Kwoun and Yajima [7]. Silvapulle [10] and Tong et al. [11] used conditions similar to (A3) when discussing the asymptotic properties of estimators in linear and partially linear models.
Define the $(d+1)$-vector $\mathscr{G}=(\beta, \theta)$, and
where $X_n(\theta):=-\sigma^2\displaystyle\frac{\partial^2\mathscr{L}_n}{\partial\beta\partial\beta^T}=\displaystyle \sum\limits_{t=2}^{n}(x_t-\varphi_t(\theta)x_{t-1})(x_t-\varphi_t(\theta)x_{t-1})^T$ and the $*$ indicates that the elements are filled in by symmetry.
Assume (A2) and (A3). From $E(e_te_{t-1})=E[(\varphi_t(\theta_0)e_{t-1}+\eta_t)e_{t-1}]=\varphi_t(\theta_0)Ee^2_{t-1}$,
where
From Eq. (5.29) in Hu [8], we have
and $\displaystyle F_n^{\frac{T}{2}}, D_n^{\frac{T}{2}}$ are defined as in (2.10).
In this paper, we consider the hypothesis
where the function $\rho(\theta)$ satisfies $\rho(\theta)<1$ for $\theta\in \Theta$, $\rho(\theta_0)\neq0$, and $\rho'(\theta)$ is bounded on $\Theta$.
Let $\hat{\beta}_H(n), \hat{\theta}_H(n)$ and $\hat{\sigma}^2_H(n)$ be the corresponding QML estimators of $\beta, \theta$ and $\sigma^2$ under the hypothesis $H_{0}$ and let
The main result in this paper is the following theorem.
Theorem 2.1 Assume (A1)-(A3). Suppose $H_{0}:\varphi_t(\theta)=\rho(\theta)$ holds. Then, as $n\to\infty$, $d(n)\xrightarrow[]{~D~}\chi^2_1$, where $\chi^2_m$ denotes a chi-square rv with $m$ degrees of freedom.
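As an illustration of how Theorem 2.1 would be applied, the Python sketch below fits the model by numerically maximizing a Gaussian quasi-log-likelihood of the conditional form suggested by (2.1) (with $\sigma^2$ profiled out), once without restriction and once under $H_0:\varphi_t(\theta)=\rho(\theta)$, and rejects $H_0$ when the statistic exceeds the $\chi^2_1$ critical value. The concrete functions \texttt{phi\_full} and \texttt{phi\_H0}, the profiled likelihood form, and taking $d(n)$ as twice the gap in maximized quasi-log-likelihoods are assumptions made only for this sketch.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

def neg_profile_loglik(params, y, x, phi_fn):
    """Negative Gaussian quasi-log-likelihood, conditional on y_1,
    with sigma^2 profiled out (additive constants dropped)."""
    beta, theta = params[:-1], params[-1]
    e = y - x @ beta                          # residuals e_t = y_t - x_t^T beta
    t = np.arange(2, len(y) + 1)              # time indices t = 2, ..., n
    u = e[1:] - phi_fn(theta, t) * e[:-1]     # conditional innovations eta_t
    return 0.5 * len(u) * np.log(np.mean(u ** 2))

# Hypothetical coefficient functions (assumptions for this sketch only):
phi_full = lambda theta, t: theta * (1.0 + 1.0 / t)              # unrestricted phi_t(theta)
phi_H0 = lambda theta, t: theta * np.ones_like(t, dtype=float)   # homogeneous rho(theta) = theta

def lr_test(y, x, alpha=0.05):
    p0 = np.zeros(x.shape[1] + 1)             # initial (beta, theta)
    full = minimize(neg_profile_loglik, p0, args=(y, x, phi_full), method="Nelder-Mead")
    rest = minimize(neg_profile_loglik, p0, args=(y, x, phi_H0), method="Nelder-Mead")
    d_n = 2.0 * (rest.fun - full.fun)         # likelihood ratio statistic d(n)
    return d_n, d_n > chi2.ppf(1 - alpha, df=1)   # reject H_0 if d(n) exceeds chi^2_1 quantile
\end{verbatim}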
Lemma 3.1 Assume (A1)-(A3). Then the QML estimators $\hat{\beta}_n, \hat{\theta}_n$ and $\hat{\sigma}^2_n$ of $\beta, \theta$ and $\sigma^2$ in model (1.1)-(1.2) exist, and as $n\to \infty$,
Proof See Theorem 3.1 and Theorem 3.2 in Hu [8].
Lemma 3.2 Assume (A2) and (A3). Then $\mbox{Var}(\displaystyle\sum^n_{t=1}\varphi'^2_t(\theta_0)e^2_{t-1})=O(n), ~ n\to\infty.$
Proof
From Lemma 4.1 in Xu and Hu [9], we have $Ee^4_t\leq (E\eta^4_1+6\sigma^4_0)(1+\alpha)^2.$ Then
By recursion,
Similarly,
Therefore, from (3.2), (3.4) and (3.5),
where $c_0$ is the bound on $\varphi'_t(\theta)$ from (A3).
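For readability, the structure of the bound just established can be summarized as follows (a restatement of the steps above, introducing no new assumptions):
\[
\mbox{Var}\Big(\sum\limits^{n}_{t=1}\varphi'^2_t(\theta_0)e^2_{t-1}\Big)
=\sum\limits^{n}_{t=1}\sum\limits^{n}_{s=1}\varphi'^2_t(\theta_0)\varphi'^2_s(\theta_0)\,\mbox{Cov}\big(e^2_{t-1}, e^2_{s-1}\big)
\leq c^4_0\sum\limits^{n}_{t=1}\sum\limits^{n}_{s=1}\big|\mbox{Cov}\big(e^2_{t-1}, e^2_{s-1}\big)\big|,
\]
so the $O(n)$ rate follows once $\sup_{t}\sum_{s}|\mbox{Cov}(e^2_{t-1}, e^2_{s-1})|$ is bounded uniformly in $n$, which is the purpose of the covariance estimates above.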
Using (2.2), (2.8) and (2.4),
By (2.8),
Then, from (2.4),
By (5.23) and (5.24) in Hu [8], as $n\to\infty$, $|T_2|=o_p(1), |T_3|=o_p(1).$ Thus
From (2.7),
Using (4.1), (4.3), (4.4) and a Taylor expansion,
where $\tilde{\theta}_n=a\theta_0+(1-a)\hat{\theta}_n, \, \, 0\leq a\leq 1$.
Since $\hat{\theta}_n\xrightarrow[]{~~P~~}\theta_0$ as $n\to\infty$, we have $\tilde{\theta}_n\xrightarrow[]{~~P~~}\theta_0$ as $ n\to\infty$, and hence
Thus,
Now we give an approximation for $\hat{\theta}_n-\theta_0$. In fact, from Eq. (5.28) in Hu [8],
where $\Phi_n, D_n, S_n$ are the same as in (2.14) and (2.11). Then
which means
In view of Lemma 3.2, the law of large numbers holds for the sequence $\{\varphi'^2_t(\theta_0)e^2_{t-1}\}_{t\geq2}$. Noting that $E[\sum\limits^n_{t=2}\varphi'^2_t(\theta_0)e^2_{t-1}]=\Delta_n(\theta_0, \sigma_0)=O(n)$, we obtain
From (4.11) and (2.14),
It is easy to verify that $\{\varphi'_t(\theta_0)\eta_t e_{t-1}\}_{t\geq 2}$ is a martingale difference array with respect to the $\sigma$-fields $\mathscr{F}_{n, t}$ generated by $\eta_1, \eta_2, \cdots, \eta_t ~(t\leq n)$. By the orthogonality of the martingale differences, $E(\displaystyle\sum\limits_{t=2}^{n}\varphi'_t(\theta_0)\eta_te_{t-1})^2=\sigma^2_0 \sum\limits_{t=2}^{n}\varphi'^2_t(\theta_0)Ee^2_{t-1}=O(n)$. Then, by the Chebyshev inequality (Corollary 5.1.1.1 in [3]),
Thus, from (4.8), (4.13), (4.14) and Lemma 3.2, a straightforward calculation yields
We now finish the proof of Theorem 2.1. From (2.4) and (2.8),
where $X_{Hn}(\hat{\theta}_H(n))=\sum\limits^n_{t=2}[x_t-\rho(\hat{\theta}_H(n))x_{t-1}][x_t-\rho(\hat{\theta}_H(n))x_{t-1}]^T$. Then
By (2.2), (2.8) and (4.16),
Since $\hat{\sigma}^2_n \xrightarrow[]{~~~P~~~}\sigma^2_0$ as $n\to\infty$, from (4.15), (4.17) and (4.10) we have
From (4.10) and Lemma 3.1, $U^{(n)}_2-U^{(n)}_3 \xrightarrow[]{~~P~~}0$ as $n\to\infty$. Now we need only prove that
In view of (4.14), Lemma 3.1 and Lemma 3.2,
and
Then to prove that
we need only show that
Note that $\{\varphi'_t(\theta_0)\eta_te_{t-1}\Delta^{-\frac{1}{2}}_n(\theta_0, \sigma_0) \}_{t\geq2}$ is a martingale difference array with respect to $\mathscr{F}_{n, t}=\sigma(\eta_1, \eta_2, \cdots, \eta_t)~(t\leq n)$, and from (4.12),
then, to obtain (4.21), it suffices to verify the Lindeberg condition for the array $\{\varphi'_t(\theta_0)\eta_te_{t-1}\Delta^{-\frac{1}{2}}_n(\theta_0, \sigma_0)\}_{t\geq 2}$.
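Written out for this array, the (unconditional) Lindeberg condition to be checked is
\[
\frac{1}{\Delta_n(\theta_0, \sigma_0)}\sum\limits^{n}_{t=2}E\Big[\varphi'^2_t(\theta_0)\eta^2_t e^2_{t-1}\,
\mathbf{1}\big\{|\varphi'_t(\theta_0)\eta_t e_{t-1}|>\varepsilon\Delta^{\frac{1}{2}}_n(\theta_0, \sigma_0)\big\}\Big]\longrightarrow 0, \qquad n\to\infty,
\]
for every $\varepsilon>0$; since the summands are nonnegative, this also yields the conditional form of the condition in probability.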
In fact, since
$\Delta_n(\theta_0, \sigma_0)=O(n)$ as $ n\to\infty $, for every $\varepsilon>0$ we have
Now (4.21) follows from the central limit theorem for martingale difference arrays (Theorem 8.1 in Pollard [12]), and (4.18) then follows from (4.20).