Consider the following Cox-Ingersoll-Ross (CIR) process
where $ \delta>2 $ is a known constant, $ b $ is an unknown parameter, and $ \left\{B_{t}, \; t\ge 0\right\} $ is a standard Brownian motion. The Cox-Ingersoll-Ross model was introduced by Cox, Ingersoll, and Ross in 1985, mainly to study the term structure of interest rates. If $ b>0 $, the process is explosive; if $ b<0 $, it is stationary. Let $ P_{\delta, b} $ denote the probability distribution of the solution of (1.1) on $ C(\mathbb{R}^+, \mathbb{R}) $.
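Since the display for (1.1) is not reproduced above, the following simulation sketch assumes a parametrization that is common in this literature, namely $ dX_t=(\delta+2bX_t)\,dt+2\sqrt{X_t}\,dB_t $; this assumed equation, the function name, and the Euler scheme with full truncation are illustrative, not the paper's own definitions.

```python
import numpy as np

def simulate_cir(delta, b, x0, T, n, seed=0):
    """Euler scheme with full truncation for a CIR-type process.

    Assumed (hypothetical) parametrization, not taken from the paper:
        dX_t = (delta + 2*b*X_t) dt + 2*sqrt(X_t) dB_t.
    """
    rng = np.random.default_rng(seed)
    dt = T / n
    sqrt_dt = np.sqrt(dt)
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        xp = max(x[i], 0.0)  # full truncation keeps the diffusion term real
        x[i + 1] = x[i] + (delta + 2 * b * xp) * dt \
                   + 2 * np.sqrt(xp) * rng.normal(0.0, sqrt_dt)
    return x
```

Under this assumed parametrization, with $ \delta>2 $ and $ b<0 $ (the stationary case) simulated paths fluctuate around $ -\delta/(2b) $, while $ b>0 $ produces explosive growth.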
Let $ V_{T} $ denote the log-likelihood ratio at time $ T $, namely
by the Girsanov formula,
where $ \mathscr{F}_{T}=\sigma\left(B_{t}, \; t\le{T}\right) $.
The above log-likelihood ratio process plays a crucial role in statistical inference. The maximum likelihood estimator $ \hat{b}_T $ of the parameter $ b $ is defined by maximizing the likelihood ratio. Depending on the value of $ b $, the asymptotic distributions and the corresponding convergence rates of the maximum likelihood estimator are quite different. Overbeck [1] showed that $ \hat{b}_T $ is consistent and asymptotically normal in the stationary case, while $ \hat{b}_T $ has an asymptotic Cauchy distribution in the explosive case. In the stationary case, Zani [2] and De Chaumaray [3] obtained the large deviations of $ \hat{b}_T $, and Gao and Jiang [4] obtained the moderate deviations of $ \hat{b}_T $. For parameter estimation and other issues concerning the Cox-Ingersoll-Ross model, see [5–9]. In this paper, we consider the hypothesis testing problem for this model.
Consider the following hypothesis testing problem
where $ b_{0}, b_{1}<0 $. Here, the likelihood ratio statistic $ \left.\frac{dP_{\delta, b_{1}}}{dP_{\delta, b_{0}}}\right|_{\mathscr{F}_{T}} $ can serve as a test statistic for this problem. By the Neyman-Pearson lemma, the decision region has the following form:
where $ c $ is a constant to be determined. The large deviation principle for the log-likelihood ratio is an effective method for estimating $ c $; it has been applied by Bishwal [10] and by Zhao and Gao [11] to hypothesis testing for the fractional Ornstein-Uhlenbeck model and the Jacobi model. However, since large deviations only describe limiting behavior, they have certain limitations in some practical statistical settings.
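As a rough illustration of the Neyman-Pearson decision region above, under the same assumed parametrization as before ($ dX_t=(\delta+2bX_t)\,dt+2\sqrt{X_t}\,dB_t $, a hypothesis, not the paper's display), the Girsanov formula would give the log-likelihood ratio sketched below; the discretization and the threshold form $ \{V_T\ge cT\} $ are a sketch under that assumption, not the paper's exact statistics.

```python
import numpy as np

def log_likelihood_ratio(path, dt, delta, b0, b1):
    """Discretized log dP_{delta,b1}/dP_{delta,b0} restricted to F_T.

    Under the hypothetical parametrization
        dX_t = (delta + 2*b*X_t) dt + 2*sqrt(X_t) dB_t,
    Girsanov's theorem yields
        V_T = (b1-b0)/2 * (X_T - X_0 - delta*T)
              - (b1^2-b0^2)/2 * int_0^T X_t dt.
    """
    T = dt * (len(path) - 1)
    int_x = float(np.sum(path[:-1]) * dt)  # left Riemann sum for int_0^T X_t dt
    return (0.5 * (b1 - b0) * (path[-1] - path[0] - delta * T)
            - 0.5 * (b1 ** 2 - b0 ** 2) * int_x)

def reject_h0(path, dt, delta, b0, b1, c):
    """Neyman-Pearson style decision region of the form {V_T >= c*T}."""
    T = dt * (len(path) - 1)
    return log_likelihood_ratio(path, dt, delta, b0, b1) >= c * T
```

The sharp large deviation expansions of Theorems 1.1 and 1.2 below are precisely what calibrates the threshold $ c $ in such a rule beyond the leading exponential order.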
Numerical approximations obtained from sharp large deviations outperform those based on the central limit theorem or Edgeworth expansions, so sharp large deviations are very useful in practice. The sharp large deviations for the log-likelihood ratio and the maximum likelihood estimator of the stationary Ornstein-Uhlenbeck process were studied by Bercu and Rouault [12]. In recent years, sharp large deviations for maximum likelihood estimators of the non-stationary Ornstein-Uhlenbeck process [13], the fractional Ornstein-Uhlenbeck process [14], and the Cox-Ingersoll-Ross process [15] have attracted much attention. In this paper, inspired by Bercu and Rouault [12], we investigate the sharp large deviations for the log-likelihood ratio of the Cox-Ingersoll-Ross process in the stationary case.
Now we state our main results.
Theorem 1.1 Under the hypothesis $ H_0 $, there exists a sequence $ (d_{c, k}) $ such that, for any $ p>0 $ and $ T $ large enough, if $ b_1<b_0 $, for all $ c<\frac{\delta(b_1-b_0)^2}{8b_0} $, we have
if $ b_1>b_0 $, for all $ c>\frac{\delta(b_1-b_0)^2}{8b_0} $, we have
where
and
The coefficients $ d_{c, 1} $, $ d_{c, 2} $, ..., $ d_{c, p} $ can be given explicitly as functions of the derivatives of $ \Lambda $ and $ H $ (see Lemma 2.1) at the point $ a_c $. For example, the first coefficient $ d_{c, 1} $ is given by
with $ \Lambda_k=\Lambda^{(k)}(a_c) $, $ H_k=H^{(k)}(a_c). $
Theorem 1.2 Under the hypothesis $ H_1 $, there exists a sequence $ ( \widetilde{d}_{c, k}) $ such that, for any $ p>0 $ and $ T $ large enough, if $ b_1<b_0 $, for all $ c<-\frac{\delta(b_1-b_0)^2}{8b_1} $, we have
if $ b_1>b_0 $, for all $ c>-\frac{\delta(b_1-b_0)^2}{8b_1} $, we have
Similarly, the coefficients $ \widetilde{d}_{c, 1} $, $ \widetilde{d}_{c, 2} $,..., $ \widetilde{d}_{c, p} $ can be calculated explicitly.
By Theorems 1.1 and 1.2, we get
Corollary 1.1 For any closed subset $ F\subset\mathbb{R} $,
and for any open subset $ G\subset\mathbb{R} $,
where $ I(x) $ is defined in Theorem 1.1.
Corollary 1.2 For any closed subset $ F\subset\mathbb{R} $,
where $ \widetilde{I}(x) $ is defined in Theorem 1.2.
In this section, we propose several lemmas that play an important role in the proof of Theorem 1.1.
In order to study the sharp large deviations for the log-likelihood ratio, we consider the logarithmic moment generating function under $ P_{\delta, b_0} $, i.e.,
Let
be the domain of $ \Lambda_{T} $.
Lemma 2.1 Set $ \varphi(\lambda)=-\sqrt{b_0^2+\lambda(b_1^2-b_0^2)} $, $ h(\lambda)=\frac{\lambda(b_1-b_0)+b_0}{\varphi(\lambda)} $.
$ (a) $ For all $ \lambda\in\mathcal{D}_\Lambda $, we have
$ (b) $ The remainder $ R_T(\lambda) $ satisfies
Proof By the Girsanov formula,
If $ \lambda(b_1^2-b_0^2)+b_0^2>0 $, we set $ \varphi(\lambda)=-\sqrt{\lambda(b_1^2-b_0^2)+b_0^2} $; then
according to Pitman and Yor [16],
So, for any $ \lambda\in\mathcal{D}_{\Lambda} $,
Finally, setting $ h(\lambda)=\frac{\lambda(b_1-b_0)+b_0}{\varphi(\lambda)} $, we obtain
Moreover, the remainder $ R_T(\lambda) $ satisfies
Let $ \Delta_{\Lambda_T}=\left\{z\in\mathbb{C}: \mathrm{Re}(z)\in\mathcal{D}_{\Lambda_T}\right\} $. Now we prove the following lemma by a method similar to that of Appendix D in Bercu, Coutin, and Savy [13].
Lemma 2.2 For $ T $ large enough and for any $ (a, u)\in\mathbb{R}^2 $ such that $ a+iu\in\Delta_{\Lambda_T} $,
where $ \varphi(a)=-\sqrt{a(b_1^2-b_0^2)+b_0^2} $, $ l(a)=\max\left(1, \frac{\left|\varphi(a)+b_0\right|}{\left|\varphi(a)\right|}\right)\max\left(1, \frac{\left|\varphi(a)+b_1\right|}{\left|\varphi(a)\right|}\right) $.
Proof Step 1: For all $ a\in\mathcal{D}_{\Lambda} $, $ u\in\mathbb{R} $, we deduce from (2.2) that
which clearly implies that
Since
we have
Step 2: For all $ a\in\mathcal{D}_{\Lambda} $, $ u\in\mathbb{R} $, we deduce from (2.3) that
then
Step 3: For all $ (a, u)\in\mathbb{R}^2 $ such that $ a+iu\in\Delta_{\Lambda_T} $, we deduce from (2.4) that
Finally, combining (2.1), (2.6), (2.7), and (2.8) completes the proof of Lemma 2.2.
In this section, we mainly prove the sharp large deviations for the log-likelihood ratio. If $ b_1<b_0 $, for $ c<\frac{\delta(b_1-b_0)^2}{8b_0} $, let
consider the change of probability measure:
and denote by $ E_{Q} $ the expectation under $ Q_{\delta, T} $. We obtain that
then we have
Now we consider the asymptotic expansion of $ A_T $ and $ B_T $.
Lemma 3.1 For all $ c<\frac{\delta(b_1-b_0)^2}{8b_0} $, as $ T $ tends to infinity,
Proof It follows from Lemma 2.1 that
This completes the proof of Lemma 3.1.
Let $ \Phi_T(\cdot) $ be the characteristic function of $ U_{T} $ under $ Q_{\delta, T} $. For all $ u\in\mathbb{R} $, we have
Lemma 3.2 For all $ c<\frac{\delta(b_1-b_0)^2}{8b_0} $,
And for $ T $ large enough, there exist two positive constants $ d $ and $ D $ such that
Proof Applying the Parseval formula, we obtain
let
then $ B_T=C_T+D_T $. Next we prove that $ D_T $ goes exponentially fast to zero.
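The split $ B_T=C_T+D_T $ rests on a Fourier-inversion integral whose tail is negligible because the characteristic function decays rapidly. As a self-contained toy illustration of this mechanism (the Gaussian target and all names below are illustrative, not the paper's objects), the Gil-Pelaez inversion of the standard normal characteristic function $ e^{-u^2/2} $ recovers a Gaussian tail probability, and truncating the integral at a finite $ u_{\max} $ costs only a Gaussian-small error, analogous to why $ D_T $ vanishes exponentially fast.

```python
import numpy as np
from math import erfc, sqrt

def gaussian_tail_by_inversion(a, u_max=40.0, n=400001):
    """P(Z >= a) for Z ~ N(0,1) via Gil-Pelaez inversion of exp(-u^2/2).

    The integral over |u| > u_max is dropped; the Gaussian decay of the
    characteristic function makes this truncation error negligible.
    """
    u = np.linspace(1e-9, u_max, n)
    integrand = np.sin(u * a) * np.exp(-u ** 2 / 2) / u
    du = u[1] - u[0]
    # trapezoidal rule, written out for NumPy-version independence
    integral = (integrand.sum() - 0.5 * (integrand[0] + integrand[-1])) * du
    return 0.5 - integral / np.pi
```

Here the error from dropping $ |u|>40 $ is of order $ e^{-800} $, so the truncated integral already matches the exact tail $ \tfrac{1}{2}\mathrm{erfc}(a/\sqrt{2}) $ to high accuracy.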
We deduce from the Cauchy-Schwarz inequality that
First, by a change of variables,
Second, letting $ \gamma_{T}=\frac{b_1^2-b_0^2}{\left|\sigma_{c}\sqrt{T}\right|\varphi^2(a_c)} $, we deduce from Lemma 2.2 together with (3.2) that, for $ T $ large enough,
Then
setting $ \zeta_T=2{T^\frac{1}{6}}\gamma_{T} $, we have
On the one hand,
where $ C_1 $ and $ C_2 $ are positive constants.
On the other hand, let
we see that $ e_T $ tends to $ -\infty $ as $ T $ tends to infinity, so that $ e_T-1<0 $ for $ T $ large enough. Then, for $ T $ large enough,
which tends to zero.
Thus, we obtain that
Finally, we deduce from (3.3), (3.4) and (3.5) that there exist two positive constants $ d $ and $ D $ such that
Now we establish the Taylor expansion of $ \Phi_T(\cdot) $. First, for any $ k\in\mathbb{N} $, $ R_T^{(k)}(a_c)=\mathcal{O}\left(T^k\exp\bigg\{-\left|\frac{\delta(b_1^2-b_0^2)}{2(4c+\delta(b_1-b_0))}\right|T\bigg\}\right). $ Then we obtain from (2.1) that
where $ \Lambda_k=\Lambda^{(k)}(a_c) $, $ H_k=H^{(k)}(a_c). $
Lemma 3.3 For any $ p>0 $ and any $ c<\frac{\delta(b_1-b_0)^2}{8b_0} $, there exist an integer $ q(p) $ and a polynomial sequence $ (\eta_k) $, independent of $ p $, such that, for $ T $ large enough,
where the remainder $ \mathcal{O} $ is uniform as soon as $ \left|u\right|\le 2T^\frac{1}{6} $. Moreover, the $ \eta_k $ are polynomials in odd powers of $ u $ for $ k $ odd and in even powers of $ u $ for $ k $ even. For example,
Proof We deduce from (3.2) and (3.6) that there exists $ \xi\in\mathbb{R} $ such that, for any $ p>0 $,
One can observe that $ \Lambda^{(1)}(a_c)=c $ and $ \Lambda^{(2)}(a_c)=\sigma_{c}^2 $, thus,
Finally, we obtain (3.7) by taking the exponential of both sides of (3.8) and noting that, in the range $ \left|u\right|\le 2T^\frac{1}{6} $ and for any $ k\ge 3 $, the quantity $ \frac{Tu^{k}}{(\sqrt{T})^k} $ remains bounded in (3.7).
From Lemmas 3.2 and 3.3 together with standard calculus on the $ N(0, 1) $ distribution, we obtain the asymptotic expansion of $ B_T $.
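The "standard calculus on the $ N(0,1) $ distribution" invoked here amounts to the classical Gaussian moment identities, recalled below for the reader's convenience: odd powers of $ u $ integrate to zero against the Gaussian density, which is why the odd-$ k $ polynomials $ \eta_k $ of Lemma 3.3 contribute nothing after integration.

```latex
\int_{\mathbb{R}} u^{2m}\,\frac{e^{-u^{2}/2}}{\sqrt{2\pi}}\,\mathrm{d}u=(2m-1)!!,
\qquad
\int_{\mathbb{R}} u^{2m+1}\,\frac{e^{-u^{2}/2}}{\sqrt{2\pi}}\,\mathrm{d}u=0,
\qquad m\in\mathbb{N}.
```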
Lemma 3.4 For all $ c<\frac{\delta(b_1-b_0)^2}{8b_0} $, there exists a sequence $ (\psi_k) $ such that, for any $ p>0 $ and $ T $ large enough,
Proof of Theorems 1.1 and 1.2 We complete the proof of Theorem 1.1 by combining Lemmas 3.1 and 3.4 with (3.1). The proof of Theorem 1.2 is similar to that of Theorem 1.1.
The Cox-Ingersoll-Ross process is widely used in mathematical finance to model the evolution of short-term interest rates, and it has many appealing features. For testing the Cox-Ingersoll-Ross model in the stationary case, we obtained expansion formulas for the error probabilities of the first and second kind. Since the limiting distribution of the log-likelihood ratio in the non-stationary case differs from that in the stationary case, in future work we will investigate the sharp large deviations for the log-likelihood ratio of the Cox-Ingersoll-Ross process in the non-stationary case.