数学杂志 (Journal of Mathematics), 2025, Vol. 45, Issue 6: 493-501
F-GEOMETRIC ERGODICITY OF CONTINUOUS TIME MARKOV PROCESSES BY THE COUPLING METHOD
ZHU Zhi-feng, ZHOU Jun-chao    
School of Mathematics and Statistics, Hubei Engineering University, Xiaogan 432000, China
Abstract: In this paper, we study the geometric ergodicity of continuous time Markov processes on a general state space. For a geometrically ergodic continuous time Markov process, we add the condition $\pi(f^{p}) < \infty$, $p>1$. Using the coupling method, we obtain the existence of a full absorbing set on which the continuous time Markov process is $f$-geometrically ergodic.
Keywords: Markov process     coupling     f-norm     geometric ergodicity     f-geometric ergodicity    
1 Introduction

Ergodicity is one of the basic topics in the study of Markov processes, and the theory for homogeneous and nonhomogeneous Markov chains is by now well developed. For example, Wang [1] studied the ergodicity of homogeneous Markov processes, and Chen and Fu [2] investigated the ergodicity of nonhomogeneous Markov chains in depth. Ergodicity of Markov processes has important and wide applications, including Markov processes arising from queueing theory and birth-death $ q $-processes.

In order to quantify the rate of convergence, geometric ergodicity (also known as exponential ergodicity) and $ f $-geometric ergodicity (also known as $ f $-exponential ergodicity) were introduced and studied by Nummelin and Tweedie in the sense of the total variation norm on a general state space. Meyn and Tweedie introduced the more general form of $ f $-geometric ergodicity in [3], where they studied the $ f $-geometric ergodicity of discrete-time Markov chains using the coupling method.

Zhu [4] studied the $ f $-geometric ergodicity of Markov chains on a discrete-time general state space. For the geometric ergodicity of continuous time Markov processes, Meyn and Tweedie [5, 6] gave sufficient criteria in terms of drift functions.

The coupling method has been widely used in the theory of stochastic processes and has become an important tool for studying the stability of Markov processes.

Zhang [7, 8, 9] proposed the concept of the $ \varphi $-optimal coupling operator, where $ \varphi $ is a nonnegative lower semicontinuous function, and proved the existence of an optimal measurable coupling of transition probabilities as well as the existence of an optimal coupling operator for jump processes. Both Chen [10] and Zhang [11] used the coupling method to study the ergodicity of homogeneous Markov processes, and Zhu [4, 12] used it to study the ergodicity of Markov processes.

In this paper, we study the geometric ergodicity and the $ f $-geometric ergodicity of continuous time Markov processes by the coupling method. We strengthen the condition $ \pi(f)<\infty $ to $ \pi(f^{p})<\infty $ $ (p>1) $. Using the coupling method, we obtain the $ f $-geometric ergodicity of continuous time Markov processes, which is the main result of this paper.

Let $ \{\Phi_t, t\in R_{+}\} $ be a continuous time Markov process, sometimes abbreviated as $ \Phi_t $. The state space $ X $ is a Polish space (i.e., a complete separable metric space), and $ \mathcal{B}(X) $ is its Borel $ \sigma $-algebra, which is countably generated. The transition probability function of the Markov process is denoted by $ P(t, x, A), t\in R_{+}, A\in \mathcal{B}(X) $, i.e.,

$ P(t, x, A)=P_{x}[\Phi_{t}\in A]=E_{x}[I_{\Phi_{t}\in A}]. $

Sometimes, continuous time Markov processes are also represented by $ P(t) $. The Markov processes mentioned in this paper, if not specified, refer to continuous time Markov processes.

Definition 1.1   Let $ v $ be a signed measure on $ \mathcal{B}(X) $, and let $ g $ and $ f $ be measurable functions on $ X $. Define

$ \|v\|:=\sup\{|v(g)|:|g|\leq 1\} $

and

$ \|v\|_{f}:=\sup\{|v(g)|:|g|\leq f\}, f\geq1. $
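On a finite state space the suprema in Definition 1.1 are attained at $ g=\operatorname{sgn}(v) $ and $ g=f\operatorname{sgn}(v) $ respectively, so both norms reduce to weighted sums of $ |v| $. A minimal numerical sketch in Python (the four-point measures and the weight $ f $ are arbitrary illustrative choices, not from the paper):

```python
import numpy as np

# Signed measure v = mu1 - mu2 on a 4-point state space, and a weight f >= 1.
mu1 = np.array([0.4, 0.3, 0.2, 0.1])
mu2 = np.array([0.1, 0.2, 0.3, 0.4])
v = mu1 - mu2
f = np.array([1.0, 2.0, 3.0, 4.0])

# On a finite space the suprema are attained at g = sign(v) (resp. f*sign(v)):
tv_norm = np.sum(np.abs(v))        # ||v||   = sup{|v(g)| : |g| <= 1}
f_norm  = np.sum(f * np.abs(v))    # ||v||_f = sup{|v(g)| : |g| <= f}
assert f_norm >= tv_norm           # since f >= 1, the f-norm dominates
```

Here $ \|v\| = 0.8 $ and $ \|v\|_{f} = 2.0 $, illustrating that the $ f $-norm weights the discrepancy by $ f $.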

Definition 1.2   The Markov process $ \{\Phi_t, t\in R_{+}\} $ is called ergodic if there is a unique invariant measure $ \pi $ satisfying

$ \lim\limits_{t\rightarrow \infty}\|P(t, x, \cdot)-\pi(\cdot)\|=0, \forall x\in X. $

Definition 1.3   Let $ f $ be a measurable function. The Markov process $ \{\Phi_t, t\in R_{+}\} $ is called $ f $-ergodic if $ f\geq 1 $ and satisfies the following conditions:

(ⅰ) $ \Phi_t $ is positive Harris recurrent and has an invariant measure $ \pi $;

(ⅱ) $ \pi(f)<\infty $;

(ⅲ) for any initial state of $ x $,

$ \lim\limits_{t\rightarrow \infty}\|P(t, x, \cdot)-\pi(\cdot)\|_{f}=0. $

If $ f\equiv 1 $, then the $ f $-norm becomes the total variation norm, and accordingly $ f $-ergodicity becomes ergodicity.

Definition 1.4   The Markov process $ \{\Phi_t, t\in R_{+}\} $ is called geometrically ergodic if there exist $ \rho\in (0, 1) $ and a nonnegative function $ R(x)<\infty $ such that

$ \|P(t, x, \cdot)-\pi(\cdot)\|\leq R(x)\rho^{t}, \quad t\in R_{+}, \; \forall x\in X. $

Definition 1.5   Let $ f $ be a measurable function. The Markov process $ \{\Phi_t, t\in R_{+}\} $ is called $ f $-geometric ergodic if $ f\geq 1 $ and satisfies the following conditions:

(ⅰ) $ \Phi_t $ is positive Harris recurrent and has an invariant measure $ \pi $;

(ⅱ) $ \pi(f)<\infty $;

(ⅲ) there exist $ 0<\rho< 1 $ and a nonnegative function $ R(x)<\infty $ such that

$ \|P(t, x, \cdot)-\pi(\cdot)\|_{f}\leq R(x)\rho^{t}, \quad t\in R_{+}, \; \forall x\in X. $

If $ f\equiv 1 $, then the $ f $-norm becomes the total variation norm, and accordingly $ f $-geometric ergodicity becomes geometric ergodicity.

Lemma 1.1   (see [3]) Let $ \Phi $ be a $ \psi $-irreducible aperiodic Markov chain and $ f\geq 1 $ a function on $ X $. Then the following conditions are equivalent:

(ⅰ) Markov chain $ \Phi $ is positive recurrent and has an invariant measure $ \pi $ satisfying $ \pi(f)<\infty. $

(ⅱ) There is a petite set $ C\in \mathcal{B}(X) $ such that

$ \begin{eqnarray} \begin{aligned} \sup\limits_{x\in C}E_{x}[\sum^{\tau_{C}-1}_{n=0}f(\Phi_{n})]<\infty. \end{aligned} \end{eqnarray} $ (1.1)

(ⅲ) There are a petite set $ C $ and an extended-real-valued nonnegative function $ V $ with $ V(x_{0})<\infty $ for some $ x_{0}\in X $ such that

$ \begin{eqnarray} \begin{aligned} \Delta V(x)\leq -f(x)+bI_{C}(x). \end{aligned} \end{eqnarray} $ (1.2)

Under any one of the above conditions, the set $ S_{V}=\{x:V(x)<\infty\} $ is a full absorbing set, where $ V $ is any solution of $ (1.2) $ satisfying condition (ⅲ), and for any $ x\in S_{V} $,

$ \lim\limits_{n\rightarrow \infty}\|P^{n}(x, \cdot)-\pi(\cdot)\|_{f}=0. $

The following is the main result of this paper; it is the continuous-time analogue of Theorem $ 14.0.1 $ in Reference [3].

Theorem 1.1   Let $ P(t) $ be a geometric ergodic Markov process satisfying the following conditions:

(ⅰ) $ \pi $ is the unique stationary distribution of $ P(t) $, and there exist a measurable function $ f\geq1 $ and $ p>1 $ such that $ \pi(f^{p})<\infty $;

(ⅱ)

$ \lim\limits_{t\rightarrow t_{0}} P(t, x, f^{p})=P(t_{0}, x, f^{p}), $

where $ P(t_{0}, x, f^{p}) $ is allowed to be infinite. Then there exists a full absorbing set $ M^{c}(\subset X) $ such that $ P(t) $ is $ f $-geometrically ergodic on $ M^{c} $.

We use $ _{r}\varepsilon_{+} $ and $ \mathcal{L}_{+} $ to denote the set of nonnegative real-valued $ \mathcal{B}(X) $-measurable functions and the set of finite measures on $ \mathcal{B}(X) $, respectively.

Definition 1.6 (see [10]) $ \{P{(t, x, A)}:t\in R_{+}, x\in X, A\in\mathcal{B}(X)\} $ is called the transition probability function of jump process if the following conditions hold:

(ⅰ) $ \forall\; t \in R_{+}, A\in\mathcal{B}(X), P(t, \cdot, A)\in _{r}\varepsilon_{+} $;

(ⅱ) $ \forall\; t \in R_{+}, x\in X, P(t, x, \cdot)\in \mathcal{L}_{+} $, and $ P(t, x, X)\leq 1 $;

(ⅲ) (C-K equation) $ \forall\; t, s\in R_{+}, x\in X, A\in\mathcal{B}(X) $,

$ \begin{eqnarray*} P(t+s, x, A)=\int\nolimits_{X}P(t, x, dy)P(s, y, A); \end{eqnarray*} $

(ⅳ) (Continuity condition) $ \forall \; x\in X, A\in\mathcal{B}(X), $

$ \begin{eqnarray*} \lim\limits_{t\rightarrow 0}P(t, x, A)=P(0, x, A)=\delta (x, A). \end{eqnarray*} $

In this situation, we directly call $ P(t) $ a jump process.
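For illustration, the simplest jump process is a two-state chain with jump rates $ a $ (from state 1 to 2) and $ b $ (from 2 to 1); its transition matrix has the closed form $ P(t)=\Pi+e^{-(a+b)t}(I-\Pi) $, where both rows of $ \Pi $ equal the stationary distribution $ \pi=(\frac{b}{a+b}, \frac{a}{a+b}) $. A minimal Python sketch (the rates are arbitrary choices, not from the paper) checking the C-K equation, the continuity condition, and the geometric rate in Definition 1.4:

```python
import numpy as np

a, b = 2.0, 1.0                       # jump rates 1 -> 2 and 2 -> 1 (assumed)
pi = np.array([b, a]) / (a + b)       # stationary distribution
Pi = np.tile(pi, (2, 1))              # matrix with both rows equal to pi

def P(t):
    """Transition matrix of the two-state jump process at time t."""
    return Pi + np.exp(-(a + b) * t) * (np.eye(2) - Pi)

# C-K equation (iii) and continuity condition (iv):
assert np.allclose(P(0.7 + 0.4), P(0.7) @ P(0.4))
assert np.allclose(P(1e-9), np.eye(2), atol=1e-8)

# Geometric ergodicity: ||P(t, x, .) - pi|| <= R(x) rho^t with rho = e^{-(a+b)}.
tv = np.abs(P(3.0) - pi).sum(axis=1)
assert np.all(tv <= 2 * np.exp(-(a + b) * 3.0))
```

Here $ \rho=e^{-(a+b)} $ and $ R(x)\leq 2 $, so this chain is geometrically ergodic in the sense of Definition 1.4.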

If $ P(t) $ is a jump process, then condition (ⅱ) in Theorem $ 1.1 $ is automatically satisfied. Therefore, the following Corollary 1.1 is obtained.

Corollary 1.1   Let $ P(t) $ be a jump process and $ \pi $ the unique stationary distribution of $ P(t) $. Suppose there exist a measurable function $ f\geq1 $ and $ p>1 $ such that $ \pi(f^{p})<\infty $. Then there exists a full absorbing set $ M^{c}(\subset X) $ such that $ P(t) $ is $ f $-geometrically ergodic on $ M^{c} $.

If $ P(t) $ is a Feller process, then $ P(t_{0}, x, f^{p}) $ may be infinite; this special case is covered by Theorem 1.1.

2 Lemmas and Their Proofs

Definition 2.1  (see [13, 14]) Let $ \mu_1, \mu_2 $ be probability measures on $ \mathscr{B}(X) $ and $ \widetilde{\mu} $ be a probability measure on $ \mathscr{B}(X)\times \mathscr{B}(X) $. Then $ \widetilde{\mu} $ is called the coupling of $ \mu_1 $ and $ \mu_2 $ if it satisfies the following marginality:

(1) $ \widetilde{\mu}(A_1\times X)=\mu_1(A_1), A_1\in \mathscr{B}(X) $;

(2) $ \widetilde{\mu}(X\times A_2)=\mu_2(A_2), A_2\in \mathscr{B}(X). $

Lemma 2.1   (see [15]) Let $ \mu_1, \mu_2 $ be probability measures on $ \mathscr{B}(X) $, and $ \mu'=\mu_1+\mu_2 $. Denote

$ g_1=\frac{d\mu_1}{d\mu'}, g_2=\frac{d\mu_2}{d\mu'}, $
$ g=\min{\{g_1, g_2\}}, \gamma=\displaystyle{\int} gd\mu', $
$ v_1(A)={\displaystyle{\int\nolimits}}_A(g_1-g)d\mu', A\in\mathscr{B}(X), $
$ v_2(A)={\displaystyle{\int\nolimits}}_A(g_2-g)d\mu', A\in\mathscr{B}(X), $

and

$ Q(B)={\displaystyle{\int\nolimits}}_{B\cap{\{(x, y):x=y\}}}g(x)\mu'(dx), B\in\mathscr{B}(X)\times\mathscr{B}(X). $

Then,

(1) $ 0\leq\gamma\leq1; $

(2) $ v_1, v_2 $ are two finite measures on $ \mathscr{B}(X) $, each with total mass $ 1-\gamma $;

(3) $ Q $ is a probability measure on $ \mathscr{B}(X)\times\mathscr{B}(X) $.

Furthermore, let

$ \begin{eqnarray*} \overline\mu=\left\{ \begin{array}{ll} Q, \qquad &\gamma=1;\\ \frac{v_1\times v_2}{1-\gamma}+Q, \qquad &\gamma\neq1. \end{array} \right. \end{eqnarray*} $

Then $ \overline{\mu} $ is the coupling of $ \mu_1 $ and $ \mu_2 $. We call $ \overline{\mu} $ the basic coupling of $ \mu_1 $ and $ \mu_2 $.
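On a finite state space the basic coupling of Lemma 2.1 can be written down explicitly: the common mass $ \min\{\mu_1, \mu_2\} $ sits on the diagonal (the measure $ Q $), and the leftovers $ v_1, v_2 $ are coupled as a normalized product. A Python sketch with arbitrary four-point measures (illustrative choices, not from the paper), verifying the marginality of Definition 2.1:

```python
import numpy as np

mu1 = np.array([0.4, 0.3, 0.2, 0.1])
mu2 = np.array([0.1, 0.2, 0.3, 0.4])

m = np.minimum(mu1, mu2)      # common part of the two measures (g d(mu'))
gamma = m.sum()               # total mass of the common part
v1, v2 = mu1 - m, mu2 - m     # leftovers, each of mass 1 - gamma

# Basic coupling: diagonal part Q plus normalized product of the leftovers.
coupling = np.diag(m)
if gamma < 1.0:
    coupling = coupling + np.outer(v1, v2) / (1.0 - gamma)

# Marginality (Definition 2.1):
assert np.allclose(coupling.sum(axis=1), mu1)
assert np.allclose(coupling.sum(axis=0), mu2)
```

This coupling puts as much mass as possible on the diagonal $ \{x=y\} $, which is what makes it useful in Lemma 2.2.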

Lemma 2.2   Let $ \mu_1(t, x, dx), \mu_2(t, y, dy) $ be probability measures on $ \mathscr{B}(X) $, and equip $ X $ with the discrete metric, i.e.,

$ \begin{eqnarray*} d(x, y):=\left\{ \begin{array}{ll} 1, \qquad &x\neq y, \\ 0, \qquad &x=y, \end{array} \right. \end{eqnarray*} $

Let $ \overline{\mu}(t;x, y;du, dv) $ be the basic coupling of $ \mu_1(t, x, du) $ and $ \mu_2(t, y, dv) $. Then we have

$ \begin{equation} \|\mu_1(t, x, du)-\mu_2(t, y, dv)\|=2{\displaystyle{\int}} d(u, v)\overline{\mu}(t;x, y;du, dv). \end{equation} $ (2.1)

Proof   The proof is given in Ref. [15].

Lemma 2.3  

$ \begin{equation} \|\mu_1(t, x, du)-\mu_2(t, y, dv)\|_{f}\leq{\displaystyle{\int}} d(u, v)[f(u)+f(v)]\mu(t;x, y;du, dv), \end{equation} $ (2.2)

where $ \mu(t;x, y;du, dv) $ is the coupling of $ \mu_1(t, x, du) $ and $ \mu_2(t, y, dv) $.

Proof   For any measurable $ g $ with $ |g|\leq f $, we have $ g(u)-g(v)\leq d(u, v)[f(u)+f(v)]=:\varphi(u, v) $. Then

$ \begin{eqnarray*} \int gd\mu_1-\int gd\mu_2 &=&\int\mu(t;x, y;du, dv)[g(u)-g(v)]\\ &\leq&\int\mu(t;x, y;du, dv)\varphi(u, v). \end{eqnarray*} $

Similarly,

$ {\displaystyle{\int}} gd\mu_2-{\displaystyle{\int}}gd\mu_1\leq{\displaystyle{\int}}\mu(t;x, y;du, dv)\varphi(u, v). $

Hence,

$ |{\displaystyle{\int}} gd\mu_1-{\displaystyle{\int}} gd\mu_2|\leq{\displaystyle{\int}}\mu(t;x, y;du, dv)\varphi(u, v). $

Taking the supremum over all $ g $ with $ |g|\leq f $ in the above inequality, we get

$ \|\mu_1-\mu_2\|_{f}\leq{\displaystyle{\int}}\mu(t;x, y;du, dv)\varphi(u, v). $

Therefore $ (2.2) $ holds.
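As a sanity check on $ (2.2) $, a small Python sketch on a finite space (the four-point measures and weight $ f $ are arbitrary illustrations, not from the paper). Since $ (2.2) $ holds for any coupling, the independent coupling is used for simplicity:

```python
import numpy as np

mu1 = np.array([0.4, 0.3, 0.2, 0.1])
mu2 = np.array([0.1, 0.2, 0.3, 0.4])
f = np.array([1.0, 2.0, 3.0, 4.0])

# Any coupling works in (2.2); take the independent one for simplicity.
coupling = np.outer(mu1, mu2)
d = 1.0 - np.eye(4)                       # discrete metric d(u, v)
phi = f[:, None] + f[None, :]             # f(u) + f(v)

lhs = np.sum(f * np.abs(mu1 - mu2))       # ||mu1 - mu2||_f on a finite space
rhs = np.sum(d * phi * coupling)          # the integral on the right of (2.2)
assert lhs <= rhs
```

A better coupling (e.g., the basic coupling of Lemma 2.1) puts more mass on the diagonal, where $ d(u, v)=0 $, and hence gives a smaller right-hand side.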

Definition 2.2   Let $ \{P(t), t\in R_{+}\} $ be a $ \Psi $-irreducible continuous time Markov process. A set $ A\in \mathscr{B}(X) $ is called full if $ \Psi(A^{c})=0 $, and absorbing if $ P(t, x, A)=1 $ for all $ x\in A $ and $ t\in R_{+} $.

Obviously, if $ A $ and $ B $ are both full sets of the $ \Psi $-irreducible continuous time Markov process $ \{P(t), t\in R_{+}\} $, then $ A\cap B $ is also a full set. It follows from Theorem $ 2.2.6 $ in Reference [8] that, for a $ \Psi $-irreducible continuous time Markov process $ \{P(t), t\in R_{+}\} $ with a unique stationary distribution $ \pi $, the condition $ \Psi(A^{c})=0 $ is equivalent to $ \pi(A^{c})=0 $.

Lemma 2.4 (see [16]) Suppose that $ \{P(t), t\in R_{+}\} $ is $ \Psi $-irreducible. Then

(ⅰ) every absorbing set is full,

(ⅱ) every full set contains a non-empty, absorbing set.

Proof   The proof is shown in Proposition $ 2.2.8 $ of Ref.[16].

Lemma 2.5  Let $ P $ be a transition probability and $ V:X\rightarrow [0, \infty] $ an extended-real-valued function satisfying one of the conditions of Lemma $ 1.1 $. If $ PV(x)<\infty $, then $ S_{V}=\{x:V(x)<\infty\} $ is a full set.

Proof   From

$ PV(x)-V(x)=\Delta V(x)\leq -f(x)+bI_{C}(x), $

we obtain

$ PV(x)\leq V(x)-f(x)+bI_{C}(x)< V(x)+b. $

Since $ PV(x)<\infty $, we get

$ \begin{eqnarray*} \begin{aligned} &\; \; PV(x)=\int P(x, dy)V(y)=\int\nolimits _{S_{V}} P(x, dy)V(y)+\int\nolimits _{S_{V}^{c}} P(x, dy)V(y). \end{aligned} \end{eqnarray*} $

Suppose $ P(x, S_{V}^{c})> 0; $ then $ \int\nolimits _{S_{V}^{c}} P(x, dy)V(y)= \infty, $ and thereby $ PV(x)= \infty, $ which contradicts the assumption. Hence $ P(x, S_{V}^{c})= 0 $ for every $ x\in S_{V} $, so $ S_{V} $ is absorbing; by Lemma 2.4 it is also full. Therefore $ S_{V}=\{x:V(x)<\infty\} $ is a full absorbing set.

3 Proof of the Main Result

From Lemma $ 2.2 $ we have that

$ \|P(t, x, du)-\pi(dv)\|=2{\displaystyle{\int}}d(u, v)\overline{P}(t;x, y;du, dv), $

where $ \overline{P}(t;x, y;du, dv) $ is the basic coupling of $ P(t, x, du) $ and $ \pi(dv) $.

Since $ P(t) $ is a geometrically ergodic Markov process, there exist $ 0<\rho< 1 $ and a nonnegative function $ R(x)<\infty $ such that for any $ x\in X $,

$ \begin{equation} \|P(t, x, \cdot)-\pi(\cdot)\|\leq R(x)\rho^{t}, t\in R_{+}. \end{equation} $ (3.1)

By Lemma $ 2.3 $, we have

$ \begin{equation} \|P(t, x, du)-\pi(dv)\|_{f}\leq{\displaystyle{\int}} d(u, v)[f(u)+f(v)]\overline{P}(t;x, y;du, dv), \end{equation} $ (3.2)

where $ \overline{P}(t;x, y;du, dv) $ is the basic coupling of $ P(t, x, du) $ and $ \pi(dv) $.

By the Hölder inequality, for $ p, q>1 $ with $ \frac{1}{p}+\frac{1}{q}=1 $, we have

$ \begin{eqnarray*} \begin{aligned} &\; \; \; \; \int d(u, v)[f(u)+f(v)]\overline{P}(t;x, y;du, dv)\\ &\leq(\int [d(u, v)]^{q}\overline{P}(t;x, y;du, dv))^{\frac{1}{q}}(\int [f(u)+f(v)]^{p}\overline{P}(t;x, y;du, dv))^{\frac{1}{p}}\\ &=(\int d(u, v)\overline{P}(t;x, y;du, dv))^{\frac{1}{q}}(\int [f(u)+f(v)]^{p}\overline{P}(t;x, y;du, dv))^{\frac{1}{p}}\\ &=(\frac{1}{2}\|P(t, x, du)-\pi(dv)\|)^{\frac{1}{q}}(\int [f(u)+f(v)]^{p}\overline{P}(t;x, y;du, dv))^{\frac{1}{p}}, \end{aligned} \end{eqnarray*} $

where the first equality holds because $ [d(u, v)]^{q}=d(u, v) $, since $ d $ takes only the values $ 0 $ and $ 1 $.

Then

$ \begin{equation} \|P(t, x, du)-\pi(dv)\|_{f}\leq(\frac{1}{2}\|P(t, x, du)-\pi(dv)\|)^{\frac{1}{q}}({\displaystyle{\int}} [f(u)+f(v)]^{p}\overline{P}(t;x, y;du, dv))^{\frac{1}{p}}. \end{equation} $ (3.3)

From the Minkowski inequality we have that

$ \begin{eqnarray*} \begin{aligned} &\; \; \; \; (\int [f(u)+f(v)]^{p}\overline{P}(t;x, y;du, dv))^{\frac{1}{p}}\\ &\leq[\int f^{p}(u)\overline{P}(t;x, y;du, dv)]^{\frac{1}{p}}+[\int f^{p}(v)\overline{P}(t;x, y;du, dv)]^{\frac{1}{p}}\\ &=[\int f^{p}(u)P(t, x, du)]^{\frac{1}{p}}+[\int f^{p}(v)\pi(dv)]^{\frac{1}{p}}, \end{aligned} \end{eqnarray*} $

i.e.,

$ \begin{equation} ({\displaystyle{\int}} [f(u)+f(v)]^{p}\overline{P}(t;x, y;du, dv))^{\frac{1}{p}}\leq[{\displaystyle{\int}} f^{p}(u)P(t, x, du)]^{\frac{1}{p}}+[{\displaystyle{\int}} f^{p}(v)\pi(dv)]^{\frac{1}{p}}. \end{equation} $ (3.4)
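The chain of estimates $ (3.2) $–$ (3.4) $ can be checked numerically on a finite space. A Python sketch (with an arbitrary four-point measure standing in for $ P(t, x, \cdot) $, another for $ \pi $, and $ p=q=2 $; a sketch only, not part of the proof):

```python
import numpy as np

mu1 = np.array([0.4, 0.3, 0.2, 0.1])   # plays the role of P(t, x, .)
mu2 = np.array([0.1, 0.2, 0.3, 0.4])   # plays the role of pi
f = np.array([1.0, 2.0, 3.0, 4.0])
p = q = 2.0                            # 1/p + 1/q = 1

# Basic coupling of mu1 and mu2 (Lemma 2.1).
m = np.minimum(mu1, mu2); gamma = m.sum()
coupling = np.diag(m) + np.outer(mu1 - m, mu2 - m) / (1.0 - gamma)

d = 1.0 - np.eye(4)                    # discrete metric
phi = f[:, None] + f[None, :]          # f(u) + f(v)

f_norm = np.sum(f * np.abs(mu1 - mu2))
tv = np.sum(np.abs(mu1 - mu2))
holder = (0.5 * tv) ** (1/q) * np.sum(phi**p * coupling) ** (1/p)   # (3.3)
minkowski = (0.5 * tv) ** (1/q) * (np.sum(f**p * mu1) ** (1/p)
                                   + np.sum(f**p * mu2) ** (1/p))   # via (3.4)
assert f_norm <= holder <= minkowski
```

The final factor $ (\frac{1}{2}\|\cdot\|)^{1/q} $ is what converts the geometric decay of the total variation norm in $ (3.1) $ into geometric decay of the $ f $-norm.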

Since there exists $ p>1 $ such that $ \pi(f^{p})<\infty $, we have $ [\pi(f^{p})]^{\frac{1}{p}}<\infty, $ i.e.,

$ \begin{equation} [{\displaystyle{\int}} f^{p}(v)\pi(dv)]^{\frac{1}{p}}<\infty. \end{equation} $ (3.5)

Since $ \pi P(t)=\pi $ for each $ t\in R_{+} $, we have

$ \begin{equation*} \pi(P(t)f^{p})=\pi P(t)(f^{p})=\pi(f^{p})<\infty, \end{equation*} $

i.e.,

$ {\displaystyle{\int}} \pi(dx){\displaystyle{\int}} P(t, x, du)f^{p}(u)<\infty. $

Set

$ \begin{equation*} N_{t}=\{x\in X:P(t, x, f^{p})=\infty\}, \end{equation*} $

then $ \pi(N_{t})=0. $ Denote $ M=\bigcup\limits_{t\geq 0}N_{t} $. From

$ \lim\limits_{t\rightarrow t_{0}} P(t, x, f^{p})=P(t_{0}, x, f^{p}), $

we get

$ M=\bigcup\limits_{t\geq 0}N_{t}=\bigcup\limits_{t\geq 0, t\in Q}N_{t}, $

where $ Q $ denotes the set of rational numbers. So $ \pi(M)=0 $ and $ \pi(M^{c})=1 $; equivalently, $ M^{c} $ is a full set. Therefore, for every $ x\in M^{c} $,

$ \begin{equation} P(t, x, f^{p})<\infty, t\geq 0. \end{equation} $ (3.6)

This means that

$ \begin{equation} [{\displaystyle{\int}} f^{p}(u)P(t, x, du)]^{\frac{1}{p}}<\infty. \end{equation} $ (3.7)

Because $ P(t) $ is a geometrically ergodic Markov process, $ P(t) $ is ergodic, i.e.,

$ \forall x\in X, \|P(t, x, \cdot)-\pi(\cdot)\|\rightarrow0, t\rightarrow \infty. $

This together with $ \pi(f^{p})<\infty, (p>1) $ gives

$ \begin{equation} \lim\limits_{t\rightarrow \infty} P(t, x, f^{p})=\pi(f^{p})<\infty. \end{equation} $ (3.8)

From $ (3.8) $, for any $ \varepsilon>0 $ there exists $ T>0 $ such that for all $ t>T $,

$ \begin{equation*} P(t, x, f^{p})<\pi(f^{p})+\varepsilon. \end{equation*} $

From $ (3.6) $ and the continuity of $ t\mapsto P(t, x, f^{p}) $, the function $ P(t, x, f^{p}) $ is finite and bounded on $ [0, T] $, and hence has the finite supremum

$ \begin{equation*} \sup\limits_{0\leq t\leq T}P(t, x, f^{p}). \end{equation*} $

Set

$ M(x)=\max\{\sup\limits_{0\leq t\leq T}P(t, x, f^{p}), \pi(f^{p})+\varepsilon\}. $

By $ (3.6) $ and $ (3.8) $ we can get

$ \begin{equation} \forall x\in M^{c}, M(x)<\infty . \end{equation} $ (3.9)

Set

$ \begin{equation} R'(x)=(\frac{1}{2}R(x))^{\frac{1}{q}}([M(x)]^{\frac{1}{p}}+[\pi(f^{p})]^{\frac{1}{p}}), \qquad r=\rho^{\frac{1}{q}}. \end{equation} $ (3.10)

Obviously, $ 0<r< 1 $. By $ (3.5) $ and $ (3.9) $ we have $ R'(x)<\infty $. It follows from $ (3.1) $–$ (3.4) $ and $ (3.10) $ that

$ \forall x\in M^{c}, \|P(t, x, \cdot)-\pi(\cdot)\|_{f}\leq R^{'}(x)r^{t}. $

This completes the proof.

References
[1]
Wang Zikun. Birth and death process and Markov chain[M]. Beijing: Science Press, 1980.
[2]
Chen Yongyi, Fu Ruhui. Some results on ergodicity of Nonhomogeneous Markov chains[J]. Systems Science Press, 1994, 16(4): 311-317.
[3]
Meyn S P, Tweedie R L. Markov chains and stochastic stability[M]. London: Springer-Verlag, 1992.
[4]
Zhu Zhifeng, Zhang Shaoyi. Study on f-exponential ergodicity of Markov chains by coupling method[J]. Acta Mathematica Sinica, 2019(3): 287-292.
[5]
Cinlar E. Introduction to stochastic processes[M]. Chicago: Courier Corporation, 2013.
[6]
Chung K L. Markov chains with stationary transition probabilities (2nd ed.)[M]. Berlin: Springer-Verlag, 1967.
[7]
Zhang Shaoyi, Xu Kan. The existence of optimal measurable coupling of transition probability[J]. Acta Mathematica Sinica, 1997, 1(40): 5-13.
[8]
Zhang Shaoyi. The measurable coupling and probability distance of transition probability[J]. Chinese Annals of Mathematics, 1995, 6(16): 769-775.
[9]
Zhang Shaoyi. The existence of optimal coupling operator for jump processes[J]. Acta Mathematica Sinica, 1998, 41(2): 393-398.
[10]
Chen M F. From Markov chains to non-equilibrium particle systems (2nd ed.)[M]. Singapore: World Scientific, 2004.
[11]
Zhang Shaoyi. Existence of optimal measurable coupling and ergodicity of Markov processes[J]. Chinese Science (Series A), 1998, 28(11): 999-1008.
[12]
Zhu Zhifeng, Zhang Shaoyi. Study on the convergence of nonhomogeneous Markov Chains with probability distance[J]. Acta Mathematica Scientia, 2018, 38A(5): 963-969.
[13]
Andradottir S. A method for discrete stochastic optimization[J]. Management Science, 1995, 12(41): 1946-1961.
[14]
Gutjahr W J, Pflug G C. Simulated annealing for noisy cost functions[J]. Journal of Global Optimization, 1996, 8: 1-13.
[15]
Lindvall T. Lectures on the coupling method[M]. New York: Wiley, 1992.
[16]
Zhang Shuili. Stochastic stability of jump processes in general state space[D]. Wuhan: Hubei University, 2014.