数学杂志 (Journal of Mathematics), 2019, Vol. 39, Issue (6): 811-822
A MODIFIED ALTERNATELY LINEARIZED IMPLICIT ITERATION METHOD FOR M-MATRIX ALGEBRAIC RICCATI EQUATION
GUAN Jin-rui1, ZHOU Fang1, ZUBAIR Ahmed2    
1. Department of Mathematics, Taiyuan Normal University, Jinzhong 030619, China;
2. Institute of Mathematics and Computer Science, University of Sindh, Jamshoro, Pakistan
Abstract: In this paper, we study the numerical solution of the M-matrix algebraic Riccati equation (MARE). Based on the alternately linearized implicit (ALI) iteration method, we propose a modified alternately linearized implicit iteration method (MALI) for computing the minimal nonnegative solution of the MARE. Convergence of the MALI iteration method is proved under suitable conditions. The convergence rate with optimal parameters is given for the MARE associated with a nonsingular M-matrix or an irreducible singular M-matrix. Numerical experiments are given to show that the MALI iteration method is feasible in some cases.
Keywords: algebraic Riccati equations; minimal nonnegative solution; M-matrix; ALI iteration method
1 Introduction

We study M-matrix algebraic Riccati equation (MARE) of the form

$ \begin{equation} XCX-XD-AX+B = 0, \end{equation} $ (1.1)

where $ A $, $ B $, $ C $, $ D $ are real matrices of sizes $ m\times m $, $ m\times n $, $ n\times m $, $ n\times n $, respectively, and

$ \begin{equation} K = \left( \begin{array}{cc} D & -C\\ -B & A \end{array} \right) \end{equation} $ (1.2)

is an M-matrix. The M-matrix algebraic Riccati equation arises in many branches of applied mathematics, such as transport theory, Wiener-Hopf factorization of Markov chains, stochastic processes, and so on [1-5]. Research on the theory and efficient numerical methods for the MARE has become a hot topic in recent years [6-19].

The following notation and definitions will be used later.

For any matrices $ A = (a_{ij}), B = (b_{ij})\in \mathbb{R}^{m\times n} $, we write $ A\geq B $ ($ A> B $) if $ a_{ij}\geq b_{ij} $ ($ a_{ij}>b_{ij} $) for all $ i, j $. $ A $ is called a Z-matrix if $ a_{ij}\leq0 $ for all $ i\neq j $. A Z-matrix $ A $ is called an M-matrix if there exists a nonnegative matrix $ B $ such that $ A = sI-B $ and $ s\geq\rho(B), $ where $ \rho(B) $ is the spectral radius of $ B $. In particular, $ A $ is called a nonsingular M-matrix if $ s> \rho(B) $ and a singular M-matrix if $ s = \rho(B) $.

We first review some basic results of M-matrix. The following lemmas can be found in [20, Chapter 6].

Lemma 1.1 (see [20])   Let $ A $ be a Z-matrix. Then the following statements are equivalent:

(1) $ A $ is a nonsingular M-matrix;

(2) $ A^{-1}\geq0 $;

(3) $ Av > 0 $ for some vector $ v>0 $;

(4) all eigenvalues of $ A $ have positive real part.
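
As a small numerical illustration of these equivalences (our own example, not from the paper), the following MATLAB snippet checks conditions (2) and (4) for a simple Z-matrix:

% Illustration of Lemma 1.1 (hypothetical example).
A = [ 2 -1  0;
     -1  2 -1;
      0 -1  2];                       % a Z-matrix: off-diagonal entries <= 0
Ainv = inv(A);
cond2 = all(Ainv(:) >= 0);            % condition (2): A^{-1} >= 0
cond4 = all(real(eig(A)) > 0);        % condition (4): eigenvalues in right half-plane
fprintf('A^{-1} >= 0: %d, Re(eig) > 0: %d\n', cond2, cond4);

Both checks return 1, confirming that this $ A $ is a nonsingular M-matrix.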

Lemma 1.2 (see [20])   Let $ A $, $ B $ be Z-matrices. If $ A $ is a nonsingular M-matrix and $ A\leq B $, then $ B $ is also a nonsingular M-matrix. In particular, for any nonnegative real number $ \alpha $, $ B = \alpha I+A $ is a nonsingular M-matrix.

Lemma 1.3 (see [20])   Let $ A $ be an M-matrix and let $ B\ge A $ be a Z-matrix. If $ A $ is nonsingular, or if $ A $ is singular and irreducible with $ A\not = B $, then $ B $ is a nonsingular M-matrix.

Lemma 1.4 (see [20])   Let $ A $ be a nonsingular M-matrix or an irreducible singular M-matrix, partitioned as

$ A = \left(\begin{array}{cc} A_{11} & A_{12}\\ A_{21} & A_{22} \end{array}\right), $

where $ A_{11} $ and $ A_{22} $ are square. Then $ A_{11} $ and $ A_{22} $ are nonsingular M-matrices.

Lemma 1.5 (see [20])   Let $ A $ and $ B $ be nonsingular M-matrices with $ A\leq B $. Then $ A^{-1}\geq B^{-1} $.

Lemma 1.6 (see [13])   Let $ A $ be an M-matrix and let $ \lambda_{\min}(A) $ be the eigenvalue of $ A $ with smallest absolute value. Then $ \lambda_{\min}(A) $ is nonnegative and satisfies

$ \lambda_{\min}(A)\leq \max\limits_{1\leq i \leq m}\{a_{ii}\}. $

For the minimal nonnegative solution of the MARE, we have the following important result.

Lemma 1.7 (see [3, 5, 6])   If $ K $ in (1.2) is a nonsingular M-matrix or an irreducible singular M-matrix, then (1.1) has a minimal nonnegative solution $ S $. If $ K $ is a nonsingular M-matrix, then $ A-SC $ and $ D-CS $ are also nonsingular M-matrices. If $ K $ is an irreducible M-matrix, then $ S>0 $, and $ A-SC $ and $ D-CS $ are also irreducible M-matrices.

Lemma 1.8 (see [3, 5])   If $ K $ in (1.2) is an irreducible singular M-matrix, then there exist unique, up to a multiplicative constant, $ u>0 $ and $ v>0 $ such that $ u^TK = 0 $, $ Kv = 0 $ and $ u^Tv = 1 $.

Partition the vectors $ u $ and $ v $ in the above lemma according to the block structure of the matrix $ K $ as $ u^T = [u_1^T, u_2^T] $, $ v^T = [v_1^T, v_2^T] $. In terms of $ u $ and $ v $, the drift $ \mu $ is defined as the real number

$ \begin{equation} \mu = u_2^Tv_2-u_1^Tv_1. \end{equation} $ (1.3)

The term "drift" originates in the context of Markov chains and describes the different physical behaviors of the chain in the cases where $ \mu $ is positive, null, or negative. In this context the terms positive recurrent, null recurrent, and transient are used to denote the cases $ \mu<0 $, $ \mu = 0 $, and $ \mu>0 $, respectively.
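
As a numerical illustration, the drift can be computed from the null vectors of $ K $. The following MATLAB sketch (our own, assuming $ K $ and the block size $ n $ are given) uses the singular value decomposition to obtain $ u $ and $ v $:

% Compute the drift mu in (1.3) for an irreducible singular M-matrix K (sketch).
[U, ~, V] = svd(full(K));
v = abs(V(:, end));              % right null vector: K*v = 0, v > 0
u = abs(U(:, end));              % left null vector: u'*K = 0, u > 0
u = u / (u' * v);                % normalize so that u'*v = 1
v1 = v(1:n); v2 = v(n+1:end);    % partition conformally with the blocks of K
u1 = u(1:n); u2 = u(n+1:end);
mu = u2' * v2 - u1' * v1;        % the drift (1.3)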

In terms of the definition, we have the following result.

Lemma 1.9 (see [3])   Let $ K $ defined in (1.2) be an irreducible singular M-matrix and let $ S $ be the minimal nonnegative solution of the MARE (1.1). Then

(ⅰ) when $ \mu<0 $, $ D-CS $ is singular and $ A-SC $ is nonsingular;

(ⅱ) when $ \mu>0 $, $ D-CS $ is nonsingular and $ A-SC $ is singular;

(ⅲ) when $ \mu = 0 $, both $ D-CS $ and $ A-SC $ are singular.

Efficient numerical methods for the MARE include fixed-point iterative methods, the Newton method, the structure-preserving doubling algorithm (SDA), and so on. For details see [3, 5, 8, 9, 11, 13].

Recently, Bai et al. [8] proposed an alternately linearized implicit iteration method (ALI) for the MARE, which is simpler and more efficient than the fixed-point iterative methods and the Newton method, since at each iteration only two linear matrix equations need to be solved.

However, there is still room to improve the ALI iteration method. In this paper, to further improve its effectiveness, we propose a modified alternately linearized implicit iteration method (MALI), which has two parameters and includes the ALI iteration method as a special case.

The rest of this paper is organized as follows. In the next section, we propose the modified alternately linearized implicit iteration method and give its convergence analysis. In Section 3, we analyze the convergence rate and the optimal parameters of the MALI iteration method. In Section 4, we use some numerical examples to show the feasibility of the MALI iteration method. Conclusions are given in the last section.

2 The MALI Iteration Method

In the following, we propose a modified alternately linearized implicit iteration method.

The MALI iteration method   Given $ X_0 = 0 $ and parameters $ \alpha, \beta>0 $, for $ k = 0, 1, 2, \ldots $, compute

$ \begin{equation} \left\{ \begin{array}{l} X_{k+1/2}(\alpha I+(D-CX_k)) = (\alpha I-A)X_k+B, \\ (\beta I+(A-X_{k+1/2}C))X_{k+1} = X_{k+1/2}(\beta I-D)+B. \end{array} \right. \end{equation} $ (2.1)

Compared with the ALI iteration method, the MALI iteration method has two parameters $ \alpha $ and $ \beta $, and it reduces to the ALI iteration method when $ \alpha = \beta $. Hence the ALI iteration method is a special case of the MALI iteration method.
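
For concreteness, a minimal MATLAB sketch of one possible implementation is given below (the function name mali and the stopping rule are our own choices; dense matrices are assumed, and the residual RES defined in Section 4 is used as the stopping criterion):

function X = mali(A, B, C, D, alpha, beta, tol, maxit)
% Minimal sketch of the MALI iteration (2.1) for XCX - XD - AX + B = 0.
[m, n] = size(B);
Im = eye(m); In = eye(n);
X = zeros(m, n);                                   % X_0 = 0
for k = 1:maxit
    % half-step: X_{k+1/2}*(alpha*I + (D - C*X_k)) = (alpha*I - A)*X_k + B
    Xh = ((alpha*Im - A)*X + B) / (alpha*In + D - C*X);
    % full step: (beta*I + (A - X_{k+1/2}*C))*X_{k+1} = X_{k+1/2}*(beta*I - D) + B
    X = (beta*Im + A - Xh*C) \ (Xh*(beta*In - D) + B);
    res = norm(X*C*X - X*D - A*X + B, inf) / ...
          (norm(X*C*X, inf) + norm(X*D, inf) + norm(A*X, inf) + norm(B, inf));
    if res < tol, return; end                      % converged
end

Each sweep solves one linear system with an $ n\times n $ coefficient matrix and one with an $ m\times m $ coefficient matrix, the same cost per step as the ALI iteration.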

In the following, we give the convergence analysis of the MALI iteration method. First, we need several lemmas.

Lemma 2.1   Let $ \{X_k\} $ be the matrix sequence generated by the MALI iteration method, $ \mathcal{R}(X) = XCX-XD-AX+B $, and $ S $ be the minimal nonnegative solution to (1.1). Then for any $ k\ge 0 $, the following equalities hold

(1) $ (X_{k+1/2}-S)(\alpha I+(D-CX_k)) = (\alpha I -(A-SC))(X_k-S) $;

(2) $ (X_{k+1/2}-X_k)(\alpha I+(D-CX_k)) = \mathcal{R}(X_k) $;

(3) $ \mathcal{R}(X_{k+1/2}) = (\alpha I-(A-X_{k+1/2}C))(X_{k+1/2}-X_k) $;

(4) $ (\beta I +(A-X_{k+1/2}C))(X_{k+1}-S) = (X_{k+1/2}-S)(\beta I-(D-CS)) $;

(5) $ (\beta I +(A-X_{k+1/2}C))(X_{k+1}-X_{k+1/2}) = \mathcal{R}(X_{k+1/2}) $;

(6) $ \mathcal{R}(X_{k+1}) = (X_{k+1}-X_{k+1/2})(\beta I-(D-CX_{k+1})) $.

Proof   The proof is similar to that in [8], so we omit it here.

Lemma 2.2   For the MARE (1.1), if the matrix $ K $ in (1.2) is a nonsingular M-matrix or an irreducible singular M-matrix and $ S $ is the minimal nonnegative solution to (1.1), then for any $ \delta>0 $ and $ 0\le Z\le S $, the matrices $ \delta I+A-ZC $ and $ \delta I+D-CZ $ are nonsingular M-matrices.

Proof   First, from $ 0\le Z\le S $, we have $ A-SC\le A-ZC $ and $ D-CS\le D-CZ $.

If $ K $ is a nonsingular M-matrix, then $ A-SC $ and $ D-CS $ are also nonsingular M-matrices by Lemma 1.7. Thus $ \delta I+A-ZC $ and $ \delta I+D-CZ $ are nonsingular M-matrices by Lemma 1.2.

If $ K $ is an irreducible M-matrix, then $ A-SC $ and $ D-CS $ are also irreducible M-matrices by Lemma 1.7. Since $ \delta>0 $, $ A-SC\not = \delta I+ A-ZC $ and $ D-CS\not = \delta I+D-CZ $. By Lemma 1.3, $ \delta I+A-ZC $ and $ \delta I+D-CZ $ are nonsingular M-matrices.

Lemma 2.3   For the MARE (1.1), suppose that the matrix $ K $ in (1.2) is a nonsingular M-matrix or an irreducible singular M-matrix, $ S $ is the minimal nonnegative solution to (1.1), and the parameters $ \alpha, \beta $ satisfy

$ \begin{equation} \alpha\geq \max\limits_{1\leq i \leq m}\{a_{ii}\}, \ \ \beta \geq \max\limits_{1\leq j \leq n}\{d_{jj}\}, \end{equation} $ (2.2)

where $ a_{ii} $ and $ d_{jj} $ are the diagonal entries of $ A $ and $ D $, respectively, then for any $ k\ge 0 $,

(ⅰ) $ \{X_{k+1/2}\} $ and $ \{X_{k+1}\} $ are well defined and bounded

$ \begin{equation} 0 \leq X_{k+1/2}\leq S, \;\;\; 0 \leq X_{k+1}\leq S; \end{equation} $ (2.3)

(ⅱ) $ \beta I+A-X_{k+1/2}C $ and $ \alpha I+D-CX_{k+1} $ are nonsingular M-matrices.

Proof   If $ K $ is a nonsingular M-matrix or an irreducible singular M-matrix, from Lemma 1.4, $ A $ and $ D $ are nonsingular M-matrices. Thus, when $ \alpha $ and $ \beta $ satisfy (2.2), both $ \alpha I -A $ and $ \beta I -D $ are nonnegative matrices.

We prove this lemma by induction. When $ k = 0 $, we have $ X_{1/2}(\alpha I+D) = B $ since $ X_0 = 0 $ from the MALI iteration. As $ D $ is a nonsingular M-matrix, by Lemma 1.2, $ \alpha I+D $ is also a nonsingular M-matrix. Thus from Lemma 1.1 we have $ (\alpha I+D)^{-1}\geq 0 $. Hence

$ X_{1/2} = B(\alpha I+D)^{-1}\geq 0. $

On the other hand, from Lemma 2.1(1), we have

$ (X_{1/2}-S)(\alpha I+D) = -(\alpha I -(A-SC))S. $

Thus since $ C\ge 0 $ and $ S\ge 0 $, we have

$ \begin{align*} X_{1/2}-S & = -(\alpha I -(A-SC))S(\alpha I+D)^{-1} \\ & = -((\alpha I -A)+SC)S(\alpha I+D)^{-1}\\ & \leq 0. \end{align*} $

This shows that $ 0 \leq X_{1/2}\leq S $.

On the other hand, since $ C\geq 0 $, we have $ A-SC\leq A-X_{1/2}C $. Since $ D $ is a nonsingular M-matrix, its diagonal entries are positive, so $ \beta $ must be positive by (2.2). Thus $ \beta I+A-X_{1/2}C $ is a nonsingular M-matrix by Lemma 2.2.

Similarly, from the MALI iteration and Lemma 2.1 (4), we have

$ (\beta I+(A-X_{1/2}C))X_{1} = X_{1/2}(\beta I-D)+B $

and

$ (\beta I +(A-X_{1/2}C))(X_{1}-S) = (X_{1/2}-S)(\beta I-(D-CS)). $

Hence

$ X_{1} = (\beta I+(A-X_{1/2}C))^{-1}(X_{1/2}(\beta I-D)+B)\geq 0 $

and

$ X_{1}-S = (\beta I +(A-X_{1/2}C))^{-1}(X_{1/2}-S)(\beta I-(D-CS))\leq 0. $

This shows that $ 0 \leq X_1\leq S $.

On the other hand, since $ C\geq 0 $, we have $ D-CS\leq D-CX_1. $ Since $ A $ is a nonsingular M-matrix, $ \alpha $ must be positive by (2.2). By Lemma 2.2, $ \alpha I+D-CX_1 $ is a nonsingular M-matrix.

Thus (ⅰ) and (ⅱ) hold true for $ k = 0 $.

Assume that (ⅰ) and (ⅱ) hold true for $ k = l-1 $. From the MALI iteration, we have

$ X_{l+1/2}(\alpha I+(D-CX_l)) = (\alpha I-A)X_l+B, $

thus

$ X_{l+1/2} = ((\alpha I-A)X_l+B)(\alpha I+(D-CX_l))^{-1} \geq 0. $

From Lemma 2.1 (1), we have $ (X_{l+1/2}-S)(\alpha I+(D-CX_l)) = (\alpha I -(A-SC))(X_l-S), $ thus

$ X_{l+1/2}-S = (\alpha I -(A-SC))(X_l-S)(\alpha I+(D-CX_l))^{-1} \leq 0. $

This shows that $ 0 \leq X_{l+1/2}\leq S $.

Again, since $ C\geq 0 $ and $ \beta>0 $, by Lemma 2.2, $ \beta I+A-X_{l+1/2}C $ is a nonsingular M-matrix.

Similarly, from the MALI iteration and Lemma 2.1 (4), we have

$ (\beta I+(A-X_{l+1/2}C))X_{l+1} = X_{l+1/2}(\beta I-D)+B $

and

$ (\beta I +(A-X_{l+1/2}C))(X_{l+1}-S) = (X_{l+1/2}-S)(\beta I-(D-CS)). $

Hence

$ X_{l+1} = (\beta I+(A-X_{l+1/2}C))^{-1}(X_{l+1/2}(\beta I-D)+B)\geq 0 $

and

$ X_{l+1}-S = (\beta I +(A-X_{l+1/2}C))^{-1}(X_{l+1/2}-S)(\beta I-(D-CS))\leq 0. $

This shows that $ 0 \leq X_{l+1}\leq S $.

Again, since $ C\geq 0 $ and $ \alpha>0 $, by Lemma 2.2, $ \alpha I+D-CX_{l+1} $ is a nonsingular M-matrix.

Thus, by induction, (ⅰ) and (ⅱ) hold true for all $ k\geq0 $.

Lemma 2.4   Under the assumptions of Lemma 2.3, the following inequalities hold for all $ k\geq 0 $:

$ \begin{equation} X_{k}\leq X_{k+1/2}\leq X_{k+1}, \;\;\; \mathcal{R}(X_k)\geq 0, \;\;\; \mathcal{R}(X_{k+1/2})\geq 0, \;\;\; \mathcal{R}(X_{k+1})\geq 0, \end{equation} $ (2.4)

where $ \mathcal{R}(X) $ is defined in Lemma 2.1.

Proof   We prove this lemma by induction.

In fact, when $ k = 0 $, we have $ \mathcal{R}(X_{0}) = B\geq 0 $. From Lemma 2.1(2), we have

$ (X_{1/2}-X_0)(\alpha I+(D-CX_0)) = \mathcal{R}(X_0). $

Since $ \alpha I+D $ is a nonsingular M-matrix,

$ \begin{align*} X_{1/2}-X_0 = \mathcal{R}(X_0)(\alpha I+(D-CX_0))^{-1} = B(\alpha I+D)^{-1} \geq 0. \end{align*} $

From Lemma 2.1(3), we have

$ \begin{align*} \mathcal{R}(X_{1/2}) = (\alpha I-(A-X_{1/2}C))(X_{1/2}-X_0) = ((\alpha I-A)+X_{1/2}C)(X_{1/2}-X_0)\geq 0. \end{align*} $

From Lemma 2.1(5), we have $ (\beta I +(A-X_{1/2}C))(X_{1}-X_{1/2}) = \mathcal{R}(X_{1/2}) $. Since $ \beta I +(A-X_{1/2}C) $ is a nonsingular M-matrix by Lemma 2.2, we have

$ X_{1}-X_{1/2} = (\beta I +(A-X_{1/2}C))^{-1}\mathcal{R}(X_{1/2})\geq 0. $

From Lemma 2.1(6), we have

$ \begin{align*} \mathcal{R}(X_{1}) & = (X_{1}-X_{1/2})(\beta I-(D-CX_{1})) = (X_{1}-X_{1/2})((\beta I-D)+CX_{1})\geq 0. \end{align*} $

This shows that (2.4) holds for $ k = 0 $.

Now we suppose that (2.4) holds for $ k = l-1 $. Then from Lemma 2.1(2) and Lemma 2.2, we have

$ X_{l+1/2}-X_l = \mathcal{R}(X_l)(\alpha I+(D-CX_l))^{-1}\geq 0. $

From Lemma 2.1(3), we have

$ \mathcal{R}(X_{l+1/2}) = (\alpha I-(A-X_{l+1/2}C))(X_{l+1/2}-X_l)\geq 0. $

From Lemma 2.1(5) and Lemma 2.2, we have

$ X_{l+1}-X_{l+1/2} = (\beta I +(A-X_{l+1/2}C))^{-1}\mathcal{R}(X_{l+1/2})\geq 0. $

From Lemma 2.1(6), we have

$ \mathcal{R}(X_{l+1}) = (X_{l+1}-X_{l+1/2})(\beta I-(D-CX_{l+1}))\geq 0. $

Thus, by induction, (2.4) holds for all $ k\geq 0 $.

By Lemmas 2.3 and 2.4, we can prove the following convergence theorem of the MALI iteration method.

Theorem 2.1   For the MARE (1.1), if $ K $ in (1.2) is a nonsingular M-matrix or an irreducible singular M-matrix, $ S $ is the minimal nonnegative solution to (1.1), and the parameters $ \alpha, \beta $ satisfy (2.2), then $ \{X_{k}\} $ is well defined, monotonically increasing and converges to $ S $.

Proof   Combining Lemma 2.3 with Lemma 2.4, we see that $ \{X_{k}\} $ is nonnegative, monotonically increasing and bounded from above. Thus there is a nonnegative matrix $ S^* $ such that $ \lim\limits_{k\rightarrow\infty}X_{k} = S^* $, and also $ \lim\limits_{k \rightarrow\infty}X_{k+1/2} = S^* $. From Lemma 2.3, we have $ S^*\leq S $. On the other hand, taking the limit in the MALI iteration shows that $ S^* $ is a solution of the MARE; thus $ S\leq S^* $ by the minimality of $ S $. Hence $ S = S^* $.

3 Convergence Rate of the MALI Iteration Method

In this section, we analyze the convergence rate and give the optimal parameters of the MALI iteration method.

Theorem 3.1   Under the assumptions of Theorem 2.1, for the sequence $ \{X_{k}\} $ generated by the MALI iteration method, we have

$ \begin{equation} \limsup\limits_{k\rightarrow\infty}\|S-X_k\|_2^{1/k} \leq \rho(\alpha, \beta), \end{equation} $ (3.1)

where

$ \begin{equation} \rho(\alpha, \beta) = \frac{\alpha-\lambda_{\min}}{\beta+\lambda_{\min}}\cdot\frac{\beta-\mu_{\min}}{\alpha+\mu_{\min}} \end{equation} $ (3.2)

and

$ \lambda_{\min} = \lambda_{\min}(A-SC), \quad \mu_{\min} = \mu_{\min}(D-CS). $

Proof   From Lemma 2.1(1) and (4), we have

$ X_{k+1/2}-S = (\alpha I -(A-SC))(X_k-S)(\alpha I+(D-CX_k))^{-1} $

and

$ X_{k+1}-S = (\beta I +(A-X_{k+1/2}C))^{-1}(X_{k+1/2}-S)(\beta I-(D-CS)). $

Thus

$ S-X_{k+1} = (\beta I +(A-X_{k+1/2}C))^{-1}(\alpha I -(A-SC))(S-X_k)(\alpha I+(D-CX_k))^{-1}(\beta I-(D-CS)). $

Because $ \beta I +(A-SC)\leq \beta I +(A-X_{k+1/2}C) $, by Lemma 1.5, we have

$ (\beta I +(A-X_{k+1/2}C))^{-1}\leq(\beta I +(A-SC))^{-1}. $

Similarly, we have

$ (\alpha I+(D-CX_k))^{-1}\leq (\alpha I+(D-CS))^{-1}. $

Thus

$ S-X_{k+1}\leq (\beta I +(A-SC))^{-1}(\alpha I -(A-SC))(S-X_k)(\alpha I+(D-CS))^{-1}(\beta I-(D-CS)). $

By induction we have

$ S-X_{k}\leq ((\beta I +(A-SC))^{-1}(\alpha I-(A-SC)))^k(S-X_0)((\alpha I+(D-CS))^{-1}(\beta I-(D-CS)))^k. $

Hence

$ \begin{align*} & \|S-X_{k}\|_2^{1/k}\\ \leq & \|((\beta I +(A-SC))^{-1}(\alpha I-(A-SC)))^k\|_2^{1/k}\|(S-X_0)\|_2^{1/k}\\ &\|((\alpha I+(D-CS))^{-1}(\beta I-(D-CS)))^k\|_2^{1/k}. \end{align*} $

Taking the limit on both sides and noting that $ \rho(T) = \lim\limits_{k\rightarrow\infty}\|T^k\|_2^{1/k} $ for any square matrix $ T $, we have

$ \begin{align*} \limsup\limits_{k\rightarrow\infty}\|S-X_k\|_2^{1/k} \leq & \rho((\beta I +(A-SC))^{-1}(\alpha I-(A-SC)))\\ & \cdot\rho((\alpha I+(D-CS))^{-1}(\beta I-(D-CS))). \end{align*} $

It is easy to verify that

$ \rho((\beta I +(A-SC))^{-1}(\alpha I-(A-SC))) = \frac{\alpha-\lambda_{\min}}{\beta+\lambda_{\min}}, $
$ \rho((\alpha I+(D-CS))^{-1}(\beta I-(D-CS))) = \frac{\beta-\mu_{\min}}{\alpha+\mu_{\min}}, $

where $ \lambda_{\min} = \lambda_{\min}(A-SC) $ and $ \mu_{\min} = \mu_{\min}(D-CS) $. Hence

$ \limsup\limits_{k\rightarrow\infty}\|S-X_k\|_2^{1/k} \leq\rho(\alpha, \beta) = \frac{\alpha-\lambda_{\min}}{\beta+\lambda_{\min}}\cdot\frac{\beta-\mu_{\min}}{\alpha+\mu_{\min}}. $

Corollary 3.1   When $ K $ in (1.2) is a nonsingular M-matrix or an irreducible singular M-matrix with nonzero drift, for any $ \alpha, \beta $ satisfying (2.2), we have $ \rho(\alpha, \beta)<1 $. In this case, the MALI iteration method has a linear convergence rate. When $ K $ is an irreducible singular M-matrix with zero drift, for any $ \alpha, \beta $ satisfying (2.2), we have $ \rho(\alpha, \beta) = 1 $. In this case, the MALI iteration method has a sub-linear convergence rate.

Proof   When $ K $ is a nonsingular M-matrix or an irreducible singular M-matrix with nonzero drift, at least one of $ A-SC $ and $ D-CS $ is nonsingular by Lemma 1.9. By Lemma 1.6,

$ \alpha\geq \max \{a_{ii}\} \geq \max\{(A-SC)_{ii}\}\geq \lambda_{\min}(A-SC), $
$ \beta\geq \max \{d_{jj}\} \geq \max\{(D-CS)_{jj}\} \geq \mu_{\min}(D-CS), $

where $ (A-SC)_{ii} $ and $ (D-CS)_{jj} $ denote the diagonal entries of $ A-SC $ and $ D-CS $, respectively. Since at least one of $ \lambda_{\min} $ and $ \mu_{\min} $ is positive and $ \alpha, \beta>0 $, we have $ (\alpha-\lambda_{\min})(\beta-\mu_{\min})<(\beta+\lambda_{\min})(\alpha+\mu_{\min}) $, that is, $ \rho(\alpha, \beta)<1 $. In this case, the MALI iteration method has a linear convergence rate.

When $ K $ is an irreducible singular M-matrix with zero drift, both $ A-SC $ and $ D-CS $ are singular M-matrices by Lemma 1.9. Thus $ \lambda_{\min}(A-SC) = 0 $ and $ \mu_{\min}(D-CS) = 0 $. Hence $ \rho(\alpha, \beta) = 1 $. In this case, the MALI iteration method has a sub-linear convergence rate.

Corollary 3.2   The optimal parameters of the MALI iteration method are

$ \alpha_{opt} = \max \{a_{ii}\}, \quad \beta_{opt} = \max \{d_{jj}\}. $

Proof   It is easy to verify that $ \rho(\alpha, \beta) $ is an increasing function with respect to both $ \alpha $ and $ \beta $. Thus the minimum of $ \rho(\alpha, \beta) $ over all parameters satisfying (2.2) is attained at $ \alpha = \max \{a_{ii}\} $, $ \beta = \max \{d_{jj}\} $.

Note that, in Corollary 3.2, the optimal parameters only minimize the upper bound of the contraction factor.
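
For instance, once the minimal solution $ S $ (or a good approximation of it) is available, the optimal parameters and the bound (3.2) can be evaluated as follows (a MATLAB sketch with variable names of our own choosing):

% Optimal parameters of Corollary 3.2 and the bound rho(alpha, beta) in (3.2).
alpha = max(diag(A));                  % alpha_opt
beta  = max(diag(D));                  % beta_opt
lam = min(abs(eig(A - S*C)));          % lambda_min(A - SC)
mu  = min(abs(eig(D - C*S)));          % mu_min(D - CS)
rho = (alpha - lam)/(beta + lam) * (beta - mu)/(alpha + mu);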

4 Numerical Experiments

In this section we use several examples to show the feasibility of the MALI iteration method. We compare the MALI iteration method with the ALI iteration method in [8] and the Newton method in [3, 11], and present computational results in terms of the number of iterations (IT), CPU time (CPU) in seconds, and the residual (RES), where

$ \text {RES}: = \frac{\|XCX-XD-AX+B\|_\infty}{\|XCX\|_\infty+\|XD\|_\infty+\|AX\|_\infty+\|B\|_\infty}. $

In our implementation, all iterations are performed in MATLAB (version 2012a) on a personal computer and are terminated when the current iterate satisfies $ \text{RES}<10^{-6} $ or the number of iterations exceeds 9000; the latter case is denoted by '-'.

Example 1   Consider the MARE (1.1) with

$ D = -10E_{n\times n}+180.002 I_n, \ \ C = 0.001 E_{n\times m}, \ \ B = C^T, \ \ A = 0.018 I_m, $

where $ E_{m\times n} $ is the $ m\times n $ matrix of all ones and $ I_m $ is the identity matrix of size $ m $, with $ m = 2, n = 18 $. This example is from [5], where the corresponding $ K $ is an irreducible singular M-matrix. The computational results are summarized in Table 1.
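
This example can be set up in a few lines of MATLAB (a sketch calling the hypothetical mali function from Section 2):

% Example 1: m = 2, n = 18; K is an irreducible singular M-matrix.
m = 2; n = 18;
D = -10*ones(n) + 180.002*eye(n);
C = 0.001*ones(n, m);
B = C';
A = 0.018*eye(m);
X = mali(A, B, C, D, max(diag(A)), max(diag(D)), 1e-6, 9000);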

Table 1   Numerical Results of Example 1

Example 2   Consider the MARE (1.1) with

$ D = \alpha\left( \begin{array}{ccccc} 3 & -1 & & & \\ -1 & 4 & -1 & & \\ & \ddots & \ddots & \ddots & \\ & & -1 & 4 & -1 \\ & & & -1 & 2 \\ \end{array} \right), \ \ \ C = \alpha\left( \begin{array}{cccc} 1 & 1 & & \\ & 1 & \ddots & \\ & & \ddots & 1 \\ & & & 1 \\ \end{array} \right), $
$ B = \left( \begin{array}{cccc} 1 & & & \\ 1 & 1 & & \\ & \ddots & \ddots & \\ & & 1 & 1 \\ \end{array} \right), \ \ \ A = \left( \begin{array}{cccc} n & -1 & \cdots & -1 \\ -1 & n+1 & \ddots & -1 \\ \vdots & \ddots & \ddots & -1 \\ -1 & \cdots & -1 & n+1 \\ \end{array} \right). $

This example is taken from [5], where we choose $ \alpha = 2 $; the corresponding $ K $ is an irreducible singular M-matrix with $ \mu > 0 $. The computational results are summarized in Table 2 for different sizes $ n $.
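
A possible MATLAB setup for this example is given below (a sketch; the scaling parameter $ \alpha = 2 $ is named a in the code to avoid a clash with the iteration parameter $ \alpha $):

% Example 2 for a given size n; with a = 2, K is irreducible singular, mu > 0.
a = 2;
e = ones(n-1, 1);
D = a*(diag([3; 4*ones(n-2, 1); 2]) - diag(e, 1) - diag(e, -1));
C = a*(eye(n) + diag(e, 1));                   % upper bidiagonal of ones
B = eye(n) + diag(e, -1);                      % lower bidiagonal of ones
A = diag([n; (n+1)*ones(n-1, 1)]) - (ones(n) - eye(n));
X = mali(A, B, C, D, max(diag(A)), max(diag(D)), 1e-6, 9000);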

Table 2   Numerical Results of Example 2

From the two numerical experiments, we observe that the MALI iteration method requires less CPU time than the ALI iteration method and the Newton method. So it is feasible.

5 Conclusions

We propose a modified alternately linearized implicit iteration method (MALI) for computing the minimal nonnegative solution of the MARE. Theoretical analysis and numerical experiments show that the MALI iteration method is feasible in some cases. However, since the MALI iteration method only has a linear convergence rate in general, it can be very slow compared with the SDA algorithm in [9, 15]. So, in general, the SDA algorithm is preferable.

References
[1] Rogers L C G. Fluid models in queueing theory and Wiener-Hopf factorization of Markov chains[J]. Ann. Appl. Probab., 1994, 4: 390–413. DOI:10.1214/aoap/1177005065
[2] Juang J, Lin W W. Nonsymmetric algebraic Riccati equations and Hamilton-like matrices[J]. SIAM J. Matrix Anal. Appl., 1999, 20: 228–243.
[3] Guo C H. Nonsymmetric algebraic Riccati equations and Wiener-Hopf factorization for M-matrices[J]. SIAM J. Matrix Anal. Appl., 2001, 23: 225–242. DOI:10.1137/S0895479800375680
[4] Bean N G, O'Reilly M M, Taylor P G. Algorithms for return probabilities for stochastic fluid flows[J]. Stochastic Models, 2005, 21: 149–184. DOI:10.1081/STM-200046511
[5] Bini D A, Iannazzo B, Meini B. Numerical solution of algebraic Riccati equations[M]. Philadelphia: SIAM, 2012.
[6] Guo C H. A note on the minimal nonnegative solution of a nonsymmetric algebraic Riccati equation[J]. Linear Algebra Appl., 2002, 357: 299–302. DOI:10.1016/S0024-3795(02)00431-7
[7] Lu L Z. Solution form and simple iteration of a nonsymmetric algebraic Riccati equation arising in transport theory[J]. SIAM J. Matrix Anal. Appl., 2005, 26: 679–685. DOI:10.1137/S0895479801397275
[8] Bai Z Z, Guo X X, Xu S F. Alternately linearized implicit iteration methods for the minimal nonnegative solutions of the nonsymmetric algebraic Riccati equations[J]. Numer. Linear Algebra Appl., 2006, 13: 655–674. DOI:10.1002/nla.500
[9] Guo X X, Lin W W, Xu S F. A structure-preserving doubling algorithm for nonsymmetric algebraic Riccati equation[J]. Numer. Math., 2006, 103: 393–412. DOI:10.1007/s00211-005-0673-7
[10] Guo C H. A new class of nonsymmetric algebraic Riccati equations[J]. Linear Algebra Appl., 2007, 426: 636–649. DOI:10.1016/j.laa.2007.05.044
[11] Guo C H, Higham N J. Iterative solution of a nonsymmetric algebraic Riccati equation[J]. SIAM J. Matrix Anal. Appl., 2007, 29: 396–412. DOI:10.1137/050647669
[12] Guo C H, Iannazzo B, Meini B. On the doubling algorithm for a (shift) algebraic Riccati equation[J]. SIAM J. Matrix Anal. Appl., 2007, 29: 1083–1100.
[13] Wang W G, Wang W C, Li R C. Alternating-directional doubling algorithm for M-matrix algebraic Riccati equations[J]. SIAM J. Matrix Anal. Appl., 2012, 33: 170–194. DOI:10.1137/110835463
[14] Guo C H. On algebraic Riccati equations associated with M-matrices[J]. Linear Algebra Appl., 2013, 439: 2800–2814. DOI:10.1016/j.laa.2013.08.018
[15] Guo P C, Guo X X. A modified structure-preserving doubling algorithm for nonsymmetric algebraic Riccati equations from transport theory[J]. J. Comput. App. Math., 2014, 261: 213–220. DOI:10.1016/j.cam.2013.09.058
[16] Guo C H, Lu D. On algebraic Riccati equations associated with regular singular M-matrices[J]. Linear Algebra Appl., 2016, 493: 108–119. DOI:10.1016/j.laa.2015.11.024
[17] Huang T M, Huang W Q, Li R C, Lin W W. A new two-phase structure-preserving doubling algorithm for critically singular M-matrix algebraic Riccati equations[J]. Numer. Linear Algebra Appl., 2016, 23: 291–313. DOI:10.1002/nla.2025
[18] Xue J G, Li R C. Highly accurate doubling algorithms for M-matrix algebraic Riccati equations[J]. Numer. Math., 2017, 135: 733–767. DOI:10.1007/s00211-016-0815-0
[19] Guan J R, Lu L Z. New alternately linearized implicit iteration for M-matrix algebraic Riccati equations[J]. J. Math. Study, 2017, 50(1): 54–64. DOI:10.4208/jms.v50n1.17.04
[20] Berman A, Plemmons R J. Nonnegative matrices in the mathematical sciences[M]. New York: Academic Press, 1994.