Journal of Mathematics (数学杂志)  2014, Vol. 34 Issue (3): 448-460
COMPARISON THEOREMS FOR A CLASS OF PRECONDITIONED AOR ITERATIVE METHODS
XUE Qiu-fang1,2, GAO Xing-bao1, LIU Xiao-guang1    
1. College of Math. and Infor. Science, Shaanxi Normal University, Xi'an 710062, China;
2. Department of Applied Math., Xi'an University of Technology, Xi'an 710048, China
Abstract: In this paper, the preconditioned AOR iterative methods with the preconditioners $P_{\alpha}^{1\rightarrow k}$ are studied when the coefficient matrix of the linear system is a strictly diagonally dominant L-matrix. By using the theory of matrix splittings, convergence results for the preconditioned AOR methods and comparison theorems about the influence of the parameters α and k on the rate of convergence are obtained. The results indicate that preconditioners with larger k and α are efficient and competitive for the preconditioned AOR methods. The results in this paper generalize those for the preconditioned Gauss-Seidel methods given by Li et al. Numerical examples further verify these results.
Key words: preconditioner     preconditioned AOR iterative method     strictly diagonally dominant L-matrix     spectral radius    
1 Introduction

Consider the large sparse linear system

$ \begin{eqnarray} Ax=b, \end{eqnarray} $ (1.1)

where $x, b\in R^n$ and $A=(a_{ij})\in R^{n\times n}$ is nonsingular. Such systems often arise from computational fluid dynamics, thermal, structural and circuit simulation problems and are usually solved by iterative methods. Let $A =M-N$ with $M$ nonsingular; then the basic iterative method is

$ x_{k+1} = Tx_{k} +c, \, \, k = 0, 1, \cdots, $

where $T =M^{-1}N$ is the iterative matrix and $ c =M^{-1}b.$

Without loss of generality, in this paper we let $A=I-L-U, $ where $I$ is the $n\times n$ identity matrix and $-L$ and $-U$ are the strictly lower and strictly upper triangular parts of $A$, respectively. Let $M=(I-\gamma L)/\omega$ and $N=[(1-\omega)I+(\omega-\gamma)L+\omega U]/\omega$; then the AOR iterative matrix is given by

$ \begin{eqnarray}\label{itm1} L_{\gamma, \omega}=(I-\gamma L)^{-1} [(1-\omega)I+(\omega-\gamma)L+\omega U], \end{eqnarray} $ (1.2)

where $\omega, \gamma\in[0, 2)$ ($\omega\ne 0$) are relaxation parameters. For details, the reader may refer to [1]. Obviously, when $(\omega, \gamma)=(\omega, \omega), (1, 1)$ and $(1, 0)$, the AOR iterative method reduces to the SOR method, the Gauss-Seidel method and the Jacobi method, respectively.
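As a concrete illustration, formula (1.2) can be formed directly with NumPy. This is our own minimal sketch (not code from the paper); the $3\times 3$ matrix below is an illustrative strictly diagonally dominant $L$-matrix, not one of the paper's examples.

```python
import numpy as np

def aor_iteration_matrix(A, gamma, omega):
    """AOR iterative matrix (1.2) for A = I - L - U with unit diagonal."""
    n = A.shape[0]
    I = np.eye(n)
    L = -np.tril(A, -1)   # -L is the strictly lower part of A
    U = -np.triu(A, 1)    # -U is the strictly upper part of A
    M = (I - gamma * L)
    N = (1 - omega) * I + (omega - gamma) * L + omega * U
    # L_{gamma,omega} = (I - gamma L)^{-1} [(1-omega)I + (omega-gamma)L + omega U]
    return np.linalg.solve(M, N)

# An illustrative strictly diagonally dominant L-matrix with unit diagonal.
A = np.array([[1.0, -0.2, -0.3],
              [-0.4, 1.0, -0.1],
              [-0.2, -0.5, 1.0]])

rho = max(abs(np.linalg.eigvals(aor_iteration_matrix(A, 0.7, 0.9))))
print(rho)  # < 1 for this strictly diagonally dominant L-matrix
```

Setting $(\omega, \gamma)=(1, 0)$ recovers the Jacobi matrix $L+U$, and $(\omega, \gamma)=(1, 1)$ recovers the Gauss-Seidel matrix $(I-L)^{-1}U$, matching the special cases above.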

In order to improve the convergence performance of iterative methods, many preconditioned iterative methods have been proposed (see [2-14]); that is, one considers iterative methods for the preconditioned system

$ \begin{eqnarray} PAx=Pb, \end{eqnarray} $ (1.3)

where $P$ is a nonsingular preconditioner. For instance, two preconditioners

$ \widetilde{P}=\left( \begin{array}{ccccc} 1&-a_{12}&0&\cdots&0\\ 0&1&-a_{23}&\cdots&0\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ 0&0&0&\cdots&-a_{n-1, n}\\ 0&0&0&\cdots&1 \end{array} \right) $

and

$ \widetilde{P}_{\alpha}=\left( \begin{array}{ccccc} 1&-\alpha_1 a_{12}&0&\cdots&0\\ 0&1&-\alpha_2 a_{23}&\cdots&0\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ 0&0&0&\cdots&-\alpha_{n-1}a_{n-1, n}\\ 0&0&0&\cdots&1 \end{array} \right) $

were proposed in [2] and [3], respectively. Obviously, the preconditioner $\widetilde{P}_{\alpha}$ proposed in [3] generalizes the preconditioner $\widetilde{P}$ in [2], and both were further generalized in [4] and [5]. Usui et al. (see [4]) provided the preconditioner $P_{U}=I+U.$ Soon after, Kotakemori (see [5]) generalized the preconditioner $P_{U}$ by adopting $P^{\alpha}_{U}=I+\alpha U, $ where $\alpha$ is a positive real parameter. They all studied the preconditioned Gauss-Seidel method and obtained comparison results on the convergence performance.

In 2008, Li et al. proposed a class of preconditioners $P_{\alpha}^{1\rightarrow k}, k=1, 2, \cdots, n-1$ (see [6]), of the following form, where $P_{\alpha}^{1\rightarrow 1}=\widetilde{P}_{\alpha}$:

$ P_{\alpha}^{1\rightarrow 2}=\left( \begin{array}{ccccccc} 1&-\alpha_1 a_{12}&-\alpha_1 a_{13} &0&\cdots&0&0\\ 0&1&-\alpha_2 a_{23} &-\alpha_2 a_{24}& \cdots&0&0\\ \vdots&\vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\ 0&0&0 &0&\cdots &-\alpha_{n-2}a_{n-2, n-1}& -\alpha_{n-2}a_{n-2, n}\\ 0&0&0 &0&\cdots &1&-\alpha_{n-1}a_{n-1, n}\\ 0&0&0 &0&\cdots &0&1 \end{array} \right), \cdots, $
$ P_{\alpha}^{1\rightarrow (n-1)}=\left( \begin{array}{ccccc} 1&-\alpha_1 a_{12}&-\alpha_1 a_{13}&\cdots&-\alpha_1 a_{1n}\\ 0&1&-\alpha_2 a_{23}&\cdots&-\alpha_2 a_{2n}\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ 0&0&0&\cdots&-\alpha_{n-1}a_{n-1, n}\\ 0&0&0&\cdots&1 \end{array} \right), $

where $\alpha={\rm diag}\{\alpha_{1}, \alpha_{2}, \cdots, \alpha_{n}\}, 0\le \alpha_{i}\le 1, \, \, 1\le i\le n$ and there exists an $i$ such that $0< \alpha_{i}\le 1.$ Obviously, the preconditioners mentioned above are all special cases of $P_{\alpha}^{1\rightarrow k}, \, k=1, 2, \cdots, n-1.$ In that paper, they proved that the convergence speed of the preconditioned Gauss-Seidel methods is not slower than that of the standard Gauss-Seidel method. Moreover, the bigger the parameter $k$ is, the smaller the spectral radius of the preconditioned Gauss-Seidel iterative matrix is.
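The construction of $P_{\alpha}^{1\rightarrow k}=I+S_{\alpha}^{1\rightarrow k}$ can be sketched as follows: row $i$ of $S_{\alpha}^{1\rightarrow k}$ keeps $-\alpha_{i}a_{ij}$ on the first $k$ superdiagonals $j-i=1, \cdots, k$. This is our own illustrative implementation (the function name `preconditioner` and the test matrix are our choices, not the authors').

```python
import numpy as np

def preconditioner(A, alpha, k):
    """P_alpha^{1->k} = I + S_alpha^{1->k} for A with unit diagonal.

    alpha: array of length n-1 with 0 <= alpha_i <= 1.
    k: how many superdiagonals of A enter the preconditioner (1 <= k <= n-1).
    """
    n = A.shape[0]
    S = np.zeros((n, n))
    for i in range(n - 1):
        # superdiagonals j - i = 1, ..., k of row i, scaled by alpha_i
        for j in range(i + 1, min(i + k + 1, n)):
            S[i, j] = -alpha[i] * A[i, j]
    return np.eye(n) + S

A = np.array([[1.0, -0.2, -0.3],
              [-0.4, 1.0, -0.1],
              [-0.2, -0.5, 1.0]])
# k = n-1 with alpha_i = 1 recovers the preconditioner I + U of [4]
print(preconditioner(A, np.ones(2), 2))
```

With $k=1$ this reduces to the bidiagonal $\widetilde{P}_{\alpha}$ of [3], and with $k=n-1$, $\alpha_i\equiv\alpha$ it reduces to $I+\alpha U$ of [5].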

Considering the wide applicability of the preconditioners $P_{\alpha}^{1\rightarrow k}\, (k=1, 2, \cdots, n-1)$ and of the AOR iterative method, in this paper we apply these preconditioners to the AOR iterative method when $A$ is a strictly diagonally dominant $L$-matrix and study the convergence performance of the resulting preconditioned AOR methods. In particular, we establish the monotonicity of the spectral radius of the iterative matrices with respect to the parameters, together with some comparison results. Finally, numerical experiments verify the obtained results and show that these preconditioners yield a considerable improvement in the rate of convergence.

The paper is organized as follows. In Section 2, we briefly review some useful notations and definitions. In Section 3, the preconditioned AOR methods are presented and some comparison theorems about their convergence performance are discussed in detail. In Section 4, some numerical examples are given. Section 5 presents some conclusions.

2 Preliminaries

For the convenience of later discussions, it is necessary to introduce the following notations, definitions and lemmas.

For a vector $x\in R^n$, $x\ge 0\, (x>0)$ denotes that all components of $x$ are nonnegative (positive). For two vectors $x, y\in R^n$, $x\ge y\, (x>y)$ means that $x-y\ge 0\, (x-y>0)$. These definitions carry over immediately to matrices. For any matrix $A=(a_{ij})\in R^{n\times n}$, $\rho(A)$ denotes the spectral radius of $A$.

Definition 2.1  A matrix $A=(a_{ij})\in R^{n\times n}$ is said to be

(a) an $L$-matrix if $a_{ij}\le 0$ for any $i\ne j$ and $a_{ii}>0$ for $i=1, 2, \cdots, n$;

(b) an $M$-matrix if $A=sI-B$ with $B\ge 0, s>0$ and $\rho(B)\le s$;

(c) a strictly diagonally dominant matrix if $|a_{ii}|>\sum_{j\ne i}|a_{ij}|$ for $i=1, 2, \cdots, n.$

It is well known that a strictly diagonally dominant $L$-matrix $A$ must be a nonsingular $M$-matrix, so $A^{-1}\ge0$.
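The properties in Definition 2.1 and the remark above are easy to check numerically. The sketch below is our own (the helper names `is_L_matrix` and `is_sdd` and the test matrix are our choices); it verifies that a strictly diagonally dominant $L$-matrix has a nonnegative inverse, as stated.

```python
import numpy as np

def is_L_matrix(A):
    """Definition 2.1(a): positive diagonal, nonpositive off-diagonal."""
    off = A - np.diag(np.diag(A))
    return bool(np.all(np.diag(A) > 0) and np.all(off <= 0))

def is_sdd(A):
    """Definition 2.1(c): |a_ii| > sum_{j != i} |a_ij| for every row i."""
    off_sums = np.abs(A).sum(axis=1) - np.abs(np.diag(A))
    return bool(np.all(np.abs(np.diag(A)) > off_sums))

A = np.array([[1.0, -0.2, -0.3],
              [-0.4, 1.0, -0.1],
              [-0.2, -0.5, 1.0]])
assert is_L_matrix(A) and is_sdd(A)
print(np.all(np.linalg.inv(A) >= 0))  # True: A is a nonsingular M-matrix
```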

Definition 2.2  Let $A$ be a real matrix. The splitting $A=M-N$ is said to be

(a) regular if $M^{-1}\ge0$ and $N\ge 0$;

(b) weak regular (weak nonnegative of the first type) if $M^{-1}\ge 0$ and $M^{-1}N\ge0$;

(c) weak nonnegative of the second type if $M^{-1}\ge 0$ and $NM^{-1}\ge0$;

(d) nonnegative splitting if $M^{-1}N\geq0.$

In general, a regular splitting is a weak nonnegative splitting of either type, but the converse is not true.

Lemma 2.1[6]  Let $A$ be a strictly diagonally dominant $L$-matrix; then $P_{\alpha}^{1\rightarrow k}A$ is also a strictly diagonally dominant $L$-matrix for all $\alpha_{i}\in[0, 1], i=1, 2, \cdots, n-1$ and $1\le k\le n-1$.

Lemma 2.2[6] Let $A$ be a nonsingular matrix with $A^{-1}\ge0$.

(a) If $A=M-N$ is a weak nonnegative splitting of either type, then $\rho(M^{-1}N)<1$;

(b) If the splitting $A=M-N$ is weak nonnegative of the second type, then there exists a vector $x\gneq0$ such that $M^{-1}Nx=\rho(M^{-1}N)x$ and $Ax\ge0$ as well as $Nx\gneq0$.

Lemma 2.3[7]  Let $A_{1}, A_{2}\in R^{n\times n}$ and let $A_{i}=M_{i}-N_{i}, i=1, 2, $ be nonnegative splittings. If the Perron eigenvector $x_{2}\gneq0$ (the eigenvector $x_{2}$ corresponding to $\rho(T_{2})$) satisfies $T_{1}x_{2}\le T_{2}x_{2}$, then $\rho(T_{1})\le \rho(T_{2}), $ where $T_{i}=M_{i}^{-1}N_{i}, i=1, 2.$

Lemma 2.4[15, 16] Let $A$ be a nonnegative matrix. Then

(a) if $\alpha x\le Ax$ for some vector $x\gneq 0$, then $\alpha\le\rho(A)$;

(b) if $A$ is irreducible and $\alpha x\leq Ax\leq\beta x$ for some vector $x\gneq 0$ with neither equality holding, then $\alpha<\rho(A)<\beta$ and $x>0$;

(c) $\alpha>\rho(A)$ if and only if $\alpha I-A$ is nonsingular and $(\alpha I-A)^{-1}\ge0$.

Lemma 2.5[16] Let $A\in R^{n\times n}$ be nonnegative, then

(a) $\rho(A)$ is an eigenvalue of $A$; if $A$ is also irreducible, then $\rho(A)>0;$

(b) there is an eigenvector $x\gneq0$ corresponding to $\rho(A)$, called the Perron vector; if $A$ is also irreducible, then $x>0$.

3 Comparison Theorems for a Class of Preconditioned AOR Iterative Methods

In this section, we will discuss the preconditioned AOR iterative methods with the preconditioners $P_{\alpha}^{1\rightarrow k}, k=1, 2, \cdots, n-1, $ from two aspects, namely the parameter $k$ and the parameter $\alpha$. Some comparison results for the preconditioned AOR methods will be stated and proven.

A Comparison Theorems with Respect to the Parameter "$k$" for Preconditioned AOR Methods

Let $A_{D}, A_{L}$ and $A_{U}$ denote the diagonal, strictly lower triangular and strictly upper triangular parts of $A$, respectively; that is, $A=A_{D}+A_{L}+A_{U}.$ Also let $P_{\alpha}^{1\rightarrow k}=I+S_{\alpha}^{1\rightarrow k}, k=1, 2, \cdots, n-1.$ For any positive integers $k$ and $k+1$ satisfying $1\le k\le k+1\le n-1, $ we consider the AOR splittings of $P_{\alpha}^{1\rightarrow k}A$ and $P_{\alpha}^{1\rightarrow (k+1)}A$:

$ \begin{eqnarray}\label{sp1} P_{\alpha}^{1\rightarrow k}A&=&(I+S_{\alpha}^{1\rightarrow k})(I-L-U)\nonumber\\ &=&[I-(S_{\alpha}^{1\rightarrow k}L)_{D}-\gamma L-\gamma(S_{\alpha}^{1\rightarrow k}L)_{L}]/\omega-[(1-\omega)(I-(S_{\alpha}^{1\rightarrow k}L)_{D})\nonumber\\ &&+(\omega-\gamma)(L+(S_{\alpha}^{1\rightarrow k}L)_{L})+\omega (U-S_{\alpha}^{1\rightarrow k}+(S_{\alpha}^{1\rightarrow k}L)_{U}+S_{\alpha}^{1\rightarrow k}U)]/\omega\nonumber\\ &=&M_{\alpha}^{k}-N_{\alpha}^{k}, \end{eqnarray} $ (3.1)

where $M_{\alpha}^{k}=[I-(S_{\alpha}^{1\rightarrow k}L)_{D}-\gamma L-\gamma(S_{\alpha}^{1\rightarrow k}L)_{L}]/\omega$ and $N_{\alpha}^{k}=M_{\alpha}^{k}-P_{\alpha}^{1\rightarrow k}A, $ and

$ \begin{eqnarray}\label{sp2} P_{\alpha}^{1\rightarrow (k+1)}A&=&(I+S_{\alpha}^{1\rightarrow (k+1)})(I-L-U)\nonumber\\ &=&[I-(S_{\alpha}^{1\rightarrow (k+1)}L)_{D}-\gamma L-\gamma(S_{\alpha}^{1\rightarrow (k+1)}L)_{L}]/\omega\nonumber\\ &&-[(1-\omega)(I-(S_{\alpha}^{1\rightarrow (k+1)}L)_{D})+(\omega-\gamma)(L+(S_{\alpha}^{1\rightarrow (k+1)}L)_{L})\nonumber\\ &&+\omega (U-S_{\alpha}^{1\rightarrow (k+1)}+(S_{\alpha}^{1\rightarrow (k+1)}L)_{U}+S_{\alpha}^{1\rightarrow (k+1)}U)]/\omega\nonumber\\ &=&M_{\alpha}^{k+1}-N_{\alpha}^{k+1}, \end{eqnarray} $ (3.2)

where $M_{\alpha}^{k+1}=[I-(S_{\alpha}^{1\rightarrow (k+1)}L)_{D}-\gamma L-\gamma(S_{\alpha}^{1\rightarrow (k+1)}L)_{L}]/\omega$ and $N_{\alpha}^{k+1}=M_{\alpha}^{k+1}-P_{\alpha}^{1\rightarrow (k+1)}A.$ So the iterative matrices of the AOR methods for $P_{\alpha}^{1\rightarrow k}A$ and $P_{\alpha}^{1\rightarrow (k+1)}A$ are $L_{\gamma, \omega}^{\alpha, k}=(M_{\alpha}^{k})^{-1}N_{\alpha}^{k}$ and $L_{\gamma, \omega}^{\alpha, k+1}=(M_{\alpha}^{k+1})^{-1}N_{\alpha}^{k+1}$, respectively.

By Lemma 2.1, we know that both the splittings (3.1) and (3.2) are regular splittings. Furthermore, by Lemma 2.2 (a), they are convergent.
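The behavior established by the comparison theorem stated next can be illustrated numerically. The sketch below is our own (not the authors' code): it applies the AOR splitting to $P_{\alpha}^{1\rightarrow k}A$, whose diagonal is no longer the identity, for a randomly generated strictly diagonally dominant $L$-matrix, and tabulates the spectral radii as $k$ grows.

```python
import numpy as np

def aor_matrix(B, gamma, omega):
    """AOR iteration matrix M^{-1}N for B = D - L - U (general diagonal D)."""
    D = np.diag(np.diag(B))
    L = -np.tril(B, -1)
    U = -np.triu(B, 1)
    M = (D - gamma * L) / omega
    N = ((1 - omega) * D + (omega - gamma) * L + omega * U) / omega
    return np.linalg.solve(M, N)

def precond(A, alpha, k):
    """P_alpha^{1->k} = I + S_alpha^{1->k} (row i scaled by alpha_i, bands 1..k)."""
    n = A.shape[0]
    S = np.zeros((n, n))
    for i in range(n - 1):
        for j in range(i + 1, min(i + k + 1, n)):
            S[i, j] = -alpha[i] * A[i, j]
    return np.eye(n) + S

n, gamma, omega = 8, 0.7, 0.9
rng = np.random.default_rng(0)
B = rng.uniform(0.1, 1.0, (n, n))
np.fill_diagonal(B, 0.0)
# Unit diagonal, negative off-diagonal, row sums < 1: an SDD L-matrix.
A = np.eye(n) - B / (1.1 * B.sum(axis=1, keepdims=True))

rhos = [max(abs(np.linalg.eigvals(
            aor_matrix(precond(A, np.ones(n - 1), k) @ A, gamma, omega))))
        for k in range(n)]  # k = 0 gives the unpreconditioned AOR method
print(rhos)  # all < 1 and non-increasing in k
```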

Theorem 3.1  Let $A$ be a strictly diagonally dominant $L$-matrix, $0\le \gamma\le \omega\le 1(\omega\ne0), $ then $\rho(L_{\gamma, \omega}^{\alpha, k+1})\le\rho(L_{\gamma, \omega}^{\alpha, k})\le\rho(L_{\gamma, \omega})<1$ for all $\alpha_{i}\in[0, 1], i=1, 2, \cdots, n-1, $ and $1\le k\le k+1\le n-1, $ where $L_{\gamma, \omega}^{\alpha, k}, $ $L_{\gamma, \omega}^{\alpha, k+1}$ and $L_{\gamma, \omega}$ are the iterative matrices of the AOR methods for $P_{\alpha}^{1\rightarrow k}A, $ $P_{\alpha}^{1\rightarrow (k+1)}A$ and $A$, respectively.

Proof  Since $A=(I-\gamma L)/\omega-[(1-\omega)I+(\omega-\gamma)L+\omega U]/\omega$ is a regular splitting, $\rho(L_{\gamma, \omega})<1$ and there exists a vector $x\gneq0$ such that $L_{\gamma, \omega}x=\rho(L_{\gamma, \omega})x$ and $Ax\ge0 $ by Lemma 2.2 (b). Thus

$ P_{\alpha}^{1\rightarrow k}Ax=(I+S_{\alpha}^{1\rightarrow k})Ax=Ax+S_{\alpha}^{1\rightarrow k}Ax\ge Ax\ge 0. $

From $S_{\alpha}^{1\rightarrow k}L\ge0, $ we know

$ M_{\alpha}^{k}=[I-(S_{\alpha}^{1\rightarrow k}L)_{D}-\gamma( L+(S_{\alpha}^{1\rightarrow k}L)_{L})]/\omega\le (I-\gamma L)/\omega. $

So $(M_{\alpha}^{k})^{-1}\ge \omega(I-\gamma L)^{-1}$ since $(I-\gamma L)^{-1}\ge0$ and $(M_{\alpha}^{k})^{-1}\ge0.$ Therefore

$ \begin{eqnarray}\label{ie1} &&(I-\gamma L)^{-1}[(1-\omega)I+(\omega-\gamma)L+\omega U]x=\rho(L_{\gamma, \omega})x\nonumber\\ &=&\;x-\omega(I-\gamma L)^{-1}Ax\ge x-(M_{\alpha}^{k})^{-1}Ax\nonumber\\ &\ge &\;x-(M_{\alpha}^{k})^{-1}P_{\alpha}^{1\rightarrow k} Ax=(I-(M_{\alpha}^{k})^{-1}P_{\alpha}^{1\rightarrow k} A)x\nonumber\\ &=&\;(M_{\alpha}^{k})^{-1}N_{\alpha}^{k}x. \end{eqnarray} $ (3.3)

It follows from $(M_{\alpha}^{k})^{-1}N_{\alpha}^{k}\ge 0$ and Lemma 2.3 that $\rho(L_{\gamma, \omega}^{\alpha, k})\le\rho(L_{\gamma, \omega}).$

For the vector $x, $

$ \begin{eqnarray*} P_{\alpha}^{1\rightarrow (k+1)}Ax&\;=&\;(I+S_{\alpha}^{1\rightarrow (k+1)})Ax\\ &=&\;(I+S_{\alpha}^{1\rightarrow k})Ax+(S_{\alpha}^{1\rightarrow (k+1)}-S_{\alpha}^{1\rightarrow k})Ax\\ &\ge &\;P_{\alpha}^{1\rightarrow k}Ax\ge 0. \end{eqnarray*} $

Since both $S_{\alpha}^{1\rightarrow k}L$ and $(S_{\alpha}^{1\rightarrow (k+1)}-S_{\alpha}^{1\rightarrow k})L$ are nonnegative, the following inequality holds

$ \begin{eqnarray*} M_{\alpha}^{k+1}&\;=&\;[I-(S_{\alpha}^{1\rightarrow k}L)_{D}-((S_{\alpha}^{1\rightarrow (k+1)}-S_{\alpha}^{1\rightarrow k})L)_{D}-\gamma L\\ &&\;-\gamma(S_{\alpha}^{1\rightarrow k}L)_{L}-\gamma((S_{\alpha}^{1\rightarrow (k+1)}-S_{\alpha}^{1\rightarrow k})L)_{L}]/\omega\\ &\le &\;[I-(S_{\alpha}^{1\rightarrow k}L)_{D}-\gamma L-\gamma(S_{\alpha}^{1\rightarrow k}L)_{L}]/\omega=M_{\alpha}^{k}. \end{eqnarray*} $

Notice that both $ M_{\alpha}^{k+1}$ and $ M_{\alpha}^{k}$ are nonsingular $M$-matrices, so $(M_{\alpha}^{k+1})^{-1}\ge (M_{\alpha}^{k})^{-1}.$ Thus

$ \begin{eqnarray}\label{ie2} (M_{\alpha}^{k})^{-1}N_{\alpha}^{k}x&\;=&\;x-(M_{\alpha}^{k})^{-1}P_{\alpha}^{1\rightarrow k} Ax\nonumber\\ &\ge &\;x-(M_{\alpha}^{k+1})^{-1}P_{\alpha}^{1\rightarrow (k+1)} Ax\nonumber\\ &= &\;(I-(M_{\alpha}^{k+1})^{-1}P_{\alpha}^{1\rightarrow (k+1)} A)x\nonumber =(M_{\alpha}^{k+1})^{-1}N_{\alpha}^{k+1}x. \end{eqnarray} $

Then the result $\rho((M_{\alpha}^{k+1})^{-1}N_{\alpha}^{k+1})\le\rho((M_{\alpha}^{k})^{-1}N_{\alpha}^{k}) $ follows by Rheinboldt and Vandergraft (see [8]) and Lemma 2.2 in [9]. Therefore the assertion $\rho(L_{\gamma, \omega}^{\alpha, k+1})\le\rho(L_{\gamma, \omega}^{\alpha, k})\le\rho(L_{\gamma, \omega})<1$ is proved.

Remark 3.1 (1) Theorem 3.1 indicates that, for the AOR iteration, the bigger the parameter $k$ is (i.e., the more upper-right diagonals we select for the preconditioner), the smaller the spectral radius of the iterative matrix is. Consequently, for the same parameter $\alpha$, the preconditioner $I+\alpha U=P_{\alpha}^{1\rightarrow(n-1)}$ proposed in [5] is better than the preconditioner $I+S_{\alpha}=P_{\alpha}^{1\rightarrow 1}$ proposed in [3] for the AOR iterative method.

(2) In Theorem 3.1, let $(\omega, \gamma)=(\omega, \omega), (1, 1)$ and $(1, 0)$; then the comparison theorems for the preconditioned SOR, Gauss-Seidel and Jacobi methods are obtained, respectively. Therefore, Theorem 3.2 and Theorem 3.4 in [6] are special cases of this theorem (see Corollary 3.2 and Corollary 3.3).

Corollary 3.1  Let $A$ be a strictly diagonally dominant $L$-matrix, $0\le \omega\le 1(\omega\ne0), $ then

$ \rho(L_{\omega}^{\alpha, k+1})\le\rho(L_{\omega}^{\alpha, k})\le\rho(L_{\omega})<1 $

for all $\alpha_{i}\in[0, 1], i=1, 2, \cdots, n-1, $ and $1\le k\le k+1\le n-1, $ where $L_{\omega}^{\alpha, k}, $ $L_{\omega}^{\alpha, k+1}$ and $L_{\omega}$ are the iterative matrices of the SOR methods for $P_{\alpha}^{1\rightarrow k}A, $ $P_{\alpha}^{1\rightarrow (k+1)}A$ and $A$, respectively.

Corollary 3.2  Let $A$ be a strictly diagonally dominant $L$-matrix, then

$ \rho(G_{\alpha}^{k+1})\le\rho(G_{\alpha}^{k})\le\rho(G)<1 $

for all $\alpha_{i}\in[0, 1], i=1, 2, \cdots, n-1, $ and $1\le k\le k+1\le n-1, $ where $G_{\alpha}^{k}, $ $G_{\alpha}^{k+1}$ and $G$ are the iterative matrices of the Gauss-Seidel methods for $P_{\alpha}^{1\rightarrow k}A, $ $P_{\alpha}^{1\rightarrow (k+1)}A$ and $A$, respectively.

Corollary 3.3  Let $A$ be a strictly diagonally dominant $L$-matrix, then

$ \rho(J_{\alpha}^{k+1})\le\rho(J_{\alpha}^{k})\le\rho(J)<1 $

for all $\alpha_{i}\in[0, 1], i=1, 2, \cdots, n-1, $ and $1\le k\le k+1\le n-1, $ where $J_{\alpha}^{k}, $ $J_{\alpha}^{k+1}$ and $J$ are the iterative matrices of the Jacobi methods for $P_{\alpha}^{1\rightarrow k}A, $ $P_{\alpha}^{1\rightarrow (k+1)}A$ and $A$, respectively.

Theorem 3.2  Let $A\in R^{n\times n}$ be an irreducible strictly diagonally dominant $L$-matrix, then $P_{\alpha}^{1\rightarrow k}A$ is also an irreducible strictly diagonally dominant $L$-matrix for all $\alpha_{i}\in[0, 1), i=1, 2, \cdots, n-1, $ and $1\le k\le n-1$.

Proof  Let $\tilde{A}_{\alpha}^{k}=P_{\alpha}^{1\rightarrow k}A=(I+S_{\alpha}^{1\rightarrow k})A, $ then $\tilde{A}_{\alpha}^{k}$ is also a strictly diagonally dominant $L$-matrix by Lemma 2.1. And

$ \begin{eqnarray}\label{sp3} \tilde{A}_{\alpha}^{k}&\;=&\;(I+S_{\alpha}^{1\rightarrow k})(I-L-U)\nonumber\\ &=&\;I-L-U+S_{\alpha}^{1\rightarrow k}-S_{\alpha}^{1\rightarrow k}L-S_{\alpha}^{1\rightarrow k}U\\ &=&\;(I-(S_{\alpha}^{1\rightarrow k}L)_{D})-(L+(S_{\alpha}^{1\rightarrow k}L)_{L})-(U-S_{\alpha}^{1\rightarrow k}+(S_{\alpha}^{1\rightarrow k}L)_{U}+S_{\alpha}^{1\rightarrow k}U)\nonumber. \end{eqnarray} $ (3.4)

Let $F=U-S_{\alpha}^{1\rightarrow k}=(f_{ij}), E=L+(S_{\alpha}^{1\rightarrow k}L)_{L}=(e_{ij}), $ and $L=(l_{ij}), U=(u_{ij}), $ then from the definition of $P_{\alpha}^{1\rightarrow k}$ we know that

$ \begin{eqnarray*} \left\{ \begin{array}{l} f_{ij}=u_{ij}-\alpha_{i}u_{ij}~\ \mbox{for} \ 1\le j-i\le k, \\ f_{ij}=u_{ij}\qquad\qquad\mbox{for} \ k+1\le j-i\le n-1, \\ f_{ij}=0~~~\qquad\qquad\mbox{for} \ j\le i. \end{array} \right. \end{eqnarray*} $

Thus if $u_{ij}\ne0$, then $f_{ij}\ne0$ for $\alpha_{i}\in[0, 1), i=1, 2, \cdots, n-1, $ and if $l_{ij}\ne0$, then $e_{ij}\ne0$ since $(S_{\alpha}^{1\rightarrow k}L)_{L}\ge0.$ Since both $(S_{\alpha}^{1\rightarrow k}L)_{U}$ and $S_{\alpha}^{1\rightarrow k}U$ are nonnegative, we conclude that $\tilde{A}_{\alpha}^{k}$ is also an irreducible strictly diagonally dominant $L$-matrix for $\alpha_{i}\in[0, 1), i=1, 2, \cdots, n-1, $ and $1\le k\le n-1$.

Theorem 3.3  Let $A$ be an irreducible strictly diagonally dominant $L$-matrix, $0\le \gamma\le \omega\le 1(\omega\ne0, \gamma\ne1), $ then

$ \rho(L_{\gamma, \omega}^{\alpha, k+1})<\rho(L_{\gamma, \omega}^{\alpha, k})<\rho(L_{\gamma, \omega})<1 $

for all $\alpha_{i}\in[0, 1), i=1, 2, \cdots, n-1\, (\alpha\ne 0), $ and $1\le k\le k+1\le n-1, $ where $L_{\gamma, \omega}^{\alpha, k}, $ $L_{\gamma, \omega}^{\alpha, k+1}$ and $L_{\gamma, \omega}$ are the iterative matrices of the AOR methods for $P_{\alpha}^{1\rightarrow k}A, $ $P_{\alpha}^{1\rightarrow (k+1)}A$ and $A$ respectively and $P_{\alpha}^{1\rightarrow (1+k)}\ne P_{\alpha}^{1\rightarrow k}$.

Proof  From (1.2) and Lemma 2.4 (c), the following equality holds,

$ L_{\gamma, \omega}= (1-\omega)I+\omega(1-\gamma)L+\omega U+T, $

where $T\ge 0.$ Since $A$ is irreducible, $(1-\omega)I+\omega(1-\gamma)L+\omega U$ is irreducible when $0\le \gamma\le \omega\le 1\, (\omega\ne0, \gamma\ne 1)$. So, together with $T\ge 0, $ we know that $L_{\gamma, \omega}$ is irreducible.

From Theorem 3.2, we also know that $\tilde{A}_{\alpha}^{k}$ is an irreducible strictly diagonally dominant $L$-matrix for $\alpha_{i}\in[0, 1), i=1, 2, \cdots, n-1, $ and $1\le k\le n-1.$ So both $ M_{\alpha}^{k+1}$ and $ M_{\alpha}^{k}$ are nonsingular and both $L_{\gamma, \omega}^{\alpha, k}$ and $L_{\gamma, \omega}^{\alpha, k+1}$ are well defined. Furthermore, it can be proved that both $L_{\gamma, \omega}^{\alpha, k}$ and $L_{\gamma, \omega}^{\alpha, k+1}$ are irreducible by a similar argument as above.

From the proof of Theorem 3.1, we know there exists an $x\gneq0$ such that $(M_{\alpha}^{k})^{-1}N_{\alpha}^{k}x\ge (M_{\alpha}^{k+1})^{-1}N_{\alpha}^{k+1}x.$ Therefore, $\rho(L_{\gamma, \omega}^{\alpha, k+1})<\rho(L_{\gamma, \omega}^{\alpha, k})$ by Lemma 2.4 (b) and $P_{\alpha}^{1\rightarrow (1+k)}\ne P_{\alpha}^{1\rightarrow k}$. And $\rho(L_{\gamma, \omega}^{\alpha, k})<\rho(L_{\gamma, \omega})<1$ from (3.3) and $\alpha\ne0$.

The proof is completed.

Remark 3.2 (1) Theorem 3.3 can be seen as a further improvement of Theorem 3.1. If the matrix in Theorem 3.1 is irreducible, then the inequalities in the result become strict.

(2) From Theorem 3.3, we can easily obtain the following comparison results for the preconditioned SOR and Jacobi methods. Consequently, the result for the preconditioned Jacobi method improves that of Theorem 3.4 in [6] (see Corollary 3.5).

Corollary 3.4  Let $A$ be an irreducible strictly diagonally dominant $L$-matrix, $0<\omega<1, $ then $\rho(L_{\omega}^{\alpha, k+1})<\rho(L_{\omega}^{\alpha, k})<\rho(L_{\omega})<1$ for all $\alpha_{i}\in[0, 1), i=1, 2, \cdots, n-1\, (\alpha\ne 0), $ and $1\le k\le k+1\le n-1, $ where $L_{\omega}^{\alpha, k}, $ $L_{\omega}^{\alpha, k+1}$ and $L_{\omega}$ are the iterative matrices of the SOR methods for $P_{\alpha}^{1\rightarrow k}A, $ $P_{\alpha}^{1\rightarrow (k+1)}A$ and $A$, respectively, and $P_{\alpha}^{1\rightarrow (1+k)}\ne P_{\alpha}^{1\rightarrow k}$.

Corollary 3.5  Let $A$ be an irreducible strictly diagonally dominant $L$-matrix, then

$ \rho(J_{\alpha}^{k+1})<\rho(J_{\alpha}^{k})<\rho(J)<1 $

for all $\alpha_{i}\in[0, 1), i=1, 2, \cdots, n-1\, (\alpha\ne 0), $ and $1\le k\le k+1\le n-1, $ where $J_{\alpha}^{k}, $ $J_{\alpha}^{k+1}$ and $J$ are the iterative matrices of the Jacobi methods for $P_{\alpha}^{1\rightarrow k}A, $ $P_{\alpha}^{1\rightarrow (k+1)}A$ and $A$, respectively, and $P_{\alpha}^{1\rightarrow (1+k)}\ne P_{\alpha}^{1\rightarrow k}$.

B Comparison Theorems with Respect to the Parameter "$\alpha$" for Preconditioned AOR Methods

Let $\beta={\rm diag}\{\beta_{1}, \beta_{2}, \cdots, \beta_{n}\}, 0\le \beta_{i}\le 1, \, \, i=1, 2, \cdots, n.$ Let $M_{\beta}^{k}=[I-(S_{\beta}^{1\rightarrow k}L)_{D}-\gamma L-\gamma(S_{\beta}^{1\rightarrow k}L)_{L}]/\omega$ and assume $\alpha_{i}\le \beta_{i}$ for $i=1, 2, \cdots, n$; then we have

$ \begin{eqnarray*} M_{\alpha}^{k}&\;= &\;[I-(S_{\alpha}^{1\rightarrow k}L)_{D}-\gamma L-\gamma(S_{\alpha}^{1\rightarrow k}L)_{L}]/\omega\\ &\ge &\;[I-(S_{\beta}^{1\rightarrow k}L)_{D}-\gamma L-\gamma(S_{\beta}^{1\rightarrow k}L)_{L}]/\omega=M_{\beta}^{k}, \end{eqnarray*} $

thus $(M_{\alpha}^{k})^{-1}\le (M_{\beta}^{k})^{-1}$ from $(M_{\alpha}^{k})^{-1}\ge 0$ and $(M_{\beta}^{k})^{-1}\ge 0.$ So the proof of the following monotonicity result with respect to the parameter $\alpha$ is completely similar to that of Theorem 3.1.

Theorem 3.4  Let $A$ be a strictly diagonally dominant $L$ -matrix, $0\le \gamma\le \omega\le 1(\omega\ne0), $ then $\rho(L_{\gamma, \omega}^{\beta, k})\le\rho(L_{\gamma, \omega}^{\alpha, k})\le\rho(L_{\gamma, \omega})<1$ for any $0\le\alpha_{i}\le \beta_{i}\le1, i=1, 2, \cdots, n-1, $ where $L_{\gamma, \omega}^{\beta, k}, $ $L_{\gamma, \omega}^{\alpha, k}$ and $L_{\gamma, \omega}$ are the iterative matrices of the AOR methods for $P_{\beta}^{1\rightarrow k}A, $ $P_{\alpha}^{1\rightarrow k}A$ and $A$, respectively.

Remark 3.3  Theorem 3.4 indicates that the spectral radius of the preconditioned AOR method is monotone with respect to the parameter $\alpha$, and the same holds for the preconditioned SOR, Gauss-Seidel and Jacobi methods, which shows that Theorem 3.3 and Theorem 3.5 in [6] are special cases of our results (see Corollary 3.7 and Corollary 3.8).

Corollary 3.6  Let $A$ be a strictly diagonally dominant $L$-matrix, $0< \omega\le 1, $ then

$ \rho(L_{\omega}^{\beta, k})\le\rho(L_{\omega}^{\alpha, k})\le\rho(L_{\omega})<1 $

for any $0\le\alpha_{i}\le \beta_{i}\le1, i=1, 2, \cdots, n-1, $ where $L_{\omega}^{\beta, k}, \, \, L_{\omega}^{\alpha, k}$ and $L_{\omega}$ are the iterative matrices of the SOR methods for $P_{\beta}^{1\rightarrow k}A, \, \, P_{\alpha}^{1\rightarrow k}A$ and $A$ respectively.

Corollary 3.7   Let $A$ be a strictly diagonally dominant $L$-matrix, then

$ \rho(G_{\beta}^{k})\le\rho(G_{\alpha}^{k})\le\rho(G)<1 $

for any $0\le\alpha_{i}\le \beta_{i}\le1, i=1, 2, \cdots, n-1, $ where $G_{\beta}^{k}, \, \, G_{\alpha}^{k}$ and $G$ are the iterative matrices of the Gauss-Seidel methods for $P_{\beta}^{1\rightarrow k}A, \, \, P_{\alpha}^{1\rightarrow k}A$ and $A$ respectively.

Corollary 3.8  Let $A$ be a strictly diagonally dominant $L$-matrix, then

$ \rho(J_{\beta}^{k})\le\rho(J_{\alpha}^{k})\le\rho(J)<1 $

for any $0\le\alpha_{i}\le \beta_{i}\le1, i=1, 2, \cdots, n-1, $ where $J_{\beta}^{k}, \, \, J_{\alpha}^{k}$ and $J$ are the iterative matrices of the Jacobi methods for $P_{\beta}^{1\rightarrow k}A, \, \, P_{\alpha}^{1\rightarrow k}A$ and $A$ respectively.

For an irreducible strictly diagonally dominant $L$-matrix, the strict monotonicity result with respect to the parameter $\alpha$ can be obtained similarly to the proof of Theorem 3.3.

Theorem 3.5  Let $A$ be an irreducible strictly diagonally dominant $L$ -matrix, $0\le \gamma\le \omega\le 1(\omega\ne0, \gamma\ne1), $ then $\rho(L_{\gamma, \omega}^{\beta, k})<\rho(L_{\gamma, \omega}^{\alpha, k})<\rho(L_{\gamma, \omega})<1$ for any $0\le\alpha_{i}\le \beta_{i}\le1, i=1, 2, \cdots, n-1\, (\alpha\ne 0, \alpha\ne \beta), $ where $L_{\gamma, \omega}^{\beta, k}, $ $L_{\gamma, \omega}^{\alpha, k}$ and $L_{\gamma, \omega}$ are the iterative matrices of the AOR methods for $P_{\beta}^{1\rightarrow k}A, $ $P_{\alpha}^{1\rightarrow k}A$ and $A$ respectively.

Corollary 3.9  Let $A$ be an irreducible strictly diagonally dominant $L$ -matrix, $0<\omega\le 1, $ then

$ \rho(L_{\omega}^{\beta, k})<\rho(L_{\omega}^{\alpha, k})<\rho(L_{\omega})<1 $

for any $0\le\alpha_{i}\le \beta_{i}\le1, i=1, 2, \cdots, n-1\, (\alpha\ne 0, \alpha\ne \beta), $ where $L_{\omega}^{\alpha, k}, $ $L_{\omega}^{\beta, k}$ and $L_{\omega}$ are the iterative matrices of the SOR methods for $P_{\alpha}^{1\rightarrow k}A, $ $P_{\beta}^{1\rightarrow k}A$ and $A$ respectively.

Corollary 3.10  Let $A$ be an irreducible strictly diagonally dominant $L$ -matrix, then

$ \rho(J_{\beta}^{k})<\rho(J_{\alpha}^{k})<\rho(J)<1 $

for any $0\le\alpha_{i}\le \beta_{i}\le1, i=1, 2, \cdots, n-1\, (\alpha\ne 0, \alpha\ne \beta), $ where $J_{\alpha}^{k}, $ $J_{\beta}^{k}$ and $J$ are the iterative matrices of the Jacobi methods for $P_{\alpha}^{1\rightarrow k}A, $ $P_{\beta}^{1\rightarrow k}A$ and $A$ respectively.

Remark 3.4 (1) Obviously, Theorem 3.5 is an improvement of Theorem 3.4. If the matrix $A$ in Theorem 3.4 is irreducible, the inequalities become strict. Consequently, Corollary 3.10 improves the result of Theorem 3.5 in [6].

(2) The preconditioners $P_{\alpha}^{1\rightarrow k}, k=1, 2, \cdots, n-1, $ are defined by upper triangular, nonnegative matrices. Symmetrically, if we define the corresponding lower triangular, nonnegative matrices as dual preconditioners, then we can obtain theorems parallel to the above theorems and corollaries.

4 Numerical Experiments

In this section, some experiments are provided to illustrate the convergence behavior of the preconditioned AOR/SOR iterative methods.

The spectral radius of the iterative matrices for the methods is illustrated and analyzed in detail.

Example 4.1  The coefficient matrix $A$ (see [7]) is given by

$ A=\left[ \begin{array}{cccccc} 1 &-\frac{t}{2\times20+1} &-\frac{t}{3\times20+1} &-\frac{t}{4\times20+1}&\cdots&-\frac{t}{n\times20+1}\\ -\frac{t}{2\times20+2}&1&-\frac{t}{3\times20+2}&-\frac{t}{4\times20+2}&\cdots&-\frac{t}{n\times20+2}\\ -\frac{t}{3\times20+3}&-\frac{t}{2\times20+3}&1&-\frac{t}{4\times20+3}&\cdots&-\frac{t}{n\times20+3}\\ -\frac{t}{4\times20+4}&-\frac{t}{3\times20+4}&-\frac{t}{2\times20+4}&1&\cdots&-\frac{t}{n\times20+4}\\ \vdots&\vdots&\vdots&\vdots&\ddots&\vdots\\ -\frac{t}{n\times20+n}&-\frac{t}{(n-1)\times20+n}&-\frac{t}{(n-2)\times20+n}&-\frac{t}{(n-3)\times20+n}&\cdots&1 \end{array} \right], $

where $t=2.5, n=200$.

Obviously, the matrix $A$ satisfies all the conditions of the theorems in this paper. By direct computation, we have $\rho(L_{\gamma, \omega})=0.3955$ when $\gamma=0.7, \omega=0.9$ for the AOR method and $\rho(L_{\omega})=0.3261$ when $\omega=0.9$ for the SOR method.
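The experiment can be reproduced along the following lines. This sketch is our own reading of the displayed pattern, namely $a_{ij}=-t/\big((|i-j|+1)\cdot 20+i\big)$ for $i\ne j$ with $1$-based indices; it checks strict diagonal dominance and computes the spectral radius of the unpreconditioned AOR matrix.

```python
import numpy as np

def example_matrix(n=200, t=2.5):
    """Example 4.1 matrix under our reading: a_ij = -t/((|i-j|+1)*20 + i)."""
    i = np.arange(1, n + 1)[:, None]   # 1-based row index
    j = np.arange(1, n + 1)[None, :]   # 1-based column index
    A = -t / ((np.abs(i - j) + 1) * 20 + i)
    np.fill_diagonal(A, 1.0)
    return A

def aor_matrix(B, gamma, omega):
    """AOR iteration matrix M^{-1}N for B = D - L - U."""
    D = np.diag(np.diag(B))
    L = -np.tril(B, -1)
    U = -np.triu(B, 1)
    M = (D - gamma * L) / omega
    N = ((1 - omega) * D + (omega - gamma) * L + omega * U) / omega
    return np.linalg.solve(M, N)

A = example_matrix()
assert np.all(np.abs(A).sum(axis=1) - 1.0 < 1.0)  # strict diagonal dominance
rho = max(abs(np.linalg.eigvals(aor_matrix(A, 0.7, 0.9))))
print(rho)  # the paper reports 0.3955 for (gamma, omega) = (0.7, 0.9)
```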

Table 1 and Table 2 report the spectral radius of the iterative matrices of the preconditioned AOR/SOR methods when the parameters $k$ and $\alpha$ are varying.

Table 1  Spectral radius of iterative matrices for the preconditioned AOR methods

Table 2  Spectral radius of iterative matrices for the preconditioned SOR methods

First, Tables 1 and 2 show that the preconditioners can greatly accelerate the convergence rate of the method for any fixed parameter $\alpha$ or $k$. Second, it is observed that the spectral radius of the iterative matrices for the preconditioned AOR/SOR methods decreases monotonically as $k$ increases when $\alpha$ is fixed, which shows that the more upper-right diagonals are selected for the preconditioner, the smaller the spectral radius of the iterative matrix is. Third, the spectral radius of the iterative matrices for the preconditioned AOR/SOR methods also decreases monotonically as $\alpha$ increases when $k$ is fixed, which indicates that the spectral radius of the iterative matrices is monotone with respect to the parameter $\alpha$.

Example 4.2  Consider a nonsingular $Z$-matrix $A$ (see [7]) given by

$ A=\left[ \begin{array}{cccccc} 1&c_1&c_2&c_{3}&c_1&\cdots\\ c_{3}&1&c_1&c_2&\ddots&c_1\\ c_2&c_{3}&\ddots&\ddots&\ddots&c_{3}\\ c_1&\ddots&\ddots&1&c_1&c_2\\ c_{3}&\ddots&c_2&c_{3}&1&c_1\\ \cdots&c_{3}&c_1&c_2&c_{3}&1 \end{array} \right] $

where $c_1=-\frac{7}{10n}, c_2=-\frac{7}{10n+1}, c_{3}=-\frac{7}{10n+2}$ and $n=200$. It is easy to verify that $A$ is an irreducible strictly diagonally dominant $L$-matrix. By direct computation, we have $\rho(L_{\gamma, \omega})=0.6317$ when $\gamma=0.7, \omega=0.9$ for the AOR method and $\rho(L_{\omega})=0.5849$ when $\omega=0.9$ for the SOR method.

Table 3 and Table 4 report the spectral radius of the iterative matrices of the preconditioned AOR/SOR methods when the parameters $k$ and $\alpha$ are varying.

Table 3
Spectral radius of iterative matrices for the preconditioned AOR methods

Table 4
Spectral radius of iterative matrices for the preconditioned SOR methods

From Tables 3 and 4, we see that the preconditioners greatly accelerate the convergence of the method. For fixed $\alpha$, the spectral radius of the iteration matrices of the preconditioned AOR/SOR methods decreases monotonically as $k$ increases, so the more upper right diagonals are included in the preconditioner, the smaller the spectral radius becomes. Furthermore, convergence of the preconditioned AOR/SOR methods becomes considerably faster as $\alpha$ increases, confirming that the spectral radius is monotonically decreasing with respect to the parameter $\alpha$.

In a word, the tables above show that the preconditioners $P_{\alpha}^{1\rightarrow k}$ with larger $k$ and $\alpha$ are more efficient at accelerating the convergence of the AOR/SOR methods.

5 Conclusion

In this paper, the preconditioners $P_{\alpha}^{1\rightarrow k}$ were applied to the AOR iterative method. The convergence of the preconditioned AOR methods was analyzed when the coefficient matrix is a strictly diagonally dominant $L$-matrix, and comparison theorems on the influence of the parameters $\alpha$ and $k$ on the convergence rate were given. Numerical examples further verified these results, showing that the preconditioners $P_{\alpha}^{1\rightarrow k}$ with larger $k$ and $\alpha$ are efficient and competitive.

References
[1] Hadjidimos A. Accelerated overrelaxation method[J]. Mathematics of Computation, 1978, 32: 149–157. DOI:10.1090/S0025-5718-1978-0483340-6
[2] Gunawardena A D, Jain S K, Snyder L. Modified iterative methods for consistent linear systems[J]. Linear Algebra and its Applications, 1991, 154-156: 123–143. DOI:10.1016/0024-3795(91)90376-8
[3] Kohno T, Kotakemori H, Niki H. Improving modified iterative methods for $Z$-matrices[J]. Linear Algebra and its Applications, 1997, 267: 113–123. DOI:10.1016/S0024-3795(97)80045-6
[4] Usui M, Niki H, Kohno T. Adaptive Gauss-Seidel method for linear systems[J]. International Journal of Computer Mathematics, 1994, 51(1-2): 119–125. DOI:10.1080/00207169408804271
[5] Kotakemori H, Niki H, Okamoto N. Accelerated iterative method for $Z$-matrices[J]. Journal of Computational and Applied Mathematics, 1996, 75(1): 87–97. DOI:10.1016/S0377-0427(96)00061-1
[6] Li J C, Li W. The optimal preconditioner of strictly diagonally dominant $Z$-matrix[J]. Acta Mathematicae Applicatae Sinica, 2008, 24(2): 305–312. DOI:10.1007/s10255-006-6148-5
[7] Hadjidimos A, Noutsos D, Tzoumas M. More on modifications and improvements of classical iterative schemes for $M$ -matrices[J]. Linear Algebra Appl., 2003, 364: 253–279. DOI:10.1016/S0024-3795(02)00570-0
[8] Rheinboldt W C, Vandergraft J. A simple approach to the Perron-Frobenius theory for positive operators on general partially ordered finite-dimensional linear spaces[J]. Mathematics of Computation, 1973, 27: 139–145. DOI:10.1090/S0025-5718-1973-0325650-4
[9] Neumann M, Plemmons R J. Convergence of parallel multisplitting iterative methods for $M$-matrices[J]. Linear Algebra Appl., 1987, 88: 559–573.
[10] Zhang Yong, Huang Tingzhu. Modified iterative method for nonnegative matrices and $M$-matrices linear systems[J]. Computers and Mathematics with Applications, 2005, 50(10-12): 1587–1602. DOI:10.1016/j.camwa.2005.07.005
[11] Wang Li, Song Yong Zhong. Preconditioned AOR iterative methods for $M$-matrices[J]. Journal of Computational and Applied Mathematics, 2009, 226(1): 114–124. DOI:10.1016/j.cam.2008.05.022
[12] Li Wen, Sun Wei Wei. Modified Gauss-Seidel methods and Jacobi type methods for $Z$-matrices[J]. Linear Algebra and its Applications, 2000, 317(1-3): 223–240.
[13] Kotakemori H, Harada K, Morimoto M, Niki H. A comparison theorem for the iterative method with the preconditioner max$(I+S)$[J]. Journal of Computational and Applied Mathematics, 2002, 145(2): 373–378. DOI:10.1016/S0377-0427(01)00588-X
[14] Niki H, Harada K, Morimoto M, Sakakihara M. The survey of preconditioners used for accelerating the rate of convergence in the Gauss-Seidel method[J]. Journal of Computational and Applied Mathematics, 2004, 164: 587–600.
[15] Berman A, Plemmons R J. Nonnegative matrices in the mathematical sciences[M]. Philadelphia: SIAM Press, 1994.
[16] Varga R S. Matrix iterative analysis[M]. Englewood Cliffs, NJ: Prentice-Hall, Inc, 1962.
[17] Dou Q Y, Yin J F. Multi-level preconditioned block accelerated overrelaxation iteration method for $Z$-matrices[J]. Journal of Appl. Math. Comp., 2012, 38: 653–667. DOI:10.1007/s12190-011-0503-2