Consider the large sparse linear system
$$Ax=b, \qquad\qquad (1.1)$$
where $x, b\in R^n$ and $A=(a_{ij})\in R^{n\times n}$ is nonsingular. Such systems often arise in computational fluid dynamics, thermal, structural and circuit simulation problems and are usually solved by iterative methods. Let $A=M-N$ with $M$ nonsingular; then the basic iterative method is
$$x^{(i+1)}=Tx^{(i)}+c, \quad i=0, 1, 2, \cdots, $$
where $T =M^{-1}N$ is the iterative matrix and $c =M^{-1}b.$
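As a minimal illustration (a sketch of ours, not code from the paper; all names are our own), the basic iteration can be run with dense NumPy arrays once a splitting $A=M-N$ is chosen:

```python
import numpy as np

def stationary_solve(M, N, b, x0, tol=1e-10, maxit=10000):
    """Basic iteration x <- T x + c with T = M^{-1}N and c = M^{-1}b.

    For a splitting A = M - N with M nonsingular, the iteration converges
    for every starting vector x0 if and only if rho(T) < 1.
    """
    T = np.linalg.solve(M, N)   # T = M^{-1} N (dense, for illustration only)
    c = np.linalg.solve(M, b)   # c = M^{-1} b
    x = np.asarray(x0, dtype=float)
    for _ in range(maxit):
        x_new = T @ x + c
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new
        x = x_new
    return x
```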
Without loss of generality, in this paper we let $A=I-L-U, $ where $I$ is an $n\times n$ identity matrix and $-L$ and $-U$ are the strictly lower and strictly upper triangular parts of $A$, respectively. Let $M=(I-\gamma L)/\omega$ and $N=[(1-\omega)I+(\omega-\gamma)L+\omega U]/\omega; $ then the AOR iterative matrix is given by
$$L_{\gamma, \omega}=(I-\gamma L)^{-1}[(1-\omega)I+(\omega-\gamma)L+\omega U], \qquad\qquad (1.2)$$
where $\omega, \gamma\in[0, 2)$ ($\omega\ne 0$) are relaxation parameters. For details, the reader may refer to [1]. Obviously, when $(\omega, \gamma)=(\omega, \omega), (1, 1)$ and $(1, 0)$, the AOR iterative method reduces to the SOR method, the Gauss-Seidel method and the Jacobi method, respectively.
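For concreteness, the following sketch (helper names are ours) assembles $L_{\gamma, \omega}$ for a matrix already normalized as $A=I-L-U$; a general matrix would first be scaled by the inverse of its diagonal part:

```python
import numpy as np

def aor_iteration_matrix(A, omega, gamma):
    """L_{gamma,omega} = (I - gamma L)^{-1}[(1-omega)I + (omega-gamma)L + omega U],
    where A = I - L - U has unit diagonal."""
    n = A.shape[0]
    I = np.eye(n)
    L = -np.tril(A, -1)   # -L is the strictly lower triangular part of A
    U = -np.triu(A, 1)    # -U is the strictly upper triangular part of A
    M = (I - gamma * L) / omega
    N = ((1 - omega) * I + (omega - gamma) * L + omega * U) / omega
    return np.linalg.solve(M, N)

def spectral_radius(T):
    """rho(T): largest eigenvalue modulus."""
    return max(abs(np.linalg.eigvals(T)))
```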
In order to improve the convergence performance of the iterative methods, many preconditioned iterative methods have been proposed (see [2-14]); that is, one considers the iterative methods for the preconditioned equation
$$PAx=Pb, $$
where the preconditioner $P$ is nonsingular. For instance, the two preconditioners
$$\widetilde{P}=I+S, \quad S=(s_{ij}), \quad s_{i, i+1}=-a_{i, i+1}, \ i=1, 2, \cdots, n-1, \ \ s_{ij}=0 \ \text{otherwise}, $$
and
$$\widetilde{P}_{\alpha}=I+S_{\alpha}, \quad S_{\alpha}=(\tilde{s}_{ij}), \quad \tilde{s}_{i, i+1}=-\alpha_{i}a_{i, i+1}, \ i=1, 2, \cdots, n-1, \ \ \tilde{s}_{ij}=0 \ \text{otherwise}, $$
were proposed in [2] and [3], respectively. Obviously, the preconditioner $\widetilde{P}_{\alpha}$ proposed in [3] generalizes the preconditioner $\widetilde{P}$ of [2], and it was further generalized in [4] and [5]. Usui et al. (see [4]) provided the preconditioner $P_{U}=I+U.$ Soon after, Kotakemori (see [5]) generalized the preconditioner $P_{U}$ by adopting $P^{\alpha}_{U}=I+\alpha U, $ where $\alpha$ is a positive real parameter. They all studied the preconditioned Gauss-Seidel method and obtained comparison results on the convergence performance.
In 2008, Li et al. proposed a class of preconditioners $P_{\alpha}^{1\rightarrow k}, k=1, 2, \cdots, n-1$ (see [6]), with $P_{\alpha}^{1\rightarrow 1}=\widetilde{P}_{\alpha}, $ of the following form:
$$P_{\alpha}^{1\rightarrow k}=I+S_{\alpha}^{1\rightarrow k}, \quad (S_{\alpha}^{1\rightarrow k})_{ij}=\begin{cases}-\alpha_{i}a_{ij}, & i<j\le\min\{i+k, n\}, \\ 0, & \text{otherwise}, \end{cases}$$
where $\alpha={\rm diag}\{\alpha_{1}, \alpha_{2}, \cdots, \alpha_{n}\}$ with $0\le \alpha_{i}\le 1, \, \, 1\le i\le n, $ and there exists an $i$ such that $0< \alpha_{i}\le 1.$ Obviously, the preconditioners mentioned above are all special cases of $P_{\alpha}^{1\rightarrow k}, \, k=1, 2, \cdots, n-1.$ In that paper, they proved that the convergence speed of the preconditioned Gauss-Seidel methods is not slower than that of the standard Gauss-Seidel method. Moreover, the bigger the parameter $k$ is, the smaller the spectral radius of the preconditioned Gauss-Seidel iterative matrix is.
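The entrywise form displayed above is our reading of [6]: row $i$ of $S_{\alpha}^{1\rightarrow k}$ keeps the first $k$ superdiagonal entries of $-A$ scaled by $\alpha_{i}$, so that $k=1$ recovers $\widetilde{P}_{\alpha}$ and $k=n-1$ with constant $\alpha$ recovers $I+\alpha U$. A sketch under this reading:

```python
import numpy as np

def preconditioner(A, alpha, k):
    """P_alpha^{1->k} = I + S with S[i, j] = -alpha[i] * A[i, j]
    for i < j <= min(i + k, n) (1-based indices), and 0 elsewhere."""
    n = A.shape[0]
    S = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, min(i + k, n - 1) + 1):
            S[i, j] = -alpha[i] * A[i, j]
    return np.eye(n) + S
```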
Considering the wide applicability of the preconditioners $P_{\alpha}^{1\rightarrow k}\, (k=1, 2, \cdots, n-1)$ and of the AOR iterative method, in this paper we apply these preconditioners to the AOR iterative method when $A$ is a strictly diagonally dominant $L$-matrix. We study the convergence performance of the preconditioned AOR iterative methods. In particular, we establish the monotonicity of the spectral radius of the iterative matrices with respect to the parameters, together with some comparison results. Finally, numerical experiments verify the obtained results, showing that these preconditioners yield a considerable improvement in the rate of convergence.
The paper is organized as follows. In Section 2, we briefly review some useful notations and definitions. In Section 3, the preconditioned AOR methods are presented and some comparison theorems on their convergence performance are discussed in detail. In Section 4, some numerical examples are given. Section 5 presents some conclusions.
For the convenience of later discussions, it is necessary to introduce the following notations, definitions and lemmas.
For a vector $x\in R^n$, $x\ge 0\ (x>0)$ denotes that all components of $x$ are nonnegative (positive). For two vectors $x, y\in R^n$, $x\ge y\ (x>y)$ means that $x-y\ge 0\ (x-y>0)$. These definitions carry over immediately to matrices. For any matrix $A=(a_{ij})\in R^{n\times n}$, $\rho(A)$ denotes the spectral radius of $A$.
Definition 2.1 A matrix $A=(a_{ij})\in R^{n\times n}$ is said to be
(a) an $L$-matrix if $a_{ij}\le 0$ for any $i\ne j$ and $a_{ii}>0$ for $i=1, 2, \cdots, n$;
(b) an $M$-matrix if $A=sI-B$ with $B\ge 0, s>0$ and $\rho(B)\le s$;
(c) a strictly diagonally dominant matrix if $|a_{ii}|>\sum_{j\ne i}|a_{ij}|$ for $i=1, 2, \cdots, n.$
It is well known that a strictly diagonally dominant $L$-matrix $A$ must be a nonsingular $M$-matrix, so $A^{-1}\ge0$.
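A small helper (ours, added for convenience in the experiments below) to verify these hypotheses numerically:

```python
import numpy as np

def is_sdd_L_matrix(A):
    """True iff A is an L-matrix (positive diagonal, nonpositive off-diagonal
    entries) that is strictly diagonally dominant by rows."""
    d = np.diag(A)
    off = A - np.diag(d)
    return bool((d > 0).all() and (off <= 0).all()
                and (d > np.abs(off).sum(axis=1)).all())
```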
Definition 2.2 Let $A$ be a real matrix. The splitting $A=M-N$ is said to be
(a) regular if $M^{-1}\ge0$ and $N\ge 0$;
(b) weak regular (weak nonnegative of the first type) if $M^{-1}\ge 0$ and $M^{-1}N\ge0$;
(c) weak nonnegative of the second type if $M^{-1}\ge 0$ and $NM^{-1}\ge0$;
(d) nonnegative splitting if $M^{-1}N\geq0.$
In general, a regular splitting is weak nonnegative of either type, but the converse is not true.
Lemma 2.1 [6] Let $A$ be a strictly diagonally dominant $L$-matrix, then $P_{\alpha}^{1\rightarrow k}A$ is also a strictly diagonally dominant $L$-matrix for all $\alpha_{i}\in[0, 1], i=1, 2, \cdots, n-1, $ and $1\le k\le n-1$.
Lemma 2.2 [6] Let $A$ be a nonsingular matrix with $A^{-1}\ge0$.
(a) If $A=M-N$ is a weak nonnegative splitting of either type, then $\rho(M^{-1}N)<1$;
(b) If the splitting $A=M-N$ is weak nonnegative of the second type, then there exists a vector $x\gneq0$ such that $M^{-1}Nx=\rho(M^{-1}N)x$ and $Ax\ge0$ as well as $Nx\gneq0$.
Lemma 2.3 [7] Let $A_{1}, A_{2}\in R^{n\times n}$ and let $A_{i}=M_{i}-N_{i}, i=1, 2, $ be nonnegative splittings. If the Perron eigenvector $x_{2}\gneq0$ (the eigenvector $x_{2}$ corresponding to $\rho(T_{2})$) satisfies $T_{1}x_{2}\le T_{2}x_{2}, $ then $\rho(T_{1})\le \rho(T_{2}), $ where $T_{i}=M_{i}^{-1}N_{i}, i=1, 2.$
Lemma 2.4 [15, 16] Let $A$ be a nonnegative matrix. Then
(a) if $\alpha x\le Ax$ for some vector $x\gneq 0$, then $\alpha\le\rho(A)$;
(b) if $A$ is irreducible and $\alpha x\leq Ax\leq\beta x$ for some vector $x\gneq 0$, with neither inequality being an equality, then $\alpha<\rho(A)<\beta$ and $x>0$;
(c) $\alpha>\rho(A)$ if and only if $\alpha I-A$ is nonsingular and $(\alpha I-A)^{-1}\ge0$.
Lemma 2.5 [16] Let $A\in R^{n\times n}$ be nonnegative, then
(a) $\rho(A)$ is an eigenvalue of $A$; if $A$ is also irreducible, then $\rho(A)>0;$
(b) there is an eigenvector $x\gneq0$ corresponding to $\rho(A)$, called the Perron vector; if $A$ is also irreducible, then $x>0$.
In this section, we will discuss the preconditioned AOR iterative methods with the preconditioners $P_{\alpha}^{1\rightarrow k}, k=1, 2, \cdots, n-1, $ from two aspects, that is, with respect to the parameter $k$ and with respect to the parameter $\alpha, $ respectively. Some comparison results for the preconditioned AOR methods will be stated and proved.
Let $A_{D}, A_{L}$ and $A_{U}$ denote the diagonal, strictly lower triangular and strictly upper triangular parts of $A$, respectively, that is, $A=A_{D}+A_{L}+A_{U}.$ Let $P_{\alpha}^{1\rightarrow k}=I+S_{\alpha}^{1\rightarrow k}, k=1, 2, \cdots, n-1.$ For any positive integers $k$ and $k+1$ satisfying $1\le k\le k+1\le n-1, $ we consider the AOR splittings of $P_{\alpha}^{1\rightarrow k}A$ and $P_{\alpha}^{1\rightarrow (k+1)}A:$
$$P_{\alpha}^{1\rightarrow k}A=M_{\alpha}^{k}-N_{\alpha}^{k}, \qquad\qquad (3.1)$$
where $M_{\alpha}^{k}=[I-(S_{\alpha}^{1\rightarrow k}L)_{D}-\gamma L-\gamma(S_{\alpha}^{1\rightarrow k}L)_{L}]/\omega$ and $N_{\alpha}^{k}=M_{\alpha}^{k}-P_{\alpha}^{1\rightarrow k}A, $ and
$$P_{\alpha}^{1\rightarrow (k+1)}A=M_{\alpha}^{k+1}-N_{\alpha}^{k+1}, \qquad\qquad (3.2)$$
where $M_{\alpha}^{k+1}=[I-(S_{\alpha}^{1\rightarrow (k+1)}L)_{D}-\gamma L-\gamma(S_{\alpha}^{1\rightarrow (k+1)}L)_{L}]/\omega$ and $N_{\alpha}^{k+1}=M_{\alpha}^{k+1}-P_{\alpha}^{1\rightarrow (k+1)}A.$ So the iterative matrices of the AOR methods for $P_{\alpha}^{1\rightarrow k}A$ and $P_{\alpha}^{1\rightarrow (k+1)}A$ are $L_{\gamma, \omega}^{\alpha, k}=(M_{\alpha}^{k})^{-1}N_{\alpha}^{k}$ and $L_{\gamma, \omega}^{\alpha, k+1}=(M_{\alpha}^{k+1})^{-1}N_{\alpha}^{k+1}$, respectively.
By Lemma 2.1, we know that both the splittings (3.1) and (3.2) are regular splittings. Furthermore, by Lemma 2.2 (a), they are convergent.
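Numerically, the splitting (3.1) and its iterative matrix can be formed as below (a sketch of ours, reusing the helpers `preconditioner` and `spectral_radius` defined earlier; $(S_{\alpha}^{1\rightarrow k}L)_{D}$ and $(S_{\alpha}^{1\rightarrow k}L)_{L}$ are extracted with `np.diag` and `np.tril`):

```python
import numpy as np

def preconditioned_aor_matrix(A, alpha, k, omega, gamma):
    """L_{gamma,omega}^{alpha,k} = (M_alpha^k)^{-1} N_alpha^k for splitting (3.1)."""
    n = A.shape[0]
    I = np.eye(n)
    L = -np.tril(A, -1)                    # A = I - L - U with unit diagonal
    P = preconditioner(A, alpha, k)        # P_alpha^{1->k} = I + S_alpha^{1->k}
    SL = (P - I) @ L                       # S_alpha^{1->k} L
    M = (I - np.diag(np.diag(SL)) - gamma * L - gamma * np.tril(SL, -1)) / omega
    N = M - P @ A                          # N_alpha^k = M_alpha^k - P_alpha^{1->k} A
    return np.linalg.solve(M, N)
```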
Theorem 3.1 Let $A$ be a strictly diagonally dominant $L$-matrix and $0\le \gamma\le \omega\le 1\ (\omega\ne0); $ then $\rho(L_{\gamma, \omega}^{\alpha, k+1})\le\rho(L_{\gamma, \omega}^{\alpha, k})\le\rho(L_{\gamma, \omega})<1$ for all $\alpha_{i}\in[0, 1], i=1, 2, \cdots, n-1, $ and $1\le k\le k+1\le n-1, $ where $L_{\gamma, \omega}^{\alpha, k}, $ $L_{\gamma, \omega}^{\alpha, k+1}$ and $L_{\gamma, \omega}$ are the iterative matrices of the AOR methods for $P_{\alpha}^{1\rightarrow k}A, $ $P_{\alpha}^{1\rightarrow (k+1)}A$ and $A$, respectively.
Proof Since $A=(I-\gamma L)/\omega-[(1-\omega)I+(\omega-\gamma)L+\omega U]/\omega$ is a regular splitting, $\rho(L_{\gamma, \omega})<1$ and there exists a vector $x\gneq0$ such that $L_{\gamma, \omega}x=\rho(L_{\gamma, \omega})x$ and $Ax\ge0 $ by Lemma 2.2 (b). Thus
$$\rho(L_{\gamma, \omega})x=L_{\gamma, \omega}x=x-\omega(I-\gamma L)^{-1}Ax.$$
From $S_{\alpha}^{1\rightarrow k}L\ge0, $ we know
$$M_{\alpha}^{k}=[I-(S_{\alpha}^{1\rightarrow k}L)_{D}-\gamma L-\gamma(S_{\alpha}^{1\rightarrow k}L)_{L}]/\omega\le(I-\gamma L)/\omega.$$
So $(M_{\alpha}^{k})^{-1}\ge \omega(I-\gamma L)^{-1}$ since $(I-\gamma L)^{-1}\ge0$ and $(M_{\alpha}^{k})^{-1}\ge0.$ Therefore, noting that $P_{\alpha}^{1\rightarrow k}Ax\ge Ax\ge0, $
$$L_{\gamma, \omega}^{\alpha, k}x=x-(M_{\alpha}^{k})^{-1}P_{\alpha}^{1\rightarrow k}Ax\le x-\omega(I-\gamma L)^{-1}Ax=L_{\gamma, \omega}x. \qquad\qquad (3.3)$$
It follows from $(M_{\alpha}^{k})^{-1}N_{\alpha}^{k}\ge 0$ and Lemma 2.3 that $\rho(L_{\gamma, \omega}^{\alpha, k})\le\rho(L_{\gamma, \omega}).$
For the vector $x, $
$$L_{\gamma, \omega}^{\alpha, k}x=x-(M_{\alpha}^{k})^{-1}P_{\alpha}^{1\rightarrow k}Ax \quad \text{and} \quad L_{\gamma, \omega}^{\alpha, k+1}x=x-(M_{\alpha}^{k+1})^{-1}P_{\alpha}^{1\rightarrow (k+1)}Ax.$$
Since both $S_{\alpha}^{1\rightarrow k}L$ and $(S_{\alpha}^{1\rightarrow (k+1)}-S_{\alpha}^{1\rightarrow k})L$ are nonnegative, the following inequality holds:
$$M_{\alpha}^{k+1}\le M_{\alpha}^{k}\le(I-\gamma L)/\omega.$$
Notice that both $ M_{\alpha}^{k+1}$ and $ M_{\alpha}^{k}$ are nonsingular $M$-matrices, so $(M_{\alpha}^{k+1})^{-1}\ge (M_{\alpha}^{k})^{-1}.$ Thus, since $P_{\alpha}^{1\rightarrow (k+1)}Ax\ge P_{\alpha}^{1\rightarrow k}Ax\ge0, $
$$L_{\gamma, \omega}^{\alpha, k+1}x=x-(M_{\alpha}^{k+1})^{-1}P_{\alpha}^{1\rightarrow (k+1)}Ax\le x-(M_{\alpha}^{k})^{-1}P_{\alpha}^{1\rightarrow k}Ax=L_{\gamma, \omega}^{\alpha, k}x.$$
Then the result $\rho((M_{\alpha}^{k+1})^{-1}N_{\alpha}^{k+1})\le\rho((M_{\alpha}^{k})^{-1}N_{\alpha}^{k}) $ follows from Rheinboldt and Vandergraft (see [8]) and Lemma 2.2 in [9]. Therefore the assertion $\rho(L_{\gamma, \omega}^{\alpha, k+1})\le\rho(L_{\gamma, \omega}^{\alpha, k})\le\rho(L_{\gamma, \omega})<1$ is proved.
Remark 3.1 (1) Theorem 3.1 indicates that, for the AOR iteration, the bigger the parameter $k$ is (i.e., the more superdiagonals of $A$ are used in the preconditioner), the smaller the spectral radius of the iterative matrix is. Consequently, for the same parameter $\alpha$, the preconditioner $I+\alpha U=P_{\alpha}^{1\rightarrow(n-1)}$ proposed in [5] is better than the preconditioner $I+S_{\alpha}=P_{\alpha}^{1\rightarrow 1}$ proposed in [3] for the AOR iterative method.
(2) In Theorem 3.1, letting $(\omega, \gamma)=(\omega, \omega), (1, 1)$ and $(1, 0), $ the comparison theorems for the preconditioned SOR, Gauss-Seidel and Jacobi methods are obtained, respectively. Therefore, Theorem 3.2 and Theorem 3.4 in [6] are special cases of this theorem (see Corollary 3.2 and Corollary 3.3).
Corollary 3.1 Let $A$ be a strictly diagonally dominant $L$-matrix, $0\le \omega\le 1\ (\omega\ne0), $ then
$$\rho(L_{\omega}^{\alpha, k+1})\le\rho(L_{\omega}^{\alpha, k})\le\rho(L_{\omega})<1$$
for all $\alpha_{i}\in[0, 1], i=1, 2, \cdots, n-1, $ and $1\le k\le k+1\le n-1, $ where $L_{\omega}^{\alpha, k}, $ $L_{\omega}^{\alpha, k+1}$ and $L_{\omega}$ are the iterative matrices of the SOR methods for $P_{\alpha}^{1\rightarrow k}A, $ $P_{\alpha}^{1\rightarrow (k+1)}A$ and $A$, respectively.
Corollary 3.2 Let $A$ be a strictly diagonally dominant $L$-matrix, then
$$\rho(G_{\alpha}^{k+1})\le\rho(G_{\alpha}^{k})\le\rho(G)<1$$
for all $\alpha_{i}\in[0, 1], i=1, 2, \cdots, n-1, $ and $1\le k\le k+1\le n-1, $ where $G_{\alpha}^{k}, $ $G_{\alpha}^{k+1}$ and $G$ are the iterative matrices of the Gauss-Seidel methods for $P_{\alpha}^{1\rightarrow k}A, $ $P_{\alpha}^{1\rightarrow (k+1)}A$ and $A$, respectively.
Corollary 3.3 Let $A$ be a strictly diagonally dominant $L$-matrix, then
$$\rho(J_{\alpha}^{k+1})\le\rho(J_{\alpha}^{k})\le\rho(J)<1$$
for all $\alpha_{i}\in[0, 1], i=1, 2, \cdots, n-1, $ and $1\le k\le k+1\le n-1, $ where $J_{\alpha}^{k}, $ $J_{\alpha}^{k+1}$ and $J$ are the iterative matrices of the Jacobi methods for $P_{\alpha}^{1\rightarrow k}A, $ $P_{\alpha}^{1\rightarrow (k+1)}A$ and $A$, respectively.
Theorem 3.2 Let $A\in R^{n\times n}$ be an irreducible strictly diagonally dominant $L$-matrix, then $P_{\alpha}^{1\rightarrow k}A$ is also an irreducible strictly diagonally dominant $L$-matrix for all $\alpha_{i}\in[0, 1), i=1, 2, \cdots, n-1, $ and $1\le k\le n-1$.
Proof Let $\tilde{A}_{\alpha}^{k}=P_{\alpha}^{1\rightarrow k}A=(I+S_{\alpha}^{1\rightarrow k})A; $ then $\tilde{A}_{\alpha}^{k}$ is also a strictly diagonally dominant $L$-matrix by Lemma 2.1. And
$$\tilde{A}_{\alpha}^{k}=(I+S_{\alpha}^{1\rightarrow k})(I-L-U)=[I-(S_{\alpha}^{1\rightarrow k}L)_{D}]-[L+(S_{\alpha}^{1\rightarrow k}L)_{L}]-[U-S_{\alpha}^{1\rightarrow k}+(S_{\alpha}^{1\rightarrow k}L)_{U}+S_{\alpha}^{1\rightarrow k}U].$$
Let $F=U-S_{\alpha}^{1\rightarrow k}=(f_{ij}), E=L+(S_{\alpha}^{1\rightarrow k}L)_{L}=(e_{ij}), $ and $L=(l_{ij}), U=(u_{ij}); $ then from the definition of $P_{\alpha}^{1\rightarrow k}$ we know that
$$f_{ij}=\begin{cases}(1-\alpha_{i})u_{ij}, & i<j\le\min\{i+k, n\}, \\ u_{ij}, & \text{otherwise}.\end{cases}$$
Thus if $u_{ij}\ne0, $ then $f_{ij}\ne0$ for $\alpha_{i}\in[0, 1), i=1, 2, \cdots, n-1; $ and if $l_{ij}\ne0, $ then $e_{ij}\ne0, $ since $(S_{\alpha}^{1\rightarrow k}L)_{L}\ge0.$ Since both $(S_{\alpha}^{1\rightarrow k}L)_{U}$ and $S_{\alpha}^{1\rightarrow k}U$ are nonnegative, we can conclude that $\tilde{A}_{\alpha}^{k}$ is also an irreducible strictly diagonally dominant $L$-matrix for $\alpha_{i}\in[0, 1), i=1, 2, \cdots, n-1, $ and $1\le k\le n-1$.
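Irreducibility is equivalent to strong connectivity of the directed graph of the nonzero pattern, which yields a convenient numerical check (our helper, not from the paper; it relies on SciPy's `connected_components`):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def is_irreducible(A):
    """A square matrix is irreducible iff the directed graph of its
    nonzero pattern is strongly connected."""
    graph = csr_matrix((A != 0).astype(int))
    n_comp, _ = connected_components(graph, directed=True, connection='strong')
    return n_comp == 1
```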
Theorem 3.3 Let $A$ be an irreducible strictly diagonally dominant $L$-matrix, $0\le \gamma\le \omega\le 1\ (\omega\ne0, \gamma\ne1), $ then
$$\rho(L_{\gamma, \omega}^{\alpha, k+1})<\rho(L_{\gamma, \omega}^{\alpha, k})<\rho(L_{\gamma, \omega})<1$$
for all $\alpha_{i}\in[0, 1), i=1, 2, \cdots, n-1\, (\alpha\ne 0), $ and $1\le k\le k+1\le n-1, $ where $L_{\gamma, \omega}^{\alpha, k}, $ $L_{\gamma, \omega}^{\alpha, k+1}$ and $L_{\gamma, \omega}$ are the iterative matrices of the AOR methods for $P_{\alpha}^{1\rightarrow k}A, $ $P_{\alpha}^{1\rightarrow (k+1)}A$ and $A$, respectively, and $P_{\alpha}^{1\rightarrow (1+k)}\ne P_{\alpha}^{1\rightarrow k}$.
Proof From (1.2) and Lemma 2.4 (c), the following equality holds:
$$L_{\gamma, \omega}=(I-\gamma L)^{-1}[(1-\omega)I+(\omega-\gamma)L+\omega U]=(1-\omega)I+\omega(1-\gamma)L+\omega U+T, $$
where $T\ge 0.$ Since $A$ is irreducible, $(1-\omega)I+\omega(1-\gamma)L+\omega U$ is irreducible when $0\le \gamma\le \omega\le 1\ (\omega\ne0, \gamma\ne 1)$. So with $T\ge 0, $ we know that $L_{\gamma, \omega}$ is irreducible.
From Theorem 3.2, we also know that $\tilde{A}_{\alpha}^{k}$ is an irreducible strictly diagonally dominant $L$-matrix for $\alpha_{i}\in[0, 1), i=1, 2, \cdots, n-1, $ and $1\le k\le n-1.$ So both $ M_{\alpha}^{k+1}$ and $ M_{\alpha}^{k}$ are nonsingular, and both $L_{\gamma, \omega}^{\alpha, k}$ and $L_{\gamma, \omega}^{\alpha, k+1}$ are well defined. Furthermore, it can be proved that both $L_{\gamma, \omega}^{\alpha, k}$ and $L_{\gamma, \omega}^{\alpha, k+1}$ are irreducible by a method similar to the above.
From the proof of Theorem 3.1, we know there exists an $x\gneq0$ such that $(M_{\alpha}^{k})^{-1}N_{\alpha}^{k}x\ge (M_{\alpha}^{k+1})^{-1}N_{\alpha}^{k+1}x.$ Therefore, $\rho(L_{\gamma, \omega}^{\alpha, k+1})<\rho(L_{\gamma, \omega}^{\alpha, k})$ by Lemma 2.4 (b) and $P_{\alpha}^{1\rightarrow (1+k)}\ne P_{\alpha}^{1\rightarrow k}$. Moreover, $\rho(L_{\gamma, \omega}^{\alpha, k})<\rho(L_{\gamma, \omega})<1$ from (3.3) and $\alpha\ne0$.
The proof is completed.
Remark 3.2 (1) Theorem 3.3 can be seen as a further improvement of Theorem 3.1: if the matrix in Theorem 3.1 is irreducible, then the inequalities become strict.
(2) From Theorem 3.3, we can easily obtain the following comparison results for the preconditioned SOR and Jacobi methods. Consequently, the result for the preconditioned Jacobi method improves that of Theorem 3.4 in [6] (see Corollary 3.5).
Corollary 3.4 Let $A$ be an irreducible strictly diagonally dominant $L$-matrix, $0<\omega<1, $ then $\rho(L_{\omega}^{\alpha, k+1})<\rho(L_{\omega}^{\alpha, k})<\rho(L_{\omega})<1$ for all $\alpha_{i}\in[0, 1), i=1, 2, \cdots, n-1\, (\alpha\ne 0), $ and $1\le k\le k+1\le n-1, $ where $L_{\omega}^{\alpha, k}, $ $L_{\omega}^{\alpha, k+1}$ and $L_{\omega}$ are the iterative matrices of the SOR methods for $P_{\alpha}^{1\rightarrow k}A, $ $P_{\alpha}^{1\rightarrow (k+1)}A$ and $A$, respectively, and $P_{\alpha}^{1\rightarrow (1+k)}\ne P_{\alpha}^{1\rightarrow k}$.
Corollary 3.5 Let $A$ be an irreducible strictly diagonally dominant $L$-matrix, then
$$\rho(J_{\alpha}^{k+1})<\rho(J_{\alpha}^{k})<\rho(J)<1$$
for all $\alpha_{i}\in[0, 1), i=1, 2, \cdots, n-1\, (\alpha\ne 0), $ and $1\le k\le k+1\le n-1, $ where $J_{\alpha}^{k}, $ $J_{\alpha}^{k+1}$ and $J$ are the iterative matrices of the Jacobi methods for $P_{\alpha}^{1\rightarrow k}A, $ $P_{\alpha}^{1\rightarrow (k+1)}A$ and $A$, respectively, and $P_{\alpha}^{1\rightarrow (1+k)}\ne P_{\alpha}^{1\rightarrow k}$.
Let $\beta={\rm diag}\{\beta_{1}, \beta_{2}, \cdots, \beta_{n}\}, 0\le \beta_{i}\le 1, \, \, i=1, 2, \cdots, n.$ Let $M_{\beta}^{k}=[I-(S_{\beta}^{1\rightarrow k}L)_{D}-\gamma L-\gamma(S_{\beta}^{1\rightarrow k}L)_{L}]/\omega$ and assume $ \alpha_{i}\le \beta_{i}, \, \, i=1, 2, \cdots, n; $ then we have
$$M_{\beta}^{k}\le M_{\alpha}^{k}, $$
thus $(M_{\alpha}^{k})^{-1}\le (M_{\beta}^{k})^{-1}$ from $(M_{\alpha}^{k})^{-1}\ge 0$ and $(M_{\beta}^{k})^{-1}\ge 0.$ So the proof of the following monotonicity result with respect to the parameter $\alpha$ is entirely analogous to the proof of Theorem 3.1.
Theorem 3.4 Let $A$ be a strictly diagonally dominant $L$-matrix, $0\le \gamma\le \omega\le 1\ (\omega\ne0), $ then $\rho(L_{\gamma, \omega}^{\beta, k})\le\rho(L_{\gamma, \omega}^{\alpha, k})\le\rho(L_{\gamma, \omega})<1$ for any $0\le\alpha_{i}\le \beta_{i}\le1, i=1, 2, \cdots, n-1, $ where $L_{\gamma, \omega}^{\beta, k}, $ $L_{\gamma, \omega}^{\alpha, k}$ and $L_{\gamma, \omega}$ are the iterative matrices of the AOR methods for $P_{\beta}^{1\rightarrow k}A, $ $P_{\alpha}^{1\rightarrow k}A$ and $A$, respectively.
Remark 3.3 Theorem 3.4 indicates that the spectral radius of the preconditioned AOR methods is monotone with respect to the parameter $\alpha.$ The same holds for the preconditioned SOR, Gauss-Seidel and Jacobi methods by this theorem, which shows that Theorem 3.3 and Theorem 3.5 in [6] are special cases of our results (see Corollary 3.7 and Corollary 3.8).
Corollary 3.6 Let $A$ be a strictly diagonally dominant $L$-matrix, $0< \omega\le 1, $ then
$$\rho(L_{\omega}^{\beta, k})\le\rho(L_{\omega}^{\alpha, k})\le\rho(L_{\omega})<1$$
for any $0\le\alpha_{i}\le \beta_{i}\le1, i=1, 2, \cdots, n-1, $ where $L_{\omega}^{\beta, k}, \, \, L_{\omega}^{\alpha, k}$ and $L_{\omega}$ are the iterative matrices of the SOR methods for $P_{\beta}^{1\rightarrow k}A, \, \, P_{\alpha}^{1\rightarrow k}A$ and $A$, respectively.
Corollary 3.7 Let $A$ be a strictly diagonally dominant $L$-matrix, then
$$\rho(G_{\beta}^{k})\le\rho(G_{\alpha}^{k})\le\rho(G)<1$$
for any $0\le\alpha_{i}\le \beta_{i}\le1, i=1, 2, \cdots, n-1, $ where $G_{\beta}^{k}, \, \, G_{\alpha}^{k}$ and $G$ are the iterative matrices of the Gauss-Seidel methods for $P_{\beta}^{1\rightarrow k}A, \, \, P_{\alpha}^{1\rightarrow k}A$ and $A$, respectively.
Corollary 3.8 Let $A$ be a strictly diagonally dominant $L$-matrix, then
$$\rho(J_{\beta}^{k})\le\rho(J_{\alpha}^{k})\le\rho(J)<1$$
for any $0\le\alpha_{i}\le \beta_{i}\le1, i=1, 2, \cdots, n-1, $ where $J_{\beta}^{k}, \, \, J_{\alpha}^{k}$ and $J$ are the iterative matrices of the Jacobi methods for $P_{\beta}^{1\rightarrow k}A, \, \, P_{\alpha}^{1\rightarrow k}A$ and $A$, respectively.
For an irreducible strictly diagonally dominant $L$-matrix, the strict monotonicity result with respect to the parameter $\alpha$ can be obtained in a way similar to the proof of Theorem 3.3.
Theorem 3.5 Let $A$ be an irreducible strictly diagonally dominant $L$-matrix, $0\le \gamma\le \omega\le 1\ (\omega\ne0, \gamma\ne1), $ then $\rho(L_{\gamma, \omega}^{\beta, k})<\rho(L_{\gamma, \omega}^{\alpha, k})<\rho(L_{\gamma, \omega})<1$ for any $0\le\alpha_{i}\le \beta_{i}\le1, i=1, 2, \cdots, n-1\, (\alpha\ne 0, \alpha\ne \beta), $ where $L_{\gamma, \omega}^{\beta, k}, $ $L_{\gamma, \omega}^{\alpha, k}$ and $L_{\gamma, \omega}$ are the iterative matrices of the AOR methods for $P_{\beta}^{1\rightarrow k}A, $ $P_{\alpha}^{1\rightarrow k}A$ and $A$, respectively.
Corollary 3.9 Let $A$ be an irreducible strictly diagonally dominant $L$-matrix, $0<\omega\le 1, $ then
$$\rho(L_{\omega}^{\beta, k})<\rho(L_{\omega}^{\alpha, k})<\rho(L_{\omega})<1$$
for any $0\le\alpha_{i}\le \beta_{i}\le1, i=1, 2, \cdots, n-1\, (\alpha\ne 0, \alpha\ne \beta), $ where $L_{\omega}^{\beta, k}, $ $L_{\omega}^{\alpha, k}$ and $L_{\omega}$ are the iterative matrices of the SOR methods for $P_{\beta}^{1\rightarrow k}A, $ $P_{\alpha}^{1\rightarrow k}A$ and $A$, respectively.
Corollary 3.10 Let $A$ be an irreducible strictly diagonally dominant $L$-matrix, then
$$\rho(J_{\beta}^{k})<\rho(J_{\alpha}^{k})<\rho(J)<1$$
for any $0\le\alpha_{i}\le \beta_{i}\le1, i=1, 2, \cdots, n-1\, (\alpha\ne 0, \alpha\ne \beta), $ where $J_{\beta}^{k}, $ $J_{\alpha}^{k}$ and $J$ are the iterative matrices of the Jacobi methods for $P_{\beta}^{1\rightarrow k}A, $ $P_{\alpha}^{1\rightarrow k}A$ and $A$, respectively.
Remark 3.4 (1) Obviously, Theorem 3.5 is an improvement of Theorem 3.4: if the matrix $A$ in Theorem 3.4 is irreducible, the inequalities become strict. Consequently, Corollary 3.10 improves the result of Theorem 3.5 in [6].
(2) The preconditioners $P_{\alpha}^{1\rightarrow k}, k=1, 2, \cdots, n-1, $ are defined by upper triangular, nonnegative matrices. Symmetrically, if we define the corresponding lower triangular, nonnegative matrices as dual preconditioners, then we can obtain theorems parallel to the above theorems and corollaries.
In this section, some experiments are provided to illustrate the convergence behavior of the preconditioned AOR/SOR iterative methods.
The spectral radii of the iterative matrices of these methods are reported and analyzed in detail.
Example 4.1 The coefficient matrix $A$ (see [7]) is given by
where $t=2.5, n=200$.
Obviously, the matrix $A$ satisfies all the conditions of the theorems in this paper. By direct computation, we have $\rho(L_{\gamma, \omega})=0.3955$ with $\gamma=0.7, \omega=0.9$ for the AOR method and $\rho(L_{\omega})=0.3261$ with $\omega=0.9$ for the SOR method.
Table 1 and Table 2 report the spectral radii of the iterative matrices of the preconditioned AOR/SOR methods as the parameters $k$ and $\alpha$ vary.
First, Tables 1 and 2 show that the preconditioners can greatly accelerate the convergence rate of the method for any fixed parameter $\alpha$ or $k$. Second, it is observed that the spectral radius of the iterative matrices of the preconditioned AOR/SOR methods decreases monotonically as $k$ increases when $\alpha$ is fixed, which shows that the more superdiagonals are selected for the preconditioner, the smaller the spectral radius of the iterative matrix is. Third, it can also be seen that the spectral radius of the iterative matrices of the preconditioned AOR/SOR methods decreases monotonically as $\alpha$ increases when $k$ is fixed, which indicates that the spectral radius of the iterative matrices is monotone with respect to the parameter $\alpha$.
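Sweeps of this kind can be reproduced with the helpers defined above on any matrix satisfying the hypotheses; since the exact matrix of Example 4.1 is the one given in [7], the small matrix below is only our own stand-in for illustration:

```python
import numpy as np

# A stand-in irreducible strictly diagonally dominant L-matrix (not the matrix of [7]).
n = 10
A = np.eye(n) - 0.08 * (np.ones((n, n)) - np.eye(n))
assert is_sdd_L_matrix(A) and is_irreducible(A)

gamma, omega = 0.7, 0.9
print("unpreconditioned:", spectral_radius(aor_iteration_matrix(A, omega, gamma)))
for k in (1, 3, n - 1):                    # more superdiagonals ...
    for a in (0.25, 0.5, 1.0):             # ... and larger alpha
        alpha = np.full(n, a)
        rho = spectral_radius(preconditioned_aor_matrix(A, alpha, k, omega, gamma))
        print(f"k = {k}, alpha = {a}: rho = {rho:.4f}")
```

By Theorems 3.1 and 3.4, the printed spectral radii should be nonincreasing along both loops, which is what such a run exhibits.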
Example 4.2 Consider a nonsingular $Z$-matrix $A$ (see [7]) given by
where $c_1=-\frac{7}{10n}, c_2=-\frac{7}{10n+1}, c_{3}=-\frac{7}{10n+2}$ and $n=200$. It is easy to verify that $A$ is an irreducible strictly diagonally dominant $L$-matrix. By direct computation, we have $\rho(L_{\gamma, \omega})=0.6317$ with $\gamma=0.7, \omega=0.9$ for the AOR method and $\rho(L_{\omega})=0.5849$ with $\omega=0.9$ for the SOR method.
Table 3 and Table 4 report the spectral radii of the iterative matrices of the preconditioned AOR/SOR methods as the parameters $k$ and $\alpha$ vary.
From Tables 3 and 4, we can see that the preconditioners greatly accelerate the convergence rate of the method. It can be observed that the spectral radius of the iterative matrices of the preconditioned AOR/SOR methods decreases monotonically as $k$ increases when the parameter $\alpha$ is fixed. So, the more superdiagonals are selected for the preconditioner, the smaller the spectral radius of the iterative matrix is. Furthermore, we also see that the convergence of the preconditioned AOR/SOR methods becomes considerably faster as $\alpha$ increases, which shows that the spectral radius of the iterative matrices is monotone with respect to the parameter $\alpha$.
In a word, the above tables show that the preconditioners $P_{\alpha}^{1\rightarrow k}$ with larger $k$ and $\alpha$ are more efficient in accelerating the convergence of the AOR/SOR method.
In this paper, the preconditioners $P_{\alpha}^{1\rightarrow k}$ are applied to the AOR iterative method. The convergence performance of the preconditioned AOR methods is analyzed when the coefficient matrix is a strictly diagonally dominant $L$-matrix. Some comparison theorems on the influence of the parameters $\alpha$ and $k$ on the convergence rate of the preconditioned methods are given. The numerical examples further verify these results, showing that the preconditioners $P_{\alpha}^{1\rightarrow k}$ with larger $k$ and $\alpha$ are efficient and competitive.