Journal of Mathematics (数学杂志), 2015, Vol. 35, Issue 2: 207-213
A NOTE ON THE APPROXIMATED INVERSE OF A NON-NEGATIVE SYMMETRIC MATRIX
GOU Ting-xun, GAO Xin    
School of Mathematics and Statistics, Central China Normal University, Wuhan 430079, China
Abstract: This paper studies the approximate inverse of a nonnegative symmetric matrix. Using the matrix $S=(s_{i, j})$ to approximate the inverse, we obtain an explicit bound on the approximation error, and we prove that the inverse is well approximated, uniformly to the order $1/(n-1)^2$, for large $n$.
Key words: approximation error; inverse; symmetric; positive elements
1 Introduction

When solving a large system of linear equations, a good approximate inverse of the coefficient matrix is crucial for establishing fast convergence rates of iterative algorithms; see the extensive reviews [1, 5, 7, 20]. Here we are concerned with an $n\times n$ symmetric diagonally dominant matrix $T=(t_{i, j})$ with positive elements, i.e.,

$t_{i, j}=t_{j, i}>0 \;\mbox{and}\; t_{i, i}\ge \sum\limits_{j=1, j\neq i}^{n}t_{i, j},\;i=1, \cdots, n.$ (1.1)
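For instance (an illustrative example, not from the original), the $3\times 3$ matrix

$T=\begin{pmatrix} 3 & 1 & 2\\ 1 & 4 & 3\\ 2 & 3 & 5 \end{pmatrix}$

satisfies (1.1): each diagonal entry equals or exceeds the sum of the other entries in its row.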

It is easy to show that $T$ must be positive definite. This kind of diagonally dominant nonnegative matrix has received wide attention [6, 8, 10], and problems concerning inverses of nonnegative matrices have been investigated in [2, 7, 9]. Markham [13] and Martínez et al. [14] studied sufficient conditions under which the inverses of nonnegative matrices are $M$-matrices. We propose to approximate the inverse $T^{-1}$ of $T$ by the matrix $S=(s_{i, j})$, where

$s_{i, j}=\frac{\delta_{i, j}}{t_{i, i}}-\frac{1}{t_{..}},$

and $t_{..}=\sum\limits_{i, j=1 }^{n}(1-\delta_{i, j}) t_{i, j}$. In the special case that $t_{i, i}=\sum\limits_{j\neq i} t_{i, j}$ for all $i$, Yan and Xu [19] obtained an upper bound on the error of approximating the inverse of $T$ by $S$; this bound is crucial for establishing the asymptotic normality of an estimated vector in the $\beta$-model for undirected random graphs with a growing number of nodes. In this paper, we derive an explicit upper bound on the approximation error for the general case (1.1).
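To make the construction concrete, the following is a minimal numerical sketch (illustrative code, not part of the paper; the random matrix and its size are assumptions) that forms $S$ from $T$ and measures the entrywise maximum error $\max_{i,j}|(T^{-1}-S)_{i,j}|$:

    import numpy as np

    def approx_inverse(T):
        # s_{i,j} = delta_{i,j}/t_{i,i} - 1/t_.. , where t_.. is the sum of
        # the off-diagonal entries of T.
        t_dotdot = T.sum() - np.trace(T)
        return np.diag(1.0 / np.diag(T)) - 1.0 / t_dotdot

    rng = np.random.default_rng(0)
    n = 200
    A = rng.uniform(1.0, 2.0, (n, n))
    T = (A + A.T) / 2.0                 # symmetric with positive entries
    np.fill_diagonal(T, 0.0)
    np.fill_diagonal(T, T.sum(axis=1))  # t_{i,i} = sum_{j != i} t_{i,j}, so (1.1) holds
    print(np.abs(np.linalg.inv(T) - approx_inverse(T)).max())

Empirically, the printed error decays at roughly the rate $1/(n-1)^2$ as $n$ grows, consistent with the bound derived below.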

2 An Explicit Bound on the Approximation Error

Let $m:=\min\limits_{1\le i<j\le n} t_{i, j}$, $\Delta_i:= t_{i, i}-\sum\limits_{j\neq i} t_{i, j} $, $M:=\max\{ \max\limits_{1\le i<j\le n} t_{i, j}, \max\limits_{1\le i\le n} \Delta_i\}$, and for a matrix $A=(a_{i, j})$, define $||A||:=\max_{i, j} |a_{i, j}|$. We have the following theorem.

Theorem 2.1 If

$C(m, M) =\frac{2(n-2)m }{ nM+(n-2)m}-\frac{M}{m(n-1)}-\frac{ (n-2)Mm }{ [(n-2)m+M][(n-2)m + 2M] }>0,$ (2.1)


then

$||T^{-1}-S|| \le \frac{1}{(n-1)^2} \times \left [\frac{1}{C(m, M)} (\frac{M}{m^2}+ \frac{4M}{m^2n} )+\frac{n-1}{mn} \right]. $
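As a hedged numerical check (illustrative code, not part of the proof; it reuses the `approx_inverse` helper from the Section 1 sketch), one can evaluate $C(m, M)$ and the right-hand side of the theorem:

    import numpy as np

    def theorem_bound(T):
        # Computes the bound of Theorem 2.1 for a matrix satisfying (1.1);
        # returns None when condition (2.1) fails.
        n = T.shape[0]
        off = T[~np.eye(n, dtype=bool)]          # off-diagonal entries
        m = off.min()
        Delta = np.diag(T) - (T.sum(axis=1) - np.diag(T))
        M = max(off.max(), Delta.max())
        C = (2 * (n - 2) * m / (n * M + (n - 2) * m)
             - M / (m * (n - 1))
             - (n - 2) * M * m / (((n - 2) * m + M) * ((n - 2) * m + 2 * M)))
        if C <= 0:
            return None
        return ((M / m**2 + 4 * M / (m**2 * n)) / C + (n - 1) / (m * n)) / (n - 1)**2

For a matrix $T$ as in the earlier sketch, `theorem_bound(T)` should dominate `np.abs(np.linalg.inv(T) - approx_inverse(T)).max()`.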

Proof Let $I_n$ be the $n\times n$ identity matrix. Define $F=(f_{ij})=T^{-1}-S$, $V=(v_{ij})=I_n-TS$ and $W=(w_{ij})=SV$. We have the recursion

$F=T^{-1}-S=(T^{-1}-S)(I_n-TS)+S(I_n-TS)=FV+W.$ (2.2)

Note that

$\begin{aligned}v_{i, j}& =\delta_{i, j}-\sum\limits_{k=1}^n t_{i, k}s_{k, j}\\& =\delta_{i, j}-\sum\limits_{k=1}^n t_{i, k}(\frac{\delta_{k, j}}{ t_{k, k} }-\frac{1}{t_{..}})\\& =(\delta_{i, j}-1)\frac{t_{i, j}}{t_{j, j}}+\frac{2t_{i, i}-\Delta_i}{t_{..}},\end{aligned}$ (2.3)

and

$\begin{aligned}w_{i, j}&=\sum\limits_{k=1}^n s_{i, k}v_{k, j}=\sum\limits_{k=1}^n( \frac{\delta_{i, k}}{t_{i, i}}-\frac{1}{t_{..}} )[(\delta_{k, j}-1)\frac{t_{k, j}}{t_{j, j}}+\frac{2t_{k, k}-\Delta_k}{t_{..}}]\\& = \sum\limits_{k=1}^n \frac{\delta_{i, k}}{t_{i, i}} [(\delta_{k, j}-1)\frac{t_{k, j}}{t_{j, j}}+\frac{2t_{k, k}-\Delta_k}{t_{..}}]-\frac{1}{t_{..}}\sum\limits_{k=1}^n [(\delta_{k, j}-1)\frac{t_{k, j}}{t_{j, j}}+\frac{2t_{k, k}-\Delta_k}{t_{..}}] \\& =[\frac{(\delta_{i, j}-1)}{t_{i, i}}( \frac{t_{i, j}}{t_{j, j}})+\frac{2t_{i, i}-\Delta_i}{t_{i, i}t_{..}}]-\frac{1}{t_{..}}[\frac{-(t_{j, j}-\Delta_j)}{t_{j, j}}+2+\frac{\sum\limits_k\Delta_k}{t_{..}}]\\& =\frac{(\delta_{i, j}-1)t_{i, j}}{t_{i, i}t_{j, j}}+\frac{1}{t_{..}}-\frac{\Delta_i}{t_{i, i}t_{..}}-\frac{\Delta_j}{t_{j, j}t_{..}}-\frac{\sum\limits_k \Delta_k}{t_{..}^2}.\end{aligned}$ (2.4)

Furthermore, when $i\neq j$,

$ 0<\frac{1}{t_{..}}\le \frac{1}{mn(n-1)}, \\ 0<\frac{t_{i, j}}{t_{i, i}t_{j, j}}\le \frac{M}{m^2(n-1)^2}, \\ 0<\frac{\Delta_i}{t_{i, i}t_{..}}\le \frac{M}{m^2n(n-1)^2}, $

and it is easy to show that, when $i$, $j$, $k$ are pairwise distinct,

$\begin{aligned} |w_{i, i}|& \le \max\{\frac{1}{mn(n-1)}, \frac{3M}{m^2n(n-1)^2} \}, \\ |w_{i, j}|& \le \max\{\frac{1}{mn(n-1)}, \frac{M}{m^2(n-1)^2}+ \frac{3M}{m^2n(n-1)^2} \}, \\ |w_{i, j}-w_{i, k}|&\le \frac{M}{m^2(n-1)^2}+\frac{M}{m^2n(n-1)^2}, \\ |w_{i, i}-w_{i, k}|&\le \frac{M}{m^2(n-1)^2}+\frac{M}{m^2n(n-1)^2}. \end{aligned}$

It follows that

$\max(|w_{i, j}|, |w_{i, j}-w_{i, k}|)\le \frac{M}{m^2(n-1)^2}+ \frac{3M}{m^2n(n-1)^2}\quad \mathrm{for\;all}\;i,j,k.$ (2.5)

Next we use the recursion (2.2) to obtain a bound on the approximation error $||F||$. By (2.2) and (2.3), for any $i$, we have

$f_{i, j}=\sum\limits_{k=1}^n f_{i, k}[(\delta_{k, j}-1)\frac{t_{k, j}}{t_{j, j}}+\frac{2t_{k, k}-\Delta_k}{t_{..}}] +w_{i, j},\quad\quad j=1, \cdots, n.$ (2.6)

Thus, to prove Theorem 2.1, it suffices to show that $|f_{i, j}|$ is bounded by the right-hand side of the inequality in Theorem 2.1 for all $i, j$. Fixing any $i$, let $f_{i, \alpha}=\max\limits_{1\le k\le n}f_{i, k}$ and $f_{i, \beta}=\min\limits_{1\le k\le n} f_{i, k}$.

First, we will show that $f_{i, \beta}\le 1/t_{..}\le 1/(m(n-1)^2)$. A direct calculation gives that

$\begin{aligned} \sum\limits_{k=1}^n f_{i, k}t_{k, i}&= \sum\limits_{k=1}^n (T_{i, k}^{-1}-(\frac{\delta_{i, k}}{t_{i, i}}-\frac{1}{t_{..}}) )t_{k, i}\\&= 1 - (1-\sum\limits_{k=1}^n \frac{t_{k, i}}{t_{..}})=\sum\limits_{k=1}^n \frac{t_{k, i}}{t_{..}}. \end{aligned}$ (2.7)

Thus, $f_{i, \beta} \sum\limits_{k=1}^n t_{k, i} \le \sum\limits_{k=1}^n f_{i, k}t_{k, i} = \sum\limits_{k=1}^n \frac{t_{k, i}}{t_{..}}$. It follows that $f_{i, \beta}\le 1/t_{..}$ and, similarly, $f_{i, \alpha}\ge 1/t_{..}$.

Note that $(1-\Delta_\alpha/t_{\alpha, \alpha}) f_{i, \beta}=-\sum\limits_{k=1}^n f_{i, \beta}(\delta_{k, \alpha}-1)\frac{t_{k, \alpha}}{t_{\alpha, \alpha}}$. Thus,

$f_{i, \alpha}+(1-\frac{\Delta_\alpha}{t_{\alpha, \alpha}})f_{i, \beta}=\sum\limits_{k=1}^n (f_{i, k}-f_{i, \beta})(\delta_{k, \alpha}-1)\frac{t_{k, \alpha}}{t_{\alpha, \alpha}} +\sum\limits_{k=1}^n f_{i, k}(\frac{2t_{k, k}-\Delta_k}{t_{..}})+w_{i, \alpha}.$ (2.8)

Similarly, by $(1-\Delta_\beta/t_{\beta, \beta}) f_{i, \beta}=-\sum\limits_{k=1}^n f_{i, \beta}(\delta_{k, \beta}-1)\frac{t_{k, \beta}}{t_{\beta, \beta}}$, we have that

$f_{i, \beta}+(1-\frac{\Delta_\beta}{t_{\beta, \beta}})f_{i, \beta}=\sum\limits_{k=1}^n (f_{i, k}-f_{i, \beta})(\delta_{k, \beta}-1)\frac{t_{k, \beta}}{t_{\beta, \beta}} +\sum\limits_{k=1}^n f_{i, k}(\frac{2t_{k, k}-\Delta_k}{t_{..}})+w_{i, \beta}.$ (2.9)

Subtracting (2.9) from (2.8) yields

$\quad f_{i, \alpha}-f_{i, \beta}+(\frac{\Delta_\beta}{t_{\beta, \beta}}-\frac{\Delta_\alpha}{t_{\alpha, \alpha}})f_{i, \beta} \\ = \sum\limits_{k=1}^n (f_{i, k}-f_{i, \beta})[(\delta_{k, \alpha}-1)\frac{t_{k, \alpha}}{t_{\alpha, \alpha}} -(\delta_{k, \beta}-1)\frac{t_{k, \beta}}{t_{\beta, \beta}}]+w_{i, \alpha}-w_{i, \beta} .$ (2.10)

Let $\Omega=\{k:(1-\delta_{k, \beta})t_{k, \beta}/t_{\beta, \beta} \ge (1-\delta_{k, \alpha})t_{k, \alpha}/t_{\alpha, \alpha} \}$ and let $|\Omega|=\lambda$. Note that $1\le \lambda \le n-1$, since $\alpha\in\Omega$ while $\beta\notin\Omega$. Then,

$\begin{aligned}&\quad\sum\limits_{k=1}^n(f_{i, k}-f_{i, \beta})[(\delta_{k, \alpha}-1)\frac{t_{k, \alpha}}{t_{\alpha, \alpha}} -(\delta_{k, \beta}-1)\frac{t_{k, \beta}}{t_{\beta, \beta}}] \\& \le \sum\limits_{k\in \Omega}(f_{i, k}-f_{i, \beta})[ (1-\delta_{k, \beta})\frac{t_{k, \beta}}{t_{\beta, \beta}}-(1-\delta_{k, \alpha})\frac{t_{k, \alpha}}{t_{\alpha, \alpha}}] \\& \le (f_{i, \alpha}-f_{i, \beta}) [\frac{\sum_{k\in \Omega}t_{k, \beta}}{t_{\beta, \beta}}-\frac{\sum_{k\in \Omega}(1-\delta_{k, \alpha})t_{k, \alpha}}{t_{\alpha, \alpha}}] \\& \le (f_{i, \alpha}-f_{i, \beta}) [\frac{\lambda M }{\lambda M+(n-1-\lambda)m}-\frac{(\lambda-1) m}{(\lambda-1) m+(n-\lambda)M+M}]. \end{aligned}$ (2.11)

Let

$f(\lambda)=\frac{\lambda M}{\lambda M+ (n-1-\lambda)m }- \frac{(\lambda-1)m}{ (\lambda-1)m+ (n-\lambda)M}.$

We consider two cases for the maximum of $f(\lambda)$ over $\lambda\in [1, n-1]$.

Case Ⅰ When $M=m$, it is easy to show $f(\lambda)=1/(n-1)$.

Case Ⅱ If $M\neq m$, since

$ \begin{aligned} f^\prime(\lambda)& = \frac{ (n-1)Mm}{[\lambda M+ (n-1-\lambda)m]^2 } - \frac{ (n-1)Mm}{ [(\lambda-1)m+(n-\lambda)M]^2} \\& = \frac{ (n-1)Mm [(n-2\lambda)(M-m)] [\lambda M+ (n-1-\lambda)m+(\lambda-1)m+ (n-\lambda)M]}{[\lambda M+ (n-1-\lambda)m]^2[(\lambda-1)m+ (n-\lambda)M]^2 } \end{aligned} $

and

$f^{\prime\prime}(\lambda)=-2(M-m)Mm(n-1)\left(\frac{ 1}{[\lambda M+ (n-1-\lambda)m]^3 } + \frac{ 1}{ [(\lambda-1)m+ (n-\lambda)M]^3} \right),$

$f(\lambda)$ takes its maximum at $\lambda=n/2$ when $1\le \lambda\le n-1$, since $f^\prime(n/2)=0$ and $f^{\prime\prime}(\lambda)<0$ for $M>m$. A direct calculation gives that

$f(\frac{n}{2}) = \frac{ nM-(n-2)m}{ nM+(n-2)m}.$ (2.12)
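Indeed, at $\lambda=n/2$ the two denominators in $f(\lambda)$ coincide, so

$f(\frac{n}{2})=\frac{\frac{n}{2}M}{\frac{n}{2}M+(\frac{n}{2}-1)m}-\frac{(\frac{n}{2}-1)m}{(\frac{n}{2}-1)m+\frac{n}{2}M} =\frac{\frac{n}{2}M-(\frac{n}{2}-1)m}{\frac{n}{2}M+(\frac{n}{2}-1)m},$

and multiplying the numerator and denominator by $2$ gives (2.12).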

Moreover, denote

$g(\lambda)=\frac{ (\lambda-1)m }{ (\lambda-1)m + (n-\lambda)M } - \frac{ (\lambda-1)m }{ (\lambda-1)m + (n-\lambda)M + M},$

whose derivative is

$g^\prime(\lambda)= \frac{ Mm [M^2( (n-\lambda)^2+2(n-\lambda)(\lambda-1)+n-1 )+(2Mm-m^2)(\lambda-1)^2]} {[(\lambda-1)m + (n-\lambda)M]^2[(\lambda-1)m + (n-\lambda)M + M]^2},$

Since $2Mm-m^2=m(2M-m)>0$, we have $g^\prime(\lambda)>0$ when $1\le\lambda\le n-1$, so that for $1\le\lambda\le n-1$,

$0\le g(\lambda)\le g(n-1)= \frac{ (n-2)Mm }{ [(n-2)m+M][(n-2)m + 2M] }.$ (2.13)

By (2.12) and (2.13), we have

$\begin{aligned}&\quad \max\limits_{1\le \lambda\le n-1}[\frac{\lambda M }{\lambda M+(n-1-\lambda)m}-\frac{(\lambda-1) m}{(\lambda-1) m+(n-\lambda)M+M}] \\ & \le \max\limits_{1\le \lambda\le n-1}f(\lambda) +\max\limits_{1\le \lambda \le n-1} g(\lambda)\\ & \le \frac{1}{n-1}I(M=m)+ \frac{nM-(n-2)m}{ nM+(n-2)m}I(M\neq m)+ \frac{ (n-2)Mm }{ [(n-2)m+M][(n-2)m + 2M] } \\& =\frac{nM-(n-2)m}{ nM+(n-2)m}+\frac{ (n-2)Mm }{ [(n-2)m+M][(n-2)m + 2M] }, \end{aligned}$ (2.14)

where $I(\cdot)$ is an indicator function. Since $ f_{i, \alpha}-f_{i, \beta}+\frac{1}{t_{..}} \ge |f_{i, \beta}| $, we have

$\quad f_{i, \alpha}-f_{i, \beta}+(\frac{\Delta_{\beta}}{t_{\beta, \beta}}-\frac{\Delta_\alpha}{t_{\alpha, \alpha}})f_{i, \beta} \\ \ge f_{i, \alpha}-f_{i, \beta}-(f_{i, \alpha}-f_{i, \beta}+\frac{1}{t_{..}})|\frac{\Delta_{\beta}}{t_{\beta, \beta}}-\frac{\Delta_\alpha}{t_{\alpha, \alpha}} | \\ \ge (1-\frac{M}{m(n-1)})(f_{i, \alpha}-f_{i, \beta})-\frac{M}{m^2n(n-1)^2}.$ (2.15)

Combining (2.10), (2.11), (2.14) and (2.15) yields

$ \quad (1-\frac{M}{m(n-1)})(f_{i, \alpha}-f_{i, \beta}) \\ \le \left( \frac{nM-(n-2)m}{ nM+(n-2)m} + \frac{ (n-2)Mm }{ [(n-2)m+M][(n-2)m + 2M] } \right) \times (f_{i, \alpha}-f_{i, \beta}) + \frac{M}{m^2(n-1)^2}+\frac{4M}{m^2n(n-1)^2}, $

so that

$C(m, M) (f_{i, \alpha}-f_{i, \beta}) \le \frac{M}{m^2(n-1)^2}+\frac{4M}{m^2n(n-1)^2},$

where $C(m, M)$ is defined in (2.1). Consequently, if the condition (2.1) holds, then

$\begin{aligned} \max\limits_{j=1, \cdots, n}|f_{i, j}| &\le f_{i, \alpha}-f_{i, \beta}+\frac{1}{t_{..} } \\ & \le \frac{1}{C(m, M)} \times \left [\frac{M}{m^2(n-1)^2}+ \frac{4M}{m^2n(n-1)^2} \right] + \frac{1}{mn(n-1)} \\ & = \frac{1}{(n-1)^2} \times \left [\frac{1}{C(m, M)} (\frac{M}{m^2}+ \frac{4M}{m^2n} )+\frac{n-1}{mn} \right], \end{aligned} $

where the first inequality holds by $\max_j |f_{i, j}|\le f_{i, \alpha}-f_{i, \beta}+f_{i, \beta}I(f_{i, \beta}>0)$ and $0\le f_{i, \beta}I(f_{i, \beta}>0)\le 1/t_{..}$. This completes the proof.

Remark If $M$ and $m$ are constants, then $C(m, M)\approx 2m/(M+m)$ when $n$ is large enough. Therefore condition (2.1) is very weak.
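To see this, note the term-by-term limits

$\lim\limits_{n\to\infty}\frac{2(n-2)m}{nM+(n-2)m}=\frac{2m}{M+m}, \quad \lim\limits_{n\to\infty}\frac{M}{m(n-1)}=0, \quad \lim\limits_{n\to\infty}\frac{(n-2)Mm}{[(n-2)m+M][(n-2)m+2M]}=0,$

so condition (2.1) holds for all sufficiently large $n$ whenever $m$ and $M$ are fixed.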

3 Discussion

Our proposed matrix $S$ could be used as a preconditioner for solving linear systems with diagonally dominant nonnegative matrices, just as [18, 20] did for $M$-matrices. The bound on the approximation error in Theorem 2.1 depends on $m$, $M$ and $n$. When $m$ and $M$ are bounded by a constant, all elements of $T^{-1}-S$ are uniformly of order $O(1/(n-1)^2)$ as $n\to\infty$.
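As a minimal illustration of this use (a sketch, not the paper's method: it assumes the `approx_inverse` helper from the Section 1 sketch and that $S$ behaves well enough to serve as a conjugate gradient preconditioner), $S$ can be supplied to SciPy's CG solver:

    import numpy as np
    from scipy.sparse.linalg import cg, LinearOperator

    # T: a matrix satisfying (1.1), e.g., built as in the Section 1 sketch.
    S = approx_inverse(T)
    b = np.ones(T.shape[0])
    precond = LinearOperator(T.shape, matvec=lambda x: S @ x)  # acts like T^{-1}
    x, info = cg(T, b, M=precond)
    print(info)  # 0 signals convergence

Since $T$ is positive definite, the conjugate gradient method applies, and a good approximate inverse supplied as the preconditioner typically reduces the iteration count.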

Finally, we illustrate by an example that the bound on the approximation error in Theorem 2.1 is optimal in the sense that any bound of the form $C(m, M)/f(n)$ requires $f(n)=O((n-1)^2)$ as $n\to\infty$. Assume that the matrix $T$ has elements $t_{i, i}=(n-1)M$, $i=1, \cdots, n-1$; $t_{n, n}=(n-1)m$; and $t_{i, j}=m$, $i, j=1, \cdots, n$, $i\neq j$, which satisfies (1.1). By the Sherman-Morrison formula, we have

$ (T^{-1})_{i, j}=\frac{\delta_{i, j} }{(n-1)M-m}-\frac{m}{[(n-1)M-m]^2}, i, j=1, \cdots, n-1, \\ (T^{-1})_{n, j}=\frac{\delta_{n, j} }{(n-2)m}-\frac{1}{(n-2)[(n-1)M-m]},\;j=1, \cdots, n. $

In this case, the elements of $S$ are

$ S_{i, j} = \frac{\delta_{i, j}}{(n-1)M}-\frac{1}{n(n-1)m}, \quad i=1, \cdots, n-1, \; j=1, \cdots, n, \\ S_{n, j} = \frac{\delta_{n, j}}{(n-1)m}-\frac{1}{n(n-1)m}, \quad j=1, \cdots, n. $

It is easy to show that $||T^{-1}-S||$ is of order $O( \frac{1}{(n-1)^2m} )$ in this case. This suggests that the rate $1/(n-1)^2$ is optimal. On the other hand, if $M$ and $m$ are constants, the upper bound of $||T^{-1}-S||$ in Theorem 2.1 is approximately equal to $(1+\frac{M}{m})\frac{M}{2(n-1)^2m^2}$. Therefore there is a gap between these two bounds, which implies there might be room for improvement. It would be interesting to see whether the bound in Theorem 2.1 can be further sharpened.
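A quick numerical check of this example (with illustrative constants $m$ and $M$, not from the original) confirms that $(n-1)^2||T^{-1}-S||$ stays bounded as $n$ grows:

    import numpy as np

    n, m, M = 100, 1.0, 3.0
    T = np.full((n, n), m)             # t_{i,j} = m for i != j
    np.fill_diagonal(T, (n - 1) * M)   # t_{i,i} = (n-1)M, i = 1, ..., n-1
    T[-1, -1] = (n - 1) * m            # t_{n,n} = (n-1)m
    S = np.diag(1.0 / np.diag(T)) - 1.0 / (n * (n - 1) * m)  # t_.. = n(n-1)m
    print(np.abs(np.linalg.inv(T) - S).max() * (n - 1)**2)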

Acknowledgement

The authors thank Professor Qin Hong and Dr. Yan Ting of Central China Normal University for reviewing this paper and for their helpful comments. This research was partially supported by the NNSF of China (No. 11271147).

References
[1] Axelsson O. A survey of preconditioned iterative methods for linear systems of algebraic equations[J]. BIT, 1985, 25: 166–187.
[2] Berman A, Plemmons R J. Nonnegative matrices in the mathematical sciences[M]. Philadelphia: SIAM, 1994.
[3] Benzi M. Preconditioning techniques for large linear systems: a survey[J]. J. Comput. Phys., 2002, 182: 418–477. DOI:10.1006/jcph.2002.7176
[4] Berman A, Plemmons R J. Inverses of nonnegative matrices[J]. Linear and Multilinear Algebra, 1974, 2: 161–172. DOI:10.1080/03081087408817055
[5] Bruaset A M. A survey of preconditioned iterative methods[M]. London: Longman Scientific & Technical, 1995.
[6] Chatterjee S, Diaconis P, Sly A. Random graphs with a given degree sequence[J]. Annals of Applied Probability, 2011, 21: 1400–1435. DOI:10.1214/10-AAP728
[7] Dahl G. A note on diagonally dominant matrices[J]. Linear Algebra Appl., 2000, 317: 217–224. DOI:10.1016/S0024-3795(00)00178-6
[8] Egleston P D, Lenker T D, Narayan S K. The nonnegative inverse eigenvalue problem[J]. Linear Algebra Appl., 2004, 379: 475–490. DOI:10.1016/j.laa.2003.10.019
[9] Kaykobad M. On nonnegative factorization of matrices[J]. Linear Algebra Appl., 1987, 96: 27–33. DOI:10.1016/0024-3795(87)90334-X
[10] Monahan J F. Numerical methods of statistics[M]. Cambridge: Cambridge University Press, 2001.
[11] Loewy R, London D. A note on an inverse problem for nonnegative matrices[J]. Linear and Multilinear Algebra, 1978, 6: 83–90. DOI:10.1080/03081087808817226
[12] Maggs B M, Miller G L, Parekh O, Ravi R, Woo S L. Solving symmetric diagonally-dominant systems by preconditioning[C]. In Proceedings 38th Annual Symposium on Foundations of Computer Science. 2002, Available online at http://www.cs.cmu.edu/maverick/Research/2003-PREPreconditioner.pdf.
[13] Markham T L. Nonnegative matrices whose inverses are M-matrices[J]. Proc. Amer. Math. Soc., 1972, 26: 326–330.
[14] Martínez S, Michon G, San Martín J. Inverses of ultrametric matrices are of Stieltjes type[J]. SIAM J. Matrix Anal. Appl., 1994, 15: 98–106.
[15] Minc H. Nonnegative matrices[M]. New York: John Wiley & Sons, 1988.
[16] Simons G, Yao Y. Approximating the inverse of a symmetric positive definite matrix[J]. Linear Algebra Appl., 1998, 281: 97–103. DOI:10.1016/S0024-3795(98)10038-1
[17] Simons G, Yao Y. Asymptotics when the number of parameters tends to infinity in the Bradley-Terry model for paired comparisons[J]. Annals of Statistics, 1999, 27: 1041–1060. DOI:10.1214/aos/1018031267
[18] Tam H S, Wei Y M, Jin X Q. A note on preconditioning for M-matrix[J]. Appl. Math. Lett., 2005, 18: 1137–1142. DOI:10.1016/j.aml.2005.02.001
[19] Yan T, Xu J. A central limit theorem in the β-model for undirected random graphs with a diverging number of vertices[J]. Biometrika, 2013, 100: 519–524. DOI:10.1093/biomet/ass084
[20] Zhang Y, Huang T, Liu X, Gu T. A class of approximate inverse preconditioners for solving linear systems[J]. Int. J. Comput. Math., 2009, 86: 1243–1252. DOI:10.1080/00207160701821707