数学杂志 (Journal of Mathematics) 2016, Vol. 36, Issue 6: 1120-1132
COMPLETE MOMENT CONVERGENCE OF WEIGHTED SUMS FOR ARRAYS OF DEPENDENT RANDOM VARIABLES
GUO Ming-le, DAI Yu, ZHANG Li-jun    
School of Mathematics and Computer Science, Anhui Normal University, Wuhu 241003, China
Abstract: In this paper, the complete moment convergence of weighted sums for arrays of dependent random variables is investigated. By applying moment inequalities and truncation methods, some sufficient conditions for the complete moment convergence of weighted sums for arrays of dependent random variables are established. We extend the results of Volodin et al. (2004) and Chen et al. (2006) for independent random variables to negatively associated and negatively dependent random variables, thereby improving and generalizing the results of Sung (2011), Wu (2012) and Guo and Zhu (2012).
Key words: negatively associated; negatively dependent; $\rho^*$-mixing; complete moment convergence; complete convergence
1 Introduction

Hsu and Robbins [1] introduced the concept of complete convergence. A sequence $\{X_n, n=1, 2, \cdots\}$ of random variables is said to converge completely to a constant $C$ if

$\sum\limits_{n = 1}^\infty P (|X_n-C|>\epsilon)<\infty~~\mbox{for all}~~ \epsilon>0.$

Moreover, they proved that the sequence of arithmetic means of independent identically distributed (i.i.d.) random variables converges completely to the expected value if the variance of the summands is finite. The converse was proved by Erdős [2]. In view of the Borel-Cantelli lemma, complete convergence implies almost sure convergence; complete convergence is therefore a very important tool in establishing almost sure convergence. The Hsu-Robbins-Erdős result is a fundamental theorem in probability theory and was generalized and extended in several directions by many authors.
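As an informal numerical illustration of the Hsu-Robbins phenomenon (our own sketch, not part of the original development; all parameter choices are arbitrary), the following Python snippet estimates the tail probabilities $P(|S_n/n|>\epsilon)$ by Monte Carlo; their rapid decay in $n$ is what makes the series above converge.

```python
import numpy as np

# Monte Carlo sketch of the Hsu-Robbins phenomenon: for i.i.d. mean-zero
# variables with finite variance, P(|S_n/n| > eps) decays fast enough in n
# for the series sum_n P(|S_n/n| > eps) to converge.
rng = np.random.default_rng(0)
eps, reps = 0.5, 20000

for n in [10, 20, 40, 80, 160]:
    # each row is one replication of (X_1, ..., X_n); columns are summands
    means = rng.standard_normal((reps, n)).mean(axis=1)
    p_hat = np.mean(np.abs(means) > eps)
    print(f"n = {n:3d}   P(|S_n/n| > {eps}) ~ {p_hat:.5f}")
```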

We recall that an array $\{X_{ni}, i\geq 1, n\geq 1\}$ of random variables is said to be stochastically dominated by a random variable $X$ if there exists a positive constant $C$ such that $P\{|X_{ni}|> x\}\leq CP\{|X| > x\}$ for all $x\geq 0$, $i\geq 1$ and $n\geq 1$.

Volodin et al. [3] and Chen et al. [4] (for $\beta>-1$ and $\beta = -1$, respectively) obtained the following complete convergence result for weighted sums of arrays of rowwise independent Banach-space-valued random elements.

Theorem 1.1 [3, 4] Suppose that $\beta\geq -1$. Let $\{X_{ni}, i\geq 1, n\geq 1 \}$ be an array of rowwise independent random elements in a real separable Banach space which are stochastically dominated by a random variable $X$. Let $\{a_{ni}, i\geq 1, n\geq 1 \}$ be an array of constants satisfying

$\begin{equation} \displaystyle \sup_{i\geq 1}|a_{ni}|=O(n^{-r})\, \, \, \, {\rm for \, \, \, some} \, \, \, r>0 \end{equation}$ (1.1)

and

$\begin{equation} \displaystyle \sum_{i=1}^\infty |a_{ni}|^\theta=O(n^{\mu}) \end{equation}$ (1.2)

for some $0<\theta\leq 2$ and $\mu$ such that $\theta+\mu/r<2$ and $1+\mu+\beta>0$. If $E|X|^{\theta+(1+\mu+\beta)/r}<\infty$ and $\sum\limits_{i=1}^{\infty}a_{ni}X_{ni}\rightarrow 0$ in probability, then

$\begin{equation} \displaystyle\sum_{n=1}^{\infty}n^\beta P\left\{\left \|\sum_{i=1}^{\infty}a_{ni}X_{ni}\right \|>\epsilon\right\}<\infty\, \, \, {\rm for\, \, \, all}\, \, \, \epsilon >0.\end{equation}$ (1.3)

If $\beta<-1$, then (1.3) is immediate. Hence Theorem 1.1 is of interest only for $\beta\geq -1$.
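To see how conditions (1.1) and (1.2) interact, consider the Cesàro-type weights $a_{ni}=n^{-1}$ for $1\le i\le n$ and $a_{ni}=0$ for $i>n$ (an illustrative example of ours, not taken from [3, 4]). Then $\sup_{i\geq 1}|a_{ni}| = n^{-1}$, so (1.1) holds with $r=1$, and $\sum_{i=1}^\infty |a_{ni}|^\theta = n^{1-\theta}$, so (1.2) holds with $\mu = 1-\theta$. The constraints become $\theta+\mu/r = 1 < 2$ and $1+\mu+\beta = 2-\theta+\beta>0$, and the moment condition reads $E|X|^{\theta+(1+\mu+\beta)/r}=E|X|^{2+\beta}<\infty$ whenever $\theta<2+\beta$.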

Recently, Sung [5] extended Theorem 1.1 to negatively associated and negatively dependent random variables when $\theta=1$. Moreover, similar results for sequences of $\varphi$-mixing and $\rho^*$-mixing random variables were also established.

Theorem 1.2 [5] Suppose that $\beta\geq -1$. Let $\{X_{ni}, i\geq 1, n\geq 1 \}$ be an array of rowwise negatively associated random variables which are stochastically dominated by a random variable $X$. Let $\{a_{ni}, i\geq 1, n\geq 1 \}$ be an array of constants satisfying (1.1) and

$\begin{equation} \displaystyle \sum_{i=1}^\infty |a_{ni}|=O(n^{\mu})\, \, \, \, {\rm for \, \, \, \, some} \, \, \, \, \mu<r. \end{equation}$ (1.4)

If $EX_{ni}=0$ for all $i\geq 1, n\geq 1$ and

$E|X|\log |X| < \infty \ \ {\rm for}\ \ 1 + \mu + \beta = 0, $ (1.5)
$E|X|^{1 + (1 + \mu + \beta )/r} < \infty \ \ {\rm for}\ \ 1 + \mu + \beta > 0, $ (1.6)

then

$\begin{equation} \displaystyle\sum_{n=1}^{\infty}n^\beta P\left\{\sup_{k\geq 1}\left |\sum_{i=1}^{k}a_{ni}X_{ni}\right |>\epsilon\right\}<\infty\, \, \, {\rm for\, \, \, all}\, \, \, \epsilon >0.\end{equation}$ (1.7)

Guo and Zhu [6] extended Theorem 1.2 to complete moment convergence of the supremum of partial sums for arrays of negatively associated random variables when $\beta>-1$. However, the proof of Guo and Zhu [6] does not work for the case $\beta=-1$.

Theorem 1.3 [6] Under the conditions of Theorem 1.2, if $\beta>-1$, then

$\begin{equation} \displaystyle\sum_{n=1}^{\infty}n^\beta E\left(\sup_{k\geq 1}\left |\sum_{i=1}^{k}a_{ni}X_{ni}\right |-\epsilon\right)^+<\infty\, \, \, {\rm for\, \, \, all}\, \, \, \epsilon >0.\end{equation}$ (1.8)

Wu [7] extended Theorem 1.1 to negatively dependent random variables when $\beta>-1$. Wu [7] also considered the case of $1+\mu+\beta=0(\beta>-1)$. However, the proof of Wu [7] does not work for the case of $\beta=-1$.

Theorem 1.4 [7] Suppose that $\beta> -1$. Let $\{X_{ni}, i\geq 1, n\geq 1 \}$ be an array of rowwise negatively dependent random variables which are stochastically dominated by a random variable $X$. Let $\{a_{ni}, i\geq 1, n\geq 1 \}$ be an array of constants satisfying (1.1) and (1.2) for some $\theta$ and $\mu$ such that $\mu<2r$ and $0<\theta<\min\{2, 2-\mu/r\}.$ Furthermore, assume that $EX_{ni}=0$ for all $i\geq 1$ and $n \geq 1$ if $\theta+(1+\mu+\beta)/r\geq 1$. If

$E|X|^\theta\log|X|<\infty\, \, {\rm for} \, \, 1+\mu+\beta=0, $
$E|X|^{\theta+(1+\mu+\beta)/r}<\infty \, \, {\rm for} \, \, 1+\mu+\beta>0, $

then

$\begin{equation} \displaystyle\sum_{n=1}^{\infty}n^\beta P\left\{ \left |\sum_{i=1}^{\infty}a_{ni}X_{ni}\right |>\epsilon\right\}<\infty\, \, \, {\rm for\, \, \, all}\, \, \, \epsilon >0.\end{equation}$ (1.9)

In this paper, we deal with more general weights and establish some weaker sufficient conditions for complete moment convergence of weighted sums for arrays of negatively associated and negatively dependent random variables. Similar results for sequences of $\rho^*$-mixing random variables are also obtained. The results of Volodin et al. [3], Chen et al. [4], Sung [5], Wu [7] and Guo and Zhu [6] are improved and generalized.

For the proofs of the main results, we need to restate a few lemmas for easy reference. Throughout this paper, the symbol $C$ denotes a positive constant which is not necessarily the same at each appearance, and $I(A)$ denotes the indicator function of the set $A$. For a finite set $B$, the symbol $\sharp B$ denotes the number of elements of $B$. Let $a_n\ll b_n$ mean that there exists a constant $C>0$ such that $a_n\leq C b_n$ for sufficiently large $n$. Also, let $\log x$ denote $\ln\max(e, x)$.

Lemma 1.1 [5] Let the sequence $\{X_{n}, n\geq 1\}$ of random variables be stochastically dominated by a random variable $X$. Then for any $p>0, x>0, $

$E|X_{n}|^pI(|X_{n}|\leq x)\leq C\bigg[E|X|^pI(|X|\leq x)+x^pP\{|X|> x\}\bigg], $ (1.10)
$E|X_{n}|^pI(|X_{n}|> x)\leq CE|X|^pI(|X|> x).$ (1.11)

The following lemma is well known, and its proof is standard.

Lemma 1.2 Let $X$ be a random variable. For any $\alpha>0, r>0$, the following statements hold:

(i) $\sum\limits_{n=1}^\infty n^\beta E|X|^\alpha I{(|X|>n^r)}\ll E|X|^{\alpha+\frac {\beta+1}r}$ for any $\beta >-1$,

(ii) $\sum\limits_{n=1}^\infty n^\beta E|X|^\alpha I{(|X|\leq n^r)}\ll E|X|^{\alpha+\frac {\beta+1}r}$ for any $\beta <-1$.
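This lemma is indeed standard; for completeness, here is a sketch of the computation behind (i) (added by us). Interchanging expectation and summation,

$\sum\limits_{n=1}^\infty n^\beta E|X|^\alpha I(|X|>n^r) = E\left[|X|^\alpha \sum\limits_{n<|X|^{1/r}} n^\beta\right] \ll E\left[|X|^\alpha |X|^{(\beta+1)/r}\right] = E|X|^{\alpha+\frac {\beta+1}r}, $

since $\sum_{n\leq x} n^\beta \ll x^{\beta+1}$ for $\beta>-1$; part (ii) is proved the same way using $\sum_{n\geq x} n^\beta \ll x^{\beta+1}$ for $\beta<-1$.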

One of the most useful inequalities in probability theory is the Rosenthal-type inequality, which plays an important role in establishing complete convergence. Rosenthal-type inequalities for sequences of dependent random variables were established by many authors.

The concept of negatively associated random variables was introduced by Alam and Saxena [8] and was carefully studied by Joag-Dev and Proschan [9]. A finite family of random variables $\{X_i, 1\leq i\leq n\}$ is said to be negatively associated if for every pair of disjoint subsets $A$ and $B$ of $\{1, 2, \cdots, n\}$ and any coordinatewise nondecreasing functions $f_1$ on $\mathbb{R}^A$ and $f_2$ on $\mathbb{R}^B, $

$\mbox{Cov}(f_1(X_i, i\in A), f_2(X_i, i\in B))\leq 0, $

whenever the covariance exists. An infinite family of random variables $\{X_i, -\infty<i<\infty\}$ is negatively associated if every finite subfamily is negatively associated.

The following lemma is a Rosenthal-type inequality for negatively associated random variables.

Lemma 1.3 [10] Let $\{X_n, n\geq 1\}$ be a sequence of negatively associated random variables with $EX_n=0$ and $E|X_n|^p<\infty$ for any $n\geq 1$, where $p\geq 1$. Then there exist constants $C_p> 0$ and $D_p>0$ depending only on $p$ such that

$E\mathop {\max }\limits_{1 \le k \le n} {\left| {\sum\limits_{i = 1}^k {{X_i}} } \right|^p} \le {C_p}\sum\limits_{i = 1}^n E |{X_i}{|^p}\ \ {\rm for}\ \ 1 \le p \le 2$

and

$E\mathop {\max }\limits_{1 \le k \le n} {\left| {\sum\limits_{i = 1}^k {{X_i}} } \right|^p} \le {D_p}\left( {\sum\limits_{i = 1}^n E |{X_i}{|^p} + {{\left( {\sum\limits_{i = 1}^n E X_i^2} \right)}^{p/2}}} \right)\ \ {\rm for}\ \ p > 2.$

The concept of negatively dependent random variables was introduced by Lehmann [11]. A finite family of random variables $\{X_i, 1\leq i\leq n\}$ is said to be negatively dependent (or negatively orthant dependent) if for all real numbers $x_1, x_2, \cdots, x_n$,

$P\left\{ {\bigcap\limits_{i = 1}^n {({X_i} \le {x_i})} } \right\} \le \prod\limits_{i = 1}^n P \{ {X_i} \le {x_i}\}, $ (1.12)
$P\left\{ {\bigcap\limits_{i = 1}^n {({X_i} > {x_i})} } \right\} \le \prod\limits_{i = 1}^n P \{ {X_i} > {x_i}\} .$ (1.13)

An infinite family of random variables is negatively dependent if every finite subfamily is negatively dependent.
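To make (1.12) and (1.13) concrete, the following sketch (our own illustration) exhaustively checks both inequalities for a toy pair with joint law $P(X=0, Y=1)=P(X=1, Y=0)=1/2$; for two random variables, negative dependence is exactly negative quadrant dependence.

```python
import itertools

# Exhaustive check of (1.12)-(1.13) on a toy pair: the joint law
# P(X=0, Y=1) = P(X=1, Y=0) = 1/2 is negatively dependent.
pmf = {(0, 1): 0.5, (1, 0): 0.5}

def prob(event):
    """Probability of {(x, y) : event(x, y)} under the toy pmf."""
    return sum(p for (x, y), p in pmf.items() if event(x, y))

ok = True
for x1, x2 in itertools.product([-1.0, 0.0, 0.5, 1.0], repeat=2):
    lhs_le = prob(lambda x, y: x <= x1 and y <= x2)   # left side of (1.12)
    rhs_le = prob(lambda x, y: x <= x1) * prob(lambda x, y: y <= x2)
    lhs_gt = prob(lambda x, y: x > x1 and y > x2)     # left side of (1.13)
    rhs_gt = prob(lambda x, y: x > x1) * prob(lambda x, y: y > x2)
    ok = ok and lhs_le <= rhs_le + 1e-12 and lhs_gt <= rhs_gt + 1e-12
print("inequalities (1.12)-(1.13) hold on the grid:", ok)
```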

It follows immediately from the definitions that negative association implies negative dependence. The converse does not hold, so negative dependence is strictly weaker than negative association. The following lemma is a Rosenthal-type inequality for negatively dependent random variables.

Lemma 1.4 [12] Let $\{X_n, n\geq 1\}$ be a sequence of negatively dependent random variables with $EX_n=0$ and $E|X_n|^p<\infty$ for any $n\geq 1$, where $p\geq 1$. Then there exist constants $C_p> 0$ and $D_p>0$ depending only on $p$ such that

$E{\left| {\sum\limits_{i = 1}^n {{X_i}} } \right|^p} \le {C_p}\sum\limits_{i = 1}^n E |{X_i}{|^p}\ \ {\rm for}\ \ 1 \le p \le 2$

and

$E{\left| {\sum\limits_{i = 1}^n {{X_i}} } \right|^p} \le {D_p}\left( {\sum\limits_{i = 1}^n E |{X_i}{|^p} + {{\left( {\sum\limits_{i = 1}^n E X_i^2} \right)}^{p/2}}} \right)\ \ {\rm for}\ \ p > 2.$

Let $\{X_n, \, \, n\geq 1\}$ be a sequence of random variables defined on a probability space $(\Omega, \mathscr{F}, P)$. For any $S\subset \mathbb{ N}$, let $\mathscr{F}_S= \sigma(X_k, k\in S).$ Define the $\rho^*$-mixing coefficients by

${\rho ^*}(k) = \mathop {\sup }\limits_{S,T} \left( {\mathop {\sup }\limits_{X \in {L^2}({\mathscr{F}_S}),Y \in {L^2}({\mathscr{F}_T})} \frac{{{\rm{Cov}}(X,Y)}}{{\sqrt {{\rm{Var}}(X) \cdot {\rm{Var}}(Y)} }}} \right),$

where $S, T$ are finite subsets of positive integers such that dist$(S, T)\geq k$. We call $\{X_n, \, \, n\geq 1\}$ a $\rho^*$-mixing sequence if there exists $k\geq 1$ such that $\rho^*(k)<1$.

Note that if $\{X_n, \, \, n\geq 1\}$ is a sequence of independent random variables, then $\rho^*(n)=0$ for all $n \geq 1$.

The following lemma is a Rosenthal-type inequality for $\rho^*$-mixing random variables.

Lemma 1.5 [13, 14] Let $\{X_n, n\geq 1\}$ be a sequence of $\rho^*$-mixing random variables, and let $Y_n\in \sigma (X_n)$ with $EY_n=0$ and $E|Y_n|^p<\infty$ for $n\geq 1$, where $p\geq 1.$ Then there exist constants $C_p> 0$ and $D_p>0$, depending only on $p$, $k$ and $\rho^*(k)$, where $\rho^*(k)<1$, such that

$E\mathop {\max }\limits_{1 \le k \le n} {\left| {\sum\limits_{i = 1}^k {{Y_i}} } \right|^p} \le {C_p}\sum\limits_{i = 1}^n E |{Y_i}{|^p}\ \ {\rm for}\ \ 1 \le p \le 2$

and

$E\mathop {\max }\limits_{1 \le k \le n} {\left| {\sum\limits_{i = 1}^k {{Y_i}} } \right|^p} \le {D_p}\left( {\sum\limits_{i = 1}^n E |{Y_i}{|^p} + {{\left( {\sum\limits_{i = 1}^n E Y_i^2} \right)}^{p/2}}} \right)\ \ {\rm for}\ \ p > 2.$
2 Main Results

Theorem 2.1 Suppose that $\beta\geq -1$. Let $\{X_{ni}, i\geq 1, n\geq 1 \}$ be an array of rowwise negatively associated random variables which are stochastically dominated by a random variable $X$ satisfying $E|X|^p<\infty$ for some $p>1$. Let $\{a_{ni}, i\geq 1, n\geq 1 \}$ be an array of constants satisfying (1.1) and

$\sum\limits_{i = 1}^\infty {|{a_{ni}}{|^q}} \ll {n^{ - 1 - \beta + r(p - q)}}\ \ {\rm for\ some}\ \ q<p.$ (2.1)

Furthermore, assume that

$\sum\limits_{i = 1}^\infty {a_{ni}^2} \ll {n^{ - \alpha }}\ \ {\rm for\ some}\ \ \alpha >0$ (2.2)

if $p\geq 2$. Let $EX_{ni}=0$ for all $i\geq 1$ and $n \geq 1$. Then

$\sum\limits_{n = 1}^\infty {{n^\beta }} E{\left( {\mathop {\sup }\limits_{k \ge 1} \left| {\sum\limits_{i = 1}^k {{a_{ni}}} {X_{ni}}} \right| - \epsilon} \right)^ + } < \infty\ \ {\rm for\ all}\ \ \epsilon > 0.$ (2.3)
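Before the proof, a numerical illustration of (2.3) under hypothetical choices (our own sketch; the construction, weights and parameters below are not from the paper): each row is a random permutation of fixed centered scores, a classical negatively associated construction (Joag-Dev and Proschan [9]), and the summand $E(\sup_k|\sum_{i\le k} a_{ni}X_{ni}|-\epsilon)^+$ is estimated by Monte Carlo for a few values of $n$.

```python
import numpy as np

# Sketch of (2.3), illustration only. Rows (X_{n1}, ..., X_{nN}) are random
# permutations of fixed centered scores, which are negatively associated
# (Joag-Dev and Proschan [9]); EX_{ni} = 0 and |X_{ni}| is bounded, so all
# moments of the dominating variable X are finite. The hypothetical weights
# a_{ni} = n^{-r} i^{-2} satisfy (1.1), and (2.1)-(2.2) hold for suitable
# p, q, alpha since sum_i i^{-2q} converges for q > 1/2.
rng = np.random.default_rng(0)
r, eps, reps, N = 1.0, 0.1, 2000, 200

scores = np.arange(N) - (N - 1) / 2.0      # centered scores, mean zero
i = np.arange(1, N + 1)

for n in [1, 2, 4, 8, 16]:
    a = n ** (-r) / i ** 2                 # weights a_{ni}
    terms = np.empty(reps)
    for m in range(reps):
        x = rng.permutation(scores)        # one negatively associated row
        sup_k = np.max(np.abs(np.cumsum(a * x)))
        terms[m] = max(sup_k - eps, 0.0)   # (sup_k |partial sum| - eps)^+
    print(f"n = {n:2d}   E(sup_k |sum a*X| - eps)^+ ~ {terms.mean():.4f}")
```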

Proof Without loss of generality, we can assume that $a_{ni}>0$ for all $i\geq 1, n\geq 1$ (otherwise, we use $a_{ni}^+$ and $a_{ni}^-$ instead of $a_{ni}$, respectively, and note that $a_{ni}=a_{ni}^+-a_{ni}^-$). From (1.1) and (2.1), without loss of generality, we can also assume that

$\mathop {\sup }\limits_{i \ge 1} {a_{ni}} \le {n^{ - r}}, \quad \sum\limits_{i = 1}^\infty {|{a_{ni}}{|^q}} \le {n^{ - 1 - \beta + r(p - q)}}.$ (2.4)

For any $i\geq 1, n\geq 1$, let

$X_{ni}'=-a_{ni}^{-1}I(a_{ni}X_{ni}<-1)+ X_{ni}I(a_{ni}|X_{ni}|\leq 1)+a_{ni}^{-1}I(a_{ni}X_{ni}>1), \, \, X_{ni}''=X_{ni}- X_{ni}'.$
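In other words (an equivalent reformulation we add for readability; recall $a_{ni}>0$), $X_{ni}'$ clips $X_{ni}$ at the levels $\pm a_{ni}^{-1}$:

$a_{ni}X_{ni}' = \max\big(-1, \min(1, a_{ni}X_{ni})\big), \qquad |a_{ni}X_{ni}'|\leq 1, $

which is the form in which the monotonicity of $X_{ni}'$ in $X_{ni}$ is used below.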

Noting that $EX_{ni}=0, \, |X_{ni}''|\leq |X_{ni}|I(a_{ni}|X_{ni}|>1)$ for any $ i\geq 1, \, n\geq 1$, we have

$\begin{array}{l} E{\left( {\mathop {\sup }\limits_{k \ge 1} \left| {\sum\limits_{i = 1}^k {{a_{ni}}} {X_{ni}}} \right| - \epsilon} \right)^ + }\\ \le E{\left( {\mathop {\sup }\limits_{k \ge 1} \left| {\sum\limits_{i = 1}^k {{a_{ni}}} (X_{ni}' - EX_{ni}')} \right| - \epsilon} \right)^ + } + E\mathop {\sup }\limits_{k \ge 1} \left| {\sum\limits_{i = 1}^k {{a_{ni}}} (X_{ni}'' - EX_{ni}'')} \right|\\ \ll E{\left( {\mathop {\sup }\limits_{k \ge 1} \left| {\sum\limits_{i = 1}^k {{a_{ni}}} (X_{ni}' - EX_{ni}')} \right| - \epsilon} \right)^ + } + \sum\limits_{i = 1}^\infty {{a_{ni}}} E|{X_{ni}}|I({a_{ni}}|{X_{ni}}| > 1). \end{array}$

Therefore

$\begin{array}{l} \sum\limits_{n = 1}^\infty {{n^\beta }} E{\left( {\mathop {\sup }\limits_{k \ge 1} \left| {\sum\limits_{i = 1}^k {{a_{ni}}} {X_{ni}}} \right| - \epsilon} \right)^ + }\\ \ll \sum\limits_{n = 1}^\infty {{n^\beta }} E{\left( {\mathop {\sup }\limits_{k \ge 1} \left| {\sum\limits_{i = 1}^k {{a_{ni}}} (X_{ni}' - EX_{ni}')} \right| - \epsilon} \right)^ + } + \sum\limits_{n = 1}^\infty {{n^\beta }} \sum\limits_{i = 1}^\infty {{a_{ni}}} E|{X_{ni}}|I({a_{ni}}|{X_{ni}}| > 1)\\ = :{I_1} + {I_2}. \end{array}$

Hence, in order to prove (2.3), it suffices to prove that $I_1<\infty$ and $I_2<\infty$. Take $\delta>0$ such that $p-\delta>\max(1, q)$. By Lemma 1.1, Lemma 1.2 and (2.4), we get that

$\begin{array}{l} {I_2} \le \sum\limits_{n = 1}^\infty {{n^\beta }} \sum\limits_{i = 1}^\infty E a_{ni}^{p - \delta }|{X_{ni}}{|^{p - \delta }}I({a_{ni}}|{X_{ni}}| > 1)\\ \le \sum\limits_{n = 1}^\infty {{n^\beta }} \sum\limits_{i = 1}^\infty E a_{ni}^{p - \delta }|{X_{ni}}{|^{p - \delta }}I(|{X_{ni}}| > {n^r})\\ \ll \sum\limits_{n = 1}^\infty {{n^\beta }} {n^{ - r(p - \delta - q)}}\sum\limits_{i = 1}^\infty {a_{ni}^q} E|X{|^{p - \delta }}I(|X| > {n^r})\\ \le \sum\limits_{n = 1}^\infty {{n^{ - 1 + r\delta }}} E|X{|^{p - \delta }}I(|X| > {n^r}) \ll E|X{|^p}<\infty . \end{array}$ (2.5)

Next, we will prove $I_1<\infty$. Noting that $p>1$, for any $M\geq p$, we obtain by Markov's inequality that

$\begin{array}{l} {I_1}=\sum\limits_{n=1}^{\infty }{n^{\beta }}\int_{\epsilon }^{\infty }P\left\{ \mathop {\sup }\limits_{k \ge 1} \left| \sum\limits_{i=1}^{k}{a_{ni}}(X_{ni}' - EX_{ni}') \right|>x \right\}dx \\ \le \sum\limits_{n=1}^{\infty }{n^{\beta }}\int_{\epsilon }^{\infty }{x^{-M}}E{\left( \mathop {\sup }\limits_{k \ge 1} \left| \sum\limits_{i=1}^{k}{a_{ni}}(X_{ni}' - EX_{ni}') \right| \right)^{M}}dx \\ \ll \sum\limits_{n=1}^{\infty }{n^{\beta }}E{\left( \mathop {\sup }\limits_{k \ge 1} \left| \sum\limits_{i=1}^{k}{a_{ni}}(X_{ni}' - EX_{ni}') \right| \right)^{M}}. \end{array}$ (2.6)

Obviously, $X_{ni}'$ is a nondecreasing function of $X_{ni}$. Therefore $\{a_{ni}X_{ni}'-Ea_{ni}X_{ni}', i\geq 1, n\geq 1\}$ is also an array of rowwise negatively associated mean-zero random variables.

Case 1 ($1<p< 2$). Taking $\delta>0$ such that $p+\delta<2$, we get by Lemma 1.1, Lemma 1.3, $C_r$ inequality, (2.5) and (2.6) that

$\begin{array}{l} {I_1} \ll \sum\limits_{n = 1}^\infty {{n^\beta }} E{\left( {\mathop {\sup }\limits_{k \ge 1} \left| {\sum\limits_{i = 1}^k {{a_{ni}}} (X_{ni}' - EX_{ni}')} \right|} \right)^{p + \delta }}\\ \ll \sum\limits_{n = 1}^\infty {{n^\beta }} \sum\limits_{i = 1}^\infty {a_{ni}^{p + \delta }} E|X_{ni}'{|^{p + \delta }}\quad ({\rm by\ Lemma\ 1.3\ and\ the\ }C_r{\rm \ inequality})\\ \le \sum\limits_{n = 1}^\infty {{n^\beta }} \sum\limits_{i = 1}^\infty {a_{ni}^{p + \delta }} E|{X_{ni}}{|^{p + \delta }}I({a_{ni}}|{X_{ni}}| \le 1) + \sum\limits_{n = 1}^\infty {{n^\beta }} \sum\limits_{i = 1}^\infty P \left\{ {{a_{ni}}|{X_{ni}}| > 1} \right\}\\ \le \sum\limits_{n = 1}^\infty {{n^\beta }} \sum\limits_{i = 1}^\infty {a_{ni}^{p + \delta }} E|{X_{ni}}{|^{p + \delta }}I({a_{ni}}|{X_{ni}}| \le 1) + \sum\limits_{n = 1}^\infty {{n^\beta }} \sum\limits_{i = 1}^\infty {{a_{ni}}} E|{X_{ni}}|I({a_{ni}}|{X_{ni}}| > 1)\\ \ll \sum\limits_{n = 1}^\infty {{n^\beta }} \sum\limits_{i = 1}^\infty {a_{ni}^{p + \delta }} E|X{|^{p + \delta }}I({a_{ni}}|X| \le 1) + \sum\limits_{n = 1}^\infty {{n^\beta }} \sum\limits_{i = 1}^\infty P \left\{ {{a_{ni}}|X| > 1} \right\} + C\\ \le \sum\limits_{n = 1}^\infty {{n^\beta }} \sum\limits_{i = 1}^\infty {a_{ni}^{p + \delta }} E|X{|^{p + \delta }}I({a_{ni}}|X| \le 1) + C. \end{array}$ (2.7)

Set $I_{nj}=\{i: (n(j+1))^{-r}<a_{ni}\leq (nj)^{-r}\}, \, \, \, j=1, 2, \cdots.$ Then $\cup_{j\geq 1}I_{nj}=\{1, 2, \cdots\}$. Note also that for all $k\geq 1, n\geq 1, M\geq q$,

$\begin{array}{l} {n^{ - 1 - \beta + r(p - q)}} \ge \sum\limits_{i = 1}^\infty {a_{ni}^q} = \sum\limits_{j = 1}^\infty {\sum\limits_{i \in {I_{nj}}} {a_{ni}^q} } \ge \sum\limits_{j = 1}^\infty {(\sharp {I_{nj}}){{(n(j + 1))}^{ - rq}}} \\ \ge {n^{ - rq}}\sum\limits_{j = k}^\infty {(\sharp {I_{nj}}){{(j + 1)}^{ - rM}}} {(k + 1)^{rM - rq}}. \end{array}$

Hence we have

$\sum\limits_{j = k}^\infty {(\sharp {I_{nj}})} {j^{ - rM}} \ll {n^{ - 1 - \beta + rp}}{k^{ - r(M - q)}}.$ (2.8)

Note that for any $p>1, \delta>0$,

$\begin{array}{l} \sum\limits_{n = 1}^\infty {{n^\beta }} \sum\limits_{i = 1}^\infty E a_{ni}^{p + \delta }|X{|^{p + \delta }}I({a_{ni}}|X| \le 1) = \sum\limits_{n = 1}^\infty {{n^\beta }} \sum\limits_{j = 1}^\infty {\sum\limits_{i \in {I_{nj}}} E } a_{ni}^{p + \delta }|X{|^{p + \delta }}I({a_{ni}}|X| \le 1)\\ \le \sum\limits_{n = 1}^\infty {{n^\beta }} \sum\limits_{j = 1}^\infty \sharp {I_{nj}}{(nj)^{ - r(p + \delta )}}E|X{|^{p + \delta }}I(|X| \le {(n(j + 1))^r})\\ = \sum\limits_{n = 1}^\infty {{n^\beta }} \sum\limits_{j = 1}^\infty \sharp {I_{nj}}{(nj)^{ - r(p + \delta )}}E|X{|^{p + \delta }}I(|X| \le {(2n)^r})\\ + \sum\limits_{n = 1}^\infty {{n^\beta }} \sum\limits_{j = 1}^\infty \sharp {I_{nj}}{(nj)^{ - r(p + \delta )}}\sum\limits_{k = 2n + 1}^{n(j + 1)} E |X{|^{p + \delta }}I({(k - 1)^r} < |X| \le {k^r}) = :{J_1} + {J_2}. \end{array}$ (2.9)

By Lemma 1.2 and (2.8), we obtain that

$\begin{array}{l} {J_1} = \sum\limits_{n = 1}^\infty {{n^\beta }} \sum\limits_{j = 1}^\infty \sharp {I_{nj}}{(nj)^{ - r(p + \delta )}}E|X{|^{p + \delta }}I(|X| \le {(2n)^r})\\ \ll \sum\limits_{n = 1}^\infty {{n^{ - 1 - r\delta }}} E|X{|^{p + \delta }}I(|X| \le {(2n)^r}) \ll E|X{|^p}<\infty . \end{array}$ (2.10)

By (2.8),

$\begin{array}{l} {J_2} = \sum\limits_{n = 1}^\infty {{n^\beta }} \sum\limits_{j = 1}^\infty \sharp {I_{nj}}{(nj)^{ - r(p + \delta )}}\sum\limits_{k = 2n + 1}^{n(j + 1)} E |X{|^{p + \delta }}I({(k - 1)^r}<|X| \le {k^r})\\ \le \sum\limits_{n = 1}^\infty {{n^{\beta - r(p + \delta )}}} \sum\limits_{k = 2n + 1}^\infty E |X{|^{p + \delta }}I({(k - 1)^r}<|X| \le {k^r})\sum\limits_{j = [\frac{k}{n}-1]}^\infty \sharp {I_{nj}}{j^{ - r(p + \delta )}}\\ \ll \sum\limits_{n = 1}^\infty {{n^{\beta - r(p + \delta )}}} \sum\limits_{k = 2n + 1}^\infty {{n^{ - 1 - \beta + rp}}} {(\frac{k}{n})^{ - r(p + \delta - q)}}E|X{|^{p + \delta }}I({(k - 1)^r}<|X| \le {k^r})\\ \ll \sum\limits_{k = 2}^\infty {{k^{ - r(p + \delta - q)}}} E|X{|^{p + \delta }}I({(k - 1)^r}<|X| \le {k^r})\sum\limits_{n = 1}^{[k/2]} {{n^{ - 1 + r(p - q)}}} \\ \ll \sum\limits_{k = 2}^\infty {{k^{ - r\delta }}} E|X{|^{p + \delta }}I({(k - 1)^r} < |X| \le {k^r}) \ll E|X{|^p} < \infty . \end{array}$ (2.11)

By (2.9), (2.10) and (2.11), for any $p>1, \delta>0$, we have

$\sum\limits_{n = 1}^\infty {{n^\beta }} \sum\limits_{i = 1}^\infty E a_{ni}^{p + \delta }|X{|^{p + \delta }}I({a_{ni}}|X| \le 1) \ll E|X{|^p} < \infty .$ (2.12)

Combining this with (2.7), we get $I_1<\infty.$

Case 2 ($p\geq 2$). Taking sufficiently large $\delta>0$ such that $\beta-\alpha(p+\delta)/2<-1$, we get by Lemma 1.3, (2.6) and the $C_r$ inequality that

$\begin{array}{l} {I_1} \ll \sum\limits_{n = 1}^\infty {{n^\beta }} E{\left( {\mathop {\sup }\limits_{k \ge 1} \left| {\sum\limits_{i = 1}^k {{a_{ni}}} (X_{ni}' - EX_{ni}')} \right|} \right)^{p + \delta }}\\ \ll \sum\limits_{n = 1}^\infty {{n^\beta }} \sum\limits_{i = 1}^\infty E |{a_{ni}}X_{ni}'{|^{p + \delta }} + \sum\limits_{n = 1}^\infty {{n^\beta }} {\left( {\sum\limits_{i = 1}^\infty E {{({a_{ni}}X_{ni}')}^2}} \right)^{(p + \delta )/2}}\quad ({\rm by\ Lemma\ 1.3})\\ = :{I_{11}} + {I_{12}}. \end{array}$

From the proof of (2.7) and (2.12), we see that $I_{11}<\infty.$ Since $E|X|^p<\infty$ with $p\geq 2$ implies $EX^2<\infty$, by (2.2) we obtain that

${I_{12}} \ll \sum\limits_{n = 1}^\infty {{n^\beta }} {\left( {\sum\limits_{i = 1}^\infty {a_{ni}^2} } \right)^{(p + \delta )/2}} \ll \sum\limits_{n = 1}^\infty {{n^{\beta - \alpha (p + \delta )/2}}}<\infty .$

Thus $I_1<\infty$.

Remark 2.1 As in Remark 2.3 of Guo and Zhu [6], (2.3) implies (1.7). Hence, when $\theta+(1+\mu+\beta)/r>1, $ Theorem 1.1 follows from Theorem 2.1 by taking $p=\theta+(1+\mu+\beta)/r$ and $q=\theta, $ since

$\sum\limits_{i = 1}^\infty {a_{ni}^2} \le \mathop {\sup }\limits_{i \ge 1} {\mkern 1mu} |{a_{ni}}{|^{2 - \theta }}\sum\limits_{i = 1}^\infty | {a_{ni}}{|^\theta } \ll {n^{ - (r(2 - \theta ) - \mu )}}.$

Hence conditions (1.1) and (2.1) are weaker than conditions (1.1) and (1.2). Theorem 2.1 not only extends the results of Volodin et al. [3] and Chen et al. [4] for independent random variables to the negatively associated case, but also provides weaker sufficient conditions for complete moment convergence of the supremum of partial sums for arrays of negatively associated random variables.

Remark 2.2 If $1+\mu+\beta>0$, Theorems 1.2 and 1.3 follow from Theorem 2.1 by taking $p=1+(1+\mu+\beta)/r$ and $q=1.$ Theorem 2.1 thus extends the results of Sung [5] and Guo and Zhu [6]. Moreover, the method used to prove our main results is different from that of Sung [5], and it can be applied efficiently to complete moment convergence for sequences of dependent random variables.

Note that conditions (1.1) and (2.1) together imply

$\sum\limits_{i = 1}^\infty {|{a_{ni}}{|^p}} \ll {n^{ - 1 - \beta }}.$ (2.13)
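Indeed (writing out the one-line verification), since $q<p$, splitting $|a_{ni}|^p=|a_{ni}|^{p-q}|a_{ni}|^q$ and using (1.1) and (2.1),

$\sum\limits_{i = 1}^\infty |a_{ni}|^p \le \Big(\mathop {\sup }\limits_{i \ge 1}|a_{ni}|\Big)^{p-q}\sum\limits_{i = 1}^\infty |a_{ni}|^q \ll n^{-r(p-q)}\cdot n^{-1-\beta+r(p-q)} = n^{-1-\beta}.$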

The following theorem shows that if the moment condition of Theorem 2.1 is replaced by the stronger condition $E|X|^p\log |X|<\infty$, then condition (2.1) can be replaced by the weaker condition (2.13).

Theorem 2.2 Suppose that $\beta\geq -1$. Let $\{X_{ni}, i\geq 1, n\geq 1 \}$ be an array of rowwise negatively associated random variables which are stochastically dominated by a random variable $X$ satisfying $E|X|^p\log|X|<\infty$ for some $p\geq 1$. Let $\{a_{ni}, i\geq 1, n\geq 1 \}$ be an array of constants satisfying (1.1) and (2.13). Furthermore, assume that (2.2) holds for some $\alpha>0$ if $p\geq 2$. Let $EX_{ni}=0$ for all $i\geq 1$ and $n \geq 1$. Then (2.3) holds.

Proof We apply the same notation and method as in the proof of Theorem 2.1 and only present the parts that differ. Noting that $\sum\limits_{n=1}^k n^{-1}\ll \log k$ and $p\geq 1, $ we have

$\begin{array}{l} {I_2} \le \sum\limits_{n = 1}^\infty {{n^\beta }} \sum\limits_{i = 1}^\infty E a_{ni}^p|{X_{ni}}{|^p}I({a_{ni}}|{X_{ni}}| > 1) \le \sum\limits_{n = 1}^\infty {{n^\beta }} \sum\limits_{i = 1}^\infty {a_{ni}^p} E|X{|^p}I(|X| > {n^r})\\ \ll \sum\limits_{n = 1}^\infty {{n^\beta }} {n^{ - 1 - \beta }}E|X{|^p}I(|X| > {n^r}) = \sum\limits_{n = 1}^\infty {{n^{ - 1}}} \sum\limits_{k = n}^\infty E |X{|^p}I({k^r}<|X| \le {(k + 1)^r})\\ = \sum\limits_{k = 1}^\infty E |X{|^p}I({k^r}<|X| \le {(k + 1)^r})\sum\limits_{n = 1}^k {{n^{ - 1}}} \\ \ll \sum\limits_{k = 1}^\infty {\log } kE|X{|^p}I({k^r}<|X| \le {(k + 1)^r}) \ll E|X{|^p}\log |X|<\infty . \end{array}$ (2.14)

Set $I_{nj}=\{i: (n(j+1))^{-r}<a_{ni}\leq (nj)^{-r}\}, \, \, \, j=1, 2, \cdots.$ Note that for all $k\geq 1, n\geq 1, M\geq p$,

${n^{ - 1 - \beta }} \ge \sum\limits_{i = 1}^\infty {a_{ni}^p} = \sum\limits_{j = 1}^\infty {\sum\limits_{i \in {I_{nj}}} {a_{ni}^p} } \ge {n^{ - rp}}\sum\limits_{j = k}^\infty {(\sharp {I_{nj}}){{(j + 1)}^{ - rM}}} {(k + 1)^{rM - rp}}.$

Hence we have $\displaystyle \sum_{j=k}^\infty(\sharp I_{nj})j^{-rM} \ll n^{-1-\beta+rp}k^{-r(M-p)}.$ Similar to the corresponding part of the proof of (2.12), for any $p\geq 1, \delta>0, $ we can obtain that

$\begin{array}{l} \sum\limits_{n = 1}^\infty {{n^\beta }} \sum\limits_{i = 1}^\infty E a_{ni}^{p + \delta }|X{|^{p + \delta }}I({a_{ni}}|X| \le 1)\\ \ll \sum\limits_{n = 1}^\infty {{n^{ - 1 - r\delta }}} E|X{|^{p + \delta }}I(|X| \le {(2n)^r})\\ + \sum\limits_{n = 1}^\infty {{n^{\beta - r(p + \delta )}}} \sum\limits_{k = 2n + 1}^\infty {{n^{ - 1 - \beta + rp}}} {(\frac{k}{n})^{ - r\delta }}E|X{|^{p + \delta }}I({(k - 1)^r}<|X| \le {k^r})\\ \ll E|X{|^p} + E|X{|^p}\log |X|<\infty . \end{array}$ (2.15)

The rest of the proof is the same as that of Theorem 2.1 and is omitted.

Corollary 2.1 Suppose that $\beta\geq -1$. Let $\{X_{ni}, i\geq 1, n\geq 1 \}$ be an array of rowwise negatively associated random variables which are stochastically dominated by a random variable $X$. Let $\{a_{ni}, i\geq 1, n\geq 1 \}$ be an array of constants satisfying (1.1) and (1.2) for some $\theta$ and $\mu$ such that $\mu<2r$ and $1\leq\theta<\min\{2, 2-\mu/r\}.$ Furthermore, assume that $EX_{ni}=0$ for all $i\geq 1$ and $n \geq 1$. If

$\begin{array}{l} E|X{|^\theta }\log |X|<\infty \ \ {\rm for}\ \ 1 + \mu + \beta = 0, \\ E|X{|^{\theta + (1 + \mu + \beta )/r}} < \infty \ \ {\rm for}\ \ 1 + \mu + \beta > 0, \end{array}$

then (2.3) holds.

Proof If $1+\mu+\beta=0, $ we take $p=\theta$ in Theorem 2.2. If $1+\mu+\beta>0$, we take $p=\theta+(1+\mu+\beta)/r$ and $q=\theta$ in Theorem 2.1. Hence (2.3) holds by Theorems 2.1 and 2.2.

Remark 2.3 Corollary 2.1 extends the results of Sung [5] and Guo and Zhu [6] from $\theta=1$ to $1\leq \theta< 2$.

The following theorems extend Theorem 1.1 to negatively dependent random variables. The proofs are the same as those of Theorems 2.1 and 2.2 except that Lemma 1.4 is used instead of Lemma 1.3.

Theorem 2.3 Suppose that $\beta\geq -1$. Let $\{X_{ni}, i\geq 1, n\geq 1 \}$ be an array of rowwise negatively dependent random variables which are stochastically dominated by a random variable $X$ satisfying $E|X|^p<\infty$ for some $p>1$. Let $\{a_{ni}, i\geq 1, n\geq 1 \}$ be an array of constants satisfying (1.1) and (2.1). Furthermore, assume that (2.2) holds for some $\alpha>0$ if $p\geq 2$. Let $EX_{ni}=0$ for all $i\geq 1$ and $n \geq 1$. Then

$\sum\limits_{n = 1}^\infty {{n^\beta }} E{\left( {\left| {\sum\limits_{i = 1}^\infty {{a_{ni}}} {X_{ni}}} \right| - \epsilon} \right)^ + } < \infty\ \ {\rm for\ all}\ \ \epsilon > 0.$ (2.16)

Theorem 2.4 Suppose that $\beta\geq -1$. Let $\{X_{ni}, i\geq 1, n\geq 1 \}$ be an array of rowwise negatively dependent random variables which are stochastically dominated by a random variable $X$ satisfying $E|X|^p\log|X|<\infty$ for some $p\geq 1$. Let $\{a_{ni}, i\geq 1, n\geq 1 \}$ be an array of constants satisfying (1.1) and (2.13). Furthermore, assume that (2.2) holds for some $\alpha>0$ if $p\geq 2$. Let $EX_{ni}=0$ for all $i\geq 1$ and $n \geq 1$. Then (2.16) holds.

Remark 2.4 If $1+\mu+\beta=0, $ we take $p=\theta$ in Theorem 2.4. If $1+\mu+\beta>0$, we take $p=\theta+(1+\mu+\beta)/r$ and $q=\theta$ in Theorem 2.3. Therefore Theorem 1.4 follows from Theorems 2.3 and 2.4. However, Theorem 1.4 does not cover the case $\beta=-1$, whereas our results do.

If the array $\{X_{ni}, i\geq 1, n\geq 1 \}$ in Theorems 2.1 and 2.2 is replaced by a sequence $\{X_n, n\geq 1\}$, then we can extend Theorem 1.1 to $\rho^*$-mixing random variables.

Theorem 2.5 Suppose that $\beta\geq -1$. Let $\{X_{i}, i\geq 1 \}$ be a sequence of $\rho^*$-mixing random variables which are stochastically dominated by a random variable $X$ satisfying $E|X|^p<\infty$ for some $p>1$. Let $\{a_{ni}, i\geq 1, n\geq 1 \}$ be an array of constants satisfying (1.1) and (2.1). Furthermore, assume that (2.2) holds for some $\alpha>0$ if $p\geq 2$. Let $EX_{i}=0$ for all $i\geq 1$. Then

$\sum\limits_{n = 1}^\infty {{n^\beta }} E{\left( {\mathop {\sup }\limits_{k \ge 1} \left| {\sum\limits_{i = 1}^k {{a_{ni}}} {X_i}} \right| - \epsilon} \right)^ + } < \infty\ \ {\rm for\ all}\ \ \epsilon > 0.$ (2.17)

Proof For any $i\geq 1, n\geq 1$, let $\displaystyle X_{ni}= X_{i}I(|a_{ni}X_{i}|\leq 1).$ Note that

$\begin{array}{l} \sum\limits_{n = 1}^\infty {{n^\beta }} E{\left( {\mathop {\sup }\limits_{k \ge 1} \left| {\sum\limits_{i = 1}^k {{a_{ni}}} {X_i}} \right| - \epsilon} \right)^ + }\\ \le \sum\limits_{n = 1}^\infty {{n^\beta }} E{\left( {\mathop {\sup }\limits_{k \ge 1} \left| {\sum\limits_{i = 1}^k {{a_{ni}}} ({X_{ni}} - E{X_{ni}})} \right| - \epsilon} \right)^ + } + \sum\limits_{n = 1}^\infty {{n^\beta }} \sum\limits_{i = 1}^\infty E |{a_{ni}}{X_i}|I(|{a_{ni}}{X_i}| > 1). \end{array}$

The rest of the proof is the same as that of Theorem 2.1, except that we use Lemma 1.5 instead of Lemma 1.3, and is omitted.

Theorem 2.6 Suppose that $\beta\geq -1$. Let $\{X_{i}, i\geq 1 \}$ be a sequence of $\rho^*$-mixing random variables which are stochastically dominated by a random variable $X$ satisfying $E|X|^p\log|X|<\infty$ for some $p\geq 1$. Let $\{a_{ni}, i\geq 1, n\geq 1 \}$ be an array of constants satisfying (1.1) and (2.13). Furthermore, assume that (2.2) holds for some $\alpha>0$ if $p\geq 2$. Let $EX_{i}=0$ for all $i\geq 1$. Then (2.17) holds.

Proof For any $i\geq 1, n\geq 1$, let $\displaystyle X_{ni}= X_{i}I(|a_{ni}X_{i}|\leq 1).$ Note that

$\begin{array}{l} \sum\limits_{n = 1}^\infty {{n^\beta }} E{\left( {\mathop {\sup }\limits_{k \ge 1} \left| {\sum\limits_{i = 1}^k {{a_{ni}}} {X_i}} \right| - \epsilon} \right)^ + }\\ \le \sum\limits_{n = 1}^\infty {{n^\beta }} E{\left( {\mathop {\sup }\limits_{k \ge 1} \left| {\sum\limits_{i = 1}^k {{a_{ni}}} ({X_{ni}} - E{X_{ni}})} \right| - \epsilon} \right)^ + } + \sum\limits_{n = 1}^\infty {{n^\beta }} \sum\limits_{i = 1}^\infty E |{a_{ni}}{X_i}|I(|{a_{ni}}{X_i}| > 1). \end{array}$

The rest of the proof is the same as that of Theorem 2.2, except that we use Lemma 1.5 instead of Lemma 1.3, and is omitted.

Remark 2.5 As in Remark 3.7 of Sung [5], Theorems 2.5 and 2.6 cannot be extended to arrays $\{X_{ni}, i\geq 1, n\geq 1\}$ of rowwise $\rho^*$-mixing random variables by the method used to prove Theorems 2.1 and 2.2.

References
[1] Hsu P L, Robbins H. Complete convergence and the law of large numbers[J]. Proc. Nat. Acad. Sci. USA, 1947, 33: 25–31. DOI:10.1073/pnas.33.2.25
[2] Erdős P. On a theorem of Hsu and Robbins[J]. Ann. Math. Statist., 1949, 20: 286–291. DOI:10.1214/aoms/1177730037
[3] Volodin A, Giuliano Antonini R, Hu T C. A note on the rate of complete convergence for weighted sums of arrays of Banach space valued random elements[J]. Lobachevskii J. Math., 2004, 15: 21–33.
[4] Chen P, Sung S H, Volodin A I. Rate of complete convergence for arrays of Banach space valued random elements[J]. Sib. Adv. Math., 2006, 16: 1–14.
[5] Sung S H. On complete convergence for weighted sums of arrays of dependent random variables[J]. Abstr. Appl. Anal., 2011, 2011: 11.
[6] Guo Mingle, Zhu Dongjin. On complete moment convergence of weighted sums for arrays of rowwise negatively associated random variables[J]. J. Prob. Stat., 2012, 2012: 12.
[7] Wu Qunying. A complete convergence theorem for weighted sums of arrays of rowwise negatively dependent random variables[J]. J. Inequ. Appl., 2012, 2012: 50. DOI:10.1186/1029-242X-2012-50
[8] Alam K, Saxena K M L. Positive dependence in multivariate distributions[J]. Comm. Stat. The. Meth. A, 1981, 10: 1183–1196. DOI:10.1080/03610928108828102
[9] Joag-Dev K, Proschan F. Negative association of random variables with applications[J]. Ann. Stat., 1983, 11: 286–295. DOI:10.1214/aos/1176346079
[10] Shao Qiman. A comparison theorem on moment inequalities between negatively associated and independent random variables[J]. J. Theoret. Prob., 2000, 13: 343–356. DOI:10.1023/A:1007849609234
[11] Lehmann E L. Some concepts of dependence[J]. Ann. Math. Stat., 1966, 37: 1137–1153. DOI:10.1214/aoms/1177699260
[12] Asadian N, Fakoor V, Bozorgnia A. Rosenthal's type inequalities for negatively orthant dependent random variables[J]. J. Iranian Stat. Soc., 2006, 5: 66–75.
[13] Yang Sanchao. Some moment inequalities for partial sums of random variables and their applications[J]. Chinese Sci. Bull., 1998, 43: 1823–1827. DOI:10.1007/BF02883381
[14] Utev S, Peligrad M. Maximal inequalities and an invariance principle for a class of weakly dependent random variables[J]. J. Theoret. Prob., 2003, 16: 101–115. DOI:10.1023/A:1022278404634