Journal of Mathematics (PRC)  2014, Vol. 34 Issue (1): 31-36
COMPLETE CONVERGENCE FOR WEIGHTED SUMS OF $\tilde{\varphi }$-MIXING RANDOM VARIABLES
HUANG Hai-wu1,2, WANG Ding-cheng3,4, PENG Jiang-yan3    
1. Guangxi Scientific Experiment Center of Mining, Metallurgy and Environment, Guilin 541004, China;
2. College of Science, Guilin University of Technology, Guilin 541004, China;
3. School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu 610054, China;
4. Institute of Financial Engineering; School of Finance; School of Applied Mathematics, Nanjing Audit University, Nanjing 211815, China
Abstract: In this article, we study the complete convergence for weighted sums of $\tilde{\varphi }$-mixing random variables without the assumption of identical distribution. By using the truncation method and a moment inequality for such variables, we obtain the complete convergence and the strong law of large numbers for weighted sums of $\tilde{\varphi }$-mixing random variables, which generalize and extend the corresponding results for independent and identically distributed random variables.
Key words: $\tilde{\varphi }$-mixing random variables; complete convergence; strong law of large numbers; weighted sums
1 Introduction

Let $\{{{X}_{n}};n\ge 1\}$ be a sequence of random variables defined on a probability space $(\Omega ,F,P)$. For positive integers $n\le m$, let $F_{n}^{m}$ denote the $\sigma $-algebra generated by the random variables ${{X}_{n}},{{X}_{n+1}},\cdots ,{{X}_{m}}$. For a nonempty set $S\subset \mathbb{N}$, define ${{F}_{S}}=\sigma ({{X}_{i}};i\in S)$. Given two $\sigma $-algebras $\psi ,\zeta $ in $F$, define

$\begin{equation}\label{1.1} \varphi (\psi ,\zeta )=\sup \left\{ \left| P(B\left| A \right.)-P(B) \right|;A\in \psi ,P(A)>0,B\in \zeta \right\}, \end{equation}$ (1.1)

and define the $\tilde{\varphi }$ -mixing coefficients by

$\begin{equation}\label{1.2} \tilde{\varphi }(n)=\sup \{\varphi ({{F}_{S}},{{F}_{T}});\textrm{ finite subsets } S,T \subset \mathbb{N} \textrm{ such that dist(}S,T\text{)}\ge n\},\quad n\ge 0. \end{equation}$ (1.2)

Obviously, $0\le \tilde{\varphi }(n+1)\le \tilde{\varphi }(n)\le 1$, $n\ge 0$ and $\tilde{\varphi }(0)=1$.

Definition 1.1 A sequence of random variables $\{{{X}_{n}};n\ge 1\}$ is said to be a $\tilde{\varphi }$-mixing sequence of random variables if there exists $k\in \mathbb{N}$ such that $\tilde{\varphi }(k)<1$.

Note that if $\{{{X}_{n}};n\ge 1\}$ is a sequence of independent random variables, then $\tilde{\varphi }(n)=0$ for all $n\ge 1$.
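A standard example, added here for illustration: if $\{{{X}_{n}};n\ge 1\}$ is $m$-dependent, that is, $({{X}_{i}};i\in S)$ and $({{X}_{i}};i\in T)$ are independent whenever $S,T$ are finite with $\textrm{dist}(S,T)>m$, then $P(B\left| A \right.)=P(B)$ for all admissible $A\in {{F}_{S}}$, $B\in {{F}_{T}}$, so $\tilde{\varphi }(n)=0$ for every $n>m$. Hence every $m$-dependent sequence is $\tilde{\varphi }$-mixing, with $k=m+1$ in Definition 1.1.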

The concept of $\tilde{\varphi }$-mixing was introduced by Wu and Lin [1], and a number of publications have since been devoted to $\tilde{\varphi }$-mixing random variables. For example, we refer to Wu and Lin [1] for the complete convergence and strong law of large numbers in the identically distributed case, Wang, Hu et al. [2] for convergence properties of the partial sums, Wang et al. [3] for the strong law of large numbers and growth rate, Jiang and Wu [4] for weak convergence and complete convergence, Shen, Wang et al. [5] for strong convergence properties of sums of products, and so on.

The main purpose of this paper is to study the complete convergence and strong law of large numbers for weighted sums of $\tilde{\varphi }$-mixing random variables without the assumption of identical distribution. The results obtained not only generalize and extend the corresponding results for independent and identically distributed random variables to the $\tilde{\varphi }$-mixing case, but also improve the almost sure convergence result of Wu and Lin [1] under a mild condition on the weights.

Throughout this paper, $C$ will represent a positive constant whose value may change from one appearance to the next, and ${{a}_{n}}=O({{b}_{n}})$ will mean ${{a}_{n}}\le C{{b}_{n}}$. We assume that $\phi (x)$ is a positive increasing function on $(0,\infty )$ such that $\phi (x)\uparrow \infty $ as $x\to \infty $, and that $\varphi (x)$ is the inverse function of $\phi (x)$. Since $\phi (x)\uparrow \infty $ as $x\to \infty $, it follows that $\varphi (x)\uparrow \infty $ as $x\to \infty $. For convenience, we let $\phi (0)=0$ and $\varphi (0)=0$.

To obtain our results, the following lemmas are needed.

Lemma 1.1 Let $\{{{X}_{n}};n\ge 1\}$ be a sequence of $\tilde{\varphi }$-mixing random variables with $E{{X}_{n}}=0$ and $E{{\left| {{X}_{n}} \right|}^{r}}<\infty $ for some $r\ge 1$ and all $n\ge 1$. Then there exists a constant $C=C(r,\tilde{\varphi }(k))$ depending only on $r$ and $\tilde{\varphi }(k)$ such that for any $n\ge 1$,

$\begin{eqnarray}\label{1.3} && E\left( \underset{1\le j\le n}{\mathop{\max\limits }}\,{{\left| \sum\limits_{i=1}^{j}{{{X}_{i}}} \right|}^{r}} \right)\le C{{\log }^{r}}n\left[ \sum\limits_{i=1}^{n}{E{{\left| {{X}_{i}} \right|}^{r}}} \right], \quad 1<r\le 2; \end{eqnarray}$ (1.3)
$\begin{eqnarray} && E\left( \underset{1\le j\le n}{\mathop{\max\limits }}\,{{\left|\sum\limits_{i=1}^{j}{{{X}_{i}}} \right|}^{r}} \right)\le C{{\log}^{r}}n\left[ \sum\limits_{i=1}^{n}{E{{\left| {{X}_{i}}\right|}^{r}}}+{{\left( \sum\limits_{i=1}^{n}{(EX_{i}^{2})}\right)}^{r/2}} \right], \quad r>2. \end{eqnarray}$ (1.4)

Proof See Wu [6]; we omit the details.

Lemma 1.2 Assume that the inverse function $\varphi (x)$ of $\phi (x)$ satisfies

$\begin{equation}\label{1.5} \varphi (n)\sum\limits_{i=1}^{n}{\frac{1}{\varphi (i)}}=O(n). \end{equation}$ (1.5)

If $E\left[ \phi \left( \left| X \right| \right) \right]<\infty $, then

$\begin{equation}\label{1.6} \sum\limits_{n=1}^{\infty }{\frac{1}{\varphi (n)}E\left| X \right|I\left( \left| X \right|>\varphi (n) \right)}<\infty. \end{equation}$ (1.6)

Proof The proof is similar to that of Lemma 1 in Sung [7], so we omit it.
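As a concrete check of condition (1.5) (a worked example we add; it is not needed for the proof): take $\phi (x)={{x}^{p}}$ with $p>1$, so that $\varphi (x)={{x}^{1/p}}$. Then

$ \varphi (n)\sum\limits_{i=1}^{n}{\frac{1}{\varphi (i)}}={{n}^{1/p}}\sum\limits_{i=1}^{n}{{{i}^{-1/p}}}\le {{n}^{1/p}}\int_{0}^{n}{{{x}^{-1/p}}dx}={{n}^{1/p}}\cdot \frac{p}{p-1}{{n}^{1-1/p}}=\frac{p}{p-1}n=O(n), $

and the moment condition $E\left[ \phi \left( \left| X \right| \right) \right]<\infty $ reduces to $E{{\left| X \right|}^{p}}<\infty $.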

The following lemma is well known; see, for example, [6].

Lemma 1.3 Let $\{{{X}_{n}};n\ge 1\}$ be a sequence of random variables which is stochastically dominated by a random variable $X$; that is, there exists a positive constant $C$ such that

$\begin{equation}\label{1.7} P(\left| {{X}_{n}} \right|\ge x)\le CP(\left| X \right|\ge x) \end{equation}$ (1.7)

for all $x\ge 0$ and all $n\ge 1$. Then for any $\beta >0$ and $t>0$,

$\begin{eqnarray}\label{1.8} && E{{\left| {{X}_{n}} \right|}^{\beta }}I(\left| {{X}_{n}} \right|\le t)\le C(E{{\left| X \right|}^{\beta }}I(\left| X \right|\le t)+{{t}^{\beta }}P(\left| X \right|>t)); \end{eqnarray}$ (1.8)
$\begin{eqnarray} && E{{\left| {{X}_{n}} \right|}^{\beta }}I(\left| {{X}_{n}} \right|>t)\le CE{{\left| X \right|}^{\beta }}I(\left| X \right|>t). \end{eqnarray}$ (1.9)
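For completeness, we sketch a short verification of (1.8); this standard argument is added here for the reader, and (1.9) is proved in the same way. Since $EY=\int_{0}^{\infty }{P(Y>s)ds}$ for $Y={{\left| {{X}_{n}} \right|}^{\beta }}I(\left| {{X}_{n}} \right|\le t)\ge 0$, and $P(Y>s)\le P(\left| {{X}_{n}} \right|>{{s}^{1/\beta }})$ for $0\le s<{{t}^{\beta }}$ while $P(Y>s)=0$ for $s\ge {{t}^{\beta }}$, the domination condition (1.7) gives

$ E{{\left| {{X}_{n}} \right|}^{\beta }}I(\left| {{X}_{n}} \right|\le t)\le C\int_{0}^{{{t}^{\beta }}}{P\left( \left| X \right|>{{s}^{1/\beta }} \right)ds}=C\left( E{{\left| X \right|}^{\beta }}I(\left| X \right|\le t)+{{t}^{\beta }}P(\left| X \right|>t) \right), $

where the last equality follows from $\int_{0}^{{{t}^{\beta }}}{P\left( {{\left| X \right|}^{\beta }}>s \right)ds}=E\left[ \min ({{\left| X \right|}^{\beta }},{{t}^{\beta }}) \right]$.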
2 Main Results and Proofs

Theorem 2.1 Let $\{{{X}_{n}};n\ge 1\}$ be a sequence of $\tilde{\varphi }$-mixing random variables which is stochastically dominated by a random variable $X$. Suppose that $E{{X}_{n}}=0$ for each $n\ge 1$, $E{{\left| X \right|}^{r}}<\infty $ for some $1<r\le 2$ and $E\left[ \phi \left( \left| X \right| \right) \right]<\infty $. Assume that the inverse function $\varphi (x)$ of $\phi (x)$ satisfies (1.5). Let $\{{{a}_{ni}};n\ge 1,i\ge 1\}$ be an array of constants such that

(1) $\underset{1\le i\le n}{\mathop{\max\limits }}\,\left| {{a}_{ni}} \right|=O\left( \frac{1}{\varphi (n)} \right)$;

(2) $\sum\limits_{i=1}^{n}{|a_{ni}|^{r}}=O\left( {{\log }^{-1-\alpha }}n \right) \quad \textrm{ for } 1<r\le 2 \quad \textrm{ and some } \alpha >r$. Then

$\begin{equation}\label{2.1} \sum\limits_{n=1}^{\infty }{{{n}^{-1}}P\left( \underset{1\le j\le n}{\mathop{\max\limits }}\,\left| \sum\limits_{i=1}^{j}{{{a}_{ni}}{{X}_{i}}} \right|>\varepsilon \right)}<\infty \end{equation}$ (2.1)

for all $\varepsilon >0$.
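The hypotheses of Theorem 2.1 are compatible; the following concrete choice (our illustration, not from the original sources) satisfies all of them. Take $\phi (x)={{x}^{p}}$ with $1<p\le r$, so that $\varphi (n)={{n}^{1/p}}$ satisfies (1.5) by the example after Lemma 1.2, and $E\left[ \phi \left( \left| X \right| \right) \right]=E{{\left| X \right|}^{p}}<\infty $ follows from $E{{\left| X \right|}^{r}}<\infty $. For $n\ge 2$, let

$ {{a}_{ni}}={{n}^{-1/p}}{{\log }^{-(1+\alpha )/r}}n,\quad 1\le i\le n. $

Then $\underset{1\le i\le n}{\mathop{\max\limits }}\,\left| {{a}_{ni}} \right|\le {{n}^{-1/p}}=\frac{1}{\varphi (n)}$, and $\sum\limits_{i=1}^{n}{|a_{ni}|^{r}}={{n}^{1-r/p}}{{\log }^{-1-\alpha }}n\le {{\log }^{-1-\alpha }}n$ since $p\le r$, so conditions (1) and (2) hold.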

Proof For $n\ge 1$, define

$ {{Y}_{i}}={{X}_{i}}I\left( \left| {{X}_{i}} \right|\le \varphi (n) \right), {{T}_{j}}=\sum\limits_{i=1}^{j}{\left( {{a}_{ni}}{{Y}_{i}}-E{{a}_{ni}}{{Y}_{i}} \right)}, 1\le j\le n. $

It is easy to check that

$\begin{eqnarray*} \sum\limits_{i=1}^{j}{{{a}_{ni}}{{X}_{i}}}& = & \sum\limits_{i=1}^{j}{{{a}_{ni}}{{X}_{i}}I\left( \left| {{X}_{i}} \right|\le \varphi (n) \right)}+\sum\limits_{i=1}^{j}{{{a}_{ni}}{{X}_{i}}I\left( \left| {{X}_{i}} \right|>\varphi (n) \right)} \\ & = & {{T}_{j}}+\sum\limits_{i=1}^{j}{E{{a}_{ni}}{{Y}_{i}}}+\sum\limits_{i=1}^{j}{{{a}_{ni}}{{X}_{i}}}I\left( \left| {{X}_{i}} \right|>\varphi (n) \right); \\ \left( \underset{1\le j\le n}{\mathop{\max\limits }}\,\left| \sum\limits_{i=1}^{j}{{{a}_{ni}}{{X}_{i}}} \right|>\varepsilon \right)& \subset & \left( \underset{1\le j\le n}{\mathop{\max\limits }}\,\left| {{T}_{j}}+\sum\limits_{i=1}^{j}{E{{a}_{ni}}{{Y}_{i}}} \right|>\varepsilon \right)\cup \left( \sum\limits_{i=1}^{n}{{{a}_{ni}}{{X}_{i}}}I\left( \left| {{X}_{i}} \right|>\varphi (n) \right)\ne 0 \right)\\ & \subset & \left( \underset{1\le j\le n}{\mathop{\max\limits }}\,\left| {{T}_{j}} \right|>\varepsilon -\underset{1\le j\le n}{\mathop{\max\limits }}\,\left| \sum\limits_{i=1}^{j}{E{{a}_{ni}}{{Y}_{i}}} \right| \right)\cup \bigcup\limits_{i=1}^{n}{\left( \left| {{X}_{i}} \right|>\varphi (n) \right)}. \end{eqnarray*}$

Hence

$\begin{eqnarray} && P\left( \underset{1\le j\le n}{\mathop{\max\limits }}\,\left| \sum\limits_{i=1}^{j}{{{a}_{ni}}{{X}_{i}}} \right|>\varepsilon \right)\nonumber\\ & \le & P\left( \underset{1\le j\le n}{\mathop{\max\limits }}\,\left| {{T}_{j}} \right|>\varepsilon -\underset{1\le j\le n}{\mathop{\max\limits }}\,\left| \sum\limits_{i=1}^{j}{E{{a}_{ni}}{{Y}_{i}}} \right| \right)+P\left( \bigcup\limits_{i=1}^{n}{\left( \left| {{X}_{i}} \right|>\varphi (n) \right)} \right) \nonumber\\ & \le & P\left(\underset{1\le j\le n}{\mathop{\max\limits }}\,\left| {{T}_{j}} \right|>\varepsilon -\underset{1\le j\le n}{\mathop{\max\limits }}\,\left| \sum\limits_{i=1}^{j}{E{{a}_{ni}}{{Y}_{i}}} \right| \right)+\sum\limits_{i=1}^{n}{P\left( \left| {{X}_{i}} \right|>\varphi (n) \right)}. \end{eqnarray}$ (2.2)

First, we prove that

$\begin{equation}\label{2.3} \underset{1\le j\le n}{\mathop{\max\limits }}\,\left| \sum\limits_{i=1}^{j}{E{{a}_{ni}}{{Y}_{i}}} \right|\to 0 \quad\textrm{ as }n\to \infty. \end{equation}$ (2.3)

It follows from $E\left[ \phi \left( \left| X \right| \right) \right]<\infty $, Lemma 1.2 and Kronecker's lemma that

$\begin{equation}\label{2.4} \frac{1}{\varphi (n)}\sum\limits_{i=1}^{n}{E\left| X \right|I\left( \left| X \right|>\varphi (i) \right)}\to 0 \quad\textrm{ as }n\to \infty. \end{equation}$ (2.4)
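To spell out this step (an added remark): apply Kronecker's lemma with ${{b}_{n}}=\varphi (n)\uparrow \infty $ and ${{x}_{n}}=\frac{1}{\varphi (n)}E\left| X \right|I\left( \left| X \right|>\varphi (n) \right)$. Lemma 1.2 gives $\sum\limits_{n=1}^{\infty }{{{x}_{n}}}<\infty $, and Kronecker's lemma then yields $\frac{1}{{{b}_{n}}}\sum\limits_{i=1}^{n}{{{b}_{i}}{{x}_{i}}}=\frac{1}{\varphi (n)}\sum\limits_{i=1}^{n}{E\left| X \right|I\left( \left| X \right|>\varphi (i) \right)}\to 0$, which is exactly (2.4).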

From $E{{X}_{i}}=0$, $\underset{1\le i\le n}{\mathop{\max\limits }}\,\left| {{a}_{ni}} \right|=O\left( \frac{1}{\varphi (n)} \right)$, Lemma 1.3, (2.4) and $\varphi (n)\uparrow \infty $, we get

$\begin{eqnarray} && \underset{1\le j\le n}{\mathop{\max\limits }}\,\left| \sum\limits_{i=1}^{j}{E{{a}_{ni}}{{Y}_{i}}} \right|= \underset{1\le j\le n}{\mathop{\max\limits }}\,\left| \sum\limits_{i=1}^{j}{E{{a}_{ni}}{{X}_{i}}I\left( \left| {{X}_{i}} \right|>\varphi (n) \right)} \right| \nonumber\\ & \le & \sum\limits_{i=1}^{n}{E\left| {{a}_{ni}}{{X}_{i}} \right|I\left( \left| {{X}_{i}} \right|>\varphi (n) \right)} \le \sum\limits_{i=1}^{n}{\left| {{a}_{ni}} \right|E\left| X \right|I\left( \left| X \right|>\varphi (n) \right)}\nonumber\\ & \le & \frac{1}{\varphi (n)}\sum\limits_{i=1}^{n}{E\left| X \right|I\left( \left| X \right|>\varphi (i) \right)}\to 0 \end{eqnarray}$ (2.5)

as $n\to \infty $, which proves (2.3). It follows from (2.2) and (2.3) that

$\begin{equation}\label{2.6} P\left( \underset{1\le j\le n}{\mathop{\max\limits }}\,\left| \sum\limits_{i=1}^{j}{{{a}_{ni}}{{X}_{i}}} \right|>\varepsilon \right)\le P\left( \underset{1\le j\le n}{\mathop{\max\limits }}\,\left| {{T}_{j}} \right|>\frac{\varepsilon }{2} \right)+\sum\limits_{i=1}^{n}{P\left( \left| {{X}_{i}} \right|>\varphi (n) \right)} \end{equation}$ (2.6)

for $n$ large enough.

Hence, to prove (2.1), it suffices to show that

$\begin{eqnarray} && \sum\limits_{n=1}^{\infty }{{{n}^{-1}}}P\left( \underset{1\le j\le n}{\mathop{\max\limits }}\,\left| {{T}_{j}} \right|>\frac{\varepsilon }{2} \right)<\infty; \end{eqnarray}$ (2.7)
$\begin{eqnarray} && \sum\limits_{n=1}^{\infty }{{{n}^{-1}}}\sum\limits_{i=1}^{n}{P\left( \left| {{X}_{i}} \right|>\varphi (n) \right)}<\infty. \end{eqnarray}$ (2.8)

It follows from Markov's inequality, (1.3) of Lemma 1.1, Lemma 1.3, $E{{\left| X \right|}^{r}}<\infty $, $\alpha >r$ and condition (2) that

$\begin{eqnarray} && \sum\limits_{n=1}^{\infty }{{{n}^{-1}}}P\left( \underset{1\le j\le n}{\mathop{\max\limits }}\,\left| {{T}_{j}} \right|>\frac{\varepsilon }{2} \right) \le C\sum\limits_{n=1}^{\infty }{{{n}^{-1}}}E\left( \underset{1\le j\le n}{\mathop{\max\limits }}\,{{\left| {{T}_{j}} \right|}^{r}} \right) \le C\sum\limits_{n=1}^{\infty }{{{n}^{-1}}{{\log }^{r}}n\sum\limits_{i=1}^{n}{E{{\left| {{a}_{ni}}{{Y}_{i}} \right|}^{r}}}} \nonumber\\ & = & C\sum\limits_{n=1}^{\infty }{{{n}^{-1}}{{\log }^{r}}n\sum\limits_{i=1}^{n}{|a_{ni}|^{r}E{{\left| {{X}_{i}} \right|}^{r}}I\left( \left| {{X}_{i}} \right|\le \varphi (n) \right)}} \le C\sum\limits_{n=1}^{\infty }{{{n}^{-1}}{{\log }^{r}}n\sum\limits_{i=1}^{n}{|a_{ni}|^{r}E{{\left| X \right|}^{r}}I\left( \left| X \right|\le \varphi (n) \right)}} \nonumber\\ & \le & C\sum\limits_{n=1}^{\infty }{{{n}^{-1}}{{\log }^{r}}n\sum\limits_{i=1}^{n}{|a_{ni}|^{r}}} \le C\sum\limits_{n=1}^{\infty }{{{n}^{-1}}{{\log }^{r-1-\alpha}}n}<\infty. \end{eqnarray}$ (2.9)

It follows from $E\left[ \phi \left( \left| X \right| \right) \right]<\infty $ and the stochastic domination condition (1.7) that

$\begin{eqnarray*} && \sum\limits_{n=1}^{\infty }{{{n}^{-1}}}\sum\limits_{i=1}^{n}{P\left( \left| {{X}_{i}} \right|>\varphi (n) \right)} \le C\sum\limits_{n=1}^{\infty }{{{n}^{-1}}}nP\left( \left| X \right|>\varphi (n) \right)\\ & = & C\sum\limits_{n=1}^{\infty }{P\left( \phi \left( \left| X \right| \right)>n \right)} \le CE\left[ \phi \left( \left| X \right| \right) \right] < \infty. \end{eqnarray*}$
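The last step uses the elementary bound (added for completeness): for $Y=\phi \left( \left| X \right| \right)\ge 0$, $\sum\limits_{n=1}^{\infty }{P(Y>n)}\le \int_{0}^{\infty }{P(Y>t)dt}=EY$, since $P(Y>t)$ is nonincreasing in $t$.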

The proof of Theorem 2.1 is completed.
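As a numerical illustration of Theorem 2.1 (a Monte Carlo sketch under assumed parameters, not part of the original argument), the following Python snippet takes i.i.d. standard normal ${{X}_{i}}$ (an independent, hence $\tilde{\varphi }$-mixing, sequence), $\phi (x)={{x}^{p}}$ with $p=1.5$, $r=2$, $\alpha =3>r$, and the weights ${{a}_{ni}}={{n}^{-1/p}}{{\log }^{-(1+\alpha )/r}}n$ from the example after Theorem 2.1; it estimates $P\left( \underset{1\le j\le n}{\mathop{\max\limits }}\,\left| \sum\limits_{i=1}^{j}{{{a}_{ni}}{{X}_{i}}} \right|>\varepsilon \right)$ for several $n$.

import numpy as np

# Monte Carlo sketch for Theorem 2.1 (illustration only; the parameters
# below are our assumptions): X_i i.i.d. N(0,1), phi(x) = x^p with p = 1.5,
# weights a_{ni} = n^{-1/p} * log(n)^{-(1+alpha)/r}, r = 2, alpha = 3 > r.
rng = np.random.default_rng(0)
p, r, alpha, eps, reps = 1.5, 2.0, 3.0, 0.1, 1000

for n in (10, 100, 1000, 10000):
    # a_{ni} is constant in i for this example
    a_n = n ** (-1.0 / p) * np.log(n) ** (-(1.0 + alpha) / r)
    X = rng.standard_normal((reps, n))  # reps independent copies of X_1, ..., X_n
    # max_{1 <= j <= n} | sum_{i <= j} a_{ni} X_i | for each replication
    max_partial = np.abs(np.cumsum(a_n * X, axis=1)).max(axis=1)
    print(f"n = {n:6d}   estimated P(max > {eps}) = {(max_partial > eps).mean():.4f}")

For these parameters the empirical exceedance probabilities decay rapidly in $n$, consistent with the summability asserted in (2.1).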

Theorem 2.2 Let $\{{{X}_{n}};n\ge 1\}$ be a sequence of $\tilde{\varphi }$-mixing random variables which is stochastically dominated by a random variable $X$. Suppose that $E{{X}_{n}}=0$ for each $n\ge 1$, $E{{\left| X \right|}^{r}}<\infty $ for some $r>2$ and $E\left[ \phi \left( \left| X \right| \right) \right]<\infty $. Assume that the inverse function $\varphi (x)$ of $\phi (x)$ satisfies (1.5). Let $\{{{a}_{ni}};n\ge 1,i\ge 1\}$ be an array of constants such that

(1) $\underset{1\le i\le n}{\mathop{\max\limits }}\,\left| {{a}_{ni}} \right|=O\left( \frac{1}{\varphi (n)} \right)$;

(2) $\sum\limits_{i=1}^{n}{|a_{ni}|^{r}}=O\left( {{\log }^{-1-\alpha }}n \right) \quad \textrm{ for } r>2 \quad \textrm{ and some } \alpha >r$. Then (2.1) holds true.
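The concrete choice sketched after Theorem 2.1 (our illustration) also satisfies the present hypotheses: with $\phi (x)={{x}^{p}}$, $1<p\le r$ and ${{a}_{ni}}={{n}^{-1/p}}{{\log }^{-(1+\alpha )/r}}n$, we again have $\sum\limits_{i=1}^{n}{|a_{ni}|^{r}}={{n}^{1-r/p}}{{\log }^{-1-\alpha }}n\le {{\log }^{-1-\alpha }}n$, now with $r>2$.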

Proof The proof is similar to that of Theorem 2.1; we only need to verify (2.7) for $r>2$. It follows from Markov's inequality, (1.4) of Lemma 1.1, Lemma 1.3, $E{{\left| X \right|}^{r}}<\infty $, $\alpha >r$ and condition (2) that

$\begin{eqnarray} && \sum\limits_{n=1}^{\infty }{{{n}^{-1}}}P\left( \underset{1\le j\le n}{\mathop{\max\limits }}\,\left| {{T}_{j}} \right|>\frac{\varepsilon }{2} \right) \le C\sum\limits_{n=1}^{\infty }{{{n}^{-1}}}E\left( \underset{1\le j\le n}{\mathop{\max\limits }}\,{{\left| {{T}_{j}} \right|}^{r}} \right) \nonumber\\ & \le & C\sum\limits_{n=1}^{\infty }{{{n}^{-1}}{{\log }^{r}}n\left[ \sum\limits_{i=1}^{n}{E{{\left| {{a}_{ni}}{{Y}_{i}} \right|}^{r}}+{{\left( \sum\limits_{i=1}^{n}{E{{\left| {{a}_{ni}}{{Y}_{i}} \right|}^{2}}} \right)}^{r/2}}} \right]} \nonumber\\ & = & C\sum\limits_{n=1}^{\infty }{{{n}^{-1}}{{\log }^{r}}n\sum\limits_{i=1}^{n}{|a_{ni}|^{r}E{{\left| {{X}_{i}} \right|}^{r}}I\left( \left| {{X}_{i}} \right|\le \varphi (n) \right)}}\nonumber\\ && + C\sum\limits_{n=1}^{\infty }{{{n}^{-1}}{{\log }^{r}}n{{\left( \sum\limits_{i=1}^{n}{a_{ni}^{2}E{{\left| {{X}_{i}} \right|}^{2}}I\left( \left| {{X}_{i}} \right|\le \varphi (n) \right)} \right)}^{r/2}}}\nonumber \\ & \le & C\sum\limits_{n=1}^{\infty }{{{n}^{-1}}{{\log }^{r}}n\sum\limits_{i=1}^{n}{|a_{ni}|^{r}E{{\left| X \right|}^{r}}I\left( \left| X \right|\le \varphi (n) \right)}} + C\sum\limits_{n=1}^{\infty }{{{n}^{-1}}{{\log }^{r}}n{{\left( \sum\limits_{i=1}^{n}{a_{ni}^{2}} \right)}^{r/2}}}\nonumber\\ & \le & C\sum\limits_{n=1}^{\infty }{{{n}^{-1}}{{\log }^{r}}n\sum\limits_{i=1}^{n}{|a_{ni}|^{r}}}+C\sum\limits_{n=1}^{\infty }{{{n}^{-1}}{{\log }^{r}}n\sum\limits_{i=1}^{n}{|a_{ni}|^{r}}} \nonumber\\ & \le & C\sum\limits_{n=1}^{\infty }{{{n}^{-1}}{{\log }^{r-1-\alpha}}n}<\infty. \end{eqnarray}$ (2.10)

The proof of Theorem 2.2 is completed.

References
[1] Wu Qunying, Lin Liang. Convergence properties of $\tilde{\varphi }$-mixing random sequences[J]. Chinese Journal of Engineering Mathematics, 2004, 21(1): 75–80.
[2] Wang Xuejun, Hu Shuhe, et al. Convergence properties about the partial sum of $\tilde{\varphi }$-mixing random variable sequences[J]. Chinese Journal of Engineering Mathematics, 2009, 26(1): 183–186.
[3] Wang Xuejun, Hu Shuhe, et al. Strong law of large numbers and growth rate for a class of random variable sequences[J]. Statistics and Probability Letters, 2008, 78(18): 3330–3337. DOI:10.1016/j.spl.2008.07.010
[4] Jiang Yuanying, Wu Qunying. Weak convergence and complete convergence for $\tilde{\varphi }$-mixing sequences[J]. Chinese Journal of Engineering Mathematics, 2010, 27(6): 1118–1124.
[5] Shen Yan, Wang Xuejun, et al. Strong convergence properties of sums of products for $\tilde{\varphi }$-mixing random sequences with different distributions[J]. J. Math. Study, 2010, 43(1): 75–78.
[6] Wu Qunying. Probability limit theory for mixed sequence[M]. Beijing: Science Press, 2006.
[7] Sung S H. Strong laws for weighted sums of i.i.d. random variables II[J]. Bulletin of the Korean Mathematical Society, 2002, 39(4): 607–615. DOI:10.4134/BKMS.2002.39.4.607
[8] Sung S H. Strong laws for weighted sums of i.i.d. random variables[J]. Statistics and Probability Letters, 2001, 52(4): 413–419. DOI:10.1016/S0167-7152(01)00020-7
[9] Qiu Dehua. Convergence for weighted sums of $\tilde{\rho }$-mixing random variable sequences[J]. J. of Math. (PRC), 2008, 28(3): 258–264.
[10] Cuzick J. A strong law for weighted sums of i.i.d. random variables[J]. Journal of Theoretical Probability, 1995, 8(3): 625–641. DOI:10.1007/BF02218047
[11] Utev S, Peligrad M. Maximal inequalities and an invariance principle for a class of weakly dependent random variables[J]. Journal of Theoretical Probability, 2003, 16(1): 101–115. DOI:10.1023/A:1022278404634