Journal of Mathematics (数学杂志), 2022, Vol. 42, Issue (3): 193-204
ORDER STATISTICS OF MULTIVARIATE ERLANG MIXTURES WITH APPLICATION IN MULTIPLE LIFETIME THEORY
GUI Wen-yong1, ZHANG Xiang-lei2, DU Xing1, SANG Li-xin1    
1. College of Computer Science and Artificial Intelligence, Wenzhou University, Wenzhou 325000, China;
2. School of Mechanical and Electrical Engineering, Wenzhou University, Wenzhou 325000, China
Abstract: In this paper, we study the order statistics of a set of dependent random variables from a multivariate Erlang mixture and apply the results to multiple lifetime theory. We derive analytic density functions of the order statistics by mathematical induction and show that every order statistic is again a univariate Erlang mixture. Several important quantities in life insurance also admit explicit expressions. These results extend the known results for independent random variables to the dependent case.
Keywords: order statistics; multivariate Erlang mixture; multiple life theory; dependency
1 Introduction

[1] introduced a class of multivariate mixtures of Erlang distributions or multivariate Erlang mixtures and showed its good application in insurance. A multivariate Erlang mixture is defined as a random vector $ \textbf{X}=(X_1,X_2,\cdots,X_n) $ with probability density function (pdf)

$ \begin{equation} h(\textbf{x}|\beta)=\sum^\infty\limits_{m_1=1}\cdots \sum^\infty\limits_{m_n=1} \alpha_{\textbf{m}}\left\{\prod^n\limits_{j=1}\frac{\beta(\beta x_j)^{m_j-1}e^{-\beta x_j}}{(m_j-1)!}\right\}, \end{equation} $ (1.1)

where $ \textbf{x}=(x_1,\cdots,x_n) $, $ \textbf{m}=(m_1,\cdots,m_n) $, $ \alpha_{\textbf{m}} $ are the mixing weights with each $ \alpha_{\textbf{m}}\geq 0 $ and $ \sum\limits_{m_1=1}^\infty\cdots\sum\limits_{m_n=1}^\infty \alpha_{\textbf{m}}=1 $ and $ \beta $ is the common rate parameter ($ \theta=\frac{1}{\beta} $ is called the scale parameter). The mixing weights $ \alpha_{\textbf{m}} $ can be viewed as a joint probability function of a multivariate counting random vector $ \textbf{N}=(N_1,N_2,\cdots,N_n) $, that is,

$ \alpha_{\textbf{m}}=P(N_1=m_1, N_2=m_2,\cdots,N_n=m_n). $
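
To make the notation concrete, the following Python sketch (our own illustration, not code from the paper; it assumes NumPy is available and uses hypothetical helper names such as mv_erlang_mixture_pdf) evaluates the joint density (1.1) when the mixing weights have finite support. The weights used at the end are those of the trivariate mixture (2.9) in Example 1 of Section 2.

import math
import numpy as np

def erlang_pdf(x, m, beta):
    # Univariate Erlang density f(x | m, beta), cf. (2.1) in Section 2.
    return beta * (beta * x) ** (m - 1) * math.exp(-beta * x) / math.factorial(m - 1)

def mv_erlang_mixture_pdf(x, weights, beta):
    # Joint density h(x | beta) of formula (1.1).
    #   x       : sequence (x_1, ..., x_n) of nonnegative reals
    #   weights : dict mapping shape vectors (m_1, ..., m_n) to mixing weights alpha_m
    #   beta    : common rate parameter
    return sum(alpha * np.prod([erlang_pdf(xj, mj, beta) for xj, mj in zip(x, m)])
               for m, alpha in weights.items())

# Mixing weights of the trivariate mixture (2.9) used in Example 1.
weights = {(2, 5, 10): 0.2, (4, 8, 2): 0.3, (1, 3, 5): 0.5}
print(mv_erlang_mixture_pdf((2.0, 4.0, 6.0), weights, beta=1.0))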

The expectation-maximization (EM) algorithm is commonly used to estimate the parameters of a mixture model. A standard EM algorithm and several modified versions for parameter estimation of Erlang mixtures can be found in [1-3]. The class of Erlang mixtures is widely used in insurance, reliability theory and many other fields; see [4-8] and the references therein.

Let $ X_{1:n}\leq X_{2:n}\leq\cdots\leq X_{n:n} $ be the order statistics. Order statistics play an extremely important role in a wide range of statistical applications. Most classical results on order statistics are obtained under the assumption that the random variables are independent or identically distributed; see [9-11] and the references therein. Order statistics of Erlang mixtures have also been studied in recent years. For example, [12] showed that the order statistics of an independent set of mixed Erlang random variables belong to the same class of Erlang mixtures. More studies can be found in [13-15].

In this paper, we consider a set of dependent and non-identically distributed random variables with the joint distribution being a multivariate Erlang mixture. [15] derived the distributions of the minimum $ X_{1:n} $ and the maximum $ X_{n:n} $ and showed that both distributions belong to the class of univariate Erlang mixtures. The purpose of this paper is to generalize the results in [15] and derive the distributions of all order statistics. Furthermore, we show that the distribution of any $ r $th ($ r=1,2,\cdots,n $) order statistic has the form of univariate Erlang mixtures.

We apply the class of multivariate Erlang mixtures to the multiple lifetime area. Traditional actuarial theory of multiple life insurance often assumes independence among the future lifetimes, see, for example, [16]. However, extensive research over the past years suggests otherwise, see [17, 18] and the references therein. The class of multivariate Erlang mixtures has been shown to flexibly capture the dependency among the variables, making it a reasonable choice. Another common tool used to describe dependency in a multivariate context is the copula method, as in [19]. Compared with the copula method, a multivariate Erlang mixture is easier to work with for high-dimensional data. The results in this paper show that we can obtain explicit expressions for several important quantities, which improves computational accuracy.

This paper is organized as follows. In Section 2, we derive the density functions of the order statistics of a set of variables from a multivariate Erlang mixture and show that the order statistics still have the form of Erlang mixtures. In Section 3, we apply the multivariate Erlang mixtures to multiple lifetime theory and give explicit results for some common quantities. In Section 4, we conclude and discuss some details of the proposed method.

2 Order Statistics

For notational simplicity, we denote an Erlang density with shape parameter $ m $ and rate parameter $ \beta $ as

$ \begin{equation} f(x|m,\beta)=\frac{\beta(\beta x)^{m-1}e^{-\beta x}}{(m-1)!}. \end{equation} $ (2.1)

An Erlang distribution is in fact a gamma distribution with a positive integer shape parameter. The distribution function (df) is given by

$ \begin{equation} F(x|m,\beta)=1-e^{-\beta x}\sum\limits_{n=0}^{m-1} \frac{(\beta x)^n}{n!}, \end{equation} $ (2.2)

and the survival function $ \overline{F}(x|m,\beta)=1-F(x|m,\beta) $.
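
As a quick sanity check of (2.1)-(2.2), the following sketch (our illustration, assuming SciPy is available) compares the closed forms with the gamma distribution of integer shape $ m $ and scale $ 1/\beta $.

import math
from scipy.stats import gamma

def erlang_pdf(x, m, beta):
    # f(x | m, beta) of (2.1).
    return beta * (beta * x) ** (m - 1) * math.exp(-beta * x) / math.factorial(m - 1)

def erlang_cdf(x, m, beta):
    # F(x | m, beta) of (2.2): 1 - exp(-beta*x) * sum_{n=0}^{m-1} (beta*x)^n / n!
    return 1.0 - math.exp(-beta * x) * sum((beta * x) ** n / math.factorial(n) for n in range(m))

m, beta, x = 4, 2.0, 1.3
print(erlang_pdf(x, m, beta), gamma.pdf(x, a=m, scale=1.0 / beta))  # the two values agree
print(erlang_cdf(x, m, beta), gamma.cdf(x, a=m, scale=1.0 / beta))  # the two values agree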

Let $ f_{r:n}(x) $ and $ F_{r:n}(x) $ be the density function and the distribution function of the $ r $th order statistic in a sample of size $ n $. It is clear that for $ r=1,\cdots,n-1, $ we have

$ \begin{equation} \begin{array}{rl} F_{r:n}(x) -F_{r+1:n}(x)&=P({\rm exactly}\; r\; {\rm of\; the}\; X_i{\rm 's\; satisfy\;} X_i\le x)\\ &= \sum\limits_{S_r} P( X_{s_1}\le x, \cdots, X_{s_r}\le x, X_{s_{r+1}}> x, \cdots, X_{s_{n}}> x ), \end{array} \end{equation} $ (2.3)

where $ \sum_{S_r} $ denotes the sum over permutations $ \{s_1, \cdots, s_n\} $ of $ \{1,\cdots, n\} $ with $ s_1 < \cdots < s_r $ and $ s_{r+1}<\cdots<s_n $. For notational convenience, let

$ F_{[r]:n} (x)= \sum\limits_{S_r} P( X_{s_1}\le x, \cdots, X_{s_r}\le x, X_{s_{r+1}}> x, \cdots, X_{s_{n}}> x ), $

and

$ f_{[r]:n} (x)= \frac{d}{dx} F_{[r]:n} (x). $

Then we can express the density of the $ r $th order statistic as

$ \begin{eqnarray} f_{r:n}(x) &=& f_{r+1:n}(x) + f_{[r]:n}(x). \end{eqnarray} $ (2.4)

In this section, we will prove that any $ r $th ($ 1\leq r\leq n $) order statistic has the form of a univariate Erlang mixture. The derivation is somewhat involved; we present the main results in this section and defer the proofs to the Appendix.

Lemma 2.1    Suppose an $ n $-variate random vector $ \textbf{X}=(X_1,X_2,\cdots,X_n) $ has a joint probability density function of form (1.1). Then the density function $ f_{[r]:n} (x) $, $ r=1,\cdots,n $, can be expressed as

$ \begin{equation} f_{[r]:n}(x)=\frac{1}{\beta^{n-1}}\sum\limits_{m_1=1}^\infty\cdots\sum\limits_{m_n=1}^\infty \widetilde{\alpha}_\textbf{m}([r],n)\prod\limits_{i=1}^n f(x|m_i,\beta), \end{equation} $ (2.5)

where

$ \begin{equation*} \widetilde{\alpha}_\textbf{m}([r],n)=\sum\limits_{S_r}\sum\limits_{j=1}^n\sum\limits_{\substack{u_j=m_{s_j},u_l< m_{s_l},u_k\geq m_{s_k}\\ l=1,\cdots,j-1,k=j+1,\cdots,n}}\alpha_\textbf{u}I_{\{j\leq r\}}, \end{equation*} $

and $ \sum_{S_r} $ denotes the sum over permutations $ \{s_1, \cdots, s_n\} $ of $ \{1,\cdots, n\} $ with $ s_1 < \cdots < s_r $, $ s_{r+1}<\cdots<s_n $, the notation $ I_{\{j\leq r\}} $ is an indicator function with a value of 1 when $ j\leq r $ and otherwise 0.

Remark    The density function in Lemma 2.1 also has the form of a univariate Erlang mixture. However, it is a combination of Erlang densities rather than a proper mixture, because the coefficients $ \widetilde{\alpha}_\textbf{m}([r],n), \textbf{m}=(m_1,m_2,\cdots,m_n),r=1,\cdots,n $ are not all positive. The density function can be rewritten as

$ \begin{equation} f_{[r]:n}(x)=\sum\limits_{m=1}^\infty \alpha_m([r],n) f(x|m,n\beta), \end{equation} $ (2.6)

where

$ \alpha_m([r],n)=n^{-m}\sum\limits_{A_{m,n}}\left( \begin{array}{c} m-1\\ m_1-1,m_2-1,\cdots,m_n-1 \end{array} \right)\widetilde{\alpha}_\textbf{m}([r],n) $

and

$ A_{m,n}=\{(m_1,\cdots,m_n)|\sum\limits_{i=1}^n m_i=m+n-1\}. $
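
For completeness, this rewriting rests on the following identity, obtained by a direct computation from (2.1): for shape vectors with $ \sum\limits_{i=1}^n m_i=m+n-1 $,

$ \begin{equation*} \frac{1}{\beta^{n-1}}\prod\limits_{i=1}^n f(x|m_i,\beta) =\frac{\beta(\beta x)^{m-1}e^{-n\beta x}}{\prod\limits_{i=1}^n (m_i-1)!} =n^{-m}\left( \begin{array}{c} m-1\\ m_1-1,m_2-1,\cdots,m_n-1 \end{array} \right) f(x|m,n\beta), \end{equation*} $

so each product of Erlang densities with common rate $ \beta $ collapses to a single Erlang density with rate $ n\beta $ and shape $ m $, which explains the weights $ \alpha_m([r],n) $ above.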

Theorem 2.2    Suppose an $ n $-variate random vector $ \textbf{X}=(X_1,X_2,\cdots,X_n) $ has a joint probability density function of form (1.1). Then the density function of the $ r $th ($ r=1,\cdots,n $) order statistic is given by

$ \begin{equation} f_{r:n}(x)=\frac{1}{\beta^{n-1}}\sum\limits_{m_1=1}^\infty\cdots\sum\limits_{m_n=1}^\infty\widetilde{\alpha}_\textbf{m}(r,n)\prod\limits_{i=1}^n f(x|m_i,\beta), \end{equation} $ (2.7)

where

$ \widetilde{\alpha}_\textbf{m}(r,n)=\sum\limits_{S_r}\sum\limits_{j=1}^r \sum\limits_{\substack{u_j=m_{s_j},u_l< m_{s_l},u_k\geq m_{s_k}\\ l=1,\cdots,j-1,k=j+1,\cdots,n}}\alpha_\textbf{u}. $

Theorem 2.3    Suppose an $ n $-variate random vector $ \textbf{X}=(X_1,X_2,\cdots,X_n) $ has a joint probability density function of form (1.1). Then the distribution of the $ r $th ($ r=1,\cdots,n $) order statistic is a univariate Erlang mixture and the density function can be rewritten as

$ \begin{equation} f_{r:n}(x)=\sum\limits_{m=1}^\infty \alpha_m(r,n) f(x|m,n\beta), \end{equation} $ (2.8)

where

$ \alpha_m(r,n)=n^{-m}\sum\limits_{A_{m,n}}\left( \begin{array}{c} m-1\\ m_1-1,m_2-1,\cdots,m_n-1 \end{array} \right)\widetilde{\alpha}_\textbf{m}(r,n) $

and

$ A_{m,n}=\{(m_1,\cdots,m_n)|\sum\limits_{i=1}^n m_i=m+n-1\}. $
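
The combinatorial conversion from the product form (2.7) to the mixture form (2.8) is straightforward to implement. The following Python sketch (our illustration; it assumes the coefficients $ \widetilde{\alpha}_\textbf{m}(r,n) $ of Theorem 2.2 are already available with finite support, stored as a dictionary keyed by shape vectors, and the function name collapse_to_univariate is hypothetical) carries out the change of representation in Theorem 2.3.

from collections import defaultdict
from math import factorial

def collapse_to_univariate(alpha_tilde, n):
    # Input : {(m_1, ..., m_n): alpha_tilde_m(r, n)}  (finite support assumed)
    # Output: {m: alpha_m(r, n)} with m = m_1 + ... + m_n - n + 1, as in Theorem 2.3.
    alpha = defaultdict(float)
    for mvec, coeff in alpha_tilde.items():
        m = sum(mvec) - n + 1
        denom = 1
        for mi in mvec:
            denom *= factorial(mi - 1)
        multinom = factorial(m - 1) // denom      # multinomial coefficient (m-1; m_1-1,...,m_n-1)
        alpha[m] += coeff * multinom / n ** m     # factor n^{-m} from Theorem 2.3
    return dict(alpha)

Keeping the output as a dictionary keyed by the resulting shape $ m $ avoids storing the typically sparse $ n $-fold array of coefficients.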

Now we assume that the marginal random variables $ X_1 ,\cdots ,X_n $ are mutually independent. According to Corollary 2.3 in [1], the counting random variables $ N_1 ,\cdots ,N_n $ are also mutually independent. Hence, considering the relationship between the mixing weights in Erlang mixtures and the corresponding counting random variables, the coefficients in Theorem 2.2 in the independent case can be written as

$ \begin{equation*} \begin{split} \widetilde{\alpha}_\textbf{m}(r,n)&=\sum\limits_{S_r} \sum\limits_{j=1}^rP(N_{s_j}=m_j,\bigcap\limits_{k=1,k\neq j}^{r}N_{s_k}\leq m_k-1,\bigcap\limits_{k=r+1}^{n}N_{s_k}\geq m_k)\\ &=\sum\limits_{S_r} \sum\limits_{j=1}^rP(N_{s_j}=m_j)\prod\limits_{k=1,k\neq j}^{r}P(N_{s_k}\leq m_k-1)\prod\limits_{k=r+1}^{n}P(N_{s_k}\geq m_k). \end{split} \end{equation*} $

[12] studied the order statistics of independent Erlang mixtures and our result is consistent with their result.

Example 1    Consider a trivariate Erlang mixture with joint density function given by

$ \begin{equation} \begin{split} f(x,y,z)=&0.2f(x|2,1)f(y|5,1)f(z|10,1) +0.3f(x|4,1)f(y|8,1)f(z|2,1)\\ &+0.5f(x|1,1)f(y|3,1)f(z|5,1). \end{split} \end{equation} $ (2.9)

In this example, the positive mixing weights are $ \alpha_{(2,5,10)}=0.2, \alpha_{(4,8,2)}=0.3, \alpha_{(1,3,5)}=0.5 $. The coefficients $ \widetilde{\alpha}_\textbf{m}(r,n) $ in (2.7) are much simpler in form than the mixing weights $ \alpha_m(r,n) $ in (2.8); hence we first obtain the density function in form (2.7) and then transform it into the univariate Erlang mixture form. Take the first order statistic as an example; its parameters are given in Table 1.

Table 1
Parameters of the density function of form (2.7)

From the results in Table 1, we obtain the density function of the first order statistic in the form of an Erlang mixture according to Theorem 2.3. Similarly, the parameters of the second and third order statistics are given in Table 2. The survival curves of the three order statistics are shown in Figure 1.

Table 2
Parameters of density functions of all three order statistics

Figure 1 Survival curves for the order statistics
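
The analytic results above can also be checked by simulation. The following Monte Carlo sketch (illustrative only; it is not the computation behind Tables 1-2 and assumes NumPy and matplotlib are available) draws from the trivariate mixture (2.9) and plots empirical survival curves of the three order statistics, which should agree with Figure 1 up to sampling error.

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
weights = np.array([0.2, 0.3, 0.5])                    # mixing weights of (2.9)
shapes = np.array([[2, 5, 10], [4, 8, 2], [1, 3, 5]])  # shape vectors of (2.9)
beta, size = 1.0, 200_000

# Draw a mixture component per observation; given the component, the three
# coordinates are independent Erlang (gamma with integer shape) variables.
comp = rng.choice(len(weights), size=size, p=weights)
sample = rng.gamma(shape=shapes[comp], scale=1.0 / beta)   # array of shape (size, 3)
order_stats = np.sort(sample, axis=1)                      # columns: X_{1:3}, X_{2:3}, X_{3:3}

t = np.linspace(0.0, 20.0, 200)
for r in range(3):
    surv = (order_stats[:, r][None, :] > t[:, None]).mean(axis=1)
    plt.plot(t, surv, label=f"r = {r + 1}")
plt.xlabel("t"); plt.ylabel("empirical survival probability"); plt.legend(); plt.show()
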
3 Application: Multiple Lifetime Theory

In this section, we consider insurance benefits that are payable at the moment of death. The theory for analyzing financial benefits contingent on the death of a single life is well developed, and it can be extended to the case involving several lives; see [20], [21] and the references therein. Order statistics are particularly relevant in various contexts involving multiple lives in life contingencies.

We consider an insurance contract covering $ n $ dependent and non-identically distributed lives $ (X_1,X_2,\cdots,X_n) $. Let $ (x_1),(x_2),\cdots,(x_n) $ denote the ages of the members at the start of the contract and let $ T(x_i)=(X_i-x_i|X_i>x_i), i=1,2,\cdots,n $ be the future lifetime of $ (x_i) $.

First, we recall some important definitions in multiple life theory. The $ r $-survivor status of $ n $ lives $ (x_1),(x_2),\cdots,(x_n) $ exists as long as at least $ r $ of the $ n $ lives survive, and is denoted by $ ( \begin{array}{c} r:n\\\hline \textbf{x} \end{array}). $ In other words, the $ r $-survivor status of $ n $ lives fails upon the $ (n-r+1) $th death among the $ n $ lives. When $ r=n $, the $ n $-survivor status is also called the joint-life status, and when $ r=1 $, the 1-survivor status is called the last-survivor status. Note that the future lifetime of the $ r $-survivor status is exactly the $ (n-r+1) $th order statistic of the $ n $ future lifetimes $ T(x_1),T(x_2),\cdots,T(x_n) $. For the purpose of analysis, we define another status, called the $ [r] $-deferred survivor status, which exists while exactly $ r $ of the $ n $ lives survive. We denote this status by $ ( \begin{array}{c} [r]:n\\\hline \textbf{x} \end{array}). $

It should be noted that if the $ n $ lifetimes have a joint pdf of form (1.1), then the joint distribution of the future lifetimes $ \{T(x_1),T(x_2),\cdots,T(x_n)\} $ is still a multivariate Erlang mixture, see [1]; hence their joint pdf again has the form (1.1).

The notation $ {_t}p_{\frac{r:n}{\textbf{x}}} $ represents the survival function of the future lifetime of the $ r $-survivor status. According to Theorem 2.3, we have

$ \begin{equation} {_t}p_{\frac{r:n}{\textbf{x}}}=\sum^{\infty}\limits_{m=1} \alpha_m(n-r+1,n)\int _t^{\infty}\frac{n\beta(n\beta s)^{m-1}e^{-n\beta s}}{(m-1)!}ds =\sum^{\infty}\limits_{m=1}\sum^{m-1}\limits_{u=0} \alpha_m(n-r+1,n) \frac{(n\beta t)^u e^{-n\beta t}}{u!}. \end{equation} $ (3.1)

Consider an insurance policy which pays one unit at the $ (n-r+1) $th death among the $ n $ lives. The actuarial present value, denoted by $ \overline{A}_{\frac{r:n}{\textbf{x}}} $, is

$ \begin{equation} \begin{array}{r l} \overline{A}_{\frac{r:n}{\textbf{x}}}=& \int_0^{\infty}v^t {_t}p_{\frac{r:n}{\textbf{x}}} \mu_{\frac{r:n}{\textbf{x}}}dt=\int_0^{\infty}v^t f_{n-r+1:n}(t)dt\\ =& \int _0^{\infty}v^t\sum^{\infty}\limits_{m=1} \alpha_m(n-r+1,n)\frac{n\beta(n\beta t)^{m-1}e^{-n\beta t}}{(m-1)!}dt\\ =& \sum^{\infty}\limits_{m=1} \alpha_m(n-r+1,n) \frac{(n\beta)^m}{(\delta+n\beta)^m}. \end{array} \end{equation} $ (3.2)

where $ \mu_{\frac{r:n}{\textbf{x}}}=(-\frac{d}{dt}{_t}p_{\frac{r:n}{\textbf{x}}})/{_t}p_{\frac{r:n}{\textbf{x}}} $, $ \delta $ is a constant force of interest, and $ v^t=e^{-\delta t} $. The symbols $ \delta $ and $ v $ have the same meaning in the remainder of this section.

For a continuous annuity payable at the rate of 1 per year as long as at least $ r $ of the $ n $ lives survive, the actuarial present value, denoted by $ \overline{a}_{\frac{r:n}{\textbf{x}}} $, is

$ \begin{equation} \begin{array}{r l} \overline{a}_{\frac{r:n}{\textbf{x}}}=& \int_0^{\infty}v^t {_t}p_{\frac{r:n}{\textbf{x}}} dt\\ =& \int _0^{\infty}v^t\sum^{\infty}\limits_{m=1}\sum^{m-1}\limits_{u=0} \alpha_m(n-r+1,n)\frac{(n\beta t)^{u}e^{-n\beta t}}{u!}dt\\ =& \sum^{\infty}\limits_{m=1}\sum^{m-1}\limits_{u=0} \alpha_m(n-r+1,n) \frac{(n\beta)^u}{(\delta+n\beta)^{u+1}}. \end{array} \end{equation} $ (3.3)
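
The closed forms (3.1)-(3.3) are straightforward to evaluate numerically once the mixing weights are available. The sketch below (our illustration; the list alpha is assumed to hold $ \alpha_m(n-r+1,n) $ for $ m=1,\cdots,M $ after truncation at some finite $ M $) implements the three formulas; the same functions apply to the $ [r] $-deferred survivor status discussed next by passing the weights $ \alpha_m([r],n) $ of (2.6) instead.

import math

def survival(t, alpha, n, beta):
    # t_p of the r-survivor status, formula (3.1); alpha[m-1] holds alpha_m(n-r+1, n).
    rate = n * beta
    return sum(a * sum((rate * t) ** u * math.exp(-rate * t) / math.factorial(u)
                       for u in range(m))
               for m, a in enumerate(alpha, start=1))

def insurance_apv(alpha, n, beta, delta):
    # APV of one unit paid at the (n-r+1)th death, formula (3.2).
    rate = n * beta
    return sum(a * (rate / (delta + rate)) ** m for m, a in enumerate(alpha, start=1))

def annuity_apv(alpha, n, beta, delta):
    # APV of a continuous annuity at rate 1 while the status holds, formula (3.3).
    rate = n * beta
    return sum(a * sum(rate ** u / (delta + rate) ** (u + 1) for u in range(m))
               for m, a in enumerate(alpha, start=1))

When the truncated weights sum to 1, a quick numerical check is that insurance_apv(alpha, n, beta, delta) + delta * annuity_apv(alpha, n, beta, delta) equals 1, in line with the relation $ \overline{A}+\delta \overline{a}=1 $ used below.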

Similarly, the notation $ {_t}p_{\frac{[r]:n}{\textbf{x}}} $ represents the survival function of the future lifetime of the $ [r] $-deferred survivor status. By the definition of the $ [r] $-deferred survivor status, its density is given in Lemma 2.1. Using the form (2.6), we have

$ \begin{equation} {_t}p_{\frac{[r]:n}{\textbf{x}}}= \sum^{\infty}\limits_{m=1} \alpha_m([r],n)\int _t^{\infty}\frac{n\beta(n\beta s)^{m-1}e^{-n\beta s}}{(m-1)!}ds = \sum^{\infty}\limits_{m=1} \sum^{m-1}\limits_{u=0} \alpha_m([r],n)\frac{(n\beta t)^{u}e^{-n\beta t}}{u!}. \end{equation} $ (3.4)

Consider a continuous annuity which is payable as long as any of $ (x_1),(x_2),\cdots, $ $ (x_n) $ is alive, with payment rate $ c_r $ while exactly $ r $ of the $ n $ lives survive. Then the actuarial present value is $ \sum^n\limits_{r=1} c_r \cdot \overline{a}_{\frac{[r]:n}{\textbf{x}}} $ and the term $ \overline{a}_{\frac{[r]:n}{\textbf{x}}} $ can be calculated as

$ \begin{equation} \begin{array}{r l} \overline{a}_{\frac{[r]:n}{\textbf{x}}}=& \int_0^{\infty}v^t {_t}p_{\frac{[r]:n}{\textbf{x}}} dt\\ =& \int _0^{\infty}v^t\sum\limits_{m=1}^{\infty}\sum^{m-1}\limits_{u=0} \alpha_m([r],n)\frac{(n\beta t)^{u}e^{-n\beta t}}{u!}dt\\ =& \sum\limits_{m=1}^{\infty}\sum^{m-1}\limits_{u=0} \alpha_m([r],n)\frac{(n\beta)^u}{(\delta+n\beta)^{u+1}}. \end{array} \end{equation} $ (3.5)

The term $ \overline{a}_{\frac{[r]:n}{\textbf{x}}} $ can also be obtained from the relation $ \overline{A}_{\frac{[r]:n}{\textbf{x}}}+\delta \overline{a}_{\frac{[r]:n}{\textbf{x}}}=1 $, where

$ \begin{equation} \begin{array}{r l} \overline{A}_{\frac{[r]:n}{\textbf{x}}}=& \int_0^{\infty}v^t {_t}p_{\frac{[r]:n}{\textbf{x}}} \mu_{\frac{[r]:n}{\textbf{x}}}dt\\ =& \int _0^{\infty}v^t\sum^{\infty}\limits_{m=1} \alpha_m([r],n)\frac{n\beta(n\beta t)^{m-1}e^{-n\beta t}}{(m-1)!}dt\\ =& \sum^{\infty}\limits_{m=1} \alpha_m([r],n) \frac{(n\beta)^m}{(\delta+n\beta)^m}. \end{array} \end{equation} $ (3.6)

Example 2 (Example 1 continued)    Suppose the future lifetimes of the 3 lives $ (x_1),(x_2),(x_3) $ in an insurance policy have the joint density function (2.9) given in Example 1. We summarize some of the important quantities discussed in this section in Table 3. The values in the last three columns are obtained by setting $ t=5,\delta=5\% $; the mixing weights are given in Table 2.

Table 3
Summary of important quantities
4 Conclusion

In this paper, we have studied the order statistics of the class of multivariate Erlang mixtures. We derived the distribution of every order statistic of dependent variables from a multivariate Erlang mixture, without the assumption of independence. Furthermore, we showed that the order statistics still have the form of univariate Erlang mixtures. This desirable property enables us to handle multivariate problems more efficiently. As an application, we applied the multivariate Erlang mixtures to multiple lifetime theory. One advantage is that explicit expressions are available for several important quantities, whereas numerical methods would typically be needed for a general distribution.

Appendix
A.1 Proof for Lemma 2.1

Proof    Without loss of generality, we set $ S_r=\{1,2,\cdots,r,r+1,\cdots,n\} $, then

$ \begin{equation*} \begin{array}{rl} F_{S_r}(x)&\overset{\bigtriangleup}{=}P(X_{1}\leq x,\cdots,X_{r}\leq x,X_{r+1}>x,\cdots,X_{n}> x)\\ &= \sum^\infty\limits_{m_1=1}\cdots \sum^\infty\limits_{m_n=1}\alpha_{\textbf{m}}\prod^r\limits_{i=1}\sum^\infty\limits_{u_i=m_i}\frac{(\beta x)^{u_i}e^{-\beta x}}{u_i !} \prod^{n}\limits_{i=r+1}\sum^{m_i-1}\limits_{u_i=0}\frac{(\beta x)^{u_i}e^{-\beta x}}{u_i !} \\ &=e^{-n\beta x} \sum^\infty\limits_{u_1=1}\cdots\sum^\infty\limits_{u_r=1} \sum^\infty\limits_{u_{r+1}=0}\cdots\sum^\infty\limits_{u_n=0}H_{S_r}(\textbf{u}+\textbf{1}) \prod^n\limits_{i=1}\frac{(\beta x)^{u_i}}{u_i !}\\ &=e^{-n\beta x} \sum^\infty\limits_{u_1=2}\cdots\sum^\infty\limits_{u_r=2} \sum^\infty\limits_{u_{r+1}=1}\cdots\sum^\infty\limits_{u_n=1}H_{S_r}(\textbf{u}) \prod^n\limits_{i=1}\frac{(\beta x)^{u_i-1}}{(u_i-1) !}\\ &= \sum^\infty\limits_{u_1=1}\cdots\sum^\infty\limits_{u_n=1}H_{S_r}(\textbf{u}) \prod^n\limits_{i=1}\frac{f(x|u_i,\beta)}{\beta}, \end{array} \end{equation*} $

where the last equality holds because $ P(N_i=0)=0 $, and

$ H_{S_r}(\textbf{m})= P(N_{1}\leq m_1-1,\cdots,N_{r}\leq m_r-1,N_{r+1}\geq m_{r+1},\cdots,N_{n}\geq m_n). $

Taking the derivative with respect to $ x $, we obtain

$ \begin{equation*} f_{S_r}(x)= \sum^\infty\limits_{m_1=1}\cdots\sum^\infty\limits_{m_n=1}H_{S_r}(\textbf{m}) \sum\limits_{j=1}^n\{f(x|m_j-1,\beta)-f(x|m_j,\beta)\} \prod^n\limits_{i=1,i\neq j}\frac{f(x|m_i,\beta)}{\beta}. \end{equation*} $

where we use the convention $ f(x|0,\beta)=0 $. Obviously, we have

$ \begin{eqnarray*} && \sum^\infty\limits_{m_j=1}H_{S_r}(\textbf{m})( f(x|m_j-1,\beta)-f(x|m_j,\beta))\\ &=& \sum^\infty\limits_{m_j=0}H_{S_r}(\textbf{m}+e_j) f(x|m_j,\beta)-\sum^\infty\limits_{m_j=1}H_{S_r}(\textbf{m})f(x|m_j,\beta)\\ &=& \sum^\infty\limits_{m_j=1}[H_{S_r}(\textbf{m}+e_j)-H_{S_r}(\textbf{m})]f(x|m_j,\beta), \end{eqnarray*} $

where the notation $ e_j $ represents an $ n $-length vector with the $ j $th entry equals 1 and others 0.

We repeat the procedure for all permutations $ S_r=\{s_1,s_2,\cdots,s_n\} $ of $ \{1,2,\cdots,n\} $ satisfying $ s_1<\cdots<s_r $ and $ s_{r+1}<\cdots<s_n $, and the result follows.

A.2 Proof for Theorem 2.2

Proof   We proceed by (backward) mathematical induction on $ r $.

(1) For $ r=n $, the result holds by the results in [15].

(2) Assume that the result holds for any $ (r+1) $th order statistic, namely, we have

$ \begin{equation*} f_{r+1:n}(x)=\sum\limits_{j=1}^{r+1}\sum\limits_{m_1=1}^\infty\cdots\sum\limits_{m_n=1}^\infty \beta[H_{r+1}(\textbf{m}+e_j)-H_{r+1}(\textbf{m})]\prod\limits_{i=1}^n \frac{f(x|m_i,\beta)}{\beta}, \end{equation*} $

where

$ H_{r}(\textbf{m})= \sum\limits_{S_r}P(N_{s_1}\leq m_1-1,\cdots,N_{s_r}\leq m_r-1,N_{s_{r+1}}\geq m_{r+1},\cdots,N_{s_n}\geq m_n). $

(3) To prove that the result also holds for the $ r $th order statistic, by (2.4) it suffices to show

$ \begin{equation*} \sum\limits_{j=1}^r [H_{r}(\textbf{m}+e_j)-H_{r}(\textbf{m})]=\sum\limits_{j=1}^{r+1} [H_{r+1}(\textbf{m}+e_j)-H_{r+1}(\textbf{m})]+\sum\limits_{j=1}^n [H_{r}(\textbf{m}+e_j)-H_{r}(\textbf{m})]. \end{equation*} $

Comparing the left-hand side with the second term on the right-hand side, the problem further reduces to proving

$ \begin{equation*} \sum\limits_{j=r+1}^n [H_{r}(\textbf{m}+e_j)-H_{r}(\textbf{m})]=-\sum\limits_{j=1}^{r+1} [H_{r+1}(\textbf{m}+e_j)-H_{r+1}(\textbf{m})]. \end{equation*} $

According to the definition of the notation $ H_r(\textbf{m}) $, we have

$ \begin{equation*} \begin{array}{rl} & \sum\limits_{j=r+1}^n [H_{r}(\textbf{m}+e_j)-H_{r}(\textbf{m})]\\ =&- \sum\limits_{j=r+1}^n \sum\limits_{S_r}P(N_{s_j}= m_j,\bigcap\limits_{i=1}^rN_{s_i}\leq m_i-1,\bigcap\limits_{l=r+1,l\neq j}^nN_{s_{l}}\geq m_{l})\\ =&- \sum\limits_{j=1}^{r+1} \sum\limits_{S_{r+1}}P(N_{s_j}= m_j,\bigcap\limits_{i=1,i\neq j}^{r+1}N_{s_i}\leq m_i-1,\bigcap\limits_{l=r+1}^nN_{s_{l}}\geq m_{l})\\ =&- \sum\limits_{j=1}^{r+1} [H_{r+1}(\textbf{m}+e_j)-H_{r+1}(\textbf{m})]. \end{array} \end{equation*} $

This means the conclusion also holds for the $ r $th order statistic, which completes the proof.

References
[1]
Lee S C, Lin X S. Modeling dependent risks with multivariate Erlang mixtures[J]. ASTIN Bulletin: The Journal of the IAA, 2012, 42(1): 153-180.
[2]
Verbelen R, Antonio K, Claeskens G. Multivariate mixtures of Erlangs for density estimation under censoring[J]. Lifetime Data Analysis, 2016, 22(3): 429-455. DOI:10.1007/s10985-015-9343-y
[3]
Gui Wenyong, Huang Rongtan, Lin X S. Fitting the Erlang mixture model to data via a GEM-CMM algorithm[J]. Journal of Computational and Applied Mathematics, 2018, 343: 189-205. DOI:10.1016/j.cam.2018.04.032
[4]
Yin Cuihong, Lin X S. Efficient estimation of Erlang mixtures using iSCAD penalty with insurance application[J]. ASTIN Bulletin: The Journal of the IAA, 2016, 46(3): 779-799. DOI:10.1017/asb.2016.14
[5]
Gui Wenyong, Huang Rongtan, Lin Jianhua, Lin X S. Conditional Residual Lifetimes of (n-k+1)- out-of-n Systems with Mixed Erlang Components[J]. Journal of Mathematical Study, 2017, 50(1): 1-16. DOI:10.4208/jms.v50n1.17.01
[6]
Fung T C, Badescu A L, Lin X S. Multivariate Cox Hidden Markov models with an application to operational risk[J]. Scandinavian Actuarial Journal, 2019, 8: 686-710.
[7]
Blostein M, Miljkovic T. On modeling left-truncated loss data using mixtures of distributions[J]. Insurance: Mathematics and Economics, 2019, 85: 35-46. DOI:10.1016/j.insmatheco.2018.12.001
[8]
Virchenko Y P, Novoseltsev A D. Probability distributions unimodality of finite sample extremes of independent erlang random variables[J]. Journal of Physics Conference Series, 2020. DOI:10.1088/1742-6596/1479/1/012104
[9]
Saran J, Pushkarna N, Tiwari R. Moment properties of generalized order statistics from lindley distribution[J]. Journal of Statistics Applications & Probability, 2015, 4: 429-434.
[10]
Malik M R, Kumar D. Relations for moments of progressively type-ii right censored order statistics from erlang-truncated exponential distribution[J]. Statistics, 2017, 18(4): 651-668.
[11]
Rashwan N I. Moments of order statistics from nonidentically Distributed Lomax, exponential Lomax and exponential Pareto Variables[J]. Journal of Advances in Mathematics, 2018, 14(1): 7431-7438. DOI:10.24297/jam.v14i1.6638
[12]
Landriault D, Moutanabbir K, Willmot G E. A note on order statistics in the mixed Erlang case[J]. Statistics & Probability Letters, 2015, 106: 13-15.
[13]
Barakat H M, Abdelkader Y H. Computing the moments of order statistics from nonidentical random variables[J]. Statistical Methods and Applications, 2004, 13(1): 15-26.
[14]
Hlynka M, Brill PH, Horn W. A method for obtaining Laplace transforms of order statistics of Erlang random variables[J]. Statistics & Probability Letters, 2010, 80(1): 9-18.
[15]
Willmot G E, Woo J K. On some properties of a class of multivariate Erlang mixtures with insurance applications[J]. ASTIN Bulletin: The Journal of the IAA, 2015, 45(1): 151-173. DOI:10.1017/asb.2014.23
[16]
Denuit M, Frostig E. Life insurance mathematics with random life tables[J]. North American Actuarial Journal, 2009, 13(3): 339-355. DOI:10.1080/10920277.2009.10597560
[17]
Denuit M, Cornet A. Multilife premium calculation with dependent future lifetimes[J]. Journal of Actuarial Practice, 1999, 7: 147-180.
[18]
Dufresne F, Hashorva E, Ratovomirija G, Toukourou Y. On age difference in joint lifetime modelling with life insurance annuity applications[J]. Annals of Actuarial Science, 2018, 12(2): 350-371. DOI:10.1017/S1748499518000076
[19]
Pak A, Dashti H. Distributional results for dependent Type-Ⅱ Hybrid censored order statistics[J]. Pakistan Journal of Statistics and Operation Research, 2017, 13(1): 201-210. DOI:10.18187/pjsor.v13i1.1462
[20]
Denuit M, Frostig E. Association and heterogeneity of insured lifetimes in the Lee-Carter framework[J]. Scandinavian Actuarial Journal, 2007, 1(1): 1-19.
[21]
Gerber H U. Life insurance mathematics[M]. New York: Springer Science & Business Media, 2013.