数学杂志 (Journal of Mathematics), 2017, Vol. 37, Issue (1): 39-50
OPTIMAL DIVIDEND STRATEGY IN THE BROWNIAN MOTION MODEL WITH INTEREST AND RANDOMIZED OBSERVATION TIME
LIU Xiao, YU Hong-wei     
School of Mathematics and Computer Science, Anhui Normal University, Wuhu 241003, China
Abstract: In this paper, we study the optimal dividend problem in the Brownian motion model with interest and randomized observation times. By using stochastic control theory, we obtain the Hamilton-Jacobi-Bellman (HJB) equation associated with the optimal value function, show that the optimal dividend strategy is a barrier strategy, and give an explicit expression for the optimal value function, which generalizes the results of [19].
Key words: dividend; ruin; HJB equation
1 Introduction

Finding the optimal dividend strategy for an insurance company is a very popular research topic in actuarial mathematics. The problem was first proposed by De Finetti [1] as a way of measuring the stability of an insurance company. In recent years, many papers have been published on optimal dividend strategies; see, for example, Jeanblanc and Shiryaev [2], Asmussen et al. [3], Gerber and Shiu [4], Bai and Paulsen [5], Bayraktar et al. [6, 7], Wang [8] and the references therein. One can refer to Avanzi [9] and Albrecher and Thonhauser [10] for surveys of dividend models up to 2009. In all of the above-mentioned literature, however, the surplus process must be observed continuously, which cannot be realized in practice. Albrecher et al. [11-14] first introduced the idea of randomized observation times in the classical risk model, the diffusion model and the Lévy model, in which the risk process can be observed only at random times. Avanzi [15] and Peng et al. [16] considered this idea in the dual model. The model was extended to the dual model with diffusion by Liu and Chen [17] and Avanzi et al. [18].

The Brownian motion model is a Brownian motion with a positive drift. This model can be viewed as a diffusion approximation of the classical risk model, and it often leads to simpler calculations of characteristics such as the ruin probability. Wang and Liu [19] studied the expected discounted sum of dividends paid until ruin in the Brownian motion model with interest, under the assumption that the dividend strategy is restricted to a barrier strategy and that dividends can only be paid at some randomized observation times. Naturally, an interesting question arises: is the barrier strategy optimal or not? In this paper, assuming that the surplus of a company is modeled by the Brownian motion model with interest, we show by using stochastic control theory that the barrier strategy is optimal among all admissible dividend strategies.

This paper is organized as follows. In Section 2, the Brownian motion model with interest and randomized observation times is briefly described. In Section 3, it is shown that the optimal value function can be characterized by the associated HJB equation; in addition, the verification theorem is stated and proved. In Section 4, we show that the optimal dividend strategy is a barrier strategy, and the explicit expression for the optimal value function is given.

2 The Model

Let $(\Omega, \ \mathcal{F}, \ \{\mathcal{F}_{t}\}_{t\geq 0}, \ P)$ be a filtered probability space on which all random processes and variables introduced in the following are defined. We assume that the surplus process of an insurance company is modeled by the Brownian motion model and the company invests all the surplus in the risk-free asset. Let $\{X(t);\ t\geq 0\}$ be the surplus of the company before a dividend strategy is imposed. Then $\{X(t);\ t\geq 0\}$ satisfies the following stochastic differential equation

$ {\rm{d}}X(t) = (c + rX(t)){\rm{d}}t + \sigma {\rm{d}}B(t), $ (2.1)

where $c > 0$ is the drift coefficient, $r > 0$ is the force of interest, $\sigma > 0$ is the standard deviation and $\{B(t);\ t\geq 0\}$ is a standard Brownian motion.
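
For later use, note that the linear stochastic differential equation (2.1) with $X(0)=x$ can be solved explicitly,

$ X(t) = {e^{rt}}x + \frac{c}{r}({e^{rt}}-1) + \sigma {e^{rt}}\int_{0}^{t}{e^{-rs}}\,{\rm{d}}B(s), $

so that $X(t)\leq e^{rt}[x+\frac{c}{r}+\mid \sigma \int_{0}^{t}e^{-rs}{\rm{d}}B(s)\mid]$; this bound is used in the proof of Theorem 3.1 below.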

Let $\{T_{i};\ i=1, \ 2, \ \cdots\}$ denote the random observation times. Let $Z_{i}=T_{i}-T_{i-1}$ with $T_{0}=0$ be the $i$th interval between observations. We assume that $\{Z_{i};\ i=1, \ 2, \ \cdots\}$ are independent random variables with an exponential distribution of mean $\frac{1}{\beta}$. Let $L_i$ be the dividend payment at $T_{i}$. Let $\{X_{L}(t);\ t\geq 0\}$ denote the surplus process after an admissible dividend strategy $L$ is imposed.

A dividend strategy $L=\{L_i;\ i=1, \ 2, \ \cdots\}$ is called admissible if $L_i\leq X_{L}(T_{i}-)$ and no dividend is paid after ruin. Denote by $\Pi$ the set of all admissible dividend strategies.

Let $\tau_{L}=\text{inf}\{t:\ X_{L}(t)\leq 0\}$ be the ruin time. Assume that dividends are discounted at a constant force of interest $\delta$. In this paper we assume $\delta > r$. For a given admissible strategy $L$, we define the value function $V_L$ as

$ {V_L}(x) = {E_x}[\sum\limits_{i = 1}^\infty {{\rm{exp}}} (-\delta {T_i}){I_{\{ {T_i} \le {\tau _L}\} }}{L_i}], $ (2.2)

where $I_{\{\cdot\}}$ is the indicator function and $E_{x}$ is the expectation corresponding to the law of $\{X_{L}(t);\ t\geq 0\}$ with $X_{L}(0)=x$. We aim to find the optimal value function, which is defined as

$ V(x) = \mathop {{\rm{sup}}}\limits_{L \in \Pi } {V_L}(x)\;\;{\rm{for}}\;x \ge 0, $ (2.3)

and find an optimal dividend strategy $L^{*}$ that satisfies $V(x)=V_{L^{*}}(x)$. For technical reasons, we define $V(x)=0$ for $x < 0$.
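
As an illustration of (2.2), the value of a fixed barrier strategy can be estimated by Monte Carlo: simulate (2.1) between the exponentially spaced observation times and pay out the excess of the surplus over the barrier at each observation. The following Python sketch is only illustrative; the parameter values are arbitrary, and the Euler discretization between observations introduces a small bias.

    import numpy as np

    def simulate_barrier_value(x0, b, c=1.0, r=0.02, sigma=1.0, beta=1.0,
                               delta=0.05, n_paths=10000, dt=1e-3,
                               t_max=200.0, seed=0):
        # Monte Carlo estimate of V_L(x0) in (2.2) for the barrier strategy
        # that pays the excess of the surplus over b at every observation time.
        rng = np.random.default_rng(seed)
        total = 0.0
        for _ in range(n_paths):
            x, t = x0, 0.0
            t_obs = rng.exponential(1.0 / beta)   # first observation time
            pv = 0.0                              # discounted dividends on this path
            while t < t_max and x > 0.0:
                # Euler step of dX = (c + r X) dt + sigma dB between observations
                x += (c + r * x) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
                t += dt
                if x <= 0.0:                      # ruin: no further dividends
                    break
                if t >= t_obs:                    # observation: pay the excess over b
                    if x > b:
                        pv += np.exp(-delta * t) * (x - b)
                        x = b
                    t_obs = t + rng.exponential(1.0 / beta)
            total += pv
        return total / n_paths

    # For example, the value of the barrier strategy with b = 2 starting from x0 = 1:
    # simulate_barrier_value(1.0, 2.0)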

3 Hamilton-Jacobi-Bellman Equation

In this section, the HJB equation associated with (2.3) is obtained and the verification theorem is stated and proved.

Suppose $V(x)$ is twice continuously differentiable on $[0, \infty)$. Consider a small time interval $(0, \triangle t]$, where $\triangle t > 0$ is sufficiently small so that the surplus process will not reach $0$ in the interval if no dividend is paid. Consider the strategy $L$ that pays the amount $l\geq 0$ as a dividend if $T_1\leq \triangle t$. By the strong Markov property of the surplus process, we have

$ V(x)\ge [(1-\beta \vartriangle t){{E}_{x}}[V(X(\vartriangle t))]+\beta \vartriangle t[l+V(x-l)]]{{e}^{-\delta \vartriangle t}}+o(\vartriangle t). $ (3.1)

Applying Itô's formula, we get

$ {{E}_{x}}[V(X(\vartriangle t))]=V(x)+(c+rx){V}'(x)\vartriangle t+\frac{{{\sigma }^{2}}}{2}{V}''(x)\vartriangle t+o(\vartriangle t). $ (3.2)

Plugging (3.2) into (3.1), rearranging the terms, dividing by $\triangle t$ and then letting $\triangle t$ tend to 0, we have

$ \frac{{{\sigma }^{2}}}{2}{V}''(x)+(c+rx){V}'(x)-(\beta +\delta )V(x)+\beta [l+V(x-l)]\le 0. $
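
In more detail, substituting (3.2) into (3.1) and expanding $e^{-\delta \triangle t}=1-\delta \triangle t+o(\triangle t)$ gives

$ 0\ge \Big[\frac{{{\sigma }^{2}}}{2}{V}''(x)+(c+rx){V}'(x)-(\beta +\delta )V(x)+\beta [l+V(x-l)]\Big]\triangle t+o(\triangle t), $

and dividing by $\triangle t$ and letting $\triangle t$ tend to $0$ yields the inequality above for every fixed $0\le l\le x$; equality holds for an optimal choice of $l$, which explains the maximum in the HJB equation below.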

We obtain the HJB equation associated with (2.3) as follows

$ \mathop {\max }\limits_{0 \le l \le x} {\mkern 1mu} \{ \beta [l + V(x - l)]\} + {\cal L}V(x) = 0, $ (3.3)

where

$ \mathcal{L}V(x)=\frac{{{\sigma }^{2}}}{2}{V}''(x)+(c+rx){V}'(x)-(\beta +\delta )V(x). $ (3.4)

Because ruin is immediate and no dividend is paid if the initial surplus $x=0$, we get the boundary condition $V(0)=0$.

The next theorem is the verification theorem.

Theorem 3.1  Let $H(x):[0, \infty)\rightarrow [0, \infty)$ with $H(0)=0$ be a twice continuously differentiable function. Assume that $H(x)$ is dominated by a linear function. If $H(x)$ satisfies

$ \mathop {\max }\limits_{_{0 \le l \le x}} \{ \beta [l + H(x-l)]\} + {\cal L}H(x) \le 0, $ (3.5)

we then have

$ H(x) \ge V(x). $ (3.6)

Furthermore, if $H(x)$ satisfies

$ \mathop {\max }\limits_{0 \le l \le x} \{ \beta [l + H(x-l)]\} + {\cal L}H(x) = 0, $ (3.7)

we then have

$ H(x) = V(x). $ (3.8)

Proof  Let $L=\{L_i;\ i=1, \ 2, \ \cdots\}$ be any admissible strategy and $X_{L}(t)$ be the surplus process associated with the dividend strategy $L$. Denote $\mathcal{S}=\{T_{i};\ i=1, \ 2, \ \cdots\}$ and write $\tau=\tau_{L}$. For convenience we let $H(x)=0$ for $x < 0$. By the generalized Itô formula, we have

$ \begin{align} e^{-\delta(t\wedge\tau)}H(X_{L}(t\wedge\tau))=&H(x)-\delta\int_{0}^{t\wedge\tau}e^{-\delta s}H(X_{L}(s-))\text{d}s\notag\\ &+\int_{0}^{t\wedge\tau}e^{-\delta s}\Big{[}(c+rX_{L}(s-))H'(X_{L}(s-))+\frac{\sigma^2}{2}H''(X_{L}(s-))\Big{]}\text{d}s\notag\\ &+\int_{0}^{t\wedge\tau}\sigma e^{-\delta s}H'(X_{L}(s-))\text{d}B(s)+R, \end{align} $ (3.9)

where

$ R=\sum\limits_{s\in \mathcal{S}, \ s\le t\wedge \tau }{{{e}^{-\delta s}}}[H({{X}_{L}}(s-)+\vartriangle {{X}_{L}}(s))-H({{X}_{L}}(s-))]. $

Defining

$ K(t)=\sum\limits_{s\in \mathcal{S}, \ s\le t}e^{-\delta s}\triangle X_{L}(s)-\beta \int_{0}^{t}e^{-\delta s}\triangle X_{L}(s)\,\text{d}s $

and

$ \begin{align*} J(t)=&\sum\limits_{s\in \mathcal{S}, \ s\le t}e^{-\delta s}[H(X_{L}(s-)+\triangle X_{L}(s))-H(X_{L}(s-))] \\ &-\beta \int_{0}^{t}e^{-\delta s}[H(X_{L}(s-)+\triangle X_{L}(s))-H(X_{L}(s-))]\,\text{d}s, \end{align*} $

we have

$ \begin{align} &e^{-\delta(t\wedge\tau)}H(X_{L}(t\wedge\tau))\notag\\ =&H(x)-\delta\int_{0}^{t\wedge\tau}e^{-\delta s}H(X_{L}(s-))\text{d}s\notag\\ &+\int_{0}^{t\wedge\tau}e^{-\delta s}\Big{[}(c+rX_{L}(s-))H'(X_{L}(s-))+\frac{\sigma^2}{2}H''(X_{L}(s-))\Big{]}\text{d}s\notag\\ &+\beta\int_{0}^{t\wedge\tau}e^{-\delta s}[H(X_{L}(s-)+\triangle X_{L}(s))-H(X_{L}(s-))]\text{d}s\notag\\ &+\int_{0}^{t\wedge\tau}\sigma e^{-\delta s}H'(X_{L}(s-))\text{d}B(s)+J(t\wedge\tau)\notag\\ =&H(x)+\int_{0}^{t\wedge\tau}e^{-\delta s}[\beta(H(X_{L}(s-)+\triangle X_{L}(s))-\triangle X_{L}(s))+\mathcal{L}H(X_{L}(s-))]\text{d}s\notag\\ &+\sum\limits_{s\in\mathcal{S}, \ s\leq t\wedge\tau}e^{-\delta s}\triangle X_{L}(s)-K(t\wedge\tau)+J(t\wedge\tau)\notag\\ &+\int_{0}^{t\wedge\tau}\sigma e^{-\delta s}H'(X_{L}(s-))\text{d}B(s). \end{align} $ (3.10)

Noting that $K(t)$ and $J(t)$ can be written as

$ \begin{align*} K(t)=\int_{0}^{t}e^{-\delta s}\triangle X_{L}(s)\text{d}N_{\beta}(s)-\beta\int_{0}^{t}e^{-\delta s}\triangle X_{L}(s)\text{d}s \end{align*} $

and

$ \begin{align*} J(t)=&\int_{0}^{t}e^{-\delta s}[H(X_{L}(s-)+\triangle X_{L}(s))-H(X_{L}(s-))]\text{d}N_{\beta}(s)\notag\\ &\ -\beta\int_{0}^{t}e^{-\delta s}[H(X_{L}(s-)+\triangle X_{L}(s))-H(X_{L}(s-))]\text{d}s, \end{align*} $

where $N_{\beta}(t)$ is the Poisson process with parameter $\beta$ that counts the observation times, we see that $K(t)$ and $J(t)$ are zero-mean martingales. Taking expectations in (3.10) and using condition (3.5), we get

$ \begin{align} E_{x}[e^{-\delta (t\wedge \tau )}H(X_{L}(t\wedge \tau ))]&\le H(x)+E_{x}\Big[\sum\limits_{s\in \mathcal{S}, \ s\le t\wedge \tau }e^{-\delta s}\triangle X_{L}(s)\Big]\notag\\ &=H(x)-E_{x}\Big[\sum\limits_{i=1}^{\infty }\text{exp}(-\delta T_{i})I_{\{T_{i}\le t\wedge \tau \}}L_{i}\Big]. \end{align} $ (3.11)

Because $H(x)$ is dominated by a linear function, we know that

$ H(X_{L}(t\wedge\tau))\leq k_{1}X_{L}(t\wedge\tau)+k_{2}\leq k_{1}e^{rt}[x+\frac{c}{r}+\mid \sigma \int_{0}^{t}e^{-rs}\text{d}B(s)\mid]+k_{2} $

for some positive constants $k_{1}$ and $k_{2}$, hence

$ e^{-\delta(t\wedge\tau)}H(X_{L}(t\wedge\tau))\leq k_{1}e^{(r-\delta) t}\big{[}x+\frac{c}{r}+\mid \sigma \displaystyle\int_{0}^{t}e^{-rs}\text{d}B(s)\mid\big{]}+k_{2} $

if $t\leq \tau$, and $e^{-\delta(t\wedge\tau)}H(X_{L}(t\wedge\tau))=0$ if $t > \tau$. By the dominated convergence theorem, we have

$ \mathop {\lim }\limits_{t \to \infty } {\mkern 1mu} {E_x}[{e^{ - \delta (t \wedge \tau )}}H({X_L}(t \wedge \tau ))] = 0. $ (3.12)

Letting $t$ tend to $\infty$ in (3.11) and using (3.12) yields

$ H(x)\ge {{E}_{x}}[\sum\limits_{i=1}^{\infty }{\text{exp}}(-\delta {{T}_{i}}){{I}_{\{{{T}_{i}}\le \tau \}}}{{L}_{i}}]={{V}_{L}}(x). $ (3.13)

Because (3.13) holds for any admissible strategy $L$, we have $H(x)\geq V(x)$.

If condition (3.7) holds, we take the admissible strategy $L_{i}^{*}=l(X_{L^{*}}(T_{i}-))$, where $l(x)$ satisfies

$ \beta[l(x)+H(x-l(x))]+\mathcal{L}H(x)=0. $

By a similar argument, we get $H(x)=V_{L^{*}}(x)$, hence $H(x)=V(x)$ and $L^{*}$ is the optimal dividend strategy. The proof is completed.

4 The Optimal Dividend Strategy

In this section, we show that there exists a twice continuously differentiable, concave function $V(x)$ which is dominated by a linear function and solves the HJB equation (3.3) with the boundary condition $V(0)=0$, and that the optimal dividend strategy is a barrier strategy. In addition, the explicit expression for $V(x)$ is given.

Let us look for a twice continuously differentiable, increasing and concave solution $V(x)$ to (3.3) with $V(0)=0$ that is dominated by a linear function. Suppose there exists a point $b>0$ such that

$ V'(x) > 1\;\;{\rm{for}}\;x < b, \;\;\;\;V'(x) \le 1\;\;{\rm{for}}\;x \ge b. $

Because

$ \frac{{\partial [l + V(x-l)]}}{{\partial l}} = 1 -V'(x -l) < 0 $

for $l\in[0, x]$ if $x < b$, and $1-V'(x-l)\geq0$ for $l\in[0, x-b]$ but $1-V'(x-l) < 0$ for $l\in(x-b, x]$ if $x > b$, we have that $l+V(x-l)$ is decreasing in $l$ on $[0, x]$ if $x < b$, and is increasing on $[0, x-b]$ and decreasing on $[x-b, x]$ if $x > b$. Hence a candidate for the optimal dividend strategy should be

$ \begin{equation*} L_{i}^{*}= \begin{cases} 0, & X_{L^{*}}(T_{i}-) < b, \\ X_{L^{*}}(T_{i}-)-b, & X_{L^{*}}(T_{i}-)\geq b.\\ \end{cases} \end{equation*} $
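
Consequently, the maximum in (3.3) is attained at $l=0$ when $x < b$ and at $l=x-b$ when $x\geq b$, that is,

$ \begin{equation*} \mathop {\max }\limits_{0 \le l \le x}\{l+V(x-l)\}= \begin{cases} V(x), & x < b, \\ x-b+V(b), & x\geq b.\\ \end{cases} \end{equation*} $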

Therefore (3.3) reduces to

$ \begin{align} &\frac{\sigma^{2}}{2}V''(x)+(c+rx) V'(x)-\delta V(x)=0, \ \ \ x < b, \end{align} $ (4.1)
$ \begin{align} &\frac{\sigma^{2}}{2}V''(x)+(c+rx) V'(x)-(\beta+\delta)V(x)+\beta[x-b+V(b)]=0, \ \ \ x\geq b. \end{align} $ (4.2)

Noting that $V(x)$ behaves differently depending on whether $x$ is below or above the barrier $b$, we write $V(x)$ as

$ \begin{equation*} V(x)= \begin{cases} V_1(x), & x < b, \\ V_2(x), & x\geq b.\\ \end{cases} \end{equation*} $

According to Theorem 16.69 of Breiman [20], we know that equation (4.1) has two positive, linearly independent solutions $f_{1}(x)$ and $f_{2}(x)$, and similarly the equation

$ \begin{align*} \frac{\sigma^{2}}{2}V''(x)+(c+rx) V'(x)-(\beta+\delta)V(x)=0 \end{align*} $

has two positive independent solutions $f_{3}(x)$ and $f_{4}(x)$. In addition, $f_{1}(x)$ and $f_{3}(x)$ are strictly decreasing, but $f_{2}(x)$ and $f_{4}(x)$ are strictly increasing. In fact, the explicit expressions for $f_{1}(x)$, $f_{2}(x)$, $f_{3}(x)$ and $f_{4}(x)$ are given by

$ {f_1}(x) = {e^{-\frac{{{{(c + rx)}^2}}}{{r{\sigma ^2}}}}}U(\frac{1}{2} + \frac{\delta }{{2r}}, \frac{1}{2}, \frac{{{{(c + rx)}^2}}}{{r{\sigma ^2}}}), $ (4.3)
$ {f_2}(x) = (c + rx){e^{-\frac{{{{(c + rx)}^2}}}{{r{\sigma ^2}}}}}M(1 + \frac{\delta }{{2r}}, \frac{3}{2};\frac{{{{(c + rx)}^2}}}{{r{\sigma ^2}}}), $ (4.4)
$ {f_3}(x) = {e^{-\frac{{{{(c + rx)}^2}}}{{r{\sigma ^2}}}}}U(\frac{1}{2} + \frac{{\delta + \beta }}{{2r}}, \frac{1}{2}, \frac{{{{(c + rx)}^2}}}{{r{\sigma ^2}}}), $ (4.5)
$ {f_4}(x) = (c + rx){e^{-\frac{{{{(c + rx)}^2}}}{{r{\sigma ^2}}}}}M(1 + \frac{{\delta + \beta }}{{2r}}, \frac{3}{2};\frac{{{{(c + rx)}^2}}}{{r{\sigma ^2}}}), $ (4.6)

where $M(a_1, a_2;x)$ and $U(a_1, a_2;x)$ are called the confluent hypergeometric functions of the first and second kinds respectively. $M(a_1, a_2;x)$ and $U(a_1, a_2;x)$ satisfy the following properties:

$ \frac{{\rm{d}}}{{{\rm{d}}x}}M({a_1}, {a_2};x) = \frac{{{a_1}}}{{{a_2}}}M({a_1} + 1, {a_2} + 1;x), $ (4.7)
$ \frac{{\rm{d}}}{{{\rm{d}}x}}U({a_1}, {a_2};x) =-{a_1}U({a_1} + 1, {a_2} + 1;x), $ (4.8)
$ M({a_1}, {a_2};0) = 1, $ (4.9)
$ \mathop {\lim }\limits_{x \to \infty } M({a_1}, {a_2};x) = \infty, $ (4.10)
$ \mathop {\lim }\limits_{x \to \infty } U({a_1}, {a_2};x) = 0, $ (4.11)
$ U({a_1}, {a_2};x) = {x^{- {a_1}}}[1 + o(\mid x{\mid ^{-1}})]\;\;{\rm{as}}\;\;x \to \infty, $ (4.12)
$ M({a_1}, {a_2};x) = \frac{{\Gamma ({a_2})}}{{\Gamma ({a_1})}}{e^x}{x^{{a_1}- {a_2}}}[1 + o(\mid x{\mid ^{-1}})]\;\;{\rm{as}}\;\;x \to \infty . $ (4.13)
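
For numerical work, $f_{1}(x), \ldots, f_{4}(x)$ can be evaluated directly from (4.3)-(4.6) with standard implementations of $M$ and $U$. The following Python sketch does this with scipy; the parameter values are arbitrary and only for illustration.

    import numpy as np
    from scipy.special import hyp1f1, hyperu   # Kummer's M and Tricomi's U

    c, r, sigma, delta, beta = 1.0, 0.02, 1.0, 0.05, 1.0   # illustrative values only

    def z(x):
        # common argument (c + r x)^2 / (r sigma^2) of (4.3)-(4.6)
        return (c + r * x) ** 2 / (r * sigma ** 2)

    def f1(x):
        # decreasing solution of (4.1), formula (4.3)
        return np.exp(-z(x)) * hyperu(0.5 + delta / (2 * r), 0.5, z(x))

    def f2(x):
        # increasing solution of (4.1), formula (4.4)
        return (c + r * x) * np.exp(-z(x)) * hyp1f1(1 + delta / (2 * r), 1.5, z(x))

    def f3(x):
        # decreasing solution of the equation with beta + delta, formula (4.5)
        return np.exp(-z(x)) * hyperu(0.5 + (delta + beta) / (2 * r), 0.5, z(x))

    def f4(x):
        # increasing solution of the equation with beta + delta, formula (4.6)
        return (c + r * x) * np.exp(-z(x)) * hyp1f1(1 + (delta + beta) / (2 * r), 1.5, z(x))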

Therefore the solution of (4.1) can be expressed as

$ {V_1}(x) = {A_1}{f_1}(x) + {A_2}{f_2}(x) $ (4.14)

for some constants $A_1$ and $A_2$.

Using (4.13), we get $\mathop {\lim }\limits_{x \to \infty } \frac{{{f_4}(x)}}{x} = \infty $. Because $V(x)$ is dominated by a linear function, the solution of (4.2) can be expressed as

$ {V_2}(x) = {A_3}{f_3}(x) + Dx + F $ (4.15)

for some constants $A_3$, $D$ and $F$.

Plugging (4.15) into (4.2), we get

$ D = \frac{\beta }{{\beta + \delta-r}} $ (4.16)

and

$ F = \frac{{\beta (V(b)-b)}}{{\beta + \delta }} + \frac{{\beta c}}{{(\beta + \delta )(\beta + \delta-r)}}. $ (4.17)
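
Indeed, since $f_{3}(x)$ solves the homogeneous equation $\frac{\sigma^{2}}{2}V''(x)+(c+rx)V'(x)-(\beta+\delta)V(x)=0$, substituting $Dx+F$ into (4.2) and comparing the coefficient of $x$ and the constant term gives

$ rD-(\beta +\delta )D+\beta =0, \;\;\;\;cD-(\beta +\delta )F+\beta [V(b)-b]=0, $

which yield (4.16) and (4.17).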

Plugging (4.16) and (4.17) into (4.15), setting $x=b$ and solving for $V_{2}(b)$, we have

$ {V_2}(b) = \frac{{\beta + \delta }}{\delta }{A_3}{f_3}(b) + \frac{{\beta (c + rb)}}{{\delta (\beta + \delta-r)}} $ (4.18)

and

$ F = \frac{\beta }{\delta }{A_3}{f_3}(b) + \frac{{\beta (c + rb)}}{{\delta (\beta + \delta - r)}} - \frac{\beta }{{\beta + \delta - r}}b. $ (4.19)

The conditions $V(0)=0$ and $V'_1(b-)=V'_2(b+)=1$ imply that

$ {A_1}{f_1}(0) + {A_2}{f_2}(0) = 0, $ (4.20)
$ {A_3}{f'_3}(b) + \frac{\beta }{{\beta + \delta-r}} = 1, $ (4.21)
$ {A_1}{f'_1}(b) + {A_2}{f'_2}(b) = 1. $ (4.22)

From (4.20)-(4.22), we have

$ {A_1} = \frac{{{f_2}(0)}}{{h'(b)}}, \;\;\;\;{A_2} =-\frac{{{f_1}(0)}}{{h'(b)}}, \;\;\;\;{A_3} = \frac{{(\delta-r)}}{{(\beta + \delta-r){{f'}_3}(b)}}, $

where $h(x)=f_1(x)f_2(0)-f_1(0)f_2(x)$. The value of $b$ is determined by $V_1(b-)=V_2(b+)$, i.e.,

$ \frac{{h(b)}}{{h'(b)}} = \frac{{(\delta-r)(\beta + \delta )}}{{\delta (\beta + \delta-r)}}\frac{{{f_3}(b)}}{{{{f'}_3}(b)}} + \frac{{\beta (c + rb)}}{{\delta (\beta + \delta-r)}}. $ (4.23)

Using (4.12) and (4.13), we can easily show that

$ \mathop {\lim }\limits_{x \to \infty } \frac{{h(x)}}{{xh'(x)}} = \frac{r}{\delta } $ (4.24)

and

$ \mathop {\lim }\limits_{x \to \infty } \frac{{{f_3}(x)}}{{{{f'}_3}(x)}} = 0. $ (4.25)

By (4.24) and (4.25), as $b\to\infty$ the left-hand side of (4.23) grows like $\frac{r}{\delta}b$ while the right-hand side grows like $\frac{\beta r}{\delta(\beta+\delta-r)}b$, so, since $\delta>r$, the left-hand side eventually exceeds the right-hand side. At $b=0$ the left-hand side equals $h(0)/h'(0)=0$ and the right-hand side equals $\frac{1}{\delta(\beta+\delta-r)}[(\delta-r)(\beta+\delta)\frac{f_3(0)}{f'_3(0)}+\beta c]$. Hence there exists a positive solution to equation (4.23) if

$ (\delta-r)(\beta + \delta )\frac{{{f_3}(0)}}{{{{f'}_3}(0)}} + \beta c > 0. $ (4.26)

If $(\delta-r)(\beta+\delta)\frac{f_3(0)}{f'_3(0)}+\beta c\leq0$, then $b=0$; by a similar argument, we have

$ V(x) = {A_4}{f_3}(x) + \frac{\beta }{{\beta + \delta-r}}x + \frac{{\beta c}}{{(\beta + \delta )(\beta + \delta-r)}} $ (4.27)

for some constant $A_4$. The condition $V(0)=0$ implies that

$ {A_4} =-\frac{{\beta c}}{{(\beta + \delta )(\beta + \delta-r){f_3}(0)}}. $ (4.28)

Theorem 4.1  If $(\delta-r)(\beta+\delta)\frac{f_3(0)}{f'_3(0)}+\beta c\leq0$, then the function

$ V(x) = \frac{{\beta c}}{{(\beta + \delta )(\beta + \delta- r)}}[1-\frac{{{f_3}(x)}}{{{f_3}(0)}}] + \frac{\beta }{{\beta + \delta -r}}x $ (4.29)

is twice continuously differentiable, concave, dominated by a linear function and a solution to the HJB equation (3.3).

If $(\delta-r)(\beta+\delta)\frac{f_3(0)}{f'_3(0)}+\beta c>0$, then the function

$ \begin{equation} V(x)= \begin{cases} \frac{h(x)}{h'(b)}, & 0\leq x < b, \\ \frac{\delta-r}{\beta+\delta-r}\frac{f_3(x)}{f'_3(b)}+\frac{\beta}{\beta+\delta-r}(x-b)+\frac{\beta(\delta-r)}{\delta(\beta+\delta-r)}\frac{f_3(b)}{f'_3(b)}+\frac{\beta(c+rb)}{\delta(\beta+\delta-r)}, & x\geq b\\ \end{cases} \end{equation} $ (4.30)

is twice continuously differentiable, concave, dominated by a linear function and a solution to the HJB equation (3.3), where $b$ is determined by (4.23).

Proof  If $(\delta-r)(\beta+\delta)\frac{f_3(0)}{f'_3(0)}+\beta c\leq0$, it is straightforward to verify that the function given by (4.29) is twice differentiable and satisfies the differential equation (4.2) with boundary condition $V(0)=0$. Because of the facts that

$ V'(0) =-\frac{{\beta c}}{{(\beta + \delta )(\beta + \delta-r)}}\frac{{{{f'}_3}(0)}}{{{f_3}(0)}} + \frac{\beta }{{\beta + \delta-r}} \le 1 $

and

$ \begin{align*} V''(x)&=-\frac{\beta c}{(\beta+\delta)(\beta+\delta-r)f_3(0)}f''_3(x)\\ &=-\frac{2\beta c}{\sigma^{2}(\beta+\delta)(\beta+\delta-r)f_3(0)}[(\beta+\delta)f_3(x)-(c+rx)f'_3(x)] < 0, \end{align*} $

we know that $V'(x) < 1$ for any $x > 0$, hence $V(x)$ is concave and satisfies the HJB equation (3.3) with boundary condition $V(0)=0$.

If $(\delta-r)(\beta+\delta)\frac{f_3(0)}{f'_3(0)}+\beta c > 0$, using (4.23) and the facts that

$ \begin{align*} \frac{\sigma^{2}}{2}h''(b)=\delta h(b)-(c+rb)h'(b) \end{align*} $

and

$ \begin{align*} \frac{\sigma^{2}}{2}f''_3(b)=(\delta+\beta)f_3(b)-(c+rb)f'_3(b), \end{align*} $

we have

$ \begin{align*} \frac{h''(b)}{h'(b)}=\frac{\delta-r}{\beta+\delta-r}\frac{f''_3(b)}{f'_3(b)}, \end{align*} $

which implies that the function given by (4.30) is twice differentiable. Noticing that, for $x\geq b$, we have

$ \begin{align*} V''(x)&=\frac{\delta-r}{\beta+\delta-r}\frac{f''_3(x)}{f'_3(b)}\\ &=\frac{\delta-r}{\beta+\delta-r}\frac{2[(\delta+\beta)f_3(x)-(c+rx)f'_3(x)]}{\sigma^{2}f'_3(b)} < 0, \end{align*} $

and hence $V'(x) < 1$. For $x < b$, using the formulas

$ \begin{align*} &(a_2-1)M(a_1-1, a_2-1, z)=(a_2-1-z)M(a_1, a_2, z)+zM'(a_1, a_2, z), \\ &(a_2-1)M(a_1, a_2-1, z)=(a_2-1)M(a_1, a_2, z)+zM'(a_1, a_2, z) \end{align*} $

and

$ U({a_1}, {a_2}, z)-U'({a_1}, {a_2}, z) = U({a_1}, {a_2} + 1, z), $

we have

$ \begin{align*} f'''_1(x)&=-\frac{8r(c+rx)}{\sigma^4}e^{-\frac{(c+rx)^2}{r\sigma^2}}U(\frac{\delta}{2r}-\frac{1}{2}, \frac{3}{2}, \frac{(c+rx)^2}{r\sigma^2}), \\ f'''_2(x)&=\frac{2r(\delta-r)}{\sigma^{2}}e^{-\frac{(c+rx)^2}{r\sigma^2}}M(\frac{\delta}{2r}-\frac{1}{2}, \frac{1}{2};\frac{(c+rx)^2}{r\sigma^2}), \end{align*} $

hence

$ h'''(x) = f'''_{1}(x){f_{2}}(0)-{f_{1}}(0)f'''_{2}(x) < 0. $

Noting that $h'(b)=f'_1(b)f_2(0)-f_1(0)f'_2(b) < 0$, it follows that $V'''(x) > 0$ for $x < b$. Hence $V''(x)$ is strictly increasing on $[0, b]$. Since

$ V''(b) = \frac{{\delta-r}}{{\beta + \delta-r}}\frac{{{{f''}_3}(b)}}{{{{f'}_3}(b)}} < 0, $

we have $V''(x) < 0$ for any $x < b$; therefore $V'(x)$ is decreasing on $[0, b]$. Because $V'(b)=1$, we have $V'(x) > 1$ for $x < b$. Hence $V(x)$ is concave and satisfies the HJB equation (3.3) with boundary condition $V(0)=0$. The proof is completed.

Combining Theorem 3.1 with Theorem 4.1, we obtain the following proposition.

Proposition 4.2  The optimal dividend strategy is a barrier strategy. The barrier is 0 if

$ (\delta-r)(\beta + \delta )\frac{{{f_3}(0)}}{{{{f'}_3}(0)}} + \beta c \le 0 $

or is determined by (4.23) if

$ (\delta-r)(\beta + \delta )\frac{{{f_3}(0)}}{{{{f'}_3}(0)}} + \beta c > 0. $

The functions $V(x)$ given by (4.29) and (4.30) are the corresponding optimal value functions.
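
As a numerical illustration, when condition (4.26) holds the barrier $b$ can be computed from (4.23) by a one-dimensional root search and $V(x)$ can then be evaluated from (4.30). The following rough Python sketch continues the code given after (4.13); the derivatives are approximated by central differences, and the root bracket below is only indicative and must be adapted to the chosen parameters.

    from scipy.optimize import brentq

    def dfun(f, x, h=1e-6):
        # central-difference approximation of f'(x)
        return (f(x + h) - f(x - h)) / (2.0 * h)

    def h_fun(x):
        # h(x) = f1(x) f2(0) - f1(0) f2(x), as in Section 4
        return f1(x) * f2(0.0) - f1(0.0) * f2(x)

    def barrier_eq(b):
        # left-hand side minus right-hand side of (4.23)
        lhs = h_fun(b) / dfun(h_fun, b)
        rhs = ((delta - r) * (beta + delta) / (delta * (beta + delta - r))
               * f3(b) / dfun(f3, b)
               + beta * (c + r * b) / (delta * (beta + delta - r)))
        return lhs - rhs

    b_star = brentq(barrier_eq, 1e-3, 50.0)   # the bracket must enclose a sign change

    def V(x):
        # optimal value function (4.30)
        if x < b_star:
            return h_fun(x) / dfun(h_fun, b_star)
        return ((delta - r) / (beta + delta - r) * f3(x) / dfun(f3, b_star)
                + beta / (beta + delta - r) * (x - b_star)
                + beta * (delta - r) / (delta * (beta + delta - r)) * f3(b_star) / dfun(f3, b_star)
                + beta * (c + r * b_star) / (delta * (beta + delta - r)))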

References
[1] De Finetti B. Su un'impostazione alternativa della teoria collettiva del rischio[R]. New York: Transactions of the 15th International Congress of Actuaries, 1957, 2: 433-443.
[2] Jeanblanc M, Shiryaev A N. Optimization of the flow of dividends[J]. Russian Math. Surv., 1995, 50(2): 257–277. DOI:10.1070/RM1995v050n02ABEH002054
[3] Asmussen S, Højgaard B, Taksar M. Optimal risk control and dividend distribution policies: Example of excess-of-loss reinsurance for an insurance corporation[J]. Fin. Stoch., 2000, 4(3): 299–324. DOI:10.1007/s007800050075
[4] Gerber H U, Shiu E S W. On optimal dividend strategies in the compound Poisson model[J]. North Amer. Act. J., 2006, 10(2): 76–93. DOI:10.1080/10920277.2006.10596249
[5] Bai Lihua, Paulsen J. Optimal dividend policies with transaction costs for a class of diffusion processes[J]. SIAM J. Contr. Optim., 2010, 48(8): 4987–5008. DOI:10.1137/090773210
[6] Bayraktar E, Kyprianou A, Yamazaki K. On optimal dividends in the dual model[J]. Astin Bull., 2013, 43(3): 359–372. DOI:10.1017/asb.2013.17
[7] Bayraktar E, Kyprianou A, Yamazaki K. Optimal dividends in the dual model under transaction costs[J]. Insurance: Math. Econ., 2014, 54: 133–143. DOI:10.1016/j.insmatheco.2013.11.007
[8] Wang Cuilian. Dividend problems in the classical compound Poisson risk model with mixed exponentially distributed claim size[J]. J. Math., 2015, 35(3): 559–566.
[9] Avanzi B. Strategies for dividend distribution: A review[J]. North Amer. Act. J., 2009, 13(2): 217–251. DOI:10.1080/10920277.2009.10597549
[10] Albrecher H, Thonhauser S. Optimality results for dividend problems in insurance[J]. RACSAM Revista de la Real Academia de Ciencias, Serie A, Matemáticas, 2009, 103(2): 295–320.
[11] Albrecher H, Cheung E C K, Thonhauser S. Randomized observation periods for the compound Poisson risk model: Dividends[J]. Astin Bull., 2011, 41(2): 645–672.
[12] Albrecher H, Cheung E C K, Thonhauser S. Randomized observation periods for the compound Poisson risk model: The discounted penalty function[J]. Scandinavian Act. J., 2013, 2013(6): 424–452. DOI:10.1080/03461238.2011.624686
[13] Albrecher H, Gerber H U, Shiu E S W. The optimal dividend barrier in the Gamma-Omega model[J]. European Act. J., 2011, 1(1): 43–55. DOI:10.1007/s13385-011-0006-4
[14] Albrecher H, Bäuerle N, Thonhauser S. Optimal dividend-payout in random discrete time[J]. Stat. Risk Model., 2011, 28(3): 251–276. DOI:10.1524/stnd.2011.1097
[15] Avanzi B, Cheung E C K, Wong B. On a periodic dividend barrier strategy in the dual model with continuous monitoring of solvency[J]. Insurance: Math. Econ., 2013, 52(1): 98–113. DOI:10.1016/j.insmatheco.2012.10.008
[16] Peng Dan, Liu Donghai, Liu Zaiming. Dividend problems in the dual risk model with exponentially distributed observation time[J]. Stat. Prob. Lett., 2013, 83(3): 841–849. DOI:10.1016/j.spl.2012.11.025
[17] Liu Xiao, Chen Zhenlong. Dividend problems in the dual model with diffusion and exponentially distributed observation time[J]. Stat. Prob. Lett., 2014, 87: 175–183. DOI:10.1016/j.spl.2014.01.017
[18] Avanzi B, Tu V, Wong B. On optimal periodic dividend strategies in the dual model with diffusion[J]. Insurance: Math. Econ., 2014, 55: 210–224. DOI:10.1016/j.insmatheco.2014.01.005
[19] Wang Cuilian, Liu Xiao. Dividend problems in the diffusion model with interest and exponentially distributed observation time[J]. J. Appl. Math., 2014. DOI:10.1155/2014/814835
[20] Breiman L. Probability[M]. Reading, Mass.: Addison-Wesley, 1968.