Let $ B(n, p) $ denote a binomial random variable with parameters $ n $ and $ p $. Janson [1] introduced the following conjecture, which was suggested by Vašek Chvátal.
Conjecture 1 (Chvátal). For any fixed $ n\geq 2 $, as $ m $ ranges over $ \{0, \ldots, n\} $, the probability $ q_m:=P(B(n, m/n)\leq m) $ is the smallest when $ m $ is closest to $ \frac{2n}{3} $.
Chvátal's conjecture has applications in machine learning (see Doerr [2], Greenberg and Mohri [3] and the references therein). Janson [1] showed that Conjecture 1 holds for large $ n $. Barabesi et al. [4] and Sun [5] proved that Conjecture 1 is true for general $ n\geq 2 $. Hereafter, we refer to Conjecture 1 as Chvátal's theorem.
Motivated by Chvátal's theorem, Li et al. [6] considered the infimum value problem for the probability that a random variable is not more than its expectation, when its distribution is the Poisson, geometric or Pascal distribution; for the Pascal distribution, only partial results were obtained there. Sun et al. [7] investigated the corresponding infimum value problem for the Gamma distribution, among other things. Li et al. [8] studied the infimum value problem for the Weibull and Pareto distributions. Guo et al. [9] considered the infimum value of the probability $ P(X\leq E[X]) $, where $ X $ is a negative binomial random variable, and gave an affirmative answer to the conjecture on the Pascal distribution posed in [6].
In this paper, we consider the infimum value of the probability $ P(X\leq \kappa E[X]) $, where $ \kappa $ is a positive real number and $ X $ is a random variable whose distribution belongs to a class of infinitely divisible distributions, including the inverse Gaussian distribution (Section 2) and the log-normal, Gumbel and logistic distributions (Section 3). Before presenting the main results, we give a remark.
Remark 1.1 Let $ X_{\alpha} $ be a random variable with finite expectation $ E[X_{\alpha}] $, where $ \alpha $ stands for the parameter(s) of the distribution of $ X_{\alpha} $ and is a real number or a vector in $ \mathbf{R}^n $ for some integer $ n\geq 2 $. We have the following two motivations to study $ \inf_{\alpha}P(X_{\alpha}\leq \kappa E[X_{\alpha}]) $:
● From $ \inf_{\alpha}P(X_{\alpha}\leq \kappa E[X_{\alpha}]) $ we immediately obtain $ \sup_{\alpha}P(X_{\alpha}>\kappa E[X_{\alpha}]) $. Obviously, if we want the probability $ P(X_{\alpha}>\kappa E[X_{\alpha}]) $ to be as large as possible, we should consider $ \sup_{\alpha}P(X_{\alpha}>\kappa E[X_{\alpha}]) $ or, equivalently, $ \inf_{\alpha}P(X_{\alpha}\leq \kappa E[X_{\alpha}]) $. Based on this observation, we hope that our work on this topic may find applications in machine learning, statistics, finance, economics, etc.
● Assume that $ X_{\alpha} $ is nonnegative and denote by $ \mu_{\alpha} $ the distribution of $ X_{\alpha} $. If $ \inf_{\alpha}P(X_{\alpha}\leq \kappa E[X_{\alpha}])=\beta>0 $, then for any $ \alpha $, we have
$$ \mu_{\alpha}\left(\left[0, \ \kappa \int_{0}^{\infty}x\, \mu_{\alpha}(dx)\right]\right)\geq \beta, $$
which tells us that the family of distributions $ \{\mu_{\alpha}\} $ possesses a kind of measure concentration phenomenon.
Let $ X_{\mu, \lambda} $ be an inverse Gaussian random variable with parameters $ \mu $ and $ \lambda $ $ (\mu>0, \lambda>0). $ By [10, Chapter 27], we know that the density function of $ X_{\mu, \lambda} $ is given by
$$ f_{\mu, \lambda}(x)=\sqrt{\frac{\lambda}{2\pi x^{3}}}\exp\left(-\frac{\lambda(x-\mu)^{2}}{2\mu^{2}x}\right), \quad x>0, $$
and $ E[X_{\mu, \lambda}]=\mu $. Then, for any given real number $ \kappa>0 $, by [10, Chapter 27], we know that
$$ P(X_{\mu, \lambda}\leq \kappa E[X_{\mu, \lambda}])=\Phi\left(\sqrt{\frac{\lambda}{\kappa\mu}}\,(\kappa-1)\right)+e^{2\lambda/\mu}\,\Phi\left(-\sqrt{\frac{\lambda}{\kappa\mu}}\,(\kappa+1)\right). $$
Hereafter, $ \Phi(\cdot) $ stands for the distribution function of the standard normal distribution.
Denote $ x:=\sqrt{\frac{\lambda}{\mu}} $. Then
$$ P(X_{\mu, \lambda}\leq \kappa E[X_{\mu, \lambda}])=\Phi\left(\frac{(\kappa-1)x}{\sqrt{\kappa}}\right)+e^{2x^{2}}\,\Phi\left(-\frac{(\kappa+1)x}{\sqrt{\kappa}}\right). $$
Define a function
$$ g_{\kappa}(x):=\Phi\left(\frac{(\kappa-1)x}{\sqrt{\kappa}}\right)+e^{2x^{2}}\,\Phi\left(-\frac{(\kappa+1)x}{\sqrt{\kappa}}\right), \quad x\in (0, \infty). \tag{1.1} $$
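For later use, we record two elementary observations behind this reduction (immediate from the definitions above): since $ x=\sqrt{\lambda/\mu} $, we have $ \sqrt{\frac{\lambda}{\kappa\mu}}=\frac{x}{\sqrt{\kappa}} $ and $ \frac{2\lambda}{\mu}=2x^{2} $; moreover, as $ (\mu, \lambda) $ ranges over $ (0, \infty)^{2} $, the quantity $ x=\sqrt{\lambda/\mu} $ ranges over all of $ (0, \infty) $, so that
$$ \inf_{\mu>0, \, \lambda>0}P(X_{\mu, \lambda}\leq \kappa E[X_{\mu, \lambda}])=\inf_{x\in (0, \infty)}g_{\kappa}(x). $$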
The main result of this section is
Theorem 2.1 (ⅰ) If $ \kappa \leq 1 $, then $ \inf\limits_{x\in \left ( 0, \infty \right ) }g_{\kappa }\left ( x \right )=\lim\limits_{x\to \infty}g_{\kappa}(x); $
(ⅱ) If $ \kappa>1 $, then $ \min\limits_{x\in \left ( 0, \infty \right ) }g_{\kappa }\left ( x \right )=g_{\kappa }\left ( x_{0}(\kappa) \right ), $ where $ x_{0}(\kappa) $ is the unique zero point of the function $ h_{\kappa}(x) $ introduced in the proof below.
Proof By taking the derivative of the function $ g_{\kappa}(x) $ defined in (1.1) and in virtue of (1.2), we have
By taking the derivative of the function $ h_{\kappa}(x), $ we get that
where $ \varphi_{\kappa}(x)=\frac{1}{\kappa}-\kappa+\frac{1}{x^2}. $
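For the reader's convenience, we sketch one way to carry out these computations; here $ \varphi $ (without a subscript) denotes the standard normal density, and the expression for $ h_{\kappa} $ below is one convenient normalization, chosen so that it has the sign, monotonicity and limit properties used in the rest of the proof. Write $ a:=\frac{\kappa-1}{\sqrt{\kappa}} $ and $ b:=\frac{\kappa+1}{\sqrt{\kappa}} $. Since $ 2-\frac{b^{2}}{2}=-\frac{a^{2}}{2} $, we have
$$ e^{2x^{2}}\varphi(bx)=\varphi(ax), \quad x>0. $$
Differentiating $ g_{\kappa}(x)=\Phi(ax)+e^{2x^{2}}\Phi(-bx) $ and using this identity,
$$ g^{'}_{\kappa}(x)=a\varphi(ax)+4xe^{2x^{2}}\Phi(-bx)-be^{2x^{2}}\varphi(bx)=4xe^{2x^{2}}\left[\Phi(-bx)-\frac{\varphi(bx)}{2\sqrt{\kappa}\,x}\right]=:4xe^{2x^{2}}h_{\kappa}(x), $$
and a further differentiation gives
$$ h^{'}_{\kappa}(x)=\frac{\varphi(bx)}{2\sqrt{\kappa}}\left(\frac{1}{\kappa}-\kappa+\frac{1}{x^{2}}\right)=\frac{\varphi(bx)}{2\sqrt{\kappa}}\,\varphi_{\kappa}(x). $$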
(ⅰ) If $ \kappa \leq 1 $, then $ \varphi_{\kappa}(x)\geq \frac{1}{x^2}>0 $ for any $ x \in \left (0 , \infty \right) $. Thus, we have $ h_{\kappa }^{'}(x)>0, \ \ \forall x\in (0, \infty), $ which implies that the function $ h_{\kappa}(x) $ is strictly increasing on the interval $ (0, \infty) $. We have
$$ \lim_{x\to \infty}h_{\kappa}(x)=0. $$
Thus for any $ x\in (0, \infty) $, $ h_{\kappa }(x)< 0 $. Then by (1.3), we get
$$ g^{'}_{\kappa}(x)<0, \quad \forall x\in (0, \infty). $$
Therefore, $ g_{\kappa }\left ( x \right ) $ is a strictly decreasing function on the interval $ \left (0, \infty \right) $ and thus
$$ \inf_{x\in (0, \infty)}g_{\kappa}(x)=\lim_{x\to \infty}g_{\kappa}(x). $$
(ⅱ) If $ \kappa > 1 $, then by $ h^{'}_{\kappa}(x)=0 $, i.e., $ \varphi_{\kappa}(x)=\frac{1}{\kappa}-\kappa+\frac{1}{x^2}=0 $, we get $ x=\sqrt{\frac{\kappa}{\kappa^2-1}}. $ Then, by (1.4), we know that $ h^{'}_{\kappa}(x)>0 $ for any $ x\in (0, \sqrt{\frac{\kappa}{\kappa^2-1}}) $ and $ h^{'}_{\kappa}(x)<0 $ for any $ x\in (\sqrt{\frac{\kappa}{\kappa^2-1}}, \infty) $. It follows that the function $ h_{\kappa}(x) $ is strictly increasing on $ (0, \sqrt{\frac{\kappa}{\kappa^2-1}}) $ and strictly decreasing on $ (\sqrt{\frac{\kappa}{\kappa^2-1}}, \infty) $. Furthermore, by the fact that $ \lim_{x \rightarrow 0+}h_{\kappa}(x)=-\infty <0 $ and $ \lim_{x \rightarrow \infty}h_{\kappa}(x) = 0 $, we know that the continuous function $ h_{\kappa}(x) $ has a unique zero point $ x_{0}(\kappa)\in (0, \sqrt{\frac{\kappa}{\kappa^2-1}}) $. It follows that $ h_{\kappa}(x)<0 $ for any $ x\in (0, x_0(\kappa)) $ and $ h_{\kappa}(x)>0 $ for any $ x\in (x_0(\kappa), \infty) $.
Hence, by (1.3), we get that $ g^{'}_{\kappa}(x)<0 $ on $ (0, x_0(\kappa)) $ and $ g^{'}_{\kappa}(x)>0 $ on $ (x_0(\kappa), \infty) $. It follows that
$$ \min_{x\in (0, \infty)}g_{\kappa}(x)=g_{\kappa}(x_{0}(\kappa)). $$
The proof is complete.
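For completeness, the limits used in the above proof can be verified directly from the expressions sketched in the proof (with $ \varphi $ the standard normal density and $ a=\frac{\kappa-1}{\sqrt{\kappa}} $, $ b=\frac{\kappa+1}{\sqrt{\kappa}} $ as before):
$$ \lim_{x\to 0+}h_{\kappa}(x)=\frac{1}{2}-\lim_{x\to 0+}\frac{\varphi(bx)}{2\sqrt{\kappa}\,x}=-\infty, \qquad \lim_{x\to \infty}h_{\kappa}(x)=0, $$
and, by the Gaussian tail bound $ \Phi(-t)\leq \frac{\varphi(t)}{t} $ for $ t>0 $,
$$ 0\leq e^{2x^{2}}\Phi(-bx)\leq \frac{e^{2x^{2}}\varphi(bx)}{bx}=\frac{\varphi(ax)}{bx}\longrightarrow 0 \quad (x\to \infty), $$
so that $ \lim\limits_{x\to \infty}g_{\kappa}(x)=\lim\limits_{x\to \infty}\Phi(ax) $, which equals $ 0 $ when $ \kappa<1 $ and $ \frac{1}{2} $ when $ \kappa=1 $.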
Remark 2.2 (ⅰ) For any $ \kappa>1, \mu>0, \lambda>0 $, it is easy to see that
$$ 0<g_{\kappa}(x_{0}(\kappa))\leq g_{\kappa}\left(\sqrt{\tfrac{\lambda}{\mu}}\right)=P(X_{\mu, \lambda}\leq \kappa E[X_{\mu, \lambda}]), $$
where the second inequality follows from the proof of Theorem 2.1. Then for any $ \kappa>1 $, we have
$$ \inf_{\mu>0, \, \lambda>0}P(X_{\mu, \lambda}\leq \kappa E[X_{\mu, \lambda}])=g_{\kappa}(x_{0}(\kappa))>0; $$
a more explicit expression for $ g_{\kappa}(x_{0}(\kappa)) $ is recorded after this remark.
(ⅱ) The above analysis shows that there is an interesting phase transition phenomenon in the infimum value problem for the inverse Gaussian distribution. The critical point is $ \kappa=1 $.
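In particular, using the sketch given in the proof of Theorem 2.1, the zero-point equation $ h_{\kappa}(x_{0}(\kappa))=0 $ reads $ \Phi\big(-bx_{0}(\kappa)\big)=\frac{\varphi(bx_{0}(\kappa))}{2\sqrt{\kappa}\,x_{0}(\kappa)} $, and hence the above infimum can be written as
$$ g_{\kappa}(x_{0}(\kappa))=\Phi\big(a\,x_{0}(\kappa)\big)+\frac{\varphi\big(a\,x_{0}(\kappa)\big)}{2\sqrt{\kappa}\,x_{0}(\kappa)}, $$
where $ a=\frac{\kappa-1}{\sqrt{\kappa}} $, $ b=\frac{\kappa+1}{\sqrt{\kappa}} $ and $ \varphi $ is the standard normal density.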
Let $ X_{\mu, \sigma} $ be a log-normal random variable with parameters $ \mu $ and $ \sigma $ $ (\mu \in \mathbb{R}, \sigma>0). $ By [10, Chapter 22], we know that the density function of $ X_{\mu, \sigma} $ is
$$ f_{\mu, \sigma}(x)=\frac{1}{\sqrt{2\pi}\,\sigma x}\exp\left(-\frac{(\ln x-\mu)^{2}}{2\sigma^{2}}\right), \quad x>0, $$
and the expectation is $ E[X_{\mu, \sigma}]=\exp(\mu +\frac{{\sigma}^2}{2}) $. Then, for any given real number $ \kappa>0, $ by [10, (22.1.2)], we have
$$ P(X_{\mu, \sigma}\leq \kappa E[X_{\mu, \sigma}])=\Phi\left(\frac{\ln \kappa}{\sigma}+\frac{\sigma}{2}\right), $$
which shows that $ P(X_{\mu, \sigma}\leq \kappa E[X_{\mu, \sigma}]) $ is independent of $ \mu. $ Define a function
$$ g_{\kappa}(\sigma):=\Phi\left(\frac{\ln \kappa}{\sigma}+\frac{\sigma}{2}\right), \quad \sigma\in (0, \infty). $$
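The above formula for $ P(X_{\mu, \sigma}\leq \kappa E[X_{\mu, \sigma}]) $ is a one-line computation, using only that $ \ln X_{\mu, \sigma}\sim N(\mu, \sigma^{2}) $:
$$ P\big(X_{\mu, \sigma}\leq \kappa e^{\mu+\frac{\sigma^{2}}{2}}\big)=P\left(\frac{\ln X_{\mu, \sigma}-\mu}{\sigma}\leq \frac{\ln \kappa+\frac{\sigma^{2}}{2}}{\sigma}\right)=\Phi\left(\frac{\ln \kappa}{\sigma}+\frac{\sigma}{2}\right), $$
which indeed does not depend on $ \mu $.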
Proposition 3.1 (ⅰ) If $ \kappa \leq 1 $, then
$$ \inf_{\sigma\in (0, \infty)}g_{\kappa}(\sigma)=\lim_{\sigma\to 0+}g_{\kappa}(\sigma); \tag{1.5} $$
(ⅱ) If $ \kappa>1 $, then
$$ \min_{\sigma\in (0, \infty)}g_{\kappa}(\sigma)=g_{\kappa}\big(\sqrt{2\ln \kappa}\big). \tag{1.6} $$
Proof Define a function
$$ h_{\kappa}(\sigma):=\frac{\ln \kappa}{\sigma}+\frac{\sigma}{2}, \quad \sigma\in (0, \infty). \tag{1.7} $$
Then $ g_{\kappa}(\sigma)=\Phi(h_{\kappa}(\sigma)). $ Since $ \Phi $ is continuous and strictly increasing, in order to prove this proposition, it is enough to investigate the behavior of $ h_{\kappa}(\sigma) $.
If $ \kappa\le 1, $ then $ \ln \kappa\le 0. $ Thus, by (1.7), we have $ h^{'}_{\kappa}(\sigma)=-\frac{\ln \kappa}{\sigma^2}+\frac{1}{2}>0 $, which implies that $ h_{\kappa}(\sigma) $ is a strictly increasing function of $ \sigma $. Thus,
$$ \inf_{\sigma\in (0, \infty)}g_{\kappa}(\sigma)=\inf_{\sigma\in (0, \infty)}\Phi(h_{\kappa}(\sigma))=\lim_{\sigma\to 0+}\Phi(h_{\kappa}(\sigma))=\lim_{\sigma\to 0+}g_{\kappa}(\sigma). $$
It follows that (1.5) holds.
If $ \kappa>1, $ then $ \ln \kappa>0. $ If $ h^{'}_{\kappa}(\sigma)=-\frac{\ln \kappa}{\sigma^2}+\frac{1}{2}=0 $, then $ \sigma=\sqrt{ 2\ln \kappa}. $ It is easy to check that the function $ h_{\kappa}(\sigma) $ is strictly decreasing on $ (0, \sqrt{ 2\ln \kappa}) $ and strictly increasing on $ (\sqrt{ 2\ln \kappa}, \infty) $. Thus,
$$ \min_{\sigma\in (0, \infty)}g_{\kappa}(\sigma)=\Phi\Big(\min_{\sigma\in (0, \infty)}h_{\kappa}(\sigma)\Big)=\Phi\big(h_{\kappa}(\sqrt{2\ln \kappa})\big)=g_{\kappa}\big(\sqrt{2\ln \kappa}\big). $$
It follows that (1.6) holds. The proof is complete.
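For concreteness, the infimum in Proposition 3.1 can be evaluated explicitly from (1.7) (a direct computation consistent with the proof above):
$$ \inf_{\sigma\in (0, \infty)}g_{\kappa}(\sigma)=\lim_{\sigma\to 0+}\Phi\left(\frac{\ln \kappa}{\sigma}+\frac{\sigma}{2}\right)=\begin{cases}0, & \kappa<1,\\ \frac{1}{2}, & \kappa=1,\end{cases} $$
while for $ \kappa>1 $,
$$ \min_{\sigma\in (0, \infty)}g_{\kappa}(\sigma)=g_{\kappa}\big(\sqrt{2\ln \kappa}\big)=\Phi\left(\frac{\ln \kappa}{\sqrt{2\ln \kappa}}+\frac{\sqrt{2\ln \kappa}}{2}\right)=\Phi\big(\sqrt{2\ln \kappa}\big)>\frac{1}{2}. $$
In particular, the infimum is $ 0 $ for $ \kappa<1 $ and strictly positive for $ \kappa\geq 1 $.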
Let $ X_{\mu, \beta} $ be a Gumbel random variable with parameters $ \mu $ and $ \beta $ $ (\mu\in \mathbb{R}, \beta>0) $. By [11], the density function of $ X_{\mu, \beta} $ is given by
$$ f_{\mu, \beta}(x)=\frac{1}{\beta}\exp\left(-\frac{x-\mu}{\beta}-e^{-\frac{x-\mu}{\beta}}\right), \quad x\in \mathbb{R}, $$
and its expectation is $ E[X_{\mu, \beta}]=\mu +\beta \gamma, $ where $ \gamma $ is Euler's constant. Then, for any given real number $ \kappa>0, $ we have
$$ P(X_{\mu, \beta}\leq \kappa E[X_{\mu, \beta}])=\exp\left(-e^{-\frac{\kappa(\mu+\beta\gamma)-\mu}{\beta}}\right). $$
Let $ x:=\frac{\mu}{\beta} $ and define
$$ g_{\kappa}(x):=\exp\left(-e^{-\left((\kappa-1)x+\kappa\gamma\right)}\right), \quad x\in \mathbb{R}, \tag{1.8} $$
so that $ P(X_{\mu, \beta}\leq \kappa E[X_{\mu, \beta}])=g_{\kappa}(x) $.
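The computation behind (1.8) uses only the Gumbel distribution function $ F_{\mu, \beta}(t)=\exp\big(-e^{-\frac{t-\mu}{\beta}}\big) $ and the substitution $ x=\frac{\mu}{\beta} $:
$$ P\big(X_{\mu, \beta}\leq \kappa(\mu+\beta\gamma)\big)=\exp\left(-e^{-\frac{\kappa(\mu+\beta\gamma)-\mu}{\beta}}\right)=\exp\left(-e^{-\left((\kappa-1)x+\kappa\gamma\right)}\right). $$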
Proposition 3.2 (ⅰ) If $ \kappa <1 $, then $ \inf\limits_{x \in \mathbb{R} }g_{\kappa }\left (x \right )=\lim\limits_{x\to \infty}g_{\kappa}(x)=0; $
(ⅱ) If $ \kappa> 1 $, then $ \inf\limits_{x \in \mathbb{R} }g_{\kappa }\left (x \right )=\lim\limits_{x\to -\infty}g_{\kappa}(x)=0; $
(ⅲ) If $ \kappa=1, $ then $ g_{\kappa}(x)\equiv e^{-e^{- \gamma}}>\frac{1}{2}. $
Proof By taking the derivative of $ g_{\kappa}(x) $ defined in (1.8), we have
$$ g^{'}_{\kappa}(x)=(\kappa-1)\,e^{-\left((\kappa-1)x+\kappa\gamma\right)}\exp\left(-e^{-\left((\kappa-1)x+\kappa\gamma\right)}\right)=(\kappa-1)\,e^{-\left((\kappa-1)x+\kappa\gamma\right)}g_{\kappa}(x). $$
If $ \kappa <1, $ then $ g^{'}_{\kappa}(x)< 0, $ which implies that $ g_{\kappa}(x) $ is a strictly decreasing function of $ x. $ Then
$$ \inf_{x \in \mathbb{R}}g_{\kappa}(x)=\lim_{x\to \infty}g_{\kappa}(x)=0. $$
If $ \kappa> 1, $ then $ g^{'}_{\kappa}(x)> 0. $ Thus, $ g_{\kappa}(x) $ is a strictly increasing function of $ x $ and
$$ \inf_{x \in \mathbb{R}}g_{\kappa}(x)=\lim_{x\to -\infty}g_{\kappa}(x)=0. $$
If $ \kappa=1, $ then by (1.8), we get that $ g_{\kappa}(x)\equiv e^{-e^{- \gamma}}>\frac{1}{2} $. The proof is complete.
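As a quick check of the limits in the above proof (the numerical value below is approximate): for $ \kappa<1 $, $ (\kappa-1)x+\kappa\gamma\to -\infty $ as $ x\to \infty $, so $ e^{-\left((\kappa-1)x+\kappa\gamma\right)}\to \infty $ and $ g_{\kappa}(x)\to 0 $; for $ \kappa>1 $, the same happens as $ x\to -\infty $. For $ \kappa=1 $,
$$ g_{1}(x)\equiv e^{-e^{-\gamma}}\approx e^{-0.5615}\approx 0.570>\frac{1}{2}. $$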
Let $ X_{\mu, \beta} $ be a logistic random variable with parameters $ \mu $ and $ \beta $ $ (\mu \in \mathbb{R}, \beta>0) $. By [10, Chapter 21], we know that the distribution function of $ X_{\mu, \beta} $ is given by
$$ F_{\mu, \beta}(t)=\frac{1}{1+e^{-\frac{t-\mu}{\beta}}}, \quad t\in \mathbb{R}, $$
and the expectation of $ X_{\mu, \beta} $ is $ E[X_{\mu, \beta}]=\mu $. Then, for any given real number $ \kappa>0, $ we have
$$ P(X_{\mu, \beta}\leq \kappa E[X_{\mu, \beta}])=\frac{1}{1+e^{-\frac{(\kappa-1)\mu}{\beta}}}. $$
Let $ y:=\frac{\mu}{\beta} $ and define
$$ g_{\kappa}(y):=\frac{1}{1+e^{-(\kappa-1)y}}, \quad y\in \mathbb{R}, \tag{1.9} $$
so that $ P(X_{\mu, \beta}\leq \kappa E[X_{\mu, \beta}])=g_{\kappa}(y) $.
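The reduction to (1.9) is immediate from the distribution function above and $ y=\frac{\mu}{\beta} $:
$$ P(X_{\mu, \beta}\leq \kappa\mu)=\frac{1}{1+e^{-\frac{\kappa\mu-\mu}{\beta}}}=\frac{1}{1+e^{-(\kappa-1)y}}. $$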
Proposition 3.3 (ⅰ) If $ \kappa < 1 $, then $ \inf\limits_{y \in \mathbb{R} }g_{\kappa }\left (y \right )=\lim\limits_{y\to \infty}g_{\kappa}(y)=0; $
(ⅱ) If $ \kappa> 1 $, then $ \inf\limits_{y \in \mathbb{R} }g_{\kappa }\left ( y \right )=\lim\limits_{y\to -\infty}g_{\kappa}(y)=0; $
(ⅲ) If $ \kappa=1, $ then $ g_{\kappa}(y)\equiv \frac{1}{2}. $
Proof By the definition of $ g_{\kappa}(y) $ in (1.9), we have
$$ g^{'}_{\kappa}(y)=\frac{(\kappa-1)\,e^{-(\kappa-1)y}}{\left(1+e^{-(\kappa-1)y}\right)^{2}}. $$
If $ \kappa<1, $ then $ g^{'}_{\kappa}(y)<0 $, which implies that the function $ g_{\kappa}(y) $ is strictly decreasing. Then
$$ \inf_{y \in \mathbb{R}}g_{\kappa}(y)=\lim_{y\to \infty}g_{\kappa}(y)=0. $$
If $ \kappa> 1, $ then $ g^{'}_{\kappa}(y)> 0. $ Thus, $ g_{\kappa}(y) $ is a strictly increasing function of $ y $ and
$$ \inf_{y \in \mathbb{R}}g_{\kappa}(y)=\lim_{y\to -\infty}g_{\kappa}(y)=0. $$
If $ \kappa=1, $ then $ g_{\kappa}(y)\equiv\frac{1}{2} $ by (1.9). The proof is complete.
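As a check of the limits in the above proof, note that directly from (1.9),
$$ \lim_{(\kappa-1)y\to -\infty}g_{\kappa}(y)=\lim_{(\kappa-1)y\to -\infty}\frac{1}{1+e^{-(\kappa-1)y}}=0, $$
which gives case (ⅰ) as $ y\to \infty $ when $ \kappa<1 $ and case (ⅱ) as $ y\to -\infty $ when $ \kappa>1 $.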