数学杂志 (Journal of Mathematics), 2023, Vol. 43, Issue (1): 38-42
SOME IMPROVED INEQUALITIES FOR MATRICES
HU Xing-kai, LIU Wu-shuang    
Faculty of Science, Kunming University of Science and Technology, Kunming 650500, China
Abstract: In this paper, matrix inequalities are studied. Using two new scalar inequalities, we obtain weighted geometric mean inequalities and Hilbert-Schmidt norm inequalities for matrices; the results refine some corresponding known inequalities.
Keywords: scalar inequalities     weighted geometric mean inequalities     positive definite matrix     Hilbert-Schmidt norm
1 Introduction

Let $ M_{m, n} $ be the space of $ m\times n $ complex matrices and let $ M_n=M_{n, n} $. A norm $ \left\| \cdot \right\| $ on $ M_n $ is called unitarily invariant if $ \|UAV\|=\|A\| $ for all $ A\in M_n $ and all unitary matrices $ U, V\in M_n $. We write $ A> 0 $ to mean that $ A $ is a positive definite matrix. The Hilbert-Schmidt norm of $ A=(a_{ij})\in M_n $ is defined by

$ \|A\|_2=\left(\sum\limits_{i, j=1}^n |a_{ij}|^2\right)^{\frac{1}{2}}. $

Let $ A, B\in M_n $ be positive definite and let $ 0\leq v\leq 1 $. The weighted geometric mean of the matrices $ A $ and $ B $ is defined by

$ A\sharp_vB=A^{\frac{1}{2}}(A^{-\frac{1}{2}}BA^{-\frac{1}{2}})^vA^{\frac{1}{2}}, $

and for $ v = \frac{1}{2} $ we write $ A\sharp B $ for the geometric mean.
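For concreteness, the weighted geometric mean can be evaluated numerically straight from this definition, using fractional matrix powers computed by eigendecomposition. The following is a minimal Python/NumPy sketch (the function names are ours, not from the paper):

```python
import numpy as np

def matrix_power(A, p):
    """Fractional power of a positive definite matrix via eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return (V * w**p) @ V.conj().T  # V diag(w^p) V^*

def weighted_geometric_mean(A, B, v):
    """A #_v B = A^{1/2} (A^{-1/2} B A^{-1/2})^v A^{1/2}."""
    Ah = matrix_power(A, 0.5)
    Aih = matrix_power(A, -0.5)
    return Ah @ matrix_power(Aih @ B @ Aih, v) @ Ah

# Sanity checks: A #_0 B = A and A #_1 B = B.
rng = np.random.default_rng(0)
X = rng.standard_normal((3, 3))
Y = rng.standard_normal((3, 3))
A = X @ X.T + 3 * np.eye(3)   # positive definite by construction
B = Y @ Y.T + 3 * np.eye(3)
assert np.allclose(weighted_geometric_mean(A, B, 0.0), A)
assert np.allclose(weighted_geometric_mean(A, B, 1.0), B)
```

The sanity checks at the end use the endpoint identities $ A\sharp_0B=A $ and $ A\sharp_1B=B $, which follow directly from the definition.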

Kittaneh and Manasrah [1] proved that if $ A, B\in M_n $ are positive definite and $ 0\leq v\leq 1, $ then

$ \begin{equation} \begin{array}{lll} &2r_0(A+B-2A\sharp B)+A\sharp_vB+A\sharp_{1-v}B\\ &\leq A+B\\ &\leq 2s_0(A+B-2A\sharp B)+A\sharp_vB+A\sharp_{1-v}B, \end{array} \end{equation} $ (1.1)

where $ r_{0}=\min\{v, 1-v\}, s_{0}=\max\{v, 1-v\} $.

In 2018, Liu and Yang [2] refined the inequalities (1.1) as follows:

$ \begin{equation} \begin{array}{lll} &2r_0(A+B-2A\sharp B)+A\sharp_vB+A\sharp_{1-v}B\\ &\leq A+B\\ &\leq \alpha(v)(A+B-2A\sharp B)+A\sharp_vB+A\sharp_{1-v}B\\ &\leq 2s_0(A+B-2A\sharp B)+A\sharp_vB+A\sharp_{1-v}B, \end{array} \end{equation} $ (1.2)

where $ \alpha(v)=\frac{3}{2}-2(v-v^2). $

Let $ A, B, X \in M_n $ be such that $ A $ and $ B $ are positive definite. Bhatia and Davis [3] proved that if $ 0\le v\le 1 $, then

$ 2||A^{\frac{1}{2}}XB^{\frac{1}{2}}||\leq ||A^{v}XB^{1-v}+A^{1-v}XB^{v}||\leq ||AX+XB||, $

where the second inequality is known as the Heinz inequality.

He and Zou [4] showed that if $ 0\le v\le 1 $, then

$ \begin{equation} ||AX+XB||_2^2\leq ||A^{v}XB^{1-v}+A^{1-v}XB^{v}||_2^2+2s_0||AX-XB||_2^2, \end{equation} $ (1.3)

where $ s_{0}=\max\{v, 1-v\} $. Kittaneh and Manasrah [5] showed that if $ 0\le v\le 1 $, then

$ \begin{equation} ||A^{v}XB^{1-v}+A^{1-v}XB^{v}||_2^2+2r_0||AX-XB||_2^2\leq ||AX+XB||_2^2, \end{equation} $ (1.4)

where $ r_{0}=\min\{v, 1-v\} $; inequality (1.4) is the reverse of inequality (1.3).

In 2018, Liu and Yang [2] refined inequality (1.3) as follows:

$ \begin{equation} ||AX+XB||_2^2\leq ||A^{v}XB^{1-v}+A^{1-v}XB^{v}||_2^2+\alpha(v)||AX-XB||_2^2, \end{equation} $ (1.5)

where $ \alpha(v)=\frac{3}{2}-2(v-v^2) $.

Recently, many interesting articles have been devoted to the study of unitarily invariant norm inequalities for matrices; see [6-8] and the references therein.

In this paper, we first establish two scalar inequalities. Using these scalar inequalities, we improve inequalities (1.2) and (1.5).

2 Main results

In the following, we give two scalar inequalities which will turn out to be useful in the proof of our results.

Theorem 2.1 Let $ a, b>0 $ and $ 0\leq v\leq 1 $. Then

$ \begin{equation} a+b\leq a^{v}b^{1-v}+a^{1-v}b^{v}+\gamma(v)(\sqrt{a}-\sqrt{b})^2, \end{equation} $ (2.1)

where $ \gamma(v)=\frac{5}{4}-(v-v^2) $.

Proof To prove inequality (2.1), it suffices to prove the inequality

$ (1-\gamma(v))(a+b)+2\gamma(v)\sqrt{ab}\leq a^{v}b^{1-v}+a^{1-v}b^{v}. $

Let $ a=e^x $ and $ b=e^y $. Dividing both sides by $ 2e^{\frac{x+y}{2}} $ and using the definition of the hyperbolic cosine, the inequality becomes

$ \begin{equation} (v-v^{2}-\frac{1}{4})\cosh(\frac{x-y}{2})+(\frac{5}{4}-(v-v^2))\leq \cosh((1-2v)(\frac{x-y}{2})). \end{equation} $ (2.2)

Let $ z=\frac{x-y}{2} $. By the series expansion of $ \cosh z $, inequality (2.2) is equivalent to

$ \begin{equation} (v-v^{2}-\frac{1}{4})(1+\frac{z^2}{2!}+\frac{z^4}{4!}+\cdots)+(\frac{5}{4}-(v-v^2)) \leq 1+\frac{(1-2v)^2z^2}{2!}+\frac{(1-2v)^4z^4}{4!}+\cdots. \end{equation} $ (2.3)

For $ 0\leq v\leq 1 $ we have $ v-v^{2}-\frac{1}{4}=-(v-\frac{1}{2})^2\leq 0\leq (1-2v)^{2k} $ for every $ k\geq 1 $, so the coefficient of each power $ z^{2k} $ on the left-hand side of (2.3) is at most the corresponding coefficient on the right-hand side, while the constant terms on both sides equal $ 1 $. Hence inequality (2.3) holds.

This completes the proof.
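As an informal sanity check (not part of the proof), inequality (2.1) can be spot-checked numerically on random inputs:

```python
import numpy as np

def gamma(v):
    """gamma(v) = 5/4 - (v - v^2) from Theorem 2.1."""
    return 5/4 - (v - v*v)

# Spot-check (2.1): a + b <= a^v b^(1-v) + a^(1-v) b^v + gamma(v)(sqrt(a)-sqrt(b))^2
rng = np.random.default_rng(1)
for _ in range(10_000):
    a, b = rng.uniform(1e-3, 10, size=2)
    v = rng.uniform(0, 1)
    lhs = a + b
    rhs = a**v * b**(1-v) + a**(1-v) * b**v + gamma(v) * (np.sqrt(a) - np.sqrt(b))**2
    assert lhs <= rhs + 1e-12  # small tolerance for floating-point rounding
```

Equality is approached as $ a\to b $, where both the correction term and the gap between $ a+b $ and $ a^{v}b^{1-v}+a^{1-v}b^{v} $ vanish.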

Corollary 2.2 Let $ a, b>0 $ and $ 0\leq v\leq 1 $. Then

$ \begin{equation} (a+b)^2\leq (a^{v}b^{1-v}+a^{1-v}b^{v})^2+\gamma(v)(a-b)^2, \end{equation} $ (2.4)

where $ \gamma(v)=\frac{5}{4}-(v-v^2) $.

Proof By inequality (2.1), we have

$ \begin{array}{lll} (\sqrt{a}+\sqrt{b})^2-(a^\frac{v}{2}b^\frac{1-v}{2}+a^\frac{1-v}{2}b^\frac{v}{2})^2&=& a+b-(a^{v}b^{1-v}+a^{1-v}b^{v})\\ \\ &\le& (\frac{5}{4}-(v-v^2))(\sqrt{a}-\sqrt{b})^2, \end{array} $

hence, replacing $ a $ by $ a^2 $ and $ b $ by $ b^2 $,

$ (a+b)^2\leq (a^{v}b^{1-v}+a^{1-v}b^{v})^2+(\frac{5}{4}-(v-v^2))(a-b)^2. $

This completes the proof.
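The squared form (2.4) admits the same kind of informal numerical spot-check (again, not part of the proof):

```python
import numpy as np

def gamma(v):
    """gamma(v) = 5/4 - (v - v^2) from Corollary 2.2."""
    return 5/4 - (v - v*v)

# Spot-check (2.4): (a+b)^2 <= (a^v b^(1-v) + a^(1-v) b^v)^2 + gamma(v)(a-b)^2
rng = np.random.default_rng(2)
for _ in range(10_000):
    a, b = rng.uniform(1e-3, 10, size=2)
    v = rng.uniform(0, 1)
    lhs = (a + b)**2
    rhs = (a**v * b**(1-v) + a**(1-v) * b**v)**2 + gamma(v) * (a - b)**2
    assert lhs <= rhs + 1e-9  # tolerance for floating-point rounding
```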

Theorem 2.3 Let $ A, B\in M_n $ be positive definite. Then

$ \begin{equation} \begin{array}{lll} &2r_{0}(A+B-2A\sharp B)+A\sharp_{v}B+A\sharp_{1-v}B\\ \\ &\leq A+B\\ \\ &\leq \gamma(v)(A+B-2A\sharp B)+A\sharp_{v}B+A\sharp_{1-v}B\\ \\ &\leq \alpha(v)(A+B-2A\sharp B)+A\sharp_{v}B+A\sharp_{1-v}B, \end{array} \end{equation} $ (2.5)

where $ v \in [0, 1] $, $ r_{0}=\min\{v, 1-v\}, \gamma(v)=\frac{5}{4}-(v-v^2), \alpha(v)=\frac{3}{2}-2(v-v^2) $.

Proof By inequality (1.2), the first inequality of (2.5) holds. For the second inequality of (2.5), note that if $ T\in M_n $ is positive definite, then by the spectral theorem there exists a unitary matrix $ U\in M_n $ such that

$ T=UPU^*, $

where $ P=\mathrm{diag}(\lambda_1, \lambda_2, \cdots, \lambda_n) $ with $ \lambda_j> 0, 1\leq j\leq n $. Taking $ b = 1 $ in inequality (2.1), we have for $ a > 0 $

$ a+1\leq a^{v}+a^{1-v}+\gamma(v)(\sqrt{a}-1)^2, $

and so

$ \begin{equation} P+I\leq P^v+P^{1-v}+\gamma(v)(\sqrt{P}-I)^2. \end{equation} $ (2.6)

Multiplying inequality (2.6) on the left by $ U $ and on the right by $ U^* $, we have

$ T+I\leq T^v+T^{1-v}+\gamma(v)(\sqrt{T}-I)^2, $

Setting $ T=A^{-\frac{1}{2}}BA^{-\frac{1}{2}} $ and multiplying on the left and right by $ A^{\frac{1}{2}} $, the second inequality of (2.5) follows. For $ 0\leq v\leq 1 $, it is easy to see that

$ \alpha(v)-\gamma(v)=\frac{1}{4}-(v-v^2)=(v-\frac{1}{2})^2\geq 0. $

Therefore, Theorem 2.3 is a refinement of the inequalities (1.2).

This completes the proof.
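The Löwner ordering asserted in the second inequality of (2.5) can be checked numerically on a random instance by verifying that the difference matrix is positive semidefinite. A sketch under our own helper names (not the paper's notation):

```python
import numpy as np

def mpow(A, p):
    """Fractional power of a positive definite matrix."""
    w, V = np.linalg.eigh(A)
    return (V * w**p) @ V.conj().T

def geo_mean(A, B, v):
    """Weighted geometric mean A #_v B."""
    Ah, Aih = mpow(A, 0.5), mpow(A, -0.5)
    return Ah @ mpow(Aih @ B @ Aih, v) @ Ah

def is_psd(M, tol=1e-9):
    """Check positive semidefiniteness via the smallest eigenvalue."""
    return np.linalg.eigvalsh((M + M.conj().T) / 2).min() >= -tol

rng = np.random.default_rng(3)
X = rng.standard_normal((4, 4)); A = X @ X.T + np.eye(4)
Y = rng.standard_normal((4, 4)); B = Y @ Y.T + np.eye(4)
v = 0.3
gv = 5/4 - (v - v*v)                      # gamma(v)
G = A + B - 2 * geo_mean(A, B, 0.5)       # A + B - 2 A#B
S = geo_mean(A, B, v) + geo_mean(A, B, 1 - v)
# Second inequality of (2.5): A + B <= gamma(v) G + A#_v B + A#_{1-v} B
assert is_psd(gv * G + S - (A + B))
```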

Theorem 2.4 Let $ A, B, X\in M_n $ be such that $ A $ and $ B $ are positive definite. Then

$ \begin{equation} ||AX+XB||^2_2\leq ||A^vXB^{1-v}+A^{1-v}XB^v||^2_2+\gamma(v)||AX-XB||^2_2, \end{equation} $ (2.7)

where $ v \in [0, 1] $, $ \gamma(v)=\frac{5}{4}-(v-v^2). $

Proof Since every positive definite matrix is unitarily diagonalizable, it follows that there exist unitary matrices $ U, V\in M_n $ such that

$ A=UP_1U^*, B=VP_2V^*, $

where $ P_1=\mathrm{diag}(\lambda_1, \lambda_2, \cdots, \lambda_n) $, $ P_2=\mathrm{diag}(\mu_1, \mu_2, \cdots, \mu_n) $, $ \lambda_j, \mu_j> 0 $, $ 1\leq j\leq n $.

Let $ C=U^*XV=(c_{ij}) $, then

$ \begin{array}{lll} A^vXB^{1-v}+A^{1-v}XB^v&=(UP_1U^*)^vX(VP_2V^*)^{1-v}+(UP_1U^*)^{1-v}X(VP_2V^*)^v\\ & = UP^v_1(U^*XV)P^{1-v}_2V^*+UP^{1-v}_1(U^*XV)P^v_2V^*\\ & = U(P^v_1CP^{1-v}_2+P^{1-v}_1CP^v_2)V^* , \end{array} $

and

$ \begin{array}{lll} ||A^vXB^{1-v}+A^{1-v}XB^v||^2_2&=||P^v_1CP^{1-v}_2+P^{1-v}_1CP^v_2||^2_2\\ & = \sum\limits_{i, j=1}^n(\lambda^v_i\mu^{1-v}_j+\lambda^{1-v}_i\mu^v_j)^2|c_{ij}|^2. \end{array} $

Using the same method, we have

$ ||AX+XB||^2_2= \sum\limits_{i, j=1}^n(\lambda_i+\mu_j)^2|c_{ij}|^2, $
$ ||AX-XB||^2_2= \sum\limits_{i, j=1}^n(\lambda_i-\mu_j)^2|c_{ij}|^2. $

By inequality (2.4), we obtain

$ \sum\limits_{i, j=1}^n(\lambda^v_i\mu^{1-v}_j+\lambda^{1-v}_i\mu^v_j)^2|c_{ij}|^2+\gamma(v)\sum\limits_{i, j=1}^n(\lambda_i-\mu_j)^2|c_{ij}|^2\geq \sum\limits_{i, j=1}^n(\lambda_i+\mu_j)^2|c_{ij}|^2. $

Therefore, inequality (2.7) holds. Since $ \gamma(v)\leq\alpha(v) $, it is a refinement of inequality (1.5).

This completes the proof.
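Inequality (2.7) can likewise be spot-checked numerically, using the Frobenius norm as the Hilbert-Schmidt norm (helper names are ours):

```python
import numpy as np

def mpow(A, p):
    """Fractional power of a positive definite matrix."""
    w, V = np.linalg.eigh(A)
    return (V * w**p) @ V.conj().T

def hs(M):
    """Hilbert-Schmidt (Frobenius) norm."""
    return np.linalg.norm(M, 'fro')

rng = np.random.default_rng(4)
X0 = rng.standard_normal((4, 4)); A = X0 @ X0.T + np.eye(4)
Y0 = rng.standard_normal((4, 4)); B = Y0 @ Y0.T + np.eye(4)
X = rng.standard_normal((4, 4))           # arbitrary, need not be definite
v = 0.7
gv = 5/4 - (v - v*v)                      # gamma(v)
lhs = hs(A @ X + X @ B)**2
mid = mpow(A, v) @ X @ mpow(B, 1-v) + mpow(A, 1-v) @ X @ mpow(B, v)
rhs = hs(mid)**2 + gv * hs(A @ X - X @ B)**2
# Inequality (2.7): ||AX+XB||_2^2 <= ||A^v X B^(1-v) + A^(1-v) X B^v||_2^2 + gamma(v)||AX-XB||_2^2
assert lhs <= rhs + 1e-8
```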

References
[1]
Kittaneh F, Manasrah Y. Reverse Young and Heinz inequalities for matrices[J]. Linear Multilinear Algebra, 2011, 59(9): 1031-1037. DOI:10.1080/03081087.2010.551661
[2]
Liu Xin, Yang Xiaoying. Some inequalities on matrix weighted geometric mean and norm[J]. Applied Mathematics A Journal of Chinese Universities, 2018, 33(3): 373-378.
[3]
Bhatia R, Davis C. More matrix forms of the arithmetic-geometric mean inequality[J]. SIAM J. Matrix Anal. Appl., 1993, 14(1): 132-136. DOI:10.1137/0614012
[4]
He Chuanjiang, Zou Limin. Some inequalities involving unitarily invariant norms[J]. Math. Inequal. Appl., 2012, 15(4): 767-776.
[5]
Kittaneh F, Manasrah Y. Improved Young and Heinz inequalities for matrices[J]. J. Math. Anal. Appl., 2010, 361(1): 262-269. DOI:10.1016/j.jmaa.2009.08.059
[6]
Xue Jianming, Hu Xingkai. A note on some inequalities for unitarily invariant norms[J]. J. Math. Inequal., 2015, 9(3): 841-846.
[7]
Zou Limin. Unification of the arithmetic-geometric mean and Hölder inequalities for unitarily invariant norms[J]. Linear Algebra Appl., 2019, 562: 154-162. DOI:10.1016/j.laa.2018.09.030
[8]
Al-Natoor A, Benzamiab S, Kittaneh F. Unitarily invariant norm inequalities for positive semidefinite matrices[J]. Linear Algebra Appl., 2022, 633: 303-315. DOI:10.1016/j.laa.2021.10.012