数学杂志  2019, Vol. 39 Issue (6): 931-945
一种求解单调包含问题的惯性混合邻近外梯度算法
何明明, 彭建文    
重庆师范大学数学科学学院, 重庆 401331
摘要:本文研究了求解单调包含问题的一种新的惯性混合邻近外梯度算法.利用Opial定理,获得了惯性混合邻近外梯度算法的弱收敛性和非渐近全局收敛率.在惯性混合邻近外梯度算法的框架下,本文提出并分析了惯性Tseng’s向前向后算法和惯性非精确Spingarn’s部分逆算法的收敛性和非渐近全局收敛率.
关键词: 混合邻近外梯度算法    惯性    Tseng’s向前向后算法    Spingarn’s部分逆算法
AN INERTIAL HYBRID PROXIMAL EXTRAGRADIENT METHOD FOR SOLVING MONOTONE INCLUSIONS
HE Ming-ming, PENG Jian-wen    
School of Mathematical Sciences, Chongqing Normal University, Chongqing 401331, China
Abstract: In this paper, we study a new inertial hybrid proximal extragradient method for solving monotone inclusion problems. By using the Opial theorem, we obtain the weak convergence and the non-asymptotic global convergence rate of the inertial hybrid proximal extragradient method. In the framework of the inertial hybrid proximal extragradient method, we propose and analyze the convergence and non-asymptotic global convergence rate of an inertial Tseng's forward-backward method and an inertial inexact Spingarn's partial inverse method.
Keywords: hybrid proximal extragradient method     inertia     Tseng's forward-backward method     Spingarn's partial inverse method    
1 引言

对于单调包含问题, 即:找到$ x^*\in \mathbb{X} $使得

$ \begin{equation} 0 \in B\; (x^*), \end{equation} $ (1.1)

其中$ B:\mathbb{X}\rightrightarrows \mathbb{X} $是极大单调算子, $ \mathbb{X} $是实Hilbert空间.凸优化问题, 平衡问题和单调变分不等式等许多问题都可以归结为单调包含问题(1.1).单调包含问题在图像处理和聚类等实际问题中有着广泛的应用[1].

邻近点算法是解单调包含问题的一类经典算法, 它由Martinet[2]提出, Rockafellar[3]进一步发展而成.其迭代格式为

$ \begin{equation} x_k\in {( Id+{\lambda _k}B)}^{-1}(x_{k-1}), \end{equation} $ (1.2)

其中正则参数$ {\lambda _k}>0 $, $ Id $是单位算子. Alvarez和Attouch于2001年在文献[4]中提出解含有单调算子的广义方程的惯性邻近点算法, 并证明了该算法的弱收敛性.
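为直观理解迭代格式(1.2), 下面给出一个极简的一维数值示意(取$ B = \partial f $, $ f(x) = |x| $, 此时预解算子即软阈值算子, 迭代点列收敛到$ 0\in B(x^*) $的解$ x^* = 0 $; 代码与函数命名仅为示意):

import numpy as np

def soft_threshold(x, tau):
    # (Id + tau*∂|.|)^{-1}(x): 绝对值函数的邻近(软阈值)算子
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ppa(x0, lam=1.0, iters=20):
    # 邻近点算法(1.2): x_k = (Id + lam_k*B)^{-1}(x_{k-1}), 这里取 B = ∂|.|, lam_k 恒为 lam
    x = x0
    for _ in range(iters):
        x = soft_threshold(x, lam)
    return x

print(ppa(5.0))   # 若干次迭代后到达 0, 即问题 0 ∈ ∂|x| 的解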

混合邻近外梯度算法是由Solodov和Svaiter[5]在1999年针对单调包含问题提出的另一类算法.即

初始化  任取初始点$ {x_0} \in \mathbb{X} $, 任意给定$ \overline{\sigma} \in \left[ {0, 1} \right) $和$ \underline{\lambda}>0 $, 令$ k = 1 $.

步骤 1   选择$ {\lambda_k} \ge \underline{\lambda} $, 容差参数$ \sigma_k \in[0, \overline{\sigma}] $, 并找到$ {y_k} $, $ {v_k} \in \mathbb{X} $, $ {\varepsilon _k}\ge 0 $使其满足

$ {v_k} \in {B^{{[\varepsilon _k]}}}\left( {{y_k}} \right), \;\left\| {{\lambda_k}{v_k} + {y_k} - {x_{k-1}}} \right\|^2 + 2{\lambda_k}{\varepsilon _k} \le \sigma _k^2\left\| {{y_k} - {x_{k-1}}} \right\|^2; $

步骤 2   执行一个外梯度步: $ x_{k} = x_{k-1}-{\lambda_k}{v_k} $.令$ k: = k+1 $, 返回步骤1.

实质上, 混合邻近外梯度算法是邻近点算法的一种不精确版本, 在每一次迭代中, 对应的近似子问题都应在一个相对误差准则内求解, 这与Rockafellar提出的邻近点算法的可加误差准则不同.混合邻近外梯度算法的另一个特点是它还允许通过文献[6]中提出的极大单调算子的$ \varepsilon $-enlargement来松弛单调算子.
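作为说明, 上述相对误差准则在数值上的检验方式可示意如下(假设$ y_k $, $ v_k $, $ \varepsilon_k $已由某个内层子程序给出; 函数与变量命名仅为示意):

import numpy as np

def hpe_criterion_ok(x_prev, y, v, eps, lam, sigma):
    # 检验相对误差准则: ||lam*v + y - x_prev||^2 + 2*lam*eps <= sigma^2 * ||y - x_prev||^2
    lhs = np.linalg.norm(lam * v + y - x_prev) ** 2 + 2.0 * lam * eps
    rhs = sigma ** 2 * np.linalg.norm(y - x_prev) ** 2
    return lhs <= rhs

# 子问题精确求解时 lam*v + y - x_prev = 0 且 eps = 0, 准则对任意 sigma ∈ [0, 1) 成立
print(hpe_criterion_ok(np.zeros(2), np.array([1.0, 0.0]), np.array([-2.0, 0.0]), 0.0, 0.5, 0.3))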

混合邻近外梯度算法作为设计新方法和分析现有方法的复杂性的框架特别有用.例如Monteiro和Svaiter[7]在2013年提出块状分解混合邻近外梯度算法, 并证明了该算法的迭代复杂性, 且在该算法的框架下证明了交替方向乘子法的遍历迭代复杂度; Goncalves, Melo和Monteiro[8]在2017年提出一种新的正则化混合邻近外梯度算法, 并建立了该算法的迭代复杂性, 且证明了交替方向乘子法的变体是该算法的特殊实例; Alves和Svaiter[9]在2016年提出了一种新的求解强单调包含问题的混合邻近外梯度算法:

初始化  任取初始点$ {x_0} \in \mathbb{X} $, 任意给定$ \sigma \in[0, 1) $, $ k = 1 $.

步骤 1   选定$ 0<\lambda _k \leq\lambda _{k-1} $, 并找到$ {y_k} $, $ {v_k} \in \mathbb{X} $, $ {\varepsilon _k} \ge 0 $, 使得

$ {v_k} \in A\left( {y_k} \right) + B^{[{\varepsilon _k}]}\left( {{y_k}} \right){\mkern 1mu} {\mkern 1mu} , \;\frac{{{\mkern 1mu} {{\left\| {{\lambda _k}{v_k} + {y_k} - {x_{k - 1}}} \right\|}^2}}}{{1{\rm{ + }}2{\lambda _k}\mu }} + 2{\lambda _k}{\varepsilon _k} \le \sigma^2{\left\| {{y_k} - {x_{k - 1}}} \right\|^2}; $

步骤 2   计算$ x_k $:

$ {x_k} = \frac{{\left( {{x_{k - 1}} - {\lambda _k}{v_k}} \right) + 2{\lambda _k}\mu {y_k}}}{{1 + 2{\lambda _k}\mu }}, \text {令}k: = k+1, \text {返回步骤}\; 1. $

并用混合邻近外梯度算法的框架证明了Korpelevich外梯度算法的迭代复杂性.

近年来, 惯性邻近点类算法被大量用于设计和分析各种具有惯性的一阶邻近算法.例如Chen等人[10]提出求解具有可分结构的线性约束凸优化问题的惯性交替方向乘子法, 并建立了渐近收敛率和非渐近收敛率; Bot, Csetnek和Hendrich在文献[11]中提出了求解单调包含问题的惯性Douglas-Rachford分裂算法, 并证明了该算法在希尔伯特空间中的收敛性; Bot和Csetnek[1]在2015年提出求解单调包含问题的惯性混合邻近外梯度算法, 并建立了该算法的弱收敛性和渐近收敛性; Alves和Marcavillaca于2018年在文献[12]中提出一种求解单调包含问题的新的惯性混合邻近外梯度算法, 并得到了它的渐近收敛率和非渐近全局收敛率.

受上述文献的启发, 本文提出了一种新的求解单调包含问题的惯性混合邻近外梯度算法, 并得到了它的收敛性和非渐近全局收敛率.不同的是, 本文中的惯性系数的范围得到了扩大, 其系数区间为$ [0\, , 1+2\underline{\lambda}\mu] $.特别的, 当$ \mu = 0 $时, 惯性区间为$ [0\, , 1] $, 正好与文献[12]中的区间一致.

本文结构如下:首先是预备知识, 对概念和符号进行说明, 并对证明过程中所需的定理进行说明; 然后是本文的主要内容, 本文提出了一种用于求解单调包含问题的惯性混合邻近外梯度算法, 并证明了它的非渐近全局收敛率.最后本文提出了惯性混合邻近外梯度算法的两种特殊情况:求解含有Lipschitz连续算子的强单调包含问题的惯性Tseng's向前向后算法, 并建立了该算法的弱收敛性和非渐近全局收敛率; 求解含有强单调算子和Lipschitz连续算子的原始—对偶问题的惯性非精确Spingarn's部分逆算法, 并得到了该算法的弱收敛性和非渐近全局收敛率.

2 预备知识

$ \mathbb X $是内积为$ \langle { \cdot \, , \, \cdot } \rangle $, 范数为$ \left\| {\; \cdot \;} \right\|: = \sqrt {\langle { \cdot \, , \, \cdot } \rangle} $的实Hilbert空间.对于集值算子$ S:{\mathbb X}\rightrightarrows {\mathbb X} $, 它的图和定义域分别为

$ G_r(S): = \{(x\, , \, v)\in {\mathbb X}\times{\mathbb X}:v\in S(x)\}, \; \hbox {Dom}(S): = \{x\in {\mathbb X}:S(x)\neq \emptyset\}. $

集值算子$ S $的逆$ S^{-1}:{\mathbb X}\rightrightarrows {\mathbb X} $定义为$ S^{-1}(v): = \{x\in{\mathbb X}:v\in S(x)\} $.

本文主要研究单调包含问题的求解算法, 即含有(极大)单调算子的包含问题.在本节接下来的内容中将回顾一些基本概念和结论.

定义 2.1 [12]  若集值算子$ A:\mathbb{X}\rightrightarrows \mathbb{X} $满足

$ \begin{eqnarray} \langle {v - v', x - x'} \rangle \geq \mu {\| {x - x'} \|^2}, \; \forall \; v\in A(x), \; v'\in A(x'), \end{eqnarray} $ (2.1)

则称算子$ A $$ \mu $ -强单调的.特别地, 如果$ \mu = 0 $则称$ A $是单调的.

定义 2.2 [12]  设算子$ A:\mathbb{X}\rightrightarrows \mathbb{X} $是单调的, 如果不存在单调算子$ B:\mathbb{X}\rightrightarrows \mathbb{X} $使得$ Gr(A) $真包含于$ Gr(B) $, 则称算子$ A $是极大单调的.即如果单调算子$ B:\mathbb{X}\rightrightarrows \mathbb{X} $满足$ Gr(A)\subset Gr(B) $, 则$ A = B $.

集值算子$ S $, $ S':{\mathbb X}\rightrightarrows {\mathbb X} $的和$ S+ S':{\mathbb X}\rightrightarrows {\mathbb X} $定义为

$ (S+ S')(x): = \{s+s'\in {\mathbb X}:s\in S(x), \, s'\in S'(x)\}. $

显然, 如果$ A:\mathbb{X}\rightrightarrows \mathbb{X} $$ \mu $ -强单调算子, $ B:\mathbb{X}\rightrightarrows \mathbb{X} $是单调算子, 则$ A+B $也是$ \mu $ -强单调算子.特别的, 两个单调算子的和也是单调算子.

定义 2.3 [6] 设算子$ T:\mathbb{X}\rightrightarrows \mathbb{X} $是极大单调的, 如果$ T^{[\varepsilon]}:\mathbb{X}\rightrightarrows \mathbb{X} $满足

$ \begin{eqnarray} T^{[\varepsilon]}(x): = \{v\in\mathbb{X}:\langle {v - v', x - x'} \rangle \geq -\varepsilon, \; \forall x'\in\mathbb{X}, \; \forall v' \in T(x') \}, \end{eqnarray} $ (2.2)

则称算子$ T^{[\varepsilon]} $是极大单调算子$ T $$ \varepsilon $-enlargement.
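作为一个简单的说明性例子, 取$ \mathbb X = \mathbb R $, $ T = Id $(恒等算子, 极大单调).由(2.2)式, $ v\in T^{[\varepsilon]}(x) $当且仅当$ (v-x')(x-x')\geq-\varepsilon $对一切$ x'\in\mathbb R $成立; 令$ t = x-x' $, 上式即$ t^2+(v-x)t+\varepsilon\geq0 $对一切$ t\in\mathbb R $成立, 等价于$ (v-x)^2\leq4\varepsilon $, 故

$ T^{[\varepsilon]}(x) = [\, x-2\sqrt{\varepsilon}\, , \; x+2\sqrt{\varepsilon}\, ], $

即$ T^{[\varepsilon]}(x) $是把$ T(x) = \{x\} $放宽成的一个随$ \varepsilon $增大而变宽的区间.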

注意到$ T(x)\subset T^{[\varepsilon]}(x) $, $ \forall x\in \mathbb X $, $ \forall \varepsilon\geq0 $.从而可以将$ T^{[\varepsilon]} $视为$ T $的外延或近似.下面是关于$ T^{[\varepsilon]} $的性质的结论.

命题 2.1 [13] 令$ T $, $ S:{\mathbb X}\rightrightarrows {\mathbb X} $是集值映射, 则下列论述成立

(ⅰ)  如果$ \varepsilon \leq \varepsilon' $, 则$ T^{[\varepsilon]}(x)\subset T^{[\varepsilon']}(x) $, $ \forall x\in \mathbb X $.

(ⅱ)  $ T^{[\varepsilon]}(x)+ S^{[\varepsilon']}(x) \subset ( T+ S)^{[\varepsilon+\varepsilon']}(x) $, $ \forall x\in \mathbb X $$ \varepsilon $, $ \varepsilon'\geq0 $.

(ⅲ)  $ T $是单调算子当且仅当$ T(x)\subseteq{ T}^{[0]} (x) $, $ \forall x\in \mathbb X $.

(ⅳ)  $ T $是极大单调算子当且仅当$ T(x) = { T}^{[0]} (x) $, $ \forall x\in \mathbb X $.

(ⅴ)  如果$ T $是极大单调算子, 点列$ \{(y_k, \, v_k, \, \varepsilon_k)\} $满足对于任意的$ k\geq1 $有$ v_k\in T^{[\varepsilon_k]}(y_k) $, 且点列$ \{y_k\} $弱收敛于$ x $, 点列$ \{v_k\} $强收敛于$ v $, 数列$ \{\varepsilon_k\} $收敛于$ \varepsilon $, 则$ v\in { T}^{[\varepsilon]}(x) $.

定理 2.2 [12]   (Opial定理)令$ \emptyset \neq \Omega \subset \mathbb X $, $ \{x_k\} $是$ \mathbb X $中的数列.若对于任意的$ x^* \in \Omega $, $ \mathop {\lim }\limits_{k \to \infty } \left\| {{x_k} - {x^*}} \right\| $都存在, 且$ \{x_k\} $的每个弱聚点都属于$ \Omega $, 则数列$ \{x_k\} $弱收敛到$ \Omega $中的点.

引理 2.3 [14]  令$ c $, $ d \in {\mathbb{X}} $, $ p \in \mathbb{R} $, 则

$ \begin{equation} {\left\| {pc + \left( {1 - p} \right)d} \right\|^2} = p{\left\| c \right\|^2} + \left( {1 - p} \right){\left\| d \right\|^2} - p\left( {1 - p} \right){\left\| {c - d} \right\|^2}. \end{equation} $ (2.3)
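恒等式(2.3)对任意实数$ p $(包括负数)都成立, 后文引理3.1的证明将取$ p = -\alpha_{k-1} $使用它.下面给出一个简短的数值验证示意(随机向量, 仅作说明):

import numpy as np

rng = np.random.default_rng(0)
c, d = rng.standard_normal(3), rng.standard_normal(3)
for p in (-0.7, 0.3, 1.5):          # p 可取任意实数, 包括负值
    lhs = np.linalg.norm(p * c + (1 - p) * d) ** 2
    rhs = (p * np.linalg.norm(c) ** 2 + (1 - p) * np.linalg.norm(d) ** 2
           - p * (1 - p) * np.linalg.norm(c - d) ** 2)
    assert abs(lhs - rhs) < 1e-10   # 恒等式(2.3)成立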
3 惯性混合邻近外梯度算法

一般来说, 问题(1.1)往往是不适定的, 而含有强单调算子的单调包含问题总是适定的.因此在实际应用中, 常通过正则化将问题(1.1)近似为强单调包含问题, 这一技术可追溯到Tikhonov[15]的工作.下面考虑强单调包含问题

$ \begin{equation} 0 \in A(x)+ B(x), \end{equation} $ (3.1)

其中$ A: \mathbb{X}\rightrightarrows \mathbb{X} $是极大$ \mu $ -强单调算子, $ B: \mathbb{X}\rightrightarrows \mathbb{X} $是极大单调算子.假设问题(3.1)有解.

下面给出求解问题(3.1)的惯性混合邻近外梯度算法.

算法1

步骤 0   任取初始点$ {x_0} = {x_{ - 1}} \in \mathbb{X} $, 任意给定$ \alpha \in[0, 1) $, $ \alpha_0 \in[0 , \alpha] $, $ \sigma \in[0, 1) $, $ 0<\underline{\lambda}\le\overline{\lambda} $, $ \lambda_0 \in[\underline{\lambda} , \overline{\lambda}] $, $ k = 1 $;

步骤 1   选定$ \alpha_k \in [{\alpha_{k-1}}, \;\alpha] $, 计算

$ \begin{equation} w_{k-1}: = x_{k-1}+ \alpha_{k-1}(x_{k-1}-x_{k-2}); \end{equation} $ (3.2)

步骤 2   选定$ 0<\lambda _k \leq\lambda _{k-1} $, 并找到$ {y_k} $, $ {v_k} \in \mathbb{X} $, $ {\varepsilon _k} \ge 0 $, 使得

$ \begin{eqnarray} {v_k} \in A\left( {y_k} \right) + B^{[{\varepsilon _k}]}\left( {{y_k}} \right){\mkern 1mu} {\mkern 1mu} , \;\frac{{{\mkern 1mu} {{\left\| {{\lambda _k}{v_k} + {y_k} - {w_{k - 1}}} \right\|}^2}}}{{1{\rm{ + }}2{\lambda _k}\mu }} + 2{\lambda _k}{\varepsilon _k} \le \sigma^2{\left\| {{y_k} - {w_{k - 1}}} \right\|^2}; \end{eqnarray} $ (3.3)

步骤 3   计算$ x_k $

$ \begin{eqnarray} {x_k} = \frac{{\left( {{w_{k - 1}} - {\lambda _k}{v_k}} \right) + 2{\lambda _k}\mu {y_k}}}{{1 + 2{\lambda _k}\mu }}, \end{eqnarray} $ (3.4)

$ k: = k+1 $, 返回步骤1.

注 3.1   (ⅰ)  对单调包含问题(1.1)进行正则化处理

$ \begin{equation} 0 \in B\; (x)+\mu (x-x_0), \end{equation} $ (3.5)

其中$ \mu > 0 $, $ x_0 $是初始点.令$ A(x) = \mu (x-x_0) $, 则算子$ A $是极大$ \mu $ -强单调算子, 此时问题(3.5)是问题(3.1)的特殊情况.由于$ A $是$ \mu $ -强单调算子, 根据Minty定理[16]可知问题(3.5)的解集是单点集, 记$ x_\mu ^ * $是它的解, 即$ x_\mu ^ * \in {({\mu ^{ - 1}}B +Id)^{ - 1}}({x_0}) $.当$ \, \mu\, $接近于$ \, 0\, $时, $ x_\mu^* $近似于问题(1.1)的解(一维数值示意见本注之后).

(ⅱ)  针对问题(3.1)也可以用文献[12]中的惯性混合邻近外梯度算法, 此时惯性系数的范围为$ [0\, , 1) $; 而算法1通过正则化的技巧将惯性系数的范围扩大为$ [0\, , 1+2{\underline{\lambda}}\mu] $, 加快了迭代速度.

(ⅲ)  当$ \alpha_k\equiv0 $时, 算法1退化为文献[9]中的求解问题(3.1)的算法1.

(ⅳ)  当$ \mu = 0 $$ \alpha_k\equiv0 $时, 算法1退化为文献[5]中的求解问题(1.1)的混合邻近外梯度算法.

(ⅴ)  当$ \alpha_k\equiv0 $且$ \sigma = 0 $时, $ x_{k-1} = w_{k-1} $, 由(3.3)式可知$ \varepsilon_k\equiv0 $且$ x_{k-1}-{\lambda_k}{v_k} = {y_k} $, 从而由(3.4)式可知$ x_k = y_k = x_{k-1}-{\lambda_k}{v_k}. $此时, 算法1退化为经典的邻近点算法[3].

(ⅵ)  当$ \sigma = 0 $时, 算法1退化为文献[4]中的惯性邻近点算法.
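下面用一个一维的小例子示意注3.1(ⅰ)中的正则化: 取$ B = \partial|\cdot| $, 则$ x_\mu^* = (\mu^{-1}B+Id)^{-1}(x_0) $即参数为$ \mu^{-1} $的软阈值算子作用在$ x_0 $上; 当$ \mu $减小时, $ x_\mu^* $趋于问题(1.1)的解$ x^* = 0 $(代码与命名仅为示意).

import numpy as np

def soft_threshold(x, tau):
    # (Id + tau*∂|.|)^{-1}(x)
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

x0 = 3.0
for mu in (1.0, 0.5, 0.25):
    x_mu = soft_threshold(x0, 1.0 / mu)   # x_mu^* = (mu^{-1}*B + Id)^{-1}(x0), B = ∂|.|
    print(mu, x_mu)                       # mu 越小, x_mu^* 越接近 0 ∈ B(x) 的解 x* = 0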

引理 3.1   令点列$ \left\{ {{x_k}} \right\} $, $ \left\{ {{y_k}} \right\} $, $ \left\{ {{w_k}} \right\} $, $ \left\{ {{\alpha _k}} \right\} $由算法1生成, 取$ x \in \mathbb{X} $, 则$ \forall k \geq 1 $, 有

$ \begin{align} {\left\| {{w_{k - 1}} - x} \right\|^2}{\rm{ }} = &{\rm{ }}\left( {1 + {\alpha _{k - 1}}} \right){\left\| {{x_{k - 1}} - x} \right\|^2}-{\alpha _{k - 1}}{\left\| {{x_{k - 2}} - x} \right\|^2}\\ &+{\alpha _{k - 1}}\left( {1 + {\alpha _{k - 1}}} \right){\left\| {{x_{k - 1}} - {x_{k - 2}}} \right\|^2}. \end{align} $ (3.6)

  由(3.2)式可得$ {w_{k - 1}} - x = \left( {1 +{\alpha _{k - 1}}} \right)\left( {{x_{k - 1}} - x} \right) - {\alpha _{k - 1}}({x_{k - 2}} - x) $.将$ p = -{\alpha _{k - 1}} $, $ c = {x_{k - 2}} - x $, $ d = {x_{k - 1}}-x $代入(2.3)式, 即得(3.6)式.

命题 3.2   令点列$ \left\{ {{x_k}} \right\} $, $ \left\{ {{y_k}} \right\} $, $ \left\{ {{w_k}} \right\} $由算法1生成, 则$ \forall {x^*} \in {(A + B)^{ - 1}}(0), $

$ \begin{align} \left( {1 + 2{\lambda _k}\mu } \right){\left\| {{x_k} - {x^*}} \right\|^2} + \left( {1 - {\sigma ^2}} \right){\left\| {{y_k} - {w_{k - 1}}} \right\|^2} \le {\left\| {{w_{k - 1}} - {x^*}} \right\|^2}. \end{align} $ (3.7)

  由(3.4)式可知$ {w_{k - 1}}{\rm{ = }}\left( {1 + 2{\lambda _k}\mu } \right){x_k} - 2{\lambda _k}\mu {y_k} + {\lambda _k}{v_k} $, 事实上, $ \forall a, b, c \in \mathbb{X}, $$ 2\langle {a - b, a - c} \rangle = {\left\| {a - b} \right\|^2} + {\left\| {a - c} \right\|^2} - {\left\| {b - c} \right\|^2} $, 从而可知

$ \begin{align} {\left\| {{w_{k - 1}} - x} \right\|^2} = & {\left\| {{w_{k - 1}} - {x_k}} \right\|^2} + {\left\| {{x_k} - x} \right\|^2} + 2\langle {{w_{k - 1}} - {x_k}, \;{x_k} - x} \rangle \\ = & {\left\| {{w_{k - 1}} - {x_k}} \right\|^2} + {\left\| {{x_k} - x} \right\|^2} + 2\langle {2{\lambda _k}\mu \left( {{x_k} - {y_k}} \right) + {\lambda _k}{v_k}, \;{x_k} - x} \rangle \\ = &{\left\| {{w_{k - 1}} - {x_k}} \right\|^2} + \left( {1 + 2{\lambda _k}\mu } \right){\left\| {{x_k} - x} \right\|^2} \\ &+ 2\left[ {{\lambda _k}\mu \left( {{{\left\| {{x_k} - {y_k}} \right\|}^2} - {{\left\| {x - {y_k}} \right\|}^2}} \right) + \langle {{\lambda _k}{v_k}, \;{x_k} - x} \rangle } \right]\\ = & {\left\| {{w_{k - 1}} - {x_k}} \right\|^2} + 2\left( {{\lambda _k}\mu {{\left\| {{x_k} - {y_k}} \right\|}^2} + \langle {{\lambda _k}{v_k}, \;{x_k} - {y_k}} \rangle - {\lambda _k}{\varepsilon _k}} \right)\\ &- 2\left( {{\lambda _k}\mu {{\left\| {x - {y_k}} \right\|}^2} + \langle {{\lambda _k}{v_k}, \;x - {y_k}} \rangle - {\lambda _k}{\varepsilon _k}} \right)+\left( {1 + 2{\lambda _k}\mu } \right){\left\| {{x_k} - x} \right\|^2}. \end{align} $ (3.8)

$ {\rho_k} = \frac{1}{{1 + 2{\lambda_k}\mu }} $, 由(3.4)式可知$ {w_{k-1}}-{x_k} = (1-{\rho_k})\left({w_{k - 1}}-{{y_k}}\right)-{\rho_k}{\lambda _k}{v_k} $, 故由(2.3)式可得

$ \begin{equation} {\left\| {{w_{k - 1}} - {x_k}} \right\|^2} = {\rho _k}{\left\| {{\lambda _k}{v_k}} \right\|^2} + \left( {1 - {\rho _k}} \right){\left\| {{y_k} - {w_{k - 1}}} \right\|^2} - {\rho _k}\left( {1 - {\rho _k}} \right){\left\| {{\lambda _k}{v_k} + {y_k} - {w_{k - 1}}} \right\|^2}. \end{equation} $ (3.9)

由(3.4)式可知

$ \begin{equation} {x_k}-{y_k} = {\rho_k}\left(w_{k-1}-{\lambda _k}{v_k}-{y_k}\right), \end{equation} $ (3.10)

从而由(3.10)式有

$ \begin{eqnarray} 2\langle {{\lambda _k}{v_k}, {x_k} - {y_k}} \rangle & = & 2{\rho _k}\langle {{\lambda _k}{v_k}, {w_{k - 1}} - {\lambda _k}{v_k} - {y_k}} \rangle \\ & = &{\rho _k}\left( {2\langle {{\lambda _k}{v_k}, {w_{k - 1}} - {y_k}} \rangle - 2{{\left\| {{\lambda _k}{v_k}} \right\|}^2}} \right)\\ & = & {\rho _k}\left( {{{\left\| {{\lambda _k}{v_k}} \right\|}^2} + {{\left\| {{w_{k - 1}} - {y_k}} \right\|}^2} - {{\left\| {{\lambda _k}{v_k} + {y_k} - {w_{k - 1}}} \right\|}^2} - 2{{\left\| {{\lambda _k}{v_k}} \right\|}^2}} \right)\\ & = &{\rho _k}\left( {{{\left\| {{w_{k - 1}} - {y_k}} \right\|}^2} - {{\left\| {{\lambda _k}{v_k} + {y_k} - {w_{k - 1}}} \right\|}^2} - {{\left\| {{\lambda _k}{v_k}} \right\|}^2}} \right). \end{eqnarray} $ (3.11)

事实上, $ \left( {1 - {\rho _k}} \right) = 2{\lambda _k}\mu{\rho _k} $, 从而由(3.10)式可得

$ \begin{equation} 2{\lambda _k}\mu {\left\| {{x_k} - {y_k}} \right\|^2} = {\rho _k}\left( {1 - {\rho _k}} \right){\left\| {{\lambda _k}{v_k} + {y_k} - {w_{k - 1}}} \right\|^2}, \end{equation} $ (3.12)

由(3.9), (3.11)和(3.12)式可得

$ \begin{align} &{\left\| {{w_{k - 1}} - {x_k}} \right\|^2} + 2\left( {{\lambda _k}\mu {{\left\| {{x_k} - {y_k}} \right\|}^2} + \langle {{\lambda _k}{v_k}, \;{x_k} - {y_k}} \rangle - {\lambda _k}{\varepsilon _k}} \right)\\ = &{\left\| {{y_k} - {w_{k - 1}}} \right\|^2} - \left( {{\rho _k}{{\left\| {{\lambda _k}{v_k} + {y_k} - {w_{k - 1}}} \right\|}^2} + 2{\lambda _k}{\varepsilon _k}} \right), \end{align} $

并联立(3.3)和(3.8)式可得

$ \begin{align} \left\| {{w_{k - 1}} - x} \right\|^2 \ge& \left( {1 + 2{\lambda _k}\mu } \right){\left\| {{x_k} - x} \right\|^2} + \left( {1 - {\sigma ^2}} \right){\left\| {{y_k} - {w_{k - 1}}} \right\|^2}\\ &- 2{\lambda _k}\left( {\mu {{\left\| {x - {y_k}} \right\|}^2} + \langle {v_k, \;x - {y_k}} \rangle - {\varepsilon _k}} \right). \end{align} $ (3.13)

$ \forall {x^*} \in{(A+ B)^{ - 1}}(0) $, 则存在$ a^* \in A({x^*} ) $, $ - a^ * \in B^{[{\varepsilon _k}]}\left( {x^*} \right) $.由(3.3)式可知:存在$ {a_k} \in A \left( {y_k} \right) $, $ {b_k} \in B^{[\varepsilon _k]}\left( {y_k} \right) $, 使得$ {v_k} = {a_k} + {b_k} $.由(2.1)与(2.2)式可知

$ \begin{eqnarray} \langle {a^ * - {a_k}, {x^*} - {y_k}} \rangle \ge \mu {\left\| {{x^*} - {y_k}} \right\|^2}, \;\;\; \langle {{b_k} + a^ * , {y_k} - {x^*}} \rangle \ge - {\varepsilon _k}\, . \end{eqnarray} $

将两式相加可得$ \langle {{v_k}\, , {y_k}-{x^*}} \rangle \ge \mu {\left\| {{x^*}- {y_k}} \right\|^2} - {\varepsilon _k} $, 即$ \mu {\left\| {x^* - {y_k}} \right\|^2} +\langle {v_k, \;{x^*} - {y_k}} \rangle - {\varepsilon _k}\leq0 $.并将$ x = {x^*} $代入(3.13)式即可得到(3.7)式.

命题 3.3   令点列$ \left\{ {{x_k}} \right\} $, $ \left\{ {{y_k}} \right\} $$ \left\{ {{w_k}} \right\} $由算法$ \, 1\, $生成, 令$ {\rho _k} : = \frac{1}{{1 + 2{\lambda _k}\mu }} $, 且定义$ s_k $

$ \begin{equation} {s_k} : = \max \left\{ {\eta {{\left\| {{x_k} - {w_{k - 1}}} \right\|}^2}\;, \;\left( {1 - {\sigma ^2}} \right){{\left\| {{y_k} - {w_{k - 1}}} \right\|}^2}} \right\}, \end{equation} $ (3.14)

其中$ \overline{\rho} = \frac{1}{1+2\underline{\lambda}\mu} $, $ \eta : = \frac{{1 - {\sigma ^2}}}{{{{\left( {1 + \sigma \sqrt {\overline{\rho}} } \right)}^2}}}\in(0, 1] $.则$ \forall{x^* } \in {(A + B)^{ - 1}}(0), $

$ \begin{equation} {\left\| {{x_k} - {x^* }} \right\|^2} + {\rho _k}{s_k} \le {\rho _k}{\left\| {{w_{k - 1}} - {x^*}} \right\|^2}. \end{equation} $ (3.15)

  由(3.4)式及$ \rho_k $的定义可知$ {x_k} = {\rho _k}\left( {{w_{k - 1}} - {\lambda _k}{v_k}} \right) + \left( {1 - {\rho _k}} \right){y_k} $, 故

$ {x_k} - {w_{k - 1}} = \left( {{y_k} - {w_{k - 1}}} \right) - {\rho _k}\left( {{\lambda _k}{v_k} + {y_k} - {w_{k - 1}}} \right), $

从而由三角不等式可知

$ \begin{equation} \left\| {{x_k} - {w_{k - 1}}} \right\| \le \left\| {{y_k} - {w_{k - 1}}} \right\| + {\rho _k}\left\| {{\lambda _k}{v_k} + {y_k} - {w_{k - 1}}} \right\|. \end{equation} $ (3.16)

由(3.3)式可知

$ \begin{equation} \left\| {{\lambda _k}{v_k} + {y_k} - {w_{k - 1}}} \right\| \le \frac{{\sigma}}{{\sqrt {{\rho _k}} }}\left\| {{y_k} - {w_{k - 1}}} \right\|, \end{equation} $ (3.17)

其中$ \rho_k \in[ \underline{\rho}, \overline{\rho}] $, $ \underline{\rho} = \frac{1}{1+2\overline{\lambda}\mu}, \overline{\rho} = \frac{1}{1+2\underline{\lambda}\mu} $.从而由(3.16)和(3.17)式可知

$ \left\| {{x_k} - {w_{k - 1}}} \right\| \le \left( {1 + {\sigma}\sqrt {{\rho _k}} } \right)\left\| {{y_k} - {w_{k - 1}}} \right\|\le \left( {1 + {\sigma}\sqrt {\overline\rho }} \right)\left\| {{y_k} - {w_{k - 1}}} \right\|, $

从而

$ \left( {1 - {\sigma ^2}} \right){\left\| {{y_k} - {w_{k - 1}}} \right\|^2} \ge \frac{{1 - {\sigma ^2}}}{{{{\left( {1 + \sigma \sqrt {\overline{\rho}} } \right)}^2}}}{\left\| {{x_k} - {w_{k - 1}}} \right\|^2}. $

并由(3.7)和(3.14)式可得到(3.15)式.

命题 3.4   令点列$ \left\{ {{x_k}} \right\} $, $ \left\{ {{y_k}} \right\} $, $ \left\{ {{w_k}} \right\} $由算法$ \, 1\, $生成, $ s_k $由(3.14)式所定义, 并且令

$ \begin{align} {\varphi _k}: = {\left\| {{x_k} - {x^* }} \right\|^2}, \; {\forall k \ge - 1} \;\;{\delta _k}: = {\alpha _{k - 1}}\left( {1 + {\alpha _{k - 1}}} \right){\rho _k}{\left\| {{x_{k - 1}} - {x_{k - 2}}} \right\|^2}, \;{\forall k \ge 1}, \end{align} $ (3.18)

$ {\rho _k}: = \frac{1}{{1 + 2{\lambda _k}\mu}} $, $ \underline{\rho} = \frac{1}{1+2\overline{\lambda}\mu}, \overline{\rho} = \frac{1}{1+2\underline{\lambda}\mu} $, $ {x^* } \in {(A + B)^{ - 1}}(0) $.则有$ {\varphi_0} = {\varphi_{-1}} $, 而且

$ \begin{align} {\varphi _k} - {\varphi _{k - 1}} + {\rho _k}{s_k} \le {\rho _k}{\alpha _{k - 1}}\left( {{\varphi _{k - 1}} - {\varphi _{k - 2}}} \right) + {\delta _k}, \;\;\forall k \ge 1. \end{align} $ (3.19)

  将$ x = x^* $代入(3.6)式并由(3.18)式可得

$ {\rho _k}{\left\| {{w_{k - 1}} - {x^* }} \right\|^2} = \left( {1 + {\alpha _{k - 1}}} \right){\rho _k}{\varphi _{k - 1}} - {\alpha _{k - 1}}{\rho _k}{\varphi _{k - 2}} + {\delta _k}, $

并由(3.15)式可知

$ \left( {{\varphi _k} - {\rho _k}{\varphi _{k - 1}}} \right) + {\rho _k}{s_k} \le {\alpha _{k - 1}}{\rho _k}\left( {{\varphi _{k - 1}} - {\varphi _{k - 2}}} \right) + {\delta _k}, $

注意到, $ 0<\rho _k \leq1 $, 有$ {\varphi _k} - {\varphi _{k - 1}} \le {\varphi _k} - {\rho _k}{\varphi _{k - 1}} $, 故(3.19)式成立.

引理 3.5 [4]  令数列$ \{\varphi_k\} $, $ \{\rho_ks_k\} $, $ \{\rho_k\alpha_k\}\subset [0, +\infty) $使得$ {\varphi_0} = {\varphi_{-1}} $, $ 0 \le {\rho _k} \le \overline{\rho} $, $ 0 \le {\alpha _k} \le \alpha $, $ \alpha \overline{\rho } < 1 $, 其中$ \overline{\rho} = \frac{1}{1+2\underline{\lambda}\mu} $, $ s_k $如(3.14)式所定义, 且满足(3.19)式.则$ \forall k\geq1, $

$ \begin{equation} {\varphi _k} + \sum\limits_{j = 1}^k {{\rho _j}{s_j}} \le {\varphi _0} + \frac{1}{{1 - \alpha \overline{\rho }}}\sum\limits_{j = 1}^k {{\delta _j}}; \end{equation} $ (3.20)

进一步, 如果$ \sum\limits_{k = 1}^\infty {{\delta _k}} < + \infty $, 则$ \mathop {\lim }\limits_{k \to \infty } {\varphi _k} $存在, 即数列$ \{\varphi_k\} $收敛到某一非负实数.

命题 3.6   令点列$ \{x_k\} $, $ \{\lambda_k\} $$ \{\alpha_k\} $由算法1产生, 如果对于任意的$ k\geq1 $, $ {\lambda _k} \ge \underline{\lambda}> 0 $, 满足下面的条件

$ \begin{equation} \sum\limits_{k = 0}^\infty {{\alpha _k}{{\left\| {{x_k} - {x_{k - 1}}} \right\|}^2}} < + \infty, \end{equation} $ (3.21)

则有下面的结论成立

(ⅰ)  点列$ \left\{ {{x_k} - {w_{k-1}}} \right\} $, $ \left\{ {{y_k} - {w_{k-1}}} \right\} $, $ \left\{ {{x_k} - {y_k}} \right\} $, $ \left\{ {{v_k}} \right\} $$ \left\{ {{\varepsilon_k}} \right\} $强收敛到零;

(ⅱ)  点列$ \{x_k\} $, $ \{w_{k-1}\} $$ \{y_k\} $弱收敛到单调包含问题(3.1)的解.

  由于$ {\lambda _k} \ge \underline{\lambda}> 0 $, 则$ {\rho_k}\leq\overline{\rho} = \frac{1}{1+2\underline{\lambda}\mu} $.又由$ \delta_k = {\alpha _{k - 1}}\left( {1 + {\alpha _{k - 1}}} \right){\rho _k}{\left\| {{x_{k - 1}} - {x_{k - 2}}} \right\|^2}\leq 2{\alpha _{k - 1}}{\left\| {{x_{k - 1}} - {x_{k - 2}}} \right\|^2} $及条件(3.21)可知$ \sum\limits_{k = 1}^\infty {{\delta _k}} < + \infty $, 从而由命题3.4和引理3.5可知$ \forall x^* \in (A + B)^{ - 1}(0) $, $ \mathop {\lim }\limits_{k \to \infty } \left\| {{x_k} - {x^* }} \right\| $存在, 故点列$ \{x_k\} $有界, 而且$ \sum\limits_{k = 1}^\infty {{\rho_k}{s_k}} < + \infty $; 再由$ \rho_k\geq\underline{\rho}>0 $可知$ \mathop {\lim }\limits_{k \to \infty } s_k = 0 $.又由(3.3), (3.14)式及$ {\lambda _k} \ge \underline{\lambda}>0 $可知

$ \begin{equation} \mathop {\lim }\limits_{k \to \infty } \left\| {{x_k} - {w_{k - 1}}} \right\| = \mathop {\lim }\limits_{k \to \infty } \left\| {{y_k} - {w_{k - 1}}} \right\| = \mathop {\lim }\limits_{k \to \infty } \left\| {{v_k}} \right\| = \mathop {\lim }\limits_{k \to \infty } {\varepsilon _k} = 0, \end{equation} $ (3.22)

即点列$ \left\{ {{x_k} - {w_{k-1}}} \right\} $, $ \left\{ {{y_k} - {w_{k-1}}} \right\} $, $ \left\{ {{v_k}} \right\} $$ \left\{ {{\varepsilon_k}} \right\} $强收敛到零, 从而$ \left\{ {{x_k} - {y_k}} \right\} $也强收敛到零.

$ {x^\infty } \in \mathbb{X} $是点列$ \{x_k\} $的弱聚点, 并令子列$ \left\{ {{x_{{k_j}}}} \right\} $使得$ {x_{{k_j}}}\rightharpoonup {x^\infty } $.又由(3.2)和(3.22)式可得

$ {v_{{k_j}}} \in A\left( y_{k_j} \right) + B^{[\varepsilon _{k_j}]}\left( {{y_{{k_j}}}} \right), \;\mathop {\lim }\limits_{j \to \infty } \left\|{v_{{k_j}}}\right\| = 0, \;\mathop {\lim }\limits_{j \to \infty } {\varepsilon _{{k_j}}} = 0, \; {\forall j \ge 1} $

由命题2.1(ⅱ)和(ⅲ)可知$ {v_{{k_j}}} \in (A+ B)^{[\varepsilon_{k_j}]}( y_{k_j}) $; 又由$ x_{k_j}\rightharpoonup x^\infty $及$ \left\{ {{x_k} - {y_k}} \right\} $强收敛到零可知$ y_{k_j}\rightharpoonup x^\infty $, 故由命题$ 2.1 $(ⅴ)可知$ x^\infty \in (A+ B)^{ - 1}(0) $, 再根据定理$ 2.2 $可知点列$ \{x_k\} $弱收敛到单调包含问题(3.1)的解.由于$ \left\{ {{x_k} - {w_{k-1}}} \right\} $和$ \left\{ {{x_k} - {y_k}} \right\} $强收敛到零, 因此$ \{w_{k-1}\} $和$ \{y_k\} $也弱收敛到单调包含问题(3.1)的解.

注 3.2   由于惯性邻近点算法是算法1的特殊情况, 因此命题3.6是文献[4]中定理2.1的推广.

命题 3.7   令点列$ \left\{ {{x_k}} \right\} $, $ \left\{ {{y_k}} \right\} $, $ \left\{ {{w_k}} \right\} $由算法$ \, 1\, $生成, 令$ {\rho _k}: = \frac{1}{{1 + 2{\lambda _k}\mu}} $, 且

$ \begin{equation} \beta : = \left\{ {\begin{array}{*{20}{c}} {\frac{(1+2\eta)\overline{\rho}-\sqrt{(1+2\eta)^2\overline{\rho}^2-4\eta(\eta-1)\overline{\rho}\underline{\rho}}}{2(\eta-1)\overline{\rho}}, \quad \sigma\in (0\;, \;1)}\\ {\frac{\underline{\rho}}{3\overline{\rho}}, \quad\quad\quad\quad \quad\quad \quad \quad \, \, \quad\sigma {\rm{ = }}0}, \end{array}} \right. \end{equation} $ (3.23)

其中$ \underline{\rho} = \frac{1}{1+2\overline{\lambda}\mu}, \overline{\rho} = \frac{1}{1+2\underline{\lambda}\mu}. $假设$ {\alpha _k} \le {\alpha _{k + 1}} \le \alpha <\beta $, 定义实函数$ q(\alpha') $

$ \begin{equation} q\left( {\alpha '} \right) : = \left( {\eta - 1} \right)\overline{\rho}{{\alpha '}^2} - \left( {1 + 2\eta } \right)\overline{\rho}\alpha ' +\underline{\rho} \eta, \end{equation} $ (3.24)

则有$ q\left( \alpha \right) > 0 $, 且$ {\forall x^* } \in (A+ B)^{ - 1}(0), $

$ \begin{equation} \sum\limits_{j = 1}^k {{\left\| {{x_j} - {x_{j - 1}}} \right\|}^2} \le \frac{{2{{\left\| {{x_0} - {x^* }} \right\|}^2}}}{{\left( {1 - \alpha\overline{\rho} } \right)q\left( \alpha \right)}}, \end{equation} $ (3.25)

因此点列$ \{x_k\} $弱收敛到单调包含问题(3.1)的解.

  由(3.2)式可知$ x_k-w_{k-1} = (x_k-x_{k-1})-\alpha_{k-1}(x_{k-1}-x_{k-2}) $, 故

$ \begin{align} {\left\| {{x_k} - {w_{k - 1}}} \right\|^2} = & {\left\| {{x_k} - {x_{k - 1}}} \right\|^2} + {\alpha^2 _{k - 1}}{\left\| {{x_{k - 1}} - {x_{k - 2}}} \right\|^2} - 2{\alpha _{k - 1}}\langle {{x_k} - {x_{k - 1}}, \;{x_{k - 1}} -{x_{k - 2}}} \rangle \\ \ge& {\left\| {{x_k} - {x_{k - 1}}} \right\|^2} + {\alpha^2 _{k - 1}}{\left\| {{x_{k - 1}} - {x_{k - 2}}} \right\|^2} - {\alpha _{k - 1}}\left( {2\left\| {{x_k} - {x_{k - 1}}} \right\|\left\| {{x_{k - 1}} - {x_{k - 2}}} \right\|} \right)\\ \ge& \left( {1 - {\alpha _{k - 1}}} \right){\left\| {{x_k} - {x_{k - 1}}} \right\|^2} - {\alpha _{k - 1}}\left( {1 - {\alpha _{k - 1}}} \right){\left\| {{x_{k - 1}} - {x_{k - 2}}} \right\|^2}. \end{align} $ (3.26)

并联立(3.19)式可得

$ \begin{align} & {\varphi _k} - {\varphi _{k - 1}} - {\rho _k}{\alpha _{k - 1}}\left( {{\varphi _{k - 1}} - {\varphi _{k - 2}}} \right) \\ \le& {\rho _k}\left[ {{\alpha _{k - 1}}\left( {1 + {\alpha _{k - 1}}} \right){{\left\| {{x_{k - 1}} - {x_{k - 2}}} \right\|}^2} - \eta {{\left\| {{x_k} - {w_{k - 1}}} \right\|}^2}} \right]\\ \le& {\rho _k}\left[ {\left( {1 - \eta } \right)\alpha _{k - 1}^2 + \left( {1 + \eta } \right){\alpha _{k - 1}}} \right]{\left\| {{x_{k - 1}} - {x_{k - 2}}} \right\|^2} - {\rho _k}\eta \left( {1 - {\alpha _{k - 1}}} \right){\left\| {{x_k} - {x_{k - 1}}} \right\|^2}. \end{align} $ (3.27)

$ {\gamma _k}: = \left( {1 - \eta } \right){\rho _{k+1}}\alpha _k^2 + \left( {1 + \eta } \right){\rho _{k+1}}{\alpha _k} $, $ \forall k \ge 0 $.定义

$ {\xi _0}: = \left( {1 - {\rho _1}{\alpha _0}} \right){\varphi _0}, \;\;{\xi _k}: = {\varphi _k} - {\rho _k}{\alpha _{k - 1}}{\varphi _{k - 1}} + {\gamma _k}{\left\| {{x_k} - {x_{k - 1}}} \right\|^2}, \;\forall k \ge 1, $

又由于$ \{\rho_k\} $$ \{\alpha_k\} $是单增非负数列, 且$ \rho_k \in[ \underline{\rho}, \overline{\rho}] $, 因此

$ \begin{align} {\xi _k} - {\xi _{k - 1}} \le& \left[ {{\varphi _k} - {\varphi _{k - 1}} - {\rho _k}{\alpha _{k - 1}}\left( {{\varphi _{k - 1}} - {\varphi _{k - 2}}} \right) - {\gamma _{k - 1}}{{\left\| {{x_{k - 1}} - {x_{k - 2}}} \right\|}^2}} \right] + {\gamma _k}{\left\| {{x_k} - {x_{k - 1}}} \right\|^2}\\ \le& \left[ {{\gamma _k} - {\rho _k}\eta \left( {1 - {\alpha _k}} \right)} \right]{\left\| {{x_k} - {x_{k - 1}}} \right\|^2}\\ \leq&- \left[ {\left( {\eta - 1} \right)\overline{\rho}\alpha _k^2 - \left( {1 + 2\eta } \right)\overline{\rho}{\alpha _k} + \underline{\rho} \eta } \right]{\left\| {{x_k} - {x_{k - 1}}} \right\|^2}\\ = & - q\left( {{\alpha _k}} \right){\left\| {{x_k} - {x_{k - 1}}} \right\|^2}. \end{align} $ (3.28)

$ \eta\in(0, 1) $时, 即$ \sigma\in(0, 1) $, 令$ q\left( {\alpha '} \right) : = \left( {\eta - 1} \right)\overline{\rho}{{\alpha '}^2} - \left( {1 + 2\eta } \right)\overline{\rho}\alpha ' +\underline{\rho} \eta = 0 $, 解得

$ \alpha'_1 = \frac{(1+2\eta)\overline{\rho}+ \sqrt{(1+2\eta)^2\overline{\rho}^2-4\eta(\eta-1)\overline{\rho}\underline{\rho}}}{2(\eta-1)\overline{\rho}}<0, $
$ \alpha'_2 = \frac{(1+2\eta)\overline{\rho}- \sqrt{(1+2\eta)^2\overline{\rho}^2-4\eta(\eta-1)\overline{\rho}\underline{\rho}}}{2(\eta-1)\overline{\rho}}\in(0, 1). $

另一方面, 当$ \eta = 1 $时, 即$ \sigma = 0 $, 令$ q\left( {\alpha '} \right) : = - 3 \overline{\rho} \alpha ' +\underline{\rho} = 0 $, 解得$ \alpha ' = \frac{\underline{\rho}}{3 \overline{\rho}} $.

在上述两种情况下, 当$ {\alpha _k} \le {\alpha _{k + 1}} \le \alpha < \beta $时都有$ q\left( {{\alpha _k}} \right) \ge q\left( \alpha \right) > q\left( \beta \right) = 0 $, 从而由(3.28)式可知

$ \begin{equation} \quad {\left\| {{x_k} - {x_{k - 1}}} \right\|^2} \le \frac{1}{{q\left( \alpha \right)}}\left( {{\xi _{k - 1}} - {\xi _k}} \right), \quad \forall k \ge 1, \end{equation} $ (3.29)

$ \xi_k $的定义可得$ \xi_k\geq-\alpha\overline{\rho}\xi_k $, 从而由(3.29)式有

$ \begin{equation} \sum\limits_{j = 1}^k {{{\left\| {{x_j} - {x_{j - 1}}} \right\|}^2}} \le \frac{1}{{q\left( \alpha \right)}}\left( {{\xi _0} - {\xi _k}} \right) \le \frac{1}{{q\left( \alpha \right)}}\left( {{\xi _0} + \alpha \overline{\rho} {\varphi _{k - 1}}} \right), \quad \forall k \ge 1. \end{equation} $ (3.30)

又由于$ {\xi _0} \ge \ldots \ge {\xi _k} = {\varphi _k} - {\rho _k}{\alpha _{k - 1}}{\varphi _{k - 1}} + {\gamma _k}{\left\| {{x_k} - {x_{k - 1}}} \right\|^2} \ge {\varphi _k} - \alpha \overline{\rho}{\varphi _{k - 1}} $, 所以

$ \begin{equation} {\varphi _k} \le {\left( {\alpha \overline{\rho }} \right)^k}{\varphi _0} + \frac{{{\xi _0}}}{{1 - \alpha\overline{ \rho }}} \le {\varphi _0} + \frac{{{\xi _0}}}{{1 - \alpha\overline{ \rho }}}, \quad\forall k \ge 0. \end{equation} $ (3.31)

联立(3.30)和(3.31)式可得(3.25)式.

又由于$ \alpha_k\in[0, 1) $, 从而由(3.25)式可知(3.21)式成立, 由命题3.6可知点列$ \{x_k\} $弱收敛到单调包含问题(3.1)的解.

注 3.3  (ⅰ)  当$ \sigma = 0 $时, 算法1退化为惯性邻近点算法, 条件(3.23)式退化为$ {\alpha _k} \le {\alpha _{k + 1}} \le \alpha <\frac{\underline{\rho}}{3 \overline{\rho}} $; 进一步, 当$ \mu = 0 $时, 有$ \underline{\rho} = \overline{\rho} = 1 $, 则条件(3.23)式退化为$ {\alpha _k} \le {\alpha _{k + 1}} \le \alpha <\frac{1}{3} $, 因此命题3.7对文献[4]中的命题2.1作了推广.

(ⅱ)  如果$ \sigma = 1 $, 则对于任意的$ \alpha '\in[0, 1) $, 二次函数$ q(\alpha') = - \overline{\rho} {\alpha '^2} - \overline{\rho}\alpha '\leq0 $, 并不能满足算法收敛的条件, 从而说明了容差参数$ \sigma $的最大取值范围为$ [0, 1) $.

推论 3.8   令点列$ \left\{ {{x_k}} \right\} $, $ \left\{ {{y_k}} \right\} $, $ \left\{ {{w_k}} \right\} $由算法$ \, 1\, $生成.令$ {\rho _k}: = \frac{1}{{1 + 2{\lambda _k}\mu}} $, $ \underline{\rho} = \frac{1}{1+2\overline{\lambda}\mu}, \overline{\rho} = \frac{1}{1+2\underline{\lambda}\mu}. $ $ \beta $如(3.23)式所定义, $ q(\alpha') $如(3.24)式所定义, 且$ x^* \in {(A + B)^{ - 1}}(0) $, 假设$ {\alpha _k} \le {\alpha _{k + 1}} \le \alpha <\beta $.则$ \forall k \ge 1, $

$ \begin{align} &&{\left\| {{x_k} - x ^*} \right\|^2}+{\sum\limits_{j = 1}^k {\frac{ {1 - {\sigma ^2}}}{{1{\rm{ + }}2{\lambda _j}\mu }}\left\| {{y_j} - {w_{j - 1}}} \right\|^2} } \le \left( {1 + \frac{{2\alpha \bar \rho \left( {1 + \alpha } \right)}}{{{{\left( {1 - \alpha \bar \rho } \right)}^2}q\left( \alpha \right)}}} \right){\left\| {{x_0} - x^*} \right\|^2}, \end{align} $ (3.32)
$ \begin{align} &&{\left\| {{x_k} - x ^*} \right\|^2} + \sum\limits_{j = 1}^k {\frac{{\rho _j^2\left( {1 - {\sigma ^2}} \right)}}{{\left( {\sqrt {{\rho _j}} + \sigma} \right)^2}}{{\left\| {\lambda_j} {v_j}\right\|}^2}} \le \left( {1 + \frac{{2\alpha \bar \rho \left( {1 + \alpha} \right)}}{{{{\left( {1 - \alpha \bar \rho } \right)}^2}q\left( \alpha \right)}}} \right){\left\| {{x_0} - x^*} \right\|^2}. \end{align} $ (3.33)

  由(3.20)和(3.25)式可得

$ {\varphi _k} + \sum\limits_{j = 1}^k {{\rho _j}{s_j}} \le {\varphi _0} + \frac{{\alpha \bar \rho \left( {1 + \alpha } \right)}}{{1 - \alpha \overline{\rho }}}\sum\limits_{j = 1}^k {{\left\| {{x_j} - {x_{j - 1}}} \right\|}^2}\leq\left( {1 + \frac{{2\alpha \bar \rho \left( {1 + \alpha } \right)}}{{{{\left( {1 - \alpha \bar \rho } \right)}^2}q\left( \alpha \right)}}} \right){\left\| {{x_0} - x ^*} \right\|^2}, $

从而由$ s_k $的定义可知, (3.32)式成立.

由三角不等式及(3.3)式可知

$ \left\| {{\lambda _j}{v_j}} \right\| \le \left\| {{\lambda _j}{v_j} + {y_j} - {w_{j - 1}}} \right\| + \left\| {{y_j} - {w_{j - 1}}} \right\| \le \left( {1 + \frac{\sigma }{{\sqrt {{\rho _j}} }}} \right)\left\| {{y_j} - {w_{j - 1}}} \right\|, $

由(3.32)式可知(3.33)式成立.

接下来, 本文给出算法1的非渐近性全局收敛率.

定理 3.9   令点列$ \left\{ {{x_k}} \right\} $, $ \left\{ {{y_k}} \right\} $, $ \left\{ {{w_k}} \right\} $由算法$ \, 1\, $生成, 令$ {\rho _k}: = \frac{1}{{1 + 2{\lambda _k}\mu}} $, $ \underline{\rho} = \frac{1}{1+2\overline{\lambda}\mu}, \overline{\rho} = \frac{1}{1+2\underline{\lambda}\mu} $, $ \beta $如(3.23)式所定义, $ q(\alpha') $如(3.24)式所定义, $ x^* \in {(A + B)^{ - 1}}(0) $, 且$ {d_0} = \left\| {{x_0} - {x^*}} \right\| $, 假设$ {\alpha _k} \le {\alpha _{k + 1}} \le \alpha <\beta $.则$ \forall k \ge 1, $存在$ i \in \{ 1, \ldots , k\} $使得$ {v_i} \in A\left( {{y_i}} \right) + B^{{[\varepsilon _i]}}\left( {{y_i}} \right) $, 而且

$ \begin{align} &\left\| {{v_i}} \right\| \le \frac{{( {\sigma + \sqrt {\overline{\rho }} }){d_0}}}{\underline{\rho }{\underline{\lambda} \sqrt {k(1 - \sigma ^2) }}}\sqrt { {1 + \frac{{2\alpha\overline{ \rho} \left( {1 + \alpha } \right)}}{{{{\left( {1 - \alpha\overline{ \rho }} \right)}^2}q\left( \alpha \right)}}}}, \end{align} $ (3.34)
$ \begin{align} &{\varepsilon _i} \le \frac{{\sigma^2}{d_0^2}}{{2\underline{\rho}(1 - {\sigma ^2})\underline{\lambda}k}}\left( {1 + \frac{{2\alpha\overline{ \rho} \left( {1 + \alpha } \right)}}{{{{\left( {1 - \alpha\overline{ \rho }} \right)}^2}q\left( \alpha \right)}}} \right). \end{align} $ (3.35)

  令$ x^* \in {(A + B)^{ - 1}}(0) $使得$ {d_0} = \left\| {{x_0} - {x^* }} \right\| $.由(3.32)和(3.33)式可知, 对$ \forall k \ge 1 $存在$ i \in \{ 1, \ldots , k\} $使得

$ \begin{align} \frac{{\rho _i^2\left( {1 - {\sigma ^2}} \right)}}{{\left( {\sqrt {{\rho _i}} + \sigma } \right)^2}}{\left\| {\lambda _i}{v_i} \right\|^2} \le \left( {1 + \frac{{2\alpha \bar \rho \left( {1 + \alpha } \right)}}{{{{\left( {1 - \alpha \bar \rho } \right)}^2}q\left( \alpha \right)}}} \right)\frac{d^2_0}{k}, \\ \rho_i(1-\sigma^2){\left\| {{y_i} - {w_{i - 1}}} \right\|^2} \le \left( {1 + \frac{{2\alpha \bar \rho \left( {1 + \alpha } \right)}}{{{{\left( {1 - \alpha \bar \rho } \right)}^2}q\left( \alpha \right)}}} \right)\frac{d^2_0}{k}. \end{align} $

又由于$ \lambda_i\geq\underline{\lambda}>0 $, 因此

$ \left\| {{v_i}} \right\| \le \frac{{( {\sigma + \sqrt {\overline{\rho }} }){d_0}}}{\underline{\rho }{\underline{\lambda} \sqrt {k(1 - \sigma ^2) }}}\sqrt { {1 + \frac{{2\alpha\overline{ \rho} \left( {1 + \alpha } \right)}}{{{{\left( {1 - \alpha\overline{ \rho }} \right)}^2}q\left( \alpha \right)}}}}, $
$ {\left\| {{y_i} - {w_{i - 1}}} \right\|^2} \le \frac{d^2_0}{\underline{\rho}(1 - \sigma ^2)k} \left( {1 + \frac{{2\alpha \overline{\rho} \left( {1 + \alpha } \right)}}{{{{\left( {1 - \alpha \bar \rho } \right)}^2}q\left( \alpha \right)}}} \right). $

由(3.3)式可知$ {\varepsilon _i} \le \frac{\sigma^2}{{2{\lambda _i}}}{\left\| {{y_i} - {w_{i - 1}}} \right\|^2} $, 故

$ {\varepsilon _i} \le \frac{\sigma ^2d^2_0}{2\underline{\rho}(1 - \sigma ^2)\underline{\lambda} k} \left( {1 + \frac{{2\alpha \bar \rho \left( {1 + \alpha } \right)}}{{{{\left( {1 - \alpha \bar \rho } \right)}^2}q\left( \alpha \right)}}} \right). $

注 3.4  定理$ 3.9 $说明了算法1的全局逐点收敛率为$ \mathcal{O}(\frac{1}{\sqrt{k}}) $.

4 求解单调包含问题的惯性Tseng's向前向后算法

在这部分, 本文考虑单调包含问题

$ \begin{equation} 0 \in F(x)+ T(x), \end{equation} $ (4.1)

其中$ F: \mathbb{X}\rightarrow \mathbb{X} $是(单值)强单调且$ L $-Lipschitz连续的算子, 即存在$ \mu\geq0 $和$ L>0 $使得

$ \begin{align} &\langle {F(x) -F(x'), x - x'} \rangle \ge \mu {\left\| {x - x'} \right\|^2}, \end{align} $ (4.2)
$ \begin{align} &\left\| {F(x) - F(x')} \right\| \le L\left\| {x - x'} \right\|, \quad \forall x, x' \in \mathbb{X}. \end{align} $ (4.3)

$ T:\mathbb{X}\rightrightarrows \mathbb{X} $是极大单调算子.假设问题有解, 即$ {(F + T)^{ - 1}}(0) $非空.

下面本文给出求解问题(4.1)的惯性Tseng's向前向后算法.

算法2

步骤 0   任取初始点$ {x_0} = {x_{ - 1}} \in \mathbb{X} $, 任意给定$ \alpha \in[0, 1) $, $ \alpha_0 \in[0 , \alpha] $, $ \sigma \in(0, 1) $, $ k = 1 $, 令$ \overline{\lambda}: = \frac{\sigma }{{{L^2}}}\left( {\sigma \mu + \sqrt {{\sigma ^2}{\mu ^2} + {L^2}} } \right) $, 任意给定$ \lambda_0 \in (0 \;, \, \overline{\lambda}] $;

步骤 1   选定$ \alpha_k \in [{\alpha_{k-1}}, \;\alpha] $, 定义

$ \begin{equation} w_{k-1}: = x_{k-1}+ \alpha_{k-1}(x_{k-1}-x_{k-2}); \end{equation} $ (4.4)

步骤 2   选定$ 0<\lambda _k \leq\lambda _{k-1} $, 计算$ y_k $

$ \begin{eqnarray} y_k = \left( \lambda _k T+ Id \right)^{- 1}\left( w_{k - 1}- \lambda _k F\left( w_{k - 1} \right) \right); \end{eqnarray} $ (4.5)

步骤 3   计算$ x_k $

$ \begin{eqnarray} {x_k} = {y_k} - \frac{{{\lambda _k}}}{{1 + 2{\lambda _k}\mu }}\left( { F\left( {{y_k}} \right) - F\left( {{w_{k - 1}}} \right)} \right). \end{eqnarray} $ (4.6)

$ k: = k+1 $, 返回步骤1.

注 4.1  (ⅰ)  当$ \alpha_k\equiv0 $, $ \lambda_k\equiv\overline{\lambda} $时, 算法2退化为文献[12]中的算法2.

(ⅱ)  算法2是算法1在选定$ \varepsilon _k\equiv 0 $时的特殊情况.事实上, 在算法2中定义: $ A: = F $, $ B: = T $, 且

$ \begin{equation} {v_k}: = F\left( {{y_k}} \right) -F\left( {{w_{k - 1}}} \right) + \frac{1}{{{\lambda _k}}}({w_{k - 1}} - {y_k} ), \quad \forall k \ge 1. \end{equation} $ (4.7)

$ \overline{\lambda} $的定义可知$ \sigma^2 = \frac{\overline{\lambda}^2L^2}{1+2\overline{\lambda}\mu} $.由(4.5)式可得$ q_k: = \frac{1}{{{\lambda _k}}}({w_{k - 1}} - {y_k} )- F\left( {{w_{k - 1}}} \right) \in T(y_k), $又由(4.7)式可得$ v_k = F(y_k)+q_k \in F(y_k)+T(y_k) = A\left( {y_k} \right) + B^{[{\varepsilon _k}]}\left( {{y_k}} \right). $由(4.5)式可知

$ \begin{equation} {\lambda_k}{v_k}+y_k-w_{k-1} = {\lambda_k}( F\left( {{y_k}} \right) - F\left( {{w_{k - 1}}} \right)). \end{equation} $ (4.8)

从而可知

$ \begin{align} \frac{{{\mkern 1mu} {{\left\| {{\lambda _k}{v_k} + {y_k} - {w_{k - 1}}} \right\|}^2}}}{{1{\rm{ + }}2{\lambda _k}\mu }} = \frac{{{\mkern 1mu} {{\left\| {{\lambda_k}(F\left( {{y_k}} \right) -F\left( {{w_{k - 1}}} \right))} \right\|}^2}}}{{1{\rm{ + }}2{\lambda _k}\mu }}\leq\frac{{\lambda _k^2{L^2}}}{{1{\rm{ + }}2{\lambda _k}\mu }}{\left\| {{y_k} - {w_{k - 1}}} \right\|^2} \end{align} $ (4.9)

注意到$ f(\lambda) = \frac{{{\lambda ^2}{L^2}}}{{1{\rm{ + }}2\lambda \mu }} $是单增函数, 因此$ f(\lambda_k)\leq f(\overline{\lambda}) $, 故$ \frac{{{\mkern 1mu} {{\left\| {{\lambda _k}{v_k} + {y_k} - {w_{k - 1}}} \right\|}^2}}}{{1{\rm{ + }}2{\lambda _k}\mu }}\leq \sigma^2{\left\| {{y_k} - {w_{k - 1}}} \right\|^2} $, 即算法1的步骤2满足.

将(4.8)式代入(4.6)式可得

$ {x_k} = {y_k} - \frac{{\lambda_k}{v_k}+y_k-w_{k-1}}{{1 + 2{\lambda _k}\mu }} = \frac{{\left( {{w_{k - 1}} - {\lambda _k}{v_k}} \right) + 2{\lambda _k}\mu {y_k}}}{{1 + 2{\lambda _k}\mu }}, $

即算法1的步骤3满足.综上可知, 算法2是算法1的特殊情形.

由于算法2是算法1的特殊情况, 因此由命题3.6, 3.7和定理3.9可得到下面的结论.

命题 4.1   令点列$ \{x_k\} $由算法2产生, 令$ {\rho _k}: = \frac{1}{{1 + 2{\lambda _k}\mu}} $, $ \underline{\rho} = \frac{1}{1+2\overline{\lambda}\mu}, \overline{\rho} = \frac{1}{1+2\underline{\lambda}\mu} $, $ \{v_k\} $如(4.7)式所定义, $ \varepsilon_k\equiv0 $, $ {d_0} = \left\| {{x_0} - {x^*}} \right\| $, 其中$ x^*\in{(F + T)^{ - 1}}(0) $, $ \beta $如(3.23)式所定义, $ q(\alpha') $如(3.24)式所定义, 假设$ {\alpha _k} \le {\alpha _{k + 1}} \le \alpha <\beta $.则有下面的结论成立

(ⅰ)  点列$ \{x_k\} $弱收敛到单调包含问题(4.1)的解.

(ⅱ)  对任意的$ k \ge 1 $, 存在$ i \in \{ 1, \ldots , k\} $使得$ {v_i} \in F\left( {{y_i}} \right) + T^{[\varepsilon _i]}\left( {{y_i}} \right) $, 而且

$ \left\| {{v_i}} \right\| \le \frac{{( {\sigma + \sqrt {\overline{\rho }} }){d_0}}}{\underline{\rho }{\underline{\lambda} \sqrt {k(1 - \sigma ^2) }}}\sqrt { {1 + \frac{{2\alpha\overline{ \rho} \left( {1 + \alpha } \right)}}{{{{\left( {1 - \alpha\overline{ \rho }} \right)}^2}q\left( \alpha \right)}}}}. $

注 4.2  (ⅰ)  命题$ 4.1 $说明了算法2的全局逐点收敛率为$ \mathcal{O}(\frac{1}{\sqrt{k}}) $.

(ⅱ)  当$ \lambda_k\equiv\overline{\lambda} $时, $ \frac{d_0}{\underline{\lambda}} = \frac{{{d_0}{L^2}}}{{\sigma \left( {\sigma \mu + \sqrt {{\sigma ^2}{\mu ^2} + {L^2}} } \right)}} $.在这种情况下, 给定容差$ \epsilon>0 $, 在算法2中找到有序对$ (x, \, v) $使得$ v\in(F+T)(x) $, $ \left\| v \right\|\leq\epsilon $至多需要$ \mathcal{O}(\lceil\frac{d^2_0 L^2}{\epsilon^2}\rceil) $次迭代.
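注4.2(ⅱ)中迭代次数的估计可作如下简要推导(将$ \sigma $, $ \mu $, $ \alpha $及$ \rho = \overline{\rho} = \underline{\rho} $等与$ k $, $ d_0 $, $ \epsilon $无关的量合并记为常数$ C $): 由命题4.1(ⅱ)有

$ \left\| {{v_i}} \right\| \le \frac{C\, d_0}{\underline{\lambda}\, \sqrt{k}}, $

又由$ \overline{\lambda} $的定义及$ \sqrt {{\sigma ^2}{\mu ^2} + {L^2}}\geq L $可知$ \underline{\lambda} = \overline{\lambda}\geq\frac{\sigma}{L} $, 故$ \left\| {{v_i}} \right\| \le \frac{C L d_0}{\sigma\sqrt{k}} $; 于是当$ k\geq\lceil\frac{C^2L^2d_0^2}{\sigma^2\epsilon^2}\rceil $时必有$ \left\| {{v_i}} \right\|\leq\epsilon $, 即至多需要$ \mathcal{O}(\lceil\frac{d^2_0 L^2}{\epsilon^2}\rceil) $次迭代.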

5 求解原始—对偶问题的惯性非精确Spingarn's部分逆算法

接下来, 本文考虑原始—对偶问题[17]:找到$ z $, $ u\in \mathbb{X} $使得

$ \begin{equation} z\in V, \quad u \in V^\bot, \quad u \in T(z), \end{equation} $ (5.1)

其中$ T:\mathbb{X}\rightrightarrows \mathbb{X} $是实Hilbert空间$ \mathbb{X} $中的极大单调算子, $ V $是闭子空间, $ V^\bot $是$ V $的正交补空间.特别地, 当$ V = \mathbb{X} $时, 问题(5.1)退化为单调包含问题(1.1).假设$ T $是$ \kappa $ -强单调和$ L $-Lipschitz连续算子.

定义 5.1 [18]  称$ T_V:\mathbb{X} \rightrightarrows \mathbb{X} $是极大单调算子$ T:\mathbb{X} \rightrightarrows \mathbb{X} $关于$ \mathbb{X} $中的闭子空间$ V $的Spingarn's部分逆算子, 如果$ T_V $满足

$ \begin{equation} G_r(T_V): = \{(x, v) \in \mathbb{X}\times \mathbb{X}|P_V(v)+P_{V^\bot}(x)\in T(P_V(x)+P_{V^\bot}(v))\}, \end{equation} $ (5.2)

其中$ P_V $$ P_{V^\bot} $分别是$ V $$ V^\bot $上的正交投影.

由定义可知, 原始—对偶问题(5.1)等价于单调包含问题$ 0 \in T_V(x) $, 而且若$ x^* $是单调包含问题$ 0 \in T_V(x^*) $的解, 则$ z^* = P_V(x^*) $, $ u^* = P_{V^\bot}(x^*) $是原始—对偶问题(5.1)的解.
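下面用一小段代码示意(5.2)式中图的刻画: 取$ V $为$ \mathbb R^n $中由前$ m $个坐标张成的子空间(正交投影即坐标掩码), $ T $为单值线性算子, 验证一对$ (x, v) $是否属于$ G_r(T_V) $.代码与命名均为示意.

import numpy as np

def make_projections(n, m):
    # V = span{e_1,...,e_m}: P_V 保留前 m 个坐标, P_{V^perp} 保留其余坐标
    mask = np.zeros(n)
    mask[:m] = 1.0
    return (lambda x: mask * x), (lambda x: (1 - mask) * x)

def in_graph_of_partial_inverse(T, P_V, P_Vp, x, v, tol=1e-10):
    # T 单值时, (x, v) ∈ Gr(T_V)  <=>  P_V(v) + P_{V^perp}(x) = T(P_V(x) + P_{V^perp}(v))
    return np.allclose(P_V(v) + P_Vp(x), T(P_V(x) + P_Vp(v)), atol=tol)

n, m = 4, 2
P_V, P_Vp = make_projections(n, m)
Q = np.diag([1.0, 2.0, 3.0, 4.0])
T = lambda z: Q @ z                       # 单值极大单调算子 T(z) = Qz

# 由任一 z 构造 Gr(T_V) 中的点: x = P_V(z) + P_{V^perp}(T(z)), v = P_V(T(z)) + P_{V^perp}(z)
z = np.array([1.0, -1.0, 0.5, 2.0])
x = P_V(z) + P_Vp(T(z))
v = P_V(T(z)) + P_Vp(z)
print(in_graph_of_partial_inverse(T, P_V, P_Vp, x, v))   # True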

引理 5.2 [19]  令$ T:\mathbb{X}\rightrightarrows \mathbb{X} $$ \mathbb{X} $中的极大单调算子, $ V $$ \mathbb{X} $中的闭子空间, 且$ \varepsilon>0 $, 则$ (T_V)^{[\varepsilon]} = (T^{[\varepsilon]})_V $.

引理 5.3 [20]  如果极大单调算子$ T $是$ \kappa $ -强单调和$ L $-Lipschitz连续算子, 则它的关于$ V $的部分逆$ T_V $是$ \mu $ -强单调的, 其中$ \mu = \frac{\kappa}{1+L^2} $.

下面给出求解原始—对偶问题(5.1)的惯性非精确Spingarn's部分逆算法如下

算法3

步骤 0   任取初始点$ {x_0} = {x_{ - 1}} \in \mathbb{X} $, 给定$ \alpha \in[0, 1) $, $ \alpha_0 \in[0 , \alpha] $, $ \sigma \in[0, 1) $, $ k = 1 $;

步骤 1   选定$ \alpha_k \in [{\alpha_{k-1}}, \;\alpha] $, 计算

$ \begin{equation} w_{k-1}: = x_{k-1}+ \alpha_{k-1}(x_{k-1}-x_{k-2}); \end{equation} $ (5.3)

步骤 2   找到$ {z_k} $, $ {u_k} \in \mathbb{X} $, $ {\varepsilon _k} \ge 0 $, 使得

$ \begin{eqnarray} {u_k} \in T^{[{\varepsilon _k}]}\left( {{z_k}} \right){\mkern 1mu} {\mkern 1mu} , \;\frac{{{\mkern 1mu} {{\left\| {{u_k} + {z_k} - {w_{k - 1}}} \right\|}^2}}}{{1{\rm{ + }}2\mu }} + 2{\varepsilon _k} \le \sigma^2{\left\| {P_V(z_k)+P_{V^\bot}(u_k) - {w_{k - 1}}} \right\|^2}; \end{eqnarray} $ (5.4)

步骤 3   计算$ x_k $

$ \begin{eqnarray} {x_k} = \frac{w_{k - 1} - u_k - z_k }{1 + 2\mu}+P_V(z_k)+P_{V^\bot}(u_k), \end{eqnarray} $ (5.5)

$ k: = k+1 $, 返回步骤1.

注 5.1  (ⅰ)  当$ \alpha_k\equiv0 $$ \mu = 0 $时, 算法3退化为文献[21]中的算法2.

(ⅱ)  算法3是算法1在选定$ \lambda _k\equiv 1 $时的特殊情况.事实上, 在算法3中定义$ A: = T_V $, $ B: = 0 $, 且

$ \begin{equation} {v_k}: = P_V(u_k)+P_{V^\bot}(z_k), \quad {y_k}: = P_V(z_k)+P_{V^\bot}(u_k), \quad \forall k \ge 1. \end{equation} $ (5.6)

由引理5.2, (5.2)及(5.4)式可知

$ P_V(u_k)+P_{V^\bot}(z_k) \in (T_V)^{[\varepsilon_k]}(P_V(z_k)+P_{V^\bot}(u_k)), $

从而由(5.6)式可知$ {v_k}\in (T_V)^{[\varepsilon_k]}(y_k) = A^{[\varepsilon_k]}(y_k)+B(y_k) $, 且

$ \begin{equation} v_k+y_k = P_V(u_k)+P_{V^\bot}(z_k)+P_V(z_k)+P_{V^\bot}(u_k) = u_k+z_k. \end{equation} $ (5.7)

$ \lambda _k\equiv 1 $, (5.6), (5.7)及(5.4)式可知

$ \begin{eqnarray} \frac{{{\mkern 1mu} {{\left\| {{\lambda _k}{v_k} + {y_k} - {w_{k - 1}}} \right\|}^2}}}{{1{\rm{ + }}2{\lambda _k}\mu }} + 2{\lambda _k}{\varepsilon _k}& = &\frac{{{\mkern 1mu} {{\left\| {{u_k} + {z_k} - {w_{k - 1}}} \right\|}^2}}}{{1{\rm{ + }}2\mu }} + 2{\varepsilon _k} \\ &\le &\sigma^2{\left\| {P_V(z_k)+P_{V^\bot}(u_k) - {w_{k - 1}}} \right\|^2} = \sigma^2{\left\| {{y_k} - {w_{k - 1}}} \right\|^2}, \end{eqnarray} $

即算法1的步骤2满足.由$ \lambda _k\equiv 1 $, (5.6), (5.7)及(5.5)式可知

$ \begin{eqnarray} {x_k} & = & \frac{w_{k - 1} - u_k - z_k }{1 + 2\mu}+P_V(z_k)+P_{V^\bot}(u_k) = \frac{w_{k - 1} - v_k - y_k }{1 + 2\mu}+{y_k}\\ & = &\frac{w_{k - 1} - v_k + 2\mu {y_k }}{1 + 2\mu} = \frac{{\left( {{w_{k - 1}} - {\lambda _k}{v_k}} \right) + 2{\lambda _k}\mu {y_k}}}{{1 + 2{\lambda _k}\mu }}, \end{eqnarray} $

即算法1的步骤3满足.故算法3是算法1的特殊情形.

由于算法3是算法1的特殊情况, 因此由命题3.6, 3.7和定理3.9可得到下面的结论.

命题 5.1   令点列$ \{x_k\} $由算法3产生, 且$ \lambda_k\equiv1 $, $ \{v_k\} $和$ \{y_k\} $如(5.6)式所定义, 令$ \rho: = \frac{1+L^2}{1+L^2+2\kappa} $, $ \eta : = \frac{{1 - {\sigma ^2}}}{{{{\left( {1 + \sigma \sqrt {\rho}} \right)}^2}}}\in(0, 1] $, $ {d_0} = \left\| {{x_0} - {x^*}} \right\| $, 其中$ x^* $是单调包含问题$ 0 \in T_V(x^*) $的解.令

$ \begin{equation} \beta : = \left\{ {\begin{array}{*{20}{c}} {\frac{1+2\eta-\sqrt{8\eta+1}}{2(\eta-1)}, \quad \sigma\in (0\;, \;1), }\\ {\frac{1}{3}, \quad\quad\quad \sigma {\rm{ = }}0}, \end{array}} \right. \end{equation} $ (5.8)

定义实函数$ q(\alpha') $

$ \begin{equation} q\left( {\alpha '} \right) : = \left( {\eta - 1} \right)\rho {\alpha '}^2 - \left( 1 + 2\eta \right)\rho\alpha ' +\rho \eta. \end{equation} $ (5.9)

假设$ {\alpha _k} \le {\alpha _{k + 1}} \le \alpha <\beta $, 则有下面的结论成立

(ⅰ)  点列$ \{x_k\} $弱收敛到单调包含问题$ 0 \in T_V(x) $的解.

(ⅱ)  对任意的$ k \ge 1 $, 存在$ i \in \{ 1, \ldots , k\} $使得$ {v_i}\in (T_V)^{[\varepsilon_i]}(y_i) $, 而且

$ \left\| {{v_i}} \right\| \le \frac{{( {\sigma + \sqrt {\rho } }){d_0}}}{\rho { \sqrt {k(1 - \sigma ^2) }}}\sqrt { {1 + \frac{{2\alpha\rho \left( {1 + \alpha } \right)}}{{{{\left( {1 - \alpha\rho } \right)}^2}q\left( \alpha \right)}}}}, $
$ {\varepsilon _i} \le \frac{{\sigma^2}{d_0^2}}{{2\rho(1 - {\sigma ^2})k}}\left( {1 + \frac{{2\alpha\rho \left( {1 + \alpha } \right)}}{{{{\left( {1 - \alpha\rho} \right)}^2}q\left( \alpha \right)}}} \right). $

注 5.2   有序对$ \left(P_V({x_k}), P_{V^\bot}({x_k})\right) $弱收敛到原始—对偶问题(5.1)的解.

6 总结

本文提出了一种新的求解单调包含问题的惯性混合邻近外梯度算法, 并得到了该算法在实希尔伯特空间中的弱收敛性和非渐近全局收敛率.惯性混合邻近外梯度算法包含一些现有的算法, 例如Tseng's向前向后算法和Spingarn's部分逆算法等.利用该算法的框架可以设计求解优化问题与变分不等式问题的新的算法.

参考文献
[1] Boţ R I, Csetnek E R. A hybrid proximal-extragradient algorithm with inertial effects[J]. Numerical Functional Analysis and Optimization, 2015, 36(8): 951–963.
[2] Martinet B. Régularisation d'inéquations variationnelles par approximations successives[J]. Rev. Française Inform. Rech. Opérationnelle 4(Ser. R-3), 1970, 154-158.
[3] Rockafellar R T. Monotone operators and the proximal point algorithm[J]. SIAM J. Control Optim., 1976, 14(5): 877–898. DOI:10.1137/0314056
[4] Alvarez F, Attouch H. An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping[J]. Set-Valued Anal., 2001, 9(1-2): 3–11.
[5] Solodov M V, Svaiter B F. A hybrid approximate extragradient-proximal point algorithm using the enlargement of a maximal monotone operator[J]. Set-Valued Anal., 1999, 7(4): 323–345. DOI:10.1023/A:1008777829180
[6] Burachik R S, Iusem A, Svaiter B F. Enlargement of monotone operators with applications to variational inequalities[J]. Set-Valued Anal., 1997, 5(2): 159–180. DOI:10.1023/A:1008615624787
[7] Monteiro R D C, Svaiter B F. Iteration-complexity of block-decomposition algorithms and the alternating direction method of multipliers[J]. SIAM J. Optim., 2013, 23(1): 475–507. DOI:10.1137/110849468
[8] Gonçalves M L N, Melo J G, Monteiro R D C. Improved pointwise iteration-complexity of a regularized ADMM and of a regularized non-euclidean HPE framework[J]. SIAM J. Optim., 2017, 27(1): 379–407. DOI:10.1137/16M1055530
[9] Alves M M, Svaiter B F. A variant of the hybrid proximal extragradient method for solving strongly monotone inclusions and its complexity analysis[J]. J. Optim. Theory Appl., 2016, 168: 198–215. DOI:10.1007/s10957-015-0792-y
[10] Chen Caihua, Chan Raymond H, Ma Shiqian, Yang Junfeng. Inertial proximal ADMM for linearly constrained separable convex optimization[J]. SIAM J. Imag. Sci., 2015, 8(4): 2239–2267. DOI:10.1137/15100463X
[11] Boţ R I, Csetnek E R, Hendrich C. Inertial Douglas-Rachford splitting for monotone inclusion problems[J]. Appl. Math. Comput., 2015, 256: 472–487.
[12] Alves M M, Marcavillaca R T. On inexact relative-error hybrid proximal extragradient, forwardbackward and Tseng's modified forward-backward methods with inertial effects[J]. Set-Valued and Variational Analysis, 2019, 1-25. Published online: 15 March 2019, https://doi.org/10.1007/s11228-019-00510-7.
[13] Burachik R S, Svaiter B F. ε-enlargements of maximal monotone operators in Banach spaces[J]. Set-Valued Anal., 1999, 7(2): 117–132. DOI:10.1023/A:1008730230603
[14] Bauschke H H, Combettes P L. Convex analysis and monotone operator theory in Hilbert spaces[M]. New York: Springer-Verlag, 2010.
[15] Tikhonov A N, Arsenin V Y. Solutions of ill-posed problems[M]. Washington: Winston and Sons, 1977.
[16] Minty G J. Monotone (nonlinear) operators in Hilbert spaces[J]. Duke Math.J, 1962, 29(3): 341–346. DOI:10.1215/S0012-7094-62-02933-2
[17] Briceño-Arias L M. Forward-Douglas-Rachford splitting and forward-partial inverse method for solving monotone inclusions[J]. Optimization, 2015, 64(5): 1239–1261. DOI:10.1080/02331934.2013.855210
[18] Spingarn J E. Partial inverse of a monotone operator[J]. Appl. Math. Optim., 1983, 10(3): 247–265.
[19] Burachik R S, Sagastizábal C A, Scheimberg S. An inexact method of partial inverses and a parallel bundle method[J]. Optim. Methods Softw., 2006, 21(3): 385–400. DOI:10.1080/10556780500094887
[20] Marques A M, Costa L S. On the convergence rate of the scaled proximal decomposition on the graph of a maximal monotone operator (SPDG) algorithm[J]. Optimization, 2018, 1-10.
[21] Alves M M, Lima S C. An inexact Spingarn's partial inverse method with applications to operator splitting and composite optimization[J]. J. Optim. Theory Appl., 2017, 175(3): 818–847. DOI:10.1007/s10957-017-1188-y