数学杂志 (Journal of Mathematics) 2014, Vol. 34, Issue (6): 1193-1199
CONJUGATE GRADIENT PROJECTION METHOD FOR CONSTRAINED OPTIMIZATION PROBLEMS WITH THE WOLFE STEPSIZE RULE
JING Shu-jie, ZHAO Hai-yan    
School of Mathematics and Information Science, Henan Polytechnic University, Jiaozuo 454003, China
Abstract: In this paper, a new conjugate gradient projection algorithm for constrained optimization problems is presented. By combining the conjugate gradient algorithm with the GLP gradient projection idea, the new algorithm is constructed, and its global convergence under the Wolfe stepsize rule is proved.
Key words: constrained optimization problems; conjugate gradient method; GLP gradient projection; Wolfe line search; global convergence
1 Introduction

Consider the constrained optimization problem

$ \begin{equation} \displaystyle{\min\limits_{x\in\Omega}f(x)}, \end{equation} $ (1.1)

where $\Omega \subseteq R^n$ is a nonempty closed convex set and $f:{R^n} \to R$ is continuously differentiable on $\Omega $. In 1987, Calamai P H and More J J [1] proposed the GLP gradient projection algorithm, based on the idea of a curvilinear search along the boundary of the feasible region by means of the projection operator, and discussed its convergence properties.

In constrained optimization, finding a feasible descent direction is the key to constructing new algorithms, and the GLP gradient projection algorithm solves this problem effectively. However, it converges slowly and requires multiple projections, so its computational cost is high. The conjugate gradient method, in contrast, is simple, computationally cheap, and fast to converge; it is an effective method for the unconstrained problem $\displaystyle{\min_{x\in R^n}f(x)}$, with the iteration scheme

$ \begin{eqnarray} && {x_{k + 1}} = {x_k} + {\alpha _k}{d_k}, \end{eqnarray} $ (1.2)
$ \begin{eqnarray} {d_k} = \left\{ {\begin{array}{*{20}{c}} { - {g_k}, {\rm{ }} {\rm{ }}k = 1, }\\ { - {g_k} + {\beta _k}{d_{k - 1}}, {\rm{ }}k \ge 2, } \end{array}} \right. \end{eqnarray} $ (1.3)

where ${d_k}$ is the search direction and ${\beta _k}$ is a scalar; different choices of the parameter ${\beta _k}$ yield different conjugate gradient methods, see [2]. The stepsize ${\alpha _k}$ satisfies the Wolfe line search conditions

$ \begin{eqnarray} && f({x_k} + {\alpha _k}{d_k}) - f({x_k}) \le {\sigma _1}{\alpha _k}g_k^T{d_k}, \end{eqnarray} $ (1.4)
$ \begin{eqnarray}&& g_{k + 1}^T{d_k} \ge {\sigma _2}g_k^T{d_k}, \end{eqnarray} $ (1.5)

where $0 < {\sigma _1} < {\sigma _2} < 1$.
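Conditions (1.4)-(1.5) can be checked directly for any candidate stepsize. The sketch below is our own illustration, not part of the paper: the function name `wolfe_ok`, the one-dimensional test problem $f(x) = x^2$, and the parameter values are all assumptions chosen for demonstration.

```python
# Direct check of the Wolfe conditions (1.4)-(1.5) for a candidate stepsize
# on the illustrative 1-D quadratic f(x) = x^2.

def wolfe_ok(f, grad, x, d, alpha, sigma1=0.1, sigma2=0.5):
    """Return True iff alpha satisfies (1.4) sufficient decrease and (1.5) curvature."""
    gd = grad(x) * d                                          # g_k^T d_k (scalars in 1-D)
    armijo = f(x + alpha * d) - f(x) <= sigma1 * alpha * gd   # (1.4)
    curvature = grad(x + alpha * d) * d >= sigma2 * gd        # (1.5)
    return armijo and curvature

f = lambda x: x * x
grad = lambda x: 2.0 * x

x, d = 1.0, -2.0   # d = -grad(x) is a descent direction: g^T d = -4 < 0
# alpha = 0.5 lands exactly at the minimizer x* = 0, so both conditions hold;
# alpha = 0.01 is too short a step to satisfy the curvature condition (1.5).
assert wolfe_ok(f, grad, x, d, 0.5)
assert not wolfe_ok(f, grad, x, d, 0.01)
```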

For the unconstrained problem, in order to obtain a descent conjugate gradient direction for the objective function at ${x_k}$, assume that for all $k > 1$ the parameter ${\beta _k}$ satisfies

$ \begin{equation} \left\{ \begin{array}{l} {\left\| {{g_k}} \right\|^2} > \left| {{\beta _k}g_k^T{d_{k - 1}}} \right|, \\ \left| {g_k^T( - {g_k} + {\beta _k}{d_{k - 1}})} \right| \ge \lambda \left| {{\beta _k}} \right|\left\| {{g_k}} \right\|\left\| {{d_{k - 1}}} \right\|, \end{array} \right. \end{equation} $ (1.6)

where $\lambda > 1$, and $g_k =\nabla f(x_k)$ denotes the gradient of $f(x)$ at ${x_k}$. Inspired by [3-6], we restrict ${\beta _k}$ to the interval ${\beta _k} \in \left[{-\beta _k^1, \beta _k^2} \right]$, where $ \beta _k^1 = \frac{1}{{\lambda - \cos {\theta _k}}}\frac{{\left\| {{g_k}} \right\|}}{{\left\| {{d_{k - 1}}} \right\|}}, ~ \beta _k^2 = \frac{1}{{\lambda + \cos {\theta _k}}}\frac{{\left\| {{g_k}} \right\|}}{{\left\| {{d_{k - 1}}} \right\|}}, $ and ${\theta _k}$ is the angle between $ - {g_k}$ and ${d_{k - 1}}$.

Since the gradient projection method readily produces feasible descent directions, while the conjugate gradient method enjoys fast convergence and good numerical performance, this paper combines the two methods to solve the constrained optimization problem; a suitable modification of the parameter ${\beta _k}$ can improve the convergence speed of the gradient projection algorithm.

2 Definitions and Lemmas

Definition 2.1 [2]  For any $x \in {R^n}$, write ${P_\Omega }\left( x \right) = \arg \min \left\{ {\left\| {x - y} \right\|:y \in \Omega } \right\}$; ${P_\Omega }\left( x \right)$ is called the projection of $x$ onto $\Omega $, and ${P_\Omega }\left( \cdot \right)$ is called the projection operator from ${R^n}$ onto $\Omega $.

Lemma 2.2 [3]  Let $P$ denote the projection from ${R^n}$ onto $\Omega $ and set $x(\alpha, s) = P(x + \alpha s)$. Then for $x \in \Omega $ and $s \in {R^n}$:

(1) $\left\langle {x(\alpha, s) - (x + \alpha s), y - x(\alpha, s)} \right\rangle \ge 0$ for any $y \in \Omega $ and $\alpha > 0$;

(2) $P$ is nonexpansive: $\left\| {P(y) - P(x)} \right\| \le \left\| {y - x} \right\|$ for all $x, y \in {R^n}$;

(3) $\left\langle { - s, x - x(\alpha, s)} \right\rangle \ge \frac{{{{\left\| {x(\alpha, s) - x} \right\|}^2}}}{\alpha }$ for any $\alpha > 0$.
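The three properties of Lemma 2.2 can be verified numerically for a concrete closed convex set. The sketch below is our own illustration (the box $\Omega = [0, 1]^n$, whose projection is the componentwise clamp, and all sample points are assumptions; the paper works with a general $\Omega $):

```python
# Numerical sanity check of Lemma 2.2 for Omega = [0, 1]^n, where the
# projection P is the componentwise clamp P(x)_i = min(max(x_i, 0), 1).

def proj_box(x, lo=0.0, hi=1.0):
    """Projection of x onto the box [lo, hi]^n."""
    return [min(max(xi, lo), hi) for xi in x]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def norm2(u):
    return dot(u, u)

x = [0.2, 0.9, 0.5]    # a feasible point x in Omega
s = [1.5, -2.0, 0.3]   # an arbitrary direction s in R^n
alpha = 0.7

xa = proj_box([xi + alpha * si for xi, si in zip(x, s)])   # x(alpha, s)

# (1) <x(alpha,s) - (x + alpha*s), y - x(alpha,s)> >= 0 for any y in Omega
y = [1.0, 0.0, 0.25]
g1 = dot([a - (b + alpha * c) for a, b, c in zip(xa, x, s)],
         [yi - ai for yi, ai in zip(y, xa)])
assert g1 >= 0.0

# (2) nonexpansiveness: ||P(u) - P(v)||^2 <= ||u - v||^2
u, v = [2.0, -1.0, 0.5], [0.3, 0.4, 1.7]
assert norm2([a - b for a, b in zip(proj_box(u), proj_box(v))]) <= norm2([a - b for a, b in zip(u, v)])

# (3) <-s, x - x(alpha,s)> >= ||x(alpha,s) - x||^2 / alpha
lhs = dot([-si for si in s], [xi - ai for xi, ai in zip(x, xa)])
assert lhs >= norm2([ai - xi for ai, xi in zip(xa, x)]) / alpha
```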

Definition 2.3 [7]  A function $\sigma $: $\left[{0, + \infty } \right) \to \left[{0, + \infty } \right)$ is called a forcing function if for every sequence ${t_k} \in \left[{0, + \infty } \right)$,

$ \mathop {\lim }\limits_{k \to \infty } \sigma \left( {{t_k}} \right) = 0 \Rightarrow \mathop {\lim }\limits_{k \to \infty } {t_k} = 0. $

Definition 2.4 [7] (inverse modulus of continuity)  Let $f:{R^n} \to R$ be continuously differentiable on a set $L$, and define

$ a = \sup \left\{ {\left\| {\nabla f\left( x \right) - \nabla f\left( y \right)} \right\|\left| {x, y \in L} \right.} \right\} > 0. $

$ \rho \left( t \right) = \left\{ \begin{array}{l} \inf \left\{ {\left\| {x - y} \right\|\left| {x, y \in L, \left\| {\nabla f\left( x \right) - \nabla f\left( y \right)} \right\| \ge t} \right.} \right\}, t \in \left[{0, a} \right), \\ \mathop {\lim }\limits_{r \to a} \rho \left( r \right), t \in \left[{a, + \infty } \right), \end{array} \right. $

Then $\rho \left( t \right)$ is called the inverse modulus of continuity of $\nabla f\left( x \right)$ on $L$.

Lemma 2.5 [7]  If $\nabla f\left( x \right)$ is uniformly continuous on $L$, then the inverse modulus of continuity of $\nabla f\left( x \right)$ on $L$ is a forcing function.

Lemma 2.6 [8]  For $x \in \Omega $, the tangent cone of $\Omega $ at $x$ is defined as

$ {T_\Omega }\left( x \right) = \left\{ {v \in {R^n}\left| {\exists {x_t} \in \Omega, {\tau _t} > 0, t \to \infty, {x_t} \to x, {\tau _t} \to 0, v = \mathop {\lim}\limits_{t \to \infty } \frac{{{x_t} - x}}{{{\tau _t}}}} \right.} \right\}. $

Lemma 2.7 [8]  Let ${\nabla _\Omega }f\left( x \right)$ denote the projected gradient of $f$ at $x \in \Omega $. Then:

(1) $\min \left\{ {\left\langle {\nabla f\left( x \right), v} \right\rangle \left| {v \in {T_\Omega }\left( x \right), \left\| v \right\| \le 1} \right.} \right\} = - \left\| {{\nabla _\Omega }f\left( x \right)} \right\|$;

(2) $\left\| {{\nabla _\Omega }f\left( \cdot \right)} \right\|$ is lower semicontinuous on $\Omega $: if $\mathop {\lim }\limits_{k \to \infty } {x_k} = x$, then $\left\| {{\nabla _\Omega }f\left( x \right)} \right\| \le \mathop {\liminf }\limits_{k \to \infty } \left\| {{\nabla _\Omega }f\left( {{x_k}} \right)} \right\|$;

(3) $x \in \Omega $ is a stationary point of the constrained optimization problem if and only if ${\nabla _\Omega }f\left( x \right) = 0$.

3 The Conjugate Gradient Projection Algorithm

${x_k} \in \Omega $, ${\alpha _k} > 0$, 记 $\overline {{x_k}} \left( {{\alpha _k}, {s_k}} \right) = P\left( {{x_k} - {\alpha _k}\nabla f({x_k})} \right)$, 令

$ \begin{equation} {x_{k + 1}} = {x_k}\left( {{\alpha _k}, {s_k}} \right) = P\left( {{x_k} + {\alpha _k}{s_k}} \right), \end{equation} $ (3.1)

where ${s_k} = \left\{ \begin{array}{l} - \nabla f({x_k}), k = 1, \\ - \nabla f({x_k}) + {\beta _k}{d_{k - 1}}, k > 1, \end{array} \right.$ and

$ \begin{equation}\label{eq:3} \left| {{\beta _k}} \right| = \frac{1}{\lambda }\frac{{{{\left\| {\overline {{x_k}} \left( {{\alpha _k}, {s_k}} \right) - {x_k}} \right\|}^2}}}{{\alpha _k^2\left\| {\nabla f({x_k})} \right\|\left\| {{d_{k - 1}}} \right\|}}, \end{equation} $ (3.2)

where $\lambda > 1 + \frac{1}{\varepsilon }$ with $\varepsilon > 0$ sufficiently small. The parameter ${\beta _k}$ determines ${s_k}$, and hence the search direction of the conjugate gradient projection algorithm

$ \begin{equation} {d_k} = \frac{{{x_k}\left( {{\alpha _k}, {s_k}} \right) - {x_k}}}{{{\alpha _k}}}. \end{equation} $ (3.3)

Combined with the Wolfe line search, the steps of the conjugate gradient projection algorithm are as follows.

Algorithm

Step 1  Choose any ${x_1} \in \Omega $; set ${d_0} = 0$, ${\beta _1} = 0$, $k = 1$.

Step 2  Compute ${\alpha _k} > 0$ satisfying conditions (1.4) and (1.5).

Step 3  Set ${x_{k + 1}} = {x_k} + {\alpha _k}{d_k}$. If $\left\| {{x_k} - {x_k}\left( {{\alpha _k}, {s_k}} \right)} \right\| \le \varepsilon $, stop: ${x_k}\left( {{\alpha _k}, {s_k}} \right)$ is an approximate optimal solution of problem (1.1).

Step 4  Set $k = k + 1$ and go to Step 2.
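Under strong simplifications, the iteration (3.1)-(3.3) together with Steps 1-4 can be sketched in code. The sketch below is our own illustration, not the paper's exact method: it takes $\Omega = [0, 1]^2$, uses a fixed stepsize $\alpha $ in place of the Wolfe rule (1.4)-(1.5), takes ${\beta _k}$ with positive sign (the paper's (3.2) fixes only $\left| {{\beta _k}} \right|$), and sets $\lambda = 4$; the test objective $f(x) = \left\| {x - c} \right\|^2$ is also an assumption.

```python
# Sketch of the conjugate gradient projection iteration (3.1)-(3.3) on the
# box Omega = [0,1]^2, with a fixed stepsize instead of the Wolfe rule.

def proj(x):
    """Projection P onto the box [0, 1]^2 (componentwise clamp)."""
    return [min(max(xi, 0.0), 1.0) for xi in x]

def grad(x):
    """Gradient of f(x) = ||x - c||^2 with c = (2, 0.5) outside the box,
    so the constrained minimizer is (1, 0.5)."""
    c = [2.0, 0.5]
    return [2.0 * (xi - ci) for xi, ci in zip(x, c)]

def norm(u):
    return sum(ui * ui for ui in u) ** 0.5

lam, alpha = 4.0, 0.1          # lambda and a fixed stepsize (simplifications)
x = [0.2, 0.9]                 # Step 1: x_1 in Omega, d_0 = 0, beta_1 = 0
d_prev = None

for k in range(200):
    g = grad(x)
    xbar = proj([xi - alpha * gi for xi, gi in zip(x, g)])   # \bar{x}_k(alpha_k, s_k)
    if d_prev is None:
        s = [-gi for gi in g]                                # s_k = -grad f(x_k), k = 1
    else:
        # |beta_k| from (3.2); the positive sign is our choice for the sketch
        beta = (norm([a - b for a, b in zip(xbar, x)]) ** 2
                / (lam * alpha ** 2 * norm(g) * norm(d_prev)))
        s = [-gi + beta * di for gi, di in zip(g, d_prev)]
    x_new = proj([xi + alpha * si for xi, si in zip(x, s)])  # x_k(alpha_k, s_k), eq. (3.1)
    d_prev = [(a - b) / alpha for a, b in zip(x_new, x)]     # d_k from (3.3)
    if norm([a - b for a, b in zip(x, x_new)]) <= 1e-10:     # Step 3 stopping test
        break
    x = x_new                                                # Step 4

# The iterates approach the constrained minimizer (1, 0.5) on the boundary.
assert abs(x[0] - 1.0) < 1e-6 and abs(x[1] - 0.5) < 1e-6
```

Note how the projection keeps every iterate feasible, so no separate feasibility restoration is needed, which is exactly the appeal of combining the GLP projection with conjugate gradient directions.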

To prove global convergence of the algorithm, we make the following Assumption H [2]:

(a) The objective function $f(x)$ is bounded below on the level set ${L_0} = \{ x \in {R^n}:f(x) \le f({x_1})\} $, where ${x_1}$ is the initial point.

(b) The objective function $f(x)$ is continuously differentiable on ${L_0}$, and $\nabla f(x)$ is Lipschitz continuous, i.e., there exists a constant $L > 0$ such that $\left\| {g(x) - g(y)} \right\| \le L\left\| {x - y} \right\|$ for all $x, y \in {L_0}$.

Lemma 3.1  If ${x_k}$ is not a stationary point of problem (1.1), then for any ${\alpha _k} > 0$ we have ${x_k} \ne {x_k}\left( {{\alpha _k}, {s_k}} \right)$, and the search direction ${d_k}$ generated by (3.3) is a descent direction for the algorithm, i.e., $\left\langle {\nabla f({x_k}), \frac{{{x_k}\left( {{\alpha _k}, {s_k}} \right) - {x_k}}}{{{\alpha _k}}}} \right\rangle \le \frac{{1 - \lambda }}{\lambda }{\left\| {\nabla f({x_k})} \right\|^2} < 0$.

Proof  For any ${\alpha _k} > 0$, by (3.1), Lemma 2.2(2), Lemma 2.2(3) and (3.2),

$ \begin{eqnarray*} &&\left\langle {\nabla f({x_k}), \frac{{{x_k}\left( {{\alpha _k}, {s_k}} \right) - {x_k}}}{{{\alpha _k}}}} \right\rangle \\ &=& \frac{1}{{{\alpha _k}}}\left[{\left\langle {\nabla f({x_k}), {x_k}\left( {{\alpha _k}, {s_k}} \right)-\overline {{x_k}} \left( {{\alpha _k}, {s_k}} \right)} \right\rangle + \left\langle {\nabla f({x_k}), \overline {{x_k}} \left( {{\alpha _k}, {s_k}} \right)-{x_k}} \right\rangle } \right]\\ &\leq&\frac{1}{{{\alpha _k}}}\left[{\left\| {\nabla f({x_k})} \right\|\left\| {{x_k}\left( {{\alpha _k}, {s_k}} \right)- \overline {{x_k}} \left( {{\alpha _k}, {s_k}} \right)} \right\|- \left\langle {\nabla f({x_k}), {x_k}-\overline {{x_k}} \left( {{\alpha _k}, {s_k}} \right)} \right\rangle } \right] \\&\leq&\left| {{\beta _k}} \right|\left\| {\nabla f({x_k})} \right\|\left\| {{d_{k - 1}}} \right\| - \frac{{{{\left\| {\overline {{x_k}} \left( {{\alpha _k}, {s_k}} \right) - {x_k}} \right\|}^2}}}{{\alpha _k^2}} \leq \left( {\frac{1}{\lambda } - 1} \right)\frac{{{{\left\| {\overline {{x_k}} \left( {{\alpha _k}, {s_k}} \right) - {x_k}} \right\|}^2}}}{{\alpha _k^2}} \\&\leq&\frac{{1 - \lambda }}{\lambda }{\left\| {\nabla f({x_k})} \right\|^2} <0. \end{eqnarray*} $

Remark 3.1  From Lemma 3.1 we obtain

$ \begin{equation} \frac{1}{{{\alpha _k}}}\left\langle {\nabla f({x_k}), {x_k} - {x_k}\left( {{\alpha _k}, {s_k}} \right)} \right\rangle \ge \frac{{\lambda - 1}}{\lambda }\frac{{{{\left\| {\overline {{x_k}} \left( {{\alpha _k}, {s_k}} \right) - {x_k}} \right\|}^2}}}{{\alpha _k^2}}. \end{equation} $ (3.4)

Lemma 3.2  If ${x_k} \in \Omega $ is not a stationary point, then:

(1) $\left\| {{d_k}} \right\| \le \left\| {{s_k}} \right\| \le \frac{{\lambda + 1}}{\lambda }\left\| {\nabla f({x_k})} \right\|$.

(2) $\left\langle { - {s_k}, {x_k} - {x_k}\left( {{\alpha _k}, {s_k}} \right)} \right\rangle - \left\langle {\nabla f({x_k}), {x_k} - {x_k}\left( {{\alpha _k}, {s_k}} \right)} \right\rangle \le \frac{{\lambda + 1}}{{\lambda \left( {\lambda - 1} \right)}}\left\langle {\nabla f({x_k}), {x_k} - {x_k}\left( {{\alpha _k}, {s_k}} \right)} \right\rangle $.

Proof  (1) By (3.3), (1.3) and Lemma 2.2(2),

$ \begin{eqnarray*} \left\| {{d_k}} \right\| &=& \frac{{\left\| {{x_k}\left( {{\alpha _k}, {s_k}} \right) - {x_k}} \right\|}}{{{\alpha _k}}} \le \left\| {{s_k}} \right\|= \left\| { - {g_k} + {\beta _k}{d_{k - 1}}} \right\| \le \left\| {{g_k}} \right\| + \left| {{\beta _k}} \right|\left\| {{d_{k - 1}}} \right\|\\ &\leq&\left\| {\nabla f({x_k})} \right\| + \frac{{{{\left\| {\overline {{x_k}} \left( {{\alpha _k}, {s_k}} \right) - {x_k}} \right\|}^2}}}{{\lambda \alpha _k^2\left\| {\nabla f({x_k})} \right\|}}\leq\left( {\frac{{\lambda + 1}}{\lambda }} \right)\left\| {\nabla f({x_k})} \right\|. \end{eqnarray*} $

(2) By the definition of ${s_k}$, (3.2), Lemma 3.2(1) and (3.4),

$ \begin{eqnarray*} &&\left\langle { - {s_k}, {x_k} - {x_k}\left( {{\alpha _k}, {s_k}} \right)} \right\rangle - \left\langle {\nabla f({x_k}), {x_k} - {x_k}\left( {{\alpha _k}, {s_k}} \right)} \right\rangle \leq \left| {{\beta _k}} \right|\left\| {{d_{k - 1}}} \right\|\left\| {{x_k} - {x_k}\left( {{\alpha _k}, {s_k}} \right)} \right\|\\ &\leq& \frac{1}{\lambda }\frac{{{{\left\| {\overline {{x_k}} \left( {{\alpha _k}, {s_k}} \right) - {x_k}} \right\|}^2}}}{{\alpha _k^2\left\| {\nabla f({x_k})} \right\|}} \cdot {\alpha _k}\left\| {{s_k}} \right\|\leq \frac{{\left( {\lambda + 1} \right){\alpha _k}}}{{\lambda \left( {\lambda - 1} \right)}} \cdot \frac{{\lambda - 1}}{\lambda }\frac{{{{\left\| {\overline {{x_k}} \left( {{\alpha _k}, {s_k}} \right) - {x_k}} \right\|}^2}}}{{\alpha _k^2}}\\ &\leq& \frac{{\lambda + 1}}{{\lambda \left( {\lambda - 1} \right)}}\left\langle {\nabla f({x_k}), {x_k} - {x_k}\left( {{\alpha _k}, {s_k}} \right)} \right\rangle. \end{eqnarray*} $
4 Convergence

Lemma 4.1  Suppose Assumption H holds, $f$ is continuously differentiable, $\nabla f(x)$ is uniformly continuous on $\Omega $, and $\left\{ {{x_k}} \right\}$ is the sequence generated by the conjugate gradient projection algorithm, satisfying the Wolfe line search condition

$ f\left( {{x_k}} \right) - f\left( {{x_k}\left( {{\alpha _k}, {s_k}} \right)} \right) \ge - {\sigma _1}\left\langle {\nabla f({x_k}), {x_k}\left( {{\alpha _k}, {s_k}} \right) - {x_k}} \right\rangle = \sigma \left( {{t_k}} \right), $

where ${t_k} = \frac{{ - \left\langle {\nabla f({x_k}), {x_k}\left( {{\alpha _k}, {s_k}} \right) - {x_k}} \right\rangle }}{{\left\| {{x_k}\left( {{\alpha _k}, {s_k}} \right) - {x_k}} \right\|}}$. Then:

(1) $\sigma \left( {{t_k}} \right)$ is a forcing function; (2) $\mathop {\lim }\limits_{k \to \infty } {t_k} = 0$.

Proof  (1) By (1.5),

$ \begin{eqnarray*} &&\left( {1 - {\sigma _2}} \right)\left\langle {\nabla f({x_k}), {x_k} - {x_k}\left( {{\alpha _k}, {s_k}} \right)} \right\rangle \\ &=& \left\langle {\nabla f({x_k}), {x_k} - {x_k}\left( {{\alpha _k}, {s_k}} \right)} \right\rangle - {\sigma _2}\left\langle {\nabla f({x_k}), {x_k} - {x_k}\left( {{\alpha _k}, {s_k}} \right)} \right\rangle \\&\leq&\left\langle {\nabla f({x_k}), {x_k} - {x_k}\left( {{\alpha _k}, {s_k}} \right)} \right\rangle + \left\langle {\nabla f\left( {{x_k}\left( {{\alpha _k}, {s_k}} \right)} \right), {x_k}\left( {{\alpha _k}, {s_k}} \right) - {x_k}} \right\rangle \\&=&\left\langle {\nabla f({x_k}) - \nabla f\left( {{x_k}\left( {{\alpha _k}, {s_k}} \right)} \right), {x_k} - {x_k}\left( {{\alpha _k}, {s_k}} \right)} \right\rangle \\&\leq& \left\| {\nabla f({x_k}) - \nabla f\left( {{x_k}\left( {{\alpha _k}, {s_k}} \right)} \right)} \right\|\left\| {{x_k} - {x_k}\left( {{\alpha _k}, {s_k}} \right)} \right\|, \end{eqnarray*} $

Therefore

$ \begin{equation} \left\| {\nabla f({x_k}) - \nabla f\left( {{x_k}\left( {{\alpha _k}, {s_k}} \right)} \right)} \right\| \ge \left( {1 - {\sigma _2}} \right)\frac{{\left\langle {\nabla f({x_k}), {x_k} - {x_k}\left( {{\alpha _k}, {s_k}} \right)} \right\rangle }}{{\left\| {{x_k} - {x_k}\left( {{\alpha _k}, {s_k}} \right)} \right\|}} = \left( {1 - {\sigma _2}} \right){t_k} > 0. \end{equation} $ (4.1)

By Assumption H(b), $\left\| {\nabla f({x_k}) - \nabla f\left( {{x_k}\left( {{\alpha _k}, {s_k}} \right)} \right)} \right\| \le L\left\| {{x_k} - {x_k}\left( {{\alpha _k}, {s_k}} \right)} \right\|$; combining this with (4.1) gives

$ \begin{equation} \left\| {{x_k} - {x_k}\left( {{\alpha _k}, {s_k}} \right)} \right\| \ge \frac{1}{L}\left( {1 - {\sigma _2}} \right)\frac{{\left\langle {\nabla f({x_k}), {x_k} - {x_k}\left( {{\alpha _k}, {s_k}} \right)} \right\rangle }}{{\left\| {{x_k} - {x_k}\left( {{\alpha _k}, {s_k}} \right)} \right\|}}. \end{equation} $ (4.2)

By Definition 2.4 and Lemma 2.5, $\sigma \left( {{t_k}} \right)$ is a forcing function.

(2) By (1.4) and (4.2),

$ \begin{eqnarray*} f\left( {{x_k}} \right) - f\left( {{x_k}\left( {{\alpha _k}, {s_k}} \right)} \right) \ge \sigma \left( {{t_k}} \right) &=& {\sigma _1}\left\langle {\nabla f({x_k}), {x_k} - {x_k}\left( {{\alpha _k}, {s_k}} \right)} \right\rangle \\ &=& {\sigma _1}\left\| {{x_k} - {x_k}\left( {{\alpha _k}, {s_k}} \right)} \right\|\frac{{\left\langle {\nabla f({x_k}), {x_k} - {x_k}\left( {{\alpha _k}, {s_k}} \right)} \right\rangle }}{{\left\| {{x_k} - {x_k}\left( {{\alpha _k}, {s_k}} \right)} \right\|}}\\ &\geq&\frac{1}{L}{\sigma _1}\left( {\left( {1 - {\sigma _2}} \right){t_k}} \right){t_k}. \end{eqnarray*} $

Summing over $k$ gives $\sum\limits_{k \ge 1} {\left( {f\left( {{x_k}} \right) - f\left( {{x_k}\left( {{\alpha _k}, {s_k}} \right)} \right)} \right)} \ge \sum\limits_{k \ge 1} {\sigma \left( {{t_k}} \right)}. $

By Assumption H, $f\left( x \right)$ is nonincreasing and bounded below on the level set ${L_0}$, so $\sum\limits_{k \ge 1} {\left( {f\left( {{x_k}} \right) - f\left( {{x_k}\left( {{\alpha _k}, {s_k}} \right)} \right)} \right)} < + \infty; $ hence $\sum\limits_{k \ge 1} {\sigma \left( {{t_k}} \right)} < + \infty $ and $\mathop {\lim }\limits_{k \to \infty } \sigma \left( {{t_k}} \right) = 0$, i.e.,

$ \begin{equation} \mathop {\lim }\limits_{k \to \infty } \left\langle {\nabla f({x_k}), {x_k} - {x_k}\left( {{\alpha _k}, {s_k}} \right)} \right\rangle = 0, \end{equation} $ (4.3)

By the definition of a forcing function (Definition 2.3), $\mathop {\lim }\limits_{k \to \infty } {t_k} = 0$, i.e.,

$ \begin{equation} \mathop {\lim }\limits_{k \to \infty } \frac{{\left\langle {\nabla f({x_k}), {x_k} - {x_k}\left( {{\alpha _k}, {s_k}} \right)} \right\rangle }}{{\left\| {{x_k} - {x_k}\left( {{\alpha _k}, {s_k}} \right)} \right\|}} = 0. \end{equation} $ (4.4)

Lemma 4.2  Let $f:{R^n} \to R$ be continuously differentiable and bounded below on $\Omega $, let $\nabla f(x)$ be uniformly continuous on $\Omega $, and suppose $\left\{ {\nabla f({x_k})} \right\}$ is bounded. Then the iterates $\left\{ {{x_k}} \right\}$ generated by the algorithm satisfy

$ \mathop {\lim }\limits_{k \to \infty } \frac{{\left\| {{x_k} - {x_k}\left( {{\alpha _k}, {s_k}} \right)} \right\|}}{{{\alpha _k}}} = 0, \mathop {\lim }\limits_{k \to \infty } \frac{{\left\| {{x_k} - \overline {{x_k}} \left( {{\alpha _k}, {s_k}} \right)} \right\|}}{{{\alpha _k}}} = 0. $

Proof  By Lemma 3.2(2),

$ \left\langle { - {s_k}, {x_k} - {x_k}\left( {{\alpha _k}, {s_k}} \right)} \right\rangle \le \frac{{{\lambda ^2} + 1}}{{\lambda \left( {\lambda - 1} \right)}}\left\langle {\nabla f({x_k}), {x_k} - {x_k}\left( {{\alpha _k}, {s_k}} \right)} \right\rangle, $

Dividing both sides by $\left\| {{x_k} - {x_k}\left( {{\alpha _k}, {s_k}} \right)} \right\|$ and taking limits, this becomes

$ 0 \le \mathop {\lim }\limits_{k \to \infty } \sup \frac{{\left\langle { - {s_k}, {x_k} - {x_k}\left( {{\alpha _k}, {s_k}} \right)} \right\rangle }}{{\left\| {{x_k} - {x_k}\left( {{\alpha _k}, {s_k}} \right)} \right\|}} \le \mathop {\lim }\limits_{k \to \infty } \frac{{{\lambda ^2} + 1}}{{\lambda \left( {\lambda - 1} \right)}}\frac{{\left\langle {\nabla f({x_k}), {x_k} - {x_k}\left( {{\alpha _k}, {s_k}} \right)} \right\rangle }}{{\left\| {{x_k} - {x_k}\left( {{\alpha _k}, {s_k}} \right)} \right\|}}, $

and by (4.4),

$ \begin{equation} \mathop {\lim }\limits_{k \to \infty } \frac{{\left\langle { - {s_k}, {x_k} - {x_k}\left( {{\alpha _k}, {s_k}} \right)} \right\rangle }}{{\left\| {{x_k} - {x_k}\left( {{\alpha _k}, {s_k}} \right)} \right\|}} = 0. \end{equation} $ (4.5)

By Lemma 2.2(3), $\left\langle { - {s_k}, {x_k} - {x_k}({\alpha _k}, {s_k})} \right\rangle \ge \frac{{{{\left\| {{x_k}({\alpha _k}, {s_k}) - {x_k}} \right\|}^2}}}{{{\alpha _k}}}$, so

$ 0 \le \frac{{\left\| {{x_k}({\alpha _k}, {s_k}) - {x_k}} \right\|}}{{{\alpha _k}}} \le \frac{{\left\langle { - {s_k}, {x_k} - {x_k}\left( {{\alpha _k}, {s_k}} \right)} \right\rangle }}{{\left\| {{x_k} - {x_k}\left( {{\alpha _k}, {s_k}} \right)} \right\|}}, $

and (4.5) gives $\mathop {\lim }\limits_{k \to \infty } \frac{{\left\| {{x_k} - {x_k}\left( {{\alpha _k}, {s_k}} \right)} \right\|}}{{{\alpha _k}}} = 0. $ By (3.4),

$ 0 \le \frac{{\lambda - 1}}{\lambda }\frac{{{{\left\| {\overline {{x_k}} \left( {{\alpha _k}, {s_k}} \right) - {x_k}} \right\|}^2}}}{{\alpha _k^2}} \le \frac{1}{{{\alpha _k}}}\left\langle {\nabla f({x_k}), {x_k} - {x_k}\left( {{\alpha _k}, {s_k}} \right)} \right\rangle \le \left\| {\nabla f({x_k})} \right\|\frac{{\left\| {{x_k} - {x_k}\left( {{\alpha _k}, {s_k}} \right)} \right\|}}{{{\alpha _k}}}, $

Taking limits on both sides and using the boundedness of $\left\{ {\nabla f({x_k})} \right\}$ together with (4.3), we obtain $\mathop {\lim }\limits_{k \to \infty } \frac{{\left\| {{x_k} - \overline {{x_k}} \left( {{\alpha _k}, {s_k}} \right)} \right\|}}{{{\alpha _k}}} = 0.$

Theorem 4.3  Let $f:{R^n} \to R$ be continuously differentiable and bounded below on $\Omega $, let $\nabla f(x)$ be uniformly continuous on $\Omega $, and let $\left\{ {{x_k}} \right\}$ be the sequence generated by the algorithm with $0 < {\alpha _k} \le \gamma $, $k = 1, 2, \cdots $. Then $\mathop {\lim }\limits_{k \to \infty } \left\| {{\nabla _\Omega }f({x_k})} \right\| = 0$, and every accumulation point of $\left\{ {{x_k}} \right\}$ is a stationary point of the optimization problem.

Proof  By Lemma 2.7(1) and (3), for any $\varepsilon > 0$ there exists ${v_k} \in {T_\Omega }\left( {{x_k}} \right)$ with $\left\| {{v_k}} \right\| \le 1$ such that

$ \begin{equation} \left\| {{\nabla _\Omega }f({x_k})} \right\| \le \left\langle { - \nabla f({x_k}), {v_k}} \right\rangle + \varepsilon, \end{equation} $ (4.6)

$\forall z \in \Omega $由引理2.2(1) 知 $\left\langle {{x_k}({\alpha _k}, {s_k}) - ({x_k} + {\alpha _k}{s_k}), z - {x_k}({\alpha _k}, {s_k})} \right\rangle \ge 0$, 则有

$ \left\langle {{\alpha _k}{s_k}, z - {x_k}({\alpha _k}, {s_k})} \right\rangle\\ \leq \left\langle {{x_k}({\alpha _k}, {s_k}) - {x_k}, z - {x_k}({\alpha _k}, {s_k})} \right\rangle\leq\left\| {{x_k}({\alpha _k}, {s_k}) - {x_k}} \right\|\left\| {z - {x_k}({\alpha _k}, {s_k})} \right\|, $

Therefore

$ \begin{equation} \left\langle {{s_k}, z - {x_k}({\alpha _k}, {s_k})} \right\rangle \le \frac{{\left\| {{x_k}({\alpha _k}, {s_k}) - {x_k}} \right\|}}{{{\alpha _k}}}\left\| {z - {x_k}({\alpha _k}, {s_k})} \right\|. \end{equation} $ (4.7)

${v_{k + 1}} = z - {x_k}({\alpha _k}, {s_k}) \in {T_\Omega }\left( {{x_{k + 1}}} \right)$, $\left\| {{v_{k + 1}}} \right\| \le 1$, 由式(4.7) 可得

$ \left\langle {{s_k}, {v_{k + 1}}} \right\rangle = \left\langle { - \nabla f({x_k}) + {\beta _k}{d_{k - 1}}, {v_{k + 1}}} \right\rangle \le \frac{{\left\| {{x_k}({\alpha _k}, {s_k}) - {x_k}} \right\|}}{{{\alpha _k}}}, $

Combining this with (3.2) and Lemma 2.2(2),

$ \begin{eqnarray*} \left\langle { - \nabla f({x_k}), {v_{k + 1}}} \right\rangle &\leq&\frac{{\left\| {{x_k}({\alpha _k}, {s_k}) - {x_k}} \right\|}}{{{\alpha _k}}} + \left| {{\beta _k}} \right|\left\| {{d_{k - 1}}} \right\|\\ &\leq& \frac{{\left\| {{x_k}({\alpha _k}, {s_k}) - {x_k}} \right\|}}{{{\alpha _k}}} + \frac{1}{{\lambda \left\| {\nabla f({x_k})} \right\|}}{\left( {\frac{{\left\| {\overline {{x_k}} \left( {{\alpha _k}, {s_k}} \right) - {x_k}} \right\|}}{{{\alpha _k}}}} \right)^2}\\ &\leq& \frac{{\left\| {{x_k}\left( {{\alpha _k}, {s_k}} \right) - {x_k}} \right\|}}{{{\alpha _k}}} + \frac{1}{\lambda }\frac{{\left\| {\overline {{x_k}} \left( {{\alpha _k}, {s_k}} \right) - {x_k}} \right\|}}{{{\alpha _k}}}, \end{eqnarray*} $

Taking limits on both sides and applying Lemma 4.2,

$ \begin{eqnarray*} \mathop{\lim}_{k \to \infty } \sup \left\langle { - \nabla f({x_k}), {v_{k + 1}}} \right\rangle = 0, \end{eqnarray*} $ (4.8)

Moreover,

$ \begin{eqnarray} \left \langle { - \nabla f\left( {{x_k}\left( {{\alpha _k}, {s_k}} \right)} \right), {v_{k + 1}}} \right\rangle &=& \left\langle {\nabla f({x_k}) - \nabla f\left( {{x_k}\left( {{\alpha _k}, {s_k}} \right)} \right), {v_{k + 1}}} \right\rangle + \left\langle { - \nabla f({x_k}), {v_{k + 1}}} \right\rangle \nonumber\\ &\leq&\left\| {\nabla f({x_k}) - \nabla f\left( {{x_k}\left( {{\alpha _k}, {s_k}} \right)} \right)} \right\| + \left\langle { - \nabla f({x_k}), {v_{k + 1}}} \right\rangle, \end{eqnarray} $ (4.9)

By Lemma 4.2 and $0 < {\alpha _k} \le \gamma $,

$ \begin{equation} \mathop {\lim }\limits_{k \to \infty } \left\| {{x_k} - {x_k}\left( {{\alpha _k}, {s_k}} \right)} \right\| = 0, \end{equation} $ (4.10)

Using (4.8), (4.9), (4.10) and the uniform continuity of $\nabla f(x)$ on $\Omega $,

$ \mathop {\lim }\limits_{k \to \infty } \sup \left\langle { - \nabla f\left( {{x_k}\left( {{\alpha _k}, {s_k}} \right)} \right), {v_{k + 1}}} \right\rangle = 0, $

and then (4.6) together with the arbitrariness of $\varepsilon $ gives

$ \begin{equation} \mathop {\lim }\limits_{k \to \infty } \left\| {{\nabla _\Omega }f({x_k})} \right\| = 0. \end{equation} $ (4.11)

$\mathop {\lim }\limits_{k \in {N_0}, k \to \infty } {x_k} = x$, 其中 $N_0\subseteq N$, 由引理2.7(2) 和式(4.11) 知

$ \left\| {{\nabla _\Omega }f(x)} \right\| \le \mathop {\lim }\limits_{k \in {N_0}, k \to \infty } \inf \left\| {{\nabla _\Omega }f({x_k})} \right\| = 0. $

Hence by Lemma 2.7(3), every accumulation point $x$ of $\left\{ {{x_k}} \right\}$ is a stationary point of problem (1.1).

References
[1] Calamai P H, More J J. Projected gradient methods for linearly constrained problems[J]. Mathematical Programming, 1987, 39: 93–116. DOI:10.1007/BF02592073
[2] Wang Yiju, Xiu Naihua. Nonlinear programming theory and algorithms[M]. Xi'an: Shaanxi Science and Technology Press, 2008.
[3] Sun Qingying, Wang Changyu, Shi Zhenjun. Global convergence of a modified gradient projection method for convex constrained problems[J]. Acta Mathematicae Applicatae Sinica, 2006, 22(2): 227–242. DOI:10.1007/s10255-006-0299-2
[4] Sun Qingying, Gao Bao, Jian Ling, Wang Changyu. Modified conjugate gradient projection algorithm for constrained optimization problems[J]. Acta Mathematicae Applicatae Sinica, 2010, 33(4): 640–651.
[5] Shi Zhenjun. A modified HS conjugate gradient method and its global convergence[J]. Mathematica Numerica Sinica, 2001, 23(4): 393–406.
[6] Shi Zhenjun. A supermemory gradient method for unconstrained optimization[J]. Chinese Journal of Engineering Mathematics, 2000, 17(2): 99–104.
[7] Wang Yiju, Jiang Xuejun. A gradient projection algorithm for convex programming problems[J]. Journal of Qufu Normal University, 1996, 22(3): 7–12.
[8] Wang Changyu, Qu Biao. Convergence of the gradient projection method with a new stepsize rule[J]. Operations Research Transactions, 2002, 6(1): 36–44.