数学杂志 (Journal of Mathematics), 2015, Vol. 35, Issue (3): 579-592
LARGE-UPDATE METHOD FOR $P_*(\kappa)$ LINEAR COMPLEMENTARITY PROBLEMS BASED ON A NEW KERNEL FUNCTION
CHEN Dong-hai, ZHANG Ming-wang    
College of Science, China Three Gorges University, YiChang 443002, China
Abstract: A large-update primal-dual interior-point method for $P_*(\kappa)$ linear complementarity problems is presented in this paper. Based on a new kernel function which is strongly convex and differs from the usual logarithmic or self-regular functions, we show that if a strictly feasible starting point is available, the new method for $P_*(\kappa)$ linear complementarity problems has the polynomial complexity $O\big((1+2\kappa) \sqrt{n}(\log n)^{2}\log\frac{n}{\varepsilon}\big)$, which reduces the gap between the practical behavior of large-update methods and their theoretical performance results.
Key words: linear complementarity problem; kernel function; large-update method; polynomial complexity
1 Introduction

Interior-Point Methods (IPMs) have proven to be effective for solving optimization problems. Since any differentiable convex quadratic problem can be formulated as a monotone Linear Complementarity Problem (LCP), i.e., a $P_*(0)$ LCP [1], and since the quality of a variant of an IPM is often measured by whether it can be generalized to $P_*(\kappa)$ LCPs [2], $P_*(\kappa)$ LCPs have become an active area of research.

In this paper, we consider the LCP as follows:

$ \begin{equation}\label{1} \begin{cases}s=Mx+q, \\ xs=0, \\(x, s)\geq 0, \end{cases} \end{equation} $ (1.1)

where $M\in R^{n\times n}$ is a $P_*(\kappa)$ matrix and $q\in R^n$.
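
To make the formulation concrete, the following small NumPy sketch checks the three conditions of (1.1) for a candidate pair $(x, s)$; the instance ($M$, $q$, and the candidate solution) is a hypothetical illustration, not taken from the paper.

```python
import numpy as np

def lcp_residuals(M, q, x, s):
    # residuals of system (1.1): s - Mx - q = 0, xs = 0 (componentwise), (x, s) >= 0
    return (np.linalg.norm(s - M @ x - q),   # linear equation
            np.abs(x * s).max(),             # complementarity
            min(x.min(), s.min()))           # most negative component (should be >= 0)

# hypothetical 2x2 instance: M = I is positive semidefinite, hence a P_*(0) matrix
M = np.eye(2)
q = np.array([-1.0, 2.0])
x = np.array([1.0, 0.0])
s = np.array([0.0, 2.0])                     # candidate solution: s = Mx + q, xs = 0
print(lcp_residuals(M, q, x, s))             # (0.0, 0.0, 0.0)
```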

It is well known that primal-dual path-following methods use the classical Newton direction as the search direction, which is closely related to the primal-dual logarithmic barrier function. Peng et al. [3] presented a new variant of primal-dual IPMs based on self-regular kernel functions, and obtained the theoretical complexity $O(\sqrt{n}\log n\log\frac{n}{\varepsilon})$ for large-update IPMs. Bai and Roos [4] proposed a new class of kernel functions which are neither logarithmic barriers nor self-regular, and obtained polynomial complexity for linear optimization (LO). He et al. [5] introduced a self-adjusting IPM for LCPs based on a logarithmic barrier function and gave some numerical results. Recently, based on kernel functions, Cho and Kim [6] derived an iteration bound $O\big(q^{\frac{3}{2}}(1+2\kappa)\sqrt{n}(\log n)^{\frac{q+1}{q}}\log \frac{n}{\varepsilon}\big)$ for $P_*(\kappa)$ LCPs. Amini and Peyghami [7] obtained the iteration complexity $O\big((1+2\kappa)\sqrt{n}\log n\log(\log n)\log\frac{n}{\varepsilon} \big)$ in a special case.

In general, each kernel function gives rise to a search direction and determines a primal-dual interior-point algorithm. The barrier function based on the kernel function can be regarded as a measure of the distance from the given point to the central path [8]. In this paper, we focus on $P_*(\kappa)$ LCPs based on a new kernel function, design a large-update interior-point algorithm, and obtain the iteration complexity

$ O\big ((1+2\kappa) \sqrt{n}(\log n)^{2}\log\frac{n}{\varepsilon}\big), $

which is better than the bounds of large-update primal-dual IPMs based on some other kernel functions. The proposed kernel function is strongly convex, and it is neither self-regular nor the usual logarithmic kernel function. We also provide a simple analysis of the iteration complexity of the algorithm based on the new kernel function, which differs from the analysis of other primal-dual IPMs based on logarithmic barrier functions and on kernel functions recently introduced by other authors in the optimization field.

This paper is organized as follows: in Section 2, we recall basic concepts and the notion of the central path. We describe the kernel function and its growth properties for $P_*(\kappa)$ LCPs in Section 3. The analysis of the feasible step size and the amount of decrease in the proximity function during an inner iteration is presented in Section 4. In Section 5, we derive the total number of iterations of our algorithm. Section 6 contains some concluding remarks.

The following notations are used throughout the paper. For $x\in R^n $,

$ x_\text{min}=\min\{x_1, x_2, \ldots, x_n\}, $

i.e., the minimal component of $x$. The vector norm is the 2-norm, denoted by $\|\cdot\|$. We denote the all-one vector of length $n$ by $e$. The diagonal matrix with diagonal vector $x$ is denoted by $X=\text{diag}(x)$. The index set is $J=\{1, 2, \ldots, n\}$. For $x, s\in R^n$, $xs$ denotes the coordinate-wise (Hadamard) product, and $x^{T}s$ denotes the scalar product. We say $f(t)=\Theta (g(t))$ if there exist positive constants $\omega_1$ and $\omega_2$ such that $\omega_{1}g(t)\leq f(t)\leq \omega_{2}g(t)$ holds for all $t > 0$. Further, $f(t)=O(g(t))$ if there exists a positive constant $\omega$ such that $f(t)\leq \omega g(t)$ holds for all $t> 0$.

2 Preliminaries

The $P_*(\kappa)$ matrix and $P_*(\kappa)$ LCP were introduced by Kojima et al.[1]. Here we give the definition of $P_*(\kappa)$ matrix and list some important properties related to its use in IPMs.

Definition 2.1  Let $\kappa \geq 0$ be a nonnegative number. A matrix $M\in R^{n\times n}$ is called a $P_*(\kappa)$ matrix if

$\begin{eqnarray*} (1+4\kappa)\sum\limits_{i\in J_+(x)}x_i(Mx)_i+\sum\limits_{i\in J_-(x)}x_i(Mx)_i\geq 0\end{eqnarray*} $

for all $x\in R^n$, where $J_+(x)=\{i\in J:x_i(Mx)_i\geq 0\}$ and $J_-(x)=\{i\in J:x_i(Mx)_i<0\}$.

Note that for $\kappa=0$, the class of $P_*(0)$ matrices is exactly the class of positive semidefinite matrices. This implies that convex quadratic programs and LO problems form a subclass of $P_*(0)$ LCPs. In what follows, we give some definitions concerning convexity which will be essential tools in our analysis.
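
As a sanity check of Definition 2.1, the sketch below evaluates the left-hand side of the defining inequality for sampled vectors $x$; for a positive semidefinite $M$ it should be nonnegative already with $\kappa=0$. The matrix used here is a hypothetical example.

```python
import numpy as np

def p_star_lhs(M, x, kappa):
    # left-hand side of the P_*(kappa) inequality in Definition 2.1 for a single x
    y = x * (M @ x)                                   # components x_i (Mx)_i
    return (1 + 4*kappa) * y[y >= 0].sum() + y[y < 0].sum()

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
M = A.T @ A                                           # positive semidefinite, hence P_*(0)
worst = min(p_star_lhs(M, rng.standard_normal(4), kappa=0.0) for _ in range(1000))
print(worst >= -1e-10)                                # True (up to rounding)
```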

Definition 2.2  A twice differentiable function $f:X(\subset R)\rightarrow R$ is strongly convex if and only if there exists $m_0>0$ such that $f''(x)\geq m_0$ for all $x\in X$.

Definition 2.3  A function $f:X(\subset R)\rightarrow R$ is exponentially convex if and only if, for all $x_1, x_2>0$,

$ f(\sqrt{x_{1}x_{2}})\leq {{1}\over{2}} (f(x_1)+f(x_2)). $

Proposition 2.1  For $P_*(\kappa)$ matrix $M\in R^{n\times n}$, the matrix

$ M'= \left[\begin{array}{cc} -M&I\\ S&X \end{array}\right] $

is a nonsingular matrix for any positive diagonal matrices $X, S\in R^{n\times n}$[6].

Corollary 2.2  Let $M\in R^{n\times n}$ be a $P_*(\kappa)$ matrix and $x, s\in R^{n}_{++}.$ Then, for all $a\in R^n$ the system

$ \left\{ \begin{array}{ccccccc} -M\Delta x+\Delta s=0, \\ S\Delta x+X\Delta s=a \end{array} \right. $

has a unique solution $(\Delta x, \Delta s)$, where $X=\text{diag}(x)$ and $S=\text{diag}(s)$.

The basic idea of primal-dual IPMs is to replace the second equation in system (1.1), the complementarity condition, by the parameterized equation $xs=\mu e$ with $\mu>0$. The parameter $\mu$ is referred to as the central path parameter. This replacement leads to the following system:

$ \begin{equation}\label{2} \begin{cases}s=Mx+q, \\xs=\mu e, \\(x, s)>0, \end{cases} \end{equation} $ (2.1)

where $\mu>0$. Without loss of generality, we assume that system (1.1) is strictly feasible, i.e., there exists $(x^0, s^0)>0$ such that $s^0=Mx^0+q$ (see [1]). Moreover, for this given strictly feasible point $(x^0, s^0)$, we can always find a $\mu^0>0$ such that $\Psi(x^0, s^0, \mu^0)\leq \tau$.

Since $M$ is a $P_*(\kappa)$ matrix and (1.1) is strictly feasible, system (2.1) has a unique solution for any $\mu>0$. We denote the solution of (2.1) by $(x(\mu), s(\mu))$, the $\mu$-center, for given $\mu>0$. The set of unique solutions $\{(x(\mu), s(\mu)):\mu>0\}$ of system (2.1) forms the so-called central path of (1.1). As $\mu\rightarrow 0$, $(x(\mu), s(\mu))$ approaches a solution $(x, s)$ of (1.1). If $(x(\mu), s(\mu))$ is known for some positive $\mu$, then we decrease $\mu$ to $\mu:=(1-\theta)\mu$ for some fixed $\theta\in(0, 1)$ and solve the following system:

$ \begin{equation}\label{3} \begin{cases}-M\Delta x+\Delta s=0, \\s\Delta x+x\Delta s=\mu e-xs. \end{cases} \end{equation} $ (2.2)

By Corollary 2.2, system (2.2) uniquely defines a search direction $(\Delta x, \Delta s)$ for any $(x, s)>0$. We introduce the following notation:

$ \begin{equation} d:=\sqrt{x\over s}, ~~~\upsilon:=\sqrt{xs\over \mu}, ~~~dx:={{\upsilon\Delta x}\over x}, ~~~ds:={{\upsilon\Delta s}\over s}. \end{equation} $ (2.3)

Then we have the scaled Newton-system

$ \begin{equation} \begin{cases}-\bar{M}dx+ds=0, \\dx+ds=\upsilon^{-1}-\upsilon, \end{cases} \end{equation} $ (2.4)

where $\bar{M}=DMD$ and $D=\text{diag}(d)$. The second equation in (2.4) is called the scaled centering equation.

We consider a strictly convex function $\Psi(\upsilon)$ which is minimal at $\upsilon=e$ and $\Psi(e)=0$. Then we replace the scaled centering equation in (2.4), by

$ \begin{equation} dx+ds=-\nabla \Psi(\upsilon). \end{equation} $ (2.5)

Therefore we have the modified Newton-system as follows:

$ \begin{equation} \begin{cases}-M\Delta x+\Delta s=0, \\S\Delta x+X\Delta s=-\mu\upsilon\nabla \Psi(\upsilon). \end{cases} \end{equation} $ (2.6)

By Corollary 2.2, this system uniquely defines a search direction $(\Delta x, \Delta s)$.

Throughout the paper we assume that a threshold parameter $\tau$ and a fixed barrier update parameter $\theta$ are given, with $\tau=O(n)$ and $0<\theta<1$. The algorithm works as follows. Starting with a strictly feasible point $(x, s)$ in a $\tau$-neighborhood of the current $\mu$-center, we decrease $\mu$ to $\mu_+:=(1-\theta)\mu$ for some fixed $\theta\in(0, 1)$. Then we solve the modified Newton-system (2.6) to obtain the unique search direction. The positivity of the new iterate is ensured by a suitable choice of the step size $\alpha$, which is defined by some line search rule. By repeating this process, we find a new iterate $(x_+, s_+)$ in a $\tau$-neighborhood of the $\mu_+$-center. Then we let $\mu=\mu_+$ and $(x, s)=(x_+, s_+)$. Again, $\mu$ is reduced by the factor $1-\theta$ and we solve the modified Newton-system targeting the new $\mu_+$-center, and so on. This process is repeated until $\mu$ is small enough, e.g., $n\mu\leq \varepsilon$. The proximity function $\Psi(\upsilon)$ is thus used both to define the search direction and to measure the proximity between the current iterate and the $\mu$-center. The generic form of this algorithm is stated as Algorithm 1.
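
As an illustration, the following NumPy sketch implements the generic procedure just described, with a simple backtracking safeguard in place of the default step size $\tilde{\alpha}$ analyzed in Section 4; the instance at the end (a random positive semidefinite $M$ with a strictly feasible starting point) is hypothetical.

```python
import numpy as np

def psi(t):  return (t**2 - 1)/2 - (t - 1)*np.exp(1/t - 1)      # kernel (3.1), componentwise
def dpsi(t): return t - (t**2 - t + 1)/t**2*np.exp(1/t - 1)     # psi'(t)
def Psi(v):  return psi(v).sum()                                # proximity function

def newton_direction(M, x, s, mu):
    # modified Newton system (2.6): -M dx + ds = 0,  S dx + X ds = -mu * v * grad Psi(v)
    n = len(x)
    v = np.sqrt(x*s/mu)
    A = np.block([[-M, np.eye(n)], [np.diag(s), np.diag(x)]])
    rhs = np.concatenate([np.zeros(n), -mu*v*dpsi(v)])
    d = np.linalg.solve(A, rhs)
    return d[:n], d[n:]

def kernel_ipm(M, q, x, s, theta=0.5, tau=None, eps=1e-8, max_inner=500):
    # generic large-update method: the outer loop shrinks mu by (1 - theta),
    # the inner loop takes damped Newton steps until Psi(v) <= tau again
    n = len(x)
    tau = float(n) if tau is None else tau          # threshold tau = O(n)
    mu = x.dot(s)/n
    while n*mu > eps:
        mu *= (1 - theta)                           # mu-update
        for _ in range(max_inner):
            v = np.sqrt(x*s/mu)
            if Psi(v) <= tau:
                break
            dx, ds = newton_direction(M, x, s, mu)
            alpha = 1.0                             # backtracking safeguard (not the
            while alpha > 1e-12:                    # default step size of Section 4)
                xn, sn = x + alpha*dx, s + alpha*ds
                if xn.min() > 0 and sn.min() > 0 and Psi(np.sqrt(xn*sn/mu)) < Psi(v):
                    x, s = xn, sn
                    break
                alpha *= 0.5
    return x, s

# hypothetical strictly feasible instance with a P_*(0) matrix
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)); M = A.T @ A
x0 = np.ones(5); q = np.ones(5) - M @ x0; s0 = M @ x0 + q       # s0 = e > 0
x, s = kernel_ipm(M, q, x0, s0)
print("final complementarity gap x^T s:", x.dot(s))
```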

In Algorithm 1, each outer iteration consists of a $\mu$-update and a sequence of inner iterations. The total number of inner iterations is referred to as the iteration complexity of the algorithm. Usually, if $\theta=\Theta(1)$, the algorithm is called a large-update method; if $\theta=\Theta({1\over\sqrt{n}})$, it is called a small-update method.

3 The Kernel Function and Growth Behavior

In this section we consider the univariate function $\psi(t):$

$ \begin{equation} \psi(t)={{t^2-1}\over 2}-{(t-1)e^{{1\over t}-1}}, ~~~ t>0. \end{equation} $ (3.1)

This function is strongly convex and not self-regular.

To simplify the analysis, we restrict ourselves to the case where the proximity function $\Psi(\upsilon)$ is separable with identical coordinate functions, i.e.,

$ \begin{eqnarray*} \Psi(\upsilon):=\sum\limits_{i=1}^n\psi(\upsilon_i). \end{eqnarray*} $ (3.2)

We call the univariate function $\psi(t)$ the kernel function of the proximity function $\Psi(\upsilon)$. For $\psi(t)$, we have

$ \begin{equation} \psi'(t)=t-{{t^2-t+1}\over {t^2}}e^{{1\over t}-1}, \\ ~~~\psi''(t)=1+{{t+1}\over {t^4}}e^{{1\over t}-1}\geq 1, ~~~\psi'''(t)=-{{3t^2+5t+1}\over {t^6}}e^{{1\over t}-1}<0. \end{equation} $ (3.3)

It follows that $\psi(1)=\psi'(1)=0$ and $\psi''(t)\geq 1$, which shows that $\psi(t)$ is indeed a strongly convex kernel function. It is also quite straightforward that

$ \begin{eqnarray*} \lim\limits_{t\rightarrow 0}\psi(t)=\lim\limits_{t\rightarrow +\infty}\psi(t)=+\infty, ~~~~~\psi'''(t)<0, ~~~~~t>0. \end{eqnarray*} $

Due to $\psi(1)=\psi'(1)=0$, $\psi(t)$ is completely determined by its second derivative:

$ \begin{equation} \psi(t)=\int_1^t\int_1^\xi\psi''(\zeta){d\zeta}{d\xi}. \end{equation} $ (3.4)
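
A quick numerical check of (3.1)-(3.3) is given below (a minimal sketch; the sample points are arbitrary).

```python
import math

def psi(t):    return (t*t - 1.0)/2.0 - (t - 1.0)*math.exp(1.0/t - 1.0)   # (3.1)
def dpsi(t):   return t - (t*t - t + 1.0)/(t*t)*math.exp(1.0/t - 1.0)     # psi'(t)
def ddpsi(t):  return 1.0 + (t + 1.0)/t**4*math.exp(1.0/t - 1.0)          # psi''(t)

print(psi(1.0), dpsi(1.0))                                       # both 0, as required
print(all(ddpsi(t) >= 1.0 for t in (0.1, 0.5, 1.0, 3.0, 10.0)))  # strong convexity
print(psi(1e-2), psi(1e2))       # psi grows without bound as t -> 0+ and t -> +infinity
```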

In the analysis of the algorithm, we also define the norm-based proximity measure $\delta(\upsilon)$ as follows:

$ \begin{equation} \delta(\upsilon):={1\over 2}\|\nabla \Psi(\upsilon) \|={1\over 2}\|dx+ds\|. \end{equation} $ (3.5)

Since $\Psi(\upsilon)$ is strongly convex and minimal at $\upsilon=e$, we have

$ \begin{equation} \Psi(\upsilon)=0\Longleftrightarrow \delta(\upsilon)=0\Longleftrightarrow\upsilon=e. \end{equation} $ (3.6)

In what follows, we give some key properties which are important in our analysis.

Lemma 3.1   $\psi(t)$ is exponentially convex, that is, for all $t_1, t_2>0$,

$ \psi(\sqrt{t_{1}t_{2}})\leq {{1}\over{2}} (\psi(t_1)+\psi(t_2)). $

Proof  This result follows easily by Lemma 1 in [3], which states that the above inequality holds if and only if $t\psi''(t)+\psi'(t)\geq 0$ for all $t>0$. Using simple calculus, we obtain

$ t\psi''(t)+\psi'(t)=2t+{{1+t^2-t^3}\over{t^3}}e^{{1\over t}-1}>0, ~~~ t>0. $

So this completes the proof of the lemma.

Lemma 3.2  Kernel function $\psi(t)$ satisfies the following properties:

(ⅰ) $ {{1}\over {2}}(t-1)^2\leq \psi(t)\leq {{1}\over {2}}\psi'(t)^2$, for all $t>0$.

(ⅱ) $\Psi(\upsilon)\leq 2\delta(\upsilon)^2$.

(ⅲ) $\|\upsilon\|\leq\sqrt{n}+\sqrt{2\Psi(\upsilon)}\leq\sqrt{n}+2\delta(\upsilon)$.

Proof  Using (3.4) and $\psi''(t)\geq 1$, for all $t>0$, one can easily obtain that

$ {1\over 2}(t-1)^2\leq \psi(t), $

which proves the first inequality. Then

$ \begin{eqnarray*} \psi(t)&=&\int_1^t\int_1^\xi\psi''(\zeta){d\zeta}{d\xi}\leq\int_1^t\int_1^\xi\psi''(\xi)\psi''(\zeta){d\zeta}{d\xi} \\&=&\int_1^t\psi'(\xi){d\psi'(\xi)}={1\over 2}\psi'(t)^2. \end{eqnarray*} $

This completes the proof of (ⅰ). In order to prove (ⅱ), using the second inequality of (ⅰ) and (3.5), we have

$ \begin{eqnarray*} \Psi(\upsilon)=\sum\limits_{i=1}^n\psi(\upsilon_i)\leq{1\over 2}\sum\limits_{i=1}^n\psi'(\upsilon_i)^2={1\over 2}\|\nabla \Psi(\upsilon) \|^2=2\delta(\upsilon)^2, \end{eqnarray*} $

which proves (ⅱ). Now, using the first inequality of (ⅰ), we have

$ \begin{eqnarray*} 2\Psi(\upsilon)=2\sum\limits_{i=1}^n\psi(\upsilon_i)\geq\sum\limits_{i=1}^n(\upsilon_i-1)^2=\|\upsilon\|^2-2e^T\upsilon+\|e\|^2\geq(\|\upsilon\|-\|e\|)^2, \end{eqnarray*} $

which implies that $\|\upsilon\|\leq\sqrt{n}+\sqrt{2\Psi(\upsilon)}\leq\sqrt{n}+2\delta(\upsilon)$.
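
A numerical spot-check of Lemma 3.2 on a random vector follows (a minimal sketch; the sample is arbitrary).

```python
import math, random

def psi(t):  return (t*t - 1)/2 - (t - 1)*math.exp(1/t - 1)
def dpsi(t): return t - (t*t - t + 1)/(t*t)*math.exp(1/t - 1)

random.seed(1)
v = [random.uniform(0.2, 4.0) for _ in range(10)]
Psi_v   = sum(psi(t) for t in v)
delta_v = 0.5*math.sqrt(sum(dpsi(t)**2 for t in v))   # norm-based proximity (3.5)
norm_v  = math.sqrt(sum(t*t for t in v))

print(all(0.5*(t-1)**2 <= psi(t) <= 0.5*dpsi(t)**2 + 1e-12 for t in v))        # (i)
print(Psi_v <= 2*delta_v**2 + 1e-12)                                           # (ii)
print(norm_v <= math.sqrt(len(v)) + math.sqrt(2*Psi_v) <= math.sqrt(len(v)) + 2*delta_v + 1e-12)  # (iii)
```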

At the start of any outer iteration of the algorithm, just before the update of $\mu$ with the factor $1-\theta$, we have $\Psi(\upsilon)\leq \tau$. Due to the update of $\mu$, the vector $\upsilon$ is divided by the factor $\sqrt{1-\theta}$, with $0<\theta<1$, which in general leads to an increase in the value of $\Psi(\upsilon)$. Then, during the subsequent inner iterations, $\Psi(\upsilon)$ decreases until it passes the threshold $\tau$ again. Hence, during the course of the algorithm the largest values of $\Psi(\upsilon)$ occur just after the updates of $\mu$. In the next two lemmas we investigate the growth behavior of $\Psi(\upsilon)$ after a $\mu$-update.

Lemma 3.3  Let $\beta\geq 1$. Then $\psi(\beta t)\leq\psi(t)+{1\over 2}(\beta^2-1)t^2$.

Proof  Using the definition of $\psi(t)$, we may write

$ \begin{eqnarray*} \psi(t)+{1\over 2}(\beta^2-1)t^2-\psi(\beta t)=(\beta t-1)e^{{1\over \beta t}-1}-(t-1)e^{{1\over t}-1}. \end{eqnarray*} $

Defining $h(\beta)=(\beta t-1)e^{{1\over \beta t}-1}$, we have $h(1)=(t-1)e^{{1\over t}-1}$ and

$ \begin{eqnarray*} h'(\beta)=te^{{1\over \beta t}-1}+{{1-\beta t}\over {\beta^2t}}e^{{1\over \beta t}-1}={{(\beta t-{1\over 2})^2+{3\over 4}}\over{\beta^2t}}e^{{1\over \beta t}-1}>0, ~~~~\forall ~~\beta\geq 1. \end{eqnarray*} $

Thus, $h(\beta)\geq h(1)$ for all $\beta\geq 1$, that is,

$ \begin{eqnarray*} (\beta t-1)e^{{1\over \beta t}-1}\geq (t-1)e^{{1\over t}-1}, \end{eqnarray*} $

which implies the lemma.

Lemma 3.4  Let $0<\theta<1$ and $\upsilon_+={\upsilon\over\sqrt{1-\theta}}$. Then

$ \begin{eqnarray*} \Psi(\upsilon_+)\leq\Psi(\upsilon)+{\theta\over{2(1-\theta)}}(2\Psi(\upsilon)+2\sqrt{2n\Psi(\upsilon)}+n). \end{eqnarray*} $

Proof  Using Lemma 3.3, with $\beta={1\over\sqrt{1-\theta}}$, we have

$ \begin{eqnarray*} \Psi(\beta\upsilon)=\sum\limits_{i=1}^n\psi(\beta\upsilon_i)\leq\sum\limits_{i=1}^n\psi(\upsilon_i)+{1\over 2}(\beta^2-1)\sum\limits_{i=1}^n\upsilon_i^2=\Psi(\upsilon)+{\theta\|\upsilon\|^2\over 2(1-\theta)}. \end{eqnarray*} $

Using Lemma 3.2 (ⅲ), we obtain

$ \begin{eqnarray*} \Psi(\upsilon_+)\leq\Psi(\upsilon)+{{\theta(\sqrt{n}+\sqrt{2\Psi(\upsilon)})^2}\over 2(1-\theta)}\leq\Psi(\upsilon)+{\theta\over{2(1-\theta)}}(2\Psi(\upsilon)+2\sqrt{2n\Psi(\upsilon)}+n). \end{eqnarray*} $

This completes the proof.
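
The growth estimate of Lemma 3.4 can be checked numerically in the same spirit (a minimal sketch with arbitrary $\upsilon$, $\theta$, $n$).

```python
import math, random

def psi(t): return (t*t - 1)/2 - (t - 1)*math.exp(1/t - 1)
def Psi(v): return sum(psi(t) for t in v)

random.seed(0)
n, theta = 8, 0.7
v = [random.uniform(0.3, 3.0) for _ in range(n)]
lhs = Psi([t / math.sqrt(1 - theta) for t in v])                      # Psi(v_+)
rhs = Psi(v) + theta/(2*(1 - theta))*(2*Psi(v) + 2*math.sqrt(2*n*Psi(v)) + n)
print(lhs <= rhs)                                                     # True
```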

4 Decrease of the Barrier Function During an Inner Iteration

In this section, we determine a feasible step size $\alpha$ and obtain an upper bound for the decrease of the proximity function. After a damped step with step size $\alpha$, we define

$ \begin{equation*} x_+=x+\alpha\Delta x, ~~~~~~~~~~~~~ s_+=s+\alpha\Delta s. \end{equation*} $

Using (2.3), we have

$ \begin{equation} x_+=x(e+\alpha{\Delta x\over x})=x(e+\alpha{dx\over\upsilon})={x\over\upsilon}(\upsilon+\alpha dx), \\ s_+=s(e+\alpha{\Delta s\over s})=s(e+\alpha{ds\over\upsilon})={s\over\upsilon}(\upsilon+\alpha ds), \\ {\upsilon_+}^2={{x_+s_+}\over\mu}=(\upsilon+\alpha dx)(\upsilon+\alpha ds). \end{equation} $ (4.1)

Now assume that the step size $\alpha$ is such that the coordinates of the vectors $\upsilon+\alpha dx$ and $\upsilon+\alpha ds$ are positive. Then, using Lemma 3.1, we have

$ \begin{equation*} \Psi(\upsilon_+)=\Psi(\sqrt{(\upsilon+\alpha dx)(\upsilon+\alpha ds)})\leq{1\over 2}(\Psi(\upsilon+\alpha dx)+\Psi(\upsilon+\alpha ds)). \end{equation*} $

Defining

$ \begin{equation*} f(\alpha)=\Psi(\upsilon_+)-\Psi(\upsilon), \\ f_1(\alpha)={1\over 2}(\Psi(\upsilon+\alpha dx)+\Psi(\upsilon+\alpha ds))-\Psi(\upsilon), \end{equation*} $

then $f(\alpha)\leq f_1(\alpha)$. Note that $f(0)=f_1(0)=0$.

By taking the first derivative of $f_1(\alpha)$ with respect to $\alpha$, we obtain

$ \begin{eqnarray*} f_1'(\alpha)={1\over 2}\sum\limits_{i=1}^n(\psi'(\upsilon_i+\alpha(dx)_i)(dx)_i+\psi'(\upsilon_i+\alpha(ds)_i)(ds)_i). \end{eqnarray*} $

From (2.5) and (3.5), we have

$ \begin{equation} f_1'(0)={1\over 2}\nabla\Psi(\upsilon)^T(dx+ds)={-{1\over 2}}\nabla\Psi(\upsilon)^T\nabla\Psi(\upsilon)=-2\delta(\upsilon)^2. \end{equation} $ (4.2)

By taking the second derivative of $f_1(\alpha)$ with respect to $\alpha$, we obtain

$ \begin{eqnarray*} f_1''(\alpha)={1\over 2}\sum\limits_{i=1}^n(\psi''(\upsilon_i+\alpha(dx)_i)(dx)_i^2+\psi''(\upsilon_i+\alpha(ds)_i)(ds)_i^2). \end{eqnarray*} $ (4.3)

Since $M$ is a $P_*(\kappa)$ matrix and $M\Delta x=\Delta s$ from (2.6), for $\Delta x, \Delta s\in R^n$, we have

$ \begin{eqnarray*} (1+4\kappa)\sum\limits_{i\in J_+(x)}\Delta x_i\Delta s_i+\sum\limits_{i\in J_-(x)}\Delta x_i\Delta s_i\geq 0, \end{eqnarray*} $

where $J_+(x)=\{i\in J:\Delta x_i\Delta s_i\geq 0\}$ and $J_-(x)=J\setminus J_+(x)$. Since $dxds={{\upsilon^2\Delta x\Delta s}\over{xs}}={{\Delta x\Delta s}\over{\mu}}$, we also have

$ \begin{eqnarray*} (1+4\kappa)\sum\limits_{i\in J_+(x)}(dx)_i(ds)_i+\sum\limits_{i\in J_-(x)}(dx)_i(ds)_i\geq 0. \end{eqnarray*} $ (4.4)

For notational convenience, we introduce the following notation:

$ \begin{eqnarray*} \delta=\delta(\upsilon), ~~~~\sigma_+=\sum\limits_{i\in J_+(x)}(dx)_i(ds)_i, ~~~~\sigma_-=\sum\limits_{i\in J_-(x)}(dx)_i(ds)_i. \end{eqnarray*} $

To compute the bound of $\|dx\|$ and $\|ds\|$, we need the following technical lemma.

Lemma 4.1  One has $\sigma_+\leq\delta^2$ and $|\sigma_-|\leq(1+4\kappa)\delta^2$.

Proof  From the definition of $\sigma_+$, $\sigma_-$ and $\delta$, we have

$ \begin{eqnarray*} \sigma_+=\sum\limits_{i\in J_+(x)}(dx)_i(ds)_i\leq{1\over 4}\sum\limits_{i\in J_+(x)}((dx)_i+(ds)_i)^2\leq{1\over 4}\sum\limits_{i=1}^n((dx)_i+(ds)_i)^2=\delta^2. \end{eqnarray*} $

From (4.4), we also have

$ \begin{equation*} (1+4\kappa)\sigma_++\sigma_-\geq 0, \\ |\sigma_-|=-\sigma_-\leq(1+4\kappa)\sigma_+\leq(1+4\kappa)\delta^2. \end{equation*} $

This completes the proof.

To compute an upper bound for the decrease of the proximity function during an iteration, we need the following technical lemmas.

Lemma 4.2(Lemma 4.4 in[9])   $\|dx\|\leq 2\sqrt{1+2\kappa}\delta$ and $\|ds\|\leq 2\sqrt{1+2\kappa}\delta$.

Lemma 4.3(Lemma 4.5 in[9]) $f_1''(\alpha)\leq2(1+2\kappa)\delta^2\psi''(\upsilon_\text{min}-2\alpha\sqrt{1+2\kappa}\delta)$.

Lemma 4.4(Lemma 4.6 in[9]) $f_1'(\alpha)\leq 0$ if $\alpha$ satisfies the following inequality

$ \begin{equation}\label{5} \psi'(\upsilon_\text{min})-\psi'(\upsilon_\text{min}-2\alpha\sqrt{1+2\kappa}\delta)\leq{{2\delta}\over{\sqrt{1+2\kappa}}}. \end{equation} $ (4.5)

Note that $\psi''(t)\geq 1$ for $t>0$, so $-{1\over 2}\psi'(t)$ is strictly decreasing and has an inverse. Let $\rho:[0, \infty)\rightarrow(0, 1]$ denote the inverse of the restriction of $-{1\over 2}\psi'(t)$ to the interval $(0, 1]$. We want to estimate the step size $\alpha$ as large as possible. The derivative of the left-hand side of (4.5) with respect to $\alpha$ is $2\sqrt{1+2\kappa}\delta\psi''(\upsilon_\text{min}-2\alpha\sqrt{1+2\kappa}\delta)>0$, since $\psi''(t)>0$. Hence the left-hand side of (4.5) is monotonically increasing in $\alpha$, and the largest possible value of $\alpha$ satisfying (4.5) occurs when

$ \begin{equation} \psi'(\upsilon_\text{min})-\psi'(\upsilon_\text{min}-2\alpha\sqrt{1+2\kappa}\delta)={{2\delta}\over{\sqrt{1+2\kappa}}}. \end{equation} $ (4.6)

The derivative of the left-hand side of (4.6) with respect to $\upsilon_\text{min}$ is

$ \begin{equation*} \psi''(\upsilon_\text{min})-\psi''(\upsilon_\text{min}-2\alpha\sqrt{1+2\kappa}\delta)<0, \end{equation*} $

since $\psi'''(t)<0$ implies that $\psi''(t)$ is decreasing. Hence the left-hand side of (4.6) is decreasing in $\upsilon_\text{min}$, which implies that, for fixed $\delta$, $\alpha$ gets smaller if $\upsilon_\text{min}$ gets smaller.

By the definition of $\delta$ and $\Psi(\upsilon)$, we have

$ \begin{equation*} \delta={1\over 2}\|\nabla\Psi(\upsilon)\|\geq-{1\over 2}\psi'(\upsilon_\text{min}), \end{equation*} $

and the equality holds if and only if $\upsilon_\text{min}$ is the only coordinate in $\upsilon$ which is different from 1 and $\upsilon_\text{min}\leq 1$, i.e. $\psi'(\upsilon_\text{min})\leq 0$. Hence, the worst situation for the step size occurs when $\upsilon_\text{min}$ satisfies

$ \begin{equation} -{1\over 2}\psi'(\upsilon_\text{min})=\delta. \end{equation} $ (4.7)

In this case by (4.7) and the definition of $\rho$, we have $\upsilon_\text{min}=\rho(\delta)$.

By (4.6) and (4.7),

$ \begin{equation} -{1\over 2}\psi'(\upsilon_\text{min}-2\alpha\sqrt{1+2\kappa}\delta)=\delta+{\delta\over\sqrt{1+2\kappa}}. \end{equation} $ (4.8)

Then by (4.8) and the definition of $\rho$, we have

$ \begin{equation*} \upsilon_\text{min}-2\alpha\sqrt{1+2\kappa}\delta=\rho(\delta+{\delta\over\sqrt{1+2\kappa}}). \end{equation*} $

So by $\upsilon_\text{min}=\rho(\delta)$, the largest step size $\alpha$ is given as follows:

$ \begin{equation} \bar{\alpha}={1\over{2\delta\sqrt{1+2\kappa}}}(\rho(\delta)-\rho(\delta+{\delta\over\sqrt{1+2\kappa}})). \end{equation} $ (4.9)

Now we want to compute the lower bound for $\bar{\alpha}$.

Lemma 4.5  Let $\rho$ and $\bar{\alpha}$ be defined as above. Then we have

$ \begin{equation} \bar{\alpha}\geq{1\over(1+2\kappa)\psi''(\rho(\delta+{\delta\over\sqrt{1+2\kappa}}))}. \end{equation} $ (4.10)

Proof  By the definition of $\rho$, $-\psi'(\rho(\delta))=2\delta$. Taking the derivative with respect to $\delta$, we obtain

$ \begin{equation*} \rho'(\delta)=-{2\over \psi''(\rho(\delta))}<0. \end{equation*} $

Thus, $\rho$ is monotonically decreasing. By substituting this equality in (4.9), we have

$ \begin{equation*} \bar{\alpha}={1\over{2\delta\sqrt{1+2\kappa}}} \int_{\delta+{\delta\over\sqrt{1+2\kappa}}}^\delta\rho'(\xi)d\xi={1\over{\delta\sqrt{1+2\kappa}}} \int_\delta^{\delta+{\delta\over\sqrt{1+2\kappa}}}{d\xi\over\psi''(\rho(\xi))}. \end{equation*} $

Since $\delta\leq\xi\leq\delta+{\delta\over\sqrt{1+2\kappa}}$ and $\rho$ is monotonically decreasing, $\rho(\xi)\geq\rho(\delta+{\delta\over\sqrt{1+2\kappa}})$. Since $\psi''(t)$ is monotonically decreasing, $\psi''(\rho(\xi))\leq\psi''(\rho(\delta+{\delta\over\sqrt{1+2\kappa}}))$. Therefore, we have the following lower bound for $\bar{\alpha}$:

$ \begin{equation*} \bar{\alpha}\geq{1\over\sqrt{1+2\kappa}\delta\psi''(\rho(\delta+{\delta\over\sqrt{1+2\kappa}}))}(\delta+{\delta\over\sqrt{1+2\kappa}}-\delta)={1\over(1+2\kappa)\psi''(\rho(\delta+{\delta\over\sqrt{1+2\kappa}}))}. \end{equation*} $

This completes the proof.

Define

$ \begin{equation} \tilde{\alpha}={1\over(1+2\kappa)\psi''(\rho(\delta+{\delta\over\sqrt{1+2\kappa}}))}. \end{equation} $ (4.11)

We use $\tilde{\alpha}$ as the default step size in our algorithm. By Lemma 4.5, $\tilde{\alpha}\leq\bar{\alpha}$.
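
Since $-{1\over 2}\psi'$ has no closed-form inverse, $\rho$ (and hence $\tilde{\alpha}$) can be evaluated numerically, e.g. by bisection. The sketch below is one such implementation; the bracket $[2\times 10^{-3}, 1]$ is an assumption that covers all practically relevant values of $\delta$, since $-{1\over 2}\psi'(t)$ is astronomically large for smaller $t$.

```python
import math

def dpsi(t):  return t - (t*t - t + 1.0)/(t*t)*math.exp(1.0/t - 1.0)   # psi'(t)
def ddpsi(t): return 1.0 + (t + 1.0)/t**4*math.exp(1.0/t - 1.0)        # psi''(t)

def rho(z, iters=200):
    # inverse of -0.5*psi'(t) restricted to (0, 1]: find t with -0.5*psi'(t) = z >= 0
    lo, hi = 2e-3, 1.0            # -0.5*psi'(t) is decreasing; bracket assumed wide enough
    for _ in range(iters):
        mid = 0.5*(lo + hi)
        if -0.5*dpsi(mid) > z:
            lo = mid              # root lies to the right of mid
        else:
            hi = mid
    return 0.5*(lo + hi)

def default_step_size(delta, kappa):
    # tilde-alpha from (4.11)
    s = rho(delta + delta/math.sqrt(1.0 + 2.0*kappa))
    return 1.0 / ((1.0 + 2.0*kappa)*ddpsi(s))

print(default_step_size(delta=2.0, kappa=0.25))
```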

In the following, we estimate the value of $f(\tilde{\alpha})$. For this purpose, we cite the following technical lemma from [3] without proof.

Lemma 4.6  Let $h(t)$ be a twice differentiable convex function with $h(0)=0$, $h'(0)<0$, which attains its (global) minimum at $t^{*}>0$. If $h''(t)$ is increasing for $t\in[0, t^{*}]$, then

$ \begin{equation} h(t)\leq{{th'(0)}\over 2}, ~~~~t\in[0, t^{*}]. \end{equation} $ (4.12)

Now, we can present an upper bound for $f(\tilde{\alpha})$.

Lemma 4.7(Lemma 5.2 in [7])  If the step size $\alpha$ satisfies $\alpha\leq\bar{\alpha}$, then we have

$ \begin{equation} f(\alpha)\leq-\alpha\delta^2. \end{equation} $ (4.13)

Lemma 4.8  Let $\rho$ be the inverse function of the restriction of $-{1\over 2}\psi'(t)$ to the interval $(0, 1]$ and $\tilde{\alpha}$ be defined as in (4.11). Then

$ \begin{equation} f(\tilde{\alpha})\leq{{-\delta^2}\over{(1+2\kappa)\psi''(\rho(\delta+{\delta\over\sqrt{1+2\kappa}}))}}. \end{equation} $ (4.14)

Proof  The result follows immediately by applying Lemma 4.7 to the default step size $\tilde{\alpha}$, since $\tilde{\alpha}\leq\bar{\alpha}$ by Lemma 4.5, and then substituting (4.11).

Now we apply the results obtained so far to our proximity function. To this end, we need to compute $s=\rho(\delta+{\delta\over\sqrt{1+2\kappa}})$. By the definition of $\rho$, $-\psi'(s)=2(\delta+{\delta\over\sqrt{1+2\kappa}})$. So from (3.3) and (4.11),

$ \begin{eqnarray*} \tilde{\alpha}={1\over(1+2\kappa)\psi''(s)}={1\over{(1+2\kappa)(1+{{s+1}\over {s^4}}e^{{1\over s}-1})}}, \\ e^{{1\over s}-1}={{s^2}\over {s^2-s+1}}(s+2(\delta+{\delta\over\sqrt{1+2\kappa}})). \end{eqnarray*} $

Thus,

$ \begin{equation*} {1\over s}=1+\log(s+2(\delta+{\delta\over\sqrt{1+2\kappa}}))+\log{{s^2}\over{s^2-s+1}}. \end{equation*} $

Since $0< s\leq 1$ implies ${{s^2}\over{s^2-s+1}}\leq 1$, we have

$ \begin{eqnarray*} {1\over s}\leq 1+\log(s+2(\delta+{\delta\over\sqrt{1+2\kappa}}))\leq 1+\log(1+4\delta), \\ {1\over s^2}\leq (1+\log(1+4\delta))^2, ~~~~~~~ s^2-s+1\geq {3\over 4}. \end{eqnarray*} $

Therefore, we conclude that

$ \begin{equation*} \tilde{\alpha}\geq{1\over{(1+2\kappa)(1+{{(s+1)(s+4\delta)}\over {s^2(s^2-s+1)}})}}\geq{1\over{4(1+2\kappa)(1+4\delta)(1+\log(1+4\delta))^2}}. \end{equation*} $

Now, applying Lemma 4.7 with the default step size $\alpha=\tilde{\alpha}$ and using the above lower bound on $\tilde{\alpha}$, we know

$ \begin{equation} f(\alpha)\leq-\alpha\delta^2\leq -{\delta^2\over{4(1+2\kappa)(1+4\delta)(1+\log(1+4\delta))^2}}. \end{equation} $ (4.15)

Since the right-hand side of (4.15) is monotonically decreasing in $\delta$ and $\delta\geq\sqrt{\Psi/2}$ by Lemma 3.2 (ⅱ), we can express the decrease in terms of $\Psi=\Psi(\upsilon)$ as follows:

$ \begin{eqnarray} f(\alpha)&\leq&{-{\Psi}\over{8(1+2\kappa)(1+4\sqrt{\Psi\over 2})(1+\log(1+4\sqrt{\Psi\over 2}))^2}}\nonumber\\ &\leq&{-{\sqrt{\Psi}}\over{24\sqrt{2}(1+2\kappa)(1+\log(1+2\sqrt{2\Psi_0}))^2}}\nonumber\\ &=&\Theta\big({-{\sqrt{\Psi}}\over{(1+2\kappa)(1+\log\Psi_0)^2}}\big), \label{16} \end{eqnarray} $ (4.16)

where the last equality is obtained from the definition of $\Theta$ and the fact that during an inner iteration in Algorithm 1, $\Psi_0\geq\Psi(\upsilon)\geq\tau\geq 1$.
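
The explicit lower bound on $\tilde{\alpha}$ obtained above also gives a step size that can be used directly, without inverting $\psi'$ (a minimal sketch; the sample values of $\delta$ and $\kappa$ are arbitrary).

```python
import math

def step_lower_bound(delta, kappa):
    # explicit lower bound on the default step size derived before (4.15)
    return 1.0 / (4*(1 + 2*kappa)*(1 + 4*delta)*(1 + math.log(1 + 4*delta))**2)

for delta in (0.5, 2.0, 10.0):
    print(delta, step_lower_bound(delta, kappa=0.25))
```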

5 Iteration Complexity of Algorithm 1

After the update of $\mu$ to $(1-\theta)\mu$, by Lemma 3.4 we have

$ \begin{equation*} \Psi(\upsilon_+)\leq\Psi(\upsilon)+{\theta\over{2(1-\theta)}}(2\Psi(\upsilon)+2\sqrt{2n\Psi(\upsilon)}+n). \end{equation*} $

At the start of an outer iteration, we have $\Psi(\upsilon)\leq \tau$. We assume that $\tau=O(n)$ and, for the large-update method, $\theta=\Theta(1)$. We need to count how many inner iterations are required to return to the situation $\Psi(\upsilon)\leq \tau$ after a $\mu$-update. We denote the value of $\Psi(\upsilon)$ just after the $\mu$-update by $\Psi_0$, and the subsequent values in the same outer iteration by $\Psi_i$, $i=1, 2, \ldots, K$, with $K$ denoting the total number of inner iterations in the outer iteration. Then we have

$ \begin{equation*} \Psi_0\leq \tau+{\theta\over{2(1-\theta)}}(2\tau+2\sqrt{2n\tau}+n)=O(n), ~~~~\Psi_{i-1}>\tau ~~(1\leq i\leq K), ~~~~0\leq\Psi_K\leq \tau. \end{equation*} $

We need the following technical result (Lemma 14 in [3]) to get the iteration bound.

Lemma 5.1  Let $t_0, t_1, \ldots, t_K$ be a sequence of positive numbers such that

$ \begin{equation*} t_{i+1}\leq t_i-\beta t_i^{1-\gamma}, ~~i=0, \ldots, K-1, \end{equation*} $

where $\beta>0$ and $0<\gamma\leq 1$. Then $K\leq \lceil t_0^\gamma /(\beta\gamma)\rceil$.

Lemma 5.2  Let $K$ denote the total number of inner iterations in an outer iteration. Then we have

$ \begin{equation*} K\leq \big \lceil O\big( (1+2\kappa)\sqrt{n}(\log n)^2\big) \big \rceil. \end{equation*} $

Proof  From (4.16), we have

$ \begin{equation*} \Psi_{i+1}\leq\Psi_i-{{\sqrt{\Psi_i}}\over{24\sqrt{2}(1+2\kappa)(1+\log(1+2\sqrt{2\Psi_0}))^2}}. \end{equation*} $

Letting

$ \begin{equation*} t_i=\Psi_i, ~~~~\beta={1\over{24\sqrt{2}(1+2\kappa)(1+\log(1+2\sqrt{2\Psi_0}))^2}}, ~~~~\gamma=\frac{1}{2}, \end{equation*} $

using Lemma 5.1 we can obtain that

$ \begin{eqnarray*} K&\leq& \big \lceil 48\sqrt{2}(1+2\kappa)(1+\log(1+2\sqrt{2\Psi_0}))^2{\Psi_0}^{\frac{1}{2}} \big \rceil\\ &=&\big \lceil O\big( (1+2\kappa)\sqrt{n}(\log n)^2\big) \big \rceil. \end{eqnarray*} $

This completes the proof.

Theorem 5.1  Given that $\tau=O(n)$ and $\theta=\Theta(1)$, which are characteristic of large-update methods, Algorithm 1 requires at most $O\big((1+2\kappa) \sqrt{n}(\log n)^{2}\log\frac{n}{\varepsilon}\big)$ iterations.

Proof  It is well known that the number of outer iterations is bounded by $\frac{1}{\theta}\log\frac{n}{\varepsilon}$ [9]. Multiplying this number by the upper bound on the number of inner iterations per outer iteration from Lemma 5.2, and using $\tau=O(n)$ and $\theta=\Theta(1)$, we obtain that the total number of iterations is bounded by

$ \begin{equation*} O\big((1+2\kappa) \sqrt{n}(\log n)^{2}\log\frac{n}{\varepsilon}\big). \end{equation*} $
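
For concreteness, the bounds of Lemma 5.2 and Theorem 5.1 can be evaluated numerically (a minimal sketch; the choices $\tau=n$, $\theta=1/2$ and the sample values of $n$, $\kappa$, $\varepsilon$ are assumptions for illustration).

```python
import math

def inner_bound(psi0, kappa):
    # Lemma 5.2: K <= ceil(48*sqrt(2)*(1+2k)*(1+log(1+2*sqrt(2*Psi0)))^2*sqrt(Psi0))
    return math.ceil(48*math.sqrt(2)*(1 + 2*kappa)
                     * (1 + math.log(1 + 2*math.sqrt(2*psi0)))**2 * math.sqrt(psi0))

def total_bound(n, kappa, eps, theta=0.5, tau=None):
    tau = float(n) if tau is None else tau                      # tau = O(n)
    psi0 = tau + theta/(2*(1 - theta))*(2*tau + 2*math.sqrt(2*n*tau) + n)
    outer = math.ceil(math.log(n/eps)/theta)                    # outer iterations
    return outer * inner_bound(psi0, kappa)

print(total_bound(n=100, kappa=0.25, eps=1e-6))
```
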
6 Concluding Remarks

In this paper, we introduced a new kernel function, which has a simple algebraic expression and, to our knowledge, has not been used before. Based on the new kernel function, we proposed a polynomial interior-point algorithm for $P_*(\kappa)$ LCPs and showed that this large-update method has the polynomial complexity $O\big((1+2\kappa) \sqrt{n}(\log n)^{2}\log\frac{n}{\varepsilon}\big)$. This result is better than the bounds of large-update primal-dual IPMs based on some other kernel functions. Our analysis of the iteration complexity based on the new kernel function also differs from the analysis of other primal-dual IPMs based on logarithmic barrier functions and recently introduced kernel functions.

References
[1] Kojima M, Megiddo N, Noma T, Yoshise A. A unified approach to interior point algorithms for linear complementarity problems[J]. Operations Research Letters, 1991, 10(5): 247–254. DOI:10.1016/0167-6377(91)90010-M
[2] Illes T, Nagy M. A Mizuno-Todd-Ye predictor-corrector algorithm for sufficient linear complementarity problems[J]. European Journal of Operational Research, 2007, 181(3): 1097–1111. DOI:10.1016/j.ejor.2005.08.031
[3] Peng J, Roos C, Terlaky T. Self-regular functions and new search directions for linear and semidefinite optimization[J]. Math. Program., 2002, 93: 129–171. DOI:10.1007/s101070200296
[4] Bai Y Q, Roos C. A polynomial-time algorithm for linear optimization based on a new simple kernel function[J]. Optimization Methods and Software, 2003, 18(6): 631–646. DOI:10.1080/10556780310001639735
[5] He S, Li X, Pan S. A self-adjusting interior point algorithm for linear complementarity problems[J]. Comput. Math. Appl., 2005, 50(1-2): 33–40. DOI:10.1016/j.camwa.2005.03.002
[6] Cho G M, Kim M K. A new large-update interior point algorithm for $P_*(\kappa)$-LCPs based on kernel functions[J]. Appl. Math. Comput., 2006, 182: 1169–1183.
[7] Amini K, Peyghami M R. Exploring complexity of large update interior-point methods for $P_*(\kappa)$ linear complementarity problem based on kernel function[J]. Appl. Math. Comput., 2009, 207: 501–513.
[8] Wang G Q, Bai Y Q. Polynomial interior-point algorithms for $P_*(\kappa)$ horizontal linear complementarity problem[J]. J. Comput. Appl. Math., 2009, 233: 248–263. DOI:10.1016/j.cam.2009.07.014
[9] Peyghami M R. A kernel function based interior-point methods for solving $P_*(\kappa)$-linear complementarity problem[J]. Acta Math. Sinica, English Series, 2010, 26(9): 1761–1778. DOI:10.1007/s10114-010-7529-5