This paper considers the following minimax linear fractional programming problem (MLFP)
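Consistent with the definitions below, the problem can be written as
$$({\rm MLFP}):\quad \min\limits_{x\in D}\ \max\limits_{i=1, \cdots, p}\ \frac{n_i(x)}{d_i(x)},$$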
where $p\geq 2$, $A\in R^{m\times n}$, $b\in R^m$, $n_i(x)=\sum\limits_{j=1}^nc_{ij}x_j+d_i$ and $d_i(x)=\sum\limits_{j=1}^ne_{ij}x_j+f_i$ are affine functions, $D=\{x\in R^n\mid Ax\leq b\}$ is bounded with ${\rm int}\, D\neq \emptyset$, and for all $x\in D$, $n_i(x)\geq0, \ d_i(x)>0, \ i=1, \cdots, p.$ In fact, if the preprocessing method of [1] is applied to problem MLFP, only $d_i(x)\neq0, \ i=1, \cdots, p$, is required.
As an important branch of nonlinear optimization, fractional programming has attracted the interest of practitioners and researchers since the 1970s, for two main reasons. First, it frequently appears in various disciplines, including transportation planning [2, 3], government planning [4], finance and investment [5-7], and so on. Second, fractional programming is NP-hard [8, 9]; that is, it generally possesses multiple local optimal solutions that are not globally optimal, so solving it is challenging and effective methods are needed.
The problem MLFP is a special class of fractional programming, which has also attracted the interest of practitioners and researchers over the past years [10-16]. Many algorithms have been proposed to solve problem MLFP, including a partial linearization algorithm [17], an interior point algorithm [18], a parametric programming method [19], a cutting plane algorithm [20], a monotonic optimization approach [21], branch and bound algorithms [1, 22-23], and so on. In addition, some theoretical results have been obtained for problem MLFP; readers may refer to [1].
The aim of this paper is to present a new branch and bound algorithm for solving problem MLFP. Compared with the other three branch and bound algorithms [1, 22-23], our algorithm is easier to implement: their procedures require two linear relaxation steps to obtain a linear relaxation programming problem of MLFP, whereas our method requires only one. Comparison results show that the performance of our algorithm is superior to that of the other three methods in most cases.
This paper is organized as follows. In Section 2, the new linear relaxation technique is presented, which can be used to obtain the linear relaxation programming problem LRP for problem MLFP. In Section 3, the global optimization algorithm is described, and the convergence of this algorithm is established. Numerical results are reported to show the feasibility and efficiency of our algorithm in Section 4.
To solve problem MLFP, we first convert it into an equivalent problem (EP). After that, to generate the linear relaxation problem of EP, we present a new linear relaxation technique.
In order to derive the equivalent problem EP of MLFP, we first compute $l_j^0=\min\limits_{x\in D} x_j$, $u_j^0=\max\limits_{x\in D} x_j$, and construct the initial rectangle $X^0=\{x\in R^n\mid l_j^0\leq x_j\leq u_j^0, \ j=1, \cdots, n\}$.
Then by introducing a new variable $t$, we can obtain the EP of problem MLFP as follows
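Concretely, a formulation consistent with Theorem 1 below is
$$({\rm EP}):\quad \left\{\begin{array}{l} \min\ t \\ {\rm s.t.}\ \ \dfrac{n_i(x)}{d_i(x)}-t\leq 0, \ i=1, \cdots, p, \\ \qquad\ \ x\in D\cap X^0. \end{array}\right.$$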
Theorem 1 If $(x^*, t^*)$ is a global optimal solution of EP, then $x^*$ is also a global optimal solution of problem MLFP, and $t^*$ is the optimal value of EP and MLFP.
Proof Readers can refer to [1].
By Theorem 1, in order to globally solve problem MLFP, we may globally solve EP instead. So, in the following, we only consider how to solve the EP.
To solve EP, we present a branch and bound algorithm. In this algorithm, a principal process is to construct a linear relaxation programming problem for EP, which can provide a lower bound for the optimal value of EP over $X^k\subseteq X^0$.
Let $X^k=\{x\mid l\leq x\leq u\}$ denote either the initial box $X^0$ or a subbox generated for some partitioned subproblem in the branch and bound scheme. We will show how to construct the problem LRP for EP over $X^k$.
For convenience in expression, let
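for $i=1, \cdots, p$ and $X^k=[l, u]$,
$$\underline{\xi}_i=\min\limits_{x\in X^k}n_i(x), \quad \overline{\xi}_i=\max\limits_{x\in X^k}n_i(x), \quad \underline{\eta}_i=\min\limits_{x\in X^k}d_i(x), \quad \overline{\eta}_i=\max\limits_{x\in X^k}d_i(x).$$
Since $n_i(x)$ and $d_i(x)$ are affine, these quantities are available in closed form; for instance, $\underline{\xi}_i=\sum\limits_{j=1}^n\min\{c_{ij}l_j, \ c_{ij}u_j\}+d_i$ and $\overline{\eta}_i=\sum\limits_{j=1}^n\max\{e_{ij}l_j, \ e_{ij}u_j\}+f_i$.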
Obviously, we have $\underline{\xi}_i \leq n_i(x)\leq \overline{\xi}_i, \ \underline{\eta}_i\leq d_i(x)\leq \overline{\eta}_i, \ i=1, \cdots, p$.
To derive the problem LRP of EP over $X^k$, we first consider the term $\frac{n_i(x)}{d_i(x)}, \ i=1, \cdots, p$.
From $\overline{\eta}_in_i(x)-\underline{\xi}_i d_i(x)\geq0, \ d_i(x)-\overline{\eta}_i\leq0$, we have
Since $\overline{\eta}_i\underline{\eta}_id_i(x)>0$, by dividing (1) by $\overline{\eta}_i\underline{\eta}_id_i(x)$, simplifying and rearranging, we have
Let $\Phi_i(x, t)=\frac{n_i(x)}{d_i(x)}-t$, from (2), we have the following relation
By (3), the linear relaxation programming problem LRP can be established as follows
Let $v({\rm LRP})$ and $v({\rm EP})$ be the optimal values of problems LRP and EP over $X^k$, respectively; from the above discussion, we obviously have $v({\rm LRP})\leq v({\rm EP})$.
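The role of the relaxation can be checked numerically. The sketch below uses one valid affine underestimator of $\frac{n_i(x)}{d_i(x)}$, namely $\frac{n_i(x)+\underline{\xi}_i}{\overline{\eta}_i}-\frac{\underline{\xi}_i d_i(x)}{\overline{\eta}_i^2}$, which follows from multiplying the two inequalities $\overline{\eta}_in_i(x)-\underline{\xi}_i d_i(x)\geq0$ and $d_i(x)-\overline{\eta}_i\leq0$ and dividing by $\overline{\eta}_i^2d_i(x)$; it is an illustrative stand-in, since the exact form of (2) depends on the chosen scaling. A Python sanity check (function name and sampling scheme are ours):

```python
import random

def ratio_underestimator_holds(c, c0, e, e0, l, u, trials=2000, seed=0):
    """Check numerically that, at random points x of the box [l, u],
    n(x)/d(x) >= (n(x) + xi_lo)/eta_hi - xi_lo*d(x)/eta_hi**2,
    where xi_lo = min_X n and eta_hi = max_X d.  Assumes n >= 0 and
    d > 0 on the box, as in the paper's setting."""
    aff = lambda w, w0, x: w0 + sum(wj * xj for wj, xj in zip(w, x))
    # Interval bounds of the affine numerator and denominator over the box.
    xi_lo = c0 + sum(cj * (lj if cj >= 0 else uj) for cj, lj, uj in zip(c, l, u))
    eta_hi = e0 + sum(ej * (uj if ej >= 0 else lj) for ej, lj, uj in zip(e, l, u))
    rng = random.Random(seed)
    for _ in range(trials):
        x = [rng.uniform(lj, uj) for lj, uj in zip(l, u)]
        n, d = aff(c, c0, x), aff(e, e0, x)
        linear = (n + xi_lo) / eta_hi - xi_lo * d / eta_hi ** 2
        if n / d < linear - 1e-12:  # underestimator violated
            return False
    return True
```

For instance, the check passes for $n(x)=x_1+x_2+1$, $d(x)=x_1+1$ over $[0, 1]^2$.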
Theorem 2 For all $x\in X^k=[l, u]$, let $\Delta x=u-l$, and consider the functions $\Phi_i^l(x, t)$ and $\Phi_i(x, t)$. Then we have
Proof From the definitions of $\Phi_i(x, t)$ and $\Phi_i^l(x, t)$, we have
By the definitions of $\underline{\eta}_i$ and $\overline{\eta}_i$, we know that, $\Delta s\triangleq\overline{\eta}_i-\underline{\eta}_i\rightarrow0$ as $\Delta x\rightarrow0$. Thus from the above inequality, we have
From Theorem 2, it follows that $\Phi_i^l(x, t)$ will approximate the function $\Phi_i(x, t)$ as $\Delta x\rightarrow0$.
In this section, based on the former results, we present the branch and bound algorithm to solve EP.
During each iteration of the algorithm, the branching process generates a more refined partition of the region that cannot yet be excluded from the search for a global optimal solution of EP; this process is a critical element in guaranteeing convergence. This paper adopts a simple and standard bisection rule, which is sufficient to ensure convergence, since it shrinks the intervals of all variables to singletons along any infinite branch of the branch and bound tree.
Consider rectangle $X=\{x\in R^n\mid l_j \leq x_j\leq u_j, \ j=1, \cdots, n\}\subseteq X^0$, which is associated with a node subproblem. The branching rule is described as follows
(ⅰ) let $k=\mbox{argmax}\{u_j-l_j\mid j=1, \cdots, n\}$;
(ⅱ) let $\tau=(l_k+u_k)/2$;
(ⅲ) let
Using this branching rule, the rectangle $X$ is partitioned into two subrectangles $X^1$ and $X^2$.
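Steps (ⅰ)-(ⅲ) can be sketched in Python as follows (a hypothetical helper, not the authors' implementation):

```python
def bisect_longest_edge(l, u):
    """Steps (i)-(iii): pick the longest edge k of the box [l, u],
    split it at the midpoint tau, and return the two subboxes."""
    k = max(range(len(l)), key=lambda j: u[j] - l[j])
    tau = (l[k] + u[k]) / 2.0
    X1 = (list(l), u[:k] + [tau] + u[k + 1:])  # x_k in [l_k, tau]
    X2 = (l[:k] + [tau] + l[k + 1:], list(u))  # x_k in [tau, u_k]
    return X1, X2
```

For example, the box $[0, 4]\times[0, 2]$ is split along its first (longest) edge into $[0, 2]\times[0, 2]$ and $[2, 4]\times[0, 2]$.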
Based upon the results and operations given above, this subsection summarizes the basic steps of the proposed algorithm.
Let $LB(X^k)$ and $(x(X^k), t(X^k))$ denote the optimal value of problem LRP over the subrectangle $X^k$ and an element of the corresponding argmin, respectively.
Algorithm statement
Step 1 Set the convergence tolerance $\epsilon\geq0$; the feasible error $\epsilon_1\geq0$; the upper bound $UB_0=+\infty$; the set of feasible points $F=\emptyset$.
Find $LB_0=LB(X^0)$ and $(x(X^0), t(X^0))$ by solving the problem LRP over $X^0$. With the feasible error $\epsilon_1$, if $(x(X^0), t(X^0))$ is feasible to EP, set $(x^0, t^0)=(x(X^0), t(X^0)), $ $F=F\bigcup \{(x^0, t^0)\}$ and update $UB_0$. If $UB_0-LB_0\leq\epsilon$, then stop: $(x^0, t^0)$ is an $\epsilon$-optimal solution of EP. Otherwise, set $Q_0=\{X^0\}, \ k=1$, and go to Step 2.
Step 2 Set $UB_k=UB_{k-1}$. Subdivide $X^{k-1}$ into two subrectangles $X^{k, 1}, \ X^{k, 2}$ via the branching rule. Let $\overline{X}=\{X^{k, 1}, \ X^{k, 2}\}$.
Step 3 For each $X^{k, t}\in \overline{X}\ (t=1, \ 2)$, find the lower bound $LB(X^{k, t})$ and $(x(X^{k, t}), t(X^{k, t}))$ by solving the LRP over $X^{k, t}$. If $LB(X^{k, t})>UB_k$, set $\overline{X}=\overline{X}\setminus X^{k, t}$; else if $(x(X^{k, t}), t(X^{k, t}))$ is feasible to EP with feasible error $\epsilon_1$, then set
If $UB_k=t(X^{k, t})$, set $(x^k, t^k)=(x(X^{k, t}), t(X^{k, t}))$.
Step 4 Set $Q_k=(Q_{k-1}\setminus X^{k-1})\bigcup \overline{X}$.
Step 5 Set $LB_k=\min\{LB(X)\mid X\in Q_k\}$. Let $X^k$ be the subrectangle which satisfies that $LB_k=LB(X^k)$. If $UB_k-LB_k\leq\epsilon$, then stop: $(x^k, t^k)$ is a global $\epsilon$-optimal solution of problem EP. Otherwise, set $k=k+1$, and go to Step 2.
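The overall scheme can be illustrated end to end. The Python sketch below is a deliberately simplified variant: it takes the box itself as the feasible region (no constraints $Ax\leq b$) and replaces the LP-based bound $LB(X^k)$ with the cruder interval bound $\max\limits_{i}\ \underline{\xi}_i/\overline{\eta}_i$, which is likewise consistent as the boxes shrink; all names and defaults are ours.

```python
import heapq

def minimax_ratio_bb(num, den, lo, hi, eps=1e-3, max_iter=50000):
    """Best-first branch and bound for min_x max_i n_i(x)/d_i(x) over a box.

    num, den: lists of (coeffs, const) pairs defining the affine n_i, d_i;
    assumes n_i >= 0 and d_i > 0 on the box.  Upper bounds come from box
    midpoints, lower bounds from interval arithmetic (Steps 1-5, simplified).
    """
    aff = lambda w, w0, x: w0 + sum(wj * xj for wj, xj in zip(w, x))

    def value(x):
        return max(aff(c, c0, x) / aff(e, e0, x)
                   for (c, c0), (e, e0) in zip(num, den))

    def lower_bound(l, u):
        lb = float('-inf')
        for (c, c0), (e, e0) in zip(num, den):
            xi_lo = c0 + sum(cj * (lj if cj >= 0 else uj)
                             for cj, lj, uj in zip(c, l, u))
            eta_hi = e0 + sum(ej * (uj if ej >= 0 else lj)
                              for ej, lj, uj in zip(e, l, u))
            lb = max(lb, xi_lo / eta_hi)
        return lb

    mid = [(a + b) / 2 for a, b in zip(lo, hi)]
    best_val, best_x = value(mid), mid             # Step 1: initial upper bound
    heap = [(lower_bound(lo, hi), lo, hi)]
    for _ in range(max_iter):
        if not heap:
            break
        lb, l, u = heapq.heappop(heap)             # Step 5: box with least LB
        if best_val - lb <= eps:                   # gap small enough: stop
            break
        k = max(range(len(l)), key=lambda j: u[j] - l[j])
        tau = (l[k] + u[k]) / 2.0                  # Step 2: bisect longest edge
        children = ((list(l), u[:k] + [tau] + u[k + 1:]),
                    (l[:k] + [tau] + l[k + 1:], list(u)))
        for nl, nu in children:                    # Step 3: bound, maybe prune
            m = [(a + b) / 2 for a, b in zip(nl, nu)]
            v = value(m)
            if v < best_val:
                best_val, best_x = v, m
            clb = lower_bound(nl, nu)
            if clb < best_val - eps:               # keep only useful boxes
                heapq.heappush(heap, (clb, nl, nu))
    return best_val, best_x
```

On the toy instance $\min\limits_{x\in[0, 1]^2}\max\{\frac{x_1+x_2+1}{x_1+1}, \ \frac{2x_1+x_2+2}{x_2+2}\}$, whose optimal value is 1 at the origin, the sketch returns a value within the tolerance of 1.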
The following theorem gives the global convergence properties of the above algorithm.
Theorem 3 If the algorithm terminates finitely, then upon termination $x^k$ is a global $\epsilon$-optimal solution of problem MLFP; otherwise, it generates an infinite sequence $\{x^k\}$ of iterates such that, along any infinite branch of the branch and bound tree, any accumulation point is a global optimal solution of problem MLFP.
Proof When the algorithm terminates finitely, the conclusion is obvious. When the algorithm does not terminate finitely, as stated in [25], a sufficient condition for it to converge to a global optimum is that the bounding operation is consistent and the selection operation is bound improving.
A bounding operation is called consistent if at every step any unfathomed partition can be further refined, and if any infinitely decreasing sequence of successively refined partition elements satisfies
where $UB_k$ is a computed upper bound at stage $k$ and $LB_k$ is the best lower bound at iteration $k$, not necessarily occurring inside the same subrectangle as $UB_k$. In the following, we will show that (5) holds.
Since the employed subdivision process is bisection, the process is exhaustive. Consequently, from Theorem 2 and the relation $v({\rm LRP})\leq v({\rm EP})$, formula (5) holds, which implies that the employed bounding operation is consistent.
A selection operation is called bound improving if at least one partition element where the actual upper bound is attained is selected for further partition after a finite number of refinements. Clearly, the employed selection operation is bound improving because the partition element where the actual upper bound is attained is selected for further partition in the immediately following iteration.
Based on the above discussion, we know that the bounding operation is consistent and that the selection operation is bound improving. Therefore, according to [25], the employed algorithm converges to the global optimum of MLFP.
In this section, to verify the performance of the proposed algorithm, some numerical experiments are carried out and compared with the three most recent algorithms [1, 22-23]. The algorithm is implemented in MATLAB 7.1, and all test problems are run on a Pentium IV (3.06 GHz) microcomputer. The simplex method is applied to solve the linear relaxation programming problems.
For test problems 1-8, the convergence tolerance $\epsilon$ is set to $5\times 10^{-8}$, and the feasible errors $\epsilon_1$ are set to 0.005, 0.001, 0.001, 0.005, 0.001, 0.001, 0.001, 0.001, respectively, in agreement with the feasible errors used in [1].
The results of problems 1-8 are summarized in Table 1, where the following notations are used in the row headers: $Iter$: the number of algorithm iterations; $L_{\max}$: the maximal number of active nodes required by the algorithm.
Example 9 is a random test problem. Table 2 summarizes our computational results for Example 9. For this test problem, the convergence tolerance is $\epsilon=5\times 10^{-8}$ and the feasible error is $\epsilon_1=0.001$. In Table 2, $Ave.Iter$ denotes the average number of iterations and $Ave.Time$ the average CPU time of the algorithm in seconds, obtained by solving 10 different random instances for each size.
Example 1 [1, 21, 22]
Example 2 [1, 21, 22]
Example 3 [1, 22]
Example 4 [1, 22]
Example 5 [1, 23]
Example 6 [1, 23]
Example 7 [1, 23]
Example 8 [1, 23]
Example 9
where all elements $c_{ij}, \ e_{ij}, \ i=1, \cdots, p, \ j=1, \cdots, n$ are randomly generated in [0, 1]; all elements $d_i, \ f_i$ are randomly generated in [0, $p$]; and all elements of $A$ and $b$ are randomly generated in [0, 1].
From Table 1, it can be seen that, except for Examples 2 and 8, the performance of our algorithm is superior to that of the other three methods. From Table 2, we can see that the CPU time and the number of iterations of our algorithm are not sensitive to the problem sizes $p$ and $m$.
Test results show that our algorithm is competitive and can be used to solve the problem MLFP.