Quadratic Optimization over a Polyhedral Set

Abstract. In this paper we consider the quadratic optimization problem, which splits into two cases: convex quadratic maximization and convex quadratic minimization. Based on local and global optimality conditions, we propose algorithms for solving these problems. The proposed algorithms use linear programming subproblems and generate a sequence of local maximizers and global minimizers. It is shown that the algorithms are convergent under appropriate conditions. Numerical results are provided.


Introduction
Consider an extremum problem of a quadratic function over a polyhedral set D ⊂ R^n:

f(x) = ⟨Cx, x⟩ + ⟨d, x⟩ → extr, x ∈ D, (1.1)

where C is an n × n matrix, d, x ∈ R^n, and D is a bounded polyhedral set in R^n. Here ⟨·, ·⟩ denotes the scalar product of two vectors.
Quadratic programming plays an important role in mathematical programming. For example, quadratic programs serve as auxiliary problems for nonlinear programming, arising from linearized subproblems or from optimization problems approximated by quadratic functions. Quadratic programming also has many applications in science, technology, statistics and economics. There are a number of methods for solving problem (1.1) as a convex problem, such as interior point methods, the projected gradient method, the conditional gradient method, the proximal algorithm, penalty methods, the finite step algorithm and so on [1,3,7]. A well-known optimality condition for problem (1.1) is given in Rockafellar [4]. The quadratic maximization problem, on the other hand, is known to be NP-hard. Many methods and algorithms [2,5,6] are devoted to the solution of quadratic maximization over convex sets.
The paper is organized as follows. In Section 2 we consider the convex quadratic maximization problem and apply the global optimality condition of [5] to it. We propose finite algorithms that approximate the level set of the objective function by a finite number of points and solve linear programs as auxiliary problems. In Section 3 we consider the quadratic minimization problem over a polyhedral set and recall the conditional gradient method for solving it. In the last section we present numerical results obtained by the proposed algorithms for the quadratic maximization and minimization problems.

Quadratic Convex Maximization Problem
Consider the quadratic maximization problem

max f(x) = ⟨Cx, x⟩ + ⟨d, x⟩ + q, x ∈ D, (2.1)

where C is a positive semidefinite (n × n) matrix and D ⊂ R^n is a polyhedral set of R^n. A vector d ∈ R^n and a number q ∈ R are given. Then the optimality conditions of [6] can be formulated as follows.
Theorem 2.1 [6] Let z ∈ D be such that f′(z) ≠ 0. Then z is a solution of problem (2.1) if and only if

⟨f′(y), x − y⟩ ≤ 0 for all y ∈ E_{f(z)}(f) and all x ∈ D, (2.2)

where E_{f(z)}(f) = {y ∈ R^n : f(y) = f(z)} is the level set of f at the level f(z).
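The condition in Theorem 2.1 can be checked numerically on small instances. The following is an illustrative sketch (the data C, d, the box D and the points are our own example, not from the paper): at a local maximizer z of f(x) = ⟨Cx, x⟩ + ⟨d, x⟩, a single level-set point y and feasible point x with ⟨f′(y), x − y⟩ > 0 already certify that z is not a global maximizer.

```python
import numpy as np

# Hypothetical 2-D instance: f(x) = <Cx, x> + <d, x> with C = I, d = 0,
# maximized over the box D = [-1, 2] x [-1, 2] (a polyhedral set).
C = np.eye(2)
d = np.zeros(2)
f = lambda x: x @ C @ x + d @ x
grad = lambda x: 2 * C @ x + d        # f'(x) = 2Cx + d

z = np.array([2.0, -1.0])   # a local maximizer (a vertex of D), f(z) = 5
y = np.array([2.0, 1.0])    # lies on the level set: f(y) = f(z) = 5
x = np.array([2.0, 2.0])    # a feasible point of D

assert np.isclose(f(y), f(z))
# Theorem 2.1: z is global iff <f'(y), x - y> <= 0 for all such y and x.
# Here the inner product is positive, so condition (2.2) is violated and z
# is not a global maximizer (indeed f(2, 2) = 8 > 5).
print(grad(y) @ (x - y))    # prints 2.0
```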

Approximation of the Level Set
Furthermore, to construct a numerical method for solving problem (2.1) based on the optimality condition (2.2), we assume that C is a symmetric positive definite n × n matrix. Then problem (2.1) can be written as follows:

max f(x), x ∈ D, (2.3)

where f(x) = ⟨Cx, x⟩ + ⟨d, x⟩; the additive constant q is dropped since it does not affect the set of maximizers.

Definition 2.2
The set A^m_z = {y^1, y^2, …, y^m} ⊂ E_{f(z)}(f) is called the approximation set to the level set E_{f(z)}(f) at the point z.
Note that checking the optimality condition (2.2) requires solving the linear programming problems

max ⟨f′(y), x⟩, x ∈ D, for every y ∈ E_{f(z)}(f),

which is impossible in general since the level set contains infinitely many points. We therefore need to find an appropriate approximation set such that one can check the optimality condition at a finite number of points.
The following lemma shows that finding a point on the level set of f(x) is computationally possible.

Lemma 2.1 Let a point z ∈ D and a vector h ∈ R^n satisfy ⟨f′(z), h⟩ < 0. Then there exists a positive number α such that z + αh ∈ E_{f(z)}(f).

Proof. Note that

⟨Ch, h⟩ > 0 and ⟨2Cz + d, h⟩ < 0. (2.5)

Construct a point y_α for α > 0 defined by y_α = z + αh. Solve the equation f(y_α) = f(z) with respect to α. In fact, we have

f(z + αh) = f(z) + α⟨2Cz + d, h⟩ + α²⟨Ch, h⟩ = f(z),

or equivalently,

α(⟨2Cz + d, h⟩ + α⟨Ch, h⟩) = 0.

Hence the positive root is

ᾱ = −⟨2Cz + d, h⟩ / ⟨Ch, h⟩.

By (2.5), we have ᾱ > 0 and consequently y_ᾱ = z + ᾱh ∈ E_{f(z)}(f). □

For each y^i ∈ A^m_z consider the linear programming problem

max ⟨2Cy^i + d, x⟩, x ∈ D. (2.6)

Let u^j, j = 1, 2, …, m, be solutions of those problems, which always exist due to the compactness of D:

⟨2Cy^j + d, u^j⟩ = max_{x ∈ D} ⟨2Cy^j + d, x⟩, j = 1, 2, …, m. (2.7)

We refer to the problems generated by (2.6) as the auxiliary problems of A^m_z. Define θ_m as follows:

θ_m = max_{1 ≤ j ≤ m} ⟨2Cy^j + d, u^j − y^j⟩. (2.8)

The value θ_m is said to be the approximate global condition value. The following are some properties of A^m_z and θ_m.

Lemma 2.2 If for z ∈ D there is a point y^k ∈ A^m_z such that ⟨2Cy^k + d, u^k − y^k⟩ > 0, then f(u^k) > f(z).

Proof. By the convexity of f and the definition of u^k, we have

f(u^k) ≥ f(y^k) + ⟨f′(y^k), u^k − y^k⟩ = f(z) + ⟨2Cy^k + d, u^k − y^k⟩,

where f(y^k) = f(z) holds because y^k ∈ E_{f(z)}(f). Therefore, the assumption of the lemma implies that f(u^k) > f(z). □

Define the approximation set A^m_z by

A^m_z = {y^i = z − α_i a^i, i = 1, 2, …, m}, (2.9)

where α_i = ⟨2Cz + d, a^i⟩ / ⟨Ca^i, a^i⟩ and a^i is the i-th row of the matrix A defining D = {x ∈ R^n : Ax ≤ b}, i = 1, 2, …, m. Then an algorithm for solving (2.3) is described in the following.
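The step length ᾱ of Lemma 2.1 is the only ingredient needed to place a point on the level set. A minimal numeric check, with illustrative data of our own choosing (C, d, z, h are not from the paper):

```python
import numpy as np

# Illustrative check of Lemma 2.1: for z and a direction h with <f'(z), h> < 0,
# the step length  alpha = -<2Cz + d, h> / <Ch, h>  gives a point z + alpha*h
# on the level set E_{f(z)}(f) of f(x) = <Cx, x> + <d, x>.
C = np.array([[2.0, 0.0],
              [0.0, 1.0]])            # symmetric positive definite (assumed data)
d = np.array([1.0, -1.0])
f = lambda x: x @ C @ x + d @ x

z = np.array([1.0, 1.0])
h = np.array([-1.0, 0.0])             # direction with <f'(z), h> < 0
g = 2 * C @ z + d                     # gradient f'(z) = (5, 1)
assert g @ h < 0

alpha = -(g @ h) / (h @ C @ h)        # positive root of f(z + a*h) = f(z)
y = z + alpha * h
assert alpha > 0
assert np.isclose(f(y), f(z))         # y lies on the level set, as the lemma claims
```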

Algorithm MAX
Input: A convex quadratic function f and a polyhedral set D.
Output: An approximate solution x to problem (2.3); i.e., an approximate global maximizer of f over D.
Step 1. Choose a feasible point x^0 ∈ D and set k := 0.
Step 2. Find a local maximizer z^k ∈ D by the conditional gradient method starting with the initial approximation point x^k.
Step 3. Construct an approximation set A^m_{z^k} at the point z^k by formula (2.9).
Step 4. Solve the auxiliary problems (2.6) and compute θ^k_m by (2.8), attained at some u^k ∈ D.
Step 5. If θ^k_m ≤ 0, then stop: z^k is an approximate global maximizer. Otherwise, set x^{k+1} := u^k, k := k + 1, and go to Step 2.
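Algorithm MAX can be sketched as follows. This is a minimal illustration, not the authors' code: we assume D is the box [lo, hi]^n written as Ax ≤ b with rows e_i and −e_i, so every linear subproblem max ⟨c, x⟩, x ∈ D, has the closed-form vertex solution used in `argmax_lp`; the function names and the stopping tolerance are our own.

```python
import numpy as np

def argmax_lp(c, lo, hi):
    """Maximize the linear function <c, x> over the box [lo, hi]."""
    return np.where(c >= 0, hi, lo)

def local_max(C, d, x, lo, hi, max_iter=100):
    """Step 2: conditional-gradient local search for a convex quadratic --
    jump to the vertex maximizing the linearization until it stops moving."""
    for _ in range(max_iter):
        z = argmax_lp(2 * C @ x + d, lo, hi)
        if np.array_equal(z, x):
            break
        x = z
    return x

def algorithm_max(C, d, lo, hi, x0, max_iter=50):
    """Sketch of Algorithm MAX for f(x) = <Cx, x> + <d, x> over a box."""
    n = len(x0)
    A = np.vstack([np.eye(n), -np.eye(n)])       # rows a^i of Ax <= b for the box
    x = x0
    z = x0
    for _ in range(max_iter):
        z = local_max(C, d, x, lo, hi)           # Step 2: local maximizer z^k
        g = 2 * C @ z + d
        best_theta, best_u = -np.inf, None
        for a in A:                              # Step 3: build A_z^m by (2.9)
            alpha = (g @ a) / (a @ C @ a)        # C positive definite => denominator > 0
            y = z - alpha * a                    # point on the level set E_{f(z)}(f)
            gy = 2 * C @ y + d
            u = argmax_lp(gy, lo, hi)            # Step 4: auxiliary LP (2.6)
            theta = gy @ (u - y)
            if theta > best_theta:
                best_theta, best_u = theta, u    # running maximum = theta_m of (2.8)
        if best_theta <= 1e-9:                   # Step 5: theta_m <= 0 -> global
            return z
        x = best_u                               # escape the local maximizer (Lemma 2.2)
    return z
```

For instance, with C = I, d = 0 on the box [−1, 2]², a start near the vertex (2, −1) is first attracted to that local maximizer (f = 5) and then escapes via a positive θ_m to the global maximizer (2, 2) with f = 8.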

Quadratic Convex Minimization Problem
Consider the quadratic minimization problem over a box constraint:

min f(x) = ⟨Cx, x⟩ + ⟨d, x⟩, x ∈ D, (3.1)

where C is a symmetric positive definite n × n matrix, d ∈ R^n, and D = {x ∈ R^n : a ≤ x ≤ b} is a box with given vectors a, b ∈ R^n. Then z ∈ D is a solution of problem (3.1) if and only if

⟨f′(z), x − z⟩ ≥ 0 for all x ∈ D.

We show how to apply the conditional gradient method to problem (3.1). It can be easily checked that the function f(x) defined in (3.1) is a strictly convex quadratic function. Its gradient is computed as

f′(x) = 2Cx + d.

The Conditional Gradient Algorithm [7]
Step 1. Choose a tolerance ε > 0 and a feasible point x^0 ∈ D, and set k := 0.
Step 2. Solve the linear programming problem

min ⟨f′(x^k), x⟩, x ∈ D.

Let x̄^k be a solution to this problem.
Step 3. Compute the value η_k:

η_k = ⟨f′(x^k), x̄^k − x^k⟩.

Step 4. If η_k ≥ −ε, then stop: x^k is an approximate solution. Otherwise, go to Step 5.
Step 5. Solve the one-dimensional minimization problem

min_{α ∈ [0,1]} f(x^k + α(x̄^k − x^k)),

let α_k be its solution, set x^{k+1} := x^k + α_k(x̄^k − x^k), k := k + 1, and go to Step 2.

Convergence of the algorithm is given by the following proposition.
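The conditional gradient steps above can be sketched as follows, again assuming a box D = [lo, hi] so that Step 2 has a closed-form vertex solution; since f is quadratic, the line search of Step 5 also has the closed form used below. The problem data in the test are illustrative, not from the paper.

```python
import numpy as np

def conditional_gradient(C, d, lo, hi, x0, eps=1e-8, max_iter=1000):
    """Conditional gradient method for min <Cx, x> + <d, x> over the box [lo, hi]."""
    x = x0.astype(float)                         # Step 1: feasible start, k = 0
    for _ in range(max_iter):
        g = 2 * C @ x + d                        # gradient f'(x^k)
        s = np.where(g <= 0, hi, lo)             # Step 2: min <g, s> over the box
        h = s - x
        eta = g @ h                              # Step 3: eta_k (always <= 0)
        if eta >= -eps:                          # Step 4: near-optimal -> stop
            return x
        denom = 2 * (h @ C @ h)                  # Step 5: exact line search on [0, 1]
        alpha = 1.0 if denom <= 0 else min(1.0, -eta / denom)
        x = x + alpha * h
    return x
```

For example, for f(x) = ‖x‖² − 2x₁ − 2x₂ over [0, 2]², a single exact line-search step from the origin reaches the minimizer (1, 1).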
Theorem 3.1 [1] Under the assumptions of Lemma 3.1, the sequence {x^k} generated by the algorithm is a minimizing sequence, i.e.,

lim_{k→∞} f(x^k) = min_{x ∈ D} f(x).

Numerical Experiments
The proposed algorithms for the quadratic maximization and minimization problems have been tested on the following types of problems. The algorithms are coded in Matlab. The dimensions of the problems ranged from 50 up to 1000. Computational times and global solutions are given in the following tables.

Conclusion
To provide a unified view, we considered the quadratic programming problem consisting of convex quadratic maximization and convex quadratic minimization. Based on the global optimality conditions of Strekalovsky [5,6] and classical local optimality conditions [1], we proposed algorithms for solving the above problems. Under appropriate conditions we have shown that the proposed algorithms converge to a global solution in a finite number of steps.
Algorithm MAX generates a sequence of local maximizers and uses linear programming at each iteration, which makes the algorithm easy to implement numerically.

Theorem 2.2
If θ^k_m > 0 for k = 1, 2, …, then Algorithm MAX converges to a global solution in a finite number of steps.
Proof. The claim is immediate from Lemma 2.2 and the fact that a convex function attains its local and global maxima at vertices of the polyhedral set D. □