A Study Of Iterative Methods In Optimization

ABSTRACT

Optimization techniques are called into play every day in decision-making processes involving resource allocation in almost every discipline. However, the development and implementation of algorithms for solving optimization problems are fraught with many difficulties and frustrations.

The aim of this thesis has been to examine and identify some of the problems associated with the development and implementation of a class of optimization algorithms, and to develop means of improving upon them. The study made use of functions such as Rosenbrock's function, which are known to provide a good test of the robustness of these techniques.
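In its standard two-variable form, Rosenbrock's function is
\[
f(x_1, x_2) = 100\,(x_2 - x_1^2)^2 + (1 - x_1)^2,
\]
whose global minimum $f(1,1) = 0$ lies at the end of a long, narrow, curved valley that slows the progress of many iterative methods.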

The study covered a number of algorithms in both unconstrained and constrained optimization. Some of the problems encountered arose in the implementation of the Modified Newton's method. It was found that if, at an iterate $x_k$, the Hessian matrix $H(x_k)$ is not positive definite, then $-H(x_k)^{-1}\nabla f(x_k)$ need not be a descent direction.
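In standard terms, a direction $d_k$ is a descent direction at $x_k$ precisely when $\nabla f(x_k)^{\mathsf T} d_k < 0$; for the Newton step $d_k = -H(x_k)^{-1}\nabla f(x_k)$ this quantity is
\[
\nabla f(x_k)^{\mathsf T} d_k = -\nabla f(x_k)^{\mathsf T} H(x_k)^{-1}\,\nabla f(x_k),
\]
which is guaranteed to be negative only when $H(x_k)^{-1}$, and hence $H(x_k)$, is positive definite.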

In this case, a new search direction must be found, ideally one leading to a point at which the Hessian is positive definite. Some of the suggestions proposed in the literature, however, did not always lead to a descent direction.

A new iterate could be found from $x_{k+1} = x_k - \lambda_k v_k$, where $v_k$ is an eigenvector corresponding to a negative eigenvalue of $H(x_k)$. If this fails, an alternative is to use the Hessian at the previous iterate, $H(x_{k-1})$, to compute the next iterate $x_{k+1}$, which will hopefully yield a positive definite Hessian. If this also fails, then setting $H(x_k)^{-1} = I_n$ reduces the Modified Newton's method to the Steepest Descent method, where $x_{k+1} = x_k - \alpha_k \nabla f(x_k)$.
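A minimal OCTAVE sketch of this chain of fall-backs is given below; it illustrates the strategy described above rather than the exact routine used in the thesis. The names newton_direction, grad_f and hess_f are hypothetical, the step length ($\lambda_k$ or $\alpha_k$) is assumed to come from a separate line search, and the sign adjustment of the eigenvector in step 2 is an added safeguard.

% Hypothetical sketch of the fall-back strategy described above -- not the
% exact routine used in the thesis.  grad_f and hess_f are assumed to be
% function handles returning the gradient and Hessian of f at a point.
function d = newton_direction (x, xprev, grad_f, hess_f)
  g = grad_f (x);
  H = hess_f (x);

  % 1. Modified Newton direction when H(x_k) is positive definite.
  [R, p] = chol (H);          % p == 0 exactly when H is positive definite
  if (p == 0)
    d = -(R \ (R' \ g));      % d = -H^{-1} g via the Cholesky factor
    return;
  endif

  % 2. H(x_k) not positive definite: move along an eigenvector belonging
  %    to a negative eigenvalue, with the sign chosen (added safeguard)
  %    so that the step points downhill.
  [V, L] = eig (H);
  idx = find (diag (L) < 0, 1);
  if (! isempty (idx))
    v = V(:, idx);
    d = -sign (g' * v) * v;
    if (g' * d < 0)
      return;
    endif
  endif

  % 3. Otherwise fall back on the Hessian at the previous iterate H(x_{k-1}).
  [R, p] = chol (hess_f (xprev));
  if (p == 0)
    d = -(R \ (R' \ g));
    return;
  endif

  % 4. Last resort: set H(x_k)^{-1} = I_n, the steepest-descent direction.
  d = -g;
endfunction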

The study also revealed that determining the various critical points of a given function may require more than one technique. All the computations in this work were carried out in OCTAVE, a freely available mathematical software package.