
A NEW CONJUGATE GRADIENT METHOD FOR SOLVING NONLINEAR UNCONSTRAINED OPTIMIZATION PROBLEMS

Emmanuel Nwaeze
Department of Mathematics, University of Ilorin, Ilorin, Nigeria.
e-mail: nwaezeema@yahoo.com

O.M. Bamigbola
Department of Mathematics, University of Ilorin, Ilorin, Nigeria.

Abstract - We present a new Conjugate Gradient Method for solving nonlinear unconstrained optimization problems through a generalization of the multivariable Taylor's series as the model of the objective function. Numerical results from this method produced the global optimum of f.

Keywords - New Conjugate Gradient Method, Nonlinear Unconstrained Optimization, multivariable Taylor's series, objective function.

I. Introduction

The New Conjugate Gradient Method (CGM) seeks to optimize a multivariable function f:

min f(x),    (1)

where x ∈ R^N, R^N is an N-dimensional Euclidean space and f is n-times differentiable; g(x) denotes the gradient vector.

A conventional conjugate gradient method for solving (1) uses the iterative scheme

x_{k+1} = x_k + \alpha_k d_k,  k = 0, 1, 2, ...,    (2)

where x_0 ∈ R^N is an initial point and \alpha_k, the step size at iteration k, is defined by

\alpha_k = \arg\min_{\alpha > 0} f(x_k + \alpha d_k).    (3)

The search direction d_k takes the form

d_{k+1} = -g_{k+1},                    k = 0,
d_{k+1} = -g_{k+1} + \beta_k d_k,      k \ge 1,    (4)

in which g_k = g(x_k) = \nabla f(x_k) and \beta_k is a parameter of the CGM, i.e. different \beta's determine different CGMs. In particular, Dai and Yuan [1] used

\beta_k = \frac{\|g_{k+1}\|^2}{d_k^T y_k},

where y_k = g_{k+1} - g_k and \|\cdot\| denotes the Euclidean norm. The global convergence properties of this variant of the CGM have already been established by Dai and Yuan [1], who used the Wolfe conditions to establish the global convergence of the following algorithm.


A. ALGORITHM ( \beta_k = \|g_{k+1}\|^2 / (d_k^T y_k) )

Step 1: Given x_1 ∈ R^N, set d_1 = -g_1 and k = 1; if g_1 = 0, stop.
Step 2: Compute a step size \alpha_k > 0 that satisfies the Wolfe conditions.
Step 3: Let x_{k+1} = x_k + \alpha_k d_k. If g_{k+1} = 0, stop.
Step 4: Compute \beta_k, generate d_{k+1} by (4), set k = k + 1 and go to Step 2.

Bamigbola and Ejieji [9] showed that the efficiency of many CGMs as computational schemes for solving programming problems depends to a very large extent on the degree of the model of the objective function. Furthermore, they believe that the properties of the functional could be explored to characterize the method as well as utilized to fashion efficient algorithms for optimization problems. Therefore, in this paper, we present a new Conjugate Gradient Method for solving nonlinear unconstrained optimization problems through a generalization of the multivariable Taylor's series as the model of the objective function.

II. Representation of the Objective Functional

By Taylor's theorem about the point x_k,

F(x) = f(x_k) + df(x_k) + \frac{1}{2!} d^2 f(x_k) + \cdots + \frac{1}{n!} d^n f(x_k),    (5)

where

d^n F(x_k) = \sum_{i_1=1}^{N} \sum_{i_2=1}^{N} \cdots \sum_{i_n=1}^{N} h_{i_1} h_{i_2} \cdots h_{i_n} \frac{\partial^n f(x_k)}{\partial x_{i_1} \partial x_{i_2} \cdots \partial x_{i_n}},    (6)

x, x_k, h ∈ R^N,  h_j = x_j - x_{jk},  n \ge 2.

III. Characterization of the New Method

The method is characterized by the model functional (5) and the following gradient vectors for various values of n:

G_n(x_{k+1}) = G_{n-1}(x_{k+1}) + \frac{1}{(n-1)!} \sum_{m=0}^{n-1} \binom{n-1}{m} (-1)^m \, g(x_k + (n-1-m)\Delta x_k),    (7)

where n \ge 2, x, x_k ∈ R^N, f(x) ∈ R, h = x - x_k, \Delta x_k = x_{k+1} - x_k, g(x) = \nabla f(x), and, in particular,

G_3(x_{k+1}) = \frac{1}{2!} \left[ g(x_k + 2\Delta x_k) - g(x_k) \right],  n = 3.    (8)


Using (8) as the gradient vector and D_k as the descent vector of f, the new CGM seeks to solve (1) through the following algorithm.

A. ALGORITHM
i. Input initial values x_0 and D_0 = -G_0 = -g_0.
ii. Repeat:
   a. Find a step length \alpha_k such that f(x_k + \alpha_k D_k) = \min f(x_k + \alpha D_k), \alpha > 0.
   b. Compute the new point x_{k+1} = x_k + \alpha_k D_k.
   c. Update the search direction:
      D_{k+1} = -G_{k+1} + \beta_k D_k,
      G_{k+1} = \frac{1}{2!} \left[ g(x_k + 2\Delta x_k) - g(x_k) \right],
      \beta_k = \frac{\|G_{k+1}\|^2}{D_k^T y_k},  y_k = G_{k+1} - G_k.
   d. Check for optimality of g: terminate the iteration at step m when g_m is so small that x_m is an acceptable estimate of the optimal point x* of f. If not optimal, set k = k + 1.
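Algorithm (III.A) differs from Algorithm (I.A) only in the gradient surrogate G_{k+1} of (8). A sketch under the same illustrative assumptions follows, with SciPy's bounded scalar minimizer standing in for the exact line minimization in step (a); the bounds, tolerance and safeguard on the denominator are our choices, not the paper's.

# A sketch of the new CGM, Algorithm (III.A).
import numpy as np
from scipy.optimize import minimize_scalar

def new_cgm(f, grad, x0, tol=1e-8, max_iter=5000):
    x = np.asarray(x0, dtype=float)
    G = grad(x)                           # G_0 = g_0, so D_0 = -g_0
    D = -G
    for _ in range(max_iter):
        # (a) approximate alpha_k = argmin_{alpha > 0} f(x_k + alpha D_k)
        alpha = minimize_scalar(lambda a: f(x + a * D),
                                bounds=(0.0, 1e3), method='bounded').x
        dx = alpha * D                    # Delta x_k = x_{k+1} - x_k
        # (c) modified gradient (8) and the Dai-Yuan-type parameter beta_k
        G_new = 0.5 * (grad(x + 2.0 * dx) - grad(x))
        y = G_new - G                     # y_k = G_{k+1} - G_k
        denom = D @ y
        beta = (G_new @ G_new) / denom if abs(denom) > 1e-16 else 0.0
        x = x + dx                        # (b) new point x_{k+1}
        D = -G_new + beta * D             # (c) D_{k+1} = -G_{k+1} + beta_k D_k
        G = G_new
        if np.linalg.norm(grad(x)) <= tol:
            break                         # (d) g_m small enough: accept x_m
    return x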

IV. Convergence of the New Conjugate Gradient Method

In this section, we employ the convergence results of Algorithm (I.A) to establish the convergence of Algorithm (III.A). We assume that the objective function satisfies the following conditions.

A. ASSUMPTIONS
1. f is bounded below in R^N and is continuously differentiable in a neighborhood Z of the level set LL = { x ∈ R^N : f(x) \le f(x_1) }.
2. The gradient \nabla f(x) is Lipschitz continuous in Z; namely, there exists a constant L > 0 such that

\|\nabla f(x) - \nabla f(y)\| \le L \|x - y\|  for any x, y ∈ Z.    (9)


B. LEMMA
Suppose that x_1 is a starting point for which the above assumptions are satisfied. Consider any method of the form (2), where d_k is a descent direction and \alpha_k satisfies the standard Wolfe conditions. Then

\sum_{k \ge 1} \frac{(g_k^T d_k)^2}{\|d_k\|^2} < \infty.    (10)

C. Proof. (See the proof in Dai and Yuan [1].)

Dai and Yuan proved Lemma (IV.B) for Algorithm (I.A). The proof of Lemma (IV.B) for Algorithm (III.A) is the same when we use G(x_k) in place of g(x_k) and D_k in place of d_k. Using Dai and Yuan's proof, we have

\sum_{k \ge 1} \frac{(G_{k+1}^T D_k)^2}{\|D_k\|^2} < \infty.    (11)

With G_{k+1} = \frac{1}{2!}\left[ g(x_k + 2\Delta x_k) - g(x_k) \right] and the inequality (a - b)^2 \le 2(a^2 + b^2), this can be written in terms of g as

\sum_{k \ge 1} \frac{\big( (g(x_k + 2\Delta x_k) - g(x_k))^T D_k \big)^2}{4\|D_k\|^2} \le \sum_{k \ge 1} \frac{(g(x_k + 2\Delta x_k)^T D_k)^2}{2\|D_k\|^2} + \sum_{k \ge 1} \frac{(g(x_k)^T D_k)^2}{2\|D_k\|^2} < \infty.    (12)

D. THEOREM
Suppose that x_1 is a starting point for which Assumptions (IV.A) are satisfied. Let {x_k, k = 1, 2, ...} be generated by Algorithm (I.A). Then the algorithm either terminates at a stationary point or converges in the sense that

\liminf_{k \to \infty} \|g(x_k)\| = 0.

E. Proof. (See the proof in Dai and Yuan [1].)

Dai and Yuan used proof by contradiction to prove Theorem (IV.D) for Algorithm (I.A). The proof of Theorem (IV.D) for Algorithm (III.A) is the same when we use G(x_k) in place of g(x_k) and D_k in place of d_k. It is not difficult to see that if \liminf_{k \to \infty} \|g(x_k)\| = 0 (as shown by Dai and Yuan), then \Delta x_k = 0 (the zero vector, so there is no further improvement on x_k). With G_{k+1} = \frac{1}{2!}\left[ g(x_k + 2\Delta x_k) - g(x_k) \right], we have

\liminf_{k \to \infty} \|G(x_{k+1})\| = \liminf_{k \to \infty} \|g(x_k + 2\Delta x_k) - g(x_k)\| / 2
  = \liminf_{k \to \infty} \|g(x_k) - g(x_k)\| / 2
  \le \liminf_{k \to \infty} \|g(x_k)\| = 0.
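The definition (8) has a simple consequence that is easy to check numerically (this check is ours, not from the paper): for a quadratic f(x) = 0.5 x^T A x - b^T x, whose gradient is g(x) = Ax - b, the modified gradient G_{k+1} = [g(x_k + 2\Delta x_k) - g(x_k)]/2 equals A\,\Delta x_k = g(x_{k+1}) - g(x_k), i.e. it coincides with the standard y_k.

# Numerical check of G_{k+1} = y_k on a quadratic objective.
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5 * np.eye(5)        # symmetric positive definite
b = rng.standard_normal(5)
g = lambda x: A @ x - b            # gradient of 0.5 x^T A x - b^T x

xk = rng.standard_normal(5)
dx = rng.standard_normal(5)        # an arbitrary step Delta x_k

G = 0.5 * (g(xk + 2 * dx) - g(xk))  # modified gradient (8)
yk = g(xk + dx) - g(xk)             # standard y_k = g(x_{k+1}) - g(x_k)
print(np.allclose(G, yk))           # True: G_{k+1} = y_k on quadratics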


V. Numerical Results

The following constitute the test problems, by Andrei [ ], for the new CGM, in which Algorithm (III.A) was implemented in Visual Basic 6.0.

A. Test problems

Problem 1 (exact solution: [x]_i = 1, i = 1, 2, ..., N):

Minimize F(x) = \sum_{i=1}^{n} (x_i - 1)^2 + \Big[\sum_{i=1}^{n} i(x_i - 1)\Big]^2 + \Big[\sum_{i=1}^{n} i(x_i - 1)\Big]^4,  [x_0]_i = 1 - i/n.

Problem 2 (exact solution: [x]_i = 1, i = 1, 2, ..., N):

Minimize F(x) = \sum_{i=1}^{n/2} \big[ 100(x_{2i} - x_{2i-1}^2)^2 + (1 - x_{2i-1})^2 \big],  [x_0]_{2i} = 1, [x_0]_{2i-1} = -1.2.

Problem 3 (exact solution: [x]_i = 0, i = 1, 2, ..., N):

Minimize F(x) = \sum_{i=1}^{n} \Big[ n - \sum_{j=1}^{n} \cos x_j + i(1 - \cos x_i) - \sin x_i \Big]^2,  [x_0]_i = 1/n.

Problem 4:

Minimize F(x) = \sum_{i=1}^{n} \big( \exp(x_i) - i \sin(x_i) \big),  x_0 = [1, 1, ..., 1].

Problem 5:

Minimize F(x) = \sum_{i=1}^{n-1} (x_i - 1)^2 + \Big[\sum_{i=1}^{n} x_i^2 - 0.25\Big]^2,  [x_0]_i = i.

Problem 6:

Minimize F(x) = 0.5 \sum_{i=1}^{n} i(x_i^2 - 1)^2 - x_n,  [x_0]_i = 0.5.
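For reproduction purposes, two of the test problems above (the extended Rosenbrock function of Problem 2 and the trigonometric function of Problem 3) might be coded as follows; the paper's runs used Visual Basic 6.0, so this Python rendering of the stated formulas is only a sketch.

# Test problems 2 and 3 coded from the formulas above.
import numpy as np

def problem2(x):                   # extended Rosenbrock, solution [x]_i = 1
    xo, xe = x[0::2], x[1::2]      # x_{2i-1} and x_{2i} (1-based in the text)
    return np.sum(100.0 * (xe - xo**2)**2 + (1.0 - xo)**2)

def problem3(x):                   # trigonometric function, solution [x]_i = 0
    n = x.size
    i = np.arange(1, n + 1)
    inner = n - np.sum(np.cos(x)) + i * (1.0 - np.cos(x)) - np.sin(x)
    return np.sum(inner**2)

x0_p2 = np.tile([-1.2, 1.0], 5)    # [x0]_{2i-1} = -1.2, [x0]_{2i} = 1 (N = 10)
x0_p3 = np.full(10, 1.0 / 10)      # [x0]_i = 1/n (N = 10)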

B. Results

Numerical results for the above problems are summarized in the table below.

Table 1: Number of Function Evaluations and Iterations for Problems 1 to 6

P | N    | Iter | T     | FE    | X(1)       | X(N)      | f
1 | 10   | 2    | 0.1   | 53    | 1          | 1         | 4.043E-30
1 | 3000 | 3    | 47    | 3212  | 1          | 1         | 2.26E-24
2 | 10   | 21   | 1.1   | 8914  | 1.0002     | 1.0004    | 1.0E-6
2 | 3000 | 27   | 204   | 14277 | 1.000001   | 1         | 2.0E-6
3 | 10   | 44   | 1.5   | 310   | 0.05517    | 0.09169   | 2.795E-5
3 | 3000 | 33   | 1520  | 370   | 0.000191   | 0.000190  | 1.3487E-7
4 | 10   | 85   | 2.7   | 677   | 0.000001   | 1.223852  | -21.2043
4 | 3000 | 2000 | 248.5 | 15557 | -2.76E-10  | 1.569195  | 1770029.228
5 | 10   | 34   | 1.7   | 974   | 0.35734    | 6.623E-6  | 4.52571586
5 | 3000 | 47   | 47    | 3595  | 0.05453339 | 2.2089E-7 | 2755.9737
6 | 10   | 32   | 0.9   | 228   | 0.9999987  | 1.024121  | 1.012202172
6 | 3000 | 1186 | 182   | 8541  | 0.99999103 | 1.0000833 | 1.000041663

KEY: P: problem; N: dimension of x; Iter: iterations; T: time (s) taken; FE: number of function evaluations; X(1), X(N): first and last components of the computed solution; f: objective function value.

C. Remark on Numerical Results

A cursory look at the table reveals that the results obtained with the new conjugate gradient method are in close agreement with the exact solutions. Attainment of the global optimum is possible with the new method.
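A hypothetical driver combining the sketches above (new_cgm, problem2 and x0_p2 are assumed to be in scope from the previous listings) illustrates how such runs might be set up; iteration counts and timings will not reproduce Table 1 exactly, since the line search, the gradient approximation and the language (Python here, VB6 in the paper) all differ.

# Run the new CGM on the extended Rosenbrock problem with numerical gradients.
import numpy as np

def num_grad(f, x, eps=1e-7):
    """Forward-difference gradient, used in place of an analytic g(x)."""
    fx = f(x)
    g = np.empty_like(x)
    for j in range(x.size):
        e = np.zeros_like(x)
        e[j] = eps
        g[j] = (f(x + e) - fx) / eps
    return g

x_star = new_cgm(problem2, lambda x: num_grad(problem2, x), x0_p2)
print(problem2(x_star))   # should approach 0 (exact solution [x]_i = 1)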


VI. Conclusion

Herein, we propose a generalization of the multivariable Taylor's series for unconstrained nonlinear optimization. We further explored the new model with a view to establishing the attainment of the global solution. Testing the new algorithm on standard problems, including large-scale problems, confirms the possibility of obtaining the global optimum of the objective function with a small number of function evaluations. The results obtained also show that the method is amenable to mathematical analysis and computer automation.

References

[1] Y.H. Dai and Y. Yuan, "A nonlinear conjugate gradient method with a strong global convergence property", SIAM J. Optim. 10, 177-182 (1999).
[2] B.D. Bunday and G.R. Garside, "Optimization Methods in Pascal", Edward Arnold, London (1987).
[3] R. Fletcher and C.M. Reeves, "Function minimization by conjugate gradients", Computer J. 7, 149-154 (1964).
[4] E. Polak, "Optimization: Algorithms and Consistent Approximations", vol. 124 of Applied Mathematical Sciences, Springer, New York, NY, USA (1997).
[5] M. Hestenes and E. Stiefel, "Methods of conjugate gradients for solving linear systems", J. Res. Nat. Bur. Standards 49, 409-436 (1952).
[6] Y. Liu and C. Storey, "Efficient generalized conjugate gradient algorithms, Part 1: Theory", J. Optim. Theory Appl. 69, 129-137 (1991).
[7] E. Polak and G. Ribière, "Note sur la convergence de méthodes de directions conjuguées", Rev. Française Informat. Recherche Opérationnelle 3(16), 35-43 (1969).
[8] G. Zoutendijk, "Nonlinear programming, computational methods", in Integer and Nonlinear Programming, J. Abadie, Ed., pp. 37-86, North-Holland, Amsterdam, The Netherlands (1970).
[9] O.M. Bamigbola and C.N. Ejieji, "A higher-order conjugate gradient method for non-linear programming", ABACUS 33(2B), 394-405 (2006).
[10] M.J.D. Powell, "Restart procedures for the conjugate gradient method", Mathematical Programming 12(2), 241-254 (1977).
[11] Z.J. Shi and J. Guo, "A new algorithm of nonlinear conjugate gradient method with strong convergence", Math. Appl. Comput. 27(1), 1-16 (2008).

