

J. Comp. & Math. Sci. Vol.4 (2), 126-134 (2013)

C-Approach of ABS Algorithm for Fractional Programming Problem

SANJAY JAIN1, ADARSH MANGAL2 and SAROJ SHARMA3

1Department of Mathematical Sciences, Government College Ajmer, INDIA.
2Department of Mathematics, Government Engineering College Ajmer, INDIA.
3Research Scholar, Bhagwant University Ajmer, INDIA.

(Received on: April 19, 2013)

ABSTRACT

The ABS algorithm has been used widely for solving linear and nonlinear systems of equations comprising a large number of constraints and variables. In this paper, we use the ABS algorithm with a C approach to solve fractional programming problems. First, the fractional programming problem is reduced to a linear programming problem by suitable substitutions; the solution of the original problem can then be recovered from the solution of the transformed linear programming problem, provided that degeneracy is treated correctly. The result has been verified against the traditional simplex method for the fractional programming problem, the Charnes-Cooper method and the C approach.

Keywords: ABS algorithm, fractional programming problem, linear programming problem, C-approach.

AMS Subject Classification: 65F30.

INTRODUCTION

ABS algorithms were introduced by Abaffy, Broyden and Spedicato to solve determined or underdetermined linear systems and have later been extended to linear least squares, nonlinear equations, optimization problems, integer (Diophantine) equations and linear programming problems. The class of ABS methods unifies most existing methods for solving linear systems and provides a variety of alternative ways of implementing a specific algorithm. Extensive computational experience has shown that ABS methods can be implemented in a stable way and are often more accurate than the corresponding traditional algorithms.

Linear programming is one of the most important problems in optimization. Feng E. et al.1 presented the application of the ABS algorithm to the simplex method and the dual simplex method. They discussed the ABS formulation of the stopping criterion, the search direction, the minimal rule to determine the vectors entering and leaving the basis matrix, and the updating of the Abaffian matrix after a basis vector change. Majid Adib et al.2 described a class of ABS-type methods whose i-th iterate solves the first 2i equations, so that termination is achieved in at most m/2 steps. Hamid Esmaeili et al.3 gave a representation of the solutions of a system of m linear integer inequalities in n variables, m ≤ n, with full rank coefficient matrix; they also applied this result to solve integer linear programming problems with m ≤ n inequalities. Emilio Spedicato et al.4 developed a method, called the IABS-MPVT algorithm, for solving a system of linear equations and linear inequalities. This algorithm is characterized by first solving the system of linear equations via the ABS algorithms and then solving the unconstrained minimization problem obtained by substituting the ABS general form of the solutions into the system of linear inequalities, using a parallel algorithm for the minimization stage. Hamid Esmaeili et al.5 used the ABS algorithms for linear real systems to solve full rank linear inequalities and linear programming problems where the number of inequalities is less than or equal to the number of variables; they obtained the conditions of both optimality and unboundedness in the context of ABS algorithms. Zun-Quan Xia et al.6 developed a special ABS algorithm, "ABS Algorithms for Diophantine Linear Equations and Integer LP Problems", for solving such equations which is effective in computation and storage and does not require the computation of the greatest common divisor; they also discussed the ILP problem with upper and lower bounds on the variables using this result. Emilio Spedicato et al.7 discussed ABS methods for continuous and integer linear equations and optimization. Adarsh Mangal et al.8 gave a survey of ABS methods for solving optimization problems, including various types of generalizations.

The linear fractional programming problem seeks to optimize an objective function of non-negative variables in quotient form, with linear functions in the numerator and denominator, subject to a set of linear constraints. Charnes-Cooper9, Kantiswarup10, Chadha11,12, Jain13 and many other researchers gave different methods for solving the linear fractional programming problem.

PRELIMINARIES

Let the fractional programming problem under consideration be of the form:

max Z = (c'x + α)/(C'x + β)        (1)

s.t.   Ax = b
and    x ≥ 0

Constraints in the fractional programming problem may carry any of the signs (≤, =, ≥). By introducing slack and surplus variables we can always convert them into strict equations, so the constraints take the form Ax = b, where x, c and C are n × 1 column vectors, A is an activity matrix of order m × n and b is a column vector of order m × 1. The primes ( ' ) over c and C denote the transpose of a vector, and α, β are scalars. Further, it is assumed that the constraint set

S = { x : Ax = b, x ≥ 0 }

is nonempty and bounded.

Here, we solve the fractional programming problem by four methods, namely (i) the simplex method, (ii) the Charnes-Cooper method, (iii) the ABS algorithm and (iv) the C approach. The method is explained by taking the same example and solving it by each cited method one by one. One can easily see that if the degeneracy is treated properly, then the feasible solution given by the ABS method becomes an optimal solution after n iterations, where n is the rank of the matrix.

METHOD

Here we use the C approach of the ABS algorithm for solving the FPP. The detailed C program of the ABS algorithm is as follows:

//Program to solve LPP problems using the ABS method.
#include<stdio.h>
#include<conio.h>
#include<math.h>

void multiply(float a[][10],float b[][10],float c[][10],int r1,int c1,int c2);
void trans(float a[][10],float t[][10],int r,int c);
void colarr(float a[][10],float col[][10],int x,int r,int c);
void identity(float idt[][10],int size);

int main()
{
    float A[10][10],b[10][10],x[10][10],H[10][10],s[10][10],a[10][10],z[10][10],p[10][10],HT[10][10],ZT[10][10],Y[10][10],X[10][10],P[10][10];
    float Q,T,p1[10][10],PT[10][10],PT1[10][10],PT2[10][10],AT[10][10],BU[10][10],AU[10][10];
    int r1,c1,i,j,k,l,rank;
    // clrscr();
    printf("\n\n\t\t\tBasic ABS Algorithm for LPP");
    printf("\n\nInput Section : ");
    //Input of matrix A
    printf("\n\nEnter Size of Matrix A : ");
    printf("\n\n\t\t Rows : ");
    scanf("%d",&r1);
    printf("\n\n\t\t Cols : ");
    scanf("%d",&c1);
    rank=(r1<c1)?r1:c1;
    printf("\n\nEnter Elements of Matrix A : ");
    for(i=0;i<r1;i++)
    {
        for(j=0;j<c1;j++)
        {
            printf("\n\n\tElement [ %d ] [ %d ] : ",i+1,j+1);
            scanf("%f",&A[i][j]);
        }
    }
    //Input of vector b
    printf("\n\nEnter Elements of Matrix b : ");
    j=0;
    for(i=0;i<r1;i++)
    {
        printf("\n\n\tElement[ %d ] [ %d ] : ",i+1,j+1);
        scanf("%f",&b[i][0]);
    }
    if(r1>c1)
    {
        //if rows exceed columns, replace b by A'b
        trans(A,AT,r1,c1);
        multiply(AT,b,BU,c1,r1,1);
        for(k=0;k<rank;k++)
        {
            b[k][0]=BU[k][0];
        }
    }
    else
    {
        //store the transpose of A in A, so the column extracted below is the i-th row a_i of the original A
        trans(A,AU,r1,c1);
        for(k=0;k<c1;k++)
        {
            for(j=0;j<r1;j++)
            {
                A[k][j]=AU[k][j];
            }
        }
    }
    for(i=0;i<r1;i++)
    {
        x[i][0]=0;              //starting point x1 = 0
    }
    identity(H,r1);             //H1 = identity matrix
    i=0;
    printf("\n\nOutput Section : ");
    while(i<rank)
    {
        colarr(A,a,i+1,r1,c1);          //gained matrix a (the i-th equation vector a_i)
        multiply(H,a,s,r1,r1,1);        //gained matrix s = H*a
        trans(H,HT,r1,r1);              //gained matrix HT
        for(k=0;k<r1;k++)
        {
            z[k][0]=a[k][0];            //choice z_i = a_i
        }
        multiply(HT,a,p,r1,r1,1);       //gained matrix p (search direction), p = H'*a
        trans(z,ZT,r1,1);               //gained transpose of z (matrix ZT)
        multiply(ZT,s,Y,1,r1,1);        //gained matrix Y = z'*H*a
        multiply(ZT,x,X,1,r1,1);
        T=X[0][0]-b[i][0];
        if(s[i][0]==0 && T==0)
        {
            i++;                        //dependent equation: skip it (increment added so the loop advances)
            continue;
        }
        else if(s[i][0]==0 && T!=0)
        {
            printf("\n\nSystem is Incompatible....");
        }
        else
        {
            if(Y[0][0]!=0)
            {
                multiply(ZT,x,X,1,r1,1);
                multiply(ZT,p,P,1,r1,1);
                Q=(X[0][0]-b[i][0])/P[0][0];    //step size
                for(k=0;k<r1;k++)
                {
                    p1[k][0]=Q*p[k][0];
                }
                for(k=0;k<r1;k++)
                {
                    x[k][0]-=p1[k][0];          //new approximation x = x - step*p
                }
                trans(p,PT,r1,1);
                multiply(p,PT,PT1,r1,1,r1);
                multiply(PT,p,PT2,1,r1,1);
                for(k=0;k<r1;k++)
                {
                    for(l=0;l<r1;l++)
                    {
                        PT1[k][l]/=PT2[0][0];
                    }
                }
                for(k=0;k<r1;k++)
                {
                    for(l=0;l<r1;l++)
                    {
                        H[k][l]-=PT1[k][l];     //update H: H = H - p*p'/(p'*p)
                    }
                }
            }
        }
        printf("\n\nValue of x [%d]: ",i+1);
        for(k=0;k<r1;k++)
        {
            printf("%f ",x[k][0]);
        }
        i++;
    }
    printf("\n\n Value of x : ");
    for(i=0;i<r1;i++)
    {
        printf("%f ",x[i][0]);
    }
    getch();
    return 0;
}
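The listing above declares but does not reproduce the bodies of the helper routines multiply, trans, colarr and identity. A minimal sketch of what they are assumed to do, written only to match the call sites above (the names and array layout come from the listing, the bodies themselves do not appear in the paper), is:

//multiply: c = a*b, where a is r1 x c1 and b is c1 x c2
void multiply(float a[][10],float b[][10],float c[][10],int r1,int c1,int c2)
{
    int i,j,k;
    for(i=0;i<r1;i++)
        for(j=0;j<c2;j++)
        {
            c[i][j]=0;
            for(k=0;k<c1;k++)
                c[i][j]+=a[i][k]*b[k][j];
        }
}

//trans: t = a', where a is r x c
void trans(float a[][10],float t[][10],int r,int c)
{
    int i,j;
    for(i=0;i<r;i++)
        for(j=0;j<c;j++)
            t[j][i]=a[i][j];
}

//colarr: copy the x-th column (1-based) of a, which has r rows, into the column array col
void colarr(float a[][10],float col[][10],int x,int r,int c)
{
    int i;
    for(i=0;i<r;i++)
        col[i][0]=a[i][x-1];
}

//identity: idt = identity matrix of the given size
void identity(float idt[][10],int size)
{
    int i,j;
    for(i=0;i<size;i++)
        for(j=0;j<size;j++)
            idt[i][j]=(i==j)?1:0;
}

With these definitions the program compiles as a single translation unit; note that conio.h and getch() are Turbo C/Borland extensions and may simply be removed on other compilers.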

NUMERICAL EXAMPLE

The following FPP is considered for solving and explaining our procedure:

max Z = (3x + 9y)/(x − y + 3)        (2)

s.t.   x + 4y ≤ 8
       x + 2y ≤ 4
and    x, y ≥ 0

1. Solution by Simplex Method: After introducing the slack variables S1 ≥ 0 and S2 ≥ 0, the given FPP becomes:

max Z = Z(1)/Z(2) = (3x + 9y)/(x − y + 3)

s.t.   x + 4y + S1 + 0S2 = 8
       x + 2y + 0S1 + S2 = 4
and    x, y, S1, S2 ≥ 0

By using the simplex method to solve the FPP, we get the final optimal table as

                    cj(1)            3        9        0        0
                    cj(2)            1       -1        0        0
Basic Var.   cB(1)   cB(2)   xB      x        y        S1       S2
    y          9      -1      2     1/4       1       1/4       0
    S2         0       0      0     1/2       0      -1/2       1
Z(1) = 18                  Δj(1)   -3/4       0       9/4       0
Z(2) = 1                   Δj(2)  -13/4     -10      -1/4       0
                           Δj     231/4     180      27/4       0
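As a check on the last three rows (a reading consistent with Swarup's method10, not spelled out in the paper), the combined index Δj is obtained from the two partial indices and the current values of the numerator and denominator; for the x-column, for instance:

Δj = Z(2)·Δj(1) − Z(1)·Δj(2),   so   Δx = 1·(−3/4) − 18·(−13/4) = 231/4.

All entries of the Δj row are non-negative, which signals that the current basis is optimal.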

Z = Z(1)/Z(2) = 18/1 = 18.

The optimal solution of the given FPP comes out to be x = 0, y = 2 and max Z = 18.

2. Solution by Charnes-Cooper Method: Charnes and Cooper gave a simple technique in 1962 to solve a linear fractional programming problem. They reduced the linear fractional programming problem to a linear program by a suitable substitution.

Let 1/(x − y + 3) = y0, y1 = x·y0 and y2 = y·y0; then problem (2) takes the form:

max Z = 3y1 + 9y2

s.t.   −8y0 + y1 + 4y2 ≤ 0
       −4y0 + y1 + 2y2 ≤ 0
       3y0 + y1 − y2 = 1
and    y0, y1, y2 ≥ 0
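The reduction used here is the general Charnes-Cooper transformation; a sketch of it for problem (1), under the usual assumption that the denominator C'x + β is positive on S, is as follows. Put y0 = 1/(C'x + β) and yj = y0·xj (so that x = y/y0). Then

max Z = (c'x + α)/(C'x + β)   subject to   Ax ≤ b, x ≥ 0

becomes the linear program

max Z = c'y + α·y0
s.t.   Ay − b·y0 ≤ 0
       C'y + β·y0 = 1
       y ≥ 0, y0 ≥ 0.

For problem (2), c' = (3, 9), α = 0, C' = (1, −1), β = 3 and b = (8, 4)', which gives exactly the system above.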

Now, introducing slack variables y3, y4 ≥ 0 and an artificial variable W1 ≥ 0 in the above linear programming problem, and minimizing the infeasibility form W = W1 by the two-phase simplex method, we obtain the final optimal table as

Basic Variable     y0      y1      y2      y3      y4      W1     Constant
    y2             0      11/4     1      3/4      0       2        2
    y4             0      1/2      0     -1/2      1       0        0
    y0             1      5/4      0      1/4      0       1        1
    Z              0      87/4     0     27/4      0      18       18

It is very much clear from the final table that y1 = 0, y2 = 2 and y0 = 1. So an optimal solution of the given FPP is

x = y1/y0 = 0/1 = 0,   y = y2/y0 = 2/1 = 2   and   max Z = (3x + 9y)/(x − y + 3) = 18.

ABS Algorithm

1) Let x1 ∈ Rn be an arbitrary vector and let H1 ∈ Rn,n be an arbitrary nonsingular matrix.

2) Cycle for i = 1, ..., n:

a) Let zi ∈ Rn be a vector, arbitrary save for the condition

       ziT Hi ai ≠ 0                                   (3)

   Compute the search vector pi:

       pi = HiT zi                                     (4)

b) Compute the step size αi:

       αi = (aiT xi − bi)/(piT ai)                     (5)

   which is well defined with regard to (3) and (4).

c) Compute the new approximation of the solution using

       xi+1 = xi − αi pi                               (6)

   If i = n, stop: xn+1 solves the system Ax = b.

d) Let wi ∈ Rn be a vector, arbitrary save for the condition

       wiT Hi ai = 1                                   (7)

   and update the matrix Hi:

       Hi+1 = Hi − Hi ai wiT Hi                        (8)

There are three eligible parameters in the general version of the ABS algorithm: the matrix H1 and the two systems of vectors zi and wi. New algorithms, or new formulations of the classic algorithms, can be created by a suitable choice of these parameters. Abaffy et al. have studied the above scheme for a variety of choices of zi and wi, calculating the storage and arithmetic operations required to solve the system with various kinds of matrices.
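The program in the METHOD section implements exactly this cycle with the particular choices H1 = I and zi = ai, updating H through the search vector. A compact, self-contained sketch of one such iteration (an illustration of equations (3)-(8) with these choices, not code taken from the paper), applied to the 3 × 3 system solved in Section 3 below, is:

#include <stdio.h>

#define N 3

/* One ABS iteration for the i-th equation of an N x N system Ax = b,
   with the choice z_i = a_i and the rank-one update H <- H - p p'/(p'p)
   used in the listing above (p = H'a_i is the search vector of eq. (4)). */
static void abs_step(double A[N][N], double b[N], double x[N], double H[N][N], int i)
{
    double a[N], p[N], num = 0.0, den = 0.0, pp = 0.0, alpha;
    int j, k;

    for (j = 0; j < N; j++) a[j] = A[i][j];            /* a_i: i-th row of A          */
    for (j = 0; j < N; j++) {                          /* p = H' a_i          (eq. 4) */
        p[j] = 0.0;
        for (k = 0; k < N; k++) p[j] += H[k][j] * a[k];
    }
    for (j = 0; j < N; j++) { num += a[j] * x[j]; den += p[j] * a[j]; }
    alpha = (num - b[i]) / den;                        /* step size           (eq. 5) */
    for (j = 0; j < N; j++) x[j] -= alpha * p[j];      /* new approximation   (eq. 6) */

    for (j = 0; j < N; j++) pp += p[j] * p[j];
    for (j = 0; j < N; j++)                            /* Abaffian update     (eq. 8) */
        for (k = 0; k < N; k++) H[j][k] -= p[j] * p[k] / pp;
}

int main(void)
{
    /* the system solved in Section 3 below:
       -8y0 + y1 + 4y2 = 0, -4y0 + y1 + 2y2 = 0, 3y0 + y1 - y2 = 1 */
    double A[N][N] = { { -8, 1, 4 }, { -4, 1, 2 }, { 3, 1, -1 } };
    double b[N] = { 0, 0, 1 };
    double x[N] = { 0, 0, 0 };
    double H[N][N] = { { 1, 0, 0 }, { 0, 1, 0 }, { 0, 0, 1 } };   /* H1 = I */
    int i;

    for (i = 0; i < N; i++) abs_step(A, b, x, H, i);
    printf("y0 = %g, y1 = %g, y2 = %g\n", x[0], x[1], x[2]);      /* prints 1, 0, 2 */
    return 0;
}

Running the three iterations reproduces, in double precision, the matrices H2, H3 and the final vector x4 = (1, 0, 2)T obtained by hand in the next subsection.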

−8 A = −4   3

wiT H i ai = 1, and to update the matrix

8)

Hi :

1 1

4 2  − 1

1 H1 =  0   0

w ∈ Rn

d) Let i be a vector arbitrary save for the condition : 7)

1

Z1T H1a1

= [ −8

0 1 0

1

H i +1 = H i − H i ai wiT H i

There are three eligible parameters in the general version of the ABS algorithm:

H1 and two systems of z w vectors i and i . The new algorithms or Matrix

a new formulation of the classic algorithms can be created by a suitable choice of these parameters. Abaffy et al. have studied the

z above system for a variety of choices of i wi , calculating the storage and and arithmetic operations which are required to solve the system with various kinds of matrices.

0 B = 0    1  0 0  1 

1 4] 0   0

0 1 0

0  −8 0  1    1   4 

= 64 + 1 + 16 = 81 1 P1 = H1T Z1 =  0   0

0 1 0

0  −8 0  1    1   4 

 a T x − b1  x2 = x1 −  1 T1  p1  a1 p1  0  −8 0   x2 = 0 − 0  1  =  0         4   0   0  Updating H

Journal of Computer and Mathematical Sciences Vol. 4, Issue 2, 30 April, 2013 Pages (80-134)


133

Sanjay Jain, et al., J. Comp. & Math. Sci. Vol.4 (2), 126-134 (2013)

 aT x − b3  x4 = x3 −  3 T3  P3  a3 P3 

 P PT  H 2 = H1 −  1T 1   P1 P1  17 / 81 H 2 =  8 / 81   32 / 81 P2 =

H 2T Z 2

8 / 81 80 / 81 − 4 / 81

17 / 81 =  8 / 81   32 / 81

8 / 81 80 / 81 − 4 / 81

32 / 81  − 4 / 81  65 / 81  32 / 81   − 4  − 4 / 81  1    65 / 81   2 

 4 / 81  P2 =  40 / 81     − 2 / 81

x3 = x2 −

 G2T x2 − b2  T  a2 P2

0 0 0 x3 =  0  −  0  =  0         0   0   0  Update H

P PT  H 3 = H 2 −  2T 2   P2 P2   0.2 H3 =  0   0.4

P3 =  0.2 =  0   0.4

0 0 0

0.4  0   0.8 

H 3T Z3 0 0 0

0.4   3   0.2  0  1  =  0      0.8   − 1  0.4 

1 x4 =  0     2 

y0 = 1 y2 = 0

  P2 

y3 = 2
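As a quick check, x4 satisfies all three equations:

−8(1) + 1(0) + 4(2) = 0,
−4(1) + 1(0) + 2(2) = 0,
 3(1) + 1(0) − 1(2) = 1,

so A x4 = b.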

4. Solution by ABS Algorithm with C-Approach

Basic ABS Algorithm for LPP

Input Section :
Enter Size of Matrix A :
    Rows : 3
    Cols : 3
Enter Elements of Matrix A :
    Element [ 1 ] [ 1 ] : -8
    Element [ 1 ] [ 2 ] : 1
    Element [ 1 ] [ 3 ] : 4
    Element [ 2 ] [ 1 ] : -4
    Element [ 2 ] [ 2 ] : 1
    Element [ 2 ] [ 3 ] : 2
    Element [ 3 ] [ 1 ] : 3
    Element [ 3 ] [ 2 ] : 1
    Element [ 3 ] [ 3 ] : -1
Enter Elements of Matrix b :
    Element[ 1 ] [ 1 ] : 0
    Element[ 2 ] [ 1 ] : 0
    Element[ 3 ] [ 1 ] : 1

Output Section :
Value of x [1]: 0.000000 0.000000 0.000000
Value of x [2]: 0.000000 0.000000 0.000000
Value of x [3]: 1.000002 -0.000000 2.000005
Value of x : 1.000002 -0.000000 2.000005
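Reading the output vector as (y0, y1, y2) and recovering the original variables as in the Charnes-Cooper solution gives

y0 ≈ 1,   y1 ≈ 0,   y2 ≈ 2
x = y1/y0 = 0,   y = y2/y0 = 2,   max Z = (3x + 9y)/(x − y + 3) = 18,

which agrees with the three hand computations above; the small deviations such as 1.000002 are ordinary rounding in the single-precision float arithmetic of the program.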

CONCLUSION

We have presented a numerical experiment with the ABS method for the fractional programming problem through the C approach, and have verified the result against the traditional simplex method and the Charnes-Cooper method. We found that if the degeneracy is treated properly, the feasible solution given by the ABS method becomes an optimal solution after n iterations, where n is the rank of the matrix. An illustrative example has been used to demonstrate the advantage of the new approach.

REFERENCES

1. Feng E., Wang X. and Wang X.L., On the application of the ABS algorithm to linear programming and linear complementarity, OMS, 8, 133-142 (1997).
2. Adib Majid, Mahdavi-Amiri Nezam and Spedicato Emilio, ABS type methods for solving m linear equations in m/2 steps, QDMSIA, 8 (2000).
3. Esmaeili Hamid, Mahdavi-Amiri Nezam and Spedicato Emilio, ABS solution of a class of linear integer inequalities and integer LP problems, QDMSIA, 3 (2001).
4. Spedicato Emilio, Li-Ping Pang, Xia Zun-Quan and Wang Wei, A method for solving linear inequality system, QDMSIA, 19 (2004).
5. Esmaeili Hamid, Mahdavi-Amiri Nezam and Spedicato Emilio, Explicit ABS solution of a class of linear inequality system and LP problems, Bulletin of the Iranian Mathematical Society, 30 (2), 21-38 (2004).
6. Xia Zun-Quan and Zou Mei-Feng, ABS algorithm for Diophantine linear equations and integer LP problems, QDMSIA, 3 (2004).
7. Spedicato Emilio, Bodon Elena, Xia Zunquan and Mahdavi-Amiri Nezam, ABS methods for continuous and integer linear equations and optimization, CEJOR, 18, 73-95 (2010).
8. Mangal Adarsh and Sharma Saroj, ABS methods to solve optimization problems: A review, Research Journal of Mathematical and Statistical Sciences, 1(2), 19-21 (2013).
9. Charnes A. and Cooper W.W., Programming with linear fractional functionals, Naval Research Logistics Quarterly, 9, 181-186 (1962).
10. Swarup K., Linear fractional functionals programming, Operations Research, 13, 1029-1036 (1965).
11. Chadha S.S., A linear fractional program with homogeneous constraint, OPSEARCH, 36, 390-398 (1999).
12. Chadha S.S. and Chadha V., Linear fractional programming and duality, doi 10.1007/s10100-007-0021-3 (2007).
13. Jain S., Mangal A. and Parihar P., Solution of fuzzy linear fractional programming problem, OPSEARCH, 48, 139-135 (2011).


