Solving Eigenvalue Problem Using Semi-Analytic Technique


Luma. N. M. Tawfiq & Hassan. Z. Mjthap

Department of Mathematics, College of Education Ibn Al-Haitham, Baghdad University.

Abstract
In this paper, we present a semi-analytic technique for solving the eigenvalue problem using an osculatory interpolation polynomial. The technique finds the eigenvalue and the corresponding nonzero eigenvector, which represent the solution of the equation in a certain domain. An illustrative example is presented, which confirms the theoretical predictions, together with a comparison between the suggested technique and other methods.
Keywords: ODE, BVP, interpolation, eigenvalue.

1. Introduction
The present paper is concerned with a semi-analytic technique for solving eigenvalue problems using an osculatory interpolation polynomial. The eigenvalue problem (EP) [1] involves finding an eigenvalue λ and a corresponding nonzero eigenvector that satisfy the problem. Eigenvalue problems arise in a variety of problems in science and engineering. For example, quadratic eigenvalue problems arise in oscillation analysis with damping [2], [3] and in stability problems in fluid dynamics [4], and the three-dimensional (3D) Schrödinger equation can result in a cubic eigenvalue problem [5]. Similarly, the study of higher-order systems of differential equations leads to a matrix polynomial of degree greater than one [5]. However, such applications are more complicated than standard and generalized eigenvalue problems. One reason is the difficulty of solving the EPs. Polynomial eigenvalue problems are typically solved by linearization [6], [7], which converts the k-th order n × n matrix polynomial into a larger kn × kn linear eigenvalue problem. Other methods, such as the Arnoldi shift-and-invert strategy [8], can be used when several eigenvalues are desired. A disadvantage of the shift-and-invert Arnoldi methods is that a change of the shift parameter requires a new Krylov subspace to be built. Another approach is a direct solution obtained by means of the Jacobi-Davidson method [9], although this method has been investigated far less extensively. In the present paper, we propose a series solution of eigenvalue problems by means of the osculatory interpolation polynomial. The proposed method enables us to obtain the eigenvalue and the corresponding nonzero eigenvector of a 2nd order BVP.



The remainder of the paper is organized as follows. In the next section, we introduce the osculatory interpolation polynomial. In Section 3, we present the suggested technique. An illustrative example is given in Section 4. Finally, conclusions are presented in Section 5.

2. Osculatory Interpolation Polynomial
In this paper we use the two-point osculatory interpolation polynomial; essentially this is a generalization of interpolation using Taylor polynomials. The idea is to approximate a function y by a polynomial P in which values of y and any number of its derivatives at given points are fitted by the corresponding function values and derivatives of P [10]. We are particularly concerned with fitting function values and derivatives at the two end points of a finite interval, say [0, 1], i.e., P^{(j)}(x_i) = y^{(j)}(x_i), j = 0, …, n, x_i = 0, 1. A useful and succinct way of writing the osculatory interpolant P_{2n+1} of degree 2n + 1 was given, for example, by Phillips [11] as:

P_{2n+1}(x) = \sum_{j=0}^{n} { y^{(j)}(0) q_j(x) + (-1)^j y^{(j)}(1) q_j(1-x) },    (1)

q_j(x) = (x^j / j!)(1-x)^{n+1} \sum_{s=0}^{n-j} \binom{n+s}{s} x^s = Q_j(x) / j!,    (2)

so that (1) with (2) satisfies:

y^{(j)}(0) = P_{2n+1}^{(j)}(0),  y^{(j)}(1) = P_{2n+1}^{(j)}(1),  j = 0, 1, 2, …, n,

implying that P_{2n+1} agrees with the appropriately truncated Taylor series for y about x = 0 and x = 1. We observe that (1) can be written directly in terms of the Taylor coefficients a_j and b_j of y about x = 0 and x = 1, respectively, as:

P_{2n+1}(x) = \sum_{j=0}^{n} { a_j Q_j(x) + (-1)^j b_j Q_j(1-x) }.    (3)
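The following short Python/SymPy sketch is our own illustration and not part of the original paper: it builds q_j from equation (2), assembles P_{2n+1} from equation (1) for given endpoint derivative values, and checks the osculation conditions at x = 0 and x = 1.

```python
# Minimal sketch of the two-point osculatory interpolant of eqs. (1)-(2).
# This is an illustrative reconstruction, not the authors' MATLAB code.
import sympy as sp

x = sp.symbols('x')

def q(j, n):
    # q_j(x) = (x^j / j!) (1-x)^(n+1) * sum_{s=0}^{n-j} C(n+s, s) x^s, eq. (2)
    return (x**j / sp.factorial(j)) * (1 - x)**(n + 1) * \
           sum(sp.binomial(n + s, s) * x**s for s in range(n - j + 1))

def osculatory(d0, d1, n):
    # P_{2n+1}(x) = sum_j { y^(j)(0) q_j(x) + (-1)^j y^(j)(1) q_j(1-x) }, eq. (1)
    return sp.expand(sum(d0[j] * q(j, n) +
                         (-1)**j * d1[j] * q(j, n).subs(x, 1 - x)
                         for j in range(n + 1)))

# Example: fit y = exp(x) using values and first derivatives at x = 0, 1 (n = 1).
y = sp.exp(x)
n = 1
d0 = [sp.diff(y, x, j).subs(x, 0) for j in range(n + 1)]
d1 = [sp.diff(y, x, j).subs(x, 1) for j in range(n + 1)]
P = osculatory(d0, d1, n)

# Osculation conditions: P^(j)(0) = y^(j)(0) and P^(j)(1) = y^(j)(1), j = 0..n.
for j in range(n + 1):
    assert sp.simplify(sp.diff(P, x, j).subs(x, 0) - d0[j]) == 0
    assert sp.simplify(sp.diff(P, x, j).subs(x, 1) - d1[j]) == 0
print(sp.simplify(P))
```

The same routine, with the endpoint derivatives left as symbols, produces the interpolant written as equation (3) once the Taylor coefficients a_j and b_j are substituted.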

3. Solution of the Second Order Eigenvalue Problem
In this section, we suggest a semi-analytic technique based on the osculatory interpolating polynomial P_{2n+1} and the Taylor series expansion to solve the 2nd order eigenvalue problem. A general form of the 2nd order BVP is:

y''(x) = λ f(x, y, y'),  0 ≤ x ≤ 1,    (4a)

subject to one of the following boundary conditions (BC):

in the case of Dirichlet BC: y(0) = A, y(1) = B, where A, B ∈ R;    (4b)
in the case of Neumann BC: y'(0) = A, y'(1) = B, where A, B ∈ R;    (4c)
in the case of Cauchy or mixed BC: y(0) = A, y'(1) = B, or y'(0) = A, y(1) = B, where A, B ∈ R;    (4d)

where f is, in general, a nonlinear function of its arguments. To solve the problem by the suggested method, we carry out the following steps.

Step one: Evaluate the Taylor series of y(x) about x = 0:

y = \sum_{i=0}^{∞} a_i x^i = a_0 + a_1 x + \sum_{i=2}^{∞} a_i x^i,    (5)

where y(0) = a_0, y'(0) = a_1, y''(0)/2! = a_2, …, y^{(i)}(0)/i! = a_i, i = 3, 4, …, and evaluate the Taylor series of y(x) about x = 1:

y = \sum_{i=0}^{∞} b_i (x-1)^i = b_0 + b_1 (x-1) + \sum_{i=2}^{∞} b_i (x-1)^i,    (6)

where y(1) = b_0, y'(1) = b_1, y''(1)/2! = b_2, …, y^{(i)}(1)/i! = b_i, i = 3, 4, …

Step two: Insert the series (5) into equation (4a), put x = 0, and equate the coefficients of powers of x to obtain a_2. Insert the series (6) into equation (4a), put x = 1, and equate the coefficients of powers of (x-1) to obtain b_2.

Step three: Differentiate equation (4a) with respect to x to get a new equation, say (7):

y'''(x) = λ df(x, y, y')/dx.    (7)

Then insert the series (5) into equation (7), put x = 0, and equate the coefficients of powers of x to obtain a_3; again, insert the series (6) into equation (7), put x = 1, and equate the coefficients of powers of (x-1) to obtain b_3.

Step four: Iterate the above process to obtain a_4, b_4, then a_5, b_5, and so on, that is, to get a_i and b_i for all i ≥ 2. The resulting equations can be solved using MATLAB (version 7.10).

Step five: The notation implies that the coefficients a_i and b_i, i ≥ 2, depend only on the unknowns a_0, a_1, b_0, b_1, and λ. The BC fixes two of these, so only two unknown coefficients and λ remain. We can now construct the two-point osculatory interpolating polynomial P_{2n+1}(x) by inserting these coefficients (the a_i's and b_i's) into equation (3).

Step six: To find the unknown coefficients, integrate equation (4a) on [0, x] to obtain:

y'(x) – y'(0) – λ \int_0^x f(s, y, y') ds = 0,    (8a)

and integrate equation (8a) again on [0, x] to obtain:

y(x) – y(0) – y'(0) x – λ \int_0^x (x – s) f(s, y, y') ds = 0.    (8b)


Step seven: Putting x = 1 in equations (8) gives:

b_1 – a_1 – λ \int_0^1 f(s, y, y') ds = 0,    (9a)

and

b_0 – a_0 – a_1 – λ \int_0^1 (1 – s) f(s, y, y') ds = 0.    (9b)

Step eight: Using P_{2n+1}(x), constructed in step five, as a replacement for y(x), equations (9) contain only two unknown coefficients from among a_0, a_1, b_0, b_1, together with λ. If the BC is a Dirichlet boundary condition, that is, a_0 and b_0 are given, then equations (9) have the unknowns a_1, b_1, and λ. If the BC is Neumann, that is, a_1 and b_1 are given, then equations (9) have the unknowns a_0, b_0, and λ. Finally, if the BC is a Cauchy or mixed condition, i.e., a_0 and b_1 are given, or a_1 and b_0 are given, then equations (9) have the unknowns a_1, b_0, and λ, or a_0, b_1, and λ.

Step nine: In the case of Dirichlet BC, we have:

F(a_1, b_1, λ) = b_1 – a_1 – λ \int_0^1 f(s, y, y') ds = 0,    (10a)

G(a_1, b_1, λ) = b_0 – a_0 – a_1 – λ \int_0^1 (1 – s) f(s, y, y') ds = 0,    (10b)

(∂F/∂a_1)(∂G/∂b_1) – (∂F/∂b_1)(∂G/∂a_1) = 0.    (10c)

In the case of Neumann BC, we have:

F(a_0, b_0, λ) = b_1 – a_1 – λ \int_0^1 f(s, y, y') ds = 0,    (11a)

G(a_0, b_0, λ) = b_0 – a_0 – a_1 – λ \int_0^1 (1 – s) f(s, y, y') ds = 0,    (11b)

(∂F/∂a_0)(∂G/∂b_0) – (∂F/∂b_0)(∂G/∂a_0) = 0.    (11c)

In the case of mixed BC, we have:

F(a_1, b_0, λ) = b_1 – a_1 – λ \int_0^1 f(s, y, y') ds = 0,    (12a)

G(a_1, b_0, λ) = b_0 – a_0 – a_1 – λ \int_0^1 (1 – s) f(s, y, y') ds = 0,    (12b)

(∂F/∂a_1)(∂G/∂b_0) – (∂F/∂b_0)(∂G/∂a_1) = 0,    (12c)

or

F(a_0, b_1, λ) = b_1 – a_1 – λ \int_0^1 f(s, y, y') ds = 0,    (13a)

G(a_0, b_1, λ) = b_0 – a_0 – a_1 – λ \int_0^1 (1 – s) f(s, y, y') ds = 0,    (13b)

(∂F/∂a_0)(∂G/∂b_1) – (∂F/∂b_1)(∂G/∂a_0) = 0.    (13c)



So, we can find these coefficients by solving the system of algebraic equations (10), (11), (12), or (13) using MATLAB; inserting the values of the unknown coefficients into equation (3), equation (3) then represents the solution of the problem.

Note: Extensive computations have shown that this generally provides a more accurate polynomial representation for a given n.
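To make steps one to four concrete, the following SymPy sketch is our own illustration (not the authors' MATLAB code): it inserts a truncated form of the series (5) into equation (4a) and equates coefficients of powers of x to express a_2, a_3, … in terms of a_0, a_1, and λ. The right-hand side f(x, y, y') = y^2 is only a placeholder; the same loop applied to powers of (x – 1) with the series (6) yields the b_i.

```python
# Sketch of steps two-four (our own illustration): generate the higher Taylor
# coefficients about x = 0 for y'' = lambda*f with the placeholder f = y**2.
import sympy as sp

x = sp.symbols('x')
lam = sp.Symbol('lambda')
N = 5                                     # truncation order of the series

a = sp.symbols('a0:%d' % (N + 1))         # a0 ... a5, with a0, a1 left symbolic
y_series = sum(a[i] * x**i for i in range(N + 1))        # truncated eq. (5)

# Residual of eq. (4a) with the placeholder right-hand side f(x, y, y') = y^2.
residual = sp.expand(sp.diff(y_series, x, 2) - lam * y_series**2)

# Equate coefficients of x^0, x^1, ... to zero and solve for a2, a3, ...
coeffs = {}
for k in range(N - 1):
    eq = residual.coeff(x, k).subs(coeffs)
    coeffs[a[k + 2]] = sp.simplify(sp.solve(eq, a[k + 2])[0])

for sym, val in coeffs.items():
    print(sym, '=', val)
# For this placeholder f one gets, e.g., a2 = lambda*a0**2/2, a3 = lambda*a0*a1/3.
```

Each a_i produced this way depends only on a_0, a_1, and λ, exactly as required in step five.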

4. Example
In this section, we illustrate the suggested technique on an eigenvalue problem. The algorithm was implemented in MATLAB 7.10. The bvp4c solver of MATLAB has been modified accordingly so that it can solve this class of eigenvalue problems as effectively as it previously solved BVPs. We also report a more conventional measure of the error, namely the error relative to the magnitude of the solution component; taking advantage of having a continuous approximate solution, we report the largest error found at 10 equally spaced points in [0, 1]. Consider:

y'' + λ y^2 = e^x,

with BC (Dirichlet case): y(0) = 1, y(1) = 2.

Now, we solve this problem by the suggested method. Here the coefficients of equation (3) (computed by steps 1-4) are:

a_0 = 1, a_2 = (1 – λ)/2, a_3 = (1 – 2λa_1)/6, a_4 = (1 – 2λa_1^2 – 2λ + 2λ^2)/24, …
b_0 = 2, b_2 = (e – 4λ)/2, b_3 = (e – 4λb_1)/6, b_4 = (e + 16λ^2 – 4eλ – 2λb_1^2)/24, …

Also, equations (10) become:

F(a_1, b_1, λ) = 1 + b_1 – a_1 – exp(1) + λ \int_0^1 y^2(x) dx = 0,    (14a)

G(a_1, b_1, λ) = 3 – a_1 – exp(1) + λ \int_0^1 (1 – x) y^2(x) dx = 0,    (14b)

(∂F/∂a_1)(∂G/∂b_1) – (∂F/∂b_1)(∂G/∂a_1) = 0.    (14c)

Solving equations (14) for the unknowns a_1, b_1, λ using MATLAB gives a_1 = 5.500860385, b_1 = –4.699286617, and λ = 1.850291233 to 9 decimal places. Then from equation (3) we have (for n = 2):

P_5(x) = 1.67898 x^5 – 0.480677 x^4 – 5.27402 x^3 – 0.425146 x^2 + 5.50086 x + 1.

We note that there is no difficulty in taking higher values of n if we wish to refine this value. Yuan et al. [12] solved this example with λ = 1, using a Chebyshev series with N = 5 and another numerical method, and obtained two solutions of the problem for this value of λ; their results are given in Table 1.



Table 1: The results of the other methods given in [12] for the example.

x_i | Ф1: first solution, other numerical method | Ф3: 2nd solution, other numerical method | Ф2: first solution, Chebyshev series (N = 5) | Ф4: 2nd solution, Chebyshev series (N = 5)
0.2 | 5.604268 | 1.257928 | 1.257930 | 5.604135
0.5 | 9.545783 | 1.612536 | 1.612540 | 9.5457…
0.8 | 6.440172 | 1.883155 | 1.883155 | 6.440038
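As a quick consistency check on the example (this snippet is our own sketch and is not part of the original paper), one can rebuild P_5 from equation (3) with n = 2 using the coefficients a_0, a_1, a_2 and b_0, b_1, b_2 listed above together with the computed λ, and compare it with the reported polynomial; the two should agree to roughly the precision of the reported digits, and the boundary values P_5(0) = 1 and P_5(1) = 2 are reproduced.

```python
# Consistency check (our own sketch): rebuild P5 via eq. (3) with n = 2 from the
# coefficients reported in the example and compare with the reported polynomial.
import numpy as np
from math import comb

lam = 1.850291233
a1, b1 = 5.500860385, -4.699286617
e = np.e

a = [1.0, a1, (1 - lam) / 2]          # a0, a1, a2 from the example
b = [2.0, b1, (e - 4 * lam) / 2]      # b0, b1, b2 from the example
n = 2

def Q(j, t):
    # Q_j(t) = t^j (1-t)^(n+1) * sum_{s=0}^{n-j} C(n+s, s) t^s   (see eq. (2))
    return t**j * (1 - t)**(n + 1) * sum(comb(n + s, s) * t**s
                                         for s in range(n - j + 1))

def P5(t):
    # Eq. (3): P_{2n+1}(t) = sum_j { a_j Q_j(t) + (-1)^j b_j Q_j(1-t) }
    return sum(a[j] * Q(j, t) + (-1)**j * b[j] * Q(j, 1 - t)
               for j in range(n + 1))

print(P5(0.0), P5(1.0))               # boundary values: expect 1 and 2

# Reported polynomial, coefficients in increasing powers of x.
P5_reported = np.polynomial.Polynomial(
    [1.0, 5.50086, -0.425146, -5.27402, -0.480677, 1.67898])
for t in (0.25, 0.5, 0.75):
    print(t, P5(t), P5_reported(t))   # the two columns should agree closely
```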

5. Conclusions
In the present paper, we have proposed a semi-analytic technique for solving eigenvalue problems using osculatory interpolation. It may be concluded that the suggested technique is very powerful and efficient, and can be used successfully to find the solution of nonlinear eigenvalue problems with boundary conditions. The bvp4c solver of MATLAB has been modified accordingly so that it can solve this class of eigenvalue problems as effectively as it previously solved BVPs.

References
[1] Z. Bai, J. Demmel, J. Dongarra, A. Ruhe and H. Van Der Vorst, (2000), Templates for the solution of algebraic eigenvalue problems: A practical guide, SIAM, Philadelphia.
[2] N. J. Higham and F. Tisseur, (2003), Bounds for eigenvalues of matrix polynomials, Lin. Alg. Appl., Vol. 358, pp: 5 - 22.
[3] F. Tisseur and K. Meerbergen, (2001), The quadratic eigenvalue problem, SIAM Rev., Vol. 43, pp: 235 - 286.
[4] R. S. Heeg, (1998), Stability and transition of attachment-line flow, Ph.D. thesis, Universiteit Twente, Enschede, the Netherlands.
[5] T. Hwang, W. Lin, J. Liu and W. Wang, (2005), Jacobi-Davidson methods for cubic eigenvalue problems, Numer. Lin. Alg. Appl., Vol. 12, pp: 605 - 624.
[6] D. S. Mackey, N. Mackey, C. Mehl and V. Mehrmann, (2006), Vector spaces of linearizations for matrix polynomials, SIAM J. Matrix Anal. Appl., Vol. 28, pp: 971 - 1004.
[7] E. Pereira, (2003), On solvents of matrix polynomials, Appl. Numer. Math., Vol. 47, pp: 197 - 208.
[8] Z. Bai and Y. Su, (2005), SOAR: A second order Arnoldi method for the solution of the quadratic eigenvalue problem, SIAM J. Matrix Anal. Appl., Vol. 26, pp: 640 - 659.
[9] J. Asakura, T. Sakurai, H. Tadano, T. Ikegami and K. Kimura, (2010), A numerical method for polynomial eigenvalue problems using contour integral, Tsukuba.
[10] L. R. Burden and J. D. Faires, (2001), Numerical Analysis, Seventh Edition.
[11] G. M. Phillips, (1973), Explicit forms for certain Hermite approximations, BIT, Vol. 13, pp: 177 - 180.


[12] H. Yuan, G. Zhang and H. Zhao, (2007), Existence of Positive Solutions for a Discrete Three-Point Boundary Value Problem, Discrete Dynamics in Nature and Society, Vol. 2007, pp: 1 - 14.

Solving Eigenvalue Problems Using the Semi-Analytic Technique
Luma N. M. Tawfiq and Hassan Z. Mjthap

Abstract (Arabic): In this paper we propose the semi-analytic technique for solving eigenvalue problems using osculatory interpolation. The technique finds the eigenvalues and the corresponding nonzero eigenvectors, which represent the solution of the equation within its domain. An illustrative example is presented that supports and clarifies the theoretical predictions, together with a comparison between the proposed technique and other methods.

