Solving the Quadratic Eigenvalue Problem Using a Semi-Analytic Technique

Luma N. M. Tawfiq & Hassan Z. Mjthap

Department of Mathematics, College of Education Ibn Al-Haitham, Baghdad University.

Abstract: In this paper, we present a semi-analytic technique for solving the quadratic eigenvalue problem using a repeated (osculatory) interpolation polynomial. The technique finds the eigenvalue and the corresponding nonzero eigenvector, which represent the solution of the equation in a certain domain. An illustrative example is presented, which confirms the theoretical predictions, together with a comparison between the suggested technique and other methods.
Keywords: quadratic eigenvalue problem, eigenvalue, eigenvector, differential equation, interpolation.

1. Introduction
The present paper is concerned with a semi-analytic technique for solving quadratic eigenvalue problems using a repeated interpolation polynomial. The quadratic eigenvalue problem (QEP) [1] involves finding an eigenvalue λ and a corresponding nonzero eigenvector that satisfy the problem. Eigenvalue problems arise in a variety of problems in science and engineering. For example, quadratic eigenvalue problems arise in oscillation analysis with damping [2], [3] and in stability problems in fluid dynamics [4], while the three-dimensional (3D) Schrödinger equation can result in a cubic eigenvalue problem [5]. Similarly, the study of higher-order systems of differential equations leads to a matrix polynomial of degree greater than one [5]. However, QEP applications are more complicated than standard and generalized eigenvalue problems. One reason is the difficulty of solving the QEPs. Polynomial eigenvalue problems are typically solved by linearization [6], [7], which converts the kth-order n × n matrix polynomial into a larger kn × kn linear eigenvalue problem. Other methods, such as the shift-and-invert Arnoldi strategy [8], can be used when several eigenvalues are desired. A disadvantage of the shift-and-invert Arnoldi methods is that a change of the shift parameter requires a new Krylov subspace to be built. Another approach is a direct solution obtained by means of the Jacobi-Davidson method [9], although this method has been investigated far less extensively. In the present paper, we propose a series solution of quadratic eigenvalue problems by means of the osculatory interpolation polynomial. The proposed method enables us to obtain the eigenvalue and the corresponding nonzero eigenvector of the 2nd-order BVP.



The remainder of the paper is organized as follows. In the next section, we introduce the osculatory interpolation polynomial. In Section 3, we present the suggested technique. An illustrative example is shown in Section 4. Finally, conclusions are presented in Section 5.

2. Osculatory Interpolation Polynomial
In this paper we use the two-point osculatory interpolation polynomial; essentially, this is a generalization of interpolation using Taylor polynomials. The idea is to approximate a function y by a polynomial P in which values of y and any number of its derivatives at given points are fitted by the corresponding function values and derivatives of P [10]. We are particularly concerned with fitting function values and derivatives at the two end points of a finite interval, say [0, 1], i.e.,

P^(j)(x_i) = f^(j)(x_i),  j = 0, ..., n,  x_i = 0, 1,

where a useful and succinct way of writing the osculatory interpolant P_{2n+1} of degree 2n + 1 was given, for example, by Phillips [11] as:

P_{2n+1}(x) = Σ_{j=0}^{n} { y^(j)(0) q_j(x) + (−1)^j y^(j)(1) q_j(1 − x) },   (1)

q_j(x) = (x^j / j!)(1 − x)^{n+1} Σ_{s=0}^{n−j} C(n + s, s) x^s = Q_j(x) / j!,   (2)

so that (1) with (2) satisfies:

y^(j)(0) = P_{2n+1}^(j)(0),  y^(j)(1) = P_{2n+1}^(j)(1),  j = 0, 1, 2, ..., n,

implying that P_{2n+1} agrees with the appropriately truncated Taylor series for y about x = 0 and x = 1. We observe that (1) can be written directly in terms of the Taylor coefficients a_j = y^(j)(0)/j! and b_j = y^(j)(1)/j! about x = 0 and x = 1, respectively, as:

P_{2n+1}(x) = Σ_{j=0}^{n} { a_j Q_j(x) + (−1)^j b_j Q_j(1 − x) }.   (3)
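As a sanity check on formulas (1)-(3), the following Python sketch (an illustration only, not part of the original computation; sympy is assumed to be available) builds P_{2n+1} from the Taylor coefficients of a test function and verifies the osculation conditions at both end points:

```python
import sympy as sp
from math import comb, factorial

x = sp.symbols('x')

def Q(j, n):
    # Q_j(x) = j! q_j(x) = x^j (1-x)^(n+1) * sum_{s=0}^{n-j} C(n+s, s) x^s, cf. (2)
    return x**j * (1 - x)**(n + 1) * sum(comb(n + s, s) * x**s for s in range(n - j + 1))

def osculant(a, b, n):
    # P_{2n+1}(x) = sum_{j=0}^{n} { a_j Q_j(x) + (-1)^j b_j Q_j(1-x) }, cf. (3)
    return sp.expand(sum(a[j] * Q(j, n) + (-1)**j * b[j] * Q(j, n).subs(x, 1 - x)
                         for j in range(n + 1)))

# Fit y = exp(x) on [0, 1] with n = 2, i.e. a degree-5 osculatory interpolant.
n, y = 2, sp.exp(x)
a = [sp.diff(y, x, j).subs(x, 0) / factorial(j) for j in range(n + 1)]
b = [sp.diff(y, x, j).subs(x, 1) / factorial(j) for j in range(n + 1)]
P = osculant(a, b, n)

# P matches y and its first n derivatives at both end points
err0 = [sp.simplify(sp.diff(P - y, x, j).subs(x, 0)) for j in range(n + 1)]
err1 = [sp.simplify(sp.diff(P - y, x, j).subs(x, 1)) for j in range(n + 1)]
print(err0, err1)  # all zeros
```

For n = 1 this construction reduces to the classical Hermite cubic interpolant, which is a quick way to convince oneself that (2) is stated correctly.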

3. Solution of the Second-Order Quadratic Eigenvalue Problem
In this section, we suggest a semi-analytic technique based on the osculatory interpolating polynomial P_{2n+1} and Taylor series expansion to solve the 2nd-order quadratic eigenvalue problem. A general form of the 2nd-order QEP is:

y''(x) = f(x, y, y', λ²),  0 ≤ x ≤ 1,   (4a)

subject to one of the following boundary conditions (BC):

In the case of Dirichlet BC: y(0) = A, y(1) = B, where A, B ∈ R;   (4b)

In the case of Neumann BC: y'(0) = A, y'(1) = B, where A, B ∈ R;   (4c)

In the case of Cauchy or mixed BC: y(0) = A, y'(1) = B, where A, B ∈ R,   (4d)

or y'(0) = A, y(1) = B, where A, B ∈ R,   (4e)

where f is, in general, a nonlinear function of its arguments. To solve the problem by the suggested method, we carry out the following steps:

Step one
Evaluate the Taylor series of y(x) about x = 0:

y = Σ_{i=0}^{∞} a_i x^i = a_0 + a_1 x + Σ_{i=2}^{∞} a_i x^i,   (5)

where y(0) = a_0, y'(0) = a_1, y''(0)/2! = a_2, ..., y^(i)(0)/i! = a_i, i = 3, 4, ...
Also evaluate the Taylor series of y(x) about x = 1:

y = Σ_{i=0}^{∞} b_i (x − 1)^i = b_0 + b_1 (x − 1) + Σ_{i=2}^{∞} b_i (x − 1)^i,   (6)

where y(1) = b_0, y'(1) = b_1, y''(1)/2! = b_2, ..., y^(i)(1)/i! = b_i, i = 3, 4, ...

Step two
Insert the series form (5) into equation (4a), put x = 0, and equate the coefficients of the powers of x to obtain a_2. Insert the series form (6) into equation (4a), put x = 1, and equate the coefficients of the powers of (x − 1) to obtain b_2.

Step three
Differentiate equation (4a) with respect to x to get a new equation, say (7), as follows:

y'''(x) = d f(x, y, y', λ²) / dx,   (7)

then insert the series form (5) into equation (7), put x = 0, and equate the coefficients of the powers of x to obtain a_3; again insert the series form (6) into equation (7), put x = 1, and equate the coefficients of the powers of (x − 1) to obtain b_3.

Step four
Iterate the above process to obtain a_4, b_4, then a_5, b_5, and so on; that is, to get a_i and b_i for all i ≥ 2. The resulting equations can be solved using MATLAB (version 7.11) to obtain a_i and b_i for all i ≥ 2.

Step five
The notation implies that the coefficients depend only on the indicated unknowns a_0, a_1, b_0, b_1, and λ. Use the BC to fix two of these coefficients; therefore, we have only two unknown coefficients and λ. Now we can construct the two-point osculatory interpolating polynomial P_{2n+1}(x) by inserting these coefficients (the a_i's and b_i's) into equation (3).

Step six
To find the unknown coefficients, integrate equation (4a) on [0, x] to obtain:

y'(x) − y'(0) − ∫₀ˣ f(s, y, y', λ²) ds = 0,   (8a)

and integrate equation (8a) on [0, x] again to obtain:

y(x) − y(0) − y'(0) x − ∫₀ˣ (1 − s) f(s, y, y', λ²) ds = 0.   (8b)
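Steps two through four can be mimicked symbolically. The sketch below is illustrative only: it uses Python/sympy rather than the authors' MATLAB code, and a hypothetical linear instance f = −λ²y of (4a), to recover a_2, a_3, ... in terms of a_0, a_1 and λ:

```python
import sympy as sp

x, lam = sp.symbols('x lam')
N = 6
a = sp.symbols('a0:6')  # Taylor coefficients of y about x = 0

y = sum(a[i] * x**i for i in range(N))
# hypothetical linear instance of (4a): y'' = f = -lam^2 * y
residual = sp.diff(y, x, 2) + lam**2 * y

# equate coefficients of powers of x (steps two-four) to express
# a2, a3, ... through a0, a1 and lam
coeffs = sp.Poly(residual, x).all_coeffs()[::-1]  # ascending powers of x
sol = {}
for i in range(2, N):
    sol[a[i]] = sp.solve(coeffs[i - 2].subs(sol), a[i])[0]

print(sol[a[2]])  # -a0*lam**2/2
print(sol[a[3]])  # -a1*lam**2/6
```

The same loop applied to the series (6) about x = 1 yields the b_i's; for a nonlinear f the coefficient equations are nonlinear in a_0, a_1, λ but the procedure is identical.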


Step seven
Put x = 1 in equations (8) to get:

b_1 − a_1 − ∫₀¹ f(s, y, y', λ²) ds = 0,   (9a)

and

b_0 − a_0 − a_1 − ∫₀¹ (1 − s) f(s, y, y', λ²) ds = 0.   (9b)

Step eight
Use P_{2n+1}(x), constructed in step five, as a replacement for y(x); then equations (9) have only two unknown coefficients from a_0, a_1, b_0, b_1, together with λ. If the BC is a Dirichlet boundary condition, i.e., we have a_0 and b_0, then equations (9) have the two unknown coefficients a_1, b_1 and λ. If the BC is Neumann, i.e., we have a_1 and b_1, then equations (9) have the two unknown coefficients a_0, b_0 and λ. Finally, if the BC is a Cauchy or mixed boundary condition, i.e., we have a_0 and b_1 (or a_1 and b_0), then equations (9) have the two unknown coefficients a_1, b_0 (or a_0, b_1) and λ.

Step nine
In the case of Dirichlet BC, we have:

F(a_1, b_1, λ²) = b_1 − a_1 − ∫₀¹ f(s, y, y', λ²) ds = 0,   (10a)

G(a_1, b_1, λ²) = b_0 − a_0 − a_1 − ∫₀¹ (1 − s) f(s, y, y', λ²) ds = 0,   (10b)

(∂F/∂a_1)(∂G/∂b_1) − (∂F/∂b_1)(∂G/∂a_1) = 0.   (10c)

In the case of Neumann BC, we have:

F(a_0, b_0, λ²) = b_1 − a_1 − ∫₀¹ f(s, y, y', λ²) ds = 0,   (11a)

G(a_0, b_0, λ²) = b_0 − a_0 − a_1 − ∫₀¹ (1 − s) f(s, y, y', λ²) ds = 0,   (11b)

(∂F/∂a_0)(∂G/∂b_0) − (∂F/∂b_0)(∂G/∂a_0) = 0.   (11c)

In the case of mixed BC, we have:

F(a_1, b_0, λ²) = b_1 − a_1 − ∫₀¹ f(s, y, y', λ²) ds = 0,   (12a)

G(a_1, b_0, λ²) = b_0 − a_0 − a_1 − ∫₀¹ (1 − s) f(s, y, y', λ²) ds = 0,   (12b)

(∂F/∂a_1)(∂G/∂b_0) − (∂F/∂b_0)(∂G/∂a_1) = 0,   (12c)

or

F(a_0, b_1, λ²) = b_1 − a_1 − ∫₀¹ f(s, y, y', λ²) ds = 0,   (13a)

G(a_0, b_1, λ²) = b_0 − a_0 − a_1 − ∫₀¹ (1 − s) f(s, y, y', λ²) ds = 0,   (13b)

(∂F/∂a_0)(∂G/∂b_1) − (∂F/∂b_1)(∂G/∂a_0) = 0.   (13c)



So, we can find these coefficients by solving the system of algebraic equations (10), (11), (12), or (13) using MATLAB; inserting the resulting values of the unknown coefficients into equation (3), equation (3) then represents the solution of the problem.
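To make steps six through nine concrete, here is a small numerical sketch in Python. It is illustrative only: it uses a hypothetical Dirichlet model problem y'' = −λ²y, y(0) = y(1) = 0 (whose first eigenvalue is λ = π), a truncated Taylor series in place of P_{2n+1}, and a normalization a_1 = 1 in place of the determinant condition (10c), since this linear model is homogeneous in a_1, b_1:

```python
import numpy as np
from scipy.optimize import fsolve

def trap(v, s):
    # simple trapezoidal rule on the grid s
    return float(np.sum((v[1:] + v[:-1]) * np.diff(s)) / 2.0)

def series_coeffs(a1, lam, n=14):
    # Taylor recurrence for y'' = -lam^2 y with a0 = 0 (steps two-four)
    a = np.zeros(n)
    a[1] = a1
    for i in range(n - 2):
        a[i + 2] = -lam**2 * a[i] / ((i + 2) * (i + 1))
    return a

def residuals(u):
    a1, b1, lam = u
    s = np.linspace(0.0, 1.0, 401)
    y = np.polyval(series_coeffs(a1, lam)[::-1], s)
    f = -lam**2 * y
    F = b1 - a1 - trap(f, s)                    # equation (10a)
    G = 0.0 - 0.0 - a1 - trap((1 - s) * f, s)   # equation (10b), a0 = b0 = 0
    return [F, G, a1 - 1.0]                     # normalization instead of (10c)

a1, b1, lam = fsolve(residuals, [1.0, -1.0, 3.0])
print(lam)  # close to pi, the first eigenvalue of the model problem
```

The eigenvector is then the truncated series itself (here essentially sin(λx)/λ), mirroring how the osculatory polynomial plays the role of y(x) in equations (9).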

4. Example
In this section, we illustrate the theory with an example of a quadratic eigenvalue problem. The algorithm was implemented in MATLAB 7.11. The bvp4c solver of MATLAB has been modified accordingly so that it can solve some classes of quadratic eigenvalue problems as effectively as it previously solved eigenvalue problems. We also report a more conventional measure of the error, namely the error relative to the larger of the magnitudes of the solution components; taking advantage of having a continuous approximate solution, we report the largest error found at 10 equally spaced points in [0, 1]. The problem arises in a study of heat and mass transfer in a porous spherical catalyst with a first-order reaction, and results from the reduction of a partial differential equation to an ODE by symmetry [15]. The QEP is:

x y'' + 2 y' = λ² x y exp( γβ(1 − y) / (1 + β(1 − y)) ),  x ∈ [0, 1],

where there are two physical parameters, γ = 40 and β = 0.2. The BC (mixed case) are y(1) = 1 and the symmetry condition y'(0) = 0. Now we solve this problem by the suggested method. Here, using a_1 = y'(0) = 0 and b_0 = y(1) = 1, equations (13) become:

F(a_0, b_1, λ²) = b_1 − ∫₀¹ f(s, y, y', λ²) ds = 0,   (14a)

G(a_0, b_1, λ²) = 1 − a_0 − ∫₀¹ (1 − s) f(s, y, y', λ²) ds = 0,   (14b)

(∂F/∂a_0)(∂G/∂b_1) − (∂F/∂b_1)(∂G/∂a_0) = 0,   (14c)

where, from the ODE above, f(s, y, y', λ²) = −(2/s) y' + λ² y exp( γβ(1 − y) / (1 + β(1 − y)) ).

Now we solve equations (14) for the unknowns a_0, b_1, λ² using MATLAB; we obtain a_0 = 0.64, b_1 = 0, λ = 0.6. Then from equation (3) we have (where n = 4):
P9 = 18.63018391074087 x⁹ − 81.86548669822514 x⁸ + 136.0606573968419 x⁷ − 101.7163034006953 x⁶ + 29.5919367027597 x⁵ − 1.076125341642182 x⁴ + 0.7351374287263839 x² + 0.64.

Higher accuracy can be obtained by taking a larger n; now take n = 11, i.e.,

P23 = −201189.35259898 x²³ + 2307273.5847575 x²² − 12048976.340348 x²¹ + 37826952.9633766 x²⁰ − 79339407.916265 x¹⁹ + 116759503.0267302 x¹⁸ − 123051968.6595243 x¹⁷ + 92896106.64761737 x¹⁶ − 49247501.14161112 x¹⁵ + 17467140.105114 x¹⁴ − 3732006.240472856 x¹³ + 364068.2388733 x¹² + 7.614357331206804 x¹⁰ − 3.7600890064976 x⁸ + 1.9310713450928 x⁶ − 1.0761253416366 x⁴ + 0.735137428726 x² + 0.64.
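Both interpolants can be checked directly against the boundary data a_0 = 0.64, b_1 = 0 and the BC y(1) = 1, y'(0) = 0. A short numpy check (illustrative only; coefficients copied from the text above, so the residuals reflect the printed rounding):

```python
import numpy as np

# coefficients of P9 and P23 in descending powers, as listed above
p9 = [18.63018391074087, -81.86548669822514, 136.0606573968419,
      -101.7163034006953, 29.5919367027597, -1.076125341642182,
      0.0, 0.7351374287263839, 0.0, 0.64]
p23 = [-201189.35259898, 2307273.5847575, -12048976.340348,
       37826952.9633766, -79339407.916265, 116759503.0267302,
       -123051968.6595243, 92896106.64761737, -49247501.14161112,
       17467140.105114, -3732006.240472856, 364068.2388733,
       0.0, 7.614357331206804, 0.0, -3.7600890064976, 0.0,
       1.9310713450928, 0.0, -1.0761253416366, 0.0,
       0.735137428726, 0.0, 0.64]

for p in (p9, p23):
    dp = np.polyder(np.array(p))
    print(np.polyval(p, 0.0),   # y(0)  = a0 = 0.64
          np.polyval(p, 1.0),   # y(1)  = 1 (boundary condition)
          np.polyval(dp, 0.0),  # y'(0) = 0 (symmetry condition)
          np.polyval(dp, 1.0))  # y'(1) = b1 = 0, up to rounding
```

For P23 the individual coefficients are of order 10⁸, so the end-point values are recovered only to roughly the accuracy of the printed coefficients; P9 satisfies the conditions essentially exactly.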

Kubiček et al. (see [15], [24]) solved this problem by a collocation method (there are three solutions), assembled by considering a range of the parameter values λ, β, α; the values λ = 0.6, α = 0.1, β = 0.2 used in ex6bvp.m lead to three solutions, which are displayed in Figure 3.2. Table 3.2 compares the solutions in [15] at the origin with the values reported in [39], [32].

Figure 3.2: Multiple solutions of the example, as given in [39].

Table 3.2: Computed y(0) for the three solutions of the example.

  Method in [39] | Kubiček et al. [15]
  0.9071         | 0.9070
  0.3637         | 0.3639
  0.0001         | 0.0001



Another solution of this problem was given in [37], using collocation methods with different codes in MATLAB and Fortran; the results are given in Table 3.3 and Figure 3.3.

Table 3.3: Comparison of the different codes for the solution in [37], with TOL = 10⁻⁷.

           bvp4c  CW-4  CW-6  sbvp4  sbvp4g  sbvp6  sbvp6g
  N          205    41    21     57      57     22      15
  fcount    6856  1080   840   6051    3699   3741    1431

where:
– bvp4c (MATLAB 6.0 routine) is based on collocation at three Lobatto points (see [37]); this is a method of order 4 for regular problems.
– COLNEW (Fortran 90 code): the basic method here is collocation at Gaussian points; we chose the polynomial degrees m = 4 (CW-4) and m = 6 (CW-6), which results in (superconvergent) methods of orders 8 and 12, respectively (for regular problems).
– sbvp is used with equidistant (sbvp4 and sbvp6) and Gaussian (sbvp4g and sbvp6g) collocation points and polynomial degrees 4 and 6, respectively.
Although, for the reasons mentioned above, a comparison of the three codes is difficult, Table 3.3 shows the number of mesh points (N) and the number of function evaluations (fcount) that the different solvers required to reach the tolerance (TOL).



Figure 3.3: Solution of the example, as given in [37].

[15] Shampine, L. F., (2002), Singular Boundary Value Problems for ODEs, Southern Methodist University J., Vol. 1, pp: 1-14.
[24] Tawfiq, L. N. M. and Rasheed, H. W., (2013), On Solving Singular Boundary Value Problems and Its Applications, LAP LAMBERT Academic Publishing.
[32] Tawfiq, L. N. M. and Mjthap, H. Z., (2013), Solving Singular Eigenvalue Problem Using Semi-Analytic Technique, International Journal of Modern Mathematical Sciences, 7(1): 121-131.
[39] Shampine, L. F., Reichelt, M. W., and Kierzenka, J., (2000), Solving Boundary Value Problems for Ordinary Differential Equations in MATLAB with bvp4c, available at http://www.mathworks.com/bvp_tutorial.
[37] Morgado, L. and Lima, P., (2009), Numerical Methods for a Singular Boundary Value Problem with Application to a Heat Conduction Model in the Human Head, Proceedings of the International Conference on Computational and Mathematical Methods in Science and Engineering, CMMSE 2009.

5. Conclusions
In the present paper, we have proposed a semi-analytic technique for solving eigenvalue problems using osculatory interpolation. It may be concluded that the suggested technique is very powerful and efficient, and can be used successfully to find the solution of nonlinear eigenvalue problems with boundary conditions. The bvp4c solver of MATLAB has been modified accordingly so that it can solve some classes of eigenvalue problems as effectively as it previously solved BVPs.

References
[1] Z. Bai, J. Demmel, J. Dongarra, A. Ruhe, and H. Van Der Vorst, (2000), Templates for the Solution of Algebraic Eigenvalue Problems: A Practical Guide, SIAM, Philadelphia.
[2] N. J. Higham and F. Tisseur, (2003), Bounds for eigenvalues of matrix polynomials, Lin. Alg. Appl., Vol. 358, pp: 5-22.
[3] F. Tisseur and K. Meerbergen, (2001), The quadratic eigenvalue problem, SIAM Rev., Vol. 43, pp: 235-286.
[4] R. S. Heeg, (1998), Stability and transition of attachment-line flow, Ph.D. thesis, Universiteit Twente, Enschede, The Netherlands.
[5] T. Hwang, W. Lin, J. Liu and W. Wang, (2005), Jacobi-Davidson methods for cubic eigenvalue problems, Numer. Lin. Alg. Appl., Vol. 12, pp: 605-624.
[6] D. S. Mackey, N. Mackey, C. Mehl and V. Mehrmann, (2006), Vector spaces of linearizations for matrix polynomials, SIAM J. Matrix Anal. Appl., Vol. 28, pp: 971-1004.
[7] E. Pereira, (2003), On solvents of matrix polynomials, Appl. Numer. Math., Vol. 47, pp: 197-208.
[8] Z. Bai and Y. Su, (2005), SOAR: A second-order Arnoldi method for the solution of the quadratic eigenvalue problem, SIAM J. Matrix Anal. Appl., Vol. 26, pp: 640-659.
[9] J. Asakura, T. Sakurai, H. Tadano, T. Ikegami and K. Kimura, (2010), A numerical method for polynomial eigenvalue problems using contour integral, Tsukuba.



[10] L. R. Burden and J. D. Faires, (2001), Numerical Analysis, Seventh Edition.
[11] G. M. Phillips, (1973), Explicit forms for certain Hermite approximations, BIT 13, pp: 177-180.
[12] H. Yuan, G. Zhang and H. Zhao, (2007), Existence of Positive Solutions for a Discrete Three-Point Boundary Value Problem, Discrete Dynamics in Nature and Society, Vol. 2007, pp: 1-14.

The QEP is to find scalars λ and nonzero vectors x, y satisfying

(λ²M + λC + K) x = 0,  y*(λ²M + λC + K) = 0,

where M, C, and K are n × n complex matrices and x, y are the right and left eigenvectors, respectively, corresponding to the eigenvalue λ. A major algebraic difference between the QEP and the standard eigenvalue problem (SEP), Ax = λx, and the generalized eigenvalue problem (GEP), Ax = λBx, is that the QEP has 2n eigenvalues (finite or infinite) with up to 2n right and 2n left eigenvectors, and if there are more than n eigenvectors they do not, of course, form a linearly independent set. QEPs are an important class of nonlinear eigenvalue problems that are less familiar and less routinely solved than the SEP and the GEP, and they need special attention. A major complication is that there is no simple canonical form analogous to the Schur form for the SEP or the generalized Schur form for the GEP. A large literature exists on QEPs, spanning the range from theory to applications, but the results are widely scattered across disciplines. Our goal in this work is to gather together a collection of applications where QEPs arise, to identify and classify their characteristics, and to summarize current knowledge of both theoretical and algorithmic aspects. The types of problems we will consider in this survey and their spectral properties are summarized in Table 1.1. The structure of the survey is as follows. In section 2 we discuss a number of applications, covering the motivation, characteristics, theoretical results, and algorithmic issues. Section 3 reviews the spectral theory of QEPs and discusses important classes of QEPs coming from overdamped systems and gyroscopic systems. Several linearizations are introduced and their properties discussed. In this paper, the term linearization means that the nonlinear QEP is transformed into a linear eigenvalue problem with the same eigenvalues.
Our treatment of the large existing body of theory is necessarily selective, but we give references where further details can be found.
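The linearization idea can be illustrated in a few lines. The following Python sketch (illustrative only: hypothetical random coefficient matrices, first companion form) converts a QEP into a GEP Az = λBz of twice the dimension and checks the residual of a computed eigenpair against the original quadratic:

```python
import numpy as np
from scipy.linalg import eig

rng = np.random.default_rng(0)
n = 4
# hypothetical coefficient matrices of the QEP (lam^2 M + lam C + K) x = 0
M, C, K = rng.standard_normal((3, n, n))

# first companion linearization: A z = lam B z, with z = [x; lam*x]
Z0, I = np.zeros((n, n)), np.eye(n)
A = np.block([[Z0, I], [-K, -C]])
B = np.block([[I, Z0], [Z0, M]])

lams, V = eig(A, B)
print(lams.size)  # the QEP has 2n eigenvalues

# check one computed eigenpair against the original quadratic
k = int(np.argmax(np.linalg.norm(V[:n], axis=0)))
lam, x = lams[k], V[:n, k]
res = (lam**2 * M + lam * C + K) @ x
print(np.linalg.norm(res) / np.linalg.norm(x))  # small: x solves the QEP
```

The second block row of Az = λBz reads −Kx − λCx = λ²Mx, which is exactly the quadratic; this is why the 2n × 2n GEP reproduces all 2n QEP eigenvalues.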



Section 4 deals with tools that offer an understanding of the sensitivity of the problem and the behavior of numerical methods, covering condition numbers, backward errors, and pseudospectra. A major division in numerical methods for solving the QEP is between those that treat the problem in its original form and those that linearize it into a GEP of twice the dimension and then apply GEP techniques. A second division is between methods for dense, small- to medium-size problems and iterative methods for large-scale problems. The former methods compute all the eigenvalues and are discussed in section 5, while the latter compute only a selection of eigenvalues and eigenvectors and are reviewed in section 6. The proper choice of method for particular problems is discussed. Finally, section 7 catalogues available software, and section 8 contains further discussion and related problems. We shall generally adopt the Householder notation: capital letters A, B, C, ... denote matrices, lower-case roman letters denote column vectors, Greek letters denote scalars, ᾱ denotes the conjugate of the complex number α, Aᵀ denotes the transpose of the complex matrix A, A* denotes the conjugate transpose of A, and ‖·‖ is any vector norm and the corresponding subordinate matrix norm. The values taken by any integer variable are described using the colon notation: "i = 1:n" means the same as "i = 1, 2, ..., n." We write A > 0 (A ≥ 0) if A is Hermitian positive definite (positive semidefinite) and A < 0 (A ≤ 0) if A is Hermitian negative definite (negative semidefinite). A definite pair (A, B) is defined by the property that A, B ∈ Cⁿˣⁿ are Hermitian and

min_{z ∈ Cⁿ, ‖z‖₂ = 1} ( (z*Az)² + (z*Bz)² )^{1/2} > 0,

which is certainly true if A > 0 or B > 0.

2.1. Second-Order Differential Equations. To start, we consider the solution of a linear second-order differential equation

M q̈(t) + C q̇(t) + K q(t) = f(t),   (2.1)

where M, C, and K are n × n matrices and q(t) is an nth-order vector. This is the underlying equation in many engineering applications. We show that the solution can be expressed in terms of the eigensolution of the corresponding QEP and explain why eigenvalues and eigenvectors give useful information. Two important areas where second-order differential equations arise are the fields of mechanical and electrical oscillation. The left-hand picture in Figure 2.1 illustrates a single mass-spring system in which a rigid block of mass M is on rollers and can move only in simple translation. The (static) resistance to displacement is provided by a spring of stiffness K, while the (dynamic) energy-loss mechanism is represented by a damper C; f(t) represents an external force. The equation of motion governing this system is of the form (2.1) with n = 1. As a second example, we consider the flow of electric current in a simple RLC circuit composed of an inductor with inductance L, a resistor with resistance R, and a capacitor with capacitance C, as illustrated on the right of Figure 2.1; E(t) is the input voltage. The Kirchhoff loop rule requires that the sum of the changes in potential around the circuit must be zero, so

L di(t)/dt + R i(t) + q(t)/C − E(t) = 0,   (2.2)

where i(t) is the current through the resistor, q(t) is the charge on the capacitor, and t is the elapsed time. The charge q(t) is related to the current i(t) by i(t) = dq(t)/dt. Differentiation of (2.2) gives the second-order differential equation

L d²i(t)/dt² + R di(t)/dt + (1/C) i(t) = dE(t)/dt,   (2.3)

which is of the same form as (2.1), again with n = 1.

Fig. 2.1 Left: Single mass-spring system. Right: A simple electric circuit.
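For the scalar RLC case, the connection between (2.3) and the QEP is just the characteristic quadratic Lλ² + Rλ + 1/C = 0. A small sympy check with hypothetical component values (L = 1, R = 3, C = 1/2, and E constant so that dE/dt = 0):

```python
import sympy as sp

t, lam = sp.symbols('t lam')
Lind, R, Cap = sp.Integer(1), sp.Integer(3), sp.Rational(1, 2)  # hypothetical values

# QEP for n = 1: the characteristic quadratic of (2.3)
roots = sp.solve(Lind * lam**2 + R * lam + 1 / Cap, lam)
print(sorted(roots))  # [-2, -1]

# each root gives a homogeneous solution i(t) = exp(lam*t) of (2.3) with E' = 0
for r in roots:
    i = sp.exp(r * t)
    residual = Lind * sp.diff(i, t, 2) + R * sp.diff(i, t) + i / Cap
    assert sp.simplify(residual) == 0
```

With n × n matrices the same reasoning gives modes x e^{λt}, where (λ, x) are eigenpairs of the QEP (λ²M + λC + K)x = 0, which is why the eigensolution describes the free response of (2.1).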


