Solving Eigenvalue Problem Using Semi-Analytic Technique
Luma N. M. Tawfiq & Hassan Z. Mjthap
Department of Mathematics, College of Education Ibn Al-Haitham, Baghdad University.

Abstract
In this paper, we present a semi-analytic technique for solving the eigenvalue problem using an osculatory interpolation polynomial. The technique finds the eigenvalue and the corresponding nonzero eigenvector, which represent the solution of the equation in a certain domain. An illustrative example is presented, which confirms the theoretical predictions, together with a comparison between the suggested technique and other methods.
Keywords: ODE, BVP, interpolation, eigenvalue.
1. Introduction
The present paper is concerned with a semi-analytic technique for solving eigenvalue problems using an osculatory interpolation polynomial. The eigenvalue problem (EP) [1] involves finding an eigenvalue λ and a corresponding nonzero eigenvector that satisfy the problem. Eigenvalue problems arise in a variety of problems in science and engineering. For example, quadratic eigenvalue problems arise in oscillation analysis with damping [2], [3] and in stability problems in fluid dynamics [4], and the three-dimensional (3D) Schrödinger equation can result in a cubic eigenvalue problem [5]. Similarly, the study of higher-order systems of differential equations leads to a matrix polynomial of degree greater than one [5]. However, such applications are more complicated than standard and generalized eigenvalue problems, one reason being the difficulty of solving the EPs. Polynomial eigenvalue problems are typically solved by linearization [6], [7], which converts the k-th order n × n matrix polynomial into a larger kn × kn linear eigenvalue problem. Other methods, such as the shift-and-invert Arnoldi strategy [8], can be used when several eigenvalues are desired. A disadvantage of shift-and-invert Arnoldi methods is that a change of the shift parameter requires a new Krylov subspace to be built. Another approach is a direct solution by means of the Jacobi-Davidson method [9], although this method has been investigated far less extensively. In the present paper, we propose a series solution of eigenvalue problems by means of the osculatory interpolation polynomial. The proposed method enables us to obtain the eigenvalue and the corresponding nonzero eigenvector of a 2nd-order BVP.
The remainder of the paper is organized as follows. In the next section, we introduce the osculatory interpolation polynomial. In Section 3, we present the suggested technique. An illustrative example is given in Section 4. Finally, conclusions are presented in Section 5.
2. Osculatory Interpolation Polynomial
In this paper we use a two-point osculatory interpolation polynomial. Essentially, this is a generalization of interpolation using Taylor polynomials: the idea is to approximate a function y by a polynomial P in which values of y and any number of its derivatives at given points are fitted by the corresponding function values and derivatives of P [10]. We are particularly concerned with fitting function values and derivatives at the two end points of a finite interval, say [0, 1], i.e.,

P^{(j)}(x_i) = y^{(j)}(x_i),   j = 0, \ldots, n,   x_i = 0, 1,

where a useful and succinct way of writing the osculatory interpolant P_{2n+1} of degree 2n + 1 was given, for example, by Phillips [11] as:

P_{2n+1}(x) = \sum_{j=0}^{n} \{ y^{(j)}(0) \, q_j(x) + (-1)^j y^{(j)}(1) \, q_j(1-x) \},   (1)

q_j(x) = \frac{x^j}{j!} (1-x)^{n+1} \sum_{s=0}^{n-j} \binom{n+s}{s} x^s = \frac{Q_j(x)}{j!},   (2)

so that (1) with (2) satisfies:

y^{(j)}(0) = P_{2n+1}^{(j)}(0),   y^{(j)}(1) = P_{2n+1}^{(j)}(1),   j = 0, 1, 2, \ldots, n,

implying that P_{2n+1} agrees with the appropriately truncated Taylor series of y about x = 0 and x = 1. We observe that (1) can be written directly in terms of the Taylor coefficients a_j and b_j of y about x = 0 and x = 1, respectively, as:

P_{2n+1}(x) = \sum_{j=0}^{n} \{ a_j Q_j(x) + (-1)^j b_j Q_j(1-x) \}.   (3)
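Since the construction above is compact, a short symbolic sketch may help. The following Python/sympy snippet is our illustration, not the authors' code (the function names Q and osculatory are ours): it builds Q_j(x) from equation (2), assembles P_{2n+1}(x) from equation (3), and self-checks the osculation conditions for y = e^x with n = 2.

```python
import sympy as sp

x = sp.symbols('x')

def Q(j, n):
    # Q_j(x) = j! * q_j(x) from equation (2), Phillips' form
    return x**j * (1 - x)**(n + 1) * sum(sp.binomial(n + s, s) * x**s
                                         for s in range(n - j + 1))

def osculatory(a, b, n):
    # equation (3); a[j] = y^(j)(0)/j!, b[j] = y^(j)(1)/j!
    return sp.expand(sum(a[j] * Q(j, n) + (-1)**j * b[j] * Q(j, n).subs(x, 1 - x)
                         for j in range(n + 1)))

# self-check with y = e^x and n = 2: P_5 must reproduce y, y', y'' at x = 0 and x = 1
a = [sp.S(1) / sp.factorial(j) for j in range(3)]
b = [sp.E / sp.factorial(j) for j in range(3)]
P5 = osculatory(a, b, 2)
for j in range(3):
    assert sp.simplify(P5.diff(x, j).subs(x, 0) - 1) == 0
    assert sp.simplify(P5.diff(x, j).subs(x, 1) - sp.E) == 0
```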
3. Solution of the Second Order Eigenvalue Problem
In this section, we suggest a semi-analytic technique based on the osculatory interpolating polynomial P_{2n+1} and Taylor series expansion to solve the 2nd-order eigenvalue problem. A general form of the 2nd-order BVP is:

y''(x) = \lambda f(x, y, y'),   0 ≤ x ≤ 1,   (4a)

subject to one of the following boundary conditions (BC):

in the case of Dirichlet BC: y(0) = A, y(1) = B, where A, B ∈ R;   (4b)
in the case of Neumann BC: y'(0) = A, y'(1) = B, where A, B ∈ R;   (4c)
in the case of Cauchy or mixed BC: y(0) = A, y'(1) = B, or y'(0) = A, y(1) = B, where A, B ∈ R;   (4d)
where f is, in general, a nonlinear function of its arguments. To solve the problem by the suggested method, we perform the following steps.

Step one: Evaluate the Taylor series of y(x) about x = 0:

y = \sum_{i=0}^{\infty} a_i x^i = a_0 + a_1 x + \sum_{i=2}^{\infty} a_i x^i,   (5)

where y(0) = a_0, y'(0) = a_1, y''(0)/2! = a_2, ..., y^{(i)}(0)/i! = a_i, i = 3, 4, ...,
and evaluate the Taylor series of y(x) about x = 1:

y = \sum_{i=0}^{\infty} b_i (x-1)^i = b_0 + b_1 (x-1) + \sum_{i=2}^{\infty} b_i (x-1)^i,   (6)
where y(1) = b_0, y'(1) = b_1, y''(1)/2! = b_2, ..., y^{(i)}(1)/i! = b_i, i = 3, 4, ....

Step two: Insert the series (5) into equation (4a), put x = 0, and equate the coefficients of powers of x to obtain a_2. Insert the series (6) into equation (4a), put x = 1, and equate the coefficients of powers of (x-1) to obtain b_2.

Step three: Differentiate equation (4a) with respect to x to get a new equation, say (7):

y'''(x) = \lambda \, \frac{d f(x, y, y')}{dx};   (7)

then insert the series (5) into equation (7), put x = 0, and equate the coefficients of powers of x to obtain a_3; again, insert the series (6) into equation (7), put x = 1, and equate the coefficients of powers of (x-1) to obtain b_3.

Step four: Iterate the above process to obtain a_4, b_4, then a_5, b_5, and so on, that is, to get a_i and b_i for all i ≥ 2; the resulting equations can be solved using MATLAB version 7.10.

Step five: The notation implies that the coefficients depend only on the indicated unknowns a_0, a_1, b_0, b_1, and λ. Use the BC to fix two of these coefficients, so that only two unknown coefficients and λ remain. Now we can construct the two-point osculatory interpolating polynomial P_{2n+1}(x) by inserting these coefficients (the a_i's and b_i's) into equation (3).

Step six: To find the unknown coefficients, integrate equation (4a) on [0, x] to obtain:

y'(x) - y'(0) - \lambda \int_0^x f(s, y, y') \, ds = 0,   (8a)

and integrate again on [0, x] to obtain:

y(x) - y(0) - y'(0) x - \lambda \int_0^x (x - s) f(s, y, y') \, ds = 0.   (8b)
Step seven: Putting x = 1 in equations (8) gives:

b_1 - a_1 - \lambda \int_0^1 f(s, y, y') \, ds = 0,   (9a)

and

b_0 - a_0 - a_1 - \lambda \int_0^1 (1 - s) f(s, y, y') \, ds = 0.   (9b)
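Before the remaining steps use the endpoint coefficients, it may help to see steps two through four mechanized; the sketch below is ours (the helper name taylor_coeffs is ours, and for concreteness it assumes the right-hand side of the example treated in Section 4, y'' = e^x − λy² with y(0) = 1, y(1) = 2). It finds the a_i and b_i by matching powers of (x − point) in the ODE residual:

```python
import sympy as sp

x, lam = sp.symbols('x lambda')
a1, b1 = sp.symbols('a1 b1')

def rhs(u, yv):
    # right-hand side of y'' = e^x - lambda*y^2 (the example of Section 4)
    return sp.exp(u) - lam * yv**2

def taylor_coeffs(point, c0, c1, nmax):
    # c_i = y^(i)(point)/i!, found by equating coefficients of (x - point)^k
    # in the ODE residual, exactly as in steps two to four
    u = sp.Symbol('u')                                    # u = x - point
    cs = [c0, c1] + [sp.Symbol('c%d' % i) for i in range(2, nmax + 1)]
    ytrial = sum(c * u**i for i, c in enumerate(cs))
    resid = ytrial.diff(u, 2) - rhs(u + point, ytrial)
    resid = sp.series(resid, u, 0, nmax - 1).removeO()
    sol = {}
    for i in range(2, nmax + 1):
        eq = resid.coeff(u, i - 2).subs(sol)              # only c_i is unknown here
        sol[cs[i]] = sp.expand(sp.solve(eq, cs[i])[0])
    return [c0, c1] + [sol[cs[i]] for i in range(2, nmax + 1)]

a = taylor_coeffs(0, sp.S(1), a1, 4)    # about x = 0, with y(0) = 1
b = taylor_coeffs(1, sp.S(2), b1, 4)    # about x = 1, with y(1) = 2
print(a[2], a[3])   # 1/2 - lambda/2 and (1 - 2*lambda*a1)/6, as listed in Section 4
```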
Step eight: Using P_{2n+1}(x), constructed in step five, as a replacement for y(x), equations (9) contain only two unknown coefficients from a_0, a_1, b_0, b_1, together with λ. If the BC is of Dirichlet type, then a_0 and b_0 are known, and equations (9) involve the two unknown coefficients a_1, b_1 and λ. If the BC is of Neumann type, then a_1 and b_1 are known, and equations (9) involve the two unknown coefficients a_0, b_0 and λ. Finally, if the BC is of Cauchy or mixed type, i.e., a_0 and b_1 (or a_1 and b_0) are known, then equations (9) involve the two unknown coefficients a_1, b_0 (or a_0, b_1) and λ.

Step nine: In the case of Dirichlet BC, we have:

F(a_1, b_1, λ) = b_1 - a_1 - \lambda \int_0^1 f(s, y, y') \, ds = 0,   (10a)
G(a_1, b_1, λ) = b_0 - a_0 - a_1 - \lambda \int_0^1 (1 - s) f(s, y, y') \, ds = 0,   (10b)
(∂F/∂a_1)(∂G/∂b_1) - (∂F/∂b_1)(∂G/∂a_1) = 0.   (10c)

In the case of Neumann BC, we have:

F(a_0, b_0, λ) = b_1 - a_1 - \lambda \int_0^1 f(s, y, y') \, ds = 0,   (11a)
G(a_0, b_0, λ) = b_0 - a_0 - a_1 - \lambda \int_0^1 (1 - s) f(s, y, y') \, ds = 0,   (11b)
(∂F/∂a_0)(∂G/∂b_0) - (∂F/∂b_0)(∂G/∂a_0) = 0.   (11c)

In the case of mixed BC, we have:

F(a_1, b_0, λ) = b_1 - a_1 - \lambda \int_0^1 f(s, y, y') \, ds = 0,   (12a)
G(a_1, b_0, λ) = b_0 - a_0 - a_1 - \lambda \int_0^1 (1 - s) f(s, y, y') \, ds = 0,   (12b)
(∂F/∂a_1)(∂G/∂b_0) - (∂F/∂b_0)(∂G/∂a_1) = 0,   (12c)

or

F(a_0, b_1, λ) = b_1 - a_1 - \lambda \int_0^1 f(s, y, y') \, ds = 0,   (13a)
G(a_0, b_1, λ) = b_0 - a_0 - a_1 - \lambda \int_0^1 (1 - s) f(s, y, y') \, ds = 0,   (13b)
(∂F/∂a_0)(∂G/∂b_1) - (∂F/∂b_1)(∂G/∂a_0) = 0.   (13c)
Thus, we can find these coefficients by solving the system of algebraic equations (10), (11), (12), or (13) using MATLAB, and then insert the values of the unknown coefficients into equation (3); equation (3) then represents the solution of the problem.
Note: Extensive computations have shown that this generally provides a more accurate polynomial representation for a given n.
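To show how the pieces fit together, here is a hedged end-to-end sketch of steps five through nine (ours, in Python/sympy rather than the authors' MATLAB) for the example treated in the next section, y'' + λy² = e^x with y(0) = 1, y(1) = 2 and n = 2. It hard-codes the Taylor coefficients produced by the recursion sketched above, and it reads the (c)-equations of step nine as conditions set to zero; the initial guess is deliberately placed near the root reported in Section 4, and other guesses may lead nsolve elsewhere or fail to converge.

```python
import sympy as sp

x, lam = sp.symbols('x lambda')
a1, b1 = sp.symbols('a1 b1')
n = 2

def Q(j):
    # Q_j(x) from equation (2), repeated so this block runs standalone
    return x**j * (1 - x)**(n + 1) * sum(sp.binomial(n + s, s) * x**s
                                         for s in range(n - j + 1))

# Taylor coefficients up to order n for y'' = e^x - lam*y^2, y(0) = 1, y(1) = 2
a = [sp.S(1), a1, (1 - lam) / 2]
b = [sp.S(2), b1, (sp.E - 4 * lam) / 2]
P5 = sp.expand(sum(a[j] * Q(j) + (-1)**j * b[j] * Q(j).subs(x, 1 - x)
                   for j in range(n + 1)))

f = sp.exp(x) - lam * P5**2                    # the full right-hand side, y -> P5
F = b1 - a1 - sp.integrate(f, (x, 0, 1))                  # specializes (10a)/(14a)
G = sp.S(1) - a1 - sp.integrate((1 - x) * f, (x, 0, 1))   # (10b)/(14b); b0 - a0 = 1
D = F.diff(a1) * G.diff(b1) - F.diff(b1) * G.diff(a1)     # (10c)/(14c), set to zero

print(sp.nsolve([F, G, D], [a1, b1, lam], [5.5, -4.7, 1.9]))
```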
4. Example
In this section, we illustrate the technique on an eigenvalue problem. The algorithm was implemented in MATLAB 7.10. The bvp4c solver of MATLAB has been modified so that it can solve this class of eigenvalue problems as effectively as it previously solved BVPs. We also report a more conventional measure of the error, namely the error relative to the larger of the magnitudes of the solution components; taking advantage of having a continuous approximate solution, we report the largest error found at 10 equally spaced points in [0, 1]. Consider:

y'' + \lambda y^2 = e^x,

with Dirichlet BC: y(0) = 1, y(1) = 2.

Now we solve this problem by the suggested method. Here the coefficients of equation (3) (computed by steps 1-4) are:

a_0 = 1, a_2 = (1 - λ)/2, a_3 = (1 - 2λ a_1)/6, a_4 = (1 - 2λ a_1^2 - 2λ + 2λ^2)/24, ...
b_0 = 2, b_2 = (e - 4λ)/2, b_3 = (e - 4λ b_1)/6, b_4 = (e + 16λ^2 - 4eλ - 2λ b_1^2)/24, ...

Also, equations (10) become:

F(a_1, b_1, λ) = 1 + b_1 - a_1 - e + λ \int_0^1 y^2(x) \, dx = 0,   (14a)
G(a_1, b_1, λ) = 3 - a_1 - e + λ \int_0^1 (1 - x) y^2(x) \, dx = 0,   (14b)
(∂F/∂a_1)(∂G/∂b_1) - (∂F/∂b_1)(∂G/∂a_1) = 0.   (14c)

Solving equations (14) for the unknowns a_1, b_1, λ using MATLAB gives a_1 = 5.500860385, b_1 = -4.699286617, λ = 1.850291233 to 9 decimal places. Then from equation (3) we have (for n = 2):

P_5(x) = 1.67898 x^5 - 0.480677 x^4 - 5.27402 x^3 - 0.425146 x^2 + 5.50086 x + 1.

We note that there is no difficulty in taking higher values of n if we wish to refine this value. Yuan et al. [12] solved this example with λ = 1 fixed, using a Chebyshev series with N = 5 and another numerical method; they obtained two solutions of the problem for this value of λ, and the results are listed in Table 1.
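As a cheap independent check (ours, not from the paper), one can evaluate the reported quintic and its derivatives at the endpoints: the values should reproduce the boundary data and the computed a_1, b_1, and the ODE residual at x = 0 should be close to e^0 = 1.

```python
# coefficients of the reported P5, highest power first
p = [1.67898, -0.480677, -5.27402, -0.425146, 5.50086, 1.0]
lam = 1.850291233

P   = lambda t: sum(c * t**(5 - i) for i, c in enumerate(p))
dP  = lambda t: sum((5 - i) * c * t**(4 - i) for i, c in enumerate(p[:-1]))
ddP = lambda t: sum((5 - i) * (4 - i) * c * t**(3 - i) for i, c in enumerate(p[:-2]))

print(P(0.0), P(1.0))             # -> 1.0 and ~2.0: the BCs y(0) = 1, y(1) = 2
print(dP(0.0), dP(1.0))           # -> ~5.50086 and ~-4.69930: a1 and b1 above
print(ddP(0.0) + lam * P(0.0)**2) # -> ~1.0 = e^0: the ODE residual at x = 0
```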
Table 1: Results of the other methods reported in [12] for the example (with λ = 1).

x_i | 1st solution, other numerical method (Ф1) | 2nd solution, other numerical method (Ф3) | 1st solution, Chebyshev series, N = 5 (Ф2) | 2nd solution, Chebyshev series, N = 5 (Ф4)
0.2 | 5.604268 | 1.257928 | 1.257930 | 5.604135
0.5 | 9.545783 | 1.612536 | 1.612540 | 9.5457…
0.8 | 6.440172 | 1.883155 | 1.883155 | 6.440038
5. Conclusions
In the present paper, we have proposed a semi-analytic technique for solving eigenvalue problems using osculatory interpolation. It may be concluded that the suggested technique is very powerful and efficient, and can be used successfully to find the solution of nonlinear eigenvalue problems with boundary conditions. The bvp4c solver of MATLAB has been modified so that it can solve this class of eigenvalue problems as effectively as it previously solved BVPs.
References
[1] Z. Bai, J. Demmel, J. Dongarra, A. Ruhe and H. Van Der Vorst, (2000), Templates for the Solution of Algebraic Eigenvalue Problems: A Practical Guide, SIAM, Philadelphia.
[2] N. J. Higham and F. Tisseur, (2003), Bounds for eigenvalues of matrix polynomials, Lin. Alg. Appl., Vol. 358, pp: 5-22.
[3] F. Tisseur and K. Meerbergen, (2001), The quadratic eigenvalue problem, SIAM Rev., Vol. 43, pp: 235-286.
[4] R. S. Heeg, (1998), Stability and Transition of Attachment-Line Flow, Ph.D. thesis, Universiteit Twente, Enschede, The Netherlands.
[5] T. Hwang, W. Lin, J. Liu and W. Wang, (2005), Jacobi-Davidson methods for cubic eigenvalue problems, Numer. Lin. Alg. Appl., Vol. 12, pp: 605-624.
[6] D. S. Mackey, N. Mackey, C. Mehl and V. Mehrmann, (2006), Vector spaces of linearizations for matrix polynomials, SIAM J. Matrix Anal. Appl., Vol. 28, pp: 971-1004.
[7] E. Pereira, (2003), On solvents of matrix polynomials, Appl. Numer. Math., Vol. 47, pp: 197-208.
[8] Z. Bai and Y. Su, (2005), SOAR: A second order Arnoldi method for the solution of the quadratic eigenvalue problem, SIAM J. Matrix Anal. Appl., Vol. 26, pp: 640-659.
[9] J. Asakura, T. Sakurai, H. Tadano, T. Ikegami and K. Kimura, (2010), A numerical method for polynomial eigenvalue problems using contour integral, Tsukuba.
[10] R. L. Burden and J. D. Faires, (2001), Numerical Analysis, Seventh Edition.
[11] G. M. Phillips, (1973), Explicit forms for certain Hermite approximations, BIT, Vol. 13, pp: 177-180.
[12] H. Yuan, G. Zhang and H. Zhao, (2007), Existence of positive solutions for a discrete three-point boundary value problem, Discrete Dynamics in Nature and Society, Vol. 2007, pp: 1-14.
Abstract (in Arabic)
Solving Eigenvalue Problems Using the Semi-Analytic Technique
Luma N. M. Tawfiq & Hassan Z. Mjthap
In this paper, we propose the semi-analytic technique for solving eigenvalue problems using osculatory interpolation. The technique finds the eigenvalues and the corresponding nonzero eigenvectors, which represent the solution of the equation within its domain. An illustrative example is presented, which supports and clarifies the theoretical predictions, together with a comparison between the suggested technique and other methods.