INTRODUCTION TO PARTIAL DIFFERENTIAL EQUATIONS
Lecture Notes given by DINH NHO HÀO, Universität-GH-Siegen, Summer Semester 1996
Foreword

The author has worked for 4 years (1993-1996) at the University of Siegen in the group of Applied and Numerical Mathematics. His stay was supported by a DFG scholarship and by a guest professorship from the University of Siegen. During this time, this lecture was held for graduate students. The group of Applied and Numerical Mathematics and the students of his lecture would like to thank Priv.-Doz. Dr. Dinh Nho Hào for his great efforts in research and teaching. Comments on these lecture notes should be sent directly to the author via email (hao@thevinh.ac.vn) or to hjreinhardt@numerik.math.uni-siegen.de.

Siegen, Sept. 1997
H.-J. Reinhardt
Contents

1 Introduction  1
  1.1 Definitions  1
  1.2 Examples  1
    1.2.1 Equations  1
    1.2.2 Solutions  2
  1.3 First-order linear equations  2
    1.3.1 The constant coefficient case  2
    1.3.2 A variable coefficient case  3
  1.4 Where do PDEs come from?  4
    1.4.1 Simple Transport  4
    1.4.2 Vibration Equations  5
    1.4.3 Diffusion Equation  7
    1.4.4 Steady State Equation  8
    1.4.5 Schrödinger's Equation  8
  1.5 Classification of Linear Differential Equations  9
    1.5.1 Differential Equations with Two Independent Variables  9
    1.5.2 Differential Equations with Several Independent Variables  14

2 Characteristic Manifolds and the Cauchy problem  17
  2.1 Notation  17
  2.2 The Cauchy problem  18
  2.3 Real Analytic Functions and the Cauchy-Kowalevski Theorem  21
    2.3.1 Real Analytic Functions  21
    2.3.2 The Cauchy-Kowalevski theorem  23
  2.4 The Uniqueness theorem of Holmgren  24
    2.4.1 The Lagrange-Green Identity  24
    2.4.2 The Uniqueness theorem of Holmgren  25

3 Hyperbolic Equations  29
  3.1 Boundary and initial conditions  29
  3.2 The uniqueness  31
  3.3 The method of wave propagation  33
    3.3.1 The D'Alembert method  33
    3.3.2 The stability of the solution  34
    3.3.3 The reflection method  36
  3.4 The Fourier method  39
    3.4.1 Free vibration of a string  39
    3.4.2 The proof of the Fourier method  42
    3.4.3 Non-homogeneous equations  44
    3.4.4 General first boundary value problem  46
    3.4.5 General scheme for the Fourier method  46
  3.5 The Goursat Problem  51
    3.5.1 Definition  51
    3.5.2 The Darboux problem. The method of successive approximations  53
  3.6 Solution of general linear hyperbolic equations  58
    3.6.1 The Green formula  58
    3.6.2 Riemann's method  59
    3.6.3 An application of the Riemann method  61

4 Parabolic Equations  63
  4.1 Boundary conditions  63
  4.2 The maximum principle  65
  4.3 Applications of the maximum principle  66
    4.3.1 The uniqueness theorem  66
    4.3.2 Comparison of solutions  67
    4.3.3 The uniqueness theorem in unbounded domains  68
  4.4 The Fourier method  69
    4.4.1 The homogeneous problem  69
    4.4.2 The Green function  72
    4.4.3 Boundary value problems with non-smooth initial conditions  73
    4.4.4 The non-homogeneous heat equation  77
    4.4.5 The non-homogeneous first boundary value problem  78
  4.5 Problems on unbounded domains  79
    4.5.1 The Green function in unbounded domains  79
    4.5.2 Heat conduction in the unbounded domain (−∞, ∞)  82
    4.5.3 The boundary value problem in a half-space  87

5 Elliptic Equations  93
  5.1 The maximum principle  93
  5.2 The uniqueness of the Dirichlet problem  94
  5.3 The invariance of the operator Δ  94
  5.4 Poisson's formula  96
  5.5 The mean value theorem  100
  5.6 The maximum principle (a strong version)  101

Bibliography  103

Index  105
Chapter 1

Introduction

1.1 Definitions

A partial differential equation (PDE) for a function u(x, y, ...) is a relation of the form

    F(x, y, ..., u, u_x, u_y, ..., u_xx, u_xy, ...) = 0,    (1.1.1)

where F is a given function of the independent variables x, y, ..., of the "unknown" function u, and of a finite or infinite number of its derivatives. We call u a solution of (1.1.1) if, after substitution of u(x, y, ...) and its partial derivatives, (1.1.1) is satisfied identically in x, y, ... in some region of the space of these independent variables. For simplicity, we suppose that x, y, ... are real and that the derivatives of u occurring in (1.1.1) are continuous functions of x, y, ... in the real domain considered.

Several PDEs involving one or more unknown functions and their derivatives constitute a system. The order of a PDE or of a system is the order of the highest derivative that occurs. A PDE is said to be linear if it is linear in the unknown functions and their derivatives, with coefficients depending on the independent variables x, y, ... . A PDE of order m is called quasilinear if it is linear in the derivatives of order m, with coefficients that depend on x, y, ... and on the derivatives of order < m.

Remark 1.1.1 The dimension of the space to which x, y, ... belong can be infinite.

Remark 1.1.2 If derivatives of u of infinite order occur in (1.1.1), then we say that (1.1.1) is of infinite order.
1.2 Examples

1.2.1 Equations

    u_x + u_y = 0                                  (transport)
    u_x + (g(u))_y = 0                             (conservation law, shock wave)
    u_xx + u_yy = 0                                (the Laplace equation)
    u_tt − u_xx = 0                                (the wave equation)
    u_t = k Δu                                     (the heat equation)
    iħψ_t = −(ħ²/2m) Δψ + Vψ                       (Schrödinger's equation)
    u_t + c u u_x + u_xxx = 0                      (the Korteweg-de Vries equation)
    Σ_{n=0}^∞ (1/n!) dⁿu/dxⁿ = g(x)                (an equation of infinite order)
1.2.2 Solutions

Example 1.2.1 Find all u(x, y) satisfying the equation u_xx = 0. We can integrate once to get u_x = const (in x). Since this constant may depend on y, we have u_x = f(y), where f(y) is arbitrary. Integrating again gives u(x, y) = f(y)x + g(y).

Example 1.2.2 Find u(x, y) satisfying the PDE u_xx + u = 0.
Solution: u(x, y) = f(y) cos x + g(y) sin x, where f and g are arbitrary.

Example 1.2.3 Find u(x, y) satisfying u_xy = 0. First integrate in x, regarding y as fixed. So we get

    u_y(x, y) = f(y).

Next integrate in y, regarding x as fixed. We get the solution

    u(x, y) = F(y) + G(x),

where F′ = f.
1.3 First-order linear equations

1.3.1 The constant coefficient case

Consider the equation

    a u_x + b u_y = 0,    (1.3.1)

where a and b are constants, not both zero.

Geometric method. The quantity a u_x + b u_y is the directional derivative of u in the direction of the vector V = (a, b) = a i + b j. It must always be zero. This means that u(x, y) must be constant in the direction of V. The vector (b, −a) is orthogonal to V. The lines parallel to V satisfy the equation bx − ay = const = c, and along them the solution u has a constant value, say f(c). Since c is arbitrary, we have

    u(x, y) = f(c) = f(bx − ay)    (1.3.2)
for all values of x and y.

Coordinate method. Change variables to

    x′ = ax + by,    y′ = bx − ay.    (1.3.3)

By the chain rule we have

    u_x = (∂u/∂x′)(∂x′/∂x) + (∂u/∂y′)(∂y′/∂x) = a u_x′ + b u_y′

and

    u_y = (∂u/∂x′)(∂x′/∂y) + (∂u/∂y′)(∂y′/∂y) = b u_x′ − a u_y′.

Hence

    a u_x + b u_y = a(a u_x′ + b u_y′) + b(b u_x′ − a u_y′) = (a² + b²) u_x′.

Since a² + b² ≠ 0, we have u_x′ = 0. Thus, u = f(y′) = f(bx − ay).
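As a quick numerical sanity check (not part of the original notes), one can verify by central finite differences that a smooth profile f(bx − ay) is annihilated by the operator a∂_x + b∂_y; the sine profile, the values of a, b, and the step size below are arbitrary illustrative choices.

```python
import math

def u(x, y, a=2.0, b=3.0):
    """General solution u(x, y) = f(bx - ay) of a*u_x + b*u_y = 0,
    with an arbitrary smooth profile f (here: sine)."""
    return math.sin(b * x - a * y)

def directional_residual(x, y, a=2.0, b=3.0, h=1e-5):
    # Central finite differences for u_x and u_y.
    ux = (u(x + h, y, a, b) - u(x - h, y, a, b)) / (2 * h)
    uy = (u(x, y + h, a, b) - u(x, y - h, a, b)) / (2 * h)
    return a * ux + b * uy   # vanishes up to O(h^2) discretization error

print(abs(directional_residual(0.7, -1.3)) < 1e-6)  # True
```

Any other differentiable profile f would pass the same check, since only the combination bx − ay enters.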
1.3.2 A variable coefficient case

The equation

    u_x + y u_y = 0    (1.3.4)

is linear and homogeneous but has a variable coefficient y. The PDE (1.3.4) itself asserts that the directional derivative in the direction of the vector (1, y) is zero. The curves in the xy-plane with (1, y) as tangent vectors have slope y. Their equations are

    dy/dx = y/1.    (1.3.5)
This ODE has the solution

    y = C eˣ.    (1.3.6)

These curves are called the characteristic curves of the PDE (1.3.4). As C is varied, the curves fill out the xy-plane perfectly without intersecting. On each of the curves u(x, y) is constant, because

    (d/dx) u(x, Ceˣ) = ∂u/∂x + Ceˣ ∂u/∂y = u_x + y u_y = 0.

Thus, u(x, Ceˣ) = u(0, Ce⁰) = u(0, C) is independent of x. Putting y = Ceˣ, that is C = e⁻ˣ y, we have u(x, y) = u(0, e⁻ˣ y). It follows that

    u(x, y) = f(e⁻ˣ y)    (1.3.7)

is the general solution of our PDE.
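The characteristic picture can also be illustrated numerically (a sketch, not from the notes): integrating dy/dx = y with an explicit Euler step reproduces the closed-form characteristic y = Ceˣ of (1.3.6). The function name, starting value, and step count are arbitrary choices.

```python
import math

def characteristic(C, x_end=1.0, n=100000):
    """Trace the characteristic of u_x + y u_y = 0 through (0, C)
    by integrating dy/dx = y with explicit Euler steps."""
    h = x_end / n
    y = C
    for _ in range(n):
        y += h * y          # Euler update for dy/dx = y
    return y

C = 0.5
approx = characteristic(C)
exact = C * math.exp(1.0)   # closed form y = C e^x at x = 1
print(abs(approx - exact) < 1e-4)  # True
```

The O(h) Euler error shrinks as n grows, which is why the coarse tolerance above already passes.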
1.4 Where do PDEs come from?

1.4.1 Simple Transport

Consider a fluid, say water, flowing at a constant rate c along a horizontal pipe of fixed cross section in the positive x direction. A substance, say a pollutant, is suspended in the water. Let u(x, t) be its concentration in grams/centimeter at time t.
[Figure: the pollutant profile u(x, t) translating to the right along the pipe.]

We know that the amount of pollutant in the interval [0, b] at time t is M = ∫_0^b u(x, t) dx, in grams. At the later time t + h, the same molecules of pollutant have moved to the right by ch centimeters. Hence,

    M = ∫_0^b u(x, t) dx = ∫_{ch}^{b+ch} u(x, t + h) dx.

Differentiating with respect to b, we get

    u(b, t) = u(b + ch, t + h).

Differentiating with respect to h and putting h = 0, we get

    0 = c u_x(b, t) + u_t(b, t).

Since b is arbitrary, the equation describing our simple transport problem is

    u_t + c u_x = 0.    (1.4.1)
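A minimal discrete analogue of this mass-balance derivation (an illustrative sketch, not from the notes) is the first-order upwind scheme: each time step mixes a cell with its left neighbour, so the profile is carried to the right at speed c while total mass is conserved exactly. The grid sizes and the CFL number below are arbitrary choices.

```python
# Upwind scheme for u_t + c u_x = 0 with c > 0, periodic boundary.
c, L, n = 1.0, 10.0, 400
dx = L / n
dt = 0.5 * dx / c                       # CFL number c*dt/dx = 0.5
u = [1.0 if 2.0 <= i * dx <= 3.0 else 0.0 for i in range(n)]
mass0 = sum(u) * dx                     # initial amount of pollutant
nu = c * dt / dx
for _ in range(int(2.0 / dt)):          # advance to t = 2
    # u[i-1] wraps around at i = 0 (periodic); convex combination of cells
    u = [(1 - nu) * u[i] + nu * u[i - 1] for i in range(n)]
centroid = sum(i * dx * u[i] for i in range(n)) / sum(u)
# The block started centred at x = 2.5 and should now sit at x = 4.5.
print(abs(centroid - 4.5) < 1e-6, abs(sum(u) * dx - mass0) < 1e-9)  # True True
```

The scheme smears the block (numerical diffusion), but its centre of mass moves at exactly speed c per step, mirroring the exact solution of (1.4.1).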
1.4.2 Vibration Equations

Consider a flexible, elastic, homogeneous string or thread of length l which undergoes relatively small transverse vibrations. Assume that it remains in a plane. Let u(x, t) be its displacement from the equilibrium position at time t and position x. Because the string is perfectly flexible, the tension (force) is directed tangentially along the string. Let T(x, t) be the magnitude of this tension vector. Limiting ourselves to the consideration of small vibrations of the string, we shall ignore magnitudes of order greater than tan α = ∂u/∂x.

Any small section (a, b) of the string, after displacement from the state of equilibrium, does not change its length within the range of our approximation,

    S = ∫_a^b √(1 + (∂u/∂x)²) dx ≈ b − a,

and therefore, in agreement with Hooke's law, T is independent of t. We shall now show that it is also independent of x, i.e. that

    T = T₀ = const.
For the projections of the tension onto the u- and x-axes (denoted by T_u and T_x, respectively) we have

    T_x(x) = T(x) cos α = T(x)/√(1 + (u_x)²) ≈ T(x),
    T_u(x) = T(x) sin α ≈ T(x) tan α = T(x) u_x,

where α is the angle between the tangent to the curve u(x, t) and the x-axis. The tension, the external force and the inertial forces (Trägheitskräfte) act on a small piece (x₁, x₂) of the string. The sum of the projections onto the x-axis of all these forces must be zero, since we consider only the case of transverse vibrations. Because the external force and the inertial forces act along the u-axis direction, we have

    T_x(x₁) − T_x(x₂) = 0,  or  T(x₁) = T(x₂).

Since x₁ and x₂ are arbitrary, we see that T is independent of x. Thus, for all x and t, we have T ≈ T₀.

Let F(x, t) be the magnitude of the external forces acting on the string at the point x at the instant t and directed perpendicular to the x-axis. Finally, let ρ(x) be the linear density of the string at the point x, so that ρ(x)dx is the mass of the element (x, x + dx) of the string.

We now construct an equation of motion for the element (x, x + dx) of the string. The tension forces T(x + dx, t), −T(x, t) and the external force act upon the element (x, x + dx), and their sum, according to Newton's law, must equal the product of the mass of the element and its acceleration. Projecting this vector equation onto the u-axis, on account of all that has been said, we obtain the equation

    T₀ sin α|_{x+dx} − T₀ sin α|_x + F(x, t) dx = ρ(x) dx ∂²u(x, t)/∂t².    (1.4.2)

However, within the range of our approximation,

    sin α = tan α / √(1 + tan²α) ≈ tan α = ∂u/∂x,

and therefore

    ρ(x) ∂²u(x, t)/∂t² = T₀ (1/dx) [∂u(x + dx, t)/∂x − ∂u(x, t)/∂x] + F(x, t),

that is,

    ρ ∂²u/∂t² = T₀ ∂²u/∂x² + F.    (1.4.3)

This is indeed the equation of small transverse vibrations of a string. If the density is constant, ρ(x) = ρ, the equation of vibration of a string assumes the form

    ∂²u/∂t² = a² ∂²u/∂x² + f,    (1.4.4)
where we have set a² = T₀/ρ (constant), f = F/ρ. We shall also call this equation the one-dimensional wave equation. In the same way one can also derive the equation of small transverse vibrations of a membrane,

    ρ ∂²u/∂t² = T₀ (∂²u/∂x² + ∂²u/∂y²) + F =: T₀ Δu + F.    (1.4.5)
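A standing wave gives a quick numerical check of (1.4.4) with f = 0 (a sketch, not part of the notes; the wave speed and sample point are arbitrary choices).

```python
import math

a = 2.0  # wave speed, an arbitrary choice

def u(x, t):
    """Standing wave u(x, t) = sin(x) cos(a t), an exact solution of
    u_tt = a^2 u_xx."""
    return math.sin(x) * math.cos(a * t)

def residual(x, t, h=1e-4):
    # Second-order central differences for u_tt and u_xx.
    utt = (u(x, t + h) - 2 * u(x, t) + u(x, t - h)) / h**2
    uxx = (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / h**2
    return utt - a**2 * uxx

print(abs(residual(0.9, 0.4)) < 1e-5)  # True
```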
1.4.3 Diffusion Equation

We now derive the equation of heat diffusion or the diffusion of particles in a medium. Denote by u(x, t) the temperature of the medium at the point x = (x₁, x₂, x₃) at the instant t. We shall consider the medium to be isotropic and shall denote its density by ρ(x), its specific heat by c(x), and its coefficient of thermal conductivity at the point x by k(x). F(x, t) denotes the intensity of heat sources at the point x at the instant t.

We shall calculate the balance of heat in an arbitrary volume V during the time interval (t, t + dt). The boundary of V is denoted by S, and n is the external normal to it. In agreement with Fourier's law, an amount of heat

    Q₁ = ∫_S k (∂u/∂n) dS dt = ∫_S (k grad u, n) dS dt,

equal, by virtue of the Gauss-Ostrogradskii formula, to

    Q₁ = ∫_V div(k grad u) dx dt,

enters the volume V through the surface S. An amount of heat

    Q₂ = ∫_V F(x, t) dx dt

arises in the volume V as a result of the heat sources. Since the temperature in the volume V during the interval (t, t + dt) has increased by

    u(x, t + dt) − u(x, t) ≈ (∂u/∂t) dt,

it follows that for this it is necessary to expend an amount of heat

    Q₃ = ∫_V c ρ (∂u/∂t) dx dt.

On the other hand, Q₃ = Q₁ + Q₂, and thus

    ∫_V [div(k grad u) + F − c ρ ∂u/∂t] dx dt = 0,

from which, since the volume V is arbitrary, we obtain the equation of heat diffusion

    c ρ ∂u/∂t = div(k grad u) + F.    (1.4.6)
If the medium is homogeneous, that is, if c, ρ, and k are constants, Eq. (1.4.6) assumes the form

    ∂u/∂t = a² Δu + f,    (1.4.7)

where

    a² = k/(cρ),    f = F/(cρ).

Eq. (1.4.7) is called the heat equation.
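The one-dimensional heat kernel supplies a concrete check of (1.4.7) with f = 0 (a numerical sketch, not part of the notes; the diffusivity and sample point are arbitrary choices).

```python
import math

a = 0.8  # so that a^2 is the diffusivity; an arbitrary choice

def u(x, t):
    """Heat kernel t^{-1/2} exp(-x^2 / (4 a^2 t)), a classical solution
    of u_t = a^2 u_xx for t > 0 (up to a constant factor)."""
    return t ** -0.5 * math.exp(-x * x / (4 * a * a * t))

def residual(x, t, h=1e-4):
    ut = (u(x, t + h) - u(x, t - h)) / (2 * h)          # central u_t
    uxx = (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / h**2  # central u_xx
    return ut - a * a * uxx

print(abs(residual(0.5, 1.0)) < 1e-5)  # True
```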
1.4.4 Steady State Equation

For steady-state processes, F(x, t) = F(x) and u(x, t) = u(x), and both the wave equation (1.4.5) and the diffusion equation (1.4.6) assume the form

    −div(p grad u) + qu = F(x).    (1.4.8)

When p = const and q = 0, Eq. (1.4.8) is called Poisson's equation,

    Δu = −f,    f = F/p;    (1.4.9)

when f = 0, Eq. (1.4.9) is called Laplace's equation,

    Δu = 0.    (1.4.10)

In the wave equation (1.4.5), let the external disturbance f(x, t) be periodic with frequency ω and amplitude f₀(x),

    f(x, t) = f₀(x) e^{iωt}.

If we seek periodic solutions u(x, t) with the same frequency and unknown amplitude u₀(x), then for the function u₀(x) we obtain the equation

    Δu₀ + k²u₀ = −f₀(x)/a²,    k² = ω²/a²,    (1.4.11)

called the Helmholtz equation.
1.4.5 Schrödinger's Equation

Let a quantum particle of mass m move in an external force field with potential V(x). We shall denote by ψ(x, t) the wave function of this particle, so that |ψ(x, t)|² dx is the probability that the particle will be in a neighbourhood d(x) of the point x at the instant t; here dx is the volume of d(x). Then the function ψ satisfies Schrödinger's equation

    iħ ψ_t = −(ħ²/2m) Δψ + Vψ,    (1.4.12)

where ħ = 1.054·10⁻²⁷ erg·sec is Planck's constant.
1.5 Classification of Linear (Second-Order) Differential Equations

1.5.1 Differential Equations with Two Independent Variables

Let us consider a second-order differential equation with two independent variables,

    a₁₁ ∂²u/∂x² + 2a₁₂ ∂²u/∂x∂y + a₂₂ ∂²u/∂y² + F(x, y, u, ∂u/∂x, ∂u/∂y) = 0,    (1.5.1)

where we shall suppose that the coefficients a₁₁, a₁₂ and a₂₂ belong to C² in a certain neighbourhood and do not vanish simultaneously anywhere in this neighbourhood. For definiteness we assume that a₁₁ ≠ 0 in this neighbourhood. In fact, it may happen that a₂₂ ≠ 0 instead; but then, by reversing the roles of x and y, we obtain an equation in which a₁₁ ≠ 0. If a₁₁ and a₂₂ vanish simultaneously at a point, then a₁₂ ≠ 0 in a neighbourhood of this point, and the change of variables x′ = x + y, y′ = x − y leads to an equation in which both a₁₁ and a₂₂ are nonzero.

Using the new variables

    ξ = ξ(x, y),  η = η(x, y),    ξ, η ∈ C²,    D(ξ, η)/D(x, y) ≠ 0,    (1.5.2)

we have

    u_x = u_ξ ξ_x + u_η η_x,
    u_y = u_ξ ξ_y + u_η η_y,
    u_xx = u_ξξ ξ_x² + 2u_ξη ξ_x η_x + u_ηη η_x² + u_ξ ξ_xx + u_η η_xx,
    u_xy = u_ξξ ξ_x ξ_y + u_ξη (ξ_x η_y + ξ_y η_x) + u_ηη η_x η_y + u_ξ ξ_xy + u_η η_xy,
    u_yy = u_ξξ ξ_y² + 2u_ξη ξ_y η_y + u_ηη η_y² + u_ξ ξ_yy + u_η η_yy.

Putting these expressions into our original equation, we get a new equation of the form

    ā₁₁ u_ξξ + 2ā₁₂ u_ξη + ā₂₂ u_ηη + F̄ = 0,    (1.5.3)

where

    ā₁₁ = a₁₁ ξ_x² + 2a₁₂ ξ_x ξ_y + a₂₂ ξ_y²,
    ā₁₂ = a₁₁ ξ_x η_x + a₁₂(ξ_x η_y + ξ_y η_x) + a₂₂ ξ_y η_y,
    ā₂₂ = a₁₁ η_x² + 2a₁₂ η_x η_y + a₂₂ η_y²,

and F̄ does not contain second-order derivatives of u with respect to ξ and η. We shall require that the functions ξ(x, y) and η(x, y) make the coefficients ā₁₁ and ā₂₂ vanish, that is, they should satisfy the equations

    a₁₁ ξ_x² + 2a₁₂ ξ_x ξ_y + a₂₂ ξ_y² = 0,    (1.5.4)
    a₁₁ η_x² + 2a₁₂ η_x η_y + a₂₂ η_y² = 0.    (1.5.5)

First we prove the following lemma.
Lemma 1.5.1
1. If z = φ(x, y) is a particular solution of the equation

    a₁₁ z_x² + 2a₁₂ z_x z_y + a₂₂ z_y² = 0,    (1.5.6)

then φ(x, y) = C is a general integral of the ordinary differential equation

    a₁₁ dy² − 2a₁₂ dx dy + a₂₂ dx² = 0.    (1.5.7)

2. Conversely, if φ(x, y) = C is a general integral of Eq. (1.5.7), then z = φ(x, y) is a solution of Eq. (1.5.6).

Proof. Since the function z = φ(x, y) is a solution of Eq. (1.5.6), dividing by φ_y² gives

    a₁₁ (−φ_x/φ_y)² − 2a₁₂ (−φ_x/φ_y) + a₂₂ = 0    (1.5.8)

for all x, y in the neighbourhood where z = φ(x, y) is defined and φ_y(x, y) ≠ 0. Let φ(x, y) = C. Because φ_y(x, y) ≠ 0, we can find a neighbourhood where y = f(x, C); then

    dy/dx = [−φ_x(x, y)/φ_y(x, y)]_{y=f(x,C)},    (1.5.9)

and

    a₁₁ (dy/dx)² − 2a₁₂ (dy/dx) + a₂₂ = [a₁₁ (−φ_x/φ_y)² − 2a₁₂ (−φ_x/φ_y) + a₂₂]_{y=f(x,C)} = 0.

Thus, y = f(x, C) is a solution of (1.5.7). Note that the bracketed expression vanishes for all x, y.

Conversely, let φ(x, y) = C be an integral of Eq. (1.5.7). We shall show that at any point (x, y)

    a₁₁ φ_x² + 2a₁₂ φ_x φ_y + a₂₂ φ_y² = 0.    (1.5.10)

Let (x₀, y₀) be given. Draw through it an integral curve of Eq. (1.5.7), for which we set φ(x₀, y₀) = C₀, and consider the curve y = f(x, C₀); clearly y₀ = f(x₀, C₀). At any point of this curve we have

    0 = a₁₁ (dy/dx)² − 2a₁₂ (dy/dx) + a₂₂ = a₁₁ (−φ_x/φ_y)² − 2a₁₂ (−φ_x/φ_y) + a₂₂,

since dy/dx = −φ_x/φ_y along y = f(x, C₀). Multiplying by φ_y² and putting x = x₀ into this equation, we get

    a₁₁ φ_x²(x₀, y₀) + 2a₁₂ φ_x φ_y(x₀, y₀) + a₂₂ φ_y²(x₀, y₀) = 0.

Since x₀, y₀ are arbitrary, the lemma is proved. □

Eq. (1.5.7) is called the characteristic equation of the differential equation (1.5.1), and its integral curves are called characteristics. Letting ξ = φ(x, y), where φ(x, y) = const is an integral of Eq. (1.5.7), we see that the coefficient ā₁₁ of u_ξξ vanishes. Analogously, setting η = ψ(x, y), where ψ(x, y) = const is another integral of Eq. (1.5.7) independent of φ(x, y), we make the coefficient ā₂₂ of u_ηη vanish as well.
Eq. (1.5.7) is equivalent to the two linear equations

    dy/dx = (a₁₂ + √(a₁₂² − a₁₁a₂₂)) / a₁₁,    (1.5.11)
    dy/dx = (a₁₂ − √(a₁₂² − a₁₁a₂₂)) / a₁₁.    (1.5.12)

According to the terminology of the theory of second-order curves, we say that Eq. (1.5.1) at a point M is of

    hyperbolic type, when at this point a₁₂² − a₁₁a₂₂ > 0;
    elliptic type, when at this point a₁₂² − a₁₁a₂₂ < 0;
    parabolic type, when at this point a₁₂² − a₁₁a₂₂ = 0.

Since

    ā₁₂² − ā₁₁ā₂₂ = (a₁₂² − a₁₁a₂₂)(ξ_x η_y − ξ_y η_x)²,

the type of Eq. (1.5.1) is invariant under the transformation (1.5.2). Note that at different points of the domain the equation may be of different types. In what follows we consider a neighbourhood G where our equation is of only one type. Through any point of G there pass two characteristics, which are real and distinct for an equation of hyperbolic type, complex and distinct for an equation of elliptic type, and real and coincident for an equation of parabolic type. We consider these cases separately.

1. For equations of hyperbolic type, a₁₂² − a₁₁a₂₂ > 0, so the right-hand sides of (1.5.11) and (1.5.12) are real and different. The general integrals φ(x, y) = C and ψ(x, y) = C of these equations define two families of characteristics. Letting

    ξ = φ(x, y),    η = ψ(x, y),    (1.5.13)

we bring Eq. (1.5.3), after dividing by the coefficient of u_ξη, into the form

    u_ξη = F̃(ξ, η, u, u_ξ, u_η)    (with ā₁₂ ≠ 0).    (1.5.14)

This is called the canonical form of a hyperbolic equation. Another canonical form is often used. Let

    α = ξ + η,    β = ξ − η,

where α and β are considered as new variables, so that ξ = (α + β)/2, η = (α − β)/2. Then

    u_α = (u_ξ + u_η)/2,    u_β = (u_ξ − u_η)/2,    u_ξη = u_αα − u_ββ,

and Eq. (1.5.3) takes the form

    u_αα − u_ββ = F₁.
2. For equations of parabolic type, a₁₂² − a₁₁a₂₂ = 0, so the equations (1.5.11) and (1.5.12) coincide and we have only one general integral of Eq. (1.5.6): φ(x, y) = const. In this case, set ξ = φ(x, y) and η = η(x, y), where η(x, y) is an arbitrary function independent of φ. For this choice we have

    ā₁₁ = a₁₁ ξ_x² + 2a₁₂ ξ_x ξ_y + a₂₂ ξ_y² = (√a₁₁ ξ_x + √a₂₂ ξ_y)² = 0,

since a₁₂ = √a₁₁ √a₂₂. It follows that

    ā₁₂ = a₁₁ ξ_x η_x + a₁₂(ξ_x η_y + ξ_y η_x) + a₂₂ ξ_y η_y = (√a₁₁ ξ_x + √a₂₂ ξ_y)(√a₁₁ η_x + √a₂₂ η_y) = 0.

Thus,

    u_ηη = Φ(ξ, η, u, u_ξ, u_η)    with Φ = −F̄/ā₂₂ (ā₂₂ ≠ 0)

is the canonical form of a parabolic equation. If the right-hand side of this equation does not depend on u_ξ, then we have an ordinary differential equation in η.

3. For the case of elliptic equations, a₁₂² − a₁₁a₂₂ < 0, and the equation (1.5.11) is the complex conjugate of (1.5.12). If φ(x, y) = C is a complex integral of Eq. (1.5.11), then φ*(x, y) = C is a complex integral of Eq. (1.5.12), where φ* is the complex function conjugate to φ. Let

    ξ = φ(x, y),    η = φ*(x, y).

In order to work with real functions, let further

    α = (φ + φ*)/2,    β = (φ − φ*)/(2i).

It is clear that α and β are real, and ξ = α + iβ, η = α − iβ. We note that since we are working with complex variables, we have to suppose that all the coefficients a₁₁, a₁₂, ... are analytic. In this case

    a₁₁ ξ_x² + 2a₁₂ ξ_x ξ_y + a₂₂ ξ_y²
      = (a₁₁ α_x² + 2a₁₂ α_x α_y + a₂₂ α_y²) − (a₁₁ β_x² + 2a₁₂ β_x β_y + a₂₂ β_y²)
        + 2i (a₁₁ α_x β_x + a₁₂(α_x β_y + α_y β_x) + a₂₂ α_y β_y),

and since ξ satisfies (1.5.4), the real and imaginary parts give ā₁₁ = ā₂₂ and ā₁₂ = 0. Now we have

    u_αα + u_ββ = Φ(α, β, u, u_α, u_β)    with Φ = −F̄/ā₂₂ (ā₂₂ ≠ 0).

We summarize the results of this paragraph as follows.
[Figure: the xy-plane divided into regions of hyperbolic, elliptic and parabolic type.]

    a₁₂² − a₁₁a₂₂ > 0 (hyperbolic type): u_xx − u_yy = Φ or u_xy = Φ;
    a₁₂² − a₁₁a₂₂ < 0 (elliptic type): u_xx + u_yy = Φ;
    a₁₂² − a₁₁a₂₂ = 0 (parabolic type): u_xx = Φ.
Example 1.5.2 Tricomi's equation

    y ∂²u/∂x² + ∂²u/∂y² = 0

belongs to the mixed type: when y < 0 it is of hyperbolic type, when y > 0 it is of elliptic type, and when y = 0 it is of parabolic type. Tricomi's equation is of interest in gas dynamics. With y < 0 the equations of the characteristics assume the form

    y′ = ±1/√(−y).

Therefore the curves

    (3/2) x + √(−y³) = C₁,    (3/2) x − √(−y³) = C₂

are characteristics of Tricomi's equation. The transform

    ξ = (3/2) x + √(−y³),    η = (3/2) x − √(−y³)

reduces Tricomi's equation to the canonical form

    ∂²ũ/∂ξ∂η − (1/(6(ξ − η))) (∂ũ/∂ξ − ∂ũ/∂η) = 0,    ξ > η.

If y > 0, then φ = (3/2) x − i√(y³), and with the change of variables

    α = (3/2) x,    β = −√(y³),

Tricomi's equation reduces to

    ∂²ũ/∂α² + ∂²ũ/∂β² + (1/(3β)) ∂ũ/∂β = 0,    β < 0.
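The pointwise classification by the sign of the discriminant is trivially mechanizable; the following sketch (not from the notes; the function names are invented for illustration) applies it to Tricomi's equation, where a₁₁ = y, a₁₂ = 0, a₂₂ = 1.

```python
def pde_type(a11, a12, a22):
    """Type of a11 u_xx + 2 a12 u_xy + a22 u_yy + ... = 0 at one point,
    read off from the discriminant a12^2 - a11*a22."""
    d = a12 * a12 - a11 * a22
    if d > 0:
        return "hyperbolic"
    if d < 0:
        return "elliptic"
    return "parabolic"

def tricomi_type(x, y):
    # Tricomi: y u_xx + u_yy = 0, so a11 = y, a12 = 0, a22 = 1;
    # the type depends only on the sign of y.
    return pde_type(y, 0.0, 1.0)

print(tricomi_type(1.0, -2.0))  # hyperbolic
print(tricomi_type(1.0, 3.0))   # elliptic
print(tricomi_type(1.0, 0.0))   # parabolic
```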
1.5.2 Differential Equations with Several Independent Variables

We consider now the linear differential equation with real coefficients

    Σᵢ₌₁ⁿ Σⱼ₌₁ⁿ aᵢⱼ u_{xᵢxⱼ} + Σᵢ₌₁ⁿ bᵢ u_{xᵢ} + cu + f = 0    (aᵢⱼ = aⱼᵢ),    (1.5.15)

where the aᵢⱼ, bᵢ, c and f are functions of the variables x₁, x₂, ..., xₙ. Making the change of variables

    ξₖ = ξₖ(x₁, x₂, ..., xₙ)    (k = 1, ..., n),

we have

    u_{xᵢ} = Σₖ₌₁ⁿ u_{ξₖ} αᵢₖ,
    u_{xᵢxⱼ} = Σₖ₌₁ⁿ Σₗ₌₁ⁿ u_{ξₖξₗ} αᵢₖ αⱼₗ + Σₖ₌₁ⁿ u_{ξₖ} (ξₖ)_{xᵢxⱼ},

where αᵢₖ := ∂ξₖ/∂xᵢ. Putting these expressions into (1.5.15), we get

    Σₖ₌₁ⁿ Σₗ₌₁ⁿ āₖₗ u_{ξₖξₗ} + Σₖ₌₁ⁿ b̄ₖ u_{ξₖ} + cu + f = 0,

where

    āₖₗ = Σᵢ₌₁ⁿ Σⱼ₌₁ⁿ aᵢⱼ αᵢₖ αⱼₗ,
    b̄ₖ = Σᵢ₌₁ⁿ bᵢ αᵢₖ + Σᵢ₌₁ⁿ Σⱼ₌₁ⁿ aᵢⱼ (ξₖ)_{xᵢxⱼ}.

We consider now the quadratic form

    Σᵢ₌₁ⁿ Σⱼ₌₁ⁿ a⁰ᵢⱼ yᵢ yⱼ,    (1.5.16)

where a⁰ᵢⱼ := aᵢⱼ(x⁰₁, ..., x⁰ₙ). With the nonsingular linear transformation

    yᵢ = Σₖ₌₁ⁿ αᵢₖ ηₖ,    det(αᵢₖ) ≠ 0,

we get the new quadratic form

    Σₖ₌₁ⁿ Σₗ₌₁ⁿ ā⁰ₖₗ ηₖ ηₗ    with    ā⁰ₖₗ = Σᵢ₌₁ⁿ Σⱼ₌₁ⁿ a⁰ᵢⱼ αᵢₖ αⱼₗ.
Thus, the coefficients of the principal part of Eq. (1.5.15) are transformed in the same way as the coefficients of a quadratic form under a nonsingular linear transformation. It is well known that there exists a nonsingular linear transformation which makes the matrix (a⁰ᵢⱼ) of a quadratic form diagonal, with

    |ā⁰ᵢᵢ| = 1 or 0,    and    ā⁰ᵢⱼ = 0 (i ≠ j; i, j = 1, ..., n).

Moreover, by Sylvester's theorem on the inertia of quadratic forms, the number of coefficients ā⁰ᵢᵢ ≠ 0 equals the rank of the matrix (a⁰ᵢⱼ), and the number of negative coefficients is invariant. We then say that the form is in canonical form.

We say that Eq. (1.5.15) at the point x⁰ is of

    elliptic type, if all n coefficients ā⁰ᵢᵢ ≠ 0 and they have the same sign;
    hyperbolic type, if all ā⁰ᵢᵢ ≠ 0, n − 1 of the coefficients ā⁰ᵢᵢ have the same sign and exactly one has the opposite sign;
    ultrahyperbolic type, if all ā⁰ᵢᵢ ≠ 0, m coefficients have one sign and the n − m others the opposite sign (m > 1, n − m > 1);
    parabolic type, if at least one of the coefficients ā⁰ᵢᵢ vanishes.

We take new independent variables ξᵢ such that at the point x⁰

    αᵢₖ = ∂ξₖ/∂xᵢ = α⁰ᵢₖ,

where the α⁰ᵢₖ are the coefficients of the transformation which brings the quadratic form (1.5.16) into canonical form. Then the equation assumes, at x⁰, one of the canonical forms

    u_{x₁x₁} + u_{x₂x₂} + ... + u_{xₙxₙ} + Φ = 0        (elliptic type),
    u_{x₁x₁} = Σᵢ₌₂ⁿ u_{xᵢxᵢ} + Φ                      (hyperbolic type),
    Σᵢ₌₁ᵐ u_{xᵢxᵢ} = Σᵢ₌ₘ₊₁ⁿ u_{xᵢxᵢ} + Φ              (ultrahyperbolic type),
    Σᵢ₌₁ⁿ⁻ᵐ (± u_{xᵢxᵢ}) + Φ = 0    (m > 0)            (parabolic type).

Is it possible to use a single transformation to reduce (1.5.15) to the canonical form in a sufficiently small neighborhood of each point? To make the reduction possible for an arbitrary equation, it is necessary that the number of conditions

    āₗₖ = 0    (l ≠ k; l, k = 1, 2, ..., n),    āₗₗ = ±ā₁₁    (l = 2, 3, ..., n; ā₁₁ ≠ 0)

should not be greater than the number n of unknown functions ξₗ, l = 1, 2, ..., n; that is,

    n(n − 1)/2 + n − 1 ≤ n,    i.e.    n ≤ 2.

For n = 2 we have already proved that it is possible to use a single transformation to reduce (1.5.15) to the canonical form in a sufficiently small neighborhood of each point.
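The sign count that defines these types can be carried out mechanically. The sketch below (not from the notes; function names are invented for illustration) diagonalizes the symmetric coefficient matrix (a⁰ᵢⱼ) by congruence, i.e. by symmetric row-and-column elimination, and classifies by the signs of the resulting diagonal; by Sylvester's inertia theorem the answer does not depend on the elimination steps chosen.

```python
def classify(A, eps=1e-12):
    """Classify the principal part with symmetric matrix A = (a_ij) by
    congruence-diagonalizing A and counting diagonal signs."""
    n = len(A)
    M = [row[:] for row in A]                # work on a copy
    for k in range(n):
        if abs(M[k][k]) < eps:               # try a symmetric swap first
            for j in range(k + 1, n):
                if abs(M[j][j]) > eps:
                    for r in range(n):
                        M[r][k], M[r][j] = M[r][j], M[r][k]
                    M[k], M[j] = M[j], M[k]
                    break
        if abs(M[k][k]) < eps:               # classic trick: x_k -> x_k + x_j
            for j in range(k + 1, n):
                if abs(M[k][j]) > eps:
                    for r in range(n):
                        M[r][k] += M[r][j]
                    for c in range(n):
                        M[k][c] += M[j][c]
                    break
        if abs(M[k][k]) < eps:
            continue                         # a genuine zero on the diagonal
        for i in range(k + 1, n):            # symmetric elimination step
            f = M[i][k] / M[k][k]
            for j in range(n):
                M[i][j] -= f * M[k][j]       # row operation
            for r in range(n):
                M[r][i] -= f * M[r][k]       # matching column operation
    diag = [M[i][i] for i in range(n)]
    pos = sum(d > eps for d in diag)
    neg = sum(d < -eps for d in diag)
    if pos + neg < n:
        return "parabolic"
    if pos == n or neg == n:
        return "elliptic"
    if min(pos, neg) == 1:
        return "hyperbolic"
    return "ultrahyperbolic"

print(classify([[1, 0, 0], [0, 1, 0], [0, 0, 1]]))    # Laplace: elliptic
print(classify([[1, 0, 0], [0, -1, 0], [0, 0, -1]]))  # wave: hyperbolic
print(classify([[0, 0, 0], [0, 1, 0], [0, 0, 1]]))    # heat: parabolic
```

The diagonal entries are not rescaled to ±1, but only their signs matter for the classification.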
Chapter 2

Characteristic Manifolds and the Cauchy problem

2.1 Notation

Let x = (x₁, ..., xₙ) ∈ ℝⁿ. A vector α = (α₁, ..., αₙ) whose components are non-negative integers αₖ is called a multi-index. The letters x, y, ... will be used for vectors, and α, β, ... for multi-indices. Components are always indicated by adding a subscript to the vector symbol: α has components αₖ. We set

    0 = (0, 0, ..., 0),    1 = (1, 1, ..., 1).    (2.1.1)

For a multi-index α set

    |α| = α₁ + α₂ + ... + αₙ;    α! = α₁! α₂! ... αₙ!,    (2.1.2)

and for x ∈ ℝⁿ and a multi-index α,

    xᵅ := x₁^{α₁} x₂^{α₂} ... xₙ^{αₙ}.    (2.1.3)

By C_α we generally denote a coefficient depending on n nonnegative integers α₁, α₂, ..., αₙ:

    C_α = C_{α₁...αₙ}.    (2.1.4)

The C_α may be real numbers, or vectors in a space ℝᵐ. The general m-th degree polynomial in x₁, ..., xₙ is then of the form

    P(x) = Σ_{|α| ≤ m} C_α xᵅ,    (2.1.5)

with α a multi-index, x ∈ ℝⁿ, C_α ∈ ℝ. Setting Dₖ = ∂/∂xₖ, we introduce the "gradient vector" D = (D₁, ..., Dₙ) and define the gradient of a function u(x₁, ..., xₙ) as the vector

    Du = (D₁u, ..., Dₙu).    (2.1.6)

The general partial differential operator of order m is then denoted by

    Dᵅ = D₁^{α₁} D₂^{α₂} ... Dₙ^{αₙ} = ∂ᵐ / (∂x₁^{α₁} ... ∂xₙ^{αₙ}),    (2.1.7)

where |α| = m.
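The conventions (2.1.2)-(2.1.3) are easy to mirror in code; the following helpers (names invented for illustration) compute |α|, α! and the monomial xᵅ for multi-indices given as tuples.

```python
from math import factorial

def abs_mi(alpha):
    """|alpha| = alpha_1 + ... + alpha_n, cf. (2.1.2)."""
    return sum(alpha)

def fact_mi(alpha):
    """alpha! = alpha_1! * ... * alpha_n!, cf. (2.1.2)."""
    p = 1
    for a in alpha:
        p *= factorial(a)
    return p

def monomial(x, alpha):
    """x^alpha = x_1^{alpha_1} * ... * x_n^{alpha_n}, cf. (2.1.3)."""
    p = 1.0
    for xi, ai in zip(x, alpha):
        p *= xi ** ai
    return p

alpha = (2, 0, 3)
print(abs_mi(alpha))                        # 5
print(fact_mi(alpha))                       # 2! * 0! * 3! = 12
print(monomial((2.0, 5.0, 1.5), alpha))     # 2^2 * 5^0 * 1.5^3 = 13.5
```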
2.2 The Cauchy problem

Consider the m-th order linear differential equation for a function u(x) = u(x₁, ..., xₙ),

    Lu := Σ_{|α| ≤ m} A_α(x) Dᵅu = B(x).    (2.2.1)

The same formula describes the general m-th order system of N differential equations in N unknowns if we interpret u and B as column vectors with N components and the A_α as N × N square matrices. Similarly, the general m-th order quasi-linear equation (respectively, system of such equations) is

    Lu := Σ_{|α| = m} A_α Dᵅu + C = 0,    (2.2.2)

where now the A_α and C are functions of the independent variables xₖ and of the derivatives Dᵝu of the unknown u of orders |β| ≤ m − 1. More general nonlinear equations or systems

    F(x, Dᵅu) = 0    (2.2.3)

can be reduced formally to quasi-linear ones by applying a first-order differential operator to (2.2.3). On the other hand, an m-th order quasi-linear system (2.2.2) can be reduced to a (large) first-order one by introducing all derivatives Dᵅu with |α| ≤ m − 1 as new variables and making use of suitable compatibility conditions for the Dᵅu.

The Cauchy problem consists of finding a solution u of (2.2.2) or (2.2.1) having prescribed Cauchy data on a hypersurface S ⊂ ℝⁿ given by

    φ(x₁, x₂, ..., xₙ) = 0.    (2.2.4)

Here φ shall have m continuous derivatives, and the surface should be regular in the sense that

    Dφ = (φ_{x₁}, ..., φ_{xₙ}) ≠ 0.    (2.2.5)

The Cauchy data on S for an m-th order equation consist of the derivatives of u of order less than or equal to m − 1. They cannot be given arbitrarily but have to satisfy the compatibility conditions valid on S (instead, the normal derivatives of order less than m can be given independently of each other). We are to find a solution u near S which has these Cauchy data on S.

We call S noncharacteristic if we can obtain all Dᵅu with |α| = m on S from the linear algebraic system of equations consisting of the compatibility conditions for the data and the equation (2.2.2) or (2.2.1) taken on S. We call S characteristic if at each point x of S the surface S is not noncharacteristic.

To get an algebraic criterion for characteristic surfaces, we first consider the special case where the hypersurface S is the coordinate plane xₙ = 0. The Cauchy data then consist of the Dᵅu with |α| < m taken at xₙ = 0. Singling out the "normal" derivatives on S of orders ≤ m − 1,

    ∂ᵏu/∂xₙᵏ = Dₙᵏu = ψₖ(x₁, ..., xₙ₋₁)    for k = 0, ..., m − 1 and xₙ = 0,    (2.2.6)

we have on S

    Dᵅu = D₁^{α₁} D₂^{α₂} ... Dₙ₋₁^{αₙ₋₁} ψ_{αₙ},    (2.2.7)
provided that \alpha_n < m. In particular, for |\alpha| \le m-1 these are the compatibility conditions expressing all Cauchy data in terms of the normal derivatives \psi_k on S. Let \mu denote the multi-index

    \mu = (0, ..., 0, m).   (2.2.8)

In the equation (2.2.1) or (2.2.2) taken on S, it is only the term with \alpha = \mu that is not expressible by (2.2.7) in terms of \psi_0, ..., \psi_{m-1}, and hence in terms of the Cauchy data; all other terms contain derivatives D^\alpha u with \alpha_n \le m-1. Thus D^\mu u, and hence all D^\alpha u with |\alpha| \le m, are determined uniquely on S if we can solve the differential equation for the term D^\mu u. This is possible in a unique way if and only if the matrix A_\mu is nondegenerate, i.e., det(A_\mu) \ne 0. For a single scalar differential equation this condition reduces to A_\mu \ne 0. In the linear case the condition

    det(A_\mu) \ne 0   (2.2.9)

does not depend on the Cauchy data on S; in the quasi-linear case, where the A_\alpha depend on the D^\beta u with |\beta| \le m-1 and on x, one has to know the \psi_k in order to decide whether S is characteristic. Since the condition (2.2.9) involves the coefficients of the mth-order derivatives, we define the principal part L_pr of L (both in (2.2.1) and (2.2.2)) as

    L_pr := \sum_{|\alpha| = m} A_\alpha D^\alpha.   (2.2.10)

The "symbol" of this differential operator is the matrix form (the "characteristic matrix" of L)

    \Lambda(\xi) = \sum_{|\alpha| = m} A_\alpha \xi^\alpha.   (2.2.11)

The N x N matrix \Lambda(\xi) has elements that are mth-degree forms in the components of the vector \xi = (\xi_1, ..., \xi_n). In particular, the multiplier of D_n^m = \partial^m / \partial x_n^m in L_pr is A_\mu = \Lambda(\nu), where

    \nu = (0, 0, ..., 0, 1) = D\phi   (2.2.12)

is the unit normal to the surface \phi = x_n = 0. The condition for the plane \phi = x_n = 0 to be noncharacteristic is then

    Q(D\phi) \ne 0,   (2.2.13)

where Q = Q(\xi) is the characteristic form defined by

    Q(\xi) := det(\Lambda(\xi))   (2.2.14)

for any vector \xi. (In the case of a scalar equation (N = 1) the characteristic form Q(\xi) coincides with the polynomial \Lambda(\xi).) We shall see that (2.2.13) is the condition for a general surface \phi = 0 to be noncharacteristic. Consider a general S given by (2.2.4). By assumption (2.2.5) we can always suppose that in a neighborhood of a given point of S the condition \phi_{x_n} \ne 0 holds. The transformation

    y_i = x_i   for i = 1, ..., n-1,      y_n = \phi(x_1, ..., x_n)   for i = n,   (2.2.15)

is then locally regular and invertible. By the chain rule,

    \partial u / \partial x_i = \sum_k C_{ik} \partial u / \partial y_k,   (2.2.16)
where the
    C_{ik} = \partial y_k / \partial x_i   (2.2.17)

are functions of x or of y. Denoting by C the matrix of the C_{ik} and introducing the gradient operator d with respect to y, with components

    d_i = \partial / \partial y_i,   (2.2.18)

we can write (2.2.16) symbolically as

    D = C d,   (2.2.19)

taking D and d to be column vectors. Generally, for |\alpha| = m,

    D^\alpha = (C d)^\alpha + R_\alpha,   (2.2.20)

where R_\alpha is a linear differential operator involving only derivatives of order \le m-1. Then the principal part of L in (2.2.1) or (2.2.2), transformed to y-coordinates, is given by

    L_pr = \sum_{|\alpha| = m} A_\alpha (C d)^\alpha =: \ell_pr,   (2.2.21)

and its symbol, the characteristic matrix of \ell, by

    \lambda(\eta) = \sum_{|\alpha| = m} A_\alpha (C \eta)^\alpha.   (2.2.22)

Since the mapping (2.2.15) is regular, S is noncharacteristic for L if the plane y_n = 0 is noncharacteristic with respect to the operator L transformed to y-coordinates, i.e. if

    det(\lambda(\nu)) = det( \sum_{|\alpha| = m} A_\alpha (C \nu)^\alpha ) \ne 0   (2.2.23)

for \nu = (0, 0, ..., 0, 1). But then

    C \nu = ( \partial y_n / \partial x_1, ..., \partial y_n / \partial x_n ) = ( \phi_{x_1}, ..., \phi_{x_n} ) = D\phi.

Thus, the condition for noncharacteristic behavior of S can again be written as (2.2.13). If u in (2.2.2) stands for a vector with N components, the condition for S to be a characteristic surface is

    Q(D\phi) = det( \sum_{|\alpha| = m} A_\alpha (D\phi)^\alpha ) = 0.   (2.2.24)
Example 2.2.1 For the wave equation u_tt = c^2 ( u_{x_1 x_1} + u_{x_2 x_2} ), u = u(x_1, x_2, t), a characteristic surface

    t = \phi(x_1, x_2)   (2.2.25)

satisfies the equation

    1 = c^2 ( \phi_{x_1}^2 + \phi_{x_2}^2 ).   (2.2.26)
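This eikonal-type relation can be checked symbolically for a plane wave front. The following sketch (sympy assumed available; the surface t = (x_1 cos\theta + x_2 sin\theta)/c is an illustrative choice, not taken from the notes) verifies that such a plane satisfies the characteristic equation:

```python
import sympy as sp

x1, x2, c, th = sp.symbols('x1 x2 c theta', positive=True)
# A plane characteristic surface t = phi(x1, x2); its gradient has
# length 1/c, as the characteristic condition requires.
phi = (x1 * sp.cos(th) + x2 * sp.sin(th)) / c
residual = c**2 * (sp.diff(phi, x1)**2 + sp.diff(phi, x2)**2) - 1
print(sp.simplify(residual))  # -> 0, i.e. 1 = c^2 (phi_x1^2 + phi_x2^2)
```

Any rotation angle theta works, which reflects the fact that the characteristic surfaces of the wave equation form a full cone of wave fronts.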
2.3 Real Analytic Functions and the Cauchy-Kowalevski Theorem

2.3.1 Real Analytic Functions

2.3.1.1 Multiple infinite series

We say that the multiple infinite series

    \sum_{\alpha \in ZZ_+^n} c_\alpha   (2.3.1)

converges if

    \sum_\alpha |c_\alpha| < \infty.   (2.3.2)
The sum of a convergent series does not depend on the order of summation.
Example 2.3.1 For x \in IR^n, \alpha \in ZZ_+^n,

    \sum_\alpha x^\alpha = \prod_{i=1}^n ( \sum_{\alpha_i = 0}^\infty x_i^{\alpha_i} ) = \frac{1}{(1 - x_1)(1 - x_2) \cdots (1 - x_n)},   (2.3.3)

provided |x_i| < 1 for all i.
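A quick numerical illustration of the product formula (a sketch for n = 2; the point (0.3, -0.5) and the truncation order are arbitrary choices):

```python
from itertools import product

# Truncated multiple geometric series: sum of x^alpha over multi-indices
# alpha = (a1, a2) with 0 <= ai < N approximates 1/((1-x1)(1-x2)).
x = (0.3, -0.5)
N = 60  # the neglected tail is geometrically small since |xi| < 1
s = sum((x[0] ** a1) * (x[1] ** a2) for a1, a2 in product(range(N), repeat=2))
exact = 1.0 / ((1 - x[0]) * (1 - x[1]))
print(abs(s - exact) < 1e-12)  # -> True
```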
Example 2.3.2 For x \in IR^n, \alpha \in ZZ_+^n,

    \sum_\alpha \frac{|\alpha|!}{\alpha!} x^\alpha = \sum_{j=0}^\infty \sum_{|\alpha| = j} \frac{j!}{\alpha!} x^\alpha = \sum_{j=0}^\infty (x_1 + \cdots + x_n)^j = \frac{1}{1 - (x_1 + \cdots + x_n)},

provided that |x_1| + \cdots + |x_n| < 1.
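The middle step above is the multinomial theorem. A minimal numerical check of that step (n = 2, j = 5; the evaluation point is an arbitrary choice):

```python
from math import factorial

# Multinomial identity: sum over |alpha| = j of (j!/alpha!) x^alpha
# equals (x1 + x2)^j, here checked for one j in two variables.
x1, x2 = 0.2, -0.1
j = 5
lhs = sum(factorial(j) // (factorial(a1) * factorial(j - a1)) * x1**a1 * x2**(j - a1)
          for a1 in range(j + 1))
print(abs(lhs - (x1 + x2) ** j) < 1e-12)  # -> True
```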
Let c_\alpha(x) be continuous scalar functions defined in a set S \subset IR^n. If

    |c_\alpha(x)| \le c_\alpha   for all \alpha \in ZZ_+^n, x \in S,   and   \sum_\alpha c_\alpha < \infty,

then

    \sum_\alpha c_\alpha(x)

converges uniformly for x \in S and represents a continuous function. If S is open, c_\alpha \in C^j(S) for all \alpha, and the series

    \sum_\alpha D^\beta c_\alpha(x)

converges uniformly for x \in S and each \beta with |\beta| \le j, then \sum_\alpha c_\alpha(x) \in C^j(S) and

    D^\beta \sum_\alpha c_\alpha(x) = \sum_\alpha D^\beta c_\alpha(x),   x \in S, |\beta| \le j.

Of particular importance are the power series

    \sum_\alpha c_\alpha x^\alpha,   x \in IR^n, \alpha \in ZZ_+^n, c_\alpha \in IR.   (2.3.4)

Assume that the series converges absolutely for a certain z:

    \sum_\alpha |c_\alpha| |z^\alpha| < \infty.   (2.3.5)

Then (2.3.4) converges uniformly for all x in the set

    { x : |x_i| \le |z_i| for all i }.   (2.3.6)

Hence,

    f(x) = \sum_\alpha c_\alpha x^\alpha   (2.3.7)

defines a continuous function f in the set (2.3.6). Problem: Show that all series obtained by formal differentiation of (2.3.7) converge in the interior of (2.3.6), and even uniformly on every compact subset of the interior.
2.3.1.2 Real analytic functions

Definition 2.3.3 Let f(x) be a function whose domain is an open set \Omega \subset IR^n and whose range lies in IR. For y \in \Omega we call f real analytic at y if there exist c_\alpha \in IR and a neighborhood U of y (all depending on y) such that

    f(x) = \sum_\alpha c_\alpha (x - y)^\alpha   for all x \in U.   (2.3.8)

We say f is real analytic in \Omega (f \in C^\omega(\Omega)) if f is real analytic at each y \in \Omega. A vector f(x) = (f_1(x), ..., f_m(x)) defined in \Omega is called real analytic if each of its components is real analytic.

Theorem 2.3.4 If f = (f_1, ..., f_m) \in C^\omega(\Omega), then f \in C^\infty(\Omega). Moreover, for any y \in \Omega there exist a neighborhood U of y and positive numbers M, r such that for all x \in U

    f(x) = \sum_\alpha \frac{1}{\alpha!} ( D^\alpha f(y) ) (x - y)^\alpha,   (2.3.9)

    |D^\alpha f_k(x)| \le M |\alpha|! r^{-|\alpha|}   for all \alpha \in ZZ_+^n and all k.
Theorem 2.3.5 Let f \in C^\omega(\Omega), where \Omega is a connected open set in IR^n. Let z \in \Omega. Then f is uniquely determined in \Omega if we know the D^\alpha f(z) for all \alpha \in ZZ_+^n. In particular, f is uniquely determined in \Omega by its values in any non-empty open subset of \Omega.
Proof: Let f_1, f_2 \in C^\omega(\Omega) and let D^\alpha f_1(z) = D^\alpha f_2(z) for all \alpha \in ZZ_+^n. Write g = f_1 - f_2 and decompose \Omega into

    \Omega_1 = { x \in \Omega : D^\alpha g(x) = 0 for all \alpha \in ZZ_+^n },
    \Omega_2 = { x \in \Omega : D^\alpha g(x) \ne 0 for some \alpha \in ZZ_+^n }.

Then \Omega_2 is open by the continuity of the D^\alpha g. The set \Omega_1 is also open, for if y \in \Omega_1, then g(x) = 0 in a neighborhood of y by (2.3.9). Since z \in \Omega_1 and \Omega is connected, \Omega_2 must be empty.   \Box
2.3.1.3 Analytic and real analytic functions

Let x \in CI^n, \alpha \in ZZ_+^n, P \subset CI^n open, and f : P \to CI^m. We call f analytic in P (f \in C^a(P)) if for each y \in P we can represent f in the form (2.3.8) in a neighborhood of y; here c_\alpha \in CI^m. It is easily seen that

    c_\alpha = \frac{1}{\alpha!} D^\alpha f(y).   (2.3.10)

Theorem 2.3.6 A function f(x) with range in CI^m and domain in the open set P \subset CI^n is analytic in P if f is differentiable with respect to the independent variables.

Theorem 2.3.7 If f \in C^\omega(\Omega) with \Omega \subset IR^n, then for every compact subset S of \Omega there exist a neighborhood P of S in CI^n and a function F \in C^a(P) such that F(x) = f(x) for all x \in S.
2.3.2 The Cauchy-Kowalevski theorem

The theorem concerns the existence of a real analytic solution of the Cauchy problem in the case of real analytic data and equations. We restrict ourselves to quasi-linear systems of type (2.2.2), since more general nonlinear systems can be reduced to quasi-linear ones by differentiation. We assume that S is real analytic in a neighborhood of one of its points x^0; that is, near x^0 the surface S is given by an equation \phi(x) = 0, where \phi is real analytic at x^0 and D\phi \ne 0 at x^0, say \phi_{x_n} \ne 0. On S we prescribe compatible Cauchy data D^\alpha u for |\alpha| < m which shall be real analytic at x^0. The coefficients A_\alpha, C shall be real analytic functions of their arguments x, D^\beta u at the point x^0, D^\beta u(x^0). We assume, moreover, S to be non-characteristic at x^0 (and hence in a neighborhood of x^0) in the sense that Q(D\phi) \ne 0. Then the Cauchy-Kowalevski theorem asserts that there exists a unique solution of (2.2.2) which is real analytic at x^0.

First of all, we can transform x^0 into the origin and S locally, by an analytic transformation, into a neighborhood of the origin in the plane x_n = 0. Then, by introducing the derivatives of orders less than m as new dependent variables, one reduces the system to one of first order. We make use here of the fact that the set of real analytic functions is closed under differentiation and composition. One arrives at a first-order system in which the coefficient matrix of the term with \partial u/\partial x_n is non-degenerate because S is non-characteristic. Hence we can solve for \partial u/\partial x_n, obtaining a system in the standard form

    \partial U/\partial x_n = \sum_{i=1}^{n-1} a^i(x, U) \partial U/\partial x_i + b(x, U),   (2.3.11)

where each a^i(x, U) is a square matrix (a^i_{jk}), b(x, U) is a column vector with components b_j, and U is the new vector formed from u and its derivatives of order less than m. On x_n = 0, near 0, we have the prescribed initial values U = f(x_1, ..., x_{n-1}). Here we can assume that f = 0, introducing U - f as the new unknown.

We can add x_n as an additional dependent variable U^* (a component of the enlarged vector U) satisfying the equation \partial U^*/\partial x_n = 1 and the initial condition U^* = 0. This has the effect that the a^i and b in (2.3.11) do not depend explicitly on x_n. In fact, we can write
    a^i(x, U) = a^i(x_1, x_2, ..., x_n, U) = \tilde a^i(x_1, x_2, ..., x_{n-1}, \tilde U),
    b(x, U)   = b(x_1, x_2, ..., x_n, U)   = \tilde b(x_1, x_2, ..., x_{n-1}, \tilde U),

where \tilde U = (U, U^*). Writing (2.3.11) componentwise, we have to prove the following version of the Cauchy-Kowalevski theorem:
Theorem 2.3.8 (Cauchy-Kowalevski) Let the a^i_{jk} and b_j be real analytic functions of

    z = (x_1, x_2, ..., x_{n-1}, u_1, ..., u_N)

at the origin of IR^{N+n-1}. Then the system of differential equations

    \partial u_j/\partial x_n = \sum_{i=1}^{n-1} \sum_{k=1}^{N} a^i_{jk}(z) \partial u_k/\partial x_i + b_j(z)   for j = 1, ..., N,   (2.3.12)

with initial conditions

    u_j = 0   for x_n = 0, j = 1, ..., N,   (2.3.13)

has a unique (among the real analytic u_j) system of solutions u_j(x_1, ..., x_n) that is real analytic at the origin.
Proof: See, e.g., John's book [10] or Petrovskii's book [17].

Remark: The theorem of Cauchy and Kowalevski is local in character and applies only to analytic solutions of analytic Cauchy problems. It does not guarantee global existence of solutions; it excludes neither the possibility that other, non-analytic solutions exist for the same Cauchy problem, nor the possibility that an analytic solution becomes non-analytic at some distance from the initial surface.
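The proof determines the Taylor coefficients of the solution recursively from the equation and the data. A minimal sketch of that recursion (not from the notes) for the scalar example u_{x_2} = u_{x_1} with u(x_1, 0) = e^{x_1}, whose exact solution is e^{x_1 + x_2}:

```python
from math import factorial, exp

# c[j][k] = D1^j D2^k u(0,0). The PDE u_x2 = u_x1, differentiated j times in
# x1 and k times in x2, gives the recursion c[j][k+1] = c[j+1][k]; the Cauchy
# data u(x1, 0) = e^x1 give c[j][0] = 1.
J = K = 12
c = [[0.0] * (K + 1) for _ in range(J + K + 2)]
for j in range(J + K + 2):
    c[j][0] = 1.0                      # Cauchy data on x2 = 0
for k in range(K):
    for j in range(J + K + 1 - k):
        c[j][k + 1] = c[j + 1][k]      # the differentiated equation
x1, x2 = 0.3, 0.2
u = sum(c[j][k] * x1**j * x2**k / (factorial(j) * factorial(k))
        for j in range(J + 1) for k in range(K + 1))
print(abs(u - exp(x1 + x2)) < 1e-9)    # -> True
```

The majorant argument in the actual proof shows that this formally constructed series converges near the origin.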
2.4 The Uniqueness Theorem of Holmgren

2.4.1 The Lagrange-Green Identity

Let \Omega be a domain in IR^n with a sufficiently regular boundary \partial\Omega. Denote by d/dn the differentiation in the direction of the exterior unit normal \nu = (\nu_1, ..., \nu_n) of \partial\Omega. The Gauss-Ostrogradskii formula says that

    \int_\Omega D_k u(x) dx = \int_{\partial\Omega} u(\xi) \nu_k dS.   (2.4.1)

If \partial\Omega is sufficiently regular, then this formula is applicable to all u \in C^1(\bar\Omega). The theorem can be generalized to u \in C^1(\Omega) \cap C^0(\bar\Omega) by approximating \Omega from the interior. More generally, we have the formula for integration by parts,

    \int_\Omega v^T D_k u dx = \int_{\partial\Omega} v^T u \nu_k dS - \int_\Omega (D_k v)^T u dx,   (2.4.2)

where u, v are column vectors belonging to C^1(\bar\Omega), with T denoting transposition.
Let now L be a linear differential operator

    Lu = \sum_{|\alpha| \le m} a_\alpha(x) D^\alpha u.   (2.4.3)

Let u, v be column vectors and the a_\alpha square matrices in C^m(\bar\Omega). Then by repeated application of (2.4.2) it follows that

    \int_\Omega v^T \sum_{|\alpha| \le m} a_\alpha(x) D^\alpha u dx
      = \int_\Omega \sum_{|\alpha| \le m} ( (-1)^{|\alpha|} D^\alpha ( a_\alpha(x)^T v ) )^T u dx + \int_{\partial\Omega} M(v, u, \nu) dS.   (2.4.4)

Here M in the surface integral is linear in the \nu_k, with coefficients which are bilinear in the derivatives of v and u, the total number of differentiations in each term being at most m-1. The expression M is not uniquely determined but depends on the order of performing the integrations by parts. This is the Lagrange-Green identity for L, which we also write in the form

    \int_\Omega v^T Lu dx = \int_\Omega (L^* v)^T u dx + \int_{\partial\Omega} M(u, v, \nu) dS,   (2.4.5)

where L^* is the (formally) adjoint operator to L, defined by

    L^* v := \sum_{|\alpha| \le m} (-1)^{|\alpha|} D^\alpha ( a_\alpha(x)^T v ).   (2.4.6)

For the Laplace operator L = \Delta and scalar functions u and v we have

    \int_\Omega v \Delta u dx = \int_{\partial\Omega} v \sum_i u_{x_i} \nu_i dS - \int_\Omega \sum_i u_{x_i} v_{x_i} dx
                             = \int_{\partial\Omega} v du/dn dS - \int_\Omega \sum_i u_{x_i} v_{x_i} dx.

Integrating once more by parts, we obtain

    \int_\Omega v \Delta u dx = \int_\Omega u \Delta v dx + \int_{\partial\Omega} ( v du/dn - u dv/dn ) dS.   (2.4.7)
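In one dimension (\Omega = (0, 1), \Delta = d^2/dx^2) the last identity reduces to \int_0^1 (v u'' - u v'') dx = [v u' - u v']_0^1. A quick symbolic check with arbitrarily chosen smooth test functions (sympy assumed available):

```python
import sympy as sp

x = sp.symbols('x')
u = x**3            # arbitrary smooth test functions, chosen for illustration
v = sp.sin(x)
lhs = sp.integrate(v * u.diff(x, 2) - u * v.diff(x, 2), (x, 0, 1))
bdry = v * u.diff(x) - u * v.diff(x)      # the term M = v du/dn - u dv/dn
rhs = bdry.subs(x, 1) - bdry.subs(x, 0)   # exterior normal: +d/dx at 1, -d/dx at 0
print(sp.simplify(lhs - rhs))  # -> 0
```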
2.4.2 The Uniqueness theorem of Holmgren

The Cauchy-Kowalevski theorem does not exclude the possibility that nonanalytic solutions of the same Cauchy problem might exist; furthermore, it works only for analytic Cauchy data. However, uniqueness can be proved for the Cauchy problem for a linear equation with analytic coefficients and for data (not necessarily analytic) prescribed on an analytic noncharacteristic surface S. The proof (due to Holmgren) makes use of the Cauchy-Kowalevski theorem and the Lagrange-Green identity. Let u be a solution of a first-order system

    Lu = \sum_{k=1}^n a^k(x) \partial u/\partial x_k + b(x) u = 0   (2.4.8)
in a "lens-shaped" region R bounded by two hypersurfaces S and Z. Here x \in IR^n, u \in IR^N, and the a^k, b are N x N matrices. Assume that u has Cauchy data u = 0 on Z and that S is non-characteristic; that is, the matrix

    A = \sum_{k=1}^n a^k(x) \nu_k   (2.4.9)

is non-degenerate for x \in S, where \nu is the unit normal of S at x. Let v be a solution of the adjoint equation

    L^* v = - \sum_{k=1}^n \partial/\partial x_k ( (a^k)^T v ) + b^T v = 0   (2.4.10)

(T for transposition) with Cauchy data

    v = w(x)   for x \in S.   (2.4.11)

Applying the Lagrange-Green identity, we find that

    \int_S w^T A u dS = 0.   (2.4.12)

Let now \Gamma be the set of functions w on S for which the Cauchy problem (2.4.10)-(2.4.11) has a solution v. If \Gamma is dense in C^0(S), we conclude that (2.4.12) holds for every w \in C^0(S). But then Au = 0 on S, and hence also, since A is non-degenerate, u = 0 on S. For if Au \ne 0 for some z \in S, then also Au \ne 0 for all x in a neighborhood \omega of z on S. We can find a continuous non-negative scalar function \eta(x) on S with support in \omega and with \eta(z) > 0. Then

    \int_S \eta (Au)^T (Au) dS > 0
for w = \eta (Au), contrary to (2.4.12). Now in the case where the matrices a^k and b are real analytic, and S and w are real analytic, the Cauchy-Kowalevski theorem guarantees the existence of a solution v of L^* v = 0 with v = w on S only in a sufficiently small neighborhood of S, and so we cannot be sure that this neighborhood includes all of R. To bridge the gap between S and Z and to conclude that u = 0 throughout R, we have to cover all of R by an analytic family of non-characteristic surfaces S_\lambda.
Definition 2.4.1 A family of hypersurfaces S_\lambda in IR^n, with parameter \lambda ranging over an open interval \Lambda = (a, b), forms an analytic field if the S_\lambda can be transformed bi-analytically into the cross sections of a cylinder whose base \Omega is the unit ball in IR^{n-1}. This means that there exists a 1-1 mapping F : \Omega x \Lambda \to IR^n, where x = F(y) is real analytic in \Omega x \Lambda and has non-vanishing Jacobian; the S_\lambda for \lambda \in \Lambda are the sets

    S_\lambda = { x : x = F(y), (y_1, ..., y_{n-1}) \in \Omega, y_n = \lambda }.   (2.4.13)
Our conditions imply that the set

    \Sigma = \bigcup_{\lambda \in \Lambda} S_\lambda,   (2.4.14)

called the support of the field, is open, that the transformation x = F(y) has a real analytic inverse y = G(x) mapping \Sigma onto \Omega x \Lambda, and in particular that \lambda(x) = G_n(x) is real analytic in \Sigma.

Uniqueness theorem. Let the S_\lambda for \lambda \in \Lambda = (a, b) form an analytic field in IR^n with support \Sigma. Consider the mth-order linear system

    Lu = \sum_{|\alpha| \le m} A_\alpha(x) D^\alpha u = 0,   (2.4.15)

where x \in IR^n, u \in IR^N, and the coefficient matrices A_\alpha(x) are real analytic in \Sigma. Introduce the sets

    R = { x \in \Sigma : x_n \ge 0 },   (2.4.16)
    Z = { x \in \Sigma : x_n = 0 },   (2.4.17)

and, for \lambda \in \Lambda,

    \Sigma_\lambda = { x \in \Sigma : x \in S_\mu for some \mu with a < \mu \le \lambda }.   (2.4.18)

We assume that Z and all S_\lambda are non-characteristic with respect to L, and that \Sigma_\lambda \cap R for any \lambda \in \Lambda is a closed subset of the open set \Sigma. Let u be a solution of (2.4.15) of class C^m(R) having vanishing Cauchy data on Z. Then u = 0 in R.
Chapter 3

Hyperbolic Equations

In this chapter we shall consider the equation

    u_tt = a^2 u_xx + f(x, t)   for x \in IR^1, t \in IR^+.
3.1 Boundary and initial conditions

Consider the equation of small transverse vibrations of a string,

    u_tt = a^2 u_xx + f(x, t)   (3.1.1)

for 0 \le x \le \ell. If the ends of the string are fixed, then the boundary conditions

    u(0, t) = 0,   u(\ell, t) = 0   (3.1.2)

must be satisfied. Furthermore, the initial conditions, i.e. the form and the velocity \partial u/\partial t at the moment when the process begins, say at t_0, are given:

    u(x, t_0) = \varphi(x),   (3.1.3)
    u_t(x, t_0) = \psi(x).   (3.1.4)

These conditions must be compatible with the conditions (3.1.2) at the ends of [0, \ell]. Here \varphi and \psi are given functions. Later we shall prove that these conditions fully determine the solution of the equation (3.1.1). If the ends of the string move according to a given law, then the boundary conditions have the form

    u(0, t) = \mu_1(t),   u(\ell, t) = \mu_2(t),   (3.1.2')

where \mu_1(t) and \mu_2(t) are given. The boundary conditions can also have other forms. For example, if one end of the string is fixed, say at x = 0, and the other end is free, then we have

    u(0, t) = 0,   (3.1.5)
and at the free end the elastic tension

    T(\ell, t) = k \partial u/\partial x |_{x=\ell}

is zero (no external force; k(x) is the Young modulus at the point x). Thus,

    u_x(\ell, t) = 0.   (3.1.6)

If the end x = 0 moves according to a certain law \mu(t), while at x = \ell a given force \nu(t) is acting, then

    u(0, t) = \mu(t),   u_x(\ell, t) = \nu(t)/k.

Another type of boundary condition is

    u_x(\ell, t) = -h [ u(\ell, t) - \theta(t) ]   (3.1.7)

(elastic fastening). Thus, we have the following types of boundary conditions:

    first kind    u(0, t) = \mu(t),
    second kind   u_x(0, t) = \nu(t),
    third kind    u_x(0, t) = h [ u(0, t) - \theta(t) ].

There are also other kinds, e.g. u_x(\ell, t) = k^{-1} F[u(\ell, t)], etc. We do not pursue this direction here, but refer to the book by Tikhonov and Samarskii [22]. If the time interval is small, then the boundary conditions do not have much influence on the vibration, and so we can consider the problem as the limit case of the initial value problem in an unbounded domain: find a solution of the equation

    u_tt = a^2 u_xx + f(x, t),   -\infty < x < \infty, t > 0,

with the initial deformation

    u(x, t)|_{t=0} = \varphi(x),   -\infty < x < \infty,

and the initial velocity

    u_t(x, t)|_{t=0} = \psi(x),   -\infty < x < \infty.

This is in fact the Cauchy problem for a hyperbolic equation. If one wants to study the behaviour of the string near one end, assuming that the influence of the boundary condition at the other end is small, then we have, e.g.,

    u(0, t) = \mu(t),   t \ge 0,
    u(x, 0) = \varphi(x),   0 \le x < \infty,
    u_t(x, 0) = \psi(x),   0 \le x < \infty.

For a concrete vibration process we have to find the appropriate boundary conditions for it.
3.2 The uniqueness

Theorem 3.2.1 The differential equation

    \rho(x) \partial^2 u/\partial t^2 = \partial/\partial x ( k(x) \partial u/\partial x ) + f(x, t),   0 < x < \ell, t > 0,   (3.2.1)

has no more than one solution which satisfies the initial and boundary conditions

    u(x, 0) = \varphi(x),   u_t(x, 0) = \psi(x),   (3.2.2)
    u(0, t) = \mu_1(t),   u(\ell, t) = \mu_2(t).

Here we assume that the function u(x, t) and its first and second derivatives are continuous in the interval 0 \le x \le \ell for t \ge 0, and that \rho(x) > 0 and k(x) > 0 are continuous, given functions.
Proof: Suppose that there exist two solutions u_1(x, t), u_2(x, t) of our problem. Then the difference

    v(x, t) = u_1(x, t) - u_2(x, t)

satisfies the homogeneous equation

    \rho \partial^2 v/\partial t^2 = \partial/\partial x ( k \partial v/\partial x )   (3.2.3)

and the homogeneous conditions

    v(x, 0) = 0,   v_t(x, 0) = 0,   v(0, t) = 0,   v(\ell, t) = 0.   (3.2.4)

We shall prove that under the conditions (3.2.4) v is identically zero. For this purpose we consider the function

    E(t) := (1/2) \int_0^\ell { k (v_x)^2 + \rho (v_t)^2 } dx   (3.2.5)

and prove that it is independent of t. The function E(t) has a physical meaning: it is the total energy of the string at the moment t. Since v is twice continuously differentiable, we can differentiate E(t) and get

    dE(t)/dt = \int_0^\ell ( k v_x v_{xt} + \rho v_t v_{tt} ) dx.   (3.2.6)

An integration by parts in the first term of the right-hand side gives

    \int_0^\ell k v_x v_{xt} dx = [ k v_x v_t ]_0^\ell - \int_0^\ell v_t (k v_x)_x dx = - \int_0^\ell v_t (k v_x)_x dx.   (3.2.7)
(Since v(0, t) = 0 it follows that v_t(0, t) = 0, and v(\ell, t) = 0 implies v_t(\ell, t) = 0.) Thus,

    dE(t)/dt = \int_0^\ell [ \rho v_t v_{tt} - v_t (k v_x)_x ] dx = \int_0^\ell [ \rho v_{tt} - (k v_x)_x ] v_t dx = 0.

This means E(t) = const. Furthermore, since v(x, 0) = 0, v_t(x, 0) = 0, we have

    E(t) = const = E(0) = (1/2) \int_0^\ell [ k (v_x)^2 + \rho (v_t)^2 ]|_{t=0} dx = 0.   (3.2.8)

On the other hand, the functions \rho(x) and k(x) are positive, and it follows from (3.2.5), (3.2.8) that

    v_x(x, t) \equiv 0,   v_t(x, t) \equiv 0.

Hence

    v(x, t) = const = v(x, 0) \equiv 0.

The same result remains valid for the second boundary value problem, where u_x(0, t) = \mu_1(t) and u_x(\ell, t) = \mu_2(t), because in this case v_x(0, t) = v_x(\ell, t) = 0. The proof for the third boundary value problem

    u_x(0, t) - h_1 u(0, t) = \mu_1(t),   h_1 \ge 0,
    u_x(\ell, t) + h_2 u(\ell, t) = \mu_2(t),   h_2 \ge 0,

is a little different. In this case

    v_x(0, t) - h_1 v(0, t) = 0,   v_x(\ell, t) + h_2 v(\ell, t) = 0,

and in (3.2.7) we obtain

    [ k v_x v_t ]_0^\ell = - (k/2) \partial/\partial t [ h_2 v^2(\ell, t) + h_1 v^2(0, t) ].

Integrating dE/dt from 0 to t gives

    E(t) - E(0) = \int_0^t \int_0^\ell v_t [ \rho v_{tt} - (k v_x)_x ] dx dt
                - (k/2) { h_2 [ v^2(\ell, t) - v^2(\ell, 0) ] + h_1 [ v^2(0, t) - v^2(0, 0) ] }.

Taking into account that

    E(0) = (1/2) \int_0^\ell { k v_x^2 + \rho v_t^2 }|_{t=0} dx = 0

and v(\ell, 0) = v(0, 0) = 0, we get

    E(t) = - (k/2) [ h_2 v^2(\ell, t) + h_1 v^2(0, t) ] \le 0.

On the other hand E(t) \ge 0; it follows that E(t) \equiv 0 and so v(x, t) \equiv 0.   \Box
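The conservation of the energy (3.2.5) can be seen numerically on an explicit solution. A minimal sketch (\rho = k = 1, \ell = 1, and v(x, t) = sin(\pi x) cos(\pi t) is an illustrative choice solving v_tt = v_xx with fixed ends):

```python
from math import sin, cos, pi

# Energy E(t) = (1/2) int_0^1 (v_x^2 + v_t^2) dx for v = sin(pi x) cos(pi t);
# E(t) should be constant in t (here identically pi^2/4).
def energy(t, n=2000):
    h = 1.0 / n
    E = 0.0
    for i in range(n):
        x = (i + 0.5) * h                       # midpoint quadrature rule
        vx = pi * cos(pi * x) * cos(pi * t)
        vt = -pi * sin(pi * x) * sin(pi * t)
        E += 0.5 * (vx * vx + vt * vt) * h
    return E

vals = [energy(t / 10.0) for t in range(11)]
print(max(vals) - min(vals) < 1e-9)  # -> True: the energy is conserved
```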
3.3 The method of wave propagation (Wellenausbreitungsmethode)

3.3.1 The D'Alembert method

We consider the problem

    u_tt - a^2 u_xx = 0,   -\infty < x < \infty, t > 0,   (3.3.1)
    u(x, 0) = \varphi(x),   u_t(x, 0) = \psi(x),   -\infty < x < \infty.   (3.3.2)

First, we transform the equation (3.3.1) into the canonical form, in which only the mixed derivative appears. The characteristic equation

    dx^2 - a^2 dt^2 = 0

is equivalent to

    dx - a dt = 0,   dx + a dt = 0,

the integrals of which are

    x - at = C_1,   x + at = C_2.

Introducing the new variables

    \xi = x + at,   \eta = x - at,

we get

    u_x = u_\xi + u_\eta,   u_xx = u_{\xi\xi} + 2 u_{\xi\eta} + u_{\eta\eta},
    u_t = a ( u_\xi - u_\eta ),   u_tt = a^2 ( u_{\xi\xi} - 2 u_{\xi\eta} + u_{\eta\eta} ).

Thus, the equation (3.3.1) is transformed into

    u_{\xi\eta} = 0.   (3.3.3)

It is clear that every solution of Eq. (3.3.3) satisfies

    u_\eta(\xi, \eta) = f(\eta),

where f(\eta) is a function of \eta alone. Integrating this equation, we obtain

    u(\xi, \eta) = \int f(\eta) d\eta + f_1(\xi) = f_1(\xi) + f_2(\eta),   (3.3.4)

where f_1 depends only on \xi and f_2 depends only on \eta. Conversely, if f_1 and f_2 are arbitrary differentiable functions, then the function u(\xi, \eta) defined by (3.3.4) is a solution of (3.3.3). It follows that

    u(x, t) = f_1(x + at) + f_2(x - at)   (3.3.5)

is the general solution of (3.3.1). Assume that there exists a solution of our Cauchy problem (3.3.1), (3.3.2); then it is given by (3.3.5). The functions f_1 and f_2 are determined by

    u(x, 0) = f_1(x) + f_2(x) = \varphi(x),   (3.3.6)
    u_t(x, 0) = a f_1'(x) - a f_2'(x) = \psi(x).   (3.3.7)
From (3.3.7) we have
    f_1(x) - f_2(x) = (1/a) \int_{x_0}^x \psi(\xi) d\xi + c,

where x_0 and c are some constants. The last equation and (3.3.6) yield

    f_1(x) = (1/2) \varphi(x) + (1/(2a)) \int_{x_0}^x \psi(\xi) d\xi + c/2,
    f_2(x) = (1/2) \varphi(x) - (1/(2a)) \int_{x_0}^x \psi(\xi) d\xi - c/2.   (3.3.8)

It follows that

    u(x, t) = ( \varphi(x + at) + \varphi(x - at) ) / 2 + (1/(2a)) ( \int_{x_0}^{x+at} \psi(\xi) d\xi - \int_{x_0}^{x-at} \psi(\xi) d\xi ),

or

    u(x, t) = ( \varphi(x + at) + \varphi(x - at) ) / 2 + (1/(2a)) \int_{x-at}^{x+at} \psi(\xi) d\xi.   (3.3.9)

The formula (3.3.9) is called D'Alembert's formula. It proves the uniqueness of the solution of the problem (3.3.1), (3.3.2). It is also clear that if \varphi is twice differentiable and \psi is once differentiable, then (3.3.9) is a solution of (3.3.1), (3.3.2). Thus the method of D'Alembert proves not only the uniqueness of the solution but also its existence.
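D'Alembert's formula can be verified symbolically: the expression satisfies the wave equation and reproduces the Cauchy data. A sketch (sympy assumed available; the particular data \varphi = e^{-x^2}, \psi = sin x are an arbitrary smooth choice):

```python
import sympy as sp

x, t, a, s = sp.symbols('x t a s', positive=True)
phi = sp.exp(-x**2)     # illustrative Cauchy data
psi = sp.sin(x)

# D'Alembert's formula (3.3.9)
u = (phi.subs(x, x + a*t) + phi.subs(x, x - a*t)) / 2 \
    + sp.integrate(psi.subs(x, s), (s, x - a*t, x + a*t)) / (2*a)

pde = sp.simplify(sp.diff(u, t, 2) - a**2 * sp.diff(u, x, 2))
ic1 = sp.simplify(u.subs(t, 0) - phi)
ic2 = sp.simplify(sp.diff(u, t).subs(t, 0) - psi)
print(pde, ic1, ic2)  # -> 0 0 0
```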
Let (x_0, t_0) be a point in the (x, t)-plane, t_0 \ge 0. There are two characteristics passing through this point M = (x_0, t_0). Denoting the intersections of these characteristics with the line t = 0 by P and Q, respectively, we see that in order to determine u(x_0, t_0) we only need \varphi and \psi on [P, Q]. The triangle MPQ is called the characteristic triangle of the point M.
3.3.2 The stability of the solution

The solution of Eq. (3.3.1) is uniquely determined by the conditions (3.3.2). We shall prove that it depends continuously on the Cauchy data. Namely, we prove the following result.

Theorem 3.3.1 For every time interval [0, t_0] and every \varepsilon > 0 there exists a number \delta = \delta(\varepsilon, t_0) such that if

    |\varphi| < \delta,   |\psi| < \delta,

then

    |u(x, t)| < \varepsilon   for 0 \le t \le t_0.
Remark 3.3.2 Let u_i(x, t) be the solutions of (3.3.1) with the Cauchy data \varphi_i and \psi_i, i = 1, 2. The theorem says that for every time interval [0, t_0] and every \varepsilon > 0 there exists a number \delta = \delta(\varepsilon, t_0) such that if

    |\varphi_1 - \varphi_2| < \delta,   |\psi_1 - \psi_2| < \delta,

then

    |u_1(x, t) - u_2(x, t)| < \varepsilon   for 0 \le t \le t_0.

Proof: From the D'Alembert formula (3.3.9) we have

    |u(x, t)| \le ( |\varphi(x + at)| + |\varphi(x - at)| ) / 2 + (1/(2a)) \int_{x-at}^{x+at} |\psi(\xi)| d\xi
             < \delta + (1/(2a)) 2at \delta \le \delta (1 + t_0).

Thus, if we take \delta = \varepsilon/(1 + t_0), then |u(x, t)| < \varepsilon.   \Box

A boundary value problem is called well-posed if
1. it has a solution (existence);
2. this solution is unique (uniqueness);
3. the solution depends continuously on the initial and boundary conditions (stability).

If one of these three conditions is not met, we say that the problem is ill-posed. For our Cauchy problem we know that

    u(x, t) = ( \varphi(x + at) + \varphi(x - at) ) / 2 + (1/(2a)) \int_{x-at}^{x+at} \psi(\xi) d\xi

is the unique solution if \varphi \in C^2, \psi \in C^1, and this solution is stable in the sup-norm. Thus our problem is well-posed in this class. However, if \varphi \notin C^2 or \psi \notin C^1, then the D'Alembert formula gives no classical solution of our Cauchy problem. Nevertheless, we have proved that the function u(x, t) defined by (3.3.9) is stable for any \varphi and \psi. Thus, if \varphi \notin C^2, \psi \notin C^1, we can approximate them by \varphi_\varepsilon \in C^2, \psi_\varepsilon \in C^1 and obtain a solution u_\varepsilon defined by (3.3.9) with respect to these data, such that |u - u_\varepsilon| < \varepsilon. Letting \varepsilon \to 0, we have u_\varepsilon \to u. We say that u is a generalized solution of our Cauchy problem with respect to the non-smooth Cauchy data \varphi and \psi.
Another example of ill-posedness

The Cauchy problem for the Laplace equation has the form

    u_xx + u_yy = 0,   -\infty < x < \infty, y > 0,
    u(x, 0) = \varphi(x),   u_y(x, 0) = \psi(x),   -\infty < x < \infty.

The functions
    u^{(1)}(x, y) = 0,   u^{(2)}(x, y) = (1/\lambda) \sin(\lambda x) \cosh(\lambda y)

satisfy the Laplace equation; furthermore,

    u^{(1)}(x, 0) = 0,   u^{(2)}(x, 0) = \varphi(x) = (1/\lambda) \sin(\lambda x),
    u^{(1)}_y(x, 0) = 0,   u^{(2)}_y(x, 0) = \psi(x) = 0.

If \lambda \to \infty, then |u^{(1)}(x, 0) - u^{(2)}(x, 0)| \to 0; however, for any fixed y > 0,

    \sup_x | u^{(1)}(x, y) - u^{(2)}(x, y) | = (1/\lambda) \cosh(\lambda y) \to \infty.

Thus, the Cauchy problem for the Laplace equation is ill-posed.
3.3.3 The reflection method

Consider the problem on the half axis x > 0:

    u_tt - a^2 u_xx = 0,   0 < x < \infty, t > 0,
    u(0, t) = \mu(t)   (or u_x(0, t) = \nu(t)),
    u(x, 0) = \varphi(x),   0 < x < \infty,
    u_t(x, 0) = \psi(x),   0 < x < \infty.

For simplicity, suppose first that u(0, t) = 0 (or u_x(0, t) = 0).

Lemma 3.3.3 Consider the problem (3.3.1), (3.3.2).

1. If the Cauchy data \varphi and \psi are odd functions with respect to some point x_0, then the corresponding solution u(x, t) is equal to zero at this point.

2. If the Cauchy data \varphi and \psi are even functions with respect to x_0, then the derivative of u with respect to x is equal to zero at this point.

Proof:

1. We can suppose that x_0 = 0, and thus \varphi(x) = -\varphi(-x), \psi(x) = -\psi(-x). For x = 0 we have

    u(0, t) = ( \varphi(at) + \varphi(-at) ) / 2 + (1/(2a)) \int_{-at}^{at} \psi(\xi) d\xi = 0.
2. If \varphi(x) = \varphi(-x), \psi(x) = \psi(-x), then

    u_x(0, t) = ( \varphi'(at) + \varphi'(-at) ) / 2 + (1/(2a)) [ \psi(at) - \psi(-at) ] = 0.   \Box
Now we consider the problem

    u_tt = a^2 u_xx,   0 < x < \infty, t > 0,
    u(x, 0) = \varphi(x),   0 < x < \infty,
    u_t(x, 0) = \psi(x),   0 < x < \infty,
    u(0, t) = 0,   t > 0.

Let

    \Phi(x) = \varphi(x) for x > 0,   \Phi(x) = -\varphi(-x) for x < 0,
    \Psi(x) = \psi(x) for x > 0,   \Psi(x) = -\psi(-x) for x < 0

be the odd extensions of \varphi and \psi, respectively. It is clear that the function

    u(x, t) = ( \Phi(x + at) + \Phi(x - at) ) / 2 + (1/(2a)) \int_{x-at}^{x+at} \Psi(\xi) d\xi

is well defined. Furthermore, taking Lemma 3.3.3 into account, we see that

    u(0, t) = 0.

Besides, for x > 0,

    u(x, 0) = \Phi(x) = \varphi(x),   u_t(x, 0) = \Psi(x) = \psi(x).

Thus u(x, t) is a solution of our problem. We can write the solution in the following way:

    u(x, t) = ( \varphi(x + at) + \varphi(x - at) ) / 2 + (1/(2a)) \int_{x-at}^{x+at} \psi(\xi) d\xi   for t < x/a, x > 0,

    u(x, t) = ( \varphi(x + at) - \varphi(at - x) ) / 2 + (1/(2a)) \int_{at-x}^{x+at} \psi(\xi) d\xi   for t > x/a, x > 0.
In the domain t < x/a the boundary condition has no influence on the solution, and it coincides with the solution of (3.3.1) on (-\infty, \infty). We consider now the case

    u_x(0, t) = 0.

For this purpose we take the even extensions of \varphi and \psi:

    \Phi(x) = \varphi(x) for x > 0,   \Phi(x) = \varphi(-x) for x < 0,
    \Psi(x) = \psi(x) for x > 0,   \Psi(x) = \psi(-x) for x < 0.

As a solution of the equation (3.3.1) we have

    u(x, t) = ( \Phi(x + at) + \Phi(x - at) ) / 2 + (1/(2a)) \int_{x-at}^{x+at} \Psi(\xi) d\xi,

or

    u(x, t) = ( \varphi(x + at) + \varphi(x - at) ) / 2 + (1/(2a)) \int_{x-at}^{x+at} \psi(\xi) d\xi   for t < x/a,

    u(x, t) = ( \varphi(x + at) + \varphi(at - x) ) / 2 + (1/(2a)) ( \int_0^{x+at} \psi(\xi) d\xi + \int_0^{at-x} \psi(\xi) d\xi )   for t > x/a.

It is clear that u(x, t) satisfies the equation, the initial conditions, and the boundary condition u_x(0, t) = 0. It remains to consider the non-homogeneous cases
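The mechanism of the reflection method can be checked numerically: with odd extensions of the data, the D'Alembert formula yields u(0, t) = 0 automatically. A minimal sketch (the data \varphi, \psi are arbitrary smooth choices with \varphi(0) = \psi(0) = 0):

```python
from math import exp

# Half-line problem with fixed end: evaluate D'Alembert's formula on the
# odd extensions Phi, Psi and check the boundary condition u(0, t) = 0.
a = 1.0
phi = lambda x: x * exp(-x * x)
psi = lambda x: x / (1 + x * x)

Phi = lambda x: phi(x) if x >= 0 else -phi(-x)   # odd extensions
Psi = lambda x: psi(x) if x >= 0 else -psi(-x)

def u(x, t, n=4000):
    lo, hi = x - a * t, x + a * t
    h = (hi - lo) / n                            # midpoint quadrature
    integral = sum(Psi(lo + (i + 0.5) * h) for i in range(n)) * h
    return 0.5 * (Phi(x + a * t) + Phi(x - a * t)) + integral / (2 * a)

print(all(abs(u(0.0, t)) < 1e-9 for t in (0.5, 1.0, 2.0)))  # -> True
```

The even extensions play the same role for the free-end condition u_x(0, t) = 0.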
    u(0, t) = \mu(t) \ne 0   and   u_x(0, t) = \nu(t) \ne 0.

Let us consider the case of homogeneous Cauchy data:

    u(x, 0) = 0,   u_t(x, 0) = 0,   u(0, t) = \mu(t),   t > 0.

The general solution of the equation has the form

    u(x, t) = f_1(x + at) + f_2(x - at).

Since u(x, 0) = 0, u_t(x, 0) = 0, we see that f_1(s) = -f_2(s) = c for s \ge 0, where c is a constant. From the boundary condition,

    u(0, t) = \mu(t) = f_1(at) + f_2(-at) = f_2(-at) + c,   t > 0.

Thus f_2(s) = \mu(-s/a) - c for s < 0. Therefore, if x - at < 0, i.e. t > x/a, we have

    u(x, t) = f_1(x + at) + f_2(x - at) = c + \mu( t - x/a ) - c = \mu( t - x/a ).
If x - at \ge 0, i.e. t < x/a, then

    u(x, t) = f_1(x + at) + f_2(x - at) = c - c = 0.

Now it is easily seen that the solution of our boundary value problem with general Cauchy data is

    u(x, t) = ( \varphi(x + at) + \varphi(x - at) ) / 2 + (1/(2a)) \int_{x-at}^{x+at} \psi(\xi) d\xi   for t < x/a,

    u(x, t) = \mu( t - x/a ) + ( \varphi(x + at) - \varphi(at - x) ) / 2 + (1/(2a)) \int_{at-x}^{x+at} \psi(\xi) d\xi   for t > x/a.
3.4 The Fourier method

3.4.1 Free vibration of a string

Consider the problem

    u_tt = a^2 u_xx,   0 < x < \ell, t > 0,   (3.4.1)
    u(0, t) = u(\ell, t) = 0,   t > 0,   (3.4.2)
    u(x, 0) = \varphi(x),   u_t(x, 0) = \psi(x),   0 \le x \le \ell.   (3.4.3)

The equation (3.4.1) is linear and homogeneous, and therefore the sum of its special solutions is again a solution. We shall try to find the solution of our problem (3.4.1)-(3.4.3) as a sum of special solutions with appropriate coefficients. For this purpose we consider the auxiliary problem: find a solution of the equation (3.4.1) which does not vanish identically, satisfies the homogeneous boundary conditions (3.4.2), and can be represented in the form

    u(x, t) = X(x) T(t).   (3.4.4)

Here X depends only on x and T depends only on t. Putting (3.4.4) into (3.4.1) we get

    X'' T = (1/a^2) T'' X,   (3.4.5)

or, dividing by XT,

    X''(x)/X(x) = (1/a^2) T''(t)/T(t).   (3.4.6)

The function (3.4.4) is a solution of (3.4.1) if (3.4.5), or equivalently (3.4.6), is satisfied. The right-hand side of (3.4.6) depends only on t, while the left-hand side depends only on x. It follows that both must be equal to a constant, say -\lambda:

    X''(x)/X(x) = (1/a^2) T''(t)/T(t) = -\lambda;   (3.4.7)

here we put the minus sign before \lambda only for convenience. From (3.4.7) we obtain two ordinary differential equations for X and T:

    X''(x) + \lambda X(x) = 0,   (3.4.8)
    T''(t) + a^2 \lambda T(t) = 0.   (3.4.9)

The boundary conditions (3.4.2) give

    u(0, t) = X(0) T(t) = 0,   u(\ell, t) = X(\ell) T(t) = 0.

It follows that

    X(0) = X(\ell) = 0,   (3.4.10)

for otherwise T(t) \equiv 0 and u(x, t) \equiv 0, so that u would not be a non-trivial solution of our problem. Thus, in order to find X(x) we get an eigenvalue problem: find \lambda such that there exists a non-trivial solution of the problem

    X'' + \lambda X = 0,   X(0) = X(\ell) = 0,   (3.4.11)

and find the solutions corresponding to these \lambda. The values \lambda for which a non-trivial solution exists are called eigenvalues, and the corresponding solutions are called eigenfunctions. In what follows we distinguish three cases:

1. For \lambda < 0 the problem has no non-trivial solution. In fact, the general solution of (3.4.11) has the form

    X(x) = C_1 e^{\sqrt{-\lambda} x} + C_2 e^{-\sqrt{-\lambda} x}.

This solution must satisfy the boundary conditions:

    X(0) = C_1 + C_2 = 0,
    X(\ell) = C_1 e^{\ell \sqrt{-\lambda}} + C_2 e^{-\ell \sqrt{-\lambda}} = 0.

Thus,

    C_1 = -C_2   and   C_1 ( e^{\ell \sqrt{-\lambda}} - e^{-\ell \sqrt{-\lambda}} ) = 0.

Because \lambda < 0, \sqrt{-\lambda} is real and positive, and so e^{\ell \sqrt{-\lambda}} - e^{-\ell \sqrt{-\lambda}} \ne 0. Hence C_1 = C_2 = 0 and X(x) \equiv 0.
3.4. THE FOURIER METHOD
2. For \lambda = 0 there exists no non-trivial solution either, since the general solution is

  X(x) = ax + b,

and the boundary conditions give X(0) = b = 0 and X(\ell) = a\ell = 0. Thus a = b = 0 and X(x) \equiv 0.

3. For \lambda > 0 the general solution has the form

  X(x) = D_1 \cos \sqrt{\lambda}\,x + D_2 \sin \sqrt{\lambda}\,x.

The boundary conditions give

  X(0) = D_1 = 0, \qquad X(\ell) = D_2 \sin \sqrt{\lambda}\,\ell = 0.

Since X(x) is not identically vanishing, D_2 \neq 0, and therefore

  \sin \sqrt{\lambda}\,\ell = 0,  (3.4.12)

that is,

  \sqrt{\lambda} = \frac{n\pi}{\ell}, \quad n \in \mathbb{Z}.

Thus a non-trivial solution is possible only for the eigenvalues

  \lambda_n = \left( \frac{n\pi}{\ell} \right)^2, \quad n \in \mathbb{N},  (3.4.13)

to which correspond the eigenfunctions

  X_n(x) = \sin \frac{n\pi}{\ell} x,  (3.4.14)

uniquely defined up to a constant coefficient. For these \lambda_n the solutions of (3.4.9) are

  T_n(t) = A_n \cos \frac{n\pi a}{\ell} t + B_n \sin \frac{n\pi a}{\ell} t,  (3.4.15)

where A_n and B_n are still to be determined. It follows that the functions

  u_n(x,t) = X_n(x) T_n(t) = \left( A_n \cos \frac{n\pi a}{\ell} t + B_n \sin \frac{n\pi a}{\ell} t \right) \sin \frac{n\pi}{\ell} x  (3.4.16)
are special solutions of (3.4.1) which satisfy the boundary conditions (3.4.2). Since (3.4.1) is linear and homogeneous, the function

  u(x,t) = \sum_{n=1}^{\infty} u_n(x,t) = \sum_{n=1}^{\infty} \left( A_n \cos \frac{n\pi a}{\ell} t + B_n \sin \frac{n\pi a}{\ell} t \right) \sin \frac{n\pi}{\ell} x,  (3.4.17)

provided it converges and can be differentiated termwise twice with respect to x and t, is a solution of (3.4.1) and satisfies the boundary conditions (3.4.2). (We shall come back to this question in the next paragraph.) The initial conditions give

  u(x,0) = \varphi(x) = \sum_{n=1}^{\infty} u_n(x,0) = \sum_{n=1}^{\infty} A_n \sin \frac{n\pi}{\ell} x,
  u_t(x,0) = \psi(x) = \sum_{n=1}^{\infty} \frac{\partial u_n}{\partial t}(x,0) = \sum_{n=1}^{\infty} \frac{n\pi a}{\ell} B_n \sin \frac{n\pi}{\ell} x.  (3.4.18)

Let \varphi and \psi be piecewise continuous and differentiable; then

  \varphi(x) = \sum_{n=1}^{\infty} \varphi_n \sin \frac{n\pi}{\ell} x, \qquad \varphi_n = \frac{2}{\ell} \int_0^{\ell} \varphi(\xi) \sin \frac{n\pi}{\ell} \xi \, d\xi,  (3.4.19)
  \psi(x) = \sum_{n=1}^{\infty} \psi_n \sin \frac{n\pi}{\ell} x, \qquad \psi_n = \frac{2}{\ell} \int_0^{\ell} \psi(\xi) \sin \frac{n\pi}{\ell} \xi \, d\xi.  (3.4.20)

A comparison of these two series with (3.4.18) gives

  A_n = \varphi_n, \qquad B_n = \frac{\ell}{n\pi a} \psi_n.  (3.4.21)

Thus the series (3.4.17) is completely determined.
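As a quick numerical illustration of (3.4.17)-(3.4.21), the following Python sketch computes the coefficients by quadrature and sums the truncated series. The trapezoidal rule, the truncation level N and all names here are our own illustrative choices, not part of the notes:

```python
import math

def sine_coeffs(g, ell, N, m=2000):
    # g_n = (2/ell) * int_0^ell g(xi) sin(n*pi*xi/ell) dxi, as in (3.4.19)/(3.4.20),
    # computed with the composite trapezoidal rule on m subintervals.
    h = ell / m
    out = []
    for n in range(1, N + 1):
        s = 0.0
        for j in range(m + 1):
            xi = j * h
            w = 0.5 if j in (0, m) else 1.0
            s += w * g(xi) * math.sin(n * math.pi * xi / ell)
        out.append(2.0 / ell * s * h)
    return out

def solve_string(phi, psi, a, ell, N=30):
    # Truncated series (3.4.17) with A_n = phi_n, B_n = ell*psi_n/(n*pi*a), cf. (3.4.21).
    A = sine_coeffs(phi, ell, N)
    B = [ell * pn / (n * math.pi * a)
         for n, pn in enumerate(sine_coeffs(psi, ell, N), start=1)]
    def u(x, t):
        return sum((A[n-1] * math.cos(n * math.pi * a * t / ell)
                    + B[n-1] * math.sin(n * math.pi * a * t / ell))
                   * math.sin(n * math.pi * x / ell)
                   for n in range(1, N + 1))
    return u

# phi(x) = sin(pi*x/ell), psi = 0: only A_1 = 1 survives, and the series
# reduces to the standing wave u(x,t) = cos(pi*a*t/ell) * sin(pi*x/ell).
ell, a = 1.0, 2.0
u = solve_string(lambda x: math.sin(math.pi * x), lambda x: 0.0, a, ell)
```

The single-mode initial datum gives an easy correctness check, since the exact standing-wave solution is known in closed form.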
3.4.2 The proof of the Fourier method

Let L(u) be a linear differential operator; that is, L(u), the action of L on u, is a sum of derivatives of u with coefficients not depending on u.

Lemma 3.4.1 Let u_i (i = 1, 2, \dots) be special solutions of a linear homogeneous differential equation L(u) = 0. Then the series u = \sum_{i=1}^{\infty} C_i u_i is also a solution of this equation, provided the series may be differentiated termwise in all derivatives occurring in L(u).
We come back to our boundary value problem. First we have to prove the continuity of the function

  u(x,t) = \sum_{n=1}^{\infty} u_n(x,t) = \sum_{n=1}^{\infty} \left( A_n \cos \frac{n\pi a}{\ell} t + B_n \sin \frac{n\pi a}{\ell} t \right) \sin \frac{n\pi}{\ell} x.  (3.4.22)

As |u_n(x,t)| \le |A_n| + |B_n|, we conclude that if

  \sum_{n=1}^{\infty} \big( |A_n| + |B_n| \big)  (3.4.23)
converges, then the series (3.4.22) converges uniformly and u(x,t) is continuous. Analogously, in order to prove the continuity of u_t(x,t) we have to prove the uniform convergence of the series

  u_t(x,t) = \sum_{n=1}^{\infty} \frac{\partial u_n}{\partial t} = \sum_{n=1}^{\infty} \frac{n\pi a}{\ell} \left( -A_n \sin \frac{n\pi a}{\ell} t + B_n \cos \frac{n\pi a}{\ell} t \right) \sin \frac{n\pi}{\ell} x,  (3.4.24)

or the convergence of the majorant series

  \frac{\pi a}{\ell} \sum_{n=1}^{\infty} n \big( |A_n| + |B_n| \big).

We also have to prove the uniform convergence of the series

  u_{xx} = \sum_{n=1}^{\infty} \frac{\partial^2 u_n}{\partial x^2} = - \sum_{n=1}^{\infty} \left( \frac{n\pi}{\ell} \right)^2 \left( A_n \cos \frac{n\pi a}{\ell} t + B_n \sin \frac{n\pi a}{\ell} t \right) \sin \frac{n\pi}{\ell} x,
  u_{tt} = \sum_{n=1}^{\infty} \frac{\partial^2 u_n}{\partial t^2} = - \left( \frac{\pi a}{\ell} \right)^2 \sum_{n=1}^{\infty} n^2 \left( A_n \cos \frac{n\pi a}{\ell} t + B_n \sin \frac{n\pi a}{\ell} t \right) \sin \frac{n\pi}{\ell} x.

These correspond, up to a constant factor, to the majorant series

  \sum_{n=1}^{\infty} n^2 \big( |A_n| + |B_n| \big).  (3.4.25)

Since

  A_n = \varphi_n, \qquad B_n = \frac{\ell}{n\pi a} \psi_n,

where

  \varphi_n = \frac{2}{\ell} \int_0^{\ell} \varphi(x) \sin \frac{n\pi}{\ell} x \, dx, \qquad \psi_n = \frac{2}{\ell} \int_0^{\ell} \psi(x) \sin \frac{n\pi}{\ell} x \, dx,

our method is justified if we can prove the convergence of the series

  \sum_{n=1}^{\infty} n^k |\varphi_n| \ (k = 0, 1, 2), \qquad \sum_{n=1}^{\infty} n^k |\psi_n| \ (k = -1, 0, 1).
Results from the theory of Fourier series
Let F be a 2\ell-periodic function; then we can expand F into its Fourier series

  F(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty} \left( a_n \cos \frac{n\pi}{\ell} x + b_n \sin \frac{n\pi}{\ell} x \right),
  a_n = \frac{1}{\ell} \int_{-\ell}^{\ell} F(\xi) \cos \frac{n\pi}{\ell} \xi \, d\xi, \qquad b_n = \frac{1}{\ell} \int_{-\ell}^{\ell} F(\xi) \sin \frac{n\pi}{\ell} \xi \, d\xi.

If F(x) is odd, then a_n = 0 and

  F(x) = \sum_{n=1}^{\infty} b_n \sin \frac{n\pi}{\ell} x,
  b_n = \frac{1}{\ell} \int_{-\ell}^{\ell} F(\xi) \sin \frac{n\pi}{\ell} \xi \, d\xi = \frac{2}{\ell} \int_0^{\ell} F(\xi) \sin \frac{n\pi}{\ell} \xi \, d\xi.  (3.4.26)
If F(x) is defined only in (0, \ell), then we can extend F(x) oddly to (-\ell, \ell) and then use the above expansion. It is known that if F \in C^k and F^{(k)} is piecewise continuous, then the series

  \sum_{n=1}^{\infty} n^k \big( |a_n| + |b_n| \big)

converges. If a function f(x) is defined only in (0, \ell) and we extend it oddly to (-\ell, \ell), say to F(x), then for F(x) to be continuous, f(0) must vanish. Further, f(\ell) must also vanish, since F(x) must be 2\ell-periodic and continuous. The continuity of the first derivative at x = 0 and x = \ell is automatically satisfied. In general one has to require

  f^{(k)}(0) = f^{(k)}(\ell) = 0 \quad (k = 0, 2, 4, \dots, 2n).

From these results we conclude:

1. In order to guarantee the convergence of the series

  \sum_{n=1}^{\infty} n^k |\varphi_n| \quad (k = 0, 1, 2),

the function \varphi must be twice continuously differentiable with a piecewise continuous third derivative, and

  \varphi(0) = \varphi(\ell) = 0, \qquad \varphi''(0) = \varphi''(\ell) = 0.  (3.4.27)

2. In order to guarantee the convergence of the series

  \sum_{n=1}^{\infty} n^k |\psi_n| \quad (k = -1, 0, 1),

the function \psi must be continuously differentiable with a piecewise continuous second derivative, and

  \psi(0) = \psi(\ell) = 0.  (3.4.28)

Under these conditions the representation (3.4.17) is in fact a solution of the problem (3.4.1)-(3.4.3). Note that this solution is unique.
3.4.3 Non-homogeneous equations

We consider the non-homogeneous hyperbolic equation

  u_{tt} = a^2 u_{xx} + f(x,t), \quad 0 < x < \ell, \ t > 0,  (3.4.29)

with the initial conditions

  u(x,0) = \varphi(x), \quad u_t(x,0) = \psi(x), \quad 0 \le x \le \ell,  (3.4.30)

and homogeneous boundary conditions

  u(0,t) = u(\ell,t) = 0, \quad t > 0.  (3.4.31)
We try to find the solution of (3.4.29)-(3.4.31) in the form

  u(x,t) = \sum_{n=1}^{\infty} u_n(t) \sin \frac{n\pi}{\ell} x,  (3.4.32)

where the u_n(t) are to be determined. For this purpose we expand f, \varphi and \psi in Fourier sine series:

  f(x,t) = \sum_{n=1}^{\infty} f_n(t) \sin \frac{n\pi}{\ell} x, \qquad f_n(t) = \frac{2}{\ell} \int_0^{\ell} f(\xi,t) \sin \frac{n\pi}{\ell} \xi \, d\xi,
  \varphi(x) = \sum_{n=1}^{\infty} \varphi_n \sin \frac{n\pi}{\ell} x, \qquad \varphi_n = \frac{2}{\ell} \int_0^{\ell} \varphi(\xi) \sin \frac{n\pi}{\ell} \xi \, d\xi,  (3.4.33)
  \psi(x) = \sum_{n=1}^{\infty} \psi_n \sin \frac{n\pi}{\ell} x, \qquad \psi_n = \frac{2}{\ell} \int_0^{\ell} \psi(\xi) \sin \frac{n\pi}{\ell} \xi \, d\xi.

Plugging (3.4.32) into (3.4.29), we get

  \sum_{n=1}^{\infty} \sin \frac{n\pi}{\ell} x \left[ -a^2 \left( \frac{n\pi}{\ell} \right)^2 u_n(t) - \ddot u_n(t) + f_n(t) \right] = 0.

Thus,

  \ddot u_n(t) + \left( \frac{n\pi a}{\ell} \right)^2 u_n(t) = f_n(t).  (3.4.34)

On the other hand,

  u(x,0) = \varphi(x) = \sum_{n=1}^{\infty} u_n(0) \sin \frac{n\pi}{\ell} x = \sum_{n=1}^{\infty} \varphi_n \sin \frac{n\pi}{\ell} x,
  u_t(x,0) = \psi(x) = \sum_{n=1}^{\infty} \dot u_n(0) \sin \frac{n\pi}{\ell} x = \sum_{n=1}^{\infty} \psi_n \sin \frac{n\pi}{\ell} x.

It follows that

  u_n(0) = \varphi_n, \qquad \dot u_n(0) = \psi_n.  (3.4.35)

Consequently, we can find u_n(t) in the form

  u_n(t) = u_n^{(I)}(t) + u_n^{(II)}(t),  (3.4.36)

where

  u_n^{(I)}(t) = \frac{\ell}{n\pi a} \int_0^t \sin \frac{n\pi a}{\ell}(t - \tau) \, f_n(\tau) \, d\tau  (3.4.37)

and

  u_n^{(II)}(t) = \varphi_n \cos \frac{n\pi a}{\ell} t + \frac{\ell \psi_n}{n\pi a} \sin \frac{n\pi a}{\ell} t.

Thus

  u(x,t) = \sum_{n=1}^{\infty} \left( \frac{\ell}{n\pi a} \int_0^t \sin \frac{n\pi a}{\ell}(t - \tau) \, f_n(\tau) \, d\tau \right) \sin \frac{n\pi}{\ell} x
         + \sum_{n=1}^{\infty} \left( \varphi_n \cos \frac{n\pi a}{\ell} t + \frac{\ell \psi_n}{n\pi a} \sin \frac{n\pi a}{\ell} t \right) \sin \frac{n\pi}{\ell} x  (3.4.38)
         := u^{(I)}(x,t) + u^{(II)}(x,t).
Taking (3.4.33) into account for f_n(t), we can represent u^{(I)}(x,t) in the form

  u^{(I)}(x,t) = \int_0^t \int_0^{\ell} \left\{ \sum_{n=1}^{\infty} \frac{2}{n\pi a} \sin \frac{n\pi a}{\ell}(t - \tau) \sin \frac{n\pi}{\ell} x \sin \frac{n\pi}{\ell} \xi \right\} f(\xi, \tau) \, d\xi \, d\tau
             := \int_0^t \int_0^{\ell} G(x, \xi, t - \tau) f(\xi, \tau) \, d\xi \, d\tau,  (3.4.39)

where

  G(x, \xi, t - \tau) := \frac{2}{\pi a} \sum_{n=1}^{\infty} \frac{1}{n} \sin \frac{n\pi a}{\ell}(t - \tau) \sin \frac{n\pi}{\ell} x \sin \frac{n\pi}{\ell} \xi.  (3.4.40)
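The Duhamel integral (3.4.37) for a single mode can be sketched numerically as follows; the quadrature rule and the test data are our own illustrative assumptions:

```python
import math

def duhamel_mode(fn, n, a, ell, t, m=400):
    # u_n^(I)(t) = (ell/(n*pi*a)) * int_0^t sin(n*pi*a*(t-tau)/ell) fn(tau) dtau,
    # i.e. formula (3.4.37), with the integral evaluated by the trapezoidal rule.
    if t == 0.0:
        return 0.0
    h = t / m
    s = 0.0
    for j in range(m + 1):
        tau = j * h
        w = 0.5 if j in (0, m) else 1.0
        s += w * math.sin(n * math.pi * a * (t - tau) / ell) * fn(tau)
    return ell / (n * math.pi * a) * s * h

# Check: f(x,t) = sin(pi*x/ell) has f_1 = 1 and f_n = 0 otherwise; with
# phi = psi = 0 the exact solution of (3.4.29)-(3.4.31) is
#   u(x,t) = (ell/(pi*a))**2 * (1 - cos(pi*a*t/ell)) * sin(pi*x/ell).
a, ell, t = 2.0, 1.0, 0.7
u1 = duhamel_mode(lambda tau: 1.0, 1, a, ell, t)
exact = (ell / (math.pi * a)) ** 2 * (1.0 - math.cos(math.pi * a * t / ell))
```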
3.4.4 General first boundary value problem

Find a solution of the problem

  u_{tt} = a^2 u_{xx} + f(x,t), \quad 0 < x < \ell, \ t > 0,  (3.4.41)
  u(x,0) = \varphi(x), \quad u_t(x,0) = \psi(x), \quad 0 < x < \ell,  (3.4.42)
  u(0,t) = \mu_1(t), \quad u(\ell,t) = \mu_2(t), \quad t > 0.  (3.4.43)

To deal with this problem we shall find a differentiable function U such that

  U(0,t) = \mu_1(t), \quad U(\ell,t) = \mu_2(t),  (3.4.44)

together with an equation of the type (3.4.41) of which U(x,t) is a solution. It is easily seen that the function

  U(x,t) = \mu_1(t) + \frac{x}{\ell} \, [\mu_2(t) - \mu_1(t)]  (3.4.45)

satisfies (3.4.44) and (since U_{xx} = 0) the equation

  U_{tt} = a^2 U_{xx} + \bar f(x,t), \qquad \bar f(x,t) := \ddot\mu_1(t) + \frac{x}{\ell} \, [\ddot\mu_2(t) - \ddot\mu_1(t)].  (3.4.46)

Now consider the problem

  v_{tt} = a^2 v_{xx} + f - \bar f,
  v(x,0) = \varphi(x) - U(x,0), \quad v_t(x,0) = \psi(x) - U_t(x,0),
  v(0,t) = 0, \quad v(\ell,t) = 0.  (3.4.47)

It is clear that we can find v in the form (3.4.38), and furthermore

  u(x,t) = U(x,t) + v(x,t).  (3.4.48)
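The lifting of the boundary data is a one-liner in code; the helper name below is our own, not from the notes:

```python
import math

def lift_boundary_data(mu1, mu2, ell):
    # U(x,t) = mu1(t) + (x/ell)*(mu2(t) - mu1(t))   (formula (3.4.45)):
    # the simplest function matching the boundary values (3.4.44).
    # Writing u = U + v turns (3.4.41)-(3.4.43) into a problem for v
    # with homogeneous boundary conditions, solvable as in (3.4.38).
    def U(x, t):
        return mu1(t) + (x / ell) * (mu2(t) - mu1(t))
    return U

ell = 3.0
U = lift_boundary_data(lambda t: math.sin(t), lambda t: t * t, ell)
```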
3.4.5 General scheme for the Fourier method

Let \rho(x), k(x) and q(x) be given functions with \rho(x) > 0, k(x) > 0 and q(x) \ge 0.
Consider the problem

  L[u] := \frac{\partial}{\partial x} \left[ k(x) \frac{\partial u}{\partial x} \right] - q(x) u = \rho(x) \frac{\partial^2 u}{\partial t^2},  (3.4.49)
  u(0,t) = 0, \quad u(\ell,t) = 0, \quad t > 0,  (3.4.50)
  u(x,0) = \varphi(x), \quad u_t(x,0) = \psi(x), \quad 0 < x < \ell.  (3.4.51)

We now use the Fourier method to solve this problem. Namely, we try to find a non-trivial solution of (3.4.49) which satisfies the boundary conditions u(0,t) = 0, u(\ell,t) = 0 and can be represented in the form u(x,t) = X(x) T(t). With the same argument as in § 3.4.1 we arrive at the problems

  \frac{d}{dx} \left[ k(x) \frac{dX}{dx} \right] - qX + \lambda \rho X = 0

and

  T'' + \lambda T = 0.

To find X(x) we get the following eigenvalue problem: find the values \lambda for which the boundary value problem

  L[X] + \lambda \rho X = 0, \quad 0 < x < \ell,  (3.4.52)
  X(0) = 0, \quad X(\ell) = 0,  (3.4.53)

has a non-trivial solution. These \lambda are called eigenvalues, and the associated non-trivial solutions of (3.4.52), (3.4.53) are called eigenfunctions.
Properties of the problem (3.4.52), (3.4.53)
1. There exist countably many eigenvalues \lambda_1 < \lambda_2 < \dots < \lambda_n < \dots, with associated non-trivial solutions (eigenfunctions) X_1(x), X_2(x), \dots, X_n(x), \dots

2. For q \ge 0 all \lambda_n are positive.

3. The eigenfunctions are orthogonal on the interval [0, \ell] with the weight \rho(x), in the sense that

  \int_0^{\ell} X_m(x) X_n(x) \rho(x) \, dx = 0, \quad m \neq n.  (3.4.54)

4. Any function F(x) defined on [0, \ell] which is twice continuously differentiable and satisfies the boundary conditions F(0) = F(\ell) = 0 can be expanded into a uniformly and absolutely convergent series in the X_n(x):

  F(x) = \sum_{n=1}^{\infty} F_n X_n(x), \qquad F_n = \frac{1}{N_n} \int_0^{\ell} F(x) X_n(x) \rho(x) \, dx,
  N_n = \int_0^{\ell} X_n^2(x) \rho(x) \, dx.  (3.4.55)
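The eigenvalues of (3.4.52), (3.4.53) can be located numerically by a shooting method: an eigenvalue is a root of \lambda \mapsto X(\ell; \lambda), where X(\cdot; \lambda) solves the ODE with X(0) = 0, X'(0) = 1. A minimal sketch, assuming the constant-coefficient case k = \rho = 1, q = 0 on [0, 1], where \lambda_n = (n\pi)^2 is known:

```python
import math

def shoot(lam, k=1.0, q=0.0, rho=1.0, ell=1.0, m=4000):
    # Integrate (k X')' - q X + lam*rho*X = 0 from x = 0 with X(0)=0, X'(0)=1
    # (constant k here, so X'' = (q - lam*rho)/k * X) and return X(ell).
    h = ell / m
    c = (q - lam * rho) / k
    x, xp = 0.0, 1.0          # X and X'
    for _ in range(m):
        # one explicit midpoint (RK2) step for the system (X, X')
        xm = x + 0.5 * h * xp
        xpm = xp + 0.5 * h * c * x
        x += h * xpm
        xp += h * c * xm
    return x

def eigenvalue(lo, hi, tol=1e-10):
    # Bisection on lambda; (lo, hi) must bracket a sign change of X(ell; .).
    flo = shoot(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if shoot(mid) * flo > 0:
            lo, flo = mid, shoot(mid)
        else:
            hi = mid
    return 0.5 * (lo + hi)

lam1 = eigenvalue(1.0, 20.0)   # expect roughly pi**2 = 9.8696 for k = rho = 1, q = 0
```

For variable k(x) one would integrate the first-order system (X, kX') instead; the constant-coefficient simplification above is only for brevity.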
Properties 1 and 4 are provided by the theory of integral equations, and we do not give proofs of them here. We shall prove properties 2 and 3. First we give the Green formula. Let u and v be defined on [a,b] with u, v \in C^2(a,b) and u, v \in C^1[a,b]. Then

  u L[v] - v L[u] = u (k v')' - v (k u')' = \big[ k (u v' - v u') \big]'.

Integrating from a to b we get

  \int_a^b \big( u L[v] - v L[u] \big) \, dx = \big[ k (u v' - v u') \big]_a^b.

Proof of property 3. Let X_m(x) and X_n(x) be two eigenfunctions and \lambda_m, \lambda_n their associated eigenvalues. From the boundary conditions (3.4.53) and the Green formula we have

  \int_0^{\ell} \big\{ X_m L[X_n] - X_n L[X_m] \big\} \, dx = 0.

Taking Eq. (3.4.52) into account we obtain

  (\lambda_n - \lambda_m) \int_0^{\ell} X_m(x) X_n(x) \rho(x) \, dx = 0.

Since \lambda_n \neq \lambda_m, it follows that

  \int_0^{\ell} X_m(x) X_n(x) \rho(x) \, dx = 0.  (3.4.56)

We shall now prove that to every eigenvalue there corresponds, up to a constant factor, only one eigenfunction. To prove this fact we use the result that a solution of a linear ordinary differential equation is uniquely determined by its value and the value of its first derivative at one point. Let X and \tilde X be two eigenfunctions associated with the same eigenvalue \lambda. Since X(0) = \tilde X(0) = 0 and neither solution is trivial, we have X'(0) \neq 0 and \tilde X'(0) \neq 0. Set

  \bar X(x) := \frac{X'(0)}{\tilde X'(0)} \tilde X(x).

The function \bar X(x) satisfies the equation (3.4.52); furthermore

  \bar X(0) = \frac{X'(0)}{\tilde X'(0)} \tilde X(0) = 0, \qquad \bar X'(0) = \frac{X'(0)}{\tilde X'(0)} \tilde X'(0) = X'(0).

Thus X(x) = \bar X(x), and therefore

  X(x) = \frac{X'(0)}{\tilde X'(0)} \tilde X(x) = A \tilde X(x).
We conclude that if X_n(x) is an eigenfunction associated with the eigenvalue \lambda_n, then A_n X_n(x) (A_n a nonvanishing constant) is also an eigenfunction. Since these eigenfunctions differ only by a constant factor, the value of this factor is not important, and it can be normalized through the equality

  N_n = \int_0^{\ell} X_n^2(x) \rho(x) \, dx = 1.

If an eigenfunction \hat X_n(x) is not normalized, then we can normalize it by

  X_n(x) = \frac{\hat X_n(x)}{\sqrt{ \int_0^{\ell} \hat X_n^2(x) \rho(x) \, dx }}.

In this way we have found an orthonormal system of eigenfunctions X_n(x) of the problem (3.4.52), (3.4.53):

  \int_0^{\ell} X_m(x) X_n(x) \rho(x) \, dx = \begin{cases} 0, & m \neq n, \\ 1, & m = n. \end{cases}

Proof of property 2. Let X_n(x) be the normalized eigenfunction associated with the eigenvalue \lambda_n:

  L[X_n] = -\lambda_n \rho(x) X_n(x).

Multiplying both sides of this equation by X_n(x) and integrating with respect to x from 0 to \ell, we obtain

  \lambda_n \int_0^{\ell} X_n^2(x) \rho(x) \, dx = - \int_0^{\ell} X_n(x) L[X_n] \, dx.

Since \int_0^{\ell} X_n^2(x) \rho(x) \, dx = 1, we get

  \lambda_n = - \int_0^{\ell} X_n \frac{d}{dx} \left[ k(x) \frac{dX_n}{dx} \right] dx + \int_0^{\ell} q(x) X_n^2(x) \, dx.

Integration by parts gives

  \lambda_n = - \big[ X_n k X_n' \big]_0^{\ell} + \int_0^{\ell} k(x) [X_n'(x)]^2 \, dx + \int_0^{\ell} q(x) X_n^2(x) \, dx
            = \int_0^{\ell} k(x) [X_n'(x)]^2 \, dx + \int_0^{\ell} q(x) X_n^2(x) \, dx.

Since k(x) > 0, q(x) \ge 0, X_n \in C^2(0,\ell) and X_n'(x) is not identically vanishing, we have

  \lambda_n > 0.

For a function F(x) defined on (0, \ell) we can formally expand F(x) as follows:

  F(x) = \sum_{n=1}^{\infty} F_n X_n(x), \qquad F_n = \frac{ \int_0^{\ell} \rho(x) F(x) X_n(x) \, dx }{ \int_0^{\ell} \rho(x) X_n^2(x) \, dx }, \quad n \in \mathbb{N}.
We say that the F_n are the Fourier coefficients of the function F in the system \{X_1, X_2, \dots\}. With the same method as in § 3.4.1 one can prove that

  u(x,t) = \sum_{n=1}^{\infty} \left( A_n \cos \sqrt{\lambda_n}\, t + B_n \sin \sqrt{\lambda_n}\, t \right) X_n(x),

where

  A_n = \varphi_n, \qquad B_n = \frac{\psi_n}{\sqrt{\lambda_n}}.

Here \varphi_n and \psi_n are the Fourier coefficients of \varphi and \psi, respectively. This method has been developed by Steklov. We do not pursue it further in this lecture.
3.5 The Goursat Problem
3.5.1 Definition

Consider the equation

  u_{xy} = f(x,y).  (3.5.1)

Let C_1 and C_2 be two curves described by equations x = \sigma_1(y) and x = \sigma_2(y), emanating from a common point which we take to be the origin of the coordinate system. We assume that they do not intersect elsewhere. We assume further that the curves C_1 and C_2 are nowhere tangent to the characteristics situated in the first quadrant, and that the functions \sigma_1(y) and \sigma_2(y) are increasing, i.e. the curves C_1 and C_2 have a time-like orientation. The functions \sigma_1(y) and \sigma_2(y) then have inverses \theta_1(x) and \theta_2(x), respectively. For definiteness suppose that \sigma_1(y) < \sigma_2(y) for the positive values of y for which both functions exist.
Let D be the domain contained between these arcs and the two characteristics drawn through the points A \in C_1 and B \in C_2. We seek a solution u(x,y) of (3.5.1) defined in the closure \bar D of the domain D and satisfying the boundary conditions on the arcs C_1 and C_2:

  u|_{OA} = u(\sigma_1(y), y) = \Phi_1(y), \quad 0 \le y \le \beta_1,  (3.5.2)
  u|_{OB} = u(\sigma_2(y), y) = \Phi_2(y), \quad 0 \le y \le \beta_2,  (3.5.3)

where \beta_1 and \beta_2 are the y-coordinates of the points A and B. We assume that \Phi_1(0) = \Phi_2(0) = 0. Conditions (3.5.2), (3.5.3) can also be represented in the form

  u(x, \theta_1(x)) = \varphi_1(x), \quad 0 \le x \le \alpha_1,  (3.5.4)
  u(x, \theta_2(x)) = \varphi_2(x), \quad 0 \le x \le \alpha_2,  (3.5.5)

where \alpha_1 and \alpha_2 are the abscissae of the points A and B. This problem is called the Goursat problem.

If \Phi_1(0) = \Phi_2(0) = c \neq 0, it suffices to solve the Goursat problem with the conditions u = \Phi_1 - c on C_1 and u = \Phi_2 - c on C_2 and then add the constant function u_0 \equiv c to the solution so obtained. The problem formulated above is the Goursat problem in the narrow sense. In the wider sense, in the Goursat problem the function \sigma_1(y) appearing in the equation of the curve C_1 is
monotonically non-decreasing, and the curve C_2 is represented by an equation of the form y = \theta(x), where the function \theta(x) is monotonically non-decreasing. If C_1 is the interval [0, \alpha] of the axis Ox and C_2 is the interval [0, \beta] of the axis Oy, then we have the Darboux problem: find the solution of (3.5.1) in the rectangle OMRN, where M = (\alpha, 0), R = (\alpha, \beta), N = (0, \beta), with

  u(x,0) = \varphi(x), \quad 0 \le x \le \alpha,
  u(0,y) = \psi(y), \quad 0 \le y \le \beta.

Let the arc OA of a curve C with equation y = \theta(x), beginning at the origin of the coordinate system, belong to the class C^1 with \theta'(x) > 0. Let D be the domain contained between the arc OA, the axis Ox and the characteristic AB perpendicular to Ox, and let \alpha and \beta be the coordinates of the point A. If we seek the solution of (3.5.1) in the domain D which satisfies the boundary conditions

  u(x,0) = \varphi(x), \quad 0 \le x \le \alpha, \qquad u(\sigma(y), y) = \psi(y), \quad 0 \le y \le \beta,

where x = \sigma(y) is the inverse of y = \theta(x), then we have the Picard problem.

Remark: The Darboux problem and the Picard problem are special cases of the Goursat problem.

Remark: The Goursat problem has many applications, e.g. in gas dynamics. We restrict ourselves to the Darboux problem in this introductory lecture.
3.5.2 The Darboux problem. The method of successive approximations

We consider the Darboux problem

  u_{xy} = f(x,y), \quad x \ge 0, \ y \ge 0,  (3.5.1)
  u(x,0) = \varphi(x), \quad x \ge 0,  (3.5.6)
  u(0,y) = \psi(y), \quad y \ge 0.  (3.5.7)

Let \varphi and \psi be differentiable and \varphi(0) = \psi(0). Integrating (3.5.1) first with respect to x and then with respect to y, we get

  u_y(x,y) = u_y(0,y) + \int_0^x f(\xi, y) \, d\xi,
  u(x,y) = u(x,0) + u(0,y) - u(0,0) + \int_0^y \int_0^x f(\xi, \eta) \, d\xi \, d\eta.

Thus,

  u(x,y) = \varphi(x) + \psi(y) - \varphi(0) + \int_0^y \int_0^x f(\xi, \eta) \, d\xi \, d\eta.  (3.5.8)

It is clear that u(x,y) defined by (3.5.8) is a solution of the Darboux problem (3.5.1), (3.5.6), (3.5.7); furthermore, it is unique. We now consider the more general hyperbolic equation

  u_{xy} = a(x,y) u_x + b(x,y) u_y + c(x,y) u + f(x,y)  (3.5.9)

with the boundary conditions (3.5.6), (3.5.7). We assume again that \varphi(0) = \psi(0) and that the coefficients a, b and c are continuous functions of x and y. From (3.5.9) we see that a solution u(x,y) of (3.5.9) satisfies the integral equation

  u(x,y) = \int_0^y \int_0^x \big[ a(\xi,\eta) u_\xi + b(\xi,\eta) u_\eta + c(\xi,\eta) u \big] \, d\xi \, d\eta
         + \varphi(x) + \psi(y) - \varphi(0) + \int_0^y \int_0^x f(\xi,\eta) \, d\xi \, d\eta.  (3.5.10)

The solution of (3.5.10) will be constructed by the method of successive approximations. As the zeroth approximation we take the function

  u_0(x,y) = 0.
Equation (3.5.10) then gives the following expressions for the successive approximations:

  u_1(x,y) = \varphi(x) + \psi(y) - \varphi(0) + \int_0^y \int_0^x f(\xi,\eta) \, d\xi \, d\eta,
  \dots
  u_n(x,y) = u_1(x,y) + \int_0^y \int_0^x \left[ a(\xi,\eta) \frac{\partial u_{n-1}}{\partial \xi} + b(\xi,\eta) \frac{\partial u_{n-1}}{\partial \eta} + c(\xi,\eta) u_{n-1} \right] d\xi \, d\eta.  (3.5.11)

Note also that

  \frac{\partial u_n}{\partial x} = \frac{\partial u_1}{\partial x} + \int_0^y \left[ a(x,\eta) \frac{\partial u_{n-1}}{\partial x} + b(x,\eta) \frac{\partial u_{n-1}}{\partial \eta} + c(x,\eta) u_{n-1} \right] d\eta,
  \frac{\partial u_n}{\partial y} = \frac{\partial u_1}{\partial y} + \int_0^x \left[ a(\xi,y) \frac{\partial u_{n-1}}{\partial \xi} + b(\xi,y) \frac{\partial u_{n-1}}{\partial y} + c(\xi,y) u_{n-1} \right] d\xi.  (3.5.12)
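The successive approximations (3.5.11) can be run on a grid. A minimal sketch for the special case u_{xy} = cu with a = b = 0, f = 0 and \varphi = \psi \equiv 1 (chosen so that the derivative terms drop out and the exact solution \sum_k (xy)^k/(k!)^2 is known); grid sizes and iteration count are arbitrary choices:

```python
import math

def picard_uxy(c, nx=101, ny=101, X=1.0, Y=1.0, iters=25):
    # Fixed-point iteration for the integral equation (3.5.10) in the special
    # case u_xy = c*u, u(x,0) = u(0,y) = 1 (a = b = 0, f = 0, phi = psi = 1):
    #     u_{m+1}(x,y) = 1 + c * int_0^y int_0^x u_m(xi,eta) dxi deta.
    # The double integral is accumulated cell by cell with the trapezoidal rule.
    hx, hy = X / (nx - 1), Y / (ny - 1)
    u = [[1.0] * nx for _ in range(ny)]
    for _ in range(iters):
        I = [[0.0] * nx for _ in range(ny)]
        for j in range(1, ny):
            for i in range(1, nx):
                cell = 0.25 * hx * hy * (u[j][i] + u[j-1][i]
                                         + u[j][i-1] + u[j-1][i-1])
                I[j][i] = I[j][i-1] + I[j-1][i] - I[j-1][i-1] + cell
        u = [[1.0 + c * I[j][i] for i in range(nx)] for j in range(ny)]
    return u

# For c = 1 the exact solution is u(x,y) = sum_k (xy)^k / (k!)^2, i.e. the
# modified Bessel function I_0(2*sqrt(xy)); at x = y = 1 this is about 2.2796.
u = picard_uxy(1.0)
exact = sum(1.0 / math.factorial(k) ** 2 for k in range(25))
```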
In order to prove the uniform convergence of the sequences

  \{u_n(x,y)\}, \qquad \left\{ \frac{\partial u_n}{\partial x}(x,y) \right\}, \qquad \left\{ \frac{\partial u_n}{\partial y}(x,y) \right\},

we consider the differences z_n(x,y) = u_{n+1}(x,y) - u_n(x,y). They satisfy

  z_n(x,y) = \int_0^y \int_0^x \left[ a(\xi,\eta) \frac{\partial z_{n-1}}{\partial \xi} + b(\xi,\eta) \frac{\partial z_{n-1}}{\partial \eta} + c(\xi,\eta) z_{n-1}(\xi,\eta) \right] d\xi \, d\eta,
  \frac{\partial z_n}{\partial x}(x,y) = \int_0^y \left[ a(x,\eta) \frac{\partial z_{n-1}}{\partial x} + b(x,\eta) \frac{\partial z_{n-1}}{\partial \eta} + c(x,\eta) z_{n-1}(x,\eta) \right] d\eta,
  \frac{\partial z_n}{\partial y}(x,y) = \int_0^x \left[ a(\xi,y) \frac{\partial z_{n-1}}{\partial \xi} + b(\xi,y) \frac{\partial z_{n-1}}{\partial y} + c(\xi,y) z_{n-1}(\xi,y) \right] d\xi.

Assume that

  |a(x,y)| \le M, \quad |b(x,y)| \le M, \quad |c(x,y)| \le M

and

  |z_0| < H, \quad \left| \frac{\partial z_0}{\partial x} \right| < H, \quad \left| \frac{\partial z_0}{\partial y} \right| < H,

where x \in [0,L], y \in [0,L] and L is a given positive number. We want to estimate the z_n from above. First we note that

  |z_1| < 3HMxy \le 3HM \frac{(x+y)^2}{2!},
  \left| \frac{\partial z_1}{\partial x} \right| < 3HMy \le 3HM(x+y), \qquad \left| \frac{\partial z_1}{\partial y} \right| < 3HMx \le 3HM(x+y).
We shall prove by induction that

  |z_n| < 3HM^n K^{n-1} \frac{(x+y)^{n+1}}{(n+1)!},
  \left| \frac{\partial z_n}{\partial x} \right| < 3HM^n K^{n-1} \frac{(x+y)^n}{n!}, \qquad \left| \frac{\partial z_n}{\partial y} \right| < 3HM^n K^{n-1} \frac{(x+y)^n}{n!},  (3.5.13)

where K = 2L + 2. For n = 1 these estimates have been proved. Suppose they hold for n; we want to show that they are also valid for n + 1. In fact, from our definition of z_n we have

  |z_{n+1}| \le \int_0^y \int_0^x \left| a \frac{\partial z_n}{\partial \xi} + b \frac{\partial z_n}{\partial \eta} + c z_n \right| d\xi \, d\eta
            \le 3HM^{n+1} K^{n-1} \int_0^y \int_0^x \left[ 2 \frac{(\xi+\eta)^n}{n!} + \frac{(\xi+\eta)^{n+1}}{(n+1)!} \right] d\xi \, d\eta.

Noting that

  \int_0^y \int_0^x \frac{(\xi+\eta)^k}{k!} \, d\xi \, d\eta = \int_0^y \left[ \frac{(x+\eta)^{k+1}}{(k+1)!} - \frac{\eta^{k+1}}{(k+1)!} \right] d\eta
  = \frac{(x+y)^{k+2}}{(k+2)!} - \frac{x^{k+2}}{(k+2)!} - \frac{y^{k+2}}{(k+2)!} \le \frac{(x+y)^{k+2}}{(k+2)!},

we have

  |z_{n+1}| \le 3HM^{n+1} K^{n-1} \left( 2 \frac{(x+y)^{n+2}}{(n+2)!} + \frac{(x+y)^{n+3}}{(n+3)!} \right)
            = 3HM^{n+1} K^{n-1} \frac{(x+y)^{n+2}}{(n+2)!} \left( 2 + \frac{x+y}{n+3} \right)
            < 3HM^{n+1} K^{n} \frac{(x+y)^{n+2}}{(n+2)!},

since x + y \le 2L and hence 2 + (x+y)/(n+3) \le 2L + 2 = K. Analogously,

  \left| \frac{\partial z_{n+1}}{\partial x} \right| \le 3HM^{n+1} K^{n-1} \frac{(x+y)^{n+1}}{(n+1)!} \left( 2 + \frac{x+y}{n+2} \right) < 3HM^{n+1} K^{n} \frac{(x+y)^{n+1}}{(n+1)!}

and

  \left| \frac{\partial z_{n+1}}{\partial y} \right| \le 3HM^{n+1} K^{n-1} \frac{(x+y)^{n+1}}{(n+1)!} \left( 2 + \frac{x+y}{n+2} \right) < 3HM^{n+1} K^{n} \frac{(x+y)^{n+1}}{(n+1)!}.
Thus the estimates in (3.5.13) are proved. From (3.5.13), using x + y \le 2L, we see that

  |z_n| < 3HM^n K^{n-1} \frac{(x+y)^{n+1}}{(n+1)!} < \frac{3H}{K^2 M} \frac{(2KLM)^{n+1}}{(n+1)!},
  \left| \frac{\partial z_n}{\partial x} \right| < 3HM^n K^{n-1} \frac{(x+y)^n}{n!} < \frac{3H}{K} \frac{(2KLM)^n}{n!},
  \left| \frac{\partial z_n}{\partial y} \right| < 3HM^n K^{n-1} \frac{(x+y)^n}{n!} < \frac{3H}{K} \frac{(2KLM)^n}{n!}.

It follows that

  \sum_{n=0}^{\infty} |z_n| < \sum_{n=0}^{\infty} \frac{3H}{K^2 M} \frac{(2KLM)^{n+1}}{(n+1)!} = \frac{3H}{K^2 M} \left( e^{2KLM} - 1 \right),
  \sum_{n=0}^{\infty} \left| \frac{\partial z_n}{\partial x} \right| < \sum_{n=0}^{\infty} \frac{3H}{K} \frac{(2KLM)^n}{n!} = \frac{3H}{K} e^{2KLM},
  \sum_{n=0}^{\infty} \left| \frac{\partial z_n}{\partial y} \right| < \sum_{n=0}^{\infty} \frac{3H}{K} \frac{(2KLM)^n}{n!} = \frac{3H}{K} e^{2KLM}.
We thus conclude that the sums

  u_n = u_0 + z_0 + z_1 + \dots + z_{n-1}, \qquad \frac{\partial u_n}{\partial x} = \frac{\partial u_0}{\partial x} + \frac{\partial z_0}{\partial x} + \dots + \frac{\partial z_{n-1}}{\partial x}, \qquad \frac{\partial u_n}{\partial y} = \frac{\partial u_0}{\partial y} + \frac{\partial z_0}{\partial y} + \dots + \frac{\partial z_{n-1}}{\partial y}

converge uniformly as n \to \infty. We denote the limit functions by

  u(x,y) := \lim_{n\to\infty} u_n(x,y), \qquad v(x,y) := \lim_{n\to\infty} \frac{\partial u_n}{\partial x}(x,y), \qquad w(x,y) := \lim_{n\to\infty} \frac{\partial u_n}{\partial y}(x,y).

Taking the limit in the integral equations (3.5.11), (3.5.12), we get

  u(x,y) = u_1(x,y) + \int_0^y \int_0^x \big[ a(\xi,\eta) v + b(\xi,\eta) w + c(\xi,\eta) u \big] \, d\xi \, d\eta,
  v(x,y) = \frac{\partial u_1}{\partial x}(x,y) + \int_0^y \big[ a(x,\eta) v + b(x,\eta) w + c(x,\eta) u \big] \, d\eta,
  w(x,y) = \frac{\partial u_1}{\partial y}(x,y) + \int_0^x \big[ a(\xi,y) v + b(\xi,y) w + c(\xi,y) u \big] \, d\xi.

From the first equation it follows that v = u_x, w = u_y,
and therefore

  u(x,y) = \varphi(x) + \psi(y) - \varphi(0) + \int_0^y \int_0^x f(\xi,\eta) \, d\xi \, d\eta
         + \int_0^y \int_0^x \big[ a(\xi,\eta) u_\xi + b(\xi,\eta) u_\eta + c(\xi,\eta) u \big] \, d\xi \, d\eta,  (3.5.10)

which satisfies (3.5.9) and clearly the boundary conditions. Thus there exists a solution of the Darboux problem (3.5.9), (3.5.6), (3.5.7). We shall prove that the solution is unique. In fact, suppose that there are two solutions u_1(x,y) and u_2(x,y), and let

  U(x,y) = u_1(x,y) - u_2(x,y).

The function U satisfies the homogeneous integral equation

  U(x,y) = \int_0^y \int_0^x \big( a U_\xi + b U_\eta + c U \big) \, d\xi \, d\eta.

Let H_1 be a majorant of |U|, |U_x| and |U_y| for x \in [0,L], y \in [0,L]. Following the successive estimates above we get

  |U| < 3H_1 M^{n+1} K^{n} \frac{(x+y)^{n+2}}{(n+2)!} < \frac{3H_1}{K^2 M} \frac{(2KLM)^{n+2}}{(n+2)!}

for every n. It follows that U(x,y) \equiv 0, i.e. u_1(x,y) \equiv u_2(x,y). Thus the solution of the Darboux problem is unique.

If a, b and c are constant, then we make the transformation u = v e^{\alpha x + \beta y}, for which

  u_x = (v_x + \alpha v) e^{\alpha x + \beta y}, \qquad u_y = (v_y + \beta v) e^{\alpha x + \beta y},
  u_{xy} = (v_{xy} + \beta v_x + \alpha v_y + \alpha\beta v) e^{\alpha x + \beta y}.

Thus

  v_{xy} + \beta v_x + \alpha v_y + \alpha\beta v = a(v_x + \alpha v) + b(v_y + \beta v) + cv + f e^{-\alpha x - \beta y},

that is,

  v_{xy} = (a - \beta) v_x + (b - \alpha) v_y + (a\alpha + b\beta + c - \alpha\beta) v + f e^{-\alpha x - \beta y}.

Letting \beta = a, \alpha = b, we get

  v_{xy} = (ab + c) v + f e^{-(bx + ay)}.  (3.5.14)

If ab + c = 0, then we can find the solution explicitly in the form (3.5.8); if ab + c \neq 0, then we use the method of the next paragraph.
3.6 Solution of general linear hyperbolic equations

3.6.1 The Green formula

Let

  L[u] = u_{xx} - u_{yy} + a(x,y) u_x + b(x,y) u_y + c(x,y) u,  (3.6.1)

where a(x,y), b(x,y), c(x,y) are differentiable functions, and let

  L^*[v] = v_{xx} - v_{yy} - (av)_x - (bv)_y + cv  (3.6.2)

be the adjoint operator. Here we take

  H = v u_x - v_x u + avu = (vu)_x - (2v_x - av) u = -(vu)_x + (2u_x + au) v  (3.6.3)

and

  K = -v u_y + v_y u + bvu = (uv)_y - (2u_y - bu) v = -(vu)_y + (2v_y + bv) u.  (3.6.4)

A direct computation then shows that

  v L[u] - u L^*[v] = \frac{\partial H}{\partial x} + \frac{\partial K}{\partial y}.

The Green formula says that

  \iint_G \big( v L[u] - u L^*[v] \big) \, d\xi \, d\eta = \iint_G \left( \frac{\partial H}{\partial x} + \frac{\partial K}{\partial y} \right) d\xi \, d\eta
  = - \int_S \big( H \cos(n,x) + K \cos(n,y) \big) \, ds = \int_S \big( H \, d\eta - K \, d\xi \big).  (3.6.5)

Here n is the inward normal vector to S; that is,

  d\xi = \cos(n,y) \, ds, \qquad d\eta = -\cos(n,x) \, ds,

where S is traced out anticlockwise so as to keep the domain of interest always on the left, and ds is taken positive.
3.6.2 Riemann's method

We consider the problem of finding the solution of the linear hyperbolic equation

  L[u] = u_{xx} - u_{yy} + a(x,y) u_x + b(x,y) u_y + c(x,y) u = -f(x,y)  (3.6.6)

which satisfies the Cauchy data on a curve C:

  u|_C = \varphi(x), \qquad u_n|_C = \psi(x).  (3.6.7)

Here u_n is the normal derivative of u with respect to the curve C. The conditions imposed on C are as follows: C is described by y = f(x), where f(x) is a differentiable function, and every characteristic y - x = \text{const}, y + x = \text{const} intersects the curve C at most once (for this it is necessary that |f'(x)| \le 1).
Let P and Q be two points on C. Through P and Q we draw the two characteristics, which intersect each other at a point M. Consider the domain MPQ bounded by the arc PQ of the curve C and by the characteristic segments QM and MP. From the Green formula we have

  \iint_{MPQ} \big( v L[u] - u L^*[v] \big) \, d\xi \, d\eta
  = \int_Q^M (H \, d\eta - K \, d\xi) + \int_M^P (H \, d\eta - K \, d\xi) + \int_P^Q (H \, d\eta - K \, d\xi).  (3.6.8)

On QM and MP we have

  d\xi = -d\eta = -\frac{ds}{\sqrt{2}} \ \text{on } QM, \qquad d\xi = d\eta = -\frac{ds}{\sqrt{2}} \ \text{on } MP,

where s is the arc length along QM and MP. Thus

  \int_Q^M (H \, d\eta - K \, d\xi)
  = \int_Q^M \big[ (vu)_\xi - (2v_\xi - av) u \big] \, d\eta - \int_Q^M \big[ -(vu)_\eta + (2v_\eta + bv) u \big] \, d\xi
  = \int_Q^M \big[ (vu)_\xi \, d\eta + (vu)_\eta \, d\xi \big] - 2 \int_Q^M \big( v_\xi u \, d\eta + v_\eta u \, d\xi \big) + \int_Q^M \big( avu \, d\eta - bvu \, d\xi \big)
  = - \int_Q^M d(uv) + 2 \int_Q^M \left( \frac{\partial v}{\partial s} - \frac{a+b}{2\sqrt{2}} \, v \right) u \, ds
  = -(uv)_M + (uv)_Q + 2 \int_Q^M \left( \frac{\partial v}{\partial s} - \frac{a+b}{2\sqrt{2}} \, v \right) u \, ds.

Analogously,

  \int_M^P (H \, d\eta - K \, d\xi) = -(uv)_M + (uv)_P + 2 \int_P^M \left( \frac{\partial v}{\partial s} - \frac{b-a}{2\sqrt{2}} \, v \right) u \, ds.

The last two equalities and (3.6.8) yield

  (uv)_M = \frac{(uv)_P + (uv)_Q}{2} + \int_P^M \left( \frac{\partial v}{\partial s} - \frac{b-a}{2\sqrt{2}} \, v \right) u \, ds + \int_Q^M \left( \frac{\partial v}{\partial s} - \frac{b+a}{2\sqrt{2}} \, v \right) u \, ds
  + \frac{1}{2} \int_P^Q (H \, d\eta - K \, d\xi) - \frac{1}{2} \iint_{MPQ} \big( v L(u) - u L^*(v) \big) \, d\xi \, d\eta.  (3.6.9)

Let u be a solution of the problem (3.6.6), (3.6.7), and let v be the solution of the following problem depending on M as a parameter:

  L^*(v) = v_{\xi\xi} - v_{\eta\eta} - (av)_\xi - (bv)_\eta + cv = 0 \quad \text{in } MPQ,  (3.6.10)

  \frac{\partial v}{\partial s} = \frac{b-a}{2\sqrt{2}} \, v \ \text{on the characteristic } MP, \qquad
  \frac{\partial v}{\partial s} = \frac{b+a}{2\sqrt{2}} \, v \ \text{on the characteristic } MQ, \qquad
  v(M) = 1.  (3.6.11)

The conditions (3.6.11) yield

  v = \exp\left( \int_{s_0}^{s} \frac{b-a}{2\sqrt{2}} \, ds \right) \ \text{on } MP, \qquad
  v = \exp\left( \int_{s_0}^{s} \frac{b+a}{2\sqrt{2}} \, ds \right) \ \text{on } MQ,

where s_0 is the value of s at the point M. Thus we obtain a Darboux problem for v, which is uniquely solvable in MPQ. The function v is called the Riemann function. Hence, from (3.6.6) and (3.6.9) we have

  u(M) = \frac{(uv)_P + (uv)_Q}{2}
  + \frac{1}{2} \int_P^Q \big[ v (u_\xi \, d\eta + u_\eta \, d\xi) - u (v_\xi \, d\eta + v_\eta \, d\xi) + uv (a \, d\eta - b \, d\xi) \big]  (3.6.12)
  + \frac{1}{2} \iint_{MPQ} v f \, d\xi \, d\eta.
By this formula our problem (3.6.6), (3.6.7) is solved, since v is known in MPQ, and on C the quantities u, u_x and u_y are determined by the data. Indeed, u|_C = \varphi(x), and differentiating \varphi along C while using the normal derivative \psi gives the system

  u_x + f'(x) u_y = \varphi'(x), \qquad \frac{-f'(x) u_x + u_y}{\sqrt{1 + f'^2(x)}} = \psi(x),

whence

  u_x = \frac{\varphi'(x) - f'(x) \psi(x) \sqrt{1 + f'^2(x)}}{1 + f'^2(x)}, \qquad u_y = \frac{\varphi'(x) f'(x) + \psi(x) \sqrt{1 + f'^2(x)}}{1 + f'^2(x)}.

The formula (3.6.12) ensures the existence and uniqueness of the solution of the problem (3.6.6), (3.6.7). One can show also that the function u defined by (3.6.12) satisfies in fact the conditions of the problem. We do not pursue it in this lecture.
3.6.3 An application of the Riemann method

Consider the problem

  u_{yy} = u_{xx} + f_1(x,y), \quad -\infty < x < \infty, \ y > 0 \qquad \left( y = at, \ f_1 = \frac{f}{a^2} \right),
  u(x,0) = \varphi(x), \quad -\infty < x < \infty,
  u_y(x,0) = \psi_1(x), \quad \psi_1 = \frac{\psi}{a}, \quad -\infty < x < \infty.

The operator L(u) = u_{xx} - u_{yy} is self-adjoint, that is, L(u) = L^*(u). PQ is now an interval of the axis y = 0.

From (3.6.11) we see that v \equiv 1 in MPQ. Since d\eta = 0 on PQ, we get

  u(M) = \frac{u(P) + u(Q)}{2} + \frac{1}{2} \int_P^Q u_\eta \, d\xi + \frac{1}{2} \iint_{MPQ} f_1(\xi,\eta) \, d\xi \, d\eta.

Let the coordinates of M be (x,y); then the coordinates of P are (x-y, 0) and those of Q are (x+y, 0). It follows that

  u(x,y) = \frac{\varphi(x-y) + \varphi(x+y)}{2} + \frac{1}{2} \int_{x-y}^{x+y} \psi_1(\xi) \, d\xi + \frac{1}{2} \int_0^y \int_{x-(y-\eta)}^{x+(y-\eta)} f_1(\xi,\eta) \, d\xi \, d\eta.
Changing the variables (y = at, \eta = a\tau), we get

  u(x,t) = \frac{\varphi(x-at) + \varphi(x+at)}{2} + \frac{1}{2a} \int_{x-at}^{x+at} \psi(\xi) \, d\xi + \frac{1}{2a} \int_0^t \int_{x-a(t-\tau)}^{x+a(t-\tau)} f(\xi,\tau) \, d\xi \, d\tau.
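This final formula (d'Alembert's formula with a source term) is straightforward to evaluate numerically; the trapezoidal quadrature and the travelling-wave test below are our own illustrative choices:

```python
import math

def dalembert(phi, psi, f, a, x, t, m=200):
    # u(x,t) = (phi(x-at) + phi(x+at))/2 + (1/(2a)) * int_{x-at}^{x+at} psi
    #        + (1/(2a)) * int_0^t int_{x-a(t-tau)}^{x+a(t-tau)} f(xi,tau) dxi dtau,
    # evaluated with composite trapezoidal quadrature.
    def trapz(g, lo, hi):
        if hi <= lo:
            return 0.0
        h = (hi - lo) / m
        return h * (0.5 * (g(lo) + g(hi)) + sum(g(lo + j * h) for j in range(1, m)))
    u = 0.5 * (phi(x - a * t) + phi(x + a * t))
    u += trapz(psi, x - a * t, x + a * t) / (2.0 * a)
    u += trapz(lambda tau: trapz(lambda xi: f(xi, tau),
                                 x - a * (t - tau), x + a * (t - tau)),
               0.0, t) / (2.0 * a)
    return u

# Travelling-wave check: phi(x) = sin x, psi(x) = -a*cos x, f = 0 give the
# exact solution u(x,t) = sin(x - a*t).
a = 3.0
val = dalembert(math.sin, lambda x: -a * math.cos(x),
                lambda xi, tau: 0.0, a, 0.4, 0.9)
```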
Chapter 4 Parabolic Equations

In this chapter we shall consider the parabolic equation

  c \rho \frac{\partial u}{\partial t} = \frac{\partial}{\partial x} \left( k \frac{\partial u}{\partial x} \right),

where k is the thermal conductivity (Wärmeleitfähigkeit), c is the specific heat capacity (spezifische Wärmekapazität) and \rho is the density (die Dichte), in a simple form. Namely, we shall consider the heat equation

  u_t = a^2 u_{xx}, \qquad a^2 = \frac{k}{c\rho}.
4.1 Boundary conditions

In contrast to hyperbolic equations, for parabolic equations we need only one initial condition, the value of u(x,t) at the initial time t = t_0. Suppose that we consider our process on a bar [0, l].

a) At one end of the bar (x = 0 or x = l) the temperature is prescribed:

  u(0,t) = \mu(t) \quad (\text{or } u(l,t) = \mu_l(t)),

where \mu(t) (respectively \mu_l(t)) is a given function defined on [t_0, T]. Here T characterizes the time interval on which our heat transfer process is considered.

b) At one end of the interval [0, l] the derivative of u is given:

  \frac{\partial u}{\partial x}(l,t) = \nu(t).

Normally the heat flux is defined by

  Q(l,t) = k \left. \frac{\partial u}{\partial x} \right|_{x=l}.
Thus

  \frac{\partial u}{\partial x}(l,t) = \frac{Q(l,t)}{k} = \nu(t).

Note that the heat flux at x = 0 is

  Q(0,t) = -k \left. \frac{\partial u}{\partial x} \right|_{x=0}.

c) At one end one has the following relation between the solution and its derivative:

  \frac{\partial u}{\partial x}(l,t) = -\lambda \big[ u(l,t) - \theta(t) \big].

This corresponds to Newton's law of heat transfer between the boundary of the body and its outside environment, whose temperature \theta(t) is given. At x = 0 we have the analogous condition

  \frac{\partial u}{\partial x}(0,t) = \lambda \big[ u(0,t) - \theta(t) \big].

d) If the interval [0, l] is very long and the time interval [t_0, T] is small, then we may ignore the boundary conditions and consider the initial value problem for the parabolic equation in the domain -\infty < x < \infty, t \ge t_0, with the initial condition

  u(x, t_0) = \varphi(x) \quad (-\infty < x < \infty),

where \varphi(x) is a given function.

e) Similarly, if the temperature at one end of the interval is given and the other end is very far away, then we can consider our parabolic equation in the domain 0 \le x < \infty, t \ge t_0, with the conditions

  u(x, t_0) = \varphi(x) \quad (0 < x < \infty), \qquad u(0,t) = \mu(t) \quad (t \ge t_0).

Here \varphi and \mu are given functions.

f) There are also nonlinear boundary conditions, for example

  k \frac{\partial u}{\partial x}(0,t) = \sigma \big[ u^4(0,t) - \theta^4(0,t) \big].

This condition is called the Stefan-Boltzmann law.
Definition 4.1.1 A function u is called a solution of the first boundary value problem for a parabolic equation if

i) it is defined and continuous in the closed domain 0 \le x \le l, t_0 \le t \le T,
ii) it satisfies the parabolic equation in the domain 0 < x < l, t_0 < t < T,
iii) it satisfies the initial and boundary conditions

  u(x, t_0) = \varphi(x), \quad u(0,t) = \mu_1(t), \quad u(l,t) = \mu_2(t),

where \varphi(x), \mu_1(t) and \mu_2(t) are continuous functions with

  \varphi(0) = \mu_1(t_0) \ [= u(0,t_0)], \qquad \varphi(l) = \mu_2(t_0) \ [= u(l,t_0)].
Definition 4.1.2 If the condition iii) in Definition 4.1.1 is replaced by

  u(x, t_0) = \varphi(x), \quad \frac{\partial u}{\partial x}(0,t) = \nu_1(t), \quad \frac{\partial u}{\partial x}(l,t) = \nu_2(t),

then we have the second boundary value problem.

Definition 4.1.3 If the condition iii) in Definition 4.1.1 is replaced by

  u(x, t_0) = \varphi(x), \quad \frac{\partial u}{\partial x}(0,t) = \lambda \big[ u(0,t) - \theta_1(t) \big], \quad \frac{\partial u}{\partial x}(l,t) = -\lambda \big[ u(l,t) - \theta_2(t) \big],

then we have the third boundary value problem.

In the sequel we shall consider the following questions: i) Is the solution of our problems unique? ii) Does a solution exist? iii) Do the solutions depend continuously on the data?
4.2 The maximum principle

In the sequel we shall consider the equation with constant coefficients

  v_t = a^2 v_{xx} + \beta v_x + \gamma v.  (4.2.1)

If we make the transformation

  v = e^{\lambda x + \mu t} u, \qquad \lambda = -\frac{\beta}{2a^2}, \qquad \mu = \gamma - \frac{\beta^2}{4a^2},

then we obtain the equation

  u_t = a^2 u_{xx}.  (4.2.2)

Thus it suffices to consider the equation (4.2.2).

The maximum principle: Let a function u(x,t) be defined and continuous in the closed domain 0 \le t \le T, 0 \le x \le l, and satisfy the equation (4.2.2) in the open domain 0 < t < T, 0 < x < l. Then the function u(x,t) attains its maximum (minimum) at t = 0, or at x = 0, or at x = l.

Proof: Let M be the maximum of u(x,t) for t = 0 (0 \le x \le l), or for x = 0, x = l (0 \le t \le T). Assume, contrary to the assertion, that u(x,t) attains its maximum at some interior point (x_0, t_0), 0 < x_0 < l, 0 < t_0 \le T:

  u(x_0, t_0) = M + \varepsilon, \quad \varepsilon > 0.  (4.2.3)

Since 0 < x_0 < l and u(x,t) attains its maximum at (x_0, t_0), we have

  \frac{\partial u}{\partial x}(x_0, t_0) = 0, \qquad \frac{\partial^2 u}{\partial x^2}(x_0, t_0) \le 0.  (4.2.4)

Further, since the function u(x_0, t) attains its maximum at t_0 \in (0, T], we have

  \frac{\partial u}{\partial t}(x_0, t_0) \ge 0.  (4.2.5)

(In fact, \partial u/\partial t (x_0, t_0) = 0 if t_0 < T, and \partial u/\partial t (x_0, t_0) \ge 0 if t_0 = T.) We shall find another point (x_1, t_1) \in (0,l) \times (0,T] such that \partial^2 u/\partial x^2 (x_1, t_1) \le 0 and \partial u/\partial t (x_1, t_1) > 0. Consider the function

  v(x,t) = u(x,t) + k (t_0 - t),  (4.2.6)

where k is a constant. We have v(x_0, t_0) = u(x_0, t_0) = M + \varepsilon and |k(t_0 - t)| \le kT. Choose k > 0 such that kT < \varepsilon/2, that is, k < \varepsilon/(2T); then the maximum of v(x,t) for t = 0 (0 \le x \le l), or x = 0 (0 \le t \le T), or x = l (0 \le t \le T) is not greater than M + \varepsilon/2:

  v(x,t) \le M + \frac{\varepsilon}{2} \quad (\text{for } t = 0 \text{ or } x = 0, \ x = l),  (4.2.7)

since the first term on the right-hand side of (4.2.6) is not greater than M there and the second one is less than \varepsilon/2. Since the function v(x,t) is continuous in [0,l] \times [0,T], there is a point (x_1, t_1) \in [0,l] \times [0,T] where v(x,t) attains its maximum:

  v(x_1, t_1) \ge v(x_0, t_0) = M + \varepsilon.

From (4.2.7) we see that 0 < x_1 < l and 0 < t_1 \le T. It follows that

  v_{xx}(x_1, t_1) = u_{xx}(x_1, t_1) \le 0

and

  v_t(x_1, t_1) = u_t(x_1, t_1) - k \ge 0.

The last inequality says that u_t(x_1, t_1) \ge k > 0. Thus at (x_1, t_1) the function u(x,t) does not satisfy the equation (4.2.2); this contradicts the assumption of the maximum principle, and the first part of the maximum principle is proved. To prove the second part (the minimum) we note that the function u_1 = -u satisfies (4.2.2) and all other conditions of the maximum principle for the case of the maximum. □
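The maximum principle has a discrete counterpart that is easy to observe: for the explicit finite-difference scheme with r = a^2 \Delta t / \Delta x^2 \le 1/2 each new value is a convex combination of old values, so the discrete solution also attains its extrema at t = 0, x = 0 or x = l. A minimal sketch; the scheme and all parameters are our own choices:

```python
import math

def heat_ftcs(phi, mu1, mu2, a=1.0, l=1.0, T=0.5, nx=51, nt=5001):
    # Explicit finite differences for u_t = a^2 u_xx on [0,l] x [0,T].
    # For r <= 1/2 each new interior value is a convex combination of three
    # old values, which yields a discrete maximum principle.
    dx, dt = l / (nx - 1), T / (nt - 1)
    r = a * a * dt / (dx * dx)
    assert r <= 0.5, "discrete maximum principle needs r <= 1/2"
    u = [phi(i * dx) for i in range(nx)]
    rows = [list(u)]
    for n in range(1, nt):
        t = n * dt
        u = ([mu1(t)]
             + [u[i] + r * (u[i+1] - 2.0 * u[i] + u[i-1]) for i in range(1, nx - 1)]
             + [mu2(t)])
        rows.append(list(u))
    return rows

rows = heat_ftcs(lambda x: math.sin(math.pi * x), lambda t: 0.0, lambda t: 0.0)
overall_max = max(max(row) for row in rows)
boundary_max = max(max(rows[0]),
                   max(row[0] for row in rows),
                   max(row[-1] for row in rows))
```

Here overall_max never exceeds boundary_max, and the interior values decay in time, as the maximum principle predicts.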
4.3 Applications of the maximum principle

4.3.1 The uniqueness theorem

The uniqueness theorem: Let u_1(x,t) and u_2(x,t) be continuous functions defined on 0 \le x \le l, 0 \le t \le T, satisfying the equation

  u_t = a^2 u_{xx} + f(x,t), \quad 0 < x < l, \ t > 0,  (4.3.1)

and the same initial and boundary value conditions

  u_1(x,0) = u_2(x,0) = \varphi(x), \quad 0 \le x \le l,
  u_1(0,t) = u_2(0,t) = \mu_1(t), \quad 0 \le t \le T,
  u_1(l,t) = u_2(l,t) = \mu_2(t), \quad 0 \le t \le T.

Then u_1(x,t) \equiv u_2(x,t).

Proof: Let

  v(x,t) := u_1(x,t) - u_2(x,t).

Since u_1 and u_2 are continuous in [0,l] \times [0,T], the function v is also continuous there. Further, the function v satisfies the equation v_t = a^2 v_{xx} for 0 < x < l, t > 0. Thus v satisfies the conditions of the maximum principle; that is, v attains its maximum and its minimum at t = 0, or at x = 0, or at x = l. Since v(x,0) = 0, v(0,t) = 0 and v(l,t) = 0, we have v(x,t) \equiv 0. Thus u_1(x,t) \equiv u_2(x,t). □
4.3.2 Comparison of solutions

a) Let u₁(x,t) and u₂(x,t) be solutions of the heat equation with

    u₁(x,0) ≤ u₂(x,0),  u₁(0,t) ≤ u₂(0,t),  u₁(l,t) ≤ u₂(l,t).

Then

    u₁(x,t) ≤ u₂(x,t)

for (x,t) ∈ [0,l] × [0,T]. In fact, the function v = u₂ − u₁ satisfies the heat equation, and v(x,0) ≥ 0, v(0,t) ≥ 0, v(l,t) ≥ 0. From the maximum principle we have

    v(x,t) ≥ 0  for 0 < x < l, 0 < t ≤ T.

b) If solutions u̲(x,t), u(x,t) and ū(x,t) of the heat equation satisfy the inequalities

    u̲(x,t) ≤ u(x,t) ≤ ū(x,t)  for t = 0, x = 0 and x = l,

then these inequalities remain valid for all (x,t) ∈ [0,l] × [0,T]. This statement follows immediately from a).

c) Let u₁(x,t) and u₂(x,t) be two solutions of the heat equation which satisfy the inequality

    |u₁(x,t) − u₂(x,t)| ≤ ε  for t = 0, x = 0, x = l.

Then

    |u₁(x,t) − u₂(x,t)| ≤ ε  ∀ (x,t) ∈ [0,l] × [0,T].

The proof of this statement follows immediately from b) with u̲(x,t) = −ε, u(x,t) = u₁(x,t) − u₂(x,t), ū(x,t) = ε.

Now we turn to the question of the stability of the solution of the heat equation. Let u(x,t) be the solution of the first boundary value problem for the heat equation with the initial and boundary value conditions

    u(x,0) = φ(x),  u(0,t) = μ₁(t),  u(l,t) = μ₂(t).

Let these functions be given approximately by φ*(x), μ₁*(t) and μ₂*(t), respectively:

    |φ(x) − φ*(x)| ≤ ε,  |μ₁(t) − μ₁*(t)| ≤ ε,  |μ₂(t) − μ₂*(t)| ≤ ε,

such that there is a solution u*(x,t) of the heat equation with these data. Then

    |u*(x,t) − u(x,t)| ≤ ε.
4.3.3 The uniqueness theorem in unbounded domains

Consider the problem

    u_t = a² u_xx,  −∞ < x < ∞, t > 0.   (4.3.2)

Uniqueness Theorem: Let u₁(x,t) and u₂(x,t) be two solutions of (4.3.2) which are continuous and bounded by M:

    |u₁(x,t)| < M,  |u₂(x,t)| < M  for −∞ < x < ∞, t ≥ 0.

If u₁(x,0) = u₂(x,0) for −∞ < x < ∞, then

    u₁(x,t) ≡ u₂(x,t),  −∞ < x < ∞, t ≥ 0.

Proof: Let v(x,t) = u₁(x,t) − u₂(x,t). The function v(x,t) is continuous, satisfies the heat equation and is bounded by 2M:

    |v(x,t)| ≤ |u₁(x,t)| + |u₂(x,t)| < 2M,  −∞ < x < ∞, t ≥ 0.

Furthermore, v(x,0) = 0. We shall apply the maximum principle to prove that v(x,t) = 0 for −∞ < x < ∞, t ≥ 0. Let L be a positive number. Consider the function

    V(x,t) = (4M/L²)(x²/2 + a²t).   (4.3.3)

The function V(x,t) is continuous and satisfies the heat equation (4.3.2). Furthermore,

    V(x,0) ≥ |v(x,0)| = 0,  V(±L,t) ≥ 2M ≥ |v(±L,t)|.

Applying the maximum principle in the domain |x| ≤ L, we get

    −(4M/L²)(x²/2 + a²t) ≤ v(x,t) ≤ (4M/L²)(x²/2 + a²t).

Let (x,t) be fixed, and let L tend to infinity; we get v(x,t) = 0. Thus, v(x,t) ≡ 0. □
4.4 The Fourier method

In this section we shall apply the Fourier method to the following problem:

    u_t = a² u_xx + f(x,t),  0 < x < l, 0 < t ≤ T,   (4.4.1)
    u(x,0) = φ(x),  0 ≤ x ≤ l,   (4.4.2)
    u(0,t) = μ₁(t),  u(l,t) = μ₂(t),  0 ≤ t ≤ T.   (4.4.3)

4.4.1 The homogeneous problem

Find the solution of the problem

    u_t = a² u_xx,  0 < x < l, 0 < t ≤ T,   (4.4.4)
    u(x,0) = φ(x),  0 ≤ x ≤ l,   (4.4.5)
    u(0,t) = u(l,t) = 0,  0 ≤ t ≤ T.   (4.4.6)

To do this we shall find a solution of the equation u_t = a² u_xx which is not identically vanishing and satisfies the boundary conditions

    u(0,t) = 0,  u(l,t) = 0.   (4.4.7)

Furthermore, this solution can be represented in the form

    u(x,t) = X(x) T(t),   (4.4.8)

where X(x) is a function depending only on x and T(t) is a function depending only on t. Putting (4.4.8) into (4.4.4) and then dividing both sides by a²XT, we get

    (1/a²)(T′/T) = X″/X = −λ,   (4.4.9)

where λ = const, since T′/T depends only on t and X″/X depends only on x. This leads us to consider the equations

    X″ + λX = 0,   (4.4.10)
    T′ + a²λT = 0.   (4.4.11)

Because of (4.4.7) we have

    X(0) = X(l) = 0.   (4.4.12)

Thus, we obtain the following eigenvalue problem for X(x):

    X″ + λX = 0,  X(0) = 0,  X(l) = 0.   (4.4.13)

We have proved that the eigenvalues of this problem are

    λ_n = (nπ/l)²,  n = 1, 2, 3, …,   (4.4.14)
and the eigenfunctions associated with them are

    X_n(x) = sin(nπx/l).   (4.4.15)

With these values λ_n the solutions of (4.4.11) have the form

    T_n(t) = c_n e^{−a²λ_n t},   (4.4.16)

where the c_n will be defined later. We see that the function

    u_n(x,t) = X_n(x) T_n(t) = c_n e^{−a²λ_n t} X_n(x)   (4.4.17)

is a special solution of (4.4.4) which satisfies the homogeneous boundary conditions (4.4.6). Consider the formal series

    u(x,t) = Σ_{n=1}^∞ c_n e^{−(nπa/l)² t} sin(nπx/l).   (4.4.18)

The function u(x,t) satisfies (4.4.6), since every term of the series does so. If u(x,t) also satisfies the initial condition, then

    φ(x) = u(x,0) = Σ_{n=1}^∞ c_n sin(nπx/l).   (4.4.19)

Thus, the c_n are the Fourier coefficients of φ(x) when it is expanded on (0,l):

    c_n = φ_n = (2/l) ∫₀ˡ φ(ξ) sin(nπξ/l) dξ.   (4.4.20)

The series (4.4.18) is now completely defined. We shall find conditions which guarantee the convergence of the series and, furthermore, which allow us to differentiate (4.4.18) termwise twice w.r.t. x and once w.r.t. t. We shall prove that the series

    Σ_{n=1}^∞ ∂u_n/∂t  and  Σ_{n=1}^∞ ∂²u_n/∂x²

converge uniformly for t ≥ t̄ > 0 (t̄ fixed). For this, we note that

    ∂u_n/∂t = −c_n (π²a²n²/l²) e^{−(nπa/l)² t} sin(nπx/l),
    |∂u_n/∂t| < |c_n| (π²a²n²/l²) e^{−(nπa/l)² t̄}  for t ≥ t̄.

Let φ be bounded: |φ(x)| < M. Then

    |c_n| = |(2/l) ∫₀ˡ φ(ξ) sin(nπξ/l) dξ| < 2M.
Hence

    |∂u_n/∂t| < 2M (π²a²n²/l²) e^{−(nπa/l)² t̄},  t ≥ t̄,

and

    |∂²u_n/∂x²| < 2M (π²n²/l²) e^{−(nπa/l)² t̄},  t ≥ t̄.

Generally, for the derivative of order k in t and order s in x,

    |∂^{k+s} u_n / ∂t^k ∂x^s| < 2M (π/l)^{2k+s} a^{2k} n^{2k+s} e^{−(nπa/l)² t̄},  t ≥ t̄.

Consider the majorant series Σ_{n=1}^∞ α_n, where

    α_n = A n^q e^{−(nπa/l)² t̄}.   (4.4.21)

Series of this kind converge by the d'Alembert (ratio) criterion:

    lim_{n→∞} α_{n+1}/α_n = lim_{n→∞} ((n+1)/n)^q e^{−(πa/l)²(2n+1) t̄} = 0.

Thus, the series (4.4.18) is infinitely differentiable for t ≥ t̄ > 0. Since t̄ is arbitrary we can say that u(x,t) is infinitely differentiable for t > 0; furthermore, it is clear that for t > 0 it satisfies the equation (4.4.4).
Theorem 4.4.1 If φ is continuous and piecewise differentiable and φ(0) = φ(l) = 0, then the series

    u(x,t) = Σ_{n=1}^∞ c_n e^{−(nπa/l)² t} sin(nπx/l)   (4.4.22)

is a continuous function for t ≥ 0.

In fact, the inequalities

    |u_n(x,t)| ≤ |c_n|  (t ≥ 0, 0 ≤ x ≤ l)

yield the uniform convergence of the series (4.4.18) for t ≥ 0, 0 ≤ x ≤ l. For t ≥ t̄ > 0 we have proved this already; for t = 0 it follows directly, because φ(0) = φ(l) = 0 and φ is piecewise differentiable, so that Σ |c_n| converges.
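As a sanity check of formulas (4.4.18) and (4.4.20), the following sketch (an illustration, not part of the notes) evaluates the series solution for the test profile φ(x) = x(l − x), computing the coefficients c_n by a simple trapezoid rule.

```python
import math

# Series solution (4.4.18) with coefficients from (4.4.20) for
# phi(x) = x(l - x).  For this phi the coefficients are also known in
# closed form: c_n = 8 l^2 / (pi^3 n^3) for odd n, 0 for even n.
a, l, N, quad_pts = 1.0, 1.0, 60, 2000

def coeff(n):
    # (2/l) * integral_0^l phi(xi) sin(n pi xi / l) d xi  (trapezoid rule;
    # the endpoint terms vanish because phi(0) = phi(l) = 0)
    h = l / quad_pts
    s = sum(x * (l - x) * math.sin(n * math.pi * x / l)
            for x in (i * h for i in range(1, quad_pts)))
    return 2.0 / l * h * s

def u(x, t):
    return sum(coeff(n) * math.exp(-(n * math.pi * a / l) ** 2 * t)
               * math.sin(n * math.pi * x / l) for n in range(1, N + 1))

# at t = 0 the series reproduces phi; for t > 0 only the n = 1 mode survives
assert abs(u(0.5, 0.0) - 0.25) < 1e-3
assert abs(u(0.5, 0.1) - coeff(1) * math.exp(-(math.pi * a / l) ** 2 * 0.1)) < 1e-4
```

The second assertion reflects the exponential decay e^{−(nπa/l)²t}: already at t = 0.1 the n = 3 term is smaller than the n = 1 term by a factor of about e^{−8π²·0.1}.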
4.4.2 The Green function

From (4.4.22) we have

    u(x,t) = Σ_{n=1}^∞ c_n e^{−(nπa/l)² t} sin(nπx/l)
           = Σ_{n=1}^∞ [ (2/l) ∫₀ˡ φ(ξ) sin(nπξ/l) dξ ] e^{−(nπa/l)² t} sin(nπx/l)
           = ∫₀ˡ [ (2/l) Σ_{n=1}^∞ e^{−(nπa/l)² t} sin(nπx/l) sin(nπξ/l) ] φ(ξ) dξ.

We can change the order of summation and integration for t > 0, since the series

    Σ_{n=1}^∞ e^{−(nπa/l)² t} sin(nπx/l) sin(nπξ/l)  (t > 0)

converges uniformly w.r.t. ξ. Set

    G(x,ξ,t) = (2/l) Σ_{n=1}^∞ e^{−(nπa/l)² t} sin(nπx/l) sin(nπξ/l).   (4.4.23)

The function u(x,t) can then be written in the form

    u(x,t) = ∫₀ˡ G(x,ξ,t) φ(ξ) dξ.   (4.4.24)

The function G(x,ξ,t) is called the Green function. It has a physical meaning. We shall prove that G(x,ξ,t), as a function of x, represents the temperature in the bar 0 ≤ x ≤ l at time t, if the temperature at the initial moment t = 0 equals zero while at the same moment a heat quantity Q (which will be determined later) is produced at the point x = ξ, and at the ends of the bar (x = 0, x = l) the temperature remains zero. The notion "a heat quantity Q is produced at the point ξ" means that it is produced in a sufficiently small neighbourhood (ξ − ε, ξ + ε) of ξ. The temperature change φ_ε(η) which produces the heat quantity at the point ξ is equal to zero outside the interval (ξ − ε, ξ + ε) and, inside this interval, φ_ε(η) is a positive, continuous and differentiable function such that

    ∫_{ξ−ε}^{ξ+ε} cρ φ_ε(η) dη = Q   (4.4.25)

(here c is the specific heat and ρ the density of the bar). Note that the left-hand side of (4.4.25) represents the heat quantity produced by φ_ε. The temperature in this case is

    u_ε(x,t) = ∫₀ˡ G(x,η,t) φ_ε(η) dη.   (4.4.26)
Since G(x,η,t) for t > 0 is a continuous function of η, by the mean value theorem we have

    u_ε(x,t) = ∫_{ξ−ε}^{ξ+ε} G(x,η,t) φ_ε(η) dη = G(x,ξ*,t) ∫_{ξ−ε}^{ξ+ε} φ_ε(η) dη = G(x,ξ*,t) Q/(cρ),   (4.4.27)

where ξ* is a point lying between ξ − ε and ξ + ε. Letting ε → 0, and taking into account that G(x,η,t) is a continuous function of η for t > 0, we get

    lim_{ε→0} u_ε(x,t) = (Q/(cρ)) G(x,ξ,t)
                       = (Q/(cρ)) (2/l) Σ_{n=1}^∞ e^{−(nπa/l)² t} sin(nπx/l) sin(nπξ/l).   (4.4.28)

(Here we suppose that the limit of ∫_{ξ−ε}^{ξ+ε} φ_ε(η) dη as ε → 0 exists and equals Q/(cρ).) Thus, G(x,ξ,t) represents the temperature influence at the instant t of a heat pole of intensity Q = cρ, acting at t = 0 at the point ξ ∈ (0,l).

We shall prove that G(x,ξ,t) ≥ 0 for any x and t > 0. Consider the function φ_ε(η) defined above. Since φ_ε(η) > 0 for η ∈ (ξ − ε, ξ + ε) and u_ε(0,t) = u_ε(l,t) = 0, the maximum principle yields

    u_ε(x,t) ≥ 0,  0 ≤ x ≤ l, t > 0.

Thus,

    u_ε(x,t) = G(x,ξ*,t) Q/(cρ) ≥ 0  (t > 0).

Letting ε → 0, we obtain

    G(x,ξ,t) ≥ 0,  0 ≤ x ≤ l, t > 0.
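The two structural properties just derived, nonnegativity of G and (from the defining series) its symmetry in x and ξ, can be checked numerically. The following sketch (an illustration, not from the notes) truncates the series (4.4.23), which converges very fast for t > 0.

```python
import math

# Truncated Green function series (4.4.23): check symmetry
# G(x, xi, t) = G(xi, x, t) and nonnegativity for t > 0.
l, a, N = 1.0, 1.0, 400

def G(x, xi, t):
    return (2.0 / l) * sum(
        math.exp(-(n * math.pi * a / l) ** 2 * t)
        * math.sin(n * math.pi * x / l) * math.sin(n * math.pi * xi / l)
        for n in range(1, N + 1))

pts = [0.1, 0.3, 0.5, 0.8]
for x in pts:
    for xi in pts:
        assert abs(G(x, xi, 0.01) - G(xi, x, 0.01)) < 1e-12   # symmetry
        assert G(x, xi, 0.01) > -1e-9                          # nonnegativity
```

Symmetry holds term by term, since each summand contains the product sin(nπx/l)·sin(nπξ/l); positivity is exactly the maximum-principle argument of the text.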
4.4.3 Boundary value problems with non-smooth initial conditions

We have considered our first boundary value problem in the class of functions continuous in the closed domain [0,l] × [0,T]. This is somewhat restrictive: for example, if u(x,0) ≡ u₀ ≠ 0 while the boundary values vanish, then the solution must be discontinuous at the points (0,0) and (l,0). This leads to the idea of considering our problem for non-smooth initial conditions.
Theorem 4.4.2 Let φ be a continuous function defined on [0,l] with φ(0) = φ(l) = 0. Then the solution of the equation

    u_t = a² u_xx  (0 < x < l, t > 0),   (4.4.4)

which is continuous in [0,l] × [0,T] and satisfies the conditions

    u(0,t) = u(l,t) = 0,  t ∈ [0,T],   (4.4.6)
    u(x,0) = φ(x),  x ∈ [0,l],   (4.4.5)

is unique and can be represented by

    u(x,t) = ∫₀ˡ G(x,ξ,t) φ(ξ) dξ.   (4.4.24)

Proof: The case where φ is continuous and piecewise differentiable is already proved. Consider a sequence of continuous, piecewise differentiable functions φ_n(x) (φ_n(0) = φ_n(l) = 0) which converges uniformly to φ(x) (as φ_n(x) we can take the piecewise linear function which coincides with φ(x) at the points kl/n, k = 0, 1, …, n). For the φ_n(x) we can determine u_n(x,t) by (4.4.24), since φ_n(x) is continuous and piecewise differentiable. The functions u_n(x,t) converge uniformly. In fact, for ε > 0 there exists an n(ε) such that |φ_{n₁}(x) − φ_{n₂}(x)| < ε (0 ≤ x ≤ l) for all n₁, n₂ ≥ n(ε), since the φ_n converge uniformly to φ(x). It then follows by the maximum principle that

    |u_{n₁}(x,t) − u_{n₂}(x,t)| < ε  (0 ≤ x ≤ l, 0 ≤ t ≤ T)  if n₁, n₂ ≥ n(ε).

Thus, {u_n(x,t)} converges uniformly to a function u(x,t). Since the u_n(x,t) are continuous, so is u(x,t). For any fixed (x,t) ∈ [0,l] × [0,T] we have

    u(x,t) = lim_{n→∞} u_n(x,t) = lim_{n→∞} ∫₀ˡ G(x,ξ,t) φ_n(ξ) dξ = ∫₀ˡ G(x,ξ,t) φ(ξ) dξ.

Thus, the function u(x,t) is continuous and satisfies the condition (4.4.5). Further, as we have proved in § 4.4.1, this function also satisfies (4.4.6) and the equation (4.4.4) for t > 0. The uniqueness follows immediately from the maximum principle. □
Theorem 4.4.3 Let φ be a piecewise continuous function. Then the function u(x,t) defined by (4.4.24) represents a solution of the equation (4.4.4), satisfies the boundary conditions (4.4.6), and attains the initial condition (4.4.5) at every point where φ is continuous.

Proof: First, let φ be linear:

    φ(x) = cx.

Consider the sequence

    φ_n(x) = { cx,              0 ≤ x ≤ l(1 − 1/n),
               c(n − 1)(l − x), l(1 − 1/n) ≤ x ≤ l,    n ∈ ℕ.   (4.4.29)
[Figure: graph of φ_n — the line φ = cx on [0, l(1 − 1/n)], dropping linearly to 0 at x = l.]

The functions φ_n(x) are continuous and φ_n(0) = φ_n(l) = 0; hence the functions u_n(x,t) defined by (4.4.24) for φ_n are the solutions of (4.4.4) which satisfy (4.4.6) and

    u_n(x,0) = φ_n(x).

Since

    φ_n(x) ≤ φ_{n+1}(x)  (0 ≤ x ≤ l),

the maximum principle gives

    u_n(x,t) ≤ u_{n+1}(x,t).

The function U₀(x) = cx is a continuous solution of the heat equation. The maximum principle now gives u_n(x,t) ≤ U₀(x), as this is valid for t = 0, x = 0 and x = l. The sequence {u_n(x,t)} is monotonically increasing and bounded above by the bounded function U₀(x), so it is convergent. We have

    u(x,t) = lim_{n→∞} u_n(x,t) = lim_{n→∞} ∫₀ˡ G(x,ξ,t) φ_n(ξ) dξ = ∫₀ˡ G(x,ξ,t) φ(ξ) dξ ≤ U₀(x),

because we can take the limit process under the integral. In § 4.4.1 we have proved that u(0,t) = u(l,t) = 0 and that u(x,t) satisfies (4.4.4) for t > 0. It remains to show that it is continuous at t = 0 for 0 ≤ x < l. Let x₀ < l. We choose n such that x₀ < l(1 − 1/n). In this case φ_n(x₀) = U₀(x₀). Noting that
    u_n(x,t) ≤ u(x,t) ≤ U₀(x)

and

    lim_{x→x₀, t→0} u_n(x,t) = φ_n(x₀) = lim_{x→x₀} U₀(x) = φ(x₀),

we conclude that the limit

    lim_{x→x₀, t→0} u(x,t) = φ(x₀)

exists and is independent of how x → x₀ and t → 0. This implies the continuity of u(x,t) at (x₀,0). Moreover, this function is bounded by U₀(x). Thus, the theorem is proved for φ(x) = cx. Changing x into l − x, we see that the theorem is also proved for φ(x) = b(l − x).
Thus, the theorem is correct for functions of the form φ(x) = B + Ax. Further, the theorem is correct for any continuous function φ(x) which need not satisfy the condition φ(0) = φ(l) = 0. In fact, any such function can be represented in the form

    φ(x) = φ(0) + (x/l)(φ(l) − φ(0)) + ψ(x),

where ψ(x) is a continuous function which vanishes at the ends of the interval: ψ(0) = ψ(l) = 0. From the superposition principle we see that the theorem is proved as far as continuity is concerned.

Let φ now be a piecewise continuous function; we shall prove that the function u(x,t) defined by (4.4.24) gives a solution of (4.4.4) and satisfies (4.4.6). Suppose that x₀ is a point where φ is continuous. We shall prove that for any positive ε there exists a δ(ε) such that if |x − x₀| < δ(ε) and t < δ(ε), then |u(x,t) − φ(x₀)| < ε. Because φ is continuous at x₀, there exists an η(ε) such that

    |φ(x) − φ(x₀)| ≤ ε/2  for |x − x₀| < η(ε).

It follows that

    φ(x₀) − ε/2 ≤ φ(x) ≤ φ(x₀) + ε/2  for |x − x₀| < η(ε).   (4.4.30)

Let φ̄(x) and φ̲(x) be continuously differentiable functions such that

    φ̄(x) = φ(x₀) + ε/2 for |x − x₀| < η(ε),  φ̄(x) ≥ φ(x) for |x − x₀| ≥ η(ε),   (4.4.31)
    φ̲(x) = φ(x₀) − ε/2 for |x − x₀| < η(ε),  φ̲(x) ≤ φ(x) for |x − x₀| ≥ η(ε).   (4.4.32)

From (4.4.30) we have

    φ̲(x) ≤ φ(x) ≤ φ̄(x).   (4.4.33)

Consider the functions

    ū(x,t) = ∫₀ˡ G(x,ξ,t) φ̄(ξ) dξ,  u̲(x,t) = ∫₀ˡ G(x,ξ,t) φ̲(ξ) dξ.

Since φ̄(x) and φ̲(x) are continuous, ū(x,t) and u̲(x,t) are continuous at x₀. It follows that there exists a δ(ε) (which we may take ≤ η(ε)) such that

    |ū(x,t) − φ̄(x)| ≤ ε/2  and  |u̲(x,t) − φ̲(x)| ≤ ε/2  for |x − x₀| < δ(ε), t < δ(ε).

It means

    ū(x,t) ≤ φ̄(x) + ε/2 = φ(x₀) + ε  and  u̲(x,t) ≥ φ̲(x) − ε/2 = φ(x₀) − ε
    for |x − x₀| < δ(ε), t < δ(ε).

Since the function G(x,ξ,t) is nonnegative, the inequalities (4.4.33) yield

    u̲(x,t) ≤ u(x,t) ≤ ū(x,t).   (4.4.34)

Consequently,

    φ(x₀) − ε ≤ u(x,t) ≤ φ(x₀) + ε  for |x − x₀| < δ(ε), t < δ(ε),

that is,

    |u(x,t) − φ(x₀)| ≤ ε  for |x − x₀| < δ(ε), t < δ(ε).

Furthermore, (4.4.34) also yields the boundedness of u(x,t). □
4.4.4 The non-homogeneous heat equation

We consider now the non-homogeneous heat equation

    u_t = a² u_xx + f(x,t),  0 < x < l, 0 < t ≤ T,   (4.4.1)

with the initial condition

    u(x,0) = 0,  0 ≤ x ≤ l,   (4.4.35)

and the boundary conditions

    u(0,t) = u(l,t) = 0,  0 < t ≤ T.   (4.4.6)

We shall find a solution of this problem in the form

    u(x,t) = Σ_{n=1}^∞ u_n(t) sin(nπx/l).   (4.4.36)

In order to determine u(x,t), we have to find the u_n(t). For this purpose we represent f(x,t) in the form

    f(x,t) = Σ_{n=1}^∞ f_n(t) sin(nπx/l),

where

    f_n(t) = (2/l) ∫₀ˡ f(ξ,t) sin(nπξ/l) dξ.   (4.4.37)

Putting (4.4.36) and (4.4.37) into (4.4.1), we get

    Σ_{n=1}^∞ sin(nπx/l) [ (nπa/l)² u_n(t) + u̇_n(t) − f_n(t) ] = 0.

It follows that

    u̇_n(t) = −(nπa/l)² u_n(t) + f_n(t).   (4.4.38)

On the other hand,

    u(x,0) = Σ_{n=1}^∞ u_n(0) sin(nπx/l) = 0,

and therefore

    u_n(0) = 0.   (4.4.39)

Solving (4.4.38) with the initial condition (4.4.39) we obtain

    u_n(t) = ∫₀ᵗ e^{−(nπa/l)²(t−τ)} f_n(τ) dτ.   (4.4.40)

Hence,

    u(x,t) = Σ_{n=1}^∞ [ ∫₀ᵗ e^{−(nπa/l)²(t−τ)} f_n(τ) dτ ] sin(nπx/l).   (4.4.41)

Taking (4.4.37) into account, we can write

    u(x,t) = ∫₀ᵗ ∫₀ˡ { (2/l) Σ_{n=1}^∞ e^{−(nπa/l)²(t−τ)} sin(nπx/l) sin(nπξ/l) } f(ξ,τ) dξ dτ
           =: ∫₀ᵗ ∫₀ˡ G(x,ξ,t−τ) f(ξ,τ) dξ dτ,   (4.4.42)

where

    G(x,ξ,t−τ) = (2/l) Σ_{n=1}^∞ e^{−(nπa/l)²(t−τ)} sin(nπx/l) sin(nπξ/l)   (4.4.43)

is in fact our Green function defined by (4.4.23). One can give (4.4.42) a physical meaning similar to that of (4.4.24); however, we omit it here.
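Formula (4.4.40) can be checked directly in a simple case. The following sketch (an illustration with an arbitrarily chosen constant mode, not from the notes) evaluates the integral numerically and compares it with the closed form one obtains for f_n ≡ 1.

```python
import math

# For a constant mode f_n(t) = 1, formula (4.4.40),
#   u_n(t) = int_0^t exp(-lam (t - tau)) f_n(tau) d tau,  lam = (n pi a / l)^2,
# integrates in closed form to (1 - exp(-lam t)) / lam.
a, l, n, t = 1.0, 1.0, 2, 0.3
lam = (n * math.pi * a / l) ** 2

M = 4000
h = t / M
# midpoint rule for the convolution integral
quad = h * sum(math.exp(-lam * (t - (j + 0.5) * h)) for j in range(M))
closed = (1.0 - math.exp(-lam * t)) / lam
assert abs(quad - closed) < 1e-6
```

Differentiating the closed form reproduces the ODE (4.4.38) with the initial condition (4.4.39), which is exactly what the variation-of-constants formula asserts.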
4.4.5 The non-homogeneous first boundary value problem

Find the solution of the problem

    u_t = a² u_xx + f(x,t),  0 < x < l, 0 < t ≤ T,   (4.4.1)
    u(x,0) = φ(x),  0 ≤ x ≤ l,   (4.4.2)
    u(0,t) = μ₁(t),  u(l,t) = μ₂(t),  0 < t ≤ T.   (4.4.3)

In order to solve this problem, we reduce it to a problem with homogeneous boundary conditions. In fact, consider the function

    U(x,t) = μ₁(t) + (x/l)[μ₂(t) − μ₁(t)].   (4.4.44)

The function U satisfies the boundary conditions

    U(0,t) = μ₁(t)  and  U(l,t) = μ₂(t),

and the equation

    U_t = a² U_xx + f₁(x,t),  where f₁(x,t) = U_t(x,t) − a² U_xx(x,t) = μ̇₁(t) + (x/l)[μ̇₂(t) − μ̇₁(t)],

since U_xx ≡ 0. It is clear now that the function v(x,t) = u(x,t) − U(x,t) is a solution of the problem

    v_t = a² v_xx + (f − f₁),  v(0,t) = v(l,t) = 0,  v(x,0) = φ(x) − U(x,0).

The solution v can be found by the Fourier method.
4.5 Problems on unbounded domains

4.5.1 The Green function in unbounded domains

We have derived the Green function

    G_l(x,ξ,t) := (2/l) Σ_{n=1}^∞ e^{−(nπa/l)² t} sin(nπx/l) sin(nπξ/l)   (4.5.1)

for the problem on the finite domain (0,l). Here we have used the subindex l to indicate the domain (0,l). We want to develop this theory when the domain is expanded to the whole of ℝ. In doing so, we rewrite (4.5.1) in another form, such that the ends of the bar are −l/2 and l/2. Let x′ = x − l/2, ξ′ = ξ − l/2. Then x′ and ξ′ lie in (−l/2, l/2) and the Green function has the form

    G_l(x′,ξ′,t) = (2/l) Σ_{n=1}^∞ e^{−(nπa/l)² t} sin( (nπ/l)(x′ + l/2) ) sin( (nπ/l)(ξ′ + l/2) ).   (4.5.2)

We rewrite (4.5.2) by the following arguments. If n is even, that is n = 2m, then

    sin( (2mπ/l)(x′ + l/2) ) sin( (2mπ/l)(ξ′ + l/2) ) = sin(2mπx′/l) sin(2mπξ′/l).

Further, if n is odd, that is n = 2m + 1, then

    sin( ((2m+1)π/l)(x′ + l/2) ) sin( ((2m+1)π/l)(ξ′ + l/2) ) = cos((2m+1)πx′/l) cos((2m+1)πξ′/l).

Thus,

    G_l(x′,ξ′,t) = (2/l) Σ″ e^{−(nπa/l)² t} sin(nπx′/l) sin(nπξ′/l)
                 + (2/l) Σ′ e^{−(nπa/l)² t} cos(nπx′/l) cos(nπξ′/l),   (4.5.3)

where Σ″ runs over the even n and Σ′ over the odd n. We study the limit of G_l(x′,ξ′,t) as l → ∞. We have

    (2/l) Σ″ e^{−λ_n² a² t} sin(λ_n x′) sin(λ_n ξ′) = (1/π) Σ″ f₁(λ_n) Δλ,   (4.5.4)

where

    f₁(λ) = e^{−λ² a² t} sin(λx′) sin(λξ′),  λ_n = nπ/l,  Δλ = 2π/l

(the spacing between consecutive even n). The sum (4.5.4) is an integral sum for the function f₁(λ) over the interval 0 ≤ λ < ∞. For l → ∞, Δλ → 0. Taking the limit under the integral sign, we obtain

    lim_{Δλ→0} (1/π) Σ″ f₁(λ_n) Δλ = (1/π) ∫₀^∞ f₁(λ) dλ = (1/π) ∫₀^∞ e^{−λ² a² t} sin(λx′) sin(λξ′) dλ.   (4.5.5)

Remark 4.5.1 Here we have used the following result: for a continuous function f defined on [0,∞), if the integral sums

    Σ_i f(λ_i*)(λ_i − λ_{i−1})  (λ_{i−1} ≤ λ_i* ≤ λ_i)

converge for any choice of the λ_i*, then the integral ∫₀^∞ f(λ) dλ exists and is their limit.

Analogously,

    (2/l) Σ′ e^{−λ_n² a² t} cos(λ_n x′) cos(λ_n ξ′) = (1/π) Σ′ f₂(λ_n) Δλ,   (4.5.6)

where

    f₂(λ) = e^{−λ² a² t} cos(λx′) cos(λξ′),  λ_n = nπ/l,  Δλ = 2π/l.

As Δλ → 0, we get

    lim_{Δλ→0} (1/π) Σ′ f₂(λ_n) Δλ = (1/π) ∫₀^∞ f₂(λ) dλ = (1/π) ∫₀^∞ e^{−λ² a² t} cos(λx′) cos(λξ′) dλ.   (4.5.7)

Finally, we have

    G(x,ξ,t) = lim_{l→∞} G_l(x,ξ,t)
             = (1/π) ∫₀^∞ e^{−λ² a² t} sin(λx) sin(λξ) dλ + (1/π) ∫₀^∞ e^{−λ² a² t} cos(λx) cos(λξ) dλ
             = (1/π) ∫₀^∞ e^{−λ² a² t} cos λ(x − ξ) dλ.

The Green function for the unbounded domain (−∞,∞) has thus the form

    G(x,ξ,t) = (1/π) ∫₀^∞ e^{−λ² a² t} cos λ(x − ξ) dλ.   (4.5.8)

We now calculate the integral
    I = ∫₀^∞ e^{−αλ²} cos(βλ) dλ  (α > 0),   (4.5.9)

where the parameters α and β are given. In order to calculate this integral, we fix α and vary β, so that I is a function of β: I(β). It is clear that

    dI/dβ = −∫₀^∞ λ e^{−αλ²} sin(βλ) dλ
          = [ (1/(2α)) sin(βλ) e^{−αλ²} ]₀^∞ − (β/(2α)) ∫₀^∞ e^{−αλ²} cos(βλ) dλ
          = −(β/(2α)) I(β).

Thus,

    I′(β)/I(β) = −β/(2α).

Hence

    I(β) = C e^{−β²/(4α)}.

On the other hand,

    I(0) = ∫₀^∞ e^{−αλ²} dλ = (1/√α) ∫₀^∞ e^{−z²} dz = √π/(2√α).

Thus,

    I(β) = ∫₀^∞ e^{−αλ²} cos(βλ) dλ = (1/2)√(π/α) e^{−β²/(4α)}.   (4.5.10)

Applying (4.5.10) to (4.5.8), with α = a²t and β = x − ξ, we get

    G(x,ξ,t) = 1/(2√(πa²t)) e^{−(x−ξ)²/(4a²t)},   (4.5.11)

which is the Green function for the unbounded domain (−∞,∞). This function is often called the fundamental solution of the heat equation.
Properties of the fundamental solution

i) The function G(x,ξ,t−t₀), as a function of (x,t), is a solution of the heat equation. In fact, writing τ = t − t₀,

    G_x = −((x−ξ)/(2a²τ)) G,
    G_xx = ( −1/(2a²τ) + (x−ξ)²/(4a⁴τ²) ) G,
    G_t = ( −1/(2τ) + (x−ξ)²/(4a²τ²) ) G.

Thus,

    G_t = a² G_xx.

ii) ∫_{−∞}^∞ G(x,ξ,t−t₀) dx = 1 for t > t₀. Indeed, with the substitution α = (x−ξ)/(2√(a²(t−t₀))),

    ∫_{−∞}^∞ G(x,ξ,t−t₀) dx = (1/√π) ∫_{−∞}^∞ e^{−α²} dα = 1.
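Both properties are easy to confirm numerically. The sketch below (an illustration with arbitrarily chosen parameter values, not from the notes) checks conservation of total heat by quadrature and the heat equation by central differences.

```python
import math

# Fundamental solution (4.5.11) with arbitrary test parameters.
a, xi, t = 1.3, 0.4, 0.25

def G(x, t):
    return math.exp(-(x - xi) ** 2 / (4 * a * a * t)) / (2 * math.sqrt(math.pi * a * a * t))

# i) total heat is conserved: the integral of G over the line equals 1
L, M = 30.0, 20000
h = 2 * L / M
total = h * sum(G(xi - L + (j + 0.5) * h, t) for j in range(M))
assert abs(total - 1.0) < 1e-6

# ii) G_t = a^2 G_xx, checked by central differences at a sample point
x, eps = 1.0, 1e-4
Gt  = (G(x, t + eps) - G(x, t - eps)) / (2 * eps)
Gxx = (G(x + eps, t) - 2 * G(x, t) + G(x - eps, t)) / eps ** 2
assert abs(Gt - a * a * Gxx) < 1e-5
```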
4.5.2 Heat conduction in the unbounded domain (−∞,∞)

Find a bounded function u(x,t) (−∞ < x < ∞, t ≥ 0) which satisfies the equation

    u_t = a² u_xx  (−∞ < x < ∞, t > 0)   (4.5.12)

and the initial condition

    u(x,0) = φ(x),  −∞ < x < ∞.   (4.5.13)

Since u is bounded, the solution of our problem is unique (§ 4.3.3). We shall prove that if φ is bounded, say |φ| < M, then for t > 0 the Poisson integral

    u(x,t) = 1/(2√(πa²t)) ∫_{−∞}^∞ e^{−(x−ξ)²/(4a²t)} φ(ξ) dξ   (4.5.14)

is a solution of (4.5.12), and lim_{x→x₀, t→0} u(x,t) = φ(x₀) at every point x₀ where φ is continuous.

We note that

    |u(x,t)| ≤ ∫_{−∞}^∞ G(x,ξ,t) |φ(ξ)| dξ < M ∫_{−∞}^∞ G(x,ξ,t) dξ = M.

Thus, the function u(x,t) is bounded. We shall prove that for t > 0, u(x,t) satisfies the heat equation (4.5.12). In doing so, we prove that we can differentiate (4.5.14) under the integral sign. For example, formally differentiating both sides of (4.5.14) we get

    ∂u/∂x = ∫_{−∞}^∞ ∂G/∂x (x,ξ,t) φ(ξ) dξ,

and it remains to prove that the integral on the right-hand side converges uniformly. It is enough to prove the differentiability of u at an arbitrary point (x₀,t₀), that is, to prove the uniform convergence of the above integral in a neighbourhood of this point:

    t₁ ≤ t ≤ t₂,  |x − x₀| ≤ Δx  (0 < t₁ < t₀ < t₂).

[Figure: the rectangle t₁ ≤ t ≤ t₂, |x − x₀| ≤ Δx in the (x,t)-plane.]

In doing so, we shall prove that there exists a positive function F(ξ), which does not depend on x and t, and dominates |G_x(x,ξ,t) φ(ξ)| from above:

    |G_x(x,ξ,t) φ(ξ)| ≤ F(ξ)   (4.5.15)

and

    ∫_{ξ₁}^∞ F(ξ) dξ < ∞,  ∫_{−∞}^{−ξ₁} F(ξ) dξ < ∞,   (4.5.16)

where ξ₁ is a number from which (4.5.15) is valid. For |x − x₀| ≤ Δx, t₁ ≤ t ≤ t₂ and |ξ − x₀| ≥ Δx we have

    |∂G/∂x (x,ξ,t)| |φ(ξ)| = ( |ξ − x| / (4√π (a²t)^{3/2}) ) e^{−(x−ξ)²/(4a²t)} |φ(ξ)|
                           ≤ M ( (|ξ − x₀| + Δx) / (4√π (a²t₁)^{3/2}) ) e^{−(|ξ−x₀| − Δx)²/(4a²t₂)} =: F(ξ).   (4.5.17)
Let ξ₁ be a number from which (4.5.17) is valid. Then, substituting σ = ξ − x₀ − Δx (so that |ξ − x₀| + Δx = σ + 2Δx for ξ > x₀),

    ∫_{ξ₁}^∞ F(ξ) dξ = ( M / (4√π (a²t₁)^{3/2}) ) ∫_{ξ₁ − x₀ − Δx}^∞ (σ + 2Δx) e^{−σ²/(4a²t₂)} dσ.

This integral is clearly convergent. Thus,

    ∂u/∂x = ∫_{−∞}^∞ ∂G/∂x (x,ξ,t) φ(ξ) dξ.   (4.5.18)

Similarly, we can prove that for t > 0 the function u(x,t) is twice differentiable w.r.t. x and once w.r.t. t, and that u satisfies the heat equation (4.5.12). Let x₀ be a point where φ(x) is continuous. It remains to prove that

    u(x,t) → φ(x₀)  as t → 0 and x → x₀.

For any ε > 0 we have to show that there is a δ(ε) such that

    |u(x,t) − φ(x₀)| < ε

if |x − x₀| < δ(ε) and t < δ(ε). Since φ is continuous at x₀, there exists an η(ε) such that

    |φ(x) − φ(x₀)| < ε/6  for |x − x₀| < η.   (4.5.19)

We break the integral for u(x,t) into three parts as follows:

    u(x,t) = 1/(2√(πa²t)) ∫_{−∞}^∞ e^{−(x−ξ)²/(4a²t)} φ(ξ) dξ
           = 1/(2√(πa²t)) [ ∫_{−∞}^{x₁} + ∫_{x₁}^{x₂} + ∫_{x₂}^∞ ]
           := u₁(x,t) + u₂(x,t) + u₃(x,t),   (4.5.20)

where

    x₁ = x₀ − η  and  x₂ = x₀ + η.   (4.5.21)
For the second integral we have

    u₂(x,t) = φ(x₀)/(2√(πa²t)) ∫_{x₁}^{x₂} e^{−(x−ξ)²/(4a²t)} dξ
            + 1/(2√(πa²t)) ∫_{x₁}^{x₂} e^{−(x−ξ)²/(4a²t)} [φ(ξ) − φ(x₀)] dξ
            = I₁ + I₂.

The integral I₁ can be calculated as

    I₁ = φ(x₀)/(2√(πa²t)) ∫_{x₁}^{x₂} e^{−(x−ξ)²/(4a²t)} dξ
       = ( φ(x₀)/√π ) ∫_{(x₁−x)/(2√(a²t))}^{(x₂−x)/(2√(a²t))} e^{−α²} dα,

with α = (ξ − x)/(2√(a²t)), dα = dξ/(2√(a²t)). As |x − x₀| < η, x₁ = x₀ − η, x₂ = x₀ + η, we see that

    (x₁ − x)/(2√(a²t)) → −∞  as t → 0,
    (x₂ − x)/(2√(a²t)) → +∞  as t → 0.

It follows that

    lim_{t→0, x→x₀} I₁ = φ(x₀).

Thus, there is a δ₁(ε) such that

    |I₁ − φ(x₀)| < ε/6  for |x − x₀| < δ₁ and t < δ₁.   (4.5.22)

We estimate I₂ as follows:

    |I₂| ≤ 1/(2√(πa²t)) ∫_{x₁}^{x₂} e^{−(x−ξ)²/(4a²t)} |φ(ξ) − φ(x₀)| dξ.

From (4.5.21), for x₁ < ξ < x₂, we see that |ξ − x₀| < η. Taking (4.5.19) and the inequality

    (1/√π) ∫_{x′}^{x″} e^{−α²} dα < (1/√π) ∫_{−∞}^∞ e^{−α²} dα = 1  ∀ x′, x″

into account, we obtain

    |I₂| ≤ (ε/6) · 1/(2√(πa²t)) ∫_{x₁}^{x₂} e^{−(x−ξ)²/(4a²t)} dξ
         = (ε/6) · (1/√π) ∫_{(x₁−x)/(2√(a²t))}^{(x₂−x)/(2√(a²t))} e^{−α²} dα < ε/6.   (4.5.23)
Further,

    |u₃(x,t)| = | 1/(2√(πa²t)) ∫_{x₂}^∞ e^{−(x−ξ)²/(4a²t)} φ(ξ) dξ |
              < (M/√π) ∫_{(x₂−x)/(2√(a²t))}^∞ e^{−α²} dα → 0  as x → x₀, t → 0,   (4.5.24)

    |u₁(x,t)| = | 1/(2√(πa²t)) ∫_{−∞}^{x₁} e^{−(x−ξ)²/(4a²t)} φ(ξ) dξ |
              < (M/√π) ∫_{−∞}^{(x₁−x)/(2√(a²t))} e^{−α²} dα → 0  as x → x₀, t → 0.   (4.5.25)

(Note that as x → x₀, x₂ − x > 0 and x₁ − x < 0.) Thus, there is a δ₂(ε) such that

    |u₃(x,t)| < ε/3  and  |u₁(x,t)| < ε/3  for |x − x₀| < δ₂ and t < δ₂.   (4.5.26)

A combination of (4.5.22), (4.5.23) and (4.5.26) gives

    |u(x,t) − φ(x₀)| ≤ |u₁| + |I₁ − φ(x₀)| + |I₂| + |u₃| < ε/3 + ε/6 + ε/6 + ε/3 = ε

for |x − x₀| < δ and t < δ, where δ = min{δ₁, δ₂}. Thus, we have proved that the function

    u(x,t) = 1/(2√(πa²t)) ∫_{−∞}^∞ e^{−(x−ξ)²/(4a²t)} φ(ξ) dξ

is bounded and satisfies the heat equation as well as the initial condition. If the initial condition is given at t = t₀, then the solution has the form

    u(x,t) = 1/(2√(πa²(t−t₀))) ∫_{−∞}^∞ e^{−(x−ξ)²/(4a²(t−t₀))} φ(ξ) dξ.   (4.5.27)
Example: We find the solution of the problem (4.5.12), (4.5.13) in the case

    u(x,0) = φ(x) = { T₁ for x < 0;  T₂ for x > 0 }.

In this case, substituting α = (ξ − x)/(2√(a²t)),

    u(x,t) = 1/(2√(πa²t)) ∫_{−∞}^∞ e^{−(x−ξ)²/(4a²t)} φ(ξ) dξ
           = (T₁/√π) ∫_{−∞}^{−x/(2√(a²t))} e^{−α²} dα + (T₂/√π) ∫_{−x/(2√(a²t))}^∞ e^{−α²} dα
           = (T₁ + T₂)/2 − ( (T₁ − T₂)/√π ) ∫₀^{x/(2√(a²t))} e^{−α²} dα,   (4.5.28)

since

    (1/√π) ∫_{−∞}^{−z} e^{−α²} dα = 1/2 − (1/√π) ∫₀^z e^{−α²} dα

and

    (1/√π) ∫_{−z}^∞ e^{−α²} dα = 1/2 + (1/√π) ∫₀^z e^{−α²} dα.

If T₂ = 0, T₁ = 1, then

    u(x,t) = (1/2) ( 1 − Φ( x/(2√(a²t)) ) ).

The function

    Φ(z) = (2/√π) ∫₀^z e^{−α²} dα

is called the error function; it has many applications in probability theory.
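The closed form (4.5.28) can be checked against the Poisson integral directly. The sketch below (an illustration with arbitrarily chosen temperatures, not from the notes) uses the standard library's `math.erf` for Φ and a midpoint rule for the integral; the quadrature grid is aligned so that the jump of φ at ξ = 0 falls on a cell boundary.

```python
import math

# Step initial data: phi = T1 for x < 0, phi = T2 for x > 0.
# Closed form: u = (T1+T2)/2 + (T2-T1)/2 * erf(x / (2 sqrt(a^2 t))).
T1, T2, a, t, x = 3.0, 1.0, 0.8, 0.5, 0.7

closed = (T1 + T2) / 2 + (T2 - T1) / 2 * math.erf(x / (2 * math.sqrt(a * a * t)))

def phi(xi):
    return T1 if xi < 0 else T2

def G(x, xi):
    return math.exp(-(x - xi) ** 2 / (4 * a * a * t)) / (2 * math.sqrt(math.pi * a * a * t))

L, M = 40.0, 40000            # h = 0.002; xi = 0 is a cell boundary
h = 2 * L / M
poisson = h * sum(G(x, -L + (j + 0.5) * h) * phi(-L + (j + 0.5) * h) for j in range(M))
assert abs(poisson - closed) < 1e-4

# far from the jump the solution still reads off the initial temperatures
assert abs((T1 + T2) / 2 + (T2 - T1) / 2 * math.erf(-50.0) - T1) < 1e-12
assert abs((T1 + T2) / 2 + (T2 - T1) / 2 * math.erf(+50.0) - T2) < 1e-12
```

Note that erf here is exactly the Φ of the text, and erf(±z) → ±1 as z → ∞, which gives the limiting temperatures T₁ and T₂ on the two sides.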
4.5.3 The boundary value problem in a half-space

Consider the heat equation in the first quadrant,

    u_t = a² u_xx,  x > 0, t > 0,   (4.5.12)

with the initial condition

    u(x,0) = φ(x),  x > 0,   (4.5.13)

and one of the boundary value conditions

    u(0,t) = μ(t),  t ≥ 0  (the first b.v.p.),
    ∂u/∂x (0,t) = ν(t),  t ≥ 0  (the second b.v.p.),
    ∂u/∂x (0,t) = λ[u(0,t) − θ(t)]  (the third b.v.p.).

In order to guarantee the uniqueness of the solution, we assume that the solution is bounded:

    |u(x,t)| < M,  0 < x < ∞, t ≥ 0,

where M is a constant. It follows also that |φ(x)| < M. We represent the solution of the first b.v.p. in the form

    u(x,t) = u₁(x,t) + u₂(x,t),

where u₁ is the solution of (4.5.12) with the conditions

    u₁(x,0) = φ(x),  u₁(0,t) = 0,

and u₂ is the solution of (4.5.12) with the conditions

    u₂(x,0) = 0,  u₂(0,t) = μ(t).

Below we give two lemmas about the Poisson integral

    u(x,t) = 1/(2√(πa²t)) ∫_{−∞}^∞ e^{−(x−ξ)²/(4a²t)} Ψ(ξ) dξ.   (4.5.14)
Lemma 4.5.2 If Ψ is a bounded and odd function, Ψ(x) = −Ψ(−x), then u(0,t) = 0.

Proof: Since Ψ is bounded, the integral (4.5.14) converges, and

    u(0,t) = 1/(2√(πa²t)) ∫_{−∞}^∞ e^{−ξ²/(4a²t)} Ψ(ξ) dξ = 0,

as the integrand is odd. □

Lemma 4.5.3 If Ψ is a bounded and even function, Ψ(x) = Ψ(−x), then ∂u/∂x (0,t) = 0.

Proof:

    ∂u/∂x |_{x=0} = −( 1/(4√π (a²t)^{3/2}) ) ∫_{−∞}^∞ (x − ξ) e^{−(x−ξ)²/(4a²t)} Ψ(ξ) dξ |_{x=0}
                  = ( 1/(4√π (a²t)^{3/2}) ) ∫_{−∞}^∞ ξ e^{−ξ²/(4a²t)} Ψ(ξ) dξ = 0,

as the integrand is odd. □
Let Ψ(x) be the function defined by

    Ψ(x) = { φ(x),  x > 0;  −φ(−x),  x < 0 }

(the odd extension of φ). Since φ is bounded (by M), Ψ is bounded. It follows that the function

    U(x,t) = 1/(2√(πa²t)) ∫_{−∞}^∞ e^{−(x−ξ)²/(4a²t)} Ψ(ξ) dξ

is well defined. Furthermore, since Ψ is odd, U(0,t) = 0 by Lemma 4.5.2. Thus,

    u₁(x,t) = U(x,t)  for x ≥ 0.

The function U(x,t) can be represented as follows:

    U(x,t) = 1/(2√(πa²t)) ∫_{−∞}^0 e^{−(x−ξ)²/(4a²t)} Ψ(ξ) dξ + 1/(2√(πa²t)) ∫₀^∞ e^{−(x−ξ)²/(4a²t)} Ψ(ξ) dξ
           = −1/(2√(πa²t)) ∫₀^∞ e^{−(x+ξ)²/(4a²t)} φ(ξ) dξ + 1/(2√(πa²t)) ∫₀^∞ e^{−(x−ξ)²/(4a²t)} φ(ξ) dξ.

Thus,

    u₁(x,t) = 1/(2√(πa²t)) ∫₀^∞ [ e^{−(x−ξ)²/(4a²t)} − e^{−(x+ξ)²/(4a²t)} ] φ(ξ) dξ.   (4.5.29)
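The image ("method of reflections") formula (4.5.29) can be checked numerically. The sketch below (an illustration with an arbitrarily chosen φ, not from the notes) verifies both that u₁(0,t) = 0 and that (4.5.29) agrees with the whole-line Poisson integral of the odd extension Ψ.

```python
import math

# Half-line first b.v.p. with zero boundary data, phi(x) = x e^{-x}
# (so phi(0) = 0 and phi decays at infinity).
a, t = 1.0, 0.2

def phi(x):
    return x * math.exp(-x)

def G(x, xi):
    return math.exp(-(x - xi) ** 2 / (4 * a * a * t)) / (2 * math.sqrt(math.pi * a * a * t))

L, M = 30.0, 30000
h = L / M
nodes = [(j + 0.5) * h for j in range(M)]

def u1(x):                       # image formula (4.5.29)
    return h * sum((G(x, xi) - G(x, -xi)) * phi(xi) for xi in nodes)

def u_oddext(x):                 # Poisson integral with Psi(xi) = -phi(-xi) for xi < 0
    pos = h * sum(G(x, xi) * phi(xi) for xi in nodes)
    neg = h * sum(G(x, -xi) * (-phi(xi)) for xi in nodes)
    return pos + neg

assert abs(u1(0.0)) < 1e-9       # boundary condition u1(0, t) = 0
assert abs(u1(0.8) - u_oddext(0.8)) < 1e-9   # the two formulas coincide
```

At x = 0 the two Gaussians in (4.5.29) cancel term by term, which is precisely Lemma 4.5.2 in discrete form.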
Analogously, the solution ū₁(x,t) of the second boundary value problem with ∂ū₁/∂x (0,t) = 0 and ū₁(x,0) = φ(x) is obtained from the even extension of φ (Lemma 4.5.3):

    ū₁(x,t) = 1/(2√(πa²t)) ∫₀^∞ [ e^{−(x−ξ)²/(4a²t)} + e^{−(x+ξ)²/(4a²t)} ] φ(ξ) dξ.   (4.5.30)

Now we try to find u₂(x,t). In doing so we note that if we consider the equation v₁ₜ = a² v₁ₓₓ for 0 < x < ∞ and t ≥ t₀, with the conditions v₁(x,t₀) = T and v₁(0,t) = 0, then from (4.5.29) we have

    v₁(x,t) = T/(2√(πa²(t−t₀))) ∫₀^∞ [ e^{−(x−ξ)²/(4a²(t−t₀))} − e^{−(x+ξ)²/(4a²(t−t₀))} ] dξ.   (4.5.31)

Taking the changes of variables

    α = (ξ − x)/(2√(a²(t−t₀))),  α₁ = (ξ + x)/(2√(a²(t−t₀))),

we get

    v₁(x,t) = (T/√π) [ ∫_{−x/(2√(a²(t−t₀)))}^∞ e^{−α²} dα − ∫_{x/(2√(a²(t−t₀)))}^∞ e^{−α₁²} dα₁ ]
            = (2T/√π) ∫₀^{x/(2√(a²(t−t₀)))} e^{−α²} dα.

Thus,

    v₁(x,t) = T Φ( x/(2√(a²(t−t₀))) ),   (4.5.32)

where

    Φ(z) = (2/√π) ∫₀^z e^{−α²} dα

is the error function. Let μ(t) = μ₀ = const. Then the function

    v(x,t) = μ₀ Φ( x/(2√(a²(t−t₀))) )

is the solution of the heat equation (4.5.12) for t ≥ t₀ which satisfies the conditions

    v(x,t₀) = μ₀,  v(0,t) = 0.

It follows that the function

    v̄(x,t) := μ₀ − v(x,t) = μ₀ [ 1 − Φ( x/(2√(a²(t−t₀))) ) ]

is the solution of (4.5.12) for t ≥ t₀ which satisfies the conditions

    v̄(x,t₀) = 0  (x > 0)  and  v̄(0,t) = μ₀  (t > t₀).   (4.5.33)
We rewrite v̄(x,t) in the form

    v̄(x,t) = μ₀ U(x,t−t₀),

where

    U(x,t) = 1 − Φ( x/(2√(a²t)) ) = (2/√π) ∫_{x/(2√(a²t))}^∞ e^{−α²} dα.

The function U(x,t) corresponds to the case μ₀ = 1. We extend U(x,t) by zero for t < 0. It then corresponds to the step boundary condition

    U(0,t) = { 1, t > 0;  0, t < 0 }.

Consider now the solution v(x,t) of (4.5.12) which satisfies the conditions

    v(x,t₀) = 0,  v(0,t) = μ(t) = { μ₀, t₀ < t < t₁;  0, t > t₁ }.

It can be verified that

    v(x,t) = μ₀ [ U(x,t−t₀) − U(x,t−t₁) ].

Further, if μ has the form

    μ(t) = { μ₀, t₀ < t ≤ t₁;  μ₁, t₁ < t ≤ t₂;  …;  μ_{n−1}, t_{n−1} < t ≤ t_n },

then the solution of (4.5.12), (4.5.13) can be represented in the form

    u(x,t) = Σ_{i=0}^{n−2} μ_i [ U(x,t−t_i) − U(x,t−t_{i+1}) ] + μ_{n−1} U(x,t−t_{n−1}).   (4.5.34)

With the aid of the mean value theorem, we have

    u(x,t) = Σ_{i=0}^{n−2} μ_i (∂U/∂t)(x,t−τ_i)(t_{i+1} − t_i) + μ_{n−1} U(x,t−t_{n−1}),  t_i ≤ τ_i ≤ t_{i+1}.   (4.5.35)

We consider the problem (4.5.12), (4.5.13) with u(x,0) = 0. If the function μ(t) is piecewise continuous, then we have an approximation to u(x,t) of the form (4.5.34) when μ(t) is approximated by a piecewise constant function. Refining this approximation, we get

    u(x,t) = ∫₀ᵗ (∂U/∂t)(x,t−τ) μ(τ) dτ,
since for x > 0

    lim_{t_{n−1}→t} μ_{n−1} U(x,t−t_{n−1}) = 0.

We do not go into the details of when this limit process is meaningful, and so formally take

    u₂(x,t) = ∫₀ᵗ (∂U/∂t)(x,t−τ) μ(τ) dτ.   (4.5.36)

Further,

    (∂U/∂t)(x,t) = ∂/∂t [ (2/√π) ∫_{x/(2√(a²t))}^∞ e^{−α²} dα ]
                 = ( x/(2√(πa²) t^{3/2}) ) e^{−x²/(4a²t)}
                 = −2a² (∂G/∂x)(x,0,t) = 2a² (∂G/∂ξ)(x,0,t).

Thus,

    u₂(x,t) = 2a² ∫₀ᵗ (∂G/∂ξ)(x,0,t−τ) μ(τ) dτ
            = ( x/(2√(πa²)) ) ∫₀ᵗ (t−τ)^{−3/2} e^{−x²/(4a²(t−τ))} μ(τ) dτ.   (4.5.37)

We note that in deriving (4.5.37) we have used only the linearity of the heat equation and nothing more. Furthermore, the boundary and initial conditions on U are explicitly given:

    U(0,t) = 1 (t > 0),  U(x,0) = 0 (x > 0),  i.e.  U(0,t) = { 1, t > 0;  0, t < 0 }.

If the boundary condition for a given differential equation is u(0,t) = μ(t), t > 0, together with the homogeneous initial condition, then

    u(x,t) = ∫₀ᵗ (∂U/∂t)(x,t−τ) μ(τ) dτ.

This is called the Duhamel principle, which shows that the difficulties in the boundary value problems are essentially those of the piecewise constant boundary conditions.

By the method of odd extension of the data, we can easily find the solution of the problem

    u_t = a² u_xx + f(x,t)  (0 < x < ∞, t > 0),  u(0,t) = 0,  u(x,0) = 0,

in the form

    u₃(x,t) = (1/(2√π)) ∫₀ᵗ ∫₀^∞ (1/√(a²(t−τ))) [ e^{−(x−ξ)²/(4a²(t−τ))} − e^{−(x+ξ)²/(4a²(t−τ))} ] f(ξ,τ) dξ dτ.
Chapter 5

Elliptic Equations

In this chapter we shall briefly consider the Laplace equation

    Δu = u_xx + u_yy = 0.

A solution of the Laplace equation is called a harmonic function. The inhomogeneous version of the Laplace equation, Δu = f with f a given function, is called the Poisson equation. The basic mathematical problem is to solve the Laplace or the Poisson equation in a given domain Ω with a condition on the boundary ∂Ω of Ω:

    Δu = f in Ω,
    u = h   or   ∂u/∂n = h   or   ∂u/∂n + au = h   on ∂Ω.
5.1 The maximum principle

Let Ω be a connected bounded domain in ℝ². Let u be a function harmonic in Ω and continuous in Ω̄ = Ω ∪ ∂Ω. Then the maximum and the minimum values of u are attained on ∂Ω.

Proof: Consider the function

    v(x,y) = u(x,y) + ε(x² + y²),   ε > 0.

We have

    Δv = Δu + ε Δ(x² + y²) = 0 + 4ε > 0 in Ω.

Thus v has no maximum in Ω: at an interior maximum we would have v_xx ≤ 0 and v_yy ≤ 0, and so v_xx + v_yy ≤ 0. Furthermore, since v is a continuous function, it attains its maximum in the bounded closed domain Ω̄. As v has no maximum in Ω, v attains its maximum at some point (x_0, y_0) ∈ ∂Ω. Hence, for all (x,y) ∈ Ω̄,

    u(x,y) ≤ v(x,y) ≤ v(x_0,y_0) = u(x_0,y_0) + ε(x_0² + y_0²) ≤ max_{∂Ω} u + ε(x_0² + y_0²).

Since Ω is bounded, x_0² + y_0² is bounded, and since ε is an arbitrary positive number, we have

    u(x,y) ≤ max_{∂Ω} u   for all (x,y) ∈ Ω̄.

Thus u attains its maximum at some point of the boundary ∂Ω. The proof of the minimum part is similar.  □
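As a numerical illustration of the maximum principle (a sketch, not part of the notes), take the harmonic polynomial u = Re((x+iy)³) = x³ − 3xy² on the closed unit disk: its extrema over interior sample points cannot exceed the extrema over the boundary circle.

```python
import math

def u(x, y):
    return x ** 3 - 3 * x * y ** 2          # Re((x+iy)^3), a harmonic function

# samples strictly inside the unit disk (r <= 0.9)
interior = [u(r * math.cos(t), r * math.sin(t))
            for r in [0.1 * k for k in range(10)]
            for t in [2 * math.pi * j / 90 for j in range(90)]]
# samples on the boundary circle r = 1
boundary = [u(math.cos(t), math.sin(t))
            for t in [2 * math.pi * j / 3600 for j in range(3600)]]

print(max(interior) <= max(boundary), min(interior) >= min(boundary))  # True True
```

On the boundary u reduces to cos 3t, so the boundary extrema are ±1, while every interior sample is bounded by 0.9³ < 1.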
5.2 The uniqueness of the Dirichlet problem

Consider the Dirichlet problem

    Δu = f in Ω,   u|_∂Ω = h.

Let u_1 and u_2 be two solutions of this problem. Put v = u_1 − u_2. Then Δv = 0 in Ω and v|_∂Ω = 0. By the maximum principle, v ≡ 0 in Ω̄. Thus u_1(x,y) = u_2(x,y).
5.3 The invariance of the operator Δ

The operator Δ is invariant under translations and rotations. In fact, a translation in the plane is a transformation

    x' = x + a,   y' = y + b.

The invariance under translations means that u_xx + u_yy = u_{x'x'} + u_{y'y'}. A rotation in the plane through the angle α is given by

    x' = x cos α + y sin α,
    y' = −x sin α + y cos α.

By the chain rule we calculate

    u_x = u_{x'} cos α − u_{y'} sin α,
    u_y = u_{x'} sin α + u_{y'} cos α,
    u_xx = (u_{x'} cos α − u_{y'} sin α)_{x'} cos α − (u_{x'} cos α − u_{y'} sin α)_{y'} sin α,
    u_yy = (u_{x'} sin α + u_{y'} cos α)_{x'} sin α + (u_{x'} sin α + u_{y'} cos α)_{y'} cos α.

Adding, we have

    u_xx + u_yy = (u_{x'x'} + u_{y'y'})(cos²α + sin²α) + u_{x'y'}(−2 sin α cos α + 2 sin α cos α) = u_{x'x'} + u_{y'y'}.

This proves the invariance of the Laplace operator. In engineering the Laplacian is a model for isotropic physical situations, in which there is no preferred direction.
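The rotation invariance can be illustrated numerically (a sketch, not part of the notes): for v(X) = u(RX) with a rotation matrix R one must have (Δv)(X) = (Δu)(RX). Here the Laplacian is approximated by the standard five-point finite-difference stencil.

```python
import math

def u(x, y):
    return math.exp(x) * math.cos(2 * y) + x * x * y   # any smooth test function

def laplacian(f, x, y, h=1e-4):
    """Five-point central-difference approximation of u_xx + u_yy."""
    return (f(x + h, y) + f(x - h, y) + f(x, y + h) + f(x, y - h)
            - 4 * f(x, y)) / (h * h)

alpha = 0.73
c, s = math.cos(alpha), math.sin(alpha)
v = lambda x, y: u(c * x - s * y, s * x + c * y)       # v(X) = u(R X)

x0, y0 = 0.3, -0.5
lhs = laplacian(v, x0, y0)                             # (Laplacian v)(X)
rhs = laplacian(u, c * x0 - s * y0, s * x0 + c * y0)   # (Laplacian u)(R X)
print(abs(lhs - rhs))   # small, limited only by the finite-difference error
```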
The rotational invariance suggests that the two-dimensional Laplacian

    Δ₂ = ∂²/∂x² + ∂²/∂y²

should take a particularly simple form in polar coordinates. The transformation has the form

    x = r cos θ,   y = r sin θ.

It follows that

    r = √(x² + y²),   θ = arccos( x/√(x² + y²) ) = arcsin( y/√(x² + y²) ).

We have

    ∂x/∂r = cos θ,   ∂y/∂r = sin θ,   ∂x/∂θ = −r sin θ,   ∂y/∂θ = r cos θ,
    ∂r/∂x = x/r = cos θ,   ∂r/∂y = y/r = sin θ,
    ∂θ/∂x = −y/r² = −(sin θ)/r,   ∂θ/∂y = x/r² = (cos θ)/r.

Thus the transformation x = r cos θ, y = r sin θ has the Jacobian matrix (rows (∂x/∂r, ∂y/∂r) and (∂x/∂θ, ∂y/∂θ))

    J = [ [cos θ, sin θ], [−r sin θ, r cos θ] ]

with the inverse

    J⁻¹ = [ [cos θ, −(sin θ)/r], [sin θ, (cos θ)/r] ].

Hence,

    ∂u/∂x = (∂u/∂r)(∂r/∂x) + (∂u/∂θ)(∂θ/∂x) = u_r cos θ − u_θ (sin θ)/r,
    ∂u/∂y = (∂u/∂r)(∂r/∂y) + (∂u/∂θ)(∂θ/∂y) = u_r sin θ + u_θ (cos θ)/r,

and

    u_xx = ∂/∂r( u_r cos θ − u_θ (sin θ)/r ) cos θ − (1/r) ∂/∂θ( u_r cos θ − u_θ (sin θ)/r ) sin θ
         = u_rr cos²θ − 2 u_rθ (sin θ cos θ)/r + u_θθ (sin²θ)/r² + u_r (sin²θ)/r + 2 u_θ (sin θ cos θ)/r².

Similarly,

    u_xy = u_yx = u_rr sin θ cos θ − u_θθ (sin θ cos θ)/r² + u_rθ (cos²θ − sin²θ)/r − u_r (sin θ cos θ)/r + u_θ (sin²θ − cos²θ)/r²,

    u_yy = u_rr sin²θ + u_θθ (cos²θ)/r² + 2 u_rθ (sin θ cos θ)/r + u_r (cos²θ)/r − 2 u_θ (sin θ cos θ)/r².

Thus,

    u_xx + u_yy = u_rr + (1/r²) u_θθ + (1/r) u_r     (5.3.1)
                = (1/r²) { r ∂/∂r ( r ∂u/∂r ) + ∂²u/∂θ² }.     (5.3.2)

It is also natural to look for special harmonic functions that are themselves rotationally invariant, that is, solutions depending only on r. Then, by (5.3.2),

    0 = u_xx + u_yy = (1/r) ∂/∂r ( r ∂u/∂r )

if u does not depend on θ. Hence r ∂u/∂r = c_1, and so u = c_1 ln r + c_2. The function ln r will play a central role later.
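A quick finite-difference check (a sketch, not part of the notes) that ln r, and likewise r^n cos(nθ), is harmonic away from the origin:

```python
import math

def laplacian(f, x, y, h=1e-4):
    """Five-point central-difference approximation of u_xx + u_yy."""
    return (f(x + h, y) + f(x - h, y) + f(x, y + h) + f(x, y - h)
            - 4 * f(x, y)) / (h * h)

log_r = lambda x, y: 0.5 * math.log(x * x + y * y)     # ln r

def rn_cos(n):
    """The harmonic function r^n cos(n theta) in Cartesian coordinates."""
    def f(x, y):
        r = math.hypot(x, y)
        return r ** n * math.cos(n * math.atan2(y, x))
    return f

err_log = abs(laplacian(log_r, 0.8, 0.4))
err_rn = abs(laplacian(rn_cos(3), 0.8, 0.4))
print(err_log, err_rn)   # both close to zero away from the origin
```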
5.4 Poisson's formula

Consider the problem

    u_xx + u_yy = 0,   x² + y² < a²,     (5.4.1)
    u = f,   x² + y² = a².     (5.4.2)

Here a is a given number and f is a given function. In polar coordinates (r, θ) the equation (5.4.1) has the form

    (1/r²) { r ∂/∂r ( r ∂u/∂r ) + ∂²u/∂θ² } = 0.     (5.4.3)

We shall find solutions of this equation in the form u(r,θ) = R(r) Θ(θ). Plugging this into (5.4.3), we get

    [ d/dr ( r dR/dr ) ] · r/R = − Θ''/Θ = λ,

where λ is a constant. From this we get

    Θ'' + λΘ = 0     (5.4.4)

and

    r d/dr ( r dR/dr ) − λR = 0.     (5.4.5)

The equation (5.4.4) gives

    Θ(θ) = A cos(√λ θ) + B sin(√λ θ).

As u(r,θ) is periodic in θ, we have Θ(θ + 2π) = Θ(θ). Thus √λ = n is an integer, and so

    Θ_n(θ) = A_n cos(nθ) + B_n sin(nθ).

We shall find R(r) in the form R(r) = r^μ. Putting this into (5.4.5) we get

    μ² = n²,   that is,   μ = ±n   (n > 0).

Thus

    R(r) = C r^n + D r^(−n),

where n > 0 and C and D are some constants. Here D must be equal to zero, since otherwise R(r) is unbounded as r → 0. Thus, special solutions of our problem have the form
    u_n(r,θ) = r^n ( A_n cos(nθ) + B_n sin(nθ) ).

If the series

    u(r,θ) = Σ_{n=0}^∞ r^n ( A_n cos(nθ) + B_n sin(nθ) )

converges, then it represents a harmonic function. Note that the equation (5.4.3) has no meaning for r = 0, so we have to prove that Δu_n = 0 also for r = 0. In fact, the special solutions r^n cos(nθ) and r^n sin(nθ) are the real and the imaginary parts of the function

    r^n e^{inθ} = ( r e^{iθ} )^n = (x + iy)^n,

which are polynomials in x and y. It is now clear that a polynomial which satisfies the equation Δu = 0 for r > 0 satisfies, due to the continuity of its second derivatives, this equation also for r = 0. In order to determine A_n and B_n, we use the boundary condition

    u(a,θ) = Σ_{n=0}^∞ a^n ( A_n cos(nθ) + B_n sin(nθ) ) = f.     (5.4.6)

Taking into account that f is a function of θ, we have

    f(θ) = α_0/2 + Σ_{n=1}^∞ ( α_n cos(nθ) + β_n sin(nθ) ),     (5.4.7)
where

    α_0 = (1/π) ∫_{−π}^{π} f(θ) dθ,   α_n = (1/π) ∫_{−π}^{π} f(θ) cos(nθ) dθ,   β_n = (1/π) ∫_{−π}^{π} f(θ) sin(nθ) dθ.

A comparison of (5.4.6) with (5.4.7) gives

    A_0 = α_0/2,   A_n = α_n/a^n,   B_n = β_n/a^n.

Thus,

    u(r,θ) = α_0/2 + Σ_{n=1}^∞ (r/a)^n ( α_n cos(nθ) + β_n sin(nθ) ).     (5.4.8)
We shall find conditions guaranteeing the convergence of this series. Consider the function

    u_n = t^n ( α_n cos(nθ) + β_n sin(nθ) ),   t = r/a ≤ 1.

We have

    ∂^k u_n / ∂θ^k = t^n n^k ( α_n cos(nθ + kπ/2) + β_n sin(nθ + kπ/2) ).

Denoting by M a common bound for the |α_n|, |β_n|, n = 0, 1, ...,

    |α_n| < M,   |β_n| < M,     (5.4.9)

we have

    | ∂^k u_n / ∂θ^k | ≤ 2M t^n n^k.

For t ≤ t_0 = r_0/a < 1,

    Σ_{n=1}^∞ t^n n^k ( |α_n| + |β_n| ) ≤ 2M Σ_{n=1}^∞ t_0^n n^k   (t ≤ t_0).

It follows that the series on the right-hand side converges, so the differentiated series converges uniformly for t ≤ t_0 < 1. Hence u(r,θ) is infinitely differentiable w.r.t. θ for r ≤ r_0 < a. Analogously, it is infinitely differentiable w.r.t. r for r ≤ r_0 < a. As r_0 < a is arbitrary, the function u defined by (5.4.8) satisfies (5.4.1) for r < a and is there infinitely differentiable w.r.t. r and θ. Up to this point we have used only the conditions (5.4.9). These conditions are satisfied if f is bounded by M/2. In order to guarantee the continuity of the solution up to the boundary (up to r = a), we need to suppose that f is continuous and differentiable. Putting the expressions for the Fourier coefficients α_n, β_n into (5.4.8), we get

    u(r,θ) = (1/π) ∫_{−π}^{π} f(ψ) { 1/2 + Σ_{n=1}^∞ (r/a)^n ( cos(nθ) cos(nψ) + sin(nθ) sin(nψ) ) } dψ
           = (1/π) ∫_{−π}^{π} f(ψ) { 1/2 + Σ_{n=1}^∞ (r/a)^n cos n(θ − ψ) } dψ.     (5.4.10)
On the other hand,

    1/2 + Σ_{n=1}^∞ (r/a)^n cos n(θ−ψ)
      = 1/2 + (1/2) Σ_{n=1}^∞ t^n ( e^{in(θ−ψ)} + e^{−in(θ−ψ)} )
      = (1/2) { 1 + Σ_{n=1}^∞ [ ( t e^{i(θ−ψ)} )^n + ( t e^{−i(θ−ψ)} )^n ] }
      = (1/2) [ 1 + t e^{i(θ−ψ)} / (1 − t e^{i(θ−ψ)}) + t e^{−i(θ−ψ)} / (1 − t e^{−i(θ−ψ)}) ]
      = (1/2) (1 − t²) / (1 − 2t cos(θ−ψ) + t²),   t = r/a < 1.

Putting the last equality into (5.4.10), we get

    u(r,θ) = (1/2π) ∫_{−π}^{π} f(ψ) (1 − r²/a²) / ( r²/a² − 2(r/a) cos(θ−ψ) + 1 ) dψ.

Thus,

    u(r,θ) = (1/2π) ∫_{−π}^{π} f(ψ) (a² − r²) / ( r² − 2ar cos(θ−ψ) + a² ) dψ.     (5.4.11)

This formula is called Poisson's formula. The function

    K(r, θ; a, ψ) = (a² − r²) / ( r² − 2ar cos(θ−ψ) + a² )

is called the Poisson kernel. For r < a it is positive, since 2ar < a² + r² for r ≠ a.

We now prove that Poisson's formula (5.4.11) (or (5.4.10)) also gives a continuous solution to (5.4.1), (5.4.2) if f is merely continuous. If f is continuous, then f is bounded, and so we have already proved that for r < a the equation Δu = 0 is satisfied. It remains to prove that u(r,θ) is continuous up to the boundary. Let f_1(θ), f_2(θ), ... be a sequence of continuous and differentiable functions which converges uniformly to f(θ). For each f_k(θ) we determine the corresponding u_k(r,θ) by the Poisson formula. Since {f_k(θ)} converges uniformly to f(θ), for any ε > 0 there exists a k_0(ε) > 0 such that

    | f_k(θ) − f_{k+l}(θ) | < ε   for all k > k_0(ε), l > 0.

Thus, from the maximum principle,

    | u_k(r,θ) − u_{k+l}(r,θ) | < ε   for 0 ≤ r ≤ a, k > k_0(ε), l > 0.

Hence {u_k} converges uniformly to a function u = lim_{k→∞} u_k(r,θ). The function u is continuous in the closed domain, since all the functions

    u_k(r,θ) = (1/2π) ∫_{−π}^{π} (a² − r²) / ( r² − 2ar cos(θ−ψ) + a² ) f_k(ψ) dψ
are continuous in the closed domain. Thus,

    u(r,θ) = lim_{k→∞} u_k(r,θ) = (1/2π) ∫_{−π}^{π} (a² − r²) / ( r² − 2ar cos(θ−ψ) + a² ) f(ψ) dψ   for r < a,
    u(r,θ) = f(θ)   for r = a,

since {f_k(θ)} converges uniformly to f(θ).
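The kernel summation used in deriving (5.4.11), namely 1/2 + Σ_{n≥1} t^n cos(nφ) = (1 − t²)/(2(1 − 2t cos φ + t²)), can be checked against a truncated series (a sketch, not part of the notes):

```python
import math

t, phi = 0.6, 1.1   # any 0 <= t < 1 and any angle phi

# truncated left-hand side: the tail beyond n = 200 is of size ~ t^200
series = 0.5 + sum(t ** n * math.cos(n * phi) for n in range(1, 200))

# closed-form right-hand side (half the Poisson kernel in the variable t)
closed = (1 - t * t) / (2 * (1 - 2 * t * math.cos(phi) + t * t))

print(abs(series - closed))   # negligible: the truncation error is ~ t^200
```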
Let X = (x,y) have the polar coordinates (r,θ) and let X' = (x_0,y_0) have the polar coordinates (a,ψ) (that means that (x_0,y_0) lies on the boundary circle).

[Figure: the disk of radius a with an interior point (x,y) and a boundary point (x_0,y_0).]

Then

    |X − X'|² = a² + r² − 2ar cos(θ−ψ).

The arc length element on the circumference is ds' = a dψ. Therefore, Poisson's formula takes the alternative form

    u(X) = (a² − |X|²)/(2πa) ∫_{|X'|=a} u(X') / |X − X'|² ds'     (5.4.12)

for X ∈ Ω, where we write u(X') = f(ψ).
5.5 The mean value theorem

Let u be a harmonic function in a domain Ω ⊂ ℝ², continuous in Ω̄. Let M_0 be a point of Ω and let B_a be the ball of radius a with center at M_0 which lies entirely in Ω. Then

    u(M_0) = (1/(2πa)) ∫_{∂B_a} u(ξ) ds.     (5.5.1)

Proof: Without loss of generality we may suppose that M_0 = 0. Since B̄_a ⊂ Ω, the function u is harmonic in B_a, and from Poisson's formula (5.4.12) we have (with X = 0)

    u(0) = (a²/(2πa)) ∫_{|X'|=a} u(X')/a² ds' = (1/(2πa)) ∫_{|X'|=a} u(X') ds'.  □
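A numerical check of the mean value property (5.5.1) (a sketch, not part of the notes) for the harmonic function u = x² − y²:

```python
import math

u = lambda x, y: x * x - y * y        # a harmonic function
x0, y0, a = 0.3, 0.7, 0.25            # center M0 and radius of the circle

M = 1000                              # sample points on the circumference
avg = sum(u(x0 + a * math.cos(2 * math.pi * j / M),
            y0 + a * math.sin(2 * math.pi * j / M)) for j in range(M)) / M
print(avg, u(x0, y0))                 # both equal -0.4 up to rounding
```

The cross terms average to zero over the circle and the a² cos² and a² sin² contributions cancel, so the circle average reproduces the center value exactly.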
5.6 The maximum principle (a strong version)
We now prove the following: Let Ω be a connected bounded domain in ℝ². Let u be a harmonic function in Ω, continuous in Ω̄. Then the maximum and the minimum values of u are attained on ∂Ω and nowhere inside Ω unless u ≡ const.

Since u is continuous in Ω̄, it attains its maximum M somewhere, say at x_M ∈ Ω̄. We shall show that x_M ∉ Ω unless u ≡ const. By the definition of M we know that

    u(x) ≤ u(x_M) = M   for all x ∈ Ω̄.

We draw a circle around x_M entirely contained in Ω. By the mean value theorem, u(x_M) is equal to its average over the circumference. Since the average is not greater than the maximum, we have

    M = u(x_M) ≤ M,

with equality, and therefore u(x) = M for all x on the circumference. This is true for any such circle, so u(x) = M on the whole disk around x_M. Now we repeat the argument with a different center. In this way we can fill the whole domain up with circles. Using the assumption that Ω is connected, we deduce that u(x) ≡ M throughout Ω. So u ≡ const.
Bibliography

[1] R. A. Adams: Sobolev Spaces. Academic Press, New York, 1975.
[2] S. Agmon: Lectures on Elliptic Boundary Value Problems. Van Nostrand, 1965.
[3] R. Courant and D. Hilbert: Methods of Mathematical Physics. Interscience Publishers, Vol. I, 1953; Vol. II, 1962.
[4] A. Friedman: Partial Differential Equations of Parabolic Type. Prentice Hall, 1964.
[5] A. Friedman: Partial Differential Equations. Holt, Rinehart and Winston, 1969.
[6] P. R. Garabedian: Partial Differential Equations. John Wiley & Sons, Inc., 1964.
[7] I. M. Gelfand and G. E. Shilov: Generalized Functions. Academic Press, New York, 1964.
[8] D. Gilbarg and N. S. Trudinger: Elliptic Partial Differential Equations of Second Order. Springer-Verlag, Berlin, 1977.
[9] L. Hörmander: Linear Partial Differential Operators. Springer-Verlag, Berlin, 1963.
[10] F. John: Partial Differential Equations. Springer-Verlag, Berlin, 1982.
[11] M. Krzyżański: Partial Differential Equations of Second Order, Vol. 1 & 2. PWN, Warszawa, 1971.
[12] O. A. Ladyzhenskaya: The Boundary Value Problems of Mathematical Physics. Springer-Verlag, New York-Berlin-Heidelberg-Tokyo, 1985.
[13] O. A. Ladyženskaja, V. A. Solonnikov, N. N. Ural'ceva: Linear and Quasilinear Equations of Parabolic Type. Amer. Math. Soc., 1968.
[14] O. A. Ladyzhenskaya et al.: Linear and Quasilinear Equations of Elliptic Type. Amer. Math. Soc.
[15] J.-L. Lions and E. Magenes: Non-Homogeneous Boundary Value Problems and Applications. Vol. I-III, Springer-Verlag, Berlin, 1972.
[16] J. T. Marti: Introduction to Sobolev Spaces and Finite Element Solution of Elliptic Boundary Value Problems. Academic Press, New York, 1986.
[17] I. G. Petrovskii: Lectures on Partial Differential Equations. Interscience Publishers, 1954.
[18] M. H. Protter and H. F. Weinberger: Maximum Principles in Differential Equations. Springer-Verlag, Berlin, 1984.
[19] S. L. Sobolev: Partial Differential Equations of Mathematical Physics. Pergamon, New York, 1964.
[20] W. A. Strauss: Partial Differential Equations. An Introduction. John Wiley, New York, 1992.
[21] F. Trèves: Basic Linear Partial Differential Equations. Academic Press, 1975.
[22] A. N. Tykhonov, A. A. Samarski: Differentialgleichungen der Mathematischen Physik. VEB, Berlin, 1959.
[23] V. S. Vladimirov: Equations of Mathematical Physics. Marcel Dekker, Inc., New York, 1971.
[24] V. S. Vladimirov: Generalized Functions in Mathematical Physics. Mir Publishers, Moscow, 1979.
[25] D. Widder: The Heat Equation. Academic Press, New York, 1975.
[26] J. Wloka: Partial Differential Equations. Cambridge Univ. Press, 1987.