Complex Analysis & Applications

Johar M. Ashfaque

Contents

1 What are Complex Numbers?
  1.1 The Complex Number
  1.2 The Complex Product
  1.3 Euler's Formula
  1.4 Argument, Magnitude & Conjugate

2 Differentiation
  2.1 The Cauchy-Riemann Equations

3 Integration
  3.1 Cauchy's Differentiation Formula

4 Applications
  4.1 Fourier Series: Complex Form
      4.1.1 Parseval's Theorem
  4.2 Virasoro Algebra: Cauchy's Integral Theorem
  4.3 Some Properties of the Riemann Zeta Function
      4.3.1 Introduction
      4.3.2 The Euler Product Formula for ζ(s)
      4.3.3 The Bernoulli Numbers
      4.3.4 Relationship to the Zeta Function
      4.3.5 The Gamma Function
      4.3.6 The Euler Reflection Formula
  4.4 The Hurwitz Zeta Function
      4.4.1 Variants of the Hurwitz Zeta Function
      4.4.2 Sums of the Hurwitz Zeta Function
  4.5 Epstein Zeta Function
      4.5.1 Introduction
      4.5.2 The Functional Equation
  4.6 The Mellin Transform
      4.6.1 The Mellin Transform and its Properties
      4.6.2 Relation to Laplace Transform
      4.6.3 Inversion Formula
      4.6.4 Scaling Property for a > 0
      4.6.5 Multiplication by t^a
      4.6.6 Derivative
      4.6.7 Another Property of the Derivative
      4.6.8 Integral
      4.6.9 Example 1
      4.6.10 Example 2


1 What are Complex Numbers?

1.1 The Complex Number

A complex number, z, is an ordered pair of real numbers, similar to a point in the real plane, R². The first and second components of z are called the real and imaginary parts respectively:

ℜ(z) = x,   ℑ(z) = y.

Equivalently, we can express the complex number in terms of its components as

z = x + iy.

1.2 The Complex Product

The plane of complex numbers, C, differs from R² in that the product of two complex numbers is defined to be

z1 · z2 = (x1 + i y1)(x2 + i y2)
        = x1 x2 + i y1 x2 + i y2 x1 − y1 y2
        = x1 x2 − y1 y2 + i (y1 x2 + y2 x1).
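As a quick sanity check, the component formula can be compared against Python's built-in complex arithmetic; the sample values below are arbitrary choices of ours, not from the text.

```python
# Check the component formula x1*x2 - y1*y2 + i(y1*x2 + y2*x1)
# against Python's built-in complex multiplication, at arbitrary sample values.
x1, y1, x2, y2 = 1.5, -2.0, 0.5, 3.0

z = complex(x1, y1) * complex(x2, y2)
w = complex(x1 * x2 - y1 * y2, y1 * x2 + y2 * x1)

print(z == w)   # -> True
```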

1.3 Euler's Formula

Consider the exponential of an imaginary number, e^{iϕ}, where ϕ ∈ R:

e^{iϕ} = ∑_{n=0}^∞ (iϕ)^n / n!
       = 1 + iϕ − ϕ²/2 − iϕ³/3! + ϕ⁴/4! + iϕ⁵/5! + ...
       = (1 − ϕ²/2 + ϕ⁴/4! − ...) + i (ϕ − ϕ³/3! + ϕ⁵/5! − ...)
       = cos ϕ + i sin ϕ.

Figure 1: The geometric interpretation of Euler's formula.

Given a general complex number, ξ + iϕ, we find

e^{ξ+iϕ} = e^ξ (cos ϕ + i sin ϕ).

We can express any non-zero complex number, z, in polar coordinates by defining r ≡ e^ξ, so that

z = re^{iϕ} = r(cos ϕ + i sin ϕ).
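Euler's formula is easy to verify numerically with the standard library; the sample angle is an arbitrary choice.

```python
import cmath
import math

# Numerical check of Euler's formula e^{i*phi} = cos(phi) + i*sin(phi)
# at an arbitrary sample angle.
phi = 0.7
lhs = cmath.exp(1j * phi)
rhs = complex(math.cos(phi), math.sin(phi))

print(abs(lhs - rhs))   # -> ~0 (float rounding only)
```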


1.4 Argument, Magnitude & Conjugate

For a complex number, z, in polar coordinates, it is customary to call the angle the argument of z, denoted ϕ = arg(z), while r is known as the magnitude of z, denoted r = |z|. The complex conjugate is an operation which flips the sign of the imaginary part of the complex number on which it acts. For z = x + iy, the complex conjugate, denoted z*, is simply

z* = x − iy.

Figure 2: The geometric interpretation of the complex conjugate.

In polar coordinates, flipping the sign of the argument flips the sign of the imaginary part, since ℑ(z) = r sin ϕ and sine is an odd function: r sin(−ϕ) = −r sin ϕ = −ℑ(z). Taking the product of two complex numbers in polar coordinates,

z1 · z2 = r1 r2 e^{i(ϕ1+ϕ2)}.

From this it is easily seen that the geometric interpretation of the complex product is that it multiplies the magnitudes and adds the arguments. Taking the product of a complex number with its complex conjugate, we find

z · z* = r² = |z|²,

giving us a relation for the magnitude of a complex number.

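Both relations, z · z* = |z|² and arg(z*) = −arg(z), can be spot-checked with the standard library; the sample point is ours.

```python
import cmath

# Check z * conj(z) = |z|^2 and that conjugation flips the argument,
# at the arbitrary sample point z = 3 - 4i.
z = complex(3.0, -4.0)

print(z * z.conjugate())   # -> (25+0j), i.e. |z|^2 = 5^2
print(cmath.phase(z.conjugate()), -cmath.phase(z))   # -> equal values
```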


2 Differentiation

One might expect to define the complex derivative as in the case of the reals. However, there is an additional complication: because C forms a plane, we must specify a direction for the differential element h when taking the derivative, just as for the directional derivative in R². In general, the derivative at a single point can take on many values depending on the direction we choose to step. But if we restrict attention to holomorphic (or analytic) functions, those having a single-valued derivative at every point in some region, we are in luck.

2.1 The Cauchy-Riemann Equations

Following the definition of the derivative,

df/dz = lim_{h→0} [f(z + h) − f(z)] / h,

if we pick h to be real,

df/dz = lim_{h→0} [f(x + h, y) − f(x, y)] / h = ∂f/∂x.

If we pick h to be imaginary, h = il, l ∈ R,

df/dz = lim_{l→0} [f(x, y + l) − f(x, y)] / (il) = (1/i) ∂f/∂y.

Defining

u ≡ ℜ(f(z)),   v ≡ ℑ(f(z))

and requiring that the derivative be the same in all directions,

∂u/∂x + i ∂v/∂x = (1/i) (∂u/∂y + i ∂v/∂y).

The Cauchy-Riemann equations then read

∂u/∂x = ∂v/∂y,   ∂u/∂y = −∂v/∂x.

If the partial derivatives of a function are continuous in some region and satisfy the Cauchy-Riemann equations in that region, then the function is holomorphic in that region.
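The equations can be verified numerically for a concrete holomorphic function; the choice f(z) = z², with u = x² − y² and v = 2xy, is our own worked example, checked by central finite differences.

```python
# Finite-difference check of the Cauchy-Riemann equations for f(z) = z^2,
# whose real and imaginary parts are u = x^2 - y^2 and v = 2xy.
h = 1e-6
def u(x, y): return x * x - y * y
def v(x, y): return 2 * x * y

x0, y0 = 1.2, -0.8
ux = (u(x0 + h, y0) - u(x0 - h, y0)) / (2 * h)   # du/dx
uy = (u(x0, y0 + h) - u(x0, y0 - h)) / (2 * h)   # du/dy
vx = (v(x0 + h, y0) - v(x0 - h, y0)) / (2 * h)   # dv/dx
vy = (v(x0, y0 + h) - v(x0, y0 - h)) / (2 * h)   # dv/dy

print(ux - vy, uy + vx)   # -> both ~0, as the equations require
```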



3 Integration

3.1 Cauchy's Differentiation Formula

f^(n)(a) = (n!/(2πi)) ∮ f(z)/(z − a)^{n+1} dz
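As a numerical illustration, Cauchy's differentiation formula can be checked for f(z) = e^z by discretising the contour integral on a circle around a; the helper name and sample values below are our own choices.

```python
import cmath
import math

# Numerical check of Cauchy's differentiation formula for f(z) = exp(z):
# f^(n)(a) = n!/(2*pi*i) * closed contour integral of f(z)/(z - a)^(n+1) dz,
# discretised as an equispaced Riemann sum on a circle of radius r around a
# (spectrally accurate for periodic integrands).
def cauchy_derivative(f, a, n, r=1.0, points=2000):
    total = 0j
    for k in range(points):
        theta = 2 * math.pi * k / points
        z = a + r * cmath.exp(1j * theta)
        dz = 1j * r * cmath.exp(1j * theta) * (2 * math.pi / points)
        total += f(z) / (z - a)**(n + 1) * dz
    return math.factorial(n) / (2j * math.pi) * total

# The second derivative of exp at 0.3 should come out as exp(0.3).
print(cauchy_derivative(cmath.exp, 0.3, 2))
```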

4 Applications

4.1 Fourier Series: Complex Form

Complex exponential functions give a convenient way to rewrite Fourier series. Considering Fourier series in complex form allows us to work with one set of coefficients {c_n}_{n=−∞}^∞, where all the coefficients are given by a single formula. Then

a_n cos nx + b_n sin nx = (1/π) ∫_{−π}^{π} f(x) cos nx dx · (e^{inx} + e^{−inx})/2
                        + (1/π) ∫_{−π}^{π} f(x) sin nx dx · (e^{inx} − e^{−inx})/(2i)
                        = e^{inx} (1/2π) ∫_{−π}^{π} f(x) e^{−inx} dx + e^{−inx} (1/2π) ∫_{−π}^{π} f(x) e^{inx} dx,

and

a_0/2 = (1/2π) ∫_{−π}^{π} f(x) dx.

Let us now define the numbers

c_n = (1/2π) ∫_{−π}^{π} f(x) e^{−inx} dx,   n ∈ Z.

When speaking of the Fourier series of f in complex form, one simply means the infinite series appearing in

f ∼ ∑_{n=−∞}^∞ c_n e^{inx}.

Note. There is an easy way to come from the Fourier coefficients with respect to sine and cosine functions to the coefficients for the Fourier series in complex form. In fact,

c_0 = a_0/2

and, for n ∈ N,

c_n = (a_n − ib_n)/2,   c_{−n} = (a_n + ib_n)/2.

On the other hand, if the Fourier coefficients in complex form are known, then

a_0 = 2c_0,   a_n = c_n + c_{−n},   b_n = i(c_n − c_{−n}).
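The two conversion formulas invert each other, which can be checked mechanically; the sample coefficient values below are arbitrary.

```python
# Mechanical check of the conversion between real and complex Fourier
# coefficients, for arbitrary sample values of a_n and b_n.
a_n, b_n = 0.8, -1.3
c_pos = (a_n - 1j * b_n) / 2      # c_n
c_neg = (a_n + 1j * b_n) / 2      # c_{-n}

print(c_pos + c_neg)              # -> recovers a_n
print(1j * (c_pos - c_neg))       # -> recovers b_n
```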


4.1.1 Parseval's Theorem

Assume that the function f ∈ L²(−π, π) has the Fourier coefficients {a_n}_{n=0}^∞, {b_n}_{n=1}^∞ or, in complex form, {c_n}_{n=−∞}^∞. Then

(1/2π) ∫_{−π}^{π} |f(x)|² dx = |a_0|²/4 + (1/2) ∑_{n=1}^∞ (|a_n|² + |b_n|²) = ∑_{n=−∞}^∞ |c_n|².

As a consequence of Parseval's theorem, note that if f ∈ L²(−π, π) then

∑_{n=−∞}^∞ |c_n|² < ∞.

The converse also holds: if

∑_{n=−∞}^∞ |c_n|² < ∞

then

f(x) = ∑_{n=−∞}^∞ c_n e^{inx}

defines a function in L²(−π, π).
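Parseval's theorem can be illustrated with a concrete test function; the choice f(x) = x and its coefficients below are our own worked example, not from the text.

```python
import math

# Parseval check for the sample function f(x) = x on (-pi, pi):
# its complex Fourier coefficients are c_0 = 0 and c_n = i(-1)^n / n (n != 0),
# so sum |c_n|^2 = 2 * sum_{n>=1} 1/n^2, while (1/2pi) * int |f|^2 dx = pi^2/3.
lhs = math.pi**2 / 3
rhs = 2 * sum(1 / n**2 for n in range(1, 100_001))

print(lhs - rhs)   # -> ~2e-5 (truncation tail of the partial sum)
```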

4.2 Virasoro Algebra: Cauchy's Integral Theorem

We know

L_m = ∮ z^{m+1} T(z) dz/(2πi)

and

L_n = ∮ ω^{n+1} T(ω) dω/(2πi).

Then

[L_m, L_n] = ∮ dz/(2πi) z^{m+1} (T(z) L_n − L_n T(z))
           = ∮_{|z|>|ω|} dz/(2πi) dω/(2πi) z^{m+1} ω^{n+1} T(z)T(ω) − ∮_{|ω|>|z|} dz/(2πi) dω/(2πi) z^{m+1} ω^{n+1} T(ω)T(z)
           = ∮_0 dω/(2πi) ∮_ω dz/(2πi) z^{m+1} ω^{n+1} [ (c/2)/(z − ω)⁴ + 2T(ω)/(z − ω)² + ∂T(ω)/(z − ω) ]
           = ∮ dω/(2πi) ω^{n+1} [ (1/3!) d³/dz³ ((c/2) z^{m+1}) + (d/dz z^{m+1}) 2T(ω) + z^{m+1} T′(ω) ]_{z=ω}
           = ∮ dω/(2πi) [ (c/12) m(m+1)(m−1) ω^{m+n−1} + 2(m+1) ω^{m+n+1} T(ω) + ω^{m+n+2} T′(ω) ]
           = (c/12) m(m+1)(m−1) δ_{m+n,0} + 2(m+1) L_{m+n} − (m+n+2) L_{m+n}
           = (c/12) m(m² − 1) δ_{m+n,0} + (m − n) L_{m+n}.

4.3 Some Properties of the Riemann Zeta Function

4.3.1 Introduction

Leonhard Euler lived from 1707 to 1783 and is, without a doubt, one of the most influential mathematicians of all time. His work covered many areas of mathematics including algebra, trigonometry, graph theory, mechanics and, most relevantly, analysis.


Although Euler demonstrated an obvious genius when he was a child, it took until 1735 for his talents to be fully recognised. It was then that he solved what was known as the Basel problem, a problem set in 1644 and named after Euler's home town [?]. This problem asks for an exact expression of the limit of

∑_{n=1}^∞ 1/n² = 1 + 1/4 + 1/9 + 1/16 + ...,   (1)

which Euler calculated to be exactly equal to π²/6. Going beyond this, he also calculated that

∑_{n=1}^∞ 1/n⁴ = 1 + 1/16 + 1/81 + 1/256 + ... = π⁴/90,   (2)

among other specific values of a series that later became known as the Riemann zeta function, which is classically defined in the following way.

Definition 4.1 For ℜ(s) > 1, the Riemann zeta function is defined as

ζ(s) = ∑_{n=1}^∞ 1/n^s = 1 + 1/2^s + 1/3^s + 1/4^s + ...

This allows us to write the sums in equations (1) and (2) simply as ζ(2) and ζ(4) respectively. A few years later Euler constructed a general proof that gave exact values for all ζ(2n) for n ∈ N. These were the first instances of the special values of the zeta function and are still amongst the most interesting. However, when they were discovered, it was still unfortunately the case that analysis of ζ(s) was restricted only to the real numbers. It wasn't until the work of Bernhard Riemann that the zeta function was fully extended to all of the complex numbers by the process of analytic continuation, and it is for this reason that the function is commonly referred to as the Riemann zeta function. From this, we are able to calculate more special values of the zeta function and understand its nature more completely. We will be discussing some classical values of the zeta function as well as some more modern ones, and will require no more than an undergraduate understanding of analysis (especially Taylor series of common functions) and complex numbers. The only tool that we will be using extensively in addition to these will be the 'Big Oh' notation, which we shall now define.

Definition 4.2 We say that f(x) = g(x) + O(h(x)) as x → k if there exists a constant C > 0 such that |f(x) − g(x)| ≤ C|h(x)| when x is close enough to k.

This may seem a little alien at first and it is true that, to the uninitiated, it can take a little while to digest. However, its use is simpler than its definition would suggest, and so we will move on to more important matters.

4.3.2 The Euler Product Formula for ζ(s)

We begin with a convergence criterion for infinite products.

Theorem 4.3 If ∑ |a_n| converges then ∏ (1 + a_n) converges.

Although we will not give a complete proof here, one is referred to [?]. However, it is worth noting that

∏_{n=1}^m (1 + a_n) = exp{ ∑_{n=1}^m ln(1 + a_n) }.

From this it can be seen that, for the product to converge, it is sufficient that the summation on the right-hand side converges. The summation ∑ ln(1 + a_n) converges absolutely if ∑ |a_n| converges. This is the conceptual idea that the proof is rooted in.

Now that we have this tool, we can prove the famous Euler product formula for ζ(s). This relation makes use of the Fundamental Theorem of Arithmetic, which says that every integer can be expressed as a unique product of primes. We will not prove it here, but the interested reader can consult [?] for a proof.


Theorem 4.4 Let p denote the prime numbers. For ℜ(s) > 1,

ζ(s) = ∏_p (1 − p^{−s})^{−1}.

Proof. Observe that

(1/2^s) ζ(s) = 1/2^s + 1/4^s + 1/6^s + 1/8^s + 1/10^s + ...

Then, we can make the subtraction

ζ(s) (1 − 1/2^s) = (1 + 1/2^s + 1/3^s + 1/4^s + 1/5^s + ...) − (1/2^s + 1/4^s + 1/6^s + 1/8^s + 1/10^s + ...)
                 = 1 + 1/3^s + 1/5^s + 1/7^s + 1/9^s + ...

Clearly, this has just removed from ζ(s) any terms that have a factor of 2^{−s}. We can then take this a step further to see that

ζ(s) (1 − 1/2^s)(1 − 1/3^s) = 1 + 1/5^s + 1/7^s + 1/11^s + 1/13^s + ...

If we continue this process of siphoning off primes we can see that, by the Fundamental Theorem of Arithmetic,

ζ(s) ∏_p (1 − p^{−s}) = 1,

which requires only a simple rearrangement to see that

ζ(s) = ∏_p (1 − p^{−s})^{−1}.

Note that, for ℜ(s) > 1, this converges because

∑_p p^{−ℜ(s)} < ∑_n n^{−ℜ(s)},

which converges. This completes the proof.
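The Euler product converges quite quickly for ℜ(s) > 1, which can be illustrated numerically at s = 2; the sieve helper and the cutoffs below are our own arbitrary choices.

```python
import math

# Partial Euler product over primes <= limit, compared with zeta(2) = pi^2/6.
def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n**0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [p for p, is_prime in enumerate(sieve) if is_prime]

def euler_product(s, limit):
    prod = 1.0
    for p in primes_up_to(limit):
        prod *= 1.0 / (1.0 - p**(-s))
    return prod

print(euler_product(2, 10_000), math.pi**2 / 6)   # -> close agreement
```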

4.3.3 The Bernoulli Numbers

We will now move on to the study of the Bernoulli numbers, a sequence of rational numbers that pops up frequently when considering the zeta function. We are interested in them because they are intimately related to some special values of the zeta function and are present in some rather remarkable identities. We already have an understanding of Taylor series and the analytic power that they provide, and so we can now begin with the definition of the Bernoulli numbers. This section will follow Chapter 6 in [?].

Definition 4.5 The Bernoulli numbers B_n are defined to be the coefficients in the series expansion

x/(e^x − 1) = ∑_{n=0}^∞ B_n x^n / n!.

It is a result from complex analysis that this series converges for |x| < 2π but, other than this, we cannot gain much of an understanding from the implicit definition. Note also that, although the left-hand side would appear to become infinite at x = 0, it does not.


Corollary 4.6 We can calculate the Bernoulli numbers by the recursion formula

0 = ∑_{j=0}^{k−1} C(k, j) B_j   for each k ≥ 2,

where B_0 = 1 and C(k, j) denotes the binomial coefficient.

Proof. * Let us first replace e^x − 1 with its Taylor series to see that

x = ( ∑_{j=1}^∞ x^j / j! ) ( ∑_{n=0}^∞ B_n x^n / n! ).

If we compare coefficients of powers of x we can clearly see that, except for x¹,

x^k :  0 = B_0/(0! k!) + B_1/(1! (k−1)!) + B_2/(2! (k−2)!) + ... + B_{k−1}/((k−1)! 1!).

Hence

0 = ∑_{j=0}^{k−1} B_j / ((k − j)! j!) = (1/k!) ∑_{j=0}^{k−1} C(k, j) B_j.

Note that the overall 1/k! factor is irrelevant to the recursion formula. This completes the proof.

The first few Bernoulli numbers are therefore

B_0 = 1, B_1 = −1/2, B_2 = 1/6, B_3 = 0, B_4 = −1/30, B_5 = 0, B_6 = 1/42, B_7 = 0.

Lemma 4.7 The values of the odd Bernoulli numbers (except B_1) are zero.

Proof. * As we know the values of B_0 and B_1, we can remove the first two terms from Definition 4.5 and rearrange to get

x/(e^x − 1) + x/2 = 1 + ∑_{n=2}^∞ B_n x^n / n!,

which then simplifies to give

(x/2) (e^x + 1)/(e^x − 1) = 1 + ∑_{n=2}^∞ B_n x^n / n!.

We can then multiply both the numerator and denominator of the left-hand side by exp(−x/2) to get

(x/2) (e^{x/2} + e^{−x/2})/(e^{x/2} − e^{−x/2}) = 1 + ∑_{n=2}^∞ B_n x^n / n!.   (3)

By substituting x → −x into the left-hand side of this equation we can see that it is an even function, and hence invariant under this transformation. Hence, as the odd Bernoulli numbers multiply odd powers of x, the right-hand side can only be invariant under the same transformation if the values of the odd coefficients are all zero.
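The recursion in Corollary 4.6 is easy to run mechanically; the sketch below (function name ours) solves it for B_{k−1} at each step in exact rational arithmetic.

```python
from fractions import Fraction
from math import comb

# Bernoulli numbers from the recursion 0 = sum_{j=0}^{k-1} C(k,j) B_j (k >= 2),
# solved for B_{k-1} at each step.
def bernoulli(n):
    B = [Fraction(1)]                     # B_0 = 1
    for k in range(2, n + 2):
        B.append(-sum(comb(k, j) * B[j] for j in range(k - 1)) / comb(k, k - 1))
    return B[:n + 1]

print([str(b) for b in bernoulli(7)])
# -> ['1', '-1/2', '1/6', '0', '-1/30', '0', '1/42', '0']
```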

4.3.4 Relationship to the Zeta Function

As we have already discussed, Euler found a way of calculating exact values of ζ(2n) for n ∈ N. He did this using the properties of the Bernoulli numbers, although he originally did it using the infinite product for the sine function. The relationship between the zeta function and the Bernoulli numbers is not obvious, but the proof of it is quite satisfying.

Theorem 4.8 For n ∈ N,

ζ(2n) = (−1)^{n−1} (2π)^{2n} B_{2n} / (2 (2n)!).


To prove this theorem, we will use the original proof attributed to Euler and reproduced in [?]. This will be done by finding two separate expressions for z cot(z) and then comparing them. We will be using a real analytic proof, which is slightly longer than a complex analytic proof, an example of which can be found in [?].

Lemma 4.9 The function z cot(z) has the Taylor expansion

z cot(z) = 1 + ∑_{n=1}^∞ (−4)^n B_{2n} z^{2n} / (2n)!.   (4)

Proof. Substitute x = 2iz into equation (3) and observe that, because the odd Bernoulli numbers are zero, we can write this as

iz (e^{iz} + e^{−iz})/(e^{iz} − e^{−iz}) = 1 + ∑_{n=1}^∞ (−4)^n B_{2n} z^{2n} / (2n)!.

Noting that the left-hand side is equal to z cot(z) completes the proof.

Lemma 4.10 The function cot(z) can be written as

cot(z) = cot(z/2^n)/2^n − tan(z/2^n)/2^n + (1/2^n) ∑_{j=1}^{2^{n−1}−1} [ cot((z + jπ)/2^n) + cot((z − jπ)/2^n) ].   (5)

Proof. Recall that

2 cot(2z) = cot(z) + cot(z + π/2).

If we continually iterate this formula we will find that

cot(z) = (1/2^n) ∑_{j=0}^{2^n − 1} cot((z + jπ)/2^n),

which can be proved by induction. Removing the j = 0 and j = 2^{n−1} terms and recalling that cot(z + π/2) = −tan(z) gives us

cot(z) = cot(z/2^n)/2^n − tan(z/2^n)/2^n + (1/2^n) [ ∑_{j=1}^{2^{n−1}−1} cot((z + jπ)/2^n) + ∑_{j=2^{n−1}+1}^{2^n − 1} cot((z + jπ)/2^n) ].

All we have to do now is observe that, as cot(z + π) = cot(z), we can say that

∑_{j=2^{n−1}+1}^{2^n − 1} cot((z + jπ)/2^n) = ∑_{j=1}^{2^{n−1}−1} cot((z − jπ)/2^n),

which completes the proof.

Lemma 4.11 The function z cot(z) can therefore be expressed as

z cot(z) = 1 − 2 ∑_{j=1}^∞ z² / (j²π² − z²).   (6)

Proof. In order to obtain this, we first multiply both sides of equation (5) by z to get

z cot(z) = (z/2^n) cot(z/2^n) − (z/2^n) tan(z/2^n) + ∑_{j=1}^{2^{n−1}−1} (z/2^n) [ cot((z + jπ)/2^n) + cot((z − jπ)/2^n) ].   (7)

Let us now take the limit of the right-hand side as n tends to infinity. First recall that the Taylor series for x cot(x) and x tan(x) can respectively be expressed as

x cot(x) = 1 + O(x²)

and

x tan(x) = x² + O(x⁴).

Hence, if we substitute x = z/2^n into both of these, we can see that

lim_{n→∞} (z/2^n) cot(z/2^n) = 1   (8)

and

lim_{n→∞} (z/2^n) tan(z/2^n) = 0.   (9)

Now we have dealt with the expressions outside the summation, and so we need to consider the ones inside. To make things slightly easier for the moment, let us consider both of the expressions at the same time. Using Taylor series again, we can see that

(z/2^n) cot((z ± jπ)/2^n) = z/(z ± jπ) + O(4^{−n}).   (10)

Substituting equations (8), (9) and (10) into the right-hand side of equation (7) gives

z cot(z) = 1 + lim_{n→∞} ∑_{j=1}^{2^{n−1}−1} [ z/(z + jπ) + z/(z − jπ) + O(4^{−n}) ],

which, since z/(z + jπ) + z/(z − jπ) = −2z²/(j²π² − z²), simplifies a little to give

z cot(z) = 1 − 2 ∑_{j=1}^∞ z²/(j²π² − z²) + lim_{n→∞} ∑_{j=1}^{2^{n−1}−1} O(4^{−n}).

By Definition 4.2, it can be seen that

| ∑_{j=1}^{2^{n−1}−1} O(4^{−n}) | ≤ C (2^{n−1} − 1) 4^{−n},

which clearly converges to zero as n → ∞, thus completing the proof.

Lemma 4.12 For |z| < π, z cot(z) has the expansion

z cot(z) = 1 − 2 ∑_{n=1}^∞ ζ(2n) z^{2n} / π^{2n}.   (11)

Proof. Take the summand of equation (6) and multiply both its numerator and denominator by (jπ)^{−2} to obtain

z cot(z) = 1 − 2 ∑_{j=1}^∞ (z/jπ)² / (1 − (z/jπ)²).

But we can note that the summand can be expanded as an infinite geometric series. Hence we can write this as

z cot(z) = 1 − 2 ∑_{j=1}^∞ ∑_{n=1}^∞ (z/jπ)^{2n},

which

= 1 − 2 ∑_{n=1}^∞ (z/π)^{2n} ζ(2n)

as long as the geometric series converges (i.e. |z| < π). Note that exchanging the summations in such a way is valid as both of the series are absolutely convergent.


Now we can complete the proof of Theorem 4.8 by equating equations (4) and (11) to see that

1 + ∑_{n=1}^∞ (−4)^n B_{2n} z^{2n} / (2n)! = 1 − 2 ∑_{n=1}^∞ ζ(2n) z^{2n} / π^{2n}.

If we then strip away the 1 terms and compare the summands term by term, we obtain the identity

(−4)^n B_{2n} z^{2n} / (2n)! = −2 ζ(2n) z^{2n} / π^{2n},

which rearranges to complete the proof of Theorem 4.8 as required.

Now that we have proven this beautiful formula (thanks again, Euler) we can use it to calculate the values of the zeta function at the positive even integers. First, let us rewrite the result of Theorem 4.8 as

ζ(2n) = (2π)^{2n} |B_{2n}| / (2 (2n)!).

From this, we can easily calculate specific values such as

ζ(6) = ∑_{n=1}^∞ 1/n⁶ = (2π)⁶ |B_6| / (2 · 6!) = π⁶/945,

ζ(8) = ∑_{n=1}^∞ 1/n⁸ = (2π)⁸ |B_8| / (2 · 8!) = π⁸/9450,

etc. This is a beautiful formula, and it is unfortunate that no similar formula has been discovered for ζ(2n + 1). However, that's not to say that there aren't interesting results regarding these values! There have been recent results concerning the values of the zeta function at odd integers: for example, Apéry's proof of the irrationality of ζ(3) in 1979, or Matilde Lalín's integral representations of ζ(3) and ζ(5) by the use of Mahler measures. Mercifully, special values for ζ(−2n) and ζ(−2n + 1) have been found, the latter of which also involves Bernoulli numbers! We will, however, have to take a whirlwind tour through some of the properties of the Gamma function in order to get to them.
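Theorem 4.8 can be sanity-checked numerically with the tabulated Bernoulli numbers; the cutoffs below are arbitrary.

```python
import math

# Check zeta(2n) = (2*pi)^(2n) |B_{2n}| / (2 (2n)!) for n = 3, 4,
# using |B_6| = 1/42 and |B_8| = 1/30, against the closed forms
# pi^6/945 and pi^8/9450 and a direct partial sum.
zeta6 = (2 * math.pi)**6 * (1 / 42) / (2 * math.factorial(6))
zeta8 = (2 * math.pi)**8 * (1 / 30) / (2 * math.factorial(8))

print(zeta6 - math.pi**6 / 945)                        # -> ~0
print(zeta8 - sum(n**-8 for n in range(1, 10_000)))    # -> tiny truncation error
```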

4.3.5 The Gamma Function

The Gamma function is closely related to the zeta function and as such warrants some exploration of its more relevant properties. We will only be discussing a few of the qualities of this function, but the reader should note that it has many applications in statistics (Gamma and Beta distributions) and orthogonal functions (Bessel functions). It was first investigated by Euler when he was considering factorials of non-integer numbers, and it was through this study that many of its classical properties were established. This section will follow Chapter 8 in [?] with sprinklings from [?]. First, let us begin with a definition from Euler.

Definition 4.13 For ℜ(s) > 0 we define Γ(s) as

Γ(s) = ∫_0^∞ t^{s−1} e^{−t} dt.

Now, this function initially looks rather daunting and irrelevant. We will see, however, that it does have many fascinating properties. Among the most basic are the following two identities.

Corollary 4.14 Γ(s) has the recursive property

Γ(s + 1) = s Γ(s).   (12)


Proof. We can prove this by performing a basic integration by parts on the Gamma function. Note that

Γ(s + 1) = ∫_0^∞ t^s e^{−t} dt = [−t^s e^{−t}]_0^∞ + s ∫_0^∞ t^{s−1} e^{−t} dt = 0 + s Γ(s)

as required.

Corollary 4.15 For n ∈ N,

Γ(n + 1) = n!.   (13)

Proof. Just consider Corollary 4.14 and iterate to complete the proof.

It is this property that really begins to tell us something interesting about the Gamma function. Now that we know that the function calculates factorials for integer values, we can use it to 'fill in' non-integer values, which is the reason why Euler introduced it.

Remark. We can use the fact that Γ(s) = Γ(s + 1)/s to see that, as s tends to 0, Γ(s) → ∞. We can also use this recursive relation to prove that the Gamma function has poles at all of the negative integers. However, the more beautiful proof of this is to come in Section 6.

Lemma 4.16 We can calculate that

Γ(3/2) = √π / 2.

Proof. We observe (after the substitution t = x², followed by an integration by parts) that

Γ(3/2) = ∫_0^∞ exp(−x²) dx.   (14)

This is an important result that is the foundation of the normal distribution, and it is also easily calculable. We do this by first considering the double integral

I = ∫_{−∞}^∞ ∫_{−∞}^∞ exp(−x² − y²) dx dy.

We switch to polar coordinates using the change of variables x = r cos(θ), y = r sin(θ). Noting that dx dy = r dr dθ, we have

I = ∫_0^{2π} ∫_0^∞ r exp(−r²) dr dθ = π ∫_0^∞ 2r exp(−r²) dr = π [−exp(−r²)]_0^∞ = π.

We can then separate the original double integral into two separate integrals to obtain

I = ∫_{−∞}^∞ exp(−x²) dx · ∫_{−∞}^∞ exp(−y²) dy = π.

Noting that the two integrals are identical, and that the integrands are both even functions, we can see that integrating one of them from zero to infinity completes the proof as required.

Corollary 4.17 Consider the double factorial n!₂ = n(n − 2)(n − 4)···, which terminates at 1 or 2 depending on whether n is odd or even respectively. Then for n ∈ N,

Γ((2n + 1)/2) = √π (2n − 1)!₂ / 2^n.

Proof. * We will prove this by induction. Consider that

Γ((2(n + 1) + 1)/2) = Γ((2n + 3)/2) = ((2n + 1)/2) Γ((2n + 1)/2) = ((2n + 1)/2) · √π (2n − 1)!₂ / 2^n = √π (2n + 1)!₂ / 2^{n+1}.

Noting that the leftmost and rightmost expressions are equal by definition completes the proof.


Remark. We can use the recursion Γ(s + 1) = s Γ(s) to see, for example, that

Γ(5/2) = 3√π/4,   Γ(7/2) = 15√π/8,

etc.

Corollary 4.18 We can compute the 'factorial' of −1/2 as

Γ(−1/2) = −2√π.

Proof. We can rework Corollary 4.14 to show that

Γ(s) = Γ(s + 1)/s,

so Γ(−1/2) = Γ(1/2)/(−1/2) = −2√π, since Γ(1/2) = 2Γ(3/2) = √π.
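These half-integer values can be confirmed against the standard-library implementation of the Gamma function.

```python
import math

# Check the tabulated half-integer values against math.gamma:
# Gamma(3/2) = sqrt(pi)/2, Gamma(5/2) = 3 sqrt(pi)/4,
# Gamma(7/2) = 15 sqrt(pi)/8, Gamma(-1/2) = -2 sqrt(pi).
sqrt_pi = math.sqrt(math.pi)

print(math.gamma(1.5) - sqrt_pi / 2)        # -> ~0
print(math.gamma(2.5) - 3 * sqrt_pi / 4)    # -> ~0
print(math.gamma(3.5) - 15 * sqrt_pi / 8)   # -> ~0
print(math.gamma(-0.5) + 2 * sqrt_pi)       # -> ~0
```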

4.3.6 The Euler Reflection Formula

This chapter will use a slightly different definition of the Gamma function and will follow source [?]. First let us consider the definition of the very important Euler constant γ.

Definition 4.19 Euler's constant γ is defined as

γ = lim_{m→∞} ( 1 + 1/2 + 1/3 + ... + 1/m − log(m) ).

We will then use Gauss' definition for the Gamma function, which can be written as follows.

Definition 4.20 For s > 0 we can define

Γ_h(s) = h! h^s / (s(s + 1)···(s + h)) = h^s / (s(1 + s)(1 + s/2)···(1 + s/h))

and

Γ(s) = lim_{h→∞} Γ_h(s).

This does not seem immediately obvious, but the relationship is true and is proven for ℜ(s) > 0 in [?]. So now that we have these definitions, we can work on a well-known theorem.

Theorem 4.21 The Gamma function can be written as the following infinite product:

1/Γ(s) = s e^{γs} ∏_{n=1}^∞ (1 + s/n) e^{−s/n}.

Proof. Before we start with the derivation, let us note that the infinite product is convergent because the exponential term forces it. Now that we have cleared that from our conscience, we will begin by using Definition 4.20 and say that

Γ_h(s) = h^s / (s(1 + s)(1 + s/2)···(1 + s/h)).

Now we can also see that

h^s = exp(s log(h)) = exp{ s ( log(h) − 1 − 1/2 − ... − 1/h ) } exp{ s ( 1 + 1/2 + ... + 1/h ) }.


We can then observe that

Γ_h(s) = (1/s) · (e^s/(1 + s)) · (e^{s/2}/(1 + s/2)) ··· (e^{s/h}/(1 + s/h)) · exp{ s ( log(h) − 1 − 1/2 − ... − 1/h ) },

which we can write as the product

Γ_h(s) = exp{ s ( log(h) − 1 − 1/2 − ... − 1/h ) } (1/s) ∏_{n=1}^h e^{s/n} / (1 + s/n).

All we need to do now is to take the limit of this as h tends to infinity and use Definition 4.19 to prove the theorem as required.

This theorem is very interesting as it allows us to prove two really quite beautiful identities, known as the Euler reflection formulae. But before we do this, we are going to need another way of dealing with the sine function. It should be noted that the method of approach we are going to use is not completely rigorous. However, it can be proven rigorously using the Weierstrass Factorisation Theorem, a discussion of which can be found in [?].

Theorem 4.22 The sine function has the infinite product

sin(πs) = πs ∏_{n=1}^∞ (1 − s²/n²).

Theorem 4.23 The Gamma function has the following reflective relation:

1/(Γ(s)Γ(1 − s)) = sin(πs)/π.

Proof. We can use Theorem 4.21 to see that

1/(Γ(s)Γ(−s)) = −s² e^{γs−γs} ∏_{n=1}^∞ ((n + s)/n)((n − s)/n) e^{s/n−s/n} = −s² ∏_{n=1}^∞ (n² − s²)/n².

We can then use Corollary 4.14 to show that, as Γ(1 − s) = −s Γ(−s),

1/(Γ(s)Γ(1 − s)) = s ∏_{n=1}^∞ (n² − s²)/n².

Comparing this to Theorem 4.22 then completes the proof as required.

Corollary 4.24 The Gamma function also has the reflectional formula

1/(Γ(s)Γ(−s)) = −s sin(πs)/π.

Proof. This can easily be shown using a slight variation of the previous proof. However, an alternate proof can be constructed by considering Theorem 4.23 and Corollary 4.14.
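Theorem 4.23 is easy to spot-check numerically with the standard-library Gamma function; the sample points below are arbitrary non-integers.

```python
import math

# Spot-check of the reflection formula Gamma(s) Gamma(1 - s) = pi / sin(pi s)
# at a few arbitrary non-integer points.
for s in (0.25, 0.5, 1.3, -0.7):
    lhs = math.gamma(s) * math.gamma(1 - s)
    rhs = math.pi / math.sin(math.pi * s)
    print(s, lhs - rhs)   # -> differences at float-rounding level
```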

4.4 The Hurwitz Zeta Function

Definition 4.25 For 0 < a ≤ 1 and ℜ(s) > 1, we define ζ(s, a), the Hurwitz zeta function, as

ζ(s, a) = ∑_{n=0}^∞ 1/(n + a)^s.

Remark. It is obvious that ζ(s, 1) = ζ(s), and from this we can see that if we can prove results for the Hurwitz zeta function that are valid when a = 1, then we obtain results for the regular zeta function automatically. Let us then begin with our first big result.


Theorem 4.26 For ℜ(s) > 1, the Hurwitz zeta function can be expressed as the infinite integral

ζ(s, a) = (1/Γ(s)) ∫_0^∞ x^{s−1} e^{−ax} / (1 − e^{−x}) dx.   (15)

Corollary 4.27 We have the identity

ζ(s) = (1/Γ(s)) ∫_0^∞ x^{s−1} / (e^x − 1) dx.

Proof 4.28 Simply substitute a = 1 into equation (15) to complete the proof.

Now we will prove a slight variation of this integral.

Proposition 4.29 The following identity holds for the zeta function:

(2^s − 1) ζ(s) = ζ(s, 1/2) = (2^s/Γ(s)) ∫_0^∞ x^{s−1} e^x / (e^{2x} − 1) dx.

Proof 4.30 First note that

(2^s − 1) ζ(s) = 2^s ∑_{n=1}^∞ 1/n^s − ∑_{n=1}^∞ 1/n^s = ( 2^s + 1 + (2/3)^s + (2/4)^s + (2/5)^s + ... ) − ( 1 + 1/2^s + 1/3^s + ... ),

and, since (2/2k)^s = 1/k^s, the surviving terms are

= 2^s + (2/3)^s + (2/5)^s + (2/7)^s + ... = ζ(s, 1/2).

We have that

ζ(s, 1/2) = (1/Γ(s)) ∫_0^∞ x^{s−1} e^{−x/2} / (1 − e^{−x}) dx.

We can then multiply the numerator and denominator by exp(x/2) and perform the substitution x = 2y to obtain the identity

ζ(s, 1/2) = (1/Γ(s)) ∫_0^∞ (2y)^{s−1} e^y / (e^{2y} − 1) · 2 dy = (2^s/Γ(s)) ∫_0^∞ y^{s−1} e^y / (e^{2y} − 1) dy

as required.
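Corollary 4.27 can be illustrated numerically at s = 2, where the integral should equal Γ(2) ζ(2) = π²/6; the quadrature scheme, cutoff and step count below are our own crude choices.

```python
import math

# Crude numerical check of zeta(s) = (1/Gamma(s)) int_0^inf x^(s-1)/(e^x - 1) dx
# at s = 2: the integral int_0^inf x/(e^x - 1) dx should equal pi^2/6.
def integral(s, upper=40.0, steps=40_000):
    h = upper / steps
    total = 0.0
    for k in range(steps):
        x = (k + 0.5) * h                  # midpoint rule
        total += x**(s - 1) / math.expm1(x) * h
    return total

print(integral(2), math.pi**2 / 6)   # -> nearly equal
```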

4.4.1 Variants of the Hurwitz Zeta Function

There are many variants of the Hurwitz zeta function, and we will only prove identities involving the more obvious ones.

Definition 4.31 The alternating Hurwitz zeta function, written ζ̄(s, a) to distinguish it from ζ(s, a), is defined to be

ζ̄(s, a) = ∑_{n=0}^∞ (−1)^n / (n + a)^s.

Definition 4.32 We define the alternating zeta function as ζ̄(s) = ζ̄(s, 1).

Theorem 4.33 For ℜ(s) > 1, the alternating Hurwitz zeta function can also be written as

ζ̄(s, a) = (1/Γ(s)) ∫_0^∞ x^{s−1} e^{−ax} / (1 + e^{−x}) dx.


Corollary 4.34 The alternating zeta function can also be written as

ζ̄(s) = (1 − 2^{1−s}) ζ(s) = (1/Γ(s)) ∫_0^∞ x^{s−1} / (e^x + 1) dx.

Proof 4.35 We have already done the hard work in Theorem 4.33 and, as such, all that remains to be proven is that ζ̄(s) = (1 − 2^{1−s}) ζ(s), which can be easily shown by expanding the series.

Proposition 4.36 For |k| < 1 we have the identity

∑_{n=1}^∞ k^n / n^s = (1/Γ(s)) ∫_0^∞ k x^{s−1} / (e^x − k) dx.

Proposition 4.37 We can also show that

∑_{n=0}^∞ e^{2nπik} / (n + a)^s = (1/Γ(s)) ∫_0^∞ x^{s−1} e^{−ax} / (1 − e^{2πik−x}) dx.

4.4.2 Sums of the Hurwitz Zeta Function

Proposition 4.38 For a > 1, it is true that

∑_{s=2}^∞ ζ(s, a) = 1/(a − 1).   (16)

Proof 4.39 If s ∈ N then

ζ(s, a) = (1/(s − 1)!) ∫_0^∞ x^{s−1} e^{−ax} / (1 − e^{−x}) dx.

Hence

∑_{s=2}^∞ ζ(s, a) = lim_{k→∞} ∑_{s=2}^k ∫_0^∞ (x^{s−1}/(s − 1)!) (e^{−ax}/(1 − e^{−x})) dx,

which

= lim_{k→∞} ∫_0^∞ (e^{−ax}/(1 − e^{−x})) ( ∑_{s=2}^∞ x^{s−1}/(s − 1)! − ∑_{s=k+1}^∞ x^{s−1}/(s − 1)! ) dx.

Noting that the first sum inside the brackets is just the Taylor series for e^x − 1 then gives us

∑_{s=2}^∞ ζ(s, a) = ∫_0^∞ (e^x − 1)/(e^{ax}(1 − e^{−x})) dx − lim_{k→∞} ∫_0^∞ (e^{−ax}/(1 − e^{−x})) ( e^x − ∑_{s=1}^k x^{s−1}/(s − 1)! ) dx.

Now, the expression inside the limit tends to zero as k tends to infinity, which leaves us with the rather inspiring identity

∑_{s=2}^∞ ζ(s, a) = ∫_0^∞ (e^x − 1)/(e^{ax}(1 − e^{−x})) dx,

which integrates to give

∑_{s=2}^∞ ζ(s, a) = lim_{x→∞} (1 − e^{−x(a−1)})/(a − 1).

This converges to (a − 1)^{−1} for a > 1, thus proving the proposition.
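The closed form above can be spot-checked numerically; the helper name, the test value a = 3 and the truncation cutoffs below are illustrative choices of ours.

```python
# Numerical check of sum_{s>=2} zeta(s, a) = 1/(a - 1) at the sample value a = 3.
def hurwitz(s, a, terms=50_000):
    """Truncated Hurwitz zeta sum_{n=0}^{terms-1} (n + a)^(-s)."""
    return sum((n + a)**-s for n in range(terms))

total = sum(hurwitz(s, 3) for s in range(2, 60))
print(total)   # -> ~0.5 = 1/(3 - 1)
```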



Corollary 4.40 The sums of the regular zeta function diverge.

Proof 4.41 Simply substitute a = 1 into the previous result to complete the proof.

We can also prove a set of similar results using the same technique as in Proposition 4.38. The two easiest examples are given in the propositions below.

Proposition 4.42 For a > 1 and s ∈ ℕ,

∑_{s=1}^∞ ζ(2s, a) = (2a − 1)/(2a(a − 1)).    (17)

Proof 4.43 First note that

∑_{s=1}^∞ ζ(2s, a) = ∫_0^∞ (e^{−ax}/(1 − e^{−x})) ( ∑_{s=1}^∞ x^{2s−1}/(2s − 1)! ) dx

which, considering the Taylor series of sinh(x), is

∑_{s=1}^∞ ζ(2s, a) = ∫_0^∞ sinh(x)/(e^{ax}(1 − e^{−x})) dx.

Since sinh(x)/(1 − e^{−x}) = (e^x + 1)/2, the integrand is (e^{−(a−1)x} + e^{−ax})/2, and we can then integrate to see that, for a > 1, this converges to the given value (2a − 1)/(2a(a − 1)) as required.

Corollary 4.44 For a > 1 and s ∈ ℕ,

∑_{s=1}^∞ ζ(2s + 1, a) = 1/(2a(a − 1)).

Proof 4.45 This can be shown by subtracting equation (17) from (16).

The final identity that we will prove requires a little more work than the previous ones and we will first require a nice lemma.

Lemma 4.46 For a ∈ ℕ,

(y − 1)/(y^a(y + 1)) = 2(−1)^{a+1} [ 1/(y + 1) + ∑_{k=1}^{a−1} (−1)^k/y^k ] − 1/y^a.

Proof 4.47 We can prove this by induction. The case a = 1 reads (y − 1)/(y(y + 1)) = 2/(y + 1) − 1/y, which is easily verified. For the inductive step we note that

(y − 1)/(y^{a+1}(y + 1)) = (1/y) · (y − 1)/(y^a(y + 1))

= 2(−1)^{a+1} [ 1/(y(y + 1)) + ∑_{k=1}^{a−1} (−1)^k/y^{k+1} ] − 1/y^{a+1}

= 2(−1)^{a+1} [ 1/y − 1/(y + 1) + ∑_{k=1}^{a−1} (−1)^k/y^{k+1} ] − 1/y^{a+1}

= 2(−1)^{a+2} [ 1/(y + 1) + ∑_{k=1}^{a} (−1)^k/y^k ] − 1/y^{a+1}

as required.
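The closed forms in Proposition 4.42 and Corollary 4.44 can be spot-checked with the same truncated-series approach (a Python sketch; the truncation points are ad hoc choices, not from the text):

```python
def hurwitz_zeta(s, a, N=2000):
    # Truncated series with an Euler-Maclaurin tail correction.
    head = sum((n + a) ** -s for n in range(N))
    return head + (N + a) ** (1 - s) / (s - 1) + 0.5 * (N + a) ** -s

a = 3.0
even = sum(hurwitz_zeta(2 * s, a) for s in range(1, 31))     # zeta(2,a) + zeta(4,a) + ...
odd = sum(hurwitz_zeta(2 * s + 1, a) for s in range(1, 31))  # zeta(3,a) + zeta(5,a) + ...
assert abs(even - (2 * a - 1) / (2 * a * (a - 1))) < 1e-8    # equation (17)
assert abs(odd - 1 / (2 * a * (a - 1))) < 1e-8               # Corollary 4.44
```

For a = 3 the even and odd sums come out as 5/12 and 1/12 respectively, whose total is the 1/2 predicted by equation (16).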


Proposition 4.48 If H′_n represents the n-th alternating harmonic number then, for an integer a > 2, the alternating Hurwitz zeta function, here denoted η(s, a) = ∑_{n=0}^∞ (−1)^n/(n + a)^s, satisfies

∑_{s=2}^∞ η(s, a) = (−1)^a (ln(4) − 2H′_{a−2}) − 1/(a − 1).

Proof 4.49 If we employ the same methods as used in the proof of Proposition 4.38 we can easily see that

∑_{s=2}^∞ η(s, a) = ∫_0^∞ e^{−ax}(e^x − 1)/(1 + e^{−x}) dx.

We can then make the substitution x = ln(y), dx = dy/y, to see that this transforms to

∑_{s=2}^∞ η(s, a) = ∫_1^∞ (y − 1)/(y^a(y + 1)) dy.

Lemma 4.46 tells us that we can write this as

∑_{s=2}^∞ η(s, a) = ∫_1^∞ { 2(−1)^{a+1} [ 1/(y + 1) + ∑_{k=1}^{a−1} (−1)^k/y^k ] − 1/y^a } dy.

We can then remove the first (k = 1) term from the summation and separate the integrals to find that the above equation

= [ 2(−1)^{a+1} (ln(y + 1) − ln(y)) + 1/((a − 1)y^{a−1}) ]_1^∞ + 2(−1)^{a+1} ∑_{k=2}^{a−1} ∫_1^∞ (−1)^k/y^k dy.

If we now compute the value of the left-most expression and integrate the right-most (assuming that we can exchange the summation and the integral), we see that

∑_{s=2}^∞ η(s, a) = (−1)^a ln(4) − 1/(a − 1) + 2(−1)^{a+1} ∑_{k=2}^{a−1} (−1)^k/(k − 1).

The summation can then be re-indexed (k → k + 1) so that

∑_{s=2}^∞ η(s, a) = (−1)^a ln(4) − 1/(a − 1) + 2(−1)^{a+1} ∑_{k=1}^{a−2} (−1)^{k+1}/k = (−1)^a (ln(4) − 2H′_{a−2}) − 1/(a − 1)

as required.

Corollary 4.50 The sum of the alternating Hurwitz zeta functions is irrational.

Corollary 4.51 The sum of the regular alternating zeta functions diverges.
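Proposition 4.48 can also be checked numerically. Swapping the order of summation, as in the earlier checks, turns ∑_{s≥2} η(s, a) into the alternating series ∑_{n≥0} (−1)^n/((n + a)(n + a − 1)); the Python sketch below compares this against the closed form for a sample value of a (the choice a = 3 and the truncation point are ad hoc):

```python
import math

a = 3  # a sample integer a > 2
# sum_{s>=2} eta(s, a) after swapping the order of summation:
lhs = sum((-1) ** n / ((n + a) * (n + a - 1)) for n in range(200000))

# H'_{a-2}, the (a-2)-th alternating harmonic number.
H_alt = sum((-1) ** (k + 1) / k for k in range(1, a - 1))
rhs = (-1) ** a * (math.log(4) - 2 * H_alt) - 1 / (a - 1)
assert abs(lhs - rhs) < 1e-8
```

For a = 3 both sides evaluate to 3/2 − ln(4) ≈ 0.11371, consistent with the proposition.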

4.5 Epstein Zeta Function

4.5.1 Introduction

Let a, b and c be real numbers with a > 0 and D = 4ac − b^2 > 0, so that Q(u, v) = au^2 + buv + cv^2 is a positive-definite binary quadratic form of discriminant D.



The Epstein zeta function Z(s) is then defined by the double series

Z(s) = ∑_{m,n=−∞}^∞, (m,n)≠(0,0) 1/Q(m, n)^s,

where s = σ + it with σ, t ∈ ℝ and σ > 1. Since Q(u, v) ≥ λ(u^2 + v^2) with

λ = (1/2) ( a + c − √((a − c)^2 + b^2) ) > 0

for all real numbers u and v, the double series converges absolutely for σ > 1 and uniformly in every half-plane σ ≥ 1 + ε (ε > 0). Thus Z(s) is an analytic function of s for σ > 1. Furthermore, it can be continued analytically to the whole complex plane except for the simple pole at s = 1, and it satisfies the functional equation

(√D/(2π))^s Γ(s)Z(s) = (√D/(2π))^{1−s} Γ(1 − s)Z(1 − s).

4.5.2 The Functional Equation

Recall 4.52 Setting

x = b/(2a),  y = √D/(2a),  τ = x + iy = (b + i√D)/(2a),

we have

τ + τ̄ = b/a,  τ τ̄ = c/a,

so that

Q(m, n) = am^2 + bmn + cn^2 = a(m + nτ)(m + nτ̄) = a|m + nτ|^2

and

Z(s) = ∑_{m,n=−∞}^∞, (m,n)≠(0,0) 1/(a^s |m + nτ|^{2s}),  σ > 1.

Separating the terms with n = 0, we have

Z(s) = (2/a^s) ∑_{m=1}^∞ 1/m^{2s} + (2/a^s) ∑_{n=1}^∞ ∑_{m=−∞}^∞ 1/|m + nτ|^{2s},  σ > 1.
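The factorisation Q(m, n) = a|m + nτ|^2 underpinning this rewriting is easy to verify numerically for a sample form (a Python sketch; the coefficients below are illustrative values with D > 0, not from the text):

```python
import math

# Illustrative coefficients with D = 4ac - b^2 > 0 (not from the text).
a, b, c = 2.0, 1.0, 3.0
D = 4 * a * c - b * b
tau = complex(b / (2 * a), math.sqrt(D) / (2 * a))
for m in range(-5, 6):
    for n in range(-5, 6):
        Q = a * m * m + b * m * n + c * n * n
        # Q(m, n) = a * |m + n*tau|^2
        assert abs(Q - a * abs(m + n * tau) ** 2) < 1e-9
```

The check runs over a small grid of lattice points; positive-definiteness of Q is exactly what keeps |m + nτ| bounded away from zero for (m, n) ≠ (0, 0).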

We wish to evaluate the second term and therefore apply the Poisson summation formula

∑_{m=−∞}^∞ f(m) = ∑_{m=−∞}^∞ ∫_{−∞}^∞ f(u) cos(2mπu) du


to the function

f(t) = 1/|t + τ|^{2s}

to obtain

∑_{m=−∞}^∞ 1/|m + τ|^{2s} = ∑_{m=−∞}^∞ ∫_{−∞}^∞ cos(2mπu)/|u + τ|^{2s} du

= ∑_{m=−∞}^∞ ∫_{−∞}^∞ cos(2mπu)/{(u + x)^2 + y^2}^s du

= ∑_{m=−∞}^∞ ∫_{−∞}^∞ cos(2mπ(t − x))/(t^2 + y^2)^s dt

= ∑_{m=−∞}^∞ cos(2mπx) ∫_{−∞}^∞ cos(2mπt)/(t^2 + y^2)^s dt,

since the integrals involving the sine function vanish. Rescaling t by y (t → yt) then yields

∑_{m=−∞}^∞ 1/|m + τ|^{2s} = (2/y^{2s−1}) ∫_0^∞ dt/(1 + t^2)^s + (4/y^{2s−1}) ∑_{m=1}^∞ cos(2mπx) ∫_0^∞ cos(2mπyt)/(1 + t^2)^s dt,  σ > 1.

Now, we wish to evaluate the two integrals, first making the substitution

u = t^2/(1 + t^2),

which gives

1/(1 + t^2) = 1 − u,  du = 2t dt/(1 + t^2)^2 = 2u^{1/2}(1 − u)^{3/2} dt,

and therefore

∫_0^∞ dt/(1 + t^2)^s = (1/2) ∫_0^1 (1 − u)^{s−3/2} u^{−1/2} du = (1/2) B(s − 1/2, 1/2) = √π Γ(s − 1/2)/(2Γ(s)).

From an integral representation of the Bessel function,

K_ν(y) = (1/√π) Γ(ν + 1/2) (2/y)^ν ∫_0^∞ cos(yt)/(1 + t^2)^{ν+1/2} dt,  y > 0, ℜ(ν) > −1/2,

we have

∫_0^∞ cos(2mπyt)/(1 + t^2)^s dt = √π (mπy)^{s−1/2} K_{s−1/2}(2mπy)/Γ(s),  σ > 1/2.

Thus

∑_{m=−∞}^∞ 1/|m + τ|^{2s} = √π Γ(s − 1/2)/(y^{2s−1} Γ(s)) + (4√π/(y^{2s−1} Γ(s))) ∑_{m=1}^∞ (mπy)^{s−1/2} cos(2mπx) K_{s−1/2}(2mπy),  σ > 1,

such that for n ≥ 1

∑_{m=−∞}^∞ 1/|m + nτ|^{2s} = √π Γ(s − 1/2)/(n^{2s−1} y^{2s−1} Γ(s)) + (4√π/(n^{2s−1} y^{2s−1} Γ(s))) ∑_{m=1}^∞ (mnπy)^{s−1/2} cos(2mnπx) K_{s−1/2}(2mnπy).

Hence

Z(s) = 2a^{−s} ζ(2s) + 2a^{−s} y^{1−2s} (√π Γ(s − 1/2)/Γ(s)) ζ(2s − 1) + (8a^{−s} y^{1/2−s} π^s/Γ(s)) ∑_{n=1}^∞ n^{1−2s} ∑_{m=1}^∞ (mn)^{s−1/2} cos(2mnπx) K_{s−1/2}(2mnπy).

Collecting the terms with mn = k, we obtain

Z(s) = 2a^{−s} ζ(2s) + 2a^{−s} y^{1−2s} (√π Γ(s − 1/2)/Γ(s)) ζ(2s − 1) + (8a^{−s} y^{1/2−s} π^s/Γ(s)) ∑_{k=1}^∞ ( ∑_{n|k} n^{1−2s} ) k^{s−1/2} cos(2kπx) K_{s−1/2}(2kπy),

that is,

Z(s) = 2a^{−s} ζ(2s) + 2a^{−s} y^{1−2s} (√π Γ(s − 1/2)/Γ(s)) ζ(2s − 1) + (2a^{−s} y^{1/2−s} π^s/Γ(s)) H(s),

where

H(s) = 4 ∑_{k=1}^∞ σ_{1−2s}(k) k^{s−1/2} cos(2kπx) K_{s−1/2}(2kπy)

and σ_ν(k) denotes the sum of the ν-th powers of the divisors of k,

σ_ν(k) = ∑_{d|k} d^ν = ∑_{d|k} (k/d)^ν.
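The divisor sum σ_ν, together with the symmetry k^{−ν/2} σ_ν(k) = k^{ν/2} σ_{−ν}(k) invoked below for H(s) = H(1 − s), can be checked directly (a small Python sketch; the ranges of k and ν are ad hoc):

```python
def sigma(nu, k):
    # Sum of the nu-th powers of the divisors of k.
    return sum(d ** nu for d in range(1, k + 1) if k % d == 0)

for k in range(1, 50):
    for nu in (1, 2, 3):
        # The two expressions for sigma_nu agree (d -> k/d permutes the divisors)...
        assert sigma(nu, k) == sum((k // d) ** nu for d in range(1, k + 1) if k % d == 0)
        # ...which is equivalent to k^(-nu/2) sigma_nu(k) = k^(nu/2) sigma_{-nu}(k).
        assert abs(k ** (-nu / 2) * sigma(nu, k) - k ** (nu / 2) * sigma(-nu, k)) < 1e-9
```

The symmetry is just the involution d ↔ k/d on the divisors of k, which is what pairs the terms of H(s) with those of H(1 − s).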

We can now express Z(s) in another form:

(ay/π)^s Γ(s)Z(s) = 2(y/π)^s Γ(s)ζ(2s) + 2y^{1−s} π^{1/2−s} Γ(s − 1/2) ζ(2s − 1) + 2y^{1/2} H(s).

By the functional equation of the Riemann zeta function,

ζ(2s − 1) = 2(2π)^{2s−2} sin((s − 1/2)π) Γ(2 − 2s) ζ(2 − 2s),

and the basic properties of the gamma function,

Γ(2 − 2s) = √π Γ(1 − s)/( 2^{2s−1} Γ(s − 1/2) sin((s − 1/2)π) ),

we have

π^{1/2−s} Γ(s − 1/2) ζ(2s − 1) = π^{s−1} Γ(1 − s) ζ(2 − 2s)

and

(ay/π)^s Γ(s)Z(s) = 2(y/π)^s Γ(s)ζ(2s) + 2(y/π)^{1−s} Γ(1 − s)ζ(2 − 2s) + 2y^{1/2} H(s).

Recall 4.53

K_{−ν}(y) = K_ν(y)

and

k^{−ν/2} σ_ν(k) = k^{ν/2} σ_{−ν}(k),

leading to H(s) = H(1 − s). For

φ(s) = (ay/π)^s Γ(s)Z(s)

we have φ(s) = φ(1 − s). Since

ay = √D/2,

we have

(√D/(2π))^s Γ(s)Z(s) = (√D/(2π))^{1−s} Γ(1 − s)Z(1 − s),

which is the functional equation of Z(s).

4.6 The Mellin Transform

4.6.1 The Mellin Transform and its Properties

The Mellin transform is extremely useful for certain applications, including solving Laplace's equation.

Definition 4.54 The Mellin transform is defined as

M(f(t); p) = F(p) = ∫_0^∞ f(t) t^{p−1} dt,

for some p > 0.

4.6.2 Relation to Laplace Transform

For t = e^{−x}, dt = −e^{−x} dx. Substitution into Definition 4.54 gives

M(f(t); p) = ∫_{−∞}^∞ f(e^{−x}) e^{−px} dx,

which is by definition the (two-sided) Laplace transform of f(e^{−x}), that is, L{f(e^{−x})}.

4.6.3 Inversion Formula

For c lying in the strip of analyticity of F(p), we have

f(t) = (1/(2πi)) ∫_{c−i∞}^{c+i∞} F(p) t^{−p} dp.

4.6.4 Scaling Property for a > 0

We have

M(f(at); p) = ∫_0^∞ f(at) t^{p−1} dt.

Substituting x = at, where dx = a dt, we obtain

M(f(at); p) = a^{−p} ∫_0^∞ f(x) x^{p−1} dx = a^{−p} F(p).
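A numerical sketch of the scaling property, using f(t) = e^{−t} so that F(p) = Γ(p) (the cut-off and step size below are ad hoc choices):

```python
import math

def mellin_numeric(f, p, T=40.0, n=40000):
    # Trapezoidal approximation of int_0^T f(t) t^(p-1) dt; the tail
    # beyond T is negligible for rapidly decaying f.
    h = T / n
    g = lambda t: f(t) * t ** (p - 1)
    return h * (0.5 * (g(0.0) + g(T)) + sum(g(i * h) for i in range(1, n)))

a, p = 3.0, 2.0
lhs = mellin_numeric(lambda t: math.exp(-a * t), p)
# M(e^{-at}; p) should equal a^{-p} Gamma(p), since F(p) = Gamma(p) here.
assert abs(lhs - a ** -p * math.gamma(p)) < 1e-5
```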

4.6.5 Multiplication by t^a

Similar to the scaling property, we obtain

M(t^a f(t); p) = ∫_0^∞ f(t) t^{(p+a)−1} dt = F(p + a).


4.6.6 Derivative

We have

M(f′(t); p) = ∫_0^∞ f′(t) t^{p−1} dt = [t^{p−1} f(t)]_0^∞ − (p − 1) ∫_0^∞ f(t) t^{p−2} dt,

which gives

M(f′(t); p) = −(p − 1) M(f(t); p − 1),

provided t^{p−1} f(t) → 0 both as t → 0 and as t → ∞. For the n-th derivative this produces

M(f^{(n)}(t); p) = (−1)^n (p − 1)(p − 2)(p − 3)···(p − n) F(p − n),

provided that the extension of these boundary conditions to higher derivatives holds up to the (n − 1)-th derivative. Knowing that

(p − 1)(p − 2)(p − 3)···(p − n) = (p − 1)!/(p − n − 1)! = Γ(p)/Γ(p − n),

the expression for the n-th derivative can be written as

M(f^{(n)}(t); p) = (−1)^n (Γ(p)/Γ(p − n)) F(p − n).

4.6.7 Another Property of the Derivative

Combining the multiplication property with the n-th derivative rule gives

M(t^n f^{(n)}(t); p) = M(f^{(n)}(t); p + n) = (−1)^n ((p + n − 1)!/(p − 1)!) F(p).
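The derivative property can be confirmed numerically with f(t) = e^{−t}, for which F(p) = Γ(p) and hence M(f′(t); p) = −(p − 1)Γ(p − 1) (a Python sketch with ad hoc quadrature parameters):

```python
import math

def mellin_numeric(f, p, T=40.0, n=40000):
    # Trapezoidal approximation of int_0^T f(t) t^(p-1) dt.
    h = T / n
    g = lambda t: f(t) * t ** (p - 1)
    return h * (0.5 * (g(0.0) + g(T)) + sum(g(i * h) for i in range(1, n)))

p = 2.5
fprime = lambda t: -math.exp(-t)    # derivative of f(t) = e^{-t}
lhs = mellin_numeric(fprime, p)
rhs = -(p - 1) * math.gamma(p - 1)  # -(p-1) F(p-1) with F = Gamma
assert abs(lhs - rhs) < 1e-4
```

Both sides equal −Γ(p), as direct evaluation of M(−e^{−t}; p) also shows.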

4.6.8 Integral

By making use of the derivative property of the Mellin transform we can easily derive this property. We begin by writing f(t) = ∫_0^t h(u) du, so that f′(t) = h(t). As a result, we obtain

M(h(t); p) = M(f′(t); p) = −(p − 1) M(∫_0^t h(u) du; p − 1).

Rearranging gives

−(1/(p − 1)) M(h(t); p) = M(∫_0^t h(u) du; p − 1).

Replacing p by p + 1 we arrive at the desired identity

−(1/p) M(h(t); p + 1) = M(∫_0^t h(u) du; p).

4.6.9 Example 1

The Mellin transform of e^{−t^2} is computed using the fact that M(e^{−t}; p) = Γ(p). By the substitution u = t^2 (a power-law analogue of the scaling property), we immediately have that

M(e^{−t^2}; p) = (1/2) M(e^{−t}; p/2) = (1/2) Γ(p/2).
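A direct quadrature of the defining integral confirms the result (a sketch; the cut-off at t = 6 exploits the rapid decay of e^{−t²}, and the step size is an ad hoc choice):

```python
import math

def mellin_numeric(f, p, T=6.0, n=60000):
    # Trapezoidal approximation of int_0^T f(t) t^(p-1) dt.
    h = T / n
    g = lambda t: f(t) * t ** (p - 1)
    return h * (0.5 * (g(0.0) + g(T)) + sum(g(i * h) for i in range(1, n)))

p = 3.0
lhs = mellin_numeric(lambda t: math.exp(-t * t), p)
# Expected value: (1/2) Gamma(p/2); for p = 3 this is sqrt(pi)/4.
assert abs(lhs - 0.5 * math.gamma(p / 2)) < 1e-6
```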


4.6.10 Example 2

The Mellin transform of ∫_0^t e^{−u^2} du is computed as

M(∫_0^t e^{−u^2} du; p) = −(1/p) M(e^{−t^2}; p + 1) = −(1/(2p)) Γ((p + 1)/2).

