Zeta Functions
Johar M. Ashfaque
1 The Riemann Zeta Function

1.1 Introduction
Leonhard Euler lived from 1707 to 1783 and is, without a doubt, one of the most influential mathematicians of all time. His work covered many areas of mathematics including algebra, trigonometry, graph theory, mechanics and, most relevantly, analysis. Although Euler demonstrated an obvious genius when he was a child, it took until 1735 for his talents to be fully recognised. It was then that he solved what was known as the Basel problem, a problem posed in 1644 and named after Euler's home town [?]. This problem asks for an exact expression of the limit of the series

Σ_{n=1}^∞ 1/n^2 = 1 + 1/4 + 1/9 + 1/16 + ...,    (1)

which Euler calculated to be exactly equal to π^2/6. Going beyond this, he also calculated that

Σ_{n=1}^∞ 1/n^4 = 1 + 1/16 + 1/81 + 1/256 + ... = π^4/90,    (2)

among other specific values of a series that later became known as the Riemann zeta function, which is classically defined in the following way.

Definition 1.1 For ℜ(s) > 1, the Riemann zeta function is defined as

ζ(s) = Σ_{n=1}^∞ 1/n^s = 1 + 1/2^s + 1/3^s + 1/4^s + ...
This allows us to write the sums in equations (1) and (2) simply as ζ(2) and ζ(4) respectively. A few years later Euler constructed a general proof that gave exact values of ζ(2n) for all n ∈ N. These were the first instances of the special values of the zeta function and are still amongst the most interesting. However, when they were discovered it was still unfortunately the case that analysis of ζ(s) was restricted to the real numbers. It wasn't until the work of Bernhard Riemann that the zeta function was fully extended to all of the complex numbers by the process of analytic continuation, and it is for this reason that the function is commonly referred to as the Riemann zeta function. From this, we are able to calculate more special values of the zeta function and understand its nature more completely. We will be discussing some classical values of the zeta function as well as some more modern ones, and will require no more than an undergraduate understanding of analysis (especially Taylor series of common functions) and complex numbers. The only tool that we will use extensively in addition to these is the 'Big Oh' notation, which we now define.

Definition 1.2 We say that f(x) = g(x) + O(h(x)) as x → k if there exists a constant C > 0 such that |f(x) − g(x)| ≤ C|h(x)| whenever x is close enough to k.

This may seem a little alien at first and, to the uninitiated, it can take a little while to digest. However, its use is simpler than its definition would suggest, and so we will move on to more important matters.
1.2 The Euler Product Formula for ζ(s)

We now state a convergence criterion for infinite products.

Theorem 1.3 If Σ |a_n| converges then Π (1 + a_n) converges.

Although we will not give a complete proof here, one is referred to [?]. However, it is worth noting that

Π_{n=1}^m (1 + a_n) = exp{ Σ_{n=1}^m ln(1 + a_n) }.

From this it can be seen that, for the product to converge, it is sufficient that the summation on the right hand side converges. The summation Σ ln(1 + a_n) converges absolutely if Σ |a_n| converges. This is the conceptual idea that the proof is rooted in. Now that we have this tool, we can prove the famous Euler product formula for ζ(s). This relation makes use of the Fundamental Theorem of Arithmetic, which says that every integer greater than 1 can be expressed as a unique product of primes. We will not prove it here but the interested reader can consult [?] for a proof.

Theorem 1.4 Let p denote the prime numbers. For ℜ(s) > 1,

ζ(s) = Π_p (1 − p^{−s})^{−1}.
Proof. Observe that

(1/2^s) ζ(s) = 1/2^s + 1/4^s + 1/6^s + 1/8^s + 1/10^s + ...

Then, we can make the subtraction

ζ(s)(1 − 1/2^s) = (1 + 1/2^s + 1/3^s + 1/4^s + 1/5^s + ...) − (1/2^s + 1/4^s + 1/6^s + 1/8^s + 1/10^s + ...)
                = 1 + 1/3^s + 1/5^s + 1/7^s + 1/9^s + ...

Clearly, this has just removed from ζ(s) every term with a factor of 2^{−s}. We can then take this a step further to see that

ζ(s)(1 − 1/2^s)(1 − 1/3^s) = 1 + 1/5^s + 1/7^s + 1/11^s + 1/13^s + ...

If we continue this process of siphoning off primes we can see that, by the Fundamental Theorem of Arithmetic,

ζ(s) Π_p (1 − p^{−s}) = 1,

which requires only a simple rearrangement to see that

ζ(s) = Π_p (1 − p^{−s})^{−1}.

Note that, for ℜ(s) > 1, this converges because

Σ_p p^{−ℜ(s)} < Σ_n n^{−ℜ(s)},

which converges. This completes the proof.
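As a quick numerical sanity check of Theorem 1.4 (not part of the original argument), we can compare a partial sum of the series with a partial Euler product. The function names, the sieve, and the truncation limits below are our own illustrative choices:

```python
import math

def zeta_partial(s, terms=100000):
    # direct partial sum of the series defining zeta(s), valid for Re(s) > 1
    return sum(n ** -s for n in range(1, terms))

def euler_product(s, limit=100000):
    # product of (1 - p^(-s))^(-1) over the primes below `limit`, found with a simple sieve
    sieve = [True] * limit
    prod = 1.0
    for p in range(2, limit):
        if sieve[p]:
            prod /= 1 - p ** -s
            for q in range(p * p, limit, p):
                sieve[q] = False
    return prod

# both should be close to pi^2/6, by equation (1) and Theorem 1.4
print(zeta_partial(2), euler_product(2), math.pi ** 2 / 6)
```

Both truncations agree with π^2/6 to roughly the size of the omitted tails.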
1.3 The Bernoulli Numbers

We will now move on to the study of the Bernoulli numbers, a sequence of rational numbers that pop up frequently when considering the zeta function. We are interested in them because they are intimately related to some special values of the zeta function and are present in some rather remarkable identities. We already have an understanding of Taylor series and the analytic power that they provide, and so we can now begin with the definition of the Bernoulli numbers. This section will follow Chapter 6 in [?].

Definition 1.5 The Bernoulli numbers B_n are defined to be the coefficients in the series expansion

x/(e^x − 1) = Σ_{n=0}^∞ B_n x^n / n!.
It is a result from complex analysis that this series converges for |x| < 2π but, other than this, we cannot gain much of an understanding from the implicit definition. Please note also that, although the left hand side would appear to become infinite at x = 0, it does not.

Corollary 1.6 We can calculate the Bernoulli numbers by the recursion formula

0 = Σ_{j=0}^{k−1} (k choose j) B_j,   k ≥ 2,

where B_0 = 1.

Proof. Let us first replace e^x − 1 with its Taylor series to see that

x = (e^x − 1) Σ_{n=0}^∞ B_n x^n/n! = ( Σ_{j=1}^∞ x^j/j! ) ( Σ_{n=0}^∞ B_n x^n/n! ).

If we compare coefficients of powers of x we can clearly see that, except for x^1,

x^k:  0 = B_0/k! + B_1/(1!(k−1)!) + B_2/(2!(k−2)!) + ... + B_{k−1}/((k−1)! 1!).

Hence

0 = Σ_{j=0}^{k−1} B_j/((k−j)! j!) = (1/k!) Σ_{j=0}^{k−1} (k choose j) B_j.
Note that the inverse k! term is irrelevant to the recursion formula. This completes the proof.

The first few Bernoulli numbers are therefore B_0 = 1, B_1 = −1/2, B_2 = 1/6, B_3 = 0, B_4 = −1/30, B_5 = 0, B_6 = 1/42, B_7 = 0.

Lemma 1.7 The values of the odd Bernoulli numbers (except B_1) are zero.

Proof. As we know the values of B_0 and B_1, we can remove the first two terms from Definition 1.5 and rearrange to get

x/(e^x − 1) + x/2 = 1 + Σ_{n=2}^∞ B_n x^n/n!,

which then simplifies to give

(x/2) (e^x + 1)/(e^x − 1) = 1 + Σ_{n=2}^∞ B_n x^n/n!.

We can then multiply both the numerator and denominator of the left hand side by exp(−x/2) to get

(x/2) (e^{x/2} + e^{−x/2})/(e^{x/2} − e^{−x/2}) = 1 + Σ_{n=2}^∞ B_n x^n/n!.    (3)

By substituting x → −x into the left hand side of this equation we can see that it is an even function and hence invariant under this transformation. Hence, as the odd Bernoulli numbers multiply odd powers of x, the right hand side can only be invariant under the same transformation if the values of the odd coefficients are all zero.
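The recursion of Corollary 1.6 is easy to run with exact rational arithmetic. The following sketch (our own illustrative code, not from the text) reproduces the values listed above:

```python
from fractions import Fraction
from math import comb

def bernoulli(n):
    # B_0, ..., B_n from the recursion 0 = sum_{j=0}^{k-1} C(k, j) B_j of Corollary 1.6
    B = [Fraction(1)]
    for k in range(2, n + 2):
        # solve the k-th recursion equation for B_{k-1}
        B.append(-sum(comb(k, j) * B[j] for j in range(k - 1)) / comb(k, k - 1))
    return B

print(bernoulli(8))
```

The output begins 1, −1/2, 1/6, 0, −1/30, ..., with the odd entries beyond B_1 vanishing as Lemma 1.7 predicts.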
1.4 Relationship to the Zeta Function

As we have already discussed, Euler found a way of calculating exact values of ζ(2n) for n ∈ N. He did this using the properties of the Bernoulli numbers, although he originally did it using the infinite product for the sine function. The relationship between the zeta function and the Bernoulli numbers is not obvious, but the proof of it is quite satisfying.

Theorem 1.8 For n ∈ N,

ζ(2n) = (−1)^{n−1} (2π)^{2n} B_{2n} / (2 (2n)!).
To prove this theorem, we will be using the original proof attributed to Euler and reproduced in [?]. This will be done by finding two separate expressions for z cot(z) and then comparing them. We will be using a real analytic proof, which is slightly longer than a complex analytic proof, an example of which can be found in [?].

Lemma 1.9 The function z cot(z) has the Taylor expansion

z cot(z) = 1 + Σ_{n=1}^∞ (−4)^n B_{2n} z^{2n} / (2n)!.    (4)
Proof. Substitute x = 2iz into equation (3) and observe that, because the odd Bernoulli numbers are zero, we can write this as

(x/2) (e^{x/2} + e^{−x/2})/(e^{x/2} − e^{−x/2}) = iz (e^{iz} + e^{−iz})/(e^{iz} − e^{−iz}) = 1 + Σ_{n=1}^∞ (−4)^n B_{2n} z^{2n} / (2n)!.

Noting that the left hand side is equal to z cot(z) completes the proof.

Lemma 1.10 The function cot(z) can be written as

cot(z) = cot(z/2^n)/2^n − tan(z/2^n)/2^n + (1/2^n) Σ_{j=1}^{2^{n−1}−1} [ cot((z + jπ)/2^n) + cot((z − jπ)/2^n) ].    (5)
Proof. Recall that 2 cot(2z) = cot(z) + cot(z + π/2). If we continually iterate this formula we will find that

cot(z) = (1/2^n) Σ_{j=0}^{2^n−1} cot((z + jπ)/2^n),

which can be proved by induction. Removing the j = 0 and j = 2^{n−1} terms and recalling that cot(z + π/2) = −tan(z) gives us

cot(z) = cot(z/2^n)/2^n − tan(z/2^n)/2^n + (1/2^n) Σ_{j=1}^{2^{n−1}−1} cot((z + jπ)/2^n) + (1/2^n) Σ_{j=2^{n−1}+1}^{2^n−1} cot((z + jπ)/2^n).

All we have to do now is observe that, as cot(z + π) = cot(z), we can say that

Σ_{j=2^{n−1}+1}^{2^n−1} cot((z + jπ)/2^n) = Σ_{j=1}^{2^{n−1}−1} cot((z − jπ)/2^n),

which completes the proof.

Lemma 1.11 The function z cot(z) can therefore be expressed as

z cot(z) = 1 − 2 Σ_{j=1}^∞ z^2 / (j^2 π^2 − z^2).    (6)
Proof. In order to obtain this, we first multiply both sides of equation (5) by z to get

z cot(z) = (z/2^n) cot(z/2^n) − (z/2^n) tan(z/2^n) + Σ_{j=1}^{2^{n−1}−1} [ (z/2^n) cot((z + jπ)/2^n) + (z/2^n) cot((z − jπ)/2^n) ].    (7)

Let us now take the limit of the right hand side as n tends to infinity. First recall that the Taylor series for x cot(x) and x tan(x) can be respectively expressed as x cot(x) = 1 + O(x^2) and x tan(x) = x^2 + O(x^4). Hence, if we substitute x = z/2^n into both of these we can see that

lim_{n→∞} (z/2^n) cot(z/2^n) = 1    (8)

and

lim_{n→∞} (z/2^n) tan(z/2^n) = 0.    (9)

Now we have dealt with the expressions outside the summation and so we need to consider the ones inside. To make things slightly easier for the moment, let us consider both of the expressions at the same time. Using Taylor series again, we can see that

(z/2^n) cot((z ± jπ)/2^n) = z/(z ± jπ) + O(4^{−n}).    (10)

Substituting equations (8), (9) and (10) into the right hand side of equation (7), and noting that z/(z + jπ) + z/(z − jπ) = −2z^2/(j^2π^2 − z^2), gives that

z cot(z) = 1 + lim_{n→∞} Σ_{j=1}^{2^{n−1}−1} [ z/(z + jπ) + z/(z − jπ) + O(4^{−n}) ],

which simplifies a little to give

z cot(z) = 1 − 2 Σ_{j=1}^∞ z^2/(j^2 π^2 − z^2) + lim_{n→∞} Σ_{j=1}^{2^{n−1}−1} O(4^{−n}).

By Definition 1.2, it can be seen that

| Σ_{j=1}^{2^{n−1}−1} O(4^{−n}) | ≤ C (2^{n−1} − 1) 4^{−n},
which clearly converges to zero as n → ∞, thus completing the proof.

Lemma 1.12 For |z| < π, z cot(z) has the expansion

z cot(z) = 1 − 2 Σ_{n=1}^∞ ζ(2n) z^{2n} / π^{2n}.    (11)
Proof. Take the summand of equation (6) and multiply both the numerator and denominator by (jπ)^{−2} to obtain

z cot(z) = 1 − 2 Σ_{j=1}^∞ (z/jπ)^2 / (1 − (z/jπ)^2).

But we can note that the summand can be expanded as an infinite geometric series. Hence we can write this as

z cot(z) = 1 − 2 Σ_{j=1}^∞ Σ_{n=1}^∞ (z/jπ)^{2n},

which

= 1 − 2 Σ_{n=1}^∞ ζ(2n) (z/π)^{2n}

as long as the geometric series converges (i.e. |z| < π). Note that exchanging the summations in such a way is valid as both of the series are absolutely convergent.

Now we can complete the proof of Theorem 1.8 by equating equations (4) and (11) to see that

1 + Σ_{n=1}^∞ (−4)^n B_{2n} z^{2n}/(2n)! = 1 − 2 Σ_{n=1}^∞ ζ(2n) z^{2n}/π^{2n}.

If we then strip away the 1 terms and compare the summands we obtain the identity

(−4)^n B_{2n} z^{2n}/(2n)! = −2 ζ(2n) z^{2n}/π^{2n},

which rearranges to complete the proof of Theorem 1.8 as required.

Now that we have proven this beautiful formula (thanks again, Euler) we can use it to calculate the sums of positive even values of the zeta function. First, let us rewrite the result of Theorem 1.8 as

ζ(2n) = (2π)^{2n} |B_{2n}| / (2 (2n)!).
From this, we can easily calculate specific values such as

ζ(6) = Σ_{n=1}^∞ 1/n^6 = (2π)^6 |B_6| / (2 · 6!) = π^6/945,

ζ(8) = Σ_{n=1}^∞ 1/n^8 = (2π)^8 |B_8| / (2 · 8!) = π^8/9450,

and so on. It is unfortunate that no similar formula has been discovered for ζ(2n + 1), although there have been recent results concerning the values of the zeta function at odd integers: for example, Apéry's proof of the irrationality of ζ(3) in 1979, or Matilde Lalín's integral representations of ζ(3) and ζ(5) by the use of Mahler measures. Special values for ζ(−2n) and ζ(−2n + 1) have been found, the latter of which also involves Bernoulli numbers. To get to them we will first have to take a whirlwind tour through some of the properties of the Gamma function.
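Theorem 1.8 can be checked numerically by comparing the closed form against direct partial sums of the series. The helper names and truncation limit below are our own illustrative choices:

```python
import math
from fractions import Fraction
from math import comb

def bernoulli(n):
    # Bernoulli numbers B_0..B_n via the recursion of Corollary 1.6
    B = [Fraction(1)]
    for k in range(2, n + 2):
        B.append(-sum(comb(k, j) * B[j] for j in range(k - 1)) / comb(k, k - 1))
    return B

B = bernoulli(8)

def zeta_even(n):
    # Theorem 1.8: zeta(2n) = (-1)^(n-1) (2 pi)^(2n) B_{2n} / (2 (2n)!)
    return (-1) ** (n - 1) * (2 * math.pi) ** (2 * n) * float(B[2 * n]) / (2 * math.factorial(2 * n))

for n in (1, 2, 3, 4):
    direct = sum(k ** (-2.0 * n) for k in range(1, 100000))
    print(2 * n, zeta_even(n), direct)
```

For 2n = 2, 4, 6, 8 the closed form matches π^2/6, π^4/90, π^6/945 and π^8/9450.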
1.5 The Gamma Function

Definition 1.13 For ℜ(s) > 0 we define Γ(s) as

Γ(s) = ∫_0^∞ t^{s−1} e^{−t} dt.

Now, this function initially looks rather daunting and irrelevant. We will see, however, that it does have many fascinating properties. Among the most basic are the following two identities.
Corollary 1.14 Γ(s) has the recursive property

Γ(s + 1) = sΓ(s).    (12)

Proof. We can prove this by performing a basic integration by parts on the Gamma function. Note that

Γ(s + 1) = ∫_0^∞ t^s e^{−t} dt = [−t^s e^{−t}]_0^∞ + s ∫_0^∞ t^{s−1} e^{−t} dt = 0 + sΓ(s)

as required.

Corollary 1.15 For n ∈ N,

Γ(n + 1) = n!.    (13)

Proof. Just consider Corollary 1.14 and iterate to complete the proof.

It is this property that really begins to tell us something interesting about the Gamma function. Now that we know that the function calculates factorials for integer values, we can use it to 'fill in' non-integer values, which is the reason why Euler introduced it.

Remark 1.16 We can use the fact that Γ(s) = Γ(s + 1)/s to see that, as s tends to 0, Γ(s) → ∞. We can also use this recursive relation to prove that the Gamma function has poles at all of the negative integers.

Lemma 1.17 We can calculate that

Γ(3/2) = √π/2.

Proof. We observe that

Γ(3/2) = ∫_0^∞ exp(−x^2) dx,    (14)

which follows from Definition 1.13 by the substitution t = x^2 followed by an integration by parts. This is an important result that is the foundation of the normal distribution and it is also easily calculable. We do this by first considering the double integral

I = ∫_{−∞}^∞ ∫_{−∞}^∞ exp(−x^2 − y^2) dx dy.

We switch to polar co-ordinates using the change of variables x = r cos(θ), y = r sin(θ). Noting that dy dx = r dθ dr, we have

I = ∫_0^{2π} ∫_0^∞ r exp(−r^2) dr dθ = π ∫_0^∞ 2r exp(−r^2) dr = π[−exp(−r^2)]_0^∞ = π.

We can then separate the original integral into two separate integrals to obtain

I = ( ∫_{−∞}^∞ exp(−x^2) dx ) ( ∫_{−∞}^∞ exp(−y^2) dy ) = π.

Noting that the two integrals are identical and are also both even functions, we can see that integrating one of them from zero to infinity completes the proof as required.

Corollary 1.18 Consider the double factorial n!! = n(n − 2)(n − 4)..., which terminates at 1 or 2 depending on whether n is odd or even respectively. Then for n ∈ N,

Γ((2n + 1)/2) = √π (2n − 1)!! / 2^n.

Proof. We will prove this by induction. Consider that

Γ((2(n + 1) + 1)/2) = Γ((2n + 3)/2) = ((2n + 1)/2) · √π (2n − 1)!! / 2^n = √π (2n + 1)!! / 2^{n+1}.

Noting that the leftmost and rightmost expressions are equal by definition completes the proof.
Remark. We can use the relationship Γ(s + 1) = sΓ(s) to see, for example, that

Γ(5/2) = 3√π/4,   Γ(7/2) = 15√π/8,

etc.

Corollary 1.19 We can compute the 'factorial' of −1/2 as

Γ(−1/2) = −2√π.

Proof. We can rework Corollary 1.14 to show that

Γ(s) = Γ(s + 1)/s,

from which the corollary can easily be proven, since Γ(−1/2) = Γ(1/2)/(−1/2) = −2√π.
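The special values above are easy to confirm numerically, since Python's standard library exposes Euler's Γ as `math.gamma` (this check is our own addition, not part of the text):

```python
import math

# math.gamma implements the Gamma function of Definition 1.13;
# compare it against the special values derived above
print(math.gamma(1.5), math.sqrt(math.pi) / 2)      # Gamma(3/2) = sqrt(pi)/2
print(math.gamma(2.5), 3 * math.sqrt(math.pi) / 4)  # Gamma(5/2) = 3 sqrt(pi)/4
print(math.gamma(-0.5), -2 * math.sqrt(math.pi))    # Gamma(-1/2) = -2 sqrt(pi)
print(math.gamma(5), math.factorial(4))             # Gamma(n+1) = n!
```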
1.6 The Euler Reflection Formula

This chapter will use a slightly different definition of the Gamma function and will follow source [?]. First let us consider the definition of the very important Euler constant γ.

Definition 1.20 Euler's constant γ is defined as

γ = lim_{m→∞} ( 1 + 1/2 + 1/3 + ... + 1/m − log(m) ).

We will then use Gauss' definition for the Gamma function, which can be written as follows.

Definition 1.21 For s > 0 we can define

Γ_h(s) = h! h^s / (s(s + 1)...(s + h)) = h^s / (s(1 + s)(1 + s/2)...(1 + s/h))

and

Γ(s) = lim_{h→∞} Γ_h(s).
This does not seem immediately obvious but the relationship is true and is proven for ℜ(s) > 0 in [?]. So now that we have these definitions we can work on a well known theorem.

Theorem 1.22 The Gamma function can be written as the following infinite product:

1/Γ(s) = s e^{γs} Π_{n=1}^∞ (1 + s/n) e^{−s/n}.

Proof. Before we start with the derivation, let us note that the infinite product is convergent because the exponential term forces it. Now that we have cleared that from our conscience, we will begin by using Definition 1.21 and say that

Γ_h(s) = h^s / (s(1 + s)(1 + s/2)...(1 + s/h)).

Now we can also see that

h^s = exp(s log(h)) = exp( s( log(h) − 1 − 1/2 − ... − 1/h ) ) exp( s( 1 + 1/2 + ... + 1/h ) ).

We can then observe that

Γ_h(s) = (1/s) exp( s( log(h) − 1 − 1/2 − ... − 1/h ) ) · (e^s/(1 + s)) (e^{s/2}/(1 + s/2)) ... (e^{s/h}/(1 + s/h)),

which we can write as the product

Γ_h(s) = (1/s) exp( s( log(h) − 1 − 1/2 − ... − 1/h ) ) Π_{n=1}^h e^{s/n}/(1 + s/n).

All we need to do now is take the limit of this as h tends to infinity and use Definition 1.20 to prove the theorem as required.

This theorem is very interesting as it allows us to prove two really quite beautiful identities, known as the Euler reflection formulae. But before we do this, we are going to need another way of dealing with the sine function. It should be noted that the method of approach that we are going to use is not completely rigorous. However, it can be proven rigorously using the Weierstrass Factorisation Theorem, a discussion of which can be found in [?].

Theorem 1.23 The sine function has the infinite product

sin(πs) = πs Π_{n=1}^∞ (1 − s^2/n^2).

Theorem 1.24 The Gamma function has the following reflective relation

1/(Γ(s)Γ(1 − s)) = sin(πs)/π.

Proof. We can use Theorem 1.22 to see that

1/(Γ(s)Γ(−s)) = −s^2 e^{γs−γs} Π_{n=1}^∞ ((n + s)/n)((n − s)/n) e^{s/n−s/n} = −s^2 Π_{n=1}^∞ (n^2 − s^2)/n^2.

We can then use Corollary 1.14 to show that, as Γ(1 − s) = −sΓ(−s),

1/(Γ(s)Γ(1 − s)) = s Π_{n=1}^∞ (n^2 − s^2)/n^2.

Comparing this to Theorem 1.23 then completes the proof as required.

Corollary 1.25 The Gamma function also has the reflectional formula

1/(Γ(s)Γ(−s)) = −s sin(πs)/π.

Proof. This can easily be shown using a slight variation of the previous proof. However, an alternate proof can be constructed by considering Theorem 1.24 and Corollary 1.14.
2 The Hurwitz Zeta Function

Definition 2.1 For 0 < a ≤ 1 and ℜ(s) > 1, we define ζ(s, a), the Hurwitz zeta function, as

ζ(s, a) = Σ_{n=0}^∞ 1/(n + a)^s.

Remark. It is obvious that ζ(s, 1) = ζ(s), and from this we can see that if we can prove results for the Hurwitz zeta function that are valid when a = 1, then we obtain results for the regular zeta function automatically. Let us then begin with our first big result.
Theorem 2.2 For ℜ(s) > 1, the Hurwitz zeta function can be expressed as the infinite integral

ζ(s, a) = (1/Γ(s)) ∫_0^∞ x^{s−1} e^{−ax} / (1 − e^{−x}) dx.    (15)

Corollary 2.3 We have the identity

ζ(s) = (1/Γ(s)) ∫_0^∞ x^{s−1} / (e^x − 1) dx.

Proof. Simply substitute a = 1 into equation (15) to complete the proof.

Now we will prove a slight variation of this integral.

Proposition 2.4 The following identity holds for the zeta function:

(2^s − 1)ζ(s) = ζ(s, 1/2) = (2^s/Γ(s)) ∫_0^∞ x^{s−1} e^x / (e^{2x} − 1) dx.

Proof. First note that

(2^s − 1)ζ(s) = ( 2^s + 1 + (2/3)^s + 1/2^s + (2/5)^s + 1/3^s + ... ) − Σ_{n=1}^∞ 1/n^s,

which

= 2^s + (2/3)^s + (2/5)^s + (2/7)^s + ... = ζ(s, 1/2).

We also have from equation (15) that

ζ(s, 1/2) = (1/Γ(s)) ∫_0^∞ x^{s−1} e^{−x/2} / (1 − e^{−x}) dx.

We can then multiply the numerator and denominator by exp(x/2) and perform the substitution x = 2y to obtain the identity

ζ(s, 1/2) = (1/Γ(s)) ∫_0^∞ 2 (2y)^{s−1} e^y / (e^{2y} − 1) dy = (2^s/Γ(s)) ∫_0^∞ y^{s−1} e^y / (e^{2y} − 1) dy

as required.
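The series identity (2^s − 1)ζ(s) = ζ(s, 1/2) in Proposition 2.4 is easy to verify by brute force; the following check, with our own function name and truncation limit, is illustrative only:

```python
def hurwitz_zeta(s, a, terms=200000):
    # direct partial sum of Definition 2.1; adequate for a rough check when s > 1
    return sum((n + a) ** -s for n in range(terms))

s = 3.0
lhs = hurwitz_zeta(s, 0.5)                   # zeta(s, 1/2)
rhs = (2 ** s - 1) * hurwitz_zeta(s, 1.0)    # (2^s - 1) zeta(s)
print(lhs, rhs)
```

For s = 3 both sides equal 7ζ(3) up to the truncation error.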
2.1 Variants of the Hurwitz Zeta Function

There are many variants of the Hurwitz zeta function and we will only prove identities involving the more obvious ones.

Definition 2.5 The alternating Hurwitz zeta function is defined to be

ζ̄(s, a) = Σ_{n=0}^∞ (−1)^n/(n + a)^s.

Definition 2.6 We define the alternating zeta function as ζ̄(s) = ζ̄(s, 1).

Theorem 2.7 For ℜ(s) > 1, the alternating Hurwitz zeta function can also be written as

ζ̄(s, a) = (1/Γ(s)) ∫_0^∞ x^{s−1} e^{−ax} / (1 + e^{−x}) dx.
Corollary 2.8 The alternating zeta function can also be written as

ζ̄(s) = (1 − 2^{1−s})ζ(s) = (1/Γ(s)) ∫_0^∞ x^{s−1} / (e^x + 1) dx.

Proof. We have already done the hard work in Theorem 2.7 and, as such, all that remains to be proven is that ζ̄(s) = (1 − 2^{1−s})ζ(s), which can be easily shown by expanding the series.

Proposition 2.9 For |k| < 1 we have the identity

Σ_{n=1}^∞ k^n/n^s = (1/Γ(s)) ∫_0^∞ k x^{s−1} / (e^x − k) dx.

Proposition 2.10 We can also show that

Σ_{n=0}^∞ e^{2nπik} / (n + a)^s = (1/Γ(s)) ∫_0^∞ x^{s−1} e^{−ax} / (1 − e^{2πik−x}) dx.
2.2 Sums of the Hurwitz Zeta Function

Proposition 2.11 For a > 1, it is true that

Σ_{s=2}^∞ ζ(s, a) = 1/(a − 1).    (16)

Proof. If s ∈ N then

ζ(s, a) = (1/(s − 1)!) ∫_0^∞ x^{s−1} e^{−ax} / (1 − e^{−x}) dx.

Hence

Σ_{s=2}^∞ ζ(s, a) = lim_{k→∞} [ Σ_{s=2}^k ∫_0^∞ (x^{s−1}/(s − 1)!) e^{−ax}/(1 − e^{−x}) dx ],

which

= lim_{k→∞} [ ∫_0^∞ (e^{−ax}/(1 − e^{−x})) ( Σ_{s=2}^∞ x^{s−1}/(s − 1)! − Σ_{s=k+1}^∞ x^{s−1}/(s − 1)! ) dx ].

Noting that the first sum inside the round brackets is just the Taylor series for e^x − 1 then gives us

Σ_{s=2}^∞ ζ(s, a) = ∫_0^∞ (e^x − 1)/(e^{ax}(1 − e^{−x})) dx − lim_{k→∞} [ ∫_0^∞ (e^{−ax}/(1 − e^{−x})) ( e^x − Σ_{s=1}^k x^{s−1}/(s − 1)! ) dx ].

Now, we can see that the expression inside the limit tends to zero as k tends to infinity, which then leaves us with the rather inspiring identity

Σ_{s=2}^∞ ζ(s, a) = ∫_0^∞ (e^x − 1)/(e^{ax}(1 − e^{−x})) dx = ∫_0^∞ e^{(1−a)x} dx,

which integrates to give

Σ_{s=2}^∞ ζ(s, a) = lim_{x→∞} (1 − e^{−x(a−1)})/(a − 1).

This converges to (a − 1)^{−1} for a > 1, thus proving the proposition.

Corollary 2.12 The sums of the regular zeta function diverge.
Proof. Simply substitute a = 1 into the previous result to complete the proof.

We can also prove a set of similar results using the same technique as in Proposition 2.11. The two easiest examples are given in the propositions below.

Proposition 2.13 For a > 1 and s ∈ N,

Σ_{s=1}^∞ ζ(2s, a) = (2a − 1)/(2a(a − 1)).    (17)

Proof. First note that

Σ_{s=1}^∞ ζ(2s, a) = ∫_0^∞ (e^{−ax}/(1 − e^{−x})) ( Σ_{s=1}^∞ x^{2s−1}/(2s − 1)! ) dx,

which, considering the Taylor series of sinh(x),

= ∫_0^∞ sinh(x)/(e^{ax}(1 − e^{−x})) dx.

We can then integrate this to see that, for a > 1, this converges to the given limit as required, since sinh(x)/(1 − e^{−x}) = (e^x + 1)/2 and so the integral equals (1/2)(1/(a − 1) + 1/a) = (2a − 1)/(2a(a − 1)).

Corollary 2.14 For a > 1 and s ∈ N,

Σ_{s=1}^∞ ζ(2s + 1, a) = 1/(2a(a − 1)).

Proof. This can be shown by subtracting equation (17) from (16).

The final identity that we will prove requires a little more work than the previous one and we will first require a nice lemma.
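Propositions 2.11 and 2.13 lend themselves to a direct numerical check; the truncation limits below are our own rough choices, picked only to get a few digits of agreement:

```python
def hzeta(s, a, terms=20000):
    # partial sum of the Hurwitz zeta function; enough terms for ~1e-4 accuracy at s = 2
    return sum((n + a) ** -s for n in range(terms))

a = 3.0
total = sum(hzeta(s, a) for s in range(2, 60))       # truncation of sum_{s>=2} zeta(s, a)
even = sum(hzeta(2 * s, a) for s in range(1, 30))    # truncation of sum_{s>=1} zeta(2s, a)
print(total, 1 / (a - 1))                            # Proposition 2.11
print(even, (2 * a - 1) / (2 * a * (a - 1)))         # Proposition 2.13
```

With a = 3 the two sums come out close to 1/2 and 5/12 respectively, and their difference approximates the odd sum 1/12 of Corollary 2.14.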
Lemma 2.15 For a ∈ N,

(y − 1)/(y^a(y + 1)) = 2(−1)^{a+1} [ 1/(y + 1) + Σ_{k=1}^{a−1} (−1)^k/y^k ] − 1/y^a.

Proof. We can prove this by induction, noting that

(y − 1)/(y^{a+1}(y + 1)) = (1/y) · (y − 1)/(y^a(y + 1))
 = 2(−1)^{a+1} [ 1/(y(y + 1)) + Σ_{k=1}^{a−1} (−1)^k/y^{k+1} ] − 1/y^{a+1}
 = 2(−1)^{a+1} [ 1/y − 1/(y + 1) + Σ_{k=1}^{a−1} (−1)^k/y^{k+1} ] − 1/y^{a+1}
 = 2(−1)^{a+2} [ 1/(y + 1) + Σ_{k=1}^{a} (−1)^k/y^k ] − 1/y^{a+1}

as required.
Proposition 2.16 If H′_n represents the nth alternating harmonic number then, for an integer a > 2, the alternating Hurwitz zeta function satisfies

Σ_{s=2}^∞ ζ̄(s, a) = (−1)^a ( ln(4) − 2H′_{a−2} ) − 1/(a − 1).

Proof. If we employ the same methods as used in the proof of Proposition 2.11 we can easily see that

Σ_{s=2}^∞ ζ̄(s, a) = ∫_0^∞ e^{−ax}(e^x − 1)/(1 + e^{−x}) dx.

We can then make the substitution x = ln(y), dx = dy/y, to see that this transforms to

Σ_{s=2}^∞ ζ̄(s, a) = ∫_1^∞ (y − 1)/(y^a(y + 1)) dy.

Lemma 2.15 tells us that we can write this as

Σ_{s=2}^∞ ζ̄(s, a) = ∫_1^∞ { 2(−1)^{a+1} [ 1/(y + 1) + Σ_{k=1}^{a−1} (−1)^k/y^k ] − 1/y^a } dy.

We can then remove the k = 1 term from the summation and separate the integrals to find that the above equation

= [ 2(−1)^{a+1} ( ln(y + 1) − ln(y) ) + 1/(y^{a−1}(a − 1)) ]_1^∞ + 2(−1)^{a+1} ∫_1^∞ Σ_{k=2}^{a−1} (−1)^k/y^k dy.

If we now compute the value of the left-most expression and integrate the right-most (assuming that we can exchange the summation and integral) we see that

Σ_{s=2}^∞ ζ̄(s, a) = (−1)^a ln(4) − 1/(a − 1) − 2(−1)^{a+1} [ Σ_{k=2}^{a−1} (−1)^{k+1}/((k − 1)y^{k−1}) ]_1^∞.

The summation can then be rejigged so that

Σ_{s=2}^∞ ζ̄(s, a) = (−1)^a ln(4) − 1/(a − 1) + 2(−1)^{a+1} Σ_{k=1}^{a−2} (−1)^{k+1}/k = (−1)^a ( ln(4) − 2H′_{a−2} ) − 1/(a − 1)

as required.

Corollary 2.17 The sum of the alternating Hurwitz zeta functions is irrational.

Corollary 2.18 The sum of the regular alternating zeta functions diverges.
3 The Functional Equation of Euler's Γ Function

3.1 Euler Sine Product

sin(πz) = πz Π_{n=1}^∞ (1 − z^2/n^2)
3.2 Euler's Reflection Formula

Proposition 3.1 Γ(s) satisfies the functional equation

Γ(s)Γ(1 − s) = π/sin(πs).

Proof. Γ(s) has another representation:

Γ(s) = (1/s) Π_{n=1}^∞ (1 + 1/n)^s / (1 + s/n).

Using this representation of Γ(s) and making use of the Euler sine product,

Γ(s)Γ(1 − s) = −sΓ(s)Γ(−s)
 = (1/s) Π_{n=1}^∞ (1 − s^2/n^2)^{−1}
 = (1/s) · πs/sin(πs)
 = π/sin(πs).
4 The Functional Equation of the Riemann Zeta Function ζ(s): Symmetrical Form

4.1 The Theta Function ϑ(t)

The theta function

ϑ(t) = Σ_{n=−∞}^∞ e^{−πtn^2},   ℜ(t) > 0,

has the following functional equation

ϑ(t) = (1/√t) ϑ(1/t),   t > 0,

and also satisfies

|ϑ(t) − t^{−1/2}| < e^{−c/t}

for some c > 0, which implies ϑ(t) ≪ t^{−1/2} as t → 0.
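The functional equation of ϑ is easy to confirm numerically, since the series converges extremely fast; the truncation at 80 terms below is an arbitrary (generous) choice of ours:

```python
import math

def theta(t, terms=80):
    # theta(t) = 1 + 2 * sum_{n>=1} exp(-pi t n^2); 80 terms is ample for t >= 0.2
    return 1 + 2 * sum(math.exp(-math.pi * t * n * n) for n in range(1, terms))

for t in (0.25, 0.5, 1.0, 2.0, 5.0):
    # both sides of theta(t) = theta(1/t) / sqrt(t)
    print(t, theta(t), theta(1 / t) / math.sqrt(t))
```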
4.2 The Functional Equation

Theorem 4.1 We have that ζ(s) extends analytically onto C except for a simple pole at s = 1 with residue 1. By defining

Λ(s) = π^{−s/2} Γ(s/2) ζ(s)

we obtain the functional equation Λ(s) = Λ(1 − s). In particular, the zeta function satisfies the functional equation

π^{−s/2} Γ(s/2) ζ(s) = π^{−(1−s)/2} Γ((1 − s)/2) ζ(1 − s).
Proof. Define

φ(s) = ∫_1^∞ t^{s/2} (ϑ(t) − 1) dt/t + ∫_0^1 t^{s/2} ( ϑ(t) − 1/√t ) dt/t.

Note. ϑ(t) − 1 = 2 Σ_{n=1}^∞ e^{−πn^2 t} → 0 as t → ∞, so the first integral converges.

Note. Since ϑ(t) ≪ t^{−1/2}, the second integral will also converge.

We now evaluate the second integral, assuming ℜ(s) > 1:

∫_0^1 t^{s/2} ( ϑ(t) − t^{−1/2} ) dt/t = ∫_0^1 t^{s/2} ϑ(t) dt/t − [ 2 t^{(s−1)/2}/(s − 1) ]_0^1 = ∫_0^1 t^{s/2} ϑ(t) dt/t − 2/(s − 1).

Thus

φ(s) = 2 Σ_{n=1}^∞ ∫_0^∞ e^{−πn^2 t} t^{s/2} dt/t + 2/s + 2/(1 − s)

for ℜ(s) > 1. Letting c → πn^2 and s → s/2 in the formula ∫_0^∞ e^{−ct} t^s dt/t = c^{−s} Γ(s), we have

(1/2) φ(s) = π^{−s/2} ζ(s) Γ(s/2) + 1/s + 1/(1 − s).

Finally, we need to set the stage for the functional equation. Let

Λ(s) = (1/2) φ(s) − 1/s − 1/(1 − s).

Since 1/s and 1/(1 − s) swap under s → 1 − s, so that their sum is invariant, we only need to show that φ(s) = φ(1 − s). Recall the functional equation for ϑ. Then

φ(s) = ∫_1^∞ t^{s/2} (ϑ(t) − 1) dt/t + ∫_0^1 t^{s/2} ( ϑ(t) − 1/√t ) dt/t
 = ∫_0^1 t^{−s/2} ( ϑ(1/t) − 1 ) dt/t + ∫_1^∞ t^{−s/2} ( ϑ(1/t) − √t ) dt/t
 = ∫_0^1 t^{−s/2} ( √t ϑ(t) − 1 ) dt/t + ∫_1^∞ t^{−s/2} ( √t ϑ(t) − √t ) dt/t
 = φ(1 − s).
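The symmetry φ(s) = φ(1 − s) can itself be checked with a crude quadrature; this sketch is our own addition, with an arbitrary midpoint rule and step count, and is not part of the proof:

```python
import math

def theta(t):
    # theta(t) with enough terms that the truncated tail is below double precision
    n_max = max(8, int(7 / math.sqrt(t)) + 1)
    return 1 + 2 * sum(math.exp(-math.pi * t * n * n) for n in range(1, n_max))

def phi(s, steps=3000):
    # midpoint-rule approximation of
    # phi(s) = int_1^inf t^(s/2) (theta(t)-1) dt/t + int_0^1 t^(s/2) (theta(t)-1/sqrt(t)) dt/t,
    # with the first integral mapped onto (0, 1) by the substitution t -> 1/t
    h = 1.0 / steps
    total = 0.0
    for i in range(steps):
        u = (i + 0.5) * h
        total += (1 / u) ** (s / 2) * (theta(1 / u) - 1) * h / u
        total += u ** (s / 2 - 1) * (theta(u) - 1 / math.sqrt(u)) * h
    return total

print(phi(0.4), phi(0.6))   # phi(s) and phi(1-s) should agree
```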
5 The Epstein Zeta Function

The Epstein zeta function Z(s) is defined for ℜ(s) > 1 by

Z(s) = Σ_{(m,n)≠(0,0)} 1/(am^2 + bmn + cn^2)^s,

where a, b and c are real numbers with a > 0 and b^2 − 4ac < 0. Z(s) can be continued analytically to the whole complex plane except for a simple pole at s = 1. The value of Z(k) for k = 2, 3, ... is determined in terms of infinite series of the form

Σ_{n=1}^∞ cot^r(nπτ)/n^{2k−1},   (r = 1, 2, ..., k),

where

τ = ( b + √(b^2 − 4ac) ) / (2a).
5.1 Introduction

Let a, b and c be real numbers with a > 0 and D = 4ac − b^2 > 0, so that Q(u, v) = au^2 + buv + cv^2 is a positive-definite binary quadratic form of discriminant D. The Epstein zeta function Z(s) is then defined by the double series

Z(s) = Σ_{(m,n)≠(0,0)} 1/Q(m, n)^s,

where s = σ + it with σ, t ∈ R and σ > 1. Since Q(u, v) ≥ λ(u^2 + v^2) with

λ = (1/2) ( a + c − √((a − c)^2 + b^2) ) > 0

for all real numbers u and v, the double series converges absolutely for σ > 1 and uniformly in every half plane σ ≥ 1 + ε (ε > 0). Thus Z(s) is an analytic function of s for σ > 1. Furthermore, it can be continued analytically to the whole complex plane except for a simple pole at s = 1 and satisfies the functional equation

(√D/2π)^s Γ(s) Z(s) = (√D/2π)^{1−s} Γ(1 − s) Z(1 − s).
5.2 The Functional Equation

Recall 5.1

τ = ( b + √(b^2 − 4ac) ) / (2a).

Setting

x = b/(2a),   y = √D/(2a),   τ = x + iy = (b + i√D)/(2a),

we have

τ + τ̄ = b/a,   ττ̄ = c/a,

so that

Q(m, n) = am^2 + bmn + cn^2 = a(m + nτ)(m + nτ̄) = a|m + nτ|^2

and

Z(s) = Σ_{(m,n)≠(0,0)} 1/(a^s |m + nτ|^{2s}),   σ > 1.
Separating the terms with n = 0, we have

Z(s) = (2/a^s) Σ_{m=1}^∞ 1/m^{2s} + (2/a^s) Σ_{n=1}^∞ Σ_{m=−∞}^∞ 1/|m + nτ|^{2s},   σ > 1.

We wish to evaluate the second term and therefore apply the Poisson summation formula

Σ_{m=−∞}^∞ f(m) = Σ_{m=−∞}^∞ ∫_{−∞}^∞ f(u) cos(2mπu) du

to the function

f(t) = 1/|t + τ|^{2s}

to obtain

Σ_{m=−∞}^∞ 1/|m + τ|^{2s} = Σ_{m=−∞}^∞ ∫_{−∞}^∞ cos(2mπu)/|u + τ|^{2s} du
 = Σ_{m=−∞}^∞ ∫_{−∞}^∞ cos(2mπu)/{(u + x)^2 + y^2}^s du
 = Σ_{m=−∞}^∞ ∫_{−∞}^∞ cos(2mπ(t − x))/(t^2 + y^2)^s dt
 = Σ_{m=−∞}^∞ cos(2mπx) ∫_{−∞}^∞ cos(2mπt)/(t^2 + y^2)^s dt,

since the integrals involving the sine function vanish. This yields

Σ_{m=−∞}^∞ 1/|m + τ|^{2s} = (2/y^{2s−1}) ∫_0^∞ dt/(1 + t^2)^s + (4/y^{2s−1}) Σ_{m=1}^∞ cos(2mπx) ∫_0^∞ cos(2mπyt)/(1 + t^2)^s dt,   σ > 1.

Now we wish to evaluate the two integrals, first making the substitution

u = t^2/(1 + t^2),

which gives

1/(1 + t^2) = 1 − u,   du = 2t dt/(1 + t^2)^2 = 2u^{1/2}(1 − u)^{3/2} dt,

and therefore

∫_0^∞ dt/(1 + t^2)^s = (1/2) ∫_0^1 (1 − u)^{s−3/2} u^{−1/2} du = (1/2) B(s − 1/2, 1/2) = √π Γ(s − 1/2)/(2Γ(s)).

From an integral representation of the Bessel function

K_ν(y) = (1/√π) Γ(ν + 1/2) (2/y)^ν ∫_0^∞ cos(yt)/(1 + t^2)^{ν+1/2} dt,   y > 0,   ℜ(ν) > −1/2,
∞
Z 0
cos 2mπyt dt = (1 + t2 )s
√
1
π(mπy)s− 2 1 Ks− 21 (2mπy), σ > . Γ(s) 2
Thus ∞ X
1 = |m + τ |2s m=−∞
Γ s−
1 2
√
y 2s−1 Γ(s)
π +
√ ∞ X 1 4 π (mπy)s− 2 cos(2mπx)Ks− 12 (2mπy), σ > 1 2s−1 y Γ(s) m=1 18
such that for n ≥ 1 √ Γ s − 12 π
√ ∞ X 1 4 π 1 = + (mnπy)s− 2 cos(2mnπx)Ks− 21 (2mnπy). 2s 2s−1 y 2s−1 Γ(s) 2s−1 y 2s−1 Γ(s) |m + nτ | n n m=−∞ m=1 ∞ X
Hence Γ s− Z(s) = 2a−s ζ(2s)+2a−s y 1−2s
1 2
√
π ζ(2s−1)+
Γ(s)
1 ∞ ∞ 1 8a−s y 2 −s π s X 1−2s X n (mn)s− 2 cos(2mnπx)Ks− 12 (2mnπy). Γ(s) n=1 m=1
Collecting the terms with mn = k, we obtain
Z(s) = 2a
−s
−s 1−2s
ζ(2s)+2a
y
√ π Γ s − 12 Γ(s)
1 ∞ 1 8a−s y 2 −s π s X X 1−2s ζ(2s−1)+ n (k)s− 2 cos(2kπx)Ks− 12 (2kπy) Γ(s) k=1
that is Z(s) = 2a
−s
−s 1−2s
ζ(2s) + 2a
where H(s) = 4
√ Γ s − 12 π
y
∞ X
Γ(s)
n|k
1
2a−s y 2 −s π s ζ(2s − 1) + H(s) Γ(s)
1
σ1−2s (k)(k)s− 2 cos(2kπx)Ks− 12 (2kπy)
k=1
and σν denotes the ν-th powers of the divisors of k σν (k) =
X
ν
d =
X k ν
d|k
d|k
d
.
We can now express Z(s) in another form s s 1 y ay 1 1−s 12 −s Γ(s)Z(s) = 2 Γ(s)ζ(2s) + 2y π ζ(2s − 1) + 2y 2 H(s). Γ s− π π 2 By the functional equation of the Riemann zeta function 1 ζ(2s − 1) = 2(2π)2s−2 sin s − πΓ(2 − 2s)ζ(2 − 2s) 2 and the basic properties of the gamma function √
Γ(2 − 2s) = we have
π Γ(1 − s) 22s−1 Γ(s − 12 ) sin(s − 21 )π
1 1 π 2 −s Γ s − ζ(2s − 1) = π s−1 Γ(1 − s)ζ(2 − 2s) 2
and
ay π
s
s 1−s 1 y y Γ(s)Z(s) = 2 Γ(s)ζ(2s) + 2 Γ(1 − s)ζ(2 − 2s) + 2y 2 H(s). π π
Recall 5.2 K−ν (y) = Kν (y) and
ν
ν
k − 2 σν (k) = k 2 σ−ν (k) 19
leading to H(s) = H(1 − s). For
φ(s) =
ay π
s Γ(s)Z(s)
we have φ(s) = φ(1 − s). Since
√ ay =
we have
D 2
√ s √ 1−s D D Γ(s)Z(s) = Γ(1 − s)Z(1 − s) 2π 2π
which is the functional equation of Z(s).
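The Fourier-Bessel expansion at the heart of the derivation (the formula for Σ_m 1/|m + τ|^{2s}) can be tested numerically. The sketch below is our own illustrative check: it evaluates K_ν through the standard integral representation K_ν(z) = ∫_0^∞ e^{−z cosh t} cosh(νt) dt, and all truncation limits are arbitrary choices:

```python
import math

def K(nu, z, steps=4000, T=12.0):
    # modified Bessel function via K_nu(z) = int_0^inf e^(-z cosh t) cosh(nu t) dt (midpoint rule)
    h = T / steps
    return h * sum(math.exp(-z * math.cosh((i + 0.5) * h)) * math.cosh(nu * (i + 0.5) * h)
                   for i in range(steps))

s, x, y = 2.0, 0.3, 1.2     # tau = x + iy with y > 0, and s > 1
nu = s - 0.5

# left side: direct sum over m of 1/|m + tau|^(2s) = 1/((m+x)^2 + y^2)^s
lhs = sum(1.0 / ((m + x) ** 2 + y ** 2) ** s for m in range(-400, 401))

# right side: Gamma term plus the rapidly convergent Bessel series
rhs = math.sqrt(math.pi) * math.gamma(nu) / (y ** (2 * s - 1) * math.gamma(s))
rhs += (4 * math.sqrt(math.pi) / (y ** (2 * s - 1) * math.gamma(s))) * sum(
    (m * math.pi * y) ** nu * math.cos(2 * m * math.pi * x) * K(nu, 2 * m * math.pi * y)
    for m in range(1, 10))

print(lhs, rhs)
```

Only a handful of Bessel terms are needed, since K_{s−1/2}(2mπy) decays like e^{−2mπy}.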
6 The Laplace Transforms

6.1 Introduction

The theory of the Laplace transform has a long and rich history. Many mathematicians contributed, among whom Euler, Lagrange and Laplace played important roles in realising the importance of the Laplace transform for solving not only differential equations but also difference equations. Euler used the Laplace transform in order to solve certain differential equations, whereas it was Laplace who understood the true essence of the theory of the Laplace transform in solving both differential and difference equations.
6.2 Laplace Transform

For a complex-valued function x, defined for t > 0, the Laplace transform of x(t) is defined by

X(s) = ∫_0^∞ x(t) e^{−st} dt    (18)

for all s ∈ R for which the integral converges. Alternatively, we may use the expression L{x(t)} = X(s) to denote the Laplace transform.

Example 1: Let x(t) = e^{−at} for a ∈ R. Then by direct integration

X(s) = ∫_0^∞ e^{−(s+a)t} dt = 1/(s + a),   s > −a.
In a similar fashion we can obtain the Laplace transform of e^{at}.

Example 2: Let x(t) = sin(ωt) for ω ∈ R. Then

X(s) = ∫_0^∞ sin(ωt) e^{−st} dt .

Integrating the right-hand side by parts twice, we obtain

∫_0^∞ sin(ωt) e^{−st} dt = ω/s² − (ω²/s²) ∫_0^∞ sin(ωt) e^{−st} dt .

Rearranging, we find

((s² + ω²)/s²) ∫_0^∞ sin(ωt) e^{−st} dt = ω/s²  ⟹  ∫_0^∞ sin(ωt) e^{−st} dt = ω/(s² + ω²) .

Thus

L{sin(ωt)} = ω/(s² + ω²)

as required.
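As a quick numerical sanity check (not part of the original text), the two transforms above can be verified by truncating the Laplace integral and applying the trapezoid rule. The helper name `laplace_num` and the sample values of a, ω and s are our own choices.

```python
# Hypothetical helper: truncated trapezoid-rule Laplace transform.
import math

def laplace_num(x, s, upper=60.0, n=60_000):
    """Approximate int_0^upper x(t) e^(-s t) dt by the trapezoid rule."""
    h = upper / n
    total = 0.5 * (x(0.0) + x(upper) * math.exp(-s * upper))
    for k in range(1, n):
        t = k * h
        total += x(t) * math.exp(-s * t)
    return total * h

a, w, s = 0.5, 3.0, 2.0
# Compare against the closed forms 1/(s+a) and w/(s^2 + w^2).
exp_err = abs(laplace_num(lambda t: math.exp(-a * t), s) - 1.0 / (s + a))
sin_err = abs(laplace_num(lambda t: math.sin(w * t), s) - w / (s**2 + w**2))
```

Both errors should be on the order of the quadrature error, far smaller than either transform value.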
6.3  Properties of the Laplace Transform
We look at the standard properties of the Laplace transform.
6.4  Linearity of the Laplace Transform
Linearity of the Laplace transform is an important result which states:

L{C x(t)} = C L{x(t)} = C X(s) ,   C ∈ R ;    (19)

L{A x(t) + B y(t)} = A X(s) + B Y(s) ,   A, B ∈ R .    (20)

6.5  Derivatives of the Laplace Transform

6.5.1  First Derivative
The Laplace transform of the first derivative is given by

L{x′(t)} = s X(s) − x(0) .    (21)

Proof. Directly from the definition of the Laplace transform, we have

L{x′(t)} = ∫_0^∞ e^{−st} x′(t) dt .

The integral on the right-hand side can be integrated by parts once to obtain

∫_0^∞ e^{−st} x′(t) dt = [x(t) e^{−st}]_0^∞ + s ∫_0^∞ x(t) e^{−st} dt = −x(0) + s ∫_0^∞ e^{−st} x(t) dt .

Hence L{x′(t)} = −x(0) + s X(s) = s X(s) − x(0), as required.
6.5.2  The nth Derivative

The Laplace transform of the nth derivative is given by

L{x^{(n)}(t)} = s^n X(s) − s^{n−1} x(0) − … − x^{(n−1)}(0) .    (22)
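The derivative rule L{x′(t)} = s X(s) − x(0) can be spot-checked numerically, here for x(t) = cos t, where X(s) = s/(s² + 1) and x′(t) = −sin t. This is a sketch, not from the original text; the quadrature helper and sample value of s are ours.

```python
# Check L{x'(t)} = s X(s) - x(0) for x(t) = cos t.
import math

def laplace_num(x, s, upper=60.0, n=60_000):
    """Trapezoid-rule approximation of the Laplace integral of x(t)."""
    h = upper / n
    total = 0.5 * (x(0.0) + x(upper) * math.exp(-s * upper))
    for k in range(1, n):
        t = k * h
        total += x(t) * math.exp(-s * t)
    return total * h

s = 2.0
lhs = laplace_num(lambda t: -math.sin(t), s)   # L{x'(t)} computed directly
rhs = s * (s / (s**2 + 1)) - 1.0               # s X(s) - x(0), with x(0) = 1
err = abs(lhs - rhs)
```

Both sides equal −1/(s² + 1) = −0.2 here, up to quadrature error.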
6.6  Shift property of the Laplace Transform
The shift property of the Laplace transform states

L{x(t) e^{at}} = X(s − a) .    (23)

Proof. By direct integration,

L{x(t) e^{at}} = ∫_0^∞ x(t) e^{−(s−a)t} dt = X(s − a) .

In many books, this property is referred to as the "First Shift Theorem".
6.7  Scaling property of the Laplace Transform
The scaling property of the Laplace transform states

L{x(at)} = (1/a) X(s/a) .    (24)

Proof. By the definition of the Laplace transform,

L{x(at)} = ∫_0^∞ e^{−st} x(at) dt = (1/a) ∫_0^∞ e^{−(s/a)y} x(y) dy = (1/a) X(s/a)

as required.
6.8  Laplace Transform of t^n
The Laplace transform of t^n is

L{t^n} = n!/s^{n+1} .    (25)

Example: The Laplace transform of t^6 is computed as

L{t^6} = 6!/s^7 .
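Equation (25) can also be confirmed numerically; the sketch below checks n = 6 with s = 2, where 6!/2^7 = 5.625. The truncation point and step count are arbitrary choices of ours.

```python
# Numerical check of L{t^n} = n!/s^(n+1) for n = 6, s = 2.
import math

s, n_pow = 2.0, 6
upper, n = 60.0, 60_000
h = upper / n
# Trapezoid rule; both endpoint contributions are essentially zero here.
total = 0.5 * (0.0 + upper**n_pow * math.exp(-s * upper))
for k in range(1, n):
    t = k * h
    total += t**n_pow * math.exp(-s * t)
approx = total * h
exact = math.factorial(n_pow) / s ** (n_pow + 1)   # 720/128 = 5.625
err = abs(approx - exact)
```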
6.9  Examples
Example 1: Let x(t) = e^{6t} sin(3t). Then applying the shift property of the Laplace transform, we have

L{e^{6t} sin(3t)} = 3 / ((s − 6)² + 9)

as the Laplace transform for x(t) = e^{6t} sin(3t). In general, by the shift property of the Laplace transform, we conclude that

L{e^{at} sin(ωt)} = ω / ((s − a)² + ω²) .

Example 2: Let x(t) = cos(ωt) for ω ∈ R. Recall:

cos(ωt) = (e^{iωt} + e^{−iωt}) / 2 .

Then by the linearity property of the Laplace transform, we have

L{cos(ωt)} = L{(e^{iωt} + e^{−iωt})/2} = (1/2) (L{e^{iωt}} + L{e^{−iωt}}) .

We know that for x(t) = e^{−at} we have X(s) = 1/(s + a). As a result, we obtain

(1/2) (L{e^{iωt}} + L{e^{−iωt}}) = (1/2) (1/(s − iω) + 1/(s + iω)) = (1/2) · 2s/((s − iω)(s + iω)) = (1/2) · 2s/(s² + ω²) = s/(s² + ω²) .

Example 3: Let x(t) = e^{9t} cos(8t). Then applying the shift property of the Laplace transform, we have

L{e^{9t} cos(8t)} = (s − 9) / ((s − 9)² + 64)

as the Laplace transform for x(t) = e^{9t} cos(8t). In general, by the shift property of the Laplace transform, we conclude that

L{e^{at} cos(ωt)} = (s − a) / ((s − a)² + ω²) .
6.10  Inverse Laplace Transform
If for a given function X(s) we can find a function x(t) such that L{x(t)} = X(s), then the inverse Laplace transform is denoted L^{−1}{X(s)} = x(t) and is unique. The inverse Laplace transform is linear:

L^{−1}{C X(s)} = C x(t) ,   C ∈ R ;    (26)

L^{−1}{A X(s) + B Y(s)} = A x(t) + B y(t) ,   A, B ∈ R .    (27)

Example: Find

L^{−1}{ (12s − 6) / ((s + 5)(s − 3)(s + 7)) } .

We use partial fractions, the transform L{e^{−at}} = 1/(s + a) and equation (27), to obtain

L^{−1}{ (12s − 6) / ((s + 5)(s − 3)(s + 7)) } = L^{−1}{ 33/(8(s + 5)) + 3/(8(s − 3)) − 9/(2(s + 7)) } = (33/8) e^{−5t} + (3/8) e^{3t} − (9/2) e^{−7t} .
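The partial-fraction step is where slips usually happen, so it is worth confirming the decomposition by evaluating both sides at a few sample points (a small sketch; the sample points are arbitrary choices of ours):

```python
# Spot-check the partial-fraction decomposition used above.
def original(s):
    return (12 * s - 6) / ((s + 5) * (s - 3) * (s + 7))

def decomposed(s):
    return 33 / (8 * (s + 5)) + 3 / (8 * (s - 3)) - 9 / (2 * (s + 7))

# Any points away from the poles s = -5, 3, -7 will do.
max_err = max(abs(original(s) - decomposed(s)) for s in (0.5, 1.0, 4.0, 10.0))
```

Since both sides are the same rational function, the difference is zero up to rounding.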
6.11  Conditions for the existence of the Laplace Transform
Theorem 6.1 Let x(t) be a piecewise continuous function on the interval [0, ∞) of exponential order a, that is,

|x(t)| ≤ K e^{at} ,   t ≥ 0 ,

with real constants K and a, where K is positive. Then the Laplace transform L{x(t)} = X(s) exists for s > a.

Example: x(t) = e^{7t} cos 5t is of exponential order a = 7 since

|e^{7t} cos 5t| ≤ e^{7t} .

Hence, with K = 1, the Laplace transform of x(t) exists for s > 7.
6.12  Convolution Theorem
In this section, we seek to compute the Laplace transform of a convolution. Let us begin by reminding ourselves what we mean by a convolution. The convolution, on the interval [0, ∞), is defined as

(x ∗ y)(t) = ∫_0^t x(τ) y(t − τ) dτ .    (28)

Theorem 6.2 The Laplace transform of two functions under convolution is

L{(x ∗ y)(t)} = L{x(t)} · L{y(t)} = X(s) Y(s) .

Proof. From the definition of the Laplace transform, we have

L{ ∫_0^t x(τ) y(t − τ) dτ } = ∫_0^∞ ( ∫_0^t x(τ) y(t − τ) dτ ) e^{−st} dt .

To simplify the repeated integral we introduce the shifted unit step function U(t − τ) to obtain

∫_0^∞ ( ∫_0^t x(τ) y(t − τ) dτ ) e^{−st} dt = ∫_0^∞ ( ∫_0^∞ U(t − τ) x(τ) y(t − τ) dτ ) e^{−st} dt .

Changing the order of integration, see [?, p. 187], we have

∫_0^∞ ( ∫_0^∞ U(t − τ) x(τ) y(t − τ) dτ ) e^{−st} dt = ∫_0^∞ x(τ) ( ∫_0^∞ U(t − τ) y(t − τ) e^{−st} dt ) dτ .

Using the second shift theorem, equation (30), we obtain

∫_0^∞ U(t − τ) y(t − τ) e^{−st} dt = e^{−sτ} Y(s) .

Therefore

∫_0^∞ x(τ) ( ∫_0^∞ U(t − τ) y(t − τ) e^{−st} dt ) dτ = ∫_0^∞ x(τ) e^{−sτ} Y(s) dτ = Y(s) ∫_0^∞ e^{−sτ} x(τ) dτ = X(s) Y(s) .
Example: Solve

f(t) + ∫_0^t (t − u) f(u) du = sin(2t) .

We begin by taking the Laplace transform of both sides of the equation to obtain

L{f(t)} + L{t ∗ f(t)} = L{sin(2t)} .

Thus, by Theorem 6.2, we have

L{f(t)} + (1/s²) · L{f(t)} = 2/(s² + 4) .

Solving for L{f(t)} gives

L{f(t)} = 2s² / ((s² + 1)(s² + 4)) = −2/(3(s² + 1)) + 8/(3(s² + 4)) .

Taking the inverse Laplace transform, we obtain

f(t) = (4/3) sin(2t) − (2/3) sin(t) .
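The solution can be verified against the original integral equation by computing the convolution term with Simpson's rule. This is a numerical sketch; the node count and sample points are ours.

```python
# For several t, f(t) + int_0^t (t - u) f(u) du should equal sin(2t).
import math

def f(t):
    return (4.0 / 3.0) * math.sin(2 * t) - (2.0 / 3.0) * math.sin(t)

def residual(t, n=2000):
    """Simpson's rule for the convolution integral, compared to sin(2t)."""
    h = t / n
    s = (t - 0.0) * f(0.0) + 0.0 * f(t)          # endpoint terms of Simpson
    for k in range(1, n):
        u = k * h
        s += (4 if k % 2 else 2) * (t - u) * f(u)
    integral = s * h / 3.0
    return abs(f(t) + integral - math.sin(2 * t))

max_err = max(residual(t) for t in (0.5, 1.0, 2.0, 3.0))
```

The residual is zero up to quadrature error, confirming the inverse-transform step.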
6.13  Ordinary Differential Equations
One application of the Laplace transform is to solve differential equations. In this section, we consider ordinary differential equations, or ODEs. The scheme behind the use of Laplace transforms to solve ODEs is shown in the following diagram:

ODE in x(t)  --Laplace transform L-->  algebraic equation in X(s)  --solve-->  X(s)  --inverse Laplace transform L^{−1}-->  x(t)

Figure 1: Ordinary differential equations (ODEs) can be solved directly, or by way of Laplace transforms, to obtain the same solution in both cases. This figure appears in [?].

Example: Find the solution to the initial value problem

y′′(t) − y(t) = t − 2
given y(2) = 3 and y′(2) = 0. We begin by moving the initial conditions to t = 0. This is done by setting x(t) = y(t + 2). Then x′(t) = y′(t + 2) and x′′(t) = y′′(t + 2), and the initial value problem becomes

x′′(t) − x(t) = t ,   x(0) = 3 ,   x′(0) = 0 .

Taking the Laplace transform of both sides and using the linearity of the Laplace transform, see subsection 6.4, we have

L{x′′(t)} − L{x(t)} = L{t} .

Using equation (22) and equation (25), we obtain

s² X(s) − s x(0) − x′(0) − X(s) = 1/s² .

Using the initial values x(0) = 3 and x′(0) = 0 and simplifying gives

X(s) = (1 + 3s³) / (s²(s² − 1)) = 2/(s − 1) + 1/(s + 1) − 1/s² .

From the table of Laplace transforms, we find the inverse Laplace transform of X(s) is

x(t) = 2e^t + e^{−t} − t .

But x(t) = y(t + 2), therefore y(t) = x(t − 2). Hence we have

y(t) = 2e^{t−2} + e^{2−t} + 2 − t .

For more examples, I highly recommend [?].
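The final solution can be checked against the original initial value problem by finite differences (a sketch; the step size h and sample points are arbitrary choices of ours):

```python
# Verify y(t) = 2 e^(t-2) + e^(2-t) + 2 - t against y'' - y = t - 2
# with y(2) = 3, y'(2) = 0.
import math

def y(t):
    return 2 * math.exp(t - 2) + math.exp(2 - t) + 2 - t

h = 1e-4

def ode_residual(t):
    y2 = (y(t + h) - 2 * y(t) + y(t - h)) / h**2   # central second difference
    return abs(y2 - y(t) - (t - 2))

max_res = max(ode_residual(t) for t in (0.0, 1.0, 2.0, 3.0))
ic_err = abs(y(2) - 3)                              # y(2) should be 3
der_err = abs((y(2 + h) - y(2 - h)) / (2 * h))      # y'(2) should be 0
```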
6.14  System Of ODEs
The Laplace transform can be used to solve a system of ordinary differential equations.

Example 1: Find the solution to the initial value problem

x′ = y + sin(t) ,    x(0) = 2 ,
y′ = x + 2 cos(t) ,  y(0) = 0 .

We begin by taking the Laplace transform of both sides of both equations and using the initial conditions to obtain

s X(s) − 2 = Y(s) + 1/(s² + 1) ,
s Y(s) = X(s) + 2s/(s² + 1) ,

knowing the Laplace transform of sin(t) from section 6.2, the Laplace transform of cos(t) from section 6.9 and using equation (21). We proceed by eliminating either X(s) or Y(s). Eliminating X(s) gives

(s² − 1) Y(s) = 2s²/(s² + 1) + 2 + 1/(s² + 1) .

Thus

Y(s) = (4s² + 3) / ((s² + 1)(s² − 1)) = 1/(2(s² + 1)) − 7/(4(s + 1)) + 7/(4(s − 1)) .

Using equation (27) gives

L^{−1}{Y(s)} = (1/2) L^{−1}{1/(s² + 1)} − (7/4) L^{−1}{1/(s + 1)} + (7/4) L^{−1}{1/(s − 1)} .

Hence

y(t) = (1/2) sin(t) − (7/4) e^{−t} + (7/4) e^t .

We find x(t) simply: from y′ = x + 2 cos(t) and

y′ = (1/2) cos(t) + (7/4) e^{−t} + (7/4) e^t ,

we obtain

x(t) = (7/4) e^{−t} + (7/4) e^t − (3/2) cos(t) .
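The solution pair can be checked by finite differences: x′ should equal y + sin t and y′ should equal x + 2 cos t. This is a sketch; the step size and sample points are ours.

```python
# Check the solution of the 2x2 system against both equations and ICs.
import math

def x(t):
    return (7 / 4) * math.exp(-t) + (7 / 4) * math.exp(t) - 1.5 * math.cos(t)

def yy(t):
    return 0.5 * math.sin(t) - (7 / 4) * math.exp(-t) + (7 / 4) * math.exp(t)

h = 1e-5

def res(t):
    xp = (x(t + h) - x(t - h)) / (2 * h)   # central difference for x'
    yp = (yy(t + h) - yy(t - h)) / (2 * h)  # central difference for y'
    return max(abs(xp - (yy(t) + math.sin(t))),
               abs(yp - (x(t) + 2 * math.cos(t))))

max_res = max(res(t) for t in (0.0, 0.7, 1.5, 3.0))
ic_err = max(abs(x(0) - 2), abs(yy(0)))    # x(0) = 2, y(0) = 0
```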
6.15  Impulse-Response Function
Definition: The transfer function R(s) is defined as the ratio of the Laplace transform of the output y(t) to the Laplace transform of the input x(t), as [?, ?, ?] suggest, given that the initial conditions are zero. This is equivalent to

R(s) = Y(s) / X(s) .

Consider a second order differential equation

a y′′ + b y′ + c y = x(t)

where a, b, c are constants with y(0) = 0 and y′(0) = 0. Taking the Laplace transform of both sides, using equation (20) and equation (22), gives

(a s² + b s + c) Y(s) − a s y(0) − a y′(0) − b y(0) = X(s)

and using the fact that the initial conditions are zero, we have

(a s² + b s + c) Y(s) = X(s) .

Therefore, we define the transfer function to be

R(s) = Y(s)/X(s) = 1/(a s² + b s + c) .

The function r(t) = L^{−1}{R(s)} is called the impulse-response function.

Example: Using the convolution theorem, obtain the solution to the following initial value problem

y′′ − 2y′ + y = x(t)

given y(0) = −1 and y′(0) = 1. We know the form of the transfer function, and in our case we have

R(s) = 1/(a s² + b s + c) = 1/(s² − 2s + 1) = 1/(s − 1)² .

From the table of Laplace transforms, see [?, ?, ?], the inverse Laplace transform of R(s) is

r(t) = L^{−1}{1/(s − 1)²} = t e^t .

We now solve the homogeneous problem, y′′ − 2y′ + y = 0, using the initial conditions y(0) = −1 and y′(0) = 1, to obtain

y(t) = (2t − 1) e^t .

Hence, using the convolution theorem, equation (28), the solution to the initial value problem is

(x ∗ r)(t) + y(t) = ∫_0^t x(τ) e^{t−τ} (t − τ) dτ + (2t − 1) e^t ,

where r is the impulse-response function and y(t) is the homogeneous solution fitted to the initial conditions.
6.16  Dirac Delta Functional
The Dirac delta functional δ(t) is defined by

δ(t) = 0 for t ≠ 0 ,   δ(t) = ∞ for t = 0 ,

with the property

∫_{−∞}^∞ δ(t) dt = 1 .

The shifted delta functional δ(t − a) is defined by

δ(t − a) = 0 for t ≠ a ,   δ(t − a) = ∞ for t = a ,

with the sifting property

∫_{−∞}^∞ f(t) δ(t − a) dt = f(a) .

The Laplace transform of δ(t − a) follows by taking f(t) = e^{−st}:

L{δ(t − a)} = ∫_0^∞ e^{−st} δ(t − a) dt = e^{−as} .

We can find a relation between δ(t) and the unit step function U(t):

∫_{−∞}^t δ(x − a) dx = U(t − a) .

Thus

δ(t − a) = U′(t − a) ,

that is, the Dirac delta functional is the derivative of the unit step function.

Example: Boundary-value problem. Consider a beam of length 2λ that is embedded in a support on the left and free on the right. The vertical deflection of the beam a distance x away from the support is denoted by y(x). If the beam has a concentrated load L on it in the centre of the beam, then the deflection must satisfy the boundary value problem

EI y′′′′(x) = L δ(x − λ) ,   y(0) = y′(0) = y′′(2λ) = y′′′(2λ) = 0 ,

where the modulus of elasticity E and the moment of inertia I are constants. We shall solve for the displacement y(x) in terms of the constants λ, L, E and I. We begin by taking the Laplace transform of both sides to obtain

EI L{y′′′′(x)} = L L{δ(x − λ)} .

By equation (22) and the initial conditions y(0) = y′(0) = 0,

L{y′′′′(x)} = s⁴ Y(s) − s³ y(0) − s² y′(0) − s y′′(0) − y′′′(0) = s⁴ Y(s) − s y′′(0) − y′′′(0) .

Thus

s⁴ Y(s) − s y′′(0) − y′′′(0) = (L/EI) L{δ(x − λ)} = (L/EI) e^{−λs} .

Let A = y′′(0) and B = y′′′(0); then

Y(s) = A/s³ + B/s⁴ + (L/EI) e^{−λs}/s⁴ .

We now use equation (25) and equation (30) to find the inverse Laplace transform of Y(s):

y(x) = A x²/2 + B x³/6 + (L/(6EI)) (x − λ)³ U(x − λ) .

We are given that y′′(2λ) = y′′′(2λ) = 0; differentiating twice and three times respectively, we obtain

0 = A + 2λB + (L/EI) λ   and   0 = B + L/EI .

Hence B = −L/EI, A = λL/EI, and the solution to the problem is

y(x) = (L/(6EI)) ( 3λx² − x³ + (x − λ)³ U(x − λ) ) .
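The boundary conditions can be confirmed directly from the closed form. The derivatives below are worked out by hand from y(x), and λ, L, E, I are arbitrary sample values (written `lam`, `load`, `E`, `I` to avoid clashing with other names). This is a sketch, not part of the original text.

```python
# Check y(0) = y'(0) = 0 and y''(2*lam) = y'''(2*lam) = 0 for the
# beam deflection formula.
lam, load, E, I = 1.3, 2.0, 5.0, 0.7
c = load / (6 * E * I)

def step(z):
    return 1.0 if z >= 0 else 0.0

def y(x):
    return c * (3 * lam * x**2 - x**3 + (x - lam)**3 * step(x - lam))

def d1y(x):   # first derivative of the closed form
    return c * (6 * lam * x - 3 * x**2 + 3 * (x - lam)**2 * step(x - lam))

def d2y(x):   # second derivative of the closed form
    return c * (6 * lam - 6 * x + 6 * (x - lam) * step(x - lam))

def d3y(x):   # third derivative of the closed form (away from x = lam)
    return c * (-6 + 6 * step(x - lam))

checks = (abs(y(0.0)), abs(d1y(0.0)), abs(d2y(2 * lam)), abs(d3y(2 * lam)))
```

All four boundary values vanish, as the boundary value problem requires.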
6.17  Partial Differential Equations
In this section, we show how to use the Laplace transform to solve one-dimensional linear partial differential equations (PDEs). There are three main steps to solving a PDE using the Laplace transform:

1. Take the Laplace transform with respect to one of the two variables, usually t. This gives an ODE for the transform of the unknown function.

2. Solve the ODE to obtain the transform of the unknown function.

3. Take the inverse Laplace transform to obtain the solution to the original problem.

6.17.1  The Heat Equation
In this section, through the use of the Laplace transform, we seek solutions to initial-boundary value problems involving the heat equation. The one-dimensional partial differential equation

∂u/∂t = c² ∂²u/∂x²    (29)

is known as the heat equation, where c² is known as the thermal diffusivity of the material.

Example 1: Solve

∂u/∂t = ∂²u/∂x² ,   x > 0 ,   t > 0 ,

given

u(x, 0) = 0 ,   u(0, t) = δ(t) ,   lim_{x→∞} u(x, t) = 0 .

We have to solve the heat equation for positive x and t, with c² = 1, subject to the boundary conditions

u(0, t) = δ(t) ,   lim_{x→∞} u(x, t) = 0 ,

with the initial condition u(x, 0) = 0. We begin by taking the Laplace transform, with respect to t, of both sides:

L{∂u/∂t} = s L{u(x, t)} = L{∂²u/∂x²} .

Let L{u(x, t)} = U(x, s); then

s U = ∂²U/∂x²  ⟹  ∂²U/∂x² − s U = 0 .

Notice that we have obtained an ODE, in the variable x, which has the general solution

U(x, s) = A(s) e^{√s x} + B(s) e^{−√s x} .

Applying the boundary conditions, with L{f(t)} = F(s), we obtain

U(0, s) = L{u(0, t)} = L{δ(t)} = 1 ,

and, assuming we can interchange the limit and the integral,

lim_{x→∞} U(x, s) = lim_{x→∞} ∫_0^∞ e^{−st} u(x, t) dt = ∫_0^∞ e^{−st} lim_{x→∞} u(x, t) dt = 0 .

The second boundary condition forces

A(s) = 0 ,

since for every fixed s > 0, e^{√s x} increases as x → ∞. Hence

U(0, s) = B(s) = 1 ,

and therefore

U(x, s) = e^{−√s x} .
From the tables of the Laplace transforms we obtain the inverse Laplace transform

L^{−1}{ e^{−√s x} } = (x / (2√(π t³))) e^{−x²/(4t)} .

Hence

u(x, t) = (x / (2√(π t³))) e^{−x²/(4t)} .
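That u(x, t) really solves u_t = u_xx can be checked by finite differences at a few interior points (a numerical sketch; the step size and sample points are our own choices):

```python
# Finite-difference check that u(x,t) = x/(2 sqrt(pi t^3)) e^(-x^2/(4t))
# satisfies the heat equation u_t = u_xx.
import math

def u(x, t):
    return x / (2 * math.sqrt(math.pi * t**3)) * math.exp(-x**2 / (4 * t))

h = 1e-3

def residual(x, t):
    ut = (u(x, t + h) - u(x, t - h)) / (2 * h)               # time derivative
    uxx = (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / h**2    # space derivative
    return abs(ut - uxx)

max_res = max(residual(x, t) for (x, t) in ((0.5, 0.3), (1.0, 0.5), (2.0, 1.0)))
```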
Example 2: We find the temperature w(x, t) in a semi-infinite laterally insulated bar extending from x = 0 along the x-axis to infinity, assuming that the original temperature is 0, w(x, t) → 0 as x → ∞ for every fixed t ≥ 0, and w(0, t) = 1/√t. We have to solve the heat equation for positive x and t subject to the boundary conditions

w(0, t) = 1/√t ,   lim_{x→∞} w(x, t) = 0 ,

with the initial condition w(x, 0) = 0. We begin by taking the Laplace transform, with respect to t, of both sides:

L{∂w/∂t} = s L{w(x, t)} − 0 = c² L{∂²w/∂x²} .

Let L{w(x, t)} = W(x, s); then

s W = c² ∂²W/∂x²  ⟹  ∂²W/∂x² − (s/c²) W = 0 .

Notice that we have obtained an ODE, in the variable x, which has the general solution

W(x, s) = A(s) e^{√s x/c} + B(s) e^{−√s x/c} .

Applying the first boundary condition, we have

W(0, s) = L{w(0, t)} = √(π/s)  ⟹  W(0, s) = A(s) + B(s) = √(π/s) .

Assuming we can interchange the limit and the integral, the second boundary condition gives

lim_{x→∞} W(x, s) = lim_{x→∞} ∫_0^∞ e^{−st} w(x, t) dt = ∫_0^∞ e^{−st} lim_{x→∞} w(x, t) dt = 0 ,

thus A(s) = 0, since c > 0 and, for every fixed s > 0, e^{√s x/c} increases as x → ∞. Hence

W(0, s) = B(s) = √(π/s) .

Therefore

W(x, s) = √(π/s) e^{−√s x/c} .

From the tables of the Laplace transforms, we obtain the inverse

L^{−1}{ e^{−√s x/c} / √s } = (1/√(πt)) e^{−x²/(4c²t)} .

Hence

w(x, t) = (1/√t) e^{−x²/(4c²t)} .
By the convolution theorem, see section 6.12, we can also express the solution as the convolution of the inverse transforms of √(π/s) and e^{−√s x/c}:

w(x, t) = (x/(2c√π)) ∫_0^t τ^{−3/2} e^{−x²/(4c²τ)} (t − τ)^{−1/2} dτ .
6.18  Unit Step Function
The unit step function, U(t), is defined by

U(t) = 0 for t < 0 ,   U(t) = 1 for t ≥ 0 .

For t ≥ 0 the unit step function is the same as 1. Therefore the Laplace transform of U(t) is

L{U(t)} = L{1} = 1/s .

Define the shifted unit step function U(t − a) by

U(t − a) = 0 for 0 ≤ t < a ,   U(t − a) = 1 for t ≥ a ;

then the Laplace transform of U(t − a) is

L{U(t − a)} = ∫_0^∞ U(t − a) e^{−st} dt = ∫_a^∞ e^{−st} dt = e^{−as}/s .
Second Shift Theorem
Theorem 6.3 Let x(t) be a function, then L{x(t − a)U (t − a)} = e−as X(s)
.
(30)
It should be clear that L−1 {e−as X(s)} = x(t − a)U (t − a) .
7  The Mellin Transform and its Properties

The Mellin transform is extremely useful for certain applications including solving Laplace's equation.

Definition 7.1 The Mellin transform is defined as

M(f(t); p) = F(p) = ∫_0^∞ f(t) t^{p−1} dt

for those p for which the integral converges.
7.1  Relation to Laplace Transform

For t = e^{−x}, dt = −e^{−x} dx. Substitution into Definition 7.1 gives

M(f(t); p) = ∫_{−∞}^∞ f(e^{−x}) (e^{−x})^{p−1} e^{−x} dx = ∫_{−∞}^∞ f(e^{−x}) e^{−px} dx ,

which is by definition the (two-sided) Laplace transform of f(e^{−x}).
7.2  Inversion Formula

For c lying in the strip of analyticity of F(p), we have

f(t) = (1/(2πi)) ∫_{c−i∞}^{c+i∞} F(p) t^{−p} dp .
7.3  Scaling Property for a > 0

We have

M(f(at); p) = ∫_0^∞ f(at) t^{p−1} dt .

Substituting x = at, where dx = a dt, we obtain

M(f(at); p) = a^{−p} ∫_0^∞ f(x) x^{p−1} dx = a^{−p} F(p) .
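A numerical check of the scaling property, here for f(t) = e^{−t}, whose Mellin transform is F(p) = Γ(p). The truncation limit, the helper name `mellin_num` and the sample values of a and p are our own choices.

```python
# Check M(f(at); p) = a^(-p) F(p) for f(t) = e^(-t), F(p) = Gamma(p).
import math

def mellin_num(f, p, upper=60.0, n=60_000):
    """Trapezoid rule for int_0^upper f(t) t^(p-1) dt."""
    h = upper / n
    total = 0.5 * f(upper) * upper ** (p - 1)  # t = 0 endpoint vanishes for p > 1
    for k in range(1, n):
        t = k * h
        total += f(t) * t ** (p - 1)
    return total * h

a, p = 3.0, 2.5
approx = mellin_num(lambda t: math.exp(-a * t), p)
exact = a ** (-p) * math.gamma(p)
err = abs(approx - exact)
```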
7.4  Multiplication by t^a

Similar to the scaling property, we obtain

M(t^a f(t); p) = ∫_0^∞ f(t) t^{(p+a)−1} dt = F(p + a) .
7.5  Derivative

We have

M(f′(t); p) = ∫_0^∞ f′(t) t^{p−1} dt = [t^{p−1} f(t)]_0^∞ − (p − 1) ∫_0^∞ f(t) t^{p−2} dt ,

which gives

M(f′(t); p) = −(p − 1) M(f(t); p − 1)

provided t^{p−1} f(t) → 0 as t → 0 and as t → ∞. For the n-th derivative this produces

M(f^{(n)}(t); p) = (−1)^n (p − 1)(p − 2)(p − 3)…(p − n) F(p − n)

provided that the extension of the conditions as t → 0 and as t → ∞ holds up to the (n − 1)-th derivative. Knowing that

(p − 1)(p − 2)(p − 3)…(p − n) = (p − 1)!/(p − n − 1)! = Γ(p)/Γ(p − n) ,

the expression for the n-th derivative can be expressed as

M(f^{(n)}(t); p) = (−1)^n (Γ(p)/Γ(p − n)) F(p − n) .
7.6  Another Property of the Derivative

M(t^n f^{(n)}(t); p) = (−1)^n ((p + n − 1)!/(p − 1)!) F(p) .
7.7  Integral

By making use of the derivative property of the Mellin transform we can easily derive this property. We begin by writing f(t) = ∫_0^t h(u) du, so that f′(t) = h(t). As a result, we obtain

M(h(t); p) = M(f′(t); p) = −(p − 1) M( ∫_0^t h(u) du ; p − 1) .

Rearranging gives

−(1/(p − 1)) M(h(t); p) = M( ∫_0^t h(u) du ; p − 1) .

Substituting p + 1 for p, we arrive at the desired identity

−(1/p) M(h(t); p + 1) = M( ∫_0^t h(u) du ; p) .
7.8  Example 1

The Mellin transform of e^{−t²} is computed using the fact that M(e^{−t}; p) = Γ(p). Substituting u = t², we immediately have that

M(e^{−t²}; p) = (1/2) M(e^{−t}; p/2) = (1/2) Γ(p/2) .
Example 2
The Mellin transform of
Z
∞
2
e−u du
0
is computed as Z t 2 2 1 (p + 1) M( e−u du; p) = M(e−t ; p + 1) = − Γ . 2p 2 0
7.10
The Mellin Tranform & The Zeta Function
The Mellin transform is an integral transform that helps to transform the symmetries of the ϑ functions to the symmetries of the ζ functions. The Mellin transform of 1 ex − 1 with <(s) > 1 is computed as M(
1 ; p) = x e −1
∞
Z
1 dx −1 0 Z ∞ ∞ X = xs−1 e−nx dx xs−1
0
=
n=1
∞ Z X n=1
ex
∞
xs−1 e−nx dx
0
Setting nx = t with dt = ndx
∞ Z X
s−1 1 t e−t dt = n n n=1 0 Z ∞ ∞ X = n−s ts−1 e−t dt ∞
n=1
0
= ζ(s)Γ(s).
33