Principles of Communications, 6th Edition, Solution Manual

by Ziemer and Tranter


Chapter 2

Signal and Linear System Theory

2.1 Problem Solutions

Problem 2.1
a. For the single-sided spectra, write the signal as

x(t) = 10 cos(4πt + π/8) + 6 sin(8πt + 3π/4)
     = 10 cos(4πt + π/8) + 6 cos(8πt + 3π/4 − π/2)
     = 10 cos(4πt + π/8) + 6 cos(8πt + π/4)
     = Re[10 e^{j(4πt + π/8)} + 6 e^{j(8πt + π/4)}]

For the double-sided spectra, write the signal in terms of complex exponentials using Euler's theorem:

x(t) = 5 exp[j(4πt + π/8)] + 5 exp[−j(4πt + π/8)] + 3 exp[j(8πt + π/4)] + 3 exp[−j(8πt + π/4)]

The spectra are plotted in Fig. 2.1.

b. Write the given signal as

xb(t) = Re[8 e^{j(2πt + π/3)} + 4 e^{j(6πt + π/4)}]

to plot the single-sided spectra. For the double-sided spectra, write it as

xb(t) = 4 e^{j(2πt + π/3)} + 4 e^{−j(2πt + π/3)} + 2 e^{j(6πt + π/4)} + 2 e^{−j(6πt + π/4)}

The spectra are plotted in Fig. 2.2.



Figure 2.1:

c. Change the sines to cosines by subtracting π/2 from their arguments to get

xc(t) = 2 cos(4πt + π/8 − π/2) + 12 cos(10πt − π/2)
      = 2 cos(4πt − 3π/8) + 12 cos(10πt − π/2)
      = Re[2 e^{j(4πt − 3π/8)} + 12 e^{j(10πt − π/2)}]
      = e^{j(4πt − 3π/8)} + e^{−j(4πt − 3π/8)} + 6 e^{j(10πt − π/2)} + 6 e^{−j(10πt − π/2)}

Spectral plots are given in Fig. 2.3.
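A short MATLAB sketch that reproduces the line spectra of part (a) from the amplitudes and phases read off the expressions above (the vectors below are simply those values):

% Sketch: single- and double-sided line spectra for Problem 2.1(a)
f_ss  = [2 4];                    % single-sided frequencies, Hz
A_ss  = [10 6];                   % single-sided amplitudes
ph_ss = [pi/8 pi/4];              % single-sided phases, rad
f_ds  = [-4 -2 2 4];              % double-sided frequencies, Hz
A_ds  = [3 5 5 3];                % double-sided amplitudes
ph_ds = [-pi/4 -pi/8 pi/8 pi/4];  % double-sided phases, rad
subplot(2,2,1), stem(f_ss, A_ss),  title('Single-sided amplitude'), xlabel('f, Hz')
subplot(2,2,2), stem(f_ss, ph_ss), title('Single-sided phase'),     xlabel('f, Hz')
subplot(2,2,3), stem(f_ds, A_ds),  title('Double-sided amplitude'), xlabel('f, Hz')
subplot(2,2,4), stem(f_ds, ph_ds), title('Double-sided phase'),     xlabel('f, Hz')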



Figure 2.2:



Figure 2.3:



Problem 2.2
By noting the amplitudes and phases of the various frequency components from the plots, the result is

x(t) = 4 e^{j(8πt + π/2)} + 4 e^{−j(8πt + π/2)} + 2 e^{j(4πt − π/4)} + 2 e^{−j(4πt − π/4)}
     = 8 cos(8πt + π/2) + 4 cos(4πt − π/4)
     = −8 sin(8πt) + 4 cos(4πt − π/4)

Problem 2.3
a. Not periodic, because f1 = 1/π Hz and f2 = 3 Hz are not commensurable.

b. Periodic. To find the period, note that

6π/(2π) = 3 = n1 f0  and  30π/(2π) = 15 = n2 f0

Therefore n2/n1 = 15/3. Hence, take n1 = 1, n2 = 5, and f0 = 3 Hz (we want the largest possible value for f0 with n1 and n2 integer-valued).

c. Periodic. Using a procedure similar to that of part (b), we find that n1 = 4, n2 = 21, and f0 = 0.5 Hz.

d. Periodic. Using a procedure similar to that of part (b), we find that n1 = 4, n2 = 6, n3 = 17, and f0 = 0.5 Hz.
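A quick MATLAB sketch of the commensurability check in part (b), using a rational approximation of the frequency ratio:

% Sketch: fundamental frequency for Problem 2.3(b)
f = [3 15];                   % component frequencies, Hz
[n1, n2] = rat(f(1)/f(2));    % f1/f2 = n1/n2 in lowest terms
f0 = f(1)/n1;                 % largest common fundamental; here f0 = 3 Hz
fprintf('n1 = %d, n2 = %d, f0 = %g Hz\n', n1, n2, f0)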

Problem 2.4 a. The single-sided amplitude spectrum consists of a single line of height 5 at frequency 6 Hz, and the phase spectrum consists of a single line of height −π/6 radians at frequency 6 Hz. The double-sided amplitude spectrum consists of lines of height 2.5 at frequencies of 6 and −6 Hz, and the double-sided phase spectrum consists of a line of height −π/6 radians at frequency 6 Hz and a line of height π/6 at frequency −6 Hz.


b. Write the signal as

xb(t) = 3 cos(12πt − π/2) + 4 cos(16πt)

From this it is seen that the single-sided amplitude spectrum consists of lines of height 3 and 4 at frequencies 6 and 8 Hz, respectively, and the single-sided phase spectrum consists of a line of height −π/2 radians at frequency 6 Hz (the phase at 8 Hz is 0). The double-sided amplitude spectrum consists of lines of height 1.5 and 2 at frequencies of 6 and 8 Hz, respectively, and lines of height 1.5 and 2 at frequencies −6 and −8 Hz, respectively. The double-sided phase spectrum consists of a line of height −π/2 radians at frequency 6 Hz and a line of height π/2 radians at frequency −6 Hz.

c. Use the trig identity cos x cos y = 0.5 cos(x + y) + 0.5 cos(x − y) to write

xc(t) = 2 cos 20πt + 2 cos 4πt

From this we see that the single-sided amplitude spectrum consists of lines of height 2 at 2 and 10 Hz, and the single-sided phase spectrum is 0 at these frequencies. The double-sided amplitude spectrum consists of lines of height 1 at frequencies of −10, −2, 2, and 10 Hz. The double-sided phase spectrum is 0.

Problem 2.5
a. This function has area

Area = ∫_{−∞}^{∞} (1/ε) [sin(πt/ε)/(πt/ε)]² dt = ∫_{−∞}^{∞} [sin(πu)/(πu)]² du = 1

where the substitution u = t/ε and a tabulated integral for sinc²(u) have been used. A sketch shows that no matter how small ε is, the area is still 1. As ε → 0, the central lobe of the function becomes narrower and higher. Thus, in the limit, it approximates a delta function.

b. The area for the function is

Area = ∫_{−∞}^{∞} (1/ε) exp(−t/ε) u(t) dt = ∫_{0}^{∞} exp(−u) du = 1

A sketch shows that no matter how small ε is, the area is still 1. As ε → 0, the function becomes narrower and higher. Thus, in the limit, it approximates a delta function.

c. Area = ∫_{−ε}^{ε} (1/ε)(1 − |t|/ε) dt = ∫_{−1}^{1} Λ(u) du = 1. As ε → 0, the function becomes narrower and higher, so it approximates a delta function in the limit.

Problem 2.6
a. Make use of the formula δ(at) = (1/|a|) δ(t) to write δ(2t − 5) = δ[2(t − 5/2)] = (1/2) δ(t − 5/2), and use the sifting property of the δ-function to get

Ia = (1/2) [(5/2)² + sin(2π · 5/2)] = 25/8 = 3.125

b. Impulses at −10, −5, 0, 5, 10 are included in the integral. Use the sifting property after writing the expression as the sum of five integrals to get

Ib = [(−10)² + 1] + [(−5)² + 1] + [0² + 1] + [5² + 1] + [10² + 1] = 255

c. Matching coefficients of like derivatives of δ-functions on either side of the equation gives A = 5 and B = 10.

d. The answer is 0 because none of the impulses is within the limits of integration of the integral. Use δ(at) = (1/|a|) δ(t) to write δ(3t + 6) = (1/3) δ(t + 2).

e. Use property 5 of the unit impulse function to get

Ie = (−1)² d²/dt² [cos 8πt + e^{−2t}]_{t=2} = d/dt [−8π sin 8πt − 2e^{−2t}]_{t=2}
   = [−(8π)² cos 8πt + 4e^{−2t}]_{t=2} = −(8π)² cos 16π + 4e^{−4} = −631.5814
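A short MATLAB sketch that spot-checks the sifted values obtained in Problem 2.6:

% Sketch: numerical spot-check of Problem 2.6
Ia = 0.5*((5/2)^2 + sin(2*pi*5/2))       % expect 3.125
Ib = sum(([-10 -5 0 5 10]).^2 + 1)       % expect 255
Ie = -(8*pi)^2*cos(16*pi) + 4*exp(-4)    % expect -631.5814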

Problem 2.7 (a), (c), and (e) are periodic. Their periods are 2 s (fundamental frequency of 0.5 Hz), 2 s, and 3 s, respectively. The waveform of part (c) is a periodic train of triangles, each 2 units wide, extending from -∞ to ∞ spaced by 2 s ((b) is similar except that it is zero for t < −1 thus making it aperiodic). Waveform (d) is aperiodic because the frequencies of its two components are incommensurable. The waveform of part (e) is a doubly-infinite train of square pulses, each of which is one unit high and one unit wide, centered at · · ·, −6, −3, 0, 3, 6, · · ·. Waveform (f) is identical to (e) for t ≥ −1/2 but 0 for t < −1/2 thereby making it aperiodic.



Problem 2.8
a. The result is

x(t) = cos(6πt − π/2) + 2 cos(10πt) = Re[e^{j(6πt − π/2)}] + Re[2 e^{j10πt}] = Re[e^{j(6πt − π/2)} + 2 e^{j10πt}]

b. The result is

x(t) = e^{−j10πt} + (1/2) e^{−j(6πt − π/2)} + (1/2) e^{j(6πt − π/2)} + e^{j10πt}

c. The single-sided amplitude spectrum consists of lines of height 1 and 2 at frequencies of 3 and 5 Hz, respectively. The single-sided phase spectrum consists of a line of height −π/2 at frequency 3 Hz. The double-sided amplitude spectrum consists of lines of height 1, 1/2, 1/2, and 1 at frequencies of −5, −3, 3, and 5 Hz, respectively. The double-sided phase spectrum consists of lines of height π/2 and −π/2 at frequencies of −3 and 3 Hz, respectively.

Problem 2.9
a. Power. Since it is a periodic signal, we obtain

P1 = (1/T0) ∫_0^{T0} 4 cos²(4πt + 2π/3) dt = (1/T0) ∫_0^{T0} 2 [1 + cos(8πt + 4π/3)] dt = 2 W

where T0 = 1/2 s is the period. The cosine in the above integral integrates to zero because the interval of integration is two periods.

b. Energy. The energy is

E2 = ∫_{−∞}^{∞} e^{−2αt} u²(t) dt = ∫_0^{∞} e^{−2αt} dt = 1/(2α) J

c. Energy. The energy is

E3 = ∫_{−∞}^{∞} e^{2αt} u²(−t) dt = ∫_{−∞}^{0} e^{2αt} dt = 1/(2α) J

d. Energy. The energy is

E4 = lim_{T→∞} ∫_{−T}^{T} dt/(α² + t²) = lim_{T→∞} (1/α²) ∫_{−T}^{T} dt/[1 + (t/α)²]
   = lim_{T→∞} (1/α) [tan⁻¹(t/α)]_{−T}^{T} = lim_{T→∞} (1/α) [tan⁻¹(T/α) − tan⁻¹(−T/α)]
   = (1/α) [π/2 − (−π/2)] = π/α J

e. Energy. Since it is the sum of x2(t) and x3(t), its energy is the sum of the energies of these two signals, or E5 = 1/α.

f. Energy. The energy is

E6 = lim_{T→∞} ∫_{−T}^{T} [e^{−αt} u(t) − e^{−α(t−1)} u(t − 1)]² dt
   = lim_{T→∞} ∫_{−T}^{T} [e^{−2αt} u²(t) − 2 e^{−αt} e^{−α(t−1)} u(t) u(t − 1) + e^{−2α(t−1)} u²(t − 1)] dt
   = lim_{T→∞} {∫_0^{T} e^{−2αt} dt − 2 e^{−α} ∫_1^{T} e^{−2α(t−1)} dt + ∫_1^{T} e^{−2α(t−1)} dt}
   = lim_{T→∞} {∫_0^{T} e^{−2αt} dt − 2 e^{−α} ∫_0^{T−1} e^{−2αt'} dt' + ∫_0^{T−1} e^{−2αt'} dt'}   (t' = t − 1)
   = 1/(2α) − 2 e^{−α}/(2α) + 1/(2α) = (1/α)(1 − e^{−α}) J
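A quick MATLAB check of the closed form found in part (f) (a sketch; α = 2 is an arbitrary representative value):

% Sketch: numerical check of E6 in Problem 2.9(f)
alpha = 2;
x6 = @(t) exp(-alpha*t).*(t >= 0) - exp(-alpha*(t-1)).*(t >= 1);
E6_num    = integral(@(t) x6(t).^2, -1, 50)   % upper limit large enough for the decay
E6_closed = (1 - exp(-alpha))/alpha           % the two values should agree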

Problem 2.10
a. Power. Since the signal is periodic with period π/ω, we have

P = (ω/π) ∫_0^{π/ω} A² |sin(ωt + θ)|² dt = (ω/π) ∫_0^{π/ω} (A²/2) {1 − cos[2(ωt + θ)]} dt = A²/2 W

b. Neither. The energy calculation gives

E = lim_{T→∞} ∫_{−T}^{T} (Aτ)² dt/[√(τ + jt) √(τ − jt)] = lim_{T→∞} ∫_{−T}^{T} (Aτ)² dt/√(τ² + t²) → ∞

The power calculation gives

P = lim_{T→∞} (1/2T) ∫_{−T}^{T} (Aτ)² dt/√(τ² + t²) = lim_{T→∞} [(Aτ)²/(2T)] ln[(1 + √(1 + T²/τ²))/(−1 + √(1 + T²/τ²))] = 0 W

c. Energy:

E = ∫_0^{∞} A² √t exp(−2t/τ) dt = (A²τ/8) √(2πτ) J (use a table of integrals)

d. Energy: This is a "top hat" pulse which is of height 2 for |t| ≤ τ/2, height 1 for τ/2 < |t| ≤ τ, and 0 everywhere else. Making use of the even symmetry about t = 0, the energy is

E = 2 [∫_0^{τ/2} 2² dt + ∫_{τ/2}^{τ} 1² dt] = 5τ J

Problem 2.11
a. This is a periodic train of "boxcars", each 1 unit high and 3 units in width, centered at multiples of T0 = 6. Using the period centered on t = 0, the power calculation is

P1 = (1/6) ∫_{−3}^{3} Π²(t/3) dt = (1/6) ∫_{−1.5}^{1.5} dt = 1/2 W

b. This is a periodic train of isosceles triangles, each 1 unit high and 4 units wide, centered at multiples of T0 = 5. Using the period centered on t = 0, the power calculation is

P2 = (1/5) ∫_{−2.5}^{2.5} Λ²(t/2) dt = (2/5) ∫_0^{2} (1 − t/2)² dt = −(2/5)(2/3)(1 − t/2)³ |_0^{2} = 4/15 W

c. This is a backward train of sawtooths (right triangles with the right angle on the left), each of height 1 and 2 units wide, spaced by 3 units. Using the waveform period between 0 and 3, the power calculation is

Pc = (1/3) ∫_0^{2} (1 − t/2)² dt = −(1/3)(2/3)(1 − t/2)³ |_0^{2} = 2/9 W



d. Use a trig identity to write this as 2 sin 5πt cos 5πt = sin 10πt, which has a period of 1/5 second. The power calculation is

Pd = 5 ∫_0^{1/5} sin²(10πt) dt = 5 ∫_0^{1/5} [1/2 − (1/2) cos(20πt)] dt = 1/2 W
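A short MATLAB sketch that checks the powers found in Problem 2.11 by averaging x²(t) over one period with trapz:

% Sketch: numerical check of the powers in Problem 2.11
t1 = linspace(-3, 3, 10001);            % part (a): period 6, pulse width 3
x1 = double(abs(t1) <= 1.5);
P1 = trapz(t1, x1.^2)/6                 % expect 1/2
t2 = linspace(-2.5, 2.5, 10001);        % part (b): period 5, triangle half-width 2
x2 = max(1 - abs(t2)/2, 0);
P2 = trapz(t2, x2.^2)/5                 % expect 4/15 = 0.2667
t4 = linspace(0, 0.2, 10001);           % part (d): sin(10*pi*t), period 1/5
P4 = trapz(t4, sin(10*pi*t4).^2)/0.2    % expect 1/2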

Problem 2.12
a. The energy is

E1 = ∫_0^{∞} |6 e^{(−3 + j4π)t}|² dt = 36 ∫_0^{∞} e^{(−3 + j4π)t} e^{(−3 − j4π)t} dt
   = 36 ∫_0^{∞} e^{−6t} dt = −36 e^{−6t}/6 |_0^{∞} = 6 J

The power is 0 W.

b. This signal is a "top hat" pulse which is 2 for 2 ≤ t ≤ 4, 1 for 0 ≤ t < 2 and 4 < t ≤ 6, and 0 everywhere else. It is clearly an energy signal with energy

E2 = 2 × 1² + 2 × 2² + 2 × 1² = 12 J

Its power is 0 W.

c. This is a power signal with power

P3 = lim_{T→∞} (1/2T) ∫_{−T}^{T} 49 e^{j6πt} e^{−j6πt} u(t) dt = lim_{T→∞} (49/2T) ∫_0^{T} dt = 49/2 = 24.5 W

Its energy is infinite.

d. This is a periodic signal with power P = 2²/2 = 2 W. Its energy is infinite.

Problem 2.13
a. This is a cosine burst from t = −6 to t = 6 seconds. The energy is

E1 = ∫_{−6}^{6} cos²(6πt) dt = 2 ∫_0^{6} [1/2 + (1/2) cos(12πt)] dt = 6 J


b. The energy is

E2 = ∫_{−∞}^{∞} [e^{−|t|/3}]² dt = 2 ∫_0^{∞} e^{−2t/3} dt (by even symmetry)
   = −2 e^{−2t/3}/(2/3) |_0^{∞} = 3 J

Since the result is finite, this is an energy signal.

c. The energy is

E3 = ∫_{−∞}^{∞} {2 [u(t) − u(t − 8)]}² dt = ∫_0^{8} 4 dt = 32 J

Since the result is finite, this is an energy signal.

d. Note that

r(t) ≜ ∫_{−∞}^{t} u(λ) dλ = { 0, t < 0;  t, t ≥ 0 }

which is called the unit ramp. Thus the given signal is a triangle between 0 and 20. The energy is

E4 = ∫_{−∞}^{∞} [r(t) − 2r(t − 10) + r(t − 20)]² dt = 2 ∫_0^{10} (t/10)² dt = (20/3)(t/10)³ |_0^{10} = 20/3 J

where the last integral follows because the integrand is a symmetrical triangle about t = 10. Since the result is finite, this is an energy signal.

Problem 2.14
a. Expand the integrand, integrate term by term, and simplify making use of the orthogonality property of the orthonormal functions.

b. Add and subtract the quantity suggested right above (2.34) and simplify using the orthogonality of the functions.

c. These are unit-high rectangular pulses of width T/4. They are centered at t = T/8, 3T/8, 5T/8, and 7T/8. Since they are spaced by T/4, they are adjacent to each other and fill the interval [0, T].



d. Using the expression for the generalized Fourier series coefficients, we find that X1 = 1/8, X2 = 3/8, X3 = 5/8, and X4 = 7/8. Also, cn = T/4. Thus, the ramp signal is approximated by

t/T ≈ (1/8) φ1(t) + (3/8) φ2(t) + (5/8) φ3(t) + (7/8) φ4(t), 0 ≤ t ≤ T

where the φn(t)s are given in part (c).

e. These are unit-high rectangular pulses of width T/2 and centered at t = T/4 and 3T/4. We find that X1 = 1/4 and X2 = 3/4.

f. To compute the ISE, we use

εN = ∫_T |x(t)|² dt − Σ_{n=1}^{N} cn |Xn|²

Note that ∫_T |x(t)|² dt = ∫_0^{T} (t/T)² dt = T/3. Hence, for (d),

ISEd = T/3 − (T/4)(1/64 + 9/64 + 25/64 + 49/64) = 5.208 × 10⁻³ T

For (e),

ISEe = T/3 − (T/2)(1/16 + 9/16) = 2.083 × 10⁻² T
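A short MATLAB sketch of the coefficient and ISE calculation for part (d) (T = 1 is an arbitrary representative value):

% Sketch: generalized Fourier coefficients and ISE for the ramp, Problem 2.14(d),(f)
T = 1; N = 4;
edges = (0:N)*T/N;
X = zeros(1, N); cn = T/N;
for n = 1:N
    X(n) = (1/cn)*integral(@(t) t/T, edges(n), edges(n+1));   % X_n = 1/8, 3/8, 5/8, 7/8
end
ISE = integral(@(t) (t/T).^2, 0, T) - cn*sum(X.^2);           % expect T/192 = 5.208e-3*T
disp(X), disp(ISE)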

Problem 2.15
a. Use the exponential representation of the sine to get the Fourier coefficients (note that the period is (1/2)(1/f0)):

x1(t) = [(e^{j2πf0t} − e^{−j2πf0t})/(2j)]² = −(1/4)(e^{−j4πf0t} − 2 + e^{j4πf0t})

from which we find that

X−1 = X1 = −1/4;  X0 = 1/2

All other coefficients are zero.

b. Use the exponential representations of the sine and cosine to get

x2(t) = (1/2) e^{j2πf0t} + (1/2) e^{−j2πf0t} + (1/2j) e^{j4πf0t} − (1/2j) e^{−j4πf0t}

Therefore, the Fourier coefficients for this case are

X−1 = X1 = 1/2  and  X2 = X*−2 = 1/(2j)

All other coefficients are zero.

c. Use a trig identity to write this signal as

x3(t) = (1/2) sin(8πf0t) = (1/4j) e^{j8πf0t} − (1/4j) e^{−j8πf0t}

The fundamental frequency is 4f0 Hz. From this it follows that the Fourier coefficients are

X1 = X*−1 = 1/(4j)

All other coefficients are zero.

d. Use trig identities and the exponential form of the cosine to write this signal as

x4(t) = (3/4) cos(2πf0t) + (1/4) cos(6πf0t)
      = (3/8) e^{j2πf0t} + (3/8) e^{−j2πf0t} + (1/8) e^{j6πf0t} + (1/8) e^{−j6πf0t}

The fundamental frequency is f0 Hz. It follows that the Fourier coefficients for this case are

X−1 = X1 = 3/8;  X−3 = X3 = 1/8

All other Fourier coefficients are zero.

Problem 2.16
The expansion interval is T0 = 4, so that f0 = 1/4 Hz. The Fourier coefficients are

Xn = (1/4) ∫_{−2}^{2} 2t² e^{−jn(π/2)t} dt = (2/4) ∫_{−2}^{2} t² [cos(nπt/2) − j sin(nπt/2)] dt = ∫_0^{2} t² cos(nπt/2) dt

which follows by the oddness of the second integrand and the evenness of the first integrand. Let u = nπt/2 to obtain the form

Xn = (2/(nπ))³ ∫_0^{nπ} u² cos u du = (−1)^n 16/(nπ)², n ≠ 0 (use a table of integrals)

The coefficients are purely real, as expected from the evenness of the function being represented. If n = 0, the integral for the coefficients is

X0 = (1/4) ∫_{−2}^{2} 2t² dt = 8/3



The Fourier series is therefore

x(t) = 8/3 + Σ_{n=−∞, n≠0}^{∞} (−1)^n [16/(nπ)²] e^{jn(π/2)t}
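A short MATLAB sketch that checks the coefficient formula of Problem 2.16 by direct numerical integration:

% Sketch: numerical check of Xn for the periodic extension of 2*t^2 on (-2, 2)
T0 = 4; f0 = 1/T0;
for n = [0 1 2 3]
    Xn_num = (1/T0)*integral(@(t) 2*t.^2.*exp(-1j*2*pi*n*f0*t), -2, 2);
    if n == 0
        Xn_closed = 8/3;
    else
        Xn_closed = 16*(-1)^n/(n*pi)^2;
    end
    fprintf('n = %d: numerical %.4f, closed form %.4f\n', n, real(Xn_num), Xn_closed);
end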

Problem 2.17
Parts (a) through (c) were discussed in the text. For (d), break the integral for Xn up into a part for t < 0 and a part for t > 0, then use the odd half-wave symmetry condition. The development follows:

Xn = (1/T0) [∫_{−T0/2}^{0} x(t) e^{−j2πnf0t} dt + ∫_0^{T0/2} x(t) e^{−j2πnf0t} dt]
   = (1/T0) [∫_0^{T0/2} x(t' − T0/2) e^{−j2πnf0(t' − T0/2)} dt' + ∫_0^{T0/2} x(t) e^{−j2πnf0t} dt],  t' = t + T0/2
   = (1/T0) [−e^{jnπ} ∫_0^{T0/2} x(t') e^{−j2πnf0t'} dt' + ∫_0^{T0/2} x(t) e^{−j2πnf0t} dt],  f0 = 1/T0 and x(t' − T0/2) = −x(t')
   = (1/T0) [1 − (−1)^n] ∫_0^{T0/2} x(t) e^{−j2πnf0t} dt
   = { 0, n even;  (2/T0) ∫_0^{T0/2} x(t) e^{−j2πnf0t} dt, n odd }
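A short MATLAB sketch illustrating this result with a square wave, x(t) = sgn[sin(2πt)] (period T0 = 1), which has odd half-wave symmetry, so the even-indexed coefficients should vanish:

% Sketch: half-wave symmetry check for Problem 2.17(d)
T0 = 1; f0 = 1/T0;
x = @(t) sign(sin(2*pi*t/T0));
for n = 1:4
    Xn = (1/T0)*integral(@(t) x(t).*exp(-1j*2*pi*n*f0*t), 0, T0, 'Waypoints', 0.5);
    fprintf('n = %d: |Xn| = %.4f\n', n, abs(Xn));   % n even -> ~0, n odd -> 2/(n*pi)
end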



Problem 2.18
This is a matter of integration. Only the solution for part (b) will be given here. The integral for the Fourier coefficients is

Xn = (A/T0) ∫_0^{T0/2} sin(ω0t) e^{−jnω0t} dt = [A/(2jT0)] ∫_0^{T0/2} (e^{jω0t} − e^{−jω0t}) e^{−jnω0t} dt
   = [A/(2jT0)] [∫_0^{T0/2} e^{j(1−n)ω0t} dt − ∫_0^{T0/2} e^{−j(1+n)ω0t} dt]
   = [A/(2jT0)] [e^{j(1−n)ω0t}/(j(1 − n)ω0) |_0^{T0/2} − e^{−j(1+n)ω0t}/(−j(1 + n)ω0) |_0^{T0/2}]
   = [A/(−4π)] [(e^{j(1−n)π} − 1)/(1 − n) + (e^{−j(1+n)π} − 1)/(1 + n)], n ≠ ±1 (ω0T0/2 = π)
   = [A/(4π)] [((−1)^n + 1)/(1 − n) + ((−1)^n + 1)/(1 + n)], n ≠ ±1
   = { 0, n odd and n ≠ ±1;  A/[π(1 − n²)], n even }

For n = 1, the integral is

X1 = [A/(2jT0)] ∫_0^{T0/2} (e^{jω0t} − e^{−jω0t}) e^{−jω0t} dt = [A/(2jT0)] ∫_0^{T0/2} (1 − e^{−j2ω0t}) dt = −jA/4 = X*−1

This is the same result as given in Table 2.1.

Problem 2.19
a. Use Parseval's theorem to get

P_{|nf0| ≤ 1/τ} = Σ_{n=−N}^{N} |Xn|² = Σ_{n=−N}^{N} (Aτ/T0)² sinc²(nf0τ)

where N is an appropriately chosen limit on the sum. We are given that only frequencies for which |nf0| ≤ 1/τ are to be included. This is the same as requiring that |n| ≤ 1/(τf0) = T0/τ = 2. Also, for a pulse train, Ptotal = A²τ/T0 and, in this case, Ptotal = A²/2. Thus

P_{|nf0| ≤ 1/τ}/Ptotal = (2/A²) Σ_{n=−2}^{2} (A/2)² sinc²(nf0τ)
                      = (1/2) Σ_{n=−2}^{2} sinc²(n/2)
                      = (1/2) [1 + 2 (sinc²(1/2) + sinc²(1))]
                      = (1/2) [1 + 2 (2/π)²] = 0.9053

b. In this case, |n| ≤ 5, Ptotal = A²/5, and

P_{|nf0| ≤ 1/τ}/Ptotal = (1/5) Σ_{n=−5}^{5} sinc²(n/5)
                      = (1/5) {1 + 2 [(0.9355)² + (0.7568)² + (0.5046)² + (0.2339)²]}
                      = 0.9029

c. In this case, |n| ≤ 10, Ptotal = A²/10, and

P_{|nf0| ≤ 1/τ}/Ptotal = (1/10) Σ_{n=−10}^{10} sinc²(n/10)
                      = (1/10) {1 + 2 [(0.9836)² + (0.9355)² + (0.8584)² + (0.7568)² + (0.6366)² + (0.5046)² + (0.3679)² + (0.2339)² + (0.1093)²]}
                      = 0.9028

d. In this case, |n| ≤ 20, Ptotal = A²/20, and

P_{|nf0| ≤ 1/τ}/Ptotal = (1/20) Σ_{n=−20}^{20} sinc²(n/20) = (1/20) [1 + 2 Σ_{n=1}^{20} sinc²(n/20)] = 0.9028
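A short MATLAB sketch of the fractional power sums above, written so that no toolbox sinc() is needed (the n = 0 term is handled separately):

% Sketch: fractional power sums of Problem 2.19
for N = [2 5 10 20]                 % N = T0/tau
    n = 1:N;
    s = (sin(pi*n/N)./(pi*n/N)).^2; % sinc^2(n/N) for n > 0
    frac = (1 + 2*sum(s))/N;        % expect 0.9053, 0.9029, 0.9028, 0.9028
    fprintf('T0/tau = %2d: fraction = %.4f\n', N, frac);
end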



Problem 2.20
a. The integral for Yn is

Yn = (1/T0) ∫_{T0} y(t) e^{−jnω0t} dt = (1/T0) ∫_0^{T0} x(t − t0) e^{−jnω0t} dt,  ω0 = 2πf0

Let t' = t − t0, which results in

Yn = [(1/T0) ∫_{−t0}^{T0 − t0} x(t') e^{−jnω0t'} dt'] e^{−jnω0t0} = Xn e^{−j2πnf0t0}

b. Note that

y(t) = A cos ω0t = A sin(ω0t + π/2) = A sin[ω0(t + π/2ω0)]

Thus, t0 in the theorem proved in part (a) here is −π/(2ω0). By Euler's theorem, a sine wave can be expressed as

sin(ω0t) = (1/2j) e^{jω0t} − (1/2j) e^{−jω0t}

Its Fourier coefficients are therefore X1 = 1/(2j) and X−1 = −1/(2j). According to the theorem proved in part (a), we multiply these by the factor

e^{−jnω0t0} = e^{−jnω0(−π/2ω0)} = e^{jnπ/2}

For n = 1, we obtain

Y1 = (1/2j) e^{jπ/2} = 1/2

For n = −1, we obtain

Y−1 = −(1/2j) e^{−jπ/2} = 1/2

which gives the Fourier series representation of a cosine wave as

y(t) = (1/2) e^{jω0t} + (1/2) e^{−jω0t} = cos ω0t

We could have written down this Fourier representation directly by using Euler's theorem.

Problem 2.21
a. Use the Fourier series of a square wave (specialize the Fourier series of a pulse train) with A = 1 and t = 0 to obtain the series

1 = (4/π)(1 − 1/3 + 1/5 − 1/7 + ···)

Multiply both sides by π/4 to get the series in the problem statement. Hence, the sum is π/4.

b. Use the Fourier series of a triangular wave as given in Table 2.1 with A = 1 and t = 0 to obtain the series

1 = ··· + 4/(25π²) + 4/(9π²) + 4/π² + 4/π² + 4/(9π²) + 4/(25π²) + ···

Multiply both sides by π²/8 to get the series given in the problem. Hence, its sum is π²/8.

Problem 2.22
a. In the expression for the Fourier series of a pulse train (Table 2.1), let t0 = −T0/8 and τ = T0/4 to get

Xn = (A/4) sinc(n/4) exp(jπn/4)

The spectra are shown in Fig. 2.4.

b. The amplitude spectrum is the same as for part (a) except that X0 = 3A/4. Note that this can be viewed as having a sinc-function envelope with zeros at multiples of 4/(3T0). The phase spectrum can be obtained from that of part (a) by subtracting a phase shift of π for negative frequencies and adding π for positive frequencies (or vice versa). The Fourier coefficients are given by

Xn = (3A/4) sinc(3n/4) exp(−j3πn/4)

See Fig. 2.4 for amplitude and phase plots.

Problem 2.23
a. Use the rectangular pulse waveform of Table 2.1 specialized to

xa(t) = 2AΠ((t − T0/4)/(T0/2)) − A, |t| < T0/2

and periodically extended. Hence, from Table 2.1, we have

Xn^A = [2A(T0/2)/T0] sinc(n(T0/2)/T0) exp(−j2πn(T0/4)/T0)
     = A sinc(n/2) exp(−jπn/2), n ≠ 0
     = A [sin(nπ/2)/(nπ/2)] exp(−jπn/2), n ≠ 0



Figure 2.4: Amplitude (top) and phase in radians (bottom) line spectra versus f in Hz for Problem 2.22, part (a) at left and part (b) at right.



where the superscript A refers to xA(t). The dc component is 0, so X0^A = 0. The Fourier coefficients are therefore

X0^A = 0
X1^A = −j2A/π;  X−1^A = j2A/π
X2^A = 0 = X−2^A
X3^A = −j2A/(3π);  X−3^A = j2A/(3π)
···

b. Note that

xA(t) = dxB(t)/dt

where A = 2B/(T0/2) = 4B/T0, obtained from matching the amplitude of xA(t) with the slope of xB(t), or B = T0A/4. The relationship between spectral components is therefore

Xn^A = (jnω0) Xn^B = j2πnf0 Xn^B  or  Xn^B = Xn^A/(j2πnf0) = T0 Xn^A/(j2πn)

where the superscript A refers to xA(t) and B refers to xB(t). For example,

X1^B = −(j2A/π)/(j2πf0) = −2B/π² = X−1^B

Problem 2.24
a. This is a decaying exponential starting at t = 0 and its Fourier transform is

X1(f) = A ∫_0^{∞} e^{−t/τ} e^{−j2πft} dt = A ∫_0^{∞} e^{−(1/τ + j2πf)t} dt
      = −A e^{−(1/τ + j2πf)t}/(1/τ + j2πf) |_0^{∞} = A/(1/τ + j2πf) = Aτ/(1 + j2πfτ)


b. Since x2(t) = x1(−t) we have, by the time reversal theorem, that

X2(f) = X1*(f) = X1(−f) = Aτ/(1 − j2πfτ)

c. Since x3(t) = x1(t) − x2(t) we have, after some simplification, that

X3(f) = X1(f) − X2(f) = Aτ/(1 + j2πfτ) − Aτ/(1 − j2πfτ) = −j4πAfτ²/[1 + (2πfτ)²]

d. Since x4(t) = x1(t) + x2(t) we have, after some simplification, that

X4(f) = X1(f) + X2(f) = Aτ/(1 + j2πfτ) + Aτ/(1 − j2πfτ) = 2Aτ/[1 + (2πfτ)²]

This is the expected result since x4(t) is really a double-sided decaying exponential.

Problem 2.25
a. Using a table of Fourier transforms and the time reversal theorem, the Fourier transform of the given signal is

X(f) = 1/(α + j2πf) − 1/(α − j2πf)

Note that x(t) → sgn(t) in the limit as α → 0. Taking the limit of the above Fourier transform as α → 0, we deduce that

F[sgn(t)] = 1/(j2πf) − 1/(−j2πf) = 1/(jπf)

b. Using the given relationship between the unit step and the signum function and the linearity property of the Fourier transform, we obtain

F[u(t)] = (1/2) F[sgn(t)] + (1/2) F[1] = 1/(j2πf) + (1/2) δ(f)



c. The same result as obtained in part (b) is obtained.

Problem 2.26
a. One differentiation gives

dxa(t)/dt = Π(t − 0.5) − Π(t − 2.5)

Two differentiations give

d²xa(t)/dt² = δ(t) − δ(t − 1) − δ(t − 2) + δ(t − 3)

Application of the differentiation theorem of Fourier transforms gives

(j2πf)² Xa(f) = 1 − e^{−j2πf} − e^{−j4πf} + e^{−j6πf} = (e^{j3πf} − e^{jπf} − e^{−jπf} + e^{−j3πf}) e^{−j3πf} = 2 (cos 3πf − cos πf) e^{−j3πf}

where the time delay theorem and the Fourier transform of a unit impulse have been used. Dividing both sides by (j2πf)², we obtain

Xa(f) = 2 (cos 3πf − cos πf) e^{−j3πf}/(j2πf)² = [(cos πf − cos 3πf)/(2π²f²)] e^{−j3πf}

Use the trig identity sin²x = 1/2 − (1/2) cos 2x, or cos 2x = 1 − 2 sin²x, to rewrite this result as

Xa(f) = {[1 − 2 sin²(0.5πf) − 1 + 2 sin²(1.5πf)]/(2π²f²)} e^{−j3πf}
      = {[sin²(1.5πf) − sin²(0.5πf)]/(π²f²)} e^{−j3πf}
      = [1.5² sinc²(1.5f) − 0.5² sinc²(0.5f)] e^{−j3πf}

This is the same result as would have been obtained by writing

xa(t) = 1.5 Λ((t − 1.5)/1.5) − 0.5 Λ((t − 1.5)/0.5)

and using the Fourier transform of the triangular pulse along with the superposition and time delay theorems.


b. Two differentiations give (sketch dxb(t)/dt to see this)

d²xb(t)/dt² = δ(t) − 2δ(t − 1) + 2δ(t − 3) − δ(t − 4)

Application of the differentiation theorem gives

(j2πf)² Xb(f) = 1 − 2e^{−j2πf} + 2e^{−j6πf} − e^{−j8πf}

Dividing both sides by (j2πf)², we obtain

Xb(f) = [1 − 2e^{−j2πf} + 2e^{−j6πf} − e^{−j8πf}]/(−4π²f²)

Further manipulation may be applied to this result to convert it to

Xb(f) = sinc²(f) [e^{−j2πf} − e^{−j6πf}] = sinc²(f) [e^{j2πf} − e^{−j2πf}] e^{−j4πf} = 2j sin(2πf) sinc²(f) e^{−j4πf}

which would have resulted from Fourier transforming the waveform written as

xb(t) = Λ(t − 1) − Λ(t − 3)

c. Two differentiations give (sketch dxc(t)/dt to see this)

d²xc(t)/dt² = δ(t) − 2δ(t − 1) + 2δ(t − 2) − 2δ(t − 3) + δ(t − 4)

Application of the differentiation theorem gives

(j2πf)² Xc(f) = 1 − 2e^{−j2πf} + 2e^{−j4πf} − 2e^{−j6πf} + e^{−j8πf}

Dividing both sides by (j2πf)², we obtain

Xc(f) = [1 − 2e^{−j2πf} + 2e^{−j4πf} − 2e^{−j6πf} + e^{−j8πf}]/(j2πf)²

This result may be further arranged to give

Xc(f) = sinc²(f) [e^{−j2πf} + e^{−j6πf}] = 2 cos(2πf) sinc²(f) e^{−j4πf}

which would have resulted from Fourier transforming the waveform written as

xc(t) = Λ(t − 1) + Λ(t − 3)



d. Two differentiations give (sketch dxd(t)/dt to see this)

d²xd(t)/dt² = δ(t) − 2δ(t − 1) + δ(t − 1.5) + δ(t − 2.5) − 2δ(t − 3) + δ(t − 4)

Application of the differentiation theorem gives

(j2πf)² Xd(f) = 1 − 2e^{−j2πf} + e^{−j3πf} + e^{−j5πf} − 2e^{−j6πf} + e^{−j8πf}

Dividing both sides by (j2πf)², we obtain

Xd(f) = [1 − 2e^{−j2πf} + e^{−j3πf} + e^{−j5πf} − 2e^{−j6πf} + e^{−j8πf}]/(j2πf)²

This result may be further arranged to give

Xd(f) = sinc²(f) [e^{−j2πf} + 0.5e^{−j4πf} + e^{−j6πf}]

which would have resulted from Fourier transforming the waveform written as

xd(t) = Λ(t − 1) + 0.5Λ(t − 2) + Λ(t − 3)

Problem 2.27
See the solutions to Problem 2.26.

Problem 2.28
a. The steps in finding the Fourier transform for (i) are as follows:

Π(t) ←→ sinc(f)
Π(t) exp[j4πt] ←→ sinc(f − 2)
Π(t − 1) exp[j4π(t − 1)] ←→ sinc(f − 2) exp(−j2πf)

The steps in finding the Fourier transform for (ii) are as follows:

Π(t) ←→ sinc(f)
Π(t) exp[j4πt] ←→ sinc(f − 2)
Π(t + 1) exp[j4π(t + 1)] ←→ sinc(f − 2) exp(j2πf)


b. The steps in finding the Fourier transform for (i) are as follows:

Π(t) ←→ sinc(f)
Π(t − 1) ←→ sinc(f) exp(−j2πf)
Π(t − 1) exp[j4π(t − 1)] = Π(t − 1) exp(j4πt) ←→ sinc(f − 2) exp[−j2π(f − 2)] = sinc(f − 2) exp(−j2πf)

which follows because exp(±jn2π) = 1, where n is an integer. The steps in finding the Fourier transform for (ii) are as follows:

Π(t) ←→ sinc(f)
Π(t + 1) ←→ sinc(f) exp(j2πf)
Π(t + 1) exp[j4π(t + 1)] = Π(t + 1) exp(j4πt) ←→ sinc(f − 2) exp[j2π(f − 2)] = sinc(f − 2) exp(j2πf)

Problem 2.29
a. The time reversal theorem states that x(−t) ←→ X(−f) ≠ X*(f) if x(t) is complex, so

xa(t) ←→ (1/2) X1(f) + (1/2) X1(−f) = (1/2) sinc(f − 2) exp(−j2πf) + (1/2) sinc(−f − 2) exp(j2πf)
       = (1/2) sinc(f − 2) exp(−j2πf) + (1/2) sinc(f + 2) exp(j2πf)

Note that

xa(t) = (1/2) Π(t − 1) exp[j4π(t − 1)] + (1/2) Π(−t − 1) exp[j4π(−t − 1)]
      = (1/2) Π(t − 1) exp(j4πt) + (1/2) Π(t + 1) exp(−j4πt) (by the evenness of Π(u))

b. Similarly to (a), we obtain

xb(t) ←→ (1/2) X2(f) + (1/2) X2(−f) = (1/2) sinc(f − 2) exp(j2πf) + (1/2) sinc(f + 2) exp(−j2πf)



Problem 2.30 a. The result is Xa (f ) = 2sinc (2f ) exp (−j2πf ) b. The result is

µ ¶ f 1 Xb (f ) = 2 × Π exp (−j2πf ) 2 2

c. The result is Xc (f ) = 8sinc2 (8f ) exp (−j4πf ) d. The result is Xd (f ) = 4Λ (4f ) exp (−j6πf )

Problem 2.31 a. This is an odd signal, so its Fourier transform is odd and purely imaginary. b. This is an even signal, so its Fourier transform is even and purely real. c. This is an odd signal, so its Fourier transform is odd and purely imaginary. d. This signal is neither even nor odd, so its Fourier transform is complex. e. This is an even signal, so its Fourier transform is even and purely real. f. This signal is even, so its Fourier transform is real and even.

Problem 2.32 a. Using superposition, time delay, and the Fourier transform of an impulse, we obtain X1 (f ) = ej8πt + 3 + e−j8πt = 3 + 2 cos (8πt) The Fourier transform is even and real because the signal is even. b. Using superposition, time delay, and the Fourierr transform of an impulse, we obtain X2 (f ) = 2ej16πt − 2e−j16πt = 4j sin (16πf)



The Fourier transform is odd and imaginary because the signal is odd. c. The Fourier transform is 4 X ¡ 2 ¢ n + 1 e−j4πnf = 1 + 2e−j4πf + 5e−j8πf + 10e−j12πf + 17e−j16πf X3 (f ) = n=0

It is complex because the signal is neither even nor odd. d. The Fourier transform is X4 (f ) =

2 X ¡

n=−2

¢ n2 + 1 e−j4πnf = 5ej8πf + 2ej4πf + 1 + 2e−j4πf + 5e−j8πf

= 1 + 4 cos (4πf ) + 10 cos (8πf )

The Fourier transform is even and real because the signal is even. Problem 2.33 a. The Fourier transform of this signal is X1 (f ) =

2 (1/3) 2/3 2 = 1 + (2πf /3) 1 + [f / (3/2π)]2

Thus, the energy spectral density is G1 (f ) =

½

2/3 1 + [f / (3/2π)]2

¾2

b. The Fourier transform of this signal is µ ¶ f 2 X2 (f ) = Π 3 30 Thus, the energy spectral density is 4 X2 (f ) = Π2 9

µ

f 30

µ ¶ f 4 = Π 9 30

c. The Fourier transform of this signal is 4 X3 (f ) = sinc 5

µ ¶ f 5



2 so the energy spectral density is G3 (f ) = 16 25 sinc

³ ´ f 5

d. The Fourier transform of this signal is ∙ µ ¶ µ ¶¸ f − 20 f + 20 2 X4 (f ) = sinc + sinc 5 5 5 so the energy spectral density is ∙ µ ¶ µ ¶¸ f − 20 f + 20 2 4 G4 (f ) = sinc + sinc 25 5 5 Problem 2.34 a. Use the transform pair x1 (t) = e−αt u (t) ←→

1 α + j2πf

Using Rayleigh’s energy theorem, we obtain the integral relationship Z ∞ Z ∞ Z ∞ Z ∞ df 1 2 2 |X1 (f )| df = |x1 (t)| dt = e−2αt dt = 2 df = 2 2α −∞ −∞ α + (2πf ) −∞ 0 b. Use the transform pair µ ¶ t 1 ←→ sinc (τ f ) = X2 (f ) x2 (t) = Π τ τ Rayleigh’s energy theorem gives Z ∞ |X2 (f )|2 df

=

−∞

Z ∞

−∞

=

Z ∞

2

sinc (τ f ) df =

1 2 Π 2 −∞ τ

Z ∞

−∞

|x2 (t)|2 dt

µ ¶ Z τ /2 t dt 1 dt = = 2 τ τ −τ /2 τ

c. Use the transform pair e−α|t| ←→

1 2α e−α|t| ←→ or 2 2 2 2α α + (2πf ) α + (2πf )2



The desired integral, by Rayleigh’s energy theorem, is I3 = = d. Use the transform pair

Z ∞∙

¸2 Z ∞ −2α|t| e 1 df = dt 2 2 2 −∞ α + (2πf ) −∞ 4α ¯∞ Z ∞ 1 e−2αt ¯¯ 1 2 −2αt e dt = = 4α2 0 2α2 −2α ¯0 4α3 1 Λ τ

µ ¶ t ←→ sinc2 (τ f ) τ

The desired integral, by Rayleigh’s energy theorem, is Z ∞ Z ∞ 2 I4 = |X4 (f )| df = sinc4 (τ f ) df −∞ −∞ Z ∞ Z τ 1 2 2 = Λ (t/τ ) dt = 2 [1 − (t/τ )]2 dt τ 2 −∞ τ 0 ∙ ¸ Z ¢ 1 1¡ 2 2 2 1 2 2 − 1−u (1 − u) du = = = τ 0 τ 3 3τ 0 Problem 2.35 a. The convolution operation gives ⎧ t ≤ τ − 1/2 ⎨ 0, £ ¤ 1 −α(t−τ +1/2) , τ − 1/2 < t ≤ τ + 1/2 y1 (t) = 1 − e ¤ ⎩ 1α£ −α(t−τ −1/2) − e−α(t−τ +1/2) , t > τ + 1/2 α e

b. The convolution of these two signals gives

y2 (t) = Λ (t) + trap (t) where tr(t) is a trapezoidal function given by ⎧ 0, t < −3/2 or t > 3/2 ⎪ ⎪ ⎨ 1, −1/2 ≤ t ≤ 1/2 trap (t) = 3/2 + t, −3/2 ≤ t < −1/2 ⎪ ⎪ ⎩ 3/2 − t, 1/2 ≤ t < 3/2



c. The convolution results in Z ∞ Z t+1/2 y3 (t) = e−α|λ| Π (λ − t) dλ = e−α|λ| dλ t−1/2

−∞

Sketches of the integrand for various values of t give the following cases: ⎧ R t+1/2 αλ ⎪ e dλ, t ≤ −1/2 ⎪ ⎨ t−1/2 R R0 t+1/2 αλ y3 (t) = e−αλ dλ, −1/2 < t ≤ 1/2 t−1/2 e dλ + 0 ⎪ R ⎪ t+1/2 ⎩ e−αλ dλ, t > 1/2 t−1/2

Integration of these three cases gives ¤ ⎧ 1 £ α(t+1/2) − eα(t−1/2) , ¤ t ≤ −1/2 ⎨ α £e 1 −α(t−1/2) − e−α(t+1/2) , −1/2 < t ≤ 1/2 y3 (t) = e ¤ ⎩ 1α£ −α(t−1/2) − e−α(t+1/2) , t > 1/2 α e d. The convolution gives

y4 (t) =

Z t

x (λ) dλ

−∞

Problem 2.36 a. Using the convolution and time delay theorems, we obtain Y1 (f ) = F [exp (−αt) u (t) ∗ Π (t − τ )] 1 sinc (f ) exp (−j2πf τ ) = α + j2πf b. The superposition and convolution theorems give Y2 (f ) = F {[Π (t/2) + Π (t)] ∗ Π (t)}

= [2sinc (2f ) + sinc (f )] sinc (f )

c. By the convolution theorem h i Y3 (f ) = F e−α|t| ∗ Π (t) =

sinc (f ) α2 + (2πf )2



d. By the convolution theorem (note, also, that the integration theorem can be applied directly)

Y4(f) = F[x(t) ∗ u(t)] = X(f) [1/(j2πf) + (1/2) δ(f)] = X(f)/(j2πf) + (1/2) X(0) δ(f)

Problem 2.37 1 . The Fourier transform of the given a. From before, the total energy is E1, total = 2α signal is 1 X1 (f ) = α + j2πf

so that the energy spectral density is G1 (f ) = |X1 (f )|2 =

1 α2 + (2πf )2

By Rayleigh’s energy theorem, the normalized inband energy is ¶ µ Z W E1 (|f | ≤ W ) df 2 −1 2πW = 2α 2 = π tan 2 E1, total α −W α + (2πf ) b. The total energy is E2, total = τ . The Fourier transform of the given signal and its energy spectral density are, respectively, X2 (f ) = τ sinc (f τ ) and G2 (f ) = |X2 (f )|2 = τ 2 sinc2 (f τ ) By Rayleigh’s energy theorem, the normalized inband energy is Z Z τW E2 (|f | ≤ W ) 1 W 2 = τ sinc2 (f τ ) df = 2 sinc2 (u) du E2, total τ −W 0 The integration must be carried out numerically. c. The total energy is Z ∞h Z ∞h i2 i2 −αt −βt E3, total = e e−2αt − 2e−(α+β)t + e−2βt dt −e dt = 0

=

0

2 1 β (α + β) − 4αβ + α (α + β) (β − α)2 1 − + = = 2α α + β 2β 2αβ (α + β) 2αβ (α + β)



The Fourier transform of the given signal and its energy spectral density are, respectively,

X3 (f ) =

1 1 − α + j2πf β + j2πf

and

G3 (f ) = = =

=

=

¯2 ¯ ¯ ¯ 1 1 ¯ ¯ − |X3 (f )| = ¯ α + j2πf β + j2πf ¯ ¸ ∙ 1 1 1 1 + 2 − 2 Re 2 α + j2πf β − j2πf α2 + (2πf ) β + (2πf )2 ¸ ∙ 1 1 α − j2πf β + j2πf + 2 − 2 Re 2 2 2 2 α2 + (2πf ) α2 + (2πf ) β + (2πf ) β + (2πf )2 ⎤ ⎡ 2 1 1 + (2πf ) ⎦ ⎣ ³αβ − (α − β)´j2πf ³ ´ + 2 2 − 2 Re 2 2 2 2 α + (2πf ) β + (2πf )2 α2 + (2πf ) β + (2πf ) 2

1 αβ + (2πf )2 1 ³ ´³ ´+ 2 − 2 2 2 2 2 2 α + (2πf ) β + (2πf )2 α2 + (2πf ) β + (2πf )

The normalized inband energy is E3 (|f | ≤ W ) 1 = E3, total E3, total

Z W

G3 (f ) df

−W

The first and third terms may be integrated easily as inverse tangents. The second term may be integrated after partial fraction expansion: A B αβ + (2πf )2 ´³ ´= 2³ 2 + 2 2 2 2 2 α + (2πf ) β + (2πf )2 α2 + (2πf) β + (2πf ) where A=2

α2 − αβ αβ − β 2 and B = 2 α2 − β 2 α2 − β 2


Figure 2.5: Pulses x1(t), x2(t), and x3(t) (left column, versus αt, t/τ, and αt with β = 2α) and their fractions of total energy E/Etotal (right column, versus W/α, τW, and W/α).

Therefore E3 (|f | ≤ W ) E3, total

= = = = =

¸ 1 A B 1 df − − + E3, total −W α2 + (2πf )2 α2 + (2πf )2 β 2 + (2πf )2 β 2 + (2πf )2 " 1 ¡ ¢ ¡ ¢ # −1 2πW − A tan−1 2πW 1 α tan α α α ³ ´ ³ ´ −1 2πW + 1 tan−1 2πW tan πE3, total − B β β β β ¶ ¶¸ ∙ µ µ 1 2πW 2πW 1 1 (1 − A) tan−1 + (1 − B) tan−1 πE3, total α α β β ¶∙ ¶ ¶¸ µ µ µ 1 1 β−α 2πW 2πW 1 tan−1 − tan−1 πE3, total α + β α α β β ∙ ¶ ¶¸ µ µ 2 2πW 2πW β tan−1 − α tan−1 π (β − α) α β 1

Z W ∙

Plots of the signals and inband energy for all three cases are shown in Fig. 2.5
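The numerical integration called for in part (b) can be done with a short MATLAB sketch that evaluates the normalized inband energy 2∫_0^{τW} sinc²(u) du for several values of τW:

% Sketch: normalized inband energy of the rectangular pulse, Problem 2.37(b)
for tauW = [0.5 1 1.5 2 2.5 3]
    frac = 2*integral(@(u) (sin(pi*u)./(pi*u)).^2, 1e-12, tauW);
    fprintf('tau*W = %3.1f: E(|f| <= W)/E_total = %.4f\n', tauW, frac);
end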



Problem 2.38 a. By the modulation theorem X (f ) = =

½ ∙ ¸ ∙ ¸¾ AT0 T0 T0 sinc (f − f0 ) + sinc (f + f0 ) 4 2 2 ¶¸ ∙ µ ¶¸¾ ½ ∙ µ 1 f AT0 1 f − 1 + sinc +1 sinc 4 2 f0 2 f0

b. Use the superposition and modulation theorems to get ½ µ ∙ ∙ µ ¶¸ ∙ µ ¶¸¸¾ ¶ f 1 f 1 f 1 AT0 sinc sinc − 2 + sinc +2 + X (f ) = 4 2f0 2 2 f0 2 f0 c. In this case, p(t) = x(t) and P (f ) = X(f ) of part (a) and Ts = T0 . From part (a), we have ∙ µ ¶ µ ¶¸ n−1 n+1 AT0 sinc + sinc P (nf0 ) = 4 2 2 Using this in (2.149), we have the Fourier transform of the half-wave rectified cosine waveform as ∙ µ ¶ µ ¶¸ ∞ X n−1 n+1 A sinc + sinc δ (f − nf0 ) X(f ) = 4 2 2 n=−∞ Note that sinc(x) = 0 for integer values of its argument and it is 1 for its argument 0. Also, use sinc(1/2) = 2/π, sinc(3/2) = −2/3π, etc. to get X (f ) =

A A A δ (f ) + [δ (f − f0 ) + δ (f + f0 )] + [δ (f − 2f0 ) + δ (f + 2f0 )] π 4 3π A − [δ (f − 4f0 ) + δ (f + 4f0 )] + · · · 15π

Problem 2.39 Signals x1 (t) and x2 (t) are even. Therefore their Fourier transforms are real and even. Signals x3 (t) and x4 (t) are odd. Therefore their Fourier transforms are imaginary and odd. Using the Fourier transforms for a square pulse and a triangle along with superposition, time delay, and scaling, the Fourier transforms of these signals are the following. a. X1 (f ) = 2sinc2 (2f ) + 2sinc(2f ) b. X2 (f ) = 2sinc(2f ) −sinc2 (f )



c. X3(f) = sinc(f) e^{jπf} − sinc(f) e^{−jπf} = 2j sin(πf) sinc(f)

d. X4(f) = sinc(f) e^{−j2πf} − sinc(f) e^{j2πf} = −2j sin(2πf) sinc(f)

Problem 2.40 a. Write 6 cos (20πt) + 3 sin (20πt) = R cos (20πt − θ)

= R cos θ cos (20πt) + R sin θ sin (20πt)

Thus R cos θ = 6 cos (20πt) R sin θ = 3 sin (20πt) Square both equations, add, and take the square root to obtain p √ R = 62 + 32 = 45 = 6.7082

Divide the second equation by the first to obtain tan θ = 0.5

θ = 0.4636 rad So x (t) = 3 +

√ 45 cos (20πt − 0.4636)

Following Example 2.19, we obtain Rx (τ ) = 32 +

45 cos (20πτ ) 2

b. Taking the Fourier transform of Rx (τ ) we obtain Sx (f ) = 9δ (f ) +

45 [δ (f − 10) + δ (f + 10)] 4

Problem 2.41 Use the facts that the power spectral density integrates to give total power, it must be even, and contains no phase information.



a. The total power of this signal is 22 /2 = 2 watts which is distributed equally at the frequencies ±10 hertz. Therefore, by inspection we write S1 (f ) = δ (f − 10) + δ (f + 10) W/Hz b. The total power of this signal is 32 /2 = 4.5 watts which is distributed equally at the frequencies ±15 hertz. Therefore, by inspection we write S2 (f ) = 2.25δ (f − 15) + 2.25δ (f + 15) W/Hz c. The total power of this signal is 52 /2 = 12.5 watts which is distributed equally at the frequencies ±5 hertz. Therefore, by inspection we write S3 (f ) = 6.25δ (f − 5) + 6.25δ (f + 5) W/Hz d. The power of the first component of this signal is 32 /2 = 4.5 watts which is distributed equally at the frequencies ±15 hertz. The power of the second component of this signal is 52 /2 = 12.5 watts which is distributed equally at the frequencies ±5 hertz. Therefore, by inspection we write S4 (f ) = 2.25δ (f − 15) + 2.25δ (f + 15) + 6.25δ (f − 5) + 6.25δ (f + 5) W/Hz

Problem 2.42 Since the autocorrelation function and power spectral density of a signal are Fourier transform pairs, we may write down the answers by inspection using the Fourier transform pair A cos (2πf0 t) ←→ A2 δ (f − f0 ) + A2 δ (f + f0 ). The answers are the following. a. R1 (τ ) = 8 cos (30πτ ); Average power = R1 (0) = 8 W. b. R2 (τ ) = 18 cos (40πτ ); Average power = R2 (0) = 18 W. c. R3 (τ ) = 32 cos (10πτ ); Average power = R3 (0) = 32 W. d. R4 (τ ) = 18 cos (40πτ ) + 32 cos (10πτ ); Average power = R4 (0) = 50 W. Problem 2.43 The autocorrelation function must be (1) even, (2) have an absolute maximum at τ = 0, and (3) have a Fourier transform that is real and nonnegative.



a. Acceptable - all properties satisfied.
b. Acceptable - all properties satisfied.
c. Not acceptable - none of the properties satisfied.
d. Acceptable - all properties satisfied.
e. Not acceptable - property (3) not satisfied.
f. Not acceptable - none of the properties satisfied.

Problem 2.44 2 Given that the autocorrelation function of x (t) = A cos (2πf0 t + θ) is Rx (τ ) = A2 cos (2πf0 τ ) (special case of Ex. 2.19), the results are as follows. a. R1 (τ ) = 2 cos (10πτ ); b. R2 (τ ) = 2 cos (10πτ ); £ ¡ ¢¤ c. R3 (τ ) = 5 cos (10πτ ) (write the signal as x3 (t) = Re 5 exp j tan−1 (4/3) exp (j10πt) ); 2

2

cos (10πτ ) = 4 cos (10πτ ). d. R4 (τ ) = 2 +2 2

Problem 2.45 This is a matter of applying (2.151) by making the appropriate identifications with the parameters given in Example 2.20 Problem 2.46 Fourier transform both sides of the differential equation using the differentiation theorem of Fourier transforms to get [j2πf + a] Y (f ) = [j2πbf + c] X (f ) Therefore, the frequency response function is H (f ) =

c + j2πbf Y (f ) = X (f ) a + j2πf

The amplitude response function is q c2 + (2πbf )2 |H (f )| = q a2 + (2πf )2



a = 1; b = 2; c = 0

a = 1; b = 0; c = 3

2

3

mag(H)

mag(H)

1.5 1

2

1

0.5

0

0 -5

5

2

2

1

1

angle(H), rad

angle(H), rad

0 -5

0 -1 -2 -5

0 f, Hz

5

0

5

0 f, Hz

5

0 -1 -2 -5

Figure 2.6: and the phase response is −1

arg [H (f )] = tan

µ

2πbf c

− tan

−1

µ

2πf a

Amplitude and phase responses for various values of the constants are plotted in Figure 2.6. Problem 2.47 a. Use the transform pair Ae−αt u (t) ←→

A α + j2πf

to find the unit impulse response as h1 (t) = e−5t u (t) b. Long division gives H2 (f ) = 1 −

5 5 + j2πf



Use the transform pair δ (t) ←→ 1 along with superposition and the transform pair in part (a) to get h2 (t) = δ (t) − 5e−5t u (t) c. Use the time delay theorem along with the result of part (a) to get h3 (t) = e−5(t−3) u (t − 3) d. Use superposition and the results of parts (a) and (c) to get h4 (t) = e−5t u (t) − e−5(t−3) u (t − 3)

Problem 2.48 Use the transform pair for a sinc function to find that µ ¶ µ ¶ f f Y (f ) = Π Π 2B 2W a. If W < B, it follows that

µ

f Y (f ) = Π 2W because Π

³

f 2B

´

= 1 throughout the region where Π

b. If W > B, it follows that Y (f ) = Π

because Π

³

f 2W

´

µ

³

f 2W

f 2B

= 1 throughout the region where Π

³

´

is nonzero.

´

is nonzero.

f 2B

c. Part (b) gives distortion because the output spectrum differs from the input spectrum. In fact, the output is y (t) = 2Bsinc (2Bt) which clearly differs from the input for W > B.



Problem 2.49 a. Replace the capacitors with 1/jωC which is their ac-equivalent impedance. Call the junction of the input resistor, feedback resistor, and capacitors 1. Call the junction at the positive input of the operational amplifier 2. Call the junction at the negative input of the operational amplifier 3. Write down the KCL equations at these three junctions. Use the constraint equation for the operational amplifier, which is V2 = V3 , and the definitions for ω 0 , Q, and K to get the given transfer function as H (jω) = Vo (jω) /Vi (jω). The node equations are V1 − Vi V1 − Vo + jωCV1 + + jωC(V1 − V2 ) R R V2 jωC(V2 − V1 ) + R V3 V3 − Vo + Rb Ra (constraint on op amp input) V2

= 0 = 0 = 0 = V3

b. See plot given in Figure 2.7. c. In terms of f , the transfer function magnitude is (f /f0 ) K |H(f )| = √ s∙ ³ ´2 ³ ´2 ¸2 2 + Q12 ff0 1 − ff0 It can be shown that √ the maximim of |H(f )| is at f = f0 . By substitution, this maximum is |H(f0 )| = KQ/ √2. To find the 3-dB bandwidth, we must find the frequencies for which |H(f )| = |H(f0 )|/ 2. This results in Q (f3 /f0 ) √ = s∙ ´2 ¸2 ³ ´2 ³ 2 + Q12 ff30 1 − ff30 which reduces to the quadratic equation µ

f3 f0

¶4

µ ¶ µ ¶2 1 f3 − 2+ 2 +1=0 Q f0



Using the quadratic formula, the solutions to this equation are µ ¶ sµ ¶ 1 1 2+ 2 ± = 2+ 2 −4 Q Q r µ ¶ 1 1 1 1 1 2+ 2 ± = 1+ ≈1± 2 Q Q 4Q2 Q r µ ¶ f±3 1 1 , Q >> 1 1± ≈1± ≈ f0 Q 2Q

µ

f±3 f0

¶2

1 2

Therefore, the 3-dB bandwidth is, for large Q, approximately ¶ ¶ µ µ 1 1 f0 B = f3 − f−3 ≈ 1 + f0 − 1 − f0 = 2Q 2Q Q d. Combinations of components giving RC = 2.2508 × 10−4 seconds and

Ra = 2.5757 Rb

will work.

Problem 2.42 a. By voltage division, with the inductor replaced by j2πf L, the frequency response function is R2 /L + j2πf R2 + j2πf L = H1 (f ) = R1 + R2 + j2πfL (R1 + R2 ) /L + j2πf By long division R1 /L H1 (f ) = 1 − R1 +R2 + j2πf L Using the transforms of a delta function and a one-sided exponential, we obtain µ ¶ R1 + R2 R1 h1 (t) = δ (t) − exp − t u (t) L L



Figure 2.7: |H(f)| in dB (top) and angle(H(f)) in radians (bottom) versus f in Hz for the filter of Problem 2.49; f0 = 999.9959 Hz, B3 dB = 300.0242 Hz.



CHAPTER 2. SIGNAL AND LINEAR SYSTEM THEORY b. Substituting the ac-equivalent impedance for the inductor and using voltage division, the frequency response function is H2 (f ) = =

R2 || (j2πf L) j2πf LR2 where R2 || (j2πf L) = R1 + R2 || (j2πfL) R2 + j2πf L ¶ µ j2πfL (R1 k R2 ) /L R2 R2 1− = R2 R1 + R2 RR1+R R1 + R2 (R1 k R2 ) /L + j2πf + j2πf L 1

2

R2 where R1 k R2 = RR11+R . Therefore, the impulse response is 2 µ ¶ ¸ ∙ R1 R2 R1 R2 R2 exp − t u (t) δ (t) − h2 (t) = R1 + R2 (R1 + R2 ) L (R1 + R2 ) L 2 Both have a high pass amplitude response, with the dc gain of the first circuit being R1R+R 2 and the second being 0; the high frequency gain of the first is 1 and that of the second is R2 R1 +R2 .

Problem 2.51 Application of the Payley-Wiener criterion gives the integral Z ∞ βf 2 df I= 2 −∞ 1 + f which does not converge. Hence, the given function is not suitable as the frequency response function of a causal LTI system. Problem 2.52 a. The condition for stability is Z ∞ Z ∞ |h1 (t)| dt = |exp (−αt) cos (2πf0 t) u (t)| dt −∞ −∞ Z ∞ Z ∞ 1 = exp (−αt) |cos (2πf0 t)| dt < exp (−αt) dt = < ∞ α 0 0 which follows because |cos (2πf0 t)| ≤ 1. Hence this system is BIBO stable. b. The condition for stability is Z ∞ Z ∞ |h2 (t)| dt = |cos (2πf0 t) u (t)| dt −∞ −∞ Z ∞ = |cos (2πf0 t)| dt → ∞ 0



which follows by integrating one period of |cos (2πf0 t)| and noting that the total integral is the limit of one period of area times N as N → ∞. This system is not BIBO stable. c. The condition for stability is Z ∞ Z ∞ 1 u (t − 1) dt |h3 (t)| dt = −∞ −∞ t Z ∞ dt = = ln (t)|∞ 1 →∞ t 1 This system is not BIBO stable. Problem 2.53 The energy spectral density of the output is Gy (f ) = |H (f )|2 |X (f )|2 where |H (f )|2 = X (f ) = Hence

25 16 + (2πf)2 1 1 ; Gx (f ) = |X (f ) |2 = 3 + j2πf 9 + (2πf )2

25 ih i Gy (f ) = h 2 9 + (2πf ) 16 + (2πf )2

Plots of the input and output energy spectral densities are left to the student. Problem 2.54 Using the Fourier coefficients of a half-rectified sine wave from Table 2.1 and noting that those of a half-rectified cosine wave are related by Xcn = Xsn e−jnπ/2 The fundamental frequency is 10 Hz. The ideal rectangular filter passes all frequencies less than 13 Hz and rejects all frequencies greater than 13 Hz. Therefore y (t) =

3A 3A − cos (20πt) π 2



Problem 2.55 a. The energy spectral density is given by ¯ ¯2 ¯ ¯ 1 1 ¯ ¯ = G1 (f ) = |X1 (f ) | = ¯ ¯ α + j2πf α2 + (2πf )2 2

1 Since Etotal = 2α , the 90% energy containment bandwidth is given by

0.9 2α

= 2

Z B90 0

= or B90 =

1 2 df = 2 2 α α2 + (2πf)

Z B90 0

1+

1 ³

2πf α

´2 df, u =

2πf α

Z 2πB90 /α du 1 1 tan−1 (2πB90 /α) = 2 πα 0 1+u πα α tan (0.45π) = 1.0049α 2π

b. For this case, using X2 (f ) = Π (f/2W ) and Etotal = 2W, we obtain Z B90 0.9 (2W ) = 2 df = 2B90 0

or B90 = 0.9W

c. For this case, using X3 (f ) = τ sinc(f τ ) and E3 = τ , we obtain Z B90 0.9τ = 2 τ 2 sinc2 (fτ ) df 0 Z τ B90 or 0.45 = sinc2 (u) du 0

Numerical integration gives B90 = 0.9/τ
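The numerical integration in part (c) can also be done with a short MATLAB sketch, solving 2∫_0^{B} sinc²(u) du = 0.9 for the normalized bandwidth τB90:

% Sketch: 90% energy containment bandwidth of the rectangular pulse, Problem 2.55(c)
g = @(B) 2*integral(@(u) (sin(pi*u)./(pi*u)).^2, 1e-12, B) - 0.9;
tauB90 = fzero(g, [0.5 2]);      % expect about 0.9, i.e., B90 = 0.9/tau
fprintf('tau*B90 = %.3f\n', tauB90)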

Problem 2.56 The outputs are the inputs phase shifted by −π/2 radians for frequencies greater than 0 and π/2 for frequencies less than 0 Hz. a. y1 (t) = exp [j100πt − jπ/2] = −j exp (j100πt); b. y2 (t) = 12 exp [j100πt − jπ/2] + 12 exp [−j100πt + jπ/2] = sin (100πt);



1 1 c. y3 (t) = 2j exp [j100πt − jπ/2] − 2j exp [−j100πt + jπ/2] = − cos (100πt);

d. The spectrum of the input is X4 (f ) = 2sinc(2f ). The spectrum of the output is ½ 2sinc (2f ) exp (−jπ/2) , f > 0 Y4 (f ) = 2sinc (2f ) exp (jπ/2) , f < 0 The time domain output signal is the inverse Fourier transform of this, which is Z ∞ Z 0 2sinc (2f ) exp (−jπ/2) exp (j2πf t) df + 2sinc (2f ) exp (jπ/2) exp (j2πf t) df y4 (t) = 0

= −2j

Z ∞

sinc (2f ) exp (j2πf t) df + 2j

0

Z 0

−∞

sinc (−2u) exp (−j2πut) (−du) ; u = −f

Z∞∞

Z ∞ = −2j sinc (2f ) exp (j2πf t) df + 2j sinc (2u) exp (−j2πut) du; sinc2u is even 0 0 Z ∞ = 2 sinc (2f ) [−j exp (j2πft) + j exp (−j2πf t)] df ; rewrite 2nd int in terms of f Z0 ∞ = 4 sinc (2f ) sin (2πf t) df 0

This requires numerical integration. A plot is given in Fig. 2.8. Problem 2.57 a. Amplitude distortion; no phase distortion. The output for (a) is ³ ³ π ´ π ´ + 10 cos 126πt − 63 ya (t) = 4 cos 48πt − 24 150 150 ¶ µ ¶ µ 1 1 + 10 cos 126π t − = 4 cos 48π t − 300 300 b. No amplitude distortion; phase distortion. The output for (b) is ³ π ´ yb (t) = 2 cos 126πt − 63 + cos (170πt) 150 ¶ µ 1 + cos (170πt) = 2 cos 126π t − 300 c. No amplitude distortion; no phase distortion.



Figure 2.8: x4(t) and y4(t) versus t in seconds for Problem 2.56(d).



The output for (c) is

¶ µ ¶ µ 1 1 + 6 cos 144π t − yc (t) = 2 cos 126π t − 300 300

d. No amplitude distortion; no phase distortion. The output for (d) is

µ ¶ µ ¶ 1 1 yd (t) = 4 cos 10π t − + 16 cos 50π t − 300 300

Problem 2.58 a. The frequency response function corresponding to this impulse response is ∙ µ ¶¸ 3 3 −1 2πf H1 (f ) = =q exp −j tan 5 + j2πf 5 2 25 + (2πf )

The group delay is

∙ ¶¸ µ 2πf 1 d − tan−1 2π df 5 1 2π 1 ´2 ³ 2π 5 1 + 2πf 5

Tg1 (f ) = − =

=

5 25 + (2πf )2

The phase delay is Tp1 (f ) = −

θ1 (f ) = 2πf

tan−1

³

2πf 5

2πf

´

b. The frequency response function corresponding to this impulse response is 2 5 − H2 (f ) = 3 + j2πf 5 + j2πf 5 (5 + j2πf ) − 2 (3 + j2πf ) = (3 + j2πf ) (5 + j2πf ) 19 + j6πf = 15 − (2πf)2 + j16πf q ³ ´i h 361 + (6πf )2 exp j tan−1 6πf 19 = rh i2 ³ ´i h 16πf 15 − (2πf )2 + (16πf )2 exp j tan−1 15−(2πf )2



Therefore θ2 (f ) = tan−1

µ

6πf 19

− tan−1

The group delay is Tg2 (f ) =

=

=

=

µ

16πf 15 − (2πf )2

∙ ¶ µ µ ¶¸ 16πf 1 d −1 6πf −1 tan − tan − 2π df 19 15 − (2πf )2 ¡ ¢ ⎧ ⎫ 1 6π ⎪ ⎪ 6πf 2 19 ⎨ ⎬ 1+( 19 ) 1 2 − 16π (15−(2πf ) )−16πf [−2(2πf )(2π)] 1 ⎪ 2π ⎪ 2 ⎩ − 1+³ 16πf ´2 ⎭ 15−(2πf )2 ] [ 2 15−(2πf ) ⎧ ⎫ i h 2 ⎪ ⎪ ⎨ ⎬ 16π 15 + (2πf ) 19 (6π) 1 − − h i2 2 ⎪ 2π ⎪ ⎩ 361 + (6πf ) 15 − (2πf )2 + (16πf )2 ⎭ i h 2 8 15 + (2πf ) 57 − + 361 + (6πf )2 225 + 34 (2πf )2 + (2πf )4

The phase delay is

Tp2 (f ) = −

θ2 (f ) = 2πf

tan−1

³

16πf 15−(2πf )2

´

2πf

− tan−1

³

6πf 19

´

The group and phase delays for (a) and (b) are shown in Fig. 2.9. Problem 2.59 The frequency response function may be written as q (4π)2 + (2πf )2 £ ¤ H (f ) = q exp j tan−1 (f/2) − j tan−1 (f /4) (8π)2 + (2πf )2 The group and phase delays are, respectively, Tg (f ) = = Tp (f ) =

¤ 1 d £ −1 tan (0.25f ) − tan−1 (0.5f ) 2π df ∙ ¸ 0.25 0.5 1 − 2π 1 + (0.25f )2 1 + (0.5f )2 ¤ 1 £ −1 tan (0.25f ) − tan−1 (0.5f ) 2πf



Figure 2.9: Group delay and phase delay versus f in Hz for Problem 2.58: Tg1(f) and Tp1(f) (top), Tg2(f) and Tp2(f) (bottom), both in seconds.



Problem 2.60 In terms of the input spectrum, the output spectrum is Y (f ) = X (f ) + 0.1X (f ) ∗ X (f ) ∙ µ ¶ µ ¶¸ f − 20 f + 20 = 4 Π +Π 6 6 ∙ µ ¶ µ ¶¸ ∙ µ ¶ µ ¶¸ f − 20 f + 20 f − 20 f + 20 +1.6 Π +Π ∗ Π +Π 6 6 6 6 ∙ µ ¶ µ ¶¸ f − 20 f + 20 = 4 Π +Π 6 6 ∙ µ ¶ µ ¶ µ ¶¸ f − 40 f f + 40 +1.6 6Λ + 12Λ + 6Λ 6 6 6 where Λ (f ) is an isoceles triangle of unit height going from -1 to 1. The student should sketch the output spectrum given the above analytical result. Problem 2.61 a. The output of the nonlinear device is y (t) = cos (2000πt) + 0.1 [cos (2000πt)]2 ¸ ∙ 3 1 cos (2000πt) + cos (6000πt) = cos (2000πt) + 0.1 4 4 = 1.075 cos (2000πt) + 0.025 cos (6000πt) The steadystate filter output in response to this input is z (t) = 1.075 |H (1000)| cos (2000πt) + 0.025 |H (3000)| cos (6000πt) so that the THD is (neglecting the negative frequency term of H (f )) THD = = =

1 (1.075)2 |H (1000)|2 2 2 2 2 ; |H(3000)| = 1; |H(1000)| = 2 (0.025) |H (3000)| 1 + 4Q (1000 − 3000)2 432 (note that 1.075/0.025 = 43) 1 + 4Q2 (1000 − 3000)2 1849 1 + 16 × 106 Q2

where H (f ) is the transfer function of the filter.



b. For THD = 0.001% = 0.00001, the equation for Q becomes 1849 = 0.00001 1 + 16 × 106 Q2 or 16 × 106 Q2 = 1849 × 105 − 1 Q2 = 11.56 Q = 3.4

Problem 2.62 Frequency components are present in the output at radian frequencies of 0, ω 1 , ω 2 , 2ω 1 , 2ω 2 , ω 1 − ω 2 , ω1 + ω2 ,3ω1 , 3ω2 , 2ω2 − ω1 , ω 1 + 2ω 2 , 2ω 1 − ω 2 , 2ω 1 + ω 2 . To use this as a frequency multiplier, pass the components at 2ω1 or 2ω2 to use as a doubler, or at 3ω 1 or 3ω 2 to use as a tripler. Problem 2.63 Write the transfer function as −j2πf t0

H (f ) = H0 e

− H0 Π

µ

f 2B

e−j2πf t0

Use the inverse Fourier transform of a constant, the delay theorem, and the inverse Fourier transform of a rectangular pulse function to get h (t) = H0 δ (t − t0 ) − 2BH0 sinc [2B (t − t0 )]

Problem 2.64 a. The Fourier transform of this signal is √ ¡ ¢ X (f ) = A 2πb2 exp −2π 2 τ 2 f 2

By definition, using a table of integrals, Z ∞ √ 1 T = |x (t)| dt = 2πτ x (0) −∞



Similarly, 1 W = 2X (0) Therefore,

Z ∞

1 |X (f )| df = √ 2 2πτ −∞

2 √ 2πτ = 1 2W T = √ 2 2πτ

b. The Fourier transform of this signal is X (f ) = The pulse duration is T = The bandwidth is

1 x (0)

2A/α 1 + (2πf/α)2

Z ∞

|x (t)| dt =

Z ∞

|X (f )| df =

1 W = 2X (0)

−∞

Thus, 2W T = 2

−∞

³α´ µ 2 ¶ 4

α

2 α α 4

=1

Problem 2.65 a. The poles for a second order Butterworth filter are given by ω3 s1 = s∗2 = − √ (1 − j) 2 where ω 3 is the 3-dB cutoff frequency of the Butterworth filter. function is

Its s-domain transfer

ω2 ω2 i h3 i= √ 3 H (s) = h ω3 ω3 s2 + 2ω 3 s + ω 23 s+ √ (1 − j) s + √ (1 + j) 2 2

Letting ω3 = 2πf3 and s = jω = j2πf , we obtain H (j2πf ) =

4π 2 f32 f32 √ √ = −4π 2 f 2 + 2 (2πf3 ) (j2πf ) + 4π 2 f32 −f 2 + j 2f3 f + f32



b. If the phase response function of the filter is θ (f ), the group delay is Tg (f ) = −

1 d [θ (f )] 2π df

For the second-order Butterworth filter considered here, Ã√ ! 2f f 3 θ (f ) = − tan−1 f32 − f 2 Therefore, the group delay is Tg (f ) = =

Ã√ !# " 2f3 f 1 d −1 tan 2π df f32 − f 2

f f2 + f2 1 1 + (f/f3 )2 √ 3 34 √ = 2π f3 + f 4 2πf3 1 + (f/f3 )4

This is plotted in Fig. 2.10. c. Use partial fraction expansion of H (s) /s and then inverse Laplace transform it to get the given step response. The expansion is ω2 Bs + C A √ 3 √ ¡ ¢= + 2 2 2 s s s + 2ω 3 s + ω 3 s + 2ω3 s + ω23 √ where A = 1, B = −1, C = − 2ω 3 H (s) s

=

This allows H (s) /s to be written as H (s) s

= = = = =

√ 1 s + 2ω 3 √ − s s2 + 2ω 3 s + ω 23 √ s + 2ω 3 1 √ − s s2 + 2ω 3 s + ω 23 /2 − ω 23 /2 + ω 23 √ s + 2ω 3 1 −¡ √ ¢2 s s + ω 3 / 2 + ω 23 /2 √ √ √ 1 s + ω3 / 2 − ω3 / 2 + 2ω 3 − √ ¢2 ¡ s s + ω 3 / 2 + ω23 /2 √ √ s + ω3 / 2 ω3/ 2 1 −¡ −¡ √ ¢2 √ ¢2 s s + ω 3 / 2 + ω 23 /2 s + ω 3 / 2 + ω23 /2


The group delay f3Tg(f) versus f/f3 and the step response ys(t) versus f3t for a second-order Butterworth filter are plotted in Fig. 2.10.

Figure 2.10: Using the s-shift theorem of Laplace transforms, this inverse transforms to the given expression for the step response (with ω3 = 2πf3 ). Plot it and estimate the 10% and 90% times from the plot. From the MATLAB plot of Fig. 2.10, f3 t10% ≈ 0.08 and f3 t90% ≈ 0.42 so that the 10-90 % rise time is about 0.34/f3 seconds. Problem 2.66 a. Slightly less than 0.5 seconds; b & c. Use sketches to show.

Problem 2.67 a. The leading edges of the flat-top samples follow the waveform at the sampling instants. b. The spectrum is Y (f ) = Xδ (f ) H (f )



where Xδ (f ) = fs

∞ X

n=−∞

X (f − nfs )

and H (f ) = τ sinc (fτ ) exp (−jπfτ ) The latter represents the frequency response of a filter whose impulse response is a square pulse of width τ and implements flat top sampling. If W is the bandwidth of X (f ), very little distortion will result if τ −1 >> W . Problem 2.68 a. The sampling frequency should be large compared with the bandwidth of the signal. b. The output spectrum of the zero-order hold circuit is Y (f ) = sinc (Ts f )

∞ X

n=−∞

X (f − nfs ) exp (−jπf Ts )

where fs = Ts−1 . For small distortion, we want Ts << W −1 . Problem 2.69 Use trig identities to rewrite the signal as a sum of sinusoids: ¸ 1 (1 + cos (4800πt)) x (t) = 10 cos (600πt) 2 = 5 cos (600πt) + 2.5 cos (4200πt) + 2.5 cos (5400πt) ∙

The lowpass recovery filter can cut off in the range 2.7+ kHz to 3.3− kHz where the superscript + means just above and the superscript − means just below. The lower of these is the highest frequency of x (t) and the larger is equal to the sampling frequency minus the highest frequency of x (t). Problem 2.70 For bandpass sampling and recovery, all but (b) and (e) will work theoretically, although an ideal filter with bandwidth exactly equal to the unsampled signal bandwidth is necessary. For lowpass sampling and recovery, only (f) will work.



Problem 2.71
The Fourier transform is

Y(f) = \tfrac{1}{2}X(f - f_0) + \tfrac{1}{2}X(f + f_0) + [-j\,\mathrm{sgn}(f)\,X(f)] * \left[\tfrac{1}{2}\delta(f - f_0)e^{-j\pi/2} + \tfrac{1}{2}\delta(f + f_0)e^{j\pi/2}\right]
= \tfrac{1}{2}X(f - f_0)[1 - \mathrm{sgn}(f - f_0)] + \tfrac{1}{2}X(f + f_0)[1 + \mathrm{sgn}(f + f_0)]

Noting that

\tfrac{1}{2}[1 - \mathrm{sgn}(f - f_0)] = u(f_0 - f) \quad \text{and} \quad \tfrac{1}{2}[1 + \mathrm{sgn}(f + f_0)] = u(f + f_0)

this may be rewritten as

Y(f) = X(f - f_0)\,u(f_0 - f) + X(f + f_0)\,u(f + f_0)

Thus, if X(f) = \Pi(f/2) (a unit-height rectangle 2 units wide centered at f = 0) and f_0 = 10 Hz, Y(f) would consist of unit-height rectangles going from -10 to -9 Hz and from 9 to 10 Hz.

Problem 2.72
a. \hat{x}_a(t) = \cos(\omega_0 t - \pi/2) = \sin(\omega_0 t), so

\lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T} x(t)\hat{x}(t)\,dt = \lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T}\sin(\omega_0 t)\cos(\omega_0 t)\,dt = \lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T}\tfrac{1}{2}\sin(2\omega_0 t)\,dt = \lim_{T\to\infty}\frac{1}{2T}\left[-\frac{\cos(2\omega_0 t)}{4\omega_0}\right]_{-T}^{T} = 0

b. Use trigonometric identities to express x(t) in terms of sines and cosines. Then find the Hilbert transform of x(t) by phase shifting by -\pi/2. Multiply x(t) and \hat{x}(t) together term by term, use trigonometric identities for the product of sines and cosines, then integrate. The integrand will be a sum of terms similar to that of part (a). The limit as T \to \infty will be zero term-by-term.



c. For this case \hat{x}(t) = A\exp(j\omega_0 t - j\pi/2) = -jA\exp(j\omega_0 t). Take the product and integrate over time to get

\lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T} x(t)\hat{x}(t)\,dt = \lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T}[A\exp(j\omega_0 t)][-jA\exp(j\omega_0 t)]\,dt = \lim_{T\to\infty}\frac{-jA^2}{2T}\int_{-T}^{T}\exp(j2\omega_0 t)\,dt = 0

by periodicity of the integrand.

Problem 2.73
a. Note that F[j\hat{x}(t)] = j[-j\,\mathrm{sgn}(f)]X(f). Hence

x_1(t) = \tfrac{2}{3}x(t) + \tfrac{1}{3}j\hat{x}(t) \rightarrow X_1(f) = \tfrac{2}{3}X(f) + \tfrac{1}{3}j[-j\,\mathrm{sgn}(f)]X(f) = \left[\tfrac{2}{3} + \tfrac{1}{3}\mathrm{sgn}(f)\right]X(f) = \begin{cases}\tfrac{1}{3}X(f), & f < 0\\ X(f), & f > 0\end{cases}

b. It follows that

x_2(t) = \left[\tfrac{3}{4}x(t) + \tfrac{3}{4}j\hat{x}(t)\right]\exp(j2\pi f_0 t) \Rightarrow X_2(f) = \tfrac{3}{4}[1 + \mathrm{sgn}(f - f_0)]X(f - f_0) = \begin{cases}0, & f < f_0\\ \tfrac{3}{2}X(f - f_0), & f > f_0\end{cases}

c. This case has the same spectrum as part (a), except that it is shifted right by W Hz. That is,

x_3(t) = \left[\tfrac{2}{3}x(t) + \tfrac{1}{3}j\hat{x}(t)\right]\exp(j2\pi W t) \rightarrow X_3(f) = \left[\tfrac{2}{3} + \tfrac{1}{3}\mathrm{sgn}(f - W)\right]X(f - W)

d. For this signal

x_4(t) = \left[\tfrac{2}{3}x(t) - \tfrac{1}{3}j\hat{x}(t)\right]\exp(j\pi W t) \rightarrow X_4(f) = \left[\tfrac{2}{3} - \tfrac{1}{3}\mathrm{sgn}(f - W/2)\right]X(f - W/2)


Problem 2.74
a. The spectrum is

X_p(f) = X(f) + j[-j\,\mathrm{sgn}(f)]X(f) = [1 + \mathrm{sgn}(f)]X(f)

The Fourier transform of x(t) is

X(f) = \tfrac{1}{2}\Pi\!\left(\frac{f - f_0}{2W}\right) + \tfrac{1}{2}\Pi\!\left(\frac{f + f_0}{2W}\right)

Thus,

X_p(f) = \Pi\!\left(\frac{f - f_0}{2W}\right) \quad \text{if } f_0 > 2W

b. The complex envelope is defined by x_p(t) = \tilde{x}(t)\,e^{j2\pi f_0 t}. Therefore \tilde{x}(t) = x_p(t)\,e^{-j2\pi f_0 t}. Hence

F[\tilde{x}(t)] = F[x_p(t)]\big|_{f \to f + f_0} = \Pi\!\left(\frac{f - f_0}{2W}\right)\bigg|_{f \to f + f_0} = \Pi\!\left(\frac{f}{2W}\right)

c. The complex envelope is

\tilde{x}(t) = F^{-1}\!\left[\Pi\!\left(\frac{f}{2W}\right)\right] = 2W\,\mathrm{sinc}(2Wt)
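A numerical check of parts (b) and (c); this is a sketch, not part of the original solution, and it assumes the Signal Processing Toolbox functions hilbert() and sinc() together with the illustrative values f_0 = 100 Hz and W = 10 Hz.

% Complex envelope of a bandpass signal via the analytic signal
f0 = 100; W = 10; fs = 1000;              % assumed values
t  = -2:1/fs:2;
x  = 2*W*sinc(2*W*t).*cos(2*pi*f0*t);     % signal whose spectrum matches part (a)
xp = hilbert(x);                          % analytic signal x(t) + j*xhat(t)
xtilde = xp.*exp(-1j*2*pi*f0*t);          % complex envelope
plot(t, real(xtilde), t, 2*W*sinc(2*W*t), '--')
xlabel('t (s)'), legend('Re of numerical complex envelope', '2W sinc(2Wt)')
% Apart from end effects of the finite record, the two curves coincide.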

Problem 2.75
For t < -\tau/2, the output is zero. For |t| \le \tau/2, the result is

y(t) = \frac{\alpha/2}{\sqrt{\alpha^2 + (2\pi\Delta f)^2}}\left\{\cos[2\pi(f_0 + \Delta f)t - \theta] - e^{-\alpha(t + \tau/2)}\cos[2\pi(f_0 + \Delta f)t + \theta]\right\}



For t > \tau/2, the result is

y(t) = \frac{(\alpha/2)\,e^{-\alpha t}}{\sqrt{\alpha^2 + (2\pi\Delta f)^2}}\left\{e^{\alpha\tau/2}\cos[2\pi(f_0 + \Delta f)t - \theta] - e^{-\alpha\tau/2}\cos[2\pi(f_0 + \Delta f)t + \theta]\right\}

In the above equations, \theta is given by

\theta = -\tan^{-1}\!\left(\frac{2\pi\Delta f}{\alpha}\right)

2.2 Computer Exercises

Computer Exercise 2.1 % ce2_1: Computes generalized Fourier series coefficients for exponentially % decaying signal, exp(-t)u(t), or sinewave, sin(2*pi*t) % type_wave = input(’Enter waveform type: 1 = exp(-t)u(t); 2 = sin(2 pi t): ’); tau = input(’Enter approximating pulse width: ’); if type_wave == 1 t_max = input(‘Enter duration of exponential: ’); elseif type_wave == 2 t_max = 1; end clf a = []; t = 0:.001:t_max; cn = tau; n_max = floor(t_max/tau) for n = 1:n_max LL = (n-1)*tau; UL = n*tau; if type_wave == 1 a(n) = (1/cn)*quad(@integrand_exp, LL, UL); elseif type_wave == 2 a(n) = (1/cn)*quad(@integrand_sine, LL, UL); end end disp(‘ ’)


62

CHAPTER 2. SIGNAL AND LINEAR SYSTEM THEORY disp(‘c_n’) disp(cn) disp(‘ ’) disp(‘Expansion coefficients (approximating pulses not normalized):’) disp(a) disp(‘ ’) x_approx = zeros(size(t)); for n = 1:n_max x_approx = x_approx + a(n)*phi(t/tau,n-1); end if type_wave == 1 MSE = quad(@integrand_exp2, 0, t_max) - cn*a*a’; elseif type_wave == 2 MSE = quad(@integrand_sine2, 0, t_max) - cn*a*a’; end disp(‘Mean-squared error:’) disp(MSE) disp(‘ ’) if type_wave == 1 plot(t, x_approx), axis([0 t_max -inf inf]), xlabel(‘t’), ylabel(‘x(t); x_a_p_p_r_o_x(t)’) hold plot(t, exp(-t)) title([’Approximation of exp(-t) over [0, ’,num2str(t_max),’] with contiguous rectangular pulses of width ’,num2str(tau)]) elseif type_wave == 2 plot(t, x_approx), axis([0 t_max -inf inf]), xlabel(‘t’), ylabel(‘x(t); x_a_p_p_r_o_x(t)’) hold plot(t, sin(2*pi*t)) title([’Approximation of sin(2*pi*t) over [0, ’,num2str(t_max),’] with contiguous rectangular pulses of width ’,num2str(tau)]) end % Decaying exponential function for ce_1 % function z = integrand_exp(t) z = exp(-t); % Sine function for ce_1 %



function z = integrand_sine(t) z = sin(2*pi*t); % Decaying exponential squared function for ce_1 % function z = integrand_exp2(t) z = exp(-2*t); % Sin^2 function for ce_1 % function z = integrand_sine2(t) z = (sin(2*pi*t)).^2; A typical run follows with a plot given in Fig. 2.11. >> ce2_1 Enter waveform type: 1 = exp(-t)u(t); 2 = sin(2 pi t): 2 Enter approximating pulse width: 0.125 n_max = 8 c_n 0.1250 Expansion coefficients (approximating pulses not normalized): Columns 1 through 7 0.3729 0.9003 0.9003 0.3729 -0.3729 -0.9003 Column 8 -0.3729 Mean-squared error: 0.0252

-0.9003

Computer Exercise 2.2 % ce2_2.m: Plot of line spectra for half-rectified, full-rectified sinewave, square wave, % and triangle wave % clf waveform = input(’Enter type of waveform: 1 = HR sine; 2 = FR sine; 3 = square; 4 = triangle: ’); A = 1; n_max = 13; % maximum harmonic plotted; odd n = -n_max:1:n_max; if waveform == 1 X = A./(pi*(1+eps - n.^2)); % Offset 1 slightly to avoid divide by zero


Figure 2.11: Approximation of sin(2*pi*t) over [0, 1] with contiguous rectangular pulses of width 0.125.



for m = 1:2:2*n_max+1 X(m) = 0; % Set odd harmonic lines to zero X(n_max+2) = -j*A/4; % Compute lines for n = 1 and n = -1 X(n_max) = j*A/4; end elseif waveform == 2 X = 2*A./(pi*(1+eps - 4*n.^2)); elseif waveform == 3 X = abs(4*A./(pi*n+eps)); for m = 2:2:2*n_max+1 X(m) = 0; end elseif waveform == 4 X = 4*A./(pi*n+eps).^2; for m = 2:2:2*n_max+1 X(m) = 0; end end [arg_X, mag_X] = cart2pol(real(X),imag(X)); % Convert to magnitude and phase if waveform == 1 for m = n_max+3:2:2*n_max+1 arg_X(m) = arg_X(m) - 2*pi; % Force phase to be odd end elseif waveform == 2 m = find(n > 0); arg_X(m) = arg_X(m) - 2*pi; % Force phase to be odd elseif waveform == 4 arg_X = mod(arg_X, 2*pi); end subplot(2,1,1),stem(n, mag_X),ylabel(’|X_n|’) if waveform == 1 title(’Half-rectified sine wave spectra’) elseif waveform == 2 title(’Full-rectified sine wave spectra’) elseif waveform == 3 title(’Spectra for square wave with even symmetry ’) elseif waveform == 4 title(’Spectra for triangle wave with even symmetry’) end subplot(2,1,2),stem(n, arg_X),xlabel(’nf_0’),ylabel(’angle(X_n)’)⊂


Figure 2.12: Line spectra (magnitude |X_n| and phase) for the half-rectified sine wave.

A typical run follows with a plot given in Fig. 2.12.
>> ce2_2
Enter type of waveform: 1 = HR sine; 2 = FR sine; 3 = square; 4 = triangle: 1

Computer Exercise 2.3
% ce2_4.m: FFT plotting of line spectra for half-rectified, full-rectified sinewave,
% square wave, and triangular waveforms
%
clf
I_wave = input('Enter type of waveform: 1 = positive squarewave; 2 = 0-dc level triangular; 3 = half-rect. sine; 4 = full-wave sine: ');
T = 2;
del_t = 0.001;
t = 0:del_t:T;
L = length(t);



fs = (L-1)/T; del_f = 1/T; n = 0:9; if I_wave == 1 x = pls_fn(2*(t-T/4)/T); X_th = abs(0.5*sinc(n/2)); disp(‘ ’) disp(‘ 0 - 1 level squarewave’) elseif I_wave == 2 x = 2*trgl_fn(2*(t-T/2)/T)-1; X_th = 4./(pi^2*n.^2); X_th(1) = 0; % Set n = even coefficients to zero (odd indexed because of MATLAB) X_th(3) = 0; X_th(5) = 0; X_th(7) = 0; X_th(9) = 0; disp(‘ ’) disp(‘ 0-dc level triangular wave’) elseif I_wave == 3 x = sin(2*pi*t/T).*pls_fn(2*(t-T/4)/T); X_th = abs(1./(pi*(1-n.^2))); X_th(2) = 0.25; X_th(4) = 0; % Set n = odd coefficients to zero (even indexed because of MATLAB) X_th(6) = 0; X_th(8) = 0; X_th(10) = 0; disp(‘ ’) disp(‘ Half-rectified sinewave’) elseif I_wave == 4 x = abs(sin(pi*t/T)); % Period of full-rectified sinewave is T/2 X_th = abs(2./(pi*(1-4*n.^2))); disp(‘ ’) disp(‘ Full-rectified sinewave’) end X = 0.5*fft(x)*del_t; % Multiply by 0.5 because of 1/T_0 with T_0 = 2 f = 0:del_f:fs; Y = abs(X(1:10)); Z = [n’ Y’ X_th’]; disp(‘Magnitude of the Fourier coefficients’); disp(‘ ’)


68

CHAPTER 2. SIGNAL AND LINEAR SYSTEM THEORY disp(‘ n FFT Theory’); disp(‘ __________________________’) disp(‘ ’) disp(Z); subplot(2,1,1),plot(t, x), xlabel(‘t’), ylabel(‘x(t)’) subplot(2,1,2),plot(f, abs(X),’o’),axis([0 10 0 1]),... xlabel(‘n’), ylabel(‘|X_n|’)

A typical run follows with a plot given in Fig. 2.13. >> ce2_3 Enter type of waveform: 1 = positive squarewave; 2 = 0-dc level triangular; 3 = half-rect. sine; 4 = full-wave sine: 3 Half-rectified sinewave Magnitude of the Fourier coefficients n FFT Theory __________________________ 0 0.3183 0.3183 1.0000 0.2501 0.2500 2.0000 0.1062 0.1061 3.0000 0.0001 0 4.0000 0.0212 0.0212 5.0000 0.0001 0 6.0000 0.0091 0.0091 7.0000 0.0000 0 8.0000 0.0051 0.0051 9.0000 0.0000 0 Computer Exercise 2.4 Make the time window long compared with the pulse width. Computer Exercise 2.5 % ce2_5.m: Finding the energy ratio in a preset bandwidth % I_wave = input(’Enter type of waveform: 1 = rectangular; 2 = triangular; 3 = half-rect. sine; 4 = raised cosine: ’); tau = input(’Enter pulse width: ’); per_cent = input(’Enter percent total desired energy: ’); clf T = 20;


Figure 2.13: Waveform x(t) and FFT-computed line spectrum |X_n| for the half-rectified sinewave of Computer Exercise 2.3.



CHAPTER 2. SIGNAL AND LINEAR SYSTEM THEORY f = []; G = []; del_t = 0.001; t = 0:del_t:T; L = length(t); fs = (L-1)/T; del_f = 1/T; % n = [0 1 2 3 4 5 6 7 8 9]; if I_wave == 1 x = pls_fn((t-tau/2)/tau); disp(’ ’) disp(’ Rectangular pulse’) elseif I_wave == 2 x = trgl_fn(2*(t-tau/2)/tau); disp(’ ’) disp(’ Triangular pulse’) elseif I_wave == 3 x = sin(pi*t/tau).*pls_fn((t-tau/2)/tau); disp(’ ’) disp(’ Half sinewave’) elseif I_wave == 4 x = abs(sin(pi*t/tau).^2.*pls_fn((t-tau/2)/tau)); disp(’ ’) disp(’ Raised sinewave’) end X = fft(x)*del_t; f1 = 0:del_f*tau:fs*tau; G1 = X.*conj(X); NN = floor(length(G1)/2); G = G1(1:NN); ff = f1(1:NN); f = f1(1:NN+1); E_tot = sum(G); E_f = cumsum(G); E_W = [0 E_f]/E_tot; test = E_W - per_cent/100; L_test = length(test); k = 1; while test(k) <= 0 k = k+1;



end B = k*del_f; if I_wave == 2 tau1 = tau/2; else tau1 = tau; end subplot(3,1,1),plot(t/tau, x), xlabel(’t/\tau’), ylabel(’x(t)’), axis([0 2 0 1.2]) if I_wave == 1 title([’Energy containment bandwidth for rectangular pulse of width ’, num2str(tau),’ seconds’]) elseif I_wave == 2 title([’Energy containment bandwidth for triangular pulse of width ’, num2str(tau),’ seconds’]) elseif I_wave == 3 title([’Energy containment bandwidth for half-sine pulse of width ’, num2str(tau),’ seconds’]) elseif I_wave == 4 title([’Energy containment bandwidth for raised cosine pulse of width ’, num2str(tau),’ seconds’]) end subplot(3,1,2),semilogy(ff*tau1, abs(G./max(G))), xlabel(’f\tau’), ylabel(’G(f)’), axis([0 10 1e-5 1]) subplot(3,1,3),plot(f*tau1, E_W,’—’), xlabel(’f\tau’), ylabel(’E_W’), axis([0 4 0 1.2]) legend([num2str(per_cent), ’% bandwidth X (pulse width) = ’, num2str(B*tau)],4) % This function generates a unit-high triangle centered % at zero and extending from -1 to 1 % function y = trgl_fn(t) y = (1 - abs(t)).*pls_fn(t/2); % Unit-width pulse function % function y = pls_fn(t) y = stp_fn(t+0.5)-stp_fn(t-0.5); A typical run follows with a plot given in Fig. 2.14. >> ce2_5


Figure 2.14: Energy containment bandwidth for a half-sine pulse of width 2 seconds: x(t), the normalized energy spectral density G(f), and the cumulative energy E_W (95% bandwidth X (pulse width) = 1.1).

Enter type of waveform: 1 = positive squarewave; 2 = triangular; 3 = half-rect. sine; 4 = raised cosine: 3
Enter pulse width: 2
Enter percent total desired energy: 95

Computer Exercise 2.6
The program for this exercise is similar to that for Computer Exercise 2.5, except that the waveform is used in the energy calculation.

Computer Exercise 2.7
Use Computer Example 2.2 as a pattern for the solution.


Chapter 3

Basic Modulation Techniques

3.1 Problems

Problem 3.1
The demodulated output, in general, is

y_D(t) = \mathrm{Lp}\{x_c(t)\,2\cos[\omega_c t + \theta(t)]\}

where \mathrm{Lp}\{\cdot\} denotes the lowpass portion of the argument. With

x_c(t) = A_c m(t)\cos[\omega_c t + \phi_0]

the demodulated output becomes

y_D(t) = \mathrm{Lp}\{2 A_c m(t)\cos[\omega_c t + \phi_0]\cos[\omega_c t + \theta(t)]\}

Performing the indicated multiplication and taking the lowpass portion yields

y_D(t) = A_c m(t)\cos[\theta(t) - \phi_0]

If \theta(t) = \theta_0 (a constant), the demodulated output becomes

y_D(t) = A_c m(t)\cos[\theta_0 - \phi_0]

Letting A_c = 1 gives the error

\varepsilon(t) = m(t)[1 - \cos(\theta_0 - \phi_0)]

The mean-square error is

\langle\varepsilon^2(t)\rangle = \langle m^2(t)[1 - \cos(\theta_0 - \phi_0)]^2\rangle



where \langle\cdot\rangle denotes the time-average value. Since the term [1 - \cos(\theta_0 - \phi_0)] is a constant, we have

\langle\varepsilon^2(t)\rangle = \langle m^2(t)\rangle[1 - \cos(\theta_0 - \phi_0)]^2

Note that for \theta_0 = \phi_0, the demodulation carrier is phase coherent with the original modulation carrier, and the error is zero. For \theta(t) = \omega_0 t we have the demodulated output

y_D(t) = A_c m(t)\cos(\omega_0 t - \phi_0)

Letting A_c = 1, for convenience, gives the error

\varepsilon(t) = m(t)[1 - \cos(\omega_0 t - \phi_0)]

giving the mean-square error

\langle\varepsilon^2(t)\rangle = \langle m^2(t)[1 - \cos(\omega_0 t - \phi_0)]^2\rangle

In many cases, the average of a product is the product of the averages. (We will say more about this in Chapter 5.) For this case

\langle\varepsilon^2(t)\rangle = \langle m^2(t)\rangle\langle[1 - \cos(\omega_0 t - \phi_0)]^2\rangle

Note that 1 - \cos(\omega_0 t - \phi_0) is periodic. Taking the average over an integer number of periods yields

\langle[1 - \cos(\omega_0 t - \phi_0)]^2\rangle = \langle 1 - 2\cos(\omega_0 t - \phi_0) + \cos^2(\omega_0 t - \phi_0)\rangle = 1 + \tfrac{1}{2} = \tfrac{3}{2}

Thus

\langle\varepsilon^2(t)\rangle = \tfrac{3}{2}\langle m^2(t)\rangle

Problem 3.2
Multiplying the AM signal x_c(t) = A_c[1 + a m_n(t)]\cos\omega_c t by 2\cos[\omega_c t + \theta(t)] and lowpass filtering to remove the double-frequency (2\omega_c) term yields

y_D(t) = A_c[1 + a m_n(t)]\cos\theta(t)
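A quick numerical check of the Problem 3.1 result \langle\varepsilon^2(t)\rangle = (3/2)\langle m^2(t)\rangle (a sketch; the sinusoidal test message and the value f_0 = 3 Hz are arbitrary choices, not part of the problem).

% Mean-square demodulation error for a frequency error in the demod carrier
t    = 0:1e-4:10;                          % long record so the time averages converge
m    = cos(2*pi*1*t);                      % arbitrary test message
f0   = 3; phi0 = 0;                        % frequency error of the demodulation carrier
err  = m.*(1 - cos(2*pi*f0*t - phi0));     % error signal with Ac = 1
ratio = mean(err.^2)/mean(m.^2);           % should be close to 3/2
fprintf('<eps^2>/<m^2> = %.4f (theory: 1.5)\n', ratio)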



Figure 3.1: Full-wave rectifier (input x_c(t), output y_D(t) across an RC load).

For negligible demodulation phase error, \theta(t) \approx 0, this becomes

y_D(t) = A_c + A_c a m_n(t)

The dc component can be removed, resulting in A_c a m_n(t), which is a signal proportional to the message, m(t). This process is not generally used in AM since the reason for using AM is to avoid the necessity for coherent demodulation.

Problem 3.3
A full-wave rectifier takes the form shown in Figure 3.1. The waveforms are shown in Figure 3.2, with the half-wave rectifier on top and the full-wave rectifier on the bottom. The message signal is the envelope. Decreasing exponentials can be drawn from the peaks of the waveform, as depicted in Figure 3.3(b) in the text. It is clear that the full-wave rectified x_c(t) defines the message better than the half-wave rectified x_c(t) since the carrier frequency is effectively doubled.

Problem 3.4
By evaluating

\langle m_n^2(t)\rangle = \frac{1}{T}\int_0^T m_n^2(t)\,dt

we see that \langle m_n^2(t)\rangle is equal to the values given in the following table. (See the first part of Problem 3.5 for Part a. Part b has the same value of \langle m_n^2(t)\rangle as Part a. For Part c it is obvious that \langle m_n^2(t)\rangle = 1.) The efficiency is defined as

E_{ff} = \frac{a^2\langle m_n^2(t)\rangle}{1 + a^2\langle m_n^2(t)\rangle}

which gives the values in the following table.


Figure 3.2: Output waveforms for a half-wave rectifier (top) and a full-wave rectifier (bottom).

Part   <m_n^2(t)>   a = 0.2          a = 0.4          a = 0.7           a = 1
a      1/3          Eff = 1.3158%    Eff = 5.0633%    Eff = 14.0401%    Eff = 25%
b      1/3          Eff = 1.3158%    Eff = 5.0633%    Eff = 14.0401%    Eff = 25%
c      1            Eff = 3.8462%    Eff = 13.7931%   Eff = 32.8859%    Eff = 50%

Problem 3.5
By inspection, the normalized message signal is as shown in Figure 3.3. Thus

m_n(t) = \frac{2}{T}\,t, \quad 0 \le t \le \frac{T}{2}

and

\langle m_n^2(t)\rangle = \frac{2}{T}\int_0^{T/2}\left(\frac{2}{T}\right)^2 t^2\,dt = \left(\frac{2}{T}\right)^3\frac{1}{3}\left(\frac{T}{2}\right)^3 = \frac{1}{3}

Also

A_c[1 + a] = 40
A_c[1 - a] = 10

This yields

\frac{1 + a}{1 - a} = \frac{40}{10} = 4


Figure 3.3: Normalized message signal for Problem 3.5.

or

1 + a = 4 - 4a, \quad 5a = 3

Thus a = 0.6. Since the index is 0.6, we can write

A_c[1 + 0.6] = 40

This gives

A_c = \frac{40}{1.6} = 25

The carrier power is

P_c = \tfrac{1}{2}A_c^2 = \tfrac{1}{2}(25)^2 = 312.5 Watts

The efficiency is

E_{ff} = \frac{(0.6)^2\left(\tfrac{1}{3}\right)}{1 + (0.6)^2\left(\tfrac{1}{3}\right)} = \frac{0.12}{1.12} = 0.107 = 10.7\%

Thus

\frac{P_{sb}}{P_c + P_{sb}} = 0.107

where P_{sb} represents the power in the sidebands and P_c represents the power in the carrier. The above expression can be written

P_{sb} = 0.107 P_c + 0.107 P_{sb}



This gives

P_{sb} = \frac{0.107}{1.0 - 0.107}\,P_c \approx 37.5 Watts
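A sketch that automates the arithmetic of Problem 3.5 (no new information; the numbers all follow from the values above).

% AM index, carrier power, efficiency, and sideband power from the envelope extremes
Amax = 40; Amin = 10;                 % envelope maximum and minimum
a    = (Amax - Amin)/(Amax + Amin);   % modulation index, a = 0.6
Ac   = Amax/(1 + a);                  % carrier amplitude, Ac = 25
Pc   = Ac^2/2;                        % carrier power, 312.5 W
msqr = 1/3;                           % <mn^2(t)> for this message
Eff  = a^2*msqr/(1 + a^2*msqr);       % efficiency, about 0.107
Psb  = Eff/(1 - Eff)*Pc;              % sideband power, 37.5 W
fprintf('a = %.2f, Ac = %.1f, Pc = %.1f W, Eff = %.1f%%, Psb = %.1f W\n', ...
    a, Ac, Pc, 100*Eff, Psb)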

Problem 3.6
For the first signal,

\tau = \frac{T}{6}, \quad m_n(t) = m(t), \quad \text{and} \quad E_{ff} = 81.25\%

and for the second signal,

\tau = \frac{5T}{6}, \quad m_n(t) = \tfrac{1}{5}m(t), \quad \text{and} \quad E_{ff} = 14.77\%

Problem 3.7
(a) Using a root-finding algorithm, or a simple MATLAB program, we find that the minimum value of m(t) is -12.8985. Thus the normalized message signal is

m_n(t) = \frac{1}{12.8985}\,[9\cos(20\pi t) - 8\cos(60\pi t)]

With the given value of c(t) and the index a, we have

x_c(t) = 100[1 + 0.5 m_n(t)]\cos 200\pi t

or

x_c(t) = 100\left[1 + \frac{1}{2(12.8985)}\,(9\cos 20\pi t - 8\cos 60\pi t)\right]\cos 200\pi t

This yields

x_c(t) = -15.5057\cos 140\pi t + 17.4439\cos 180\pi t + 100\cos 200\pi t + 17.4439\cos 220\pi t - 15.5057\cos 260\pi t

(b) The value of \langle m_n^2(t)\rangle is

\langle m_n^2(t)\rangle = \left(\frac{1}{12.8985}\right)^2\left(\frac{1}{2}\right)\left[(9)^2 + (8)^2\right] = 0.4358


Figure 3.4: Amplitude spectrum for Problem 3.7.


(c) This gives the efficiency

E = \frac{(0.5)^2(0.4358)}{1 + (0.5)^2(0.4358)} = 0.0982

or 9.82%.
(d) The two-sided amplitude spectrum is shown in Figure 3.4. The phase spectrum is everywhere zero except that the components at f = \pm 70 and f = \pm 130 have phase \pm\pi. The sign is arbitrary, but the phase spectrum must be drawn so that it is odd.

Problem 3.8
(a) m_n(t) = \frac{1}{17}\,[9\cos(20\pi t) + 8\cos(60\pi t)]
(b) \langle m_n^2(t)\rangle = \left(\frac{1}{17}\right)^2\left(\frac{1}{2}\right)\left[(9)^2 + (8)^2\right] = 0.2509
(c) E_{ff} = \frac{0.25(0.2509)}{1 + 0.25(0.2509)} = 0.0590 = 5.9\%
(d) The expression for x_c(t) is

x_c(t) = 100\left[1 + \frac{1}{2}\,\frac{1}{17}\,(9\cos 20\pi t + 8\cos 60\pi t)\right]\cos 200\pi t
= 11.7647\cos 140\pi t + 13.2353\cos 180\pi t + 100\cos 200\pi t + 13.2353\cos 220\pi t + 11.7647\cos 260\pi t

The amplitude spectrum is drawn as in the previous problem. The phase spectrum is everywhere zero.

Problem 3.9
The modulator output

x_c(t) = 30\cos[2\pi(200)t] + 4\cos[2\pi(180)t] + 4\cos[2\pi(220)t]

can be written

x_c(t) = [30 + 8\cos 2\pi(20)t]\cos 2\pi(200)t

or

x_c(t) = 30\left[1 + \frac{8}{30}\cos 2\pi(20)t\right]\cos 2\pi(200)t

By inspection, the modulation index is

a = \frac{8}{30} = 0.2667

Since the component at 200 Hertz represents the carrier, the carrier power is

P_c = \frac{1}{2}(30)^2 = 450 Watts



The components at 180 and 220 Hertz are sideband terms. Thus the sideband power is

P_{sb} = \frac{1}{2}(4)^2 + \frac{1}{2}(4)^2 = 16 Watts

Thus, the efficiency is

E_{ff} = \frac{P_{sb}}{P_c + P_{sb}} = \frac{16}{450 + 16} = 0.0343 = 3.43\%
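A short sketch of this computation directly from the spectral lines (no new information; the amplitudes are those of the modulator output above).

% Efficiency and modulation index of Problem 3.9 from the line amplitudes
Acarrier = 30; Aside = [4 4];          % carrier and sideband line amplitudes
Pc  = Acarrier^2/2;                    % carrier power, 450 W
Psb = sum(Aside.^2/2);                 % sideband power, 16 W
Eff = Psb/(Pc + Psb);                  % efficiency, about 3.43 percent
a   = 2*Aside(1)/Acarrier;             % modulation index, 0.2667
fprintf('Pc = %g W, Psb = %g W, Eff = %.2f%%, a = %.4f\n', Pc, Psb, 100*Eff, a)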

Problem 3.10
The modulator output

x_c(t) = A\cos[2\pi(200)t] + B\cos[2\pi(180)t] + B\cos[2\pi(220)t]

can be written

x_c(t) = A\left[1 + \frac{2B}{A}\cos 2\pi(20)t\right]\cos[2\pi(200)t]

The sideband power is

P_{sb} = \tfrac{1}{2}B^2 + \tfrac{1}{2}B^2 = B^2 Watts

The carrier power is

P_0 = \frac{A^2}{2}

Thus, the efficiency is

E_{ff} = \frac{P_{sb}}{P_0 + P_{sb}} = \frac{B^2}{P_0 + B^2} = \frac{2B^2}{A^2 + 2B^2}

Since P_0 = 100 = A^2/2,

A = \sqrt{200} = 14.1421

The efficiency is

E_{ff} = \frac{B^2}{100 + B^2} = 0.4

Thus B^2 = 40 + 0.4B^2, so that

B = \sqrt{\frac{40}{0.6}} = 8.1650



Finally, the modulation index is

a = \frac{2B}{A} = \frac{2(8.1650)}{14.1421} = 1.1547

Problem 3.11
The modulator output

x_c(t) = 25\cos[2\pi(150)t] + 5\cos[2\pi(160)t] + 5\cos[2\pi(140)t]

is

x_c(t) = 25\left[1 + \frac{10}{25}\cos(2\pi(10)t)\right]\cos(2\pi(150)t)

Thus, the modulation index is

a = \frac{10}{25} = 0.4

The carrier power is

P_c = \tfrac{1}{2}(25)^2 = 312.5 Watts

and the sideband power is

P_{sb} = \tfrac{1}{2}(5)^2 + \tfrac{1}{2}(5)^2 = 25 Watts

Thus, the efficiency is

E_{ff} = \frac{25}{312.5 + 25} = 0.0741

Problem 3.12
(a) By plotting m(t) or by using a root-finding algorithm we see that the minimum value of m(t) is M = -3.432. Thus

m_n(t) = 0.5828\cos(2\pi f_m t) + 0.2914\cos(4\pi f_m t) + 0.5828\cos(10\pi f_m t)

The AM signal is

x_c(t) = A_c[1 + 0.7 m_n(t)]\cos(2\pi f_c t)
= 0.2040 A_c\cos[2\pi(f_c - 5 f_m)t] + 0.1020 A_c\cos[2\pi(f_c - 2 f_m)t] + 0.2040 A_c\cos[2\pi(f_c - f_m)t]
+ A_c\cos(2\pi f_c t)
+ 0.2040 A_c\cos[2\pi(f_c + f_m)t] + 0.1020 A_c\cos[2\pi(f_c + 2 f_m)t] + 0.2040 A_c\cos[2\pi(f_c + 5 f_m)t]



The spectrum is drawn from the expression for x_c(t). It contains 14 discrete components, as shown below.

Comp   Freq           Amp          Comp   Freq          Amp
1      -fc - 5fm      0.102 Ac     8      fc - 5fm      0.102 Ac
2      -fc - 2fm      0.051 Ac     9      fc - 2fm      0.051 Ac
3      -fc - fm       0.102 Ac     10     fc - fm       0.102 Ac
4      -fc            0.5 Ac       11     fc            0.5 Ac
5      -fc + fm       0.102 Ac     12     fc + fm       0.102 Ac
6      -fc + 2fm      0.051 Ac     13     fc + 2fm      0.051 Ac
7      -fc + 5fm      0.102 Ac     14     fc + 5fm      0.102 Ac

(b) The efficiency is 15.8%.

Problem 3.13
(a) From Figure 3.72, x(t) = m(t) + \cos\omega_c t. With the given relationship between x(t) and y(t) we can write

y(t) = 4\{m(t) + \cos\omega_c t\} + 2\{m(t) + \cos\omega_c t\}^2

which can be written

y(t) = 4m(t) + 4\cos\omega_c t + 2m^2(t) + 4m(t)\cos\omega_c t + 1 + \cos 2\omega_c t

The equation for y(t) is more conveniently expressed

y(t) = 1 + 4m(t) + 2m^2(t) + 4[1 + m(t)]\cos\omega_c t + \cos 2\omega_c t

(b) The spectrum illustrating the terms involved in y(t) is shown in Figure 3.5. The center frequency of the filter is f_c and the bandwidth must be greater than or equal to 2W. In addition, f_c - W > 2W, or f_c > 3W, and f_c + W < 2f_c. The last inequality states that f_c > W, which is redundant since we know that f_c > 3W.
(c) From the definition of m(t) we have m(t) = M m_n(t), so that

g(t) = 4[1 + 5M m_n(t)]\cos(\omega_c t)

It follows that a = 0.1 = 5M.


Figure 3.5: Spectrum of the square-law device output, showing the spectra of m(t) and m^2(t) at baseband, the desired component centered on f_c with the bandpass filter characteristic, and the component at 2f_c.

Thus

M = \frac{0.1}{5} = 0.02

(d) This method of forming a DSB signal avoids the need for a multiplier.
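A MATLAB sketch of the square-law modulator of Problem 3.13 (the message tone, carrier frequency, and the fourth-order Butterworth bandpass filter are illustrative choices that are not given in the problem; butter() and filter() are Signal Processing Toolbox functions).

% Square-law AM modulator: y = 4x + 2x^2 followed by a bandpass filter at fc
fs = 10000; t = 0:1/fs:1-1/fs;
fc = 1000; W = 50;                          % assumed carrier frequency and message bandwidth
m  = 0.02*cos(2*pi*W*t);                    % message with M = 0.02 as found in part (c)
x  = m + cos(2*pi*fc*t);                    % summing junction output
y  = 4*x + 2*x.^2;                          % square-law device
[b, a] = butter(4, [fc-2*W, fc+2*W]/(fs/2)); % bandpass filter centered on fc (BW = 4W > 2W)
g  = filter(b, a, y);                       % approximately 4[1 + m(t)]cos(2*pi*fc*t)
plot(t, g), xlabel('t (s)'), ylabel('g(t)')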

Problem 3.14
With m(t) = 2\cos(2\pi f_m t) + \cos(4\pi f_m t), the Hilbert transform of the message signal is

\hat{m}(t) = 2\sin(2\pi f_m t) + \sin(4\pi f_m t)

Thus

x_c(t) = \tfrac{1}{2}A_c[2\cos(2\pi f_m t) + \cos(4\pi f_m t)]\cos(2\pi f_c t) \pm \tfrac{1}{2}A_c[2\sin(2\pi f_m t) + \sin(4\pi f_m t)]\sin(2\pi f_c t)

Performing the multiplication and letting A_c = 4 gives

x_c(t) = \{2\cos[2\pi(f_c + f_m)t] + \cos[2\pi(f_c + 2f_m)t] + 2\cos[2\pi(f_c - f_m)t] + \cos[2\pi(f_c - 2f_m)t]\}
\pm \{-2\cos[2\pi(f_c + f_m)t] - \cos[2\pi(f_c + 2f_m)t] + 2\cos[2\pi(f_c - f_m)t] + \cos[2\pi(f_c - 2f_m)t]\}

Note that with the + sign, the (f_c + f_m) and (f_c + 2f_m) terms cancel, giving LSB SSB. With the - sign, the (f_c - f_m) and (f_c - 2f_m) terms cancel, giving USB SSB.
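A numerical illustration of this phase-shift SSB modulator (a sketch; f_m = 10 Hz, f_c = 100 Hz, and the use of the Signal Processing Toolbox function hilbert() are assumptions made only for the example).

% Phase-shift SSB generation for the message of Problem 3.14
fs = 2000; t = 0:1/fs:1-1/fs;
fm = 10; fc = 100; Ac = 4;
m    = 2*cos(2*pi*fm*t) + cos(4*pi*fm*t);
mhat = imag(hilbert(m));                          % Hilbert transform of m(t)
xlsb = 0.5*Ac*(m.*cos(2*pi*fc*t) + mhat.*sin(2*pi*fc*t));   % plus sign: LSB
xusb = 0.5*Ac*(m.*cos(2*pi*fc*t) - mhat.*sin(2*pi*fc*t));   % minus sign: USB
f = (0:length(t)-1)*fs/length(t);
subplot(2,1,1), stem(f, abs(fft(xlsb))/length(t), '.'), xlim([0 200]), ylabel('LSB')
subplot(2,1,2), stem(f, abs(fft(xusb))/length(t), '.'), xlim([0 200]), ylabel('USB')
xlabel('Frequency (Hz)')
% The LSB lines fall at 80 and 90 Hz and the USB lines at 110 and 120 Hz.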


Problem 3.15
The filter for USB SSB will be defined by

H_U(f) = 1 - \tfrac{1}{2}\mathrm{sgn}(f + f_c)

which gives as the output

x_{USB}(t) = \tfrac{1}{2}A_c m(t)\cos(2\pi f_c t) - \tfrac{1}{2}A_c\hat{m}(t)\sin(2\pi f_c t)

Problem 3.16
We assume that the VSB waveform is given by

x_c(t) = \tfrac{1}{2}A\epsilon\cos[2\pi(f_c - f_1)t] + \tfrac{1}{2}A(1 - \epsilon)\cos[2\pi(f_c + f_1)t] + \tfrac{1}{2}B\cos[2\pi(f_c + f_2)t]

We let y(t) be x_c(t) plus a carrier. Thus

y(t) = x_c(t) + K\cos(2\pi f_c t)

It can be shown that y(t) can be written

y(t) = y_1(t)\cos(2\pi f_c t) + y_2(t)\sin(2\pi f_c t)

where

y_1(t) = \frac{A}{2}\cos(2\pi f_1 t) + \frac{B}{2}\cos(2\pi f_2 t) + K
y_2(t) = \left(A\epsilon - \frac{A}{2}\right)\sin(2\pi f_1 t) - \frac{B}{2}\sin(2\pi f_2 t)

In the preceding expression, the two terms involving A and B represent the message signal. Also

y(t) = R(t)\cos[2\pi f_c t + \theta]

where R(t) is the envelope and is therefore the output of an envelope detector. It follows that

R(t) = \sqrt{y_1^2(t) + y_2^2(t)}


For K sufficiently large so that y_1^2(t) >> y_2^2(t), R(t) \approx |y_1(t)|, which is \tfrac{1}{2}m(t) + K, where K is a dc bias. Thus, if the detector is ac coupled, K is removed and the output is m(t) scaled by \tfrac{1}{2}.

Problem 3.17
The required plot follows.

Problem 3.18
Since high-side tuning is used, the local oscillator frequency is

f_{LO} = f_i + f_{IF}

where f_i, the carrier frequency of the input signal, varies between 5 and 25 MHz. The ratio is

R = \frac{f_{IF} + 25}{f_{IF} + 5}

where f_{IF} is the IF frequency expressed in MHz. We make the following table:

f_IF (MHz)   0.4    0.5    0.7    1.0    1.5    2.0
R            4.70   4.63   4.51   4.33   4.08   3.86

A plot of R as a function of f_{IF} provides the required plot.

Problem 3.19
With an IF frequency of f_{IF} = 455 kHz we have, for high-side tuning,

f_{LO} = f_i + f_{IF} = 1120 + 455 = 1575 kHz
f_{IMAGE} = f_i + 2f_{IF} = 1120 + 910 = 2030 kHz

and for low-side tuning we have

f_{LO} = f_i - f_{IF} = 1120 - 455 = 665 kHz
f_{IMAGE} = f_i - 2f_{IF} = 1120 - 910 = 210 kHz

With an IF frequency of f_{IF} = 2500 kHz we have, for high-side tuning,

f_{LO} = f_i + f_{IF} = 1120 + 2500 = 3620 kHz
f_{IMAGE} = f_i + 2f_{IF} = 1120 + 5000 = 6120 kHz


Figure 3.6: Plot for Problem 3.17, showing (from top to bottom) the desired signal, the local oscillator, the signal at the mixer output, the image signal, and the image signal at the mixer output.


16

CHAPTER 3. BASIC MODULATION TECHNIQUES

and for low-side tuning fLO = fi − fIF = 1120 − 2500 = −1380 kHz

fLO = 1380 kHz

fIMAGE = fi − 2fIF = 1120 − 5000 = −3880 kHz fLO = 3880 kHz

In the preceding development for low-side tuning, recall that the spectra are symmetrical about f = 0. Problem 3.20 By definition xc (t) = Ac cos [ω c t + kp m (t)] = Ac cos [ωc t + kp u(t − t0 )] The waveforms for the four values of kp are shown in Figure 3.7. The top-left pane is for kp = π, the top-right pane is for kp = π/4, the bottom-left pane is for, kp = −π, and the bottom-right pane is for kp = −π/4. The plots illustrate 2 s of data with a frequency 2 Hz. Problem 3.21 Let φ (t) = β cos ω m t where ω m = 2πfm . This gives o n xc2 (t) = Ac Re ejωc t ejβ cos ωm t

Expanding a Fourier series gives

ejβ cos ωm t =

∞ X

Cn ejnωm t

n=−∞

where ωm Cn = 2π

Z π/ωm

ejβ cos ωm t e−jnωm t dt

−π/ωm

With x = ω m t, the Fourier coefficients become Z π 1 Cn = ejβ cos x e−jnx dx 2π −π ¡ ¢ Since cos x = sin x + π2 Z π π 1 ej [β sin(x+ 2 )−nx] dx Cn = 2π −π


17

1

1

0.5

0.5 k p=pi/4

k p=pi

3.1. PROBLEMS

0

-0.5

-0.5

-1

-1 0

0.5

1

1.5

2

1

1

0.5

0.5 k p=-pi/4

k p=-pi

0

0

-0.5 -1

0

0.5

1

1.5

2

0

0.5

1

1.5

2

0

-0.5

0

0.5

1

1.5

2

-1

Figure 3.7: Required plots for Problem 3.20.


18

CHAPTER 3. BASIC MODULATION TECHNIQUES

With x = x + π2 , the preceding becomes 1 Cn = 2π

Z 3π/2

This gives nπ

Cn = ej 2

π

ej [β sin y−ny+n 2 ] dy

−π/2

½

1 2π

Z π

ej[β sin y−ny] dy

−π

¾

where the limits have been adjusted by recognizing that the integrand is periodic with period 2π. Thus nπ Cn = ej 2 Jn (β) and

(

jω c t

xc2 (t) = Ac Re e

j (nωm t+ nπ 2 )

Jn (β) e

n=−∞

Taking the real part yields xc2 (t) = Ac

∞ X

)

h nπ i Jn (β) cos (ω c + nω m ) t + 2 n=−∞ ∞ X

The amplitude spectrum is therefore the same as in the preceding problem. The phase spectrum is the same as in the preceding problem except that nπ 2 is added to each term. Problem 3.22 Since sin(x) = cos(x − π2 ) we can write

which is Since

³ ´ π xc3 (t) = Ac sin(ωc t + β sin ω m t) = Ac cos ω c t − + β sin ωm t 2 n o xc3 (t) = Ac Re ej(ωc t−π/2) ej sin ωm t j sin ω m t

e

=

∞ X

Jn (β)ejnωm t

n=−∞

we have

(

xc3 (t) = Ac Re ej(ωc t−π/2) Thus xc3 (t) = Ac

∞ X

n=−∞

∞ X

n=−∞

Jn (β)ejnωm t

)

o n Jn (β) Re ej(ωc t+nωm t−π/2)


3.1. PROBLEMS

19

Thus xc3 (t) = Ac

h πi Jn (β) cos (ωc + nω m ) t − 2 n=−∞ ∞ X

Note that the amplitude spectrum of xc3 (t) is identical to the amplitude spectrum for both xc1 (t) and xc2 (t). The phase spectrum of xc3 (t) is formed from the spectrum of xc1 (t) by adding −π/2 to each term. For xc4 (t) we write ³ ´ π xc4 (t) = Ac sin(ωc t + β cos ω m t) = Ac cos ω c t − + β cos ω m t 2 Using the result of the preceding problem we write ( ) ∞ X nπ xc4 (t) = Ac Re ej(ωc t−π/2) Jn (β) ej (nωm t+ 2 ) n=−∞

This gives xc4 (t) = Ac

∞ X

n=−∞

Thus xc4 (t) = Ac

o n π nπ Jn (β) Re ej(ωc t+nωm t− 2 + 2 )

i h π Jn (β) cos (ωc + nω m ) t + (n − 1) 2 n=−∞ ∞ X

Compared to xc1 (t), xc2 (t), x3 (t) and xc4 (t), we see that the only difference is in the phase spectrum. Problem 3.23 From the problem statement xc (t) = Ac cos [2π (50) t + 10 sin (2π (5) t)] Since fc = 50 and fm = 5 (it is important to note that fc is an integer multiple of fm ) there is considerable overlap of the portion of the spectrum centered about f = −50 with the portion of the spectrum centered about f = 50 for large β (such as 10). This can also be seen in the time domain. We can see from the time domain plot (top pane) of Figure 3.8 that the waveform has a non-zero dc value. This is due to the fact that the spectra centered about f = +50 and f = −50 overlap and the term at f = 0 (dc) is non-zero. Where terms overlap, they must be summed. The time average value appears from the spectrum to be approximately 0.63Ac .


20

CHAPTER 3. BASIC MODULATION TECHNIQUES

1

x c(t)

0.5 0 -0.5 -1 0

0.02

0.04

0.06

0.08

0.1 Time

0.12

0.14

0.16

0.18

0.2

0.8

M agnitude

0.6 0.4 0.2 0 -150

-100

-50

0 Frequency

50

100

150

Figure 3.8: Time domain and frequency domain plots for Problem 3.23. Problem 3.24 We are given J0 (3) = −0.2601 and J1 (3) = 0.3391. Using Jn+1 (β) =

2n Jn (β) − Jn−1 (β) β

with β = 3 we have 2 Jn+1 (3) = nJn (3) − Jn−1 (3) 3 With n = 1, J2 (3) = =

2 J1 (3) − J0 (3) 3 2 (0.3391) + 0.2601 = 0.4862 3

With n = 2, J3 (3) = =

4 J2 (3) − J1 (3) 3 4 (0.04862) − 0.3391 = 0.3092 3


3.1. PROBLEMS

21

With n = 3 we have J4 (3) = 2J3 (3) − J2 (3)

= 2 (0.3091) − 0.4862 = 0.1322

Letting n = 4 we can determine J5 (3) as required J5 (3) = =

8 J4 (3) − J3 (3) 3 8 (0.1322) − 0.3092 = 0.0433 3

Problem 3.25 The amplitude and phase spectra follow directly from the table of Fourier-Bessel coefficients, or one can write a simple MATLAB program as was done here.. The single-sided magnitude and phase spectra are shown in Figure 3.9. The magnitude spectrum is plotted assuming Ac = 1. Problem 3.26 The modulated signal can be written i h xc (t) = Re{ 6 + 6ej2π(30)t + 6e−j2π(30)t ej2π(100)t }

We will concentrate on the term in brackets, which is the complex envelope described in Chapter 2. Denoting the complex envelope by x ec (t), we can write x ec (t) = [6 + 6 cos 2π (30t) + 6 cos 2π (30) t] +j [6 sin 2π (30t) − 6 sin 2π (30) t]

= 6 + 12 cos 2π (30t)

It follows from the definition of xc (t) that

Obviously

x ec (t) = R (t) ejφ(t)

R (t) = 6 + 12 cos 2π (30t) and φ (t) = 0


22

CHAPTER 3. BASIC MODULATION TECHNIQUES

0.4

M agnitude

0.3 0.2 0.1 0 600

700

800

900

1000 1100 Frequency

1200

1300

1400

700

800

900

1000 1100 Frequency

1200

1300

1400

0

Phas e

-1 -2 -3 -4 600

Figure 3.9: Amplitude and phase spectra for Problem 3.25.


3.1. PROBLEMS

23

Problem 3.27 (a) Since the carrier frequency is 1000 Hertz, the general form of xc (t) is xc (t) = Ac cos [2π (1000) t + φ (t)] The phase deviation, φ (t), is therefore given by φ (t) = 40t2

rad

The frequency deviation is dφ = 80t dt or

rad/sec

1 dφ 40 = t 2π dt π

Hz

(b) The phase deviation is φ (t) = 2π (500) t2 − 2π (1000) t

rad

(Note that we substracted the phase of the unmodulated carrier from the instantaneous carrier.) The frequency deviation is dφ = 4π (500) t − 2π (1000) = 2000π(t − 1) dt or

1 dφ = 1000 (t − 1) 2π dt

rad/sec

Hz

(c) The phase deviation is φ (t) = 2π (1200) t − 2π(1000)t = 200πt

rad

and the frequency deviation is dφ = 200π dt

rad/sec

or

1 dφ = 100 Hz 2π dt which should be obvious from the expression for xc (t). (d) The phase deviation is √ √ φ (t) = 2π(900) + 10 t − 2π(1000) = −2π(100)t + 10 t

rad


24

CHAPTER 3. BASIC MODULATION TECHNIQUES

and the frequency deviation is 1 dφ 1 5 = −2π(100) + (10) t− 2 = −2π(100) + √ dt 2 t

or

1 dφ 5 = −100 + √ 2π dt 2π t

Hz

Problem 3.28 (a) The phase deviation is φ(t) = 2π(20)

Z t

which is, for 0 ≤ t ≤ 8, φ(t) = 40π

¸ 1 4Π (x − 4) dx 8

Z t

4dx = 160πt

0

Thus φ(t) = 0,

t<0

φ(t) = 160πt, φ(t) = 160π(8),

0≤t≤8 t>8

(b) The frequency deviation is, by definition ∙ ¸ 1 1 dφ = 80Π (t − 4) = 80 2π dt 8 for 0 ≤ t ≤ 8 and is zero for t < 0 and t > 8. (c) The peak frequency deviation is 80 Hz. (d) The peak phase deviation is 160π(8) = 1280π radians. (e) The transmitted power is P =

Problem 3.29

(100)2 = 5000 2

W

Hz

rad/sec


3.1. PROBLEMS

25

(a) The phase deviation is φ(t) = 0, φ(t) = = φ(t) = = = = φ(t) =

t<3 ¶ µ Z t 160π t2 9 4 (x − 3)dx = − − 3t + 9 40π 3 2 2 3 3 ¢ 80π ¡ 2 3≤t≤6 t − 6t + 9 , 3 ¶ ¶ Z tµ Z tµ 4 4 240π + 40π 4 − (x − 6) dx = 240π + 40π 12 − x dx 3 3 6 6 µ ¶ 41 2 240π + 40π 12(t − 6) − (t − 36) 32 µ ¶ 2 2 240π + 40π − t + 12t − 48 3 ¢ 80π ¡ 2 −t + 18t − 72 , 6≤t≤9 240π + 3 480π t>9

(b) The frequency deviation in Hz is ¸ 1 fd m(t) = 80Λ (t − 6) 3 ∙

(c) The peak frequency deviation is 4 (20) = 80 Hz. (d) The peak phase deviation is, since the area of Λ[(t−6)/3] = 3 is 2πfd (3) (4) = 24π (20) = 480π (e) The modulator output power is 1 1 P = A2c = (100)2 = 5000 Watts 2 2

Problem 3.30 For all three message signals illustrated in Figure 3.73, the frequency deviation in Hz is simply the given message signal with the ordinate values multiplied by 10. For the first message signal (the top signal given in Figure 3.73) the phase deviation in radians is given Z Z t

φ (t) = 2πfd

t

m (α) dα = 20π

For 0 ≤ t ≤ 1, we have

φ (t) = 20π

Z t 0

m (α) dα

2αdα = 20πt2


26

CHAPTER 3. BASIC MODULATION TECHNIQUES

For 1 ≤ t ≤ 2 φ (t) = φ(1) + 20π

Z t 1

¡ ¢ (5 − α) dα = 20π + 100π (t − 1) − 10π t2 − 1

= 20π + 100πt − 100π − 10πt2 + 10π = −70π + 100πt − 10πt2 For 2 ≤ t ≤ 3 φ (t) = φ(2) + 20π

Z t

3dα = 90π + 60π (t − 2) = 60πt − 30π

Z t

2dα = 150π + 40π(t − 3) = 30π + 40πt

2

For 3 ≤ t ≤ 4 φ (t) = φ(3) + 20π

3

Finally, for t > 4 we recognize that φ (t) = φ (4) = 190π. The required figure results by plotting these curves. The phase deviation for the second message signal (bottom right) is given again by φ (t) = 2πfd

Z t

m (α) dα = 20π

For 0 ≤ t ≤ 1, we have φ (t) = 20π

Z t

Z t

m (α) dα

αdα = 10πt2

0

For 1 ≤ t ≤ 2

Z t ¡ ¢ φ (t) = φ(1) + 20π (α − 2) dα = 10π + 10π t2 − 1 − 40π(t − 1) 1 ¢ ¡ = 10π t2 − 4t + 4 = 10π (t − 2)2

For 2 ≤ t ≤ 4

φ (t) = φ(2) + 20π

Z t 2

¡ ¢ (6 − 2α)dα = 0 + 20π (6) (t − 2) − 20π t2 − 4

= −20π(t2 − 6t + 8)

Finally, for t > 4 we recognize that φ (t) = φ(4) = 0 (Note that the area under the curve from 0 to 4 is obviously zero). The required figure follows by plotting these expressions.


3.1. PROBLEMS

27

Finally, the phase deviation in radians for the third message signal is again given by Z t Z t φ (t) = 2πfd m (α) dα = 20π m (α) dα

For 0 ≤ t ≤ 1, we have

φ (t) = 20π

Z t 0

For 1 ≤ t ≤ 2 φ (t) = φ(1) + 20π

Z t 1

For 2 ≤ t ≤ 2.5 φ (t) = φ(2) + 20π

Z t 2

(−2α)dα = −20πt2

2dα = −20π + 40π (t − 1) = −60π + 40πt

¡ ¢ (10 − 4α)dα = 20π + 20π (10) (t − 2) − 20π(2) t2 − 4

= 20π(−2t2 + 10t + 7) For 2.5 ≤ t ≤ 3

φ (t) = φ(2.5) − 20π

Z t

2.5

2dα = 15π − 40π(t − 2.5)

= (−250π + 500π + 140π) − 40π(t + 2.5) = 390π − 40πt − 100π = 290π − 40πt For 3 ≤ t ≤ 4 φ (t) = φ(3) + 20π

Z t 3

(2α − 8) dα = 170π + 20π(t2 − 9) − 20π(8)(t − 3)

= 20π(t2 − 8t + 23.5) Finally, for t > 4 we recognize that φ (t) = φ(4) = 20π(16 − 32 + 23.5) = 150π. The required figure follows by plotting these expressions. Problem 3.31 (a) The peak deviation is (14)(5) = 70 and fm = 10. Thus, the modulation index is 70 10 = 7. (b) The magnitude spectrum, scaled to Ac , is the Fourier-Bessel spectrum of Figure 3.24 with β = 7. (c) Since β is not ¿ 1, this is not narrowband FM. The bandwidth exceeds 2fm . (d) For phase modulation, kp (5) = 7 or kp = 1.4. Problem 3.32 The results are given in the following table. The deviation ratio is found by definition and the bandwidth follows from Carson’s rule.


28

CHAPTER 3. BASIC MODULATION TECHNIQUES Part a b c d

fd 20 200 2000 20000

D = 6fd /W = fd /2000 0.01 0.1 1 10

Problem 3.33 From xc (t) = Ac

∞ X

B = 2 (D + 1) W 24.24 kHz 26.4 kHz 48 kHz 264 kHz

Jn (β) cos[(ω c + ωm ) t]

n=−∞

we obtain

∞ ­ 2 ® 1 2 X J 2 (β) xc (t) = Ac 2 n=−∞ n

Also

® ­ 2 ® ­ 2 xc (t) = Ac cos2 [ω c t + φ (t)]

which, assuming that ω c À 1 so that xc (t) has no dc component, is ­ 2 ® 1 2 xc (t) = Ac 2

This gives

∞ 1 2 1 2 X 2 Ac = Ac J (β) 2 2 n=−∞ n

from which

∞ X

Jn2 (β) = 1

n=−∞

Problem 3.34 Since

1 dx = 2π

Z π

ej(β sin x−nx) dx

1 cos (β sin x − nx) dx + j 2π −π

Z π

sin (β sin x − nx) dx

1 Jn (β) = 2π

Z π

−π

−j(nx−β sin x)

e

−π

we can write 1 Jn (β) = 2π

Z π

−π


3.1. PROBLEMS

29

The imaginary part of Jn (β) is zero, since the integrand is an odd function of x and the limits (−π, π) are even. Thus Z π 1 cos (β sin x − nx) dx Jn (β) = 2π −π Since the integrand is even Jn (β) =

Z π

1 π

0

cos (β sin x − nx) dx

which is the first required result. With the change of variables λ = π − x, we have Z 1 π Jn (β) = cos [β sin (π − λ) − n (π − λ)] (−1) dλ π 0 Z π 1 = cos [β sin (π − λ) − nπ + nλ] dλ π 0 Since sin (π − λ) = sin λ, we can write Z 1 π cos [β sin λ + nλ − nπ] dλ Jn (β) = π 0 Using the identity cos (u − ν) = cos u cos ν + sin u sin ν with u = β sin λ + nλ and ν = nπ yields Jn (β) =

1 π

Z π

cos [β sin λ + nλ] cos (nπ) dλ Z π 1 + sin [β sin λ + nλ] sin (nπ) dλ π 0 0

Since sin (nπ) = 0 for all n, the second integral in the preceding expression is zero. Also cos (nπ) = (−1)n Thus

n 1

Jn (β) = (−1)

π

Z π 0

cos [β sin λ + nλ] dλ


30

CHAPTER 3. BASIC MODULATION TECHNIQUES

However

1 J−n (β) = π

Thus

Z π

cos [β sin λ + nλ] dλ

0

Jn (β) = (−1)n J−n (β) or equivalently J−n (β) = (−1)n Jn (β)

Problem 3.35 (a) Peak frequency deviation = 10(8) = 80 Hz. (b) Since Z φ(t) = 2π(8)

10 cos(20πt)dt =

160π sin(20πt) = 8 sin(20πt) 20π

the peak phase deviation is 8 rad. (c) β = 8 (d) The power at the filter input is

Pin =

(10)2 A2c = = 50 W 2 2

The power at the filter output is determined by the number of spectral components passed by the filter. The bandwith of the filter is 70 Hz, which is 35 Hz each sidfe of the carrier. Since fm = 10 the filter passes 3 terms each side of the carrier. Thus the power at the filter output is # " 3 3 X A2c X 2 A2c 2 2 J (8) = Jn (8) J0 (8) + 2 Pout = 2 n=−3 n 2 n=1 £ ¤ 2 = 50 (0.172) + 2(0.235) + 2(0.113)2 + 2(0.291)2 = 16.74 W

(e) The input and output spectra are determined from the corresponding Fourier-Bessel spectra. Problem 3.36 We wish to find k such that Pr = J02 (10) + 2

k X

n=1

J02 (10) ≥ 0.80


3.1. PROBLEMS

31

This gives k = 9, yielding a power ratio of Pr = 0.8747. The bandwidth is therefore B = 2kfm = 2 (9) (150) = 2700 Hz For Pr ≥ 0.9, we have k = 10 for a power ratio of 0.9603. This gives B = 2kfm = 2 (10) (150) = 3000 Hz
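A sketch that reproduces this search using MATLAB's besselj (the loop structure is an illustrative choice; only \beta = 10 and f_m = 150 Hz come from the problem).

% Smallest k such that J0^2(beta) + 2*sum_{n=1}^{k} Jn^2(beta) meets the power target
beta = 10; fm = 150;
for target = [0.80 0.90]
    k = 0; Pr = besselj(0, beta)^2;
    while Pr < target
        k  = k + 1;
        Pr = Pr + 2*besselj(k, beta)^2;       % add the k-th pair of sidebands
    end
    fprintf('target %.2f: k = %d, Pr = %.4f, B = 2*k*fm = %g Hz\n', ...
        target, k, Pr, 2*k*fm)
end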

Problem 3.37 From the given data, we have fc1 = 110

kHz

fd1 = 0.05

fd2 = n (0.05) = 20

This gives n=

20 = 400 0.05

and fc1 = n (100)

kHz = 44 MHz

The two permissible local oscillator frequencies are f 0.1 = 100 − 44 = 56

f 0.2 = 100 + 44 = 144

MHz MHz

The center frequency of the bandpass filter must be fc = 100 MHz and the bandwidth is ¡ ¢ B = 2 (D + 1) W = 2 (20 + 1) (10) 103 or

B = 420 kHz

Problem 3.38 For the circuit shown H (f ) =

R E (f ) = 1 X (f ) R + j2πf L + j2πf C

or H (f ) =

1 ´ ³ 1 + j 2πfτ L − 2πf1τ C


32

CHAPTER 3. BASIC MODULATION TECHNIQUES

where 10−3 L = = 10−6 , 3 R 10 ¡ ¢¡ ¢ = RC = 103 10−9 = 10−6

τL = τC

A plot of the amplitude response shows that the linear region extends from approximately 54 kHz to118 kHz. Thus an appropriate carrier frequency is fc =

118 + 54 = 86 2

kHz

The slope of the operating characteristic at the operating point is measured from the amplitude response. The result is ¡ ¢ KD ∼ = 8 10−6 Problem 3.39 We can solve this problem by determining the peak of the amplitude response characteristic. This peak falls at 1 fp = √ 2π LC ¡ ¢ It is clear that fp > 100 MHz. Let fp = 150 MHz and let C = 0.001 10−12 . This gives L=

1

2

(2π)

fp2 C

¡ ¢ = 1.126 10−3

We find the value of R by trial and error using plots of the amplitude response. An appropriate value for R is found to be 1 M Ω. With these values, the discriminator constant is approximately ¡ ¢ KD ≈ 8.5 10−9 Problem 3.40 For Ai = Ac we can write, from (3.184), xr (t) = Ac cos(2πfc t) + Ai cos[2π (fc + fi ) t] = Ac {cos(2πfc t) + cos[2π (fc + fi ) t]} which is xr (t) = Ac {[1 + cos(2πfi t)] cos(2πfc t) − sin(2πfi t) sin(2πfc t)} This yields xr (t) = R (t) cos [2πfc t + φ (t)]


3.1. PROBLEMS

33

where ¸ sin(2πfi t) φ (t) = tan 1 + cos(2πfi t) ¸ ∙ 2πfi t 2πfi t = tan−1 tan = 2 2 −1

This gives 1 d yD (t) = 2π dt

µ

2πfi t 2

1 = fi 2

For Ai = −Ac , we have a similar situation to the preceding and 1 yD (t) = − fi 2 Finally, for Ai À Ac we see from the phasor diagram that ψ (t) ≈ θ (t) = ωi t and yD (t) =

Problem 3.41 From (3.229)

KD d (2πfi t) = fi 2π dt

¡ ¢ s θ0 s2 + 2πf∆ s + 2πR ψ ss = lim 3 s→0 s + Kt s2 + Kt as + Kt b

1. For first-order PLL (a = 0, b = 0) this is ¡ ¢ s θ0 s2 + 2πf∆ s + 2πR θ0 s2 + 2πf∆ s + 2πR = lim ψ ss = lim s→0 s→0 s3 + Kt s2 s2 + Kt s Thus θ0 s2 + 2πf∆ s + 2πR θ0 + (2πf∆ /s) + (2πR/s2 ) = lim s→0 s→0 s2 + Kt s 1 + (Kt /s) θ0 =0 6= 0, f∆ = 0, and R = 0: ψ ss = lim s→0 1 + (Kt /s) 2πf∆ θ0 + (2πf∆ /s) = 6 = 0, f∆ 6= 0, and R = 0: ψ ss = lim s→0 1 + (Kt /s) Kt θ0 + (2πf∆ /s) + (2πR/s2 ) 2πR = lim =∞ 6 = 0, f∆ 6= 0, and R 6= 0: ψ ss = lim s→0 s→0 Kt s 1 + (Kt /s)

ψ ss = lim For θ0 For θ0 For θ0


34

CHAPTER 3. BASIC MODULATION TECHNIQUES

2. For a second-order PLL (a 6= 0, b = 0) this is ¡ ¢ s θ0 s2 + 2πf∆ s + 2πR θ0 s2 + 2πf∆ s + 2πR = lim ψ ss = lim s→0 s→0 s3 + Kt s2 + Kt as s2 + Kt s + Kt a Thus

2πR Kt a which is zero for R = 0 and nonzero for R 6= 0 for any finite values of θ0 and f∆ . 3. For a third-order PLL (a 6= 0, b 6= 0) this is ¡ ¢ s θ0 s2 + 2πf∆ s + 2πR 0 = =0 ψ ss = lim 3 2 s→0 s + Kt s + Kt as + Kt b Kt b ψ ss =

for any finite values θ0 , f∆ , and R. Problem 3.42 For m (t) = A cos ω m t and φ (t) = Akf

Z t

cos ωm αdα =

and Φ (s) =

Akf sin ω m t ωm

Akf 2 s + ω 2m

The VCO output phase is Θ (s) = Φ (s)

Akf KT KT = s + KT (s + KT ) (s2 + ω 2m )

Using partial fraction expansion,Θ (s) can be expressed as ∙ ¸ Akf KT 1 s KT Θ (s) = − 2 + KT 2 + ω2m s + KT s + ω 2m s2 + ω 2m This gives, for t ≥ 0

¸ ∙ Akf KT KT −KT t − cos ωm t + sin ωm t e θ (t) = KT 2 + ω2m ωm

The first term is the transient response. For large KT , only the third term is significant. Thus, Ak K 2 1 dθ ¡ 2f T ¢ cos ωm t = eν (t) = Kν dt Kν KT + ω2m


3.1. PROBLEMS

35

Also, since KT is large, KT2 + ω2m ≈ KT2 . This gives eν (t) ≈

Akf cos ω m t Kν

and we see that eν (t) is proportional to m (t). If kf = Kν , we have eν (t) ≈ m (t)

Problem 3.43 The Costas PLL is shown in Figure 3.53. The output of the top multiplier is m (t) cos ω c t [2 cos (ω c t + θ)] = m (t) cos θ + m (t) cos (2ωc t + θ) which, after lowpass filtering, is m (t) cos θ. The quadrature multiplier output is m (t) cos ω c t [2 sin (ω c t + θ)] = m (t) sin θ + m (t) sin (2ω c t + θ) which, after lowpass filtering, is m (t) sin θ. The multiplication of the lowpass filter outputs is m (t) cos θm (t) sin θ = m2 (t) sin 2θ as indicated. Note that with the assumed input m (t) cos ωc t and VCO output 2 cos (ωc t + θ), the phase error is θ. Thus the VCO is defined by dψ dθ = = −Kν eν (t) dt dt This is shown in Figure 3.10. Since the dψ dt intersection is on a portion of the curve with negative slope, the point A at the origin is a stable operating point. Thus the loop locks with zero phase error and zero frequency error. Problem 3.44 ¡ ¢ With x (t) = A2πf0 t, we desire e0 (t) = A cos 2π 73 f0 t. Assume that the VCO output is a pulse train with frequency 13 fo . The pulse should be narrow so that the seventh harmonic is relatively large. The spectrum of the VCO output consists of components separated by 1 3 f0 with an envelope of sinc(τ f ), where τ is the pulse width. The center frequency of the bandpass filter is 73 f0 and the bandwidth is on the order of 13 f0 as shown in Figure.3.11. Problem 3.45


36

CHAPTER 3. BASIC MODULATION TECHNIQUES

dψ / dt

ψ A

Figure 3.10: Phase-plane plot for a Costas PLL.

This component (the fundamental) tracks the input signal

Bandpass filter passband

0

1 f0 3

f

7 f0 3

Figure 3.11: Filtering scheme for Problem 3.44.


3.1. PROBLEMS

37

The phase plane is defined by ψ = ∆ω − Kt sin ψ (t) at ψ = 0, ψ = ψ ss , the steady-state phase error. Thus µ ¶ µ ¶ ∆ω −1 ∆ω −1 ψ ss = sin = sin Kt 2π (100) For ∆ω = 2π (30)

30 100

µ

50 100

µ

80 100

µ

−1

ψss = sin

µ

For ∆ω = 2π (50) −1

ψ ss = sin For ∆ω = 2π (80)

−1

ψ ss = sin For ∆ω = −2π (80)

−1

ψ ss = sin

−80 100

= 17.46 degrees

= 30 degrees

= 53.13 defrees

= −53.13 degrees

For ∆ω = 2π (120), there is no stable operating point and the frequency error and the phase error oscillate (PLL slips cycles continually). Problem 3.46 From the definition f (t) = Kt e−Kt t u(t) it is clear that in the limit as Kt → ∞, f (0) = ∞, and f (t) = 0 for t 6= 0. Also Z ∞ Z ∞ ¯∞ f (t)dt = Kt e−Kt t dt = −e−Kt t −e−Kt t ¯0 = 1 −∞

0

Therefore the given f (t) satisfies all properties of an impulse function in the limit as Kt → ∞. Problem 3.47 From the definition of the transfer function Kt

³

s+a s+λa

´

Kt F (s) Θ (s) ³ ´ = = Φ (s) s + Kt F (s) s + K s+a t

s+λa


38

CHAPTER 3. BASIC MODULATION TECHNIQUES

which is

Kt (s + a) Kt (s + a) Θ (s) = = 2 Φ (s) s (s + λa) + Kt (s + a) s + (Kt + λa) s + Kt a

Therefore s2 + 2ζω n s + ω 2n = s2 + (Kt + λa) s + Kt a This gives ωn = and ζ=

Problem 3.48 Since

p Kt a

Kt + λa √ 2 Kt a

Kt (s + a) Θ (s) = Φ (s) s (s + λa) + Kt (s + a)

we first determine G(s) = so that

or

Kt (s + a) s (s + λa) Ψ(s) =1− = Φ (s) s (s + λa) + Kt (s + a) s (s + λa) + Kt (s + a)

¸ ¸∙ θ0 2πf∆ 2πR s (s + λa) + 2 + 3 Ψ(s) = s s s s (s + λa) + Kt (s + a) ∙ 2 ¸ ¸∙ θ0 s + 2πf∆ s + 2πR (s + λa) Ψ(s) = s2 s (s + λa) + Kt (s + a) ∙

Multiplying by s and taking the limit gives ¸∙ ¸ ∙ (s + λa) 2πR ψ ss = lim sθ0 + 2πf∆ + s→0 s s (s + λa) + Kt (s + a) or

Thus

¸∙ ¸ ∙ λ 2πR ψ ss = lim 2πf∆ + s→0 s Kt For θ0 6= 0, f∆ = 0, and R = 0:

ψ ss = 0

For θ0 6= 0, f∆ 6= 0, and R = 0:

ψ ss = lim

For θ0 6= 0, f∆ 6= 0, and R 6= 0:

2πλf∆ s→0 Kt 2πR λ ψ ss = lim =∞ s→0 s Kt


3.1. PROBLEMS

39

Problem 3.49 Since the phase error is assumed small, so that cos [ψ(t)] ≈ 1, the input from the top branch (see Figure 3.53 in the text.) to the multiplier preceding the loop filter is simply the phase error, m(t) cos[ψ(t)] ≈ m(t). The other input to the multiplier, for sufficiently small phase error, is m(t) sin[ψ(t)] ≈ m(t)ψ(t) Thus, the input to the loop filter preceding the VCO is m2 (t)ψ(t). Since m(t) is a step function m2 (t) = m(t) = u(t − t0 ) so that the input to the loop filter is u(t − t0 )ψ(t), which is exactly the loop filter input to a PLL. We therefore see that this problem is identical to Example 3.13 in the text and the phase error as a function of time is that given by (3.252) with t replaced by t − t0 . The parameter a in the loop filter transfer function is a = πfn /ζ as defined by (3.248). This problem illustrates the equivalence of a Costas PLL to a PLL in cases where the phase error is small. Problem 3.50 With the assumed VCO output represented by e−jθ(t) we have the output of the complex phase detector defined as Ac m(t)ej[φ(t)−θ(t)] = Ac m(t) cos[φ(t) − θ(t)] + jAc m(t) sin[φ(t) − θ(t)] This is Figure 3.53 with the top branch representing the real component Ac m(t) cos[ψ(t)] and the bottom branch representing the quadrature, or imaginary, component Ac m(t) sin[ψ(t)]. Costas PLLs are often simulated using a complex multiplication in this manner. Problem 3.51 Let A be the peak-to-peak value of the data signal. The peak error is 0.25% and the peak-to-peak error is 0.005 A. The required number of quantizating levels is A = 200 ≤ 2n = q 0.005A so we choose q = 256 and n = 8. The bandwidth is B = 2W k log2 q = 2W k(8) The value of k is estimated by assuming that the speech is sampled at the Nyquist rate. Then the sampling frequency is fs = 2W = 8 kHz. Each sample is encoded into n = 8 pulses. Let each pulse be τ with corresponding bandwidth τ1 . For our case τ=

1 1 = nfs 2W n


40

CHAPTER 3. BASIC MODULATION TECHNIQUES

Thus the bandwidth is 1 = 2W n = 2W log2 q = 2nW = 2knW τ and so k = 1. For k = 1 B = 2 (8, 000) (8) = 112 kHz

Problem 3.52 The message signal is m(t) = 3 sin 2π(10)t + 4 sin 2π(20)t The derivative of the message signal is dm (t) = 60π cos 2π (10) t + 160π cos 2π (20t) dt The maximum value of dm (t) /dt is obviously 220π and the maximum occurs at t = 0. Thus δ0 ≥ 220π Ts or 220π 220π = 4400 = fs ≥ δ0 0.05π Thus, the minimum sampling frequency is 4400 Hz. Problem 3.53 One possible commutator configuration is illustrated in Figure 3.12. The signal at the point labeled “output” is the baseband signal. The minimum commutator speed is 2W revolutions per second. For simplicity the commutator is laid out in a straight line. Thus, the illustration should be viewed as it would appear wrapped around a cylinder. After taking the sample at point 12 the commutator moves to point 1. On each revolution, the commutator collects 4 samples of x4 and x5 , 2 samples of x3 , and one sample of x1 and x2 . The minimum transmission bandwidth is X Wi = W + W + 2W + 4W + 4W = 12W B= i

Problem 3.54 The single-sided spectrum for x (t) is shown in Figure 3.13.


3.1. PROBLEMS

41

x1 (BW = W )

x2 (BW = W ) x3 (BW = 2W )

x4 (BW = 4W )

x5 (BW = 4W )

1

2

3

4

5

6

7

8

9

10

11

Output

Figure 3.12: Commutator configuration for Problem 3.53.

W

W

f

f1

f2

Figure 3.13: Single-sided spectrum for Problem 3.54.

12


42

CHAPTER 3. BASIC MODULATION TECHNIQUES

Y(f) a1

2a2W

a2W f

f 2 − f1

f1

f2

2 f2

f1 + f 2

2 f2

Figure 3.14: Output spectrum for Problem 3.54. From the definition of y (t) we have Y (s) = a1 X (f ) + a2 X (f ) ∗ X (f ) The spectrum for Y (f ) is given in Figure 3.14. Demodulation can be a problem since it may be difficult to filter the desired signals from the harmonic and intermodulation distortion caused by the nonlinearity. As more signals are included in x (t), the problem becomes more difficult. The difficulty with harmonically related carriers is that portions of the spectrum of Y (f ) are sure to overlap. For example, assume that f2 = 2f1 . For this case, the harmonic distortion arising from the spectrum centered about f1 falls exactly on top of the spectrum centered about f2 .

3.2

Computer Exercises

Computer Exercise 3.1 The MATLAB program is % File: ce3_1.m t = 0:0.001:1; fm = 1; fc =10; m = 4*cos(2*pi*fm*t-pi/9) + 2*sin(4*pi*fm*t); [minmessage,index] = min(m);


3.2. COMPUTER EXERCISES

43

mncoefs = [4 2]/abs(minmessage) mn = m/abs(minmessage); plot(fm*t,mn,’k’), grid, xlabel(’Normalized Time’), ylabel(’Amplitude’) mintime = 0.001*(index-1); dispa = [’The minimum of m(t) is ’, num2str(minmessage,’%15.5f’), ... ’and falls at ’,num2str(mintime,’%15.5f’), ’ s.’]; disp(dispa) % % Now we compute the efficiency. % mnsqave = mean(mn.*mn); % mean-square value a = 0.5; % modulation index asq = a^2; efficiency = asq*mnsqave/(1+asq*mnsqave); dispb = [’The efficiency is ’, num2str(efficency,’%15.5f’),’ .’]; disp(dispb) % % Now compute the carrier amplitude. % cpower = 50; % carrier power camplitude = sqrt(2*cpower); % carrier amplitude % % Compute the magnitude and phase spectra. % xct = camplitude*(1+a*mn).*cos(2*pi*fc*t); Npts = 1000; fftxct = fft(xct,Npts)/Npts; s1 = [conj(fliplr(fftxct(2:25))) fftxct(1:25)]; mags1 = abs(s1); angles1 = angle(s1); for k=1:length(s1) if mags1(k) <= 0.01 angles1(k)=0; end end % % Plot the magnitude and phase spectra. % figure subplot(2,1,1), stem(-24:24,mags1,’.’) xlabel(’frequency’); ylabel(’magnitude’)


44

CHAPTER 3. BASIC MODULATION TECHNIQUES

1.5

1

Amplitude

0.5

0

-0.5

-1

0

0.1

0.2

0.3

0.4 0.5 0.6 Normalized Time

0.7

0.8

0.9

Figure 3.15: Message signal. subplot(2,1,2), stem(-24:24,angles1,’.’) xlabel(’frequency’); ylabel(’phase’) % End of script file. Executing the program gives: » ce3_1 mncoefs = 0.9165 0.4583 The minimum of m(t) is -4.36424 and falls at 0.43500 s. The efficiency is 0.11607 . The program also generates Figures 3.15 and ??. Computer Exercise 3.2 The MATLAB code written for Computer Example 3.2 follows. % File: ce3_2.m t = 0:0.001:1;

1


3.2. COMPUTER EXERCISES

45

magnitude

6

4

2

0 -25

-20

-15

-10

-5

0 frequency

5

10

15

20

25

-20

-15

-10

-5

0 frequency

5

10

15

20

25

2

phase

1 0 -1 -2 -25

Figure 3.16: Spectra of AM signal.


46

CHAPTER 3. BASIC MODULATION TECHNIQUES fm = 1; fc =10; m = 2*cos(2*pi*fm*t) + cos(4*pi*fm*t); mhil = 2*sin(2*pi*fm*t) + sin(4*pi*fm*t); xctusb = 0.5*m.*cos(2*pi*fc*t)-0.5*mhil.*sin(2*pi*fc*t); xctlsb = 0.5*m.*cos(2*pi*fc*t)+0.5*mhil.*sin(2*pi*fc*t); % % Plot time-domain signals. % subplot(2,1,1), plot(t,xctusb) xlabel(’time’); ylabel(’USB signal’) subplot(2,1,2), plot(t,xctlsb) xlabel(’time’); ylabel(’LSB signal’) % % Determine and plot spectra. % Npts = 1000; fftxctu = fft(xctusb,Npts)/Npts; fftxctl = fft(xctlsb,Npts)/Npts; sxctusb = [conj(fliplr(fftxctu(2:25))) fftxctu(1:25)]; magsusb = abs(sxctusb); sxctlsb = [conj(fliplr(fftxctl(2:25))) fftxctl(1:25)]; magslsb = abs(sxctlsb); figure subplot(2,1,1), stem(-24:24,magsusb) xlabel(’frequency’); ylabel(’USB magnitude’) subplot(2,1,2), stem(-24:24,magslsb) xlabel(’frequency’); ylabel(’LSB magnitude’) % End of script file.

Executing the code gives Figures 3.17 and 3.18. Computer Exercise 3.3 In this computer example we investigate the demodulation of SSB using carrier resertion for several values of the constant k.for an assumed message signal. % File: ce3_3.m t = 0:0.001:1; fm = 1; m = 2*cos(2*pi*fm*t) + cos(4*pi*fm*t);


3.2. COMPUTER EXERCISES

47

USB signal

2 1 0 -1 -2 0

0.1

0.2

0.3

0.4

0.5 time

0.6

0.7

0.8

0.9

1

0

0.1

0.2

0.3

0.4

0.5 time

0.6

0.7

0.8

0.9

1

LSB signal

2 1 0 -1 -2

Figure 3.17: Time-domain waveforms.


48

CHAPTER 3. BASIC MODULATION TECHNIQUES

USB magnitude

0.8 0.6 0.4 0.2 0 -25

-20

-15

-10

-5

0 frequency

5

10

15

20

25

-20

-15

-10

-5

0 frequency

5

10

15

20

25

LSB magnitude

0.8 0.6 0.4 0.2 0 -25

Figure 3.18: Amplitude spectra.


3.2. COMPUTER EXERCISES

49

mhil = 2*sin(2*pi*fm*t) + sin(4*pi*fm*t); k = 1; ydt = sqrt((k+m).*(k+m)+mhil.*mhil); ydt1 = ydt-k; k = 10; ydt = sqrt((k+m).*(k+m)+mhil.*mhil); ydt2 = ydt-k; k = 100; ydt = sqrt((k+m).*(k+m)+mhil.*mhil); ydt3 = ydt-k; subplot(3,1,1), plot(t,m,’k’,t,ydt1,’k--’) ylabel(’k=1’) subplot(3,1,2), plot(t,m,’k’,t,ydt2,’k--’) ylabel(’k=10’) subplot(3,1,3), plot(t,m,’k’,t,ydt3,’k--’) ylabel(’k=100’) % End of script file. Executing the preceding MATLAB program gives the results illustrated in Figure 3.19. In the three plots the message signal is the solid line and the dashed line represents the output resulting from carrier reinsertion. It can be seen that the demodulated output improves for increasing k and that the distortion is negligible for k = 100. Computer Exercise 3.4 The MATLAB program illustrating recovery of the message signal from a VSB signal follows. % File: ce3_4.m t = 0:0.001:1; m = cos(2*pi*t)-0.4*cos(4*pi*t)+0.9*cos(6*pi*t); e1 = 0.64; e2 = 0.78; e3 = 0.92; fc = 25; A = 100; ct = A*cos(2*pi*fc*t); xct = e1*cos(2*pi*(fc+1)*t)+(1-e1)*cos(2*pi*(fc-1)*t)-... 0.4*e2*cos(2*pi*(fc+2)*t)-0.4*(1-e2)*cos(2*pi*(fc-2)*t)+... 0.9*e3*cos(2*pi*(fc+3)*t)+0.9*(1-e3)*cos(2*pi*(fc-3)*t); xdem = abs(xct+ct); subplot(3,1,1), plot(t,m), ylabel(’m(t)’) subplot(3,1,2), plot(t,xct), ylabel(’xc(t)’) subplot(3,1,3), plot(t,xdem), ylabel(’xc(t)+c(t)’) xlabel(’Time’), axis([0 1 97 103])


50

CHAPTER 3. BASIC MODULATION TECHNIQUES

k=1

4 2 0 -2

0

0.1

0.2

0.3

0.4

0.5

0.6

0.7

0.8

0.9

1

0

0.1

0.2

0.3

0.4

0.5

0.6

0.7

0.8

0.9

1

0

0.1

0.2

0.3

0.4

0.5

0.6

0.7

0.8

0.9

1

k=10

4 2 0 -2

k=100

4 2 0 -2

Figure 3.19: Illustration of demodulation with carrier reinsertion for k = 1, 10, and 100.



Figure 3.20: Recovery of m(t) by carrier reinsertion in a VSB system. % End of script file. The result of executing the preceding program is illustrated in Figure 3.20. The top pane shows m(t), the second pane shows the transmitted signal xc (t), and the third shows the transmitted signal plus unmodulated carrier (amplified so that the amplitude is 100). Envelope detecting this last signal recovers the message signal. Computer Exercise 3.5 The MATLAB code follows. % File: ce3_5.m fs = 500; delt = 1/fs; t = 0:delt:1-delt; npts = length(t); fm = 20; fd = [10 20 100]; for k=1:3 beta = fd(k)/fm; cxce = exp(i*beta*sin(2*pi*fm*t));



Figure 3.21: FM spectra with fm held constant and modulation indices of 0.5, 1 and 5. as = (1/npts)*abs(fft(cxce)); evenf = [as(fs/2:fs) as(1:fs/2-1)]; fn = -fs/2:fs/2-1; subplot(3,1,k); stem(fn,2*evenf,’.’) ylabel(’Amplitude’) end % End of script file.
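As a cross-check on the spectra just computed (a sketch, not part of the original exercise), note that the complex envelope exp[jβ sin(2πfm t)] of a single-tone FM signal has spectral lines at the frequencies n·fm whose amplitudes follow |J_n(β)|, so the relative stem heights in Figure 3.21 should track the Bessel-function values:

% Bessel-function line amplitudes for the beta = fd/fm cases used above
fm = 20;  fd = [10 20 100];          % same values as in ce3_5.m
n = 0:10;                            % a few sideband orders
for k = 1:3
    beta = fd(k)/fm;
    disp(abs(besselj(n, beta)))      % compare with the relative stem heights
end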

Computer Exercise 3.6
For a square-wave message signal, the phase deviation will be a triangular wave. The triangular-wave message signal is generated using a 6-term Fourier series. The MATLAB code follows.
% File: ce3_6.m
fs = 1000;          % sampling frequency
delt = 1/fs;        % sampling increment
t = 0:delt:1-delt;  % time vector



npts = length(t); % number of points fn = (0:(npts/2))*(fs/npts); % frequency vector for plot % Generate a triangular-wave signal as the phase deviation corresponding % to a square-wave message signal using Fourier series and plot. fm = 5; b = [1 -1/9 1/25 -1/49 1/81 -1/121]; c = 2*pi*fm*[1 3 5 7 9 11]; m = zeros(1,length(t)); for n=1:length(c) m=m+b(n)*sin(c(n)*t); end m = m/max(m); plot(t,m), xlabel(’Time’) % Determine spectra. fd1 = 0.10; xct1 = sin(2*pi*250*t+fd1*m); % modulated carrier asxc1 = (2/npts)*abs(fft(xct1)); % amplitude spectrum ampspec1 = asxc1(1:((npts/2)+1)); % positive frequency portion fd2 = 1.0; xct2 = sin(2*pi*250*t+fd2*m); % modulated carrier asxc2 = (2/npts)*abs(fft(xct2)); % amplitude spectrum ampspec2 = asxc2(1:((npts/2)+1)); % positive frequency portion fd3 = 10.0; xct3 = sin(2*pi*250*t+fd3*m); % modulated carrier asxc3 = (2/npts)*abs(fft(xct3)); % amplitude spectrum ampspec3 = asxc3(1:((npts/2)+1)); % positive frequency portion % Plot spectra. figure % new figure subplot(3,1,1) stem(fn,ampspec1,’.k’); ylabel(’Magnitude - fd=0.1’) subplot(3,1,2) stem(fn,ampspec2,’.k’); ylabel(’Magnitude - fd=1’) subplot(3,1,3) stem(fn,ampspec3,’.k’); xlabel(’Frequency’), ylabel(’Magnitude - fd=10’) % End of script file.



Figure 3.22: Triangular wave representing the phase deviation.

Executing the program yields the results illustrated in Figures 3.22 and 3.23, respectively. Figure 3.22 shows the triangular-wave message signal, and Figure 3.23 shows the corresponding spectra. As with Figure 3.29 in the text, all sum and difference frequencies are present. This nonlinear effect is most easily seen for fd = 10, since larger deviation constants spread the spectrum more than small ones.
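To quantify the spreading remarked on above, a rough occupied-bandwidth estimate can be computed from the amplitude spectra already in the ce3_6.m workspace (a sketch, not part of the original exercise; it assumes ampspec3 and fn are still defined):

% Approximate 98%-power bandwidth for the fd = 10 case
p = ampspec3.^2;                  % power in each positive-frequency bin
cp = cumsum(p)/sum(p);            % normalized cumulative power
f_lo = fn(find(cp >= 0.01, 1));   % lower 1% point
f_hi = fn(find(cp >= 0.99, 1));   % upper 99% point
B98 = f_hi - f_lo                 % approximate 98%-power bandwidth in Hz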

Computer Exercise 3.7 The MATLAB program for generating Figure 3.42 in the text follows. % File: ce3_7.m fs = 1000; % sampling frequency delt = 1/fs; % sampling increment t = 0:delt:1-delt; % time vector npts = length(t); % number of point fi = 1; Ac = 1; % % Let Ai=0.2. % Ai = 0.2; xcr = Ac+Ai*exp(i*2*pi*fi*t);



Figure 3.23: Spectra for FM signals with a square-wave message signal. psi1 = atan(imag(xcr)./real(xcr)); upsi1 = unwrap(angle(xcr)); udifpsi1 = zeros(1,npts); for k=2:npts udifpsi1(k) = (upsi1(k)-upsi1(k-1))/delt; end % % Let Ai=0.9. % Ai = 0.9; xcr = Ac+Ai*exp(i*2*pi*fi*t); psi2 = atan(imag(xcr)./real(xcr)); upsi2 = unwrap(angle(xcr)); udifpsi2 = zeros(1,npts); for k=2:npts udifpsi2(k) = (upsi2(k)-upsi2(k-1))/delt; end % % Let Ai=1.1. % Ai = 1.1;


xcr = Ac+Ai*exp(i*2*pi*fi*t);
psi3 = atan(imag(xcr)./real(xcr));
upsi3 = unwrap(angle(xcr));
udifpsi3 = zeros(1,npts);
for k=2:npts
    udifpsi3(k) = (upsi3(k)-upsi3(k-1))/delt;
end
%
% Plot results.
%
subplot(3,2,1); plot(t,psi1); ylabel('Ai=0.2');
subplot(3,2,2); plot(t,udifpsi1)
subplot(3,2,3); plot(t,psi2); ylabel('Ai=0.9');
subplot(3,2,4); plot(t,udifpsi2)
subplot(3,2,5); plot(t,psi3); ylabel('Ai=1.1');
subplot(3,2,6); plot(t,udifpsi3)
% End of script file.

The results are shown in Figure 3.24. We see that these match closely Figure 3.42 in the text.

Computer Exercise 3.8 The MATLAB code for the PLL simulation with a line added for entering the sampling frequency follows. % File: ce3_8.m % beginning of preprocessor clear all % be safe fdel = input(’Enter frequency step size in Hz > ’); fn = input(’Enter the loop natural frequency in Hz > ’); zeta = input(’Enter zeta (loop damping factor) > ’); fs = 2000; % default sampling frequency fs = input(’Enter the sampling frequency (default = 2000) > ’); npts = fs ; % default number of simulation point T = 1/fs; t = (0:(npts-1))/fs; % time vector nsettle = fix(npts/10); % set nsettle time as 0.1*npts Kt = 4*pi*zeta*fn; % loop gain a = pi*fn/zeta; % loop filter parameter



Figure 3.24: Interference performance of FM demodulator.



CHAPTER 3. BASIC MODULATION TECHNIQUES filt_in_last = 0; filt_out_last=0; vco_in_last = 0; vco_out = 0; vco_out_last=0; % end of preprocesso - beginning of simulation loop for i=1:npts if i<nsettle fin(i) = 0; phin = 0; else fin(i) = fdel; phin = 2*pi*fdel*T*(i-nsettle); end s1 = phin - vco_out; s2 = sin(s1); % sinusoidal phase detector s3 = Kt*s2; filt_in = a*s3; filt_out = filt_out_last + (T/2)*(filt_in + filt_in_last); filt_in_last = filt_in; filt_out_last = filt_out; vco_in = s3 + filt_out; vco_out = vco_out_last + (T/2)*(vco_in + vco_in_last); vco_in_last = vco_in; vco_out_last = vco_out; phierror(i)=s1; fvco(i)=vco_in/(2*pi); freqerror(i) = fin(i)-fvco(i); end % end of simulation loop - beginning of postprocessor kk = 0; while kk == 0 k = menu(’Phase Lock Loop Postprocessor’,... ’Input Frequency and VCO Frequency’,... ’Phase Plane Plot’,... ’Exit Program’); if k == 1 plot(t,fin,t,fvco), grid title(’Input Frequency and VCO Freqeuncy’) xlabel(’Time - Seconds’) ylabel(’Frequency - Hertz’) pause elseif k == 2



plot(phierror/2/pi,freqerror), grid
title('Phase Plane')
xlabel('Phase Error / 2*pi')
ylabel('Frequency Error - Hz')
pause
elseif k == 3
kk = 1;
end
end
% end of postprocessor
% End of script file.

Figure 3.25: Phase-plane plot with fs = 400.

The preceding program was executed twice: once for fs = 400 and once for fs = 1000. The simulation for fs = 400 is shown in Figure 3.25. We note that the curves, especially for large frequency error, are not smooth. This is an indication that the sampling frequency is too low.



Figure 3.26: Phase-plane plot with fs = 1000. We therefore increase the sampling frequency from 400 to 1000. The results are illustrated in Figure 3.26. We now see that the curve defining the phase plane is much smoother. In addition, for fs = 400 we see that the PLL skips 4 cycles while acquiring lock. For fs = 1000 we see that the PLL skips 3 cycles while acquiring lock. By executing the simulation a number of times we see that additional increases in the sampling frequency have no effect on the results. An effective way to determine an appropriate sampling frequency is to set the sampling frequency at a high value and then reduce the sampling frequency until the results change and the waveforms are distorted. At this point the sampling frequency is too low. Computer Exercise 3.9 The MATLAB program for the PLL with the limiting phase detector follows. We note that the phase detector has been modified and a line has been added for entering the phase detector constant A. % File: ce3_9.m % beginning of preprocessor


3.2. COMPUTER EXERCISES clear all % be safe fdel = input(’Enter frequency step size in Hz > ’); fn = input(’Enter the loop natural frequency in Hz > ’); zeta = input(’Enter zeta (loop damping factor) > ’); AA = input(’Enter phase detector constant A > ’); npts = 2000; % default number of simulation points fs = 2000; % default sampling frequency T = 1/fs; t = (0:(npts-1))/fs; % time vector nsettle = fix(npts/10); % set nsettle time as 0.1*npts Kt = 4*pi*zeta*fn; % loop gain a = pi*fn/zeta; % loop filter parameter filt_in_last = 0; filt_out_last=0; vco_in_last = 0; vco_out = 0; vco_out_last=0; % end of preprocessor - beginning of simulation loop for i=1:npts if i<nsettle fin(i) = 0; phin = 0; else fin(i) = fdel; phin = 2*pi*fdel*T*(i-nsettle); end s1 = phin - vco_out; s2 = sin(s1); % sinusoidal phase detector if s2<-AA s2=-AA; elseif s2>AA s2=AA; end s3 = Kt*s2; filt_in = a*s3; filt_out = filt_out_last + (T/2)*(filt_in + filt_in_last); filt_in_last = filt_in; filt_out_last = filt_out; vco_in = s3 + filt_out; vco_out = vco_out_last + (T/2)*(vco_in + vco_in_last); vco_in_last = vco_in; vco_out_last = vco_out; phierror(i)=s1;




CHAPTER 3. BASIC MODULATION TECHNIQUES fvco(i)=vco_in/(2*pi); freqerror(i) = fin(i)-fvco(i); end % end of simulation loop - beginning of postprocessor kk = 0; while kk == 0 k = menu(’Phase Lock Loop Postprocessor’,... ’Input Frequency and VCO Frequency’,... ’Phase Plane Plot’,... ’Exit Program’); if k == 1 plot(t,fin,t,fvco), grid title(’Input Frequency and VCO Freqeuncy’) xlabel(’Time - Seconds’) ylabel(’Frequency - Hertz’) pause elseif k == 2 plot(phierror/2/pi,freqerror), grid title(’Phase Plane’) xlabel(’Phase Error / 2*pi’) ylabel(’Frequency Error - Hz’) pause elseif k == 3 kk = 1; end end % end of postprocessor % End of script file.

We see that with A = 0.5 the loop slips 5 cycles, whereas with A > 1 (limiter absent) the PLL slips only three cycles with the parameters given. The acquisition time is therefore increased.

Computer Exercise 3.10
The MATLAB code for producing the PAM, PWM, and PPM signals follows.
% File: ce3_10.m
clear all;     % be safe
N = 1000;      % vector length


Figure 3.27: PLL acquisition with A = 0.5.



CHAPTER 3. BASIC MODULATION TECHNIQUES f = 1; % frequency x = zeros(1,N); % initialize zpam = zeros(1,N); % initialize zpwm = zeros(1,N); % initialize zppm = zeros(1,N); % initialize width = 40; % nominal width for n=1:N x(1,n) = sin(2*pi*f*(n-1)/N+pi/8); end for k=1:100:N zpam(1,k)=x(1,k); % sample value for kk=1:width m=k+kk; zpam(1,m)=x(1,k); % PAM sigmal end widthpwm = round(width*(1.2+x(1,k))); for kk=1:widthpwm m=k+kk; zpwm(1,m)=1.0; % PWM signal end for kk=1:5 m=k+widthpwm+(kk-1); zppm(1,m)=1.0; % PPM signal end end subplot(311) plot(1:N,zpam), axis([0 1000 -1.2 1.2]), grid ylabel(’PAM Signal’) subplot(312) plot(1:N,zpwm), axis([0 1000 -0.2 1.2]), grid ylabel(’PWM Signal’) subplot(313) plot(1:N,zppm), axis([0 1000 -0.2 1.2]), grid ylabel(’PPM Signal’), xlabel(’Sample Index’) % End of script file. The PAM, PWM, and PPM signals are shown in Figure 3.28.



Figure 3.28: PAM, PWM, and PPM signals.


Chapter 4

Principles of Baseband Digital Data Transmission

4.1 Problem Solutions

Problem 4.1
a. Split phase or bipolar RZ; b. NRZ change or NRZ mark; c. Split phase; d. NRZ mark; e. Polar RZ; f. Unipolar RZ.

Problem 4.2
These are a matter of filling in the details given at the ends of Examples 4.1 through 4.5.

Problem 4.3
This is a matter of adapting Figure 4.2, top and bottom plots, to the data sequence given in the problem statement.

Problem 4.4
This is a matter of adapting Figure 4.2, second plot, to the data sequence given in the problem statement.



Problem 4.5 This is a matter of adapting Figure 4.2, third - fifth plots, to the data sequence given in the problem statement. Problem 4.6 a. 4 kbps; b. 2 kbps; c. 2 kbps; d. 4 kbps for first null and 2 kbps for second null.

Problem 4.7
The input pulse is x1(t) = A u(t) − A u(t − T). The response to the first step of amplitude A is
y11(t) = A[1 − exp(−t/RC)] u(t)
and the response to the second step, using time invariance, is
y12(t) = A[1 − exp(−(t − T)/RC)] u(t − T)
By the superposition property of a linear system, the total response is y1(t) = y11(t) − y12(t), which gives (4.30).

Problem 4.8
From (4.37) the flat-top portion of the spectrum goes to f = 1/2T and the transition region is 0 in frequency extent. The remaining spectrum is 0 for 1/2T < f ≤ 1/T. Setting β = 0 in (4.36) clearly gives pRC(t) = sinc(t/T).
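A short MATLAB sketch (not part of the original solution) that plots the Problem 4.7 response for assumed illustration values A = 1, T = 1, and RC = 0.2:

% Pulse response of an RC lowpass filter, y1(t) = y11(t) - y12(t)
A = 1; T = 1; RC = 0.2;                     % assumed illustration values
t = 0:0.001:3;
u = @(x) double(x >= 0);                    % unit step
y11 = A*(1 - exp(-t/RC)).*u(t);
y12 = A*(1 - exp(-(t-T)/RC)).*u(t-T);
plot(t, y11 - y12), grid
xlabel('t'), ylabel('y_1(t)')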



Problem 4.9
Because PRC(f) is even, it follows that
pRC(t) = F⁻¹[PRC(f)] = 2 ∫₀^∞ PRC(f) cos(2πft) df
       = 2T ∫₀^{(1−β)/2T} cos(2πft) df + T ∫_{(1−β)/2T}^{(1+β)/2T} {1 + cos[(πT/β)(f − (1−β)/2T)]} cos(2πft) df
The first integral gives (T/πt) sin[(1−β)πt/T]. The part of the second integral that does not contain the cosine factor gives (T/2πt){sin[(1+β)πt/T] − sin[(1−β)πt/T]}, so these two contributions together are
(T/2πt){sin[(1+β)πt/T] + sin[(1−β)πt/T]} = [sin(πt/T)/(πt/T)] cos(πβt/T) = sinc(t/T) cos(πβt/T)
by a sum-to-product identity. The remaining part of the second integral is evaluated by writing the product of cosines as one-half the sum of cosines of the sum and difference arguments, integrating term by term, and recombining with sum-to-product identities. After simplification of the total, the result is
pRC(t) = [sin(πt/T)/(πt/T)] · cos(πβt/T)/[1 − (2βt/T)²] = sinc(t/T) cos(πβt/T)/[1 − (2βt/T)²]
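A numerical cross-check of this closed form (a sketch, not part of the original solution): build the raised-cosine spectrum on a dense grid, evaluate the inverse-transform integral numerically, and compare with the expression above.

% Numerical check of pRC(t) = sinc(t/T)cos(pi*beta*t/T)/(1-(2*beta*t/T)^2)
T = 1; beta = 0.5;
f1 = (1-beta)/(2*T); f2 = (1+beta)/(2*T);
f = linspace(0, f2, 2000);
P = T*ones(size(f));
idx = f > f1;
P(idx) = (T/2)*(1 + cos(pi*T*(f(idx)-f1)/beta));   % raised-cosine spectrum
t = 0.05:0.05:3;
t(abs(1-(2*beta*t/T).^2) < 1e-6) = [];             % skip removable singularities
pnum = zeros(size(t));
for k = 1:length(t)
    pnum(k) = 2*trapz(f, P.*cos(2*pi*f*t(k)));     % 2*int_0^inf P(f)cos(2*pi*f*t)df
end
sincf = @(x) sin(pi*x)./(pi*x);
pclosed = sincf(t/T).*cos(pi*beta*t/T)./(1-(2*beta*t/T).^2);
disp(max(abs(pnum - pclosed)))                     % should be very small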



Problem 4.10
a. This spectrum has the zero-ISI property. The appropriate sample rate is 1/T = 1.5W.
b. This spectrum does not have the zero-ISI property.
c. This spectrum does not have the zero-ISI property.
d. This spectrum has the zero-ISI property. The appropriate sample rate is 1/T = W.
e. This spectrum has the zero-ISI property. The appropriate sample rate is 1/T = 3W.

Problem 4.11
The MATLAB program is given below. The plot of the desired frequency response functions follows the program.
% Problem 4_11
% Computation and plotting of transmitter and receiver frequency responses
% for raised cosine signaling through lowpass Butterworth filtered channel
%
clear all
A = char('-',':','--','-.');
clf
f3dB = input('Enter channel filter 3-dB frequency, Hz => ');
n_poles = input('Enter number of poles for Butterworth filter repr channel => ');
Rb = input('Enter data rate, bits/s => ');
beta0 = input('Enter vector of betas (max no = 3) => ');
Lbeta = length(beta0);
for n = 1:Lbeta
    beta = beta0(n);
    f1 = 0:.1:(1-beta)*Rb/2;
    HR1 = sqrt(1+(f1/f3dB).^(2*n_poles));
    f2 = (1-beta)*Rb/2:.1:(1+beta)*Rb/2;
    HR2 = sqrt(1+(f2/f3dB).^(2*n_poles)).*cos(pi*(f2-(1-beta)*Rb/2)./(2*beta*Rb));
    f3 = (1+beta)*Rb/2:.1:Rb;
    HR3 = zeros(size(f3));
    f = [f1 f2 f3];
    HR = [HR1 HR2 HR3];
    plot(f,HR,A(n,:), 'LineWidth', 1.5),xlabel('{\itf}, Hz'),...
        ylabel('|{\itH_R}({\itf})| or |{\itH_T}({\itf})|')
    if n == 1
        hold on



Figure 4.1: |H_R(f)| (= |H_T(f)|) for bit rate = 5000 bps, channel filter 3-dB frequency = 5000 Hz, one pole, and β = 0.5, 1.

end end title([’Bit rate = ’,num2str(Rb),’ bps; channel filter 3-dB frequency = ’,num2str(f3dB), ’ Hz; no. of poles = ’,num2str(n_poles)]),... if Lbeta == 1 legend([’\beta = ’,num2str(beta0(1))],1) elseif Lbeta == 2 legend([’\beta = ’,num2str(beta0(1))],[’\beta = ’,num2str(beta0(2))],1) elseif Lbeta == 3 legend([’\beta = ’,num2str(beta0(1))],[’\beta = ’,num2str(beta0(2))],... [’\beta = ’,num2str(beta0(3))],1) end



Problem 4.12
Use (4.37) and set the maximum frequency equal to 7 kHz:
(1 + β)/(2T) = 7 kHz
But 1/T = 9 kbps, so
1 + β = (2 × 7)/9 = 14/9, which gives β = 14/9 − 1 = 5/9 = 0.556
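As a quick numerical check of this result (a sketch, not part of the original solution):

Rb = 9e3;  fmax = 7e3;            % bit rate and allowed maximum frequency
beta = 2*fmax/Rb - 1              % rolloff factor; displays approximately 0.556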

Problem 4.13
a. Consider P(f) = (T/2) Λ(Tf/2), which is a triangle T/2 units high going from −2/T to 2/T. Clearly, superimposing it with its translates by ±1/T, ±2/T, ... produces a constant height of T/2, so Nyquist's criterion is satisfied.
b. Use the transform pair Λ(t) ↔ sinc²(f) with duality to produce the transform pair sinc²(t) ↔ Λ(f). Then use scale change to get sinc²(at) ↔ a⁻¹ Λ(f/a). Applying this to P(f) with a = 2/T gives p(t) = sinc²(2t/T), which has zero crossings at t = ±T/2, ±T, ...

Problem 4.14
a. The channel response matrix is
[Pc] = [  1.0    0.1   −0.01
          0.2    1.0    0.1
         −0.02   0.2    1.0 ]
Its inverse is
[Pc]⁻¹ = [  1.0217  −0.1063   0.0209
           −0.2106   1.0423  −0.1063
            0.0626  −0.2106   1.0217 ]
The equalizer coefficients are the middle column:
[α_{-1}  α_0  α_1] = [−0.1063   1.0423   −0.2106]



b. The output samples are given by
p_eq(mT) = Σ_{n=−1}^{1} α_n pc[(m − n)T]
which gives
[p_eq] = [−0.0213   0.0000   1.0000   0.0000   −0.0635]
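The numbers above can be reproduced with a few lines of MATLAB (a sketch, not part of the original solution):

% Minimal check of the 3-tap zero-forcing design in Problem 4.14
Pc = [ 1.0   0.1  -0.01;
       0.2   1.0   0.1;
      -0.02  0.2   1.0 ];          % channel response matrix from part (a)
alpha = Pc\[0; 1; 0];              % middle column of inv(Pc) = [alpha_-1; alpha_0; alpha_1]
pc = [-0.01 0.1 1.0 0.2 -0.02];    % pc(-2T) ... pc(2T), read off the matrix
peq = conv(alpha.', pc);           % p_eq(mT) = sum_n alpha_n pc[(m-n)T], m = -3..3
disp(alpha.'), disp(peq)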

Problem 4.15
a. The channel response matrix is
[Pc] = [  1.0    0.1   −0.01   0.001  −0.001
          0.2    1.0    0.1   −0.01    0.001
         −0.02   0.2    1.0    0.1    −0.01
          0.005 −0.02   0.2    1.0     0.1
         −0.003  0.005 −0.02   0.2     1.0 ]
Its inverse is
[Pc]⁻¹ = [  1.0218  −0.1067   0.0218  −0.0046   0.0018
           −0.2111   1.0438  −0.1111   0.0227  −0.0046
            0.0651  −0.2179   1.0451  −0.1111   0.0218
           −0.0234   0.0673  −0.2179   1.0438  −0.1067
            0.0101  −0.0234   0.0651  −0.2111   1.0218 ]
The equalizer coefficients are the middle column:
[α_{-2}  α_{-1}  α_0  α_1  α_2] = [0.0218   −0.1111   1.0451   −0.2179   0.0651]
b. The output samples are given by
p_eq(mT) = Σ_{n=−2}^{2} α_n pc[(m − n)T]
which gives
[p_eq] = [0.0000   0.0000   1.0000   0.0000   0.0000]

Problem 4.16
a. The output in terms of the input is
y(t) = x(t) + β x(t − τ_m)



Using the superposition and time delay theorems, its Fourier transform is
Y(f) = X(f)[1 + β exp(−j2πτ_m f)]
so the frequency response function is
H(f) = Y(f)/X(f) = 1 + β exp(−j2πτ_m f)
and
|H(f)| = √{[1 + β cos(2πτ_m f)]² + β² sin²(2πτ_m f)} = √{1 + 2β cos(2πτ_m f) + β²}

b. From the block diagram,
z(t) = Σ_{i=1}^{N} β_i y[t − (i − 1)Δ]
Using the superposition and time delay theorems, the Fourier transform of z(t) is
Z(f) = Y(f) Σ_{i=1}^{N} β_i exp[−j2π(i − 1)Δf]
The frequency response function is
H_eq(f) = Z(f)/Y(f) = Σ_{i=1}^{N} β_i exp[−j2π(i − 1)Δf]

c. From part (a),
1/H(f) = 1/[1 + β exp(−j2πτ_m f)] = 1 − β exp(−j2πτ_m f) + β² exp(−j4πτ_m f) − β³ exp(−j6πτ_m f) + ...
Equate this to H_eq(f) with τ_m = Δ. Clearly β_i = (−β)^{i−1}, i = 1, 2, ..., N.

Problem 4.17
a. For a timing jitter of zero,
P_E0 = (1/2) exp(−z_0) = 10⁻⁶
or
z_0 = −ln 2 + 6 ln 10 = 13.12 = 11.18 dB



b. For a timing jitter of 5%,
P_E,5% = (1/4) exp(−z) + (1/4) exp[−z(1 − 2 × 0.05)] = (1/4) exp(−z) + (1/4) exp(−0.9z) = 10⁻⁶
Trial and error provides the value
z_5% = 14.054 = 11.48 dB
Degradation = 11.48 − 11.18 = 0.3 dB

c. For an error probability of 10⁻⁴,
(1/2) exp(−z_0) = 10⁻⁴  or  z_0 = −ln 2 + 4 ln 10 = 8.52 = 9.3 dB
z_5% = 9.07 = 9.6 dB
Degradation = 9.6 − 9.3 = 0.3 dB
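The trial-and-error step in part (b) can be automated with a root finder (a sketch, not part of the original solution):

PE = 1e-6;
f = @(z) 0.25*exp(-z) + 0.25*exp(-0.9*z) - PE;   % 5% timing-jitter error probability
z5 = fzero(f, 14);                                % root near the quoted 14.054
z0 = -log(2) + 6*log(10);                         % zero-jitter value from part (a)
degradation_dB = 10*log10(z5) - 10*log10(z0)      % approximately 0.3 dB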

Problem 4.18 The modified program follows with the spectrum shown in Fig. 4.2. The spectral line at the bit rate does not appear to be much different than that of Fig 4.16. % Problem 4-18 % nsym = 1000; nsamp = 50; lambda = 0.7; [b,a] = butter(3,2*lambda/nsamp); l = nsym*nsamp; % Total sequence length y = zeros(1,l-nsamp+1); % Initalize output vector x =2*round(rand(1,nsym))-1; % Components of x = +1 or -1 for i = 1:nsym % Loop to generate info symbols k = (i-1)*nsamp+1; y(k) = x(i); end datavector1 = conv(y,ones(1,nsamp)); % Each symbol is nsamp long



Figure 4.2: Waveform, rectified waveform, and spectrum for Problem 4.18.

subplot(3,1,1), plot(datavector1(1,200:799),'k', 'LineWidth', 1.5)
axis([0 600 -1.4 1.4]), ylabel('Amplitude')
filtout = filter(b,a,datavector1);
% datavector2 = filtout.*filtout;
datavector2 = abs(filtout);
subplot(3,1,2), plot(datavector2(1,200:799),'k', 'LineWidth', 1.5), ylabel('Amplitude')
y = fft(datavector2);
yy = abs(y)/(nsym*nsamp);
subplot(3,1,3), stem(yy(1,1:2*nsym),'k.')
xlabel('FFT Bin'), ylabel('Spectrum')
% End of script file.

Problem 4.19
The problem statement is incorrect. The FFT length should have been given as N_FFT = 5,000. Since this corresponds to fs = 50 Hz, the spectral line at the bit rate corresponds to 50 × 1000/5000 = 1 Hz.



Problem 4.20
Expand the signal as
x_PSK(t) = A_c cos[2πf_c t + α(π/2) d(t)]
         = A_c cos[α(π/2) d(t)] cos(2πf_c t) − A_c sin[α(π/2) d(t)] sin(2πf_c t)
Since d(t) = ±1, the cosine is even, and the sine is odd, this becomes
x_PSK(t) = A_c cos(απ/2) cos(2πf_c t) − A_c d(t) sin(απ/2) sin(2πf_c t)
The first term is clearly carrier and the second modulation. From the first line, the total power is P_T = (1/2)A_c². From the last line, the power in the carrier component is P_C = (1/2)A_c² cos²(απ/2). The fraction of total power in the carrier component is
P_C/P_T = cos²(απ/2)
We want
cos²(απ/2) = 0.1  or  cos(απ/2) = √0.1  or  α = (2/π) cos⁻¹(√0.1) = 0.7952
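A quick numerical confirmation (sketch, not part of the original solution):

alpha = (2/pi)*acos(sqrt(0.1))        % displays approximately 0.7952
carrier_fraction = cos(alpha*pi/2)^2  % should return 0.1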

4.2 Computer Exercises

Computer Exercise 4.1 % ce4_1: Simulation of a BB pulse transmission system filtered % by Butterworth filter % N_order = input(’Enter order of Butterworth filter => ’); BWT_sym = input(’Enter bandwidth X sym dur product => ’); T_sym = 1; BW = BWT_sym/T_sym; sam_sym = 100; [num den] = butter(N_order, 2*BW/sam_sym); N_sym = 20;



CHAPTER 4. PRINCIPLES OF BASEBAND DIGITAL DATA TRANSMISSION ts = T_sym/sam_sym; n_samp = N_sym*sam_sym; t = 0:ts:(n_samp-1)*ts; s = 3279; rand(’state’, s); ss = sign(rand(1, N_sym) - 0.5); % Bipolar data = 0.5*(ss+1); % Unipolar x1 = zeros(size(t)); for n = 1:N_sym x1 = x1 + ss(n)*unit_pulse((t-(n-1)*T_sym-T_sym/2)/T_sym); %NRZ change end y1 = filter(num, den, x1); x2 = zeros(size(t)); sgn = 1; for n = 1:N_sym if n == 1 x2 = x2 + ss(1)*unit_pulse((t-(n-1)*T_sym-T_sym/2)/T_sym); %NRZ mark else x2 = x2 + sgn*ss(1)*unit_pulse((t-(n-1)*T_sym-T_sym/2)/T_sym); end if n < N_sym if data(n+1) == 1 sgn = -sgn; elseif data(n+1) == 0 sgn = sgn; end end end y2 = filter(num, den, x2); x3 = zeros(size(t)); for n = 1:N_sym x3 = x3 + data(n)*unit_pulse(2*(t-(n-1)*T_sym-T_sym/4)/T_sym); %Unip RZ end y3 = filter(num, den, x3); x4 = zeros(size(t)); for n = 1:N_sym x4 = x4 + ss(n)*unit_pulse(2*(t-(n-1)*T_sym-T_sym/4)/T_sym); %Polar RZ end y4 = filter(num, den, x4); x5 = zeros(size(t));


4.2. COMPUTER EXERCISES

13

sgn = 1; for n = 1:N_sym x5 = x5 + sgn*data(n)*unit_pulse(2*(t-(n-1)*T_sym-T_sym/4)/T_sym); %Bip RZ if data(n) > 0 sgn = -sgn; end end y5 = filter(num, den, x5); x6 = zeros(size(t)); %Split phase for n = 1:N_sym x6 = x6 + ss(n)*(unit_pulse(2*(t-(n-1)*T_sym-T_sym/4)/T_sym) ... -unit_pulse(2*(t-(n-1)*T_sym-3*T_sym/4)/T_sym)); end y6 = filter(num, den, x6); figure(1) subplot(6,1,1), plot(t, x1, ’LineWidth’, 1.5), axis([min(t) max(t) -1.2 1.2]), ... ylabel(’NRZ change’), title(’Baseband binary data formats; unfiltered’) subplot(6,1,2), plot(t, x2, ’LineWidth’, 1.5), axis([min(t) max(t) -1.2 1.2]), ... ylabel(’NRZ mark’) subplot(6,1,3), plot(t, x3, ’LineWidth’, 1.5), axis([min(t) max(t) -1.2 1.2]), ... ylabel(’Unipolar RZ’) subplot(6,1,4), plot(t, x4, ’LineWidth’, 1.5), axis([min(t) max(t) -1.2 1.2]), ... ylabel(’Polar RZ’) subplot(6,1,5), plot(t, x5, ’LineWidth’, 1.5), axis([min(t) max(t) -1.2 1.2]), ... ylabel(’Bipolar RZ’) subplot(6,1,6), plot(t, x6, ’LineWidth’, 1.5), axis([min(t) max(t) -1.2 1.2]), ... ylabel(’Split phase’), xlabel(’Time, seconds’) figure(2) subplot(6,1,1), plot(t, y1, ’LineWidth’, 1.5), axis([min(t) max(t) -1.2 1.2]), ... ylabel(’NRZ change’), ... title([’Butterworth filter; order = ’, num2str(N_order), ’; ... BW = ’, num2str(BWT_sym), ’/T_b_i_t’]) subplot(6,1,2), plot(t, y2, ’LineWidth’, 1.5), axis([min(t) max(t) -1.2 1.2]), ... ylabel(’NRZ mark’) subplot(6,1,3), plot(t, y3, ’LineWidth’, 1.5), axis([min(t) max(t) -1.2 1.2]), ... ylabel(’Unipolar RZ’) subplot(6,1,4), plot(t, y4, ’LineWidth’, 1.5), axis([min(t) max(t) -1.2 1.2]), ... ylabel(’Polar RZ’) subplot(6,1,5), plot(t, y5, ’LineWidth’, 1.5), axis([min(t) max(t) -1.2 1.2]), ...



ylabel('Bipolar RZ')
subplot(6,1,6), plot(t, x6, 'LineWidth', 1.5), axis([min(t) max(t) -1.2 1.2]), ...
ylabel('Split phase'), xlabel('Time, seconds')
% End of script file

function y = unit_step(t)
% Function for generating the unit step
%
y = zeros(size(t));
I = find(t >= 0);
y(I) = ones(size(I));

function y = unit_pulse(t)
% Unit rectangular pulse function
%
y = unit_step(t+0.5) - unit_step(t-0.5);

A typical run follows. Plots appear in Figures 4.3 and 4.4.
>> ce4_1
Enter order of Butterworth filter => 2
Enter bandwidth X sym dur product => 1

Computer Exercise 4.2 % ce4_2: Computation and plotting of transmitter and receiver frequency responses % for raised cosine signaling through lowpass Butterworth filtered channel % clf clear all A = char(’-’,’:’,’—’,’-.’,’-o’,’-x’); clf f3dB = input(’Enter channel filter 3-dB frequency, Hz => ’); n_poles = input(’Enter number of poles for Butterworth filter representing channel => ’); Rb = input(’Enter data rate, bits/s => ’); beta0 = input(’Enter vector of betas => ’); delf = 100; Lbeta = length(beta0);


Figure 4.3: Baseband binary data formats (unfiltered): NRZ change, NRZ mark, unipolar RZ, polar RZ, bipolar RZ, and split phase.


Figure 4.4: The same data formats after the second-order Butterworth filter with BW = 1/T_bit.



for n = 1:Lbeta beta = beta0(n); f1 = 0:delf:(1-beta)*Rb/2; HR1 = sqrt(1+(f1/f3dB).^(2*n_poles)); f2 = (1-beta)*Rb/2:delf:(1+beta)*Rb/2; HR2 = sqrt(1+(f2/f3dB).^(2*n_poles)).*cos(pi*(f2-(1-beta)*Rb/2)./(2*beta*Rb)); f3 = (1+beta)*Rb/2:delf:Rb; HR3 = zeros(size(f3)); f = [f1 f2 f3]; HR = [HR1 HR2 HR3]; plot(f,HR,A(n,:), ’LineWidth’, 1.5) if n == 1 hold on xlabel(’{\itf}, Hz’),ylabel(’|{\itH_R}({\itf})| or |{\itH_T}({\itf})|’) title([’Bit rate = ’,num2str(Rb),’ bps; ch filter 3-dB freq = ’,num2str(f3dB),’ Hz; no. of poles = ’,... num2str(n_poles) ’; \beta = ’, num2str(beta0)]) end end A typical run follows with a plot given in Fig. 4.5. >> ce4_2 Enter channel filter 3-dB frequency, Hz => 2000 Enter number of poles for Butterworth filter representing channel => 2 Enter data rate, bits/s => 5000 Enter vector of betas => [0 .5 1] Computer Exercise 4.3 % ce4.3: Design of an N-tap zero-forcing equalizer; input and % output plotted % pc = [-0.03 0.05 -0.1 0.02 -0.05 0.2 1 0.3 -0.07 0.03 -0.2 0.05 -0.1]; n_tap = input(’Number of taps => ’); nn = floor(n_tap/2)+1; n_samp = length(pc); n_center = floor(n_samp/2)+1; if n_tap > n_center disp(’Not enough samples for specified equalizer length’)


Figure 4.5: |H_R(f)| (= |H_T(f)|) for bit rate = 5000 bps, channel filter 3-dB frequency = 2000 Hz, two poles, and β = 0, 0.5, 1.



end Pc = []; for k = 1:n_tap Pc(:, k) = pc(n_center-k+1:n_center+n_tap-k)’; end Pcinv = inv(Pc); A = Pcinv(:,nn); disp(’The equalizer tap vector is:’) disp(A) nT = [-(n_center-1):n_center-1]; peq = []; n_eq = n_samp-n_tap+1; nTc = [-floor(n_eq/2):floor(n_eq/2)]; for n = 1:n_eq Nd = n-1; pcd = pc(Nd+1:n_tap+Nd)’; peq(n) = A’*flipud(pcd); end subplot(2,1,1), stem(nT, pc), axis([min(nT) max(nT) -0.5 1.5]), ... xlabel(’{\itn}’), ylabel(’{\itp_c}({\itn})’) title([’Input sequence and zero-forced equalizer output for ’, num2str(n_tap), ’-tap equalizer’]) subplot(2,1,2), stem(nTc, peq), axis([min(nT) max(nT) -0.5 1.5]), ... xlabel(’{\itn}’), ylabel(’{\itp}_e_q({\itn})’) % End of script file A typical run follows with a plot given in Fig. 4.6. >> ce4_3 Number of taps => 5 The equalizer tap vector is: 0.1581 -0.3160 1.2068 -0.4405 0.2577 Computer Exercise 4.4: See Problem 4.18 solution
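As a follow-up check using the variables left in the workspace by ce4_3.m (a sketch; it assumes the tap vector A and the channel samples pc are still defined), the zero-forced samples around the center of the equalized pulse can be displayed directly:

peq_full = conv(A.', pc);      % equalized pulse, p_eq(m) = sum_n A(n) pc(m-n)
disp(peq_full(7:11))           % the five zero-forced samples, approximately [0 0 1 0 0]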


Figure 4.6: Input sequence p_c(n) (top) and zero-forced equalizer output p_eq(n) (bottom) for the 5-tap equalizer.


Chapter 5

Overview of Probability and Random Variables

5.1 Problem Solutions

Problem 5.1 S = sample space is the collection of the 21 parts. Let Ai = event that the pointer stops on the ith part, i = 1, 2, ..., 21. These events are exhaustive and mutually exclusive. Thus P (Ai ) = 1/21 a. P (even number) = P (2 or 4 or · · · or 20) = 10/21; b. P (A21 ) = 1/21; c. P (4 or 5 or 9) = 3/21 = 1/7; d. P (number > 10) = P (11, 12, · · · , or 21) = 11/21. Problem 5.2 a. Use a tree diagram similar to Fig. 5.2 to show that P (3K, 2A) =

P(3K, 2A) = 10 × (4/52)(3/51)(2/50) × (4/49)(3/48) = 9.2345 × 10⁻⁶
where the factor 10 = C(5,3) counts the branches of the tree, that is, the distinct orders in which the three kings and two aces can be drawn.



b. Use the tree diagram of Fig. 5.2, except that we now look at outcomes that give 4 of a kind. Since the tree diagram is for a particular denomination, we multiply by 13 and get
P(4 of a kind) = 13 × 5 × (4/52)(3/51)(2/50)(1/49)(48/48) = 0.0002401
where the factor 5 accounts for the position of the card that does not belong to the four of a kind.
c. The first card can be anything. Given a particular suit on the first card, the probability of a suit match on the second card is 12/51; on the third card it is 11/50; on the fourth card it is 10/49; on the fifth and last card it is 9/48. Therefore,
P(all same suit) = 1 × (12/51)(11/50)(10/49)(9/48) = 0.001981
d. For a royal flush drawn in the order A, K, Q, J, 10 of the same suit,
P(A, K, Q, J, 10 of same suit) = (4/52)(1/51)(1/50)(1/49)(1/48) = 1.283 × 10⁻⁸
e. The desired probability is
P(Q | A, K, J, 10 not all of same suit) = 4/48 = 0.0833
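The answers to parts (a) through (c) can also be confirmed by direct counting of unordered five-card hands (a sketch, not part of the original solution):

pa = nchoosek(4,3)*nchoosek(4,2)/nchoosek(52,5)   % three kings and two aces, 9.2345e-6
pb = 13*48/nchoosek(52,5)                         % four of a kind, 2.401e-4
pc = 4*nchoosek(13,5)/nchoosek(52,5)              % five cards of one suit, 1.981e-3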

Problem 5.3

P (A, B) = P (A) P (B) P (A, C) = P (A) P (C) P (B, C) = P (B) P (C) P (A, B, C) = P (A) P (B) P (C)

Problem 5.4 a. According to (5.11) P (A ∩ B) = P (A) P (B) for statistical independence. For the given probabilities, 0.2 × 0.5 = 0.1 6= 0.4 so they are not statistically independent.


5.1. PROBLEM SOLUTIONS

3

b. According to (5.6) P (A ∪ B) = P (A) + P (B) − P (A ∩ B) = 0.2 + 0.5 − 0.4 = 0.3. c. A and B being mutually exclusive implies that P (A|B) = P (B|A) = 0. A and B being statistically independent implies that P (A|B) = P (A) and P (B|A) = P (B). The only way that both of these conditions can be true is for P (A) = P (B) = 0. Problem 5.5 a. The result is ¢2 ¡ P (AB) = 1 − P (1 or more links broken) = 1 − 1 − q 2 (1 − q)

b. If link 4 is removed, the result is

¢ ¡ P (AB| link 4 removed) = 1 − 1 − q 2 (1 − q)

c. If link 2 is removed, the result is

¡ ¢2 P (AB| link 2 removed) = 1 − 1 − q 2

d. Removal of link 4 is more severe than removal of link 2. Problem 5.6 Using Bayes’ rule P (A|B) =

P (B|A) P (A) P (B)

where, by total probability ¢ ¡ ¢ £ ¡ ¢¤ ¡ ¢ ¡ P (B) = P (B|A) P (A) + P B|A P A = P (B|A) P (A) + 1 − P B|A P A = (0.95) (0.45) + (0.35) (0.55) = 0.62

Therefore P (A|B) = Similarly,

with

(0.95) (0.45) = 0.6895 0.62

¡ ¢ ¡ ¢ P B|A P (A) ¡ ¢ P A|B = P B

¡ ¢ ¡ ¢ ¡ ¢ ¡ ¢ P B = P B|A P (A) + P B|A P A = (0.05) (0.45) + (0.65) (0.55) = 0.38


4

CHAPTER 5. OVERVIEW OF PROBABILITY AND RANDOM VARIABLES

¡ ¢ (Note also that P B = 1 − P (B).) Thus

¢ (0.05) (0.45) ¡ = 0.0592 P A|B = 0.38

Problem 5.7 a. P (A2 ) = 0.3; P (A2 , B1 ) = 0.05; P (A1 , B2 ) = 0.05; P (A3 , B3 ) = 0.05; P (B1 ) = 0.15; P (B2 ) = 0.25; P (B3 ) = 0.6. b. P(A3 |B3 ) = 0.083; P (B2 |A1 ) = 0.091; P (B3 |A2 ) = 0.333. Problem 5.8 See the tables below for the results. Outcomes (1, 1) (1, 2) , (2, 1) (1, 3) , (3, 1) , (2, 2) (1, 4) , (4, 1) , (2, 3) , (3, 2) (1, 5) , (5, 1) , (2, 4) , (4, 2) , (3, 3) (a) (1, 6) , (6, 1) , (2, 5) , (5, 2) , (3, 4) , (4, 3) (2, 6) , (6, 2) , (3, 5) , (5, 3) , (4, 4) (3, 6) , (6, 3) , (4, 5) , (5, 4) (4, 6) , (6, 4) , (5, 5) (5, 6) , (6, 5) ((6, 6))

(b)

Outcomes (1, 1) , (1, 3) , (3, 1) , (2, 2) , (1, 5) , (5, 1) , (2, 4) , (4, 2) , (3, 3) , (2, 6) , (6, 2) , (3, 5) , (5, 3) , (4, 4) , (4, 6) , (6, 4) , (5, 5) , (6, 6) (1, 2) , (2, 1) , (1, 4) , (4, 1) , (2, 3) , (3, 2) , (1, 6) , (6, 1) , (2, 5) , (5, 2) , (3, 4) , (4, 3) , (3, 6) , (6, 3) , (4, 5) , (5, 4) , (5, 6) , (6, 5)

X1 = 2 3 4 5 6 7 8 9 10 11 12

P spots up

X2

P (X2 = xi )

1

18/36 = 1/2

0

18/36 = 1/2

P (X1 = xi ) 1/36 2/36 3/36 4/36 5/36 6/36 5/36 4/36 3/36 2/36 1/36

Problem 5.9 First, make a table showing outcomes giving the two values of the random variable:


5.1. PROBLEM SOLUTIONS

5

Outcomes (H = head, T = tail) X P (X = xi ) TTH, THT, HTT, HHH (odd # of heads) 0 4/8 = 1/2 HHT, HTH, THH, TTT (even # of heads) 1 4/8 = 1/2 The cdf is zero for x < 0, jumps by 1/2 at x = 0, jumps another 1/2 at x = 1, and remains at 1 out to ∞. The pdf consists of an impulse of weight 1/2 at x = 0 and an impulse of weight 1/2 at x = 1. It is zero elsewhere. Problem 5.10 a. As x → ∞, the cdf approaches 1. Therefore B = 1. Assuming continuity of the cdf, Fx (12) = 1 also, which says A × 124 = 1 or A = 4.8225 × 10−5 . b. The pdf is given by dFX (x) = 4 × 4.8225 × 10−5 x3 u (x) u (12 − x) dx = 1.929 × 10−4 x3 u (x) u (12 − x)

fX (x) =

The graph of this pdf is 0 for t < 0, the cubic 1.929 × 10−4 x3 for 0 ≤ x ≤ 12, and 0 for t > 12. c. The desired probability is P (X > 5) = 1 − FX (5) = 1 − 4.8225 × 10−5 × 54 = 1 − 0.0303 = 0.9699 d. The desired probability is

P (4 ≤ X < 6) = FX (6) − FX (4)

= 4.8225 × 10−5 × 64 − 4.8225 × 10−5 × 44 ¡ ¢ = 4.8225 × 10−5 64 − 44

= 0.0502 Problem 5.11 a. A = α where the condition b. B = β where the condition

R∞ 0

R0

Ae−αx dx = 1 must be satisfied;

βx ∞ Be dx = 1 must be satisfied;

c. C = γeγ where the condition

R∞ 1

Ce−γx dx = 1 must be satisfied;


6

CHAPTER 5. OVERVIEW OF PROBABILITY AND RANDOM VARIABLES d. D = 1/τ where the condition

0 Ddx = 1 must be satisfied.

Problem 5.12 a. Factor the joint pdf as fXY (x, y) =

√ √ A exp [− |x|] A exp [− |y|]

With proper choice of A, the two separate factors are marginal pdfs. Thus X and Y are statistically independent. b. Note that the pdf is the volume between a plane which intersects the x and y coordinate axes one unit out and the fXY coordinate axis at C. First find C from the integral Z 1 Z 1−y Z ∞Z ∞ fXY (x, y) dxdy = 1 or C (1 − x − y) dxdy = 1 −∞

−∞

0

0

This gives C = 6. Next find fX (x) and fY (y) as ( Z 1−x

fX (x) =

0

and fY (y) =

Z 1−y 0

6 (1 − x − y) dy =

6 (1 − x − y) dx =

(

3 (1 − x)2 , 0 ≤ x ≤ 1 0, otherwise 3 (1 − y)2 , 0 ≤ y ≤ 1 0, otherwise

Since the joint pdf is not equal to the product of the two marginal pdfs, X and Y are not statistically independent. Problem 5.13 RR a. fXY (x, y) dxdy = 1 gives C = 1/32; b. fXY (1, 1.5) = 1+1.5 32 = 0.0781

c. fXY (x, 3) = 0 because y = 3 is in the domain where fXY is zero. , 0 ≤ y ≤ 2, so the desired conditional pdf is d. By integration, fY (y) = (1+2y) 8 fX|Y (x|y) =

1 (1 + xy) fXY (x, y) (1 + xy) = 32 , 0 ≤ x ≤ 4, 0 ≤ y ≤ 2 = 1 fY (y) 4 (1 + 2y) 8 (1 + 2y)


5.1. PROBLEM SOLUTIONS

7

Substitute y = 1 to get the asked for result: fX|Y (x|3) =

1+x 1+x = , 0≤x≤4 4 (1 + 2) 12

Problem 5.14 a. To find A, evaluate the double integral Z ∞Z ∞ Z ∞Z ∞ fXY (x, y) dxdy = 1 = Axye−(x+y) dxdy −∞ −∞ 0 Z ∞ Z ∞0 −x xe dx ye−y dy = A 0

0

= A

Thus A = 1. b. Clearly, fX (x) = xe−x u (x) and fY (y) = ye−y u (y) c. Yes, because the joint pdf factors into the product of the marginal pdfs.

Problem 5.15 a. Use normalization of the pdf to unity: Z ∞ Z ∞ ¯∞ −2 αx u (x − α) dx = αx−2 dx = −αx−1 ¯α = 1 α

−∞

Hence, this is a pdf for any value of α.

b. The cdf is FX (x) =

Z x

−∞

αz

−2

u (z − α) dz =

½

0, x < α (1 − α/x) , x > α

c. The desired probability is P (X ≥ 10) = 1 − P (X < 10) = 1 − FX (10) =

½

1, α ≥ 10 α/10, α < 10


8

CHAPTER 5. OVERVIEW OF PROBABILITY AND RANDOM VARIABLES

Problem 5.16 The result is

⎧ 2 ⎨ exp√(−y/2σ ) , y ≥ 0 i 1 2πσ2 y fY (y) = fX (x)|x=−√y + fX (x)|x=√y √ = 2 y ⎩ 0, y < 0 h

Problem 5.17 First note that

P (Y = 0) = P (X ≤ 0) = 1/2

For y > 0, transformation of variables gives

¯ −1 ¯ ¯ dg (y) ¯ ¯ ¯ fY (y) = fX (x) ¯ dy ¯

x=g −1 (y)

Since y = g (x) = ax, we have g −1 (y) = y/a. Therefore ´ ³ 2 exp − 2ay2 σ2 , y>0 fY (y) = √ 2πa2 σ 2 For y = 0, we need to add 0.5δ (y) to reflect the fact that Y takes on the value 0 with probability 0.5. Hence, for all y, the result is ³ ´ y2 exp − 2 2 1 2a σ fY (y) = δ (y) + √ u (y) 2 2πa2 σ 2 where u (y) is the unit step function. Problem 5.18 a. The normalization of the pdf to unity provides the relationship Z ∞ Z ∞ fX (x) dx = A e−bx dx = Ae−2b /b = 1 Thus A = be2b .

−∞

2

b. The mean is E [X] = be2b

Z ∞

xe−bx dx ¶ ∙2 µ ¸ 1 −2b 2b 1 2+ e = be b b 1 = 2+ b where integration by parts was used.


5.1. PROBLEM SOLUTIONS

9

c. The second moment of X is © ª = be2b E X2

Z ∞

x2 e−bx dx µ ¶¸ ½ ∙ ¾ 1 1 2b 2 −2b 2+ 2+ e = be b b b ∙ µ ¶¸ 1 1 = 2 2+ 2+ b b 2

where integration by parts was used twice. d. The variance of X is σ 2X

© ª = E X 2 − {E [X]}2 ∙ µ ¶¸ µ ¶ 1 1 1 2 = 2 2+ 2+ − 2+ b b b 1 = 2 b

Problem 5.19 h i2 h i2 £ ¤ R2 R 2 dx 1 x3 4 1 x2 4 2 a. E X 2 = 0 x2 dx = = ; E [X] = x = 2 2 3 0 3 2 2 2 0 = 1; 3 > 1 so it is 0 true in this special case. h i4 h i4 £ ¤ R4 R 4 dx 1 x3 16 1 x2 16 2 b. E X 2 = 0 x2 dx = = ; E [X] = x = 4 4 3 0 3 4 4 2 0 = 2; 3 > 2 so it is 0 also true in this special case. n o c. Use the fact that E [X − E (X)]2 ≥ 0 (0 only if X = 0 with probability one). £ ¤ Expanding, we have E X 2 − {E [X]}2 > 0 if X 6= 0. Problem 5.20 a. The mean and variance of the Rayleigh random variable can be obtained by using tabulated definite integrals. The mean is r ¶ µ Z ∞ 2 √ 1 r 1 πσ 2 r2 2 = dr = exp − 2πσ E [R] = σ2 2σ 2 σ 2 22 (1/2σ 2 ) 2 0


10

CHAPTER 5. OVERVIEW OF PROBABILITY AND RANDOM VARIABLES

¢ ¡ R∞ by using a definite integral for 0 x2n exp −ax2 dx given in Appendix G. The mean square is µ ¶ Z ∞ 3 £ 2¤ r r2 r r2 = E R exp − , du = 2 dr dr; let u = 2 2 2 σ 2σ 2σ σ 0 Z ∞ = 2σ 2 u exp (−u) du 0 ¸ ∙ Z ∞ ∞ 2 exp (−u) du (integration by parts) = 2σ −u exp (−u)|0 + 0

= 2σ

2

Thus, the variance is £ ¤ var [R] = E R2 − E 2 [R] π = 2σ 2 − σ 2 2 which is the same as given in Table 5.4. b. The mean and second moment of the single-sided exponential pdf are easily found in terms of tabulated integrals or by integration by parts. The mean is Z ∞ αx exp (−αx) dx E [X] = 0 Z 1 ∞ u exp (−u) du = α 0 1 = α The second moment is E [X] = = = = and the variance is

Z ∞

αx2 exp (−αx) dx 0 Z ∞ 1 u2 exp (−u) du α2 0 ¸ ∙ Z ∞ ¯∞ 1 2 ¯ u exp (−u) du −u exp (−u) 0 + 2 α2 0 2 α2 £ ¤ 1 var [X] = E X 2 − E 2 [X] = 2 α


5.1. PROBLEM SOLUTIONS

11

c. The mean of the hyperbolic pdf is zero by virtue of the evenness of the pdf, . For the variance of the hyperbolic pdf, consider £ ¤ E X2 =

Z ∞

x2 (m − 1) hm−1 dx dx = 2 (|x| + h)m −∞

Z ∞ 0

x2 (m − 1) hm−1 dx dx (x + h)m

where the second integral follows by evenness of the integrand in the first integral, and the absolute value on x is unnecessary because the integration is for positive x. Integrate by parts twice to obtain

£ ¤ E X 2 = (m − 1) hm−1

("

x2 (h + h)−m−1 −m + 1

#∞ 0

+

Z ∞ 0

) 2x (x + h)−m+1 dx m−1

The first term is zero if m ≥ 3. Therefore £ ¤ E X 2 = 2hm−1

Z ∞

x (x + h)−m+1 dx =

0

2h2 , m≥4 (m − 2) (m − 3)

The variance is the same since its mean is zero.

d. The mean of a Poisson random variable is ∞ X λk E [K] = k exp (−λ) k! k=0

∞ X λk−1 = λ exp (−λ) ; let n = k − 1 (k − 1)! k=1

= λ exp (−λ)

∞ X λn

n=0

= λ

n!

= λ exp (−λ) exp (λ)


12

CHAPTER 5. OVERVIEW OF PROBABILITY AND RANDOM VARIABLES Its mean-square value is ∞ X £ ¤ λk k 2 exp (−λ) E K2 = k! k=0

= λ exp (−λ)

∞ X

λk−1 (k − 1)!

k

k=1

= λ exp (−λ)

∞ X

(n + 1)

n=0

λn n!

"∞ # ∞ X λn X λn + n = λ exp (−λ) n! n! n=1 n=0 " ∞ # ∞ X λn−1 X λn = λ exp (−λ) λ + (n − 1)! n=0 n! n=1

= λ exp (−λ) [λ exp (λ) + exp (λ)] = λ2 + λ

£ ¤ Hence, var[K] = E K 2 − E 2 [K] = λ.

e. For the geometric distribution, the mean is E [K] =

∞ X

kpq k−1

k=1

Differentiate the sum

P∞

k=0 q

k =

1 1−q , |q| < 1 to get the sum

∞ X

kqk−1 =

k=1

1 (1 − q)2

Apply this to E [K] to get E [K] =

1 p 2 = p (1 − q)

A similar procedure results in ∞ £ 2¤ X E K = k2 pqk−1 = k=1

p 1 q 1 2 + p2 = p2 + p2 (1 − q)


5.1. PROBLEM SOLUTIONS

13

Problem 5.21 ¤−1 £ ; a. A = b 1 − e−bB

¢ ¡ b. The cdf is 0 for x < 0 and 1 for x > B. For 0 ≤ x ≤ B, the result is (A/b) 1 − e−bx ; c. The mean is

∙ ¸ 1 e−bB E [X] = 1 − bB b 1 − e−bB

d. The mean-square is ¸ ∙ £ 2 ¤ 2A 1 e−bB AB 2 −bB − (1 + bB) − e E X = b b2 b2 b

where A must be substituted from part (a.).

e. For the variance, subtract the result of part (c) squared from the result of part (d). Problem 5.22 a. The integral evaluates as follows: Z ∞ Z ∞³ √ Z 2 ´2n e−y2 © ª e−x2/2σ 2n+1 σ 2n ∞ 2n −y √ dy = √ E X 2n = x2n √ dx = 2 y e dy 2 2σy π π 2πσ 2 −∞ 0 0 ¢ ¡ where y = x/ 21/2 σ . Using a table of definite integrals, the given result is obtained.

b. The integral is zero by virtue of the oddness of the integrand. Problem 5.23

E [X] = = £ ¤ E X2 = =

½

¾ 1 1 x δ (x − 5) + [u (x − 4) − u (x − 8)] dx 2 8 −∞ Z ∞ Z 1 1 8 1 1 82 − 42 11 xδ (x − 5) dx + xdx = (5) + = 2 −∞ 8 4 2 8 2 2

Z ∞

¾ 1 1 x δ (x − 5) + [u (x − 4) − u (x − 8)] dx 2 8 −∞ Z ∞ Z 1 1 8 2 1 1 83 − 43 187 2 x δ (x − 5) dx + x dx = (25) + = 2 −∞ 8 4 2 8 3 6

Z ∞

2

½


14

CHAPTER 5. OVERVIEW OF PROBABILITY AND RANDOM VARIABLES £ ¤ 187 121 11 − = α2X = E X 2 − E 2 [X] = 6 4 12

Problem 5.24 Regardless of the correlation coefficient, the mean of Z is E {Z} = E {3X − 4Y } = 3E {X} − 4E {Y } = 3 (1) − 4 (3) = −9 The variance of Z is

n o var (Z) = E [Z − E (Z)]2 n o = E [3X − 4Y − 3E(X) + 4E(Y )]2 n£ ¡ ¢ ¤2 o = E 3 X − X − 4(Y − Y ) ª © ¡ ¢ ¡ ¢ = E 9 X − X − 24 X − X (Y − Y ) + 16(Y − Y )2 = 9σ2X + 16σ 2Y − 24σ X σ Y ρXY

Putting in numbers, the results for the variance of Z are: (a) 148; (b) 122.6; (c) 59.1; (d) 21.0. Problem 5.25 The conditional pdf fX|Y (x|y) is given by fX|Y (x|y) = = = = = = =

fXY (x, y) 2πσ = fY (y)

1 √ 2

h 2 i x −2ρxy+y2 exp − 2 2 2σ (1−ρ ) 2

1−ρ exp(−y 2 /(2σ 2 )) √ 2πσ2 ∙ 2 x − 2ρxy + y 2

¸ y2 1 p √ + 2 exp − 2σ 2 (1 − ρ2 ) 2σ 2πσ 2 1 − ρ2 ∙ ¸ x2 − 2ρxy y2 1 y2 √ p exp − 2 − + 2σ (1 − ρ2 ) 2σ 2 2σ 2 (1 − ρ2 ) 2πσ 2 1 − ρ2 ∙ µ ¶¸ x2 − 2ρxy y2 1 1 √ p exp − 2 + 1− 2σ (1 − ρ2 ) 2σ 2 1 − ρ2 2πσ 2 1 − ρ2 ∙ µ ¶¸ x2 − 2ρxy 1 y2 ρ2 p √ exp − 2 + − 2σ (1 − ρ2 ) 2σ 2 1 − ρ2 2πσ 2 1 − ρ2 ∙ 2 ¸ x − 2ρxy + ρ2 y2 1 √ p exp − 2σ 2 (1 − ρ2 ) 2πσ 2 1 − ρ2 " # (x − ρy)2 1 √ p exp − 2 2σ (1 − ρ2 ) 2πσ 2 1 − ρ2


5.1. PROBLEM SOLUTIONS

15

Note that the conditional pdf of X given pY is Gaussian with conditional mean E [X|Y ] = ρY and conditional variance var(X|Y ) = (1 − ρ2 ) σ 2 .

Problem 5.26 Divide the joint Gaussian pdf of two random variables by the Gaussian pdf of Y , collect all exponential terms in a common exponent, complete the square of this exponent which will be quadratic in x and y, and the desired result is a Gaussian pdf with ¡ ¢ ρσX (Y − mY ) and var (X|Y ) = σ 2X 1 − ρ2 E {X|Y } = mX + σY The derivation follow closely that of Problem 5.25 except for the presence of nonzero means mX and mY . Problem 5.27 Convolve the two component pdfs. A sketch of the two component pdfs making up the integrand show that there is no overlap until z > 1, and the overlap ends when z > 7. The result, either obtained graphically or analytically, is given by (a trapezoid) ⎧ 0, z < 1 ⎪ ⎪ ⎪ ⎪ ⎨ (z − 1) /8, 1 ≤ z < 3 1/4, 3 ≤ z < 5 fZ (z) = ⎪ ⎪ − (z − 7) /8, 5 ≤ z ≤ 7 ⎪ ⎪ ⎩ 0, z > 7

Problem 5.28

a. From Table 5.4,

£ ¤ E [X] = 0, E X 2 = 1/32 = var [X]

b. By transformation of random variables, the pdf of Y is ¯ ¯ ¯ dx ¯ fY (y) = fX (x)|x=(y−4)/5 ¯¯ ¯¯ dy =

4 −8|y−4|/5 e 5

c. Using the transformation from X to Y , it follows that E [Y ] = E [4 + 5X] = 4 + 5E [X] = 4, h i £ ¤ E Y 2 = E (4 + 5X)2 ¤ £ ¤ £ = E 16 + 40X + 25X 2 = 16 + 40E [X] + 25E X 2 51 816 = = 25.5, = 16 + 25/32 = 32 2 ¤ £ σ 2 = E Y 2 − E 2 [Y ] = 25.5 − 42 = 9.5


16

CHAPTER 5. OVERVIEW OF PROBABILITY AND RANDOM VARIABLES

Problem 5.29 a. The characteristic function is MX (jv) =

a a − jv

b. The mean and mean-square values are ¯ ∂MX (jv) ¯¯ = 1/a E [X] = −j ¯ ∂v v=0

and

¯ £ ¤ ∂ 2 MX (jv) ¯¯ 2 E X 2 = (−j)2 = 2 ¯ ∂v2 a v=0

respectively. £ ¤ d. var[X] = E X 2 − E 2 [X] = 1/a2 . Problem 5.30 The required distributions are

µ ¶ n k P (k) = p (1 − p)n−k (Binomial) k 2

P (k) = P (k) =

e−(k−np) /[2np(1−p)] p (Laplace) 2πnp (1 − p)

(np)k −np (Poisson) e k!

Comparison tables are given below: n, p 3,1/5 a.

k 0 1 2 3

Binomial 0.512 0.384 0.096 0.008

Laplace 0.396 0.487 0.075 0.0014

Poisson 0.549 0.329 0.099 0.020


5.1. PROBLEM SOLUTIONS

17

n, p 3,1/10

k 0 1 2 3

Binomial 0.729 0.243 0.027 0.001

Laplace 0.650 0.310 0.004 1×10−6

Poisson 0.741 0.222 0.033 0.003

n, p 10,1/5

k 0 1 2 3 4 5 6 7 8 9

Binomial 0.107 0.268 0.302 0.201 0.088 0.026 0.005 7.9×10−4 7.4×10−5 4.1×10−6

Laplace 0.090 0.231 0.315 0.231 0.090 0.019 0.002 1.3×10−4 4.1×10−6 7.1×10−8

Poisson 0.135 0.271 0.271 0.180 0.090 0.036 0.012 3.4×10−3 8.6×10−4 1.9×10−4

n, p 10,1/10

k 0 1 2 3 4 5 6 7 8 9

Binomial 0.348 0.387 0.194 0.057 0.011 1.5×10−3 1.4×10−4 3.6×10−7 9.0×10−9 1×10−10

Laplace 0.241 0.420 0.241 0.046 0.003 6×10−5 3.9×10−7 6.3×10−13 1.5×10−16 1.2×10−20

b.

c.

d.

Poisson 0.368 0.368 0.183 0.061 0.015 3.1×10−3 5.1×10−4 3.4×10−3 1×10−6 1×10−7

Problem 5.31 a. P (5 or 6 heads in 10 trials) =

P

−10 = 10! 1 = 0.2461 k=5,6 [10!/k! (10 − k)!] 2 5!5! 1024

b. P (first head at trial 15) = (1/2)(1/2)4 = 1/32 P 100! −100 = 0.1193; P (first head at c. P (50 to 60 heads in 100 trials) = 60 k=50 k!(100−k)! 2 trial 50) = (1/2)(1/2)49 = 8.89 × 10−16 .


18

CHAPTER 5. OVERVIEW OF PROBABILITY AND RANDOM VARIABLES

Problem 5.32 a. Np = 26 (25) (24) (23) = 358, 800; b. Np = (26)4 = 456, 976; c. The answers are the reciprocals of the numbers found in (a) and (b), or 2.79 × 10−6 and 2.19 × 10−6 , respectively. Problem 5.33 a. The desired probability is P (fewer than 3 heads in 20 coins tossed)

=

¶ µ ¶20 2 µ X 20 1 k=0

k

2

= (1 + 20 + 190) 2−20 = 2.0123 × 10−4 b. var(N ) = npq = 20 (1/2) (1/2) = 5; E [N] = np = 10. The Laplace-approximated probability is P (< 3 heads in 20 coins tossed)

2 2 X e−(k−10) /10 √ 10π k=0 ´ ³ 1 = √ e−10 + e−81/10 + e−64/10 10π = 3.587 × 10−4

=

c. The percent error is % error =

3.587 × 10−4 − 2.0123 × 10−4 × 100 = 78.25% 2.0123 × 10−4

Problem 5.34 a. The probability of exactly one error in 105 digits, by the binomial distribution, is µ 5 ¶ ¡ ¢ ¢99,999 −5 ¡ 10 5 P 1 error in 10 10 = 1 − 10−5 1 = 105 × 0.3679 × 10−5 = 0.3679


5.1. PROBLEM SOLUTIONS

19

By the Poisson approximation, it is ¢ ¡ P 1 error in 105 = exp (−1) = 0.3679 b. The Poisson approximation gives

¢ 105 × 10−5 ¢ ¡ ¡ P 2 errors in 105 = exp −105 × 10−5 = 0.18395 2!

c. Use the Poisson approximation:

P (more than 5 errors in 105 ) = 1 − P (0, 1, 2, 3, 4, or 5 errors) 5 X (np)k exp (−np) = 1− k! k=0 ¶ µ 1 1 1 1 = 1 − exp (−1) 1 + 1 + + + + 2 6 24 120 = 1 − 0.9994 = 0.0006

Problem 5.35 a. The marginal pdf for X is " " # # (x − 1)2 1 (x − 1)2 1 exp − fX (x) = p = √ exp − 2×4 8 8π 2π (4)

The marginal pdf for Y is of the same form except for Y and y in place of X and x, respectively. b. The joint pdf is

fXY (x, y) =

= =

⎧ ¡ ¢ ¡ x−1 ¢ ³ y−1 ´ ³ y−1 ´2 ⎫ ⎪ ⎪ x−1 2 ⎨ ⎬ − 2 (0.5) + 2 2 2 2 1 √ exp − ⎪ ⎪ 2 (1 − 0.52 ) 2π (2) (2) 1 − 0.52 ⎩ ⎭ ) ( 1 (x − 1)2 − (x − 1) (y − 1) + (y − 1)2 √ exp − 4 × 2 (1 − 0.52 ) 8π 0.75 ¾ ½ 2 1 x − x − xy + y 2 − y + 1 √ exp − 6 8π 0.75


20

CHAPTER 5. OVERVIEW OF PROBABILITY AND RANDOM VARIABLES

The conditional pdf of X given Y = y is fX|Y (x| y) = =

= = =

fXY (x, y) fY (y) √1 exp 8π 0.75

o n 2 2 − (x−1) −(x−1)(y−1)+(y−1) 6 h i 2 (y−1) 1 √ exp − 8 8π

( ) (x − 1)2 − (x − 1) (y − 1) + (y − 1)2 (y − 1)2 1 √ exp − + 6 8 6π ( ) (x − 1)2 − (x − 1) (y − 1) (y − 1)2 1 √ exp − − 6 24 6π ) ) ( ( 1 1 [x − 1 − 0.5 (y − 1)]2 [x − 0.5y − 0.5]2 √ exp − = √ exp − 6 6 6π 6π

c. See Problem 5.26 to deduce that E [X| Y ] = 1 + 0.5 (Y − 1) ¡ ¢ and var [X| Y ] = 4 1 − 0.52 = 3

Problem 5.36 a. K = 1/π; b. E [X] is not defined, but one could argue that it is zero from the oddness of the integrand for this expectation. If one writes down the integral for the second moment, it clearly does not converge (the integrand approaches a constant as |x| → ∞). c. The answer is given; d. Compare the form of the characteristic function with the answer given in (c) and do a suitable redefinition of variables.


5.1. PROBLEM SOLUTIONS

21

Problem 5.37 a. The characteristic function is found from n Pn o ª © 2 MY (jv) = E ejvY = E ejv i=1 Xi = E

(n Y i=1

jvXi2

e

)

But Z ∞ o n −x2 /2σ2 2 2e ejvx √ dx = E ejvXi 2πσ2 −∞ Z ∞ −(1/2σ2 −jv)x2 e √ dx = 2πσ 2 −∞ ¡ ¢−1/2 = 1 − j2vσ 2

which follows by using the definite integral defined at the beginning of Problem 4.35. Thus ¢−N/2 ¡ MY (jv) = 1 − j2vσ2

b. The given pdf follows easily by letting α = 2σ 2 in the given Fourier transform pair. c. The mean of Y is E [Y ] = N σ2

by taking the sum of the expectations of the separate terms of the sum defining Y . The mean-square value of Xi2 is h¡ i ¢ 2 2 −1/2 ¯ h¡ ¢ i 1 − j2vσ d ¯ 2 ¯ = (−j)2 E Xi2 = 3σ 4 ¯ 2 dv v=0

Thus,

var

Xi2

¢2 i

= 3σ 4 − σ 4 = 2σ 4

Since the terms in the sum defining Y are independent, var[Y ] = 2N σ 4 . The mean and variance of Y can be put into the definition of a Gaussian pdf to get the desired approximation. d. The cases N = 2, 4, 8, and N = 16 are shown in Figure 4.1. The approximations appear to get better as N increases. Note the extreme difference between exact and approximation due to the central limit theorem not being valid for N = 2, 4.


10

20 y

Figure 5.1:

30


5.1. PROBLEM SOLUTIONS

23

e. Evident by direct substitution. Problem 5.38 R ∞ exp(−t2 /2) exp(−x2 /2) √ √ This is a matter of plotting Q (x) = x dx and Q (x) = on the same a 2π 2πx set of axes. Problem 5.39 By definition, the cdf of a Gaussian random variable is h h i i 2 (u−m)2 Z x exp − (u−m) Z ∞ exp − 2σ 2 2σ2 √ √ du = 1 − du FX (x) = 2πσ 2 2πσ 2 −∞ x Change variables to u−m v= σ with the result that ³ 2´ ¶ µ Z ∞ exp − u2 x−m √ du = 1 − Q FX (x) = 1 − σ 2π (x−m)/σ

A plot may easily be obtained byR a MATLAB ¡ 2 ¢ program. It is suggested that you use the ∞ 2 √ MATLAB function erfc(x) = π x exp −t dt. Problem 5.40 Consider the product of integrals ¡ ¢ Z ∞ ¡ ¢ Z ∞ exp −u2 /2 exp −v2 /2 √ √ du dv I (x) = 2π 2π 0 x ¡ ¢ Z 1 ∞ exp −v 2 /2 1 √ = dv = Q (x) 2 x 2 2π

where the 1/2 comes from the fact that the first integral is the integral of the right half of a Gaussian pdf. Now write the product of the two integrals as an iterated integral to get µ 2 ¶ Z ∞Z ∞ u + v2 1 exp − I (x) = dudv 2π 0 2 x

Transform the integral to polar coordinates via u = r cos φ, v = r sin φ, and dudv = rdrdφ to get Z π/2 Z ∞ ¡ ¢ 1 1 exp −r2 /2 rdrdφ I (x) = Q (x) = 2 2π 0 x/ sin φ µ ¶ Z π/2 x2 1 exp − dφ = 2π 0 2 sin2 φ


24

CHAPTER 5. OVERVIEW OF PROBABILITY AND RANDOM VARIABLES

Multiplying through by 2 gives the answer given in the problem statement. Problem 5.41 The following relationships involving the Q-function will prove useful: ¡ ¢ exp −u2 /2 √ du = Q (a) − Q (b) , b > a; 2π a Q (x) = 1 − Q (|x|) , x < 0; Q (0) = 0.5

Z b

a. The probability is Z 15

2

e−(x−10) /50 √ dx 50π −15 ¡ ¢ Z 1 exp −u2 /2 x − 10 √ = du; u = √ 2π 25 −5 = Q (−5) − Q (1) = 1 − Q (5) − Q (1)

P (|X| ≤ 15) =

= 0.8413

b. This probability is Z 20

2

e−(x−10) /50 √ dx P (10 < X ≤ 20) = 50π 10 ¡ ¢ Z 2 exp −u2 /2 x − 10 √ du; u = √ = 2π 25 0 = Q (0) − Q (2) = 0.5 − Q (2) = 0.4772

c. The desired probability is Z 25

2

e−(x−10) /50 √ P (5 < X ≤ 25) = dx 50π 5 ¡ ¢ Z 3 exp −u2 /2 x − 10 √ = du; u = √ 2π 25 −1 = Q (−1) − Q (3) = 1 − Q (1) − Q (3) = 0.8400


5.1. PROBLEM SOLUTIONS

25

d. The result is Z 30

2

e−(x−10) /50 √ dx P (20 < X ≤ 30) = 50π 20 ¡ ¢ Z 4 exp −u2 /2 x − 10 √ du; u = √ = 2π 25 2 = Q (2) − Q (4) = 0.0227

Problem 5.42 a. Observe that σ

2

= ≥ ≥

Z ∞

Z−∞

(x − m)2 fX (x) dx

|x−m|>kσ

Z

(x − m)2 fX (x) dx k2 σ 2 fX (x) dx

|x−m|>kσ

= k2 σ 2 P {|X − m| > kσ} = k2 σ 2 {1 − P [|X − m| > kσ]} or P [|X − m| < kσ] ≥ 1 −

1 k2

b. Note that for this random variable mX = 0 and σ 2X = 22 /12 = 1/3. The actual probability is h √ i P [|X − mX | < kσ X ] = P |X| < k/ 3 ( R √ √ √ k/ 3 dx √ = k/ 3, k < 3 −k/ 3√2 = 1, k ≥ 3 A comparison is given in the table below: k Actual prob. Ch. bound 0 0 √ 1 1/ 3 = 0.577 0 2 1 0.75 3 1 0.889


26

CHAPTER 5. OVERVIEW OF PROBABILITY AND RANDOM VARIABLES

Note that in all cases the bound exceeds the actual probability. Problem 5.43 The desired probability is P [|X| > kσ] = P [X < −kσ] + P [X > kσ] , k = 1, 2, 3 ¢ ¢ ¡ ¡ Z −kσ Z ∞ exp −x2 /2σ 2 exp −x2 /2σ 2 √ √ = dx + dx 2πσ 2 2πσ 2 −∞ kσ = 2Q (k) a. P [|X| > σ] = 0.3173; b. P [|X| > 2σ] = 0.0455; c. P [|X| > 3σ] = 0.0027. Problem 5.44 a. The mean is 0 by even symmetry of the pdf. Thus, the variance is £ ¤ σ 2 = var [X] = E X 2 (because the mean is 0) Z ∞ ³a´ = x2 exp (−a|x|) dx 2 −∞ ³a´ Z ∞ = 2 x2 exp (−ax) dx 2 0 Z ∞ ³ ´2 dy y = 2a exp (−y) a a µ0 ¶ µ ¶ 1 1 = 1/a2 = 2a 3 a 2 Thus, in terms of the variance, the pdf is fX (x) =

1 exp (− |x| /σ) 2σ

b. The desired probability is P [|X| > kσ] = P [X < −kσ] + P [X > kσ] , k = 1, 2, 3 Z −kσ Z ∞ 1 1 = exp (x/σ) dx + exp (−x/σ) dx 2σ 2σ kσ Z−∞ ∞ exp (−u) du (by eveness of the integrand and u = x/σ) = k

= exp (−k)


5.1. PROBLEM SOLUTIONS

27

Thus, the required numerical values are $P[|X| > \sigma] = 0.3679$; $P[|X| > 2\sigma] = 0.1353$; $P[|X| > 3\sigma] = 0.0498$.

Problem 5.45
Since both X and Y are Gaussian and the transformation is linear, Z is Gaussian. Thus, all we need are the mean and variance of Z. Since the means of X and Y are 0, so is the mean of Z. Therefore, its variance is
$$\sigma_Z^2 = E[Z^2] = E[(X+2Y)^2] = E[X^2 + 4XY + 4Y^2] = E[X^2] + 4E[XY] + 4E[Y^2]$$
$$= \sigma_X^2 + 4\sigma_X\sigma_Y\rho_{XY} + 4\sigma_Y^2\ \text{(because the means are 0)} = 3 + (4)\sqrt{3}\sqrt{4}(-0.4) + (4)(4) = 19 - 3.2\sqrt{3} = 13.457$$

The pdf of Z is
$$f_Z(z) = \frac{\exp(-z^2/26.915)}{\sqrt{26.915\pi}}$$

Problem 5.46
Since both X and Y are Gaussian and the transformation is linear, Z is Gaussian. Thus, all we need are the mean and variance of Z. The mean of Z is
$$m_Z = E[3X + Y] = 3m_X + m_Y = (3)(1) + 2 = 5$$
Its variance is
$$\sigma_Z^2 = E[(Z - m_Z)^2] = E[(3X + Y - 3m_X - m_Y)^2] = E[(3(X - m_X) + (Y - m_Y))^2]$$
$$= 9E[(X - m_X)^2] + 6E[(X - m_X)(Y - m_Y)] + E[(Y - m_Y)^2] = 9\sigma_X^2 + 6\sigma_X\sigma_Y\rho_{XY} + \sigma_Y^2$$
$$= 9(3) + (6)\sqrt{3}\sqrt{2}(0.2) + 2 = 29 + 1.2\sqrt{6} = 31.939$$



The pdf of Z is
$$f_Z(z) = \frac{\exp[-(z-5)^2/63.879]}{\sqrt{63.879\pi}}$$

Problem 5.47
a. $f_X(x) = \dfrac{\exp[-(x-5)^2/2]}{\sqrt{2\pi}}$; $\quad f_Y(y) = \dfrac{\exp[-(y-3)^2/4]}{\sqrt{4\pi}}$

b. $f_{XY}(x,y) = \dfrac{\exp[-(x-5)^2/2]}{\sqrt{2\pi}}\,\dfrac{\exp[-(y-3)^2/4]}{\sqrt{4\pi}}$

c. $E(Z_1) = m_X + m_Y = 5 + 3 = 8$; $\quad E(Z_2) = m_X - m_Y = 5 - 3 = 2$

d. The variances are
$$\sigma_{Z_i}^2 = E[(Z_i - m_{Z_i})^2] = E[(X \pm Y - m_X \mp m_Y)^2] = E[((X - m_X) \pm (Y - m_Y))^2]$$
$$= E[(X - m_X)^2] \pm 2E[X - m_X]\,E[Y - m_Y] + E[(Y - m_Y)^2] = \sigma_X^2 + 0 + \sigma_Y^2 = 1 + 2 = 3$$

e. The desired pdf is
$$f_{Z_1}(z_1) = \frac{\exp[-(z_1-8)^2/6]}{\sqrt{6\pi}}$$

f. The desired pdf is
$$f_{Z_2}(z_2) = \frac{\exp[-(z_2-2)^2/6]}{\sqrt{6\pi}}$$

Problem 5.48
a. $f_X(x) = \dfrac{\exp[-(x-4)^2/6]}{\sqrt{6\pi}}$; $\quad f_Y(y) = \dfrac{\exp[-(y-2)^2/10]}{\sqrt{10\pi}}$

b. $f_{XY}(x,y) = \dfrac{\exp[-(x-4)^2/6]}{\sqrt{6\pi}}\,\dfrac{\exp[-(y-2)^2/10]}{\sqrt{10\pi}}$

c. $E(Z_1) = 3m_X + m_Y = (3)(4) + 2 = 14$; $\quad E(Z_2) = 3m_X - m_Y = (3)(4) - 2 = 10$



d. The variances are
$$\sigma_{Z_i}^2 = E[(Z_i - m_{Z_i})^2] = E[(3X \pm Y - 3m_X \mp m_Y)^2] = E[(3(X - m_X) \pm (Y - m_Y))^2]$$
$$= 9E[(X - m_X)^2] \pm 6E[X - m_X]\,E[Y - m_Y] + E[(Y - m_Y)^2] = 9\sigma_X^2 + 0 + \sigma_Y^2 = (9)(3) + 5 = 32$$

e. The pdf of $Z_1$ is
$$f_{Z_1}(z_1) = \frac{\exp[-(z_1-14)^2/64]}{\sqrt{64\pi}}$$

f. The pdf of $Z_2$ is
$$f_{Z_2}(z_2) = \frac{\exp[-(z_2-10)^2/64]}{\sqrt{64\pi}}$$

Problem 5.49
a. The probability of mean exceedance for a uniformly distributed random variable is
$$P\left[X > \tfrac{1}{2}(a+b)\right] = \int_{(a+b)/2}^{b}\frac{dx}{b-a} = \frac{x}{b-a}\Big|_{(a+b)/2}^{b} = \frac{1}{2}$$

b. The probability of mean exceedance for a Rayleigh distributed random variable is
$$P\left[X > \sqrt{\tfrac{\pi}{2}}\,\sigma\right] = \int_{\sqrt{\pi/2}\,\sigma}^{\infty}\frac{r}{\sigma^2}\exp\left(-\frac{r^2}{2\sigma^2}\right)dr;\quad v = \frac{r^2}{2\sigma^2}$$
$$= \int_{\pi/4}^{\infty}e^{-v}\,dv = e^{-\pi/4} = 0.4559$$

c. The probability of mean exceedance for a one-sided exponentially distributed random variable is
$$P\left[X > \frac{1}{\alpha}\right] = \int_{1/\alpha}^{\infty}\alpha\exp(-\alpha x)\,dx = e^{-1} = 0.3679$$
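Each of the three mean-exceedance probabilities can be checked by simulation. A Monte Carlo sketch in MATLAB (the parameter values a, b, sigma, and alpha below are arbitrary choices):

N = 1e6;
a = 0; b = 2;  U = a + (b - a)*rand(1, N);          % uniform on (a, b)
sigma = 1;     R = sigma*sqrt(-2*log(rand(1, N)));  % Rayleigh via the inverse-CDF method
alpha = 2;     E = -log(rand(1, N))/alpha;          % one-sided exponential
disp(mean(U > (a + b)/2))            % near 1/2
disp(mean(R > sigma*sqrt(pi/2)))     % near exp(-pi/4) = 0.4559
disp(mean(E > 1/alpha))              % near exp(-1) = 0.3679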



5.2 Computer Exercises

Computer Exercise 5.1
% ce5_1.m: Generate an exponentially distributed random variable
% with parameter a from a uniform random variable in [0, 1)
%
clf
a = input('Enter the parameter of the exponential pdf: f(x) = a*exp(-a*x)*u(x) => ');
N = input('Enter number of exponential random variables to be generated => ');
U = rand(1,N);
V = -(1/a)*log(U);
[M, X] = hist(V);
disp(' ')
disp('No. of random variables generated')
disp(N)
disp(' ')
disp('Bin centers')
disp(X)
disp(' ')
disp('No. of random variable counts in each bin')
disp(M)
disp(' ')
norm_hist = M/(N*(X(2)-X(1)));
plot(X, norm_hist, 'o')
hold
plot(X, a*exp(-a*X), '--'), xlabel('x'), ylabel('f_X(x)'),...
   title(['Theoretical pdf and histogram for ',num2str(N),...
   ' computer generated exponential random variables'])
legend(['Histogram points'],['Exponential pdf; a = ', num2str(a)])

A typical run follows with a plot given in Fig. 5.2.

>> ce5_1
Enter the parameter of the exponential pdf: f(x) = a*exp(-a*x)*u(x) => 2
Enter number of exponential random variables to be generated => 5000
No. of random variables generated
    5000



Bin centers
  Columns 1 through 6
    0.2094    0.6280    1.0467    1.4653    1.8839    2.3026
  Columns 7 through 10
    2.7212    3.1398    3.5585    3.9771
No. of random variable counts in each bin
  Columns 1 through 5
    2880    1200    543    215    90
  Columns 6 through 10
    38    16    9    5    4
Current plot held

Figure 5.2: Theoretical pdf and histogram for 5000 computer generated exponential random variables.

Computer Exercise 5.2
% ce5_2.m: Generate pairs of Gaussian random variables from Rayleigh and uniform RVs
%
clf
m = input('Enter the mean of the Gaussian random variables => ');
sigma = input('Enter the standard deviation of the Gaussian random variables => ');
N = input('Enter number of Gaussian random variable pairs to be generated => ');
U = rand(1,N);
V = rand(1,N);
R = sqrt(-2*log(V));
X = sigma*R.*cos(2*pi*U)+m;
Y = sigma*R.*sin(2*pi*U)+m;
disp(' ')
disp('Covariance matrix of X and Y vectors:')
disp(cov(X,Y))
disp(' ')
[MX, X_bin] = hist(X, 20);
norm_MX = MX/(N*(X_bin(2)-X_bin(1)));
[MY, Y_bin] = hist(Y, 20);
norm_MY = MY/(N*(Y_bin(2)-Y_bin(1)));
gauss_pdf_X = exp(-(X_bin - m).^2/(2*sigma^2))/sqrt(2*pi*sigma^2);
gauss_pdf_Y = exp(-(Y_bin - m).^2/(2*sigma^2))/sqrt(2*pi*sigma^2);
subplot(2,1,1), plot(X_bin, norm_MX, 'o')
hold
subplot(2,1,1), plot(X_bin, gauss_pdf_X, '--'), xlabel('x'), ylabel('f_X(x)'),...
   title(['Theoretical pdfs and histograms for ',num2str(N),...
   ' computer generated independent Gaussian RV pairs'])
legend(['Histogram points'],['Gauss pdf; \sigma, \mu = ', num2str(sigma),', ', num2str(m)])
subplot(2,1,2), plot(Y_bin, norm_MY, 'o')
hold
subplot(2,1,2), plot(Y_bin, gauss_pdf_Y, '--'), xlabel('y'), ylabel('f_Y(y)')

A typical run follows with a plot given in Fig. 5.3.

>> ce5_2
Enter the mean of the Gaussian random variables => 2
Enter the standard deviation of the Gaussian random variables => 3
Enter number of Gaussian random variable pairs to be generated => 5000
Covariance matrix of X and Y vectors:
    9.0928    0.2543
    0.2543    9.0777
Current plot held



Figure 5.3: Theoretical pdfs and histograms for 5000 computer generated independent Gaussian RV pairs.



Computer Exercise 5.3
% ce5_3.m: Generate a sequence of Gaussian random variables with specified correlation
% between adjacent RVs
%
clf
m = input('Enter the mean of the Gaussian random variables => ');
sigma = input('Enter the standard deviation of the Gaussian random variables => ');
rho = input('Enter correlation coefficient between adjacent Gaussian random variables => ');
N = input('Enter number of Gaussian random variables to be generated => ');
var12 = sigma^2*(1 - rho^2);
X = [];
X(1) = sigma*randn(1)+m;
for k = 2:N
   m12 = m + rho*(X(k-1) - m);
   X(k) = sqrt(var12)*randn(1) + m12;
end
[MX, X_bin] = hist(X, 20);
norm_MX = MX/(N*(X_bin(2)-X_bin(1)));
gauss_pdf_X = exp(-(X_bin - m).^2/(2*sigma^2))/sqrt(2*pi*sigma^2);
subplot(2,1,1), plot(X_bin, norm_MX, 'o')
hold
subplot(2,1,1), plot(X_bin, gauss_pdf_X, '--'), xlabel('x'), ylabel('f_X(x)'),...
   title(['Theoretical pdf and histogram for ',num2str(N),' computer generated Gaussian RVs'])
legend(['Histogram points'],['Gauss pdf; \sigma, \mu = ', num2str(sigma),', ', num2str(m)],2)
Z = X(1:50);
subplot(2,1,2), plot(Z, 'x'), xlabel('k'), ylabel('X(k)')
title(['Gaussian sample values with correlation ', num2str(rho), ' between samples'])

A typical run follows with a plot given in Fig. 5.4.

>> ce5_3
Enter the mean of the Gaussian random variables => 0
Enter the standard deviation of the Gaussian random variables => 2
Enter correlation coefficient between adjacent Gaussian random variables => 0.8
Enter number of Gaussian random variables to be generated => 250
Current plot held



Figure 5.4: Theoretical pdf and histogram for 250 computer generated Gaussian RVs; Gaussian sample values with correlation 0.8 between samples.



Computer Exercise 5.4
% ce5_4.m: Testing the validity of the Central Limit Theorem with sums of uniform
% random numbers
%
clf
N = input('Enter number of Gaussian random variables to be generated => ');
M = input('Enter number of uniform random variables to be summed to generate one GRV => ');
% Mean of uniform = 0; variance = 1/12
Y = rand(M,N)-0.5;
X_gauss = sum(Y)/(sqrt(M/12));
disp(' ')
disp('Estimated variance of the Gaussian RVs:')
disp(cov(X_gauss))
disp(' ')
[MX, X_bin] = hist(X_gauss, 20);
norm_MX = MX/(N*(X_bin(2)-X_bin(1)));
gauss_pdf_X = exp(-X_bin.^2/2)/sqrt(2*pi);
plot(X_bin, norm_MX, 'o')
hold
plot(X_bin, gauss_pdf_X, '--'), xlabel('x'), ylabel('f_X(x)'),...
   title(['Theoretical pdf & histogram for ',num2str(N),...
   ' Gaussian RVs each generated as the sum of ', num2str(M), ' uniform RVs'])
legend(['Histogram points'],['Gauss pdf; \sigma = 1, \mu = 0'],2)

A typical run follows with a plot given in Fig. 5.5.

>> ce5_4
Enter number of Gaussian random variables to be generated => 1000
Enter number of uniform random variables to be summed to generate one GRV => 10
Estimated variance of the Gaussian RVs:
    0.9091
Current plot held



Figure 5.5: Theoretical pdf and histogram for 1000 Gaussian RVs each generated as the sum of 10 uniform RVs.


Chapter 6

Random Signals and Noise

6.1 Problem Solutions

Problem 6.1
The various sample functions are as follows. Sample functions for case (a) are horizontal lines at levels 2A, 0, and -2A, each of which occurs equally often (with probability 1/3). Sample functions for case (b) are horizontal lines at levels 3A, 2A, A, -A, -2A, and -3A, each of which occurs equally often (with probability 1/6). Sample functions for case (c) are horizontal lines at levels 4A, 2A, -2A, and -4A, or oblique straight lines of slope A or -A, each of which occurs equally often (with probability 1/6).

Problem 6.2
a. For case (a) of Problem 6.1, since the sample functions are constant with time and are less than or equal to 2A, the probability is one. For case (b) of Problem 6.1, since the sample functions are constant with time and 5 out of 6 are less than or equal to 2A, the probability is 5/6. For case (c) of Problem 6.1, the probability is 5/6 because 5 out of 6 of the sample functions will be less than 2A at t = 2.
b. The probabilities are now 2/3, 1/2, and 1/2, respectively.
c. The probabilities are 1, 5/6, and 5/6, respectively.

Problem 6.3
a. The sketches would consist of square waves with random delay with respect to t = 0.


b. X(t) takes on only two values, A and -A, and these are equally likely. Thus
$$f_X(x) = \frac{1}{2}\delta(x - A) + \frac{1}{2}\delta(x + A)$$

Problem 6.4
a. The sample functions at the integrator output are of the form
$$Y^{(i)}(t) = \int_0^t A\cos(\omega_0\lambda)\,d\lambda = \frac{A}{\omega_0}\sin(\omega_0 t)$$
where A is Gaussian.
b. $Y^{(i)}(t_0)$ is Gaussian with mean zero and standard deviation
$$\sigma_Y = \sigma_a\,\frac{|\sin(\omega_0 t_0)|}{\omega_0}$$
c. Not stationary (the second moment, for example, depends on time) and not ergodic.

Problem 6.5
a. By inspection of the sample functions, the mean is zero. Considering the time-average correlation function, defined as
$$R(\lambda) = \lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T}x(t)x(t+\lambda)\,dt = \frac{1}{T_0}\int_{T_0}x(t)x(t+\lambda)\,dt$$
where the last expression follows because of the periodicity of x(t), it follows from a sketch of the integrand that
$$R(\lambda) = \begin{cases}A^2(1 - 4\lambda/T_0), & 0 \le \lambda \le T_0/2\\ A^2(1 + 4\lambda/T_0), & -T_0/2 \le \lambda \le 0\end{cases}$$
Since R(λ) is periodic, this defines the autocorrelation function for all λ.
b. Yes, it is wide-sense stationary. The random delay, being uniform over a full period, means that no untypical sample functions occur.



Problem 6.6
a. The time-average mean is zero. The statistical-average mean is
$$E[X(t)] = \int_{\pi/2}^{\pi}A\cos(2\pi f_0 t + \theta)\,\frac{d\theta}{\pi/2} = \frac{2A}{\pi}\sin(2\pi f_0 t + \theta)\Big|_{\pi/2}^{\pi}$$
$$= \frac{2A}{\pi}\left[\sin(2\pi f_0 t + \pi) - \sin(2\pi f_0 t + \pi/2)\right] = \frac{2A}{\pi}\left[-\sin(2\pi f_0 t) - \cos(2\pi f_0 t)\right] = -\frac{2\sqrt{2}A}{\pi}\sin(2\pi f_0 t + \pi/4)$$
The time-average variance is $A^2/2$. The statistical-average second moment is
$$E[X^2(t)] = \int_{\pi/2}^{\pi}A^2\cos^2(2\pi f_0 t + \theta)\,\frac{d\theta}{\pi/2} = \frac{A^2}{\pi}\int_{\pi/2}^{\pi}\left[1 + \cos(4\pi f_0 t + 2\theta)\right]d\theta$$
$$= \frac{A^2}{2} + \frac{A^2}{2\pi}\left[\sin(4\pi f_0 t + 2\pi) - \sin(4\pi f_0 t + \pi)\right] = \frac{A^2}{2} + \frac{A^2}{\pi}\sin(4\pi f_0 t)$$
The statistical-average variance is this expression minus $E^2[X(t)]$.

b. The time-average autocorrelation function is
$$\langle x(t)x(t+\tau)\rangle = \frac{A^2}{2}\cos(2\pi f_0\tau)$$
The statistical-average autocorrelation function is
$$R(\tau) = \int_{\pi/2}^{\pi}A^2\cos(2\pi f_0 t + \theta)\cos[2\pi f_0(t+\tau) + \theta]\,\frac{d\theta}{\pi/2} = \frac{A^2}{\pi}\int_{\pi/2}^{\pi}\left[\cos(2\pi f_0\tau) + \cos(2\pi f_0(2t+\tau) + 2\theta)\right]d\theta$$
$$= \frac{A^2}{2}\cos(2\pi f_0\tau) + \frac{A^2}{2\pi}\sin(2\pi f_0(2t+\tau) + 2\theta)\Big|_{\pi/2}^{\pi} = \frac{A^2}{2}\cos(2\pi f_0\tau) + \frac{A^2}{\pi}\sin[2\pi f_0(2t+\tau)]$$

c. No. The random phase, not being uniform over $(0, 2\pi)$, means that for a given time t certain phases will be favored over others.

Problem 6.7
a. The time-average mean is zero. The time-average autocorrelation function is
$$\langle Y(t)Y(t+\tau)\rangle = \frac{1}{T_0}\int_{T_0}\left(\frac{A}{\omega_0}\right)^2\sin(\omega_0 t)\sin[\omega_0(t+\tau)]\,dt = \frac{1}{2T_0}\left(\frac{A}{\omega_0}\right)^2\int_{T_0}\{\cos(\omega_0\tau) - \cos[\omega_0(2t+\tau)]\}\,dt = \frac{1}{2}\left(\frac{A}{\omega_0}\right)^2\cos(\omega_0\tau)$$

b. Again, the mean is zero because the mean of A is zero. The ensemble-average autocorrelation function is
$$R(\tau) = E[Y(t;A)Y(t+\tau;A)] = \int_{-\infty}^{\infty}\left(\frac{a}{\omega_0}\right)^2\sin(\omega_0 t)\sin[\omega_0(t+\tau)]\,\frac{\exp(-a^2/2\sigma_a^2)}{\sqrt{2\pi\sigma_a^2}}\,da$$
$$= \frac{\sigma_a^2}{\omega_0^2}\sin(\omega_0 t)\sin[\omega_0(t+\tau)] = \frac{\sigma_a^2}{2\omega_0^2}\{\cos(\omega_0\tau) - \cos[\omega_0(2t+\tau)]\}$$



c. No, it is not wide-sense stationary because the ensemble-average autocorrelation function is time dependent (it is not a function of the time difference only).

Problem 6.8
From the measurements given, assuming ergodicity, we can infer that the mean is 6 V and the second moment is 49 V^2 (we assume that the true-rms meter is DC coupled). Thus the variance is 49 - 36 = 13 V^2. Hence, the pdf of the noise voltage is
$$f_X(x) = \frac{e^{-(x-6)^2/26}}{\sqrt{26\pi}}$$

Problem 6.9
a. Suitable; b. Suitable; c. Not suitable because the Fourier transform is not everywhere nonnegative; d. Not suitable because autocorrelation functions must be even; e. Suitable; f. Suitable.

Problem 6.10
Use the Fourier transform pair
$$N_0 W\,\mathrm{sinc}(2W\tau) \longleftrightarrow \frac{N_0}{2}\Pi(f/2W)$$
with $W = 10^3$ Hz and $N_0/2 = 2\times 10^{-5}$ V^2/Hz to get the autocorrelation function as
$$R_X(\tau) = N_0 W\,\mathrm{sinc}(2W\tau) = (4\times 10^{-5})(10^3)\,\mathrm{sinc}(2\times 10^3\tau) = 0.04\,\mathrm{sinc}(2\times 10^3\tau)$$
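As a check, the autocorrelation function can also be computed by numerically inverse Fourier transforming the lowpass power spectral density; the test lag below is an arbitrary choice:

W = 1e3; N0_over_2 = 2e-5;                      % Hz and V^2/Hz, from the problem
tau = 1e-4;                                     % test lag in seconds
f = linspace(-W, W, 1e4);                       % S_X(f) is zero outside |f| <= W
Rx_num = trapz(f, N0_over_2*cos(2*pi*f*tau))    % numerical inverse transform of an even PSD
Rx_formula = 0.04*sin(2*pi*W*tau)/(2*pi*W*tau)  % 0.04*sinc(2W*tau); both give about 0.0374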



Problem 6.11
a. The r(τ) function, (6.55), is
$$r(\tau) = \frac{1}{T}\int_{-\infty}^{\infty}p(t)p(t+\tau)\,dt = \frac{1}{T}\int_{-T/2}^{T/2-\tau}\cos(\pi t/T)\cos\left[\frac{\pi(t+\tau)}{T}\right]dt = \frac{1}{2}(1 - \tau/T)\cos\left(\frac{\pi\tau}{T}\right),\quad 0 \le \tau \le T/2$$
Now r(τ) = r(-τ) and r(τ) = 0 for |τ| > T/2. Therefore it can be written as
$$r(\tau) = \frac{1}{2}\Lambda(\tau/T)\cos\left(\frac{\pi\tau}{T}\right)$$
Hence, by (6.54),
$$R_a(\tau) = \frac{A^2}{2}\Lambda(\tau/T)\cos\left(\frac{\pi\tau}{T}\right)$$
The power spectral density is
$$S_a(f) = \frac{A^2 T}{4}\left\{\mathrm{sinc}^2\left[T\left(f - \frac{1}{2T}\right)\right] + \mathrm{sinc}^2\left[T\left(f + \frac{1}{2T}\right)\right]\right\}$$
b. The autocorrelation function now becomes
$$R_b(\tau) = A^2[2r(\tau) + r(\tau+T) + r(\tau-T)]$$
The corresponding power spectrum is
$$S_b(f) = A^2 T\left[\mathrm{sinc}^2(Tf - 0.5) + \mathrm{sinc}^2(Tf + 0.5)\right]\cos^2(\pi f T)$$
c. The plots are left for the student.

Problem 6.12
a. The autocorrelation functions are
$$R_X(\tau) = E[X(t)X(t+\tau)] = R_n(\tau) + E\{A^2\cos(\omega_0 t + \Theta)\cos[\omega_0(t+\tau) + \Theta]\} = R_n(\tau) + \frac{A^2}{2}\cos(\omega_0\tau) = B\Lambda(\tau/\tau_0) + \frac{A^2}{2}\cos(\omega_0\tau)$$



and
$$R_Y(\tau) = E[Y(t)Y(t+\tau)] = R_n(\tau) + E\{A^2\sin(\omega_0 t + \Theta)\sin[\omega_0(t+\tau) + \Theta]\} = R_n(\tau) + \frac{A^2}{2}\cos(\omega_0\tau) = B\Lambda(\tau/\tau_0) + \frac{A^2}{2}\cos(\omega_0\tau)$$

b. The cross-correlation function is
$$R_{XY}(\tau) = E[X(t)Y(t+\tau)] = R_n(\tau) + E\{A^2\cos(\omega_0 t + \Theta)\sin[\omega_0(t+\tau) + \Theta]\} = B\Lambda(\tau/\tau_0) + \frac{A^2}{2}\sin(\omega_0\tau)$$

Problem 6.13
a. By definition,
$$R_Z(\tau) = E[Z(t)Z(t+\tau)] = E[X(t)X(t+\tau)Y(t)Y(t+\tau)] = E[X(t)X(t+\tau)]\,E[Y(t)Y(t+\tau)] = R_X(\tau)R_Y(\tau)$$
b. Since a product in the time domain is convolution in the frequency domain, it follows that
$$S_Z(f) = S_X(f) * S_Y(f)$$
c. Use the transform pairs
$$2W\,\mathrm{sinc}(2W\tau) \longleftrightarrow \Pi(f/2W)\qquad\text{and}\qquad \cos(2\pi f_0\tau) \longleftrightarrow \frac{1}{2}\delta(f - f_0) + \frac{1}{2}\delta(f + f_0)$$
Also,
$$R_Y(\tau) = E\{4\cos(50\pi t + \theta)\cos[50\pi(t+\tau) + \theta]\} = 2\cos(50\pi\tau)$$
Using the first transform pair, we have $R_X(\tau) = 500\,\mathrm{sinc}(100\tau)$.


This gives
$$R_Z(\tau) = [500\,\mathrm{sinc}(100\tau)][2\cos(50\pi\tau)] = 1000\,\mathrm{sinc}(100\tau)\cos(50\pi\tau)$$
and
$$S_Z(f) = 5\left[\Pi\left(\frac{f-25}{100}\right) + \Pi\left(\frac{f+25}{100}\right)\right]$$

Problem 6.14
a. $E[X^2(t)] = R(0) = 12$ W; $E^2[X(t)] = \lim_{\tau\to\infty}R(\tau) = 9$ W; $\sigma_X^2 = E[X^2(t)] - E^2[X(t)] = 12 - 9 = 3$ W.

b. DC power $= E^2[X(t)] = 9$ W.

c. Total power $= E[X^2(t)] = 12$ W.

d. $S(f) = 9\delta(f) + 15\,\mathrm{sinc}^2(5f)$.
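A quick numerical check of part (d): the continuous part of S(f) should integrate to the AC power of 3 W found in part (a), with the impulse carrying the 9 W of DC power. A MATLAB sketch (the integration limits are an arbitrary truncation):

f = -50:1e-3:50;
sincf = (sin(pi*5*f) + (f==0))./(pi*5*f + (f==0));   % sinc(5f) with the 0/0 point handled
ac_power = trapz(f, 15*sincf.^2)                     % approximately 3 W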

Problem 6.15
a. The autocorrelation function is
$$R_Y(\tau) = E[Y(t)Y(t+\tau)] = E\{[X(t) + X(t-T)][X(t+\tau) + X(t+\tau-T)]\}$$
$$= E[X(t)X(t+\tau)] + E[X(t)X(t+\tau-T)] + E[X(t-T)X(t+\tau)] + E[X(t-T)X(t+\tau-T)]$$
$$= 2R_X(\tau) + R_X(\tau-T) + R_X(\tau+T)$$

b. Application of the time-delay theorem of Fourier transforms gives
$$S_Y(f) = 2S_X(f) + S_X(f)[\exp(-j2\pi fT) + \exp(j2\pi fT)] = 2S_X(f) + 2S_X(f)\cos(2\pi fT) = 4S_X(f)\cos^2(\pi fT)$$



c. Use the transform pair RX (τ ) = 6Λ (τ ) ←→ SX (f ) = 6 sinc2 (f ) and the result of (b) to get SY (f ) = 24 sinc2 (f ) cos2 (πf /4)
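Since $S_Y(f) = 4S_X(f)\cos^2(\pi fT)$ and the factor here is $\cos^2(\pi f/4)$, the implied delay is T = 1/4. A numerical consistency check that the area under S_Y(f) equals R_Y(0) = 2R_X(0) + R_X(T) + R_X(-T):

T = 1/4;
Ry0 = 2*6 + 2*6*(1 - T)                               % 21 W from the autocorrelation result
f = -60:1e-3:60;                                      % truncation; the sinc^2 tails are negligible
sincf = (sin(pi*f) + (f==0))./(pi*f + (f==0));
Py = trapz(f, 24*sincf.^2.*cos(pi*f/4).^2)            % also approximately 21 W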

Problem 6.16 a. The student should carry out the sketch. R 0+ b. DC power = 0− S (f ) df = 10 W. R∞ c. Total power = −∞ S (f ) df = 10 + 25/5 + 5 + 5 = 25 W

d. Power for |f | ≤ 0.2 Hz = 10 + 0.9 (25/5) = 14.5 W. Fraction of total power = 14.5/25 = 0.58 = 58%.

Problem 6.17 a. This is left for the student. b. Using the Fourier transform of a two-sided decaying exponential and the modulation theorem, the Fourier transform of the first is SX1 (f ) =

+ α2 + [2π (f − 1)]2 α2 + [2π (f + 1)]2

The Fourier transform of the second is SX2 (f ) =

4α + 2δ (f − b) + 2δ (f + b) α2 + (2πf )2

Using the Fourier transform of the Gaussian pulse, the Fourier transform of the third is µ 2 ¶ 5√ π SX3 (f ) = π exp − τ 2 2 4 All Fourier transforms are non-negative functions of frequency.


c. None of the Fourier transforms above has a unit impulse at f = 0, so there is no DC power.
d. Evaluate the correlation function at τ = 0 to get the total power. The results are 4, 6, and 5 W, respectively, for (a)-(c).
e. Only the second one has a periodic component at b Hz.

Problem 6.18 a. The output power spectral density is ¡ ¢ Sout (f ) = 10−6 Π f /5 × 105

Note that the rectangular pulse function does not need to be squared because its amplitude squared is unity. b. Use the Fourier transform pair 2W sinc(2W τ ) ↔ Π (f/2W ). The autocorrelation function of the output is ¡ ¢ ¡ ¢ Rout (τ ) = 5 × 105 × 10−6 sinc 5 × 105 τ = 0.5 sinc 5 × 105 τ c. The output power is 0.5 W, which can be found from Rout (0) or the integral of Sout (f ).

Problem 6.19 a. By assuming a unit impulse at the input, the impulse response (i.e., the response to a unit impulse) is 1 h (t) = [u (t) − u (t − T )] T b. Use the time delay theorem of Fourier transforms and the Fourier transform of a rectangular pulse to get H (f ) = sinc (fT ) e−jπf T c. The output power spectral density is S0 (f ) =

N0 sinc2 (fT ) 2

d. Use F [Λ (τ /τ 0 )] = τ 0 sinc2 (τ 0 f ) to get R0 (τ ) =

N0 Λ (τ /T ) 2T



e. By definition of the equivalent noise bandwidth £ ¤ 2 E Y 2 = Hmax BN N0

The output power is also R0 (0) = N0 /2T . Equating the two results and noting that Hmax = 1, gives BN = 1/ (2T ) Hz.

f. Evaluate the integral of the power spectral density over all frequency: Z Z ∞ £ 2¤ N0 N0 ∞ N0 2 sinc (f T ) df = = R0 (0) E Y = sinc2 (u) du = 2T −∞ 2T −∞ 2 Problem 6.20 a. The output power spectral density is S0 (f ) =

N0 /2 1 + (f /f3 )4

b. The autocorrelation function of the output, by inverse Fourier transforming the output power spectral density, is (a table of definite integrals is necessary) Z ∞ N0 /2 −j2πf τ df R0 (τ ) = F −1 [S0 (f )] = 4e 1 + (f /f ) −∞ 3 Z ∞ Z ∞ cos (2πf τ ) cos (2πf3 τ x) dx = N0 4 df = N0 f3 1 + x4 1 + (f /f3 ) 0 0 ³√ ´ πf3 N0 −√2πf3 τ = e cos 2πf3 τ − π/4 2 c. Yes. R0 (0) = πf32N0 = N0 BN , BN = πf3 /2 Problem 6.21 We have |H (f )|2 = Thus

(2πf )2 (2πf)4 + 5, 000

|2πf | |H (f )| = q (2πf )4 + 5, 000

This could be realized with a second-order Butterworth filter in cascade with a differentiator.
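A short sketch of the implied amplitude response, which peaks where $(2\pi f)^4 = 5000$, i.e., near f = 1.34 Hz (the frequency grid is an arbitrary choice):

f = linspace(-10, 10, 2001);
H = abs(2*pi*f)./sqrt((2*pi*f).^4 + 5000);
plot(f, H), grid on, xlabel('f (Hz)'), ylabel('|H(f)|')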


12

CHAPTER 6. RANDOM SIGNALS AND NOISE

Problem 6.22 a. The power spectral density is SY (f ) =

N0 Π (f /2B) 2

The autocorrelation function is RY (τ ) = N0 B sinc (2Bτ ) b. The transfer function is H (f ) =

A α + j2πf

The power spectral density and autocorrelation function are, respectively, B A2 2 2 α + (2πf ) 1 + (2πβf )2 ∙ ¸ B 1 A2 /α2 A2 B/α2 α2 β 2 = − 1 − α2 β 2 1 + (2πf/α)2 1 + (2πβf )2 1 + (2πf/α)2 1 + (2πβf )2

SY (f ) = |H (f )|2 SX (f ) = =

2τ 0 to find that the corresponding Use the transform pair exp (−|τ |/τ 0 ) ←→ 1+(2πf τ 0 )2 autocorrelation function is h i A2 B/α i e−α|τ | − αβ e−|τ |/β , α 6= 1/β RY (t) = h 2 1 − (αβ)2

Problem 6.23 a. E [Y (t)] = 0 because the mean of the input is zero. b. The frequency response function of the filter is 1 H (f ) = 10 + j2πf The output power spectrum is SY (f ) = Sx (f ) |H (f )|2 1 = 1× 100 + (2πf )2 0.01 2/10 = 2 = 0.05 × 1 + (2πf/10) 1 + (2πf /10)2 which is obtained applying (6.89).


6.1. PROBLEM SOLUTIONS

13

2τ 0 c. Use the transform pair exp (−|τ |/τ 0 ) ←→ 1+(2πf to find the power spectrum as τ )2 0

RY (τ ) = 0.05e−10|τ | d. Since the input is Gaussian, so is the output. Also, E [Y ] = 0 and var[Y ] = RY (0) = 0.05, so ¡ ¢ exp −y 2 /2σ 2Y q fY (y) = 2πσ2Y where σ 2Y = 0.05.

e. Use (5.189) with x = y1 = Y (t1 ) , y = y2 = Y (t2 ) , mx = my = 0, and σ 2x = σ 2y = 0.05 Also ρ (τ ) =

RY (τ ) = e−10|τ | RY (0)

Set τ = 0.03 to get ρ(0.03) = 0.741. Put these values into (5.189). Problem 6.24 Use the integrals I1 = and

Z ∞

jπb0 b0 ds = a0 a1 −∞ (a0 s + a1 ) (−a0 s + a1 ) ¡

¢ b0 s2 + b1 ds −b0 + a0 b1 /a2 = jπ I2 = 2 2 a0 a1 −∞ (a0 s + a1 s + a2 ) (a0 s − a1 s + a2 ) Z ∞

to get the following results for BN . Filter Type

First Order

Chebyshev

πfc /2

Butterworth

π 2 fc

Second Order πf / 2 qc√ 1+1/ 2 1+1/ 2 −1 π √ f 2 2 c

Problem 6.25 The transfer function is H (s) = =

ω2 √ √ ¢ ¡3 √ √ ¢ ¡ s + ω3 / 2 + jω 3 / 2 s + ω 3 / 2 − jω3 / 2 A A∗ √ √ ¢+¡ √ √ ¢ ¡ s + ω3 / 2 + jω 3 / 2 s + ω 3 / 2 − jω 3 / 2


14

CHAPTER 6. RANDOM SIGNALS AND NOISE

√ where A = jω 3 / 2 with ω 3 the 3-dB frequency in rad/s. This is inverse Fourier transformed, with s = jω, to yield ¶ µ ³ √ ´ √ ω3t u (t) h (t) = 2ω 3 exp −ω 3 t/ 2 sin √ 2 for the impulse response. Now Z 1 ∞ πf3 ω3 |h (t)|2 dt = √ = √ 2 −∞ 4 2 2 2 R∞ after some effort. Also note that −∞ h (t) dt = H (0) = 1. Therefore, the noise equivalent bandwidth, from (6.108), is 250π πf3 BN = √ Hz = √ Hz 2 2 2 Problem 6.26 a. Note that Ha, max = 2, Ha (f ) = 2 for −1 ≤ f ≤ 1, Ha (f ) = 1 for −2 ≤ f ≤ −1 and 1 ≤ f ≤ 2 and is 0 otherwise. Thus µZ 1 ¶ Z 2 Z ∞ 1 1 2 2 2 |Ha (f )| df = 2 df + 1 df BN = 2 Ha, 4 0 1 max 0 1 = (4 + 1) = 1.25 Hz 4 b. For this case Hb, max = 2 and the frequency response function is a 2-unit high isoceles triangle centered at 0 and 100 Hz wide so that Z Z ∞ 1 1 50 2 |H (f )| df = [2 (1 − f/50)]2 df ; v = f /50 BN = b 4 0 Hb,2 max 0 ¯1 Z 1 ¯ 50 2 3¯ = 50 (1 − v) dv = − (1 − v) ¯ = 16.67 Hz 3 0 0 c. For this case Hc, max = 1 and for the given frequency response function Z Z ∞ 100 1 1 ∞ 2 |H (f )| df = df BN = c Hc,2 max 0 1 0 100 + (2πf )2 Z ∞ 1 = df ; v = 2πf /10 1 + (2πf /10)2 0 ¯∞ Z ¯ 10 ∞ 1 5 −1 ¯ = dv = tan v ¯ 2 2π 0 1 + v π 0 5π = 2.5 Hz = π2


6.1. PROBLEM SOLUTIONS

15

d. This frequency response function can be written in terms of piecewise functions as ⎧ 0, f < −5 ⎪ ⎪ ⎨ 2 + f/5, −5 ≤ f < 0 Hd (f ) = 2 − f /5, 0 ≤ f ≤ 5 ⎪ ⎪ ⎩ 0, f > 5

That is, it is a unit-high isoceles triangle of total width 10 centered at 0 on top of a unit-high rectangle centered at 0 of total width 10. Thus Hd, max = 2 and for the given frequency response function Z ∞ Z 1 1 5 2 |H (f )| df = (2 − f /5)2 df, v = f /5 BN = d 2 2 2 Hd, 0 max 0 ¯1 Z 1 ¯ ¤ 5 51 5 £ 3 2 3¯ = (2 − v) ¯ = − 1 − 23 (2 − v) dv = − 4 0 43 12 0 = 35/12 = 2.92 Hz Problem 6.27 By definition BN

¶¸ Z ∙ µ f − 500 2 1 ∞ 2Λ = |H (f )| df = df 2 Hmax 4 0 100 0 ¸ ¸ Z 600 ∙ Z 500 ∙ f − 500 2 f − 500 2 f − 500 1+ 1− df + df ; u = = 100 100 100 400 500 Z 1 Z 0 (1 + u)2 du + 100 (1 − u)2 du = 100 1

Z ∞

−1

2

0

¯0 ¯1 ¯ ¯ 100 100 100 100 3¯ 3¯ = (1 + u) ¯ − (1 − u) ¯ = + 3 3 3 3 −1 0 = 66.67 Hz Problem 6.28

a. Use the impulse response method. First, use partial fraction expansion to get the impulse response: µ ¶ ¢ 1 1 10 10 ¡ −2t Ha (f ) = − ⇔ ha (t) = − e−25t u (t) e 23 j2πf + 2 j2πf + 25 23


16

CHAPTER 6. RANDOM SIGNALS AND NOISE Thus BN

=

= = = =

R∞

¡ 10 ¢2 R ∞ ¡ −2t ¢2 − e−25t dt e 23 0 hR i2 = £¡ 10 ¢ R ∞ ¤2 ∞ 2 23 0 (e−2t − e−25t ) dt 2 −∞ h (t) dt ¢ R∞¡ 1 0 e−4t − 2e−27t + e−50t dt £¡ 1 ¢ ¤ 1 −25t ∞ 2 2 e − 2 e−2t + 25 0 ¡ 1 −4t ¢ 2 −27t 1 −50t ∞ + 27 e − 50 e 1 −4e 0 2 (1/2 − 1/25)2 µ ¶ µ ¶ µ ¶ µ ¶ 2 1 1 50 2 529 1 50 2 1 − + = 2 23 4 27 50 2 23 27 × 100 0.463 Hz 2 −∞ |h (t)| dt

b. Again use the impulse response method. Use the transform pair t exp (−at) u (t) ↔ 1 to get (a+j2πf )2 hb (t) = 100t exp (−10t) u (t) Thus, the equivalent noise bandwidth is R∞ R∞ 2 2 −∞ |h (t)| dt 0 [100t exp (−10t)] dt BN = = hR i2 hR i2 ∞ ∞ 2 −∞ h (t) dt 2 −∞ 100t exp (−10t) dt R R∞ 2 −3 ∞ u2 exp (−u) du t exp (−20t) dt 20 0 = hR0 i2 = h i2 ; u = 20t, v = 10t R∞ ∞ −2 2 −∞ t exp (−10t) dt 2 10 v exp (−v) dv −∞ R∞ 2 Z ∞ 4 10 0 u exp (−u) du xn e−x dx = n! for n integer = hR i2 ; use the integral ∞ 0 2 × 8 × 103 −∞ v exp (−v) dv =

10 2! 20 − 1.25 Hz = 2 16 (1!) 16

Problem 6.29 a. Choosing f0 = f1 moves −f1 right to f = 0 and f1 left to f = 0. The lowpass filtered version of the superposition of these two spectra is the power spectrum of nc (t)h or ns ³(t). It ´iis a v-shaped spectrum centered at f = 0 given by SLP (f ) = f 1 2 N0 1 − Λ f2 −f1


6.1. PROBLEM SOLUTIONS

17

b. Choosing f0 = f2 moves −f2 right to f = 0 and ³ f2 left ´ to f = 0. Thus, the baseband f 1 spectrum can be written as SLP (f ) = 2 N0 Λ f2 −f1 .

c. For this case both triangles (left ´ right) are centered around the origin and they ³ and f 1 add to give SLP (f ) = 2 N0 Π f2 −f1 .

d. They are not uncorrelated for any case for an arbitrary delay. However, all cases give quadrature components that are uncorrelated at the same instant.

Problem 6.30 a. By inverse Fourier transformation, the autocorrelation function is Rn (τ ) =

α −α|τ | e 2

where K = α/2. b. Use the result from part (a) and the modulation theorem to get Rnp (τ ) =

α −α|τ | e cos (2πf0 τ ) 2

c. The result is Snc (f ) = Sns (f ) =

α2 α2 + (2πf )2

and Snc ns (f ) = 0

Problem 6.31 ´ ³ f . a. The equivalent lowpass power spectral density is given by Snc (f ) = Sns (f ) = N0 Π f2 −f 1 The cross-spectral density is Snc ns (f ) = 0. ´ ³ b. For this case Snc (f ) = Sns (f ) = N20 Π 2(f2f−f1 ) and the cross-spectral density is Snc ns (f ) =

½

− N20 , − (f2 − f1 ) ≤ f ≤ 0 N0 2 , 0 ≤ f ≤ (f2 − f1 )


18

CHAPTER 6. RANDOM SIGNALS AND NOISE c. For this case, the lowpass equivalent power spectral densities are the same as for part (b) and the cross-spectral density is the negative of that found in (b). d. For part (a) Rnc ns (τ ) = 0 For part (b) Rnc ns (τ ) = = = = =

"Z 0

# Z f1 −f2 N0 j2πf τ N0 j2πf τ e j − e df + df 2 2 0 −(f2 −f1 ) " ¯0 ¯(f −f ) # N0 −e−j2πf τ ¯¯ e−j2πf τ ¯¯ 2 1 j + j2πτ ¯ ¯ j2πτ 2 −(f2 −f1 ) 0 ´ N0 ³ −1 + ej2π(f2 −f1 )τ + e −j2π(f2 −f1 )τ −1 4πτ N0 {2 − 2 cos [2π (f2 − f1 ) τ ]} − 4πτ N0 − sin2 [π (f2 − f1 ) τ ] πτ i h

= − N0 π (f2 − f1 )2 τ sinc2 [(f2 − f1 ) τ ]

For part (c), the cross-correlation function is the negative of the above. Problem 6.32 The result is

1 1 Sn2 (f ) = Sn (f − f0 ) + Sn (f + f0 ) 2 2 so take the given spectrum, translate it to the right by f0 and to the left by f0 , add the two translated spectra, and divide the result by 2. Problem 6.33 Define N1 = A + Nc . Then R= Note that the joint pdf of N1 and Ns is

q N12 + Ns2

½ i¾ 1 1 h 2 2 fN1 Ns (n1 , ns ) = exp − 2 (n1 − A) + ns 2πσ2 2σ

Thus, the cdf of R is

FR (r) = Pr (R ≤ r) =

Z Z

√ 2

N1 +Ns2 ≤r

fN1 Ns (n1 , ns ) dn1 dns


6.1. PROBLEM SOLUTIONS

19

Change variables in the integrand from rectangular to polar with n1 = α cos β and n2 = α sin β Then ½ i¾ α 1 h 2 2 FR (r) = exp − 2 (α cos β − A) + (α sin β) dβdα 2πσ2 2σ 0 0 ¶ ∙ ¸ µ Z r ¢ α αA 1 ¡ 2 2 dα exp − 2 α + A I0 = 2 2σ σ2 0 σ Z r Z 2π

Differentiating with respect to r, we obtain the pdf of R: ∙ ¸ µ ¶ ¢ rA r 1 ¡ 2 2 , r≥0 fR (r) = 2 exp − 2 r + A I0 σ 2σ σ2

Problem 6.34 The suggested approach is to apply

Sn (f ) = lim

T →∞

o n E |= [x2T (t)]|2 2T

where x2T (t) is a truncated version of x (t). Thus, let

x2T (t) =

N X

k=−N

nk δ (t − kTs )

The Fourier transform of this truncated waveform is

= [x2T (t)] =

N X

k=−N

nk e−j2πkTs


20

CHAPTER 6. RANDOM SIGNALS AND NOISE

Thus, ⎧¯ ¯2 ⎫ N ¯ ⎬ ⎨¯ X o n ¯ ¯ = E ¯ nk e−j2πkTs ¯ E |= [x2T (t)]|2 ¯ ⎭ ⎩¯ k=−N ) ( N N X X −j2πkTs j2πlTs nk e nl e = E k=−N

=

N X

N X

l=−N

E [nk nl ] e−j2π(k−l)Ts

k=−N l=−N

=

N N X X

Rn (0) δ kl e−j2π(k−l)Ts

k=−N l=−N

=

N X

Rn (0) = (2N + 1) Rn (0)

k=−N

But 2T = 2N Ts so that Sn (f ) = =

lim

T →∞

o n E |= [x2T (t)]|2 2T

Rn (0) Ts

= lim

N →∞

(2N + 1) Rn (0) 2N Ts

Problem 6.35 a. The output is 1 y (t) = T

Z t+T t

x (λ) x (λ − τ ) dλ

Therefore ¸ Z 1 t+T x (λ) x (λ − τ ) dλ E [y (t)] = E T t Z 1 t+T = E [x (λ) x (λ − τ )] dλ T t Z 1 t+T = Rx (τ ) dλ = Rx (τ ) T t ∙


6.1. PROBLEM SOLUTIONS

21

b. Obtain the result by writing y 2 (t) as an iterated integral: Z t+T Z t+T 1 2 y (t) = x (λ) x (λ − τ ) dλ x (β) x (β − τ ) dβ T2 t t Z t+T Z t+T 1 = x (λ) x (β) x (λ − τ ) x (β − τ ) dλdβ T2 t t Take the expectation: ∙Z t+T Z t+T ¸ £ 2 ¤ 1 E x (λ) x (β) x (λ − τ ) x (β − τ ) dλdβ E y (t) = T2 t t Z t+T Z t+T 1 = E [x (λ) x (β) x (λ − τ ) x (β − τ )] dλdβ; identify x1 = x (λ) etc. T2 t t Z t+T Z t+T {E [x (λ) x (β)] E [x (λ − τ ) x (β − τ )] 1 +E [x (λ) x (λ − τ )] E [x (β) x (β − τ )] dλdβ = T2 t t +E [x (λ) x (β − τ )] E [x (β) x (λ − τ )]} Z t+T Z t+T ¤ £ 2 1 2 (β − λ) + R (τ ) + R (β − λ − τ ) R (λ − β − τ ) dλdβ R = x x x x T2 t t Problem 6.36 a. In the expectation for the cross correlation function, write the derivative as a limit: ¸ ∙ dy (t + τ ) Ryy_dotp (τ ) = E y (t) dt ½ ∙ ¸¾ y (t + τ + ) − y (t + τ ) = E y (t) lim = lim

→0

= lim

→0

=

1

→0

{E [y (t) y (t + τ + )] − E [y (t) y (t + τ )]}

Ry (τ + ) − Ry (τ )

dRy (τ ) dτ

b. The variance of Y = y (t) is σ 2Y = N0 B. The variance for Z = dy dt is found from its power spectral density. Using the fact that the transfer function of a differentiator is j2πf we get N0 SZ (f ) = (2πf )2 Π (f /2B) 2


22

CHAPTER 6. RANDOM SIGNALS AND NOISE so that

σ 2Z

2

= 2π N0 =

¯B f 3 ¯¯ f df = 2π N0 ¯ 3 −B −B

Z B

2

2

4 2 π N0 B 3 3

dR (τ )

y The two processes are uncorrelated because Ryy_dotp (0) = dτ = 0 because the derivative of the autocorrelation function of y (t) exists at τ = 0 and therefore must be 0. Thus, the random process and its derivative are independent. Hence, their joint pdf is

¢ ¡ ¡ ¢ exp −α2 /2N0 B exp −β 2 /2.67π 2 N0 B 3 √ √ fY Z (α, β) = 2πN0 B 2.67π 3 N0 B 3

c. No, because the derivative of the autocorrelation function at τ = 0 of such noise does not exist (it is a double-sided exponential).


6.2. COMPUTER EXERCISES

6.2

23

Computer Exercises

Computer Exercise 6.1 % ce6_1.m: Computes approximations to mean and mean-square ensemble and time % averages of cosine waves with phase uniformly distributed between 0 and 2*pi % clf f0 = 1; A = 1; theta = 2*pi*rand(1,100); t = 0:.1:1; X = []; for n = 1:100 X = [X; A*cos(2*pi*f0*t+theta(n))]; end EX = mean(X); AVEX = mean(X’); EX2 = mean(X.*X); AVEX2 = mean((X.*X)’); disp(‘ ’) disp(‘Sample means (across 100 sample functions)’) disp(EX) disp(‘ ’) disp(‘Typical time-average means (sample functions 1, 20, 40, 60, 80, & 100)’) disp([AVEX(1) AVEX(20) AVEX(40) AVEX(60) AVEX(80) AVEX(100)]) disp(‘ ’) disp(‘Sample mean squares’) disp(EX2) disp(‘ ’) disp(‘Typical time-average mean squares’) disp([AVEX2(1) AVEX2(20) AVEX2(40) AVEX2(60) AVEX2(80) AVEX2(100)]) disp(‘ ’) for n = 1:5 Y = X(n,:); subplot(5,1,n),plot(t,Y),ylabel(‘X(t,\theta)’) if n == 1 title(‘Plot of first five sample functions’) end if n == 5


24

CHAPTER 6. RANDOM SIGNALS AND NOISE xlabel(‘t’) end end A typical run follows with a plot given in Fig. 6.1. >> ce6_1 Sample means (across 100 sample functions) Columns 1 through 6 -0.0121 0.0226 0.0486 0.0560 0.0421 0.0121 Columns 7 through 11 -0.0226 -0.0486 -0.0560 -0.0421 -0.0121 Typical time-average means (sample functions 1, 20, 40, 60, 80, & 100) -0.0232 -0.0654 0.0255 -0.0344 0.0055 0.0245 Sample mean squares Columns 1 through 6 0.4613 0.4538 0.5101 0.5525 0.5223 0.4613 Columns 7 through 11 0.4538 0.5101 0.5525 0.5223 0.4613 Typical time-average mean squares 0.4604 0.5015 0.4617 0.4676 0.4549 0.4611

Computer Exercise 6.2 This is a matter of changing the statement theta = 2*pi*rand(1,100); in the program of Computer Exercise 6.1 to theta = (pi/2)*(rand(1,100) - 0.5); The time-average means will be the same as in Computer Exercise 6.1, but the ensembleaverage means will vary depending on the time at which they are computed. Computer Exercise 6.3 This was done in Computer exercise 5.2 by printing the covariance matrix. The diagonal terms give the variances of the X and Y vectors and the off-diagonal terms give the correlation coefficients. Ideally, they should be zero. % ce6_3: %

computes covariance matrix of two vectors of Gaussian random variables that are independent


6.2. COMPUTER EXERCISES

25

Plot of first five sample functions X(t,θ)

1 0 -1

0

0.1

0.2

0.3

0.4

0.5

0.6

0.7

0.8

0.9

1

0

0.1

0.2

0.3

0.4

0.5

0.6

0.7

0.8

0.9

1

0

0.1

0.2

0.3

0.4

0.5

0.6

0.7

0.8

0.9

1

0

0.1

0.2

0.3

0.4

0.5

0.6

0.7

0.8

0.9

1

0

0.1

0.2

0.3

0.4

0.5 t

0.6

0.7

0.8

0.9

1

X(t,θ)

1 0 -1 X(t,θ)

1 0 -1 X(t,θ)

1 0 -1 X(t,θ)

1 0 -1

Figure 6.1:


26

CHAPTER 6. RANDOM SIGNALS AND NOISE

4 3 2 1

Y

0 -1 -2 -3 -4 -4

-3

-2

-1

0 X

1

Figure 6.2: % clf X = randn(1,1000); Y = randn(1,1000); disp(’Covariance matrix:’) disp(cov(X, Y)) plot(X, Y, ’+’), axis square, xlabel(’X’), ylabel(’Y’) A typical run follows with a plot given in Fig. 6.2. >> ce6_3 Covariance matrix: 1.0553 0.0235 0.0235 1.0341

2

3

4


6.2. COMPUTER EXERCISES

27

Computer Exercise 6.4 % ce6_4.m: plot of Ricean pdf for several values of K % clf A = char(‘-’,‘—’,‘-.’,‘:’,‘—.’,‘-..’); sigma = input(‘Enter desired value of sigma => ’); r = 0:.1:15*sigma; n = 1; KdB = []; for KK = -10:5:15; KdB(n) = KK; K = 10.^(KK/10); fR = (r/sigma^2).*exp(-(r.^2/(2*sigma^2)+K)).*besseli(0, sqrt(2*K)*(r/sigma)); plot(r, fR, A(n,:)) if n == 1 hold on xlabel(’r’), ylabel(’f_R(r)’) grid on end n = n+1; end legend([‘K = ’, num2str(KdB(1)),‘ dB’],[‘K = ’, num2str(KdB(2)),‘ dB’], [‘K = ’, num2str(KdB(3)),‘ dB’], [‘K = ’, num2str(KdB(4)),‘ dB’],[‘K = ’, num2str(KdB(5)),‘ dB’], [‘K = ’, num2str(KdB(6)),‘ dB’]) title([‘Ricean pdf for \sigma = ’, num2str(sigma)]) A typical run follows with a plot given in Fig. 6.3. >> ce6_4 Enter desired value of sigma => 2


28

CHAPTER 6. RANDOM SIGNALS AND NOISE

Figure 6.3: Ricean pdf for σ = 2 and several values of K (K = -10, -5, 0, 5, 10, 15 dB).


Chapter 7

Noise in Modulation Systems

7.1 Problems

Problem 7.1
Kelvins (degrees absolute) are converted to the Celsius scale by subtracting 273 (actually 273.15). Degrees Celsius are converted to degrees Fahrenheit by the familiar expression
$$\text{degrees F} = \text{degrees C}\times\frac{9}{5} + 32$$
Thus
$$\text{degrees F} = (\text{kelvins} - 273)\times\frac{9}{5} + 32$$
Thus, 3 degrees on the kelvin scale is
$$(3 - 273)\times\frac{9}{5} + 32 = -454\ \text{degrees F}$$
and 290 degrees on the kelvin scale is
$$(290 - 273)\times\frac{9}{5} + 32 = 62.6\ \text{degrees F}$$

Problem 7.2
The signal power at the output of the lowpass filter is PT. The noise power is N0 BN, where BN is the noise-equivalent bandwidth of the filter. From (6.116), we know that the noise-equivalent bandwidth of an nth-order Butterworth filter with 3-dB bandwidth W is
$$B_N(n) = \frac{\pi W/2n}{\sin(\pi/2n)}$$




Figure 7.1: Normalized SNR at the filter output.

Thus, the signal-to-noise ratio at the filter output is
$$\mathrm{SNR} = \frac{P_T}{N_0 B_N} = \frac{\sin(\pi/2n)}{\pi/2n}\,\frac{P_T}{N_0 W} = f(n)\,\frac{P_T}{N_0 W}$$
so that f(n) is given by
$$f(n) = \frac{\sin(\pi/2n)}{\pi/2n}$$
Thus we have the values given in the following table.

n    f(n)     SNR
1    0.6366   0.6366 PT/(N0 W)
3    0.9549   0.9549 PT/(N0 W)
5    0.9836   0.9836 PT/(N0 W)
10   0.9959   0.9959 PT/(N0 W)
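The entries in the table follow directly from f(n) = sin(pi/2n)/(pi/2n); a one-line MATLAB check:

n = [1 3 5 10];
f_n = sin(pi./(2*n))./(pi./(2*n))          % 0.6366  0.9549  0.9836  0.9959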

Note that for large n, sin(π/2n)/(π/2n) ≈ 1, so that the SNR→ PT /N0 W as n → ∞. The plot is shown In Figure 7.1. Problem 7.3 We express n (t) as ¸ ¸ ∙ ∙ 1 1 n (t) = nc (t) cos 2πfc t ± (2πW ) t + θ + ns (t) sin 2πfc t ± (2πW ) t + θ 2 2


7.1. PROBLEMS

3

Snc ( f ) , Sns ( f ) N0

1 − W 2

1 W 2

f

Figure 7.2: Plots of Snc (f ) and Sns (f ) for Problem 7.3. where we use the plus sign for the U SB and the minus sign for the LSB. The received signal is b (t) sin (2πfc t + θ)] xr (t) = Ac [m (t) cos (2πfc t + θ) ∓ m

Multiplying xr (t) + n (t) by 2 cos (2πfc t + θ) and lowpass filtering yields yD (t) = Ac m (t) + nc (t) cos (πW t) ± ns (t) sin (πW t)

From this expression, we can show that the postdetection signal and noise powers are given by SD = A2c m2

ND = N0 W

= Ac m2

NT = N0 W

ST

This gives a detection gain of one. The power spectral densities of nc (t) and ns (t) are illustrated in Figure 7.2. Problem 7.4 For DSB, the received signal and noise are given by xr (t) = Ac m (t) cos (2πfc t + θ) + n (t) At the output of the predetection filter, the signal and noise powers are given by 1 ST = A2c m2 2

NT = n2 = N0 BT

The predetection SNR is (SNR)T =

A2c m2 2N0 BT


4

CHAPTER 7. NOISE IN MODULATION SYSTEMS

Snc ( f ) N0

1 − BT 2

0

1 BT 2

f

Figure 7.3: Plot of the spectrum of BT for Problem 7.4. If the postdetection filter passes all of the nc (t) component, yD (t) is yD (t) = Ac m (t) + nc (t) The output signal power is A2c m2 and the output noise PSD is shown in Figure 7.3. Case I: BD > 12 BT For this case, all of the noise, nc (t), is passed by the postdetection filter, and the output noise power is Z 1 BT 2 ND = N0 df = 2N0 BT − 12 BD

This yields the detection gain (SNR)D A2 m2 /N0 BT = c =2 (SNR)T A2c m2 /2N0 BT Case II: BD < 12 BT For this case, the postdetection filter limits the output noise and the output noise power is ND =

Z BD

N0 df = 2N0 BD

−BD

This case gives the detection gain (SNR)D A2 m2 /2N0 BD BT = c = (SNR)T BD A2c m2 /2N0 BT


7.1. PROBLEMS

5

For an AM signal the predetection signal power is i 1 h ST = A2c 1 + a2 m2n 2

and the postdetection signal power is

SD = A2c a2 m2n The noise powers do not change. Case I:

BD > 12 BT (SNR)D A2 a2 m2n /N0 BT 2a2 m2n h c i = = (SNR)T 1 + a2 m2n A2c 1 + a2 m2n /2N0 BT

Case II:

BD < 12 BT (SNR)D A2 a2 m2n /2N0 BD a2 m2n BT hc i = = = 2 2 (SNR)T B 2 2 2 D 1 + a mn Ac 1 + a mn /2N0 BT

Problem 7.5 Since the message signal is defined by m(t) = A cos (2πf1 t + θ1 ) + B cos (2πf2 t + θ2 ) the Hilbert transform is given by m(t) b = A sin (2πf1 t + θ1 ) + B sin (2πf2 t + θ2 )

Squaring and averging m(t) gives

E{m2 (t)} = E{A2 cos2 (2πf1 t + θ1 )} + 2E{A cos (2πf1 t + θ1 ) B cos (2πf2 t + θ2 )} + E{B 2 cos2 (2πf2 t + θ2 )} The first and third terms are A2 /2 and B 2 /2, respectively. If θ1 and θ2 are assumed independent, the middle term is zero. Thus E{m2 (t)} =

A2 B 2 + 2 2


6

CHAPTER 7. NOISE IN MODULATION SYSTEMS

In exactly the same manner A2 B 2 + 2 2 and we see that the powers in m(t) and m(t) b are equal. We now compute E{m b 2 (t)} =

m(t)m(t) b = [A cos (2πf1 t + θ1 ) + B cos (2πf2 t + θ2 )] × [A sin (2πf1 t + θ1 ) + B sin (2πf2 t + θ2 )]

which is m(t)m(t) b = A2 cos (2πf1 t + θ1 ) sin (2πf1 t + θ1 ) +

AB cos (2πf1 t + θ1 ) sin (2πf2 t + θ2 ) +

AB cos (2πf2 t + θ2 ) sin (2πf1 t + θ1 ) + B 2 cos (2πf2 t + θ2 ) sin (2πf2 t + θ2 )

Applying the appropriate trig identities gives m(t)m(t) b =

A2 B2 sin (4πf1 t + 2θ1 ) + sin (4πf2 t + 2θ2 ) + 2 2 AB [sin (2π(f2 − f1 )t − θ1 + θ2 ) + 2 sin (2π(f1 + f2 )t + θ1 + θ2 )] + AB sin (2π(f1 − f2 )t + θ1 − θ2 )

All five terms in the preceding equation are sinusoids of differing frequencies. Since the average value of a sinusoid is zero E {m(t)m(t)} b =0

Problem 7.6 Expanding the noise about fc ± W/4 gives ¸ ¸ ∙ ∙ W W n (t) = nc (t) cos 2πfc t ± π t + θ + ns (t) sin 2πfc t ± π t + θ 2 2 where we use the plus sign for the U SB and the minus sign for the LSB. The received signal is xr (t) = Ac [m (t) cos (2πfc t + θ) ∓ m b (t) sin (2πfc t + θ)] Multiplying xr (t) + n (t) by 2 cos (2πfc t + θ) and lowpass filtering yields ¶ ¶ µ µ W W yD (t) = Ac m (t) + nc (t) cos π t ± ns (t) sin π t 2 2


7.1. PROBLEMS

7

From this expression, we see that the postdetection signal and noise powers unchanged from those given in Problem 7.3 and are given by SD = A2c m2

ND = N0 W

= Ac m2

NT = N0 W

ST

This gives a detection gain of one. The power spectral densities of nc (t) and ns (t) are easily plotted. Problem 7.7 First we determine a general form by evaluating Z fb ´ A ³ j2πfb t ej2πf t df = − ej2πfa t m(t) = A e j2πt fa There are three cases to consider. Case 1 (fa = 0, fb = f2 ): This gives Z f2 ´ A ³ j2πf2 t e m(t) = A ej2πf t df = −1 j2πt 0 which is m(t) = or

´ A ³ j2πf2 t/2 e − e−j2πf2 t/2 ej2πf2 t/2 = j2πt

A sin(πf2 t)ejπf2 t πt Since taking the Hilbert transform adds π/2 to the phase for f > 0 and substracts π/2 to the phase for f < 0 we have m(t) =

and

m(t) b =

A sin(πf2 t)ej[πf2 t+(π/2)] , πt

f >0

A sin(πf2 t)ej[πf2 t−(π/2)] , f <0 πt Case 2 (fa = −f2 /2, fb = f2 ): This gives Z f2 ´ A ³ j2πf2 t e m(t) = A ej2πf t df = − e−jπf2 t j2πt −f2 /2 m(t) b =

which can be written m(t) =

´ A ³ j3πf2 t e − 1 e−jπf2 t j2πt


8

CHAPTER 7. NOISE IN MODULATION SYSTEMS

Thus m(t) =

´ A ³ j3πf2 t/2 A e sin(3πf2 t/2)ejπf2 t/2 − e−j3πf2 t/2 e−j(π−3π/2)f2 t = j2πt πt

The Hilbert transform is

and

m(t) b = m(t) b =

A sin(3πf2 t/2)ej[(πf2 t/2)+(π/2)] , πt

f >0

A sin(3πf2 t/2)ej[(πf2 t/2)−(π/2)] , πt

f <0

Case 3 (fa = −f2 , fb = f2 ): This gives m(t) = A

Z f2

ej2πf t df =

−f2

´ A A ³ j2πf2 t e sin(2πf2 t) − e−j2πf2 t = j2πt πt

The Hilbert transform is

and

m(t) b =

A sin(2πf2 t)ejπ/2 , πt

f >0

A sin(2πf2 t)e−jπ/2 , f <0 πt Note that m(t) is real and m(t) b is imaginary. The plots are easily constructed. All magnitude spectra have the familiar |sin(x)/x| form. The phase spectra are piecewise linear with ±π jumps for negative sin(x)/x. m(t) b =

Problem 7.8 The message signal is m (t) = 12 cos (8πt) Thus, mn (t) = cos (8πt) so that m2n =

1 2

The efficiency is therefore given by Ef f =

(0.5) (0.6)2 = 0.1525 = 15.25% 1 + (0.5) (0.6)2


7.1. PROBLEMS

9

From (7.36), the detection gain is (SNR)D = 2Ef f = 0.305 = −5.157 dB (SNR)T Relative to baseband, the output SNR is (SNR)D = −5.157 dB PT /N0 W If the modulation index in increased to 0.9, the efficiency becomes Ef f =

(0.5) (0.9)2 = 0.2883 = 28.83% 1 + (0.5) (0.9)2

This gives a detection gain of (SNR)D = 2Ef f = 0.5765 = −2.39 dB (SNR)T which, relative to baseband, is (SNR)D = −2.39 dB PT /N0 W This represents an improvement of 2.76 dB. Problem 7.9 The first step is to determine the value of M . Since P {−X < −M } + P {X > M } = 0.005,

P {X > M } 0.0025

First we compute P {X > M } =

Z ∞ M

1 2 2 √ e−y /2σ dy = Q 2πσ

This gives M = 2.807 σ Thus, M = 2.807σ and mn (t) =

m (t) 2.807σ

µ

M σ

= 0.0025


10

CHAPTER 7. NOISE IN MODULATION SYSTEMS

Thus, since m2 = σ 2 m2n =

m2 σ2 2 = 5.614σ 2 = 0.178 (2.807σ)

Since a = 0.7, we have 0.178 (0.7)2 Ef f = = 0.0802 = 8.02% 1 + 0.178 (0.7)2 and the detection gain is (SNR)D = 2E = 0.1604 (SNR)T Problem 7.10 The output of the predetection filter is e (t) = Ac [1 + amn (t)] cos [ω c t + θ] + rn (t) cos [ω c t + θ + φn (t)] The noise function rn (t) has a Rayleigh pdf. Thus fRn (rn ) =

r −r2 /2N e N

where N is the predetection noise power. This gives 1 1 N = n2c + n2s = N0 BT 2 2 From the definition of threshold 0.99 =

Z Ac 0

which gives

r −r2 /2N e dr N 2

0.99 = 1 − e−Ac /2N Thus, −

A2c = ln (0.01) 2N

which gives A2c = 9.21 N The predetection signal power is i 1 1 h PT = Ac 1 + a2 m2n ≈ A2c [1 + 1] = A2c 2 2


7.1. PROBLEMS

11

which gives A2 PT = c = 9.21 ≈ 9.64 dB N N Problem 7.11 Since m (t) is a sinusoid, m2n = 12 , and the efficiency is Ef f =

1 2 a2 2a = 2 + a2 1 + 12 a2

and the output SNR is (SNR)D = Ef f = In dB we have (SNR)D,dB = 10 log10 For a = 0.4,

µ

PT a2 2 2 + a N0 W

a2 2 + a2

+ 10 log10

PT N0 W

(SNR)D,dB = 10 log10

PT − 11.3033 N0 W

(SNR)D,dB = 10 log10

PT − 9.5424 N0 W

(SNR)D,dB = 10 log10

PT − 7.0600 N0 W

For a = 0.5,

For a = 0.7,

For a = 0.9, PT − 5.4022 N0 W The plot of (SNR)dB as a function of PT /N0 W in dB is linear having a slope of one. The modulation index only biases (SNR)D,dB down by an amount given by the last term in the preceding four expressions. (SNR)D,dB = 10 log10

Problem 7.12 Let the predetection filter bandwidth be BT and let the postdetection filter bandwidth be BD . The received signal (with noise) at the predetection filter output is represented xr (t) = Ac [1 + amn (t)] cos ω c t + nc (t) cos ω c t − ns (t) sin ω c t which can be represented xr (t) = {Ac [1 + amn (t)] + nc (t)} cos ωc t − ns (t) sin ωc t


12

CHAPTER 7. NOISE IN MODULATION SYSTEMS

The output of the square-law device is y (t) = {Ac [1 + amn (t)] + nc (t)}2 cos2 ω c t

− {Ac [1 + amn (t)] + nc (t)} ns (t) cos ωc (t) +n2s (t) sin2 cos ω c (t)

Assuming that the postdetection filter removes the double frequency terms, yD (t) can be written 1 1 yD (t) = {Ac [1 + amn (t)] + nc (t)}2 + n2s (t) 2 2 Except for a factor of 2, this is (7.53). We now express the preceding expression as 1 1 1 yD (t) = A2c [1 + 2amn (t) + a2 m2n (t)] + Ac [1 + amn (t)] nc (t) + n2c (t) + n2s (t) 2 2 2 Let m(t) = cos(ωm t) + cos(2ω m t), which clearly has a maximum value of 2. Thus mn (t) = and m2n (t) =

1 1 cos(ωm t) + cos(2ω m t) 2 2

1 1 1 1 1 + cos(2ωm t) + + cos(4ω m t) + cos(ωm t) cos(2ω m t) 8 8 8 8 2

or

1 1 1 1 1 + cos(ωm t) + cos(2ω m t) + cos(3ωm t) + cos(4ω m t) 4 4 8 4 8 Putting this together gives ¶ µ 1 2 a2 yD (t) = A 1+ + 2 c 4 ¶ ¶ ∙µ µ ¸ a 1 a 1 2 Ac a + cos(ω m t) + + cos(2ω m t) + 2 8 2 16 ¸ ∙ 1 2 2 1 Ac a cos(3ω m t) + cos(4ω m t) + 8 16 Ac nc (t) + ∙ ¸ 1 1 cos(ω m t) + cos(2ω m t) nc (t) + Ac a 2 2 1 2 1 2 n (t) + ns (t) 2 c 2 m2n (t) =

The PSD therefore consists of 6 basic terms as follows: Term 1: A dc term (f = 0) which is an impulse.


7.1. PROBLEMS

13

Term 2: Impulse functions at frequencies f = ±fm and f = ±2fm . The weights are given on line 2 of the preceding equation when squared and divided by two. Note that the terms do not have the same weight due to intermodulation distortion. Thus the demodulated output is distorted. Term 3: Impulse functions at frequencies f = ±3fm and f = ±4fm resulting from harmonic distortion. Term 4: The PSD of nc (t) (flat) about f = 0 and weighted by Ac . Term 5: The PSD of nc (t) (flat) translated to frequencies f = ±fm and f = ±2fm and weighted by Ac a/2. Term 6: The PSD of n2c (t) (triangular) about f = 0 The PSD of n2s (t) is identical to the PSD of n2c (t) Problem 7.13 Equation (7.59) is, suppressing t, σ 2n2c +n2s = E{[n2c + n2s ]2 } − E 2 {n2c + n2s } The first term is E{[n2c + n2s ]2 } = E{n4c } + E{n4s } + 2E{n2c }E{n2s } where the last term follows from the independence of nc and ns . Since (See Problem 5.22) E{n4c } = E{n4s } = 3σ 4n and E{n2c } = E{n2s } = σ 2n we have E{[n2c + n2s ]2 } = 3σ 4n + 3σ 4n + 2σ 2n σ 2n = 8σ 4n Also Thus

£ ¤2 £ ¤2 £ ¤2 E 2 {n2c + n2s } = E{n2c + n2s } = E{n2n + n2n } = 2σ 2n = 4σ 4n £ ¤2 σ 2n2c +n2s = E 2 {n2c + n2s } − E{n2c + n2s } = 8σ 4n − 4σ 4n = 4σ 4n

Problem 7.14 Letting the message signal signal be a sinusoid in (7.63) so that m2n = 0.5 gives (SNR)D =

PT /N0 W a2 /2 2 1 + N W/P 2 [1 + a /2] 0 T


14

CHAPTER 7. NOISE IN MODULATION SYSTEMS

Plotting these for a = 0.1, 0.2, 0.5, and 1 verifies Figure 7.6. The distortion to noise ratio is DD A4 a4 m4n = c ND ND where ­ ® m4n = sin4 (ω m t) =

¶2 + ¿ À 1 1 1 3 1 1 + cos(2ω m t) + cos(2ω m t) + cos2 (2ωm t) = = 2 2 4 2 4 8

and, from (7.60) ND = 4A2c σ 2n and, finally, from (7.32)

¶ µ ¡ ¢ a2 + σ 4n = 2A2c 2 + a2 N0 W + (N0 W )2 1+ 2

1 1 1 a2 PT = A2c (1 + a2 m2n ) = A2c (1 + ) = A2c (2 + a2 ) 2 2 2 4 We can therefore write ND as ND = 2A2c

4PT N0 W + (N0 W )2 = N0 W (8PT + N0 W ) A2c

and

3 DD = a4 8 Putting all of this togethen allows us to write DD /ND in terms of a, PT , N0 , and W . This gives 1 A4 a4 m4n 3a4 4PT DD = c = 2 ND ND 8 2 + a 8PT (N0 W ) + (N0 W )2 or DD PT /N0 W 3 a4 = 2 ND 2 2 + a 8PT + N0 W

We see that, like for DD /SD , the ratio DD /ND goes to zero as a → 0. Problem 7.15 By definition mn (t) =

m(t) kσ m


7.1. PROBLEMS

15

and

m2 σ 2m 1 = = 2 2 2 2 k σm k (kσ m )

m2n = so that

¡ ¢2 2 ka PT PT = (SNR)D = 2Ef f ¡ a ¢2 N0 W N 0W 1+ k

or

(SNR)D =

PT 2 2 (k/a) + 1 N0 W

The plot is easily generated. We conclude that for k À a ³ a ´2 P T (SNR)D ≈ 2 k N0 W and for k ¿ a

(SNR)D ≈ 2

PT N0 W

Problem 7.16 Assume sinusoidal modulation since sinusoidal modulation was assumed in the development of the square-law detector. For a = 1 and mn (t) = cos (2πfm t) so that m2n = 0.5, we have, for linear envelope detection (since PT /N0 W À 1, we use the coherent result), (SNR)D,C = Ef f

1 a2 m2n 1 PT PT PT PT = = 2 1 = N0 W 3 N0 W 1 + 2 N0 W 1 + a2 m2n N0 W

For square-law detection we have, from (6.61) with a = 1 µ ¶2 2 PT PT a = (SNR)D,SL = 2 2 2+a N0 W 9 N0 W Taking the ratio (SNR)D,SL (SNR)D,C

2

PT

= 91 NP0 W = T 3 N0 W

2 = −1.76 dB 3

This is approximately −1.8 dB. Problem 7.17 The transfer function of the RC highpass filter is H (f ) =

R j2πf RC = 1 1 + j2πf RC R + j2πC


16

CHAPTER 7. NOISE IN MODULATION SYSTEMS

so that |H(f )|2 =

(2πfRC)2 1 + (2πf RC)2

Thus, The PSD at the output of the ideal lowpass filter is ( 2 N0 (2πf RC) |f | < W 2, 2 1+(2πf RC) Sn (f ) = |f | > W 0, The noise power at the output of the ideal lowpass filter is Z W Z W (2πf RC)2 Sn (f ) df = N0 df N= 1 + (2πfRC)2 −W 0 with x = 2πf RC, the preceding expression becomes Z 2πRCW x2 N0 dx N= 2πRC 0 1 + x2 Since

x2 1 =1− 2 1+x 1 + x2

we can write N0 N= 2πRC or N= which is

½Z 2πRCW 0

df −

Z 2πRCW 0

dx 1 + x2

¾

¢ N0 ¡ 2πRCW − tan−1 (2πRCW ) 2πRC

¸ ∙ 2πRCW − tan−1 (2πRCW ) N0 tan−1 (2πRCW ) N = N0 W − = N0 2πRC 2πRC The output signal power is 1 A2 (2πfc RC)2 S = A2 |H (fc )|2 = 2 2 1 + (2πfc RC)2 Thus, the signal-to-noise ratio is (2πfc RC)2 2πRC S A2 = 2 N 2N0 1 + (2πfc RC) 2πRCW − tan−1 (2πRCW ) S → 0. Note that as W → ∞, N


7.1. PROBLEMS

17

Problem 7.18 From (7.78) and (7.84), we have σ 2n , SSB σ 2m 3 ¡ 2 ¢2 σ 2n + 2 , DSB σ 4 φ σm

0.05 = σ 2φ + 0.05 =

Thus we have the following signal-to-noise ratios σ 2m σ 2n

=

σ 2m σ 2n

=

1 , 0.05 − σ 2φ

SSB

1

³ ´2 , 0.05 − 34 σ 2φ

DSB

The SSB characteristic has an asymptote defined by σ 2φ = 0.05 and the DSB result has an asymptote defined by

3 ¡ 2 ¢2 = 0.05 σ 4 φ

or

σ 2φ = 0.258 The curves are shown in Figure 7.4. The appropriate operating regions are above and to the left of the curves. It is clear that DSB has the larger operating region. Problem 7.19 The mean-square error can be written

© ª ε2 (A, τ ) = E [y (t) − Ax (t − τ )]2

© ª ε2 (A, τ ) = E y 2 (t) − 2Ax(t − τ )y (t) + A2 x2 (t − τ )

In terms of autocorrelation and cross correlation functions, the mean-square error is ε2 (A, τ ) = Ry (0) − 2ARxy (τ ) + A2 Rx (0) Letting Py = Ry (0) and Px = Rx (0) gives ε2 (A, τ ) = Py − 2ARxy (τ ) + A2 Px


18

CHAPTER 7. NOISE IN MODULATION SYSTEMS

1000 900 800

SSB

700

DSB 600 500 SNR

400 300 200 100 0

0

0.05

0.1

0.15

0.2

0.25

0.3

0.35

Phase error variance

Figure 7.4: Plots for Problem 7.18. In order to minimize ε2 we choose τ = τ m such that Rxy (τ ) is maximized. This follows since the crosscorrelation term is negative in the expression for ε2 . Therefore, ε2 (A, τ m ) = Py − 2ARxy (τ m ) + A2 Px The gain Am is determined from dε2 = −2ARxy (τ m ) + A2 Px = 0 dA which yields Am =

Rxy (τ m ) Px

This gives the mean-square error ε2 (Am , τ m ) = Py − 2

2 (τ ) 2 (τ ) Rxy Rxy m m + Px Px

which is ε2 (Am , τ m ) = Py −

2 (τ ) Pxy m Px

The output signal power is © ª SD = E [Am x (t − τ m )]2 = A2m Px


7.1. PROBLEMS

19

R(t )e jψ (t )

rn (t )sin[ψ (t ) − φ (t )] = rn (t )sin[φe (t )]

rn (t ) cos[ψ (t ) − φ (t )] = rn (t ) cos[φe (t )]

rn (t )e jφn ( t )

Ac e jφ ( t )

Figure 7.5: Phasor diagram for SNR À 1. which is SD =

2 (τ ) 2 (τ ) Rxy Rxy m m = Px Rx (0)

Since ND is the error ε2 (Am , τ m ) we have 2 (τ ) Rxy SD m = 2 (τ ) ND Rx (0) Ry (0) − Rxy m

Note: The gain and delay of a linear system is often defined as the magnitude of the transfer and the slope of the phase characteristic, respectively. The definition of gain and delay suggested in this problem is useful when the magnitude response is not linear over the frequency range of interest. Problem 7.20 The phasor diagram for SNR À 1 is illustrated in Figure 7.5. The phasor diagram for SNR¿1 is shown in Figure 7.6. We see that the angle ψ(t), which determines the demodulated output, is approximately φn (t), whereas for SNR À 1 the angle ψ(t) is approximately φ(t). For ψ(t) ≈ φ(t) the error in the demodulated output is small. Problem 7.21 With BT = 2(D + 1)W (7.118) becomes (SNR)DF = 3

µ

¶2 ¶ µ ¶2 ¶ µ µ 3 BT PT PT BT − 1 m2n = − 2 m2n 2W N0 W 4 W N0 W

This clearly gives (7.119) for the case in which BT À W .


20

CHAPTER 7. NOISE IN MODULATION SYSTEMS

rn (t )sin[ψ (t ) − φ (t )] = rn (t )sin[φe (t )] rn (t ) cos[ψ (t ) − φ (t )] = rn (t ) cos[φe (t )] rn (t )e jφn ( t )

R(t )e jψ (t )

Ac e jφ (t )

Figure 7.6: Phasor diagram for SNR ¿ 1. The required plot follows by setting this equal to 1000 (30 dB). In order to make ³the plot ´ independent of unknown parameters normalize (SNR)DF by plotting (SNR)DF /m2n NP0TW .

The asymptotic value of (SNR)DF for BT À W differs from the preceding equation only by the approximation ¶2 µ ¶ µ BT BT 2 −2 = W W These differ by 0.5 dB when " µ " µ ¶ ¶# ¶2 ¶# µ µ 3 BT 2 2 3 BT PT P T − 2 m2n mn 10 log − 10 log = 0.5 4 W N0 W 4 W N0 W or 20 log

µ

− 20 log

Ã

!

BT W

This gives log or 1−

BT W −2 BT W

µ

¶ BT − 2 = 0.5 W

= −0.025

2W = 10−0.025 = 0.9441 BT


7.1. PROBLEMS

21

Finally 2W = 1 − 0.9441 = 0.0559 BT which gives a value of BT 2 = = 35.75 W 0.0559 Problem 7.22 The single-sided spectrum of a stereophonic FM signal and the noise spectrum is shown in Figure 7.7. The two-sided noise spectrum is given by SnF (f ) =

2 KD N0 f , 2 AC

−∞<f <∞

The predetection noise powers are easily computed. For the L + R channel Z 15,000 2 2 ¡ 12 ¢ KD KD 2 N f df = 2.25 10 N0 Pn,L+R = 2 0 A2C A2C 0 For the L − R channel

Pn,L−R = 2

Z 53,000 23,000

2 2 ¡ 12 ¢ KD KD 2 N f df = 91.14 10 N0 0 A2C A2C

Thus, the noise power on the L − R channel is over 40 times the noise power in the L + R channel. After demodulation, the difference will be a factor of 20 because of 3 dB detection gain of coherent demodulation. Thus, the main source of noise in a stereophonic system is the L − R channel. Therefore, in high noise environments, monophonic broadcasting is preferred over stereophonic broadcasting. Problem 7.23 The received FDM spectrum is shown in Figure 7.8. The k th channel signal is given by xk (t) = Ak mk (t) cos 2πkf1 t Since the guardbands have spectral width 3W , 1f1 = W + 3W + W = 5W , 2f1 = f1 + W + 3W + W = 10W and so on. Clearly kf1 = 5kW and the k th channel occupies the frequency band (5k − 1) W ≤ f ≤ (5k + 1) W Since the noise spectrum is given by SnF =

2 KD N0 f 2 , A2C

|f | < BT


Figure 7.7: Stereophonic FM baseband signal and noise (pilot; baseband components between 0 and 15 kHz and between 23 and 53 kHz; parabolic noise spectrum S_{n_F}(f); frequency axis in kHz).

the noise power in the kth channel is
\[ N_k = B\int_{(5k-1)W}^{(5k+1)W} f^2\,df = \frac{BW^3}{3}\left[(5k+1)^3 - (5k-1)^3\right] \]
where B is a constant. This gives
\[ N_k = \frac{BW^3}{3}\left(150k^2 + 2\right) \cong 50BW^3 k^2 \]
The signal power is proportional to A_k^2. Thus, the signal-to-noise ratio can be expressed as
\[ (\mathrm{SNR})_D = \frac{\lambda A_k^2}{k^2} = \lambda\left(\frac{A_k}{k}\right)^2 \]
where λ is a constant of proportionality that is a function of K_D, W, A_C, and N_0. If all channels are to have equal (SNR)_D, A_k/k must be a constant, which means that A_k is a linear function of k. Thus, if A_1 is known, A_k = kA_1, k = 1, 2, ..., 7. A_1 is fixed by setting the SNR of channel 1 equal to the SNR of the unmodulated channel. The noise power of the unmodulated channel is
\[ N = B\int_0^W f^2\,df = \frac{BW^3}{3} \]
yielding the signal-to-noise ratio
\[ (\mathrm{SNR})_D = \frac{3P_0}{BW^3} \]
where P_0 is the signal power.
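The k^2 scaling of the channel noise, and the resulting linear amplitude weighting, can be checked with a short MATLAB sketch. This sketch is not part of the original solution; B and W are set to 1 since only ratios matter.

% Sketch: verify N_k ~ 50*B*W^3*k^2 and the weighting A_k = k*A_1
B = 1; W = 1;                                       % arbitrary normalization
k = 1:7;                                            % channel index
Nk_exact  = (B*W^3/3)*((5*k+1).^3 - (5*k-1).^3);    % exact channel noise power
Nk_approx = 50*B*W^3*k.^2;                          % quadratic approximation
A1 = 1;  Ak = k*A1;                                 % assumed A1; linear weighting
disp([k' Nk_exact' Nk_approx' (Ak.^2./Nk_approx)']) % last column is constant (equal SNR)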


Figure 7.8: FDM signal and noise for Problem 7.23 (noise PSD with channel carriers at f_1 through f_7).

Problem 7.24
From (7.132) and (7.119), the required ratio is
\[ R = \frac{2\,\dfrac{K_D^2}{A_C^2}N_0 f_3^3\left(\dfrac{W}{f_3} - \tan^{-1}\dfrac{W}{f_3}\right)}{\dfrac{2}{3}\,\dfrac{K_D^2}{A_C^2}N_0 W^3} \]
or
\[ R = 3\left(\frac{f_3}{W}\right)^{3}\left(\frac{W}{f_3} - \tan^{-1}\frac{W}{f_3}\right) \]
This is shown in Figure 7.9. For f_3 = 2.1 kHz and W = 15 kHz, the value of R is
\[ R = 3\left(\frac{2.1}{15}\right)^{3}\left(\frac{15}{2.1} - \tan^{-1}\frac{15}{2.1}\right) = 0.047 \]
Expressed in dB this is
\[ R = 10\log_{10}(0.047) = -13.3\ \mathrm{dB} \]
The improvement resulting from the use of preemphasis and deemphasis is therefore 13.3 dB. Neglecting the tan^{-1}(W/f_3) term gives an improvement of 21 − 8.75 = 12.25 dB. The difference is approximately 1 dB.
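As a quick numerical check (this sketch is ours, not part of the original solution), the expression for R can be evaluated directly in MATLAB:

% Sketch: deemphasis improvement ratio of Problem 7.24
R = @(f3,W) 3*(f3./W).^3.*(W./f3 - atan(W./f3));
RdB = 10*log10(R(2100,15000))        % approximately -13.3 dB for f3 = 2.1 kHz, W = 15 kHz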


Figure 7.9: Value of R for Problem 7.24 (plotted versus W/f_3).

Problem 7.25
From the plot of the signal spectrum it is clear that
\[ k = \frac{A}{W^2} \]
Thus the signal power is
\[ S = 2\int_0^W \frac{A}{W^2}f^2\,df = \frac{2}{3}AW \]
The noise power is N_0B. This yields the signal-to-noise ratio
\[ (\mathrm{SNR})_1 = \frac{2AW}{3N_0 B} \]
If B is reduced to W, the SNR becomes
\[ (\mathrm{SNR})_2 = \frac{2A}{3N_0} \]
This increases the signal-to-noise ratio by a factor of B/W.

Problem 7.26


From the definition of the signal we have
\[ x(t) = A\cos 2\pi f_c t, \qquad \frac{dx}{dt} = -2\pi f_c A\sin 2\pi f_c t, \qquad \frac{d^2x}{dt^2} = -(2\pi f_c)^2 A\cos 2\pi f_c t \]
The signal component of y(t) therefore has power
\[ S_D = \frac{1}{2}A^2(2\pi f_c)^4 = 8A^2\pi^4 f_c^4 \]
The noise power spectral density at y(t) is
\[ S_n(f) = \frac{N_0}{2}(2\pi f)^4 \]
so that the noise power is
\[ N_D = \frac{N_0}{2}(2\pi)^4\int_{-W}^{W} f^4\,df = \frac{16}{5}N_0\pi^4 W^5 \]
Thus, the signal-to-noise ratio at y(t) is
\[ (\mathrm{SNR})_D = \frac{5}{2}\,\frac{A^2}{N_0 W}\left(\frac{f_c}{W}\right)^4 \]

Problem 7.27
Since the signal is integrated twice and then differentiated twice, the output signal is equal to the input signal. The output signal is
\[ y_s(t) = A\cos 2\pi f_c t \]
and the output signal power is
\[ S_D = \frac{1}{2}A^2 \]
The noise power is the same as in the previous problem, so
\[ N_D = \frac{16}{5}N_0\pi^4 W^5 \]
This gives the signal-to-noise ratio
\[ (\mathrm{SNR})_D = \frac{5A^2}{32\pi^4 N_0 W^5} \]
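For a rough numerical comparison of the two results, the following sketch evaluates both expressions. The parameter values are arbitrary assumptions chosen only for illustration; they do not come from the problem statements.

% Sketch with assumed parameters: compare the output SNRs of Problems 7.26 and 7.27
A = 1; fc = 2; W = 1; N0 = 0.01;             % illustrative values only
SNR_726 = (5/2)*(A^2/(N0*W))*(fc/W)^4;       % differentiated-signal result (Problem 7.26)
SNR_727 = 5*A^2/(32*pi^4*N0*W^5);            % integrate-then-differentiate result (Problem 7.27)
disp(10*log10([SNR_726 SNR_727]))            % values in dB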



Figure 7.10: (SNR)D for Problem 7.28.

Problem 7.28
The signal power is
\[ S_D = 2A\int_0^W \frac{df}{1 + (f/f_3)^2} = 2Af_3\int_0^{W/f_3}\frac{dx}{1 + x^2} = 2Af_3\tan^{-1}\left(\frac{W}{f_3}\right) \]
Since N_D = N_0W, the signal-to-noise ratio at y(t) is
\[ (\mathrm{SNR})_D = \frac{2A}{N_0}\left(\frac{f_3}{W}\right)\tan^{-1}\left(\frac{W}{f_3}\right) \]
The normalized (SNR)_D, defined by (SNR)_D/(2A/N_0), is illustrated in Figure 7.10.

Problem 7.29
The instantaneous frequency deviation in Hz is
\[ \delta f = f_d m(t) = x(t) \]


where x(t) is a zero-mean Gaussian process with variance σ_x^2 = f_d^2σ_m^2. Thus,
\[ \overline{|\delta f|} = \overline{|x|} = \int_{-\infty}^{\infty}\frac{|x|}{\sqrt{2\pi}\,\sigma_x}\,e^{-x^2/2\sigma_x^2}\,dx \]
Recognizing that the integrand is even gives
\[ \overline{|\delta f|} = \sqrt{\frac{2}{\pi}}\,\frac{1}{\sigma_x}\int_0^{\infty}x\,e^{-x^2/2\sigma_x^2}\,dx = \sqrt{\frac{2}{\pi}}\,\sigma_x \]
Therefore,
\[ \overline{|\delta f|} = \sqrt{\frac{2}{\pi}}\,f_d\sigma_m \]
Substitution into (7.148) gives
\[ (\mathrm{SNR})_D = \frac{3\left(\dfrac{f_d}{W}\right)^2\overline{m_n^2}\,\dfrac{P_T}{N_0 W}}{1 + 2\sqrt{3}\,\dfrac{B_T}{W}\,Q\!\left[\sqrt{\dfrac{A_C^2}{N_0 B_T}}\right] + 6\sqrt{\dfrac{2}{\pi}}\,\dfrac{f_d\sigma_m}{W}\exp\!\left(\dfrac{-A_C^2}{2N_0 B_T}\right)} \]
The preceding expression can be placed in terms of the deviation ratio, D, by letting
\[ D = \frac{f_d}{W} \qquad\text{and}\qquad B_T = 2(D+1)W \]

Problem 7.30
Note that the form of m(t) affects both the numerator and the denominator of (7.139). The numerator is affected through \overline{m_n^2} and the denominator is affected through \overline{|\delta f|}. By definition,
\[ m_n(t) = \frac{1}{M}\sum_{n=1}^{N} C_n\cos(2\pi n f t + \theta_n) \]
where M is the maximum value of m(t). Since the phases θ_n are unknown, the value of M cannot be determined, and this is as far as we can go. If θ_n = 0 for all n, then M = ∑C_n; this, however, is a very special case. In general, given the phases θ_n, \overline{m_n^2} is computed as
\[ \overline{m_n^2} = \frac{1}{M^2}\left\langle\left[\sum_{n=1}^{N} C_n\cos(2\pi n f t + \theta_n)\right]^2\right\rangle \]



where, as always, ⟨·⟩ denotes a time-average value. Carrying out the expansion,
\[ \overline{m_n^2} = \frac{1}{M^2}\left\langle\sum_{n=1}^{N}\frac{C_n^2}{2} + \sum_{n=1}^{N}\sum_{\substack{k=1\\ k\ne n}}^{N} C_n C_k\cos(2\pi n f t + \theta_n)\cos(2\pi k f t + \theta_k)\right\rangle = \frac{1}{2M^2}\sum_{n=1}^{N} C_n^2 \]
Using m(t) and M allows us to write (7.140). Using the preceding result in (7.146), with β replaced by D since the signal is no longer sinusoidal, allows determination of (7.148) as was done in Example 7.4. This completes the problem. Note that the difficulty lies in finding the normalization constant M. Finding M for a specific signal will, in general, require using MATLAB or a similar tool to plot m(t).

Problem 7.31
The input to the PLL is
\[ x_c(t) = A_c\cos(2\pi f_c t + \theta) + n_c(t)\cos(2\pi f_c t + \theta) - n_s(t)\sin(2\pi f_c t + \theta) \]
which can be written
\[ R(t)\cos[2\pi f_c t + \theta + \phi(t)] = \{A_c + n_c(t)\}\cos(2\pi f_c t + \theta) - n_s(t)\sin(2\pi f_c t + \theta) \]
The phase deviation is
\[ \phi(t) = \tan^{-1}\left[\frac{n_s(t)}{A_c + n_c(t)}\right] \]
Since the signal-to-noise ratio is sufficiently large to justify the linear model, the phase deviation is written
\[ \phi(t) = \frac{n_s(t)}{A_c} \]
We know that the PSD of the quadrature noise is N_0 for |f| < B/2, where B is the predetection bandwidth. Thus, the PSD of the input signal to the baseband (lowpass) model of the PLL is
\[ S_\phi(f) = \frac{N_0}{A_c^2}, \qquad |f| < B/2 \]
Using the PLL transfer function (the linear model is assumed valid since the input SNR is assumed large, which yields small phase errors) gives
\[ S_\psi(f) = S_\phi(f)\,|G(f)|^2 = \frac{N_0}{A_c^2}\,|G(f)|^2 \]


where G(f) is the transfer function relating the phase error Ψ(f) to the input phase Φ(f) (see (3.221)). The variance of the phase error is
\[ \sigma_\psi^2 = \frac{N_0}{A_c^2}\int_{-\infty}^{\infty}|G(f)|^2\,df = \frac{2N_0}{A_c^2}\int_0^{\infty}|G(f)|^2\,df \]
The integral represents the equivalent noise bandwidth B_L of the PLL based on the closed-loop transfer function G(f). We have obviously assumed B_L ≪ B, which is typically the case. Thus
\[ \sigma_\psi^2 = \frac{2N_0 B_L}{A_c^2} = \frac{N_0 B_L}{S} \]
where S = A_c^2/2 is the input signal power. The input noise is assumed Gaussian. The phase error is therefore Gaussian since it is a linear transformation, through the transfer function G(f), of the input noise. The preceding expression shows that the phase error variance is the reciprocal of the signal-to-noise ratio, where the noise power is measured in the bandwidth of the transfer function G(f).

Problem 7.32
PPM was defined in Chapter 3. Assume that the pulse position is represented by
\[ \tau(t) = \tau_0 + \tau_1 m_n(t) \]
where τ_0 is the nominal pulse position for m(t) = 0 and τ_1 establishes the peak offset from the nominal pulse position and is adjusted so that the pulse does not deviate outside of its assigned slot. We assume a zero-mean signal so that \overline{m(t)} = 0, for which \overline{\tau} = τ_0. The pulse is detected when the received pulse crosses a threshold. When noise is added to the pulse we can approximate the error in detecting the threshold crossing as
\[ \epsilon(t_1) = T_R\,\frac{n(t_1)}{A} \]
where T_R is the pulse risetime, t_1 is the point at which the pulse would cross the threshold in the absence of noise, and A represents the pulse amplitude. Thus, in general,
\[ \overline{\epsilon^2} = \left(\frac{T_R}{A}\right)^2\overline{n^2} = \left(\frac{T_R}{A}\right)^2 N_0 B_T \]
As we saw in Chapter 2, the relationship between risetime and bandwidth gives T_R = 1/(2B_T), where B_T is the transmission bandwidth. We let
\[ \tau_0 = 2T_R = 2\left(\frac{1}{2B_T}\right) = \frac{1}{B_T} \]



This gives
\[ \overline{\epsilon^2} = \left(\frac{1}{2AB_T}\right)^2 N_0 B_T = \frac{N_0}{4A^2 B_T} \]
The average transmitted signal power is
\[ P_T = \frac{\tau_0}{T_s}A^2 \]
where T_s is the sampling period, or 1/(2W) assuming Nyquist-rate sampling. Using this in the preceding equation by substituting for A^2 gives
\[ \overline{\epsilon^2} = \frac{N_0\tau_0}{4P_T T_s B_T} \]
Since τ_1 m_n(t) represents the signal, the received signal power is S_D = τ_1^2\overline{m_n^2}, and we have
\[ (\mathrm{SNR})_D = \frac{\tau_1^2\overline{m_n^2}}{\overline{\epsilon^2}} = \tau_1^2\overline{m_n^2}\,\frac{4P_T T_s B_T}{N_0\tau_0} \]
With T_s = 1/(2W) and τ_0 = 1/B_T this becomes
\[ (\mathrm{SNR})_D = 2\tau_1^2\overline{m_n^2}\,B_T^2\,\frac{P_T}{N_0 W} \]
The peak-to-peak pulse displacement is 2τ_1 = T_s = 1/(2W), so that τ_1 = 1/(4W). Substituting this into the preceding equation yields
\[ (\mathrm{SNR})_D = \frac{1}{8}\,\overline{m_n^2}\left(\frac{B_T}{W}\right)^2\frac{P_T}{N_0 W} \]
Therefore, with all the assumptions made, K = 1/8. Note: We have made many assumptions in this problem, including Nyquist-rate sampling. It is useful, however, for illustrating the thought process for applying the concepts of this chapter to modulation processes other than those considered in this chapter.

Problem 7.33
The peak value of the signal is 12.5, so the signal power is
\[ S = \frac{1}{2}(12.5)^2 = 78.125 \]



If the A/D converter has a wordlength n, there are 2^n quantizing levels. Since these span the peak-to-peak signal range, the width of each quantization level is
\[ \frac{25}{2^n} = 25\left(2^{-n}\right) \]
This gives
\[ \epsilon^2 = \frac{1}{12}(25)^2 2^{-2n} = (52.08)\,2^{-2n} \]
This results in the signal-to-noise ratio
\[ \mathrm{SNR} = \frac{S}{\epsilon^2} = \frac{78.125}{52.08}\left(2^{2n}\right) = \frac{3}{2}\left(2^{2n}\right) \]
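The wordlength dependence can be tabulated with a short sketch (not part of the original solution); the range of n is arbitrary.

% Sketch: PCM SNR versus A/D wordlength from the result of Problem 7.33
n = 1:12;                       % A/D wordlengths
snr = (3/2)*2.^(2*n);           % SNR = S/epsilon^2 = 1.5*2^(2n)
disp([n' 10*log10(snr)'])       % approximately 1.76 + 6.02n dB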

7.2 Computer Exercises

Computer Exercise 7.1
From the expression for the normalized error
\[ \overline{\epsilon_{NQ}^2} = \sigma_\phi^2 + \frac{\sigma_n^2}{\sigma_m^2} \]
the following MATLAB program results:

% File: ce7_1a.m
snrdb = [0 1 2];
snr = 10.^(snrdb./10);
varphas = 0:0.1:1;
hold on
for k=1:3
    nerrqpsk = varphas + (1/snr(k));
    plot(varphas,nerrqpsk')
end
hold off
grid, xlabel('Phase Error Variance'), ylabel('Normalized MSE (QDSB)')
% End of script file.

Executing the program gives the results illustrated in Figure 7.11. We see from the defining equation that the mse is linear in the phase error variance for a given SNR. The SNR simply provides an offset (the SNR = 1 (0 dB) curve is on top). For the DSB system we have the following MATLAB program.



Figure 7.11: Normalized mse for QDSB and SSB with the SNR as a parameter.



Figure 7.12: Normalized mse for DSB with the SNR as a parameter.

% File: ce7_1b.m
snrdb = [0 1 2];
snr = 10.^(snrdb./10);
varphas = 0:0.1:1;
hold on
for k=1:3
    nerrqpsk = 0.75*varphas.*varphas + (1/snr(k));
    plot(varphas,nerrqpsk')
end
hold off
grid, xlabel('Phase Error Variance'), ylabel('Normalized MSE (DSB)')
% End of script file.

This yields the result shown in Figure 7.12. The mse is now quadratic in the phase error variance for a given SNR. The SNR again simply provides an offset (the SNR = 1 (0 dB) curve is on top).

Computer Exercise 7.2



The MATLAB program for solving this computer exercise follows.

% File: ce7_2.m
zdB = 0:50;                      % predetection SNR in dB
z = 10.^(zdB/10);                % predetection SNR
beta = [1 5 10 20];              % modulation index vector
hold on                          % hold for plots
for j=1:length(beta)
    bta = beta(j);               % current index
    a1 = exp(-(0.5/(bta+1)*z));  % temporary constant
    a2 = q(sqrt(z/(bta+1)));     % temporary constant
    num = (1.5*bta*bta)*z;
    den = 1+(4*sqrt(3)*(bta+1))*(z.*a2)+(12/pi)*bta*(z.*a1);
    result = num./den;
    resultdB = 10*log10(result);
    plot(zdB,resultdB,'k')
end
hold off
xlabel('Predetection SNR in dB')
ylabel('Postdetection SNR in dB')
% End of script file.

Executing the program gives the results shown in Figure 7.13. We see that for the smaller modulation indices the threshold effect is less pronounced; a smaller value of β yields a smaller bandwidth and decreased detection gain.
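The programs in these computer exercises call a helper function q( ) that is not listed in this manual. A minimal sketch, assuming it simply implements the Gaussian Q-function, is:

% File: q.m  (assumed helper; implements the Gaussian Q-function)
function out = q(x)
out = 0.5*erfc(x/sqrt(2));       % Q(x) = (1/2) erfc(x/sqrt(2))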

Computer Exercise 7.3
The MATLAB program which solves this computer exercise is a modification of the program used in Computer Example 7.2 and follows:

% File: ce7_3.m
zdB = 0:0.1:40;                  % predetection SNR in dB
z = 10.^(zdB/10);                % predetection SNR
beta = [2 4 6 8 10 15 20];       % modulation index vector
hold on                          % hold for plots
for j=1:length(beta)
    a2 = exp(-(0.5/(beta(j)+1)*z));
    a1 = q(sqrt((1/(beta(j)+1))*z));
    r1 = 1+(4*sqrt(3)*(beta(j)+1)*a2);
    r2 = r1+((12/pi)*beta(j)*a2);
    semilogy(zdB,r2,'k')
    axis([10 30 0 4])
    grid
end
hold off                         % release
xlabel('Predetection SNR in dB')
ylabel('1+D_2+D_3')
% End of script file.

Figure 7.13: Plot for Computer Exercise 7.2.

Figure 7.14: Values of PT/N0W for which 1 + D2 + D3 = 2.

This program gives the results shown in Figure 7.14 for β = 2, 4, 6, 8, 10, 15, and 20. We see that the denominator has a value of two for β = 2 at approximately 13 dB. This increases to approximately 23.5 dB at an index of 20. These values can be taken off Figure 7.14 and plotted if desired. It is clear that the predetection SNR for which the denominator of (7.148) is equal to 2 increases as the modulation index β increases.

Computer Exercise 7.4



The MATLAB program for solving this computer exercise follows. The strategy used is to plot two curves for each modulation index: one curve with the modulation not included and one curve with the modulation included. In order to keep the results from being too cluttered only two modulation indices are used, β = 5 and β = 20.

% File: ce7_4.m
zdB = 0:50;                      % predetection SNR in dB
z = 10.^(zdB/10);                % predetection SNR
beta = [5 20];                   % modulation index vector
hold on                          % hold for plots
for j=1:length(beta)
    bta = beta(j);               % current index
    a1 = exp(-(0.5/(bta+1)*z));  % temporary constant
    a2 = q(sqrt(z/(bta+1)));     % temporary constant
    num = (1.5*bta*bta)*z;
    den1 = 1+(4*sqrt(3)*(bta+1))*(z.*a2);
    den2 = 1+(4*sqrt(3)*(bta+1))*(z.*a2)+(12/pi)*bta*(z.*a1);
    result1dB = 10*log10(num./den1);   % w/o modulation
    result2dB = 10*log10(num./den2);   % with modulation
    plot(zdB,result1dB,'k',zdB,result2dB,'k-.')
end
hold off
legend('w/o modulation','with modulation',4)
xlabel('Predetection SNR in dB')
ylabel('Postdetection SNR in dB')
% End of script file.

Executing the program gives the results shown in Figure 7.15. We see, as expected, that above threshold the results are identical. This is because the above-threshold performance is only a function of the predetection SNR and the modulation index. Below threshold, a region in which we typically have little interest, the inclusion of modulation reduces the postdetection SNR, since the denominator of the postdetection SNR expression is increased.

Computer Exercise 7.5 The MATLAB program used in Computer Example 3.4 is modified in order to allow the addition of noise to the input phase deviation. The only parameter to be entered in this program is the variance of the phase due to noise. % File:

ce7_5.m



Figure 7.15: FM performance results with and without modulation. % beginning of preprocessor clear all % be safe fdel = 40; fn = 10; zeta = 0.707; varn = input(’Enter phase noise variance > ’); npts = 2000; % number of simulation points fs = 2000; % default sampling frequency T = 1/fs; t = (0:(npts-1))/fs; % time vector nsettle = fix(npts/10); % set nsettle time as 0.1*npts Kt = 4*pi*zeta*fn; % loop gain a = pi*fn/zeta; % loop filter parameter filt_in_last = 0; filt_out_last=0; vco_in_last = 0; vco_out = 0; vco_out_last=0; % end of preprocessor - beginning of simulation loop for i=1:npts if i<nsettle fin(i) = 0; phin = 0; else fin(i) = fdel; phin = 2*pi*fdel*T*(i-nsettle)+sqrt(varn)*randn(1);


7.2. COMPUTER EXERCISES

39

end s1 = phin - vco_out; s2 = sin(s1); % sinusoidal phase detector s3 = Kt*s2; filt_in = a*s3; filt_out = filt_out_last + (T/2)*(filt_in + filt_in_last); filt_in_last = filt_in; filt_out_last = filt_out; vco_in = s3 + filt_out; vco_out = vco_out_last + (T/2)*(vco_in + vco_in_last); vco_in_last = vco_in; vco_out_last = vco_out; phierror(i)=s1; fvco(i)=vco_in/(2*pi); freqerror(i) = fin(i)-fvco(i); end % end of simulation loop - beginning of postprocessor subplot(2,1,1), plot(t,fin,t,fvco), grid xlabel(’Time - Seconds’) ylabel(’Frequency - Hertz’) subplot(2,1,2), plot(phierror/2/pi,freqerror), grid xlabel(’Phase Error / 2*pi’) ylabel(’Frequency Error - Hz’) % end of postprocessor % End of script file. The program is executed twice, first with a phase noise variance of zero as a reference and then with a phase noise variance of 0.4. The results are illustrated in Figures 7.16 and 7.17. We see that the acquisition time is increased. The student is warned that with noise present, the number of cycles slipped, and consequently the acquisition time, is a random variable. This should be kept in mind when attempting to reproduce these results.

Computer Exercise 7.6 (Note: This computer exercise is closely related to Problem 7.31.) The input to the PLL is assumed to be a sinusoid plus bandlimited white noise. Thus the PLL input is modeled as xc (t) = Ac cos(2πfc t + θ) + nc (t) cos(2πfc t + θ) − ns (t) sin(2πfc t + θ)


Figure 7.16: Results for phase noise variance σ_pn^2 = 0.

Figure 7.17: Results for phase noise variance σ_pn^2 = 0.4.



or
\[ x_c(t) = [A_c + n_c(t)]\cos(2\pi f_c t + \theta) - n_s(t)\sin(2\pi f_c t + \theta) \]
This can be represented as
\[ x_c(t) = R(t)\cos[2\pi f_c t + \theta + \phi(t)] \]
where
\[ \phi(t) = \tan^{-1}\left[\frac{n_s(t)}{A_c + n_c(t)}\right] \approx \tan^{-1}\left[\frac{n_s(t)}{A_c}\right] \approx \frac{n_s(t)}{A_c} \]
The approximation holds because the input SNR is assumed large. We therefore use φ(t) as the input to the baseband PLL model. The SNR of the input signal is
\[ \rho = \frac{A_c^2/2}{\sigma_n^2} = \frac{A_c^2}{2\sigma_n^2} \]
where σ_n^2 = \overline{n_s^2}. The variance of φ(t) is
\[ \sigma_\phi^2 = \frac{\sigma_n^2}{A_c^2} = \frac{1}{2\rho} \]

In the following program we set the phase error variance. Note that this does not determine the SNR. One can set σ2n at some value (such as 1) and then adjust Ac to give the required SNR. % File: ce7_6.m % beginning of preprocessor clear all % be safe fn = 10; zeta = 0.707; varn = input(’Enter noise variance > ’); npts = 50000; % number of simulation points fs = 2000; % sampling frequency T = 1/fs; t = (0:(npts-1))/fs; % time vector Kt = 4*pi*zeta*fn; % loop gain a = pi*fn/zeta; % loop filter parameter filt_in_last = 0; filt_out_last=0; vco_in_last = 0; vco_out = 0; vco_out_last=0; % end of preprocessor - beginning of simulation loop for i=1:npts phin = sqrt(varn)*randn(1); s1 = phin - vco_out;


42

CHAPTER 7. NOISE IN MODULATION SYSTEMS aa(1,i) = s1; s2 = sin(s1); % sinusoidal phase detector s3 = Kt*s2; filt_in = a*s3; filt_out = filt_out_last + (T/2)*(filt_in + filt_in_last); filt_in_last = filt_in; filt_out_last = filt_out; vco_in = s3 + filt_out; vco_out = vco_out_last + (T/2)*(vco_in + vco_in_last); vco_in_last = vco_in; vco_out_last = vco_out; phierror(i)=s1; end % end of simulation loop - beginning of postprocessor [N_samp,x] = hist(aa,40); % get histogram parameters bar(x,N_samp,1) xlabel(’Histogram Bin’), ylabel(’Number of Samples’) % end of postprocessor % End of script file.

Executing the program gives the histogram illustrated in Figure 7.18. The histogram suggests that the pdf of the phase error is very nearly Gaussian.

Computer Exercise 7.7 The MATLAB program for this computer exercise is a simple modification of the program used for Computer Example 7.3. The only change is for the bit error probabilities for FSK (frequency-shift keying) and BPSK (binary phase-shift keying). In order to keep the results from being cluttered results are computed for n = 8. The MATLAB program follows. % File: ce7_7.m n = 8; snrtdB = 0:0.1:30; snrt = 10.^(snrtdB/10); Pb1 = q(sqrt(snrt)); Pb2 = q(sqrt(2*snrt)); Pw1 = 1-(1-Pb1).^n; Pw2 = 1-(1-Pb2).^n; a = 2^(-2*n); snrd1 = 1./(a+Pw1*(1-a));

% wordlength % predetection snr in dB % predetection snr % bit error probability for FSK % bit error for BPSK % Pw for FSK % Pw for BPSK % temporary constant



Figure 7.18: Histogram of phase error. snrddB1 = 10*log10(snrd1); % postdetection snr (FSK) snrd2 = 1./(a+Pw2*(1-a)); snrddB2 = 10*log10(snrd2); % postdetection snr (BPSK) plot(snrtdB,snrddB1,’k’,snrtdB,snrddB2,’k-.’) legend(’FSK’,’BPSK’) xlabel(’Predetection SNR in dB’) ylabel(’Postdetection SNR in dB’) % End of script file. The results are shown in Figure 7.19. We see that above threshold, the postdetection SNR is independent of the modulation scheme. This, of course, was expected since above threshold the postdetection SNR is a function only of wordlength. We also note that the use of BPSK extends the threshold over that of FSK due to the 3-dB improvement in bit error probability. Computer Exercise 7.8 The main calculations are accomplished in the function file as follows. function [gain,delay,px,py,rxy,rho,snrdb] = snrmse(x,y) ln = length(x); % length of reference (x) vector



Figure 7.19: PCM performance for FSK and BPSK with n = 8. fx = fft(x,ln); % FFT the reference (x) vector fy = fft(y,ln); % FFT the measurement (y) vector fxconj = conj(fx); % Conjugate the FFT of reference vector sxy = fy .* fxconj; % Cross PSD rxy = ifft(sxy,ln); % Cross correlation function rxy = real(rxy)/ln; % Take the real part and scale px = x*x’/ln; % Power in reference vector py = y*y’/ln; % Power in measurement vector [rxymax,j] = max(rxy); % Find max of the crosscorrelation gain = rxymax/px; % System gain delay = j-1; % System delay rxy2 = rxymax*rxymax; % Square rxymax for later use rho = rxymax/sqrt(px*py); % Here’s the correlation coefficient snr = rxy2/(px*py-rxy2); % SNR snrdb = 10*log10(snr); % SNR in db % End of function file. The preceding function file is called by a program that sets up the reference signal and the measuremenjt signal. The system under study for this example is a channel having a sinusoidal input. An interfering signal and noise are added in the channel. The paremeters are established in the following program.


7.2. COMPUTER EXERCISES

45

% File: snrmsepost.m npts = 1024; % data points n = 1:npts; A = 20; % signal amplitude B = 2; % interference amplitude phii = rand(1); % interference phase fd = 2; % signal frequency fi = 3; % interference frequency theta = 2*pi*n/npts; % phase increment nvar = 0.1; % noise variance noise = sqrt(nvar)*randn(1,npts); % noise samples xx = A*cos(fd*theta); % reference signal y1 = (A/2)*cos(fd*theta+pi/4); y2 = B*cos(fi*theta+phii); y3 = noise; yy = y1+y2+noise; % measurement [gain,delay,px,py,rxy,rho,snrdb] = snrmse(xx,yy); % end of script Prior to executing the program we first calculate the SNR at the point of measurement. Some of the obvious results are as follows:

\[ P_x = \frac{(20)^2}{2} = 200 \]
\[ P_y = \frac{1}{2}\left(\frac{A}{2}\right)^2 + \frac{(2)^2}{2} + 0.1 = 50 + 2 + 0.1 = 52.1 \]
\[ \mathrm{Gain} = \frac{1}{2} \]
\[ \mathrm{Delay} = \frac{512}{8} = 64 \]

These values are easily understood. The reference signal is a sinusoid with a peak amplitude A = 20, which has a power of 200 Watts. The power at the measurement point is the sum of a signal of peak amplitude of A/2, an independent interfering sinusoid with peak amplitude B = 2, and additive noise having variance 0.1. Since the signal at the measurement point is a sinusoid of amplitude A/2 and the reference signal has a peak amplitude of amplitude A, the theoretical gain is 0.5. The delay requires a bit more thought. Since 1024 samples are taken in an FFT block, and the reference signal goes



through 2 periods in 1024 samples, the period of the signal is 512 samples. Since the phase shift of the signal at the measurement point is π/4, which is 1/8 of a period, the theoretical delay is 512/8 or 64 samples. In addition, since the signal is periodic, a valid delay is k(512) ± 64 samples, where k is an arbitrary integer. The theoretical SNR requires a bit of calculation. Since the interference is orthogonal to the signal (different frequencies) we consider the interference to be noise. Also, the noise is independent of the signal. Therefore the noise power is the sum of the interference power plus the noise power. This gives
\[ N = \frac{(2)^2}{2} + 0.1 = 2.1 \]
The signal power is
\[ S = \frac{1}{2}\left(\frac{A}{2}\right)^2 = 50 \]
Thus the SNR in dB is
\[ (\mathrm{SNR})_{\mathrm{dB}} = 10\log_{10}\left(\frac{50}{2.1}\right) = 13.7675 \]

Executing the program gives

>> snrmsepost
>> px
px = 200.0000
>> py
py = 52.4432
>> gain
gain = 0.5016
>> delay
delay = 448
>> snrdb
snrdb = 13.7322

All of these values are very close to the theoretical values previously determined. We should recall that any calculation involving noise will be a random variable. One source of error is that a sufficient number of samples must be processed to ensure that the variance of the result is small. Another source of error, and often the most important one, arises when the peak of the cross-correlation function is missed. This problem can be reduced by increasing the sampling frequency.

Computer Exercise 7.9



Since the signal is a sinusoid, we can use an inverse sine function as a compressor characteristic. Note that this actually expands the signal, but we will use the term 'compressor' since this is the first operation performed. The compressor output is a triangular wave, so the compressor output has a uniform pdf ranging from −π/2 to π/2. Thus, with 3-bit words we have 8 quantization levels given by

L1: 0 ≤ X ≤ π/8
L2: π/8 ≤ X ≤ π/4
L3: π/4 ≤ X ≤ 3π/8
L4: 3π/8 ≤ X ≤ π/2
L5: −π/8 ≤ X ≤ 0
L6: −π/4 ≤ X ≤ −π/8
L7: −3π/8 ≤ X ≤ −π/4
L8: −π/2 ≤ X ≤ −3π/8

The expander has a sinusoidal characteristic to restore the original sinusoidal signal. The compressor and expander characteristics are given in Figure 7.20, and the original signal, the compressed signal, and the expanded signal are shown in Figure 7.21.
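The MATLAB program for this exercise is not reproduced in this manual. A minimal sketch of one possible implementation, under the assumptions above (unit-amplitude sinusoid, arcsine compressor, 8-level midpoint quantizer, sine expander), is given below; the variable names and sampling choices are ours.

% Sketch: arcsine compressor, 3-bit midpoint quantizer, sine expander
t = 0:1/1000:1;                                % one-second record (assumed)
x = sin(2*pi*t);                               % original unit-amplitude sinusoid
xc = asin(x);                                  % compressed signal, uniform on (-pi/2, pi/2)
delta = pi/8;                                  % step size: 8 levels across (-pi/2, pi/2)
xq = (floor(xc/delta)+0.5)*delta;              % midpoint uniform quantizer
xq = min(max(xq,-pi/2+delta/2),pi/2-delta/2);  % clamp the end points
xe = sin(xq);                                  % expanded (reconstructed) signal
subplot(3,1,1), plot(t,x),  ylabel('Orig. sig.')
subplot(3,1,2), plot(t,xq), ylabel('Compressed sig.')
subplot(3,1,3), plot(t,xe), ylabel('Expanded sig.'), xlabel('Time')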




Figure 7.20: Compressor and expander characteristics.


Figure 7.21: Original signal, compressed signal, and expanded signal.


Chapter 8

Principles of Data Transmission in Noise

8.1 Problem Solutions

Problem 8.1
The signal-to-noise ratio is
\[ z = \frac{A^2 T}{N_0} = \frac{A^2}{N_0 R} \]
Trial and error using the asymptotic expression Q(x) ≈ exp(−x^2/2)/(x√(2π)) (or use of a Q-function program in MATLAB) shows that
\[ P_E = Q\left(\sqrt{2z}\right) = 10^{-6} \quad\text{for}\quad z = 11.297 = 10.53\ \mathrm{dB} \]
Thus
\[ \frac{A^2}{N_0 R} = 11.297 \]
or
\[ A = \sqrt{11.297\,N_0 R} = \sqrt{11.297\times 10^{-7}\times 10000} = 0.3361\ \mathrm{V} \]
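The required SNR and amplitude can also be obtained numerically. In the following sketch (not part of the original solution) N0 and R are left as inputs rather than hard-coded:

% Sketch: required z and amplitude for a target error probability
PE_target = 1e-6;
Qf = @(x) 0.5*erfc(x/sqrt(2));                      % Gaussian Q-function
zreq = fzero(@(z) Qf(sqrt(2*z)) - PE_target, 10)    % approximately 11.3 (10.53 dB)
A = @(N0,R) sqrt(zreq*N0*R);                        % required amplitude for given N0 and R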


Problem 8.2
The bandwidth should be equal to the data rate to pass the main lobe of the signal spectrum. By trial and error, solving the following equation for z,
\[ P_E|_{\mathrm{desired}} = Q\left(\sqrt{2\,z|_{\mathrm{required}}}\right), \qquad z = E_b/N_0 \]
we find that z|required = 6.78 dB (4.76 ratio), 8.39 dB (6.90 ratio), 9.58 dB (9.08 ratio), and 10.53 dB (11.3 ratio) for PE|desired = 10^-3, 10^-4, 10^-5, and 10^-6, respectively. The required signal power is
\[ P_{s,\mathrm{required}} = E_{b,\mathrm{required}}/T_{\mathrm{symbol}} = E_{b,\mathrm{required}}\,R = z|_{\mathrm{required}}\,N_0 R \]
For example, for PE = 10^-3, N0 = 10^-5, and R = 1000 bits/second, we get
\[ P_{s,\mathrm{required}} = z|_{\mathrm{required}}\,N_0 R = 4.76\times 10^{-5}\times 1000 = 4.76\times 10^{-2}\ \mathrm{W} \]
Similar calculations allow the rest of the table to be filled in, giving the following results (the bandwidth equals the data rate).

R, bps      PE = 10^-3 (A^2 and BW)    PE = 10^-4      PE = 10^-5       PE = 10^-6
1,000       4.76 x 10^-2 W; 1 kHz      6.9 x 10^-2 W   9.08 x 10^-2 W   11.3 x 10^-2 W
10,000      4.76 x 10^-1 W; 10 kHz     6.9 x 10^-1 W   9.08 x 10^-1 W   11.3 x 10^-1 W
100,000     4.76 W; 100 kHz            6.9 W           9.08 W           11.3 W
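The table entries can be regenerated with a few lines of MATLAB using only the required-SNR ratios quoted above (this sketch is ours):

% Sketch: required signal power Ps = z_required * N0 * R
zreq = [4.76 6.90 9.08 11.3];     % required z (ratio) for PE = 1e-3 ... 1e-6
R = [1e3 1e4 1e5]';               % data rates, bps
N0 = 1e-5;                        % noise PSD used in the example above
Ps = R*zreq*N0;                   % rows: R, columns: PE
disp(Ps)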

Problem 8.3
a. For PE|desired = 10^-5 we have z|required = 9.58 dB (9.08 ratio). For R = B = 5,000 bps, the signal power is
\[ A^2 = z|_{\mathrm{required}}\,N_0 R = 9.08\times 10^{-6}\times 5000 = 0.0454\ \mathrm{W} \]
b. For R = B = 50,000 bps, the result is A^2 = z|required N0 R = 0.454 W.
c. For R = B = 500,000 bps, we get A^2 = 4.54 W.
d. For R = B = 1,000,000 bps, the result is A^2 = 9.08 W.



Problem 8.4
The decision criterion is
\[ V > \epsilon \ \Rightarrow\ \text{choose } +A; \qquad V < \epsilon \ \Rightarrow\ \text{choose } -A \]
where
\[ V = \pm AT + N \]
and N is a Gaussian random variable of mean zero and variance σ^2 = N_0T/2. Because P(A sent) = P(−A sent) = 1/2, it follows that
\[ P_E = \frac{1}{2}P(-AT + N > \epsilon\ |\ -A\ \text{sent}) + \frac{1}{2}P(AT + N < \epsilon\ |\ A\ \text{sent}) = \frac{1}{2}(P_1 + P_2) \]
where
\[ P_1 = \int_{AT+\epsilon}^{\infty}\frac{e^{-\eta^2/N_0T}}{\sqrt{\pi N_0 T}}\,d\eta = Q\left[\sqrt{2z} + \frac{\epsilon}{\sigma}\right], \qquad \sigma = \sqrt{N_0T/2},\quad z = \frac{A^2T}{N_0} \]
Similarly,
\[ P_2 = \int_{-\infty}^{-AT+\epsilon}\frac{e^{-\eta^2/N_0T}}{\sqrt{\pi N_0 T}}\,d\eta = \int_{AT-\epsilon}^{\infty}\frac{e^{-\eta^2/N_0T}}{\sqrt{\pi N_0 T}}\,d\eta = Q\left[\sqrt{2z} - \frac{\epsilon}{\sigma}\right] \]
Use the approximation
\[ Q(u) = \frac{e^{-u^2/2}}{u\sqrt{2\pi}} \]
to evaluate
\[ \frac{1}{2}(P_1 + P_2) = 10^{-6} \]
by solving iteratively for z for various values of ε/σ. The results are given in the table below.

ε/σ     z, dB for PE = 10^-6
0       10.53
0.2     10.69
0.4     11.01
0.6     11.34
0.8     11.67
1.0     11.98
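The iterative solution can be carried out with a short MATLAB sketch. Note that this sketch uses the exact Q-function rather than the asymptotic approximation, so its results may differ slightly from the tabulated values.

% Sketch: solve (1/2)(P1 + P2) = 1e-6 for z at several threshold offsets
Q = @(x) 0.5*erfc(x/sqrt(2));                                       % Gaussian Q-function
PE = @(z,r) 0.5*(Q(sqrt(2*z)+r) + Q(sqrt(2*z)-r));                  % r = epsilon/sigma
for r = 0:0.2:1
    z = fzero(@(z) PE(z,r) - 1e-6, 15);                             % solve for z
    fprintf('eps/sigma = %3.1f   z = %5.2f dB\n', r, 10*log10(z));
end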



Problem 8.5
Use the z|required values found in the solution of Problem 8.2 along with z|required = A^2/(N_0R) to get R = A^2/(z|required N_0). Substituting N_0 = 10^-6 V^2/Hz and A = 40 mV gives R = 16 x 10^-4 x 10^6 / z|required = 1600/z|required. This results in the values given in the table below.

PE       z|required, dB (ratio)    R, bps
10^-4    8.39 (6.9)                231.9
10^-5    9.58 (9.08)               176.2
10^-6    10.53 (11.3)              141.6

Problem 8.6
The integrate-and-dump detector integrates over the interval [0, T − δ], so the SNR is
\[ z_0 = \frac{A^2(T-\delta)}{N_0} \]
instead of z = A^2T/N_0. Thus
\[ \frac{z_0}{z} = 1 - \frac{\delta}{T} \]
and the degradation in z in dB is
\[ D = -10\log_{10}\left(1 - \frac{\delta}{T}\right) \]
Using δ = 10^-6 s and the data rates given, we obtain the following values of D: (a) for 1/T = R = 10 kbps, D = 0.04 dB; (b) for R = 100 kbps, D = 0.46 dB; (c) for R = 200 kbps, D = 0.97 dB.

Problem 8.7
(a) For the pulse sequences (−A, −A) and (A, A), which occur half the time, no degradation results from timing error, so the error probability is Q(√(2E_b/N_0)). The sequences (−A, A) and (A, −A), which occur the other half of the time, result in the error probability Q[√(2E_b/N_0)(1 − 2|ΔT|/T)] (sketches of the pulse sequences with the integration interval offset from the transition between pulses are helpful here). Thus, the average error probability is the result given in the problem statement. Plots are shown in Figure 8.1 for the given timing errors; a short program that generates them follows below.
(b) Blowing up the plots of Figure 8.1 shows that the intercepts of PE = 10^-4 are about 8.4, 9.9, 12.4, and 15.9 dB, respectively, for timing errors of 0, 0.1, 0.2, and 0.3. The corresponding degradations are 1.5, 4, and 7.5 dB for timing errors of 0.1, 0.2, and 0.3, respectively.



Figure 8.1: PE versus z in dB for ΔT/T = 0, 0.1, 0.2, and 0.3.



Problem 8.8
a. The impulse response of the filter is
\[ h(t) = 2\pi f_3\,e^{-2\pi f_3 t}u(t) \]
and the step response is the integral of the impulse response, or
\[ a(t) = \left(1 - e^{-2\pi f_3 t}\right)u(t) \]
Thus, for a step of amplitude A, the signal out at t = T is
\[ s_0(T) = A\left(1 - e^{-2\pi f_3 T}\right) \]
The power spectral density of the noise at the filter output is
\[ S_n(f) = \frac{N_0/2}{1 + (f/f_3)^2} \]
so that the variance of the noise component at the filter output is
\[ E[N^2] = \int_{-\infty}^{\infty} S_n(f)\,df = \frac{N_0\pi f_3}{2} = N_0 B_N \]
where B_N is the noise-equivalent bandwidth of the filter. Thus
\[ \mathrm{SNR} = \frac{s_0^2(T)}{E[N^2]} = \frac{2A^2\left(1 - e^{-2\pi f_3 T}\right)^2}{N_0\pi f_3} \]
b. To maximize the SNR, differentiate with respect to f_3 and set the result equal to 0. Solving the equation for f_3 results in the equation
\[ 4\pi f_3 T\,e^{-2\pi f_3 T} - 1 + e^{-2\pi f_3 T} = 0 \]
Let α = 2πf_3T to get the equation
\[ (2\alpha + 1)e^{-\alpha} = 1 \]
A numerical solution results in α ≈ 1.25. Thus
\[ f_{3,\mathrm{opt}} = \frac{1.25}{2\pi T} = 0.199/T = 0.199R \]
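The transcendental equation is easily solved numerically; a short sketch:

% Sketch: numerical solution of (2*alpha + 1)*exp(-alpha) = 1
alpha = fzero(@(a) (2*a+1).*exp(-a) - 1, 1)   % nontrivial root, approximately 1.25
f3opt_times_T = alpha/(2*pi)                  % approximately 0.199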


Problem 8.9
As in the derivation in the text for antipodal signaling,
\[ \mathrm{var}[N] = \frac{1}{2}N_0 T \]
The output of the integrator at the sampling time is
\[ V = \begin{cases} AT + N, & A\ \text{sent}\\ N, & 0\ \text{sent}\end{cases} \]
The probabilities of error, given these two signaling events, are
\[ P(\text{error}\ |\ A\ \text{sent}) = P\left(AT + N < \tfrac{1}{2}AT\right) = P\left(N < -\tfrac{1}{2}AT\right) = \int_{-\infty}^{-AT/2}\frac{e^{-\eta^2/N_0T}}{\sqrt{\pi N_0 T}}\,d\eta = Q\left[\sqrt{\frac{A^2T}{2N_0}}\right] \]
Similarly,
\[ P(\text{error}\ |\ 0\ \text{sent}) = P\left(N > \tfrac{1}{2}AT\right) = \int_{AT/2}^{\infty}\frac{e^{-\eta^2/N_0T}}{\sqrt{\pi N_0 T}}\,d\eta = Q\left[\sqrt{\frac{A^2T}{2N_0}}\right] \]
Thus
\[ P_E = \frac{1}{2}P(\text{error}\ |\ A\ \text{sent}) + \frac{1}{2}P(\text{error}\ |\ 0\ \text{sent}) = Q\left[\sqrt{\frac{A^2T}{2N_0}}\right] \]
But the average signal energy is
\[ E_{\mathrm{ave}} = P(A\ \text{sent})\times A^2T + P(0\ \text{sent})\times 0 = \frac{1}{2}A^2T \]


8

CHAPTER 8. PRINCIPLES OF DATA TRANSMISSION IN NOISE

Therefore, the average probability of error is "r PE = Q

Eave N0

#

Problem 8.10 Substitute (8.27) and (8.28) into PE = pP (E | s1 (t)) + (1 − p) P (E | s2 (t)) i i h h Z k exp − (v − s02 )2 /2σ 2 Z ∞ exp − (v − s01 )2 /2σ 2 0 0 p p dv + (1 − p) dv = p 2πσ20 2πσ20 k −∞

Use Leibnitz’s rule to differentiate with respect to the threshold, k, set the result of the differentiation equal to 0, and solve for k = kopt . The differentiation gives h i i¯ h exp − (k − s01 )2 /2σ 20 exp − (k − s02 )2 /2σ 20 ¯¯ ¯ p p −p + (1 − p) =0 ¯ 2πσ 20 2πσ 20 ¯ k=ko p t

or

or

i i h h exp − (kopt − s02 )2 /2σ 20 exp − (kopt − s01 )2 /2σ 20 p p = (1 − p) p 2πσ20 2πσ 20 2

2

− (kopt − s01 ) + (kopt − s02 ) (s01 − s02 ) kopt +

s202 − s201 2 kopt

¶ 1−p = p ¶ µ 1−p 2 = σ 0 ln p ¶ µ 2 s02 − s01 σ0 1−p + = ln s01 − s02 p 2 2σ 20 ln

µ

Problem 8.11 a. Denote the output due to the signal as s0 (t) and the output due to noise as n0 (t). We wish to maximize ¯R ¯2 ¯ ∞ j2πf t0 df ¯ 2 S (f ) H (f ) e ¯ ¯ m 2 −∞ s (t) R∞ £0 2 ¤ = ∗ N0 E n0 (t) −∞ Hm (f ) Hm (f ) df


8.1. PROBLEM SOLUTIONS

9

By Schwartz’s inequality, we have R∞ R∞ Z ∞ ∗ (f ) df s20 (t) 2 −∞ S (f ) S ∗ (f ) df −∞ Hm (f ) Hm 2 R∞ £ ¤≤ |S (f )|2 df = ∗ (f ) ej2πf t0 df N0 N H (f ) H E n20 (t) 0 m −∞ m −∞ where the maximum value on the right-hand side is attained if Hm (f ) = S ∗ (f ) e−j2πf t0 b. The matched filter impulse response is hm (t) = s (t0 − t) by taking the inverse Fourier transform of Hm (f ) employing the time reversal and time delay theorems. c. The realizable matched filter has zero impulse response for t < 0. d. By sketching the components of the integrand of the convolution integral, we find the following results: For t0 = 0, s0 (t0 ) = 0; For t0 = T /2, s0 (t0 ) = A2 T /2; For t0 = T, s0 (t0 ) = A2 T ; For t0 = 2T, s0 (t0 ) = A2 T. Problem 8.12 a. These would be x (t) and y (t) reversed and then shifted to the right so that they are nonzero for t > 0. √ b. A = 7B. c. The outputs for the two cases are So1 (t) = A2 τ Λ [(t − t0 ) /τ ] = 7B 2 τ Λ [(t − t0 ) /τ ]

So2 (t) = 7B 2 τ Λ [(t − t0 ) /7τ ]

d. The shorter pulse gives the more accurate time delay measurement. e. Peak power is lower for y (t). Problem 8.13 a. The matched filter impulse response is given by hm (t) = s2 (T − t) − s1 (T − t)


10

CHAPTER 8. PRINCIPLES OF DATA TRANSMISSION IN NOISE b. Using (8.55) and a sketch of s2 (t) − s1 (t) for an arbitrary t0 , we obtain Z ∞ 2 2 ζ = [s2 (t) − s1 (t)]2 dt N0 −∞ ∙ µ ¶¸ 2 2 2 2 T = − t0 A t0 + 4A t0 + A N0 2 ∙ ¸ T 2 A2 T 2 + 4A t0 , 0 ≤ t0 ≤ = N0 2 2

This result increases linearly with t0 and has its maximum value for for t0 = T /2, which is 5A2 T /N0 . c. The probability of error is

ζ PE = Q √ 2 2

¸

To minimize it, use the maximum value of ζ, which is given in (b). d. The optimum receiver would have two correlators in parallel, one for s1 (t) and one for s2 (t). The outputs for the correlators at t = T are differenced and compared with the threshold k calculated from (8.30), assuming the signals are equally likely.

Problem 8.14 Use ζ 2max = 2Es /N0 for a matched filter, where Es is the signal energy. The required h valuesi /2) 2 2 2 2 for signal energy are: (a) A T ; (b) 3A T /8; (c) A T /2; (d) A T /3 (Note that AΛ (t−T T i h /2) ). in the problem statement should be AΛ 2(t−T T

Problem 8.15 The required formulas are

1 (E1 + E2 ) 2 Z 1 T = s1 (t) s2 (t) dt E 0 1 (E2 − E1 ) = 2⎡ ⎤ s (1 − R12 ) E ⎦ = Q⎣ N0

E = R12 kopt PE


8.1. PROBLEM SOLUTIONS

11

The energies for the three given signals are Z T EA = A2 dt = A2 T 0 ¶ µ µ ¶ Z T Z T π 2 2 πt 2 2 πt EB = B cos B sin − dt = dt T 2 T 0 0 ¶¸ µ Z ∙ 2πt B2T B2 T dt = 1 − cos = 2 0 T 2 µ ¶¸2 Z T 2∙ 2πt C 1 + cos −π EC = dt 4 T 0 µ ¶¸ Z T 2∙ 2πt 2 C 1 − cos dt = 4 T 0 ¶ ¶¸ µ µ Z ∙ 2πt C2 T 2 2πt + cos dt 1 − 2 cos = 4 0 T T µ µ ¶ ¶¸ Z ∙ 2πt 4πt C2 T 3 1 3C 2 T − 2 cos = + cos dt = 4 0 2 T 2 T 8 a. For s1 = sA and s2 = sB , the average signal energy is µ ¶ B2 1 A2 T 1+ E = (EA + EB ) = 2 2 2A2

The correlation coefficient is R12 = = = The optimum threshold is

¸ π (t − T /2) dt AB cos T 0 ∙ ¸ Z 2ABT πt 1 T dt = AB sin E 0 T πE 4AB π (A2 + B 2 /2) 1 E

Z T

µ ¶ 1 1 B2 2 kopt = (EB − EA ) = −A T 2 2 2 The probability of error is ⎡s ⎤ "s ¶ 2 µ µ ¶# 2 A (1 − R ) E 1 T B 4AB 12 ⎦=Q PE = Q ⎣ 1+ 1− N0 N0 π (A2 + B 2 /2) 2 2A2 "s "s ¶µ ¶# ¶# µ µ 2 T T 4AB B B 2 4AB 2 2 = Q A + =Q − 1− A + 2N0 π (A2 + B 2 /2) 2 2N0 2 π


12

CHAPTER 8. PRINCIPLES OF DATA TRANSMISSION IN NOISE b. s1 = sA and s2 = sC : ¢ 1¡ 2 1 (EA + EC ) = A + 3C 2 /8 T 2 2 AC R12 = A2 + 3C 2 /8 µ ¶ 1 1 3C 2 2 −A T kopt = (EC − EA ) = 2 2 8 "s ¶# µ 2 T 3C PE = Q − AC A2 + 2N0 8 E =

c. s1 = sB and s2 = sC : µ ¶ 1 3C 2 1 2 (EB + EC ) = B + T E = 2 4 4 16BC R12 = 3π (B 2 + 3C 2 /4) µ ¶ 1 1 3C 2 2 −B T kopt = (EC − EB ) = 2 4 4 "s ¶# µ 2 16BC T 3C PE = Q − B2 + 4N0 4 π d. s1 = sB and s2 = −sB : E = EB = B 2 T /2 R12 = −1

e. s1 = sC and s2 = −sC :

kopt = 0 ⎤ ⎡s 2T B ⎦ PE = Q ⎣ N0 E = EC = 3C 2 T /8 R12 = −1 kopt = 0


8.1. PROBLEM SOLUTIONS

13 ⎡s

PE = Q ⎣

⎤ 3C 2 T ⎦ 8N0

Problem 8.16 a. The energies are Z T A2 dt = A2 T EA = 0

EB =

Z T /2

2

A dt +

EC =

0

(−A)2 dt = A2 T

T /2

0

Z T /4

Z T

2

A dt +

Z T /2

2

(−A) dt +

T /4

Z 3T /4

2

A dt +

T /2

Z T

(−A)2 dt = A2 T

3T /4

b. Sketch the combinations (A, B) , (B, C) , and (A, C) to show that their respective products have as much area above the t-axis as below it. Therefore, RAB , RBC , and RAC are all zero because they are directly proportional to the integral of the products of the respective signals. The optimum threshold for each case is zero because it is proportional to the difference of the respective signal energies. c. The probability of error for each case is ⎛s

PE = Q ⎝

A2 T N0

⎞ ⎠

Problem 8.17 a. The signal set is given by (8.70) which shows that the total phase shift is 2 cos−1 m = 2 cos−1 (1/2) = 2π/3 rad = 120 degrees. b. From (8.78) and (8.79) the powers in the carrier and modulation components, respectively, are Pc = Pm =

A2 m2 2¡ ¢ A2 1 − m2 2


14

CHAPTER 8. PRINCIPLES OF DATA TRANSMISSION IN NOISE The total power is P = A2 /2 so % power in carrier % power in modulation

= m2 = 1/22 × 100 = 25%

= 1 − m2 = 75%

c. From (8.74) ´ ³p 2 (1 − m2 ) Eb /N0 ³p ´ 2 (1 − 1/22 ) Eb /N0 = Q ´ ³p 1.5Eb /N0 = Q

PE = Q

Fromptrial and error using a Q-function subprogram in MATLAB Q (4.624) = 1.004 × 10−5 so 1.5Eb /N0 ' 4.624 or Eb /N0 = 12.12 = 10.835 dB. Problem 8.18 Numerical integration of (8.83) with (8.81) and (8.82) substituted shows that the degradation for σ 2φ = 0.01 (σ φ = 0.1 rad) is about 0.045 dB; for σ 2φ = 0.05 (σ φ = 0.224 rad) it is about 0.6 dB; for σ 2φ = 0.1 (σ φ = 0.316 rad) it is about 9.5 dB. For the constant phase error model, the degradation is Dconst = −20 log10 [cos (φconst )] As suggested in the problem statement, we set the constant phase error equal to the standard deviation of the Gaussian phase error for comparison putposes; thus, we consider constant phase errors of φconst = 0.1, 0.224, and 0.316 radians. The degradations corresponding to these phase errors are 0.044, 0.219, and 0.441 dB, respectively. The constant phase error degradations are much less serious than the Gaussian phase error degradations. Problem 8.19 From trial and error using a Q-function subprogram in MATLAB Q (4.7534) = 1.0001 × 10−6 √ a. For ASK, PE = Q [ z] = 10−6 gives z = 4.75342 = 22.5948 or z = 13.54 dB. £√ ¤ b. For BPSK, PE = Q 2z = 10−6 gives z = 4.75342 /2 = 11.2974 or z = 10.53 dB. c. Binary FSK bit error probability is the same as ASK.

d. The degradation of BPSK with a phase error of 5 degrees is Dconst = −20 log10 [cos (5o )] = 0.033 dB, so the required SNR to give a bit error probability of 10−5 is 10.53+0.033 = 10.563 dB.


8.1. PROBLEM SOLUTIONS

15

0.7

0.6

Degradation, dB

0.5

0.4

0.3

0.2

0.1

0

0

2

4

6

8 10 12 Phase error, degrees

14

16

18

20

Figure 8.2: hp i √ √ 2 (1 − 1/2) z = Q [ z] = 10−6 gives z = 13.54 e. For PSK with m = 1/ 2, PE = Q dB. f. Assuming that the separate effects are additive, we add the degradation of part (d) to the SNR found in part (e) to get 13.54 + 0.033 = 13.573 dB.

Problem 8.20 The degradation in dB is Dconst = −20 log10 [cos (φ)]. It is plotted in Figure 8.2. The degradations for phase errors of 3, 5, 10, and 15 degrees are 0.0119, 0.0331, 0.1330, and 0.3011 dB, respectively. Problem 8.21 ¡√ ¢ a. Solve PE = Q 2z = 10−4 to get z = 8.4 dB. From ¡ (8.74), ¢ the additional SNR required due to a carrier component is D = −10 log10 1 − m2 . For example, with m = 0.2, D = 0.177 dB; for m = 0.4, D = 0.757 dB; for m = 0.6, D = 1.938 dB; for m = 0.8, D = 4.437 dB; for m = 0.9, D = 7.213 dB.


16

CHAPTER 8. PRINCIPLES OF DATA TRANSMISSION IN NOISE b. Solve PE = Q this value. c. Solve PE = Q this value.

¡√ ¢ 2z = 10−5 to get z = 9.57 dB. Add the degradations of (a) onto

¡√ ¢ 2z = 10−6 to get z = 10.53 dB. Add the degradations of (a) onto

Problem 8.22 a. For BPSK, an SNR of 9.58 dB is required to give an error probability of 10−5 (see the solution to Problem 8.2). For ASK and FSK it is 3.01 dB more, or 12.59 dB. For BPSK and ASK, the required bandwidth is 2R Hz where R is the data rate in bps. For FSK with minimum tone spacing of 0.5R Hz, the required bandwidth is 2.5R Hz. Hence, for BPSK and ASK, the required bandwidth is 100 kHz. For FSK, it is 125 kHz. b. For BPSK, the required SNR is 8.39 dB.for an error probability of 10−4 . For ASK and FSK it is 3.01 dB more or 11.4 dB. The required bandwidths are now 1 MHz for BPSK and ASK; it is 1.25 MHz for FSK.

Problem 8.23 The correlation coefficient is Z 1 T s1 (t) s2 (t) dt E 0 Z 1 T 2 = A cos (ω c t) cos [(ω c + ∆ω) t] dt E 0 = sinc (2∆fT )

R12 =

where ∆f is the frequency separation between the FSK signals in Hz. Using a calculator, we find that the sinc-function takes on its first minimum at an argument value of about 1.4, or ∆f = 0.7/T . The minimum value at this argument value is about −0.216. The improvement in SNR is −10 log10 (1 − R12 ) = −10 log10 (1 + 0.216) = 0.85 dB. Problem 8.24 The encoded bit streams, assuming a reference 1 to start with, are: (a) 1 100 001 000 010; (b) 1 100 110 011 001; (c) 1 111 111 111 111; (d)1 010 101 010 101; (e) 1 111 111 010 101; (f) 1 110 000 011 011; (g) 1 111 010 000 101; (h) 1 100 001 000 010.


8.1. PROBLEM SOLUTIONS

17

Problem 8.25 The differentially encoded bit stream is 1 000 010 101 111 (1 assumed for the starting reference bit). The phase of the transmitted carrier is π 000 0π0 π0π πππ. Integrate the product of the received signal and a 1-bit delayed signal over a (0, T ) interval. Each time the integrated productRis greater than 0, decide 1; each time it is less than 0, decide 0. The T first integration gives 0 A2 cos (ω0 t + π) cos (ω0 t) dt = −A2 T /2. The resulting decision is RT that a 0 was sent. The next integration gives 0 A2 cos (ω0 t) cos (ω 0 t) dt = A2 T /2. The resulting decision is that a 1 was sent. Continuing in this manner, using the previous signal for a reference in the integration for the current signal and using the stated decision criterion, we obtain the original bit sequence. If the signal is inverted, the same demodulated signal is detected as if it weren’t inverted. Problem 8.26 Note that E [n1 ] = E

∙Z 0

−T

¸ Z 0 n (t) cos (ωc t) dt = E [n (t)] cos (ωc t) dt = 0 −T

which follows because E [n (t)] = 0. Similarly for n2 , n3 , and n4 . Consider the variance of each of these random variables: £ ¤ var [n1 ] = E n21 = E = =

Z 0 Z 0

−T

−T

¸ n (t) n (λ) cos (ω c t) cos (ω c λ) dtdλ

E [n (t) n (λ)] cos (ω c t) cos (ω c λ) dtdλ

−T −T Z 0 Z 0 −T

=

∙Z 0 Z 0

N0 2

N0 δ (t − λ) cos (ω c t) cos (ωc λ) dtdλ −T 2

Z 0

−T

cos2 (ωc t) dt =

N0 T 4

where the fact that E [n (t) n (λ)] = N20 δ (t − λ) has been used along with the sifting property of the δ-function to reduce the double integral to a single integral. Similarly for n2 , n3 , and n4 . Therefore, w1 has zero mean and variance 1 1 N0 T var [w1 ] = var [n1 ] + var [n2 ] = 4 4 8 Similarly for w2 , w3 , and w4 .


18

CHAPTER 8. PRINCIPLES OF DATA TRANSMISSION IN NOISE

Problem 8.27 The two error probabilities are ¡√ ¢ 1 PE, opt = e−z and PE, delay-and-mult ≈ Q z 2

Comparative values are:

z, dB PE, opt PE, delay-and-mult 8 9.1 × 10−4 6.8 × 10−3 9 1.8 × 10−4 2.7 × 10−3 −5 10 2.3 × 10 8.5 × 10−4 11 1.7 × 10−6 2.1 × 10−4 Clearly the curves are within 2 dB (and slightly less) by comparing PE, opt at 8 dB with PE, delay-and-mult at 10 dB and similarly the values for 9 and 11 dB. Problem 8.28 Noncoherent binary FSK has error probability 1 PE, NCFSK = e−zN C F S K /2 2 Solve for the SNR to get zNCFSK = −2 ln (2PE, NCFSK ) DPSK has error probability 1 PE, DPSK = e−zD P S K 2 Solve for the SNR to get zDPSK = − ln (2PE, DPSK ) For the delay-and-multiply, the bit error probability approximation is ¡√ ¢ PE, delay-and-mult ≈ Q z

which is easiest solved by iterative evaluation in MATLAB. The values asked for, expressed in dB, are given in the table below: PE 10−3 10−4 10−5 10−6 10−7

zNCFSK , dB 10.9 12.3 13.4 14.2 14.9

zDPSK , dB, opt 7.9 9.3 10.3 11.2 11.9

zDPSK , dB, d&m 9.8 11.4 12.6 13.5 14.3


8.1. PROBLEM SOLUTIONS

19

Problem 8.29 Bandwidths are given by BBPSK = BDPSK = 2R; RBPSK = RBPSK = BBPSK, DPSK /2 = 25 kbps BCFSK = 2.5R; RCFSK = BCFSK /2.5 = 20 kbps BNCFSK = 4R; RNCFSK = BNCFSK /4 = 12.5 kbps

Problem 8.30 Consider the bandpass filter, just wide enough to pass the ASK signal, followed by an envelope detector with a sampler and threshold comparator at its output. The output of the bandpass filter is u (t) = Ak cos (ω c t + Θ) + n (t) = Ak cos (ω c t + Θ) + nc (t) cos (ω c t + Θ) − ns (t) sin (ω c t + Θ)

= x (t) cos (ωc t + Θ) − ns (t) sin (ωc t + Θ)

where Ak = A if signal plus noise is present at the receiver input, and A = 0 if noise alone is present at the input. In the last equation x (t) = Ak + nc (t) Now u (t) can be written in envelope-phase form as u (t) = r (t) cos [ω c t + Θ + φ (t)] where

¸ ∙ p (t) n s −1 r (t) = x2 (t) + n2s (t) and φ (t) = tan x (t)

The output of the envelope detector is r (t), which is sampled each T seconds and compared with the threshold. Thus, to compute the error probability, we need the pdf of the envelope for noise alone at the input and then with signal plus noise at the input. For noise alone, it was shown in Chapter 4 that the pdf of r (t) is Rayleigh, which is fR (r) =

r −r2 /2N e , r ≥ 0, noise only R

where N is the variance (mean-square value) of the noise at the filter output. For signal plus noise at the receiver input, the pdf of the envelope is Ricean. Rather than use this complex expression, however, we observe that for large SNR at the receiver input r (t) ≈ A + nc (t) , large SNR, signal present


20

CHAPTER 8. PRINCIPLES OF DATA TRANSMISSION IN NOISE

which follows by expanding the argument of the square root, dropping the squared terms, and approximating the square root as √ 1 1 + ≈ 1 + , | | << 1 2 Thus, for large SNR conditions, the envelope detector output is approximately Gaussian for signal plus noise present. It has mean A and variance N (recall that the inphase and quadrature lowpass noise components have variance the same as the bandpass process). The pdf of the envelope detector output with signal plus noise present is approximately 2

e−(r−A) /2N √ , large SNR fR (r) = 2πN For large SNR, the threshold is approximately A/2. Thus, for noise alone present, the probability of error is exactly Z ∞ r −r2 /2N 2 e dr = e−A /8N P (E | 0) = R A/2 For signal plus noise present, the probability of error is approximately Z A/2

P (E | S + N present) =

´ ³p e−(r−A) /2N √ A2 /2N dr = Q 2πN 2

−∞ 2 e−A /4N

≈ Now the average signal-to-noise ratio is z=

p π/N A A2 1 A2 /2 = 2 N0 BT 4N

which follows because the signal is present only half the time. probability of error is

Therefore, the average

1 e−z 1 1 + e−z/2 PE = P (E | S + N ) + P (E | 0) ≈ √ 2 2 2 4πz

Problem 8.31 The integral to be integrated is −z

PE = e

Z ∞ 0

r −r2 /N e I0 N

µ

Ar N

dr


8.1. PROBLEM SOLUTIONS The Ricean pdf, from (6.150), is ( fR (r) =

r exp σ2

0, r < 0

21

´ n h 2 io ³√ r r − 2σ + K I 2K 0 2 σ , r ≥0

Since it is a pdf, its integral over -∞ < r < ∞ is 1. The trick is to get the integrand of the expression for PE to look like a Ricean pdf. Obviously, a start is to identify N = 2σ 2 and √ A 2K A2 A2 which can be solved to give K = 4N = z2 because z = 2N . The the integral for N = σ PE becomes Z e−z ∞ r −r2 /2σ2 ³√ r´ PE = dr e I 2K 0 2 0 σ2 σ

We multiply by 1 = eK e−K to get

−z Z ∞ Ke

r −(r2 /2σ2 +K ) ³√ r´ dr e 2K I 0 2 0 σ2 σ 1 e−z 1 = eK = ez/2 e−z = e−z/2 2 2 2

PE = e

where the integral is 1 because the integrand is a Ricean pdf. Problem 8.32 The binary and Gray encoded numbers are given in the table below: Decimal Binary Gray Decimal Binary Gray 0 00000 00000 16 10000 11000 1 00001 00001 17 10001 11001 2 00010 00011 18 10010 11011 3 00011 00010 19 10011 11110 4 00100 00110 20 10100 11110 5 00101 00111 21 10101 11111 6 00110 00101 22 10110 11101 7 00111 00100 23 10111 11100 8 01000 01100 24 11000 10100 9 01001 01101 25 11001 10101 10 01010 01111 26 11010 10111 11 01011 01110 27 11011 10110 12 01100 01010 28 11100 10010 13 01101 01011 29 11101 10011 14 01110 01001 30 11110 10001 15 01111 01000 31 11111 10000


22

CHAPTER 8. PRINCIPLES OF DATA TRANSMISSION IN NOISE

Problem 8.33 Evaluation is facilitated by means of the following sums: m X

1 = m;

k=1

m X

m

k=

k=1

m (m + 1) X 2 m (m + 1) (2m + 1) ; k = 2 6 k=1

The expression to be evaluated is Eave =

=

¸ M M ∙ M −1 2 1 X 1 X ∆ (j − 1) ∆ − Ej = M M 2 j=1 j=1 " # ¸ M ∙ M 2 M − 1 2 ∆2 X ∆2 X (M − 1) (j − 1) − = (j − 1)2 − (M − 1) (j − 1) + M 2 M 4 j=1

=

=

= = =

M 2 X

j=1

M 2 X

M

(M − 1)2 ∆2 X (j − 1) + 1 4 M j=1 j=1 j=1 ⎡ ⎤ M−1 M−1 M 2 X 2 X X ∆ ⎣ (M − 1) k 2 − (M − 1) k+ 1⎦ M 4 j=1 k=1 k=1 " # 2 (M − 1) (M − 1)2 2 (M − 1) (2M − 1) − + ∆ 6 2 4 " # (M − 1)2 ∆2 2 (M − 1) (2M − 1) − (M − 1) [2 (2M − 1) − 3 (M − 1)] ∆ = 6 4 12 ¡ 2 ¢ M − 1 ∆2 ∆2 (M − 1) (M + 1) = 12 12 ∆ M

∆ (j − 1) − (M − 1) M 2

Problem 8.34 a. Appy the equation above (8.141) since baseband is specified: BP AM =

20 R or 10 = ⇒M =4 log2 M log2 M

b. Apply (8.140): 2 (M − 1) Pb, antip. PAM = Q M log2 M

Ãr

6 log2 M Eb M 2 − 1 N0

!


8.1. PROBLEM SOLUTIONS

23

For M = 4 and a BEP of 10−6 this becomes 2 (4 − 1) Q 4 log2 4 Q

Ãr

Ãr

6 log2 4 Eb 42 − 1 N0 6 log2 4 Eb 42 − 1 N0

! !

= 10−6 =

4 × 10−6 3

Trial and error with a Q-function routine in MATLAB yields r

6 log2 4 Eb Eb 5 ' 4.7 ⇒ = × 4.72 = 14.41 dB 2 4 − 1 N0 N0 4

For M = 4 and a BEP of 10−5 we have

Q

Ãr

6 log2 4 Eb 42 − 1 N0

!

4 × 10−5 3 r 6 log2 4 Eb ⇒ ' 4.2 42 − 1 N0 Eb 5 ⇒ = × 4.22 = 13.43 dB N0 4 =

Problem 8.35

a. Appy the equation above (8.141) since baseband is specified (M must be a power of 2): BPAM =

25 R or 10 ≥ ⇒M =8 log2 M log2 M


24

CHAPTER 8. PRINCIPLES OF DATA TRANSMISSION IN NOISE b. The bit error probability is Pb, antip. PAM

= = ⇒ ⇒ ⇒

For BEP

= ⇒ ⇒

Ãr ! 6 log2 M Eb 2 (M − 1) Q M log2 M M 2 − 1 N0 Ãr ! 6 log2 8 Eb 2 (8 − 1) Q = 10−6 8 log2 8 82 − 1 N0 Ãr ! 6 log2 8 Eb 12 × 10−6 Q = 82 − 1 N0 7 r 6 log2 8 Eb ' 4.643 82 − 1 N0 7 Eb = × 4.6432 = 18.78 dB N0 2 Ãr ! 6 log2 8 Eb 12 −5 10 ; Q × 10−5 = 2 8 − 1 N0 7 r 6 log2 8 Eb ' 4.143 82 − 1 N0 Eb 7 = × 4.1432 = 17.79 dB N0 2

Problem 8.36 For BPSK, the BEP is found from ´ ³p 2Eb /N0 = 10−5 ⇒ Eb /N0 = 9.58 dB Pb, BPSK = Q The data rate in terms of bandwidth for BPSK is RBPSK =

200 kHz BRF = = 100 kbps 2 2

For DPSK ¡ ¢ 1 exp (−Eb /N0 ) = 10−5 ⇒ Eb /N0 = − ln 2 × 10−5 = 10.82 = 10.34 dB 2

The data rate is the same as for BPSK. For coherent FSK, the required Eb /N0 is 3.01 dB more than for BPSK, or Eb /N0 = 9.58 + 3.01 = 12.59 dB. According to (8.91) RCFSK =

200 kHz BRF = = 80 kbps 2.5 2.5


8.1. PROBLEM SOLUTIONS

25

For noncoherent FSK ¢ ¡ 1 exp (−Eb /2N0 ) = 10−5 ⇒ Eb /N0 = −2 ln 2 × 10−5 = 21.64 = 13.35 dB 2

According to (8.121)

RNCFSK =

200 kHz BRF = = 50 kbps 4 4

For antipodal 4-ary PAM, according to (8.140), Ãr ! 6 log2 4 Eb 2 (4 − 1) Eb Q = 13.43 dB (see Prob. 8.34 solution) = 10−5 ⇒ 2 4 log2 4 4 − 1 N0 N0 The data rate in terms of bandwidth can be obtained from (8.141): 1 1 R = BRF , PAM log2 M = (200) log2 4 = 200 kbps 2 2 For antipodal 8-ary PAM, according to (8.140), Ãr ! 2 (8 − 1) 6 log2 8 Eb Eb Q = 17.79 dB (see Prob. 8.35 solution) = 10−5 ⇒ 8 log2 8 82 − 1 N0 N0 The data rate in terms of bandwidth can be obtained from (8.141): 1 1 R = BRF , PAM log2 M = (200) log2 8 = 300 kbps 2 2 Problem 8.37 For β = 0.25, P (f ) =

⎧ ⎨ T, ⎩

0,

0 ≤ |f | ≤ 0.75 2T © £ ¡ ¢¤ª T πT 0.75 0.75 1.25 1 + cos |f | − , ≤ |f | ≤ Hz 2 0.25 2T 2T 2T |f | > 0.75 2T

The optimum transmitter and receiver transfer functions are 1/4

|HT (f )|opt = and |HR (f )|opt =

A |P (f )|1/2 Gn (f ) |HC (f )|1/2

K |P (f )|1/2

1/4

Gn (f ) |HC (f )|1/2


26

CHAPTER 8. PRINCIPLES OF DATA TRANSMISSION IN NOISE

Setting all scale factors equal to 1, these become q |P (f )|1/2 1/4 1 + (f /fC )2 q |HT (f )|opt = 1/4 1 + (f /f3 )2

q q 2 1/4 1 + (f /f3 ) 1 + (f/fC )2

1/2 1/4

|HR (f )|opt = |P (f )|

A MATL:AB program for computing the optimum transfer functions is given below. Plots for the various cases are then given in Figures 8.3 - 8.5. % Problem 8.37: Optimum transmit and receive filters for raised-cosine pulse shaping, % first-order Butterworth channel filtering, and first-order noise spectrum % A = char(’-’,’:’,’—’,’-.’); clf beta00 = input(’Enter starting value for beta: >=0 and <= 1 ’); del_beta = input(’Enter step in beta: starting value + 4*step <= 1 ’); R = 1; f_3 = R/2; f_C = R/2; delf = 0.01; HT_opt = []; HR_opt = []; for n = 1:4 beta = (n-1)*del_beta+beta00; beta0(n) = beta; f1 = 0:delf:(1-beta)*R/2; f2 = (1-beta)*R/2:delf:(1+beta)*R/2; f3 = (1+beta)*R/2:delf:R; f = [f1 f2 f3]; P1 = ones(size(f1)); P2 = 0.5*(1+cos(pi/(R*beta)*(f2-0.5*(1-beta)*R))); P3 = zeros(size(f3)); Gn = 1./(1+(f/f_3).^2); HC = 1./sqrt(1+(f/f_C).^2); P = [P1 P2 P3]; HT_opt = sqrt(P).*(Gn.^.25)./sqrt(HC); HR_opt = sqrt(P)./((Gn.^.25).*sqrt(HC)); subplot(2,1,1),plot(f,HR_opt,A(n,:)),xlabel(’f, Hz’),ylabel(’|H_R_,_o_p_t(f)||’) title([’R = ’,num2str(R),’ bps; channel filter 3-dB frequency = ’,num2str(f_C),’


8.1. PROBLEM SOLUTIONS

27

|HR,opt(f)||

R = 1 bps; channel filter 3-dB frequency = 0.5 Hz; noise 3-dB frequency = 0.5 Hz 1.5

1

0.5

0

0

0.1

0.2

0.3

0.4

0.5 f, Hz

0.6

0.7

0.8

0.9

1

|HT,opt(f)|

1.5

β = 0.2 β = 0.3

1

β = 0.4 β = 0.5

0.5

0

0

0.1

0.2

0.3

0.4

0.5 f, Hz

0.6

0.7

0.8

0.9

1

Figure 8.3: Hz; noise 3-dB frequency = ’,num2str(f_3),’ Hz’]) if n == 1 hold on end subplot(2,1,2),plot(f,HT_opt,A(n,:)),xlabel(’f, Hz’),ylabel(’|H_T_,_o_p_t(f)|’) if n == 1 hold on end end legend([’\beta = ’,num2str(beta0(1))],[’\beta = ’,num2str(beta0(2))], [’\beta = ’,num2str(beta0(3))],[’\beta = ’,num2str(beta0(4))],1) a. f3 = fC = 1/2T b. fC = 2f3 = 1/T c. f3 = 2fC = 1/T


28

CHAPTER 8. PRINCIPLES OF DATA TRANSMISSION IN NOISE

|HR,opt(f)||

R = 1 bps; channel filter 3-dB frequency = 1 Hz; noise 3-dB frequency = 0.5 Hz 1.5

1

0.5

0

0

0.1

0.2

0.3

0.4

0.5 f, Hz

0.6

0.7

0.8

0.9

1

|HT,opt(f)|

1

β = 0.2 β = 0.3 β = 0.4 β = 0.5

0.5

0

0

0.1

0.2

0.3

0.4

0.5 f, Hz

Figure 8.4:

0.6

0.7

0.8

0.9

1


8.1. PROBLEM SOLUTIONS

29

|HR,opt(f)||

R = 1 bps; channel filter 3-dB frequency = 0.5 Hz; noise 3-dB frequency = 1 Hz 1.5

1

0.5

0

0

0.1

0.2

0.3

0.4

0.5 f, Hz

0.6

0.7

0.8

0.9

1

|HT,opt(f)|

1.5

β = 0.2 β = 0.3

1

β = 0.4 β = 0.5

0.5

0

0

0.1

0.2

0.3

0.4

0.5 f, Hz

Figure 8.5:

0.6

0.7

0.8

0.9

1


30

CHAPTER 8. PRINCIPLES OF DATA TRANSMISSION IN NOISE

Problem 8.38
This is a simple matter of sketching P(f) = [b/(b − a)] Λ(f/b) − [a/(b − a)] Λ(f/a) and P(f ± B), where B = (a + b)/2, side by side and showing that they add to a constant in the overlap region.

Problem 8.39
a. The optimum transmitter and receiver transfer functions are

|HT(f)|opt = A |P(f)|^(1/2) Gn^(1/4)(f) / |HC(f)|^(1/2)

and

|HR(f)|opt = K |P(f)|^(1/2) / [Gn^(1/4)(f) |HC(f)|^(1/2)]

where P(f) is the signal spectrum and A and K are arbitrary constants. From the problem statement, Gn^(1/4)(f) = (10^−8)^(1/4) = 10^−2 and

|HC(f)| = 1 / sqrt(1 + (f/4800)²)

The signal spectrum for β = 1 and 1/T = 9600 bps is, from (4.137),

P(f) = (T/2){1 + cos[(πT/β)(|f| − (1 − β)/(2T))]},  0 ≤ |f| ≤ (1 + β)/(2T) = 9600 Hz
     = [1/(2 × 9600)][1 + cos(π|f|/9600)],  0 ≤ |f| ≤ 9600 Hz
     = (1/9600) cos²(π|f|/19200),  0 ≤ |f| ≤ 9600 Hz

Thus,

|HT(f)|opt = 10^−2 A sqrt[(1/9600) cos²(π|f|/19200)] [1 + (f/4800)²]^(1/4)
           = (10^−2 A/97.98) |cos(π|f|/19200)| [1 + (f/4800)²]^(1/4),  0 ≤ |f| ≤ 9600 Hz

|HR(f)|opt = 10² K sqrt[(1/9600) cos²(π|f|/19200)] [1 + (f/4800)²]^(1/4)
           = (10² K/97.98) |cos(π|f|/19200)| [1 + (f/4800)²]^(1/4),  0 ≤ |f| ≤ 9600 Hz

where A and K may be chosen so that 10^−2 A/97.98 = 10² K/97.98 = 1.



b. From (8.149), Q(A/σ) = 10^−4 results in A/σ ≈ 3.719.

c. Matching arguments of (8.149) and (8.153) we have

3.72 ≈ A/σ = sqrt(ET) [ ∫_{−∞}^{∞} Gn^(1/2)(f) P(f) / |HC(f)| df ]^(−1)

Therefore, we need to evaluate the integral

I = ∫_{−∞}^{∞} Gn^(1/2)(f) P(f)/|HC(f)| df = 2 ∫_0^{∞} Gn^(1/2)(f) P(f)/|HC(f)| df
  = (2 × 10^−2/9600) ∫_0^{9600} cos²(πf/19200) sqrt(1 + (f/4800)²) df,  v = f/4800
  = 10^−2 ∫_0^{2} cos²(πv/4) sqrt(1 + v²) dv ≈ 1.213 × 10^−2

Thus

3.72 = sqrt(ET)/(1.213 × 10^−2)  ⇒  ET = 2.036 × 10^−3 J
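As a quick numerical check of the last step (a sketch in plain MATLAB; the change of variable and limits are those used above):

% Numerical evaluation of the normalized integral in Problem 8.39(c)
I_v = integral(@(v) cos(pi*v/4).^2 .* sqrt(1 + v.^2), 0, 2);  % about 1.213
I   = 1e-2*I_v;                 % value of the full integral
ET  = (3.72*I)^2                % transmitted energy, about 2.04e-3 J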

Problem 8.40
The MATLAB program is given below and Figure 8.6 shows the probability of error curves.

% Plots for Prob. 8.40
%
clf
A = char('-','--','-.',':','--.','-..');
delta = input(' Enter value for delta: ')
taum_T0 = input(' Enter vector of values for tau_m/T: ')
L_taum = length(taum_T0);
z0_dB = 0:.1:15;
z0 = 10.^(z0_dB/10);
for ll = 1:L_taum
    ll
    taum_T = taum_T0(ll)
    P_E = 0.5*qfn(sqrt(2*z0)*(1+delta)) + 0.5*qfn(sqrt(2*z0)*((1+delta)-2*delta*taum_T));
    semilogy(z0_dB, P_E, A(ll,:))
    if ll == 1
        hold on; grid on;
        axis([0, inf, 10^(-6), 1]);
        xlabel('z_0, dB'); ylabel('P_E')
    end
end
title(['P_E versus z_0 in dB for \delta = ', num2str(delta)])
if L_taum == 1
    legend(['\tau_m/T = ',num2str(taum_T0)],3)
elseif L_taum == 2
    legend(['\tau_m/T = ',num2str(taum_T0(1))],['\tau_m/T = ',num2str(taum_T0(2))],3)
elseif L_taum == 3
    legend(['\tau_m/T = ',num2str(taum_T0(1))],['\tau_m/T = ',num2str(taum_T0(2))],...
        ['\tau_m/T = ',num2str(taum_T0(3))],3)
elseif L_taum == 4
    legend(['\tau_m/T = ',num2str(taum_T0(1))],['\tau_m/T = ',num2str(taum_T0(2))],...
        ['\tau_m/T = ',num2str(taum_T0(3))],['\tau_m/T = ',num2str(taum_T0(4))],3)
end

Figure 8.6: P_E versus z_0 in dB for δ = 0.5 and τ_m/T = 0.2, 0.6, and 1.

Problem 8.41
The program given in Problem 8.40 can be adapted for making the asked-for plot.



Problem 8.42
The appropriate equations are: BPSK: (8.74) with m = 0 for nonfading, (8.196) for fading; DPSK: (8.112) for nonfading, (8.198) for fading; CFSK: (8.88) for nonfading, (8.197) for fading; NFSK: (8.120) for nonfading, (8.199) for fading. Degradations for PE = 10^−3 are:

Modulation type   Fading SNR, dB   Nonfaded SNR, dB   Fading margin, dB
BPSK              23.97            6.78               17.19
DPSK              26.97            7.93               19.05
Coh. FSK          26.98            9.79               17.19
Noncoh. FSK       29.99            10.94              19.05
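The fading and nonfading entries are easy to reproduce from the closed-form error probability expressions. The MATLAB sketch below does this for DPSK and noncoherent FSK, for which the inversions are simple; the BPSK and coherent FSK columns follow the same way using the inverse Q-function.

% Required SNRs at P_E = 1e-3 for DPSK and noncoherent FSK, unfaded and flat-Rayleigh faded
PE = 1e-3;
z_dpsk  = -log(2*PE);          % nonfading DPSK:   P_E = 0.5*exp(-z)
Z_dpsk  = 1/(2*PE) - 1;        % fading DPSK:      P_E = 1/(2*(1+Zbar))
z_ncfsk = -2*log(2*PE);        % nonfading NCFSK:  P_E = 0.5*exp(-z/2)
Z_ncfsk = 1/PE - 2;            % fading NCFSK:     P_E = 1/(2+Zbar)
disp(10*log10([z_dpsk Z_dpsk; z_ncfsk Z_ncfsk]))   % about [7.93 26.98; 10.94 29.99] dB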

Problem 8.43
Comparing the exponents in (8.194) and (8.195) it is apparent that

1/(2σ_w²) = 1 + 1/Z̄ = (1 + Z̄)/Z̄,  or  2σ_w² = Z̄/(1 + Z̄)

so that (8.194) becomes

PE = 1/2 − [sqrt(2πσ_w²)/sqrt(π)] ∫_0^{∞} [exp(−w²/(2σ_w²)) / sqrt(2πσ_w²)] dw
   = 1/2 − (1/2) sqrt(2σ_w²)
   = (1/2)[1 − sqrt(Z̄/(1 + Z̄))]
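A quick numerical sanity check of the closed form (a sketch; Z̄ = 10 is an arbitrary test value, and the integral form is the one reconstructed above):

% Compare the closed-form flat-fading BPSK result with direct integration
Zbar = 10;
sw2  = Zbar/(2*(1+Zbar));                        % sigma_w^2 from 2*sigma_w^2 = Zbar/(1+Zbar)
PE_closed = 0.5*(1 - sqrt(Zbar/(1+Zbar)));
PE_int    = 0.5 - integral(@(w) exp(-w.^2/(2*sw2)), 0, Inf)/sqrt(pi);
disp([PE_closed PE_int])                         % the two values agree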

Problem 8.44 The expressions with Q-functions can be integrated by parts as shown in the text (BPSK and coherent FSK). The other two cases involve exponential integrals which are easily integrated (DPSK and noncoherent FSK).



Problem 8.45
We need to invert the matrix

[Pc] = [ pc(0)   pc(−T)   pc(−2T) ]   [  1      0.1   −0.01 ]
       [ pc(T)   pc(0)    pc(−T)  ] = [  0.2    1      0.1  ]
       [ pc(2T)  pc(T)    pc(0)   ]   [ −0.02   0.2    1    ]

Using a calculator or MATLAB, we find the inverse matrix to be

[Pc]^(−1) = [  1.02   −0.11    0.02 ]
            [ −0.21    1.04   −0.11 ]
            [  0.06   −0.21    1.02 ]

The optimum weights are given by the middle column.

a. The equation giving the equalized output is

peq(mT) = −0.11 pc[(m + 1)T] + 1.04 pc[mT] − 0.21 pc[(m − 1)T]

b. Calculations with the above equation using the given sample values result in

peq(−2T) = −0.02;  peq(−T) = 0;  peq(0) = 1;  peq(T) = 0;  peq(2T) = −0.06

Note that equalized values two samples away from the center sample are not zero because the equalizer can only provide three specified values.

c. According to (8.216), the degradation due to noise enhancement is

D = Σ_{j=−N}^{N} α_j² = (−0.11)² + (1.04)² + (−0.21)² = 1.1136 = 0.4673 dB
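The result is easy to verify numerically. The sketch below redoes the inversion and the equalized-output calculation in plain MATLAB; samples of pc(t) beyond ±2T are assumed to be zero here (the problem statement may give additional values), so the outer equalized samples are only approximate.

% Zero-forcing equalizer check for Problem 8.45
Pc  = [1 0.1 -0.01; 0.2 1 0.1; -0.02 0.2 1];
W   = inv(Pc);
a   = W(:,2);                          % middle column = tap weights [a_-1; a_0; a_1]
pcs = [0 -0.01 0.1 1 0.2 -0.02 0];     % pc(-3T) ... pc(3T), outer samples assumed zero
peq = zeros(1,5);
for m = -2:2                           % equalized samples peq(-2T) ... peq(2T)
    peq(m+3) = a(1)*pcs(m+5) + a(2)*pcs(m+4) + a(3)*pcs(m+3);
end
disp(peq)                              % center sample is 1, adjacent samples are 0
D_dB = 10*log10(sum(a.^2))             % noise-enhancement degradation in dB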

Problem 8.46
a. The solution is similar to Example 8.9 except that

Snn(f) = (N0/2) / [1 + (f/f3)²]



so that

Rnn(τ) = (πf3 N0/2) exp(−2πf3 |τ|)

The desired normalized autocorrelation matrix, with ∆ = Tm = T and f3 = 1/T, is

[Ryy] = (N0/T) [ (1 + b²)z + π/2       bz + (π/2)e^(−2π)     (π/2)e^(−4π)      ]
               [ bz + (π/2)e^(−2π)     (1 + b²)z + π/2       bz + (π/2)e^(−2π) ]
               [ (π/2)e^(−4π)          bz + (π/2)e^(−2π)     (1 + b²)z + π/2   ]

b. The following matrix is needed:

[Ryd] = [ Ryd(−T) ]   [ bA ]
        [ Ryd(0)  ] = [ A  ]
        [ Ryd(T)  ]   [ 0  ]

Multiply the matrix given in (a) by the column matrix of unknown weights and set the product equal to [Ryd]. Dividing out the factor N0/T so that the right-hand side is in terms of the signal-to-noise ratio z (the extra factor of A needed multiplies both sides of the matrix equation and on the left-hand side is lumped into the new coefficients a_{−1}, a_0, a_1), this becomes

[ (1 + b²)z + π/2       bz + (π/2)e^(−2π)     (π/2)e^(−4π)      ] [ a_{−1} ]   [ bA²T/N0 ]   [ bz ]
[ bz + (π/2)e^(−2π)     (1 + b²)z + π/2       bz + (π/2)e^(−2π) ] [ a_0    ] = [ A²T/N0  ] = [ z  ]
[ (π/2)e^(−4π)          bz + (π/2)e^(−2π)     (1 + b²)z + π/2   ] [ a_1    ]   [ 0       ]   [ 0  ]

Invert the matrix equation above to find the weights. A MATLAB program for doing so is given below. For z = 10 dB and b = 0.1, the weights are a_{−1} = 0.0773, a_0 = 0.7821, and a_1 = −0.2781.

c. According to (8.229) the minimum MSE is

Normalized minimum MSE = E[d²(t)] − [Ryd]^T [A]opt / z   (z is for normalization)
                       = 1 − (1/z)[ bz  z  0 ][ a_{−1}  a_0  a_1 ]^T
                       = 0.2179



% Solution for Problem 8.46
%
z_dB = input('Enter the signal-to-noise ratio in dB ');
z = 10^(z_dB/10);
b = input('Enter multipath gain ');
R_yy(1,1) = (1+b^2)*z + pi/2;
R_yy(2,2) = R_yy(1,1); R_yy(3,3) = R_yy(1,1);
R_yy(1,2) = b*z + (pi/2)*exp(-2*pi);
R_yy(1,3) = (pi/2)*exp(-4*pi);
R_yy(3,1) = R_yy(1,3);
R_yy(2,1) = R_yy(1,2); R_yy(2,3) = R_yy(1,2); R_yy(3,2) = R_yy(1,2);
disp('R_yy')
disp(R_yy)
B = inv(R_yy);
disp('R_yy^-1')
disp(B)
R_yd = [b*z z 0]'
A = B*R_yd

Problem 8.47
In normalized form, from Example 8.9,

[Ryy] = [ (1 + β²)(2Eb/N0) + 1    2βEb/N0                  0                    ]
        [ 2βEb/N0                 (1 + β²)(2Eb/N0) + 1     2βEb/N0              ]
        [ 0                       2βEb/N0                  (1 + β²)(2Eb/N0) + 1 ]

[Ryd] = [ 2βEb/N0 ]
        [ 2Eb/N0  ]
        [ 0       ]

The steepest descent tap weight adjustment algorithm is

[A]^(k+1) = [A]^(k) + μ{[Ryd] − [Ryy][A]^(k)}



or, carrying out the matrix multiplication and collecting terms component by component, the separate update equations are

a_{−1}^(k+1) = 2μβ(Eb/N0)(1 − a_0^k) + {1 − μ[(1 + β²)(2Eb/N0) + 1]} a_{−1}^k

a_0^(k+1) = 2μ(Eb/N0) + {1 − μ[(1 + β²)(2Eb/N0) + 1]} a_0^k − μ[2β(Eb/N0) a_{−1}^k + 2β(Eb/N0) a_1^k]

a_1^(k+1) = {1 − μ[(1 + β²)(2Eb/N0) + 1]} a_1^k − 2μβ(Eb/N0) a_0^k
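For reference, the iteration itself is only a few lines of MATLAB. The sketch below runs the steepest-descent recursion for an illustrative (assumed) choice of Eb/N0, β, step size μ, and number of iterations; it converges toward the Wiener solution inv(Ryy)*Ryd when μ is small enough.

% Steepest-descent tap-weight iteration for the three-tap equalizer of Problem 8.47
EbN0 = 10; beta = 0.1; mu = 0.005; Niter = 500;     % illustrative values
d = (1+beta^2)*2*EbN0 + 1;  o = 2*beta*EbN0;
Ryy = [d o 0; o d o; 0 o d];
Ryd = [2*beta*EbN0; 2*EbN0; 0];
a = zeros(3,1);                      % tap weights [a_-1; a_0; a_1], initialized to zero
for k = 1:Niter
    a = a + mu*(Ryd - Ryy*a);        % [A]^(k+1) = [A]^(k) + mu*([Ryd] - [Ryy][A]^(k))
end
disp([a inv(Ryy)*Ryd])               % iterated weights versus direct solution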

Problem 8.48
From Example 8.9, the autocorrelation matrix with numerical values substituted, from (8.237), is

Ryy = [ 26  10   0 ]
      [ 10  26  10 ]
      [  0  10  26 ]

Using MATLAB, its eigenvalues are

λ1 = 11.86,  λ2 = 26.00,  λ3 = 40.14

An acceptable range for μ is therefore

0 < μ < 2/40.14 = 0.0498
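The eigenvalue computation is a one-liner in MATLAB:

% Eigenvalue check and step-size bound for Problem 8.48
Ryy = [26 10 0; 10 26 10; 0 10 26];
lambda = eig(Ryy)            % 11.86, 26.00, 40.14
mu_max = 2/max(lambda)       % about 0.0498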

Problem 8.49
a. Equation (8.237) scaled by 10 is

[ 2.6  1    0   ] [ c_{−1} ]   [ 1 ]
[ 1    2.6  1   ] [ c_0    ] = [ 2 ]
[ 0    1    2.6 ] [ c_1    ]   [ 0 ]

The inverse of the new Ryy is

Ryy^(−1) = [  0.4654  −0.2101   0.0808 ]
           [ −0.2101   0.5462  −0.2101 ]
           [  0.0808  −0.2101   0.4654 ]

The recomputed weights are

[ c_{−1} ]   [  0.4654  −0.2101   0.0808 ] [ 1 ]   [  0.0452 ]
[ c_0    ] = [ −0.2101   0.5462  −0.2101 ] [ 2 ] = [  0.8824 ]
[ c_1    ]   [  0.0808  −0.2101   0.4654 ] [ 0 ]   [ −0.3394 ]

which are the same as obtained in Example 8.9.

b. The eigenvalues of the scaled Ryy are 1.1858, 2.6000, and 4.0142, which are smaller by a factor of 10 than those found in Problem 8.48.

Problem 8.50
The MATLAB program below computes the coefficients among other quantities. Its output for an SNR of 20 dB and β = 0.1 is also given.

% Solution for Problem 8.50
%
z_dB = input('Enter the signal-to-noise ratio in dB ');
z = 10^(z_dB/10);
beta = input('Enter multipath gain ');
R_yy(1,1) = (1+beta^2)*2*z + 1;
R_yy(2,2) = R_yy(1,1); R_yy(3,3) = R_yy(1,1);
R_yy(1,2) = 2*beta*z;
R_yy(1,3) = 0;
R_yy(3,1) = R_yy(1,3);
R_yy(2,1) = R_yy(1,2); R_yy(2,3) = R_yy(1,2); R_yy(3,2) = R_yy(1,2);
disp('R_yy')
disp(R_yy)
B = inv(R_yy);
disp('R_yy^-1')
disp(B)
R_yd = [beta*z z 0]'
C = B*R_yd
Eigen = eig(R_yy)
mu_max = 2/max(Eigen)

>> pr8_50
Enter the signal-to-noise ratio in dB: 20
Enter multipath gain: 0.1
R_yy
   203    20     0
    20   203    20
     0    20   203
R_yy^-1
    0.0050   -0.0005    0.0000
   -0.0005    0.0050   -0.0005
    0.0000   -0.0005    0.0050
R_yd =
    50
   100
     0
C =
    0.1992
    0.4776
   -0.0471
Eigen =
  174.7157
  203.0000
  231.2843
mu_max =
    0.0086

8.2 Computer Exercises

Computer Exercise 8.1 % ce8_1.m: Simulation of BEP for antipodal signaling % clf n_sim = input(’Enter number of bits for simulation => ’); zdB = input(’Enter z = A^2*T/N_0 vector in dB => ’); z = 10.^(zdB/10); Lz = length(z); A = 1; T = 1; N0 = (A^2*T)./z; S = A*T*sign(rand(1,n_sim)-.5);



PE = []; ET = []; for k = 1:Lz N = sqrt(0.5*N0(k)*T)*randn(1,n_sim); Y = S+N; Z = sign(Y); V = 0.5*(1+Z); W = 0.5*(1+sign(S)); E = xor(V,W); ET(k) = sum(E); PE(k) = ET(k)/length(Y); end PEth = 0.5*erfc(sqrt(z)); disp(’ ’) disp(’ SNR, dB Errors P_E est. P_E theory’) disp(’ ________________________________________’) disp(’ ’) disp([zdB’ ET’ PE’ PEth’]) disp(’ ’) semilogy(zdB, PE, ’o’), xlabel(’z in dB’), ylabel(’P_E’), grid hold on semilogy(zdB, PEth) title([’BER simulated with ’, num2str(n_sim), ’ bits per SNR value ’]) legend([’Simulated’], [’Theoretical’]) A typical run is given below: >> ce8_1 Enter number of bits for simulation => 20000 Enter z = A^2*T/N_0 vector in dB => [2 4 6 8] SNR, dB Errors P_E est. P_E theory ______________________________________ 2.0000 704.0000 0.0352 0.0375 4.0000 238.0000 0.0119 0.0125 6.0000 44.0000 0.0022 0.0024 8.0000 4.0000 0.0002 0.0002 A plot also appears showing the estimated and theoretical probabilities of error plotted versus signal-to-noise ratio.



Computer Exercise 8.2 % ce8_2: Compute degradation in SNR due to bit timing error % clf A = char(’-’,’-.’,’:’,’—’); for m = 1:4 PE0 = 10^(-m); delT = []; deg = []; Eb1 = []; k = 0; for Y = 0:.025:.45; k = k+1; delT(k) = Y; Eb1(k) = fzero(@PE1, 2, [], delT(k), PE0); end Eb = fzero(@PE, 2, [], PE0); deg = Eb1 - Eb; plot(delT, deg, A(m,:)), ... if m == 1 hold on axis([0 .45 0 20]) xlabel(’Fractional timing error, \DeltaT/T’), ylabel(’Degradation in dB’),... title([’Degradation in SNR for bit synchronization timing error’]), grid end end legend([’P_E = 10^-^1’], [’ = 10^-^2’], [’ = 10^-^3’], [’ = 10^-^4’], 2) The output plot is shown in Fig. 8.7.



Figure 8.7: Degradation in SNR (dB) versus fractional timing error ∆T/T for bit synchronization timing error, for P_E = 10^−1, 10^−2, 10^−3, and 10^−4.



Computer Exercise 8.3 %

ce8_3.m; Uses subprogram pb_phase_pdf % bep_ph_error.m; calls user-defined subprogram pb_phase_pdf.m % Effect of Gaussian phase error in BPSK; fixed variance. % clf sigma_psi2 = input(’Enter vector of variances of phase error in radians^2 => ’); L_sig = length(sigma_psi2); Eb_N0_dB_min = input(’Enter minimum Eb/N0 in dB desired => ’); Eb_N0_dB_max = input(’Enter maximum Eb/N0 in dB desired => ’); A = char(’-.’,’:’,’—’,’-..’); a_mod = 0; for m = 1:L_sig Eb_N0_dB = []; Eb_N0 = []; PE = []; k = 1; for snr = Eb_N0_dB_min:.5:Eb_N0_dB_max Eb_N0_dB(k) = snr; Eb_N0(k) = 10.^(Eb_N0_dB(k)/10); sigma_psi = sqrt(sigma_psi2(m)); PE(k) = quadl(’pb_phase_pdf’,-pi,pi,[],[],Eb_N0(k),sigma_psi,a_mod); k = k+1; end semilogy(Eb_N0_dB, PE,A(m,:)),xlabel(’E_b/N_0, dB’),ylabel(’P_E’),... axis([Eb_N0_dB_min Eb_N0_dB_max 10^(-7) 1]),... if m == 1 hold on grid on end end P_ideal = .5*erfc(sqrt(Eb_N0)); semilogy(Eb_N0_dB, P_ideal),... legend([’BPSK; \sigma_\theta_e^2 = ’,num2str(sigma_psi2(1))],[’BPSK; \sigma_\theta_e^2 = ’,num2str(sigma_psi2(2))],... [’BPSK; \sigma_\theta_e^2 = ’,num2str(sigma_psi2(3))],[’BPSK; \sigma_\theta_e^2 = 0’],3) %

pb_phase_pdf.m; called by bep_ph_error.m and bep_ph_error2.m




Figure 8.8: % function XX = pb_phase_pdf(psi,Eb_N0,sigma_psi,a) arg = Eb_N0*(1-a^2)*(cos(psi)-a/sqrt(1-a^2)*sin(psi)).^2; T1 = .5*erfc(sqrt(arg)); T2 = exp(-psi.^2/(2*sigma_psi^2))/sqrt(2*pi*sigma_psi^2); XX = T1.*T2; A plot from the program is given in Fig. 8.8. Computer Exercise 8.4 % ce8_4: For a given data rate and error probability, find the required % bandwidth and Eb/N0 in dB; baseband, BPSK/DPSK, coherent FSK % and noncoherent FSK modulation may be specified. % I_mod = input(’Enter desired modulation: 0 = baseband; 1 = BPSK; 2 = DPSK; 3 = CFSK; 4 = NCFSK: ’);



CHAPTER 8. PRINCIPLES OF DATA TRANSMISSION IN NOISE I_BW_R = input(’Enter 1 if bandwidth specified; 2 if data rate specified: ’); if I_BW_R == 1 BkHz = input(’Enter vector of desired bandwidths in kHz => ’); elseif I_BW_R == 2 Rkbps = input(’Enter vector of desired data rates in kbps => ’); end PE = input(’Enter desired BEP => ’); N0 = 1; if I_BW_R == 1 if I_mod == 0 Rkpbs = BkHz; % Rate for baseband elseif I_mod == 1 | I_mod == 2 Rkbps = BkHz/2; % Rate in kbps for BPSK or DPSK elseif I_mod == 3 Rkbps = BkHz/3; % Rate in kbps for CFSK elseif I_mod == 4 Rkbps = BkHz/4; % Rate in kbps for NFSK end elseif I_BW_R == 2 if I_mod == 0 BkHz = Rkbps; % Transmission BW for baseband in kHz elseif I_mod == 1 | I_mod == 2 BkHz = 2*Rkbps; % Transmission BW for BPSK or DPSK in kHz elseif I_mod == 3 BkHz = 3*Rkbps; % Transmission BW for CFSK in kHz elseif I_mod == 4 BkHz = 4*Rkbps; % Transmission BW for NFSK in kHz end end R = Rkbps*1000; B = BkHz*1000; % PE = 0.5*erfc(sqrt(z)) if I_mod == 0 | I_mod == 1 sqrt_z = erfinv(1-2*PE); z = sqrt_z^2; elseif I_mod == 2 z = -log(2*PE); elseif I_mod == 3 sqrt_z_over_2 = erfinv(1-2*PE); z = 2*sqrt_z_over_2^2;



elseif I_mod == 4 z = -2*log(2*PE); end Eb_N0_dB = 10*log10(z);

% This is the required Eb/N0 in dB

if I_mod == 0 A = sqrt(R*N0*z);

% z = A^2/(R*N0) for baseband

elseif I_mod == 1 | I_mod == 2 | I_mod == 3 | I_mod == 4 A = sqrt(2*R*N0*z);

% z = A^2/2*R*N0 for BPSK, DPSK, CFSK, NFSK

end disp(’ ’) disp(’Desired P_E and required E_b/N_0 in dB: ’) format long disp([PE Eb_N0_dB]) format short disp(’ ’) disp(’ R, kbps BW, Hz A, volts’) disp(’ ___________________________’) disp(’ ’) disp([Rkbps’ BkHz’ A’]) disp(’ ’)



A typical run is given below: >> ce8_4 Enter desired modulation: 0 = baseband; 1 = BPSK; 2 = DPSK; 3 = CFSK; 4 = NCFSK: 2 Enter 1 if bandwidth specified; 2 if data rate specified: 2 Enter vector of desired data rates in kbps => [5 10 50 100] Enter desired BEP => 1e-6 Desired P_E and required E_b/N_0 in dB: 0.000001000000000 11.180120598361940 R, kbps BW, Hz A, volts ___________________________ 1.0e+003 * 0.0050 0.0100 0.3622 0.0100 0.0200 0.5123 0.0500 0.1000 1.1455 0.1000 0.2000 1.6200 Computer Exercise 8.5 % ce8_5: program to plot Figure 8-28 % clf z0dB = 0:1:30; z0 = 10.^(z0dB/10); for delta = -0.9:0.3:0.6; if delta > -eps & delta < eps delta = 0; end for taum_over_T = 0:1:1 T1 = 0.5*qfn(sqrt(2*z0)*(1+delta)); T2 = 0.5*qfn(sqrt(2*z0)*(1+delta)-2*delta*taum_over_T); PE = T1 + T2; if taum_over_T == 0 semilogy(z0dB, PE, ’-’) elseif taum_over_T == 1 semilogy(z0dB, PE, ’—’) end if delta <= -.3 text(z0dB(10), PE(10), [’\delta = ’, num2str(delta)]) elseif delta > -0.3 & delta <= 0.3



Bit error probability for the two-ray multipath channel: P_E versus z_0 in dB for τ_m/T = 0 and 1 and δ = −0.9, −0.6, −0.3, 0, 0.3, 0.6 (see Figure 8.9 below).
Figure 8.9: text(z0dB(8), PE(8), [’\delta = ’, num2str(delta)]) elseif delta > 0.3 text(z0dB(5), PE(5), [’\delta = ’, num2str(delta)]) end if delta == -0.9 hold on ylabel(’P_E’), xlabel(’z_0 in dB’) axis([0 30 1E-5 1]) grid end end end legend([’\tau_m/T = 0’],[’\tau_m/T = 1’]) The output plot is shown in Fig. 8.9.




Computer Exercise 8.6 Use Equations (8.74) (with m = 0), (8.88), (8.112), (8.120), and (8.196) - (8.199). Solve them for the SNR in terms of the error probability. Express the SNR in dB and subtract the nonfading result from the fading result. The MATLAB program below does this for the modulation cases of BPSK, coherent FSK (CFSK), DPSK, and noncoherent (NCFSK). % ce7_7.m: Program to evaluate degradation due to flat % Rayleigh fading for various modulation schemes % mod_type = input(’Enter type of modulation: 1 = BPSK; 2 = CFSK; 3 = DPSK; 4 = NCFSK: ’); PE = input(’Enter vector of desired probabilities of error: ’); disp(’ ’) if mod_type == 1 disp(’ BPSK’) Z_bar = (0.25*(1 - 2*PE).^2)./(PE - PE.^2); z = (erfinv(1 - 2*PE)).^2; % MATLAB has an inv. error function elseif mod_type == 2 disp(’ CFSK’) Z_bar = (0.5*(1 - 2*PE).^2)./(PE - PE.^2); z = 2*(erfinv(1 - 2*PE)).^2; elseif mod_type == 3 disp(’ DPSK’) Z_bar = 1./(2*PE) -1; z = -log(2*PE); elseif mod_type == 4 disp(’ NCFSK’) Z_bar = 1./PE -2; z = -2*log(2*PE); end Z_bar_dB = 10*log10(Z_bar); z_dB = 10*log10(z); deg_dB = Z_bar_dB - z_dB; disp(’ ’) disp(’ P_E Degradation, dB’) disp(’_________________________’) disp(’ ’) disp([PE’ deg_dB’]) disp(’ ’)



A typical run is given below: >> ce8_6 Enter type of modulation: 1 = BPSK; 2 = CFSK; 3 = DPSK; 4 = NCFSK: 3 Enter vector of desired probabilities of error: [1e-2 1e-3 1e-4] DPSK P_E Degradation, dB _________________________ 0.0100 10.9779 0.0010 19.0469 0.0001 27.6859 Computer Exercise 8.7 The solution to this exercise consists of two MATLAB programs, one for the zero-forcing case and one for the MMSE case. % ce8_7a.m: Plots unequalized and equalized sample values for ZF equalizer % clf pc = input(’Enter sample values for channel pulse response (odd #): ’); L_pc = length(pc); disp(’Maximum number of zeros each side of decision sample: ’); disp((L_pc-1)/4) N = input(’Enter number of zeros each side of decision sample desired: ’); mid_samp = fix(L_pc/2)+1; Pc = []; for m = 1:2*N+1 row = []; for n = 2-m:2*N+2-m row = [row pc(mid_samp-1+n)]; end Pc(:,m) = row’; end disp(’ ’) disp(’Pc:’) disp(’ ’) disp(Pc) Pc_inv = inv(Pc); disp(’ ’) disp(’Inverse of Pc:’)



CHAPTER 8. PRINCIPLES OF DATA TRANSMISSION IN NOISE disp(’ ’) disp(Pc_inv) coef = Pc_inv(:,N+1); disp(’ ’) disp(’Equalizer coefficients:’) disp(’ ’) disp(coef) disp(’ ’) steps = (L_pc-length(coef))+1; p_eq = []; for m = mid_samp-floor(steps/2):mid_samp+floor(steps/2) p_eq(m) = sum(coef’.*fliplr(pc(m-N:m+N))); end p_eq_p = p_eq(N+1:mid_samp+floor(steps/2)); disp(’ ’) disp(’Equalized pulse train’) disp(’ ’) disp(p_eq_p) disp(’ ’) t=-5:.01:5; y=zeros(size(t)); subplot(2,1,1),stem(pc),ylabel(’Ch. output pulse’),... axis([1 L_pc -.5 1.5]),... title([’Ch. pulse samp. = [’,num2str(pc),’]’]) subplot(2,1,2),stem(p_eq_p),xlabel(’n’),ylabel(’Equal. output pulse’),... axis([1 L_pc -.5 1.5]),... title([’Output equalized with ’,num2str(N),’ zero samples either side’])

A typical run is given below: >> ce8_7a Enter sample values for channel pulse response (odd #): [.01 -.05 .2 -.3 1 -.2 .15 -.25 .07] Maximum number of zeros each side of decision sample: 2 Enter number of zeros each side of decision sample desired: 2 mid_samp = 5 Pc: 1.0000 -0.3000 0.2000 -0.0500 0.0100 -0.2000 1.0000 -0.3000 0.2000 -0.0500


8.2. COMPUTER EXERCISES 0.1500 -0.2000 -0.2500 0.1500 0.0700 -0.2500 Inverse of Pc: 1.0688 0.3050 0.1482 1.1107 -0.0631 0.1309 0.2381 0.0073 0.0193 0.2381 Equalizer coefficients: -0.1338 0.2824 1.1285 0.1309 -0.0631 Equalized pulse train -0.0000 -0.0000

53

1.0000 -0.2000 0.1500

-0.3000 1.0000 -0.2000

0.2000 -0.3000 1.0000

-0.1338 0.2824 1.1285 0.1309 -0.0631

-0.0441 -0.1388 0.2824 1.1107 0.1482

0.0181 -0.0441 -0.1338 0.3050 1.0688

1.0000

0.0000

0.0000

The MATLAB program for the MMSE case is given below: % ce8.7b % z_dB = input(’Enter the signal-to-noise ratio in dB: ’); z = 10^(z_dB/10); b = input(’Enter multipath gain: ’); R_yy(1,1)=(1+b^2)*z+pi/2; R_yy(2,2)=R_yy(1,1); R_yy(3,3)=R_yy(1,1); R_yy(1,2)=b*z+(pi/2)*exp(-2*pi); R_yy(1,3)=(pi/2)*exp(-4*pi); R_yy(3,1)=R_yy(1,3); R_yy(2,1)=R_yy(1,2); R_yy(2,3)=R_yy(1,2); R_yy(3,2)=R_yy(1,2); disp(’ ’) disp(’R_yy:’) disp(’ ’) disp(R_yy) B = inv(R_yy); disp(’ ’) disp(’R_yy^-1:’)



CHAPTER 8. PRINCIPLES OF DATA TRANSMISSION IN NOISE disp(’ ’) disp(B) R_yd = [0 z b*z]’; disp(’ ’) disp(’R_yd:’) disp(’ ’) disp(R_yd) A = B*R_yd; disp(’ ’) disp(’Optimum coefficients:’) disp(A) disp(’ ’) A typical run is given below: >> ce7_8b Enter the signal-to-noise ratio in dB: 7 Enter multipath gain: 0.6 R_yy: 8.3869 3.0101 0.0000 3.0101 8.3869 3.0101 0.0000 3.0101 8.3869 R_yy^-1: 0.1399 -0.0576 0.0207 -0.0576 0.1606 -0.0576 0.0207 -0.0576 0.1399 R_yd: 0 5.0119 3.0071 Optimum coefficients: -0.2267 0.6316 0.1319


Chapter 9

Advanced Data Communications Topics

9.1 Problem Solutions

Problem 9.1
Use the relationship Rb = (log2 M) Rs bps, where Rb is the data rate, Rs is the symbol rate, and M is the number of possible signals per signaling interval. In this case, Rs = 4,000 symbols per second. For M = 4, Rb = 2 × 4,000 = 8,000 bps; for M = 8, Rb = 12,000 bps; for M = 16, Rb = 16,000 bps; for M = 32, Rb = 20,000 bps; and for M = 64, Rb = 24,000 bps.

Problem 9.2
(a) 5,000 symbols per second.
(b) Recall that

y(t) = A[d1(t) cos(ωc t) − d2(t) sin(ωc t)] = √2 A cos[ωc t + θi(t)],  θi(t) = tan^(−1)[d2(t)/d1(t)]

Alternating every other bit of the given data sequence, 101110 000111 010011, between d1(t) and d2(t) gives d1(t) = 1, 1, 1, −1, −1, 1, −1, −1, 1 and d2(t) = −1, 1, −1, −1, 1, 1, 1, −1, 1, which results in the table below for the phases:

d2(t):   −1    1    −1    −1    1     1     1    −1    1
d1(t):    1    1     1    −1   −1     1    −1    −1    1
θi(t):  7π/4  π/4  7π/4  5π/4  3π/4  π/4  3π/4  5π/4  π/4

For QPSK the symbol switching points occur each Ts = 2Tb seconds.
(c) Now the switching instants are each Tb seconds. Start with d1(t) = 1. Then d2(t) is staggered, or offset, by



Tb seconds. So the first phase shift is for d1(t) = 1 and d2(t) = −1, or θ1(t) = 7π/4. After Ts = 2Tb seconds d1(t) takes on the second 1-value, but d2(t) is still −1, so θ2(t) = 7π/4. At 3Tb seconds, d2(t) changes to 1 while d1(t) = 1, so θ3(t) = π/4. At 4Tb seconds, d1(t) = 1, so θ4(t) = π/4, etc. The resulting phase sequence (one value for each Tb interval) is

7π/4, 7π/4, π/4, π/4, 7π/4, 5π/4, 5π/4, 5π/4, 3π/4, π/4, π/4, 3π/4, 3π/4, 3π/4, 5π/4, 7π/4, π/4

Problem 9.3
For QPSK,

PE,symbol = 2Q( sqrt(Es/N0) ) = 10^−5

Trial and error using the asymptotic approximation for the Q-function gives [Es/N0]req'd ≈ 12.9 dB = 19.5. If the quadrature-carrier amplitudes are A, then the amplitude of the envelope-phase-modulated form of the carrier is √2 A, and Es/N0 = (√2 A)² T/(2N0) = A²/(N0 R). Hence,

Areq'd = sqrt(N0 R [Es/N0]req'd) = sqrt((10^−11)(19.5) R) = 1.4 × 10^−5 √R

The answers are as follows: (a) 9.9 × 10^−4 V; (b) 0.0014 V; (c) 0.0031 V; (d) 0.0044 V; (e) 0.0099 V; (f) 0.014 V.
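The six amplitudes follow directly from Areq'd = 1.4 × 10^−5 √R. A small MATLAB check (the data rates listed are an assumption inferred from the answers above; the problem statement gives the actual values):

% Required carrier amplitude versus data rate for Problem 9.3
EsN0_req = 19.5;                      % about 12.9 dB, from 2*Q(sqrt(Es/N0)) = 1e-5
N0 = 1e-11;
R  = [5e3 1e4 5e4 1e5 5e5 1e6];       % assumed data rates for parts (a)-(f)
A_req = sqrt(N0*EsN0_req*R)           % about 9.9e-4, 1.4e-3, 3.1e-3, 4.4e-3, 9.9e-3, 1.4e-2 V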

Problem 9.4
Take the expectation of the product after noting that the average values are 0 because E[n(t)] = 0:

E[N1 N2] = E[ ∫_0^{Ts} n(t) cos(ωc t) dt ∫_0^{Ts} n(λ) sin(ωc λ) dλ ]
         = ∫_0^{Ts} ∫_0^{Ts} E[n(t) n(λ)] cos(ωc t) sin(ωc λ) dt dλ
         = ∫_0^{Ts} ∫_0^{Ts} (N0/2) δ(t − λ) cos(ωc t) sin(ωc λ) dt dλ
         = (N0/2) ∫_0^{Ts} cos(ωc t) sin(ωc t) dt
         = (N0/4) ∫_0^{Ts} sin(2ωc t) dt = 0



Problem 9.5 a. The integrator outputs, without noise, are (the ± signs are determined by the data polarities) V10 = = = = V20 = = = =

Z Ts

xc (t) cos (2πfc t) dt ¶ µ ¶¸ µ Z Ts ∙ β β ∓ sin 2πfc t − cos (2πfc t) dt A ± cos 2πfc t + 2 2 0 µ ¶ µ ¶ ¸ Z ∙ β A Ts β ± cos ∓ sin − + dbl freq terms dt 2 0 2 2 ∙ µ ¶ µ ¶¸ β β ATs ± cos ± sin 2 2 2 Z Ts xc (t) sin (2πfc t) dt 0 ¶ µ ¶¸ µ Z Ts ∙ β β ∓ sin 2πfc t − sin (2πfc t) dt A ± cos 2πfc t + 2 2 0 ¶ µ ¶ ¸ µ Z ∙ β β A Ts ∓ cos − + dbl freq terms dt ± sin − 2 0 2 2 ∙ µ ¶ µ ¶¸ ATs β β ∓ sin ∓ cos 2 2 2 0

b. This follows from the outline given in the problem statement. c. Note that Es = 2Eb and that the expression given in the problem statement reduces to (9.12) for β = 0. A plot is given in Figure 9.1. The degradation at an error probability of 10−6 for 10 degrees phase imbalance is about 1.1 dB.

Problem 9.6 a. Both are equivalent in terms of symbol error probabilities. b. QPSK is 3 dB worse in terms of symbol error probability for equal transmission bandwidths, but it handles twice as many bits per second. c. Choose QPSK over BPSK in terms of performance; however, other factors might favor BPSK, such as simpler implementation.



Figure 9.1: Quadrature-channel error probability P_E versus Eb/N0 in dB for phase imbalance β = 0, 2.5, 5, 7.5, and 10 degrees.



Problem 9.7 For the data stream 101011 010010 100110 110011, the quadrature data streams are d1 (t) = 1, 1, 1, -1, -1, 1, 1, -1, 1, 1, -1, 1 d2 (t) = -1, -1, 1, 1, -1, -1, -1, 1, -1, 1, -1, 1 Each 1 above means a positive rectangular pulse Ts seconds in duration and each -1 above means a negative rectangular pulse Ts seconds in duration. For QPSK, these pulse sequences are aligned and for OQPSK, the one corresponding to d2 (t) is delayed by Ts /2 = Tb seconds with respect to d1 (t). Type I MSK corresponds to the OQPSK waveforms d1 (t) and d2 (t) multiplied, respectively, by cosine and sine waveforms with periods of 2Ts and type II MSK corresponds to d1 (t) and d2 (t) multiplied, respectively, by absolute-value cosine and sine waveforms with periods of Ts (one half cosine or sine per Ts pulse). Problem 9.8 For QPSK, the excess phase corresponds to a stepwise function that may change values each Ts seconds. The phase deviation is given by θi (t) = tan−1

d2 (t) d1 (t)

¸

so the sequence of phases for QPSK computed from d1 (t) and d2 (t) given in Problem 9.7 is 7π/4, 7π/4, π/4, 3π/4, 5π/4, 7π/4, 7π/4, 3π/4, 7π/4, π/4, 5π/4, π/4 radians. For OQPSK, the phase can change each Ts /2 = Tb seconds because d2 (t) is delayed by Ts /2 seconds with respect to d1 (t). The maximum phase change is ±π/2 radians. For MSK, the excess phase trajectories are straight lines of slopes ±π/2Tb radians/second. Problem 9.9 Write the sinc-functions as sin(x) /x functions. Use appropriate trigonometric identities to reduce the product of the sinc-functions to the given form. For positive frequencies SMSK (f ) = |G (f )|2 SBP SK (f ) A2 Tb sinc2 [(f − fc ) Tb − 0.25] sinc2 [(f − fc ) Tb + 0.25] = 2 A2 Tb sin2 π [(f − fc ) Tb − 0.25] sin2 π [(f − fc ) Tb + 0.25] = 2 {π [(f − fc ) Tb − 0.25]}2 {π [(f − fc ) Tb + 0.25]}2 A2 Tb sin2 π [(f − fc ) Tb − 0.25] sin2 π [(f − fc ) Tb + 0.25] = h i2 2π 4 (f − fc )2 Tb2 − 1/16



Use sin π [(f − fc ) Tb ± 0.25] = √12 sin π (f − fc ) Tb ± √12 cos π (f − fc ) Tb to get

SMSK (f ) =

=

=

A2 Tb [sin π (f − fc ) Tb − cos π (f − fc ) Tb ]2 [sin π (f − fc ) Tb + cos π (f − fc ) Tb ]2 h i2 8π 4 (f − fc )2 Tb2 − 1/16 £ ¤2 A2 Tb sin2 π (f − fc ) Tb − cos2 π (f − fc ) Tb h i2 8π 4 (f − fc )2 Tb2 − 1/16 A2 Tb cos2 2π (f − fc ) Tb 32A2 Tb cos2 2π (f − fc ) Tb = i h h i2 2 8π 4 π4 (f − fc )2 Tb2 − 1/16 1 − 16 (f − fc )2 Tb2

Problem 9.10 If d (t) is a sequence of 1s or 0s, the instantaneous frequency is 1/4Tb Hz above the carrier (with the definition of θi (t) given by (9.20) and (9.21)). If it is a sequence of alternating 1s and 0s, the instantaneous frequency is 1/4Tb Hz below the carrier. (a) The instantaneous frequency is 10, 000, 000 − 50, 000/4 = 9, 987, 500 Hz. (b) The instantaneous frequency is 10, 000, 000 + 50, 000/4 = 10, 012, 500 Hz.



Problem 9.11 The Fourier transform of h (t) is µ ¶ Z ∞ r 2π 2 B 2 2 2π exp − t exp (−j2πf t) dt H (f ) = B ln 2 ln 2 −∞ r µ ∙ ¶¸ Z 2π ∞ 2π 2 B 2 2 jf ln 2 = B t − exp − t dt; complete square in exponent ln 2 −∞ ln 2 πB 2 " à r ¶ ¶ !# µ µ Z 2π ∞ 2π 2 B 2 2 jf ln 2 jf ln 2 2 jf ln 2 2 = B exp − t− + dt t − ln 2 −∞ ln 2 πB 2 2πB 2 2πB 2 " r ∙ µ ¸Z ∞ ¶ # ln 2 2 jf ln 2 2 2π 2π 2 B 2 jf ln 2 exp − 2 f t− exp − dt; u = t − = B 2 ln 2 2B ln 2 2πB 2πB 2 −∞ " # r µ ¶ Z ∞ ∙ ¸ 2π ln 2 f 2 2π 2 B 2 2 2π 2 B 2 1 exp − u du; = B exp − = ln 2 2 B ln 2 2σ 2 ln 2 −∞ " # r µ ¶2 √ ∙ ¸ Z ∞ 2π 1 ln 2 f u2 √ exp − 2πσ 2 exp − 2 du; integral is a Gauss pdf = B ln 2 2 B 2σ 2πσ2 −∞ " r µ ¶2 # r 2π ln 2 f ln 2 exp − π 2 2 ; substitute for 2σ 2 and simplify = B ln 2 2 B 2π B " # µ ¶ ln 2 f 2 = exp − 2 B

Problem 9.12 √ a. The signal points lie on a circle of radius Es centered at the origin equally spaced at angular intervals of 22.5 degrees (360/16). The decision regions are pie wedges centered over each signal point. b. For M = 16, the upper bound (9.50) well approximates the symbol error probability. Thus "r # ³π´ 2Es sin PE, 16-PSK ' 2Q N0 16 ⎤ ⎡s ³π´ 2 log (16) E b 2 ⎦ sin = 2Q ⎣ N0 16


8

CHAPTER 9. ADVANCED DATA COMMUNICATIONS TOPICS

0

10

Symbol error probability Bit error probability -1

10

-2

PE, Pb; 16-PSK

10

-3

10

-4

10

-5

10

-6

10

0

2

4

6

8

10 12 Eb/N0, dB

14

16

18

20

Figure 9.2: c. The bit error probability, for Gray bit encoding, is well approximated by Pb, 16-PSK = =

PE, 16-PSK log2 16 "r # ³π´ 8Eb 1 Q sin 2 N0 16

A plot comparing the two error probabilities is given in Figure 9.2. Problem 9.13 The bounds on symbol error probability are given by P1 ≤ PE, symbol ≤ 2P1 where P1 = Q

"r

# 2Es sin (π/M ) N0


9.1. PROBLEM SOLUTIONS

9

For moderate signal-to-noise ratios and M ≥ 8, the actual error probability is very close to the upper bound. Assuming Gray encoding, the bit error probability is given in terms of symbol error probability as PE, symbol PE, bit ' log2 (M ) and the energy per bit-to-noise spectral density ratio is Eb Es 1 = N0 log2 (M ) N0 Thus, we solve 2 Q log2 (M )

"r

# Eb 2 log2 (M ) sin (π/M ) = 10−4 N0

or

Q

⎧ 1.5 × 10−4 , M = 8 ⎪ ⎪ ⎨ Eb 10−5 2 × 10−4 , M = 16 × log2 (M ) = 2 log2 (M ) sin (π/M ) = 2.5 × 10−4 , M = 32 ⎪ N0 2 ⎪ ⎩ 3 × 10−4 , M = 64

"r

#

The arguments of the Q-function to give these various probabilities are found, approximately, by trial and error (either using MATLAB and a program for the Q-function or a calculator and the asymptotic expansion of the Q-function): ⎧ 3.62, M = 8 ⎪ ⎪ ⎨ 3.54, M = 16 Q-function argument = 3.48, M = 32 ⎪ ⎪ ⎩ 3.43, M = 64

Thus, for M = 8, we have r

Eb 2 log2 (8) sin (π/8) = 3.62 N0 r Eb 6 × 0.3827 = 4.17 N0 Eb N0

" µ ¶ # 3.62 2 1 = 10 log10 = 11.74 dB 6 0.3827


10

CHAPTER 9. ADVANCED DATA COMMUNICATIONS TOPICS

For M = 16, we have r

Eb 2 log2 (16) sin (π/16) = 3.54 N r 0 Eb 8 × 0.1951 = 3.54 N0 Eb N0

" µ ¶ # 3.54 2 1 = 10 log10 = 16.14 dB 8 0.1951

For M = 32, we have r

Eb 2 log2 (32) sin (π/32) = 3.48 N0 r Eb 10 × 0.098 = 3.48 N0 Eb N0

= 10 log10

"

1 10

µ

3.48 0.098

¶2 #

= 21.00 dB

"

1 12

µ

3.43 0.049

¶2 #

= 26.11 dB

For M = 64, we have r

Eb 2 log2 (64) sin (π/64) = 3.43 N r 0 Eb 12 × 0.049 = 3.43 N0 Eb N0

= 10 log10

Problem 9.14 Let the coordinates of the received data vector, given signal labeled 1111 was sent, be (X, Y ) = (a + N1 , a + N2 ) where N1 and N2 are zero-mean, uncorrelated Gaussian random variables with variances N0 /2. Since they are uncorrelated, they are also independent. Thus, P (C | I) = P (0 ≤ X ≤ 2a) P (0 ≤ Y ≤ 2a) Z 2a −(x−a)2 /N0 Z 2a −(y−a)2 /N0 e e √ √ = dx dy πN0 πN0 0 0


9.1. PROBLEM SOLUTIONS Let

√ √ 2 (x − a) 2 (y − a) and v = u= N0 N0

This results in

Z √2a2 /N0

Z √2a2 /N0 −v2 /2 2 e−u /2 e √ √ P (C | I) = du √ dv √ 2 2π 2π − 2a /N0 − 2a2 /N0 Z √2a2 /N0 −u2 /2 Z √2a2 /N0 −v2 /2 e e √ √ du dv (by symmetry of integrands) = 4 2π 2π 0 0 ⎡ ⎛s ⎞⎤2 2 2a ⎠⎦ = ⎣1 − 2Q ⎝ N0

Similar derivations can be carried out for the type II and type III regions.

11


12

CHAPTER 9. ADVANCED DATA COMMUNICATIONS TOPICS

Problem 9.15 The symbol error probability expression is ¸ ∙ 1 1 1 Ps = 1 − P (C | I) + P (C | II) + P (C | III) 4 2 4 where 2

P (C | I) = [1 − 2Q (A)] , A =

s

2a2 N0

P (C | II) = [1 − 2Q (A)] [1 − Q (A)]

P (C | III) = [1 − Q (A)]2 Thus PE

1 = 1− M =

≈ = =

) ( ³√ ´2 ³√ ´ M − 2 [1 − 2Q (A)]2 + 4 M − 2 [1 − 2Q (A)] [1 − Q (A)]

+4 [1 − Q (A)]2 ) ( ³√ ´2 ³√ ´ 1 M − 2 [1 − 4Q (A) + · · ·] + 4 M − 2 [1 − 3Q (A) + · · ·] 1− M +4 [1 − 2Q (A) + · · ·] ´ ³√ ´ ) ( ³ √ 1 M − 4 M + 4 [1 − 4Q (A)] + 4 M − 2 [1 − 3Q (A)] 1− , neglect Q2 (A) M +4 [1 − 2Q (A)] ∙ µ ¶ ¸ n ³ ´ o √ 1 1 M − 4 M − M Q (A) = 1 − 1 − 4 1 − √ Q (A) 1− M M ⎞ ⎛s ¶ µ 2a2 ⎠ 1 Q⎝ 4 1− √ N0 M

Problem 9.16 The expression for a follows by computing the average symbol energy in terms of a. To accomplish this, the sums m X i=1

m

i=

X m (m + 1) m (m + 1) (2m + 1) and i2 = 2 6 i=1

are convenient. The average symbol energy is (the overbar indicates an average over the signal constellation) ! Ã Z Ts Bi2 2 A2i 2 + Ts = A2i + Bi2 si (t) dt = Es = T 2 2 s 0


9.1. PROBLEM SOLUTIONS

13

³√ ´ M − 1 a with M = 22n , n integer. where Ai , Bi = ±a, ±3a, · · ·, ± value of Ai , Bi equally likely, these averages become √

A2i

Assuming each

M/2 2 X 2 = Bi = √ (2i − 1)2 a2 (the factor of 2 accounts for negative amplitudes) M i=1 ⎤ ⎡ √ √ √ √ M/2 M/2 M/2 M/2 2 X X X 2a2 X 2a = √ (2i − 1)2 = √ ⎣4 i2 − 4 i+ 1⎦ M i=1 M i=1 i=1 i=1 ³√ ´ ³√ ´ ³√ ´ ´ ³√ ´ ⎤ ⎡ ³√ 2 M /2 M /2 + 1 M + 1 M /2 M /2 + 1 √ 2a −4 + M /2⎦ = √ ⎣4 6 2 M ³√ ´ ³√ ´ ´ ⎤ ⎡ ³√ M /2 + 1 M +1 M /2 + 1 −4 + 1⎦ = a2 ⎣4 6 2

´ ³√ ´ ³√ ´ i a2 h³√ M +2 M +1 −3 M +2 +3 3 ´ a2 √ √ a2 ³ M +3 M +2−3 M −6+3 = (M − 1) = 3 3 Therefore 2a2 Es = A2i + Bi2 = (M − 1) 3 and the given relationship follows. =

Problem 9.17 Use

M Es Eb PE, symbol and = log2 (M ) 2 (M − 1) N0 N0 A tight bound for the symbol error probability for coherent M -ary FSK is Ãr ! Es PE, symbol ≤ (M − 1) Q N0 PE, bit =

so M PE, bit ' Q 2

Ãr

Eb log2 (M ) N0

For M = 2 the exact expression is PE, binary = Q

Ãr

Eb N0

!

!


14

CHAPTER 9. ADVANCED DATA COMMUNICATIONS TOPICS

Use the asymptotic expression for the Q-function or iteration on a calculator to get the following results: ´ ³p For M = 2, PE, bit = Q Eb /N0 = 10−3 for Eb /N0 = 9.49 = 9.77 dB; ´ ³p 2Eb /N0 = 10−3 for Eb /N0 = 5.42 = 7.34 dB; For M = 4, PE, bit = 2Q ´ ³p 3Eb /N0 = 10−3 for Eb /N0 = 4.10 = 6.13 dB; For M = 8, PE, bit = 4Q ´ ³p 4Eb /N0 = 10−3 for Eb /N0 = 3.35 = 5.25 dB; For M = 16, PE, bit = 8Q ´ ³p 5Eb /N0 = 10−3 for Eb /N0 = 2.94 = 4.69 dB; For M = 32, PE, bit = 16Q ´ ³p 6Eb /N0 = 10−3 for Eb /N0 = 2.67 = 4.27 dB. For M = 64, PE, bit = 32Q

Problem 9.18 Use the same relations as in Problem 9.17 for relating signal-to-noise ratio per bit and symbol and for relating bit to symbol error probability, except now µ ¶ M−1 X µ M − 1 ¶ (−1)k+1 k Es exp − PE, symbol = k k+1 k + 1 N0 k=1

so

M−1 µ

X M PE, bit = 2 (M − 1) k=1

M −1 k

µ ¶ (−1)k+1 k log2 (M ) Eb exp − k+1 k + 1 N0

Use MATLAB to perform the sum. The following results are obtained: For M = 2, PE, bit = 10−3 for Eb /N0 = 10.9 dB; For M = 4, PE, bit = 10−3 for Eb /N0 = 8.3 dB; For M = 8, PE, bit = 10−3 for Eb /N0 = 7 dB; For M = 16, PE, bit = 10−3 for Eb /N0 = 6.1 dB; For M = 32, PE, bit = 10−3 for Eb /N0 = 5.4 dB; Problem 9.19 a. From most bandwidth efficient to least: 16-PSK and 16-QAM (tied); 16-CFSK. b. From most communication efficient to least: 16-CFSK; 16-QAM; 16-PSK. Problem 9.20 The bandwidth efficiencies are given by ⎧ ⎪ 0.5 log2 (M ) , M -PSK and M -QAM R ⎨ log2 (M) = M+1 , coherent M -FSK B ⎪ ⎩ log2 (M) , noncoherent M -FSK 2M


9.1. PROBLEM SOLUTIONS

15

For a data rate of 10⎧kbps this gives the results below (answers for (a), (b), and (c)): ⎧ ⎨ 2.0 bps/Hz, M = 16 ⎨ 5.0 kHz, M = 16 R PSK: B = 2.5 bps/Hz, M = 32 ; B= 4.0 kHz, M = 32 ⎩ ⎩ 3.0 bps/Hz, M = 64 3.33 kHz, M = 64 16-QAM and⎧16-DPSK are the same as 16-PSK. ⎧ ⎨ 0.333 bps/Hz, M = 8 ⎨ 30.0 kHz, M = 8 R CFSK: B = 0.235 bps/Hz, M = 16 ; B= 42.5 kHz, M = 16 ⎩ ⎩ 0.152 bps/Hz, M = 32 66.0 kHz, M = 32 ⎧ ⎧ ⎨ 0.188 bps/Hz, M = 8 ⎨ 53.2 kHz, M = 8 R 0.125 bps/Hz, M = 16 ; 80.0 kHz, M = 16 = B= NCFSK: B ⎩ ⎩ 0.078 bps/Hz, M = 32 128.2 kHz, M = 32

Problem 9.21 Use Figure 9.16 noting that the abscissa is normalized baseband bandwidth. RF bandwidthis twice as large. The 90% power containment bandwidth is defined by the ordinate = -10 dB. (a) B90, BPSK = 1.6/Tb = 1.6Rb ; (b) B90, QPSK, OQPSK = 0.8/Tb = 0.8Rb ; (c) B90, MSK = 0.8/Tb = 0.8Rb ; (d) The difference between (9.114) for BPSK and (9.118) for QAM as far as bandwidth is concerned is the factor of log2 M for the latter in the argument of the sinc-function. For 16-QAM, this is a factor of 4 so 16-QAM is a factor of 4 better than BPSK as far as bandwidth efficiency is concerned. Problem 9.22 The baseband power spectrum is G (f ) = 2A2 Tb [log2 (M )] sinc2 [log2 (M ) Tb f ] where A2 = Es cos2 θi = Es sin2 θi ¸ ¸ ∙ ∙ M M 1 X 2 2π (i − 1) 1 X 2 2π (i − 1) = = cos sin M M M M i=1

i=1

The 10% power containment bandwidth is defined by (see Problem 9.21) log2 (M ) B90, MPSK = 0.8/Tb = 0.8Rb B90, MPSK

⎧ ⎨ 0.267Rb Hz, M = 8 = 0.8Rb / log2 (M ) = 0.25Rb Hz, M = 16 ⎩ 0.16Rb Hz, M = 8


16

CHAPTER 9. ADVANCED DATA COMMUNICATIONS TOPICS

Problem 9.23 a. The given expression, when expanded using a trig identity, is £ ¤ sP SK (t) = A sin ωc t + cos−1 (m) d (t) + θ £ ¤ £ ¤ = A sin [ω c t + θ] cos cos−1 (m) d (t) + A cos [ωc t + θ] sin cos−1 (m) d (t) £ ¤ £ ¤ = A sin [ω c t + θ] cos cos−1 (m) + Ad (t) cos [ω c t + θ] sin cos−1 (m) ; d (t) = ±1 p = Am sin (ω c t + θ) + Ad (t) 1 − m2 cos (ω c t + θ)

b. This follows by recalling that the average power of a sine or cosine is one-half the amplitude squared. Problem 9.24

a. Find the autocorrelation function and Fourier transform it to obtain the power spectral density. By definition RASK (τ ) = E [sASK (t) sASK (t + τ )] 1 2 A E {[1 + d (t)] [1 + d (t + τ )] cos (ω c t + θ) cos [ω c (t + τ ) + θ]} = 4 1 2 A E {[1 + d (t)] [1 + d (t + τ )]} E {cos (ω c t + θ) cos [ωc (t + τ ) + θ]} = 4 1 2 A E {[1 + d (t)] [1 + d (t + τ )]} cos (ω c τ ) = 8 1 2 A E {1 + d (t) + d (t + τ ) + d (t) d (t + τ )} cos (ωc τ ) = 8 1 2 A {1 + E [d (t) d (t + τ )]} cos (ω c τ ) = 8 1 2 A {1 + Rd (τ )} cos (ω c τ ) = 8 The Fourier transform is SASK (f ) =

1 2 1 A [δ (f − fc ) + δ (f + fc )] + A2 [Sd (f − fc ) + Sd (f + fc )] 16 16

where Sd (f ) = F [Rd (τ )] = T sinc2 (T f ) b. See Problem 9.23 to write this as sP SK (t) = Am sin (ω c t + θ) + Ad (t)

p 1 − m2 cos (ω c t + θ)


9.1. PROBLEM SOLUTIONS

17

Following a procedure similar to that used in (a), we have RASK (τ ) = E [sP SK (t) sP SK (t + τ )] ⎧ ⎫ i h √ 2 cos (ω t + θ) ⎨ ⎬ t + θ) + d (t) 1 − m m sin (ω c c i h = A2 E √ ⎩ × m sin (ωc (t + τ ) + θ) + d (t + τ ) 1 − m2 cos (ω c (t + τ ) + θ) ⎭ ½ 2 ¾ m sin (ωc t + θ)¡sin (ω c (t¢ + τ ) + θ) + cross terms in d (t) 2 = A E +d (t) d (t + τ ) 1 − m2 cos (ω c t + θ) cos (ωc (t + τ ) + θ) (cross term average are 0) (9.1) ¢ 1¡ 1 2 2 m A cos (ω c τ ) + = 1 − m2 A2 E [d (t) d (t + τ )] cos (ωc τ ) 2 2 The power spectral density is

¢ 1 1¡ SP SK (f ) = m2 A2 [δ (f − fc ) + δ (f + fc )] + 1 − m2 A2 [Sd (f − fc ) + Sd (f + fc )] 4 4 where Sd (f ) = T sinc2 (T f ) as in (a).

Problem 9.25 To show: cos

µ

πt 2Tb

Π

µ

t 2Tb

←→

4Tb cos (2πTb f ) π 1 − (4Tb f )2

Use the modulation theorem together with the Fourier transform of a rectangular pulse to get ¶ µ ¶ µ t πt Π ←→ Tb sinc [2Tb (f − 1/4Tb )] + Tb sinc [2Tb (f + 1/4Tb )] cos 2Tb 2Tb Rewrite the RHS as RHS = Tb

sin (2πTb f − π/2) sin (2πTb f + π/2) + 2πTb f − π/2 2πTb f + π/2

¸

Note that sin (2πTb f ± π/2) = ± cos (2πTb f ) Then RHS = =

∙ ¸ 1 1 2Tb + cos (2πTb f ) π 1 − 4Tb f 1 + 4Tb f 4Tb cos (2πTb f ) π 1 − (4Tb f )2


18

CHAPTER 9. ADVANCED DATA COMMUNICATIONS TOPICS L = Rs /BL = 5

L = Rs /BL = 10

3

1.5 PLL Costas Data est.

σ

2

1

σ

2

2

PLL Costas Data est.

1 0

0.5

0

5

10 15 z, dB L = Rs /BL = 100

20

0

0

5

10 z, dB

15

20

0.2 PLL Costas Data est.

σ

2

0.15 0.1 0.05 0

0

5

10 z, dB

15

20

Figure 9.3: Problem 9.26 This would involve an 8th −power device which would produce a frequency component of 8×10 = 80 MHz at its output. This would be tracked by a PLL and then frequency divided to 10 MHz and used as the reference in a coherent demodulator for the 8-PSK signal. Problem 9.27 A plot is given in Figure 9.3. Problem 9.28 The dB difference is ¸ ∙ £ 2 ¤ 5×8 2 = 0.9691 dB 10 log10 σ , SL /σ , AV = 10 log10 32


9.1. PROBLEM SOLUTIONS

19

Problem 9.29 See the table below. As many shifts as will fit on the page are given. Shift 5 is within Hamming distance 1. HD Data 1 0 1 1 0 1 0 1 1 0 0 0 0 1 1 1 0 1 0 1 Shft0 1 0 1 1 1 0 0 0 3 Shft1 1 0 1 1 1 0 0 0 5 Shft2 1 0 1 1 1 0 0 0 5 Shft3 1 0 1 1 1 0 0 0 2 Shft4 1 0 1 1 1 0 0 0 3 Shft5 1 0 1 1 1 0 0 0 1 Shft6 1 0 1 1 1 0 0 0 5 Shft7 1 0 1 1 1 0 0 0 6 Shft8 1 0 1 1 1 0 0 0 6 Shft9 1 0 1 1 1 0 0 0 5 Shft10 1 0 1 1 1 0 0 0 4 Shft11 1 0 1 1 1 0 0 0 2 Shft12 1 0 1 1 1 0 0 0 5 Shft13 1 0 1 1 1 0 0 0 3 Shft14 1 0 1 1 1 0 0 0 4

Problem 9.30 See Example 2.20 for the special case of N = 7 for more details. Problem 9.31 a. N = 2no. of stages − 1 = 26 − 1 = 63. b. Each clock pulse corresponds to one chip, so Tone seq period = 63 × 10−4 = 6.3 milliseconds. c. The autocorrelation function consists of a periodic train of equilateral triangles with bases of 2 milliseconds between −1/64 and 1 unit high spaced by 6.3 milliseconds. d. The spectral lines are spaced by ∆f = 1/period = 1/6.3 × 10−3 = 158.73 Hz. e. The weight of the impulse at f = 0 in the power spectral density is (1/63)2 = 2.52 × 10−4 V2 . The DC level of the m-sequence is −1/63 V. f. The first null in the power spectrum is at 63∆f = 10 kHz.


20

CHAPTER 9. ADVANCED DATA COMMUNICATIONS TOPICS

Problem 9.32 The states are (read from top to bottom, column by column): 1111 0100 1011 0111 0010 0101 0011 1001 1010 0001 1100 1101 1000 0110 1110 The output sequence is the last digit in each 4-element word. The autocorrelation function is a triangular pulse centered at the origin two units wide repeated every 15 units and a horzontal line between the triangles at −1/15 units. Problem 9.33

a. Lengths 11 and 13 are not included.

N =2 Barker code Delay = 0 Delay = 1 N =4 Barker code Delay = 0 Delay = 1 Delay = 2 Delay = 3 N =5 Barker code Delay = 0 Delay = 1 Delay = 2 Delay = 3 Delay = 5 N =7

1 1

1 1

1 1

0 0 1

1 1 1

1 1 1

NA − NU

NA −NU N

2 -1

1 -1/2

0

0 0 1 1

1 1 1 1

1 1 0 1 1

0 0 1 1 1

1 0 1

1 1 0 1 1 1

1 0

1 0 1 1

NA − NU

NA −NU N

4 −1 0 1

1 −0.25 0 0.25

1

1 0 1

1 0

1

NA − NU

NA −NU N

5 0 1 0 1

1 0 0.2 0 0.2


9.1. PROBLEM SOLUTIONS

Barker code Delay = 0 Delay = 1 Delay = 2 Delay = 3 Delay = 5 Delay = 6 Delay = 7

1 1

1 1 1

1 1 1 1

21

0 0 1 1 1

0 0 0 1 1 1

1 1 0 0 1 1 1

0 0 1 0 0 1 1 1

0 1 0 0 1 1

0 1 0 0 1

0 1 0 0

0 1 0

0 1

NA − NU

NA −NU N

7 0 −1 0 −1 0 −1

1 0 −0.143 0 −0.143 0 −0.143

0

b. The m-sequence is 111100010011010. Autocorrelation results for delays up to 9 are given. NA −NU N

m-seq D=0 D=1 D=2 D=3 D=5 D=6 D=7 D=8

1 1

1 1 1

1 1 1 1

1 1 1 1 1

0 0 1 1 1 1

0 0 0 1 1 1 1

0 0 0 0 1 1 1 1

1 1 0 0 0 1 1 1 1

0 0 1 0 0 0 1 1 1

0 0 0 1 0 0 0 1 1

1 1 0 0 1 0 0 0 1

1 1 1 0 0 1 0 0 0

0 0 1 1 0 0 1 0 0

1 1 0 1 1 0 0 1 0

0 0 1 0 1 1 0 0 1

0 1 0 1 1 0 0

0 1 0 1 1 0

0 1 0 1 1

0 1 0 1

0 1 0

0 1

0

Problem 9.34 Note that the expectation of Ng is zero because n (t) has zero mean. Write the expectation of the square of Ng as an iterated integral. The expectation can be taken inside. ¸ ∙ Z Z £ 2¤ E Ng = E 4 n (t) c (t) cos (ω c t + θ) dt n (λ) c (λ) cos (ωc λ + θ) dλ Tb Tb ¸ ∙ Z Z n (t) n (λ) c (t) c (λ) cos (ω c t + θ) cos (ω c λ + θ) dtdλ = E 4 Tb Tb Z Z = 4 E [n (t) n (λ)] E [c (t) c (λ)] E [cos (ωc t + θ) cos (ω c λ + θ)] dtdλ Tb Tb Z Z N0 δ (t − λ) E [c (t) c (λ)] E [cos (ω c t + θ) cos (ω c λ + θ)] dtdλ = 4 Tb Tb 2 Z £ ¤ £ ¤ E c2 (t) E cos2 (ωc t + θ) dt = N0 Tb = 2N0 Tb | {z } 1 (9.2)

1 0 −0.08 0 −0.09 −0.4 −0.11 −0.25


22

CHAPTER 9. ADVANCED DATA COMMUNICATIONS TOPICS

Use has been made of the fact that E [n (t) n (λ)] =

N0 δ (t − λ) 2

to reduce the double integral to a single integral. The integral of the resulting cosine squared is Tb /2. The result for the variance (same as the mean square) is then found to be N0 Tb . Problem 9.35 The random variable to be considered is Z Tb NI = AI c (t) cos (∆ωt + θ − φ) dt 0

where θ − φ is assumed uniform in[0, 2π). Clearly, the expectation of NI is zero due to this random phase. The variance is the same as the mean square, which is given by var (NI ) = E

½Z Tb Z Tb 0

Z Tb Z Tb

0

¾

A2I c (t) c (λ) cos (∆ωt + θ − φ) cos (∆ωλ + θ − φ) dtdλ

A2I E [c (t) c (λ)] cos (∆ωt + θ − φ) cos (∆ωλ + θ − φ) dtdλ 0 0 Z Tb Z Tb 1 1 = A2I Tc Λ [(t − λ) /Tc ] cos [∆ω (t − λ)] dtdλ Tc 2 0 0 Z Tb 2 A Tc Tb 1 = A2I Tc dt = I 2 2 0 =

Problem 9.36
a. The processing gain is

Gp = Tb/Tc = Rc/Rb,  or  Rc = Gp Rb

For Gp = 1000 and a data rate of 10 kbps, Rc = 10^3 × 10^4 = 10 megachips per second.
b. BRF = 2Rc = 20 MHz.
c. Use (9.155) and (9.158) to get the following:



For a JSR of 5 dB = 3.162 and SNR = 10 dB = 10, SNR/[1 + (SNR)(JSR)/Gp] = 10/(1 + 10 × 3.162/1000) = 9.694 and PE = Q(√9.694) = Q(3.113) = 9.25 × 10^−4.
For a JSR of 10 dB = 10 and SNR = 10 dB = 10, SNR/[1 + (SNR)(JSR)/Gp] = 10/(1 + 10 × 10/1000) = 9.091 and PE = Q(√9.091) = Q(3.015) = 1.3 × 10^−3.
For a JSR of 15 dB = 31.62 and SNR = 10 dB = 10, SNR/[1 + (SNR)(JSR)/Gp] = 10/(1 + 10 × 31.62/1000) = 7.598 and PE = Q(√7.598) = Q(2.756) = 2.9 × 10^−3.
For a JSR of 30 dB = 1000 and SNR = 10 dB = 10, SNR/[1 + (SNR)(JSR)/Gp] = 10/(1 + 10 × 1000/1000) = 0.909 and PE = Q(√0.909) = Q(0.954) = 0.17.
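These four cases are quickly reproduced in MATLAB (a sketch; the Q-function is written via erfc so no toolbox is required):

% P_E for DSSS BPSK in jamming, Problem 9.36(c)
Q   = @(x) 0.5*erfc(x/sqrt(2));
Gp  = 1000;  SNR = 10;                      % 30 dB processing gain, 10 dB SNR
JSR = 10.^([5 10 15 30]/10);
arg = SNR./(1 + SNR*JSR/Gp);                % effective SNR with jamming
PE  = Q(sqrt(arg))                          % about 9.3e-4, 1.3e-3, 2.9e-3, 0.17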

Problem 9.37 The limit of the arguement of the Q-function in 9.155 is s r Gp SNR lim = SNR→∞ 1 + (SNR) (JSR) /Gp JSR

To get PE = Q (A) = 10−5 trial and error in MATLAB requires A = 4.264. a. For JSR = 30 dB = 1000, 1.82 × 104 = 42.6 dB;

p p Gp /JSR = Gp /1000 = 4.264 or Gp = 1000 × 4.2642 =

p p Gp /JSR = Gp /316.23 = 4.264 or Gp = 316.23 × b. For JSR = 25 dB = 316.23, 4.2642 = 5.75 × 103 = 37.6 dB; p p Gp /JSR = Gp /100 = 4.264 or Gp = 100 × 4.2642 = c. For JSR = 20 dB = 100, 1.82 × 103 = 32.6 dB Problem 9.38 ³√ ´ SNR In the limit at SNR Eb /N0 → ∞, the argument of the Q-function for PE = Q becomes r √ 3N ≈ 3.08 to give PE = 10−3 SNR = K −1 Thus,

3 × 255 +1 (3.08)2 = 81.64

K =

Round this down to K = 81 users since the number of users must be an integer.


24

CHAPTER 9. ADVANCED DATA COMMUNICATIONS TOPICS

Problem 9.39
With a propagation delay uncertainty of ±1.5 ms the chip uncertainty becomes

C = 2 × 2 × (1.5 × 10^−3 s)(3 × 10^6 chips/s) = 18,000 half chips

Equation (9.161) becomes

Tacq = (1 + 1000 Pfa)[17,999 (2 − Pd)/(2Pd) + 1/Pd] Ti

a. Tacq = 27.197 s; b. Tacq = 18.114 s; c. Tacq = 14.112 s.

Problem 9.40
In this case T = 10 × 5 μs = 5 × 10^−5 s, Tb = 1/Rb = 1/(500 × 10^3) = 2 × 10^−6 s, and M = 2, so (9.164) becomes

5 × 10^−5 = 2N × 2 × 10^−6,  or  N = (5 × 10^−5)/(4 × 10^−6) = 12.5 subcarriers

We round this up to the next power of 2, or N = 16.

Problem 9.41

a. Use (9.168) with d = 1.5 m, λ = c/f = (3 × 10^8 m/s)/(10 × 10^9 Hz) = 0.03 m, and ρa = 0.7:

G0 = 0.7 (π × 1.5/0.03)² = 17,272 = 42.37 dB

b. Apply (9.169):

φ3dB = 0.03/(1.5 √0.7) = 0.0239 rad = 1.3696 degrees

c. Apply (9.170).
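A short MATLAB check of parts (a) and (b), using the same relationships as above:

% Antenna gain and 3-dB beamwidth for Problem 9.41
d = 1.5;  f = 10e9;  c = 3e8;  rho_a = 0.7;
lambda = c/f;                              % 0.03 m
G0_dB  = 10*log10(rho_a*(pi*d/lambda)^2)   % about 42.4 dB
phi3dB = lambda/(d*sqrt(rho_a))            % about 0.0239 rad (1.37 degrees)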



Problem 9.42 a. Only the OBP case will be done. (9.187) becomes

See the text for the bent-pipe case.

Equation

10−3 − pu 1 − 2pu For BPSK, the uplink and downlink probabilities of error are related to the uplink and downlink SNRs by "s µ ¶ # " Ãsµ ¶ !# Eb Eb 1 pu = Q 2 = 1 − erf N0 u 2 N0 u "s µ ¶ # " Ãsµ ¶ !# Eb Eb 1 pd = Q 2 = 1 − erf N0 d 2 N0 d pd =

where these probabilities have been put in terms of the error function because MATLAB includes an inverse error function. When these relations are inverted, these become µ ¶ £ ¤2 Eb = erf −1 (1 − 2pu ) N0 µ ¶u £ ¤2 Eb = erf −1 (1 − 2pd ) N0 d

The MATLAB program for computing them is also given below along with a plot. Part (b) is just a matter of inputting the desired p_overall in the program. See Figures 9.4 and 9.5. % Problem 9.42 p_overall = input(’Enter desired end-to-end probability of error => ’) pu = logspace(log10(p_overall)-3, log10(p_overall)+1e-20); pd = (p_overall - pu)./(1 - 2*pu); EbN0u = (erfinv(1-2*pu)).^2; EbN0d = (erfinv(1-2*pd)).^2; EbN0udB = 10*log10(EbN0u); EbN0ddB = 10*log10(EbN0d); disp(’p_u Eb/N0u dB p_d Eb/N0d dB’) disp([pu; EbN0udB; pd; EbN0ddB]’) plot(EbN0ddB, EbN0udB), xlabel(’(E_b/N_0)_d, dB’), ylabel(’(E_b/N_0)_u, dB’), ... title([’OBP; Overall BEP = ’, num2str(p_overall)]), grid, axis square


26

CHAPTER 9. ADVANCED DATA COMMUNICATIONS TOPICS

Overall BEP = 0.001 11 10.5 10

(Eb/N0)u, dB

9.5 9 8.5 8 7.5 7 6.5 6.5

7

7.5 (Eb/N0)d, dB

Figure 9.4:

8

8.5


9.1. PROBLEM SOLUTIONS

27

Overall BEP = 0.0001 11.5

11

(Eb/N0)u, dB

10.5

10

9.5

9

8.5

8 8.2

8.4

8.6

8.8 9 9.2 (Eb/N0)d, dB

Figure 9.5:

9.4

9.6

9.8


28

CHAPTER 9. ADVANCED DATA COMMUNICATIONS TOPICS

Problem 9.43 a. This is similar to the previous problem except that the error probability expresseion for coherent FSK is !# " Ãr ´ 1 ³p Eb Eb /N0 = PE = Q 1 − erf 2 2N0 When solved for Eb /N0 , we have 10−5 − pu pd = 1 − 2pu µ ¶ £ ¤2 Eb = 2 erf −1 (1 − 2pu ) N0 µ ¶u £ ¤2 Eb = 2 erf −1 (1 − 2pd ) N0 d b. The error probability expresseion for coherent FSK is µ ¶ Eb 1 PE = exp − 2 2N0 Solving for Eb /N0 , we have Eb = −2 ln (2PE ) N0 Thus, 10−5 − pu pd = 1 − 2pu µ ¶ Eb = −2 ln (2pu ) N0 u µ ¶ Eb = −2 ln (2pd ) N0 d c. For DPSK, the probability of error is µ ¶ Eb 1 PE = exp − 2 N0


9.1. PROBLEM SOLUTIONS

29

Therefore, the equations describing the link are 10−5 − pu pd = 1 − 2pu µ ¶ Eb = − ln (2pu ) N0 u µ ¶ Eb = − ln (2pd ) N0 d A MATLAB program for computing the performance curves for both parts (a) and (b) is given below. Typical performance curves are also shown in Figures 9.6 - 9.8. % Problem 9.43 % mod_type = input(’Enter 1 for CFSK; 2 for NCFSK; 3 for DPSK ’) p_overall = input(’Enter desired end-to-end probability of error ’) pu = logspace(log10(p_overall)-6, log10(1.000001*p_overall)); pd = (p_overall - pu)./(1 - 2*pu); if mod_type == 1 Eb_N0_u = 20*log10(sqrt(2)*erfinv(1-2*pu)); Eb_N0_d = 20*log10(sqrt(2)*erfinv(1-2*pd)); elseif mod_type == 2 Eb_N0_u = 10*log10(-2*log(2*pu)); Eb_N0_d = 10*log10(-2*log(2*pd)); elseif mod_type == 3 Eb_N0_u = 10*log10(-log(2*pu)); Eb_N0_d = 10*log10(-log(2*pd)); end plot(Eb_N0_d, Eb_N0_u), xlabel(’(E_b/N_0)_d, dB’), ylabel(’(E_b/N_0)_d, dB’),... grid, axis square if mod_type == 1 title([’OBP link performance for overall P_E = ’, num2str(p_overall), ’; CFSK modulation’]) elseif mod_type == 2 title([’OBP link performance for overall P_E = ’, num2str(p_overall), ’; NCFSK modulation’]) elseif mod_type == 3 title([’OBP link performance for overall P_E = ’, num2str(p_overall), ’; DPSK modulation’]) end


30

CHAPTER 9. ADVANCED DATA COMMUNICATIONS TOPICS

OBP link performance for overall PE = 1e-005; CFSK modulation 17 16.5 16

(Eb/N0)d, dB

15.5 15 14.5 14 13.5 13 12.5 12.5

12.6

12.7

12.8 12.9 (Eb/N0)d, dB

Figure 9.6:

13

13.1

13.2


9.1. PROBLEM SOLUTIONS

31

OBP link performance for overall PE = 1e-005; NCFSK modulation 17 16.5 16

(Eb/N0)d, dB

15.5 15 14.5 14 13.5 13 13

13.5

14

14.5 15 15.5 (Eb/N0)d, dB

Figure 9.7:

16

16.5

17


32

CHAPTER 9. ADVANCED DATA COMMUNICATIONS TOPICS

OBP link performance for overall PE = 1e-005; DPSK modulation 14 13.5 13

(Eb/N0)d, dB

12.5 12 11.5 11 10.5 10 10

10.5

11

11.5 12 12.5 (Eb/N0)d, dB

Figure 9.8:

13

13.5

14


9.1. PROBLEM SOLUTIONS

33

Problem 9.44 a. The angle subtended by a 100 mile diameter spot on the earth’s surface for a satellite at geosynchronous altitude is φ100 = φ3dB =

100 = 0.0045 rad 22, 235

Solving (9.169) for antenna diameter, we get d =

λ c/fc √ = √ φ3dB ρa φ3dB ρa

3 × 108 √ (0.0045) (21 × 109 ) 0.8 = 3.55 m

=

b. Use (9.168): G0

¶ ¶ µ πd 2 πdfc 2 = ρa = ρa λ c ¶2 µ π × 3.55 × 21 × 109 = 0.8 3 × 108 µ

= 4.88 × 105 = 56.88 dB c. Use (9.170) which becomes

i h g (φ) = 4.88 × 105 exp −2.76 (φ/0.0045)2 £ ¤ = 4.88 × 105 exp −1.363 × 105 φ2

Problem 9.45 a. The number of users per reuse pattern remain the same at 120. For 20 dB minimum signal-to-interference ratio and a propagation power law of α = 4, we have ¶ µ Dco − 1 − 7.7815 20 = 10(4) log10 dA


34

CHAPTER 9. ADVANCED DATA COMMUNICATIONS TOPICS or µ

log10

¶ Dco 20 + 7.7815 = 0.6945 −1 = dA 40 Dco = 100.6945 + 1 dA √ = 5.9492 = 3N

or N = d11.798e = 12 (i = j = 2) The efficiency is ηv = =

¥ 120 ¦ 12

6 MHz 5 10 = voice circuits per base station per MHz 6 3

b. For 14 dB minimum signal-to-interference ratio and a propagation power law of α = 4, we have ¶ µ Dco − 1 − 7.7815 14 = 10(4) log10 dA µ ¶ Dco 14 + 7.7815 = 0.5445 −1 = log10 dA 40 Dco = 100.5445 + 1 dA √ = 4.5038 = 3N or N = d6.7614e = 7 (i = 1, j = 2) The efficiency is ηv = =

¥ 120 ¦ 7

6 MHz 17 = 2.83 voice circuits per base station per MHz 6


9.2. COMPUTER EXERCISES

35

Problem 9.46 The number of users per reuse pattern remain the same at 120. For 10 dB minimum signal-to-interference ratio and a propagation power law of α = 3.5, we have µ ¶ Dco − 1 − 7.7815 10 = 10(3.5) log10 dA or log10

µ

¶ Dco 10 + 7.7815 = 0.508 −1 = dA 35 Dco = 100.508 + 1 dA √ = 4.2214 = 3N

or N = d5.94e = 7 (i = 1; j = 2) The efficiency is ηv = =

9.2

¥ 120 ¦ 7

6 MHz 17 = 2.83 voice circuits per base station per MHz 6

Computer Exercises

Computer Exercise 9.1 % ce9_1.m: Bit error probabilities for coh. and noncoh. M-ary FSK % clf z_dB = 3:.1:15; z = 10.^(z_dB/10); for j = 1:5 M=2^j; A = M/(2*(M-1)); if j == 1 Pbc = qfn(sqrt(z));


36

CHAPTER 9. ADVANCED DATA COMMUNICATIONS TOPICS elseif j >= 2 Psc = (M-1)*qfn(sqrt(log2(M)*z)); Pbc = A*Psc; end Psnc = zeros(size(z)); for k = 1:M-1 B = (prod(1:(M-1))/(prod(1:k)*prod(1:(M-1-k))))*(-1)^(k+1)/(k+1); alpha=k*log2(M)/(k+1); Psnc = Psnc + B*exp(-alpha*z); end Pbnc = A*Psnc; subplot(1,2,1), semilogy(z_dB, Pbc, ’LineWidth’, 1.5), ... axis([min(z_dB) max(z_dB) 1e-6 1]), ylabel(’{\itP_b}’), xlabel(’{\itE_b/N}_0’), ... if j == 1 hold on; grid text(z_dB(45)+.2, Pbc(45), [’{\itM} = ’, num2str(M)]),... title(’{\itM}-ary CFSK’) else text(z_dB(45)+.2, Pbc(45), [’= ’, num2str(M)]) end subplot(1,2,2), semilogy(z_dB, Pbnc, ’LineWidth’, 1.5), ... axis([min(z_dB) max(z_dB) 1e-6 1]), ... ylabel(’{\itP_b}’), xlabel(’{\itE_b/N}_0’), ... if j == 1 hold on; grid text(z_dB(45)+.2, Pbnc(45), [’{\itM} = ’, num2str(M)]),... title(’{\itM}-ary NCFSK’) else text(z_dB(45)+.2, Pbnc(45), [’= ’, num2str(M)]) end end The curves obtained are identical to Fig. 9.15

Computer Exercise 9.2 % ce9_2.m: Out-of-band power for M-ary PSK, QPSK (OQPSK), and MSK % clf A = char(’-’,’—’,’-.’,’:’,’—.’,’-..’);


9.2. COMPUTER EXERCISES

37

Tbf = 0:.025:7; LTbf = length(Tbf); A_BPSK = []; A_QPSK = []; A_MSK = []; for k = 1:LTbf A_BPSK(k) = quadl(’G_BPSK’, Tbf(k), 14); A_QPSK(k) = quadl(’G_QPSK’, Tbf(k), 7); A_MSK(k) = quadl(’G_MSK’, Tbf(k), 7); end OBP_BPSK = 10*log10(A_BPSK/A_BPSK(1)); OBP_QPSK = 10*log10(A_QPSK/A_QPSK(1)); OBP_MSK = 10*log10(A_MSK/A_MSK(1)); plot(Tbf, OBP_BPSK, A(1,:)), ylabel(’Relative out-of-band power, dB’), xlabel(’T_bf’),... axis([0 5 -40 0]),grid,... hold on plot(Tbf, OBP_QPSK, A(2,:)) plot(Tbf, OBP_MSK, A(3,:)) legend(’BPSK’, ’QPSK/OQPSK’, ’MSK’) % Function to compute area under BPSK spectrum % function y = G_BPSK(Tbf) y = 2*(sinc(Tbf)).^2; % Function to compute area under QPSK spectrum % function y = G_MSK(Tbf) y = (32/pi^2)*((cos(2*pi*Tbf)).^2)./(1-(4*Tbf+eps).^2).^2; % Function to compute area under QPSK spectrum % function y = G_QPSK(Tbf) y = 4*(sinc(2*Tbf)).^2; A typical plot generated by the program is shown in Fig. 9.9. Computer Exercise 9.3 % ce9_3.m: Computes spectrum of continuous phase FSK %


Figure 9.9: Relative out-of-band power in dB versus Tbf for BPSK, QPSK/OQPSK, and MSK.

clf A = char(’-’,’—’,’-.’,’:’,’—.’,’-..’); Tbf0 = input(’Enter f_0 times T_b ’) delTbf0 = .5:.25:1.5; L_delTbf = length(delTbf0) Tbf = 0:.025:10; for k = 1:L_delTbf delTbf = delTbf0(k); G_FSK = (S_CFSK(Tbf-Tbf0) + S_CFSK(Tbf-Tbf0-delTbf)).^2; area = sum(G_FSK)*0.025; G_FSKN = G_FSK/area; plot(Tbf, G_FSKN, A(k,:)) if k == 1 hold on grid xlabel(’T_bf’), ylabel(’Spectral amplitude’) end end legend([’\DeltafT_b = ’,num2str(delTbf0(1))], [’\DeltafT_b = ’,num2str(delTbf0(2))], [’\DeltafT_b = ’,num2str(delTbf0(3))],[’\DeltafT_b = ’,num2str(delTbf0(4))], [’\DeltafT_b = ’,num2str(delTbf0(5))]) title([’Spectrum for continuous phase FSK; T_bf_0 = ’, num2str(Tbf0)]) % Function to approximate CFSK spectrum % function y = S_CFSK(Tbf) y = sinc(Tbf); A typical plot is shown in Fig. 9.10. Computer Exercise 9.4 % ce9_4.m: Computes bit error probability curves for jamming in DSSS. % For a desired P_E and given JSR and Gp, computes the required Eb/N0 % clf A = char(’-’,’—’,’-.’,’:’,’—.’,’-.x’,’-.o’); Gp_dB = input(’Enter desired processing gain in dB ’); Gp = 10^(Gp_dB/10); z_dB = 0:.5:30; z = 10.^(z_dB/10);


Figure 9.10: Spectrum of continuous-phase FSK for Tbf0 = 4 (spectral amplitude versus Tbf for delta-f Tb = 0.5, 0.75, 1, 1.25, 1.5).


9.2. COMPUTER EXERCISES k = 1; JSR0 = []; for JSR_dB = 5:5:25; JSR0(k) = JSR_dB; JSR = 10^(JSR_dB/10); arg = z./(1+z*JSR/Gp); PE = 0.5*erfc(sqrt(arg)); semilogy(z_dB, PE, A(k+1,:)) if k == 1 hold on axis([0 30 1E-15 1]) grid on xlabel(’E_b/N_0, dB’), ylabel(’P_E’) title([’BEP for DS BPSK in jamming for proc. gain = ’, num2str(Gp_dB),’ dB’]) end k = k+1; end PEG = 0.5*qfn(sqrt(2*z)); semilogy(z_dB, PEG), text(z_dB(30)-5, PEG(30), ’No jamming’) legend([’JSR = ’, num2str(JSR0(1))], [’JSR = ’, num2str(JSR0(2))], [’JSR = ’, num2str(JSR0(3))], [’JSR = ’, num2str(JSR0(4))], [’JSR = ’, num2str(JSR0(5))], 3) disp(’Strike ENTER to continue’); pause PE0 = input(’Enter desired value of P_E ’); JSR_dB_0 = input(’Enter given value of JSR in dB ’); Gp_dB_0 = input(’Enter given value of processing gain in dB ’); JSR0 = 10^(JSR_dB_0/10); Gp0 = 10^(Gp_dB_0/10); PELIM = 0.5*erfc(sqrt(Gp0/JSR0)); disp(’ ’) disp(’Infinite Eb/N0 BEP limit’) disp(PELIM) if PELIM >= PE0 disp(’For given choices of G_p, JSR, & desired P_E, a solution is not possible ’); Gp0_over_JSR0 = (erfinv(1-2*PE0))^2; Gp0_over_JSR0_dB = 10*log10(Gp0_over_JSR0); disp(’The minimum required value of G_p over JSR in dB is:’)

41


42

CHAPTER 9. ADVANCED DATA COMMUNICATIONS TOPICS disp(Gp0_over_JSR0_dB) else arg0 = (erfinv(1-2*PE0))^2; SNR0 = 1/(1/arg0 - JSR0/Gp0); SNR0_dB = 10*log10(SNR0); end disp(’ ’) disp(’To give BEP of:’) disp(PE0) disp(’Requires GP, JSR, and Eb/N0 in dB of:’) disp([Gp_dB JSR_dB SNR0_dB])

A typical run follows, with both a plot given and a specific output generated: >> ce9_4 Enter desired processing gain in dB: 30 Strike ENTER to continue Enter desired value of P_E: 1e-3 Enter given value of JSR in dB: 20 Enter given value of processing gain in dB: 30 Infinite Eb/N0 BEP limit 3.8721e-006 To give BEP of: 1.0000e-003 Requires GP, JSR, and Eb/N0 in dB of: 30.0000 25.0000 9.6085 Computer Exercise 9.5 % ce9_5.m: Given a satellite altitude and desired spot diameter for % illumination, determines circular antenna aperture diameter and % maximum antenna gain. % h = input(’Enter satellite altitude in km: ’); d_spot = input(’Enter desired illuminated spot diameter at subsatellite point in km: ’); f0 = input(’Enter carrier frequency in GHz: ’); rho = input(’Enter desired antenna efficiency: 0 < \rho <= 1: ’); theta = d_spot/h; phi3dB = theta; lambda = 3E8/(f0*1E9);


Figure 9.11: BEP for DS BPSK in jamming for processing gain = 30 dB (PE versus Eb/N0 in dB for JSR = 5, 10, 15, 20, 25 dB and for no jamming).

CHAPTER 9. ADVANCED DATA COMMUNICATIONS TOPICS d = lambda/(phi3dB*sqrt(rho)); % G0 = rho*(pi*d/lambda)^2 G0 = (pi/phi3dB)^2; G0_dB = 10*log10(G0); disp(’ ’) disp(’3-dB beamwidth in degrees: ’) disp(phi3dB*57.3) disp(’Wavelength in meters:’) disp(lambda) disp(’Antenna diameter in meters:’) disp(d) disp(’Antenna gain in dB:’) disp(G0_dB) disp(’ ’)

A typical run follows: >> ce9_5 Enter satellite altitude in km: 200 Enter desired illuminated spot diameter at subsatellite point in km: 20 Enter carrier frequency in GHz: 8 Enter desired antenna efficiency: 0 < rho <= 1: .7 3-dB beamwidth in degrees: 5.7300 Wavelength in meters: 0.0375 Antenna diameter in meters: 0.4482 Antenna gain in dB: 29.9430 Computer Exercise 9.6 % ce9_6.m: Performance curves for bent-pipe relay and mod/demod relay % clf A = char(’—’,’-’,’-.’,’:’); I_mod = input(’Enter type of modulation: 1 = BPSK; 2 = coh. FSK; 3 = DPSK; 4 = noncoh. FSK ’); PE0 = input(’Enter desired value of P_E ’); for I_type = 1:2


9.2. COMPUTER EXERCISES if I_type == 1 if I_mod == 1 EbN0r = (erfinv(1 - 2*PE0))^2; elseif I_mod == 2 EbN0r = 2*(erfinv(1 - 2*PE0))^2; elseif I_mod == 3 EbN0r = -log(2*PE0); elseif I_mod == 4 EbN0r = -2*log(2*PE0); end EbN0r_dB = 10*log10(EbN0r) EbN0u_dB = EbN0r_dB+.000001:.01:35; EbN0u = 10.^(EbN0u_dB/10); den = 1/EbN0r-1./EbN0u; EbN0d = 1./den; EbN0d_dB = 10.*log10(EbN0d); elseif I_type == 2 d1 = log10(0.00000001*PE0); d2 = log10(0.9999999*PE0); pu = logspace(d1, d2, 5000); pd = (PE0 - pu)./(1 - 2*pu); if I_mod == 1 EbN0u = (erfinv(1 - 2*pu)).^2; EbN0d = (erfinv(1 - 2*pd)).^2; elseif I_mod == 2 EbN0u = 2*(erfinv(1 - 2*pu)).^2; EbN0d = 2*(erfinv(1 - 2*pd)).^2; elseif I_mod == 3 EbN0u = -log(2*pu); EbN0d = -log(2*pd); elseif I_mod == 4 EbN0u = -2*log(2*pu); EbN0d = -2*log(2*pd); end EbN0u_dB = 10*log10(EbN0u); EbN0d_dB = 10*log10(EbN0d); end plot(EbN0d_dB, EbN0u_dB,A(I_type,:)), if I_type == 1 axis square, axis([5 30 5 30]), grid on,...

45


46

CHAPTER 9. ADVANCED DATA COMMUNICATIONS TOPICS xlabel(’(E_b/N_0)_d, dB’), ylabel(’(E_b/N_0)_u, dB’) hold on end end legend([’Bent pipe’],[’Demod/remod’],1) if I_mod == 1 title([’Uplink E_b/N_0 versus downlink E_b/N_0, both in dB, to give P_E = ’,num2str(PE0),’; BPSK modulation’]) elseif I_mod == 2 title([’Uplink E_b/N_0 versus downlink E_b/N_0, both in dB, to give P_E = ’,num2str(PE0),’; coh. FSK modulation’]) elseif I_mod == 3 title([’Uplink E_b/N_0 versus downlink E_b/N_0, both in dB, to give P_E = ’,num2str(PE0),’; DPSK modulation’]) elseif I_mod == 4 title([’Uplink E_b/N_0 versus downlink E_b/N_0, both in dB, to give P_E = ’,num2str(PE0),’; noncoh. FSK modulation’]) end

A typical run follows with a plot given in Fig. 9.12. >> ce9_6 Enter type of modulation: 1 = BPSK; 2 = coh. FSK; 3 = DPSK; 4 = noncoh. FSK 2 Enter desired value of P_E 1e-3 EbN0r_dB = 9.7998 Computer Exercise 9.7 % ce9_7.m: plots waveforms or spectra for gmsk and msk % A = char(’-’,’-.’,’—’,’:.’); samp_bit = input(’Enter number of samples per bit used in simulation: ’); N_bits = input(’Enter total number of bits: ’); GMSK = input(’Enter 1 for GMSK; 0 for normal MSK: ’); if GMSK == 1 BT_bit = input(’Enter vector of bandwidth X bit periods: ’); N_samp = 50; end I_plot = input(’Enter 1 to plot waveforms; 2 to plot spectra and out-of-band power ’); if I_plot == 1


Figure 9.12: Uplink Eb/N0 versus downlink Eb/N0, both in dB, to give PE = 0.001; coherent FSK modulation (bent-pipe and demod/remod relays).

CHAPTER 9. ADVANCED DATA COMMUNICATIONS TOPICS f0 = 1; elseif I_plot == 2 f0 = 0; end clf kp = pi/2; T_bit = 1; LB = length(BT_bit); for k = 1:LB B = BT_bit(k)/T_bit; alpha = 1.1774/B; % Note that B is two-sided bandwidth del_t = T_bit/samp_bit; fs = 1/del_t; data = 0.5*(sign(rand(1,N_bits)-.5)+1); s = 2*data - 1; L = length(s); t=0:del_t:L*T_bit-del_t; s_t = s(ones(samp_bit,1),:); % Build array whose columns are samp_bit long s_t = s_t(:)’; % Convert matrix where bit samples occupy columns to vector L_s_t = length(s_t); tp=0:del_t:N_samp*T_bit-del_t; L_t = length(t); L_tp=length(tp); t_max = max(tp); if GMSK == 1 hG = (sqrt(pi)/alpha)*exp(-pi^2*(tp-t_max/2).^2/alpha^2); freq1 = kp*del_t*conv(hG,s_t); L_hG = length(hG); freq = freq1(L_hG/2:length(freq1)-L_hG/2); elseif GMSK == 0 freq = kp*s_t; end phase = del_t*cumsum(freq); if GMSK == 0 y_mod = exp(j*2*pi*f0*t+j*phase); elseif GMSK == 1 y_mod = exp(j*2*pi*f0*t+j*phase); end if I_plot == 1 subplot(3,1,1),plot(t,s_t),axis([min(t), max(t), -1.2, 1.2]), ylabel(’Data’)


9.2. COMPUTER EXERCISES if GMSK == 0 title(’MSK waveforms’) elseif GMSK == 1 title([’GMSK for BT_b = ’, num2str(BT_bit(k))]) end subplot(3,1,2),plot(t,phase), ylabel(’Excess phase, rad.’) for nn = 1:11 if nn == 1 hold on end subplot(3,1,2),plot(t,-nn*(pi/2)*ones(size(t)),’m’) subplot(3,1,2),plot(t,nn*(pi/2)*ones(size(t)),’m’) end subplot(3,1,3),plot(t,real(y_mod)), xlabel(’t’), ylabel(’Modulated signal’) elseif I_plot == 2 Z=PSD(real(y_mod),2048,fs); % MATLAB supplied function ZN = Z/max(Z); ET = sum(ZN); OBP = 1-cumsum(ZN)/ET; LZ = length(Z); del_FR = fs/2/LZ; FR = [0:del_FR:fs/2-del_FR]; subplot(1,2,1),semilogy(FR,ZN,A(k,:)), axis([0 3 10^(-8) 1]) if k == 1 hold on grid ylabel(’PSD, J/Hz’), xlabel(’fT_b_i_t’) title(’Power spectra and out-of-band power for GMSK’) end subplot(1,2,2),semilogy(FR,OBP,A(k,:)), axis([0 3 10^(-8) 1]) if k == 1 hold on grid ylabel(’Fraction of out-of-band power’), xlabel(’fT_b_i_t’) end if LB == 1 legend([’BT_b_i_t = ’,num2str(BT_bit(1))],1) elseif LB == 2 legend([’BT_b_i_t = ’,num2str(BT_bit(1))], [’BT_b_i_t = ’,num2str(BT_bit(2))],1)

49


50

CHAPTER 9. ADVANCED DATA COMMUNICATIONS TOPICS elseif LB == 3 legend([’BT_b_i_t = ’,num2str(BT_bit(1))], [’BT_b_i_t = ’,num2str(BT_bit(2))], [’BT_b_i_t = ’,num2str(BT_bit(3))],1) elseif LB == 4 legend([’BT_b_i_t = ’,num2str(BT_bit(1))], [’BT_b_i_t = ’,num2str(BT_bit(2))], [’BT_b_i_t = ’,num2str(BT_bit(3))], [’BT_b_i_t = ’,num2str(BT_bit(4))],1) end end end Two typical runs follow - one for plotting waveforms and one for plotting spectra. >> ce8_7 Enter number of samples per bit used in simulation: 20 Enter total number of bits: 50 Enter 1 for GMSK; 0 for normal MSK: 1 Enter vector of bandwidth X bit periods: .5 Enter 1 to plot waveforms; 2 to plot spectra and out-of-band power: 1 >> ce8_7 Enter number of samples per bit used in simulation: 10 Enter total number of bits: 2000 Enter 1 for GMSK; 0 for normal MSK: 1 Enter vector of bandwidth X bit periods: [.5 1 1.5] Enter 1 to plot waveforms; 2 to plot spectra and out-of-band power: 2


Figure 9.13: GMSK waveforms for BTb = 0.5 (data, excess phase in radians, and modulated signal versus t).


Figure 9.14: Power spectra and out-of-band power for GMSK (PSD and fraction of out-of-band power versus fTbit for BTbit = 0.5, 1, and 1.5).


Chapter 10

Optimum Receivers and Signal Space Concepts

Problem 10.1
a. Given H1, Z = N, so

fZ(z|H1) = fN(n)|_{n=z} = 10 e^(-10z) u(z)

Given H2, Z = S + N, where S and N are independent. Thus the resulting pdf under H2 is the convolution of the separate pdfs of S and N:

fZ(z|H2) = integral_{-inf}^{inf} fS(z - lambda) fN(lambda) dlambda
         = integral_{-inf}^{inf} 2 e^(-2(z-lambda)) u(z - lambda) 10 e^(-10 lambda) u(lambda) dlambda
         = 20 e^(-2z) integral_0^z e^(-8 lambda) dlambda, z >= 0
         = 20 e^(-2z) [-(1/8) e^(-8 lambda)]_0^z, z >= 0
         = 2.5 e^(-2z) (1 - e^(-8z)) u(z)
         = 2.5 (e^(-2z) - e^(-10z)) u(z)

which is the result given in the problem statement.

b. The likelihood ratio is

Lambda(Z) = fZ(Z|H2)/fZ(Z|H1) = 2.5 (e^(-2Z) - e^(-10Z))/(10 e^(-10Z)) = 0.25 (e^(8Z) - 1), Z >= 0

Note that an uppercase Z is used because we substitute data to carry out the test.

c. The threshold is

eta = Pr(H1 true)(c21 - c11)/[Pr(H2 true)(c12 - c22)] = (1/3)(7 - 0)/[(2/3)(7 - 0)] = 1/2

d. The likelihood ratio test becomes

0.25 (e^(8Z) - 1) >< 1/2   (choose H2 if greater, H1 if less)

This can be reduced to

Z >< ln(4 eta + 1)/8 = ln(3)/8 = 0.1373

e. The probability of detection is

PD = 1 - PM = integral_{0.1373}^{inf} 2.5 (e^(-2z) - e^(-10z)) dz
   = 2.5 [-(1/2) e^(-2z) + (1/10) e^(-10z)]_{0.1373}^{inf}
   = 2.5 [(1/2) e^(-0.2746) - (1/10) e^(-1.373)] = 0.6991

The probability of false alarm is

PF = integral_{0.1373}^{inf} 10 e^(-10z) dz = [-e^(-10z)]_{0.1373}^{inf} = e^(-1.373) = 0.2533

Therefore, the risk is

Risk = Pr(H1) c21 + Pr(H2) c22 + Pr(H2)(c12 - c22) PM - Pr(H1)(c21 - c11)(1 - PF)
     = (1/3)(7) + (2/3)(0) + (2/3)(7)(1 - 0.6991) - (1/3)(7)(1 - 0.2533)
     = 7/3 + (14/3)(0.3009) - (7/3)(0.7467) = 1.9952

f. Consider the threshold set at eta. Then PF = e^(-10 eta) <= 10^-3 requires eta >= 0.3 ln(10) = 0.6908. Also, from part (e),

PD = integral_{eta}^{inf} 2.5 (e^(-2z) - e^(-10z)) dz = 1.25 e^(-2 eta) - 0.25 e^(-10 eta), eta >= 0

For eta >= 0, PD is a monotonically decreasing function with maximum value 1 at eta = 0. For eta = 0.6908, PD = 0.3137.

g. From part (f), PF = e^(-10 eta) and PD = 1.25 e^(-2 eta) - 0.25 e^(-10 eta). A plot of PD versus PF for various values of eta constitutes the operating characteristic of the test.
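Part (g) asks for the operating characteristic; the sketch below (a suggested script, not part of the original solution set) sweeps the threshold eta and plots PD against PF using the closed-form expressions from parts (f) and (g).

% Operating characteristic for the test of Problem 10.1
eta = 0:0.005:1;                                 % threshold values
PF  = exp(-10*eta);                              % false-alarm probability
PD  = 1.25*exp(-2*eta) - 0.25*exp(-10*eta);      % detection probability
plot(PF, PD), grid on
xlabel('P_F'), ylabel('P_D')
title('Operating characteristic for Problem 10.1')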

Problem 10.2 a. The likelihood ratio is e−|Z| /2 √ = Λ (Z) = e−Z 2 /2 / 2π

r

π Z 2 /2−|Z| e 2

b. The decision regions are determined by the set of inequalities r π Z 2 /2−|Z| H2 e−|Z| /2 √ = e Λ (Z) = ≷η 2 2 e−Z /2 / 2π H1 or

H2

Z 2 /2 − |Z| ≷ ln η − H1

1 π ln = γ 2 2

Thus, solve the equation Z 2 /2 − |Z| = γ for Z > 0 and Z < 0 for the decision boundary points: p 1 + 2γ p 2 Z < 0 : Z /2 + Z − γ = 0 ⇒ Z3,4 = −1 ± 1 + 2γ

Z > 0 : Z 2 /2 − Z − γ = 0 ⇒ Z1,2 = 1 ±


4

CHAPTER 10. OPTIMUM RECEIVERS AND SIGNAL SPACE CONCEPTS For example, for γ = 0.1 √ Z1,2 = 1 ± 1 + 0.2 = 1 ± 1.0954 = 2.0954 , −0.0954, Z > 0 √ Z3,4 = −1 ± 1 + 0.2 = −1 ± 1.0954 = 0.0954, -2.0954 , Z < 0 So the decision regions in this particular case are R1 : R2 :

−∞ < Z < −2.0954; 2.0954 < Z < ∞ −2.0954 ≤ Z ≤ 2.0954

Note that the equality sign can be associated with either region. Problem 10.3 Under both hypotheses, the statistic Z is zero mean and Gaussian. Under hypothesis H1 (noise alone) Z is zero-mean Gaussian with variance σ 2n and under hypothesis H2 (signal plus noise) it is zero-mean Gaussian with variance σ 2s + σ 2n . Thus, the likelihood ratio test is s £ ¡ ¢¤ σ 2n exp −Z 2 /2 σ2s + σ 2n H2 Pr (H1 ) (c21 − c11 ) ≷η= Λ (Z) = σ 2s + σ 2n exp [−Z 2 /2σ 2n ] Pr (H2 ) (c12 − c22 ) H1 or ¡ ¢ ¤ H2 £ exp −Z 2 /2 σ 2s + σ 2n + Z 2 /2σ 2n ≷ η H1

or Z2 or

µ

1 1 − σ 2n σ 2s + σ 2n

= Z2

µ

σ 2s 2 σ n (σ 2s + σ 2n )

s

σ 2s + σ 2n σ 2n

H2

≷ 2 ln (η) + ln

H1

¡ ¢∙ µ 2 ¶¸ H2 σ 2 σ 2 + σ 2 σ s + σ 2n n s n 2 Z ≷ 2 ln (η) + ln σ 2s σ 2n H1

µ

σ 2s + σ 2n σ 2n

For the rest of the problem, we assume that σ 2s = 3, σ 2n = 1 so the LRT becomes H2 4

Z2 ≷

H1 3

[2 ln (η) + ln (4)]

q The boundaries of the decision regions are at Z1,2 = ± 43 [2 ln (η) + ln (4)] which results in the decision regions r r 4 4 [2 ln (η) + ln (4)] ≤ Z ≤ [2 ln (η) + ln (4)] R1 : − 3 3


5 and r

R2 : −∞ < Z < −

4 [2 ln (η) + ln (4)], 3

r

4 [2 ln (η) + ln (4)] < Z < ∞ 3

a. For c11 =qc22 = 0, c21 = c12 , and Pr (H1 ) = Pr (H2 ) = 1/2 we have η = 1 so that Z1,2 = ±

4 3 ln (4) = ±1.3596.

= 1/4, and q0 = 1 − p0 = Pr (H2 ) = 3/4 b. For c11 = c22 = 0, c21 = c12 , p0 = Pr (H1 )q ¡1¢ ¡4¢ 1 we have η = 4 3 = 3 so that Z1,2 = ± 43 [2 ln (1/3) + ln (4)] which is imaginary (i.e., Z 2 is being compared with a negative number in the LRT) which means that the decision is in favor of H2 (signal plus noise).

c. For c11 = c22 = 0, c21 = c12 /2, p0 = Pr (H1 ) = Pr (H2 ) = 1/2 we have η = cc21 = 12 so 12 q that Z1,2 = ± 43 [2 ln (1/2) + ln (4)] = 0 so the decision is in favor of H2 .

d. For c11 = c22 = 0, c21 = 2c12 , p0 = Pr (H1 ) = Pr (H2 ) = 1/2 we have η = cc21 = 2 so 12 q that Z1,2 = ± 43 [2 ln (2) + ln (4)] = ±1.9227. Problem 10.4

For σ 2n = 9, σ2s = 16 the LRT is ∙ µ ¶¸ H2 9 (25) 25 2 ln (η) + ln = 28.125 ln (η) + 14.367 Z2 ≷ 16 9 H1 p The boundaries of the decision regions are at Z1,2 = ± 28.125 ln (η) + 14.367 which results in the decision regions p p R1 : − 28.125 ln (η) + 14.367 ≤ Z ≤ 28.125 ln (η) + 14.367 and

p p R2 : −∞ < Z < − 28.125 ln (η) + 14.367, 28.125 ln (η) + 14.367 < Z < ∞

a. For c11 =√c22 = 0, c21 = c12 , and Pr (H1 ) = Pr (H2 ) = 1/2 we have η = 1 so that Z1,2 = ± 14.367 = ±3.7904 giving decision regions of R1 : R2 :

−3.7904 ≤ Z ≤ 3.7904

−∞ < Z < −3.7904, 3.7904 < Z < ∞.


6

CHAPTER 10. OPTIMUM RECEIVERS AND SIGNAL SPACE CONCEPTS The probability of false alarm is the probability that a decision is made in favor of signal plus noise when noise alone is present, and is given in this case by ¡ ¢ ¡ ¢ Z 3.7904 Z 3.7904 exp −z 2 /2 × 9 exp −z 2 /2 × 9 z √ √ PF = dz = 2 dz; u = 3 2π × 9 2π × 9 −3.7904 "0 Z # ¡ 2 ¢ ¡ ¢ Z 3.7904/3 ∞ exp −u /2 exp −u2 /2 1 √ √ du = 2 du − = 2 2 2π 2π 0 3.7904/3 = 1 − 2Q (1.2635) = 0.7936 The probability of detection is the probability that a decision is made in favor of signal plus noise when signal plus noise present, and is given in this case by ¡ ¢ ¡ ¢ Z −3.7904 Z ∞ exp −z 2 /2 × 25 exp −z 2 /2 × 25 √ √ PD = dz + dz 2π × 25 2π × 25 −∞ 3.7904 ¡ ¢ Z ∞ exp −z 2 /2 × 25 z √ = 2 dz; u = 5 2π × 25 3.7904 ¡ 2 ¢ Z ∞ exp −u /2 √ du = 2 2π 3.7904/5 = 2Q (0.7581) = 0.4484 The risk is Risk

= Pr (H1 ) c21 + Pr (H2 ) c22 + Pr (H2 ) (c12 − c22 ) PM

− Pr (H1 ) (c21 − c11 ) (1 − PF ) 1 1 1 1 × 1 + × 0 + × 1 × (1 − 0.4484) − × 1 × (1 − 0.7936) = 2 2 2 2 = 0.6726

b. For c11 = c22 = p0, c21 = c12 = 1, Pr (H1 ) = 1/4, and Pr (H2 ) = 3/4 we have η = 1/3 so that Z1,2 = ± 28.125 ln (1/3) + 14.367 = imaginary which says Z 2 is being compared with a negative number in the LRT which means that the decision is in favor of H2 (signal plus noise). The probability of false alarm is the probability that a decision is made in favor of signal plus noise when noise alone is present, and is 0 in this case. Since the decision is always in favor of H2 the probability of detection is 1. The risk is Risk

= Pr (H1 ) c21 + Pr (H2 ) c22 + Pr (H2 ) (c12 − c22 ) PM

− Pr (H1 ) (c21 − c11 ) (1 − PF ) 3 3 1 1 × 1 + × 0 + × 1 × (1 − 1) − × 1 × (1 − 0) = 4 4 4 4 = 0


7 c. For c11 = c22 = p 0, c21 = c12 /2 = 1/2, and Pr (H1 ) = Pr (H2 ) = 1/2 we have η = 1/2 so that Z1,2 = ± 28.125 ln (1/2) + 14.367 =imaginary giving which gives PF = 0 and PD = 1. The risk is Risk

= Pr (H1 ) c21 + Pr (H2 ) c22 + Pr (H2 ) (c12 − c22 ) PM

− Pr (H1 ) (c21 − c11 ) (1 − PF ) 1 1 1 1 1 1 × + × 0 + × 1 × (1 − 1) − × × (1 − 0) = 2 2 2 2 2 2 = 0

d. For c11 =pc22 = 0, c21 = 2c12 = 2, and Pr (H1 ) = Pr (H2 ) = 1/2 we have η = 2 so that Z1,2 = ± 28.125 ln (2) + 14.367 = ±5.8191 giving decision regions of R1 : R2 :

−5.8191 ≤ Z ≤ 5.8191

−∞ < Z < −5.8191, 5.8191 < Z < ∞.

The probability of false alarm is the probability that a decision is made in favor of signal plus noise when noise alone is present, and is given in this case by ¡ ¢ ¡ ¢ Z 5.8191 Z 5.8191 exp −z 2 /2 × 9 exp −z 2 /2 × 9 z √ √ PF = dz = 2 dz; u = 3 2π × 9 2π × 9 −5.8191 "0 Z # ¡ 2 ¢ ¡ ¢ Z 5.8191/3 2 ∞ exp −u /2 exp −u /2 1 √ √ du = 2 du = 2 − 2 2π 2π 0 5.8191/3 = 1 − 2Q (1.9397) = 0.9476 The probability of detection is the probability that a decision is made in favor of signal plus noise when signal plus noise present, and is given in this case by ¡ ¢ ¡ ¢ Z −5.8191 Z ∞ exp −z 2 /2 × 25 exp −z 2 /2 × 25 √ √ PD = dz + dz 2π × 25 2π × 25 −∞ 5.8191 ¡ ¢ Z ∞ exp −z 2 /2 × 25 z √ = 2 dz; u = 5 2π × 25 5.8191 ¡ 2 ¢ Z ∞ exp −u /2 √ = 2 du 2π 5.8191/5 = 2Q (1.1638) = 0.2445 The risk is Risk

= Pr (H1 ) c21 + Pr (H2 ) c22 + Pr (H2 ) (c12 − c22 ) PM − Pr (H1 ) (c21 − c11 ) (1 − PF ) 1 1 1 1 × 2 + × 0 + × 1 × (1 − 0.2445) − × 2 × (1 − 0.9476) = 2 2 2 2 = 1.3254
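Each probability quoted in Problem 10.4 is a Gaussian Q-function evaluation with standard deviation 3 under H1 and 5 under H2. The sketch below is a suggested check of the quoted values, using the base-MATLAB erfc function; it is not part of the original solution.

% Q-function evaluations quoted in Problem 10.4 (sigma = 3 under H1, sigma = 5 under H2)
Q = @(x) 0.5*erfc(x/sqrt(2));                    % Gaussian Q-function
for Z0 = [3.7904 5.8191]                         % decision boundaries for eta = 1 and eta = 2
    fprintf('Z0 = %.4f: 1 - 2Q(Z0/3) = %.4f, 2Q(Z0/5) = %.4f\n', Z0, 1 - 2*Q(Z0/3), 2*Q(Z0/5))
end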


8

CHAPTER 10. OPTIMUM RECEIVERS AND SIGNAL SPACE CONCEPTS

Problem 10.5 Define A = (a1 , a2, a3 ) and B = (b1 , b2 , b3 ). Define scalar multiplication by α(a1 , a2, a3 ) = (αa1 , αa2, αa3 ) and vector addition by (a1 , a2, a3 ) +(b1 , b2 , b3 ) = (a1 + b1 , a2 + b2 , a3 + b3 ). The properties listed under ”Structure of Signal Space” in the text then become: 1. α1 A + α2 B = (α1 a1 + α2 b1 , α1 a2 + α2 b2 , α1 a3 + α2 b3 ) R3 ; 2. α (A + B) = α (a1 + b1, a2 + b2 , a3 + b3 ) = (αa1 + αb1, αa2 + αb2 , αa3 + αb3 ) = (αa1 , αa2, αa3 ) + (αb1, αb2 , αb3 ) = αA + αB R3 ; 3. α1 (α2 A) = (α1 α2 A) R3 (follows by writing out in component form); 4. 1 · A = A (follows by writing out in component form); 5. The unique element 0 is (0, 0, 0) so that A + 0 = A; 6. −A = (−a1 , −a2, − a3 ) so that A + (−A) = 0.

Problem 10.6
a. |A| = sqrt(A.A) = sqrt(1^2 + 3^2 + 2^2) = sqrt(14); |B| = sqrt(B.B) = sqrt(5^2 + 1^2 + 3^2) = sqrt(35); cos(theta) = A.B/(|A||B|) = (1x5 + 3x1 + 2x3)/(sqrt(14) sqrt(35)) = 0.6325.
b. |A| = sqrt(6^2 + 2^2 + 4^2) = sqrt(56); |B| = sqrt(2^2 + 2^2 + 2^2) = sqrt(12); cos(theta) = (6x2 + 2x2 + 4x2)/(sqrt(56) sqrt(12)) = 0.9258.
c. |A| = sqrt(4^2 + 3^2 + 1^2) = sqrt(26); |B| = sqrt(3^2 + 4^2 + 5^2) = sqrt(50); cos(theta) = (4x3 + 3x4 + 1x5)/(sqrt(26) sqrt(50)) = 0.8043.
d. |A| = sqrt(3^2 + 3^2 + 2^2) = sqrt(22); |B| = sqrt((-1)^2 + (-2)^2 + 3^2) = sqrt(14); cos(theta) = (3x(-1) + 3x(-2) + 2x3)/(sqrt(22) sqrt(14)) = -0.1709.
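The angle computations in Problem 10.6 (and the Schwarz-inequality checks of Problem 10.12) are one-liners once the vectors are stored as arrays; a suggested sketch, not part of the original solution:

% Angles between the vector pairs of Problem 10.6
A = [1 3 2; 6 2 4; 4 3 1; 3  3 2];     % one pair per row
B = [5 1 3; 2 2 2; 3 4 5; -1 -2 3];
for k = 1:4
    cos_theta = dot(A(k,:), B(k,:))/(norm(A(k,:))*norm(B(k,:)));
    fprintf('pair %d: cos(theta) = %.4f\n', k, cos_theta)
end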


9 Problem 10.7 1. Consider (x, y) = lim

Z T

x (t) y∗ (t) dt

Z T

T →∞ −T

Then (x, y) = lim

Z T

y (t) x∗ (t) dt =

(αx, y) = lim

Z T

T →∞ −T

lim

T →∞ −T

¸∗ x (t) y∗ (t) dt = (x, y)∗

Also αx (t) y (t) dt = α lim

T →∞ −T

Z T

T →∞ −T

x (t) y∗ (t) dt = α (x, y)

and (x + y, z) =

lim

Z T

T →∞ −T Z T

=

lim

T →∞ −T

[x (t) + y (t)] z ∗ (t) dt ∗

x (t) z (t) dt + lim

= (x, z) + (y, z) Finally (x, x) = lim

Z T

T →∞ −T

Z T

T →∞ −T

x (t) x (t) dt = lim

Z T

T →∞ −T

y (t) z ∗ (t) dt

|x (t)|2 dt ≥ 0

The scalar product for power signals is considered in the same manner. Problem 10.8 a. This scalar product is (x1 , x2 ) = lim

Z T

T →∞ 0

2e−4t dt =

1 2

b. Use the energy signal scalar product definition as in (a): Z T 1 (x1 , x2 ) = lim e−(7−j2)t dt = T →∞ 0 7 − 2j c. Use the power signal product definition: Z T 1 (x1 , x2 ) = lim cos (2πt) cos (4πt) dt = 0 T →∞ 2T −T


10

CHAPTER 10. OPTIMUM RECEIVERS AND SIGNAL SPACE CONCEPTS d. Use the power signal scalar product definition: 1 T →∞ 2T

(x1 , x2 ) = lim

Z T

5 cos (2πt) dt = 0

0

Problem 10.9 Since the signals are assumed real, 2

kx1 + x2 k

= = =

lim

Z T

T →∞ −T Z T

lim

[x1 (t) + x2 (t)]2 dt x21 (t) dt + 2

lim

Z T

T →∞ −T T −→∞ −T 2 kx1 k + 2 (x1 , x2 ) + kx2 k2

x1 (t) x2 (t) dt + lim

Z T

T −→∞ −T

x22 dt

From this it is seen that kx1 + x2 k2 = kx1 k2 + kx2 k2 if and only if (x1 , x2 ) = 0.

Problem 10.10 By straight forward integration, it follows that

kx2 k2 kx3 k2

Z 1

√ kx1 k = 2 −1 r Z 1 2 2 2 = t dt = or kx2 k = 3 3 −1 r Z 1 8 8 2 = (1 + t) dt = or kx3 k = 3 3 −1

kx1 k2 =

12 dt = 2 or

Also (x1 , x2 ) = 0 and (x3 , x1 ) = 2 Since (x1 , x2 ) = 0, they are orthogonal. Choose them as basis functions (not normalized). Clearly, x3 is their vector sum.


11 Problem 10.11 It follows that kx1 k2 = =

= and similarly kx2 k2 =

à X

X

an φn ,

n

XX

bm φm

m

!

an b∗m (φn , φm )

n m N X

n=1 N X n=1

|an |2 because (φn , φm ) = δ mn |bn |2

Also, in a similar fashion, (x1 , x2 ) =

N X

an b∗n

n=1

Thus we must show that ¯N ¯2 N N ¯X ¯ X X ¯ 2 ∗¯ an bn ¯ ≤ |an | |bn |2 ¯ ¯ ¯ n=1

n=1

n=1

Writing the both sides out we get XX n

m

an a∗m bn b∗m ≤

XX n

an a∗n bm b∗m

m

This is equivalent to

or

XX n

m

XX

(2an a∗n bm b∗m − 2an a∗m bn b∗m ) ≥ 0

n m ∗ (an an bm b∗m − 2an a∗m bn b∗m + am a∗m bn b∗n )

≥ 0

The inequality is therefore equivalent to XX m

n

|an bm − am bn |2 ≥ 0

which is seen to be true because each term in the sum is nonnegative.


12

Problem 10.12
a. |A| = sqrt(1^2 + 3^2 + 2^2) = sqrt(14); |B| = sqrt(5^2 + 1^2 + 3^2) = sqrt(35); A.B = 1x5 + 3x1 + 2x3 = 14; |A.B| = 14 <= |A||B| = sqrt(14) sqrt(35) = 22.1359, which is true.
b. |A| = sqrt(6^2 + 2^2 + 4^2) = sqrt(56); |B| = sqrt(2^2 + 2^2 + 2^2) = sqrt(12); A.B = 6x2 + 2x2 + 4x2 = 24; |A.B| = 24 <= |A||B| = sqrt(56) sqrt(12) = 25.923, which is true.
c. |A| = sqrt(4^2 + 3^2 + 1^2) = sqrt(26); |B| = sqrt(3^2 + 4^2 + 5^2) = sqrt(50); A.B = 4x3 + 3x4 + 1x5 = 29; |A.B| = 29 <= |A||B| = sqrt(26) sqrt(50) = 36.0555, which is true.
d. |A| = sqrt(3^2 + 3^2 + 2^2) = sqrt(22); |B| = sqrt((-1)^2 + (-2)^2 + 3^2) = sqrt(14); A.B = 3x(-1) + 3x(-2) + 2x3 = -3; |A.B| = 3 <= |A||B| = sqrt(22) sqrt(14) = 17.5499, which is true.

Problem 10.13 a. A set of normalized basis functions is 1 √ s1 (t) 2 r ∙ ¸ 2 1 s2 (t) − s1 (t) φ2 (t) = 3 2 ∙ ¸ √ 2 2 φ3 (t) = 3 s3 (t) − s1 (t) − s2 (t) 3 3

φ1 (t) =

b. Evaluate (s1 , φ1 ), (s2 , φ1 ), and (s3 , φ1 ) to get √ 1 s1 = 2φ1 ; s2 = √ φ1 + 2

r

√ 3 φ2 ; s3 = 2φ1 + 2

r

2 (φ + φ3 ) 3 2

Note that a basis function set could have been obtained by inspection. For example, three unit-height, unit-width, nonoverlapping pulses spanning the interval could have been used.


13 Problem 10.14 Normalize the first vector to form the first orthonormal vector: 3bi + 2b j −b k x1 =q |x1 | 32 + 22 + (−1)2 ´ 1 ³b 3i + 2b j −b k = √ 14 = 0.8018bi + 0.5345b j − 0.2673b k

φ1 =

Take the second vector and subtract its component along φ1 : ³ ´ ´ ³ ´ 1 ³ ´ 1 ³b −2bi + 5b j+b k −√ 3i + 2b j −b k · −2bi + 5b j+b k √ 3bi + 2b j −b k 14 14 = −2.6429bi + 4.5714b j + 1.2143b k

v2 =

Normalize v2 to form the second orthonormal vector: φ2 =

−2.6429bi + 4.5714b j + 1.2143b k q (−2.6429)2 + (4.5714)2 + (1.2143)2

= −0.4878bi + 0.8437b j + 0.2241b k

Take the third vector and subtract its components along φ1 and φ2 : v3 =

³ ´ 6bi − 2b j + 7b k ³ ´ ³ ´³ ´ − 0.8018bi + 0.5345b j − 0.2673b k · 6bi − 2b j + 7b k 0.8018bi + 0.5345b j − 0.2673b k ´ ³ ´ ³ ´³ j + 0.2241b k − −0.4878bi + 0.8437b j + 0.2241b k · 6bi − 2b j + 7b k −0.4878bi + 0.8437b

= 3.0146bi − 0.4307b j + 8.1825b k

Normalize:

3.0146bi − 0.4307b j + 8.1825b k √ 3.01462 + 0.43072 + 8.18252 = 0.3453bi − 0.0493b j + 0.9372b k

φ3 =

Finally, take the fourth vector and subtract components of the other three along it:


14

CHAPTER 10. OPTIMUM RECEIVERS AND SIGNAL SPACE CONCEPTS

v4 =

³ ´ 3bi + 8b j − 3b k ³ ´ ³ ´³ ´ − 0.8018bi + 0.5345b j − 0.2673b k · 3bi + 8b j − 3b k 0.8018bi + 0.5345b j − 0.2673b k ´ ³ ´ ³ ´³ j + 0.2241b k − −0.4878bi + 0.8437b j + 0.2241b k · 3bi + 8b j − 3b k −0.4878bi + 0.8437b ³ ´ ³ ´³ ´ − 0.3453bi − 0.0493b j + 0.9372b k · 3bi + 8b j − 3b k 0.3453bi − 0.0493b j + 0.9372b k

= 0bi + 0b j + 0b k (actually of the order of 10−14 )

Therefore there are only three orthornormal vectors. Problem 10.15 a. A basis set is r

2 cos (2πfc t) and T r 2 sin (2πfc t) for 0 ≤ t ≤ T φ2 (t) = T

φ1 (t) =

b. The coordinates of the signal vectors are xi =

Z T 0

and yi =

Z T 0

si (t) φ1 (t) dt =

√ E cos

πi 4

µ

µ

√ si (t) φ2 (t) dt = − E sin

πi 4

where E = A2 T . Thus µ ¶ µ ¶ √ √ πi πi φ1 (t) − E sin φ2 (t) , i = 0, 1, 2, 3, 4, 5, 6, 7 si (t) = E cos 4 4 ´ ³√ E, 0 with the rest at multiples of π/4 radians on The signal point for i = 0 is at √ a circle of radius E centered at the origen.


15 Problem 10.16

a. Normalize x1 (t) to produce φ1 (t): 2

||x1 ||

=

Z ∞ 0

¡ −t ¢2 dt = e

Z ∞ 0

¯∞ 1 1 e−2t dt = − e−2t ¯0 = 2 2

x1 (t) √ ⇒ φ1 (t) = √ = 2e−t u (t) 1/ 2

Take the second signal and subtract its component along φ1 : Z ∞√ √ v2 (t) = e u (t) − 2e−t e−2t dt 2e−t u (t) "0 √ #∞ √ −t 2 e−3t = e−2t u (t) − − 2e u (t) 3 −2t

0

2 = e−2t u (t) − e−t u (t) 3 Normalize v2 (t) to produce φ2 (t): 2

||v2 ||

¶ ¶ Z ∞µ Z ∞µ 2 −t 2 4 −3t 4 −2t −2t −4t = − e dt = − e + e e e dt 3 3 9 0 0 ¸ ∙ 1 1 −4t 4 −3t 2 −2t ∞ + e − e = = − e 4 9 9 36 0 ¢ v2 (t) ¡ −2t ⇒ φ2 (t) = = 6e − 4e−t u (t) 1/6

Take the third signal and subtract its components along φ1 and φ2 : −3t

v3 (t) = e

Z ∞√ √ u (t) − 2e−t e−3t dt 2e−t u (t)

Z ∞

0

¡ −2t ¢ £ ¤ 6e − − 4e−t e−3t dt 6e−2t u (t) − 4e−t u (t) ¶ µ 0 6 −2t 3 −t −3t − e + e u (t) = e 5 10


16

CHAPTER 10. OPTIMUM RECEIVERS AND SIGNAL SPACE CONCEPTS Normalize v3 (t) to produce φ3 (t): 2

||v3 ||

¶ Z ∞µ 6 −2t 3 −t 2 −3t = − e + e dt e 5 10 0 ¶ Z ∞µ 12 −5t 51 −4t 18 −3t 9 −2t −6t e e dt − e + e − e + = 5 25 25 100 0 1 = 600 √ ¡ ¢ v3 (t) √ = 6 10e−3t − 12e−2t + 3e−t u (t) ⇒ φ3 (t) = 1/10 6

b. There does not appear to be a general pattern developing. Problem 10.17 a. Normalize x1 (t) to produce φ1 (t): Z 1 1 ¯1 2 2 t2 dt = t3 ¯−1 = ||x1 || = 3 3 −1 r 3 x1 (t) ⇒ φ1 (t) = p t, −1 ≤ t ≤ 1 = 2 2/3 Take the second signal and subtract its component along φ1 : r Z 1r 3 2 3 2 tt dt t v2 (t) = t − 2 2 −1 ∙ ¸1 3 t4 2 t = t − 2 4 −1 = t2 , −1 ≤ t ≤ 1

Normalize v2 (t) to produce φ2 (t): 2

||v2 ||

= =

Z 1

¡ 2 ¢2 t dt =

−1 ∙ 5 ¸1 t

5

−1

=

Z 1

t4 dt

−1

2 5

x2 (t) = ⇒ φ2 (t) = p 2/5

r

5 2 t , −1 ≤ t ≤ 1 2


17 Take the third signal and subtract its components along φ1 and φ2 : r r Z 1r Z 1r 3 3 3 5 2 3 5 2 3 tt dt t− t t dt t v3 (t) = t − 2 2 2 2 −1 −1 ∙ ¸1 ∙ ¸1 3 t5 5 t6 3 = t − t− t2 2 5 −1 2 6 −1 3 = t3 − t, −1 ≤ t ≤ 1 5 Normalize v3 (t) to produce φ3 (t): ||v3 ||2

¶ ¶ Z 1µ Z 1µ 3 2 6 9 t3 − t dt = t6 − t4 + t2 dt 5 5 25 −1 −1 ∙ 7 ¸ 1 t 6 t5 9 t3 9 2 8 2 62 − + + = = − = 7 55 25 3 −1 7 5 5 25 3 175 r µ ¶ 7 5 3 3 v3 (t) t − t , −1 ≤ t ≤ 1 = ⇒ φ3 (t) = p 2 2 2 8/175 =

Take the fourth signal and subtract its components along φ1 , φ2 , and φ3 : r r Z 1r Z 1r 3 4 3 5 2 4 5 2 4 tt dt t− t t dt t v4 (t) = t − 2 2 2 2 −1 −1 r r µ ¶ µ ¶ Z 1 7 5 3 3 7 5 3 3 4 t − t t dt t − t − 2 2 2 2 2 2 −1 ∙ 6 ¸1 ∙ 7 ¸1 ∙ 8 ¸1 µ ¶ 3 t6 3 t 5 t 7 5t 5 3 3 4 2 − t − t t− t − = t − 2 6 −1 2 7 −1 2 28 2 6 −1 2 2 5 = t4 − t2 , −1 ≤ t ≤ 1 7 Normalize v4 (t) to produce φ4 (t): 2

||v4 ||

¶ ¶ Z 1µ Z 1µ 5 2 2 10 6 25 4 4 8 t − t t − t + t dt = dt = 7 7 49 −1 −1 ¸1 ∙ 9 7 5 10 t 25 t 8 2 10 2 25 2 t − + + = = − = 9 7 7 49 5 −1 9 7 7 49 5 9 × 49 µ ¶ 27 xv4 (t) 5 t4 − t2 , −1 ≤ t ≤ 1 = √ ⇒ φ4 (t) = p 7 2 2 8/ (9 × 49)


18

CHAPTER 10. OPTIMUM RECEIVERS AND SIGNAL SPACE CONCEPTS b. The pattern is not apparent. Problem 10.18 Normalize s1 (t) to produce φ1 (t): Z 2 2 ||s1 || = 12 dt = 2 0

1 s1 (t) ⇒ φ1 (t) = √ = √ , 0 ≤ t ≤ 2 2 2

Take the second signal and subtract its component along φ1 : Z 2 1 1 √ cos (πt) dt √ v2 (t) = cos (πt) − 2 2 0 ∙ ¸2 1 1 sin (πt) = cos (πt) − 2 π 0 = cos (πt) , 0 ≤ t ≤ 2 Normalize it: ||v2 ||

2

¸ 1 1 + cos (2πt) dt = 1 = cos (πt) dt = 2 2 0 0 ⇒ φ2 (t) = s2 (t) = cos (πt) , 0 ≤ t ≤ 2 Z 2

2

Z 2∙

A similar operation with the third signal shows that φ3 (t) = s3 (t) = sin (πt) , 0 ≤ t ≤ 2 Take the fourth signal and subtract its components along φ1 and φ3 : Z 2 Z 2 1 1 2 2 √ √ sin (πt) dt − cos (πt) sin2 (πt) dt cos (πt) v4 (t) = sin (πt) − 2 2 0 0 Z 2 sin (πt) sin2 (πt) dt sin (πt) − 0

2

= sin (πt) −

1 1 1 1 = − cos (2πt) − 2 2 2 2

1 = − cos (2πt) , 0 ≤ t ≤ 2 2 Normalize it: ¸ Z ∙ Z 2 1 1 1 2 1 1 2 2 cos (2πt) dt = + cos (4πt) dt = ||v4 || = 4 4 2 2 4 0 0 v4 (t) ⇒ φ4 (t) = = − cos (2πt) , 0 ≤ t ≤ 2 1/2


19 From these results we find that √ s1 (t) = 2φ1 (t) s2 (t) = φ2 (t) s3 (t) = φ3 (t) 1 1 1 1 − cos (2πt) = √ φ1 (t) + φ4 (t) s4 (t) = 2 2 2 2

Problem 10.19 By the time delay and modulation theorems, ∙ µ ¶ µ ¶¸ 1 1 τ sinc τ f − + sinc τ f + e−j2πkτ f Φ (f ) = 2 2τ 2τ The total energy in the kth signal is E=

Z τ /2

2

cos (πt/τ ) dt =

−τ /2

Z τ /2 ∙ −τ /2

¸ 1 1 τ + cos (2πt/τ ) dt = 2 2 2

The energy for |f | ≤ W is EW

∙ µ ¶ µ ¶¸ 1 1 2 τ2 sinc fτ − + sinc fτ + = df 2 2 −W 4 µ ¶ µ ¶¸ Z Wτ ∙ τ 1 1 2 = dv sinc v − + sinc v + 2 2 −W τ 4 µ ¶ µ ¶¸ Z Wτ ∙ τ 1 1 2 = 2 sinc v − + sinc v + dv (even integrand) 4 2 2 0 Z W

Thus EW = E

¶ µ ¶¸ µ Z Wτ ∙ 1 2 1 + sinc v + sinc v − dv 2 2 0

The MATLAB program below computes EW /E: % pr10_19 % disp(’ W_tau E_W/E’) for tau_W = 0.7:.1:1.3 v = 0:0.01:tau_W; y = (sinc(v-0.5)+sinc(v+0.5)).^2;


20

CHAPTER 10. OPTIMUM RECEIVERS AND SIGNAL SPACE CONCEPTS EW_E = trapz(v, y); disp([tau_W, EW_E]) end The results are given below: >> pr10_19 W_tau E_W/E 0.7000 0.8586 0.8000 0.9106 0.9000 0.9467 1.0000 0.9701 1.1000 0.9838 1.2000 0.9909 1.3000 0.9939 The product 0.8 ≤ W τ ≤ 0.9 for EW /E ≥ 11 12 = 0.9167. Problem 10.20 a. By inspection of the given waveform set it follows that a satisfactory set of orthonormal functions is √ 2 cos (2πt) , 0 ≤ t ≤ 1 √ φ2 (t) = 2 sin (2πt) , 0 ≤ t ≤ 1 √ 2 cos (4πt) , 0 ≤ t ≤ 1 φ3 (t) = √ 2 sin (4πt) , 0 ≤ t ≤ 1 φ4 (t) =

φ1 (t) =

For i = 0, 1, 2, 3 si (t) = A cos (iπ/2) cos (2πt) − A sin (iπ/2) sin (2πt) √ E [cos (iπ/2) φ1 (t) − sin (iπ/2) φ2 (t)] = For i = 4, 5, 6, 7 si (t) = A cos ((i − 4) π/2) cos (4πt) − A sin ((i − 4) π/2) sin (4πt) √ = E [cos ((i − 4) π/2) φ3 (t) − sin ((i − 4) π/2) φ4 (t)] b. The receiver looks like Figure 10.5 with four multipliers and integrators, one for each


21 orthonormal function. The signal coordinates are A = [Aij , i = 0, 1, 2, 3, 4, 5, 6, 7, j = 1, 2, 3, 4] ⎧ h√ i ⎪ E, 0, 0, 0, 0, 0, 0, 0 , i = 0 ⎪ ⎪ ⎪ i h ⎪ √ ⎪ ⎪ E, 0, 0, 0, 0, 0, 0 , i=1 0, − ⎪ ⎪ ⎪ i h ⎪ √ ⎪ ⎪ 0, 0, − E, 0, 0, 0, 0, 0 , i = 2 ⎪ ⎪ ⎪ h i ⎪ √ ⎪ ⎨ 0, 0, 0, E, 0, 0, 0, 0 , i = 3 i h = √ ⎪ E, 0, 0, 0 , i=4 0, 0, 0, 0, ⎪ ⎪ ⎪ i h ⎪ √ ⎪ ⎪ 0, 0, 0, 0, 0, − E, 0, 0 , i = 5 ⎪ ⎪ ⎪ i h ⎪ √ ⎪ ⎪ E, 0 , i=6 0, 0, 0, 0, 0, 0, − ⎪ ⎪ ⎪ h ⎪ √ i ⎪ ⎩ 0, 0, 0, 0, 0, 0, 0, E , i = 7

From (10.96) the decision rule is

Choose H so that d2 =

4 X j=1

(Zj − A j )2 = minumum

where Zj = Aij + Nj =

Z T =1 0

y (t) φj (t) dt =

Z 1 0

[si (t) + n (t)] φj (t) dt, j = 1, 2, 3, 4

This can be done suboptimally in two stages: (1) Correlate with two tones, one at frequency 1 Hz and the other at frequency 2 Hz and choose the output having the largest envelope (or squared envelope) (note that this will involve a correlation with both a sine and a cosine at each frequency); (2) Taking the pair of outputs from the correlators (one sine and one cosine) having the largest output envelope, determine in which π/2 radian range the output pair lies (i.e., 0 to π/2, π/2 to π, π to 3π/2, or 3π/2 to 2π radians). c. As before E [Ni Nj ] = N20 δ ij . According to the suboptimum strategy given above, we make the comparison (y, φ1 )2 + (y, φ2 )2

> (y, φ3 )2 + (y, φ4 )2 <

Suppose s0 (t) was really transmitted. Then the probability of a correct decision on


22

CHAPTER 10. OPTIMUM RECEIVERS AND SIGNAL SPACE CONCEPTS frequency is Pcf

∙³ ¸ ´2 √ E + N1 + N22 > N32 + N42 ∙³ ¸ ´2 √ 2 2 2 = Pr E + N1 > N3 + N4 − N2 = Pr

The probability of correct decision on signal phase, from the analysis of quadriphase in Chapter 9 resulting in (9.15, is Pcp ∼ = 1 − 2Q

Ãr

E N0

!

By symmetry, these expressions hold for either frequency or any phase. The overall probability of correct reception is Pc = Pcf Pcp for an overall probability of error of PE = 1 − Pcf Pcp

Problem 10.21 Write

Z ∞

2

e−y √ Pc = π −∞

"Z √ y+ E/N0 −∞

and let z = x − y. This gives Pc =

Z √E/N0 −∞

2

e−z /2 √ π

# 2 e−x √ dx dy π

# 2 e−2(y+z/2) √ dy dz π −∞

"Z ∞

Complete the square in the exponent √ of the inside integral and use a table of definite integrals to show that is evaluates to 1/ 2. The result then reduces to Pc = 1 − Q which gives PE = Q

´ ³p E/N0

i hp E/N0 , the desired result.


23 Problem 10.22 a. The space is three-dimensional with signal points at the eight points ´ ³ p ´ ³ p ´ ³ p ± E/3 , ± E/3 , ± E/3 The optimum partitions are planes determined by the coordinate axes taken two at a time (three planes). Thus the optimum decision regions are the eight quadrants of the signal space. b. Consider S1 . Given the partitioning discussed in part (a), we make a correct decision only if kZ − S1 k2 < kZ − S2 k2 and kZ − S1 k2 < kZ − S4 k2 and kZ − S1 k2 < kZ − S8 k2 where S2 , S4 , and S8 are the nearest-neighbor signal points topS1 . These are the ) and S1 = E/3 (1, 1, 1) , S2 = most Z = (Z1 , Z2 , Z3p p probable errors. Substituting p E/3 (1, −1, 1, ) , S4 = E/3 (−1, 1, 1), and S8 = E/3 (1, 1, −1), the above conditions become ´2 ³ ´2 ³ ´2 ³ ´2 ³ p p p p < Z1 − E/3 , Z2 + E/3 < Z2 − E/3 , Z1 + E/3 ³ ´2 ³ ´2 p p and Z3 + E/3 < Z3 − E/3 by canceling like terms on each side. For S1 , these reduce to Z1 > 0, Z2 > 0, and Z3 > 0 to define the decision region for S1 . Therefore, the probability of correct decision is P [correct dec.|s1 (t)] = Pr (Z1 > 0, Z2 > 0, Z3 > 0) = [Pr (Zi > 0)]3 because the noises along each coordinate axis are independent. Note that E [Zi ] = (Es /3)1/2 , all i. The noise variances, and therefore the variances of the Zi s are all


24

CHAPTER 10. OPTIMUM RECEIVERS AND SIGNAL SPACE CONCEPTS N0 /2. Thus

∙ Z ∞ ´2 ¸ ¸3 p 1 1 ³ √ y − Es /3 exp − P [correct dec.|s1 (t)] = dy N0 πN0 0 " r µ 2¶ # Z ∞ ´ p 1 2 ³ u √ du , u = y − = exp − E /3 s √ 2 N0 2π − 2Es /3N0 ´i3 h ³p 2Es /3N0 = 1−Q

Since this is independent of the signal chosen, this is the average probability of correct detection. The symbol error probability is 1 − Pc . Generalizing to n dimensions, we have

´in ´in h ³p h ³p 2Es /nN0 =1− 1−Q 2Eb /N0 PE = 1 − 1 − Q

where n = log2 M . Note that Eb = Es /n since there are n bits per dimension and M = 2n .

c. The symbol error probability is plotted in Fig. 10.1.


25

0

10

-1

10

-2

Ps

10

-3

10

n=4 n=3 n=2 n=1

-4

10

-5

10

-6

10

0

5

10 Eb/N0

Figure 10.1:

15


26

CHAPTER 10. OPTIMUM RECEIVERS AND SIGNAL SPACE CONCEPTS Problem 10.23 a. Integrate the product of the i − 1 and i signals over [0, Ts ]: Z Ts

= = = '

A2 cos {2π [fc + (i − 1) ∆f ] t} cos [2π (fc + i∆f ) t] dt Z A2 Ts {cos [2π (2fc + (2i − 1) ∆f ) t]} dt + {cos [2π (−∆f ) t]} dt 2 0 2 0 ½ ¾ A2 sin [2π (2fc + (2i − 1) ∆f ) t] sin [2π (−∆f ) t] Ts + 2 2π (2fc + (2i − 1) ∆f ) 2π (−∆f ) 0 ½ ¾ 2 sin [2π (2fc + (2i − 1) ∆f ) Ts ] sin [2π (−∆f ) Ts ] A + 2 2π (2fc + (2i − 1) ∆f ) 2π (−∆f ) 2 2 A Ts sin [2π∆f Ts ] A Ts sinc (2∆fTs ) = 2 2π∆f Ts 2

0 Z A2 Ts

where the first term has been neglected because it is small for typical values of fc . The remaining term is 0 for its argument equal to a nonzero integer. The smallest such integer is 1, which gives 2∆fTs = 1 or ∆f =

1 2Ts

b. We need 1/2Ts Hz of bandwidth per signal, so M=

M W = 2W Ts or W = 1/2Ts 2Ts

c. For vertices-of-a-hypercube signaling M = 2n where n is the dimensionality (number of orthogonal functions) which is 2W Ts , so log2 M = 2W Ts or W =

log2 M 2Ts

Problem 10.24 In the equation P (E|H1 ) =

Z ∞ ∙Z ∞ 0

r1

¸

fR2 (r2 |H1 ) dr2 fR1 (r1 |H1 ) dr1


27 substitute fR1 (r1 |H1 ) =

2 2 2r1 e−r1 /(2Eσ +N0 ) ; r1 ≥ 0 2 2Eσ + N0

and fR2 (r2 |H1 ) = The inside integral becomes

2r2 −r22 /N0 e , r2 > 0 N0 2

I = e−r1 /N0 When substituted in the first equation, it can be reduced to Z ∞ Z ∞ 2r1 2r1 2 −r12 /(2Eσ2 +N0 ) −r12 /N0 P (E|H1 ) = e e dr1 = e−r1 /K dr1 2+N 2+N 2Eσ 2Eσ 0 0 0 0 where

¡ ¢ N0 2Eσ 2 + N0 K= 2 (Eσ2 + N0 )

The remaining integral is then easily integrated since its integrand is a perfect differential if written as Z ∞ 2r1 −r12 /K K e dr1 P (E|H1 ) = 2 2Eσ + N0 0 K In the integral, let v = r12 /K to get Z ∞ Z ∞ 2r1 −r12 /K e dr1 = e−v dv = 1 K 0 0

so that finally P (E|H1 ) = =

¡ ¢ N0 2Eσ2 + N0 K 1 = 2Eσ2 + N0 2Eσ2 + N0 2 (Eσ2 + N0 ) 1 N0 ´ = ³ 2 2 (Eσ2 + N0 ) 2 1 + 1 2Eσ 2 N0

which is the same as (10.142). Problem 10.25

a. By definition, the new signal set is M−1

s0i (t) = si (t) −

1 X sj (t) M j=0


28

CHAPTER 10. OPTIMUM RECEIVERS AND SIGNAL SPACE CONCEPTS The signal energy for the ith signal in the new set is ⎡ ⎤2 Z Ts M−1 X ⎣si (t) − 1 sj (t)⎦ dt Es0 = M 0 j=0 ⎡ ⎤ Z Ts M−1 M−1 X X M−1 X 2 1 ⎣s2i (t) − si (t) sj (t) + 2 sj (t) sk (t)⎦ dt = M M 0 j=0

=

Z Ts 0

s2i (t) dt − M−1

= Es −

2 M

j=0 k=0

M−1 X Z Ts j=0

0

M−1 M−1 Z Ts

1 X X si (t) sj (t) dt + 2 M j=0 k=0

sj (t) sk (t) dt

0

M−1 M−1

2 X 1 X X Es δ ij + 2 Es δ jk , δ jk = 0, j 6= k and 1, j = k M M j=0

j=0 k=0

M−1 X

2 1 2 1 Es + 2 Es + Es Es = Es − M M M M j=0 ¶ µ M −1 1 = Es = Es 1 − M M = Es −

b. The correlation coefficient between two different signals is ⎡ ⎤" # Z Ts M−1 M−1 X X 1 1 1 ⎣sm (t) − sj (t)⎦ sn (t) − sk (t) dt, m 6= n ρmn = Es0 0 M M j=0 k=0 # P P Z Ts " M−1 1 1 sn (t) j=0 sj (t) − M sm (t) M−1 sk (t) sm (t) sn (t) − M 1 k=0 P PM−1 = dt Es0 0 + M12 M−1 j=0 k=0 sj (t) sk (t) ⎡ RT ⎤ R s 1 PM−1 Ts s (t) s (t) dt − s (t) s (t) dt m n n j j=0 0 RM 1 ⎢ 0 ⎥ 1 PM−1 Ts = − s (t) s ⎣ ⎦ m k (t) dt k=0 M 0 R Es0 P P M−1 Ts + M12 M−1 s (t) s (t) dt j k j=0 k=0 0 ⎡ ⎤ M−1 M−1 M−1 M−1 1 ⎣ 1 X 1 X 1 X X = Es δ nj − Es δ nk + 2 Es δ jk ⎦ Es δ mn − Es0 M M M j=0 j=0 k=0 k=0 ¸ ∙ ¸ ∙ M Es Es Es Es 1 − + = − 0− = Es0 M M M Es (M − 1) M 1 , m 6= n = − M −1 c. Simply solve for Es in the above expression and substitute for Es in the orthogonal


29 signaling result. This gives s ¶!#)M−1 −v2 /2 µ Z ∞( " Ã 2Es0 e M √ Pc, simplex = 1 − dv Q − v+ N M − 1 2π 0 −∞ d. The union bound approximation for the symbol error probability for coherent orthogonal signaling is given by (9.67) as Ãr ! Es Ps, orthog ' (M − 1) Q N0 Using the substitution used in part c, we get Ãs Ps, simplex ' (M − 1) Q

Es0 N0

µ

M M −1

¶!

The bit error probability can be approximated as Ãs ¶! µ log2 (M ) Eb M M Pb, simplex ' Q 2 N0 M −1 Plots are given in Fig. 10.2. Problem 10.26 a. Use the basis set r

2 cos ω i t, 0 ≤ t ≤ T T r 2 sin ωi t, 0 ≤ t ≤ T, i = 1, 2, · ··, M φsi (t) = T φci (t) =

Base the decision on Z = (Zc1 , Zs1 , ..., ZcM , ZsM ) where Zci = (y, φci ) and Zsi = (y, φsi ). Suppose that the ith hypothesis is true. Then the components of Z are p Ej Gcj + Ncj , i = j Zci = Zsi

= Ncj , i 6= j p = Ej Gsj + Nsj , i = j = Nsj , i 6= j


CHAPTER 10. OPTIMUM RECEIVERS AND SIGNAL SPACE CONCEPTS

Figure 10.2: Bit error probability Pb versus Eb/N0 in dB for M-ary coherent FSK (left) and M-ary simplex signaling (right), M = 2, 4, 8, 16, 32.


31 where Gcj = Gj cos θ and Gsj = −Gj sin θ are independent Gaussian random variables with variances σ 2 and Ncj , Nsj are independent Gaussian random variables with variances N0 /2. The pdf of Z given hypothesis Hi is true is ⎛ ¶ µ 2 2 z +z 1 exp ⎝− fZ |Hi (z|Hi ) = exp − ci 2 si Ei σ + N0 N0

⎞ M X ¡ 2 ¢ £ ¡ ¢ ¤ 2 ⎠ zci + zsi / π M Ei σ 2 + N0 N0M

j=1, j6=i

Decide hypothesis i is true if

fZ |Hi (z|Hi ) ≥ fZ |Hj , all j This constitutes the maximum likelihood decision rule since the hypotheses are equally probable. Alternatively, we can take the natural log of both sides and use that as a test. This reduces to computing

¡ 2 ¢ Ei σ 2 2 zci + zsi 2 Ei σ + N0 and choosing the signal corresponding to the largest. If the signals have equal energy, then the optimum receiver is seen to be an M -signal replica of the one shown in Fig. 10.8a.

q 2 + Z 2 is Rayleigh distributed b. Assume that the Ei s are equal. Note that Zi = Zci si given Hi . Without loss of genarality assume that H1 is given. Then

fZ1 |H1 (z1 | H1 ) = fZi |H1 (zi | H1 ) =

£ ¡ ¢¤ z1 exp −z12 /2 Eσ2 + N0 , z1 ≥ 0 2 Eσ + N0 ¤ £ zi exp −zi2 /2N0 , zi ≥ 0, i 6= 1 N0


32

CHAPTER 10. OPTIMUM RECEIVERS AND SIGNAL SPACE CONCEPTS The probability of correct decision is Pc = E {Pr [Zi < Z1 , all i = 2, 3, ..., M ]} ¸M−1 ∙Z z1 Z ∞ ¢¤ ¤ £ 2 ¡ 2 £ 2 z1 zi exp −z1 /2 Eσ + N0 exp −zi /2N0 dzi dz1 = Eσ 2 + N0 N0 0 0 £ ¡ ¢¤ Z ∞ ¡ ¢¤M−1 z1 exp −z12 /2 Eσ 2 + N0 £ = dz1 , use binomial theorem 1 − exp −z12 /2N0 2 Eσ + N0 0 ¢¤ £ ¡ Z ∞ z1 exp −z12 /2 Eσ 2 + N0 = Eσ2 + N0 0 "M−1 µ #M−1 ¶ X ¡ ¢ M −1 exp − (M − 1 − i) z12 /2N0 × dz1 i i=0 M−1 X µ M − 1 ¶Z ∞ ¡ ¢ 1 2 = z exp −K z dz1 1 i 1 i Eσ2 + N0 0 i=0 ¡ ¢ M−1 X µ M −1 ¶ N0 Eσ2 + N0 1 = i Eσ2 + N0 N0 + (Eσ2 + N0 ) (M − 1 − i) i=0 M−1 X µM − 1¶ N0 = 2 i N0 + (Eσ + N0 ) (M − 1 − i) i=0 ¶ µ M−1 X M −1 1 = = 1 − PE 2 i 1 + (Eσ /N0 + 1) (M − 1 − i) i=0

where Ki ,

M −1−i 1 + 2 (Eσ 2 + N0 ) 2N0

Problem 10.27 a. It can be shown that under hypothesis H1 , Y1 is the sum of squares of 2N independent Gaussian random variables each having mean zero and variance σ 211 = 2

E 2 N0 σ + N 2

and Y2 is the sum of squares of 2N independent Gaussian random variables each having zero mean and variance N0 σ 221 = 2


33 Therefore, from Problem 5.37, the pdfs under hypothesis H1 are 2

y N−1 e−y1 /2σ11 , y1 ≥ 0 fY1 (y1 |H1 ) = 1 N N 2 σ 11 Γ (N ) and fY2 (y2 |H1 ) =

y2N−1 e−y2 /N0 2N (N0 /2)N Γ (N )

, y2 ≥ 0

b. The probability of error is PE = Pr (E|H1 ) = Pr (E|H2 ) = Pr (Y2 > Y1 |H1 ) ¸ Z ∞ ∙Z ∞ = fY2 (y2 |H1 ) dy2 fY1 (y1 |H1 ) dy1 y1

0

Using integration by parts, it can be shown that I (x; a) =

Z ∞

n −az

z e

−ax

dz = e

x

n X i=0

n! i!an+1−i

xi

This can be used to get the formula given in the problem statement. c. Plots are given for N = 1 − 13 in steps of 2 in Fig. 10.3. Problem 10.28 a. The characteristic function is found and used to obtain the moments. By definition, Z ∞ £ jΛω ¤ β m −βλ m−1 = e ejλω λ dλ Φ (jω) = E e Γ (m) 0 To integrate, note that

Z ∞ 0

ejλω e−βλ dλ =

1 β − jω

Differentiate both sides m − 1 times with respect to β. This gives Z ∞ (−1)m−1 (m − 1)! ejλω (−1)m−1 λm−1 e−βλ dλ = (β − jω)m 0 Multiply both sides by β m and cancel like terms to get Z ∞ β m −βλ m−1 βm e ejλω λ dλ = Γ (m) (β − jω)m 0


CHAPTER 10. OPTIMUM RECEIVERS AND SIGNAL SPACE CONCEPTS

Figure 10.3: PE for FSK signaling with diversity in Rayleigh fading, N = 1 to 13 paths in steps of 2 (PE versus SNR in dB).

20


35 Thus

βm (β − jω)m Differentiating this with respect to ω, we get Φ (jω) =

E [Λ] = jΦ0 (0) − The variance is

£ ¤ m m (m + 1) and E Λ2 = j 2 Φ00 (0) = β β

¡ ¢ m Var (Λ) = E Λ2 − E 2 (Λ) = 2 β

b. Use Bayes’ rule. But first we need fZ (z), which is obtained as follows: Z ∞ fZ (z) = fZ|Λ (z|λ) fΛ (λ) dλ Z0 ∞ β m −βλ m−1 = e λe−λz λ dλ Γ (m) 0 Z ∞ m+1 mβ m (β+z)λ (β + z) λm dλ e = Γ (m + 1) (β + z)m+1 0 mβ m = (β + z)m+1 where the last integral is evaluated by noting that the integrand is a pdf and therefore integrates to unity. Using Bayes’ rule, we find that fΛ|Z (λ|z) =

λm (β + z)m+1 e−(β+z)λ Γ (m + 1)

Note that this is the same pdf as in part (a) except that β has been replaced by β + z and m has been replaced by m + 1. Therefore, we can infer the moments from part (a) as well. They are E [Λ|Z] =

m+1 m+1 and Var [Λ|Z] = β+Z (β + Z)2

c. Assume the observations are independent. Then fz1 z2 |Λ (z1, z2 |λ) = λ2 e−λ(z1 +z2 ) , z1 , z2 ≥ 0 The integration to find the joint pdf of Z1 and Z2 is similar to the procedure used to find the pdf of Z above with the result that fz1 z2 (z1, z2 ) =

m (m + 1) β m (β + z1 + z2 )m+2


36

CHAPTER 10. OPTIMUM RECEIVERS AND SIGNAL SPACE CONCEPTS Again using Bayes’ rule, we find that fΛ|z1 z2 (λ|z1 , z2 ) =

λm+1 e−λ(β+z1 +z2 ) (β + z1 + z2 )m+2 Γ (m + 2)

Since this is of the same form as in parts (a) and (b), we can extend the results there for the moments to E [Λ|Z1 , Z2 ] =

m+2 m+2 and V ar [Λ|Z1 , Z2 ] = β + Z1 + Z2 (β + Z1 + Z2 )2

d. By examining the pattern developed in parts. (a), (b), and (c), we conclude that λm+1+K e−λ(β+z1 +z2 +...+zK ) (β + z1 + z2 + . . . + zK )m+K fΛ|z1 z2 ...zK (λ|z1 , z2, . . . zK ) = Γ (m + 2) and E [Λ|Z1 , Z2 , . . . , ZK ] = var [Λ|Z1 , Z2 , . . . , ZK ] =

m+2 β + Z1 + Z2 + . . . + ZK m+K (β + Z1 + Z2 + . . . + ZK )2

Problem 10.29 The conditional mean and Bayes’ estimates are the same if: 1. C (x) is symmetric; 2. C (x) is convex upward; 3. fA|Z is symmetric about the mean. Armed with this, we find the following to be true: a. Same - all conditions are satisfied. b. Same - all conditions are satisfied; c. Condition 3 is not satisfied; d. Condition 2 is not satisfied.


Problem 10.30
The mean of the individual samples is A, as is the mean of a_hat_ML(Z). Therefore

var[a_hat_ML(Z)] = E{[(1/K) sum_{k=1}^{K} (Zk - A)]^2}
                 = (1/K^2) E[sum_{k=1}^{K} sum_{j=1}^{K} (Zk - A)(Zj - A)]
                 = (1/K^2) sum_{k=1}^{K} sum_{j=1}^{K} E[(Zk - A)(Zj - A)]

But, by the independence of the samples and the fact that Zk - A = Nk,

E[(Zk - A)(Zj - A)] = 0 for k not equal to j, and sigma_n^2 for k = j

Thus

var[a_hat_ML(Z)] = (1/K^2) sum_{k=1}^{K} sum_{j=1}^{K} sigma_n^2 delta_kj = (1/K^2) sum_{k=1}^{K} sigma_n^2 = sigma_n^2/K

Problem 10.31
a. The conditional pdf of the noise samples is

fZ(z1, z2, ..., zK | sigma_n^2) = (2 pi sigma_n^2)^(-K/2) exp[-sum_{i=1}^{K} zi^2/(2 sigma_n^2)]

The maximum likelihood estimate for sigma_n^2 maximizes this expression. It can be found by differentiating the pdf with respect to sigma_n^2 and setting the result equal to zero. Solving for sigma_n^2, we find the maximum likelihood estimate to be

sigma_hat_n^2 = (1/K) sum_{i=1}^{K} Zi^2

b. The variance of the estimate is

var(sigma_hat_n^2) = 2 sigma_n^4/K

c. Yes.

d. sum_{i=1}^{K} Zi^2 is a sufficient statistic.
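Both estimator variances quoted in Problems 10.30 and 10.31 (sigma_n^2/K for the sample mean and 2 sigma_n^4/K for the ML noise-variance estimate with the mean known) are easy to confirm by simulation; a suggested sketch, not part of the original solutions:

% Monte Carlo check of the estimator variances in Problems 10.30 and 10.31
K = 20; A = 1; sigma_n = 2; Ntrials = 1e5;
Z = A + sigma_n*randn(Ntrials, K);        % K samples per trial
a_hat = mean(Z, 2);                       % sample-mean estimate of A
v_hat = mean((Z - A).^2, 2);              % ML estimate of sigma_n^2 (mean known)
fprintf('var(a_hat): simulated %.4f, theory %.4f\n', var(a_hat),  sigma_n^2/K)
fprintf('var(v_hat): simulated %.4f, theory %.4f\n', var(v_hat), 2*sigma_n^4/K)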

Problem 10.32 We base our estimate on the observation Z0 , where Z0 = M0 + N where N is a Gaussian random variable of mean zero and variance N0 /2. The sample value M0 is Gaussian with mean m0 and variance σ 2m . The figure of merit is mean-squared error. Thus, we use a MAP estimate. It can be shown that the estimate for m0 is the mean value of M0 conditioned on Z0 because the random variables involved are Gaussian. That is, m b 0 = E (M0 |Z0 = z0 ) = ρz0 where

σm E (M0 Z0 ) p =q ρ= p var (M0 ) var (Z0 ) σ 2 + N0 m

2

It is of interest to compare this with the ML estimate, which is c m b 0,ML = Z0

which results if we have no a priori knowledge of the parameter. Note that the MAP estimate reduces to the ML estimate when σ 2m → ∞. Note also that for N0 → ∞ (i.e., small signal-to-noise ratio) we don’t use the observed value of Z0 , but instead estimate m0 as zero (its mean). Problem 10.33 a. Use the basis function set φ1 (t) =

r

2 cos ω c t and φ2 (t) = T

r

2 sin ω c t T

and trigonometric expansion of the received data as H1 : H2 :

y(t) = A cos θ cos ω c t − A sin θ sin ωc t + n (t) y(t) = A cos θ cos ω c t + A sin θ sin ωc t + n (t)


39 to write the data in vector form as H1 : H2 :

Ãr

! r T T A cos θ + N1 , − A sin θ + N2 Z = (Z1 , Z2 ) = 2 2 Ã r ! r T T A cos θ + N1 , A sin θ + N2 Z = (Z1 , Z2 ) = − 2 2

where N1 =

Z T 0

n (t) φ1 (t) dt and N2 =

Z T 0

n (t) φ2 (t) dt

Note that E [N1 ] = [N2 ] = 0 and var[N1 ] = var[N2 ] = N0 /2. Thus r

T A cos θ , A1 2 r T A sin θ , −A2 E [Z2 | H1 ] = − 2 r T A cos θ , −A1 E [Z1 | H2 ] = − 2 r T A sin θ , A2 E [Z1 | H2 ] = 2 E [Z1 | H1 ] =

Hence the conditional pdfs of the data, given θ and the particular hypothesis, are fZ|θ,H1 (z1 , z2 | θ, H1 ) = fZ|θ,H2 (z1 , z2 | θ, H2 ) =

½ i¾ 1 1 h (z1 − A1 )2 + (z2 + A2 )2 exp − πN0 N0 ½ i¾ 1 1 h 2 2 (z1 + A1 ) + (z2 − A2 ) exp − πN0 N0

b. Assume equally likely hypotheses so that fZ|θ (z1 , z2 | θ) = =

1 1 fZ|θ,H1 (z1 , z2 | θ, H1 ) + fZ|θ,H2 (z1 , z2 | θ, H2 ) 2 2 ⎡ n h io ⎤ 2 2 1 1 − A1 ) + (z2 + A2 ) 1 ⎣ exp n− N0 (z h io ⎦ 2πN0 + exp − 1 (z1 + A1 )2 + (z2 − A2 )2 N0


40

CHAPTER 10. OPTIMUM RECEIVERS AND SIGNAL SPACE CONCEPTS Expanding the exponents and taking out common terms results in ∙ µ ¶¸ 1 1 T 2 2 2 fZ|θ (z1 , z2 | θ) = exp − z1 + z2 + A 2πN0 N0 2 ½ ∙ ¸ ∙ ¸¾ 2 2 × exp (A1 z1 − A2 z2 ) + exp − (A1 z1 − A2 z2 ) N0 N0 ∙ ¸ µ ¶¸ ∙ 1 1 T 2 2 2 2 = exp − (A1 z1 − A2 z2 ) z1 + z2 + A cosh πN0 N0 2 N0 The maximum likelihood estimator for θ satisfies (the data variables are capitalized to signify actual data values)

From above

¯ ∂ ln fZ|θ (Z1 , Z2 | θ)¯θ=θ =0 ML ∂θ µ ¶ 1 T 2 2 2 ln fZ|θ (Z1 , Z2 | θ) = − ln (πN0 ) − Z1 + Z2 + A N0 2 ½ ∙ ¸¾ 2 + ln cosh (A1 Z1 − A2 Z2 ) N0

Taking the derivative with respect to θ and setting the result equal to 0 results in the condition "r #r T 2A T 2A tanh (Z1 cos θ − Z2 sin θ) (−Z1 sin θ − Z2 cos θ) = 0 2 N0 2 N0 Using definitions for Z1 and Z2 , which are r r Z T Z T 2 2 Z1 = cos ω c t dt and Z2 = sin ω c t dt y (t) y (t) T T 0 0 this can be put into the form ∙ ¸ Z T Z T 4z 4z tanh y (t) cos (ω c t + θ) dt −y (t) sin (ω c t + θ) dt = 0 AT 0 AT 0 where z = A2 T /2N0 . This implies a Costas loop type structure shown in the problem statement with integrators in place of the low pass filters and a tanh function in the leg with the cosine multiplication. Note that for z small tanh (x) ' x and the phase estimator becomes a Costas loop if the integrators are viewed as low pass filters.


41 c. The variance of the phase estimate is lower bounded by applying the Cramer-Rao bound which is " #−1 ∂ 2 fZ|θ var (θML ) ≥ E ∂θ2 The first derivative, from above, is ∙ ¸Z T Z ∂fZ|θ 2A T 2A = tanh y (t) cos (ωc t + θ) dt −y (t) sin (ωc t + θ) dt ∂θ N0 N0 0 0 Assume low SNR so that tanh (x) ' x. Under this assumption the second derivative, after some work, can be shown to be µ ¶2 Z T Z T ¡ ¢ ¡ ¢ ∂ 2 fZ|θ 2A y (t) y t0 cos ω c t − t0 dtdt0 2 = N0 ∂θ 0 0 After some development, it can be shown that ¤ ¤ £ ¡ ¢¤ £ ¡ ¢ £ ¡ ¢ = Pr [H1 ] E y (t) y t0 | H1 + Pr [H2 ] E y (t) y t0 | H2 E y (t) y t0 £ ¡ ¢ ¡ ¢¤ª 1 © E [A cos (ω c t + θ) + n (t)] A cos ωc t0 + θ + n t0 = 2 £ ¡ ¢ ¡ ¢¤ª 1 © + E [−A cos (ω c t + θ) + n (t)] −A cos ωc t0 + θ + n t0 2 2 ¡ ¢ N0 ¡ ¢ A cos ω c t − t0 + δ t − t0 = 2 2

so that " # µ ¶2 Z T Z T £ ¡ ¢¤ ¡ ¢ ∂ 2 fZ|θ 2A E y (t) y t0 cos ω c t − t0 dtdt0 E = 2 N0 ∂θ 0 0 µ ¶2 Z T Z T ∙ 2 ¸ ¡ ¢ N0 ¡ ¢ ¡ ¢ 2A A 0 0 cos ωc t − t + δ t − t cos ωc t − t0 dtdt0 = N0 2 2 0 0 µ ¶2 Z T Z T 2 ¡ ¢ 2A A N0 T cos2 ωc t − t0 dtdt0 + = N0 2 2 0 0 ¸ µ ¶2 ∙ 2 2 N0 T A T 2A + = N0 4 2 2

T Since the SNR is z = A 2N0 this result becomes " # ∂ 2 fZ|θ E = 4z (z + 1) ∂θ2


42

CHAPTER 10. OPTIMUM RECEIVERS AND SIGNAL SPACE CONCEPTS Thus, the variance of the phase estimate, for low SNR, is bounded by var (θML ) ≥

1 4z (z + 1)

Problem 10.34 £ ¤ £ ¤ ¡ ¢1/2 a. Note that cos cos−1 m = m and sin cos−1 m = 1 − m2 to write √ p √ s (t) = 2P m sin (ω c t + θ) ± 2P 1 − m2 cos (ω c t + θ) , ± sign due to data Use the basis functions

φ1 (t) =

r

2 cos ω c t and φ2 (t) = T

r

2 sin ω c t T

to write ³ ´ ³ ´ p p √ √ s (t) = P T m sin θ ± 1 − m2 cos θ φ1 (t)+ P T m cos θ ∓ 1 − m2 sin θ φ2 (t)

b. Use the above bases functions and base a decision on the vector ³ ´ ⎤ ⎡ √ √ P T m sin θ ± 1 − m2 cos θ + N1 , ⎦ ³ ´ Z = [Z1 , Z2 ] = ⎣ √ √ P T m cos θ ∓ 1 − m2 sin θ + N2 , [S1 + N1 , S2 + N2 ]

where N1 and N2 are independent Gaussian random variables with zero means and variances N0 /2 The pdf of Z conditioned on the signal sign and θ is ½ i¾ 1 1 h (Z1 − S1 )2 + (Z2 − S2 )2 exp − fZ |θ, ± (Z1 , Z2 | θ, ±) = πN0 N0 To find the pdf conditioned only on θ use the fact the Pr(+ bit) = Pr (− bit) = 1/2 to get

1 1 fZ |θ (Z1 , Z2 | θ) = fZ |θ, + (Z1 , Z2 | θ, +) + fZ |θ, − (Z1 , Z2 | θ, −) 2 2 After considerable simplification we obtain " √ # 2m P T fZ |θ (Z1 , Z2 | θ) = C exp (Z1 sin θ + Z2 cos θ) N0 # " √ √ 2 1 − m2 P T (Z1 cos θ − Z2 sin θ) × cosh N0


43 where C includes all terms independent of θ. The log-likelihood function is √ 2m P T (Z1 sin θ + Z2 cos θ) L (θ) = N0 # " √ √ 2 1 − m2 P T (Z1 cos θ − Z2 sin θ) + ln cosh N0 Note that Z1 sin θ + Z2 cos θ =

Z T

y (t)

0

Z1 cos θ − Z2 sin θ =

Z T 0

y (t)

r r

2 sin (ωc t + θ) dt T 2 cos (ω c t + θ) dt T

so that √ Z 2m 2P T L (θ) = y (t) sin (ω c t + θ) dt N0 "0 √ # √ Z 2 1 − m2 2P T + ln cosh y (t) cos (ω c t + θ) dt N0 0 = 0. Differentiating the above The maximum likelihood estimate satisfies ∂L(θ) ∂θ espression for L (θ) gives √ Z 2m 2P T ∂L (θ) = 0= y (t) cos (ω c t + θML ) dt ∂θ N0 0 # " √ √ Z 2 1 − m2 2P T y (t) cos (ω c t + θML ) dt − tanh N0 0 " √ # √ Z 2 1 − m2 2P T × y (t) sin (ω c t + θML ) dt N0 0 c. The block diagram follows the development of the one for Problem 10.33. The block diagram of the phase estimator has another arm that adds into the feedback to the VCO that acts as a phase-lock loop for the carrier component. Problem 10.35 Write the impulse response as µ ¶ t − T /2 1 h (t) = Π T T


44

CHAPTER 10. OPTIMUM RECEIVERS AND SIGNAL SPACE CONCEPTS and use the delay theorem to get H (f ) =

1 T sinc (T f ) e−jπf T = sinc (T f ) e−jπf T T

Thus the equivalent noise bandwidth is BN

10.1

1

Z ∞

|H (f )|2 df 2 Hmax Z ∞ 0 sinc2 (T f ) df = 0 Z 1 ∞ = sinc2 (u) du T 0 1 = 2T =

10.1 Computer Exercises

Computer Exercise 10.1

% file: ce10_1.m
%
% PF = Q(A) < 1e-3 => A > 3.08;  B = ln(eta)/d + d/2
% PD = Q(B) > 0.95 => B < -1.65; A = ln(eta)/d - d/2
% d = A - B > 4.73;
% ln(eta) = (A^2 - B^2)/2 > 3.38 => eta > 29.37
%
clf
clear all;
d = [4.73 5 5.5 6];
log_eta = 3.38:0.01:9;
lend = length(d);
hold on
for j = 1:lend
    dj = d(j);
    af = log_eta/dj + dj/2;
    ad = log_eta/dj - dj/2;
    pf = qfn(af);


Figure 10.4: Receiver operating characteristic (probability of detection versus probability of false alarm for d = 4.73, 5, 5.5, and 6).

    pd = qfn(ad);
    semilogy(pf, pd)
    text(pf(20), pd(20)-.01, ['d = ', num2str(dj)])
end
hold off
xlabel('Probability of False Alarm')
ylabel('Probability of Detection')
title('Receiver operating characteristic')

% This function computes the Gaussian Q-function
function Q = qfn(x)
Q = 0.5*erfc(x/sqrt(2));



Computer Exercise 10.2

% file: ce10_2.m
%
% From (10.171), sigmap^2/sigman^2 = (K + sigman^2/sigmaA^2)^-1
%
clf
clear all;
sigmaA2_sigman2 = 0.1:0.5:2.1;
Ls = length(sigmaA2_sigman2);
K = 1:25;
hold on
for l = 1:Ls
    sigma_ratio = sigmaA2_sigman2(l);
    sigmap2_sigman2 = 1./(K + 1./sigma_ratio);
    plot(K, sigmap2_sigman2)
    text(K(1), sigmap2_sigman2(1), ['\sigma_A^2/\sigma_n^2 = ', num2str(sigma_ratio)])
end
hold off
xlabel('K'), ylabel('\sigma_p^2/\sigma_n^2')

Computer Exercise 10.3

% file: ce10_3.m
% Simulation of a digital PLL
%
clf
clear all;
AA = char('-', ':', '--', '-.');
K = 1000;
epsilon = 0.1;
theta0 = 0.5*pi;
T = 1;
A = 1;
kk = 1:K;
sigma_n = [0.1 0.5 1];
Ln = length(sigma_n);
hold on
for n = 1:Ln
    sigma = sigma_n(n);


Figure 10.5: σ_p²/σ_n² versus K for σ_A²/σ_n² = 0.1, 0.6, 1.1, 1.6, and 2.1.


Figure 10.6: First-order digital phase tracking device performance for ε = 0.01 (θ(k) versus k for σ_n = 0.1, 0.5, and 1).

    Z1(1) = sqrt(T/2)*A*cos(theta0) + sigma*randn(1);
    Z2(1) = -sqrt(T/2)*A*sin(theta0) + sigma*randn(1);
    theta(1) = theta0;
    for k = 1:K-1
        theta(k+1) = theta(k) + epsilon*atan2(Z2(k), Z1(k));
        Z1(k+1) = sqrt(T/2)*A*cos(theta(k+1)) + sigma*randn(1);
        Z2(k+1) = -sqrt(T/2)*A*sin(theta(k+1)) + sigma*randn(1);
    end
    plot(kk, theta, AA(n,:))
end
hold off
xlabel('k'), ylabel('\theta(k), rad')
legend(['\sigma_n = ', num2str(sigma_n(1))], ['\sigma_n = ', num2str(sigma_n(2))], ...
    ['\sigma_n = ', num2str(sigma_n(3))])
title(['First-order digital phase tracking device performance for \epsilon = ', num2str(epsilon)])


Figure 10.7: First-order digital phase tracking device performance for ε = 0.1 (θ(k) versus k for σ_n = 0.1, 0.5, and 1).


Chapter 11

Information Theory and Coding

11.1 Problem Solutions

Problem 11.1 The information in the message is I (x) = − log2 (0.95) = 0.0740 bits

I (x) = − loge (0.95) = 0.0513 nats

I (x) = − log10 (0.95) = 0.0223 Hartleys

Problem 11.2 (a) I (x) = − log2 (52) = 5.7004 bits (b) I (x) = − log2 [(52)(52)] = 11.4009 bits (c) I (x) = − log2 [(52)(51)] = 11.3729 bits Problem 11.3 The entropy is H (x) = −0.35 log2 (0.35) − 0.25 log2 (0.25) −0.20 log2 (0.20) − 0.15 log2 (0.15)

−0.05 log2 (0.05) = 2.1211 bits/symbol The maximum entropy is H (x) = log2 5 = 2.3219 bits/symbol


Figure 11.1: Channel diagram for Problem 11.5.

Problem 11.4
The entropy is

H(X) = −2[0.25 log2(0.25)] − 0.20 log2(0.20) − 3[0.10 log2(0.10)] = 2.4610 bits/symbol

Problem 11.5
(a) The channel diagram is shown in Figure 11.1.
(b) The output probabilities are

p(y1) = (1/3)(0.7 + 0.2 + 0.1) = 10/30 = 0.3333
p(y2) = (1/3)(0.2 + 0.5 + 0.2) = 9/30 = 0.3000
p(y3) = (1/3)(0.1 + 0.3 + 0.7) = 11/30 = 0.3667

(c) Since [P (Y )] = [P (X)] [P (Y |X)]



we can write [P(X)] = [P(Y)][P(Y|X)]^(-1), which is

[P(X)] = [0.3333 0.3333 0.3333] [  1.6111  −0.6667   0.0556
                                  −0.6111   2.6667  −1.0556
                                  −0.0556  −0.6667   1.7222 ]

This gives

[P(X)] = [0.3148 0.4444 0.2407]

(d) The joint probability matrix is

[P(X; Y)] = [ 0.3148  0       0
              0       0.4444  0
              0       0       0.2407 ] [ 0.7  0.2  0.1
                                         0.2  0.5  0.3
                                         0.1  0.2  0.7 ]

which gives

[P(X; Y)] = [ 0.2204  0.0630  0.0315
              0.0889  0.2222  0.1333
              0.0241  0.0481  0.1685 ]

Note that the column sums give the output probabilities [P(Y)], and the row sums give the input probabilities [P(X)].

Problem 11.6
For a noiseless channel, the transition probability matrix is a square matrix with 1s on the main diagonal and zeros elsewhere. The joint probability matrix is a square matrix with the input probabilities on the main diagonal and zeros elsewhere. In other words,

[P(X; Y)] = [ p(x1)  0      ···  0
              0      p(x2)  ···  0
              ⋮       ⋮      ⋱    ⋮
              0      0      ···  p(xn) ]
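The matrix manipulations in Problem 11.5(c) and (d) are easy to verify numerically; the following MATLAB fragment is a minimal sketch (variable names are illustrative only):

% Verification of Problem 11.5(c) and (d)
PYgX = [0.7 0.2 0.1; 0.2 0.5 0.3; 0.1 0.2 0.7];  % transition matrix [P(Y|X)]
PY   = [1 1 1]/3;                                % equally likely outputs
PX   = PY/PYgX                                   % [P(X)] = [P(Y)]*inv([P(Y|X)])
PXY  = diag(PX)*PYgX                             % joint matrix [P(X;Y)]
sum(PXY,1), sum(PXY,2)                           % column sums = [P(Y)], row sums = [P(X)]'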

Problem 11.7
It was shown that the cascade of two BSCs has the channel matrix

[ 1−α   α  ] [ 1−β   β  ]   [ (1−α)(1−β) + αβ      α(1−β) + β(1−α)  ]
[  α   1−α ] [  β   1−β ] = [ α(1−β) + β(1−α)      αβ + (1−α)(1−β)  ]

which is (11.24) modified so that the two channels are symmetric. It is clear that the cascade of two BSCs is a BSC. Consider now three channels ABC, where A, B, and C are binary symmetric. We have shown that the cascade of A and B, denoted (AB), is binary symmetric. Therefore, the cascade of (AB) and C is binary symmetric. Continuing this process shows, by induction, that the cascade of an arbitrary number of BSCs is a BSC.

Problem 11.8
This problem may be solved by raising the channel matrix

A = [ 0.995  0.005
      0.005  0.995 ]

which corresponds to an error probability of 0.005, to increasing powers n and seeing where the error probability reaches the critical value of 0.1. Consider the MATLAB program

a = [0.995 0.005; 0.005 0.995];   % channel matrix
n = 1;                            % initial value
a1 = a;                           % save a
while a1(1,2) < 0.1
    n = n+1;
    a1 = a^n;
end
n-1                               % display result

Executing the program yields n − 1 = 22. Thus we compute (MATLAB code is given)

>> a^22
ans =
    0.9008    0.0992
    0.0992    0.9008
>> a^23
ans =
    0.8968    0.1032
    0.1032    0.8968

Thus 22 cascaded channels meet the specification for PE < 0.1. However, cascading 23 channels yields PE > 0.1 and the specification is not satisfied. (Note: This may appear to be an impractical result since such a large number of cascaded channels is specified. However, the channel A may represent a lengthy cable with a large number of repeaters. There are a variety of other practical examples.)
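The same conclusion follows from the closed form for n identical cascaded BSCs, pn = [1 − (1 − 2p)^n]/2. The short MATLAB check below compares that closed form against the matrix-power approach for the p = 0.005 value used above (a sketch, not part of the original solution):

% Closed-form error probability of n cascaded BSCs: pn = (1 - (1-2p)^n)/2
p = 0.005;
a = [1-p p; p 1-p];
for n = [22 23]
    an = a^n;                         % matrix-power result
    pn = (1 - (1-2*p)^n)/2;           % closed-form result
    fprintf('n = %2d: matrix power %.4f, closed form %.4f\n', n, an(1,2), pn);
end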



Problem 11.9
Note that P(Y) is given by

[P(Y)] = [ p(x1)  p(x2) ] [ 3/4  1/4  0
                            0    0    1 ] = [ 0.75 p(x1)   0.25 p(x1)   p(x2) ]

For this problem note that H(X|Y) = 0 (each Y results from only one X) so that I(X; Y) = H(X). It therefore follows that C = 1 bit/symbol and that the capacity is achieved for

p(x1) = p(x2) = 1/2

Note that cascading this channel with a channel with 3 inputs and 2 outputs defined by

[ 1  0
  1  0
  0  1 ]

gives a channel with 2 inputs and 2 outputs having the channel matrix

[ 3/4  1/4  0 ] [ 1  0       [ 1  0
  0    0    1 ]   1  0     =   0  1 ]
                  0  1 ]

which is a noiseless channel having a capacity of 1 bit/symbol.

Problem 11.10
For the erasure channel we determine I(X; Y) = H(X) − H(X|Y), where

H(X|Y) = −Σ_{i=1}^{2} Σ_{j=1}^{3} p(xi, yj) log2 p(xi|yj) = −Σ_{i=1}^{2} Σ_{j=1}^{3} p(yj|xi) p(xi) log2 [ p(yj|xi) p(xi) / p(yj) ]

Let p(x1) = α and p(x2) = 1 − α. This gives

p(y1) = α(1 − p)
p(y2) = αp + (1 − α)p = p
p(y3) = (1 − α)(1 − p)



Thus

H(X|Y) = −α(1 − p) log2[α(1 − p)/(α(1 − p))] − αp log2(αp/p) − (1 − α)p log2[(1 − α)p/p] − (1 − α)(1 − p) log2[(1 − α)(1 − p)/((1 − α)(1 − p))]

This is

H(X|Y) = −αp log2(α) − (1 − α)p log2(1 − α) = −p[α log2(α) + (1 − α) log2(1 − α)]

Since H(X) = −α log2(α) − (1 − α) log2(1 − α), we have

I(X; Y) = H(X) − pH(X) = H(X)(1 − p)

Therefore

C = 1 − p bits/symbol

and capacity is achieved for equally likely inputs.

Problem 11.11
The cascade of the two channels has the channel matrix

P(Y|X) = [ 1−p1   p1  ] [ 1−p2   p2   0
            p1   1−p1 ]    0     p2  1−p2 ]

This gives

P(Y|X) = [ (1−p1)(1−p2)   (1−p1)p2 + p1 p2   p1(1−p2)
           p1(1−p2)       p1 p2 + (1−p1)p2   (1−p1)(1−p2) ]

which can be written

P(Y|X) = [ (1−p1)(1−p2)   p2   p1(1−p2)
           p1(1−p2)       p2   (1−p1)(1−p2) ]

The output probability for y2 is p(y2) = p2. Thus, the probability of y2 is independent of the input, so the cascade is a more general erasure channel in that p(y1|x2) and p(y3|x1) are not zero. We will encounter a channel like this when feedback channels are encountered in Section 11.6.3.

Problem 11.12



The capacity of the channel described by the transition probability matrix

[ p  q  0  0
  q  p  0  0
  0  0  p  q
  0  0  q  p ]

is easily computed by maximizing

I(X; Y) = H(Y) − H(Y|X)

The conditional entropy H(Y|X) can be written

H(Y|X) = −Σ_i Σ_j p(xi) p(yj|xi) log2 p(yj|xi) = Σ_i p(xi) H(Y|xi)

where

H(Y|xi) = −Σ_j p(yj|xi) log2 p(yj|xi)

For the given channel

H(Y|xi) = −p log2 p − q log2 q = K

so that H(Y|xi) is a constant independent of i. This results since each row of the matrix contains the same set of probabilities, although the terms are not in the same order. For such a channel

H(Y|X) = Σ_i p(xi) K = K

and

I(X; Y) = H(Y) − K

We see that capacity is achieved when each channel output occurs with equal probability. We now show that equally likely outputs result if the inputs are equally likely. Assume that p(xi) = 1/4 for all i. Then

p(yj) = Σ_i p(xi) p(yj|xi)

For each j

p(yj) = (1/4) Σ_i p(yj|xi) = (1/4)(p + q) = (1/4)[p + (1 − p)] = 1/4

so that

p(yj) = 1/4,   all j



Figure 11.2: Figure for Problem 11.12.

Since p(yj) = 1/4 for all j, H(Y) = 2, and the channel capacity is

C = 2 + p log2 p + (1 − p) log2(1 − p)

or

C = 2 − H(p)

This is shown in Figure 11.2. This channel can be modeled as a pair of binary symmetric channels as shown in Figure 11.3. If p = 1 or p = 0, each input uniquely determines the output and the channel reduces to a noiseless channel with four inputs and four outputs. This gives a capacity of log2 4 = 2 bits/symbol. If p = q = 1/2, then each of the two subchannels (Channel 1 and Channel 2) has zero capacity. However, x1 and x2 can be viewed as a single input and x3 and x4 can be viewed as a single input as shown in Figure 11.3. The result is equivalent to a noiseless channel with two inputs and two outputs. This gives a capacity of log2 2 = 1 bit/symbol. Note that this channel is an example of a general symmetric channel. The capacity of such channels is easily computed. See Gallager (Information Theory and Reliable Communication, Wiley, 1968, pages 91-94) for a discussion of these channels.

Problem 11.13
To show (11.30), write p(xi, yj) in (11.29) as p(xi|yj) p(yj) and recall that

log2 p(xi, yj) = log2 p(xi|yj) + log2 p(yj)

Evaluating the resulting sum yields (11.30). The same method is used to show (11.31) except that p(xi, yj) is written p(yj|xi) p(xi).
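Returning to the capacity expression of Problem 11.12, the curve of Figure 11.2 is easily reproduced numerically; the fragment below is a minimal MATLAB sketch (the grid size and the eps guard against log2(0) are implementation choices):

% Capacity of the Problem 11.12 channel: C = 2 - H(p)
p = linspace(eps, 1-eps, 500);          % avoid log2(0) at the endpoints
H = -p.*log2(p) - (1-p).*log2(1-p);     % binary entropy function
C = 2 - H;                              % channel capacity in bits/symbol
plot(p, C), grid on
xlabel('p'), ylabel('C - bits/symbol')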



Figure 11.3: Additional illustration for Problem 11.12.

Problem 11.14
The entropy is maximized when each quantizing region occurs with probability 0.25. Thus

∫₀^{x1} a e^(−ax) dx = 1 − e^(−a x1) = 1/4

which gives

e^(−a x1) = 3/4   or   x1 = (1/a) ln(4/3) = 0.2877/a

In a similar manner

∫_{x1}^{x2} a e^(−ax) dx = e^(−a x1) − e^(−a x2) = 3/4 − e^(−a x2) = 1/4

which gives

e^(−a x2) = 1/2   or   x2 = (1/a) ln(2) = 0.6931/a

Also

∫_{x2}^{x3} a e^(−ax) dx = 1/4

gives x3 = (1/a) ln(4) = 1.3863/a
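The three thresholds can also be read directly off the inverse CDF of the exponential density; the short MATLAB check below is a sketch with a = 1 assumed:

% Equiprobable quantizer thresholds for the exponential pdf a*exp(-a*x), a = 1
a = 1;
Finv = @(u) -log(1-u)/a;          % inverse CDF of the exponential distribution
x = Finv([0.25 0.50 0.75])        % expected: 0.2877, 0.6931, 1.3863 (each divided by a)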

Problem 11.15
The thresholds are now parameterized in terms of σ. We have

∫₀^{x1} (r/σ²) e^(−r²/2σ²) dr = 1 − e^(−x1²/2σ²) = 1/4

which gives

x1²/(2σ²) = ln(4/3)   or   x1 = σ √(2 ln(4/3)) = 0.7585σ

For x2 we have

∫_{x1}^{x2} (r/σ²) e^(−r²/2σ²) dr = e^(−x1²/2σ²) − e^(−x2²/2σ²) = 3/4 − e^(−x2²/2σ²) = 1/4

so that

x2 = σ √(2 ln 2) = 1.1774σ

Finally, for x3 we have

∫_{x2}^{x3} (r/σ²) e^(−r²/2σ²) dr = e^(−x2²/2σ²) − e^(−x3²/2σ²) = 1/2 − e^(−x3²/2σ²) = 1/4

so that

x3 = σ √(2 ln 4) = 1.6651σ

As a check we compute

∫_{x3}^{∞} (r/σ²) e^(−r²/2σ²) dr = e^(−x3²/2σ²) − 0 = e^(−ln 4) = 1/4

which is the correct result.

Problem 11.16
First we compute the probabilities of the six messages. Using the Gaussian Q-function

Q(x/σ) = (1/√(2π)σ) ∫_x^∞ e^(−y²/2σ²) dy = (1/√(2π)) ∫_{x/σ}^∞ e^(−y²/2) dy

gives

p(m3) = (1/√(2π)σ) ∫₀^σ e^(−y²/2σ²) dy = Q(0) − Q(1) = 0.3413
p(m4) = (1/√(2π)σ) ∫_σ^{2σ} e^(−y²/2σ²) dy = Q(1) − Q(2) = 0.1359
p(m5) = (1/√(2π)σ) ∫_{2σ}^∞ e^(−y²/2σ²) dy = Q(2) − Q(∞) = Q(2) = 0.0228

By symmetry

p(m0) = p(m5) = 0.0228
p(m1) = p(m4) = 0.1359
p(m2) = p(m3) = 0.3413

The entropy at the quantizer output is

H(X) = −2[0.3413 log2 0.3413 + 0.1359 log2 0.1359 + 0.0228 log2 0.0228] = 2.09 bits/symbol

Since the symbol (sample) rate is 500 samples/s, the information rate is

r = 500 H(X) = 1045 bits/s
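These numbers are easy to reproduce with erfc-based Q-function calls; the following MATLAB fragment is a minimal sketch:

% Quantizer output probabilities and entropy for Problem 11.16
Q = @(x) 0.5*erfc(x/sqrt(2));            % Gaussian Q-function
p = [Q(2), Q(1)-Q(2), Q(0)-Q(1)];        % p(m0), p(m1), p(m2)
p = [p, fliplr(p)];                      % all six probabilities by symmetry
H = -sum(p.*log2(p))                     % about 2.09 bits/symbol
R = 500*H                                % information rate in bits/s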

Problem 11.17
Five thresholds are needed to define the six quantization regions. The five thresholds are −k2, −k1, 0, k1, and k2. The thresholds are selected so that the six quantization regions are equally likely. For k1 we have

1/6 = (1/√(2π)σ) ∫₀^{k1} e^(−y²/2σ²) dy = 1/2 − (1/√(2π)σ) ∫_{k1}^∞ e^(−y²/2σ²) dy = 1/2 − Q(k1/σ)

so that

Q(k1/σ) = 1/2 − 1/6 = 1/3

Finding the inverse Q-function yields

k1/σ = Q⁻¹(1/3) = 0.4307

For k2 we have

1/6 = (1/√(2π)σ) ∫_{k2}^∞ e^(−y²/2σ²) dy = Q(k2/σ)

Taking the inverse Q-function,

k2/σ = Q⁻¹(1/6) = 0.9674

and therefore k1 = 0.4307σ and k2 = 0.9674σ. This gives the quantizing rule

Quantizer Input                  Quantizer Output
−∞ < X < −0.9674σ                m0
−0.9674σ < X < −0.4307σ          m1
−0.4307σ < X < 0                 m2
0 < X < 0.4307σ                  m3
0.4307σ < X < 0.9674σ            m4
0.9674σ < X < ∞                  m5
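The two threshold constants follow directly from the inverse Q-function, which MATLAB exposes through erfcinv; a short sketch:

% Thresholds for a six-level equiprobable Gaussian quantizer
Qinv = @(p) sqrt(2)*erfcinv(2*p);   % inverse Gaussian Q-function
k1 = Qinv(1/3)                      % about 0.4307 (times sigma)
k2 = Qinv(1/6)                      % about 0.9674 (times sigma)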

Problem 11.18
The transition probabilities for the cascade of Channel 1 and Channel 2 are

[ p11  p12 ]   [ 0.9  0.1 ] [ 0.75  0.25 ]   [ 0.7  0.3 ]
[ p21  p22 ] = [ 0.1  0.9 ] [ 0.25  0.75 ] = [ 0.3  0.7 ]

Therefore, the capacity of the overall channel is

C = 1 + 0.7 log2 0.7 + 0.3 log2 0.3 = 0.1187 bits/symbol

The capacity of Channel 1 is

C = 1 + 0.9 log2 0.9 + 0.1 log2 0.1 = 0.5310 bits/symbol

and the capacity of Channel 2 is

C = 1 + 0.75 log2 0.75 + 0.25 log2 0.25 = 0.1887 bits/symbol

It can be seen that the capacity of the cascade channel is less than the capacity of either channel taken alone, which is to be expected since each channel causes errors and the error probability on the cascade channel is greater than the error probability of either of the individual channels.



Problem 11.19
For BPSK, the error probability is Q(√(2z)). From the data given,

Uplink error probability = Q(√(2 × 10^(8/10))) = 0.000191
Downlink error probability = Q(√(2 × 10^(5/10))) = 0.0060

Using the techniques of the previous problem,

[ p11  p12 ]   [ 0.999809  0.000191 ] [ 0.9940  0.0060 ]   [ 0.9938  0.0062 ]
[ p21  p22 ] = [ 0.000191  0.999809 ] [ 0.0060  0.9940 ] = [ 0.0062  0.9938 ]

so that PE = 0.0062. Note that for all practical purposes the performance of the overall system is established by the downlink performance.

Problem 11.20
The source entropy is

H(X) = −(3/4) log2(3/4) − (1/4) log2(1/4) = 0.8113

and the entropy of the fourth-order extension is

H(X⁴) = 4H(X) = 4(0.8113) = 3.2451

The second way to determine the entropy of the fourth-order extension is to determine the probability distribution of the extended source. We have

1 symbol with probability 81/256
4 symbols with probability 27/256
6 symbols with probability 9/256
4 symbols with probability 3/256
1 symbol with probability 1/256


Thus

H(X⁴) = −(81/256) log2(81/256) − 4(27/256) log2(27/256) − 6(9/256) log2(9/256) − 4(3/256) log2(3/256) − (1/256) log2(1/256)
      = 3.2451 bits/symbol

Problem 11.21
For the fourth-order extension we have

Source Symbol    P(·)      Codeword
AAAA             0.6561    0
BAAA             0.0729    100
ABAA             0.0729    101
AABA             0.0729    110
AAAB             0.0729    1110
AABB             0.0081    111100
ABAB             0.0081    1111010
BAAB             0.0081    1111011
ABBA             0.0081    1111100
BABA             0.0081    1111101
BBAA             0.0081    1111110
ABBB             0.0009    111111100
BABB             0.0009    111111101
BBAB             0.0009    111111110
BBBA             0.0009    1111111110
BBBB             0.0001    1111111111

The average wordlength is L = 1.9702, which gives L/4 = 1.9702/4 = 0.4926. The efficiencies are given in the following table:

n    L         H(Xⁿ)     Efficiency
1    1         0.4690    46.90%
2    1.29      0.9380    72.71%
3    1.598     1.4070    88.05%
4    1.9702    1.8760    95.22%

Problem 11.22



For the Shannon-Fano code we have

Source Symbol    P(·)    Codeword
m1               0.2     00
m2               0.2     01
m3               0.2     10
m4               0.2     110
m5               0.2     111

The entropy is

H(X) = −5(0.2 log2 0.2) = 2.3219

The average wordlength is

L = (0.2)(2 + 2 + 2 + 3 + 3) = 2.4

The efficiency is therefore given by

H(X)/L = 2.3219/2.4 = 0.9675 = 96.75%

The diagram for determining the Huffman code is illustrated in Figure 11.4. This diagram yields the codewords

m1  01
m2  10
m3  11
m4  000
m5  001

The average wordlength is

L = 0.2(2 + 2 + 2 + 3 + 3) = 2.4

Since the average wordlength is the same as for the Shannon-Fano code, the efficiency is also the same.

Problem 11.23
The codewords for the Shannon-Fano and Huffman codes are summarized in the following table:

Source Symbol    P(·)     Shannon-Fano    Huffman
m1               0.40     00              1
m2               0.19     01              000
m3               0.16     10              001
m4               0.15     110             010
m5               0.10     111             011


16

Figure 11.4: Huffman coding procedure for Problem 11.22. The average wordlengths are: L = 2.25

(Shannon-Fano code)

L = 2.22

(Huffman code)

Note that the Huffman code gives the shorter average wordlength and therefore the higher efficiency. Problem 11.24 The entropy is H (X) = −0.85 log2 0.85 − 0.15 log2 0.15 = 0.60984 bits/symbol

We know that lim r

n→∞

L ≤ 300 n

and that lim

L

n→∞ n

= H (X)


11.1. PROBLEM SOLUTIONS

17

Therefore, for n large rH (X) ≤ 300 or r≤

300 300 = H (X) 0.60984

which gives r ≤ 491.932 symbols/s

Problem 11.25 The Shannon-Fano and Huffman codes are as follows. (Note that the codes are not unique.)

Source Symbol m1 m2 m3 m4 m5 m6 m7 m8 m9

Codewords Shannon-Fano Huffman 000 001 001 010 010 011 011 100 100 101 101 110 110 111 1110 0000 1111 0001

In both cases, there are seven codewords of length three and two codewords of length four. Thus, both codes have the same average wordlengths and therefore have the same efficiencies. The average wordlength is L=

29 1 [7 (3) + 2 (4)] = = 3.2222 9 9

Since the source entropy is H (X) = log2 9 = 3.1699 the efficiency is E=

Problem 11.26

H (X) = 98.38% L



For the Shannon-Fano code we have Source Symbol m1 m2 m3 m4 m5 m6 m7 m8 m9 m10 m11 m12

P (·) 1/12 1/12 1/12 1/12 1/12 1/12 1/12 1/12 1/12 1/12 1/12 1/12

Codeword 000 0010 0011 010 0110 0111 100 1010 1011 110 1110 1111

The average wordlength is 44 1 [4 (3) + 8 (4)] = = 3.6667 12 12

L= and the entropy of the source is

H (X) = log2 12 = 3.5850 This gives the efficiency H (X) = 0.9777 = 97.77% L The diagram for determining the Huffman code is illustrated in Figure 11.5. This yields the codewords: Source Symbol Codeword m1 100 101 m2 110 m3 m4 111 0000 m5 0001 m6 0010 m7 0011 m8 m9 0100 0101 m10 0110 m11 0111 m12 E=



Figure 11.5: Huffman procedure for Problem 11.26.



As in the case of the Shannon-Fano code, we have four codewords of length three and eight codewords of length four. The average wordlength is therefore 3.6667 and the efficiency is 97.77%.

Problem 11.27
The probability of message mk is

P(mk) = ∫_{0.1(k−1)}^{0.1k} 2x dx = 0.01(2k − 1)

A Huffman code yields the codewords in the following table:

Source Symbol    P(·)     Codeword
m10              0.19     11
m9               0.17     000
m8               0.15     010
m7               0.13     011
m6               0.11     100
m5               0.09     101
m4               0.07     0011
m3               0.05     00100
m2               0.03     001010
m1               0.01     001011

The source entropy is given by

H(X) = 3.0488 bits/source symbol

and the average wordlength is

L = 3.1000 binary symbols/source symbol

A source symbol rate of 250 symbols (samples) per second yields a binary symbol rate of

rL = 250(3.1000) = 775 binary symbols/second

and a source information rate of

Rs = rH(X) = 250(3.0488) = 762.2 bits/second
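The entropy, average wordlength, and rates quoted above are quick to confirm in MATLAB; the sketch below reads the codeword lengths from the table (an illustrative check, not part of the original solution):

% Check of the entropy, average wordlength, and rates in Problem 11.27
k = 1:10;
p = 0.01*(2*k - 1);                  % message probabilities 0.01, 0.03, ..., 0.19
len = [6 6 5 4 3 3 3 3 3 2];         % Huffman codeword lengths for m1 ... m10
H = -sum(p.*log2(p))                 % about 3.0488 bits/source symbol
L = sum(p.*len)                      % about 3.10 binary symbols/source symbol
disp([250*L 250*H])                  % binary symbol rate and information rate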

Problem 11.28



The source entropy is H (X) = −0.35 log2 0.35 − 0.3 log2 0.3 − 0.2 log2 0.2 − 0.15 log2 0.15 = 1.9261 bits/source symbol

and the entropy of the second-order extension of the source is ¡ ¢ H X 2 = 2H (X) = 3.8522

The second-order extension source symbols are shown in the following table along with the Shannon-Fano code. Source Symbol m1 m1 m1 m2 m2 m1 m2 m2 m1 m3 m3 m1 m2 m3 m3 m2 m4 m1 m1 m4 m4 m2 m2 m4 m2 m2 m3 m4 m4 m3 m4 m4

P (·) 0.1225 0.1050 0.1050 0.0900 0.0700 0.0700 0.0600 0.0600 0.0525 0.0525 0.0450 0.0450 0.0400 0.0300 0.0300 0.0225

Codeword 000 001 010 0110 0111 1000 1001 1010 1011 1100 1101 11100 11101 111100 111101 111110

The average wordlength of the extended source is L = 3(0.1225 + 0.105 + 0.105) +4 (0.09 + 0.07 + 0.07 + 0.06 + 0.06 + 0.0525 + 0.0525 + 0.045) +5 (0.045 + 0.04) +6(0.03 + 0.03 + 0.0225) = 3.9175 Thus the efficiency is E=

¡ ¢ H X2 L

=

3.8522 = 98.33% 3.9175



We have demonstrated the Huffman code previously. For the second-order extension of the given source we have 100

010

001

1111

1100

11100

11011

1011

1010

0111

for the first 8 codewords, and 0110

0001

11101

11010

00001

00000

for the last eight codewords. The average wordlength is 3.88000 and the efficiency is ¡ ¢ H X2 3.8522 E= = 99.28% = 3.88 L which is a small improvement over the Shannon-Fano code. Problem 11.29 The source entropy is H (X) = −0.35 log2 0.35 − 0.3 log2 0.3 − 0.2 log2 0.2 − 0.15 log2 0.15 = 1.9261 bits/source symbol

and the entropy of the second-order extension of the source is ¡ ¢ H X 2 = 2H (X) = 3.8522

The second-order extension source symbols are shown in the following table. For a D = 3 alphabet we partition into sets of 3. The set of code symbols is (0, 1, 2). Source Symbol m1 m1 m1 m2 m2 m1 m2 m2 m1 m3 m3 m1 m2 m3 m3 m2 m4 m1 m1 m4 m4 m2 m2 m4 m2 m2 m3 m4 m4 m3 m4 m4

P (·) 0.1225 0.1050 0.1050 0.0900 0.0700 0.0700 0.0600 0.0600 0.0525 0.0525 0.0450 0.0450 0.0400 0.0300 0.0300 0.0225

Codeword 00 01 02 10 110 111 120 121 200 201 210 211 2200 2201 2210 2211



The average wordlength of the extended source is L = 2(0.1225 + 0.105 + 0.105 + 0.09) + +3 (0.07 + 0.07 + 0.06 + 0.06 + 0.0525 + 0.0525 + 0.045 + 0.045) +4 (0.04 + 0.03 + 0.03 + 0.0225) = 2.70 Thus the efficiency is E=

¡ ¢ H X2

L log2 D

=

3.8522 = 90.02% (2.70) log2 3

Problem 11.30
The set of wordlengths from Table 10.3 is {1, 3, 3, 3, 5, 5, 5, 5}. The Kraft inequality gives

Σ_{i=1}^{8} 2^(−li) = 2^(−1) + 3(2^(−3)) + 4(2^(−5)) = 1/2 + 3/8 + 4/32 = 1

Thus the Kraft inequality is satisfied.

Problem 11.31
The Shannon-Hartley law with S = 50 and N = 10^(−5) B gives

Cc = B log2(1 + S/N) = B log2(1 + 50(10⁵)/B)

This is illustrated in the following figure. It can be seen that as B → ∞ the capacity approaches a finite constant. For sufficiently small x, ln(1 + x) ≈ x. Therefore

lim_{B→∞} Cc = 50(10⁵)/ln(2) = 7.2135 × 10⁶

as we see from Figure 11.6.

Problem 11.32
We now determine

Cc = 10⁴ log2(1 + P(10⁵)/10⁴) = 10⁴ log2(1 + 10P)



Figure 11.6: Capacity as a function of bandwidth for Problem 11.31.
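Figure 11.6 is straightforward to regenerate; the following MATLAB sketch plots Cc over a logarithmic bandwidth axis (the bandwidth range is an arbitrary choice):

% Capacity versus bandwidth for Problem 11.31 (S = 50 W, N0 = 1e-5 W/Hz)
B = logspace(2, 10, 400);              % bandwidth from 1e2 to 1e10 Hz
Cc = B.*log2(1 + 50./(1e-5*B));        % Shannon-Hartley capacity
semilogx(B, Cc), grid on
xlabel('Bandwidth - Hz'), ylabel('Capacity - bits/s')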



Figure 11.7: Capacity as a function of signal power for Problem 11.32.



The plot is given in Figure 11.7. For sufficiently large P,

Cc ≈ 10⁴ log2(10P)

and, therefore, Cc → ∞ as P → ∞.

Problem 11.33
For an (8,7) parity-check code an odd number of errors can be detected but not corrected. We therefore must compute the probability of 1, 3, 5, or 7 errors in an 8-symbol sequence. This is

PE = C(8,1) q(1 − q)⁷ + C(8,3) q³(1 − q)⁵ + C(8,5) q⁵(1 − q)³ + C(8,7) q⁷(1 − q)

where q is the coded symbol error probability. This gives

PE = 8q(1 − q)⁷ + 56q³(1 − q)⁵ + 56q⁵(1 − q)³ + 8q⁷(1 − q)
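A small MATLAB sketch that evaluates this detected-error probability over a range of q, using nchoosek for the binomial coefficients (the range of q is an arbitrary choice):

% Probability of an odd number of errors in an 8-symbol block (Problem 11.33)
q = 0:0.001:0.1;                      % coded symbol error probability
PE = zeros(size(q));
for i = [1 3 5 7]
    PE = PE + nchoosek(8,i)*(q.^i).*((1-q).^(8-i));
end
plot(q, PE), grid on
xlabel('Symbol error probability q'), ylabel('P_E')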

Problem 11.34
For a rate 1/7 repetition code we have, assuming an error probability of q on a single transmission,

PE = C(7,4) q⁴(1 − q)³ + C(7,5) q⁵(1 − q)² + C(7,6) q⁶(1 − q) + C(7,7) q⁷

which is

PE = 35q⁴(1 − q)³ + 21q⁵(1 − q)² + 7q⁶(1 − q) + q⁷

For a rate 1/3 code we have the corresponding result

PE = 3q²(1 − q) + q³

The results, generated by the MATLAB code

q = 0:0.01:0.1;
p7 = 35*(q.^4).*((1-q).^3)+21*(q.^5).*((1-q).^2)+7*(q.^6).*(1-q)+(q.^7);
p3 = 3*(1-q).*(q.^2)+(q.^3);
plot(q,p3,q,p7)
xlabel('Error probability - q')
ylabel('Word error probability')

are illustrated in Figure 11.8. The top curve (poorest performance) is for the rate 1/3 code and the bottom curve is for the rate 1/7 repetition code. Note that these results are not directly comparable since the values of q change as the code rate changes.



Figure 11.8: Performance curves for rate 1/7 and rate 1/3 repetition codes.



Problem 11.35 Consider the ratio of the error probability of the coded system to the error probability of the uncoded system for high SNR. Since we are considering z À 1 we use the asympotic approximation for the Gaussian Q function. Therefore, for the coded system Ãr ! r 2z n 1 −2z/n 1 −(√2z/n)2 1 √ e √ e = ≈p qc = Q n 2z 2π 2z/n 2π For the uncoded system we simply let n = 1. Thus s ¶¸ ∙ µ qc 2z exp (−2z/n) √ 1 = n exp 2z 1 − = qu 2z/n exp (−2z) n

√ Note that for large n the ratio becomes nK, where K is a constant defined by exp(2z). For fixed n, the loss in symbol error probability is therefore exponential with respect to the SNR z, which is a greater loss than can be compensated by the error-correcting capability √ of the repetition code. Note that for fixed z, the corresponding losss is proportional to n. Problem 11.36 A possible parity-check matrix for a (15,11) Hamming code, which has 4 parity symbols, is ⎡

0 ⎢ 0 [H] = ⎢ ⎣ 1 1

0 1 0 1

0 1 1 0

0 1 1 1

1 0 0 1

1 0 1 0

1 0 1 1

1 1 0 0

1 1 0 1

1 1 1 0

1 1 1 1

1 0 0 0

0 1 0 0

0 0 1 0

⎤ 0 0 ⎥ ⎥ 0 ⎦ 1

Since columns of the parity-check matrix can be permuted, it is not unique, The form shown is a systemantic code. Since each column is distinct, and there is no all-zeros column, there are 15 distinct non-zero syndromes (columns) with each syndrome corresponding to a single error in the position of that column. Thus, all single errors can be corrected and the code is a distance 3 code. Problem 11.37 The parity-check matrix will have 15 − 11 = 4 rows and 15 columns. Since rows may be permuted, the parity-check matrix is not unique. One possible selection is ⎡

0 ⎢ 0 [H] = ⎢ ⎣ 1 1

0 1 0 1

0 1 1 0

0 1 1 1

1 0 0 1

1 0 1 0

1 0 1 1

1 1 0 0

1 1 0 1

1 1 1 0

1 1 1 1

1 0 0 0

0 1 0 0

0 0 1 0

⎤ 0 0 ⎥ ⎥ 0 ⎦ 1


11.1. PROBLEM SOLUTIONS

29

The corresponding generator matrix is ⎡ 1 0 0 0 0 ⎢ 0 1 0 0 0 ⎢ ⎢ 0 0 1 0 0 ⎢ ⎢ 0 0 0 1 0 ⎢ ⎢ 0 0 0 0 1 ⎢ ⎢ 0 0 0 0 0 ⎢ ⎢ ⎢ 0 0 0 0 0 ⎢ [G] = ⎢ 0 0 0 0 0 ⎢ ⎢ 0 0 0 0 0 ⎢ ⎢ 0 0 0 0 0 ⎢ ⎢ 0 0 0 0 0 ⎢ ⎢ 0 0 0 0 1 ⎢ ⎢ 0 1 1 1 0 ⎢ ⎣ 1 0 1 1 0 1 1 0 1 1

0 0 0 0 0 1 0 0 0 0 0 1 0 1 0

0 0 0 0 0 0 1 0 0 0 0 1 0 1 1

0 0 0 0 0 0 0 1 0 0 0 1 1 0 0

0 0 0 0 0 0 0 0 1 0 0 1 1 0 1

0 0 0 0 0 0 0 0 0 1 0 1 1 1 0

⎤ 0 0 ⎥ ⎥ 0 ⎥ ⎥ 0 ⎥ ⎥ 0 ⎥ ⎥ 0 ⎥ ⎥ ⎥ ¸ 0 ⎥ ∙ I11 ⎥ 0 ⎥= H11 ⎥ 0 ⎥ ⎥ 0 ⎥ ⎥ 1 ⎥ ⎥ 1 ⎥ ⎥ 1 ⎥ ⎥ 1 ⎦ 1

where I11 is the 11×11 identity matrix and H11 represents the first 11 columns of the paritycheck matrix. From the structure of the parity-check matrix, we see that each parity symbol is the sum of 7 information symbols. Since 7 is odd, the all-ones information sequence yields the parity sequence 1111. This gives the codeword [T ]t = [111111111111111]

A single error in the third position yields a syndrome which is the third column of the parity-check matrix. Thus, for the assumed parity-check matrix, the syndrome is ⎡ ⎤ 0 ⎢ 1 ⎥ ⎥ [S] = ⎢ ⎣ 1 ⎦ 0 Problem 11.38 The generrator matrix corresponding to the given parity-check matrix is ⎤ ⎡ 1 0 0 0 ⎢ 0 1 0 0 ⎥ ⎥ ⎢ ⎢ 0 0 1 0 ⎥ ⎥ ⎢ ⎥ [G] = ⎢ ⎢ 0 0 0 1 ⎥ ⎢ 0 1 1 1 ⎥ ⎥ ⎢ ⎣ 1 0 1 1 ⎦ 1 1 0 1


30

CHAPTER 11. INFORMATION THEORY AND CODING

The code words are of the form

£

a1 a2 a3 a4 c1 c2 c3 ⎡ ⎤ c1 = a2 ⊕ a3 ⊕ a4 ⎣ c2 = a1 ⊕ a3 ⊕ a4 ⎦ c3 = a1 ⊕ a2 ⊕ a4

¤

where

Thus, the 24 = 16 code words are (read veritcally) a1 a2 a3 a4

0 0 0 0

0 0 0 1

0 0 1 0

0 0 1 1

0 1 0 0

0 1 0 1

0 1 1 0

0 1 1 1

1 0 0 0

1 0 0 1

1 0 1 0

1 0 1 1

1 1 0 0

1 1 0 1

1 1 1 0

1 1 1 1

c1 c2 c3

0 0 0

1 1 1

1 1 1

0 0 1

1 0 1

0 1 0

0 0 1

1 0 0

0 1 1

1 0 0

1 0 1

0 0 0

1 0 0

0 0 1

0 0 0

1 1 1

Problem 11.39 We first determine [Ti ] = [G] [Ai ] for i = 1, 2. The generator matrix for the previous problem is used. For [A1 ] we have ⎡ ⎤ 0 ⎡ ⎤ ⎢ 1 ⎥ ⎢ ⎥ 0 ⎢ ⎥ ⎢ 1 ⎥ ⎢ 1 ⎥ ⎥ ⎢ ⎥ [G] [A1 ] = [G] ⎣ ⎦ = ⎢ ⎢ 1 ⎥ = [T1 ] 1 ⎢ 1 ⎥ ⎢ ⎥ 1 ⎣ 0 ⎦ 0 and for [A2 ] we have

We note that

⎤ 1 ⎡ ⎤ ⎢ 0 ⎥ ⎢ ⎥ 1 ⎢ ⎥ ⎢ 0 ⎥ ⎢ 1 ⎥ ⎥ ⎢ ⎥ [G] [A2 ] = [G] ⎢ ⎣ 1 ⎦ = ⎢ 0 ⎥ = [T2 ] ⎢ 1 ⎥ ⎢ ⎥ 0 ⎣ 0 ⎦ 1 ⎡

⎤ 1 ⎢ 1 ⎥ ⎥ [A1 ] ⊕ [A2 ] = ⎢ ⎣ 0 ⎦ 1


11.1. PROBLEM SOLUTIONS and

31

⎤ 1 ⎡ ⎤ ⎢ 1 ⎥ ⎢ ⎥ 1 ⎢ ⎥ ⎢ 1 ⎥ ⎢ 0 ⎥ ⎥ ⎢ ⎥ [G] {[A1 ] ⊕ [A2 ]} = [G] ⎣ ⎦ = ⎢ ⎢ 1 ⎥ 0 ⎢ 0 ⎥ ⎢ ⎥ 1 ⎣ 0 ⎦ ⎡

1

which is [T1 ] ⊕ [T2 ].

Problem 11.40 For a rate 13 repetition code

⎤ 1 [G] = ⎣ 1 ⎦ 1

and for a rate 15 repetition code

⎤ 1 ⎢ 1 ⎥ ⎢ ⎥ ⎥ [G] = ⎢ ⎢ 1 ⎥ ⎣ 1 ⎦ 1

The generator matrix for a rate n1 repetition code has one column and n rows. All elements are unity. Problem 11.41 For a Hamming code with r = n − k parity check symbols, the wordlength n is n = 2r − 1 and k = 2r − r − 1 The rate is

2r − r − 1 k = n 2r − 1 If n is large, r is large. As r → ∞ both k and n approach 2r . Thus R=

lim R = lim R =

n→∞

r→∞

2r =1 2r


32

CHAPTER 11. INFORMATION THEORY AND CODING

Problem 11.42 The codeword is of the form (a1 a2 a3 a4 c1 c2 c3 ) where c1 = a1 ⊕ a2 ⊕ a3 c2 = a2 ⊕ a3 ⊕ a4 c3 = a1 ⊕ a2 ⊕ a4 This gives the generator matrix ⎡

1 ⎢ 0 ⎢ ⎢ 0 ⎢ [G] = ⎢ ⎢ 0 ⎢ 1 ⎢ ⎣ 0 1

0 1 0 0 1 1 1

0 0 1 0 1 1 0

⎤ 0 0 ⎥ ⎥ 0 ⎥ ⎥ 1 ⎥ ⎥ 0 ⎥ ⎥ 1 ⎦ 1

Using the generator matrix, we can determine the codewords. The codewords are (read vertically) a1 a2 a3 a4

0 0 0 0

1 0 0 0

1 1 0 0

1 1 1 0

1 1 1 1

0 0 0 1

0 0 1 1

0 1 1 1

1 0 1 1

1 1 0 1

1 0 0 1

0 1 1 0

0 1 0 0

0 0 1 0

0 1 0 1

1 0 1 0

c1 c2 c3

0 0 0

1 0 1

0 1 0

1 0 0

1 1 1

0 1 1

1 0 1

0 1 0

0 0 0

0 0 1

1 1 0

0 0 1

1 1 1

1 1 0

1 0 0

0 1 1

Let αi denote the i-th code word and let αi → αj indicate that a cyclic shift of αi yields αj . The cyclic shifts are illustrated in Figure 11.9. Problem 11.43 The parity-check matrix for the coder in the previous problem is ⎡ ⎤ 1 1 1 0 1 0 0 [H] = ⎣ 0 1 1 1 0 1 0 ⎦ 1 1 0 1 0 0 1 For the received sequence

[R]t = [1101001]


11.1. PROBLEM SOLUTIONS

33

α1

α5

α2

α3

α12

α9

α15

α14

α6

α4

α8

α7

α11

α13

α16

α10

Figure 11.9: Illustration of cyclic property. the syndrome is

⎤ 0 [S] = [H] [R] = ⎣ 0 ⎦ 0

This indicates no errors, which is in agreement with Figure 11.15. For the received sequence [R]t = [1101011] the syndrome is

⎤ 0 [S] = [H] [R] = ⎣ 1 ⎦ 0

This syndrome indicates an error in the 6th position. Thus the assumed transmitted codeword is [T ]t = [1101001] which is also in agreement with Figure 11.15. Problem 11.44 The probability of two errors in a 7 symbol codeword is µ ¶ 7 P {2 errors} = (1 − qc )5 qc2 = 21qc2 (1 − qc )5 2 and the probability of three errors in a 7 symbol codeword is µ ¶ 7 P {3 errors} = (1 − qc )4 qc3 = 35qc3 (1 − qc )4 3


34

CHAPTER 11. INFORMATION THEORY AND CODING

The ratio of P {3 errors} to P {2 errors} is R=

Pr {3 errors} 35qc3 (1 − qc )4 5 qc = 5 = 3 1−q 2 Pr {2 errors} 21qc (1 − qc ) c

We now determine the coded error probability qc . Since BPSK modulation is used Ãr ! 2z qc = Q 7 where z is STw /N0 . This gives us the following table z 8 dB 9 dB 10 dB 11 dB 12 dB 13 dB 14 dB

qc 0.0897 0.0660 0.0455 0.0290 0.0167 0.0085 0.0037

errors} R = Pr{3 Pr{2 errors} 0.1642 0.1177 0.0794 0.0497 0.0278 0.0141 0.0062

It is clear that as the SNR, z, increases, the value of R becomes negligible. Problem 11.45 The parity-check matrix for a (7, 4) Hamming code, as we have defined a Hamming code, is a code in which the ith column of the parity matrix is the binary representation of i. This gives ⎡ ⎤ 0 0 0 1 1 1 1 [H] = ⎣ 0 1 1 0 0 1 1 ⎦ 1 0 1 0 1 0 1

The parity checks are in positions 1,2, and 4. Moving the parity checks to positions 5, 6, and 7 yields a systematic code with the generator matrix ⎡

1 ⎢ 0 ⎢ ⎢ 0 ⎢ [G] = ⎢ ⎢ 0 ⎢ 0 ⎢ ⎣ 1 1

0 1 0 0 1 0 1

0 0 1 0 1 1 0

⎤ 0 0 ⎥ ⎥ 0 ⎥ ⎥ 1 ⎥ ⎥ 1 ⎥ ⎥ 1 ⎦ 1


11.1. PROBLEM SOLUTIONS

35

which has the partity-check matrix ⎡

⎤ 0 1 1 1 1 0 0 [H] = ⎣ 1 0 1 1 0 1 0 ⎦ 1 1 0 1 0 0 1 Note that all columns of [H] are still distinct and therefore all single errors are corrected. Thus, the distance three property is preserved. Since the first four columns of [H] can be in any order, there are 4! = 24 different (7, 4) systematic codes. If the code is not required to be systematic, there are 7! = 5040 different codes, all of which will have equivalent performance in an AWGN channel. Problem 11.46 For the convolutional coder shown in Figure 11.24 we have the three outputs v1 = s1 ⊕ s2 ⊕ s3 v2 = s1

v2 = s1 ⊕ s2 Shifting in a 0 gives, in terms of the initial shift register contents v1 = 0 ⊕ s1 ⊕ s2 = 0 ⊕ a

v2 = 0

v3 = 0 ⊕ s1 and shifting in a binary 1 gives v1 = 1 ⊕ s1 ⊕ s2 = 1 ⊕ a

v2 = 1

v3 = 1 ⊕ s1 where a = s1 ⊕ s2 . Clearly v1 = 0 ⊕ a or v1 = 1 ⊕ a are complements for a = 0 or 1. The output v2 for a 0 or 1 input are obviously complements. Also v1 = 0 ⊕ s1 or v1 = 1 ⊕ s1 are complements for s1 = 0 or 1 Problem 11.47 For the convolutional coder shown in Figure 11.24 we have the three outputs v1 = s1 ⊕ s2 ⊕ s3 ⊕ s4

v2 = s1 ⊕ s2 ⊕ s4


36

CHAPTER 11. INFORMATION THEORY AND CODING

Shifting in a 0 gives, in terms of the initial shift register contents v1 = 0 ⊕ s1 ⊕ s2 ⊕ s3 = 0 ⊕ a

v2 = 0 ⊕ s1 ⊕ s3 = 0 ⊕ b and shifting in a binary 1 gives

v1 = 1 ⊕ s1 ⊕ s2 ⊕ s3 = 1 ⊕ a

v2 = 1 ⊕ s1 ⊕ s3 = 1 ⊕ b

where a = s1 ⊕ s2 ⊕ s3 and b = s1 ⊕ s3 . Clearly v1 = 0 ⊕ a or v1 = 1 ⊕ a are complements for a = 0 or 1. In the same manner v2 = 0 ⊕ b or v2 = 1 ⊕ b are complements for b = 0 or 1. Problem 11.48 For the encoder shown, the constraint span is k = 4. We first need to compute the states for the encoder. The output is v1 v2 where v1 = s1 ⊕ s2 ⊕ s3 ⊕ s4 and v2 = s1 ⊕ s2 ⊕ s4 . We now compute the state transitions and the output using the state diagram shown in Figure 11.10. This gives the following table. State A

Previous s1 s2 s2 000

B

001

C

010

D

011

E

100

F

101

G

110

H

111

Input 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1

s1 s2 s3 s4 0000 1000 0001 1001 0010 1010 0011 1011 0100 1100 0101 1101 0110 1110 0111 1111

Current State A E A E B F B F C G C G D H D H

Output 00 11 11 00 10 01 01 10 11 00 00 11 01 10 10 01

Problem 11.49 For the encoder shown, we have v1 = s1 ⊕ s2 ⊕ s3 and v2 = s1 ⊕ s3 . The trellis diagram is illustrated in Figure 11.11. The state transitions, and the output corresponding to each transition appears in the following table.


11.1. PROBLEM SOLUTIONS

37 (0 0)

A (1 1) (1 1) (1 0) C B

(0 1)

(1 1)

(0 0)

E

(0 1) (0 0)

(0 0)

D

(0 1) (1 0) (1 1) G F (1 0) (1 0) H (0 1)

Figure 11.10: State diagram for Problem 11.48.

Problem 11.50

State A

s1 s2 00

B

01

C

10

D

11

Input 0 1 0 1 0 1 0 1

s1 s2 s3 000 100 001 101 010 110 011 111

State A C A C B D B D

Output 00 11 11 00 10 01 01 10


38

CHAPTER 11. INFORMATION THEORY AND CODING (00)

(11)

(00)

(00)

(00)

(11)

(11)

(10)

(00) (10)

(01)

(01)

(00)

(11)

(11)

(11)

(10)

(01) (01)

(10) Steady-State Transitions

Termination

Figure 11.11: Trellis diagram for Problem 11.49. The source rate is rs = 5000 symbols/second. The channel symbol rate is ³n´ n rc = rs = 5000 k k

If the burst duration is Tb , then

rc Tb = rs Tb

³n´ n = 5000Tb k k

channel symbols can be affected. Assuming that Tb = 0.2 s, the burst length, expressed in symbols, is ³n´ ³n´ m = rc Tb = (5000) (0.2) = 1000 k k symbols affected by the burst. The interleaving array contains m rows with each row representing a (n, k) codeword. We now must now determine n and k. Since n > k, it is clear that the burst affects more than 1000 code symbols. We also know that the rate of a Hamming code approaches 1 as n gets large, so that for large n,.m / will exceed 1000 by only a small amount. We therefore try a (1023,1013) Hamming code


11.1. PROBLEM SOLUTIONS

39

having n − k = 10 parity symbols. (See problem 10.41.) The number of symbols affected by the burst is ¶ µ 1023 = 1009.9 m = 1000 1013 Since m must be an integer lower bounded by 1009.9, we let m = 1010. The full interleaving array contains mn symbols, since m may be received in error (worst case) m(n − 1) must be received correctly. Since the symbol rate on the channel is 5000n/k this gives a required error-free span of m(n − 1) 1010 1022 = = 204.43 s 5000n/k 5000 1023/1013 This is probably not very practical but it illustrates the process and the problem of using long codes with low error-correcting capabilities for interleaving. Problem 11.51 The source rate is rs = 5000 symbols/second. The channel symbol rate is µ ¶ 23 n rc = rs = 5000 k 12 since we have a (23,12) code. If the burst duration is Tb , then µ ¶ n 23 rc Tb = rs Tb = 5000Tb k 12 channel symbols can be affected by the burst. The burst can affect 3 columns, containing 3m symbols, of the interleaving table. Assuming that Tb = 0.2 s, the burst length, expressed in symbols, is µ ¶ 23 = 1916.7 symbols 3m = rc Tb = (5000) (0.2) 12 Thus

1916.7 = 638.89 3 We therefore set m = 639, which gives an interleaving array with 639 rows and 23 columns. Since 3 columns can contain errors we must have 20 error-free columns representing 20(639) symbols. This represents an error-free span of m=

20(639) 20m = 1.3336 = rc 5000(23/12)

s

We see that the more powerful code results in a much shorter required error-free span and a smaller interleaving array.


40

CHAPTER 11. INFORMATION THEORY AND CODING

Problem 11.52 The probability of n transmissions is the probability of n − 1 retransmissions. This is P2n−1 . The expected value of N is therefore ) (∞ X n−1 nP2 E n=1

From

∞ X

xn =

n=0

1 , 1−x

|x| < 1

we can obtain, by differentiating both sides of the above wrt x, ∞ X

nxn−1 =

n=1

1 (1 − x)2

There the expected value of N is E {N } =

1 (1 − P2 )2

Problem 11.53 The required plot is shown in Figure 11.12.

11.2

Computer Exercises

Computer Exercise 11.1 For two messages the problem is easy since there is only a single independent probability. The MATLAB program follows % File: ce11_1a.m a = 0.001:0.001:0.999; b =1-a; lna = length(a); H = zeros(1,lna); H=-(a.*log(a)+b.*log(b))/log(2); plot(a,H); xlabel(’a’) ylabel(’Entropy’)


11.2. COMPUTER EXERCISES

41

Figure 11.12: Detection gain characteristice for Problem 11.53. The plot is illustrated in Figure 11.13. For three source messages we have two independent probabilities. Two of them will be specified and the remaining probability can then be determined. The resulting MATLAB code follows: % File: ce11_1b.m n = 50; nn = n+1; H = zeros(nn,nn); z = (0:n)/n; z(1) = eps;

% number of points in vector % increment for zero % initialize entropy array % probability vector % prevent problems with log(0)


42

CHAPTER 11. INFORMATION THEORY AND CODING

1 0.9 0.8 0.7

E ntropy

0.6 0.5 0.4 0.3 0.2 0.1 0 0

0.1

0.2

0.3

0.4

0.5 a

0.6

0.7

0.8

0.9

1

Figure 11.13: Entropy for binary source. for i=1:nn for j=1:nn c = 1-z(i)-z(j); % calculate third probability if c>0 xx = [z(i) z(j) c]; H(i,j)= -xx*log(xx)’/log(2); % compute entropy end end end colormap([0 0 0]) % b&w colormap for mesh mesh(z,z,H); % plot entropy view(-45,30); % set view angle xlabel(’Probability of message a’) ylabel(’Probability of message b’)


11.2. COMPUTER EXERCISES

43

Figure 11.14: Entropy for three-symbol source. zlabel(’Entropy’) figure contour(z,z,H,10) xlabel(’Probability of message a’) ylabel(’Probability of message b’)

% new figure % plot contour map

The resulting plot is illustrated in Figure 11.14. One should experiment with various viewing locations. Next we draw a countour map with 10 countour lines. The result is illustrated in Figure 10.17. Note the peak at 0.33, 0.33, indicating that the maximum entropy occurs when all three source messages occur with equal probability Computer Exercise 11.2 The solution of this computer example is, in large part, taken from the book by Proakis, Salehi, and Bauch (Comtemporary Communication Systems using MATLAB, Thomson-


44

CHAPTER 11. INFORMATION THEORY AND CODING

Brooks/Cole, 2004, ISBN 0-534-40617-3).1 Modifications made to the ortiginally published program include changing the program from a function to a script, allowing the user to determine probability distributions and source codes for extended sources, allowing the user to enter the source distribution and desired extension order interactively and, in addition, the entropy of the source, the entropy of the extended source, the average word length, and the efficiency are displayed as the program is executed. This was done since a typical use of source coding is to continue incrementing the source extension until the efficiency reaches a point where transmission can be accomplished. The resulting MATLAB code follows. % File: ce11_2.m clear all msgprobs = input(’Enter message probabilities as a vector, ... e.g., [0.5 0.25 0.25] > ’); ext = input(’Enter desired extension (1 if no extension is desired) > ’); entp = -sum(msgprobs.*log(msgprobs))/log(2); % original source entropy dispa = [’The source entropy is ’, ... num2str(entp,’%15.5f’),’ bits/symbol.’]; disp(dispa) % % Determine probabilities and entropy of extended source. % if ext>1 b = msgprobs; % working array for k=2:ext b = kron(b,msgprobs); % Kronecker tensor product end msgprobs = b; % probabilities for extended source entp = -sum(msgprobs.*log(msgprobs)) ... /log(2); % entropy of extended source dispaa = [’The extended source entropy is ’, ... um2str(entp,’%15.5f’),’ bits/symbol.’]; disp(dispaa) end % % Continue 1

This book is highly recommended as a supplemental text in basic communications courses. We have found it useful as a resource for in-class demonstrations and for homework assignments extending material covered in class. This particular computer example, as originally developed by Proakis, Salehi, and Bauch, provides an excellent example of several MATLAB programming techniques. It has been said that a key to understanding efficient MATLAB programming techniques is to understand the use of the colon and the find command. This program makes efficient use of both.


11.2. COMPUTER EXERCISES % n = length(msgprobs); % length of source (or extended source) q = msgprobs; % copy of msgprobs m = zeros(n-1,n); % initialize for i=1:n-1 [q,l] = sort(q); % order message probabilities m(i,:) = [l(1:n-i+1),zeros(1,i-1)]; q = [q(1)+q(2),q(3:n),1]; % combine messages in column vector end % % Code words are estabilished in the c array. % for i = 1:n-1 c(i,:) = blanks(n*n); % initialize character array end c(n-1,n) = ’0’; c(n-1,2*n) = ’1’; for i=2:n-1 c(n-i,1:n-1) = c(n-i+1,n*(find(m(n-i+1,:)==1)) ... (n-2):n*(find(m(n-i+1,:)==1))); c(n-i,n) = ’0’; c(n-i,n+1:2*n-1) = c(n-i,1:n-1); c(n-i,2*n) = ’1’; for j=1:i-1 c(n-i,(j+1)*n+1:(j+2)*n) = ... c(n-i+1,n*(find(m(n-i+1,:)==j+1)-1)+1:n*find(m(n-i+1,:)==j+1)); end end % % Establish code words, average word length, and entropy. % for i=1:n % determine codewords and wordlengths codewords(i,1:n) = c(1,n*(find(m(1,:)==i)-1)+1:find(m(1,:)==i)*n); wrdlengths(i) = length(find(abs(codewords(i,:))~=32)); end avewl = sum(msgprobs.*wrdlengths); % average word length codewords % display code words disp3 = [’The average wordlength is ’, ... num2str(avewl,’%15.5f’),’ symbols.’]; disp(disp3) disp4 = [’The efficiency is ’, ... num2str(entp*100/avewl,’%15.5f’),’ %.’]; disp(disp4)

45


46

CHAPTER 11. INFORMATION THEORY AND CODING % End of function file.

We will use the preceding MATLAB program to work Problem 11.28. The MATLAB dialog follows. » ce11_2.m Enter message probabilities as a vector, ... e.g., [0.5 0.25 0.25] > [0.35 0.3 0.2 0.15] The source entropy is 1.92612 bits/symbol. Enter desired extension (1 if no extension is desired) > 2 The extended source entropy is 3.85224 bits/symbol. codewords = 100 001 1011 0001 010 1111 0111 11100 1100 1010 11011 00001 0110 11101 11010 00000 The average wordlength is 3.88000 symbols. The efficiency is 99.28457 %. Computer Exercise 10.3 The MATLAB code for generating the performance curves illustrated in Figure 11.18 in the text follows. % File: ce11_3.m zdB = 0:30; z = 10.^(zdB/10); qa = 0.5*exp(-0.5*z); qa3 = 0.5*exp(-0.5*z/3);


11.2. COMPUTER EXERCISES

47

qa7 = 0.5*exp(-0.5*z/7); pa7 = 35*((1-qa7).^3).*(qa7.^4)+21*((1-qa7).^2).*(qa7.^5)... +7*(1-qa7).*(qa7.^6)+(qa7.^7); pa3 = 3*(1-qa3).*(qa3.^2)+(qa3.^3); qr = 0.5./(1+z/2); qr3 = 0.5./(1+z/(2*3)); qr7 = 0.5./(1+z/(2*7)); pr7 = 35*((1-qr7).^3).*(qr7.^4)+21*((1-qr7).^2).*(qr7.^5)... +7*(1-qr7).*(qr7.^6)+(qr7.^7); pr3 = 3*(1-qr3).*(qr3.^2)+(qr3.^3); semilogy(zdB,qr,zdB,pr3,zdB,pr7,zdB,qa,zdB,pa3,zdB,pa7) axis([0 30 0.0001 1]) xlabel(’Signal-to-Noise Ratio, z - dB’) ylabel(’Probability ’) Executing the code yields the results illustrated in Figure 11.15 in the text. The curves can be identified from Figure 10.18 in the text. Computer Exercise 11.4 For the rate 0.5 BCH codes, we use the following MATLAB program. % File: ce11_4a.m zdB = 0:0.5:10; % set Eb/No in dB z = 10.^(zdB/10); % convert to linear scale ber1 = q(sqrt(4*2*z/7)); % SER for (7,4) BCH code ber2 = q(sqrt(7*2*z/15)); % SER for (15,7) BCH code ber3 = q(sqrt(16*2*z/31)); % SER for (31,16) BCH code ber4 = q(sqrt(30*2*z/63)); % SER for (63,30) BCH code berbch1 = ser2ber(2,7,3,1,ber1); % BER for (7,4) BCH code berbch2 = ser2ber(2,15,5,2,ber2); % BER for (15,7) BCH code berbch3 = ser2ber(2,31,7,3,ber3); % BER for (31,16) BCH code berbch4 = ser2ber(2,64,13,6,ber4); % BER for (63,30) BCH code semilogy(zdB,berbch1,’k’,zdB,berbch2,’k’,zdB,berbch3,’k’,zdB,berbch4,’k’) xlabel(’E_b/N_o in dB’) % label x axis ylabel(’Bit Error Probability’) % label y axis % End of script file. Executing the code for the rate 0.5 code yields the results shown in Figure 11.16. For SNRs exceeding 5 dB, the order of the plots is in order of word length n, with n = being the top


48

CHAPTER 11. INFORMATION THEORY AND CODING

0

10

-1

P robability

10

-2

10

-3

10

-4

10

0

5

10 15 20 Signal-to-Noise Ratio, z - dB

25

30

Figure 11.15: Results for Computer Exercise 11.3. curve (least error correcting capability) and n = 63 being the bottom curve (greatest error correcting capability). For the rate 0.75 BCH codes, we use the following MATLAB program. % File: ce11_4b.m zdB = 0:0.5:10; % set Eb/No in dB z = 10.^(zdB/10); % convert to linear scale ber1 = q(sqrt(11*2*z/15)); % SER for (15,11) BCH code ber2 = q(sqrt(21*2*z/31)); % SER for (31,21) BCH code ber3 = q(sqrt(45*2*z/63)); % SER for (63,45) BCH code ber4 = q(sqrt(99*2*z/127)); % SER for (127,99) BCH code berbch1 = ser2ber(2,15,3,1,ber1); % BER for (15,11) BCH code berbch2 = ser2ber(2,31,5,2,ber2); % BER for (31,21) BCH code berbch3 = ser2ber(2,63,7,3,ber3); % BER for (63,45) BCH code


11.2. COMPUTER EXERCISES

49

0

10

-2

10

-4

Bit Error Probability

10

-6

10

-8

10

-10

10

-12

10

-14

10

0

1

2

3

4

5 6 Eb/No in dB

7

8

9

10

Figure 11.16: Rate 1/2 BCH code results for Computer Exercise 11.4.


50

CHAPTER 11. INFORMATION THEORY AND CODING

0

10

Bit Error Probability

-5

10

-10

10

-15

10

0

1

2

3

4

5 6 Eb/No in dB

7

8

9

10

Figure 11.17: Rate 3/4 BCH code results for Computer Exercise 11.4. berbch4 = ser2ber(2,127,9,4,ber4); % BER for (127,99) BCH code semilogy(zdB,berbch1,zdB,berbch2,zdB,berbch3,zdB,berbch4) xlabel(’E_b/N_o in dB’) % label x axis ylabel(’Bit Error Probability’) % label y axis % End of script file. Executing the code for the rate 0.75 code yields the results shown in Figure 11.17. For SNRs exceeding 5 dB, the order of the plots is in order of word length n, with n = 15 being the top curve (least error correcting capability) and n = 127 being the bottom curve (greatest error correcting capability).

Computer Exercise 11.5


11.2. COMPUTER EXERCISES

51

In order to illustrate the problems associated with using nchoosek with large arguments we compute » nchoosek(1000,500) Warning: Result may not be exact. > In C:\MATLABR11\toolbox\matlab\specfun\nchoosek.m at line 50 ans = 2.7029e+299 The warning indicates a possible with numerical percision. A possible way to mitigate this is to perform the calculation using logarithms as illustrated in the function below. %File: nkchoose.m Function out=nkchoose(n,k) % Computes n!/k!/(n-k)! a = sum(log(1:n)); b = sum(log(1:k)); c = sum(log(1:(n-k))); out = round(exp(a-b-c)); % End of function file.

% ln of n! % ln of k! % ln of (n-k)! % result

The main program for evaluating the performance of the (511,385) BCH code is shown below. % File: ce11_5.m zdB = 0:0.5:10; % set Eb/No in dB z = 10.^(zdB/10); % convert to linear scale ber1 = q(sqrt(z)); % FSK result ber2 = q(sqrt(385*z/511)); % SER for (511,385) BCH code ber3 = q(sqrt(768*z/1023)); % SER for (1023,768) BCH code berbch1 = ser2ber1(2,511,29,14,ber2); % BER for (511,385) BCH code berbch2 = ser2ber1(2,1023,53,26,ber3); % BER for (1023,768) BCH code semilogy(zdB,ber1,’k-’,zdB,berbch1,’k--’,zdB,berbch2,’k-.’) xlabel(’E_b/N_o in dB’) % label x axis ylabel(’Bit Error Probability’) % label y axis legend(’Uncoded’,’(511,385) BCH code’,’(1023,768) BCH code’,3) % End of script file. Note that this program calls ser2ber1 rather than ser2ber. In order to evaluate the performance of the (1023,768) BCH code, it is necessay to modify the routine for mapping


52

CHAPTER 11. INFORMATION THEORY AND CODING

the code symbol error rate so that the calculation is performed using logarithms as the following MATLAB code indicates. % File: ser2ber1.m function [ber] = ser2ber(q,n,d,t,ps) lnps = length(ps); % length of error vector ber = zeros(1,lnps); % initialize output vector for k=1:lnps % iterate error vector ser = ps(k); % channel symbol error rate sum1 = 0; sum2 = 0; % initialize sums for i=(t+1):d lnterm = log(nkchoose(n,i))+(i*log(ser))+(n-i)*log((1-ser)); term = exp(lnterm); sum1 = sum1+term; end for i=(d+1):n lnterm = log(i)+log(nkchoose(n,i))+(i*log(ser))+(n-i)*log((1-ser)); term = exp(lnterm); sum2 = sum2+term; end ber(k) = (q/(2*(q-1)))*((d/n)*sum1+(1/n)*sum2); end % End of function file. Executing the program yields the results shown in Figure 11.18.

Computer Exercise 11.6 In order to work this computer exercise we write a MATLAB program to determine the outputs for each of the 16 tree branches illustrated in Figure 11.25. The inputs for each tree branch are illustrated in the right-hand column of Figure 11.25. The outputs for the portion of the tree are generated by inputting the appropriate binary vector corresponding the portion of the tree of interest. The MATLAB code follows. % File: ce11_6.m clear all invector = input(’Enter input sequence as a vector (in brackets with spaces) > ’); d = [’The input sequence is ’,num2str(invector),’.’]; disp(d)


11.2. COMPUTER EXERCISES

53

0

10

-5

Bit Error Probability

10

-10

10

-15

10

Uncoded (511,385) BCH code (1023,768) BCH code

-20

10

0

1

2

3

4

5 6 Eb/No in dB

7

8

9

Figure 11.18: Performance results for Computer Exercise 11.5.

10


54

CHAPTER 11. INFORMATION THEORY AND CODING sr = [0 0 0]; for k=1:4 sr = [invector(1,k) sr(1,1) sr(1,2)]; v1 = dec2bin(rem(sr(1,1)+sr(1,2)+sr(1,3),2)); v2 = dec2bin(sr(1,1)); v3 = dec2bin(rem(sr(1,1)+sr(1,2),2)); c = [v1 v2 v3]; out(1,k) = bin2dec(c); end out = dec2bin(out) % End of script file.

The program is illustrated by generating the outputs for the path labeled ’A’ in Figure 11.25., for which the input bit stream is 1010. The MATLAB dialog is as follows: » ce11_6 Enter input sequence as a vector (in brackets with spaces) > [1 0 1 0] The input sequence is 1 0 1 0. out = 111 101 011 101 The four outputs are represented as the rows following ‘out’ in the preceding dialog. This corresponds to the sequence 111 101 011 101, which is Path A as shown in Figure 11.25 in the text. (Note: If all outputs are 0, the sequence 000 is represented by a single 0.) Computer Exercise 11.7 This Computer Exercise is worked by modifying the MATLAB code in the text (c11ce6.m) to allow the calculation of results for γ = 0.1 and γ = 0.3 and plotting the results together so that they are easily compared. The revised MATLAB code follows. % File: ce11_7.m g = [0.1 0.3]; % gamma zdB = 0:0.1:10; % z in dB z = 10.^(zdB/10); % vector of z values qt = Q(sqrt(2*z)); % gamma=0 case for k=1:2 q1 = Q((1-g(k))*sqrt(2*z));


11.2. COMPUTER EXERCISES

55

q2 = Q((1+g(k))*sqrt(2*z)); p2 = q1-q2; % P2 p1 = q2; % P1 pe(k,:) = p1./(1-p2); % probability of error N(k,:) = 1./((1-p2).*(1-p2)); end subplot(2,2,1) semilogy(zdB,pe(1,:),zdB,qt) title(’gamma = 0.1’) xlabel(’z - dB’); ylabel(’Error Probability’); grid subplot(2,2,2) semilogy(zdB,pe(2,:),zdB,qt) title(’gamma = 0.3’) xlabel(’z - dB’); ylabel(’Error Probability’); grid subplot(2,2,3) plot(zdB,N(1,:)) xlabel(’z - dB’); ylabel(’Ave. Transmissions/Symbol’); axis([0 10 1 1.4]); grid subplot(2,2,4) plot(zdB,N(2,:)) xlabel(’z - dB’); ylabel(’Ave. Transmissions/Symbol’); axis([0 10 1 1.4]); grid % End of script file. Executing the program gives the results shown in Figure 11.19. As expected, as γ is increased, the error probability deceases at the cost of an increase in the average number of transmissions. The key, however, is that increased performance is achieved with only a moderate increase in the rate of retransmissions. This is especially true at higher SNRs.


56

CHAPTER 11. INFORMATION THEORY AND CODING

gamma = 0.1

0

Error Probability

-5

-10

10

-10

0

5 z - dB

10

1.4 1.3 1.2 1.1 1

-5

10

0

5 z - dB

10

10

Ave. Transmissions/Symbol

Error Probability

10

10

Ave. Transmissions/Symbol

gamma = 0.3

0

10

0

5 z - dB

10

0

5 z - dB

10

1.4 1.3 1.2 1.1 1

Figure 11.19: Performance results for feedback channel.


APPENDIX A: PHYSICAL NOISE SOURCES AND NOISE CALCULATIONS

Problem A.1 All parts of the problem are solved using the relation p Vrm s = 4kT RB where k B

1:38 10 23 J/K 30 MHz = 3 107 Hz

= =

a. For R = 10; 000 ohms and T = T0 = 290 K p Vrm s = 4 (1:38 10 23 ) (290) (104 ) (3 = 6:93 10 5 V rms = 69:3 V rms b. Vrm s is smaller than the result in part (a) by a factor of

107 )

p

10 = 3:16: Thus

p

100 = 10: Thus

Vrm s = 21:9 V rms c. Vrm s is smaller than the result in part (a) by a factor of Vrm s = 6:93 V rms d. Each answer becomes smaller by factors of 2; Problem A.2 Use

eV kT

10 = 3:16; and 10, respectively.

eV kT

I = Is exp We want I > 20Is or exp

p

1

1 > 20. 19

e 1:6 10 a. At T = 290 K, kT = 1:38 1023 290 = 40, so we have exp (40V ) > 21 giving

V i2rm s or

i2rm s B

ln (21) = 0:0761 volts 40 eV = 2eIB ' 2eBIs exp kT eV = 2eIs exp kT

>

= =

2 1:6

10 19

1:0075

22

10

1:5

10 5 exp (40

0:0761)

2

A /Hz

e b. If T = 90 K, then kT = 129, and for I > 20Is , we need exp(129V ) > 21 or

V >

ln (21) = 2:36 129

10 2 volts

1


APPENDIX A: PHYSICAL NOISE SOURCES AND NOISE CALCULATIONS

2

Thus i2rm s B

=

2 1:6

10 19

=

1:0079

10 22 A2 /Hz

1:5

10 5 exp (129

0:0236)

Problem A.3
a. Use Nyquist's formula to get the equivalent circuit of Req in parallel with RL, where Req is given by

Req = R3(R1 + R2)/(R1 + R2 + R3)

The noise equivalent circuit consists of an rms noise voltage, Veq, in series with Req and an rms noise voltage, VL, in series with RL, with these series combinations being in parallel. The equivalent noise voltages are

Veq = √(4kT Req B) and VL = √(4kT RL B)

The rms noise voltages across the parallel combination of Req and RL, by using superposition and voltage division, are

V01 = Veq RL/(Req + RL) and V02 = VL Req/(Req + RL)

Adding noise powers to get V0², we obtain

V0² = RL² Veq²/(Req + RL)² + Req² VL²/(Req + RL)²
    = (4kTB) RL Req/(Req + RL)
    = 4kTB RL R3(R1 + R2)/(R1R3 + R2R3 + R1RL + R2RL + R3RL)

Note that we could have considered the parallel combination of R3 and RL as an equivalent load resistor and found the Thevenin equivalent. Let

R|| = R3 RL/(R3 + RL)

The Thevenin equivalent resistance of the whole circuit is then

Req2 = R||(R1 + R2)/(R|| + R1 + R2)
     = [R3RL/(R3 + RL)](R1 + R2)/[R3RL/(R3 + RL) + R1 + R2]
     = RL R3(R1 + R2)/(R1R3 + R2R3 + R1RL + R2RL + R3RL)

and the mean-square output noise voltage is now V0² = 4kTB Req2, which is the same result as obtained before.

b. With R1 = 2000 Ω, R2 = RL = 300 Ω, and R3 = 500 Ω, we have

V0²/B = 4kT RL R3(R1 + R2)/(R1R3 + R2R3 + R1RL + R2RL + R3RL)
      = 4(1.38 × 10^-23)(290)(300)(500)(2000 + 300)/[2000(500) + 300(500) + 2000(300) + 300(300) + 500(300)]
      = 2.775 × 10^-18 V²/Hz
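A one-line check of this spectral density (a sketch using the resistor values of part (b)):

k = 1.38e-23; T = 290;
R1 = 2000; R2 = 300; R3 = 500; RL = 300;
Req2 = RL*R3*(R1+R2)/(R1*R3 + R2*R3 + R1*RL + R2*RL + R3*RL)   % about 173 ohms
S0 = 4*k*T*Req2              % V0^2/B = 2.775e-18 V^2/Hz, as above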

Problem A.4
Find the equivalent resistance for the R1, R2, R3 combination and set RL equal to this to get

RL = R3(R1 + R2)/(R1 + R2 + R3)

Problem A.5
Using Nyquist's formula, we find the equivalent resistance looking back into the terminals with Vrms across them. It is

Req = 50 kΩ || 20 kΩ || (5 kΩ + 10 kΩ + 5 kΩ)
    = 50 kΩ || 20 kΩ || 20 kΩ
    = 50 kΩ || 10 kΩ
    = (50 kΩ)(10 kΩ)/(50 kΩ + 10 kΩ)
    = 8,333 Ω

Thus

V²rms = 4kT Req B = 4(1.38 × 10^-23)(400)(8333)(2 × 10^6) = 3.68 × 10^-10 V²

which gives

Vrms = 19.18 μV rms
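The same result follows from a couple of MATLAB lines (a sketch, with T = 400 K and B = 2 MHz as used above):

k = 1.38e-23; T = 400; B = 2e6;
par = @(a,b) a.*b./(a+b);                      % parallel combination of two resistances
Req = par(50e3, par(20e3, 5e3+10e3+5e3))       % 8333 ohms
Vrms = sqrt(4*k*T*Req*B)                       % 1.92e-5 V = 19.2 microvolts rms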

Problem A.6
To find the noise figure, we first determine the noise power due to the source at the output, then that due to the source and the network, and take the ratio of the latter to the former. Initially assume unmatched conditions. The results are

V0² due to RS only = [(R2 || RL)/(RS + R1 + R2 || RL)]² (4kT RS B)

V0² due to R1 and R2 = [(R2 || RL)/(RS + R1 + R2 || RL)]² (4kT R1 B) + [(RL || (R1 + RS))/(R2 + (R1 + RS) || RL)]² (4kT R2 B)

V0² due to RS, R1, and R2 = [(R2 || RL)/(RS + R1 + R2 || RL)]² [4kT(RS + R1)B] + [(RL || (R1 + RS))/(R2 + (R1 + RS) || RL)]² (4kT R2 B)

The noise figure is (after some simplification)

F = 1 + R1/RS + [(RL || (R1 + RS))/(R2 + (R1 + RS) || RL)]² [(RS + R1 + R2 || RL)/(R2 || RL)]² (R2/RS)

In the above,

Ra || Rb = Ra Rb/(Ra + Rb)

Note that the noise due to RL has been excluded because it belongs to the next stage. Since this is a matching circuit, we want the input matched to the source and the output matched to the load. Matching at the input requires that

RS = Rin = R1 + R2 || RL = R1 + R2RL/(R2 + RL)

and matching at the output requires that

RL = Rout = R2 || (R1 + RS) = R2(R1 + RS)/(R1 + R2 + RS)

Next, these expressions are substituted back into the expression for F. After some simplification, this gives

F = 1 + R1/RS + (R2/RS) {2RL² RS(R1 + R2 + RS)/[(RS − R1)(R2²(R1 + RS + RL) + RL²(R1 + R2 + RS))]}²

Note that if R1 >> R2 we then have matched conditions of RL = R2 and RS = R1. Then the noise figure simplifies to

F = 2 + 16 R1/R2

Note that the simple resistive pad matching circuit is very poor from the standpoint of noise. The equivalent noise temperature is found by using

Te = T0(F − 1) = T0(1 + 16 R1/R2)

Problem A.7
a. The important relationships are

Fl = 1 + Tel/T0
Tel = T0(Fl − 1)
Te0 = Te1 + Te2/Ga1 + Te3/(Ga1 Ga2)

Completion of the table gives

Ampl. No.   F            Tei        Gai
1           not needed   300 K      10 dB = 10
2           6 dB         864.5 K    30 dB = 1000
3           11 dB        3360.9 K   30 dB = 1000

Therefore,

Te0 = 300 + 864.5/10 + 3360.9/[(10)(1000)] = 386.8 K

Hence,

F0 = 1 + Te0/T0 = 2.33 = 3.68 dB

b. With amplifiers 1 and 2 interchanged,

Te0 = 864.5 + 300/1000 + 3360.9/[(1000)(10)] = 865.14 K

This gives a noise figure of

F0 = 1 + 865.14/290 = 3.98 = 6 dB

c. See part (a) for the noise temperatures.

d. For B = 50 kHz, TS = 1000 K, and an overall gain of Ga = 10^7, we have, for the configuration of part (a),

Pna,out = Ga k(TS + Te0)B = 10^7 (1.38 × 10^-23)(1000 + 386.8)(5 × 10^4) = 9.57 × 10^-9 watts

We desire

Psa,out/Pna,out = 10^4 = Psa,out/(9.57 × 10^-9)

which gives

Psa,out = 9.57 × 10^-5 watts

For part (b), we have

Pna,out = 10^7 (1.38 × 10^-23)(1000 + 865.14)(5 × 10^4) = 1.29 × 10^-8 watts

Once again, we desire

Psa,out/Pna,out = 10^4 = Psa,out/(1.29 × 10^-8)

which gives

Psa,out = 1.29 × 10^-4 watts

and

Psa,in = Psa,out/Ga = 1.29 × 10^-11 watts
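The cascade calculations of this problem are conveniently checked with a short script (a sketch; the temperatures and gains are those tabulated above):

T0 = 290; k = 1.38e-23;
Te = [300 864.5 3360.9];          % stage noise temperatures, K
Ga = [10 1000 1000];              % stage available gains
Te0a = Te(1) + Te(2)/Ga(1) + Te(3)/(Ga(1)*Ga(2))     % part (a): 386.8 K
F0a  = 10*log10(1 + Te0a/T0)                         % 3.68 dB
Te0b = Te(2) + Te(1)/Ga(2) + Te(3)/(Ga(2)*Ga(1))     % part (b): 865.1 K
F0b  = 10*log10(1 + Te0b/T0)                         % 6.0 dB
TS = 1000; B = 50e3; G = 1e7;                        % part (d) values
Pna_out = G*k*(TS + Te0a)*B                          % 9.57e-9 W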

Problem A.8
a. The noise figure of the cascade is

Foverall = F1 + (F2 − 1)/Ga1 = L + (F − 1)/(1/L) = LF

b. For two identical attenuator-amplifier stages (each amplifier gain being L, so that each stage has unity overall gain),

Foverall = L + (F − 1)/(1/L) + (L − 1)/[(1/L)L] + (F − 1)/[(1/L)L(1/L)] = 2LF − 1 ≈ 2LF

c. Generalizing, for N stages we have

Foverall = NLF − (N − 1) ≈ N(LF), L >> 1

which reduces to the result of part (b) for N = 2.
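A quick numerical illustration (a sketch with assumed values, L = 3 dB and F = 6 dB, chosen only for the example):

L = 10^(3/10); F = 10^(6/10);          % assumed attenuator loss and amplifier noise figure
F1 = L + (F-1)/(1/L);                  % one attenuator-amplifier pair
F2 = F1 + (L-1)/((1/L)*L) + (F-1)/((1/L)*L*(1/L));   % two pairs (amplifier gain = L)
[F1 L*F; F2 2*L*F-1]                   % each row should agree: LF and 2LF - 1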

Problem A.9
a. The data for this problem are

Stage            Fi              Gi
1 (preamp)       2 dB = 1.58     G1
2 (mixer)        8 dB = 6.31     1.5 dB = 1.41
3 (amplifier)    5 dB = 3.16     30 dB = 1000

The overall noise figure is

F = F1 + (F2 − 1)/G1 + (F3 − 1)/(G1 G2)

which gives

5 dB = 3.16 ≥ 1.58 + (6.31 − 1)/G1 + (3.16 − 1)/(1.41 G1)

or

G1 ≥ [5.31 − (2.16/1.41)]/1.58 = 2.39 = 3.79 dB

b. First, assume that G1 = 15 dB = 31.62. Then

F = 1.58 + (6.31 − 1)/31.62 + (3.16 − 1)/[(1.41)(31.62)] = 1.8 = 2.54 dB

Then

Tes = T0(F − 1) = 290(1.8 − 1) = 230.95 K

and

Toverall = Tes + Ta = 230.95 + 300 = 530.95 K

Now use G1 as found in part (a):

F = 3.16
Tes = 290(3.16 − 1) = 626.4 K
Toverall = 300 + 626.4 = 926.4 K

c. For G1 = 15 dB = 31.62, Ga = 31.62(1.41)(1000) = 4.46 × 10^4. Thus

Pna,out = Ga k Toverall B = 4.46 × 10^4 (1.38 × 10^-23)(530.9)(10^7) = 3.27 × 10^-9 watts

For G1 = 3.79 dB = 2.39, Ga = 2.39(1.41)(1000) = 3.37 × 10^3. Thus

Pna,out = 3.37 × 10^3 (1.38 × 10^-23)(926.4)(10^7) = 4.31 × 10^-10 watts

Note that for the second case, we get less noise power out even with a larger Toverall. This is due to the lower gain of stage 1, which more than compensates for the larger input noise power.

d. A transmission line with loss L = 2 dB connects the antenna to the preamp. We first find TS for the transmission line/preamp/mixer/amp chain:

FS = FTL + (F1 − 1)/GTL + (F2 − 1)/(GTL G1) + (F3 − 1)/(GTL G1 G2)

where GTL = 1/L = 10^(-2/10) = 0.631 and FTL = L = 10^(2/10) = 1.58. Assume two cases for G1: 15 dB and 3.79 dB. First, for G1 = 15 dB = 31.6, we have

FS = 1.58 + (1.58 − 1)/0.631 + (6.31 − 1)/[(0.631)(31.6)] + (3.16 − 1)/[(0.631)(31.6)(1.41)] = 2.84

This gives

TS = 290(2.84 − 1) = 534 K

and

Toverall = 534 + 300 = 834 K

Now, for G1 = 3.79 dB = 2.39, we have

FS = 1.58 + (1.58 − 1)/0.631 + (6.31 − 1)/[(0.631)(2.39)] + (3.16 − 1)/[(0.631)(2.39)(1.41)] = 7.04

This gives

TS = 290(7.04 − 1) = 1752 K

and

Toverall = 1752 + 300 = 2052 K
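Part (d) is easily scripted for both preamp gains (a sketch; the stage values are those used above):

FTL = 10^(2/10); GTL = 1/FTL;          % 2-dB transmission line: F = L, G = 1/L
F1 = 1.58; F2 = 6.31; F3 = 3.16;       % stage noise figures as ratios
G2 = 1.41; Ta = 300; T0 = 290;         % mixer gain and antenna temperature
for G1 = [31.62 2.39]                  % preamp gain: 15 dB and 3.79 dB cases
   FS = FTL + (F1-1)/GTL + (F2-1)/(GTL*G1) + (F3-1)/(GTL*G1*G2);
   TS = T0*(FS-1);
   fprintf('G1 = %5.2f: FS = %4.2f, TS = %4.0f K, Toverall = %4.0f K\n', G1, FS, TS, TS+Ta)
end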

Problem A.10
a. Using Pna,out = Ga k TS B with the given values yields

Pna,out = 7.45 × 10^-6 watts

b. We want

Psa,out/Pna,out = 10^5

or

Psa,out = 10^5 (7.45 × 10^-6) = 0.745 watts

This gives

Psa,in = Psa,out/Ga = 0.745 × 10^-8 watts = −51.28 dBm

Problem A.11
a. For A = 1 dB, Y = 1.259 and the effective noise temperature is

Te = [600 − (1.259)(300)]/(1.259 − 1) = 858.3 K

For A = 1.5 dB, Y = 1.413 and the effective noise temperature is

Te = [600 − (1.413)(300)]/(1.413 − 1) = 426.4 K

For A = 2 dB, Y = 1.585 and the effective noise temperature is

Te = [600 − (1.585)(300)]/(1.585 − 1) = 212.8 K

b. These values can be converted to noise figure using

F = 1 + Te/T0

With T0 = 290 K, we get the following values: (1) for A = 1 dB, F = 5.98 dB; (2) for A = 1.5 dB, F = 3.938 dB; (3) for A = 2 dB, F = 2.39 dB.
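The Y-factor computations above are reproduced by the following sketch (600 K and 300 K are the hot and cold source temperatures used in the solution):

Th = 600; Tc = 300; T0 = 290;
for A_dB = [1 1.5 2]
   Y  = 10^(A_dB/10);                  % Y-factor corresponding to the measured attenuation
   Te = (Th - Y*Tc)/(Y - 1);           % effective noise temperature
   F  = 10*log10(1 + Te/T0);           % noise figure, dB
   fprintf('A = %.1f dB: Te = %5.1f K, F = %4.2f dB\n', A_dB, Te, F)
end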

Problem A.12
a. Using the data given, we can determine the following:

λ = 0.039 m
(λ/4πd)² = −202.4 dB
GT = 39.2 dB
PT GT = 74.2 dBW

This gives

PS = −202.4 + 74.2 + 6 − 5 = −127.2 dBW

b. Using Pn = kTeB for the noise power, we get

Pn = 10 log10[kT0 (Te/T0) B]
   = 10 log10(kT0) + 10 log10(Te/T0) + 10 log10(B)
   = −174 + 10 log10(1000/290) + 10 log10(10^6)
   = −108.6 dBm
   = −138.6 dBW

c.

PS/Pn = −127.2 − (−138.6) = 11.4 dB = 10^1.14 = 13.8 (ratio)

d. Assuming the SNR = z = Eb/N0 = 13.8, we get the results for various digital signaling techniques given in the table below:

Modulation type   Error probability
BPSK              Q(√(2z)) = 7.4 × 10^-8
DPSK              (1/2)exp(−z) = 5.06 × 10^-7
Noncoh. FSK       (1/2)exp(−z/2) = 5.03 × 10^-4
QPSK              Same as BPSK
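These error probabilities follow directly in MATLAB (a sketch; Q is the Gaussian Q-function):

Q = @(x) 0.5*erfc(x/sqrt(2));
z = 13.8;                               % Eb/N0 from part (c)
Pe_BPSK  = Q(sqrt(2*z))                 % about 7.4e-8 (also QPSK)
Pe_DPSK  = 0.5*exp(-z)                  % about 5.1e-7
Pe_NCFSK = 0.5*exp(-z/2)                % about 5.0e-4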


APPENDIX D: ZERO-CROSSING AND ORIGIN ENCIRCLEMENT STATISTICS


Problem D.1
a. Consider

z(t) = A cos[(ω0 + ωd)t] + n(t),  ω0 = 2πf0, ωd = 2πfd
     = A cos[(ω0 + ωd)t] + nc(t) cos ω0t − ns(t) sin ω0t

with the PSD of n(t) given by Sn(f) = N0/2 for |f − f0| ≤ B/2 and 0 otherwise. Note also that the PSD of the quadrature noise components is

Snc(f) = Sns(f) = N0 for |f| ≤ B/2, and 0 otherwise

Also, the quadrature noise components are uncorrelated. It follows that

n'c(t) cos(ω0 + ωd)t − n's(t) sin(ω0 + ωd)t
  = n'c(t)[cos ω0t cos ωdt − sin ω0t sin ωdt] − n's(t)[sin ω0t cos ωdt + cos ω0t sin ωdt]
  = [n'c(t) cos ωdt − n's(t) sin ωdt] cos ω0t − [n'c(t) sin ωdt + n's(t) cos ωdt] sin ω0t
  = nc(t) cos ω0t − ns(t) sin ω0t

Therefore

nc(t) = n'c(t) cos ωdt − n's(t) sin ωdt
ns(t) = n'c(t) sin ωdt + n's(t) cos ωdt

These two equations can be solved for n'c(t) and n's(t). The results are

n'c(t) = nc(t) cos ωdt + ns(t) sin ωdt
n's(t) = −nc(t) sin ωdt + ns(t) cos ωdt

To find the PSDs of n'c(t) and n's(t), we first find their autocorrelation functions. By definition,

Rn'c(τ) = E{[nc(t) cos ωdt + ns(t) sin ωdt][nc(t + τ) cos ωd(t + τ) + ns(t + τ) sin ωd(t + τ)]}
        = Rc(τ) cos ωdt cos ωd(t + τ) + Rs(τ) sin ωdt sin ωd(t + τ)

Noting that Rc(τ) = Rs(τ) and using a trig identity, we find that

Rn'c(τ) = Rc(τ) cos ωdτ

Similarly, it can be shown that

Rn's(τ) = Rs(τ) cos ωdτ

The PSDs follow by applying the modulation theorem:

Sn'c(f) = Sn's(f) = (1/2)Snc(f + fd) + (1/2)Snc(f − fd)
        = N0 for |f| ≤ B/2 − fd
        = N0/2 for B/2 − fd < |f| ≤ B/2 + fd
        = 0 otherwise

b. To find the cross-spectral density of n'c(t) and n's(t), we first find their cross-correlation function:

Rn'cn's(τ) = E[n'c(t) n's(t + τ)]
           = E{[nc(t) cos ωdt + ns(t) sin ωdt][−nc(t + τ) sin ωd(t + τ) + ns(t + τ) cos ωd(t + τ)]}
           = −Rc(τ) cos ωdt sin ωd(t + τ) + Rs(τ) sin ωdt cos ωd(t + τ)

Again noting that Rc(τ) = Rs(τ) and using a trig identity, we find that

Rn'cn's(τ) = −Rs(τ) sin ωdτ = −Rs(τ)[exp(jωdτ) − exp(−jωdτ)]/(2j)

Application of the frequency translation theorem results in the following for the cross-spectral density:

Sn'cn's(f) = (j/2)[Sns(f − fd) − Sns(f + fd)]
           = jN0/2 for B/2 − fd ≤ f ≤ B/2 + fd
           = −jN0/2 for −(B/2 + fd) ≤ f ≤ −(B/2 − fd)
           = 0 otherwise

For fd ≠ 0 this cross-PSD is not identically zero, as a sketch will show. Thus n'c(t) and n's(t) are not uncorrelated. However, they are uncorrelated when sampled at the same instant because Rn'cn's(0) = 0. Therefore, they are statistically independent when sampled at the same instant since they are jointly Gaussian.

Problem D.2
The derivation here follows S. O. Rice, "Noise in FM Receivers," Proc. of the Symp. on Time Series Analysis, M. Rosenblatt, ed., New York: Wiley (1963), pp. 395-422.


a. We need the additional clicks due to modulation. Thus, consider

z(t) = A cos(ω0 + ωd)t + nc(t) cos ω0t − ns(t) sin ω0t
     = A cos(ω0 + ωd)t + n'c(t) cos(ω0 + ωd)t − n's(t) sin(ω0 + ωd)t
     = [A + n'c(t)] cos(ω0 + ωd)t − n's(t) sin(ω0 + ωd)t
     ≜ R(t) cos[(ω0 + ωd)t + θ(t)]

where

R(t) = √{[A + n'c(t)]² + [n's(t)]²} and tan θ(t) = n's(t)/[A + n'c(t)]

It follows from the cross-correlation function found in Problem D.1 that

Rn'cn's(0) = E[n'c(t) n's(t)] = 0

From an appropriate sketch, it follows that the probability of a counterclockwise origin encirclement in a small time interval Δ is

Pcc = Pr{A + n'c(t) < 0 and n's(t) undergoes a + to − zero crossing in [0, Δ]}

The condition for n's(t) to undergo a + to − zero crossing in [0, Δ], from an appropriate sketch, is that n's(0) > 0 and that n's(0) + Δ dn's(t)/dt|t=0 < 0 for dn's(t)/dt|t=0 < 0. Therefore

Pcc = Pr[A + n'c(0) < 0, n's(0) > 0, n's(0) + Δ dn's(t)/dt|t=0 < 0, dn's(t)/dt|t=0 < 0]

Thus we need the joint pdf of

X ≜ n'c(0), Y ≜ n's(0), and Z ≜ dn's(t)/dt|t=0

These are jointly Gaussian random variables (the derivative is a linear operation). The following expectations hold:

E[X] = E[Y] = E[Z] = 0
E[X²] = E[Y²] = ∫ Sn'c(f) df = ∫ Sn's(f) df = N0B ≜ a
E[Z²] = ∫ |j2πf|² Sn's(f) df = (π²/3)N0B³ + 4π²fd²N0B ≜ b + aωd², where b ≜ π²N0B³/3
E[XY] = E[YZ] = 0
E[XZ] = dRn'cn's(τ)/dτ at τ = 0 = d[−Rs(τ) sin ωdτ]/dτ at τ = 0 = −aωd
E[Z | X] = E[dn's(t)/dt|t=0 | n'c(0)] = E[−ωd nc(0) + dns(t)/dt|t=0 | nc(0)] = −ωd E[nc(0) | nc(0)] = −ωd X

Therefore, the joint pdf of X, Y, Z is

fXYZ(x, y, z) = fZ|XY(z | x, y) fX(x) fY(y) = fZ|X(z | x) fX(x) fY(y)
             = [1/((2π)^(3/2) a √b)] exp{−[(x² + y²)/2a + (z + xωd)²/2b]}

Thus,

Pcc = Pr[X < −A, 0 < Y < −ΔZ, all Z < 0]
    = ∫ from x = −∞ to −A ∫ from z = −∞ to 0 ∫ from y = 0 to −Δz fXYZ(x, y, z) dy dz dx
    = ∫ from x = −∞ to −A ∫ from z = −∞ to 0 ∫ from y = 0 to −Δz fZ|X(z | x) fX(x) fY(y) dy dz dx

For Δ << 1, the inner integral over y is approximately −Δz fY(0), so

Pcc ≈ Δ ∫ from −∞ to −A ∫ from −∞ to 0 [exp(−x²/2a)/√(2πa)] [exp(−(z + xωd)²/2b)/√(2πb)] [−z/√(2πa)] dz dx

Substituting ξ = x/√a and ζ = z/√b, and defining

λ ≜ ωd √(a/b), ρ ≜ A²/2a, η ≜ −ξ

this becomes, after carrying out the integration over ζ,

Pcc = Δ √(b/a) { [1/(2π)^(3/2)] ∫ from √(2ρ) to ∞ exp[−(1 + λ²)η²/2] dη − (λ/2π) ∫ from √(2ρ) to ∞ η Q(λη) exp(−η²/2) dη }

The first integral is evaluated directly:

[1/(2π)^(3/2)] ∫ from √(2ρ) to ∞ exp[−(1 + λ²)η²/2] dη = Q(√(2ρ(1 + λ²)))/[2π √(1 + λ²)]

To evaluate the last integral, consider

∫ from D to ∞ x Q(Cx) exp(−x²/2) dx

integrated by parts with

u = Q(Cx), du = −[C/√(2π)] exp(−C²x²/2) dx, dv = x exp(−x²/2) dx, v = −exp(−x²/2)

Thus, by parts,

∫ from D to ∞ x Q(Cx) exp(−x²/2) dx = Q(CD) exp(−D²/2) − (C/√(2π)) ∫ from D to ∞ exp[−(1 + C²)x²/2] dx
                                    = Q(CD) exp(−D²/2) − [C/√(1 + C²)] Q(D √(1 + C²))

With C = λ and D = √(2ρ), the above integral becomes

∫ from √(2ρ) to ∞ η Q(λη) exp(−η²/2) dη = Q(λ√(2ρ)) exp(−ρ) − [λ/√(1 + λ²)] Q(√(2ρ(1 + λ²)))

Therefore,

Pcc = (Δ/2π) √(b/a) { Q(√(2ρ(1 + λ²)))/√(1 + λ²) + [λ²/√(1 + λ²)] Q(√(2ρ(1 + λ²))) − λ Q(λ√(2ρ)) exp(−ρ) }
    = (Δ/2π) √(b/a) { √(1 + λ²) Q(√(2ρ(1 + λ²))) − λ Q(λ√(2ρ)) exp(−ρ) }

where

λ = ωd √(a/b) = 2√3 fd/B
(1/2π) √(b/a) = (1/2π) √[(π²/3)N0B³/(N0B)] = B/(2√3)
ρ = A²/2a = A²/(2N0B)

b. It follows that the probability of a clockwise origin encirclement is given by this expression but with fd replaced by −fd (that is, λ by −λ). Taking this into account,

Pc = (Δ/2π) √(b/a) {√(1 + λ²) Q(√(2ρ(1 + λ²))) + λ Q(−λ√(2ρ)) exp(−ρ)}
   = (Δ/2π) √(b/a) {√(1 + λ²) Q(√(2ρ(1 + λ²))) + λ[1 − Q(λ√(2ρ))] exp(−ρ)}
   = Pcc + Δ fd exp(−ρ) > Pcc, fd > 0

Therefore

νmod = (1/Δ)(Pc + Pcc) = (2/Δ)Pcc + fd exp(−A²/2N0B), fd > 0

To make this applicable to square-wave modulation, replace fd by |fd| to get

νsw mod = 2[(B/(2√3)) √(1 + λ²) Q(√(2ρ(1 + λ²))) − fd Q(λ√(2ρ)) exp(−ρ)] + |fd| exp(−ρ) ≜ ν + δν

where

ν = 2[(B/(2√3)) √(1 + 12(fd/B)²) Q(√(2ρ(1 + 12(fd/B)²))) − fd Q(λ√(2ρ)) exp(−ρ)]   (reduces to (D.23) for fd = 0)

δν = |fd| exp(−ρ) = |fd| exp(−A²/2N0B)

For B = 2fd these become

ν = 2[(2fd/√3) Q(√(8ρ)) − fd Q(√(6ρ)) exp(−ρ)]
  = (4fd/√3) Q(2√(A²/N0B)) − 2fd Q(√(3A²/N0B)) exp(−A²/2N0B)

δν = |fd| exp(−A²/2N0B)

Typical results for ν and δν are shown below versus the SNR A²/2N0B.
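The curves referred to here can be regenerated with a short script such as the following sketch. It assumes B = 2fd, as in the specialization above, and uses the Gaussian Q-function; set fd = 10 for the second figure.

Q = @(x) 0.5*erfc(x/sqrt(2));
fd = 5;                                            % peak frequency deviation, Hz
rhodB = -15:0.5:10; rho = 10.^(rhodB/10);          % SNR = A^2/(2*N0*B)
nu  = (4*fd/sqrt(3))*Q(sqrt(8*rho)) - 2*fd*Q(sqrt(6*rho)).*exp(-rho);   % nu for B = 2*fd
dnu = fd*exp(-rho);                                % delta-nu due to the modulation
semilogy(rhodB, nu, rhodB, dnu)
xlabel('A^2/(2N_0B) in dB'); ylabel('\nu, \delta\nu'); grid
legend('\nu','\delta\nu')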


[Figure: Average encirclements per second, ν and δν, versus A²/(2N0B) in dB, for fd = 5 Hz.]


[Figure: Average encirclements per second, ν and δν, versus A²/(2N0B) in dB, for fd = 10 Hz.]
