(0, 0, 1), (0, 1/3, 2/3), (0, 1/2, 1/2), (1/3, 1/3, 1/3), (1/10, 3/5, 3/10), (2/5, 2/5, 1/5)

2.5. (1, 0, 0), (0, 1, 0)

2.6. If a $\sigma$-strategist plays in the population $\sum_j \alpha_j \delta_{p_j}$, where $p_j$ is a mixed strategy $p_j = \sum_i p_{j,i} S_i$, then the $\sigma$-strategist plays against $S_i$ with probability $\sum_j \alpha_j p_{j,i}$. It is thus the same as playing in the population $\delta_{\bar p}$.

2.7. $E[p, q] = pAq^T = (-p_2 + p_3)q_1 + (p_1 - p_3)q_2 + (-p_1 + p_2)q_3 = p_1(q_2 - q_3) + p_2(q_3 - q_1) + p_3(q_1 - q_2)$. Let $m = \max\{q_2 - q_3,\ q_3 - q_1,\ q_1 - q_2\}$. The best reply to $q$ is $(1, 0, 0)$ if $q_2 - q_3 = m$; it is $(0, 1, 0)$ if $q_3 - q_1 = m$; and it is $(0, 0, 1)$ if $q_1 - q_2 = m$.

2.8. (i) generic, (ii) technically non-generic but does not affect the analysis, (iii) non-generic, (iv) generic, (v) technically non-generic but does not affect the analysis, (vi) technically non-generic but does not affect the analysis, (vii) non-generic, (viii) non-generic.

2.9. By Exercise 2.6, the payoffs are the same as if the game is played against an individual playing $p = 0.4(0.5, 0.5) + 0.3(1, 0) + 0.3(0.2, 0.8) = (0.56, 0.44)$. We thus get $E[(0.5, 0.5); \Pi] = (0.5, 0.5)A(0.56, 0.44)^T = 0.4$ and similarly for the other strategies, giving payoffs 0.56 and 0.464.

2.10. For the matrix games, the mean strategy is as in Exercise 2.9, $p = (0.56, 0.44)$. If the opponent is selected based on its probability of playing $S_1$, the mean strategy is
$$\tilde p = \frac{0.5}{0.5 + 1 + 0.2}(0.5, 0.5) + \frac{1}{0.5 + 1 + 0.2}(1, 0) + \frac{0.2}{0.5 + 1 + 0.2}(0.2, 0.8).$$
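The best-reply rule of Exercise 2.7 can be checked numerically. This is a sketch; the matrix A below is the RSP-type matrix implied by the expansion of $E[p, q]$ above.

```python
import numpy as np

# Matrix implied by E[p,q] = p1(q2-q3) + p2(q3-q1) + p3(q1-q2);
# rows are the focal player's pure strategies S1, S2, S3.
A = np.array([[0, 1, -1],
              [-1, 0, 1],
              [1, -1, 0]])

def best_reply(q):
    """Return the index of the pure best reply to the mixed strategy q."""
    payoffs = A @ q  # E[S_i, q] for i = 1, 2, 3
    return int(np.argmax(payoffs))

# Against q = (0.2, 0.5, 0.3): m = max(q2-q3, q3-q1, q1-q2) = q2-q3 = 0.2,
# so the best reply is S1.
q = np.array([0.2, 0.5, 0.3])
print(best_reply(q))  # 0 (i.e. S1)
```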
Chapter 3

3.1. [Figure: simulated trajectories of p against Generation, in panels with β = 0, 1, 10, 100.]
3.2. Note that β here is the same as β + 1 in Exercise 3.1.
3.3.
$$\frac{dp}{dt} = p\Big(ap + b(1-p) - p\big(ap + b(1-p)\big) - (1-p)\big(cp + d(1-p)\big)\Big) = p(1-p)\big((a-c)p + (b-d)(1-p)\big) = p(1-p)h(p).$$
A point $p$ is an asymptotically stable rest point of the dynamics $\frac{d}{dt}p = f(p)$ if and only if $f(p) = 0$ and $f'(p) < 0$. The conclusion follows by setting $f(p) = p(1-p)h(p)$.

3.4. $L(x^*) \ge L(x)$ can be written as
$$\sum_{i=1}^{n} x_i^* \ln\frac{x_i}{x_i^*} \le 0,$$
and this inequality is satisfied because the logarithm is a concave function and Jensen's inequality implies
$$\sum_{i=1}^{n} x_i^* \ln\frac{x_i}{x_i^*} \le \ln\left(\sum_{i=1}^{n} x_i\right) = \ln(1) = 0.$$
To prove that $L$ is increasing along trajectories, we calculate
$$\frac{d\ln L(x(t))}{dt} = \frac{1}{L}\frac{dL(x(t))}{dt} = \sum_{i=1}^{n} \frac{x_i^*}{x_i(t)}\frac{dx_i(t)}{dt} = \sum_{i=1}^{n} x_i^*\Big(\big(Ax(t)^T\big)_i - x(t)Ax(t)^T\Big) = x^*Ax(t)^T - x(t)Ax(t)^T > 0.$$
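The two-strategy replicator dynamics of Exercise 3.3 can be integrated numerically. This is a sketch using Euler steps; the payoff values a, b, c, d are illustrative, not taken from the exercise.

```python
# dp/dt = p(1-p) h(p) with h(p) = (a-c)p + (b-d)(1-p);
# when a-b-c+d < 0 the interior root of h is asymptotically stable.
a, b, c, d = 0.0, 1.0, 0.5, 0.0   # illustrative payoffs, a-b-c+d = -1.5 < 0

def h(p):
    return (a - c) * p + (b - d) * (1 - p)

p_star = (d - b) / (a - b - c + d)  # interior rest point, here 2/3

p, dt = 0.1, 0.01
for _ in range(10000):
    p += dt * p * (1 - p) * h(p)    # forward Euler step

print(abs(p - p_star) < 1e-6)  # True: the trajectory converges to p*
```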
3.5. $E[p, S_i] = \sum_j p_j a_{ji} \le \sum_j p_j a_{ii} = a_{ii} = E[S_i, S_i]$. For the second case, $E[p, S_i] = \sum_j p_j a_{ji} < \sum_j p_j a_{ii} = a_{ii} = E[S_i, S_i]$.

3.6. [Figure: three panels plotting the best response to s against s, representing the strategy (s, 1-s).]
3.7. The Nash equilibria are a) (0, 1), b) (1/3, 2/3), c) (1, 0), (0, 1), (1/2, 1/2). All of these are ESSs except (1/2, 1/2) in c).

3.8. No, the best reply to the strategy is to play Paper in the first round and then repeat the opponent's move from the previous round.

3.9. For a trembling hand Nash equilibrium we need $E[p_\varepsilon, p_\varepsilon] \ge E[q_\varepsilon, p_\varepsilon]$ for all $q \ne p$. If the inequality is satisfied for all small $\varepsilon > 0$, then it is satisfied also for $\varepsilon = 0$.

3.10. For $p \in (0, 1)$, we can use the incentive function from Exercise 3.3. We get $h(p) = (a-c)p + (b-d)(1-p)$, i.e. $h(p) = 0$ if and only if $p = (d-b)/(a-b-c+d)$. Also, $h'(p) = a-b-c+d$. Thus, $p \in (0, 1)$ is stable if and only if $a-b-c+d < 0$, and as seen in Section 6.2 these are exactly the conditions for $(p, 1-p)$ (with $p \in (0, 1)$) to be an ESS. In the generic case, the point $p = 1$ is a stable point of the dynamics if and only if $h(1) > 0$, i.e. if $a > c$, which corresponds to (1, 0) being an ESS. Similarly for $p = 0$.

3.11. See Figure 3.1(a) and compare to Section 6.4.4 to see a specific payoff matrix of an RSP game with no ESS but an asymptotically stable rest point.
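The criterion in Exercise 3.10 can be sketched in code (the example payoff matrices are illustrative, not from the text):

```python
# For a 2x2 game with payoffs a, b, c, d, the interior rest point of the
# replicator dynamics is p* = (d-b)/(a-b-c+d); it is stable (equivalently,
# (p*, 1-p*) is an ESS) iff h'(p*) = a-b-c+d < 0.
def interior_ess(a, b, c, d):
    denom = a - b - c + d
    p = (d - b) / denom
    if 0 < p < 1 and denom < 0:
        return p
    return None

print(interior_ess(0, 2, 1, 0))  # 2/3: anti-coordination, mixed ESS exists
print(interior_ess(2, 0, 0, 1))  # None: coordination game, p* = 1/3 unstable
```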
Chapter 4

4.1. Following Gintis (2000), the fish play the following game

          C     D
    C     5    -3
    D     8     0

with D the only Nash equilibrium. Cooperation can be achieved if the game is repeated, see Gintis (2000).

4.2. Up to a permutation of the order, there are the following seven ways to divide T = 6 as a sum of N = 3 nonnegative integers: 006, 015, 024, 033, 114, 123, 222. Any such division yields a mating strategy; and for the strategy to be successful, it has to be based on random permutations of the division (for example, 006 yields a strategy that with probability 1/3 courts for t = 6 to a specific female and 0 to the other two).
This then yields the following payoff matrix

           006    015    024    033    114    123    222
    006    9/6    8/6    8/6    8/6    6/6    6/6    6/6
    015   10/6    9/6    9/6    9/6    8/6    7/6    6/6
    024   10/6    9/6    9/6    9/6    9/6    9/6    9/6
    033   10/6    9/6    9/6    9/6    8/6   10/6   12/6
    114   12/6   10/6    9/6   10/6    9/6    8/6    6/6
    123   12/6   11/6    9/6    8/6   10/6    9/6    9/6
    222   12/6   12/6    9/6    6/6   12/6    9/6    9/6

It follows that the only pure best reply to itself is 024, but this is not an ESS, so there are no pure ESSs. Using the results of Chapter 6, we can also show that there are no mixed ESSs since for $z$ with $\sum_i z_i = 0$ we have
$$zAz^T = \frac{3}{2}\left(\sum_i z_i\right)^2 = 0.$$
4.3. The payoff to a player telling $x$ when the other told $y$ is given by
$$E[x, y] = \begin{cases} x, & \text{if } x = y,\\ y + 2, & \text{if } x > y,\\ x - 2, & \text{if } x < y.\end{cases}$$
It follows that the best response to $y < 8$ is $x = y + 1$. Consequently, the unique Nash equilibrium (and ESS if considered as an evolutionary game) is to say 8.

4.4. The payoff matrix is given by

            Stag   Hare
    Stag     5      0
    Hare     1      1

There are two pure Nash equilibria (and both are ESSs): both hunters hunt the stag, or both hunt hares.
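The equilibrium claim in Exercise 4.3 can be brute-forced over a finite grid of claims. This is a sketch; the upper limit 8 is from the solution, while the lower limit 2 and the restriction to integer claims are assumptions.

```python
# Payoffs: x if x == y; y + 2 to the higher claimant (x > y);
# x - 2 to the lower claimant (x < y).  Claims assumed to range over 2..8.
CLAIMS = range(2, 9)

def payoff(x, y):
    if x == y:
        return x
    return y + 2 if x > y else x - 2

def best_responses(y):
    best = max(payoff(x, y) for x in CLAIMS)
    return [x for x in CLAIMS if payoff(x, y) == best]

# For y < 8 every x > y is a best response (in particular x = y + 1);
# the unique symmetric Nash equilibrium is both players saying 8.
print(best_responses(5))                                  # [6, 7, 8]
print([y for y in CLAIMS if y in best_responses(y)])      # [8]
```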
4.5. The payoff matrix is given by

              Place 1   Place 2
    Place 1      a         A
    Place 2      A         a

with a < A. The only ESS of this game is to go to place 1 with probability 1/2.
4.6. The payoff matrix is given by

              Pond   Puddle
    Pond       q       Q
    Puddle     p       p

with Q > p > q. The only ESS is to go to the pond with probability (Q - p)/(Q - q).

4.7. Let $c_i$, $i = 1, 2$, be the cost of an item purchased by player $i$. The payoff to player 1 is thus given by $f_1(c_1) - (c_1 + c_2)/2$. The best reply (to any choice of player 2) is thus to choose an item that maximizes $f_1(c) - c/2$. For example, when $f_1(c) = c$, player 1 should choose the most expensive item. When $f_1(c) = \sqrt{c}$, player 1 should choose an item with cost $c = 1$.

4.8. Let $q = (q, 1-q)$ be the mean strategy in the population. Being away from the equilibrium line means that $q \ne p_{ESS}$. The mean fitness of the population is $E[q, q]$, where $E$ denotes the payoff in the classical Hawk-Dove game. Since $p_{ESS}$ is an ESS of the game and $E[q, p_{ESS}] = E[p_{ESS}, p_{ESS}]$, we must have $E[p_{ESS}, q] > E[q, q]$.

4.9. If $V > C$, then Hawk is the best response to any strategy. If $V < C$, then the best response to $p = (p, 1-p)$ for $p > V/C$ is to play Dove; and the best response to $p$ when $p < V/C$ is to play Hawk. Consequently, $p = (V/C, 1-V/C)$ is the only candidate for an ESS, with Hawk and Dove scoring equally well against such a strategy $p$. Thus, $E[q, p] = E[p, p]$ for all $q$. One can easily check that $E[p, q] > E[q, q]$ and thus $p$ is an ESS.

4.10. The only way to score more than an opponent is to defect sooner (or more) than the opponent. However, TFT defects only after its opponent's defection.

4.11. We assume that there are no atoms of probability. Writing $P(x)$ for the distribution function of $p$ (with density $p(x)$) and $Q(y)$ for that of $q$ (with density $q(y)$),
$$E[p, q] = \iint_{x>y} (V - cy)\,p(x)q(y)\,dx\,dy - \iint_{x<y} cx\,p(x)q(y)\,dx\,dy = V\iint_{x>y} p(x)q(y)\,dx\,dy - c\iint \min(x, y)\,p(x)q(y)\,dx\,dy = V P[X > Y] - cE[\min(X, Y)].$$

4.12. a) For $t \ge 3$, the best response is to leave right away, i.e. $S_0$; this yields the payoff of 0. For $t < 3$, any $S_x$ for $x > t$ is a best response; this yields the payoff of $6 - 2t$. b) The ESS strategy has density function $p(t) = \exp(-t/3)/3$. (i) The ESS is $p(t) = \exp(-t/3)/3$ for $t \le 7$ and $P[T = 10] = \exp(-7/3)$. (ii) The payoff matrix is

             S0     S5     S8    S10
    S0        3      0      0      0
    S5        6     -7    -10    -10
    S8        6     -4    -13    -16
    S10       6     -4    -10    -17
and the ESS is given by (0.7295, 0.1721, 0.0246, 0.0738).

4.13. The payoff to a p-player is proportional to
$$\frac{p}{m + \varepsilon(p-m)} + \frac{1-p}{1 - m - \varepsilon(p-m)} \approx \frac{p}{m}\left(1 + \varepsilon\frac{m-p}{m}\right) + \frac{1-p}{1-m}\left(1 + \varepsilon\frac{p-m}{1-m}\right) = \frac{p}{m} + \frac{1-p}{1-m} + \varepsilon(m-p)\left(\frac{p}{m^2} - \frac{1-p}{(1-m)^2}\right).$$
If $m \ne 1/2$ we can ignore the $\varepsilon$ term, as the first one dominates. If $m = 1/2$, the above becomes $2 - 2\varepsilon(1-2p)^2$, so that the unique best reply to $m = 1/2$ is $p = 1/2$.

4.14. If we have available strategies on either side of 0.5 and not 0.5 itself, the ESS must (1) involve a strategy from either side and (2) give an overall population sex ratio of 0.5; otherwise the involved strategies cannot all have equal fitness. (i) To achieve a population sex ratio of 0.5, we must have $0.2x + 0.6(1-x) = 0.5$, so the proportion of 0.2-players is $x = 0.25$, with a proportion 0.75 of 0.6-players. (ii) Here the sex ratio will always be less than 0.5, so the strategy with the most females, 0.3, is best, and the ESS is all playing 0.3. (iii) Here the ESS must involve 0.6. There is an equilibrium involving all three strategies, but this will be subject to drift. The mixture of 0.3 and 0.6 with the proportion $x$ of 0.3-players satisfying $0.3x + 0.6(1-x) = 0.5$, i.e. $x = 1/3$, is the unique ESS, in particular resisting invasion from the strategy 0.2.
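Returning to Exercise 4.9, the ESS conditions for the Hawk-Dove mixture can be verified numerically. This is a sketch assuming standard Hawk-Dove payoffs and the sample values V = 2, C = 3.

```python
import numpy as np

V, C = 2.0, 3.0
# Standard Hawk-Dove payoff matrix; rows/columns are (Hawk, Dove).
A = np.array([[(V - C) / 2, V],
              [0.0, V / 2]])

def E(p, q):
    return p @ A @ q

p = np.array([V / C, 1 - V / C])  # candidate ESS
for q in [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([0.3, 0.7])]:
    assert abs(E(q, p) - E(p, p)) < 1e-12  # every q ties against p...
    if not np.allclose(q, p):
        assert E(p, q) > E(q, q)           # ...but p beats q in q-populations
print("p = (V/C, 1 - V/C) satisfies the ESS conditions")
```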
Chapter 5
5.1. The frequencies are as follows.

                                      P_A     P_a     p_AA     p_Aa     p_aa
    Population 1                      1/2     1/2     1/4      1/2      1/4
    Population 2                      1/10    9/10    1/100    18/100   81/100
    Sample of 1 and 2                 3/10    7/10    13/100   34/100   53/100
    Sample as from one population     3/10    7/10    9/100    42/100   49/100

For populations 1 and 2, $P_A$ was given, $P_a$ was calculated as $1 - P_A$, and $p_{AA}$, $p_{Aa}$ and $p_{aa}$ were calculated by formulas (5.1)-(5.2). The third line is an average of the first two lines. When considering the sample as if it comes from a single random mating population, $P_A$ and $P_a$ stay the same as in the third line and $p_{AA}$, $p_{Aa}$ and $p_{aa}$ were again calculated by formulas (5.1)-(5.2).

5.2. Let $p_{i,j}$ denote the probability that $N_A = j$ at time point $t+1$ given $N_A = i$ at time point $t$. We get
$$p_{i,j} = \begin{cases} \dfrac{N_A}{N}\cdot\dfrac{N-N_A}{N}, & j = i+1,\ N > i > 0,\\[4pt] \dfrac{N-N_A}{N}\cdot\dfrac{N_A}{N}, & j = i-1,\ N > i > 0,\\[4pt] \left(\dfrac{N_A}{N}\right)^2 + \left(\dfrac{N-N_A}{N}\right)^2, & j = i,\ N > i > 0,\\[4pt] 0, & \text{otherwise.}\end{cases}$$
If $T_i$ denotes the expected time to fixation when $N_A = i$, we get
$$T_i = 1 + p_{i,i+1}T_{i+1} + p_{i,i-1}T_{i-1} + p_{i,i}T_i, \quad 0 < i < N,$$
$$T_i = 0, \quad i = 0, N.$$
Following Traulsen and Hauert (2009), we can solve the above system as follows. Set $z_j = T_j - T_{j-1}$ and $\gamma_j = \frac{p_{j,j-1}}{p_{j,j+1}}$. We get
$$z_{j+1} = \gamma_j z_j - \frac{1}{p_{j,j+1}}$$
and thus, iteratively,
$$z_1 = T_1 - T_0 = T_1,$$
$$z_2 = T_2 - T_1 = \gamma_1 T_1 - \frac{1}{p_{1,2}},$$
$$z_3 = T_3 - T_2 = \gamma_2\gamma_1 T_1 - \frac{\gamma_2}{p_{1,2}} - \frac{1}{p_{2,3}},$$
$$\vdots$$
$$z_k = T_k - T_{k-1} = T_1\prod_{m=1}^{k-1}\gamma_m - \sum_{l=1}^{k-1}\frac{1}{p_{l,l+1}}\prod_{m=l+1}^{k-1}\gamma_m.$$
Since
$$\sum_{k=j+1}^{N} z_k = T_N - T_j = -T_j,$$
we get
$$T_1 = -\sum_{k=2}^{N} z_k = \frac{1}{1 + \sum_{k=1}^{N-1}\prod_{j=1}^{k}\gamma_j}\sum_{k=1}^{N-1}\sum_{l=1}^{k}\frac{1}{p_{l,l+1}}\prod_{j=l+1}^{k}\gamma_j.$$
Finally, from the above, we get
$$T_j = -\sum_{k=j+1}^{N} z_k = -T_1\sum_{k=j}^{N-1}\prod_{m=1}^{k}\gamma_m + \sum_{k=j}^{N-1}\sum_{l=1}^{k}\frac{1}{p_{l,l+1}}\prod_{m=l+1}^{k}\gamma_m.$$
5.3. The dynamics for the frequency of copies of allele A in the population is given by $P_A(t+1) = N_A(t+1)/N(t+1)$, where $N_A(t)$ is the number of copies of A at time $t$. Considering the counts of AA, Aa and aa offspring, we get (5.11), which can be further rearranged as
$$P_A(t+1) = P_A(t)\,\frac{W_{AA}P_A(t) + W_{Aa}\big(1-P_A(t)\big)}{W_{AA}P_A(t)^2 + 2W_{Aa}P_A(t)\big(1-P_A(t)\big) + W_{aa}\big(1-P_A(t)\big)^2},$$
where $W_{AA}P_A(t) + W_{Aa}(1-P_A(t))$ can be considered to be the fitness of an allele A (which with probability $P_A(t)$ ends up together with another allele A, resulting in viability $W_{AA}$, and with probability $1-P_A(t)$ ends up together with an allele a, resulting in viability $W_{Aa}$). Similarly, $W_{AA}P_A(t)^2 + 2W_{Aa}P_A(t)(1-P_A(t)) + W_{aa}(1-P_A(t))^2$ is the mean fitness in the population. For the dependence on $P_A(t)$, see for example Figure 5.2.

5.4. We have
$$W_{AA} = H_{AA}E_H + (1-H_{AA})E_D, \quad W_{Aa} = H_{Aa}E_H + (1-H_{Aa})E_D, \quad W_{aa} = H_{aa}E_H + (1-H_{aa})E_D,$$
where $W_{xy}$ is the fitness of the genotype $xy$, and $E_H$ and $E_D$ are the payoffs to a Hawk and a Dove, respectively. Thus
$$W_{AA} - W_{Aa} = (H_{AA} - H_{Aa})(E_H - E_D), \qquad W_{aa} - W_{Aa} = (H_{aa} - H_{Aa})(E_H - E_D).$$
When $E_H \ne E_D$, the only possible mixed equilibrium of the dynamics (5.11) is given by (5.13) as
$$P_A = \frac{W_{aa} - W_{Aa}}{W_{AA} - 2W_{Aa} + W_{aa}} = \frac{H_{aa} - H_{Aa}}{H_{AA} - 2H_{Aa} + H_{aa}}.$$
As discussed in Section 5.2.2, for the above to be an equilibrium, we need the signs of $H_{aa} - H_{Aa}$ and $H_{AA} - H_{Aa}$ to be the same.

5.5. Assume that the individuals that play the game are part of an infinite diploid random mating population. Let there be two alleles A and a, and let each genotype $xy \in \{AA, Aa, aa\}$ have a (potentially) different probability $D_{xy}$ of playing Defector. Writing the payoffs to Defectors and Cooperators as $E_D$ and $E_C$, the payoff $W_{xy}$ to the genotype $xy$ is given by $W_{xy} = D_{xy}E_D + (1-D_{xy})E_C$. Since, under any circumstances, $E_D > E_C$, the only possible mixed equilibrium of the dynamics (5.11) is given as in Exercise 5.4 by
$$P_A = \frac{D_{aa} - D_{Aa}}{D_{AA} - 2D_{Aa} + D_{aa}}.$$
As discussed in Section 5.2.2, for the above to be an equilibrium, we need the signs of $D_{aa} - D_{Aa}$ and $D_{AA} - D_{Aa}$ to be the same; for the equilibrium to be stable, we further need $D_{aa} - D_{Aa} < 0$ and $D_{AA} - D_{Aa} < 0$.

5.6. The situation is similar to Exercises 5.4 and 5.5 for any two-strategy game. For three or more strategies this can become very complex.

5.7. We have the following situation.

    Genotype     AB                aB                     Ab                     ab
    Frequency    β_t α_{B,t}       β_t (1 - α_{B,t})      (1 - β_t) α_{b,t}      (1 - β_t)(1 - α_{b,t})
    Fitness      1 + s             1 + s                  1                      1

The mean population growth is $(1 + \beta_t s)$ and thus we get that the proportion of B, $\beta_t$, evolves according to
$$\beta_{t+1} = \frac{\beta_t(1+s)}{1 + s\beta_t}.$$
An offspring of type AB can result from one of the following 5 possible mating scenarios: 1) AB with AB, 2) AB with aB, 3) AB with Ab, 4) AB with ab, 5) aB with Ab. In 1)-3), AB results irrespective of the genes separating, in 4) the genes cannot separate, and in 5) the genes have to separate. Given the above, in case 2), and similarly in cases 3)-5), an offspring AB results only with probability 1/2, but the pair AB and aB meets with probability $2p_{AB}p_{aB}$, so the 1/2 cancels with the 2. Let $P_{AB}$ denote the proportion of type AB in the next generation.
$$(1+\beta_t s)^2 P_{AB}(t+1) = (1+s)^2\beta_t^2\alpha_{B,t}^2 + (1+s)^2\beta_t^2\alpha_{B,t}(1-\alpha_{B,t}) + (1+s)\beta_t\alpha_{B,t}(1-\beta_t)\alpha_{b,t} + (1+s)\beta_t\alpha_{B,t}(1-\beta_t)(1-\alpha_{b,t})(1-c) + (1+s)\beta_t(1-\alpha_{B,t})(1-\beta_t)\alpha_{b,t}c$$
$$= \beta_t^2(1+s)^2\alpha_{B,t} + \beta_t(1-\beta_t)\alpha_{B,t}(1+s) + c\beta_t(1-\beta_t)(\alpha_{b,t}-\alpha_{B,t})(1+s).$$
Then $\alpha_{B,t+1} = P_{AB}(t+1)/\beta_{t+1}$, which after substituting into the above gives the required solution. Similarly, an offspring of type Ab can result from one of the following 5 possible mating scenarios: 1) Ab with Ab, 2) Ab with ab, 3) Ab with AB, 4) Ab with aB, 5) ab with AB, and calculations similar to the above yield
$$(1+\beta_t s)\alpha_{b,t+1} = (1+\beta_t s)\alpha_{b,t} + c(1+s)\beta_t(\alpha_{B,t} - \alpha_{b,t}).$$

5.8. Subtracting (5.19) from (5.20) we get
$$\alpha_{B,t+1} - \alpha_{b,t+1} = (\alpha_{B,t} - \alpha_{b,t})(1-c).$$
Consequently, since $\alpha_{B,0} = 0$, we have $\alpha_{B,t} - \alpha_{b,t} = -\alpha_{b,0}(1-c)^t$. Further, we get from (5.19)
$$\alpha_{B,t+1} = \alpha_{B,t} + c\,\frac{1-\beta_t}{1+\beta_t s}(\alpha_{b,t} - \alpha_{B,t}) = \alpha_{B,t} + c\alpha_{b,0}(1-c)^t\,\frac{1-\beta_t}{1+\beta_t s} = \alpha_{B,t} + c\alpha_{b,0}(1-c)^t\,\frac{1-\beta_0}{1-\beta_0+\beta_0(1+s)^{t+1}}$$
using (5.21). Thus we have
$$\alpha_{B,t+1} = c\alpha_{b,0}(1-\beta_0)\sum_{n=0}^{t}\frac{(1-c)^n}{1-\beta_0+\beta_0(1+s)^{n+1}}.$$
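The closed-form sum can be verified against direct iteration of the recursions from Exercise 5.7. This is a sketch with arbitrary parameter values.

```python
# Iterate the recursions for beta_t, alpha_{B,t}, alpha_{b,t} and compare
# alpha_B after 20 steps with the closed-form sum.
s, c, beta0, alphab0 = 0.1, 0.3, 0.05, 0.8

beta, aB, ab = beta0, 0.0, alphab0
for t in range(20):
    aB_next = aB + c * (1 - beta) / (1 + s * beta) * (ab - aB)
    ab_next = ab + c * (1 + s) * beta / (1 + s * beta) * (aB - ab)
    beta = beta * (1 + s) / (1 + s * beta)
    aB, ab = aB_next, ab_next

closed = c * alphab0 * (1 - beta0) * sum(
    (1 - c)**n / (1 - beta0 + beta0 * (1 + s)**(n + 1)) for n in range(20))
print(abs(aB - closed) < 1e-12)  # True: iteration matches the closed form
```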
Heterozygosity will be large if fixation takes longer to occur (small $\beta_0$, small $s$), or for large (but not too large) $\alpha_{b,0}$ and large $c$. For the effect of $s$ and $c$, see Figure 5.3.

5.9. Let $W_{ij}$ denote the viability of an $A_iA_j$ individual and $P_2(t)$ be the proportion of $A_2$ at generation $t$ (after the segregation took place). Thus, before the segregation takes place for the next generation, the numbers of individuals will be in the following relative proportions: $W_{22}P_2^2(t)$ for $A_2A_2$, $W_{11}(1-P_2(t))^2$ for $A_1A_1$ and $2W_{12}P_2(t)(1-P_2(t))$ for $A_1A_2$. Since an $A_1A_2$ individual has probability $S$ of passing allele $A_2$ to the next generation, we get that
$$P_2(t+1) = \frac{W_{22}P_2^2(t) + 2SW_{12}P_2(t)\big(1-P_2(t)\big)}{W_{22}P_2^2(t) + 2W_{12}P_2(t)\big(1-P_2(t)\big) + W_{11}\big(1-P_2(t)\big)^2}.$$
The rest points are either 0, 1 or a mixed one solving $P_2(t+1) = P_2(t)$ in the above, which gives $P_2(t)(W_{22} + W_{11} - 2W_{12}) = W_{11} - 2SW_{12}$. Analysis similar to the previous cases can be used to show when the rest points are stable.

5.10. The evolution of allele frequencies for a diploid model follows (5.11). In our genetic hitchhiking example, A/a is neutral, so frequencies change following the fitness of the B/b alleles. The equivalent terms from (5.11) would give $W_{Bb} = W_B + W_b = 1 + s + 1$ etc., leading to
$$P_B(t+1) = \frac{2(1+s)P_B(t)^2 + (2+s)P_B(t)\big(1-P_B(t)\big)}{2(1+s)P_B(t)^2 + 2(2+s)P_B(t)\big(1-P_B(t)\big) + 2\big(1-P_B(t)\big)^2} = P_B(t)\left(\frac{1}{2} + \frac{1}{2}\cdot\frac{1+s}{1+sP_B(t)}\right),$$
which, compared to the relationship between $\beta_{t+1}$ and $\beta_t$, is the same difference equation except moving at "half speed".
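The "half speed" claim can be seen numerically by iterating the recursion for $P_B$ alongside the recursion for $\beta_t$ from Exercise 5.7 (a sketch):

```python
# P_B(t+1) = P_B(t) * (1/2 + (1/2)(1+s)/(1 + s P_B(t)))  -- "half speed"
# beta(t+1) = beta(t) * (1+s)/(1 + s beta(t))             -- full speed
s, P, beta = 0.2, 0.1, 0.1
for _ in range(50):
    P = P * (0.5 + 0.5 * (1 + s) / (1 + s * P))
    beta = beta * (1 + s) / (1 + s * beta)
print(beta > P > 0.1)  # True: both increase towards 1, beta faster
```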
Chapter 6

6.1. The ESSs for matrix A are (0, 1, 0) and (3/5, 0, 2/5). For matrix B they are (0, 1/3, 2/3) and (9/13, 4/13, 0). For matrix C the ESS is (6/14, 1/14, 7/14). For D there is no ESS.

6.2.
$$E[p, q] - E[q, q] = apq + bp(1-q) + c(1-p)q + d(1-p)(1-q) - \big(aq^2 + (b+c)q(1-q) + d(1-q)^2\big)$$
$$= (p-q)\big(aq + b(1-q) - cq - d(1-q)\big) = (p-q)\big(b - d + (a-b-c+d)q\big) = (p-q)(a-b-c+d)(q-p),$$
where the last equality uses that $p$ is the internal equilibrium $p = (d-b)/(a-b-c+d)$, so that $b - d + (a-b-c+d)q = (a-b-c+d)(q-p)$. For Haigh's condition, consider $z = (z, -z)$. Then
$$zAz^T = z^2(1, -1)\begin{pmatrix} a & b\\ c & d\end{pmatrix}\begin{pmatrix}1\\-1\end{pmatrix} = z^2(a - b - c + d).$$
6.3. We show that, for every $x, y$: $xAy^T \le yAy^T$ if and only if $xBy^T \le yBy^T$. This can be achieved by realizing that $B - A = C$, where $C$ is a matrix with all entries 0 except a constant $c$ in one column (say column $k$), and thus $(x-y)(A-B)y^T = c\big(\sum_i(x_i - y_i)\big)y_k = 0$.

6.4. Consider the matrix
$$A = \begin{pmatrix} 0 & 2 & 2\\ 0 & 1 & 0\\ 0 & 0 & 1\end{pmatrix}$$
and $p = (1, 0, 0)$. Clearly $T(p) = \{1, 2, 3\} \ne S(p)$ and thus $E[p, p] = E[q, p]$ for all $q$. Since
$$E[p, q] - E[q, q] = -(p-q)A(p-q)^T = 4q_2q_3 + q_2^2 + q_3^2 > 0,$$
$p$ is an ESS. However, $(0, 1, -1)A(0, 1, -1)^T = 2 > 0$, showing that (6.42) and (6.43) are not equivalent.

6.5. We rewrite the system as
$$p_1 + p_2 + p_3 = 1$$
$$-cp_1 + ap_2 + (b-d)p_3 = 0$$
$$-ep_1 + (a-f)p_2 + bp_3 = 0$$
and by elimination we get
$$p_1 + p_2 + p_3 = 1$$
$$(a+c)p_2 + (c+b-d)p_3 = c$$
$$(a+e-f)p_2 + (b+e)p_3 = e,$$
leading to
$$p_1 + p_2 + p_3 = 1$$
$$(a+c)p_2 + (c+b-d)p_3 = c$$
$$(\alpha + \beta + \gamma)p_3 = \gamma,$$
where $\alpha = ad + bf - df$, $\beta = bc + de - be$, $\gamma = ae + cf - ac$. The solution stated in (6.58) then follows.

6.6. For $q = (q_1, q_2, q_3, \ldots)$ we get
$$E[q; S_1] = -
\sum_{i=2}^{\infty} q_i 2^{-i}$$
and thus $E[S_1; S_1] = 0 > E[q; S_1]$ for all $q \ne S_1$. Consequently, $S_1$ is an ESS. $S_1$ is not uniformly uninvadable since, for $q$ given by $q_n = q_{n+1} = 1/2$ and $q_i = 0$ for $i \ne n, n+1$, we get that $E[S_1, (1-\varepsilon)\delta_{S_1} + \varepsilon\delta_q] = -\varepsilon$ while $E[q, (1-\varepsilon)\delta_{S_1} + \varepsilon\delta_q] = -(2^{-n} + 2^{-n-1})(1-\varepsilon) + \varepsilon > -\varepsilon$ for large $n$.

6.7. Consider payoffs given by $E[p, q] = E[p; \delta_q]$, where
$$E[p; \Pi] = \begin{cases} 1, & p = S_1,\ \Pi = \delta_{S_1},\\ 0, & \text{otherwise.}\end{cases}$$
Then $S_1$ is not an ESS since $E[S_1, (1-\varepsilon)\delta_{S_1} + \varepsilon\delta_q] = 0 = E[q, (1-\varepsilon)\delta_{S_1} + \varepsilon\delta_q]$, but on the other hand we have $E[S_1, S_1] = 1 > 0 = E[q, S_1]$, i.e. $S_1$ satisfies the conditions of Theorem 6.2.

6.8. By Lemma 6.7, for any $q$, $E[q, p] = E[p, p]$. Since $p$ is an ESS, by (6.7), $E[p, q] > E[q, q]$.

6.9. Assume the matrix is in the form
$$\begin{pmatrix} 0 & a & b\\ c & 0 & d\\ e & f & 0\end{pmatrix}$$
as in Section 6.4. By the results of that section, for all of (1, 2), (1, 3) and (2, 3) to be supports of ESSs, we need $a, b, c, d, e, f > 0$ and $\alpha, \beta, \gamma < 0$. Since $\alpha = ad + bf - df < 0$ and $a, b, d, f > 0$, it follows that we need $b < d$. Since $\beta = bc + de - be < 0$ and $b, c, d, e > 0$, it follows that we need $d < b$, a contradiction.
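The quantities $\alpha$, $\beta$, $\gamma$ from Exercises 6.5 and 6.9 can be checked numerically against a direct linear solve (a sketch with arbitrary positive payoff values):

```python
import numpy as np

# A matrix in the form of Section 6.4 / Exercise 6.9.
a, b, c, d, e, f = 2.0, 1.0, 1.5, 2.5, 1.0, 2.0

# Internal equilibrium: (Ap)_1 = (Ap)_2 = (Ap)_3 and sum p_i = 1,
# written as the linear system of Exercise 6.5.
M = np.array([[1.0, 1.0, 1.0],
              [-c, a, b - d],
              [-e, a - f, b]])
p = np.linalg.solve(M, np.array([1.0, 0.0, 0.0]))

alpha = a * d + b * f - d * f
beta = b * c + d * e - b * e
gamma = a * e + c * f - a * c
print(abs(p[2] - gamma / (alpha + beta + gamma)) < 1e-12)  # True
```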
6.10. Up to renumberings, once we exclude patterns using the Bishop-Cannings Theorem 6.9, we are left with the following possible patterns: {(1, 2, 3)}, {(1, 2), (1, 3), (2, 3)}, {(1, 2), (3)}, {(1), (2), (3)}, {(1, 3), (2, 3)}, {(1, 2)}, {(1), (2)}, {(1)}, {∅}. The pattern {(1, 2), (1, 3), (2, 3)} is not attainable by Exercise 6.9. The first, third, fourth and fifth patterns are attainable using the matrices A, B, -A, -B, respectively, where
$$A = \begin{pmatrix} 0 & 1 & 1\\ 1 & 0 & 1\\ 1 & 1 & 0\end{pmatrix}, \qquad B = \begin{pmatrix} 0 & 1 & -1\\ 1 & 0 & -1\\ -1 & -1 & 0\end{pmatrix}.$$
The patterns {(1, 2)}, {(1), (2)} and {(1)} are attainable, for example, using the matrices
$$\begin{pmatrix} 0 & 1 & 1\\ 1 & 0 & -1\\ -1 & -1 & 0\end{pmatrix}, \qquad \begin{pmatrix} 0 & -1 & 1\\ -1 & 0 & 1\\ -1 & -1 & 0\end{pmatrix}, \qquad \begin{pmatrix} 0 & 1 & 1\\ -1 & 0 & 1\\ -1 & -1 & 0\end{pmatrix},$$
and for example matrix D from Exercise 6.1 for the pattern {∅}.

6.11. The only candidate for an ESS with support (1, 2) is $p = (1/2, 1/2, 0)$. We get $E[p, p] = 1/2$ and, as in Section 6.4, for generic cases we only need to check whether or not it can be invaded by $S_3$. Since $E[S_3, p] = (x+y)/2$, we get that $p$ is an ESS as long as $x + y < 1$. When $x + y > 1$, then $p$ can be invaded by $S_3$ and there is thus no ESS with support (1, 2). When $x + y = 1$, then $E[p, p] = E[S_3, p]$ and thus $S(p) \ne T(p)$. However, in this case we have $E[p, p] = E[q, p]$ for all $q$ and
$$E[p, q] - E[q, q] = q_1/2 + q_2/2 + 5q_3/2 - 2q_1q_2 - 3q_1q_3 - 3q_2q_3.$$
Using the fact that $q_3 = 1 - q_1 - q_2$, we obtain
$$E[p, q] - E[q, q] = 5/2 - 5q_1 - 5q_2 + 3q_1^2 + 3q_2^2 + 4q_1q_2 = \frac{5}{2}(1 - q_1 - q_2)^2 + \frac{1}{2}(q_1 - q_2)^2 \ge 0$$
for every $q$. It is non-generic as $E[S_3, p] = E[p, p]$ for $S_3$ outside the support of $p$, which would occur with probability 0 for "real cases" by the arguments in Chapter 3.

6.12. Consider a matrix $A = (a_{ij})$ and, after potential renumbering, let $p$ have support $\{1, 2, \ldots, m\}$. Thus $p$, and consequently $E[p, p]$, can be expressed using only the $a_{ij}$ for $i, j = 1, \ldots, m$. For a moment, consider the $a_{ij}$ for $i, j = 1, \ldots, m$ as fixed. Then $E[p, p]$ is just a number and, for $k > m$, $E[S_k, p]$ is a linear function of $a_{k1}, a_{k2}, \ldots, a_{km}$. Hence $E[p, p] - E[S_k, p] = 0$ only on a linear subspace of codimension 1 in $\mathbb{R}^m$, and hence of measure 0.

6.13. If $S_1$ is an ESS, then no pure strategy
can invade. If no pure strategy can invade and payoffs are generic, then $E[S_i, S_1] < E[S_1, S_1]$ for all $i > 1$. Thus,
$$E[q, S_1] = \sum_i q_i E[S_i, S_1] < \sum_i q_i E[S_1, S_1] = E[S_1, S_1].$$
The situation for $n = 2$ (even non-generic) is illustrated in Figure 6.2. The generic payoffs are needed for $n > 2$, as seen for example for the matrix
$$\begin{pmatrix} 1 & 1 & 1\\ 1 & 0 & 10\\ 1 & 10 & 0\end{pmatrix}.$$
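The role of genericity in Exercise 6.13 can be checked directly for the matrix above: no pure strategy invades $S_1$, yet a mixed combination does (a sketch):

```python
import numpy as np

A = np.array([[1, 1, 1],
              [1, 0, 10],
              [1, 10, 0]])

def E(p, q):
    return p @ A @ q

S1 = np.array([1.0, 0.0, 0.0])
S2 = np.array([0.0, 1.0, 0.0])
S3 = np.array([0.0, 0.0, 1.0])
q = (S2 + S3) / 2  # the mixture (0, 1/2, 1/2)

# No pure strategy does better than S1 against S1...
print(E(S2, S1) <= E(S1, S1) and E(S3, S1) <= E(S1, S1))  # True
# ...but the mixture q invades: it ties against S1 and beats S1 in q-groups.
print(E(q, S1) == E(S1, S1) and E(q, q) > E(S1, q))       # True
```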
6.14. For $S_i$ to be an ESS of a clique matrix, we must have $a_{ji} = -1$ for all $j \ne i$. So choose $j \ne i$ and redefine $a_{ji}$ to be 1, leaving all other values unaltered (including $a_{ij} = -1$).

6.15. There are no pure ESSs since there are 0's on the diagonal and always at least one positive number in each column. When checking for pair ESSs, we identify candidates by looking for $2\times 2$ submatrices of the form $\begin{pmatrix} a_{ii} & a_{ij}\\ a_{ji} & a_{jj}\end{pmatrix}$. Given $a_{ii} = a_{jj} = 0$, we need to have $a_{ij} > 0$ and $a_{ji} > 0$. This is the case only for $i = 2, j = 4$ and $i = 3, j = 4$. One can check that the corresponding candidates for ESSs, (0, 3/10, 0, 7/10) and (0, 0, 2/3, 1/3), cannot be invaded and are thus ESSs. This means that the only possible ESS with support on 3 strategies is the one supported on 1, 2, 3, and we find that (1/3, 1/3, 1/3, 0) is indeed an ESS. Finally, there is no internal ESS. Putting it all together, we get that the ESSs for A are (1/3, 1/3, 1/3, 0), (0, 3/10, 0, 7/10) and (0, 0, 2/3, 1/3). Working along similar lines, the ESSs for B are (1/3, 1/3, 1/3, 0, 0), (0, 3/10, 0, 7/10, 0), (0, 0, 2/3, 1/3, 0), (1/3, 1/3, 0, 0, 1/3) and (0, 0, 0, 1/3, 2/3). Note the symmetry between pure strategies $S_3$ and $S_5$, which helped the matrix construction.

6.16. We will be using the results of Table 6.4. First, consider a population that started as Hawks only: a) if D appears next, the population settles in the (H, D) ESS; b) if B appears next, the population settles in the (H, B) ESS; c) if either H or R appears next, then the population stays as H (until the next introduction occurs). We can repeat the similar process for D, R, B and any other potential mixed ESS we have (which happen to be only (H, D) and (H, B)). The resulting diagram is shown below. It follows that eventually the population ends up either in (H, B) or R (with 50% probability each).
[Diagram: transitions between the population states H, D, R, B, (H, D) and (H, B), with arrows labelled by the strategies H, D, R, B whose introduction triggers each transition.]
6.17. Using the formula (4.28) we set $E[p, m] = \frac{p}{m} + \frac{1-p}{1-m}$ to get the payoff matrix $\begin{pmatrix} 2 & 58/24\\ 58/9 & 2\end{pmatrix}$. The "matrix-like ESS" is (0.0857, 0.9143), with mean population strategy 0.5571. However, this is not an ESS since, as shown in Chapter 4, if the mean population strategy is not 1/2, then it can be invaded. The true ESS, giving a mean population strategy of 1/2, is (1/5, 4/5). Also, the methods from Chapter 6 cannot be used for this game as the payoffs are nonlinear.

6.18. D is not a pure ESS, and since C > V neither is H. The third strategy is not an ESS as it can be (borderline) invaded by the equilibrium mixture of H and D. Pair ESSs involving the third strategy have no equilibrium in their own strategy space due to row domination. The equilibrium Hawk-Dove mixture is (borderline) invaded by the third strategy. There cannot be an internal ESS as the payoff matrix has no inverse, and there is a corresponding infinite number of equilibrium strategies (any proportion of strategy 3, plus the right relative proportions of the other two strategies), each of which (borderline) invades any other.

6.19. Pure ESSs: H is a pure ESS if $v \ge c$, as it dominates its column for $v > c$, and when $v = c$ it dominates R in its column and has row domination over B and D. R is always a pure ESS as it always dominates its column; B and D are never ESSs. By Bishop-Cannings, we only need to consider mixed ESSs excluding R. Within B, D and H, B dominates D, so any mixed ESS excludes D, leaving (B, H) as the only possibility. This has a stable equilibrium in its own space if $c > v$. Then, in the H and B columns only, H dominates R and B dominates D, so the mixture is an ESS.
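The numbers in Exercise 6.17 can be reproduced as follows. This is a sketch; the two pure strategies are taken to be the sex ratios 0.1 and 0.6, an assumption which reproduces the stated matrix entries.

```python
# E[p, m] = p/m + (1-p)/(1-m), evaluated at the pure strategies 0.1 and 0.6.
def E(p, m):
    return p / m + (1 - p) / (1 - m)

m1, m2 = 0.1, 0.6
a, b = E(m1, m1), E(m1, m2)   # 2, 58/24
c, d = E(m2, m1), E(m2, m2)   # 58/9, 2

# "Matrix-like ESS" from the two-strategy formula p = (d-b)/(a-b-c+d):
p = (d - b) / (a - b - c + d)
mean = p * m1 + (1 - p) * m2
print(round(p, 4), round(mean, 4))  # 0.0857 0.5571

# The true ESS (1/5, 4/5) gives mean population strategy 1/2:
print(round(0.2 * m1 + 0.8 * m2, 10))  # 0.5
```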
Chapter 7

7.1. If $E[p; \delta_p] > E[q; \delta_p]$, then $h_{p,q,0} > 0$ and, from the continuity of $h$ in $u$, we have that $h_{p,q,u} > 0$ for all $u$ small enough, i.e. $p$ is an ESS. If $E[p; \delta_p] = E[q; \delta_p]$, then $h_{p,q,0} = 0$, and if we had $h_{p,q,u} \le 0$ for some arbitrarily small $u$'s, we would have $\frac{\partial}{\partial u}\big|_{u=0} h_{p,q,u} \le 0$, contradicting the assumptions. Hence $h_{p,q,u} > 0$ for all small $u$ and thus $p$ is an ESS.

7.2. a) Since $a_1 + a_2 + a_3 > 0$, $S_1$ is an ESS. Since the roots of $a_1p^2 + a_2p + a_3 = 0$ are $\hat p_1 = 1/3$ and $\hat p_2 = 2/3$, we have two additional candidates for an ESS. The condition $2a_1\hat p + a_2 < 0$ is satisfied only for $\hat p_1$, so the ESSs are $S_1$ and 1/3. b) Since $a_1 + a_2 + a_3 > 0$, $S_1$ is an ESS. Since there are no roots of $a_1p^2 + a_2p + a_3 = 0$, there are no other ESSs.

7.3. The payoffs are given by $E[x; (1-\varepsilon)\delta_y + \varepsilon\delta_z] = x_1 f_1\big((1-\varepsilon)y + \varepsilon z\big)$, where $f_1(q) = a_1q_1^2 + a_2q_1 + a_3$. Hence the payoffs are linear in the focal player's strategy and satisfy polymorphic-monomorphic equivalence, and $f_1$ is clearly continuous, so the results follow.

7.4. To have a root, the incentive function is not exponential, and hence not its own derivative. Thus we have two distinct functions $h$ and $h'$, each with a finite (or at least countable) number of roots, where there is a common root. Such a coincidence will happen only on a subspace of the parameter space of measure zero, so the game is non-generic.

7.5. Consider a two-strategy game with $E[p, \delta_q] = p_1(1/2 - q_1)^3$. Then, for $p = (1/2, 1/2)$, we get $h_{p,q,u} = (1/2 - q_1)^4u^3$.

7.6. $h_{p,q,u} = (p-q)A\big(p + u(q-p)\big)^T$ and thus
$$\frac{\partial}{\partial u}\Big|_{u=0} h_{p,q,u} = (p-q)A(q-p)^T = E[p, q] - E[q, q]$$
if $E[p, p] = E[q, p]$. Since $h_{p,q,0} \ge 0$ if and only if $E[p, p] \ge E[q, p]$, the conditions of Theorems 6.2 and 7.3 are identical.

7.7. As shown in the proof of Theorem 7.3, the continuity of $h_{p,q,u}$ is enough to guarantee that $p$ is a best response to itself. Thus, the proof of Theorem 7.5 can stay as written.

7.8. As shown in Section 7.2.2.1, a candidate $p$ for an ESS in this game must be
internal. Hence, by Theorem 7.5, it must satisfy $E[p, \delta_p] = E[S_j, \delta_p]$ for all $j$, i.e. $\sum_i r_i = r_j/p_j$ for all $j$.

7.9. (i) When $\alpha = 1$, we have $E[x; y^T] = x_1(y_1 + 2y_2) + x_2(2y_1 + y_2) = xAy^T$, where $A = \begin{pmatrix} 1 & 2\\ 2 & 1\end{pmatrix}$. There is no pure ESS for this matrix as each pure strategy can be invaded by the other one; and by the formula from Section 6.2, we see that the only ESS is $p_{ESS} = (0.5, 0.5)$. (ii) We need to maximize $x_1(y_1 + 2y_2)^\alpha + (1-x_1)(2y_1 + y_2)^\alpha$. Since this is linear in $x_1$, the maximum is attained either for $x_1 = 0$ (when $(y_1+2y_2)^\alpha - (2y_1+y_2)^\alpha \le 0$) or $x_1 = 1$ (when $(y_1+2y_2)^\alpha - (2y_1+y_2)^\alpha \ge 0$), or any $x_1$ if $(y_1+2y_2)^\alpha - (2y_1+y_2)^\alpha = 0$. (iii) From the results of (ii), it follows that neither 0 nor 1 is an ESS. For the internal ESS, we need $0 = (y_1+2y_2)^\alpha - (2y_1+y_2)^\alpha = (2-y_1)^\alpha - (1+y_1)^\alpha$, which means (1/2, 1/2) is an equilibrium. For $\alpha > 0$, introducing a small invading group of size $\varepsilon$, so that the population value is now $z_1 = (1-\varepsilon)y_1 + \varepsilon x_1$, we can see that if $x_1 < y_1$ ($x_1 > y_1$) the $S_1$-players do worse (better), so that the equilibrium is an ESS. (iv) For $\alpha < 0$ we have the same as in (iii), except that here the equilibrium is unstable, and we also have two pure ESSs (following part (ii)).

7.10. The function $E[S_1; p^T] - E[S_2; p^T]$ is continuous in $p$. Clearly, if $S_1$ is an ESS, then $E[S_1; S_1^T] - E[S_2; S_1^T] > 0$ and, from continuity, it follows that $E[S_1; p^T] - E[S_2; p^T] > 0$ for $p$ close to $S_1$. Hence, $S_1$ is a stable point of the dynamics. The reverse implication and the case of $p = S_2$ are similar. For an internal ESS we have $E[S_1; p^T] - E[S_2; p^T] = 0$, so we get a rest point of the dynamics. Since $h_{p,q,0} = 0$, we have $\frac{\partial}{\partial u}\big|_{u=0} h_{p,q,u} > 0$, which (using linearity on the left) implies that $E[S_1; r^T] - E[S_2; r^T] < 0$ ($> 0$) for any mixture $r$ with probability $r$ of playing $S_1$ sufficiently close to $p$ and satisfying $r > p$ ($r < p$). Thus the rest point is stable.

7.11. The ESS is given by $p = \frac{t}{1+t}$, where $t$ is the positive root of $r_{11}(a-1)t^2 + ar_{11}r_{22}t + r_{22} = 0$.
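Returning to Exercise 7.2, the ESS conditions for a quadratic incentive function can be checked numerically. This is a sketch; the coefficients below are illustrative values whose roots are 1/3 and 2/3, as in part a).

```python
import numpy as np

a1, a2, a3 = 9.0, -9.0, 2.0  # illustrative: a1 p^2 + a2 p + a3 = (3p-1)(3p-2)

# S1 is an ESS iff a1 + a2 + a3 > 0 (i.e. the incentive is positive at p = 1).
print(a1 + a2 + a3 > 0)  # True

# Interior candidates are the roots of the incentive polynomial; a root p
# is an ESS iff the derivative 2 a1 p + a2 is negative there.
roots = sorted(np.roots([a1, a2, a3]).real)
ess = [p for p in roots if 2 * a1 * p + a2 < 0]
print(np.allclose(roots, [1/3, 2/3]), np.allclose(ess, [1/3]))  # True True
```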
7.12. The same function $E[p; \Pi]$ as in the solution to Exercise 6.7 works, as then $S_1$ is not an ESS but satisfies the conditions of the Theorem.

7.13. If $\alpha = 1$ we have a matrix game with a unique mixed ESS where $y_1 = 0.5$. Otherwise, the first derivative with respect to $x_1$ gives
$$\alpha x_1^{\alpha-1}(y_1 + 2y_2) - \alpha(1-x_1)^{\alpha-1}(2y_1 + y_2).$$
If $\alpha > 1$, taking the second derivative clearly shows that any root will be a minimum, so all best responses will be 0 or 1. It is easy to check that in this case both 0 and 1 are ESSs. If $\alpha < 1$ there is a unique root, where $x_1 = y_1$ when $y_1 = 0.5$, which is a maximum and so an ESS.

7.14. Since $f$ represents the proportion of leaf tissue of a tree of height $h$, we may consider $f \ge 0$, $f(0) = 1$, $f(1) = 0$ and $f'(h) \le 0$, $f'(0) = 0$. Similarly, since $g(h-H)$ represents the advantage or disadvantage of being bigger/smaller than one's neighbour, we may consider $g > 0$, $g' > 0$. $H = 0$ and $H = 1$ are not ESSs since, for any $0 < h < 1$, $E[h; \delta_H] > 0 = E[H, \delta_H]$. For a general $H$, to find the best response $h$ to $H$, evaluate
$$\frac{\partial}{\partial h}E[h; \delta_H] = f'(h)g(h-H) + f(h)g'(h-H).$$
The derivative is positive for $h = 0$ and negative for $h = 1$. The best response $h$ is thus the solution to
$$-\frac{f'(h)}{f(h)} = \frac{g'(h-H)}{g(h-H)},$$
and the ESS must thus solve
$$-\frac{f'(H)}{f(H)} = \frac{g'(0)}{g(0)}.$$

7.15. Set equation (7.48) equal to 0 and then multiply all terms by $4(n\tau_{11} + n\tau_{12} + n\tau_{22})$. Then expanding the right-hand side square term and simplifying yields equation (7.51).
Chapter 8

8.1. (i) Strategy-role independent, with $\rho_1 = 1/2$ for all. (ii) Strategy-role independent, with $\rho_1 = 1$ for owners and $\rho_1 = 0$ for intruders. (iii) Strategy-role independent, with $\rho_1 = p$ for females and $\rho_1 = 1-p$ for males. (iv) Not strategy-role independent.

8.2.
$$\frac{dH(x, y)}{dt} = \left(\frac{b_{12}}{x} - \frac{b_{21}}{1-x}\right)\frac{dx}{dt} - \left(\frac{a_{12}}{y} - \frac{a_{21}}{1-y}\right)\frac{dy}{dt} = \big(b_{12} - (b_{12}+b_{21})x\big)\big(a_{12} - (a_{12}+a_{21})y\big) - \big(a_{12} - (a_{12}+a_{21})y\big)\big(b_{12} - (b_{12}+b_{21})x\big) = 0.$$
Thus $H(x, y)$ is constant over time. If $a_{12}a_{21} > 0$, $b_{12}b_{21} > 0$ and $a_{12}b_{21} < 0$,
then the four coefficients of $\ln x$, $\ln(1-x)$, $\ln y$ and $\ln(1-y)$ all have the same sign, and the equation $H(x, y) = K$ yields a closed orbit.

8.3. The Hawk-Dove equilibrium has Hawk probability $p = V/C$, giving expected payoff $(C-V)V/(2C)$. The expected payoffs for B and M against this mixed strategy are $E[B, p] = E[M, p] = (C-V)V/(2C)$. However, $E[B, B] = E[M, M] = V/2 > E[p, B] = E[p, M] = V^2/(2C)$, so B and M can invade.

8.4. The stated equilibrium is given in the text. Consider the introduction of a small mutant group of size $\varepsilon$ with Hawk proportion $q$; the population proportion of Hawks is then $p' = p(1-\varepsilon) + q\varepsilon$. If $q > p$ ($q < p$) then the payoff from (8.28) is less than 0 (greater than 0), and since the Marauder payoff is still 0, it is easy to see that the strategy with the lower (higher) Hawk probability does best. This is $p$, so $p$ can resist all invaders and is thus an ESS.

8.5. Consider a population of (0, 0) strategists where everybody plays Dove, and consider an invader playing (1, 0), i.e. playing Hawk when large. Such an invader has probability $w_{10} > 0$ of being in a contest as a Hawk against a Dove, getting payoff $V$, while the resident population average in such a contest is $V/2$ (Dove against Dove). So (1, 0) can invade (0, 0) and thus (0, 0) is not an ESS.

8.6. All parts of the question follow from considering Figure 8.3.

8.7. For males this is immediately apparent from equation (8.40), since the population strategy $\Pi$ is determined by $x$ and $y$. A similar equation can be found for females.

8.8.
CAA exp VAA
Z s 0
Z ∞ CAA CAA CAA (s − x) dx = exp −y dx = 1. VAA VAA VAA 0
1 wBA CBA + wBB CBB CBB exp − x dx VBB wBB VBB wBA CBA + wBB CBB CBB = 1 − exp − s = 1. wBB CBB VBB
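The first normalization in 8.8 is easy to verify numerically; a minimal sketch (the values CAA = 1, VAA = 2, s = 3 are arbitrary, chosen only for the check):

```python
# Numerical check that the density (CAA/VAA) exp((CAA/VAA)(s - x)) on x > s
# integrates to 1; substituting y = x - s, we integrate lam*exp(-lam*y) on
# [0, upper] by a midpoint rule (the tail beyond `upper` is negligible).
from math import exp

def integral(C, V, s, upper=200.0, n=200_000):
    lam = C / V
    h = (upper - s) / n
    return sum(lam * exp(-lam * (k + 0.5) * h) * h for k in range(n))

print(round(integral(C=1.0, V=2.0, s=3.0), 4))  # ≈ 1.0
```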
8.9. For the stated parameters s = v ln 2, and then (8.49) and (8.50) immediately follow. If v → ∞ then A plays the exponential distribution with mean V, and B plays 0 with probability 1. If v → V then A plays V ln 2 plus an exponential with mean V, and B plays an exponential with mean V truncated at V ln 2. The v = V/2 case can be found by substitution into the equations, but then does not simplify.

8.10. Letting α = wBB CBB/(wBA CBA), we find the expected time for a B individual to wait is

E[B] = (VBB/CBB)(1 − (1/α) ln(1 + α)),

and the expectation of a BvB contest can be found using the distribution of B (the probability that a contest lasts longer than x is P[B > x]²), giving

E[BvB] = (VBB/CBB)(1/2 − 1/α + (1/α²) ln(1 + α)).

The mean reward for A is 2wAA (factor of 2 since this is conditional on being in role A) times the expected reward from playing A, which is 0 − CAA s (it plays time s then a conventional WoA worth 0), plus 2wAB times the reward from playing B (wins VAB, and pays cost CAB E[B]), giving

2(−CAA wAA (VBB/CBB) ln(1 + α) + VAB wAB − CAB wAB (VBB/CBB)(1 − (wBA CBA/(wBB CBB)) ln(1 + α))).

The B reward is similarly

VBB/2 − 2(CBB wBB + CBA wBA)(VBB/CBB)(1/2 − 1/α + (1/α²) ln(1 + α)).

In our special case α = 1 and we obtain the B reward as v(1 − ln 2) and the A reward as (V − v)/2. Thus in this case when v is close to V, B has a bigger payoff than A (this would not occur for small wAA, wBB).
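The claims in 8.3 can be confirmed numerically; a sketch with arbitrary values V = 4, C = 6, and with Bourgeois assumed to play Hawk as owner and Dove as intruder, each role equally likely:

```python
# Exercise 8.3 check: at the mixed ESS p = V/C the payoff is V(C-V)/(2C),
# while against a Bourgeois population p earns only V^2/(2C) < V/2 = E[B, B],
# so Bourgeois (and likewise Marauder) can invade.
V, C = 4.0, 6.0

def E(p, q):
    """Payoff to Hawk-probability p against Hawk-probability q."""
    return p * q * (V - C) / 2 + p * (1 - q) * V + (1 - p) * (1 - q) * V / 2

p = V / C
E_p_B = 0.5 * E(p, 1) + 0.5 * E(p, 0)  # vs B: Hawk as owner, Dove as intruder
E_B_B = V / 2                          # B vs B: owner takes V uncontested
print(abs(E(p, p) - V * (C - V) / (2 * C)) < 1e-12, E_B_B > E_p_B)
```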
Chapter 9

9.1. The k − 1 dividers can be placed into any of the m − 1 + k − 1 = m + k − 2 positions, which yields the solution, the binomial coefficient C(m + k − 2, k − 1).

9.2. There is a single distinct payoff for every combination of m individuals, so following Exercise 9.1 but replacing m − 1 by m, we obtain C(m − 1 + k, k − 1).

9.3. We have

E[p; (1 − ε)δp + εδq] = Σ_{j=0}^{m−1} C(m − 1, j)(1 − ε)^j ε^{m−1−j} E[p; p^j, q^{m−1−j}].

Subtracting the corresponding term for q gives that E[p; (1 − ε)δp + εδq] − E[q; (1 − ε)δp + εδq] equals

Σ_{j=0}^{m−1} C(m − 1, j)(1 − ε)^j ε^{m−1−j} (E[p; p^j, q^{m−1−j}] − E[q; p^j, q^{m−1−j}]).

For sufficiently small ε, we just need to compare coefficients from the smallest powers of ε, with p being Evolutionarily Stable against q if the first of these that differs from 0 is positive (and there is at least one such value).

9.4. Since E[p; p^{m−1}] = Σ_i p_i E[S_i; p^{m−1}], the payoffs are linear in the focal player strategy and we can use Theorem 7.5.

9.5. For example h(p) = (p − 1/3)(p − 2/3) = p² − p + 2/9 yields two ESSs: (1, 0) and (1/3, 2/3). Parameters that give this are α12 = 1, α22 = 0, α11 = 0, α21 = 1, α20 = 1/3, α21 = 1/9.

9.6. This is just a special case of Theorem 7.8.

9.7. We have

dq/dt = q(E[S1; δ(q,1−q)] − (qE[S1; δ(q,1−q)] + (1 − q)E[S2; δ(q,1−q)]))
= q(1 − q)(E[S1; δ(q,1−q)] − E[S2; δ(q,1−q)]) = q(1 − q)h(q).
9.8. Take h(p) = −(p − 1/2)³. One has h(1/2) = 0 and h′(1/2) = 0, but h(1/2 + ε) < 0 and h(1/2 − ε) > 0, so that p = 1/2 is an ESS.

9.9. In a generic 2-strategy game p is a mixed ESS if and only if h(p) = 0 and h′(p) < 0. Because h(p) = 0, we have E[S1; δ(p,1−p)] = E[S2; δ(p,1−p)] and so (9.14) holds for i = 0 for all strategies; because h′(p) < 0 we have a positive value for j = 1 in (9.13) for all p ≠ q, and hence the ESS is of level 1. For p = 1 to be a pure ESS we need h(1) > 0, i.e. E[S1; δ(1,0)] > E[S2; δ(1,0)], which means that the ESS is of level 0 (similarly for p = 0). For the non-generic example, see Exercise 9.8.

9.10. Suppose that neither pure is an ESS. Then h(0) > 0 and h(1) < 0. Since h(p) is differentiable, there must then be a point where h(p) = 0 and h′(p) ≤ 0, which means that h′(p) < 0 for a generic game, which means that p is an ESS. Thus if there is no pure ESS, there is at least one mixed ESS, so there is always at least one ESS.

9.11. The complete payoffs to the three-player two-strategy game can be written as

a111 a112     a211 a212
a121 a122     a221 a222

which after considering symmetries and adding appropriate numbers to each column yields

a 0     0 b
0 b     b b+c

with 3 parameters a, b, c. It follows easily that (1, 0) is an ESS if and only if a > 0 and (0, 1) is an ESS if and only if c > 0. To investigate internal ESSs, we need E[S1; δp] = E[S2; δp], which yields ax² = 2bx(1 − x) + c(1 − x)², or

h(x) = (a + 2b − c)x² − 2(b − c)x − c = 0.

In the generic case we have (a + 2b − c) ≠ 0 and thus h is indeed a quadratic function. The different ESSs occur as follows for generic games. If a > 0, c > 0 there are both pure ESSs, and one unstable mixed equilibrium. If a < 0, c < 0 there is a mixed ESS. If a > 0, c < 0 and b > √(−ac) then (1, 0) is an ESS, and there are two mixed equilibria, the one with the lower probability of playing S1 being stable, the other not. If a > 0, c < 0 and b < √(−ac) then the pure (1, 0) is the only ESS. If a < 0, c > 0 and b < −√(−ac) then (0, 1) is an ESS, and there are two mixed equilibria, the one with the higher probability of playing S1 being stable, the other not. If a < 0, c > 0 and b > −√(−ac) then the pure (0, 1) is the only ESS. Bukowski and Miȩkisz (2004) also deal with non-generic cases omitted above.

9.12. a) h(p) = (p − 1/2)(p − 1/4)(p − 3/4), so that the ESSs are 0, 1/2 and 1. b) In this case we get

h(p) = −(2p − 1)((3/8)p² + (1/12)p(1 − p) + (3/8)(1 − p)²),

which is (1/2 − p) times a positive term, which leads to a unique ESS at p = 1/2.

9.13. The ESS solution has the whole positive real line as its support. Thus the payoff to any pure strategy against the ESS must take some constant value. The payoff to pure strategy 0 is clearly 0, so 0 must be the constant value. Thus any strategy has expectation 0 against a population playing the ESS.

9.14. a) Payoffs increase as individuals drop out, and so the chosen time is exponential with parameter (i − 1)(Vi−1 − Vi). Thus the times up to the first, second and third drop-outs are exponential distributions with means 18, 6 and 3 respectively. b) The final contest with two individuals is exponential with mean 6. Previously, with three individuals, victory leads to an expected reward of 6, defeat to 9, so the strategy is to quit instantly, with expected payoff 7. Thus at the start the strategy is exponential with mean 21.

9.15. If Wk is the expected reward for winning in round k then Ṽk = Vk + Wk. Clearly in S1 v S1 and S2 v S2 contests each individual wins with probability half, and if losing also receives the relevant cost, so giving the entry in (9.48). In an S1 v S2 contest, S1 receives (1/2 + ∆)Wk + (1/2 − ∆)(Vk − c12), which rearranges to the corresponding matrix entry, as does the reward (1/2 − ∆)Wk + (1/2 + ∆)(Vk − c21) for S2. To satisfy the negative definiteness condition in the standard 2 × 2 matrix we need b + c − a − d > 0. Substituting the values from matrix (9.48) into this condition, we get the condition in (9.49).
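The classification in 9.11 can be explored numerically. A sketch that finds the mixed equilibria as the roots of h(x) in (0, 1); the parameter values are illustrative only:

```python
# Mixed equilibria of the three-player two-strategy game of Exercise 9.11,
# i.e. roots of h(x) = (a+2b-c)x^2 - 2(b-c)x - c lying in (0, 1).
import math

def mixed_equilibria(a, b, c):
    A, B, C = a + 2 * b - c, -2 * (b - c), -c
    disc = B * B - 4 * A * C
    if disc < 0:
        return []
    roots = [(-B - math.sqrt(disc)) / (2 * A), (-B + math.sqrt(disc)) / (2 * A)]
    return sorted(x for x in roots if 0 < x < 1)

# a > 0, c > 0: both pure ESSs plus a single (unstable) interior equilibrium
print(mixed_equilibria(1.0, 0.0, 2.0))
```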
Chapter 10

10.1. For generic payoffs every terminal vertex payoff is different, so for each preceding vertex there is a unique optimal decision, whichever player is making the choice. There is thus a best choice at each vertex back to the root, and then following these choices from the root gives the unique equilibrium path.

10.2. It is easy to see that each change results in a worse payoff, or gives an identical payoff, to the player who would make the change. Strategy changes at the following vertices lead to the associated payoff changes: 1: 8 → 5, 2: 1 → 0, 4: 8 → 7; the others are unchanged. Thus we have a Nash equilibrium.

10.3. The game at this vertex starts with old player 2, with strategies U and D. The second player (old player 1) has four strategies; for instance DU will represent play D if the other player plays U and U if the other plays D.

      UU      UD      DU      DD
U  (1, 8)  (1, 8)  (7, 7)  (7, 7)
D  (0, 2)  (3, 3)  (0, 2)  (3, 3)
Here the sequence U, D is like mutual cooperation, D, D like mutual defection and U, U leads to the second player suckering the first. Associated payoffs would be R = 7, P = 3, T = 8, S = 1.

10.4. Writing the player 1 strategy as three values, where e.g. UDD means play U and then after either choice of player 2 play D, and for player 2 UD means play U if player 1 plays U and D if they play D, we obtain

         UU      UD      DU      DD
UUU   (8, 1)  (8, 1)  (2, 0)  (2, 0)
UUD   (8, 1)  (8, 1)  (3, 3)  (3, 3)
UDU   (7, 7)  (7, 7)  (2, 0)  (2, 0)
UDD   (7, 7)  (7, 7)  (3, 3)  (3, 3)
DUU   (4, 1)  (5, 2)  (4, 1)  (5, 2)
DUD   (4, 1)  (1, 4)  (4, 1)  (1, 4)
DDU   (0, 3)  (5, 2)  (0, 3)  (5, 2)
DDD   (0, 3)  (1, 4)  (0, 3)  (1, 4)
10.5. The four payoff pairs become (2, 2), (1, 3 + y), (3 + x, 1) and (x, y). If M plays C then F plays C if y < −1 and D otherwise, giving a payoff of 5/4 to M. If M plays D, then F plays C if y < 1, giving a payoff to M of 9/4 + x. Thus M plays C if x < −1. Thus if x > −1, y < 1 we have D,C; if x > −1, y > 1 we have D,D; if x < −1, y > −1 we have C,D; and if x < −1, y < −1 we have C,C.

10.6. The number of options at each move is k to the power of the number of starting positions, which in turn is k to the power of the number of previous moves. The number of strategies is the product of all of these options for a given player. Thus for player 1 this is k^1 · k^{k²} · . . . · k^{k^{2n−2}} = k^{Σ_{j=0}^{n−1} k^{2j}}, and similarly for player 2 this is k^{Σ_{j=0}^{n−1} k^{2j+1}}.

10.7. This game reduces to a normal form game with four strategies for Player 1 (a play in each position) and two for Player 2, since neither sees what choice has been made by the other. The matrix is

        U       D
UU  (8, 1)  (2, 0)
UD  (7, 7)  (3, 3)
DU  (4, 1)  (5, 2)
DD  (0, 3)  (1, 4)

The only possible ESSs are pure, and these are (UU,U) and (DU,D).

10.8. We only need to look for pure strategies. Here clearly (C, D) and (D, C) are ESSs, and the other two possibilities are not.
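The claim that (UU,U) and (DU,D) are the pure equilibria in 10.7 can be verified by brute force; a sketch:

```python
# Pure Nash equilibria of the reduced normal form from Exercise 10.7
# (payoffs copied from the matrix in the solution above).
ROWS = ["UU", "UD", "DU", "DD"]
COLS = ["U", "D"]
PAYOFF = {
    "UU": {"U": (8, 1), "D": (2, 0)},
    "UD": {"U": (7, 7), "D": (3, 3)},
    "DU": {"U": (4, 1), "D": (5, 2)},
    "DD": {"U": (0, 3), "D": (1, 4)},
}

def pure_nash():
    """Strategy pairs where neither player gains by a unilateral deviation."""
    eqs = []
    for r in ROWS:
        for c in COLS:
            u1, u2 = PAYOFF[r][c]
            best_row = all(PAYOFF[rr][c][0] <= u1 for rr in ROWS)
            best_col = all(PAYOFF[r][cc][1] <= u2 for cc in COLS)
            if best_row and best_col:
                eqs.append((r, c))
    return eqs

print(pure_nash())  # the two candidates named in the solution
```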
10.9. If P1 plays U then P2 plays U; if P1 plays D then P2 plays D; thus P1 plays U and so U, U happens with payoffs (3, 3). For a simultaneous play we obtain the matrix

    U  D
U   3  0
D   2  1

so we have two pure solutions U, U and D, D.

10.10. Working from the end of the game, the first subgames occur at vertices 4-7. These are single shot PD games, so that the optimal play is for both players to play D. This leads to another single shot PD game at the preceding vertex, where the payoffs are P plus the usual ones for all choices. Thus the optimal play is for both to play D. Thus both defect in both rounds.

10.11. If P1 plays U first, we then have a bi-matrix game with payoffs

       U       D
U  (1, 1)  (0, 0)
D  (0, 0)  (3, 3)
There are eight strategy pairs. For instance UDU means P1 plays U, then P2 plays D and P1 plays U; DDU means that P1 plays D, but if by some error U happened by mistake, then play is as above. UUD and UDU are not NE since a change in positions 2 or 3 improves the payoff, UUU because a change in position 1 does. UDD is an NE. DDD is not an NE because a change in position 1 improves the payoff. DUD and DDU are NEs, but if there was a possibility of error then a change in positions 2 or 3 would improve the payoff. DUU is an NE, and the possibility of error does not change this.
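The backward induction argument of 10.1 is easy to mechanise; a sketch on an illustrative two-move tree (the payoffs are for illustration only, not taken from the book's figures):

```python
# Generic backward induction (Exercise 10.1): with generic payoffs each
# decision vertex has a unique best continuation, giving a unique path.
def backward_induction(node, player=0):
    """node is a payoff tuple (leaf) or a dict {move: subtree}.
    Players 0 and 1 alternate; returns (payoffs, path)."""
    if isinstance(node, tuple):
        return node, []
    best = None
    for move, subtree in node.items():
        payoffs, path = backward_induction(subtree, 1 - player)
        if best is None or payoffs[player] > best[0][player]:
            best = (payoffs, [move] + path)
    return best

# Player 0 moves first, player 1 replies.
tree = {"U": {"U": (1, 8), "D": (7, 7)},
        "D": {"U": (0, 2), "D": (3, 3)}}
print(backward_induction(tree))
```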
Chapter 11

11.1. Prove this by induction on t. Clearly it is true for t = tmax. If it is true for t + 1, then if xc − x > tmax − t both decisions lead to V = 0. If xc − x ≤ 0, not foraging gives V = 1. For intermediate xc − x, foraging gives V = (1 − z)(1 − z)^{xc−(x+1)} = (1 − z)^{xc−x}. Not foraging also leads to this unless xc − x = tmax − t, which gives V = 0. Hence we have the result.

11.2. As in Exercise 11.1, if xc − x ≤ 0 not foraging is best, and if xc − x > tmax − t all decisions give 0 reward. Otherwise xc − x successful foraging attempts must be made in time tmax − t. The risk of death due to predation is
constant whenever foraging occurs. It is thus best to continue to forage until xc has been achieved (this minimises the chance of reaching tmax without being killed, but short of the required level xc).

11.3. H(x, tmax − 1, uf) = (1 − z)(x + s). Thus making a single foraging attempt is better than none if (1 − z)(x + s) > x, i.e. x < (1 − z)s/z. As in Exercise 11.2, if some foraging is worthwhile, it should be done as soon as possible in case of failure. Thus the optimal strategy is to forage until x reaches the threshold (1 − z)s/z, and then to stop foraging.

11.4. Comparing male payoffs, Care dominates Desert, so that males must always care. Given that males care, females gain 5 from caring, 6 from deserting, so females play Desert.

11.5. The game is shown in the figure below: the Male first chooses C or D, then the Female chooses C or D, with payoffs (Male, Female) of (3, 3) after C, C; (2, 2 + wy) after C, D; (2 + wx, 2) after D, C; and (wx, wy) after D, D.
If D, D is played then x = y = 1, so this is stable just as in the original, i.e. when w > 2. If C, C is played then x = y = 0, so the rewards for C, C are the largest and this is always stable. If C, D is played then x = 0, y = 1 and the Female does better by changing to C, so this is always unstable. If D, C is played then x = 1, y = 0 and the Male does better by switching, so this is always unstable.

11.6. si = s and so Yt = 1 + s + s² + s³ + . . . = 1/(1 − s). Substitution into (11.28) and (11.32) leads directly to (11.33).

11.7. Consider an example with s0 = s1 = s2 = 0.5, s3 = s4 = . . . = sM−1 = 1, sM = 0 for large M, and HT(T′) = (T′ + 1)². This gives the payoffs to the strategies 2, 3, 4 and 5 as (approximately) 2M, 9M/4, 2M and 25M/8 respectively, so 3 cannot be invaded by 2 or 4, but can be invaded by 5.

11.8. (11.33) leads to βT > sβ(T + 1) and βT > β(T − 1)/s. These lead to s/(1 − s) < T < 1/(1 − s). Since 1/(1 − s) − s/(1 − s) = 1, then except for non-generic boundary cases there is precisely one solution for T. Thus we have a unique ESS with T = ⌊1/(1 − s)⌋.

11.9. a) is obtained directly by substituting into (11.37). b) (11.37) becomes (T − 1)/(T + 1) < s < (T + 1)/(T + 3). This becomes (3s − 1)/(1 − s) < T < (1 + s)/(1 − s). Since (1 + s)/(1 − s) − (3s − 1)/(1 − s) = 2, there are exactly two ESSs for s > 1/3 and there is one ESS for s < 1/3. c) (11.37) becomes (T − 1)/T < s < T/(T − 1). So no T > 1 can be a pure ESS, but such strategies can feature in a mixed ESS, since there are values of s where (11.38) holds.

11.10. For xc ≤ 3, k = 0 gives the maximum probability of 1. For xc = 4, k = 0 gives 0, k = 1 gives 0.3, k = 2 gives 0.09 and k = 3 gives 0.216, and thus k = 1 is optimal. For xc = 5, k = 0 and k = 1 give 0, k = 2 gives 0.09 and k = 3 gives 0.027, so k = 2 is optimal. For xc = 6 only k = 3 gives a positive probability.

11.11. For m > 0, individuals are of that age at time t if they have survived from the previous year, so Nm(t) = Nm−1(t − 1)sm−1. For m = 0 these are new juveniles, produced by individuals of any age class, so N0(t) = N0(t − 1)f0 + . . . + NM−1(t − 1)fM−1, as in the equation. The s terms are just as in the size game, so survival to adulthood occurs with probability s0 s1 . . . sm−1, and every surviving year as an adult produces Hm(n) offspring, which corresponds to (11.42).

11.12. Equation (11.24) implies that all newborns are equally valuable. If a population is growing, early newborns are better as they will on average produce more offspring than their number in later generations. For declining populations the reverse is true. Thus (11.24) only holds for the steady state case of constant population size.
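The induction in 11.1 can be cross-checked by direct dynamic programming; a minimal sketch of the model as described (each period the forager may attempt to gain one unit, surviving the attempt with probability 1 − z, and needs xc units by tmax):

```python
# Backward induction for the foraging model of Exercise 11.1: V(x, t) is the
# probability of being alive with reserves >= xc at tmax, under optimal play.
def value(z, xc, tmax):
    V = {(x, tmax): 1.0 if x >= xc else 0.0 for x in range(xc + 1)}
    for t in range(tmax - 1, -1, -1):
        for x in range(xc + 1):
            no_forage = V[(x, t + 1)]
            forage = (1 - z) * V[(min(x + 1, xc), t + 1)]
            V[(x, t)] = max(no_forage, forage)
    return V

V = value(z=0.1, xc=3, tmax=5)
# V(x, t) = (1-z)^(xc-x) whenever xc - x <= tmax - t, and 0 beyond reach
print(abs(V[(0, 0)] - 0.9 ** 3) < 1e-12, V[(3, 2)] == 1.0)
```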
Chapter 12

12.1. Substituting for pi,j in (12.9) we obtain rPi+1 − (1 + r)Pi + Pi−1 = 0. Substituting Pi = θ^i into this recurrence relation gives θ = 1 or 1/r. The solution is thus Pi = A + B(1/r)^i. Using P0 = 0, PN = 1 gives the result.

12.2. Following Traulsen and Hauert (2009), set yj = xj − xj−1 and rearrange (12.21) into 0 = −δi(xi − xi−1) + βi(xi+1 − xi) = −δi yi + βi yi+1.
It follows that yj+1 = (δj/βj) yj, and thus

y1 = x1 − x0 = x1,
y2 = x2 − x1 = x1 (δ1/β1),
...
yk = xk − xk−1 = x1 Π_{j=1}^{k−1} (δj/βj).

Thus,

1 = xN − x0 = Σ_{k=1}^{N} yk = Σ_{k=1}^{N} x1 Π_{j=1}^{k−1} (δj/βj) = x1 (1 + Σ_{k=1}^{N−1} Π_{j=1}^{k} (δj/βj)).
12.3. See solution to Exercise 5.2.

12.4. For the IP dynamics, we get

βi = (ir/(ir + N − i)) · ((N − i)/N),
δi = ((N − i)/(ir + N − i)) · (i/N),

and thus δi/βi = 1/r. For the Voter model, we get

βi = (i/(i/r + N − i)) · ((N − i)/N),
δi = ((i/r)/(i/r + N − i)) · ((N − i)/N),

and thus δi/βi = 1/r, showing that the IP and Voter models yield the same fixation probabilities. For the BD-D process,

βi = (i/N) · ((N − i)/((i − 1)/r + N − i)),
δi = ((N − i)/N) · ((i/r)/(i/r + N − i − 1)),

and thus δi/βi = (1/r) · ((i − 1)/r + N − i)/(i/r + N − i − 1). For the DB-B process, we get

βi = ((N − i)/N) · (ir/(ir + N − 1 − i)),
δi = (i/N) · ((N − i)/((i − 1)r + N − i)),

and thus δi/βi = (1/r) · (ir + N − i − 1)/((i − 1)r + N − i).
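Combining the formula from 12.2 with the ratios in 12.4 gives a quick numerical check that δi/βi = 1/r reproduces the Moran fixation probability; a sketch:

```python
# Fixation probability of a single mutant from the ratios delta_i/beta_i
# (the formula derived in Exercise 12.2), applied to Exercise 12.4 where
# IP and the Voter model both give delta_i/beta_i = 1/r.
def fixation(ratios):
    """ratios[i-1] = delta_i / beta_i for i = 1, ..., N-1."""
    total, prod = 1.0, 1.0
    for rho in ratios:
        prod *= rho
        total += prod
    return 1.0 / total

N, r = 10, 2.0
x1 = fixation([1.0 / r] * (N - 1))
moran = (1 - 1 / r) / (1 - r ** (-N))
print(abs(x1 - moran) < 1e-12)
```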
12.5. Since the fitness of every individual is 1, being chosen at random, or proportional to its fitness, or inversely proportional to its fitness is just the same thing.

12.6. The top two vertices have identical probability. The probability that the next event is the mutant replacing a resident is r/(r + 3). The probability that a resident replaces the mutant is (1/(3 + r))(1/2) + (1/(3 + r))(1/3) = 5/(6(3 + r)). The probability that the mutant is eliminated next is thus

(5/(6(3 + r)))/(5/(6(3 + r)) + r/(3 + r)) = 5/(6r + 5).

If r = 3 this is 5/23. Following the same process for the bottom vertex gives the probability of elimination as 1/(3r + 1), or 1/10 for r = 3.

12.7. Writing a system of 16 equations and solving it (with a computer, for example) yields the fixation probability 0.6847.

12.8. Following Hadjichrysanthou et al. (2011) we get that for the IP dynamics, the average fixation probability for a star is approximately (1 − r^{−2})/(1 − r^{−2N}) (see also Lieberman et al., 2005), while under the BD-D process the average fixation probability is approximately (1 − r^{−1})/(1 − r^{−N}), i.e. the same as for the Moran process.

12.9. If the graph is irregular and connected there must be a connection between a vertex of minimal degree i and one of higher degree j. Then

1 = Σ_v wvi = wji + Σ_{v≠j} wvi < 1/ei + Σ_{v≠j} wvi ≤ 1/ei + (ei − 1)/ei = 1,

which gives a contradiction. Clearly if a matrix is doubly stochastic it is isothermal. If Σ_j wji is constant and this value is not 1, adding the weights into every vertex gives a different sum to the sum of those out of every vertex, which is a contradiction because these are sums of identical sets of elements.

12.10. There are only 2N states, governed by the number of leaf mutants and whether there is a mutant in the centre. Representing the fixation probability of the state with i mutants on the leaves and 1 (0) in the centre by p1,i (p0,i), we obtain the equations

p1,i = (r/(r + N − 1)) p1,i+1 + ((N − 1)/(r + N − 1)) p0,i;
p0,i = ((N − 1)r/((N − 1)r + 1)) p1,i + (1/((N − 1)r + 1)) p0,i−1,
where p0,0 = 0, p1,N−1 = 1. The fixation probability is ((N − 1)p0,1 + p1,0)/N. Using the above gives a simple relationship between each of these and p1,1. An iterative procedure gives

p1,1 = (1 + ((N − 1)/(N − 1 + r)) Σ_{j=1}^{N−2} ((N + r)/(r((N − 1)r + 1)))^j)^{−1}

and thus

ρ = (p1,1/N)((N − 1)((N − 1)r/((N − 1)r + 1)) + r/(r + N − 1)).
34 12.14. Rearranging the term in (12.72) for 1 < i < N − 1 gives 1 + β(a + b) 2(1 + β(a + b)) + (i − 2)(1 + 2βa) + 2(1 + β(c + d)) + (N − i − 2)(1 + 2βd) β 1 1 + [(N − 2 − 2i + 4)a + (N − 2)b − 2c − (2(N − i) − 2)d) . = N N
pi,i+1 ≈
For large populations, we get 1 β pi,i+1 ≈ 1 + ((N − 2i)a + N b − 2(N − i)d] . N N β Similarly we get pi,i−1 ≈ N1 1 + N (−2ia + N c − (N − 2i)d) . We then have that pi,i+1 > pi,i−1 if a + b − c − d > 0. This is the condition for selection to favour mutants. We note that this is different to that in Section 12.2, because here we have a circle rather than a complete graph, and so competition takes place at the edges of the boundary region, and there is consequently no frequency dependence. M 12.15. For equation (12,79) kR (i) is the number of neighbours of R which are M s when R is replaced by an M as i M s become i + 1. NM M (i + 1) is the number of M M links after this replacement, NM M (i) the number before, and so the difference must be the number of M neighbours that the replaced M R had, i.e. kR (i). Similarly reasoning for the R neighbours holds in (12.80). For (12.81) and (12.82) there is only a single R, so clearly all neighbours of that R must be M s. Similarly in (12.85) and (12.86) we only have a single M. For (12.83) and (12.84) the left hand expression is the number of neighbours of each type when an M is replaced when there are i + 1 M s, and the right hand side the same number when an R is replaced when there are i M s. The latter is the reverse process at the same site, so clearly these two numbers are equivalent.
12.16. For the Hawk-Dove game mentioned we have a = (15 − C)/2, b = 10, c = 5, d = 15/2, and the condition for Hawk to be favoured is σa + b > c + σd, where σ = (k + 1)/(k − 1). This becomes (k + 1)(15 − C)/2 + 10(k − 1) > 5(k − 1) + 15(k + 1)/2, i.e. k(10 − C) > 10 + C, so that Hawk is favoured when k > (10 + C)/(10 − C) if C < 10, and never if C ≥ 10.
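The condition in 12.16 can be checked by evaluating σa + b > c + σd directly for a few degrees k; a sketch (C = 5 is an illustrative cost):

```python
# Exercise 12.16: on a regular graph of degree k, Hawk is favoured when
# sigma*a + b > c + sigma*d with sigma = (k+1)/(k-1) and the Hawk-Dove
# entries a = (15-C)/2, b = 10, c = 5, d = 15/2 from the solution above.
def hawk_favoured(k, C):
    sigma = (k + 1) / (k - 1)
    a, b, c, d = (15 - C) / 2, 10.0, 5.0, 7.5
    return sigma * a + b > c + sigma * d

C = 5.0
# the computed threshold is (10 + C)/(10 - C) = 3 here
print([hawk_favoured(k, C) for k in (2, 3, 4)])  # [False, False, True]
```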
Chapter 13

13.1. The graph is degree k, so including her own state, she makes a decision based upon the type of k + 1 individuals. There are m types, so there are m^{k+1} configurations of individuals. For each configuration she must pick one of the m types, leading to the result.

13.2. The left hand term is the expected number of steps until random walks starting at i and j meet. The first step could be either from i or from j, each with probability 1/2. Conditional on the first step being from i, the probability that this step is to k is pik. Thus with probability pik/2 the next step leaves us with random walks at k and j. The expected time after the first step is just the sum of all the times from the positions reached after that step, weighted by the probability of those being the positions. This gives the summation on the right hand side of (13.1). We then just add 1, as we have taken a time step to reach this point.

13.3. Inequality (13.4) is b/c > t2/(t3 − t1) and is equivalent to σ(b − c) + (−c) > b + 0σ. The latter rearranges to b/c > (σ + 1)/(σ − 1), so we have t2/(t3 − t1) = (σ + 1)/(σ − 1), which rearranges to give σ = (−t1 + t2 + t3)/(t1 + t2 − t3) as required.

13.4. Following the described process, every individual goes to any given place with probability 1/n. Thus for a particular individual, wherever it goes, the group will contain that individual plus a number of others following the Binomial distribution with parameters n − 1 and 1/n. Thus the expected group size from the individual's perspective is 1 + (n − 1)/n = (2n − 1)/n. From the observer's perspective we see groups that are Binomial with parameters n and 1/n. For a random such group, conditional on not being empty (we only consider groups that the observer can see), we have Σ_{n>0} n pn/(1 − p0) = (1/(1 − p0)) Σ_n n pn = 1/(1 − (1 − 1/n)^n). The expected group size from the individual's perspective is larger (it is always at least as large); for example for very large n, these are approximately 2 and e/(e − 1).

13.5. We have that Eq[φ(Y)] ≥ Ep[φ(X)] for any convex function φ. As φ(x) = x is convex, we have Eq[Y] ≥ Ep[X]. Similarly as φ(x) = −x is convex, we have −Eq[Y] ≥ −Ep[X]. Thus we have Eq[Y] = Ep[X].

13.6. For m = 1 and m = 4, individuals are connected to their parent's neighbours and parent's neighbours' offspring respectively. Thus we need an initial link for new links to develop, so in each case starting from a lone individual we just get 2^n lone individuals after n steps. For m = 2 a single becomes a connected pair, which becomes a line of four, which becomes two connected pairs and two singles. Thus we have a growing population consisting of singles, pairs and lines of four.
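The two perspectives in 13.4 can be computed directly from the Binomial distribution; a sketch:

```python
# Exercise 13.4: expected group size from the individual's vs the observer's
# perspective, computed in closed form (no simulation needed).
def group_size_expectations(n):
    # Individual: itself plus Binomial(n-1, 1/n) others.
    individual = 1 + (n - 1) / n
    # Observer: Binomial(n, 1/n) group size conditioned on being non-empty;
    # the unconditional mean is n*(1/n) = 1.
    p0 = (1 - 1 / n) ** n
    observer = 1 / (1 - p0)
    return individual, observer

ind, obs = group_size_expectations(1000)
print(round(ind, 3), round(obs, 3))  # ≈ 2 and ≈ e/(e-1) ≈ 1.582
```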
13.7. For m = 2 a line of three becomes two pairs and a single, which then becomes two lines of four and a pair, and continuing we get a growing population consisting of singles, pairs and lines of four as above. For m = 4 all are connected to their parent's neighbours' offspring, so a line of three just reproduces another, and after n steps we have 2^n lines of three. For m = 1 a line of three becomes a line of three and two singles, and at each step we maintain one line of three and double the number of singles as well as adding two more.

13.8.
""" Games with reproducing vertices This script is based on Some Models of Reproducing Graphs: III Game Based Reproduction. For a given maximum degree Q and a starting graph G it runs a given number of iteration of the process determined by parameters r0, r1, and r2. It plots the initial and final graph. """
9 10 11 12 13
## Import basic packages import networkx as nx import matplotlib.pyplot as plt import copy
14 15
## Define the basic parameters
16 17 18 19 20
# Set up the initial graph G = nx.Graph() G.add nodes from(['a', 'b']) G.add edge('a','b')
21 22 23
# Set the parameter Q (max degree) Q = 5
24 25 26
# Set the number of iterations to run numIter = 2
27 28 29 30 31
# Set the updating rules r0 = 1 # if 1, connect offsprings to their parrents neighbors r1 = 1 # if 1, connect offsprings to their parents r2 = 1 # if 1, offsprings of connected parents are connected ... to one another
32 33 34 35
# Plot the initial graph G plt.figure(1) nx.draw(G, with labels=True)
36 37 38
# Iterate a given number of times for iteration in range(numIter):
39 40 41 42
# Copy G for stage 1 process G1 = copy.deepcopy(G)
37 43 44
# For all vertices in the graph for parent in G:
45 46 47
# Name the offspring offspring = parent+str(iteration)
48 49 50
# Add it to the graph G1.add node(offspring)
51 52 53 54
if r0 == 1: # Connect it to its parent's neighbors [G1.add edge(offspring,nbr) for nbr in ... G.neighbors(parent)]
55 56 57 58
if r1 == 1: # Connect it to its parent G1.add edge(offspring, parent)
59 60 61 62 63 64 65 66 67 68 69 70 71 72 73
74
# for parent in G: ends here if r2 == 1: # Connect offsprings of connected parents # Go through the graph G, look at all vertices # find and their neighbors and connect their offsprings for par in G: # Get the name of the offspring offspr = par+str(iteration) # Go through the parent's neighbors for parnbr in G.neighbors(par): # Get the name of their offspring offsprnbr = parnbr+str(iteration) # Connect the two offsprings # The offspring may already be connected, but ... it is OK G1.add edge(offspr, offsprnbr)
75 76 77 78
# If we want to see progress, uncomment below #plt.figure() #nx.draw(G1, with labels=True)
79 80 81
# Create a copy of the graph for stage 2 process G2 = copy.deepcopy(G1)
82 83 84 85 86
# Cull the vertices with degree more than Q for node in G1: if G1.degree(node)>Q+1: G2.remove node(node)
87 88 89
# Copy G2 back to G and restart the cycle G = copy.deepcopy(G2)
90 91 92 93
# Plot the final graph G plt.figure() nx.draw(G, with labels=True)
94 95 96
# Display the plots plt.show()
38
Chapter 14 14.1. Theorem 6.2 gives the ESS conditions as E[x, x] ≥ E[y, x] and if E[x, x] ≥ E[y, x] then E[x, y] ≥ E[y, y]. The first condition is simply condition (13.16) s(y, x) ≤ 0, and the second is s(x, y) > 0 whenever (13.16) holds, i.e. (13.17). 14.2. For pure strategy S1 to be an ESS, E[S1 , S1 ] > E[S2 , S1 ]. Thus, s(y, S1 ) = E[y, S1 ] − E[S1 , S1 ] = y(E[S2 , S1 ] − E[S1 , S1 ]). Thus the derivative of s(y, S1 ) is negative, so S1 is not an ess. 14.3. An example function is s(y, x) = (1 − y 2 )(x2 − 2y 2 + xy) which has an ess at 0 which resists small mutants, but mutants with value greater than 1 or less than -1 invade. 1−y 1−y so s(y, x) = xy + 1−x − xx − 1−x 14.4. E[y; δx ] = xy + 1−x 1−x which gives the 1 1 required result. Differentiating gives x − 1−x which is zero only at x = 1/2. The second derivative with respect to x is 16, w.r.t y is 0. Thus, as in matrix games, it is convergent stable, a protected polymorphism, can invade but is borderline invasible.
14.5. Using the standard terminology we have a = 4, b = 2. Comparing to (13.21), (13.22) and (13.24) we have case 3 in Table 13.1, as required. 14.6. Examples are: 1) −x2 + 2y 2 − xy; 2) x2 + 2y 2 − 3xy; 4) 2x2 − y 2 − xy; 6) −x2 − 2y 2 + 3xy; 8) −2x2 + 2y 2 − xy. 14.7. Differentiating and setting to 0 with y = x gives −6y + 2x − 8 = 0 with the root x = −2. Taking second derivatives gives a = 2 and b = −6 which is an example of case 5 from Table 13.1. 14.8. s(y, x) = a(y 2 − x2 ) + b(yx − x2 ) + d(y − x). The derivative w.r.t y is 2ay + bx + d so setting this to 0 with y = x gives x = −d/(2a + b). This is an ess if it lies within the allowable range (in this case) of [0,1]. The second derivative w.r.t is 2a which for a < 0 gives a local maximum at the ess. As s(y, x) is quadratic this is also a global maximum, and so the expression in the hint holds, and the ess is an ESS. 14.9. The second derivative of s(y, x) w.r.t y is 2a and the second derivative w.r.t x is −2a−2b. Thus the condition for convergence stability (13.24) reduces to 2a+b < 0. Switching the inequality means the ESS is not convergent stable.
39 14.10. E(y, x) = a11 yx + a12 y(1 − x) + a21 (1 − y)x + a22 (1 − y)(1 − x). The second derivative of s(y, x) w.r.t y is 0, the second derivative w.r.t x gives −2(a11 − a12 − a21 + a22 ). Thus convergent stability holds if a11 − a12 − a21 + a22 < 0. This is just (6.45) for the two strategy case. 14.11. To be non-invasible s(y, x∗) must be a local maximum at y = x∗. The condition for this is that the stated second derivative matrix is negative definite. To be able to invade, s(x∗, x) must be a local minimum at x = x∗. The condition for this is that the stated second derivative matrix is positive definite. 14.12. s(y, x) contains no yi yj or yi2 terms, so (13.52) gives 0 entries. The (i,j) entry from (13.53) is 0 − (aij − ain − anj + ann ) − (aji − ani − ajn + ann ). The (i,j) entry in (13.54) is 0 + (aij − ain − anj + ann )/2 + (aji − ani − ajn + ann )/2, and so is -1/2 times the (i,j) entry in (13.53). This is also 1/2 times the (i,j) entry in (6.45). Thus (6.45) is negative definite if and only if (13.54) is negative definite if and only if (13.53) is positive definite. 14.13. The differential equations become dx2 dx1 = 2a(x2 − x1 ), = 2a(x1 − x2 ), dt dt and so x1 + x2 are constant. a) a = 1 means that the lower of x1 and x2 increases, the higher decreases at the same rate, and they finish at the mid-point of their initial values. b) a = −1, x1 < x2 means that x1 tends to minus infinity, x2 to plus infinity. c) a = −1, x1 = x2 means both derivatives are 0 and so they stay at their initial values.
Chapter 15

15.1. Similarly to the hint, we have the following: facultative-helping r = 1, B = pb, C = pc; obligate-helping r = 1, B = pb, C = c; facultative-harming r = −p/(1 − p), B = −(1 − p)d, C = (1 − p)a. We use rB > C, with r, B and C as above, to get the desired results.

15.2. All games between GRIM, TFT and TF2T involve cooperation every move and lead to the standard R/(1 − w) for both players. ALLD always defects; against ALLD, STFT always defects, GRIM and TFT always defect after one cooperation, and TF2T always defects after two cooperations, so the stated payoffs follow. STFT against TFT is a series of alternating
CvD games (so e.g. STFT gains T + wS + w^2 T + ...), giving the payoffs in the matrix. Finally, in STFT against TF2T, all moves involve cooperation except the first move of STFT.

15.3. This occurs if S + Rw/(1 − w) > (S + wT)/(1 − w^2), which rearranges to w > (T − R)/(R − S); this is less than 1 since S + T < 2R.

15.4. Against GRIM, TFT and TF2T, WSLS is involved in a game with all cooperation, so that the payoff is R/(1 − w). Against ALLD, WSLS plays an alternating sequence gaining S + wP + w^2 S + ... = (S + wP)/(1 − w^2). WSLS versus STFT has the repeating sequence

WSLS: C D D C D D ...
STFT: D C D D C D ...

The payoff to WSLS is thus (S + wT + w^2 P)/(1 − w^3).

15.5. For ε small enough that the probability of two errors in the same or two consecutive moves is negligible, we have the following games (the move marked * was played by error):

ALLD-e: DD ... C*DDD ...
ALLD-e: DD ... DDDD ...

ALLD-e:   DDDD ... C*DDD ... DDDD ...
Grudge-e: CDDD ... DDDD ... C*DDD ...

ALLD-e: DDDD ... C*DDDD ... DDDDD ...
WSLS-e: CDCD ... DCDCD ... C*DCDC ...

WSLS-e: CC ... D*DCC ... CCC ...
WSLS-e: CC ... CDCC ... CCC ...

WSLS-e:   CC ... D*DCDC ...
Grudge-e: CC ... CDDDD ...

WSLS-e:   CC ... CDCDC ... DCDCD ...
Grudge-e: CC ... D*DDDD ... DDDDD ...

Grudge-e: CC ... D*DDD ... DDDD ...
Grudge-e: CC ... CDDD ... DDDD ...
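The WSLS-versus-STFT cycle of Exercise 15.4 can be checked numerically. A minimal sketch; the payoff values R = 3, S = 0, T = 5, P = 1 and discount w = 0.9 are illustrative assumptions, not taken from the exercise:

```python
# Sketch: simulate WSLS against STFT and compare the discounted payoff
# with the closed form (S + wT + w^2 P)/(1 - w^3).
R, S, T, P, w = 3, 0, 5, 1, 0.9
PAY = {('C', 'C'): R, ('C', 'D'): S, ('D', 'C'): T, ('D', 'D'): P}

def wsls(own, opp):
    # Win-Stay Lose-Shift: start with C; repeat the last move after a
    # "win" (opponent cooperated), otherwise switch.
    if not own:
        return 'C'
    if opp[-1] == 'C':
        return own[-1]
    return 'D' if own[-1] == 'C' else 'C'

def stft(own, opp):
    # Suspicious Tit For Tat: start with D, then copy the opponent.
    return 'D' if not own else opp[-1]

def play(s1, s2, rounds):
    h1, h2 = [], []
    for _ in range(rounds):
        m1, m2 = s1(h1, h2), s2(h2, h1)
        h1.append(m1)
        h2.append(m2)
    return h1, h2

h1, h2 = play(wsls, stft, 300)
simulated = sum(PAY[(a, b)] * w**i for i, (a, b) in enumerate(zip(h1, h2)))
formula = (S + w*T + w**2 * P) / (1 - w**3)
```

The simulated moves repeat the period-3 cycle (C, D), (D, C), (D, D), and the truncated discounted sum agrees with the formula up to an error of order w^300.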
15.6. Looking at Figure 14.1, we have a three state process, where we can label the states CC, CD and DD, with long term probabilities p_CC, p_CD and p_DD respectively. By symmetry p_CC = p_DD, and the transitions being in detailed balance gives 2p_CD ε = 2(p_CC + p_DD)ε, so that p_CD = 1/2 and p_DD = p_CC = 1/4.
The mean payoff per round to a TFT player is thus R/4 + P/4 + (1/2)(S + T)/2 = (R + P + S + T)/4.

15.7. Starting with αE1 + βE2 + γ = 0, letting γ = −(α + β)P we have α(E1 − P) = −β(E2 − P). Letting χ = −β/α > 1 gives equation (15.27), where the advantage of player 1 over the base payoff P is the advantage of player 2 multiplied by χ > 1, the extortion condition.

15.8. χ = 2 ⇒ β = −2α and so γ = αP = α. So we have p_R = 1 − 2α, p_T = 6α, p_S = 1 − 9α and p_P = 0. This works for all values of α such that all of these are probabilities, i.e. lie between 0 and 1, which is true for 0 < α < 1/9 (not allowing α = 0, as χ = −β/α).

15.9. The payoff matrix is given by

a_{i1...i4} = 20 − i1 + 0.4 Σ_{j=1}^{4} i_j = 20 − 0.6 i1 + 0.4 Σ_{j=2}^{4} i_j.
It follows that a_{i1 i2 i3 i4} < a_{i1' i2 i3 i4} if i1 > i1', and thus S0 is the best reply to any strategies of the other players.

15.10. This game consists of two rounds. In the first round, the game and payoffs a_{i1...i4} are identical to the ones in Exercise 15.9. In the second round, each player chooses whether or not to punish the offenders that did not invest fully in the first round. The players must decide how much they are willing to pay for punishing the others. We assume that if more than one of the opponents offended, then the focal player will punish all of the offenders equally. The maximal price is $10; but since up to three opponents could offend, the focal player may not be able to punish with full intensity (plus, in some instances, such as if a player invests $20 as the only player, the focal player may be left with only $8 for the potential punishment). However, it is clear that regardless of what the others do, the decision not to punish at all is almost always strictly better than punishing: the punishment adds a cost to the focal player without bringing any benefits. Consequently, no punishing is the strict best reply under almost all scenarios. The only exception is the case where every opponent invests fully; in this case, any level of (intended but not actually made) punishment is equally good for the focal player. When the players are not punishing, we are in the situation of Exercise 15.9. Consequently, the only strategies that are best replies to players using identical strategies are the following: a) do not invest and do not punish; b) invest the maximum and intend to punish at some level (possibly 0). The strategies from category b) can invade each other by drift, so they are not ESSs.
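The best-reply claim of Exercise 15.9 is easy to confirm by brute force. A sketch, assuming investments are whole dollar amounts from 0 to 20; the co-players' investments 5, 17 and 20 are arbitrary illustrative choices:

```python
def payoff(i1, i2, i3, i4):
    # a_{i1...i4} = 20 - i1 + 0.4*(i1 + i2 + i3 + i4)
    #             = 20 - 0.6*i1 + 0.4*(i2 + i3 + i4)
    return 20 - 0.6 * i1 + 0.4 * (i2 + i3 + i4)

# Whatever the other three players invest, investing 0 (strategy S0)
# maximises the focal payoff, since i1 enters with coefficient -0.6.
others = (5, 17, 20)
best = max(range(21), key=lambda i1: payoff(i1, *others))
```

Changing `others` to any other triple leaves `best` at 0, illustrating that S0 is a best reply regardless of the opponents' strategies.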
15.11. For given k, l we find the best response by comparing the terms in (14.24). Invest is clearly worse than defect, and comparing the strategies gives (14.25). Defect is stable if the first condition is satisfied with l = m − 1, k = 0, so always. Cooperate is stable if the second is satisfied with k = m − 1, l = 0, giving V(1 − c/m) < (m − 1)P (there is also an unstable mixed equilibrium).

15.12. The defector has payoff 2b, its two neighbours b − 2c, and all other cooperators 2(b − c). The defector is replaced if it is selected for death, with probability 1/n. The defector will replace a cooperator if it replaces a neighbour, with probability 2/n × 2b/(4b − 2c) = 2b/(n(2b − c)).

15.13. CCs cooperate against each other with reward R, and play D (C) with probability q (1 − q) against ALLD, with reward qP + (1 − q)S to CC and qP + (1 − q)T to ALLD. ALLD is an ESS, so there is no mixed ESS, and CC is an ESS if q > (T − R)/(T − P).

15.14. If there are no errors, games between TFT/CTFT and TFT/CTFT are all cooperation. If TFT defects first in turn n by error, then a retaliatory D in turn n + 1 by its opponent will be met with its own D in turn n + 2. If CTFT defects first in turn n, a retaliation in turn n + 1 will leave its opponent in good standing, so CTFT plays C in turn n + 2, and mutual cooperation is restored.
Chapter 16

16.1. If the subordinate stays, its total reward is kp from itself plus kr(1 − p) inclusive fitness from the dominant, thus giving the result. The equivalents are x and r if it leaves, and f and (1 − f)r if it fights, respectively, thus giving the results. Comparing the stay and leave rewards from (15.2) gives (15.3), and similarly comparing the stay and fight rewards gives (15.4).

16.2. Following the hint: if pc > ps > max{pp, 0} then the staying incentive must be paid, which occurs if k − 1 − xr > x − r(k − 1) > f(1 − r) − r(k − 1) and x − r(k − 1) > 0, implying the result. Similarly (and with similar substitution), if pp > 0 > ps or pp > ps > 0 the peace incentive must be paid; if ps < 0, pp < 0 nothing need be paid; and if ps > pc, ps > pp the payment for staying is too great to be worthwhile. Together these cover all cases.
[Figure for Exercise 16.2: the regions of the (f, x) plane, bounded by the lines x = k − 1, x = r(k − 1) and x = f(1 − r), showing where the pair splits and where the staying incentive ps or the peace incentive pp is paid.]
16.3. If the dominant offers p > ps then the subordinate will stay (similarly re fighting), so it should offer the smallest such value. However no smallest value exists, so as stated there is no solution. If we assume that there are only a finite number of allowable strategies then there will be such a smallest value, and this problem disappears.

16.4. There are eight possible triads, each occurring with probability 1/8. Six of these are transitive, giving a probability of 3/4.

16.5. If A beats both B and C, transitivity has probability 1; if A beats B and B beats C, it has probability 1/2.

16.6. If W > L + C then we have pure Hawk, if L > W pure Dove, and if L + C > W > L we have a mixture with Hawk probability p = (W − L)/C.

16.7. The hint comes from the fact that half of i/(j − 1) individuals lose and half of (i − 1)/(j − 1) win. Use induction, assuming the result for j − 1 and showing it for j (it is true for j = 0).

16.8. If p_{i(j+1)} = 1, then W_{i+1,j+2} − W_{i,j+2} ≥ C, W_{i,j+1} = (W_{i+1,j+2} + W_{i,j+2} − C)/2, and we know that W_{i+1,j+1} ≥ W_{i+1,j+2}. Then W_{i+1,j+1} − W_{i,j+1} ≥ W_{i+1,j+2} − (W_{i+1,j+2} + W_{i,j+2} − C)/2 ≥ C.

16.9. Working back from the end we get the payoff table

i\j    0      1      2      3      4
4                                  16
3                           9.5    8
2                    4.45   4.4    4
1             3.651  3.176  2.6    2
0      2.825  2.342  1.856  1.4    1

and the table of Hawk probabilities

i\j    0      1      2      3
3                           1
2                    1      0.8
1             0.255  0.36   0.4
0      0.262  0.264  0.24   0.2
16.10. Scans last ts and take up a proportion 1 − u of the time, so the mean interscan time T is given by 1 − u = ts/(ts + T). Scans occur as a Poisson process at rate 1/T = (1 − u)/(ts u). The probability that no scan occurs within time ta is thus (15.29).

16.11. We have h'(u) = (Am − Bm)m g^{m−1}(u) g'(u) + (Bm − Cm)u g''(u). Since Am > Bm ≥ Cm and g'(u) > 0, g''(u) > 0, we get that h'(u) > 0. Thus, since h(0) < 0, there is at most one root of h(u) = 0, depending on the value of h(1). If h(1) < 0 there is no root, and in that case u* = 1 is the ESS. Otherwise, there is exactly one root and that root is an ESS.

16.12. a) Neither (15.25) nor (15.26) holds, so there is no attack. b) For t < 0.41, (15.25) holds, so N attacks with no defence; for t > 0.41, neither holds, so no attack. c) For t < 0.73, (15.25) holds, so N attacks with no defence; for 0.73 < t < 1.07, (15.26) holds but (15.25) does not, so N attacks and F defends; for t > 1.07, neither holds, so no attack.
Chapter 17

17.1. All non Hawk-Hawk contests lead to survival, with extra reward R plus the usual reward. Individuals in Hawk-Hawk contests survive with probability 1 − z/2, with additional reward V/2. This gives (16.54), which is the standard Hawk-Dove matrix with C replaced by zR and background fitness R. Thus we have Hawk if V > zR, and a mixture with Hawk probability V/(zR) otherwise.

17.2. We start with S1 = 1, p1 = p. A proportion p_i z/2 of Hawks die in round i, so that S_{i+1} = S_i(1 − p_i z/2). Removing the dead individuals from round i,
we obtain p_{i+1} = (p_i − p_i^2 z/2)/(1 − p_i^2 z/2). The payoffs are

E[D] = Σ_{i=1}^{Kmax} (1 − p_i) V/2 + R,

E[H] = Σ_{i=1}^{Kmax} (p_i V/2 + (1 − p_i)V) S_i + S_{Kmax} R.
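The recursion and payoff sums of 17.2 can be evaluated directly. A sketch; the parameter values in the call are illustrative assumptions, and the final background term is taken with the survival probability after the last round:

```python
def hawk_dove_rounds(p, z, V, R, Kmax):
    # S_1 = 1, p_1 = p; S_{i+1} = S_i(1 - p_i z/2),
    # p_{i+1} = (p_i - p_i^2 z/2)/(1 - p_i^2 z/2).
    S, pi = 1.0, p
    ED = EH = 0.0
    for _ in range(Kmax):
        ED += (1 - pi) * V / 2
        EH += (pi * V / 2 + (1 - pi) * V) * S
        # simultaneous update of survival and Hawk frequency
        S, pi = S * (1 - pi * z / 2), (pi - pi**2 * z / 2) / (1 - pi**2 * z / 2)
    return ED + R, EH + S * R

ED, EH = hawk_dove_rounds(p=0.5, z=0.0, V=2.0, R=1.0, Kmax=3)
```

With z = 0 nobody dies, so S_i = 1 and p_i = p throughout; then E[D] = Kmax(1 − p)V/2 + R and E[H] = Kmax(pV/2 + (1 − p)V) + R, which the call above reproduces.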
17.3. The expected reward per female found is 1 − θ + θ(1 + q − p)/2 times V. The probability of surviving from meeting one female to the next is (1 − θpqz/2)s, so we have a geometrically distributed number of meetings with this probability, giving the desired result.

17.4. (16.18) and (16.19) both hold, and substituting for λ from (16.18) into (16.19) gives the result.

17.5.

"""
Sperm competition contest: males with no knowledge.
The script solves a system of equations and plots the ESS allocations
to virgin and non-virgin females.
"""

## Define parameters
E = 3        # Energy available for reproduction
D = 1        # Production cost of a unit of sperm
r = 0.2      # Discount factor for a second mating
e = 0.001    # Probability of female staying unfertilized

## Import basic packages
import numpy as np
from scipy.optimize import fsolve
import matplotlib.pyplot as plt

## Initialize variables
# All possible probabilities of double mating q
all_q = np.linspace(0, 1, 101)

# Sperm allocation (to any) female
S = np.empty(len(all_q))

## Define equations to solve
def eqs(variables, *Params):
    """ returns the equations to be solved """
    # Unpack the variables
    (s, lbda) = variables

    # Unpack the parameters
    E, D, r, e, q = Params

    # Set the equations
    eq1 = 1/(1+q) * ((1-q)*e/(s+e)**2
                     + q*(2*r*s+(1+r)*e)/(s*(1+r)+e)**2) - lbda*D
    eq2 = s*(1-q)/(s + e) + q*(s+r*s)/(s + r*s + e) - lbda*E
    return [eq1, eq2]

# For every q (and a corresponding index j)
for j, q in enumerate(all_q):
    # Solve the equations
    S[j], multiplier = fsolve(eqs, (0.01, 0.1), args=(E, D, r, e, q))

## Plot the solutions
plt.plot(all_q, S, label='$s^*$, (any) female')
plt.xlabel('Probability of double mating, $q$')
plt.ylabel('Sperm allocation')
plt.legend()
plt.show()
17.6. This game is just a bimatrix game where each player has two pure strategies, as described in Chapter 8. The payoff matrices are

M1 = [[B − CR/2 − CC, B − CR/2], [0, B]],
M2 = [[B − CR/2 − CC, 0], [B − CR/2, B − CR]].

Subtracting appropriate constants from each column to obtain 0 on the diagonals yields

M1 = [[0, −CR/2], [CC + CR/2 − B, 0]],
M2 = [[0, CR − B], [CC, 0]].

Thus, there is an unstable equilibrium at (x, 1 − x), (y, 1 − y), where

x = (CR − B)/(CR + CC − B),
y = CR/(2(B − CC)),

whenever these x and y values lie in the open interval (0, 1). The dynamics then follow closed orbits around these values, again as described in Chapter 8.
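The indifference conditions behind these x and y values can be verified numerically with the zeroed matrices. A sketch, with illustrative parameter values chosen (as an assumption) so that CR > B > CC + CR/2, which puts both x and y in (0, 1):

```python
# Illustrative assumption: CR = 4, CC = 0.5, B = 3, so CR > B > CC + CR/2.
B, CR, CC = 3.0, 4.0, 0.5

x = (CR - B) / (CR + CC - B)   # player 1's equilibrium mixture
y = CR / (2 * (B - CC))        # player 2's equilibrium mixture

# Player 1 indifferent against (y, 1-y) under M1 = [[0, -CR/2], [CC + CR/2 - B, 0]]
row1 = (1 - y) * (-CR / 2)
row2 = y * (CC + CR / 2 - B)

# Player 2 indifferent against (x, 1-x) under M2 = [[0, CR - B], [CC, 0]]
col1 = (1 - x) * (CR - B)
col2 = x * CC
```

Both indifference gaps vanish at (x, y), confirming the equilibrium formulas for these parameter values.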
17.7. a) If B < CR/2 then Male plays P, Female plays C; if CR/2 < B < CR then Male plays F, Female plays F; if CR < B then Male plays P, Female plays F. b) Compare B − CR/2 − CC, B − CR and 0; if B − CR/2 − CC is largest then Male plays F, Female plays C; if B − CR is largest then Male plays P, Female plays F; if 0 is largest then Male plays P, Female plays C.

17.8. The male payoffs reduce to γH = BH/(1 + (λρF)^{−1}), γN = BN λρF(p), and we obtain that helpful males do better under the condition (BH − BN)/BN > λρF(p). Expressing ρF in terms of ρM, which itself depends upon ρF, gives

ρF^2 λ(1 + λS(1 − p)) + ρF(1 + λ(S − 1)) − 1 = 0.

The single positive root of this increases in p. If the above condition holds for p = 1, all males are helpful; if it does not hold for p = 0, no males are helpful; otherwise there is a stable mixture at the p where the terms are equal.
Chapter 18

18.1. The first part involves working carefully through the example. For large r, the function P(a) is approximated by q0(1 − (a − a0)/r), which is approximately linearly decreasing in a. However, from (16.56), A(q) will now increase sharply with q. The higher r, the more females discriminate, the higher the levels of advertising needed, and consequently the survival rates and payoffs decline.

18.2. Females currently make optimal choices against all males, so any change is worse. Any male q < α (q > α) reduces its payoff by changing signal from 0 to α (α to 0). Any other signal changes also clearly reduce payoffs.

18.3. w1 = −1/q < 0, w2 = 1 > 0, and so w1/w2 = −1/q, which increases with q (also w13 = q^{−2} > 0, w23 = 0).

18.4. The payoffs are as in Table 18.1 and the proportion of High quality signallers is p. Following the selected strategies, there are two scenarios: (1) a high quality Sender signals and the Receiver, after seeing the signal, chooses A; or (2) a low quality Sender does not signal and the Receiver, after seeing no signal, chooses B. The Sender has no incentive to change strategy because (a) if a high quality Sender changes to no signal, the payoff decreases from 1 − cH to 0, and (b) if a low quality Sender changes to signal, the payoff decreases from 0 to 1 − cL. Similarly, the Receiver has no incentive to change strategy: (a) if the strategy is to use A always, the payoff would decrease from 1 to p; (b) if the
strategy is to use B always, the payoff would decrease to 1 − p; and (c) if the strategy is to use B after the signal and A after no signal, the payoff would decrease to 0.

[FIGURE 1: game tree. Nature chooses the Sender's type (High or Low); the Sender chooses Stay silent or Attempt signal, which may Succeed or Fail; the Receiver, seeing only whether a signal arrived, chooses A or B. Payoffs (Sender, Receiver): High type, signal received: A gives (1 − cH, 1), B gives (−cH, 0); High type, silent or failed signal: A gives (1, 1), B gives (0, 0); Low type, signal received: A gives (1 − cL, 0), B gives (−cL, 1); Low type, silent or failed signal: A gives (1, 0), B gives (0, 1).]
FIGURE 1: For exercise 18.5. The Pygmalion game from Figure 18.3 with sH = 1 and sL = 0 is like removing the red parts; the remaining black parts are equivalent to the Index signal game.

18.5. This is illustrated in Figure 1. Removing the branches shown in red recovers the index signal model. Compare the figure that results to Figure 18.2.

18.6. For the index signal model a receiver knows that the high quality individuals can give the signal and a low quality one cannot. Thus, assuming it is profitable for the high quality to signal, when the receiver sees (or does not see) the signal, it knows the quality, irrespective of the population frequency. As soon as there is the possibility of error, it is important to know the frequencies, as the proportion of signallers of different types with the high or low quality signal depends both upon their chances of failing to send the signal and on their frequency in the population. For the system as described, following (18.28) we have 2s > c > s and 2/3 < 1/(2 − 2s) ⇒ s > 1/4. Thus we have in total 1 ≥ 2s > c > s > 1/4.

18.7. When all Senders signal and the Receiver does A only if a signal is observed, the high quality Senders get sH − cH and low quality Senders get sL − cL. If Senders did not signal, they would receive 0. Thus, the Senders
prefer not to switch if sH > cH and sL > cL. The signal reaches the Receiver with probability psH + (1 − p)sL and does not reach it with probability p(1 − sH) + (1 − p)(1 − sL). The Receiver thus gets psH + (1 − p)(1 − sL). Switching to "always A" would yield a payoff p and switching to "always B" would yield a payoff 1 − p. Switching is thus not advantageous whenever psH + (1 − p)(1 − sL) ≥ max{p, 1 − p}, which is equivalent to sL/(sL + sH) ≤ p ≤ (1 − sL)/(2 − (sL + sH)). Furthermore, if the Receiver chooses B after a signal and A without a signal, the payoff is p(1 − sH) + (1 − p)sL. This reduces to 2p(sH + sL − 1) > 2sL − 1. Depending on whether sH + sL is greater or smaller than 1, we then get the remaining conditions.

18.8. We assume that there are two kinds of agents, a high quality Q and a low quality q; the proportion of Q is p. The principal acts first. It can decide whether to be demanding or not, i.e., whether to set up a task for agents to complete. Setting up the task costs the principal a fixed amount δ. Without an interaction, the principal gets P and the agent gets A. The principal's benefit of an interaction with an agent of quality i ∈ {q, Q} is 1 − πi, with πQ < πq, i.e., the principal prefers high quality agents. The agent's payoff from the interaction is 1 if the principal is not demanding, and 1 − αi, i ∈ {q, Q}, with 0 < αQ < αq, if the principal is demanding. The game is shown in Figure 2.

[FIGURE 2: game tree. The Principal (node 1) chooses Demanding or Not demanding; the Agent (nodes 2 and 3) then chooses Always interact or Interact only if Q. Payoffs (Principal, Agent): Demanding, Always interact: (−δ + p(1 − πQ) + (1 − p)(1 − πq), 1 − (pαQ + (1 − p)αq)); Demanding, Interact only if Q: (−δ + p(1 − πQ) + (1 − p)P, p(1 − αQ) + (1 − p)A); Not demanding, Always interact: (p(1 − πQ) + (1 − p)(1 − πq), 1); Not demanding, Interact only if Q: (p(1 − πQ) + (1 − p)P, p + (1 − p)A).]
FIGURE 2: The screening game from Exercises 18.8 and 18.9. The first payoff is to the principal, the second to the agent. Agents could possibly also choose to "Never interact" or "Interact only if q".

18.9. We extend the game from Exercise 18.8, which can be seen as a discrete version of this game where the principal could be either fully demanding or not demanding at all. Here, we assume that the principal can exert effort e ∈ [0, 1], with e = 0 corresponding to not demanding and e = 1 to fully demanding.
The effort e costs the principal eδ and the agent will get 1 − eαi for i ∈ {q, Q}. Now, assume that the principal exerts effort e. If the agents always interact, they get p(1 − eαQ ) + (1 − p)(1 − eαq ). If they interact only if Q, then the
payoff is p(1 − eαQ) + (1 − p)A. The second option is preferred by agents if A > 1 − eαq, i.e., if e > (1 − A)/αq.

18.10. The relatedness between the individuals is k, so that their payoff is their own reward plus k times the other's. Thus with donation the donor (signaller) gets y (1), so the payoffs to donor and signaller are y + k and 1 + ky (with no donation these are 1 + kx and x + k). The donor should donate and the receiver receive if y + k > 1 + xk and 1 + ky > x + k. The second condition is implied by the first, thus we get y > 1 − k(1 − x).

18.11. Here each player has four strategies, i.e. all possible combinations of whether to accept each type of pollinator or not (YY, YN, NY, NN). The actual payoffs depend upon the frequency of pollinators of each type, and their number when compared to the number of plants. If for simplicity we assume that there are enough pollinators to ensure all plants are visited and all pollen used, then the payoff to a plant is the probability that the pollen is passed on to a plant of the same species. NN is clearly bad as the payoff is 0. The maximum payoff of 1 is achieved if and only if the specific plant type only allows one type of pollinator, and the other plant type does not allow it. As NN is bad, this means that the only possible payoffs of 1 occur for the strategy pairs (YN, NY) and (NY, YN). Precise payoffs in other scenarios depend upon plant and pollinator relative frequencies, but these strategies are stable regardless.
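The effort threshold of Exercise 18.9 can be illustrated directly. A minimal sketch; the values of p, A, αQ and αq are assumed for illustration and are not from the exercise:

```python
# Illustrative assumptions: proportion of high quality p, outside option A,
# and effort costs alpha_Q < alpha_q.
p, A = 0.6, 0.4
alpha_Q, alpha_q = 0.2, 0.9

def always_interact(e):
    # agent payoff from interacting with any principal exerting effort e
    return p * (1 - e * alpha_Q) + (1 - p) * (1 - e * alpha_q)

def interact_only_if_Q(e):
    # low quality agents take the outside option A instead
    return p * (1 - e * alpha_Q) + (1 - p) * A

e_threshold = (1 - A) / alpha_q  # agents switch behaviour at this effort
```

Below the threshold the agents always interact; above it, only the Q agents do, matching the condition e > (1 − A)/αq.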
Chapter 19

19.1. Suppose that B_{l+1} > R1 and consider an invading q with q_{l+1} > 0 and q_i = 0 for i > l + 1. Then h_{p,q,0} = q_{l+1}(R1 − B_{l+1}) < 0, which is a contradiction.

19.2. The working is as in the proof up to (17.13), where we now have extra terms (0 − q_i)(B_i − f_i(q_i) − R_i) for i > l, and these are positive for q_i > 0, so the same result follows.

19.3. The function B1 − f1(p) − (B2 − f2(1 − p)) is decreasing in p. It is positive at p = 0, so if (17.17) does not hold there is a unique root p1 in (0, 1), giving (17.16). Similarly, if (17.17) holds there is clearly no root. In each case, using Theorem 17.2, there is a unique ESS.

19.4. For an ESS we need 1) R1 = ... = Rl and 2) R1 ≥ B_{l+1}. Work sequentially from l = 1. 1) clearly holds for l = 1. If 2) holds then we have
the solution; if not, go to l = 2, where there will be a mixture where 1) holds. Progressing, eventually there will be an l for which both 1) and 2) hold. Any such solution is unique, since we cannot have two with the same l (compare payoffs to each on two patches where the solutions differ), and a lower l solution means that a larger l can never feature in an equilibrium.

19.5. As B1 = 10, B2 = 5, B3 = 2.5 and f1(d) = f2(d) = f3(d) = Nd/10, we get that patch 1 only is occupied if 10 − N/10 ≥ 5, i.e. if N ≤ 50. Otherwise, setting 10 − Np1/10 = 5 − N(1 − p1)/10 we get p1 = (50 + N)/(2N), and since

10 − (N/10) × (50 + N)/(2N) ≥ 2.5

if N ≤ 100, we get that patches 1 and 2 are occupied for 50 < N ≤ 100. For N > 100, all three patches will be occupied, and p1, p2 need to solve

10 − (N/10)p1 = 5 − (N/10)p2,
10 − (N/10)p1 = 2.5 − (N/10)(1 − p1 − p2),

which yields p1 = (125 + N)/(3N) and p2 = (N − 25)/(3N).

19.6. The main difference is that for Parker's payoffs there is a discontinuity at p = 0, and empty patches have infinite value. Thus any strategy that left an empty patch would be invadable by one that used it.

19.7. There is a pure p1 = 1 if α ≥ 1. Otherwise there is a mixture when 1 + α − (1 − p2) = 1 − p2^2, which gives a single root within (0, 1) at p2 = (√(5 − 4α) − 1)/2. For α = 1/4 we have p2 = 1/2.

19.8. q1 = 4/9 − p1/3. Thus there is a valid q1 for any p1. The full range of q1 values is 1/9 to 4/9, so the second type cannot all be on one patch.

19.9. For the case where both patches are occupied by both species there is an equation analogous to (17.29),

q1 = r1(R1)/(g1(R1)N) − p1 f1(R1)M/(g1(R1)N),

and additionally f1(R1) = f2(R2), g1(R1) = g2(R2). This gives three equations in four unknowns p1, q1, R1, R2, leading to a line of equilibria as before. If one species is entirely on one patch and the other on both (e.g. 0 < p1 < 1, q1 =
1), we have three equations,

r1(R1) = f1(R1)p1 M + g1(R1)N,
r2(R2) = f2(R2)p2 M,
f1(R1) = f2(R2),

in three unknowns p1, R1, R2, with a unique solution.

19.10. Mi is a Poisson process of rate νi, so the combined process is a Poisson process of rate ν1 + ν2, giving mean time 1/(ν1 + ν2). Consider a small time interval of length δt and the probability that M1 occurs given some event occurs. By Bayes' theorem this is P[M1]/P[some event], which sending δt to 0 gives the result.

19.11. Equation (17.42) leads to (νf f + νh H)TS = 1 + νh H(1 − αq)TS + q tc νh H, and thus

TS = (1 + q tc νh H)/(νf f + αq νh H).

Equation (17.43) gives (1/th + pνh S)TH = 1 + pνh S(tc + TH + αTS), and thus TH = th + pνh S th(tc + αTS), where TS is as above.

19.12. Taking (13.37) from (13.36) shows that S = H/(νf f th). Substituting into (17.38) gives G2 = pνh tc H^2/(th νf f). Substituting for S and G2 into (17.35) gives (17.40).

19.13. For a given p, q only features in TS, so we need to minimise TS. This simply depends upon the ratio of νh H tc to 1 compared with that of ανh H to νf f. Thus if tc νf f > α then q = 0 is always best (so p = 0 is the ESS), and q = 1 is best if this inequality is reversed (so p = 1 is the ESS).

19.14. From Exercise 19.11 we obtain

TS + TH = (th νf f + pνh H tc + TS(νf f + pνh Hα))/νf f.

When q = p, using the expression for TS from Exercise 19.11 we can substitute for the TS term on the RHS to obtain

TS + TH = (th νf f + 2pνh H tc + 1)/νf f
        = (P/(H νf f)) [(H/P)(1 + th νf f) + (H/P)^2 2P pνh tc].

Using equation (17.40), the bracketed term collapses to νf f th, giving

1/(TS + TH) = H/(P th)

as required.

19.15. If both individuals have full information, then if the Joiner makes the decision first, the solutions are exactly as before with the roles of Joiner and Finder reversed. If only F knows a and α = 0.5, then if J plays 0, F plays H and J gets 0. Assume J plays H. Then F plays H if V − a − C > 0. Then J gains V − a if V − a − C < 0 and (V − a − C)/2 otherwise. Thus J's reward is positive, so its optimal strategy is indeed to play H. Extra knowledge here is a disadvantage, as before.

19.16. This is an Owner-Intruder game where the remaining reward is V − a and the cost is C. Thus if V − a − C > 0 there is a single ESS with both playing Hawk; otherwise there are two ESSs, one where the Finder plays Hawk and the Joiner Dove, and one where the Joiner plays Hawk and the Finder Dove.
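The piecewise solution of Exercise 19.5 above can be packaged as a small function and checked by confirming that all occupied patches give equal payoffs. A sketch:

```python
def patch_distribution(N):
    # Ideal free distribution for B = (10, 5, 2.5), f_i(d) = N*d/10
    # (Exercise 19.5 above).
    if N <= 50:
        return (1.0, 0.0, 0.0)
    if N <= 100:
        p1 = (50 + N) / (2 * N)
        return (p1, 1 - p1, 0.0)
    p1 = (125 + N) / (3 * N)
    p2 = (N - 25) / (3 * N)
    return (p1, p2, 1 - p1 - p2)

def payoffs(N):
    # payoff on patch i is B_i - N*p_i/10
    B = (10.0, 5.0, 2.5)
    return [b - N * p / 10 for b, p in zip(B, patch_distribution(N))]
```

For N = 200 all three patches are occupied and the three payoffs coincide; for N = 40 only patch 1 is used.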
Chapter 20

20.1.

G1 = 1 + r1 − (r1 N/Kmax) exp(−u^2/s_k^2) − b_max P exp(−(u − v)^2/s_p^2),
G2 = 1 + r2 − (r2 P/(cN)) exp(−(u − v)^2/s_p^2).

Differentiating G2 and setting to 0 implies that u = v; substituting into (18.15) gives u = 0. Finally, substituting into (18.11) and (18.12) gives the result.

20.2. The second derivative from (18.17) is −2r1 N/s_k^2 + 2b_max/s_p^2, which is negative for sufficiently small sp, and positive for sufficiently large sp.
20.3. The derivative of the term in (18.24) with respect to t is

(F/(DKQ)) (F'(t)/F(t) − K'(t)/K(t) − (Q'(I)/Q(I)) dI/dt) = −f + 1/(1 + t) + a.

Setting this equal to zero, we get

t = (1 − f + a)/(f − a) = (3 − 2f)/(2f − 1),

or t = 0 if this is negative. The derivative of the term in (18.24) with respect to r is −F(D'Q + Q'D)/(KD^2 Q^2). For stability at r = 0 (the cryptic case) we need this to be negative. This is equivalent to exp(−I)(1 − dI/dr) > 0. Substitution leads to 1 + ν(1 − a)(t − 1) − a(t − 1) > 0 and thus 3 − t + ν(t − 1) > 0. Substituting the relevant t values gives the results in the exercise.

20.4. The probability of being parasitised is e^{−aPt}. All acceptors have fitness f when not parasitised and 0 when parasitised, giving (18.28). The other fitnesses are calculated similarly. e1, when there is no parasite, is a weighting between (correct) non-rejection, with probability 1 − pe, and a mistaken rejection of value be < 1. e2, when there is a parasite, leads to be (an egg has already been lost) with correct rejection, and otherwise the parasite destroys the nest. k1 comes about through the four possible combinations of reject or not for egg and chick. The others are similar.

20.5.
(1 − e1)/e2 > (1 − c1)/c2 ⇒ pe(1 − be)/((1 − qe)be) > pc(1 − bc)/((1 − qc)γbe)
which rearranges to the desired result.
20.6. If the host had full information, there would be the following stages: the nest would be parasitised or not; if parasitised, the host could reject the egg (and be successful or no) or not; and if the parasitised egg was still there, the host could reject the chick (and be successful or not). The situation is shown below.
[Diagram: the full-information tree. Host parasitised or not; if parasitised, egg rejection (successful or unsuccessful) or egg not rejected; if the parasitic egg remains, chick rejection (successful or unsuccessful) or chick not rejected.]
Since the host does not have full information, some stages have to be in the same information set, and those are connected by dotted lines. For example, the host does not know if it is parasitised or not, or whether the egg/chick rejection was successful or not. Because some stages are in the same set, the sets of choices have to be the same, and the corresponding figure is shown below.

[Diagram: the same tree with dotted lines joining stages in the same information set; after every egg-stage outcome (rejection successful, rejection unsuccessful, or egg not rejected, whether or not the host is parasitised) the host faces the same chick-stage choices: chick not rejected, chick rejection successful, or chick rejection unsuccessful.]
56 If “Leave” is entered next she leaves. Otherwise she crosses into “Superparasitise”. When in “Superparasitise” the direction changes, and she carries on into “Leave”, when she leaves. See the Figure below.
All U Parasitise
Leave
Superparasitise
All V
All W
20.9. The only term which depends upon the population mixture (occurring twice) is x = k2 V22 + k3 V3 . With only one free term, for non-generic games there cannot be a solution with all three strategies. The three possible pairs (S1 , S2 ), (S1 , S3 ) and (S2 , S3 ) have equilibria when x takes the respective values 2 x12 = V22 pp (1 − φ)/(V1 − V21 ),
x13 = V32 pp (1 − φ)/(V1 ),
x23 = (V32 − V22 )pp (1 − φ)/V21 .
In each case we check whether they can occur (if they do they are stable within the pair), and then whether the third can invade. Finally pure S1 is impossible, pure S2 (S3 ) occurs if F2 (F3 ) is the largest of the three payoffs when x = V22 (x = V3 ). 20.10. (i) λ = 2/11 so k = 2 > p1 /λ = 11/6 so the solution is that the hider hides in place 1, and the searcher searches this one with probability 1, and others with sufficient probability each to prevent the hider changing strategy being beneficial. (ii) λ = 60/539, so k = 2 < p1 /λ = 77/30 so the solution is hide in place 1, 2, 3 and 4 with probabilities, 30/77, 20/77, 15/77 and 12/77 respectively. The searching probabilities are exactly twice this (two places are searched). (iii) λ = (1 − a)/((1/a)n − 1) and so p1 /λ = an /λ = (1 − an )/(1 − a). If k is greater than this value then the hider hides in place 1, and searching is
57 as in part (i), if k is less than this then the hiding probability in place i is proportional to ai and the searching probabilities simply k times the hiding probabilities.
Chapter 21 21.1. The infective population changes according to dI/dt = I(Sβ/N − g − d). Hence, S = (g + d)N/β at equilibrium (or I = 0). Since S = N − I, the proportion is 0 or 1 − (g + d)/β. 21.2. Substituting v = δ into (19.3) gives a positive derivative, and so 0 is unstable and the epidemic occurs, for sufficiently small δ if β − g > 0. The derivative is negative otherwise, and 0 is stable. 21.3. For an undirected graph aij = aji so clearly A = AT . A2 has diagonal entries X X X aij aji = a2ij = aij . j
Thus k =
j
j
2
P
i,j aij /N = trace(A )/N . Further,
(A3 )k,l = (AA2 )k,l =
X i
aki
X j
(aij ajl ) =
X
aki aij ajl .
i,j
P Thus trace(A3 ) = i,j,k aki aij ajk which is 6 times the number of triangles (each product represents a triangle, each counted 3! times). X X X (A2 )i,j − trace(A2 ) = aik akj − aik aki i,j
is 2 times the number of triangles with at least two sides. The ratio of the two is thus φ. 21.4. When infection occurs an individual is infective for mean time 1/(d+ν). In the absence of infectives, the number of susceptibles in the steady state is b/d. R0 is this multiplied by the infective time and the rate τ . 21.5. The optimal value maximises aν/ (c + ν)(d + ν) . The derivative is a √ positive number times cd − ν 2 , so that ν = cd (it is a maximum because the
second derivative is negative). There are no limits on c and d except that they are positive, so νopt can potentially be anywhere in (0, ∞).

21.6. Using (19.22) we have

I* = b/(d + ν) − d(c + ν)/(aν) = b/(d + √(cd)) − (d + √(cd))/a.
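The trace identities of Exercise 21.3 above can be confirmed numerically; the sketch below uses an arbitrary small graph chosen for illustration (a triangle 0-1-2 plus a pendant edge 2-3, so exactly one triangle):

```python
import numpy as np

# Adjacency matrix of an arbitrary undirected graph on 4 vertices:
# a triangle 0-1-2 plus a pendant edge 2-3 (one triangle in total)
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]])
assert (A == A.T).all()          # undirected, so A is symmetric

N = A.shape[0]
A2 = A @ A
A3 = A2 @ A

# trace(A^2) equals the sum of the entries of A, so k_bar = trace(A^2)/N
assert np.trace(A2) == A.sum()
k_bar = np.trace(A2) / N

# trace(A^3) counts each triangle 3! = 6 times
assert np.trace(A3) == 6 * 1

# Off-diagonal mass of A^2 counts paths of length 2, each twice
paths2 = A2.sum() - np.trace(A2)
phi = np.trace(A3) / paths2      # ratio of the two counts
assert abs(phi - 0.6) < 1e-12
```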
21.7. An equilibrium with both strains occurs if

10 − 1.5S − S(I1/2 + 2I2/3) = 0,
I1(S/2 − 1.5 − 1) = 0,
I2(2S/3 − 1.5 − 2 + 2I1/3) = 0.

Solving the second equation gives S = 5; substituting into the third gives I1 = 1/4, and then into the first gives I2 = 9/16.

21.8. Assuming the model from 19.3.1 with only type 1 in equilibrium, π1 − a1P = 0, so the fitness of type 1 is a1P = π1. The fitness of a mutant type 2 should be a2P = π1a2/a1. Similarly, in a type 2 population the fitnesses are π2 for type 2 and π2a1/a2 for type 1. Using the payoffs we get π1 = 1, π2 = 0.83. This then gives a2/a1 = 1.99 and a1/a2 = 0.65/0.83, which is a contradiction. A possible explanation is spatial clustering, so that type 1 does better in a population of type 2 than would be expected under complete mixing.

21.9. Solving for steady states of the dynamics (21.37)–(21.39), we set the equations to 0 and solve for u, v, w. From (21.38), it follows that either v = 0 or u = (g + µ)/β = 1/R0. Substituting this into (21.37) yields v = (µ/β)((1 − p)R0 − 1). The probability that a susceptible individual gets infected is given by πp = βv/(βv + µ) = 1 − 1/(R0(1 − p)). The Nash equilibrium occurs when E[1, p*] − E[0, p*] = 0, i.e., by (21.36), when r = 1 − 1/(R0(1 − p*)), which is equivalent to p* = 1 − 1/(R0(1 − r)).

21.10. We have R0 = 2/(0.1 + 0.05) = 40/3 and p* = 1 − 1/(R0(1 − r)) = 1 − 3/(40(1 − r)). This vaccination level is above 0, so there should be some vaccination, if r < 37/40.
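The threshold in Exercise 21.10 is easy to verify directly; a minimal sketch of p* = 1 − 1/(R0(1 − r)) (the function name is ours; the numbers come from the solution above):

```python
def nash_vaccination(R0, r):
    """Nash equilibrium vaccination level p* = 1 - 1/(R0*(1 - r))."""
    return 1 - 1 / (R0 * (1 - r))

R0 = 2 / (0.1 + 0.05)                     # = 40/3
assert abs(R0 - 40 / 3) < 1e-12

# Some vaccination is worthwhile exactly when p* > 0, i.e. when r < 37/40
assert nash_vaccination(R0, 0.90) > 0     # 0.90 < 37/40 = 0.925
assert nash_vaccination(R0, 0.93) < 0     # 0.93 > 37/40
```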
Chapter 22

22.1. This becomes a fitness function G(v, u) of two variables, the mutant v and the resident u. If ∂G(v, u)/∂v at v = u is positive (negative) then the population trait value will increase (decrease). When we achieve a 0 value of the partial derivative, the second derivatives become important as to whether there is a stable solution, or a branching into distinct values.

22.2. Using the standard matrix game formulae from Chapter 6 we have that (2, 3) and (1, 2, 3) are not ESS supports since 0 − d < 0, and there are no pure ESSs as no leading diagonal element is the biggest in its column. (1, 2) is an ESS support if e > c due to direct row domination, and (1, 3) is the support of an ESS if bc > be − dc, i.e. dc/(be) + c/e > 1.

22.3. As above, there is no internal ESS or ESS with support (2, 3) as F − D = −d < 0. There is a pure 1 if E < 0, C < 0, a pure 2 if A < 0, F < 0 (so there is no pure 2 unless θ < 0), a pure 3 if B < 0 < D, a pair (1, 2) if AC + EF < AE, and a pair (1, 3) if BE − CD < BC.

22.4. In general varying θ does not affect the ESSs in a straightforward way; it depends upon the relative sizes of a, b, c, d and e. For instance if a > e, sufficiently negative θ introduces a pure 2 ESS, whereas if a < e there is no pure 2 whatever the value of θ. We can see that A + E = a + e, B + C = b + c, F − D = −d, so that the negative definiteness condition, which depends only upon these sums, is unaffected by the change in θ (and, as noted in the above solution, there is never an internal ESS).

22.5. Here we need W(S) > W(RA) and W(S) > W(RB) for pA = pB = 0. For fixed dA, dB, X and Y this gives

cA > dA + (1 − α)dB + X,
cB > dB + (1 − β)dA + Y.
The best treatment is the one that minimises W (S) subject to this, i.e. choose the maximum value of dA + dB for which the above hold. 22.6. For an equilibrium we need W (S) = W (RA ) = W (RB ) i.e. 1 − dA − dB = 1 − cA − dA − αdB + (1 − pA )X = 1 − cB − βdA + (1 − pB )Y.
This rearranges to: cA − dA − (1 − α)dB , X 1 − cB − dB − (1 − β)dA pB = 1 − . Y pA = 1 −
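As a sanity check of this rearrangement, the sketch below plugs arbitrary illustrative parameter values (not from the text) into the fitness expressions used in Exercises 22.5–22.6 and confirms that W(S) = W(RA) = W(RB):

```python
# Illustrative parameter values, chosen arbitrarily for the check
cA, cB = 0.5, 0.6
dA, dB = 0.1, 0.15
alpha, beta = 0.3, 0.4
X, Y = 0.8, 0.9

# Equilibrium treatment levels from the rearrangement above
pA = 1 - (cA - dA - (1 - alpha) * dB) / X
pB = 1 - (cB - dB - (1 - beta) * dA) / Y

# Fitnesses of the sensitive strain and the two resistant strains
W_S = 1 - dA - dB
W_RA = 1 - cA - alpha * dB + (1 - pA) * X
W_RB = 1 - cB - beta * dA + (1 - pB) * Y

assert abs(W_S - W_RA) < 1e-12 and abs(W_S - W_RB) < 1e-12
```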
22.7. The optimal strategy is the value of uc that maximises N* from equation (22.14). Differentiating this expression, we get −K + uT Kb/(r(k + buc)²). The maximum thus occurs at uc = √(uT/(rb)) − k/b if this lies in the interval (0, 1), or at 0 (1) if it is negative (greater than 1), as in (22.15).
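The stationary point can be checked numerically; a small sketch (parameter values are arbitrary illustrations, not from the text) confirms that the derivative vanishes at uc = √(uT/(rb)) − k/b:

```python
import math

def dNdu(uc, K, uT, r, k, b):
    """Derivative of N* with respect to uc, as computed above."""
    return -K + uT * K * b / (r * (k + b * uc) ** 2)

# Illustrative parameter values (chosen so the optimum is interior)
K, uT, r, k, b = 2.0, 2.0, 2.0, 0.2, 2.0

# Candidate optimum from the solution: uc* = sqrt(uT/(r*b)) - k/b
uc_star = math.sqrt(uT / (r * b)) - k / b
assert 0 < uc_star < 1                    # interior optimum for these values
assert abs(dNdu(uc_star, K, uT, r, k, b)) < 1e-9
```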
22.8. To be in equilibrium, we need the derivative in (22.13) to be 0. This is clearly true for N = 0. Setting the large bracket in (22.13) equal to 0 simply rearranges to the expression in (22.14) for N*, so that this is also an equilibrium (if it is positive). It is easy to see that if 0 > N*, then the derivative is negative for all positive N, and the system converges to 0. If 0 < N* then the derivative is negative (positive) for N > N* (N < N*), so that the system converges to N* from any positive N.

22.9.

"""
Generates the trajectories of solutions in the multiple myeloma model
"""

## Import basic packages
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint

## Define payoff matrix
A = [[ 0, 1,  1],   # Payoffs to OC
     [ 1, 0, -2],   # Payoffs to OB
     [.5, 0,  0]]   # Payoffs to MM

# Give initial proportions of OC, OB, MM
y0 = [0.9, 0.09, 0.01]

## Define replicator dynamics
def dynamics(y, t):
    """ Defines the dynamics """
    # dy/dt[i] = y[i] * ((A y^T)[i] - y A y^T)
    dydt = y * (np.dot(A, y) - np.dot(y, A).dot(y))
    return dydt

# Specify the time interval over which we will solve
t = np.linspace(0, 30, 101)

# Numerically solve the dynamics
Sols = odeint(dynamics, y0, t)

# Unpack the solutions (OC = Sols[:,0], etc.)
OC, OB, MM = Sols.T

## Plot results
plt.plot(t, OC, label='OC')
plt.plot(t, OB, label='OB')
plt.plot(t, MM, label='MM')
plt.legend(loc='best')
plt.xlabel('Time')
plt.ylabel('Prevalence')
plt.show()
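Replicator dynamics keep trajectories on the simplex, which gives a quick sanity check of the script for Exercise 22.9. The sketch below integrates the same dynamics with a hand-rolled RK4 step (numpy only, no scipy; step size and horizon are our choices) and verifies that the proportions stay non-negative and sum to 1:

```python
import numpy as np

A = np.array([[0, 1, 1],
              [1, 0, -2],
              [0.5, 0, 0]])

def replicator(y):
    # dy/dt[i] = y[i] * ((A y)[i] - y^T A y)
    Ay = A @ y
    return y * (Ay - y @ Ay)

def rk4_step(y, h):
    # One classical Runge-Kutta step of size h
    k1 = replicator(y)
    k2 = replicator(y + h / 2 * k1)
    k3 = replicator(y + h / 2 * k2)
    k4 = replicator(y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

y = np.array([0.9, 0.09, 0.01])
for _ in range(3000):          # integrate to t = 30 with h = 0.01
    y = rk4_step(y, 0.01)

assert np.all(y >= -1e-9)               # proportions stay non-negative
assert abs(y.sum() - 1) < 1e-6          # and remain on the simplex
```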