
Numerical Algorithms for Reflected Anticipated Backward Stochastic Differential Equations with Two Obstacles and Default Risk

Jingnan Wang 1,* and Ralf Korn 2

1 Department of Mathematics, University of Kaiserslautern, 67663 Kaiserslautern, Germany
2 Department of Mathematics, University of Kaiserslautern, Fraunhofer ITWM, 67663 Kaiserslautern, Germany; korn@mathematik.uni-kl.de
* Correspondence: wang@mathematik.uni-kl.de; Tel.: +49-0631-205-2595

Received: 27 April 2020; Accepted: 12 June 2020; Published: 1 July 2020

Abstract: We study numerical algorithms for reflected anticipated backward stochastic differential equations (RABSDEs) driven by a Brownian motion and a mutually independent martingale in a defaultable setting. The generator of an RABSDE includes the present and future values of the solution. We introduce two main algorithms, a discrete penalization scheme and a discrete reflected scheme, based on a random walk approximation of the Brownian motion and a discrete approximation of the default martingale, and we study both schemes in their implicit and explicit versions. We give convergence results for the algorithms and provide a numerical example and an application to American game options in order to illustrate their performance.

Keywords: numerical algorithm; reflected anticipated backward stochastic differential equations; discrete penalization scheme; discrete reflected scheme

1. Introduction

The theory of backward stochastic differential equations (BSDEs) plays a significant role in financial modeling. We work on a probability space (Ω, F, P) carrying a d-dimensional standard Brownian motion B := (B_t)_{t≥0}; F := (F_t)_{t≥0} denotes the natural filtration of B, F_t = σ(B_s; 0 ≤ s ≤ t), where F_0 contains all P-null sets of F. We first consider the following form of BSDE with generator f and terminal value ξ:

$$Y_t = \xi + \int_t^T f(s, Y_s, Z_s)\,ds - \int_t^T Z_s\,dB_s, \qquad t \in [0, T]. \tag{1}$$

The problem is to find a pair of F_t-adapted processes (Y, Z) ∈ S²_F(0, T; R) × L²_F(0, T; R^d) satisfying BSDE (1). Linear BSDEs were first introduced by Bismut (1973) in his study of the maximum principle in stochastic optimal control. Pardoux and Peng (1990) studied general nonlinear BSDEs under smooth square integrability assumptions on the coefficient and the terminal value and a Lipschitz condition on the generator f. Duffie and Epstein (1992) independently used a class of BSDEs to describe stochastic differential utility in uncertain economic environments. Tang and Li (1994) considered BSDEs driven by a Brownian motion and an independent Poisson jump. Barles et al. (1997) completed the theoretical proofs for BSDEs with Poisson jumps. Cordoni and Di Persio (2014) studied hedging, option pricing and insurance problems with a BSDE approach.


Cvitanic and Karatzas (1996) first studied reflected BSDEs (RBSDEs) with a continuous lower obstacle and a continuous upper obstacle under the smooth square integrability assumption and the Lipschitz condition. A quadruple (Y, Z, K^+, K^−) := (Y_t, Z_t, K_t^+, K_t^−)_{0≤t≤T} is a solution of the RBSDE with generator f, terminal value ξ and obstacles L and V if

$$\begin{cases} Y_t = \xi + \int_t^T f(s, Y_s, Z_s)\,ds + (K_T^+ - K_t^+) - (K_T^- - K_t^-) - \int_t^T Z_s\,dB_s, & t \in [0, T];\\[2pt] V_t \ge Y_t \ge L_t, & t \in [0, T];\\[2pt] \int_0^T (Y_t - L_t)\,dK_t^+ = \int_0^T (V_t - Y_t)\,dK_t^- = 0, \end{cases} \tag{2}$$

where K^+ and K^− are continuous increasing processes: K^+ keeps Y above L, while K^− keeps Y below V, both in a minimal way. When V ≡ +∞ and K^− ≡ 0 (resp. L ≡ −∞ and K^+ ≡ 0), we obtain an RBSDE with one lower (resp. upper) obstacle. The existence of a solution of the RBSDE with two obstacles can be obtained under one of the following assumptions: (1) one of the obstacles L and V is regular (see e.g., Cvitanic and Karatzas 1996; Hamadène et al. 1997); (2) Mokobodski's condition holds (see e.g., Hamadène and Lepeltier 2000; Lepeltier and Xu 2007), i.e., there exists a difference of non-negative super-martingales lying between the obstacles L and V. Both assumptions have disadvantages: (1) is somewhat restrictive, while (2) is difficult to verify in practice. In this paper, we use Assumption 5 for the obstacles. Peng and Yang (2009) studied a new type of BSDE, the anticipated BSDE (ABSDE), whose generator includes both present and future values of the solution:

$$\begin{cases} Y_t = \xi_T + \int_t^T f\big(s, Y_s, \mathbb{E}^{\mathcal{G}_s}[Y_{s+\delta}], Z_s, \mathbb{E}^{\mathcal{G}_s}[Z_{s+\delta}]\big)\,ds - \int_t^T Z_s\,dB_s, & t \in [0, T];\\[2pt] Y_t = \xi_t, & t \in (T, T+\delta];\\[2pt] Z_t = \alpha_t, & t \in (T, T+\delta], \end{cases} \tag{3}$$

under the smooth square integrability assumption on the anticipated processes ξ and α and a Lipschitz condition on the generator f. Peng and Yang (2009) gave the existence and uniqueness theorem and the comparison theorem for the anticipated BSDE (3). Øksendal et al. (2011) extended this topic to ABSDEs driven by a Brownian motion and an independent Poisson random measure. Jeanblanc et al. (2017) studied ABSDEs driven by a Brownian motion and a single jump process.
Default risk is the risk that an investor suffers a loss because the initial investment cannot be recovered; it arises from a borrower failing to make the required payments. The loss may be complete or partial (see Kusuoka 1999). Peng and Xu (2009) introduced BSDEs with default risk and gave the corresponding existence and uniqueness theorem and comparison theorem. Jiao and Pham (2011) studied optimal investment with counterparty risk. Jiao et al. (2013) continued this line of research on optimal investment under multiple default risks through a BSDE approach. Cordoni and Di Persio (2016) studied BSDEs with delayed generator in a defaultable setting. In this paper, we focus on reflected anticipated BSDEs with two obstacles and default risk.
Concerning numerical methods for BSDEs, Peng and Xu (2011) studied numerical algorithms for BSDEs driven by Brownian motion. Xu (2011) introduced a discrete penalization scheme and a discrete reflected scheme for RBSDEs with two obstacles. Later, Dumitrescu and Labart (2016) extended these schemes to RBSDEs with two obstacles driven by a Brownian motion and an independent compensated Poisson process. Lin and Yang (2014) studied discrete BSDEs with random terminal horizon.
The paper is organized as follows. We first introduce the basics of the defaultable model in Section 1.1 and the reflected anticipated BSDE (4) with two obstacles and default risk in Section 1.3. Section 2 describes the discrete-time framework. We study the implicit and explicit versions of two discrete schemes, the discrete penalization scheme in Section 3 and the discrete reflected scheme in Section 4. Section 5 presents the convergence results for the numerical algorithms introduced in the previous sections. In Section 6, we illustrate the performance of the algorithms



by a simulation example and an application to American game options in the defaultable setting. The proofs of the convergence results of Section 5 can be found in Appendix A.

1.1. Basics of the Defaultable Model

Let τ = {τ_i; i = 1, ..., k} be k non-negative random variables on a probability space (Ω, G, P) satisfying

$$P(\tau_i > 0) = 1; \qquad P(\tau_i > t) > 0, \ \forall t > 0; \qquad P(\tau_i = \tau_j) = 0, \ i \neq j.$$

For each i = 1, ..., k, we define a right-continuous default process H^i := (H_t^i)_{t≥0}, where H_t^i := 1_{\{\tau_i \le t\}}, and denote by (ℋ_t^i)_{t≥0} the associated filtration ℋ_t^i := σ(H_s^i; 0 ≤ s ≤ t). We assume that F_0 is trivial (it follows that G_0 is trivial as well). For a fixed terminal time T ≥ 0 there are two kinds of information: one comes from the asset prices, denoted by F := (F_t)_{0≤t≤T}; the other comes from the default times {τ_i; i = 1, ..., k}, carried by the filtrations {ℋ^i; i = 1, ..., k}. The enlarged filtration is G := (G_t)_{0≤t≤T}, where G_t = F_t ∨ ℋ_t^1 ∨ ... ∨ ℋ_t^k. In general, a G-stopping time is not necessarily an F-stopping time. Let G_t = P(τ > t | F_t), i.e., G_t^i = P(τ_i > t | F_t) for each i = 1, ..., k. In the following, each G^i is assumed to be continuous; then the random default time τ_i is a totally inaccessible G-stopping time. The processes H^i (i = 1, ..., k) are obviously G-adapted, but they are not necessarily F-adapted. We need the following assumptions (see Kusuoka 1999; Bielecki et al. 2007):

Assumption 1. There exist F-adapted processes γ^i ≥ 0 (i = 1, ..., k) such that

$$M_t^i = H_t^i - \int_0^t 1_{\{\tau_i > s\}} \gamma_s^i\, ds, \qquad t \in [0, T],$$

are G-martingales under P. The process γ^i is the G-intensity of the default time τ_i:

$$\gamma_t^i = \lim_{\Delta \to 0} \frac{P(t < \tau_i \le t + \Delta \mid \mathcal{F}_t)}{\Delta\, P(\tau_i > t \mid \mathcal{F}_t)}, \qquad t \in [0, T].$$

Assumption 2. Every F-local martingale is a G-local martingale.

1.2. Basic Notions

We use the following notation:
L²(G_T; R) := {φ : φ is a G_T-measurable real random variable and E|φ|² < ∞};
L²_G(0, t; R^d) := {φ : Ω × [0, t] → R^d : φ is G_t-progressively measurable and E∫_0^t |φ_s|² ds < ∞};
S²_G(0, t; R) := {φ : Ω × [0, t] → R : φ is a G_t-progressively measurable rcll process and E[sup_{0≤s≤t} |φ_s|²] < ∞};
L^{2,τ}_G(0, t; R^k) := {φ : Ω × [0, t] → R^k : φ is G_t-progressively measurable and E∫_0^t |φ_s|² 1_{\{τ>s\}} γ_s ds = E∫_0^t Σ_{i=1}^k |φ_{i,s}|² 1_{\{τ_i>s\}} γ_s^i ds < ∞};
A²_G(0, T; R) := {K : Ω × [0, T] → R : K is a G_t-adapted rcll increasing process with K_0 = 0 and K_T ∈ L²(G_T; R)}.

1.3. Reflected Anticipated BSDEs with Two Obstacles and Default Risk

Consider the RABSDE below with two obstacles, default risk and data (f, ξ, δ, L, V). A quintuple (Y, Z, U, K^+, K^−) := (Y_t, Z_t, U_t, K_t^+, K_t^−)_{0≤t≤T+δ} is a solution of the RABSDE with generator f, terminal value ξ_T, anticipated process ξ, anticipation time δ (a constant δ > 0) and obstacles L and V if



        

$$\begin{cases} Y_t = \xi_T + \int_t^T f\big(s, Y_s, \mathbb{E}^{\mathcal{G}_s}[Y_{s+\delta}], Z_s, U_s\big)\,ds + (K_T^+ - K_t^+) - (K_T^- - K_t^-) - \int_t^T Z_s\,dB_s - \int_t^T U_s\,dM_s, & t \in [0, T];\\[2pt] Y_t = \xi_t, & t \in (T, T+\delta];\\[2pt] V_t \ge Y_t \ge L_t, & t \in [0, T];\\[2pt] \int_0^T (Y_t - L_t)\,dK_t^+ = \int_0^T (V_t - Y_t)\,dK_t^- = 0, \end{cases} \tag{4}$$

where Y ∈ S²_G(0, T+δ; R), Z ∈ L²_G(0, T; R^d), U ∈ L^{2,τ}_G(0, T; R^k), K^± ∈ A²_G(0, T; R). We further state the following assumptions for RABSDE (4):

Assumption 3. The anticipated processes satisfy ξ ∈ L²_G(T, T+δ; R) and α ∈ L²_G(T, T+δ; R^d); here ξ is a given process and ξ_T is the terminal value.

Assumption 4. The generator f(ω, t, y, ȳ_r, z, u) : Ω × [0, T+δ] × R × S²_G(t, T+δ; R) × R^d × R^k → R satisfies:

(a) f(·, 0, 0, 0, 0) ∈ L²_G(0, T+δ; R);
(b) Lipschitz condition: for any t ∈ [0, T], r ∈ [t, T+δ], y, y′ ∈ R, z, z′ ∈ R^d, u, u′ ∈ R^k, ȳ, ȳ′ ∈ S²_G(t, T+δ; R), there exists a constant L ≥ 0 such that
$$|f(t, y, \bar{y}_r, z, u) - f(t, y', \bar{y}'_r, z', u')| \le L\Big(|y - y'| + \mathbb{E}^{\mathcal{G}_t}|\bar{y}_r - \bar{y}'_r| + |z - z'| + |u - u'|\, 1_{\{\tau > t\}} \sqrt{\gamma_t}\Big);$$
(c) for any t ∈ [0, T], r ∈ [t, T+δ], y ∈ R, z ∈ R^d, u, ũ ∈ R^k, ȳ ∈ S²_G(t, T+δ; R), the following holds:
$$\frac{f(t, y, \bar{y}_r, z, \tilde{u}^{\,i-1}) - f(t, y, \bar{y}_r, z, \tilde{u}^{\,i})}{(u^i - \tilde{u}^i)\, 1_{\{\tau_i > t\}} \gamma_t^i} > -1,$$
where ũ^i = (ũ^1, ũ^2, ..., ũ^i, u^{i+1}, ..., u^k) and u^i is the i-th component of u.

Assumption 5. The obstacle processes satisfy L, V ∈ S²_G(0, T+δ; R) and:

(a) V_T ≥ ξ_T ≥ L_T, and L and V are separated, i.e., V_t > L_t for any t ∈ [0, T], P-a.s.;
(b) L and V are rcll, their jump times are totally inaccessible, and
$$\mathbb{E}\Big[\sup_{0 \le t \le T} (L_t^+)^2\Big] < \infty, \qquad \mathbb{E}\Big[\sup_{0 \le t \le T} (V_t^-)^2\Big] < \infty;$$
(c) there exists a process of the form
$$X_t = X_0 - \int_0^t \sigma_s^{(1)}\,dB_s - \int_0^t \sigma_s^{(2)}\,dM_s + A_t^+ - A_t^-,$$
with X_T = ξ_T, σ^{(1)} ∈ L²_G(0, T; R^d), σ^{(2)} ∈ L^{2,τ}_G(0, T; R^k), where A^+ and A^− are G-adapted increasing processes with E[|A_T^+|² + |A_T^-|²] < ∞, such that
$$V_t \ge X_t \ge L_t, \qquad t \in [0, T], \quad P\text{-a.s.}$$

2. Discrete Time Framework

In order to discretize [0, T], for n ∈ N we introduce the step size ∆^n := T/n and an equidistant time grid (t_i)_{i=0,1,...,n,...,n_δ} with t_i := i∆^n and n_δ := n + δ/∆^n.



2.1. Random Walk Approximation of the Brownian Motion

We use a random walk to approximate the 1-dimensional standard Brownian motion:
$$B_0^n = 0; \qquad B_t^n := \sqrt{\Delta^n} \sum_{j=1}^{[t/\Delta^n]} e_j^n, \quad t \in (0, T]; \qquad \Delta B_i^n := B_i^n - B_{i-1}^n = \sqrt{\Delta^n}\, e_i^n, \quad i \in [1, n_\delta],$$
where (e_i^n)_{i=1,...,n_δ} is a {−1, 1}-valued i.i.d. Bernoulli sequence with P(e_i^n = 1) = P(e_i^n = −1) = 1/2. Denote F_i^n = σ{e_1^n, ..., e_i^n} for any i ∈ [1, n_δ]. By Donsker's invariance principle and the Skorokhod representation theorem, there exists a probability space on which sup_{0≤t≤T+δ} |B_t^n − B_t| → 0 in L²(G_{T+δ}) as n → ∞.

2.2. Approximation of the Defaultable Model

We consider a defaultable model with a single uniformly distributed random default time τ ∈ (0, T]. We define the discrete default process h_i^n = h_{t_i}^n = 1_{\{\tau \le t_i\}} (i ∈ [1, n]). In particular, h_i^n = 1 for i ∈ [n+1, n_δ], since the default has already happened by then. The conditional default probabilities given G_{i-1}^n are
$$\begin{aligned} P\big(h_i^n = 1 \mid h_{i-1}^n = 1\big) &= P(\tau \le t_i \mid \tau \le t_{i-1}) = 1,\\ P\big(h_i^n = 1 \mid h_{i-1}^n = 0\big) &= P(\tau \le t_i \mid \tau > t_{i-1}) = \frac{\Delta^n}{T - t_{i-1}},\\ P\big(h_i^n = 0 \mid h_{i-1}^n = 0\big) &= P(\tau > t_i \mid \tau > t_{i-1}) = \frac{T - t_i}{T - t_{i-1}}, \end{aligned} \qquad i \in [1, n].$$
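To make the discretization concrete, the following minimal Python sketch simulates one path of the scaled random walk and of the discrete default indicator described above. The paper's own experiments use Matlab; the function name `simulate_path` and all numerical choices here are illustrative assumptions only.

```python
import numpy as np

def simulate_path(n, T=1.0, delta=0.3, seed=0):
    """Sketch: one path of the scaled random walk B^n (Section 2.1) and of the
    discrete default indicator h^n for a uniformly distributed tau in (0, T]."""
    rng = np.random.default_rng(seed)
    dt = T / n                              # step size Delta^n
    n_delta = n + int(round(delta / dt))    # grid reaches T + delta
    t = dt * np.arange(n_delta + 1)

    e = rng.choice([-1.0, 1.0], size=n_delta)                # Bernoulli increments e_i^n
    B = np.concatenate(([0.0], np.cumsum(np.sqrt(dt) * e)))  # B^n at the grid points

    tau = rng.uniform(0.0, T)               # uniformly distributed default time
    h = (t >= tau).astype(float)            # h_i^n = 1_{tau <= t_i}; equals 1 for all i >= n+1
    return t, e, B, tau, h
```

On this tree, the discrete information at step i is generated by (e_1^n, ..., e_i^n, h_i^n), which matches the filtration G^n introduced below.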

We obtain the following approximation of the discrete default martingale M^n directly from the definition of the martingale M (Assumption 1):
$$M_0^n = 0; \qquad M_t^n := h_{[t/\Delta^n]}^n - \Delta^n \sum_{j=1}^{[t/\Delta^n]} (1 - h_j^n)\,\gamma_j^n, \quad t \in (0, T]; \qquad \Delta M_i^n := h_i^n - h_{i-1}^n - \Delta^n (1 - h_i^n)\,\gamma_i^n, \quad i \in [1, n], \tag{5}$$
where the discrete intensity process γ_i^n = γ_{t_i}^n ≥ 0 is F_i^n-adapted. Denote G^n := {G_i^n; i ∈ [0, n_δ]}, with G_0^n = {Ω, ∅}, G_i^n = σ{e_1^n, ..., e_i^n, h_i^n} for i ∈ [1, n], and G_i^n = σ{e_1^n, ..., e_i^n, h_n^n} for i ∈ [n+1, n_δ], where h_i^n is independent of e_1^n, ..., e_i^n. From the martingale property of M^n we get
$$\mathbb{E}^{\mathcal{G}^n_{i-1}}[\Delta M_i^n] = \mathbb{E}^{\mathcal{G}^n_{i-1}}\big[h_i^n - h_{i-1}^n - \Delta^n (1 - h_i^n)\,\gamma_i^n\big] = 0, \qquad i \in [1, n],$$
and therefore the discrete intensity process has the following form (by projection on F_{i-1}^n):
$$\gamma_i^n = \frac{P(t_{i-1} < \tau \le t_i \mid \mathcal{F}_{i-1}^n)}{\Delta^n\, P(\tau > t_i \mid \mathcal{F}_{i-1}^n)} = \frac{1}{T - t_i}, \qquad i \in \Big[1, \Big[\frac{\tau}{\Delta^n}\Big]\Big].$$
Note that γ_i^n = 0 when i = 0 and for i ∈ [[τ/∆^n] + 1, n]. If we set γ̂_t^n = γ^n_{[t/∆^n]} (t ∈ [0, T]), then γ̂_t^n converges to γ_t as n → ∞.
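Continuing the Python sketch (illustration only; `simulate_path` is the hypothetical helper from the previous listing), the compensated increments ∆M_i^n of (5) with the uniform-default intensity γ_i^n = 1/(T − t_i) can be generated as follows.

```python
import numpy as np

def default_martingale_increments(t, h, T=1.0):
    """Sketch of Delta M^n_i = h_i - h_{i-1} - Delta^n (1 - h_i) gamma_i from (5),
    with gamma_i = 1/(T - t_i) before default; after default the factor (1 - h_i)
    vanishes, so the value of gamma_i is irrelevant there."""
    dt = t[1] - t[0]
    n = int(round(T / dt))
    dM = np.zeros(n)
    for i in range(1, n + 1):
        gamma_i = 0.0 if h[i] == 1.0 else 1.0 / (T - t[i])
        dM[i - 1] = h[i] - h[i - 1] - dt * (1.0 - h[i]) * gamma_i
    return dM

# usage: t, e, B, tau, h = simulate_path(n=200); dM = default_martingale_increments(t, h)
# M^n_t is then the cumulative sum of dM up to index [t/Delta^n]; conditionally on
# {h_{i-1}=0} the mean of dM[i-1] is dt/(T-t_{i-1}) - (T-t_i)/(T-t_{i-1}) * dt/(T-t_i) = 0.
```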

2.3. Computing the Conditional Expectations

For i ∈ [1, n−1], we compute the conditional expectation of a function f : R^{i+2} → R as
$$\begin{aligned} \mathbb{E}^{\mathcal{G}_i^n}\big[f(e_1^n, ..., e_{i+1}^n, h_{i+1}^n)\big] ={}& \tfrac{1}{2}\big[f(e_1^n, ..., e_i^n, 1, 1) + f(e_1^n, ..., e_i^n, -1, 1)\big]\, 1_{\{h_i^n = 1\}}\\ &+ \frac{\Delta^n}{2(T - t_i)}\big[f(e_1^n, ..., e_i^n, 1, 1) + f(e_1^n, ..., e_i^n, -1, 1)\big]\, 1_{\{h_i^n = 0\}}\\ &+ \frac{T - t_{i+1}}{2(T - t_i)}\big[f(e_1^n, ..., e_i^n, 1, 0) + f(e_1^n, ..., e_i^n, -1, 0)\big]\, 1_{\{h_i^n = 0\}}, \end{aligned}$$
where the last two arguments of f stand for the possible values of e_{i+1}^n and h_{i+1}^n. For i ∈ [n, n_δ], the default argument is frozen at h_n^n and
$$\mathbb{E}^{\mathcal{G}_i^n}\big[f(e_1^n, ..., e_{i+1}^n, h_n^n)\big] = \tfrac{1}{2} f(e_1^n, ..., e_i^n, 1, h_n^n) + \tfrac{1}{2} f(e_1^n, ..., e_i^n, -1, h_n^n).$$
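The two formulas above reduce every one-step conditional expectation to at most four weighted evaluations. A hedged Python sketch (the function name and the dictionary interface are illustrative assumptions, not the paper's implementation) is:

```python
def cond_expectation(i, n, t, h_i, next_vals, T=1.0):
    """E^{G^n_i} of a quantity known at step i+1, following Section 2.3.
    next_vals[(e, h)] holds its four possible values for e in {+1, -1}, h in {0, 1}."""
    if i >= n or h_i == 1.0:
        # past T, or already in default: only the Bernoulli branch of e_{i+1} remains
        return 0.5 * (next_vals[(1, 1)] + next_vals[(-1, 1)])
    dt = t[1] - t[0]
    p_def = dt / (T - t[i])                 # P(h_{i+1} = 1 | h_i = 0)
    p_surv = (T - t[i + 1]) / (T - t[i])    # P(h_{i+1} = 0 | h_i = 0)
    return 0.5 * (p_def * (next_vals[(1, 1)] + next_vals[(-1, 1)])
                  + p_surv * (next_vals[(1, 0)] + next_vals[(-1, 0)]))
```

These are exactly the weights that reappear in the formulas for ȳ_i, z_i and u_i in the discrete schemes of Sections 3 and 4.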

2.4. Approximations of the Anticipated Processes and the Generator

For the approximation (ξ_i^n)_{i∈[n,n_δ]} of the anticipated process ξ and the terminal value we make the following assumption:

Assumption 6. (ξ_i^n)_{i∈[n,n_δ]} is G_i^n-measurable and there is a real analytic function Ψ : {1, −1}^n → R such that ξ_i^n = Ψ(e_1^n, ..., e_i^n, h_i^n) for i ∈ [n, n_δ]; in particular, the terminal value ξ_n^n = Ψ(e_1^n, ..., e_n^n, h_n^n) is G_n^n-measurable.

For the approximation (f^n(t_i, y, ȳ, z, u))_{i∈[0,n]} of the generator f:

Assumption 7. For any i ∈ [0, n], f^n(t_i, y, ȳ, z, u) is G_i^n-adapted and satisfies:

(a) there exists a constant C > 0 such that, for all n > 1 + 2L + 4L²,
$$\mathbb{E}\Big[\Delta^n \sum_{i=0}^{n-1} |f^n(\cdot, 0, 0, 0, 0)|^2\Big] < C;$$
(b) for any i ∈ [0, n−1], y, y′ ∈ R, z, z′ ∈ R, u, u′ ∈ R, ȳ, ȳ′ ∈ S²_G(t, T+δ; R), there exists a constant L ≥ 0 such that
$$|f^n(t_i, y, \bar{y}_i, z, u) - f^n(t_i, y', \bar{y}'_i, z', u')| \le L\Big(|y - y'| + \mathbb{E}^{\mathcal{G}_{t_i}}|\bar{y}_i - \bar{y}'_i| + |z - z'| + |u - u'|\, 1_{\{\tau > t_i\}} \sqrt{\gamma_i}\Big),$$
where ȳ_i = \mathbb{E}^{\mathcal{G}_i^n}[y_{\bar{i}}] and \bar{i} = i + δ/∆^n.

As n → ∞, f^n([t/∆^n], y, ȳ, z, u) converges to f(t, y, ȳ, z, u) in S²_G(0, T+δ; R).
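As a small illustration of how the anticipated term enters the discrete generator, the following Python sketch shows the shifted index ī = i + δ/∆^n and one admissible discrete generator. The example generator is the one used later in Section 6.1; both function names are assumptions for illustration.

```python
def anticipated_index(i, dt, delta):
    """Index i-bar = i + delta/Delta^n at which the anticipated value y_bar_i is read off."""
    return i + int(round(delta / dt))

def f_n_example(t_i, y, y_bar, z, u):
    """One admissible discrete generator f^n (the Section 6.1 example
    f = y/2 + y_bar/2 + z + u); any Lipschitz generator satisfying Assumption 7 works."""
    return 0.5 * y + 0.5 * y_bar + z + u
```

In the schemes below, y_bar is computed as ȳ_i = E^{G^n_i}[y_ī] with the conditional-expectation weights of Section 2.3.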

2.5. Approximation of the Obstacles

The processes (L_i^n)_{i∈[0,n]} and (V_i^n)_{i∈[0,n]} are the discrete versions of L and V; by Assumption 5 we can use the approximations
$$L_i^n = L_0 + \Delta^n \sum_{j=0}^{i-1} l_j^{(1)} + \sum_{j=0}^{i-1} l_j^{(2)} \Delta B_{j+1}^n + \sum_{j=0}^{i-1} l_j^{(3)} \Delta M_{j+1}^n; \qquad V_i^n = V_0 + \Delta^n \sum_{j=0}^{i-1} v_j^{(1)} + \sum_{j=0}^{i-1} v_j^{(2)} \Delta B_{j+1}^n + \sum_{j=0}^{i-1} v_j^{(3)} \Delta M_{j+1}^n,$$
where l_j^{(k)} = l_{t_j}^{(k)} and v_j^{(k)} = v_{t_j}^{(k)} (k = 1, 2, 3). By the Burkholder–Davis–Gundy inequality, L_i^n and V_i^n satisfy
$$\sup_n \mathbb{E}\Big[\sup_i \big((L_i^n)^+\big)^2 + \sup_i \big((V_i^n)^-\big)^2\Big] < \infty.$$
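As a concrete illustration (a sketch only, which uses the example obstacles of Section 6.1 rather than general coefficient processes l^{(k)}, v^{(k)}, and reuses the hypothetical helpers `simulate_path` and `default_martingale_increments` introduced above), the discrete obstacles can be tabulated along a simulated path:

```python
import numpy as np

def discrete_obstacles(t, B, dM, T=1.0):
    """Discrete obstacles of the Section 6.1 example:
    L^n_i = |B^n_i| + M^n_i + (T - t_i),  V^n_i = |B^n_i| + M^n_i + 2(T - t_i),  i <= n."""
    n = len(dM)
    M = np.concatenate(([0.0], np.cumsum(dM)))   # M^n at t_0, ..., t_n
    L = np.abs(B[:n + 1]) + M + (T - t[:n + 1])
    V = np.abs(B[:n + 1]) + M + 2.0 * (T - t[:n + 1])
    return L, V
```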

We introduce the discrete version of Assumption 5 (c):

Assumption 8. There exists a process X_i^n of the form
$$X_i^n = X_0^n - \sum_{j=0}^{i-1} \sigma_j^{(1)} \Delta B_{j+1}^n - \sum_{j=0}^{i-1} \sigma_j^{(2)} \Delta M_{j+1}^n + A_i^{+n} - A_i^{-n},$$
where A^{+n} and A^{-n} are G_i^n-adapted increasing processes with E[|A_n^{+n}|² + |A_n^{-n}|²] < ∞, such that
$$L_i^n \le X_i^n \le V_i^n, \qquad i \in [0, n].$$

t ∈ [ T, T + δ],

=ξ t ,

where +p Kt

=p

Z T t

p (Ys

− Ls ) ds,

−p Kt

=p

Z T t

t ∈ [0, T ],

(6)

p

(Ys − Vs )+ ds.

By the existence and uniqueness theorem for ABSDEs with default risk (Theorem 4.3.3 in Wang 2020), there exists the unique solution for this penalized ABSDE (6). We will give the convergence of penalized ABSDE (6) to RABSDE (2) in Theorem 1 below. 3.1. Implicit Discrete Penalization Scheme We first introduce the implicit discrete penalization scheme. In this scheme, p represents the penalization parameter. In practice, we can choose p which is independent of n and much larger than n, this will be illustrated in the simulation Section 6.  p,n p,n p,n p,n p,n p,n + p,n − p,n  yi = yi+1 + f n (ti , yi , ȳi , zi , ui )∆n + k i − ki    p,n p,n   − zi ∆Bin+1 − ui ∆Min+1 , i ∈ [0, n − 1];   + p,n p,n ki = p∆n (yi − Lin )− , i ∈ [0, n − 1]; (7)   − p,n p,n  n n +  ki = p∆ (yi − Vi ) , i ∈ [0, n − 1];    p,n  n δ yi = ξ i , i ∈ [n, n ], p,n

where ȳi

n

p,n

= EGi [yī ], ī = i +

δ ∆n

.


Risks 2020, 8, 72

8 of 30

For the theoretical convergence results in Section 5, we first prove the convergence (Theorem 2) of implicit discrete penalization scheme (7) to the penalized ABSDE (6), then combining with Theorem A2, we can get the convergence of the explicit discrete penalization scheme. By Theorem A3 and Theorem 1, we can prove the convergence of the implicit discrete reflected scheme (13). p,n From Section 2.3, taking conditional expectation in Gin , we can calculate ȳi as follows: 

1 p,n

p,n

  y + y

  ī 2 ī eīn =1,hin =1  eīn =−1,hin =1   

 δ  p,n

p,n

  + y + y

 ī  2( T − ti ) ī eīn =1,hin =0,hīn =1 eīn =−1,hin =0,hīn =1 n p,n EGi [yī ] =

T − ti − δ p,n

p,n

  + y + y ,

  ī  2( T − ti ) ī eīn =1,hin =0,hīn =0 eīn =−1,hin =0,hīn =0    

  1 n

n

  , + ξ ξ

 2 ī n ī en =−1 eī =1 ī p,n

(8) ī ∈ [i, n − 1]; ī ∈ [n, nδ ].

p,n

(i ∈ [0, n − 1]) are given by h i h i √ Gin y p,n ∆Bn Gin y p,n ∆n en E E i +1 i +1 i +1 i +1 p,n zi = G n = n √ E i [(∆Bin+1 )2 ] EGi [( ∆n ein+1 )2 ]

p,n

p,n

− y i +1

y i +1

h i n 1 ei+1 =−1 ei + 1 = 1 p,n √ = √ EGi yi+1 ein+1 = ; n n ∆ 2 ∆ h i h (9) i Gin y p,n ∆M n Gin y p,n hn − hn − ∆n (1 − hn ) γ E E i + 1 i +1 i i +1 i +1 i +1 i +1 p,n ui = G n = n E i [(∆Min+1 )2 ] EGi [(hin+1 − hin − ∆n (1 − hin+1 )γi+1 )2 ]

p,n p,n

y i +1 ∆ n

− y i + 1 ( T − t i + 1 ) ∆ n γi + 1

hi =0,hi+1 =1 hi =0,hi+1 =0 . = ∆n + ( T − ti+1 )(∆n γi+1 )2 p,n Note that ui only exists on [0, ∆τn − 1] (i.e., before the default event happens). By taking the conditional expectation of (7) in Gin , it follows: Similarly, zi

                        

and ui

n = (Φ p,n )−1 EGi [yin+1 ] ,

p,n

yi

i ∈ [0, n − 1];

+ p,n p,n ki = p∆n (yi − Lin )− , i ∈ [0, n − 1]; − p,n p,n n n + ki = p∆ (yi − Vi ) , i ∈ [0, n − 1]; p,n yi = ξ in , i ∈ [n, nδ ]; h i n p,n p,n zi = √1 n EGi yi+1 ein+1 , i ∈ [0, n − 1]; ∆ Gin p,n n n n n E [yi+1 ( hi+1 −hi −∆ (1−hi+1 )γi+1 )] p,n , i∈ ui = Gn E i [(hn −hn −∆n (1−hn )γ )2 ] p,n

i +1

i

p,n

p,n

i +1

(10)

[0,

i +1

τ ∆n

− 1].

where Φ p,n (y) = y − f n (ti , y, ȳi , zi , ui )∆n − p∆n (y − Lin )− + p∆n (y − Vin )+ . For the continuous p,n p,n p,n + p,n − p,n time version (Yt , Zt , Ut , Kt , Kt )0≤ t ≤ T : p,n

Yt

p,n

:= y[t/∆n ] ,

+ p,n Kt

[t/∆n ]

:=

i =0

p,n

Zt + p,n ki ,

p,n

p,n

:= z[t/∆n ] , − p,n Kt

Ut [t/∆n ]

:=

i =0

− p,n

ki

p,n

:= u[t/∆n ] , .


Risks 2020, 8, 72

9 of 30

3.2. Explicit Discrete Penalization Scheme In many cases, the inverse of mapping Φ is not easy to get directly, for example, if f is not a linear n p,n p,n function on y. We replace yi in f n by EGi [yi+1 ] in (7), it follows n p,n p,n p,n p,n p,n + p,n − p,n = ỹi+1 + f n (ti , EGi [ỹi+1 ], ỹ¯i , z̃i , ũi )∆n + k̃ i − k̃ i p,n p,n − z̃i ∆Bin+1 − ũi ∆Min+1 , i ∈ [0, n − 1]; + p,n p,n n n − k̃ i = p∆ (ỹi − Li ) , i ∈ [0, n − 1]; − p,n p,n k̃ i = p∆n (ỹi − Vin )+ , i ∈ [0, n − 1]; p,n n δ ỹi = ξ i , i ∈ [n, n ],

p,n

      

ỹi

      p,n

p,n

p,n

where ỹ¯i , z̃i as follows:

and ũi n

p,n

EGi [ỹi+1 ] =

(11)

n

p,n

can be calculated as (8) and (9). By Section 2.3, we computer EGi [yi+1 ] 1 p,n

p,n

ỹi+1 en =1,hn =1 + ỹi+1 en =−1,hn =1 2 i i +1 i i +1 ∆n p,n

p,n

+ ỹi+1 en =1,hn =0,hn =1 + ỹi+1 en =−1,hn =0,hn =1 2( T − t i ) i +1 i i +1 i +1 i i +1

T − t i +1 p,n p,n + ỹi+1 en =1,hn =0,hn =0 + ỹi+1 en =−1,hn =0,hn =0 . 2( T − t i ) i +1 i i +1 i +1 i i +1

By taking the conditional expectation of (11) in Gin , we have the following explicit penalization scheme:               

i h n n p,n p,n p,n p,n p,n + p,n − p,n = EGi ỹi+1 + f n (ti , EGi [ỹi+1 ], ỹ¯i , z̃i , ũi )∆n + k̃ i − k̃ i , i ∈ [0, n − 1]; i − nh n n p,n p,n p,n p,n + p,n p,n k̃ i = 1+p∆p∆n EGi ỹi+1 + f n (ti , EGi [ỹi+1 ], ỹ¯i , z̃i , ũi )∆n − Lin , i ∈ [0, n − 1]; i + nh n n − p,n p,n p,n p,n p,n p,n k̃ i = 1+p∆p∆n EGi ỹi+1 + f n (ti , EGi [ỹi+1 ], ỹ¯i , z̃i , ũi )∆n − Vin , i ∈ [0, n − 1]; p,n

ỹi

p,n

             

ỹi

p,n

z̃i

p,n

ũi

= ξ in , i ∈ [n, nδ ]i; h n p,n = √1 n EGi ỹi+1 ein+1 ,

i ∈ [0, n − 1];

=

E

Gin

p,n ỹi+1 hin+1 −hin −∆n (1−hin+1 )γi+1 n G E i [(hin+1 −hin −∆n (1−hin+1 )γi+1 )2 ]

[

)]

(

i ∈ [0,

,

p,n

p,n

p,n

p,n

p,n

p,n

:= ỹ[t/∆n ] ,

+ p,n

K̃t

Z̃t

[t/∆n ]

:=

i =0

+ p,n

k̃ i

− p,n

Ũt [t/∆n ]

:=

)0≤ t ≤ T :

p,n

p,n

− p,n

− 1].

, K̃t

:= z̃[t/∆n ] , K̃t

,

τ ∆n

+ p,n

For the continuous time version (Ỹt , Z̃t , Ũt , K̃t Ỹt

i =0

+ p,n

p,n

If Vin > ỹi p,n If ỹi ≤ − p,n k̃ i =

+ p,n

> Lin , we can get k̃ i

Lin , we can get

+ p,n k̃ i

=

p,n

:= ũ[t/∆n ] ,

− p,n

k̃ i

Remark 1. We give the following explanations of the derivation of k̃ i •

(12)

. − p,n

and k̃ i

:

− p,n

= k̃ i

p∆n 1+ p∆n

= 0; h i − n n p,n p,n p,n p,n p,n EGi ỹi+1 + f n (ti , EGi [ỹi+1 ], ỹ¯i , z̃i , ũi )∆n − Lin , p,n

0. From (12), we know that p should be much larger than n to keep ỹi above the lower obstacle Lin ; nh i + n n p,n p,n − p,n p,n p,n p,n p,n If ỹi ≥ Vin , we can get k̃ i = 1+p∆p∆n EGi ỹi+1 + f n (ti , EGi [ỹi+1 ], ỹ¯i , z̃i , ũi )∆n − Vin , + p,n

p,n

k̃ i = 0. From (12), we know that p should be much larger than n to keep ỹi obstacle Vin .

under the upper


Risks 2020, 8, 72

10 of 30

4. Discrete Reflected Scheme We can obtain the solution Y by reflecting between the two obstacles and get the increasing processes K + and K − directly. 4.1. Implicit Discrete Reflected Scheme We have the following implicit discrete reflected scheme,             

n −n n n n n yin = yin+1 + f n (ti , yin , ȳin , zin , uin )∆n + k+ i − k i − zi ∆Bi +1 − ui ∆Mi +1 , i ∈ [0, n − 1]; n n n Vi ≥ yi ≥ Li , i ∈ [0, n − 1]; +n −n n −n k i ≥ 0, k i ≥ 0, k+ = 0, i ∈ [0, n − 1]; i ki +n n n n n (yi − Li )k i = (yi − Vin )k− = 0, i ∈ [0, n − 1]; i n n δ yi = ξ i , i ∈ [n, n ].

(13)

where ȳin , zin and uin can be calculated as (8) and (9). By taking conditional expectation of (13) in Gin , it follows  n n −n yin = EGi [yin+1 ] + f n (ti , yin , ȳin , zin , uin )∆n + k+  i − k i , i ∈ [0, n − 1];     Vin ≥ yin ≥ Lin , i ∈ [0, n − 1];    −n +n −n +n  k ≥ 0, k ≥ 0, k = 0, i ∈ [0, n − 1];   i i i ki  + n n  (yin − Lin )k i = (yin − Vin )k− = 0, i ∈ [0, n − 1]; i (14) n n δ yi = ξ i , i ∈ [n, n ];    n   i ∈ [0, n − 1]; zin = √1 n EGi yin+1 ein+1 ,   ∆  n  G  E i [yin+1 ( hin+1 −hin −∆n (1−hin+1 )γi+1 )]   uin = , i ∈ [0, ∆τn − 1].  Gn n n n n 2 i

E

[(hi+1 −hi −∆ (1−hi+1 )γi+1 ) ]

If ∆n is small enough, similarly to Section 4.1 in Xu (2011), (14) is equivalent to                             

n n −n − k yin = Φ−1 EGi [yin+1 ] + k+ , i ∈ [0, n − 1]; i i n − n Gi [ yn ] + f n ( t , Ln , L̄n , zn , un ) ∆n − Ln k+ = E , i ∈ [0, n − 1]; i i +1 i i i i i i n + n k− = EGi [yin+1 ] + f n (ti , Vin , V̄in , zin , uin )∆n − Vin , i ∈ [0, n − 1]; i yin = ξ in , i ∈ [n, nδ ]; n 1 G n n zi = √ n E i yi+1 ein+1 , ∆

uin =

E

Gin

i ∈ [0, n − 1];

yin+1 hin+1 −hin −∆n (1−hin+1 )γi+1 n G E i [(hin+1 −hin −∆n (1−hin+1 )γi+1 )2 ]

[

)]

(

(15)

i ∈ [0,

,

τ ∆n

− 1].

here Φ(y) = y − f n (ti , y, ȳin , zin , uin )∆n . For the continuous time version (Ytn , Ztn , Utn , Kt+n , Kt−n )0≤t≤T : Ztn := zn[t/∆n ] ,

Ytn := yn[t/∆n ] , [t/∆n ]

Kt+n

:=

i =0

n k+ i ,

Kt−n

Utn := un[t/∆n ] ,

[t/∆n ]

:=

i =0

n k− i .

4.2. Explicit Discrete Reflected Scheme We introduce the following explicit discrete reflected scheme by replacing yin in the generator f n by E[yin+1 |Gin ] in (13).


Risks 2020, 8, 72

11 of 30

n

n −n n n ỹin = ỹin+1 + f n (ti , EGi [ỹin+1 ], ỹ¯in , z̃in , ũin )∆n + k̃+ i − k̃ i − z̃i ∆Bi +1 n n − ũi ∆Mi+1 , i ∈ [0, n − 1]; Vin ≥ ỹin ≥ Lin , i ∈ [0, n − 1]; n n n −n k̃+ ≥ 0, k̃− ≥ 0, k̃+ = 0, i ∈ [0, n − 1]; i i i k̃ i +n n n n n n (ỹi − Li )k̃ i = (ỹi − Vi )k̃− = 0, i ∈ [0, n − 1]; i ỹin = ξ in , i ∈ [n, nδ ].

                

(16)

where ỹ¯in , z̃in and ũin can be calculated as (8) and (9). By taking conditional expectation of (16) in Gin :                             

n n n −n ỹin = EGi [ỹin+1 ] + f n (ti , EGi [ỹin+1 ], ỹ¯in , z̃in , ũin )∆n + k̃+ i − k̃ i , n n n Vi ≥ ỹi ≥ Li , i ∈ [0, n − 1]; n −n +n −n k̃+ ≥ 0, k̃ ≥ 0, k̃ = 0, i ∈ [0, n − 1]; i i i ki +n −n n n n n (ỹi − Li )k̃ i = (ỹi − Vi )k̃ i = 0, i ∈ [0, n − 1];

i ∈ [0, n − 1];

ỹin = ξ in ,

i ∈ [n, nδ ]; n i ∈ [0, n − 1]; z̃in = √1 n EGi ỹin+1 ein+1 , p,n ũi

=

∆ Gn E i [ỹin+1 ( hin+1 −hin −∆n (1−hin+1 )γi+1 )]

E

Gn i [( hn − hn − ∆n (1− hn ) γi +1 )2 ] i +1 i i +1

i ∈ [0,

,

τ ∆n

(17)

− 1].

Similarly to the implicit reflected case, we can obtain                           

n n n −n ỹin = EGi [ỹin+1 ] + f n (ti , EGi [ỹin+1 ], ỹ¯in , z̃in , ũin )∆n + k̃+ i ∈ [0, n − 1]; i − k̃ i , n − n +n k̃ i = EGi [ỹin+1 ] + f n (ti , EGi [ỹin+1 ], ỹ¯in , z̃in , ũin )∆n − Lin , i ∈ [0, n − 1]; + n n −n G G n n n n n n n n , i ∈ [0, n − 1]; k̃ i = E i [ỹi+1 ] + f (ti , E i [ỹi+1 ], ỹ¯i , z̃i , ũi )∆ − Vi

i ∈ [n, nδ ]; ỹin = ξ in , n z̃in = √1 n EGi ỹin+1 ein+1 , i ∈ [0, n − 1]; p,n ũi

=

∆ Gn E i [ỹin+1 ( hin+1 −hin −∆n (1−hin+1 )γi+1 )] Gn E i [(hin+1 −hin −∆n (1−hin+1 )γi+1 )2 ]

,

i ∈ [0,

τ ∆n

(18)

− 1].

For the continuous time version (Ỹtn , Z̃tn , Ũtn , K̃t+n , K̃t−n )0≤t≤T : Ỹtn := ỹn[t/∆n ] ,

Z̃tn := z̃n[t/∆n ] ,

[t/∆n ]

K̃t+n :=

i =0

n k̃+ i ,

K̃t−n :=

Ũtn := ũn[t/∆n ] ,

[t/∆n ]

i =0

n k̃− i .

5. Convergence Results We first state the convergence result from the Penalized ABSDE (19) to RABSDE (2) in Theorem 1, which is the basis of the following convergence results of the discrete schemes we have studied above. We prove the convergence (Theorem 2) from the implicit discrete penalization scheme (7) to the penalized ABSDE (6) with the help of Lemma 1. Combining with Theorem A2, we can get the convergence (Theorem 3) of the explicit discrete penalization scheme (11). By Theorem A3, Lemma 1 and Theorem 1, we can prove the convergence of the implicit discrete reflected scheme (13). By Theorem A3, Theorem A4 and Lemma A4, the convergence (Theorem 5) of the explicit penalization discrete scheme (16) then follows. The proofs of Theorem 1, Lemma 1, Theorem 2 and Theorem 4 can be seen in Appendix A.


Risks 2020, 8, 72

12 of 30

5.1. Convergence of the Penalized ABSDE to RABSDE (2) Theorem 1. Suppose that the anticipated process ξ, the generator f satisfy Assumption 3 and Assumption 4, f (t, y, ȳr , z, u) is increasing in ȳ, the obstacles L and V satisfy Assumption 5. We can consider the following special case of the penalized ABSDE for RABSDE (2):           

p p p p p +p −p p p −dYt = f n t, Yt , EGt [Yt+δ ], Zt , Ut dt + dKt − dKt − Zt dBt − Ut dMt , Rt +p p Kt = 0 p(Ys − Ls )− ds, t ∈ [0, T ]; R −p p t Kt = 0 p(Ys − Vs )+ ds, t ∈ [0, T ]; p

t ∈ [0, T ]; (19)

t ∈ [ T, T + δ].

Yt = ξ t ,

p

Then we have the limiting process (Y, Z, U, K + , K − ) of (Y p , Z p , U p , K + p , K − p ), i.e., as p → ∞, Yt → Yt p p +p −p Zt → Zt weakly in L2G (0, T; R), Ut → Ut weakly in L2,τ ( Kt ) → G (0, T; R), Kt A2G (0, T; R). Moreover, there exists a constant Cξ, f ,L,V depending on ξ, f (t, 0, 0, 0, 0), L and V, such that " Z Z

in SG2 (0, T + δ; R), Kt+ (Kt− ) weakly in

E

T

p

sup |Yt − Yt |2 +

0

0≤ t ≤ T

+ sup 0≤ t ≤ T

+p |(Kt

T

p

| Zt − Zt |2 dt +

−p − Kt+ ) − (Kt

p

|Ut − Ut |2 1{τ >t} γt dt

0

#

1 − Kt− )|2 ≤ √ Cξ, f ,L,V . p

5.2. Convergence of the Implicit Discrete Penalization Scheme We first introduce the following lemma to prove the convergence result of the penalized ABSDE (19) to the implicit penalization scheme. p,n

p,n

p,n

p

p

p

Lemma 1. Under Assumption 6 and Assumption 7, (Yt , Zt , Ut ) converges to (Yt , Zt , Ut ) in the following sense: lim E

h

n→∞

p,n

sup |Yt

0≤ t ≤ T

Z T

p

− Yt |2 + + p,n

for any t ∈ [0, T ], as n → ∞, Kt

0

− p,n

− Kt

p,n

| Zt

p

− Zt |2 dt + +p

→ Kt

−p

− Kt

Z T 0

p,n

|Ut

i p − Ut |2 1{τ >t} γt dt = 0,

(20)

in L2G (0, T; R).

Theorem 2. (Convergence of the implicit discrete penalization scheme) Under Assumption 3 and p,n p,n p,n Assumption 7, (Yt , Zt , Ut ) converges to (Yt , Zt , Ut ) in the following sense: " lim lim E

p→∞ n→∞

sup 0≤ t ≤ T

p,n |Yt

2

− Yt | +

Z T

+ p,n

for any t ∈ [0, T ], as p → ∞, n → ∞, Kt

0

p,n | Zt

− p,n

− Kt

2

− Zt | dt +

Z T 0

# p,n |Ut

2

− Ut | 1{τ >t} γt dt = 0,

(21)

→ Kt+ − Kt− in L2G (0, T; R).

5.3. Convergence of the Explicit Discrete Penalization Scheme By Theorem 2 and Theorem A2, we can obtain the following convergence result of the explicit penalization discrete scheme. Theorem 3. (Convergence of the explicit discrete penalization scheme) Under Assumption 3 and p,n p,n p,n Assumption 7, (Ỹt , Z̃t , Ũt ) converges to (Yt , Zt , Ut ) in the following sense: " lim E

n→∞

sup 0≤ t ≤ T

p,n |Ỹt

2

− Yt | +

Z T 0

p,n | Z̃t

2

− Zt | dt +

Z T 0

# p,n |Ũt

2

− Ut | 1{τ >t} γt dt = 0,

(22)


Risks 2020, 8, 72

13 of 30

+ p,n

for any t ∈ [0, T ], as n → ∞, K̃t

− p,n

− K̃t

→ Kt+ − Kt− in L2G (0, T; R).

5.4. Convergence of the Implicit Discrete Reflected Scheme Theorem 4. (Convergence of the implicit discrete reflected scheme) Under Assumption 7 and Assumption 3, (Ytn , Ztn , Utn ) converges to (Yt , Zt , Ut ) in the following sense: " sup |Ytn − Yt |2 +

lim E

n→∞

0≤ t ≤ T

Z T 0

| Ztn − Zt |2 dt +

Z T 0

#

|Utn − Ut |2 1{τ >t} γt dt = 0,

(23)

and for any t ∈ [0, T ], as n → ∞, Kt+n − Kt−n → Kt+ − Kt− in L2G (0, T; R). 5.5. Convergence of the Explicit Discrete Reflected Scheme By Theorem A3, Theorem A4 and Lemma A4, we can get the convergence result of the explicit penalization discrete scheme. Theorem 5. (Convergence of the explicit discrete reflected scheme) Under Assumption 3 and Assumption 7, (Ỹtn , Z̃tn , Ũtn ) converges to (Yt , Zt , Ut ) in the following sense: " lim E

n→∞

sup 0≤ t ≤ T

|Ỹtn

2

− Yt | +

Z T 0

| Z̃tn

2

− Zt | dt +

Z T 0

#

|Ũtn

2

− Ut | 1{τ >t} γt dt = 0,

(24)

for any t ∈ [0, T ], as n → ∞, K̃t+n − K̃t−n → Kt+ − Kt− in L2G (0, T; R). 6. Numerical Calculations and Simulations 6.1. One Example of RABSDE with Two Obstacles and Default Risk For the convenience of computation, we consider the case when the terminal time T = 1, the n −n calculation begins from ynn = ξ n , and proceeds backward to solve (yin , zin , uin , k+ i , k i ) for i = n − 1, n − 2, ..., 1, 0. We use Matlab for the simulation. We consider a simple situation: the terminal value ξ T = Φ( BT , MT ) and anticipated process ξ t = Φ( Bt ) (t ∈ ( T, T + δ]); the obstacles Lt = Ψ1 (t, Bt , Mt ) and Vt = Ψ2 (t, Bt , Mt ), where Φ, Ψ2 and Ψ3 are real analytic functions defined on R, [0, T ] × R and [0, T ] × R respectively. We take the following example (n = 200, anticipated time δ = 0.3):

y ȳ

f (t, y, ȳ, z, u) = +

+ z + u, 2 2 Φ( Bt ) = | Bt | + MT ,

t ∈ [0, T ];

t ∈ [ T, T + δ];

Ψ1 (t, Bt , Mt ) = | Bt | + Mt + T − t, Ψ2 (t, Bt , Mt ) = | Bt | + Mt + 2 ( T − t) ,

t ∈ [0, T ]; t ∈ [0, T ];

This example satisfies the Assumption 3, Assumption 4 and Assumption 5 in the theoretical Section 1.3. We choose the default time τ as a uniformly distributed random variable. As the inverse for both implicit schemes in (10) and (15) is not easy to get directly, we only use explicit schemes below. We are going to illustrate the behaviors of the explicit reflected scheme by looking at the pathwise behavior for n = 400. Further, we will compare the explicit reflected scheme with the explicit penalization scheme for different values of the penalization parameter. Figure 1 represents one path of the Brownian motion, Figures 2 and 3 represent one path of the Brownian motion and one path of the default martingale when the default time τ = 0.7 and 0.2 respectively.


Risks 2020, 8, 72

14 of 30

Figure 1. One path of the Brownian motion

Figure 2. One path of the default martingale (τ = 0.7)

Figure 3. One path of the default martingale (τ = 0.2)

Figures 4 and 5 represent the paths of the solution ỹn , increasing processes K̃ +n and K̃ −n in the explicit reflected scheme where the random default time τ = 0.7. We can see that for all i, ỹin stays between the lower obstacle Lin and the upper obstacle Vin , the increasing process K̃i+n (resp. K̃i−n ) pushes ỹin upward (resp. downward), and they can not increase at the same time. In this example for n = 400, default time τ = 0.7, we can get the reflected solution ỹ0n = 1.2563 from the explicit reflected scheme.


Risks 2020, 8, 72

15 of 30

Figure 4. One path of ỹn in the explicit reflected scheme (τ = 0.7)

Figure 5. The paths of the increasing processes in the explicit reflected scheme (τ = 0.7)

Figures 4 and 6 illustrate the influence of the jump on the solution ỹn at the different random default times, the reflected solution ỹn moves downwards after the default time (which can not be shown in Figure 7). From the approximation of the default martingale (5), Mn is larger with a larger default time.

Figure 6. One path of ỹn in the explicit reflected scheme (τ = 0.2)


Risks 2020, 8, 72

16 of 30

Figure 7. One path of ỹn in the explicit reflected scheme without default risk.

Table 1 and contains the comparison between the explicit reflected scheme and the explicit p,n penalization scheme by the values of ỹ0n and ỹ0 with respect to the parameters n and p. As n increases, the reflected solution ỹ0n increases because of the choice of the coefficient. For fixed n, as p,n the penalization parameter p increases, the penalization solution ỹ0 converges increasingly to the reflected solution ỹ0n , which is obvious from the comparison theorem of BSDE with default risk. If p p,n and n have a smaller difference (when n = 103 , p = 103 ), the penalization solution ỹ0 is far from the reflected solution ỹ0n . Hence,the penalization parameter p should be chosen as large as possible. Table 2 illustrates the comparison between the reflected solution ỹ0n and ỹ0n∗ . Figure 7 represents the situation without the default risk, the reflected solution ỹ0n∗ has a larger value than in the situation when the default case happens (Figure 4). p,n

Table 1. The values of the penalization solution ỹ0 p,n

ỹ0

n = 200 n = 400 n = 1000

(τ = 0.7).

p = 103

p = 104

p = 105

p = 106

1.2369 1.2458 1.2343

1.2394 1.2482 1.2497

1.2428 1.2496 1.2527

1.2452 1.2511 1.2630

Table 2. The values of the reflected solution ỹ0n (τ = 0.7) and ỹ0n∗ .

n = 200 n = 400 n = 1000

ỹ0n

ỹ0n∗

1.2469 1.2563 1.2644

1.5451 1.5507 1.5614

6.2. Application in American Game Options in a Defaultable Setting 6.2.1. Model Description Hamadène (2006) studied the relation between American game options and RBSDE with two obstacles driven by Brownian motion. In our paper, we consider the case with default risk. An American game option contract with maturity T involves a broker c1 and a trader c2 : • • •

The broker c1 has the right to cancel the contract at any time before the maturity T, while the trader c2 has the right to early exercise the option; the trader c2 pays an initial amount (the price of this option) which ensures an income Lτ1 from the broker c1 , where τ1 ∈ [0, T ] is an G -stopping time; the broker has the right to cancel the contract before T and needs to pay Vτ2 to c2 . Here, the payment amount of the broker c1 should be greater than his payment to the trader c2 (if trader


Risks 2020, 8, 72

•

17 of 30

decides for early exercise), i.e. VĎ„2 ≼ LĎ„2 , VĎ„2 − LĎ„2 is the premium that the broker c1 pays for his decision of early cancellation. Ď„2 ∈ [0, T ] is an G -stopping time; if c1 and c2 both decide to stop the contract at the same time Ď„, then the trader c2 gets an income equal to QĎ„ 1{Ď„ <T } + Ξ1{Ď„ =T } .

6.2.2. The Hedge for the Broker Consider a financial market M, we have a riskless asset Ct ∈ R with risk-free rate r: ( dCt = rCt dt, t ∈ (0, T ], C0 = c0 , t = 0;

(25)

one risky asset St ∈ R: (

= St (Âľdt + ĎƒdBt + χdMt ) , = s0 , t = 0,

dSt S0

t ∈ (0, T ],

(26)

where Bt is a 1-dimensional Brownian Motion, Âľ is the expected return, Ďƒ is the volatility, χ is the parameter related to the default risk. (1) (2) Consider a self-financing portfolio Ď€ ∈ Rd with strategy Ď€ = β s , β s s∈[t,T ] trading on C and S respectively on the time interval [t, T ]. AĎ€,Îą is the wealth process with the value Îą A at time t, here is a non-negative Ft -measurable random variable. (1)

(2)

s ∈ [t, T ];

AĎ€,Îą = β s Cs + β s Ss , s AĎ€,Îą = ÎąA + s Z Th t

(1)

Z s (1) t

β u dCu +

(2)

Z s (2) t

β u dSu ,

s ∈ [t, T ];

(27)

i

| β u | + ( β u Su )2 du < ∞;

Let LĎ€,Îą be a positive local martingale with the following form: (

dLĎ€,Îą t L0Ď€,Îą

−1 = − LĎ€,Îą t− Ďƒ ( Âľ − r ) dBt , = 1, t = 0,

t ∈ (0, T ],

By Girsanov’s theorem, let Pπ,ι be the equivalent measure of P: 2 Pπ,ι

1 −1 Ď€,Îą −1 âˆ’Ďƒ (Âľ − r ) T ,

G T = L T = exp âˆ’Ďƒ (Âľ − r ) BT − P 2 here let EĎ€,Îą be the expectation, BĎ€,Îą and MĎ€,Îą be the Brownian motion and the default martingale under the measure PĎ€,Îą : BtĎ€,Îą := Bt + Ďƒâˆ’1 (Âľ − r )t; MtĎ€,Îą := Mt . Hence, the risky asset St defined in (26) can be converted into the following form under measure PĎ€,Îą : (

dSt S0

= St rdt + ĎƒdBtĎ€,Îą + χdMtĎ€,Îą , = s0 , t = 0.

t ∈ (0, T ],

(28)

Denote by (Ď€, θ ) a hedge for the broker against the American game option after t, where Ď€ is defined in (27), θ ∈ [t, T ] is a stopping time, satisfying AĎ€,Îą ≼ R(s, θ ) := Vθ 1{θ <s} + Ls 1{s<θ } + Qs 1{θ =s<T } + Ξ1{θ =s=T } , s ∈ [t, T ], P − a.s. s

(29)


Risks 2020, 8, 72

18 of 30

here R(s, θ ) is the amount that the broker c1 has to pay if the option is exercised by c2 at s or canceled at time θ. Similarly to El Karoui et al. (1997b) and Karatzas and Shreve (1998), we define the value of the option at time t by Jt , where ( Jt )0≤t≤T is an rcll (right continuous with left limits) process, for any t ∈ [0, T ], n Jt := ess inf α A ≥ 0; Gt -measurable such that there exists a hedge (π, θ ) after t, o where π is a self-financing portfolio after t whose value at t is α A .

(30)

Consider the following RBSDE with two obstacles and default risk, for any t ∈ [0, T ], there exist a stopping time θt , a process Zsπ,α t≤s≤T and increasing processes Ksπ,α+ t≤s≤T and Ksπ,α− t≤s≤T , such that  R θ π,α π,α π,α,+ π,α,+ π,α− π,α−  Y = Y + K − K − K − K − s t Zuπ,α dBuπ,α  s s t  θt θt θt   Rθ   − s t Uuπ,α dMuπ,α , s ∈ [t, θt ];   π,α − rT (31) YT = e ξ;    π,α − rs − rs  e Ls ≤ Ys ≤ e Vs , s ∈ [t, T ];    R R  θ θ  π,α t t −ru L ) dK π,α+ = −ru V − Y π,α ) dK π,α− = 0; u u u u u t (Yu − e t (e For any s ∈ [t, T ], ert Ytπ,α is a hedge for the broker c1 against the game option, i.e. Jt = ert Ytπ,α (see Theorem A5 in the Appendix). Similarly to Proposition 4.3 in Hamadène (2006), we set θt∗ := inf s ≥ t; Ytπ,α = e−rs Vs ∧ T = inf s ≥ t; Ksπ,α− > 0 ; v∗t := inf s ≥ t; Ytπ,α = e−rs Ls ∧ T = inf s ≥ t; Ksπ,α+ > 0 ,

(32)

therefore, we can get R̂t (v, θt∗ ) ≤ Ytπ,α = R̂t (v∗t , θt∗ ) ≤ R̂t (v∗t , θ ), where h

i R̂t (v, θ ) := Eπ,α e−rθ Vθ 1{θ <v} + e−rv Lv 1{v<θ } + e−rθ Qθ 1{θ =v<T } + e−rT ξ1{θ =v=T } Gt , P − a.s. 6.2.3. Numerical Simulation We use the same calculation method as in Section 6.1, starting from Ynπ,α = ξ, and proceeding backward to solve (Yiπ,α , Ziπ,α , Uiπ,α , Kiπ,α+ , Kiπ,α− ) for i = n − 1, ..., 1, 0 with step size ∆n . The forward SDEs (25) and (26) can be numerically approximated by the Euler scheme on the time grid (ti )i=0,1,...n : Ci+1 =Ci + rCi ∆n ; Si+1 =Si + Si (µ∆n + σ∆Bin + χ∆Min ) . In this case, we consider parameters as below: s0 =1.5, T = 1, r = 1.1, µ = 1.5, σ = 0.5, χ = 0.2, Lt = (St − 1)+ , Vt = 2 (St − 1)+ , ξ = 1.2 (ST − 1)+ , In the case n = 400, Figure 8 represents one path of the Brownian motion, Figures 9 and 10 represent the paths of the solution Y π,α , increasing processes K π,α− and K π,α− in the explicit reflected scheme where the random default time τ = 0.2. We can see that Ytπ,α stays between the lower obstacle e−rt Lt and the upper obstacle e−rt Vt . In this example for n = 400, default time τ = 0.2, we can get the solution Y0π,α = 0.6857 from the explicit reflected scheme, i.e. the hedge for the broker c1 against the game option at t = 0 in the defaultable model. In the case without the default risk, Y0π,α = 0.7704,


Risks 2020, 8, 72

19 of 30

which means the occurrence of the default event could reduce the value of Y π,α . Figure 11 represents the situation without the default risk, the solution Y π,α has a larger value than in the situation when the default case happens (Figure 9).

Figure 8. One path of the Brownian motion

Figure 9. One path of Y π,α in the explicit reflected scheme (τ = 0.2)

Figure 10. One path of the increasing processes in the explicit reflected scheme (τ = 0.2)


Risks 2020, 8, 72

20 of 30

Figure 11. The paths of Y π,α in the explicit reflected scheme without default risk

Author Contributions: Both two authors have contributed to the final version of the manuscript, and have read and agreed to the published version of the manuscript. This paper is one part of Jingnan Wang’s doctoral dissertation (Wang 2020), Ralf Korn is Jingnan Wang’s PhD supervisor. For this manuscript, Jingnan Wang developed the theoretical formalism and completed the numerical simulations through Matlab. Ralf Korn supervised the work, discussed several versions of the research with Jingnan Wang and assisted in the transformation of the research to this article format. All authors have read and agreed to the published version of the manuscript. Funding: Jingnan Wang’s work has been carried out as a member of the DFG Research Training Group 1932 “Stochastic Models for Innovations in the Engineering Sciences. The funding by DFG is gratefully aknowledged. Acknowledgments: We acknowledge the PhD seminar held in our department every week to exchange research progress and bring new insights. Conflicts of Interest: The authors declare no conflict of interest.

Appendix A Lemma A1. (Discrete Gronwall’s Inequality) (Lemma 2.2 in Mémin et al. 2008) Suppose that a, b and c are positive constants, b∆ < 1, ( β i )i∈N is a sequence with positive values, such that i

β i + c ≤ a + b∆ ∑ β j ,

i ∈ N,

j =1

then it follows sup β i + c ≤ aF∆ (b), i≤n

where F∆ (b) is a convergent series with the following form: ∞

F∆ (b) = 1 +

bn (1 + ∆)... (1 + (n − 1)∆) . n n =1

Theorem A1. (Itô’s formula for rcll semi-martingale) (Protter 2005) Let X := ( Xt )0≤t≤T be a rcll semi-martingale, g is a real value function in C 2 , therefore, g( X ) is also a semi-martingale, such that g ( X t ) = g ( X0 ) +

Z t 0

g0 ( Xs− )dXs +

1 2

Z t 0

g00 ( Xs )d[ X ]cs +

g( Xs ) − g( Xs− ) − g0 ( Xs− )∆Xs .

0< s ≤ t

where [ X ] is the second variation of X, [ X ]c is the continuous part of [ X ], ∆Xs = ∆Xs − ∆Xs− . We give the proofs of Theorem 1, Lemma 1, Theorem 2 and Theorem 4 can be seen in Section 5.


Risks 2020, 8, 72

21 of 30

Proof of Theorem 1. Firstly, we introduce the following ABSDE: p,q

p,q

p,q

p,q

p,q

= f t, Yt , EGt [Yt+δ ], Zt , Ut

−dYt

p,q

p,q

dt + q(Yt

p,q

− Lt )− dt

p,q

− Vt )+ dt − Zt dBt − Ut dMt ,

− p(Yt

(A1)

t ∈ [0, T ].

By the existence and uniqueness theorem for ABSDEs with default risk (Theorem 4.3.3 in p,q p Wang 2020), there exists the unique solution for ABSDE (A1). It follows that as q → ∞, Yt ↑ Y t R p,q p p,q p p,q p t − in SG2 (0, T + δ; R), Zt → Z t in L2G (0, T; R), Ut → U t in L2,τ G (0, T; R), 0 q (Ys − Ls ) ds → K t in A2G (0, T; R). (Y p , Z p , U p , K p ) is a solution of the following RABSDE with one obstacle L: p p p p p p −dY t = f t, Y t , EGt [Y t+δ ], Z t , U t dt + dK t p

p

p

− p(Y t − Vt )+ dt − Z t dBt − U t dMt , p

(A2)

t ∈ [0, T ].

p

p

Let p → ∞, it follows that Y t ↓ Yt in SG2 (0, T + δ; R), Z t → Zt in L2G (0, T; R), U t → Ut in 2,τ LG (0, T; R). By the comparison theorem for ABSDEs with default risk (Theorem 2.3.1 in Wang 2020), p

+p

p

p +1

p +1

p

we know that K t is increasing, then K T ↑ KT and K T

p

− K T ≥ sup0≤t≤T [K t − K t ] ≥ 0, therefore, p +p 2 K t → Kt in AG (0, T; R). Hence, there exists a constant C1 depending on ξ, f (t, 0, 0, 0, 0), δ, L and V, such that # " Z T

2 Z T

2

2

C1

p

p

p E sup Y t − Yt +

Z t − Zt dt +

U t − Ut 1{τ >t} γt dt ≤ √ . p 0 0 0≤ t ≤ T p,q

q

p

p,q

Similarly, let p → ∞ in (A1), it follows that Yt ↓ Y t in SG2 (0, T + δ; R), Zt → Z t in L2G (0, T; R), Rt q p q q q q p,q p + 2 Ut → U t in L2,τ G (0, T; R), 0 p (Y s − Vs ) ds → K t in AG (0, T; R). (Y , Z , U , K ) is a solution of the following RABSDE with one obstacle V: q q q q q q q q q −dY t = f t, Y t , EGt [Y t+δ ], Z t , U t dt + q(Y t − Lt )− dt − dK t − Z t dBt − U t dMt , t ∈ [0, T ]. q

q

(A3)

q

Letting q → ∞, it follows that Y t ↓ Yt in SG2 (0, T + δ; R), Z t → Zt in L2G (0, T; R), U t → Ut −p

p

in L2,τ in A2G (0, T; R). Moreover, there exists a constant C2 depending on ξ, G (0, T; R), K t → Kt f (t, 0, 0, 0, 0), δ, L and V, such that "

E

2 Z

q sup Y t − Yt +

0≤ t ≤ T

T 0

Z

2

q

Z t − Zt dt +

T 0

#

2

C2

q

U t − Ut 1{τ >t} γt dt ≤ √ . q p

p

p

By the comparison theorem for ABSDEs with default risk, it follows that Y t ≤ Yt ≤ Y t , for any t ∈ [0, T ]. Therefore, " #

2 C

p

E sup Yt − Yt ≤ √3 , p 0≤ t ≤ T where C3 ≥ 0 is a constant. Applying Itô formula for rcll semi-martingale (Theorem A1), we can obtain T

Z

E

0

p

| Zt − Zt |2 dt +

Z T 0

C p |Ut − Ut |2 1{τ >t} γt dt ≤ √4 , p

where C4 ≥ 0 is a constant. Since +p

Kt

−p

− Kt

Kt+

− Kt−

p

p

=Y0 − Yt − =Y0 − Yt −

Z t 0

Z t 0

Z t Z t p p p p p p f s, Ys , EGs [Ys+δ ], Zs , Us ds − Zs dBs − Us dMs ; 0

Gs

f s, Ys , E [Ys+δ ], Zs , Us ds −

Z t 0

Zs dBs −

0

Z t 0

Us dMs .


Risks 2020, 8, 72

22 of 30

By the convergence of Y p , Z p , U p and the Lipschitz condition of f , it follows "

E

sup 0≤ t ≤ T

"

+p |(Kt

−p − Kt+ ) − (Kt

2 Z

p sup Yt − Yt +

≤ λE

0≤ t ≤ T

T 0

#

− Kt− )|2

p | Zt

Z T

− Zt |2 dt +

0

#

p |Ut

C − Ut |2 1{τ >t} γt dt ≤ √5 , p

−p

+p

where λ, C5 ≥ 0 are constants. Since E[|KT |2 + |KT |2 ] < ∞, there exist processes K̂ + and K̂ − in p p p A2G (0, T; R) are the weak limits of K + p and K − p respectively. Since for any t ∈ [0, T ], Y t ≤ Yt ≤ Y t , we can get p +p +p p dKt = p(Yt − Lt )− dt ≤ p(Y t − Lt )− dt = dK t ; −p

dKt

−p

p

p

= p(Yt − Vt )+ dt ≥ p(Y t − Vt )+ dt = dK t .

Therefore, dK̂t+ ≤ dKt+ , dK̂t− ≥ dKt− , it follows that dK̂t+ − dK̂t− ≤ dKt+ − dKt− . On the other hand, the limit of Y p is Y, so dK̂t+ − dK̂t− = dKt+ − dKt− , it follows that dK̂t+ = dKt+ , dK̂t− = dKt− , then K̂t+ = Kt+ , K̂t− = Kt− . Proof of Lemma 1. Step 1. Firstly, we consider the continuous and discrete time equations by Picard’s method. p,∞,m+1 p,∞,m+1 p,∞,m+1 In the continuous case, we set Y p,∞,0 = Z p,∞,0 = U p,∞,0 = 0, (Yt , Zt , Ut ) is the solution of the following BSDE:              p,∞,m

p,∞,m+1

Yt

p,∞,m+1

Yt

p,∞,m

RT p,∞,m p,∞,m p,∞,m p,∞,m = ξ T + t f n s, Ys , EGs [Ys+δ ], Zs , Us ds RT RT p,∞,m p,∞,m − + + t q(Ys − Ls ) ds − t p(Ys − Vs ) ds R T p,∞,m+1 R T p,∞,m+1 − t Zs dBs − t Us dMs , t ∈ [0, T ];

(A4)

t ∈ ( T, T + δ],

= ξt,

p,∞,m

p

p

p

p,n,m

)∆n

where (Yt , Zt , Ut ) is the Picard approximation of (Yt , Zt , Ut ). p,n,0 p,n,0 p,n,0 p,n,m+1 p,n,m+1 p,n,m+1 In the discrete case, we set yi = zi = ui = 0 (for any i = 0, 2, ...n), (yi , zi , ui ) is the solution of the following BSDE:              p,n,m

p,n,m+1

yi

p,n,m+1

= y i +1

p,n,m

+ f n ( ti , yi

p,n,m+1

− zi p,n,m+1

yi p,n,m

= ξ in ,

p,n,m+1

∆Bin+1 − ui

p,n

− p∆n (yi

p,n,m

, ȳi

p,n,m

, zi

, ui

p,n

∆Min+1 + p∆n (yi

− Lin )−

− Vin )+ , i ∈ [0, n − 1];

(A5)

i ∈ [n, nδ ].

p,n,m

here (Yt , Zt , Ut ) is the continuous time version of the discrete Picard approximation of p,n,m p,n,m p,n,m ( yi , zi , ui ). Step 2. Then, we consider the following decomposition: Y p,n − Y p = (Y p,n − Y p,n,m ) + (Y p,n,m − Y p,∞,m ) + (Y p,∞,m − Y p ) . From Proposition 1 and Proposition 3 in Lejay et al. (2014) and the definition of Lin and Vin , it follows (20).


Risks 2020, 8, 72

23 of 30

Proof of Theorem 2. By Lemma 1 and Theorem 1, as p → ∞, n → ∞, it follows " p,n |Yt

sup

E

0≤ t ≤ T

2

− Yt | +

Z T 0

"

≤2E

sup 0≤ t ≤ T

p,n |Yt

p − Yt |2

Z T

+

+ 2E

sup 0≤ t ≤ T

2

− Yt | +

2

− Zt | dt +

p,n | Zt

0

" p |Yt

p,n | Zt

Z T 0

p | Zt

Z T 0

p Zt |2 dt +

2

− Zt | dt +

# p,n |Ut

Z T 0

Z T 0

2

− Ut | 1{τ >t} γt dt #

p,n |Ut

p − Ut |2 1{τ >t} γt dt

# p |Ut

2

− Ut | 1{τ >t} γt dt → 0.

For the increasing processes K + p,n and K − p,n , by Theorem 1, we can obtain i2 + p,n − p,n Kt − Kt − Kt+ − Kt− h i C + p,n − p,n +p −p 2 ≤2E Kt − Kt − Kt − Kt +√ , p h

E

where C ≥ 0 is a constant depending on ξ, f (t, 0, 0, 0, 0), δ, L and V. For each fixed p, + p,n Kt

− p,n − Kt

+p

Kt

p,n =Y0

−p

p,n − Yt

p

Z t

p

=Y0 − Yt −

− Kt

Z t

f

0 0

p,n p,n p,n p,n s, Ys , EGs [Ys+δ ], Zs , Us ds −

n

Z t 0

p,n Zs dBsn

Z t 0

p,n

Us dMsn ;

Z t Z t p p p p p p f s, Ys , EGs [Ys+δ ], Zs , Us ds − Zs dBs − Us dMs . 0

0

R· p p,n From Corollary 14 in Briand et al. (2002), we know that as n → ∞, 0 Zs dBsn → 0 Zs dBs R · p,n R· p in SG2 (0, T; R), 0 Us dMsn → 0 Us dMs in SG2 (0, T; R). By the Lipschitz condition of f and the − p,n

+ p,n

convergence of Y p,n , it follows that Kt

− Kt

→ Kt+ − Kt− in L2G (0, T; R).

Proof of Theorem 4. Firstly, we prove (23). From Theorem A3, Lemma 1 and Theorem 1. For fixed p ∈ N, as n → ∞, it follows "

E

sup 0≤ t ≤ T

|Ytn

2

− Yt | +

Z T 0

"

≤3E

sup 0≤ t ≤ T

|Ytn

| Ztn

p,n − Yt |2

+

Z T 0

"

+ 3E

sup 0≤ t ≤ T

p,n |Yt

p − Yt |2

+

+ 3E

sup 0≤ t ≤ T

2

− Yt | +

"

≤3E

sup 0≤ t ≤ T

p,n |Yt

p − Yt |2

+

− Zt | dt +

| Ztn

Z T 0

" p |Yt

2

Z T 0

Z T 0

0

2

3 3 + √ Cξ, f ,L,V + √ λ L,T,δ C f n ,ξ n ,Ln ,V n → 0. p p

− Ut | 1{τ >t} γt dt

0

Z T

p Zt |2 dt +

2

Z T

p Zt |2 dt +

− Zt | dt +

p,n | Zt

#

|Utn

p,n Zt |2 dt +

p,n | Zt

p | Zt

Z T

0

Z T 0

p,n − Ut |2 1{τ >t} γt dt

# p,n |Ut

p − Ut |2 1{τ >t} γt dt

# p |Ut

Z T 0

#

|Utn

2

− Ut | 1{τ >t} γt dt #

p,n |Ut

p − Ut |2 1{τ >t} γt dt


Risks 2020, 8, 72

24 of 30

For increasing processes, for fixed p ∈ N, as n → ∞, h

2 i E Kt+n − Kt−n − Kt+ − Kt−

+ p,n − p,n +p − p 2 − p,n 2

+ p,n

+n −n − Kt ≤3E Kt − Kt − Kt − Kt − Kt

− Kt

+ 3E Kt

2 −p

+p − Kt+ − Kt−

+ 3E Kt − Kt 3 3 − p,n +p − p 2

+ p,n − Kt − Kt − Kt + √ Cξ, f ,L,V + √ λ L,T,δ C f n ,ξ n ,Ln ,V n → 0. ≤3E Kt p p

We give the proof of the following Lemma A2. Lemma A3 and Lemma A4 below have the similar proof method. Lemma A2. (Estimation result of implicit discrete penalization scheme) Under Assumption 6 and Assumption 7, for each p ∈ N and ∆n , when ∆n + 3∆n L + 4∆n L2 + (∆n L)2 < 1, there exists a constant λ L,T,δ depending on the Lipschitz coefficient L, T and δ, such that " p,n

E sup |yi |2 + ∆n i

n −1

1 + p∆n

j =0

n −1

∑ |z j

p,n 2

| + ∆n

j =0

+ p,n | k j |2

n −1

∑ |u j

p,n 2

| (1 − hnj+1 )γ j+1

j =0

− p,n + | k j |2

(A6)

#

≤ λ L,T,δ Cξ n , f n ,Ln ,V n ,

where Cξ n , f n ,Ln ,V n ≥ 0 is a constant depending on ξ n , f n (t j , 0, 0, 0, 0), ( Ln )+ and (V n )− . Proof of Lemma A2. By the definition of implicit penalization discrete scheme (7), applying Itô p,n formula for rcll semi-martingale (Theorem A1) to |y j |2 on j ∈ [i, n − 1], it follows h i n −1 n −1 p,n p,n p,n E |yi |2 + ∆n ∑ |z j |2 + ∆n ∑ |u j |2 (1 − hnj+1 )γ j+1 j =i

= E |ξ nn |2 + 2∆n E

n −1 h

j =i

p,n

yj

j =i

i i n −1 h p,n p,n p,n p,n

p,n + p,n p,n − p,n

n − yj k j ,

f (t j , y j , ȳ j , z j , u j ) + 2E ∑ y j k j j =i

since p,n + p,n

yj k j

p,n − p,n

yj k j

1 + p,n 2 + p,n k + Lnj k j ; p∆n j 1 − p,n 2 − p,n = n kj + Vjn k j . p∆

=−

(A7)


Risks 2020, 8, 72

25 of 30

Moreover, by the Lipschitz condition of f n , we can obtain "

E

∆n + 2

p,n | y i |2

n −1

j =i

p,n | z j |2

j =i

p,n |u j |2 (1 − hnj+1 )γ j+1

2 + p∆n

n −1

j =i

+ p,n | k j |2

− p,n + | k j |2

#

2 i h n −1 i

n

ξ j + ∆n E ∑ | f n (t j , 0, 0, 0, 0)|2

n δ −1

h ≤E |ξ nn |2 + ∆n L

n −1

∆n + 2

j =i

j = n +1

!2 h n −1 i n −1 1 + p,n n 2 + ∆ + 3∆ L + 4∆ L + (∆ L) E ∑ |y j | + E ∑ k j λ1 j =i j =i !2 n −1 1 − p,n + λ1 E sup ((Vjn )− )2 . + λ1 E sup (( Lnj )+ )2 + E ∑ k j λ 1 i ≤ j ≤ n −1 i ≤ j ≤ n −1 j =i

n

n

n 2

n

2

By Assumption 8, applying techniques of stopping times for the discrete case, it follows n −1

E

j =i

!2 + p,n kj

n −1

+E

j =i

− p,n kj

!2

" n

≤ Cξ n , f n ,X n 1 + ∆ E

n −1

j =i

p,n | z j |2

p,n + |u j |2 (1 − hnj+1 )γ j+1

# .

where Cξ n , f n ,X n ≥ 0 is a constant depending on ξ n , f n (t j , 0, 0, 0, 0), X n . Since X n can be dominated by Ln and V n , we can replace it by Ln and V n . By the discrete Gronwall’s inequality (Lemma A1), when ∆n + 3∆n L + 4∆n L2 + (∆n L)2 < 1, we can obtain sup E i

h

p,n | y i |2

i

"

+ E ∆n

n

∑ |z j

p,n 2

| + ∆n

j =0

1 + p∆n

n

∑ |u j

p,n 2

| (1 − hnj+1 )γ j+1

j =0

n

j =0

+ p,n | k j |2

− p,n + | k j |2

#

≤ λ L,T,δ Cξ n , f n ,Ln ,V n ,

where Cξ n , f n ,Ln ,V n ≥ 0 is a constant depending on ξ n , f n (t j , 0, 0, 0, 0), ( Ln )+ and (V n )− . Reconsidering (A7), we take square, sup and sum over j, then take expectation, by Burkholder-Davis-Gundy inequality for the martingale parts, it follows "

# " #

2

n −1

p,n

p,n 2

p,n 2 E sup yi ≤ Cξ n , f n ,Ln ,V n + C∆n ∑ E yi ≤ Cξ n , f n ,Ln ,V n + CT sup E yi . i

i

i =0

Hence (A6) follows. We present the proof of Theorem A2 below; Theorem A3 and Theorem A4 are proved by a similar method.

Theorem A2. (Distance between the implicit discrete penalization and explicit discrete penalization schemes) Under Assumption 6 and Assumption 7, for any $p\in\mathbb N$:
$$\begin{aligned}
&E\Big[\sup_{0\le t\le T}|\tilde Y_t^{p,n}-Y_t^{p,n}|^2+\int_0^T|\tilde Z_t^{p,n}-Z_t^{p,n}|^2\,dt+\int_0^T|\tilde U_t^{p,n}-U_t^{p,n}|^2\,\mathbf 1_{\{\tau>t\}}\gamma_t\,dt\\
&\qquad+\big|(\tilde K_t^{+p,n}-\tilde K_t^{-p,n})-(K_t^{+p,n}-K_t^{-p,n})\big|^2\Big]\le\lambda_{L,T,\delta}\,C_{f^n,\xi^n,L^n,V^n,p}\,(\Delta^n)^2,\qquad(A8)
\end{aligned}$$

where $\lambda_{L,T,\delta}\ge 0$ is a constant depending on the Lipschitz coefficient $L$, the terminal time $T$ and $\delta$, and $C_{f^n,\xi^n,L^n,V^n,p}\ge 0$ is a constant depending on $f^n(t_j,0,0,0,0)$, $\xi^n$, $(L^n)^+$, $(V^n)^-$ and $p$.
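To see why the implicit and explicit variants stay within $O((\Delta^n)^2)$ of each other, consider the following toy sketch (illustrative only: a scalar Lipschitz generator, not the paper's $f^n$, and no reflection, anticipation or default terms). The implicit step solves a fixed-point equation, the explicit step evaluates the generator at the known value, and their gap per step behaves like $(\Delta^n)^2$.

```python
import math

# Toy illustration (hypothetical scalar generator, not the paper's f^n):
#   implicit step:  y_imp = a + dt * f(y_imp)   (fixed-point iteration; contraction when dt*L < 1)
#   explicit step:  y_exp = a + dt * f(a)
# Their gap is O(dt^2), the mechanism behind the (Delta^n)^2 rate in (A8).

def f(y):
    return math.sin(y)  # Lipschitz with constant L = 1

def implicit_step(a, dt, n_iter=50):
    y = a
    for _ in range(n_iter):
        y = a + dt * f(y)
    return y

a = 0.7
for dt in [0.1, 0.05, 0.025]:
    gap = abs(implicit_step(a, dt) - (a + dt * f(a)))
    print(dt, gap, gap / dt**2)   # gap/dt^2 stays roughly constant
```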



Proof of Theorem A2. From the definitions of the implicit discrete penalization scheme (7) and the explicit discrete penalization scheme (11), and the Lipschitz condition of $f^n$, it follows that
$$\begin{aligned}
&E\Big[|\tilde y_j^{p,n}-y_j^{p,n}|^2+\Delta^n|\tilde z_j^{p,n}-z_j^{p,n}|^2+\Delta^n|\tilde u_j^{p,n}-u_j^{p,n}|^2(1-h_{j+1}^n)\gamma_{j+1}\Big]\\
&\quad\le 2\Delta^n L\,E\Big[\Big(\big|E^{\mathcal G_j^n}[\tilde y_{j+1}^{p,n}]-y_j^{p,n}\big|+\big|\bar{\tilde y}_j^{p,n}-\bar y_j^{p,n}\big|+\big|\tilde z_j^{p,n}-z_j^{p,n}\big|+\big|\tilde u_j^{p,n}-u_j^{p,n}\big|(1-h_{j+1}^n)\sqrt{\gamma_{j+1}}\Big)\,\big|\tilde y_j^{p,n}-y_j^{p,n}\big|\Big].\qquad(A9)
\end{aligned}$$

Summing over $j=i,\dots,n-1$, it follows that
$$\begin{aligned}
&E\Big[|\tilde y_i^{p,n}-y_i^{p,n}|^2+\frac{\Delta^n}{2}\sum_{j=i}^{n-1}|\tilde z_j^{p,n}-z_j^{p,n}|^2+\frac{\Delta^n}{2}\sum_{j=i}^{n-1}|\tilde u_j^{p,n}-u_j^{p,n}|^2(1-h_{j+1}^n)\gamma_{j+1}\Big]\\
&\quad\le 2\Delta^n L\,E\sum_{j=i}^{n-1}\Big[\big|E^{\mathcal G_j^n}[\tilde y_{j+1}^{p,n}]-y_j^{p,n}\big|\,\big|\tilde y_j^{p,n}-y_j^{p,n}\big|\Big]+\Delta^n(2L+4L^2)\,E\Big[\sum_{j=i}^{n-1}|\tilde y_j^{p,n}-y_j^{p,n}|^2\Big].
\end{aligned}$$

By (17), (11) and the Lipschitz condition of $f^n$, we can obtain
$$\begin{aligned}
&2\Delta^n L\,E\sum_{j=i}^{n-1}\Big[\big|E^{\mathcal G_j^n}[\tilde y_{j+1}^{p,n}]-y_j^{p,n}\big|\,\big|\tilde y_j^{p,n}-y_j^{p,n}\big|\Big]\\
&\quad\le(\Delta^n L)^2E\sum_{j=i}^{n-1}\Big[2|\tilde y_j^{p,n}|^2+|\tilde z_j^{p,n}|^2+|\tilde u_j^{p,n}|^2(1-h_{j+1}^n)\gamma_{j+1}\Big]+(\Delta^n)^2E\sum_{j=i}^{n-1}\Big[|f^n(t_j,0,0,0,0)|^2\Big]\\
&\qquad+(p\Delta^n)^2E\sum_{j=i}^{n-1}\Big[\big((\tilde y_j^{p,n}-L_j^n)^-\big)^2+\big((\tilde y_j^{p,n}-V_j^n)^+\big)^2\Big]
+\big(8(\Delta^n L)^2+2\Delta^n L\big)E\Big[\sum_{j=i}^{n-1}|\tilde y_j^{p,n}-y_j^{p,n}|^2\Big]+(\Delta^n L)^2E\Big[\sum_{j=n}^{n_\delta}|\xi_j^n|^2\Big],
\end{aligned}$$

hence, there exists a constant $C_{f^n,\xi^n,L^n,V^n}\ge 0$ depending on $f^n(t_j,0,0,0,0)$, $\xi^n$, $(L^n)^+$ and $(V^n)^-$, such that
$$\begin{aligned}
&E\Big[|\tilde y_i^{p,n}-y_i^{p,n}|^2+\frac{\Delta^n}{2}\sum_{j=i}^{n-1}|\tilde z_j^{p,n}-z_j^{p,n}|^2+\frac{\Delta^n}{2}\sum_{j=i}^{n-1}|\tilde u_j^{p,n}-u_j^{p,n}|^2(1-h_{j+1}^n)\gamma_{j+1}\Big]\\
&\quad\le C_{f^n,\xi^n,L^n,V^n}(\Delta^n)^2+\big(8(\Delta^n L)^2+4\Delta^n L+4\Delta^n L^2\big)E\Big[\sum_{j=i}^{n-1}|\tilde y_j^{p,n}-y_j^{p,n}|^2\Big].
\end{aligned}$$

By the discrete Gronwall inequality (Lemma A1), when $8(\Delta^n L)^2+4\Delta^n L+4\Delta^n L^2<1$, we can get
$$\sup_iE\Big[|\tilde y_i^{p,n}-y_i^{p,n}|^2\Big]\le C_{f^n,\xi^n,L^n,V^n}(\Delta^n)^2\,e^{(8\Delta^nL^2+4L+4L^2)T};$$
therefore, it follows that
$$E\Big[\Delta^n\sum_{j=i}^{n-1}|\tilde z_j^{p,n}-z_j^{p,n}|^2+\Delta^n\sum_{j=i}^{n-1}|\tilde u_j^{p,n}-u_j^{p,n}|^2(1-h_{j+1}^n)\gamma_{j+1}\Big]\le C_{f^n,\xi^n,L^n,V^n}(\Delta^n)^2.$$

Reconsidering (A9), we take squares, the supremum and the sum over $j$, and then take expectations; by the Burkholder-Davis-Gundy inequality applied to the martingale parts, we can get
$$E\Big[\sup_i|\tilde y_i^{p,n}-y_i^{p,n}|^2\Big]\le C\Delta^n\sum_{i=0}^{n-1}E\Big[|\tilde y_i^{p,n}-y_i^{p,n}|^2\Big]\le CT\sup_iE\Big[|\tilde y_i^{p,n}-y_i^{p,n}|^2\Big],$$

hence,
$$E\Big[\sup_i|\tilde y_i^{p,n}-y_i^{p,n}|^2+\Delta^n\sum_{j=0}^{n-1}|\tilde z_j^{p,n}-z_j^{p,n}|^2+\Delta^n\sum_{j=0}^{n-1}|\tilde u_j^{p,n}-u_j^{p,n}|^2(1-h_{j+1}^n)\gamma_{j+1}\Big]\le\lambda_{L,T,\delta}\,C_{f^n,\xi^n,L^n,V^n,p}\,(\Delta^n)^2.\qquad(A10)$$
For the increasing processes, by the Lipschitz condition of $f^n$ and (A10), it follows, for each fixed $p$, that
$$E\Big[\big|(\tilde K_t^{+p,n}-\tilde K_t^{-p,n})-(K_t^{+p,n}-K_t^{-p,n})\big|^2\Big]\le\lambda_{L,T,\delta}\,C_{f^n,\xi^n,L^n,V^n,p}\,(\Delta^n)^2.$$
Hence (A8) follows. Similarly to the proof method of Lemma A2, we can get the following Lemma A3.

Lemma A3. (Estimation result of the implicit discrete reflected scheme) Under Assumption 6 and Assumption 7, for each $p\in\mathbb N$ and $\Delta^n$, when $\Delta^n+3\Delta^n L+4\Delta^n L^2+(\Delta^n)^2L^2<1$, there exists a constant $\lambda_{L,T,\delta}$ depending on the Lipschitz coefficient $L$, the terminal time $T$ and $\delta$, such that

$$E\Big[\sup_i|y_i^n|^2+\Delta^n\sum_{j=0}^{n-1}|z_j^n|^2+\Delta^n\sum_{j=0}^{n-1}|u_j^n|^2(1-h_{j+1}^n)\gamma_{j+1}+\Big(\sum_{j=0}^{n-1}k_j^{+n}\Big)^2+\Big(\sum_{j=0}^{n-1}k_j^{-n}\Big)^2\Big]\le\lambda_{L,T,\delta}\,C_{\xi^n,f^n,L^n,V^n},\qquad(A11)$$

where $C_{\xi^n,f^n,L^n,V^n}\ge 0$ is a constant depending on $\xi^n$, $f^n(t_j,0,0,0,0)$, $(L^n)^+$ and $(V^n)^-$. Similarly to the proof method of Theorem A2, we can obtain Theorem A3 below.

Theorem A3. (Distance between the implicit discrete penalization and implicit discrete reflected schemes) Under Assumption 6 and Assumption 7, for any $p\in\mathbb N$:
$$\begin{aligned}
&E\Big[\sup_{0\le t\le T}|Y_t^n-Y_t^{p,n}|^2+\int_0^T|Z_t^n-Z_t^{p,n}|^2\,dt+\int_0^T|U_t^n-U_t^{p,n}|^2\,\mathbf 1_{\{\tau>t\}}\gamma_t\,dt\\
&\qquad+\big|(K_t^{+n}-K_t^{-n})-(K_t^{+p,n}-K_t^{-p,n})\big|^2\Big]\le\lambda_{L,T,\delta}\,C_{f^n,\xi^n,L^n,V^n}\,\frac{1}{\sqrt p},\qquad(A12)
\end{aligned}$$

where $\lambda_{L,T,\delta}\ge 0$ is a constant depending on the Lipschitz coefficient $L$, the terminal time $T$ and $\delta$, and $C_{f^n,\xi^n,L^n,V^n}\ge 0$ is a constant depending on $f^n(t_j,0,0,0,0)$, $\xi^n$, $L^n$ and $V^n$. Similarly to the proof of Lemma A2, we can get the following Lemma A4.

Lemma A4. (Estimation result of the explicit discrete reflected scheme) Under Assumption 6 and Assumption 7, for each $p\in\mathbb N$ and $\Delta^n$, when $\frac{7\Delta^n}{4}+2\Delta^n L+12\Delta^n L^2+10(\Delta^n L)^2<1$, there exists a constant $\lambda_{L,T,\delta}$ depending on the Lipschitz coefficient $L$, $T$ and $\delta$, such that
$$E\Big[\sup_i|\tilde y_i^n|^2+\Delta^n\sum_{j=0}^{n-1}|\tilde z_j^n|^2+\Delta^n\sum_{j=0}^{n-1}|\tilde u_j^n|^2(1-h_{j+1}^n)\gamma_{j+1}+\Big(\sum_{j=0}^{n-1}\tilde k_j^{+n}\Big)^2+\Big(\sum_{j=0}^{n-1}\tilde k_j^{-n}\Big)^2\Big]\le\lambda_{L,T,\delta}\,C_{\xi^n,f^n,L^n,V^n},\qquad(A13)$$



where $C_{\xi^n,f^n,L^n,V^n}\ge 0$ is a constant depending on $\xi^n$, $f^n(t_j,0,0,0,0)$, $(L^n)^+$ and $(V^n)^-$. Similarly to the proof method of Theorem A2, we can obtain Theorem A4 below.

Theorem A4. (Distance between the implicit discrete reflected and explicit discrete reflected schemes) Under Assumption 3 and Assumption 7, for any $p\in\mathbb N$:
$$\begin{aligned}
&E\Big[\sup_{0\le t\le T}|\tilde Y_t^n-Y_t^n|^2+\int_0^T|\tilde Z_t^n-Z_t^n|^2\,dt+\int_0^T|\tilde U_t^n-U_t^n|^2\,\mathbf 1_{\{\tau>t\}}\gamma_t\,dt\\
&\qquad+\big|(\tilde K_t^{+n}-\tilde K_t^{-n})-(K_t^{+n}-K_t^{-n})\big|^2\Big]\le\lambda_{L,T,\delta}\,C_{f^n,\xi^n,L^n,V^n,p}\,(\Delta^n)^2,\qquad(A14)
\end{aligned}$$
where $\lambda_{L,T,\delta}\ge 0$ is a constant depending on the Lipschitz coefficient $L$, $T$ and $\delta$, and $C_{f^n,\xi^n,L^n,V^n,p}\ge 0$ is a constant depending on $f^n(t_j,0,0,0,0)$, $\xi^n$, $L^n$, $V^n$ and $p$.

Theorem A5. For any $s\in[t,T]$, $e^{rt}Y_t^{\pi,\alpha}$ is a hedge for the broker against the game option, i.e., $J_t=e^{rt}Y_t^{\pi,\alpha}$.

Proof of Theorem A5. Step 1. We first prove $J_t\ge e^{rt}Y_t^{\pi,\alpha}$. Similarly to the proof method of Theorem 5.1 in Hamadène (2006), for any fixed time $t\in[0,T]$, let $(\pi,\theta)$ be a hedge after $t$ for the broker against the American game option. By (29) and (30), it follows that $\theta\ge t$ and that $\pi=(\beta_s^{(1)},\beta_s^{(2)})_{s\in[t,T]}$ is a self-financing portfolio whose value at time $t$ is $A$, satisfying $A_s^{\pi,\alpha}\ge R(s,\theta)$ for $s\in[t,T]$. By (27) and the Itô formula for rcll semi-martingales (Theorem A1), we can obtain

$$e^{-r(s\wedge\theta)}A_{s\wedge\theta}^{\pi,\alpha}=e^{-rt}A+\int_t^{s\wedge\theta}\beta_u^{(2)}e^{-ru}\sigma S_u\,dB_u^{\pi,\alpha}+\int_t^{s\wedge\theta}\beta_u^{(2)}e^{-ru}\chi S_u\,dM_u^{\pi,\alpha}\ge e^{-r(s\wedge\theta)}R(s,\theta).\qquad(A15)$$

Let $v\in[t,T]$ be a $\mathcal G$-stopping time. Setting $s=v$ and taking the conditional expectation in (A15), it follows that
$$e^{-rt}A\ge E^{\pi,\alpha}\big[e^{-r(v\wedge\theta)}R(v,\theta)\,\big|\,\mathcal G_t\big].$$
Hence, similarly to the result of Proposition 4.3 in Hamadène (2006),
$$e^{-rt}A\ge\operatorname*{ess\,sup}_{v\ge t}E^{\pi,\alpha}\big[e^{-r(v\wedge\theta)}R(v,\theta)\,\big|\,\mathcal G_t\big]\ge\operatorname*{ess\,inf}_{\theta\ge t}\operatorname*{ess\,sup}_{v\ge t}E^{\pi,\alpha}\big[e^{-r(v\wedge\theta)}R(v,\theta)\,\big|\,\mathcal G_t\big]=Y_t^{\pi,\alpha}.$$

It follows that $J_t\ge e^{rt}Y_t^{\pi,\alpha}$.

Step 2. We then prove $J_t\le e^{rt}Y_t^{\pi,\alpha}$. By the definition of $\theta_t^*$ in (32), it follows that
$$Y_{s\wedge\theta_t^*}^{\pi,\alpha}=Y_t^{\pi,\alpha}-\big(K_{s\wedge\theta_t^*}^{\pi,\alpha,+}-K_t^{\pi,\alpha,+}\big)+\int_t^{s\wedge\theta_t^*}Z_u^{\pi,\alpha}\,dB_u^{\pi,\alpha}+\int_t^{s\wedge\theta_t^*}U_u^{\pi,\alpha}\,dM_u^{\pi,\alpha}.\qquad(A16)$$
Since
$$Y_{\theta_t^*}^{\pi,\alpha}\mathbf 1_{\{\theta_t^*<T\}}=e^{-r\theta_t^*}U_{\theta_t^*}\mathbf 1_{\{\theta_t^*<T\}}\ge e^{-r\theta_t^*}Q_{\theta_t^*}\mathbf 1_{\{\theta_t^*<T\}},$$
therefore, by (A16), we can obtain
$$\begin{aligned}
Y_{s\wedge\theta_t^*}^{\pi,\alpha}&=Y_s^{\pi,\alpha}\mathbf 1_{\{s<\theta_t^*\}}+Y_{\theta_t^*}^{\pi,\alpha}\mathbf 1_{\{\theta_t^*<s\}}+Y_{\theta_t^*}^{\pi,\alpha}\mathbf 1_{\{s=\theta_t^*<T\}}+\xi\,\mathbf 1_{\{s=\theta_t^*=T\}}\\
&\ge e^{-rs}L_s\mathbf 1_{\{s<\theta_t^*\}}+e^{-r\theta_t^*}U_{\theta_t^*}\mathbf 1_{\{\theta_t^*<s\}}+e^{-r\theta_t^*}Q_{\theta_t^*}\mathbf 1_{\{s=\theta_t^*<T\}}+\xi\,\mathbf 1_{\{s=\theta_t^*=T\}}\\
&=e^{-r(s\wedge\theta_t^*)}R(s,\theta_t^*).
\end{aligned}$$



It follows that
$$Y_t^{\pi,\alpha}+\int_t^{s\wedge\theta_t^*}Z_u^{\pi,\alpha}\,dB_u^{\pi,\alpha}+\int_t^{s\wedge\theta_t^*}U_u^{\pi,\alpha}\,dM_u^{\pi,\alpha}\ge e^{-r(s\wedge\theta_t^*)}R(s,\theta_t^*),\qquad s\in[t,T].$$
Set
$$\bar A_s=e^{rs}\Big(Y_t^{\pi,\alpha}+\int_t^{s\wedge\theta_t^*}Z_u^{\pi,\alpha}\,dB_u^{\pi,\alpha}+\int_t^{s\wedge\theta_t^*}U_u^{\pi,\alpha}\,dM_u^{\pi,\alpha}\Big);$$
$$\beta_s^{(2)}=e^{rs}\big(Z_s^{\pi,\alpha}(\sigma S_s)^{-1}+U_s^{\pi,\alpha}(\chi S_s)^{-1}\big)\mathbf 1_{\{s\le\theta_t^*\}};\qquad
\beta_s^{(1)}=\big(\bar A_s-\beta_s^{(2)}S_s\big)C_s^{-1}.$$
Obviously, $\bar A_s=\beta_s^{(1)}C_s+\beta_s^{(2)}S_s$. Applying the Itô formula for rcll semi-martingales (Theorem A1), we can obtain
$$\begin{aligned}
\bar A_s&=e^{rt}Y_t^{\pi,\alpha}+\int_t^s r\bar A_u\,du+\int_t^s e^{ru}Z_u^{\pi,\alpha}\mathbf 1_{\{u\le\theta_t^*\}}\,dB_u^{\pi,\alpha}+\int_t^s e^{ru}U_u^{\pi,\alpha}\mathbf 1_{\{u\le\theta_t^*\}}\,dM_u^{\pi,\alpha}\\
&=e^{rt}Y_t^{\pi,\alpha}+\int_t^s\beta_u^{(1)}\,dC_u+\int_t^s\beta_u^{(2)}\,dS_u.
\end{aligned}$$
So $\pi=(\beta_s^{(1)},\beta_s^{(2)})_{s\in[t,T]}$ is a self-financing portfolio with value $e^{rt}Y_t^{\pi,\alpha}$ at time $t$. Since $\bar A_{s\wedge\theta_t^*}\ge R(s,\theta_t^*)$ for any $s\in[t,T]$, $(\pi,\theta_t^*)$ is a hedge strategy against this American game option; it follows that $J_t\le e^{rt}Y_t^{\pi,\alpha}$.
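To make the hedge construction in Step 2 concrete, here is a minimal sketch (all inputs are hypothetical placeholders, not outputs of the algorithms in this paper) of the portfolio weights $\beta^{(2)}_s$ and $\beta^{(1)}_s$ defined above, together with a check of the decomposition $\bar A_s=\beta^{(1)}_sC_s+\beta^{(2)}_sS_s$.

```python
import math

# Hedge-portfolio weights from the proof of Theorem A5 (illustrative, hypothetical inputs only).
def hedge_weights(A_bar, Z, U, S, C, sigma, chi, r, s, before_theta_star=True):
    """beta2 = e^{rs}(Z/(sigma*S) + U/(chi*S)) * 1_{s <= theta*};  beta1 = (A_bar - beta2*S)/C."""
    indicator = 1.0 if before_theta_star else 0.0
    beta2 = math.exp(r * s) * (Z / (sigma * S) + U / (chi * S)) * indicator
    beta1 = (A_bar - beta2 * S) / C
    return beta1, beta2

# hypothetical numbers for the discounted wealth, BSDE components and market data
A_bar, Z, U, S, C, sigma, chi, r, s = 105.0, 0.8, -0.2, 100.0, 1.02, 0.3, 0.5, 0.01, 0.5
b1, b2 = hedge_weights(A_bar, Z, U, S, C, sigma, chi, r, s)
assert abs(b1 * C + b2 * S - A_bar) < 1e-10   # A_bar = beta1*C + beta2*S by construction
```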

References

Barles, Guy, Rainer Buckdahn, and Étienne Pardoux. 1997. Backward stochastic differential equations and integral-partial differential equations. Stochastics: An International Journal of Probability and Stochastic Processes 60: 57–83.
Bielecki, Tomasz, Monique Jeanblanc, and Marek Rutkowski. 2007. Introduction to Mathematics of Credit Risk Modeling. Stochastic Models in Mathematical Finance. Marrakech: CIMPA–UNESCO–MOROCCO School, pp. 1–78.
Bismut, Jean Michel. 1973. Conjugate convex functions in optimal stochastic control. Journal of Mathematical Analysis and Applications 44: 384–404.
Briand, Philippe, Bernard Delyon, and Jean Mémin. 2002. On the robustness of backward stochastic differential equations. Stochastic Processes and their Applications 97: 229–53.
Cordoni, Francesco, and Luca Di Persio. 2014. Backward stochastic differential equations approach to hedging, option pricing, and insurance problems. International Journal of Stochastic Analysis 2014: 152389.
Cordoni, Francesco, and Luca Di Persio. 2016. A BSDE with delayed generator approach to pricing under counterparty risk and collateralization. International Journal of Stochastic Analysis 2016: 1059303.
Cvitanic, Jaksa, and Ioannis Karatzas. 1996. Backward stochastic differential equations with reflection and Dynkin games. Annals of Probability 24: 2024–56.
Duffie, Darrell, and Larry Epstein. 1992. Stochastic differential utility. Econometrica: Journal of the Econometric Society 60: 353–94.
Dumitrescu, Roxana, and Céline Labart. 2016. Numerical approximation of doubly reflected BSDEs with jumps and RCLL obstacles. Journal of Mathematical Analysis and Applications 442: 206–43.
El Karoui, Nicole, Christophe Kapoudjian, Étienne Pardoux, Shige Peng, and Marie Claire Quenez. 1997a. Reflected solutions of backward SDEs and related obstacle problems for PDEs. Annals of Probability 25: 702–37.
El Karoui, Nicole, Étienne Pardoux, and Marie Claire Quenez. 1997b. Reflected backward SDEs and American options. Numerical Methods in Finance 13: 215–31.
Hamadène, Said. 2006. Mixed zero-sum stochastic differential game and American game options. SIAM Journal on Control and Optimization 45: 496–518.
Hamadène, Said, Jean Pierre Lepeltier, and Anis Matoussi. 1997. Double barrier backward SDEs with continuous coefficient. Pitman Research Notes in Mathematics Series 41: 161–76.



Hamadène, Said, and Jean Pierre Lepeltier. 2000. Reflected BSDEs and mixed game problem. Stochastic Processes and their Applications 85: 177–88.
Jeanblanc, Monique, Thomas Lim, and Nacira Agram. 2017. Some existence results for advanced backward stochastic differential equations with a jump time. ESAIM: Proceedings and Surveys 56: 88–110.
Jiao, Ying, Idris Kharroubi, and Huyen Pham. 2013. Optimal investment under multiple defaults risk: A BSDE-decomposition approach. The Annals of Applied Probability 23: 455–91.
Jiao, Ying, and Huyên Pham. 2011. Optimal investment with counterparty risk: A default-density model approach. Finance and Stochastics 15: 725–53.
Karatzas, Ioannis, and Steven Shreve. 1998. Methods of Mathematical Finance. New York: Springer, vol. 39.
Kusuoka, Shigeo. 1999. A remark on default risk models. Advances in Mathematical Economics 1: 69–82.
Lejay, Antoine, Ernesto Mordecki, and Soledad Torres. 2014. Numerical approximation of backward stochastic differential equations with jumps. Research report inria-00357992v3. Available online: https://hal.inria.fr/inria-00357992v3/document (accessed on 30 June 2020).
Lepeltier, Jean Pierre, and Jaime San Martín. 2004. Backward SDEs with two barriers and continuous coefficient: An existence result. Journal of Applied Probability 41: 162–75.
Lepeltier, Jean Pierre, and Mingyu Xu. 2007. Reflected backward stochastic differential equations with two RCLL barriers. ESAIM: Probability and Statistics 11: 3–22.
Lin, Yin, and Hailiang Yang. 2014. Discrete-time BSDEs with random terminal horizon. Stochastic Analysis and Applications 32: 110–27.
Mémin, Jean, Shige Peng, and Mingyu Xu. 2008. Convergence of solutions of discrete reflected backward SDEs and simulations. Acta Mathematicae Applicatae Sinica, English Series 24: 1–18.
Øksendal, Bernt, Agnes Sulem, and Tusheng Zhang. 2011. Optimal control of stochastic delay equations and time-advanced backward stochastic differential equations. Advances in Applied Probability 43: 572–96.
Pardoux, Étienne, and Shige Peng. 1990. Adapted solution of a backward stochastic differential equation. Systems and Control Letters 14: 55–61.
Peng, Shige, and Mingyu Xu. 2011. Numerical algorithms for backward stochastic differential equations with 1-d Brownian motion: Convergence and simulations. ESAIM: Mathematical Modelling and Numerical Analysis 45: 335–60.
Peng, Shige, and Xiaoming Xu. 2009. BSDEs with random default time and their applications to default risk. arXiv arXiv:0910.2091.
Peng, Shige, and Zhe Yang. 2009. Anticipated backward stochastic differential equations. Annals of Probability 37: 877–902.
Protter, Philip. 2005. Stochastic Integration and Differential Equations. Berlin/Heidelberg: Springer.
Tang, Shanjian, and Xunjing Li. 1994. Necessary conditions for optimal control of stochastic systems with random jumps. SIAM Journal on Control and Optimization 32: 1447–75.
Wang, Jingnan. 2020. Reflected anticipated backward stochastic differential equations with default risk, numerical algorithms and applications. Doctoral dissertation, University of Kaiserslautern, Kaiserslautern, Germany.
Xu, Mingyu. 2011. Numerical algorithms and simulations for reflected backward stochastic differential equations with two continuous barriers. Journal of Computational and Applied Mathematics 236: 1137–54.

© 2020 by the authors. Licensee MDPI, Basel, Switzerland.
This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

