International Association of Scientific Innovation and Research (IASIR)
(An Association Unifying the Sciences, Engineering, and Applied Research)
ISSN (Print): 2279-0047   ISSN (Online): 2279-0055
International Journal of Emerging Technologies in Computational and Applied Sciences (IJETCAS), www.iasir.net

Minimax Estimation of the Scale Parameter of the Laplace Distribution under Different Loss Functions

Huda A. Rasheed, AL-Mustansiriyah University, College of Science, Department of Mathematics, Baghdad, Iraq
Emad F. AL-Shareefi, Southern Technical University, Basra, Iraq, and Nasiriyah Technical Institute, Iraq
__________________________________________________________________________________________
Abstract: In this paper we obtain Minimax estimators of the scale parameter θ of the Laplace distribution under different loss functions by applying the theorem of Lehmann [1950]. Bayes estimators of the unknown scale parameter θ are derived using a non-informative prior (Jeffreys) and an informative prior (Gumbel Type II) under different loss functions, namely the quadratic loss function, the squared-log error loss function, and the entropy loss function. A Monte Carlo simulation study compares the performance of these estimators by their mean squared errors (MSEs).

Key words: Minimax estimator; Laplace distribution; Bayes estimator; Quadratic loss function; Entropy loss function; Squared-log error loss function; Jeffreys prior; Gumbel Type II prior; Mean squared error.
_________________________________________________________________________________________

I. Introduction
The Laplace distribution has been used for modeling data that have heavier tails than those of the normal distribution, and for analyzing engineering, financial, industrial, environmental, and biological data (Kotz et al., 2001) [5].
The probability density function of a Laplace-distributed random variable is given by [9]

f(x | a, θ) = (1/(2θ)) exp(−|x − a|/θ),  −∞ < x < ∞   (1)

where a ∈ (−∞, ∞) and θ > 0 are the location and scale parameters, respectively. The cumulative distribution function is given by

F(x | a, θ) = 1 − (1/2) exp[(a − x)/θ]  for x ≥ a,
F(x | a, θ) = (1/2) exp[(x − a)/θ]   for x < a,

with moment-generating function M_x(t) = e^{at} / (1 − θ²t²), |t| < 1/θ.
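As a numerical sanity check of (1), the following sketch (with illustrative a and θ, not values from the paper) verifies that the density integrates to one and that E|X − a| = θ, a fact used repeatedly below.

```python
# Sketch (not from the paper): numerical check of the density (1) with
# illustrative a and theta, plus the fact E|X - a| = theta used later on.
import numpy as np
from scipy import integrate

a, theta = 2.0, 1.5  # hypothetical location and scale

def laplace_pdf(x, a, theta):
    # f(x | a, theta) = exp(-|x - a| / theta) / (2 * theta)
    return np.exp(-np.abs(x - a) / theta) / (2.0 * theta)

# Integrate each piece of the piecewise density separately (kink at x = a).
left, _ = integrate.quad(laplace_pdf, -np.inf, a, args=(a, theta))
right, _ = integrate.quad(laplace_pdf, a, np.inf, args=(a, theta))
total = left + right  # should be 1

rng = np.random.default_rng(0)
sample = rng.laplace(loc=a, scale=theta, size=200_000)
mad = np.abs(sample - a).mean()  # mean absolute deviation, close to theta

print(round(total, 6), round(mad, 2))
```

Each half integrates to 1/2, matching the piecewise form of the cumulative distribution function above.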
II. Bayes Estimators

A. Jeffreys Prior Information [9]
Assume that θ has the non-informative Jeffreys prior density, defined by g(θ) ∝ √I(θ), where I(θ) is the Fisher information:

I(θ) = −nE[∂² ln f(x; a, θ) / ∂θ²]   (2)

Hence

g(θ) = k √( −nE[∂² ln f(x; a, θ) / ∂θ²] )   (3)

where k is a constant of proportionality. From (1),

ln f(x; a, θ) = −ln(2θ) − |x − a|/θ

∂ ln f(x; a, θ)/∂θ = −1/θ + |x − a|/θ²

∂² ln f(x; a, θ)/∂θ² = 1/θ² − 2|x − a|/θ³

so that

E[∂² ln f(x; a, θ)/∂θ²] = 1/θ² − (2/θ³) E|x − a|

Let X ~ Laplace(a, θ). Then |xᵢ − a| = θW, where W is standard exponential with mean and variance equal to one [5]. Hence |xᵢ − a| ~ Gamma(1, θ⁻¹) and E|x − a| = θ.
IJETCAS 15-605; © 2015, IJETCAS All Rights Reserved
A. Rasheed et al., International Journal of Emerging Technologies in Computational and Applied Sciences, 14(1), September-November, 2015, pp. 01-09

Then

E[∂² ln f(x; a, θ)/∂θ²] = 1/θ² − 2θ/θ³ = −1/θ²   (4)

After substituting (4) into (3), we get

g(θ) = k √(−n(−1/θ²)) = (k/θ)√n   (5)

so the Jeffreys prior is proportional to 1/θ.
Now, combining the prior (5) with the likelihood function, we have the posterior distribution of θ with Jeffreys prior information, given by h₁(θ|x):

h₁(θ|x) = g(θ)L(θ; x₁, x₂, …, xₙ) / ∫₀^∞ g(θ)L(θ; x₁, x₂, …, xₙ) dθ

h₁(θ|x) = [ (1/θ^{n+1}) exp(−Σᵢ₌₁ⁿ|xᵢ − a| / θ) ] / ∫₀^∞ (1/θ^{n+1}) exp(−Σᵢ₌₁ⁿ|xᵢ − a| / θ) dθ

On simplification, with T = Σᵢ₌₁ⁿ|xᵢ − a|, we have

h₁(θ|x) = (Σᵢ₌₁ⁿ|xᵢ − a|)ⁿ e^{−Σᵢ₌₁ⁿ|xᵢ−a|/θ} / (θ^{n+1} Γ(n)) = Tⁿ e^{−T/θ} / (θ^{n+1} Γ(n))   (6)

The posterior density h₁(θ|x) is recognized as the density of the Inverse Gamma (IG) distribution: (θ|x) ~ IG(n, T).

B. Gumbel Type II Prior Information
Assume that the unknown scale parameter θ has a Gumbel Type II prior with hyper-parameter b, defined as [1]:
g₂(θ) = b (1/θ)² exp(−b/θ),  b, θ > 0   (7)

Now, combining this prior with the likelihood function

L(x₁, x₂, …, xₙ; θ) = (1/(2θ))ⁿ exp(−Σᵢ₌₁ⁿ|xᵢ − a| / θ)

yields the posterior distribution of θ with Gumbel Type II prior information, denoted h₂(θ|x), where

h₂(θ|x) = g₂(θ)L(θ; x₁, …, xₙ) / ∫₀^∞ g₂(θ)L(θ; x₁, …, xₙ) dθ
        = [ (1/θ^{n+2}) e^{−(Σᵢ₌₁ⁿ|xᵢ−a| + b)/θ} ] / ∫₀^∞ (1/θ^{n+2}) e^{−(Σᵢ₌₁ⁿ|xᵢ−a| + b)/θ} dθ   (8)
After simplification, we get

h₂(θ|x) = (Σᵢ₌₁ⁿ|xᵢ−a| + b)^{n+1} e^{−(Σᵢ₌₁ⁿ|xᵢ−a| + b)/θ} / (θ^{n+2} Γ(n+1))   (9)

which implies that (θ|x) ~ Inverse-Gamma(n + 1, Σᵢ₌₁ⁿ|xᵢ−a| + b), where

E(θ|x) = (Σᵢ₌₁ⁿ|xᵢ−a| + b) / n,  Var(θ|x) = (Σᵢ₌₁ⁿ|xᵢ−a| + b)² / (n²(n − 1)),  n > 1.
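The Inverse Gamma form of the posterior (9) and its stated moments can be checked against a standard library parametrization. A minimal sketch, assuming SciPy's `invgamma` with shape n + 1 and scale Σ|xᵢ − a| + b; the data values and the hyper-parameter b are made up for illustration:

```python
# Sketch: check E(theta|x) and Var(theta|x) above against scipy's
# Inverse-Gamma with shape n + 1 and scale sum|x_i - a| + b.
import numpy as np
from scipy import stats

a, b = 0.0, 0.8  # known location and an illustrative hyper-parameter b
x = np.array([1.2, -0.7, 0.4, 2.1, -1.5, 0.9])
n, S = len(x), np.abs(x - a).sum()

post = stats.invgamma(a=n + 1, scale=S + b)

mean_formula = (S + b) / n
var_formula = (S + b) ** 2 / (n ** 2 * (n - 1))

print(np.isclose(post.mean(), mean_formula), np.isclose(post.var(), var_formula))
# prints: True True
```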
III. Bayes Estimators under Different Loss Functions
Bayes estimates of the scale parameter of the Laplace distribution are now obtained under different loss functions, as follows.

A. Bayes Estimator under the Quadratic Loss Function
De Groot (1970) discussed different types of loss function and obtained the Bayes estimates under them. The quadratic loss function is a non-negative, symmetric, and continuous loss function, defined as [9]

L(θ̂, θ) = ((θ − θ̂)/θ)² = (1 − θ̂/θ)²   (10)

Hence the Bayes estimator under the quadratic loss function will be [9]

θ̂_Q = E(1/θ | x) / E(1/θ² | x)   (11)
(i) With Jeffreys prior information: From the posterior density function (6), the Bayes estimator of θ of the Laplace distribution under the quadratic loss function can be obtained as follows. For m = 0, 1, 2, …,

E(1/θ^m | x) = ∫₀^∞ (1/θ^m) h₁(θ|x) dθ = ∫₀^∞ (1/θ^m) · (Σᵢ₌₁ⁿ|xᵢ−a|)ⁿ e^{−Σᵢ₌₁ⁿ|xᵢ−a|/θ} / (θ^{n+1} Γ(n)) dθ

Therefore,

E(1/θ^m | x) = Γ(n + m) / (Γ(n)(Σᵢ₌₁ⁿ|xᵢ−a|)^m),  m = 0, 1, 2, …   (12)
After substituting m = 1 and m = 2 from (12) into (11), we get the Bayes estimator of the parameter θ under the quadratic loss function with Jeffreys prior information:

θ̂_QJ = Σᵢ₌₁ⁿ|xᵢ−a| / (n + 1)   (13)
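As a numerical check of (13) (with illustrative n and T, not values from the paper): under the Jeffreys posterior (6), θ|x ~ Inverse-Gamma(n, T) with T = Σ|xᵢ−a|, and the ratio (11) computed directly from that posterior should reduce to T/(n + 1).

```python
# Sketch: numerical check of (13) with illustrative n and T. Under the
# Jeffreys posterior (6), theta | x ~ Inverse-Gamma(n, T), and the
# ratio (11) should reduce to T / (n + 1).
import numpy as np
from scipy import stats

n, T = 8, 12.5
post = stats.invgamma(a=n, scale=T)

e_inv = post.expect(lambda t: 1.0 / t)        # E(1/theta | x) = n / T
e_inv2 = post.expect(lambda t: 1.0 / t ** 2)  # E(1/theta^2 | x) = n(n+1) / T^2

theta_QJ = e_inv / e_inv2
print(abs(theta_QJ - T / (n + 1)) < 1e-5)
```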
(ii) With Gumbel Type II prior information: Using the posterior density function (9), the Bayes estimator of θ of the Laplace distribution under the quadratic loss function is obtained as

θ̂_QG = E(1/θ | x)_G / E(1/θ² | x)_G   (14)

E(1/θ^m | x)_G = ∫₀^∞ (1/θ^m) h₂(θ|x) dθ = ∫₀^∞ (Σᵢ₌₁ⁿ|xᵢ−a| + b)^{n+1} e^{−(Σᵢ₌₁ⁿ|xᵢ−a| + b)/θ} / (θ^{n+m+2} Γ(n+1)) dθ

Hence

E(1/θ^m | x)_G = Γ(n + m + 1) / (Γ(n + 1)(Σᵢ₌₁ⁿ|xᵢ−a| + b)^m)

After substituting m = 1 and m = 2 into (14), we get the Bayes estimator of the parameter θ under the quadratic loss function with Gumbel Type II prior information:

θ̂_QG = (Σᵢ₌₁ⁿ|xᵢ−a| + b) / (n + 2)   (15)

B. Bayes Estimator under the Squared-Log Error Loss Function
Farsipour and Zakerzadeh (2006) [4] studied the estimation of the generalized variance under an asymmetric loss function named the squared-log error loss function, which is of the form

L(θ̂, θ) = (ln θ̂ − ln θ)² = (ln(θ̂/θ))²   (16)

This loss is balanced, with lim L(θ̂, θ) → ∞ as θ̂ → 0 or ∞. A balanced loss function takes both the error of estimation and goodness of fit into account, whereas an unbalanced loss function considers only the error of estimation. This loss function is convex for θ̂/θ ≤ e and concave otherwise, and its risk function has a unique minimum with respect to θ̂ [2].

According to the above-mentioned loss function, we derive the corresponding Bayes estimator of θ using the risk function R(θ̂, θ), which minimizes the posterior risk:

R(θ̂, θ) = E[L(θ̂, θ)] = ∫₀^∞ (ln θ̂ − ln θ)² h(θ|x) dθ   (17)
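The posterior risk (17) can also be minimized numerically. A quick sketch under the Jeffreys posterior (6), with illustrative n and T: the numeric minimizer should coincide with exp[E(ln θ | x)] = T/e^{ψ(n)}, the closed form obtained below.

```python
# Sketch: minimize the posterior risk (17) numerically under the Jeffreys
# posterior (6) (illustrative n, T) and compare with exp(E(ln theta | x)).
import numpy as np
from scipy import optimize, special, stats

n, T = 6, 9.0
post = stats.invgamma(a=n, scale=T)

def posterior_risk(t_hat):
    # E[(ln t_hat - ln theta)^2 | x]
    return post.expect(lambda th: (np.log(t_hat) - np.log(th)) ** 2)

res = optimize.minimize_scalar(posterior_risk, bounds=(0.1, 10.0), method="bounded")
closed_form = np.exp(np.log(T) - special.digamma(n))  # T / e^psi(n)
print(round(res.x, 3), round(closed_form, 3))
```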
The Bayes estimator for the parameter θ of the Laplace distribution under the squared-log error loss function is [9]

θ̂_SL = exp[E(ln θ | x)]   (18)

(i) With Jeffreys prior information: According to the posterior density function (6), the Bayes estimator of θ of the Laplace distribution under the squared-log error loss function can be obtained as

θ̂_SLJ = exp[E(ln θ | x)_J]   (19)

E(ln θ | x)_J = ∫₀^∞ ln θ · h₁(θ|x) dθ = ∫₀^∞ ln θ · (Σᵢ₌₁ⁿ|xᵢ−a|)ⁿ e^{−Σᵢ₌₁ⁿ|xᵢ−a|/θ} / (θ^{n+1} Γ(n)) dθ

Let y = Σᵢ₌₁ⁿ|xᵢ−a| / θ, which implies θ = Σᵢ₌₁ⁿ|xᵢ−a| / y, |dθ/dy| = |−Σᵢ₌₁ⁿ|xᵢ−a| / y²|, so dθ = (Σᵢ₌₁ⁿ|xᵢ−a| / y²) dy. Hence,
E(ln θ | x)_J = ln(Σᵢ₌₁ⁿ|xᵢ−a|) · (1/Γ(n)) ∫₀^∞ y^{n−1} e^{−y} dy − (1/Γ(n)) ∫₀^∞ ln(y) y^{n−1} e^{−y} dy
             = ln(Σᵢ₌₁ⁿ|xᵢ−a|) − Γ′(n)/Γ(n)

E(ln θ | x)_J = ln(Σᵢ₌₁ⁿ|xᵢ−a|) − ψ(n)   (20)

where ψ(n) = Γ′(n)/Γ(n) is the digamma function [10].

After substituting (20) into (19), we get the Bayes estimator of the parameter θ under the squared-log error loss function with Jeffreys prior information:

θ̂_SLJ = exp[ln(Σᵢ₌₁ⁿ|xᵢ−a|) − ψ(n)] = Σᵢ₌₁ⁿ|xᵢ−a| / e^{ψ(n)}   (21)

(ii) With Gumbel Type II prior information: From the posterior density function (9), the Bayes estimator of θ of the Laplace distribution under the squared-log error loss function is obtained as

θ̂_SLG = exp[E(ln θ | x)_G]   (22)
E(ln θ | x)_G = ∫₀^∞ ln θ · h₂(θ|x) dθ = ∫₀^∞ ln θ · (Σᵢ₌₁ⁿ|xᵢ−a| + b)^{n+1} e^{−(Σᵢ₌₁ⁿ|xᵢ−a| + b)/θ} / (θ^{n+2} Γ(n+1)) dθ

Let y = (Σᵢ₌₁ⁿ|xᵢ−a| + b)/θ, which implies θ = (Σᵢ₌₁ⁿ|xᵢ−a| + b)/y, |dθ/dy| = |−(Σᵢ₌₁ⁿ|xᵢ−a| + b)/y²|, so dθ = ((Σᵢ₌₁ⁿ|xᵢ−a| + b)/y²) dy. Hence

E(ln θ | x)_G = ln(Σᵢ₌₁ⁿ|xᵢ−a| + b) · (1/Γ(n+1)) ∫₀^∞ yⁿ e^{−y} dy − (1/Γ(n+1)) ∫₀^∞ ln(y) yⁿ e^{−y} dy
             = ln(Σᵢ₌₁ⁿ|xᵢ−a| + b) − Γ′(n+1)/Γ(n+1)

E(ln θ | x)_G = ln(Σᵢ₌₁ⁿ|xᵢ−a| + b) − ψ(n + 1)   (23)

Substituting (23) into (22), we get the Bayes estimator of the parameter θ under the squared-log error loss function with Gumbel Type II prior information:

θ̂_SLG = exp[ln(Σᵢ₌₁ⁿ|xᵢ−a| + b) − ψ(n + 1)] = (Σᵢ₌₁ⁿ|xᵢ−a| + b) / e^{ψ(n+1)}   (24)
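A short sketch computing the squared-log error Bayes estimators (21) and (24) on simulated data; a, θ, b, and the sample size are illustrative choices, not the paper's settings.

```python
# Sketch: the squared-log error Bayes estimators (21) and (24) on simulated
# data; a, theta, b and the sample size are illustrative choices.
import numpy as np
from scipy.special import digamma

rng = np.random.default_rng(1)
a, theta, b = 0.0, 2.0, 0.5
x = rng.laplace(loc=a, scale=theta, size=50)
n, S = len(x), np.abs(x - a).sum()

theta_SLJ = S / np.exp(digamma(n))            # Jeffreys prior, eq. (21)
theta_SLG = (S + b) / np.exp(digamma(n + 1))  # Gumbel Type II prior, eq. (24)

print(round(theta_SLJ, 3), round(theta_SLG, 3))
```

Since e^{ψ(n)} ≈ n − 1/2 for moderate n, θ̂_SLJ behaves roughly like S/(n − 0.5), slightly larger than the quadratic-loss estimator S/(n + 1).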
C. Bayes Estimator under the Entropy Loss Function
In many practical situations it appears more realistic to express the loss in terms of the ratio θ̂/θ. In this case the entropy loss function is used, which is defined as [8]

L_EN(θ̂, θ) = z[(θ̂/θ) − ln(θ̂/θ) − 1],  z > 0   (25)

Then the risk function under the entropy loss function, denoted R_EN(θ̂, θ), is

R_EN(θ̂, θ) = E[L_EN(θ̂, θ)] = ∫₀^∞ L_EN(θ̂, θ) h(θ|x) dθ
           = ∫₀^∞ z[(θ̂/θ) − ln θ̂ + ln θ − 1] h(θ|x) dθ
           = z θ̂ E(1/θ | x) − z ln θ̂ + z E(ln θ | x) − z

∂R_EN(θ̂, θ)/∂θ̂ = z E(1/θ | x) − z/θ̂
Let ∂R_EN(θ̂, θ)/∂θ̂ = 0; then

1/θ̂_EN = E(1/θ | x)

θ̂_EN = [E(1/θ | x)]^{−1}   (26)

provided that E(1/θ | x) exists and is finite.

(i) With Jeffreys prior information: From the posterior density function (6), the Bayes estimator of θ under the entropy loss function with Jeffreys prior information is derived as follows:

θ̂_ENJ = [E(1/θ | x)_J]^{−1}
E(1/θ | x)_J = ∫₀^∞ (1/θ) h₁(θ|x) dθ

Putting m = 1 in (12), we get

E(1/θ | x)_J = n / Σᵢ₌₁ⁿ|xᵢ−a|   (27)

Therefore,

θ̂_ENJ = Σᵢ₌₁ⁿ|xᵢ−a| / n   (28)

(ii) With Gumbel Type II prior information: According to the posterior density function (9), the Bayes estimator of θ under the entropy loss function with Gumbel Type II prior information is derived as follows:
θ̂_ENG = [E(1/θ | x)_G]^{−1}   (29)

Recall that

E(1/θ | x)_G = (n + 1) / (Σᵢ₌₁ⁿ|xᵢ−a| + b)   (30)

Therefore, the Bayes estimator of θ under the entropy loss function with Gumbel Type II prior information is

θ̂_ENG = (Σᵢ₌₁ⁿ|xᵢ−a| + b) / (n + 1)   (31)
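The Monte Carlo comparison described in the abstract can be sketched as follows: empirical MSEs of the six Bayes estimators (13), (15), (21), (24), (28), (31). The true θ, hyper-parameter b, sample size, and replication count are illustrative, not the paper's settings.

```python
# Sketch of the Monte Carlo comparison described in the abstract: empirical
# MSEs of the six Bayes estimators; theta, b, n, reps are illustrative.
import numpy as np
from scipy.special import digamma

rng = np.random.default_rng(2)
a, theta, b, n, reps = 0.0, 1.5, 0.6, 30, 2000

est = {k: [] for k in ("QJ", "QG", "SLJ", "SLG", "ENJ", "ENG")}
for _ in range(reps):
    x = rng.laplace(loc=a, scale=theta, size=n)
    S = np.abs(x - a).sum()
    est["QJ"].append(S / (n + 1))                       # (13)
    est["QG"].append((S + b) / (n + 2))                 # (15)
    est["SLJ"].append(S / np.exp(digamma(n)))           # (21)
    est["SLG"].append((S + b) / np.exp(digamma(n + 1))) # (24)
    est["ENJ"].append(S / n)                            # (28)
    est["ENG"].append((S + b) / (n + 1))                # (31)

mse = {k: float(np.mean((np.array(v) - theta) ** 2)) for k, v in est.items()}
for k in sorted(mse, key=mse.get):
    print(k, round(mse[k], 4))
```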
IV. Minimax Estimators
The derivation of minimax estimators depends primarily on a theorem due to Lehmann, which can be stated as follows.

Lehmann's Theorem [3]: Let τ = {F_θ; θ ∈ Θ} be a family of distribution functions and D a class of estimators of θ. Suppose that d* ∈ D is a Bayes estimator against a prior distribution ξ*(θ) on the parameter space Θ, and that the risk function R(d*, θ) is constant on Θ; then d* is a minimax estimator of θ. The main results are contained in the following theorems.

(A) Definition [7]: An estimator t* is defined to be a minimax estimator if and only if

sup_θ R_{t*}(θ) ≤ sup_θ R_t(θ)  for every estimator t.

(B) Theorem: Let X = (X₁, X₂, …, Xₙ) be n independent and identically distributed random variables drawn from the density (1). Then

θ̂_MQJ = Σᵢ₌₁ⁿ|xᵢ−a| / (n + 1)

is the minimax estimator of the scale parameter θ of the Laplace distribution under the quadratic loss function (10) [9].

Proof: To prove Theorem (B), it is sufficient to show that the risk of θ̂_MQJ = Σᵢ₌₁ⁿ|xᵢ−a|/(n + 1) under the quadratic loss function (10) is constant. Since

R_{θ̂_QJ}(θ) = E[L(θ̂_QJ, θ)]
= đ??¸ [( =
1 đ?&#x153;&#x192;2
Ě&#x201A; đ?&#x2018;&#x201E;đ??˝ 2 đ?&#x153;&#x192;â&#x2C6;&#x2019;đ?&#x153;&#x192; đ?&#x153;&#x192;
) ]
đ??¸ [đ?&#x153;&#x192; 2 â&#x2C6;&#x2019;
2đ?&#x153;&#x192; đ?&#x2018;&#x203A;+1
1
đ??¸(â&#x2C6;&#x2018;đ?&#x2018;&#x203A;đ?&#x2018;&#x2013;=1|đ?&#x2018;Ľđ?&#x2018;&#x2013; â&#x2C6;&#x2019; đ?&#x2018;&#x17D;|) + (đ?&#x2018;&#x203A;+1)2 đ??¸(â&#x2C6;&#x2018;đ?&#x2018;&#x203A;đ?&#x2018;&#x2013;=1|đ?&#x2018;Ľđ?&#x2018;&#x2013; â&#x2C6;&#x2019; đ?&#x2018;&#x17D;|)2 ]
(32)
â&#x2C6;&#x2019;1 )
Recall that, |đ?&#x2018;Ľđ?&#x2018;&#x2013; â&#x2C6;&#x2019; đ?&#x2018;&#x17D;|~đ??şđ?&#x2018;&#x17D;đ?&#x2018;&#x161;đ?&#x2018;&#x161;đ?&#x2018;&#x17D;(1, đ?&#x153;&#x192; Hence, đ?&#x2018;&#x2021; = â&#x2C6;&#x2018;đ?&#x2018;&#x203A;đ?&#x2018;&#x2013;=1|đ?&#x2018;Ľđ?&#x2018;&#x2013; â&#x2C6;&#x2019; đ?&#x2018;&#x17D;|~đ??şđ?&#x2018;&#x17D;đ?&#x2018;&#x161;đ?&#x2018;&#x161;đ?&#x2018;&#x17D;(đ?&#x2018;&#x203A;, đ?&#x153;&#x192; â&#x2C6;&#x2019;1 ) With đ??¸(đ?&#x2018;&#x2021;) = đ?&#x2018;&#x203A;đ?&#x153;&#x192; , đ?&#x2018;&#x2030;đ?&#x2018;&#x17D;đ?&#x2018;&#x;(đ?&#x2018;&#x2021;) = đ?&#x2018;&#x203A;đ?&#x153;&#x192; 2 Then, substituting đ??¸(đ?&#x2018;&#x2021;) đ?&#x2018;&#x17D;đ?&#x2018;&#x203A;đ?&#x2018;&#x2018; đ??¸(đ?&#x2018;&#x2021; 2 ) into (35), we get 1 2đ?&#x153;&#x192; đ?&#x2018;&#x203A;đ?&#x153;&#x192; 2 (đ?&#x2018;&#x203A; + 1) (đ?&#x2018;&#x203A;đ?&#x153;&#x192;) + đ?&#x2018;&#x2026;θĚ&#x201A;đ?&#x2018;&#x201E;đ??˝ (đ?&#x153;&#x192;) = 2 [đ?&#x153;&#x192; 2 â&#x2C6;&#x2019; ] (đ?&#x2018;&#x203A; + 1)2 đ?&#x153;&#x192; đ?&#x2018;&#x203A;+1 2đ?&#x2018;&#x203A; đ?&#x2018;&#x203A; = 1 â&#x2C6;&#x2019; (đ?&#x2018;&#x203A;+1) + , which is a constant (đ?&#x2018;&#x203A;+1)
So, according to Lehmann's theorem, it follows that

θ̂_MQJ = ∑_{i=1}^n |x_i − a| / (n + 1)

is the minimax estimator of the unknown parameter θ under the Quadratic loss function.

C. Theorem
Let X = (X₁, X₂, …, X_n) be n independently and identically distributed random variables drawn from the density (1). Then

θ̂_SLJ = ∑_{i=1}^n |x_i − a| / e^{ψ(n)}

is the minimax estimator of the scale parameter θ of the Laplace distribution under the Squared-log error loss function (22), where θ̂_SLJ is the estimate of θ.

Proof:
In order to prove Theorem (4.3), it is sufficient to show that θ̂_SLJ = ∑_{i=1}^n |x_i − a| / e^{ψ(n)} is the minimax
estimator of θ for the Squared-log error loss function (16). If we can show that the risk of θ̂_SLJ is a constant, then Theorem (4.3) is proved. Since

R_{θ̂_SLJ}(θ) = E[L(θ|θ̂_SLJ)]
= ∫₀^∞ (ln θ̂_SLJ − ln θ)² h₁(θ|x) dθ
= E[(ln θ̂_SLJ)²] − 2(ln θ) E(ln θ̂_SLJ) + (ln θ)²   (33)

Now,

E[ln θ̂_SLJ] = E[ln(∑_{i=1}^n |x_i − a| / e^{ψ(n)})]
= E[ln(∑_{i=1}^n |x_i − a|)] − ψ(n)   (34)

From the conclusion above, T = ∑_{i=1}^n |x_i − a| ~ Γ(n, θ⁻¹). Thus we have

E[ln(∑_{i=1}^n |x_i − a|)] = ψ(n) + ln θ   and   Var[ln(∑_{i=1}^n |x_i − a|)] = ψ′(n),

where ψ(·) and ψ′(·) denote the digamma and trigamma functions. After substituting in (34), this yields

E[ln θ̂_SLJ] = ψ(n) + ln θ − ψ(n) = ln θ   (35)
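The two log-moment identities above are the standard digamma/trigamma moments of a Gamma variable; they can be verified numerically (a sketch, not from the paper), here with n = 10 and θ = 3:

```python
import numpy as np
from scipy.special import digamma, polygamma

rng = np.random.default_rng(1)
n, theta = 10, 3.0

# T = sum of n i.i.d. Exponential(theta) absolute deviations ~ Gamma(n, scale=theta)
T = rng.gamma(shape=n, scale=theta, size=500_000)
log_T = np.log(T)

print(log_T.mean(), digamma(n) + np.log(theta))  # E[ln T] = psi(n) + ln(theta)
print(log_T.var(), polygamma(1, n))              # Var[ln T] = psi'(n) (trigamma)
```

Each printed pair agrees to within Monte-Carlo error.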
Similarly,

E[(ln θ̂_SLJ)²] = E[ln(∑_{i=1}^n |x_i − a| / e^{ψ(n)})]²
= E[ln(∑_{i=1}^n |x_i − a|) − ψ(n)]²
= E(ln(∑_{i=1}^n |x_i − a|))² − 2ψ(n) E(ln(∑_{i=1}^n |x_i − a|)) + ψ²(n)   (36)

Since Var[ln(∑_{i=1}^n |x_i − a|)] = E(ln(∑_{i=1}^n |x_i − a|))² − [E(ln(∑_{i=1}^n |x_i − a|))]², then

E(ln(∑_{i=1}^n |x_i − a|))² = Var[ln(∑_{i=1}^n |x_i − a|)] + [E(ln(∑_{i=1}^n |x_i − a|))]²
= ψ′(n) + (ψ(n) + ln θ)²
= ψ′(n) + ψ²(n) + 2ψ(n) ln θ + (ln θ)²   (37)

Substituting (37) and (35) into (36), we get

E[(ln θ̂_SLJ)²] = ψ′(n) + ψ²(n) + 2ψ(n) ln θ + (ln θ)² − 2ψ(n)(ψ(n) + ln θ) + ψ²(n)
= ψ′(n) + 2ψ²(n) + 2ψ(n) ln θ + (ln θ)² − 2ψ²(n) − 2ψ(n) ln θ
= ψ′(n) + (ln θ)²   (38)

Substituting (35) and (38) into (33), we get
R_{θ̂_SLJ}(θ) = ψ′(n) + (ln θ)² − 2(ln θ)(ln θ) + (ln θ)²
= ψ′(n) + (ln θ)² − 2(ln θ)² + (ln θ)² = ψ′(n)   (43)

Then R_{θ̂_SLJ}(θ) is a constant. So, according to Lehmann's theorem, it follows that

θ̂_SLJ = ∑_{i=1}^n |x_i − a| / e^{ψ(n)}

is the minimax estimator of the parameter θ of the Laplace distribution under the Squared-log error loss function.

D. Theorem
Let X = (x₁, x₂, …, x_n) be independently and identically distributed random variables drawn from the density (1). Then

θ̂_ENJ = ∑_{i=1}^n |x_i − a| / n

is the minimax estimator of the scale parameter θ of the Laplace distribution under the Entropy loss function:

L(θ̂, θ) = z[(θ̂/θ) − ln(θ̂/θ) − 1],   z > 0,
where θ is the parameter to be estimated and θ̂ is the estimator of θ.

Proof:
In order to prove Theorem (4.4), it is sufficient to show that θ̂_ENJ = ∑_{i=1}^n |x_i − a| / n is the minimax estimator of θ for the Entropy loss function (25). If we can show that the risk of θ̂_ENJ is a constant, then Theorem (4.4) is proved. Since

R_{θ̂_ENJ}(θ) = E[L(θ̂_ENJ, θ)]
= ∫₀^∞ z[(θ̂/θ) − ln(θ̂/θ) − 1] h₁(θ|x) dθ
= z{E[∑_{i=1}^n |x_i − a| / (nθ)] − E[ln(∑_{i=1}^n |x_i − a|)] + ln(nθ) − 1}

Since E[∑_{i=1}^n |x_i − a|] = nθ and E[ln(∑_{i=1}^n |x_i − a|)] = ψ(n) + ln θ, then

R_{θ̂_ENJ}(θ) = z[1 − ψ(n) − ln θ + ln(n) + ln θ − 1]
= z[ln(n) − ψ(n)]   (44)

which is a constant. So, according to Lehmann's theorem, it follows that

θ̂_ENJ = ∑_{i=1}^n |x_i − a| / n

is the minimax estimator of the parameter θ of the Laplace distribution under the Entropy loss function.

V. Simulation Results
In this section, a Monte-Carlo simulation study is performed to compare the methods of estimation, using the mean squared error (MSE) as an index of precision for each estimator, where

MSE(θ̂) = ∑_{i=1}^I (θ̂_i − θ)² / I

In this simulation study, we generated I = 5000 samples of size n = 5, 10, 20, 50 and 100 from the Laplace distribution, representing small, moderate and large sample sizes, with θ = 0.5 and 3. The values chosen for the hyper-parameter b of the Gumbel Type II prior are b = 0.9, 2 and 5. The results are summarized in Tables (1-6) for each estimator and for all sample sizes.

VI. Discussion
Based on the results of estimating the scale parameter of the Laplace distribution, the following conclusions can be reached:
1. In general, the Bayes estimators under the different loss functions perform better with the Gumbel Type II prior than with the Jeffreys prior.
2. The MSE's of the Bayes estimators under the Gumbel Type II informative prior decrease when the Gumbel prior parameter b is chosen close to twice the value of the scale parameter θ.
3. In most cases, the Bayes estimator under the Quadratic loss function with the Gumbel prior (QG) is the best estimator of θ.

Finally, the best estimator of the scale parameter θ of the Laplace distribution is the Bayes minimax estimator under the Quadratic loss function with the Gumbel prior, with the hyper-parameter b chosen close to twice the value of θ, in comparison with the other minimax estimators under study.
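The simulation study described above can be sketched in code (my sketch, not the authors' program). It covers only the three Jeffreys-prior minimax estimators, whose closed forms were derived above; the Gumbel Type II Bayes estimators would additionally require the posterior expressions from the earlier sections. `simulate_mse` is a hypothetical helper name, and the known location parameter is taken as a = 0:

```python
import numpy as np
from scipy.special import digamma

rng = np.random.default_rng(2)

def simulate_mse(theta, n, a=0.0, I=5000):
    """MSE of the three Jeffreys-based minimax estimators over I Laplace samples."""
    x = rng.laplace(loc=a, scale=theta, size=(I, n))   # Laplace(a, theta) data
    T = np.abs(x - a).sum(axis=1)                      # sum of absolute deviations
    estimates = {
        "QJ":  T / (n + 1),             # quadratic loss
        "SLJ": T / np.exp(digamma(n)),  # squared-log error loss
        "ENJ": T / n,                   # entropy loss
    }
    return {name: np.mean((est - theta) ** 2) for name, est in estimates.items()}

# Mirrors the paper's setting theta = 0.5 with n = 5, 10, 20, 50, 100.
for n in (5, 10, 20, 50, 100):
    print(n, simulate_mse(theta=0.5, n=n))
```

For θ = 0.5 the resulting MSE values track the QJ, SLJ and ENJ columns of Table 1 up to Monte-Carlo noise.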
References
[1] Ali, S.; Aslam, M.; Abbas, N. and Ali Kazmi, S.M. (2012). "Scale Parameter Estimation of the Laplace Model Using Different Asymmetric Loss Functions", International Journal of Statistics and Probability, Vol. 1, No. 1, pp. 105-127.
[2] Dey, S. (2010). "Bayesian Estimation of the Parameter of Generalized Exponential Distribution under Different Loss Functions", Pakistan Journal of Statistics and Operation Research, Vol. 6, No. 2, pp. 163-174.
[3] Dey, S. (2008). "Minimax Estimation of the Parameter of the Rayleigh Distribution under Quadratic Loss Function", Data Science Journal, Vol. 7, pp. 23-30.
[4] Farsipour, N. and Zakerzadeh, H. (2006). "Estimation of Generalized Variance under an Asymmetric Loss Function 'Squared Log Error'", Publishing Technology, Vol. 35, No. 4, pp. 571-581.
[5] Kotz, S., et al. (2001). The Laplace Distribution and Generalizations: A Revisit with New Applications.
[6] Li, L. (2014). "Minimax Estimation of the Parameter of Exponential Distribution Based on Record Values", Vol. 3, pp. 47-53.
[7] Mood, A.M., Graybill, F.A. and Boes, D.C. (1974). Introduction to the Theory of Statistics, 3rd edition, McGraw-Hill, p. 350.
[8] Rasheed, H. et al. (2013). "Using Entropy Loss Function to Estimate the Scale Parameter for Laplace Distribution", Al-Mustansiriyah Journal of Science, Vol. 24, No. 5, pp. 253-258.
[9] Rasheed, H.A. and AL-Shareefi, E.F. (2015). "Minimax Estimation of the Scale Parameter of Laplace Distribution under Squared-Log Error Loss Function", Mathematical Theory and Modeling, Vol. 5, No. 1, pp. 183-193.
[10] Yarmohammadi, M. and Pazira, H. (2010). "Minimax Estimation of the Parameter of Burr Type XII Distribution", Australian Journal of Basic and Applied Sciences, Vol. 12, No. 4, pp. 6611-6622.
Table (1): Estimated value and MSE of different estimates of the scale parameter of the Laplace distribution when θ = 0.5 and b = 0.9

| Estimator | Criteria | n = 5 | n = 10 | n = 20 | n = 50 | n = 100 |
|-----------|----------|---------|---------|---------|---------|---------|
| QJ | EXP | 0.41807 | 0.45551 | 0.47807 | 0.49019 | 0.49553 |
| QJ | MSE | 0.04081 | 0.02254 | 0.01205 | 0.00490 | 0.00247 |
| QG | EXP | 0.48691 | 0.49255 | 0.49725 | 0.49807 | 0.49949 |
| QG | MSE | 0.02522 | 0.01733 | 0.01055 | 0.00462 | 0.00240 |
| SLJ | EXP | 0.55630 | 0.52721 | 0.51482 | 0.50514 | 0.50318 |
| SLJ | MSE | 0.06354 | 0.02828 | 0.01363 | 0.00512 | 0.00253 |
| SLG | EXP | 0.61887 | 0.56272 | 0.53362 | 0.51295 | 0.50712 |
| SLG | MSE | 0.05460 | 0.02649 | 0.01327 | 0.00507 | 0.00252 |
| ENJ | EXP | 0.50168 | 0.50106 | 0.50197 | 0.50000 | 0.50048 |
| ENJ | MSE | 0.04910 | 0.02488 | 0.01275 | 0.00499 | 0.00249 |
| ENG | EXP | 0.56807 | 0.53733 | 0.52092 | 0.50784 | 0.50444 |
| ENG | MSE | 0.03873 | 0.02195 | 0.01200 | 0.00486 | 0.00247 |
| Best Estimator | | QG | QG | QG | QG | QG |
Table (2): Estimated value and MSE of different estimates of the scale parameter of the Laplace distribution when θ = 0.5 and b = 2

| Estimator | Criteria | n = 5 | n = 10 | n = 20 | n = 50 | n = 100 |
|-----------|----------|---------|---------|---------|---------|---------|
| QJ | EXP | 0.41807 | 0.45551 | 0.47807 | 0.49019 | 0.49553 |
| QJ | MSE | 0.04081 | 0.02254 | 0.01205 | 0.00490 | 0.00247 |
| QG | EXP | 0.64406 | 0.58422 | 0.54725 | 0.51923 | 0.51028 |
| QG | MSE | 0.04580 | 0.02437 | 0.01277 | 0.00499 | 0.00250 |
| SLJ | EXP | 0.55630 | 0.52721 | 0.51482 | 0.50514 | 0.50318 |
| SLJ | MSE | 0.06354 | 0.02828 | 0.01363 | 0.00512 | 0.00253 |
| SLG | EXP | 0.81860 | 0.66745 | 0.58728 | 0.53473 | 0.51807 |
| SLG | MSE | 0.14198 | 0.05059 | 0.01975 | 0.00610 | 0.00280 |
| ENJ | EXP | 0.50168 | 0.50106 | 0.50197 | 0.50000 | 0.50048 |
| ENJ | MSE | 0.04910 | 0.02488 | 0.01275 | 0.00499 | 0.00249 |
| ENG | EXP | 0.75140 | 0.63733 | 0.57331 | 0.52941 | 0.51533 |
| ENG | MSE | 0.09730 | 0.03942 | 0.01694 | 0.00567 | 0.00268 |
| Best Estimator | | QJ | QJ | QJ | QJ | QJ |
Table (3): Estimated value and MSE of different estimates of the scale parameter of the Laplace distribution when θ = 0.5 and b = 5

| Estimator | Criteria | n = 5 | n = 10 | n = 20 | n = 50 | n = 100 |
|-----------|----------|----------|----------|----------|----------|----------|
| QJ | EXP | 0.418068 | 0.455509 | 0.478068 | 0.490193 | 0.495528 |
| QJ | MSE | 0.040811 | 0.022541 | 0.012047 | 0.004897 | 0.002465 |
| QG | EXP | 1.072630 | 0.834215 | 0.683610 | 0.576920 | 0.539689 |
| QG | MSE | 0.352956 | 0.128978 | 0.044251 | 0.010535 | 0.003973 |
| SLJ | EXP | 0.556297 | 0.527207 | 0.514822 | 0.505137 | 0.503179 |
| SLJ | MSE | 0.063544 | 0.028284 | 0.013632 | 0.005124 | 0.002532 |
| SLG | EXP | 1.363324 | 0.953068 | 0.733613 | 0.594149 | 0.547929 |
| SLG | MSE | 0.785799 | 0.227820 | 0.066711 | 0.013762 | 0.004769 |
| ENJ | EXP | 0.501681 | 0.501060 | 0.501971 | 0.499997 | 0.500484 |
| ENJ | MSE | 0.049104 | 0.024880 | 0.012755 | 0.004995 | 0.002495 |
| ENG | EXP | 1.251401 | 0.910055 | 0.716162 | 0.588232 | 0.545034 |
| ENG | MSE | 0.598702 | 0.188706 | 0.058292 | 0.012586 | 0.004473 |
| Best Estimator | | QJ | QJ | QJ | QJ | QJ |
Table (4): Estimated value and MSE of different estimates of the scale parameter of the Laplace distribution when θ = 3 and b = 0.9

| Estimator | Criteria | n = 5 | n = 10 | n = 20 | n = 50 | n = 100 |
|-----------|----------|---------|---------|----------|---------|---------|
| QJ | EXP | 2.50841 | 2.73305 | 2.868405 | 2.94116 | 2.97318 |
| QJ | MSE | 1.46920 | 0.81146 | 0.433674 | 0.17629 | 0.08876 |
| QG | EXP | 2.27863 | 2.58029 | 2.778936 | 2.90191 | 2.95284 |
| QG | MSE | 1.42223 | 0.79812 | 0.428237 | 0.17587 | 0.08854 |
| SLJ | EXP | 3.33779 | 3.16324 | 3.088928 | 3.03082 | 3.01908 |
| SLJ | MSE | 2.28757 | 1.01821 | 0.490744 | 0.18447 | 0.09114 |
| SLG | EXP | 2.89616 | 2.94792 | 2.982201 | 2.98857 | 2.99793 |
| SLG | MSE | 1.46772 | 0.81454 | 0.437212 | 0.17645 | 0.08898 |
| ENJ | EXP | 3.01009 | 3.00636 | 3.01183 | 2.99998 | 3.00290 |
| ENJ | MSE | 1.76775 | 0.89569 | 0.459174 | 0.17981 | 0.08981 |
| ENG | EXP | 2.65840 | 2.81487 | 2.911269 | 2.95881 | 2.98209 |
| ENG | MSE | 1.34422 | 0.77448 | 0.42423 | 0.17452 | 0.08836 |
| Best Estimator | | ENG | ENG | ENG | ENG | ENG |
Table (5): Estimated value and MSE of different estimates of the scale parameter of the Laplace distribution when θ = 3 and b = 2

| Estimator | Criteria | n = 5 | n = 10 | n = 20 | n = 50 | n = 100 |
|-----------|----------|---------|---------|---------|---------|---------|
| QJ | EXP | 2.50841 | 2.73305 | 2.86841 | 2.94116 | 2.97318 |
| QJ | MSE | 1.46920 | 0.81146 | 0.43367 | 0.17629 | 0.08876 |
| QG | EXP | 2.43578 | 2.67197 | 2.82893 | 2.92305 | 2.97343 |
| QG | MSE | 1.22021 | 0.72958 | 0.40863 | 0.17216 | 0.08702 |
| SLJ | EXP | 3.33779 | 3.16324 | 3.08893 | 3.03082 | 3.01908 |
| SLJ | MSE | 2.28757 | 1.01821 | 0.49074 | 0.18447 | 0.09114 |
| SLG | EXP | 3.09590 | 3.05264 | 3.03585 | 3.01035 | 3.01882 |
| SLG | MSE | 1.46613 | 0.81460 | 0.43818 | 0.17643 | 0.08933 |
| ENJ | EXP | 3.01009 | 3.00636 | 3.01183 | 2.99998 | 3.00290 |
| ENJ | MSE | 1.76775 | 0.89569 | 0.45917 | 0.17981 | 0.08981 |
| ENG | EXP | 2.84174 | 2.91487 | 2.96364 | 2.98038 | 3.00287 |
| ENG | MSE | 1.25258 | 0.74745 | 0.41768 | 0.17321 | 0.08804 |
| Best Estimator | | QG | QG | QG | QG | QG |
Table (6): Estimated value and MSE of different estimates of the scale parameter of the Laplace distribution when θ = 3 and b = 5

| Estimator | Criteria | n = 5 | n = 10 | n = 20 | n = 50 | n = 100 |
|-----------|----------|---------|---------|---------|---------|---------|
| QJ | EXP | 2.50841 | 2.73305 | 2.86841 | 2.94116 | 2.97318 |
| QJ | MSE | 1.46920 | 0.81146 | 0.43367 | 0.17629 | 0.08876 |
| QG | EXP | 2.86435 | 2.92197 | 2.96530 | 2.98075 | 2.99304 |
| QG | MSE | 0.92027 | 0.62807 | 0.38057 | 0.16661 | 0.08637 |
| SLJ | EXP | 3.33779 | 3.16324 | 3.08893 | 3.03082 | 3.01908 |
| SLJ | MSE | 2.28757 | 1.01821 | 0.49074 | 0.18447 | 0.09114 |
| SLG | EXP | 3.64062 | 3.33826 | 3.18220 | 3.06977 | 3.03874 |
| SLG | MSE | 1.86733 | 0.92625 | 0.47009 | 0.18119 | 0.09047 |
| ENJ | EXP | 3.01009 | 3.00636 | 3.01183 | 2.99998 | 3.00290 |
| ENJ | MSE | 1.76775 | 0.89569 | 0.45917 | 0.17981 | 0.08981 |
| ENG | EXP | 3.34174 | 3.18760 | 3.10651 | 3.03920 | 3.02267 |
| ENG | MSE | 1.34432 | 0.77540 | 0.42770 | 0.17436 | 0.08855 |
| Best Estimator | | QG | QG | QG | QG | QG |