IMPROVED RATIO TYPE ESTIMATOR USING TWO AUXILIARY VARIABLES UNDER SECOND ORDER APPROXIMATION


DOI: 10.15415/mjis.2014.22014

Prayas Sharma* and Rajesh Singh†
Department of Statistics, Banaras Hindu University, Varanasi-221005, India
*Email: Prayassharma02@gmail.com; †Email: rsinghstat@gmail.com

Abstract: In this paper we propose a new ratio type estimator using information on two auxiliary variables under simple random sampling without replacement (SRSWOR). In terms of second order mean square error, the proposed estimator is found to be more efficient than the estimators of Olkin (1958), Singh (1965), Lu et al. (2010) and Singh and Kumar (2012).

Keywords: simple random sampling, population mean, study variable, auxiliary variable, ratio type estimator, product estimator, bias, MSE.

1. Introduction

In sample surveys, the use of auxiliary information can considerably reduce the MSE of a ratio type estimator. Therefore, many authors have suggested estimators that use known population parameters of an auxiliary variable. Hartley and Ross (1954), Quenouille (1956) and Olkin (1958) considered the problem of estimating the mean of a survey variable when auxiliary variables are available. Jhajj and Srivastava (1983), Singh et al. (1995), Upadhyaya and Singh (1999), Singh and Tailor (2003), Kadilar and Cingi (2006), Khoshnevisan et al. (2007), Singh et al. (2007) and Singh and Kumar (2011), among others, suggested estimators in simple random sampling using an auxiliary variable. Moreover, for the case of two auxiliary variables, Singh (1965, 1967) and Perri (2007) suggested some ratio-cum-product type estimators. Most of these authors discussed the properties of their estimators along with the first order bias and MSE. Hossain et al. (2006) and Sharma et al. (2013a, b) studied the properties of some estimators under second order approximation. In this paper, we suggest an estimator using auxiliary information in simple random sampling and compare it with some existing estimators under the second order of approximation when information on two auxiliary variables is available.

Let U = (U1, U2, U3, ..., Ui, ..., UN) denote a finite population of N distinct and identifiable units. For estimating the population mean Ȳ of a study variable


Mathematical Journal of Interdisciplinary Sciences Vol. 2, No. 2, March 2014 pp. 179–190

©2014 by Chitkara University. All Rights Reserved.



Y, let X and Z be two auxiliary variables correlated with the study variable Y. Let a sample of size n be drawn from this population using simple random sampling without replacement (SRSWOR), and let yi, xi and zi (i = 1, 2, ..., n) be the values of the study variable and the auxiliary variables for the i-th unit of the sample. When information on two auxiliary variables is available, Singh (1965, 1967) proposed some ratio-cum-product estimators in SRSWOR to estimate the population mean Ȳ of the study variable y. A generalized version of one of these estimators is given by

t1 = ȳ (X̄/x̄)^{α1} (Z̄/z̄)^{α2}    (1.1)

where α1 and α2 are suitably chosen scalars such that the mean square error of t1 is minimum,

and ȳ = (1/n) Σ_{i=1}^{n} yi,  x̄ = (1/n) Σ_{i=1}^{n} xi,  z̄ = (1/n) Σ_{i=1}^{n} zi.
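As a concrete illustration of (1.1), the following sketch evaluates t1 on a small sample. All data values, the population means X̄ and Z̄, and the choice α1 = α2 = 1 are hypothetical, invented here for illustration; they are not the paper's data.

```python
# Hypothetical illustration of the ratio-cum-product type estimator (1.1):
# t1 = ybar * (Xbar / xbar)**a1 * (Zbar / zbar)**a2.

def sample_means(y, x, z):
    """Unweighted sample means (ybar, xbar, zbar) of an SRSWOR sample."""
    n = len(y)
    return sum(y) / n, sum(x) / n, sum(z) / n

def t1_estimate(ybar, xbar, zbar, Xbar, Zbar, a1, a2):
    """Estimator t1 of the population mean, given known Xbar and Zbar."""
    return ybar * (Xbar / xbar) ** a1 * (Zbar / zbar) ** a2

# Made-up sample of n = 5 units and assumed known auxiliary population means.
y = [10.2, 11.5, 9.8, 12.0, 10.9]
x = [20.1, 22.3, 19.5, 23.0, 21.2]
z = [5.1, 5.6, 4.9, 5.8, 5.3]
ybar, xbar, zbar = sample_means(y, x, z)
Xbar, Zbar = 21.5, 5.4

est = t1_estimate(ybar, xbar, zbar, Xbar, Zbar, a1=1.0, a2=1.0)
print(est)
```

With α1 = α2 = 0 the estimator reduces to the plain sample mean ȳ, which gives a quick sanity check.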

Olkin (1958) proposed an estimator t2 as

t2 = ȳ [ λ1 (X̄/x̄) + λ2 (Z̄/z̄) ]    (1.2)

where λ1 and λ2 are weights that satisfy the condition λ1 + λ2 = 1.

Lu et al. (2010) proposed a multivariate ratio estimator using information on two auxiliary variables as

t3 = ȳ [ (w1 X̄1 + w2 X̄2) / (w1 x̄1 + w2 x̄2) ]^α    (1.3)

where w1 and w2 are weights that satisfy the condition w1 + w2 = 1.

Singh and Kumar (2012) proposed an exponential ratio-cum-product estimator. A generalized form of this estimator is given by

t4 = ȳ exp[ β1 (X̄ − x̄)/(X̄ + x̄) ] exp[ β2 (Z̄ − z̄)/(Z̄ + z̄) ]    (1.4)

where β1 and β2 are constants such that the MSE of the estimator t4 is minimized.

Theorem 1.1. Let e0 = (ȳ − Ȳ)/Ȳ, e1 = (x̄ − X̄)/X̄ and e2 = (z̄ − Z̄)/Z̄. Then E(e0) = E(e1) = E(e2) = 0, and the variances and covariances are as follows:




(i) V(e0) = E(e0²) = L1 C200 / Ȳ² = V200
(ii) V(e1) = E(e1²) = L1 C020 / X̄² = V020
(iii) V(e2) = E(e2²) = L1 C002 / Z̄² = V002
(iv) Cov(e0, e1) = E(e0 e1) = L1 C110 / (X̄ Ȳ) = V110
(v) Cov(e1, e2) = E(e1 e2) = L1 C011 / (X̄ Z̄) = V011
(vi) Cov(e0, e2) = E(e0 e2) = L1 C101 / (Ȳ Z̄) = V101
(vii) E(e0² e1) = L2 C210 / (Ȳ² X̄) = V210
(viii) E(e0² e2) = L2 C201 / (Ȳ² Z̄) = V201
(ix) E(e1² e2) = L2 C021 / (X̄² Z̄) = V021
(x) E(e0 e1²) = L2 C120 / (Ȳ X̄²) = V120
(xi) E(e1 e2²) = L2 C012 / (X̄ Z̄²) = V012
(xii) E(e0 e2²) = L2 C102 / (Ȳ Z̄²) = V102
(xiii) E(e1³) = L2 C030 / X̄³ = V030
(xiv) E(e1³ e2) = (L3 C031 + 3 L4 C020 C011) / (X̄³ Z̄) = V031
(xv) E(e1 e2³) = (L3 C013 + 3 L4 C002 C011) / (X̄ Z̄³) = V013
(xvi) E(e0 e1³) = (L3 C130 + 3 L4 C020 C110) / (Ȳ X̄³) = V130

where

L1 = (N − n) / ((N − 1) n),   L2 = (N − n)(N − 2n) / ((N − 1)(N − 2) n²),


and

L3 = (N − n)(N² + N − 6nN + 6n²) / ((N − 1)(N − 2)(N − 3) n³),
L4 = N(N − n)(N − n − 1)(n − 1) / ((N − 1)(N − 2)(N − 3) n³),

Cpqr = (1/N) Σ_{i=1}^{N} (Yi − Ȳ)^p (Xi − X̄)^q (Zi − Z̄)^r.
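The ingredients of Theorem 1.1 are simple to compute for any finite population. The sketch below uses a hypothetical population of N = 6 units; the indices p, q, r of Cpqr are taken to attach to y, x and z respectively, which is the convention items (i)-(xvi) require.

```python
# Building blocks of Theorem 1.1 on a made-up finite population:
# C_pqr = (1/N) * sum_i (Yi - Ybar)^p (Xi - Xbar)^q (Zi - Zbar)^r,
# L1 = (N - n) / ((N - 1) n),
# L2 = (N - n)(N - 2n) / ((N - 1)(N - 2) n^2).

def c_pqr(Y, X, Z, p, q, r):
    """Central product moment C_pqr of the finite population (Y, X, Z)."""
    N = len(Y)
    Yb, Xb, Zb = sum(Y) / N, sum(X) / N, sum(Z) / N
    return sum((yi - Yb) ** p * (xi - Xb) ** q * (zi - Zb) ** r
               for yi, xi, zi in zip(Y, X, Z)) / N

def L1(N, n):
    return (N - n) / ((N - 1) * n)

def L2(N, n):
    return (N - n) * (N - 2 * n) / ((N - 1) * (N - 2) * n ** 2)

# Hypothetical population of N = 6 units; sample size n = 3.
Y = [1.0, 2.0, 4.0, 3.0, 5.0, 6.0]
X = [2.0, 3.0, 5.0, 4.0, 6.0, 7.0]
Z = [0.5, 0.7, 1.1, 0.9, 1.3, 1.5]
N, n = len(Y), 3

Yb = sum(Y) / N
V200 = L1(N, n) * c_pqr(Y, X, Z, 2, 0, 0) / Yb ** 2   # item (i): V(e0)
print(V200)
```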

This theorem will be used to obtain the MSE expressions of the estimators considered here. The proof is straightforward under SRSWOR (see Sukhatme and Sukhatme (1970)).

2. First Order Biases and Mean Squared Errors

The expressions for the biases of the estimators t1, t2, t3 and t4, to the first order of approximation, are respectively

Bias(t1) = Ȳ[ −α1 V110 − α2 V101 + R1 V020 + S1 V002 + α1 α2 V011 ]    (2.1)

where R1 = α1(α1 + 1)/2 and S1 = α2(α2 + 1)/2.

Bias(t2) = Ȳ[ −λ1 V110 − λ2 V101 + λ1 V020 + λ2 V002 ]    (2.2)

Bias(t 3 ) = Y −αλ (w1X1V110 − w2 V101 )   (α + 1) 2 2 +λ 2 α w1 X1 V020 + w22 X 22 V002 + 2 w1w1X1X 2 V011 ) (2.3) ( 2  where, λ =

1 w1X1 + w2 X 2

Bias(t4) = Ȳ[ −(β1/2) V110 − (β2/2) V101 + (β1/4 + β1²/8) V020 + (β2/4 + β2²/8) V002 ]    (2.4)

Expressions for the MSE of the estimators t1,t2, t3 and t4 to the first order of approximation are respectively, given by

MSE(t1) = Ȳ²[ V200 + α1² V020 + α2² V002 − 2α1 V110 − 2α2 V101 + 2α1 α2 V011 ]    (2.5)

The MSE of the estimator t1 is minimized for

α1* = [ (ρyx − ρyz ρxz) / (1 − ρxz²) ] √(V200 / V020)    (2.6)

and

α2* = [ (ρyz − ρyx ρxz) / (1 − ρxz²) ] √(V200 / V002)    (2.7)



where α1* and α2* are, respectively, the partial regression coefficients of y on x and of y on z in simple random sampling.

MSE(t2) = Ȳ²[ V200 + λ1² V020 + λ2² V002 − 2λ1 V110 − 2λ2 V101 + 2λ1 λ2 V011 ]    (2.8)

The MSE of the estimator t2 is minimum for

λ1* = (V002 − V101 + V110 − V011) / (V020 + V002 − 2 V011)  and  λ2* = 1 − λ1*.    (2.9)
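The optimum weight for Olkin's estimator can be checked numerically: the sketch below minimizes the first-order MSE (2.8), with λ2 = 1 − λ1, over a grid and compares the result with the closed form obtained by setting the derivative with respect to λ1 to zero. The V-moments below are hypothetical stand-ins, not the paper's data.

```python
# Numerical check of the optimum weight lam1* for Olkin's estimator t2.

def mse_t2(lam1, V):
    """First-order MSE (2.8) of t2 (without the Ybar^2 factor), lam2 = 1 - lam1."""
    lam2 = 1.0 - lam1
    return (V['200'] + lam1 ** 2 * V['020'] + lam2 ** 2 * V['002']
            - 2 * lam1 * V['110'] - 2 * lam2 * V['101']
            + 2 * lam1 * lam2 * V['011'])

# Hypothetical V-moments.
V = {'200': 3.0e-4, '020': 2.4e-4, '002': 2.8e-4,
     '110': 2.0e-4, '101': 1.9e-4, '011': 1.9e-4}

# Closed form from d(MSE)/d(lam1) = 0.
lam1_star = ((V['002'] + V['110'] - V['101'] - V['011'])
             / (V['020'] + V['002'] - 2 * V['011']))

# Brute-force grid search as a cross-check.
grid = [i / 1000.0 for i in range(-500, 1501)]
lam1_grid = min(grid, key=lambda l: mse_t2(l, V))
print(lam1_star, lam1_grid)
```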

MSE(t3) = Ȳ²[ V200 + α² λ² (w1² X̄1² V020 + w2² X̄2² V002 + 2 w1 w2 X̄1 X̄2 V011) − 2αλ (w1 X̄1 V110 + w2 X̄2 V101) ]    (2.10)

Differentiating (2.10) with respect to w1 and w2 partially, we get the optimum values of w1 and w2 respectively as

w1* = (X̄1 V110 − X̄2 V101 + X̄2² V002 − X̄1 X̄2 V011) / (X̄1² V020 + X̄2² V002 − 2 X̄1 X̄2 V011)  and  w2* = 1 − w1*    (2.11)

For the optimum values w1 = w1* and w2 = w2*, the MSE of the estimator t3 is minimum.

MSE(t4) = Ȳ²[ V200 + (β1²/4) V020 + (β2²/4) V002 − β1 V110 − β2 V101 + (β1 β2 / 2) V011 ]    (2.12)

On differentiating (2.12) with respect to β1 and β2 respectively, we get the optimum values of β1 and β2 as

β1* = 2 (V110 V002 − V101 V011) / (V002 V020 − V011²)    (2.13)

and

β2* = 2 (V020 V101 − V110 V011) / (V002 V020 − V011²)    (2.14)
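Similarly, (2.13) and (2.14) can be verified by checking that the closed-form pair (β1*, β2*) is not beaten by nearby perturbations of the first-order MSE (2.12); the V-moments below are again hypothetical.

```python
# Numerical check of the optimum constants (2.13)-(2.14) for t4.

def mse_t4(b1, b2, V):
    """First-order MSE (2.12) of t4 (without the Ybar^2 factor)."""
    return (V['200'] + b1 ** 2 * V['020'] / 4 + b2 ** 2 * V['002'] / 4
            - b1 * V['110'] - b2 * V['101'] + b1 * b2 * V['011'] / 2)

# Hypothetical V-moments.
V = {'200': 3.0e-4, '020': 2.4e-4, '002': 2.8e-4,
     '110': 2.0e-4, '101': 1.9e-4, '011': 1.9e-4}

D = V['002'] * V['020'] - V['011'] ** 2
b1_star = 2 * (V['110'] * V['002'] - V['101'] * V['011']) / D
b2_star = 2 * (V['020'] * V['101'] - V['110'] * V['011']) / D

base = mse_t4(b1_star, b2_star, V)
print(b1_star, b2_star, base)
```

The MSE (2.12) is a convex quadratic in (β1, β2) whenever V020 V002 > V011², so the stationary point is the global minimum.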

At their respective optimum values, the estimators t1, t2, t3 and t4 attain MSE values equal to the MSE of the regression estimator based on two auxiliary variables.

3. Proposed Estimator

When information on two auxiliary variables is available, we propose an estimator t5 as





δ2   cX − dx δ1   z    t 5 = y  k1   + k 2 2 −       (c-d ) X    Z   

(3.1)

where d and c are either real numbers or functions of known parameters of the auxiliary information, k1 and k2 are constants to be determined such that the MSE of the estimator t5 is minimum under the condition k1 + k2 = 1, and δ1 and δ2 are integers that can take the values −1, 0 and +1.

Expressing the estimator t5 in terms of e's, we have

t5 = Ȳ [ k1 (1 − η1 e1)^{δ1} + k2 {2 − (1 + e2)^{δ2}} ]    (3.2)

where η1 = d/(c − d). We assume |η1 e1| < 1 so that (1 − η1 e1)^{δ1} is expandable. Expanding the right hand side of (3.2) and neglecting terms of e's having power greater than two, we have

(t5 − Ȳ) = Ȳ [ e0 − k1 η1 δ1 e1 − k2 δ2 e2 − k1 η1 δ1 e0 e1 − k2 δ2 e0 e2 + k1 M1 e1² − k2 N1 e2² ]    (3.3)

where M1 and N1 are the second order coefficients in the binomial expansions of (1 − η1 e1)^{δ1} and (1 + e2)^{δ2}, respectively. Taking expectation on both sides, we get the bias of the estimator t5, to the first degree of approximation, as

Bias(t5) = Ȳ [ −k1 η1 δ1 V110 − k2 δ2 V101 + k1 M1 V020 − k2 N1 V002 ]    (3.4)

Squaring both sides of (3.3) and neglecting terms of e's having power greater than two, we have

(t5 − Ȳ)² = Ȳ² [ e0² + k1² η1² δ1² e1² + k2² δ2² e2² − 2 k1 η1 δ1 e0 e1 − 2 k2 δ2 e0 e2 + 2 k1 k2 δ1 δ2 η1 e1 e2 ]    (3.5)

Taking expectation on both sides of (3.5) and using Theorem 1.1, we get the MSE of t5 up to the first degree of approximation as

MSE(t5) = Ȳ² [ V200 + k1² δ1² η1² V020 + k2² δ2² V002 − 2 k1 δ1 η1 V110 − 2 k2 δ2 V101 + 2 k1 k2 δ1 δ2 η1 V011 ]    (3.6)

Differentiating (3.6) with respect to k1 and k2 partially, equating them to zero and after simplification, we get the optimum values of k1 and k2 respectively, as

k1* = (δ1 η1 V110 + δ2² V002 − δ2 V101 − δ1 δ2 η1 V011) / (δ1² η1² V020 + δ2² V002 − 2 δ1 δ2 η1 V011),   k2* = 1 − k1*.    (3.7)
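A sketch of the optimum constant in (3.7): with k2 = 1 − k1, the first-order MSE (3.6) is a quadratic in k1, so its minimizer follows from a single derivative. The values of η1, δ1, δ2 and the V-moments below are illustrative assumptions, not the paper's data.

```python
# Optimum k1 for the proposed estimator t5, from the first-order MSE (3.6).

def mse_t5(k1, V, eta1, d1, d2):
    """First-order MSE (3.6) of t5 (without the Ybar^2 factor), k2 = 1 - k1."""
    k2 = 1.0 - k1
    return (V['200'] + k1 ** 2 * d1 ** 2 * eta1 ** 2 * V['020']
            + k2 ** 2 * d2 ** 2 * V['002'] - 2 * k1 * d1 * eta1 * V['110']
            - 2 * k2 * d2 * V['101'] + 2 * k1 * k2 * d1 * d2 * eta1 * V['011'])

# Hypothetical V-moments and assumed design constants.
V = {'200': 3.0e-4, '020': 2.4e-4, '002': 2.8e-4,
     '110': 2.0e-4, '101': 1.9e-4, '011': 1.9e-4}
eta1, d1, d2 = 1.0, 1, 1   # assumed: delta1 = delta2 = 1, c and d giving eta1 = 1

# k1* from setting the derivative of (3.6) with respect to k1 to zero.
num = d1 * eta1 * V['110'] + d2 ** 2 * V['002'] - d2 * V['101'] \
      - d1 * d2 * eta1 * V['011']
den = d1 ** 2 * eta1 ** 2 * V['020'] + d2 ** 2 * V['002'] \
      - 2 * d1 * d2 * eta1 * V['011']
k1_star = num / den
print(k1_star, mse_t5(k1_star, V, eta1, d1, d2))
```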


Putting these values in (3.6), we get the minimum MSE of the estimator t5. The minimum MSE of the estimators t1, t2, t3, t4 and of the proposed estimator t5 is equal to the MSE of the combined regression estimator based on two auxiliary variables, which motivated us to study the properties of the estimators up to the second order of approximation.

4. Second Order Biases and Mean Squared Errors

Expressing the estimators ti (i = 1, 2, 3, 4, 5) in terms of e's (i = 0, 1, 2), we get

t1 = Ȳ { (1 + e0)(1 + e1)^{−α1} (1 + e2)^{−α2} }    (4.1)

or

(t1 − Ȳ) = Ȳ { e0 − α1 e1 − α1 e0 e1 − α2 e2 − α2 e0 e2 + α1 α2 e1 e2 + α1 α2 e0 e1 e2 + R1 e1² + R1 e0 e1² + S1 e2² + S1 e0 e2² − R2 e1³ − R2 e0 e1³ − S2 e2³ − S2 e0 e2³ − α2 R1 e2 e1² − α1 S1 e1 e2² + α2 R2 e1³ e2 + α1 S2 e1 e2³ }    (4.2)

Squaring both sides and neglecting terms of e’s having power greater than four, we have

(t1 − Ȳ)² = Ȳ² [ e0² + α1² e1² + α2² e2² − 2α1 e0 e1 − 2α2 e0 e2 + 2α1 α2 e1 e2 − 2α1 e0² e1 − 2α2 e0² e2 + (2R1 + 2α1²) e0 e1² + 2S1 e0 e2² − 2α1² α2 e1² e2 − 2S1 α1 e1 e2² − 2R1 α1 e1³ + 6α1 α2 e0 e1 e2 + (α1² + 2R1) e0² e1² + (α2² + 2S1) e0² e2² + (α1² α2² + 2R1 S1) e1² e2² − (4α1² α2 + 6R1 α1) e0 e1² e2 − 4α1 (S1 + α2²) e0 e1 e2² + 4α1 α2 e0² e1 e2 − 2(R2 + 2α1 R1) e0 e1³ − 2(S2 + 2α2 S1) e0 e2³ − 2α1 (S2 − α2 S1) e1 e2³ + 2α2 (R2 + α1 R1) e1³ e2 + (R1² + 2α1 R2) e1⁴ + (S1² + 2α2 S2) e2⁴ ]    (4.3)

Taking expectations and using Theorem 1.1, we get the MSE of the estimator t1 up to the second order of approximation as

MSE2(t1) = Ȳ² [ V200 + α1² V020 + α2² V002 − 2α1 V110 − 2α2 V101 + 2α1 α2 V011 − 2α1 V210 − 2α2 V201 + (2R1 + 2α1²) V120 + 2S1 V102 − 2α1² α2 V021 − 2S1 α1 V012 − 2R1 α1 V030 + 6α1 α2 V111 + (α1² + 2R1) V220 + (α2² + 2S1) V202 + (α1² α2² + 2R1 S1) V022 − (4α1² α2 + 6R1 α1) V121 − 4α1 (S1 + α2²) V112 + 4α1 α2 V211 − 2(R2 + 2α1 R1) V130 − 2(S2 + 2α2 S1) V103 − 2α1 (S2 − α2 S1) V013 + 2α2 (R2 + α1 R1) V031 + (R1² + 2α1 R2) V040 + (S1² + 2α2 S2) V004 ]    (4.4)




where R1 = α1(α1 + 1)/2, R2 = α1(α1 + 1)(α1 + 2)/6, R3 = α1(α1 + 1)(α1 + 2)(α1 + 3)/24, S1 = α2(α2 + 1)/2, S2 = α2(α2 + 1)(α2 + 2)/6, S3 = α2(α2 + 1)(α2 + 2)(α2 + 3)/24.

Similarly, the MSE expression of the estimator t2 is given by

MSE2(t2) = Ȳ² [ V200 + λ1² (V020 + V220 + 3V040 + 2V120 − 2V030 − 4V130) + λ2² (V002 + V202 + 3V004 + 2V102 − 2V003 − 4V103) + 2λ1 (−V110 − V210 + V120 + V220 − V130) + 2λ2 (−V101 − V201 + V202) + 2λ1 λ2 (V011 + 2V111 − V012 − 2V112 + V013 + V211 − V021 − 2V121 + V022 + V031) ]    (4.5)

The MSE expression of the estimator t3 is given by

MSE2(t3) = Ȳ² [ V200 + w1² X̄1² {α² θ² (V020 + V220 + 2V120) + 2A1 θ² (V120 + V220)} + w2² X̄2² {α² θ² (V002 + V202 + 2V102) + 2A1 θ² (V102 + V202)} + 2 w1 w2 X̄1 X̄2 {α² θ² (V211 + 2V111) + 2A1 θ² (V111 + V211)} + 2 w1³ w2 X̄1³ X̄2 (A1² θ⁴ + 4A2 α θ⁴) V031 + 2 w1 w2³ X̄1 X̄2³ (A1² θ⁴ + 4A2 α θ⁴) V013 − 2αθ w1 X̄1 (V110 + V210) − 2αθ w2 X̄2 (V101 + V201) + 3 w1² w2 X̄1² X̄2 (−2A2 θ³ V121 − 4A1 α θ³ V121 − 2A1 α θ³ V021) + 3 w1 X̄1 w2² X̄2² (−2A2 θ³ V112 − 4A1 α θ³ V112 − 2A1 α θ³ V012) + w1⁴ X̄1⁴ (A1² θ⁴ + 2A2 α θ⁴) V040 + w2⁴ X̄2⁴ (A1² θ⁴ + 2A2 α θ⁴) V004 + w1³ X̄1³ (−2A2 θ³ V130 − 4A1 α θ³ V130 − 2A1 α θ³ V030) + w2³ X̄2³ (−2A2 θ³ V103 − 4A1 α θ³ V103 − 2A1 α θ³ V003) + 6 w1² X̄1² w2² X̄2² (A1² θ⁴ + 2A2 α θ⁴) V022 ]    (4.6)

The MSE expression of the estimator t4 is given by



MSE2(t4) = Ȳ² [ V200 + (β1²/4) V020 + (β2²/4) V002 − β1 V110 − β2 V101 − β1 V210 (M + β1/2) + (V102/4)(N + β2/4) − (3/2) V021 (M + β1² β2) − (3/2) V012 (N + β2² β1) + (3/4) β1 β2 V111 + (1/2) β1 β2 V011 − (1/2) β1 M V030 − (1/2) β2 N V003 + (V220/4)(M + β1/2)² + (V202/4)(N + β2/2)² + (V022/8)(β1² β2²/2 + β1 R + β2 Q + MN) − (V130/4)(O + 2β1 M) − (V103/8)(P + 2β2 N) + (V031/4)(β1 Q + O + S M) + (V013/8)(β2 R + β1 P + S N) − (V121/4)(Q + 2β1² β2 + β2 M) − (V112/4)(2β2² β1 + 2β1 N + R) + (V040/16)(2β1 O + M²) + (V004/16)(2β2 P + N²) + (V211/2)(β1² β2 + S) ]    (4.7)

where

M = β1 + β1²/2,  N = β2 + β2²/2,  O = β1² + β1³/6,
P = β2² + β2³/6,  Q = β1 β2 + β1² β2 / 2,  R = β1 β2 + β1 β2² / 2.

4.1 MSE of the Proposed Estimator t5 up to Second Order of Approximation

Expanding the right hand side of equation (3.2) and neglecting terms of e's having power greater than four, we get

t5 − Ȳ = Ȳ [ e0 + k1 (−δ1 η1 e1 + M1 e1² − M2 e1³ + M3 e1⁴) + k2 (−δ2 e2 + N1 e2² − N2 e2³ + N3 e2⁴) + k1 (−δ1 η1 e0 e1 + M1 e0 e1² − M2 e0 e1³) + k2 (−δ2 e0 e2 + N1 e0 e2² − N2 e0 e2³) ]    (4.8)

Squaring both sides of (4.8) and neglecting terms of e’s having power greater than four, we have




(t5 − Ȳ)² = Ȳ² [ e0² + k1² {δ1² η1² (e1² + e0² e1² + 2 e0 e1²) + 2 δ1 η1 (M1 e1⁴ − 2 M1 e1³) + M1² e1⁴} + k2² {δ2² (e2² + e0² e2² + 2 e0 e2²) + 2 δ2 (N1 e2³ + N1 e0 e2³ + N2 e2⁴) + N1² e2⁴} + 2 k1 {δ1 η1 (−e0 e1 − e0² e1) + M1 (e0 e1² + e0² e1²) − M2 e0 e1³} + 2 k2 {δ2 (−e0 e2 − e0² e2) + N1 (−e0 e2² − e0² e2²) − N2 e0 e2³} + 2 k1 k2 {δ1 δ2 η1 (e1 e2 + 2 e0 e1 e2 + e0² e1 e2) + δ1 η1 (N1 e1 e2² + N2 e1 e2³)} ]    (4.9)

Taking expectations on both sides of (4.9) and using Theorem 1.1, we get the MSE of the estimator t5 up to the second order of approximation as

MSE2(t5) = Ȳ² [ V200 + k1² {δ1² η1² (V020 + V220 + 2 V120) + 2 δ1 η1 (M1 V040 − 2 M1 V030) + M1² V040} + k2² {δ2² (V002 + V202 + 2 V102) + 2 δ2 (N1 V003 + N1 V103 + N2 V004) + N1² V004} + 2 k1 {δ1 η1 (−V110 − V210) + M1 (V120 + V220) − M2 V130} + 2 k2 {δ2 (−V101 − V201) + N1 (−V102 − V202) − N2 V103} + 2 k1 k2 {δ1 δ2 η1 (V011 + 2 V111 + V211) + δ1 η1 (N1 V012 + N2 V013)} ]    (4.10)

5. Numerical Illustration

For a natural population data set we have calculated the mean square errors of the estimators and compared them under the first and second orders of approximation.

5.1 Data Set

The data for the empirical analysis are taken from the book An Introduction to Multivariate Statistical Analysis (2nd edition) by T. W. Anderson, page 58. The population consists of 25 persons, with Y = head length of second son, X = head length of first son and Z = head breadth of first son. The following values are obtained from the raw data given on page 58:

Y = 183.84, X = 185.72, Z = 151.12, N = 25, n = 7 and

V200 = 0.000306792, V020 = 0.000244833, V002 = 0.000284171, V110 = 0.00020986, V011 = 0.000193753, V101 = 0.000189972, V111 = −0.000059732, V210 = −0.0000004582, V201 = −0.000000277, V021 = −0.0000002775, V102 = −0.0000002354, V120 = −0.00000036455, V012 = 0.00000025179, V202 = 0.000013, V003 = 0.000001544, V031 = 0.3893411, V013 = 0.380025, V030 = −0.00001363, V103 = 0.00001215, V040 = 0.0000214, V220 = 0.000015, V022 = 0.001624, V121 = 0.0000115, V211 = 0.0000122, V004 = 0.00001384, V112 = 0.000000085

Table 5.1: MSE's of the estimators ti (i = 1, 2, 3, 4, 5)

Estimators | First order MSE | Second order MSE
t1         | 4.508           | 16156.644
t2         | 4.508           | 27204.321
t3         | 4.508           | 17679.890
t4         | 4.508           | 20928.689
t5         | 4.508           | 275.926*
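Since all the estimators attain the MSE of the two-auxiliary regression estimator at their optimum constants (Sections 2 and 3), the common first-order entry of Table 5.1 can be approximated directly from the moment values listed in Section 5.1. The sketch below does this; because the printed moments are rounded, the result need not reproduce 4.508 exactly.

```python
# First-order benchmark from the Section 5.1 moments: the MSE of the
# two-auxiliary regression estimator,
#   Ybar^2 * [V200 - (V110^2 V002 + V101^2 V020 - 2 V110 V101 V011)
#                    / (V020 V002 - V011^2)].

Ybar = 183.84
V200, V020, V002 = 0.000306792, 0.000244833, 0.000284171
V110, V101, V011 = 0.00020986, 0.000189972, 0.000193753

reduction = ((V110 ** 2 * V002 + V101 ** 2 * V020
              - 2 * V110 * V101 * V011)
             / (V020 * V002 - V011 ** 2))
mse_min = Ybar ** 2 * (V200 - reduction)
print(mse_min)
```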

*for δ1 = δ2 = 1

The table compares the estimators on the basis of MSE; because Ȳ cannot be expanded up to the second order of approximation, we are unable to calculate the PRE under the second order of approximation.

6. Conclusion

Table 5.1 reports the MSE's of the estimators t1, t2, t3, t4 and t5 under the first and second orders of approximation. It is observed that, for all the estimators, the mean squared error increases under the second order. From the second order MSE's we conclude that the estimator t5 is the best among the estimators considered here for the given data set.

Acknowledgement: The authors are thankful to the learned referees for their valuable suggestions leading to the improvement of the contents and presentation of the original manuscript.

REFERENCES

Anderson, T. W. (1984). An Introduction to Multivariate Statistical Analysis (2nd ed.). New York: Wiley.

Hartley, H. O. and Ross, A. (1954). Unbiased ratio estimators. Nature, 174, 270-271. http://dx.doi.org/10.1038/174270a0

Hossain, M. I., Rahman, M. I. and Tareq, M. (2006). Second order biases and mean squared errors of some estimators using auxiliary variable. SSRN.





Jhajj, H. S. and Srivastava, S. K. (1983). A class of PPS estimators of population mean using auxiliary information. Jour. Indian Soc. Agril. Statist., 35, 57-61.

Kadilar, C. and Cingi, H. (2006). Improvement in estimating the population mean in simple random sampling. Applied Mathematics Letters, 19(1), 75-79. http://dx.doi.org/10.1016/j.aml.2005.02.039

Khoshnevisan, M., Singh, R., Chauhan, P., Sawan, N. and Smarandache, F. (2007). A general family of estimators for estimating population mean using known value of some population parameter(s). Far East Journal of Theoretical Statistics, 22, 181-191.

Lu, J., Yan, Z., Ding, C. and Hong, Z. (2010). 2010 International Conference on Computer and Communication Technologies in Agriculture Engineering (CCTAE), 3, 136-139.

Olkin, I. (1958). Multivariate ratio estimation for finite populations. Biometrika, 45, 154-165. http://dx.doi.org/10.1093/biomet/45.1-2.154

Perri, P. F. (2007). Improved ratio-cum-product type estimators. Statist. Trans., 8, 51-69.

Quenouille, M. H. (1956). Notes on bias in estimation. Biometrika, 43, 353-360. http://dx.doi.org/10.1093/biomet/43.3-4.353

Sharma, P., Singh, R. and Jong, Min-Kim (2013a). Study of some improved ratio type estimators using information on auxiliary attributes under second order approximation. Journal of Scientific Research, 57, 138-146.

Sharma, P., Singh, R. and Verma, H. (2013b). Some exponential ratio-product type estimators using information on auxiliary attributes under second order approximation. International Journal of Statistics and Economics, 12(3).

Singh, H. P. and Tailor, R. (2003). Use of known correlation coefficient in estimating the finite population mean. Statistics in Transition, 6, 555-560.

Singh, M. P. (1965). On the estimation of ratio and product of the population parameters. Sankhya B, 27, 231-328.

Singh, M. P. (1967). Ratio cum product method of estimation. Metrika, 12, 34-42. http://dx.doi.org/10.1007/BF02613481

Singh, R. and Kumar, M. (2011). A note on transformations on auxiliary variable in survey sampling. MASA, 6(1), 17-19. http://dx.doi.org/10.3233/MAS-2011-0154

Singh, R. and Kumar, M. (2012). Improved estimators of population mean using two auxiliary variables in stratified random sampling. Pak. J. Stat. Oper. Res., VIII(1), 65-72. http://dx.doi.org/10.1234/pjsor.v8i1.161

Singh, R., Chauhan, P., Sawan, N. and Smarandache, F. (2007). Auxiliary Information and A Priori Values in Construction of Improved Estimators. Renaissance High Press.

Singh, S., Mangat, N. S. and Mahajan, P. K. (1995). General class of estimators. Jour. Indian Soc. Agril. Statist., 47(2), 129-133.

Srivastava, S. K. (1967). An estimator using auxiliary information in sample surveys. Cal. Stat. Ass. Bull., 15, 127-134.

Sukhatme, P. V. and Sukhatme, B. V. (1970). Sampling Theory of Surveys with Applications. Iowa State University Press, Ames, U.S.A.

Upadhyaya, L. N. and Singh, H. P. (1999). Use of transformed auxiliary variable in estimating the finite population mean. Biom. Jour., 41, 627-636. http://dx.doi.org/10.1002/(SICI)1521-4036(199909)41:5<627::AIDBIMJ627>3.0.CO;2-W


