Mohammad Khoshnevisan School of Accounting and Finance Griffith University, Australia
Housila P. Singh School of Studies in Statistics, Vikram University Ujjain - 456 010 (M. P.), India
Florentin Smarandache University of New Mexico, Gallup, USA
A Family of Estimators of Population Mean Using Multiauxiliary Information in Presence of Measurement Errors
Published in: M. Khoshnevisan, S. Saxena, H. P. Singh, S. Singh, F. Smarandache (Editors) RANDOMNESS AND OPTIMAL ESTIMATION IN DATA SAMPLING (second edition) American Research Press, Rehoboth, USA, 2002 ISBN: 1-931233-68-3 pp. 44 - 61
Abstract This paper proposes a family of estimators of population mean using information on several auxiliary variables and analyzes its properties in the presence of measurement errors. Keywords: Population mean, Study variate, Auxiliary variates, Bias, Mean squared error, Measurement errors. 2000 MSC: 62E17
1. INTRODUCTION
The discrepancies between the values observed on the variables under consideration for the sampled units and the corresponding true values are termed measurement errors. Standard theory of survey sampling assumes that the data collected through surveys are free of measurement or response errors. In reality such a supposition rarely holds, and the data may be contaminated with measurement errors arising from various sources; see, e.g., Cochran (1963) and Sukhatme et al (1984). One major source of measurement errors in surveys is the nature of the variables themselves. This may happen in the case of qualitative variables; simple examples of such variables are intelligence, preference, specific abilities, utility, aggressiveness and tastes. In many sample surveys it is recognized that errors of measurement can also arise from the person being interviewed, from the interviewer, from the supervisor or leader of a team of interviewers, and from the processor who transfers the information from the recorded interview to the punched cards or tapes to be analyzed; see, for instance, Cochran (1968). Another source of measurement error arises when the variable is conceptually well defined but observations can be obtained only on closely related substitutes, termed proxies or surrogates. Such a situation is encountered when one needs to measure the economic status or the level of education of individuals; see Salabh (1997) and Sud and Srivastava (2000). In the presence of measurement errors, inferences may be misleading; see Biemer et al (1991), Fuller (1995) and Manisha and Singh (2001). There is today a great deal of research on measurement errors in surveys. In this paper an attempt is made to study the impact of measurement errors on a family of estimators of the population mean using multiauxiliary information.
2. THE SUGGESTED FAMILY OF ESTIMATORS
Let $Y$ be the study variate whose population mean $\mu_0$ is to be estimated using information on $p\,(>1)$ auxiliary variates $X_1, X_2, \ldots, X_p$. Further, let $\mu^T = (\mu_1, \mu_2, \ldots, \mu_p)$ be the population mean row vector of the vector $X^T = (X_1, X_2, \ldots, X_p)$. Assume that a simple random sample of size $n$ is drawn from the population on the study character $Y$ and the auxiliary characters $X_1, X_2, \ldots, X_p$. For the sake of simplicity we assume that the population is infinite. The recorded fallible measurements are given by

$y_j = Y_j + E_j$, $\quad x_{ij} = X_{ij} + \eta_{ij}$, $\quad i = 1, 2, \ldots, p$; $j = 1, 2, \ldots, n$,

where $Y_j$ and $X_{ij}$ are the correct values of the characteristics $Y$ and $X_i$ ($i = 1, 2, \ldots, p$; $j = 1, 2, \ldots, n$). For the sake of simplicity in exposition, we assume that the errors $E_j$ are stochastic with mean zero and variance $\sigma_{(0)}^2$ and are uncorrelated with the $Y_j$. The errors $\eta_{ij}$ in $x_{ij}$ are distributed independently of each other and of the $X_{ij}$, with mean zero and variance $\sigma_{(i)}^2$ ($i = 1, 2, \ldots, p$). Also the $E_j$ and $\eta_{ij}$ are uncorrelated, although the $Y_j$ and $X_{ij}$ are correlated. Define

$u_i = \bar{x}_i/\mu_i$, $(i = 1, 2, \ldots, p)$,
$u^T = (u_1, u_2, \ldots, u_p)_{1 \times p}$, $\quad e^T = (1, 1, \ldots, 1)_{1 \times p}$,
$\bar{y} = \frac{1}{n}\sum_{j=1}^{n} y_j$, $\quad \bar{x}_i = \frac{1}{n}\sum_{j=1}^{n} x_{ij}$.
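To make the error structure concrete, the following minimal sketch (not part of the original paper; all numerical values are hypothetical) simulates fallible measurements $y_j$ and $x_{ij}$ under the model above and forms the quantities $\bar{y}$ and $u_i$:

```python
import numpy as np

# Minimal sketch (hypothetical parameter values) of the measurement-error model:
# y_j = Y_j + E_j and x_ij = X_ij + eta_ij, with E and eta having mean zero,
# independent of each other and of the true values.
rng = np.random.default_rng(0)

n, p = 500, 2
mu0, mu = 50.0, np.array([20.0, 10.0])           # true means of Y and of X_1, X_2
cov_true = np.array([[25.0, 12.0,  6.0],         # joint covariance of (Y, X_1, X_2)
                     [12.0, 16.0,  4.0],
                     [ 6.0,  4.0,  9.0]])
sigma_e0 = 2.0                                   # sd of E_j     (sigma_(0))
sigma_eta = np.array([1.5, 1.0])                 # sd of eta_ij  (sigma_(i))

true = rng.multivariate_normal(np.r_[mu0, mu], cov_true, size=n)
Y, X = true[:, 0], true[:, 1:]

y = Y + rng.normal(0.0, sigma_e0, size=n)        # fallible study variate
x = X + rng.normal(0.0, sigma_eta, size=(n, p))  # fallible auxiliary variates

y_bar = y.mean()
u = x.mean(axis=0) / mu                          # u_i = xbar_i / mu_i
print("y_bar =", round(y_bar, 3), " u =", np.round(u, 4))
```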
With this background we suggest a family of estimators of $\mu_0$ as

$\hat{\mu}_g = g\left(\bar{y}, u^T\right)$    (2.1)

where $g(\bar{y}, u^T)$ is a function of $\bar{y}, u_1, u_2, \ldots, u_p$ such that

$g(\mu_0, e^T) = \mu_0 \;\Rightarrow\; \left.\dfrac{\partial g(\cdot)}{\partial \bar{y}}\right|_{(\mu_0, e^T)} = 1$

and such that it satisfies the following conditions:
1. The function $g(\bar{y}, u^T)$ is continuous and bounded in $Q$.
2. The first and second order partial derivatives of the function $g(\bar{y}, u^T)$ exist and are continuous and bounded in $Q$.

To obtain the mean squared error of $\hat{\mu}_g$, we expand the function $g(\bar{y}, u^T)$ about the point $(\mu_0, e^T)$ in a second-order Taylor series. We get

$\hat{\mu}_g = g(\mu_0, e^T) + (\bar{y} - \mu_0)\left.\dfrac{\partial g(\cdot)}{\partial \bar{y}}\right|_{(\mu_0, e^T)} + (u - e)^T g^{(1)}(\mu_0, e^T)$
$\qquad + \dfrac{1}{2}\left[(\bar{y} - \mu_0)^2 \left.\dfrac{\partial^2 g(\cdot)}{\partial \bar{y}^2}\right|_{(\bar{y}^*, u^{*T})} + 2(\bar{y} - \mu_0)(u - e)^T \left.\dfrac{\partial g^{(1)}(\cdot)}{\partial \bar{y}}\right|_{(\bar{y}^*, u^{*T})} + (u - e)^T g^{(2)}\left(\bar{y}^*, u^{*T}\right)(u - e)\right]$    (2.2)

where $\bar{y}^* = \mu_0 + \theta(\bar{y} - \mu_0)$, $u^* = e + \theta(u - e)$, $0 < \theta < 1$; $g^{(1)}(\cdot)$ denotes the $p$-element column vector of first partial derivatives of $g(\cdot)$ and $g^{(2)}(\cdot)$ denotes the $p \times p$ matrix of second partial derivatives of $g(\cdot)$ with respect to $u$. Noting that $g(\mu_0, e^T) = \mu_0$, it can be shown that
$E(\hat{\mu}_g) = \mu_0 + O(n^{-1})$    (2.3)

which shows that the bias of $\hat{\mu}_g$ is of the order $n^{-1}$, and hence its contribution to the mean squared error of $\hat{\mu}_g$ will be of the order $n^{-2}$.
From (2.2) we have, to terms of order $n^{-1}$,

$MSE(\hat{\mu}_g) = E\left[(\bar{y} - \mu_0) + (u - e)^T g^{(1)}(\mu_0, e^T)\right]^2$
$\qquad = E\left[(\bar{y} - \mu_0)^2 + 2(\bar{y} - \mu_0)(u - e)^T g^{(1)}(\mu_0, e^T) + \left(g^{(1)}(\mu_0, e^T)\right)^T (u - e)(u - e)^T \left(g^{(1)}(\mu_0, e^T)\right)\right]$
$\qquad = \dfrac{1}{n}\left[\mu_0^2\left(C_0^2 + C_{(0)}^2\right) + 2\mu_0\, b^T g^{(1)}(\mu_0, e^T) + \left(g^{(1)}(\mu_0, e^T)\right)^T A\left(g^{(1)}(\mu_0, e^T)\right)\right]$    (2.4)

where $b^T = (b_1, b_2, \ldots, b_p)$ with $b_i = \rho_{0i} C_0 C_i$ $(i = 1, 2, \ldots, p)$; $C_i = \sigma_i/\mu_i$, $C_{(i)} = \sigma_{(i)}/\mu_i$ $(i = 1, 2, \ldots, p)$; $C_0 = \sigma_0/\mu_0$, $C_{(0)} = \sigma_{(0)}/\mu_0$; and

$A = \begin{pmatrix} C_1^2 + C_{(1)}^2 & \rho_{12} C_1 C_2 & \rho_{13} C_1 C_3 & \cdots & \rho_{1p} C_1 C_p \\ \rho_{12} C_1 C_2 & C_2^2 + C_{(2)}^2 & \rho_{23} C_2 C_3 & \cdots & \rho_{2p} C_2 C_p \\ \rho_{13} C_1 C_3 & \rho_{23} C_2 C_3 & C_3^2 + C_{(3)}^2 & \cdots & \rho_{3p} C_3 C_p \\ \vdots & \vdots & \vdots & & \vdots \\ \rho_{1p} C_1 C_p & \rho_{2p} C_2 C_p & \rho_{3p} C_3 C_p & \cdots & C_p^2 + C_{(p)}^2 \end{pmatrix}_{p \times p}.$

The $MSE(\hat{\mu}_g)$ at (2.4) is minimized for

$g^{(1)}(\mu_0, e^T) = -\mu_0 A^{-1} b.$    (2.5)
Thus the resulting minimum MSE of $\hat{\mu}_g$ is given by

$\min. MSE(\hat{\mu}_g) = \left(\mu_0^2/n\right)\left[C_0^2 + C_{(0)}^2 - b^T A^{-1} b\right].$    (2.6)

Now we have established the following theorem.

Theorem 2.1. Up to terms of order $n^{-1}$,

$MSE(\hat{\mu}_g) \ge \left(\mu_0^2/n\right)\left[C_0^2 + C_{(0)}^2 - b^T A^{-1} b\right]$    (2.7)

with equality holding if

$g^{(1)}(\mu_0, e^T) = -\mu_0 A^{-1} b.$

It is to be mentioned that the family of estimators $\hat{\mu}_g$ at (2.1) is very large. The following estimators:

$\hat{\mu}_g^{(1)} = \bar{y}\sum_{i=1}^{p}\omega_i \dfrac{\mu_i}{\bar{x}_i}$, $\;\sum_{i=1}^{p}\omega_i = 1$,  [Olkin (1958)]

$\hat{\mu}_g^{(2)} = \bar{y}\sum_{i=1}^{p}\omega_i \dfrac{\bar{x}_i}{\mu_i}$, $\;\sum_{i=1}^{p}\omega_i = 1$,  [Singh (1967)]

$\hat{\mu}_g^{(3)} = \bar{y}\,\dfrac{\sum_{i=1}^{p}\omega_i \mu_i}{\sum_{i=1}^{p}\omega_i \bar{x}_i}$, $\;\sum_{i=1}^{p}\omega_i = 1$,  [Shukla (1966) and John (1969)]

$\hat{\mu}_g^{(4)} = \bar{y}\,\dfrac{\sum_{i=1}^{p}\omega_i \bar{x}_i}{\sum_{i=1}^{p}\omega_i \mu_i}$, $\;\sum_{i=1}^{p}\omega_i = 1$,  [Sahai et al (1980)]

$\hat{\mu}_g^{(5)} = \bar{y}\prod_{i=1}^{p}\left(\dfrac{\mu_i}{\bar{x}_i}\right)^{\omega_i}$, $\;\sum_{i=1}^{p}\omega_i = 1$,  [Mohanty and Pattanaik (1984)]

$\hat{\mu}_g^{(6)} = \bar{y}\left(\sum_{i=1}^{p}\dfrac{\omega_i \bar{x}_i}{\mu_i}\right)^{-1}$, $\;\sum_{i=1}^{p}\omega_i = 1$,  [Mohanty and Pattanaik (1984)]

$\hat{\mu}_g^{(7)} = \bar{y}\prod_{i=1}^{p}\left(\dfrac{\bar{x}_i}{\mu_i}\right)^{\omega_i}$, $\;\sum_{i=1}^{p}\omega_i = 1$,  [Tuteja and Bahl (1991)]

$\hat{\mu}_g^{(8)} = \bar{y}\left(\sum_{i=1}^{p}\dfrac{\omega_i \mu_i}{\bar{x}_i}\right)^{-1}$, $\;\sum_{i=1}^{p}\omega_i = 1$,  [Tuteja and Bahl (1991)]

$\hat{\mu}_g^{(9)} = \bar{y}\left(\omega_{p+1} + \sum_{i=1}^{p}\omega_i \dfrac{\mu_i}{\bar{x}_i}\right)$, $\;\sum_{i=1}^{p+1}\omega_i = 1$,

$\hat{\mu}_g^{(10)} = \bar{y}\left(\omega_{p+1} + \sum_{i=1}^{p}\omega_i \dfrac{\bar{x}_i}{\mu_i}\right)$, $\;\sum_{i=1}^{p+1}\omega_i = 1$,

$\hat{\mu}_g^{(11)} = \bar{y}\left(\sum_{i=1}^{q}\omega_i \dfrac{\mu_i}{\bar{x}_i} + \sum_{i=q+1}^{p}\omega_i \dfrac{\bar{x}_i}{\mu_i}\right)$, $\;\sum_{i=1}^{q}\omega_i + \sum_{i=q+1}^{p}\omega_i = 1$,  [Srivastava (1965) and Rao and Mudholkar (1967)]

$\hat{\mu}_g^{(12)} = \bar{y}\prod_{i=1}^{p}\left(\dfrac{\bar{x}_i}{\mu_i}\right)^{\alpha_i}$ (the $\alpha_i$'s being suitably chosen constants),  [Srivastava (1967)]

$\hat{\mu}_g^{(13)} = \bar{y}\prod_{i=1}^{p}\left(2 - \dfrac{\bar{x}_i}{\mu_i}\right)^{\alpha_i}$,  [Sahai and Ray (1980)]

$\hat{\mu}_g^{(14)} = \bar{y}\prod_{i=1}^{p}\dfrac{\mu_i}{\mu_i + \alpha_i(\bar{x}_i - \mu_i)}$,  [Walsh (1970)]

$\hat{\mu}_g^{(15)} = \bar{y}\exp\left(\sum_{i=1}^{p}\theta_i \log u_i\right)$,  [Srivastava (1971)]

$\hat{\mu}_g^{(16)} = \bar{y}\exp\left(\sum_{i=1}^{p}\theta_i (u_i - 1)\right)$,  [Srivastava (1971)]

$\hat{\mu}_g^{(17)} = \bar{y}\sum_{i=1}^{p}\omega_i \exp\left\{(\theta_i/\omega_i)\log u_i\right\}$, $\;\sum_{i=1}^{p}\omega_i = 1$,  [Srivastava (1971)]

$\hat{\mu}_g^{(18)} = \bar{y} + \sum_{i=1}^{p}\alpha_i(\bar{x}_i - \mu_i)$

etc. may be identified as particular members of the suggested family of estimators $\hat{\mu}_g$.
The MSE of these estimators can be obtained from (2.4). It is well known that

$V(\bar{y}) = \left(\mu_0^2/n\right)\left(C_0^2 + C_{(0)}^2\right).$    (2.8)

It follows from (2.6) and (2.8) that the minimum MSE of $\hat{\mu}_g$ is not larger than the variance of the conventional unbiased estimator $\bar{y}$.

On substituting $\sigma_{(0)}^2 = 0$ and $\sigma_{(i)}^2 = 0$ for all $i = 1, 2, \ldots, p$ in equation (2.4), we obtain the no-measurement-error case. In that case the MSE of $\hat{\mu}_g$ is given by

$MSE(\hat{\mu}_g) = \dfrac{1}{n}\left[C_0^2\mu_0^2 + 2\mu_0\, b^T g^{*(1)}(\mu_0, e^T) + \left(g^{*(1)}(\mu_0, e^T)\right)^T A^* \left(g^{*(1)}(\mu_0, e^T)\right)\right] = MSE(\hat{\mu}_g^*)$    (2.9)

where

$\hat{\mu}_g^* = g^*\left(\bar{Y}, \dfrac{\bar{X}_1}{\mu_1}, \dfrac{\bar{X}_2}{\mu_2}, \ldots, \dfrac{\bar{X}_p}{\mu_p}\right) = g^*\left(\bar{Y}, U^T\right)$    (2.10)

and $\bar{Y}$ and $\bar{X}_i$ $(i = 1, 2, \ldots, p)$ are the sample means of the characteristics $Y$ and $X_i$ based on the true measurements $(Y_j, X_{ij};\; i = 1, 2, \ldots, p;\; j = 1, 2, \ldots, n)$. The family of estimators $\hat{\mu}_g^*$ at (2.10) is a generalized version of Srivastava (1971, 80). The MSE of $\hat{\mu}_g^*$ is minimized for

$g^{*(1)}(\mu_0, e^T) = -\mu_0 A^{*-1} b.$    (2.11)

Thus the resulting minimum MSE of $\hat{\mu}_g^*$ is given by

$\min. MSE(\hat{\mu}_g^*) = \dfrac{\mu_0^2}{n}\left[C_0^2 - b^T A^{*-1} b\right] = \dfrac{\sigma_0^2}{n}\left(1 - R^2\right)$    (2.12)

where $A^* = [a^*_{ij}]$ is the $p \times p$ matrix with $a^*_{ij} = \rho_{ij} C_i C_j$ and $R$ stands for the multiple correlation coefficient of $Y$ on $X_1, X_2, \ldots, X_p$.

From (2.6) and (2.12) the increase in the minimum MSE of $\hat{\mu}_g$ due to measurement errors is obtained as

$\min. MSE(\hat{\mu}_g) - \min. MSE(\hat{\mu}_g^*) = \dfrac{\mu_0^2}{n}\left[C_{(0)}^2 + b^T A^{*-1} b - b^T A^{-1} b\right] > 0.$

This is due to the fact that the measurement errors introduce the variances of the fallible measurements of the study variate $Y$ and the auxiliary variates $X_i$. Hence there is a need to take the contribution of measurement errors into account.
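A small numeric sketch of this comparison (with made-up coefficients of variation and correlations, purely illustrative) evaluates (2.6), (2.12) and their difference:

```python
import numpy as np

# Hypothetical population quantities (illustrative only, not from the paper).
C0, C0_err = 0.30, 0.10          # C_0 and C_(0): true and error CV of Y
C = np.array([0.25, 0.20])       # C_i: true CVs of X_1, X_2
C_err = np.array([0.08, 0.05])   # C_(i): error CVs of X_1, X_2
rho_yx = np.array([0.8, 0.6])    # rho_0i
rho_12 = 0.5
mu0, n = 50.0, 200

b = rho_yx * C0 * C                                        # b_i = rho_0i * C0 * C_i
A_star = np.array([[C[0]**2,          rho_12*C[0]*C[1]],
                   [rho_12*C[0]*C[1], C[1]**2        ]])   # no-measurement-error matrix A*
A = A_star + np.diag(C_err**2)                             # diagonal inflated by C_(i)^2

mse_with    = mu0**2 / n * (C0**2 + C0_err**2 - b @ np.linalg.solve(A, b))      # (2.6)
mse_without = mu0**2 / n * (C0**2 - b @ np.linalg.solve(A_star, b))             # (2.12)
print("min MSE with errors   :", round(mse_with, 4))
print("min MSE without errors:", round(mse_without, 4))
print("increase due to errors:", round(mse_with - mse_without, 4))
```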
3. BIASES AND MEAN SQUARED ERRORS OF SOME PARTICULAR ESTIMATORS IN THE PRESENCE OF MEASUREMENT ERRORS

To obtain the bias of the estimator $\hat{\mu}_g$, we further assume that the third partial derivatives of $g(\bar{y}, u^T)$ also exist and are continuous and bounded. Then expanding $g(\bar{y}, u^T)$ about the point $(\bar{y}, u^T) = (\mu_0, e^T)$ in a third-order Taylor series we obtain

$\hat{\mu}_g = g(\mu_0, e^T) + (\bar{y} - \mu_0)\left.\dfrac{\partial g(\cdot)}{\partial \bar{y}}\right|_{(\mu_0, e^T)} + (u - e)^T g^{(1)}(\mu_0, e^T)$
$\qquad + \dfrac{1}{2}\left[(\bar{y} - \mu_0)^2 \left.\dfrac{\partial^2 g(\cdot)}{\partial \bar{y}^2}\right|_{(\mu_0, e^T)} + 2(\bar{y} - \mu_0)(u - e)^T g^{(12)}(\mu_0, e^T) + (u - e)^T g^{(2)}(\mu_0, e^T)(u - e)\right]$
$\qquad + \dfrac{1}{6}\left[(\bar{y} - \mu_0)\dfrac{\partial}{\partial \bar{y}} + (u - e)^T \dfrac{\partial}{\partial u}\right]^3 g\left(\bar{y}^*, u^{*T}\right)$    (3.1)

where $g^{(12)}(\mu_0, e^T)$ denotes the $p$-element column vector of second-order partial derivatives $\partial^2 g(\cdot)/\partial \bar{y}\,\partial u$ of $g(\bar{y}, u^T)$ at the point $(\bar{y}, u^T) = (\mu_0, e^T)$.

Noting that

$g(\mu_0, e^T) = \mu_0$, $\quad \left.\dfrac{\partial g(\cdot)}{\partial \bar{y}}\right|_{(\mu_0, e^T)} = 1$, $\quad \left.\dfrac{\partial^2 g(\cdot)}{\partial \bar{y}^2}\right|_{(\mu_0, e^T)} = 0$,

and taking expectation, we obtain the bias of the family of estimators $\hat{\mu}_g$ to the first degree of approximation as

$B(\hat{\mu}_g) = \dfrac{1}{2}\, E\left\{(u - e)^T g^{(2)}(\mu_0, e^T)(u - e)\right\} + \dfrac{\mu_0}{n}\, b^T g^{(12)}(\mu_0, e^T)$    (3.2)

where $b^T = (b_1, b_2, \ldots, b_p)$ with $b_i = \rho_{0i} C_0 C_i$ $(i = 1, 2, \ldots, p)$. Thus we see that the bias of $\hat{\mu}_g$ depends also upon the second-order partial derivatives of the function $g(\bar{y}, u^T)$ at the point $(\mu_0, e^T)$, and hence will be different for different estimators of the family.

The biases and mean squared errors of the estimators $\hat{\mu}_g^{(i)}$, $i = 1$ to $18$, up to terms of order $n^{-1}$, along with the values of $g^{(1)}(\mu_0, e^T)$, $g^{(2)}(\mu_0, e^T)$ and $g^{(12)}(\mu_0, e^T)$, are given in Table 3.1.
Table 3.1. Biases and mean squared errors of various estimators of $\mu_0$.

Throughout the table $\omega^T = (\omega_1, \ldots, \omega_p)$, $W = \mathrm{diag}(\omega_1, \ldots, \omega_p)$, $\omega^{*T} = (\omega_1\mu_1, \ldots, \omega_p\mu_p)$, $C^T = \left(C_1^2 + C_{(1)}^2, \ldots, C_p^2 + C_{(p)}^2\right)$, $\alpha^T = (\alpha_1, \ldots, \alpha_p)$, $D_\alpha = \mathrm{diag}(\alpha_1, \ldots, \alpha_p)$, $\theta^T = (\theta_1, \ldots, \theta_p)$, $\Theta = \mathrm{diag}(\theta_1, \ldots, \theta_p)$, and $O$ denotes the null matrix. Each entry lists $g^{(1)}(\mu_0, e^T)$, $g^{(2)}(\mu_0, e^T)$, $g^{(12)}(\mu_0, e^T)$, and the bias and MSE to terms of order $n^{-1}$.

$\hat{\mu}_g^{(1)}$: $g^{(1)} = -\mu_0\omega$; $g^{(2)} = 2\mu_0 W$; $g^{(12)} = -\omega$; Bias $= \dfrac{\mu_0}{n}\left(C^T\omega - b^T\omega\right)$; MSE $= \dfrac{\mu_0^2}{n}\left[C_0^2 + C_{(0)}^2 - 2b^T\omega + \omega^T A\,\omega\right]$.

$\hat{\mu}_g^{(2)}$: $g^{(1)} = \mu_0\omega$; $g^{(2)} = O$; $g^{(12)} = \omega$; Bias $= \dfrac{\mu_0}{n}\, b^T\omega$; MSE $= \dfrac{\mu_0^2}{n}\left[C_0^2 + C_{(0)}^2 + 2b^T\omega + \omega^T A\,\omega\right]$.

$\hat{\mu}_g^{(3)}$: $g^{(1)} = -\mu_0\,\omega^*/(\omega^T\mu)$; $g^{(2)} = 2\mu_0\,\omega^*\omega^{*T}/(\omega^T\mu)^2$; $g^{(12)} = -\omega^*/(\omega^T\mu)$; Bias $= \dfrac{\mu_0}{n}\left[\dfrac{\omega^{*T} A\,\omega^*}{(\omega^T\mu)^2} - \dfrac{b^T\omega^*}{\omega^T\mu}\right]$; MSE $= \dfrac{\mu_0^2}{n}\left[C_0^2 + C_{(0)}^2 - \dfrac{2 b^T\omega^*}{\omega^T\mu} + \dfrac{\omega^{*T} A\,\omega^*}{(\omega^T\mu)^2}\right]$.

$\hat{\mu}_g^{(4)}$: $g^{(1)} = \mu_0\,\omega^*/(\omega^T\mu)$; $g^{(2)} = O$; $g^{(12)} = \omega^*/(\omega^T\mu)$; Bias $= \dfrac{\mu_0}{n}\,\dfrac{b^T\omega^*}{\omega^T\mu}$; MSE $= \dfrac{\mu_0^2}{n}\left[C_0^2 + C_{(0)}^2 + \dfrac{2 b^T\omega^*}{\omega^T\mu} + \dfrac{\omega^{*T} A\,\omega^*}{(\omega^T\mu)^2}\right]$.

$\hat{\mu}_g^{(5)}$: $g^{(1)} = -\mu_0\omega$; $g^{(2)} = \mu_0\left(\omega\omega^T + W\right)$; $g^{(12)} = -\omega$; Bias $= \dfrac{\mu_0}{2n}\left[\omega^T A\,\omega + C^T\omega - 2b^T\omega\right]$; MSE $= \dfrac{\mu_0^2}{n}\left[C_0^2 + C_{(0)}^2 - 2b^T\omega + \omega^T A\,\omega\right]$.

$\hat{\mu}_g^{(6)}$: $g^{(1)} = -\mu_0\omega$; $g^{(2)} = 2\mu_0\,\omega\omega^T$; $g^{(12)} = -\omega$; Bias $= \dfrac{\mu_0}{n}\left[\omega^T A\,\omega - b^T\omega\right]$; MSE $= \dfrac{\mu_0^2}{n}\left[C_0^2 + C_{(0)}^2 - 2b^T\omega + \omega^T A\,\omega\right]$.

$\hat{\mu}_g^{(7)}$: $g^{(1)} = \mu_0\omega$; $g^{(2)} = \mu_0\left(\omega\omega^T - W\right)$; $g^{(12)} = \omega$; Bias $= \dfrac{\mu_0}{2n}\left[\omega^T A\,\omega - C^T\omega + 2b^T\omega\right]$; MSE $= \dfrac{\mu_0^2}{n}\left[C_0^2 + C_{(0)}^2 + 2b^T\omega + \omega^T A\,\omega\right]$.

$\hat{\mu}_g^{(8)}$: $g^{(1)} = \mu_0\omega$; $g^{(2)} = 2\mu_0\left(\omega\omega^T - W\right)$; $g^{(12)} = \omega$; Bias $= \dfrac{\mu_0}{n}\left[\omega^T A\,\omega - C^T\omega + b^T\omega\right]$; MSE $= \dfrac{\mu_0^2}{n}\left[C_0^2 + C_{(0)}^2 + 2b^T\omega + \omega^T A\,\omega\right]$.

$\hat{\mu}_g^{(9)}$: $g^{(1)} = -\mu_0\omega$; $g^{(2)} = 2\mu_0 W$; $g^{(12)} = -\omega$; Bias $= \dfrac{\mu_0}{n}\left(C^T\omega - b^T\omega\right)$; MSE $= \dfrac{\mu_0^2}{n}\left[C_0^2 + C_{(0)}^2 - 2b^T\omega + \omega^T A\,\omega\right]$.

$\hat{\mu}_g^{(10)}$: $g^{(1)} = \mu_0\omega$; $g^{(2)} = O$; $g^{(12)} = \omega$; Bias $= \dfrac{\mu_0}{n}\, b^T\omega$; MSE $= \dfrac{\mu_0^2}{n}\left[C_0^2 + C_{(0)}^2 + 2b^T\omega + \omega^T A\,\omega\right]$.

$\hat{\mu}_g^{(11)}$: $g^{(1)} = \mu_0\,\omega^{(1)}$, where $\omega^{(1)T} = (-\omega_1, \ldots, -\omega_q, \omega_{q+1}, \ldots, \omega_p)$; $g^{(2)} = 2\mu_0 W^{(1)}$, where $W^{(1)} = \mathrm{diag}(\omega_1, \ldots, \omega_q, 0, \ldots, 0)$; $g^{(12)} = \omega^{(1)}$; Bias $= \dfrac{\mu_0}{n}\left[C^{*T}\omega + b^T\omega^{(1)}\right]$, where $C^{*T} = \left(C_1^2 + C_{(1)}^2, \ldots, C_q^2 + C_{(q)}^2, 0, \ldots, 0\right)$; MSE $= \dfrac{\mu_0^2}{n}\left[C_0^2 + C_{(0)}^2 + 2b^T\omega^{(1)} + \omega^{(1)T} A\,\omega^{(1)}\right]$.

$\hat{\mu}_g^{(12)}$: $g^{(1)} = \mu_0\alpha$; $g^{(2)} = \mu_0\left(\alpha\alpha^T - D_\alpha\right)$; $g^{(12)} = \alpha$; Bias $= \dfrac{\mu_0}{2n}\left[\alpha^T A\,\alpha - C^T\alpha + 2b^T\alpha\right]$; MSE $= \dfrac{\mu_0^2}{n}\left[C_0^2 + C_{(0)}^2 + 2b^T\alpha + \alpha^T A\,\alpha\right]$.

$\hat{\mu}_g^{(13)}$: $g^{(1)} = -\mu_0\alpha$; $g^{(2)} = \mu_0\left(\alpha\alpha^T - D_\alpha\right)$; $g^{(12)} = -\alpha$; Bias $= \dfrac{\mu_0}{2n}\left[\alpha^T A\,\alpha - C^T\alpha - 2b^T\alpha\right]$; MSE $= \dfrac{\mu_0^2}{n}\left[C_0^2 + C_{(0)}^2 - 2b^T\alpha + \alpha^T A\,\alpha\right]$.

$\hat{\mu}_g^{(14)}$: $g^{(1)} = -\mu_0\alpha$; $g^{(2)} = \mu_0\left(\alpha\alpha^T + D_{\alpha^2}\right)$, where $D_{\alpha^2} = \mathrm{diag}(\alpha_1^2, \ldots, \alpha_p^2)$; $g^{(12)} = -\alpha$; Bias $= \dfrac{\mu_0}{2n}\left[\alpha^T A\,\alpha + \sum_{i=1}^{p}\alpha_i^2\left(C_i^2 + C_{(i)}^2\right) - 2b^T\alpha\right]$; MSE $= \dfrac{\mu_0^2}{n}\left[C_0^2 + C_{(0)}^2 - 2b^T\alpha + \alpha^T A\,\alpha\right]$.

$\hat{\mu}_g^{(15)}$: $g^{(1)} = \mu_0\theta$; $g^{(2)} = \mu_0\left(\theta\theta^T - \Theta\right)$; $g^{(12)} = \theta$; Bias $= \dfrac{\mu_0}{2n}\left[\theta^T A\,\theta - C^T\theta + 2b^T\theta\right]$; MSE $= \dfrac{\mu_0^2}{n}\left[C_0^2 + C_{(0)}^2 + 2b^T\theta + \theta^T A\,\theta\right]$.

$\hat{\mu}_g^{(16)}$: $g^{(1)} = \mu_0\theta$; $g^{(2)} = \mu_0\,\theta\theta^T$; $g^{(12)} = \theta$; Bias $= \dfrac{\mu_0}{2n}\left[\theta^T A\,\theta + 2b^T\theta\right]$; MSE $= \dfrac{\mu_0^2}{n}\left[C_0^2 + C_{(0)}^2 + 2b^T\theta + \theta^T A\,\theta\right]$.

$\hat{\mu}_g^{(17)}$: $g^{(1)} = \mu_0\theta$; $g^{(2)} = \mu_0\,\Theta^*$, where $\Theta^* = \mathrm{diag}\left\{\theta_1\left(\dfrac{\theta_1}{\omega_1} - 1\right), \ldots, \theta_p\left(\dfrac{\theta_p}{\omega_p} - 1\right)\right\}$; $g^{(12)} = \theta$; Bias $= \dfrac{\mu_0}{2n}\left[\sum_{i=1}^{p}\theta_i\left(\dfrac{\theta_i}{\omega_i} - 1\right)\left(C_i^2 + C_{(i)}^2\right) + 2b^T\theta\right]$; MSE $= \dfrac{\mu_0^2}{n}\left[C_0^2 + C_{(0)}^2 + 2b^T\theta + \theta^T A\,\theta\right]$.

$\hat{\mu}_g^{(18)}$: $g^{(1)} = \alpha^*$, where $\alpha^{*T} = (\alpha_1\mu_1, \ldots, \alpha_p\mu_p)$; $g^{(2)} = O$; $g^{(12)} = O$; the estimator is unbiased; MSE $= \dfrac{1}{n}\left[\mu_0^2\left(C_0^2 + C_{(0)}^2\right) + 2\mu_0\, b^T\alpha^* + \alpha^{*T} A\,\alpha^*\right]$.
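As a check on how the Table 3.1 expressions are used in practice, the following sketch (illustrative parameter values only, continuing the hypothetical numbers used in the earlier snippet) evaluates the tabulated bias and MSE of the Olkin-type ratio member $\hat{\mu}_g^{(1)}$:

```python
import numpy as np

# Hypothetical population quantities (illustrative, not from the paper).
mu0, n = 50.0, 200
C0, C0_err = 0.30, 0.10
C = np.array([0.25, 0.20]); C_err = np.array([0.08, 0.05])
rho_yx = np.array([0.8, 0.6]); rho_12 = 0.5
w = np.array([0.6, 0.4])                          # weights, sum to one

b = rho_yx * C0 * C
Cvec = C**2 + C_err**2                            # entries of C^T = (C_i^2 + C_(i)^2)
A = np.array([[Cvec[0],            rho_12*C[0]*C[1]],
              [rho_12*C[0]*C[1],   Cvec[1]        ]])

# Table 3.1, row for the ratio-type member (Olkin 1958):
bias = mu0 / n * (Cvec @ w - b @ w)
mse  = mu0**2 / n * (C0**2 + C0_err**2 - 2 * b @ w + w @ A @ w)
print("bias:", round(bias, 5), "  MSE:", round(mse, 5))
```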
4. ESTIMATORS BASED ON ESTIMATED OPTIMUM

It may be noted that the minimum MSE (2.6) is attained only when the optimum values of the constants involved in the estimator, which are functions of the unknown population parameters $\mu_0$, $b$ and $A$, are known quite accurately. To use such estimators in practice, one has to use some guessed values of the parameters $\mu_0$, $b$ and $A$, obtained either from past experience or from a pilot sample survey. Das and Tripathi (1978, sec. 3) have illustrated that even if the values of the parameters used in the estimator are not exactly equal to their optimum values as given by (2.5) but are close to them, the resulting estimator will still be better than the conventional unbiased estimator $\bar{y}$. For further discussion on this issue, the reader is referred to Murthy (1967), Reddy (1973), Srivenkataramana and Tracy (1984) and Sahai and Sahai (1985). On the other hand, if the experimenter is unable to guess the values of the population parameters for lack of experience, it is advisable to replace the unknown population parameters by their consistent estimators.

Let $\hat{\phi}$ be a consistent estimator of $\phi = A^{-1}b$. We then replace $\phi$ by $\hat{\phi}$, and also $\mu_0$ by $\bar{y}$ if necessary, in the optimum $\hat{\mu}_g$, resulting in the estimator $\hat{\mu}_{g(est)}$, say, which will now be a function of $\bar{y}$, $u$ and $\hat{\phi}$. Thus we define a family of estimators (based on estimated optimum values) of $\mu_0$ as

$\hat{\mu}_{g(est)} = g^{**}\left(\bar{y}, u^T, \hat{\phi}^T\right)$    (4.1)

where $g^{**}(\cdot)$ is a function of $\left(\bar{y}, u^T, \hat{\phi}^T\right)$ such that

$g^{**}\left(\mu_0, e^T, \phi^T\right) = \mu_0$ for all $\mu_0$, $\;\Rightarrow\; \left.\dfrac{\partial g^{**}(\cdot)}{\partial \bar{y}}\right|_{(\mu_0, e^T, \phi^T)} = 1$,

$\left.\dfrac{\partial g^{**}(\cdot)}{\partial u}\right|_{(\mu_0, e^T, \phi^T)} = \left.\dfrac{\partial g(\cdot)}{\partial u}\right|_{(\mu_0, e^T)} = -\mu_0 A^{-1} b = -\mu_0\phi$    (4.2)

and

$\left.\dfrac{\partial g^{**}(\cdot)}{\partial \hat{\phi}}\right|_{(\mu_0, e^T, \phi^T)} = 0.$

With these conditions and following Srivastava and Jhajj (1983), it can be shown to the first degree of approximation that

$MSE\left(\hat{\mu}_{g(est)}\right) = \min. MSE\left(\hat{\mu}_g\right) = \dfrac{\mu_0^2}{n}\left[C_0^2 + C_{(0)}^2 - b^T A^{-1} b\right].$

Thus if the optimum values of the constants involved in the estimator are replaced by their consistent estimators and the conditions (4.2) hold true, the resulting estimator $\hat{\mu}_{g(est)}$ will have the same asymptotic mean squared error as the optimum $\hat{\mu}_g$. Our work needs to be extended, and future research will explore the computational aspects of the proposed procedure.
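A minimal sketch of this plug-in idea follows. It uses one particular member of the family that I construct for illustration, $\hat{\mu}_{g(est)} = \bar{y}\left[1 - \hat{\phi}^T(u - e)\right]$, which satisfies the conditions (4.2); the sample-based estimates of $A$ and $b$ and all data values are assumptions made for the example, not prescriptions from the paper:

```python
import numpy as np

def estimated_optimum_estimator(y, x, mu):
    """Plug-in member of the family: ybar * (1 - phi_hat^T (u - e)),
    with phi_hat = A_hat^{-1} b_hat computed from the fallible sample itself.
    Illustrative sketch; 'mu' are the known auxiliary population means."""
    y = np.asarray(y, float); x = np.asarray(x, float); mu = np.asarray(mu, float)
    y_bar, x_bar = y.mean(), x.mean(axis=0)
    u = x_bar / mu

    # Moment-based estimates: since the errors are uncorrelated with the true
    # values, sample covariances of the fallible data estimate the entries of
    # A (diagonal inflated by the error variances) and of b.
    S_xx = np.cov(x, rowvar=False)
    S_yx = np.array([np.cov(y, x[:, i])[0, 1] for i in range(x.shape[1])])
    A_hat = S_xx / np.outer(mu, mu)
    b_hat = S_yx / (y_bar * mu)
    phi_hat = np.linalg.solve(A_hat, b_hat)

    return y_bar * (1.0 - phi_hat @ (u - 1.0))

# Illustrative use with made-up data:
rng = np.random.default_rng(2)
mu = np.array([20.0, 10.0])
X = mu + rng.normal(0, 2.0, size=(300, 2))
Y = 50.0 + 1.2 * (X[:, 0] - mu[0]) + 0.5 * (X[:, 1] - mu[1]) + rng.normal(0, 3.0, 300)
y = Y + rng.normal(0, 1.0, 300)                      # fallible study variate
x = X + rng.normal(0, [0.8, 0.5], size=(300, 2))     # fallible auxiliary variates
print(estimated_optimum_estimator(y, x, mu))
```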
REFERENCES

Biemer, P.P., Groves, R.M., Lyberg, L.E., Mathiowetz, N.A. and Sudman, S. (1991): Measurement errors in surveys. Wiley, New York.

Cochran, W.G. (1963): Sampling Techniques. John Wiley, New York.

Cochran, W.G. (1968): Errors of measurement in statistics. Technometrics, 10(4), 637-666.

Das, A.K. and Tripathi, T.P. (1978): Use of auxiliary information in estimating the finite population variance. Sankhya, C, 40, 139-148.

Fuller, W.A. (1995): Estimation in the presence of measurement error. International Statistical Review, 63(2), 121-147.

John, S. (1969): On multivariate ratio and product estimators. Biometrika, 533-536.

Manisha and Singh, R.K. (2001): An estimation of population mean in the presence of measurement errors. Jour. Ind. Soc. Agri. Statist., 54(1), 13-18.

Mohanty, S. and Pattanaik, L.M. (1984): Alternative multivariate ratio estimators using geometric and harmonic means. Jour. Ind. Soc. Agri. Statist., 36, 110-118.

Murthy, M.N. (1967): Sampling Theory and Methods. Statistical Publishing Society, Calcutta.

Olkin, I. (1958): Multivariate ratio estimation for finite population. Biometrika, 45, 154-165.

Rao, P.S.R.S. and Mudholkar, G.S. (1967): Generalized multivariate estimators for the mean of a finite population. Jour. Amer. Statist. Assoc., 62, 1009-1012.

Reddy, V.N. and Rao, T.J. (1977): Modified PPS method of estimation. Sankhya, C, 39, 185-197.

Reddy, V.N. (1973): On ratio and product methods of estimation. Sankhya, B, 35, 307-316.

Salabh (1997): Ratio method of estimation in the presence of measurement error. Jour. Ind. Soc. Agri. Statist., 52, 150-155.

Sahai, A. and Ray, S.K. (1980): An efficient estimator using auxiliary information. Metrika, 27, 271-275.

Sahai, A., Chander, R. and Mathur, A.K. (1980): An alternative multivariate product estimator. Jour. Ind. Soc. Agril. Statist., 32(2), 6-12.

Sahai, A. and Sahai, A. (1985): On efficient use of auxiliary information. Jour. Statist. Plann. Inference, 12, 203-212.

Shukla, G.K. (1966): An alternative multivariate ratio estimate for finite population. Cal. Statist. Assoc. Bull., 15, 127-134.

Singh, M.P. (1967): Multivariate product method of estimation for finite population. Jour. Ind. Soc. Agri. Statist., 19(2), 1-10.

Srivastava, S.K. (1965): An estimator of the mean of a finite population using several auxiliary characters. Jour. Ind. Statist. Assoc., 3, 189-194.

Srivastava, S.K. (1967): An estimator using auxiliary information in sample surveys. Cal. Statist. Assoc. Bull., 16, 121-132.

Srivastava, S.K. (1971): A generalized estimator for the mean of a finite population using multiauxiliary information. Jour. Amer. Statist. Assoc., 66, 404-407.

Srivastava, S.K. (1980): A class of estimators using auxiliary information in sample surveys. Canad. Jour. Statist., 8, 253-254.

Srivastava, S.K. and Jhajj, H.S. (1983): A class of estimators of the population mean using multi-auxiliary information. Cal. Statist. Assoc. Bull., 32, 47-56.

Srivenkataramana, T. and Tracy, D.S. (1984): Positive and negative valued auxiliary variates in surveys. Metron, xxx(3-4), 3-13.

Sud, U.C. and Srivastava, S.K. (2000): Estimation of population mean in repeat surveys in the presence of measurement errors. Jour. Ind. Soc. Ag. Statist., 53(2), 125-133.

Sukhatme, P.V., Sukhatme, B.V., Sukhatme, S. and Ashok, C. (1984): Sampling theory of surveys with applications. Iowa State University Press, USA.

Tuteja, R.K. and Bahl, Shashi (1991): Multivariate product estimators. Cal. Statist. Assoc. Bull., 42, 109-115.

Tankou, V. and Dharmadlikari, S. (1989): Improvement of ratio-type estimators. Biom. Jour., 31(7), 795-802.

Walsh, J.E. (1970): Generalization of ratio estimate for population total. Sankhya, A, 32, 99-106.