An Investigation into Interviewer Effects in Market Research

J. R. McKENZIE*

Results of a survey of attitudes toward a range of potential new products are analyzed to detect interviewer effects by means of a nested random effects analysis of variance model. The relationship of such effects to interviewers' views on the products and to question content also is investigated.

INTRODUCTION

It has long been accepted that variation in response to market research survey questions caused by differential interviewer questioning technique or interpretation of replies can be a major source of bias or additional sampling variance. Boyd and Westfall [1-3] review implications and research results in the marketing field. More work has been carried out in this area in the field of social surveys; Sudman and Bradburn [15] collate and analyze the results of a large number of surveys, and Kish [13] summarizes the results of several studies and suggests that a doubling of basic sampling variation could be caused by interviewer variation in several instances. One noteworthy investigation of interviewer effects is that of Hanson and Marks [7], who for census data relate interviewer performance to several measures of interviewer characteristics. Such effects must be distinguished from "design effects" due to the degree of clustering present in the sample, for which a more than fivefold increase in basic sampling variance has been observed [e.g., 4]. However, there are indications that design effects tend to be greater for classification variables (occupation, education, etc.), which tend themselves to be geographically clustered, than for marketing concepts such as buying intentions [cf. 14, Table 14.1.IV]. For the survey described here the reverse is found for interviewer effects, these being generally larger for attitudes toward products and buying intentions than for the classification variables used (cf. Table 1). This finding suggests that interviewer-based variations in response, which are likely to be more serious than sampling variations due to design effects if interviewers conduct more than a small number of interviews, may be even more of a problem in the context of market research designed to elicit respondents' reactions to products.

This article summarizes the results of an investigation into interviewer effects on a survey comparing several complex marketing models in a new product situation. This survey afforded the opportunity to measure interviewer effects on a variety of long or involved questions, where they might be expected to be greatest; emphasis is on effects due to interviewer-respondent interaction rather than selective nonresponse. In addition, an attempt is made to relate the effects observed to the views held by individual interviewers, both on the subject matter and on the questions themselves.


*J. R. McKenzie is a Market Research Statistician, Statistics and Business Research Department, British Post Office.

Journal of Marketing Research, Vol. XIV (August 1977), 330-6.

THE SURVEY

The survey analyzed was carried out in July 1974 on attitudes and "buying intentions" of telephone subscribers or their spouses toward eight potential new products. The survey was intended to compare the usefulness of several statistical and marketing models in this situation. Thus, data were collected for analysis by multidimensional scaling [6], factor analysis [8], two of Fishbein's behavior-attitude-belief models [5], and the St. James model [9]. Three questionnaires (denoted by A, X, and 0) were applied to subsamples of size 200, 200, and 600, respectively; in addition, different subsets of the data were extracted from subsamples, each of size 100, for the last questionnaire. Interviewer effects in this complex survey situation were investigated by use of a (partially) interpenetrating sample design [cf., e.g., 11, 12]. In each of 10 primary sampling units (PSUs, henceforth referred to as areas), respondents were assigned randomly to four interviewers to permit analysis of interviewer effects via a nested analysis of variance model. The sample, however, was not intended to be nationally representative, being concentrated in the Midlands and North West of England. Interviewers were each given twice the number of addresses necessary to achieve their quota; thus it might be argued that differences observed might be due to different methods of deriving their quotas from these (e.g., by varying the number of callbacks, or interviewing at different times of day). This effect is interviewer variation of a sort, but one which might have been removed by imposing identical interviewing schedules for each interviewer. Similarly, differential ability to obtain respondent cooperation also would tend to produce interviewer effects, but would be much more difficult to control (although response rates could be monitored for each interviewer). In practice, however, virtually no real differences were found in classification profiles of the samples obtained by different interviewers within the same area, and thus these were not considered serious sources of variation. The advantages of carrying out such an investigation are twofold: to provide improved estimates of the variance of averaged results, and to detect (and, with rather more difficulty, to correct for) particularly sensitive questions or deviant interviewers.
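The nested allocation just described can be sketched in Python. This is an illustration of the design, not the survey's actual fieldwork procedure; the area and interviewer labels are hypothetical.

```python
import random

# Within each of 10 areas, respondents are assigned at random to 4
# interviewers; two interviewers complete 20 interviews and two complete
# 30, as in the survey (100 interviews per area, 1,000 in all).
random.seed(1)

AREAS = 10
QUOTAS = [20, 20, 30, 30]  # interviews per interviewer within an area

def assign_respondents(area):
    """Randomly allocate an area's respondents among its interviewers."""
    respondents = [f"A{area}-R{r}" for r in range(sum(QUOTAS))]
    random.shuffle(respondents)
    allocation, start = {}, 0
    for i, quota in enumerate(QUOTAS):
        allocation[f"A{area}-I{i}"] = respondents[start:start + quota]
        start += quota
    return allocation

design = {area: assign_respondents(area) for area in range(AREAS)}
total = sum(len(v) for area in design.values() for v in area.values())
print(total)  # 1000
```

Random allocation within area is what permits interviewer effects to be separated from genuine differences between the respondents each interviewer happened to contact.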

THE ANALYSIS OF VARIANCE MODEL

Interviewer and area differences in results were studied by means of an analysis of variance of individual question answers. Most responses were precoded on a 6-point numerical scale; for the others, simple codings (e.g., 0-1 for no-yes) were used. As each interviewer worked only in a single area, a two-way nested classification model was appropriate, and the following random effects model was applied:

$$x_{AIj} = \mu + E_A + F_{I(A)} + e_{AIj} \quad (1)$$

where $A$ and $I$ refer to area and interviewer, respectively. In this formula $x_{AIj}$ is the response for respondent $j$ interviewed by interviewer $I$ in area $A$, $\mu$ is the overall response mean, $E_A$ the effect due to area $A$, $F_{I(A)}$ the effect due to interviewer $I$ in area $A$ ($n_{I(A)}$ being the number of interviews completed), and $e_{AIj}$ is the error term.

Under the random effects model it is assumed that the effects $E_A$ are distributed over areas with zero mean and variance $\sigma_A^2$, that $F_{I(A)}$ is distributed over interviewers within area with zero mean and variance $\sigma_I^2$, and that $e_{AIj}$ is distributed over respondents within area, for each interviewer, with zero mean and variance $\sigma^2$. The survey design then can be thought of as:
1. a random sample of areas,
2. a random sample of interviewers within an area, and
3. a random sample of respondents for each interviewer within an area.
Note that under this model no attempt is made to measure any actual bias caused collectively by the interviewers involved ($F_{I(A)}$ being assumed to have zero mean), only the increase in sampling variation caused by them. Two statistics might be used, with rather different implications, for measuring interviewer effects. One is:

$$I_{int} = \sigma_I^2 / (\sigma^2 + \sigma_A^2 + \sigma_I^2) \quad (2)$$

which measures the size of variation among interviewers as a proportion of that for a completely random interview (i.e., interviewer, area, and respondent selected at random). The other is:

$$I_{des} = \left( \frac{\sigma^2}{nma} + \frac{\sigma_I^2}{ma} + \frac{\sigma_A^2}{a} \right) \Big/ \left( \frac{\sigma^2 + \sigma_I^2}{nma} + \frac{\sigma_A^2}{a} \right) \quad (3)$$

where it is assumed that there are $m$ interviewers in each of $a$ areas, each interviewing $n$ respondents, leading to total sample size $nma$. $I_{des}$ measures the size of variation in sample mean results as a proportion of that for a comparable survey in which a completely random sample of interviewers was used, i.e., one interview per interviewer. This is the "design effect" of using a sample clustered by interviewer, and will be highly dependent on the number of interviews carried out by each interviewer; in contrast, $I_{int}$ should be independent of this number ($n$). The situation is complicated by the fact that for the survey in question the number of interviews, $n_{I(A)}$, varies for different interviewers; however, the pattern within areas is always identical (two interviewers carrying out 20 interviews in total, two carrying out 30).
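Formulas (2) and (3) can be sketched as follows. This is a Python illustration with assumed variance components, not estimates from the survey; the denominator of (3) is taken to be the sampling variance of the mean under one interview per interviewer, as the text describes.

```python
def i_int(sigma2, sigma2_I, sigma2_A):
    """Interviewer variance as a share of total variance, formula (2)."""
    return sigma2_I / (sigma2 + sigma2_A + sigma2_I)

def i_des(sigma2, sigma2_I, sigma2_A, n, m, a):
    """Design effect of clustering by interviewer, formula (3):
    variance of the sample mean with n interviews per interviewer,
    relative to a design with one interview per interviewer."""
    clustered = sigma2 / (n * m * a) + sigma2_I / (m * a) + sigma2_A / a
    one_per_interviewer = (sigma2 + sigma2_I) / (n * m * a) + sigma2_A / a
    return clustered / one_per_interviewer

# Example: interviewer variance 3% of respondent variance, negligible area
# effect, 15 interviews per interviewer (hypothetical but of the size seen
# in Table 1 for the 0-questionnaire questions).
print(round(i_int(1.0, 0.03, 0.0), 3))                  # 0.029
print(round(i_des(1.0, 0.03, 0.0, n=15, m=4, a=10), 2)) # 1.41
```

With $n = 1$ the two variances coincide and $I_{des} = 1$ exactly, which is a convenient sanity check on the formula.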

The theory of Huitson [10] can be used to derive the following five results.
1. $M_A = \sum_A (\sum_{I=1}^4 n_I)(\bar{x}_{A..} - \bar{x}_{...})^2 / 9$ has mean value $\sigma^2 + (\sum_{I=1}^4 n_I)\,\sigma_A^2 + \{\sum_{I=1}^4 n_I^2 / \sum_{I=1}^4 n_I\}\,\sigma_I^2$.
2. $M_I = \sum_{A,I} n_I (\bar{x}_{AI.} - \bar{x}_{A..})^2 / 30$ has mean value $\sigma^2 + \{[(\sum_{I=1}^4 n_I)^2 - \sum_{I=1}^4 n_I^2] / \sum_{I=1}^4 n_I\}\,\sigma_I^2 / 3$.


3. $M_E = \sum_{A,I,j} (x_{AIj} - \bar{x}_{AI.})^2 / \sum_{A,I} (n_I - 1)$ has mean value $\sigma^2$.
4. Under normality conditions, the ratio $M_I / M_E$ has an F-distribution with degrees of freedom 30 and $\sum_{A,I} (n_I - 1)$ under the null hypothesis $\sigma_I^2 = 0$.
5. Under normality conditions, $M_A / M_I$ has an F-distribution with degrees of freedom 9 and 30 under the null hypothesis $\sigma_A^2 = 0$, providing $n_I = n$ for all $I$. However, this last test can be expected to be robust for small differences in $n_I$, and thus could be used as an approximate indicator of area differences.

Results 1-3 can be used to provide estimators of the variance components $\sigma^2$, $\sigma_I^2$, and $\sigma_A^2$ for any question part, and hence $I_{int}$ and $I_{des}$ via formulas 2 and 3, by use of averaged values of $n$ (number of interviews per interviewer) in formula 3 to obtain an approximate result. Similarly, results 4 and 5 could be used to test for the presence of interviewer and area effects, respectively.

ANALYSIS OF VARIANCE: INTERVIEWER EFFECTS

Table 1
INTERVIEWER EFFECTS

| Question | Description | Sample size | Number of parts | Mean F-value^a | Mean I_int | Mean I_des |
|---|---|---|---|---|---|---|
| A2^b | Partial similarity orders (most, next most, and least similar products) | 200 | 24 | 1.95 | 0.16 | 1.45 |
| 01 | Ratings of four products on various features | 600 | 216^d | 1.31 | 0.11 | 1.17 |
| 05 | Ratings of "ideal" product on various features | 600 | 9 | 2.54 | 0.09 | 1.39 |
| A1 | Importance ratings (of various product features) | 200 | 24 | 1.51 | 0.09 | 1.21 |
| 04 | "Buying intention" | 600 | 5 | 2.14 | 0.07 | 1.35 |
| A2^c | Partial similarity orders (most, next most, and least similar products) | 200 | 56 | 1.33 | 0.06 | 1.21 |
| X4 | Fishbein analysis: normative beliefs | 200 | 20 | 1.27 | 0.05 | 1.17 |
| 03 | Ratings of liking for products | 600 | 8 | 1.46 | 0.03 | 1.42 |
| X3 | Fishbein analysis: (a) "buying intention," (b) ratings on various features | 200 | 10 | 1.15 | 0.03 | 1.10 |
| A4 | "Buying intentions" | 200 | 5 | 1.14 | 0.03 | 1.10 |
| - | Classification details (questionnaire A) | 200 | 10 | 1.16 | 0.03 | 1.07 |
| 02 | Ratings of similarity between products | 100 | 6 | 1.08 | 0.03 | 1.04 |
| A3 | Preference order | 200 | 8 | 1.11 | 0.02 | 1.09 |
| X1 | Importance ratings (of a reduced set of product features) | 200 | 9 | 1.08 | 0.02 | 1.04 |
| - | Classification details (questionnaire 0) | 600 | 10 | 1.21 | 0.01 | 1.04 |
| - | Classification details (questionnaire X) | 200 | 10 | 0.95 | neg. | 0.98 |
| X2 | Paired comparisons (preferences for products) | 200 | 12 | 0.97 | neg. | 0.97 |
| X6 | Preference order | 200 | 8 | 0.97 | neg. | 0.97 |
| X5 | Ratings for liking for products | 200 | 8 | 0.92 | neg. | 0.94 |

^a Interviewers within area.
^b Based on response/nonresponse only.
^c Number of question parts = 24; converted to 8 x 8 off-diagonal matrix (56 off-diagonal terms).
^d Based on six subsamples of size 100 each.
neg. denotes negative estimate.

Values of $I_{int}$ and $I_{des}$, calculated as just described, for each question on the three questionnaires (averaged over the parts of the questions concerned) are given in order of decreasing $I_{int}$ in Table 1, as well as averaged F-test statistics for interviewer differences within area. The latter values are dependent on the

sample sizes involved, which range from 100 to 600. Significance tests cannot be applied to them as individual test values will be correlated, but they give an indication of the degree to which individual test statistics are significant. Values of $I_{des}$ are also dependent on the sample size involved (because the same number of interviewers was always used), but those of $I_{int}$ should be comparable across samples, although subject to greater sampling variation for smaller samples. As can be seen from Table 1, interviewer variation as measured by $I_{int}$ is in many cases very pronounced; for several questions interviewer variance is nearly 10% of that ascribable to simple random sampling.¹

¹These values are not directly comparable to the roh statistic used by Kish [13], where the largest roh value observed over a number of attitudinal surveys is 0.11. It can be shown, by use of the approximate formula of Kish [13], that in terms of the parameters of equation 1 one has, approximately,

$$roh = \sigma_I^2 / (\sigma^2 + \sigma_I^2 + nm\,\sigma_A^2) \quad (4)$$

The largest value of this statistic, again averaged over each part of any individual question, turns out to be 0.11 for question 01, other values being 0.05 or below. Values of $I_{int}$ (rather than roh) are presented in Table 1 as it was believed to be a more objective measure of the size of interviewer variation: it does not depend on the survey design or the size of area effects present as roh does.
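The estimation route of results 1-3 can be checked by simulation. The sketch below uses the survey's 20/20/30/30 workload pattern but illustrative variance components, not the survey data; it forms the mean squares $M_A$, $M_I$, $M_E$ and inverts their expectations (for this pattern, $E[M_I] = \sigma^2 + (74/3)\sigma_I^2$ and $E[M_A] = \sigma^2 + 26\sigma_I^2 + 100\sigma_A^2$).

```python
import random, statistics

random.seed(2)
a, quotas = 10, [20, 20, 30, 30]          # areas; interviews per interviewer
s2, s2_I, s2_A = 1.0, 0.09, 0.04          # true sigma^2, sigma_I^2, sigma_A^2

def simulate():
    """Draw one survey's worth of data from the nested model (1)."""
    data = []
    for _ in range(a):
        E = random.gauss(0, s2_A ** 0.5)
        for n_I in quotas:
            F = random.gauss(0, s2_I ** 0.5)
            data.append([E + F + random.gauss(0, s2 ** 0.5) for _ in range(n_I)])
    return data  # one list of responses per interviewer, grouped by area

def mean_squares(data):
    """M_A, M_I, M_E of results 1-3 for the 10-area, 4-interviewer design."""
    per_I = [(len(g), statistics.fmean(g)) for g in data]
    N = sum(n for n, _ in per_I)
    grand = sum(n * mu for n, mu in per_I) / N
    MA = MI = 0.0
    for A in range(a):
        block = per_I[A * 4:(A + 1) * 4]
        N_A = sum(n for n, _ in block)
        xbar_A = sum(n * mu for n, mu in block) / N_A
        MA += N_A * (xbar_A - grand) ** 2
        MI += sum(n * (mu - xbar_A) ** 2 for n, mu in block)
    ME = sum(sum((x - mu) ** 2 for x in g) for g, (n, mu) in zip(data, per_I))
    df_e = sum(len(g) - 1 for g in data)
    return MA / 9, MI / 30, ME / df_e

# Average the mean squares over replications, then solve for the components.
MAs, MIs, MEs = zip(*(mean_squares(simulate()) for _ in range(300)))
s2_hat = statistics.fmean(MEs)
sI_hat = (statistics.fmean(MIs) - s2_hat) / (74 / 3)
sA_hat = (statistics.fmean(MAs) - s2_hat - 26 * sI_hat) / 100
print(round(s2_hat, 2), round(sI_hat, 2), round(sA_hat, 2))  # near 1.0 0.09 0.04
```

Averaged over replications the estimates recover the components fed into the simulation, which is the consistency property the article's estimators rely on.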




The largest value of $I_{des}$ observed for scaled responses was 1.42 for question 03. This represents an increase in variance of more than 40% caused by clustering of interviews by interviewer. The maximum number of individual questionnaires completed per interviewer was 18 (for questionnaire 0); had more interviews per interviewer been carried out, such effects would have been even more pronounced. The major interviewer differences arising are for:
- ratings of products on various features (01),
- ratings of an "ideal" product on various features (05),
- importance ratings for various product features (A1);
and, to a lesser extent,
- partial similarity orders (A2),
- Fishbein analysis: normative beliefs (X4),
- "buying intentions" (04).
The major common factors seem to be that questions are either long and repetitive (01, A1, A2), or involve unusual concepts such as "ideal product," "similarity of products," or "normative beliefs" (05, A2, X4). The "buying intention" question (04) appears to be an exception: an identical question on questionnaire A (A4) has smaller differences, but detailed analysis shows these to be consistent with those of 04, as discussed hereafter. Differences in classification details between interviewers (within area) are in general small; of 30 F-tests for the three questionnaires, one is significant at the 1% level, and two more are just significant at the 5% level. The overall estimate of the variance ratio $I_{int}$ for the 10 classification questions is 0.01, suggesting that interviewer differences due to variation in coding or to a tendency to "clump" their interviews are small, and certainly could not account for other differences observed. The argument that boredom on long repetitive questions will introduce interviewer differences is supported by the fact that X1, a shorter version of A1, has only small differences between interviewers. In addition, the rank correlation between the values of the interviewer variance ratio and the order of the question part was 0.35 which, though not quite significant at the 5% level, suggests that differences arise at least partly from differential prompting or hurrying when respondents become bored with the repetitive nature of the question. However, for question 01, which consisted of ratings on nine features for each of four products in turn, no relationship with the order of the question part was detected, and hence the length of the question presumably affects interviewer treatment of the initial question parts as well.
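The within-area F-tests quoted above (result 4) can be sketched by simulating their null distribution. This is illustrative Python, not the authors' procedure: the 5% point of F(30, 960) for the survey's 20/20/30/30 pattern is approximated empirically rather than taken from tables.

```python
import random, statistics

random.seed(3)
a, quotas = 10, [20, 20, 30, 30]

def null_f_ratio():
    """M_I / M_E for data with no interviewer effect (sigma_I^2 = 0)."""
    MI = ME = 0.0
    df_e = 0
    for _ in range(a):
        groups = [[random.gauss(0, 1) for _ in range(n)] for n in quotas]
        means = [statistics.fmean(g) for g in groups]
        xbar = sum(n * m for n, m in zip(quotas, means)) / sum(quotas)
        MI += sum(n * (m - xbar) ** 2 for n, m in zip(quotas, means))
        ME += sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
        df_e += sum(n - 1 for n in quotas)
    return (MI / 30) / (ME / df_e)

ratios = sorted(null_f_ratio() for _ in range(1000))
print(round(ratios[949], 2))  # approximate 5% critical value of F(30, 960)
```

The simulated critical value comes out near 1.46, which gives a feel for why mean F-values of the size reported in Table 1 are noteworthy even before formal testing.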
Question 05 is identical in type to 01, except that ratings are asked of an "ideal" product only. Interviewer differences similar to those of 01 arise, but it is difficult to judge whether these are due to the

question being regarded as an extension of 01, or to the novel concept of an "ideal" product requiring interviewer explanation. Two questions involving respondents explicitly rating the similarity of products were involved in the survey, A2 and 02. Question A2 required respondents to nominate most similar, next most similar, and least similar products for each of the eight products in turn, whereas 02 required ratings of similarity (on a 6-point scale) for six selected pairs of products. As the results of question A2 were to be analyzed by multidimensional scaling with "similarity matrices" (two-dimensional arrays of similarities) derived from these data, the model originally was applied to the elements of these matrices, consisting of 56 off-diagonal terms. A large number of significant interviewer effects were observed (10 significant at the 1% level) and were particularly noticeable for measures of dissimilarity involving the two products perceived to be very different from the rest. It was suspected that the differences observed for this question were largely due to no responses being elicited for some question parts by some interviewers. An analysis of variance therefore was also carried out for the response rates to each of the 24 question parts; the results averaged over all question parts are included in Table 1. It can be seen that a very large amount of interviewer variation is present, although individual F-values may be inflated because the response rate was very close to 100% for some questions. Differences in response rates seem to explain most of the similarity matrix discrepancies observed, and answers to question A2 therefore may be suspect because of differential prompting by interviewers in situations where some respondents apparently feel that they cannot answer sensibly (particularly in choosing products similar to the two products previously mentioned, these being seen as very different from the rest). 
In contrast, the simple similarity rating question (02) has a response rate of 100% and few interviewer differences appear; however, this question extracts less information. The most prominent interviewer differences for the "Fishbein analysis" questions occur for the "normative belief" questions in the form, "How likely would ... be to think that you should buy this product?" asked for a number of people or groups of people (spouse, children, neighbors, etc.).² This rather artificial form probably required interviewer explanation or prompting. More surprisingly, large interviewer variation also is observed for question 04, which consists of "likelihood to buy" ratings (at specified price) for the five products at present unavailable, and also to a lesser extent for question 03 involving ratings of liking (on

²For a description of the models of Fishbein, see [5].



a 6-point scale) for each of the full set of eight products; these questions are obviously of fundamental marketing importance in this context. Corresponding questions on the other questionnaires (A4 and X5, respectively) do not show such large effects. The differences observed for 04 and 03 turn out to be principally due to a single interviewer being heavily biased against a certain subset of the products (namely, those more futuristic in style); and closer examination of questions A4 and X5 shows the same results to be repeated in less pronounced form. For other questions, where more than one interviewer is discrepant, the discrepancies generally tend to cancel each other when accumulated. An exception is the "similarity" question A2 just described, where interviewer differences are principally due to differential nonresponse, which in this case leads to systematic effects on the measures of similarity between styles calculated from responses to this question. The overall effect of these discrepant interviewers on proportions, average ratings, etc., is small (in contrast to the effect on the variation in ratings), because a total of 40 interviewers were involved.
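The dilution point can be illustrated with a small back-of-envelope computation (hypothetical numbers, assuming equal workloads for simplicity):

```python
import statistics

# One interviewer out of 40, biased downward by a full scale point, shifts
# the overall mean rating only slightly, even though that interviewer's own
# mean is clearly discrepant.
true_mean, bias, interviewers = 4.0, -1.0, 40
means = [true_mean] * (interviewers - 1) + [true_mean + bias]
overall = statistics.fmean(means)
print(overall)  # 3.975, a shift of only 0.025 in the average rating
```

The same deviation, however, contributes fully to the between-interviewer variance component, which is why discrepant interviewers matter for sampling variation much more than for averaged results.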

FOLLOWUP SURVEY OF INTERVIEWER VIEWS

The results of the foregoing section indicated the potential importance of interviewer effects for surveys of the type studied. Thus it was decided to conduct a followup survey of interviewer views on the difficulties involved in the application of the questionnaires for the survey, and also their views on the subject matter, i.e., the products concerned, to investigate whether these might have affected the interviews. In August 1974 each of the interviewers was asked to, and did, complete a questionnaire consisting of:
1. Ratings of a set of key questions selected from the original survey (those showing large interviewer effects) in terms of:
   a. length (3-point scale),
   b. repetitivity (3-point scale),
   c. respondent interest (3-point scale),
   d. explanation required (yes/no), and
   e. meaningfulness (yes/no).
2. Interviewers' own responses to these questions.
It is realized that because this survey was carried out after the main survey the answers could have been affected by the views of the respondents previously interviewed rather than vice versa. This effect would invalidate any conclusions on the effect of interviewer views on respondents. To yield firm conclusions such questions would have to be asked before the interviews were carried out. For point 2 this could reasonably, and quite naturally, be done at the briefing stage but, unless merely subjective views are considered adequate, the questions involved in point 1 would need to be applied in midsurvey,

and only later interviews analyzed in relation to them. However, analysis by means of the followup survey is presented in the next two sections, and tentative conclusions drawn, in the hope that it might provide guidelines for any future research.

Interviewer Ratings of the Questions

With two exceptions (discussed hereafter), responses were generally favorable, although about 15% of interviewers rated question A1 (importance ratings for 24 product features) as "very long" and question 05 (ratings of an "ideal" product on nine features) as "very repetitive," presumably because it repeats the format of the four parts of question 01. Interviewers were critical of two questions: A2 (partial similarity orders; see the interviewer effects section), which about half rated as "very long," "very repetitive," and "very difficult to answer," and X4 (Fishbein analysis normative beliefs), which more than 70% rated as "requiring considerable explanation" and more than 85% rated as "difficult to answer meaningfully." For the set of key questions as a whole, however, no overall relationship between these ratings and the size of interviewer effects was apparent. Similarly, individual interviewers identified as obtaining discrepant responses to particular questions showed no tendency to be any more (or less) critical than average for these questions.

Relation of Respondents' Answers to Interviewers' Answers

It was postulated that where respondent answers to a particular question have been shown (by the analysis of variance model) to be discrepant between interviewers in a particular area, the views of one or more of the interviewers concerned may in fact be insinuating themselves upon some of the respondents. The model used to represent this situation is:

$$Y_{Ijr} = y_j^* + \lambda_I (x_{Ij} - y_j^*) + e_{Ijr} \quad (5)$$

For a single question (with $n$ parts) within a fixed area, $Y_{Ijr}$ is the response obtained for the $j$th question part by the interviewer $I$ from respondent $r$. Only a single interviewer ($I'$) is assumed to be producing interviewer effects; $y_j^*$ measures the mean (unaffected) response obtained by the remaining interviewers for the $j$th question part, $x_{Ij}$ measures the $I$th interviewer's own response, and $e_{Ijr}$ is an error term, assumed to have zero mean for each interviewer. This model postulates that responses are "attracted" on average a proportion $\lambda_I$ of the way toward the views of the single deviant interviewer concerned. The assumption that at most one interviewer (within



a given area) is responsible for such effects is essentially one of convenience. In practice one might anticipate that any effects would be present to a greater or lesser extent for most interviewers. However, for a model attempting to relate responses achieved by interviewers to their own views for all interviewers simultaneously, any effects present are likely to be greatly diluted by the results of the majority of interviewers, which are compatible with chance variation and are thus likely to have been affected to at most a small degree by differences in interviewing technique.
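A least squares fit of the attraction model can be sketched as follows. This is illustrative Python: `fit_lambda` and the data values are hypothetical, not the paper's.

```python
def fit_lambda(own, others_mean, responses):
    """Least-squares estimate of lambda_I in the attraction model (5).

    own[j]        : the deviant interviewer's own response to part j
    others_mean[j]: mean response y*_j obtained by remaining interviewers
    responses[j]  : list of responses this interviewer obtained for part j
    """
    num = den = 0.0
    for x, y, ys in zip(own, others_mean, responses):
        d = x - y                        # pull direction for this part
        num += d * sum(r - y for r in ys)
        den += d * d * len(ys)
    return num / den

# Hypothetical illustration: respondents pulled 40% of the way toward the
# interviewer's own views on each of five question parts.
own = [6, 2, 5, 1, 4]
others = [4, 4, 3, 3, 3]
resp = [[y + 0.4 * (x - y)] * 3 for x, y in zip(own, others)]
print(round(fit_lambda(own, others, resp), 2))  # 0.4
```

The estimator pools all question parts, which is why questions with few parts (5, 8, or 9) give noisier values of $\lambda_I$, as noted below.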

RESULTS

From the results of the analysis of variance, several areas were selected as having pronounced interviewer effects for particular questions. A total of 14 questions-by-area was generated in this manner for analysis by this model, and a further selection was made for each question of individual interviewers who showed the largest effects (measured as a simple sum of squares of their mean responses compared with those of the remaining interviewers in their area); a total of 24 questions-by-interviewer thus was generated for particular attention. The model was fitted to the data for each interviewer within the 14 questions-by-area by use of a least squares criterion. Unfortunately no significance test was possible for the hypothesis $\lambda_I = 0$ for this model for individual respondents, as responses to question parts of the same question certainly would be correlated. The values of $\lambda_I$ obtained for each of the 56 questions-by-interviewers are presented in Table 2. The results are split into 25 instances where appreciable interviewer variation was observed (and therefore positive values of $\lambda_I$ might be expected under the model) and the rest, where at most small interviewer effects are observed and hence $\lambda_I$ also would be thought to be small.

Table 2
VALUES OF λ_I CALCULATED FOR VARIOUS QUESTIONS AND INTERVIEWERS

| λ_I | Restricted list | Selected interviewers | Remaining interviewers | All interviewers |
|---|---|---|---|---|
| 1.0 or more | 1 | 1 | 0 | 1 |
| 0.8 to 1.0 | 0 | 0 | 0 | 0 |
| 0.6 to 0.8 | 1 | 1 | 0 | 1 |
| 0.4 to 0.6 | 1 | 4 | 4 | 8 |
| 0.2 to 0.4 | 3 | 4 | 6 | 10 |
| 0.0 to 0.2 | 6 | 8 | 9 | 17 |
| -0.2 to 0.0 | 1 | 3 | 9 | 12 |
| -0.4 to -0.2 | 2 | 4 | 2 | 6 |
| -0.6 to -0.4 | 0 | 0 | 0 | 0 |
| -0.8 to -0.6 | 0 | 0 | 0 | 0 |
| -1.0 to -0.8 | 0 | 0 | 0 | 0 |
| below -1.0 | 0 | 0 | 1 | 1 |
| Total | 15 | 25 | 31 | 56 |
| Percentage positive | 80 | 72 | 61 | 66 |

Because the accuracy of fitting the model will depend on the number of question parts involved, and several questions had only a small number of parts (5, 8, or 9), a restricted selection of 15 questions-by-interviewer was made, with concentration on longer questions and larger observed effects, in the hope of highlighting any relationships present. It can be seen from Table 2 that 37 of the 56 estimated $\lambda_I$ values are positive, a result significantly different from a 50-50 split at the 5% level (and the 1% level for a one-sided test). In addition there are signs of a greater proportion of positive values occurring with greater selectivity of questions and of interviewers within questions. The range of estimates observed is -0.4 to 1.0 with the exception of a single value of -1.6 for a question with only five question parts on which to base estimates. (A value of $\lambda_I = 1$ would imply that respondents had altered their views to coincide with those of the interviewer on average.) Thus there is some evidence of a relationship between respondent views and interviewer views in a limited number of instances, but as interviewers' own responses were not obtained until after the main survey, proof of the direction of causality must await further studies.
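The sign test above can be reproduced with an exact binomial computation (a Python check; the original figures may rest on a normal approximation, so the tail probabilities need not match exactly):

```python
from math import comb

# Exact binomial tail for 37 positive lambda estimates out of 56,
# against a null 50-50 split.
n, k = 56, 37
p_upper = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
print(round(p_upper, 4), round(2 * p_upper, 4))  # one- and two-sided p-values
```

Either way the two-sided probability falls below 5%, supporting the conclusion that the excess of positive estimates is unlikely to be chance.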

CONCLUSIONS

The following conclusions drawn from this investigation of interviewer effects are specific to the survey concerned, but it is hoped that they will serve as guidelines for complex marketing surveys in general.
1. As measured by "between interviewer" variances, interviewer differences of more than 10% of the basic random sampling variance were observed for the survey described. Where a large number of interviews per interviewer are carried out, such differences would be a major source of additional variance for averaged results.
2. Sizeable interviewer differences were detected for several questions; these were principally either of a repetitive nature or involved possible difficulty in interpretation, although a short "buying intention" question also produced noticeable interviewer effects.
3. In long repetitive questions interviewer differences seemed to show only a limited relationship to the order of the question parts: interviewer influence appears to be present from the start rather than only when respondents or interviewers become bored.
4. For most questions, a large part of any interviewer variation in responses was attributable to a small number of individual interviewers. These interviewers did not appear to differ from the other interviewers in their opinions of the quality of the questions concerned; their effects on overall (mean) results obtained were much less pronounced than their effects on sampling variation.
5. In a limited number of instances, a relationship between respondent answers and interviewers' own answers to the survey questions was detected. However, from the data collected the direction of causality cannot be inferred.

REFERENCES

1. Boyd, Harper W., Jr. and Ralph Westfall. "Interviewers as a Source of Error in Surveys," Journal of Marketing, 19 (April 1955), 311-24.
2. Boyd, Harper W., Jr. and Ralph Westfall. "Interviewer Bias Re-visited," Journal of Marketing Research, 2 (February 1965), 58-63.
3. Boyd, Harper W., Jr. and Ralph Westfall. "Interviewer Bias Once More Revisited," Journal of Marketing Research, 7 (May 1970), 249-53.
4. Felligi, I. P. and G. G. Gray. "Sampling Errors in Periodic Surveys," Proceedings, Social Statistics Section, American Statistical Association, 1971.
5. Fishbein, M. "Attitude and the Prediction of Behavior," in M. Fishbein, ed., Readings in Attitude Theory and Measurement. New York: John Wiley and Sons, Inc., 1967.
6. Green, Paul and Frank Carmone. Multidimensional Scaling and Related Techniques in Marketing Analysis. Boston: Allyn and Bacon, 1972.
7. Hanson, R. H. and E. S. Marks. "Influence of the Interviewer on the Accuracy of Survey Results," Journal of the American Statistical Association, 53 (1958), 635-55.
8. Harman, H. H. Modern Factor Analysis. Chicago: University of Chicago Press, 1960.
9. Hendrikson, A. E. and E. J. Willson. "Variation on St. James," Market Research Society Annual Conference, 1972, 49-58.
10. Huitson, A. The Analysis of Variance: A Basic Course. London: Griffin & Co., 1971.
11. Kemsley, W. F. F. "Interviewer Variability and a Budget Survey," Applied Statistics, 9 (1960), 122-8.
12. Kemsley, W. F. F. "Interviewer Variability in Expenditure Surveys," Journal of the Royal Statistical Society (A), 128 (1965), 118-39.
13. Kish, L. "Studies of Interviewer Variance for Attitudinal Variables," Journal of the American Statistical Association, 57 (1962), 92-115.
14. Kish, L. Survey Sampling. New York: John Wiley and Sons, Inc., 1965.
15. Sudman, S. and N. M. Bradburn. Response Effects in Surveys. Chicago: Aldine Publishing Co., 1974.

