Observing "The Observers Observed": A Comment*

TOM W. SMITH, National Opinion Research Center
WOODY CARTER, National Opinion Research Center

Peneff's (1988) ethnographic study of interviewers at the Institut National de la Statistique et des Etudes Economiques (INSEE) provides rich insight into how the institutional history and administrative structure of INSEE shape data collection in the field. In particular, it documents how and why interviewers deviate from the rules and interviewing procedures established by INSEE. As an idiographic study of INSEE, Peneff's is an interesting "peek behind the veil" of data collection. But Peneff's article runs into major problems when it tries to generalize beyond the unique experience of INSEE to survey research in general.

First, Peneff completely ignores the long research tradition on interviewing and interviewers. From Hyman's seminal work in the 1940s and 1950s (1954) to Converse and Schuman's detailed study (1974) of interviewers on the Detroit Area Study of the University of Michigan and in association with the Institute for Social Research (ISR), survey researchers have studied many aspects of interviewing. To give only a few examples, these include (1) how characteristics of respondents and interviewers interact to influence responses (Hyman 1954; Schaeffer 1980; Anderson, Silver, and Abramson 1988); (2) how to train interviewers and evaluate their performance (Cannell, Lawson, and Hausser 1975; Fowler and Mangione 1986; Barioux 1952); (3) how often and in what ways interviewers deviate from the scripted questions (Bradburn and Sudman 1979); and (4) how vocal attributes of interviewers affect cooperation and responses (Groves and Magilavy 1986; Oksenberg, Coleman, and Cannell 1986).

Second, Peneff fails to appreciate how special and atypical the situation at INSEE appears to be. In terms of structure, supervision, and field staff, INSEE does not resemble any major American survey research organization.
For example, while most INSEE interviewers are "male and old" (Peneff 1988:525), personal interviewers for all national American surveys are overwhelmingly female and usually middle-aged. In addition, INSEE interviewers are paid a piece rate (Peneff 1988:525), while American interviewers are almost always paid by the hour. Finally, there is little indication of meaningful supervision of INSEE interviewers; personal interviewers in American survey research are supervised through regular phone reports, direct observation in the field, and taping, often by supervisors with extensive personal experience as interviewers. Centralized telephone interviewers are even more closely monitored and supervised. These differences apparently result from the institutional history and structure of INSEE that Peneff ably relates.

Third, Peneff misjudges the situation at the ISR (Peneff 1988:521, n. 2), which he cites to prove a general lack of attention to interviewing and interviewers by survey research. Probably no organization has done more to investigate these issues than ISR. In fact, among the sample of literature we cited above, half were from ISR authors and surveys. Peneff's unfamiliarity with ISR is further shown by his claim that their interviewers are "expected to complete 45 questionnaires daily" (Peneff 1988:521, n. 2). Considering the seven-hour day that he reports they work, this comes to a finished interview every 9.3 minutes! Given that ISR interviews typically last several times longer than nine minutes and that no time is allotted for no answers, conversion attempts, and other delays, the figure of 45 interviews per day is obviously absurd.

Finally, Peneff frequently draws general conclusions that are extreme and inadequately supported by his evidence. For example, he emphasizes the interviewers' "failure to respect the wording of questions" (Peneff 1988:531) and how they are "continuously" changing wordings and content (Peneff 1988:521, 530). He concludes that "No question is asked exactly as worded" (Peneff 1988:521). Does he mean that no interviewer ever asks any question as written? While interviewer ad-libbing certainly does occur, the best available empirical evidence from American surveys finds that, in about one quarter to one third of question administrations, the wordings are modified by some addition, deletion, or substitution of words (Bradburn and Sudman 1979; Fowler and Mangione 1986). Most of these modifications are small in magnitude and neutral in their impact.

Another exaggerated claim is that "The survey design is not suited to the study of heterogeneous populations" (Peneff 1988:533). Given that any national sample and most other samples cover heterogeneous populations, this would seem to eliminate surveys as a valid scientific procedure for collecting data. Peneff bases this conclusion on the alleged need to specially phrase questions to communicate with class and cultural minorities. He actually offers little or no evidence that respondents from different segments of society cannot understand the questions. Survey researchers are well aware of the potential problems of communicating across social groups. In general, survey questions are intentionally worded in simple terms that can be well understood by the general population. While class, regional, and other minorities may well have special accents and argot (as in the case of Cockney rhyming slang and black English), these groups are almost always capable of understanding the mainstream dialect and the vocabulary of the daily newspapers and television. The special problem of language differences in immigrant populations or indigenous linguistic minorities is handled by special means. In the case of the Hispanic population in the United States, bilingual interviewers and translated questionnaires are commonly employed. Multilingual interviewing is, of course, universal in Canada, Switzerland, and other multilingual nations. In other instances, the survey population may be defined to cover only native speakers. In general, survey researchers stress the need for questions that are readily understandable by the general population; use focus groups, pretests, interviewer specifications, and other procedures to clarify meaning and achieve wide understandability; and utilize special techniques, such as bilingual interviewing, when confronted with special problems. When such care is taken, surveys are eminently suitable for studying heterogeneous populations.

Interviewer behavior is an important topic, as important as any, in the field of survey research. Peneff's account of INSEE's practices in the Pays de Loire center is interesting and shows how institutional cultures can subvert scientific protocols. But, while Peneff describes problems such as altering question wordings and ignoring sampling procedures that do occur in survey research, these problems are not nearly so common nor so grave as he pictures them.

* Correspondence to: Smith, National Opinion Research Center, University of Chicago, 1155 East 60th Street, Chicago, IL 60637.

SOCIAL PROBLEMS, Vol. 36, No. 3, June 1989, pp. 310-312
References

Anderson, Barbara A., Brian D. Silver, and Paul R. Abramson
1988 "Interviewer race and black voter participation." Public Opinion Quarterly 52:53-83.
Barioux, Max
1952 "A method for the selection, training, and evaluation of interviewers." Public Opinion Quarterly 16:128-30.
Bradburn, Norman M., and Seymour Sudman
1979 Improving Interview Method and Questionnaire Design. San Francisco, CA: Jossey-Bass.
Cannell, Charles F., Sally A. Lawson, and Doris L. Hausser
1975 A Technique for Evaluating Interviewer Performance. Ann Arbor, MI: Institute for Social Research.
Converse, Jean, and Howard Schuman
1974 Conversations at Random: Survey Research as Interviewers See It. New York: John Wiley and Sons.
Fowler, Floyd J., Jr., and Thomas W. Mangione
1986 Reducing Interviewer Effects on Health Survey Data. Washington, DC: National Center for Health Services Research and Health Care Technology Assessment.
Groves, Robert M., and Lou J. Magilavy
1986 "Measuring and explaining interviewer effects in centralized telephone surveys." Public Opinion Quarterly 50:251-66.
Hyman, Herbert, William J. Cobb, Jacob J. Feldman, Clyde W. Hart, and Charles Herbert Stember
1954 Interviewing in Social Research. Chicago: University of Chicago Press.
Oksenberg, Lois, Lerita Coleman, and Charles F. Cannell
1986 "Interviewers' voices and refusal rates in telephone surveys." Public Opinion Quarterly 50:97-111.
Peneff, Jean
1988 "The observers observed: French survey researchers at work." Social Problems 35:520-35.
Schaeffer, Nora Cate
1980 "Evaluating race-of-interviewer effects in a national survey." Sociological Methods and Research 8:400-19.