CHARLES S. MAYER*
The accuracy of marketing research results is affected by sampling and nonsampling errors. While the former can be computed objectively using sampling theory, the latter depend on subjective estimates. Such estimates can be facilitated by examining relevant empirical evidence. This article presents a model for integrating the subjective estimates about nonsampling error into estimates of total survey error and demonstrates its use through an example.
Assessing the Accuracy of Marketing Research
The marketing research profession has been deluding its clients and itself that the work it performs is more accurate than it really is. When a research report contains any statement about accuracy, it usually refers to potential errors introduced by sampling. If it is a detailed report, some nonsampling errors may be discussed, but evidence of them seldom gets translated into a quantitative evaluation of survey accuracy, for several reasons. First, including nonsampling error assessments in estimates of total survey error moves the discussion into the realm of subjectivity. Until recent gains in respectability, such subjectivity was (and in some circles still is) deemed unscientific. Second, the pressure to include nonsampling error assessments in total survey error estimates has not been applied by research users, and a self-imposed pressure is not likely to come from the research profession because it would of necessity weaken the research output and lessen its apparent value in decision making. Yet survey accuracy estimates based on sampling error alone do not truly portray the risks in using the results. While undoubtedly some implicit adjustments are made to reported "confidence intervals," the chances of such adjustments being correct are indeed slim. As this article suggests, quantitative adjustments based on empirical findings are needed to assess the accuracy of marketing research.
THE MEANING OF ACCURACY

Accuracy in research describes the probability that the research may mislead the user. According to a definition offered by Kish, "Accuracy is the inverse of the total error, including bias as well as the variance" [6, p. 25]. If some true but unknown quantity t can be identified as the variable under study, and a research result r obtained, the concept of accuracy would be useful in describing how far r may fall from t; the greater the accuracy, the less the chance of a specified divergence. Any one research result will lie a unique but unknown distance from t. The value t could be obtained immediately from r if this distance were known with certainty. But in most practical situations it is unknown. However, by studying the process which generated the research result, one may be able to say something about the probability of r being a certain distance away from t. This probability would be obtained from the conditional distribution P(r | t). The amount of bias (assumed as known) would be given by the distance between the expectation of this distribution and t, while the variance of this distribution would tell about both the randomness of the generating process and the uncertainty associated with it. To make the distribution more general (i.e., independent of the units of r or t), one can divide through by t and talk about the distribution P(r/t | t). In many research situations it is reasonable to assume that the ratio (r/t) is independent of the value of t. For example, if a 10% overreporting of life insurance policies is believed equally likely whether the average policies held were $15,000 or $20,000, the conditioning on t can be dropped. Perception of the accuracy of a research technique is
* Charles S. Mayer is Professor of Marketing, Faculty of Administrative Studies, York University, Toronto, Canada. The contributions of Professor Rex V. Brown of the University of Michigan to the analytical structure of this paper are acknowledged both here and by the references to his work.
Journal of Marketing Research, Vol. VII (August 1970), 285-91
encoded by assessing the distribution of the ratio (r/t), identified as the error ratio [9]. If the result-generating process is unbiased, the expectation of (r/t), E(r/t), will equal 1.0. The variance of the distribution of (r/t) reflects uncertainty about the relation between r and t, be it due to randomness of the process or ignorance of it. It is often desirable to express this variance in relative terms, that is, in terms of relvariances. A relvariance is a pure number obtained by dividing the variance by the mean squared. Since accuracy is the inverse of total error, and error is commonly defined in standard deviation terms, the accuracy of a research procedure is best expressed by the coefficient of variation, the square root of the relvariance. In summary, the accuracy of a research process can be encoded through the distribution of the error ratio (r/t). As a minimum, this distribution can be described by its expectation and its coefficient of variation. Since the expectation refers to "known bias," the relevant differentiation between alternative research designs is their coefficient of variation. For a fixed budget, for example, the research design with the lowest coefficient of variation should be selected.
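Stated compactly, for the error ratio (r/t):

\[
\mathrm{relvar}(r/t) \;=\; \frac{\operatorname{Var}(r/t)}{\bigl[E(r/t)\bigr]^{2}},
\qquad
\mathrm{CV}(r/t) \;=\; \sqrt{\mathrm{relvar}(r/t)},
\]

with E(r/t) = 1.0 when the result-generating process is unbiased.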
Segmentation of the Error Ratio

It is very difficult to assess the distribution of the error ratio in one step. All sources of error are combined in this single estimate. Any intuitive combination of these sources is likely to be wrong, with an overweighting probably occurring for the more obvious errors. An alternative is to segment the error ratio into components and make independent estimates for each one. For example, the total error ratio can be segmented into the following components:
r/t = (m/t) x (f/m) x (s/f) x (a/s) x (r/a)

where:
  r is the reported sample mean
  t is the true mean of the population
  m is the mean that would be obtained if the same measurement techniques were applied to the population
  f is the value that would be obtained if the measurement were applied to the sampling frame
  s is the expectation of the distribution of selected sample means
  a is the expectation of the distribution of achieved sample means.

Then:
  m/t is the measurement error
  f/m is the frame error
  s/f is the selection error
  a/s is the nonresponse error
  r/a is the random sampling error.

Several points deserve mention. First, this segmentation of error terms can be continued. For example, it is often desirable to segment nonresponse into noncontacts
and refusals. The limits to segmentation are: (1) at each stage of segmentation, the errors of the component terms must be more certain than the error of the aggregate term, and (2) to avoid assessing covariance terms, it is desirable to segment only independent error sources. Certain component error terms refer to exclusion errors. For example, both frame and nonresponse errors occur because of the exclusion of certain members of the population. In order to assess the significance of such exclusions, the proportion excluded, w, has to be known. Looking at nonresponse error, and noting that the selected mean is a weighted average of the respondent and nonrespondent means, s = (1 - w)a + wn, the term (a/s) can be rewritten as:

a/s = 1 / [1 - w(1 - n/a)]
where n is the expectation of the distribution of sample means among nonrespondents. It is easier to estimate the distribution of (n/a) and w than (a/s) directly. Incidentally, this formulation demonstrates that nonresponse can introduce serious error only if there is a high nonresponse rate (w) and nonrespondents differ significantly from respondents on the variable under study (n/a differs significantly from 1.0).
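As a minimal numerical sketch (not part of the original analysis), the composite ratio implied by this formulation can be evaluated directly; the function below simply computes a/s = 1/[1 - w(1 - n/a)]:

```python
def exclusion_error_ratio(w, ratio):
    """Composite error ratio a/s when a proportion w of the selected sample is
    excluded and the excluded group's mean is `ratio` times the included mean (n/a)."""
    return 1.0 / (1.0 - w * (1.0 - ratio))

# Serious nonresponse error needs both a high exclusion rate and a ratio far from 1.0:
print(round(exclusion_error_ratio(0.25, 0.8), 3))   # 1.053 -- 25% excluded, reporting 20% less
print(round(exclusion_error_ratio(0.02, 0.8), 3))   # 1.004 -- same bias, only 2% excluded
print(round(exclusion_error_ratio(0.25, 1.0), 3))   # 1.0   -- high exclusion, but no difference
```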
Estimation of the Distribution of Component Error Ratios

After deciding which component error ratios to estimate, the next step is to make the actual estimations. The following procedure has been found useful:

1. The assessor estimates the expectation of the component ratio. This will be 1.0 if the ratio is unbiased.
2. Estimates are also made of the interval in which the assessor is 95% sure that the value of the ratio lies, i.e., the 95% credible interval (C.I.).

An illustration of a hypothetical assessment is shown in Figure 1. From such an assessment the mean and the relvariance of the component error ratio can be obtained; the latter is derived by dividing the credible interval by four times the expectation and squaring. The various component error ratio estimates are then combined using a simple technique developed by Brown [1]. The expectation of the total error ratio equals the product of the expectations of the component error ratios, and its relvariance equals the sum of the component relvariances plus a product correction term which usually is negligible.
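These two rules are easy to mechanize. The short routine below is a sketch of that arithmetic, assuming the four-standard-deviation reading of the 95% credible interval described above and ignoring the usually negligible product correction term:

```python
from math import sqrt, prod

def relvariance(mean, ci_low, ci_high):
    """Relvariance of a component error ratio: divide the width of the 95% credible
    interval by four times the expectation, then square."""
    return ((ci_high - ci_low) / (4.0 * mean)) ** 2

def combine(components):
    """Combine component error ratios (each given as (mean, ci_low, ci_high)):
    the total expectation is the product of the component expectations, and the
    total relvariance is approximately the sum of the component relvariances."""
    total_mean = prod(m for m, _, _ in components)
    total_relvar = sum(relvariance(m, lo, hi) for m, lo, hi in components)
    return total_mean, sqrt(total_relvar)  # expectation and coefficient of variation

print(round(relvariance(0.8, 0.6, 1.1), 4))          # 0.0244 for a mean of .8 and C.I. of .6-1.1
print(combine([(0.8, 0.6, 1.1), (1.0, 0.7, 1.5)]))   # hypothetical two-component design
```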
THE NEED FOR EMPIRICAL EVIDENCE

The foregoing discussion presented a construct for evaluating the accuracy of a research result. However, there are some very serious barriers to making the proper assessments. What values should the assessor use in his estimates of the component error ratios? While sampling theory provides an elegant theoretical structure
for objectively assessing sampling error, no such structure is available for the other error sources. For example, while Lansing suggests a need for "... a sophisticated theory of response error ..." [7], no such theory has yet been advanced. Lacking theory, a substantial contribution can still be made by presenting related empirical evidence on the direction and magnitude of potential nonsampling error sources. Some of this evidence is available in the literature. Probably a much greater amount is buried in the files of practitioners. Before error ratio analysis, or any other technique for that matter, can make a substantial contribution to nonsampling error assessments, the available empirical evidence has to be collected and digested into usable form. This task has been attempted elsewhere by the writer [8]. The use of such data is demonstrated by an example of how total error assessments could be made at the research design stage. The same techniques could also be applied to draw inferences about population parameters from sample results, using either Bayesian or classical procedures.
A STUDY OF SAVINGS: AN EXAMPLE

Suppose that a bank is interested in learning what proportion of liquid assets consumers hold in various forms of savings, for example, checking accounts, savings accounts, savings shares, corporate stocks and bonds, life insurance, and so forth. The key variable is the proportion of assets currently placed in savings accounts. In order to get this figure, respondents would have to report all liquid assets accurately. Suppose further that for a given research budget the following alternatives are available to the bank's research director:

1. A personal interview study expected to deliver 300 completed interviews. Respondents will be selected using area probability techniques. With two callbacks the supplier feels that a 75% response rate will be achieved.
2. A telephone survey with 600 completed interviews. Respondents will be selected systematically from the telephone directory. Because of the nature of the questions, the supplier will only guarantee a 75% response rate (a 15% item or total refusal is anticipated).
3. An established mail panel capable of delivering 1,000 respondents. Quota controls are established over some demographics, including household income. An 85% response rate is anticipated.
The research director would like to choose the technique which will give him the greatest accuracy. His classical statistical training tells him that he must exclude the mail panel study since it is a quota sample. Using the criterion of minimizing sampling error, he would choose the telephone interview. Yet, he somehow feels uneasy about this choice. He decides to quantify his uneasiness by working through an error ratio analysis.
Figure 1
A HYPOTHETICAL ERROR RATIO ASSESSMENT (MEASUREMENT ERROR)
[The figure plots an assessed distribution of the error ratio; its 95% credible interval runs from .5 to 2.0, so C.I. = 2.0 - .5 = 1.5.]
Measurement Error

All prior studies on savings indicate that savings account holdings are likely to be underreported in the personal interview study. For example, Ferber found that the amount of holdings is underreported by about 45% because of nonreporting of accounts [4, p. 98]. Since the subject matter of this survey is the proportion of liquid assets held in savings accounts, underreporting may not be as significant, i.e., people who fail to report savings accounts will also fail to report other forms of liquid assets. The research director estimates that there will be a 20% underreporting, or that E(m/t) will be .8. He encodes his uncertainty about this figure by assessing a credible interval from .6 to 1.1. Note that in his estimates he thinks a log-symmetric distribution is the most appropriate. His reasoning for the telephone interview is approximately the same. While he feels that underreporting may be a more serious problem over the telephone, he also feels that nonreporting may show up as a refusal more easily. His mean is assessed again at .8, but the greater underreporting possibility is reflected by a credible interval ranging from .5 to 1.1. Evidence suggests that mail panels have been far more reliable in reporting financial holdings because of established rapport and the self-administered feature of the questionnaire. Nuckols found that in the case of life insurance holdings, a mail panel reported the value to within 1% while a personal interview overreported by 12% [10]. Cannell and Fowler conclude from a study on hospitalization that "When one is trying to obtain information for which records may be available . . . a self-enumerative procedure yields more accurate information" [2]. O'Dell also shows that for borrowing, mail panels report at a much higher (and presumably more accurate) level [11]. Ferber reports for a study of
financial holdings that ". . . in terms of an indicator of accuracy of the data-reporting of dollar figures to the nearest cent-the mail returns were generally far superior (to personal interviews)" [4, p. 228]. Although he still expects some underreporting, this evidence sways the research director to assess the mean of his measurement error ratio at .9, with the credible interval ranging from .7 to 1.1. These assessments show how related experience was used to assess the distribution of the measurement error ratio. They also illustrate the problem of applying such evidence to new areas. While substantial evidence exists on measurement errors in reporting financial data, this evidence may have no relevance to other variables, say, the amount of television watching. Each variable will have its own particular measurement error. On the other hand, some generalizations must be made, since in research each situation tends to be unique. Essentially, then, meaningful error ratio assessments depend on a combination of the availability of relevant empirical data and their judicious selection and interpretation by the assessor. Improved assessments can be anticipated only if a feedback mechanism is planned. The assessor must be able to compare his subjective assessments with facts, which requires the occasional inclusion of methodological tests within ongoing research work. The payoff from such investments can be significant.
Frame Error

The personal interview study will utilize a geographic frame (area probability sample). According to Kish, a 10% noncoverage rate is not unusual in a national area probability sample [6, p. 531]. Since this will be a metropolitan area sample, the research director estimates an 8% noncoverage rate. He suspects that the noncovered sample will have a 10% lower proportion of their savings in savings accounts. Accordingly, he estimates the mean of the ratio of the noncovered mean to the frame mean at .9. However, he reflects his uncertainty about this ratio by assessing a credible interval from .7 to 1.1. He is not too concerned with the accuracy of this assessment, since he recognizes that the contribution of the frame error to the total error will be small because of the high proportion of coverage.

For the telephone sample a directory will be used as the frame. From the data presented by Cooper [3] and the local telephone office, he learns that 13% of homes do not have a telephone, 7% have unlisted numbers, and, since the directory is almost a year old, 10% of recently connected homes will be excluded. He can get a description of the non-telephone-owning households from data such as those collected by Kildegaard [5]:

1. Family households are more likely to be represented in telephone surveys than are households with single or unrelated individuals (83% ownership among families vs. 71% among individuals).
2. Telephone-owning households are likely to be older. One in five of the telephone-owning households has a head under 35, compared to one in three among non-telephone-owning households.
3. Telephone-owning households are likely to have higher incomes. The median (1964) income in homes with phones was $7,300 compared with $3,400 in homes without phones.

Substantiating evidence is also presented by Schmiedeskamp [12]. Similar descriptions could also be obtained about unlisted-number households and recent movers. Homes without telephones will introduce errors in the opposite direction to unlisted and new numbers. The research director estimates that the frame error will net out at 1.0, but reflects his uncertainty by permitting the credible interval to span from .7 to 1.5. For the mail panel study, he prefers to assess the frame and selection errors jointly, since he cannot think of a clear frame from which the quota sample is selected.
Selection Error

Selection error is error introduced through the method by which the sample is selected. For probability designs, its expectation is 1.0, with no credible interval required. The preference for probability sample designs stems from the certainty with which this error can be assessed. For both the personal and the telephone interview, no selection error has to be assessed. In thinking about the mail panel study, the research director feels that people cooperating in a panel are probably a little better educated, more communicative, and hence more aware of alternative ways of holding liquid assets. Accordingly, they are likely to have a lower proportion of their funds in savings accounts. He is further troubled by the fact that he is not sure from what frame the panel was initially recruited and how mortality and replacement have affected the panel over time. On the other hand, he feels that quota controls on income will protect him from going too far wrong. He assesses the mean of the selection error ratio at .9, but reflects his uncertainty by permitting the credible interval to range from .7 to 1.1.
Nonresponse Error

The research suppliers inform the director that he may anticipate a 75% completion rate for the personal interview and the telephone study and an 85% completion rate for the mail panel study. For the personal interview study, he further separates his ratio into noncontacts and refusals; 15% of the people will not be contacted with two callbacks, and 10% will refuse to answer. Those who cannot be contacted are more likely to be young and members of smaller households. Accordingly, he anticipates a lower level of savings among them, with a lower proportion being funnelled into savings accounts. He estimates the ratio of the proportion of savings accounts of noncontacts to
that of contacts at .8, with a credible interval from .6 to 1.1. People who refuse to answer the survey may do so because they have either large or little savings: the former do not wish to disclose the amount of their savings, while the latter feel that they have little to contribute to the study and may be reluctant to disclose how little savings they have. The research director feels that the net effect of the refusal group may not be very large. He is further convinced by Ferber's studies that the balance of savings accounts of refusers is not that different from that of responders [4, p. 98]. He estimates the mean of his ratio at 1.0, not knowing which way the combination will net out, and reflects his uncertainty with a credible interval ranging from .7 to 1.5. For the telephone study he again segments nonresponse into noncontacts and refusals. Noncontacts will be held to 10%, as up to six attempts will be made. Since this proportion is so low, he does not think hard before assigning an expectation of .8 with a credible interval from .6 to 1.1. The refusal rate in a survey requesting personal financial data over the telephone is likely to be high (15% including both item and total refusal). He suspects that people who refuse to give this type of information over the telephone may have a substantially larger proportion of their assets in savings accounts, and estimates the mean of the error ratio at 1.5. Since so little is known about such refusals, he assesses a credible interval from 1.0 to 2.2. For the mail panel, the research director assesses only a nonresponse error ratio. He is not sure which way nonresponse will affect the ratio. The account executive at the mail panel operation assures him that demographic comparisons between responders and nonresponders in a mail panel survey show nonsignificant differences. He still feels that those with low liquid assets are less likely to respond, but that they would have a higher proportion of their liquid assets in savings accounts. Accordingly, he assesses the ratio of the proportion among nonresponders to responders at 1.2 with a credible interval from .9 to 1.6.
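Using the exclusion-error relation given earlier, these judgments imply the following composite nonresponse ratios. The few lines below are an illustrative computation only; the resulting figures agree with the evaluation table presented later in the article:

```python
def composite(w, ratio):
    # a/s = 1 / [1 - w(1 - n/a)], the exclusion-error relation from the segmentation section
    return 1.0 / (1.0 - w * (1.0 - ratio))

for label, w, ratio in [
    ("personal interview, noncontacts", 0.15, 0.8),
    ("personal interview, refusals",    0.10, 1.0),
    ("telephone, noncontacts",          0.10, 0.8),
    ("telephone, refusals",             0.15, 1.5),
    ("mail panel, nonresponse",         0.15, 1.2),
]:
    print(f"{label}: {composite(w, ratio):.2f}")
# 1.03, 1.00, 1.02, 0.93, 0.97 -- the composite means that feed the evaluation table
```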
Figure 2
AN ERROR RATIO ANALYSIS: PARTS OF A CONVERSATIONAL COMPUTER PROGRAM PRINT-OUT(a)

DO YOU WANT TO USE THE DATA MODE OR THE CONVERSATIONAL MODE (ANSWER 'DATA' OR 'CONV') ?CONV

PRELIMINARY INFORMATION
HOW MANY PROPOSALS ARE BEING CONSIDERED ?3
THE FOLLOWING ERRORS ARE AVAILABLE FOR ANALYSIS:
   MEASUREMENT ERROR
   DEFINITIONAL ERROR
   FRAME ERROR
   SELECTION ERROR
   NONRESPONSE ERROR
   NONCONTACT ERROR
   REFUSAL ERROR
   RANDOM SAMPLING ERROR
   PROCESSING ERROR
SELECT THE ERRORS WHICH YOU WANT TO CONSIDER BY RESPONDING TO THE '?' WITH NINE ZEROS AND/OR ONES: 0 = DO NOT EVALUATE, 1 = EVALUATE
ERRORS IN PROJECT 1  ?1,0,1,0,0,1,1,1,0
ERRORS IN PROJECT 2  ?1,0,1,0,0,1,1,1,0
ERRORS IN PROJECT 3  ?1,0,0,1,1,0,0,1,0

ERROR RATIO ESTIMATES FOR PROJECT 1

MEASUREMENT ERROR ESTIMATE
WHAT IS YOUR ESTIMATE OF THE MEAN OF THE RATIO: MEASURED MEAN/TRUE MEAN ?.8
WHAT IS YOUR 95% CREDIBLE INTERVAL OF THIS ESTIMATE?
   FRACTILE .025 ESTIMATE ?.6
   FRACTILE .975 ESTIMATE ?1.1

FRAME ERROR ESTIMATE
WHAT IS YOUR ESTIMATE OF THE PROPORTION OF THE DEFINED POPULATION EXCLUDED FROM YOUR FRAME ?.08
WHAT IS YOUR ESTIMATE OF THE MEAN OF THE RATIO: EXCLUDED MEAN/FRAME MEAN ?.9
WHAT IS YOUR 95% CREDIBLE INTERVAL OF THIS ESTIMATE?
   FRACTILE .025 ESTIMATE ?.7
   FRACTILE .975 ESTIMATE ?1.1

NONCONTACT ERROR ESTIMATE
WHAT IS YOUR ESTIMATE OF THE NONCONTACT RATE ?.15
WHAT IS YOUR ESTIMATE OF THE MEAN OF THE RATIO: NONCONTACTED MEAN/CONTACTED MEAN ?.8
WHAT IS YOUR 95% CREDIBLE INTERVAL OF THIS ESTIMATE?
   FRACTILE .025 ESTIMATE ?.6
   FRACTILE .975 ESTIMATE ?1.1

REFUSAL ERROR ESTIMATE
WHAT IS YOUR ESTIMATE OF THE REFUSAL RATE ?.1
WHAT IS YOUR ESTIMATE OF THE MEAN OF THE RATIO: REFUSALS MEAN/ACHIEVED MEAN ?1.0
WHAT IS YOUR 95% CREDIBLE INTERVAL OF THIS ESTIMATE?
   FRACTILE .025 ESTIMATE ?.7
   FRACTILE .975 ESTIMATE ?1.5

RANDOM SAMPLING ERROR ESTIMATE
WHAT IS THE PROPOSED SAMPLE SIZE FOR PROJECT 1 ?300
WHAT IS YOUR EXPECTATION OF THE TRUE MEAN ?.15
IS THE TRUE MEAN A PROPORTION ('YES' OR 'NO') ?YES
THE SIMPLE RANDOM SAMPLING ERROR IS .137
WHAT IS YOUR RANDOM ERROR MULTIPLIER ?1.1
WHAT IS YOUR DESIGN EFFECT MULTIPLIER ?1.2

(a) Estimates for Projects 2 and 3 are entered as for Project 1.

Random Sampling Error

Even in cases where the sampling error estimate is partially based on subjective assessments, it is useful to compute the simple random sampling error and then adjust this estimate by a multiplier. In the case of a proportion, all that is required to compute the simple random sampling error is an estimate of the value of the proportion and the sample size. The research director anticipates that 15% of liquid assets will be held in savings accounts. The delivered sample size for the personal interview study will be 300, for the telephone study 600, and for the mail panel study 1,000. The resultant coefficients of variation are .137, .097, and .075, respectively. Two multipliers are then applied. The random error multiplier reflects the random variability in other error
sources into the random sampling error term. For example, some randomness, beyond that explained by sampling theory, would be present if respondents varied their answers when interviewed repeatedly. To get an
empirical feel for this multiplier, it would be obtained by dividing the sampling error estimate obtained through replicate sampling by the theoretically derived sampling error. For all three surveys it is assessed at 1.1. The design effect multiplier adjusts for the fact that the sample design is not simple random. To adjust the variance for clustering in the area probability sample, the research director uses a multiplier of 1.2. For the telephone sample his multiplier is 1.0, since he considers the systematic sample to be equivalent to simple random sampling. For the quota sample, he refers to empirical evidence presented by Stephan and McCarthy [13], and uses a multiplier of 1.5.

EVALUATION OF PROJECTS(a)

                          Mean    Lower and      Relative    Coefficient     Multiple
Error ratio                       upper bounds   variance    of variation    of SRSE

Project 1 (personal interview)
  Measurement             0.8     0.6 -1.1       0.0244      0.156           1.14
  Frame                   1.01    0.99-1.02      0.0001      0.008           0.06
  Noncontact              1.03    0.99-1.06      0.0004      0.019           0.14
  Refusal                 1       0.95-1.03      0.0004      0.02            0.14
  Random sampling         1       1   -1         0.0249      0.158           1.15
  Product correction term                        0.0006
  Project                 0.83    0.54-1.29      0.0508      0.225           1.64

Project 2 (telephone interview)
  Measurement             0.8     0.5 -1.1       0.0352      0.187           1.93
  Frame                   1       0.87-1.1       0.0033      0.057           0.59
  Noncontact              1.02    0.99-1.04      0.0002      0.013           0.13
  Refusal                 0.93    0.85-1         0.0017      0.041           0.42
  Random sampling         1       1   -1         0.0104      0.102           1.05
  Product correction term                        0.0006
  Project                 0.76    0.49-1.18      0.0513      0.226           2.33

Project 3 (mail panel)
  Measurement             0.9     0.7 -1.1       0.0123      0.111           1.48
  Selection               0.9     0.7 -1.1       0.0123      0.111           1.48
  Nonresponse             0.97    0.92-1.02      0.0006      0.025           0.33
  Random sampling         1       1   -1         0.0093      0.097           1.28
  Product correction term                        0.0004
  Project                 0.79    0.54-1.13      0.0351      0.187           2.49

SUMMARY OF PROJECTS

  Project    Mean    Coefficient of variation    Index
  1          0.83    0.225                       120
  2          0.76    0.226                       121
  3          0.79    0.187                       100

(a) Project 1: personal interview; Project 2: telephone interview; Project 3: mail panel. Table produced on the computer.
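The random sampling rows of the table can be checked with a few lines of code. The sketch below is illustrative only; it assumes, consistently with the tabled values, that the two multipliers are applied to the relative variance of the sampling error rather than to the coefficient of variation itself:

```python
from math import sqrt

def srse_cv(p, n):
    """Coefficient of variation of a simple random sample estimate of a proportion p."""
    return sqrt(p * (1.0 - p) / n) / p

def adjusted_sampling_cv(p, n, random_mult, design_mult):
    """Random sampling error after the random error and design effect multipliers
    are applied to the relative variance (an assumption inferred from the table)."""
    return srse_cv(p, n) * sqrt(random_mult * design_mult)

for label, n, rm, dm in [("personal interview", 300, 1.1, 1.2),
                         ("telephone", 600, 1.1, 1.0),
                         ("mail panel", 1000, 1.1, 1.5)]:
    print(label, round(srse_cv(0.15, n), 3), round(adjusted_sampling_cv(0.15, n, rm, dm), 3))
# personal interview 0.137 0.158
# telephone 0.097 0.102
# mail panel 0.075 0.097
```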
COMPUTATIONS

One great advantage of error ratio analysis (ERA) is that all the calculations are simple, and the whole
analysis can be performed by hand. The relvariances of the component ratios can be computed from the estimates of the mean and the credible interval (as shown in Figure 1). The expectation of the total error ratio is equal to the product of the expectations of the component ratios and the total relvariance is approximately equal to the sum of the individual relvariances. The total coefficient of variation is obtained by taking the square root of the total relvariance. The research director's assessments are processed by computer simply because the writer has an ERA program and a timesharing environment available to him. The use of the computer also enables tabulations which give additional detail. Special attention is drawn to the conversational mode input (Figure 2), which makes the translation of assessments that much simpler. The table is an evaluation of the projects. The results indicate that the mail panel study will
produce the most accurate research result. Its coefficient of variation is approximately 20% lower than that of the other two designs, despite the fact that the personal interview study is "less biased." However, since the mean of the total error ratio is treated as "known bias," adjustments for this amount would be made prior to any estimation. Hence, it need not be considered in the process of selecting from among different research designs. The contribution of the various error sources to total error is measured by an index, the multiple of the simple random sampling error (SRSE). Since the index is dependent on the simple random sampling error in each design, and since this varies by design, the index is not suited for between-method comparisons. It is, however, useful in showing what multiple of the SRSE the assessed total error is. Also, within a survey, it shows the relative contribution of the various errors. For example, in the personal interview study, measurement error and sampling error are the principal sources. For the telephone survey, measurement error, frame error, and refusal error all contribute, with measurement error being the most important. Conducting the analysis on the computer makes it easy to make between- and within-technique sensitivity tests. For example, 800 completed interviews would be required in the personal interview study before the research director became indifferent between mail panels and personal interviews. A questioning technique that reduced the refusal rate over the telephone to 5% would have the same effect as an increase in sample size of 17%.
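The first of these sensitivity figures can be approximated from the tabled components alone. The sketch below is a rough reconstruction rather than the program actually used, and it reproduces the indifference point only to within the rounding of the published table:

```python
from math import sqrt

# Nonsampling relative variance for the personal interview design (Project 1), taken from
# the table: measurement, frame, noncontact, refusal, plus the product correction term.
nonsampling_relvar = 0.0244 + 0.0001 + 0.0004 + 0.0004 + 0.0006

def total_cv(n, p=0.15, random_mult=1.1, design_mult=1.2):
    # Sampling relative variance with both multipliers applied to the variance.
    sampling_relvar = (p * (1 - p) / n) / p ** 2 * random_mult * design_mult
    return sqrt(nonsampling_relvar + sampling_relvar)

target = 0.187  # total coefficient of variation of the mail panel design (Project 3)
n = 300
while total_cv(n) > target:
    n += 1
print(n)  # about 825 completed personal interviews, in line with the article's figure of roughly 800
```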
CONCLUSIONS

That sampling error is only part of total survey error is not a startling conclusion. Yet it is probably one that bears reiteration to the research profession. In order to make marketing research more useful, estimates of total survey error have to be made. The numerical encoding of personalistic probability estimates finds a very relevant application in the assessment of research accuracy. A technique of integrating such assessments at the research design phase was discussed in this article. Similar techniques can also be used in drawing inferences from research results.
The subjective assessment of various error terms does not imply that these assessments are quantified wild guesses. A substantive body of empirical evidence is available to guide the assessor in his judgments. Since such evidence may vary by the subject matter of the survey and the research technique, considerable latitude still confronts the assessor. Only through the continued publication of methodological findings can this latitude be constrained and the barriers to subjective error assessments be lessened. The most powerful way to overcome bias is still control and elimination. However, since total bias elimination is not a feasible or even necessarily a desirable goal, assessments of research accuracy will have to incorporate subjective assessments of nonsampling errors.

REFERENCES

1. Rex V. Brown, Research and the Credibility of Estimates, Boston: Graduate School of Business Administration, Harvard University, 1969.
2. Charles F. Cannell and Floyd J. Fowler, "Comparison of a Self-enumerative Procedure and a Personal Interview," Public Opinion Quarterly, 27 (Summer 1963), 262.
3. Sanford L. Cooper, "Random Sampling by Telephone-An Improved Method," Journal of Marketing Research, 1 (November 1964), 45-8.
4. Robert Ferber, The Reliability of Consumer Reports of Financial Assets and Debts, Urbana, Ill.: Bureau of Economic and Business Research, University of Illinois, 1966.
5. Ingrid C. Kildegaard, "Telephone Trends," Journal of Advertising Research, 6 (June 1966), 56-60.
6. Leslie Kish, Survey Sampling, New York: John Wiley & Sons, Inc., 1967.
7. John B. Lansing, et al., An Investigation of Response Error, Urbana, Ill.: Bureau of Economic and Business Research, University of Illinois, 1961, 204.
8. Charles S. Mayer, Assessing the Accuracy of Marketing Research, forthcoming.
9. Charles S. Mayer and Rex V. Brown, "A Search for the Rationale of Non-probability Sample Designs," Proceedings, Fiftieth Anniversary Symposium on Marketing, American Marketing Association, 1965, 300.
10. Robert C. Nuckols, "Personal Interview Versus Mail Panel Survey," Journal of Marketing Research, 1 (February 1964), 12.
11. William F. O'Dell, "Personal Interviews or Mail Panels?" Journal of Marketing, 26 (October 1962), 36.
12. Jay W. Schmiedeskamp, "Reinterviews by Telephone," Journal of Marketing, 26 (January 1962), 29.
13. Frederick F. Stephan and Philip J. McCarthy, Sampling Opinions, New York: John Wiley & Sons, Inc., 1958, 233.