Change in the Structure of American Political Attitudes: The Nagging Question of Question Wording*

George F. Bishop, Alfred J. Tuchfarber, Robert W. Oldendick, University of Cincinnati

One of the current controversies raging in the voting behavior literature concerns the "rational" character of the American electorate. The early Michigan studies depicted the typical American voter as nonrational and inconsistent in his political attitudes, while recent research has cast him in a more favorable light. Using data from the 1956 and 1964 SRC Election Studies, this article demonstrates that much of the change which has been uncovered during this period can be traced to methodological artifacts, specifically changes in question wording and format introduced by the SRC in 1964. The effects of these artifacts have some major implications for many current theories of electoral behavior.

If we are to believe the burgeoning literature on trends in political attitude consistency and issue voting (Bennett, 1973; Boyd, 1972; Declercq et al., 1975; Field and Anderson, 1969; Jackson, 1975; Kessel, 1972; Kirkpatrick et al., 1975; Luttbeg, 1968; Miller et al., 1976; Nie and Andersen, 1974; Nie et al., 1976; Page and Brody, 1972; Pierce, 1970; Pomper, 1972; Repass, 1971; Schulman and Pomper, 1975; St. Angelo and Dobson, 1975; Stimson, 1975), the American voter has become surprisingly sophisticated. So sweeping have the changes been that some writers have begun to talk about a new voter "rationality," even an "ideology" (cf. Nie and Andersen, 1974; Niemi and Weisberg, 1976, pp. 67-84). But whether increased attitudinal consistency can be equated with ideology in the classic sense, or whether issue voting can be identified with rationality, remains disputable and will no doubt lead to various conceptual clarifications and reclarifications, ad infinitum (see, for example, Converse's recent discussion in the Handbook of Political Science, 1975, pp. 75-169). How to account for the rapid changes in the supposedly similar empirical indicators of mass political sophistication, however, represents a far more significant task.

* The research reported in this paper was supported in part by a grant from the National Science Foundation (SOC77-10509). We would especially like to thank Richard Dawson, Program Director for Political Science at NSF, for his encouraging support of this project. American Journal of Political Science, Vol. 22, No. 2, May 1978

© 1978 by the University of Texas Press 0026-3397/78/2202-0250$01.60


The new "conventional wisdom," if we may call it that, attributes nearly all of the increments in issue voting and attitude consistency to the heightened "salience" of politics in the socially turbulent atmosphere of the 1960s and early 1970s (see, especially, Nie and Andersen, 1974, pp. 571-74, 585). Theoretically, the greater prominence of public affairs during this period stimulated greater citizen interest and involvement in politics which, in turn, produced higher levels of political information and increased integration between attitudes and behavior. This interpretation, along with the obvious implication of the "bankruptcy" of Converse's (1964) thesis on mass belief systems, has gained widespread currency despite the fact that there is little direct evidence for it. If we take a close look, for instance, at trends in one of the only comparable indicators of mass political motivation over the past two decades (also the one which Nie and Andersen used to operationalize salience), interest in following the political campaigns, we fail to find any consistent relationship between these trends and those described in the literature for issue voting and attitudinal consistency (see Table 1).¹ The 1964 American national election marks a critical comparison point in this time series, for it is at this juncture that the sharpest upward shifts in consistency and issue voting have occurred. If changes in political motivation account for the upsurge in political sophistication, as many researchers have theorized, we should expect to discover a corresponding increment in the public's interest in politics. Yet the trend figures in Table 1 tell us there was no change whatsoever in political interest during the crucial phase from 1960 to 1964. Other comparable SRC measures of political involvement for this time period (e.g., media use, political influence attempts) behave in a similar fashion and thus raise serious questions about the validity of the salience-of-politics hypothesis.

Proponents of the salience thesis can still argue, of course, that politics did become more prominent in the minds of American citizens during the sixties, or that at least certain issues (e.g., Vietnam, civil rights) became more central to the mass electorate, and that if only we could develop a better measure of this penetration of policy concerns, it would all become evident; Converse's thesis would be thoroughly discredited; and the responsible American voter would thus be redeemed.

¹ The data were made available by the Inter-University Consortium for Political and Social Research (ICPSR) through the University of Cincinnati's Behavioral Sciences Laboratory (BSL). Neither the Consortium, the BSL, nor the Center for Political Studies bears any responsibility for the analyses or interpretations presented here.


TABLE 1
Trends in Levels of Political Interest, 1952-1972 (a)

Interest Level              1952    1956    1960    1964    1968    1972
Very Much Interested         37%     30%     38%     38%     39%     32%
Somewhat Interested          34      40      37      37      40      41
Not Much Interested          29      30      25      25      21      27
                            ----    ----    ----    ----    ----    ----
                            100     100     100     100     100     100

(a) The exact wording of the item measuring political interest reads: "Some people don't pay much attention to the political campaigns. How about you, would you say that you have been very much interested, somewhat interested, or not much interested in following the political campaigns so far this year?"
SOURCE: Codebooks for the 1952, 1956, 1960, 1964, 1968, and 1972 American national election studies (Inter-University Consortium for Political and Social Research, Ann Arbor: University of Michigan).

As always, the burden of proof rests with the advocates, and we will make just a brief comment on the pitfalls of adopting the salience-of-issues strategy. The most obvious difficulty stems from the time-bound character of many issues. Issues like law-and-order, medicare, and Vietnam have come and gone, and though they may reemerge under similar circumstances in the future or appear in some other guise, their volatility necessarily precludes any meaningful comparison of how they may have become more or less integrated into mass attitude structures over time. How, for example, do we begin to compare the salience of Taft-Hartley attitudes in 1952 with civil rights attitudes in 1964, with law-and-order support in 1968, with Vietnam positions in 1972, and so forth? Such an approach negates the very use of trend analysis and leaves the door open for numerous ad hoc interpretations. The solution, as we shall suggest, lies in exact replication of the "old" issue questions.

Changes in Mass Education

The improving political quality of the American electorate may also be due to its ever-increasing educational attainments. In 1952, for example,
only 15 percent of the population reported having at least some college education, but by 1972, this figure had nearly doubled to 29 percent. During the same time-span the proportion of the public with less than a high school education dropped from 61 percent to 38 percent. Changes of this magnitude should, according to Converse's model of mass belief systems, produce a corresponding boost in the electorate's ideological awareness and, as a consequence, greater attitudinal crystallization. The public's educational achievements have climbed rather gradually over time, however, and may thus explain any long-term increments in mass political sophistication (cf. Bishop, 1976; Miller et al., 1976), but would not account for the sudden, steplike surge in this phenomenon at the time of the 1964 national election. Some other factor must be operating, one which increased consistency (and issue voting) despite the lack of an upward trend in political interest, and over and above any long-term effects of rising educational attainment.

Changes in Question Wording

A plausible hypothesis, stimulated by recent methodological work, would implicate changes in question wording and format as a causal factor. Beginning with the 1964 American national election study (again the point at which the most abrupt shift in mass sophistication occurred), the Michigan Survey Research Center instituted major changes in the wording and format of the issue questions that have been used by numerous researchers to chart the growth of issue voting and attitudinal consistency. This, as we shall see, creates serious problems of comparability for any trend analysis of the Michigan data sets and raises the question of how much of the change in mass sophistication reported by Nie and others might be due to methodological artifacts. Such considerations cannot be readily dismissed, for as Schuman and his associates have amply demonstrated (Presser and Schuman, 1975; Schuman and Duncan, 1974), variations in question wording and format can significantly affect not only the marginal distributions of survey items but also the magnitude of association between items, i.e., the indicator typically employed to operationalize "constraint" or issue voting. Moreover, their analysis suggests that less educated respondents tend to be particularly susceptible to question wording effects of all kinds, a finding with crucial implications when we recall the startling leaps in consistency for respondents with less than a high school education in the Nie and Andersen analysis.

At this point the reader may well say: "So what about the Schuman et al. experiments . . . just how comparable or noncomparable were the
issue questions used by Nie and his colleagues to assess the changes in the structure of American political attitudes?" Were the variations in wording and format as minor as they claim or as markedly discontinuous as we are suggesting here? With most of the scholarly consideration to this point riveted on the correlations and average correlations among items, very little attention has been given to a careful examination of the wording and the simple marginal distributions for the seven basic issue questions used by Nie and his colleagues. Are the different forms of these questions before and after the critical point of the 1964 election in any way reasonably equivalent?

Table A gives the exact wording of the issue questions and response categories, along with their percentage distributions, in each of the principal comparison years of the Nie and Andersen analysis: 1956 and 1964. Before considering the Nie-Andersen scheme for recoding the responses to these questions, we need to evaluate the original marginals. Starting with the question on the government's guarantee of economic welfare in Table A, we notice a striking difference in the percentage of each sample that would be classified as "liberal" or "conservative" on this issue. For example, if we collapse the agree and agree strongly responses into one directional category and do the same for disagree and disagree strongly, we would conclude that about 62-63 percent of the 1956 electorate supported the "liberal" policy of government action to solve the problem of unemployment, while only 29-30 percent endorsed the "conservative" position by disagreeing with this course of action. But in 1964 we discover a dramatic decline in public approval for a supposedly similar alternative. Now only 36-37 percent favor the liberal policy, whereas a majority (50.6 percent) subscribe to the conservative stand of letting "each person get ahead on his own." This reversal seems surprising not only because the general trend in American politics has been one of increasing acceptance of New Deal economic philosophies (see Gallup, 1972) but also because 1964 was somewhat of a watershed in which the "Great Society" ticket of Johnson and Humphrey rode into office on a landslide, a result which presumably implied, at least in part, a rejection of the economic conservatism of Barry Goldwater. Is it possible the electorate had suddenly become conservative in its basic posture on issues of public policy and simultaneously ignored the policy positions of the two major candidates and their parties when casting its ballots? Such an interpretation hardly seems plausible; in fact, if we accept the findings from the Nie and Andersen (1974) analysis for the moment, we learn that the relationship between respondents' issue attitudes and voting behavior rose substantially at the time of the 1964
election. What then accounts for the gross discrepancies between the 1956 and 1964 marginals? Could it have something to do with the addition of the phrase ". . . and a good standard of living" to the liberal response alternative in 1964 and, more importantly, with the availability of a less ambiguous and perhaps more attractive conservative alternative of letting "each person get ahead on his own," compared to just being able to "disagree" or "disagree strongly"? We think so, and we believe these changes altered the meaning and implication of the economic welfare question in such a way as to produce a more balanced ordering of respondents on the traditional left-right axis. We also believe that the variation in marginals from 1956 to 1964 (and, correspondingly, the weak issue correlations in 1956) has something to do with the inducing of acquiescence by the conventional Likert (strongly agree-strongly disagree) format. We shall have more to say about these wording variations later.

Right now we would like to explore whether the recoding procedures used by Nie and Andersen to group respondents into three "statistically comparable" categories for analysis (liberal, centrist, and conservative) would lead to any significant changes in our interpretation. The reader may recall that they followed two principal guidelines in recoding their data: "(1) to make as even as possible the proportions of the population in each of the three categories, while (2) not permitting the first guideline to place respondents on the agree and disagree side of an issue in the same category" (Nie and Andersen, 1974, pp. 546-47). If we apply these guidelines to the percentages for the response categories of the 1956 question on economic welfare in Table A, we get the following distribution:

48.0 percent Liberal (agree strongly)
22.3 percent Centrist (agree, but not very strongly, and not sure)
29.7 percent Conservative (disagree strongly and disagree, but not very strongly)

Although this recoding reduces some of the discrepancy with the 1964 proportions for the liberal category, it is far from satisfactory because of the still gaping difference for the conservative category: 29.7 percent (1956) vs. 50.6 percent (1964). Furthermore, as we shall see later, the relatively small percentage of centrists in the 1964 sample (13 percent) compared with the percentage in the newly recoded category of centrists in the 1956 data (22.3 percent) may well account for some of the 1956-1964 differences in the size of the gamma coefficients reported by Nie et al., due to the smaller cell frequencies created in crosstabulations with the former data set. No matter how we handle these data, then, the marginals for the economic welfare questions in 1956 and 1964 cannot be construed as comparable. And this represents only the first instance, and not even the most disturbing one.
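
To make the recoding rule concrete, here is a minimal sketch (our illustration, not Nie and Andersen's or the authors' code). The five-category percentages are hypothetical except for the "agree strongly" figure and the resulting group totals, which match the 1956 economic welfare item as described above; "as even as possible" is operationalized here as minimizing the spread between the largest and smallest group totals, which is one reasonable reading of the guideline.

```python
# Sketch of the Nie-Andersen recoding rule: collapse a five-point
# agree-disagree item into liberal / centrist / conservative groups that are
# as even as possible, without mixing agree-side and disagree-side responses.

categories = ["agree strongly", "agree", "not sure", "disagree", "disagree strongly"]

# Hypothetical marginals: only "agree strongly" (48.0) and the resulting group
# totals (48.0 / 22.3 / 29.7) are taken from the text; the rest are invented
# so that the example runs.
pct = {"agree strongly": 48.0, "agree": 14.0, "not sure": 8.3,
       "disagree": 17.0, "disagree strongly": 12.7}

best = None
for cut1 in (1, 2):        # liberal group = first cut1 (agree-side) categories
    for cut2 in (3, 4):    # conservative group = categories from index cut2 on
        groups = (categories[:cut1], categories[cut1:cut2], categories[cut2:])
        totals = [sum(pct[c] for c in g) for g in groups]
        spread = max(totals) - min(totals)   # "as even as possible"
        if best is None or spread < best[0]:
            best = (spread, groups, totals)

for label, cats, total in zip(("Liberal", "Centrist", "Conservative"),
                              best[1], best[2]):
    print(f"{label:12s} {total:5.1f}%  ({', '.join(cats)})")
# -> Liberal 48.0, Centrist 22.3, Conservative 29.7 for these inputs
```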


Consider the question on the size of government (Table A). Attitudes toward this issue, more than any other, became highly interconnected with other policy positions in 1964. Ignoring for the moment the conspicuous dissimilarities in the wording of the 1956 and 1964 items, we should ask ourselves: Do the two questions, however we code or recode their responses, sort the respondents into relatively equivalent proportions of liberal, centrist, and conservative? If we just collapse again within our agree and disagree classifications, we end up with 58.5 percent of our 1956 respondents subscribing to the notion that the government should leave things like electric power and housing for private businessmen to handle and about 32 percent voicing their "liberal" disagreement. In 1964, however, we uncover a new majority (52 percent) accepting the liberal argument that ". . . the government has not gotten too strong" and a minority, though sizeable (43.6 percent), agreeing with the conservative charge that ". . . the government is getting too powerful." And so we now confront a "liberal" trend on the size of government issue from 1956 to 1964 which contrasts sharply with the "conservative" trend in the same period for the economic welfare issue (see Table A). And these are not the only examples of concurrent "liberal" and "conservative" trends.

What happens if we apply the Nie and Andersen recoding scheme to the 1956 size of government item? We get a somewhat different breakdown, of course, one which virtually eliminates the marginal differences for the conservative classification in the two studies (43 percent vs. 53.6 percent), but the departure at the liberal end of the spectrum is totally unaffected (31.9 percent vs. 51.9 percent), and the gap in the centrist category is even greater than before: 25.1 percent vs. 4.5 percent. Surely these are not comparable distributions, no more than the wording of the questions themselves.

Take a good look at the wording of the items in Table A. First, notice how differently the two questions cover the size of government issue; they both concern a common symbol, "the government," but the 1964 version projects a much broader scope than the pre-1964 version, cutting across a variety of specific issues that might involve the use of governmental power. While we would expect responses to these two items to correlate if administered concurrently or on separate occasions to the same individuals, the magnitude of the association would not necessarily be high or even moderately so (see the data below). For example, some respondents might believe that, in general, the government in Washington
is getting too powerful, but that in certain critical economic areas, such as the housing industry, utilities, and energy conservation, the government should play a stronger role; and in certain other domains (racial composition of schools, sexual behavior, use of marijuana) stay out entirely. Nor would we expect these two items to exhibit the same degree of correlation with other issue attitudes or voting behavior.

Quite apart from these considerations, however, a more troubling datum exists. The 1964 postelection interview contained a forced-choice item on government ownership of power plants which was far more comparable in content to the pre-1964 question on the power of government issue than the 1964 version; it read as follows (see The 1964 SRC Election Study, pp. 190-191): "Some people think it's all right for the government to own some power plants while others think the production of electricity should be left to private business. Have you been interested enough in this to favor one side over the other?" (If yes): "Which position is more like yours, having the government own power plants or leaving this to private business?"

31.0 percent Liberal (government should own power plants)
6.4 percent Centrist (other, depends)
62.6 percent Conservative (leave to private business)
(N = 906)

The percentages for these response categories come much closer to the estimates in the 1956 sample than those generated by the 1964 item on government power (see Table A). Recall that, by collapsing agree and strongly agree, we said about 58.5 percent of the 1956 sample could be classified as "conservative," which is well within sampling error distance (for multistage designs) of the 62.6 percent estimate of "conservatives" in 1964 given by the more comparable question on government ownership of power plants. And by combining disagree with disagree strongly, we identified 31.9 percent of the 1956 sample as "liberal," which is nearly identical with the 31 percent figure from the distribution for the 1964 question on power plants. Even the figures for the "centrist" category give us a better fit: 9.6 percent (1956) and 6.4 percent (1964).

The concurrence of the two questions on government power in the 1964 survey provides another opportunity to assess comparability. Do responses to these items correlate highly with one another, as we would expect if changes in question wording made little or no difference? Calculating a Pearson r, we learn they were only weakly related (r = -.24), sharing just 5-6 percent common variance.
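
As a quick arithmetic check (the computation is ours, not the article's), the "common variance" figure is simply the square of the correlation:

$$ r^2 = (-0.24)^2 = 0.0576 \approx 5.8\% $$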


This is hardly the kind of comparability we were led to believe existed in the original analysis by Nie and his associates. And to this extent, we feel the inclusion of the 1964 question on the government ". . . getting too powerful," rather than the one on ownership of power plants, grossly distorts and inflates the magnitude of change in mass political sophistication.

One more illustration should suffice: the data on the cold war questions in Table A (cf. Repass, 1976, p. 825n). Adding the percentages for agree and agree strongly, we find an overwhelming majority of the electorate (72.7 percent) in 1956 favoring the "conservative" policy of keeping U.S. troops overseas to help countries opposed to communism and very few persons (16.2 percent) expressing "liberal" opposition. But by 1964, either the question didn't mean the same thing to respondents or a revolution in American political attitudes had taken place, because now we discover an even more overwhelming majority (84.3 percent) supporting the liberal approach of trying "to discuss and settle our differences" with the communists, while a mere 11.4 percent say they would "refuse to have anything to do with the communists." Such a massive ideological reversal strains the assumption of comparability beyond any reasonable cutting point. Trying to salvage things a bit by recoding the cold war data along Nie and Andersen lines, we get a somewhat different distribution for the 1956 sample:

49.0 percent Conservative (agree strongly)
23.7 percent Centrist (agree; not sure, depends)
27.3 percent Liberal (disagree, disagree strongly)

But this device closes the gap between the 1956 and 1964 distributions only to a minor degree and leaves them still a long way from comparability. Once more the variations in wording appear to be at fault. Although both the 1956 and 1964 versions of the cold war question appear to tap an underlying dimension of feelings toward "communism" or "communists," they focus on rather different, though not necessarily contradictory, policy options. The pre-1964 form of the question centers on the desirability of military support for anti-communist countries, whereas the 1964 version revolves around the more general matter of negotiation and conflict resolution. Clearly, many people could "consistently" support both options: keeping troops overseas to bolster the military defense of anticommunist nations and yet, at the same time, favor a policy of "negotiation" rather than "confrontation," as it is often referred to today. To put it all very simply, theoretical relatedness does not imply methodological or empirical comparability.


Noncomparability of Opinion Filters

Another aspect of noncomparability between the 1956 and 1964 issue questions arises from differences in the kinds of "filter" questions used to screen out respondents who may not have formed an opinion on a given issue. In 1956 (see Table A) the filter read: "Now would you have an opinion on this or not"; whereas in 1964, the filter changed to: "Have you been interested enough in this to favor one side over the other?" The variation in wording may seem trivial to some observers, but what matters is whether it makes a difference; and it does. In Table 2 we have a comparison of the percentages for the no opinion (1956) and no interest (1964) responses in the two studies. Some sizeable discrepancies are obvious. On the question of federal aid to schools, for instance, the 1964 filter generated almost twice as much missing data (16.42 percent) as the 1956 version (8.34 percent). On the other hand, the percentage of no opinion responses to the 1956 item on the cold war (18.39 percent) greatly exceeded the corresponding percentage for the 1964 question.

TABLE 2
Percentage Distributions for Don't Know, No Opinion, and No Interest Responses
to Seven Issue Questions in the 1956 and 1964 SRC Election Studies

                               1956 (N = 1762)                  1964 (N = 1571)
ISSUE                     DK     No Opinion    Total       DK     No Interest    Total
Economic Welfare         1.13%      8.57%       9.70%     2.10%     12.60%       14.70%
Medicare                 1.02      10.56       11.58      2.55      13.43        15.98
Black Welfare            1.70      11.52       13.22      3.37      10.18        13.55
Federal School Aid       1.02       8.34        9.36      1.46      16.42        17.88
Cold War                 1.53      18.39       19.92      3.06      11.52        14.58
Size of Government       1.59      27.13       28.72      2.74      28.13        30.87
School Integration       1.25      10.22       11.47      3.31       9.55        12.86


Not even the "don't know" distributions are comparable. In every instance (Table 2) we find a greater percentage of these responses given to the 1964 version. The importance of all these contrasts is not just to demonstrate another form of noncomparability; we also want to suggest how differences in the magnitude of missing data may operate to distort our estimates of the "true" level of attitudinal consistency. For example, recent analyses of the determinants of "don't know" and other forms of nonsubstantive responses (Francis and Busch, 1975; Sudman and Bradburn, 1974) tell us that individuals giving such responses tend to be significantly less educated and less politically involved than their more opinionated counterparts. Thus, excluding these kinds of respondents through "filter" questions tends to inflate observed levels of consistency. And to the extent that there are differences between samples in the number of respondents screened out by various kinds of filters (as in Table 2 ) , comparisons of consistency coefficients become even more misleading. Together with the wording and format changes we discussed, the variation in filtering effects more than underscores our concern with the comparability of the SRC questions. Eflects of Question Format on the Magnitude of Gamma Coefficients To illustrate how the wording and format differences operated to produce some of the changes in the size of the gamma coefficients, we will analyze the cell frequencies and marginals (in percentages) for the pair of issues showing the greatest absolute change from 1956 to 1964: black welfare and the size of government (-.lo and .49, respectively). Table 3 gives the percentages of the total sample falling into each cell of the crosstabulation for the 1956 pair and the 1964 pair along with the column and row marginal totals. We now begin to see one consequence of the imbalance in marginals created by the change in question formats and the coding procedures adopted by Nie et al., namely the very low cell frequencies for the column and row intersections involving the "centrist" category. It is well known that zero or small cell frequencies can significantly alter the magnitude of gamma coefficients (cf. Blalock, 1972; Mueller et al., 1970; Weissberg, 1976), but we need to show just how this works so that the readers may satisfy their curiosity. The reader will recall that in computing a gamma coefficient, we cumulate the concordant pairs in tables, such as this by starting with the upper-left cell frequency and multiplying it by every frequency below and to the right of it; then we sum these products and repeat the operation for the cell frequency to the immediate right of the upper-left cell and so forth


TABLE 3
Percentage of the Total Sample Falling into Each of Nine Cells in a
3 x 3 Crosstabulation of Issue Questions about the Size of Government
and Black Welfare in the 1956 and 1964 SRC Election Studies

1956                                Size of Government
Black Welfare            Liberal        Centrist        Conservative
Liberal                    16.7           11.0              18.7
Centrist                    8.7            9.5              10.4
Conservative                7.3            4.9              12.7

1964                                Size of Government
Black Welfare            Liberal        Centrist        Conservative
Liberal                    36.7            1.8              10.5
Centrist                    3.7             .7               3.6
Conservative               15.8            1.8              25.5

To get the discordant pairs, we begin with the upper-right cell frequency and multiply it by every frequency below and to the left, and so on. Finally we plug these numbers into our formula for the gamma coefficient. Given that some "true" relationship exists between a pair of issue attitudes, we can see how the dichotomous forced-choice items in the 1964 study, by pushing many more respondents into the only available end categories and leaving very few in the center category, create a much larger multiplier for the upper-right or upper-left (and lower-left and lower-right) corners of the contingency table than the 1956 items, which spread the cases more evenly through the middle three categories of the five-point Likert item.
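
To make the mechanics concrete, the sketch below (our illustration; the counts are hypothetical and only loosely patterned on the two subtables of Table 3, not the actual survey frequencies) computes gamma from the concordant and discordant pair totals just described, and shows how concentrating cases in the corner cells drives the coefficient up.

```python
def gamma(table):
    """Goodman-Kruskal gamma for a table of counts (list of rows).

    Concordant pairs: each cell count times the sum of all counts below and
    to its right; discordant pairs: below and to its left.
    """
    rows, cols = len(table), len(table[0])
    concordant = discordant = 0.0
    for i in range(rows):
        for j in range(cols):
            below_right = sum(table[r][c] for r in range(i + 1, rows)
                              for c in range(j + 1, cols))
            below_left = sum(table[r][c] for r in range(i + 1, rows)
                             for c in range(j))
            concordant += table[i][j] * below_right
            discordant += table[i][j] * below_left
    return (concordant - discordant) / (concordant + discordant)

# Hypothetical 3 x 3 tables (rows and columns ordered liberal, centrist,
# conservative).  The first spreads cases through the middle categories, as a
# trichotomized Likert item tends to do; the second pushes them into the
# corner cells, as the 1964 forced-choice format does.
spread_cases = [[17, 11, 19],
                [ 9, 10, 10],
                [ 7,  5, 12]]
corner_cases = [[37,  2, 10],
                [ 4,  1,  4],
                [16,  2, 25]]

print(round(gamma(spread_cases), 2))   # about .09
print(round(gamma(corner_cases), 2))   # about .59
```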


For example, in the 1964 subtable (Table 3) our upper-left cell multiplier, the "consistent liberals," represents 36.7 percent, and our lower-right cell, the "consistent conservatives," 25.5 percent of the total cases, compared to just 16.7 percent and 12.7 percent for the corresponding cells in the 1956 subtable. But the reader may object that the cells involving the "centrist" category, which are among those to be multiplied by the upper-left cell, are somewhat larger in the 1956 data set. This is true, but it does not compensate for the multiplicative advantage of the upper-left cell in the 1964 data and is offset, in addition, by the other large cell in the lower-right corner of the 1964 subtable.

Does all this mean that the 1956-1964 differences in gamma coefficients are simply an artifact of format-induced discrepancies in cell frequencies? Certainly not. We are not saying all of the variance can be accounted for through the changes created by the dichotomous-choice format and the Nie-Andersen recoding scheme. Above and beyond these effects, there may be a fair amount of "true variance" still to be explained by, among other things, the differences in the actual wording of the items, presuming this can be disentangled from the format factor. In other words, the dichotomous-choice items may be either more valid measures of the underlying factor, for whatever reason, or they may be measuring something that overlaps only partially with the supposedly similar indicators from the pre-1964 studies. What we are saying, though, is that part, in some instances perhaps a substantial part, of the variation in gamma coefficients reported by Nie et al. resulted from the subtle psychometric properties of the dichotomous-choice items, in particular their minimizing of cases in the center categories, and that this was compounded by their recoding procedures, which operated to increase the gap between the "centrist" cell proportions.

Having demonstrated the noncomparability of the marginal distributions of the 1956 and 1964 questions, and the effect which this can have on the magnitude of gamma coefficients, we must now ask: How are these variations related to the changes in consistency reported for 1956 and 1964 by Nie and others? First of all, we should look at the exact size of the associations between the various pairs of issue items analyzed by Nie et al. Unfortunately, their research described only the average gamma coefficients between and within domestic and foreign issue domains, and so it is instructive to scrutinize the individual correlations that contributed to those averages (Table 4). In general we get the same upward trend uncovered by Nie and associates (see columns 1 through 3 of Table 4), with all of the correlations rising in magnitude from 1956 to 1964. But there is a good deal of variation among the pairs of issues that is concealed by their averaging procedure.


TABLE 4
Gamma and Yule's Q Coefficients for Issue Pairs in the 1956 and 1964 SRC Election Studies
Computed from Nie-Andersen Recoded Variables (3-Category) and Dichotomized Variables (a)

                                      Nie-Andersen              Dichotomous Variables
Issue Pair                       1956     1964    Diff.       1956     1964    Diff.
Econ. Welf./Medicare              .57
Econ. Welf./Black Welf.           .46
Econ. Welf./School Aid            .46
Econ. Welf./Cold War              .09
Econ. Welf./Govt. Size            .13
Econ. Welf./School Integ.         .07
Medicare/Black Welf.              .34
Medicare/School Aid               .41
Medicare/Cold War                 .06
Medicare/Govt. Size               .26
Medicare/School Integ.            .07
Black Welf./School Aid            .38
Black Welf./Cold War              .14
Black Welf./Govt. Size           -.10
Black Welf./School Integ.        -.42
School Aid/Cold War               .16
School Aid/Govt. Size             .12
School Aid/School Integ.          .09
Cold War/Govt. Size               .13
Cold War/School Integ.            .07
Govt. Size/School Integ.          .23

(a) In computing the coefficients for the trichotomized variables we followed as exactly as possible the two principal guidelines used by Nie et al. to recode the five-point, agree-disagree items into three "statistically comparable" categories: liberal, centrist, and conservative. These were: "(1) to make as even as possible the proportions of the population in each of the three categories, while (2) not permitting the first guideline to place respondents on the agree and disagree side of an issue in the same category" (Nie and Andersen, 1974: 546-47). We also replicated their procedure of using the response "other, depends" as a middle category for the 1964 format.


For one thing, we notice that several combinations of issue attitudes in 1956 display rather substantial degrees of consistency: economic welfare with medicare (Gamma = .57), black welfare (.46), and school aid (.46); medicare with school aid (.41); and black welfare with school desegregation (-.42). For another, we see that most of the large changes from 1956 to 1964 involve one of the two most noncomparable items: the question on the size of government. This provides some evidence that the reported increases in consistency may not have been that striking and may be traced, at least in part, to variations in question wording.

An additional test of this hypothesis can be made. For if, as we have contended, the cell proportions comprising the "centrist" category are so critical to the 1956-1964 comparisons, why not compare the 1956 and 1964 coefficients by deleting the center category (not sure, depends) in both data sets and combining agree with agree strongly and disagree with disagree strongly in the 1956 sample, that is, compare the Yule's Q values for dichotomized variables. Since we would be excluding respondents with less well-formed opinions through such a procedure, we expect some enhancement of the observed consistency coefficients. This should not make too much difference, however, if our arguments about the effects of small cell proportions and the Nie-Andersen recoding guidelines are unfounded. But the figures in columns 4 through 6 of Table 4 tell us that these artifacts are not inconsequential.
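
For reference (the notation here is ours, not the article's), Yule's Q for a 2 x 2 table with cell frequencies a and b in the first row and c and d in the second is simply the gamma coefficient computed on a dichotomized table:

$$ Q = \frac{ad - bc}{ad + bc} $$

With the centrist category deleted and the 1956 responses collapsed as just described, each issue pair reduces to such a table.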


In 1956 our recoding to a dichotomous variable results in a hefty increase for many of the issue pairs, some so large that they actually exceed the corresponding coefficients in the 1964 sample (e.g., economic welfare/school aid and black welfare/school aid) and others so substantial (e.g., economic welfare/medicare; medicare/school aid; black welfare/school integration) that they create serious reservations about the sweeping generalizations of "weak constraint" in the mass electorate as of the middle 1950s. Actually, what we have in these data sets are two roughly distinct clusters of differences: (a) correlations among the issues of economic welfare, medicare, black welfare, and school aid, which are fairly similar in magnitude from 1956 to 1964, i.e., when we remove the effects of small cells and recoding artifacts (columns 4 through 6), and (b) all those correlations involving the issues of the cold war, size of government, and school integration, in which sizeable discrepancies from 1956 to 1964 remain despite our readjustments with dichotomized variables.

The radical modifications in the wording of the items covering the cold war and the size of government issues (see Table A) probably account for a good share of these changes. And while the variation in the wording of the school integration question may not be quite as drastic as those above, its comparability from 1956 to 1964 appears to be noticeably less than the items on economic welfare, medicare, and black welfare. For each of the latter questions in 1956 and 1964, the interviewer initially presented respondents a roughly similar "liberal" position, e.g., "the government . . . ought to help people get doctors and hospital care at low cost." In the case of the school integration issue, however, the 1956 respondents got a strong "segregationist" statement to agree or disagree with, whereas the 1964 respondents first received an equally strong "integrationist" statement. In this way the change in wording probably altered the nature and size of the relations between attitudes toward school integration and other policy positions (cf. Gaertner, 1976).

Summary of Methodological Implications

To summarize our thesis, we are arguing that three methodologically related factors probably accounted for the great bulk of the upward shift in attitudinal consistency at the time of the 1964 election: (a) the small cell frequencies associated with the "centrist" category(ies) in the 1964 data set, which resulted largely, if not entirely, from the use of dichotomous-choice questions; (b) the Nie-Andersen recoding guidelines, which inflated the small cell effect even further; and (c) actual differences in the meaning and wording of the items, which are intimately bound up with the format modifications but theoretically independent of them. On top of these artifacts we must add the effects of the noncomparable filter questions (Table 2), which only compound the problem of teasing out the "true" amount of change in the structure of American political attitudes.

But even this represents only part of the methodological noise in the SRC data sets, albeit a very important piece of it. We have not even touched upon the increasing noncomparabilities that arise from another major change in the SRC issue questions at the time of the 1968 election: the shift from dichotomous-choice scales to seven-point scales, a change which probably maximized the amount of "true variance" in the items and, indirectly, the magnitude of correlation (i.e., constraint), and which may also have created an opportunity for extremity response biases to operate (see Shulman, 1973). But these additional problems are beyond the scope of this already too long article, and so we mention them only to reinforce our contention that the current accounts of change in mass political sophistication are fraught with artifacts, so much so that we believe any adequate trend analysis with available SRC data sets to be hopelessly difficult. What we need to do is
replicate exactly the old SRC items from 1956, 1964, and other elections. And only then, perhaps, will we be able to put to rest some of the suspicions we have raised here.

Implications for Current Electoral Behavior Theory

What are the implications for the current theories of electoral behavior that have been largely dependent on secondary analyses of the SRC/CPS election studies? The evidence we have described here documents a major change in the wording, format, and filtering properties of the basic SRC issue questions between 1956 and 1964, a change which had substantial effects on both the marginals and intercorrelations of issue attitudes. Apparently these changes improved the reliability (and perhaps the validity) of the issue questions, as the SRC project directors undoubtedly desired. However, the important side effects of these methodological modifications were (1) that the issue items intercorrelated more highly with one another, and (2) that they would correlate better with other substantive variables (e.g., vote) because of their improved psychometric properties (not because of a "real" increase in strength of relationship). We have been able to show that, had exactly the same questions been asked in 1956 and 1964, it is probable that the inter-item correlations of the issue variables would have been similar. If we had directly comparable measures, which unfortunately we do not, we would probably find that the extent of issue voting in 1956 was also similar to the amount of issue voting in 1964. This leads us to conclude that Repass (1971) was essentially correct in inferring that the elections of 1956 and 1964 were quite similar in their "ideological" character.

Theoretical implications follow from this analysis. Had Campbell et al. (1960) and Converse (1964) had access to survey data from 1956 and 1958 that used the "improved" 1964 items, they would probably not have been quite as strong in their statements that only a small percentage of the American electorate was ideological or issue-constrained. Their basic conclusion might not have changed, but it would have been tempered. In addition, they would probably have found that issue voting explained more of the variance in vote than the 1956 items led them to believe; thus they might have rated it nearly as important as party identification and candidate images. On the other hand, those theorists of electoral behavior who have suggested that a new-found American ideology exists must face the realization that at least from 1956 to 1964 there was probably little "real" change in issue constraint and issue voting. Nie et al.'s "salience" and
Pornper's "nature of the times" hypotheses are unnecessary, for the phenomena they sought to explain-i.e., increases in constraint and issue voting-probably can be accounted for by improvements in the measurement of the issue constructs. As noted earlier, other important changes in question wording have occurred since 1964. A disconcerting thought is that much of the creative theorizing of the past decade which has tried to offer a substantive explanation for the rise in issue constraint and issue voting has overlooked a very elementary alternative: methodological artifacts.

Manuscript submitted 10 February 1977
Final manuscript received 15 August 1977

REFERENCES

Bennett, Stephen E. 1973. Consistency among the public's social welfare policy attitudes in the 1960's. American Journal of Political Science, 17 (August 1973): 544-570.
Bishop, George F. 1976. The effect of education on ideological consistency. Public Opinion Quarterly, 40 (Fall 1976): 337-348.
Blalock, Hubert M., Jr. 1972. Social statistics. New York: McGraw-Hill.
Boyd, Richard W. 1972. Popular control of public policy: A normal vote analysis of the 1968 election. American Political Science Review, 66 (June 1972): 429-449.
Converse, Philip E. 1964. The nature of belief systems in mass publics. In David E. Apter, ed. Ideology and discontent. New York: Free Press, pp. 206-261.
------. 1975. Public opinion and voting behavior. In Fred I. Greenstein and Nelson W. Polsby, eds. Handbook of Political Science, Vol. 4: Nongovernmental Politics. Reading, Mass.: Addison-Wesley, pp. 75-169.
Davis, James A. 1971. Elementary survey analysis. Englewood Cliffs, N.J.: Prentice-Hall.
Declercq, Eugene R., Thomas L. Hurley, and Norman R. Luttbeg. 1975. Voting in American presidential elections: 1956-1972. American Politics Quarterly, 3 (July 1975): 222-246.
Field, John O., and Ronald E. Anderson. 1969. Ideology in the public's conceptualization of the 1964 election. Public Opinion Quarterly, 33 (Fall 1969): 380-398.
Francis, Joe D., and Lawrence Busch. 1975. What we now know about "I Don't Know". Public Opinion Quarterly, 39 (Summer 1975): 207-218.
Gaertner, Karen N. 1976. A note on question wording effects. In James A. Davis, Studies of social change since 1948, Vol. 1: Methodological. Chicago, Ill.: National Opinion Research Center, NORC Report 127A, pp. 93-127.
Gallup, George H. 1972. The Gallup Poll: Public opinion 1935-1972. New York: Random House.
Jackson, John E. 1975. Issues, party choices and presidential votes. American Journal of Political Science, 19 (May 1975): 161-185.
Kessel, John H. 1972. Comment: The issues in issue voting. American Political Science Review, 66 (June 1972): 459-465.
Kirkpatrick, Samuel A., William Lyons, and Michael R. Fitzgerald. 1975. Candidates, parties and issues in the American electorate: Two decades of change. American Politics Quarterly, 3 (July 1975): 247-283.
Luttbeg, Norman R. 1968. The structure of beliefs among leaders and the public. Public Opinion Quarterly, 32 (Fall 1968): 398-409.
Miller, Arthur H., Warren E. Miller, Alden S. Raine, and Thad A. Brown. 1976. A majority party in disarray: Policy polarization in the 1972 election. American Political Science Review, 70 (September 1976): 753-778.
Mueller, John H., Karl F. Schuessler, and H. L. Costner. 1970. Statistical reasoning in sociology. Boston: Houghton Mifflin Company.
Nie, Norman H., with Kristi Andersen. 1974. Mass belief systems revisited: Political change and attitude structure. Journal of Politics, 36 (August 1974): 541-591.
Nie, Norman H., Sidney Verba, and John R. Petrocik. 1976. The changing American voter. Cambridge: Harvard University Press.
Niemi, Richard G., and Herbert F. Weisberg, eds. 1976. Controversies in American voting behavior. San Francisco: W. H. Freeman and Company.
Page, Benjamin I., and Richard A. Brody. 1972. Policy voting and the electoral process: The Vietnam war issue. American Political Science Review, 66 (September 1972): 979-995.
Pierce, John C. 1970. Party identification and the changing role of ideology in American politics. Midwest Journal of Political Science, 14 (February 1970): 25-42.
Political Behavior Program, University of Michigan. 1971. The 1964 election study, revised ICPR edition. Ann Arbor: Inter-University Consortium for Political Research.
Pomper, Gerald M. 1972. From confusion to clarity: Issues and American voters, 1956-1968. American Political Science Review, 66 (June 1972): 415-428.
Presser, Stanley, and Howard Schuman. 1975. Question wording as an independent variable in survey analysis: A first report. A paper presented at the Annual Conference of the American Statistical Association, Atlanta, August, 1975.
Repass, David E. 1971. Issue salience and party choice. American Political Science Review, 65 (June 1971): 389-400.
------. 1976. Comment: Political methodologies in disarray. American Political Science Review, 70 (September 1976): 814-831.
Schulman, Mark A., and Gerald M. Pomper. 1975. Variability in electoral behavior: Longitudinal perspectives from causal modeling. American Journal of Political Science, 19 (February 1975): 1-18.
Schuman, Howard, and Otis D. Duncan. 1974. Questions about attitude survey questions. In Herbert L. Costner, ed. Sociological Methodology 1973-1974. San Francisco: Jossey-Bass.
Shulman, Art. 1973. A comparison of two scales on extremity response bias. Public Opinion Quarterly, 37 (Fall 1973): 407-412.
St. Angelo, Douglas, and Douglas Dobson. 1975. Candidates, issues and political estrangement. American Politics Quarterly, 3 (January 1975): 45-59.
Stimson, James A. 1975. Belief systems: Constraint, complexity and the 1972 election. American Journal of Political Science, 19 (August 1975): 393-417.
Sudman, Seymour, and Norman M. Bradburn. 1974. Response effects in surveys. Chicago: Aldine.
Survey Research Center, University of Michigan. 1968. The 1956 election study, revised ICPR edition. Ann Arbor: Inter-University Consortium for Political Research.
Weissberg, Robert. 1976. Consensual attitudes and attitude structure. Public Opinion Quarterly, 40 (Fall 1976): 349-359.

