What have we learned from 25 years of science communication?



PUBLIC UNDERSTANDING OF SCIENCE
Public Understanding of Science 16 (2007) 79–95. SAGE Publications (www.sagepublications.com). ISSN 0963-6625. DOI: 10.1177/0963662506071287.

What can we learn from 25 years of PUS survey research? Liberating and expanding the agenda
Martin W. Bauer, Nick Allum and Steve Miller

This paper reviews key issues of public understanding of science (PUS) research over the last quarter of a century. We show how the discussion has moved in relation to large-scale surveys of public perceptions by tracing developments through three paradigms: science literacy, public understanding of science, and science and society. Naming matters here, as elsewhere, as a marker of “tribal identity.” Each paradigm frames the problem differently, poses characteristic questions, offers preferred solutions, and displays a rhetoric of “progress” over the previous one. We argue that the polemic over the “deficit concept” voiced a valid critique of a common sense concept among experts, but confused the issue with methodological protocol. PUS research has been hampered by this “essentialist” association between the survey research protocol and the public deficit model. We argue that this fallacious link should be severed to liberate and to expand the research agenda in four directions: contextualizing survey research, searching for cultural indicators, integrating datasets and doing longitudinal analysis, and including other data streams. Under different presumptions, assumed and granted, we anticipate a fertile period for survey research on public understanding of science.

1. “Paradigms” for understanding the public’s understanding of science

Over the last 20 years the public understanding of science (PUS) has spawned a field of enquiry that engages, to a greater or lesser degree, sociology, psychology, history, political science, communication studies, and science policy analysis. Much discussion has concerned the polemic over the “deficit model” and its, in essence, quantitative research methodologies. As the polemic has it, PUS researchers conduct survey research in the service of sponsors such as government, business and scientific institutions. Survey researchers necessarily presume a public deficient in knowledge, attitude or trust. Thus they serve existing powers, and provide rhetorical means-to-given-ends. In contrast, so the polemic has it, the “Critical” researcher will also research the ends, the means and the context of PUS research, using exclusively qualitative protocols, which will introduce reflexivity. This polemic is reminiscent of Lazarsfeld’s (1941) dichotomy of “administrative” and “critical” research, but with a strange fixation on method protocol that was absent in the original discussion.[1]

The problem with this polemic is the automatic equation of particular agendas with particular research protocols: survey research means deficit modeling for an administrative agenda; qualitative methodology is the sine qua non of “Critical” and reflexive investigation.


Thus, valid critiques of the deficit model, and the agendas sponsoring it, have resulted in the stigmatization of survey research. In contrast, we consider the association of method protocol and knowledge interest a fallacy (Bauer et al., 2000a). The identification of protocol and interest is “behaviouristic.” But any behavior can serve different motives, and results produced to serve one interest can easily be useful for different interests: while lower animals are blessed with relatively fixed linkages between motive and behavior, Homo sapiens is unfortunate in that respect, and pays the price of anxieties and mutual misunderstandings. Like motive and behavior in everyday life, knowledge interest and method protocol are flexibly linked in social research.

In Table 1, we outline three “paradigms” of research in chronological order of origin. Each paradigm has its prime time, more or less clearly defined, and is characterized by a diagnosis of the problem that science faces in its relationship with the public. A key feature of each paradigm is the attribution of a deficit. Each paradigm defines particular problems and offers preferred solutions. We argue that, contrary to common rhetoric, these paradigms do not supersede each other, but continue to inform research.

Table 1. Paradigms, problems and proposals

Period                              | Attribution: problems                                                   | Proposals: research and action
Science Literacy, 1960s onwards     | Public deficit: knowledge                                               | Literacy measures; education
Public Understanding, after 1985    | Public deficit: attitudes                                               | Knowledge–attitude research; attitude change; education; image marketing
Science and Society, 1990s–present  | Trust deficit; expert deficit; notions of public; crisis of confidence  | Participation; deliberation; “angels”/mediators; impact evaluation

2. Scientific Literacy (from 1960s to mid 1980s)

The idea of “scientific literacy” builds on a double analogy. Science is part of the cultural stock of knowledge with which everybody ought to be familiar. Scientific education ties in with the quest for “basic literacy” in reading, writing and numeracy. The second analogy is “political literacy.” Here the idea is that in a democracy people take part in political decisions, directly in referenda or indirectly via elections or as voices of public opinion. However, voice can only be effective if people command knowledge of the political process and its institutions (see Althaus, 1996).

The literacy idea attributes a knowledge deficit to an insufficiently literate public. This deficit model serves the education agenda, demanding increased efforts in science education at all stages of the life cycle. However, it also plays into the hands of technocratic attitudes among decision makers: a de facto ignorant public is disqualified from participating in science policy decisions.

An influential concept of “science literacy” was proposed by Jon D. Miller (1983, 1987, 1992, 1998). Miller’s definition included four elements: a) knowledge of basic textbook facts of science, b) an understanding of scientific methods such as probability reasoning and experimental design, c) an appreciation of the positive outcomes of science and technology for society, and d) the rejection of superstitious beliefs such as astrology or numerology. Miller constructed survey-based indicators of literacy, building on earlier work (Withey, 1959; see also the review of Etzioni and Nunn, 1976), which became the basis of the biennial science indicator surveys of the US National Science Foundation (NSF) from the late 1970s onwards.

The research problems

The psychometrics of factual knowledge is the key problem of this paradigm. Knowledge is measured by quiz-like textbook items. Respondents are asked to decide whether a statement of a scientific fact is true or false, or to say that they don’t know. Some of these items have become notorious, have traveled the globe, and hit the headlines: for example, the Ptolemaic item “the sun goes round the earth” (true or false), or “electrons are smaller than atoms” or “all radioactivity is man-made” (see Eurobarometer 31, 1989; Durant et al., 1989). Respondents score a point for every correct answer. It is an empirical problem to formulate short and unambiguous statements for which the authoritative answer can be determined, and to balance easy and difficult items from various fields of science (item analysis). Recently, item response theory (IRT) has been brought into the discussion (Miller and Pardo, 2000).

As literacy indicators, these items are of value only in combination; a single item has little significance. However, public speakers and the mass media repeatedly pick out stand-alone items as indicators of public ignorance and reasons for moral panic. For example, the London Observer (21 October 2005) recently cited CNN for the results of the “the-sun-goes-around-the-earth” item under the heading “The American World View.”[2]

So, what counts as scientific knowledge? Miller suggested two dimensions: facts and methods. This stimulated efforts to operationalize “knowledge of scientific methods” such as probability reasoning, experimental design, the importance of theory and hypothesis testing. Useful in this context is an open question initially suggested by Withey (1959): “Tell me in your own words, what does it mean to study something scientifically?” Respondents’ answers are recorded verbatim and coded for an index of methodological awareness. However, the coding of these responses remains controversial: should it be normative or descriptive (see Bauer and Schoon, 1993)?

Critics have argued that the key to understanding science is its process and not its facts (e.g. Collins and Pinch, 1993). Therefore awareness of issues such as uncertainty, peer review, scientific controversies, and the replication of experiments should be reflected in the assessment of literacy. Furthermore, relevant knowledge includes that of the scientific institution and its politics, what Prewitt (1983) called “scientific savvy” and Wynne (1996) has called the “body language” of science. The latter dimension has to date received little attention (although see Bauer et al., 2000b; Sturgis and Allum, 2004).
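To make the scoring logic concrete, the following minimal sketch shows how such quiz items are typically turned into a knowledge score, with “don’t know” answers kept apart from wrong guesses. The item wordings and answer key are illustrative stand-ins, not the actual NSF or Eurobarometer instruments.

```python
# Sketch of literacy-quiz scoring: one point per correct answer, with
# "don't know" (DK) tracked separately from wrong guesses. Items and
# answer key are illustrative, not the actual survey instruments.

ANSWER_KEY = {
    "sun_goes_round_earth": False,        # "The sun goes round the earth"
    "electrons_smaller_than_atoms": True,
    "all_radioactivity_man_made": False,
}

def score_respondent(responses):
    """responses maps item -> True, False, or None (None = "don't know").
    Returns (correct, incorrect, dk); the knowledge score is `correct`."""
    correct = incorrect = dk = 0
    for item, truth in ANSWER_KEY.items():
        answer = responses.get(item)
        if answer is None:
            dk += 1          # declared ignorance, not a wrong guess
        elif answer == truth:
            correct += 1
        else:
            incorrect += 1
    return correct, incorrect, dk

# One right answer, one wrong guess, one declared "don't know":
print(score_respondent({
    "sun_goes_round_earth": False,
    "electrons_smaller_than_atoms": False,
    "all_radioactivity_man_made": None,
}))  # -> (1, 1, 1)
```

Keeping incorrect answers and DK responses apart in this way is also what makes possible the confidence indices discussed in section 3 below.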

What is to be done: education

The literacy paradigm is concerned with the public deficit of scientific knowledge. Interventions are mainly in the area of public education. A deficit in adult scientific literacy requires increased attention to school curricula and continuing education, to achieve goals such as those proposed in the US Project 2061.[3]



Critique: artefacts and different kinds of knowledge

The critique of the literacy paradigm focuses on conceptual as well as empirical issues. Why does scientific knowledge deserve special attention? What about historical, financial or legal literacy? Critics argued that large-scale survey measures reify knowledge and produce irrelevant indicators of “textbook knowledge.”

Miller’s 1983 definition of science literacy included a “positive appreciation of the outcomes of science.” This definition precludes a knowledgeable person from having negative attitudes towards science. However, the correlation between knowledge of science and agreement with statements such as “science and technology are making our lives healthier, easier and more comfortable” or “the benefits of science are greater than any harmful effects” (e.g. Eurobarometer 31, 1989) must be an empirical matter, and cannot be settled by a normative requirement of literacy.

Miller’s original proposal to the NSF treated scientific literacy as a threshold measure. To qualify as a member of the “attentive public for science,” one needed to command “some” minimal level of literacy, be interested in and feel informed about science and technology, appreciate their positive outcomes, and renounce superstitions. However, the definition of this “minimal level of literacy” changed from audit to audit, and it is unclear whether the changes that the NSF reported, or for that matter the lack of changes, reflect shifts in definition or in substance (see e.g. Beveridge and Rudell, 1988).

Since the 1970s, many countries have undertaken audits of adult scientific literacy—the US, Canada, China, Brazil, India, Korea, Japan, Bulgaria, Switzerland, Singapore, Britain, Germany, France and many other EU countries—although access to the raw data remains problematic. The US NSF has presented “horse race”-type rankings of different countries’ literacy. A persistent problem of such comparisons is the fairness of the indicators. A particular set of knowledge items may be biased towards certain countries. Countries have different science bases, and literacy is likely to reflect a country’s historical science base (e.g. Raza et al., 1996, 2002). The issue of culturally sensitive literacy measures deserves more attention.

Then there is the question of superstitions. Belief in astrology disqualifies one from being scientifically literate, Miller (1983, 1987) suggested. However, the co-existence of forms of “superstition” with scientific literacy is an empirical matter. To make “belief in astrology” an exclusion criterion for literacy bars us from understanding the variable tolerance or intolerance between science and “superstitions” as a cultural variable (see Boy and Michelat, 1986; Bauer and Durant, 1997).

Finally, it has been suggested that the concern for literacy arises in parallel with the crisis of legitimacy of “big science.” But, if the Baconian notion of “knowledge = power” holds, any attempt to share big-science knowledge without public empowerment might only create alienation rather than rapprochement. Literacy is therefore the wrong answer to a problem of crisis (Roqueplo, 1974; Fuller, 2000).

3. Public Understanding of Science (1985 to mid 1990s)

In the second half of the 1980s, new concerns emerge under the title of public understanding of science (PUS).[4] In the UK this transition is marked by an internationally influential report (Royal Society, 1985). PUS shares with the previous phase the diagnosis of a public deficit. This time round, however, it is public attitudes that are highlighted (Bodmer, 1987). The public is not positive enough about science and technology; there is a danger that citizens will become negative or outright anti-science, and this is of natural concern to the institutions of science.



The research problems

For PUS, the research agenda shifts away from the measurement of knowledge to that of public attitudes. Such measurements have a long tradition in social psychology (see Eagly and Chaiken, 1993). Research became concerned with the construction of reliable attitude-to-science scales, the potential “acquiescence response bias” in existing attitude batteries (Bauer et al., 1994), the multi-dimensional structure of attitudes (e.g. Pardo and Calvo, 2002), the relationship between general and specific attitudes (Daamen and vanderLans, 1995), the effects of question ordering on interest levels (e.g. Gaskell et al., 1993) and, most importantly, the relationship between knowledge and attitudes (Einsiedel, 1994; Evans and Durant, 1995).

The concern for scientific literacy carried over into PUS. A knowledge measure is needed to test the expectation “the more you know, the more you love it.” However, the emphasis shifts from a threshold measure to a continuum: not “one is either literate or not,” but “one is more or less knowledgeable.” The correlation between knowledge and attitudes becomes the focus of research (Evans and Durant, 1989; Durant et al., 2000).

Some research looked at “don’t know” (DK) responses to knowledge measures. Comparing incorrect and DK responses offers an index of confidence: women and some milieus seem generally more likely to declare ignorance and to hesitate to guess (see Bauer, 1996; Mondak and Anderson, 2003). Turner and Michael (1996) suggested four types of self-admitted ignorance of science: “I am embarrassed not to know”; “I am not very scientific”; “I know somebody who might know”; and “I could not care less.” Finally, differences in the rates of DK responses over time or across contexts need to be interpreted with care, as they might reflect artefacts of different fieldwork protocols (for example, Eurobarometer changes its contractors, and with them the fieldwork protocols, every five years, which might affect DK rates).

What is to be done: either educate the public or seduce the public

Old and new reasons for the promotion of science in public are put forward: understanding is important for making informed consumer choices; it enhances the competitiveness of the nation’s industry and commerce; and it is part of tradition and culture (see Thomas and Durant, 1987; Durant, 1993; Gregory and Miller, 1998). It might be said that the PUS paradigm calls upon a rationalist and a realist agenda. Both agree with the diagnosis of an attitudinal deficit—the public is insufficiently in love with science and technology—but they disagree on what to do about it.

For the normative-rationalist, attitudes are a product of information processing with a rational core. The axiom of PUS is “the more you know, the more you love it”; lack of knowledge is the driver of negative attitudes and biased risk perceptions. A knowledgeable public will agree with experts, who do not succumb to biases as the public does. The battle for the public is a battle for rational minds trained in probabilistic reasoning (Gigerenzer and Hoffrage, 1995).

For the realist-empiricist, attitudes are value-loaded relations with the world. How values and emotions may relate to reason is a vexing question, one that is traditionally confused by conflating values and emotion, and cognition and emotion, with rationality and irrationality. For the realists, values and emotion are a fact of life, and the battle is a battle for the hearts of a lifestyle public.
For a “consumer,” it is claimed, there is little difference between science, a car and washing powder. So they all receive the same treatment of market segmentation, profiling, targeted campaigning and message positioning. British science consumers are thus divided into six groups (see OST, 2000): confident believers, technophiles, supporters, the concerned, the “not sure,” and the “not for me”; similar groups were found in Portugal (Costa et al., 2002).


Critique: institutional neuroticism

Both the scientific literacy and the PUS paradigm assume a public state of deficiency: citizens lack either enough, or the right kind of, knowledge, and thus fail to display sufficiently positive attitudes or “reasonable” risk perceptions. But, some critics argue, of far more importance is knowledge-in-context, which emerges from local controversies and people’s life concerns (see Ziman, 1991; Irwin and Wynne, 1996). Here the internationally influential British Research Programme on Public Understanding of Science between 1986 and 1991 became entrenched in conflicts over how to study the public understanding of science.[5]

Wynne (1993) saw the deficit model as an expression of “institutional neuroticism”: anxieties and a lack of generosity among scientific actors vis-a-vis the public. The deficit model is a self-serving rhetorical device and lies at the heart of a vicious circle: a deficient public cannot be trusted; mistrust on the part of scientific actors is returned in kind by the public; and negative public attitudes, revealed in large-scale surveys, then confirm the assumptions of scientists that a deficient public is not to be trusted. An “institutional unconsciousness” called for “soul searching” among scientific actors. However valid, this critique stigmatized survey research by identifying it, in essence, with the deficit model (see Wynne, 1995; Irwin and Wynne, 1996; also Jasanoff, 2005) and thus established a spurious but lasting association between method protocol and concept.[6]

Some have argued for a reframing of the issues. Attitudes could be seen within the framework of social representations of science, with their variable structure and diverse functions. Representations arise when local common sense encounters novelty; they allow familiarization of the unfamiliar within the constraints of tradition and inter-group relations (Farr, 1993). This framework refocuses research from rank orderings of groups of people to characterizing the diversity of representations (e.g. Boy, 1989; Bauer and Schoon, 1993; Durant et al., 1992). Studying representations opens the door not only to qualitative enquiry, but also to different types of analyses of survey data (see Bauer and Gaskell, 1999).

While the PUS paradigm was fixated on the common sense axiom “the more you know, the more you love it,” empirical investigations of the knowledge–attitude relationship remained inconclusive until recently. Surveys do show a small positive correlation between knowledge and positive attitudes, but they also show larger variance among the knowledgeable: with controversial issues, the correlation tends to be lower or zero (Allum et al., 2007). Thus, not all informed citizens are also enthusiastic; for some, “familiarity breeds contempt” for science and technology. Furthermore, it is well known in attitude theory that knowledge is not a lever of positive attitudes, but of the quality of attitudes. Attitudes—whether positive or negative—that are based on knowledge are more likely to resist change (see Eagly and Chaiken, 1993); knowledge makes the difference between attitudes and non-attitudes, not between positive and negative attitudes (Converse, 1964).
It has also emerged that positive attitudes to science and technology are related to general “political sophistication” as well as to specific scientific literacy (Sturgis and Allum, 2004; Gaskell et al., 2003). Many PUS surveys compare interest in science with other life interests. Aggregate figures from Eurobarometer surveys suggest that interest in science declined between 1992 and 2001 while knowledge increased (Miller et al., 2002). This trend would suggest that “familiarity breeds disinterest.”
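The knowledge–attitude findings above rest on simple second-order statistics. A minimal sketch of the typical analysis, run here on simulated data and with hypothetical variable names (“knowledge” for the quiz score, “attitude” for an attitude-to-science scale), might look as follows:

```python
# Sketch of the standard knowledge-attitude analysis, illustrating the two
# findings in the text: a small positive correlation overall, and larger
# attitude variance among the more knowledgeable. Data are simulated.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 2000
knowledge = rng.integers(0, 13, size=n)  # 0-12 correct quiz answers
# Toy data-generating process: mildly positive link, noisier at the top.
attitude = 0.15 * knowledge + rng.normal(0, 1 + 0.1 * knowledge, size=n)
df = pd.DataFrame({"knowledge": knowledge, "attitude": attitude})

# Small positive correlation between knowledge and positive attitudes.
print("r =", df["knowledge"].corr(df["attitude"]).round(2))

# Attitude variance by knowledge tercile: the knowledgeable disagree more.
df["k_group"] = pd.qcut(df["knowledge"], 3, labels=["low", "mid", "high"])
print(df.groupby("k_group", observed=True)["attitude"].var().round(2))
```

The simulation merely mimics the reported pattern; with real survey data, the two printed quantities would be the substantive result.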



4. Science and Society (from mid 1990s onwards)

The critique of the literacy and PUS paradigms as “deficit models” ushered in a reversal of attribution. The diagnosis of “institutional neurosis” has been widely heeded: the deficit lies not with the public, but with the scientific institutions and expert actors who harbor prejudices about an ignorant public. Henceforth, there can be several deficits: public deficits of knowledge, attitude or trust, but also deficits on the part of scientific and technological institutions and their expert representatives. The focus of attention has now shifted to the deficits of the technical experts.

The research problems

Evidence of negative attitudes from large-scale surveys, from focus group research and from quasi-ethnographic observations led to the declaration of a “crisis of public confidence” (House of Lords, 2000; Miller, 2001) in Britain and elsewhere. Science and technology operate in society, and therefore stand in relation to other sectors of society. A crisis of public trust vis-a-vis science indicates a breach of contract that needs re-negotiation. The implicit and explicit views of the public held by scientific experts come under scrutiny; they explain part of the trust crisis. False conceptions of the public operate in science policy making and misguide the communication efforts of scientific institutions, which alienate the public still further.

What is to be done: up-stream public engagement

In the “Science and Society” paradigm, the distinction between research and intervention is blurred. Many researchers are committed to action research and reject the separation of analysis and intervention. The aim of analysis is to change institutions and policy. This agenda, academically grounded as it may be, often ends in political advice with a pragmatist outlook. The implicit and dysfunctional notions of the public, public opinion and the public sphere among experts and policy makers are the focus of closed executive training seminars and advisory panel discussions rather than of publicly documented research results. Advice is confidently offered to public and private actors on how to rebuild public trust, and to address the paradoxes of trust: once it becomes an issue, trust is already lost; and trust cannot be engineered, it is granted to those who deserve it (see Luhmann, 1968).

Public deliberation and participation are the new “royal road” to rebuilding public trust. The UK House of Lords report (2000) listed many forms of deliberative activity, such as citizen juries, deliberative opinion polling, hearings, consensus conferencing and national debates, to engage the public and to rebuild trust. And efforts at public engagement should be undertaken up-stream, that is at the early stages of new technoscientific developments (Stilgoe et al., 2005), to enable front-end inputs and not only post-hoc reactions to already established facts.

Critique: of “angels” and “monaud”

The prolific writing on the governance of science promotes public participation as part of a “New Deal” (e.g. Fuller, 2000), and has created a market for consultancy. As public engagement takes time and needs know-how, civil servants and public academics are overwhelmed by this managerial task. Here the new sector of “angels” steps in. “Angels” are age-old go-betweens who mediate, in this case not between heaven and earth, but between a disenchanted public and the institutions of science, industry and policy making.


Books describe the how-tos of “good practice” and the financial implications of public deliberation (e.g. Seargent and Steele, 1998). This industry of public engagement advice exudes confidence about how to overcome the crisis of trust between science and the public.

In the UK, the idea of public consultation has become official government policy. Not only for science and technology, public consultation beyond the parliamentary process has become “due process.” In the UK and beyond, we see a proliferation of new labels for public participation, a product differentiation in the logic of a market of consultancies.[7] What is urgently needed is a typology of public engagement activities to inform critical judgment. There is as yet little critique of the achievements.

The general ethos of public participation has recently acquired an interest in evaluating its outcomes. In the utilitarian spirit of modern politics, sooner or later the question arises: what does this approach bring (effectiveness)? How do different approaches compare? Can one save money with a cheaper approach that does equally well as a more expensive one (efficiency)? These questions call for evaluation criteria so that such activities can be audited (e.g. Rowe and Frewer, 2004).

The apparent success of the “science and society” agenda in the UK has caused embarrassment among its protagonists, as demonstrated by GM Nation in 2003. The focus on reaching a social consensus raises a number of questions. For example, will “dialogue” be appropriated by the technocratic deficit model as a strategy of public persuasion? Is this all old wine in new bottles? These doubts seem to be vindicated by the GM Nation debate, the great consultation of 2003 on genetically modified (GM) crops and food products in the UK (Rowe et al., 2005).[8] This extended national debate produced evidence that the British public was far from convinced of the benefits of GM crops and foods—not what the government hoped to hear. There were two responses: to attack the process on protocol, for allowing environmental groups too much influence; or to attack the outcome, concluding that further dialogue was needed until the public held the “right” attitude. In this way, consensus is reached by “monaud”: all “sides” are talking, but only the public is supposed to listen.

5. Liberating and expanding the research agenda

Is the path from “Scientific Literacy” to “Science and Society” a path of progress? Clearly the protagonists of each phase use rhetorics of Progress writ large. PUS researchers claimed that they had left behind the limited concern with scientific literacy and moved on to attitudes. For the Science and Society protagonists, research on both literacy and public attitudes is “old hat” and buried with the deficit model. Public participation, and for that matter “angelic” mediation, is the new mission; empirical social research becomes a somewhat anachronistic preoccupation.

Progress, or is it? Ironically, the call for evaluation of participatory policy making invites a re-entry of the traditional paradigms of PUS research. Researchers have come to advocate quasi-experimental evaluation of deliberative events (Macoubrie, 2005), using indicators such as media coverage, shifts in issue awareness, knowledge and attitudes, and impact on policy agendas (e.g. Butschi and Nentwich, 2002; Joss and Bellucci, 2002). In ignorance of its history, this revival of the classical paradigms of PUS might amount to reinventing the wheel, but this time for a different car. It is no longer the public deficit that is under scrutiny, but the performance of “angels” who spend public monies.


Additionally, the 2006 round of the international survey of educational achievement (PISA) focused on “scientific literacy,” albeit for children of school age (PISA, 2006). These international studies might well re-launch the discussion of adult literacy. If we dare make a prediction, it is that PUS research, based on surveys, focus groups and media analysis, will revive, albeit within the wider framework of science and society. And as long as science and society are not identical, the public’s understanding of science, as well as the scientists’ understanding of the public, will continue to be a pressing issue. In this changing context, we envisage four major developments of PUS research: various ways of contextualizing the survey evidence; cultural indicators; longitudinal data integration and analysis; and a widening of the range of data.

Political sophistication: reframing the knowledge–attitude problem

One possible cause of the limited progress of PUS research is to be found in the multi-disciplinary character of the whole enterprise. Research is carried out by psychologists, sociologists, political scientists and cultural theorists; even philosophers are increasingly getting in on the act. Whilst this must confer some advantages in broadening the range of perspectives and the explanations that are judged plausible, the lack of common foundations has led to a reluctance on the part of PUS researchers to see the public’s relationship with science and technology as but one example of the range of political and social issues relevant to citizens in modern democracies.

The relative weakness of PUS research becomes apparent when it is compared with the voluminous research by political scientists on the causes and consequences of knowledge for political attitudes and behavior. In this field, there has been a stream of empirical research and theoretical debate dating back to the early 1960s. A consistent finding is that a large proportion of the public, generally uninformed, responds randomly to questions about political issues of the day. The early work of Converse (1964) on “non-attitudes” and that of other researchers showed that the extent to which citizens make informed or ill-informed judgments about political issues makes a profound difference to collective decision-making (Althaus, 1996, 1998; Delli Carpini and Keeter, 1996). The non-attitudes thesis was criticized as a “measurement error” problem (Achen, 1975). More recently, “online processing” has been proposed: people may use information momentarily to form opinions but be unable to recall that information later on (Hastie and Park, 1986). There are also models that link information, knowledge and values in the formation of preferences, and the role of “cues” as substitutes for political information (Zaller, 1991, 1992). Evidence from the political science literature leads us to begin with the assumption that for attitudes towards science and technology, as for other issues of the day, information matters, as does the ability and motivation to process it, unless proven otherwise.

The PUS activity context: benchmarking PUS within an indicator framework

Public interest, knowledge and attitudes to science must also be seen in the context of the “PUS movement”: its motives, actors, repertoire and resources. For many people, the “public understanding of science” became a rallying cry from the mid 1980s onwards, a banner around which to gather and to do better work. Among the actors we find, traditionally, the visible scientists communicating to a wider audience (e.g. Werskey, 1978).
But there are also the professional science writers in newspapers and magazines, on radio and television, and of popular books. Then there are the staffs of science museums and of the more recent science centers. Increasingly, there are the press officers of scientific laboratories, scientific bodies and universities.


Their repertoire includes exhibitions, science festivals, news events, press office work, science reportage in the press and on radio and television, popular science books and cinema, public hearings, meetings, Internet websites, consensus conferences, deliberative opinion polls, tables rondes, citizen juries, and so on. These activities grow in scope with new communication technology. Auxiliary efforts in “science communication” training have proliferated—an industry feeding on an industry, and becoming part of the available resources.

All these activities draw on public or private sponsorship. Resources also include ideals and the motivation to bring science and the public ever closer. Ideals of public education compete with self-interest, public entertainment, national pride in international competition, or more managerial notions such as Public Relations (PR) for the purpose of creating a favorable image for science or a particular institution. A public profile and favorable images are important to sustain public goodwill, which may translate into higher citation counts and into funding for future research (e.g. Gregory and Miller, 1998; Gregory and Bauer, 2003).

An inventory of this international PUS movement is urgently needed. Preliminary inventories were attempted by Schiele (1994) for Europe, North America, Africa, Australia and Japan, and more recently for Brazil by Massarani, Moreira and Brito (2002). In 2002, the European Commission published its Report of the Expert Group Benchmarking the Promotion of RTD Culture and Public Understanding of Science (Miller et al., 2002). The report showed that Europe is heterogeneous in approaches and levels of activity, although most actors address the “public deficits” that we discussed above. A problem is governments’ lack of knowledge about what they themselves are doing: with few exceptions (Ciencia Viva in Portugal and PUSH “Wissenschaft im Dialog” in Germany) it was impossible to find meaningful figures on the money being spent on PUS activities, let alone to say whether there was “value added” in what they were doing. Furthermore, in all but a few countries, little is done to prepare scientific researchers for communication activity with lay audiences, despite the commitment expressed in official documents. Perhaps surprisingly, it appears that industrial actors have shown an increased interest in PUS and are moving towards public engagement as part of corporate PR (see Burningham et al., 2007, in this issue). How far the latter trend develops beyond a PR/marketing activity remains to be seen.

Towards cultural indicators

The idea that literacy and attitudes are part of a wider framework for the accounting of the national science base, alongside figures for research and development investment, counts of publications and citations, patent outputs, and the size of high-tech industry, is not new, but it is worth reiterating. Since the early 1980s the US NSF has included the literacy surveys in its Science Indicator Reports.[9] Canadian researchers proposed an elaborate system of indicators of PUS that included measures of education, mass media coverage and public perceptions (Schiele, 1994; Godin and Gingras, 2000). Very recently, India has undertaken its first India Science Report; although based mainly on a large-scale survey of public perceptions, it stipulates a model with a range of input and output indicators (Shukla, 2005). The EU conducts occasional surveys of public perceptions, but these are hitherto isolated and not part of the official science indicators and innovation scoreboard.
One can also view PUS indicators as comparative measures of the performance of the PUS movement. A performance indicator measures action-outcomes, where the outcomes are supposed to be causally linked to the actions. Such indicators are used in planning, to set targets and to evaluate the success and failure of those actions. But instead of evaluating performance, they can just as validly be interpreted as indicators of the cultural climate.



Cultural indicators measure the context of activities; they indicate the context-for-action within which decisions are made (Bauer, 2000). Few actors consider a “culture” within their remit; they will, however, acknowledge performance as within their responsibility. This dual purpose of PUS indicators encourages different types of data and analyses. For performance indicators, the unit of analysis is typically the national polity. For cultural indicators, the unit of analysis should be more flexible. Comparative analysis might focus on clustering response patterns that indicate distinct patterns of attitudes rooted in transnational cultural milieus. Here a combination of survey analysis, ethnographic data, and current and official social statistics will be informative. The spatial distribution of political values has been found to correlate with variations in historical kinship patterns across Europe; this might serve as a model for how to map technoscientific cultures (Todd, 1996).

The polemic over surveys and the deficit concept encouraged alternative protocols of research, such as discourse analysis of focus interviews and textual materials. Mass media monitoring, in particular of newspapers, is cost effective and can readily be extended historically. Such data reveal long-term trends, such as the medicalization of science news over the last 25 years (Bauer, 1998a), and non-trivial long waves in public attention to science and technology (see LaFollette, 1990; Bauer et al., 1995; Bauer, 1998b; Bauer et al., 2006).

Integrated databases for longitudinal analysis

A burgeoning field of research is the history of popular science. Historical studies reveal a shifting repertoire of activities, ideologies and motives, a changing relationship between science and society, and the various roles played by popular science in boundary work (see Kohlstedt et al., 1999; Gieryn, 1999; Gregory and Miller, 1998). Studies of PUS with a long-term perspective (e.g. Lewenstein, 1992; Bauer, 1998b) show a dynamic phenomenon, and dynamic systems require continuous time-series data for an adequate representation. At present, however, PUS research is still dominated by snapshot studies of the shock-horror variety: “20% of X believe that the Sun goes around the Earth” or “35% believe that ordinary tomatoes do not contain genes.” As snapshots, these figures offer only a projection ground for moral panics: they acquire meaning only when compared across time (increase/decrease) and space (higher/lower than) and in conjunction with similar items.

In the US (biennially since 1979), the UK (1986, 1988, 1996, 2000, 2004), France (1972, 1982, 1989, 1996) and the EU (1989, 1992, 2000, 2005), the databases support longitudinal analysis in principle. But few efforts have been made to integrate these datasets systematically. In the US, the NSF has recently consolidated the science indicator data into a single file including the surveys from 1979 to 2001. Similar efforts seem to be under way for the International Social Survey Programme (ISSP), which has for many years collected public perception data on various issues pertaining to science and technology. The EU and Eurobarometer should follow suit. Consolidated databases will inform benchmarking and contextualization. Integrated databases and longitudinal modeling will bring a step change to PUS research: we might finally see dynamic modeling of public understanding of science across national contexts.
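As a sketch of what such consolidation involves in practice, separate survey waves, each fielded with its own variable names, can be stacked into one harmonized longitudinal file. The file names and column mappings below are hypothetical:

```python
# Sketch of dataset integration: stacking survey waves into one file with
# harmonized variable names, so that trends can be modeled over time.
# File names and column mappings are hypothetical.
import pandas as pd

WAVES = {
    1989: ("eb31.csv", {"v12": "knowledge", "v15": "attitude"}),
    1992: ("eb38.csv", {"q7":  "knowledge", "q9":  "attitude"}),
    2005: ("eb63.csv", {"qa1": "knowledge", "qa4": "attitude"}),
}

frames = []
for year, (path, mapping) in WAVES.items():
    wave = pd.read_csv(path).rename(columns=mapping)
    wave["year"] = year  # tag each record with its wave
    frames.append(wave[["year", "knowledge", "attitude"]])

integrated = pd.concat(frames, ignore_index=True)

# With a single integrated file, simple longitudinal summaries become possible.
print(integrated.groupby("year")[["knowledge", "attitude"]].mean())
```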
Cross-sectional analyses have shown that the correlation between interest, knowledge and attitudes is itself a variable. Examining this co-variation led to the developmental “two culture model” of PUS. In this analysis, PUS in industrialized (e.g. France, UK) and industrializing (e.g. Portugal, Ireland) countries is characterized, among other things, by a higher correlation between interest, knowledge and attitudes, while in post-industrial countries (e.g. Denmark, Germany) the correlations are much lower (Bauer et al., 1994; Durant et al., 2000).


Allum, Boy and Bauer (2002) tested this model, with partial success, across European regions. The recent India Science Report (see Shukla, 2005) and attempts to construct indicator surveys in Latin America, China, Japan and South Africa will globalize this database. However, in order finally to test the “two cultures of PUS” hypothesis, we need longitudinal data within any one context. This kind of model-based, longitudinal analysis might bring PUS research into fruitful dialogue with discussions of “civic epistemology” in science and technology studies, that is, the diversification of technoscientific culture in a globalized context (Jasanoff, 2005).

Broadening the scope of data streams

Finally, our plea is to expand the range of data “officially and legitimately” relevant for monitoring the public understanding of science. Qualitative data have been used in the field for some time, as have mass media analyses. But little effort has gone into the persistent and comparative collection and analysis of such data. Yet both mass media and qualitative enquiry lend themselves to longitudinal data streaming, although the methodology might need development. In this way, an inter-disciplinary field of enquiry like PUS might have the opportunity to make significant contributions to other disciplines by developing existing methods with a longitudinal perspective. For example, the analysis of science reportage in the mass media yields long-term indicators of public salience and issue framing, and might contribute to a dynamic theory of social representations, a social psychological theory that deals more generally with the transformation of knowledge across communities (see Wagner and Hayes, 2005; Jovchelovitch, 2006).
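To illustrate the media data stream, a long-term salience indicator can be derived from a coded newspaper archive. The following minimal sketch assumes a hypothetical input file with one row per coded science article and its publication date:

```python
# Sketch of a longitudinal media indicator: annual counts of science articles
# in a newspaper archive, smoothed to bring out long waves of attention.
# The input file and its format are assumptions for illustration.
import pandas as pd

articles = pd.read_csv("coded_science_articles.csv", parse_dates=["date"])

# Salience indicator: number of science articles per publication year.
salience = articles.groupby(articles["date"].dt.year).size()

# A five-year centered moving average smooths short news cycles and reveals
# the long waves of public attention described in the text.
print(salience.rolling(window=5, center=True).mean())
```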

6. Conclusion

We have traced the recent history of PUS research from “literacy” to “science and society” and pointed to confusions around the “deficit concept.” Identifying the deficit model of the public with survey research per se, and for that matter critical enquiry with qualitative protocols, is fallacious and not useful beyond temporary rhetoric. The critique of the public deficit model as a common sense prejudice among experts is certainly valid, but its identification with the protocol of survey research is dysfunctional. Breaking this unfortunate mind frame that links model and protocol will help to liberate and expand the research agenda. We suggest that a liberated agenda might include: contextualizing survey results through a reframing of the knowledge–attitude problem and within a framework of science indicators; analyzing data in search of cultural indicators; the global integration and analysis of longitudinal databases; and the mobilization of additional, preferably qualitative, data streams with a long-term perspective. As long as science and society are not entirely identical spheres, the issues of the public’s understanding of science, and of scientists’ understanding of the public, are here to stay. In this light, we are confident that the field will see fertile and burgeoning research activity in the years to come.

Notes

1. At the time, Lazarsfeld was searching for a way to continue discussions with T.W. Adorno and his group. This dichotomy is a simplification of a more complex issue, which Habermas (1965) later captured in the triadic distinction of technical-instrumental, practical-normative, and critical-emancipatory knowledge interests.

2. As an anecdote, John Durant used to recall a ride on the interstate highway through New Mexico, where a radio program at 4 a.m. referred to the results of the Ptolemaeus question. There is clearly entertainment value in these items. More seriously, they are frequently cited by alarmed scientists in public lectures, keynote speeches, and private and public meetings. They are rhetorical devices that speakers, even unexpectedly, love to call upon. So, for example, the anthropologist Paul Rabinow, not known to think favorably of such data, made rhetorical use of such survey items at a recent public lecture at the London School of Economics (BIOS annual lecture 2005).

3. “Project 2061” is an initiative of the American Association for the Advancement of Science (AAAS) to advance literacy in science, mathematics and technology over the long term; http://www.project2061.org/

4. PUS was also extended to PUST to include “T” for technology, PUSTE to include “E” for engineering, or PUSH to include “H” for the humanities, the last indicating a more continental understanding of “science” as “Wissenschaft.”

5. The dating of these phases follows mainly the influential UK experience. In the US, all through the 1970s, the AAAS had a standing Committee on Public Understanding of Science (see Kohlstedt et al., 1999: 140ff.). One of the authors, Martin Bauer, joined the Economic and Social Research Council (ESRC) PUS research program in the late 1980s as a part-time “number cruncher” for John Durant at the Science Museum in London. He remembers that many of the members of this research program were on non-speaking terms. The debate curiously centered on whether a team would use numerical or verbal-qualitative protocols for their observations. The publication that finally presented the “main results” of this research program (Irwin and Wynne, 1996) excluded the three numerical projects: the survey of adults (Durant et al., 1989), the survey of adolescent children (see Breakwell and Robertson, 2001), and the analysis of mass media reportage of science (Hansen and Dickinson, 1992).

6. Whether such a spurious association was an intentional part of fights over academic territory in a context of temporarily increasing funding streams, or just an unfortunate reading and side effect of the polemic over the deficit model, is difficult to ascertain and must be left to the historians of PUS to answer. Brian Wynne is admittedly aware of an unfortunate reception of some of his formulations regarding survey research and the deficit model (personal communication, June 2006).

7. The European Commission’s Science and Society Action Plan (2001) states: “. . . there are indications that the immense potential of our [scientific and technical] achievements is out of step with European citizens’ current needs and aspirations . . .” (p. 7) and then proposes 38 actions to bring the two into line.

8. The UK GM Nation debate cost the government close to £1 million. A survey of public attitudes might cost between £50,000 and £100,000, a set of focus groups around £20,000. This cost differential suggests a differential indication, and all expenditures will ultimately have to be justified by added value, in particular if deliberation is to become policy routine and not just a one-off experiment.

9. The NSF’s annual science indicators report has for many years included a chapter on public attitudes. Until 2001 the NSF basically adopted Jon Miller’s 1983 framework model of literacy. Since then it is not clear what framework informs the chapter on public understanding and attitudes in the US.

References

Achen, C. (1975) “Mass Political Attitudes and the Survey Response,” American Political Science Review 69: 1218–31.
Allum, N., Boy, D. and Bauer, M.W. (2002) “European Regions and the Knowledge Deficit Model,” in M.W. Bauer and G. Gaskell (eds) Biotechnology: the Making of a Global Controversy, pp. 224–43. Cambridge: Cambridge University Press.
Allum, N., Sturgis, P., Tabourazi, D. and Brunton-Smith, I. (2007) “Science Knowledge and Attitudes across Cultures: a Meta-analysis,” Public Understanding of Science (forthcoming).
Althaus, S. (1996) “Opinion Polls, Information Effects and Political Equality: Exploring Ideological Biases in Collective Opinion,” Political Communication 13: 3–21.
Althaus, S. (1998) “Information Effects in Collective Preferences,” American Political Science Review 92: 545–58.
Bauer, M. (1996) “Socio-economic Correlates of DK-Responses in Knowledge Surveys,” Social Science Information 35: 39–68.
Bauer, M. (1998a) “The Medicalisation of Science News: From the ‘Rocket-Scalpel’ to the ‘Gene-Meteorite’ Complex,” Social Science Information 37: 731–51.
Bauer, M. (1998b) “La longue durée of popular science, 1830–present,” in D. Devèze-Berthet (ed.) La promotion de la culture scientifique et technique: ses acteurs et leurs logiques, Actes du colloque des 12 et 13 décembre 1996, pp. 75–92.
Bauer, M. (2000) “Science in the Media as Cultural Indicators: Contextualising Survey with Media Analysis,” in M. Dierkes and C. von Grote (eds) Between Understanding and Trust: The Public, Science and Technology, pp. 157–78. Amsterdam: Harwood Academic Publishers.



Bauer, M. and Durant, J. (1997) “Belief in Astrology: a Social-Psychological Analysis,” Culture and Cosmos: A Journal of the History of Astrology and Cultural Astronomy 1: 55–72.
Bauer, M. and Gaskell, G. (1999) “Towards a Paradigm for Research on Social Representations,” Journal for the Theory of Social Behaviour 29: 163–86.
Bauer, M. and Schoon, I. (1993) “Mapping Variety in Public Understanding of Science,” Public Understanding of Science 2: 141–55.
Bauer, M., Durant, J. and Evans, G. (1994) “European Public Perceptions of Science,” International Journal of Public Opinion Research 6: 163–86.
Bauer, M., Durant, J., Ragnarsdottir, A. and Rudolfsdottir, A. (1995) “Science and Technology in the British Press, 1946–1992, The Media Monitor Project, Vols 1–4,” Technical Reports. London: Science Museum and Wellcome Trust for the History of Medicine.
Bauer, M.W., Gaskell, G. and Allum, N. (2000a) “Quantity, Quality and Knowledge Interests: Avoiding Confusions,” in M. Bauer and G. Gaskell (eds) Qualitative Researching with Text, Image and Sound: A Practical Handbook, pp. 3–18. London: SAGE.
Bauer, M., Petkova, K. and Boyadjiewa, P. (2000b) “Public Knowledge of and Attitudes to Science: Alternative Measures,” Science, Technology and Human Values 25: 30–51.
Bauer, M., Petkova, K., Boyadjieva, P. and Gornev, G. (2006) “Long-term Trends in the Representations of Science across the Iron Curtain: Britain and Bulgaria, 1946–95,” Social Studies of Science 36: 97–129.
Beveridge, A.A. and Rudell, F. (1988) “An Evaluation of ‘Public Attitudes toward Science and Technology’: Science Indicators: the 1985 Report,” Public Opinion Quarterly 52: 374–85.
Bodmer, W. (1987) “The Public Understanding of Science,” Science and Public Affairs 2: 69–90.
Boy, D. (1989) Les attitudes des Français à l’égard de la science. Paris: CNRS, CEVIPOF.
Boy, D. and Michelat, G. (1986) “Croyance aux parasciences: dimensions sociales et culturelles,” Revue Française de Sociologie 27: 175–204.
Breakwell, G.M. and Robertson, T. (2001) “The Gender Gap in Science Attitudes, Parental and Peer Influences: Changes between 1987/88 and 1997/98,” Public Understanding of Science 10: 71–82.
Burningham, K., Barnett, J., Carr, A., Clift, R. and Wehrmeyer, W. (2007) “Industrial Constructions of Publics and Public Knowledge: A Qualitative Investigation of Practice in the UK Chemicals Industry,” Public Understanding of Science (forthcoming).
Butschi, D. and Nentwich, M. (2002) “The Role of Participatory Technology Assessment in the Policy-making Process,” in S. Joss and S. Bellucci (eds) Participatory Technology Assessment, pp. 234–56. London: Centre for the Study of Democracy.
Collins, H. and Pinch, T. (1993) The Golem: What Everyone Should Know about Science. Cambridge: Cambridge University Press.
Converse, P.E. (1964) “The Nature of Belief Systems in Mass Publics,” in D.E. Apter (ed.) Ideology and Discontent, pp. 206–61. New York: Free Press.
Costa, A.R. da, Ávila, P. and Mateus, S. (2002) Públicos da ciência em Portugal. Lisbon: Gradiva.
Daamen, D.D.L. and vanderLans, I.A. (1995) “The Changeability of Public Opinions about New Technology: Assimilation Effects in Attitude Surveys,” in M. Bauer (ed.) Resistance to New Technology, pp. 81–96. Cambridge: Cambridge University Press.
Delli Carpini, M. and Keeter, S. (1996) What Americans Know about Politics and Why it Matters. New Haven: Yale University Press.
Durant, J. (1993) “What Is Scientific Literacy?,” in J. Durant and J. Gregory (eds) Science and Culture in Europe, pp. 129–38. London: Science Museum.
Durant, J., Evans, G.A. and Thomas, G.P. (1989) “The Public Understanding of Science,” Nature 340: 11–14.
Durant, J., Evans, G. and Thomas, G. (1992) “Public Understanding of Science in Britain: the Role of Medicine in the Popular Representation of Science,” Public Understanding of Science 1: 161–82.
Durant, J., Bauer, M., Midden, C., Gaskell, G. and Liakopoulos, M. (2000) “Two Cultures of Public Understanding of Science,” in M. Dierkes and C. von Grote (eds) Between Understanding and Trust: The Public, Science and Technology, pp. 131–56. Amsterdam: Harwood Academic Publishers.
Eagly, A.H. and Chaiken, S. (1993) The Psychology of Attitudes. Fort Worth: Harcourt Brace College Publishers.
Einsiedel, E.F. (1994) “Mental Maps of Science: Knowledge and Attitudes among Canadian Adults,” International Journal of Public Opinion Research 6: 35–44.
Etzioni, A. and Nunn, C. (1976) “The Public Appreciation of Science in Contemporary America,” in G. Holton and W.A. Blanpied (eds) Science and its Public: the Changing Relationship, Boston Studies in the Philosophy of Science, vol. 33, pp. 229–43. Dordrecht: Kluwer Academic.
Eurobarometer 31 (1989) URL: http://ec.europa.eu/public_opinion/archives/eb_arch_en.htm



European Commission (2001) Science and Society Action Plan. Brussels: Commission of the European Communities.
Evans, G. and Durant, J. (1989) “Understanding of Science in Britain and the USA,” in R. Jowell, S. Witherspoon and L. Brook (eds) British Social Attitudes, 6th Report, pp. 105–120. Aldershot: Gower.
Evans, G. and Durant, J. (1995) “The Relationship between Knowledge and Attitudes in the Public Understanding of Science in Britain,” Public Understanding of Science 4: 57–74.
Farr, R.M. (1993) “Common Sense, Science and Social Representations,” Public Understanding of Science 2: 189–204.
Fuller, S. (2000) The Governance of Science. Buckingham: Open University Press.
Gaskell, G., Wright, D. and O’Muircheartaigh, C. (1993) “Measuring Scientific Interest: the Effect of Knowledge Questions on Interest Ratings,” Public Understanding of Science 2: 39–58.
Gaskell, G., Allum, N. and Stares, S. (2003) Europeans and Biotechnology in 2002: Eurobarometer 58.0. A report to the EC Directorate General for Research for the project “Life Sciences in European Society,” March. London and Brussels: European Commission.
Gieryn, T.F. (1999) Cultural Boundaries of Science. Chicago: University of Chicago Press.
Gigerenzer, G. and Hoffrage, U. (1995) “How to Improve Bayesian Reasoning without Instruction: Frequency Formats,” Psychological Review 102: 684–704.
Godin, B. and Gingras, Y. (2000) “What Is Scientific and Technological Culture and How to Measure It? A Multidimensional Model,” Public Understanding of Science 9: 43–58.
Gregory, J. and Bauer, M.W. (2003) “CPS INC: l’avenir de la communication de la science,” in B. Schiele (ed.) Les Nouveaux Territoires de la Culture Scientifique, pp. 41–65. Montreal: Les Presses de l’Université de Montréal and Presses Universitaires de Lyon.
Gregory, J. and Miller, S. (1998) Science in Public: Communication, Culture and Credibility. New York: Plenum Trade.
Habermas, J. (1965) “Erkenntnis und Interesse,” Merkur 213: 1139–53 (reprinted in the essay collection Technik und Wissenschaft als “Ideologie.” Frankfurt am Main: Suhrkamp, 1968).
Hansen, A. and Dickinson, R. (1992) “Science Coverage in the British Mass Media: Media Output and Source Input,” Communication 17: 365–77.
Hastie, R. and Park, B. (1986) “The Relationship between Memory and Judgement Depends on Whether the Task is Memory-based or On-line,” Psychological Review 93: 258–68.
House of Lords Select Committee on Science and Technology (2000) Science and Society, 3rd Report. London: HMSO.
Irwin, A. and Wynne, B. (1996) Misunderstanding Science? The Public Reconstruction of Science and Technology. Cambridge: Cambridge University Press.
Jasanoff, S. (2005) Designs on Nature: Science and Democracy in Europe and the United States. Princeton: Princeton University Press.
Joss, S. and Bellucci, S., eds (2002) Participatory Technology Assessment. London: Centre for the Study of Democracy.
Jovchelovitch, S. (2006) Knowledge in Context: Representations, Community and Culture. London: Routledge.
Kohlstedt, S.G., Sokal, M.M. and Lewenstein, B. (1999) The Establishment of Science in America. Washington DC: AAAS.
LaFollette, M.C. (1990) Making Science our Own: Public Images of Science, 1910–1955. Chicago: University of Chicago Press.
Lazarsfeld, P.F. (1941) “Remarks on Administrative and Critical Communication Research,” Studies in Philosophy and Social Science 9: 2–16.
Lewenstein, B. (1992) “The Meaning of Public Understanding of Science in the US after World War II,” Public Understanding of Science 1: 45–68.
Luhmann, N. (1968) Vertrauen: ein Mechanismus der Reduktion sozialer Komplexität. Stuttgart: Enke Verlag.
Macoubrie, J. (2005) Informed Public Perceptions of Nanotechnology and Trust in Government. Washington DC: Woodrow Wilson International Center for Scholars.
Massarani, L., Moreira, I.C. and Brito, F. (2002) Ciência e público: caminhos da divulgação científica no Brasil. Rio de Janeiro: UFRJ/Casa da Ciência.
Miller, J.D. (1983) “Scientific Literacy: a Conceptual and Empirical Review,” Daedalus Spring: 29–48.
Miller, J.D. (1987) “Scientific Literacy in the United States,” in Communicating Science to the Public, pp. 19–40. Chichester: Wiley.
Miller, J.D. (1992) “Towards a Scientific Understanding of the Public Understanding of Science and Technology,” Public Understanding of Science 1: 23–30.
Miller, J.D. (1998) “The Measurement of Civic Scientific Literacy,” Public Understanding of Science 7: 203–24.


94

Public Understanding of Science 16 (1)

Miller, J.D. and Pardo, R. (2000) "Civic Scientific Literacy and Attitude to Science and Technology: a Comparative Analysis of the European Union, the United States, Japan and Canada," in M. Dierkes and C. von Grote (eds) Between Understanding and Trust: The Public, Science and Technology, pp. 81–130. Amsterdam: Harwood Academic Publishers.
Miller, S. (2001) "Public Understanding of Science at the Crossroads," Public Understanding of Science 10: 115–20.
Miller, S., Caro, P., Koulaidis, V., de Semir, V., Staveloz, W. and Vargas, R. (2002) Benchmarking the Promotion of RTD Culture and Public Understanding of Science. Brussels: Commission of the European Communities.
Mondak, J. and Anderson, M. (2003) "A Knowledge Gap or a Guessing Game? Gender and Political Knowledge," Public Perspective 14: 6–9.
Office of Science and Technology (OST) (2000) Science and the Public: A Review of Science Communication and Public Attitudes to Science in Britain. London: OST and Wellcome Trust.
Pardo, R. and Calvo, F. (2002) "Attitudes toward Science among the European Public: a Methodological Analysis," Public Understanding of Science 11: 155–95.
PISA (2006) OECD Programme for International Student Assessment. URL: http://www.pisa.oecd.org
Prewitt, K. (1983) "Scientific Literacy and Democratic Theory," Daedalus Spring: 49–64.
Raza, G., Singh, S., Dutt, B. and Chander, J. (1996) Confluence of Science and People's Knowledge at the Sangam. New Delhi: NISTADS.
Raza, G., Singh, S. and Dutt, B. (2002) "Public, Science, and Cultural Distance," Science Communication 23(3): 292–309.
Roqueplo, P. (1974) Le partage du savoir: Science, culture, vulgarisation. Paris: Seuil.
Rowe, G. and Frewer, L. (2004) "Evaluating Public Participation Exercises: a Research Agenda," Science, Technology and Human Values 29: 512–56.
Rowe, G., Horlick-Jones, T., Walls, J. and Pidgeon, N. (2005) "Difficulties in Evaluating Public Engagement Initiatives: Reflections on an Evaluation of the UK GM Nation? Public Debate about Transgenic Crops," Public Understanding of Science 14: 331–52.
Royal Society of London (1985) The Public Understanding of Science. London: Royal Society.
Schiele, B. (1994) Science Culture: la culture scientifique dans le monde. Montreal: CIRST, UQAM.
Seargent, J. and Steele, J. (1998) Consulting the Public: Guidelines and Good Practice. London: Policy Studies Institute.
Shukla, R. (2005) India Science Report: Science Education, Human Resources and Public Attitudes towards Science and Technology. Delhi: NCAER.
Stilgoe, J., Wilsdon, J. and Wynne, B. (2005) The Public Value of Science. London: Demos.
Sturgis, P.J. and Allum, N.C. (2004) "Science in Society: Re-evaluating the Deficit Model of Public Attitudes," Public Understanding of Science 13: 55–74.
Thomas, G.P. and Durant, J.R. (1987) "Why Should We Promote the Public Understanding of Science?," in M. Shortland (ed.) Scientific Literacy Papers, pp. 1–14. Oxford: Rewley House.
Todd, E. (1996) L'invention de l'Europe. Paris: Éditions du Seuil.
Turner, J. and Michael, M. (1996) "What Do We Know about 'Don't Knows'? Or, Contexts of Ignorance," Social Science Information 35: 15–37.
Wagner, W. and Hayes, N. (2005) Everyday Discourse and Common Sense: the Theory of Social Representations. London: Palgrave Macmillan.
Werskey, G. (1978) The Visible College. London: Allen Lane.
Withey, S.B. (1959) "Public Opinion about Science and the Scientist," Public Opinion Quarterly 23: 382–8.
Wynne, B. (1993) "Public Uptake of Science: a Case for Institutional Reflexivity," Public Understanding of Science 2: 321–38.
Wynne, B. (1995) "Public Understanding of Science," in S. Jasanoff, G.E. Markle, J.C. Petersen and T. Pinch (eds) Handbook of Science and Technology Studies, pp. 361–88. London: SAGE.
Wynne, B. (1996) "May the Sheep Safely Graze? A Reflexive View of the Expert–Lay Knowledge Divide," in S. Lash, B. Szerszynski and B. Wynne (eds) Risk, Environment and Modernity: Towards a New Ecology, pp. 44–83. London: SAGE.
Zaller, J. (1991) "Information, Values and Opinions," American Political Science Review 85: 1215–37.
Zaller, J. (1992) The Nature and Origins of Mass Opinion. New York: Cambridge University Press.
Ziman, J. (1991) "Public Understanding of Science," Science, Technology and Human Values 16: 99–105.

Authors

Martin W. Bauer is currently Reader in Social Psychology and Research Methodology at the London School of Economics, and associated with Social Psychology, BIOS and the Methodology Institute. A former Research Fellow at the Science Museum, London, he directs the LSE Master's Programme in Social and Public Communications. He researches the contribution of resistance to social processes and the comparison of public representations of science and technology. He publishes books and scholarly articles on the dynamics and impact of public opinion on new scientific and technological developments: Resistance to New Technology (Cambridge University Press, 1995), Qualitative Researching with Text, Image and Sound (with G. Gaskell; SAGE, 2000), Biotechnology: The Making of a Global Controversy (with G. Gaskell; Cambridge University Press, 2002) and Genomics and Society (with G. Gaskell; Earthscan, 2006).

Correspondence: Martin W. Bauer, London School of Economics and Political Science, Institute of Social Psychology, Houghton Street, London WC2A 2AE, UK; e-mail: M.Bauer@lse.ac.uk

Nick Allum is Lecturer in Quantitative Sociology in the Department of Sociology, University of Surrey. His current research interests are in the public understanding of science, the social psychology of risk, and social and political trust. He also has interests in survey methods and teaches quantitative methods to undergraduates and postgraduates.

Steve Miller is Professor of Science Communication and Planetary Science at University College London. He is Head of the UCL Science and Technology Studies Department and Chair of the Particle Physics and Astronomy Research Council's Solar System Advisory Panel. He is co-author, with Jane Gregory, of Science in Public: Communication, Culture and Credibility (Plenum, 1998), and Director of the European Science Communication Network (ESConet) Workshops project, funded by the European Union's Framework 6 Programme.

