
Tainted


ENVIRONMENTAL ETHICS AND SCIENCE POLICY SERIES
Editor-in-Chief: Kristin Shrader-Frechette

A Perfect Moral Storm: The Ethical Tragedy of Climate Change, Stephen M. Gardiner
Acceptable Evidence: Science and Values in Risk Management, edited by Deborah Mayo and Rachelle Hollander
Across the Boundaries: Extrapolation in Biology and Social Science, Daniel Steel
Democracy, Risk, and Community: Technological Hazards and the Evolution of Liberalism, Richard Hiskes
Environmental Justice: Creating Equality, Reclaiming Democracy, Kristin Shrader-Frechette
Experts in Uncertainty: Expert Opinion and Subjective Probability in Science, Roger Cooke
In Nature's Interests? Interests, Animal Rights, and Environmental Ethics, Gary E. Varner
Is a Little Pollution Good for You? Incorporating Societal Values in Environmental Research, Kevin C. Elliott
Only One Chance: How Environmental Pollution Impairs Brain Development—and How to Protect the Brains of the Next Generation, Philippe Grandjean
Privatizing Public Lands, Scott Lehmann
Tainted: How Philosophy of Science Can Expose Bad Science, Kristin Shrader-Frechette
Taking Action, Saving Lives: Our Duties to Protect Environmental and Public Health, Kristin Shrader-Frechette
What Will Work: Fighting Climate Change with Renewable Energy, Not Nuclear Power, Kristin Shrader-Frechette


Tainted
How Philosophy of Science Can Expose Bad Science

KRISTIN SHRADER-FRECHETTE



Oxford University Press is a department of the University of Oxford. It furthers the University's objective of excellence in research, scholarship, and education by publishing worldwide.

Oxford  New York
Auckland  Cape Town  Dar es Salaam  Hong Kong  Karachi  Kuala Lumpur  Madrid  Melbourne  Mexico City  Nairobi  New Delhi  Shanghai  Taipei  Toronto

With offices in
Argentina  Austria  Brazil  Chile  Czech Republic  France  Greece  Guatemala  Hungary  Italy  Japan  Poland  Portugal  Singapore  South Korea  Switzerland  Thailand  Turkey  Ukraine  Vietnam

Oxford is a registered trademark of Oxford University Press in the UK and certain other countries.

Published in the United States of America by Oxford University Press
198 Madison Avenue, New York, NY 10016

© Oxford University Press 2014

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, without the prior permission in writing of Oxford University Press, or as expressly permitted by law, by license, or under terms agreed with the appropriate reproduction rights organization. Inquiries concerning reproduction outside the scope of the above should be sent to the Rights Department, Oxford University Press, at the address above. You must not circulate this work in any other form and you must impose this same condition on any acquirer.

Library of Congress Cataloging-in-Publication Data
Shrader-Frechette, K. S. (Kristin Sharon)
Tainted : how philosophy of science can expose bad science / Kristin Shrader-Frechette.
pages cm.—(Environmental ethics and science policy)
ISBN 978-0-19-939641-2 (hardcover : alk. paper)
1. Errors, Scientific.  2. Science—Methodology.  I. Title.  II. Title: Exposing bad science, practicing philosophy of science.
Q172.5.E77S57 2014
501—dc23

9 8 7 6 5 4 3 2 1 Printed in the United States of America on acid-free paper


For Evelyn, that you and your sister may help create an even better world than the one into which we brought you.



CONTENTS

CHAPTER 1. Speaking Truth to Power: Uncovering Flawed Methods, Protecting Lives and Welfare

PART I  CONCEPTUAL AND LOGICAL ANALYSIS

CHAPTER 2. Discovering Dump Dangers: Unearthing Hazards in Hydrogeology
CHAPTER 3. Hormesis Harms: The Emperor Has No Biochemistry Clothes
CHAPTER 4. Trading Lives for Money: Compensating Wage Differentials in Economics

PART II  HEURISTIC ANALYSIS AND DEVELOPING HYPOTHESES

CHAPTER 5. Learning from Analogy: Extrapolating from Animal Data in Toxicology
CHAPTER 6. Conjectures and Conflict: A Thought Experiment in Physics
CHAPTER 7. Being a Disease Detective: Discovering Causes in Epidemiology
CHAPTER 8. Why Statistics Is Slippery: Easy Algorithms Fail in Biology

PART III  METHODOLOGICAL ANALYSIS AND JUSTIFYING HYPOTHESES

CHAPTER 9. Releasing Radioactivity: Hypothesis-Prediction in Hydrogeology
CHAPTER 10. Protecting Florida Panthers: Historical-Comparativist Methods in Zoology
CHAPTER 11. Cracking Case Studies: Why They Work in Sciences such as Ecology
CHAPTER 12. Uncovering Cover-Up: Inference to the Best Explanation in Medicine

PART IV  VALUES ANALYSIS AND SCIENTIFIC UNCERTAINTY

CHAPTER 13. Value Judgments Can Kill: Expected-Utility Rules in Decision Theory
CHAPTER 14. Understanding Uncertainty: False Negatives in Quantitative Risk Analysis
CHAPTER 15. Where We Go from Here: Making Philosophy of Science Practical

Notes
Index


Tainted



CHAPTER 1

Speaking Truth to Power
UNCOVERING FLAWED METHODS, PROTECTING LIVES AND WELFARE

Although Westerners formally repudiate racism, sometimes their science may encourage it. For instance, many people assume that Aboriginal dominance by industrialized peoples exhibits a Darwinian survival of the fittest, that technologically primitive people retain more evolutionary traces of descent from apelike beings. After all, says Pulitzer-Prize–winning biologist Jared Diamond, Aborigines had lived in Australia for 40,000 years as hunter-gatherers. Yet within a century of colonizing Australia, white immigrants built a literate, democratic, industrialized, politically centralized state. Given identical Australian environments, many drew the scientific conclusion that divergent Aboriginal and European achievements arose from biological and cognitive differences between the peoples themselves.1

Diamond rejects this scientific conclusion as invalid and racist. He argues that on average, stone-age peoples probably are more intelligent than industrialized peoples because they must learn to cope with high-mortality societies facing tribal warfare, accidents, and food-procuring difficulties. Westerners, however, often fail to learn from their environment. Instead they waste more time in passive entertainment—7 hours/day of TV in average American households. But if Westerners often survive, regardless of their abilities, what explains their dominance? Real estate, says Diamond. Because Europeans were fortunate to live in regions with accessible metals, they built guns and steel tools that were unavailable to stone-tool people. Because they lived in urban centers and developed nastier germs, they and not colonized peoples became immune to them. Because guns, germs, and steel indirectly conferred political and economic power on colonizers, Diamond says colonized and enslaved peoples have never competed on a level playing field. Thus there is no scientific evidence for their supposed inferiority.2

Diamond's analysis of the case of Australian Aborigines suggests that science can be done well or badly. It can be used for advancing knowledge or allowing oppression. As the ace in the deck of knowledge and action, science has the power
to trump opinion and to create or settle disputes. Because science is so powerful, those who ignore its evaluation do so at their own peril.

Practical and Theoretical Evaluations of Science

Arguing for a new focus in evaluating science, this book is the first devoted entirely to practical philosophy of science—to using classic philosophy-of-science analyses to help uncover flawed science, promote reliable science, and thus help liberate people from science-related societal harms. It illustrates traditional philosophy of science—how to analyze concepts, methods, and practices in a variety of sciences. Yet it does so by investigating parts of biology, economics, physics, and other sciences that can make life-and-death differences for humans.

Instead of autopsies on dead scientific theories—purely theoretical or historical evaluations—practical philosophy of science has at least four characteristics. It analyzes science having critical, welfare-related uses and consequences. It analyzes science naturalistically, using the logic and methods that scientists use. It analyzes science at the heart of contemporary controversies. It illustrates how to evaluate scientific methods and does not merely describe what they are. In its focus on practice and the how of evaluating science, this book is eminently practical in illustrating how to use methodological criticisms of science to liberate people from the flawed science that often harms them. It aims to be a philosophy-of-science analogue for the practical legal analyses of the Innocence Project—used by law students who work pro bono to liberate death-row inmates from the flawed legal system that often kills them in error.

Practical evaluation of science is important because, at least in the United States, 75 percent of all science is funded by special interests in order to achieve specific practical goals, such as developing pharmaceuticals or showing some pollutant causes only minimal harm. Of the remaining 25 percent of US-science funding, more than half addresses military goals. This means that less than one-eighth of US-science funding is for basic science; roughly seven-eighths is for practical projects.3 Yet, as later paragraphs reveal, most philosophy of science focuses on evaluating the one-eighth of science that is theoretical, while almost none assesses the seven-eighths that is practical. This book seeks to broaden the scope of philosophy of science, to evaluate contemporary scientific methods that make a difference in the world.

While traditional or theoretical philosophy of science focuses on understanding science, this book addresses both understanding and justice. It seeks understanding by assessing classic questions of philosophers of science. It seeks justice by addressing these classic questions in ways that also assess their practical consequences for welfare. What are some of these classic questions?



Scientists and philosophers of science typically ask at least 5 types of traditional theoretical questions about

• how to analyze concepts,
• how to make inferences about data,
• how to discover or develop hypotheses,
• how to test or justify hypotheses, and
• how to deal with unavoidable value judgments that arise, especially in situations of scientific uncertainty.

Given this roughly chronological account of some classic philosophy-of-science questions, this book asks them about science that has practical, often life-and-death consequences for human welfare. Following this 5-question framework, the book has 4 sections:

• conceptual analysis and logical analysis
• heuristic analysis, questions about hypothesis-discovery/development
• methodological analysis, questions about testing/justifying hypotheses
• values analysis, questions about normative judgments in science.

Thus the book’s framework and questions focus on traditional philosophy of science, but its analyses, cases, and illustrations assess science with welfare-related consequences.

What Is Science?

More than today, early science probably was dominated by practical concerns, by devising inventions that would help humans. In fact, the English word "scientist" arose rather late, in 1834, when the naturalist-theologian William Whewell coined the word. Until that time, scientists often were called artisans or inventors, as when philosopher-scientist Francis Bacon said, "the good effects wrought by founders of cities, law-givers . . . extirpers of tyrants, and heroes . . . extend but for short times, whereas the work of the inventor . . . is felt everywhere and lasts forever."4

Similarly, until the 1800s, European universities had no "scientists." Instead they had at most 4 branches of learning: law, medicine, theology, and philosophy. Philosophy included what we now call scientific, engineering, and humanistic disciplines, something that explains why the advanced degree in all these disciplines is still called the PhD (Doctor of Philosophy). By the 17th century, however, philosophy was divided into moral philosophy, the study of the human world, as through politics, economics, ethics, and psychology, and natural philosophy, the
conceptual/mathematical study of non-human phenomena, as through physics or chemistry. Although in Scotland "natural philosophy" sometimes is still used today to label science departments such as physics, it was the dominant label for science only until Whewell's time. That is why Isaac Newton called his 1687 physics classic "The Mathematical Principles of Natural Philosophy."5

Even persons studying "scientia" ("episteme" or knowledge, for Aristotle) did not study science as we know it, but any well-established systematic or causal knowledge, including theology. Although early 20th-century philosophers sometimes followed Aristotle and called ethics "science/scientia,"6 they were in the minority. By the 19th century most scholars no longer considered academic disciplines like theology to be science.7 Instead they defined science as grounded in rigorous, observation-based experiment, logic, prediction, and replicability. As these chapters illustrate, however, scientists still disagree on how much, and what kind of, rigor is enough, in order for something to be scientific.

Why did natural philosophy slowly evolve into science? One reason is the rising status of mathematical practitioners or artisans doing "ars" ("techne" in Greek). They became aligned with the elite natural philosophers who were doing scientia. Ars was practical knowledge of how to do something—like building tables or tracking stars—whereas scientia was theoretical knowledge of a demonstrative kind, like metaphysics. Many slaves, laborers, and tradespeople did ars, whereas free men (rarely women) of wealth and leisure did scientia. Once the scientific revolution began in the 16th century, however, the theory-focused natural philosophers rejected Aristotelianism and turned to less theological and more practical, empirical, mathematically based views, such as heliocentrism and atomism. Their new natural philosophy focused on predicting and changing the world, not just contemplating it. Nevertheless natural philosophers such as Nicolaus Copernicus, Galileo Galilei, Johannes Kepler, and Gottfried Leibniz remained theists. Robert Boyle, Isaac Newton, and other scientists explicitly tied their work to theology, partly because they believed scientific phenomena required a designer, God. Newton even claimed that because God maintains the solar system, gravity can hold the planets together. However, when most 19th-century scholars rejected such theologically oriented causal explanations for material phenomena, natural philosophy was fully transformed into science.8

What Is Philosophy of Science?

From Aristotle to Einstein, scientists and philosophers of science have focused both on science, assessing questions within some science, and on metascience or philosophy of science, assessing questions about or beyond science. Although science and philosophy of science are different in content, philosophy of science
requires understanding science, and science often requires understanding philosophy of science. For centuries philosophers have evaluated science, mainly because of their interest in the limits of what one could know. In the 17th and 18th centuries, philosopher John Locke sought foundations for the experimental knowledge developed by the scientists of his day, like Robert Boyle and Isaac Newton. However, philosophy of science as a labeled discipline is relatively new, and Locke's work was not called philosophy of science. Neither was that of philosopher Immanuel Kant in the 18th and 19th centuries, although he studied the conceptual conditions necessary for scientific knowledge. Only in the late 19th century were questions about the nature of science called philosophy of science or "logic of science"—the name that chemist-mathematician-logician Charles Sanders Peirce gave to a course he taught at Harvard in the 1860s. By the 1890s at Harvard, Josiah Royce taught philosophy of science under that label, as did Edgar Singer at the University of Pennsylvania.

In 1890 Paul Carus founded Monist, a journal whose title page proclaimed it was "devoted to the philosophy of science." In 1934, the journal Philosophy of Science began. Emphasizing rigorous analytic philosophy and logic, similar journals arose in the 1930s, including Erkenntnis in Germany, Analysis in Britain, and the Journal of Symbolic Logic in the United States. They signaled a new way of doing philosophy, one focused on logical empiricism, the belief that all knowledge could be based either on sensory experience or on mathematical logic and linguistics. Although chapter 9 explains why most philosophers of science today are not logical empiricists, the discipline of philosophy of science emerged from logical empiricism.9

Today philosophers of science address both theoretical and practical questions, although the dominance of the former is one reason for needing this book on practical philosophy of science. Their theoretical questions can be abstract—that is, focusing on science generally—or concrete, that is, focusing on a specific science. More practical questions concern science having real-world, often welfare-related, consequences. Of course, these 4 categories of questions—theoretical, practical, abstract, concrete—are not mutually exclusive, partly because there are degrees of each and because some categories can be subdivisions of others. For instance, concrete scientific questions can be either practical or theoretical.

Some more abstract (about-science-generally) theoretical questions include the following:

• If human observation is imperfect and partial, how can one justify empirical claims?
• What are the different types of scientific models and theories?
• How can scientific revolutions occur, if science is supposed to be true?
• Does science require predicting events, finding their causes, or something else?



Some more concrete (specific-to-some-science) theoretical questions include the following:

• Except for genetics, does biology have any laws of nature?
• Are special and general relativity compatible with Newton's laws?
• Is Copernicus's theory simpler than that of Ptolemy?
• Are economists right that people typically maximize expected utility?

Some more practical (welfare-relevant) methodological questions include the following:

• How reliable are long-term projections for the safety of chemical-waste dumps?
• Do different sexes and races have different cognitive abilities?
• How reliable are different models of climate change?
• How might some pollutants, such as developmental toxins, cause heritable epigenetic damage?

Practical Philosophy of Science

As already noted, most evaluations of scientific method have focused on theoretical rather than practical questions. Despite the practical origins of science, concerns with scientific practice and practical consequences of science have been outside mainstream, English-language philosophy of science—one reason the Society for the Philosophy of Science in Practice began in 2006.10 The American Philosophical Association began the Committee on Public Philosophy in 2005,11 and the Public Philosophy Network began in 2010.12 This book should extend the efforts of these groups, doing practical philosophy of science by illustrating and assessing methodological flaws in welfare-related science—and thus improving science. One of its goals is to help scientists and philosophers of science—because their methods of analysis can be the same—make room for analyzing scientific practices and science-related practical questions that affect us all.

Insofar as practical analyses of science are naturalized—that is, employ the methods of science—they also address empirical questions, such as whether there is a threshold below which low-dose carcinogens cause no harm (chapter 3), or whether Florida-panther habitat is restricted to dry upland areas (chapter 10). However, although this book includes naturalized evaluations of science, it has no presuppositions, one way or the other, about religion or beyond-natural beings. It naturalistically assumes that both scientists and philosophers of science should use naturalistic methods and attempt to explain the natural world in non-supernatural terms. However, it non-naturalistically assumes that history
and sociology do not exhaust the ways science may be evaluated.13 Instead, it argues that both science and philosophy of science include some irreducibly normative or evaluative questions, unanswerable by history or sociology, such as what science ought to be, and which criteria for scientific confirmation are superior to others. Some logical empiricists, however, believe that in order for science and philosophy of science to be genuinely objective, they must be free of all values. Chapters 13–15 explain why they are wrong and outline the different types of value judgments that can and cannot be avoided in science. Indeed, chapter 15 argues that insofar as scientists and philosophers of science have greater expertise and power, sometimes they must make ethical value judgments in order to fulfill their science-related societal duties to help avoid threats to public welfare.14 For instance, after Berlin scientists discovered nuclear fission in 1938, US physicists Leo Szilard and Eugene Wigner asked Albert Einstein, a lifelong pacifist, to sign a letter to US President Franklin Roosevelt that advocated nuclear-weapons research. Einstein agreed. He explained that because “the possession of knowledge carries an ethical responsibility” to help protect others, he must urge Roosevelt to develop nuclear weapons before Hitler did.15 Arguing that scientific work must serve “the essential demands of common welfare and justice,”16 Einstein also used his scientific stature to condemn tyranny, champion Negroes’ equal rights, and support democracy. Following Einstein, this book illustrates practical philosophy of science as a way to serve “the essential demands of common welfare and justice,” especially liberation of the oppressed.17

The Importance of Practical Philosophy of Science

As the case of the Australian Aborigines illustrates, practical philosophy of science is important not only because it corrects flawed science, but also because these corrections can help reverse injustice and liberate people. Every chapter in this book illustrates how assessing and improving scientific methods can help correct flawed scientific conclusions and science-related policies. Sometimes, it can even save lives.

Consider the work of 4 philosophers who do such practical work. Carl Cranor's groundbreaking evaluations of causal inferences have helped to improve courtroom verdicts about legal liability that otherwise put victims at risk. Helen Longino's biological analyses have revealed unjustified sexist assumptions in scientific theories about hormonal determinants of gender behavior. Sheldon Krimsky has uncovered how, despite the many benefits of pharmaceuticals, biased testing methods have corrupted medical science and put lives at risk. Deborah Mayo has shown what makes evidence relevant for regulation and how questionable statistical methods generate incorrect conclusions, such as those about formaldehyde risks.18



This book shows how practical philosophy of science is especially liberating in helping to counteract special-interest science—biased science done to promote profits rather than truth. Perhaps the best-known special-interest science is that done by tobacco companies. For half a century they successfully delayed regulations on cigarettes, responsible for hundreds of thousands of US deaths each year.19 Subsequent chapters show how philosophers of science have challenged landowners' use of questionable biological methods that would allow them to profit by developing sensitive habitat; chemical-interests' use of questionable toxicological methods that save money by avoiding pollution control; fossil-fuel interests' use of biased models that deny climate change and allow greater oil/gas/petroleum profits; and pharmaceutical interests' use of invalid statistical methods that falsely show dangerous drugs are safe.20

Criticizing military science, Jose Sanchez-Ron likewise shows how taxpayer-funded military science has shifted the character and methods of research, especially in physics, and harmed basic science. He argues that national-security justifications often cloak poor science and ethics, including secret military experimentation on uninformed civilians and soldiers. Sanchez-Ron also argues that nearly all science-based industries of the past century are products of wartime, from pesticides created through World War I German nerve-gas development; to US nuclear-weapons development used for commercial atomic power; to World War II quantitative risk assessment, modern rocketry, and cybernetics. He says military science deserves practical scrutiny, now, because it was both needed and unquestioned during wartime.21 Others claim that because "the corporate giants of the automobile, chemical, and electronics industries all made massive fortunes profiting from war" and now drive "20th century's unending arms race," their war-promoting science needs special scrutiny.22

This book shows that those who do special-interest science in the name of profits—whether individuals, industries, environmentalists, labor unions, or universities—are like the Queen in Lewis Carroll's Through the Looking Glass. Just as the Queen claimed she could believe 6 impossible things before breakfast, profits often drive them to use science in impossible—that is, inconsistent, biased, and question-begging—ways that serve their interests. By investigating special-interest manipulations of scientific methods, practical philosophy of science can serve both science and justice, liberating people from poor science.

One sort of liberation occurs when evaluations of science help protect those who otherwise would be harmed by questionable assessments of risky consumer products and technologies. If earlier scientists and philosophers of science had been more practical in examining then-current scientific methods/practices, perhaps physicians would not have given patients radium water to cure various ailments. Perhaps the Tacoma Narrows bridge would not have collapsed. Perhaps space-shuttle Challenger would not have exploded and killed the astronauts. Perhaps US military scientists would not have said radiation from US
above-ground, nuclear-weapons testing was harmless. 23 Perhaps economists would not have promoted economic stabilization through the 2008 US-bank bailout, while they ignored high unemployment rates. Perhaps physicians would not have prescribed hormones for post-menopausal women. Perhaps shoe sellers would not have used x-rays to determine whether shoes fit. Chapter 3 reveals that a US National Academy of Sciences’ report recently warned about the threat of special-interest science. It challenged questionable chemical-industry methods and motives behind promoting weakened regulations for low-dose chemical exposures. 24 Although pesticides provide many benefits to farmers and consumers, and although legal pesticides already kill a million Americans in every generation, the academy said pesticide manufacturers, “economically interested third parties,” fund flawed studies to justify further weakening chemical-safety standards, thus saving them pollution-control monies.25 Following the academy’s insights, these chapters show how practical philosophy of science can help improve science, save lives, and promote welfare. Practical philosophy of science also can help reduce irrational attacks on science, including ideologically motivated rejections of evolution. As the scientific-research society, Sigma Xi, noted, because “the pathways that we pursue” as researchers “are infinite and unfrequented, we cannot police them as we protect our streets and personal property; instead, we depend on those other travelers” along such “lonely byways of knowledge.” This volume shows how scientist-and-philosopher-of-science travelers can help police irrationality, questionable science and science-policy, and attacks on science. 26

How This Book Is Different

As already emphasized, this book is different in using classic philosophy-of-science analyses—but to help clarify contemporary scientific controversies that have great consequences for both science and human welfare. It tries to liberate people from incorrect science and the effects of incorrect science. It emphasizes how to do philosophy of science, not just what it is, descriptively. In interesting, high-stakes cases, it shows readers how to be scientific detectives. Unlike many philosophy-of-science books, it helps clarify science-related disputes, from the safety of cell phones to extrapolating from animal data. Despite its classic philosophy-of-science framework, the book also is unique in evaluating contemporary scientific-method controversies from the perspective of many different sciences, including biochemistry, biology, economics, epidemiology, geology, hydrology, medicine, physics, radiology, statistics, toxicology, and zoology.

Although the book provides a contemporary, practical introduction to philosophy of science, it is unlike most philosophy-of-science books, which are intended for more advanced readers. It aims to be more readable, to reach a wider audience, to avoid
more jargon—than most philosophy-of-science books. It also addresses prominent public concerns, such as whether current pollution regulations adequately protect children. (The book argues they do not.) Given its practical concerns, the book deliberately addresses only as much philosophy-of-science theory as is necessary to understand each chapter's methodological analysis. After all, other books cover theory in detail. However, no other books use classic philosophy-of-science methods to help solve a variety of practical, real-world, scientific problems that have great consequences for welfare. As a result of this practical focus, under-represented sciences like hydrogeology and toxicology—not merely physics and biology—are well represented.

This book is also different in that it aims not merely to illustrate analysis of scientific methods but also to inspire others to use similar analyses to improve both science and human welfare. It hopes to encourage those with scientific expertise to become, in part, public intellectuals who can make a difference in the world.

Yet another difference is this book's focus on the social aspect of science, on how bias and profit can skew scientific methods. The book thus presents science as it is often practiced, something most science-related books do not do. For laypeople interested in learning about science, and for beginning scientists who are interested in improving the ways they do science, the book provides a number of practical strategies, criticisms, and clarifications. These include rejecting demands for only human evidence to support hypotheses about human biology (chapter 3), avoiding using statistical-significance tests with observational data (chapter 12), and challenging use of pure-science default rules for scientific uncertainty when one is doing welfare-affecting science (chapter 14). More generally, the book helps people understand why good science requires analysis, not just cookbook algorithms or plug-and-chug applications.

This book also is different in providing insider insights on scientific methods/practice, based on decades of doing science-advisory work "inside the Beltway," with Washington groups such as the US National Academy of Sciences, Environmental Protection Agency, and Department of Energy—and internationally, with organizations such as the United Nations and the World Health Organization. Because of all these ways in which this book is different, it shows practical philosophy of science as a way to serve both knowledge and social justice. Hence this book argues that, at its best, practical philosophy of science is partly liberation science.

Seeking Justice by Exposing Flawed Science

Because this book analyzes both contemporary scientific methods and harmful consequences of biased scientific methods, it should interest 2 quite different audiences. One audience consists of people interested in clarifying and
improving scientific methods. Another audience consists of people interested in securing the many benefits of science and protecting public policy from biased science. Addressing at least 2 different groups—the scientific-methods audience and the policy audience—each chapter of the book pursues both truth and justice. It argues for both methodological claims and for substantive, science-related policy claims. As an analysis and illustration of how science, people, and public policy must be liberated from bias in order to secure the many benefits of science, the book makes at least 14 substantive arguments.

Chapter 2: Subjective hydrogeological concepts and methods invalidate government and nuclear-industry claims that high-level radioactive wastes can be stored safely underground in perpetuity.

Chapter 3: The incoherent biochemical concept of hormesis invalidates chemical-manufacturers' and attorneys' toxic-tort-defendant claims that low-dose carcinogens, including dioxins, are harmless.

Chapter 4: The scientifically questionable economic concept of the compensating-wage differential risks the lives of blue-collar workers and falsely presupposes their pay is proportional to their higher workplace risks.

Chapter 5: Contrary to popular belief, animal and not human data often provide superior evidence for human-biological hypotheses.

Chapter 6: Contrary to many physicists' claims, there is no threshold for harm from exposure to ionizing radiation.

Chapter 7: Contrary to dominant epidemiological and toxicological standards, many pharmaceutical drugs are not safe, just because they fail to double their victims' probabilities of serious harms.

Chapter 8: Contrary to standard statistical and medical practice, statistical-significance tests are not causally necessary to show medical and legal evidence of some effect.

Chapter 9: Contrary to accepted government/engineering hydrogeological claims, many US Superfund sites are leaking or about to leak, putting millions of people at risk.

Chapter 10: Because of methodologically flawed government-agency zoological methods, government-approved habitat plans protect mainly real-estate-development profits, not the endangered Florida panther.

Chapter 11: Although the US government rejected the biological methods outlining needed protection for the endangered Northwest Spotted Owl, they are correct and likely needed to ensure owl survival amid intensive logging.

Chapter 12: Contrary to the dominant government/scientific/medical conclusion that Pennsylvania's Three Mile Island nuclear accident killed no one, the plant has caused infant retardation and thousands of premature cancers.



Chapter 13: Contrary to accepted government/engineering/economic practice, decision theorists who use expected-utility rules, amid scientific uncertainty, put society at risk from catastrophic technological accidents.

Chapter 14: Contrary to accepted scientific practice, minimizing false positives and not false negatives puts society at risk from harms such as bio-chemical warfare and fracking.

Chapter 15: The current epidemic of heritable developmental toxicity in children shows that—contrary to claims that scientists should remain neutral about science-related policy issues—experts have duties to speak out about flawed science that can cause harm.

Seeking Truth by Evaluating Scientific Methods

Besides arguments for the preceding 14 substantive claims that people and public policy must be liberated from biased science, each chapter of the book makes a number of arguments about ensuring and clarifying scientific method, the central focus of philosophy of science. As already noted, the book's 4 sections focus on 4 classic, roughly chronological, phases of philosophy-of-science analysis: conceptual and logical analysis, heuristic analysis, methodological analysis, and normative analysis.

The first or conceptual-and-logical-analysis section of the book, chapters 2–4, uses cases in hydrogeology, biochemistry, and economics to show how to do logical assessment of various scientific assumptions, concepts, and inferences. Chapter 2 gives a brief overview of logic, then explains why it is the foundation of scientific analysis. Uncovering several logical fallacies, chapter 2 illustrates how to evaluate scientific assumptions associated with a prominent hydrogeological model. Models especially deserve logical scrutiny because they can be misused, given their typical employment in situations where data are not available. Chapter 3 illustrates analysis of the hormesis concept in biochemistry and shows how its scientific support relies on logical fallacies that include equivocation, invalid extrapolation, hasty generalization, inconsistency, begging the question, and confusing necessary and sufficient conditions. The chapter illustrates that, even without obtaining any data on hormesis, practical philosophers of science often can use logic to uncover flawed scientific methods. Chapter 4 evaluates the economic concept of the compensating wage differential, assesses supposed evidence for it, and shows that it is a construct that is based on invalid data aggregation and requires real-world conditions that rarely exist. A common theme of chapters 2–4 is that conceptual and logical analysis promotes scientific progress. As biologist Ernst Mayr emphasized, recent progress in evolutionary biology has come mainly from conceptual clarification, not improved measurements or better scientific laws.27



The second or heuristic-analysis section of the book, chapters 5–8, uses cases in biology, physics, epidemiology, and statistics to show how to evaluate alternative strategies for discovering/developing scientific hypotheses. Chapter 5 evaluates the heuristic strategy of analogy, developing new hypotheses from those in other areas. It shows that because of different socio-political constraints, animal-based and not human-based hypotheses are often more promising avenues for learning about human biology. Chapter 6 evaluates the heuristic strategy of using a thought experiment to develop hypotheses in physics. It shows that deductively valid sets of conjectures, not merely generalizations from data, help clarify scientific hypotheses. Chapter 7 criticizes a common but questionable way of discovering hypotheses in epidemiology and medicine—looking at the magnitude of some effect in order to discover causes. The chapter shows instead that the likelihood, not the magnitude, of an effect is the better key to causal discovery. Chapter 8 evaluates the well-known statistical-significance rule for discovering hypotheses and shows that because scientists routinely misuse this rule, they can miss discovering important causal hypotheses.

The third or methodological-analysis section of the book, chapters 9–12, uses cases in hydrogeology, zoology, ecology, and toxicology to show how to evaluate alternative methods for testing/justifying hypotheses. Chapter 9 outlines the dominant or hypothesis-deduction account of scientific method—formulating hypotheses, then testing them predictively. It shows that because real-world phenomena virtually never fit this method's strict requirements, scientists typically simplify the method in ways that can cause devastating errors. Chapter 10 summarizes a prominent alternative to this dominant hypothesis-deduction method, one inspired by historian Thomas Kuhn's argument that science often undergoes revolutions that cause rejection of accepted findings. However, the chapter shows that another alternative method, theory comparison, also fails in allowing reliance on suspect evidence. Chapter 11 investigates another alternative to the dominant hypothesis-deduction method, justifying hypotheses through case studies. It examines constraints on using individual cases, then shows that provided the constraints are met, case studies can justify conclusions. Chapter 12 investigates a third, and perhaps the most prominent, alternative to hypothesis-testing: inference to the best explanation. The chapter shows why this method is superior to those outlined earlier in the book. Its success is a result of its mandating assessment of potential underlying mechanisms, unification of theoretical principles, and manipulation of competing hypotheses.

The fourth or normative-analysis section of the book, chapters 13–15, uses problems in decision theory, quantitative risk assessment, and developmental toxicology to show alternative ways to deal with scientific uncertainty. Chapter 13 outlines different types of scientific value judgments, shows which are unavoidable, especially in situations of scientific uncertainty, and argues that under 3 distinct circumstances, the evaluative rule of maximizing
expected utility is inferior to the maximin rule of avoiding the worst outcome. Chapter 14 assesses another typical scientific value judgment, minimizing false positives (false assertions of an effect), not false negatives (false denials of an effect), when both cannot be minimized. Contrary to standard scientific opinion, the chapter argues that in welfare-related, uncertain science, minimizing false negatives is scientifically and ethically superior. The final chapter summarizes the book's conclusion that practical philosophy of science can help correct questionable scientific methods and thus questionable science-related policies. It argues that people with scientific expertise have special ethical duties to help protect others from flawed scientific methods that can cause great harm.28
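For readers unfamiliar with the two decision rules just named, they can be stated in standard textbook form (the notation here is illustrative, not the book's own): where a ranges over available actions, s over possible states of the world, p(s) is the probability of state s, and u(a, s) is the utility of taking action a when state s obtains,

\[
a_{\mathrm{EU}} = \arg\max_{a} \sum_{s} p(s)\, u(a,s),
\qquad
a_{\mathrm{maximin}} = \arg\max_{a} \min_{s} u(a,s).
\]

Expected-utility maximization needs the probabilities p(s); maximin does not, which is one standard reason it is considered when those probabilities are genuinely uncertain.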

Conclusion

This chapter began the book's argument that because science often trumps other forms of knowledge, its power has encouraged people to use science to serve both noble and ignoble ends. When Jared Diamond exposed racist biological assumptions, when Marie Curie discovered the element polonium, when Jonas Salk developed the polio vaccine, they used science for good. Yet Adolf Hitler used Nordic eugenics to justify enslaving Jews, Joseph Stalin used Lysenkoist science to defend killing geneticists, and cigarette manufacturers used biased statistics to deny tobacco-caused cancer. They misused scientific methods as weapons of oppression. This book shows that practical philosophy of science can both improve science and liberate those harmed by it.


PART I

CONCEPTUAL AND LOGICAL ANALYSIS



CHAPTER 2

Discovering Dump Dangers
UNEARTHING HAZARDS IN HYDROGEOLOGY

Harvard professor Cass Sunstein argues that science and reason demand the cost-benefit state—a nation that requires every health or safety regulation to save money overall, to pass a cost-benefit test. Otherwise, he says, regulations should be rejected as irrational and political. Using this cost-benefit criterion, Sunstein has challenged the Clean Air Act, required child-car-seat restraints in automobiles, workplace-exposure limits on methylene chloride, restrictions on nitrogen-oxide emissions from fossil-fuel plants, and regulations for arsenic in drinking water. He also says government economic calculations should count seniors' lives as worth less than non-seniors' because seniors will earn less in the future.

Sunstein's economics are not hypothetical. During 2009–2012 he directed the US Office of Information and Regulatory Affairs of the Office of Management and Budget, and government followed Sunstein's scientific directives. At his command, they rejected many regulations that could have prevented death or injury, including prohibitions against child labor in hazardous agricultural jobs, like grain elevators. Sunstein, however, defends his decisions as scientific, claiming that because his opponents have "mass delusions," irrational views of things like hazardous chemicals and pesticides, they demand regulations that are too expensive.1

"Too expensive for whom?" ask Sunstein's critics. For children who suffer IQ losses and neurodegenerative diseases from smelter and other heavy-metal pollution? Or too expensive for polluters who do not want emissions-controls to limit their profits? Critics say economic rules should focus not only on overall costs and benefits, but also on their distribution, on whether those harmed by pollution are those who profit from it, on who gains and who loses from pollution. Sunstein's critics want regulatory economics also to consider fairness, compensation, rights to life, and to equal protection, not just costs and benefits.2 His opponents also say that once one calculates full costs and benefits of many toxic exposures, the
calculations usually show pollution-prevention is cheaper than allowing it. For example,

• Leading physicians say US newborns annually suffer IQ losses, just from coal-plant mercury pollution, that will reduce their lifetime earnings by $9 billion, apart from losses caused by other IQ-damaging coal pollutants. After 2 years, US coal-plant, mercury-induced IQ and income losses = $18 billion; after 3 years, $27 billion, and so on.3

• Harvard economists say the United States has about 25 million children, aged 0–5. Current organophosphate-pesticide exposures cause these children to lose 3,400,000 IQ points/year and $61 billion in future earnings/year. Once pesticides like organochlorines and carbamates are included, their neurological and economic damages rise even higher.4

• Other Harvard scientists say some pollutants, like lead, cause both IQ losses and neurobehavioral problems such as crime and violence. Annual lead-induced IQ losses cause French children to lose future earnings of $30 billion/year, and cause French crime losses of $81 billion/year. For the United States, these lead-caused losses are $150 billion income/year and $400 billion crime/year.5

Deciding whether Sunstein or his critics are correct is difficult because each side makes different value judgments about how to do economic science and whether ethics should affect policy. Sunstein is obviously correct that not all regulations are affordable, and that not all risks can or should be reduced to zero. However, he seems to downplay rights to life, equal treatment, and consent to risk. Likewise, Sunstein's critics seem correct to emphasize that economist Adam Smith warned that efficient market transactions require all parties' full consent—including parties facing increased risks from mercury, pesticides, or lead. However, it is not always clear how to incorporate ethics into economic decisionmaking, because citizens' ethics are not uniform. Thus, partly because people have different views of science/ethics, they have different views of whether or not Sunstein is right. Who is correct?

Evaluating Science with Logical Analysis

One initial way to answer this question is to try to avoid controversial value judgments and instead rely on logical analysis. Because one of the most fundamental requirements in logic often is consistency, one might ask whether Sunstein's economic analyses are consistent. If so, they may be reasonable. If not, they may be questionable.

Sunstein does not seem consistent. On one hand, he repeatedly gives examples of regulations that should be dropped because they do not pass the cost-benefit "test," as illustrated by his cost-benefit tables that include only market-based
figures of aggregate risks, benefits, and costs for various regulations.6 On the other hand, he says government agencies are “permitted” to take qualitative factors like ethics into account, factors beyond the range of his cost-benefit test.7 Yet Sunstein’s emphatically requiring cost-benefit tests for all regulations, then rejecting regulations that fail it, 8 appears inconsistent with using qualitative factors that trump cost-benefit tests—because the qualitative factors fail the cost-benefit test that, he says, is necessary. Either one takes ethics into account, therefore has no purely cost-benefit test, or one ignores ethics, therefore requires a purely cost-benefit test. Once Sunstein allows qualitative criteria for regulations, he appears inconsistent to claim he has a purely monetary “test” for regulations, and that people are “irrational” in rejecting this test. Of course, even distinguished scientists can be inconsistent in damaging ways. Consider Hal Lewis, a famous solid-state and plasma physicist who studied under J. Robert Oppenheimer. In 1991 Lewis’s book, Technological Risk, won the Science Writing Award from the American Physical Society, the most prestigious association of physicists.9 Yet the book contains many inconsistencies and unsubstantiated scientific assumptions. For instance, on page 220 Lewis says that because of radiation dangers, “nuclear waste must be disposed of carefully.” Yet on pages 245–246, he says nuclear “waste . . . risk . . . turns out to be ridiculously low. . . . Nuclear-waste disposal is a non-risk.” However, if nuclear waste is a non-risk, it need not be disposed of carefully. If it requires careful disposal, it must be risky, not a non-risk. Therefore Lewis’s scientific account likewise appears inconsistent. As philosopher of science Karl Popper noted, and as Sunstein and Lewis seem to forget, rational analysis of science is based on the ideal of the so-called principle of contradiction, on trying to eliminate inconsistencies whenever we discover them. Otherwise it often is impossible to understand scientific hypotheses and their deductive consequences. The need for scientists to avoid logical error is analogous to the need for physicians to try to avoid harming patients. At the root of both disciplines is a necessary condition, “first, do no harm,” whether the harm is to people or to science. Logic is valuable for science because it is an account of deduction, the derivability of one claim from another. Deduction guarantees that a valid inference transmits truth from the premises to the conclusion. Therefore, if we know some conclusion is false, but our logical inference is valid, we know that at least one premise must be false. Such logical reasoning helps us assess whether or not scientific claims are true.10
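The claim that validity transmits truth, so that a false conclusion indicts at least one premise, can be written schematically (a standard logic-textbook statement; the notation is not the book's): for an argument with premises $P_1, \ldots, P_n$ and conclusion $C$, validity means

\[
(P_1 \wedge \cdots \wedge P_n) \rightarrow C .
\]

If we also know $\neg C$, then by modus tollens $\neg (P_1 \wedge \cdots \wedge P_n)$, that is, at least one of the premises must be false. This is the pattern that lets a failed prediction count as evidence against some assumption used to derive it.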

Chapter Overview

This chapter provides several examples of how and why logical analysis of science, one illustration of philosophy of science, can advance both science and
societal welfare. The chapter discusses the importance of logic, next outlines the logical fallacies known as appeal to ignorance and affirming the consequent, then provides additional examples of how logical fallacies can invalidate science and threaten welfare. Finally, it explains the source of many such fallacies: special-interest science, biased science done by special interests to promote their profits rather than truth.

Deductive Logic and Science

At least since Aristotle's time, people have recognized that knowledge requires reasoning correctly. Otherwise, it is impossible to understand claims, communicate with others, or evaluate beliefs. Thus most disciplines, especially science, try to avoid logical fallacies, errors in deductive reasoning. Because valid conclusions always follow with certainty from their premises, deductive conclusions usually are less controversial than other conclusions, such as inductive ones. For instance, if one knows that all apples are fruits, and all fruits can be eaten, one can deductively conclude that apples can be eaten. This deductive argument is uncontroversial because it is based on a valid deductive-inference pattern, transitivity: If A entails B, and B entails C, then A entails C. However, if one reasoned through induction—from particular claims to a general claim—this would often be questionable, as in "all apples that I have ever seen are red, therefore all apples are red." This inductive argument commits a logical fallacy, hasty generalization. It invalidly draws a conclusion about all apples based only on some apples.

Other common logical fallacies include appeal to authority—assuming some conclusion is true merely because some expert says it is; begging the question—assuming one's conclusion is true instead of arguing for it or giving evidence for it; equivocation—using the same term in an argument but with different meanings; and appeal to the people—assuming some conclusion is true merely because most people accept it. Because elementary logic texts typically discuss various logical fallacies, they need not be covered here. Instead, the point is that because scientists should avoid deductively invalid claims, one of the easiest ways to analyze science, and thus do philosophy of science, is to look for possible logical fallacies like those that seem to appear in Sunstein's economics and Lewis's physics.
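The contrast between the two apple arguments can also be put in symbols (an illustrative formalization in standard first-order notation, not the book's own): writing A(x), F(x), E(x), and R(x) for "x is an apple," "x is a fruit," "x can be eaten," and "x is red," the valid transitive pattern is

\[
\forall x\,(A(x) \rightarrow F(x)),\; \forall x\,(F(x) \rightarrow E(x)) \;\vdash\; \forall x\,(A(x) \rightarrow E(x)),
\]

whereas the hasty generalization has the invalid form

\[
A(a_1) \wedge R(a_1),\;\ldots,\; A(a_k) \wedge R(a_k) \;\nvdash\; \forall x\,(A(x) \rightarrow R(x)),
\]

where a_1, ..., a_k are only the finitely many apples one happens to have seen.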

Appeals to Ignorance

To illustrate how logical analysis of science can help promote reliable science and policy, consider the proposed Yucca Mountain, Nevada, nuclear-waste-storage project. Despite their many excellent scientific site assessments, US government scientists also made some logically fallacious inferences that doomed
the project. On one hand, US Geological Survey scientists correctly saw the many desirable features of the site for long-term-waste storage, including its low precipitation, high evaporation, limited water seepage into the ground, and low groundwater velocity. On the other hand, they sometimes used flawed logic and science when they claimed the site—100 miles northwest of Las Vegas—was a geologically/hydrologically/tectonically/seismically stable place for permanent underground storage of nuclear waste. Because this waste will remain lethal for about a million years and has no safe dose,11 the logical analyses illustrated in this chapter are important. They played a partial role in the 2011 government decision to reject the Yucca site. How did simple logical analysis help stop a dump that had been sanctioned by more than $15 billion in scientific studies?12

Along with other proposed sites for the nation's first permanent, high-level-nuclear-waste facility, in 1978 the US Department of Energy (DOE) began studying Yucca Mountain because of desirable features such as low precipitation. Moreover, the land was already owned by the federal government for nuclear-weapons testing. Partly because government studies said the site was superior to others, in 1987 Congress directed DOE to study only Yucca for the dump. In 1992, DOE scientists concluded Yucca Mountain was acceptable, and site excavation began. In 2002 Congress and President George Bush said it would accept nuclear waste in 2006. But flawed science and safety-related legal challenges delayed site work. In 2011 US President Barack Obama halted all site work and funding. What went wrong?13

Although much site science was excellent, some of the flawed assessments arose from flawed logic. For instance, official hydrogeological assessments sometimes relied on the logical fallacy of appeal to ignorance, the inference that if scientists know of no way for repository failure or radionuclide migration to occur, none will occur.14 Yet appealing to ignorance is problematic because, from ignorance, nothing follows. One's inability to conclude A provides no deductive basis for inferring not-A. Although science often proceeds by exhaustively ruling out alternative hypotheses, then accepting the remaining hypothesis, this process is not the same as appealing to ignorance. DOE Yucca scientists appealed to ignorance because they did not exhaustively rule out Yucca problems caused, for example, by future volcanic/seismic activity. Although they had insufficient data about future volcanic/seismic activity, instead of collecting more data, they merely assumed it posed no problem. They fallaciously argued, for example, that "no mechanisms have been identified whereby the expected tectonic processes or events could lead to unacceptable radionuclide releases. Therefore . . . the evidence does not support a finding that the site is not likely to meet the qualifying condition for postclosure tectonics."15
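The invalid step can be displayed schematically (an illustrative formalization, not DOE's or the book's notation). Let M stand for "some mechanism exists that could lead to unacceptable radionuclide releases," and let K stand for "such a mechanism has been identified." The appeal to ignorance runs

\[
\neg K \;\nvdash\; \neg M ,
\]

whereas a legitimate elimination argument would first require an exhaustive, well-supported search over the plausible mechanisms before concluding anything about M. From the mere absence of a finding, nothing deductively follows.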


Similarly, instead of ruling out alternative hypotheses that Yucca safety might be compromised by humans searching for precious materials, DOE fallaciously appealed to ignorance. It simply said (without a full investigation) that the Yucca Mountain site has no known valuable natural resources. . . . Therefore, on the basis of the above evaluation, the evidence does not support a finding that the site is not likely to meet the qualifying condition for post-closure human interference.16 In addition, instead of examining alternative hypotheses about whether the site could be successfully secured forever, DOE made an appeal to ignorance. It fallaciously concluded that no impediments to eventual complete ownership and control [of Yucca Mountain] by the DOE have been identified. Therefore, on the basis of the above evaluation, the evidence does not support a finding that the site is not likely to meet the qualifying condition for post-closure site ownership and control.17 Despite much otherwise-reliable work, DOE’s logically invalid appeals to ignorance are especially obvious in its 1992 Early Site Suitability Evaluation for Yucca Mountain.18 Repeatedly it correctly notes substantial site uncertainties (e.g., about seismic activity), then mysteriously concludes the site is suitable, although it never extensively studied these uncertainties. Indeed, without using the formal language of the fallacy, the DOE evaluation admits that appeal to ignorance is one of its main scientific inferences: If . . . current information does not indicate that the site is unsuitable, then the consensus position was that at least a lower-level suitability finding could be supported.19 Rather than intensive empirical analysis of alternative hypotheses about site suitability, the DOE instead fallaciously claims that, given no “current information” about site unsuitability, the site is suitable. This invalid inference guarantees that despite serious site uncertainties, the site is suitable. Indeed, only an invalid inference could allow one to conclude that a site is suitable for something, despite massive uncertainties and lack of study. Even more disturbing, DOE’s external peer reviewers warned of site uncertainties, and DOE ignored them. Representing the most distinguished geologists in the United States, they said there was substantial, non-quantifiable uncertainty regarding “future geologic activity, future value of mineral deposits and mineral occurrence models, . . . rates of tectonic activity and volcanism, . . . natural resource occurrence and value.”20 In response, DOE


fallaciously appealed to ignorance, avoided further study of site uncertainty, then claimed Yucca was acceptable.21 By invalidly assuming that failure to adequately investigate a site, and therefore failure to show the site is unsuitable, is sufficient ground to support site suitability,22 the DOE evaluation placed the burden of proof on those arguing for site unsuitability. Yet ethically/logically, why should the burden fall only on one side, only on those concerned about million-year safety? Civil- or tort-law cases are decided based on which side has the greater weight of scientific evidence.23 As chapter 12 argues, instead of logically flawed Yucca studies, scientists could have used weight-of-evidence or inference-to-the-best-explanation methods of scientific analysis to assess competing Yucca hypotheses. Unfortunately, most scientists’ appeals to ignorance are not as obvious as those illustrated by some DOE work. For instance, some scientists allege there are “no significant technical obstacles to use of the world deserts as sites for a retrievable [nuclear] storage facility for 500 years.”24 These scientists assume that their many positive results, despite their lack of complete site study—plus ignorance of any obstacles—constitute sufficient conditions for denying the obstacles. Similarly, US Nuclear Regulatory Commission (NRC) officials say “spent [nuclear] fuel can be stored in a safe and environmentally acceptable manner until disposal facilities are available,” and that “safe and environmentally acceptable extended storage can be achieved.” 25 Yet, they invalidly assume their ignorance of any nuclear-fuel threats (despite lack of exhaustive study) is a sufficient condition for denying such threats. Of course, given comprehensive, long-term studies of well-understood phenomena, it often makes sense for scientists to draw conclusions, based on searching exhaustively for contrary evidence and finding none—especially if the bulk of evidence supports one side of an issue. In the Yucca case, however, appeals to ignorance are problematic because government scientists often ignore the bulk of site evidence and instead rely on untested site models that could be tested. For example, although DOE scientists admitted they measured neither water-infiltration nor fracture-flow of water into Yucca, both of which could cause massive, rapid radwaste migration, they invalidly concluded that the site would meet government radioactive-release standards—less than 1 Yucca-caused health harm every 1,400 years.26 Instead of measuring infiltration/fracture-flow, DOE scientists used computer models to simulate the situation. Yet the simulations were based on conditions that the site violated—such as one-dimensional groundwater flow, dispersionless transport, homogeneous geologic media, and constant-velocity field sorption. Despite these counterfactual assumptions, DOE scientists concluded their model was “an effective tool for simulation of the performance of the repository systems at Yucca Mountain.”27 How could it be “effective” if it violated numerous site conditions? Similar fallacious appeals to ignorance occur throughout DOE science: “For the rock mass, it was assumed that nonlinear effects,
including pore water migration and evaporation, could be ignored. In practice, nonlinear effects and the specific configuration of the canister, canister hole, and backfilling material would strongly influence very near field conditions.”28 Despite otherwise-plausible DOE work, invalid inferences such as appealing to ignorance (but failing to do the requisite studies and ignoring falsifying evidence) are not limited to Yucca studies. Indeed, when DOE scientists studied Hanford, Washington as a proposed permanent-nuclear-waste facility, they also made invalid appeals to ignorance, such as: “A final conclusion on . . . radiological exposures cannot be made based on available data . . . it is concluded that the evidence does not support a finding that the reference repository location is disqualified.” 29 Similarly, although other DOE scientists correctly said data were insufficient to determine whether offsite radwaste-migration would occur, they invalidly concluded there was only a small chance of radioactively contaminating public-water supplies. 30 They noted that changes in groundwater flow “are extremely sensitive to the fracture properties,” but then without doing needed empirical work, appealed to ignorance by concluding they could simulate heavily fractured Yucca Mountain “without taking fractures into account.”31 This fallacy is disturbing, given sparse data for unsaturated, fractured sites like Yucca, 32 and given that DOE scientists correctly admit their simulation models work only for empirical conditions the site does not meet. They correctly warned the validity of the effective-continuum-approximation method cannot be ascertained in general terms. The approximation will break down for rapid transients in flow systems with low matrix permeability and/ or large fracture spacing, so that its applicability needs to be carefully evaluated for the specific processes and conditions under study.33 If the validity of such methods cannot be known apart from actual experiments, if most Yucca studies rely instead on general simulations, 34 simulations that do not predict known site conditions, then DOE’s site-suitability claims are invalid appeals to ignorance. Such fallacies are especially evident in DOE’s admitting it ignores terrorism and human intrusion as ways to compromise site integrity and cause massive radiation exposures. DOE admitted it ignored faulting, terrorism, and human intrusion35 (although US Environmental Protection Agency [EPA] scientists say human intrusion could cause massive exposures36), then appealed to ignorance, saying it had “no information that indicates” the Yucca site was likely to be disqualified. 37 Despite DOE’s correctly admitting it ignored possible catastrophic site-radiation releases, 38 the precise materials/design for nuclear-waste containers, 39 their failure rates,40 and DOE studies’ showing 100-percent-canister failure within a year because of stress-corrosion cracking,41 DOE appealed to ignorance in affirming site safety and waste-canister acceptability. Other DOE


scientists appealed to ignorance by using Monte-Carlo-simulation models to claim nuclear-waste-canister safety,42 and their satisfying “regulatory requirements.”43 How could such simulations be acceptable, if the authors correctly admitted their analysis “does not show how to address uncertainties in model applicability or degree of completeness of the analysis”?44 Obviously scientists cannot test everything. When they cannot, they should admit their uncertainties, not ignore them and then fallaciously appeal to ignorance to claim their models are “an acceptable framework” to show Yucca safety. Problems such as future nuclear-canister failure and nuclear-waste-dump intrusion are understandably difficult to test/predict, just as is future climate change. However, scientists err logically if they admit they have not studied key factors, assume their untested models are acceptable, fail to test the models with available data, then conclude the proposed site is safe. For example, several DOE scientists correctly listed 11 of their contrary-to-fact assumptions about the Yucca site—such as that groundwater flow was only vertically downward, despite obvious horizontal fractures—yet invalidly appealed to ignorance, in asserting that the site “would comply with NRC requirements for slow release of wastes.”45 Because nuclear waste must be secured for a million years, whereas recorded history is only about 6,000 years, it is puzzling that DOE scientists repeatedly used appeals to ignorance instead of admitting their uncertainty and testing everything they were able to test. They could have avoided appeals to ignorance, as already noted, with weight-of-evidence or inference-to-the-best-explanation assessments. They could have used “if . . . then” claims, such as: “if our assumptions about Yucca are reliable for the centuries required, then the site would comply with regulations.” Instead, DOE scientists drew logically fallacious conclusions.

Affirming the Consequent

Instead of doing reasonable empirical analyses and admitting uncertainties, different DOE scientists often commit other fallacies such as affirming the consequent. This fallacy occurs whenever one claims some hypothesis is true, merely because some test result—predicted to follow from the hypothesis—actually occurs.46 Yet failure of predictions can only falsify theories. Successful predictions never show hypotheses are true. They can only tend to confirm hypotheses and to show results are consistent with them. Of course, despite many reliable analyses, one of the repeated failures of Yucca Mountain science was not just affirming the consequent, but failing to test hypotheses when scientists could have done so, to see if they could predict current conditions.47 Moreover, the greater the number of representative tests, the greater is the assurance that predictions are consistent with/tend to confirm the model or hypothesis. If predictions turn out to be consistent, however,
it is wrong to assume models have been “verified” or “validated” because this assumption affirms the consequent. For instance, in landscape ecology scientists often affirm the consequent when they incorrectly assume that because landscape features at the edge of the population can sometimes predict population substructure, therefore this substructure always and only results from landscape features at the edge of the population. Similarly, when neuroscientists use what they call the “reverse-inference” approach, they fallaciously affirm the consequent. They assume that because they can sometimes predict what parts of the brain perform certain functions, therefore those parts always and only perform those functions.48 DOE Yucca scientists likewise affirm the consequent whenever they claim that hypotheses about Yucca groundwater-travel times are “verified,” and thus meet “regulatory requirements,” merely because their testing shows the predictions’ consistency with the hypotheses.49 They also often speak of “verification of engineering software used to solve thermomechanical problems” at Yucca Mountain. 50 Although software and systems engineers speak of models’ being “validated” and “verified,”51 as already noted, validation guarantees only that some test results are consistent with/tend to confirm a model. To avoid affirming the consequent, they should not speak of validation and verification. 52 The scientists also commit another logical fallacy, equivocation, when they use the same word, “verified,” with different meanings in program, versus algorithm, verification. 53 Algorithms, as logical structures, often occur in pure mathematics or logic and can be verified because they characterize claims that are always true as a function of the meanings assigned to the specific symbols used to express them. Programs, however, as causal models of logical structures, are never verifiable because their hypothetical premises are not true merely as a function of their meaning, but instead as a function of a physical system—like Yucca hydrogeology. As Einstein put it, insofar as the laws of mathematics refer to reality, they are (like programs) not certain. Insofar as they are certain (like some algorithms), they do not refer to reality. Insofar as scientists affirm the consequent or use fallacious verification/validation language, they mislead people about Yucca-related reliability. For example, explicitly affirming the consequent, DOE claims validation is a “demonstration that a model as embodied in a computer code is an adequate representation of the process or system for which it is intended,” a demonstration achievable through “in-situ testing, lab testing, or natural analogs with the results of computational models that embody the model assumptions that are being tested.”54 The same official DOE document says verification “is the provision of assurance that a code correctly performs the operations it specifies,” assurance provided by “comparison of a code’s results with solutions obtained analytically. . . . Benchmarking is a useful method that consists of using two or more codes to solve related problems and then comparing the results.”55


As the quoted claims reveal, although DOE suggests its computer models/ codes accurately represent Yucca phenomena, their verification/validation language is a misleading euphemism for benchmarking, comparing the results of 2 different computer-simulation models. The real world, however, requires validating a model against reality, not just another model. Besides, even with repeated field testing, model compliance with reality is never fully assured. The problem of induction is that complete testing often is impossible. Therefore, the shorter the testing and the fewer the cases considered, the less reliable/confirmed are supposedly validated computer models. A key problem with many DOE Yucca studies is that they did not exhaustively test all the models they could test, at least for known periods where data are available, and they did not admit that model verification/validation “is not even a theoretical possibility.”56 Instead, they affirmed the consequent and said their models were verified/validated. In speaking of validation/verification at Yucca Mountain and ignoring relevant field data, at worst they commit logical fallacies. At best, they are misleading. 57 Verification/validation language also errs in encouraging using formal modeling alone, rather than also including empirical knowledge of complex, potentially very dangerous relationships in the physical world. “Misplaced advocacy of formal analysis,”58 overselling software reliability and underselling design failures in safety-critical applications like nuclear-waste repositories, puts the public at risk. 59 The moral? When scientists have not checked their abstract models against existing field data, they should avoid misleading claims to have verified/validated those models.60 Instead, they should emphasize empirical testing, speak in terms of probabilities that a given model/hypothesis has been confirmed, and avoid logical fallacies.
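The asymmetry this section relies on can be put schematically (a reconstruction, with H a hypothesis or model and P a prediction derived from it):

\[
\text{Invalid (affirming the consequent):}\quad (H \rightarrow P),\ P\ \nvdash\ H
\]
\[
\text{Valid (modus tollens):}\quad (H \rightarrow P),\ \neg P\ \vdash\ \neg H
\]

Successful predictions can at best confirm a model to some degree; failed predictions can refute it. Talk of “verified” or “validated” Yucca models collapses this asymmetry.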

Are Scientists Stupid?

Of course, much excellent science was done at Yucca Mountain. Scientists are human and may make logical errors. They also may face pressure from employers. Given problems with induction, they often recognize that because proof about the natural world is impossible, at best they can provide only robust consensus based on continued examination/correction.61 Yet when laypeople, science funders, and politicians naively demand proof—before taking action on science-related problems like nuclear-waste storage or climate change—scientists may try to give them what they want. They also may unintentionally cut corners in attempting to serve their employers. After all, DOE Yucca scientists knew that finding a US nuclear-waste repository was decades behind schedule, that many reactors had nowhere to store wastes, and that utilities were suing government to provide a dump. As the
previous chapter noted, such political pressures can promote “special-interest science.” It can dominate any area of science where profit can be made, from coal companies seeking to dispute climate change, to solar companies misrepresenting their clean-energy finances. Recall that in 2012 solar company Solyndra manipulated its economic science so as to receive a $535-million US-taxpayer loan. In fact, in 2009, 34 percent of scientists admitted questionable research practices, some of which included fabrication and falsification. Such biased science may arise partly from conflicts of interest and from the fact that, as the previous chapter revealed, 75 percent of US science is funded by special interests, mainly to advance profit-oriented ends. Many scientists thus may face employer pressure. US federal-regulatory agencies also are often influenced by industries that they regulate. If so, DOE scientists may have felt pressure to support a nuclear-waste dump that they did not fully investigate. Of course, no one can exhaustively test million-year models, whether of climate change or nuclear-waste dumps. Scientists can, however, ensure that their models adequately predict all existing empirical findings—something not done at Yucca Mountain. 62

Conclusion

Whether scientists err unintentionally, or whether outside forces pressure them—from the Catholic Church’s pressure on Galileo and cosmology, to Nazi pressure for racist pseudoscience, to the Bush Administration’s pressuring government scientists to deny climate change—scientists and philosophers of science must find and avoid logical fallacies. Otherwise, science will lose credibility, and the public will be confused. People arguably were confused in 1992 when Philip Morris attempted to discredit US EPA scientists’ evidence that secondhand smoke is a human carcinogen. People likewise likely were confused in 2003 when the American Academy of Pediatric Dentistry—flush with a $1 million donation from Coca-Cola—erroneously claimed that the “scientific evidence is certainly not clear on the exact role that soft drinks play in terms of children’s oral disease.”63 Because most science is funded by special interests—many of which have conflicts of interest—it deserves special scrutiny. Only then will it be possible, as President Obama promised, for scientists to “do their jobs, free from manipulation or coercion,” so that we are able to listen “to what they tell us . . . especially when it’s inconvenient.”64


CHAPTER 3

Hormesis Harms
THE EMPEROR HAS NO BIOCHEMISTRY CLOTHES

For more than 20 years, Walter Allen was a maintenance worker at Baton Rouge General Hospital. His duties included replacing ethylene-oxide (ETO) cylinders, used to sterilize medical and surgical devices. After Allen died of brain cancer in 1996, his widow and children sued the sterilizer manufacturer for wrongful death. They claimed Allen’s ETO exposure contributed to his brain cancer. Because the UN International Agency for Research on Cancer had shown ETO is a potent carcinogen and genotoxin, the Allen lawsuit should have been an “easy win.” Acting directly on the genes, ETO causes chromosomal damage in both humans and other mammals. Because of its small size, it also directly penetrates DNA and crosses the blood-brain barrier.1 Yet, the Allens lost their case. Why? The pretrial judge made false statements about ETO, denied the Allens a jury trial, then claimed they had no case because workplace ETO did not contribute to Walter Allen’s brain cancer. 2 The Allen family lost its lawsuit partly because of special-interest science—science dictated by the profit motives of special interests. The special-interest science on which the judge relied included faulty testimony from an ETO-industry-funded toxicologist, Edward Calabrese. Calabrese made a number of questionable assumptions, 3 some discussed later in chapter 5,4 as well as several factual errors about ETO, 5 and the judge did not detect these errors.6 Calabrese thus produced a scientifically erroneous report,7 and it misled the judge. 8 Yet publications as early as 1969 contradicted Calabrese’s court claims.9 Examining Calabrese’s Allen case report, a US National Science Foundation-funded researcher at the University of California confirmed that Calabrese employed “speculation and subjective opinions . . . misleading statements and unsupported assumptions . . . [that] are inconsistent with . . . procedures” used by virtually all scientists; he said that Calabrese’s claims “are clearly outside the range of respectable scientific disagreement by experts in cancer risk assessment.”10 Partly because Calabrese’s scientific
errors misled the court, however, the victim and his family were denied any benefits.

Chapter Overview

Just as the last chapter focused on logical analysis in science, this chapter investigates another part of philosophy of science, conceptual analysis. It uses conceptual analysis to show how Calabrese’s special-interest science misleads people about toxins and carcinogens, just as it misled the judge in the Allen case. Because chemical manufacturers seek to deregulate toxic emissions and avoid costly pollution cleanups, they often fund special-interest science that supports the concept of hormesis—the claim that low-dose toxins/carcinogens have beneficial and not harmful effects. Calabrese is the main hormesis defender, and this chapter shows how he again errs, misleading people about toxins. It argues that (1) he uses the same term, hormesis, for 3 different concepts (H, HG, HD) that have different scientific, regulatory, and ethical validity; (2) H is trivially true but irrelevant to regulations; (3) HG is relevant to regulation but scientifically false; and (4) HD is relevant to regulation but ethically and scientifically questionable. (5) Although none of the 3 hormesis concepts has both scientific validity and regulatory relevance, Calabrese and others obscure this fact by begging the question and equivocating about the term hormesis. The result? (6) Calabrese’s scientific errors and special-interest science provide undeserved plausibility for deregulating low-dose toxins/carcinogens—deregulation that is likely to harm people.

Conceptual Analysis

To illustrate how conceptual analysis might help avoid science-related error and harm, consider hormesis. As noted, this concept refers to supposed beneficial effects of low-dose toxins/carcinogens. After all, some low-dose vitamins have beneficial effects, despite high-dose harms. As chapter 1 revealed, the US National Academy of Sciences has explained part of the popularity of the hormesis concept. To avoid expensive pollution cleanup and promote weaker regulations, polluters often spend millions of dollars to fund hormesis research—like that of Calabrese—designed to show that low-dose toxins/carcinogens have some beneficial effects.11

Hormesis Concept H

Although Calabrese fails to distinguish different hormesis concepts, the simplest such concept (that may be called H) is that, for at least 1 biological endpoint/
response/subject/age/condition, some low-dose toxin/carcinogen exhibits a “beneficial” effect,12 an “adaptive response characterized by biphasic dose responses” that results from “compensatory biological processes following an initial disruption in homeostasis.”13 For instance, low-dose-cadmium exposure is 1 of Calabrese’s 6 main examples supposedly satisfying H.14 It reduces some tumors in some species—a beneficial effect at 1 biological endpoint for some individuals. However, scientific consensus says that, despite this single-endpoint-beneficial effect, low-dose cadmium causes excess diabetes, pancreas damage, glucose dysregulation, and kidney damage—harmful effects at other biological endpoints.15 Thus, H claims may be true, mainly because they require so little: only 1 non-monotonic effect, on 1 endpoint, from 1 pollutant, for 1 short period of time, for only some people. However, H proponents ignore devastating effects on other endpoints, during longer periods of time, for many other people. H “benefit” claims thus would be satisfied if a pollutant caused cancer (one biological endpoint), but increased hair growth (another biological endpoint). Given Calabrese’s minimalist definition of 1 type of hormesis (what we call H), if low-dose responses to toxins/carcinogens were mildly beneficial for 1 endpoint for some time/people, but seriously harmful for most other endpoints/times/people, these effects nevertheless would satisfy his definition of H. Moreover, Calabrese and others call responses “hormetic”16—H—even when they fail to satisfy criteria for statistically significant changes from control. Thus, Calabrese calls a statistically nonsignificant “beneficial” change in incidence from 2 to 3, in a sample of only 20 people, a 33-percent change, evidence of hormesis—H.17 (See the calculation at the end of this section.) Likewise, Calabrese and Baldwin use a study of a pollutant’s no-observed-adverse-effect level (NOAEL) to “confirm” H. Yet because sample size, statistical power, data variability, endpoint measured, exposure duration, exposure route, exposure rate, and so on, affect a pollutant’s NOAEL, alleged H responses appear to be merely artifacts of poor scientific methods. These poor methods include using small sample sizes, low statistical power, data variability, irrelevant endpoints, and so on.18 Given Calabrese’s flawed scientific criteria for H, alleged H instances are easy to find. Yet, they reveal nothing about all-endpoint, lifetime, synergistic, cumulative, or net responses to multiple low-dose toxins/carcinogens. However, knowing net responses is crucial to reliably assessing the medical, scientific, and policy relevance of low-dose responses to toxins, like TCDD (dioxin). TCDD is 1 of Calabrese’s 6 main examples supposedly satisfying H.19 Consider 4 methodological flaws in a 2-year, low-dose-dioxin (TCDD) study allegedly supporting hormesis. Its suspect allegations of decreased tumor-incidence in rats are typical of studies that allege hormesis, H.20 First, the study trimmed the data on adverse effects by including only two-thirds of the rats’ lifespan, not infancy and old age, when subjects exhibit more tumor-sensitivity to toxins. After all, if 80 percent of human cancers are diagnosed in the last
one-third of life,21 and if the rat analogy holds for human lifespans and cancers, the study may have captured only 20 percent of TCDD-induced cancers. A second flaw is that although the study documented increased liver, lung, tongue, and nasal tumors and decreased pituitary, uterine, mammary, pancreas, and adrenal tumors, it invalidly aggregated all tumors. Because no individual tumor response was non-monotonic, the alleged H response was only an artifact of invalid aggregation. A third flaw is that the study ignored early mortality and confounders, such as lower body weights, when it calculated tumor rates relative to controls. Yet scientists who ignore confounders—which could explain decreased-tumor response—can draw no valid conclusions about alleged pollutant/hormetic effects. A fourth flaw is that the study’s results have not been replicated; other TCDD studies have shown many adverse low-dose effects.22 Despite these 4 methodological problems, hormesis proponents like Calabrese say the study supports “hormesis,”23 that TCDD (dioxin) is 1 of the 6 main examples of hormesis. 24
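To see why the small-sample “beneficial” changes mentioned above, such as an incidence shift from 2 to 3 among roughly 20 subjects, carry no evidential weight, consider the arithmetic (an illustrative calculation, not one taken from Calabrese’s studies). Incidences of 2 and 3 in groups of 20 correspond to proportions of 0.10 and 0.15, while the sampling error of such a proportion is about

\[
\sqrt{\frac{0.15 \times 0.85}{20}} \approx 0.08 .
\]

The observed difference of 0.05 is well under one standard error, so the alleged “hormetic” change is statistically indistinguishable from chance, whatever its percentage size.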

Confusing Hormesis Concepts H and HG

As noted, Calabrese routinely ignores sample size, statistical power, statistical significance, data variability, endpoint measured, exposure duration/route/rate, and methodological differences among studies. He also looks at existing studies without doing experimental research, and he uses no rigorous scientific conditions for alleged confirmation of hormesis (H). Thus, his conclusions are questionable. Subsequent paragraphs show that, partly because of Calabrese’s questionable ways of “confirming” single-endpoint hormesis (H), his using H to generalize across all biological endpoints/responses/species/subjects/exposure conditions—to (what can be called) HG—is invalid. HG is the claim that H is “generalizable across biological model, endpoint measured, and chemical class.”25 Yet as chapter 12 illustrates, most scientists follow the rule that HG is invalid, that all carcinogens have linear, no-threshold (LNT) dose-responses, that their harmful effects increase linearly with dose and have no exposure-threshold for causing damage. Calabrese and coauthor Linda Baldwin, however, claim that HG “is not an exception to the rule, it [HG] is the rule.” 26 Yet, as later arguments show, after invalidly inferring HG from H, Calabrese and his coauthors invalidly reject scientific consensus that a toxin’s/carcinogen’s harm typically is proportional to dose. Perhaps the greatest indicator of HG’s conceptual problems is that the research, from which Calabrese and Baldwin most often infer HG,27 includes no epidemiological/field studies28—whose conditions most mirror real-world exposures and easily refute HG. Instead, they illegitimately generalize from H to HG. For instance, using the preceding flawed studies on TCDD (dioxin), Calabrese and Baldwin say it is 1 of the 6 main examples that satisfy H, 29 although each alleged
H instance has conceptual problems like those already noted. Another HG problem is that Calabrese and others often assume HG merely because they have limited or false-negative data showing that low-dose-toxic/carcinogenic exposures are harmful. They fallaciously appeal to ignorance—inferring dioxin benefits merely because of the alleged absence of data on dioxin harms. This dearth-of-data problem arises because low-dose studies require large sample sizes to detect effects, and most toxicological/carcinogen studies are at higher doses. Without low-dose data, Calabrese and others fallaciously take alleged absence of evidence, against HG, as evidence for it—although chapter 2 explained why this is a logical fallacy. 30 In fact, fallacies of appeal to ignorance frequently typify special-interest science. For instance, US National Academy of Sciences’ studies have warned that, despite children’s known higher sensitivities to pesticides/herbicides, data are lacking to precisely define their higher sensitivities (e.g., to neurodevelopmental effects). 31 Without precise neurodevelopmental-effects data, chemical-manufacturer scientists often appeal to ignorance. They invalidly assume low-dose toxins cause no harm, and they posit HG, 32 as Calabrese does. Many US government regulatory agencies also make similar appeals to ignorance, particularly when regulated industries push them to assume that no harm will result from some product/pollutant. 33 As such examples suggest, Calabrese’s appeal to ignorance is so common, especially in special-interest science, that prominent epidemiologist Kenneth Rothman confirms that most scientists probably equate inadequate data (showing harm) with evidence for no harm. 34 Besides, given the burden of proof in US law, courts often require toxic-tort plaintiffs to provide conclusive evidence demonstrating harm. Otherwise, they assume the defendants did no harm. 35 Beyond appealing to ignorance, HG proponents also exhibit an inductive or invalid-extrapolation fallacy when they use only a few endpoints/populations/time-periods/conditions that allegedly satisfy H—then invalidly generalize from H to HG claims about all endpoints/populations/time-periods/conditions. They also generalize purely on the basis of simple, quantitative, context-dependent, subject-dependent, low-dose measurements, to dose effects that rely on when and how the dose is received, who receives it (e.g., their health and nutritional status), and with what it is received (e.g., other toxin exposures). The earlier case of low-dose cadmium, 1 of Calabrese’s 6 main examples allegedly satisfying H, 36 illustrates why such extrapolation to HG errs. It ignores how individual variations—in intra-species genetics/lifestyle/medication/context/endpoint/age difference—affect responses to pollutants (e.g., children’s greater sensitivity to toxins). For instance, the half-lives of some chemicals are 3–9 times longer in neonates than adults, and neonates may take more than 10 times longer to eliminate chemicals. Likewise, drinking 1.2–2.2 alcoholic beverages/day may have some beneficial maternal effect on some endpoint, while only 0.5 drinks/day can cause adverse fetal behavioral/developmental effects. Or, even among
adults, pesticide-exposure responses vary significantly because of factors like 7-fold-individual differences in levels of detoxifying enzymes. 37 HG proponents’ extrapolation fallacies are especially objectionable because they are inconsistent. On one hand, they explicitly, harshly criticize those who extrapolate from high-dose to low-dose toxic/carcinogenic effects. 38 On the other hand, they themselves invalidly extrapolate
• from H to HG;
• from some to all biological endpoints;
• from adult, pure-bred, homogeneous animal populations to non-adult, non-pure-bred, heterogeneous human populations;
• from some, to net, adaptive responses; and
• from a few, to all, chemicals.
For instance, HG proponents invalidly extrapolate from some to all chemicals when they say [single-endpoint] hormesis [H] “is a ubiquitous natural phenomenon . . . across the biological spectrum,” therefore generalizable [as HG], so that HG “is the rule,”39 although they claim H has been demonstrated only for some “inorganic preservatives, antineoplastic drugs, pesticides, and various industrial chemicals.”40 Obviously the move from H to HG, from some to all chemicals, is logically invalid. Moreover, biology suggests that even if individual, low-dose, beneficial responses H exist, they are unlikely to be generalizable as beneficial. Why not? The cases of cadmium and TCDD (dioxin), 2 of Calabrese’s 6 main examples allegedly satisfying H,41 already illustrated 1 reason, that beneficial effects on 1 endpoint cannot be generalized to other endpoints. Low-dose cadmium, for instance, reduces some tumors in some species but causes excess diabetes, pancreas damage, glucose dysregulation, and kidney damage.42 Low-dose TCDD (dioxin) likewise reduces some tumors but increases liver, lung, tongue, and nasal tumors.43 As mentioned, a second reason, that single-endpoint hormesis H cannot be generalized to all people/groups, is children’s roughly 10-times-higher sensitivity to the same toxin dose. A third reason—admitted by Calabrese and Baldwin—is that hormesis effects are likely “overcompensations in response to disruptions in homeostasis.”44 But when organisms overcompensate in response to threats, they pay a price. There is no free lunch. As Calabrese admits, so-called hormetic responses are cases of “only reparative responses to the injury that has been done” by the toxin/carcinogen, cases of “reparative overcompensation.”45 While overcompensations, like temporarily adaptive adrenalin rushes, might help fight assaults or toxic threats, they can be maladaptive because of their metabolic costs (e.g., stress responses that cause long-term harm). In admitting that hormetic overcompensation is a response to injury, not generally
beneficial, Calabrese and HG proponents inconsistently undercut their case. If so, H cannot be generalized to HG.
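The two dose-response pictures in dispute can also be written out (a sketch in standard risk-assessment notation, not a formula from Calabrese’s papers). On the LNT view that, as noted above, most scientists follow, excess risk R rises in proportion to dose d, with no threshold below which risk vanishes:

\[
R(d) = \beta d, \qquad \beta > 0, \qquad R(d) > 0 \ \text{ for every } d > 0 .
\]

HG, by contrast, requires a biphasic curve whose low-dose segment falls below the unexposed baseline, a net benefit, for essentially all endpoints, subjects, and chemicals, before turning harmful at higher doses. The arguments above show that the evidence offered concerns isolated endpoints (H), not that universal net curve.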

Confusing Hormesis Concepts H, HG, and HD

More generally, because HG proponents like Calabrese commit inconsistencies, inductive fallacies, and appeals to ignorance when they generalize from H to HG—from single to all chemicals/endpoints/responses/subjects/ages/exposure conditions—they ignore detailed empirical evidence and instead beg the question by assuming what they should argue.46 Calabrese and Baldwin illustrated their question-begging when they claimed hormesis [HG] is “the rule,”47 “ubiquitous,” demonstrating beneficial effects “across the biological spectrum.”48 Yet, as mentioned, damaging effects on children, reparative overcompensation, and cases like harms from cadmium and dioxins—2 of Calabrese’s 6 main examples supposedly satisfying H49—show that beneficial effects H clearly are not generalizable. Why do Calabrese and others want to claim HG is “the rule”—that toxin harm is not proportional to dose?50 As already noted, they want to use HG to justify weakening regulations for toxic exposures. That is, from HG, Calabrese and others want to infer a third hormesis concept (what can be called) HD. HD is the claim that hormesis should be the “default assumption in the risk-assessment [therefore risk-regulation] process,” the default assumption that low-dose toxins are harmless. 51 To support their inference from HG to HD, Calabrese and others make at least 3 questionable claims, none of which empirical data support, all of which beg the question. (1) Because low doses of some chemicals, like vitamins, have beneficial effects, and accepting HD would “prevent excess disease or death over background” and “promote better health,” therefore “public health might be better served by setting exposure standards [HD] . . . based on the hormetic model [HG].”52 (2) Developing HD regulatory policies, based on hormetic model HG, would have “economic implications” that are “substantial” because HD “could lead to less costly . . . clean-up standards” for pollutants. 53 (3) Developing HD regulatory policies, based on “hormetic model” [HG], would promote science and “encourage the collection of data across a broader range of dose.”54 Calabrese and his coauthors have defended none of the preceding 3 claims. Yet, the consensus scientific position is that weakening pollution standards, by accepting HD, would greatly harm public health, 55 especially for vulnerable groups like
children. Consequently Calabrese’s 3 question-begging claims need no further consideration. Apart from Calabrese’s question-begging, his inferring HD from HG is ethically questionable because of its violating 5 ethical norms based, respectively, on justice, consent, biomedicine, protection, and operationalizability. These violations show that, even if HG were true—if low-dose toxic/carcinogenic exposures caused beneficial responses for all endpoints/responses/subjects/ages/exposure conditions—HG would constitute only necessary, not sufficient, conditions for inferring HD. Why only necessary? Because default rules like HD are used to justify imposing risks on a democratic society, in situations of scientific uncertainty their acceptance requires both scientific and ethical judgments. The latter include, for instance, whether risk-victims also should be risk-beneficiaries, whether alleged benefits are compensable, fair, worth the risks, and so forth. 56 Ethical/policy conclusions like HD thus cannot be validly inferred from scientific claims H and HG, which include no ethical/policy premises. One reason is that, from purely scientific or “is” claims (H, HG), one cannot validly deduce ethical or “ought” claims (HD) because of the is-ought fallacy in ethics. According to this fallacy, no evidence about alleged purely scientific facts—what is the case—is sufficient to justify ethical conclusions about what ought to be the case. 57 Calabrese and others thus commit the is-ought fallacy by invalidly inferring HD. To validly infer HD, at a minimum they must show it would meet the 5 conditions of justice, consent, biomedicine, protection, and operationalizability. That is, they must show HD would be just, equitable, and compensable; worthy of free informed consent by risk bearers; consistent with basic rules of biomedical ethics, as set by Nuremberg, Belmont, Helsinki, the Common Rule, and other bioethics requirements;58 an adequately health-protective stance, in the face of uncertainty;59 and operationalizable. Because Calabrese and others ignore these 5 ethical conditions, it is unlikely HD could meet them. HD arguably could not satisfy the justice requirement because HD beneficiaries would be industries that avoided controlling low-dose pollution, but put potential pollution victims at risk. Such a situation could violate fairness, equal treatment, and due process, including rights to compensation from HD-related harm. Ignoring fairness, Calabrese and others note only expedient, supposedly desirable economic consequences to industry of accepting HD.60 Moreover, because industrial polluters would be HD’s primary beneficiaries, but pollution victims would be HD’s primary risk-bearers, victims probably would deny consent to HD. 61 If so, HD could not meet the second or consent requirement. After all, people generally agree to bear risks only when they get something in return. Breast-cancer patients may take tamoxifen, despite its risks of thrombosis, stroke, uterine hyperplasia, uterine cancer, and uterine sarcoma,62 because they get something in return: cancer treatment. Likewise, virtually all pharmaceuticals impose one risk in exchange for another. Because HD victims
would get little/nothing for bearing increased HD risks, their consent to HD is unlikely.63 Nor would HD likely meet the third or biomedicine condition. Classic biomedical-ethics codes require that potential risk victims exercise rights to know the risks imposed on them.64 Yet polluting industries do not distribute right-to-know disclosure forms in neighborhoods where they release toxins. Because many pollution victims are unaware of their increased health risks, their rights to know are probably violated. Nor would pollution victims likely enjoy an acceptable benefit-risk ratio, another requirement of biomedical-ethics codes, because most HD benefits would go to industry, while most HD risks would go to the public.65 HD also would not likely meet another necessary biomedical-ethics condition, that exposures not target a special, vulnerable group.66 Yet by accepting HD, not LNT, Calabrese targets a vulnerable group, children; they are roughly 10 times more sensitive to toxins/carcinogens.67 Because meeting the fourth or protection condition requires safeguarding vulnerable populations, HD appears unlikely to meet this condition. Finally, meeting the fifth or operationalizability condition also seems unlikely. HD is not operationalizable in the real world for several reasons. One reason is that each person’s toxic/carcinogenic exposures cannot be individually titrated, to achieve total exposures that are only low-dose; every person’s doses, from tens of thousands of pollutants, cannot be measured, every instant, to provide immediate feedback to polluters about whether total exposures exceed low dose. 68 A second reason is that HD regulations—which allow no more than low-dose-toxic releases—cannot guarantee only low-dose exposures. Why not? Thousands of synergistic pollutants together drive total exposures beyond low doses. For instance, by the time a child is born, background exposures already have given more than a low dose of ionizing radiation, although all radiation-dose effects are cumulative, with no threshold for risky effects at any dose. 69 As radiation illustrates, HD could never be operationalizable for most exposures. Besides, Calabrese and Baldwin say maximal low-dose beneficial responses occur at doses that are about one-fifth of the no-observed-adverse-effect level or NOAEL.70 This means that simultaneous exposure to 5 equally potent toxins, each acting on the same biological mechanisms, and each at one-fifth NOAEL, would move victims from the low-dose range to the adverse-effects range (see the arithmetic at the end of this section). Moreover, repeated US EPA and Centers for Disease Control studies show all US citizens receive doses of not 5, but thousands of chemicals whose residues are measurable in blood/tissue.71 These cause synergistic, not merely additive, effects. For instance, risks are synergistic when people are exposed to dioxins and smoking, radon and smoking, asbestos and smoking, alcohol and smoking—partly because additional exposures add to total, harmful, immunologic, and estrogenic burdens.72 Yet TCDD (dioxin), ionizing radiation, and alcohol are 3 of Calabrese’s
6 main examples supposedly satisfying H.73 HD thus is irrelevant in a world in which virtually everyone already has had more than low-dose-toxic exposures. HD also would not be operationalizable for a third reason: it would harm sensitive populations. Roughly 25 percent of the population, including children and pregnant women, are more sensitive to toxins, thus more likely to exhibit harmful responses even to low-dose exposures. A fourth reason is intra-species differences in absorption. The same toxic releases can cause radically different doses among people. For instance, adults absorb 10–15 percent of lead entering their gastrointestinal tracts, while pregnant women and children absorb about 50 percent.74 All 4 reasons thus mean HD cannot meet the operationalizability criterion. If not, HD is inapplicable to real-world policymaking. Yet by the “ought implies can” rule, people ought not be required to do what is impossible,75 thus ought not be required to adopt HD. Calabrese forgets this fundamental ethical rule.76 Instead he erroneously and unethically claims that it is not US “policy to protect the most sensitive in the general population.” 77 This claim both implicitly admits HD operationalizability problems and presupposes that society has no duties to protect the vulnerable.
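The dose arithmetic behind the second operationalizability problem is worth making explicit (an illustrative sketch that uses Calabrese and Baldwin’s own one-fifth-of-NOAEL figure and assumes simple dose additivity, a conservative assumption, since the chapter notes that real exposures act synergistically). If each of 5 equally potent toxins acting through the same mechanism is released at its supposedly hormetic dose, the combined exposure is

\[
5 \times \frac{\text{NOAEL}}{5} = \text{NOAEL},
\]

already at the boundary of the adverse-effects range, before counting the thousands of additional residues that EPA and CDC biomonitoring finds in blood and tissue.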

Bait-and-Switch Hormesis Arguments

Preceding arguments show that no hormesis concept—H, HG, or HD—has both scientific validity and regulatory relevance. On one hand, H is scientifically valid because 1 biological endpoint, among thousands, often shows minor beneficial effects of some toxin/carcinogen. Yet H is trivial and irrelevant to regulations because regulations require HG, net-beneficial effects for all lifetimes/ages/endpoints/contexts/responses/individuals. On the other hand, although positing HG is relevant to regulations proposed by chemical manufacturers, it is scientifically invalid because of invalid extrapolation, appeals to ignorance, and ignoring biological reparative-overcompensation costs. Similarly, although positing HD is relevant to regulations proposed by chemical manufacturers, it is scientifically and ethically invalid because of its question-begging failure to meet justice, consent, biomedicine, protection, and operationalizability conditions. If neither H, nor HG, nor HD has both regulatory relevance and scientific validity, why have Calabrese and others been able to publish their pro-hormesis essays in journals like Nature and Environmental Health Perspectives?78 Three possible explanations come to mind. One is that most Calabrese essays are opinion pieces, not empirical research, thus not subject to standard scientific peer review. Another explanation is that, when Calabrese and others claim some study illustrates hormesis, only scientists who know this study, in detail, can evaluate Calabrese’s hormesis claims.79 Yet journal reviewers are unlikely to know these studies because many are either non-peer-reviewed industry studies or not about
hormesis. Yet if journal reviewers do not know these other studies, they may illegitimately assume Calabrese’s allegations about them are correct. Journal referees also may have been misled by Calabrese’s failure to distinguish the different concepts H, HG, and HD, all of which he labels “hormesis.”80 By using one label for three distinct claims, Calabrese likely has confused reviewers. They may have recognized the scientific validity but ignored the regulatory irrelevance of hormesis concept H; recognized the regulatory relevance but ignored the scientific invalidity of hormesis concept HD; then erroneously concluded that the same concept had both scientific validity and regulatory relevance. This third explanation appears plausible because Calabrese repeatedly equivocates among H, HG, and HD. For instance, Ralph Cook and Calabrese commit the fallacy of equivocation when, under the heading “FDA Regulation of Hormesis,” they claim to be “proponents of hormesis” and urge “regulation of hormesis.”81 They arguably should have said “proponents of H” and “regulation via HD,” because claims positing H are trivially true and scientifically valid but irrelevant to regulation, while only their scientifically invalid HG and HD claims are relevant to regulation. Calabrese also equivocates in answering critics. For instance, without using my H, HG, and HD labels, Kristina Thayer and coauthors attack Calabrese’s claims positing HG and HD.82 Yet Cook and Calabrese respond equivocally, by defending only H, and saying “hormetic dose-response curves [H] have been observed for a large number of individual agents.”83 Thus Cook and Calabrese first “bait” the reader by supporting a biologically and ethically doubtful hormesis concept HD. After scientists like Thayer and coauthors respond to this bait, 84 by criticizing HD, Calabrese and coauthors “switch,” defending hormesis concept H, which is not at issue. They may appear correct, but only because they fallaciously equivocate, defend a concept H that is not at issue, and use the same hormesis label for both trivially true H claims and scientifically invalid HG and HD claims.
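The equivocation is easiest to see when the three claims that share the one label are written side by side (a schematic paraphrase of the definitions used in this chapter):

\[
\begin{aligned}
H:\ & \text{for at least 1 endpoint, subject, and condition, some low dose produces a beneficial response;}\\
HG:\ & \text{for all endpoints, subjects, chemicals, and conditions, low-dose responses are net beneficial;}\\
HD:\ & \text{regulators ought, by default, to treat low-dose exposures as harmless.}
\end{aligned}
\]

An argument that supports the existential claim H, then answers critics of HD as if H were what they had attacked, trades on the shared word, not on any valid inference from H to HG or from HG to HD.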

Conflicts of Interest and Conceptual Obfuscation

Investigating Calabrese’s scientific and ethical errors reveals many insights about why practical philosophy of science is important. One insight is that much special-interest science appears to be caused by financial conflicts of interest. Calabrese and Baldwin note the financial stakes in hormesis debates. They admit that “the external influence of the enormous cost of environmental cleanups and the proper allocation of limited societal resources have strongly encouraged a . . . reexamination of . . . hormesis.”85 In less flattering terms, as noted in chapter 1, a US National Academy of Sciences’ report warned about chemical-industry motives behind promoting weakened regulations for low-dose-chemical exposures. 86 The academy said “pesticide manufacturers” and other “economically interested
third parties” are funding studies, trying “to justify reducing” chemical-safety standards.87 A second insight is that special-interest science practitioners, like Calabrese, often violate standard disclosure guidelines regarding conflicts of interest. The guidelines of Calabrese’s state employer, the University of Massachusetts, dictate “disclosure and review of every Conflict of Interest . . . involving a Financial Interest and . . . Compensation in an aggregate amount greater than $10,000 within the prior twelve-month period that is received by or contractually promised to a Covered Individual.”88 (As later paragraphs show, Calabrese appears to have between $810,000 and more than $3,000,000 in funding—far more than $10,000—for which he has not disclosed the sources.) Likewise, the Journal of the American Medical Association, International Committee of Medical Journal Editors, Council of Science Editors, World Association of Medical Editors, and others have policies requiring authors to specifically indicate whether they have conflicts of interest regarding subject matters about which they write.89 Calabrese, however, fails to reveal many funding sources, despite his research defending chemical-industry positions. Years ago, he disclosed chemical-industry support (e.g., from the Texas Institute for Advancement of Chemical Technology),90 funded by Dow, BASF Chemical, Bayer Chemical, Shell Chemical, and Syngenta pesticide company.91 However, in 2007, Calabrese’s official University of Massachusetts online resume failed to disclose funding sources for 3 of his 9 research projects, responsible for $810,000; since then, his full resume cannot be obtained on the Internet.92 In another official 2007 University of Massachusetts online resume—the latest available on the Internet—Calabrese listed receiving more than $3 million from unnamed sources. Named sources include Atlantic Richfield Oil, Chemical Manufacturers Association, Dow Chemical, Exxon Oil, Reynolds Metals, and Rohm and Haas Chemicals.93 After a researcher blew the whistle in a January 2008 toxicology publication about Calabrese’s conflicts of interest and failure to disclose funding sources,94 Calabrese began disclosing even less. His online university resumes changed dramatically. Those trying since 2008 to access his official, public-university, online resume95—whose 2007 web version had $3 million in undisclosed funding sources—instead received the message: “FORBIDDEN: You don’t have permission to access” this resume.96 Instead of providing the second resume,97 whose 2007 web version had $810,000 in non-disclosed-funding sources, the shortened 2008 Calabrese resume had no references to chemical-industry funding, yet listed $570,000 from undisclosed sources.98 The website also said one must “contact the professor” to obtain Calabrese’s complete resume.99 Calabrese’s responses to whistleblowing about his conflicts of interest and funding-disclosure failures thus appear to have increased his funding-source cover-up and decreased his disclosures.
A third special-interest science insight is that often authors publish with industry representatives/employees having conflicts of interest, but fail to disclose their coauthors’ affiliations. For instance, although Calabrese coauthor Ralph Cook100 was “Director of Epidemiology, Dow Chemical, USA . . . Midland Michigan” and remains a Dow consultant,101 Calabrese ignores Cook’s Dow ties and lists his affiliation merely as “RRC Consulting, LLC . . . Midland Michigan 48640-2636.”102 A fourth special-interest science insight from the Calabrese case is that when other scientists point out special-interest-science conflicts of interest, offenders like Calabrese attempt to censor whistleblower disclosures. For instance, an article to be published in a journal edited by Calabrese criticized the flawed science in Calabrese’s work.103 Yet without the author’s or issue-editor’s consent, someone deleted from the page proofs documentation of Calabrese’s conflicts of interest and failure to disclose industry research-funding. (The page proofs were in Calabrese’s possession when this deletion occurred.) The issue editor, Dr. Kevin Elliott, had to force Calabrese to reinstate the deleted material.104 Later, when this article was reprinted in Human and Experimental Toxicology,105 someone again deleted endnotes without author consent. Again Dr. Elliott had to pressure the journal to reinstate endnote material.106 Why the deletion? Lee John Rourke, the editor of the journal where the reprint appeared, said he could not include the author’s entire endnote 41 without Calabrese’s permission.107 Yet because the journal is located overseas, and its editors are scattered throughout the world, phone exchanges about these problems were impossible. Despite repeated email exchanges, Rourke would not reveal why he allowed Calabrese to censor the author’s paper by cutting sentences from it.108 Instead Calabrese forced the journal to cut these sentences from the author’s article:

Author’s note: When the author received these page proofs on 10-1-08, someone had deleted the relevant endnote number from the text; moved this endnote material to the bottom of an earlier page; and changed the location in the text where this material was cited; these unauthorized changes were made neither by the production editors at Sage nor by the issue editor, Dr. Kevin Elliott. Earlier, when this article appeared in Biological Effects of Low-Dose Exposures, a journal edited by Dr. Calabrese, someone also tried to delete completely (from the page proofs) the material in this endnote.109

Even worse, when Calabrese realized he could not defend his invalid scientific conclusions in any peer-reviewed journals, he instead used chemical-industry funding to attack the author and attempt to silence her in 3 ways. First, he filed scientific-research-misconduct charges against her with her university, an event that automatically triggered an extensive, stressful, time-consuming investigation. As a result, all university investigators unanimously and fully exonerated

the author and criticized her harassment by special interests—something that happens to 50 percent of pollution and health researchers, mostly at universities. For instance, much harassment of renowned climate scientists arises from the fossil-fuel industry.110 Having failed in the bogus research-misconduct charges filed with the whistleblower’s university, Calabrese next filed research-misconduct charges with scientific journals in which she publishes. He also failed in this attempt at retaliation. Next Calabrese filed research-misconduct charges with the US National Science Foundation—which has funded the author’s research for more than a quarter-century. Again the author was completely exonerated on August 20, 2012. Nevertheless, many university, government, and journal researchers have had to waste time and face stress from chemical-industry-funded Calabrese because of these bogus charges. When scientists (with conflicts of interest) are unable to defend their flawed science in professional journals, they often resort to attacking those who blow the whistle on their flawed science.111

Preventing Scientific Misconduct and Flawed Science Policy

One obvious question, arising from this account of conceptual analysis and apparent scientific misconduct, is "How common is special-interest harassment of researchers who reveal threats from special-interest pollution or products?" The answer is probably already evident, given the preceding statistic that 50 percent of pollution researchers face industry harassment—whenever their research threatens industry profits.112 Indeed, chapter 15 notes many classic cases of scientist harassment. After university scientist Herbert Needleman discovered the harms of lead, the lead industry filed research-misconduct charges against him. After climatologist Mike Mann confirmed climate change, the fossil-fuel industry filed research-misconduct charges against him, and so on. Another obvious question, in response to this chapter's analysis, is "What can scientists and philosophers of science do, in the face of problems such as Calabrese's conceptual obfuscation of H, HG, and HD concepts; his false testimony in the Allen case; and his filing bogus research-misconduct charges against those who blew the whistle on his poor science?" Chapter 15 has an answer. Those with scientific expertise have professional duties not only to do good science but to blow the whistle on poor science whenever it occurs. If enough people blow the whistle on flawed science—especially science having potential to harm



people—no single whistleblower will be forced to endure harassment merely for defending science.

Conclusion

One of the biggest lessons from the Calabrese case may be that ethical shortcomings often accompany scientific shortcomings. Calabrese invalidly "confirmed" hormesis, then failed to reveal decades of chemical-industry funding, worth millions of dollars, from the main beneficiary of his questionable hormesis claims. Another lesson is that those with scientific expertise cannot afford to be ethically naïve about flawed scientific methods. As author Upton Sinclair warned, "it is difficult to get someone to understand something when his salary depends on his not understanding it."113


CHAPTER 4

Trading Lives for Money
COMPENSATING WAGE DIFFERENTIALS IN ECONOMICS

Slavery is not a thing of the past. The US State Department says 4 million people/year—mainly women and children—are bought, sold, transported, and held against their will in slave-like conditions. In roughly 90 nations, modern slave traders use threats, intimidation, and violence to force victims to engage in sex acts or endure slave-like conditions as factory workers, domestic servants, or street beggars. Annually in the United States, at least 50,000 women and children are trafficked for sexual exploitation. Armies also abduct children and force them to fight for government or rebel military groups. At least in numbers, this contemporary slave trade may even exceed the earlier African slave trade. Every 3 years, contemporary traffickers enslave about 12 million people, equal to the total number of people enslaved during all 250 years of the African slave trade to the Americas.1 Traffickers succeed by deception. They promise poor people good jobs at high pay in foreign cities. Once their victims are isolated and far from help, traffickers enslave them. While every nation prohibits slavery, why does it continue? One reason is that poor people in economically deprived nations must fight for survival. There are almost no jobs for them. Their survival may require their taking risks with businessmen who sometimes are traffickers. This chapter shows that something similar happens in nations like the United States. If poor people in economically deprived situations want to survive, they must take health-and-safety risks. Often they take whatever jobs they can get, regardless of the risks, and economic science attempts to justify their risks. However, the chapter shows that this supposed justification is not real, that it is based partly on a flawed concept, the compensating wage differential. By using philosophy of science to analyze this flawed concept, this chapter shows how science can err and justify harm to today's workers.



Chapter Overview

Should workers have the right to trade their health for work? Neoclassical economic theory says they should. On one hand, especially in the English-speaking world, economists defend people's rights to accept risky work on the grounds that laborers are usually paid extra for the risk, and the risk is voluntary, not slave labor. As a result, in nations like Australia, the United Kingdom, and the United States, workers are allowed to trade their health/safety for higher wages. On the other hand, throughout most of Europe, nations do not allow workers to face greater risks, such as higher occupational doses of pollutants, than the public may face. They say such a double standard victimizes already-vulnerable laborers. This chapter argues that the Europeans are right, that the economic-science rationale for allowing higher workplace risks than public risks—the "compensating wage differential," or hazard pay—is flawed for both empirical and conceptual reasons. This chapter's conceptual analysis shows that (1) the wage differential does not exist for the workers who need it most. (2) Contrary to economist Adam Smith's requirement, workers typically do not know their precise individual, cumulative, and relative risks from pollutants like radiation. Yet (3) there is no safe dose of ionizing radiation, for instance, and US regulatory standards nevertheless allow workplace radiation doses that are 50 times higher than those allowed in public. (4) These higher doses annually cause 1 additional, premature cancer death for every 400 workers. Yet (5) workers cannot consent to risks they do not know. Therefore, given the flaws in the compensating wage differential, this chapter argues that (6) economists must use new strategies, including a dose registry, to address the scientific and ethical problems with the concept. Conceptual analysis thus reveals how science can err, how it can put lives at risk, and how alternative scientific strategies can help prevent both harms.

US Workplace Risks

The case of 20-year-old Scott Dominguez, permanently brain damaged from cyanide poisoning on the job, illustrates some of the problems with the compensating-wage-differential concept. He was an employee of Evergreen Resources in Soda Springs, Idaho, a company that made fertilizer from vanadium-mining waste. The owner, Allan Elias, ordered Scott to clean out an enclosed, sludge-filled storage tank. Although Elias knew the sludge was laced with cyanide, he didn't tell the workers or give them safety training or equipment, both of which are required for employee consent and for valid economic use of the compensating-wage-differential concept. On the second day of cleaning,



Dominguez collapsed, had to be carried out of the tank, and was not rescued for more than an hour. Because the company did not have the proper equipment, no one was able to help him. An Idaho jury found Elias guilty of knowingly endangering the lives of his employees. He ignored their repeated complaints of sore throats, of needing protective gear to clean the tanks, and of tanks needing to be tested for toxic chemicals. A federal judge sentenced Elias to 17 years in prison and ordered him to pay $6 million to Scott’s family. 2 In another typical case, a non-English-speaking US immigrant dropped dead from poisoning while using chemicals for his company’s processing work. Three executives of his company, Film Recovery, were tried and convicted of murder. However, such convictions are rare, just as punishing child-sex traffickers is rare. Annually in the United States, 7,000 to 11,000 people die prematurely from injuries sustained in the workplace. Roughly another 100,000 people/year die prematurely from occupationally induced diseases like cancer—caused by unsafe work environments. Many of their deaths could have been prevented, if workplace and public regulations were equally protective. However, the victims represent a largely silent minority, not only because their number represents less than one one-thousandth of the US work force but also because their deaths frequently have undetected chemical or radiological causes for which it is difficult to hold companies responsible. Employers who risk employees’ lives typically neither give workers dosimeters to wear nor test workplaces for hazards. Consequently, workers have little evidence that stands up in court. Also, because there are only a handful of US Occupational Safety and Health Administration (OSHA) inspectors—roughly enough to check a worksite only once every 75 years—health and safety laws are often subjected to politically manipulated enforcement. For instance, during 1980–1988 OSHA referred 30 cases of job-related criminal homicide to the US Justice Department. Yet by 1989, only 4 had been prosecuted or were being prosecuted, in part because the US Bush Administration cut Justice Department funding. However, annual occupation-related deaths in the United States are approximately 5 times greater than those caused by the illegal drug trade, and approximately 4 times greater than those caused by AIDS. Most casualties of the workplace environment are poor, African American, or Hispanic. They have few advocates. 3 Although unhealthy workplace environments annually cause 3 times more deaths and injuries than street crime, even in developed nations employers can avoid responsibility for what happens to their employees.4 In developing countries, apparent injustice in the workplace environment is even more evident. Worldwide, workplace risks also are increasing, in part because of the World Trade Organization (WTO), established in 1995 as part of the Uruguay Round Agreements of the General Agreement on Tariffs and Trade. The WTO has defined all worker health or safety protections, including prohibitions against



child labor, as “barriers to trade” that violate WTO international regulations by which all member nations must abide. 5

The Compensating-Wage Differential

A major reason society fails to stop most occupation-related fatalities is that economists tend to justify risky workplaces on the grounds of the compensating-wage-differential concept, or hazard pay. It is the supposed wage increment, all other things being equal, that workers in risky jobs receive. According to the concept, employees trade safety for higher pay, and they know some workers will suffer the consequences of their risky employment. However, economist Adam Smith argued that risky jobs would lead to efficient markets and be defensible only if workers received higher pay, a wage-differential, and had full information about the risks they faced. Otherwise, he emphasized, market transactions would not meet the necessary conditions of economic science.6 Are the higher-pay and consent conditions met in most workplaces, like that of Scott Dominguez? Apart from whether workers should be allowed to trade their health and safety for additional pay, the most basic problem with the wage-differential is that it often does not exist. Researchers have shown that, when all workers are lumped together, from lowest to highest paid, risk and salary increase proportionately, as the wage-differential predicts. However, when researchers separate the workers into 2 groups—a primary group of white, male, unionized, college-educated, or skilled workers—and a secondary group of nonwhite, female, nonunionized, non-college-educated, or nonskilled workers, the wage-differential disappears. Primary-group workers enjoy a wage-differential, while those in secondary groups do not. Hence the alleged wage-differential, for both primary and secondary workers, is an artifact of data aggregation. In fact, the primary-group wage-differential may exacerbate harm to members of the secondary group because the data aggregation covers up their not having a wage-differential. Flawed economic science—misleading aggregation—causes flawed workplace regulations.7 Indeed, some economists show that nonunionized workers have a negative wage-differential. As risk increases, wages get lower. To the degree that risky jobs are filled by less skilled, socially disadvantaged workers, even Adam Smith's theory suggests no compensating-wage-differential exists. Comparing wages across jobs, without adjusting for skill requirements, shows that hazardous jobs pay 20–30 percent less than safe jobs. Thus, the expedient way for employers to hold down wages is to hold down skill requirements, because socio-economic inequality ensures many disadvantaged workers who are willing to accept health/safety risks in return for pay.



This high-hazard, low-wage situation could not exist without a large supply of socially disadvantaged workers willing to accept it. Yet if nonhazardous jobs are unavailable, or if workers are unaware of occupational hazards, then contrary to Smith's requirements, employers need not pay a compensating-wage-differential to keep employees on the job. Thus, where the wage-differential is most needed, it does not exist. Where it exists, it is not needed. In either case, economic conditions for the wage-differential are not met in the real world.8
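To make the aggregation artifact concrete, the following is a minimal simulation sketch with entirely made-up wages, risks, and group sizes; it is not based on the studies cited above. It shows how a pooled wage-on-risk regression can report a sizable hazard premium even though the secondary group, by construction, receives none at all.

```python
# Minimal sketch with made-up numbers: a pooled wage-on-risk regression can
# report a hazard premium that secondary-group workers never actually receive.
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Primary group: higher base pay plus a genuine hazard premium of $10/hour
# per unit of risk. Secondary group: lower base pay and, by construction,
# no hazard premium at all. (All figures are illustrative, not empirical.)
risk_primary = rng.uniform(0.0, 1.0, n)
wage_primary = 25 + 10 * risk_primary + rng.normal(0, 1, n)

risk_secondary = rng.uniform(0.0, 1.0, n)
wage_secondary = 15 + 0 * risk_secondary + rng.normal(0, 1, n)

def wage_risk_slope(risk, wage):
    """Ordinary least-squares slope of wage on risk."""
    return np.polyfit(risk, wage, 1)[0]

print(round(wage_risk_slope(risk_primary, wage_primary), 1))     # ~10: real premium
print(round(wage_risk_slope(risk_secondary, wage_secondary), 1)) # ~0: no premium

pooled_risk = np.concatenate([risk_primary, risk_secondary])
pooled_wage = np.concatenate([wage_primary, wage_secondary])
print(round(wage_risk_slope(pooled_risk, pooled_wage), 1))       # ~5: an "average"
# premium that every secondary-group worker is wrongly assumed to enjoy
```

The pooled estimate (about $5/hour in this toy example) is exactly the kind of aggregate figure that, on the chapter's argument, masks the absence of any differential for the workers who need it most.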

Workplace Risks: The Case of Ionizing Radiation

What if people consent to a risky job, even if there is no higher pay? Consider workplace exposures to ionizing radiation. US nuclear plants may expose workers to doses 50 times higher/year than those to which the public may be exposed.9 Yet scientists, including members of the influential 2005 National Academy of Sciences' committee, say that low-dose radiation exposures are harmful and cumulative, and that health effects are linear, with no threshold (LNT) below which risk disappears.10 Although radiation effects vary among people as a function of factors like genetics/age at exposure/sex/coexposures, the International Atomic Energy Agency estimates that normal background radiation of about 3 millisieverts (mSv)/year causes 3–6 percent of all cancers.11 The largest study of nuclear workers to date (2005), by the International Agency for Research on Cancer, says they face a 20-percent, lifetime-cancer increase and a 10-percent, lifetime-fatal-cancer increase for exposures to the maximum-allowable occupational-radiation dose, 50 mSv/year.12 Every 20-mSv exposure—roughly 7 times what the public may receive annually—causes 1–2 percent of radiation workers' fatal cancers, and the doses/risks are cumulative.13 How many people receive occupational-radiation exposures? In Canada, there are more than 550,000 radiation workers in more than 80 occupations, including commercial nuclear-power generation, nuclear-weapons industries, food processing, industrial imaging/inspection/testing, mineral-deposits discovery, and so on. In Switzerland, radiation workers number 60,000. In South Korea, 65,000. In the United States, 1.5 million radiation workers are occupationally exposed to ionizing radiation—300,000 by the commercial-nuclear industry.14
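As a rough illustration of what linearity implies in practice, the sketch below applies an assumed nominal coefficient of about 5 fatal cancers per 100 person-sieverts. That coefficient is an illustrative value in the vicinity of standard LNT risk estimates, not a number quoted in this chapter or in the studies it cites. Under that assumption, one year at the 50-mSv occupational ceiling corresponds to roughly the 1-premature-death-per-400-workers figure given in the chapter overview, and cumulative doses simply add.

```python
# Back-of-the-envelope LNT arithmetic. The coefficient below is an assumed,
# illustrative value (about 5 fatal cancers per 100 person-sieverts); it is
# not taken from the chapter or from any particular study.
FATAL_CANCER_RISK_PER_SV = 0.05

def excess_fatal_cancer_risk(dose_msv: float) -> float:
    """Excess fatal-cancer risk from a given dose (mSv), under a linear model."""
    return (dose_msv / 1000.0) * FATAL_CANCER_RISK_PER_SV

print(round(excess_fatal_cancer_risk(50), 6))  # 0.0025 -> about 1 worker in 400
print(round(excess_fatal_cancer_risk(1), 6))   # 5e-05  -> about 1 person in 20,000

# Because risk is linear in cumulative dose, yearly doses add:
print(round(sum(excess_fatal_cancer_risk(50) for _ in range(10)), 6))  # 0.025 -> ~1 in 40
```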

Flawed Disclosure and Consent

Given such high workplace-radiation risks, obviously workers should consent to them in order to satisfy the economic requirements of the compensating-wage-differential and basic ethics. Two factors that can block occupational consent are the absence of individualized and cumulative radiation-dose data.



Unlike other developed nations that require workers to have personal air monitors, the United States has little individualized data because it allows employers to use general air monitors—single, fixed air samplers that assess radiation dose—and to report only workplace-mean radiation exposures.15 Consequently, US occupational-radiation-dose reports frequently underestimate exposures and mask variations. In some US workplaces, radiation concentrations change 4 orders of magnitude over 2 months, and 3 orders of magnitude/day.16 The US National Council on Radiation Protection and Measurement warns that general air samplers can underestimate radionuclide concentrations by 3 orders of magnitude, especially if they are located far from high-exposure employees.17 Thus radiation workers may be unable to know or consent to their precise, individual radiation doses. Lack of data on cumulative radiation doses likewise threatens both occupational consent and implementation of the wage-differential. To see why, suppose 2 workers, one a cancer survivor who had received radiotherapy, and another who had not received it, were deciding whether to continue radiation work. Suppose, as is often assumed, that risks from radiation increase on a scale of excess relative risk, and that both workers receive the same occupational dose. According to the linear model adopted by most scientists, when expressed on a relative-risk scale,18 risk differences associated with this same dose are larger at higher cumulative doses. All other things being equal, the prior radiotherapy could give the first worker a 10-year average cancer risk 6 times higher than that of the second worker.19 Yet as the academy notes, depending on the type of cancer and therapy, therapeutic-radiation doses could be 200 to 1,200 times greater than the maximum-occupational-radiation dose/year.20 If so, this would give the first worker a cancer risk more than 6 times that of the second worker. Or, because 60 percent of human-made-radiation exposures are from medical x-rays, suppose the first worker had 1 whole-body computed tomography scan, with exposures of about 10 mSv.21 This would give him about half the cumulative-radiation dose of workers in the classic radiation study,22 or one-fifth of the US maximum-allowable-annual occupational dose. A diagnostic abdominal helical-CT scan, performed in childhood, would increase the first worker's cancer risk about as much as receiving half the US maximum-allowable-annual occupational dose of radiation. Even x-rays taken as part of required worker health exams contribute to radiation risk.23 Despite these 2 workers' radically different radiation-exposure histories, they would not receive quantitative information about their different relative risks. Because all nations require employers to disclose only occupational-radiation doses, employees typically have incomplete information about their individual, cumulative, and relative radiation risks.24 Protecting US radiation workers thus relies only on average occupational dose—to achieve employer compliance with



regulations. Achieving employee consent, however, also requires another type of information—individual cumulative dose. Both the economic-theory requirements of the compensating-wage-differential concept, as well as all bioethics codes, like the famous Helsinki Declaration, require potential risk recipients to be adequately informed of, and to consent to, any risks imposed on them.25 Implementing this requirement, the classic doctrine of informed consent mandates 4 necessary conditions. The risk imposer must fully disclose the risk; the risk recipients must fully understand the risk; they must be competent to assess the risk; and they must voluntarily accept the risk.26 If cumulative, individual radiation doses determine occupational-exposure risks, but workers know only average, occupational doses, obviously their risk disclosure is incomplete. Workers may misunderstand the different relative risks associated with the same average occupational-radiation dose. To see why this misunderstanding of risk—and lack of consent—is likely, consider a thought experiment (see chapter 6), a typical scientific way of reasoning about a problem when one does not experiment. For the thought experiment, consider the 2 radiation workers in the previous example. Receiving the same occupational-radiation exposures, they are like 2 nighttime drivers on a foggy mountain road without a guardrail. The distance to the edge represents the odds ratio (which is linear) of getting radiation-related cancer, although cell death may be more likely at high doses. The edge represents malignancy, and the fog represents difficulties with radiation-risk assessment and workers’ understanding of their relative risks. The driver closer to the edge is like the higher-exposure worker who has accumulated all radiation hits except the last one required for cancer. The driver farther from the edge is like the lower-exposure worker who has not accumulated these hits. If both drivers move 2 feet toward the edge—both get another hit—the risks will not be the same for each of them. Worker information and consent also are limited because the law mandates no overall radiation-dose/risk limits, only limits within single exposure classes (e.g., medical, occupational, public) and from single sources, like a nuclear-power plant.27 Consequently no nation routinely measures cumulative radiation dose/ risk from all sources and exposure classes, even for high-exposure workers. Most nations also have not followed Canada and instituted a reliable, centralized radiation-dose registry. The United States has a variety of registries, 28 some run by groups alleged to have conflicts of interest, like the Department of Energy (DOE), the Nuclear Regulatory Commission, the Department of Veterans Affairs, and individual facilities. No one has systematically studied radiation-induced disease by combining and improving all US registries, partly because different groups control them. The result is flawed occupational-dose data, difficulties compensating radiation workers, inadequate occupational-dose disclosure and consent, repeated contamination by radiation, and avoidable deaths at many DOE facilities, as with hundreds of Navajo uranium-miner fatalities. 29
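A small numerical sketch can make the two-worker comparison concrete. All parameter values below are illustrative assumptions chosen only to reproduce a roughly sixfold difference like the one described above; they are not the academy's model or its coefficients.

```python
# Illustrative sketch (assumed numbers throughout) of why identical occupational
# doses can carry very different risks once prior, nonoccupational doses are
# counted, under a simple linear excess-relative-risk (ERR) model.
ERR_PER_SV = 0.5          # assumed illustrative ERR coefficient per sievert
BASELINE_RISK = 0.0002    # assumed illustrative baseline annual cancer risk

def annual_risk(cumulative_dose_msv: float) -> float:
    """Annual cancer risk under a linear ERR model: baseline * (1 + ERR * dose in Sv)."""
    return BASELINE_RISK * (1 + ERR_PER_SV * cumulative_dose_msv / 1000.0)

occupational_dose = 50.0    # mSv/year, the US occupational ceiling
worker_a_prior = 10_000.0   # mSv, hypothetical prior radiotherapy course
                            # (about 200 times the annual occupational ceiling)
worker_b_prior = 0.0        # mSv, no prior medical exposure

risk_a = annual_risk(worker_a_prior + occupational_dose)
risk_b = annual_risk(worker_b_prior + occupational_dose)
print(round(risk_a / risk_b, 1))  # ~5.9: same job and same occupational dose,
                                  # but roughly a sixfold difference in risk
```

The same occupational dose, in other words, can carry very different risks for different workers; disclosing only the workplace average tells neither worker which driver on the foggy road he is.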



Because of DOE’s questionable activities that put citizens at risk, beginning in 1991 various government bodies, including the Office of Technology Assessment, have recommended DOE abolition or outside regulation; numerous government bodies confirmed contamination and radiation-dose falsification among 600,000 nuclear workers at 3,500 US DOE facilities. 30 Again in 1994 and 1999, Congress criticized DOE and its contractors for radiation-safety violations, falsification of worker-dose records, contamination, and cover-up. 31 In 1998, the US Government Accountability Office (GAO) warned: “Widespread environmental contamination at DOE facilities . . . provides clear evidence that [DOE] self-regulation has failed.”32 In 2012, GAO again warned that DOE quality, safety, and oversight problems have not been corrected. 33 Because DOE has never been abolished or subject to external regulation, workers continue to be put at risk.

Protection through a Dose Registry

One way to protect high-risk employees—and to implement risk-disclosure requirements for the compensating-wage-differential and for informed consent—would be reliable, centralized, pollution-dose registries, perhaps in the US Centers for Disease Control and Prevention. At a minimum, such registries would include centralized dose collection, epidemiological analysis, risk assessment, risk communication, and confirmation of dose measurements. Creating dose registries would not, by itself, resolve most problems of pollution-dose accuracy. Still, registries would at least help provide better information to workers, a necessary condition for applying the compensating-wage-differential. Because ways to develop this radiation-dose registry have been assessed elsewhere, there is no need to outline them here.34 Data collection could be implemented in stages, beginning with data on occupational exposures. The US Centers for Disease Control and Prevention and National Cancer Institute web sites already reveal at least one precedent for part of such a pollution-dose registry. This is the National Cancer Institute's radiation-dose calculator. It allows citizens to estimate their US nuclear-weapons-testing, fallout-related, iodine-131 thyroid doses.35 A dose registry is necessary because otherwise it is impossible to know workers' risks, like radiation, accurately and thus impossible to ensure implementation of the compensating-wage-differential, something necessary for both sound economic science and justice to workers. A registry also is necessary to implement current annual, 5-year, and lifetime radiation-dose limits within exposure classes.36 Without such a registry, one could never know whether such limits were exceeded, especially because workers can move among nuclear plants and accrue maximum-allowable annual radiation doses at several different plants. Some high-risk workers, "jumpers," work at several facilities each year, as illustrated in chapter 12.



Without a registry, they would bear sole responsibility for knowing/reporting past radiation exposures. The radiation-dose and other registries also could clarify radiation-dose distribution among members of the public, providing sounder bases for regulation and for clarifying/resolving scientific controversies.37 Such registries are especially important because, historically, scientists have repeatedly discovered that radiation and other pollutants are far more hazardous than previously thought. Also, empirically derived radiation-dose and other pollution data often conflict with extrapolated data. For instance, the classic 2005 radiation study has central risk estimates of cancer mortality that are 2 to 3 times higher than linear extrapolations from the data for atomic-bomb survivors, although the 2005 estimates are statistically compatible with bomb estimates, given wide confidence intervals.38 Empirical data from the 2005 Techa cohort likewise have produced much higher estimates of excess relative risk than atomic-bomb extrapolations have produced.39 The fact that the 2005 and Techa studies found higher radiation-risk coefficients than are currently accepted is a good reason to promote radiation-dose and other pollution registries, to do follow-up studies,40 and to take account of more sensitive populations, like women and children, who are not included in the earlier cohorts.41 By controlling for factors like confounders, healthy-worker effects, and dose misclassifications; providing direct, individualized, exposure data; offering larger samples and longer exposure periods; and building on worker studies,42 pollutant-dose registries could provide the data needed for a more complete assessment of whether the economic concept of the compensating-wage-differential actually is satisfied—and thus whether workers are treated justly.
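The following is a minimal sketch of what a centralized registry record might look like and of how it could total a worker's dose across facilities and exposure classes. The field names, facility names, and dose figures are hypothetical illustrations, not the schema of any existing registry; the 50-mSv figure is simply the US occupational ceiling discussed earlier.

```python
# Hypothetical sketch of a centralized dose-registry record and of
# cumulative-dose totaling across facilities and exposure classes.
# All names and numbers are illustrative, not an existing registry's schema.
from dataclasses import dataclass
from datetime import date

@dataclass
class DoseRecord:
    worker_id: str        # identifier that follows the worker between employers
    facility: str         # plant, hospital, or other reporting site
    exposure_class: str   # "occupational", "medical", "public", ...
    dose_msv: float       # recorded individual dose, in millisieverts
    measured_on: date

registry = [
    DoseRecord("W-001", "Plant A", "occupational", 30.0, date(2013, 3, 1)),
    DoseRecord("W-001", "Plant B", "occupational", 28.0, date(2013, 9, 1)),  # a "jumper"
    DoseRecord("W-001", "Clinic X", "medical", 10.0, date(2013, 6, 15)),     # CT scan
]

def total_dose(records, worker_id, exposure_class=None):
    """Cumulative recorded dose for one worker, optionally for one exposure class."""
    return sum(r.dose_msv for r in records
               if r.worker_id == worker_id
               and (exposure_class is None or r.exposure_class == exposure_class))

ANNUAL_OCCUPATIONAL_LIMIT_MSV = 50.0
occupational = total_dose(registry, "W-001", "occupational")  # 58.0 mSv
print(occupational > ANNUAL_OCCUPATIONAL_LIMIT_MSV)  # True: the annual limit is
# exceeded even though each plant, seen alone, reported a dose under the limit.
print(total_dose(registry, "W-001"))                  # 68.0 mSv, all sources combined
```

Only a record keyed to the worker, rather than to a single employer, makes either check possible; that is the point of a centralized registry.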

Objections

In principle, if pollution-dose registries are scientifically/ethically defensible, as well as necessary to assess the economic theory behind the compensating-wage-differential concept, why have they not been adopted? Some objectors might say employers should not have to ensure that employees are informed about/consent to pollution risks, because employers have no control over nonoccupational risks. Yet neoclassical economics recognizes that imposition of workplace risks requires employees' consent and their full receipt of information; economic theory also mandates that economic efficiency obliges employers to help meet these requirements.43 Ethics likewise requires employers to promote employee-risk disclosure/consent/protection, because employers profit from employee radiation exposures, and rights to profit entail corresponding responsibilities.44 Many nations also recognize this employer responsibility,



as illustrated by laws requiring employers both to monitor pregnant radiation workers and to take workers’ medical histories.45 A second possible objection is that because dose registries could open highly exposed workers to occupational discrimination, like that used against chemical-industry employees with genetic predispositions to chemically induced disease,46 workers might avoid chemotherapy or diagnostic x-rays that could add to their exposures. In other words, the situation might be like that after the Fukushima nuclear accident, described in chapter 12, when temporary workers feared losing their radiation-cleanup work, and hence covered up their exposures.47 However, there are worse consequences than occupational discrimination or hiding exposures, namely, society could continue to follow a nonexistent compensating-wage-differential—whose economic conditions for validity are not met. As a result, basic human-rights violations—to life and to informed consent—and high workplace disease/death could continue. A better solution is working to protect victims of discrimination, as in cases of workplace mistreatment based on race, religion, or gender. Besides, workers would still retain their rights not to disclose their nonoccupational radiation exposures—and thus avoid discrimination.48 A third objection to creating pollution-dose registries might be challenges to whether they are needed, because most occupational-pollution exposures are low. However, if earlier 2005 radiation data are correct, many radiation doses are not low, and the same likely holds for other pollutants. At least 400 radiation-cohort members received cumulative occupational radiation doses greater than 500 mSv—which current National Academy of Sciences models say will cause at least 8 fatal cancers. About 41,000 cohort members received cumulative occupational radiation doses greater than 50 mSv, which will cause 82 fatal cancers. Even the cumulative-occupational dose for members of this cohort, averaging about 20 mSv, will cause fatal cancer in more than 1 of every 250 workers.49 Earlier accounts of DOE’s lax safeguards and occupational-dose falsification also suggest that some US worker doses might be excessive. Otherwise, why has the United States—with its 50-mSv allowable-radiation dose/year—not adopted the stricter, 20-mSv occupational standard of other nations, or the 12.5-mSv limit recommended by British authorities?50 Even if most US occupational-pollution doses were low, this third objection errs in assuming that not everyone has rights to equal protection, that only utilitarian or majority protection is necessary—the greatest good for the greatest number of workers. The objection also erroneously assumes that the size of pollution doses alone is sufficient to make them ethically acceptable. Described by British ethicist G. E. Moore, 51 this size-related error is known as the naturalistic fallacy. Those who commit this fallacy attempt to reduce ethical questions (e.g., is this imposition of workplace risk just?) to scientific questions (e.g., how high is this workplace risk?). The questions are irreducible because even small risks may



be ethically unacceptable if they are easily preventable, rights violations, imposed unfairly, without informed consent, without adequate compensation, and so on. Besides, risk bearers ultimately must judge whether or not risks are low by giving/ withholding their consent. A fourth possible objection to using pollution registries—as necessary to protect workers and assess economic theory used to justify risk—is that there is less reason for disclosing workers’ pollution doses/risks than for disclosing sometimes-larger risks—like smoking. Epidemiologically, this objection is partly correct. As already mentioned, risks like smoking are important covariates whose inclusion in dose registries are probably essential to accurate dose information. Ethically, however, disclosing alcohol or tobacco risks is less important than disclosing individual/cumulative/relative risks associated with occupational-pollutant exposures. Despite pressures such as cigarette advertising, often personal risks like smoking are more ethically and scientifically legitimate than workplace-pollution exposures, because personal risks typically involve more personal choice/informed consent/individual control. However, occupational risks often involve less personal choice/informed consent/individual control, partly because of inadequate disclosure and the frequent absence of alternative-employment options. 52 Besides, inequitable workplace risks are allowed, in large part, because of the economic theory behind the compensating-wage-differential concept. If society’s allowing such risks is scientifically and ethically defensible, there must be available scientific data, required for implementing and assessing the wage-differential.
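For readers who want to see where figures like those in the third objection come from, here is a rough arithmetic check of the cohort numbers cited there, assuming a nominal linear coefficient of about 4 fatal cancers per 100 person-sieverts. That coefficient is inferred purely for this illustration, because it reproduces the 8 and 82 figures; it is not a number quoted in the chapter or attributed to the underlying study.

```python
# Rough check of the collective-dose arithmetic in the third objection.
# The 0.04 fatal-cancers-per-person-sievert coefficient is an illustrative
# assumption inferred for this sketch, not a figure quoted in the chapter.
FATAL_CANCERS_PER_PERSON_SV = 0.04

def expected_fatal_cancers(n_workers: int, dose_msv: float) -> float:
    """Expected excess fatal cancers from a collective dose, under a linear model."""
    collective_person_sv = n_workers * dose_msv / 1000.0
    return collective_person_sv * FATAL_CANCERS_PER_PERSON_SV

# Treating the quoted doses as lower bounds ("greater than ..."):
print(round(expected_fatal_cancers(400, 500), 1))    # 8.0  -> "at least 8 fatal cancers"
print(round(expected_fatal_cancers(41_000, 50), 1))  # 82.0 -> "82 fatal cancers"
```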

Conclusion

Government relies on the economic concept of the compensating-wage-differential to justify imposing higher workplace-pollution risks on typically poor, uneducated workers who are often forced to take dangerous jobs. Yet conceptual analysis reveals that the main economic conditions required for compensating-wage-differential validity—worker knowledge of workplace risks/consent to them/higher compensation for them—are rarely met in the real world. Instead, often there is no increased economic compensation, and regulations are inadequate to ensure worker knowledge and consent. Thus, apart from the theoretical scientific validity of the compensating-wage-differential concept, it has little practical, real-world validity. To defend the compensating-wage-differential concept but not implement it, as economic science requires, is like a nation's defending justice but never implementing police protection and courts of law. Actions regarding the wage-differential concept speak louder than words. Science is only as good as the way it is practiced. Otherwise, both science and justice suffer.


PART II

HEURISTIC ANALYSIS AND DEVELOPING HYPOTHESES



CHAPTER 5

Learning from Analogy
EXTRAPOLATING FROM ANIMAL DATA IN TOXICOLOGY

Common sense and ordinary observations are usually the first guides to scientific discovery. To learn how heavy an object is, people may pick it up. To discover water temperature, people may use their fingers or a thermometer. However, optical illusions show that relying on ordinary observations can yield flawed hypotheses. Penrose triangles cannot exist in the physical world, and neither can the after-images seen after exposure to a bright light. Instead, illusions like the Rubin Vase, Kanizsa Triangle, or Necker-Cube suggest that the brain can organize incoming sensations in different ways. These different unconscious organizations often make people think they see something that either is not there or is different from what is there. In fact, the success of motion pictures depends on the optical illusion created by slightly varied still images, produced in rapid succession. Assessing ordinary observations for veracity is especially difficult whenever physical instruments like cameras can capture illusions, such as nonexistent pools of water that drivers often see on hot roads. Because light rays bend differently in cold and warm air, they can produce these road mirages—images that fool even cameras.

Chapter Overview

Although optical illusions are well known, even scientists often do not realize that their ordinary intuitions also may rely on illusions. In fact, because everyday intuitions may depend on logical fallacies or conceptual incoherence, the first part of the book (chapters 1–4) illustrated why logical analysis can help avoid fallacies in science. Because flawed intuitions also can jeopardize scientific hypothesizing, this second part of the book (chapters 5–8) turns to heuristic analysis—assessment of various ways to discover/develop hypotheses. Illustrating heuristic analysis, this chapter shows that many scientists rely on erroneous but commonsense



intuitions that the better way to discover human-related hypotheses is through human, not animal, studies. Contrary to intuition, this chapter argues that the human-studies approach is often wrong and should not be used to discredit plausible animal evidence for hypothesizing human harm. First, the chapter surveys different approaches to hypothesis-development. Next, it summarizes the intuitive grounds for supporting human-studies approaches to hypothesis-development. Third, it evaluates central scientific/practical/ethical problems with this approach. Instead it shows that developing hypotheses based on analogies with animal, not human, behavior is often more scientifically fruitful. Next the chapter shows that, contrary to much scientific opinion, the human-studies approach ought not be used to delay hypothesizing human harm—and thus delay possible action to protect humans from what harms animals. Finally, the chapter offers a number of practical insights about how to develop scientific hypotheses. In particular, it shows that hypothesis-discovery always must take account not merely of phenomena to which the hypotheses apply, but also the socio-cultural conditions under which the hypotheses are discovered. Otherwise, scientific discovery is likely to miss whole classes of hypotheses or to rely on common but flawed intuitions.1

Scientific Explanation and Hypothesis-Development

To provide a context for understanding scientific disagreement over the human-studies approach, consider 3 general ways that scientists and philosophers often disagree about scientific methods. They frequently disagree about when hypotheses truly explain something, how they should develop hypotheses, and what it means to justify hypotheses. This chapter quickly surveys the first 2 areas of disagreement, while chapter 9 outlines the third area of conflict before assessing various approaches to each of these areas of disagreement. Regarding disagreement about scientific explanation, chapter 9 shows that some scientists, logical empiricists, believe they explain phenomena when they can deduce claims about them from general scientific laws. However, chapter 12 shows that other scientists believe they explain phenomena when they can discover their causes, underlying mechanisms, or how to unify a wide variety of phenomena in a coherent way. Still others argue, as chapter 12 does, that scientific explanation should involve a number of considerations, including causes/mechanisms/unification, and so on. Given their disagreements about what constitutes scientific explanation, scientists and philosophers of science likewise disagree about how science begins and how to do heuristic analysis—how to discover/develop hypotheses. Logical empiricists, like Rudolph Carnap and Hans Reichenbach, discussed in chapter 1, thought that observation was the beginning of scientific discovery and heuristic analysis.2



Thus they partly followed the tradition of John Stuart Mill and Francis Bacon, who believed discovery begins with making inductive inferences about observations. Inductive inferences use particular observations in order to reach a general conclusion, for example concluding that because all 50 observed swans are white, all swans are white. Inductive inferences thus are ampliative, because the content of their conclusion goes beyond the content of the premises, and nondemonstrative, because having true premises—such as that each swan is white—does not guarantee the truth of general conclusions. Because inductive conclusions can err when they postulate general hypotheses, they face what is known as the problem of induction. It is one reason that there is no completely reliable method for discovering/developing hypotheses based on observations. Rather than observation and induction, other scientists and philosophers of science, like Charles Sanders Peirce, thought hypothesis-discovery was a matter of having a good, fertile "guessing instinct" for hypotheses.3 Karl Popper likewise says discovery begins with creativity, insight, and bold conjecture, not observations. He also argues that, partly because of the problem of induction, there are no rules/logic that lead to hypothesis-discovery and development.4 Contrary to both the observation and the conjecture approaches to hypothesis-development, Norwood Russell Hanson formulated a rough "logic" of scientific discovery based on Peirce's abduction or retroduction. It follows this format: Surprising phenomenon p exists; p would be explained if hypothesis q were true; therefore q. For instance, Hanson says Kepler discovered the hypothesis of the elliptical orbit of Mars because he retroductively inferred it.5 Of course, because many different hypotheses q might explain some phenomenon, one difficulty with retroduction is knowing which q is better and why.

Hypothesis-Discovery versus Hypothesis-Justification

Most contemporary scientists/philosophers of science probably believe scientific discovery can begin with observation, conjectures, retroduction, or some combination of them. They likely disagree, however, about whether there are different—or any—scientific methods for discovering, versus justifying, hypotheses. Some of the earliest philosophers of science, logical empiricists like Herbert Feigl and Hans Reichenbach, distinguished proposing/discovering hypotheses from testing/justifying them. That is, they distinguished heuristic from methodological analyses. They claimed hypothesis-discovery/development (discussed in chapters 5–8) is more subjective, something understood via psychology, sociology, or history. However, they said hypothesis-justification (discussed in chapters 9–12) is more objective, logical, and involves critical hypothesis-testing. The logical empiricists who accept this discovery-versus-justification distinction often focus on the different temporal stages of science—from first formulating hypotheses, to developing them, to testing/justifying them.



Those who reject the hypothesis-development-versus-justification distinction—like Norwood Russell Hanson and Thomas Kuhn, discussed in chapters 9 and 10—typically claim that even confirmed hypotheses are not certain. They also say that hypotheses must be assessed at both the discovery-development and justification stages, that both stages are partly objective/subjective, and that because these assessment methods are similar, there is no clear dividing line between hypothesis-discovery/development and justification.6 Today, most contemporary philosophers of science probably agree that the context of scientific discovery provides reasons for entertaining some hypothesis, whereas the context of scientific justification provides reasons for accepting it, but that the discovery-versus-justification distinction is not firm. To better understand the heuristic analysis involved in hypothesis discovery and development, consider a prominent way of discovering and developing scientific hypotheses about human behavior.

Using Analogy to Discover and Develop Hypotheses

When biologists are trying to discover or develop hypotheses about human behavior, they often follow several main heuristic strategies. One strategy is to observe humans or human cell/tissue/organ cultures. Another strategy is to study analogous, well-known, causal, or mechanical processes in other animals, then—using animal-and-human analogies—to hypothesize about how animal data might apply to humans. Those following the first strategy often employ the human-studies approach; that is, they assume that proposing hypotheses about human effects of something requires human-epidemiological data, not merely good animal or laboratory data. Those following the second strategy typically reject the human-studies approach and often rely on animal data for hypothesizing. For instance, the US National Academy of Sciences relies partly on animal data, says children are not adequately protected by current pesticide standards for food, and recommends a tenfold strengthening of US pesticide regulations in order to protect children.7 However, most chemical-industry scientists reject animal data, use the human-studies approach, reject the academy's tenfold safety factor for children's pesticide-exposure standards, and thus reject the 1996 US Food Quality Protection Act that mandated this tenfold improvement for 10 years.8 The chemical-industry's main pro-human-studies argument against the academy was that, despite abundant animal data and children's known higher sensitivity to toxins, scientists also must provide human-epidemiological data in order to have a plausible hypothesis about special pesticide-related harm to children. University of California geneticist Bruce Ames and many scientists funded by chemical manufacturers, like Michael Gough or Elizabeth Whelan, support the human-studies approach. They claim animal data on pollutants often



are unreliable,9 largely “speculative”; therefore regulators also should use “epidemiological evidence in humans” before proposing any hypothesis about human harm.10 On one hand, they are partly right. While animal data often are not precise indicators of human harm, well-designed, sufficiently sensitive, human-epidemiological studies frequently provide more direct evidence for pollutant-related human harm. As a prominent academy panel put it, “uncertainty in extrapolating among different species of mature animals is appreciable. . . . [Because of] interspecies maturation patterns . . . choice of an appropriate animal model for pesticide toxicity of neonates, infants, and children becomes even more complex,” partly because the translation from animals to humans can be difficult.11 Echoing this complexity, risk assessors routinely apply an uncertainty factor of 10 to animal results—to account for interspecies variation—and another factor of 10 to account for intraspecies variation.12 Both applications suggest the imprecision of animal data and their need for translation to humans. They are 2 reasons the American Cancer Society has argued that laboratory and animal data provide insufficient evidence for developing carcinogenicity hypotheses about humans— and that only human-epidemiological studies are sufficient to reject the null or no-effect hypothesis regarding effects on humans.13 A quarter-century ago, Irwin Bross, then-director of Biostatistics at New York’s Roswell Park Memorial Institute, also said the lack of knowledge about cancer has arisen partly because of misleading animal studies and scientists’ not demanding human-epidemiological data.14 He quoted Marvin Pollard, former American Cancer Society president, who claimed that many cancer-research failures have arisen because of reliance on animal studies that are inapplicable to human hypothesizing. Many courts likewise require the human-studies approach in order to consider hypotheses about human harm in so-called toxic-torts suits, like those about Agent Orange and Bendectin.15 On the other hand, proponents of the human-studies approach appear partly wrong in demanding human-epidemiological data before developing hypotheses about human-health harms. Their requiring human studies, before discovering or developing hypotheses about humans, raises at least 8 scientific and 6 ethical problems.16
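As a concrete illustration of the uncertainty-factor arithmetic just mentioned, the sketch below starts from a made-up animal NOAEL (no-observed-adverse-effect level) and applies the routine tenfold interspecies and tenfold intraspecies factors, plus the additional tenfold children's factor the academy recommended. Only the factors of 10 come from the text; the starting dose and the resulting reference doses are purely illustrative.

```python
# Sketch of the routine uncertainty-factor arithmetic described above.
# The starting NOAEL is a made-up value; only the factors of 10 (interspecies,
# intraspecies, and the academy's extra children's factor) come from the text.
noael_animal_mg_per_kg_day = 5.0   # hypothetical no-observed-adverse-effect level

UF_INTERSPECIES = 10   # animal-to-human extrapolation
UF_INTRASPECIES = 10   # variation among humans
UF_CHILDREN = 10       # additional tenfold factor for children's sensitivity

reference_dose = noael_animal_mg_per_kg_day / (UF_INTERSPECIES * UF_INTRASPECIES)
reference_dose_children = noael_animal_mg_per_kg_day / (
    UF_INTERSPECIES * UF_INTRASPECIES * UF_CHILDREN)

print(reference_dose)           # 0.05  mg/kg-day
print(reference_dose_children)  # 0.005 mg/kg-day
```

The point of the division is the one made in the text: animal results are treated as imprecise indicators that must be translated, with wide margins, before being applied to humans.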

Scientific Problems with Requiring Human Studies before Hypothesizing

On the scientific side, those who support the human-studies approach to hypothesis discovery/development typically ignore at least 2 problems that often make human-epidemiological data inferior to good animal data. These problems are (1) errors in gathering human-exposure data and (2) selection biases such as the healthy-worker-survivor effect.



Human-studies proponents also often wrongly support human, over animal, data because they fall victim to (3) confusion of the precision of exposure-disease relations with their strength; (4) rejection of classical accounts of scientific explanation; (5) erroneously privileging human-epidemiological—but ignoring weight-of-evidence—data and committing appeals to ignorance; (6) demanding infallible, rather than highly probable, scientific evidence and assuming that merely probable evidence is not evidence; (7) ignoring past inductive evidence for using animal data; and (8) ignoring dominant scientific practices regarding using animal evidence for causal claims about humans. Consider each of these 8 scientific problems. Human-studies proponents err, regarding scientific problem (1), because they overestimate difficulties with getting accurate animal-exposure data, yet underestimate difficulties with getting accurate human-exposure data. As compared to human data, animal-exposure data have at least 5 superior scientific merits. First, they usually result from intended/controlled exposures; second, they typically rely on direct, large-sample observation of exposures to thousands of subjects; third, they usually involve direct, long-term observation of exposures that often capture effects over entire lifetimes/multiple generations.17 Fourth, animal-exposure data also are typically from consistent, constant exposures over time, and fifth, they are from empirically confirmed exposures obtained through frequent measurements of differences between target exposures and actual delivered doses.18 Human-exposure data, by contrast, usually are less reliable—in all 5 preceding respects. First, human toxicology data typically arise from accidental, unintended exposures that cannot be reliably estimated/measured. Second, because good human-epidemiological studies are difficult/expensive, they often have false-negative biases because they rely instead on indirect-rather-than-controlled, small-sample observations of 100 subjects or fewer. Third, they often have false-negative biases because they rely on indirect, short-term observation of exposures. Consequently most human studies are prone to confounding/bias, and they miss many legitimate effects,19 as when studies of workers exposed to benzene were conducted too late to detect all effects.20 Partly because of small sample sizes and short study lengths, human studies also are less able than animal studies to take account of inter-individual variability.21 For the same reason, they often underestimate effects, given that latencies of different cancers may vary from several months up to 50 years. Fourth, typical human data are from variable exposures, observed over time, from which fewer conclusions can be drawn. Fifth, typical human data are merely estimated exposures, often after-the-fact, from occasional measurements of accidental subject exposures, or of those thought to have had similar exposures. For all 5 preceding reasons, because human-exposure studies typically have less ability than animal studies to control quality, in all the ways necessary to develop good scientific hypotheses, human studies usually have greater



exposure-related uncertainties. These often lead to distortions in central estimates and potency estimates—that require interpretation, and perhaps carefully adjusting estimation procedures for slope factors/fitting dose-response models. Although biostatisticians use various modeling techniques to compensate for such errors, how to compensate is frequently unclear because of inadequate quantitative analyses of likely errors in human-exposure estimates.22 A second scientific error of human-studies proponents is their ignoring massive selection-biases in human studies. These biases are minimal/nonexistent in animal studies, yet they can complicate mortality-data comparisons between human-study populations and the general population. Some of these selection-biases include healthy-worker and healthy-worker-survivor effects.23 All other things being equal, the healthy-worker effect occurs because, despite their higher exposures to occupational hazards, workers nevertheless represent healthier segments of the population, as compared to children/sick/elderly; thus workers have lower mortality rates; because human-epidemiological tests often are done in occupational settings, they usually underestimate health effects on the general population, especially effects on sensitive subpopulations like children.24 The healthy-worker-survivor effect occurs because, all other things being equal, those who survive various health threats tend to be healthier than average and thus are overrepresented in longer-term pollution-exposure groups. Consequently, although one can adjust for the healthy-worker-survivor effect, it often produces distortions in relationships between measured, cumulative exposure and risks—because shorter-term-exposure subjects suffer greater mortality than longer-term-exposure subjects. 25 Indeed, for diesel particulates, the relative-risk-versus-cumulative-dose curve has a negative, not positive, slope.26 A third scientific problem for human-studies proponents is that they often prefer human, to good animal, tests because they confuse 2 different things: the precision with which animal-based relations between exposure and disease can be measured and the strength of those relations. While imprecise animal data, with its tenfold uncertainty in animal-to-human extrapolation, may motivate scientists to accept the human-studies approach, imprecise animal data nevertheless are compatible with strong exposure-harm associations. Similarly, precise data on animal-human responses could reflect a weak exposure-harm association. 27 Yet the strength of these exposure-harm relations, not their precision, is more important to scientific hypotheses necessary to protect human health. A fourth scientific problem is that proponents of human-studies approaches reject classical accounts of scientific explanation. According to these accounts, when scientists have experimentally established that a certain class of chemicals is of type X, they have explained something. They know that because chemicals X have certain structures/functions, they are likely to have particular properties. 28 Whenever they investigate a new chemical in this same class X, they do not assume they know nothing about it. Nor do they demand all new



tests—analogous to demanding human tests—on this new chemical, before drawing conclusions about its potential harmfulness. Instead, they rely on their earlier experimentally established explanations about the structure/functions/effects of this class of chemicals, at least until these earlier explanations are proved wrong. Human-studies proponents, who require human-epidemiological studies for developing hypotheses about humans, are like those who require completely new assessments of some chemical, already known to be in class X. If these proponents were right, science would be reduced to case-by-case bean-counting, not principled explanation. A fifth scientific problem is human-studies proponents' discounting relevant information from good animal or laboratory tests, thus privileging only human-epidemiological data, and thereby rejecting weight-of-evidence rules for harm. Such rules dictate that the hypothesis supported by greater evidence—not necessarily human-epidemiological evidence—is the better hypothesis. Yet human-studies proponents ignore weight-of-evidence rules and instead demand human studies as a precondition for hypothesizing. Indeed, if good animal data suggest something causes disease, to demand human-epidemiological studies, before positing hypotheses about risk, is to ignore existing data and privilege only another type of data.29 Such privileging is irrational because investigators, such as airplane, auto, or space-shuttle scientists, can develop causal hypotheses about various harms without having specific human-epidemiological data.30 If so, scientists who have massive scientific—albeit not human-epidemiological—evidence could reasonably use a weight-of-evidence approach, instead of merely a human-studies approach, for developing hypotheses about effects on humans.31 Why? If all existing, non-epidemiological data suggest some agent can cause disease, weight-of-evidence considerations at least create a hypothetical presumption in favor of causality, a fact that human-studies proponents forget. As a consequence, they often not only ignore other evidence, but also commit a fallacious appeal to ignorance. That is, they confuse the absence of human-epidemiological evidence for harm—with evidence for the absence of harm. A sixth hypothesis-development problem of human-studies proponents is that they require near-infallible evidence, from human-epidemiological studies, for proposing a human-harm hypothesis, yet ignore highly probable evidence, namely, animal studies, and thus behave in unscientific ways. Science requires only probable evidence for hypothesizing because, apart from purely abstract logic, no science is infallible. Science cannot overcome the problem of induction, already mentioned earlier in this chapter. Rather, science includes hypotheses that reasonable and careful people propose, develop, then test. In life-and-death cases, like possible human harm from toxins, reasonable people do not demand only infallible evidence for hypothesizing about harm, or only the most difficult-to-obtain, human-epidemiological evidence, before using those hypotheses to warn people. Reasonable people don't wait until they see flames to call the fire



department. They call when they smell smoke. Reasonable people don’t go to the doctor only when they are ill. Instead they get annual check-ups. Reasonable people don’t carry umbrellas only when it is raining. Instead they carry them even when it looks like rain. In short: Because reasonable people do not avoid hypothesizing, merely because they have no human data, they do not require human data for all scientific-hypothesis development about humans. 32 A seventh scientific problem is that those who demand only human data, for hypothesizing about humans, commit an inductive fallacy by tending to ignore previous scientific history. This history shows that most agents, established as harmful to animals, also have been confirmed as harmful to humans. Human and animal tests are “highly correlated,” as Princeton University risk assessor Adam Finkel notes. 33 He emphasizes there are no legitimate scientific tests showing that rodent carcinogens are not also human carcinogens; likewise he points out that most human tests, used to deny human carcinogenicity, employ small samples from which no reliable conclusions can be drawn, as do most chemical-manufacturers’ tests of pesticides. 34 Contrary to polluters’ claims, says Finkel—and as the first reason above shows—typical, high-power, animal data underestimate real risks more than typical, low-power, human-epidemiological data. Why? Animal studies are performed on less-sensitive, adolescent-to-late-middle-age animals, not on more-sensitive neonatal and elderly animals. An eighth scientific problem with the human-studies approach is that most reputable scientific-research programs do not follow it. Given the massive similarities between humans and other animals, both the US National Toxicology Program and the International Agency for Research on Cancer propose carcinogenicity hypotheses before they have any human data. They have classified many agents as possible or probable human carcinogens, even without human-epidemiological data. Thus animal data are massively used for hypothesizing about human behavior, especially in pharmaceutical, micronutrient, psychiatry, substance-abuse, toxicology, pain, diabetes, and epilepsy studies. 35

Practical Ethical Problems with Human-Studies Approaches

On the practical ethical side, those who support the necessity of human-studies approaches, prior to hypothesis development, err in at least 6 ways. First, they demand data that often are unethical to obtain because classical bioethics prohibits human experimentation involving likely harms. Thus, it is ethically/legally questionable to dose humans with pesticides so as to obtain epidemiological data.36 Consequently, human-studies approaches beg the question against following bioethics, against rejecting the null hypothesis, against animal-based



hypothesizing, and against protective regulation in the face of possible harms. If human-studies approaches were right, medical ethics would be reduced to a guinea-pig strategy, doing nothing until dead bodies started to appear. 37 A second ethical problem is that human-studies proponents demand data that also are impractical to obtain. Because human-epidemiological studies require large sample sizes and long time frames—high transaction costs—less than 1 percent of hazardous substances has been tested epidemiologically. Instead government relies mainly on controlled animal testing, then hypothesizes about human harm. By rejecting animal testing as inadequate for human-harm hypotheses, human-studies proponents use economics in a way that again begs an ethical question. The question is that if protecting people from hazards is expensive, government should not protect them. A third problem is that human-studies proponents ignore classical ethical norms to protect the vulnerable. Instead they place the heaviest evidentiary and health burdens on the most vulnerable people. When pollutant harms are controversial, why should potential victims bear the evidentiary burden of showing harm? Victims typically have fewer intellectual/financial/political resources than those who make/release/use toxins, often because victims are unaware of exposures. Moreover, human-studies proponents also unfairly assume that pollution victims must meet a scientific standard that polluters themselves rarely meet. Polluters have never made public a multiple-decades-long epidemiological study, with thousands of human subjects, to develop hypotheses about full health effects of their products/pollutants. Because of both victim vulnerability and probable pollutant harm, ethics requires protecting the vulnerable. Therefore it requires developing animal-based hypotheses about human harm, rejecting the necessity of having human studies, prior to hypothesizing, and asking deep-pocket polluters to bear heavier evidentiary burdens—whenever they deny harm from their releases.38 A fourth ethical problem is that human-studies proponents behave expediently when they reject the classical-ethics default rule that, in the face of probable harm, one should take precautions and not ignore good animal-laboratory evidence for probable harm. Virtue ethics, in particular, recognizes precaution, benevolence, and care as necessary for moral behavior. Yet it is neither benevolent nor virtuous to claim people can pollute, ignore good animal-laboratory data for pollutant harm, yet require human studies before proposing human-harm hypotheses. This stance is like allowing hunters to shoot anywhere, at will, without reasonable assurance that no people are nearby. If hunters ought not ignore data on possible human risks, scientists ought not delay hypothesizing about human harm just because they have no full human epidemiological data. 39 Human-studies proponents also ignore a fifth ethical rule, to take responsibility for risks/harms caused by one’s actions. In ignoring the fact that polluters are the main economic beneficiaries of pollution, human-studies proponents assume that polluters have no duty to ensure the safety of those they put at risk. Yet



virtually all ethics codes hold that rights (to pollution-related economic benefits) presuppose corresponding responsibilities (for pollution-related costs imposed on innocent others). This rights-responsibilities principle is fundamental both to deontological or duty-based ethics—and to contractarian ethics, based on contracts/promises and treating people consistently. All law likewise is premised on equal rights and responsibilities. That is why people have rights to their property, provided they use it in responsible, not harmful, ways. Their property rights end where other people’s equal rights begin. Ignoring such responsibility, human-studies proponents often forget that if good animal data suggest pollutant harm, they have ethical responsibilities either to hypothesize human harm, or to show why the animal data are wrong.40 A sixth problem is that human-studies supporters risk harmful consequences when they demand human-epidemiological data before proposing hypotheses about human harm. Consequently they ignore the utilitarian ethical requirement to minimize harm and maximize desirable consequences. If animal/laboratory data show some pollutant is probably harmful, utilitarianism requires taking this probability into account and therefore proposing human-harm hypotheses. It requires that people calculate the expected utility of their acts, the magnitude of harmful/beneficial consequences and the probability that each of those consequences may occur. In demanding human studies, but rejecting good animallaboratory studies before proposing hypotheses about human harm, Ames and other scientists thus violate utilitarian ethical requirements.41

Conclusion The preceding considerations provide prima facie arguments against the heuristic claim that human data are required for proposing scientific hypotheses about human behavior. Otherwise, the human-studies approach may cause scientists to ignore important hypotheses. Instead, this chapter’s heuristic analysis suggests that hypothesis-development must be sensitive not only to theoretical concerns about analogies between human and animal biology, but also to practical concerns about real-world limitations on human-versus-animal data. Such practical concerns illustrate that in welfare-related areas of science, like epidemiology, toxicology, and parts of biology, even ethics consequences should be taken into account before deciding whether to employ specific heuristic strategies for hypothesis-development. Otherwise, scientists employing strategies like the human-studies approach could ignore whole classes of data and therefore ignore important scientific hypotheses. Consider how scientists ignored important hypotheses in the science of fingerprinting, the most common form of global forensic evidence. Ancient



Chinese, Babylonians, and medieval Persians used fingerprints to “sign” documents, not because they understood fingerprints were unique, but because they superstitiously believed personal contact with documents made them more binding. Relying on this superstition, in the 1800s an English official in India, Sir William Herschel, required natives’ fingerprints for “signing” official documents. Herschel also proposed to local prison officials his finger-signature hypothesis, that each individual fingerprint is different. However, officials ignored him—perhaps because of his lack of scientific training—and scientists never developed his hypothesis. Later, in 1888 Darwin’s cousin, Sir Francis Galton, began studying this hypothesis and developed a fingerprint-classification system. Only in 1892, however, did Galton’s work lead to eventual scientific confirmation and fingerprint use in criminal cases. In 1892 Juan Vucetich, an Argentine police official, achieved the first criminal conviction based on fingerprint evidence.42 Had scientists and officials listened to Herschel decades earlier, and had they been sensitive to his practical expertise, they could have developed hypotheses important both to scientific progress and to criminal justice. Something similar is true regarding the human-studies approach. If scientists were more attentive to the practical limitations of human studies, they might not rely on the misguided intuition that human studies are the only way to discover hypotheses about human behavior.


CHAPTER 6

Conjectures and Conflict
A THOUGHT EXPERIMENT IN PHYSICS

Thinking about something is a powerful way to learn about it. Just ask the many scientists who have studied athletic success, like Australian psychologist Alan Richardson. He showed that thinking about how to play basketball actually improves playing about as much as practice does. Richardson randomly chose 3 groups of students and had the first group practice basketball free throws 20 minutes a day for 20 days. He had the second group practice for 20 minutes on the first day, then do nothing more. He had the third group also practice only for 20 minutes on the first day, then spend 20 minutes a day, on each of the 19 remaining days, thinking about how to make free throws, how to avoid missing shots. On day 20, Richardson measured the percentage of improvement in each group. The first or practice-only group improved 24 percent. The second or no-practice-nothinking group had no improvement, while the third or thinking-only group improved 23 percent.1 Sports writers also say that athletes who practice in their heads are superior to those who merely practice. They say thinking about the game is responsible in particular, for the soccer greatness of Wayne Rooney, Manchester United’s striker. Rooney is considered a better player than David Beckham and the only world soccer player as good as Real Madrid’s Cristiano Ronaldo. Many great golfers— including Jack Nicklaus, Arnold Palmer, and Tiger Woods—also say their success is the result of practicing in their heads, thinking about different phenomena.2 Is something similar true for scientists? Can they learn about hypotheses just by thinking about them? This chapter argues that they can and illustrates one way of doing so.

Developing Hypotheses through Thought Experiments

The previous chapter investigated using animal-human analogies as a way to develop human-behavior hypotheses. This chapter focuses on using thought



experiments—a priori, not empirical, analyses using only reason and imagination—as a way to develop scientific hypotheses. As the previous chapter suggested, Charles Sanders Peirce and Karl Popper argued for hypothesis-development, heuristic analysis, based mainly on creative insights, not observation. Physicist Richard Feynman even said thought experiments are “more elegant” than physical ones. Galileo Galilei’s famous tower thought experiment suggested that, contrary to Aristotle, objects of different masses fall at the same acceleration. 3 James Maxwell’s demon suggested that, contrary to the second law of thermodynamics, entropy could be decreased. Albert Einstein used the Schrödinger-cat thought experiment to argue, contrary to Copenhagen interpretations of quantum mechanics, that observation does not break quantum-state superposition.4 Moral philosophers also use thought experiments to help clarify hypotheses. For instance, Judith Jarvis Thomson’s “transplant surgeon” suggested that, contrary to some utilitarian theorists, one cannot deliberately kill an innocent person in order to save more lives. Philippa Foot’s “trolley” suggested that, contrary to some egalitarian theorists, one could allow one person’s death in order to save more people, provided the victim was not used as a means to this end.

Chapter Overview

To continue the work of the previous chapter, illustrating heuristic analysis to develop scientific hypotheses, this chapter uses an original thought experiment in mathematical physics. It helps clarify different hypotheses about the shape of the dose-response curve for ionizing radiation. (Dose-response curves show what doses of something cause which precise health effects.) First, the chapter gives an overview of thought experiments as heuristic tools. Second, it outlines 3 main hypotheses regarding the shape of the radiation dose-response curve. Third, the chapter sketches a thought experiment to help clarify one of these hypotheses. Fourth, it responds to several key objections about this thought experiment. The chapter shows both that thought experiments can clarify scientific hypotheses and help science progress, and that doing so can lead to greater protection of human welfare, including radiation protection.

Thought Experiments

As already noted, thought experiments are ways of exploring factual reality through reasoning, the characteristic method of the Greek natural philosophers.5 Even in contemporary physics, thought experiments remain valuable. In his Lectures on Physics, Richard Feynman praises Simon Stevinus's 16th-century



thought experiments on the inclined plane, calling them “more brilliant” than experimental work.6 An essential characteristic of any thought experiment is that it be an exploratory, idealized process to hypothetically answer/clarify a theoretical question in some discipline and be carried out according to the rules of logic and that discipline. For 2 reasons, however, thought experiments need not have physical counterparts in the area of laboratory-like or field-like experiments. First, many thought experiments involve non-imitable idealizations of actual conditions under which phenomena occur. Second, for mathematical thought experiments, like the one in this chapter, there are no laboratory-like counterparts. Indeed, the only genuine experiments in mathematics are thought experiments.7 But if thought experiments need have no empirical counterparts, how can they have novel empirical import, as they take place entirely inside one’s head? One answer is that they are arguments, not some special window on the world. 8 As arguments, they posit hypothetical or counterfactual situations, then invoke particulars irrelevant to the generality of the conclusion.9 The concern of this chapter, however, is neither what thought experiments are, nor how they are justified, nor whether their logic has a privileged status, as Gottlob Frege supposed.10 Instead, this chapter asks: Could a particular mathematical thought experiment help clarify alternative hypotheses about effects of ionizing radiation? Thought experiments can be categorized in many ways.11 One crude classification is into refuters, corroborators, and clarifiers. Karl Popper calls the refuters “critical” thought experiments—and the corroborators, “heuristic” thought experiments.12 Refuting thought experiments provide counterexamples that try to overturn statements by disproving one of their consequences. Refuting thought experiments are typically reductio ad absurdum arguments. That is, they assume the opposite of what one is trying to prove, then deduce a contradiction from this assumption, therefore infer that the conclusion (one is trying to prove) must be true.13 Corroborating or heuristic thought experiments provide imaginative analogies that aim at substantiating statements, as in the famous transplant-surgeon arguments by Judith Jarvis Thomson.14 Unlike corroborating thought experiments, clarifying thought experiments provide imaginative analogies that aim at neither refutation nor corroboration, but illuminating some case, as did economist Ezra Mishan.15 In order to clarify whether to build an airport nearby, or farther away at an additional cost of $2 million annually, Mishan proposed a thought experiment, dividing the annual cost of the distant relocation by the number of residents x who would avoid noise pollution from the nearby location. If nearby residents asked whether it was worth $2 million/x (or approximately $20 per year per household) to avoid the closer location, Mishan said this thought experiment would clarify the airport controversy and make it easier to resolve. Because the mathematical thought experiment here seems both to corroborate



the hypothesis that the radiation curve is linear with no threshold (LNT) and to clarify all 3 radiation-harm hypotheses, it appears to be both corroborative/ heuristic and clarificatory.

Mathematical Thought Experiments Standard work on thought experiments in mathematics divides them into at least 6 groups. (1) Presupposing a new conceptual framework, some thought experiments attempt to hypothetically answer specific questions about whether something is the case. (2) Other thought experiments attempt to do (1), but within strict frameworks of fixed theory, such as within Zermelo-Fraenkel set theory. (3) Still other thought experiments, arising during a period of crisis, attempt to construct new conceptual frameworks, as when mathematicians in the early twentieth century proposed various ways to address set-theoretic paradoxes. (4) Other thought experiments emerge when thinkers attempt to corroborate or refute some basic postulate that seems impossible to prove/disprove, as when geometers tried to negate the parallel postulate. (5) Still other thought experiments arise when researchers discover ways to reconceptualize something, as when Peter Klimek, Stefan Thurner, and Rudolf Hanel used the mathematics of spin theory in physics to discover insights about Darwinian evolution. (6) A final type of thought experiment occurs when thinkers attempt to devise new frameworks that are easier to employ, as when Stephen Cowin and Mohammed Benalla discovered new analytical ways to illustrate proving the formula for effective-stress coefficient.16 According to the preceding 6-part classification, this chapter’s mathematicalphysics thought experiment likely is (4) heuristic/corroborating as well as (6) clarifying. Because it hypothetically illuminates a question about the shape of the radiation-dose-response curve, within the set of 6 assumptions accepted by virtually all physicists, yet presupposes no new conceptual framework, this thought experiment appears to fit within categories (2), (4), and (6).

Empirical Underdetermination in Physics A mathematical, not actual, thought experiment is essential to illuminating the ionizing-radiation dose-response curve because its shape is empirically underdetermined. Although the consensus position of physicists is that the curve is linear, with no threshold for harmful effects (LNT), there is no uncontroversial epidemiological evidence about low-dose-radiation effects. Also DNA techniques that tie specific molecular responses to different radiation exposures are not developed enough to specify the curve. As a result, there is an infinite number of mathematical functions, each with different radiation-behavior assumptions,



that pass through all data points representing radiation-effect observations. It also is difficult to obtain person-specific, radiation-exposure estimates. One reason is differences among radiation filters. The US Environmental Protection Agency, for example, has long used filters that detect only about 15 percent of the atmospheric radioiodine that the Finns detect in their filters; although the detection technology has improved since 2006, it remains problematic because of poor monitoring coverage. Belgium, for instance, has about 530 times more radiation-monitoring-density-per-area than the United States, about 52 times more coverage-per-population, and about 27 times more coverage-per-nuclear-plant.17 Another reason for poor radiation data is the presence of many global hot spots, with radiation levels millions of times above average.18 In addition, sample sizes necessary for low-dose-radiation studies would have to be extraordinarily large—and the follow-up extremely long—for epidemiological and statistical methods to detect radiation effects such as cancers. But as sample sizes increased, the likelihood of population exposure to other toxins would increase, confusing the results. High naturally occurring cancer rates and individual variations in nutrition/lifestyle/genetic susceptibility also obscure empirical effects of low-dose-ionizing radiation.19 Besides, there is no unique DNA fingerprint from radiation-induced, versus other, genetic disturbances,20 and no non-controversial biological model of radiation carcinogenesis.21 For all these reasons, experiments alone currently are unable to settle radiation-dose-curve conflicts. Yet, as chapter 12 notes, for a variety of reasons, LNT is scientists’ consensus hypothesis. Trying to employ thought experiments to clarify radiation hypotheses also seems reasonable because of the long time frame of required experiments, particularly those that must take account of the bystander effect and genomic instability. These labels refer, respectively, to the facts that even cells/molecules not hit by radiation exhibit its detrimental effects, and that ionizing-radiation exposure reduces gene-pool fitness for subsequent generations. In addition, most radiation studies are able to control neither for external- and internal-selection effects, nor for variation in susceptibility with age at exposure. Consequently, studies that stratify exposed populations for age at exposure show higher, low-dose-radiation risks, while those that ignore age-stratification do not, partly because of the healthy-worker effect, described in the previous chapter.22 This effect occurs when researchers illegitimately compare radiation-worker fatalities to general-population fatalities. Better scientists, however, compare radiation-worker fatalities to those for comparable healthy groups.23 Controversy over radiation-dose-response hypotheses also continues because of disagreement over whether the Japanese atomic-bomb-survivor database, or the recent radiation-worker database, discussed in chapter 4, is predicatively superior. Although both support LNT, the former database relies on estimated doses, whereas the latter relies on recorded doses that show radiation is 3–4 times more dangerous than the former suggests.24



Thought-Experiment Requirements To devise a mathematical-physics thought experiment to clarify radiation-effects hypotheses, one must define low-dose radiation. Although the same dose affects various tissues/people differently, some physicists believe a low dose is what causes only 1 particle track across a nucleus. According to this definition, a low dose is less than 0.2 mGy (20 mrad), less than one-fifteenth of average background-radiation dose/year, 3 mGy (300 mrad). Most scientists say a low dose is something under 100–200 mGy (10–20 rad)/year. 25 Therefore the thought experiment developed here presupposes low doses are at/below 20 rad/year. This thought experiment also aims to satisfy various open-ended theoretical conditions, given inadequate agreement on conceivability constraints. Such thought-experiment constraints include (1) simplicity, (2) familiarity, (3) plausibility, (4) efficiency, and (5) conceptualization conditions.26 These require, respectively, that the thought experiment be (1) clear, readily understood, without superfluous details; (2) humanly tractable; (3) believable enough to facilitate mathematical/philosophical/scientific communication; (4) able to be achieved in a reasonable time, using computer assistance; and (5) able to be represented mathematically. Practically speaking, if mathematical-physics thought experiments are to clarify some controversy, their conceptual framework and starting point must be acceptable to all parties involved in the controversy. Therefore, the thought experiment developed here will begin with non-controversial assumptions, likely to be accepted by all. To understand these assumptions, however, one must understand controversial alternative hypotheses about the radiation-dose-response curve.
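Because the dose figures quoted above switch between grays and rads, a small conversion helper may help readers keep them straight; this is only a sketch of the arithmetic (1 mGy = 100 mrad = 0.1 rad), using the doses listed in this section.

```python
# Unit conversions for the dose figures quoted above; 1 mGy = 100 mrad = 0.1 rad.
def mgy_to_mrad(mgy):
    return mgy * 100.0

def mgy_to_rad(mgy):
    return mgy * 0.1

if __name__ == "__main__":
    for label, mgy in [("single-track 'low dose'", 0.2),
                       ("average annual background", 3.0),
                       ("common low-dose ceiling", 200.0)]:
        print(f"{label}: {mgy} mGy = {mgy_to_mrad(mgy):g} mrad = {mgy_to_rad(mgy):g} rad")
```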

Radiation-Effects Hypotheses To illustrate conflicting radiation-effects hypotheses, consider different Chernobylnuclear-accident fatality estimates. On one hand, nuclear-industry lobbyists and nuclear-proponent nations, like France, Russia, and the United States, say the 1986 Chernobyl, Ukraine reactor explosion/fire were minimal, causing only 28 casualties, although latent cancers may appear later.27 The International Atomic Energy Agency (IAEA), a nuclear-industry-dominated group, places Chernobyl fatalities at 31, with possible later cancers still to appear.28 On the other hand, many health experts, scientists, and environmentalists, especially in developed nations, say Chernobyl effects were catastrophic. The pro-nuclear US Department of Energy says Chernobyl-caused premature deaths are 32,000, not including cancer fatalities in later generations.29 Apart from heavy fatalities in Belarus and Russia, Ukraine alone puts its to-date Chernobyl fatalities at 125,000.30 The late John Gofman, a well-known University of California doctor and research scientist, puts total Chernobyl-caused, premature deaths induced by germline mutations and cancer at 500,000—and total Chernobyl-induced nonfatal cancers at 475,000. As chapter 12 shows, similar



cancer-fatality disagreements between industry and other scientists have arisen since the even-deadlier, 2011 Fukushima, Japan nuclear-core melts, explosions, and fires.31 One reason for industry-versus-health-scientist disagreement about nuclear-accident effects is that, as chapter 12 illustrates, the IAEA and national governments typically base nuclear-fatality claims on nuclear-utility-estimated radiation releases, although the utility caused the accident and has obvious conflicts of interest. For instance, IAEA estimated 31 Chernobyl fatalities mainly because it relied on local-government radiation estimates, visited only 2 mildly contaminated villages, then failed to consider the 800,000 Chernobyl-clean-up personnel, mainly young military men who had the highest exposures. It also ended epidemiological studies only 3 years after the accident, then concluded there were “no health disorders that could be attributed directly to radiation exposure.” Yet because radiation fatalities can have latencies from several months, up to 60 years, shorter studies obviously underestimate fatalities. 32 A second reason for nuclear-fatality controversies is that associated deaths are neither immediate nor obvious. They are statistical casualties, premature deaths that scientists infer from radiation dose-response curves. Using Hiroshima-Nagasaki statistics and nuclear-worker statistics, physicists agree, within an order of magnitude, about the shape of the radiation-dose-response curve at higher exposures. Yet as noted, they often disagree about this curve at very low exposures. For instance, industry scientists often assume a threshold for low-dose-radiation damage, then claim radiation-related health effects are likely minimal. As a consequence, (a) Chernobyl-accident-accident-induced and Fukushima-accident-induced premature cancer deaths may number only in the tens or hundreds; (b) governments may be able to deregulate low-level-radioactive waste; and (c) ionizing radiation cannot have caused all the problems that atomic veterans, downwinders (near the Nevada nuclear-bomb test site), or radiation workers attribute to it.33 However, the US National Academy of Sciences (NAS) and most health scientists claim radiation effects are LNT, then claim radiation-related-health effects often are substantial. As a consequence, (a´) Chernobyl-induced and Fukushima-induced premature deaths may each number as many as 500,000; (b΄) governments may not be able to deregulate low-level-radioactive waste; and (c΄) ionizing radiation likely has caused numerous premature fatalities, especially among radiation workers. 34 In other words, different radiation-effects hypotheses generate different nuclear-accident-fatality estimates. Before developing a thought experiment to help clarify this controversy, first consider the main alternative proposals about radiation effects.

Three Main Hypotheses

Most physicists tend to subscribe to 1 of 3 hypotheses that can be called "LNT," "T," and "U." Hypothesis LNT, supported by the International Commission on



Radiological Protection (ICRP), IAEA, and US NAS, is that all non-zero doses of ionizing radiation are risky, the relationship between ionizing-radiation doses and health responses is linear, and any exposure increases probability of harm. LNT proponents argue that analysis of Hiroshima-Nagasaki/worker/child-mortality data support LNT. 35 Because tumors almost always arise from single cells, 36 LNT proponents say a single mutational event (radiation track) has a finite probability of generating DNA damage that can cause tumors. Many LNT proponents say this probability is not zero because less than 0.2 mGy (0.02 rad)—one-fifteenth the average background-radiation dose/year—causes a single-particle track across a nucleus. 37 Only 10 eV or less can alter biological molecules. 38 Given this non-zero probability, LNT proponents say standard-setting bodies are correct to say that any apparent adaptation to low-dose-ionizing radiation is “essentially short term,” for several hours. 39 Instead, apparent repair of radiation damage creates cells that are like broken plates, glued back together—but much more likely to break again. Similarly, LNT advocates like the US NAS says repaired cells survive in a weakened state and are much more likely to die from other causes.40 Hypothesis T (threshold), supported by the global nuclear industry and many French scientists—whose nation has a higher percentage of nuclear energy than any other—contradicts the UNSCEAR/NAS hypothesis LNT. Instead, T proponents say ionizing radiation and some chemical toxins are not harmful at low doses because the body can fight effects of small exposures. They say hormesis explains why some people can receive high radiation doses before showing cancer signs.41 Although chapter 3 refuted hormesis claims, other T advocates say low-dose radiation is beneficial and can increase factors such as fertility.42 Still other physicists reject both LNT and T. Their hypothesis U is that measurement problems make any radiation-damage threshold currently unknowable. Roger Clarke of ICRP, Bo Lindell of the Swedish Radiation Protection Institute, Kenneth Mossman of the American Health Physics Society, and Gunnar Walinder of the Swedish Nuclear Training Center, all support U. Whether or not exposures below 100 mGy (10 rads) are risky, they say such effects are too small to observe, are speculative, and are therefore unknowable.43 Nevertheless, scientists/policymakers must daily make decisions about radiation-effects hypotheses, because they are needed to set radiation standards, protect citizens, and award compensation for radiation damages. These decisions are complicated by the fact that much hypothesis disagreement—over LNT, T, and U—arises because LNT advocates typically depend on long-term exposure data, age-stratified studies, large sample sizes, and non-caloric restriction test subjects. T and U advocates, however, tend to rely on short-term exposure data, non-stratified studies, small sample sizes, and some caloric-restricted subjects.44 Partly because each group relies on different methods/data, as illustrated in chapter 3, they have different hypotheses about radiation-dose-response curves. To



help clarify this controversy, however, this chapter’s thought experiment must not beg any questions. Its starting points must rely on assumptions acceptable to all LNT/T/U parties, or it will clarify nothing.

Shared Assumptions

What assumptions do LNT, T, and U proponents share? They disagree on whether human-caused ionizing radiation produces cancer by the same mechanisms as does background radiation.45 Nevertheless, LNT, T, and U proponents agree on at least 6 crucial points (A1)–(A6) that provide a starting point for a thought experiment. (A1) is that all non-zero, ionizing-radiation doses produce an ionization track, through a cell, one theoretically capable of producing cancer.46 (A2) is that if any repair of radiation-induced cell damage takes place, it is within about 6 hours, post-exposure.47 (A3) is that cancer begins in a single cell, and mutations cause cancers.48 (A4) is that because radiation exposures are cumulative, any additional human-caused exposures never begin at zero. Given normal-background radiation, everyone receives about 300 mrad/year of radiation. As a result, no radiation exposures, even for newborns, begin from a zero dose.49 (A5) is that mutations require at least one ionizing hit in which a charged particle transfers energy to an object like DNA.50 (A6) is that, according to simple-target theory,51 radiation hits (single ionizing events) in a critical volume (like DNA), for a given period/dose, are Poisson distributed with probability

P(n) = e^(−x) x^n / n!   (P1)

where x = the mathematical expectation (or average number) of hits in some time/space interval; where e is the base of the natural log system, 2.71828; and n = the number of radiation hits.52

Relying on Consensus Assumptions (A1)–(A6)

Assumptions (A1)–(A6) suggest a basis for a thought experiment to clarify radiation-effects hypotheses. Its heart is hypothesis (P1)—assumption (A6), that the number of radiation hits (single ionizing events) in a critical volume (like DNA), for a given period/dose, follows a Poisson distribution:

P(n) = e^(−x) x^n / n!   (P1)



If (P1) is correct, then

P(1 hit) = e^(−x) x^1/1! = e^(−x) x   (P1A)

But if (P1A) is correct, then

P(0 hits) = e^(−x) x^0/0! = e^(−x)(1)/1 = e^(−x)   (P1B)

And if (P1B) is correct, then

P(at least 1 hit) = 1 − P(0 hits) = 1 − e^(−x)   (P1C)

But if standard assumptions (A1)–(A6) are correct, and if deductions (P1)–(P1C) are correct, then (P2) represents the probability of at least one hit in the DNA:

1 − e^(−x)   (P2)

But for cancer to arise, some scientists claim that at least 2 different target areas in the DNA must be hit, by 2 different particles.53 T proponents say up to 7 different target areas must be hit.54 Still other scientists claim to have confirmed that 1 hit, in 3–4 different target areas, triggers cancer.55 Despite their disagreement, LNT, T, and U proponents likely would agree that the probability of n hits in different target areas is

(1 − e^(−x))^n   (P3)
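For readers who want to check these steps numerically, a minimal Python sketch follows; the expectation value x and the number of target areas used in it are illustrative choices, not values fixed by the chapter.

```python
# A minimal check of (P1)-(P1C), (P2), and (P3), assuming the Poisson hit
# model of (A6); the values of x and n_targets below are illustrative only.
import math

def p_hits(n_hits, x):
    """(P1): Poisson probability of exactly n_hits ionizing hits, given expectation x."""
    return math.exp(-x) * x ** n_hits / math.factorial(n_hits)

def p_at_least_one(x):
    """(P1C)/(P2): probability of at least one hit, 1 - P(0 hits)."""
    return 1.0 - p_hits(0, x)

def p_n_target_areas(n_targets, x):
    """(P3): probability that each of n_targets target areas receives at least one hit."""
    return (1.0 - math.exp(-x)) ** n_targets

if __name__ == "__main__":
    x = 2.0                                   # illustrative expected number of hits
    print(p_hits(1, x), x * math.exp(-x))     # (P1A): both equal x e^(-x)
    print(p_hits(0, x), math.exp(-x))         # (P1B): both equal e^(-x)
    print(p_at_least_one(x))                  # (P1C)/(P2)
    # (P2) and (P3) flatten toward 1 as x grows, as the graphing-calculator
    # check later in this chapter describes.
    for xv in range(1, 8):
        print(xv, p_at_least_one(xv), p_n_target_areas(2, xv))
```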

If (P1) through (P3) are plausible, a mathematical-physics thought experiment based on the simple relation (P3) may provide insight into the role of radiation in carcinogenesis. If R is the expectation of radiation-induced hits, as a function of time, and if M is the expectation of hits induced by all other causes, as a function of time, then over time, the probability that radiation and other mutagens will hit at least n target areas in DNA is

(1 − e^(−(R+M)))^n   (P4)

Of course, (P4) presupposes that radiation R and other mutagens M do not interact to induce mutations and cancers, and this presupposition could be


Conjectures and Conflict

79

false. Nevertheless, if one makes this and several other assumptions (that the expectation of hits is a function of time, and that over time the number of hits in a given volume is Poisson distributed, as (A6) presupposes), then several important results follow. Given (P4), and provided that n is at least 2, then the probability of radiation-induced cancers is given by

PR = (1 − e^(−(R+M)))^n − (1 − e^(−M))^n   (P5)

If (P5) is correct, it might be possible to specify the probability of radiation-induced cancers, despite other causes of DNA damage. To check this thought experiment, one can represent (P5) on a graphing calculator. (P5) appears linear with dose or number of hits, at least for low doses, and at least when M is much larger than R. For instance, consider the case in which M is 10 and R is 1, that is, in which hits induced by all other causes are 10 times greater than the hits induced by radiation. Substituting M = 10 and R = 1 in (P5), when the number of DNA target areas hit is n, and letting n vary from 1 through 25, it is clear that (P5) is linear. Using Mathematica 3.0, we obtain the results below. Table 6.1 shows that, given the assumptions of Poisson distribution, and that total other mutations M are much larger than radiation-induced mutations R, the probability of radiation-induced cancers PR (A–B) is LNT. This particular variant (Table 6.1 and Figure 6.1) of the thought experiment, where M = 10 and R = 1 in (P5), is important because it is consistent with the fact that most experts believe radiation-induced mutations cause fewer fatal cancers than all other mutations together. According to the United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR),56 radiation causes 1 in 40 of all fatal cancers. This case or variant (Table 6.1 and Figure 6.1) of the thought experiment also is significant because, if the simple thought experiment (P5) is close to correct, it answers yes to the question whether radiation effects are LNT.
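Because (P5) involves nothing beyond exponentials, the Table 6.1 computation is easy to reproduce without Mathematica. The Python sketch below tabulates A, B, and A − B for M = 10, R = 1, and n from 1 through 25; it is offered only as a check on the arithmetic, not as part of the thought experiment itself.

```python
# Reproduces the Table 6.1 arithmetic for (P5) with M = 10 and R = 1;
# set M = 40.0 to obtain the Table 6.2 / Figure 6.2 case instead.
import math

def p5_terms(n, R, M):
    """Return A = (1 - e^-(R+M))^n, B = (1 - e^-M)^n, and PR = A - B."""
    A = (1.0 - math.exp(-(R + M))) ** n
    B = (1.0 - math.exp(-M)) ** n
    return A, B, A - B

if __name__ == "__main__":
    R, M = 1.0, 10.0
    for n in range(1, 26):
        A, B, PR = p5_terms(n, R, M)
        print(f"{n:2d}  {A:.7f}  {B:.7f}  {PR:.9f}")
```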


Figure 6.1  Probability of radiation-induced cancer when mutations induced by all other causes are 10 times greater than those induced by radiation.



Table 6.1

n    A = (1 − e^(−(R+M)))^n    B = (1 − e^(−M))^n    A − B
1    0.9999833    0.9999546    0.0000286982
2    0.9999666    0.9999092    0.0000573947
3    0.9999499    0.9998638    0.0000860893
4    0.9999332    0.9998184    0.000114782
5    0.9999165    0.999773     0.000143473
6    0.9998998    0.9997276    0.000172163
7    0.9998831    0.9996822    0.00020085
8    0.9998664    0.9996369    0.000229536
9    0.9998497    0.9995915    0.00025822
10   0.999833     0.9995461    0.000286902
11   0.9998163    0.9995007    0.000315583
12   0.9997996    0.9994553    0.000344261
13   0.9997829    0.99941      0.000372938
14   0.9997662    0.9993646    0.000401613
15   0.9997495    0.9993192    0.000430286
16   0.9997328    0.9992738    0.000458958
17   0.9997161    0.9992285    0.000487628
18   0.9996994    0.9991831    0.000516296
19   0.9996827    0.9991378    0.000544962
20   0.999666     0.9990924    0.000573626
21   0.9996493    0.999047     0.000602289
22   0.9996326    0.9990017    0.000630949
23   0.9996159    0.9989563    0.000659609
24   0.9995992    0.998911     0.000688266
25   0.9995825    0.9988656    0.000716921

If the thought experiment is correct, then even 2 hits of radiation increase one's cancer risk. Besides, because of background exposures (see (A4)), even unborn children receive far more than 2 hits. Following the preceding UNSCEAR suggestion for the percent of cancer that is radiation-induced, consider the curve in which PR is linear with dose/number




Figure 6.2  Probability of radiation-induced cancer when mutations induced by all other causes are 40 times greater than those induced by radiation.

of hits. In this case, M is 40 and R is 1, that is, hits/mutations induced by all other causes are 40 times greater than those induced by radiation. Using Mathematica 3.0 and substituting M = 40 and R = 1 in (P5), Figure 6.2 and Table 6.2 show that, in this case, (P5) is LNT. The thought experiment just described appears plausible, in part, because of additional characteristics of the curve (P5) that make it a reasonable representation of the probability of radiation-induced cancers. When one looks at the slope of the curve (P5), for low levels of R, this slope becomes independent of R and depends on M. For realistic values (because it represents the probability of fatal cancer) of

(1 − e^(−(R+M)))^n   (P6)

between 0.1 and 0.6, there is only slight variation in the slope. Moreover, the (P5) slope has a maximum when risk PR (A–B) is 1.1. This maximum is consistent with the fact that the cancer risk from mutagens other than radiation—namely

(1 − e^(−M))^n   (P7)

may increase rapidly with exposure, e.g., with years of life, but at some level, must stop increasing because the total cancer probability cannot exceed 1. 57 But this last constraint means that the relationship expressing radiation-induced-cancer risk—(P6) less (P7) or PR—as a function of radiation-induced hits, R, is sigmoid. That is, if one holds n constant at 1 and substitutes, respectively, R = 1, 2, . . ., 25 and so on, PR (A–B) remains sigmoid. Moreover, given that the total cancer risk (from radiation and other mutagens) is about 0.25 and slowly rising, it



Table 6.2

n    A = (1 − e^(−(R+M)))^n         B = (1 − e^(−M))^n             A − B
1    0.9999999999999999984371178    0.9999999999999999957516457    2.6854721 × 10^−18
2    0.9999999999999999968742356    0.9999999999999999915032915    5.370944 × 10^−18
3    0.9999999999999999953113534    0.9999999999999999872549372    8.056416 × 10^−18
4    0.999999999999999993748471     0.999999999999999983006583     1.0741888 × 10^−17
5    0.999999999999999992185589     0.999999999999999978758229     1.342736 × 10^−17
6    0.999999999999999990622707     0.999999999999999974509874     1.6112832 × 10^−17
7    0.999999999999999989059825     0.99999999999999997026152      1.8798304 × 10^−17
8    0.999999999999999987496942     0.999999999999999966013166     2.1483777 × 10^−17
9    0.99999999999999998593406      0.999999999999999961764812     2.4169249 × 10^−17
10   0.999999999999999984371178     0.999999999999999957516457     2.6854721 × 10^−17
11   0.999999999999999982808296     0.999999999999999953268103     2.9540193 × 10^−17
12   0.999999999999999981245414     0.999999999999999949019749     3.2225665 × 10^−17
13   0.999999999999999979682532     0.999999999999999944771395     3.4911137 × 10^−17
14   0.999999999999999978119649     0.99999999999999994052304      3.7596609 × 10^−17
15   0.999999999999999976556767     0.999999999999999936274686     4.0282081 × 10^−17
16   0.999999999999999974993885     0.999999999999999932026332     4.296755 × 10^−17
17   0.999999999999999973431003     0.999999999999999927777978     4.565303 × 10^−17
18   0.999999999999999971868121     0.999999999999999923529623     4.83385 × 10^−17
19   0.999999999999999970305238     0.999999999999999919281269     5.102397 × 10^−17
20   0.999999999999999968742356     0.999999999999999915032915     5.370944 × 10^−17
21   0.999999999999999967179474     0.999999999999999910784561     5.639491 × 10^−17
22   0.999999999999999965616592     0.999999999999999906536206     5.908039 × 10^−17
23   0.99999999999999996405371      0.999999999999999902287852     6.176586 × 10^−17
24   0.999999999999999962490827     0.999999999999999898039498     6.445133 × 10^−17
25   0.999999999999999960927945     0.999999999999999893791144     6.71368 × 10^−17



is reasonable to assume we are on the middle part of the sigmoid curve (between 0.1 and 0.6), where the slope is fairly constant. But if we are on this middle part of the curve, any exposure increment, such as from radiation, therefore causes 2 things: a proportional risk (which is always the case within differential intervals) and approximately one and the same risk per unit of exposure. Thus, the mathematical thought experiment appears to have at least an initial plausibility. Of course, to use this thought experiment, one must presuppose that the expectation of hits is a function of time, that the number of hits follows a Poisson distribution, and so on (assumptions A1–A6). One also must presuppose there is no significant interaction (such as synergy) among radiation and non-radiation means of inducing cancers and mutations. Yet, it is not obvious whether this presupposition is borne out in reality. However, if such an interaction is true, radiation risk is even higher than this thought experiment presupposes. Hence, if anything, this presupposition about synergy understates the case for LNT.
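A reader who wants to probe the slope discussion numerically can tabulate PR as a function of R at fixed n. In the sketch below the values M = 10, n = 2, and the grid of R values are illustrative choices; for small R the finite-difference slope stays close to n·e^(−M), so it is governed mainly by M, as the argument above requires.

```python
# A rough numerical look at the slope claims about (P5)-(P7); M, n, and the
# grid of R values are illustrative choices, not values fixed by the text.
import math

def pr(R, M, n):
    """(P5): PR = (1 - e^-(R+M))^n - (1 - e^-M)^n."""
    return (1.0 - math.exp(-(R + M))) ** n - (1.0 - math.exp(-M)) ** n

if __name__ == "__main__":
    M, n, step = 10.0, 2, 0.01
    for i in range(1, 11):
        R = i * step
        slope = (pr(R + step, M, n) - pr(R, M, n)) / step
        print(f"R = {R:.2f}   PR = {pr(R, M, n):.3e}   slope ~ {slope:.3e}")
    print(f"n * e^-M = {n * math.exp(-M):.3e}")  # the low-R slope stays near this value
```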

General Objections Potential problems with the preceding mathematical thought experiment are that it is simple and may include the doubtful presupposition that there is no significant, perhaps synergistic, interaction among radiation and other means of inducing cancer and mutations. Thus, although all participants in the radiation-effects controversy appear to agree on the 6 assumptions (A1–A6), this thought experiment appears potentially vulnerable to at least 3 types of objections: (1) Do thought experiments trivialize the problem they are meant to solve by begging the question?58 (2) As Bernard Williams notes, because contradictory thought experiments are possible, does this one frame some question in ways that predispose readers/hearers to agree with it, perhaps by overweighting familiar facts?59 And (3) because thought experiments are purely hypothetical, does this one fail to support a particular radiation hypothesis?60 Perhaps the most troubling objection is the first, begging the question. To evaluate objection (1), one must determine whether (P2) or (P3) is LNT, the conclusion the thought experiment hopes to support. On a graphing calculator, when x ranges from 1 to 7, equation (P2) rises quickly. At about x = 7, it quickly becomes asymptotic and forms a horizontal line. Equation (P2) clearly is not linear. Similarly, when n = at least 2, as most physicists agree, and when x ranges from 1 to 7, (P3) rises quickly. However, in the case when n = at least 2, and x ranges between 1 and 7, (P3) does not rise so rapidly as (P2); at about x = 7, (P3) quickly becomes asymptotic and forms a horizontal line. Thus (P2) and (P3) are clearly not linear. Because the thought experiment suggests that the curve representing



low-dose-radiation risks is LNT, there is no obvious sense in which this thought experiment begs the question. A related potential problem is whether the experiment is unrealistic in some damaging sense. Because equation (P5) has its maximum at 1.1 and not 1.0, yet represents a probability, and because (P5) presupposes no interaction among non-radiation hits and radiation-induced hits causing mutations/cancer—is this thought experiment unrealistic? Because thought experiments are simple, obviously they err under certain conditions. Yet these errors need not constitute a problem. For example, a compass is a simple but useful device for determining direction, even though it errs in the presence of magnets. Its scope is limited, and it becomes unreliable near the North Pole, in mineshafts, when vibrated, and when near metal. The compass also does not point precisely north, only close enough for most navigational problems. Moreover, most people who follow compasses likely do not know how/why they work. Nevertheless people use them. Employing analogous reasoning, one might use this thought experiment, even with the knowledge that, like the compass, it is limited.61 Such limitations also are less troublesome because, within the next several decades, DNA techniques are likely to enable molecular biologists to track the smallest amounts of radiation damage, independent of uncertainties surrounding epidemiological effects. As a result, provisional acceptance of (P5) does not appear needlessly problematic. In response to question (2) about lack of realism, note that this thought experiment does not frame radiation hypotheses in prejudiced ways. Instead, its frames include mainly assumptions (A1–A6), already mentioned. A6, for example, about Poisson distribution, appears reasonable because it is part of most cancer models. The presupposition that hits of ionizing radiation increase as a function of time also seems plausible because older people bear more evidence of exposure to ionizing radiation, and thus cancer. Therefore the fundamental assumptions of the thought experiment merely presuppose the world is similar enough, so reasoning about it sometimes works. As Simon Blackburn puts it: “the world is not so disconnected that anticipation and imagination always fail . . . we could not survive in it if it was.”62 Regarding question (3), this thought experiment does not seem hypothetical in any damaging sense. After all, for the objection to succeed, it must be hypothetical in some damaging sense. Type-(3) questions ought not reflect merely an aesthetic preference for true stories, 63 because reasonable people typically refuse to deliberate about contingencies only when the stakes are low. Yet as already noted, the stakes are not low in the radiation case. Given great potential health harms, the possible hypothetical character of this thought experiment is less important than whether it is hypothetical in some damaging sense. Also, philosophers have long accepted hypothetical thought experiments. In his later work, Ludwig Wittgenstein was addicted to examples, parables, or philosophical thought experiments. His later method is largely one of exploring phenomena by



imagining changes, then evaluating what happens as a result.64 The value of such imaginings is that they allow new ways of thinking about phenomena. Like sensitivity analysis, this thought experiment does not resolve the controversy, but clarifies it. After all, if experimental data were always conclusive, thought experiments would be unnecessary.

Specific Objections One question about this thought experiment is that, although there are grounds for assuming the probability of mutations is proportional to the number of radiation hits (P1 through P5), why assume cancer probability is proportional to the number of radiation hits? This question, however, is not compelling on empirical grounds. Empirical confirmation shows only 3 hits in different DNA target areas are sufficient to produce cancer.65 Because a hit of only 35 eV or less is sufficient to damage DNA, everyone has experienced DNA damage from background radiation. Thus, given many hits, DNA damage could be large, especially because cancer risk increases with age, just as hits increase. Hence it is reasonable to assume numbers of cancers are proportional to numbers of hits. Some also might ask whether this thought experiment is just a model, nothing more. There are at least 3 responses to this model objection. One is that it challenges all mathematical thought experiments, despite their significant philosophical acceptance.66 This objection fails because it proves too much. A second response to this model objection is that thinking about a mathematical-physical model and how it would behave, as when one considers (P1)–(P7), is not the same as manipulating a mathematical-physical model and seeing how it behaves in fact. Reflection and execution are different. Just because one thinks about how a mathematical model would behave, and checks some part of one interpretation of it on a graphing calculator, does not mean that one is not doing a thought experiment. These partial checks merely contribute to the plausibility of the thought experiment, based on conceptual relationships among (P1)–(P6). The heart of the experiment is not these checks, but (P1)–(P6). Moreover, some thought experiments involve models, and others do not. If a thought experimenter thinks about the relationship between A and B in order to understand the relationship between C and D, A and B may constitute a model for C and D. However, if a thought experimenter thinks about the relationship between A and B, or postulates something about A and B in order to learn more about them, no model may be involved. Thus, even if there is a model, one may still have a thought experiment that employs a model of something in order to learn about it. Besides, not all models involve thought experiments, for example, those relying merely on simulation. Likewise, not all thought experiments involve models, for instance if no vehicles (such as A and B) are used to understand something (such as C and D).



A third response to the model objection is that one could distinguish among thought experiments, models, simulations, and re-enactments. If Roy Sorensen is correct,67 this objection confuses indirect thought experiments with models. Just because something is indirect does not mean it is merely a model. Thinking about molecules of a solid, when heat is applied, illustrates a direct thought experiment. Thinking about people trying to hold hands when they are violently jumping up and down, illustrates an indirect thought experiment, showing that the more violent the jumping, the harder it is to stay connected.68 Using this jumping model to understand molecules subjected to heat does not mean there is no thought experiment. Instead, it relies on the analogy between heated molecules and jumping people. Another specific question is “how can this thought experiment help clarify LNT, when it is not needed, if one assumes no full radiation repair?” If the body does not repair all radiation damage, LNT is correct. If the body does repair all radiation damage, T is correct. However, there are several responses to this no-need objection. First, virtually all relevant scientists agree (A2) that all radiation repair takes place within 6 hours of damage, or it is not repaired. Everyone agrees repair can be incomplete. The issue is how extensive the repair is, given only 6 hours to do it. Thus, there is no question-begging about repair, as the no-need objection suggests, but merely accepting standard assumption (A2). Moreover, as discussed, the repair situation is more complex than repair/no-repair options. A second response is that, even if LNT would be true if there were no repair, and T would be true if there were always complete repair, these facts are irrelevant to the thought experiment. It is needed precisely because, apart from what is the case, no complete empirical data about repair exist. Some partial empirical work also suggests that not all radiation repair is complete, and therefore this thought experiment is correct. Kenneth Crump and coauthors showed that, if carcinogenesis by an external agent behaves additively with any already-ongoing process, then under almost any model, the response will be linear at low doses, provided the extra risk is less than the spontaneous or background risk, and provided individual cancers arise from a single cell. 69 Crump’s work thus provides limited empirical support for parts of the thought experiment discussed here. What is interesting is that, if the Crump research is correct, it shows (as assumption (A3) presupposes) that the radiation-effects statistical nature is governed by the extreme tail of the response-distribution. This tail makes any process of discrete events approximately linear at low doses. Even simpler than Crump’s considerations and the earlier use of the relationship (P5) for PR is a quick examination of the Taylor series generated by f at a. If x is the total of non-radiation-induced plus radiation-induced cancers, and if a is the number of non-radiation-induced cancers, then (x–a) is the number of radiation-induced cancers. When x > a, and (x–a) is a very small quantity, all



Even simpler than Crump's considerations, and than the earlier use of relationship (P5) for PR, is a quick examination of the Taylor series generated by f at a. If x is the total of non-radiation-induced plus radiation-induced cancers, and if a is the number of non-radiation-induced cancers, then (x – a) is the number of radiation-induced cancers. When x > a, and (x – a) is a very small quantity, all non-linear terms of the Taylor series are close to zero, and the function is approximately linear. For the plausibility of this trivial Taylor-series consideration in favor of an LNT radiation hypothesis, one need assume merely that the number of radiation-induced cancers (x – a) is small relative to those arising from other causes.
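To make this consideration concrete, the expansion can be written out explicitly. The following sketch is merely illustrative; it assumes only that the cancer-response function f is smooth enough to be expanded about a, as the argument above presupposes:

\[
f(x) \;=\; f(a) + f'(a)(x-a) + \tfrac{f''(a)}{2!}(x-a)^{2} + \tfrac{f'''(a)}{3!}(x-a)^{3} + \cdots
\;\approx\; f(a) + f'(a)(x-a),
\quad \text{when } (x-a) \text{ is small relative to } a.
\]

That is, once the number of radiation-induced cancers (x – a) is small, every term of second or higher order is negligible, and the excess response is approximately linear in (x – a).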

Conclusion

If this chapter is correct, it suggests that thought experiments, not just observation, can help clarify and develop scientific hypotheses. It also suggests that thought experiments can have great consequences for human welfare. Just ask the many victims of Chernobyl and Fukushima.


CHAPTER 7

Being a Disease Detective
DISCOVERING CAUSES IN EPIDEMIOLOGY

Do high-voltage electricity wires cause leukemia in nearby residents? US National Cancer Institute studies say they do not, while Danish, French, and UK studies say they do. US scientists claim that, although the risk of childhood acute lymphoblastic leukemia is associated with in-home magnetic-field measurements, living near the magnetic fields associated with high-voltage lines does not increase risk.1 However, recent studies by European scientists point to massive epidemiological evidence showing a consistent association between childhood leukemia and exposure to power-frequency magnetic fields from high-voltage wires.2

Does bovine-growth hormone cause health harm? Top US regulatory scientists say it does not, while regulatory scientists in 32 other nations say it does. In fact, the United States is the only developed nation to allow hormone injections of cattle. Along with several third-world countries, the United States has allowed the hormone since 1993, noting that if cattle receive Monsanto's genetically engineered hormones, the cows grow faster, reach maturity earlier, increase milk production 16–40 percent,3 and thus increase owner profits. Roughly 80 percent of US feedlot cattle are injected with hormones.4 However, Australia, Canada, Israel, Japan, New Zealand, and the 27 European Union countries all prohibit the hormones. They say hormones increase consumers' risks of reproductive and hormonal abnormalities, like diabetes, along with vaginal, prostate, breast, and pancreatic cancer.

Who is right about high-voltage wires and hormones? Do they really cause health harms?5 Chapter 5 showed that one way to answer such questions is to use analogy and inductive inferences about similar effects in animals. Chapter 6 illustrated how to use thought-experiment insights to develop hypotheses. This chapter shows that another way scientists discover hypotheses is by using informal rules of thumb that seem to have worked in the past.


Chapter Overview

One prominent rule of thumb, used in hypothesis-discovery, is the relative-risk rule. Relative risk is defined as the incidence of harm, like cancer, in a population exposed to some factor, divided by the incidence in a non-exposed population, for example, one not exposed to high-voltage wires. According to the rule, an agent like bovine-growth hormone can be causally hypothesized as implicated in harm only when the relative risk = 2 or more. Thus, rule proponents say that unless an agent at least doubles one's risk, the factor ought not to be hypothesized as a cause of risk/harm. Epidemiologists might say they need the rule because, otherwise, variable data make them uncertain of a true adverse effect. Court or government representatives might say they need the rule in order to fit legal standards of proof of harm.

Are rule advocates right? Does their inference rule help clarify hypotheses about causality in cases like high-voltage wires and hormones? This chapter answers these questions in 4 steps. It defines methodological rules, including rules of thumb like the relative-risk rule, and it shows how scientific conflicts—like those over bovine-growth hormone—often arise because of different presuppositions about rules of thumb for hypothesis-discovery. Although the chapter outlines the rationale for widespread acceptance of the relative-risk rule, it argues that the rule errs on epistemic, ethical, and practical grounds. The moral: careful philosophy of science—in this case, evaluating hypothesis-discovery through rules of thumb for causal hypotheses—helps improve science and saves lives through better regulation.
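To make the rule's arithmetic concrete, consider the following minimal sketch in Python; the counts and the helper name relative_risk are invented purely for illustration and come from no actual study.

def relative_risk(cases_exposed, n_exposed, cases_unexposed, n_unexposed):
    """Incidence among the exposed divided by incidence among the non-exposed."""
    incidence_exposed = cases_exposed / n_exposed
    incidence_unexposed = cases_unexposed / n_unexposed
    return incidence_exposed / incidence_unexposed

# Hypothetical counts, invented for illustration.
rr = relative_risk(cases_exposed=13, n_exposed=10_000,
                   cases_unexposed=10, n_unexposed=10_000)
print(f"relative risk = {rr:.2f}")                 # 1.30, i.e., a 30-percent increase
# The relative-risk rule would refuse to hypothesize causation here,
# because the exposure does not at least double the risk (1 < rr < 2).
print("rule permits causal hypothesis:", rr >= 2)  # False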

Hypothesis-Discovery and Data

As the high-voltage-wires and hormone cases illustrate, one of the most troubling scientific questions is how to develop and assess causal hypotheses. Many biologists disagree, for instance, over whether or not something is a risk factor for a given biological population and how best to explain that risk. They disagree on the "whether" question, for example, when they accept,6 or reject,7 the hypothesis that human-population density, not the vector-species' niche, is more important in determining malaria risk. They disagree on the "how" question, for instance, when they accept,8 or reject,9 the hypothesis that the Panther Habitat Evaluation Model, premised on requiring forest-habitat patches larger than 500 hectares, is a good predictor of Florida panthers' extinction risk.



Of course, many causal hypotheses in science can be developed by greater attention to case-specific empirical details. For instance, chapter 10 shows that, in the case of the Florida panther, examining nocturnal,10 not merely daytime,11 panther habitats has helped develop hypotheses about what is causing increased Florida-panther-extinction risk. In the case of malarial infection, examining both average age in mosquito populations and larval-habitat distribution, not merely human-population density, has helped scientists develop hypotheses about increased malaria risk.12

Other hypothesis-discovery conflicts arise when scientists agree about relevant empirical data, but disagree about the methodological rules m that should be used in developing hypotheses. What are these methodological rules, and why do scientists often disagree about them? How do they affect hypothesis development?

Methodological Rules

According to one prominent account, methodological rules in science dictate means to cognitive ends, as in the following rule m: "If you want theories likely to stand up successfully to subsequent testing, then prefer theories that have successfully made surprising predictions, in addition to explaining what is already known, over those that explain only what is already known." On Larry Laudan's basic account, methodological rules m have this form: if one's scientific goal is to achieve g, in an empirical world with characteristics c, one ought to follow methodological rule m.13

Why are many scientific controversies also often conflicts over methodological rules? Given equivalent data, but different methodological rules for discovering hypothetical causes in those data, scientists may hypothesize different causes. For instance, when epidemiologists Wynder and Harris assessed the association between breast cancer and alcohol consumption,14 they used a methodological rule of thumb that might be called the relative-risk rule. As already noted, according to the rule, hypothesizing that some factor has caused harm requires evidence that relative risk = at least 2. If relative risk = 1, the null hypothesis is the case, and the supposed agent is not a hypothetical cause of the phenomenon. If relative risk < 1, the phenomenon is less likely to occur in the experimental/exposed group than in the control group, and the hypothetical causal agent may diminish some effect. If relative risk > 1, the event is more likely to occur in the experimental/exposed group than in the control group, and the agent is a hypothetical cause. Higher relative risks indicate stronger statistical associations between hypothetical causes and effects, as when pack-a-day smokers, compared to nonsmokers, have relative risk = 10 for developing lung cancer.15

Requiring the relative-risk rule, Wynder and Harris denied that moderate alcohol consumption is an important hypothetical risk factor for breast cancer because, for their defined levels of alcohol consumption, 1 < relative risk < 2.16 Yet Hiatt rejected the rule and thus hypothesized the alcohol-breast-cancer association.17 Instead of the relative-risk rule, Hiatt used another methodological rule,



the external-consistency rule, according to which one can hypothesize a causal association if other studies replicate the association. As a consequence, Hiatt hypothesized a small, detectable, increased breast-cancer risk associated with alcohol consumption. In other words, he hypothesized that moderate alcohol consumption is a causally important risk factor for breast cancer, although he agreed with Wynder and Harris that for alcohol consumption, 1 < relative risk < 2. Thus he suggested that women with breast-cancer risk factors limit alcohol consumption.18 Which scientists seem more correct in their hypotheses and about the methodological rule of thumb about relative risk? If the preceding account of methodological rules is correct, perhaps the answer depends partly on each scientist’s goals g, and on different characteristics c of the empirical world. Thus, the Wynder-Harris-Hiatt conflict over m might be explained by their differing g and c. When Wynder and Harris required the relative-risk rule and therefore rejected the hypothesis of alcohol as a risk factor for breast cancer, their g might have been “to discover only major risk factors for breast cancer, those with very strong associations (very high relative risk) with disease,” and their postulated c might have been “only a few empirical factors are responsible for increased incidence of breast cancer.” Similarly, like Wynder and Harris, epidemiologists might accept a c such as, when relative risk is very low, less than 2, such variable data make it unlikely that there is actually an adverse causal effect. Court or government representatives might have a slightly different c: that legal standards of proof for causal effects require large relative risks. Given these g and c, a reasonable m could have been “count only relative risk = at least 2 as evidence for hypotheses about empirical factors that are causally associated with breast cancer.” However, when Hiatt rejected the relative-risk rule and accepted alcohol as a hypothetical risk factor for breast cancer,19 his g might have been “to discover even very small risk factors (low relative risks) for breast cancer,” and his c might have been that “many different empirical factors each contribute slightly to increased incidence of breast cancer.” Given this g and c, a reasonable m could have been “count even small relative risks (1 < relative risk < 2) as important for hypothesis-discovery, if they have been repeatedly replicated.” Why does the preceding account of the Wynder-Harris-Hiatt conflict seem plausible? Although the m in question, the relative-risk rule, requires relative risk = at least 2 before proposing a causal hypothesis about risk, this m does not specify either the g or c on which requiring or not requiring the relative-risk rule might be conditional. This gap (in the formulation of m) suggests at least 2 hypotheses whose investigation might clarify both m and scientific disagreements over causal discoveries. These hypotheses are that by presupposing different g or c—m can be more or less appropriate, and by using heuristic analysis to discover implicit g or c and to make them explicit, philosophers of science might help clarify methodological rules m. That is, they might help clarify disagreement over 1 type of



m, those governing causal hypotheses. For instance, when g and c are not made explicit, one easily can think of some g (e.g., avoiding false positives) and c (e.g., the relevant risk is not seriously harmful) for which requiring the relative-risk rule might be reasonable for hypothesis-discovery and development. Likewise, one easily can think of some g (e.g., avoiding false negatives) and c (e.g., the relevant risk is catastrophic) for which requiring this rule might not be reasonable. In the absence of case-specific information about g and c, however, should one use the relative-risk rule for hypothesis-discovery? To answer this question, consider what can be said for the rule.

The Relative-Risk Rule for Discovering Causal Hypotheses

Some authors require the relative-risk rule because, although they use no explicit goal language, their g is to avoid postulating causes on the basis of apparently weak associations, perhaps like those between mammography and breast cancer.20 They argue that although scientists might report relative risks > 1, these relative risks may be illusory, because the margin of sampling error might include RR = 1. They also say that the benefits of requiring the relative-risk rule, for hypothesizing causes, are the transparency of its rationale and its limiting the latitude of judgments that experts can use to suggest causal effects.

The widespread requirement of methodological rules m for causal hypotheses about population risk, rules at least as strong as the relative-risk rule, was apparent more than a decade ago, when Science editors interviewed 20 top epidemiologists and biologists, for example, Philip Cole, Richard Doll, Alvin Feinstein, Joe Fraumeni, Sander Greenland, and others.21 Virtually all those interviewed, except for John Bailar, said they required relative risk = 2, 3, 4, or more before they were willing to make causal hypotheses about risk. "As a general rule of thumb," Marcia Angell (then coeditor with Jerome Kassirer) said, the New England Journal of Medicine would publish epidemiology articles only when relative risk = 3 or higher.22 Robert Temple, evaluation director for the US Food and Drug Administration, likewise claimed his agency wanted relative risk = 3, 4, or more for causal hypotheses.23 Presenting a list of 25 alleged causal associations, for example, between vasectomy and prostate cancer, for most of which 1 < relative risk < 2, Science authors said that, because most of these hypothetical causal associations had not been replicated, requiring the relative-risk rule was needed to achieve a particular g (although they did not explicitly speak of goals), avoiding flawed or false-positive causal hypotheses.24 Consistent with epidemiological/biological support for requiring m at least as strong as the relative-risk rule, half of the US courts that discuss relative risk require the relative-risk rule for causal hypotheses in toxic-tort cases.25



Hence the scientific question about what rules of thumb to use in developing hypotheses is very practical. If the relative-risk rule is used, many genuine victims of harm might go undetected and hence be harmed further. If the rule is not used, people might be thought to be victims when they are not.
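Back-of-the-envelope arithmetic shows how much is at stake. The numbers below are invented for illustration only; they are not estimates for any actual exposure.

# How many extra cases a "small" relative risk implies for a large exposed population.
baseline_risk = 0.001             # assumed lifetime risk of the harm without exposure
relative_risk = 1.3               # a 30-percent increase, well under the rule's cutoff of 2
exposed_population = 100_000_000  # assumed number of exposed people

excess_cases = baseline_risk * (relative_risk - 1) * exposed_population
print(f"expected excess cases: {excess_cases:,.0f}")  # 30,000

Even a relative risk far below 2 can thus translate into tens of thousands of additional victims, which is why requiring or not requiring the rule matters practically.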

Scientific Reasons Not to Require the Rule

Are these many scientists correct to require methodological rules like the relative-risk rule? As already mentioned, requiring the relative-risk rule is ultima facie reasonable given g such as avoiding false positives, and c such as knowing only trivial risks are involved.26 However, heuristic analysis reveals that at least 11 considerations—6 rational, 3 ethical, and 2 practical—suggest scientists should be wary of prima facie requiring the relative-risk rule as an m for causal hypothesis-discovery.

On the rational side, those who require the rule appear to confuse 2 different things: evidence for causation and frequency of adverse effects. While relative risk measures only the latter, many proponents who require the relative-risk rule27 do so on the grounds that it moves toward causation. Yet, although establishing causation is a worthy goal, causation is not precisely tied to relative risk. Why not? In confusing causation and frequency of adverse effects, proponents who require the relative-risk rule forget that the absence of evidence for causation is compatible with a high frequency of adverse effects, like relative risk > 5, while massive evidence for causation is compatible with a low frequency of adverse effects, like 1 < relative risk < 2.

A second consideration against requiring the relative-risk rule for causal hypotheses is that, because any relative risk > 1 can supply evidence relevant to causation, those who require the relative-risk rule specify an arbitrary cut-off for when evidence is sufficient. Just as the next chapter shows that there are no clear, non-arbitrary grounds for choosing a particular p value in statistics, there also are no clear grounds for requiring relative risk = 2, 3, 4, or more, despite the preceding Science discussion. This lack of justification is especially troubling because higher relative risks indicate greater frequency of, not greater causal evidence for, adverse effects. This absence of clear grounds for hypothesis-formation may be one reason many scientists argue for alternative ways of calculating relative risk, but refrain from making recommendations about what relative risk is needed.28

A third problem is that requiring the relative-risk rule is inconsistent with current scientific findings. Radiation biologists have long known that, of the roughly 20 radiation-induced cancers (like those of the bone, esophagus, stomach, colon, lung, and lymph nodes), all except 4 (leukemia, multiple myeloma, urinary-tract, and colon cancer) have small relative risks (1 < relative risk < 2). Yet scientists accept radiation as one of the factors able to induce these cancers.29 If radiation biologists had required the relative-risk rule before making causal hypotheses about



radiation cancers, they would have missed discovering an important carcinogen and thus indirectly erroneously encouraged weaker regulatory standards for ionizing radiation. A fourth point is that requiring the relative-risk rule for hypothesis-formation may be unnecessary, if one’s goal is to avoid false hypotheses about causal links between some factor and resulting effects. Because scientists require research results to be replicated before they are accepted, this m (replication), rather than requiring the relative-risk rule before hypothesizing causes, could help avoid false positives and ensure better confirmation of results. A fifth reason, to avoid requiring the relative-risk rule for hypothesis-formation, is premised on the observation that sciences such as vector biology and epidemiology often involve more initial conditions and auxiliary hypotheses than do most physical sciences. 30 For instance, one auxiliary hypothesis of epidemiologists might be that avoiding some risk is necessary to protect public health. These initial conditions/hypotheses complicate studies about population risks, requiring scientists to assess not only epistemic concerns, but also possible harms/benefits. This need to assess welfare consequences provides population-risk studies with prima-facie reasons for avoiding m like the relative-risk rule, that are more applicable to basic science. 31 That is, facing statistical uncertainty, scientists studying welfare-related population risks must “weigh carefully the value of what is to be won or lost, against the odds of winning or losing. . . If a lot might be lost if we are wrong, we want higher probabilities that we are right before we act.”32 Thus, as the next chapter also emphasizes, although Neyman-Pearson suggests that minimizing false positives—false causal hypotheses, for instance—is a more important g for pure scientists, 33 this is not the case in practical science. Whenever scientists studying population risks have g, like protecting human welfare, that require them to minimize false negatives (false hypotheses that something is not harmful), the relative-risk rule need not be required for hypothesis-formation. 34 A final rational consideration against prima facie requiring the relative-risk rule, as a hypothesis-formation rule m in welfare-affecting research, is that doing so appears contrary to scientific-demarcation criteria often used in practical sciences like conservation biology and epidemiology. Although researchers in such sciences realize that they bear the burden of proof for inferring risk, 35 their disciplinary demarcation criteria often are not purely factual. For instance, they partly demarcate their discipline by saying (i) it focuses on minimizing harm, 36 not merely falsehoods. (ii) It focuses on hypothesizing causal inferences that require merely a preponderance of evidence, 37 not confirmation beyond a reasonable doubt. 38 Using such demarcation criteria, however, argues against the relative-risk rule because it is less likely, than requiring m such as replication and relative risk > 1, to meet (i), as radiation-induced cancer illustrates. Requiring the relative-risk rule also is stricter than (ii) and hence requires too much.



Ethical and Practical Reasons against the Rule

On the ethics side, requiring the relative-risk rule, rather than m like replication and relative risk > 1, for causal hypotheses would allow greater imposition of population risks, because fewer risks could be discovered. Moreover, requiring the relative-risk rule falsely suggests that risk impositions in which 1 < relative risk < 2 are ethically acceptable. For large populations, 1 < relative risk < 2 could cause many additional deaths, as from alcohol-induced breast cancer, or cancer from US nuclear-weapons testing, estimated to have caused from hundreds of thousands39 to a million additional cancers.40 Requiring the relative-risk rule, thus ignoring risks when 1 < relative risk < 2, is like allowing people to play Russian roulette provided it does not double their chances of death. People do not avoid only risks that increase their chances of death by 100 percent; most would also reject a 90-percent, or even a 30-percent, increase. If so, scientists ought to use less demanding m, not the relative-risk rule, for hypothesizing about causes of social-welfare-related risks.

Another ethics worry is that requiring the relative-risk rule, based on average relative risk, would not protect sensitive subpopulations who could be seriously harmed by agents such as cell-phone radiation, even if 1 < relative risk < 2.41 As mentioned, most radiation-induced cancers do not satisfy the relative-risk rule. Yet for identical exposures, all other things being equal, radiation-induced cancers are 50 percent more likely in women than in men, and up to 38 times more likely in infants and children than in adults.42 Therefore, m that put weaker constraints (than the relative-risk rule) on causal hypotheses about harm seem needed to protect vulnerable groups.

A third ethics worry focuses on rights to equal protection. All other things being equal, people harmed by risks for which 1 < relative risk < 2 do not suffer less harm simply because the set of those harmed is smaller than the set harmed by agents whose relative risk > 2. If so, requiring the relative-risk rule for causal hypotheses is ethically questionable on human-rights grounds.43

On the practical side, a prima facie consideration is that weaker hypothesis-discovery requirements for m, like replication and relative risk > 1, may be needed to counterbalance pressures from special interests. When biologists hypothesize exposure-harm associations, such as lung cancer from tobacco, or species extinction from habitat development, chapters 1–3 showed that special interests often subject them to professional defamation and harassment.44 These chapters revealed that, because special interests try to discredit scientific hypotheses that could harm their profits, they do biased special-interest science.45 Requiring the relative-risk rule makes it easier for special interests to use special-interest science and to deny harms they cause. Why? Consider a case mentioned earlier. When the US Food and Drug Administration (FDA) approved bovine-growth hormone, it did so mainly on the basis of an unpublished 90-day study on rats



by Monsanto, the main company that would profit from allowing hormones. Yet, both Monsanto and the US FDA have refused to release this study, despite repeated Freedom-of-Information-Act requests. Monsanto claims the release would cause the company financial harm. However, when Monsanto submitted the same unpublished study to Health Canada, the Canadian equivalent of the US FDA, as part of Monsanto’s unsuccessful attempt to gain Canadian approval of bovine-growth hormone, Health Canada scientists said it did not show hormone safety and that the US FDA had misled the public about the study. FDA scientists officially said the Monsanto study showed “no toxicologically significant changes” in rats given bovine-growth hormone; it also said that, contrary to the 2010 decision of the US Sixth Circuit Court of Appeals, milk and meat from hormone-injected cattle were no different than that from cows without hormones.46 Based mainly on the preceding false claims, the FDA did not require standard human-health reviews of bovine-growth hormone, toxicological-safety assessments typically required for drugs. In reality, however, Health Canada scientists said the Monsanto study showed 20–30 percent of high-dose rats developed primary antibody responses to the artificial hormone, increased infiltration of the prostate gland, and cysts on their thyroids—all toxicologically significant changes that the FDA never mentioned. As the Canadian-government scientists put it, in the Monsanto data on hormones, “both procedural and data gaps . . . fail to properly address the human safety requirements of this drug.”47 Why were the Canadian and US regulatory scientists’ responses so different? One reason could be Monsanto’s financial conflicts of interest, causing its presentation of biased, unpublished hormone studies and its reported pressure on government scientists. Another reason could be that US and Canadian scientists used the relative-risk rule differently, a fact that suggests use of the rule could worsen harmful effects of special-interest science. Consider the relative risk of bovine-growth hormone, as reported by the Canadian scientists. All such risks were 1< relative risk <2. Relative risk for cystic ovaries = 1.22, a 22-percent increase in bovine-growth hormone cows, and relative risk for pus in the milk = 1.27, a 27-percent increase in bovinegrowth hormone cows. On one hand, US-government scientists appear to have used the relative-risk rule, claimed no harm from the hormones, therefore asserted no need for testing because the reported relative risk of the hormones was between 1 and 2, less than what the relative-risk rule requires. On the other hand, Canadian-government scientists appear to have rejected the relative-risk rule but, because of confirmed increased harms from bovinegrowth hormone, like the 27-percent-pus increase, they concluded the hormone was not obviously safe and therefore needed further testing. 48



Besides the way that the bovine-growth-hormone case argues against the relative-risk rule, other c argue against requiring it, and for requiring more welfare-protective m that might help counterbalance special-interest science. For instance, as noted in earlier chapters, the American Association for the Advancement of Science confirms that three-fourths of all US science is funded by special interests, often anti-regulatory interests, 49 resulting in many false-negative biases about product/pollution harms. 50 For instance, the Science Advisory Board of the US Environmental Protection Agency (EPA) found false-negative biases in all pesticide-industry studies of chemical risks submitted to EPA for regulatory use, because they all used too-small sample sizes, mostly under 50. Thus, not requiring the relative-risk rule might help counterbalance these false-negative biases. Why? Consider again the hormone example. As noted, when US FDA approved bovine-growth hormone, it relied mainly on Monsanto’s 90-day unpublished study of rats, among Monsanto’s 26 unpublished study reports. 51 Yet, these studies clearly have false-negative biases. One reason is small sample size, not having enough statistical power to detect harmful effects. As the Canadian-government scientists put it: “Given the small numbers of animals,” studies “had virtually no power to detect differences within each of these categories, so a lack of significant findings was not surprising.” The studies were “biased towards the null (no effect)” conclusion, 52 in part because most had small or moderate sample sizes (less than 100 cows) [and] . . . had insufficient power to detect either beneficial or harmful health effects associated with the use of the drug. . . . Much larger sample sizes are required to detect drug effects.53 A second reason for the hormones’ false-negative bias, also corroborated by the Canadians, is conflicts of interest. They say that because Monsanto authors were looking for “lack of significance of a health effect resulting from . . . rBST [bovinegrowth hormone],”54 there was evidence of pre-existing bias. The Monsanto authors likewise were biased in using only too-short, 90-day studies, although some cancers can take 60 years to appear. As the Canadian-government scientists likewise reported, the Monsanto studies were biased because they were not blinded. Monsanto scientists knew which subjects received hormones and which did not. Finally, because Monsanto did not conduct studies on commercial herds of cattle, its testing likely revealed fewer bovine-growth hormone harms because cattle faced less stress, crowding, and disease than occur on factory farms. 55 Requiring the relative-risk rule, amid special-interest science, likely causes questionable findings.
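The Canadian scientists' point about statistical power can be illustrated with a rough calculation. The sketch below uses a simple normal approximation for a two-group comparison; the effect size and sample sizes are assumptions chosen only to show the pattern, not figures from the Monsanto studies.

from scipy.stats import norm

# Approximate power of a two-sample comparison at the 0.05 level,
# for an assumed true effect of 0.3 standard deviations.
effect_size = 0.3
alpha = 0.05
z_crit = norm.ppf(1 - alpha / 2)

for n_per_group in (25, 50, 100, 500):
    std_error = (2.0 / n_per_group) ** 0.5       # SE of the difference in means
    power = 1 - norm.cdf(z_crit - effect_size / std_error)
    print(f"n per group = {n_per_group:4d}   approximate power = {power:.2f}")
# With 25-50 subjects per group, power falls far below the conventional 0.80,
# so a "lack of significant findings" says little about whether harm exists.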



Conclusion

Requiring the relative-risk rule for hypothesis-discovery/formation not only contradicts many established causal relationships, like those for radiation-induced cancer. It also worsens already-existing harms from special-interest science, like those from bovine-growth hormone. The relative-risk rule thus imposes too-strict requirements on scientific discovery, failing to take account of such practical complexities.

In fact, scientists who impose too-strict requirements on hypothesis-discovery or development can thwart scientific progress, as the case of mathematical genius Evariste Galois shows. Without Galois's foundational work in abstract algebra, including group theory and Galois theory, scientists today could not analyze central mathematical relationships in human-gene chemistry and computer circuitry. Yet because mathematicians of his day imposed too-strict, simplistic requirements on Galois's hypothesis-discovery/development, they rejected Galois's work; even the French Academy, including the famous probability theorist Simeon Poisson, rejected his discoveries. As a result, their importance was not recognized until long after his death at age 20. Only two decades after Galois died in a duel did a French mathematical journal publish his discoveries, and its editor observed that those who had rejected them had not understood them. The same may be true of other discoveries.56 The case suggests that scientists should be wary of putting too-strict, simplistic requirements on hypothesis-discovery/development.


CHAPTER 8

Why Statistics Is Slippery
EASY ALGORITHMS FAIL IN BIOLOGY

Organophosphate pesticides, responsible for about 40 percent of the global insecticide market, cause tens of thousands of unintended human casualties annually. Developed after World War I, these pesticides are a by-product of German nerve-gas development. They differ from warfare gases only in their doses. Both organophosphate pesticides and nerve gas are used to kill living things, whether insects or humans, by means of the same biochemical mechanisms.

Jewish chemist Fritz Haber began organophosphate development because he was eager to ingratiate himself with German leaders who despised Jews. In fact, in 1915 Haber himself supervised the first German toxic-gas attacks on Allied forces. After Germany lost World War I, however, Haber feared being named a war criminal because of his role as the father of chemical warfare. Wearing a fake beard, he escaped to Switzerland. Once his pre-war discoveries earned him the 1918 Nobel Prize in Chemistry, Haber felt safe enough to return to Germany and to his chemical-warfare and pesticide research. However, neither Haber's Nobel nor his pesticide work protected his relatives and millions of other Jews. Nazis killed them in death camps—using the very chemical-warfare agents whose development Haber had begun. Indirectly, Haber likewise caused the deaths of his wife Clara, also a PhD chemist, and their son Hermann, both of whom committed suicide. Ashamed, they said he had perverted science through his chemical-warfare research.1

Haber's case shows that 1930s Nazi researchers and their enemies clearly understood the causal-neurotoxic effects of the organophosphate pesticides and nerve gas used in the death camps. Today, however, many chemical manufacturers claim the causal-carcinogenic effects of these pesticides are controversial. For instance, after Pennsylvania pesticide applicator Robert Pritchard applied Dow Chemical's organophosphate chlorpyrifos (trade name Dursban) nearly daily to buildings and lawns from 1982 to 2000, in 2005 he was diagnosed with cancer, non-Hodgkin's lymphoma.




Partly because earlier US government studies of pesticide applicators like Pritchard showed chlorpyrifos causes cancer,2 Pritchard and his wife sued Dow for causing his cancer. In 2011, however, the Third Circuit, and earlier the Federal District Court in Pittsburgh, threw out the testimony of Pritchard's medical experts, who said that Dursban had caused his cancer. Why did the courts reject the testimony of Pritchard's physicians and pathologists? They claimed "the plaintiffs [Pritchard and his wife] have no evidence of [cancer] causation" by Dursban because the experts had not "presented any statistically significant evidence showing an association" between Dursban and cancer. Yet, besides the increased cancers among Dursban applicators, scientists had shown for several decades that California farmers using Dursban had higher risks of many health problems, including lymphoma. Dursban risks are so great that in 2000, 5 years before Pritchard's cancer was diagnosed, the US Environmental Protection Agency phased out all of Dursban's residential uses and allowed it only commercially, in industries like those that employed Pritchard.3

Pritchard, however, was not the only victim of courts that demand statistical-significance tests before postulating hypotheses about causality. As already noted, in toxic-tort cases, hundreds of victims have been denied their day in court because of many US judges' and scientists' assumption that statistical significance is necessary to hypothesize causal harm from agents like toxic chemicals.4 Are they right?

Chapter Overview

Most scientists probably agree with the court that statistical significance is often necessary to hypothesize causality in statistics-relevant areas of science. They use statistical significance as a methodological rule, just as the previous chapter showed that many scientists erroneously use the relative-risk rule for hypothesis-discovery. However, this chapter shows that the statistical-significance rule is not a legitimate requirement for discovering causal hypotheses. After examining reasons for holding the dominant statistical-significance view, the chapter presents 3 arguments against it. It concludes that unless scientists stop treating the rule as necessary for hypothesis-discovery, the rule will likely lead to false causal claims, questionable scientific theories, and massive harm to innocent victims like Robert Pritchard.

Background

Whether they are molecular geneticists examining DNA microarrays to estimate gene-expression changes,5 wildlife biologists looking for differences in animal-survival distributions,6 or epidemiologists analyzing the characteristics



of various diseases,7 many scientists say statistical-significance tests are necessary for hypothesis-discovery. They say they want to avoid results being due merely to chance, and that one prominent way of doing so is by null-hypothesis testing and use of the statistical-significance rule. It requires that, in statistically relevant areas of science, researchers should reject the null or no-effect hypothesis only if there is statistically significant evidence for some effect, p < or = 0.05. The statistical-significance level p is defined as the probability of the observed data, given that the null hypothesis is true.8 However, typically scientists view statistical significance as a measure of how confidently one might reject the null hypothesis. Traditionally they have used a 0.05 statistical-significance level, p < or = 0.05, and have viewed the probability of a false-positive (incorrectly rejecting a true null hypothesis), or type-1, error as 5 percent. Thus they assume that some finding is statistically significant and provides grounds for rejecting the null if it has at least a 95-percent probability of not being due to chance. However, as David Heath noted in his classic volume on biological statistics, statistically significant effects need not be biologically significant.9 If one takes large enough samples, one probably could find statistically significant differences almost anywhere, but this would not mean the differences were scientifically meaningful and the null could be rejected. If Heath is right, and if the rule is not always useful in cases when it is separated from underlying scientific theory, the statistical-significance rule is not required for hypothesis-discovery. Yet most scientists and many judges appear to think so. A recent survey of articles in Conservation Biology and Biological Conservation revealed that roughly 80 percent of articles used the statistical-significance rule and reported results of null-hypothesis tests.10 The same survey showed that 63 percent of articles assumed that statistical nonsignificance is evidence for no effect, and that the rule is necessary for hypothesizing some effect.11 Likewise, in 2002 a leading epidemiologist, editor of the 2 main journals in the field, said use of the rule is viewed as necessary for inferences and “nearly ubiquitous” in epidemiological studies.12 Particularly in relatively newer fields of biology, like ecology, that are seeking to establish their methodological rigor, the statistical-significance rule often is viewed as necessary for hypothesis-postulation. Prominent ecologists Dan Simberloff and Ed Connor, for instance, embrace null-hypothesis testing and the rule as ways to overcome phenomenological or corroborative methods and promote rigorous, falsificationist accounts of biological explanation, although determining causes is notoriously difficult.13
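Heath's point is easy to demonstrate. In the sketch below, the two simulated populations differ by a biologically trivial amount (an assumed 0.03 standard deviations); the difference is nonetheless very likely to come out "statistically significant" once the samples are large enough. The simulation settings are invented for illustration.

import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
trivial_difference = 0.03     # assumed true difference, in standard-deviation units

for n in (50, 500, 50_000):
    a = rng.normal(loc=0.0, scale=1.0, size=n)
    b = rng.normal(loc=trivial_difference, scale=1.0, size=n)
    statistic, p_value = ttest_ind(a, b)
    print(f"n per group = {n:6d}   p = {p_value:.4f}   'significant' = {p_value <= 0.05}")
# At n = 50_000 per group the trivial difference almost always crosses the
# 0.05 threshold, even though nothing biologically meaningful has changed.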

The Dominant Position on Statistical Significance

As already noted, many scientists require the statistical-significance rule for hypotheses about causal claims and for rejecting the null. Thus, when scientists



used the rule to examine the effectiveness of St. John’s Wort in relieving depression,14 or when they employed it to examine the efficacy of flutamide to treat prostate cancer,15 they concluded the treatments were ineffective because they were not statistically significant at the 0.05 level. Only at p < or = 0.14 were the results statistically significant. They had an 86-percent chance of not being due to chance.16 Likewise, in their 20-year analysis of American Journal of Epidemiology articles, D. A. Savitz and his coauthors concluded that statistical-significance testing has grown markedly since 1970, and that presentation of significance-test results and confidence intervals now “is dominant.”17 Prominent epidemiologist Kenneth Rothman likewise confirms that most scientists equate “a lack of significance [at the 0.05 level] with a lack of effect and the presence of significance with ‘proof ’ of an effect.”18 As the Pritchard case illustrates, even courts often require toxic-tort plaintiffs to provide evidence satisfying the statistical-significance rule before hypothesizing that an agent harmed them.19 Most scientific journals and associations, for instance, the American Psychological Association, likewise recommend using the rule before postulating some hypothesis.20 They routinely view meeting it as necessary for legitimate statistical results.21 Even researchers who emphasize clinical, not statistical, significance often claim that fulfilling the rule is necessary for hypothesizing clinically significant results.22

The Knowledge-Conditions Argument

Although many researchers believe that meeting the statistical-significance rule is helpful in avoiding results due to chance, in hypothesizing causal effects, and in rejecting the null, they appear wrong. Why? One reason is the knowledge-conditions argument: one should not require the rule for hypothesis-discovery and development because science rarely meets several conditions necessary for its use. These conditions include having a non-arbitrary rationale for choosing p < or = 0.05, and knowing that relevant causal effects are not exhibited in complex, interactive ways and do not depend on systematic errors. Consider each of these conditions for the rule's reliability.

Arbitrariness

The arbitrariness problem is that, because many p values other than 0.05 often are consistent with observed effects, and because choosing any statistical-significance rule involves more than a purely mathematical exercise, requiring the statistical-significance rule relies partly on subjective value judgments about when data are sufficient for hypothesizing some effect.23 These subjective judgments are



especially obvious in the case of choosing p-values. Why should p < or = 0.05, or p < or = 0.10 count as the test for reliable inferences?24 When renowned statistician Ronald Fisher initially recommended this 0.05 significance level, he noted that measures of 1.96 standard deviations on either side of the mean of a Gaussian curve would include 95 percent of the data. As a result, he concluded that the upper 5 percent of the data were significant in their divergence. Fisher’s observation, however, has turned into a dogma, something Fisher did not intend. He later wrote that this fixed significance level was “absurdly academic,” that the level should be flexible, based on the circumstances. To evaluate hypotheses, Fisher also recommended presentation of observed p values.25 Requiring the statistical-significance rule for hypothesis-development also is arbitrary in presupposing a nonsensical distinction between a significant finding if p = 0.049, but a nonsignificant finding if p = 0. 051.26 Besides, even when one uses a 90-percent (p < or = 0.10), an 85-percent (p < or = 0.15), or some other confidence level, it still may not include the null point. If not, these other p values also show the data are consistent with an effect. Statistical-significance proponents thus forget that both confidence levels and p values are measures of consistency between the data and the null hypothesis, not measures of the probability that the null is true. When results do not satisfy the rule, this means merely that the null cannot be rejected, not that the null is true. Yet because statistical-significance proponents often erroneously interpret the null as false or true, “according to whether the statistical test . . . is or is not statistically significant,”27 they often commit an appeal to ignorance. They confuse the absence of evidence for an effect with evidence for the absence of an effect. 28 Illustrating this logical fallacy, discussed earlier in chapters 2 and 3, epidemiologists studying the effects of flutamide exposure concluded that their large p value (0.08) instead showed there was no exposure-disease relationship. This conclusion obviously is questionable, given data showing that exposure could more than triple one’s risk of disease, 29 and given an arbitrarily chosen p value.
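The arbitrariness is easy to see numerically. In the sketch below, a single hypothetical estimate (an assumed log relative risk of 0.35 with standard error 0.20, giving p of about 0.08) is judged "significant" or "not significant" purely according to which conventional threshold one happens to adopt, and its 90- and 85-percent confidence intervals exclude the null value even though the 95-percent interval does not. All numbers are invented for illustration.

from scipy.stats import norm

estimate, std_err = 0.35, 0.20                     # hypothetical log relative risk and SE
p_value = 2 * (1 - norm.cdf(estimate / std_err))   # two-sided p, about 0.08

for alpha in (0.05, 0.10, 0.15):
    z = norm.ppf(1 - alpha / 2)
    low, high = estimate - z * std_err, estimate + z * std_err
    print(f"alpha = {alpha:.2f}   'significant' = {p_value <= alpha}   "
          f"CI = ({low:+.2f}, {high:+.2f})   excludes null = {low > 0}")
# The data never change; only the conventional cutoff does.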

Ignoring Systematic Errors and Complex Effects

Statistical-significance proponents also tend to forget that statistical data often are very sensitive to different theoretical assumptions and methods of computation.30 Because the rule is sensitive to systematic errors, biases, and confounders that can overwhelm statistical variation,31 interpreting it requires understanding the underlying science and relevant biological interactions. In addition, the preferred mathematical tools that employ the rule—such as the logistic, Poisson, and Cox-regression models—all presuppose linearity. All these tools estimate parameters by quantifying the amount of variation in some effect that should be attributed to one or another independent covariate. Yet different phenomena often



exhibit their effects in additive or multiplicative and not linear ways, 32 because of complex interactions among factors like environmental and genetic effects. Thus, without detailed knowledge of underlying scientific mechanisms and factors like environmental-pollution exposures and gene polymorphisms, it is risky to allow or disallow hypothesizing purely on the basis of statistical significance. This riskiness arises partly from the fact that Fisher and others did not distinguish statistical from scientific hypotheses. Scientific hypotheses derive from general theories about how the real world operates, and they rely on more than statistical support. Scientific hypotheses are global or apply to all of nature, and their truth is usually in question. Statistical hypotheses, however, typically are not based on any theory about the world; are merely postulated statements about properties of populations; and apply only to particular systems. They are usually thought to be a priori false. Because of these differences between scientific and statistical hypotheses, even under the best of conditions, a single statistical test may have little effect on the plausibility of a scientific hypothesis. 33 Given the differences between statistical and scientific hypotheses, requiring statistical significance, before making a causal hypothesis, might mean ignoring abundant scientific evidence, for instance, for causal mechanisms showing a pollution-disease connection merely because the rule was not satisfied. Yet, as David Resnik correctly recognizes, causal hypothesizing involves not only statistics but also eliminating confounding factors, revealing a time-order sequence, and seeking some understanding of underlying scientific mechanisms. 34 Because statistical data are not the only data and rarely meet various theoretical requirements, such as independent observations and no systematic error, “a p value usually cannot be taken as a meaningful probability value.”35 Therefore, meeting the rule is not necessary for hypothesizing about causes.

The Welfare-Affecting Argument

A second reason that statistical significance ought not to be taken as necessary for causal hypothesizing is that, whenever inferences have welfare-affecting and not merely knowledge-related consequences, requiring it could cause harm. In situations of uncertainty, preventing harm—like that to Robert Pritchard—may be more important than ensuring truth. As chapter 14 argues, preventing harm may require minimizing type-II or false-negative errors, not merely the type-I or false-positive errors that the rule minimizes. Although distinguished statisticians Jerzy Neyman, Egon Pearson, and others suggest that controlling type-I errors is more important in science,36 they admit their suggestion applies only to pure science.37 When welfare is at stake, practical science requires different research rules. Facing uncertainty, scientists must "weigh carefully the value of what is to be won or lost, against the odds of winning or losing. . . If a lot might be lost if we are wrong, we



want higher probabilities that we are right before we act”—that is, higher alpha values. 38 How sure scientists need to be, before hypothesizing statistical effects, depends largely on how serious the consequences could be, if they are wrong in their causal hypotheses. 39 This is why scientists—who do practical research having serious welfare-affecting consequences—should minimize false negatives over false positives when both cannot be minimized. They should not ask too much by requiring statistical significance before making causal hypotheses.40 If researchers did require hypothesizing to satisfy statistical significance, for example, before recommending action to allow new pharmaceuticals or to control harms like climate change, great injury could occur. For instance, one review of 71 clinical trials,41 each reporting no statistically significant differences among treatments, showed that the not–statistically significant data were consistent with the drugs’ having moderate, even strong, effects.42 Although premature disclosure of not–statistically significant results could be professionally irresponsible, disclosure may be required as an important early warning of possible, serious, public-health consequences.43 Requiring statistical significance for hypothesizing about causes, in welfare-affecting areas of science, also jeopardizes the integrity of many scientific subdisciplines whose goals are practical as well as cognitive, welfare-protecting as well as truth-seeking.44 For instance, although epidemiologists say their science is largely observational/statistical, and bears the burden of proof for showing harm, they claim it should minimize harm,45 not merely falsehoods.46 Yet requiring the statistical-significance rule for all hypothesizing minimizes mainly falsehoods, not harms.

The Alternative-Methods Argument

A third reason that statistical significance should not be taken as necessary for causal or statistical hypotheses is that there are better strategies for promoting methodological rigor. One general alternative is basing evaluation of causal hypotheses on a preponderance of evidence,47 that is, on whether effects are more likely than not. In welfare-affecting areas of science, a preponderance-of-evidence rule often is better than a statistical-significance rule because it can take account of evidence based on underlying mechanisms and theoretical support, even when that evidence does not satisfy statistical significance. After all, even in US civil law, juries need not be 95 percent certain of a verdict, but only sure that a verdict is more likely than not. Another reason for requiring the preponderance-of-evidence rule, for welfare-related hypothesis development, is that statistical data often are difficult or expensive to obtain, for example, because of large sample-size requirements. Such difficulties limit the applicability of statistical significance. Requiring a preponderance-of-evidence rule, not a statistical-significance rule, also would better counteract conflicts of interest that sometimes cause



scientists to pay inadequate attention to public-welfare consequences of their work. Such conflicts of interest often arise, as noted in earlier chapters, because more than two-thirds of US science is funded by special-interest industries, often anti-regulatory interests,48 that have false-negative biases,49 exhibited even in government regulatory proceedings. 50 Likewise, up to 80 percent of welfare-related statistical studies have false-negative or type-II errors, failing to reject a false null. 51 Because the statistical-significance rule encourages false-negative and not false-positive errors, and because misuse of null-hypothesis testing is rampant in its use, 52 perhaps because of scientists’ conflicts of interest, requiring preponderance-of-evidence rules seems preferable to requiring the statistical-significance rule. Examples of alternatives to the rule include using confidence intervals to estimate effect size, measure its uncertainty, reveal numerically the level of confidence about some result, and reveal the range of p values consistent with the study data. (Confidence intervals are the P-values for a range—of possible parameter values, in addition to the null value—that is broad enough, so that one can identify an interval for which the test P-value exceeds a typical alpha level such as 0.05, and for which all parameter values within the range are compatible with the data, given the standard interpretation of significance tests.) Because significance levels are not appropriate indices of the size or importance of differences in outcome and lack a theoretical justification, and because the width of a confidence interval depends both on the level of random variability in the data and on the alpha level, epidemiologists such as Kenneth Rothman support calculating confidence intervals. Their suggestions: (A) look for/report confidence intervals rather than statistical significance because narrow confidence intervals—not low P-values and statistical significance—more likely identify which results are least influenced by random error. Yet, (B) recognize that neither confidence intervals nor statistical significance can measure the probability of a hypothesis. Therefore (C) avoid strict/exact interpretations of P-values, statistical significance, or confidence intervals in observational research. For instance, knowing that one has a 95-percent-confidence interval of (–50, 300), not (120, 130), one knows that the second, not the first, interval shows a superior parameter estimate, because of both its location and breadth. 53 Another specific alternative to assessing statistical significance is doing “what if ” analysis. By holding effect-size constant, then adding/removing data points systematically to analyze how sample size influences statistical significance, researchers can determine sample sizes needed to achieve statistical significance, given particular effect sizes. 54 Instead of using traditional p values or following the statistical-significance rule, a third option is providing probabilities that the value of some effect is beneficial, trivial, or harmful—thus focusing on hypotheses’ clinical/practical relevance. 55 As chapters 13 and14 suggest, scientists likewise can use decision theory



to take into account the costliness and riskiness of different errors and to set alpha levels higher in situations posing especially risky consequences. A false-negative error could be costlier than a false-positive error, for instance, if a pesticide appeared to reduce the survival rate of an endangered species by 5 percent, but appeared harmless because the effect did not satisfy the rule. False-negative errors likewise could be costlier if continued overharvesting of marine fisheries caused ecosystem collapse—yet was judged harmless because these damaging fishing effects did not satisfy the statistical-significance rule. 56
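The "what if" analysis and the confidence-interval alternative described above can both be sketched in a few lines. Here the effect size is held fixed (an assumed 0.2 standard deviations) while the sample size varies; all numbers are invented for illustration.

import numpy as np
from scipy.stats import norm

effect_size = 0.2      # assumed, fixed difference between groups, in SD units

for n_per_group in (25, 50, 100, 200, 400, 800):
    std_error = np.sqrt(2.0 / n_per_group)     # SE of a difference in means
    p_value = 2 * (1 - norm.cdf(effect_size / std_error))
    low = effect_size - 1.96 * std_error       # 95-percent confidence interval
    high = effect_size + 1.96 * std_error
    print(f"n = {n_per_group:4d}   p = {p_value:.3f}   95% CI = ({low:+.2f}, {high:+.2f})")
# The estimated effect never changes; only the p value and the width of the
# interval do. That is why a narrow confidence interval, not a low p value,
# is the more informative report.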

Conclusion

Good scientific reasons, such as differences between statistical and scientific hypotheses, argue against requiring the statistical-significance rule for causal hypothesis-discovery and development. It also is ethically dangerous to require the rule and to ignore the non-cognitive, health, or safety goals of more practical areas of science, such as epidemiology, toxicology, and medicine. Moreover, scientists have options, besides the rule, for hypothesizing about causes.

Just as chapter 7 argued, this chapter shows that easy rules for causal-hypothesis development oversimplify the complexity of scientific discovery. Many people want easy rules, even when it is not clear whether they work. Perhaps that is why Wall Street financiers often use algorithmic trading, instead of depending on their own ingenuity, to identify and act on arbitrage opportunities. The seduction of simplistic, yet dangerous, algorithms is part of what motivated Kurt Vonnegut's 1952 novel, Player Piano. The player piano is a metaphor for the many sorts of self-playing instruments in society, from Wall-Street trading software; to computer-generated, homogenized techno-music; to automated phone messages; to Google's internet-search algorithms; to police software for predicting crimes; to calculators for doing simple computations. Sometimes all their tasks could be better done by humans.57 If this chapter is right, scientists, traders, musicians, telephoners, and police—everyone—should be wary of using algorithms that crudely limit the hypotheses that one can discover and develop.



PART III

METHODOLOGICAL ANALYSIS AND JUSTIFYING HYPOTHESES



CHAPTER 9

Releasing Radioactivity HYPOTHESIS-PREDICTION IN HYDROGEOLOGY

If science, accepted for centuries, later turns out to be at least partly untrue, how can we ever know when science provides reliable information? For instance, observations support Isaac Newton’s laws of motion and gravity, and quantum mechanics violates ordinary observations.1 Yet only quantum mechanics, not Newton’s laws, accurately predicts the behavior of small particles. Similarly, Einstein’s special theory of relativity predicts effects such as time dilation and Lorentz length contraction, so neither time nor length are uniform, but dependent on velocity. As a result, moving clocks tick more slowly than an observer’s stationary clock, and with respect to the observer, objects are shortened in the direction they are moving. If Einstein’s general relativity is right, clocks run more slowly in deeper gravitational wells, light rays bend in the presence of gravitational fields, and the universe is not a constant size but expanding because its farther parts are moving away from us faster than the speed of light. Although we never travel at velocities where we can observe such effects, they are essential to modern science. Yet, are quantum mechanics and relativity true—or will they too be superseded some day? Scientists and philosophers known as logical empiricists, discussed in chapter 1, believed that if they could develop scientific methods enabling them to clearly distinguish true from false scientific claims, they would have a foundation for reliable knowledge—and for using science to fight evils such as bigotry, eugenics, racism, tyranny, and war. Distinguishing reliable from unreliable scientific conclusions, however, requires having ways to test or justify hypotheses—ways to separate valid and true hypotheses from invalid and false ones. This third section of the book (chapters 9–12) examines 4 different approaches to methodological analysis, 4 different ways of justifying hypotheses. The previous section of the book (chapters 5–8) examined different approaches to heuristic analysis, different ways of discovering and developing scientific hypotheses.


Chapter Overview

How do many scientists test or justify hypotheses? This chapter first outlines the classic, logical-empiricist view of scientific explanation, called the deductive-nomological account, and the prominent logical-empiricist view of scientific method, hypothesis-deduction. Second, it introduces a case study that shows how hydrogeological methods erred, partly because they relied on hypothesis-deduction and its questionable account of scientific laws. Third, it explains why idealized laws, as in this hydrogeological case, are problematic for hypothesis-deduction. Fourth, it shows that hypothesis-deduction also can be practically problematic because its flaws can put people at risk by underestimating uncertain threats. Fifth, the chapter suggests that there are reasonable alternatives to using hypothesis-deduction.

Deductive-Nomological Explanation

As chapter 5 noted, scientists disagree about what it means to say they have explained something. Some believe they explain phenomena when they discover their causes or underlying mechanisms. Others believe they explain when they unify a variety of phenomena. Chapter 12 outlines ways to assess these alternative causal, mechanistic, and unificationist accounts of scientific explanation. This chapter assesses the deductive-nomological, or covering-law, view of logical empiricists such as Carl Hempel and Peter Oppenheim—perhaps the most prominent account of scientific explanation. Deductive-nomological proponents say scientific explanations are arguments in which a scientific conclusion, the explanandum (e.g., copper expands when heated), is derived from universal or statistical scientific laws (e.g., all metals expand when heated), and empirical statements about particular facts (e.g., this copper is being heated). Whereas deductive-nomological explanations are based on deductions from universal laws, inductive-statistical explanations are based on inductions from statistical laws.2 The strengths of deductive-nomological accounts of explanation are that they attempt to subsume a theory's laws under more fundamental laws, to avoid any metaphysical commitments, to focus merely on what is empirical, and to provide a clear account of how scientists explain things. However, because of 2 key deductive-nomological problems, irrelevance and asymmetry, the account fails to provide sufficient conditions for scientific explanations. The irrelevance problem is that one can deduce non-explanatory conclusions (e.g., John did not become pregnant) from true claims about current circumstances (e.g., John took birth-control pills) and universal laws (taking birth control pills can prevent pregnancy). The asymmetry problem is that although one can use deductive-nomological methods to say, for instance, that the length of a flagpole's shadow is x because of the
height of the pole and the current location of the sun, these methods allow one to “explain” the height of the pole on the basis of the length of the shadow—although the shadow obviously does not cause flagpole height. Similarly, inductive-statistical accounts are questionable because they cannot provide necessary conditions for explanation. They cannot specify non-arbitrary criteria for accepting hypotheses. For instance, if inductive-statistical proponents require a relative risk (see chapter 7) of at least 1.5, they cannot explain why each mGy of ionizing-radiation exposure causes a relative risk of leukemia of 1.036. To address such problems, in the last few decades many philosophers of science have begun focusing instead on causal, mechanistic, and other accounts of scientific explanation. These are discussed in chapter 12. 3

Hypothetical-Deductive Methods Just as deductive-nomological accounts of explanation are questionable, but widely used in sciences like physics, so is the hypothetico-deductive (HD) method. From chapter 5, recall that for hypothetico-deductivists like Hempel, no method tells how to discover hypotheses. Instead, they say science begins when one attempts to use HD methods to test hypotheses. The tests involve deducing observational predictions from hypotheses, then observing whether predicted phenomena occur. If not—and if all auxiliary assumptions, associated with the deduction, are correct—then by modus tollens (if p then q, not q, therefore not p), the hypothesis is false. However, when predicted phenomena occurred, Hempel believed they inductively confirmed the hypothesis, although not with certainty. Hence philosophers of science, like Rudolph Carnap, attempted to develop methods for quantitatively assessing the probability of inductively confirmed hypotheses. These methods include a logic of confirmation, Bayesian models, and bootstrapping models. However, all of them remain controversial, given no solution to the problem of induction (see chapter 5).4 Other problems likewise face the widely used HD methods. One is the difficulty of knowing whether assumptions associated with hypotheses and testing methods are correct. For instance, suppose one wants to test the hypothesis that if Higgs bosons exist, they should be observable in gamma scattering in hadron colliders. These colliders are miles-long machines that accelerate and smash protons together, so physicists can see their subatomic debris, scattered after collisions. Because the Higgs appears only about once every billion collisions and is unstable/short-lived, its observation is difficult. Scientists could err in assuming a 5-sigma result confirms Higgs but that, below 5 sigma, results show no Higgs. If such assumptions err, no genuine Higgs test occurs. Another HD problem is that hypothesis confirmation can be subjective and arbitrary, given the problem of induction and committing the fallacy of affirming
the consequent (see chapter 2). Instead, Popper proposes falsificationism: If strong attempts to falsify a hypothesis fail, it has been corroborated but never confirmed. 5 Yet another HD problem is that, in the absence of good alternative hypotheses, scientists do not always abandon falsified hypotheses but label them “anomalies.” Thus HD tests are not compelling. Other problems arise because of the scientific laws on which HD relies, as the subsequent hydrogeology case illustrates.

A Case Study in Hydrogeology

Foundational hydrogeological laws/models, like those required for HD, are essential to siting underground hazardous-waste facilities because waste escapes geologically through groundwater. Consider Yucca Mountain, Nevada, proposed for subsurface, million-year storage of US high-level nuclear waste. Although Yucca has measurable seismic/volcanic activity, because it is in a remote desert, 90 miles northwest of Las Vegas, with little annual rainfall and a deep water table, scientists proposed it for possible nuclear-waste storage, including radionuclides that must be kept isolated forever. After massive site studies, in 1987 the US Congress designated Yucca as the nation's only commercial-high-level-radioactive-waste site and commissioned further study. Since 1987 the US government has spent $15 billion to study the site. In 2002 the US Congress and President Bush formally approved it. In 2009, however, the government stopped site work because of health, safety, and water-contamination concerns. In 2012 a blue-ribbon scientific panel recommended finding an alternate site. Thus the government made a 20-year, 15-billion-dollar scientific mistake.6

What went wrong? On the political side, 80 percent of Nevadans strongly oppose Yucca Mountain.7 "Not in My Backyard," they said. On the scientific side, this chapter shows that one problem is use of flawed HD methods to justify site-suitability claims. Scientists using HD methods at Yucca also forgot that, when science is used in practical situations, its conclusions must be within the limits of accuracy/precision required by that situation. For instance, Yucca groundwater velocity/flow conclusions must be reliable enough to ground public-policy decisions about permanently securing extremely dangerous wastes. Otherwise, people can be hurt, as illustrated by the Maxey Flats, Kentucky, dump. In 1962, hydrogeologists approved what became the world's largest (in curies) commercial-radioactive-waste dump. Before the corporate owner sited this Kentucky facility, scientists used hydrogeological models to determine groundwater velocity/site suitability. Nuclear Engineering Company, now US Ecology, said it would take 24,000 years for onsite plutonium to migrate one-half inch.8 However, 10 years after opening the facility, plutonium and other lethal, long-lived radionuclides were 2 miles offsite. The dump was closed but still poses threats to those nearby. It is a Superfund site.9


Something similar happened at Yucca Mountain. As noted, the government spent $15 billion studying the site, partly because it was misled by false hydrogeological predictions. At both Maxey Flats and Yucca Mountain, a fundamental problem is that HD methods, used at the sites, relied on idealized laws/models, including porous-media-continuum models that presuppose underlying rock is solid/unfractured. Yucca, however, is heavily fractured, not solid, partly because of hundreds of massive nuclear-bomb tests nearby. Although scientists use idealized laws/models because they have no better alternatives, Yucca scientists claim the laws/models “demonstrate the safety of a final storage site for nuclear wastes.”10 Virtually all groundwater-flow assessments, including at proposed hazardous-waste sites, rely on variants of the highly idealized Darcy’s Law.11 This chapter shows why Darcy’s Law is unreliable—as are all fundamental laws. Yet, why would scientists use Darcy’s Law to justify waste-site-suitability decisions and their spending $15 billion on site work?

Idealization and Hypothetico-Deductive Problems

One reason is that dominant HD methods require fundamental, theoretical laws to generate accurate predictions. Yet, at least 2 constraints are contrary to this requirement. One is that phenomena to be explained by theoretical laws must be simplified, or relevant parameters become too numerous to be manageable. As economist Milton Friedman put it, a hypothesis is important only if "it explains much by little . . . if it abstracts the common and crucial elements from the mass of complex and detailed circumstances surrounding the phenomena to be explained."12 A second constraint is that theoretical laws must be general enough to be applicable to many different situations. As Nancy Cartwright notes, quoting Aristotle's Nicomachean Ethics, Book II, Chapter 7: "Among statements about conduct, those which are general apply more widely, but those which are particular are more genuine."13 Both requirements mean scientific laws must be idealized or, as Friedman says, "they must be descriptively false."14 Using a physics example, Cartwright argues that because of idealized geometries, exact solutions in fluid mechanics are rare and typically involve approximations.15 In claiming fundamental-but-idealized laws are false, scholars like Cartwright and Friedman make the point of the Aristotelian Simplicio in Galileo's dialogue, Two New Sciences. Simplicio objects to the idealizations underlying the new mechanics and claims they falsify the real world because it is not as regular and uniform as the laws presuppose.16 In discussing idealization, this chapter uses the term to mean a deliberate simplifying of a complicated situation/claim, so as to achieve partial understanding of it. Idealizations typically include at least 4 types: mathematical, construct, empirical-causal, and subjunctive-causal. Mathematical idealization involves imposing mathematical formalisms on physical situations. Construct
idealizations consist of simplifying models/conceptual representations of the empirical situation being examined. Causal idealizations consist of simplifying not models but empirical situations themselves. Causal idealizations may be of 2 types, empirical or subjunctive. Empirical-causal idealizations involve a pure case, an idealized experiment to confirm an idealized law, and one looks at cases in which distorting complexities either are not present or are present only to a limited degree. Subjunctive-causal idealizations include no real experiment, only a thought experiment, a “what-if ” analysis, to arrive at an idealized law that would be true if other factors were not present. As such, subjunctive-causal idealizations are divorced from experience and rely on experimental intuitions.17 Because they depart from truth, idealizations raise problems for HD methods. HD accounts require realism, at least in the sense that theoretical laws must present accurate pictures of the world and ascribe real characteristics to it. To understand this problem, recall the standard deductive-nomological account,18 discussed earlier. For it, logic/empirical facts control science—whose purpose “is to formulate the truth about the natural world.” Laws may be observational or theoretical, and observational laws generalize sensory data, whereas theoretical laws postulate unobservable elements to explain observable laws—which themselves generalize phenomena. Because theoretical laws yield observational laws “as deductive consequences of their own abstract postulations,” they are “also applied in the explanation of particular occurrences and in the solution of problems of prediction and control.”19 According to deductive-nomological, hypothetico-deductive account, the fundamental theoretical laws of science are true and thus should give correct accounts of what happens when they are applied in particular situations. But as Nancy Cartwright, Bas van Fraassen, and others have argued, HD methods do not give correct accounts of what happens in their applications. 20 Instead, theoretical laws like Boltzmann’s equation are abstract formulas that describe no particular circumstances. As Cartwright notes, if one could always add phenomenological correction factors whenever applying these abstract equations, correction factors dictated by the theoretical laws, HD methods might be saved. However, because theoretical laws typically do not dictate correction factors, truth is lost. 21 To illustrate this point, Geoffrey Joseph uses the example of Coulomb’s Law, 22 and Cartwright uses the examples of the Lamb shift, the exponential-decay law in quantum mechanics, and equations associated with an amplifier model. Consider the last, where Cartwright supposes one wants to calculate signal properties of an amplifier. One must decide which of 2 standard models, each with different laws, to use and then use the equations. Applying these models/ laws gives a rough approximation of transistor parameters. However, if one measures relevant parameters on the actual circuit, then uses the measured, not
theoretically predicated, values for further calculations, the measured results are not even close to what the theory predicts. This outcome shows that (1) practical approximations usually improve on the accuracy of rigorous HD deductions from fundamental laws, and (2) although HD should illustrate how theoretical/ observational laws are applicable to particular situations, and how the latter can be deduced from the former, they do not. Particular facts are almost never complete enough to enable deducing observational from theoretical laws that explain them. As Cartwright emphasizes, improvements in the law’s results come at the wrong place: from the ground up (measurement), rather than from the top down (theoretical laws). Thus, she says real objects do not obey theoretical scientific laws—as HD methods presuppose. Moreover, if one uses empirical/observational laws to correct approximations from idealized laws, the corrections are unexplained. 23 Therefore, HD accounts do not really explain/predict real-object behavior because the content of the observational laws is not contained in the theoretical laws allegedly explaining them. 24 Cartwright, Joseph, van Fraassen, and others raise many issues about realism, deductive-nomological, and hypothetico-deductive accounts that cannot be addressed here.25 These include whether deductive-nomological accounts provide correct explanations of how theoretical laws are applied; whether they require realism/truth that is not achieved; and whether observational laws can be derived from theoretical laws. Rather than addressing such issues, let us assume theoretical laws are idealized—as Aristotle, Friedman, and others argue, and then ask, When is a law, used in a practical HD situation, such as permanent-waste storage, too idealized to be reliable? When/how do HD predictions jeopardize public health and safety?

A Case in Hydrogeology

To help answer these questions, consider Darcy's Law, widely used to help determine hazardous-waste-site suitability, especially in symmetric-cone models and well-pumping tests. Along with laws for equilibrium/nonequilibrium hydraulics of wells, Darcy's Law and the cone model are typical ways that hydrogeologists assess groundwater velocity and hydraulic conductivity. Are their Darcy-based conclusions too idealized to be reliable in waste-facility siting? Suppose hydrogeologists are assessing groundwater velocity at some site. Regardless of specific equations, all rely on Darcy's Law to give flow velocity in permeable media. This fundamental, theoretical law is that, in groundwater flow, velocity is proportional to the hydraulic gradient:

v = ks    (5.1)
where k is the hydraulic conductivity having units of v, and where s is the slope of the hydraulic gradient. In one form often used by hydrogeologists, (5.1) may be expressed as

v = k (h1 − h2) / l    (5.2)

where k is the hydraulic conductivity, and where (h1 − h2) is the pressure loss over distance l in the direction of flow. The discharge q is the product of area A and velocity:

q = A Kp (h1 − h2) / l    (5.3)

where Kp is the hydraulic conductivity, with porosity taken into account. Although hydraulic conductivity can be expressed many ways, it is usually taken to be rate of discharge per unit area under controlled hydraulic conditions. In the United States, it is usually expressed in Meinzer units that show discharge as flow in gallons/day through an area of 1 square foot under a gradient of 1 foot per foot at 60 degrees Fahrenheit.26
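As a purely numerical illustration of how (5.1) through (5.3) are applied, the sketch below plugs invented SI values into the three equations; none of the quantities comes from the chapter's sources, and the porosity-adjusted conductivity Kp is simply supplied as an input, as the text defines it.

# Minimal numerical sketch of Darcy's Law, equations (5.1)-(5.3).
# Every value below is an invented, illustrative SI quantity, not site data.
k = 1e-5               # hydraulic conductivity (m/s), hypothetical sandy material
kp = 3e-6              # conductivity with porosity taken into account, as in (5.3)
h1, h2 = 12.0, 10.0    # hydraulic heads at two points along the flow path (m)
l = 100.0              # distance between the two points (m)
A = 50.0               # cross-sectional area of flow (m^2)

s = (h1 - h2) / l      # slope of the hydraulic gradient (dimensionless)
v = k * s              # (5.1)/(5.2): Darcy velocity (m/s)
q = A * kp * (h1 - h2) / l   # (5.3): discharge (m^3/s)

print(f"gradient = {s:.4f}, Darcy velocity = {v:.2e} m/s, discharge = {q:.2e} m^3/s")

Even this trivial calculation presupposes that a single conductivity characterizes the entire cross section, which is precisely the idealization examined next.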

Darcy-Law Idealizations The fundamental Darcy Law, expressed either as (5.1) or (5.2) above, is experimentally warranted, according to hydrogeological theory, by using a pure case, an empirical-causal idealization of a cylinder packed with sand and having 2 piezometers some distance apart. One determines the flow rate q for water through a cross-sectional area A of the cylinder. The total energy heads/fluid potentials above a datum plane may be expressed by the Bernoulli equation, and the resulting pressure loss is defined as the potential loss within the sand cylinder.27 Thus, theoretically one can show that flow rate q through a porous medium is proportional to pressure loss and inversely proportional to length of the flow path. In reality, however, this fundamental theoretical warrant is highly idealized, since actual flow velocity is a function of the microstructure of the medium through which water flows. Darcy’s Law, however, presupposes flow occurs through the entire cross section of the material, rather than through pores and between solids, as actually happens. Hence it is a causal idealization, an empirical-causal idealization. The law assumes that velocity is uniform, whereas actual groundwater velocity is non-uniform and involves countless accelerations/decelerations/changes in direction. For naturally occurring geological materials, the microstructure cannot be specified in 3 dimensions; hence actual velocities must be quantified
statistically, by using the average values of hydraulic variables applicable to representative volume elements.28 Thus, results given by Darcy’s Law, (5.1), (5.2), are only approximate—and thus may not be good enough for waste-siting science. In addition to simplifying the physical situation by ignoring pores/interstices and using an empirical-causal idealization, Darcy’s Law is a mathematical idealization. Its formalism (employing statistical averages of hydraulic variables) is imposed on the physical world, giving only approximate values for key hydrogeological variables. In particular, flow velocities through interstices are always higher than the macro-velocity values given by Darcy’s Law. This means actual velocity in soil/fissured rock is highly variable. Moreover, flow paths are not straight, as Darcy assumes, but long and tortuous as water runs in/around/through bodies of varying porosity and solidity. Apart from preceding mathematical and empirical-causal idealizations, deviations from Darcy’s Law can occur in at least 4 situations, each illustrating a subjunctive-causal idealization that tells what would happen if certain factors were not present. The first situation occurs where steep hydraulic gradients exist, as near pumped wells. A second situation occurs where there is turbulent flow. As water velocity increases, because turbulent eddies dissipate kinetic energy, the hydraulic gradient is less effective in inducing flow, as in many limestone areas. A third situation where Darcy may fail is very slow flow through dense clay. There, electrically charged particles act on water in small pores, producing flow-rate and hydraulic-gradient non-linearities. A fourth problematic situation is that of soil-moisture movement resulting from thermal/osmotic gradients. 29 Discussing such Darcy-inaccuracy situations, most hydrogeologists speak of them as “minor deviations.”30 They justify Darcy use by pointing out that direct observation of groundwater velocities is impossible. As Ward puts it, hydrogeological theory must proceed at the macro level, because observations of molecular behavior/fluid velocity/pore pressure is impossible—despite statistical and fluid mechanics. 31 But if Darcy’s Law is built on macro not molecular/micro-level theory, it is false at molecular/micro levels. Moreover, given the 4 deviations already noted, the law seems technically false, even at the macro level. Apart from Darcy falsity at micro/molecular/macro levels, corrections to it do not come from the theory behind it, but from phenomenological/observational factors not deducible from theory. Such observational factors include the presence of clay soil, significant soil moisture, and geological formations permitting turbulent flow. Because needed Darcy corrections cannot be deduced from the fundamental law itself, but come from laboratory/field measurements, Darcy’s Law deviates from HD requirements. Because corrections come from ad hoc, site-specific considerations, they support Cartwright’s account of how theoretical and observational laws are related. Thus, contrary to HD, theoretical laws are not responsible for observational-law accuracy, as applied in a particular situation.
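One way to see the claim that interstitial velocities always exceed the macro-level Darcy value is the standard textbook relation between Darcy velocity and average linear (seepage) velocity, v_seepage = v_Darcy / n_e, where n_e is effective porosity. The short sketch below uses that relation with invented numbers; it is an illustration added here, not part of the chapter's own derivation.

# Illustrative only: the textbook relation between the macro-level Darcy velocity
# and the average interstitial (seepage) velocity, v_seepage = v_darcy / n_e,
# where n_e is effective porosity. All values are invented.
v_darcy = 2.0e-7                    # Darcy velocity from (5.1)/(5.2), m/s
for n_e in (0.35, 0.20, 0.05):      # hypothetical effective porosities
    v_seepage = v_darcy / n_e       # water actually moves only through the pores
    print(f"effective porosity {n_e:.2f}: seepage velocity {v_seepage:.2e} m/s "
          f"({v_seepage / v_darcy:.1f}x the Darcy value)")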


Idealized Methods Inherent in Using Darcy's Law

Besides Darcy idealizations resulting from model simplification, other idealizations result from counterfactual assumptions about site characteristics. To understand these subjunctive idealizations, consider how the law is used to calculate hydraulic conductivity. Typically it is determined from laboratory analysis of site soil cores or from well tests in the field. The soil-cores method involves taking site-soil samples, subjecting them to water under known pressure, and measuring flow through samples in a known time. Next one substitutes flow numbers in the general flow equations, (5.1), (5.2), to obtain hydraulic conductivity. The general equations for groundwater flow are from Darcy's Law, according to which flow rate through porous media is proportional to pressure loss and inversely proportional to flow-path length. However, this soil-cores method has 3 difficulties likely to cause false Darcy results. First, samples of unconsolidated materials cannot be placed in permeameters in their natural, undisturbed state. Second, samples cannot be known to be typical of the aquifer from which they are taken. Third, soil-cores methods cannot measure actual permeability as a consequence of site irregularities like animal burrows and root cavities.32 Consequently, hydrogeologists' use of Darcy's Law with soil-cores methods constitutes a causal idealization whose often-incorrect results cause many hydrogeologists instead to use well-pumping methods for determining hydraulic conductivity. In well-pumping methods, one digs a well to be pumped for each test, plus several observation wells surrounding it. One pumps the test well, and because there is a gradient toward it, the water table takes the shape of a cone of depression around the well. Using the cone-of-depression model, with its associated assumptions, well-hydraulics laws, and observation-well measurements, one obtains average-aquifer permeability/hydraulic conductivity around the test well. This construct idealization/model simplification presupposes that flow toward the well through a cylindrical surface equals the discharge of the well. Hence, using Darcy's Law (5.2), flow toward the well, through a cylindrical surface at radius x from the test well, can be represented by

Q = 2π xy Kp (dy/dx)    (5.4)

where 2πxy is the area of the cylinder, Kp is conductivity, and dy/dx is the slope of the water table.33

Various well-pumping-method problems show why Darcy results often are inaccurate. Specifically, equilibrium-hydraulics laws for wells, illustrated by (5.4), rely on many counterfactual/idealized assumptions that are model simplifications/construct idealizations. One is that the aquifer is homogeneous, of infinite
extent, with an initially horizontal water table. Because of root cavities/animal burrows/fractures, obviously many aquifers are neither homogeneous nor infinite. Instead, negative boundaries of impermeable material, or positive boundaries such as streams, circumscribe all real-world aquifers. Another counterfactual assumption presupposed in (5.4) is that equilibrium exists. Yet, because of low groundwater velocities, equilibrium occurs only after long periods of pumping at constant rates. During initial pumping from a new well, most discharge comes from storage in the portion of the aquifer that is unwatered as the cone of depression develops around the well. This means equilibrium analysis, via (5.4), yields too-high hydraulic-conductivity values because only part of the discharge comes from flow through the aquifer to the well; the potential yield of the well is much smaller than indicated by this initial discharge—which includes storage. Thus equation 5.4 might give an inaccurate, worst-case analysis for siting a hazardous-waste facility, whereas the equations/laws in porous-media models tend to err in the opposite direction—thus presenting possible threats to human welfare. 34 Of course, Theis and others have developed equations for the non-equilibrium hydraulics of wells that take account of time/storage effects on discharge. But these equations also are problematic, typically consisting of formulas based on analogies with heat flow. Although these analogies enabled Theis to adjust for the effects of time/storage, they presuppose idealizations: for example, aquifers unwater instantly, as water tables drop, and aquifers are homogeneous/infinite in extent. The first assumption is false for all non-artesian aquifers and leads to erroneous conclusions for thin/poorly permeable aquifers—like those proposed for waste sites. The second assumption is likely false, as mentioned regarding equilibrium equations. 35 Thus, principles for equilibrium/non-equilibrium hydraulics of wells presuppose symmetrical cones of depression and homogeneous aquifers of infinite extent, contrary to most geological situations.
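To make concrete how (5.4) gets used despite these idealizations, the sketch below integrates it between two observation wells at steady state to recover a single conductivity value (a Thiem-type equilibrium estimate). It presupposes exactly the assumptions just criticized, a homogeneous, infinite aquifer at true equilibrium, and every number in it is invented for illustration.

# Sketch of an equilibrium (Thiem-type) conductivity estimate obtained by
# integrating (5.4) between two observation wells at radii r1 < r2 whose
# steady-state saturated thicknesses are y1 < y2. It presupposes the very
# assumptions criticized above (homogeneous, infinite aquifer, true equilibrium),
# and all numbers are invented for illustration.
from math import log, pi

Q = 0.02              # steady pumping rate of the test well (m^3/s)
r1, r2 = 10.0, 60.0   # distances of the observation wells from the test well (m)
y1, y2 = 18.0, 19.5   # saturated thickness of the aquifer at each observation well (m)

Kp = Q * log(r2 / r1) / (pi * (y2**2 - y1**2))   # integrated form of (5.4)
print(f"estimated hydraulic conductivity Kp = {Kp:.2e} m/s")

A single Kp of this kind is only as good as the homogeneity and equilibrium assumptions behind it, which is the chapter's point.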

The Uncertainty Relations of Hydrogeology

Besides idealizations, cones of depression also present conceptual problems that might be called "the Uncertainty Relations of Hydrogeology." These problems arise because obtaining accurate measures of some cone-of-depression parameters interferes with obtaining accurate measures of others. Hence, when scientists substitute the measured values in (5.4), the resulting hydraulic-conductivity values are inaccurate. On the one hand, for pumping tests, wells must be close, so any lack of aquifer/soil uniformity will be evident. On the other hand, wells cannot be too close, as their cones of depression may overlap/interfere. When this happens, the drawdown at a point is the sum of drawdowns caused by individual wells; flow from wells is impaired,
and drawdowns are increased.36 Thus, the choice between more-versus-fewer wells really means one cannot simultaneously/reliably know both important aquifer heterogeneities and flow from all the wells. Accurate knowledge of both heterogeneities and well flows is impossible.

Of course, one might argue that image wells avoid these Uncertainty Relations of Hydrogeology. According to image-well theory, whenever one encounters negative boundaries (e.g., faults across which no groundwater is transmitted), or positive boundaries (e.g., streams causing influent seepage), one can attempt to avoid errors from assuming that the aquifer is homogeneous/of infinite extent. To do so, one employs the method of images devised by Kelvin for electrostatic theory. The theory presupposes that in limited-extent aquifers, effects of barrier boundaries (e.g., another well pumping from an aquifer) are the same as effects in infinite-extent aquifers, of a similar image pumping well located across the real boundary, on a perpendicular to it, at the same distance from the boundary as the real pumping well. Likewise, in limited-extent aquifers, effects of a recharge boundary (e.g., a stream), discharging into an aquifer, are the same as effects, in infinite-extent aquifers, of a similar image discharging well, located across the real boundary, on a perpendicular to it, at the same distance from the boundary as the real discharging well. Hypothetical image wells are model simplifications/construct idealizations that presuppose model wells produce hydraulic gradients, from the boundary to image wells, equal to the hydraulic gradient from the boundary toward pumped wells.37

However, this central, equal-hydraulic-gradient presupposition is flawed. First, the image-well hypothesis is not field-testable, but what Hempel calls a pseudo-hypothesis. Second, if one knew actual field conditions, one would never use image wells. Third, image-well methods presuppose homogeneous/infinite-extent aquifers. Otherwise, one would have neither symmetrical cones of depression caused by image wells and designed to take account of barrier boundaries, nor symmetrical cones of impression caused by image wells and designed to take account of recharge boundaries. Yet, as noted, this homogeneous, infinite-extent assumption is counterfactual. Thus, standard Darcy ways of determining hydraulic conductivity/groundwater flow produce no realistic results/laws, contrary to HD requirements. However, are such results realistic enough for applications like hazardous-waste-facility siting?
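As an illustration of what the method of images actually computes, the sketch below superposes the drawdowns of a real pumping well and a mirror-image well placed across a boundary, using a simplified steady-state (Thiem-type) drawdown formula; the transmissivity, radius of influence, pumping rate, and geometry are all invented assumptions, not field values, and the point is only the superposition, not the numbers.

# Illustrative sketch of the method of images: drawdown at an observation point
# is the superposition of the real well's drawdown and that of a mirror-image
# well placed across the boundary. A simplified steady-state (Thiem-type)
# drawdown s(r) = Q / (2*pi*T) * ln(R / r) is assumed; T, R, the rates, and the
# geometry are invented values, not field data.
from math import hypot, log, pi

T = 5e-3       # transmissivity (m^2/s), hypothetical
R = 1000.0     # nominal radius of influence (m), hypothetical
Q = 0.02       # pumping rate of the real well (m^3/s)

def drawdown(rate, well_xy, point_xy):
    r = hypot(point_xy[0] - well_xy[0], point_xy[1] - well_xy[1])
    return rate / (2 * pi * T) * log(R / r)

real_well = (0.0, 0.0)      # real pumping well
image_well = (200.0, 0.0)   # mirror image across a boundary along x = 100 m
obs_point = (50.0, 30.0)    # observation point

# Barrier (no-flow) boundary: the image well also pumps (+Q), deepening drawdown.
s_barrier = drawdown(Q, real_well, obs_point) + drawdown(Q, image_well, obs_point)
# Recharge boundary (e.g., a stream): the image well recharges (-Q), reducing drawdown.
s_recharge = drawdown(Q, real_well, obs_point) + drawdown(-Q, image_well, obs_point)

print(f"drawdown with barrier image:  {s_barrier:.2f} m")
print(f"drawdown with recharge image: {s_recharge:.2f} m")

Changing the sign of the image well is the whole device; everything else rests on the homogeneous, infinite-extent assumptions the text identifies as counterfactual.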

Idealizations and Practical Problems

After all, many scientific laws—like Coulomb's Law—are idealized, but scientists still use them. Coulomb's Law states that any point charge produces a spherically symmetric electromagnetic field around it. But not every point charge produces a spherically symmetric electromagnetic field. As Joseph
points out, according to relativistic theory of gravitational fields, a dense source of mass-energy, near the point charge, can change the metric of space-time in that vicinity. Because the quanta of electromagnetic fields respond to this gravitational influence, their trajectories deviate from spherically symmetric distributions. The upshot: important properties of physical objects, like their trajectories, are not correctly described by the equations governing any 1 of the 4 known force fields. 38 In at least one important sense, the Coulomb’s Law idealization is as problematic as the Darcy idealizations/equations. Recall the latter are used with well-pumping tests requiring homogeneous/unlimited-extent aquifers. For both Coulomb and Darcy, the equations contain implicit ceteris absentibus clauses, in presupposing subjunctive-causal idealizations. The ceteris absentibus clauses in the Coulomb case are something like “if there were no dense source of mass-energy near the point charge” and “if factors x, y, and so on, were absent,” then around any point charge, it would produce a spherically symmetric electromagnetic field. The ceteris absentibus clauses in the Darcy case would be something like “if there were no animal burrows/fractures in underlying strata, etc.,” then, around any well, the draw-down would produce a spherically symmetric cone of depression, or “if there were no clay soils/turbulent eddies, etc.,” Darcy’s Law would produce accurate macro-level results. Notice that such ceteris absentibus clauses change the logical form of the relevant laws from indicatives to subjunctives and makes them nonextensional. 39 This suggests the laws are true of models, not real-world processes. While both Coulomb’s and Darcy’s laws are true of models, not the real world, and both are causal idealizations, they have important idealization differences. First, Coulomb’s Law is an inaccurate microphysical law, and Darcy’s Law is an inaccurate macrophysical law. Because of the level of theory it addresses, Darcy’s Law is a much cruder idealization; deviations from it are evident in the macro-, not just the microphysical, universe. One physics analogue for the Darcy macrophysical inaccuracies is a situation in which one was applying the laws of classical mechanics in order to predict the behavior of a system of rigid bodies, for example, a bridge. Suppose the bridge was made of materials with internal imperfections, these imperfections could not be discovered accurately beforehand, yet they were capable of influencing whether the bridge collapsed or not. With the bridge, the relevant laws of mechanics might be causal idealizations at the microphysical level, because their predictions failed to take account of microphysical problems capable of making a difference in the accuracy of the macrophysical predictions. Darcy’s and Coulomb’s laws also represent different types of causal idealizations because theories to correct Darcy predictions face observational difficulties, while those designed to correct Coulomb predictions face more theoretical problems. Darcy deviations from reality could be corrected only if one were able to measure factors such as interstitial velocity or pore pressure. Coulomb deviations
could be handled by a theory of the vector sum of total forces acting on a particle, but the theory has no physical model, no realistic microphysical interpretation— no single, consistent theory of physics satisfying relativity/quantum mechanics. Although Darcy-correction observational problems and Coulomb-correction theoretical problems can in principle be overcome, Darcy’s problems seem far worse. Why? First, the theoretical constraints imposed by quantum theory, relativity, and the absence of a unified field theory would render any microphysical law idealistic. However, constraints that would render any such law idealistic do not impose idealism in any interesting sense. Second, Darcy’s observational difficulties typically do not render most macrophysical laws idealistic. It is relatively easy, for example, to determine the velocity of a macrophysical object traveling on earth. Because the Coulomb idealization results from a theoretical constraint pervading all microphysical science, whereas the Darcy idealization results from an observational constraint not common at the macrophysical level, the latter problem appears more damaging to science. That is, the idealization is less typical/expected/likely to be recognized by those who apply science, especially in problematic situations. A third difference in Coulomb and Darcy idealizations regards practical consequences. If Darcy’s Law does not give accurate results at the macro level, applied science can provide incorrect findings to guide policy, as occurred at Maxey Flats. But if Coulomb’s Law gives inaccurate results at the microphysical level, little real-world harm occurs. The gravitational force is so small that, even if it causes electromagnetic-field-quanta trajectories to deviate from spherically symmetrical distributions, they have little practical-world import. Fourth, Coulomb’s and Darcy’s laws represent different types of idealizations in the sense that it is easier to get around the former than the latter. At least in theory, one could overcome the Coulomb idealization by describing the properties of physical objects, for example, their trajectories, by means of equations that considered each field law as contributing one component to the vector sum of total forces acting on a particle. Although it might not have a realistic interpretation, contemporary physics thus has ways to compute total vector force and to avoid ceteris absentibus clauses causing idealizations. Contemporary hydrogeology, however, has no way to compute hydraulic gradient in all situations, especially at subsurface sites. Even if computations were possible, they would be untestable, given subsurface parameters whose unearthing could change measurements. Moreover, because HD methods require knowing something that often cannot be known hydrogeologically, initial and boundary conditions, key well-pumping hydrogeological equations are idealized in a more problematic sense than are those associated with Coulomb’s Law. In the case of using well-pumping tests to measure hydraulic conductivity/ groundwater flow at Yucca Mountain and Maxey Flats, geologists used laws/ equations whose predictions could not be confirmed within reasonable periods
of time. This means their idealized conclusions could not be assessed for accuracy. As a consequence, geologists examining the sites were forced to make many epistemic/cognitive value judgments (see chapters 13 and 14). Such value judgments could explain why Maxey Flats hydrogeologists said lethal groundwater contaminants could move only a half-inch onsite during 24,000 years, even though those contaminants appeared 2 miles offsite only 10 years later.

Idealized hydrogeological laws also are particularly problematic at potential hazardous-waste sites having allegedly low-permeability clays/shales—because groundwater in these materials' fractures/joints is typically neither infinite nor horizontal, contrary to what hypothetico-deductive and hydrogeological laws require. Rather, groundwater is probably perched—located in different-sized bodies, at varying distances from the surface, in nearly closed small cavities. Darcy's Law also is likely to give inaccurate results in clay soils because their water velocities are less than proportional to the hydraulic gradient. Thus, the precise hydrogeological conditions (clay, shale) apparently suited for hazardous-waste storage are exactly those in which hydrogeological prediction, via Darcy's Law, is most inaccurate.

Of course, one might object: if Darcy's Law gives inaccurate results, does this show it is more idealized than other laws, or that it has been misapplied? Is expecting Darcy's Law to apply, in heterogeneous media of finite extent, similar to expecting Coulomb's Law to explain situations in which the electric charge is widely distributed in space and moving continuously? While this objection makes an important point about not confusing misapplication of a law with its idealization, it misses significant differences between Coulomb's and Darcy's laws. As already noted, one can theoretically compute total vector forces and thus avoid idealizations inherent in Coulomb's Law. However, Darcy's Law allows no such computation. Hence Darcy's Law is more idealized, thus more troublesome in the real world of dangers like waste-siting.

Amending Hypothetico-Deductive Methods

Given Darcy idealizations, what should hydrogeologists do? Obviously they should realize that using Darcy's Law in HD methods provides no reliable basis for hazardous-waste-site predictions. A partial solution might be to provide tentative criteria for using idealized laws in such situations. Given limited subsurface-hydrogeology observations, one criterion might be that if one uses abstract, oversimplified laws like (5.2), one ought to be able to correct for omitted/simplified factors. As Cartwright puts it, "If the idealization is to be of use, when the time comes to apply it to a real system, we had better know how to add back the contributions of the factors that have been left out."40 If the idealizations can be made realistic, as HD requires, fundamental laws will dictate this "adding back."
However, if phenomenological correction factors dictate the adding back, 2 problems arise for subsurface hydrogeology: (1) no specific, quantitative phenomenological laws, derived from fundamental law (5.2), can correct Darcy’s Law, and (2) subsurface-observation difficulties typically allow no phenomenological basis for precise correction factors for non-uniform hydrogeological phenomena miles beneath earth’s surface. Mostly ad-hoc manipulation of equation-parameter values is possible—exactly what one US Geological Survey scientist described as the primary problem with hydrogeological investigations.41 Yet to use HD methods and do reliable hazardous-waste siting, one needs to be able to add back these deviations quantitatively, something that presupposes knowing the distance between reality and idealization.42 Because subsurface hydrogeologists typically do not know this distance, or they would not use idealized models in the first place, they must always admit the massive uncertainties in their work, lest their science cause massive harm. If one is applying idealized scientific laws in practical situations requiring predictive accuracy, one warning might be that the greater the inaccuracy of the idealization, the less defensible the application. Yet, knowing when idealized laws are more/less accurate is difficult. Often idealized laws, for example, all people behave so as to maximize wealth, are used precisely because scientists cannot specify either superior alternative laws or precise conditions that would make the laws more accurate. If so, using idealized laws is more acceptable when their associated, unquantifiable errors are known to be in a particular direction and can be bounded. This way, scientists could err on the side of safety. Yet, even if errors lie in a particular direction, deciding to use idealized laws often embroils scientists in practical, welfare-related consequences of any inaccuracy. (See chapters 13 and 14.) Another criterion for using idealized laws might be their yielding predictions that are accurate enough for the particular application, as economist Milton Friedman suggested. He said scientific idealizations are unproblematic insofar as they lead to correct predictions.43 Thus, if hydrogeologists use Darcy’s Law to make 10-year predictions, they must know their predictions are correct for this period. But if the law’s accuracy is uncertain, one ought not use predictions, checked over weeks/years, to predict millennia occurrences. Yet, this is exactly what happened in siting both the Maxey Flats and the Yucca Mountain facilities.44 This problem suggests scientists should employ a second criterion for using idealized laws: that laws be confirmed to within the degree of accuracy required for the particular application. What is the status of these 2 criteria? Scientific applications that fail to satisfy the first criterion provide no basis for empirical assessment of whether applications are successful. Those that fail to satisfy the second criterion would have untested empirical adequacy. Yet if idealized scientific laws cannot be evaluated empirically, there may be no laws of which there are idealized versions.


Still another alternative—to using idealized HD methods in practical situations where they may err—is to base crucial practical science on the preponderance of scientific evidence and on inference to the best explanation. Chapter 12 investigates these strategies.

Conclusion

This chapter shows that accepted scientific laws can present problems for HD methods of testing hypotheses because the laws may be unrealistic or unable to support reliable phenomenological laws. Uncritical use of HD methods, and failure to consider the degree to which theoretical laws are idealized, thus could lead to flawed science and human harm. Although this classic HD account of scientific method presents simple, tempting algorithms for testing scientific hypotheses, reliable science has few algorithms. Instead, as chapter 12 shows, it often relies on inference to the best explanation and on a preponderance of evidence. If science did rely mainly on algorithms, it would be easy. Society would not have to wait centuries to discover things like evolution, plate tectonics, quarks, and vaccines. Creating gold is far more difficult than the alchemists realized. Likewise, justifying hypotheses is far more difficult than HD proponents realize. Remembering this alchemy lesson might help prevent the problems faced at Yucca Mountain and Maxey Flats.


CHAPTER 10

Protecting Florida Panthers HISTORICAL-COMPARATIVIST METHODS IN ZOOLOGY

Three decades ago, Arkansas lawmakers proposed that Creationism, not just evolution, be taught in schools. In a landmark trial, however, the judge struck down the proposed law on the grounds that public schools should not teach religion, and that Creationism is religion, not science, because it is not testable.1 Can all scientific hypotheses be tested? Many people think so. As the previous chapter showed, proponents of the deductive-nomological and hypothetical-deductive accounts of scientific method probably think so. They likely would agree with the judge about what makes something scientific. However, many other scientists and philosophers of science—most of whom would not want Creationism taught as science—nevertheless would disagree with the judge that testability is necessary for all science. Instead, they would say that testability is a goal of science, one that cannot always be achieved. Consequently their account of scientific method focuses on the importance of comparing different hypotheses to assess their relative historical problem-solving ability, rather than testing any single hypothesis.

Chapter Overview

Who is right about how to justify scientific hypotheses, the logic-oriented, hypothesis-deduction proponents who focus on testing, or the history-oriented scientists who deny that all scientific methods require testability? Answering this question, the chapter first outlines logical versus historical-comparative accounts of science. Second, it evaluates 3 main tenets of Larry Laudan's historical-comparativist methods of justification. Third, although hypothesis comparison is essential to science, this chapter argues that many comparativists, like Laudan, fail to explain how science justifies hypotheses. Fourth, the chapter shows that partly because leading Florida-panther biologists rely mainly on
Laudan's comparativism—and ignore non-comparative conditions for scientific justification—they do flawed science that threatens the panther. Fifth, the chapter uses the Florida-panther case to show why using only historicist-comparativist methods of science can lead to error.

Non-Comparativist versus Comparativist Science

To understand comparativism, recall the differences between what Dudley Shapere calls the historical school in philosophy of science2 and the logical or deductive-nomological school, discussed in chapter 9. Although they disagreed on particulars, members of the logical school, like Rudolph Carnap,3 Carl Hempel,4 and Ernest Nagel,5 say that experiments can be value-neutral, that scientific change is cumulative, and that scientific hypotheses should be tested by experiment. Their account of science is what the judge in the Arkansas Creationism case seems to have presupposed. Proponents of the historical-comparativist account, like Paul Feyerabend,6 Norwood Russell Hanson,7 Tom Kuhn,8 Larry Laudan,9 and Stephen Toulmin,10 would disagree with the judge's rationale but probably not his conclusion. They say scientific experiments can never be value-neutral; that hypotheses need not be rejected when scientific experiments prima facie falsify them; and that because science includes not merely what is tested but also standards/methodology/interpretations/perhaps metaphysics, testing scientific hypotheses cannot be done. Instead they say scientists can merely compare different hypotheses to see which have better problem-solving ability. While hypothesis-deduction proponents invoke scientific objectivity, neutral observations, experimental control of speculation, and scientific rationality, historical-comparativist advocates deny or weaken each of these claims.11 As a result, hypothesis-deduction proponents believe scientific hypotheses can be evaluated largely independent of context and comparisons to competing hypotheses. However, historical-comparativist advocates believe scientific hypotheses should be compared relative to competitors. Hypothesis-deduction proponents thus are non-comparativists, and historical proponents are comparativists.

What has caused this recent scientific emphasis on hypothesis-comparison? Feyerabend claimed Darwinian competition among biological species influenced science, and Ernst Mach and Ludwig Boltzmann argued that "a struggle of alternatives is decisive for science."12 Other scientists noted that once scientists/philosophers of science rejected hypothesis-deduction methods, they recognized the importance of hypothesis comparison.13 Kuhn spoke of hypothesis competitors,14 and Laudan writes of hypotheses as rivals evaluated in terms of progressiveness.15 David Hull claims competition "regulates" science.16 Even experimentalists like
Peter Galison and Alan Franklin focus on competition.17 Does competition really dominate scientific-hypothesis-justification? The answer depends on whom you ask. On one scientific-method continuum, stretching from proponents of hypothesis-deduction to the historical advocates, one might proceed from Schaffner to Feyerabend to Kuhn to Laudan. Here Laudan is the most fully comparativist, partly because he says scientific rationality is defined in terms of comparative problem-solving ability, not the reverse; he believes there is no method of discovering/developing/justifying scientific hypotheses, except for comparing them.18 Schaffner is the least comparativist of this group because, although he affirms context-dependent, comparative, scientific principles like theoretical-context sufficiency, experimental adequacy, and simplicity,19 he also affirms observationally common elements among competing hypotheses. 20 Schaffner denies that comparative-hypothesis judgments must be subjective, just because they are context-dependent. 21 He says allegedly subjective claims about scientific-hypothesis-justification can show merely theoretical underdetermination, lack of complete understanding, not subjectivity.22 Like Schaffner, Feyerabend also claimed scientific hypotheses are comparable under some interpretations, but not others.23 He claimed primitive terms of different scientific hypotheses are incommensurable; methods, problem-fields, and standards of solution are comparable.24 Feyerabend thus believed that despite scientific revolutions, scientists can pursue a pluralistic methodology,25 thus assess which of 2 hypotheses is closer to the truth.26 He also argued that comparative-scientific-hypothesis assessment should not focus only on fully formed hypotheses, as Laudan says.27 Instead Feyerabend said scientists should compare “explicit and implicit assertions, doubtful and intuitively evident theories, known and unconsciously held principles,” and therefore “provide means for the discovery and criticism of the latter.”28 Unlike Feyerabend, Kuhn claims linguistic and non-linguistic elements of scientific-hypothesis choice are permanently incommensurable and therefore that scientists must use only changeable “values . . . rather than [unchangeable] rules of [hypothesis] choice.”29 Kuhn believes proponents of different scientific hypotheses never “even see the same thing, possess the same data, but identify or interpret it differently.”30 Because of different scientists’ radically different contexts, Kuhn nevertheless says they can use scientific values like accuracy to compare otherwise-incommensurable hypotheses. 31 Is Kuhn right?32 Is Larry Laudan right, when he says hypotheses should be judged by how progressive they are? Or, is hypothesis-comparison subjective?

Laudan’s Comparativism To answer these questions, consider Laudan’s views. He defines scientific rationality in terms of comparative problem-solving ability and thus believes there are
no standards (such as testability or making successful predictions) for justifying science, independent of hypothesis comparison. His 3 central tenets can be called the trump, rejection, and partiality claims. The trump claim is that comparative problem-solving ability trumps truth and probability in hypothesis-justification. The rejection claim is that scientists should reject no hypothesis until a better one is available. The partiality claim is that scientists can evaluate part of a scientific theory or hypothesis for the whole. Although hypothesis comparison is essential to science, this chapter shows why using Laudan’s 3 claims leads to flawed science, mainly because their extreme comparativism ignores standards such as truth and probability. The most basic of these tenets, the trump claim defines rationality by comparing scientific hypotheses on a scale of problem-solving abilities, 33 such that severe testing means only that some hypothesis has survived tests its known rivals have failed to pass, and not vice versa. 34 Surprisingly, Laudan’s trump claim ignores evidence. Why? Laudan thinks problem-solving ability is the only measure of rationality; he thus reverses “the presumed dependency of progress on rationality” and says “rationality consists in making the most progressive” hypothesis choices, 35 a claim that presupposes no prior standards of rationality. Laudan also says no difficulty, like being falsified in a test, counts as a problem for some hypothesis until another hypothesis can solve it. 36 “Unsolved problems generally count as genuine problems only when they are no longer unsolved” because, until they are solved, people can claim they were not really problems, not in their fields, or need no solution. Thus contradicting hypothesis-deduction accounts, Laudan says “the class of unsolved problems is altogether irrelevant. . . . In determining if a [scientific hypothesis or] theory solves a problem, it is irrelevant whether the [hypothesis or] theory is true or false, well or poorly confirmed.”37 Supposed refuting instances often have “little cognitive significance.”38 As defended by Laudan, the trump claim requires restricting hypothesis-competition to fully developed, extant rivals. 39 Unlike Feyerabend, Laudan says choosing one hypothesis over another “requires no Herculean enumeration of all the possible hypotheses;” he says hypotheses without “a clearly articulated formulation” can be ignored.40 But if the trump claim requires fully formed hypotheses and includes no rules for single-hypothesis assessment,41 Laudan’s rejection claim (to never reject some hypothesis until a better one is available) holds: No problem, like an hypothesis’ contradicting the data, is powerful enough to force one to reject it, prior to comparative assessment.42 Given the trump and rejection claims, Laudan-style comparativists say a hypothesis must always reestablish its credentials after another’s challenge, a reestablishment that may require changing the way the hypothesis makes explanatory connections to other hypotheses and theories. But if Laudan says hypothesis choice is always among whole, general hypotheses and theories that have consistent explanatory connections among observations and assumptions,43 then
testing part of some hypothesis or theory counts as testing the whole. As Laudan explains this partiality claim, “testing or confirming one ‘part’ of a general theory provides, defeasibly, an evaluation of all of it.”44 Do Laudan’s trump, rejection, and partiality claims succeed when used in science? To answer this question, consider competing habitat hypotheses about the Florida panther, Puma concolor coryi.

The Florida Panther

The golden-haired Florida panther, Puma concolor coryi, is a subspecies of cougar or mountain lion. Once present throughout the United States, cougars are extinct east of the Mississippi, except for a small population in southern Florida. Development has pushed them to the wildlands of Florida, partly because they require many acres of home range and often travel 15 miles per night, searching for their main prey, deer. An endangered umbrella and keystone subspecies, Florida panthers are sacred to the Seminole Indians and essential to the survival of the western Everglades and many other South-Florida species. Consequently the government spends billions of dollars annually restoring lands west of Miami. Yet west-coast Florida development, in the Everglades and its watershed, threatens both panthers and the unique Everglades. To try to save panthers, since 1981 the government has monitored them through radiotelemetry collars. In 1995 Florida also introduced several Texas panthers to address inbreeding depression,45 genetic flaws caused by breeding within a small-subspecies population. Nevertheless, only about 80 panthers, including fewer than 20 breeding females, remain in South Florida, mostly on public lands.46

Saving panthers, however, requires reliable science to support species-recovery plans, science that includes detailed knowledge of panther habitat/genetics/colonization/mortality/breeding, so as to determine what land to set aside for panthers. Yet panther range/habitat has changed over time. Panthers in different locations have different habitats and home ranges. According to the multiple-habitats hypothesis, Florida panthers prefer hardwood hammock, mixed hardwood swamp, and cypress swamp.47 According to the dry-habitat hypothesis, the only vital Florida-panther habitat is dry, upland pine forest.48 Because many scientists, developers, and policymakers accept the dry-habitat, over the multiple-habitats, hypothesis, they argue for panther recovery through colonizing dry, central-Florida pine forests north of the Caloosahatchee River, and not colonizing wetter South Florida—where development pressure is much greater and land is more commercially desirable.49 What does competition between these 2 panther-habitat hypotheses suggest about Laudan's comparativism? Dominant panther biologists defend the dry-habitat hypothesis mainly by appealing to Laudan's trump, rejection, and partiality claims. Consider first both hypotheses' problem-solving abilities.


The Multiple-Habitat Hypothesis: Its Problem-Solving Trump

The dominant or dry-panther-habitat hypothesis trumps the multiple-habitats hypothesis because of at least 4 Laudan-type, problem-solving assets. It better explains panther decline, range, reluctance to cross rivers, and ability to survive more easily in the western United States.

First, if one accepts the dry-habitat hypothesis, one can partly explain why panthers have declined. Dry, upland US forests, especially pine, have continued to shrink rapidly,50 they have declined more rapidly than cypress swamps,51 and US panthers have declined rapidly.52 If so, the dry-habitat, not multiple-habitats, hypothesis better explains panther decline because it better explains supposed causal connections between 2 independently documented variables (dry-forest decline and panther decline). Moreover, it better solves panther-decline problems because swamp forests have declined less than dry forests.53

Second, accepting the dry-habitat hypothesis also helps explain why panther home range has shrunk from half of the United States: Dry forests have declined more rapidly than wetter forests. If panthers require dry habitat—which has declined more rapidly than wetter habitats—the dry-habitat hypothesis may better explain panthers' decreased range.

Third, if one accepts the dry-habitat, not multiple-habitats, hypothesis, one also can explain why panthers rarely cross the Caloosahatchee River; why 95 percent of them have never done so; and why no breeding-age females have been documented as doing so.54 Because the multiple-habitats hypothesis includes swamps as panther habitat, it is less able than the dry-habitat hypothesis to explain panther reluctance to cross the river, although panthers can swim. If panthers preferred dry over wet-and-dry habitat, they would be less likely to cross rivers. Thus the dry-habitat hypothesis seems superior in solving this problem.

Fourth, accepting the dry-habitat, over the multiple-habitats, hypothesis also helps explain why panthers are doing better in the western than in the eastern United States. East of the Mississippi, panthers remain only in the Everglades and its watershed. In the West, however, panthers number about 5000 and range from south to north, from California to Texas, on drier land than the Everglades.55 Hence, the multiple-habitats hypothesis is less able than its competitor to solve problems of dry-western-US panther success.

Although the preceding 4 comparisons do not show the dry-habitat hypothesis is true, they suggest it has greater problem-solving ability than the multiple-habitats hypothesis—which solves only one problem, why panthers are in the Everglades (because of development pressure in drier northern Florida). Thus, according to Laudan's problem-solving criterion, the dry-habitat hypothesis appears to trump the multiple-habitat hypothesis.
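To make the structure of this comparativist reasoning explicit, the following sketch renders the trump, rejection, and partiality claims as a bare decision rule. It is a schematic illustration only, not Laudan's own formalism and not the biologists' actual procedure; the hypothesis entries and problem tallies are placeholders drawn loosely from the comparisons above.

```python
# A schematic sketch (not Laudan's own formalism, and not the biologists'
# actual procedure) of the trump, rejection, and partiality claims as a bare
# comparative decision rule. The entries below are placeholders drawn loosely
# from the panther comparisons above.

def laudan_choice(hypotheses):
    """Pick the rival hypothesis with the greatest problem-solving score."""
    # Only "clearly articulated" rivals are admitted to the comparison.
    rivals = [h for h in hypotheses if h["fully_formulated"]]

    def score(h):
        # Partiality claim: credit from any tested part counts toward the whole.
        return sum(h["problems_solved_by_parts"].values())

    # Trump claim: the ranking never consults truth, probability, or evidence
    # quality, so the "evidence_representative" flag plays no role.
    # Rejection claim: whatever wins is accepted; there is no "uncertain" verdict.
    return max(rivals, key=score)

dry_habitat = {
    "name": "dry-habitat",
    "fully_formulated": True,
    "evidence_representative": False,  # daytime-only, north-of-I-75 telemetry
    "problems_solved_by_parts": {
        "panther decline": 1,
        "shrunken range": 1,
        "reluctance to cross rivers": 1,
        "success in the western US": 1,
    },
}
multiple_habitats = {
    "name": "multiple-habitats",
    "fully_formulated": True,
    "evidence_representative": True,
    "problems_solved_by_parts": {"presence in the Everglades": 1},
}

print(laudan_choice([dry_habitat, multiple_habitats])["name"])
# Prints "dry-habitat": it wins the comparison even though the rule never asks
# whether its supporting data are representative.
```

The blind spot this chapter criticizes is visible in the sketch: the evidence_representative flag, which encodes the data problems discussed below, never enters the ranking.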


The Dry-Habitat Hypothesis and Trump, Rejection, and Partiality Presuppositions

Although biological claims supporting the dry-habitat hypothesis appear consistent with Laudan's comparativism, they also rely on Laudan's trump, rejection, and partiality claims. (Admittedly, most panther biologists have neither precisely formulated their methodological beliefs nor fully assessed them.) However, dominant Florida-panther biologists David Maehr and coauthors, for instance, accept the trump claim. They note the superior problem-solving ability of the dry-habitat, over the multiple-habitats, hypothesis as part of their argument for the former.56 Nevertheless, they admit concerns about the truth of the dry-habitat hypothesis because they used only daytime, not 24-hour, telemetry to assess it. Yet daytime telemetry gives little habitat information about the nocturnal panther. Admitting potential data-representativeness problems, Maehr and coauthors ignore these problems on comparativist grounds that the dry-habitat hypothesis is better than its competitor.57

Their statements also reveal they accept the rejection claim—not rejecting a hypothesis until they have a better one. Hence Maehr and coauthors reject the multiple-habitats hypothesis for having less problem-solving ability than its competitor and accept the dry-habitat hypothesis as better. Yet they recognize that other, less-comparativist, biologists have concerns about the truth of the hypothesis. That is, other biologists focus on theory comparison, but unlike Laudan, they also assess truth and probability.58

Maehr also appears to accept a third Laudan tenet, the partiality claim, evaluating only part of a general hypothesis. On November 14, 2001, speaking before the US Fish and Wildlife Service (USFWS) Panther Subteam, Maehr argued for the dry-habitat hypothesis, over the multiple-habitats hypothesis, based on available but partial data.59 Although the former is based on panther daytime/resting habitat, given its reliance on daytime telemetry, Maehr argued there was no better hypothesis available than the dry-habitat hypothesis.60 Other scientists and policymakers also accepted Maehr's dry-habitat justification, partly because they subscribe to the trump, rejection, and partiality claims. That is, despite the incompleteness of the dry-habitat hypothesis, they accept it as illustrating the best-available problem solving and using the best-available data.61

Spatial and Temporal Biases and the Partiality Claim

What about dry-habitat-hypothesis problems with truth and data representativeness? In December 2001, USFWS biologists likewise appealed to Laudan's comparativism (without using this name) to defend this hypothesis. Biologist Dawn Jennings, USFWS's section head, discussed the USFWS panther-conservation-strategy draft and used the rejection and partiality claims;
she said Maehr’s dry-habitat hypothesis should be accepted because, although it is flawed in using only partial evidence, comparatively it is the “best available.”62 (As this chapter shows later, such USFWS commitments to comparativism, instead of also to rules of evidence for good science, may explain why USFWS scientists, who used the trump, rejection, and partiality claims, could not criticize Maehr’s poor science, that is, his basing hypothesis choice and hypothesis-justification on non-representative data.) Does the partiality claim support the dry-habitat hypothesis?63 Note that partiality proponents reject bottom-up approaches,64 according to which hypothesis justification can be accomplished gradually/partially, as more low-level hypotheses pass severe testing. Instead they count assessing partial theories and hypotheses as assessing complete theories and hypotheses. Dominant panther-biologists err in using the partiality claim to accept the dry-habitat hypothesis because they ignore its limited geographic domain. In evaluating dry- versus multiple-habitat, 65 they evaluated only part of the problem, that covering only 60 percent of panthers, those living in semi-forested-dry-pine areas north of I-75.66 Thus, they ignored the 40 percent of Florida panthers living in the wetter habitat south of I-75, a mosaic of prairies/marshes/tree islands. Because this wetter, southern habitat is not representative of the drier, northern habitat, they reveal spatial bias in accepting the dry-habitat hypothesis. Accepting the dry-habitat hypothesis, because of the partiality claim, involves a second spatial bias. Dry-habitat scientists ignore errors caused by the fact that each panther-radiotelemetry pixel has 224-meters-average error.67 Yet these biologists assumed that, provided panthers were within 90 meters of pine forests for a majority of daytime-telemetry locations, they preferred pine forests.68 In using pixel data, but neither providing panther-position-value-uncertainty bounds, nor including habitat information for 224 meters around telemetry points, dry-habitat biologists used a biased, incomplete hypothesis that overestimated pine-forest importance and ignored the patchy landscape. A third dry-habitat bias, resulting from the partiality claim, is the already-mentioned problem that Maehr and coauthors used only daytime telemetry. Pilots of fixed-wing aircraft are reluctant, given alligators and no landing places, to do nighttime Everglades flights. By using only partial, daytime evidence to evaluate all panther-habitat hypotheses, dry-habitat biologists defined daytime or resting panther habitat as “preferred,” then named all other areas “avoided” habitat, although panthers use avoided areas for nighttime breeding, denning, and hunting.69 Thus, dry-habitat biologists used the partiality claim to accept the dry-habitat over the multiple-habitat hypothesis, despite 3 data-collection biases and begging habitat questions.70 Similar flawed science, based on the partiality claim, occurs in Laudan’s accepting industry claims that industrial-risk levels are negligible.71 He ridicules “scare stories,” “exaggerated reports,” and “media obfuscation” of “most of
us . . . most of the time” regarding risks like nuclear power and toxic chemicals.72 Instead, Laudan says people should accept “the straight facts,” avoid “editorializing,” and accept his views that industrial-pollution risks are trivial.73 However, Laudan claims this triviality because he uses the partiality claim. He uses part of his risk hypothesis, about acute fatalities, to confirm the whole hypothesis, about acute and later fatalities. Thus he says the Chernobyl nuclear accident caused 31 fatalities, and mining causes about 4 deaths per 10,000 miners.74 Yet both claims are false. They include only acute fatalities as the basis for affirming a general risk hypothesis about all fatalities, and thus make egregious errors because long-term statistical casualties, for example, from cancer, are about 10 times greater than acute fatalities; in the United States, 7,000–11,000 people die annually from acute workplace fatalities, but 62,000–86,000 die from non-acute, occupationally induced, later diseases like cancer.75 Laudan’s partiality claim thus biases science, grossly underestimates pollution risks, and victimizes sufferers of industrial pollution. After all, despite the many unknowns and the complexity of gene/environment interactions, by the 1980s the government claimed environmental pollutants cause up to 90 percent of all cancers and virtually all childhood cancers. In 2002 a classic, long-term New England Journal of Medicine study—of 90,000 people—showed that the environment makes “the overwhelming contribution to the causation of cancer.” 76 Using Laudan’s partiality claim also hinders developing more complete, alternative hypotheses, goals that other—less extreme—comparativists, like Kuhn, Imre Lakatos, and Feyerabend, seek.77 Why? If Laudan’s partiality claim is right,78 there are strong reasons neither to develop more complete, including nocturnal, panther-habitat hypotheses, nor to avoid the 3 temporal-spatial-data biases. Having accepted a flawed dry-habitat hypothesis, government panther-protection agencies are more likely to be underfunded because of inadequate pressure, on purely comparativist grounds, for future research. Consequently panther science is less likely to progress.79 Developers seeking Everglades land can simply use the partiality claim to promote accepting flawed, dry-habitat accounts and to resist developing better hypotheses—exactly what happened in the panther case. Given dry-habitat accounts, Maehr argued future panther research is unnecessary,80 thus causing failure to develop better hypotheses and losses of panther-research funding.81 Failure to develop alternative hypotheses is dangerous in practical cases like that of the panther or nuclear-weapons-test effects. Scientists accepted biased hypotheses of negligible human-health effects from above-ground, US weapons-testing, mainly because government-weapons proponents urged no development of alternative hypotheses. Consequently, no funding was available to investigate false government claims about trivial fallout risks. Special interests, including military interests, kept weapons-related epidemiological analyses from being done, just as Florida developers funded Maehr to keep further panther
research from being done. Thirty years after serious, weapons-related, health effects became evident, the US Congress called for assessing effects of weapons testing.82 Again the same vested interests succeeded in covering up the results for 15 years. They were made available only after fallout victims, “downwinders,” forced the US National Academy of Sciences and Congress to hold hearings about the coverup. 83 Cases like industrial risks, weapons tests, and the panther thus suggest that, while Laudan’s extreme comparativism may work theoretically, it is practically flawed. It requires a perfect world in which special interests do not block research that could threaten their profits. With blocked research, comparativist critics have few tools—like demanding representative data—to detect an obviously flawed, but comparatively best hypothesis. Why? Laudan-style comparativists neither have rules of evidence, nor take its lack to be problematic.84 Consequently they are more likely to accept biased hypotheses than those who reject the partiality claim.

The Two-Value Frame and the Rejection Claim

Another reason many panther biologists accept the dry-habitat over the multiple-habitats hypothesis, despite the former's representativeness, spatial, and temporal biases, is their following the rejection claim—that one ought never reject a hypothesis until a better one is available. Not having a better hypothesis, because they failed to encourage adequate data collection, they accepted a flawed hypothesis. Laudan-style comparativists thus sanction a too-simple, question-begging attitude to hypothesis-justification.85 Yet ambiguity often characterizes scientific hypotheses, especially more incomplete ones. As the panther case illustrates, the rejection claim can cause scientists to fall victim to an appeal to ignorance, to accepting the best hypothesis, despite ignorance about the adequacy of alternative hypotheses. The rejection claim thus allows scientists to fallaciously treat absence of evidence for alternative hypotheses as evidence of absence.

Accepting the rejection claim also led panther biologists astray because Laudan-style comparativist tenets are somewhat reasonable only when hypothesis comparison is unbiased. That is, if tests are comprehensive/fair, it makes sense to follow a preponderance-of-evidence rule, as argued in this book. Yet the flawed dry-habitat hypothesis gained dominance, in part, because of external factors such as inadequate panther-article refereeing in mainline journals and referees' deferring to dry-habitat-hypothesis proponents, like David Maehr and Robert Lacy.86 Why were referees unaware of the representativeness, spatial, and temporal biases in the hypothesis? With only about 80 panthers east of the Mississippi, few biologists study the Florida panther. Few could claim to know as much about
panthers as dry-habitat proponents, Maehr and coauthors, who never admitted using only daytime-telemetry and only north-of-I-75 data. One would have to know the original Florida Fish and Wildlife Conservation Commission (FFWCC) data set,87 then compare it with Maehr's articles, to see the omissions. Yet almost no biologists know this data set. A third practical reason for dry-habitat-hypothesis dominance is the absence of Florida-panther experts on the FFWCC, given its funding constraints. With low Florida-panther numbers, and no government scientists able to assess panther-habitat justification, only one biologist, David Maehr, dominated Florida-panther literature. Laudan-style comparativists miss this practical point: Comparativism works, even partially, only if comparisons are procedurally fair and avoid external/practical constraints, like inadequate refereeing.

In subscribing to the rejection claim, in paying inadequate attention to the probability of alternative hypotheses, comparativists also err in rejecting any reliable, independent, pre-comparative criteria for good science. Extreme comparativists who reject such criteria arguably do more subjective science than those who accept such criteria (e.g., Schaffner). In fact, Feyerabend criticized the early Kuhn for defining science in terms only of puzzle solving and not also in terms of goals.88 He said the early Kuhn made hypothesis justification too arbitrary, no different from criminal and other puzzle-solving activities. Similar criticisms apply to Laudan. In rejecting rules of thumb to assess problem-solving ability, Laudan's account of hypothesis justification can include external factors that are unscientific. It can allow more arbitrary choices than rule-governed analysis includes, such as rules for data representativeness. Laudan-style comparativists forget that scientific competition is legitimized partly by objective rules of fair play.89 Yet, as already noted, real-world science often compromises fair play. As earlier chapters revealed, disinterested, nonpartisan groups like NSF and NIH provide only 25 percent of US-scientific-research monies. Special interests—trying to show their products or pollutants are beneficial or at least harmless—control the remaining 75 percent.90 Laudan's comparativism is wholly inappropriate in such a world. Just as ethicists recognized that procedural requirements for fair play are never fully realized in the real world,91 scientists and philosophers of science need to recognize the analogous point. To the degree that scientific procedures are unfair, comparativist science is more likely to err. Relying on comparison, in the absence of fair play, is like relying on the invisible hand, in the absence of full market information and free choices.

The rejection claim also errs in presupposing a 2-value frame of selecting the best or better hypothesis and never rejecting a hypothesis until a better one is found. Yet in the face of uncertainty, arguably sometimes scientists should reject all extant hypotheses or argue for more research before making a hypothesis choice, as chapter 2 illustrated. Just as Bayesians recognize situations of certainty, risk, or uncertainty (see chapter 13), and just as experimentalists recognize that hypotheses can be confirmed/falsified/uncertain, comparativists ought to accept
at least 3 categories. Otherwise, neglecting uncertainty could lead to avoiding needed future research and to accepting false hypotheses that later would be difficult to dislodge, like dry-habitat accounts. Laudan might object, of course, that comparativists have the uncertainty option. However, his language requires replacing hypotheses only with other hypotheses, not avoiding choosing. "Avoiding choosing," for Laudan, involves "dogmatism about 'hard-core' assumptions" instead of comparing hypotheses.92 Besides, the comparativist failure to deal with uncertainty in hypothesis-justification is magnified by Laudan's requiring that scientific hypotheses, being compared, have "flesh-and-blood in the form of a clearly articulated formulation,"93 a requirement that ignores not-yet-worked-out scientific hypotheses. Yet Feyerabend and Lakatos both encourage theoretical pluralism, development of alternative hypotheses and theories,94 thus ameliorating uncertainty. Moreover, because Laudan rejects all prior-to-comparison standards of rationality, he must argue for rational hypothesis-justification at any time. But choosing a hypothesis at any time, independent of adequate data, is a recipe for ethical and scientific disaster. As chapters 13 and 14 show, scientific choices under uncertainty, in cases having welfare consequences, are partly ethical justifications, about appropriate behavior under uncertainty. Scientists alone thus do not have rights to make such choices. The people do. Laudan-style comparativists especially do not have such rights, given their flawed scientific standards.
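The contrast between the two-value and three-value frames just described can be made concrete with a short sketch. This is an illustration of the chapter's point, not a procedure drawn from any of its sources; the adequacy measure and its threshold are hypothetical placeholders.

```python
# A schematic contrast between the two-value comparativist frame and the
# three-value (accept/reject/uncertain) frame discussed above. The adequacy
# measure and its threshold are hypothetical placeholders.
from enum import Enum

class Verdict(Enum):
    ACCEPT = "accept"
    REJECT = "reject"
    UNCERTAIN = "withhold judgment; more research needed"

def two_value_frame(own_score, best_rival_score):
    # Rejection claim: never withhold judgment; the comparative winner is accepted.
    return Verdict.ACCEPT if own_score >= best_rival_score else Verdict.REJECT

def three_value_frame(own_score, best_rival_score, data_adequacy, threshold=0.8):
    # Even the comparative winner stays uncertain when its evidence is inadequate.
    if data_adequacy < threshold:
        return Verdict.UNCERTAIN
    return Verdict.ACCEPT if own_score >= best_rival_score else Verdict.REJECT

# Dry-habitat case: it wins the comparison (4 problems solved versus 1) but
# rests on daytime-only, spatially truncated telemetry.
print(two_value_frame(4, 1))                        # Verdict.ACCEPT
print(three_value_frame(4, 1, data_adequacy=0.4))   # Verdict.UNCERTAIN
```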

Conflicts of Interest and the Trump Claim

Perhaps one reason Laudan-style comparativists accept the apparently best scientific hypothesis, at any time, is their underestimating effects of practical circumstances, such as small numbers of Florida-panther experts. Although it seems impossible to frame scientific hypotheses in ways that avoid damaging effects of social circumstances,95 Laudan comparativists are more likely—than those who do not define rationality purely in terms of problem solving—to miss biased scientific frames and subjective hypothesis-justifications, as the case of ignoring 40 percent of panther data shows. Objectivity also is a problem for Laudan-style comparativists because, subscribing to the trump and rejection claims, they need not modify hypothesis-justifications because of activities like assessing statistical significance, confidence levels, and uncertainty analysis, all of which contribute to knowledge of probabilities. The trump claim allows problem solving to trump knowledge of probabilities. Because Laudan-style comparativists accept only "clearly articulated formulations" of hypotheses,96 absence of information need not inhibit their accepting some problem-solving hypothesis. Laudan-style comparativists, who justify failure to collect more evidence and to develop
alternative hypotheses, thus could threaten objectivity, as acceptance of the flawed dry-habitat account illustrates.97 Yet, this dominant researcher (Maehr), touted as “the foremost expert on the Florida panther,”98 has repeatedly used his dry-habitat hypothesis in paid testimony, on behalf of developers seeking to dredge/fill/develop the western Everglades, the only remaining Florida-panther habitat east of the Mississippi. In exchange for hundreds of thousands of dollars, Maehr has argued for development permits on grounds that denying them “is not necessary to protect the panther.” Why not? He says panthers could be relocated out of wet public lands and the Everglades and into private parcels in dry central Florida.99 As noted, one consequence of accepting trump and rejection claims and requiring comparison of only fully formulated hypotheses is failure to encourage further research, which, in turn, has encouraged scientific and practical-policy biases in favor of dry-habitat accounts. Because of promoting these accounts, Maehr has been able to testify for developers, so they have to pay panther mitigation costs for only a tiny fraction (the forested part) of land they develop in western Everglades. In 1999, for example, when Florida Rock Industries wanted to mine rock on 6071 acres of current Florida-panther habitat, the industry hired Maehr as a testifying consultant. Using his dry-habitat accounts, the developer provided Maehr’s table, “showing the impact to panther habitat (forested cover types only)” to the court.100 As a result, in February 2003 Maehr enabled the developer to obtain permits to buy 6071 acres of protected panther habitat, yet to pay panther-mitigation costs for only about 1 percent of the total acres used,101 only for the forested parts of the lands. Using his dry-habitat account also enabled Maehr to argue that cheaper, central-Florida land qualified as panther-mitigation habitat, when developers took expensive South Florida land. As a result of the flawed dry-habitat hypothesis, developers have paid lower mitigation costs for less land, all because Maehr used flawed comparativist science to defend a flawed hypothesis. Yet no dry, central-Florida land was purchased for the panthers, and none were moved. Without the trump and rejection claims, and requirements to compare only fully formulated hypotheses, Maehr could not have argued for the science and profit-related consequences his funders wanted. In response, Laudan-style comparativists might claim that foul play and conflicts of interest can always infect science, that Maehr was swayed by money. However, without the trump and rejection claims, and their requiring comparing fully formulated hypotheses, Maehr could not have gotten away with his bad science. Because Laudan comparativism has no pre-comparative scientific criteria, it can easily be used to manipulate any results. Objectivity, however, demands hypotheses get fair hearings, not just hearings relative to other hypotheses. Instead, Laudan’s comparativism may involve “shoehorning,”102 scientists’ attempting to make disparate hypotheses functionally commensurable, thus
comparable with each other,103 something other, more moderate comparativists do not face. Instead, moderate comparativists who reject the trump claim—and who make comparison necessary, not sufficient, for hypothesis justification— could demand full-hypothesis development before accepting or rejecting them. More moderate (than Laudan) comparativists have default rules for justifications under scientific uncertainty, as chapters 13 and 14 argue, and they require adherence to standards of avoiding bias. Thus moderate comparativists, like Schaffner, who reject the trump claim, have better resources to challenge flawed hypotheses and theories. For them, absence of adequate investigation is not grounds for hypothesis justification, as with Laudan. They are less vulnerable to fallacious appeals to ignorance, precisely because all their evidentiary standards are not dependent merely on competitor hypotheses. Again, Laudan’s analyses of industrial risk also show his trump claim can lead to societal harm. Discussing his chosen hypothesis, that the public exaggerates allegedly negligible industrial-environmental risks, Laudan allows scientific-hypothesis comparison with only a “clearly articulated formulation” of alternative risk hypotheses.104 Laudan says “unless someone can tell you what level of risk is associated with a given activity, then they [sic] have no business telling you that it is risky to begin with.”105 His dictum thus counts unquantified or unknown risks as having zero harm. Laudan says nothing about getting more information or assessing when potentially catastrophic consequences might trump considerations of probability. He would be forced to say that because scientists cannot specify a precise mathematical risk of nuclear war, therefore nuclear-war risks are zero. Laudan comparativists thus again fallaciously appeal to ignorance and encourage people to ignore what they do not know. Yet what you don’t know can hurt you. One remedy is using comparativism that rejects the trump claim.106

Objections

In response to the previous criticisms, someone might claim panther biologists were merely doing bad science, not comparativist science. Although anyone can do bad science, this chapter's argument is partly that it is harder for Laudan-style comparativists to avoid bad science because of having no pre-comparative evidence rules, comparing only fully developed hypotheses, and accepting the trump, rejection, and partiality claims. Because dominant panther biologists accepted Laudan comparativism, they had few checks and balances to prevent bad science.

People also might say poor panther science is better explained by developers' monetary influence, not different accounts of hypothesis justification. However, this chapter's claim is not that poor hypothesis justification is the only problem, merely a significant one. Obviously money influences science. Because
flawed Laudan-style science is easier to influence than methodologically sound science, practical philosophers of science can help prevent flawed science.

Another concern might be whether Laudan's misuse of risk data follows from his extreme comparativism. Perhaps not. This chapter argues merely that faulty risk evaluations are more likely among those who accept Laudan-comparativist trump, rejection, and partiality claims and who reject pre-comparative scientific rules such as probabilistic considerations.

Panther Lessons for Comparativist Scientists

In at least 3 ways, Laudan's extreme-comparativist accounts lead to faulty science. In requiring hypothesis-comparison only via problem-solving ability, Laudan forgets some hypotheses may be formulated incorrectly/incompletely, while others may affirm something false. He forgets that some scientific problems "are not solved but dissolved, removed from the domain of legitimate inquiry"—like the problem of the absolute velocity of the earth.107

In arguing only for hypothesis comparison and against pre-comparative-assessment rules, Laudan confuses ethical with epistemic judgments, desirability of consequences with assessing their probability. He assumes that the question, "Is this hypothesis good enough?" is the same as "Is this hypothesis superior to its competitors?" Using a 2-value, hypothesis-choice frame (accept/reject), not a 3-value frame (accept/reject/uncertain), Laudan encourages covering up theoretical uncertainties.

Laudan's flaws are not easy to fix because he cannot consistently argue for accepting the best hypothesis at any given time, independent of rules for rational science, yet argue for more research, before making hypothesis choices. Allowing further research undercuts his claim about hypothesis choice at any time. Hypothesis-choice at any time forces one into situations of mathematical/scientific uncertainty. But if so, contrary to what Laudan claims,108 ethics should play a role in comparative hypothesis-justification whenever hypotheses have consequences for public welfare. Societal behavior under uncertainty is also a values issue, not merely a scientific one.

Conclusions

If earlier arguments are correct, what might be done to improve Laudan-style comparativist accounts of hypothesis justification—given that theory comparison is essential to science? Instead of the trump claim,109 moderate comparativists might say relative problem-solving ability is only as important as questions of truth/probability/falsity.110 Instead of requiring clearly articulated formulations of competing hypotheses,111 moderate comparativists might
also require theoretical pluralism, developing alternative hypotheses before rejecting them. Instead of the rejection claim, moderate comparativists might admit grounds, like error statistics, for not accepting hypotheses. Even without an alternative, scientists should avoid Laudan’s confusing “not accepting” a hypothesis with “rejecting” it. Instead of the partiality claim, moderate comparativists might require the most complete evidence practically possible and not allow partial hypotheses to confirm general hypotheses.112 Of course, comparing alternative hypotheses is necessary to science. However, those who follow only extreme comparativism, as Laudan does, and who deny any a priori rational criteria for good science, thereby allow both flawed science and flawed policy based on it. Those who frame the questions control the answers. The extreme-comparativist frame, alone, is not sufficient for hypothesis-justification. Later chapters reveal other methods that are necessary for justification in science.


CHAPTER 11

Cracking Case Studies
WHY THEY WORK IN SCIENCES SUCH AS ECOLOGY

Investigating a local slum, sociologist William Whyte used case-study methods to explain why some poverty-level young people advanced to prominent careers, while others did not. Although his book discusses only one neighborhood in one city, its results are widely accepted as generalizable.1 Similarly, political scientist Graham Allison did a famous case study of the 1962 Cuban missile crisis. He explains why the US-Soviet confrontation did not lead to nuclear catastrophe, although the Soviets had placed offensive and not defensive missiles in Cuba. 2 Case studies often are used as research methods in sciences such as anthropology, political science, psychology, and sociology. Yet, many scientists believe case-study methods are appropriate only for preliminary, hypothesis-development phases of science, not for testing or justifying hypotheses. Other experts say case-study methods ought never be used in science. They claim the methods are less rigorous than other methods, cannot establish causal relationships, and are unable to support scientific generalizations. 3 Which of these views of case-study methods is correct? This is an important question, partly because case studies are so widely used, especially in social sciences. Besides, as chapter 9 showed, scientists often need reasonable alternative methods of justifying hypotheses—other than hypothesis-deduction. And chapter 10 argued that some alternative, comparative methods of hypothesis-justification also have serious flaws. Do case studies sometimes offer a better alternative to hypothesis-deduction?

Chapter Overview

This chapter argues that because some areas of science, like ecology, face problems with their proposed concepts and generalizations, they may need case-study methods of hypothesis-justification. Otherwise, they may be unable to draw any scientific conclusions. Using an ecological case study of the Northern Spotted Owl,
this chapter (1) characterizes case-study methods, (2) surveys their strengths, (3) summarizes and responds to case-study shortcomings, and (4) investigates and defends case-study methods for both developing and justifying conclusions. The chapter concludes that case-study methods—including informal inferences, rules of thumb, and systematic plans—can sometimes help make sense of a situation and allow reliable scientific conclusions. They often can tie scientific objectivity to scientific actions and practices, like those in specific case studies.

Why Some Sciences Need Case-Study Methods Why are case studies often needed? As chapter 9 reveals, some sciences cannot employ hypothetic-deductive methods, often because their required general laws are unknown. Nor can they use inductive methods, making generalizations on the basis of specific instances, especially if state variables, shared characteristics, and other relevant parameters are unknown. Ecology exhibits both these problems. Ecologist Tom Schoener warned that because using hypothesis-deduction was difficult or even impossible, ecology has a “constipating accumulation of untested models,”4 most of which are untestable. Showing the difficulty of using induction, ecologist Robert Peters complained that most ecological hypotheses/models/ generalizations either fail to describe the phenomena they purport to describe, or contain internal mathematical problems, or both. 5 If Schoener and Peters are right, case-study methods may be required in sciences like ecology. Its unique, singular situations may not always allow either hypothetico-deductive or inductive methods. For instance, community ecology is unlikely to ever have many, if any, simple, exceptionless laws applicable to a variety of communities and species. Fundamental ecological terms, like community and stability, are too vague to support precise empirical laws.6 For example, although “species” has a commonly accepted meaning, and evolutionary theory gives a precise technical sense to the term, biologists agree neither on what counts as causally sufficient or necessary conditions for a set of organisms to be a species nor on whether species are individuals.7 Ecological laws also are unlikely because apparent ecological patterns keep changing as a result of heritable variations and evolution.8 Evolution thus undercuts general ecological laws. Moreover, neither specific communities nor particular species recur at different times and places. Both communities and the species that comprise them are unique.9 Of course, every event is unique in some respects,10 and repetition of unique events is in principle impossible.11 Although the deductive-nomological model of scientific explanation, already discussed, and its associated hypothetico-deductive methods might be able to capture some of the uniqueness of an event, ecologists often do not have the historical information either to specify the relevant initial conditions or to know what counts as
unique events.12 Consequently, instead of developing their own general theories/ laws, ecologists are often forced to be content with a user science, a discipline based on borrowing methods and tools from other sciences. Although ecologists often apply useful findings about particular models to other situations/species/communities, such models are unlikely to lead to general, exceptionless laws because the ultimate, ecological-theory units—organisms—are few in number as compared with other theories’ ultimate units, such as subatomic particles, and they cannot easily be replicated. As a result, ecologists can rarely discount the random or purely statistical nature of events. One disturbance in one key environment may be enough to wipe out a species. Model applications are also limited because biologists do not know the natural kinds. Consequently any laws based on species categories may be impossible.13 If exceptionless ecological laws are unlikely, if species are not obviously natural kinds, and if each individual in a population is unique, ecology may need new methods for obtaining reliable inferences, for justifying hypotheses. Although several new methods are likely, this chapter describes, illustrates, and defends case-study methods but does not focus on their epistemological status.14 This is partly because of the complexities of sciences like ecology,15 and partly because an important committee of the US National Academy of Sciences and National Research Council argues for using case studies in sciences like ecology.16 Indeed, when it assessed using ecology in environmental problem solving, the committee focused on how ecological science uses practical, case-specific ecological knowledge, not some general ecological theory.17 Faced with no general ecological theories or laws available for environmental problem solving, the academy committee recognized that ecology’s greatest predictive successes occur in situations with weak or missing general ecological theory and involve only 1 or 2 species.18 Such situations suggest that ecology’s methodological and predictive successes are coming not from general theory, but from lower-level ecological theories and natural-history knowledge of specific organisms.19 As the authors of the National Academy report put it, “the success of the cases described . . . depended on such [natural-history] information.”20

Lessons from Vampire Bats One example in the famous National Academy report, that of the vampire bat, provides an excellent instance of valuable natural-history and case-study information when scientists are interested in practical problem-solving. 21 In the vampire-bat case, scientists wanted to find a pest-control agent that affected only the species of concern, the bat—that sucked blood from cattle and other animals. The specific natural-history information useful for a control, diphenadione, included the following facts: The bats are much more susceptible than cattle to the action of
anticoagulants, they roost extremely close to each other, they groom each other, their rate of reproduction is low, they do not migrate, and they forage only in the absence of moonlight.22 Rather than attempting to apply some general ecological theory, top down, scientists scrutinized this case, bottom up, to gain explanatory insights. 23 This academy success suggests that case-study methods might be applicable in unique situations where singular events cannot be replicated, as in the case of the Northern Spotted Owl. Since the 1970s Northern Spotted Owls have been at risk because of timber harvesting that has removed almost all accessible lowland, old-growth forest. As a result, they have been pushed to the rugged, mountainous old-growth forest of the Pacific Northwest. Scientists want to know how to both protect the owls and use the forest. To answer this question, scientists need to determine habitat characteristics required for owl nesting and survival; successful owl dispersal and distribution; owl-population sizes able to withstand environmental fluctuations and random demographic changes; and effective population sizes able to minimize genetic depression. Scientists have made progress on these questions. Regarding habitat, for instance, some ecologists concluded that spotted owls do not breed in young, second-growth forests.24 Using the framework of island-biogeographic theory,25 government and wildlife-agency scientists concluded that owl nesting and survival requires 191 habitat blocks of old-growth forest, each 50 to 676,000 acres; that the blocks ought not be more than 12 miles apart and connected by corridors or suitable forest lands (at least 40 percent canopy cover and average, breast-height-timber diameter of at least 11 inches); and that habitat blocks must contain at least 20 owl pairs.26 Congressional hearings, however, concluded that there is no confirmed general ecological theory that could justify the preceding conclusions, and that the scientists’ methods were unclear.27 Instead, the hearings showed that even the most prominent owl researchers filled the gaps in their limited data with untested or untestable general theories like island biogeography.28 Consequently, they accused scientists of employing inadequate rigor and drawing conclusions that other good scientists would not support. 29 Interestingly, although federal owl studies neither explicitly used nor defended case-study methods, this chapter shows that, upon examination, 30 case-study methods and informal inferences can show the federal-owl conclusions are correct. 31 In doing so, the chapter helps provide the rough outlines of a framework for using case-study methods for hypothesis-justification.

Case-Study Methods

Donald Campbell claims that case-study methods are "quasi-experimental," an interesting choice of terms because scientists sometimes classify their
hypothesis-testing methods as classical-experimental, quasi-experimental, and observational. 32 Classical-experimental methods involve manipulation, a control, replicated observations, and randomization. Observational methods may not include any of these components. Between these 2 methodological extremes lie quasi-experimental approaches, like case-study methods, that embody some manipulations but lack some features of classical experiments. Case-study methods are partly experimental, not observational or descriptive, because their goal is specification of cause-and-effect relationships by means of manipulating variables of interest. 33 They are quasi-experimental, however, in that controlling these variables often is difficult, if not impossible. In ecology, for instance, quasi-experimental methods typically involve some manipulation and partially replicated observations. The interactions are complex, 34 and there usually are uncertainties regarding subject and target systems, boundary conditions, bias in data, and the nature of the underlying phenomena. 35 As a result, usually it is impossible to use either classical-experimental or statistical methods, or even to specify uncontroversial null hypotheses. 36 The goal of case-study methods is to clarify/amend/evaluate/sometimes test cases. Unfortunately, in investigating particular cases, no simple method like hypothesis-deduction is applicable. Instead, one must follow methods— sets of procedures and rules of thumb—that help confront the facts of a situation, then look for ways to make sense of it through informal inferences. Often relevant or possible replication is unknown. Indeed, in some of the best case studies that are cited in the National Academy report, ecologists remained divided even about relevant variables. In the spotted owl study, 37 for example, some theorists claimed that limiting genetic deterioration is the most critical variable in preserving the owl and determining minimal population sizes. Other researchers, however, maintained that demographic, not genetic, factors are the most critical. Because of uncertainties about the relevant variables, researchers using case-study methods have often been forced to use informal causal, partly inductive, retroductive (see chapter 12), or consequence-based inferences in order to make sense of particular situations. 38 In the National Academy spotted owl study, for example, ecologists did this by using several inductive inferences based on observations about reproductive ecology/dispersal/foraging behavior. As such, they used quantitative natural history. 39 In using informal inferences, the case-study analyst has 2 main objectives: to pose and assess competing explanations for the same phenomena and to discover whether and how such explanations might apply to other situations.40 When they wrote All the President’s Men,41 for instance, Carl Bernstein and Bob Woodward used a popular version of case-study methods to assess competing explanations for why the Watergate cover up occurred and how their explanations might apply to other situations.42


Case-Study Components How should scientists assess competing accounts of the same case? Consideration of at least 5 factors is needed, including (1) the case-study research design; (2) the characteristics of the investigator; (3) the types of evidence accepted; (4) the analysis of the evidence; and (5) the case-study evaluation. The case-study research design (1) is a plan for assembling, organizing, and evaluating information according to a particular problem definition and specific goals. It links data to be collected and the resulting conclusions to the study’s initial questions. Because case-study methods are new, however, no accepted catalog of case-study research designs is available.43 However, most research designs have at least 5 components: the questions to be investigated; the hypotheses; the units of analysis; the logic linking data to hypotheses; and the criteria for interpreting the findings.44 In the academy’s spotted owl case study, scientists addressed 2 main questions: What are the minimal owl regional-population sizes necessary to ensure long-term owl survival? What are the amounts/distribution of old-growth forests, owl habitat, necessary to ensure owl survival? Although case studies involve many hypotheses, one owl hypothesis was the following, “This particular Spotted Owl management area (SOMA) is supporting as many pairs of owls as expected on the basis of calculations of Ne , expected population size.”45 The unit of analysis in the owl study was the existing population of individual owls, estimated at 2000 in the US Northwest. In other cases, the unit of analysis could be an individual organism, or multiple units of analysis. The most problematic aspect of case-study research design is the fourth component, the logic linking data to hypotheses. This link is an informal way to assess whether the data tend to confirm the hypotheses. However, because auxiliary assumptions and controlling parameters often are unclear, and case studies often represent unique situations, as already mentioned, scientists typically are unable to use hypothesis-deduction. Instead, they must use what Abe Kaplan calls pattern models of inference.46 Pattern models rarely give predictive power and instead merely extend data to formulate some hypothesis or pattern. For example, discussing the relationship between the annual number of traffic fatalities and automobile speed limit in the state of Connecticut, Campbell illustrated pattern matching.47 Each of his 2 hypotheses, that speed limits had no effect on fatality numbers, and that they had an effect, corresponded to different patterns of fatalities. Although he was not able to formulate an uncontroversial null hypothesis and test it statistically, Campbell concluded there was a pattern of no effects.48 He simply looked at the number of fatalities over 9 years, then determined there was no systematic trend. In using informal inferences to examine whether data are patterned, of course, one can always question whether an actual inference pattern is correct.
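As a concrete illustration of this kind of pattern matching, the following sketch compares rough before-and-after levels in a short annual series. It is only a schematic rendering of the informal inference Campbell describes; the yearly counts and the change year are hypothetical placeholders, not Connecticut's actual data.

```python
# A schematic sketch of Campbell-style pattern matching on a short annual
# series. The fatality counts and change year are hypothetical placeholders,
# not Connecticut's actual data.
fatalities_per_year = {
    1951: 325, 1952: 340, 1953: 318, 1954: 332, 1955: 324,  # before the change
    1956: 320, 1957: 329, 1958: 315, 1959: 327,             # after the change
}
change_year = 1956  # hypothetical year the lower speed limit took effect

before = [n for year, n in fatalities_per_year.items() if year < change_year]
after = [n for year, n in fatalities_per_year.items() if year >= change_year]

# With only 5 points before and 4 after, formal statistical testing would be
# unreliable; the informal inference just compares rough levels and trends.
print(sum(before) / len(before), sum(after) / len(after))
# Similar averages and no visible downward step match the "no effect" pattern;
# a sharp, sustained drop after the change year would match the "effect" pattern.
```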


In the owl case study, ecologists used many patterns from theoretical population genetics and ecology, including specific formulas for factors such as F, the inbreeding coefficient. Because some variables in the formula for F cannot be measured in wild populations, the ecologists’ informal inferences about actual F are questionable.49 Likewise, although Campbell claimed, for example, that his data matched one pattern better than another, it is unclear how close matching data must be. Campbell could not use statistics to compare patterns because each data point was a single number, fatalities/year, and he had 5 data points prior to, and 4 after, the reduced speed limit, thus an insufficient basis for reliable statistical testing. If one analyzes case-study literature, however, one can discover several criteria for assessing quality of informal inferences and associated research designs. Robert Yin and Tracy Kidder, 50 for example, suggest construct validity, internal validity of the causal inferences, external validity or applicability of the case study, and reliability. One tests research-design reliability by using a protocol, an organized list of tasks, procedures, and rules, specified ahead of time, to help account for all relevant variables and methods. In the academy owl study, the protocol consisted of 8 steps. One early step was to perform owl censuses on all national forest land, then to perform risk analyses of demographic, generic, and geographical results of different management alternatives. The final step was to monitor various owl-management areas to determine whether those managing the owls were achieving their goals. 51 To test research-design reliability requires developing, amending, and continually improving a database against which case-study findings can be reassessed. The last step in the owl protocol, for example, described just such an updating of the owl preservation plan and conclusions. One also assesses this reliability partly by determining whether another researcher, evaluating the same case and evidence, would draw the same conclusions. In analyzing case-study evidence, scientists employ 3 general analytic and methodological strategies. The first is developing a case-study description capable of organizing data and hypotheses. The owl description emphasized small-size owl populations and their vulnerability from habitat destruction, chance factors, and genetic deterioration. Formulating such descriptions presupposes both developing categories that enable recognizing/collecting data—and looking for regularities in the data. A second evidential strategy is hypothesis formation, using inductive or retroductive inferences to discover patterns or possible causal explanations for the data. Organizing inductive events chronologically often assists hypothesis-formation, providing a basis for time-series analysis. So can database-management programs. In the owl study, one important hypothesis was that long-term protection of local owl populations might require exchanging individuals among regional populations. A third evidential strategy, informal testing, consists of using informal inferences to compare empirical, owl-survival results with predictions generated by case-study hypotheses and causal explanations.


Available statistical techniques are unlikely to be relevant here, because each data point in the pattern is probably a single point. Nevertheless, in the owl case, ecologists have been able to test their models of owl-population viability by using research on long-term viability of other taxa. 52 After analyzing the evidence, scientists can use informal logic to draw conclusions and compose case-study reports. In the owl case, one group of scientists concluded that internal factors, like fertility changes, and external stresses, like habitat disturbances, increase extinction risk; that these factors can be offset only by immigration from other populations; and that regional populations of approximately 500 are necessary to protect owls for several centuries. 53 The final component of case-study methods is assessing conclusions. Often this can be accomplished by using the 4 criteria already mentioned—construct validity, internal validity, external validity, and reliability. Other criteria include standard explanatory values like completeness, coherence, consistency, heuristic power, predictive power, and so on. Often these evaluations are best accomplished through outside review. 54 In the owl study, ecologists have on-going monitoring and research plans to evaluate critical assumptions in their conclusions, metapopulation models, and protocol. 55
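The population-genetics quantities that recur in these inferences, the inbreeding coefficient F and the effective population size Ne, are connected by a standard idealized textbook relation (not necessarily the specific formula the owl researchers used): in a closed, randomly mating population, inbreeding accumulates by roughly 1/(2Ne) per generation, so after t generations

\[
\Delta F \;\approx\; \frac{1}{2N_e},
\qquad
F_t \;=\; 1 - \Bigl(1 - \frac{1}{2N_e}\Bigr)^{t}.
\]

For example, an idealized population with Ne = 500 accumulates inbreeding at roughly 0.001, or 0.1 percent, per generation, which illustrates why small effective populations deteriorate genetically much faster and why viability analyses of the kind described here treat Ne as a critical parameter.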

Deficiencies in Case-Study Methods

Obviously case studies can easily be biased by practitioners. 56 Although there is no fail-safe way to prevent all case-study bias, one way to deal with it is to realize that bias can enter the conduct of all science. Moreover, science does not require that scientists be completely unbiased, but only that different scientists have different biases. 57 If they have different biases, then alternative conceptual analyses, accomplished by different scientists, will likely reveal these biases. Hence, it is important that case-study methods employ informal inferences, independent data, and other cases to help confirm their results. One also might avoid bias, or at least make it explicit, by developing rules for assessing similarities among system components and initial and boundary conditions, and by using multiple methods and sources of evidence. 58 Another response to case-study bias is realizing that it exists only because bias can be recognized and because of the flexibility of case-study methods. 59 For example, in case-study work on the gopher tortoise, ecologists discovered many insights, such as the directional positions of entrances to tortoise burrows, although they had neither algorithms nor hypothesis-deduction to guide them.60 Moreover, in certain unique situations, not subject to statistical testing, there are no alternatives to case-study methods. Because they are organized means of obtaining information, are able to be criticized, and proceed step-by-step, via problem-definition, research design, data collection, data analysis, composition of results, and report, case-study methods can be
used in objective or unbiased ways. For example, many scientists and philosophers of science have repeatedly argued that a given case study does not illustrate what its practitioners claim; does not fit the model imposed on it; is factually deficient; misrepresents the phenomena;61 fails to take account of certain data;62 or leads to inconsistency, dogmatism, or faulty inferences63 (e.g., affirming the consequent or assuming that 2 conjoint phenomena have a cause-effect relationship).64 Such criticisms show that because case-study methods are open to critical analysis and are revisable, they can be rational and objective in at least 2 ways: Expert practitioners can distinguish better or worse case-study-method applications. Following the method, and thinking one is following it, are not the same.65 In arguing that case-study methods need not be subjective in a damaging sense, consider Ludwig Wittgenstein’s insight, discussed later, that objectivity is not tied to propositions but to people’s practices. As Wittgenstein puts it: “Giving grounds . . . is not a kind of seeing on our part; it is our acting.”66 Admittedly, more traditional accounts of objectivity are tied to seeing, to mind-independent beliefs about the world, to impersonality, and to a set of judgments. Typically we do not attach blame to judgments that fail to be objective in this traditional sense. The newer Wittgensteinian account of objectivity, however, is tied to actions, impartiality, and methods or procedures for behaving in ways that lack bias. Usually we do attach blame to persons who fail to be objective in this sense.67 If a judgment is thought to be objective in the traditional sense, obviously a single counterinstance might discredit it. Objectivity in this sense is not compatible with error. However, objectivity in the second or practical sense is compatible with error. Distinguishing these 2 senses of objectivity shows that case-study methods—tied to actions and practices, not rules/propositions/ deductive logic—are not infallible but may be objective in a scientific sense. Another problem with case-study methods is that they provide little basis for scientific generalization.68 In the owl study, for example, generalizations about habitat requirements were problematic because owls’ needs varied from place to place. In California, each pair of owls used 1900 acres of old-growth forest. In Oregon, this acreage was 2264; in Washington, 3800.69 While concerns about the ability to generalize are well placed, 2 considerations mitigate it. First, if Nancy Cartwright,70 Jim Fetzer,71 Paul Humphreys,72 and others are correct, it may be possible to establish some reliable singular causal claims without first establishing regularities. Second, the single case study and the single experiment face the same problem. Both can be generalizable to theoretical propositions but not to populations and universes. Both face the problem of induction. Although scientists must replicate a case or experiment in order to generalize from it, mere replication is never sufficient for theorizing. Scientific goals are not adequately accomplished merely by enumerating frequencies. Nevertheless, single cases or experiments, even in physics, often are sufficient for scientific theorizing. As Karl Popper pointed out,73 the severity of the tests, not mere replication, is important. Cases such as parity nonconservation in physics and the Einstein–de Haas experiment likewise suggest that often single convincing experiments decide controversies.74



Hence case-study methods need not be seriously flawed in providing little basis for generalization. Other criticisms of case-study methods are that they enable one to evaluate only those interpretations that case-study methods already presuppose. However, any method is able only to evaluate hypotheses already discovered.75 Moreover, to the degree that case-study methods are open to criticism, their conclusions are not merely begged and may, to some degree, be tested. Marshall Edelson and Adolph Grünbaum both provide insights regarding such testing. Recognizing that direct replication of a case study is typically impossible, Edelson argues persuasively that partial replication, inference to the best explanation (see chapter 12), and pitting a conclusion against rival hypotheses (see chapter 10) are all useful. Although Grünbaum says individual-case data cannot be used to test psychoanalytic propositions, for example, he says one can use induction in experimental and epidemiological tests of case-study conclusions. One also can seek confirming instances or exclude plausible alternative explanations. Though Edelson and Clark Glymour recognize that testing case-study conclusions is difficult, they appear unwilling to relegate case-study methods only to hypothesis-discovery. Doing so would discourage rigorous argument about relationships among hypotheses and evidence and might force scientists to presuppose accounts of testing that are rarely applicable in science.76 Another allegation against case-study methods is that, because they follow no hypothetico-deductive scheme, they fall victim to uncritical thinking,77 erroneous inductive inferences, or fallacies of false cause.78 Community ecologists, for example, have been divided recently over the causal role of competition versus random chance in structuring biological communities. As a consequence, different ecologists, using the same case studies, often make incompatible, controversial inductive inferences regarding competition.79 However, the solution to such controversies is not to abandon case-study methods, but to subject them to repeated criticism, reevaluation, and discussion, to seek independent evidence and alternative analyses of the same case. 80 Grünbaum’s criticisms of many of the central causal and inductive inferences of psychoanalysis, 81 for example, offers alternatives to case analyses provided by doctrinaire Freudians. Likewise, in evaluating evidence of a species’ shared genealogy, Elliott Sober has discussed in detail which sorts of causal inferences are justified and which are not. 82 In general, much of the literature that discusses problems with the principle of the common cause or inductive inferences helps avoid questionable case-study inferences. 83

Are Case-Study Methods Scientific?

Although case-study methods typically employ informal inferences that are difficult to evaluate, they have several assets. They enable scientists to gain a measure
of practical control over real-world problems, like pest management. Their informal inferences often allow rough generalizations that may suffice for sensible explanations, for example, about a taxon's susceptibility to anticoagulants, even though exceptions cannot be treated in systematic ways.84 By facilitating such inferences, case-study methods show how to use case-specific descriptions to study different, but partially similar, cases. Apart from practical benefits, case studies are also important because their systematic procedures are applicable to unique situations that are not amenable to replication, statistical testing, or hypothesis-deduction. Case-study methods provide organized frameworks for consideration of alternative explanatory accounts, for doing science in situations where exceptionless empirical laws are not evident or cannot be had. They enable learning about phenomena when the relevant behavior cannot be manipulated, as is often the situation in ecology.85 Case studies are also valuable for some of the same reasons that Kuhnian exemplars are important. By example, they show how to do science and how one problem is like another.86 A final benefit of case studies is that although they often are unable to provide information about regularities or to confirm hypotheses, they enable scientists to see whether a phenomenon can be interpreted in the light of certain models and assumptions. 87 More generally, as the authors of the National Academy study on applying ecological theory put it: "The clear and accessible presentation of the [case-study] plan . . . focuses the debate and research."88 It enables scientists to deal with a full range of evidence in a systematic, organized way and therefore to uncover crucial details sometimes missed by more formal scientific methods. Despite these benefits of case-study methods, critics are likely to object that they are not scientific. Moreover, any comprehensive case-study defense would require defending some causal account of explanation that includes how causality operates and how at least 2 independent avenues function to advance our understanding of phenomena. Nevertheless, no fully developed, specific account of causality is yet available. 89 Because it is not, this chapter argues that a causal account of scientific explanation, focused on singular claims and unique events, is prima facie plausible for at least 4 reasons. One reason is pragmatic. Case-study methods are more useful, appropriate, and workable than other methods, like hypothesis-deduction, when dealing with unique situations in which testing and experimental controls are impossible.90 Often there is no reasonable alternative to case-study methods. A second justification is that case studies are scientific by virtue of being embodied in the practices, actions, and dispositions of the scientific community, including its ability to see situations as exemplars, as like each other.91 As both Kuhn and Wittgenstein showed, there is room in scientific method for the rationality of practice, for a way of grasping a rule that is exhibited in obeying the rule, rather than in being able to formulate it.92 Moreover, because this behavior
involves an implicit reference to a community of persons,93 case-study practices need not be purely subjective.94 Another Wittgensteinian criterion for the correctness of case-study practices is whether they enable us to make sense of further practices and to see likenesses among different cases.95 A third prima facie reason case-study methods are scientific, although they have no exceptionless rules to guide practices, is that no science relies solely on exceptionless rules. This is because, as Saul Kripke and Nancy Cartwright note,96 no rule can determine what to do in accord with it, because no rule for the application of a rule can fix what counts as accord. Therefore, every rule generates the same problem, how to apply it. If the practices of experts ultimately guide scientists in applying rules, then practices can also guide scientists in applying case-study methods.97 They may be appropriate to science if one conceives of scientific justification and objectivity partly in terms of practices that are unbiased, rather than merely propositions that are impersonal. For Wittgenstein, practices are normative, not purely subjective, partly because their existence requires multiple, non-unique occasions.98 Although case-study methods typically are applied to a unique phenomenon, even accounts of unique events may be tested indirectly on the basis of past scientific practices, heuristics, or rules of thumb.99 Moreover, as mentioned, even unique events may be explained if one is able to supply initial conditions.100 A fourth reason case-study methods are scientific is that they presuppose little more than other methods of inference. That is, workable general causal laws require reference to singular claims. All forms of inference, whether deductive or inductive, presuppose recognizing regularities, parallel cases, which presupposes recognizing similarities and differences.101 Deduction addresses all possible singular instances that support a particular claim, whereas case-study methods address only some of the singular instances, often one at a time. Both may transmit truth, but neither is alone capable of initiating truth. Initiating truth requires some sort of tacit knowledge in the form of both ultimate premises and knowing what explains particular cases. Indeed, all knowing requires appeal to tacit knowledge, as when we discover novelty, reorganize experience, understand symbols, distinguish what is meant from what is said, understand in a gestalt or holistic way, value something, know reasons and not merely causes, understand subsidiary rather than focal points, grasp unique things, and make methodological value judgments (see chapter 7). Whenever we apply a generalization, law, or theory to a particular case, as Kripke realized,102 we use tacit, not explicit, knowledge. It also tells scientists what needs to be explained and what counts as a criterion for justification. As Wittgenstein noted in discussing the foundations of mathematics, even mathematical proofs proceed by means of analogy, tacit knowledge that one case is like another. Hence, if there is a problem with the tacit knowledge that characterizes case-study methods, there are problems with the tacit knowledge that underlies all science.103



Objectors might say that, although tacit knowledge is necessary for all science, it is not sufficient. In other words, in contrast to deductive scientific logic, which employs tacit knowledge, case-study methods bear the additional burden of being able, at best, only to show the rationality of a particular scientific conclusion, not to confirm it.104 However, merely being able to show the rationality of particular conclusions need not count against scientific methods. For them to be defective, confirmation—or something more than an illustration of rationality—must be possible where they are used. As with the spotted owl case, however, it is not obvious that hypothesis-deduction or other methods can confirm hypotheses in situations where case-study methods are used. If not, objectors must either deny scientific status to many sciences like ecology and anthropology, on grounds that they fail to use any deductive methods of confirmation, or admit that case-study methods sometimes can provide reliable grounds for hypothesis-acceptance.

Conclusion

Obviously the best way to defend case-study methods is to illustrate what they can do in real situations, like preserving an owl species or controlling the vampire bat. Both studies showed that practical, precise knowledge of particular taxa often is important to science when no general theory is available. This practical, precise knowledge—rules of thumb and informal inferences, coupled with conceptual and methodological analyses typical of case-study methods—is an important departure from much earlier theorizing. In ecology, these earlier methods led to untestable principles and deductive inferences drawn only from mathematical models. However, case-study-based, informal inferences often are more capable of being realized in sciences like community ecology than are hypothetico-deductive inferences based on exceptionless general laws. While classical hypothesis-deduction (see chapter 9) is an ideal in science, this chapter shows that the ideal often cannot be realized. If not, case-study methods of hypothesis-justification are helpful. Science, after all, must assess the real world and not a non-existent ideal world.


CHAPTER 12

Uncovering Cover-Up
INFERENCE TO THE BEST EXPLANATION IN MEDICINE

Blaming the victim is an old strategy. African American victims of centuries of racism are accused of being lazy. Female victims of rape are accused of “asking for it.” Poor people, people of color, and women are more likely to be accused by medical personnel of having ailments that are merely “in your head.” As a result, many studies document their receiving substandard clinical care. Is the problem mainly racism, classism, or sexism? Sociologist Theodor Adorno, who in 1947 first coined “blaming the victim,” called it one of the most sinister characteristics of the fascist character.1 Fascist or not, blaming the victim also often occurs after technological accidents. This chapter shows that science often sanctions the blame, and that the victims typically are not mostly poor, female, or people of color. Of course, minority victims often receive the worst blame, as happened in the 2012 Chevron refinery fire that occurred in Richmond, California. Chevron is the largest stationary source of greenhouse gases in California, and the majority–African American population of Richmond has long had higher rates of cancer and respiratory illnesses because of plant proximity. Thus, when the refinery exploded and burned for hours in 2012, it released thick black smoke laden with particulate, benzene, nitrogen-oxide, and polycyclic-aromatic-hydrocarbon pollution—none of which has any safe dose. The fire/contamination forced 15,000 people into local hospitals. Government health-and-safety experts warned residents to stay indoors to help protect themselves from the lethal vapor cloud. Although burning oil is a notoriously damaging contaminant, and government fined Chevron $1 million for safety violations that caused the fire/contamination, many bloggers blamed the victims. They said Richmond victims were faking their illnesses just to get a “handout.”2

Chapter Overview

Like the Richmond bloggers, do physicians and scientists ever wrongly blame the victim? This chapter argues that they do, especially when they use flawed scientific
methods. The chapter investigates a fourth method of hypothesis-justification, inference to the best explanation. It also illustrates why this method is superior to the 3 methods discussed in previous chapters, namely, hypothesis-deduction, extreme-comparativist methods, and case studies. Although theory comparison is essential to science, chapter 10 showed that using extreme-comparative methods leads to flawed science. Instead, this chapter defends a more moderate account of theory comparison, one that takes account of evidentiary standards, truth, and probability. First, the chapter summarizes the 3 major nuclear accidents—at Fukushima (Japan) in 2011, Chernobyl (Ukraine) in 1986, and Three Mile Island (Pennsylvania) in 1979. Second, it outlines government/industry cover-up of many radiation-caused consequences of these accidents. Instead they blamed the fatalities on victims’ stress. Next the chapter shows how flawed statistical-causal methods help explain why official US-government claims err in saying that radiation from the Pennsylvania accident killed no one, but caused only mental-health problems. 3 Finally, the chapter argues that because many scientific data are non-experimental—like those from accidents—and because neither inductive nor hypothetical-deductive methods can adequately assess non-experimental hypotheses, inference to the best explanation, including theory comparison, often provides the best method of scientific justification. Illustrating methodological analysis, the chapter also shows the egregious scientific errors of scientists who blame the victims harmed by accidents.

Fukushima Daiichi

Consider the Fukushima Daiichi nuclear accident. Until March 11, 2011, atomic energy supplied about 30 percent of Japanese electricity. By May 2012, all Japanese commercial nuclear reactors were closed.4 Despite massive citizen protests, some began to go back online. 5 By mid-2013, however, only 2 Japanese reactors were operating.6 What happened? Beginning on March 11, 2011, multiple earthquakes and a tsunami hit Japan. They left nearly 16,000 dead; more than 3000 missing; more than 6000 injured;7 and still more to die from radiation-induced cancer from nuclear reactors. After flooding cut cooling water to the Fukushima Daiichi reactors and radioactive-fuel pools, nuclear-plant fires, 3 reactor meltdowns, extremely intense radioactive releases, and at least 4 explosions occurred. They spewed radioactive contamination around the globe. Roofs and walls blew off several reactors. Gaping holes were ripped in radioactive containment. Nuclear fuel melted through thick steel-and-concrete-reactor bottoms. Plant-radiation doses soared to 500 millisieverts (mSv)/hour, roughly a million times higher than normal-background radiation. Data from the United Nations International
Agency for Research on Cancer (IARC) predict that after only 2 hours, these doses would cause all the cancers of those exposed. MIT nuclear-engineering PhD Kenichi Ohmae says that "from the amount of fission material released and from the size of the hydrogen explosions," the core melts and containment catastrophes were "undeniable." Yet, the Japanese government denied the meltdowns for 3 months—and denied radioactive-containment destruction for 6 months. 8 The Fukushima utility, Tokyo Electric Power Company (TEPCO), was a partner in the cover-up.9 Nine months after the accident began, in late 2011, TEPCO announced the 3 melted reactors were in "cold shutdown." Yet years later, high radiation levels still prevent workers from entering the entire plant.10 Even outside the buildings, TEPCO says radiation levels could kill someone within 15 minutes.11 In 2013, more than a year after the cold-shutdown claim, the cooling system for 4 radioactive-fuel ponds at 3 reactors suddenly failed for 30 hours, threatening massive radiation releases. Months after the cold-shutdown claim, a leaking reactor gushed 8.5 tons of radioactive water and was not stopped for another month. The science journal Nature says the severely damaged reactors and radioactive-fuel pools will continue leaking radioactivity "for another few years at least. . . . TEPCO must continue to [actively] inject water at the rate of around half-a-million litres a day." The eventual cleanup will take many decades, but given still-lethal radiation levels, no one knows when it can begin.12 Such a situation seems hardly a cold shutdown. In 2014, more than 3 years after the catastrophe began, radioactive water is still gushing into the Pacific Ocean, the government admits it underestimated radiation levels by at least a factor of 5, and Japanese nuclear engineers say Fukushima remains a catch-22 because all alternative ways of removing damaged nuclear fuel rods could kill many people. "If you hoist them up in the air, huge amounts of radiation will come out . . . and people nearby will die."13 Yet, not removing them also could kill people when the damaged facility collapses entirely. Former US nuclear-industry vice-president Arnold Gunderson warns this collapse could "destroy the world environment and our civilization," creating "a disaster worse than the three [Fukushima] reactor meltdowns. . . . People should get out of Japan, and residents of the West Coast of America and Canada should shut all of their windows and stay inside."14
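As a rough arithmetic check on the "million times higher" figure above, using the approximately 3.6 mSv/year natural-background rate cited later in this chapter (about 4 x 10^-4 mSv/hour):

\[
\frac{500\ \text{mSv/hour}}{3.6\ \text{mSv/year} \div 8760\ \text{hours/year}} \;\approx\; \frac{500}{4.1\times 10^{-4}} \;\approx\; 1.2\times 10^{6}.
\]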

Fukushima-Daiichi Deaths: Stress versus Radiation

How serious was the 2011 Fukushima accident? On one hand, more than 3 years after the accident, several hundred thousand people were still displaced from their homes. Unremediated, radioactive soils contaminate areas of Tokyo, 200 miles away, including some playgrounds and schools. In the
United States, such soils would be considered radioactive waste, dug up, and shipped to waste-management sites.15 Physicians for Social Responsibility, winner of the 1985 Nobel Peace Prize, says Fukushima radiation releases could be several times greater than those from Chernobyl. Its cesium releases, alone, already equal those from 168 Hiroshima bombs. The pro-nuclear US Nuclear Regulatory Commission (NRC) likewise warns that Fukushima threats and "catastrophic explosions . . . could persist indefinitely"—one reason the international scientific community and US government recommended a Fukushima 50-mile-radius, 2-million-people evacuation.16 On the other hand, the Japanese government, financially responsible for cleanup, claimed it was safe to evacuate only 130,000 people within a 12-mile radius. The government also says stress, not radiation, is the accident's main consequence: "If you live in an area outside of the [12-mile-radius] evacuation area, you do not need to worry about . . . receiving any radiation at all. . . . If you worry too much about radiation, it causes mental and physical instability."17 Denying the claims of the international scientific community, the Japanese government likewise says Fukushima released only about one-tenth as much radiation as Chernobyl. Yet Fukushima had 3 times more meltdowns, 4 times more explosions, and much more radioactive fuel than Chernobyl.18 Who should people trust about Fukushima? If government and industry are correct, it was not as harmful as Chernobyl, no Fukushima-radiation-induced fatalities will occur outside the 12-mile-evacuation zone, and stress is the main problem. If the international scientific community is correct, Fukushima fatalities will be worse than the 1 million, mostly-premature-cancer, Chernobyl deaths confirmed by the New York Academy of Sciences; many cancers will occur outside the 12-mile-evacuation zone; and stress is not the main problem.19 Already scientists have shown that long-term, air-radiation readings, 19 miles northwest of the plant, were 0.8 mSv/hour, about 27 times higher than normal-background radiation. Yet, as chapter 4 revealed, there is no safe dose of ionizing radiation, normal-background radiation causes 3–6 percent of all fatal cancers/year, and dose-effects are additive. IARC data show that after 2 months of this 27-times-higher exposure, most fatal cancers of those exposed would be attributable to Fukushima radiation.20

Chernobyl Deaths: Stress versus Radiation

Chernobyl and Fukushima will continue to cause harm, partly because many victims live in areas with long-lived-radioactive contamination. The UN's World Health Organization (WHO) says 7 million people are receiving or eligible for benefits as Chernobyl victims living in radioactively contaminated areas. 21 They are like the nearly 2 million Japanese living in
Fukushima-contaminated areas from which the international scientific community recommends evacuation. Contradicting WHO, however, the main international commercial-nuclear-lobby group, the World Nuclear Association (WNA), blames the victims for their stress. It says Chernobyl's "ionizing radiation killed only a few occupationally exposed people . . . [and] did not expose the general population to harmful radiation doses. . . . Psychosomatic disorders . . . were the only detectable health consequences. . . . Panic and mass hysteria could be regarded as the most important." 22 In 2011, the WNA again blamed the victims for stress, claiming Chernobyl's "biological and health effects . . . cannot be attributed to radiation exposure . . . and are much more likely to be due to psychological factors and stress."23 Are Chernobyl and Fukushima fatalities mostly stress-induced? Should more people be evacuated from contaminated areas? Scientific analyses could answer such questions, but governments and nuclear utilities control all accident sites/data, giving independent university scientists little/no access. Instead, scientists are limited to inserting accident-radiation-release data into the confirmed IARC radiation dose-response curves, then predicting the number of radiation-fatality responses, based on radiation-dose input. Short of such statistical techniques, IARC's Elisabeth Cardis warns that the only way to assess Fukushima/Chernobyl health harms is by constant population monitoring. 24 Yet because monitoring is costly and would trigger increased radiation-protection/cleanup costs, the government is not funding monitoring. Consequently scientific controversy over nuclear accidents will continue, with groups such as the WNA claiming that Chernobyl killed only a "few occupationally exposed people," and medical scientists—such as those publishing with the New York Academy of Sciences—claiming that Chernobyl will cause 1 million premature deaths. Of course, one could "follow the money" to help decide whom to believe, based on each camp's financial conflicts of interest. Typically, those who profit from fission minimize nuclear-accident harms, while medical scientists—free from these financial conflicts of interest—do not. One also could examine the scientific methods used by the 2 sides.25

Three Mile Island Deaths: Stress versus Radiation

To examine the respective scientific methods and to bypass problems with lack of access to accident-radiation-dose data, this chapter evaluates radiation fatalities in cases where more data are available, as in the Pennsylvania accident. Although the US government and nuclear industry claim "no member of the public died" because of Three Mile Island, 26 most university medical scientists who have studied the case disagree. They say it has killed and will kill many people prematurely. Four years after the accident, epidemiologists agree that a
64-percent-cancer-incidence increase occurred within 10 miles of the plant. 27 If census data are correct that this 10-mile population is about 200,000, and if baseline cancer incidence is about 1 percent per year (2000 cases/year), a 64-percent increase is about (0.64)(2000), or roughly 1,300 additional cancers/year; the accident may thus have caused on the order of 1,300 excess, premature cancers per year within 10 miles. Nevertheless, epidemiologists disagree about what caused this increase. Holding the majority position, nuclear-industry-funded scientists say stress has caused the deaths. Holding the minority position, independent university and medical scientists say radiation caused them. Stress-hypothesis proponents typically accept industry/government assumptions that the Pennsylvania nuclear-accident-radiation doses were no more than 100 mrem, about one-third of annual background radiation. Consequently, they deny that accident radiation affected overall mortality. 28 Instead, as occurred at Fukushima and Chernobyl, industry scientists say accident-related stress likely caused nearby cancer and mortality increases. 29 Scientists supported by nuclear-industry monies from the Three Mile Island Health Fund—mainly at Columbia University30 and the University of Pittsburgh—support the stress position. 31 Radiation-hypothesis proponents, independent university/medical scientists not funded by the nuclear industry, typically reject industry/government assumptions of low Three Mile Island doses. Consequently they say radiation, not stress, likely caused the agreed-upon, increased fatalities. 32 These scientists are mostly physicians working for governmental or non-governmental agencies or for institutions such as the University of North Carolina. 33 Which scientific camp is more likely correct?

Climate-Change Background

This answer is important, partly because nuclear safety, carbon emissions, and costs are crucial issues in deciding whether to promote atomic energy as a response to climate change. If fission is less safe than government/industry claim, and if the Chernobyl, Fukushima, and Pennsylvania nuclear accidents killed far more people than admitted, nuclear power is an unlikely solution to climate change. Besides, recent analyses show that, once all fuel-cycle stages are considered, fission is as carbon-intensive as natural gas. Full-fuel-cycle, greenhouse-emissions ratios, per kWhr of electricity produced, show that nuclear emissions are roughly 9 times greater than those from wind, and more than 4 times higher than those from solar-photovoltaic: coal 60 : gas 9 : nuclear 9 : solar 2 : wind 1. 34 Regarding nuclear-energy costs, credit-rating firms say nuclear electricity is much more expensive than that from natural-gas or scrubbed-coal
facilities—at least 15 cents/kWhr, even before counting taxpayer subsidies that pay most nuclear costs, including construction, reactor-decommissioning, permanent-waste-storage, full-insurance, and other expenses. 35 Unlike other energy-production methods, fission also has no economies of scale. 36 According to the pro-nuclear US Department of Energy (DOE), nuclear-fuel-cycle prices are much higher than current US-solar-photovoltaic-cycle prices, 37 and more than triple current US-wind-cycle prices of 4.8 cents/kWhr. 38 In fact, DOE says existing renewable-energy technologies could provide 99 percent of year-2020 US electricity. 39 Yet for the last half-century, solar and wind together have received roughly one-twenty-fifth the US government subsidies that commercial-nuclear fission has.40 Safety, however, is the crucial nuclear question, the one this chapter addresses. Safety requires avoiding/controlling catastrophic accidents. If stress proponents are right, the 3 major nuclear accidents have not been catastrophic and have killed few people. If radiation proponents are right, they were catastrophic and have killed many. To determine who is correct, first consider radiation-related health effects. Although chapter 6 reveals that some industry scientists disagree, scientific consensus is that damaging effects of ionizing radiation are cumulative and linear, proportional to dose, with no threshold (LNT) for increased risk.41 That is, the US National Academy of Sciences, IARC, and others say that even the tiniest radiation dose is risky. All other things being equal, higher ionizing-radiation doses cause more molecular damage/cancer/genetic defects/immune deficiencies. These effects are measured as rads/grays of absorbed dose (energy absorbed per unit mass of tissue) or as rems/sieverts (Sv) of effective dose (absorbed energy per unit mass of tissue, weighted for biological effect). One gray = 100 rad. For most exposures, however, absorbed and effective doses are roughly the same—because 1 Sv is the amount of radiation required to produce the same biological effect as 1 gray of high-penetration x-rays. One gray or Sv = 1 joule per kilogram (J/kg) = roughly 100 rad or rem. Given the consensus hypothesis that there is no safe dose of radiation, natural-background radiation at about 3.6 mGy or mSv/year42 causes about 6 percent of fatal cancers/year—36,000 US deaths/year.43 Nuclear-plant radiation adds to these background-radiation risks, especially during accidents. What are these accident risks?
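The unit relationships this chapter relies on, written out explicitly (the one-third comparison uses the 100-mrem claim examined in the next sections and the 3.6 mSv/year background figure just cited):

\[
1\ \text{Gy} = 1\ \text{J/kg} = 100\ \text{rad}, \qquad 1\ \text{Sv} = 100\ \text{rem}, \qquad 100\ \text{mrem} = 1\ \text{mSv},
\]
\[
\frac{1\ \text{mSv}}{3.6\ \text{mSv/year}} \approx 0.28 \approx \tfrac{1}{3}\ \text{of annual background}.
\]

Under LNT, excess risk is taken to be roughly proportional to dose, Risk(D) ≈ Risk(0) + βD, with no dose below which the βD term vanishes.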

Flawed Pennsylvania Science

The Three Mile Island, Pennsylvania, accident released massive radiation because most of its core melted.44 Reactor-core temperatures reached 2800 degrees Celsius (above 5000 degrees Fahrenheit), the melting point of uranium-dioxide.45 In part because the utility was negligent and provided no way to store accident-caused, radioactive-pollutant releases, US-government reports say it deliberately released
uncontrolled, illegal levels of radioactive iodine and noble gases like krypton and xenon.46 The government also convicted the utility of criminal misconduct and destroying accident-safety data—for which it received maximum-allowable fines.47 How much radiation was released? The Pennsylvania utility's dosimeters provided incomplete coverage, most utility radiation monitors went off-scale only hours after the weeks-long accident began, and the utility says it "lost" dosimetry data for days of highest radiation releases.48 Nevertheless, as noted, the government says the radiation releases were low, 100 mrem maximum,49 because it uses utility claims "as 'best estimates' of the level of radiation" released50 and trims doses by basing release-estimates only on the few, distant dose monitors "that remained on scale for most of the accident;" however, most monitors went off-scale. 51 Thus the government says total Three Mile Island releases were 13 million curies. 52 Non-industry scientists/physicians, however, say releases were at least 56–106 million curies, 53 and that thyroid radiation doses to the public were at least 100 times greater than industry and government claim. 54 Experienced nuclear-industry executives say the accident released 1 billion curies, one-sixteenth of total-core-radiation inventory, because the melted core released tell-tale, detectable radionuclides.55 Also, as the government admitted, onsite-filtration systems were quickly overloaded/non-functioning during most of the accident. 56 Even official government reports say outside-air vents released enough radiation to kill someone within seconds—30,000 rad/hour.57 Despite differing by-products and half-lives, the UN says Chernobyl released 200 times, and Three Mile Island released 10 times, more radiation than Hiroshima-Nagasaki weapons released.58 This comparison is significant because no one denies the Hiroshima-Nagasaki radiation-induced harm, yet the government denies harm from even-greater Pennsylvania releases. Who is right? If one uses hypothesis-deduction methods, discussed in chapter 9, at least 7 reasons suggest that industry/government underestimate Three Mile Island radiation releases. One reason is that official government and UN reports show its doses were 100,000 times higher/hour than the nuclear industry/US government allege. The government's Kemeny report also said noble gases, the bulk of releases, were 13 million curies; I-131 was 13–17 curies, and I-131 was 8–12 percent of total releases. 59 Yet deductively, 8–12 percent of noble-gas releases = 1 million and not 13–17 curies of I-131, as Kemeny alleged.60 Thus the government's own data show that I-131 releases were 600 times higher than the government claimed. Likewise the chair of the pro-nuclear US NRC said he measured accident-radiation-dose plumes at 120 mrem/hour,61 20 percent higher per hour than the utility/government alleged for total, weeks-long radiation doses. Even after most releases ended, NRC admitted that offsite doses were 365 mrem/hour, 3 times higher than industry's alleged total weeks-long doses.62 In court, the utility also admitted that doses exceeded 500 mrem/hour,63 enough to cause cancer in 10 percent of those exposed
for 2 days.64 Thus, official utility/government claims contradict their courtroom admissions that Pennsylvania doses were catastrophic. The absence of dose-monitoring data; the utility’s claiming it lost the first 2 (heaviest-release) days of radiation-monitoring data, when most monitors went off-scale; and its financial conflicts-of-interest also challenge claims that maximum Three Mile Island doses were 100 mrem.65 At best, such claims beg the question.66 Besides, correcting only 2 contrary-to-fact industry/government assumptions increases industry-estimated-radiation doses by 100. Although the utility/government ignored beta doses, and no radiation dosimeters measured them, government admits beta releases were 90 percent of total-Pennsylvania doses.67 If one includes beta doses by using official government radiation models, local radiation releases increase by 1000 percent. Also, by using official measures of (1) outside-air-temperature, (2) radioactive-plume-rise, 68 (3) confirmed reactor-building temperature, before temperatures went off-scale, 69 and (4) government-dose models70 —instead of industry/government assumptions for (1)–(3), these corrections again increase reactor releases by 1000 percent. If so, then these 2 corrections alone show accident-radiation releases at least 100 times higher than admitted.71 Moreover, if industry/government are right that the accident released only 100 mrem, why did utility insurers—facing billion-dollar-damage claims—quietly spend $80 million to pay off accident victims of cancer/retardation/infant mortality; force payees to sign gag orders about injuries/deaths; then deny thousands of other damage claims?72 Obviously mere 100-mrem radiation doses do not cause massive cancer, retardation, infant mortality, and hypothyroidism. Another reason industry/government radiation underestimates err is that most accident-health-effects studies “were funded by the nuclear industry and conducted under court-ordered constraints” whose required, “a priori assumptions precluded interpretation of observations as support for the hypothesis” that accident radiation caused increased health harms; because utility insurers had to concur on scientific-study designs, they required all government-funded scientists to accept their 100-mrem-dose assumption.73 Using this question-begging, court-ordered, 100-mrem-dose assumption, government-funded scientists had no testable hypothesis. Instead, their anchoring biases and circular reasoning supported the stress hypothesis.

Flawed Inductive-Statistical Inferences

Faulty inductive-statistical methods also help explain scientists' flawed "stress" account of Pennsylvania nuclear-accident fatalities. Despite increased-post-accident odds ratios for radiation-induced cancers, stress proponents reject such results, saying they are not statistically significant.74 Although
chapter 8 showed why such rejections are scientifically wrong, frequently scientists employ non-experimental data to examine possible pollution-disease associations, then erroneously use classical statistical-significance tests to deny these associations. For instance, Chevron-Texaco recently funded UCLA epidemiologists to help fight a $27-million court verdict in favor of 30,000 Amazon cancer victims. The victims charged that Chevron-Texaco’s substandard oil-drilling practices had contaminated people in Rhode-Island–sized areas. To reject such claims, the epidemiologists used statistical-significance tests of non-experimental Amazon data, then denied that Chevron-Texaco caused harm.75 They followed a dominant epidemiological practice of ignoring pathogenic/pollution mechanisms, then using statistical-significance tests to assess non-experimental data.76 Their rationale is that statistical-significance tests of non-experimental data have generated important conclusions, like the Framingham, Massachusetts, study showing that smoking predicts heart-disease risk.77 Yet in Ireland and France, epidemiologists cannot predict heart disease when they use the US Framingham tobacco-risk functions.78 Similarly, when other epidemiologists follow the dominant practice of using statistical-significance tests with non-experimental data to show cardio-protective benefits of moderate alcohol consumption,79 replication again fails. Why? Non-experimental studies cannot avoid confounders, such as that sick people do not drink, moderate drinkers are socially advantaged and healthier. Thus the scientists using significance tests erroneously suggest that a carcinogen, alcohol, improves health. 80 They forget that drinking is something healthy people often do, not what makes them healthy.81 Even worse, most US courts that hear toxic-torts or pollution-harm cases erroneously require statistical-significance tests for drawing causal conclusions from non-experimental data.82 Why are these courts, many epidemiologists, and government-funded Three Mile Island scientists wrong?

Inductive-Statistical Inferences and Randomization

On one hand, those who use statistical-significance tests with non-experimental data dominate epidemiology because, as chapter 8 revealed, experimental studies must be large and expensive, and cannot be done after accident exposure. 83 On the other hand, those who reject statistical-significance tests of non-experimental data recognize that such data include no randomized and representative samples, exposure levels, or experimental conditions; only randomized experiments provide links between inferential statistics and causal parameters. 84 Without randomization, they realize that no reliable interpretations, statistical-significance tests, power, confidence intervals, or inferences from samples to larger populations are possible. 85 As statistician Ronald Fisher recognized, randomness helps scientists avoid bias, apply
probability laws, and make rationally based inductive and statistical inferences. 86 Randomization helps assess potential cause-effect relationships (e.g., pharmaceuticals and recovered health) by breaking mechanisms between earlier cause-effect relationships (e.g., healthy immune systems and recovered health). 87 Another conceptual argument against using statistical-significance tests to assess non-experimental data is that such tests allow no precise definition of parent populations. Because many factors affect subjects’ responses to study conditions, and the most relevant factors are unknown, scientists do not know everything in the parent population of which the sample is supposed to be a sample. For Framingham heart-disease studies, is it Massachusetts residents? Massachusetts males? Non-diabetic Massachusetts males? All such descriptors could signal confounders, yet because non-experimental studies have no controls for them, homogeneity relationships between sample and parent populations may not hold. Yet homogeneity relationships are what make statistical inferences work. They ensure representative samples, random subjects and exposures, and controls for strong risk factors. Because non-experimental studies ignore homogeneity relationships and required randomness, their statistical-significance tests are invalid/non-representative/inapplicable to other groups—as the Framingham studies showed.88 Scientists also err when they fail to find statistically significant associations in non-experimental data, deny evidence for health harms, then attribute harms to chance, as a US National Academy of Sciences committee did. 89 However, knowing something occurred by chance requires knowing a priori-probability distributions and homogeneity relationships, which, in turn, requires experimental control and randomization, neither of which non-experimental data allow one to know.90 Without experimental randomization, tests could reflect bias.91 If so, how should scientists evaluate non-experimental Three Mile Island accident data?
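The point about randomization breaking earlier cause-effect mechanisms can be illustrated with a minimal simulation in Python. The setup, effect sizes, and sample size are invented; the exposure has, by construction, no effect at all, yet the non-experimental comparison suggests a large benefit because healthier people are more likely to be exposed (as with the moderate drinkers above), while the randomized comparison does not.

import numpy as np

rng = np.random.default_rng(0)
n = 100_000
true_effect = 0.0                        # by assumption, exposure does nothing

health = rng.normal(0.0, 1.0, n)         # unmeasured confounder: baseline health

# Observational "study": healthier people are more likely to be exposed
p_exposed = 1 / (1 + np.exp(-2.0 * health))
exposed_obs = rng.random(n) < p_exposed
outcome_obs = health + true_effect * exposed_obs + rng.normal(0.0, 1.0, n)
biased = outcome_obs[exposed_obs].mean() - outcome_obs[~exposed_obs].mean()

# Randomized "experiment": exposure assigned by coin flip, breaking the link
exposed_rct = rng.random(n) < 0.5
outcome_rct = health + true_effect * exposed_rct + rng.normal(0.0, 1.0, n)
unbiased = outcome_rct[exposed_rct].mean() - outcome_rct[~exposed_rct].mean()

print(f"apparent benefit, observational: {biased:+.2f}")   # large spurious effect
print(f"apparent benefit, randomized:    {unbiased:+.2f}")  # near zero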

Inference to the Best Explanation

At least 4 inductive-causal strategies—examples of methodological analysis—might help assess Three Mile Island data: inference-to-the-best-explanation, mechanism, unification, and intervention-counterfactual traditions. Subsequent sections employ the first strategy to evaluate Three Mile Island hypotheses, and they suggest these best-explanation principles are roughly consistent with those of the 3 other traditions. According to the inference-to-the-best-explanation account, scientists infer what would, if true, better explain all evidence. 92 For causal hypotheses, the better explanation links cause and effect by providing information about an effect's causal history,93 such as increased-near-reactor cancers. However, most
causal-history information (e.g., a plane’s being blue, not yellow) is explanatorily irrelevant to an effect (e.g., an airplane accident). As Carl Hempel recognized,94 scientists must assess which causal-history factors are relevant. One prominent inference-to-the-best-explanation-assessment strategy is contrastive explanation, using a fact and a control to explain why x, not y, causes some phenomenon.95 Like John Stuart Mill’s Method of Difference, choosing controls helps select/ justify explanatory causes. For Mill, causes are discovered among antecedent differences between cases where effects do/do not, occur. Thus, to explain whether radiation or stress more likely caused increases in Three Mile Island cancers, one can use inference to the best explanation by assessing different contrasts/controls in the causal history of the Three Mile Island accident.96 First, one must filter potential causal explanations—stress versus radiation hypotheses—so they satisfy conditions for actual explanations (e.g., logical compatibility with observations).97 Next, one must select the most explanatory of potential explanations,98 by inferring which hypothesis better explains various contrasts. Yet as Thomas Kuhn and Noam Chomsky recognized, using this best-explanation method rigorously requires guidelines. Consider Peter Lipton’s 5 guidelines, illustrated via physician Ignaz Semmelweis’s inferring the cause of childbed fever in the 1850s at Vienna General Hospital.99 (1) The best causal explanation (e.g., cadaveric-matter exposure causes childbed fever) cites common antecedent events (e.g., patients’ being in the hospital) that it shares with other explanations (e.g., rough handling of patients). (2) The best causal explanation (e.g., cadaveric-matter exposure) cites some contrast (e.g., higher childbed-fever rates in the first, not second, hospital-maternity divisions) between it and competitor explanations (e.g., epidemic influences). (3) After removal of the supposed causes (e.g., removing cadaveric matter by physician hand-washing), the supposed effect (e.g., childbed fever) disappears. (4) Reject causal hypotheses (e.g., epidemic influences) that fail to explain contrasts (e.g., higher childbed-fever rates in first-maternity division, despite divisions’ similar crowding or epidemic influences). (5) To prevent false inferences and reduce potential hypotheses, use further observations and conjectures to produce new contrasts. How might one use the preceding inference-to-the-best-explanation principles to assess the stress-versus-radiation hypotheses about Three Mile Island cancers?
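The two-step structure just described, filter the potential explanations, then select the most explanatory survivor, can be sketched in Python using a drastically simplified version of the Semmelweis example. The contrast list and the hypotheses' "predictions" below are illustrative stand-ins, not historical data.

observed_contrasts = {
    "fever_higher_in_first_division": True,
    "fever_drops_after_handwashing": True,
    "divisions_equally_crowded": True,
}

# None means the hypothesis makes no definite prediction about that contrast.
hypotheses = {
    "cadaveric-matter exposure": {
        "fever_higher_in_first_division": True,
        "fever_drops_after_handwashing": True,
        "divisions_equally_crowded": True,
    },
    "epidemic influences": {
        "fever_higher_in_first_division": False,  # an epidemic should hit both divisions
        "fever_drops_after_handwashing": False,
        "divisions_equally_crowded": True,
    },
    "rough handling of patients": {
        "fever_higher_in_first_division": None,
        "fever_drops_after_handwashing": None,
        "divisions_equally_crowded": True,
    },
}

# Step 1 (filter): keep hypotheses whose definite predictions do not
# contradict any observed contrast.
compatible = {
    name: preds for name, preds in hypotheses.items()
    if all(preds.get(c) in (None, v) for c, v in observed_contrasts.items())
}

# Step 2 (select): among survivors, prefer the hypothesis that correctly
# predicts the most observed contrasts.
best = max(compatible,
           key=lambda name: sum(hypotheses[name].get(c) == v
                                for c, v in observed_contrasts.items()))
print("surviving hypotheses:", sorted(compatible))
print("best explanation:", best)

Here "epidemic influences" is filtered out because it contradicts an observed contrast, "rough handling" survives but explains fewer contrasts, and cadaveric-matter exposure is selected, mirroring guidelines (2) and (4) below.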

Applying Best-Explanation Principles (1)–(2)

Regarding the preceding principle (1), finding common antecedents of both stress and radiation hypotheses, the same nuclear accident is a common antecedent.

Regarding the preceding principle (2), finding contrasts to assess stress-versus-radiation explanations, one contrast is that post-accident-cancer increases are disproportionately radiosensitive ones (e.g., leukemia, lymphoma, lung cancer).100 But if radiation, not stress alone, better explains these radiosensitive-cancer increases (odds ratio 1.4),101 then 3 contrasts follow:

• Cancers should be mainly radiosensitive ones.
• Non-radiosensitive cancers (e.g., lymphocytic leukemia)102 should not increase.
• Odds ratios for radiosensitive cancers should increase (e.g., lymphoma to 1.9).103

However, if the stress hypothesis is correct, the preceding 3 contrasts should not occur. Because they did occur, best-explanation principle (2) shows that radiation, not stress, better explains accident-related-cancer effects. A second principle-(2) contrast, that post-accident-cancer increases are mostly respiratory, also supports radiation, not stress, accounts. Accident-radiation releases were mostly noble gases, mostly affecting respiratory systems, those working outdoors, and males.104 Radiation better explains this contrast because post-accident-area females had fewer respiratory cancers than males. Also, post-accident cancers were disproportionately of the respiratory system/bronchus/trachea/lung,105 with a 1.7 respiratory-cancer-odds ratio—as compared to only a 1.4 total-cancer-odds ratio.106 Stress would not have caused mostly male and mostly respiratory cancers. Can stress proponents cite any contrasts to support their hypothesis? Columbia scientists say the odds ratio 1.4, between reactor proximity (the stress proxy) and increased cancers, supports stress over radiation.107 However, this contrast is questionable, given disproportionate radiosensitive/respiratory cancers,108 and no reliable stress-cancer measures. Although oxidative stress or pro-inflammatory cytokines reveal biochemical stress-cancer associations,109 most randomized, case-control studies reject associations between psychosocial stress and cancer-onset; however, stress may worsen already-existing conditions.110 Only non-representative, non-experimental, qualitative studies suggest otherwise.111 Another problem with stress proponents' reactor-proximity contrast is that cancer rates for post-accident-hilltop residents, living 3–8 miles from the reactor, where the radioactive plume passed, were 7 times higher than those for the local area.112 Although hilltop-cancer numbers were small (2 rising to 19), making scientific interpretation difficult, stress—measured only as reactor-proximity—cannot explain higher hilltop effects. Also, only radiation, not stress, can explain distant-downwind-cancer increases, hundreds of miles away.113 Besides, because most researchers use reactor-proximity to measure radiation effects, stress proponents have no defensible stress measure. In addition, the pre-accident odds
ratio of 1.2, between increased cancer and reactor proximity,114 suggests that even low-stress, non-accident, normal-radioactive emissions cause cancer. These findings are consistent with the LNT dose-response curve, already mentioned. Robust studies in England, France, Germany, Scotland, and the United States all show near-reactor, increased, infant-and-fetal mortality/childhood cancers/leukemia/lymphoma/brain cancers.115 Once reactors close, all health effects return to normal.116 Radiation, not stress, better explains such results. Stress proponents also claim a second contrast, accident-maximum radiation of 100 mrem, too low to cause massive observed cancer increases.117 However, earlier sections showed this 100-mrem claim errs on grounds of inconsistency/incompleteness/begging the question/counterfactual assumptions/utility criminal misconduct/data falsification. As noted, if accident doses were only 100 mrem, the utility would not have spent $80 million on out-of-court settlements and gag orders, because 100 mrem cannot cause birth defects/retardation/infant mortality; only much higher doses can do so.118 Local physicians like Dr. Joseph Leaser also reject the 100-mrem claim of the stress proponents. He testified that hundreds of local citizens, most unaware of any nuclear accident, and especially 450 hilltop residents near the reactor, had symptoms consistent with acute, high-dose gaseous-radiation exposure, including metallic taste, hair loss, nausea, vomiting, diarrhea, and erythema.119 Yet hair loss requires at least 400 rad,120 4000 times greater than industry's claimed 100-mrem dose.121 To induce infant-retardation—for which the utility settled out of court, with gag orders on victims' families—requires at least 100 rad,122 1000 times higher than industry/government claims. Dr. Leaser also reported many patients with high eosinophils, white blood-cell counts characterizing high-radiation exposure.123 Russian scientist V. A. Shevchenko, who diagnosed and treated Chernobyl radiation-exposure victims, likewise confirmed many residents had chromosome aberrations indicating radiation doses at least 2000 times higher than industry/government alleges.124 Although clinical, this evidence is consistent with courtroom-documented, high-radiation-dose claims. Nearby childhood-mortality increases also contradict stress proponents' low-dose contrast. Former Pennsylvania state-health director, physician Gordon MacLeod, showed that in Harrisburg, 10 miles from the reactor, neonatal death rates quadrupled within 3 months post-accident, then increased sevenfold within 7 months.125 Within 6 months post-accident, MacLeod showed infant mortality increased sevenfold within 5 miles of the reactor, and twofold within 10 miles.126 Because government accident-cancer studies excluded children,127 stress proponents cannot answer this child-mortality challenge to their low-dose contrast.
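For readers unfamiliar with the odds ratios used throughout this section, a worked example with purely hypothetical counts: if 70 of 1,000 exposed residents and 50 of 1,000 unexposed residents developed cancer, then

\[
\text{OR} \;=\; \frac{70/930}{50/950} \;=\; \frac{70 \times 950}{930 \times 50} \;\approx\; 1.43,
\]

roughly the 1.4 total-cancer figure reported above; an odds ratio of 1.0 would indicate no association.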

Applying Best-Explanation Principles (3)–(4)

Because the Three Mile Island accident occurred under uncontrolled conditions, it is impossible to use controlled experiments to apply best-explanation principle
(3), removing/varying proposed causes. However, one can use influence analysis, investigating how hypothesized causal relationships change under perturbations,128 and thus reveal likely causes.129 One can follow principle (3), varying radiation/stress values by using the LNT ionizing-radiation-dose-response curve, discussed earlier. Based on Japanese-atomic-bomb data,130 and IARC worker-radiation-dosimetry data,131 virtually all scientists accept LNT. It shows that radiation-induced cancers are not lower in the low-stress, non-accident IARC database than in the high-stress, atomic-bomb database. Thus, if radiation-dose-response studies are quasi-experiments that vary radiation dose, as principle (3) directs, radiation and not stress better explains post-accident health harms. Illustrations of the effects of best-explanation-principle-(3) stress removal appeared earlier. They showed 20-percent-higher-cancer rates (odds ratio 1.2) near normally operating reactors—evidence that radiation, not stress, better explains increased accident-caused-health harms. Earlier evidence likewise showed post-accident-cancer and mortality increases, hundreds of miles downwind,132 where airborne radiation, not stress, had effects.133 Elevated, post-accident radioactivity engulfed cities like Portland, Maine, 430 miles downwind,134 causing documented infant-mortality increases in Syracuse, Rochester, and Albany.135 However, if one removes radiation, by looking 10 miles upwind of the reactor, where stress was high and radiation was normal, no cancer increases occurred.136 Best-explanation principle (4) thus suggests radiation, not stress, is the better hypothesis.

Best-Explanation Principle (5), Heterogeneous Evidence

To evaluate hypothetical causal hypotheses, principle (5) seeks new contrasts involving heterogeneous evidence, mechanisms, and schematic unifications.137 Heterogeneous evidence includes heterogeneous victims, methods, or effects. Radiation and not stress better accounts for heterogeneous Pennsylvania victims because botanists like Dr. James Gunckel found widespread, post-accident, oversized, and deformed plants. Yet plants have no psychosocial stress, and 100-mrem radiation doses cannot cause such defects.138 Russian radiation-science expert V. A. Shevchenko likewise argued that observed tree damage required doses at least 600–900 times above 100 mrem.139 Regarding heterogeneous health effects: hypothyroidism, birth defects, genetic defects, immune-system defects, and infant retardation all increased after the accident. However, extensive literature clearly links these problems to radiation, not stress.140

Regarding heterogeneous methods for investigating hypotheses, the latest Three Mile Island accident studies used the 1979–1997 registry of victims.141 Yet these studies underestimate radiation harms for at least 7 reasons. They are short term (18 years, given cancer latency of several months to 50+ years), mortality-based (not disease-incidence-based), and partial (covering only those now living within 5 miles of the reactor, while ignoring distant victims and those who moved). These studies also underestimate because they are counterfactual (ignoring beta doses, as noted),142 low-power, small-sample studies (that considered different cancers separately), incomplete (ignoring effects on African Americans/Harrisburg/children), and based on industry-assumed doses (that contradict government-measured doses).143 However, if one instead uses heterogeneous methods like cohort analysis—not the 7 biased methods above—radiation and not stress better explains accident-related health harms. Avoiding statistical-significance tests, cohort analysis compares the health of exposed and non-exposed sub-groups (e.g., children's deaths, before/after the accident).144 For the infant cohort that was under age 1 on March 28, 1979, when the accident began, in the 2 counties (Dauphin and Lebanon) immediately downwind, all-cause mortality (excluding accidents, suicides, and homicides) has been 26–54 percent higher throughout childhood/adolescence/adulthood than among statewide age-group peers.145 The cohort's cancer rate is 46 percent higher than statewide rates, although pre-accident area cancer rates were lower, given the rural location.146 Such results suggest that children, who are up to 38 times more radiation-vulnerable than adults, experienced worse accident harms.147 Even the accident's distant-downwind radionuclides affected children, whereas stress did not.148
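Below is a minimal sketch of the cohort comparison just described. The counts are hypothetical placeholders (the text reports only the resulting percentages); the sketch only shows how a rate ratio, rather than a statistical-significance test, compares an exposed cohort with its statewide peers.

```python
# Cohort-style comparison: a mortality-rate ratio between an exposed cohort and statewide peers.
# All counts are hypothetical placeholders; only the method mirrors the text.

def rate_per_1000(deaths: int, person_years: float) -> float:
    """Deaths per 1,000 person-years of follow-up."""
    return 1_000 * deaths / person_years

exposed_rate   = rate_per_1000(deaths=73,    person_years=50_000)     # downwind infant cohort (hypothetical)
statewide_rate = rate_per_1000(deaths=5_800, person_years=5_000_000)  # same-age statewide peers (hypothetical)

rate_ratio = exposed_rate / statewide_rate
print(f"rate ratio = {rate_ratio:.2f}")  # e.g., 1.26 would correspond to "26 percent higher" mortality
```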

Best-Inference Principle (5), Mechanisms, and Explanatory Unification

Contrasts that specify underlying mechanisms likewise suggest that radiation, not stress, better explains post-accident cancer increases. As noted, there are no established mechanisms whereby psychosocial stress causes cancer, although stress worsens most existing diseases.149 However, radiation-induced-cancer mechanisms are well established. Ionizing radiation changes molecular structure in cells, including DNA, and initiates processes like gene deletion, causing at least 20 different cancers.150 Radiation, not stress, also better unifies schemes explaining post-accident cancer increases. As noted, increased radiation has caused cancer among atomic-bomb victims/radiation-therapy patients/US military atomic veterans/nuclear-weapons workers/civilian nuclear workers.151 If radiation-induced cancers have the same mechanisms, the accident-radiation hypothesis unifies different exposure situations, radionuclide types, and radiation effects. Moreover, as noted, because psychosocial-stress hypotheses have no known mechanism/measure/proxy/dose-response curve, they cannot unify different phenomena. However, radiation hypotheses can unify many different phenomena, including high-energy-radionuclide molecular effects/LNT observations/20 different radiation-induced cancers/increased child-radiation effects152/increased hilltop cancers/hundreds of post-accident effects like hair loss and retardation.

Three Other Causal-Inductive Accounts

Brevity precludes using other accounts of scientific explanation (interventionist, mechanist, and unificationist) to assess stress-versus-radiation hypotheses. However, these accounts, like best-explanation accounts, also appear to support radiation over stress hypotheses. For instance, James Woodward's interventionist view seems a further specification of best-explanation principle (3) insofar as he assesses causal relations by manipulating proposed causes to analyze changes in effects.153 Woodward also can explain why the Pennsylvania non-experimental/statistical-significance studies likely erred; they controlled neither pre-existing endogenous causal connections, nor those set exogenously by experimental design.154 Because both Woodward's interventionist theory155 and best-explanation principle (2) are grounded in Mill's difference principle, Woodward's intervention strategy seems a further specification of principle (2).156 Best-explanation and intervention accounts likewise both support radiation, not stress, hypotheses because best-explanation principle (4) is consistent with Woodward's requirement to show that counterfactuals can characterize interventions causally. Fully explained, best-explanation principles (2)–(4) thus would show that Woodward's account also likely supports radiation over stress hypotheses.157 Mechanist analyses likewise would support the radiation hypothesis,158 because only radiation has underlying mechanisms that explain contrasts like Pennsylvania's increased numbers of radiosensitive/respiratory cancers. The mechanist strategy also is a further specification of best-explanation principle (5), using heterogeneous evidence to produce new contrasts to assess hypotheses.159 Similarly, unification analyses likely would support the radiation hypothesis because, as noted, radiation better unifies empirical cancer research.160 Because Philip Kitcher's unification account requires better causal explanations to unify greater ranges of phenomena with fewer assumptions/inference-patterns,161 it is a further specification of best-explanation principle (5). Because radiation, not stress, also better unifies epidemiological studies of radiation-exposed workers/children/atomic-bomb victims,162 full unification analyses arguably would also support the radiation hypothesis, as this chapter has done.

Three Objections

In response to using inference-to-the-best-explanation methods to evaluate Three Mile Island stress-versus-radiation hypotheses, objectors might ask whether clinical evidence, like post-accident hair loss, supports causal hypotheses. While clinical evidence is not the best evidence, this objector forgets that accident-related clinical data are consistent with academy and IARC studies—and that randomized, controlled, experimental evidence is impossible after an accident. Moreover, both the former Pennsylvania state-health director and the Kemeny Advisory Commission charged industry and government with accident cover-up; with failure to do requested, post-accident dose studies on respirable-dust exposures;163 and with failure to release relevant accident data.164 Given cover-up and failure to do requested studies, clinical evidence should be considered.165 Ignoring it would encourage people to assume, erroneously, that absence of evidence for some effect is evidence of the absence of that effect. Besides, the issue is not certainty, but relative radiation-versus-stress evidence, all things considered.

A second possible objection is that using statistical-significance tests seems more likely to reveal, than to cover up, accident-radiation harms, if samples disproportionately include radiation-exposed people. However, the Pennsylvania accident samples do not disproportionately include radiation-exposed people, because they include only those living near the reactor 11–12 years after the accident, not those exposed at the time or those who had moved away. Thus the samples diluted accident-related cancers by treating currently unexposed people as exposed, and by treating moved-away, exposed people as unexposed. Moreover, even if the Pennsylvania accident data were experimental, they are so biased that using them is questionable. Their already-mentioned 7 scientific biases include using low-power studies and ignoring effects on children.166 The studies likewise are limited by the utility's refusal to do immediate, post-accident dose studies, and by crude political biases such as utility control of all accident data/studies/official dose estimates. As noted in earlier chapters, such bias is unsurprising, given that two-thirds of all US science is funded by special interests to promote their financial goals.167

A third possible objection is that, although non-experimental, a US National Cancer Institute (NCI) study that admitted it could support no conclusions nevertheless suggested no higher cancer rates near nuclear plants.168 NCI calculated countywide standard mortality ratios (SMRs) for each county, before/after nuclear-facility startups, and found no overall SMR differences. However, at least 6 factors bias this NCI study against finding radiation-induced cancers.
• NCI used countywide data to examine nuclear-plant harms, thus biasing samples by erroneously counting all residents of counties without reactors as non-radiation-exposed, and all residents of counties with reactors as radiation-exposed. Yet because downwind radiation often falls in another county, whose residents are counted as non-exposed, while many upwind residents of reactor counties are counted as exposed but are not, the data sets were invalid.
• NCI used typical countywide study areas of 1200 square miles, where half the population lived more than 20 miles from the reactors—and thus diluted radiation effects by including distant, very low-dose victims and upwind, no-dose victims (see the sketch following this list).
• Because NCI ignored wind direction, its near-zero upwind-radiation health effects diluted severe downwind harms.
• Because NCI used cancer-mortality, not cancer-incidence, data—despite mortality data's invalidity169—it ignored latent cancers and the effects of cancer treatment.
• NCI ignored statistically significant, radiation-related mortality increases near many nuclear facilities by presenting only average, overall mortality data.
• NCI suggestions of no reactor-induced cancers are inconsistent, as already noted, with well-confirmed LNT curves and with studies in England, France, Germany, Scotland, and the United States that show increased, near-reactor cancers.
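A minimal sketch of the dilution problem flagged in the list above. All numbers are hypothetical placeholders; the point is only that averaging a localized excess over a whole county shrinks it toward the background rate, making it easy to miss.

```python
# How countywide averaging dilutes a localized excess cancer rate. Numbers are hypothetical.

near_plant_fraction = 0.10        # fraction of the county living close to / downwind of the reactor
baseline_rate = 200.0             # background cancers per 100,000 per year (hypothetical)
near_plant_relative_risk = 1.5    # hypothetical 50% excess among the nearby/downwind group

near_rate = baseline_rate * near_plant_relative_risk
far_rate = baseline_rate

countywide_rate = (near_plant_fraction * near_rate
                   + (1 - near_plant_fraction) * far_rate)

print(countywide_rate / baseline_rate)  # ~1.05: a 50% local excess shows up as only ~5% countywide
```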

Conclusion

This chapter argues that the dominant industry/government scientific conclusion about the Three Mile Island nuclear accident errs. It relies on biased dose estimates, uses statistical-significance tests of non-experimental data, and falsely blames accident-radiation victims by saying they were merely stressed. Instead, the chapter argues that reliable hypothesis-justification methods, including inference to the best explanation and theory comparison—methods that take account of truth, probability, and evidence standards—show that nuclear-accident radiation has caused catastrophic harms. Epidemiologists agree that more than 100,000 additional or excess cancers appeared within 10 miles of Three Mile Island, only 4 years after the accident. At the least, nuclear accidents reveal that 2 of the 3 largest economies in the world—those of the United States and Japan—are not immune to technological disaster. Even money cannot always buy good science.



PART IV

VALUES ANALYSIS AND SCIENTIFIC UNCERTAINTY



CHAPTER 13

Value Judgments Can Kill
EXPECTED-UTILITY RULES IN DECISION THEORY

Max Planck's advisors were wrong when they told him not to go into physics because all the important questions had been answered. Medical doctors were wrong when they gave people arsenic for stomach ailments. Geophysicists were wrong when they said continents could not drift. Gynecologists were wrong when they gave women hormones for menopause. Obviously scientists can make mistakes, and often they disagree.1 How scientists deal with disagreement and uncertainty, especially when these cannot be resolved, is important both because scientists must make value judgments to fill the gaps in knowledge and because such judgments can affect both science and human welfare. For example, consider the case of the most common orthopedic surgery, hip replacement. From roughly 1970 until 2012, when a classic Lancet study largely resolved the issue, scientists disagreed about whether metal-on-metal hip replacements release metal-particle debris and cause heavy-metal blood poisoning. Hip-prosthetic manufacturers claimed any particles resulted from flawed surgical techniques, whereas some scientists said the problem was the implant itself. This uncertainty/disagreement—and scientists' responses to it—has caused the worst US medical-device catastrophe in history. In addition to other global victims, more than 500,000 US citizens received metal-on-metal hips—not the more-expensive-but-safer ceramic hips. The majority, who have not had their metal hips replaced or who may be too old to undergo corrective surgery, face possible slow deaths from metal toxicity, massive organ failure, and dementia.2

Chapter Overview

How should scientists handle situations like the hip-prosthesis case? To answer such questions, this chapter explains why no scientist can avoid specific types of value judgments and why prevailing views about value judgments are often wrong.3 The chapter first reviews different types of value judgments in science, then summarizes utilitarian and maximin value judgments about how to deal with scientific uncertainty. It assesses John Harsanyi's arguments in favor of utilitarian strategies and John Rawls's arguments in favor of maximin. It argues that under 3 specific conditions, using maximin value judgments appears more scientifically/ethically defensible, whereas under all other conditions, using utilitarian value judgments may be more defensible. These 3 conditions are that the situation involves (1) societal (not just individual) decisionmaking under scientific uncertainty and unknown probabilities of harm; (2) potential for catastrophic effects; and (3) the absence of offsetting benefits in exchange for imposing unequal societal threats. The chapter closes by answering key objections to these arguments.

Value Judgments in Science

Although many scientists believe they can be completely objective and avoid value judgments in their work, they are wrong. Mainly because no science is complete/perfect, values arise in at least 4 different ways in science. First, scientists always must presuppose standards/norms for assessing scientific hypotheses. Different scientists sometimes have different norms—or give them different priorities. The late Harvard University physicist and historian of science Thomas Kuhn, for instance, believed these norms were empirical accuracy (of experiments/observations); consistency (among theories and observational claims); broadness of scope (of a theory's explanatory consequences); simplicity; and fruitfulness (of a theory in revealing new phenomena/relationships),4 but some scientists would add other norms, such as predictive power, while others would not. A second way values always arise in science is through interpretive assumptions. Even when scientists see something, they must make evaluative assumptions, as when a heliocentrist and a geocentrist each see the same sunrise.5 Evaluative assumptions are necessary because no datum or hypothesis is purely descriptive, factual, or complete. Instead, all presuppose interpretations—like heliocentrism—or they could not be understood. The more they are factually or mathematically underdetermined, the greater the role that values must play in interpreting them. Science often is evaluative in a third way because of science-specific goals, such as ecology's goal of conserving biodiversity or epidemiology's goal of protecting human health. Science likewise is evaluative in a fourth way, in presupposing uncontroversial ethical/societal/policy goals, such as promoting equal opportunity or protecting human research subjects.6 Because science is always normative in the first 2 senses above, and often normative in the last 2 senses, evaluating science always requires evaluating its underlying norms. Otherwise, as Francis Bacon pointed out, science will be corrupted by the false norms or "idols" of the tribe (cultural biases), the cave (personal biases), the marketplace (language-based and perhaps greed-based biases), and the theater (belief-system biases).

Of course, it is always possible to avoid bias values, like sexism and racism, in science. Because many scientists assume bias values are the only values, they think they can avoid all value judgments in science. However, they ignore contextual, cognitive, and ethical values in science. Contextual or cultural values—like constraints on science imposed by inadequate funding, available software, or social-political factors—are difficult to avoid because science is always done in some context. Cognitive or methodological values—like consistency and empirical adequacy—can never be avoided, as they are essential to theorizing, evaluating data, formulating conclusions, and making judgments under empirical underdetermination and scientific uncertainty. Often these cognitive value judgments rely on the first 2, just-discussed types of norms that are unavoidable in science.7 Although ethical-value judgments are often avoidable in science, frequently scientists must make them in order to avoid dishonesty/injustice/violating professional codes of ethics. For example, scientists might make ethical judgments not to delete crucial data, as a colleague wants, or not to permit a human subject in an experiment because she appears unable to give genuine consent. Besides, scientists/philosophers of science never cease to have ethical and citizenship duties just because they do science/philosophy of science. Most important, scientists' ethics judgments should never intrude on other citizens' rights. For this reason, scientists do not have rights to do experiments that could put others at risk without their knowledge and consent. Similarly, because citizens, not scientists alone, have rights to decide welfare-related issues, scientists often have special ethical duties to help educate fellow citizens about the consequences of science—to reveal how different scientists' methodological value judgments (see chapter 7) can generate different welfare-affecting consequences, especially harmful consequences. Scientists alone obviously do not have the right to make value judgments in their work that could adversely affect the welfare of others. Because they do not, their work must be done in adherence to such ethical values.8

Default Rules under Scientific Uncertainty

Sometimes, however, scientists should (but do not) reveal to citizens either how they make methodological value judgments that could harm the public—or how different methodological value judgments could have different welfare consequences. To see how different value judgments, under uncertainty, can generate different scientific/policy conclusions, consider the case of using different default rules/value judgments to assess uncertainty about nuclear-power accidents. As background, note that US reactors have had at least 5 core melts in roughly 50 years, and global reactors have had at least 26 core melts in roughly 50 years.9 However, nuclear-industry scientists say theoretical probabilities, not the preceding empirical data, better predict future nuclear disasters, because regulations improve after accidents and thus lower future accident probabilities. Hence, they calculate the core-melt probability for all US reactors as about 1 in 4 during their lifetimes.10 They say a core-melt accident in the 104 US reactors will occur only once every 1000 years.11 What happens when 2 different scientific studies use the preceding, identical nuclear-accident probability and consequence assessments, yet employ different methodological value judgments/default rules? They can draw contradictory conclusions. The Ford Foundation–Mitre Corporation studies concluded atomic energy is safe, whereas the Union of Concerned Scientists (UCS) studies concluded it is unsafe.12 Although the studies used identical data, they reached contradictory policy conclusions because they used different default rules. Ford-Mitre research used the accepted utilitarian rule, choosing policies having the highest overall utility. However, UCS followed the maximin rule, choosing the policy that avoids the worst possible consequences.13
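A back-of-the-envelope comparison of the empirical record with the industry figure just cited, using only the numbers given in the text (5 US core melts in roughly 50 years versus a claimed fleet-wide rate of one core melt per 1000 years). This is a sketch of the contrast between the two starting points, not a risk assessment.

```python
# Empirical vs. claimed fleet-wide core-melt frequency, using only figures from the text.

us_core_melts = 5               # observed US core melts (from the text)
years_observed = 50             # roughly 50 years of US commercial operation (from the text)
claimed_years_per_melt = 1000   # industry claim: one fleet-wide core melt per 1000 years (from the text)

empirical_rate = us_core_melts / years_observed   # fleet-wide melts per year, observed
claimed_rate = 1 / claimed_years_per_melt         # fleet-wide melts per year, claimed

print(f"observed: about 1 core melt every {1 / empirical_rate:.0f} years")   # ~10 years
print(f"claimed:  about 1 core melt every {claimed_years_per_melt} years")
print(f"empirical rate is ~{empirical_rate / claimed_rate:.0f}x the claimed rate")  # ~100x
```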

Maximin versus Expected-Utility Value Judgments

Whose conclusion and which default rule is better? The late, Nobel Prize–winning economist John Harsanyi says the "prevailing opinion" is to use the utilitarian rule,14 even under scientific uncertainty.15 According to the late John Rawls of Harvard University, maximin is the better rule, while Harsanyi follows expected-utility maximization.16 Supporting equal rights and equal protection, Rawls accepts the maximin rule—and whatever maximizes equal opportunity and ensures that any inequalities benefit society's least-advantaged members.17 Both maximin and expected-utility rules apply to any of 3 situations: uncertainty, certainty, and risk. Uncertainty occurs when people are completely ignorant of the probability of an outcome (e.g., whether full-body, airport-security screening causes radiation health risks).18 Certainty occurs when people know, with probability 1, that a given outcome will occur (e.g., using commercial nuclear energy generates radioactive wastes that must be managed). Risk occurs when people have reliable probabilistic knowledge about an outcome (e.g., bets on fair coins). Because scientists rarely have complete knowledge of all probabilities (e.g., genetically engineered food risks), uncertainty often dominates science-related policymaking and thus causes serious values conflicts.19 Hence the Harsanyi-versus-Rawls, expected-utility-versus-maximin debate poses a major issue: Which value judgment should scientists use in cases of uncertainty?

As noted, Harsanyi and members of the dominant school believe that, under conditions of uncertainty, one should maximize expected utility. For a 2-state problem, expected utility is u1p + u2(1 − p), where u1 and u2 are the outcome utilities, p is the decisionmaker's subjective-probability estimate for state S1, and (1 − p) is the probability of state S2.20 They say one should choose policies with the highest average utility (a subjective determination of welfare).21 However, maximin proponents, like Rawls, say one should maximize the minimum: avoid the policy having the worst possible consequences, those hurting the most people or the most vulnerable people.22 As the Ford-Mitre-UCS case shows, the obvious problem is that maximin and utilitarian rules often recommend different science-related policies.

To understand these differences, consider an easy case. Imagine 2 societies.23 The first consists of 1000 people, including 100 workers who are poor, minority, gay, or female, and the remaining 900 who are free to do whatever they wish. Given science and technology, the 100 can support the rest of society, but they are all very unhappy. Likewise, assume the 900 are very happy, partly because they need not work. Assume, however, that their happiness is not disturbed by feelings of compassion for the workers, because the 900 have convinced themselves that the workers deserve their fates. The 900 are thus convinced both that all workers and their children have had good educations and equal opportunity to compete for non-worker positions and that the workers have not attempted to better themselves. Also suppose that, using a utility/happiness scale of 1 to 100, the workers each have 1 unit of utility, while the 900 each have 90 units, and therefore average utility is 81.1. However, consider a second society, similar to the first but having a rotation scheme, so that everyone takes turns at being a worker and has average utility of 35 units. Users of utility rules would count the first society as more just/rational because of its higher average expected utility, whereas users of maximin rules would count the second society as more just/rational because its most vulnerable people are better off than in the first society. Which society is better? Which value judgments for science-related, societal decisionmaking are better under uncertainty (not personal decisions, not under risk, where utility rules sometimes may be better)?24 In other words, although utilitarian default rules may be superior under situations of risk, this chapter argues that maximin rules appear superior to utilitarian rules in situations of uncertainty, especially societal uncertainty—provided the situation meets 3 criteria. To understand these arguments, consider the best defenses of utilitarian and maximin positions, respectively, as provided by Harsanyi and Rawls.
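A minimal sketch of the two-society comparison above, computing the average (expected) utility a utilitarian would maximize and the worst-off utility a maximin user would protect. The first society's figures come from the text; the second society is represented by a uniform 35 units per person, a hypothetical placeholder consistent with the stated average of 35.

```python
# Utilitarian (average expected utility) vs. maximin (worst-off utility) for the two societies.

def expected_utility(u1: float, u2: float, p: float) -> float:
    """Two-state expected utility, u1*p + u2*(1 - p), as in the text's formula."""
    return u1 * p + u2 * (1 - p)

society_1 = [1] * 100 + [90] * 900   # 100 workers at 1 util, 900 non-workers at 90 utils (from the text)
society_2 = [35] * 1000              # rotation scheme; the text gives only the average of 35 (placeholder)

def average_utility(utils):
    return sum(utils) / len(utils)

def maximin_value(utils):
    return min(utils)

print(average_utility(society_1), maximin_value(society_1))  # 81.1, 1
print(average_utility(society_2), maximin_value(society_2))  # 35.0, 35

# The same 81.1 also falls out of the two-state formula, with p = 0.9 the chance of being a non-worker:
print(expected_utility(90, 1, 0.9))  # 81.1

# A utilitarian ranks society 1 higher (81.1 > 35.0); a maximin user ranks society 2 higher (35 > 1).
```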

Harsanyi's First Argument

Harsanyi's main expected-utility arguments against maximin, under uncertainty, are that failure to follow expected-utility rules leads to irrational decisions that ignore probabilities, and to impractical/unethical consequences. He also says that not using expected-utility rules ignores the fact that they allow equal treatment, assigning equal a priori probability to everyone's interests. Consider each of these 4 arguments.

His first argument is that it is "extremely irrational" for maximin proponents to make their behavior wholly dependent on possible unfavorable consequences, despite their low probabilities.25 Thus, says Harsanyi, suppose someone living in New York City is offered 2 jobs: a tedious, low-paid New York job, and an interesting, well-paid Chicago job that requires immediately flying to Chicago on a plane that has a tiny probability of killing all on board. Harsanyi says maximin users would take the New York job, to avoid the worst case (dying in a Chicago plane crash), while utilitarians would take the Chicago job because of its higher average expected utility, as the following table illustrates:

                               If Chicago plane crashes      If no Chicago plane crash
If you choose New York job     You have a poor job           You have a poor job
                               but are alive.                but are alive.
If you choose Chicago job      You die.                      You have a good job
                                                             and are alive.

Because Harsanyi's example unrealistically presupposes zero chances of dying, except from the Chicago plane crash, and because maximin requires avoiding the worst outcome, death, he says maximin users under uncertainty must choose the New York job, whereas an expected-utility user would recognize the low probability of the plane crash and choose the better Chicago job.

Does Harsanyi's argument work? No, for at least 4 reasons. First, it presupposes that the greatest risk is the Chicago plane crash, not getting mugged in New York or dying in an auto accident, whose actual per-mile probability of death is 10 times that of plane-crash death.26 Harsanyi's flawed assumption, not maximin, is what makes the maximin decision look questionable. Second, Harsanyi's example fails because he begs the question when he claims the job choice shows maximin ought not be used under uncertainty.27 Obviously the situation is one of risk, not uncertainty, because Harsanyi says the plane crash is "highly unlikely," and thus has a low probability.28 His example gives no argument about decisionmaking under uncertainty.29 A third reason Harsanyi's example fails is its erroneously presupposing that rational aversion to death is a linear function of probability of death.30 University of California geneticist Bruce Ames and Harvard University attorney Cass Sunstein also make this incorrect death-aversion assumption; consequently they criticize laypeople for chemophobia, greater aversion to industrial chemicals than to natural toxins like molds in foods—which they claim have higher probabilities of harm.31 Yet Nobel Prize–winning economist/psychologist Daniel Kahneman and Amos Tversky show that Ames, Harsanyi, Sunstein, and others err in assuming probabilities alone determine rational risk aversion—that only irrational people avoid low-probability risks.32 Harsanyi and others err because their models ignore essential determinants of judgment, like equity and consent.33 Instead, rational people often choose higher-probability but fairer risks over lower-probability risks that are grossly unfair/racist/sexist, and so on. A fourth reason Harsanyi's job example fails is that it illustrates a situation of individual risk, not societal risk, as required.34 Individual-risk decisions are not problematic because people have rights to do what they want, provided they harm no one else. However, societal-risk decisions are at issue because they affect others who have rights to fair treatment, consent,35 and democratic decisionmaking—and who could be unjustly harmed, for instance, if society decides against regulating greenhouse gases.36 Hence, in societal decisions where probabilities of harm are unknown, protecting all rights of all citizens requires maximin, not just protecting average rights or the rights of most people. If all rights were not equally protected, there would be no human rights. Hence, when harm probabilities are unknown, societal and individual decisions are not analogous. Harsanyi forgets that individual decisionmaking focuses on a decider's substantive concept of rationality, whereas societal decisionmaking also requires citizens' procedural or process concepts of rationality,37 taking account of conflicting points of view and democratic obligations. If someone asks, "How safe is rational enough?" and decides to be an airplane-test pilot, he can be rational, given his value system. But someone has no right to ask, "How safe is rational enough?" and require society to take the same risk. In the societal case, decisionmakers have moral obligations to others—including those hurt in a worst-case scenario that is not their fault. Thus decisionmakers must ask questions like, "How safe is free enough?" "Equitable enough?" "Voluntary enough?" Harsanyi fails to support Bayesian-utilitarian default rules, for societal decisions, under uncertainty, because his flawed example illustrates merely individual decisions under risk, counterfactual plane-crash assumptions, and questionable risk-aversion assumptions.38 Hence, at least when probabilities of harm are unknown, and when innocent people could face catastrophic harm, societal decisions require using maximin, not expected utility, as a default rule.

Harsanyi's Second Argument

Harsanyi's second anti-maximin argument is that it dictates "unreasonable consequences."39 Does it? Consider another example. Suppose the night-shift foreman discovers a leaking toxic-gas canister at a West Virginia Union Carbide plant.40 Company policies and local law require him to notify the local sheriff, the company safety engineer (to try to repair the leak within 30 minutes), and the company's president. If he notifies the sheriff, probably no townspeople will die from the leak, although he and the safety engineer will lose their jobs if they cannot quickly fix the leak. If he does not notify the sheriff and can quickly repair the leak, he and the safety engineer will each receive a $50,000 bonus. However, if he does not notify the sheriff and cannot quickly repair the leak, 10 nearby nursing-home residents will soon die from fumes, and the 2 men will have to notify the sheriff anyway. To make his decision, the foreman uses expected-utility rules and the table below. It employs 2 acts (notifying/not notifying the sheriff) and 2 states (fixing/not fixing the leak in 30 minutes). Because he is uncertain about outcome probabilities, the foreman uses the principle of insufficient reason,41 what Harsanyi calls "the equiprobability assumption,"42 to assign equal probabilities (0.5) to both possible states. Finally the foreman and safety engineer assign a utility (u) to each of the 4 outcomes. They decide not to notify the sheriff because the expected utility for non-notification is higher ((0.5)(38) + (0.5)(–16) = 11) than that for notification ((0.5)(16) + (0.5)(4) = 10). However, maximin users would notify the sheriff so as to avoid the worst outcome—sacrificing lives and jobs.

                                       If crew fixes leak             If crew doesn't fix leak
                                       in 30 minutes                  in 30 minutes
If sheriff is notified now (10u)       Ten lives and 2 jobs           Two people lose jobs but 10
                                       are safe. (16u)                townspeople are safe. (4u)
If sheriff is not notified now (11u)   Ten lives and 2 jobs are       Ten lives and 2 jobs
                                       safe; 2 men get bonus;         are lost. (–16u)
                                       people suffer no fear. (38u)
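A minimal sketch of the foreman's calculation, using the utilities from the table and the 0.5/0.5 equiprobability assignment given in the text. It reproduces the 11-versus-10 expected-utility comparison and shows that maximin instead selects notification (worst case 4u versus –16u).

```python
# Expected utility vs. maximin for the foreman's choice, using the utilities from the table.

p_fix = 0.5  # equiprobability assumption: probability the crew fixes the leak in 30 minutes
outcomes = {
    "notify sheriff":        {"fix": 16, "no_fix": 4},
    "do not notify sheriff": {"fix": 38, "no_fix": -16},
}

def expected_utility(row):
    return p_fix * row["fix"] + (1 - p_fix) * row["no_fix"]

def worst_case(row):
    return min(row.values())

for act, row in outcomes.items():
    print(act, "EU =", expected_utility(row), "worst case =", worst_case(row))

eu_choice = max(outcomes, key=lambda a: expected_utility(outcomes[a]))   # "do not notify sheriff" (11 > 10)
maximin_choice = max(outcomes, key=lambda a: worst_case(outcomes[a]))    # "notify sheriff" (4 > -16)
print("expected-utility choice:", eu_choice)
print("maximin choice:", maximin_choice)
```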

Contrary to the utilitarian decision in the preceding example, maximin seems superior because all employees are required to follow the law/company policies—and to recognize potential innocent victims' rights to life/consent. Indeed, the basic human/legal rights of the 10 potential victims may be the most important factor in deciding this situation,43 because personal gain (bonuses) ought not trump innocent victims' rights to life. If maximin is the superior rule, this case provides a counterexample to the claim that, whenever the recommendations of the 2 rules differ, maximin always suggests unreasonable consequences.44 In response, the foreman could say that society must accept some risks; that he was not acting in self-interest,45 but trying to avoid mass panic;46 or that utilitarian rules require users to rank lives on a common-interval scale—to make difficult interpersonal comparisons of utility. Thus the foreman could say that, given difficult life-comparisons of elderly/sick people, he would rather be dead than elderly/sick.47 If his responses are reasonable, the counterexample against Harsanyi fails. However, arguably he does not have the authority to decide to ignore the human and legal rights of innocent victims, just because he does not value elderly people. Thus the utilitarian default rule has at least 2 serious ethical problems. One is allowing rights violations in the name of expected utility. The other is assuming that maximizing the average utility of all persons—as the rule requires—actually maximizes the utility of each person. Obviously it does not. Because it does not treat elderly people equally, it errs.

Harsanyi's Third Argument

Another Harsanyi anti-maximin argument is that it causes unacceptable ethical consequences: benefiting the least-well-off individuals when they do not deserve it and society is harmed. For support, Harsanyi gives 2 examples.48 The first concerns 2 critically ill pneumonia patients, one of whom also has terminal cancer, but there is enough antibiotic to treat only 1 of them. Harsanyi says utilitarians would give the antibiotic to the non-cancer patient, but maximin strategists would give it to the cancer patient, because he is worse off. In Harsanyi's second example, a severely mentally handicapped citizen and one with superior mathematical ability both request money. Should society's surplus money help educate the mathematician or provide remedial training for the mentally handicapped person? Harsanyi says utilitarians would help the mathematician, but maximin proponents would help the less-well-off mentally handicapped person.

Unfortunately, even if one grants Harsanyi the implausible assumption that utilitarians and maximin proponents would decide as he says, neither example works. Again Harsanyi provides no example of societal decisionmaking under uncertainty, because the terminal-cancer victim in the first example faces certain death soon. Similarly, in the second example, it is certain that money will never improve the mentally handicapped person's situation, because Harsanyi says he "could achieve only trivial improvements," whereas educating the mathematician would be successful and also is not uncertain. Harsanyi's mentally handicapped–mathematician example likewise fails because he defines the mentally handicapped person as less well off and therefore, according to maximin, deserving of remedial-education funding. However, being less well off involves not only intelligence, but also financial well-being, equal opportunity, and so forth. If the mentally handicapped person is happy and cannot be made better off, he has reached full potential and is not worse off than the mathematician—whose education would increase his welfare. Admittedly, Harsanyi claims the mentally handicapped person has greater need for resources.49 Yet if he cannot be bettered, he has no greater need; people need only what can better them. Again Harsanyi's examples fail to illustrate utilitarian superiority under uncertainty and rely on questionable assumptions.50

Harsanyi's Fourth Argument

Harsanyi's final anti-maximin argument relies on the equiprobability assumption,51 a variant of the principle of insufficient reason. First formulated by 17th-century mathematician Jacob Bernoulli, it says that if no evidence suggests one event from an exhaustive set of mutually exclusive events is more likely than another, the events should be judged equally probable.52 Harsanyi claims scientific decisionmakers ought to use this assumption in the utilitarian default rule because it supposedly supports equal treatment, in treating all individuals' a priori interests as equal,53 giving everyone "the same probability, 1/n, of taking the place of the best-off individual, or the second-best-off individual, and so forth, up to the worst-off individual."54 However, using this assumption relies on inconsistency and subjectivity. If there is no justification for assigning a set of probabilities, given uncertainty, there is no justification for assuming states are equally probable.55 The assumption contradicts the stipulation of uncertainty.56 The assumption also is subjective because, in a situation of uncertainty, assuming states are equally probable errs. As Nobel Prize–winning psychologist Daniel Kahneman and coauthor Amos Tversky have argued,57 such reliance on subjective probabilities is irrational and encourages judgmental errors.58 The assumption also is arbitrary in its presuppositions about state definitions, because it is impossible to specify mutually exclusive/exhaustive states under conditions of genuine uncertainty.59 Moreover, because different ways of defining states could result in different utility-maximizing schemes, thus different decisions,60 Harsanyi's supposed argument supports only his arbitrary definitions.

A fourth difficulty with the equiprobability assumption is that it encourages societal disaster. Whenever several states are not equiprobable, and the highest probability is associated with catastrophe, the utilitarian assumption could lead to catastrophe. For instance, if people used the assumption to decide how to handle the Fukushima Daiichi nuclear catastrophe, they could make it worse by underestimating the likelihood of catastrophe. If so, why do people defend the assumption? Duncan Luce and Howard Raiffa say it satisfies all axioms required in situations of partial uncertainty, while maximin satisfies all but 1 of the axioms.61 However, their defense fails because their remarks apply only to individual, not societal, decisionmaking under uncertainty—and hence are not relevant here.62 Therefore, under conditions of societal decisionmaking where probabilities of harm are unknown, and where consequences are potentially catastrophic, avoiding disaster requires using maximin.
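A minimal sketch of the fourth difficulty. The options, utilities, and probabilities are hypothetical placeholders; the point is only that assigning 0.5/0.5 when the catastrophic state is actually the more probable one can make the expected-utility rule select the option maximin would reject.

```python
# Equiprobability assumption vs. a catastrophe-prone reality. All numbers are hypothetical.

options = {
    "risky option": {"benign": 100, "catastrophe": -60},  # attractive if things go well, disastrous otherwise
    "safe option":  {"benign": 0,   "catastrophe": 0},
}

def expected_utility(u, p_catastrophe):
    return (1 - p_catastrophe) * u["benign"] + p_catastrophe * u["catastrophe"]

# Under the equiprobability assumption (p = 0.5), the risky option looks better:
for name, u in options.items():
    print(name, "EU at assumed p=0.5:", expected_utility(u, 0.5))   # risky 20.0, safe 0.0

# But if the catastrophic state is actually more likely (say p = 0.8), the same choice backfires:
for name, u in options.items():
    print(name, "EU at actual p=0.8:", expected_utility(u, 0.8))    # risky -28.0, safe 0.0

# Maximin compares worst cases (-60 vs. 0) and picks the safe option without needing probabilities.
print("maximin choice:", max(options, key=lambda o: min(options[o].values())))
```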



The equiprobability assumption also errs in encouraging discrimination. Because utilitarian rules require one to maximize average expected utility, earlier discussions of rights violations show that these rules allow minorities, with less-than-average utility, to receive heavier burdens/unequal treatment.63 In addition, using the assumption to assign equal probabilities—to the occurrence of pollution-induced and non-pollution-induced cancers, for instance—obviously treats people unequally, because pollution-induced-cancer probability is higher than 0.5 for those living/working near noxious facilities—typically poor people/minorities who are hurt by utility-maximizing rules.64 Harsanyi also discriminates insofar as he confuses states and interests. Because people's interests are affected not only by equiprobable states but also by their different abilities/genetics/histories/social institutions, the equiprobability assumption worsens harm to them. For instance, consider genetic differences in those exposed to typical municipal-waste-combustion-facility emissions: maximum individual lifetime cancer risks vary 300-fold for dioxin, polychlorinated biphenyls, arsenic, beryllium, and chromium—and phenotypic variation causes more than 200-fold differences in individuals' toxin sensitivity.65 Averaging expected utility in such cases does not treat people equally, but further hurts already-vulnerable people. Likewise, Harsanyi errs in presupposing that treating everyone the same is equitable. Genuine equal treatment often requires treating people differently, so as to take account of different degrees of merit/need/rights to compensation. Treating people the same/equiprobably, within existing harmful relationships, merely reinforces harm.66 Because equal treatment requires ethical analysis, not crude assumptions, Harsanyi's fourth argument again fails.

The result? Although Harsanyi's arguments defending expected-utility default rules are reasonable for cases of risk and individual decisionmaking, they are unreasonable/unethical for cases of uncertainty and societal decisionmaking. Therefore, maximin rules appear superior to expected-utility rules in situations of societal uncertainty, where there is potential for catastrophe, and where probabilities of harm are unknown.
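Before turning to Rawls, here is a minimal numeric sketch of the averaging problem just described, using the 300-fold sensitivity variation mentioned above; the population split and baseline risk are hypothetical placeholders.

```python
# Averaging risk over a population whose sensitivity varies ~300-fold. The 300-fold spread is the
# variation cited in the text; the baseline risk and population split are hypothetical placeholders.

typical_risk = 1e-6                    # hypothetical lifetime cancer risk for a typical individual
sensitive_risk = 300 * typical_risk    # most-sensitive individuals (300-fold spread, from the text)
sensitive_fraction = 0.01              # hypothetical: 1% of the population is highly sensitive

average_risk = (1 - sensitive_fraction) * typical_risk + sensitive_fraction * sensitive_risk

print(f"population-average risk: {average_risk:.2e}")    # ~4.0e-06
print(f"most-sensitive risk:     {sensitive_risk:.2e}")  # 3.0e-04
print(f"the average understates the sensitive group's risk ~{sensitive_risk / average_risk:.0f}-fold")
```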

Rawls's First Argument

Of course, flaws in Harsanyi's arguments are not sufficient grounds for abandoning expected-utility rules unless Rawls's alternative arguments are superior. Consider his 4 main arguments. Rawls's first argument is that using maximin would lead to justice based on an equal-opportunity criterion according to which societal-policy arrangements are fair to all, especially the worst-off persons.67 Rawls believes that although the first virtue of institutions is justice, society could create just social institutions even if all rational individuals cared about only their own interests. Rawls says that provided rational people negotiate with each other about social institutions—but behind the "veil of ignorance," ignorance of anyone's socio-economic position/IQ/history/talents/opportunities—they would negotiate just institutions. Why? Given such ignorance, they would realize they could be disadvantaged themselves. Consequently they would arrange society in equal-opportunity ways, to ensure that the least-well-off people were least disadvantaged.68 Given this veil-of-ignorance uncertainty, Rawls says rational people would mitigate the "arbitrariness of the natural lottery,"69 arbitrary distributions of talents, wealth, and so on.70 If so, they would support maximin, not expected-utility rules.

The main objection to Rawls's maximin argument is that using it might not increase average societal utility.71 This objection fails, however, because it sanctions treating humans as means to the end of societal utility. Instead, most moral philosophers say people should be treated as ends in themselves because the comparison class is all humans; all have the same capacity for a happy life;72 free, informed, rational people would agree to equal-protection principles;73 and all legal and political schemes presuppose equal protection by presupposing consistency, fairness, justice, rights, and autonomy.74 "Law itself embodies ideals of equal treatment for persons similarly situated."75 A related problem is the objection's sanctioning human-rights violations by allowing unequal treatment of minority or vulnerable people. Yet if all members of society have equal, prima facie rights to life, as the most basic human right, allowing uncompensated societal threats to innocent groups violates their rights. If there were no such rights, the majority could do whatever it wished to minorities, creating a tyranny of the majority.76 A second problem with this objection is its providing no morally relevant grounds for allowing expected-utility rules that treat people unequally. Yet given duties to treat people equally,77 only relevant moral reasons—justified each time on moral grounds like merit/compensation/need—justify treating people unequally.78 Only unequal treatment requires defense, because the burden of proof is on discriminators.79 Under uncertainty, however, discriminators could not logically argue that maximizing average expected utility provided morally relevant grounds for discrimination,80 because by definition such grounds would be unknown. Thus Rawls succeeds. A third problem with the objection to Rawls is its providing no higher interest that maximal average expected utility serves,81 so as to justify unequal treatment. People often claim economic progress is such a higher interest, especially when risky science/technology threatens vulnerable people.82 Or they often claim some equal protections—e.g., increased food and pesticide inspections—are not cost-effective or utility-maximizing. However, such claims err.83 If economic progress really served everyone's equal interests, it would have to be "required for the promotion of equality in the long run"; any other interpretation of "serving everyone's interest" would be open to charges of using some humans as means to the ends of others.84
But expected-utility decisions—such as avoiding expensive pollution controls and thus forgoing more equal risk distribution—do not obviously lead to greater overall equality, because they rely on the questionable factual assumption that promoting science/technology, without seeking equal risk protection, will lead to greater long-term equal treatment. Historically, this assumption has been proved false.85 For instance, although the United States increased its standard of living/average expected utility in the past half century, wealth distribution has become less equal; only the welfare of the top 1 percent has actually improved. Indeed, since the mid-1970s, the relative income shares of the poorest 80 percent of Americans have declined.86 Because wealth has declined for the most vulnerable, but wealth is needed to utilize equal opportunities,87 economic growth has not equalized political treatment, but made inequities worse.88 One reason is that science/technology generally eliminates jobs,89 and most recent US job increases are in the service sector.90 Hence, expected-utility rules do not typically equalize opportunities,91 but worsen the plight of the poor; they bear more scientific/technological risks, are more unskilled/unemployable, and must compete more frantically for scarcer jobs.92 Yet they bear most environmental injustice.93 Expected-utility rules thus distribute societal costs in regressive ways but provide disproportionate benefits for the educated and wealthy, who can pay for environmental quality.94 Non-whites and the poor bear the highest pollution levels,95 the rich receive the most benefits,96 and expected-utility rules further burden poor people and minorities.97 Thus, if one has moral obligations to promote equal opportunity,98 expected-utility rules fail in situations of uncertainty where no great benefits arise from imposing unequal threats. That is, as already suggested, maximin rules appear superior to expected-utility rules in situations of societal uncertainty, potential catastrophe, unknown harm probabilities, and no essential benefits from imposing unequal threats. Besides, contrary to Harsanyi, most utilitarians do not defend average-expected-utility rules. "Most utilitarians think that inequalities of distribution tend to reduce the total welfare."99 If so, objections to Rawls's first argument fail,100 and objections that ignore societal inequity, in the name of great societal benefits, also fail.

Rawls's Second Argument

Another Rawlsian argument is that maximin would avoid using a utility function, designed for risk taking, in the area of ethics—where it does not belong. Because utility functions express the subjective importance people attribute to their interests, whereas ethics expresses the importance they ought to attribute, Rawls's argument is that utilitarians cannot rationally discriminate, on ethical grounds, among alternative preferences.101 Utilitarians' equating preferences with oughts also is problematic because people often prefer things, like cigarettes or certain marriage partners, that do not increase their welfare. If one's welfare were equated with one's preferences, many undesirable consequences would follow, such as
• ignoring the quality of preferences and espousing moral relativism;102
• contradicting utilitarian assumptions that preferences are stable, consistent, and so on;103
• equating needs and wants,104 morality and utility;105 and
• defining group welfare as merely aggregated, averaged, individual preferences.106
Yet morality is not merely aggregate or average individual preferences. Although such egoism might serve personal welfare, it would destroy society and community. Besides, in rapidly changing situations, leaders must act on the basis of likely future events, not present individual preferences.107

Harsanyi's response to this second Rawlsian argument is that utility functions indicate not merely individual attitudes, but also the utility individuals assign to goals, including ethical goals.108 However, Harsanyi's response is unprincipled because utility judgments are based on preferences, whereas moral judgments are based on principles. Thus Harsanyi must admit preferences can be unprincipled and harm genuine utility. His response also is incoherent because he wishes to make moral judgments by using subjective-utility functions—not unchanging moral principles like equal protection—yet cannot compare interpersonal utilities. Why not? He says preferences that measure things' subjective importance are more important than following moral principles, because people's preferences are different.109 If so, even people in similar circumstances, with similar backgrounds, could have different preferences not governed by the same basic psychological laws. If so, interpersonal-utility comparisons are impossible. Yet this conclusion contradicts Harsanyi's claims that all humans' "preferences and utility functions are governed by the same basic psychological laws,"110 and that interpersonal-utility comparisons can be specified completely because they "have a completely specific theoretical meaning."111 Thus, Harsanyi cannot consistently claim both that people's differences require preferences as welfare measures,112 and that interpersonal-utility comparisons are possible because people's utility functions "are governed by the same basic psychological laws."113 Why does Harsanyi base utility on subjective preferences, and not also on ethical principles?114 To treat societal decisionmaking, Harsanyi must eliminate differences among persons and define rational behavior as average expected utility. But eliminating such differences eliminates individual decisionmakers. Therefore, because Harsanyi cannot consistently claim that his theory, like Rawls's, both formulates social problems in terms of individual decisions and yet has no individual utility function,115 his attacks on Rawls's second argument are incoherent.



Rawls's Third Argument

Rawls's third argument against expected-utility rules is that they require heroic actions.116 Because utilitarians must maximize average expected utility, people are equally obliged to perform normal and heroic actions,117 such as giving up one's life for poor people. For talented individuals, utilitarian rules would demand unfair sacrifices. Harsanyi agrees and says utilitarians need not sacrifice.118 If Harsanyi's response is correct, however, his system is not utilitarian but partly duty-based or rights-based. He can escape requiring heroic actions, but only by abandoning utilitarian rules and falling into inconsistency. Analogous problems for utilitarians arise with conflicts between duty and utility. Suppose a father sees a building on fire. In one room of the building is his child, and in another are 2 other children. He must decide whether to save his child, or the other 2 children, when doing both is not possible. Parental duty dictates trying to save his own child, but average-expected-utility rules dictate trying to save the 2 other children. Utilitarians err because they cannot consistently defend duties/rights, only average expected utility.119

Rawls's Fourth Argument

Another problem with expected-utility rules is their dependence on uncertain predictions about the results of alternative policies. As a consequence, 2 well-informed, well-intentioned utilitarians could each come to different conclusions about what is good or bad.120 This problem results partly from the fact that many unpredictable variables affect expected-utility outcomes,121 making expected utility subjective. Of course, objectors might say maximin rules also rely on predicting consequences because, although maximin relies on no probabilities, it must avoid the worst possible outcome. To some degree, this anti-maximin objection is correct, although it is easier for maximin proponents to tell which consequences are worst than for utilitarians to rank their interval-scale utilities or assign specific probabilities, as already mentioned. Also, if the worst scientific threats typically are imposed on the poor, and if census-tract income distributions often suggest who is likely least advantaged/most vulnerable, discovering worst outcomes is not difficult. Harsanyi, however, says greater difficulties with predicting expected utility actually are advantageous, in allowing utilitarians to avoid "simple-minded, rigid mechanical rules" that are inappropriate for complex moral problems.122 However, because Harsanyi admits utilitarians have nothing but expected-utility rules, unlike Rawls, they are open to many objections.123 The objections include that every decision must always depend on expected advantages and disadvantages,124 as calculated by one person, not also on duties/rights. If so, utilitarians could consistently and secretly allow disenfranchising some minority.125 Harsanyi admits this unfairness, suggesting utilitarians could justify "curtailing civil liberties."126 Unless one believes expediency supersedes duty, however, Harsanyi fails to defeat Rawls.127

Maximin, Practicality, and Three Conditions The preceding arguments, about evaluative decision rules under scientific uncertainty, show that if at least 3 conditions are met, maximin is likely to be superior to expected-utility decisionmaking: (1) societal (not just individual) decisionmaking under uncertainty/unknown harm probabilities; (2) potentially catastrophic effects; and (3) no benefits offsetting unequal societal threats. Besides ethics, prudence also argues for using maximin rules under circumstances (1)–(3), although utilitarian rules often are superior in situations of individual risk. One prudential reason is that the 1969 US National Environmental Policy Act requires policymakers to ensure every individual safe, healthy surroundings, not merely what maximizes average utility.128 In protecting all Americans, the act clearly rejects expected-utility rules that could harm minorities. Also, slow-moving, inefficient bureaucracies ought not be trusted to make reliable, timely, utility-based decisions about life-and-death matters. To protect people, bureaucracies need maximin, especially in potentially catastrophic/uncertain situations,129 like the Pearl Harbor attack. Although by December 7, Admiral Kimmel, the US Pacific Fleet Commander, had received 7 different warnings of Japanese attack, he and his bureaucracy failed to move the US fleet from the harbor, to patrol the island by air, and to staff emergency-warning centers.130 Disaster resulted. Kimmel should have followed maximin.131 Likewise, Chernobyl and other disasters occurred partly because assessors called them highly improbable,132 thus ignored the fact that a nuclear-core melt could kill 150,000 persons.133 They should have followed maximin under uncertain/potentially catastrophic circumstances. As already mentioned, Kahneman and Tversky also suggest maximin is needed under conditions (1)–(3). They show that even statistics PhDs fall victim to statistics biases such as overconfidence, representativeness, availability, and anchoring,134 although elementary texts warn against them,135 especially when dealing with uncertainty.136 Thus experts were wrong when they said that irradiating enlarged tonsils was harmless, that X-raying feet—to determine shoe size—was harmless, that irradiating women’s breasts—to alleviate mastitis—was harmless,137 that witnessing A-bomb tests at close range was harmless.138 Psychometricians say experts typically overlook 6 biases, including ignoring human-error-caused catastrophes (e.g., the Three Mile Island nuclear accident); being overconfident about their judgments (e.g., the 1976 Teton Dam collapse); misunderstanding whole-system functioning (e.g., airplane accidents); ignoring chronic and cumulative effects (e.g., climate change); not anticipating flawed human responses to accidents (e.g., Fukushima-Daiichi nuclear


melts); and not anticipating simultaneous, common-mode failures (e.g., Brown’s Ferry accident).139 If even experts err in these 6 ways, Kahneman and Tversky warn that experts cannot model hazards correctly with subjective-utilitarian approaches,140 and cannot avoid errors under uncertainty. If so, prudence dictates being conservative and following maximin, at least in cases of uncertainty and potentially catastrophic threats. Ignoring prudence, maximin opponents may say maximin could harm progress/economic development.141 However, using expected-utility rules under conditions of societal uncertainty and potential catastrophe also harms progress and economic welfare, as catastrophic scientific failures like Fukushima-Daiichi show. If so, restricting expected-utility rules to cases where they are successful—individual scientific hazards under risk, not societal hazards under uncertainty—may promote science and economic progress.142 However, utilitarians might complain that maximin could thwart economic progress because it forces more safety expenditures to protect against worst-case hazards.143 The obvious response, however, is that worst-case occurrences also are extremely costly. Besides, one knows that preventing worst cases is more costly only if one knows catastrophes are improbable—which, by definition, one cannot know in a case of uncertainty. If not, reasonable people use maximin evaluations so as to avoid catastrophe. Consider anthropogenic climate change. Recent polls found that only 56 percent of Americans believe average global temperatures have risen, and 64 percent think scientists disagree about whether climate change exists.144 Fossil-fuel industries, eager for the United States not to restrict their products, are partly responsible for climate-change doubts. They fund scientists, like physicist Fred Singer, to deny climate change.145 Yet if one examines all basic climate research in refereed scientific-journal articles since 1993, there is no disagreement about whether anthropogenic climate change exists, merely disagreement on minor details about it. All scientists doing climate research say it exists. None challenge its existence, consistent with the consensus statements of major scientific groups warning against it.146 Thus, because its existence is certain and its probabilities are calculable, climate change is not a case of uncertainty. Even if fossil-fuel interests convinced people that climate change was uncertain, this chapter’s arguments show that prudence dictates addressing it because of the magnitude of the possible catastrophe.147
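
To make the contrast between the two decision rules concrete, consider a minimal sketch under stated assumptions: the actions, payoffs, and accident probability below are illustrative inventions, not figures from this chapter. Expected utility rewards the option with the best probability-weighted average, while maximin rewards the option with the least-bad worst case:

```python
# Minimal sketch (illustrative numbers only): expected-utility versus maximin
# choices for a hypothetical societal decision with two possible states of
# the world, "no accident" and "catastrophic accident".

payoffs = {
    # action: (utility if no accident, utility if catastrophic accident)
    "risky technology": (100.0, -10_000.0),
    "safer alternative": (60.0, 55.0),
}

# Expected utility needs a probability; assume (hypothetically) that assessors
# call the catastrophe "highly improbable".
p_accident = 1e-4

def expected_utility(action):
    good, bad = payoffs[action]
    return (1 - p_accident) * good + p_accident * bad

def worst_case(action):
    return min(payoffs[action])

print(max(payoffs, key=expected_utility))  # -> "risky technology"  (EU roughly 99)
print(max(payoffs, key=worst_case))        # -> "safer alternative" (worst case 55 vs -10,000)
```

Under genuine uncertainty, condition (1), the value of p_accident is precisely what no one knows, which is why the chapter restricts maximin to societal cases that also meet conditions (2) and (3).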

Conclusion Of course, as the climate-change case shows, people are not always reasonable, especially when economic incentives encourage them to exploit scientific uncertainty. However, such situations provide even further grounds for arguing that, if at least 3 conditions are met, maximin is likely to be superior to expected-utility decisionmaking about science: (1) uncertainty/unknown harm probabilities; (2) catastrophic potential; and (3) no offsetting benefits for unequal treatment. Better safe than sorry.


CHAPTER 14

Understanding Uncertainty
FALSE NEGATIVES IN QUANTITATIVE RISK ANALYSIS

Are cell phones dangerous? The World Health Organization and International Agency for Research on Cancer say it is likely. They classify wireless radiofrequency-electromagnetic fields as possibly carcinogenic to humans because of increased brain cancers.1 Many neuro-oncologists likewise say they have confirmed a linear relationship between cell-phone usage and brain-tumor incidence, and therefore people should limit cell-phone exposure.2 Some epidemiologists, however, say that despite insufficient data, results do not suggest excess brain tumors from mobile phones.3 Similar controversies beset fracking—hydraulic and chemical fracturing of shale so as to extract natural gas. Scientists at the International Energy Agency say fracking can be done safely; independent, university-based scientists and physicians say that it cannot, that it contaminates groundwater and degrades air quality. Indeed, nations like France have suspended all fracking.4 How should scientists respond if harms like cell-phone carcinogenicity are uncertain? In situations of scientific uncertainty, what is the most defensible value judgment or default rule—the rule specifying who has the burden of proof and who should be assumed correct, in the absence of arguments to the contrary? If the previous chapter is correct, in situations involving uncertainty, potential societal catastrophe, and no overarching benefits, scientists ought to use the default rule of assessing their data in terms of maximin, not expected-utility rules. Using maximin would require scientists evaluating cell phones to protect those who are most sensitive to electromagnetic radiation, like children—at least in default cases where there are no compelling arguments to the contrary. Yet because it is more expensive for commercial interests to use this default rule to help protect extremely sensitive populations, doing so might increase market costs. Not using this default rule, and therefore protecting children less, might decrease market costs.5


Chapter Overview This chapter argues that scientists facing situations having 5 characteristics—(1) uncertainty about harm probabilities and consequences, (2) potentially catastrophic societal risks, (3) absence of overarching benefits, (4) the impossibility of avoiding both false positives and false negatives, and (5) the absence of compelling arguments to the contrary—should not follow the traditional scientific value judgment/default rule of minimizing false positives, false assertions of harmful effects. Instead, under conditions (1)–(5), scientists should follow the default rule of minimizing false negatives, false assertions of no harm. The chapter first reviews these 2 types of error, then shows how they are analogous, respectively, to default rules in civil and criminal law. Third, it makes the case for using the default rule of minimizing false negatives in situations involving conditions (1)–(5). Fourth, it answers objections to these arguments.

False-Positive and False-Negative Errors When science includes legitimate and unresolved disagreement, scientists face uncertainty. For instance, some scientists say nano-particle-containing sunscreens are dangerous, while others say they are not. 6 Some scientists say oral contraceptives are dangerous,7 while others say they are not. 8 In such uncertain situations, false positives (type-I errors) occur when scientists reject a true null hypothesis, for example, “progestin-estradiol oral contraceptives have no increased-ovarian-cancer effects.” False negatives (type-II errors) occur when one fails to reject a false null hypothesis. Yet under conditions of scientific uncertainty, often it is statistically impossible for scientists to minimize both false positives and false negatives. Instead, they must make a value judgment—or use a default rule—about which type of error to minimize and about a testing pattern for their hypothesis. Typically they define the concept of statistical significance (see chapter 8) in terms of a false-positive risk of 0.01 or 0.05. That is, there is not more than a 1 in 100, or a 5 in 100, chance of committing a false-positive error. Which error is more serious, false negatives or false positives? In law, an analogous issue is whether it is more serious to acquit a guilty person or to convict an innocent person. Should scientists run the risk of not recommending some scientific product that is really safe, or of recommending some scientific product—like cell phones—that is unsafe and could harm people? Decreasing commercial risks, by minimizing false positives, might hurt public health. Yet decreasing public risk, by minimizing false negatives, might hurt commercial profits.9
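
To see the statistical trade-off concretely, consider a minimal sketch (not from the text; the sample size and effect size are illustrative assumptions) of a one-sided test of a no-effect null hypothesis. With the sample size fixed, tightening the false-positive rate alpha necessarily raises the false-negative rate beta:

```python
# Minimal sketch (illustrative numbers only): with a fixed sample size,
# making the false-positive rate (alpha) stricter raises the false-negative
# rate (beta) for a one-sided z-test of "no effect" versus a small real effect.

from math import sqrt
from statistics import NormalDist

z = NormalDist()        # standard normal distribution
n = 100                 # assumed sample size
effect = 0.25           # assumed true effect, in standard-deviation units

for alpha in (0.05, 0.01):
    z_crit = z.inv_cdf(1 - alpha)              # rejection threshold for the test
    beta = z.cdf(z_crit - effect * sqrt(n))    # P(fail to reject | effect is real)
    print(f"alpha = {alpha}: beta = {beta:.2f}, power = {1 - beta:.2f}")

# Output: alpha = 0.05 gives beta of about 0.20; alpha = 0.01 gives beta of
# about 0.43. Cutting the false-positive risk from 5 percent to 1 percent
# roughly doubles the chance of falsely asserting "no harm".
```

The point is not the particular numbers but the structural fact the chapter relies on: under uncertainty and limited data, scientists cannot drive both error rates down at once, so choosing which error to minimize is an unavoidable value judgment.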


Scientific Preferences for Minimizing False Positives Most scientists have done hypothesis-testing so as to minimize false positives, to limit incorrect rejections of the no-effect hypothesis. To do so, they typically design experimental studies to guard against confounders, alternative possible causes. Thus, as chapter 8 illustrated, they demand replication of study results and often test for statistical significance. They do so because the scientific enterprise needs skepticism, rigorous reluctance to accept positive results. Otherwise, false claims would be accepted as science. As Abraham Kaplan put it, “The scientist usually attaches a greater loss to accepting a falsehood than to failing to acknowledge a truth. As a result, there is a certain conservatism or inertia in the scientific enterprise.”10 When both types of error cannot be avoided, apparent scientific preferences for false-negative or public risks also might arise because, as the previous chapter noted, most science is done by special interests that hope to profit from it. Special-interest scientists often underestimate product and pollution risks,11 partly because it is difficult to identify all hazards, partly because they assume unidentified risks are zero, and partly because their employers want to claim their products and pollutants are safe. As chapter 13 suggested, use of the default rule of minimizing commercial risk also arises because scientific experts almost always use expected-utility decision rules, regardless of whether the situation meets the 3 criteria where maximin rules appear more appropriate. Scientists’ preferences for the default rule of risking false-negative or public risks and for minimizing false-positive or commercial risks—when both cannot be avoided—are also consistent with standards of proof required in criminal cases, as opposed to cases in torts. Because US law requires juries in criminal cases to be sure beyond a reasonable doubt that a defendant is guilty before deciding against him, criminal standards of proof reveal preferences for false negatives, for innocence verdicts, for risking acquitting guilty people. In cases in torts, however, because US law requires juries to believe only that it is more probable than not that defendants are guilty, standards of proof—default rules—in civil cases reveal no preference for false negatives or false positives. Why the difference? Judith Jarvis Thomson says that “in a criminal case, the society’s potential mistake-loss is very much greater than the society’s potential omission-loss.”12 That is, consequences to criminal defendants could include mistakes like execution. Nations also protect their moral legitimacy by minimizing false-positive verdicts in criminal cases. If they fail to convict the guilty, they commit a more passive, less reprehensible, wrong than if they convict the innocent. Thus, if standards of proof in cases of commercial or false-positive risks were analogous to those in criminal cases, society should minimize commercial and not public risk. Later paragraphs argue, however, that how scientists should behave in uncertain, potentially catastrophic scientific situations is disanalogous to hypothesis-testing in pure science and to determining criminal guilt. Why? Researchers doing pure science prefer the default of minimizing false positives because it seems more scientifically conservative. It avoids positing an effect, for


instance, that a product causes cancer. Instead, it presupposes the null hypothesis is correct, for instance, that the product causes no cancer. In pure/basic science—without welfare consequences—it thus seems reasonable to claim that one should maximize truth and avoid false positives. However, chapter 13 argued that societal decisionmaking under uncertainty is disanalogous to pure-science decisionmaking because it also requires taking account of processes for recognizing ethical/legal obligations. When one moves from basic to policy-relevant science, what is rational moves from epistemological to both ethical and epistemological considerations. Thus, the default of minimizing false positives in basic science provides no rationale for doing so in policy-relevant science. Civil law likewise exhibits no preference for minimizing false positives, although criminal law does; criminal law protects the more vulnerable party, the defendant. In policy-relevant-science cases, however, the public is more vulnerable than commercial interests because many could die from dangerous pollutants/products. Yet if scientists err in assuming some product/process is harmful when it is not, the main losses are economic, not fatalities. Therefore, if the aim of societal decisionmaking is to avoid more serious harms, cases of policy-relevant science are not analogous to criminal cases. Why not? In policy-related science, the greater threats are to the public, from false negatives, whereas the greater criminal-case threats are to defendants, from false positives. Members of the public (as compared to producers/polluters) often have less information about the risks of welfare-affecting science, fewer financial resources to use in avoiding them, and greater difficulty exercising due-process rights after being harmed, because they bear the burden of proof.13 If so, welfare-related science requires protecting the public, the more vulnerable party, by using the default of minimizing false negatives, false assertions of no harm, when scientists face situations having 5 characteristics—(1) uncertainty about harm probabilities and consequences, (2) potentially catastrophic societal harm, (3) absence of overarching benefits, (4) unavoidability of both types of error, and (5) absence of compelling arguments to the contrary. Characteristics (4) and (5) merely state some of the conditions for defining something as a default rule, and subsequent sections defend characteristics (1)–(3) as situations that, together, argue for using the default rule of minimizing false negatives.
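
Stated as a procedure, the chapter's default rule can be sketched roughly as follows (a minimal illustration, with the five characteristics paraphrased as Boolean flags of my own naming; judging whether they actually hold in a given case is, of course, the substantive work):

```python
# Minimal sketch of the chapter's default rule, with characteristics (1)-(5)
# paraphrased as Boolean flags (the flag names are mine, not the author's).

def error_to_minimize(uncertain_probabilities: bool,
                      potentially_catastrophic: bool,
                      no_offsetting_benefits: bool,
                      both_errors_unavoidable: bool,
                      no_compelling_counterarguments: bool) -> str:
    """Return which error type the default rule says scientists should minimize."""
    if all((uncertain_probabilities, potentially_catastrophic, no_offsetting_benefits,
            both_errors_unavoidable, no_compelling_counterarguments)):
        return "false negatives"   # policy-relevant science: protect the public
    return "false positives"       # otherwise, the traditional scientific default

# A policy-relevant case meeting all of (1)-(5):
print(error_to_minimize(True, True, True, True, True))     # -> false negatives
# Pure/basic science with no welfare consequences at stake:
print(error_to_minimize(False, False, False, True, True))  # -> false positives
```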

Minimizing False-Negative Risks in Practical Science Obviously the decision whether to minimize false negatives or false positives must partly be made on a case-by-case basis. Therefore, this chapter argues for prima facie grounds for reducing public risk, false negatives, under conditions (1)–(5) above. That is, it argues that the person seeking to reduce commercial risks or false positives bears the burden of proof for accepting potentially catastrophic, uncertain, low-benefit, societal risks. These arguments show both that


public risks are the kinds most deserving of reduction, and that members of the public (who typically choose to minimize public risks) should be the main locus of decisionmaking regarding potentially catastrophic risks. One reason to minimize false negatives in potentially catastrophic cases is that it is more important to protect the public from harm than to provide some societal benefit. Why? Protecting from harm is a necessary condition for enjoying other freedoms.14 Ethicist Jeremy Bentham, for instance, discussing liberalism, cautioned—as ethicist Robert Nozick and others might—that “the care of providing for his enjoyments ought to be left almost entirely to each individual; the principal function of government being to protect him from sufferings.”15 Although sometimes people cannot easily distinguish between providing benefits and protecting from harm, there is a general distinction between welfare and negative rights;16 between welfare laws that provide benefits and protective laws that prevent harms; between letting die and killing; and between acts of omission and commission.17 Given such distinctions, because protecting people from harm is more important than providing them some good, protecting people from dangerous products is more important than providing risky products. Another reason for minimizing false negatives in potentially catastrophic cases of practical science is that doing so protects the innocent public, whereas minimizing false positives would protect mainly those trying to profit from risky products. Because industrial producers, users, and implementers of science and technology—not the public—receive the bulk of science-and-technology benefits, they and not the public deserve to bear most associated risks.18 In addition, the public typically needs more risk protection than commercial interests. Laypeople usually have fewer financial resources and less information to cope with societal risks, especially because special interests may deny that risks to the public exist.19 A typical situation occurred in Japan, where the dangers of mercury poisoning were identified in 1940, and deaths were reported in 1948. Yet the infamous Minamata poisoning occurred in 1953. Because of commercial/government indifference and repeated official denials of harm, government took action against mercury contamination only in the 1960s, roughly 25 years after the dangers were first identified. Such cases, as well as economic and government financial incentives for ignoring public risks, suggest the public has greater need of protection.20 Minimizing public risks also seems reasonable because citizens have legal rights to protection against scientific-commercial decisions that could threaten their welfare, whereas risk imposers have no rights to profits from any products/pollutants. Citizens’ rights to such protection are especially important because many potentially lethal risks typically are not fully compensated. Instead, consumers usually have 3 options regarding risks: prevention, transferal of loss, and risk retention. When citizens protect themselves by maintaining enough assets to rectify damages, they employ risk retention. When they use mechanisms like insurance and legal liability to transfer the risk, they protect themselves by passing


most of the loss to the insurer or liable party. Risk transfer obviously is more practically desirable than retention because it does not require people to retain idle, unproductive assets, to guard against damage. The moral desirability of risk transfer is that, if special interests harm someone, they and not the victim should be liable. The ethical responsibility of special interests thus provides grounds for removing financial responsibility from the victim. Insurance is a better vehicle for risk transfer than liability because insurance typically does not require victims to use costly, lengthy, legal remedies to obtain protection or compensation.21 However, the most ethical way for innocent victims to protect against science-related societal risks is prevention because it ties up no victim assets. Thus, in cases where those responsible cannot compensate harms they do to others, ethics requires those risks to be eliminated, especially if potential victims do not give free, informed consent to them. Judith Jarvis Thomson describes incompensable harms as those so serious that no money could compensate the victim. By this definition, death obviously is an incompensable harm. “However fair and efficient the judicial system may be,” because those who cause incompensable harms by their negligence “cannot square accounts with their victims,”22 they commit an ethically unjustifiable harm. But when are risks unjustifiable? Of course, borderline cases are controversial—because potentially catastrophic technologies, like nuclear energy or industrial-chemical carcinogens, impose significant, potentially catastrophic, incompensable risks on others. Nevertheless, scientists assessing such cases should minimize false negatives. For instance, as earlier chapters noted, the US government admits that a nuclear accident could kill 150,000 people and that the core-melt probability for all existing/planned US commercial reactors is 1 in 4 during their lifetimes. Even worse, US citizens are prohibited by law from obtaining compensation from negligent nuclear utilities for more than 1–2 percent of losses from a worst-case, commercial-nuclear catastrophe. In most of the world, citizens have no commercial-nuclear-insurance protection at all. Because commercial-nuclear risks are both uncompensated and rejected by a majority of people in all nations except North Korea, they appear ethically indefensible. If so, they should be prevented, including through scientists’ minimizing false negatives in their assessments.23 Another reason for minimizing false negatives in practical, potentially catastrophic, welfare-related science is that many uncertain risks, for instance, million-year-hazardous-waste disposal, impose involuntary, uncompensated, thus unjustifiable harm. Besides, as chapter 13 argued, harm is justifiable only when it leads to greater good for all, including those most disadvantaged by it. If there is uncertainty about some science-related harm, obviously one could not show that imposing it would lead to greater good for all. Hence one could not justify imposing it. Still another reason for minimizing false negatives in such circumstances is that doing so is often necessary to counter special-interest science, discussed earlier.24


Chapter 7 revealed false-negative biases in all pesticide-manufacturer studies of chemical risks, submitted to the Environmental Protection Agency (EPA) for regulatory use. Yet such false-negative biases, especially small sample sizes, occur frequently in pharmaceutical, medical-devices, energy, pollution-related, and other commercial scientific research because, as chapter 12 warned, most scientific work is funded by special interests. The result? Even when government decisions affect them, citizens often receive the best science money can buy, not truth. However, scientists can help counter this false-negative bias by minimizing false negatives in welfare-affecting science that has potentially catastrophic consequences.

An Economics Objection In response to the preceding arguments, suppose someone objects that scientists have duties, for the good of the economy, to minimize commercial risks and false positives, not public risks and false negatives, under conditions (1)–(5) outlined earlier.25 This objection fails because, as chapter 13 argues, it would require using human beings as means to the end of economic growth, rather than as ends in themselves. Yet tort law, the US Constitution,26 basic ethics rules, and rights to bodily security prohibit using persons as means to the ends of other persons.27 Moreover, as chapter 12 revealed, expert-calculated-risk probabilities do not provide reliable bases for pursuing economic efficiency because experts have as many biases in estimating harm probabilities as laypeople.28 If so, scientists ought to minimize false negatives and public risk in cases characterized by conditions (1)–(5). Besides, as chapters 4 and 12 illustrate, citizens should be protected against false-negative biases in assessments of potentially catastrophic technologies that could cripple the economy. For instance, the major pro-nuclear lobby, the Atomic Industrial Forum, admits that commercial atomic energy could not have survived without protection from normal market mechanisms and accident-liability claims.29 Yet nuclear accidents can cripple the economy, as chapter 12 and Fukushima show. Massively undercapitalized and unable to withstand liability claims, atomic energy and such capital-intensive technologies can both threaten the economy and jeopardize investments in clean technologies like solar-photovoltaic and wind.30 Still another problem with arguments to maximize public risk and minimize commercial risk is their inconsistency. Often they sanction interfering with market mechanisms to protect special interests, yet they reject market-regulatory interference in order to protect the public. If so, minimizing false positives and commercial risk is questionable in situations characterized by (1)–(5).31 Other problems also face those who minimize false positives so as to protect the economy more than human beings. Their reasoning is as invalid as analogous


arguments that abolishing slavery would destroy the economy of the South, or that giving equal rights to women would destroy the family, or that forcing automakers to put safety glass in windshields would bankrupt carmakers. Such arguments err because they pit cultural values—like family and economic well-being—against human rights to life and to equal treatment, and they sanction discrimination. Judith Jarvis Thomson’s response to such arguments is that “it is morally indecent that anyone in a moderately well-off society should be faced with such a choice.”32 As chapter 13 argued, the only grounds justifying such discrimination against people are that it will work to the advantage of everyone, including those discriminated against. Otherwise, such discrimination would amount to sanctioning the use of some humans as means to the ends of other humans.33 For all these reasons, in cases of uncertain, potentially catastrophic science having characteristics (1)–(5), the burden of proof should be on those attempting to put the public at risk by minimizing false positives.

The Public Should Accept or Reject Societal Risks In cases of uncertain, potentially catastrophic, low-benefit science, the public and not scientists alone also should choose scientific value or default rules. Given democratic process, no one should impose risks on others without their free, informed consent. This dictum holds true in medical experimentation, and it applies to science and technology as well. 34 Citizen and consumer sovereignty are justified by a revered ethical principle: No taxation without representation. As economist Tom Schelling notes, citizens have safeguarded this sovereignty by means of “arms, martyrdom, boycott,” the “inalienable right of the consumer to make his own mistakes.”35 A second reason for minimizing public risk and false negatives through citizen self-determination—not scientific paternalism that minimizes false positives—is its consistency with most ethical theories. In his classic discussion of liberty, John Stuart Mill argues that one ought to override individual decisionmaking only to protect others or keep them from selling themselves into slavery. Otherwise, says Mill, paternalism would be a dangerous infringement on individual autonomy. 36 If so, paternalistic grounds never justify overriding public reluctance to accept uncertain, potentially catastrophic, scientific risks. In fact, arguments to minimize false positives and commercial risks, at the expense of the public, allege citizens are overprotective of themselves. 37 Because such arguments sanction providing the public with less, not more, protection, largely for special-interest benefit, citizens need protection against such self-interested, ethically indefensible heavy-handedness. As chapter 3 noted, recall that at least 50 percent of environmental-health scientists’ rights are violated by polluter harassment after they publish research that


suggests the need for greater pollutant regulation.38 Innocent scientists with fully corroborated research thus face harassment, including violence, merely for speaking the truth.39 Lawrence Livermore’s Benjamin Santer had a dead rat left at his front door; University of Victoria’s Andrew Weaver had his computer stolen. Harvard University’s Mary Amdur lost her job, despite being later fully exonerated and her findings corroborated.40 Similar private-interest biases in favor of minimizing false positives or commercial risk characterize the scientific-regulatory context, as when the US government tried to regulate tobacco, dioxins, benzene, and dozens of other risks. Based on robust science, the US Occupational Safety and Health Administration (OSHA) first regulated benzene to 10 ppm/8 hours, then tightened these limits to 1 ppm/8 hours, partly because benzene has no safe dose. Yet after organizations such as the American Petroleum Institute filed petitions, the Supreme Court set aside benzene regulations, claiming OSHA did not show significant risk from benzene.41 The tobacco industry likewise fought scientists’ cancer-tobacco link by using special-interest science to deny it, create uncertainty, and claim controversy about scientific findings.42 Likewise, in the 1950s scientists showed dioxins caused chick-edema disease, killing millions of chickens in the Midwest. Later, scientists showed serious dioxin health harms from spraying Agent Orange in the Vietnam War in the 1960s. Yet Dow and Monsanto apparently used special-interest science to thwart dioxin regulations until 1994.43 And as soon as the US EPA proposed new standards in 1987 to protect people living near steel mills from coke-oven emissions’ causing lung cancer, special-interest scientists criticized EPA recommendations and argued that the regulations “would weaken the [steel] industry” because they had “very high” costs. These special-interest-science claims were even more apparent when steel-industry scientists said the EPA was “unjustified” in proposing regulations that could save one person in 1000 from avoidable, premature cancer induced by steel-mill emissions. They wrote:

An increase of one cancer per 1000 residents . . . represents only a 2 percent increase in the cancer rate. This rate is too small to detect using epidemiology. Is a 2 percent or smaller increase in the lung cancer rate for the most exposed population worth all the effort? . . . The EPA approach is an arbitrary one . . . unjustified.44

This flawed reasoning of special-interest scientists presupposes that, in order to benefit steel manufacturers financially, it is acceptable to kill 1 person per 1000. Yet government typically regulates all involuntarily imposed risks that are higher than 1 fatality per million.45 This means special-interest scientists were trying to impose a steel-emissions risk on the public that was 3 orders of magnitude greater than those typically prohibited. Moreover, it is false for these scientists to claim that a 1-in-1000-cancer increase “is too small to detect using epidemiology.” It


also is ethically question-begging for them to justify their false-negative biases and impose risks on innocent citizens in order to increase steel-industry profits. To the preceding remarks, some scientists might object that, because the public wants the benefits associated with scientific risks like steel mills, nuclear power, sunscreens, and contraceptives, therefore the public wants scientists to minimize false positives, false assertions of harms. However, extensive evidence shows that citizens who bear high levels of pollution risk do not consent to it and recognize that it reduces their welfare.46 If so, special-interest scientists may want to minimize false positives so as to impose their risks on unconsenting citizens. A final reason for minimizing false negatives, in cases with characteristics (1)–(5), is that doing so is less likely to lead to socio-political unrest than minimizing false-positive risks. Otherwise, accidents, costly publicity, and civil disobedience occur. The long controversy over the Seabrook, New Hampshire, commercial-nuclear plant illustrates the costs of such civil unrest.47 Even if special-interest scientists were better judges of practical-science risks than laypeople,48 giving experts exclusive franchises for science-related decisionmaking would mean substituting short-term efficiency for long-term equal treatment, consent, and democracy.49 Such substitutions make no sense—if science is intended to promote social welfare and not merely private profits. As a US National Academy of Sciences’ committee notes, when one ignores public preferences for minimizing potentially catastrophic, uncertain societal risks, the costs always outweigh the benefits. In such cases, what is practical and prudent is also the most ethical: taking account of public preferences to minimize false negatives. 50
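
For concreteness, here is the arithmetic behind the steel-mill comparison above, a minimal sketch restating figures already given in the text (the one-per-million regulatory threshold and the roughly one-per-1000 imposed risk):

```python
# Minimal sketch: the "3 orders of magnitude" comparison, using only the
# figures given in the text.
from math import log10

imposed_risk = 1 / 1_000        # ~1 avoidable premature cancer per 1000 exposed residents
de_minimis = 1 / 1_000_000      # level above which involuntary risks are typically regulated

ratio = imposed_risk / de_minimis
print(f"{ratio:.0f} times the usual regulatory threshold")   # 1000 times
print(f"{log10(ratio):.0f} orders of magnitude")             # 3 orders of magnitude
```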

Conclusion What should scientists do in policy-relevant situations characterized by (1) uncertainty about harm probabilities and consequences, (2) potentially catastrophic societal harm, (3) absence of overarching benefits, (4) unavoidability of both types of error, and (5) no compelling arguments to the contrary? This chapter argued that in circumstances of (1)–(5), scientists should not follow the traditional scientific value judgment of minimizing false positives but instead should minimize false negatives, false assertions of no harm. That is, they should take account of ethical obligations, not merely scientific considerations. Moreover, because choosing default rules—to use in situations of scientific uncertainty—is a matter of values, the public and not scientists alone should help choose them. As Thomas Jefferson put it, “I know no safe depository of the ultimate powers of the society but the people . . . If we think them not enlightened enough to exercise their control . . . the remedy is not to take it from them, but to inform their discretion by education.”51


CHAPTER 15

Where We Go from Here
MAKING PHILOSOPHY OF SCIENCE PRACTICAL

Twenty-eight-year-old Marianne Chapman was healthy and active in 2006 when she began suffering from severe neurological symptoms, including numbness, inability to move her hands and feet, inability to walk, devastating pain, anemia, and low red and white blood-cell counts. Now completely bedridden, Marianne is unable to care for her 2 young children. In 2011, her attorneys filed suit against Procter and Gamble, makers of a zinc-containing medication that Marianne began using just before her neurological problems started. Given no epidemiological data on the medication, her attorneys and 3 scientists used extensive case-study evidence to argue that the medication caused her myelopathy, damage to the spinal cord. Procter and Gamble attorneys and scientists, however, rejected all this evidence, accused the Chapman scientists of invalid causal inferences, then denied their product caused the severe neurological disabilities.1 The Chapman case is important because roughly 36 million people take different brands of the same zinc-based medication that Chapman took. The economic, legal, and scientific stakes likewise are high because in 2011 alone, GlaxoSmithKline, another maker of the product, paid more than $120 million out of court to settle several lawsuits by other victims of debilitating myelopathy. However, GlaxoSmithKline required confidentiality or gag clauses in the settlement agreements. These prohibit victims from being identified or speaking publicly, from warning other potential victims about the product. Thus, the scheduled Chapman proceeding was called a “bellwether trial,” one that many other plaintiffs/defendants expected to use to assess how juries would respond to dozens of similar lawsuits. It also was a case-study bellwether, because the Eleventh Circuit Court—with jurisdiction over the case—historically has been very hostile to experts relying on case-study causal arguments like those used by Chapman.2 Yet, the district judge in the Chapman case countered the Eleventh Circuit and affirmed several conditions for reliably using case-study evidence. At 6 points in her detailed analysis, she quoted Carl Cranor, a prominent US philosopher of science whose work on causation made a monumental difference in the


Chapman-lawsuit outcome and in setting science-based legal precedent. Perhaps it also will help some of the 36 million people who took the same medication as Chapman.3

Chapter Overview Cranor’s work illustrates this volume’s point: that practical philosophy of science can help promote better science and policy and make a difference in the world. But do philosophers of science have any ethical obligations to do at least some practical philosophy of science? Why or why not? This chapter answers such questions by providing examples of work that have made a difference. Next it assesses the threat of developmental toxicity and gives causality-based and benefits-based arguments to show that all professionals, and especially scientists and philosophers of science, have special duties to help science avoid it; because of their expertise, scientists and philosophers of science have greater duties than other citizens to help reveal/prevent science-related societal harms like developmental toxicity. The chapter closes after considering several objections to these arguments.

Philosophy of Science That Makes a Difference As Carl Cranor’s work demonstrates, practical philosophy of science can improve science and promote justice.4 Besides publishing landmark pieces in top philosophy/law/science journals, Cranor has been a prominent expert in the policy arena. He co-authored a report for the US Office of Technology Assessment, Identifying and Regulating Carcinogens, as well as a study by a US National Academy of Sciences Committee, Valuing Health. Both the US National Science Foundation and the University of California Toxic Substances Research and Teaching Program have funded his work. He also has served on many science-advisory groups, including California’s Proposition 65 Panel, California’s Electric and Magnetic Fields Panel, and several academy committees. Cranor’s outstanding practical work has been enhanced by the fact that he served in Washington, DC, as a Congressional Fellow with the US Office of Technology Assessment, then later as a Consultant to the US Congress, Office of Technology Assessment. At Yale Law School he learned about legal/ethical aspects of the scientific issues he assesses. Yet, like most philosophers of science, his PhD is not in science but philosophy. Historian and philosopher of science Naomi Oreskes of Harvard University is another scholar who, like Cranor, has made a difference in the world. She has worked as a consultant for the US Environmental Protection Agency (EPA) and the US National Academy of Sciences, and also has taught at Dartmouth College, Harvard University, and New York University. Besides being an outstanding


scholar who has written many award-winning books/articles, Oreskes has served as provost at the University of California, San Diego, and has written best-selling popular articles/books, including Merchants of Doubt. Reviewers called it the best science book of 2010. It shows how special interests repeatedly have misused/ misapplied/misrepresented science in their attempts to deny environmental harms like acid rain, climate change, ozone depletion, and tobacco. Oreskes thus is not only an academic star but a rare scholar whose dozens of YouTube videos and lectures have helped to educate people about science, to stop scientific misrepresentation, and to protect public welfare. She has received many awards for both her technical work and her practical history and philosophy of science, including the George Sarton Award from the American Association for the Advancement of Science and the Lindgren Prize of the Society of Economic Geologists. In 2011, US geoscientists gave Oreskes the Shea Award for doing work on climate change that promotes better public/scientific understanding. Also in 2011, climate scientists throughout the United States named her the Climate Change Communicator of the Year. In her 2004 landmark article in Science, Oreskes argued that by 1993, all climate scientists, at least those who were doing basic empirical work and publishing in refereed professional journals, had fully accepted the existence of anthropogenic climate change. Hence Oreskes showed that apparent climate dissensus—like earlier controversies over tobacco, ozone, and many carcinogens—has been largely manufactured by special interests. They have hired amateur scientists-lobbyists to muddy scientific waters and delay regulation.5 Roger Cooke of Delft University in the Netherlands likewise has contributed to better science and policy in a variety of fields. Recognized as one of the world’s leading authorities on mathematical modeling of risk and uncertainty, his work has helped protect many people by estimating climate-change risks and calculating health risks from a variety of science-related accidents/pollutants. He has evaluated post–Gulf War threats from oil fires in Kuwait, from chemical-weapons disposal, from nuclear risks, from nitrogen-oxide emissions, and from microbiological risks. Besides playing bass for decades in a well-known Dutch jazz band, Cooke has held several senior university positions both in the United States and abroad. Currently he holds an endowed chair in a non-governmental organization, Resources for the Future, where his methodological analyses of economics and health-risk problems contribute to both US and international policy disputes.6 Cooke’s work clearly speaks truth to power. Similarly, the books and technical papers of Kristin Shrader-Frechette, translated into 13 languages, show how flawed scientific methods put people at risk. She holds an endowed chair at the University of Notre Dame, teaching in both the Biological Sciences and Philosophy departments. The US National Science Foundation has funded her research for 28 years. After her publications criticized radiation-dose-response-curve methods, the International Commission on Radiological Protection named her as the sole US representative on an


international committee proposing new radiation standards. After her publications criticized current population-biology methods, the US National Academy of Sciences invited her to join its board that assesses methodological problems in toxicology and ecology. Her science-advisory work has taken her to the US Environmental Protection Agency, United Nations, World Health Organization, and many nations. For decades, she and her students also have done pro-bono scientific work in the United States and abroad, for groups like the National Association for the Advancement of Colored People. Their “liberation science” analyzes how flawed science is typically used to justify imposing heavier pollution burdens on poor and minority communities. In 1994, their methodological criticisms helped achieve the first major US environmental-justice victory—in a poor, all-black, rural-Louisiana town. In 2004 she became only the third American to win the World Technology Award in Ethics. In 2007, Catholic Digest named her one of 12 “Heroes for the US and the World.” In 2011, Tufts University gave her the Jean Mayer Global Citizenship Award.7 As these examples suggest, many philosophers of science have contributed to ongoing practical work that benefits science and society. These include scholars such as Hanne Andersen, Rachel Ankeny, Justin Biddle, Mieke Boon, Martin Carrier, Nancy Cartwright, Hasok Chang, Sharyn Clough, Robert Crease, Lindley Darden, Inmaculada de Melo-Martin, Henk de Regt, Heather Douglas, John Dupre, Sophia Efstathiou, Kevin Elliott, Carla Fehr, Peter Galison, Lisa Gannett, Heidi Grasswick, Ben Hale, Gary Hardcastle, Susan Hawthorne, Kristen Intemann, Phil Kitcher, Hugh Lacey, Andrew Light, Helen Longino, Shunzo Majima, Eric Martin, Deborah Mayo, Sharon Meagher, Sandra Mitchell, Margaret Morrison, Lynn Hankinson Nelson, Nancy Nersessian, Kathleen Okruhlik, Wendy Parker, Kathryn Plaisance, Michael Polanyi, George Reisch, Julian Reiss, Sarah Richardson, Michael Root, Laura Ruetsche, Mark Sagoff, Henry Shue, Miriam Solomon, Patrick Suppes, Paul Thompson, Nancy Tuana, Kyle Whyte, Jim Woodward, Andrea Woody, John Worrall, Alison Wylie, Maria-Kayko Yasuoko, and others.8 These scholars—who also do practical philosophy of science—have begun new research, new organizations, and new university courses. For instance, in 2005 philosophers of science from Australia, England, and the Netherlands founded the Society for the Philosophy of Science in Practice; it now has a listserv of more than 500 members.11 For decades, the Center for Philosophy and Public Policy at the University of Maryland has been a leader in practical philosophy of science. Center scholars like Mark Sagoff, now at George Mason University, have written classic accounts of how poor scientific methods, especially in ecology, economics, and quantitative risk assessment, have jeopardized science and science-related public policy.9 Despite such contributions, and except for exceptional scholars like Helen Longino and Phil Kitcher,10 philosophers of science generally have exhibited the


behavior that Aristophanes attributed to all philosophers—having their heads in the clouds. They have focused predominantly on theoretical assessments of scientific method—leaving sociologists/historians/psychologists/political scientists to address the practical aspects of science. As the Committee on Public Philosophy of the American Philosophical Association noted in 2010, “we live in a time when a growing number of philosophers are doing what may be called ‘public philosophy’, but it is not always recognized as ‘legitimate’ philosophy by all within the discipline and also goes largely unnoticed by the general public.”11

Philosophers of Science and the Causality Argument Although greater numbers of scholars are doing practical philosophy of science, are there duties to do so? As already argued,12 at least 2 reasons suggest there are. The first or causality argument is that along with other citizens, philosophers of science have helped cause science-related harm. The other argument is that, along with other citizens, they have benefitted disproportionately from these harms and thus have a responsibility to help stop them. For instance, how have philosophers of science helped cause developmental toxicity, harm that occurs when children are developmentally (prenatal-to-age-5) exposed to epigenetically toxic pollutants? These pollutants—such as metals and endocrine disruptors like plastics and pesticides—cause heritable damages that program both the exposed children and later generations for premature death/disease/injury. These pollutants modify DNA through methylation and thus control what genes are expressed. For instance, certain pollutants turn off tumor-suppressor genes and thus can harm both current victims and their descendants.13 Virtually everyone is responsible for developmental-toxicity harms because everyone, especially those with greater affluence/education/consumption, contributes to air pollution, a major cause of developmental toxicity. Virtually everyone purchases and uses products containing developmental-toxicity-causing chemicals such as bisphenol A, phthalates, organophosphate-organochlorine pesticides, nicotine, perfluorooctane compounds, or polybrominated diphenyl ethers.14 Because everyone contributes to developmental toxicity through pollution and product use, everyone has duties to help stop these harms. Of course, developmental toxicity is not the only science-related harm that citizens cause. However, this chapter illustrates developmental toxicity because it is typical of the harms that educated consumers help cause—and because its victims are completely innocent, children and future generations. The argument is that because scientists and philosophers of science contribute to harms like developmental-toxicity-inducing products and pollutants, they have justice-based duties to do research that helps stop these harms. Their work might include


revealing methodological flaws in scientific studies, often special-interest studies that deny developmental-toxicity harm or delay its regulation. The ethics is basic: we helped break it, so we should help fix it. Moreover, because democracy is not a spectator sport, and all citizens are responsible for being politically active and helping to stop public harms, all have prima facie duties to help stop avoidable harms like developmental toxicity. Those who have never attempted to help arguably have greater duties to do so, precisely because their failure to exercise the duties of citizenship makes them more responsible for developmental toxicity. Obviously, however, individual duties differ, ultima facie, as a function of factors like one’s fractional contribution to developmental toxicity, knowledge, ability, time, and so on. Moreover, although scientists and philosophers of science share duties with other citizens to help stop developmental toxicity, they have special duties to do so because of their greater expertise/resources. As professional-ethics codes dictate, greater ability to prevent some harm generates greater responsibilities to do so.15 Citizens’ duties to help alleviate science-related harms are based also on unearned advantages from society. All other things being equal, the greater a citizen’s unearned wealth, intelligence, genetic advantages, freedom, health, education, and so on, the greater her responsibility to help alleviate avoidable harms like developmental toxicity. Scientists and philosophers of science enjoy greater advantages, including “corners” on university-teaching markets. Consequently they have greater responsibilities to give back, to help level the societal playing field, to help promote equal opportunity and protection from harm. Their greater opportunities arise partly because of what philosopher John Rawls called “the natural lottery of life” that gives them unintended but unfair advantages over others. To help compensate for these unearned advantages—like higher IQs— scientists/philosophers of science should help take action on science-related harms. Of course, most scientists and philosophers of science have worked hard. However, without largely unearned advantages like high intelligence and educational opportunities, they would not have achieved what they have. If not, they have greater responsibilities, especially in areas related to their scientific expertise.16

Philosophers of Science and the Benefits Argument Citizens, and especially wealthier citizens, like scientists/philosophers of science, also have justice-based responsibilities to help reveal/prevent science-related harms like developmental toxicity because they benefit from it. How? Many people have saved money by purchasing sweatshop-produced goods and therefore have greater responsibility for causing sweatshop harms. Something similar holds for developmental-toxicity harms. To see how all citizens, especially wealthier


citizens, benefit from/help cause these harms, consider 3 examples: fossil-fueled automobiles and electricity, pesticide-laden food, and waste incinerators.17 At least in Europe and the United States, fossil-fueled vehicles help cause developmental toxicity because they are responsible for roughly half of all ozone and particulates, neither of which has a safe dose, both of which are especially harmful to children.18 Although asthma is a complex disease with multi-factorial, multi-level origins, particulates alone cause both developmental decrements and at least $2 billion annually in environmentally attributable asthma harms to US children. Particulates are a major reason that US pediatric-asthma rates have doubled in the last 10 years.19 Yet, drivers of fossil-fueled vehicles never compensate their child victims for these harms. Something similar holds for recipients of fossil-fuel-generated electricity. Coal-fired plants produce roughly 45 percent of US, and 41 percent of global, electricity. They are the largest US source of SO2/mercury/air toxins, and they cause major NOX/ozone/particulate pollution. Yet, many citizens benefit unfairly from coal pollution. Failure to control coal pollutants like mercury saves consumers money but imposes uncompensated developmental-toxicity risks on innocent children. US newborns suffer $9 billion/year in IQ and discounted-lifetime-earnings losses from coal-plant-mercury pollution alone.20 Moreover, the uncompensated economic benefits that wealthier people, including scientists/philosophers of science, receive from fossil-fueled vehicles/electricity may dwarf those that poorer people receive. Because of their higher consumption and travel, those in the highest-income decile appear to cause about 25 times more fossil-fuel pollutants than those in the lowest-income decile.21 If so, they bear greater responsibility for unintended fossil-fuel harms, including developmental toxicity. In his classic UK-government report, economist Nicholas Stern estimated that each person in the developed world causes an average of 11 tons/year of carbon-equivalent emissions, roughly $935/year in climate-only (droughts, floods) effects that kill about 150,000 people/year.22 In addition, global outdoor-air-pollution-related deaths, mostly from fossil fuels, are about 1.3 million/year, roughly 10 times greater than climate-related deaths.23 But if each person in the developed world causes roughly $900/year in climate-related deaths and harms, and if pollution-related fossil-fuel deaths are 10 times greater, then developed-world citizens could each cause $9,000/year in fossil-fuel harms. If so, the uncompensated fossil-fuel harm caused by each wealthy person, including some scientists and philosophers of science, could be (25 × $9,000), or $225,000/year. Something similar holds for uncompensated harms from pesticides. People who buy non-organic food profit unfairly from pesticides. Why? Non-organic food saves consumers money, but it imposes inequitable, developmental-toxicity harms on innocent children, especially farmworker


children. Moreover, children’s pesticide-related income losses are substantial. US children aged 0–5 annually lose roughly $61 billion in future earnings just from organophosphate-pesticide-induced IQ losses and the earnings losses that follow from them. Once organochlorines and carbamates are included, developmental-toxicity pesticide harms increase. Data show that for a population of 25.5 million children, aged 0–5, organophosphate pesticides cause losses of about 3,400,000 IQ points/year.24 Yet the monetary value of a 1-point IQ loss is about $18,000—if one takes the average of US EPA,25 Harvard University,26 US Centers for Disease Control,27 and Mount Sinai School of Medicine figures.28 Thus $18,000 × 3,400,000 = $61 billion/year in lifetime-earnings losses suffered by organophosphate-exposed young children. However, this figure counts only the 50-percent-highest-exposed children. Given the remaining 50 percent of exposures and other classes of pesticides, obviously $61 billion/year underestimates annual US developmental-toxicity harms from pesticides. Although adult food consumers partly benefit financially because pesticide-laden food is cheaper, they never compensate pesticide victims. Yet fairness dictates that consumers pay full costs for their activities and goods, not impose them on innocent children. To the degree that consumers do not pay these full costs, they have greater duties to help prevent both pesticide harm and flawed science used to justify pesticides.29 Incinerator emissions, like lead, constitute a third example of how many citizens have special duties to help prevent science-related harm because they unintentionally receive health and economic benefits from imposing risks like developmental toxicity on children. At least some developmental toxicity occurs because most garbage-creating citizens are “free riders” who fail to cover full waste-management costs. Because consumers pay only small household-garbage-pickup fees, the waste is often incinerated in poor neighborhoods. There, lead, dioxins, and other emissions impose uncompensated IQ and thus lifetime-earnings losses on residents, especially poor or minority children.30 By benefitting from fossil fuels, pesticide-laden foods, and waste incineration, most citizens are at least partial, unintentional “free riders.” They have duties to help stop such harm because they save money/health by imposing developmental-toxicity risks on poor children.31 Moreover, because their unfair fossil-fuel, pesticide, or incinerator-related benefits typically are proportional to their consumption/wealth, those in higher-income groups have greater duties to help stop this science-related harm because they receive greater undeserved benefits from it. Therefore they bear greater responsibility for developmental-toxicity harms. One way for scientifically literate people to compensate for these harms is to help stop them—by assessing the flawed science used to justify imposing, and not regulating, developmental-toxicity harms.
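
The back-of-the-envelope figures above can be restated in a few lines; this is only a transcription of the chapter's own numbers and rounding, not an independent estimate:

```python
# Minimal sketch: the chapter's own back-of-the-envelope figures, restated.

# Fossil fuels (Stern estimate, rounded as in the text)
climate_harm_per_person = 900      # ~$900/year in climate-related harm per developed-world person
pollution_multiplier = 10          # outdoor-air-pollution deaths ~10x climate-related deaths
top_decile_multiplier = 25         # highest-income decile vs. lowest-income decile

fossil_fuel_harm = climate_harm_per_person * pollution_multiplier       # ~$9,000/year
wealthy_person_harm = fossil_fuel_harm * top_decile_multiplier          # ~$225,000/year
print(f"~${wealthy_person_harm:,}/year uncompensated fossil-fuel harm per wealthy person")

# Organophosphate pesticides (US children aged 0-5)
iq_points_lost_per_year = 3_400_000
dollars_per_iq_point = 18_000      # average of the EPA, Harvard, CDC, and Mount Sinai figures
earnings_losses = iq_points_lost_per_year * dollars_per_iq_point
print(f"~${earnings_losses / 1e9:.0f} billion/year in lifetime-earnings losses")   # ~$61 billion
```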


Objections

How might people object to these arguments that all citizens—especially scientifically literate or wealthier citizens—have justice-based duties to help avoid science-related harms like developmental toxicity?

The avoidability objection asks how one can determine which developmental-toxicity agents are “avoidable.” The answer requires case-by-case analysis of whether a given developmental-toxicity agent is truly irreplaceable for meeting human needs, as some medicines are. It also depends partly on ethical theory and on the state of science, including what is known about various chemicals and their alternatives. For instance, all other things being equal, as chapter 13 suggests, those who follow the equal-opportunity ethics of Rawls would be more likely than utilitarians following John Harsanyi to say that some developmental-toxicity exposure was avoidable, even if the only available alternative were much more expensive. Why? Chapter 13 shows that egalitarians, more than utilitarians, require protecting vulnerable minorities and give less weight to the economic costs of protection. However, there is no algorithm for answering the avoidability question, because the answer is case-specific and depends partly on empirical details. The main ethical point is the one posed by Immanuel Kant: “ought implies can.” If one can avoid causing developmental toxicity, the preceding arguments show that one ought to do so. If one cannot avoid causing it, then, following Kant, one is not obligated to avoid it.32

Another objector might ask how we, who intend no harm, can be responsible for developmental toxicity. Aristotle answered this question, at least in general terms.33 We are responsible when our intended acts, our culpable ignorance, or our inaction causes harm. We are responsible for what we should know but do not—and for who we allow ourselves to become. Asking why people failed to stop Adolf Hitler, philosophers Karl Jaspers and Jean-Paul Sartre charged humankind with metaphysical guilt,34 with failing to create themselves as the kinds of people able to do what is right. Analogously, if we have failed to stop avoidable developmental toxicity, we too are guilty of not creating ourselves, through our attitudes, choices, and acts, as the sorts of people able to help avoid societal harms. To the degree that our own inaction and insensitivity have allowed us to become passive, compassionless, or weak, to live in “bad faith,” we are culpable for inaction in the face of great harms, whether Hitler or developmental toxicity. Hence, to varying degrees—proportional to individual benefits, consumption, ability, education, income, and opportunity, as argued—we have duties to help prevent harms like avoidable developmental toxicity.35

Yet if science and philosophy of science are supposed to be neutral and value-free, how can researchers become activists who speak out against poor science that contributes to developmental toxicity? This neutrality objection errs because it relies on the false assumption that objectivity is neutrality; it is not.


Objectivity is even-handedness, lack of bias. Otherwise, scientists and philosophers of science could make no legitimate value judgments and would always have to remain neutral. Yet earlier chapters showed that scientists and philosophers of science must make methodological value judgments—about appropriate sample sizes, models, reliable data, and so on—even to do their work. Obviously scientists also ought not remain neutral about issues such as whether the earth is flat, whether evolution is a fact, whether anthropogenic climate change exists, or whether they should follow biomedical-ethics codes in their experiments. No good researchers are neutral about poor research and poor ethics. Instead, they use rational debate to help resolve controversies. Hence, consistency demands that people use rational debate to assess possible advocacy in areas related to their expertise.

The neutrality objection also fails because, as citizens, scientists and philosophers of science must make ethical value judgments—for instance, about allowing avoidable developmental toxicity—or they violate virtually all codes of professional ethics. These codes bind scientists and professionals to be unbiased and objective and to promote human welfare. They emphasize the “added responsibility of members of the scientific community, individually and through their formal organizations, to speak out” whenever public health or safety is at risk.36 Besides, if scholars do not speak out and instead remain neutral, they cannot fulfill their duties of citizenship, including duties to help ensure others’ equal protection. Yet no one ought to avoid the duties of citizenship. Those with scientific expertise do not cease being citizens just because they also are professionals. Indeed, as argued earlier, professionals’ heightened abilities, training, and education arguably give them greater duties as citizens. Besides, if they do not speak out, especially in areas related to their special expertise, worse harms could occur. As Burke reputedly put it, “All that is necessary for the triumph of evil is that good people do nothing.”

If philosophers of science and scientists remained neutral and never wrote about how poor science is often used to justify science-related harms, such as pollution-induced developmental toxicity, the consequences would be harmful for both science and society. As already noted in earlier chapters, roughly 75 percent of US science is funded by industry and 25 percent by government; more than half of US-government-funded science is military; and for every $100 that environmental-health industries spend on their science, government spends about $1.37 Consequently, instead of being level, the scientific playing field is often politicized. Declining government science-education funding and increased special-interest funding further tilt this playing field, as illustrated by fossil-fuel-industry-funded challenges to anthropogenic climate change. Chapter 13 showed that, because of special-interest manipulations, half of Americans believe scientists disagree about whether anthropogenic climate change exists, although this scientific issue was settled 20 years ago, in 1993.


Chapter 3 explained how special-interest science operates: why pharmaceutical-industry-funded studies rarely attribute harmful effects to their drugs, while independent researchers often do; why chemical-industry-funded studies rarely attribute health damage to their pollutants, while independent researchers often do; why the smelting industry harassed scientist Mary Amdur; why the asbestos industry harassed physician Irving Selikoff; why the beryllium industry harassed scientist Adam Finkel; and why 50 percent of pollution-health researchers report industry harassment.38 Given such tilted scientific playing fields, researchers who remain neutral serve the status quo, not objectivity. Neutrality merely allows biased conclusions to become more dominant. The solution is even-handed, unbiased inquiry, not neutrality.

A third objection—the economics objection to the earlier arguments for justice-based duties to uncover flawed science—asks whether preventing avoidable, science-related harms like developmental toxicity might cause economic harm. The obvious response is, “harm for whom?” Polluters or innocent children? The main problem with this objection is that it begs the question of whether those who seriously harm the health and welfare of innocent people should have rights to profit from causing that harm. Making this objection is thus a bit like an accused murderer’s claiming, correctly, that prosecuting him is not cost-effective because most murderers are not repeat offenders and hence pose little threat to society. Yet a key issue is justice and human rights, not merely cost. Besides, the costs of preventing serious health harms like developmental toxicity are likely less than the costs of allowing them. For instance, French researchers recently examined the economic benefits of lead abatement and prevention. They discovered that abatement-and-prevention costs were far less than the health and social costs of developmental toxicity, including special-education expenditures, crime, suffering, and the reduced lifetime earnings of those affected. Although France’s population is one-fifth that of the United States, developmental toxicity from lead-induced IQ losses annually causes French children lifetime-earnings losses of roughly 22 billion euros ($30 billion) and lead-related crime losses of roughly 62 billion euros ($81 billion).39 These French data suggest that US children’s future-earnings losses from lead-induced IQ harms could be about $150 billion/year, and their lead-caused crime losses about $400 billion/year. For mercury pollutants, US children’s health, IQ, and lifetime-earnings losses are about $8 billion/year.50 More generally, US costs of children’s environmentally caused lead exposure, asthma, cancer, and neurobehavioral problems are up to $55 billion/year; if these costs are analogous to those for other developmental toxins, then every $1 spent on prevention and pollution controls generates $17–$221 in benefits.40 If so, the economics objection has little factual merit.
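The France-to-US scaling just cited is again simple arithmetic; the sketch below restates it with the chapter's figures, using a factor of five for the rough population ratio the text assumes. The variable names are illustrative, and the output only reproduces the chapter's rounding.

# Restatement of the chapter's France-to-US extrapolation for lead-related losses.
# Inputs are the chapter's cited estimates; the factor of 5 is its rough population ratio.

population_ratio = 5                  # US population is roughly five times France's

french_earnings_losses = 30e9         # ~$30 billion/year French lifetime-earnings losses from lead-induced IQ harm
french_crime_losses = 81e9            # ~$81 billion/year French lead-related crime losses

us_earnings_losses = french_earnings_losses * population_ratio   # ~$150 billion/year
us_crime_losses = french_crime_losses * population_ratio         # ~$405 billion/year, rounded to $400 billion in the text

print(us_earnings_losses / 1e9, us_crime_losses / 1e9)           # 150.0 405.0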


Another objector might ask why people who do many other good things with their lives, including teaching and raising children, also have duties to professionally address science-related harms like developmental toxicity. The answer is that those who cause developmental toxicity, and benefit from it, have justice-based duties to help stop it. They cannot rationally claim excuses for not taking action, any more than robbers can claim ethical excuses for not compensating their victims. Serious injustice requires justice-based restitution.41 Because taking action on harms like developmental toxicity is a matter of justice, not charity, it is not optional. Of course, as already mentioned, different people’s duties to help address developmental toxicity differ, depending on their wealth, consumption, income, intelligence, health, and so on. Nevertheless, no excuses completely exonerate offenders from justice-based duties to rectify injustice. Otherwise, justice would not exist.42

But how can scientists and philosophers of science take action, by revealing developmental-toxicity harms and criticizing flawed science, if developmental toxicity remains partly uncertain? The answer is that, as earlier discussions reveal, the existence and causes of developmental toxicity are not uncertain. What is uncertain are some of the precise molecular mechanisms through which different harms arise. The developmental-toxicity case thus is a bit like that of climate change. As already shown, since 1993 scientists have agreed that anthropogenic climate change exists, although not all details about it have been discovered. Likewise, scientists know that developmental toxicity exists and that many pollutants cause it, although not all details about it have been discovered. Besides, the uncertainty objection relies on faulty assumptions. One assumption is that doing nothing is the way to remain neutral or objective in science. The responses to the neutrality objection showed that this assumption is false. Another erroneous assumption is that the best evidentiary rule is to consider a known toxin innocent of health harm until proved guilty in every possible way and in every detail. This assumption has been widely challenged, partly through the precautionary principle. Waiting for complete certainty about every detail of a toxin’s harm could cause massive death and disease. Consequently, as chapter 12 argued, protecting public health requires society to use a preponderance-of-evidence rule, not a beyond-a-reasonable-doubt rule, to assess great harms. After all, given some uncertainty about fire or flood, one does not do nothing; one buys insurance. Given uncertainty about rain, one carries an umbrella. One gets medical checkups, balances one’s checkbook, exercises, and visits the auto mechanic. One prevents catastrophic harm rather than merely waiting for it. Analogous precautionary requirements hold for threats like developmental toxicity.43

Yet, is it fair for scientists and philosophers of science—who already have research and teaching duties—to have special duties to tackle developmental toxicity? Contrary to this unfairness objection, we do have special duties, as argued earlier, because of our greater abilities, knowledge, consumption, and so on. As Francis Bacon put it, knowledge is power, and those with more power have more responsibility. To those who say they have earned their privileges, there is an obvious response:


Because well-educated, talented people, as already argued, are not fully self-made but are partly beneficiaries of life’s natural lottery, we have special societal duties.44 Fairness thus dictates that those with greater intelligence and ability have greater duties to take action than other people do. Noblesse oblige.

How Philosophers of Science Can Take Action

Citizens, and especially science-related professionals, of course have no duties to help protect others in their areas of expertise unless they are able to do so. “Ought implies can,” as already noted.45 How, then, can they take action on science-related harms like developmental toxicity?

At the group level, organizations like the Philosophy of Science Association could issue statements supporting the Intergovernmental Panel on Climate Change and its warnings about climate change, as many scientific groups have done.46 After all, decades ago, the American Geophysical Union and the American Meteorological Society acknowledged their “collective responsibility” to take action.47 As already argued, analogous responsibilities hold for science-related harms like developmental toxicity. Philosophers of science likewise can join existing professional groups that do practical work serving the public good, such as the American Philosophical Association’s Committee on Public Philosophy, discussed earlier. They can volunteer their time and expertise to groups seeking to address science-related harms. The benefit of such group, and especially local, volunteering is that together professionals can accomplish what no one acting alone can achieve. Recently, for example, Notre Dame scientists convinced local government not to permit a dangerous coal-gasification facility in our already heavily polluted area, because the scientific rationale underlying the facility’s safety claims was incorrect.

Practical philosophers of science also can write popular essays or blogs on important science-related topics. Whenever they publish professional-journal articles, they can translate their findings for laypeople and thus help promote action on science-related harms like developmental toxicity. Through their universities, employers, and professional associations, philosophers of science can list themselves as media experts, so that TV, radio, newspaper, and internet reporters can easily contact and interview them. They can serve pro bono on local, state, and federal advisory boards, such as the advisory board of a county health association or of national bodies like the US EPA’s Science Advisory Board. They can give pro-bono testimony in science-related court cases and public hearings. To promote consumer outreach and education on science-related harms like developmental toxicity, people with science-related expertise can speak at school parent-teacher-association meetings, publish local op-eds, and advise citizens’ groups. Parent-teacher-association outlets are especially important because, regardless of their politics, parents are concerned about their children and thus about harms like developmental toxicity.


At the university level, philosophers of science can teach practice-related courses like Philosophy of Science and Public Policy, as Caltech, Notre Dame, Pitt, and other universities do.48 They also can encourage students to do real-world, science-related projects for course credit, for service-learning credit, or in place of exams. They can teach courses, or parts of courses, in which students learn about scientific methods, and how these methods can go wrong, by responding to contemporary scientific analyses, such as draft health and safety regulations, environmental-impact assessments, and risk assessments, thousands of which are released annually by the US government.

In one such course, students receive many benefits from their practical, pro-bono philosophy-of-science projects, often responses to draft scientific, governmental, or regulatory analyses. One benefit is that students often obtain publications from their work. Second, victims of science-related harms receive free scientific assistance that helps protect and empower them. Third, attorneys often can use this pro-bono student work to assist poor and minority communities harmed by science-related threats. Fourth, and most important, students who do this scientific work usually become “vaccinated” by social justice, inspired to do practical, science-related work that makes a difference. As a result, they often dedicate at least part of their lives to “liberation science” that helps protect vulnerable people.

Conclusion

Of course, there are many other areas in which practical philosophy of science might be a valuable tool. Philosophers of cognitive science and psychology might address flawed IQ research that puts women and minorities at risk. Philosophers of biology might address flawed gender research that puts gay and transgender people at risk. Philosophers of physics might address flawed nuclear-weapons research that puts everyone at risk. The effort needed to take action on such science-related harms is not great. If historians are correct, only about 14 percent of the early colonists supported the US revolution against England, partly because revolutions are not good for banks and businesses. If 14 percent of people were mobilized because philosophers of science were speaking and writing about science-related harms like developmental toxicity, the world would change. If this book is right, philosophers of science can illuminate not only science but also the darkness that poor science creates.



NOTES

Chapter 1

1. Jared Diamond, Guns, Germs, and Steel (New York: Norton, 1997), 6–25. 2. Diamond, Guns, Germs, and Steel, 6–25. 3. Kai Koizumi, R & D Trends and Special Analyses, AAAS Report (Washington, DC: American Association for the Advancement of Science, 2005, 2004); Sheldon Krimsky, Science in the Private Interest (Lanham, MD: Rowman and Littlefield, 2003); Kristin Shrader-Frechette, Taking Action, Saving Lives (New York: Oxford University Press, 2007); hereafter cited as Shrader-Frechette, TASL. 4. Francis Bacon, Novum Organum (Charleston: Nabu Press, 2012). See Peter Godfrey-Smith, Theory and Reality (Chicago: University of Chicago Press, 2003), 2–4. Jean-François Gauvin, “Artisans, Machines, and Descartes’s Organon,” History of Science 44 (2006): 187–216. 5. Isaac Newton, The Principia: The Mathematical Principles of Natural Philosophy (Berkeley: University of California Press, 1999). 6. George H. Sabine, “Descriptive and Normative Sciences,” The Philosophical Review 21, no. 4 (July 1912): 433–450. 7. E. A. Burtt, Metaphysical Foundations of Modern Science (Garden City, NY: Doubleday and Company, 1954). See Clifford Truesdell, Six Lectures on Modern Natural Philosophy (Berlin: Springer-Verlag, 1966); David Snoke, Natural Philosophy (Colorado Springs: Access Research Network, 2003). 8. Steve Clarke, “Naturalism, Science, and the Supernatural,” SOPHIA 48 (2009): 127– 142; Peter R. Dear, The Intelligibility of Nature (Chicago: University of Chicago Press, 2006); Stephen Gaukroger, The Emergence of a Scientific Culture (New York: Oxford University Press, 2006); Gauvin; Peter R. Dear, Revolutionizing the Sciences (Princeton, NJ: Princeton University Press, 2001); Jon H. Roberts and James Turner, The Sacred and the Secular University (Princeton, NJ: Princeton University Press, 2000); John Hedley Brooke, Science and Religion (Cambridge: Cambridge University Press, 1991). See Peter Harrison, Ronald Numbers, Michael Shank, eds., Wrestling with Nature (Chicago: University of Chicago Press, 2011). 9. Alan Richardson, “Occasions for an Empirical History of Philosophy of Science,” HOPOS: The Journal of the International Society for the History of Philosophy of Science 2, no. 1 (Spring 2012): 1–20, esp. 16; Heather Douglas, A History of the PSA Before 1970 (Chicago: Philosophy of Science Association, 2012); Morris R. Cohen and Ernest Nagel, An Introduction to Logic and Scientific Method (New York: Harcourt Brace & Co., 1934); Herbert Feigl and May Brodbeck, Readings in the Philosophy of Science (New York: Appleton-Century-Crofts, 1953); Philipp Frank, Philosophy of Science (New York: Prentice-Hall, 1957); Don Howard, “Two Left Turns Make a Right,”

in Logical Empiricism in North America, ed. A. W. Richardson and G. L. Hardcastle, 25–93 (Minneapolis: University of Minnesota Press, 2003); hereafter cited as Richardson, LE; George Reisch, How the Cold War Transformed Philosophy of Science (Cambridge: Cambridge University Press, 2005). 10. Society for the Philosophy of Science in Practice, Society for the Philosophy of Science in Practice: Mission statement; http://www.philosophy-science-practice.org/en/missionstatement/, accessed February 24, 2013. 11. American Philosophical Association’s Committee on Public Philosophy, The American Philosophical Association’s Committee on Public Philosophy; http://www.publicphilosophy.org/, accessed February 24, 2013. 12. Public Philosophy Network, Public Philosophy Network: Encouraging and Supporting Publicly Engaged Philosophical Research and Practices; http://publicphilosophynetwork.ning.com, accessed February 20, 2013. 13. Larry Laudan, Progress and Its Problems (Berkeley: University of California Press, 1977). 14. Kristin Shrader-Frechette, Ethics of Scientific Research (London: Rowman & Littlefield, 1994). 15. Albert Einstein, “Letter to a Japanese Correspondent,” in The Quotable Einstein, ed. A. Caprice (Princeton, NJ: Princeton University Press, 1996), 117. 16. Albert Einstein, “The War Has Been Won, But the Peace Is Not,” in Out of My Later Years (New York: Citadel Press, 1974), 200. 17. Richardson, “Occasions,” 1–20, esp. 16; Richardson, LE, 25–93; George Reisch, How the Cold War Transformed Philosophy of Science (Cambridge: Cambridge University Press, 2005). 18. Carl Cranor, Legally Poisoned (Cambridge: Harvard University Press, 2013); Carl Cranor, Toxic Torts (Cambridge: Cambridge University Press, 2008); Carl Cranor, Regulating Toxic Substances (New York: Oxford University Press, 1997); hereafter cited, respectively, as Cranor, LP, TT, or RT. Helen Longino, “Beyond ‘Bad Science’,” Science, Technology, and Human Values 8, no. 1 (Winter 1983): 7–17; Helen Longino, Science as Social Knowledge (Princeton, NJ: Princeton University Press, 1990); Helen Longino, The Fate of Knowledge (Princeton, NJ: Princeton University Press, 2002); Krimsky, Science in the Private Interest; Deborah Mayo, Error and Inference (Cambridge: Cambridge University Press, 2010); Deborah Mayo, Error and the Growth of Experimental Knowledge (Chicago: University of Chicago Press, 1996); Deborah Mayo and Rachelle Hollander, Acceptable Evidence (New York: Oxford University Press, 1994). 19. Tobacco-Related Disease Research Program, Research Portfolio (Berkeley: University of California, 2012); http://www.trdrp.org/about/index.php, accessed February 23, 2013; World Health Organization (WHO), WHO Report on the Global Tobacco Epidemic, 2011 (Geneva: WHO, 2011); Shrader-Frechette, TASL, ch. 2. 20. Cranor, LP, TT, RT, and Are Genes Us? The Social Consequences of the New Genetics (New Brunswick, NJ: Rutgers University Press, 1994); Kristin Shrader-Frechette, What Will Work: Fighting Climate Change with Renewable Energy, Not Nuclear Power (Oxford: Oxford University Press, 2011); Krimsky, Science in the Private Interest. 21. Paul Forman and Jose M. Sanchez-Ron, eds., National Military Establishments and the Advancement of Science and Technology, vol. 180, Boston Studies in the Philosophy of Science (New York: Springer, 1996). 22. Brian Tokar, “Agribusiness, Biotechnology, and War,” Z Magazine, September 2002; http://www.zcommunications.org/agribusiness-biotechnology-and-war-by-briantokar; accessed February 24, 2013. 23.
Kristin Shrader-Frechette, “Comparativist Rationality and Epidemiological Epistemology,” Topoi 2 (September–October 2004): 153–163. 24. C. Oleskey, A. Fleishman, and L. R. Goldman, “Pesticide Testing in Humans,” Environmental Health Perspectives 112, no. 8 (2004): 114–119; A. H. Lockwood, “Human Testing of Pesticides,” American Journal of Public Health 94, no. 11 (2004): 1908–1915.


25. Committee on the Use of Third Party Toxicity Research with Human Research Participants Science, Technology, and Law Program, National Research Council, Intentional Human Dosing Studies for EPA Regulatory Purposes (Washington, DC: National Academies Press, 2004), 3, 110, 112–113. 26. C. I. Jackson, Honor in Science (New Haven, CT: Sigma Xi, the Scientific Research Society, 1986), 33. 27. Ernst Mayr, Toward a New Philosophy of Biology (Cambridge, MA: Harvard University Press, 1988). 28. Kristin Shrader-Frechette, “Colored Town and Liberation Science,” in Holy Ground: A Gathering of Voices on Caring for Creation, ed. D. Landau and L. Moseley, 218– 229 (San Francisco: Sierra, 2008).

Chapter 2

1. Cass Sunstein, Risk and Reason (New York: Cambridge University Press, 2002), 76, 79, 113, ch. 7; hereafter cited as Sunstein; Christine Hall, “Obama Adviser Urges Controversial ‘Senior Death Discount’ in Health Care Reform” (Washington, DC: Competitive Enterprise Institute, July 23, 2009). Some of the points made in this chapter also were made in Kristin Shrader-Frechette, Burying Uncertainty: Risk and the Case against Geological Disposal of Nuclear Waste (Berkeley: University of California Press, 1993), available at www.netlibrary.com; hereafter cited as ShraderFrechette, BU. 2. Kristin Shrader-Frechette, “Taking Action on Developmental Toxicity: Scientists’ Duties to Protect Children,” Environmental Health 11 (2012): 61. 3. P. J. Landrigan, C. B. Schechter, J. M. Lipton, M. C. Fahs, and J. Schwartz, “Environmental Pollutants and Disease in American Children,” Environmental Health Perspectives 110 (2002): 721–728. 4. D. C. Bellinger, “A Strategy for Comparing the Contributions of Environmental Chemicals and Other Risk Factors to Neurodevelopment of Children,” Environmental Health Perspectives 120 (2011): 501–507. 5. C. Pichery, M. Bellanger, D. Zmirou-Navier, P. Glorennec, P. Hartemann, and P. Grandjean, “Childhood Lead Exposure in France,” Environmental Health 10 (2011): 44. 6. Sunstein, 49, 113, chs. 5–7. 7. Sunstein, 111. 8. Sunstein, 49, 113; esp. chs. 5–6. 9. H. W. Lewis, Technological Risk (New York: W. W. Norton, 1992). 10. Karl Popper, Objective Knowledge (Gloucester: Claredon, 1972), 285–318 (ch. 8). 11. National Research Council, Technical Bases for Yucca Mountain Standards (Washington, DC: National Academy Press, 1995). See chapter 12, this volume. 12. For some good government scientific insights, see Ardyth M. Simmons and John S. Stuckless, “Analogues to Features and Processes of a High-level Radioactive Waste Repository Proposed for Yucca Mountain, Nevada (Washington, DC: US Geological Survey, 2010). For criticisms of some government Yucca Mountain science, see, for instance, Shrader-Frechette, BU; Kristin Shrader-Frechette, “Idealized Laws, AntiRealism, and Applied Science,” Synthese 81 (1989): 329–352; Kristin Shrader-Frechette, Nuclear Power and Public Policy (Boston: Kluwer, 1980). 13. The report that led to stopping Yucca Mountain is the Blue-Ribbon Commission on America’s Nuclear Future, Report to the Secretary of Energy (Washington, DC: January 2012): 1–158; see N. Lior, “Sustainable Energy Development (May 2011) with Some Gamechangers,” Energy 40, no. 1 (April 2012): 3–18; J. Karlesky, “Collaboration by Deflection: Coping with Spent Nuclear Fuel,” Public Administration Review 72, no. 2 (March/April 2012): 196–205; M. C. Thorne, “Is Yucca Mountain a Long-Term Solution


for Disposing of US Spent Nuclear Fuel and High-Level Radioactive Waste?,” Journal of Radiological Protection 32, no. 2 (2012): 175–180; hereafter cited as Thorne-2012. 14. See K. Bennett, “What You Don’t Know Can Hurt You,” Philosophy and Phenomological Research 79, no. 3 (November 2009): 766–774; D. Kleinman and S. Suryanarayanan, “Dying Bees and the Social Production of Ignorance,” Science, Technology & Human Values [in press], doi: 10.1177/0162243912442575; K. Elliot, “Selective Ignorance and Agricultural Research,” Science, Technology and Human Values [in press], doi: 10.1177/0162243912442399. 15. US DOE, Nuclear Waste Policy Act, Environmental Assessment, Yucca Mountain Site, Nevada Research and Development Area, Nevada, DOE/RW-0073, 3 vols. (Washington, DC: US DOE, 1986), vol. 2: 6–280; hereafter cited as US DOE, NWPA-Yucca. See Shrader-Frechette, BU. 16. US DOE, NWPA-Yucca: vol. 2: 6–292. 17. US DOE, NWPA-Yucca: vol. 2: 6–12 and 6–25. 18. J. L. Younker, W. B. Andrews, G. A. Fasano, C. C. Herrington, S. R. Mattson, R. C. Murray, L. B. Ballou, M. A. Revelli, A. R. Ducharme, L. E. Shephard, W. W. Dudley, D. T. Hoxie, R. J. Herbst, E. A. Patera, B. R. Judd, J. A. Docka, and L. R. Rickertsen, Report of Early Site Suitability Evaluation of the Potential Repository Site at Yucca Mountain, Nevada, SAIC91/8000 (Washington, DC: US DOE, 1992), E-11; hereafter cited as Younker, Andrews, et al. 19. Younker, Andrews, et al., E-11. 20. J. L. Younker, S. L. Albrecht, et al., Report of the Peer Review Panel on the Early Site Suitability Evaluation of the Potential Repository Site at Yucca Mountain, Nevada, SAIC91/8001 (Washington, DC: US DOE, 1992), B-2; hereafter cited as Younker, Albrecht, et al. See Thorne-2012. 21. Younker, Andrews, et al., E-5. 22. Younker, Andrews, et al., E-11. 23. Kristin Shrader-Frechette, Risk and Rationality (Berkeley: University of California Press, 1991), 133–145. 24. L. Libby, M. Wurtele, and C. Whipple, “Evaluation of Great Deserts of the World for Perpetual Radwaste Storage,” The Environmental Professional 4, no. 2 (1982): 111–128 (Item 122 in US DOE, DE89005394). 25. US NRC, In the Matter of Proposed Rulemaking on the Storage and Disposal of Nuclear Waste (Waste Confidence Rulemaking), PR-50, 51 (44FR61372) (Washington, DC: US NRC, 1980), 1–25, IV-1 26. F. Thompson, F. Dove, and F. Krupka, Preliminary Upper-Bound Consequence Analysis for a Waste Repository at Yucca Mountain, Nevada, SAND83-7475 (Albuquerque, NM: Sandia National Labs, 1984), i, v–vi, 7, 47. 27. Y. Lin, Sparton—A Simple Performance Assessment Code for the Nevada Nuclear Waste Storage Investigations Project, SAND85-0602 (Albuquerque, NM: Sandia National Labs, 1985), i, 1; see J. Helton, C. W. Hansen, and C. J. Sallaberry, “Uncertainty and Sensitivity Analysis in Performance Assessment for the Proposed High-Level Radioactive Waste Repository at Yucca Mountain, Nevada,” Reliability, Engineering and System Safety 107 (2012), 44–63. 28. C. St. John, Thermal Analysis of Spent Fuel Disposal in Vertical Emplacement Boreholes in a Welded Tuff Repository, SAND84-7207 (Albuquerque, NM: Sandia National Labs, 1985), 2. 29. US DOE, Nuclear Waste Policy Act, Environmental Assessment, Reference Repository Location, Hanford Site, Washington, 3 vols. DOE/RW-0070 (Washington, DC: US DOE, 1986), vol. 2: 6–75; see US Government Accountability Office, NUCLEAR WASTE: Uncertainties and Questions about Costs and Risks Persist with DOE’s Tank Waste Cleanup Strategy at Hanford (Washington, DC: September 2009), 1–53; R. F. 
Schumacher, C. L. Crawford, N. E. Bibler, D. M. Ferrera, H. D. Smith, G. L. Smith, J.


D. Vienna, I. L. Pegg, I. S. Muller, D. B. Blumenkranz, and D. J. Swanberg, Hanford LowLevel Waste Form Performance for Meeting Land Disposal Requirements (Savanna River Site: 2002), 1–6. 30. I. Borg, R. Stone, H. Levy, and L. Ramspott, Information Pertinent to the Migration of Radionuclides in Ground Water at the Nevada Test Site. Part 1. Review and Analysis of Existing Information (Livermore, CA: Lawrence Livermore Lab, 1976) (Item 53 in US DOE, DE89005394). 31. J. Wang and T. Narasimhan, Hydrologic Mechanisms Governing Fluid Flow in Partially Saturated, Fractured, Porous Tuff at Yucca Mountain (Berkeley, CA: Lawrence Berkeley Lab, 1984) (Item 105 in US DOE, DE88004834); J. Wang and T. Narasimhan, Hydrologic Mechanisms Governing Partially Saturated Fluid Flow in Fractured Welded Units and Porous Non-Welded Units at Yucca Mountain (Berkeley, CA: Lawrence Berkeley Lab, 1986) (Item 282 in US DOE, DE88004834). 32. Nevada NWPO, State of Nevada Comments on the US Department of Energy Consultation Draft Site Characterization Plan, Yucca Mountain Site, Nevada Research and Development Area, Nevada: Vol. 2 (Carson City: Nevada Nuclear Waste Project Office, 1988), Items 334–335 in US DOE, DE90006793. 33. K. Pruess, J. Wang, and Y. Tsang, Effective Continuum Approximation for Modeling Fluid and Heat Flow in Fractured Porous Tuff Nevada Nuclear Waste Storage Investigations Project (Berkeley, CA: Lawrence Berkeley Lab, 1988) (Item 42 in US DOE, DE89005394). 34. C. Jantzen, J. Stone, and R. Ewing, eds., Scientific Basis for Nuclear Waste Management VIII. Volume 44 (Pittsburgh: Materials Research Society, 1989) (Item 403 in US DOE, DE90006793). 35. Younker, Andrews, et al., 2–150. 36. Younker, Andrews, et al., 2–157. 37. Younker, Andrews, et al., 2–150. 38. Younker, Andrews, et al., 2–155. 39. Younker, Andrews, et al., 2–155; see K. Kostelnik and J. Clarke, “Managing Residual Contaminants—Reuse and Isolation Case Studies,” Remediation 18, no. 2 (2008): 75–97. 40. S. Sinnock and T. Lin, Preliminary Bounds on the Expected Postclosure Performance of the Yucca Mountain Repository Site, Southern Nevada, SAND84-1492 (Albuquerque, NM: Sandia National Labs, 1984), 47. 41. S. Pitman, R. Westerman, and J. Haberman, “Corrosion and Slow-Strain-Rate Testing of Type 304L Stainless in Tuff Groundwater Environments,” in Corrosion ’87 (San Francisco: Pacific Northwest Lab, 1986) (Item 172 in US DOE, DE88004834). 42. C. Sastre, C. Pescatore, and T. Sullivan, Waste Package Reliability, NUREG/CR-4509 (Washington, DC: US NRC, 1986), 22. 43. Sastre et al., Waste Package Reliability, 65. 44. Sastre et al., Waste Package Reliability, 66. 45. Sinnock and Lin, Preliminary Bounds, 7, 37. 46. See D. Socher, “The Textbook Case of Affirming the Consequent,” Teaching Philosophy 24, no. 3 (September 2001): 241–251. On the contrary, see K. Harris, “Affirming the Consequent: Or, How My Science Teachers Taught Me How to Stop Worrying and Love Committing the Fallacy,” Educational Philosophy and Theory 34, no. 3 (August 2002): 345–352. 47. See, for example, Loux, State of Nevada Comments on the U.S. Department of Energy Site Characterization Plan, vol. 1: 3, vol. 2: 2. 48. E.g., see S. A. Cushman and E. L. Landguth, “Spurious Correlations and Inference in Landscape Genetics,” Molecular Ecology 19, no. 17 (September 2010): 3592–3602. The authors use a spatially explicit simulation model to generate genetic data across a spatially distributed population as functions of several alternative gene-flow processes. Their conclusion? 
Simple correlational analyses between genetic data and proposed explanatory models—used by many scientists—produce strong spurious correlations, which


lead to incorrect inferences. See also N. Lee, C. Senior, and M. J. Butler, “The Domain of Organizational Cognitive Neuroscience: Theoretical and Empirical Challenges,” Journal of Management 38, no. 4 (July 2012): 923–931. Lee et al. criticize other neuroscientists for their invalid inferences. 49. Sinnock et al., Preliminary Estimates of Groundwater Travel Time. See J. Bredehoeft and M. King, “Potential Contaminant Transport in the Regional Carbonate Aquifer beneath Yucca Mountain, Nevada, USA,” Hydrogeology Journal 18, no. 3 (2010): 775–789, who say groundwater travel time could be as short as 100 years, or on the order of hundreds of years, rather than the needed million years. Also, see J. Zhu, K. Pohlmann, J. Chapman, C. Russell, R. Carroll, and D. Shafer, Uncertainty and Sensitivity of Contaminant Travel Times from the Upgradient Nevada Test Site to the Yucca Mountain Area (Desert Research Institute: February 2009), 1–96. See S. Kelkar, M. Ding, S. Chu, B. Robinson, B. Arnold, A. Meijer, and A. Eddebbarh, “Modeling Solute Transport Through Saturated Zone Ground Water at 10 km Scale: Example from the Yucca Mountain License Application,” Journal of Contaminant Hydrology 117, no. 1-4 (September 2010): 7–25, and its encouraging appeals to ignorance. 50. L. Costin and S. Bauer, Thermal and Mechanical Codes First Benchmark Exercise, Part I: Thermal Analysis, SAND88-1221 UC-814 (Albuquerque, NM: Sandia National Labs, 1990): i; see N. Hayden, Benchmarking: NNMSI Flow and Transport Codes: Cove 1 Results, SAND84-0996 (Albuquerque, NM: Sandia National Labs, 1985), 1–1, 1–2. For those who speak of “verifying” their models, see, for example, R. Barnard and H. Dockery, Technical Summary of the Performance Assessment Calculational Exercises for 1990 (PACE-90), vol. 1: “Nominal Configuration” Hydrogeologic Parameters and Calculation Results, SAND90-2727 (Albuquerque, NM: Nuclear Waste Repository Technology Department, Sandia National Labs, 1991), 1–3. For those who claim to “validate” their models, see, for example, T. Brikowski, Yucca Mountain Program Summary of Research, Site Monitoring and Technical Review Activities (January 1987–June 1988) (Carson City: State of Nevada, Agency for Projects/Nuclear Waste Project Office, December 1988), 51; R. L. Hunter and C. J. Mann, Techniques for Determining Probabilities of Events and Processes (New York: Oxford University Press, 1992), 4; K. Stephens et al., Methodologies for Assessing Long-Term Performance of High-Level Radioactive Waste Packages, NUREG/ CR-4477 ATR-85 (5810-01)1ND (Washington, DC: US NRC, Division of Waste Management, Office of Nuclear Material Safety and Safeguards, January 1986), xvi; Barnard and Dockery, Technical Summary of the Performance Assessment Calculational Exercises for 1990 (PACE-90), vol. 1: 1–3. K. D. Huff and T. H Bauer, Benchmarking a New Closed-Form Thermal Analysis Technique Against a Tranditional Lumped Parameter, Finite-Difference Method FCRD-UFD-2012-000142 (Lemont, IL: Argonne National Laboratory: 13 July 2012), 1–13. Criticism of existing DOE “verification and validation” models also is a theme in US Government Accountability Office, Nuclear Waste: DOE Needs a Comprehensive Strategy and Guidance on Computer Models that Support Environmental Cleanup Decisions (Washington DC: US GAO, February 2011), 1–36. 51. E.g., S. Sakr and F. Casaiti, “Liquid Benchmarks: Towards an Online Platform for Collaborative Assessment of Computer Science Research Results,” Lecture Notes in Computer Science 6417 (2011): 10–24; M. Liang, X. 
Wang, X. Xue, and G. Li, “Formal Verification of UML Models Based on TLA,” Computer Engineering 37, no. 2 (January 20, 2011): 72–74; V. C. Prakash, J. K. R. Sastry, and D. B. K. Kamesh, “On Verification and Validation of State Based Internal Behavioral Models of Embedded Systems,” International Journal of Communication Engineering Applications 2, no. 2 (June 2011): 73–85; S. Goyal and G. K. Goyal, “Time-Delay Single Layer Artificial Neural Network Models for Estimating Shelf Life of Burfi,” International Journal of Research Studies in Computing 1, no. 2 (2012): 13–19. 52. For those who believe in program “verification,” see note 51 and, for example, E. Dijkstra, A Discipline of Programming (Englewood Cliffs, NJ: Prentice-Hall, 1976); C. Hoare,


“An Axiomatic Basis for Computer Programming,” Communications of the ACM 12 (1969): 576–580, 583; C. Hoare, “Mathematics of Programming,” BYTE (August 1986): 115–149. 53. J. H. Fetzer, “Program Verification: The Very Idea,” Communications of the ACM 31, no. 9 (September 1988): 1048–1063. See also J. H. Fetzer, “Philosophical Aspects of Program Verification,” Minds and Machines 1 (1991): 197–216; and J. H. Fetzer, “Mathematical Proofs of Computer System Correctness,” Notices of the American Mathematical Society 36, no. 10 (December 1989): 1352–1353. For discussions regarding program verification, I am indebted to J. H. Fetzer. 54. Office of Civilian Radioactive Waste Management, Performance Assessment Implementation Plan. 55. Office of Civilian Radioactive Waste Management, Performance Assessment Implementation Plan. 56. J. H. Fetzer, “Another Point of View,” Communications of the ACM 32, no. 8 (August 1989): 921. 57. S. Savitzky, “Letters,” Communications of the ACM 32, no. 3 (March 1989): 377. 58. D. Nelson, “Letters,” Communications of the ACM 32, no. 7 (July 1989): 792. 59. J. Dobson and B. Randell, “Program Verification,” Communications of the ACM 32, no. 4 (April 1989): 422. 60. See, for example, P. Hopkins, Cone 2A Benchmarking Calculations Using LLUVIA, SAND88-2511-UC-814 (Albuquerque, NM: Sandia National Labs, 1990), 1. 61. Naomi Oreskes, Ken Belitz, and Kristin Shrader-Frechette, “Verification, Validation, and Confirmation of Numerical Models in the Earth Sciences,” Science, 263 (February 4, 1994): 641–646. Reprinted in Computer Measurement Group Transactions 84 (Spring 1994): 85–92; Naomi Oreskes, “Science and Public Policy: What’s Proof Got to Do With It?,” Environmental Science and Policy 7 (2004): 369–383. 62. Kristin Shrader-Frechette, Taking Action, Saving Lives (New York: Oxford University Press, 2007). Admission of flawed scientific-research practices are in D. Fanelli, “How Many Scientists Fabricate and Falsify Research?,” PLoS One 4, no. 5 (2009): e5738, doi: 10.1371/journal.pone.0005738. 63. John Grant, Corrupted Science: Fraud, Ideology, and Politics in Science (Surrey: Facts, Figures & Fun, 2007); Union of Concerned Scientists, Heads They Win, Tails We Lose: How Corporations Corrupt Science at the Public’s Expense (Cambridge, MA: UCS Publications, 2012), 20, 22. See note 62. 64. UCS, Heads They Win, 31–32.

Chapter 3

1. International Agency for Research on Cancer (IARC), “Ethylene Oxide CAS No.: 75-21-8,” Monographs 60, no. 73 (1994), http://www.inchem.org/documents/iarc/vol60/m6002.html, accessed November 24, 2008; Allen v. Pennsylvania Engineering Corp. (APEC), 102 F.3d 194, 195 (5th Cir. 1996). An earlier version of parts of this article appeared in Kristin Shrader-Frechette, “Conceptual Analysis and Special-Interest Science,” Synthese 177, no. 3 (2010): 449–469. The author thanks the US National Science Foundation (NSF) for History and Philosophy of Science grant, “Three Methodological Rules in Risk Assessment,” 2007–2009, through which project research was done. All opinions and errors are those of the author, not NSF. 2. C. Cranor, Toxic Torts (New York: Cambridge University Press, 2006). Carl Cranor, Legally Poisoned (Cambridge, MA: Harvard University Press, 2011). 3. S. Greenland, “Randomization, Statistics, and Causal Inferences,” Epidemiology 1, no. 6 (1990): 421–429; K. J. Rothman, “Statistics in Non-Randomized Studies,” Epidemiology 1, no. 6 (1990): 417–418; S. Wing, “Objectivity and Ethics and Environmental Health


Science,” Environmental Health Perspectives 111, no. 14 (2003): 1809–1818; K. ShraderFrechette, “Statistical Significance in Biology,” Biological Theory 3, no. 1 (2008): 12–16. 4. K. Shrader-Frechette, “Evidentiary Standards and Animal Data,” Environmental Justice 1, no. 3 (2008): 1–6. 5. IARC, “Ethylene Oxide CAS No.: 75-21-8”; see Cranor, Toxic Torts, 325–326. 6. Cranor, Toxic Torts, 325–326. 7. E. J. Calabrese, “Did Occupational Exposure to ETO Cause Mr. Walter Allen’s Brain Tumor?,” Unpublished Report, April 13, 1993; C. Cranor, e-mail to author, November 24, 2008. 8. Cranor, e-mail to author, November 24, 2008. 9. L. Ehrenberg, K. D. Hiesche, S. Osterman-Golkar, and I. Wennberg, “Evaluation of Genetic Risks of Alkylating Agents,” Mutation Research 24, no. 2 (1974): 83–103; K. Sulovska, D. Lindgren, G. Eriksson, and L. Ehrenberg, “The Mutagenic Effect of Low Concentrations of Ethylene Oxide in Air,” Hereditas 6, no. 1-2 (1969): 264–266. 10. Cranor, e-mail to author, November 24, 2008. 11. E.g., E. J. Calabrese, Curriculum Vitae, University of Massachusetts (2007), http://people.umass.edu/nrephc/EJCCVApril02.pdf and http://www.umassmarine.net/faculty/ showprofs.cfm?prof_ID=30), accessed January 3, 2007. See Sharon Beder, Global Spin (Glasgow, UK: Green Books, 2002). See also E. J. Calabrese, “Improving the Scientific Foundations for Estimating Health Risks from Fukushima Incident,” PNAS 108, no. 49 (2011): 19447–19448. The article—E. J. Calabrese, I. Iavaicoli, and V. Calabrese, “Hormesis: Its Impact on Medicine and Health,” Human and Experimental Toxicology (October 2012): doi: 10.1177/0960327112455069—says, “This work was sponsored by the Air Force Office of Scientific Research, Air Force Material Command, USAF, under grant number FA9550-07-1-0248. The lead author (EJC) has received unrestricted research grants from ExxonMobil Foundation concerning hormesis. However, this funding was not applied to the present manuscript.” 12. E. J. Calabrese and L. A. Baldwin, “Hormesis,” Annual Review of Pharmacology and Toxicology 43, no. 1 (2003): 191. 13. E. J. Calabrese and L. A. Baldwin, “Defining Hormesis,” Human and Experimental Toxicology 21 (2002): 91; E. J. Calabrese and R. Blain, “The Hormesis Debate,” Regulatory Toxicology and Pharmacology 61, no. 1 (2011): 73–81; E. J. Calabrese, “Toxicology Rewrites Its History and Rethinks Its Future,” Environmental Toxicology and Chemistry 30, no. 12 (2011): 2658–2673. 14. E. J. Calabrese and L. A. Baldwin, “Toxicology Rethinks Its Central Belief,” Nature 421, no. 6924 (2003): 691; hereafter cited as Calabrese and Baldwin, Belief. 15. D. Axelrod, K. Burns, D. Davis, and N. Von Larebeke, “Hormesis,” International Journal of Occupational and Environmental Health 10, no. 3 (2004): 336–338. 16. E. J. Calabrese and L. A. Baldwin, “Hormesis,” Trends in Pharmacological Sciences 22, no. 6 (2001): 285–286, 290. 17. K. A. Thayer, R. Melnick, K. Burns, D. Davis, and J. Huff, “Fundamental Flaws of Hormesis for Public Health Decisions,” Environmental Health Perspectives 113, 10 (2005): 1272–1275; hereafter cited as Thayer et al., FF. 18. See note 19. Scientific Advisory Board and FIFRA Scientific Advisory Panel (SAB), Comments on the Use of Data from Testing of Human Subjects, EPA-SAB-EC-00-017 (Washington, DC: EPA, 2000). 19. See note 16. See also M. Korkalainen, K. Huumonen, J. Naarala, M. Viluksela, and J. Juutilainen, “Dioxin Induces Genomic Instability in Mouse Embryonic Fibroblasts,” PLoS One 7, no. 5 (2012): e37895; Y. Wang, Y. Fan, and A. 
Puga, “Dioxin Exposure Disrupts the Differentiation of Mouse Embryonic Stem Cells into Cardiomyocytes,” Toxicological Sciences 115, no. 1 (2010): 225–237. 20. R. J. Kociba, D. G. Keyes, and J. E. Beyer, “Results of a Two-Year Chronic Toxicity and Oncogenicity Study of 2,3,7,8-Tetrachlorodibenzo-p-dioxin in Rats,” Toxicology and Applied Pharmacology 46, no. 2 (1978): 279–303.


21. M. N. Mead, “Sour Finding on Popular Sweetener,” Environmental Health Perspectives 114, no. 3 (2006): A176–A178. 22. S. E. Reir, D. C. Martin, R. E. Bowman, W. P. Domowski, and J. L. Becker, “Endometriosis in Rhesus Monkeys (Macaca mulatta) Following Chronic Exposure to 2,3,7,8-tetrachlorodibenzo-p-dioxin,” Fundamentals of Applied Toxicology 21, no. 4 (1993): 433–441. 23. See notes 17 and 19. 24. See note 16. 25. E. J. Calabrese, “Historical Blunders,” Cellular and Molecular Biology 51, no. 7 (2005): 643. 26. See note 16. 27. See note 18. 28. See note 19. 29. Calabrese and Baldwin, Belief, 691. 30. See notes 27 and 16. 31. US National Research Council (US NRC), Measuring Lead Exposure in Infants, Children (Washington, DC: National Academy Press, 1993); L. London, C. Beseler, M. Bouchard, D. Bellinger, C. Colosio, P. Grandjeam, R. Harari, T. Kootbodien, H. Kromhout, F. Little, T. Meijster, A. Moretto, S. Rohlmann, and L. Stallones, “Neurobehavioral and Neurodevelopmental Effects of Pesticide Exposures,” NeuroToxicology 33, no. 4 (August 2012): 887–896. 32. SAB, Comments on the Use of Data from Testing of Human Subjects. 33. K. Shrader-Frechette, Burying Uncertainty (Berkeley: University of California Press, 1993), 105–114. 34. K. J. Rothman, Epidemiology (New York: Oxford University Press, 2002), 126. 35. Cranor, Toxic Torts, 227. 36. See note 16. 37. See note 17. 38. R. Cook and E. J. Calabrese, “Hormesis Is Biology, Not Religion,” Environmental Health Perspectives 114, no. 12 (2006): A688. 39. See note 16. 40. R. Cook and E. J. Calabrese, “The Importance of Hormesis to Public Health,” Environmental Health Perspectives 114, no. 11 (2006): 1631–1635; hereafter cited as Cook and Calabrese, Importance. 41. See note 16. 42. See note 17. 43. E.g., see note 22. 44. Calabrese and Baldwin, “Hormesis,” 285–286, 290. 45. Calabrese, “Historical Blunders,” 650. 46. See notes 16 and 17. 47. See note 16. 48. See note 43. 49. See note 16. 50. See note 16. 51. See note 18. 52. See note 40, Cook and Calabrese, Importance, 1632–1634. 53. Calabrese and Baldwin, Belief, 692. 54. See note 40 and E. Kendig, H. Le, and S. Belcher, “Defining Hormesis,” International Journal of Toxicology 29, no. 3 (2010): 235–246. 55. E.g., Thayer et al., FF; D. Currie, “EPA Power Plant Standards to Improve Air Quality, Health. Heart Attacks, Deaths to Be Prevented,” This Nation’s Health 42, no. 1 (2012): 1–22. 56. K. Shrader-Frechette, Risk and Rationality (Berkeley: University of California Press, 1993); hereafter cited as Shrader-Frechette, RR. 57. D. Hume, A Treatise of Human Nature (Oxford: Clarendon Press, 1975); see also Shrader-Frechette, RR, 24, 156. See also R. Belohrad, “The Is-Ought Problem, the


Open Question Argument, and the New Science of Morality,” Human Affairs 21, no. 3 (2011): 262–271. 58. B. Jennings, J. Kahn, A. Mastroianni, and L. Parker, eds., Ethics and Public Health (Washington, DC: Association of Schools of Public Health, 2003). 59. Shrader-Frechette, RR, 100–145; K. Shrader-Frechette, Ethics of Scientific Research (Rowman and Littlefield: New York, 1994), 9–12, 23–118; ibid. 60. E.g., see note 16. 61. See note 59. 62. See note 17. M. Jones, F. van Leeuwen, W. Hoogendoom, M. Moufits, H. Jolleema, H. van Boven, M. Press, L. Bernstein, and A. Swerdlow, “Endometrial Cancer Survival After Breast Cancer in Relation to Tamoxifen Treatment,” Breast Cancer Research 14, no. 3 (2012): R19; E. Waters, K. Cronin, B. Graubard, P. Han, and A. Freedman, “Prevalence of Tamoxifen Use,” Cancer, Epidemiology, Biomarkers and Prevention 19 (2010): 443–446. 63. See note 59. 64. T. Beauchamp and J. Childress, Principles of Biomedical Ethics (New York: Oxford University Press, 1989); R. Faden and T. Beauchamp, A History and Theory of Informed Consent (New York: Oxford University Press, 1986); T. Beauchamp, “Informed Consent,” Cambridge Quarterly of Healthcare Ethics 20, no. 4 (2011): 515–523. 65. See note 59; Beauchamp and Childress, Principles of Biomedical Ethics. 66. K. Shrader-Frechette, “Ideological Toxicology,” Biological Effects of Low-Level Exposures 14, no. 4 (2008): 39–47; hereafter cited as Shrader-Frechette, IT. See note 59; Hume, A Treatise of Human Nature. 67. K. Shrader-Frechette, Taking Action, Saving Lives, hereafter cited as Shrader-Frechette, TASL. See also P. Landrigan and L. Goldman, “Protecting Children from Pesticides and Other Toxic Chemicals,” Journal of Exposure Science and Environmental Epidemiology (2011): 119–120; P. Landrigan, V. Rauh, and M. Galvez, “Environmental Justice and the Health of Children,” Mount Sinai Journal of Medicine 77 (2010): 178–187; P. Landrigan and A. Miodovnik, “Children’s Health and the Environment,” Mount Sinai Journal of Medicine 77, no. 1 (2011): 1–10; R. Barouki, P. Guckman, P. Grandjean, M. Hanson, and J. Heindel, “Developmental Origins of Non-Communicable Disease,” Environmental Health 11 (2012): 42. 68. See note 17. 69. See chapter 6, this volume. 70. E. J. Calabrese and L. A. Baldwin, “Reevaluation of the Fundamental Dose-Response Relationship,” BioScience 49, no. 9 (1999): 725–732. 71. See note 25. 72. L. Lang, “Strange Brew: Assessing Risk of Chemical Mixtures.” Environmental Health Perspectives 103, no. 2 (1995): 142–145. 73. Cook and Calabrese, Importance; Calabrese, “Historical Blunders.” See note 16. 74. US NRC, Measuring Lead Exposure in Infants, Children, 187. 75. Aristotle, Nicomachean Ethics, trans. Terence Irwin (Indianapolis: Hackett, 1985). 76. National Research Council (NRC), Intentional Human Dosing Studies for EPA Regulatory Purposes (Washington, DC: National Academy Press, 2004), 110–113. See note 16; E. J. Calabrese and R. Cook, “Hormesis,” BELLE 12, no. 3 (2005): 26. 77. Cook and Calabrese, Importance, 1633. 78. E.g., see Cook and Calabrese, Importance, see notes 16, 31, and 73. 79. See note 19. 80. E.g., Cook and Calabrese, Importance. 81. Cook and Calabrese, Importance. 82. See note 19. 83. See note 84. 84. Thayer et al., FF. 85. See note 18.


86. C. Oleskey, A. Fleishman, and L. R. Goldman, “Pesticide Testing in Humans,” Environmental Health Perspectives 112, no. 8 (2004): 114–119; A. H. Lockwood, “Human Testing of Pesticides,” American Journal of Public Health 94, no. 11 (2004): 1908–1915. 87. NRC, Intentional Human Dosing Studies for EPA Regulatory Purposes, 3, 110, 112–113. 88. University of Massachusetts, Policy on Conflicts of Interest, doc. T96-039 (Amherst: University of Massachusetts Press, 1997), 38, http://www.umass.edu/ research/ora/conflict.html, accessed October 1, 2009. 89. A. Flanagin, P. Fontanarosa, and C. DeAngelis, “Update on JAMA’s Conflict of Interest Policy,” Journal of the American Medical Association 296, no. 2 (2006): 220. 90. E. J. Calabrese and L. A. Baldwin, “ ‘Chemical Hormesis: Scientific Foundations,’ Unpublished final report of the Texas Institute for Advancement of Chemical Technology (1998),” http://cheweb.tamu.edu/tiact/index_files/Page536.htm, accessed November 22, 2006; E. J. Calabrese, L. A. Baldwin, and C. Holland, “Hormesis,” Risk Analysis 19, no. 2 (1999): 261–281. 91. See note 17. 92. E. J. Calabrese, “Resume, University of Massachusetts (2006),” http://www.umassmarine.net/faculty/showprofs.cfm?prof_ID=30, accessed January 30, 2007. 93. E. J. Calabrese, “Curriculum Vitae, University of Massachusetts (2002),” http://people. umass.edu/nrephc/EJCCVApril02.pdf, accessed January 3, 2007. 94. K. Shrader-Frechette, IT. 95. E. J. Calabrese, “Curriculum Vitae, University of Massachusetts (2002),” http://people. umass.edu/nrephc/EJCCVApril02.pdf, accessed November 24, 2008. 96. Calabrese, “Curriculum Vitae.” 97. See notes 92–93. 98. E. J. Calabrese, “Resume, University of Massachusetts (2006),” http://www.umassmarine.net/faculty/showprofs.cfm?prof_ID=30, accessed November 24, 2008. 99. Calabrese, “Resume.” 100. E.g., see note 40; Cook and Calabrese, Importance, 1631–1635. 101. K. Shrader-Frechette, IT. 102. See note 40; Cook and Calabrese, Importance, 1631–1635. 103. Shrader-Frechette, IT, 647. 104. K. Elliott, e-mail to the author, January 28, 2007. 105. Shrader-Frechette, IT. 106. K. Elliott, e-mail to author, October 3, 2008; K. Elliott, e-mail to author, November 13, 2008. 107. L. J. Rourke, e-mail to author, November 13, 2008; L. J. Rourke, e-mail to author, November 27, 2008. 108. L. J. Rourke, e-mail to author, November 27, 2008. 109. K. Shrader-Frechette, e-mail to Kevin Elliott, November 11, 2008. 110. Thomas McGarity and Wendy Wagner, Bending Science (Cambridge, MA: Harvard University Press, 2010), 158. See Suzanne Goldenberg, “US Senate’s Top Climate Sceptic Accused of Waging ‘McCarthyite Witch-Hunt’,” The Guardian (March 1, 2010), w w w.guardian.co.uk/environment/2010/march/01/inhofe-climate-mccarthyite, accessed on February 3, 2012; Shrader-Frechette, TASL; Cheryl Hogue, “Scientists Are Being INTIMIDATED AND HARASSED Because of Their Research,” Chemical and Engineering News 88, no. 23 (2010): 31–32. See also David Michaels, Doubt Is Their Product (New York: Oxford University Press, 2008); Raymond Bradley, Global Warming and Political Intimidation (Amherst: University of Massachusetts Press, 2011). 111. Kristin Shrader-Frechette, “Environmental-Justice Whistleblowers Versus Industry Retaliators,” Environmental Justice 6, no. 2 (2012): 214–218; Kristin Shrader-Frechette, “Research Integrity and Conflicts of Interest,” Accountability in Research 19, no. 4 (2012): 220–242. 112. See note 110.
113. Upton Sinclair, I, Candidate for Governor (Berkeley: University of California Press, 1934), 109.

Chapter 4

1. State Department, Trafficking in Persons Report (Washington, DC: State Department, 2002);Nicholas D. Kristof and Sheryl WuDunn, Half the Sky (New York: Knopf, 2009). This chapter was developed, in part, thanks to funding from the National Science Foundation, Division of Biological and Behavioral Sciences Grant SES-98-10611. Earlier versions of arguments in this chapter appeared in Kristin Shrader-Frechette, Environmental Justice (New York: Oxford University Press, 2002): ch. 7; hereafter cited as Shrader-Frechette, EJ—and inKristin Shrader-Frechette, “Trimming Exposure Data, Putting Radiation Workers at Risk,” American Journal of Public Health 97, no. 9 (2007): 1782–1786; hereafter cited as Shrader-Frechette, RDR. All opinions expressed are those of the author and do not reflect those of the National Science Foundation. 2. Kara Sissell, “Judge Sentences Executive to 17 Years,” Chemical Week (May 10, 2000), http://www.highbeam.com/doc/1G1-62259192.html, accessed March 10, 2013. 3. J. Paul Leigh, Causes of Death in the Workplace (London: Quorum Books, 1995), 3–7, 215; D. Hurley and A. Lebbon, “A Comparison of Nonfatal Occupational Injuries and Illnesses Among Hispanic Versus Non-Hispanic Workers in the United States,” Hispanic Journal of Behavioral Sciences 34, no. 3 (June 2012): 474–490; S. Bittle, Still Dying for a Living: Corporate Criminal Liability after the Westray Mine Disaster (Vancouver: University of British Columbia, 2012). 4. Carl Gersuny, Work Hazards and Industrial Conflict (Hanover, NH: University Press of New England, 1981): xi. See National Institute for Occupational Safety and Health (NIOSH), Identifying High-Risk Small Business (Washington, DC: NIOSH, 1999). 5. Lori Wallach and Michelle Sforza, Whose Trade Organization? (Washington, DC: Public Citizen, 1999), esp. chs. 6 and 7; D. Brown, A. Deardoff, and R. Stern, “Labor Standards and Human Rights: Implications for International Trade and Investment,” International Policy Center—Working Paper Series (Ann Arbor, MI: Research Seminar in International Economics, 2012). 6. W. Kip Viscusi, Fatal Tradeoffs (New York: Oxford University Press, 1992): 6–8, 6–69; W. K. Viscusi, Risk by Choice (Cambridge, MA: Harvard University Press, 1983), 37ff, 156–158. See also Ian M. Dobbs, “Compensating Wage Differentials,” Economics Letters 63, no. 1 (April 1999): 103–109, and Shrader-Frechette, EJ, ch. 7; K. A. Bender, C. P. Green, and J. S. Heywood, “Piece Rates and Workplace Injury?,” Journal of Population Economics 25, no. 2 (2012): 569–590. 7. See, for example, Matthias Beck, “Dualism in the German Labor Market,” American Journal of Economics and Sociology 57, no. 3 (July 1998): 261–283; Shrader-Frechette, EJ, ch. 7; J. Daw and J. Hardie, “Compensating Differentials, Labor Market Segmentation, and Wage Inequality,” Social Science Research 41, no. 5 (September 2012): 1179–1197. 8. See, for example, Michael J. Moore, “Unions, Employment Risks, and Market Provision of Employment Risk Differentials,” Journal of Risk and Uncertainty 10, no. 1 (January 1995): 57–70; Christophe Daniel and Catherine Sofer, “Bargaining, Compensating Wage Differentials, and Dualism of the Labor Market,” Journal of Labor Economics 16, no. 3 (July 1998): 546–575; Peter Dorman and Paul Hagstrom, “Wage Compensation for Dangerous Work Revisited,” Industrial and Labor Relations Review 52, no. 1 (October 1998): 116–135; J. Daw and J. Hardie, “Compensating Differentials, Labor Market Segmentation, and Wage Inequality,” 1179–1197. 9. 
“Standards for Protection Against Radiation,” Code of Federal Regulations Title 10, Part 20, sections 1201, 1301 (Washington, DC: US Government Printing Office, 2005); hereafter cited as 10 CFR 20.1201, 20.1301. Citations to this part are hereafter cited as 10
CFR 20. Citations to other sections are hereafter cited in analogous ways. K. S. ShraderFrechette, What Will Work (New York: Oxford University Press, 2011), ch. 4; hereafter cited as Shrader-Frechette, WWW. 10. National Research Council, Health Risks from Exposure to Low Levels of Ionizing Radiation, BEIR VII (Washington, DC: National Academy Press, 2005), 6; hereafter cited as BEIR. 11. A. J. González, “Biological Effects of Low Doses of Ionizing Radiation,” IAEA Bulletin 4 (1994): 37–45. International Commission on Radiological Protection, 1990 Recommendations of the ICRP (Oxford, UK: Pergamon, 1991); hereafter cited as ICRP. 12. E. Cardis et al., “Risk of Cancer After Low Doses of Ionizing Radiation,” BMJ 331 (2005): 77–80; hereafter cited as Cardis. These data assume a 25-percent cancer mortality. See later discussion of paper in section “Additional RDR Arguments.” 13. See previous notes. 14. E.g., What Is the National Dose Registry? (Ottawa: Health Canada, 2004), www.hc-sc. gc.ca/ewh-semt/occup-travail/radiation/regist/what_is-quelle_est_e. html, accessed December 15, 2005; M. Moser, “The National Dose Registry for Radiation Workers in Switzerland,” Health Physics 69, no. 6 (1995): 979–986; S. Y. Choi et al., “Analysis of Radiation Workers’ Dose Records in the Korean National Dose Registry,” Radiation Protection Dosimetry 95, no. 2 (2001): 143–148; Energy-Related Health Research Program (Washington, DC: National Institute for Occupational Safety and Health, 2001), http:// www.cdc.gov/niosh/2001–133a.html, accessed October 16, 2005. For data on nearly 200,000 Canadian radiation-exposed workers, see Canadian National Dose Registry, “2008 Report on Occupational Radiation Exposures in Canada” Health Canada, http:// www.hc-sc.gc.ca/ewh-semt/alt_formats/hecs-sesc/pdf/pubs/occup-travail/2008report-rapport-eng.pdf, accessed November 19, 2012. For data on nearly 70,000 Swiss radiation-exposed workers, see D. Frei, C. Wernli, S. Baechler, G. Fischer, H. Jossen, A. Leupin, Y. Lortscher, R. Mini, T. Otto, R. Schuh, and U. Weidmann, “Integration of External and Internal Dosimetry in Switzerland,” Radiation Protection Dosimetry 125, no. 1-4 (2007): 47–51. For data on nearly 65,000 South Korean radiation-exposed workers, see Y. Jin, M. Jeong, K. Moon, M. Jo, and S. Kang, “Ionizing Radiation-Induced Diseases in Korea,” Journal of Korean Medical Science 25 (December 2010): S70–S76. 15. 10 CFR 20. 16. I. Linkov and D. Burmistrov, “Reconstruction of Doses From Radionuclide Inhalation for Nuclear-Power-Plant Workers Using Air-Concentration Measurements and Associated Uncertainties,” Health Physics 81, no. 1 (2001): 70–75. 17. Operational Radiation Safety Program (Bethesda, MD: National Council on Radiation Protection and Measurement, 1998): Report 127; L. Ruzer, “Radioactive Aerosols,” in Aerosols Handbook: Measurement, Dosimetry, and Health Effects, ed. L. Ruzer and N. Harley, 383–412 (Boca Raton, LA: CRC Press, 2012). 18. See earlier notes, esp. BEIR. 19. E. J. Hall, “The Crooked Shall Be Made Straight,” International Journal of Radiation Biology 80, no. 5 (2004): 327–337. 20. See previous notes, esp. BEIR. 21. See note 20. 22. See Cardis. 23. 10 CFR 19.13(b), 835.1; see J. L. Anderson and R. D. Daniels, “Bone Marrow Dose Estimates From Work-Related Medical X-Ray Examinations Given Between 1943 and 1966 for Personnel From Five US Nuclear Facilities,” Health Physics 90, no. 6 (2006): 544–553; J. 
Cardelli et al., “Significance of Radiation Exposure From Work-Related Chest X-Rays for Epidemiological Studies of Radiation Workers,” American Journal of Industrial Medicine 42, no. 6 (2002): 490–501; J. Boei, S. Vermeulen, M. Skubakova, M. Meijers, W. Loenen, R. Wolterbeek, L. Mullenders, H. Vrieling, and M. Giphart-Gassler, “No Threshold for the Induction of Chromosomal Damage at Clinically Relevant Low Doses of X Rays,” Radiation Research 177, no. 5 (May 2012): 602–613; E.
Claus, L. Calvocoressi, M. Bondy, J. Schildkraut, J. Wiemels, and M. Wrensch, “Dental X-Rays and Risk of Meningioma,” Cancer 118, no. 18 (2012): 4530–4537. 24. See 10 CFR 20. 25. Declaration of Helsinki (Ferney-Voltaire, France: World Medical Organization, 2004). 26. T. L. Beauchamp and J. F. Childress, Principles of Biomedical Ethics (New York: Oxford University Press, 1989), esp. 78ff; R. Faden and T. L. Beauchamp, A History and Theory of Informed Consent (New York: Oxford University Press, 1986); Y. Schenker and A. Meisel, “Informed Consent in Clinical Care,” The Journal of the American Medical Association 305, no. 11 (March 2011): 1130–1131. 27. See ICRP, 10 CFR 20. 28. G. R. Howe et al., “Analysis of the Mortality Experience Amongst US Nuclear Power Industry Workers After Chronic Low-Dose Exposure to Ionizing Radiation,” Radiation Research 162, no. 5 (2004): 517–526, esp. 518. 29. Report of the Advisory Committee on Human Radiation Experiments (Washington, DC: Advisory Committee on Human Radiation Experiments, 1994), ch. 12, section 6. 30. Complex Cleanup (Washington, DC: US Office of Technology Assessment, 1991), 111, 138–143. 31. Worker Safety at DOE Nuclear Facilities (Washington, DC: US Congress, 1999); Worker Safety at DOE Nuclear Sites (Washington, DC: US Congress, 1994). 32. DOE: Clear Strategy on External Regulation Needed for Worker and Nuclear Facility Safety (Washington, DC: US Government Accountability Office, 1998), 4. 33. US GAO, Report to Congressional Requesters (Washington, DC: US Government Accountability Office, 2012), http://www.gao.gov/assets/600/590982.txt, accessed November 19, 2012. 34. E.g., Shrader-Frechette, RDR. 35. I-131 Thyroid Dose/Risk Calculator for NTS Fallout (Bethesda, MD: National Cancer Institute, 2003), http://ntsi131.nci.nih.gov/, accessed December 1, 2005. 36. See 10 CFR 20; ICRP. 37. Shrader-Frechette, WWW, ch. 4 38. Shrader-Frechette, WWW, ch. 4 39. L. Krestinina, D. Preston, E. Ostroumova, M. Degteva, E. Ro, O. Vyushkova, N. Startsev, M. Jossenko, and A. Akleyev, “Protracted Radiation Exposure and Cancer Mortality in the Techa River Cohort,” Radiation Research 164 (2005): 602–611. 40. See note 39. 41. See BEIR. 42. E. S. Gilbert, “Invited Commentary,” American Journal of Epidemiology 153, no. 4 (2001): 321. 43. A. Smith, The Wealth of Nations (New York: Modern Library, 1993); S. Lahiri, C. Low, and M. Barry, “A Business Case Evaluation of Workplace Engineering Noise Control: A NetCost Model,” Journal of Occupational & Environmental Medicine 53, no. 3 (March 2011): 329–337. 44. M. Zimmerman, An Essay on Moral Responsibility (Totowa, NJ: Rowman and Littlefield, 1988); J. Glover, Responsibility (London: Routledge and Kegan Paul, 1970); J. Feinberg, Doing and Deserving (Princeton, NJ: Princeton University Press, 1970). 45. 10 CFR 20.1208, 20.1502; ICRP 1990; ICRP 2005, 45. 46. E. Draper, Risky Business (New York: Cambridge University Press, 1991); Genetic Monitoring and Screening in the Workplace, OTA-BA-455 (Washington, DC: US Office of Technology Assessment, 1990); R. Jansson, C. Watts, A. Katz, P. Kuzler, A. Mastroiani, J. Thompson, and A. McWilliams, Genetic Testing in the Workplace: Implications for Public Policy (Seattle: University of Washington Press, 2000). 47. Kristin Shrader-Frechette, “Nuclear Catastrophe, Disaster-Related Environmental Injustice, and Fukushima,” Environmental Justice 5, no. 3 (June 2012): 133–139. 48. National Registry for Radiation Workers (London: Ministry of Defence, 2005), www.mod. 
uk/issues/radiation_workers/registry.htm, accessed December 5, 2005.
49. See BEIR; Cardis. 50. Committee on the Medical Aspects of Radiation in the Environment, The Implications of the New Data (London: Her Majesty’s Stationery Office, 1986). 51. J. Bentham, An Introduction to the Principles of Morals and Legislation, ed. J. H. Burns and H. Hart (London: Athlone, 1970). 52. G. E. Moore, Principia Ethica (Cambridge: Cambridge University Press, 1960).

Chapter 5

1. An earlier version of this chapter appeared in Kristin Shrader-Frechette, “Evidentiary Standards and Animal Data,” Environmental Justice 1, no. 3 (2008): 139–144. See also Carl Cranor, Toxic Torts (Cambridge: Cambridge University Press, 2006); hereafter cited as Cranor, TT; Paul Ehrlich and Anne Ehrlich, Betrayal of Science and Reason (Washington, DC: Island Press, 1996); Thomas McGarity and Wendy Wagner, Bending Science (Cambridge, MA: Harvard University Press, 2008); and Kristin ShraderFrechette, Taking Action, Saving Lives (New York: Oxford University Press, 2007); hereafter cited as Shrader-Frechette, TASL. Work on this chapter was funded by US National Science Foundation (NSF) grant SES-0724781, “Three Methodological Rules in Risk Assessment,” 2007–2009, for which the author thanks NSF. All opinions expressed are those of the author and not the US NSF. 2. Hans Reichenbach, The Theory of Probability (Berkeley: University of California Press, 1949), 373, 472; Rudolph Carnap, Logical Foundations of Probability (Chicago: University of Chicago Press, 1950). 3. Charles S. Peirce (1931–1958), Collected Papers of Charles Sanders Peirce, vols. 1–6, ed. C. Hartshorne and P. Weiss; vols. 7–8, ed. A. W. Burks (Cambridge, MA: Harvard University Press, 1958), vol. 7, particularly 210, 1901. 4. Karl Popper, Logic of Discovery (New York: Plenum Press, 1968), 31. 5. Norwood Russell Hanson, Patterns of Discovery (New York: Cambridge University Press, 1958); Norwood Russell Hanson, “More on ‘the Logic of Discovery’,” Journal of Philosophy 57, no. 6 (1960): 182–188. 6. Carl Kordig, “Discovery and Justification,” Philosophy of Science 45 (1978): 110–117; Paul Hoyningen-Huene, “Context of Discovery and Context of Justification,” Studies in History and Philosophy of Science 18, no. 4 (1987): 501–515. 7. US National Research Council, Pesticides in the Diets of Infants and Children (Washington, DC: National Academy Press, 1993). 8. Kristin Shrader-Frechette, “EPA’s 2006 Human-Subjects Rule for Pesticide Experiments,” Accountability in Research 14 (2007): 211–254; Alan Lockwood, “Human Testing of Pesticides,” American Journal of Public Health 94 (2004): 1908–1916. 9. Bruce Ames and Lois Gold, “Pesticides, Risk, and Applesauce,” Science 244 (1989): 757; hereafter cited as Ames and Gold, 1989. Bruce Ames and Lois Gold, “The Causes and Prevention of Cancer,” Environmental Health Perspectives 105, Supplement 4 (June 1997): 865–874; hereafter cited as Ames 1997. Elizabeth Whelan, “Ratty Test Rationale (Washington, DC: American Council on Science and Health, 2005), available at http:// www.acsh.org/healthissues/newsID.1035/healthissue_detail.asp, accessed November 29, 2005; Michael Gough, “ ‘Environmental Cancer’ Isn’t What We Thought or Were Told,” www.cato.org/tetimony/ct-mg030697.html, accessed May 28, 2005. 10. Ames and Gold 1989, 757; Ames 1997. Elizabeth Whelan claims animal tests should be used to regulate chemicals more strictly only when 2 or more species exhibit “highly lethal” cancers that have short latency, arise at low doses and do not also occur spontaneously. Whelan, “Stop Banning Products at the Drop of a Rat,” Insight 10 (December 12, 1994): 18–20. 11. See note 7, 51. J. Keeler and T. Robbins, “Translating Cognition from Animals to Humans,” Biochemical Pharmacology 81, no. 12 (2011): 1356–1366; J. States, A. Barchowsky, I.
Cartwright, J. Reichard, and B. Futscher, “Arsenic Toxicology,” Environmental Health Perspectives 119, no. 10 (2011): 1356–1363; Y. Shen, “Primate Models for Cardiovascular Drug Research and Development,” Current Opinion in Investigational Drugs 11, no. 9 (2010): 1025–1029; S. Reagan-Shaw, M. Nihal, and N. Ahmad, “Dose Translation from Animal to Human Studies Revisited,” FASEB Journal 2, no. 3 (2008): 659–661. 12. Hans Reichenbach, Theory of Probability, 288. 13. S. S. Epstein, Cancer-Gate (Amityville, NY: Baywood Press, 2005), 83; M. Cohen, “The Impact of Medical Censorship on Patient Care,” Townsend Letter for Doctors and Patients (December 2004), http://www.findarticles.com/p/articles/mi_m0ISW/is_257/ai_ n7638036, accessed July 16, 2006. 14. I. Bross, “Animals in Cancer Research: A Multibillion-Dollar Fraud,” The A-V Magazine (November 1982). 15. Cranor, TT, 224–225, 248–254. 16. Several of these scientific and ethical difficulties are discussed in Shrader-Frechette, TASL, ch. 3. 17. Adam Finkel, “Rodent Tests Continue to Save Human Lives,” Insight 10 (1994): 20–22. US National Research Council, Intentional Human Dosing Studies for EPA Regulatory Purposes (Washington, DC: National Academy Press, 2004), 159ff., appendix A; hereafter cited as NAS, Intentional. Marianne I. Martić-Kehl, Roger Schibli, and P. August Schubiger, “Can Animal Data Predict Human Outcome?,” European Journal of Nuclear Medicine and Molecular Imaging 39 (2012): 1492–1496; Society of Toxicology (SOT), Reliability of Animal Data (Reston, VA: SOT, 2014), http://www.toxicology.org/ai/air/ air2.asp#, accessed January 25, 2014. 18. NAS, Intentional. See (but do not cite/quote/take as policy) Stephen Roberts and US Environmental Protection Agency (EPA) Science Advisory Board (SAB) Ethylene Oxide Review Panel, Draft Advisory Report (Washington, DC: US EPA SAB, 2007), 36–37l, http://www.epa.gov/sab/pdf/ethylene_oxide_final_review_draft_report_8-30-07. pdf, accessed July 31, 2008. 19. US Environmental Protection Agency (EPA) Science Advisory Board, Comments on the Use of Data from the Testing of Human Subjects, EPA-SAB- EC-00-017 (Washington, DC: US EPA, 2000). See notes 17 and 18. 20. S. R. Silver, R. A. Rinsky, S. P. Cooper, R. W. Hornung, and D. Lai, “Effect of Follow-Up Time on Risk Estimates,” American Journal of Industrial Medicine 42 (2002): 481–489. 21. Dale Hattis and K. Silver, “Human Interindividual Variability—A Major Source of Uncertainty in Assessing Risks for Non-Cancer Health Effects,” Risk Analysis 14 (1994): 421–431. 22. See note 13. Exceptions to the dearth of good quantitative analyses of exposureestimate errors include, for instance, S. C. Brown, M. F. Schonbeck, D. McClure, A. E. Baron, W. C. Navidi, T. Byers, and A. J. Ruttenberg, “Lung Cancer and Internal Lung Doses among Plutonium Workers at the Rocky Flats Plant: A Case-Control Study,” American Journal of Epidemiology 160 (2004): 163–172; D. Richardson, S. Wing, K. Steenland, and W. McKelvey, “Time-Related Aspects of the HealthyWorker Survivor Effect,” Annals of Epidemiology 14 (2004): 633–639; Dale Hattis, “Illustration of a Simple Approach for Approximately Assessing the Effect of Measurement/Estimation Uncertainties for Individual Worker Exposures,” appendix B, in note 13 above. 23. See, e.g., L. Stayner, K. Steenland, M. Dosemeci, and I. Hertz-Picciotto, “Attenuation of Exposure-Response Curves in Occupational Cohort Studies at High-Exposure Levels,” Scandinavian Journal of Worker and Environmental Health 29 (2003): 317–324; and K. Steenland, J. 
Deddens, A. Salvan, and L. Stayner, “Negative Bias in Exposure-Response Trends in Occupational Studies,” American Journal of Epidemiology 143 (1996): 202–210. 24. See note 23. 25. See note 18.
26. See, e.g., E. Garshick, F. Laden, J. E. Hart, B. Rosner, T. J. Smith, D. W. Dockery, and F. E. Speizer, “Lung Cancer in Railroad Workers Exposed to Diesel Exhaust,” Environmental Health Perspectives 112 (2004): 1539–1543. See note 13. 27. K. J. Rothman, Epidemiology (New York: Oxford University Press, 2002), 117. 28. National Research Council and Institute of Medicine, Dietary Supplements (Washington, DC: National Academy Press, 2005), 205–206. 29. S. Jasanoff, Science at the Bar (Cambridge, MA: Harvard University Press, 1995), 125. 30. Cranor, TT, 267. 31. D. Weed, “Weight of Evidence,” Risk Analysis 25 (2005): 1545–1557. 32. See Alan Gewirth, “The Rationality of Reasonableness,” Synthese 57 (1983): 225– 247; Virginia Held, “Rationality and Reasonable Cooperation,” Social Research 44 (1977): 708–744; John Rawls, “Kantian Constructivism in Moral Theory,” Journal of Philosophy 77 (1980): 515–572; David Richards, A Theory of Reasons for Actions (Oxford: Clarendon Press, 1971). 33. See note 12 and Scott Eliasof, Douglas Lazarus, et a l., “Correlating Preclinical Animal Studies and Human Clinical Trials,” Proceedings of the National Academy of Sciences, doi: 10.1073/ pnas.1309566110, http://www.pnas.org/content/early/2013/08/21/1309566110. abstract, accessed January 15, 2014. 34. See notes 12, 14. 35. See, e.g., A. Ross, “Use of Laboratory Studies for the Design, Explanation, and Validation of Human Micronutrient Intervention Studies,” Journal of Nutrition 142, no. 1 (2012): 157– 160; S. M. Baker, “Animal Models,” Nature 475, no. 7354 (2011): 123–128; W. Lynch, K. Nicholson, M. Dance, R. Morgan, and P. Foley, “Animal Models of Substance Abuse and Addiction: Implications for Science, Animal Welfare, and Society,” Comparative Medicine 60, no. 3 (2010): 177–188; E. Tokar, L. Benbrahim-Tallaa, J. Ward, R. Lunn, and R. Sams, “Cancer in Experimental Animals Exposed to Arsenic and Arsenic Compounds,” Critical Reviews in Toxicology 40, no. 10 (2010): 912–927; J. Mogil, K. Davis, and S. Derbyshire, “The Necessity of Animal Models in Pain Research,” Pain 15, no. 1 (2010): 12–17; K. Chadman, M. Yang, and J. Crawley, “Criteria for Validating Mouse Models of Psychiatric Diseases,” American Journal of Medical Genetics Part B—Neuropsychiatric Genetics 150B, no. 1 (2009): 1–11; Matthias von Herrath and Gerald T. Nepom, “Animal Models of Human Type-1 Diabetes,” Nature Immunology 10 (2009): 129–132; E. Taubøll, L. Røste, S. Svalheim, and L. Gjerstad, “Disorders of Reproduction in Epilepsy—What Can We Learn from Animal Studies?,” Seizure 17, no. 2 (2009): 120. Regarding IARC and NTP, see D. P. Rall, M. D. Hogan, J. E. Huff, B. A. Schwetz, and R. W. Tennant, “Alternatives to Using Human Experience in Assessing Health Risks,” Annual Review of Public Health 8 (1987): 356; Cranor, TT, 250. 36. See note 14 and M. Swindle, A. Makin, A. Herron, F. Clubb, and K. Frazier, “Swine as Models in Biomedical Research and Toxicology Testing,” Veterinary Pathology 49, no. 2 (2012): 344–356. K. Bergman, “The Animal Rule and Emerging Infections,” Clinical Pharmacology and Therapeutics 86, no. 3 (2009): 328–331. 37. See Thomas Beauchamp and James Childress, Principles of Biomedical Ethics (New York: Oxford University Press, 1989); Charles Culver and Bernard Gert, Philosophy in Medicine (New York: Oxford University Press, 1982); Norman Daniels, Just Health Care (New York: Oxford University Press, 1985); Robert Veatch, A Theory of Medical Ethics (New York: Basic, 1981). 38. 
Regarding ethics and burdens of proof, see Kristin Shrader-Frechette, Risk and Rationality (Berkeley: University of California Press, 1991), 100–145; hereafter cited as Shrader-Frechette, RR; Carl Cranor, Regulating Toxic Substances (New York: Oxford University Press, 1993). 39. Regarding virtue and protecting the vulnerable, see Philippa Foot, Virtues and Vices and Other Essays in Moral Philosophy (Berkeley: University of California Press, 1978); Peter Geach, The Virtues (Cambridge: Cambridge University Press, 1977); Robert Kruschwitz
and Robert Roberts, eds., The Virtues (East Windsor, CT: Wadsworth, 1987); Edmund Pincoffs, Quandaries and Virtues (Lawrence: University of Kansas Press, 1986); G. H. von Wright, The Varieties of Goodness (Oxford: Routledge, 1963). 40. Regarding ethical responsibility, see Aristotle, Nicomachean Ethics, ed. and trans. D. Ross (New York: Oxford University Press, 1925), esp. books I–II; Joel Feinberg, Doing and Deserving (Princeton, NJ: Princeton University Press, 1970); Jonathan Glover, Responsibility (Oxford: Routledge, 1970); and Michael Zimmerman, An Essay on Moral Responsibility (Lanham, MD: Rowman and Littlefield, 1988). Regarding deontological and contractarian ethics, see Ernest Barker, ed., Social Contract (Chicago: Greenwood, 1980); James Buchanan and Gordon Tullock, The Calculus of Consent (Chicago: University of Chicago Press, 1975); Ronald Dworkin, Taking Rights Seriously (Cambridge, MA: Harvard University Press, 1977); John Locke, Two Treatises of Government (Cambridge: Cambridge University Press, 1977); John Rawls, A Theory of Justice (Cambridge, MA: Harvard University Press, 1971); and John Simmons, Moral Principle and Political Obligations (Princeton, NJ: Princeton University Press, 1979). 41. Regarding utilitarianism, see Shrader-Frechette, RR, 100–130; Richard Brandt, Ethical Theory (New York: Prentice-Hall, 1959), esp. chs. 12–19; John Stuart Mill, Utilitarianism, ed. J. M. Robson (Toronto: University of Toronto Press, 1969); Samuel Scheffler, ed., Consequentialism and Its Critics (New York: Oxford University Press, 1988); J. J. C. Smart, Utilitarianism (Cambridge: Cambridge University Press, 1973); Amartya Sen and Bernard Williams, eds., Utilitarianism and Beyond (Cambridge: Cambridge University Press, 1982). 42. Colin Beavan, Fingerprints: The Origins of Crime Detection and the Murder Case That Launched Forensic Science (New York: Hyperion, 2001); Adrian Berry, Scientific Anecdotes (Buffalo, NY: Prometheus Publishers, 1993), 104–105.

Chapter 6

1. Elizabeth Harper, Wishing (New York: Simon and Schuster, 2008), 38. 2. David Winner, “Wayne Rooney: Beautiful Game. Beautiful Mind,” ESPN: The Magazine (May 16, 2012), http://espnfc.com/euro2012/en/news/1071240/beautiful-game-beautiful-mind-.html, accessed January 15, 2014. 3. Galileo Galilei, Dialogue Concerning the Two Chief World Systems, 2nd rev. ed., trans. S. Drake (Berkeley: University of California Press, 1967). A much earlier version of this chapter appeared in Kristin Shrader-Frechette, “Using a Thought Experiment to Clarify a Radiobiological Controversy,” Synthese: An International Journal for Epistemology, Methodology, and Philosophy of Science 128, no. 3 (2001): 319–342. 4. Albert Einstein, “Autobiographical Notes,” in Albert Einstein: Philosopher-Scientist, ed. Paul Arthur Schilpp (LaSalle, IL: Open Court, 1949). 5. Melanie Frappier, Letitia Meynell, and James Robert Brown, eds. Thought Experiments in Science, Philosophy, and the Arts (New York: Routledge, 2012). Nicholas Rescher, “Thought Experimentation in Presocratic Philosophy,” in Thought Experiments in Science and Philosophy, ed. Tamara Horowitz and Gerald Massey, 31–42 (Lanham, MD: Rowman and Littlefield, 1991). 6. Richard P. Feynman, Robert B. Leighton, and Matthew Sands, The Feynman Lectures in Physics, 3 vols. (Reading, MA: Addison-Wesley, 1963), I: 54. 7. James R. Brown, “Thought Experiments in Science, Philosophy, and Mathematics,” Croatian Journal of Philosophy 7, no. 19 (2007): 3–27; David Sherry, “Mathematical Reasoning: Induction, Deduction and Beyond,” Studies in History and Philosophy of Science 37, no. 3 (2006): 489–504; hereafter cited as Sherry, “Mathematical Reasoning”; D. A. Anapolitanos, “Thought Experiments and Conceivability Conditions,” in Thought Experiments in Science and Philosophy, ed. Tamara Horowitz and Gerald Massey, 87 (Lanham, MD: Rowman and Littlefield, 1991); hereafter cited as Anapolitanos, “Thought Experiments.”
8. James R. Brown, The Laboratory of the Mind: Thought Experiments in the Natural Sciences, 2nd ed. (New York: Routledge, 2010); James R. Brown, The Laboratory of the Mind (New York: Routledge, 1991); hereafter cited as Brown, The Laboratory of the Mind; see John J. Clement, “The Role of Imagistic Simulation in Scientific Thought Experiments,” Topics in Cognitive Science 1, no. 4 (2009): 686–710; hereafter cited as Clement, “Thought Experiments.” 9. Clement, “Thought Experiments”; Sherry, “Mathematical Reasoning”; John D. Norton, “Thought Experiments in Einstein’s Work,” in Thought Experiments in Science and Philosophy, ed. Tamara Horowitz and Gerald Massey, 129 (Lanham, MD: Rowman and Littlefield, 1991); hereafter cited as Norton, “Einstein”; John D. Norton, “Are Thought Experiments Just What You Thought?,” Canadian Journal of Philosophy 26, no. 3 (1996): 333–366; M. Picha, “How to Reconstruct a Thought Experiment,” Organon F 18, no. 2 (2012): 154–188. 10. See B. D. Massey, “Do All Rational Folk Reason as We Do? Frege’s Thought Experiment Reconsidered,” in Thought Experiments in Science and Philosophy, ed. Tamara Horowitz and Gerald Massey, 99–112 (Lanham, MD: Rowman and Littlefield, 1991); hereafter cited as Massey, “Reason.” 11. Andrew D. Irvine, “On the Nature of Thought Experiments in Scientific Reasoning,” in Thought Experiments in Science and Philosophy, ed. Tamara Horowitz and Gerald Massey, 159 (Lanham, MD: Rowman and Littlefield, 1991); see Joseph Shieber, “On the Nature of Thought Experiments and a Core Motivation of Experimental Philosophy,” Philosophical Psychology 23, no. 4 (2010): 547–564; hereafter cited as Shieber, “Thought Experiments.” 12. Karl Popper, The Logic of Scientific Discovery (London: Hutchinson, 1959). 13. Brown, The Laboratory of the Mind, 76ff.; Norton, “Einstein,” 131; see Shieber, “Thought Experiments.” 14. Judith Jarvis Thomson, “A Defense of Abortion,” Philosophy and Public Affairs 1, no. 1 (1971): 47–68. 15. Ezra Mishan, Economics for Social Decisions (New York: Praeger, 1972), 21. 16. Anapolitanos, “Thought Experiments,” 88–94; see Brown “Thought Experiments”; Sherry, “Mathematical Reasoning”; Stephen Cowin and Mohammed Benalla, “Graphical Illustrations for the Nur-Byerlee-Carroll Proof of the Forumula for the Biot Effective Stress Coefficient in Poroelasticity,” Journal of Elasticity: The Physical and Mathematical Science of Solids 104, no. 1-2 (2011): 133–141; Peter Klimek, Stefan Thurner, and Rudolf Hanel, “Evolutionary Dynamics from a Variational Principle,” Physical Review E: Statistical, Nonlinear and Soft Matter Physics (3) 82, no. 1 (2010): 1–10. 17. Catherine Caufield, Multiple Exposures: Chronicles of the Radiation Age (New York: Harper and Row, 1989), 238–239; hereafter cited as Caufield, Multiple Exposures. For Belgium-US comparisons, see Testimony of Radiation Monitoring, Before the Senate Committee on Environment and Public Works, 112th Congress (April 12, 2011) (statement of Lisa Jackson, US EPA administrator) and Federal Agency for Nuclear Control (FANC), Radiological Monitoring in Belgium: Experts Permanently Monitor the Radioactivity in Our Environment (Brussels: FANC, 2005). 18. Anthony Robbins, Arjun Makhijani, and K. Yih, Radioactive Heaven and Earth: The Health and Environmental Effects of Nuclear Weapons Testing In, On, and Above the Earth (New York: Apex Press, 1991), 16–17. 19. 
National Research Council (NRC), Committee to Assess Health Risks from Exposure to Low Levels of Ionizing Radiation and Board on Radiation Effects Research Division on Earth and Life Studies, Health Risks from Exposure to Low Levels of Ionizing Radiation, BEIR VII Phase 2 (Washington, DC: National Academies Press, 2006), 225, 314; hereafter cited as BEIR VII; James E. Trosko, “Role of Low-Level Ionizing Radiation in Multi-Step Carcinogenic Process,” Health Physics 70, no. 6 (1996): 812; hereafter cited as Trosko, “Ionizing Radiation”; William J. Schull, Effects of Atomic Radiation: A Half-Century of Studies from Hiroshima and Nagasaki (New York: Wiley-Liss, 1995), 277; hereafter cited as Schull, Atomic Radiation.
20. BEIR VII, 65; Trosko, “Ionizing Radiation,” 815–817. 21. BEIR VII, 89; Schull, Atomic Radiation, 800. P. Duport, H. Jiang, N. S. Shilnikova, D. Krewski, and J. M. Zielinkski, “Database of Radiogenic Cancer in Experimental Animals Exposed to Low Doses of Ionizing Radiation,” Journal of Toxicology and Environmental Health, Part B, Critical Reviews 15, no. 3 (January 2012): 186–209; E. Calabrese, “Mulle’s Nobel Prize Lecture: When Ideology Prevailed over Science,” Toxological Sciences 126, no. 1 (2012): 1–4; hereafter cited as Calabrese-2012; C. L. Sanders, Radiation Hormesis and the Linear-No-Threshold Assumption (Heidelberg: Springer, 2010); hereafter cited as Sanders-2010; A. M. Vaiserman, L. V. Mekhova, N. M. Koshel, and V. P. Voitenko, “Cancer Incidence and Mortality after Low-dose Radiation Exposure: Epidemiological Aspects,” Radiatsionnaia biologiia, radioecologiia 50, no. 6 (November–December 2010): 691–702; hereafter cited as Vaiserman et al.-2010. 22. BEIR VII, 194; A. M. Stewart and G. W. Kneale, “The Hanford Data: Issues of Age at Exposure and Dose Recording,” PSR Quarterly 3 (1993): 101–111; R. Nussbaum and W. Köhnlein, “Health Consequences of Exposures to Ionising Radiation from External and Internal Sources: Challenges to Radiation Protection Standards and Biomedical Research,” Medicine and Global Survival 2, no. 4 (1995): 202–204; hereafter cited as Nussbaum and Köhnlein, “Health Consequences.” 23. BEIR VII, 194, 202. 24. A-bomb–based estimated data are in BEIR VII. Newer, measured, nuclear-worker data are in Elisabeth Cardis et al., “Risk of Cancer after Low Doses of Ionizing Radiation,” British Medical Journal 331 (2005): 77–80; hereafter cited as Cardis et al., “Risk”; Elisabeth Cardis et al., “The 15-Country Collaborative Study of Cancer Risk among Radiation Workers in the Nuclear Industry,” Radiation Research 167 (2007): 396–416. This debate also is evaluated in ch. 4, Kristin Shrader-Frechette, What Will Work (New York: Oxford University Press, 2011); hereafter cited as Shrader-Frechette, WWW. 25. R. J. M. Fry, “Effects of Low Doses of Radiation,” Health Physics 70, no. 6 (1996): 823; hereafter cited as Fry, “Radiation”; BEIR VII, 2, defines low doses as at below 10 rads. 26. Anapolitanos, “Thought Experiments”; Tamara Horowitz and Gerald Massey, Thought Experiments in Science and Philosophy (Lanham, MD: Rowman and Littlefield, 1991); see Brown, “Thought Experiments”; Sherry, “Mathematical Reasoning”; Roman Frigg, Models and Theories (Chesham, UK: Acumen Publishers, 2012); Gila Hanna, Hans Niels Jahnke, and Helmut Pulte, eds., Explanation and Proof in Mathematics: Philosophical and Educational Perspectives (New York: Springer, 2009). 27. World Nuclear Association (WNA), Chernobyl Accident (London: WNA, 2011), http:// www.world-nuclear.org/info/chernobyl/inf07.html, accessed February 6, 2011; A. MacLachlan, “Official Russian Register Keeps Chernobyl Death Toll at 28,” Nucleonics Week 35, no. 46 (1994): 11ff. 28. International Atomic Energy Agency (IAEA), International Chernobyl Project: Technical Report (Vienna, VA: IAEA, 1991), 4; hereafter cited as IAEA; Z. Jaworowski, “Observations on the Chernobyl Disaster and LNT,” Dose-response 8, no. 2 (2010): 148–171. 29. Y. M. Shcherbak, “Ten Years of the Chornobyl Era,” Scientific American 274, no. 4 (1996): 46; see A. V. Yablokov, “Chernobyl’s Public Health Consequences: Some Methodological Problems,” Chernobyl: Consequences of the Catastrophe for People and the Environment 1181, no. 
189454852 (2009): 32–41; hereafter cited as Yablokov, “Chernobyl.” 30. P. Campbell, “Chernobyl’s Legacy to Science,” Nature 380, no. 6576 (1996): 653; see Yablokov, “Chernobyl.” 31. J. W. Gofman, “Foreword,” in A. Yaroshinskaya, Chernobyl – The Forbidden Truth (Lincoln: University of Nebraska Press, 1995), 1–13; see Yablokov, “Chernobyl.” See Shrader-Frechette, WWW, ch. 4. 32. IAEA, 508–510. For discussion of latency, see Shrader-Frechette, WWW, ch. 4. 33. US Department of Energy (US DOE), Committee on the Assessment of Health Consequences in Exposed Populations, Chairman: M. Goldman, L. Anspaugh, R.
J. Catlin, J. I. Fabrikant and P. Gudiksen, Health and Environmental Consequences of the Chernobyl Nuclear Power Plant Accident, DOE/ER-0332 (Washington, DC: US DOE, 1987); see Carmel Mothersill, Richard Smith, and Colin Seymour, “Molecular Tools and the Biology of Low-dose Effects,” BioScience 59, no. 8 (2009): 649. 34. J. W. Gofman, Radiation-Induced Cancer from Low-Dose Exposure: An Independent Analysis (Berkeley, CA: Committee for Nuclear Responsibility, Inc, 1990), ch. 24; hereafter cited as Gofman, Cancer; see BEIR VII, 10. 35. BEIR VII, 6; Nussbaum and Köhnlein, “Health Consequences,” 198–203; Trosko, “Ionizing Radiation,” 818. 36. BEIR VII, 11; D. Beninson, “Risk of Radiation at Low Doses,” Health Physics 71, no. 2 (1996): 123; hereafter cited as Beninson, “Risk”; V. P. Bond, L. Wielopolski, and G. Shani, “Current Misinterpretations of the Linear No-Threshold Hypothesis,” Health Physics 70, 6 (1996): 878; hereafter cited as Bond et al., “Linear No-Threshold”; T. D. Jones, “A Unifying Concept for Carcinogenic Risk Assessments: Comparison with RadiationInduced Leukemia in Mice and Men,” Health Physics 47, no. 4 (1984): 539; hereafter cited as Jones, “Leukemia.” 37. Fry, “Radiation,” 823; BEIR VII, 55, 63–63. BEIR VII, 55 and 62–63, says this value is 1-5 mGy. 38. Bond et al., “Linear No-Threshold,” 880; S. Lacombe and C. Le Sech, “Advances in Radiation Biology: Radiosensitization in DNA and Living Cells,” Surface Science 603 (2009): 1953–1960, say low energy electrons (<10 eV), which are present all along the tracks of ionizing particles, can cause “direct” DNA damage. 39. BEIR VII, 51; D. Kovan, “NRPB Cuts Up the Cut-Off Theory,” Nuclear Engineering International 40, no. 497 (1995): 15; National Radiological Protection Board (NRPB), “Risk of Radiation-Induced Cancer at Low Doses and Low Dose Rates for Radiation Protection Purposes,” Documents of the NRPB 6, no. 1 (1995); National Research Council (NRC), Committee on the Biological Effects of Ionizing Radiation and Board on Radiation Effects Research Commission on Life Sciences, Health Effects of Exposure to Low Levels of Ionizing Radiation, BEIR V (Washington, DC: National Academy Press, 1990); Jones, “Leukemia,” 537. 40. BEIR VII, 31, 47, 51, 62; Caufield, Multiple Exposures, 159; United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR), Sources and Effects of Ionizing Radiation—UNSCEAR 1994 Report to the General Assembly, with Scientific Annexes (New York: United Nations, 1994); hereafter cited as UNSCEAR 1994; A. J. González, “Biological Effects of Low Doses of Ionizing Radiation: A Fuller Picture,” IAEA Bulletin 4 (1994): 40; hereafter cited as González, “Biological Effects.” 41. BEIR VII, 83–87; E. P. Cronkite and S. V. Musolino, “The Linear No-Threshold Model,” Health Physics 70, no. 6 (1996): 775–776. 42. Calabrese-2012; Sanders-2010; Vaiserman et al.-2010. See also E. Calabrese, “Toxicology Rewrites Its History and Rethinks Its Future: Giving Equal Focus to Both Harmful and Beneficial Effects,” Environmental Toxicology and Chemistry 30, no. 12 (2011): 2658– 2673; E. Calabrese, “Muller’s Nobel Lecture on Dose-Response for IonizingRadiation: Ideology or Science?,” Archives of Toxicology 85, no. 12 (2011): 1495–1498; E. Calabrese, “Getting the Dose-Response Wrong: Why Hormesis Became Marginalized and the Threshold Model Accepted,” Archives of Toxicology 83, no. 
3 (2009): 227–247; Bobby Scott, “Low-dose Radiation Risk Extrapolation Fallacy Associated with the Linear-no-threshold Model,” Human & Experimental Toxicology 27, no. 2 (2008): 163–168; hereafter cited as Scott, “Extrapolation Fallacy”; D. Jolly and J. Meyer, “A Brief Review of Radiation Hormesis,” Australasian Physical & Engineering Sciences in Medicine 32, no. 4 (2009): 180–187; BEIR VII, 10; L. Sagan, “Policy Forum Section—On Radiation, Paradigms, and Hormesis,” Science 245, no. 4918 (1989): 574, 621; T. D. Luckey, Hormesis with Ionizing Radiation (Boca Raton, FL: CRC Press, 1980); see J. J. Cohen, “Conference on Radiation Hormesis: An Overview,” Health Physics 53, no. 5 (1987): 519.
43. Jyrki Tapani Kuikka, “Low-dose Radiation Risk and the Linear No-threshold Model,” International Journal of Low Radiation 6, no. 2 (2009): 157; hereafter cited as Kuikka, “Linear No-threshold”; Scott, “Extrapolation Fallacy”; David D. Brenner and Rainer K. Sachs, “Estimating Radiation-induced Cancer Risks at Very Low Doses: Rationale for Using a Linear No-threshold Approach,” Radiation and Environmental Biophysics 44, no. 4 (2006): 253–256; Roger Clarke, “Control of Low-Level Radiation Exposure,” Journal of Radiological Protection 19, no. 2 (1999): 107–115; B. Lindell, “The Risk Philosophy of Radiation Protection,” Radiation Protection Dosimetry 68, no. 3-4 (1996): 159; K. Mossman, “Trivialization of Radiation Hormesis,” SSI News 7, no. 2 (1999): 13–15; K. L. Mossman, “Guest Editorial—The Linear, No-Threshold Model in Radiation Protection: The HPS Response,” HPS Newsletter 24, no. 3 (1996): 2; K. L. Mossman, M. Goldman, F. Mass, W. A. Mills, K. J. Schiager, and R. J. Vetter, “Radiation Risk in Perspective,” HPS Newsletter 24, no. 3 (1996): 3; G. Walinder, Has Radiation Protection Become a Health Hazard? (Nykoping, Sweden: Swedish Nuclear Training and Safety Center, 1995); hereafter cited as Walinder, Radiation Protection; G. Greenhalgh, “Radiation and Cancer—New Moves in the Threshold Debate,” Nuclear Engineering International 40, no. 497 (1996): 14–15. 44. See Junko Matsubara, “Re-examination of the Linear No Threshold Hypothesis, and a Trial for the Public to Comprehend the Reality of the Low-dose Effects of Radiation,” International Journal of Low Radiation 3, no. 4 (2007): 241; A. MacLachlan, “Official Russian Register Keeps Chernobyl Death Toll at 28,” Nucleonics Week 35, no. 46 (1994): 11ff.; P. Duport, “The Recommendations of the French Academy of Science on ICRP 60,” Bulletin of the Canadian Radiation Protection Association 17, no. 1 (1996): 18–21. 45. BEIR VII, 65; R. Wilson, “The Need for New Risk Default Assumptions,” BELLE Newsletter 5, no. 2/3 (1996): 19; hereafter cited as Wilson, “Assumptions”; K. S. Crump, D. G. Hael, C. H. Langley, and R. Peto, “Fundamental Carcinogenic Processes,” Cancer Research 36, no. 19 (1976): 2973–2979; hereafter cited as Crump et al., “Carcinogenic Processes.” 46. See BEIR VII, 46, 55, 62; J. A. Myrden and J. E. Hiltz, “Breast Cancer Following Multiple Fluoroscopies during Artificial Pneumothorax Treatment of Pulmonary Tuberculosis,” Canadian Medical Association Journal 100, no. 22 (1969): 1032–1034; B. Modan, E. Ron, and A. Werner, “Thyroid Cancer following Scalp Irradiation,” Radiology 123 (1977): 741– 744; B. Modan, A. Chetrit, E. Alfandary, and L. Katz, “Increased Risk of Breast Cancer after Low-Dose Irradiation,” Lancet 1 (1989): 629–631; J. D. Boice and R. R. Monson, “Breast Cancer in Women After Repeated Fluoroscopic Examinations of the Chest,” Journal of the National Cancer Institute 59, no. 3 (1977): 823–832; A. M. Stewart and G. W. Kneale, “Radiation Dose Effects in Relation to Obstetric X-Rays and Childhood Cancers,” Lancet 1 (1970): 1185–1188; E. B. Harvey, J. D. Boice Jr., M. Honeyman, and J. T. Flannery, “Prenatal X-Ray Exposure and Childhood Cancer in Twins,” New England Journal of Medicine 312, no. 9 (1985): 541–545; see Gofman, Cancer, ch. 21. 47. L. W. Brackenbush and L. A. Braby, “Microdosimetric Basis for Exposure Limits,” Health Physics 55, no. 2 (1980): 256; R. Wilson, “The Need for New Risk Default Assumptions,” BELLE Newsletter 5, no. 2/3 (1996): 19; see BEIR VII, 52. 48. 
BEIR VII, 11; Beninson, “Risk,” 122–123; Bond et al., “Linear No-Threshold,” 878; Trosko, “Ionizing Radiation,” 812; Fry, “Radiation,” 824–825; Jones, “Leukemia,” 539. 49. See BEIR VII, 3–6; Wilson, “Assumptions,” 19. 50. See BEIR VII, 9, 54; Bond et al., “Linear No-Threshold,” 877; Beninson, “Risk,” 124; Trosko, “Ionizing Radiation,” 812. 51. See R. Spangler, N. L. Goddard, D. N. Spangler, and D. S. Thaler, “Tests of the Single-hit DNA Damage Model,” Journal of Molecular Biology 392, no. 2 (2009): 283–300; hereafter cited as Spangler et al., “Single-hit”; A. M. Kellerer, “Radiobiology Challenges Posed by Microdosimetry,” Health Physics 70, no. 6 (1996): 835; hereafter cited as Kellerer,
“Radiobiology”; J. Urquhart, “Leukaemia and Nuclear Power in Britain,” in Radiation and Health, ed. Russel Jones and R. Southwood, 233–242 (Chichester: Wiley, 1987); Beninson, “Risk,” 123. 52. See Beninson, “Risk”; B. Lindell, “The Case of Linearity,” SSI News 4, no. 1 (1996): 2–4; hereafter cited as Lindell, “Linearity”; Walinder, Radiation Protection. 53. See BEIR VII, 69. 54. Lindell, “Linearity,” 3; Beninson, “Risk,” 124; Kellerer, “Radiobiology,” 834. 55. J. P. Brody, “Age-specific Incidence Data Indicate Four Mutations Are Required for Human Testicular Cancers,” PLoS One 6, no. 10 (2011): e25978; hereafter cited as Brody-2011; William C. Hahn, Christopher M. Counter, Ante S. Lundberg, Roderick L. Beijersbergen, Mary W. Brooks, and Robert A. Weinberg, “Creation of Human Tumor Cells with Defined Genetic Elements,” Nature 400 (July 1999): 464–468; John Travis, “Add Three Genes, Get One Cancer Cell,” Science News 156 (1999): 139; see Spangler, “Single-hit.” 56. Kuikka, “Linear No-threshold,” 157; BEIR VII, 7–8; United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR), Report of the United Nations Scientific Committee on the Effects of Atomic Radiation (New York: United Nations, 1993); United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR), Adaptive and Stimulatory Responses to Radiation in Cells and Organisms—United Nations General Assembly, 42nd Session of UNSCEAR, A/AS.82/R.538 (Vienna: United Nations, 1993); UNSCEAR 1994; González, “Biological Effects,” 44; Lindell, “Linearity”; D. M. Parkin and S. C. Darby, “12 Cancers in 2010 Attributable to Ionising Radiation Exposure in the UK,” British Journal of Cancer 105, no. S2 (2011): 105, S57–S65. 57. See Lindell, “Linearity,” 3–4. 58. See Jeanne Peijnenburg and David Atkinson, “When Are Thought Experiments Poor Ones?,” Journal for General Philosophy of Science 34, no. 2 (2003): 305–322; R. A. Sorensen, Thought Experiments (New York: Oxford University Press, 1992), 256–259; hereafter cited as Sorensen, Thought Experiments. 59. Rachel Cooper, “Thought Experiments,” Metaphilosophy 36, no. 3 (2005): 328–347; hereafter cited as Cooper, “Thought Experiments”; Sorensen, Thought Experiments, 261–269. 60. See Cooper, “Thought Experiments”; K. V. Wilkes, Real People: Personal Identity Without Thought Experiments (New York: Oxford University Press, 1988). 61. Sorensen, Thought Experiments, 288–289; see Cooper, “Thought Experiments.” 62. S. Blackburn, “What If? The Uses and Abuses of Thought Experiments,” Times Literary Supplement 4707 (June 18, 1993): 10; hereafter cited as Blackburn, “What If?” 63. See Sorensen, Thought Experiments, 275–276; Brown, “Thought Experiments.” 64. See David Egan, “Pictures in Wittgenstein’s Later Philosophy,” Philosophical Investigations 34, no. 1 (2011): 55–76; Anthony Palmer, “Propositions, Properties and Relations: Wittgenstein’s ‘Notes on Logic’ and the Tractatus,” Philosophical Investigations 34, no. 1 (2011): 77–93; Blackburn, “What If?”, 11; G. Gale, “Some Pernicious Thought Experiments,” in Thought Experiments in Science and Philosophy, ed. Tamara Horowitz and Gerald Massey, 301 (Lanham, MD: Rowman and Littlefield, 1991). 65. Brody-2011; John Travis, “Add Three Genes, Get One Cancer Cell,” 139. Re: disagreement, see D. H. Roukos, “Trastuzumab and Beyond: Sequencing Cancer Genomes and Predicting Molecular Networks,” Pharmacogenomics Journal 11, no. 2 (April 2011): 81–92. 66. 
See, for example, Sherry, “Mathematical Reasoning”; Anapolitanos, “Thought Experiments”; Brown, “Thought Experiments”; Tamara Horowitz and Gerald Massey, Thought Experiments in Science and Philosophy (Lanham, MD: Rowman and Littlefield, 1991); Massey, “Reason.” 67. Sorensen, Thought Experiments, 225–228. 68. Sorensen, Thought Experiments, 225. 69. Uwe Schneider, “Mechanistic Model of Radiation-induced Cancer After Fractionated Radiotherapy Using the Linear-quadratic Formula,” Medical Physics 36, no. 4 (2009): 1138–1143; A. F. Malenchenko, S. N. Sushko, and I. V. Saltanova, “Interaction of Radiative and
Nonradiative Factors in the Process of Tumour Formation,” International Journal of Low Radiation 7, no. 3 (2010): 188; Crump et al., “Carcinogenic Processes”; see BEIR VII, 11.

Chapter 7

1. Ruth A. Kleinerman, William T. Kaune, Elizabeth E. Hatch, Sholom Wacholder, Martha S. Linet, Leslie L. Robison, Shelley Niwa, and Robert E. Tarone, “Are Children Living Near High-Voltage Power Lines at Increased Risk of Acute Lymphoblastic Leukemia?,” American Journal of Epidemiology 151 (2000): 512–515. A shorter, much earlier version of this article appeared in Kristin Shrader-Frechette, “Relative Risk and Methodological Rules for Causal Inferences,” Biological Theory 2, no. 4 (2007): 332–336. 2. Myron Maslanyj, Tracy Lightfoot, Joachim Schüz, Zenon Sienkiewicz, and Alastair McKinlay, “A Precautionary Public Health Protection Strategy for the Possible Risk of Childhood Leukaemia from Exposure to Power-Frequency Magnetic Fields,” BMC Public Health 10 (2010): 673–682. 3. I. R. Dohoo, K. Leslie, L. DesCôteaux, A. Fredeen, P. Dowling, A. Preston, and W. Shewfelt, “A Meta-Analysis Review of the Effects of Recombinant Bovine Somatotropin,” Canadian Journal of Veterinary Research 67, no. 4 (October 2003): 241–251; hereafter cited as Dohoo-1. 4. Janet Raloff, “Hormones: Here’s the Beef: Environmental Concerns Reemerge over Steroids Given to Livestock,” Science News 161, no. 1 (January 5, 2002): 10. 5. See, e.g., American Cancer Society (ACS), Recombinant Bovine Growth Hormone (Atlanta: ACS, 2011), http://www.cancer.org/cancer/cancercauses/othercarcinogens/athome/recombinant-bovine-growth-hormone, accessed April 19, 2012; Niels Møller and Jens Otto Lunde Jørgensen, “Effects of Growth Hormone on Glucose, Lipid, and Protein Metabolism in Human Subjects,” Endocrine Reviews 30, no. 2 (April 1, 2009): 152–177; M. Kucia, M. Masternak, R. Liu, D. Shin, J. Ratajczak, K. Mierzejewska, A. Spong, J. Kopchick, A. Bartke, and M. Ratajczak, “The Negative Effect of Prolonged Somatotrophic/Insulin Signaling on Adult Bone Marrow-Residing Population of Pluripotent Very Small Embryonic-like Stem Cells (VSELs),” AGE 35, no. 2 (2012): 315–330. 6. E.g., A. Moffett, N. Shackelford, and S. Sarkar, “Malaria in Africa: Vector Species’ Niche Models and Relative-Risk Maps,” PLoS One 2, no. 9 (2007): e824. 7. E.g., R. W. Snow, N. Peshu, D. Forster, G. Bomu, E. Mitsanzw, et al., “Environmental and Entomological Risk Factors for the Development of Clinical Malaria Among Children on the Kenyan Coast,” Transactions of the Royal Society of Tropical Medicine and Hygiene 92, no. 4 (1998): 381–385. 8. E.g., D. S. Maehr and J. P. Deason, “Wide-ranging Carnivores and Development Permits,” Clean Technologies and Environmental Policy 3 (2002): 398–406. 9. E.g., E. J. Comiskey, A. C. Eller, and D. W. Perkins, “Evaluating Impacts to Florida Panther Habitat: How Porous Is the Umbrella?,” Southeastern Naturalist 3, no. 1 (2004): 51–74; K. S. Shrader-Frechette, “Measurement Problems and Florida Panther Models,” Southeastern Naturalist 3, no. 1 (2004): 37–50. 10. E.g., P. Beier, M. R. Vaughan, M. J. Conroy, and H. Quigley, An Analysis of Scientific Literature Related to the Florida Panther: Submitted as Final Report for Project NG01-105 (Tallahassee: Florida Fish and Wildlife Conservation Commission, 2003). 11. Maehr and Deason, “Wide-ranging Carnivores and Development Permits.” 12. D. L. Smith, J. Dushoff, and F. E. McKenzie, “The Risk of a Mosquito-Borne Infection in a Heterogeneous Environment,” PLoS Biology 2, no.11 (2004): e368. 13. G. Doppelt, “The Naturalist Conception of Methodological Standards in Science,” Philosophy of Science 57, no. 1 (1990): 12; see L. 
Laudan, “Progress or Rationality,” American Philosophical Quarterly 24, no. 1 (1987): 19–26; K. Schaffner, Discovery and
Explanation in Biology and Medicine (Chicago: University of Chicago Press, 1993), 390– 391. See also W. Schmaus, “The Empirical Character of Methodological Rules,” Philosophy of Science 63 (1996): S98–106; N. Lechopier, “A Militant Rationality: Epistemic Values, Scientific Ethos, and Methodological Pluralism in Epidemiology,” Scientiae Studia 10 (2012): 141–150. 14. E. L. Wynder, I. T. Higgins, and R. E. Harris, “The Wish Bias,” Philosophy of Science 63 (1996): S98–106. 15. K. R. Foster, D. E. Bernstein, and P. W. Huber, Phantom Risk (Cambridge, MA: MIT Press, 1993), 4. 16. Wynder, Higgins, and Harris, “The Wish Bias.” 17. R. A. Hiatt, “Alcohol Consumption and Breast Cancer,” Medical Oncology Tumor Pharmacotherapy 7 (1990): 143–151. 18. See D. Weed, “Underdetermination and Incommensurability in Contemporary Epidemiology,” Kennedy Institute of Ethics Journal 7, no. 2 (1997): 107–124. 19. Hiatt, “Alcohol Consumption and Breast Cancer.” 20. Foster, Bernstein, and Huber, Phantom Risk, 5; N. Breslow, “Are Statistical Contributions to Medicine Undervalued?,” Biometrics 59, no. 1 (2003): 1–8; N. T. van Rasteyn et al., “Tipping the Balance of Benefits and Harms to Favor Screening Mammography Starting at Age 40 Years: A Comparative Modeling Study of Risk,” Annals of Internal Medicine 169, no. 9 (2012): 609–617. 21. G. Taubes, “Epidemiology Faces Its Limits,” Science 268 (1995): 164–169, vol. 269, no. 5221. 22. Taubes, “Epidemiology Faces Its Limits,” 168; E. I. Rosenberg, P. F. Bass, and R. A. Davidson, “Arriving at Correct Conclusions: The Importance of Association, Causality, and Clinical Significance,” Southern Medical Journal 105, no. 3 (March 2012): 161–162. 23. Rosenberg, Bass, and Davidson, “Arriving at Correct Conclusions,” 161–162. 24. Taubes, “Epidemiology Faces Its Limits,” 165. 25. C. Cranor, Toxic Torts (New York: Cambridge University Press, 2006), 233. 26. For cases when use of some type of RR might be ultima facie reasonable, see, e.g., K. Steenland, R. Seals, M. Klein, J. Jinot, and H. Kahn, “Risk Estimation with Epidemiologic Data When Response Attenuates at High-Exposure Levels,” Environmental Health Perspectives 119, no. 6 (2011): 831–837; P. Austin, “Absolute Risk Reductions, Relative Risks, Relative Risk Reductions, and Numbers Needed to Treat Can Be Obtained from a Logistic Regression Model,” Journal of Clinical Epidemiology 63, no. 1 (2010): 2–6. 27. E.g., Phantom Risk; Breslow, “Are Statistical Contributions to Medicine Undervalued?” 28. E.g., G. Heller, “A Measure of Explained Risk in the Proportional Hazards Model,” Biostatistics 13, no. 2 (2012): 315–325; K. Steenland, R. Seals, M. Klein, J. Jinot, and H. Kahn, “Risk Estimation with Epidemiologic Data When Response Attenuates at High-Exposure Levels,” Environmental Health Perspectives 119, no. 6 (2011): 831–837; J. Kim, Y. Lee, and I. Moon, “An Index-Based Risk Assessment Model for Hydrogen Infrastructure,” International Journal of Hydrogen Energy 36, no. 11 (2011): 6387–6398; M. Guest, M. Boggess, and J. Attia, “Relative Risk of Elevated Hearing Threshold Compared to iso1999 Normative Populations for Royal Australian Air Force Male Personnel,” Hearing Research 285, no. 1-2 (2012): 65–76. 29. National Research Council (NRC), Health Risks from Exposure to Low Levels of Ionizing Radiation, BEIR VII (Washington, DC: National Academy Press, 2005); R. Jones and R. Southwood, eds., Radiation and Health (Chichester: Wiley, 1987). 30. Schaffner, Discovery and Explanation in Biology and Medicine, 139–142. 31. See K. S. 
Shrader-Frechette and E. D. McCoy, Method in Ecology: Strategies for Conservation (New York: Cambridge University Press, 1993). 32. K. Clouser, “Approaching the Logic of Diagnosis,” in Logic of Discovery and Diagnosis in Medicine, ed. K. F. Schaffner (Berkeley: University of California Press, 1985): 44.
33. Schaffner, Discovery and Explanation in Biology and Medicine, 253. 34. See Clouser, “Approaching the Logic of Diagnosis”; L. T. Budni, S. Kloth, M. VelaxcoGarrido, and X. Baur, “Prostate Cancer and Toxicity from Critical Use Exemptions of Methyl Bromide: Environmental Protection Helps Protect Against Human Health Risk,” Environmental Health 11, no. 5 (2012): 1–12. 35. See O. Amsterdamska, “Demarcating Epidemiology,” Science, Technology, and Human Values 30, no. 1 (2005): 17–51. 36. L. Burney, “Smoking and Lung Cancer,” Journal of the American Medical Association 171, no. 13 (1955): 1829–1836. Amsterdamska, “Demarcating Epidemiology.” 37. E. Hammond, “Cause and Effect,” in The Biologic Effects of Tobacco, ed. E. Wynder, 193–194 (Boston: Little, Brown, 1955); E. L. Wynder, “An Appraisal of the SmokingLung-Cancer Issue,” New England Journal of Medicine 264 (1961): 1235–1240; R. B. Belzer, “The Report on Carcinogens: What Went Wrong and What Can Be Done to Fix It,” Issue Analysis no. 1 (2012): 1–30; M. D. Freeman and S. S. Kohles, “Plasma Levels of Polychlorinated Biphenyls, Non-Hodgkins Lymphoma and Causation,” Journal of Environmental and Public Health 2012 (2012): 258981. 38. C. Little, “Some Phases of the Problem of Smoking and Lung Cancer,” New England Journal of Medicine 264 (1961): 1241–1245; A. Gelfert, “Strategies of Model-Building in Condensed Matter Physics: Trade-offs as a Demarcation Criterion Between Physics and Biology,” Synthese 2013 (2012): 253–272. 39. Institute of Medicine (IOM), Exposure of the American People to Iodine-131 from Nevada Nuclear-Bomb Tests (Washington, DC: National Academy Press 1998). 40. A. Makhijani, H. Hu, and K. Yih, eds., Nuclear Wastelands (Cambridge, MA: MIT Press, 1995). 41. See F. Perera, “Molecular Epidemiology,” Journal of the National Cancer Institute 88, no. 8 (1996): 496–509; O. P. Candhi, L. L Morgan, A. A. de Salles, Y. Han, R. B. Herberman, and D. L. Davis, “Exposure Limits: The Underestimation of Absorbed Cell Phone Radiation, Especially in Children,” Electromagnetic Biology and Medicine 31, no. 1 (March 2012): 34–51. 42. NRC Health Risks from Exposure to Low Levels of Ionizing Radiation, BEIR VII; IOM, Exposure of the American People to Iodine-131 from Nevada Nuclear-Bomb Tests; Jones and Southwood, Radiation and Health. 43. See K. J. Rothman and S. Greenland, “Causation and Causal Inference in Epidemiology,” American Journal of Public Health 95 (2005): S144–150. See note 46. 44. M. Parascandola, “Epidemiology: Second-Rate Science?,” Public Health Reports 113, no. 4 (1998): 312–320. 45. K. S. Shrader-Frechette, Taking Action, Saving Lives: Our Duties to Protect Public and Environmental Health (New York: Oxford University Press, 2007); Sheldon Krimsky, Science in the Private Interest (Lanham, MD: Rowman and Littlefield, 2003). 46. FDA claims are in Judith C. Juskevich and C. Greg Guyer, “Bovine Growth Hormone: Human Food Safety Evaluation,” Science 249 (August 24, 1990): 878. The 2010 court decision is International Dairy Foods Association and Organic Trade Association v. Boggs, US Court of Appeals for the Sixth Circuit, nos. 09-3515-3526, Columbus Ohio, September 30, 2010, http://www.ca6.uscourts.gov/opinions.pdf/10a0322p-06.pdf, accessed April 19, 2012. 47. Shiv Chopra, Mark Feeley, Gerard Lambert,Then Mueller, and rBST Internal Review Team, rBST (NUTRILAC) “Gaps Analysis” Report (Ottawa: Health Protection Branch, Health Canada, April 21, 1998). 
See also Robert Cohen, Milk: The Deadly Poison (Englewood Cliffs, NJ: Argus Publishing, 1997), esp. 77–96, and Michael Hansen, FDA’s Safety Assessment of Recombinant Bovine Growth Hormone (Yonkers, NY: Consumers’ Union of Consumer Reports, December 15, 1998), http://www.consumersunion.org/ pub/core_food_safety/002269.html, accessed April 19, 2012. 48. I. R. Dohoo, L. DesCôteaux, K. Leslie, A. Fredeen, W. Shewfelt, A. Preston, and P. Dowling, “A Meta-Analysis Review of the Effects of Recombinant Bovine Somatotropin,”



Canadian Journal of Veterinary Research 67, no. 4 (October 2003): 255; see 260 re RR; hereafter cited as Dohoo-2. 49. K. Koizumi, AAAS Report XXIX: R&D Trends and Special Analyses (Washington, DC: AAAS, 2005); D. Barnes and L. Bero, “Why Review Articles on the Health Effects of Passive Smoking Reach Different Conclusions,” Journal of the American Medical Association 279, no. 19 (1998): 1566–1570. 50. See note 46. 51. Dohoo-1, 244. 52. Dohoo-2, 255–56. 53. Dohoo-1, 242. 54. Dohoo-2, 456. 55. Dohoo-1, 245, 248, 243, 248. 56. Eric Temple Bell, Men of Mathematics: The Story of Evariste Galois (London: Gollancz, 1937).

Chapter 8

1. D. K. Nomura, D. Leung, K. P. Chiang, G. B. Quistad, B. F. Cravatt, and J. E. Casida, “A Brain Detoxifying Enzyme for Organophosphorus Nerve Poisons,” Proceedings of the National Academy of Sciences 102, no. 12 (2005): 6195–6200; C. J. Winrow, M. L. Hemming, D. M. Allen, G. B. Quistad, J. E. Casida, and C. Barlow, “Loss of Neuropathy Target Esterase in Mice Links Organophosphate Exposure to Hyperactivity,” Nature Genetics 33, no. 4 (2003): 477–485; S. Jansen, “Chemical-Warfare Techniques for Insect Control: Insect ‘Pests’ in Germany Before and After World War I,” Endeavour 24, no. 1 (2000): 28–33; Frederick R. Sidell, “Nerve Agents,” in Medical Aspects of Chemical and Biological Warfare, ed. F. Sidell, E. Takafuji, and D. Franz, 129–179 (Washington, DC: Office of the Surgeon General, 1997). A shorter, much earlier version of this chapter appeared in Kristin Shrader-Frechette, “Statistical Significance in Biology: Neither Necessary nor Sufficient for Hypothesis-Acceptance,” Biological Theory 3, no. 1 (2008): 12–16. 2. W. J. Lee, A. Blair, J. A. Hoppin, J. H. Lubin, J. A. Rusiecki, D. P. Sandler, M. Dosemeci, and M. C. Alavanja, “Cancer Incidence among Pesticide Applicators Exposed to Chlorpyrifos in the Agricultural Health Study,” Journal of the National Cancer Institute 96, no. 23 (December 1, 2004): 1781–1789. 3. W. J. Rea, “Pesticides,” Journal of Nutritional and Environmental Medicine 6, no. 1 (1996): 55–124; D. L. Eaton, R. B. Daroff, et al., “Review of the Toxicology of Chlorpyrifos With an Emphasis on Human Exposure and Neurodevelopment,” Critical Reviews in Toxicology 38, no. s2 (2008): 1–125. 4. Carl Cranor, “Scientific Inferences in the Laboratory and the Law,” American Journal of Public Health 95, no. S1 (2005): S121–128. 5. E.g., C. Cheng, S. Pounds, J. Boyett, D. Pei, M-L. Kuo, M. Roussel, “Statistical Significance Threshold Criteria for Analysis of Microarray Gene Expression Data,” Statistical Applications in Genetics and Molecular Biology 3, no. 1 (2004), http://www.bepress.com/sagmb/vol3/iss1/art36/, accessed March 27, 2008. The author thanks the US National Science Foundation (USNSF) for the History and Philosophy of Science research grant, “Three Methodological Rules in Risk Assessment,” 2007–2009, through which work on this project was done. All opinions and errors are those of the author, not the USNSF. 6. E.g., G. Fellers, K. Pope, J. Stead, M. Koo, and H. Welsh, “Turning Population-Trend Monitoring into Active Conservation,” Herpetological Conservation and Biology 3, no. 1 (2008): 28–39. 7. E.g., B. Dean, D. Aquilar, L. Johnson, J. McGuigan, W. Orr, R. Fass, N. Yan, D. Morgenstern, and R. Dubois, “Night-time and Daytime Atypical Manifestations of Gastro-oesophageal Reflux Disease,” Alimentary Pharmacology and Therapeutics 27, no. 4 (2008): 327–337.



8. See D. H. Johnson, “What Hypothesis Tests Are Not,” Behavioral Ecology 16, no. 1 (2004): 325–326. 9. D. Heath, An Introduction to Experimental Design and Statistics (Boca Raton, FL: CRC Press, 1995), 251; D. Simberloff, “Hypotheses, Errors, and Statistical Assumptions,” Herpetologica 46, no. 3 (1990): 351–357. See also, for instance, E. Rosenberg, P. F. Bass, and R. Davidson, “Arriving at Correct Conclusions: The Importance of Association, Causality, and Clinical Significance,” Southern Medical Journal 105, no. 3 (2012): 161–166; I. Vlachos, A. Aertsen, and A. Kumar, “Beyond Statistical Significance: Implications of Network Structure on Neuronal Activity,” PLoS Computational Biology 8, no. 1 (2012): 1–9; A. Bothe and J. Richardson, “Statistical, Practical, Clinical and Personal Significance: Definitions and Applications in Speech-Language Pathology,” American Journal of Speech-Language Pathology 20, no. 3 (2011): 233–243; S. Pintea, “The Relevance of Results in Clinical Research: Statistical, Practical and Clinical Significance,” Journal of Cognitive and Behavioral Psychotherapies 10, no. 1 (2010): 101–114. 10. F. Fidler, M. Burgman, G. Cumming, R. Buttrose, and N. Thomason, “Impact of Criticism of Null-Hypothesis Significance Testing on Statistical Reporting Practices in Conservation Biology,” Conservation Biology 20, no. 5 (2006): 1539–1544. 11. Fidler et al., “Impact,” 1539–1544. 12. K. J. Rothman, Epidemiology (New York: Oxford University Press, 2002), 126. 13. D. Simberloff, “Competition Theory, Hypothesis-Testing, and Other Community Ecology Buzzwords,” The American Naturalist 122, no. 5 (1983): 626–635; D. Simberloff, “The Taxonomic Diversity of Island Biota,” Evolution 24, no. 1 (1970): 22–47; but see D. Simberloff, “Hypotheses, Errors, and Statistical Assumptions,” Herpetologica 46, no. 3 (1990): 351–357; E. F. Connor and D. Simberloff, “The Assembly of Species Communities,” Ecology 60, no. 6 (1979): 1132–1140; but see D. H. Robinson and H. Wainer, “On the Past and Future of Null Hypothesis Significance Testing,” The Journal of Wildlife Management 66, no. 2 (2002): 263–271. Regarding the difficulty of determining causes, see, for instance, N. Cartwright and S. Efstathiou, “Hunting Causes and Using Them: Is There No Bridge From Here to There?,” International Studies in the Philosophy of Science 25, no. 3 (2011): 223–241. K. Mainzer, “Causality in Natural, Technical, and Social Systems,” European Review 18, no. 4 (2010): 433–454. 14. R. C. Shelton, M. B. Keller, et al., “Effectiveness of St. John’s Wort in Major Depression,” Journal of the American Medical Association 285, no. 15 (2001): 1978–1986. 15. M. A. Eisenberger, B. A. Blumenstein, et al., “Bilateral Orchiectomy With or Without Flutamide for Metastatic Prostate Cancer,” New England Journal of Medicine 339, no. 15 (1998): 1036–1042. 16. Rothman, Epidemiology, 123–127. 17. D. A. Savitz, K. Tolo, and C. Poole, “Statistical Significance Testing in the American Journal of Epidemiology 1970–1990,” American Journal of Epidemiology 139, no. 10 (1994): 1047; R. A. Bettis, “The Search for Asterisks: Compromised Statistical Tests and Flawed Theories,” Strategic Management Journal 33, no. 1 (January 2012): 108–113. 18. See note 8. 19. C. Cranor, Toxic Torts (New York: Cambridge University Press, 2006), 227. 20. D. J. Pittenger, “Hypothesis Testing as a Moral Choice,” Ethics and Behavior 11, no. 2 (2001): 152; D. H. Johnson, “The Insignificance of Statistical Significance Testing,” The Journal of Wildlife Management 63, no.
3 (1999): 763–772. 21. K. A. Barrett, C. L. Funk, and F. L. Macrina, “Awareness of Publication Guidelines and the Responsible Conduct of Research,” Accountability in Research 12, no. 3 (2005): 193–206. 22. G. Greenstein, “Clinical Versus Statistical Significance as They Relate to the Efficacy of Periodontal Therapy,” The Journal of the American Dental Association 134, no. 5 (2003): 586; W. J. Killoy, “The Clinical Significance of Local Chemotherapies,” Journal of Clinical Periodontology 2, supp. (2002): 22–29; A. Kingman, “Statistical vs. Clinical Significance in



Product Testing,” Journal of Public Health Dentistry 52, no. 6 (1992): 353–360; H. J. Burstein, “Pre-Judging Data: Benchmarking Clinical Significance Before Study Results are Known,” Journal of the National Comprehensive Cancer Network 10, no. 3 (2012): 289–290. 23. See J. S. Gardenier and D. B. Resnik, “The Misuse of Statistics,” Accountability in Research 9, no. 2 (2002): 65–74; Pittenger, “Hypothesis Testing as a Moral Choice”; R. P. Cuzzort and J. S. Vrettos, The Elementary Forms of Statistical Reason (New York: St. Martin’s Press, 1996), 259. 24. See A. Buchanan, K. Weiss, and S. Fullerton, “Dissecting Complex Disease,” International Journal of Epidemiology 35, no. 3 (2006): 567; L. F. Baker and J. F. Mudge, “Making Statistical Insignificance More Significant,” Significance 9, no. 3 (June 2012): 29–30. 25. Greenstein, “Clinical Versus Statistical Significance as They Relate to the Efficacy of Periodontal Therapy,” 584. 26. Johnson, “The Insignificance of Statistical Significance Testing,” 765. 27. Rothman, Epidemiology, 116. 28. D. Walton, “The Appeal to Ignorance,” Argumentation 13, no. 4 (1999): 367–377. 29. K. J. Rothman, E. S. Johnson, and D. S. Sugano, “Is Flutamide Effective in Patients with Bilateral Orchiectomy?,” Lancet 353, no. 9159 (1999): 1184ff.; S. N. Goodman and R. Royall, “Evidence and Scientific Research,” American Journal of Public Health 78, no. 12 (1988): 1568–1574; C. Poole, “Beyond the Confidence Interval,” American Journal of Public Health 77, no. 2 (1987): 195–199; J. L. Fleiss, “Significance Tests Have a Role in Epidemiologic Research,” American Journal of Public Health 77, no. 2 (1987): 559–560; W. D. Thompson, “Statistical Criteria in the Interpretation of Epidemiologic Data,” American Journal of Public Health 77, no. 2 (1987): 191–194. 30. Gardenier and Resnik, “The Misuse of Statistics,” 71. L. Silva-Aycaguer, P. Suarez-Gil, and A. Fernandez-Somoano, “The Null Hypothesis Significance Test in Health Sciences Research (1995–2006): Statistical Analysis and Interpretation,” BMC Medical Research Methodology 10 (2010): 44–52. 31. G. Taubes, “Epidemiology Faces Its Limits,” Science 269, no. 5221 (1995): 168; N. Breslow, “Are Statistical Contributions to Medicine Undervalued?,” Biometrics 59, no. 1 (2003): 1–8. 32. P. Vineis and D. Kriebel, “Causal Models in Epidemiology,” Environmental Health 5, no. 1 (2006): 21ff. 33. See Simberloff, “Hypotheses, Errors, and Statistical Assumptions”; Johnson, “What Hypothesis Tests Are Not”; Johnson, “The Insignificance of Statistical Significance Testing”; and Kristin Shrader-Frechette, “Randomization and Rules for Causal Inferences in Biology: When the Biological Emperor (Significance Testing) Has No Clothes,” Biological Theory 6, no. 2 (June 2011): 154–161; C. G. Lambert and L. J. Black, “Learning from Our GWAS Mistakes: From Experimental Design to Scientific Method,” Biostatistics 13, no. 2 (April 2012): 195–203. 34. D. B. Resnik, “Statistics, Ethics, and Research: An Agenda for Education and Reform,” Accountability in Research 8 (2000): 175. 35. See note 23. 36. K. Schaffner, Discovery and Explanation in Biology and Medicine (Chicago: University of Chicago Press, 1993), 253. 37. K. Pearson, A First Study of the Statistics of Pulmonary Tuberculosis (London: Dulau, 1907). 38. K. Clouser, “Approaching the Logic of Diagnosis,” in Logic of Discovery and Diagnosis in Medicine, ed. K. F. Schaffner (Berkeley: University of California Press, 1985), 34. 39. Pittenger, “Hypothesis Testing as a Moral Choice,” 154. 40.
See, e.g., Clouser, “Approaching the Logic of Diagnosis”; K. Shrader-Frechette, Risk and Rationality (Berkeley: University of California Press, 1991), 131–145; K. Shrader-Frechette and E. D. McCoy, Method in Ecology (Cambridge: Cambridge University Press, 1993), 153–169; H. A. Simon, “Artificial Intelligence Approaches to Problem Solving and



Clinical Diagnosis,” in Logic of Discovery and Diagnosis in Medicine, ed. K. F. Schaffner, 80–88 (Berkeley: University of California Press, 1985). 41. J. A. Freiman, T. C. Chalmers, H. Smith, and R. R. Kuebler, “The Importance of Beta, the Type-II Error, and Sample Size in the Design and Interpretation of the Randomized Control Trial,” New England Journal of Medicine 299 (1978): 690ff. 42. K. J. Rothman, S. Greenland, Charles Poole, and Timothy Lash, “Causation and Causal Inference,” in Modern Epidemiology, ed. Kenneth J. Rothman, Sander Greenland, and Timothy L. Lash, 5–31 (Philadelphia: Lippincott, Williams, and Wilkins, 2008); hereafter cited as Rothman, CI; and Rothman, ME. 43. L. Buhl-Mortensen and S. Welin, “The Precautionary Principle and the Significance of Non-Significant Results,” Science and Engineering Ethics 4, no. 4 (1998): 393–504; M. J. J. Kuper-Hommel, M. L. G. Janssen-Heijnen, G. Vreugdenhil, A. D. G. Krol, H. C. Kluin-Nelemans, J. W. Coebergh, and J. H. J. M. van Krieken, “Clinical and Pathological Features of Testicular Diffuse Large B-Cell Lymphoma: A Heterogenous Disease,” Leukemia & Lymphoma 53, no. 2 (2012): 242–246. 44. See Schaffner, Discovery and Explanation in Biology and Medicine; K. Shrader-Frechette, “Relative Risk and Methodological Rules for Causal Inferences,” Biological Theory 2, no. 4 (2007): 332–336. 45. L. Burney, “Smoking and Lung Cancer,” Journal of the American Medical Association 171, no. 13 (1959): 1829–1836; O. Amsterdamska, “Demarcating Epidemiology,” Science, Technology, and Human Values 30, no. 1 (2005): 17–51. 46. K. J. Rothman, “Should the Mission of Epidemiology Include the Eradication of Poverty?,” Lancet 352 (1998): 810–813; J. Matthews, Quantification and the Quest for Medical Certainty (Princeton, NJ: Princeton University Press, 1995); see note 33. 47. E. Hammond, “Cause and Effect,” in The Biologic Effects of Tobacco, ed. E. Wynder, 193–194 (Boston: Little, Brown, 1955); E. L. Wynder, “An Appraisal of the Smoking-Lung-Cancer Issue,” New England Journal of Medicine 264 (1961): 1235–1240; see C. Little, “Some Phases of the Problem of Smoking and Lung Cancer,” New England Journal of Medicine 264 (1961): 1241–1245; J. R. Stutzman, C. A. Luongo, and S. A. McLuckey, “Covalent and Non-Covalent Binding in the Ion/Ion Charge Inversion of Peptide Cations with Benzene-Disulfonic Acid Anions,” Journal of Mass Spectrometry 47, no. 6 (June 2012): 669–675. 48. K. Koizumi, AAAS Report XXIX: R&D Trends and Special Analyses (Washington, DC: AAAS, 2005); D. Barnes and L. Bero, “Why Review Articles on the Health Effects of Passive Smoking Reach Different Conclusions,” Journal of the American Medical Association 279, no. 19 (1998): 1566–1570. 49. E.g., Sheldon Krimsky, Science in the Private Interest (Lanham, MD: Rowman and Littlefield, 2003); B. Keogh, “Biotech Crops’ Seal of Safety Does Not Convince Skeptics,” Journal of the National Cancer Institute 104, no. 7 (2012): 498–501. 50. Environmental Protection Agency (EPA) Science Advisory Board, Comments on the Use of Data from the Testing of Human Subjects, EPA-SAB-EC-00-017 (Washington, DC: US EPA, 2000). 51. Resnik, “Statistics, Ethics, and Research: An Agenda for Education and Reform,” 183. 52. Johnson, “The Insignificance of Statistical Significance Testing.” 53. Johnson, “What Hypothesis Tests Are Not”; Johnson, “The Insignificance of Statistical Significance Testing,” 766, 769; see Rothman, Epidemiology, 117; see Rothman, CI, and Kenneth J. Rothman, Sander Greenland, and Timothy L.
Lash, “Precision and Statistics in Epidemiologic Studies,” in Rothman, ME, 156–163. Charles Poole, “Low P-Values or Narrow Confidence Intervals: Which Are More Durable?,” Epidemiology 12, no. 3 (May 2001): 291–294; Kenneth Rothman, “A Show of Confidence,” New England Journal of Medicine 299 (1978): 1362–1363. 54. S. Pedersen, “Effect Sizes and ‘What If ’ Analyses as Supplements to Statistical Significance Tests,” Journal of Early Intervention 25, no. 4 (2003): 310–319.



55. W. G. Hopkins, “Probabilities of Clinical or Practical Significance,” Sportscience 6 (2002): 201, http://sportsci.org/jour/0201/wghprob.htm, accessed March 20, 2008; W. G. Hopkins, “Clinical Versus Statistical Significance,” Sportscience 5, no. 3 (2001): 103, http://sportsci.org/jour/0201/wghprob.htm, accessed March 20, 2008; P. W. Lavori, T. A. Louis, J. C. Bailar, and M. Polansky, “Designs for Experiments,” in Medical Uses of Statistics, 2nd ed., ed. J. C. Bailar and F. Mosteller (Waltham, MA: Massachusetts Medical Society, 1996), 63, 79; J. C. Bailar and F. Mosteller, “Guidelines for Statistical Reporting in Articles of Medical Journals,” in Medical Uses of Statistics, 328. 56. See Cuzzort and Vrettos, The Elementary Forms of Statistical Reason, 247; Johnson, “The Insignificance of Statistical Significance Testing,” 769–770; Shrader-Frechette, Risk and Rationality; Shrader-Frechette and McCoy, Method in Ecology, 153–169. For other alternatives see T. Nagai, T. Hoshino, and K. Uchikawa, “Statistical Significance Testing With Mahalanobis Distance for Thresholds Estimated From Constant Stimuli Method,” Seeing & Perceiving 24, no. 2 (2011): 91–124; T. Gerrodette, “Inference Without Significance: Measuring Support for Hypotheses Rather Than Rejecting Them,” Marine Ecology 32, no. 3 (2011): 404–418. 57. Evgeny Morozov, “The Tyranny of Algorithms,” The Wall Street Journal (September 20, 2012), http://online.wsj.com/article/SB10000872396390443686004577633491013088640.html, accessed March 6, 2013.

Chapter 9

1. The large objects that we see have no observable wave-like properties, but quantum-mechanical experiments reveal the wave-like nature of elementary particles such as electrons. Earlier versions of part of this chapter appeared in Kristin Shrader-Frechette, “Idealized Laws, Antirealism, and Applied Science: A Case in Hydrogeology,” Synthese 81, no. 3 (1989): 329–352. The author gratefully acknowledges funding from NSF award ISP 82-09517, which enabled her to work on this and other projects, and thanks Dan Spangler for discussion of image wells and Darcy’s Law; Bill Thomas for discussion of Coulomb’s Law; and Ted Lockhart for criticism of an earlier draft. All opinions are those of the writer and not of NSF. 2. C. Hempel and P. Oppenheim, “Studies in the Logic of Explanation,” Philosophy of Science 15 (1948): 135–175. See also Carl Hempel, “Studies in the Logic of Confirmation,” Mind 54, no. 214 (1945): 97–121; Carl Hempel, Aspects of Scientific Explanation (London: Free Press, 1965); Carl Hempel, Philosophy of Natural Science (Upper Saddle River, NJ: Prentice Hall, 1966); Carl Hempel, Scientific Explanation (New York: Basic Books, 1967). 3. W. Salmon, Four Decades of Scientific Explanation (Minneapolis: University of Minnesota Press, 1990). See also chapter 12 of this volume. Regarding leukemia relative risk, see M. S. Pearce, J. A. Salotti, M. P. Little, K. McHugh, C. Lee, K. P. Kim, N. L. Howe, C. M. Ronckers, P. Rajaraman, A. W. Craft, L. Parker, and A. Berrington de González, “Radiation Exposure from CT Scans in Childhood and Subsequent Risk of Leukaemia and Brain Tumours,” Lancet 380, no. 9840 (August 4, 2012): 499–505, doi: 10.1016/S0140-6736(12)60815-0. 4. Hempel, Philosophy of Natural Science; Rudolf Carnap, The Continuum of Inductive Methods (Chicago: University of Chicago Press, 1952); Rudolf Carnap, Studies in Inductive Logic and Probability, Vol. 1 (Berkeley: University of California Press, 1971); John Earman, “Testing Scientific Theories,” Philosophy of Science 55 (1983): 292–303. 5. Karl Popper, Conjectures and Refutations (London: Routledge, 1963); Karl R. Popper, Objective Knowledge, rev. ed. (Oxford: Oxford University Press, 1979); Karl Popper, The Logic of Scientific Discovery (London: Hutchinson, 1959). 6. US Department of Energy, Blue-Ribbon Commission (BRC) on America’s Nuclear Future, Report to the Secretary of Energy (Washington, DC: US BRC, 2012), http://www.



brc.gov/index.php?q=slide/brc-mines, accessed April 27, 2012; Senator Harry Reid of Nevada, Yucca Mountain (Washington, DC: US Senate, 2012), http://www.reid.senate.gov/issues/yucca.cfm?renderforprint=1&, accessed April 27, 2012. 7. B. Carpenter, “A Nuclear Graveyard,” US News and World Report 110, no. 10 (March 18, 1991): 74. 8. J. Neel, “Statement,” in U.S. Congress, Low-Level Radioactive Waste Disposal, Hearings before a Subcommittee of the Committee on Government Operations, House of Representatives, 94th Congress, second session, 23 February, 12 March, and 6 April 1976 (Washington, DC: Government Printing Office, 1976), p. 258. US Geological Survey, vertical file, Maxey Flats (Louisville, Kentucky: Water Resources Division, US Department of the Interior, n.d.); see also A. Weiss and P. Columbo, “Evaluation of Isotope Migration—Land Burial,” NUREG/CR-1289 BNL-NUREG-51143 (Washington, DC: US Nuclear Regulatory Commission, 1980), 5. 9. E. McMullin, “Galilean Idealization,” Studies in History and Philosophy of Science 16 (1985): 247–273. See also K. M. Kostelnik and J. H. Clarke, “Managing Residual Contaminants—Reuse and Isolation Case Studies,” Remediation Journal 18, no. 2 (2008): 75–97; M. Brownstein, “Environmental Remediation at the Maxey Flats Disposal Site,” Radwaste Solutions 12, no. 1 (2005): 34–39. 10. G. Bertozzi, H. Hill, J. Lewi, and R. Storck, quoted in Kristin Shrader-Frechette, Burying Uncertainty (Berkeley: University of California Press, 1993), 51, 269; hereafter cited as Shrader-Frechette, BU. 11. E.g., S. Sinnock and T. Lin, Preliminary Bounds on the Expected Postclosure Performance of the Yucca Mountain Repository Site, SAND84-1492 (Albuquerque, NM: Sandia National Labs, 1984). E. Jacobson, Investigation of Sensitivity and Uncertainty in Some Hydrologic Models of Yucca Mountain, SAND84-7212 (Albuquerque, NM: Sandia National Labs, 1984); N. Hayden, Benchmarking NNMSI Flow and Transport Codes, SAND84-0996 (Albuquerque, NM: Sandia National Labs, 1984). 12. M. Friedman, “The Methodology of Positive Economics,” in The Philosophy of Economics, ed. D. Hausman (New York: Cambridge University Press, 1984), 218. See also G. Contessa, “Scientific Models and Representation,” in The Continuum Companion to the Philosophy of Science, ed. S. French and J. Saatsi, 120–133 (London: Continuum International Publishing Group, 2011). 13. N. Cartwright, How the Laws of Physics Lie (Oxford: Clarendon Press, 1983), 9. 14. M. Friedman, “The Methodology of Positive Economics,” in The Philosophy of Economics, 218. 15. Friedman, “Methodology of Positive Economics,” 14–15. See also S. F. Martinez and X. Huang, “Epistemic Groundings of Abstraction and Their Cognitive Dimension,” Philosophy of Science 78, no. 3 (July 2011): 490–511; K. Davey, “Idealizations and Contextualism in Physics,” Philosophy of Science 78, no. 1 (January 2011): 16–38. See also E. Tobin, “Chemical Laws, Idealization and Approximation,” Science and Education 21 (2012): 1–12. 16. G. Galileo, Dialogues Concerning Two New Sciences (New York: Knovel, 2001), 52. 17. E. McMullin, “Galilean Idealization,” 248, 254–258, 265, 269. For mathematical idealizations, see also R. Batterman, “On the Explanatory Role of Mathematics in Empirical Science,” British Journal for the Philosophy of Science 61, no. 1 (2010): 1–25; O. Bueno and S. French, “Can Mathematics Explain Physical Phenomena?,” British Journal for the Philosophy of Science 63, no. 1 (2012): 85–113; A. Wayne, “Expanding the Scope of Explanatory Idealization,” Philosophy of Science 78, no.
5 (December 2011): 830–841. For construct idealizations, see also T. Grüne-Yanoff, “Isolation Is Not a Characteristic of Models,” International Studies in the Philosophy of Science 25, no. 2 (2011): 119; T. Knuuttila, “Modelling and Representing: An Artefactual Approach to Model-based Representation,” Studies in History and Philosophy of Science 42, no. 2 (June 2011): 262–271. For empirical-causal idealizations, see E. Sober, “A Priori Causal Models of Natural Selection,” Australasian Journal of Philosophy 89, no. 4 (2011): 571–589. For subjunctive-causal



idealizations, see also X. de Donato Rodriguez and A. Santos, “The Structure of Idealization in Biological Theories: The Case of the Wright-Fisher Model,” Journal for General Philosophy of Science 43, no. 1 (2012): 11–27; P. Ylikoski and J. Kuorikoski, “Dissecting Explanatory Power,” Philosophical Studies 148, no. 2 (2010): 201–219. 18. I. Scheffler, Science and Subjectivity (Indianapolis, IN: Hackett, 1982), 8. See also B. Leuridan, “Can Mechanisms Really Replace Laws of Nature?,” Philosophy of Science 77, no. 3 (July 2010): 317–340; I. Votsis, “Data Meet Theory: Up Close and Inferentially Personal,” Synthese 182, no. 1 (2011): 89–100. 19. Scheffler, Science and Subjectivity, 8–9. 20. Cartwright, How the Laws of Physics Lie, Essay 6; B. van Fraassen, The Scientific Image (Oxford: Clarendon Press, 1980), ch. 2; see also B. van Fraassen, “To Save the Phenomena” in Scientific Realism, ed. J. Leplin (Berkeley: University of California Press, 1984). See also L. Felline, “Scientific Explanation Between Principle and Constructive Theories,” Philosophy of Science 78, no. 5 (December 2011): 989–1000; I. K. Khalifa, “Is Understanding Explanatory or Objectual?,” Synthese (2011): 1–19. 21. Cartwright, How the Laws of Physics Lie, 102–112. See also N. Cartwright and S. Efstathiou, “Hunting Causes and Using Them: Is There No Bridge from Here to There?,” International Studies in Philosophy of Science 25, no. 3 (2011): 223–241; N. Cartwright, “Models: Parables v. Fables,” Boston Studies in the Philosophy and History of Science 262 (2010): 19–31. 22. Cartwright, How the Laws of Physics Lie, 776–779. 23. E. McMullin, “Galilean Idealization,” 261. 24. Cartwright, How the Laws of Physics Lie, 102–112. 25. See, for example, D. Pearce and V. Rantala, “Approximative Explanation Is Deductive-Nomological,” Philosophy of Science 51 (1985): 126–140. 26. See R. Linsley, M. Kohler, and J. Paulhus, Hydrology for Engineers (New York: McGraw-Hill, 1975), 201. See also W. Gray and C. Miller, “On the Algebraic and Differential Forms of Darcy’s Equation,” Journal of Porous Media 14, no. 1 (2011): 33–50; J. Niessner, S. Berg, and S. M. Hassanizadeh, “Comparison of Two-Phase Darcy’s Law with a Thermodynamically Consistent Approach,” Transport in Porous Media 88, no. 1 (2011): 133–148. 27. See D. Todd, Groundwater Hydrology (New York: John Wiley, 1980), 65. 28. Todd, Groundwater Hydrology, 67. 29. R. Ward, Principles of Hydrology (New York: McGraw-Hill, 1975), 205–207; Todd, Groundwater Hydrology, 68. See also P. Brunner, P. G. Cook, and C. T. Simmons, “Disconnected Surface Water and Groundwater: From Theory to Practice,” Ground Water 49, no. 4 (2011): 460–467; H. McMillan, T. Krueger, and J. Freer, “Benchmarking Observational Uncertainties for Hydrology: Rainfall, River Discharge and Water Quality,” Hydrological Processes 26, no. 26 (2011): 4078–4111. See also J. Solsvik and H. A. Jakobsen, “Modeling of Multicomponent Mass Diffusion in Porous Spherical Pellets: Application to Steam Methane Reforming and Methanol Synthesis,” Chemical Engineering Science 66, no. 9 (May 2011): 1986–2000; R. Masoodi and K. M. Pillai, “Darcy’s Law-based Model for Wicking in Paper-like Swelling Porous Media,” AIChE Journal 56, no. 9 (September 2010): 2257–2267. 30. See, for example, Ward, Principles of Hydrology, 207. 31. Ward, Principles of Hydrology, 145. 32. Linsley et al., Hydrology for Engineers, 202; D. McWhorter and D. Sunada, Ground-Water Hydrology and Hydraulics (Fort Collins, CO: Water Resources Publications, 1977), 79; J. Bredehoeft, I.
Counts, S. Robson, and J. Robertson, “Solute Transport in Groundwater Systems,” in Facets of Hydrology, ed. J. Rodda (New York: John Wiley, 1976), 233. 33. Ward, Principles of Hydrology, 206; US Geological Survey: 1962, Memo: July 1, 1962––, vertical file, “Maxey Flats: Correspondence and Phone Conversations,” Water Resources Division, US Department of the Interior, Louisville, Kentucky. (The Louisville office of



the USGS is responsible for monitoring the Maxey Flats radioactive facility.) See W. Walton, Groundwater Resource Evaluation (New York: McGraw-Hill, 1970), 18ff. See also C. Avci, A. Sahin, and E. Ciftci, “A New Method for Aquifer System Identification and Parameter Estimation,” Hydrological Processes, [in press], doi: 10.1002/hyp.9352. See also A. Yoshise, “Complementarity Problems Over Symmetric Cones: A Survey of Recent Developments in Several Aspects,” International Series in Operations Research & Management Science 166, no. 1 (2012): 339–375; B. M. Alzalg, “Stochastic Second-Order Cone Programming: Applications Models,” Applied Mathematical Modelling 36, no. 10 (October 2012): 5122–5134; Y. M. Li, X. T. Wang, and D. Y. Wei, “Improved Smoothing Newton Methods for Symmetric Cone Complementarity Problems,” Optimization Letters 6, no. 3 (2012): 471–487. 34. See Linsley et al., Hydrology for Engineers, 205–207, for potential/well-storage problems and Kristin Shrader-Frechette, BU, 50–51. 35. Linsley et al., Hydrology for Engineers, 206–210. 36. Linsley et al., Hydrology for Engineers, 206–210. 37. Walton, Groundwater Resource Evaluation, 157–167. See also J. Bredehoeft, “Modeling Groundwater Flow—The Beginnings,” Ground Water 50, no. 3 (2012): 325–329. 38. G. Joseph, “The Many Sciences and the One World,” Journal of Philosophy 77 (1980): 776. See also N. Jones, “General Relativity and the Standard Model: Why Evidence for One Does Not Disconfirm the Other,” Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics 40, no. 2 (May 2009): 124–132; and M. Wilson, “Determinism and the Mystery of the Missing Physics,” British Journal for the Philosophy of Science 60, no. 1 (2009): 173–193. 39. Joseph, “The Many Sciences and the One World,” 778. 40. Joseph, “The Many Sciences and the One World,” 111. 41. Robert Davis, Head of Documents, US Geological Survey (USGS), Interview (Louisville, KY: USGS, 1985), unpublished interview, August 2, 1985, with Kristin Shrader-Frechette. This office holds all the Maxey Flats USGS documents. 42. Pearce and Rantala, “Approximative Explanation Is Deductive-Nomological,” 129; I. Niiniluoto, “Truthlikeness for Quantitative Statements,” in Philosophy of Science Association, vol. 1, ed. P. Asquith and T. Nickles (East Lansing, MI: Philosophy of Science Association, 1982). 43. Pearce and Rantala, “Approximative Explanation Is Deductive-Nomological,” 214. 44. See Shrader-Frechette, BU, 42–50. I. Walker, Geological and Hydrologic Evaluation of a Proposed Site for Burial of Solid Radioactive Wastes Northwest of Morehead, Fleming County, Kentucky (Kearney, NJ: US Geological Survey (USGS), September 12, 1962), unpublished report, on file in the Louisville, Kentucky, office of the USGS.

Chapter 10

1. McLean v. Arkansas Board of Education, 529 F. Supp. 1255 (E.D.Ark. 1982). 2. D. Shapere, Philosophical Problems of Natural Science (New York: Macmillan, 1965). 3. R. Carnap, “The Methodological Character of Theoretical Concepts,” in Minnesota Studies in the Philosophy of Science I, ed. H. Feigl and M. Scriven, 38–76 (Minneapolis: University of Minnesota Press, 1956). 4. Carl Hempel and Paul Oppenheim, “Studies in the Logic of Explanation,” Philosophy of Science 15 (1948): 135–175. See also Carl Hempel, “Studies in the Logic of Confirmation,” Mind 54, no. 214 (1945): 97–121; Carl Hempel, Aspects of Scientific Explanation (New York: Free Press, 1965); Carl Hempel, Philosophy of Natural Science (Upper Saddle River, NJ: Prentice Hall, 1966); Carl Hempel, Scientific Explanation (New York: Basic Books, 1967). 5. Hempel, Aspects of Scientific Explanation; C. Hempel, “Valuation and Objectivity in Science,” in Physics, Philosophy, and Psychoanalysis, ed. R. S. Cohen and L. Laudan,



73–100 (Boston: Reidel, 1983). E. Nagel, The Structure of Science (New York: Harcourt, Brace, and World, 1961). 6. P. Feyerabend, “Explanation, Reduction, and Empiricism,” in Minnesota Studies in the Philosophy of Science III, ed. H. Feigl and G. Maxwell, 28–97 (Minneapolis: University of Minnesota, 1962); P. Feyerabend, “Problems of Empiricism,” in Beyond the Edge of Certainty, ed. R. Colodny, 145–260 (Englewood Cliffs, NJ: Prentice-Hall, 1965); P. Feyerabend, “Against Method: Outline of an Anarchistic Theory of Knowledge,” in Analyses of Theories and Methods of Physics and Psychology, Minnesota Studies in the Philosophy of Science IV, ed. Michael Radner and Stephen Winokur, 17–130 (Minneapolis: University of Minnesota Press, 1970). 7. N. R. Hanson, Patterns of Discovery (Cambridge: Cambridge University Press, 1958). 8. T. Kuhn, The Structure of Scientific Revolutions (Chicago: University of Chicago Press, 1962); T. Kuhn, The Road Since Structure (Chicago: University of Chicago Press, 2000). See J. Pinto de Oliveira, “Kuhn and the Genesis of the New Historiography of Science,” Studies in History & Philosophy of Science Part A, 43, no. 1 (2012): 115–121. 9. See note 5. A much earlier version of parts of this chapter appeared in K. Shrader-Frechette, “Measurement Problems and Florida Panther Models,” Southeastern Naturalist 3, no. 1 (2004): 37–50. 10. S. Toulmin, Foresight and Understanding (Bloomington: Indiana University Press, 1961). 11. See K. Schaffner, “Outlines of a Logic of Comparative Theory Evaluation with Special Attention to Pre- and Post-Relativistic Electrodynamics,” in Analyses of Theories and Methods of Physics and Psychology, Minnesota Studies in the Philosophy of Science IV, ed. M. Radner and S. Winokur, 311–354 (Minneapolis: University of Minnesota, 1970). 12. Feyerabend, Problems of Empiricism, 144. 13. Feyerabend, Problems of Empiricism, 145. 14. Kuhn, The Structure of Scientific Revolutions, 147–150, 154–155; T. Kuhn, “Reflections on My Critics,” in Criticism and the Growth of Knowledge, ed. I. Lakatos and A. Musgrave (Cambridge: Cambridge University Press, 1970); Kuhn, The Road Since Structure. 15. L. Laudan, Progress and Its Problems (Berkeley: University of California Press, 1977); L. Laudan, “How about Bust? Factoring Explanatory Power Back into Theory Evaluation,” Philosophy of Science 64, no. 2 (1997): 306–316; D. Mayo and J. Miller, “The Error Statistical Philosopher as Normative Naturalist,” Synthese 163, no. 3 (2008): 305–314; D. McArthur, “Laudan, Friedman and the Role of the a priori in Science,” Journal of Philosophical Research 32 (2007): 169–190; K. Freedman, “Normative Naturalism and Epistemic Relativism,” International Studies in the Philosophy of Science 20, no. 3 (2006): 309–322; S. Bangu, “Underdetermination and the Argument from Indirect Confirmation,” Ratio 19, no. 3 (2006): 269–277; A. Bird, “What Is Scientific Progress?,” Nous 41, no. 1 (2007): 92–117. 16. D. Hull, Science as a Process (Chicago: University of Chicago Press, 1988). 17. P. Galison, How Experiments End (Chicago: University of Chicago Press, 1986); A. Franklin, The Neglect of Experiment (Cambridge: Cambridge University Press, 1986); Doug Allchin, “The Super Bowl and the Ox-Phos Controversy: ‘Winner-Take-All’ Competition in Philosophy of Science,” Proceedings of the Biennial Meeting of the Philosophy of Science Association 1 (1994): 22–33. 18. See note 8. M.
Shah, “Is It Justifiable to Abandon All Search for a Logic of Discovery?,” International Studies in the Philosophy of Science 21, no. 3 (2007): 253–269. 19. Schaffner, “Outlines of a Logic of Comparative Theory Evaluation,” 321–25; K. Schaffner, Discovery and Explanation in Biology and Medicine (Chicago: University of Chicago Press, 1993). 20. Schaffner, “Outlines of a Logic of Comparative Theory Evaluation,” 322, see 323–324; Schaffner, Discovery and Explanation in Biology and Medicine, 204. 21. Schaffner, “Outlines of a Logic of Comparative Theory Evaluation,” 349; Schaffner, Discovery and Explanation in Biology and Medicine, 204ff.



22. Schaffner, “Outlines of a Logic of Comparative Theory Evaluation,” 351; Schaffner, Discovery and Explanation in Biology and Medicine, 204. 23. Feyerabend, “Explanation, Reduction, and Empiricism,” 59; Feyerabend, “Against Method,” 82–83; Feyerabend, Problems of Empiricism. 24. See Kuhn, The Road Since Structure, 34. 25. Feyerabend, “Against Method,” 80. 26. Feyerabend, “Against Method,” 17, 21, 23. 27. See note 5. 28. Feyerabend, “Against Method,” 70. 29. Kuhn, The Road Since Structure; Kuhn, “Reflections on My Critics,” 275. 30. Kuhn, “Reflections on My Critics,” 276. 31. Kuhn, The Road Since Structure, 210. See N. Goldberg, “Interpreting Thomas Kuhn as a Response-Dependence Theorist,” International Journal of Philosophical Studies 19, no. 5 (2011): 729–752; T. Uebel, “Carnap and Kuhn: On the Relation Between the Logic of Science and the History of Science,” Journal for General Philosophy of Science 42, no. 1 (2011): 129–140. 32. Does Kuhn’s response help show how incommensurable theories can be compared? It may not because Kuhn admits that (i) not all theories possess all desiderata/scientific goals; (ii) not all sciences are predictive; and (iii) desiderata/goals and their ranking are not specifiable ahead of time (Kuhn, The Road Since Structure, 214). If not, scientific-theory choices may be no better than choices of goals. Another worry is that, as Hempel noted, if one accepts specific scientific goals, then it becomes trivially true that it is “rational, in choosing between theories, to opt for the one which satisfies the desiderata better.” (Hempel, “Valuation and Objectivity in Science,” 91). If Hempel is right, comparability questions focus on how desiderata should be chosen/ranked. If one uses desiderata to beg choosing/ranking questions, as Hempel thinks Kuhn does, Hempel believes the resulting science is “rational” in only a trivial way. 33. See note 8. 34. Laudan, “How about Bust?,” 314; Laudan, Progress and Its Problems. 35. Laudan, Progress and Its Problems, 6. 36. Laudan, Progress and Its Problems, 18–30. 37. Laudan, Progress and Its Problems, 22–25. 38. Laudan, Progress and Its Problems, 30. 39. Laudan, “How about Bust?,” 314. 40. Laudan, “How about Bust?,” 314. 41. Laudan, “How about Bust?,” 314. 42. Laudan, “How about Bust?,” 310. 43. Laudan, “How about Bust?,” 315. 44. Laudan, “How about Bust?,” 315. 45. J. Hostetler, D. Onorato, B. Bolker, J. W. Johnson, S. O’Brien, D. Jansen, and M. K. Oli, “Does Genetic Introgression Improve Female Reproductive Performance? A Test on the Endangered Florida Panther,” Oecologia 168, no. 1 (2012): 289–300; J. Hostetler, D. Onorato, J. Nichols, J. W. Johnson, M. Roelke, S. O’Brien, D. Jansen, and M. K. Oli, “Genetic Introgression and the Survival of Florida Panther Kittens,” Biological Conservation 143, no. 11 (2010): 2789–2796; J. Benson, J. Hostetler, D. Onorato, W. Johnson, M. Roelke, S. O’Brien, D. Jansen and M. Oli, “Intentional Genetic Introgression Influences Survival of Adults and Subadults in a Small, Inbred Felid Population,” Journal of Animal Ecology 80, no. 5 (2011): 958–967; U. S. Seal and R. C. Lacy, “Florida Panther Viability Analysis and Species Survival Plan,” Report to the US FWS (Apple Valley, MN: Conservation Breeding Specialist Group, Species Survival Commission, IUCN, 1989); D. Maehr and J. A. Cox, “Landscape Features and Panthers in South Florida,” Conservation Biology 9, no. 5 (1995): 1008–1019; J. Kostyack, Letter to Colonel James G.
May, US Army Corps of Engineers, Jacksonville, Florida and Jay Slack, US Fish and Wildlife Service (Vero Beach, FL: National Wildlife Federation, June 7, 2002): 1–12; D.



Land, M. Cunningham, R. McBride, D. Shindle, and M. Lotz, Florida Panther Genetic Restoration, 2001–2002 Annual Report (Tallahassee, FL: FFWCC, 2002); D. Maehr, P. Crowley, J. Cox, M. Lacki, J. Larkin, T. Hoctor, L. Harris, and P. Hall, “Of Cats and Haruspices: Genetic Intervention in the Florida Panther. Response to Pimm et al. (2006),” Animal Conservation 9, no. 2 (2006): 127–132. 46. D. Maehr, The Florida Panther (Covelo, CA: Island Press, 1997); hereafter cited as Maehr, The Florida Panther; T. McBride and R. Salinas, “Panthers and Forests in South Florida: An Ecological Perspective,” Conservation Ecology 6, no. 1 (2002): 18; Roy McBride, Current Panther Distribution, Population Trends, and Habitat Use Report of Field Work: Fall 2000–Winter 2001 (Vero Beach, FL: Florida Panther Subteam of MERIT, US Fish and Wildlife Service, South Florida Ecosystem Office, 2001); E. J. Comiskey, personal communication with member of Panther MERIT Subteam (September 1, 2, 8, 2003); D. Maehr and R. Lacy, “Avoiding the Lurking Pitfalls in Florida Panther Recovery,” Wildlife Society Bulletin 30, no. 3 (2002): 971–978. 47. D. Onorato, M. Criffield, M. Lotz, M. Cunningham, R. McBride, E. Leone, O. Bass Jr., and E. Hellgren, “Habitat Selection by Critically Endangered Florida Panthers Across the Diel Period: Implications for Land Management and Conservation,” Animal Conservation 14, no. 2 (2010): 196–205; hereafter cited as Onorato-2011; Maehr and Cox, “Landscape Features and Panthers in South Florida.” 48. D. Maehr, “Declaration of Opinion Regarding Florida Panther Litigation Re Landon Companies/Agripartners,” National Wildlife Federation et al. v. Caldera et al., Case No. 1 (2001): 00CV01031 (D.D.C. Judge Robertson); D. Maehr, J. Larkin, and J. Cox, “Shopping Centers as Panther Habitat: Inferring Animal Locations From Models,” Ecology and Society 9, no. 2 (2004): 9. 49. Maehr and Lacy, “Avoiding the Lurking Pitfalls in Florida Panther Recovery.” For a contrary view, see Onorato-2011. 50. I. R. Noble and R. Dirzo, “Forests as Human-Dominated Ecosystems,” Science 277, no. 5325 (1997): 522–525. 51. J. E. Pinder and T. E. Rea, “Deforestation, Reforestation, and Forest Fragmentation,” American Midland Naturalist 142, no. 2 (1999): 213–227. 52. G. P. Keister and W. A. Van Dyke, “A Predictive Model for Cougars in Oregon,” Northwest Science 76, no. 1 (2002): 15–25. 53. Pinder and Rea, “Deforestation, Reforestation, and Forest Fragmentation.” 54. E. J. Comiskey, O. L. Bass, L. J. Gross, R. T. McBride, and R. Salinas, “Panthers and Forests in South Florida,” Conservation Ecology 6, no. 1 (2002): 18–30; hereafter cited as Comiskey et al. 2002. 55. F. Lindzey, “Mountain Lion,” in Wild Furbearer Management and Conservation in North America, ed. M. Novak, J. Baker, M. Obbard, and B. Malloch, 656–668 (Toronto: Canadian Ministry of Natural Resources, 1987). 56. See notes 45, 48; Maehr, “Declaration of Opinion Regarding Florida Panther Litigation Re Landon Companies/Agripartners.” Maehr 2001. 57. Maehr, The Florida Panther, 36–39. J. Cox, D. Maehr, and J. Larkin, “Florida Panther Habitat Use: New Approach to an Old Problem,” Journal of Wildlife Management 70, no. 6 (2006): 1778–1785. Regarding problems with daytime-only telemetry, see R. McBride and C. McBride, “Photographic Evidence of Florida Panthers Claw-marking Logs,” Southeastern Naturalist 10, no. 2 (2011): 384–386. 58. E. J. Comiskey, personal communication with member of Panther MERIT Subteam, September 1, 2, 8, 2003. 59.
Comiskey, personal communication, September 1, 2, 8, 2003. 60. Comiskey, personal communication, September 1, 2, 8, 2003. 61. Comiskey, personal communication, September 1, 2, 8, 2003. 62. Comiskey, personal communication, September 1, 2, 8, 2003; see Agripartners, “Motion of Intervenor Agripartners,” National Wildlife Federation (NWF), et al., v. Louis Caldera, Civil Action No. 1 (2001): 00 CV 01031 (JR).



63. Laudan, “How about Bust?,” 315. 64. D. Mayo, “Duhem’s Problem, the Bayesian Way, and Error Statistics, or ‘What’s Belief Got to Do with It,’ ” Philosophy of Science 64, no. 2 (1997): 222–244; D. Mayo, Error and the Growth of Experimental Knowledge (Chicago: University of Chicago Press, 1996). 65. Maehr, The Florida Panther; Maehr and Lacy, “Avoiding the Lurking Pitfalls in Florida Panther Recovery.” 66. Comiskey et al. 2002. 67. C. S. Dees, J. D. Clark, and F. T. Manen, “Florida Panther Habitat Use in Response to Prescribed Fire,” Journal of Wildlife Management 65, no. 1 (2001): 141–147; Comiskey et al. 2002. 68. Maehr, “Declaration of Opinion Regarding Florida Panther Litigation Re Landon Companies/Agripartners”; Maehr, The Florida Panther; see note 48. 69. R. P. Meegan and D. S. Maehr, “Landscape Conservation and Regional Planning for the Florida Panther,” Southeastern Naturalist 1, no. 3 (2002): 217–223; see Comiskey et al. 2002; Kostyack, Letter to Colonel James G. May; Land et al., Florida Panther Genetic Restoration, 2001–2002 Annual Report; McBride, “Panthers and Forests in South Florida”; McBride, Current Panther Distribution, Population Trends, and Habitat Use Report of Field Work; James J. Slack, Letter to Colonel James G. May, US Army Corps of Engineers, Jacksonville, Florida, Biological Opinion for the Proposed Fort Myers Mine #2 in Lee County, Florida (Florida Wildlife Federation, January 30, 2002); James J. Slack, Field Supervisor, South Florida Ecological Services Office, US Fish and Wildlife Service, Letter to Kris Thoemke, Everglades Project Manager, National Wildlife Federation, and Nancy Anne Payton, SW FL Field Representative (Florida Wildlife Federation, June 12, 2001). 70. Kristin Shrader-Frechette and Earl McCoy, Method in Ecology (Cambridge: Cambridge University Press, 1993), 213–214. 71. Laudan, “How about Bust?,” 315. 72. L. Laudan, The Book of Risks (New York: Wiley, 1994), 9, 23, 24, 14. 73. Laudan, Book of Risks, 3–4. 74. Laudan, Book of Risks, 6–7. 75. J. Paul Leigh, Causes of Death in the Workplace (London: Quorum Books, 1995), 3–7, 215. 76. J. C. Lashof et al., Health and Life Sciences Division of the US Office of Technology Assessment, Assessment of Technologies for Determining Cancer Risks from the Environment (Washington, DC: Office of Technology Assessment, 1981), 3, 6ff.; Paul Lichtenstein, Niels Holm, Pia Verkasalo, et al., “Environmental and Heritable Factors in the Causation of Cancer,” New England Journal of Medicine 343, no. 2 (2002): 78–85. Kristin Shrader-Frechette, Taking Action, Saving Lives (New York: Oxford University Press, 2007), ch. 2. 77. Feyerabend, Problems of Empiricism, 135–145. 78. Laudan, “How about Bust?,” 315. 79. Laudan, “How about Bust?,” 314. 80. Maehr, The Florida Panther. 81. See P. Beier, M. Vaughan, M. Conroy, and H. Quigley, “Evaluating Scientific Inferences About the Florida Panther,” Journal of Wildlife Management 70, no. 1 (2006): 236–245. 82. US Congress, Hearing before the Permanent Subcommittee on Investigations of the Committee on Governmental Affairs, US Senate, National Cancer Institute’s Management of Radiation Studies (Washington, DC: Government Printing Office, 1998); hereafter cited as Congress (1998). Kristin Shrader-Frechette, “Comparativist Rationality and Epidemiological Epistemology,” Topoi 2 (September–October 2004): 153–163. 83.
See note 82 and Institute of Medicine and National Academy of Sciences (IOM), Exposure of the American People to Iodine-131 from Nevada Nuclear Bomb Tests (Washington, DC: National Academy Press, 1998); see National Cancer Institute (NCI), Estimated Exposures and Thyroid Doses Received by the American People from Iodine-131 in Fallout Following Nevada Atmospheric Nuclear Bomb Tests, NIH Publication 97-4264 (Washington, DC: National Institutes of Health, 1997); see also Congress (1998).



84. Allchin, “The Super Bowl and the Ox-Phos Controversy,” 24–25. 85. Adolf G. Gundersen, “Research Traditions and the Evolution of Cold War Nuclear Strategy: Progress Doesn’t Make Perfect,” Philosophy of the Social Sciences 24, no. 3 (1994): 307–308. 86. Maehr and Lacy, “Avoiding the Lurking Pitfalls in Florida Panther Recovery.” 87. Slack, Letter to Colonel James G. May; Land et al., Florida Panther Genetic Restoration, 2001–2002 Annual Report. 88. Feyerabend, Problems of Empiricism, 134. 89. Allchin, “The Super Bowl and the Ox-Phos Controversy,” 28. 90. Sharon Beder, Global Spin (Glasgow: Bell and Bain, 2002). 91. See Norman Care, “Participation and Policy,” Ethics 88, no. 1 (1978): 316–337. 92. Laudan, “How about Bust?,” 315. 93. Laudan, “How about Bust?,” 314. 94. See Feyerabend, Problems of Empiricism. 95. Allchin, “The Super Bowl and the Ox-Phos Controversy.” 96. Laudan, “How about Bust?,” 314. 97. Maehr, The Florida Panther. 98. Agripartners, “Motion of Intervenor Agripartners,” 3. 99. Agripartners, “Motion of Intervenor Agripartners,” 1; Maehr, “Declaration of Opinion Regarding Florida Panther Litigation Re Landon Companies/Agripartners”; see Kostyack, Letter to Colonel James G. May; National Wildlife Federation (NWF) et al., Plaintiffs v. Louis Caldera et al., Defendants (2001), Civil Action No. 1 (2001): 00 CV 01031 (JR), later changed to NWF et al., Plaintiffs, v. Thomas White et al., Defendants, Case: 00CV01031 (JR); Slack, Letter to Colonel James G. May; Kris Thoemke, Everglades Project Manager, National Wildlife Federation, and Nancy Anne Payton, Southwest Florida Field Representative, Florida Wildlife Federation (2001), Letter to James J. Slack, Field Supervisor, South Florida Ecological Services Office (US Fish and Wildlife Service, May 7, 2001). 100. Slack, Letter to Colonel James G. May, 8. 101. Slack, Letter to Colonel James G. May, 10. 102. Allchin, “The Super Bowl and the Ox-Phos Controversy.” 103. See Allchin, “The Super Bowl and the Ox-Phos Controversy,” 27. 104. Laudan, “How about Bust?,” 314. 105. Laudan, Book of Risks, 9–10. 106. Mayo, “Duhem’s Problem, the Bayesian Way, and Error Statistics.” 107. Feyerabend, “Against Method,” 81. 108. Laudan, Progress and Its Problems. 109. Laudan, “How about Bust?,” 306. 110. Laudan, “How about Bust?,” 308. 111. Laudan, “How about Bust?,” 314. 112. Laudan, “How about Bust?,” 315.

Chapter 11

1. William Whyte, Street Corner Society (Chicago: University of Chicago Press, 1993). An earlier version of some of the insights and arguments in this chapter appears also in Kristin Shrader-Frechette and Earl McCoy, “Applied Ecology and the Logic of Case Studies,” Philosophy of Science 61, no. 1 (June 1994): 228–249. 2. Graham Allison and Philip Zelikow, Essence of Decision (London: Longman, 1999). 3. Robert Yin, Case Study Research (Los Angeles: Sage, 2009): 4–14. 4. T. W. Schoener, “Mathematical Ecology and Its Place among the Sciences,” Science 178 (1972): 389; see Neftali Sillero, “What Does Ecological Modelling Model? A Proposed



Classification of Ecological Niche Models Based on Their Underlying Methods,” Ecological Modelling 222, no. 8 (2011): 1343–1346; hereafter cited as Sillero, “Ecological Modelling”; Simone Mariani, “Through the Explanatory Process in Natural History and Ecology,” History and Philosophy of the Life Sciences 30, no. 2 (2008): 159–178; hereafter cited as Mariani, “Explanatory Process.” 5. R. H. Peters, A Critique of Ecology (Cambridge, England: Cambridge University Press, 1991); see Sillero, “Ecological Modelling”; Mariani, “Explanatory Process”; Joan Roughgarden, “Is There a General Theory of Community Ecology?,” Biology and Philosophy 24, no. 4 (2009): 521–529; hereafter cited as Roughgarden, “General Theory.” 6. See K. S. Shrader-Frechette and E. D. McCoy, Method in Ecology (Cambridge, UK: Cambridge University Press, 1993): ch. 2; hereafter cited as Shrader-Frechette and McCoy, Ecology, which has earlier versions of some of the arguments in this chapter. See also Sillero, “Ecological Modelling”; see Roughgarden, “General Theory”; Mariani, “Explanatory Process.” 7. J. Cracraft, “Species Concepts and Speciation Analysis,” in Current Ornithology, vol. 1, ed. R. F. Johnson, 169–170 (New York: Plenum Press, 1983); see also M. T. Ghiselin, The Triumph of the Darwinian Method (Berkeley: University of California Press, 1969); M. T. Ghiselin, “Species Concepts, Individuality, and Objectivity,” Biology and Philosophy 2 (1987): 127–143; S. J. Gould, The Mismeasure of Man (New York: Norton, 1981); D. Hull, The Philosophy of Biological Science (Englewood Cliffs, NJ: Prentice Hall, 1974); hereafter cited as Hull, Philosophy; D. Hull, “Are Species Really Individuals?,” Systematic Zoology 25 (1976): 174–191; D. Hull, “A Matter of Individuality,” Philosophy of Science 45 (1978): 335–360; D. Hull, Science as a Process: An Evolutionary Account of the Social and Conceptual Development of Science (Chicago: University of Chicago Press, 1988): 102ff., 131–157; hereafter cited as Hull, Science Process; P. Kitcher, Species (Cambridge, MA: MIT Press, 1985); E. Mayr, Systematics and the Origin of Species: From the Viewpoint of a Zoologist (New York: Columbia University Press, 1942); E. Mayr, Animal Species and Evolution (Cambridge, MA: Harvard University Press, 1963); E. Mayr, The Growth of Biological Thought: Diversity, Evolution, and Inheritance (Cambridge, MA: Harvard University Press, 1982): 273–275; hereafter cited as Mayr, Biological Thought; E. Mayr, “The Ontological Status of Species: Scientific Progress and Philosophical Terminology,” Biology and Philosophy 2 (1987): 145–166; A. Rosenberg, The Structure of Biological Science (Cambridge, UK: Cambridge University Press, 1985), 182–187; hereafter cited as Rosenberg, Biological Science; G. G. Simpson, The Principles of Animal Taxonomy (New York: Columbia University Press, 1961); hereafter cited as Simpson, Taxonomy; E. Sober, “Revisability, A Priori Truth, and Evaluation,” Australasian Journal of Philosophy 59 (1981): 68–85; P. Sokal and P. Sneath, Principles of Numerical Taxonomy (San Francisco: Freeman, 1963); W. Van Der Steen and H. Kamminga, “Laws and Natural History in Biology,” British Journal for the Philosophy of Science 42 (1991): 445–467; hereafter cited as Van Der Steen and Kamminga, “Natural History”; L. Van Valen, “Ecological Species, Multispecies, and Oaks,” Taxon 25 (1976): 233–239; Marc Ereshefsky, “Mystery of Mysteries: Darwin and the Species Problem,” Cladistics 27, no. 1 (2011): 67–79; Crawford L.
Elder, “Biological Species Are Natural Kinds,” Southern Journal of Philosophy 46, no. 3 (2008): 339–362; hereafter cited as Elder, “Species.” 8. See, e.g., Mayr, Biological Thought; Rosenberg, Biological Science; M. Ruse, “Narrative Explanation and the Theory of Evolution,” Canadian Journal of Philosophy 1 (1971): 59– 74; M. Ruse, ed., What the Philosophy of Biology Is: Essays Dedicated to David Hull (Dordrecht: Kluwer Academic Publishers, 1989); R. Sattler, Biophilosophy: Analytic and Holistic Perspectives (New York: Springer-Verlag, 1986), 186ff.; Simpson, Taxonomy; E. Sober, “The Principle of the Common Cause,” in Probability and Causality: Essays in Honor of W. C. Salmon, ed. J. Fetzer, 211–228 (Boston: Reidel, 1988); hereafter cited as Sober, Common Cause; see Michael Hart and Peter Marko, “It’s About Time: Divergence, Demography, and the Evolution of Developmental Modes in Marine Invertebrates,” Integrative and Comparative Biology 50, no. 4 (2010): 643; T. Mehner, J. Freyhof, and M.



Reichard, “Summary and Perspective on Evolutionary Ecology of Fishes,” Evolutionary Ecology 25, no. 3 (2011): 547–556. 9. See, e.g., E. A. Norse, Ancient Forests of the Pacific Northwest (Washington, DC: Island Press, 1990): 17ff.; hereafter cited as Norse, Ancient Forests; D. S. Wilcove, “Of Owls and Ancient Forests,” in Ancient Forests of the Pacific Northwest, ed. E. A. Norse (Washington, D.C.: Island Press, 1990): 83ff.; hereafter cited as Wilcove, “Owls”; Roughgarden, “General Theory.” See Peter Szabo and Radim Hedl, “Advancing the Integration of History and Ecology for Conservation,” Conservation Biology 25, no. 4 (August 2011): 680–687. 10. See G. S. Stent, The Coming of the Golden Age: A View of the End of Progress (Garden City, NJ: Natural History, 1978), 219; see Carlo Natali, “Event and ‘Poiesis’: The Aristotelian Theory of Natural Events,” Journal of Chinese Philosophy 36, no. 4 (2009): 503–515; Brandon N. Towl, “The Individuation of Causal Powers by Events (and Consequences of the Approach),” Metaphysica: International Journal for Ontology and Metaphysics 11, no. 1 (2010): 49–61; hereafter cited as Towl, “Individuation.” 11. Hull, Philosophy, 98; Towl, “Individuation.” 12. See A. Kiester, “Natural Kinds, Natural History, and Ecology,” in Conceptual Issues in Ecology, ed. E. Saarinen (Boston: Reidel, 1982), 355ff. 13. N. Eldredge, Unfinished Synthesis, Biological Hierarchies and Modern Evolutionary Thought (Oxford: Oxford University Press, 1985); Elder, “Species”; see also R. Brandon, Adaptation and Environment (Princeton, NJ: Princeton University Press, 1990), 72ff. 14. E.g., Van Der Steen and Kamminga, “Natural History”; Roughgarden, “General Theory”; see Mariani, “Explanatory Process”; Sillero, “Ecological Modelling.” 15. See Shrader-Frechette and McCoy, Ecology; E. D. McCoy, H. R. Mushinsky, and D. S. Wilson, “Pattern in the Compass Orientation of Gopher Tortoise Burrows at Different Spatial Scales,” Global Ecology and Biogeography Letters 3 (1993); hereafter cited as McCoy et al., “Gopher Tortoise.” 16. G. H. Orians, Chair, Committee on the Applications of Ecological Theory to Environmental Problems, Ecological Knowledge and Environmental Problem Solving: Concepts and Case Studies (Washington, DC: National Academy Press, 1986); hereafter cited as Orians, Ecological Knowledge; see Mariani, “Explanatory Process.” In fact, case studies are an integral part of many US National Academy of Sciences studies, such as National Research Council, Foodborne Threats to Health: Policies, Practices, and Global Coordination (Washington, DC: National Academy Press, 2006); Transforming Clinical Research in the United States: Challenges and Opportunities (Washington, DC: National Academy Press, 2006); National Research Council, Human-System Integration in the System Development Process (Washington, DC: National Academy Press, 2007). 17. Orians, Ecological Knowledge, 1, 5. 18. Orians, Ecological Knowledge, 8. Many US research agencies also employ case-study methods; see, for instance, Agency for Toxic Substances and Disease Registry (ATSDR), “Case Study in Environmental Medicine, 2009,” http://www.atsdr.cdc.gov/csem/csem. asp?csem=17&po=0, accessed April 23, 2010. 19. Orians, Ecological Knowledge, 13, 16; see also S. Gorovitz and A. Maclntyre, “Toward a Theory of Medical Fallibility,” Journal of Medicine and Philosophy I (1976): 51–71; see Mariani, “Explanatory Process.” 20. Orians, Ecological Knowledge, 16. 
See Christoph Kueffer, “Integrative Ecological Research: Case-Specific Validation of Ecological Knowledge for Environmental Problem-Solving,” GAIA—Ecological Perspectives for Science and Society 15, no. 2 (2006): 115–120. 21. Kueffer, “Integrative Ecological Research,” 28; see Mariani, “Explanatory Process.” 22. G. Mitchell, “Vampire Bat Control in Latin America,” in Ecological Knowledge and Environmental Problem Solving, ed. Orians et al., 151–164 (Washington, D.C.: National Academy Press, 1986). 23. See P. Kitcher, “Two Approaches to Explanation,” Journal of Philosophy 82 (1985): 632–639; W. Salmon, “Four Decades of Scientific Explanations,” in Scientific
Explanation, Minnesota Studies in the Philosophy of Science, Vol. 13, ed. P. Kitcher and W. C. Salmon, 384–409 (Minneapolis: University of Minnesota Press, 1989); hereafter cited as Salmon, “Four Decades”; see also Mariani, “Explanatory Process”; Siggelkow, “Case Studies.” 24. H. Salwasser, “Conserving a Regional Spotted Owl Population,” in Ecological Knowledge and Environmental Problem Solving, ed. Orians et al. (Washington, DC: National Academy Press, 1986), 232; hereafter cited as Salwasser, “Spotted Owl”; see Jennifer Blakesley, Mark Seamans, Mary Conner, et al., “Population Dynamics of Spotted Owls in the Sierra Nevada, California,” Journal of Wildlife Management 74, no. 8 (2010): 1–36. 25. See Stefan Linquist, “But Is It Progress? On the Alleged Advances of Conservation Biology over Ecology,” Biology and Philosophy 23, no. 4 (2008): 529–544; hereafter cited as Linquist, “Conservation Biology.” 26. J. W. Thomas, E. D. Forsman, J. B. Lint, E. C. Meslow, B. R. Noon, and J. Verner, A Conservation Strategy for the Northern Spotted Owl (Portland, OR: USDA, Forest Service; USDI, Bureau of Land Management; USDI, Fish and Wildlife Service; USDI, National Park Service, 1990); hereafter cited as Thomas et al., Conservation Strategy. 27. US Congress, Report of the Interagency Scientific Committee to Address the Conservation of the Northern Spotted Owl, Senate Hearings (Washington, DC: US Government Printing Office, 1990): 101–850; hereafter cited as US Congress, Scientific Committee; see Barry Noon and Jennifer Blakesley, “Conservation of the Northern Spotted Owl Under the Northwest Forest Plan,” Conservation Biology 20, no. 2 (2006): 288–296; D. A. Clark, R. G. Anthony, and L. S. Andrews, “Survival Rates of Northern Spotted Owls in Post-fire Landscapes of Southwest Oregon,” Journal of Raptor Research 45, no. 1 (2011): 38–47; Carlos Carroll, “Role of Climatic Niche Models in Focal-species-based Conservation Planning: Assessing Potential Effects of Climate Change on Northern Spotted Owl in the Pacific Northwest, USA,” Biological Conservation 143, no. 6 (2010): 1432–1437. 28. Thomas et al., Conservation Strategy; see US Congress, Scientific Committee; see Linquist, “Conservation Biology”; Chad Hanson, Dennis Odion, Dominick Dellasala, and William Baker, “Overestimation of Fire Risk in the Northern Spotted Owl Recovery Plan,” Conservation Biology 23, no. 5 (2009): 1314. 29. US Congress, Scientific Committee, 260–296; see Colin W. Evers and Echo H. Wu, “On Generalising from Single Case Studies: Epistemological Reflections,” Journal of Philosophy of Education 40, no. 4 (2006): 511–526; hereafter cited as Evers and Wu, “Generalising from Case Studies”; Eisenhardt and Graebner, “Theory Building”; Siggelkow, “Case Studies.” 30. For example, W. R. Dawson, J. D. Ligon, J. R. Murphy, J. P. Myers, D. Simberloff, and J. Verner, “Report of the Scientific Advisory Panel on the Spotted Owl,” The Condor 89 (1987): 205–229; R. J. Gutierrez and A. B. Carey, eds., “Ecology and Management of the Spotted Owl in the Pacific Northwest,” US Forest Service, General Technical Report 185 (1985); Salwasser, “Spotted Owl,” 227–247; Thomas et al., Conservation Strategy. 31. Thomas et al., Conservation Strategy; Stan G. Sovern, Margaret Taylor, and Eric D. Forsman, “Nest Reuse by Northern Spotted Owls on the East Slope of the Cascade Range, Washington,” Northwestern Naturalist 92, no. 2 (2011): 101–106. 32. D. T. Campbell, “Foreword,” in R. K. Yin, Case Study Research (Beverly Hills: Sage, 1984): 8; see G. G.
Parker, “Are Currently Available Statistical Methods Adequate for Long-Term Studies?” in Long Term Studies in Ecology: Approaches and Alternatives, ed. G. E. Likens (New York: Springer-Verlag, 1989), 199; hereafter cited as Parker, “Long-Term Studies”; Paul E. Griffiths and Karola Stotz, “Experimental Philosophy of Science,” Philosophy Compass 3, no. 3 (2008): 507–521; hereafter cited as Griffiths and Stotz, “Experimental Philosophy”; Samir Okasha, “Experiment, Observation and the Confirmation of Laws,” Analysis 71, no. 2 (2011): 222–232; hereafter cited as Okasha, “Confirmation of Laws”; Mariani, “Explanatory Process.”
33. S. Merriam, Case Study Research in Education: A Qualitative Approach (San Francisco: Jossey-Bass, 1988), 6–7; hereafter cited as Merriam, Case Study Research; see Eisenhardt and Graebner, “Theory Building”; Mariani, “Explanatory Process”; Okasha, “Confirmation of Laws”; Griffiths and Stotz, “Experimental Philosophy.” 34. See, e.g., R. Levins, Evolution in Changing Environments: Some Theoretical Explorations (Princeton, NJ: Princeton University Press, 1968), 5ff.; A. F. McEvoy, The Fisherman’s Problem: Ecology and Law in the California Fisheries (Cambridge: Cambridge University Press, 1986), 83; see also Andrea Thorpe, Erik Aschehoug, Daniel Atwater, and Ragan Callaway, “Interactions Among Plants and Evolution,” Journal of Ecology 99, no. 3 (2011): 729; Stephen Ellner and Lutz Becks, “Rapid Prey Evolution and the Dynamics of Two-predator Food Webs,” Theoretical Ecology 4, no. 2 (2011): 133. 35. A. R. Berkowitz, J. Kolasa, R. H. Peters, and S. T. Pickett, “How Far in Space and Time Can the Results from a Single Long-Term Study Be Extrapolated?” in Long Term Studies in Ecology: Approaches and Alternatives, ed. G. E. Likens (New York: Springer-Verlag, 1989), 193–194; hereafter cited as Berkowitz et al., “Long-Term Study”; see Evers and Wu, “Generalising from Case Studies”; Siggelkow, “Case Studies”; Eisenhardt and Graebner, “Theory Building”; Narayani Barve, Vijay Barve, Alberto Jimenez-Valverde, Andres Lira-Noriega, Sean Maher, A. Townsend Peterson, Jorge Soberon, and Fabricio Villalobos, “The Crucial Role of the Accessible Area in Ecological Niche Modeling and Species Distribution Modeling,” Ecological Modelling 222, no. 11 (2011): 1810–1819. 36. See Parker, “Long-Term Studies,” 199–200; Okasha, “Confirmation of Laws”; Mariani, “Explanatory Process”; Evers and Wu, “Generalising from Case Studies”; Siggelkow, “Case Studies”; Eisenhardt and Graebner, “Theory Building.” 37. Salwasser, “Spotted Owl,” 227–247. 38. See Shrader-Frechette and McCoy, Ecology; R. A. Carson, “Case Method,” Journal of Medical Ethics 12 (1986): 36; hereafter cited as Carson, “Case Method”; M. Edelson, Psychoanalysis: A Theory in Crisis (Chicago: University of Chicago Press, 1988), xxxi–xxxii, 237–251; hereafter cited as Edelson, Psychoanalysis; A. R. Gini, “The Case Method,” Journal of Business Ethics 4 (1985): 351–352; hereafter cited as Gini, “Case Method”; A. Grünbaum, The Foundations of Psychoanalysis: A Philosophical Critique (Berkeley: University of California Press, 1984); hereafter cited as Grünbaum, Foundations; A. Grünbaum, “The Role of the Case Study Method in the Foundation of Psychoanalysis,” Canadian Journal of Philosophy 18 (1988): 624ff.; hereafter cited as Grünbaum, “Case Study”; see Mariani, “Explanatory Process”; Evers and Wu, “Generalising from Case Studies”; Siggelkow, “Case Studies”; Eisenhardt and Graebner, “Theory Building.” 39. See K. Ervin, Fragile Majesty (Seattle: The Mountaineers, 1989), 86ff., 205ff.; Norse, Ancient Forests, 73ff.; Salwasser, “Spotted Owl,” 227. 40. See R. K. Yin, Case Study Research: Design and Methods (Beverly Hills: Sage, 1984), 16ff.; hereafter cited as Yin, Case Study Research; see also Siggelkow, “Case Studies”; Mariani, “Explanatory Process”; Evers and Wu, “Generalising from Case Studies”; Eisenhardt and Graebner, “Theory Building.” 41. C. Bernstein and B. Woodward, All the President’s Men (New York: Simon & Schuster, 1974). 42. See Yin, Case Study Research, 24. 43. See, however, T. D. Cook and D. T.
Campbell, Quasi-Experimentation: Design and Analysis Issues for Field Settings (Chicago: Rand McNally, 1979). 44. See Edelson, Psychoanalysis, 278–308, 231ff.; Merriam, Case Study Research, 6, 36ff.; Yin, Case Study Research, 27, 29ff.; see also Okasha, “Confirmation of Laws”; Griffiths and Stotz, “Experimental Philosophy”; Mariani, “Explanatory Process”; Eisenhardt and Graebner, “Theory Building”; Evers and Wu, “Generalising from Case Studies.” 45. Salwasser, “Spotted Owl,” 242.
46. A. Kaplan, The Conduct of Inquiry: Methodology for Behavioral Science (San Francisco: Chandler, 1964), 333–335; see Yoichi Shida, “Patterns, Models, and Predictions: Robert MacArthur’s Approach to Ecology,” Philosophy of Science 74, no. 5 (2007): 642–653; Joseph Margolis, “Tensions Regarding Epistemic Concepts,” Human Affairs: A Postdisciplinary Journal for Humanities and Social Sciences 19, no. 2 (2009): 169–181. 47. D. T. Campbell, “Degrees of Freedom and the Case Study,” Comparative Political Studies 8 (1975): 178–193. 48. D. T. Campbell, “Reforms as Experiments,” American Psychologist 24 (1969): 409–429; see Yin, Case Study Research, 33–35. 49. Salwasser, “Spotted Owl,” 236; see Evers and Wu, “Generalising from Case Studies”; Eisenhardt and Graebner, “Theory Building”; Mariani, “Explanatory Process.” 50. Yin, Case Study Research, 35ff.; T. Kidder, The Soul of a New Machine (Boston: Little, Brown, 1981). One evaluates construct validity by employing multiple sources of evidence, attempting to establish chains of evidence, and having experts review the draft of the case-study report. One tests for internal validity or causal validity by doing pattern matching, as exemplified in the Campbell case already noted, and attempting to provide alternative explanations, as Grunbaum (see earlier notes) suggested. One tests for external validity by attempting to replicate the case-study conclusions in other situations. To the extent that the case is wholly unique, however, replication will be impossible. Nevertheless, the findings of the case study may be valuable if they exhibit heuristic power. 51. Salwasser, “Spotted Owl,” 238–242. 52. See Y. S. Lincoln and G. Guba, Naturalistic Inquiry (Newbury Park, CA: Sage, 1985); Merriam, Case Study Research, 133ff., 13–17, 140ff., 147ff., 123ff., 163ff.; Yin, Case Study Research, 99–120; Salwasser, “Spotted Owl,” 243, 232. 53. Salwasser, “Spotted Owl,” 235–238. 54. See Merriam, Case Study Research, 170ff.; see also Mariani, “Explanatory Process”; Eisenhardt and Graebner, “Theory Building”; Evers and Wu, “Generalising from Case Studies.” 55. See Shrader-Frechette and McCoy, Ecology; Salwasser, “Spotted Owl,” 242. 56. Grünbaum, Foundations; Grünbaum, “Case Study”; D. Callahan and S. Bok, The Teaching of Ethics in Higher Education: A Report (New York: The Hastings Center, 1980), 5–62; Carson, “Case Method,” 37; E. Dalton, “The Case as Artifact,” Man and Medicine 4 (1979): 17; Edelson, Psychoanalysis, 239–243; Gini, “Case Method,” 351–352; E. G. Guba and Y. S. Lincoln, Effective Evaluation (San Francisco: Jossey-Bass, 1981), 377; W. Hoering, “On Judging Rationality,” Studies in History and the Philosophy of Science 11 (1980): 123–136; hereafter cited as Hoering, “Rationality”; Merriam, Case Study Research, 33ff; Eisenhardt and Graebner, “Theory Building”; Siggelkow, “Case Studies”; Evers and Wu, “Generalising from Case Studies”; Helen Simons, Case Study Research in Practice (Los Angeles: Sage, 2009), ch. 10, esp. 163; Roland W. Scholz and Olaf Tietje, Embedded Case Study Methods: Integrating Quantitative and Qualitative Knowledge (Thousand Oaks, CA: Sage, 2002), ch. 29, esp. 334. See also Yuchi Zheng, Rui Peng, Masaki Kuro-o, and Xiaomao Zeng, “Exploring Patterns and Extent of Bias in Estimating Divergence Time from Mitochondrial DNA Sequence Data in a Particular Lineage: A Case Study of Salamanders (Order Caudata),” Molecular Biology and Evolution 28, no. 9 (2011): 2521–2535. 57. Hull, Science Process, 22; Robert E. 
Goodin, “The Epistemic Benefits of Multiple Biased Observers,” Episteme: A Journal of Social Epistemology 3, no. 3 (2006): 166–174; see Torsten Wilholt, “Bias and Values in Scientific Research,” Studies in History and Philosophy of Science 40, no. 1 (2009): 92–101. 58. See K. S. Shrader-Frechette, Science Policy, Ethics, and Economic Methodology: Some Problems of Technology Assessment and Environmental-Impact Analysis (Boston: Reidel,
1985), 68ff.; K. S. Shrader-Frechette, Risk and Rationality: Philosophical Foundations for Populist Reforms (Berkeley: University of California Press, 1991), ch. 4; Berkowitz et al., “Long-Term Study,” 195–197; see Evers and Wu, “Generalising from Case Studies”; Eisenhardt and Graebner, “Theory Building.” 59. P. de Vries, “The Discovery of Excellence,” Journal of Business Ethics 5 (1986): 195; see Evers and Wu, “Generalising from Case Studies”; Siggelkow, “Case Studies.” 60. McCoy et al., “Gopher Tortoise”; A. Kinlaw and M. Grasmueck, “Evidence for and Geomorphologic Consequences of a Reptilian Ecosystem Engineer: The Burrowing Cascade Initiated by the Gopher Tortoise,” Geomorphology 157–158, (July 2012): 108– 121; Stephen C. Richter, Jeffrey A. Jackson, Matthew Hinderliter, Deborah Epperson, Christopher W. Theodorakis, and S. Marshall Adams, “Conservation Genetics of the Largest Cluster of Federally Threatened Gopher Tortoise (Gopherus polyphemus) Colonies with Implications for Species Management,” Herpetologica 67, no. 4 (2011): 406–419. 61. See, e.g., H. Adelman, “Rational Explanation Reconsidered: Case Studies and the Hempel-Dray Model,” History and Theory 13 (1974): 208–224; hereafter cited as Adelman, “Hempel-Dray Model”; T. A. Beckman, “On the Use of Historical Examples in Agassi’s ‘Sensationalism’,” Studies in History and the Philosophy of Science 1 (1971): 293–296. 62. Beckman, “On the Use of Historical Examples,” 293–296. 63. Hoering, “Rationality,” 132–133. R. Gomm, M. Hammersley, and P. Foster, “Case Study and Generalization,” in Case Study Method, ed. R. Gomm, M. Hammersley, and P. Foster (Thousand Oaks, CA: Sage, 2000), 102. 64. See Edelson, Psychoanalysis, 255–266, 319ff.; see also Rodrigo Moro, “On the Nature of the Conjunction Fallacy,” Synthese: An International Journal for Epistemology, Methodology and Philosophy of Science 171, no. 1 (2009): 1–24. See also Moshe Sniedovich, “Fooled by Local Robustness: An Applied Ecology Perspective,” Ecological Application 22, no. 5 (2012): 1421–1427. 65. See G. Baker and P. Hacker, Wittgenstein (Oxford: Blackwell, 1985), 150–185; hereafter cited as Baker and Hacker, Wittgenstein; G. Baker and P. Hacker, “Critical Study: On Misunderstanding Wittgenstein: Kripke’s Private Language Argument,” in The Philosophy of Wittgenstein, vol. 10, ed. J. Canfield (New York: Garland, 1986), 330–333; hereafter cited as Baker and Hacker, “Kripke’s Argument”; L. Wittgenstein, Philosophical Investigations, ed. G. E. M. Anscombe and R. Rhees, trans. G. E. M. Anscombe (Oxford: Blackwell, 1973), 243ff.; hereafter cited as Wittgenstein, Philosophical Investigations. 66. L. Wittgenstein, On Certainty (Oxford: Blackwell, 1969), 204. 67. See R. Newell, Objectivity, Empiricism, and Truth (New York: Routledge & Kegan Paul, 1986), 16ff., 23, 30, 63; hereafter cited as Newell, Truth; see Jamie Morgan, “Defining Objectivity in Realist Terms: Objectivity as a Second-Order ‘Bridging’ Concept Part II: Bridging to Praxis,” Journal of Critical Realism 7, no. 1 (2008): 107–132. 68. See Yin, Case Study Research, 21; see Evers and Wu, “Generalising from Case Studies”; Eisenhardt and Graebner, “Theory Building”; Mariani, “Explanatory Process”; Siggelkow, “Case Studies.” 69. Wilcove, “Owls,” 77. 70. N. Cartwright, “Capacities and Abstractions,” in Scientific Explanation, Minnesota Studies in the Philosophy of Science, Vol. 13, ed. P. Kitcher and W. C. Salmon (Minneapolis: University of Minnesota Press, 1989), 349–356; hereafter cited as Cartwright, “Abstractions.” 71. J. 
Fetzer, “A Single Case Propensity Theory of Explanation,” Synthese 28 (1974): 171–198; J. Fetzer, “Statistical Probabilities: Single-Case Propensities versus Long-Run Frequencies,” in Developments in the Methodology of Social Science, ed. W. Leinfellner and E. Kohler, 387–397 (Dordrecht: Reidel, 1974); J. Fetzer, “On the Historical Explanation of Unique Events,” Theory and Decision 6 (1975): 87–97; hereafter cited as Fetzer, “Unique Events.”
72. P. Humphreys, “Scientific Explanation: The Causes, Some of the Causes, and Nothing But the Causes,” in Scientific Explanation, Minnesota Studies in the Philosophy of Science, Vol. 13, ed. W. Lcinfellner and E. Kohler, 283–306 (Minneapolis: University of Minnesota Press, 1989); P. Humphreys, The Chances of Explanation: Causal Explanation in the Social, Medical, and Physical Sciences (Princeton, NJ: Princeton University Press, 1991). 73. K. Popper, The Logic of Scientific Discovery (New York: Harper & Row, 1965), 28ff., 251ff. 74. A. Franklin, The Neglect of Experiment (New York: Cambridge University Press, 1986), 100, 192ff.; see Cartwright, “Abstractions,” 349. 75. See Hoering, “Rationality,” 135; Franz Huber, “Hempel’s Logic of Confirmation,” Philosophical Studies: An International Journal for Philosophy in the Analytic Tradition 139, no. 2 (2008): 181–189. 76. See Edelson, Psychoanalysis, 120, 363, 231–265, 275–276; Grünbaum, Foundations; C. Glymour, Theory and Evidence (Princeton, NJ: Princeton University Press, 1980); P. E. Meehl, “Subjectivity in Psychoanalytic Inference,” in Testing Scientific Theories, ed. John Earman, 349–411 (Minneapolis: University of Minnesota Press, 1983); Eisenhardt and Graebner, “Theory Building.” 77. See R. T. Francoeur, “A Structured Approach to Teaching Decision-Making Skills in Biomedical Ethics,” Journal of Bioethics 5 (1984): 146; Hoering, “Rationality,” 135; see also Gary Potter, “Induction and Ontology,” Journal of Critical Realism 7, no. 1 (2008): 83–106; hereafter cited as Potter, “Induction.” 78. See Grünbaum, Foundations; Grünbaum, “Case Study”; see also Marc Lange, “Why Proofs by Mathematical Induction Are Generally Not Explanatory,” Analysis 69, no. 2 (2009): 203–211. 79. See, e.g., J. Diamond and T. J. Case, eds., Community Ecology (New York: Harper & Row, 1986); D. Simberloff, “Experimental Zoogeography of Islands: Effects of Island Size,” Ecology 57 (1976): 629–648; D. Simberloff and L. G. Abele, “Island Biogeography Theory and Conservation Practice,” Science 191 (1976): 285–286. See also Sinichi Nakagawa and Robert P. Freckleton, “Model Averaging, Missing Data and Multiple Imputation: A Case Study for Behavioural Ecology,” Behavioral Ecology and Sociobiology 65, no. 1 (2011): 103–116; David Lesbarreres and Lenore Fahrig, “Measures to Reduce Population Fragmentation by Roads: What Has Worked and How Do We Know?,” Trends in Ecology and Evolution 27, no. 7 (2012): 374–380. 80. See Edelson, Psychoanalysis, 237–251, 286ff., 319ff.; see also Eisenhardt and Graebner, “Theory Building”; Evers and Wu, “Generalising from Case Studies.” 81. Grünbaum, Foundations; Grünbaum, “Case Study.” 82. E. Sober, “Parsimony, Likelihood, and the Principle of the Common Cause,” Philosophy of Science 54 (1987): 466; Sober, Common Cause; Elliot Sober, “A Priori Causal Models of Natural Selection,” Australasian Journal of Philosophy 89, no. 4 (2011): 571–589; Asbjorn Steglich-Petersen, “Against the Contrastive Account of Singular Causation,” British Journal for the Philosophy of Science 63, no. 1 (2012): 115–143. 83. See H. Reichenbach, The Direction of Time (Berkeley: University of California Press, 1956); W. Salmon, Scientific Explanation and the Causal Structure of the World (Princeton, NJ: Princeton University Press, 1984); Potter, “Induction”; G. Hofer-Szabo and P. Vecsernyes, “Reichenbach’s Common Cause Principle in Algebraic Quantum Field Theory with Locally Finite Degrees of Freedom,” Foundations of Physics 42, no. 2 (2012): 241–255; O. Sternäng, B. 
Jonsson, Å. Wahlin, L. Nyberg, and L. Nilsson, “Examination of the Common Cause Account in a Population-based Longitudinal Study with Narrow Age Cohort Design,” Gerontology 56, no. 6 (2010): 553–563; Carol E. Cleland, “Prediction and Explanation in Historical Natural Science,” British Journal for the Philosophy of Science 62, no. 3 (2011): 551–582. 84. See Van Der Steen and Kamminga, “Natural History”; Evers and Wu, “Generalising from Case Studies”; Mariani, “Explanatory Process.”
85. Yin, Case Study Research, 19; see Okasha, “Confirmation of Laws”; Mariani, “Explanatory Process.” 86. See T. Kuhn, The Structure of Scientific Revolutions (Chicago: University of Chicago Press, 1970): 187–191; hereafter cited as Kuhn, Scientific Revolutions. T. Nickles, Thomas Kuhn (New York: Cambridge University Press, 2003), ch. 6; J. Marcum, Thomas Kuhn’s Revolution: An Historical Philosophy of Science (New York: Continuum, 2005), part II; David Walker, “A Kuhnian Defence of Inference to the Best Explanation,” Studies in History and Philosophy of Science 43, no. 1 (2012): 64–73. 87. See B. Mowry, “From Galen’s Theory to William Harvey’s Theory: A Case Study in the Rationality of Scientific Theory Change,” Studies in History and Philosophy of Science 16 (1985): 49–82. 88. Orians, Ecological Knowledge, 247. 89. Salmon, “Four Decades,” 409; see N. Cartwright and S. Efstathiou, “Hunting Causes and Using Them: Is There No Bridge from Here to There?,” International Studies in the Philosophy of Science 25, no. 3 (2011): 223–241; K. Mainzer, “Causality in Natural, Technical, and Social Systems,” European Review 18, no. 4 (2010): 433–454. 90. See Merriam, Case Study Research, 20–21; Mariani, “Explanatory Process.” 91. See Kuhn, Scientific Revolutions, 191–204. See note 86 above. 92. See G. Baker, “Following Wittgenstein,” in The Philosophy of Wittgenstein, vol. 10, ed. J. Canfield (New York: Garland, 1986), 255; hereafter cited as Baker, “Following Wittgenstein.” 93. D. Bloor, Wittgenstein (New York: Columbia University Press, 1983); hereafter cited as Bloor, Wittgenstein; S. Kripke, “Wittgenstein on Rules and Private Language,” in Perspective on the Philosophy of Wittgenstein, ed. I. Black (Oxford: Blackwell, 1982), 239–296; hereafter cited as Kripke, “Wittgenstein on Rules”; C. Peacocke, “Reply,” in The Philosophy of Wittgenstein, ed. J. Canfield (New York: Garland, 1986), 274–297; B. Smith, “Knowing How vs. Knowing That,” in Practical Knowledge, ed. J. Nyiri and B. Smith (London: Croom Helm, 1988): 1–16; hereafter cited as Smith, “Knowing How.” 94. See Newell, Truth. 95. R. Ackermann, Wittgenstein’s City (Amherst: University of Massachusetts Press, 1988), 131; R. Eldridge, “Hypotheses, Criteria, Claims, and Perspicuous Representations: Wittgenstein’s ‘Remarks on Frazer’s The Golden Bough’,” Philosophical Investigations 10 (1987): 226–245; Wittgenstein, Philosophical Investigations, 47–54; L. Wittgenstein, “Remarks on Frazer’s The Golden Bough,” in Wittgenstein, ed. C. Luckhardt and trans. J. Beversluis (Ithaca, NY: Cornell University Press, 1979), 61, 77–79. 96. Kripke, “Wittgenstein on Rules”; Cartwright, “Abstractions.” 97. See Baker, “Following Wittgenstein”; Baker and Hacker, Wittgenstein; Baker and Hacker, “Kripke’s Argument”; E. Picardi, “Meaning and Rules,” in Practical Knowledge, ed. J. Nyiri and B. Smith, 90–121 (London: Croom Helm, 1988); Smith, “Knowing How”; and Wittgenstein, Philosophical Investigations. 98. Baker and Hacker, Wittgenstein, 151. 99. See A. Olding, “A Defence of Evolutionary Laws,” British Journal for the Philosophy of Science 29 (1978): 131–143. 100. See, e.g., Fetzer, “Unique Events”; Van Der Steen and Kamminga, “Natural History.” 101. See J. R. Bambrough, Moral Scepticism and Moral Knowledge (London: Routledge & Kegan Paul, 1978), ch. 8; I. Dilman, Induction and Deduction: A Study in Wittgenstein (Oxford: Blackwell, 1973), 115–120; Fetzer, “Unique Events,” 95–96; Kuhn, Scientific Revolutions; Newell, Truth, 88–94; J. 
Wisdom, Paradox and Discovery (Oxford: Blackwell, 1965); see also P. D. Magnus, “Demonstrative Induction and the Skeleton of Inference,” International Studies in the Philosophy of Science 22, no. 3 (2008): 303–315. 102. Kripke, “Wittgenstein on Rules.” 103. See Newell, Truth, 92–110; M. Polanyi, The Study of Man (Chicago: Chicago University Press, 1959), 13, 26; M. Polanyi, Personal Knowledge: Towards a Post-Critical Philosophy
(New York: Harper & Row, 1964); Adelman, “Hempel-Dray Model,” 223; G. Gutting, “Can Philosophical Beliefs Be Rationally Justified?,” American Philosophical Quarterly 19 (1982): 323; Bloor, Wittgenstein, 95. Harry Collins, Tacit and Explicit Knowledge (Chicago: University of Chicago Press, 2010). 104. See A. Plantinga, The Nature of Necessity (Oxford: Oxford University Press, 1974), 220–221.

Chapter 12

1. R. Poínhos, “Gender Bias in Medicine,” Acta Médica Portuguesa 24, no. 6 (November–December 2011): 975–986; William Ryan, Blaming the Victim (New York: Random House, 1976); Kriss Ravetto, The Unmaking of Fascist Aesthetics (Minneapolis: University of Minnesota Press, 2001). 2. Ben Lefebvre, “US Inspectors Say Chevron Knew of Pipe Problem That Caused California Refinery Fire,” Wall Street Journal, February 13, 2013, http://online.wsj.com/article/SB10001424127887323478004578302303549949898.html, accessed April 15, 2013; “Richmond Refinery Cancer,” http://richmondrefinerycancer.wordpress.com/, accessed April 15, 2013; Charles Burress, “Fire at Chevron Refinery, Many Go to Hospitals,” August 6, 2012, BeniciaPatch, http://benicia.patch.com/articles/explosion-fire-reported-at-chevron-refinery, accessed April 15, 2013. 3. General Public Utilities Corporation, Three Mile Island (Parsippany, NJ: GPUC, 1980), 5. See, for example, L. M. Davidson, R. Fleming, and A. Baum, “Chronic Stress, Catecholamines, and Sleep Disturbance at Three Mile Island,” Journal of Human Stress 13, no. 2 (summer 1987): 75–83; M. A. Dew, E. J. Bromet, H. C. Schulberg, L. O. Dunn, and D. K. Parkinson, “Mental Health Effects of the Three Mile Island Nuclear Reactor Restart,” American Journal of Psychiatry 144, no. 8 (August 1987): 1074–1077; M. A. Dew, E. J. Bromet, and H. C. Schulberg, “A Comparative Analysis of Two Community Stressors,” American Journal of Community Psychology 15, no. 2 (April 1987): 167–184; B. P. Dohrenwend, “Psychological Implications of Nuclear Accidents,” Bulletin of the New York Academy of Medicine 59, no. 19 (December 1983): 1060–1076; R. F. Chisholm, S. V. Kasl, B. P. Dohrenwend, B. S. Dohrenwend, G. J. Warheit, R. L. Goldsteen, K. Goldsteen, and J. L. Martin, “Behavioral and Mental Health Effects of the Three Mile Island Accident on Nuclear Workers,” Annals of the New York Academy of Sciences 365 (1981): 134–135; R. Goldsteen, J. K. Schorr, and K. S. Goldsteen, “Longitudinal Study of Appraisal at Three Mile Island,” Social Sciences and Medicine 128, no. 4 (1989): 389–398. 4. Tsuyoshi Inajima and Yuji Okada, “Japan May Be Atomic-Power Free Next Month After Shutdown,” Bloomberg Businessweek, April 17, 2012, http://www.businessweek.com/news/2012-04-16/japan-may-be-atomic-power-free-next-month-after-shutdown, accessed April 20, 2012. Parts of this chapter rely on Kristin Shrader-Frechette, “Nuclear Catastrophe, Disaster-Related Environmental Injustice, and Fukushima,” Environmental Justice 5, no. 3 (June 2012): 133–139; and Kristin Shrader-Frechette, What Will Work (New York: Oxford University Press, 2011), 130–160; hereafter cited as Shrader-Frechette, WWW. 5. Associated Press, “Protests in Japan as Nuclear Power Plant Reopens,” The Guardian, July 1, 2012, http://www.guardian.co.uk/world/2012/jul/01/japan-protest-nuclear-plant-reopens, accessed October 27, 2012. 6. R. Koike, N. Tsuzaka, and K. Uechi, “Oi Nuke Reactors to Stay Online,” The Asahi Shimbun, March 20, 2013, http://ajw.asahi.com/article/0311disaster/fukushima/AJ201303200067, accessed March 21, 2013. 7. E. Johnston, “Nation Marks First Anniversary of Disasters,” The Japan Times, March 12, 2012, http://www.japantimes.co.jp/text/nn20120312a1.html, accessed October 27, 2012.
8. Kenichi Ohmae, “Fukushima,” Japan Times, April 18, 2012, http://www.japantimes.co.jp/text/eo20120418a4.html, accessed April 20, 2012. IARC calculations are in Shrader-Frechette, WWW. 9. Q. Wang and X. Chen, “Regulatory Failures for Nuclear Safety—The Bad Example of Japan,” Renewable and Sustainable Energy Reviews 16, no. 5 (June 2012): 2610–2617; D. Normile, “Commission Spreads Blame for ‘Manmade’ Disaster,” Science 337, no. 6091 (July 2012): 143. 10. J. Nishikawa, “Some Progress, But Radiation Remains High at Fukushima Nuclear Plant,” The Asahi Shimbun, October 13, 2012, http://ajw.asahi.com/article/0311disaster/fukushima/AJ201210130030, accessed October 27, 2012. 11. “Pockets of High Radiation Remind of Fukushima Plant Danger,” Reuters News Service, August 2, 2011, http://www.reuters.com/article/2011/08/02/japan-nuclear-radiation-idUSL3E7J203D20110802, accessed April 20, 2012. 12. Geoff Brumfiel, “Fukushima Reaches Cold Shutdown,” Nature, December 16, 2011, http://www.nature.com/news/fukushima-reaches-cold-shutdown-1.9674, accessed April 20, 2012. 13. “Fukushima Radiation Levels Underestimated by Five Times—TEPCO,” February 8, 2014, RT News Line, http://rt.com/news/fukushima-radiation-levels-underestimated-143/, accessed February 9, 2014. Hiroaki Koide, quoted in Mike Whitney, “Is Fukushima’s Doomsday Machine About to Blow?,” Eurasia Review, April 20, 2012, http://www.eurasiareview.com/20042012-is-fukushimas-doomsday-machine-about-to-blow-oped/, accessed April 20, 2012; Tony Dutzik and Travis Madsen, Fukushima (Environment California, 2012), http://www.environmentcalifornia.org/sites/environment/files/reports/FUKUSHIMA%20FAC7/020SHEET.pdf, accessed April 20, 2012; BBC News Asia, “Fukushima,” March 20, 2013, http://www.bbc.co.uk/news/world-asia-21867705#, accessed March 20, 2013. 14. Whitney, “Fukushima’s Doomsday Machine.” 15. Arnold Gundersen, “Tokyo Soil Samples Would Be Considered Nuclear Waste in the US,” Fairewinds Energy Education, March 25, 2012, http://www.fairewinds.org/node/223, accessed April 20, 2012. 16. Frank von Hippel, “The Radiological and Psychological Consequences of the Fukushima Daiichi Accident,” Bulletin of the Atomic Scientists 67, no. 5 (2011): 27–36; Shrader-Frechette, RJ, 2012; Shrader-Frechette, WWW, 130–160. Note that fission products/radioactive-half-lives in Hiroshima/Nagasaki/FD are different. 17. Japanese Ministry of Education, Culture, Sports, Science, and Technology (MEXT), For Correctly Understanding Radioactivity, trans. Setsuko Shiga (Tokyo: MEXT, 2011), 6, 11. 18. Hippel, “Radiological and Psychological Consequences”; Whitney, “Fukushima’s Doomsday Machine.” 19. Shinichi Saoshiro and Jonathan Thatcher, “TEPCO Wary of Fukushima Radiation Leak Exceeding Chernobyl,” Scientific American, April 11, 2011, www.scientificamerican.com/article.cfm?id=tepco-wary-of-fukushima-radiation-leak, accessed May 6, 2011. Chernobyl fatalities from Alexey V. Yablokov, Vassily B. Nesterenko, Alexey V. Nesterenko, and Janette D. Sherman-Nevinger, Chernobyl (Malden, MA: John Wiley, 2009). 20. Shrader-Frechette, WWW, 130–160. See note 11, chapter 4, this volume, regarding 3–6% of cancers. 21. World Health Organization, Health Effects of the Chernobyl Accident and Special Health Care Programmes (Geneva: World Health Organization, 2006). 22. World Nuclear Association (WNA), Health Impacts, Chernobyl Accident (London: WNA, 2009), http://www.world-nuclear.org/about/contact.html, accessed March 28, 2012. 23.
World Nuclear Association, “Chernobyl Accident,” www.world-nuclear.org/info/ chernobyl/inf07.html, accessed April 10, 2011; see R. Giel, “Hoe Erg Was Chernobyl? De Psychosociale Gevolgen van Het Reactorongeluk,” Nederlands Tijdschrift voor
Geneeskunde 135, no. 25 (June 22, 1991): 1137–1141; Vera Rich, “USSR,” Lancet 337, no. 8749 (May 4, 1991): 1086–1086. 24. WNA, Health Impacts. 25. Alexey V. Yablokov, Vassily B. Nesterenko, Alexey V. Nesterenko, and Janette D. Sherman-Nevinger, Chernobyl, vol. 1181 (New York: Annals of the New York Academy of Sciences, 2009). See WNA, Health Impacts, and Shrader-Frechette, WWW, esp. ch. 4. 26. World Nuclear Association, “Chernobyl Accident,” (2008), http://world-nuclear.org/info/chernobyl/inf07.html, accessed February 13, 2009. See, e.g., A. M. Herbst and G. W. Hopley, Nuclear Energy Now (Hoboken, NJ: John Wiley, 2007), 138. 27. Maureen C. Hatch, Jan Beyea, Jeri W. Nieves, and Mervyn Susser, “Cancer Near the Three Mile Island Nuclear Plant,” American Journal of Epidemiology 132, no. 3 (1990): 397–412; see Evelyn O. Talbott, Ada O. Youk, Kathleen P. McHugh-Pemu, and Jeanne V. Zborowski, “Long Term Follow-Up of the Residents of the Three Mile Island Accident Area,” Environmental Health Perspectives 111, no. 3 (2003): 341–348. 28. Talbott et al., “Long Term Follow-Up,” 341; Hatch et al., “Cancer”; Maureen C. Hatch, Sylvan Wallenstein, Jan Beyea, Jeri W. Nieves, and Mervyn Susser, “Cancer Rates After the Three Mile Island Nuclear Accident,” American Journal of Public Health 81, no. 6 (1991): 719–724. 29. E.g., R. J. Levin, “Incidence of Thyroid Cancer in Residents Surrounding the Three Mile Island Nuclear Facility,” Laryngoscope 118, no. 4 (2009): 618–628; Hatch et al., “Cancer”; Hatch et al., “Cancer Rates”; M. Susser, “Consequences of the 1979 Three Mile Island Accident Continued,” Environmental Health Perspectives 105, no. 6 (1997): 566–567; S. Walker, Three Mile Island (Berkeley: University of California Press, 2004), 235. 30. Hatch et al., “Cancer Rates”; Hatch et al., “Cancer”; Susser, “Consequences.” 31. Talbott et al., “Long Term Follow-Up”; E. O. Talbott et al., “Mortality among the Residents of the Three Mile Island Accident Area: 1979–1992,” Environmental Health Perspectives 108, no. 6 (2000): 545–552. 32. E.g., S. Wing, D. Richardson, and D. Armstrong, “A Re-Evaluation of Cancer Incidence Near the Three Mile Island Nuclear Plant,” Environmental Health Perspectives 105, no. 1 (1997): 52–57; S. Wing, Affidavit in TMI Litigation Cases Consolidated II, Civil Action No 1: CV-88-1452 (Harrisburg, PA: US District Court for the Middle District of Pennsylvania, 1995); S. Wing, “Objectivity and Ethics in Environmental Health Science,” Environmental Health Perspectives 111, no. 14 (2003): 1809–1818. 33. Wing et al., “Re-Evaluation,” 266–268; Wing, “Objectivity and Ethics.” 34. Benjamin K. Sovacool, “Valuing the Greenhouse Gas Emissions from Nuclear Power,” Energy Policy 36 (2008): 2940–2953; K. Shrader-Frechette, “Data Trimming, Nuclear Emissions, and Climate Change,” Science and Engineering Ethics 15, no. 1 (2009): 19–23; Shrader-Frechette, WWW, ch. 3. 35. Moody’s Corporate Finance, New Nuclear Generating Capacity (New York: Moody’s, May 2008). 36. L. Mez, “Nuclear Energy?,” Energy Policy 48 (September 2012): 56–63. 37. US Department of Energy, DOE Selects 13 Solar Energy Projects (2007), http://www.energy.gov.news.4855.htm, accessed June 9, 2009. 38. B. Smith, Insurmountable Risks (Takoma Park, MD: IEER Press, 2006), 70; J. Aabakken, Power Technologies Energy Data Book (Golden, CO: US DOE, National Renewable Energy Lab, 2005), 37–39. 39.
National Renewable Energy Laboratory (NREL), Near Term Practical and Ultimate Technical Potential for Renewable Resources (Golden, CO: Energy Analysis Office, NREL, US DOE, 2006). 40. Marshall Goldberg, Federal Energy Subsidies (New York: MRG Associates, 2000); Smith, Insurmountable Risks, 55.
41. National Research Council/National Academy of Sciences (NRC/NAS), Health Risks from Exposure to Low Levels of Ionizing Radiation: BEIR VII, Phase 2 (Washington, DC: National Academy Press, 2006). 42. US Environmental Protection Agency (US EPA), Radiation (2007), 2, http://www.epa. gov/radiation/docs/402-k-07-006.pdf, accessed February 27, 2006. 43. United Nations Scientific Committee on Effects of Atomic Radiation (UNSCEAR), Sources and Effects of Ionizing Radiation (New York: United Nations, 1994). 44. Arnie Gundersen, Three Myths of the Three Mile Island Accident (Burlington, VT: Fairewinds Energy Education Corporation, 2009), http://www.timia.com/ march26, accessed July 7, 2009; Sue Sturgis, Investigation (Durham, NC: Institute for Southern Studies, 2009), http://southernstudies.org/2009/04/post-4.html, accessed July 7, 2009; Walker, Three Mile Island, 78. See also T. Kletz, Learning From Accidents (Oxford: Routledge, 2012), ch. 11; S. Fushiki, “Radiation Hazards in Children,” Brain and Development 35, no. 3 (2013): 220–227. 45. Walker, Three Mile Island, 78; J. May, The Greenpeace Book of the Nuclear Age (London, Greenpeace Communications, 1989). 46. Walker, Three Mile Island, 87, 122, 124, 151, 189, 219; see May, Greenpeace Book; M. Lang, “Three Mile Island Unit 2,” Three Mile Island Alert (Harrisburg, PA: TMIA, 2009), http://www.tmia.com/node/110, accessed March 9, 2009. 47. Walker, Three Mile Island, 233; May, Greenpeace Book. 48. Walker, Three Mile Island; Wing, “Objectivity and Ethics.” 49. US Nuclear Regulatory Commission (NRC), Fact Sheet on the Three Mile Island Accident (Washington, DC: US NRC, 2008); Walker, Three Mile Island, 231. 50. Hatch et al., “Cancer,” 402. 51. Hatch et al., “Cancer,” 401. 52. Jan Beyea and J. M. DeCicco, Re-Estimating the Noble Gas Releases from the Three Mile Island Accident (Philadelphia: Three Mile Island Public Health Fund, 1990), 20. 53. Nuclear Information and Resource Service/World Information Service on Energy (NIRS/WISE), Radioactive Releases from the Nuclear Power Plants of the Chesapeake Bay (Washington, DC: NIRS, 2001). 54. Helen Caldicott, Nuclear Power Is Not the Answer (Melbourne, Australia: Melbourne University Press, 2006), 67. 55. Gundersen, Three Myths; Sturgis, Investigation. 56. Walker, Three Mile Island, 85–87. 57. Walker, Three Mile Island, 194. 58. World Health Organization (WHO), Fact Sheet On Chernobyl (Geneva: WHO, 1995). 59. J. G. Kemeny et al., Report of the President’s Commission on the Accident at Three Mile Island (Washington, DC: Kemeny Commission, 1979). 60. Randall Thompson, Joy Thompson, and David Bear, TMI Assessment (Durham, NC: Institute for Southern Studies, 1995), www.southernstudies.org/images/sitepieces/ ThompsonTMIassessment.pdf, accessed July 7, 2009. 61. H. Wasserman and N. Solomon, Killing Our Own (New York: Dell and Solomon, 1982), ch. 14. 62. M. Rogovin and G. T. Frampton, Three Mile Island, NUREG/CR-1250, vols. 1–2. (Washington, DC: US Nuclear Regulatory Commission, 1980), 728. 63. Opinion of the Court, In Re: TMI LITIGATION, 193 F.3d 613 (3rd Cir. 1999) (Philadelphia: Third Circuit Court of Appeals of the Court, 1999), par. 
179; Judge Sylvia Rambo, “Three Mile Island: The Judge’s Ruling,” in Frontline: Nuclear Reaction Readings (Alexandria, VA: PBS, 1996), http://www.pbs.org/wgbh/pages/frontline/ shows/reaction/radngs/tmi.html, accessed February 4, 2009; In Re: TMI Litigation, Lori Dolan, Joseph Gaughan, Ronald Ward, Estate of Pearl Hickernell, Kenneth Putt, Estate of Ethelda Hilt, Paula Obercash, Jolene Peterson, Estate of Gary Villella, Estate of
Leo Beam, US Court of Appeals, Third Circuit, 193 F.3d 613 (3rd Cir. 1999), argued June 27, 1997; see Kemeny, Report, 118. 64. Elisabeth Cardis et al., “Risk of Cancer after Low Doses of Ionizing Radiation,” British Medical Journal 331 (2005): 77–80; Elisabeth Cardis et al., “The 15-Country Collaborative Study of Cancer Risk among Radiation Workers in the Nuclear Industry,” Radiation Research 167 (2007): 396–416. 65. Beyea and DeCicco, Re-Estimating the Noble Gas Releases, 20. 66. Beyea and DeCicco, Re-Estimating the Noble Gas Releases, viii; Wing, “Objectivity and Ethics,” 1809–1810; Hatch et al., “Cancer.” 67. Thompson et al., TMI, 9–10; Sturgis, Investigation. 68. Walker, Three Mile Island, 78; May, Greenpeace Book. 69. Laurence Stern et al., Crisis at Three Mile Island (Washington, DC: The Washington Post, 1999), ch. 1, www.washingtonpost.com/wp-srv/national/longterm/tmi/stories/ch1. htm, accessed July 6, 2009; see Rogovin and Frampton, Three Mile Island, 25, 182. 70. Beyea and DeCicco, Re-Estimating the Noble Gas Releases. 71. Shrader-Frechette, WWW, ch. 4. 72. E. Epstein, “Science for Sale,” World Information Service on Energy/ Nuclear Information and Research Service Nuclear Monitor 576 (2002): 9–10; May, Greenpeace Book; Lang, “Three Mile Island,” 11. 73. Wing, “Objectivity and Ethics,” 1814–1816; Walker, Three Mile Island, 235–236. 74. Talbott et al., “Long Term Follow-Up,” 341, 346, 347; Hatch et al., “Cancer,” 397; Hatch et al., “Cancer Rates,” 721–722. 75. Michael Kelsh, Libby Morimoto, and Edmund Lau, “Cancer Mortality and Oil Production in the Amazon Region of Ecuador,” International Archives of Occupational and Environmental Medicine 82, no. 3 (2008): 393. 76. C. Poole and K. J. Rothman, “Our Conscientious Objection to the Epidemiology Wars,” Journal of Epidemiology and Community Health 52 (1998): 612–618. 77. Sander Greenland, “Randomization, Statistics, and Causal Inference,” Epidemiology 1, no. 6 (1990): 421–429; Sander Greenland, “Response and Follow-Up Bias in Cohort Studies,” American Journal of Epidemiology 106 (1977): 184–188. 78. E.g., Jean Philippe Empana, Pierre Ducimetiere, et al., “Are the Framingham and PROCAM Coronary Heart Disease Risk Functions Applicable to Different European Populations?,” European Heart Journal 24 (2003): 1903–1911. 79. R. Curtis Ellison, “AHA Science Advisory on Wine and Health,” Circulation 104 (2001): e72; R. Curtis Ellison, “Importance of Pattern of Alcohol Consumption,” Circulation 112 (2005): 3818–3819. 80. International Agency for Research on Cancer (IARC), “Overall Evaluations of Carcinogenicity to Humans,” Supplement 7 (2008), http://monographs.iarc.fr/ENG/ Monographs/suppl7/Suppl7-5.pdf, accessed August 27, 2008. 81. E.g., Kaye M. Fillmore and W. Kerr, “A Bostrom Abstinence from Alcohol and Mortality Risk in Prospective Studies,” Nordic Studies on Alcohol and Drugs 19, no. 4 (2002): 295–296. 82. Carl Cranor, Toxic Torts (New York: Cambridge University Press, 2006), 240–241. 83. K. J. Rothman, “Statistics in Non-Randomized Studies,” Epidemiology 1 (1990): 417–418. 84. Greenland, “Response and Follow-Up,” 428. 85. Garth Anderson, “Genomic Instability in Cancer,” Current Science 81, no. 5 (2001): 384, 784. 86. Anderson, “Genomic Instability in Cancer,” 32, 358. 87. James Woodward, “Invariance, Explanation, and Understanding,” Metascience 15 (2006): 56–57. 88. E.g., Empana et al., “Framingham and PROCAM Coronary Heart Disease Risk Functions.” 89. National Research Council/National Academies of Science, 2006, 379, 376.
90. See Wing, “Objectivity and Ethics.” 91. Wing, “Objectivity and Ethics,” 1815. 92. P. Lipton, Inference to the Best Explanation (New York: Routledge, 2004), 7–22. 93. David Lewis, Philosophical Papers, vol. 2 (Oxford: Oxford University Press, 1986); “Causal Explanation,” 214–240; Lipton, Inference to the Best Explanation. 94. Carl Hempel, Aspects of Scientific Explanation (New York: Free Press, 1965), 421–423. 95. Lipton, Inference to the Best Explanation; Jonathan Schaffer, “Contrastive Causation,” Philosophical Review 114, no. 3 (2005): 297–328. 96. John Stuart Mill, A System of Logic (London: Longman, Green, 1904), III.VII.2; Lipton, Inference to the Best Explanation, 38–44; see C. K. Waters, “Causes That Make a Difference,” The Journal of Philosophy 104, no. 11 (2007): 551–579. 97. Hempel, Aspects of Scientific Explanation, 338. 98. Lipton, Inference to the Best Explanation, 58–60. 99. Carl Hempel, The Philosophy of Natural Science (Englewood Cliffs, NJ: Prentice-Hall, 1966), 3–8; Lipton, Inference to the Best Explanation, 7–11, 71–90; see Waters, “Causes.” 100. Hatch et al., “Cancer,” 406–407; Wing et al., “Re-Evaluation,” 56. 101. Hatch et al., “Cancer Rates.” 102. Leukemia and Lymphoma Society (LLS), Chronic Lymphocytic Leukemia (White Plains, NY: LLS, 2009). 103. D. B. Richardson et al., “Positive Associations between Ionizing Radiation and Lymphoma Mortality among Men,” American Journal of Epidemiology 169, no. 8 (2009): 969–976. 104. Gordon K. MacLeod, “A Role for Public Health in the Nuclear Age,” American Journal of Public Health 72, no. 3 (1982): 237; Walker, Three Mile Island; World Nuclear Association (WNA), Three Mile Island (London: WNA, 2001), http://www.world-nuclear.org/info/inf36.html, accessed February 24, 2009. 105. Talbott et al., “Long Term Follow-Up,” 344–345. 106. Hatch et al., “Cancer Rates.” 107. Hatch et al., “Cancer Rates.” 108. Hatch et al., “Cancer Rates.” 109. E.g., J. Raingeaud et al., “Pro-Inflammatory Cytokines and Environmental Stress Cause p38 Mitogen-Activated Protein Kinase Activation,” Journal of Biological Chemistry 270, no. 14 (1995): 7420–7426. 110. E.g., Eva S. Schernhammer, Susan E. Hankinson, Bernard Rosner, Candyce H. Kroenke, Walter C. Willett, Graham A. Colditz, and Ichiro Kawachi, “Job Stress and Breast Cancer Risk,” American Journal of Epidemiology 160, no. 11 (2004): 1079. 111. E.g., O. Helgesson, C. Cabrera, L. Lapidus, C. Bengtsson, and L. Lissner, “Self-Reported Stress Levels Predict Subsequent Breast Cancer,” European Journal of Cancer Prevention 12, no. 5 (2003): 377–381. 112. Wing, “Objectivity and Ethics,” 1811; Wing, 1995; Wing et al., “Re-Evaluation.” 113. E.g., Levin, “Incidence”; Joseph J. Mangano, “Three Mile Island,” Bulletin of the Atomic Scientists 60, no. 5 (2004): 30–35. 114. Hatch et al., “Cancer Rates,” 721. 115. E.g., P. Kaatsch et al., “Leukemia in Young Children Living in the Vicinity of German Nuclear Power Plants,” International Journal of Cancer 122 (2008): 721–726; J. J. Mangano, “A Short Latency Between Radiation Exposure From Nuclear Plants and Cancer in Young Children,” International Journal of Health Services 36, no. 1 (2006): 113–135; W. Watson and D. Sumner, “Measurement of Radioactivity in People Living near the Dounreay Nuclear Establishment, Caithness, Scotland,” International Journal of Radiation Biology 70, no. 2 (1996): 117–130; J. Michaelis et al., “Incidence of Childhood Malignancies in the Vicinity of West German Nuclear Power Plants,” Cancer Causes and Control 3 (1992): 255–263; M. Hatch and M.
Susser, “Background Gamma Radiation and Childhood Cancers within Ten Miles of a US Nuclear Plant,” International Journal of Epidemiology 19, no. 3 (1990): 546–552; M. J. Gardner et al.,
“Results of Case Control Study of Leukemia and Lymphoma among Young People near Sellafield Nuclear Plant,” British Medical Journal 300 (1990): 423–434; B. E. Gibson et al., “Leukemia in Young Children in Scotland,” Lancet 2 (1988): 630–635; M. A. Heasman et al., “Childhood Leukaemia in Northern Scotland,” Lancet 1 (1986): 266–268; D. Forman et al., “Cancer Near Nuclear Installations,” Nature 329 (1987): 499–505. 116. Joseph J. Mangano, Jay M. Gould, Ernest J. Sternglass, Janette D. Sherman, Jerry Brown, and William McDonnell, “Infant Death and Childhood Cancer Reductions after Nuclear Plant Closings,” Archives of Environmental Health 57 (2002): 23–31. 117. E.g., Hatch et al., “Cancer Rates”; Hatch et al., “Cancer,” 397–398, 410–411; Rambo, “Three Mile Island”; Rogovin and Frampton, Three Mile Island. 118. Epstein, “Science for Sale,” 9–10; May, Greenpeace Book; Lang, “Three Mile Island,” 11. 119. Wing, “Objectivity and Ethics,” 1810, 1812; Wing, 1995; Wing et al., “Re-evaluation”; May, Greenpeace Book; Katagiri Mitsuru and Aileen M. Smith, Three Mile Island (Harrisburg, PA: Three Mile Island Action, 1989), www.tmia.com/node/118, accessed July 7, 2009; Wasserman and Solomon, Killing Our Own, ch. 14. 120. Thomas Jefferson National Accelerator Facility, Radiation Biological Effects (Newport News, VA: Jefferson Labs, 2004). 121. Beyea and DeCicco, Re-Estimating the Noble Gas Releases, 20. 122. International Commission on Radiological Protection (ICRP), 2005 Recommendations (Stockholm: ICRP, 2005), 32. 123. Wasserman and Solomon, Killing Our Own, ch. 14; H. Yamazaki et al., “Pelvic Irradiation-Induced Eosinophilia Is Correlated to Prognosis,” Radiation Medicine 23, no. 5 (2005): 317–321; Robert Del Tredici, People of Three Mile Island (San Francisco: Sierra Club Books, 1980). 124. Wasserman and Solomon, Killing Our Own, ch. 14; Vladimir A. Shevchenko and Galina P. Snigiryova, “Cytogenetic Effects of the Action of Ionizing Radiations on Human Populations,” in Consequences of the Chernobyl Catastrophe, ed. E. B. Burlakova, 23–45 (Moscow: Center for Russian Environmental Policy and Scientific Council on Radiobiology, 1996); hereafter cited as Burlakova, CCC; Vladimir A. Shevchenko, “Assessment of Genetic Risk from Exposure of Human Populations to Radiation,” in Burlakova, CCC, 47–61; Wing, “Objectivity and Ethics”; Wing, 1995; Wing et al., “Re-Evaluation.” 125. Pennsylvania Department of Health (PA DOH), TMI Area Death Rates No Higher Than State Average (Harrisburg: PA DOH, 1981), Tables 4, 5. 126. Wasserman and Solomon, Killing Our Own, ch. 14. 127. Talbott et al., “Long Term Follow-Up,” 343, 348. 128. Greenland, “Response and Follow-Up,” 427. 129. James Woodward, Making Things Happen (Oxford: Oxford University Press, 2003). 130. NRC/NAS, 2006, 6. 131. Cardis et al., “Risk”; Cardis et al., “Country.” 132. E.g., MacLeod, “A Role for Public Health”; Wasserman and Solomon, Killing Our Own, ch. 14; Mangano, “Three Mile Island”; Levin, “Incidence.” 133. M. Wahlen, C. O. Kunz, et al., “Radioactive Plume from the Three Mile Island Accident,” Science 207, no. 4431 (1980): 639–640; R. W. Holloway and C. K. Liu, “Xenon-133 in California, Nevada, and Utah from the Chernobyl Accident,” Environmental Science and Technology 22, no. 5 (1988): 583–586; Mangano, “Three Mile Island,” 32–33. 134. Mangano, “Three Mile Island,” 33–35. 135. Wahlen, Kunz, et al., “Radioactive Plume”; Holloway and Liu, “Xenon-133”; Wasserman and Solomon, Killing Our Own, ch. 14; Mangano, “Three Mile Island.” 136.
Wing, “Objectivity and Ethics”; Wing, 1995; Wing et al., “Re-Evaluation.” 137. Lipton, Inference to the Best Explanation, 119, 139. 138. James E. Gunckel, Affidavit 9 (Bridgewater, NJ: The Bulletin of the Torrey Botanical Club, May 11, 1984), http://www.southernstudies.org/images/sitepieces/ gunckel_
affidavit_plants.pdf, accessed July 7, 2009; Mitsuru and Smith, Three Mile Island; Del Tredici, People of Three Mile Island; May, Greenpeace Book. 139. Shevchenko and Snigiryova, “Cytogenetic Effects,” 23–45; Shevchenko, “Assessment of Genetic Risk,” 47–61; Rambo, “Three Mile Island”; Harvey Wasserman, People Died at Three Mile Island (Columbus, OH: Free Press, 2009), www.freepress.org/columbus/display/7/2009/1733, accessed July 27, 2009; Wing et al., “Re-evaluation”; Wing, 1995. 140. MacLeod, “A Role for Public Health”; Wasserman and Solomon, Killing Our Own, ch. 14; Mangano, “Three Mile Island”; see Arjun Makhijani, B. Smith, and M.C. Thorne, Science for the Vulnerable (Takoma Park, MD: Institute for Energy and Environmental Research, 2006). 141. WNA, Health Impacts. 142. Thompson et al., TMI, 9–10; Sturgis, Investigation. 143. Hatch et al., “Cancer”; Hatch et al., “Cancer Rates”; Talbott et al., “Long Term Follow-Up.” 144. Greenland, “Response and Follow-Up,” 425–427; C. R. Muirhead, “Invited Commentary,” American Journal of Epidemiology 32, no. 3 (1990): 414; Mangano, “Three Mile Island.” 145. Mangano, “Three Mile Island,” 35. 146. Mangano, “Three Mile Island,” 35. 147. Makhijani, Smith and Thorne, Science for the Vulnerable; Mangano, “Three Mile Island”; Wasserman, People Died at Three Mile Island; Wing et al., “Re-Evaluation”; Wing, 1995. 148. Mangano, “Three Mile Island”; Wasserman, People Died at Three Mile Island; Wing et al., “Re-Evaluation”; Wing, 1995. 149. J. L. Bermejo, J. Sundquist, and K. Hemminki, “Risk of Cancer among the Offspring of Women Who Experienced Parental Death During Pregnancy,” Cancer Epidemiology, Biomarkers, and Prevention 16, no. 10 (2007): 2204–2208. 150. NRC/NAS, 2006. 151. NRC/NAS, 2006. 152. Hatch et al., “Cancer,” 406–407. 153. Woodward, Making Things Happen. 154. James Woodward, “Prospects for a Manipulability Account of Causation,” in Logic, Methodology and Philosophy of Science, ed. P. Hajek, L. Valdes-Villanueva, and D. Westerstahl (London: King’s College Publications, 2003), 341. 155. Waters, “Causes.” 156. Woodward, Making Things Happen, 67–68, 145–46. 157. Shrader-Frechette, WWW, ch. 4. 158. E.g., P. Machamer, L. Darden, and C. Craver, “Thinking about Mechanisms,” Philosophy of Science 67 (2000): 1–25; James Bogen, “Analyzing Causality,” International Studies in the Philosophy of Science 18 (2004): 3–26; Stuart Glennan, “Rethinking Mechanical Explanation,” Philosophy of Science 69 (September 2002): S342–S353; Stuart Glennan, “Rethinking Mechanistic Explanation,” Philosophy of Science 69 (2002): S342–S353. 159. Lipton, Inference to the Best Explanation. 160. Philip Kitcher, “Explanatory Unification and the Causal Structure of the World,” in Scientific Explanation, ed. Wesley Salmon and Philip Kitcher, 410–505 (Minneapolis: University of Minnesota Press, 1989). 161. Kitcher, “Explanatory Unification; Woodward, Making Things Happen, ch. 8. 162. Ellison, “AHA Science Advisory.” 163. Caldicott, Nuclear Power Is Not the Answer, 67. 164. Wasserman and Solomon, Killing Our Own, ch. 14; Rosalie Bertell, Rosalie Bertell on Three Mile Island (Toronto: International Institute of Concern for Public Health, 1998), http:// www.iicph.org/docs/tmi/htm, accessed March 11, 2009; In Re: TMI, 1999. 165. Wing, “Objectivity and Ethics.” 166. E.g., Hatch et al., “Cancer.” 167. Kai Koizumi, R & D Trends and Special Analyses, AAAS Report (Washington, DC: American Association for the Advancement of Science, 2005, 2004); Sheldon
Krimsky, Science in the Private Interest (Lanham, MD: Rowman and Littlefield, 2003); Kristin Shrader-Frechette, Taking Action, Saving Lives (New York: Oxford University Press, 2007). 168. S. Jablon, Z. Hrubec, and J. D. Boice Jr., “Cancer in Populations Living Near Nuclear Facilities,” Journal of the American Medical Association 265 (1991): 1403–1408. 169. E.g., Hatch and Susser, “Background Gamma Radiation.”

Chapter 13

1. An earlier version of several arguments in this chapter appeared in Kristin Shrader-Frechette, Risk and Rationality (Berkeley: University of California Press, 1993): 100–130; hereafter cited as Shrader-Frechette, RR; and in Kristin Shrader-Frechette, What Will Work (New York: Oxford University Press, 2011): ch. 1; hereafter cited as Shrader-Frechette, WWW. 2. Xinhua Qu, Xiaolu Huang, and Kerong Dai, “Metal-on-Metal or Metal-on-Polyethylene for Total Hip Arthroplasty,” Archives of Orthopaedic and Trauma Surgery 131, no. 11 (2011): 1573–1583; A. Malviya, J. R. Ramaskandhan, R. Bowman, M. Hashmi, J. P. Holland, S. Kometa, and E. Lingard, “What Advantage Is There to Be Gained Using Large Modular Metal-on-Metal Bearings in Routine Primary Hip Replacement?,” Journal of Bone and Joint Surgery 93-B, no. 12 (December 2011): 1602–1609; Sonali Natu, Raghavendra Prasad Sidaginamale, Jamshid Gandhi, David J. Langton, and Antoni V. F. Nargol, “Adverse Reactions to Metal Debris,” Journal of Clinical Pathology 65, no. 5 (2012): 409–418; A. J. Smith, P. Dieppe, K. Vernon, M. Porter, and A. W. Blom, “Failure Rates of Stemmed Metal-on-Metal Hip Replacements,” Lancet 379, no. 9822 (March 2012): 1199–1204, doi:10.1016/S0140-6736(12)60353-5. PMID 22417410. 3. Cass Sunstein, Risk and Reason (Cambridge: Cambridge University Press, 2002), makes such claims; hereafter cited as Sunstein, RR. See Michael Resnick, Choices (Minneapolis: University of Minnesota Press, 1987), 32. 4. Thomas Kuhn, The Essential Tension (Chicago: University of Chicago Press, 1977), 320–339; Thomas Kuhn, The Structure of Scientific Revolutions (Chicago: University of Chicago Press, 2000), 24–25. 5. Norwood Russell Hanson, Patterns of Discovery (Cambridge: Cambridge University Press, 1958). 6. Philip Kitcher, Science in a Democratic Society (Amherst, NY: Prometheus Books, 2011); Philip Kitcher, Science, Truth, and Democracy (New York: Oxford University Press, 2001). 7. Helen Longino, “Beyond ‘Bad Science’,” Science, Technology, and Human Values 8, no. 1 (Winter 1983): 7–17; Helen Longino, Science as Social Knowledge (Princeton, NJ: Princeton University Press, 1990); Helen Longino, The Fate of Knowledge (Princeton, NJ: Princeton University Press, 2002). 8. Kristin Shrader-Frechette, Ethics of Scientific Research (Lanham, MD: Rowman and Littlefield, 1994). 9. Kristin Shrader-Frechette, “Fukushima, Flawed Epistemology, and Black-Swan Events,” Ethics, Policy, and Environment 14, no. 3 (2011): 267–272. 10. U.S. Nuclear Regulatory Commission, Reactor Safety Study, NUREG-75/014, WASH 1400 (Washington, DC: Government Printing Office, 1975); K. S. Shrader-Frechette, Nuclear Power and Public Policy (Boston: Reidel, 1983), 84–85; Shrader-Frechette, WWW, ch. 4. 11. Nuclear Regulatory Commission, Subcommittee on Reliability and Probability Risk Assessment, Subcommittee Meeting (Rockville, MD: NRC, January 24, 2003). 12. Union of Concerned Scientists, The Risks of Nuclear Power Reactors (Cambridge, MA: Union of Concerned Scientists, 1977); Nuclear Energy Policy Study Group, Nuclear Power (Cambridge, MA: Ballinger, 1977); this Ford-funded report was done by Mitre
Corporation. R. M. Cooke, “Risk Assessment and Rational Decision Theory,” Dialectica 36, no. 4 (1982): 334. 13. See Cooke, “Risk Assessment.” 14. J. Harsanyi, “Can the Maximin Principle Serve as a Basis for Morality? A Critique of John Rawls’s Theory,” American Political Science Review 69, no. 2 (1975): 594. See Glockner and Tilmann Betsch, “The Empirical Content of Theories in Judgment and Decision Making,” Judgment and Decision Making 6, no. 8 (2011): 711–721; Michael D. Lee and Benjamin R. Newell, “Using Hierarchical Bayesian Methods to Examine the Tools of Decision-Making,” Judgment and Decision Making 6, no. 8 (December 2011): 832–842. 15. J. Harsanyi, “Advances in Understanding Rational Behavior,” in Rational Choice, ed. J. Elster (New York: New York University Press, 1986), 88; hereafter cited as Elster, RC. See also J. Harsanyi, “Understanding Rational Behavior,” in Foundational Problems in the Special Sciences, ed. R. E. Butts and J. Hintikka (Boston: Reidel, 1977), 2: 322; hereafter cited as Butts and Hintikka, FP; A. Tversky and D. Kahneman, “The Framing of Decisions and the Psychology of Choice,” in Elster, RC, 125. Paul Levine, Peter McAdam, and Joseph Pearlman, “Probability Models and Robust Policy Rules,” European Economic Review 56, no. 2 (February 2012): 246–262. 16. Harsanyi, “Maximin Principle,” 594; Harsanyi, “Understanding Rational Behavior,” 320–321. Regarding individual decisions under uncertainty, see R. D. Luce and H. Raiffa, Games and Decisions (New York: Wiley, 1957), 275–326. 17. J. Rawls, A Theory of Justice (Cambridge, MA: Harvard University Press, 1971), 75–83, and John Rawls, Political Liberalism (New York: Columbia University Press, 1993). 18. Pratik Mehta and Rebecca Smith-Bindman, “Airport Full-Body Screening: What Is the Risk?,” Archives of Internal Medicine 171, no. 12 (March 2011): 1112–1115. 19. H. Otway and M. Peltu, Regulating Industrial Risks (London: Butterworths, 1985), 4; I. Hacking, “Culpable Ignorance of Interference Effects,” in Values at Risk, ed. Douglas MacLean, 136–154 (Totowa, NJ: Rowman & Allanheld, 1986); hereafter cited as MacLean, VR. 20. Harsanyi, “Understanding Rational Behavior,” 320; Otway and Peltu, Regulating Industrial Risks, 115; Resnick, Choices, 36. See Resnick, Choices, 88–91. Harsanyi, “Maximin Principle,” 594. See also R. C. Jeffrey, The Logic of Decision (Chicago: University of Chicago Press, 1983); J. Marschak, “Towards an Economic Theory of Organization and Information,” in Decision Processes, ed. R. Thrall, C. Coombs, and R. Davis (London: Wiley, 1954), 187ff.; L. Ellsworth, “Decision-Theoretic Analysis of Rawls’ Original Position,” in Foundations and Applications of Decision Theory, ed. C. A. Hooker, J. J. Leach, and E. F. McClennen (Dordrecht: Reidel, 1978): 2–29ff. 21. Harsanyi, “Understanding Rational Behavior,” 323; see also J. Harsanyi, “On the Rationale of the Bayesian Approach,” in Butts and Hintikka, FP, vol. 2. See Harsanyi, “Understanding Rational Behavior,” 320–322. 22. Harsanyi, “Maximin Principle,” 595. See Resnick, Choices, 26ff.; R. D. Luce and H. Raiffa, Games and Decisions (New York: Wiley, 1958), ch. 13. 23. Based on Resnick, Choices, 41. See Luce and Raiffa, Games and Decisions, 275–326. 24. See A. K. Sen, “Rawls versus Bentham,” in Reading Rawls, ed. N. Daniels (New York: Basic Books, 1981), 283–292; Peter Vanderschraaf, “Justice as Mutual Advantage and the Vulnerable,” Politics, Philosophy and Economics 10, no. 2 (2011): 119–147. 25. Harsanyi, “Maximin Principle,” 595. 26. See C. Starr and C.
Whipple, “Risks of Risk Decisions,” Science 208, no. 4448 (1980): 1118; K. Misra, Risk Analysis and Management (London: Springer, 2008), 667–681. 27. Harsanyi, “Maximin Principle,” 594–595. 28. Harsanyi, “Maximin Principle,” 595. 29. See J. W. N. Watkins, “Towards a Unified Decision Theory,” in Butts and Hintikka, FP, 2: 351. 30. Watkins, “Unified Decision Theory,” 375.
31. B. Ames, R. Magaw, and L. Gold, “Ranking Possible Carcinogenic Hazards,” Science 236, no. 4799 (April 17, 1987): 271–280. K. S. Shrader-Frechette, Risk Analysis and Scientific Method (Boston: Reidel, 1985), 157ff.; Sunstein, RR. 32. Harsanyi, “Maximin Principle,” 595. 33. See Hacking, “Culpable Ignorance”; and D. Kahneman and A. Tversky, “Subjective Probability,” in Judgment under Uncertainty, ed. D. Kahneman, P. Slovic, and A. Tversky (Cambridge: Cambridge University Press, 1982), 46. See also D. Kahneman and A. Tversky, “On the Psychology of Prediction,” in Kahneman et al., Judgment under Uncertainty, 68. 34. J. Rawls, “Some Reasons for the Maximin Criterion,” American Economic Review 64, no. 1 (May 1974): 141–146. See MacLean, “Introduction,” in MacLean, VR, 12. 35. See D. MacLean, “Risk and Consent,” in MacLean, VR, 17–30. 36. Alice Kaswan, “Climate Change, the Clean Air Act, and Industrial Pollution,” UCLA Journal of Environmental Law and Policy 30 (2012): 51–120. 37. See J. G. March, “Bounded Rationality,” in Elster, RC, 148. 38. Harsanyi, “Maximin Principle,” 605. 39. Harsanyi, “Maximin Principle,” 595. 40. Kristin Shrader-Frechette, “Environmental-Justice Whistleblowers Versus Industry Retaliators,” Environmental Justice 5, no. 4 (2012): 214–218; hereafter cited as Shrader-Frechette, EJ. 41. Resnick, Choices, 35–37. 42. Harsanyi, “Maximin Principle,” 598. 43. See MacLean, “Introduction” and “Risk and Consent,” in MacLean, VR. 44. Harsanyi, “Maximin Principle,” 595; see K. J. Arrow, “Some Ordinalist-Utilitarian Notes on Rawls’s Theory of Justice,” Journal of Philosophy 70, no. 9 (May 10, 1973): 255. 45. If someone objects that the disutility of nonnotification should be greater than (–16), see previous notes. 46. A. Sen, “The Right to Take Personal Risks,” in MacLean, VR, 155–170; for an opposite view, see MacLean, “Risk and Consent.” 47. See Resnick, Choices, 43, 205–212. 48. Harsanyi, “Maximin Principle,” 596. 49. Harsanyi, “Maximin Principle,” 597. 50. See Shrader-Frechette, RR, 100–130; Rawls, Theory of Justice, 54–83, 302; and “Distributive Justice,” in Philosophy, Politics, and Society, ed. P. Laslett and W. G. Runciman (Oxford: Blackwell, 1967), 58–82. 51. Harsanyi, “Maximin Principle.” See K. Binmore, L. Stewart, and A. Voorhoeve, “How Much Ambiguity Aversion?,” Journal of Risk and Uncertainty 45 (2012): 215–238. 52. See Luce and Raiffa, Games and Decisions, 284ff. 53. Luce and Raiffa, Games and Decisions, 599. See D. MacLean, “Philosophical Issues for Centralized Decisions,” in MacLean, VR, 17–30. 54. Harsanyi, “Maximin Principle,” 598–600. See also Samuels, “Arrogance of Intellectual Power,” 113–114, and MacLean, “Distribution of Risk.” 55. See V. Kerry Smith, “Benefit Analysis for Natural Hazards,” Risk Analysis 6, no. 3 (1986): 325ff. 56. Resnick, Choices, 37. 57. See A. Tversky and D. Kahneman, “Judgment under Uncertainty” and “Subjective Probability,” in Kahneman et al., Judgment under Uncertainty, 3ff., 32ff. See also J. Elster, “Introduction,” in Elster, RC, 6, 18–19. 58. See H. L. Dreyfus and S. E. Dreyfus, “Decision Analysis Model of Rationality,” in Butts and Hintikka, FP, 2:121ff. See also Hacking, “Culpable Ignorance,” and Michael Lew, “Bad Statistical Practice,” British Journal of Pharmacology 166, no. 5 (July 2012): 1559–1567; Aris Spanos, “Foundational Issues in Statistical Modeling,” Rationality, Markets and Morals 2 (2011): 146–178. 59. See Luce and Raiffa, Games and Decisions, 284–285.
60. See Luce and Raiffa, Games and Decisions, 284–285. 61. Luce and Raiffa, Games and Decisions, chap. 13, esp. 293. See D. H. Oughton, “Social and Ethical Issues in Environmental Remediation Projects,” Journal of Environmental Radioactivity (2011): 1–5. 62. Oughton, “Social and Ethical Issues,” chap. 13. See Shrader-Frechette, RR, 100–130. 63. Harsanyi, “Maximin Principle,” 599. 64. See Kristin Shrader-Frechette, Taking Action, Saving Lives (New York: Oxford University Press, 2007), ch. 1; hereafter cited as Shrader-Frechette, TASL. Kristin Shrader-Frechette, Environmental Justice (New York: Oxford University Press, 2003); hereafter cited as Shrader-Frechette, EJ; and Oscar Morales Jr., Sara E. Grineski, and Timothy W. Collins, “Structural Violence and Environmental Injustice,” Local Environment 17, no. 1 (2012): 1–21. 65. David H. Cleverly et al., Municipal Waste Combustion Study (New York: Taylor and Francis, 1989), 2–9; D. Nebert, “Genes Encoding Drug-Metabolizing Enzymes,” in Phenotypic Variation, ed. Avril D. Woodhead (New York: Plenum Press, 1988), 59. Yafei Li et al., “Genetic Variations in Multiple Drug Action Pathways and Survival in Advanced Stage Non-Small Cell Lung Cancer Treated with Chemotherapy,” Clinical Cancer Research 17, no. 11 (2011): 3830–3840. 66. See K. Borch, “Ethics, Institutions, and Optimality,” in Decision Theory and Social Ethics, ed. H. W. Gottinger and W. Leinfellner (Dordrecht: Reidel, 1978), 242. 67. Rawls, Theory of Justice, 75–78. See Samuels, “Arrogance of Intellectual Power,” 113–120. 68. Rawls, Theory of Justice, 12–17. 69. Rawls, Theory of Justice, 94. 70. Rawls, “Distributive Justice,” 58–82, esp. sec. 4. 71. Harsanyi, “Maximin Principle”; L. Lave and B. Leonard, “Regulating Coke Oven Emissions,” in Paustenbach, Risk Assessment, 1068–1069. 72. W. T. Blackstone, “On the Meaning and Justification of the Equality Principle,” in The Concept of Equality, ed. W. T. Blackstone (Minneapolis, MN: Burgess, 1969), 121. 73. J. Rawls, “Justice as Fairness,” in Philosophy of Law, ed. J. Feinberg and H. Gross (Encino, CA: Dickenson, 1975), 284; Francis Fukuyama, “Dealing with Inequality,” Journal of Democracy 22, no. 3 (2011): 79. 74. See M. C. Beardsley, “Equality and Obedience to Law,” in Law and Philosophy, ed. S. Hook (New York: New York University Press, 1964), 35–36; Berlin, “Equality,” in Hook, Law and Philosophy, 33; W. K. Frankena, “Some Beliefs about Justice,” in Feinberg and Gross, Philosophy of Law, 250–251; Rawls, “Justice as Fairness,” 277, 280, 282; G. Vlastos, “Justice and Equality,” in Social Justice, ed. R. B. Brandt (Englewood Cliffs, NJ: Prentice-Hall, 1962), 50, 56. 75. J. R. Pennock, “Introduction,” in The Limits of the Law, Nomos 15, ed. J. R. Pennock and J. W. Chapman (New York: Lieber-Atherton, 1974), 2, 6. See also Didier Mineur, “The Moral Foundation of Law and Ethos of Liberal Democracies,” Ratio Juris 25, no. 2 (June 2012): 133–148; Sang-Jin Han, “Individual Freedom and Human Rights Community,” Development and Society 40, no. 1 (June 2011): 17–43. 76. See J. Rawls, “Justice as Fairness,” Journal of Philosophy 54, no. 22 (October 1957): 653–662; J. Rawls, “Justice as Fairness,” Philosophical Review 67 (April 1958): 164–194. See also Rawls, Theory of Justice, 3–53. Serge-Christophe Kolm, “Inequality, New Directions,” Journal of Economic Inequality 9, no. 3 (2011): 329–352. 77. See Rawls, Theory of Justice, 586; and A. Sen, “Welfare Inequalities and Rawlsian Axiomatics,” in Butts and Hintikka, FP, 2: 288. 
Matthew Clayton, “Equality, Justice and Legitimacy in Selection,” Journal of Moral Philosophy 9, no. 1 (2012): 8–30. 78. See Shrader-Frechette, Science Policy, Ethics, and Economic Methodology of Social Science (New York: Springer, 1984), 220–221. 79. See Shrader-Frechette, Science Policy, 222ff.; Frankena, “Beliefs about Justice,” 252–257.
80. W. K. Frankena, “The Concept of Social Justice,” in Brandt, Social Justice, 10, 14. See R. Taylor, “Justice and the Common Good,” in Blackstone, Concept of Equality, 94–97; and Peter Singer, Practical Ethics (Cambridge: Cambridge University Press, 2011). 81. See J. Rees, Equality (New York: Praeger, 1971), 116–117, 120; R. B. Stewart, “Paradoxes of Liberty, Integrity, and Fraternity,” Environmental Law 7, no. 3 (Spring 1977): 474–476; J. R. Pennock, Democratic Political Theory (Princeton, NJ: Princeton University Press, 1979), 16–58, esp. 38. See Rawls, Theory of Justice; and S. I. Benn, “Egalitarianism and the Equal Consideration of Interests,” in Equality, Nomos 9, ed. J. R. Pennock and J. W. Chapman (New York: Lieber-Atherton, 1968), 75–76. 82. Lave and Leonard, “Coke Oven Emissions,” 1068–1069; H. Bethe, “The Necessity of Fission Power,” Scientific American 234, no. 1 (January 1976): 26ff.; Sunstein, RR. 83. Lave and Leonard, “Coke Oven Emissions,” 1068–1069. 84. Frankena, “Concept of Social Justice,” 15. 85. See Markovic, “Equality and Local Autonomy,” 85, 87–88; Patterson, “Inequality,” 33–34; H. Laski, “Liberty and Equality,” in Blackstone, Concept of Equality, 170, 173; Rees, Equality, 61–79; and H. J. Gans, “The Costs of Inequality,” in Small Comforts for Hard Times, ed. M. Mooney and F. Stuber (New York: Columbia University Press, 1977), 50–51. Anmol Chadda and William Julius Wilson, “ ‘Way Down in the Hole,’ ” Critical Inquiry 38, no. 1 (2011): 164–188. 86. Shrader-Frechette, TASL, ch. 1; Kevin Phillips, Wealth and Democracy (New York: Broadway, 2002), 151–155; Lawrence Mishel, Jared Bernstein, and John Schmitt, The State of Working America (Ithaca, NY: Economic Policy Institute, Cornell University Press, 2001). See also Thomas Piketty and Emmanuel Saez, Income Inequality in the United States, 1913–1998, No. W8467 (Washington, DC: National Bureau of Economic Research, September 2001). 87. See Shrader-Frechette, Science Policy, chap. 7; Patterson, “Inequality,” 21–30. Williams, “The Idea of Equality,” in Blackstone, Concept of Equality, 49–53; and J. H. Schaar, “Equality of Opportunity and Beyond,” in Pennock and Chapman, Equality, 231–240. See also Christopher Freiman, “Equal Political Liberties,” Pacific Philosophical Quarterly 93, no. 2 (2012): 158–174. 88. See Mishan, Economic Fallacies, 232–233, 245ff.; Rees, Equality, 36. See also Plamenatz, “Equality of Opportunity”; and Larkin, “Ethical Problems.” 89. R. Grossman and G. Daneker, Jobs and Energy (Washington, DC: Environmentalists for Full Employment, 1977), 1–2. 90. Between 1947 and 1977, for example, employment in the service sector increased 95 percent, more than in any other sector. See Robert Haveman, Carolyn Heinrich, and Timothy Smeeding, “Policy Response to the Recent Poor Performance of the U.S. Labor Market,” Journal of Policy Analysis and Management 31, no. 1 (2012): 177–186; Amy K. Glasmeier and Christa R. Lee-Chuvala, “Austerity in America,” Cambridge Journal of Regions, Economy and Society 4, no. 3 (2011): 457–474. 91. See Gibbard, “Risk and Value”; Shrader-Frechette, Science Policy, 227–228. 92. Mishan, Economic Fallacies, 237. 93. Shrader-Frechette, EJ. 94. R. B. Stewart, “Pyramids of Sacrifice?” in Land Use and Environment Law Review, 1978, ed. F. A. Strom (New York: Clark Boardman, 1978), 172. See A. M. Freeman, “Distribution of Environmental Quality,” in Environmental Quality Analysis, ed. A. V. Kneese and B. T. Bower (Baltimore: Johns Hopkins University Press, 1972), 271–275. 95. See V. 
Brodine, “A Special Burden,” Environment 13, no. 2 (March 1971): 24. See D. N. Dane, “Bad Air for Children,” Environment 18, no. 9 (November 1976): 26–34. A. M. Freeman, “Income Distribution and Environmental Quality,” in Pollution, Resources, and the Environment, ed. A. C. Enthoven and A. M. Freeman (New York: Norton, 1973), 101. A. V. Kneese, “Economics and the Quality of the Environment,” in Enthoven and Freeman, Pollution, 74–79; and Shrader-Frechette, WWW.
96. See Gibbard, “Risk and Value,” 96. See also Samuels, “Arrogance of Intellectual Power.” 97. See Samuels, “Arrogance of Intellectual Power,” 113–120. See also Shrader-Frechette, Science Policy, 228–229; and Shrader-Frechette, WWW. 98. Hans Jonas, “Philosophical Reflections on Experimenting with Human Subjects,” in Ethics in Perspective, ed. K. J. Struhl and P. R. Struhl (New York: Random House, 1975), 242–353. Jorn Sonderholm, “On the Moral Significance of Contribution to Poverty,” Journal of Global Ethics 7, no. 3 (2011): 315–319. 99. R. B. Brandt, Ethical Theory (Englewood Cliffs, NJ: Prentice-Hall, 1959), 415–420. See R. Taylor, “Justice and the Common Good,” in Hook, Law and Philosophy; M. C. Beardsley, “Equality and Obedience to Law,” in Hook, Law and Philosophy, 193. 100. See Shrader-Frechette, RR, 100–130. 101. See Samuels, “Arrogance of Intellectual Power,” 119; and Rawls, Theory of Justice, 172, 323. 102. See Tversky and Kahneman, “Framing of Decisions,” 123ff. 103. March, “Bounded Rationality,” 153. 104. R. B. Brandt, “Personal Values and the Justification of Institutions”; and Ladd, “Mechanical Models,” in Hook, Human Values, 37, 159, 166. 105. See Samuels, “Arrogance of Intellectual Power,” 113–120; Daniel M. Hausman, “Mistakes about Preferences in the Social Sciences,” Philosophy of the Social Sciences 41, no. 1 (2011): 3–25. 106. See notes 107–109. 107. Kennedy, “Social Choice,” 148. 108. Harsanyi, “Maximin Principle,” 600. 109. Harsanyi, “Maximin Principle,” 602. 110. Harsanyi, “Maximin Principle,” 602. 111. Harsanyi, “Maximin Principle,” 601. 112. Harsanyi, “Maximin Principle,” 601. 113. Harsanyi, “Maximin Principle,” 602. 114. Utility functions might measure welfare, but only after taking account of values like fairness. 115. See MacLean, “Distribution of Risk”; Gauthier, “Social Contract,” 52–53. 116. Rawls, Theory of Justice, 117. 117. See, for example, Rawls, Theory of Justice, 117, 191–192; D. Lyons, “Human Rights and the General Welfare,” in Rights, ed. D. Lyons (Belmont, CA: Wadsworth, 1979), 181–184. 118. Harsanyi, “Maximin Principle,” 601–602. 119. See David M. Hausman and Matt Sensat Waldren, “Egalitarianism Reconsidered,” Journal of Moral Philosophy 8, no. 4 (2011): 567–586. 120. Hacking, “Culpable Ignorance,” makes similar points. 121. See Woodhead, Phenotypic Variation; Rawls, Theory of Justice, 158–161, 191. 122. Harsanyi, “Maximin Principle,” 602. 123. See Rawls, “Distributive Justice;” Rosenkrantz, “Distributive Justice,” 11; Rawls, Theory of Justice, 302. 124. Harsanyi, “Maximin Principle,” 602. 125. See Lyons, “Human Rights and General Welfare,” 176–181. 126. Harsanyi, “Maximin Principle,” 602. See Baier, “Poisoning the Wells.” 127. See Rawls, Theory of Justice, 586; and A. Sen, “Welfare Inequalities and Rawlsian Axiomatics,” in Butts and Hintikka, FP, 2:288. 128. Public Law 91-190, United States Statutes at Large, 91st Cong., 1st sess., 1969, 83: 852–856; see esp. part I, secs. 101(b)2 and 101(c). 129. See Samuels, “Arrogance of Intellectual Power,” 113–120. 130. The Pearl Harbor example is in M. Douglas and A. Wildavsky, Risk and Culture (Berkeley: University of California Press, 1982), 94. 131. P. Slovic, B. Fischhoff, and S. Lichtenstein, “Facts vs. Fears,” in Kahneman et al., Judgment under Uncertainty, 485.
132. J. Raloff and J. Silberner, “Chernobyl: Emerging Data on Accident,” Science News 129, no. 19 (1986): 292. 133. R. J. Mulvihill, D. R. Arnold, C. E. Bloomquist, and B. Epstein, Analysis of United States Power Reactor Accident Probability, PRC R-695 (Los Angeles: Planning Research Corporation, 1965). This is an update of WASH-740. 134. A. Tversky and D. Kahneman, “Belief in the Law of Small Numbers,” and “Judgment under Uncertainty,” in Kahneman et al., Judgment under Uncertainty, 11–20. 135. Tversky and Kahneman, “Belief in the Law of Small Numbers,” and “Judgment under Uncertainty,” 23–31, 4–11. 136. Kahneman and Tversky, “Subjective Probability,” 32, 46. See Hacking, “Culpable Ignorance.” 137. See S. Oskamp, “Overconfidence in Case-Study Judgments,” in Kahneman et al., Judgment under Uncertainty, 287–293. 138. See Shrader-Frechette, Nuclear Power, 98–100. 139. Slovic, “Facts vs. Fears,” 475–478. William J. Burns and Paul Slovic, “Risk Perception and Behaviors,” Risk Analysis 32, no. 4 (2012): 579–582. 140. See Kahneman and Tversky, “Subjective Probability,” 46. 141. See Harsanyi, “Bayesian Approach,” 382. 142. Without discounting of the future, benefit-cost analysis is strengthened. 143. Lave and Leonard, “Coke Oven Emissions,” 1064–1081; Sunstein, RR. 144. “Americans See a Climate Problem,” Time.com, March 26, 2006, http://www.time.com/time/nation/article/0,8599,1176967,00.html, accessed May 26, 2006; Naomi Oreskes, “The Scientific Consensus on Climate Change,” in Climate Change, ed. Joseph F. Dimento and Pamela Doughman (Cambridge, MA: MIT Press, 2007), 65, 94. 145. Fred Singer, The Scientific Case Against the Global Climate Treaty (Science and Environmental Policy Project, 1997); Steven Hayward and Kenneth Green, Politics Posing as Science (Washington, DC: American Enterprise Institute, 2007), http://www.aei.org/print?pub=outlook&pubId=27185&authors=<a href=scholar/28Steve, accessed March 29, 2010. AEI is funded by fossil-fuel industries, as Shrader-Frechette, WWW, ch. 1, shows. See David Michaels, Doubt Is Their Product (New York: Oxford University Press, 2008). For how ExxonMobil funds Singer, see http://www.exxonsecrets.org/html/personfactsheet.php?id=1. See Michaels, Doubt; Thomas McGarity and Wendy Wagner, Bending Science (Cambridge, MA: Harvard University Press, 2008); Shrader-Frechette, WWW, ch. 1. 146. Spencer R. Weart, The Discovery of Global Warming (Cambridge, MA: Harvard University Press, 2003). Oreskes, “The Scientific Consensus on Climate Change,” 73; Naomi Oreskes, “Beyond the Ivory Tower,” Science 306, no. 5702 (2004): 1686. 147. Shrader-Frechette, WWW.

Chapter 14

1. IARC, IARC Classifies Radiofrequency Electromagnetic Fields as Possibly Carcinogenic (Lyon, France: International Agency for Research on Cancer, 2011), http://tinyurl.com/3sya7sy, accessed July 21, 2012. Earlier versions of some arguments here appeared in Kristin Shrader-Frechette, Risk and Rationality (Berkeley: University of California Press, 1993), 131–145; hereafter cited as Shrader-Frechette, RR. 2. S. Lehrer, S. Green, and R. Stock, “Association Between Number of Cell Phone Contracts and Brain Tumor Incidence in Nineteen U.S. States,” Journal of Neuro-oncology 101, no. 3 (2011): 505–507. 3. M. H. Repacholi, A. Lerchl, M. Röösli, Z. Sienkiewicz, A. Auvinen, J. Breckenkamp, G. d’Inzeo, P. Elliott, P. Frei, S. Heinrich, I. Lagroye, A. Lahkola, D. McCormick, S. Thomas, and P. Vecchia, “Systematic Review of Wireless Phone Use and Brain Cancer,” Bioelectromagnetics 33, no. 3 (April 2012): 187–206.
4. International Energy Agency, Golden Rules for a Golden Age of Natural Gas (Paris: IEA, 2012); Valerie Brown, “Industry Issues,” Environmental Health Perspectives 115, no. 2 (February 2007): A76. 5. L. Lave and B. Leonard, “Regulating Coke Oven Emissions,” in The Risk Assessment of Environmental and Human Health Hazards, ed. D. J. Paustenbach, 1064–1081 (New York: Wiley, 1989). See also C. Starr, R. Rudman, and C. Whipple, “Philosophical Basis for Risk Analysis,” Annual Review of Energy 1 (1976): 629–662. 6. Danger claims are in T. Smijs and S. Pavel, “Titanium Dioxide and Zinc Oxide Nanoparticles in Sunscreens,” Nanotechnology, Science and Applications 4, (2011): 95– 112; D. Tran and R. Salmon, “Potential Photocarcinogenic Effects of Nanoparticle Sunscreens,” Australian Journal of Dermatology 52, no. 1 (February 2011): 1–6. Denials of danger claims are in M. Burnett and S. Wang, “Current Sunscreen Controversies,” Photodermatology, Photoimmunology, & Photomedicine 27, no. 2 (2011): 58–67. L. L. Lin, J. E. Grice, et al., “Time-Correlated Single Photon Counting for Simultaneous Monitoring of Zinc Oxide Nanoparticles,” Pharmaceutical Research 28, no. 11 (2011): 2920–2930. 7. E.g., M. Etminan, J. Delaney, B. Bressler, and J. Brophy, “Oral Contraceptives and the Risk of Gallbladder Disease,” Canadian Medical Association Journal 183, no. 8 (2011): 899– 904; Ø. Lidegaard, E. Løkkegaard, A. Jensen, C. Skovlund, and N. Keiding, “Thrombotic Stroke and Myocardial Infarction with Hormonal Contraception,” New England Journal of Medicine 366 (2012): 2257–2266. 8. P. Hannaford, “Epidemiology of the Contraceptive Pill and Venous Thromboembolism,” Thrombosis Research 127, no. 3 (February 2011): S30–S34; E. Raymond, A. Burke, and E. Espey, “Combined Hormonal Contraceptives and Venous Thromboembolism,” Obstetrics and Gynecology 119, no. 5 (2012): 1039–1044. 9. C. W. Churchman, Theory of Experimental Inference (New York: Macmillan, 1947); See S. Axinn, “The Fallacy of the Single Risk,” Philosophy of Science 33, nos. 1–2 (1966): 154– 162; J. Rossi, “The Prospects for Objectivity in Risk Assessment,” Journal of Value Inquiry 6, no. 2 (2012): 237–253. 10. A. Kaplan, The Conduct of Inquiry (San Francisco: Chandler, 1964), 253. 11. H. Leung and D. Paustenbach, “Assessing Health Risks in the Workplace,” in Paustenbach, Risk Assessment, 689–710 (regarding denying dioxin risks). 12. J. J. Thomson, Rights, Restitution, and Risk (Cambridge, MA: Harvard University Press, 1986). 13. P. Ricci and A. Henderson, “Fear, Fiat, and Fiasco,” in Phenotypic Variation in Populations, ed. A. Woodhead, M. Bender, and R. Leonard (New York: Plenum, 1988), 285–293. Cox and P. Ricci, “Risk, Uncertainty, and Causation,” in Paustenbach, Risk Assessment, 125– 156. C. Cranor, Legally Poisoned (Cambridge, MA: Harvard University Press, 2011). 14. See, for example, H. Shue, “Exporting Hazards,” in Boundaries, ed. P. Brown and H. Shue (Totowa, NJ: Rowman and Littlefield, 1981), 107–145; J. Lichtenberg, “National Boundaries and Moral Boundaries,” in Brown and Shue, Boundaries, 79–100. 15. J. Bentham, Principles of the Civil Code, in The Works of Jeremy Bentham, ed. J. Bowring (New York: Russell and Russell, 1962), 1:301. 16. L. Becker, “Rights,” in Property, ed. L. Becker and K. Kipnis (Englewood Cliffs, NJ: Prentice-Hall, 1984), 76. See H. Stuart, “United Nations Convention on the Rights of Persons with Disabilities,” Current Opinion in Psychiatry 25, no. 5 (2012): 365–369. 17. J. 
Bentham, Principles of Morals and Legislation, in Bowring, Works, 1:36; J. Feinberg, Social Philosophy (Englewood Cliffs, NJ: Prentice-Hall, 1973), 29, 59; J. Rachels, “Euthanasia,” in Matters of Life and Death, ed. T. Regan (New York: Random House, 1980), 38. 18. L. Cox and P. Ricci, “Legal and Philosophical Aspects of Risk Analysis,” in Paustenbach; hereafter cited as: LP, Risk Analysis, 22–26. See also W. Hoffman and J. Fisher, “Corporate Responsibility,” in Becker and Kipnis, Property, 211–220.
19. For example, the lead industry blamed child illnesses from lead-paint poisoning on poor parental care. See D. Rosner and G. Markowitz, “A Problem of Slum Dwellings and Relatively Ignorant Parents,” Environmental Justice 1, no. 3 (December 2008): 159–168. Thomas McGarity and Wendy Wagner, Bending Science (Cambridge, MA: Harvard University Press, 2010). See David Michaels, Doubt Is Their Product (New York: Oxford University Press, 2008). N. Oreskes and E. Conway, Merchants of Doubt (New York: Bloomsbury Press, 2010). K. S. Shrader-Frechette, What Will Work (New York: Oxford University Press, 2011), ch. 4; hereafter cited as Shrader-Frechette, WWW. 20. Stephen John, “Security, Knowledge and Well-being,” Journal of Moral Philosophy 8, no. 1 (2011): 68–91. 21. See A. C. Michalos, Foundations of Decisionmaking (Ottawa: Canadian Library of Philosophy, 1987), 202ff.; and H. S. Denenberg, R. D. Eilers, G. W. Hoffman, C. A. Kline, J. J. Melone, and H. W. Snider, Risk and Insurance (Englewood Cliffs, NJ: Prentice-Hall, 1964). See also Cox and Ricci, LP, Risk Analysis, 1035. A. Zia and M. Glantz, “Risk Zones,” Journal of Comparative Policy Analysis 14, no. 2 (2012): 143–159. 22. Thomson, Rights, 158. 23. See K. S. Shrader-Frechette, Nuclear Power and Public Policy (Boston: Reidel, 1983), 74–78; hereafter cited as Shrader-Frechette, NP; Shrader-Frechette, WWW, ch. 4. 24. K. Shrader-Frechette, Taking Action, Saving Lives (New York: Oxford University Press, 2007); hereafter cited as Shrader-Frechette, TASL; S. Krimsky, Science in the Private Interest (Lanham, MD: Rowman & Littlefield, 2003); Shrader-Frechette, WWW; Michaels, Doubt Is Their Product; Sharon Beder, Global Spin (Glasgow, UK: Green Books, 2002). 25. See previous chapter; Harsanyi, “Maximin Principle”; L. Maxim, “Problems Associated with the Use of Conservative Assumptions in Exposure and Risk Analysis,” in Paustenbach, Risk Assessment, 539–555; Lave and Leonard, “Coke Oven Emissions,” 1068–1069; S. Hoffmann, “Overcoming Barriers to Integrating Economic Analysis into Risk Assessment,” Risk Analysis 31, no. 9 (September 2011): 1345–1355; and S. Sgourev, “The Dynamics of Risk in Innovation,” Industrial and Corporate Change (July 2012): 1–27. 26. Shrader-Frechette, NP, 33–35; L. Wasserman, “Students’ Freedom from Excessive Force by Public School Officials?” Kansas Journal of Law and Public Policy 21 (2011): 35. 27. W. Frankena, “The Concept of Social Justice,” in Social Justice, ed. R. Brandt (Englewood Cliffs, NJ: Prentice-Hall, 1962), 10, 14; Shue, “Exporting Hazards”; and Lichtenberg, “National Boundaries.” 28. D. Eddy, “Probabilistic Reasoning in Clinical Medicine,” in Judgment under Uncertainty, ed. D. Kahneman, P. Slovic, and A. Tversky (Cambridge: Cambridge University Press, 1982), 267. See also the following articles in this collection: S. Oskamp, “Overconfidence in Case-Study Judgments,” 292; P. Slovic, B. Fischhoff, and S. Lichtenstein, “Facts vs. Fears,” 475. D. Kahneman, A. Tversky, Choices, Values, and Frames (Cambridge: Cambridge University Press, 2000); D. Kahneman, Thinking, Fast and Slow (New York: Farrar, Straus and Giroux, 2011). 29. Shrader-Frechette, NP, chs. 1, 4. See P. Huber, “The Bhopalization of American Tort Law,” in Hazards, ed. Robert Kates, John Ahearne, Ronald Bayer, Ella Bingham, Victor Bond, Daniel Hoffman, Peter Huber, Roger Kasperson, John Klacsmann, and Chris Whipple (Washington, DC: National Academy Press, 1986), 94–95, 106–107; Shrader-Frechette, WWW, ch. 4. 30. See A. B. Lovins and J. H. 
Price, Non-Nuclear Futures (New York: Harper and Row, 1975); C. Flavin, Nuclear Power: The Market Test (Washington, DC: Worldwatch, 1983); Shrader-Frechette, WWW. 31. See Cooke, “Risk Assessment,” 345–347. Shrader-Frechette, WWW, ch. 4. 32. Thomson, Rights, 172.
33. See Frankena, “Concept of Social Justice,” 15; and Shrader-Frechette, RR, ch. 8. Heidi Li Feldman, “What’s Right About the Medical Model in Human Subjects Research Regulation,” Georgetown Law Faculty Publications and Other Works (2012): 1097. 34. See Shrader-Frechette, RR, chs. 2 and 3. 35. T. Schelling, Choice and Consequence (Cambridge, MA: Harvard University Press, 1984), 145–146. B. Leoni, Law, Liberty, and the Competitive Market (New Brunswick, NJ: Transaction Publishers, 2009); J. Aldred, The Skeptical Economist (Sterling, VA: Routledge, 2010), ch. 2. 36. See J. S. Mill, On Liberty (Buffalo, NY: Prometheus Books, 1986), esp. 16. Shrader-Frechette, RR, ch. 10; J. S. Purdy and N. Siegal, “The Liberty of Free Riders,” American Journal of Law and Medicine 38, no. 2-3 (2012): 374; R. Skipper, “Obesity,” Public Health Ethics 5, no. 2 (2012): 181–191; V. Devinatz, “Reevaluating US Company Paternalism,” Labor History 53, no. 2 (2012): 299–304. 37. Cass Sunstein, Risk and Reason (New York: Cambridge University Press, 2002); Cass Sunstein, Laws of Fear (New York: Cambridge University Press, 2005); Kristin Shrader-Frechette, “Review of Sunstein, Laws of Fear,” Ethics and International Affairs 20, no. 1 (2006): 123–125; Kristin Shrader-Frechette, “Review of Sunstein, Risk and Reason,” Ethics 114, no. 2 (January 2004): 376–380; Kristin Shrader-Frechette, “Review of Sunstein, Risk and Reason,” Quarterly Review of Biology (December 2003). 38. McGarity and Wagner, Bending Science; Michaels, Doubt Is Their Product. 39. Suzanne Goldenberg, “US Senate’s Top Climate Sceptic Accused of Waging ‘McCarthyite Witch-Hunt’,” The Guardian (March 1, 2010), www.guardian.co.uk/environment/2010/march/01/inhofe-climate-mccarthyite, accessed on February 3, 2012. See Raymond Bradley, Global Warming and Political Intimidation (Amherst: University of Massachusetts Press, 2011). 40. Shrader-Frechette, TASL; Cheryl Hogue, “Scientists are being INTIMIDATED AND HARASSED Because of Their Research,” Chemical and Engineering News 88, no. 23 (June 7, 2010): 31–32; Devra Davis, When Smoke Ran Like Water (New York: Basic, 2002); Beder, Global Spin, esp. 108. 41. Ralph H. Luken and Stephen G. Miller, “The Benefits and Costs of Regulating Benzene,” Journal of the Air Pollution Control Association 31, no. 12 (1981): 1254–1259. 42. K. Brownell and K. Warner, “The Perils of Ignoring History: Big Tobacco Played Dirty and Millions Died,” Milbank Quarterly 87, no. 1 (March 2009): 259–294. 43. R. Hites, “Dioxins,” Environmental Science and Technology 45, no. 1 (2011): 16–20. 44. Lave and Leonard, “Coke Oven Emissions,” 1068–1069. 45. Lave and Leonard, “Coke Oven Emissions,” 1071–1078. 46. See Shrader-Frechette, RR, ch. 5. 47. Brian Emmet, Linda Perron, and Paolo Ricci, “The Distribution of Environmental Quality,” in Environmental Assessment, ed. D. Burkhardt and W. Ittelson (New York: Plenum, 1978), 367–374; Shrader-Frechette, TASL; S. Vanderheiden, “Confronting Risks,” Environmental Politics 20, no. 5 (2011): 650–667; C. Engeman, L. Baumgartner, B. Carr, A. Fish, J. Meyerhofer, T. Satterfield, P. Holden, and B. Harthorn, “Governance Implications of Nanomaterials Companies’ Inconsistent Risk Perceptions and Safety Practices,” Journal of Nanoparticle Research 14, no. 3 (2012): 749. 48. C. Starr, “General Philosophy of Risk-Benefit Analysis,” in Energy and the Environment, ed. H. Ashley, R. Rudman, and C. Whipple (Elmsford, NY: Pergamon Press, 1976), 16. See Cox and Ricci, “Legal and Philosophical Aspects.” 49. Slovic, “Facts vs. 
Fears,” 488. 50. National Research Council, Understanding Risk (Washington, DC: National Academy Press, 1996), 133ff. 51. Thomas Jefferson, “Thomas Jefferson to William C. Jarvis, 1820,” The Writings of Thomas Jefferson, ed. Andrew Lipscomb and Albert Bergh (Washington, DC: Thomas Jefferson Memorial Association, 1903–1904), 15: 278.

Chapter 15

1. United States District Court, S.D. Florida, Miami Division, “In re DENTURE CREAM PRODUCTS LIABILITY LITIGATION. This Document Relates to Case No. 9:09-CV-80625-CMA” (Chapman et al. v. Procter & Gamble Distributing LLC: No. 09-2051-MD, June 13, 2011). 2. United States District Court, S.D. Florida, Miami Division, “In re DENTURE CREAM PRODUCTS LIABILITY LITIGATION. This Document Relates to Case No. 9:09-CV-80625-CMA” (Chapman et al. v. Procter & Gamble Distributing LLC: No. 09-2051-MD, June 13, 2011). 3. United States District Court, S.D. Florida, Miami Division, “In re DENTURE CREAM PRODUCTS LIABILITY LITIGATION. This Document Relates to Case No. 9:09-CV-80625-CMA” (Chapman, et al. v. Procter & Gamble Distributing LLC: No. 09-2051-MD, June 13, 2011). Jef Feeley, “Glaxo Said to Pay $120 Million to End Denture-Cream Suits,” Bloomberg Businessweek (May 3, 2011), http://www.bloomberg.com/news/2011-05-03/glaxo-said-to-pay-120-million-to-settle-suits-over-poligrip-denturecream.html, accessed February 20, 2013. 4. Carl Cranor, Regulating Toxic Substances (New York: Oxford University Press, 1993); Carl Cranor, Toxic Torts (New York: Cambridge University Press, 2006); Carl Cranor, “Do You Want to Bet Your Children’s Health on Post-Market Harm Principles?,” Villanova Environmental Law Journal 24, no. 2 (2008): 251–314; Carl Cranor, “A Framework for Assessing Scientific Arguments,” Journal of Law and Policy 15, no. 1 (2007): 7–58; Carl Cranor, “The Science Veil over Toxic Tort Law?,” Law and Philosophy (2004); Carl Cranor and David A. Eastmond, “Scientific Ignorance and Reliable Patterns of Evidence in Toxic Tort Causation?,” Law and Contemporary Problems 64, no. 4 (autumn 2001): 5–48; Carl Cranor, “Learning from the Law for Regulatory Science,” Law and Philosophy 14 (1995): 115–145. 5. Naomi Oreskes and Erik M. Conway, Merchants of Doubt (New York: Bloomsbury Press, 2010); Robin McKie, “Merchants of Doubt by Naomi Oreskes and Erik M. Conway,” Guardian, August 8, 2010, http://www.guardian.co.uk/books/2010/aug/08/merchantsof-doubt-oreskes-conway, accessed March 25, 2013; Naomi Oreskes, “The Scientific Consensus on Climate Change,” Science 306 (2004): 1686; National Association of Geoscience Teachers, “2011 Awardee—Dr. Naomi Oreskes, University of California San Diego,” http://nagt.org/nagt/programs/shea/2011.html, accessed February 21, 2013; George Mason University, Center for Climate Change Communication, “Naomi Oreskes: 2011 Climate Change Communicator of the Year,” http://www.climatechangecommunication.org/other-resources/communicator-year-awards/naomi-oreskes2011-climate-change-communicator-year, accessed February 21, 2013. 6. R. M. Cooke, “Risk Assessment and Rational Decision Theory,” Dialectica 36, no. 4 (1982): 329–351; Roger M. Cooke, Experts in Uncertainty (New York: Oxford University Press, 1991); Roger M. Cooke, “Model Uncertainty in Economic Impacts of Climate Change: Bernoulli versus Lotka Volterra Dynamics,” Integrated Environmental Assessment and Management 9 (2013): 2–6; C. Kousky, R. E. Kopp, and R. M. Cooke, “Risk Premia and the Social Cost of Carbon: A Review,” Economics: The Open-Access, Open-Assessment E-Journal 5 (2011): 21; David M. Cooley, Christopher S. Galik, Thomas P. Holmes, Carolyn Kousky, and Roger M. Cooke, “Managing Dependencies in Forest Offset Projects,” Mitigation and Adaptation Strategies for Global Change 17, no. 1 (2012): 17–24; C. Kousky and R. M. Cooke, “Climate Change and Risk Management,” RFF Discussion Paper No. 09-03-REV (2009); Roger M. 
Cooke and George-Neale Kelly, “Climate Change Uncertainty Quantification,” Resources for the Future Discussion Paper No. 10-29 (2010); Sarah Teck, Benjamin Halpern, Carrie Kappel, Fiorenza Micheli, Kimberly Selkoe, Caitlin Crain, Rebecca Martone, Christine Shearer, Joe Arvai, Baruch Fischhoff, Grant Murray, Rabin Neslo, and Roger Cooke, Ecological Applications 20, no. 5 (2010): 1402–1416.
7. E.g., Kristin Shrader-Frechette and E. D. McCoy, Method in Ecology (Cambridge: Cambridge University Press, 1993); Kristin Shrader-Frechette, “Applied Ecology and the Logic of Case Studies,” Philosophy of Science 61, no. 1 (June 1994): 228–249; Kristin Shrader-Frechette and Dan Wigley, “Comments on the Draft Environmental Impact Statement for Construction and Operation of Claiborne Enrichment Center, Homer, Louisiana,” in US Nuclear Regulatory Commission, Final Environmental Impact Statement for Construction and Operation of Claiborne Enrichment Center, Homer, Louisiana, vol. 2, NUREG 1484 (Washington, DC: US NRC, 1994), 1–255 to 1–282; Kristin Shrader-Frechette, Expert Judgment in Assessing Radwaste Risks (Carson City, NV: Nuclear Waste Project Office/US Department of Energy, 1992); Kristin Shrader-Frechette and Lars Persson, “Ethical, Logical, and Scientific Problems with the New ICRP Proposals,” Journal of Radiation Protection 22, no. 2 (June 2002): 149–162; Kristin Shrader-Frechette, Taking Action, Saving Lives (New York: Oxford University Press, 2007); hereafter cited as Shrader-Frechette, TASL; Kristin Shrader-Frechette, “Colored Town and Liberation Science,” in Holy Ground: A Gathering of Voices on Caring for Creation, ed. D. Landau and L. Moseley, 218–229 (San Francisco: Sierra, 2008); Kristin Shrader-Frechette’s website is www.nd.edu/~kshrader. 8. See, for instance, Justin Biddle, “Lessons from the Vioxx Debacle,” Social Epistemology 21 (2007): 21–39. Mieke Boon, “Understanding Scientific Practices,” in Characterizing the Robustness of Science, ed. L. Soler, E. Trizio, T. Nickles, and W. Wimsatt (Boston: Springer, 2012); Nancy Cartwright, Hunting Causes and Using Them (New York: Cambridge University Press, 2007); Sharyn Clough, “Gender and the Hygiene Hypothesis,” Social Science & Medicine 72, no. 4 (2010): 486–493; Lindley Darden, “The Nature of Scientific Inquiry,” http://faculty.philosophy.umd.edu/LDarden/sciinq/index.htm, accessed March 25, 2013. Inma de Melo-Martín, “The Promise of the Human Papillomavirus Vaccine Does Not Confer Immunity against Ethical Reflection,” The Oncologist 11, no. 4 (2006): 393–396. I. de Melo-Martín, “Furthering Injustices Against Women,” Bioethics 20, no. 6 (2006): 301–307. Heather Douglas, Science, Policy, and the Value-Free Ideal (Pittsburgh: University of Pittsburgh Press, 2009); John Dupre, Processes of Life (New York: Oxford University Press, 2012); Kevin Elliott, Is a Little Pollution Good for You? Incorporating Societal Values in Environmental Research (New York: Oxford University Press, 2011); Daniel J. McKaughan and Kevin C. Elliott, “Voles, Vasopressin, and the Ethics of Framing,” Science 338, no. 6112 (2012): 1285, PMID 23224537; Nancy Cartwright and Sophia Efstathiou, “Hunting Causes and Using Them,” International Studies in the Philosophy of Science 25, no. 3 (2011): 223–241; Carla Fehr, “Feminist Engagement with Evolutionary Psychology,” Hypatia 27, no. 1 (2012): 50–72; Ben Hale, “The Moral Considerability of Invasive Transgenic Animals,” Journal of Agriculture and Environmental Ethics 19, no. 4 (August 2006): 337–366; Gary Hardcastle and George Reisch, Bullshit and Philosophy (Peterborough, NH: Open Court Publishing, 2006); Susan Hawthorne, “Embedding Values: How Science and Society Jointly Valence a Concept—The Case of ADHD,” Studies in History and Philosophy of Science Part C 41, no. 1 (2010): 21–31; Philip Kitcher, Science, Truth, and Democracy (New York: Oxford University Press, 2001); Hugh Lacey, Is Science Value Free? 
(New York: Routledge, 1994); Helen Longino, Science as Social Knowledge (Princeton, NJ: Princeton University Press, 1990); Deborah Mayo, Error and Inference (New York: Cambridge University Press, 2010); Sharon Meagher, “Unsettling Critical Urban Theory,” City 16, no. 4 (August 2012): 476–480; Sandra Mitchell, Unsimple Truths (Chicago: University of Chicago Press, 2009); Margaret Morrison, “Emergent Physics and Micro-Ontology,” Philosophy of Science 79, no. 1 (2012): 141–166; Lynn Hankinson Nelson, Who Knows: From Quine to Feminist Empiricism (Philadelphia: Temple University Press, 1990); Nancy Nersessian, Creating Scientific Concepts (Cambridge, MA: MIT Press, 2008); Kathleen Okruhlik, “Kant and the Foundation of Science,” in Nature Mathematized, ed. W. Shea (New York: Springer, 1982), 249–266; Wendy Parker, “Communicating Science,” in
Debating Science: Deliberation, Values and the Common Good, ed. D. Scott and B. Francis (Amherst, NY: Prometheus Books, 2011); Kathryn Plaisance, Thomas Reydon, and Mehmet Elgin, “Why the (Gene) Counting Argument Fails in the Massive Modularity Debate,” Philosophical Psychology 25, no. 5 (2012): 873–892; Michael Polanyi, “The Republic of Science,” Minerva 1, no. 1 (1962): 54–73; Sarah Richardson, Revisiting Race in a Genomic Age (New Brunswick, NJ: Rutgers University Press, 2008); Mark Sagoff, The Economy of the Earth (New York: Cambridge University Press, 1988); Miriam Solomon, Social Empiricism (Cambridge, MA: MIT Press, 2001); Henry Shue, Basic Rights (Princeton, NJ: Princeton University Press, 1986); Patrick Suppes, “Philosophy of Science and Public Policy,” PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association 2 (1984): 3–13; Paul Thompson, The Agrarian Vision (Lexington, KY: The University Press of Kentucky, 2010); Nancy Tuana, Women and the History of Philosophy (St. Paul, MN: Paragon House, 1992); N. Reo and Kyle Whyte, “Hunting and Morality as Elements of Traditional Ecological Knowledge,” Human Ecology 40, no. 1 (2012): 15–27; Jim Woodward, Making Things Happen (New York: Oxford University Press, 2005); Andrea Woody, “More Telltale Signs: What Attention to Representation Reveals about Scientific Explanation,” Philosophy of Science 71 (2004): 780–793; John Worrall, “Do We Need Some Large, Simple, Randomized Trials in Medicine?,” in EPSA Philosophical Issues in the Sciences, ed. M. Suarez, M. Dorato, and M. Redel (Dordrecht: Springer, 2010); Alison Wylie, Thinking from Things: Essays in the Philosophy of Archaeology (Berkeley: University of California Press, 2002). 9. Society for Philosophy of Science in Practice, Organizing Committee, http://www.philosophy-science-practice.org/en/organizing-committee/, accessed February 24, 2013. See SPSP Mailing List Members, http://www.philosophy-science-practice.org/en/membership/list_members/, accessed February 24, 2013. Mark Sagoff, The Economy of the Earth (New York: Cambridge University Press, 2008); Mark Sagoff, Price, Principle, and the Environment (New York: Cambridge University Press, 2004). 10. See, e.g., Helen Longino, Science as Social Knowledge (Princeton, NJ: Princeton University Press, 1990); Philip Kitcher, Science, Truth, and Democracy (New York: Oxford University Press, 2001). 11. Sharon M. Meagher and Ellen K. Feder, Practicing Public Philosophy, Report for the American Philosophical Association, April 2, 2010, 6, http://api.ning.com/files/--qSpH8VZdwkMHPvCiExMFpvuAT6MqF7LLfHv0zW5fcWV6h*NqVobeBlwQP8oYSJd1Fl7J8RXK6dpB1DkEyz40z6HqqIwwyV/publicphil_report_draft_June_3finalfinal.pdf, accessed February 10, 2013. 12. Earlier versions of some arguments in this chapter appeared in Kristin Shrader-Frechette, “Taking Action on Developmental Toxicity,” Environmental Health 11, no. 61 (September 10, 2012); hereafter cited as Shrader-Frechette, TADT; Shrader-Frechette, TASL. 13. Philippe Grandjean, Only One Chance (New York: Oxford University Press, 2013); D. C. Bellinger, “A Strategy for Comparing the Contributions of Environmental Chemicals and Other Risk Factors to Neurodevelopment of Children,” Environmental Health Perspectives 120 (2011): 501–507; P. J. Landrigan, C. B. Schechter, J. M. Lipton, M. C. Fahs, and J. Schwartz, “Environmental Pollutants and Disease in American Children,” Environmental Health Perspectives 110 (2002): 721–728; L. Trasande, P. J. Landrigan, and C. 
Schechter, “Public Health and Economic Consequences of Methyl Mercury Toxicity to the Developing Brain,” Environmental Health Perspectives 113 (2005): 590–596; T. T. Schug, A. Janesick, B. Blumberg, and J. J. Heindel, “Endocrine-disrupting Chemicals and Disease Susceptibility,” Journal of Steroid Biochemistry and Molecular Biology 127 (2011): 204–215; A. J. Bernal and R. L. Jirtle, “Epigenomic Disruption,” Birth Defects Research, Part A: Clinical and Molecular Teratology 88 (2010): 938–944; P. D. Gluckman, M. A. Hanson, and F. M. Low, “The Role of Developmental Plasticity and Epigenetics in Human Health,” Birth Defects Research Part C: Embryo Today 93 (2011): 12–18; P. Grandjean and P. J. Landrigan, “Developmental Neurotoxicity of Industrial Chemicals,”
Lancet 368 (2006): 2167–2178; R. Barouki, P. D. Gluckman, P. Grandjean, M. Hanson, and J. J. Heindel, “Developmental Origins of Non-communicable Disease,” Environmental Health 11 (2012): 42; M. K. Skinner, M. Manikkam, and C. Guerrero-Bosagna, “Epigenetic Transgenerational Actions of Endocrine Disruptors,” Reproductive Toxicology 31 (2011): 337–343. 14. Barouki et al., “Developmental Origins of Non-communicable Disease.” 15. See Shrader-Frechette, TADT and TASL, on which these arguments rely. American Association for the Advancement of Science, Principles of Scientific Freedom and Responsibility (Washington, DC: AAAS, 1980); hereafter cited as AAAS, PSF. 16. Kristin Shrader-Frechette, Ethics of Scientific Research (Lanham, MD: Rowman and Littlefield, 1994); hereafter cited as Shrader-Frechette, ESR; John Rawls, A Theory of Justice (Cambridge, MA: Harvard University Press, 1971); AAAS, PSF. 17. See Shrader-Frechette, TADT and TASL, on which these arguments/examples rely. 18. European Environment Agency, Air Quality and Ancillary Benefits of Climate-Change Policies (Copenhagen: EEA, 2006); P. Wahlin and F. Palmgren, Source Apportionment of Particles and Particulates (Roskilde, Denmark: National Environmental Research Institute, 2000). 19. Wahlin and Palmgren, Source Apportionment of Particles and Particulates. 20. Landrigan, Schechter, et al., “Environmental Pollutants,” 721–728. 21. D. Rabinowitz, “Climate Injustice,” Environmental Justice 5 (2012): 38–46. 22. N. Stern, The Economics of Climate Change (London: HM Treasury, 2006). 23. World Health Organization, Air Quality and Health (Copenhagen: WHO, 2011). 24. D. C. Bellinger, “A Strategy for Comparing the Contributions of Environmental Chemicals and Other Risk Factors to Neurodevelopment of Children,” Environmental Health Perspectives 120 (2011): 501–507; US Environmental Protection Agency, Technical Support Document (Washington, DC: US EPA, 2005); hereafter cited as EPA, TSD. 25. EPA, TSD. 26. G. Rice, J. K. Hammitt, Economic Valuation of Human Health Effects of Controlling Mercury Emissions from US Coal-Fired Power Plants (Cambridge, MA: Harvard Center for Risk Analysis, 2005). 27. S. D. Grosse, “How Much Is an IQ Point Worth?,” AERE Newsletter 27 (2007): 17–21. 28. Landrigan, Schechter, et al., “Environmental Pollutants.” 29. Shrader-Frechette, TADT and TASL. 30. World Health Organization, Childhood Lead Poisoning (Geneva: WHO, 2010). 31. Shrader-Frechette, TADT and TASL. 32. Shrader-Frechette, TADT. 33. Aristotle, Nicomachean Ethics, ed. and trans. D. Ross (New York: Oxford University Press, 1925). 34. K. Jaspers, The Question of German Guilt, trans. E. B. Ashton (New York: Capricorn, 1961); Jean-Paul Sartre, What Is Literature? trans. B. Frechtman (London: Methuen, 1950). 35. Shrader-Frechette, TASL, ESR, and TADT. 36. C. I. Jackson, Honor in Science (New Haven: Sigma Xi, 1986), esp. 33. 37. K. Koizumi, “R&D Trends and Special Analyses,” AAAS Reports XXIX, XXVII (Washington, DC: American Association for the Advancement of Science, 2004). 38. Shrader-Frechette, TASL. See Michaels, Doubt Is Their Product (New York: Oxford University Press, 2008); and W. Wagner and T. McGarity, Bending Science (Cambridge, MA: Harvard University Press, 2008), esp. 158. 39. C. Pichery, M. Bellanger, D. Zmirou-Navier, P. Glorennec, P. Hartemann, and P. Grandjean, “Childhood Lead Exposure in France,” Environmental Health 10 (2011): 44. 40. Landrigan, Schechter, et al., “Environmental Pollutants”; E. 
Gould, “Childhood Lead Poisoning,” Environmental Health Perspectives 117 (2009): 1162–1167. 41. I. Kant, Lectures on Ethics, ed. J. B. Schneewind, trans. P. Heath (Cambridge: Cambridge University Press, 2001); hereafter cited as Kant, LOE.
42. Shrader-Frechette, TASL and TADT. 43. Michaels, Doubt Is Their Product; Wagner and McGarity, Bending Science; Shrader-Frechette, TASL and TADT. 44. Shrader-Frechette, TASL and TADT. Jackson, Honor in Science. 45. Kant, LOE; I. Kant, Critique of Pure Reason, trans. N. K. Smith (New York: St. Martin’s, 1965). 46. E.g., Barouki et al., “Developmental Origins of Non-communicable Disease.” 47. American Meteorological Society, “Climate Change Research: Issues for the Atmospheric and Related Sciences,” Bulletin of the American Meteorological Society 84 (2003): 508–515. 48. Caltech Catalogue, History and Philosophy of Science, http://catalog.caltech.edu/courses/listing/hps.html, accessed February 20, 2013. Notre Dame Courses, Philosophy of Science and Public Policy, http://www3.nd.edu/~kshrader/courses/, accessed February 21, 2013. University of Pittsburgh, Department of History and Philosophy of Science, Courses, http://www.hps.pitt.edu/courses, accessed February 22, 2013.


INDEX

abduction, 58–59 aborigines, 1–2 absence of evidence, 93–94 additive effects, 35–38 ad hoc, 125–127 affirming the consequent, 25–27 AIDS (acquired immunodeficiency syndrome), 45–47

air monitors, 48–51 alcohol, 35–38, 52–54, 90–92 alcoholic beverages, 32–35 algorithms, 25–27, 99–109 Allied forces, 99–100 All the President’s Men, 147–148 Amazon, 165–166 Amdur, Mary, 203–205 American Academy of Pediatric Dentistry, 28 American Association for the Advancement of Science, 95–97, 207–210 American Cancer Society, 60–61 American Geophysical Union, 218–219 American Health Physics Society, 75–77 American Journal of Epidemiology, 101–102 American Meteorological Society, 218–219 American Petroleum Institute, 203–205 American Philosophical Association, 6–7, 207–210

American Philosophical Association, Committee on Public Philosophy, 218–219 American Physical Society, 18–19 Ames, Bruce, 60–61, 183–185 analogy, 57–68 Angell, Marcia, 92–93 animal data, 57–68 Ankeny, Rachel, 207–210 appeal to authority, 20 appeal to ignorance, 32–35, 61–65, 139–141 appeal to the people, 20 Aristotle, 4–6, 20, 115–117, 214–218

Atlantic Richfield Oil, 39–42 Atomic Industrial Forum, 202–203 background radiation, 74, 77 Bacon, Francis, 58–59, 180–181 bad faith, 214–218 bait-and-switch, 38–39 BASF Chemical, 39–42 basketball, 69 Bayer Chemical, 39–42 Bayesian rules, 183–185 Bayesians, 137–139 begging the question, 20 Belarus, 74–75 bellwether trial, 206–207 Belmont, 35–38 Bendectin, 60–61 benefit-risk ratio, 35–38 benevolence, 65–67 Bentham, Jeremy, 199–202 benzene, 203–205 Bernoulli, Jacob, 188–189 beryllium, 214–218 bias values, 180–181 bioethics, 35–38, 48–51 biological conservation, 100–101 Biological Effects of Low-Dose Exposures, 39–42 biomedical ethics, 35–38 bisphenol A, 210–211 blaming the victim, 157, 175 bodily security, right to, 202–203 Boltzmann’s equation, 115–117 bottom-up approaches, 134–137 bovine growth hormone, 88, 95–97 brain cancer, 29–30 breast cancer, 35–38, 90–93 Brown’s Ferry accident, 194–195 burden of proof, 32–35, 196 Bush, George W., 20–25, 28, 114–115

cadmium, 30–35 Calabrese, Edward, 29–35, 38–43 Cal Tech, 218–219 cancer, 45, 48–54, 61–65, 134–137, 203–205. See also specific types of case studies, 144–156 catastrophe, 189–191, 197–199 catastrophic effects, 194–195 causal laws, 153–156 cause, 61–65, 88–98, 210–211 cell phones, 196 Center for Philosophy and Public Policy, University of Maryland, 207–210 Centers for Disease Control and Prevention (CDC), 35–38, 51–52, 211–213 certainty, 182–183 ceteris absentibus clauses, 122–125 Challenger, 7–9 chemical industry motives, 39–42 Chemical Manufacturers Association, 39–42 chemophobia, 183–185 Chernobyl, 74–75, 87, 134–137, 157–163, 168–170, 194–195

Chevron refinery fire, 157 Chevron-Texaco, 165–166 childbed fever, 167–168 children, sensitivity of, 32–35 citizen’s duties, 210–211 civil law, 198–199 Clean Air Act, 17–18 climate change, 20–25, 162–163, 194–195, 214–218 climate related deaths, 211–213 clinical trials, 104–105 coal fired plants, 211–213 cognitive science, 219 cognitive value judgments, 180–181 cohort analysis, 170–171 Committee on Public Philosophy, 6–7, 207–210 Common Rule, 35–38 comparativism, 128–134, 139–142 compensation, 44–54, 188–189 conflicts of interest, 39–42, 95–97, 139–141 Congress, United States, 114–115, 134–137, 207–210

consent, 35–38, 48–54, 183–185 consistency, 25–27, 149–151, 180–181 Constitution, United States, 202–203 construct validity, 149–151 contextual values, 180–181 contraceptives, 203–205 control group, 90–92 Copenhagen Interpretations, 69–70 Coulomb’s Law, 115–117, 122–125 Council of Science Editors, 39–42 Cox-regression models, 103–104 creationism, 128–130 Cuban missile crisis, 144

Darcy’s Law, 114–127 Darwin, Charles, 67–68 Darwinian competition, 1–2, 129–130 decision making, societal, 183–185, 194–195 decision theory, 179–195 deduction, 18–20 deductive-nomological model, 112–117, 129–130 default assumption, 35–38 default rules, 35–38, 65–67, 139–141, 181–182, 196–199, 205 demarcation, 93–94 dementia, 179 depression, 101–102 developmental toxicity, 206–207, 210–218 diabetes, 30–35, 88 diesel particulates, 61–65 dioxins, 30–38 discovery, 59–60, 89–90, 95–101 discrimination, 188–189 DNA, 29–30, 72–77, 85–87, 100–101 dose registry, 51–52 dose-response curves, 70 dosimeters, 45–47 Dow Chemical, 39–42, 99–100 downwinders, 134–137 Dursban, 99–100 economic efficiency, 202–203 Einstein, Albert, 4–7, 25–27, 69–70, 111 empirical-causal idealizations, 118–119 empirical underdetermination, 72–73 Energy Department (DOE), 20–28, 48–54, 74–75, 162–163

environmental justice, 207–210 Environmental Protection Agency (EPA), 20–25, 28, 35–38, 72–73, 95–100, 207–213 Environmental Protection Agency (EPA) Science Advisory Board, 218–219 epidemiology, 32–35, 88–98 epigenetics, 210–211 equal protection under the law, 52–54, 189–192 errors, statistical false-negative/-positive errors, 196–205 type-I/type-II, 104–107 ethics bio and biomedical, 35–38, 48–51, 65–67 Nicomachean, 115–117 ethylene oxide, 29–30 eugenics, 111 evidence absence of, 93–94 preponderance of, 105–107, 127 weight-of-, 61–65 Expected utility, 179–195 expertise, 206–207 explanation, 58–59, 112–114 explanatory unification, 172–173


extrapolation fallacies, 32–35 Exxon Oil, 39–42 fairness, 35–38, 211–213 fallout, 134–137 false negatives/-positives, 196–205 falsificationism, 113–114 Feynman, Richard, 69–72 fingerprints, 67–68 Finkel, Adam, 61–65, 214–218 Fish and Wildlife Service (FWS) Panther Subteam, 134 Fisher, Ronald, 166–167 Florida panthers, 89–90, 128–143 Food and Drug Administration (FDA), 92–97 Ford-Mitre, 181–183 fossil fuels, 211–213 Fraassen, Bas van, 115–117 fracking, 196 Framingham, MA, 165–166 Franklin, Alan, 129–130 fruitfulness, 180–181 Fukushima, 52–54, 74–75, 87, 157–162, 194–195, 202–203

gag orders, 168–170
Galileo Galilei, 3–4, 28, 69–70, 115–117
General Agreement on Tariffs and Trade (GATT), Uruguay Round Agreements, 45–47
Geological Survey, United States (USGS), 20–25, 125–127

GlaxoSmithKline, 206–207
gopher tortoise, 151–153
greatest good for the greatest number, 52–54
greenhouse-emissions ratios, 162–163
harassment
  of polluters, 203–205
  of researchers, 42–43
hasty generalization, 20
hazard pay, 45, 47–48
healthy worker effects, 51–52, 61–65, 72–73
heart disease, risk of, 165–166
Helsinki Declaration, 48–51
Hempel, Carl, 112–113, 121–122, 129–130, 167–168

heuristic power, 149–151
Higgs boson, 113–114
hip replacements, 179
Hiroshima, 74–75, 162–163
historical-comparativist methods, 128–143
Hitler, Adolph, 6–7, 14, 214–218
hormesis, 29–43
hormones, 88–90, 95–97, 179
human exposure data, 61–65
human rights, 35–38, 189–192, 202–203, 207
hydrogeology, 17–28, 111–127


hypotheses
  acceptance of, 153–156
  causal, 107, 167–168
  deductions, 112–117, 122–127, 129–130, 144, 149–156
  development of, 57–59
  discoveries, 59–60, 89–90, 95–101
  formation of, 93–94, 149–151
  justifications, 59–60, 129–130, 137–144, 175
  linear, no-threshold (LNT), 32–35, 48, 70–77, 168–170
  null, 100–101, 198–199
  predictions, 111–127
  statistical, 103–104
  testing, 59–60
hypothetical-deductive methods, 113–117, 122–127

idealizations, 112–127
idols of the tribe, 180–181
inferences
  causal, 149–151
  inductive, 112–113, 165–167
inference to the best explanation, 127, 157–176
Information and Regulatory Affairs Office, 17–18
informed consent, 48–54
intelligence quotient (IQ), 17–18, 211–213, 219
International Agency for Research on Cancer, 29–30, 61–65, 158–159, 196
International Atomic Energy Agency, 48, 74–75
International Commission on Radiological Protection (ICRP), 75–77, 207–210
International Committee of Medical Journal Editors, 39–42
invalid extrapolation, 32–35
invalid inference, 20–25
is-ought fallacy, 35–38
Japanese bombing data, 74–75, 162–163, 170–171

Journal of the American Medical Association, 39–42
judgments, subjective, 102–103, 181–182, 188–189

jumpers, 51–52
justice, 10–12, 35–38, 207–213
Kahneman, Daniel, 183–189, 194–195
Kanizsa Triangle, 57
Kaplan, Abraham, 198–199
Kemeny Advisory Commission, 173–175
Kemeny Report, 162–163
kidney damage, 30–35
knowledge
  right to, 35–38
  tacit, 153–156
Kuhn, Thomas, 12–14, 59–60, 129–130, 137–139, 167–168, 180–181



law, 112–113, 189–191
lead, 17–18, 35–38, 214–218
linear no-threshold hypothesis (LNT), 32–35, 48, 70–77, 168–170
logical empiricists, 6–7, 59–60, 112
Lysenkoist science, 14
mammography, 92–93
Management and Budget Office, 17–18
mathematical idealizations, 118–119
Maxey Flats, Kentucky, 114–115, 122–125
maximin, 182–195
mercury poisoning, 17–18, 199–202
Method of Difference, 167–168
methodological rules, 90–93
Minamata poisoning, 199–202
misconduct, scientific, 42–43
models, 20–27, 32–35, 85–87, 120–121
monetary value, 211–213
Monsanto, 95–97
mortality ratios, 173–175
Mount Sinai Medical School, 211–213
Nagasaki, 74–75, 162–163
nasal tumors, 30–35
National Academies of Sciences (NAS), 29–43, 48, 52–54, 60–61, 74–75, 134–137, 145–146, 162–167, 203–210
National Association for the Advancement of Colored People (NAACP), 207–210
National Cancer Institute, 51–52, 88, 173–175
National Institutes of Health (NIH), 137–139
National Research Council (NRC), 145–146
National Science Foundation (NSF), 29–30, 39–42, 137–139, 207–210
natural gas, 162–163
natural history, information on, 146–147
naturalistic fallacy, 52–54
natural selection, 1–2, 129–130
Necker Cube, 57
nerve gas, 7–9, 99–100
neutrality objection, 214–218
newborns, 211–213
New England Journal of Medicine, 92–93, 134–137
New York Academy of Sciences, 159–161
Neyman-Pearson, 93–94
Nicomachean Ethics (Aristotle), 115–117
noblesse oblige, 214–218
non-Hodgkin's lymphoma, 99–100
no observed adverse effect level (NOAEL), 30–32, 35–38

Northern Spotted Owl, 144–148
Not In My Backyard, 114–115
nuclear energy, nuclear waste, 20–25, 114–115, 162–163, 199–203
Nuclear Regulatory Commission (NRC), 20–25, 48–51, 159–160

nuclear weapons testing, 51–52
null hypothesis, 100–101, 198–199
Nuremberg, 35–38
objectivity, 151–153, 214–218
observational laws, 115–117
occupationally induced diseases, 45–48
occupational radiation dose, 48–51
Occupational Safety and Health Administration (OSHA), 45–47, 203–205
odds ratios, 168–170
operationalizability, 35–38
optical illusions, 57
organophosphate pesticides, 17–18, 99–100, 210–213

ought implies can, 214–219
ozone, 211–213
pancreas damage, 30–35
partiality claim, 130–132
particulates, 211–213
paternalism, 203–205
Pearl Harbor attack, 194–195
perfluorooctane compounds, 210–211
pesticides, 17–18, 32–42, 60–61, 99–100, 210–213
Philip Morris, 28
phthalates, 210–211
Physicians for Social Responsibility, 159–160
Poisson distribution, 77–83, 103–104
pollutants, synergistic, 35–38
polluters, harassment of, 203–205
polybrominated diphenyl ethers, 210–211
practical philosophy of science, 6–7, 207
precaution, 65–67
predictive power, 149–151
prima facie, 93–97, 129–130, 153–156, 199–202
primary group, 47–48
procedural rationality, 183–185
property rights, 65–67
public philosophy, 6–7, 207–210
racism, 111, 157, 180–181
radiation, 20–25, 35–38, 45–52, 72–74, 77–83, 85–87, 93–94, 111–127, 160–163, 167–175
randomization, 166–167
Rawls, John, 179–183, 189–194, 214–218
reactor core melts, 181–182
reductio ad absurdum, 70–72
relative risk, 48–51, 89–94
researchers, harassment of, 42–43
research misconduct charges, 42–43
retroduction, 58–60, 149–151
Reynolds Metals, 39–42
risk, 35–38, 90–92, 139–142, 182–183, 194–202. See also heart disease, risk of; relative risk; workplace risks
risk analysis, quantitative, 196–205


Rohm and Haas Chemicals, 39–42
Roswell Park Memorial Institute, 60–61
Rubin Vase, 57
rule of thumb, 90–92, 156
Russian roulette, 95–97
sample size, 32–35
Schrödinger's cat, 69–70
science, practical philosophy of, 6–7, 207
service learning, 218–219
sexism, 44, 180–181
sexual exploitation, 44
Shell Chemical, 39–42
Sigma Xi, 7–9
significance, 32–35, 99–107
simplicity, 74, 120–121, 180–181
slave trade, 44, 202–205
Smith, Adam, 45
Society for the Philosophy of Science in Practice, 6–7, 207–210
solar photovoltaic, 162–163
special interests, 2–3, 27–30, 95–97, 137–139
special interest science, 7–9, 29–35, 198–199, 203–205

Spotted Owl case, 149–156
steel mills, 203–205
stress, 160–162, 167–175
subjunctive causal idealizations, 115–117
sunscreen, 203–205
Sunstein, Cass, 17–19, 183–185
survival of the fittest, 1–2, 129–130
Syngenta pesticide company, 39–42
Tacoma Narrows bridge, 7–9
TCDD (tetrachlorodibenzodioxin), 30–38
Technology Assessment Office, 48–51, 207–210
television, 1–2
testability, 128–129
tetrachlorodibenzodioxin (TCDD), 30–38
thought experiments, 48–51, 69–87
Three Mile Island, 157, 161–163, 167–168, 173–175, 194–195
three-value frame, 142
thyroid radiation, 162–163
tobacco, 52–54, 203–205
Tokyo Electric Power Company (TEPCO), 158–159

tort law, 99–100, 202–203
Toxicology Program, United States, 61–65


Toxic Substances Research and Teaching Program, University of California, 207–210
trump claim, 130–132, 139–141
two-value frame, 137–139, 142
type-I errors, 104–105
type-II errors, 104–107
ultima facie, 93–94, 210–211
uncertainty, 61–65, 121–122, 142, 179–189, 194–205

Union Carbide, 185–187
Union of Concerned Scientists (UCS), 181–183

United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR), 77–83

Uruguay Round Agreements (GATT), 45–47
US Ecology, 114–115
US revolution, 219
utilitarianism, 179–180, 189–195, 214–218
validation, 25–27
validity, 149–151
value judgments, 179–218
values, cultural, 180–181
vampire bats, 146–147
verification, 25–27
Veterans Affairs Department, 48–51
Vienna General Hospital, 167–168
Vietnam War, 203–205
wealth distribution, 189–191
weight-of-evidence, 61–65
workforce
  compensation, 44–54, 188–189
  harassment, 42–43
  healthy worker effects, 51–52, 61–65, 72–73
  nonunionized, 47–48
  occupationally induced diseases, 45–48
  occupational radiation dose, 48–51
  workplace risks, 45–48
World Association of Medical Editors, 39–42
World Health Organization (WHO), 160–161, 196, 207–210
World Trade Organization (WTO), 45–47
x-rays, 52–54
Yucca Mountain, 20–28, 114–115, 122–125








