TOK Coursepack 2008-2009


DWIGHT TOK

THEORY OF KNOWLEDGE COURSEPACK 2008-2009

“Now there is one outstandingly important fact regarding Spaceship Earth, and that is that no instruction book came with it.” (Buckminster Fuller)

“We must learn our limits. We are all something, but none of us are everything.” (Blaise Pascal)

“The universe is full of magical things, patiently waiting for our wits to grow sharper.” (Eden Phillpotts)

“Knowledge will forever govern ignorance; and a people who mean to be their own governors must arm themselves with the power which knowledge gives.” (James Madison)


“We know what we are, but know not what we may be.” (William Shakespeare)

“The farther you enter the truth, the deeper it is.” (Bankei Zenji)

“Anyone who conducts an argument by appealing to authority is not using his intelligence; he is just using his memory.” (Leonardo da Vinci)

“You can tell whether a man is clever by his answers. You can tell whether a man is wise by his questions.” (Naguib Mahfouz)


The articles and materials in this collection have been assembled for your reference, and they are organized as follows:

• 2009 Prescribed Essay Titles
• TOK Diagram
• Knowledge Issue Exploration and Examples
• Way of Knowing: Reason
• Way of Knowing: Language
• Way of Knowing: Sense Perception
• Way of Knowing: Emotion
• Area of Knowledge: Natural Sciences
• Area of Knowledge: Human Sciences
• Area of Knowledge: History
• Area of Knowledge: The Arts
• Area of Knowledge: Ethics
• Area of Knowledge: Mathematics


Theory of knowledge prescribed titles
November 2008 and May 2009

Instructions to candidates

Your theory of knowledge essay for examination must be submitted to your teacher for authentication. It must be written on one of the ten titles (questions) provided below. You may choose any title, but are recommended to consult with your teacher. Your essay will be marked according to the assessment criteria published in the Theory of Knowledge guide. Remember to centre your essay on knowledge issues and, where appropriate, refer to other parts of your IB programme and to your experiences as a knower.

Always justify your statements and provide relevant examples to illustrate your arguments. Pay attention to the implications of your arguments, and remember to consider what can be said against them. If you use external sources, cite them according to a recognized convention.

Note that statements in quotations in these titles are not necessarily authentic: they present a real point of view but may not have been spoken or written by an actual person. It is appropriate to analyse them but it is unnecessary, even unwise, to spend time on researching a context for them. Examiners mark essays against the title as set. Respond to the title exactly as given; do not alter it in any way.

Your essay must be between 1200 and 1600 words in length.

1. “Science is built of facts the way a house is built of bricks: but an accumulation of facts is no more science than a pile of bricks is a house” (Henri Poincaré). Discuss in relation to science and at least one other area of knowledge.

2. When should we trust our senses to give us truth?

3. Evaluate the strengths and weaknesses of reason as a way of knowing.

4. “Seek simplicity, and distrust it” (Alfred North Whitehead). Is this always good advice for a knower?

5. “In expanding the field of knowledge we but increase the horizon of ignorance” (Henry Miller). Is this true?

6. Compare and contrast our approach to knowledge about the past with our approach to knowledge about the future.

7. “Moral wisdom seems to be as little connected to knowledge of ethical theory as playing good tennis is to knowledge of physics” (Emrys Westacott). To what extent should our actions be guided by our theories in ethics and elsewhere?

8. To understand something you need to rely on your own experience and culture. Does this mean that it is impossible to have objective knowledge?

9. “The knowledge that we value the most is the knowledge for which we can provide the strongest justifications.” To what extent would you agree with this claim?

10. “There can be no knowledge without emotion… until we have felt the force of the knowledge, it is not ours” (adapted from Arnold Bennett). Discuss this vision of the relationship between knowledge and emotion.

Handbook of procedures 2008 © International Baccalaureate Organization, 2007

Diploma requirements, theory of knowledge Page E13


KEY CONCEPT:

TOK FRAMEWORK FOR EPISTEMOLOGY


KEY CONCEPT:

KNOWLEDGE ISSUE

“Whoever undertakes to set himself up as a judge of Truth and Knowledge is shipwrecked by the laughter of the gods.” (Albert Einstein)

• In English there is one word “know”, while French and Spanish, for example, each has two (savoir/connaître and saber/conocer). In what ways do various languages classify the concepts associated with “to know”?
• Does knowledge come from inside or outside? Do we construct reality or do we recognize it?
• How much of one’s knowledge depends on interaction with others?


WHAT THE HECK IS A ‘KNOWLEDGE ISSUE’? Excerpt from the IB’s TOK guide…

Knowledge issues, knowers and knowing

People know many things: they know when they are cold, or sick; they know if they are sad or happy, lonely or in love; they know how to make fire; they know that the sun will set and rise. Nonetheless people rarely stop to think about the processes by which knowledge is produced, obtained or achieved, nor about why, under what circumstances, and in what ways knowledge is renewed or reshaped by different individuals and groups at different times or from different perspectives or approaches....

Knowledge issues

Knowledge issues are questions (emphasis added) that directly refer to our understanding of the world, ourselves and others, in connection with the acquisition, search for, production, shaping and acceptance of knowledge. These issues are intended to open to inquiry and exploration not only problems but also strengths of knowledge. Students sometimes overlook the positive value of different kinds of knowledge, and the discriminatory power of methods used to search for knowledge, to question it, and to establish its validity. Knowledge issues can reveal how knowledge can be a benefit, a gift, a pleasure and a basis for further thought and action, just as they can uncover the possible uncertainties, biases in approach, or limitations relating to knowledge, ways of knowing, and the methods of verification and justification appropriate in different areas of knowledge…

In the broadest understanding of the term, knowledge issues include everything that can be approached from a TOK point of view (that is, in accordance with the TOK aims and objectives as they are formulated) and that allows a development, discussion or exploration from this point of view. For example, a simple question that is often raised by students, “Are teachers’ course handouts and textbooks always right?”, can be treated as a knowledge issue when correctly framed in the context of TOK aims and objectives. On the contrary, it can be the prompt for entirely trivial answers that have little or nothing to do with TOK.


WHAT THE HECK IS A ‘KNOWLEDGE ISSUE’? A useful answer to this question was provided by an IB student on the website IBscrewed.com:

This "knowledge issue" discussion lies at the heart of what ToK is about. The discussions and debates that arise when one is challenged with a question like "how do you know that?" or "what makes you so sure about that?", whatever the context, are what come under the knowledge issues criterion. Here are some examples. The first (A) in each pair is a knowledge issue, and the second (B) is IMO not a knowledge issue.

A. Does language (or the use of statistics, graphs, photographs) affect our view of whether or not the planet is undergoing global warming?
B. What graphs have been produced about global warming?

A. What constitutes responsible journalism? How can we know whether scientific conclusions are justified?
B. What have journalists said about why we shouldn't eat at McDonalds every day?

A. How can we know whether intensive farming methods are always harmful?
B. A case history of a condition brought on by eating intensively farmed food.

The main difference is in the intent of the questions. The first ones are about how we know, about the difficulties that may be involved in coming to knowledge in these situations, and about the reliable and trustworthy ways of coming to know (and how reliable and trustworthy they are). The second in each pair is a question that can be answered by telling us information, recounting what is or isn't known.



http://www.guardian.co.uk/media/2008/jun/22/googlethemedia.internet/print

The networker

I Google, therefore I am losing the ability to think John Naughton The Observer, Sunday June 22, 2008

'Is Google Making Us Stupid?' was the provocative title of a recent article in the US journal The Atlantic. Its author was Nicholas Carr, a prominent blogger and one of the internet's more distinguished contrarians. 'Over the past few years,' he writes, 'I've had an uncomfortable sense that someone, or something, has been tinkering with my brain, remapping the neural circuitry, reprogramming the memory. My mind isn't going - so far as I can tell - but it's changing. I'm not thinking the way I used to think.'

He feels this most strongly, he says, when he's reading. 'Immersing myself in a book or a lengthy article used to be easy. My mind would get caught up in the narrative or the turns of the argument, and I'd spend hours strolling through long stretches of prose. That's rarely the case any more. Now my concentration often starts to drift after two or three pages. I get fidgety, lose the thread, begin looking for something else to do. I feel as if I'm always dragging my wayward brain back to the text. The deep reading that used to come naturally has become a struggle.'

His diagnosis is that he's been spending too much time online. His complaint is not really against Google - it's against the network as a whole. 'What the net seems to be doing,' he writes, 'is chipping away my capacity for concentration and contemplation. My mind now expects to take in information the way the net distributes it: in a swiftly moving stream of particles. Once I was a scuba diver in the sea of words. Now I zip along the surface like a guy on a Jet Ski.'

To judge from the volume of commentary that has followed his article, Carr has touched a nerve. He was 'flooded with emails and blog posts from people saying that my struggles with deep reading and concentration mirror their own experiences'. Various über-bloggers such as Andrew Sullivan, Jon Udell and Bill Thompson took up the theme, adding their own twists. And prominent newspaper columnists such as Leonard Pitts (Miami Herald) and Margaret Wente (Toronto Globe & Mail) also revealed their private fears that addiction to cyberspace, and online media generally were, in fact, rotting their brains.

What's surprising in a way is that people should be surprised by this. The web, after all, was designed by a chap (Tim Berners-Lee) who was motivated to do it because he had a poor memory for some things. Add powerful search engines to what he created and you effectively have a global memory-prosthesis. Who won the Ascot Gold Cup in 1904? Google will find it in a flash - and remind you that the race that year was run on 16 June, which is also the day in which all the action takes place in James Joyce's Ulysses. What was the name of Joyce's father? A quick Google search turns up the DNB entry, which reveals all. And what was the name of the woman who proved to be Parnell's downfall? Ah yes, here it is: Kitty O'Shea... and so it goes on.

The combination of powerful search facilities with the web's facilitation of associative linking is what is eroding Carr's powers of concentration. It implicitly assigns an ever-decreasing priority to the ability to remember things in favour of the ability to search efficiently. And Carr is not the first to bemoan this development. In 1994, for example, Sven Birkerts published The Gutenberg Elegies with the subtitle The Fate of Reading in an Electronic Age, a passionate defence of reading and print culture and an attack on electronic media, including the internet. 'What is the place of reading, and of the reading sensibility, in our culture as it has become?' he asked. His answer, in a word, was 'shrinking' due to the penetration of electronic media into every level and moment of our lives.


But people have worried about this since... well... the Greeks. In the Phaedrus, Socrates tells how the Egyptian god Theuth tried to sell his invention - writing - to King Thamus as 'an accomplishment which will improve both the wisdom and the memory of the Egyptians. I have discovered a sure receipt [recipe] for memory and wisdom.' To which the shrewd old king
replied that 'the discoverer of an art is not the best judge of the good or harm which will accrue to those who practise it... Those who acquire writing will cease to exercise their memory and become forgetful... What you have discovered is a receipt for recollection, not for memory.' In other words, technology giveth; and technology taketh away. Now, who was it who said that...? john.naughton@observer.co.uk

guardian.co.uk © Guardian News and Media Limited 2008



http://www.esquire.com/print-this/ESQ0307klosterman

Things We Think We Know

First impressions are usually wrong. Unfortunately, they usually turn into stereotypes, which then turn into the truth.

By: Chuck Klosterman

We all hate stereotypes. Stereotypes are killing us, and they are killing our children, and they are putting LSD into the water supply. Stereotypes are like rogue elephants with AIDS that have been set on fire by terrorists, except worse. We all hate stereotypes. Seriously. Dude, we hate them.

Except that we don't. We adore stereotypes, and we desperately need them to fabricate who we are (or who we are not). People need to be able to say things like, "All stereotypes are based on ignorance," because expressing such a sentiment makes them enlightened, open-minded, and incredibly unpleasant. Meanwhile, their adversaries need the ability to say things such as, "Like it or not, all stereotypes are ultimately based in some sort of reality," because that kind of semilogic can justify their feelings about virtually anything. Nobody really cares what specific stereotype they happen to be debating; what matters more is how that label was spawned, because that defines its consequence. It raises a fundamental query about the nature of existence: Is our anecdotal understanding of the world founded on naivete, or is it built on dark, unpopular truths? That is the question. And here (I suspect) is the answer: neither. Stereotypes are not really based on fact, and they are not really based on fiction. They are based on arbitrary human qualities no one cares about at all. Whenever a given stereotype seems right (or wrong), it's inevitably a coincidence; the world is a prejudiced place, but it's prejudiced for the weirdest, least-meaningful reasons imaginable.

Last November, I toured six German cities over a span of nine days. In my limited exposure to this nation, I was primarily struck by two points: a) the citizens of Germany are friendly and nervous, and b) the citizens of Germany perceive Americans to be obese, puritanical, nonsmoking retards. Their opinion of the United States is mind-blowingly low, even when compared with how the U. S. is viewed by France.1 Now, I concede that my reason for viewing Germans as "friendly" is completely unsophisticated; I believe Germans are nice because they were nice to me, which is kind of like trying to be a meteorologist by looking out a window. But--at least from what I could gather--the reason German citizens assume Americans are barbaric and vapid is almost as unreasonable, even though they're usually half right.

During a weekend in Frankfurt, I went to an exhibit at the Schirn Kunsthalle art museum called "I Like America." This title (as one might expect) was meant to be ironic; it's taken from a 1974 conceptual art piece called I Like America and America Likes Me, in which German artist Joseph Beuys flew to New York and spent three days in a room with a live coyote and fifty copies of The Wall Street Journal. (This piece was a European response to the destruction of Native American culture, which made about as much sense to me as it did to the coyote.) The bulk of "I Like America" focused on German interest in nineteenth-century American culture, specifically the depictions of Buffalo Bill, cowboys, and the artistic portrayal of Indians as noble savages. It was (kind of) brilliant. But it was curious to read the descriptions of what these paintings and photographs were supposed to signify; almost all of them were alleged to illustrate some tragic flaw with American ideology.

And it slowly dawned on me that the creators of "I Like America" had made one critical error: While they had not necessarily misunderstood the historical relationship between Americans and cowboy iconography, they totally misinterpreted its magnitude. With the possible exception of Jon Bon Jovi,2 I can't think of any modern American who gives a shit about cowboys, even metaphorically. Dramatic op-ed writers are wont to criticize warhawk politicians by comparing them to John Wayne, but no one really believes that Hondo affects policy; it's just a shorthand way to describe something we already understand. But European intellectuals use cowboy culture to understand American sociology, and that's a specious relationship (even during moments when it almost makes sense). As it turns out, Germans care about cowboys way more than we do.

A sardonic German teenager told me what she thought the phrase "the American dream" meant: "I assume it means watching Baywatch twenty-four hours a day." She was (sort of) joking, but I've heard similar sentiments in every foreign country I've visited. There is widespread belief that Americans spend most of their lives watching Baywatch and MTV. But what's interesting about this girl's insight was her reasoning: She thinks Americans love Baywatch because Joey Tribbiani on Friends loved Baywatch. And this does not mean she viewed Friends as an accurate reflection of life, nor does it suggest that she saw Matt LeBlanc as a tragic spokesman for the American working class; this was just one random detail she remembered about an American TV show she barely watched. She didn't care about this detail, and neither do we.

As I rode the train from Munich to Dresden to Hamburg, I started jotting down anything I noticed that could prompt me to project larger truths about Germany. An abbreviated version is as follows:

1. The water here is less refreshing than American water.
2. Instead of laughing, people tend to say, "That is funny."
3. Most of the rural fields are plowed catawampus.
4. Late-night German TV broadcasts an inordinate amount of Caucasian boxing.
5. No matter how much they drink, nobody here acts drunk.
6. Germans remain fixated on the divide between "high culture" and "low culture," and the term "popular culture" is pejorative.
7. Heavy metal is still huge in this country. As proof, there's astonishingly high interest in the most recent Paul Stanley solo record.3
8. Prostitution is legal and prominent.4
9. When addressing customers, waiters and waitresses sometimes hold their hands behind their backs, military style.
10. It's normal to sit in the front seat of a taxi, even if you are the only passenger.

I suppose I could use these details to extrapolate various ideas about life in Germany. I suppose I could create allegorical value for many of these factoids, and some of my conclusions might prove true. But I am choosing not to do this. Because--now--I can't help but recognize all the things Americans do that a) have no real significance, yet b) define the perception of our nature.

While I was in Frankfurt, Ohio State played Michigan in football. I managed to find one of the only bars in town where this game was televised, and I watched it with two superdrunk businessmen from Detroit I'd never met before (and I'll never see again). Every time Michigan scored, one of them would march outside and yell, "Go Blue!" into the dark Frankfurt night. I have no idea why he kept doing this. I don't think he did, either. (It might have been just to amuse his companion.) But I could tell that every German in the bar was viewing his aggressive, unbridled enthusiasm as normative American behavior. This man is all they will ever know about life in Michigan.

The next day, I was in an Irish-themed pub, and I met an Australian who was working for the king of Bahrain.5 He had been drinking beer and watching rugby all afternoon, and he kept repeating the same phrase over and over again: "You can't buy class." He also told me that the king of Bahrain is forty-nine years old, but that the crown prince of Bahrain is forty-eight.6 This seemed mathematically impossible, so I asked how such a relationship could exist. "You can't buy class," he said in response.

So this is all I know about life in Bahrain. When I returned from my tour, many people asked me what Germany was like. I said I had no idea. "But weren't you just there?" they inevitably asked. "Yes," I told them. "I was just there. And I don't know what it's like at all."

1. I think this is because the French are keenly aware that everyone on earth assumes they are inherently anti-American; as such, they overcompensate.
2. Other possible exceptions: Tesla (who sang "Modern Day Cowboy"), W.A.S.P. (who sang "Cocaine Cowboys"), Ratt (who portrayed cowboys in the "Wanted Man" video), and Mötley Crüe (who recorded an unreleased track called "Rodeo" during the sessions for Girls, Girls, Girls). In 1985, cowboys used Aqua Net.
3. However, the German accent makes the words Paul Stanley sound remarkably similar to the words Hold Steady, which prompted two wildly confusing conversations over the state of modern rock music.
4. This also caused confusion: Everybody who looked like a potential drug dealer was merely a hooker.
5. Bahrain is one of the richest Arab countries in the world. They don't have taxes. I also find it amazing that I met an Australian from Bahrain in an Irish bar in Germany.
6. As it turns out, the king actually just turned fifty-seven and the crown prince is thirty-seven. You can't buy class.

Chuck Klosterman is the author of many fine books, including Chuck Klosterman IV, which is available at your local bookstore.

Science & Spirit

http://www.science-spirit.org/newdirections.php?article_id=740

Flesh Made Soul
Can a new theory in neuroscience explain spiritual experience to a non-believer?
By Sandra Blakeslee
March 1, 2008

September 25, 1974. I am on the delivery table at a maternity hospital run by Swiss-German midwives in Bafut, Cameroon. My daughter, Abi, arrives at 1:30 a.m. but because no bed is available, I lie awake in the kerosene lamplight waiting for the dawn. Mornings in this West African highland are chilly and calm. Swirls of woodsmoke carpet the ground. On a nearby veranda, the peace is shattered by the high-pitched ululations of a young woman. Her arms are raised above her head, bearing a tiny bundle. It is her dead infant. As she paces up and down, grieving, I reach for my sleeping newborn and hold her to my body, shaking. The next morning, as dawn breaks, I am in a private room and again the ululations pierce the stillness. But this time the sounds
convey elation. A grandmother walks the veranda, holding newborn twins -- male firstborns -- in her arms. A birth, a death, more births. So close. Palpable. These transformative events somehow conspire to propel me, while sitting up in bed, into an altered state of consciousness. I am floating in a vast ocean of timelessness. My right hand holds my mother's hand. In her right hand is her mother's hand, which is holding her mother's hand and so on into the depths of time. My left hand holds my daughter's hand, which is holding her daughter's hand, who is holding her daughter's hand and so on into an infinite future. Time stands still. My mind and body expand in a state of pure ecstasy. Again, I am floating. The spiritual experience envelops me -- for how long I don't know -- until I come back into my body and observe my baby by my side. For the next five weeks, I walk around in this rapturous state of timelessness, of now, no past, no future, only now. I remember thinking, "Wow, if I meditated for thirty years I'd be lucky to feel this way." It all vanished when I came down with malaria and shortly afterward moved back to the United States.

People who believe in a supreme being might say that I had been in the presence of God or some manifestation of God. Or that I had touched Nirvana, that state of perfect peace, without craving, filled with transcendental happiness. But I disavow the idea of a personal God, do not believe in a soul that lives on after death, and think that religion -- defined as a set of cognitive, linguistic beliefs and creeds that are highly culture specific and historically contingent -- is irrelevant to my experience. So if this mindboggling spiritual experience came not from an encounter with God, what could explain it?

Can science help? I think it can, although the research is in an early stage. A stunning new description of how the human body and brain communicate to produce emotional states -- including our feelings, cravings, and moods -- has all the elements needed to explain how the human brain might give rise to spiritual experiences, without the necessary involvement of a supernatural presence, according to Dr. Martin Paulus, a psychiatrist at the University of California in San Diego who is also a Zen practitioner. Called interoception, it offers a radically new view of human anatomy and physiology based on how information from the body reaches the brain and how that information is processed uniquely in humans. The subjective awareness of our emotional state is based on how our brain represents our physiological state, says Dr. Arthur D. Craig, a neuroanatomist at the Barrow Neurological Institute in Phoenix and leading researcher in interoceptive processes. "If there is any way to objectively measure a subjective state," he says, "this is it." Thus the brain's centers that integrate sensory reports from the body are found to be highly active in studies of drug addiction, pain in oneself, empathy for others, humor, seeing disgust on someone's face, anticipating an electric shock, being shunned in a social setting, listening to music, sensing that time stands still -- and in Tibetan monks contemplating compassion. In this view, spirituality -- an emotional feeling from the body, a sense of timelessness, a suspension of self and dissolution of personal boundaries -- can be explained in terms of brain physiology, which means, of course, that it is subject to experimentation and manipulation.

The idea that the brain tracks moment-to-moment fluctuations in physical sensations from the body is well over a century old. Moreover, the feelings we sense from these changes are the definition of an emotion. William James, the American psychologist, famously described the physical essence of emotions in 1884 when he stated that we run from a bear not because we are afraid. We run because we have a racing heart, tight stomach, sweaty palms, and tense muscles. But the neuroanatomical details of how such signals from the body produce feelings and motivations have only recently been worked out.

It turns out that the brain exploits several pathways for knowing what the body is up to. One involves touch. Human skin contains receptors for gentle pressure, deep pressure, sustained pressure, hair follicle bending, and vibration. When one of these touch receptors is activated, fast moving signals are sent to the brain's primary touch map, where each body part is faithfully mapped out. A touch on the arm activates the brain's arm map. A touch on the cheek activates the brain's cheek map -- and so on for every inch of the human body.

But according to Craig, skin, muscles, and internal organs -- including heart, lungs, liver, and gut -- contain other types of receptors that collect an ongoing report about the body's felt state. Thus there are receptors for heat, cold, itch, tickle, muscle ache, muscle burn, dull pain, sharp pain, cramping, air hunger, and visceral urgency. There are even receptors for sensual touch -- the kind one might give to a baby or lover -- located mostly on the face and inner thigh. This collective interoceptive information represents the condition of the body as it strives to maintain internal balance, Craig says. "For example, you think of temperature as being external to the body and related to touch. The sidewalk is hot. The floor is cold. But temperature is not allied with touch," he says. "It is an opinion on the state of the body. If you are hot, a cool glass of water feels wonderful. If you are cold, the same glass of water can be unpleasant. If you are chilled, a hot shower feels great. If you are too warm, a hot shower is unpleasant."

Whereas touch signals for pressure and vibration are carried on fast acting fibers to the primary touch map, interoceptive information is carried up the spinal cord and into the brainstem via a wholly different network of slow acting fibers, Craig says. Moreover, this information about the body's felt state goes to a different region of the brain called the insula. The insula -- a prune-size structure tucked deeply into the brain's upper mantle, one in each hemisphere -- is devoted to feeling interoceptive sensations from the body. It is connected to a nearby motor area, called the anterior cingulate, which produces actions responding to those feelings. Both the insula and anterior cingulate are wired to other structures, notably the amygdala, hypothalamus, and prefrontal cortex, which allow the brain to make sense of what the body is telling it.

Humans exploit this wiring to generate complex emotions that other animals cannot fathom, Craig says. Rats, cats, and dogs, for example, have feelings from the body but they do not pass the information to the insula. It goes to simpler control centers elsewhere in the brain. "This means animals cannot experience feelings from the body in the way that you and I do," Craig says. "They have primary emotions like fear, sadness and joy but not like you or me." Monkeys and apes do pass interoceptive signals to the insula, but in comparison to humans, their insulas are far less developed. Their emotions are more nuanced than a dog's but more rudimentary than a human's. They do not, in all likelihood, have spiritual experiences.

We humans, Craig says, are anatomically unique in how we collect feelings from the body. Like our primate cousins, we gather information about ongoing physiological activity from the body and represent it in both insulas. But then we take an extra step. Our felt body senses are re-represented in the right frontal insula as social emotions. (Our frontal insulas are huge compared to other primates.) Re-representation means that a sense of blood rushing to the face might be experienced as embarrassment. Or the sense of a swelling heart is understood as pride. Or the sense of floating in bliss is felt as spirituality. Social emotions -- everything from atonement to jealousy to pride to spirituality -- are a hallmark of humankind, Craig says. Interoception is what allows us to feel them.

Social emotions are a mixture of positive and negative elements that activate the right and left frontal insulas differently, Craig says. In general the right insula is involved in energy expenditure and arousal whereas the left insula is associated with nourishment and love. Thus when empathy involves a challenge, the right insula is more active than the left. When empathy involves compassion, the left insula is more active. When we listen to music we don't like, the right insula is more active. When we listen to music we love, the left insula is turned on. Women who report greater satisfaction with their orgasms show increased activity in the left frontal insula. And when a mother gazes at her infant, her brain is bathed in two hormones -- oxytocin and vasopressin -- that are released during childbirth, lactation, and when people trust one another. Responding to this sense of pure nourishment, the left insula lights up.

Of course, spiritual experiences involve more than bliss. Physical boundaries seem to dissolve. The ego vanishes. Space expands. Awareness heightens. Craving ceases. Time stands still. Specializations in human brain circuitry can also explain these phenomena, Craig says. As the right frontal insula collects information from the body, it builds up a set of so-called emotional moments through time. An emotional moment is the brain's image of the "self" at any point in time and is, he says, the basis for our subjective emotional awareness. Usually, our awareness involves many simultaneous events. But sometimes, in extraordinary moments, our awareness is heightened. For example, if someone is in a car accident, ten seconds can feel like a full minute. Everything seems to unfold in slow motion. This is because heightened awareness produces a larger emotional moment -- so large that it alters the perception of time.

The sensation of floating out of one's body can be traced to another brain region in the parietal lobe called the right angular gyrus, which is essential for locating ourselves in space, according to Dr. Olaf Blanke, a neurologist at University Hospital in Lausanne, Switzerland. When this area is stimulated with an electrode, essentially shutting it down, people have vivid out-of-body experiences, he says. They feel as if they're floating on the ceiling, looking down at their bodies. When the electrode is turned off, they go back into their bodies. Out-of-body experiences naturally occur when activity in the right angular gyrus is suppressed for any reason, Blanke says. When people are in shock, say after an accident, a relative lack of blood flow to this region can easily produce sensations of floating outside one's body.

Finally, the right hemisphere contains circuits for recognizing and feeling the self. Imaging experiments show that three regions -- the medial prefrontal cortex, precuneus, and posterior cingulate cortex -- light up in imaging studies when subjects think about themselves, their hopes, and aspirations and retrieve episodic memories related to their lives. The sense of being -- me, myself, and I -- is located in this circuit, according to Dr. Marco Iacoboni, a neuroscientist at the University of California, Los Angeles. When the circuit is suppressed with a device called a transcranial magnet, people can no longer recognize themselves in a mirror, he says.

Given these brain circuits, it is possible to make predictions about the neurophysiology of a spiritual experience. The right angular gyrus or right parietal lobe should go offline, so that the body floats outside itself. Areas of the right brain should deactivate, including the anterior cingulate and frontal regions, allowing the self or ego to float free. The right frontal insula should register a huge global moment so that time appears to stop. Self-awareness will fade. Craving will cease. (A recent study of heavy smokers who had strokes that damaged their insulas showed that they were able to give up cigarettes instantly and permanently the moment they woke up from the stroke.) Finally, the left frontal insula should become especially active, engendering a sense of pure bliss and love.

Recent neurological studies of religion and meditation support many of these predictions. At the University of Wisconsin in Madison, psychologist Richard Davidson is studying Buddhist monks as they meditate. In his most recent study, the monks focused on compassion, concentrating on how to alleviate suffering. There was a dramatic increase in insula activity, Dr. Davidson says, as the monks reported a sense of love and compassion. At the same time, their anterior cingulates (the action part of the interoceptive circuit) showed less activity, suggesting the monks attained a state of awareness without motivation. In a study of "vipassana" meditation, where one's field of awareness is expanded to include anything that comes into consciousness, Davidson found that practitioners showed a strong decrease in areas of the right brain associated with the self. According to Dr. Sara Lazar, a neurobiologist at Harvard University, insular gray matter is bigger in experienced meditators. Like a muscle, it can be challenged to grow bigger and stronger. Paulus, the psychiatrist who is a Zen practitioner, says that the focus on interoceptive experiences, such as concentrating on the breath, is a central aspect of meditative practices in certain Zen schools. The physiological representation of the self in the brain and spiritual experiences that transcend the self are closely connected.

At the University of Pennsylvania, Dr. Andrew Newberg, director of the Center for Spirituality and the Mind, studies Buddhists using a brain-imaging tool called single photon emission computed tomography. The method is not suited for looking at the hard-to-reach insula, he says, but it does show decreased activity in the parietal lobe, which is involved in spatial orientation. When the monks meditate, he said in a 2001 paper, they perceive the dissolution of personal boundaries and a feeling of being at one with the universe. Newberg also studied Franciscan nuns while they repeated a prayer and experienced being in God's presence. He noted a similar decrease of activity in the parietal lobe, which suggests that the nuns experienced an altered body sense during prayer. The parietal lobe uses sensory information to create a sense of self and relate it spatially to the rest of the world, he says. When they pray, they lose themselves.

At the University of Montreal, psychologist Mario Beauregard is studying the neural correlates of religious, spiritual, and mystical experiences in Carmelite nuns. When the sisters reported a sense of union with God -- of having touched the ultimate ground of reality, of feeling peace, joy, unconditional love, timelessness, and spacelessness -- their insulas lit up along with several other structures (the caudate nucleus, parietal lobe, and portions of the frontal cortex).

Such studies do not, of course, prove or disprove the existence of God. The fact that many brain regions, including interoceptive regions, light up when subjects are asked to contemplate
religious feelings or to meditate is but one aspect of these experiences. Beliefs, which are learned from family and culture, create meaning.

And what about my experience? I was bathed in oxytocin and vasopressin, hormones that propel people into feelings of connectedness, trust, and affiliation. I felt love for my newborn. I was frightened by the dead infant on the veranda and elated by the birth of the twins. Could these feelings from my body -- the shock, the shivering, the sensual touch -- have pushed me into an altered state of consciousness? As I breastfed my daughter, did the continued release of maternal hormones somehow maintain the state of bliss for another five weeks? I cannot prove that neurochemistry and interoceptive processes fully explain that extraordinary event, but they are, to my mind, sufficient. Some might argue that a supernatural agent entered my body without my knowledge -- but there is not a shred of evidence for that claim. I prefer a biological explanation. In fact, I believe that every person is capable of achieving an equivalent state of sublimity without invoking God. There is nothing mystical about it. Interoceptive awareness varies with individuals. Some will be more prone to such experiences than others. Some are better at reading signals from their bodies. Hormones like oxytocin, brain chemicals like serotonin, and drugs like ecstasy play a role in producing these phenomena. Indeed, drugs are used to induce spiritual experiences the world over. I am sure of one thing. When people have a spiritual experience, like mine in Cameroon, they feel compelled to explain it. Human brains evolved to be belief engines, according to Lewis Wolpert, a professor of biology at University College London who studies the evolutionary origins of belief. "We want to explain everything," he says, "We cannot tolerate not knowing a cause."

Thus, it seems, if our cultural upbringing has convinced us that God exists, we will interpret our blissful floating state as proof of a divine power. But if we doubt that God exists, we will turn to science and hope that researchers will eventually learn how to induce a spiritual experience in anyone who asks for it.

Sandra Blakeslee is a science journalist and a former staff writer at the New York Times.

© 2007 Science & Spirit Magazine. All rights reserved.

Skeptic: eSkeptic: Wednesday, June 25th, 2008

http://www.skeptic.com/eskeptic/08-06-25.html#feature

The Death of Socrates, by Jacques-Louis David

Socratic Skepticism by Priscilla Sakezles

IT IS FREQUENTLY CLAIMED that Socrates said, “All I know is that I know nothing.” For instance, Skeptic magazine makes this claim in its self-defining article in the front matter of the magazine, “What is a Skeptic?” It uses this quote to justify a long historical tradition for skepticism, but it then castigates Socrates for making this claim, saying: “this pure position is sterile and unproductive and held by virtually no one. If you are skeptical about everything, you would have to be skeptical of your own skepticism.” This is a misquote, and I would like to correct the record for the readers of Skeptic magazine.

The source of this misquote is Plato’s dialogue the Apology, and there are five different Socratic claims that may superficially appear to justify it. I will go through the relevant parts of the Apology, reviewing these claims to prove that they are not equivalent to, nor do they imply, the infamous quote.

The Apology is Plato’s record of Socrates’ trial on the charges of impiety and corrupting the youth of Athens. In it we hear the story of Chaerephon asking the Delphic oracle at the temple of Apollo, “Is there anyone wiser than Socrates?” The priestess replies that there is no one. Thus, Apollo himself proclaims, “no one is wiser than Socrates.” Socrates cannot believe that this is literally true, because, he says, “I know very well that I am not wise, even in the smallest degree” (21b4–5). The Delphic oracle has a reputation for speaking in riddles that require interpretation, so Socrates sets out to discover what the oracle really means.

Attempting to find someone wiser than himself, Socrates goes to those people reputed to be wise, questioning each about his area of expertise. A military general, for instance, should know “what courage is” (the subject of Laches). A religious zealot prosecuting his father for impiety certainly should know “what piety is” (the subject of Euthyphro). What Socrates discovers, time after time, is that each man is not in fact wise, although he is thought by many people — especially himself — to be so. Under Socratic cross-examination each man fails to define the concept he claims to know so much about. Socrates hopes to prove to each man how ignorant he really is. This process of repeated public humiliation of well-known citizens makes Socrates very unpopular and is what ultimately results in his trial and execution, at least as Plato tells the story.

Socrates says he regrets this effect, but he thinks that the oracle’s message is meant as a divine command to expose the ignorance of his fellow citizens. He is thus morally bound to devote his life to this noble, if annoying, cause. He is confident that Apollo has commanded him to spend his life searching for wisdom, examining both himself and others. In fact, he claims to be a “gift from the god” to the Athenian people, acting as a “gadfly” to the large and well-bred but sluggish horse that is Athens (30de). Socrates’ final conclusion after years of such examination is this: “I am wiser than this man: neither of us
knows anything that is really worth knowing, but he thinks that he has knowledge when he has not, while I, having no knowledge, do not think that I have. I seem, at any rate, to be a little wiser than he is on this point: I do not think that I know what I do not know” (21d7).

Socrates is not claiming that all knowledge is impossible. For instance, his examination of the artisans shows that they know many things related to their skills that he does not, and so in a sense they are wiser than he. He says of his own artisan knowledge: “for I knew very well that I possessed no knowledge at all worth speaking of” (22c9–d1). However, having this specialized knowledge or skill makes the artisans think they are also wise in “matters of the greatest importance.” Socrates proves they are not. Again, he decides that he is better off in his current condition, possessing neither their wisdom nor their ignorance.

Socrates ultimately interprets the meaning of the oracle to be that human wisdom is worth little or nothing. The oracle merely uses Socrates as an example, “as though he would say to men, ‘he among you is the wisest who, like Socrates, knows that his wisdom is really worth nothing at all’” (23b3–4). So Apollo is right after all: no one is wiser than Socrates because only Socrates admits his own ignorance.

Socrates, of course, is found guilty and sentenced to death. But he is not afraid of death, because to fear death is to think oneself wise without really being so, “for it is to think that we know what we do not know” (29a6). Socrates does not know what will happen when he dies, although he does suggest two alternatives (40c–41c). He may cease to exist and so lose consciousness, which would be like a long dreamless sleep. Or his soul may relocate to another place, where he could spend eternity cross-examining other dead people. Either option would be fine with Socrates; he does not know which is correct, and so he is not afraid.

We have seen five Socratic quotes that may appear to mean or imply “All I know is that I know nothing.” They are:

(S1) I know very well that I am not wise, even in the smallest degree (21b4–5).
(S2) I do not think that I know what I do not know (21d7).
(S3) I knew very well that I possessed no knowledge at all worth speaking of (22c9–d1).
(S4) He among you is the wisest who knows that his wisdom is really worth nothing at all (23b3–4).
(S5) To fear death is to think that we know what we do not know (29a6).

Are any of these statements really equivalent to “All I know is that I know nothing”? No. (S1) does say that Socrates knows something: that he is not wise. (S2) makes no positive knowledge claim, but rather says quite the opposite: he does not think that he knows anything that he does not really know. He lacks the false pretensions to knowledge of his interlocutors. (S3) may come close to saying “I know that I know nothing” if it is removed from its context and universalized. But within its context it is obviously limited to saying that Socrates knows that he has no knowledge of the artisan’s special skills (for instance, he does not know how to build a house). (S4) does make a positive knowledge claim: the wisest person knows that his wisdom is worthless. (S5) says that to fear death is to suppose that one really does know something that one in fact does not know. Socrates does not fear death precisely because he does not suffer from this epistemological delusion.

“All I know is that I know nothing” is not an expression of skepticism, but of dogmatism. It asserts that I do in fact have positive knowledge of one and only one truth: that I do not have knowledge of any truths. This is obviously self-contradictory, and is a mistake often falsely attributed to Socrates. What Socrates does say is that he does not think or claim that he knows anything that he does not in fact really know. He has the “human wisdom” (20d) of recognizing his own ignorance, unlike his many unfortunate interlocutors. This is the attitude of an honest and sincere skeptic: he does not proclaim knowledge to be impossible, but merely is humble about his own and continues the search, always critically examining any knowledge claim.

Socrates never says that he knows nothing at all, and he certainly does not say that he knows that he knows nothing. Rather, he says neither he nor anyone else “knows anything that is really worth knowing” (21d). The meaningful knowledge that Socrates seeks, but never finds, is the real definitions of ethical concepts such as courage, piety, moderation, and justice. As it turns out (at least as we can interpret from Plato’s development), he cannot find them because he is looking in the wrong place — the true answers are to be found in the realm of “forms,” which his student Plato discovers. And with that discovery, the philosophical tone of Plato’s dialogues shifts from skeptical to dogmatic.
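
The self-contradiction described above can be made explicit in the notation of epistemic logic. The derivation below is a minimal sketch added for illustration, not part of the original article; it assumes only that knowledge is factive (whatever is known is true), and the symbols K, p and q are introduced here purely for that purpose.

% "All I know is that I know nothing," read as a literal knowledge claim.
% K\varphi abbreviates "I know that \varphi"; factivity: K\varphi \to \varphi.
\begin{align*}
  p &\equiv \forall q\,\neg Kq   && \text{``I know nothing''}\\
  1.&\ Kp                        && \text{the claim: I know that I know nothing}\\
  2.&\ Kp \to p                  && \text{factivity of knowledge}\\
  3.&\ \forall q\,\neg Kq        && \text{from 1 and 2}\\
  4.&\ \neg Kp                   && \text{from 3, instantiating } q \text{ with } p\\
  5.&\ \text{lines 1 and 4 contradict each other.}
\end{align*}

On this reading, the position the article attributes to Socrates stops short of line 1: he never claims to know that he knows nothing, only that he does not claim knowledge he lacks, which is perfectly consistent.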

We must conclude that the attribution of this famous quote to Socrates is wrong. He is skeptical in a certain sense, but he is not the sort of dogmatic and self-contradicting skeptic that he is often made out to be. To return to Skeptic magazine’s self-definition, skepticism is “the application of reason to any and all ideas … it is a method, not a position.” It is the Socratic method, always vigilant to expose false pretensions to knowledge, whether about the real meaning of piety or virtue, the nature or existence of God, ghosts, or UFO’s. Socrates should be given the credit he deserves as the pioneer of this form of skepticism.

Science Journal - WSJ.com

http://online.wsj.com/public/article_print/SB121450609076407973.html

June 27, 2008

SCIENCE JOURNAL By ROBERT LEE HOTZ

Get Out of Your Own Way

Studies Show the Value of Not Overthinking a Decision
June 27, 2008; Page A9

Fishing in the stream of consciousness, researchers now can detect our intentions and predict our choices before we are aware of them ourselves. The brain, they have found, appears to make up its mind 10 seconds before we become conscious of a decision -- an eternity at the speed of thought.

Their findings challenge conventional notions of choice.

"We think our decisions are conscious," said neuroscientist John-Dylan Haynes at the Bernstein Center for Computational Neuroscience in Berlin, who is pioneering this research. "But these data show that consciousness is just the tip of the iceberg. This doesn't rule out free will, but it does make it implausible." Through a series of intriguing experiments, scientists in Germany, Norway and the U.S. have analyzed the distinctive cerebral activity that foreshadows our choices. They have tracked telltale waves of change through the cells that orchestrate our memory, language, reason and self-awareness.

In ways we are only beginning to understand, the synapses and neurons in the human nervous system work in concert to perceive the world around them, to learn from their perceptions, to remember important experiences, to plan ahead, and to decide and act on incomplete information. In a rudimentary way, they predetermine our choices.

To probe what happens in the brain during the moments before people sense they've reached a decision, Dr. Haynes and his colleagues devised a deceptively simple experiment, reported in April in Nature Neuroscience. They monitored the swift neural currents coursing through the brains of student volunteers as they decided, at their own pace and at random, whether to push a button with their left or right hands.

In all, they tested seven men and seven women from 21 to 30 years old. They recorded neural changes associated with thoughts using a functional magnetic resonance imaging machine and analyzed the results with an experimental pattern-recognition computer program.

While inside the brain scanner, the students watched random letters stream across a screen. Whenever they felt the urge, they pressed a button with their right hand or a button with their left hand. Then they marked down the letter that had been on the screen in the instant they had decided to press the button. Studying the brain behavior leading up to the moment of conscious decision, the researchers identified signals that let them know when the students had decided to move 10 seconds or so before the students knew it themselves. About 70% of the time, the researchers could also predict which button the students would push. "It's quite eerie," said Dr. Haynes.
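
The "experimental pattern-recognition computer program" mentioned above is, at heart, a classifier trained to map pre-decision brain-activity patterns onto the choice that follows them. The sketch below is a minimal illustration of that idea in Python, not the researchers' actual pipeline: the data are synthetic, and the trial counts, voxel counts, signal strength and resulting accuracy are assumptions chosen only to show how above-chance decoding of a left/right choice can be measured on held-out trials.

# Illustrative sketch only: synthetic "fMRI" patterns, not data from the study.
import numpy as np

rng = np.random.default_rng(0)

n_trials, n_voxels = 200, 50            # hypothetical trials and voxels
labels = rng.integers(0, 2, n_trials)   # 0 = left button, 1 = right button

# Assume each upcoming choice leaves a weak, noisy signature in the voxels.
signature = rng.normal(0.0, 1.0, n_voxels)
patterns = rng.normal(0.0, 1.0, (n_trials, n_voxels))
patterns += 0.1 * np.outer(labels * 2 - 1, signature)  # weak class signal

def fit_centroids(X, y):
    """Nearest-centroid 'decoder': the average pattern for each choice."""
    return np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(centroids, X):
    """Assign each trial to the class whose centroid is nearer."""
    distances = np.stack([np.linalg.norm(X - c, axis=1) for c in centroids])
    return distances.argmin(axis=0)

# Train on the first half of the trials, test on the held-out second half.
half = n_trials // 2
centroids = fit_centroids(patterns[:half], labels[:half])
accuracy = (predict(centroids, patterns[half:]) == labels[half:]).mean()
print(f"decoding accuracy on held-out trials: {accuracy:.0%} (chance = 50%)")

On synthetic data like these, the held-out accuracy typically lands well above chance but well short of certainty, the same qualitative picture as the roughly 70% figure reported in the article.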

MIND READING: Is your freedom of choice an illusion? Your brain knows what you're going to do 10 seconds before you are aware of it, neuroscientist John-Dylan Haynes and his colleagues reported recently in Nature Neuroscience. Last year, in the journal Current Biology, the scientists reported they could use brain-wave patterns to identify your intentions before you revealed them. Their work builds on a landmark 1983 paper in the journal Brain by the late Benjamin Libet and his colleagues at the University of California, San Francisco, who found that the brain initiates free choices about a third of a second before we are aware of them. Together, these findings support the importance of the unconscious in shaping decisions. Psychologist Ap Dijksterhuis and his co-workers at the University of Amsterdam reported in the journal Science that it is not always best to deliberate too much before making a choice. Nobel laureate Francis Crick -- co-discoverer of the structure of DNA -- tackled the implications of such cognitive science in his 1993 book The Astonishing Hypothesis: The Scientific Search for the Soul. With co-author Giulio Tononi, Nobel laureate Gerald Edelman explores his biology-based theory of consciousness in A Universe of Consciousness: How Matter Becomes Imagination.

Other researchers have pursued the act of decision deeper into the subcurrents of the brain. In experiments with laboratory animals reported this spring, Caltech neuroscientist Richard Andersen and his colleagues explored how the effort to plan a movement forces cells throughout the brain to work together, organizing a choice below the threshold of awareness. Tuning in on the electrical dialogue between working neurons, they pinpointed the cells of what they called a "free choice" brain circuit that in milliseconds synchronized scattered synapses to settle on a course of action. "It suggests we are looking at this actual decision being made," Dr. Andersen said. "It is pretty fast."

And when those networks momentarily malfunction, people do make mistakes. Working independently, psychologist Tom Eichele at Norway's University of Bergen monitored brain activity in people performing routine tasks and discovered neural static -- waves of disruptive signals -- preceded an error by up to 30 seconds. "Thirty seconds is a long time," Dr. Eichele said.

Such experiments suggest that our best reasons for some choices we make are understood only by our cells. The findings lend credence to researchers who argue that many important decisions may be best made by going with our gut -- not by thinking about them too much. Dutch researchers led by psychologist Ap Dijksterhuis at the University of Amsterdam recently found that people struggling to make relatively complicated consumer choices -- which car to buy, apartment to rent or vacation to take -- appeared to make sounder decisions when they were distracted and unable to focus consciously on the problem. Moreover, the more factors to be considered in a decision, the more likely the unconscious brain handled it all better, they reported in the peer-reviewed journal Science in 2006. "The idea that conscious deliberation before making a decision is always good is simply one of those illusions consciousness creates for us," Dr. Dijksterhuis said.

Does this make our self-awareness just a second thought? All this work to deconstruct the mental machinery of choice may be the best evidence of conscious free will. By measuring the brain's physical processes, the mind seeks to know itself through its reflection in the mirror of science. "We are trying to understand who we are," said Antonio Damasio, director of the Brain and Creativity Institute at the University of Southern California, "by studying the organ that allows you to understand who you are."



How the Mind Works: Revelations - The New York Review of Books

http://www.nybooks.com/articles/21575

VOLUME 55, NUMBER 11 · JUNE 26, 2008

How the Mind Works: Revelations
By Israel Rosenfield, Edward Ziff

The Physiology of Truth: Neuroscience and Human Knowledge, by Jean-Pierre Changeux, translated from the French by M.B. DeBevoise. Belknap Press/Harvard University Press, 324 pp., $51.50
Nicotinic Acetylcholine Receptors: From Molecular Biology to Cognition, by Jean-Pierre Changeux and Stuart J. Edelstein. Odile Jacob, 284 pp., $99.00
Conversations on Mind, Matter, and Mathematics, by Jean-Pierre Changeux and Alain Connes, translated from the French by M.B. DeBevoise. Princeton University Press, 260 pp., $26.95 (paper)
What Makes Us Think? A Neuroscientist and a Philosopher Argue about Ethics, Human Nature, and the Brain, by Jean-Pierre Changeux and Paul Ricoeur, translated from the French by M.B. DeBevoise. Princeton University Press, 335 pp., $24.95 (paper)
Phantoms in the Brain: Probing the Mysteries of the Human Mind, by V.S. Ramachandran and Sandra Blakeslee, with a foreword by Oliver Sacks. Quill, 328 pp., $16.00 (paper)
Mirrors in the Brain: How Our Minds Share Actions and Emotions, by Giacomo Rizzolatti and Corrado Sinigaglia, translated from the Italian by Frances Anderson. Oxford University Press, 242 pp., $49.95
A Universe of Consciousness: How Matter Becomes Imagination, by Gerald M. Edelman and Giulio Tononi. Basic Books, 274 pp., $18.00 (paper)

Jean-Pierre Changeux is France's most famous neuroscientist. Though less well known in the United States, he has directed a famous laboratory at the Pasteur Institute for more than thirty years, taught as a professor at the Collège de France, and written a number of works exploring "the neurobiology of meaning." Aside from his own books, Changeux has published two wide-ranging dialogues about mind and matter, one with the mathematician Alain Connes and the other with the late French philosopher Paul Ricoeur.

Changeux came of age at a fortunate time. Born in 1936, he began his studies when the advent both of the DNA age and of high-resolution images of the brain heralded a series of impressive breakthroughs. Changeux took part in one such advance in 1965 when, together with Jacques Monod and Jeffries Wyman, he established an important model of protein interactions in bacteria, which, when applied to the brain, became crucial for understanding the behavior of neurons. Since that time, Changeux has written a number of books exploring the functions of the brain.

The brain is of course tremendously complex: a bundle of some hundred billion neurons, or nerve cells, each sharing as many as ten thousand connections with other neurons. But at its most fundamental level, the neuron, the brain's structure is not difficult to grasp. A large crown of little branches, known as "dendrites," extends above the body of the cell and receives signals from other neurons, while a long trunk or "axon," which conducts neural messages, projects below, occasionally shooting off to connect with other neurons. The structure of the neuron naturally lends itself to comparison with the branches, trunk, and roots of a tree, and indeed the technical term for the growth of dendrites is "arborization."

We've known since the early nineteenth century that neurons use electricity to send signals through the body. But a remarkable experiment by Hermann von Helmholtz in 1859 showed that the nervous system, rather than telegraphing messages between muscles and brain, functions far slower than copper wires. As Changeux writes,

Everyday experience leads us to suppose that thoughts pass through the mind with a rapidity that defies the laws of physics. It comes as a stunning surprise to discover that almost the exact opposite is true: the brain is slow—very slow—by comparison with the fundamental forces of the physical world.

Further research by the great Spanish anatomist Santiago Ramón y Cajal suggested why the telegraph analogy failed to hold: most neurons, instead of tying their ends together like spliced wires, leave a gap between the terminus of the neuron, which transmits signals, and the receptor of those signals in the adjacent neuron. How signals from neurons manage to cross this gap, later renamed the synaptic cleft ("synapse" deriving from the Greek for "to bind together"), became the major neurophysiological question of the early twentieth century.

Most leading biologists at that time assumed that neurons would use the electricity in the nervous system to send signals across the cleft. The average synaptic cleft is extremely small—a mere twenty nanometers wide—and though the nervous system may not function at telegraphic speed, it was not difficult to imagine electrical pulses jumping the distance. Further, given the speed with which nerves react, the alternative theory, that electrical pulses would cause a chemical signal to move across the cleft, seemed to rely on far too slow a mechanism. But as the decades passed, hard evidence slowly accumulated in support of the chemical theory. According to Changeux, experiments began to suggest that "the human brain therefore does not make optimal use of the resources of the physical world; it makes do instead with components inherited from simpler organisms...that have survived over the course of biological evolution."

A remarkable experiment by Otto Loewi in the 1920s first suggested how the brain makes use of its evolutionary inheritance in order to communicate. Loewi bathed a frog's heart in saline solution and stimulated the nerve that normally slows the heartbeat. If the slowing of the heart was caused by a chemical agent rather than an electrical impulse, Loewi reasoned, then the transmitting chemical would disperse throughout the solution. Loewi tested his hypothesis by placing a second heart in the solution. If nerve transmission was chemical rather than electrical, he supposed, then the chemical slowing down the first heart, dispersed throughout the solution, would likewise slow down the second heart. This is exactly what happened. Loewi named the substance released by the relevant nerve, called the vagus nerve, Vagusstoff; today it is known as the neurotransmitter acetylcholine. By the 1950s, further experiments had definitively proved that most neurons, while using electricity internally, must resort to chemicals to cross the synaptic cleft and communicate with the next neuron in the chain.

Changeux began his work at this stage, when the basic methods for neuron communication had been determined but the detailed chemical mechanisms were just opening up to research. Thanks to new high-resolution images from electron microscopes, first taken by Sanford Palay and George Palade in 1955, biologists could finally see the minute structures of the synapse. They discovered that the transmitting end of the neuron, called the nerve terminal, comes packed with tiny sacs, or vesicles, each containing around five thousand molecules of a specialized chemical, the neurotransmitter. When an electrical signal moves down the neuron, it triggers the vesicles and floods the synaptic cleft with neurotransmitter molecules. These chemical neurotransmitters then attach to the proteins called receptors on the surface of the neuron that is located just across the synaptic cleft, opening a pore and allowing the electrically charged atoms called ions to flow into the neuron. Thus, the chemical signal is converted back into an electrical signal, and the message is passed down the line.
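Computational models usually compress the chain just described (electrical spike, neurotransmitter release, receptor opening, renewed electrical signal) into a single connection "weight" through which one neuron's firing nudges the next neuron's voltage. The Python sketch below is that kind of caricature, a leaky integrate-and-fire unit driven by presynaptic spikes; every constant in it is an arbitrary illustrative choice, not a measured biological value.

# Toy leaky integrate-and-fire neuron: each presynaptic spike produces a voltage
# bump in the receiving cell (standing in for the whole chemical step), the
# voltage leaks away between inputs, and the cell emits its own spike when the
# voltage crosses a threshold. All numbers are arbitrary.
import numpy as np

rng = np.random.default_rng(1)
dt, steps = 1.0, 200                    # time step and number of steps (arbitrary units)
tau, threshold, reset = 20.0, 1.0, 0.0
weight = 0.3                            # synaptic strength of the connection

pre_spikes = rng.random(steps) < 0.25   # presynaptic neuron fires on about 25% of steps
v = 0.0
post_spike_times = []
for t in range(steps):
    v += (-v / tau) * dt                # leak: voltage decays toward rest
    if pre_spikes[t]:
        v += weight                     # synaptic input bumps the voltage
    if v >= threshold:                  # threshold crossing = output spike
        post_spike_times.append(t)
        v = reset
print("postsynaptic spikes at steps:", post_spike_times)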

These processes were still somewhat mysterious in 1965, when the young Changeux, working with his teacher Jacques Monod and the American scientist Jeffries Wyman, produced one of the theories for which he became best known. The three scientists, then studying metabolism, attempted to explain how the structure of an enzyme could stabilize when another molecule attached to it. Changeux later saw a parallel with the nervous system. When a chemical neurotransmitter binds to a receptor it holds the ion pore open, ensuring its continuing function, a critical step in converting the neurotransmitter's chemical signal back into an electrical pulse. Changeux's discovery established the groundwork for the way many neurons communicate, and his findings were based on the more general paper he had coauthored with Wyman and Monod.

With a working theory for neuron communication established, Changeux then turned to the ways that larger structures in the brain might change these basic interactions. A longstanding theory, introduced by Donald Hebb in 1949, proposed that neurons could increase the strength of their connection through repeated signals. According to a slogan describing the theory, "neurons that fire together, wire together." Repeated neuron firings, Hebb believed, would produce stronger memories, or faster thought patterns. But researchers found that certain regulatory networks could achieve far more widespread effects by distributing specialized neurotransmitters, such as dopamine and acetylcholine, throughout entire sections of the brain, reinforcing connections without the repeated firings required by Hebb.

Changeux focused on these specialized distribution networks. It was long known that nicotine acts on the same receptor as the neurotransmitter acetylcholine. Changeux recognized that this could explain both nicotine's obvious benefits—greater concentration, relaxation, etc.—as well as the drug's more puzzling long-term effects. For instance, while cigarettes are dangerous to health, some studies show that smokers tend to suffer at significantly lower rates from Alzheimer's disease and Parkinson's disease. Changeux found that nicotine, by attaching to the same receptors as acetylcholine, reproduces some of the benefits of acetylcholine by reinforcing neuronal connections throughout the brain. Nicotine is not exactly the same chemically as acetylcholine, but can mimic its effects. Changeux's lab has since focused on the workings of the nicotine/acetylcholine system, and he has attempted to explain how all such regulatory systems, working together, can produce the experience we call consciousness—as well as more abstract concepts like truth.
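The Hebbian slogan quoted above, "neurons that fire together, wire together," is usually written as a rule that increases a connection's strength whenever the two connected units are active at the same time. The Python sketch below illustrates only that bare rule; the activity probabilities and learning rate are invented, and real models add decay or normalization so the weights do not grow without bound.

# Bare Hebbian learning: a weight grows only when its pre- and postsynaptic
# units are active in the same step, so correlated units end up more strongly
# "wired" than uncorrelated ones. All probabilities are invented.
import numpy as np

rng = np.random.default_rng(2)
steps, lr = 500, 0.01
w_correlated, w_uncorrelated = 0.0, 0.0

for _ in range(steps):
    pre = rng.random() < 0.5                          # presynaptic unit active?
    post_corr = rng.random() < (0.9 if pre else 0.1)  # tends to fire along with pre
    post_unrel = rng.random() < 0.5                   # fires regardless of pre
    w_correlated += lr if (pre and post_corr) else 0.0
    w_uncorrelated += lr if (pre and post_unrel) else 0.0

print("weight to the correlated unit:  ", round(w_correlated, 2))
print("weight to the uncorrelated unit:", round(w_uncorrelated, 2))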

How, then, does the mass of cells in the brain produce our experience of sight, sound, and imagination? According to Changeux, the infant brain is not a blank slate, receiving all experience and instruction—both what it sees and how to think about it—from the outside. Nor is the infant brain preprogrammed, its reactions predetermined, unable to change itself and adapt. Rather, as Changeux began to hypothesize in the late 1970s, the brain, beginning in the embryo, produces, by means of genetic action, "mental objects of a particular type that might be called prerepresentations—preliminary sketches, schemas, models." According to this theory, spontaneous electrical activity in the brain, "acting as a Darwinian-style generator of neuronal diversity," creates dynamic, highly variable networks of nerve cells, whose variation is comparable with the variation in DNA. Those networks then give rise to the reflex movements of the newborn infant.

Over time the infant's movements become better coordinated. Neural networks associated with more successful movements—such as grasping an object—are "selected"; that is, their activity is reinforced as their synaptic junctions become strengthened. As the child continues to explore his or her surroundings, Darwinian competition strengthens some of these transient networks sufficiently to make them relatively permanent parts of the child's behavioral repertoire. Changeux calls the process, first elaborated in a 1976 paper, "learning by selection."

Animals and infants conduct this miniature version of natural selection by means of what Changeux terms "cognitive games." One well-known example concerns cries of alarm in African vervet monkeys. Adult monkeys use a simple but effective vocabulary of sounds that warn against danger: a loud bark for leopards, a two-syllable cough for eagles, and a hissing sound for snakes. Surprisingly, researchers found, baby monkeys hiss at snakes without explicit instruction. Changeux writes, "Snakes seem to arouse a sort of innate universal fear, which probably developed fairly early in the course of the evolution of the higher vertebrates." When adult monkeys confirm the baby's judgment with their own hisses, the infant's genetically produced prerepresentation is rewarded and reinforced. But baby monkeys require more explicit instruction in protecting themselves against predators, such as eagles, to which they have been less genetically conditioned.

At first, newborn monkeys react to any form that flies in the air, which is to say to the class of birds as a whole. Then, gradually, a selective stabilization of the response to the shape of dangerous species takes place.... If the first cry of alarm is sounded by one of the young, the nearest adult looks up. If it sees a harmless bird, it does not react. But if the young monkey has spotted a martial eagle, the adult reacts by emitting a cry of alarm that confirms the presence of danger.... The adult's cry of alarm validates a pertinent relationship between shape and sound that is established in the brain of the young monkey.

This process of learning alarm cries through trial and error, reward and suppression, demonstrates the kind of cognitive games that are played out constantly through the brain's interaction with the environment. As successful behaviors increase in number, Changeux believes, they strengthen the capacity to consciously manipulate the environment.
Most actions are not beneficial, and as each neuron competes for limited resources, many of the least useful neurons literally die out. Changeux therefore hypothesizes: "To learn is to eliminate."
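Changeux's "learning by selection" can be caricatured in the same spirit: generate many variable candidate networks, keep and copy the ones whose behavior the environment rewards, and eliminate the rest ("to learn is to eliminate"). The Python sketch below is an editorial abstraction of that idea rather than Changeux's model; the rewarded target, the population size and the scoring rule are all invented.

# Caricature of learning by selection: random candidate "networks" are scored
# against what the environment rewards; the best are kept and slightly varied,
# the least useful are eliminated. All parameters are invented.
import numpy as np

rng = np.random.default_rng(3)
target = rng.normal(0, 1, 8)            # stands in for the rewarded behavior
population = rng.normal(0, 1, (30, 8))  # 30 spontaneously generated candidates

for generation in range(50):
    scores = -np.linalg.norm(population - target, axis=1)    # closer = better
    survivors = population[np.argsort(scores)[-10:]]          # selection step
    # Reinforced survivors persist; new variants are small perturbations of them.
    offspring = survivors[rng.integers(0, 10, 20)] + rng.normal(0, 0.1, (20, 8))
    population = np.vstack([survivors, offspring])

best_distance = np.linalg.norm(population - target, axis=1).min()
print("distance of best surviving network from target:", round(float(best_distance), 3))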

In Changeux's view, starting in the womb, spontaneous electrical activity within neurons creates highly variable networks of nerve cells; the networks are selected and reinforced by environmental stimuli; and these reinforced networks can then be said to "represent" the stimuli—e.g., the appearance of a predator—though no particular network of nerve cells exclusively represents any one set of stimuli. The environment does not directly "instruct" the brain; it does not imprint precise images in memory. Rather, working through our senses, the environment selects certain networks and reinforces the connections between them.[1]

Critical to this process of selection, in Changeux's view, is the brain's reward system: the pleasure response. Dopamine is part of a reward system that is important in human and animal behavior, and dopamine levels are elevated in the brain when we experience pleasure or well-being. Pleasure is associated both with the anticipation of activities essential to survival—for example, eating and sex—and with the activities themselves. Changeux describes one particularly telling experiment: When a trained monkey succeeds in grasping a peanut hidden in a box, the inside of which it cannot see, the activity of dopamine neurons increases at precisely the moment when the animal recognizes the food with its fingers.

But opiates, alcohol, cannabinoids, nicotine, and other drugs can also increase the release of dopamine and subvert the normal function of the reward system. A rat given infusions of cocaine into the brain following the pressing of a bar will persist in pressing the bar repeatedly in preference to consuming food or water. Sugar, too, can be addictive. Indeed, the National Institutes of Health is now studying whether foods high in fat and sugar should be classified as addictive agents, in the same category as nicotine, alcohol, and cocaine.

In general, behaviors associated with pleasure are reinforced by the release of dopamine; as a result, the synaptic junctions of the associated neuronal networks are strengthened. And as they are strengthened, the changes in brain function often become permanent. A former cocaine addict who has been able to live without the drug for a decade may experience an irresistible need for cocaine when returning to a place whose cues evoke past drug-taking experiences. But the memories that are evoked are reconstructions: Every evocation of memories is a reconstruction on the basis of physical traces stored in the brain in latent form, for example at the level of neurotransmitter receptors. Instead of recalling the experiences of both pleasure-filled high and painful withdrawal, the addict's memories may be overwhelmed by the powerful neural connections previously created by the drug. Only if memory is a matter of reconstruction of latent physical traces, not direct recall of past events, Changeux argues, could these kinds of drug-induced long-term compulsions occur.

In his book The Physiology of Truth, Changeux connects memory to the acquisition of knowledge and the testing of its validity, as is done in science in general. "We now find ourselves in a position," Changeux writes,

to sketch the outlines of a plausible interpretation of the neural bases of meaning. The naive view that the neural representation of a complex meaning—a yellow Renault, for example—is located in a single, hierarchically prominent nerve cell...has been found to be unjustified for the most part. It is generally accepted today that distinct populations of neurons in sensory, motor, associative, and other territories are linked as part of a distributed network...[which] mobilizes several distinct and functionally specific territories in a discrete manner, thus constituting a neural embodiment of meaning. Note that this assumption does not require that...anatomical connections be...reproducible across individual brains in every detail [in order to evoke memory], only that a map of functional relations be established.

In some ways Changeux's ideas are similar to Gerald Edelman's theory of neural Darwinism. For both Changeux and Edelman, Darwinian selection is an essential part of how the brain functions. And yet Edelman and Changeux have radically different views of what selective mechanisms in the brain imply about the nature of brain function, knowledge, memory, and consciousness. Our senses, in Edelman's view, are confronted by a chaotic, constantly changing world that has no labels. The brain must create meaning from that chaos. Edelman writes, in A Universe of Consciousness, his book with Giulio Tononi:

It is commonly assumed that memory involves the inscription and storage of information, but what is stored? Is it a coded message? When it is "read out" or recovered, is it unchanged? These questions point to the widespread assumption that what is stored is some kind of representation. [We take] the opposite viewpoint, consistent with a selectionist approach, that memory is nonrepresentational.[2]

While Changeux also considers selection to be essential to the formation of memory, he, as opposed to Edelman, believes that once a set of neuronal circuits has been selected to form a memory, they become part of a relatively stable structure that "can be conceived as a set of long-lasting global representations." Though "the precise patterns of connectivity in the network may vary from individual to individual," Changeux writes,

its functional relationships (or stabilized meanings) remain constant. In this way a "scale model" of external reality...is selected and stored in memory in the brain. Memory objects enjoy a genuine existence, then, as latent "forms" composed of stable neuronal traces.

Changeux says memories can be modified by the addition of new information, or "by preexisting knowledge or by the emotional resonance of actual memories of past experience."

In contrast to Changeux's account, Edelman, we believe, has a different and considerably deeper view of memory and what it tells us about the nature of meaning and brain function. Both Changeux and Edelman propose that during memory formation, our interactions with the world cause a Darwinian selection of neural circuits, much as the body, when invaded by a virus, "selects" the most potent antibodies from the enormous repertoire of antibodies made available by the body's immune system. However, the resulting memory is not, Edelman says, a representation of the outside world, any more than the antibody that has protected the body against an infecting virus is a representation of that virus. Yet the antibody can protect the body against a future attack by the virus, just as the neural circuits can contribute to memory recall. Instead, Edelman writes, memory

is the ability to repeat a mental or physical act after some time despite a changing context.... We stress repetition after some time in this definition because it is the ability to re-create an act separated by a certain duration from the original signal set that is characteristic of memory. And in mentioning a changing context, we pay heed to a key property of memory in the brain: that it is, in some sense, a form of constructive recategorization during ongoing experience, rather than a precise replication of a previous sequence of events.

For Edelman, then, memory is not a "small scale model of external reality," but a dynamic process that enables us to repeat a mental or physical act:

the key conclusion is that whatever its form, memory itself is a [property of a system]. It cannot be equated exclusively with circuitry, with synaptic changes, with biochemistry, with value constraints, or with behavioral dynamics. Instead, it is the dynamic result of the interactions of all these factors acting together, serving to select an output that repeats a performance or an act. The overall characteristics of a particular performance may be similar to previous performance, but the ensembles of neurons underlying any two similar performances at different times can be and usually are different. This property ensures that one can repeat the same act, despite remarkable changes in background and context, with ongoing experience.

The validity of the respective approaches of Changeux and Edelman remains to be tested by further inquiry into brain function. The detailed neurophysiological processes involved are still largely unexplored.

In fact, "external reality" is a construction of the brain. Our senses are confronted by a chaotic, constantly changing world that has no labels, and the brain must make sense of that chaos. It is the brain's correlations of sensory information that create the knowledge we have about our surroundings, such as the sounds of words and music, the images we see in paintings and photographs, the colors we perceive: "perception is not merely a reflection of immediate input," Edelman and Tononi write, "but involves a construction or a comparison by the brain."

For example, contrary to our visual experience, there are no colors in the world, only electromagnetic waves of many frequencies. The brain compares the amount of light reflected in the long (red), middle (green), and short (blue) wavelengths, and from these comparisons creates the colors we see. The amount of light reflected by a particular surface—a table, for example—depends on the frequency and the intensity of the light hitting the surface; some surfaces reflect more short-wave frequencies, others more long-wave frequencies. If we could not compare the presence of these wavelengths and were aware of only the individual frequencies of light—each of which would be seen as gray, the darkness or lightness of each frequency depending on the intensity of the light hitting the surface—then the normally changing frequencies and intensities of daylight (as during sunrise, or when a cloud momentarily blocks out the sun) would create a confusing picture of changing grays. Our visual worlds are stabilized because the brain, through color perception, simplifies the environment by comparing the amounts of lightness and darkness in the different frequencies from moment to moment.[3]
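The claim that color rests on comparisons rather than on absolute amounts of light can be made concrete with a small calculation: scale the illumination up or down, as when a cloud passes over the sun, and the absolute long-, middle- and short-wave signals all change, while their relative proportions, which is roughly what a comparison preserves, stay put. The reflectances and illumination levels in the Python sketch below are invented for illustration and greatly simplify what the visual system actually does.

# The absolute light reflected in long (L), middle (M) and short (S) wavebands
# changes with the illumination, but the comparison across bands is stable.
# Reflectances and illuminant intensities are made-up illustrative values.
import numpy as np

surface_reflectance = np.array([0.60, 0.30, 0.10])    # a "reddish" surface: L, M, S

for illumination in (1.0, 0.4, 2.5):                  # full sun, under a cloud, bright noon
    reflected = surface_reflectance * illumination    # what actually reaches the eye
    relative = reflected / reflected.sum()            # comparison across the bands
    print("illumination", illumination, "absolute", np.round(reflected, 2),
          "relative", np.round(relative, 2))
# The absolute triplet varies more than sixfold across these conditions;
# the relative triplet does not change at all.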

The problem of representation, meaning, and memory is also illustrated by the case of a patient who has lost his arm in an accident. As is often the case, the brain creates a "phantom" limb in an apparent attempt to preserve a unified sense of self. For the patient, the phantom limb is painful. The brain knows there is no limb; pain is the consequence of the incoherence between what the brain "sees" (no arm) and the brain's "feeling" the presence of a phantom that it has created in its attempt to maintain a unified sense of self in continuity with the past. Such pain is not created by an external stimulus and cannot be eliminated by painkillers.

One famous case is that of a young man who had lost his hand in a motorcycle accident. In a therapeutic procedure devised by V.S. Ramachandran, and described in his book with Sandra Blakeslee, Phantoms in the Brain, the patient put his intact hand in one side of a box and "inserted" his phantom hand in the other side. One section of the box had a vertical mirror, which showed a reflection of his intact hand. The patient observed in the mirror the image of his real hand, and was then asked to make similar movements with both "hands," which suggested to the brain real movement from the lost hand. Suddenly the pain disappeared. Though the young man was perfectly aware of the trick being played on him—the stump of his amputated arm was lying in one section of the box—the visual image overcame his sense of being tricked. Seeing is believing! Pain—the consequence of the incoherence between the brain's creation of a phantom limb and the visual realization that the limb does not exist—disappeared; what was seen (a hand in the mirror) matched what was felt (a phantom).

According to the Italian neurologist Angela Sirigu, who used videos instead of mirrors to perform a similar experiment,

It is the dissonance between the image of oneself and the damaged body that is at the origin of the phantom pain. Seeing the damaged hand once again functioning reduces the dissonance even though the patient is aware of being tricked.

At one moment the patient experiences a painful phantom limb; at another he sees a mirror image of his intact hand and the pain disappears. This is only one of many neurological examples of what we might call the Dr. Jekyll and Mr. Hyde Syndrome: the patient in the experiment sees and remembers one world at certain times and a completely different world at other times.[4]

The phantom limb is the brain's way of preserving a body image—a sense of self that is essential to all coherent brain activity. And as in the case of colors, the phantom limb suggests that what we see, hear, and feel are inventions of the brain—an integration of the past (the loss of the limb) and the present (a phantom that is essential for the brain's continuing to function "normally"). In general, every recollection refers not only to the remembered event or person or object but to the person who is remembering. The very essence of memory is subjective, not mechanical, reproduction; and essential to that subjective psychology is that every remembered image of a person, place, idea, or object inevitably contains, whether explicitly or implicitly, a basic reference to the person who is remembering. Our conscious life is a constant flow, or integration, of an immediate past and the present—what Henri Bergson called le souvenir du présent (1908) and Edelman more recently called the remembered present (1989). Consciousness, in this view, is neither recalled representations nor the immediate present, but something different in kind (as colors are different in kind from the lightness and darkness of different reflected wavelengths).

The importance of body image and motor activity for perception, physical movement, and thought is suggested by the recent discovery of "mirror neurons" by Giacomo Rizzolatti and his colleagues. They observed that the neurons that fired when a monkey grasped an object also fired when the monkey watched a scientist grasp the same object. The monkey apparently understood the action of the experimenter because the activity within its brain was similar when the monkey was observing the experimenter and when the monkey was grasping the object. What was surprising was that the same neurons that produced "motor actions," i.e., actions involving muscular movement, were active when the monkey was perceiving those actions performed by others. The "rigid divide," Rizzolatti and Corrado Sinigaglia write in their new book, Mirrors in the Brain,

between perceptive, motor, and cognitive processes is to a great extent artificial; not only does perception appear to be embedded in the dynamics of action, becoming much more composite than used to be thought in the past, but the acting brain is also and above all a brain that understands.

We can recognize and understand the actions of others because of the mirror neurons; as Rizzolatti and Sinigaglia write, this understanding "depends first of all on our motor neurons."[5] Our abilities to understand and react to the emotions of others may depend on the brain's ability to imitate the neuronal activity of the individual being observed. When we see a friend crying, we may feel sympathy because the activity in our brain is similar to that in the brain of the person crying. We recognize disgust in another person through our own experience of the feeling of disgust and the associated neural activity. Rizzolatti and Sinigaglia write:

our perceptions of the motor acts and emotive reactions of others appear to be united by a mirror mechanism that permits our brain to immediately understand what we are seeing, feeling, or imagining others to be doing, as it triggers the same neural structures... that are responsible for our own actions and emotions.

The nature of the brain's "representations"—if there is such a thing—of the world, the self, the past and present, remains puzzling, as the very different approaches we have described suggest: Changeux's view of "long-lasting global representations"; Edelman and Tononi's view of memory as constructive recategorizations; and Rizzolatti's stunning discovery of mirror neurons, suggesting that we know and understand others, to some extent, through neural imitation. And as these differing views show, while we are still far from a full understanding of the nature of memory, perception, and meaning, it is nonetheless because of the work of scientists such as Changeux, Edelman, and Rizzolatti that we have a better grasp of the complexity of subjective experiences. Perhaps in the future, questions about higher brain functions will be better understood because of new genetic and neurophysiological discoveries and brain imaging. An unexpected scientific discovery can give us a new insight into something we thought we had always known: mirror neurons, Rizzolatti tells us, "show how strong and deeply rooted is the bond that ties us to others, or in other words, how bizarre it would be to conceive of an I without an us."

Notes

[1] Neural circuits selected during memory formation may be strengthened by the addition of new neurotransmitter receptors to the synaptic junctions. This is called Long Term Potentiation (LTP). The weakening or elimination of circuits can lead to memory loss, which occurs normally with aging but is accelerated in neurodegenerative diseases such as Alzheimer's. In that disease, neurons in the hippocampus, a brain region that is important to memory function, as well as other neurons in the brain, lose their synapses and eventually die, leading to memory impairment. Despite extensive investigation, the cause of neuron death in Alzheimer's disease is not understood. (Some of the recent research on memory loss is mentioned by Sue Halpern in "Memory: Forgetting Is the New Normal," Time, May 8, 2008.)

[2] For further discussion see Israel Rosenfield, "Neural Darwinism: A New Approach to Memory and Perception," The New York Review, October 9, 1986, as well as The Invention of Memory (Basic Books, 1988).

[3] See Oliver Sacks and Robert Wasserman, "The Case of the Colorblind Painter," The New York Review, November 19, 1987.

[4] See Israel Rosenfield, L'étrange, le familier, l'oublié (Paris: Flammarion, 2005) for further discussion; and Reilly et al., "Persistent Hand Motor Commands in the Amputees' Brain," Brain (August 2006), for evidence that the brain is maintaining normal motor commands despite the loss of a limb.

[5] V.S. Ramachandran believes that mirror neurons might give us further clues to the nature of phantom limb pain. He has noted that phantom pain disappeared for ten or fifteen minutes when a patient was observing a volunteer rub her hand and he has suggested that the suppression of pain in such cases might involve the mirror neurons. However, the mechanism of the pain suppression is not clear.



John Derbyshire on Smartocracy on National Review Online

http://article.nationalreview.com/print/?q=NjEzOTM1YmM4ZmYxOTEx...

July 22, 2008, 6:00 a.m.

Talking to the Plumber
The I.Q. gap.
By John Derbyshire

The rich man in his castle,
The poor man at his gate,
God made them, high or lowly,
And order'd their estate.

The 1982 Episcopal Hymnal omits that stanza, the second of Mrs. Alexander's original six (not counting the refrain). It also omits her fifth:

The tall trees in the greenwood,
The meadows where we play,
The rushes by the water,
We gather every day …

Understandable, in both cases. The fifth stanza might possibly be re-cast for a modern child (the hymn comes from Mrs. Alexander's 1848 Hymns for Little Children), perhaps along lines like:

The Xbox and the iPod,
Computer games we play,
The supervised activ'ties,
We're driven to every day …

There's nothing much you can do with that second stanza, though. Best just flush it down the memory hole. It very likely is the case that some people have higher net worth than others, though it's a bit indelicate to talk about it. Still, even if this is so, no believer can possibly think that these inequalities come about as a result of divine ordinance. Still less can anyone, believer or unbeliever, think that Mother Nature had anything to do with it. Good grief, no! Believers and unbelievers alike are agreed that if there are indeed inequalities in our society, they result from uneven distribution of opportunity, caused by:

Left Believer: Sinful human nature blinding us to the social justice ethic implicit in the Law, the Gospel, or the Koran, depending on your precise confession.
Right Believer: Social pathologies — illegitimacy, easy divorce, feminism, a corrupt popular culture — arising from ignorance of, or wanton defiance of, the divine plan.
Left Unbeliever: Oppression by the various types of human malignity that inevitably arise in capitalist society: sexism, racism, patriarchy, etc.
Right Unbeliever: Insufficiently rigorous education policy, insufficiently family-friendly tax and health-insurance policies, excessive regulation stifling enterprise, etc.

These notions fill the air. They are the common currency of politics, and the basis for innumerable speeches, sermons, op-eds, commencement addresses, and academic papers. There are bits and pieces of truth in them. Some believers do not fully respect the Brotherhood of Man; illegitimacy has dumped a lot of kids in bad neighborhoods, to their disadvantage; human malignity does keep some people down (though not only under capitalism); public policy often is not friendly to free people who want to make the best of themselves. Yet all those bits and pieces of truth together explain very little about social inequality. What mainly explains it is innate ability.

U.S. society today is very nearly a pure meritocracy, perhaps the purest there has ever been. If you display any ability at all in your early years, you will be marked for induction into the overclass, especially if you belong to some designated victim group. (We preen ourselves endlessly — and pardonably — on how much more "inclusive" our present elites are than our past ones. This is one of the ways we avoid thinking about the necessity for any elite to be exclusive in some fashion.) There are still trust-fund kids, but they are not very consequential in this meritocracy. Seek out the rich man in his castle: It is far more likely the case in the U.S.A. than anywhere else, and far more likely the case in the U.S.A. of today than at any past time, that he is from modest origins, and won his wealth fairly in the fields of business, finance, or the high professions. Seek out the poor man at his gate: It is likewise probable, if you track back through his life, that it will be one of lackluster ability and effort, compounded perhaps with some serious personality defect.

I have two kids in school, eighth grade and tenth. I know several of their classmates. There are some fuzzy cases, but for the most part it is easy to see who is destined for the castle, who for the gate. Of the deciding factors, by far the largest is intelligence. There are of course smart people who squander their lives, and dumb people who get lucky. If you pluck a hundred rich men from their castles and put them in a room together, though, you will notice a high level of general intelligence. Contrariwise, a hundred poor men taken from their gates will, if put all in one place, convey a general impression of slow dullness. That's the meritocracy. That's where we've come to. As Herrnstein and Murray put it:

Mathematical necessity tells us that a large majority of the smart people in Cheops' Egypt, dynastic China, Elizabethan England, and Teddy Roosevelt's America were engaged in ordinary pursuits, mingling, working, and living with everyone else. Many were housewives. Most of the rest were farmers, smiths, millers, bakers, carpenters, and shopkeepers. Social and economic stratification was extreme, but cognitive stratification was minor. So it has been from the beginning of history into [the 20th] century. Then, comparatively rapidly, a new class structure emerged in which it became much more consistently and universally advantageous to be smart.

The problem with this smartocracy is, we have this itchy feeling that it’s un-American.

We Americans are easygoing about inequalities of wealth, much more so than Old World countries. There is something about inequality of smarts that just sets our teeth on edge, though. One of the first jokes ever told to me by an American was this one: A man finds an old-fashioned oil lamp on the beach. He takes it home and starts cleaning it up. A genie pops out. Genie: "I've been in there so long my powers are weak. I can only grant you one wish, and it's a choice of two. I can either make you super-rich or super-smart. What'll it be?" Man, after a moment's thought: "Y'know, I've always been bothered about being kinda slow. Always felt people were laughing at me behind my back. Well, no more of that! Make me super-smart!" Genie: "Done!" The genie vanishes. The man smacks himself on the forehead: "Jeez, I shoulda taken the money!"

Until recently there was quite a strict taboo on mentioning the idea that some people might be smarter than others. Remember what abuse The Bell Curve came in for. It seems to me that we are starting to be a little more open and truthful about these matters. Columnist Chris Satullo in the Philadelphia Inquirer back in May pointed out that the charges of "elitism" then being hurled at Barack Obama were really about smarts:

The charge of elitism isn't about people flaunting income; it's about people flaunting IQ. Americans, as a rule, don't resent people who have more money than them — particularly if the wealth is seen as earned. Envy, maybe, but not resent. You don't resent people whom you hope to emulate. And most Americans dream easily about having much more dough than they do. What Americans more readily resent is someone who is smarter than them, who knows it, who shows it, and who seems to think being smart makes you better than everyone else. A gap in income, you can always dream of closing. A gap in IQ, not so much. It's more personal, thus easier to resent.

A different writer, William Deresiewicz in The American Scholar, wrote recently about the difficulty of talking to a plumber:

It didn't dawn on me that there might be a few holes in my education until I was about 35. I'd just bought a house, the pipes needed fixing, and the plumber was standing in my kitchen. There he was, a short, beefy guy with a goatee and a Red Sox cap and a thick Boston accent, and I suddenly learned that I didn't have the slightest idea what to say to someone like him. So alien was his experience to me, so unguessable his values, so mysterious his very language, that I couldn't succeed in engaging him in a few minutes of small talk before he got down to work. Fourteen years of higher education and a handful of Ivy League [degrees], and there I was, stiff and stupid, struck dumb by my own dumbness. "Ivy retardation," a friend of mine calls this. I could carry on conversations with people from other countries, in other languages, but I couldn't talk to the man who was standing in my own house.

It's a horrifying story, but not a surprising one. This is indeed what we have come to. An acquaintance of mine, an academic in the human sciences (not Charles Murray), holds the opinion that across an IQ gap of more than one standard deviation (i.e. about 15 points), communication between two people becomes difficult, and that beyond two standard deviations it is effectively impossible. I'm not sure I'm ready to believe that. I think there is actually an element of art there. Some individuals have a knack, a way of doing it, that allows them to communicate effectively even across 30 I.Q. points. Then again, some don't. It probably helps not to have been culled out from the herd at an early age, like Mr. Deresiewicz, and segregated off from all contact with poor men at gates — the process known as "an Ivy League education."

It would probably help, too, if intelligence were not heritable to some large degree. (Forty to 80 percent, say Herrnstein and Murray, which agrees with one's rule-of-thumb observations.) This means that our cognitive elites are increasingly inbred. Doctors used to marry nurses, professors used to marry their secretaries, business moguls used to marry starlets. Now doctors marry doctors, professors professors, moguls moguls, lawyers lawyers, etc. Those "modest origins" of our meritocratic elites are less modest by the year. We might be drifting towards a caste system, except that meritocracy requires some openness, some vacuuming-up of high-I.Q. outliers from the lower classes, some dumping of low-I.Q. duffers from the elites.

That's how we are doing things, and the perplexity of William Deresiewicz shows how far we have come. The rich man is in his castle (actually, more likely, his gated community or doorman apartment complex) and the poor man is at his gate. They can't really talk to each other because the poor man is almost certainly a couple of standard deviations below the rich man in I.Q. score. They don't want to anyway, because they don't much like each other. The ignorant condescension of the overclass was exactly what was causing Barack Obama so much trouble back in May — not a mistake he will repeat, I think. He doesn't seem the type to repeat mistakes. There you have one of the advantages of a high I.Q.: You learn fast.

The converse dislike that the nonelite masses feel for their new masters was on display in the Scooter Libby case. From outside the castles, the walled compounds of the elite, it all looked like a storm in a teacup. Libby was one of them, so nothing much would happen to him. They would take care of him. What else would they do — throw him off the battlements? That never happens. Ten years on he'll still be one of them. The guy lost a hand in the poker games that are played between different factions of them, that's all, up there in their castles. From behind the castle moats it all looked different. Those poker games seem far more important to the people playing them than they look to outsiders (or than, in fact, they actually are) because outsiders are hardly ever thought about. Everybody in Libby's faction of the elite knew him; everybody liked Ol' Scooter. That he had taken a hit from the other elite faction was an outrage, an occasion of high emotion. I thought I detected, here and there, some Deresiewiczian bafflement among the elites that this emotion did not seem to be much in evidence among people living outside the castle walls. But hey, who really cares about them?
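For a sense of scale on that one- and two-standard-deviation claim: if IQ scores are roughly normal with a standard deviation of 15 and two people are paired at random, the gap between them is itself roughly normal with a standard deviation of about 21 points. The back-of-the-envelope Python check below rests only on those textbook assumptions, not on anything in the column: close to half of random pairs differ by more than 15 points, and roughly one pair in six differs by more than 30.

# How often do two randomly paired people differ by more than 15 or 30 IQ points,
# assuming independent scores drawn from Normal(100, 15)? Editorial back-of-the-
# envelope only; not a calculation made in the column.
from math import erf, sqrt

def normal_cdf(x):
    return 0.5 * (1 + erf(x / sqrt(2)))

sd_gap = 15 * sqrt(2)            # standard deviation of the difference of two scores
for gap in (15, 30):
    p = 2 * (1 - normal_cdf(gap / sd_gap))
    print("P(|IQ difference| >", gap, ") =", round(p, 2))
# Prints roughly 0.48 for a 15-point gap and 0.16 for a 30-point gap.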
This isn’t one of those columns where I point out a problem and suggest a possible solution. As I remarked in an earlier piece on this general theme: I wish these … elites had a little more color and dash. I wish they were not so academic. I wish there were some sign of a Churchill among them, or a Roosevelt (Teddy for preference), or an Andy Jackson. I wish they had stronger opinions. I wish they showed more evidence of having courage. I wish, above all, that there were fewer of them. But do I have an alternative to meritocracy? Do I think these [elite college] kids are unspeakably awful, and will drag western civilization down to perdition? Would I prefer my own kids not have a shot at joining them, if they decide they want to? No, and no, and no.

Human society stumbles on forward, from imperfection to, one hopes, lesser imperfection. Our cognitive elites are not lovable. Every so often their arrogance and condescension will come breaking through the surface. It’s a pity there isn’t some way to forcibly mix them with their fellow citizens at some point in their cosseted young adulthood, so that they might at least have a shot at learning how to talk across the I.Q. gap; but in a free society, there is no way to do that. Absent that kind of social engineering, there is nothing for it but to lie back and let them rule us. They’ll probably make a pretty good job of it. They are, after all, the brightest and the best … however much we dislike them.



High-Aptitude Minds: The Neurological Roots of Genius: Scientific American

http://www.sciam.com/article.cfm?id=high-aptitude-minds&print=true

Scientific American Mind - September 3, 2008

High-Aptitude Minds: The Neurological Roots of Genius
Researchers are finding clues to the basis of brilliance in the brain
By Christian Hoppe and Jelena Stojanovic

Within hours of his demise in 1955, Albert Einstein's brain was salvaged, sliced into 240 pieces and stored in jars for safekeeping. Since then, researchers have weighed, measured and otherwise inspected these biological specimens of genius in hopes of uncovering clues to Einstein's spectacular intellect. Their cerebral explorations are part of a century-long effort to uncover the neural basis of high intelligence or, in children, giftedness.

Traditionally, 2 to 5 percent of kids qualify as gifted, with the top 2 percent scoring above 130 on an intelligence quotient (IQ) test. (The statistical average is 100.) A high IQ increases the probability of success in various academic areas. Children who are good at reading, writing or math also tend to be facile at the other two areas and to grow into adults who are skilled at diverse intellectual tasks [see "Solving the IQ Puzzle," by James R. Flynn; Scientific American Mind, October/November 2007].

Most studies show that smarter brains are typically bigger—at least in certain locations. Part of Einstein's parietal lobe (at the top of the head, behind the ears) was 15 percent wider than the same region was in 35 men of normal cognitive ability, according to a 1999 study by researchers at McMaster University in Ontario. This area is thought to be critical for visual and mathematical thinking. It is also within the constellation of brain regions fingered as important for superior cognition. These neural territories include parts of the parietal and frontal lobes as well as a structure called the anterior cingulate. But the functional consequences of such enlargement are controversial.

In 1883 English anthropologist and polymath Sir Francis Galton dubbed intelligence an inherited feature of an efficiently functioning central nervous system. Since then, neuroscientists have garnered support for this efficiency hypothesis using modern neuroimaging techniques. They found that the brains of brighter people use less energy to solve certain problems than those of people with lower aptitudes do. In other cases, scientists have observed higher neuronal power consumption in individuals with superior mental capacities. Musical prodigies may also sport an unusually energetic brain. That flurry of activity may occur when a task is unusually challenging, some researchers speculate, whereas a gifted mind might be more efficient only when it is pondering a relatively painless puzzle.

Despite the quest to unravel the roots of high IQ, researchers say that people often overestimate the significance of intellectual ability [see "Coaching the Gifted Child," by Christian Fischer]. Studies show that practice and perseverance contribute more to accomplishment than being smart does.

Size Matters

In humans, brain size correlates, albeit somewhat weakly, with intelligence, at least when researchers control for a person's sex (male brains are bigger) and age (older brains are smaller).

brains are smaller). Many modern studies have linked a larger brain, as measured by magnetic resonance imaging, to higher intellect, with total brain volume accounting for about 16 percent of the variance in IQ. But, as Einstein’s brain illustrates, the size of some brain areas may matter for intelligence much more than that of others does. In 2004 psychologist Richard J. Haier of the University of California, Irvine, and his colleagues reported evidence to support the notion that discrete brain regions mediate scholarly aptitude. Studying the brains of 47 adults, Haier’s team found an association between the amount of gray matter (tissue containing the cell bodies of neurons) and higher IQ in 10 discrete regions, including three in the frontal lobe and two in the parietal lobe just behind it. Other scientists have also seen more white matter, which is made up of nerve axons (or fibers), in these same regions among people with higher IQs. The results point to a widely distributed—but discrete—neural basis of intelligence. The neural hubs of general intelligence may change with age. Among the younger adults in Haier’s study—his subjects ranged in age from 18 to 84—IQ correlated with the size of brain regions near a central structure called the cingulate, which participates in various cognitive and emotional tasks. That result jibed with the findings, published a year earlier, of pediatric neurologist Marko Wilke, then at Cincinnati Children’s Hospital Medical Center, and his colleagues. In its survey of 146 children ages five to 18 with a range of IQs, the Cincinnati group discovered a strong connection between IQ and gray matter volume in the cingulate but not in any other brain structure the researchers examined. Scientists have identified other shifting neural patterns that could signal high IQ. In a 2006 study child psychiatrist Philip Shaw of the National Institute of Mental Health and his colleagues scanned the brains of 307 children of varying intelligence multiple times to determine the thickness of their cerebral cortex, the brain’s exterior part. They discovered that academic prodigies younger than eight had an unusually thin cerebral cortex, which then thickened rapidly so that by late childhood it was chunkier than that of less clever kids. Consistent with other studies, that pattern was particularly pronounced in the frontal brain regions that govern rational thought processes. The brain structures responsible for high IQ may vary by sex as well as by age. A recent study by Haier, for example, suggests that men and women achieve similar results on IQ tests with the aid of different brain regions. Thus, more than one type of brain architecture may underlie high aptitude. Low Effort Required Meanwhile researchers are debating the functional consequences of these structural findings. Over the years brain scientists have garnered evidence supporting the idea that high intelligence stems from faster information processing in the brain. Underlying such speed, some psychologists argue, is unusually efficient neural circuitry in the brains of gifted individuals. Experimental psychologist Werner Krause, formerly at the University of Jena in Germany, for example, has proposed that the highly gifted solve puzzles more elegantly than other people do: they rapidly identify the key information in them and the best way to solve them. Such people thereby make optimal use of the brain’s limited working memory, the short-term buffer that holds items just long enough for the mind to process them. 
Starting in the late 1980s, Haier and his colleagues have gathered data that buttress this so-called efficiency hypothesis. The researchers used positron-emission tomography, which measures glucose metabolism of cells, to scan the brains of eight young men while they performed a nonverbal abstract reasoning task for half an hour. They found that the better an individual’s performance on the task, the lower the metabolic rate in widespread areas of the brain, supporting the notion that efficient neural processing may underlie brilliance. And in the 1990s the same group observed the flip side of this phenomenon: higher glucose metabolism in the brains of a small group of subjects who had below-average IQs, suggesting that slower minds operate less economically. More recently, in 2004 psychologist Aljoscha Neubauer of the University of Graz in Austria and his colleagues linked aptitude to diminished cortical activity after learning. The researchers used electroencephalography (EEG), a technique that detects electrical brain activity at precise time points using an array of electrodes affixed to the scalp, to monitor the brains of 27 individuals while they took two reasoning tests, one of them given before test-related training and the other after it. During the second test, frontal brain regions—many of which are involved in higher-order cognitive skills—were less active in the more intelligent individuals than in the less astute subjects. In fact, the higher a subject’s mental ability, the bigger the dip in cortical activation between the pretraining and posttraining tests, suggesting that the brains of brighter individuals streamline the processing of new information faster than those of their less intelligent counterparts do.

The cerebrums of smart kids may also be more efficient at rest, according to a 2006 study by psychologist Joel Alexander of Western Oregon University and his colleagues. Using EEG, Alexander’s team found that resting eight- to 12-hertz alpha brain waves were significantly more powerful in 30 adolescents of average ability than they were in 30 gifted adolescents, whose alpha-wave signal resembled those of older, college-age students. The results suggest that gifted kids’ brains use relatively little energy while idle and in this respect resemble more developmentally advanced human brains. Some researchers speculate that greater energy efficiency in the brains of gifted individuals could arise from increased gray matter, which might provide more resources for data processing, lessening the strain on the brain. But others, such as economist Edward Miller, formerly of the University of New Orleans, have proposed that the efficiency boost could also result from thicker myelin, the substance that insulates nerves and ensures rapid conduction of nerve signals. No one knows if the brains of the quick-witted generally contain more myelin, although Einstein’s might have. Scientists probing Einstein’s brain in the 1980s discovered an unusual number of glia, the cells that make up myelin, relative to neurons in one area of his parietal cortex. Hardworking Minds And yet gifted brains are not always in a state of relative calm. In some situations, they appear to be more energetic, not less, than those of people of more ordinary intellect. What is more, the energy-gobbling brain areas roughly correspond to those boasting more gray matter, suggesting that the gifted may simply be endowed with more brainpower in this intelligence network. In a 2003 trial psychologist Jeremy Gray, then at Washington University in St. Louis, and his colleagues scanned the brains of 48 individuals using functional MRI, which detects neural activity by tracking the flow of oxygenated blood in brain tissue, while the subjects completed hard tasks that taxed working memory. The researchers saw higher levels of activity in prefrontal and parietal brain regions in the participants who had received high scores on an intelligence test, as compared with low scorers. In a 2005 study a team led by neuroscientist Michael O’Boyle of Texas Tech University found a similar brain activity pattern in young male math geniuses. The researchers used fMRI to map the brains of mathematically gifted adolescents while they mentally rotated objects to try to match them to a target item. Compared with adolescent boys of average math ability, the brains of the mathematically talented boys were more metabolically active—and that activity was concentrated in the parietal lobes, the frontal cortex and the anterior cingulate. A year later biologist Kun Ho Lee of Seoul National University in Korea similarly linked elevated activity in a frontoparietal neural network to superior intellect. Lee and his co-workers measured brain activity in 18 gifted adolescents and 18 less intelligent young people while they performed difficult reasoning tasks. These tasks, once again, excited activity in areas of the frontal and parietal lobes, including the anterior cingulate, and this neural commotion was significantly more intense in the gifted individuals’ brains. No one is sure why some experiments indicate that a bright brain is a hardworking one, whereas others suggest it is one that can afford to relax. 
Some, such as Haier—who has found higher brain metabolic rates in more astute individuals in some of his studies but not in others—speculate one reason could relate to the difficulty of the tasks. When a problem is very complex, even a gifted person’s brain has to work to solve it. The brain’s relatively high metabolic rate in this instance might reflect greater engagement with the task. If that task was out of reach for someone of average intellect, that person’s brain might be relatively inactive because of an inability to tackle the problem. And yet a bright individual’s brain might nonetheless solve a less difficult problem efficiently and with little effort as compared with someone who has a lower IQ. Perfection from Practice Whatever the neurological roots of genius, being brilliant only increases the probability of success; it does not ensure accomplishment in any endeavor. Even for academic achievement, IQ is not as important as self-discipline and a willingness to work hard. University of Pennsylvania psychologists Angela Duckworth and Martin Seligman examined final grades of 164 eighth-grade students, along with their admission to (or rejection from) a prestigious high school. By such measures, the researchers determined that scholarly success was more than twice as dependent on assessments of self-discipline as on IQ. What is more, they reported in 2005, students with more self-discipline—a willingness to sacrifice short-term pleasure for long-term gain—were more
likely than those lacking this skill to improve their grades during the school year. A high IQ, on the other hand, did not predict a climb in grades. A 2007 study by Neubauer’s team of 90 adult tournament chess players similarly shows that practice and experience are more important to expertise than general intelligence is, although the latter is related to chess-playing ability. Even Einstein’s spectacular success as a mathematician and a physicist cannot be attributed to intellectual prowess alone. His education, dedication to the problem of relativity, willingness to take risks, and support from family and friends probably helped to push him ahead of any contemporaries with comparable cognitive gifts. Note: This article was originally published with the title, "High-Aptitude Minds".
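A note on one figure in the article above, added for discussion and not part of the original text: saying that total brain volume accounts for "about 16 percent of the variance in IQ" is another way of saying the correlation between the two is only moderate, because the proportion of variance explained is the square of the correlation coefficient:

r^2 ≈ 0.16, so r ≈ √0.16 ≈ 0.4

On that estimate, brain size leaves roughly 84 percent of the variation in measured IQ unexplained, which is consistent with the article's description of the correlation as holding "albeit somewhat weakly."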


Spell it like it is
spiked, Tuesday 12 August 2008
http://www.spiked-online.com/index.php?/site/printable/5574/

The idea that we shuold except student’s spelling misstakes as merely ‘variant spellings’ speaks to the denigration of Trooth in education. Frank Furedi Many of us have had our Dan Quayle moment; we’re capable of making some highly embarrassing spelling mistakes. Yet according to the proponents of the ‘New Literacy’, when the former American vice president ‘corrected’ a school pupil’s spelling of ‘potato’ to ‘potatoe’ during a school spelling bee, he was simply practising the art of ‘variant spelling’. Many educators now consider the teaching of Correct Spelling as an elitist imposition that discriminates against the disadvantaged, or, in the case of Quayle, against those who have had a literacy-bypass. Those of us who work in universities are used to reading essays by students who have liberated themselves from the oppressive regime of good grammar and spelling. Some of us still bother to correct misspelled words; others have become tired and indifferent to the problem of poor spelling. Now, an academic has come up with an interesting compromise. Ken Smith, a criminologist at Bucks New University, England, argues that we should chill out and accept the most common spelling mistakes as ‘variant spellings’. ‘University teachers should simply accept as variant spelling those words our students most commonly misspell’, he argued recently in the Times Higher Education Supplement. That would mean treating ‘truely’ as the equivalent of ‘truly’ and possibly ‘potatoe’ as a variant of ‘potato’. In this way, academics would save themselves a lot of grief and incidentally – or incidently – rehabilitate Dan Quayle’s reputation. At first sight, Smith’s proposal appears as a sensible and pragmatic response to the problem of poor spelling. He is not arguing for abandoning the rules of spelling, only for taking a relaxed attitude towards a relatively small number of commonly misspelled words. Unfortunately, in today’s philistine intellectual climate, this pragmatic response can only end up legitimising poor standards of literacy. Taking an eclectic approach towards the rules of spelling would send the signal that how words are written is negotiable, even unimportant. Once variant spelling becomes acceptable for some commonly misspelled words, the list will grow and grow. My principal objection to ‘variant spelling’ is that it reinforces the pernicious idea that children and young people today cannot be expected to meet the difficult challenge of learning how to use
language correctly. For some time now, influential educators have asked whether it is desirable to teach children correct spelling. Some pedagogues argue that teaching spelling is a waste of time that serves no positive purpose. Others claim that an insistence in the classroom on spelling everything correctly frustrates those who suffer from learning disabilities and dyslexia. So-called progressive educators have even suggested that the promotion of spelling is an elitist enterprise that discriminates against young people from disadvantaged backgrounds. In some cases, illiteracy has been turned into a virtue. I have been told by some experts that using punctuation is an arbitrary way of organising words. Apparently the insistence on ‘correct’ spelling inhibits creativity and stigmatises the self-expression of minority groups in particular. George Turnbull, described by BBC News as ‘the exam doctor at the UK Qualifications and Curriculum Authority’, has argued that: ‘Shakespeare could survive without spelling well although he did have a lot of other things going for him.’ (1) So if Shakespeare could not spell, there is little point in insisting that children learn this apparently unimportant skill. It is sad that Shakespeare is now called upon to condone the failures of contemporary schooling. In essence, variant spelling is a true companion to the idea of variant truths. Contemporary cultural life has become estranged from the idea of Truth with a capital T. In academia, social scientists never tire of informing students that there are no ‘right’ and ‘wrong’ answers. Instead of the truth, people are exhorted to accept different perspectives as representing many truths. The demotion of the status of truth calls into question the purpose of gaining knowledge. Celebrating variant truths, like variant spellings, is presented as a pluralistic gesture of tolerance. In fact it represents a reluctance to take education and its ideas seriously. And not surprisingly, those who do not take ideas seriously are also not very worried about how they are spelled. Frank Furedi is the author of Where Have All the Intellectuals Gone?: Confronting Twenty-First Century Philistinism (Continuum International Publishing Group, 2004). Buy this book from Amazon (UK). Visit Furedi’s website here.


(1) Are school standards slipping?, BBC News, 16 August 2007 reprinted from: http://www.spiked-online.com/index.php?/site/article/5574/



THE SPECTATOR
The Columbia Journalism Review's Division Over Dissent
Is global warming now beyond debate?
By Ron Rosenbaum
Slate, posted Friday, Aug. 8, 2008, at 6:09 PM ET
http://www.slate.com/toolbar.aspx?action=print&id=2197130

When does dissent become Untruth and lose the rights and respect due to "legitimate dissent"? Who decides—and how—what dissent deserves to be heard and what doesn't? When do journalists have to "protect" readers from Untruth masking itself as dissent or skepticism? I found myself thinking about this when I came across an unexpected disjunction in the July/August issue of the Columbia Journalism Review. The issue leads off with a strong, sharply worded editorial called "The Dissent Deficit." (It's not online, but it should be.) In it, the magazine, a publication of the Columbia School of Journalism—and thus a semi-official upholder of standards in the semi-official profession of journalism—argues clearly and unequivocally that allowing dissent to be heard and understood is part of a journalist's mission. The editorial contends that doing so sometimes requires looking beyond the majority consensus as defined by the media on the basis of a few sound bites and paying extra attention to dissenting views, because they often present important challenges to conventional wisdom on urgent issues that deserve a hearing. The editorial deplores the way that journalism has lately been failing in this mission: "Rather than engage speech that strays too far from the dangerously narrow borders of our public discourse, the gatekeepers of that discourse—our mass media—tend to effectively shout it down, marginalize it, or ignore it." So true. The editorial offers the media's treatment of the Rev. Jeremiah Wright, a dissident whose views, particularly on American foreign policy's responsibility for 9/11, have gotten no more than sound-bite treatment, as an example. I found that the editorial gave the best short summary of Wright's view of "black liberation theology," especially the concept of "transformation," and made a strong case that Wright and his views deserve attention rather than derision. He shouldn't be erased from public discourse with the excuse that we've "moved on," that we're all "post-racial" now. The CJR editorial encourages journalists not to marginalize dissenters, however unpopular or out of step. Implicit are the notions that today's dissenters can become tomorrow's majority, that our nation was founded on dissent, that the Bill of Rights (and especially the First Amendment) was written by dissenters, for dissenters. That the journalistic profession deserves what respect it retains not for being the stenographers of the Official Truth but for conveying dissent and debate.

It was troubling, then, to find, in an article in the very same issue of CJR, an argument that seems to me to unmistakably marginalize certain kinds of dissent. The contention appears in an article called, with deceptive blandness, "Climate Change: What's Next?" The article doesn't present itself as a marginalizer of dissent. It rather presents itself as a guide for "green journalists" on what aspects of climate change should be covered now that the Truth about "global warming"—whether it's real, and whether it's mainly caused by humans—is known. About two-thirds of the story offers tips and warnings like "watch out for techno-optimism." Alas, the author doesn't inspire confidence that she takes her own warnings to heart. The very first paragraph of her story contains a classic of credulous "techno-optimism": … a decade from now, Abu Dhabi hopes to have the first city in the world with zero carbon emissions. In a windswept stretch of desert, developers plan to build Masdar city, a livable environment for fifty thousand people that relies entirely on solar power and other renewable energy. All that's missing from the breathless, real-estate-brochure prose is a plug for the 24-hour health club and the concierge service for condo owners. But, the article tells us, the danger of "techno-optimism" pales before the perils of handling dissent. The first problem in the evaluation of what dissent should be heard is how certain we are about the truth. If we know the truth, why allow dissent from it into journalism? But who decides when we've reached that point of certainty? In any case, as the author's Abu Dhabi effusion suggests, there's no lack of certainty about what the Official Truth is in her mind: After several years of stumbling, mainstream science and environmental coverage has generally adopted the scientific consensus that increases in heat-trapping emissions from burning fossil fuels and tropical deforestation are changing the planet's climate, causing adverse effects even more rapidly than had been predicted. She's correct in saying that this is the consensus, that most journalists now accept what's known as the "anthropogenic theory" of global warming: that it is our carbon footprint that is the key cause of global warming, rather than—as a few scientists still argue—changes in solar activity, slight changes in the tilt of the earth's axis, the kinds of climate change that the earth constantly experienced long before man lit the first coal-burning plant. But here lies danger, "a danger that the subtleties of the science, and its uncertainty, might be missed by reporters unfamiliar with the territory," especially when confronted with "studies that contradict one another." Faced with conflicting studies, she tells us, "scientists look for consistency among several reports before concluding something is true." This is, frankly, a misunderstanding or misstating of the way science works. She seems to be confusing consensus among scientists and scientific truth. They are two different things. The history of science repeatedly shows a "consensus" being overturned by an unexpected truth that dissents from the consensus. Scientific truth has continued to evolve, often in unexpected ways, and scientific consensus always remains "falsifiable," to use Karl Popper's phrase, one any science reporter should be familiar with. All the more reason for reporting on scientific dissent, one would think. 
Yet when I read her description of how science proceeds, it seems to me she is suggesting science proceeds by a vote: Whoever has the greatest number of consistent papers—papers that agree with him or her—"wins." As in, has the Truth.

In fact, the history of science frequently demonstrates that science proceeds when contradictory—dissenting—studies provoke more studies, encourage rethinking rather than being marginalized by "the consensus" or the "consistency" of previous reports. Indeed, the century's foremost historian of science, Thomas Kuhn, believed, as even "green" reporters should know, that science often proceeds by major unexpected shifts: Just when an old consensus congealed, new dissenting, contradictory reports heralded a "paradigm shift" that often ended up tossing the old "consensus" into the junk bin.

If it hadn't been for the lone dissenting voice of that crazy guy in the Swiss patent office with his papers on "relativity," we still might believe the "consensus" that Newtonian mechanics explained a deterministic universe. And what about Ignaz Semmelweis and his lone crusade against the "consensus" that doctors need not wash their hands before going from an infected to an uninfected patient? Or the nutty counterintuitive dissenting idea of vaccination? The consensus was wrong. In fact, science proceeds by overturning consensus. Sometimes the consensus proves to be long-lasting, but in science, any consensus, even the new consensus that formed around relativity, is subject to the challenges of Popper's "falsifiability." But even if—or because—not all truths in science are final, argument about what the truth is, and competition among competing ideas, often helps us to get closer to it.

But our CJR author appears to believe that the green consensus, the anthropogenic theory of global warming, has some special need to be protected from doubters and dissenters, and that reporters who don't do their job to insulate it are not being "helpful." When faced with dissent from the sacrosanct green consensus, the author, as we'll see, argues that the "helpful" reporter must always show the dissenters are wrong if they are to be given any attention at all. This was the contention that stunned me—that reporters must protect us from dissent—especially in light of the CJR editorial deploring the "dangerously narrow borders of our public discourse."

The contention that reporters must be "helpful" in protecting us from dissent is best understood in the context of the "no last word" anecdote in which the author tells us of the way your loyal green reporter must manage conflicting reports. She tells the story of a report that indicated the rest of the century would bring fewer hurricanes. It was important to her that "experienced" green journalists were able to cite other reports that there would be "more and more powerful hurricanes." (Italics hers.) She praises a reporter who concludes his story "with a scientist's caveat": "We don't regard this [new, fewer-hurricane report] as the last word on this topic." So, "no last word" is the way to go. Except when it isn't.

We learn this as the CJR writer slaps the wrist of a local TV station for allowing "skeptics" to be heard without someone representing the consensus being given the last word. "Last year," she writes, "a meteorologist at CBS's Chicago station did a special report that featured local scientists discussing the hazards of global warming in one segment, well-known national skeptics in another, and ended with a cop-out: 'What is the truth about global warming? … It depends on who you talk to.' " In other words, no last word.

Bad CBS affiliate, bad! "Not helpful, and not good reporting" she tells us. "The he-said, she-said reporting just won't do." Setting aside for a moment, if you can, the sanctimonious tone of the knuckle rapping ("just won't do"), there are two ways to interpret this no-no, both objectionable, both anti-dissent. One implication is that these "nationally known skeptics" should never have been given air time in the first place because the debate is over, the Truth is known, their dissent has no claim on our attention; their dissent is, in fact, pernicious. The second way of reading her "not helpful" condemnation is that if one allows dissenters on air to express their dissent, the approach shouldn't be "he-said, she-said." No, the viewers must be protected from this pernicious dissent. We should get "he-said, she-said, but he (or she) is wrong, and here is the correct way to think." It may be that believers in anthropogenic global warming are right. I have no strong position on the matter, aside from agreeing with the CJR editorial that there's a danger in narrowing the permissible borders of dissent. But I take issue with the author's contention that the time for dissent has ended. "The era of 'equal time' for skeptics who argue that global warming is just a result of natural variation and not human intervention seems to be largely over—except on talk radio, cable, and local television," she tells us. And of course we all know that the Truth is to be found only on networks and major national print outlets. Their record has been nigh unto infallible. But wait! I think I've found an insidious infiltration of forbidden dissent in the citadel of Truth that the CJR writer neglected to condemn. One of the environmental reporters the writer speaks of reverently, the New York Times' Andy Revkin, runs the Times' Dot Earth blog and features on his blogroll a hotbed of "just won't do" climate-change skeptics: the Climate Debate Daily blog (an offshoot of the highly respected Arts & Letters Daily). Revkin provides no protective warning to the reader that he will be entering the realm of verboten dissent from the Consensus. I find Climate Debate Daily a particularly important site precisely because it does give "equal time" to different arguments about climate change. Take a look at it. It's just two lists of links, one of reports and studies that support the consensus view and one of studies that don't. No warnings on the site about what is True and what constitutes Dangerous Dissent. Exactly the sort of thing that our CJR reporter says is just not done. And yet one cannot read the site without believing there are dissents from the consensus by scientists who deserve a hearing, if only so that their theses can be disproved. Check out, for instance, this work by an Australian scientist who was once charged with enforcing limits on greenhouse gases by the government but who now has changed his mind on the issue! It happens perhaps more often than "green journalists" let us know. At a dinner recently, I listened as Nick Lemann, the dean of Columbia's J-school, talked about the difficulty the school had in helping the students get the hang of "structuring an inquiry." At the heart of structuring an inquiry, he said, was the need to "find the arguments." Not deny the arguments. Find them, explore them. But which arguments? It's a fascinating subject that I've spent some time considering. 
My last two books, Explaining Hitler and The Shakespeare Wars, were, in part anyway, efforts to decide which of the myriad arguments about and dissenting visions of each of these figures was worth pursuing. For instance, with Hitler, after investigating, I wanted to refute the myth (often used in a heavy-handed way by anti-Semites) that Hitler was part Jewish. The
risk is that in giving attention to the argument, one can spread it even while refuting it. But to ignore it was worse. Perhaps this is what our green journalist with her tsk-tsking really fears, and it's a legitimate fear. But I'd argue that journalists should be on the side of vigorous argument, not deciding for readers what is truth and then not exposing them to certain arguments. In my Shakespeare book, I mentioned—but didn't devote time to—what I regarded as the already well-refuted argument that someone other than Shakespeare wrote the plays in the canon. This doesn't mean I would stop others from arguing about it; it just is my belief that it wasn't worth the attention and that since life was short, one would be better off spending one's time rereading the plays than arguing over who wrote them. In any case, the fate of the earth was not at stake. But the argument over the green consensus does matter: If the green alarmists are right, we will have to turn our civilization inside out virtually overnight to save ourselves. One would like to know this is based on good, well-tested science, not mere "consensus." Skepticism is particularly important and particularly worth attention from journalists. Especially considering the abysmal record green journalists have on the ethanol fiasco. Here we should give the CJR reporter credit where due: She does include perhaps the single most important question that such an article could ask, one I haven't heard asked by most mainstream enviro-cheerleader media: [W]here were the skeptical scientists, politicians and journalists earlier, when ethanol was first being promoted in Congress? Indeed I don't remember reading a lot of "dissent" on the idea. Shouldn't it have occurred to someone green that taking acreage once capable of producing food on a planet with hundreds of millions of starving people and using it to lower the carbon footprint of your SUV might end up causing the deaths of those who lack food or the means to pay the soaring prices of ethanol-induced shortage? But it doesn't seem to occur to her that the delegitimizing of dissent she encourages with her "just won't do" sanctimony might have been responsible for making reporters fearful of being "greenlisted" for dissenting from The Consensus at the time. I think it's time for "green reporters," the new self-promoting subprofession, to take responsibility for the ethanol fiasco. Go back into their files and show us the stories they wrote that carry a hint that there might be a downside to taking food out of the mouths of the hungry. Those who fail the test—who didn't speak out, even on "talk radio, cable TV or local news"—shouldn't be so skeptical about skeptics. I'd suggest they all be assigned to read the CJR editorial about protecting dissent and the danger of "narrowing the borders" of what is permissible. The problem is, as Freeman Dyson, one of the great scientists of our age, put it in a recent issue of the New York Review of Books, environmentalism can become a religion, and religions always seek to silence or marginalize heretics. CJR has been an invaluable voice in defending that aspect of the First Amendment dealing with the freedom of the press; it should be vigilant about the other aspect that forbids the establishment of a religion. Ron Rosenbaum is the author of The Shakespeare Wars and Explaining Hitler. Article URL: http://www.slate.com/id/2197130/




WAY OF KNOWING:

Reason

“Logic, n. The art of thinking and reasoning in strict accordance with the limitations and incapacities of the human misunderstanding.” (Ambrose Bierce)

• What constitutes good reason and good arguments?
• Is reason purely objective and universal, or does it vary across cultures?
• What are the advantages of being able to reason about something rather than, say, feeling something, dreaming about something, wishing something to be the case?
• What are the advantages of discriminating between valid and invalid arguments, both for the individual knower and for society?



Gödel, Escher, Bach
Reviewed by Brian Hayes
The New York Times, April 29, 1979
http://www.times.com/books/97/07/20/reviews/hofstadter-grodel.html

Certain ideas in the sciences have been stuffed almost to bursting with metaphoric meaning. Everybody's favorite is the concept of entropy, a measure of disorder in thermodynamics. Entropy tends to increase, and so the world is called on to express a variety of sentiments about the common fate of dissipation and decay. The uncertainty principle of quantum mechanics has been extended, or distended, in a similar way: From the principle that any observer disturbs the thing he measures comes the notion that no bystander is entirely innocent. The incompleteness theorem proved in 1931 by Kurt Gödel seems to be another candidate for metaphoric inflation. It is a great truth, and so it ought to have a large meaning; perhaps it should have the power to change lives. Unlike entropy and uncertainty, however, the incompleteness theorem is not the kind of idea that grabs you by the lapels and insists on being recognized.

The theorem is a variation of the only well-remembered line of the Cretan poet Epimenides, who said, "All Cretans are liars." Another version of the same antinomy is more succinct and more troublesome: It reads, "This sentence is false." The unsettling effect of these statements was for a long time attributed to the looseness and ambiguity of natural languages, where a phrase can refer simultaneously to more than one thing. It was assumed that in a formal language, one constructed on strict rules of logic, no such inconsistent statements could be formulated; they would be unutterable. Gödel showed otherwise.

Gödel's proof employs a formal language invented by Bertrand Russell and Alfred North Whitehead, who had set out to build a second foundation for the arithmetic of whole numbers. The language has a vocabulary of symbols and a grammar of rules for combining the symbols to form "strings" which can be interpreted as statements about the properties of numbers. A few simple strings are accepted as axioms, or self-evident truths. Any string of symbols that can be derived from the axioms by applying the grammatical rules must also be true; it is therefore designated a theorem. The language is at once simple and powerful, and until 1931 it appeared to have the satisfying quality of completeness. Russell and Whitehead believed that any true property of the whole numbers could be demonstrated in their language, and that no false propositions could be proved.
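In modern notation (a compact sketch added for reference; the notation is not the reviewer's own), the self-referential string described below is a sentence G for which the formal system proves

G ↔ ¬Prov(⌜G⌝)

that is, G in effect says "I am not a theorem." If the system could derive G, it would be deriving a statement that, by this equivalence, asserts its own underivability, and the system would be inconsistent; if the system cannot derive G, then G is a true statement about the numbers that the system cannot prove, and the system is incomplete. This is exactly the two-way argument the review lays out next.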

The theorem by which Gödel upset that belief is a string of symbols in the Russell-Whitehead language that can be interpreted on two levels. In one sense it is a straightforward statement about the natural numbers that seems to be true; at the same time, it represents a statement of "metamathematics" with the evident meaning: "This string of symbols is not a theorem." The paradox of Epimenides is with us again, and this time there is no escaping through the loopholes of language. If the string can be derived from the axioms, then a falsehood has been proved and the Russell-Whitehead language is inconsistent; by implication, so is arithmetic. If the string cannot be derived from the axioms, then there is a true statement about the natural numbers that cannot be proved in the formal language. There is good reason for choosing the latter alternative and concluding that the Russell-Whitehead language is incomplete. In fact, the result is more general than that: Any system of formal logic powerful enough to describe the natural numbers is intrinsically incomplete.

It is easy enough to respond "So what?" No one thinks or speaks a formal language, and arithmetic seems to work quite well even if it is rotten at the core. Douglas R. Hofstadter, who is an assistant professor of computer science at Indiana University, addresses this issue at some length. At the heart of Gödel's theorem he finds the idea of self-reference, which can be viewed as a circular argument collapsed into itself. The same principle operates in other contexts, and in most of them it gives rise to no sensation of paradox.

In his title Professor Hofstadter yokes together Gödel, Johann Sebastian Bach and the Dutch artist Maurits Cornelis Escher, and a substantial part of his book is dedicated to showing that this is not such an unlikely team of oxen. Escher is the easier case: his drawings (like the paintings of René Magritte, which are also discussed) have an obvious connection with verbal and mathematical paradox. For example, the print "Waterfall" shows a mill race in which water seems to flow always downhill and yet moves from under the wheel to over it. The image is formally undecidable in the same way that Gödel's theorem is: the eye presents the mind with two competing interpretations, and neither one is fully satisfactory. Much other modern art plays a more obvious game of self-reference, asking whether the painting is a symbol or an object and frustrating any attempt to give a definitive answer.

The well-known combinatorial trickery of Bach's canons and fugues gives rise to another rich pattern of ambiguous perceptions. A theme enters, then appears again, inverted or reversed or in a different key or a different tempo; the transformed melody then blends with its original. Figure and ground may unexpectedly change roles. Even though each of the notes is heard distinctly--and in Bach the notes have a logic only slightly less formal than that of the Russell-Whitehead language--the ear cannot always resolve their relationship. Douglas Hofstadter would not argue that awareness of the underlying mathematics contributes much to appreciation of the music, but the music does illuminate the math. And, less seriously, there is at least one instance of explicit self-reference in Bach's work. In the last measures of the "Art of Fugue," written just before the composer died, he introduced a four-note melody that when transcribed in the German system of notation spells "B-A-C-H."
Escher and Bach are only the beginning of Professor Hofstadter's Shandean digressions. He traces connections that lead from Gödel's theorem to Zen, where contradiction is cherished, to the social insects, where it is not clear whether the ant or the entire colony should be regarded as the organism; to television cameras pointed at television screens, to "elementary" particles of matter made up of still smaller elementary particles. He constructs a quite elaborate analogy between the incompleteness theorem and the transmission of genetic information encoded in the nucleotide sequences of DNA; here the self-reference of the theorem is comparable to the self-replication of the molecule. Most of all, he is at pains to present the implications of Gödel's proof for theories
of the human mind (and of artificial intellects). In the mind, the entire procedure of the Gödel proof seems to be repeated: A large but mechanistic, rule-following system, when it grows complex enough, develops the capacity for self-reference, which in this context is called consciousness.

Professor Hofstadter's presentation of these ideas is not rigorous, in the mathematical sense, but all the essential steps are there; the reader is not asked to accept results on authority or on faith. Nor is the narrative rigorous in the uphill-hiking sense, for the author is always ready to take the reader's hand and lead him through the thickets. Someone seeking no more than an introduction to Gödel's work would probably do better to look into a little book published 20 years ago by Ernest Nagel and James R. Newman, "Gödel's Proof," which is just as clear and thorough and is only one percent as long. But Douglas Hofstadter's book is a more ambitious project. It is also a more pretentious one. To accompany each expository chapter, the author has provided a whimsical dialogue cast in the form of a Bach composition. For example, one dialogue has the form of a canon cancrizans, which is the same when read forward or backward. Some may find these interludes amusing. For my part, I was strongly reminded that the challenge of writing such a piece is not in throwing the melodies together according to rule, but in making music of them.

Brian Hayes is an editor on the staff of Scientific American.


BOOKS
WHAT WAS I THINKING?
The latest reasoning about our irrational ways.
by Elizabeth Kolbert
The New Yorker, February 25, 2008
http://www.newyorker.com/arts/critics/books/2008/02/25/080225crbo_book...

People make bad decisions, but they make them in systematic ways.

A couple of months ago, I went on-line to order a book. The book had a list price of twenty-four dollars; Amazon was offering it for eighteen. I clicked to add it to my “shopping cart” and a message popped up on the screen. “Wait!” it admonished me. “Add $7.00 to your order to qualify for FREE Super Saver Shipping!” I was ordering the book for work; still, I hesitated. I thought about whether there were other books that I might need, or want. I couldn’t think of any, so I got up from my desk, went into the living room, and asked my nine-year-old twins. They wanted a Tintin book. Since they already own a large stack of Tintins, it was hard to find one that they didn’t have. They scrolled through the possibilities. After much discussion, they picked a three-in-one volume containing two adventures they had previously read. I clicked it into the shopping cart and checked out. By the time I was done, I had saved The New Yorker $3.99 in shipping charges. Meanwhile, I had cost myself $12.91.

Why do people do things like this? From the perspective of neoclassical economics, self-punishing decisions are difficult to explain. Rational calculators are supposed to consider their options, then pick the one that maximizes the benefit to them. Yet actual economic life, as opposed to the theoretical version, is full of miscalculations, from the gallon jar of mayonnaise purchased at spectacular savings to the billions of dollars Americans will spend this year to service their credit-card debt. The real mystery, it could be argued, isn’t why we make so many poor economic choices but why we persist in accepting economic theory.

In “Predictably Irrational: The Hidden Forces That Shape Our Decisions” (Harper; $25.95), Dan Ariely, a professor at M.I.T., offers a taxonomy of financial folly. His approach is empirical rather than historical or theoretical. In pursuit of his research, Ariely has served beer laced with vinegar, left plates full of dollar bills in dorm refrigerators, and asked undergraduates to fill out surveys while masturbating. He claims that his experiments, and others like them, reveal the underlying logic to our illogic. “Our irrational behaviors are neither random nor senseless—they are systematic,” he writes. “We all make the same types of mistakes over and over.” So attached are we to certain kinds of errors, he contends, that we are incapable even of recognizing them as errors. Offered FREE shipping, we take it, even when it costs us.

As an academic discipline, Ariely’s field—behavioral economics—is roughly twenty-five years old. It emerged largely in response to work done in the nineteen-seventies by the Israeli-American psychologists Amos Tversky and Daniel Kahneman. (Ariely, too, grew up in Israel.) When they examined how people deal with uncertainty, Tversky and Kahneman found that there were consistent biases to
the responses, and that these biases could be traced to mental shortcuts, or what they called “heuristics.” Some of these heuristics were pretty obvious—people tend to make inferences from their own experiences, so if they’ve recently seen a traffic accident they will overestimate the danger of dying in a car crash—but others were more surprising, even downright wacky. For instance, Tversky and Kahneman asked subjects to estimate what proportion of African nations were members of the United Nations. They discovered that they could influence the subjects’ responses by spinning a wheel of fortune in front of them to generate a random number: when a big number turned up, the estimates suddenly swelled. Though Tversky and Kahneman’s research had no direct bearing on economics, its implications for the field were disruptive. Can you really regard people as rational calculators if their decisions are influenced by random numbers? (In 2002, Kahneman was awarded a Nobel Prize—Tversky had died in 1996—for having “integrated insights from psychology into economics, thereby laying the foundation for a new field of research.”) Over the years, Tversky and Kahneman’s initial discoveries have been confirmed and extended in dozens of experiments. In one example, Ariely and a colleague asked students at M.I.T.’s Sloan School of Management to write the last two digits of their Social Security number at the top of a piece of paper. They then told the students to record, on the same paper, whether they would be willing to pay that many dollars for a fancy bottle of wine, a not-so-fancy bottle of wine, a book, or a box of chocolates. Finally, the students were told to write down the maximum figure they would be willing to spend for each item. Once they had finished, Ariely asked them whether they thought that their Social Security numbers had had any influence on their bids. The students dismissed this idea, but when Ariely tabulated the results he found that they were kidding themselves. The students whose Social Security number ended with the lowest figures—00 to 19—were the lowest bidders. For all the items combined, they were willing to offer, on average, sixty-seven dollars. The students in the second-lowest group—20 to 39—were somewhat more free-spending, offering, on average, a hundred and two dollars. The pattern continued up to the highest group—80 to 99—whose members were willing to spend an average of a hundred and ninety-eight dollars, or three times as much as those in the lowest group, for the same items. This effect is called “anchoring,” and, as Ariely points out, it punches a pretty big hole in microeconomics. When you walk into Starbucks, the prices on the board are supposed to have been determined by the supply of, say, Double Chocolaty Frappuccinos, on the one hand, and the demand for them, on the other. But what if the numbers on the board are influencing your sense of what a Double Chocolaty Frappuccino is worth? In that case, price is not being determined by the interplay of supply and demand; price is, in a sense, determining itself. Another challenge to standard economic thinking arises from what has become known as the “endowment effect.” To probe this effect, Ariely, who earned one of his two Ph.D.s at Duke, exploited the school’s passion for basketball. Blue Devils fans who had just won tickets to a big game through a lottery were asked the minimum amount that they would accept in exchange for them. 
Fans who had failed to win tickets through the same lottery were asked the maximum amount that they would be willing to offer for them. “From a rational perspective, both the ticket holders and the non-ticket holders should have thought of the game in exactly the same way,” Ariely observes. Thus, one might have expected that there would be opportunities for some of the lucky and some of the unlucky to strike deals. But whether or not a lottery entrant had been “endowed” with a ticket turned out to powerfully affect his or her sense of its value. One of the winners Ariely contacted, identified only as Joseph, said that he wouldn’t sell his ticket for any price. “Everyone has a price,” Ariely claims to have told him. O.K., Joseph responded, how about three grand? On average, the amount that winners were willing to accept for their tickets was twenty-four hundred dollars. On average, the amount that losers were willing to offer was only a hundred and seventy-five dollars. Out of a hundred fans, Ariely reports, not a single ticket holder would sell for a price that a non-ticket holder would pay. Whatever else it accomplishes, “Predictably Irrational” demonstrates that behavioral economists are willing to experiment on just about anybody. One of the more compelling studies described in the book involved trick-or-treaters. A few Halloweens ago, Ariely laid in a supply of Hershey’s Kisses and two kinds of Snickers—regular two-ounce bars and one-ounce miniatures. When the first children came to his door, he handed each of them three Kisses, then offered to make a deal. If they wanted to, the kids could trade one Kiss for a mini-Snickers or two Kisses for a full-sized bar. Almost all of them took the deal and, proving their skills as sugar maximizers, opted for the two-Kiss trade. At some point, Ariely shifted the terms: kids could now trade one of their three Kisses for the larger bar or get a mini-Snickers without giving up anything. In terms of sheer chocolatiness, the trade for the larger bar was still by far the better deal. But, faced with the prospect of getting a mini-Snickers for nothing, the trick-or-treaters could no longer reckon properly. Most of them refused the trade, even though it cost them candy. Ariely speculates that behind the kids’ miscalculation was anxiety. As he puts it, “There’s no visible possibility of loss when we choose a FREE! item (it’s free).” Tellingly, when Ariely performed a similar experiment on adults, they made the same mistake. “If I were to distill one main lesson from the research described in this book, it is that we are all pawns in a game whose forces we largely fail to comprehend,” he writes.

A few weeks ago, the Bureau of Economic Analysis released its figures for 2007. They showed that Americans had collectively amassed ten trillion one hundred and eighty-four billion dollars in disposable income and spent very nearly all of it—ten trillion one hundred and thirty-two billion dollars. This rate of spending was somewhat lower than the rate in 2006, when Americans spent all
but thirty-nine billion dollars of their total disposable income. According to standard economic theory, the U.S. savings rate also represents rational choice: Americans, having reviewed their options, have collectively resolved to spend virtually all the money that they have. According to behavioral economists, the low savings rate has a more immediate explanation: it proves—yet again—that people have trouble acting in their own best interests. It’s worth noting that Americans, even as they continue to spend, say that they should be putting more money away; one study of participants in 401(k) plans found that more than two-thirds believed their savings rate to be “too low.”

In the forthcoming “Nudge: Improving Decisions About Health, Wealth, and Happiness” (Yale; $25), Richard H. Thaler and Cass R. Sunstein follow behavioral economics out of the realm of experiment and into the realm of social policy. Thaler and Sunstein both teach at the University of Chicago, Thaler in the graduate school of business and Sunstein at the law school. They share with Ariely the belief that, faced with certain options, people will consistently make the wrong choice. Therefore, they argue, people should be offered options that work with, rather than against, their unreasoning tendencies. These foolish-proof choices they label “nudges.” (A “nudge,” they note with scholarly care, should not be confused with a “noodge.”)

A typical “nudge” is a scheme that Thaler and Sunstein call “Save More Tomorrow.” One of the reasons people have such a hard time putting money away, the authors say, is that they are loss-averse. They are pained by any reduction in their take-home pay—even when it’s going toward their own retirement. Under “Save More Tomorrow,” employees commit to contributing a greater proportion of their paychecks to their retirement over time, but the increases are scheduled to coincide with their annual raises, so their paychecks never shrink. (The “Save More Tomorrow” scheme was developed by Thaler and the U.C.L.A. economist Shlomo Benartzi, back in 1996, and has already been implemented by several thousand retirement plans.)

People aren’t just loss-averse; they are also effort-averse. They hate having to go to the benefits office, pick up a bunch of forms, fill them out, and bring them all the way back. As a consequence, many eligible employees fail to enroll in their companies’ retirement plans, or delay doing so for years. (This is the case, research has shown, even at companies where no employee contribution is required.) Thaler and Sunstein propose putting this sort of inertia to use by inverting the choice that’s presented. Instead of having to make the trip to the benefits office to opt in, employees should have to make that trip only if they want to opt out. The same basic argument holds whenever a so-called default option is provided. For instance, most states in the U.S. require that those who want to become organ donors register their consent; in this way, many potential donors are lost. An alternative—used, for example, in Austria—is to make consent the default option, and put the burden of registering on those who do not wish to be donors. (It has been estimated that if every state in the U.S. simply switched from an “explicit consent” to a “presumed consent” system several thousand lives would be saved each year.) “Nudges” could also involve disclosure requirements.
To discourage credit-card debt, for instance, Thaler and Sunstein recommend that cardholders receive annual statements detailing how much they have already squandered in late fees and interest. To encourage energy conservation, they propose that new cars come with stickers showing how many dollars’ worth of gasoline they are likely to burn through in five years of driving. Many of the suggestions in “Nudge” seem like good ideas, and even, as with “Save More Tomorrow,” practical ones. The whole project, though, as Thaler and Sunstein acknowledge, raises some pretty awkward questions. If the “nudgee” can’t be depended on to recognize his own best interests, why stop at a nudge? Why not offer a “push,” or perhaps even a “shove”? And if people can’t be trusted to make the right choices for themselves how can they possibly be trusted to make the right decisions for the rest of us?
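
The "Save More Tomorrow" scheme described above is easy to see in miniature. The short Python sketch below uses invented figures for the starting salary, the annual raise, and the escalation of the contribution rate; it illustrates the general idea rather than any actual plan. What it shows is the property the scheme relies on: because each increase in the savings rate arrives together with a raise, take-home pay never falls, so the loss aversion described above is never triggered.

# A minimal sketch of the "Save More Tomorrow" idea: contribution increases
# are timed to coincide with annual raises, so take-home pay never shrinks
# even as the savings rate climbs. All figures are hypothetical.
salary = 50_000.0          # starting annual salary (invented)
annual_raise = 0.03        # 3% raise each year (invented)
contribution_rate = 0.03   # employee starts by saving 3% of pay
escalation = 0.01          # the rate rises one point per year...
rate_cap = 0.10            # ...until it reaches 10%

previous_take_home = None
for year in range(1, 8):
    take_home = salary * (1 - contribution_rate)
    shrank = previous_take_home is not None and take_home < previous_take_home
    print(f"Year {year}: salary ${salary:,.0f}, saving {contribution_rate:.0%}, "
          f"take-home ${take_home:,.0f}, paycheck shrank: {shrank}")
    previous_take_home = take_home
    # Next year the raise and the escalation arrive together.
    salary *= 1 + annual_raise
    contribution_rate = min(contribution_rate + escalation, rate_cap)

Run as written, the "shrank" column stays False in every year; raise the contribution rate mid-year instead, with no accompanying raise, and the very next paycheck would visibly fall, which is exactly the loss people are averse to.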

Like neoclassical economics, much democratic theory rests on the assumption that people are rational. Here, too, empirical evidence suggests otherwise. Voters, it has been demonstrated, are influenced by factors ranging from how names are placed on a ballot to the jut of a politician's jaw. A 2004 study of New York City primary-election results put the advantage of being listed first on the ballot for a local office at more than three per cent—enough of a boost to turn many races. (For statewide office, the advantage was around two per cent.) A 2005 study, conducted by psychologists at Princeton, showed that it was possible to predict the results of congressional contests by using photographs. Researchers presented subjects with fleeting images of candidates' faces. Those candidates who, in the subjects' opinion, looked more "competent" won about seventy per cent of the time.

When it comes to public-policy decisions, people exhibit curious—but, once again, predictable—biases. They value a service (say, upgrading fire equipment) more when it is described in isolation than when it is presented as part of a larger good (say, improving disaster preparedness). They are keen on tax "bonuses" but dislike tax "penalties," even though the two are functionally equivalent. They are more inclined to favor a public policy when it is labelled the status quo. In assessing a policy's benefits, they tend to ignore whole orders of magnitude. In an experiment demonstrating this last effect, sometimes called "scope insensitivity," subjects were told that migrating birds were drowning in ponds of oil. They were then asked how much they would pay to prevent the deaths by erecting nets. To save two thousand birds, the subjects were willing to pay, on average, eighty dollars. To save twenty thousand birds, they were willing to pay only seventy-eight dollars, and to save two hundred thousand birds they were willing to pay eighty-eight dollars.

What is to be done with information like this? We can try to become more aware of the patterns governing our blunders, as "Predictably Irrational" urges. Or we can try to prod people toward more rational choices, as "Nudge" suggests. But if we really are wired to make certain kinds of mistakes, as Thaler and Sunstein and Ariely all argue, we will, it seems safe to predict, keep finding new ways
to make them. (Ariely confesses that he recently bought a thirty-thousand-dollar car after reading an ad offering FREE oil changes for the next three years.)

If there is any consolation to take from behavioral economics—and this impulse itself probably counts as irrational—it is that irrationality is not always altogether a bad thing. What we most value in other people, after all, has little to do with the values of economics. (Who wants a friend or a lover who is too precise a calculator?) Some of the same experiments that demonstrate people's weak-mindedness also reveal, to use a quaint term, their humanity. One study that Ariely relates explored people's willingness to perform a task for different levels of compensation. Subjects were willing to help out—moving a couch, performing a tedious exercise on a computer—when they were offered a reasonable wage. When they were offered less, they were less likely to make an effort, but when they were asked to contribute their labor for nothing they started trying again. People, it turns out, want to be generous and they want to retain their dignity—even when it doesn't really make sense.
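
The "scope insensitivity" result quoted above becomes even starker once the implied price per bird is worked out. The few lines of Python below do nothing but divide the article's reported willingness-to-pay figures by the number of birds in each scenario; the dollar amounts are taken from the passage as printed, and everything else is just arithmetic.

# Willingness-to-pay figures as reported above, divided by the number of
# birds each payment was meant to save.
scenarios = [
    (2_000, 80.0),      # $80 to save 2,000 birds
    (20_000, 78.0),     # $78 to save 20,000 birds
    (200_000, 88.0),    # $88 to save 200,000 birds
]
for birds, dollars in scenarios:
    cents_per_bird = 100 * dollars / birds
    print(f"{birds:>7,} birds: ${dollars:.0f} in total, {cents_per_bird:.3f} cents per bird")

A consistent valuation would keep the per-bird figure roughly constant; instead it falls from about 4 cents to about 0.04 cents per bird, a drop of roughly a hundredfold, which is precisely the insensitivity to scale the experiment was designed to expose.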

WAY OF KNOWING: Language

"Language exerts hidden power, like a moon on the tides". (Rita Mae Brown)

• In what ways does written language differ from spoken language in its relationship to knowledge?
• What is lost in translation from one language to another? Why?
• Is it possible to think without language?

July 27, 2008

Literacy Debate: Online, R U Really Reading? By MOTOKO RICH

BEREA, Ohio — Books are not Nadia Konyk’s thing. Her mother, hoping to entice her, brings them home from the library, but Nadia rarely shows an interest. Instead, like so many other teenagers, Nadia, 15, is addicted to the Internet. She regularly spends at least six hours a day in front of the computer here in this suburb southwest of Cleveland. A slender, chatty blonde who wears black-framed plastic glasses, Nadia checks her e-mail and peruses myyearbook.com, a social networking site, reading messages or posting updates on her mood. She searches for music videos on YouTube and logs onto Gaia Online, a role-playing site where members fashion alternate identities as cutesy cartoon characters. But she spends most of her time on quizilla.com or fanfiction.net, reading and commenting on stories written by other users and based on books, television shows or movies. Her mother, Deborah Konyk, would prefer that Nadia, who gets A’s and B’s at school, read books for a change. But at this point, Ms. Konyk said, “I’m just pleased that she reads something anymore.” Children like Nadia lie at the heart of a passionate debate about just what it means to read in the digital age. The discussion is playing out among educational policy makers and reading experts around the world, and within groups like the National Council of Teachers of English and the International Reading Association. As teenagers’ scores on standardized reading tests have declined or stagnated, some argue that the hours spent prowling the Internet are the enemy of reading — diminishing literacy, wrecking attention spans and destroying a precious common culture that exists only through the reading of books. But others say the Internet has created a new kind of reading, one that schools and society should not discount. The Web inspires a teenager like Nadia, who might otherwise spend most of her leisure time watching television, to read and write. Even accomplished book readers like Zachary Sims, 18, of Old Greenwich, Conn., crave the ability to quickly find different points of view on a subject and converse with others online. Some children with dyslexia or other learning difficulties, like Hunter Gaudet, 16, of Somers, Conn., have found it far more comfortable to search and read online. At least since the invention of television, critics have warned that electronic media would destroy reading. What is different now, some literacy experts say, is that spending time on the Web, whether it is looking up something on Google or even britneyspears.org, entails some engagement with text.


Setting Expectations Few who believe in the potential of the Web deny the value of books. But they argue that it is unrealistic to expect all children to read “To Kill a Mockingbird” or “Pride and Prejudice” for fun. And those who prefer staring at a television or mashing buttons on a game console, they say, can still benefit from reading on the Internet. In fact, some literacy experts say that online reading skills will help children fare better when they begin looking for digital-age jobs. Some Web evangelists say children should be evaluated for their proficiency on the Internet just as they are tested on their print reading comprehension. Starting next year, some countries will participate in new international assessments of digital literacy, but the United States, for now, will not. Clearly, reading in print and on the Internet are different. On paper, text has a predetermined beginning, middle and end, where readers focus for a sustained period on one author’s vision. On the Internet, readers skate through cyberspace at will and, in effect, compose their own beginnings, middles and ends. Young people “aren’t as troubled as some of us older folks are by reading that doesn’t go in a line,” said Rand J. Spiro, a professor of educational psychology at Michigan State University who is studying reading practices on the Internet. “That’s a good thing because the world doesn’t go in a line, and the world isn’t organized into separate compartments or chapters.” Some traditionalists warn that digital reading is the intellectual equivalent of empty calories. Often, they argue, writers on the Internet employ a cryptic argot that vexes teachers and parents. Zigzagging through a cornucopia of words, pictures, video and sounds, they say, distracts more than strengthens readers. And many youths spend most of their time on the Internet playing games or sending instant messages, activities that involve minimal reading at best. Last fall the National Endowment for the Arts issued a sobering report linking flat or declining national reading test scores among teenagers with the slump in the proportion of adolescents who said they read for fun. According to Department of Education data cited in the report, just over a fifth of 17-year-olds said they read almost every day for fun in 2004, down from nearly a third in 1984. Nineteen percent of 17-year-olds said they never or hardly ever read for fun in 2004, up from 9 percent in 1984. (It was unclear whether they thought of what they did on the Internet as “reading.”) “Whatever the benefits of newer electronic media,” Dana Gioia, the chairman of the N.E.A., wrote in the report’s introduction, “they provide no measurable substitute for the intellectual and personal development initiated and sustained by frequent reading.” Children are clearly spending more time on the Internet. In a study of 2,032 representative 8- to 18-year-olds, the Kaiser Family Foundation found that nearly half used the Internet on a typical day in 2004, up from just under a quarter in 1999. The average time these children spent online on a typical day rose to one hour and 41 minutes in 2004, from 46 minutes in 1999.


The question of how to value different kinds of reading is complicated because people read for many reasons. There is the level required of daily life — to follow the instructions in a manual or to analyze a mortgage contract. Then there is a more sophisticated level that opens the doors to elite education and professions. And, of course, people read for entertainment, as well as for intellectual or emotional rewards. It is perhaps that final purpose that book champions emphasize the most. “Learning is not to be found on a printout,” David McCullough, the Pulitzer Prize-winning biographer, said in a commencement address at Boston College in May. “It’s not on call at the touch of the finger. Learning is acquired mainly from books, and most readily from great books.” What’s Best for Nadia? Deborah Konyk always believed it was essential for Nadia and her 8-year-old sister, Yashca, to read books. She regularly read aloud to the girls and took them to library story hours. “Reading opens up doors to places that you probably will never get to visit in your lifetime, to cultures, to worlds, to people,” Ms. Konyk said. Ms. Konyk, who took a part-time job at a dollar store chain a year and a half ago, said she did not have much time to read books herself. There are few books in the house. But after Yashca was born, Ms. Konyk spent the baby’s nap time reading the Harry Potter novels to Nadia, and she regularly brought home new titles from the library. Despite these efforts, Nadia never became a big reader. Instead, she became obsessed with Japanese anime cartoons on television and comics like “Sailor Moon.” Then, when she was in the sixth grade, the family bought its first computer. When a friend introduced Nadia to fanfiction.net, she turned off the television and started reading online. Now she regularly reads stories that run as long as 45 Web pages. Many of them have elliptical plots and are sprinkled with spelling and grammatical errors. One of her recent favorites was “My absolutely, perfect normal life ... ARE YOU CRAZY? NOT!,” a story based on the anime series “Beyblade.” In one scene the narrator, Aries, hitches a ride with some masked men and one of them pulls a knife on her. “Just then I notice (Like finally) something sharp right in front of me,” Aries writes. “I gladly took it just like that until something terrible happen ....” Nadia said she preferred reading stories online because “you could add your own character and twist it the way you want it to be.” “So like in the book somebody could die,” she continued, “but you could make it so that person doesn’t die or make it so like somebody else dies who you don’t like.” Nadia also writes her own stories. She posted “Dieing Isn’t Always Bad,” about a girl who comes back to life as half cat, half human, on both fanfiction.net and quizilla.com.


Nadia said she wanted to major in English at college and someday hopes to be published. She does not see a problem with reading few books. “No one’s ever said you should read more books to get into college,” she said. The simplest argument for why children should read in their leisure time is that it makes them better readers. According to federal statistics, students who say they read for fun once a day score significantly higher on reading tests than those who say they never do. Reading skills are also valued by employers. A 2006 survey by the Conference Board, which conducts research for business leaders, found that nearly 90 percent of employers rated “reading comprehension” as “very important” for workers with bachelor’s degrees. Department of Education statistics also show that those who score higher on reading tests tend to earn higher incomes. Critics of reading on the Internet say they see no evidence that increased Web activity improves reading achievement. “What we are losing in this country and presumably around the world is the sustained, focused, linear attention developed by reading,” said Mr. Gioia of the N.E.A. “I would believe people who tell me that the Internet develops reading if I did not see such a universal decline in reading ability and reading comprehension on virtually all tests.” Nicholas Carr sounded a similar note in “Is Google Making Us Stupid?” in the current issue of the Atlantic magazine. Warning that the Web was changing the way he — and others — think, he suggested that the effects of Internet reading extended beyond the falling test scores of adolescence. “What the Net seems to be doing is chipping away my capacity for concentration and contemplation,” he wrote, confessing that he now found it difficult to read long books. Literacy specialists are just beginning to investigate how reading on the Internet affects reading skills. A recent study of more than 700 low-income, mostly Hispanic and black sixth through 10th graders in Detroit found that those students read more on the Web than in any other medium, though they also read books. The only kind of reading that related to higher academic performance was frequent novel reading, which predicted better grades in English class and higher overall grade point averages. Elizabeth Birr Moje, a professor at the University of Michigan who led the study, said novel reading was similar to what schools demand already. But on the Internet, she said, students are developing new reading skills that are neither taught nor evaluated in school. One early study showed that giving home Internet access to low-income students appeared to improve standardized reading test scores and school grades. “These were kids who would typically not be reading in their free time,” said Linda A. Jackson, a psychology professor at Michigan State who led the research. “Once they’re on the Internet, they’re reading.” Neurological studies show that learning to read changes the brain’s circuitry. Scientists speculate that reading on the Internet may also affect the brain’s hard wiring in a way that is different from book reading. “The question is, does it change your brain in some beneficial way?” said Guinevere F. Eden, director of
the Center for the Study of Learning at Georgetown University. “The brain is malleable and adapts to its environment. Whatever the pressures are on us to succeed, our brain will try and deal with it.” Some scientists worry that the fractured experience typical of the Internet could rob developing readers of crucial skills. “Reading a book, and taking the time to ruminate and make inferences and engage the imaginational processing, is more cognitively enriching, without doubt, than the short little bits that you might get if you’re into the 30-second digital mode,” said Ken Pugh, a cognitive neuroscientist at Yale who has studied brain scans of children reading. But This Is Reading Too Web proponents believe that strong readers on the Web may eventually surpass those who rely on books. Reading five Web sites, an op-ed article and a blog post or two, experts say, can be more enriching than reading one book. “It takes a long time to read a 400-page book,” said Mr. Spiro of Michigan State. “In a tenth of the time,” he said, the Internet allows a reader to “cover a lot more of the topic from different points of view.” Zachary Sims, the Old Greenwich, Conn., teenager, often stays awake until 2 or 3 in the morning reading articles about technology or politics — his current passions — on up to 100 Web sites. “On the Internet, you can hear from a bunch of people,” said Zachary, who will attend Columbia University this fall. “They may not be pedigreed academics. They may be someone in their shed with a conspiracy theory. But you would weigh that.” Though he also likes to read books (earlier this year he finished, and loved, “The Fountainhead” by Ayn Rand), Zachary craves interaction with fellow readers on the Internet. “The Web is more about a conversation,” he said. “Books are more one-way.” The kinds of skills Zachary has developed — locating information quickly and accurately, corroborating findings on multiple sites — may seem obvious to heavy Web users. But the skills can be cognitively demanding. Web readers are persistently weak at judging whether information is trustworthy. In one study, Donald J. Leu, who researches literacy and technology at the University of Connecticut, asked 48 students to look at a spoof Web site (http://zapatopi.net/treeoctopus/) about a mythical species known as the “Pacific Northwest tree octopus.” Nearly 90 percent of them missed the joke and deemed the site a reliable source. Some literacy experts say that reading itself should be redefined. Interpreting videos or pictures, they say, may be as important a skill as analyzing a novel or a poem. “Kids are using sound and images so they have a world of ideas to put together that aren’t necessarily language oriented,” said Donna E. Alvermann, a professor of language and literacy education at the University of Georgia. “Books aren’t out of the picture, but they’re only one way of experiencing
information in the world today.” A Lifelong Struggle In the case of Hunter Gaudet, the Internet has helped him feel more comfortable with a new kind of reading. A varsity lacrosse player in Somers, Conn., Hunter has struggled most of his life to read. After learning he was dyslexic in the second grade, he was placed in special education classes and a tutor came to his home three hours a week. When he entered high school, he dropped the special education classes, but he still reads books only when forced, he said. In a book, “they go through a lot of details that aren’t really needed,” Hunter said. “Online just gives you what you need, nothing more or less.” When researching the 19th-century Chief Justice Roger B. Taney for one class, he typed Taney’s name into Google and scanned the Wikipedia entry and other biographical sites. Instead of reading an entire page, he would type in a search word like “college” to find Taney’s alma mater, assembling his information nugget by nugget. Experts on reading difficulties suggest that for struggling readers, the Web may be a better way to glean information. “When you read online there are always graphics,” said Sally Shaywitz, the author of “Overcoming Dyslexia” and a Yale professor. “I think it’s just more comfortable and — I hate to say easier — but it more meets the needs of somebody who might not be a fluent reader.” Karen Gaudet, Hunter’s mother, a regional manager for a retail chain who said she read two or three business books a week, hopes Hunter will eventually discover a love for books. But she is confident that he has the reading skills he needs to succeed. “Based on where technology is going and the world is going,” she said, “he’s going to be able to leverage it.” When he was in seventh grade, Hunter was one of 89 students who participated in a study comparing performance on traditional state reading tests with a specially designed Internet reading test. Hunter, who scored in the lowest 10 percent on the traditional test, spent 12 weeks learning how to use the Web for a science class before taking the Internet test. It was composed of three sets of directions asking the students to search for information online, determine which sites were reliable and explain their reasoning. Hunter scored in the top quartile. In fact, about a third of the students in the study, led by Professor Leu, scored below average on traditional reading tests but did well on the Internet assessment. The Testing Debate To date, there have been few large-scale appraisals of Web skills. The Educational Testing Service, which administers the SAT, has developed a digital literacy test known as iSkills that requires students to solve informational problems by searching for answers on the Web. About 80 colleges and a handful of high
schools have administered the test so far. But according to Stephen Denis, product manager at ETS, of the more than 20,000 students who have taken the iSkills test since 2006, only 39 percent of four-year college freshmen achieved a score that represented “core functional levels” in Internet literacy. Now some literacy experts want the federal tests known as the nation’s report card to include a digital reading component. So far, the traditionalists have held sway: The next round, to be administered to fourth and eighth graders in 2009, will test only print reading comprehension. Mary Crovo of the National Assessment Governing Board, which creates policies for the national tests, said several members of a committee that sets guidelines for the reading tests believed large numbers of low-income and rural students might not have regular Internet access, rendering measurements of their online skills unfair. Some simply argue that reading on the Internet is not something that needs to be tested — or taught. “Nobody has taught a single kid to text message,” said Carol Jago of the National Council of Teachers of English and a member of the testing guidelines committee. “Kids are smart. When they want to do something, schools don’t have to get involved.” Michael L. Kamil, a professor of education at Stanford who lobbied for an Internet component as chairman of the reading test guidelines committee, disagreed. Students “are going to grow up having to be highly competent on the Internet,” he said. “There’s no reason to make them discover how to be highly competent if we can teach them.” The United States is diverging from the policies of some other countries. Next year, for the first time, the Organization for Economic Cooperation and Development, which administers reading, math and science tests to a sample of 15-year-old students in more than 50 countries, will add an electronic reading component. The United States, among other countries, will not participate. A spokeswoman for the Institute of Education Sciences, the research arm of the Department of Education, said an additional test would overburden schools. Even those who are most concerned about the preservation of books acknowledge that children need a range of reading experiences. “Some of it is the informal reading they get in e-mails or on Web sites,” said Gay Ivey, a professor at James Madison University who focuses on adolescent literacy. “I think they need it all.” Web junkies can occasionally be swept up in a book. After Nadia read Elie Wiesel’s Holocaust memoir “Night” in her freshman English class, Ms. Konyk brought home another Holocaust memoir, “I Have Lived a Thousand Years,” by Livia Bitton-Jackson. Nadia was riveted by heartbreaking details of life in the concentration camps. “I was trying to imagine this and I was like, I can’t do this,” she said. “It was just so — wow.”


Hoping to keep up the momentum, Ms. Konyk brought home another book, "Silverboy," a fantasy novel. Nadia made it through one chapter before she got engrossed in the Internet fan fiction again.

Copyright 2008 The New York Times Company



Politics and the English Language
by George Orwell (1946)

Most people who bother with the matter at all would admit that the English language is in a bad way, but it is generally assumed that we cannot by conscious action do anything about it. Our civilization is decadent and our language -- so the argument runs -- must inevitably share in the general collapse. It follows that any struggle against the abuse of language is a sentimental archaism, like preferring candles to electric light or hansom cabs to aeroplanes. Underneath this lies the half-conscious belief that language is a natural growth and not an instrument which we shape for our own purposes. Now, it is clear that the decline of a language must ultimately have political and economic causes: it is not due simply to the bad influence of this or that individual writer. But an effect can become a cause, reinforcing the original cause and producing the same effect in an intensified form, and so on indefinitely. A man may take to drink because he feels himself to be a failure, and then fail all the more completely because he drinks. It is rather the same thing that is happening to the English language. It becomes ugly and inaccurate because our thoughts are foolish, but the slovenliness of our language makes it easier for us to have foolish thoughts. The point is that the process is reversible. Modern English, especially written English, is full of bad habits which spread by imitation and which can be avoided if one is willing to take the necessary trouble. If one gets rid of these habits one can think more clearly, and to think clearly is a necessary first step toward political regeneration: so that the fight against bad English is not frivolous and is not the exclusive concern of professional writers. I will come back to this presently, and I hope that by that time the meaning of what I have said here will have become clearer. Meanwhile, here are five specimens of the English language as it is now habitually written. These five passages have not been picked out because they are especially bad -- I could have quoted far worse if I had chosen -- but because they illustrate various of the mental vices from which we now suffer. They are a little below the average, but are fairly representative examples. I number them so that I can refer back to them when necessary: 1. I am not, indeed, sure whether it is not true to say that the Milton who once seemed not unlike a seventeenth-century Shelley had not become, out of an experience ever more bitter in each year, more alien [sic] to the founder of that Jesuit sect which nothing could induce him to tolerate. Professor Harold Laski (Essay in Freedom of Expression ) 2. Above all, we cannot play ducks and drakes with a native battery of idioms which prescribes egregious collocations of vocables as the Basic put up with for tolerate , or put at a loss for bewilder .
Professor Lancelot Hogben (Interglossia ) 3. On the one side we have the free personality: by definition it is not neurotic, for it has neither conflict nor dream. Its desires, such as they are, are transparent, for they are just what institutional approval keeps in the forefront of consciousness; another institutional pattern would alter their number and intensity; there is little in them that is natural, irreducible, or culturally dangerous. But on the other side ,the social bond itself is nothing but the mutual reflection of these self-secure integrities. Recall the definition of love. Is not this the very picture of a small academic? Where is there a place in this hall of mirrors for either personality or fraternity? Essay on psychology in Politics (New York ) 4. All the "best people" from the gentlemen's clubs, and all the frantic fascist captains, united in common hatred of Socialism and bestial horror at the rising tide of the mass revolutionary movement, have turned to acts of provocation, to foul incendiarism, to medieval legends of poisoned wells, to legalize their own destruction of proletarian organizations, and rouse the agitated petty-bourgeoise to chauvinistic fervor on behalf of the fight against the revolutionary way out of the crisis. Communist pamphlet 5. If a new spirit is to be infused into this old country, there is one thorny and contentious reform which must be tackled, and that is the humanization and galvanization of the B.B.C. Timidity here will bespeak canker and atrophy of the soul. The heart of Britain may be sound and of strong beat, for instance, but the British lion's roar at present is like that of Bottom in Shakespeare's A Midsummer Night's Dream -- as gentle as any sucking dove. A virile new Britain cannot continue indefinitely to be traduced in the eyes or rather ears, of the world by the effete languors of Langham Place, brazenly masquerading as "standard English." When the Voice of Britain is heard at nine o'clock, better far and infinitely less ludicrous to hear aitches honestly dropped than the present priggish, inflated, inhibited, school-ma'amish arch braying of blameless bashful mewing maidens! Letter in Tribune Each of these passages has faults of its own, but, quite apart from avoidable ugliness, two qualities are common to all of them. The first is staleness of imagery; the other is lack of precision. The writer either has a meaning and cannot express it, or he inadvertently says something else, or he is almost indifferent as to whether his words mean anything or not. This mixture of vagueness and sheer incompetence is the most marked characteristic of modern English prose, and especially of any kind of political writing. As soon as certain topics are raised, the concrete melts into the abstract and no one seems able to think of turns of speech that are not hackneyed: prose consists less and less of words chosen for the sake of their meaning, and more and more of phrases tacked together like the sections of a prefabricated henhouse. I list below, with notes and examples, various of the tricks by means of which the work of prose construction is habitually dodged: Dying metaphors. A newly invented metaphor assists thought by evoking a visual image, while on the other hand a metaphor which is
technically "dead" (e.g. iron resolution) has in effect reverted to being an ordinary word and can generally be used without loss of vividness. But in between these two classes there is a huge dump of worn-out metaphors which have lost all evocative power and are merely used because they save people the trouble of inventing phrases for themselves. Examples are: Ring the changes on, take up the cudgel for, toe the line, ride roughshod over, stand shoulder to shoulder with, play into the hands of, no axe to grind, grist to the mill, fishing in troubled waters, on the order of the day, Achilles' heel, swan song, hotbed. Many of these are used without knowledge of their meaning (what is a "rift," for instance?), and incompatible metaphors are frequently mixed, a sure sign that the writer is not interested in what he is saying. Some metaphors now current have been twisted out of their original meaning without those who use them even being aware of the fact. For example, toe the line is sometimes written as tow the line. Another example is the hammer and the anvil, now always used with the implication that the anvil gets the worst of it. In real life it is always the anvil that breaks the hammer, never the other way about: a writer who stopped to think what he was saying would avoid perverting the original phrase.

Operators or verbal false limbs. These save the trouble of picking out appropriate verbs and nouns, and at the same time pad each sentence with extra syllables which give it an appearance of symmetry. Characteristic phrases are render inoperative, militate against, make contact with, be subjected to, give rise to, give grounds for, have the effect of, play a leading part (role) in, make itself felt, take effect, exhibit a tendency to, serve the purpose of, etc., etc. The keynote is the elimination of simple verbs. Instead of being a single word, such as break, stop, spoil, mend, kill, a verb becomes a phrase, made up of a noun or adjective tacked on to some general-purpose verb such as prove, serve, form, play, render. In addition, the passive voice is wherever possible used in preference to the active, and noun constructions are used instead of gerunds (by examination of instead of by examining). The range of verbs is further cut down by means of the -ize and de- formations, and the banal statements are given an appearance of profundity by means of the not un- formation. Simple conjunctions and prepositions are replaced by such phrases as with respect to, having regard to, the fact that, by dint of, in view of, in the interests of, on the hypothesis that; and the ends of sentences are saved by anticlimax by such resounding commonplaces as greatly to be desired, cannot be left out of account, a development to be expected in the near future, deserving of serious consideration, brought to a satisfactory conclusion, and so on and so forth.

Pretentious diction. Words like phenomenon, element, individual (as noun), objective, categorical, effective, virtual, basic, primary, promote, constitute, exhibit, exploit, utilize, eliminate, liquidate, are used to dress up a simple statement and give an air of scientific impartiality to biased judgements.
Adjectives like epoch-making, epic, historic, unforgettable, triumphant, age-old, inevitable, inexorable, veritable, are used to dignify the sordid process of international politics, while writing that aims at glorifying war usually takes on an archaic color, its characteristic words being: realm, throne, chariot, mailed fist, trident, sword, shield, buckler, banner, jackboot, clarion. Foreign words and expressions such as cul de sac, ancien régime, deus ex machina, mutatis mutandis, status quo, gleichschaltung, weltanschauung, are used to give an air of culture and elegance. Except for the useful abbreviations i.e., e.g., and etc., there is no real need for any of the hundreds of foreign phrases now current in the English language. Bad writers, and especially scientific, political, and sociological writers, are nearly always haunted by the notion that Latin or Greek words are grander than Saxon ones, and unnecessary words like expedite, ameliorate, predict, extraneous, deracinated, clandestine, subaqueous, and hundreds of others constantly gain ground from their Anglo-Saxon numbers. The jargon peculiar to Marxist writing (hyena, hangman, cannibal, petty bourgeois, these gentry, lackey, flunkey, mad dog, White Guard, etc.) consists largely of words translated from Russian, German, or French; but the normal way of coining a new word is to use a Latin or Greek root with the appropriate affix and, where necessary, the size formation. It is often easier to make up
words of this kind (deregionalize, impermissible, extramarital, non-fragmentary and so forth) than to think up the English words that will cover one's meaning. The result, in general, is an increase in slovenliness and vagueness. Meaningless words. In certain kinds of writing, particularly in art criticism and literary criticism, it is normal to come across long passages which are almost completely lacking in meaning. Words like romantic, plastic, values, human, dead, sentimental, natural, vitality , as used in art criticism, are strictly meaningless, in the sense that they not only do not point to any discoverable object, but are hardly ever expected to do so by the reader. When one critic writes, "The outstanding feature of Mr. X's work is its living quality," while another writes, "The immediately striking thing about Mr. X's work is its peculiar deadness," the reader accepts this as a simple difference opinion. If words like black and white were involved, instead of the jargon words dead and living, he would see at once that language was being used in an improper way. Many political words are similarly abused. The word Fascism has now no meaning except in so far as it signifies "something not desirable." The words democracy, socialism, freedom, patriotic, realistic, justice have each of them several different meanings which cannot be reconciled with one another. In the case of a word like democracy, not only is there no agreed definition, but the attempt to make one is resisted from all sides. It is almost universally felt that when we call a country democratic we are praising it: consequently the defenders of every kind of regime claim that it is a democracy, and fear that they might have to stop using that word if it were tied down to any one meaning. Words of this kind are often used in a consciously dishonest way. That is, the person who uses them has his own private definition, but allows his hearer to think he means something quite different. Statements like Marshal Petain was a true patriot, The Soviet press is the freest in the world, The Catholic Church is opposed to persecution, are almost always made with intent to deceive. Other words used in variable meanings, in most cases more or less dishonestly, are: class, totalitarian, science, progressive, reactionary, bourgeois, equality. Now that I have made this catalogue of swindles and perversions, let me give another example of the kind of writing that they lead to. This time it must of its nature be an imaginary one. I am going to translate a passage of good English into modern English of the worst sort. Here is a well-known verse from Ecclesiastes: I returned and saw under the sun, that the race is not to the swift, nor the battle to the strong, neither yet bread to the wise, nor yet riches to men of understanding, nor yet favour to men of skill; but time and chance happeneth to them all. Here it is in modern English: Objective considerations of contemporary phenomena compel the conclusion that success or failure in competitive activities exhibits no tendency to be commensurate with innate capacity, but that a considerable element of the unpredictable must invariably be taken into account. This is a parody, but not a very gross one. Exhibit (3) above, for instance, contains several patches of the same kind of English. It will be seen that I have not made a full translation. 
The beginning and ending of the sentence follow the original meaning fairly closely, but in the middle the concrete illustrations -- race, battle, bread -- dissolve into the vague phrases "success or failure in competitive activities." This had to be so, because no modern writer of the kind I am discussing -- no one capable of using phrases like "objective considerations of
contemporary phenomena" -- would ever tabulate his thoughts in that precise and detailed way. The whole tendency of modern prose is away from concreteness. Now analyze these two sentences a little more closely. The first contains forty-nine words but only sixty syllables, and all its words are those of everyday life. The second contains thirty-eight words of ninety syllables: eighteen of those words are from Latin roots, and one from Greek. The first sentence contains six vivid images, and only one phrase ("time and chance") that could be called vague. The second contains not a single fresh, arresting phrase, and in spite of its ninety syllables it gives only a shortened version of the meaning contained in the first. Yet without a doubt it is the second kind of sentence that is gaining ground in modern English. I do not want to exaggerate. This kind of writing is not yet universal, and outcrops of simplicity will occur here and there in the worst-written page. Still, if you or I were told to write a few lines on the uncertainty of human fortunes, we should probably come much nearer to my imaginary sentence than to the one from Ecclesiastes. As I have tried to show, modern writing at its worst does not consist in picking out words for the sake of their meaning and inventing images in order to make the meaning clearer. It consists in gumming together long strips of words which have already been set in order by someone else, and making the results presentable by sheer humbug. The attraction of this way of writing is that it is easy. It is easier -- even quicker, once you have the habit -- to say In my opinion it is not an unjustifiable assumption that than to say I think. If you use ready-made phrases, you not only don't have to hunt about for the words; you also don't have to bother with the rhythms of your sentences since these phrases are generally so arranged as to be more or less euphonious. When you are composing in a hurry -- when you are dictating to a stenographer, for instance, or making a public speech -- it is natural to fall into a pretentious, Latinized style. Tags like a consideration which we should do well to bear in mind or a conclusion to which all of us would readily assent will save many a sentence from coming down with a bump. By using stale metaphors, similes, and idioms, you save much mental effort, at the cost of leaving your meaning vague, not only for your reader but for yourself. This is the significance of mixed metaphors. The sole aim of a metaphor is to call up a visual image. When these images clash -- as in The Fascist octopus has sung its swan song, the jackboot is thrown into the melting pot -- it can be taken as certain that the writer is not seeing a mental image of the objects he is naming; in other words he is not really thinking. Look again at the examples I gave at the beginning of this essay. Professor Laski (1) uses five negatives in fifty three words. One of these is superfluous, making nonsense of the whole passage, and in addition there is the slip -- alien for akin -- making further nonsense, and several avoidable pieces of clumsiness which increase the general vagueness. 
Professor Hogben (2) plays ducks and drakes with a battery which is able to write prescriptions, and, while disapproving of the everyday phrase put up with, is unwilling to look egregious up in the dictionary and see what it means; (3), if one takes an uncharitable attitude towards it, is simply meaningless: probably one could work out its intended meaning by reading the whole of the article in which it occurs. In (4), the writer knows more or less what he wants to say, but an accumulation of stale phrases chokes him like tea leaves blocking a sink. In (5), words and meaning have almost parted company. People who write in this manner usually have a general emotional meaning -- they dislike one thing and want to express solidarity with another -- but they are not interested in the detail of what they are saying. A scrupulous writer, in every sentence that he writes, will ask himself at least four questions, thus:

1. What am I trying to say?
2. What words will express it?
3. What image or idiom will make it clearer?
4. Is this image fresh enough to have an effect?

And he will probably ask himself two more:
1. Could I put it more shortly?
2. Have I said anything that is avoidably ugly?

But you are not obliged to go to all this trouble. You can shirk it by simply throwing your mind open and letting the ready-made phrases come crowding in. They will construct your sentences for you -- even think your thoughts for you, to a certain extent -- and at need they will perform the important service of partially concealing your meaning even from yourself. It is at this point that the special connection between politics and the debasement of language becomes clear. In our time it is broadly true that political writing is bad writing. Where it is not true, it will generally be found that the writer is some kind of rebel, expressing his private opinions and not a "party line." Orthodoxy, of whatever color, seems to demand a lifeless, imitative style. The political dialects to be found in pamphlets, leading articles, manifestoes, White papers and the speeches of undersecretaries do, of course, vary from party to party, but they are all alike in that one almost never finds in them a fresh, vivid, homemade turn of speech. When one watches some tired hack on the platform mechanically repeating the familiar phrases -- bestial, atrocities, iron heel, bloodstained tyranny, free peoples of the world, stand shoulder to shoulder -- one often has a curious feeling that one is not watching a live human being but some kind of dummy: a feeling which suddenly becomes stronger at moments when the light catches the speaker's spectacles and turns them into blank discs which seem to have no eyes behind them. And this is not altogether fanciful. A speaker who uses that kind of phraseology has gone some distance toward turning himself into a machine. The appropriate noises are coming out of his larynx, but his brain is not involved as it would be if he were choosing his words for himself. If the speech he is making is one that he is accustomed to make over and over again, he may be almost unconscious of what he is saying, as one is when one utters the responses in church. And this reduced state of consciousness, if not indispensable, is at any rate favorable to political conformity.

In our time, political speech and writing are largely the defense of the indefensible. Things like the continuance of British rule in India, the Russian purges and deportations, the dropping of the atom bombs on Japan, can indeed be defended, but only by arguments which are too brutal for most people to face, and which do not square with the professed aims of the political parties. Thus political language has to consist largely of euphemism, question-begging and sheer cloudy vagueness. Defenseless villages are bombarded from the air, the inhabitants driven out into the countryside, the cattle machine-gunned, the huts set on fire with incendiary bullets: this is called pacification. Millions of peasants are robbed of their farms and sent trudging along the roads with no more than they can carry: this is called transfer of population or rectification of frontiers. People are imprisoned for years without trial, or shot in the back of the neck or sent to die of scurvy in Arctic lumber camps: this is called elimination of unreliable elements. Such phraseology is needed if one wants to name things without calling up mental pictures of them. Consider for instance some comfortable English professor defending Russian totalitarianism. He cannot say outright, "I believe in killing off your opponents when you can get good results by doing so."
Probably, therefore, he will say something like this:

While freely conceding that the Soviet regime exhibits certain features which the humanitarian may be inclined to deplore, we must, I think, agree that a certain curtailment of the right to political opposition is an unavoidable concomitant of transitional periods, and that the rigors which the Russian people have been called upon to undergo have been amply justified in the sphere of concrete achievement.


The inflated style itself is a kind of euphemism. A mass of Latin words falls upon the facts like soft snow, blurring the outline and covering up all the details. The great enemy of clear language is insincerity. When there is a gap between one's real and one's declared aims, one turns as it were instinctively to long words and exhausted idioms, like a cuttlefish spurting out ink. In our age there is no such thing as "keeping out of politics." All issues are political issues, and politics itself is a mass of lies, evasions, folly, hatred, and schizophrenia. When the general atmosphere is bad, language must suffer. I should expect to find -- this is a guess which I have not sufficient knowledge to verify -that the German, Russian and Italian languages have all deteriorated in the last ten or fifteen years, as a result of dictatorship. But if thought corrupts language, language can also corrupt thought. A bad usage can spread by tradition and imitation even among people who should and do know better. The debased language that I have been discussing is in some ways very convenient. Phrases like a not unjustifiable assumption, leaves much to be desired, would serve no good purpose, a consideration which we should do well to bear in mind, are a continuous temptation, a packet of aspirins always at one's elbow. Look back through this essay, and for certain you will find that I have again and again committed the very faults I am protesting against. By this morning's post I have received a pamphlet dealing with conditions in Germany. The author tells me that he "felt impelled" to write it. I open it at random, and here is almost the first sentence I see: "[The Allies] have an opportunity not only of achieving a radical transformation of Germany's social and political structure in such a way as to avoid a nationalistic reaction in Germany itself, but at the same time of laying the foundations of a co-operative and unified Europe." You see, he "feels impelled" to write -- feels, presumably, that he has something new to say -- and yet his words, like cavalry horses answering the bugle, group themselves automatically into the familiar dreary pattern. This invasion of one's mind by ready-made phrases ( lay the foundations, achieve a radical transformation ) can only be prevented if one is constantly on guard against them, and every such phrase anaesthetizes a portion of one's brain. I said earlier that the decadence of our language is probably curable. Those who deny this would argue, if they produced an argument at all, that language merely reflects existing social conditions, and that we cannot influence its development by any direct tinkering with words and constructions. So far as the general tone or spirit of a language goes, this may be true, but it is not true in detail. Silly words and expressions have often disappeared, not through any evolutionary process but owing to the conscious action of a minority. Two recent examples were explore every avenue and leave no stone unturned , which were killed by the jeers of a few journalists. There is a long list of flyblown metaphors which could similarly be got rid of if enough people would interest themselves in the job; and it should also be possible to laugh the not un- formation out of existence, to reduce the amount of Latin and Greek in the average sentence, to drive out foreign phrases and strayed scientific words, and, in general, to make pretentiousness unfashionable. But all these are minor points. 
The defense of the English language implies more than this, and perhaps it is best to start by saying what it does not imply. To begin with it has nothing to do with archaism, with the salvaging of obsolete words and turns of speech, or with the setting up of a "standard English" which must never be departed from. On the contrary, it is especially concerned with the scrapping of every word or idiom which has outworn its usefulness. It has nothing to do with correct grammar and syntax, which are of no importance so long as one makes one's meaning clear, or with the avoidance of Americanisms, or with having what is called a "good prose style." On the other hand, it is not concerned with fake simplicity and the attempt to make written English colloquial. Nor does it even imply in every case preferring the Saxon word to the Latin one, though it does imply using the fewest and shortest words that will cover one's meaning. What is above all needed is to let the meaning choose the word, and not the other way around. In prose, the worst thing one can do with words is surrender to
them. When yo think of a concrete object, you think wordlessly, and then, if you want to describe the thing you have been visualizing you probably hunt about until you find the exact words that seem to fit it. When you think of something abstract you are more inclined to use words from the start, and unless you make a conscious effort to prevent it, the existing dialect will come rushing in and do the job for you, at the expense of blurring or even changing your meaning. Probably it is better to put off using words as long as possible and get one's meaning as clear as one can through pictures and sensations. Afterward one can choose -- not simply accept -- the phrases that will best cover the meaning, and then switch round and decide what impressions one's words are likely to mak on another person. This last effort of the mind cuts out all stale or mixed images, all prefabricated phrases, needless repetitions, and humbug and vagueness generally. But one can often be in doubt about the effect of a word or a phrase, and one needs rules that one can rely on when instinct fails. I think the following rules will cover most cases: 1. 2. 3. 4. 5. 6.

1. Never use a metaphor, simile, or other figure of speech which you are used to seeing in print.
2. Never use a long word where a short one will do.
3. If it is possible to cut a word out, always cut it out.
4. Never use the passive where you can use the active.
5. Never use a foreign phrase, a scientific word, or a jargon word if you can think of an everyday English equivalent.
6. Break any of these rules sooner than say anything outright barbarous.

These rules sound elementary, and so they are, but they demand a deep change of attitude in anyone who has grown used to writing in the style now fashionable. One could keep all of them and still write bad English, but one could not write the kind of stuff that I quoted in those five specimens at the beginning of this article. I have not here been considering the literary use of language, but merely language as an instrument for expressing and not for concealing or preventing thought. Stuart Chase and others have come near to claiming that all abstract words are meaningless, and have used this as a pretext for advocating a kind of political quietism. Since you don't know what Fascism is, how can you struggle against Fascism? One need not swallow such absurdities as this, but one ought to recognize that the present political chaos is connected with the decay of language, and that one can probably bring about some improvement by starting at the verbal end. If you simplify your English, you are freed from the worst follies of orthodoxy. You cannot speak any of the necessary dialects, and when you make a stupid remark its stupidity will be obvious, even to yourself. Political language -- and with variations this is true of all political parties, from Conservatives to Anarchists -- is designed to make lies sound truthful and murder respectable, and to give an appearance of solidity to pure wind. One cannot change this all in a moment, but one can at least change one's own habits, and from time to time one can even, if one jeers loudly enough, send some worn-out and useless phrase -- some jackboot, Achilles' heel, hotbed, melting pot, acid test, veritable inferno, or other lump of verbal refuse -- into the dustbin, where it belongs. 1946

Agenbites
Joseph Bottum, wordy.
by Joseph Bottum
05/19/2008, Volume 013, Issue 34

Thwart. Yes, thwart is a good word. Thwarted. Athwart. A kind of satisfaction lives in such words--a unity, a completion. Teach them to a child, and you'll see what I mean: skirt, scalp, drab, buckle, sneaker, twist, jumble. Squeamish, for that matter. They taste good in the mouth, and they seem to resound with their own verbal truthfulness. More like proper nouns than mere words, they match the objects they describe. Pickle, gloomy, portly, curmudgeon--sounds that loop back on themselves to close the circle of meaning. They're perfect, in their way. They're what all language wants to be when it grows up.

Admittedly, some of this comes from onomatopoeia: words that echo the sound of what they name. Hiccup, for instance, and zip. The animal cries of quack and oink and howl. The mechanical noises of click and clack and clank. Chickadees, cuckoos, and whip-poor-wills all get their names this way. Whooping cranes, as well, and when I was little, I pictured them as sickly birds, somehow akin to whooping cough.

And yet, that word akin--that's a good word, too, though it lacks even the near-onomatopoeia of percussion and lullaby, or the ideophonic picture-drawing of clickety-clack and gobble. The words I'm thinking of are, rather, the ones that feel right when we say them: accurate expressions, somehow, for themselves. Apple, for instance, has always seemed to me the perfect name--a crisp and tanged and ruddy word.

Grammarians may have a technical term for these words that sound true, though I've never come across quite what I'm looking for. Homological, maybe? Autological? Ipsoverific? In a logical sense, of course, some words are literally true or false when applied to themselves. Words about words, typically: Noun is a noun, though verb is not a verb. Polysyllabic is self-true, and monosyllabic is not. And this logical notion of autology can be extended. If short seems a short word, true of itself, then the shorter long must be false of itself.

But what about jab or fluffy or sneer, each of them true in a way that goes beyond logic? Verbose has always struck me as a strangely verbose word. Peppy has that perky, energetic, spry sound it needs. And was there ever a more supercilious word than supercilious? Or one more lethargic than lethargic?

Let's coin a term for this kind of poetic, extralogical accuracy. Let's call it agenbite. That's a word Michael of Northgate cobbled up for his 1340 Remorse of Conscience--or Agenbite of Inwit, as he actually titled the book. English would later settle on the French-born word "remorse" to carry the sense of the Latin re-mordere, "to bite again." But Michael didn't know that at the time, and so he simply translated the word's parts: again-bite or (in the muddle of early English spelling) agenbite. Anyway, these words that sound true need some kind of name. And since they do bite back on themselves, like a snake swallowing its tail, Michael's term will do as well as any other.

Ethereal is an agenbite, isn't it? All ethereal and airy. Rapier, swashbuckler, erstwhile, obfuscate, spume--agenbites, every one. Reverberation reverberates, and jingle jingles. A friend insists that machination is a word that tells you all about its Machiavellian self, and surely sporadic is a clean agenbite, with something patchy and intermittent in the taste as you say it.

Sheer sound won't make one of these agenbites, however pleasurable the word feels on the tongue.
Perspicacious is a succulent thing, I suppose, but who ever heard its perspicacity? Pragmatic seems closer, but in the end it's not quite hardnosed enough to get the job done. Pertussis, the scientific name for whooping cough, is one of those bad Latin terms that doctors used to invent, back in the days before they settled on the odd convention of naming diseases after doctors.

And, as far as the sound goes, you can't ask for a better word to pronounce than pertussis--but where's the whoop?

Odd. Now there's a word that says just what it means. Dwindle wants to fade away even while you're saying it. And surely splendiferous is a solid agenbite, expressing its own hollow pomposity. For that matter, isn't hollow a little hollow, with the sound of a hole at its center? Maybe not, but you always know where you are with words like dreary and gossip and gut and bludgeon. Or with onomatopoeics like flap and slurp and splash and gurgle. Or with the whole set of English -umbles: fumble and mumble and bumble.

Gargoyle sounds like a word that knows just what it is. Snake and swoop and spew all reach back to gnaw on themselves--agenbites of speech. They're part of what makes poetry work. They're what all language wants to be, when it grows up.

JOSEPH BOTTUM

© Copyright 2008, News Corporation, Weekly Standard, All Rights Reserved.

Are you going forward? Then stop now

A POINT OF VIEW

Blue sky thinking, pushing the envelope - the problem with office-speak is that it cloaks the brutal modern workplace in such brainlessly upbeat language... as Lucy Kellaway dialogues.

For the last few months I've been on a mission to rid the world of the phrase "going forward". But now I see that the way forward is to admit defeat. This most horrid phrase is with us on a go-forward basis, like it or not.

I reached this sad conclusion early one morning a couple of weeks ago when listening to Farming Today. A man from the National Farmers' Union was talking about matters down on the farm and he uttered three "going forwards" in 28 seconds.

The previous radio record, by my reckoning, was held by Robert Peston, the BBC's business editor. He managed three going forwards in four minutes on the Today programme, but then maybe that wasn't such a huge achievement when you think that he spends his life rubbing shoulders with business people. And they say going forward every time they want to make any comment about the future, which is rather often.

But for the farmer, who spends his life rubbing shoulders with cows, to say it so often represented a linguistic landmark. If the farms of England are now going forward, then there is no turning back for any of us.

I know I'm on slightly shaky ground talking on the radio about how badly other people talk on the radio. I'm also feeling a bit chastened having recently read a column by Craig Brown in the Telegraph consisting of spoof letters from language pedants. One of them went like this: "Sir - Listening to my wireless, I heard a song with the chorus, 'She loves you yeah yeah yeah'. Later in the same song, insult was added to injury with yet another chorus, this time, 'She loves you yeah yeah yeah yeah yeah'. Whatever happened to that good old-fashioned word "yes"?

What was even funnier than his column was the readers' response to it on the Telegraph website. Most of them had quite failed to notice that they were being laughed at, and seized on the opportunity to voice their own concerns over declining standards of modern English. One took issue with the preposition "on", wailing over its use in "on the weekend" and "on the team". Another despaired over "for free". A third deplored "different to".

You could say this orgy of pedantry was not only tedious, but also pointless. Language changes. End of. - to use a particularly annoying new phrase. Yet protesting feels so good. Not only does it allow one to wallow in the superiority of one's education, but some words are so downright annoying that to complain brings relief. When someone says "going forward" it assaults the ears just as, when a colleague starts slurping French onion soup at a neighbouring desk, it assaults the nose.

Flinch mob

We all have our own pet hates - I don't particularly mind "for free": I think it's quite comic. Neither do I mind the preposition "on". But "up" - now that's another matter altogether. To free up and to head up, both deliver a little jolt of irritation whenever I hear them. And as for heads-up, as in give me a heads up, that is utterly maddening. In any case, pedantry has a fine tradition.

Writing in The Tatler in 1710, Jonathan Swift complained, "I have done my utmost for some Years past, to stop the Progress of Mob... but have been plainly born down by Numbers, and betrayed by those who promised to assist me." Instead of saying mob, they should have used the proper Latin term mobile vulgus, mobile meaning changeable or fickle and vulgus meaning common people.

Yet here I think Swift was being a fussy old bore in objecting to a harmless little bit of shortening. One syllable is surely a lot more manageable than five, so I really can't see what his problem was. And the word mob is so good it has survived the next three centuries with meaning unchanged.

By contrast there is so much more to object to in "going forward". It clings to the tongues of speakers compelling them to utter it again and again. It is a grown up equivalent of the word "like", which seems to trip off the tongue of the average teenager every two or three waking seconds. Like "like", "going forward" is as contagious as smallpox. It started with business people, and now has not only infected farmers, it has reached epidemic proportions with footballers.

Hippie hangover

When asked if he was going to be the England captain again after his triumph with Trinidad and Tobago, David Beckham came out with the gnomic reply "Going forward, who knows." It seems that the less one has to say, the more likely one is to reach for a going forward as a crutch. Politicians find it comforting for this reason. "We are going forward" poor Hillary Clinton said just before the last, fatal primary last month when it became indisputable that she was going nowhere of the kind.

Yet more than all this, the really lethal thing about the whole language of business is that it is so brainlessly upbeat. All the celebrating, the reaching out, the sharing, and the championing in fact grind one down. Several decades too late, it is as if business has caught up with the linguistic spirit of 1968. The hippies got over it, but businessmen are holding tight.

The reason that the talk jars so much is that the walk doesn't match. The reality is that business is the most brutal it has been for half a century. If your company is not better than the competition, it goes bust. If you aren't good, or aren't thought to be good (which is a slightly different thing) you get pushed aside.

For nearly a decade I wrote a fictional column in the Financial Times about a senior manager who spoke almost entirely in business cliches. Martin Lukes talked the talk. Or rather, he added value by reaching out and sharing his blue sky thinking. At the end of the day he stepped up to the plate and delivered world class jargon that really pushed the envelope. After eight years of being him I came to accept the nouns pretending to be verbs. To task and to impact. Even the new verb to architect I almost took in my stride. I didn't even really mind the impenetrable sentences full of leveraging value and paradigm shifts. But what still rankled after so long were the little things: that he said myself instead of me and that he would never talk about a problem, when he could dialogue around an issue instead.

Misplaced passion

Many of Martin's favourite phrases have recently found their way onto a list of 100 banned words that has been sent by the Local Government Association to Councils with the instruction that they are no longer to use them. It's a nice try, but I fear they are just as likely to succeed as I was with going forward. Yet what no list of words can get at is the new business insincerity: a phoney upping of the emotional ante. Last week I got an e-mail from someone I had never met that began by saying "I'm reaching out to you" and ended "warmest personal regards". As her regards had no business to be either warm or personal, the overall effect was somewhat chilling.

But this incontinent gush is nothing compared to an e-mail sent by an extremely powerful person at JP Morgan encouraging his investment banking team to be more human. In it he said: "Take the time today to call a client and tell them you love them. They won't forget you made the call." Indeed. I'm sure the client would remember such a call for a very long time.

If love has no place in the language of business, neither does passion. Passion, says the dictionary, means a strong sexual desire or the suffering of Christ at the crucifixion. In other words it doesn't really have an awful lot to do with a typical day in the office - unless things have gone very wrong indeed. And yet passion is something
that every employee must attest to in order to get through any selection process. Every one of the candidates in the final rounds of interview on the Apprentice solemnly declared that they were passionate about being Sir Alan's Apprentice. It's not only when you're trying to impress nine million viewers on national TV. Even to get a humble job in a call centre passion is required. One of the big banks is currently advertising for such workers saying "we seek passionate banking representatives to uphold our values." This is a lie. Actually what the bank is seeking is competent people to follow instructions and answer the phones.

The biggest lie of all in business speak is about ownership. In order to make it appear that there is a strong bond between customers and companies there is My e-Bay and My EasyJet and - most successfully of all - Your M&S. At the risk of being as pedantic as Jonathan Swift, I'd like to point out that it isn't my M&S. It isn't yours either. Neither is it even Stuart Rose's M&S. The company belongs to its shareholders. Though, just for the record, the knighthood Sir Stuart was given last week by the Queen really was his. Yet even that he deemed to be owned more broadly. It's not really for me, he said, it's for all M&S employees. I'm not quite sure what he was saying here, unless it was that everyone who works at Your M&S can call themselves Sir and Lady, going forward.

Readers added their comments on the story:

As freelance copywriters, we spend a lot of time trying to convince people that business speak is archaic, and to find their company’s real voice. But the use of a particular word in a company department can become almost fashionable. Marketing, HR, sales – they’ve all got their own micro-languages. The good news is, people are starting to recognise how ridiculous management claptrap is - Business Bingo wasn’t invented for nothing. The bad news? Well, the trend now is towards sounding like Innocent. They developed a fantastic voice, which comes from who they are and what they do. Fine if you sell fruit-based drinks. Not so good if you’re a financial adviser. Everyone’s different, but it’s taking a while to convince some businesses of that.
Agent A, UK

I consider myself a linguist (being fluent in six languages, English being one of my foreign ones), but I also consider myself a business woman and a positive thinker. "Going forward" is the ONLY way to go, because what is the alternative? Staying put? Going backwards? Too many people are stuck in the past, both in their personal and their professional lives. There is nothing grammatically wrong with it either (apart from the fact that it should read "going forwards") and as you correctly point out, language changes; it is a living thing that constantly develops. I am all for correct punctuation and spelling, but the most important function of language is what it conveys, and in this case, going forward(s) conveys exactly the right message. Pessimism and realism are the quickest ways to get the economy in decline, positivism is the only way to get it back on track.
Inge van der Veen, Bristol, UK

The evolution of the language is what keeps it from following Latin into obscurity. But sloppy cliches, impenetrable jargon and meaningless verbal litter, like, just clutter and obstruct clear communication and hide superficial thinking.
What we need is a new Oftalk to regulate the use of language, headed by a Communication Czar who is passionate about going forward to champion the cause of the deep-rooted values that underpin the language that we all cherish as embodying the lasting and universal but distinctively English values of ... now hang on, where was I?
Yitz Freeman

This usage seems fairly mysterious: The president said he is confident that ... Mr Brown will ... "make sure that the sacrifices that have gone forward won't be unravelled by draw-downs that may not be warranted".
Ali, London

Language needs to adapt to new developments. While I agree that changing the word "redundant" to "downsized" does little to change the fact that you need to find a new job, trying to halt the expansion of language is like trying to stop the tide. George Orwell's 1984 shows us that the fewer words we have to express and think with, the more limited and controlled we become. As it's not the weapons that kill people, but rather those who use them, words are not insincere, but those who speak them may be. So let's go forward and push the envelope of word usage, but let's do it with a bit of serenity.
DS, Croydon, England

Unfortunately, the success of English as the dominant business language means that England no longer has control, but surely that's better than having a group of 20 octogenarians telling us we should say magnetoscope instead of video recorder. Plus, we can now play Buzzword Bingo during tedious cross-channel conference calls.
JC Lux, Luxembourg

WAY OF KNOWING:

Sense Perception

“You can’t depend on your eyes when your imagination is out of focus.” (Mark Twain)

• What possibilities for knowledge are opened to us by our senses as they are? What limitations?
• Does the predominance of visual perception constitute a natural characteristic of our human experience or is it one among several ways of being in the world?

Olfactory diagnostics

Smelling bad
Aug 28th 2008
From The Economist print edition

Doctors may soon have a new diagnostic tool in their kit bags

Since time immemorial—or at least as far back as Hippocrates—novice physicians have been taught to smell patients’ breath for signs of illness. Though unpleasant for the doctor, it is a useful trick. The sweet smell of rotten apples, for instance, indicates diabetes. Liver disease, by contrast, often causes the breath to smell fishy. But the human nose cannot detect all the chemical changes brought about by disease. Science, therefore, seeks to smell what human doctors cannot. The aim is to create a diagnostic nose as discriminating as those of perfume mixers or wine buyers. Such a nose would, however, be sensitive not to life’s pleasures, but to its pains.

The idea of creating a diagnostic nose goes back to the 1970s. In that decade Linus Pauling, a Nobel-prize-winning chemist, performed the first serious scientific analysis of human breath. He used a technique called gas chromatography, which enables complex mixtures to be separated into their components, to detect some 250 volatile organic compounds in the air exhaled from lungs. Gas chromatography by itself, however, does not allow you to identify each component—it is merely a way of separating them. To make the identifications, you need to add a second step, called mass spectrometry. This, as its name suggests, works out the weight of the molecules in each component. Often, weight is enough by itself to identify a molecule. But if two molecules happen to have the same weight, they can be analysed by breaking them up into smaller, daughter molecules. These are almost certain to differ in weight. Using gas chromatography and mass spectrometry, researchers have, over the years, identified more than 3,000 compounds that are regularly exhaled, excreted or exuded from the body. The search, now, is to understand how changes in the mixture of these compounds may indicate disease, and to find ways of recognising such changes routinely and robustly.

Exhaustive analysis

One of the first practitioners of the field of olfactory diagnosis, Carolyn Willis of Amersham Hospital in Britain, decided to contract the job out to dogs. They, she reckoned, have the necessary nasal apparatus to sniff out illness, and there was already some anecdotal evidence that they could, indeed, smell people with cancer. It worked. For the past four years her sniffer dogs have been diagnosing bladder cancer. She is now training them to detect prostate cancer and skin cancer as well.

But training dogs is probably not the best solution. It takes time and needs special skills, so mass-producing sniffer dogs would be hard. Moreover, a dog can give you only a yes-or-no answer. It cannot describe nuances, even if it detects them. Boguslaw Buszewski of Nicolaus Copernicus University in Torun, Poland, compares this approach to checking for fever by touching a patient’s forehead. That tells you he is ill. However, it is only by measuring his temperature with a thermometer that you can discover how serious his condition is. In Dr Buszewski’s view the breath-analysis equivalent of the thermometer is the mass spectrometer, and that is where effort should be concentrated.

Other researchers agree. Earlier this month Michelle Gallagher, of the Monell Chemical Senses Centre in Philadelphia, announced the results of a study that uses this approach. She confirmed that the early stages of basal-cell carcinoma, a type of skin cancer, can be detected by analysing the odour of a person’s skin using gas chromatography and mass spectrometry. To do so, she sampled the air immediately above the tumours and compared its composition with that of air from the same sites in healthy individuals. She also checked the composition of the air in the room when nobody was present, as an extra control. She found that although air collected from both groups contained the same chemical substances, there was a difference in the amounts of some of them. This finding allowed her to produce what is known as a biomarker profile for the illness. That means it can be diagnosed reliably and—crucially—early on.

The combination of gas chromatography and mass spectrometry thus works. It can, nevertheless, take up to two days to run the tests. Dr Buszewski hopes to refine and speed up the process so that it can be carried out within an hour. To do this, he has developed a device that can be tuned to pick up and concentrate the most relevant molecules. With patents still pending, he is cagey about the details, but the principle is to trap relevant molecules using columns made of metal or silica that are the width of a human hair. Each column is coated with special polymers tweaked so that they bind preferentially to particular compounds found in the breath. Pass a sample through a forest of these columns and the molecules of interest will be sucked out. They can then be flushed into the analytical machinery and a result quickly emerges.
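
The two-step identification described in the article (separate the mixture, then match each component by molecular weight, falling back on the weights of its daughter fragments when two candidates tie) can be made concrete with a small sketch. The Python snippet below is purely illustrative: the reference table, masses and fragment lists are hypothetical placeholders rather than data from the study, and real spectrometry software is far more elaborate.

# A minimal, hypothetical sketch (not the researchers' software): match separated
# components to a toy reference table by molecular weight, then by fragment weights.
from typing import Dict, List, Optional, Tuple

# Hypothetical reference data: compound -> (molecular weight, typical fragment weights)
REFERENCE: Dict[str, Tuple[float, List[float]]] = {
    "acetone":     (58.08, [43.0, 15.0]),
    "1-propanol":  (60.10, [31.0, 29.0]),
    "acetic acid": (60.05, [45.0, 43.0]),
}

def identify(weight: float, fragments: Optional[List[float]] = None, tol: float = 0.1) -> Optional[str]:
    """Step 1: find reference compounds whose molecular weight matches.
    Step 2: if several share that weight, compare daughter-fragment weights."""
    candidates = [name for name, (mw, _) in REFERENCE.items() if abs(mw - weight) <= tol]
    if len(candidates) == 1:
        return candidates[0]
    if len(candidates) > 1 and fragments:
        def matches(name: str) -> int:
            ref_frags = REFERENCE[name][1]
            return sum(any(abs(f - r) <= tol for r in ref_frags) for f in fragments)
        return max(candidates, key=matches)
    return None  # unresolved without fragment data

# One "breath sample" after chromatographic separation: (weight, observed fragments)
sample = [(58.1, None), (60.1, [31.0, 29.0])]
print([identify(w, f) for w, f in sample])  # ['acetone', '1-propanol']

Real systems screen thousands of compounds and weigh relative abundances, but the tie-breaking logic is the same in spirit.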

WAY OF KNOWING:

Emotion

“Deep thinking is attainable only by a man of deep feeling” (Samuel Taylor Coleridge)

• Can feelings have a rational basis?
• ‘You’re being emotional’ is usually taken as a criticism. Why?
• To what extent are we able to control our emotions? Which is the most difficult to control?
• How would our understanding of the world be different if we were emotionless?

September 17, 1995

Ruling Passions
By EUGENE KENNEDY

EMOTIONAL INTELLIGENCE. By Daniel Goleman. 352 pp. New York: Bantam Books.

Perhaps presaging the naive confidence of the Gail Sheehy era, Zelda Fitzgerald claimed to be a real American because she believed you could learn to play the piano by mail. Such optimism about remaking the self, based on everything from the New Testament to the New Age, may explain the booming condition of the nation's bookstores; there is nothing more American, except perhaps buying into Ponzi schemes and donning lodge costumes, than treating life as a multipart but curable illness.

Daniel Goleman may then be accused of doing something unpatriotic, or at least countercultural, in "Emotional Intelligence": he refuses to oversimplify our emotional lives or to provide painless ways to manage them. Respecting the complex unity of personality, he asks what accounts for "the disintegration of civility and safety" in our daily lives, and, more to the point: "What can we change that will help our children fare better in life?" His answer: turn to the schools to teach "self-control, zeal and persistence, and the ability to motivate oneself" -- in other words, the abilities he calls "emotional intelligence."

Mr. Goleman builds his argument deliberately, beginning with a masterly overview of recent research in psychology and neuroscience. Having reported on such matters for The New York Times for the last decade, he is a teacher at ease with his subject. Without distracting himself, he glances up from his notes to make lively connections between the wealth of new understandings and the riches of older wisdom about our affective lives.

Mr. Goleman realizes that humans prefer to think of their personalities as plated with the gold of rationality rather than what they regard as the base metal of the emotions. At another level, they know better. We all gritted our teeth, for example, when Lyndon Johnson crinkled his eyes and quoted Isaiah: "Come now, let us reason together." We knew that he was about to abandon reason and to apply a tight-as-a-tourniquet emotional hold on others until, choking and red-faced, they agreed with him.

Poorly understood and badly monitored emotions are, Mr. Goleman suggests, a national problem, interfering with every aspect of our intimate and public lives. In a supposedly therapeutic age, people chant the mantras "I hear a lot of anger" or "I feel your pain," but they will not be comforted. Recognizing our highly combustible national bad mood, Mr. Goleman identifies his governing insight with that of Aristotle in "The Nicomachean Ethics": "Anyone can become angry -- that is easy. But to be angry with the right person, to the right degree, at the right time, for the right purpose and in the right way -- this is not easy." Mr. Goleman believes we can cultivate emotional intelligence, and improve not only the I.Q.'s but the general life performances of the many children who
now suffer because of our society's unbalanced emphasis on the intellectual at the expense of the affective dimension of personality. In his final section, he offers a plan for schooling to restore our badly neglected "emotional literacy." Proposing far greater attention to classes in "social development," "life skills" and "social and emotional learning," he singles out the Self Science curriculum, which began at the Nueva Learning Center, a small private school in San Francisco. To skeptics who, he acknowledges, will "understandably" ask whether emotional intelligence can be taught in a less privileged setting, he offers a visit to the Augusta Lewis Troup Middle School in New Haven. And for overburdened teachers who may resist adding another class, he suggests working "lessons on feelings and relationships" in with "other topics already taught." SOME readers will consider the concept of "emotional intelligence" as little different from traditional understandings of emotional adulthood and observe that Mr. Goleman scants powerful formative influences like mature religious faith. Others may argue that his vision of a school-based cure for a problem that begins at home adds unrealistic burdens to already stumbling systems. Nonetheless, Mr. Goleman, with an economy of style that serves his reformer's convictions well, integrates a vast amount of material on issues whose intricacy and problematic character he reveals in an original and persuasive way. Eugene Kennedy is a professor emeritus of psychology at Loyola University of Chicago.

Copyright 2008 The New York Times Company

Moral Judgment Fails Without Feelings

Study co-authored by USC College neuroscientist Antonio Damasio traces harmful moral choices to damaged emotional circuits.

By Carl Marziali
March 2007

Consider the following scenario: A runaway train car is heading for several workers, who will die if nothing is done. You are on a footbridge over the tracks, next to a large stranger whose weight could stop the train car before it hits the workers. Do you push him to his death?

Most people waver or say they could not, even if they agree that in theory they should. But according to a new study in the journal Nature, subjects with damage to a part of the frontal lobe make a less personal calculation. The logical choice, they say, is to sacrifice one life to save many.

Conducted by researchers at USC, Harvard University, Caltech and the University of Iowa, the study shows that emotion plays an important role in scenarios that pose a moral dilemma. If certain emotions are blocked, we make decisions that – right or wrong – seem unnaturally cold. The scenarios in the study are extreme, but the core dilemma is not: Should one confront a co-worker, challenge a neighbor or scold a loved one in the interest of the greater good?

A total of 30 subjects of both genders faced a set of scenarios pitting immediate harm to one person against future certain harm to many. Six had damage to the ventromedial prefrontal cortex (VMPC), a small region behind the forehead,
while 12 had brain damage elsewhere, and another 12 had no damage. The subjects with VMPC damage stood out in their stated willingness to harm an individual – a prospect that usually generates strong aversion. “In those circumstances, most people without this specific brain damage will be torn. But these particular subjects seem to lack that conflict,” said co-senior author Antonio Damasio, director of the Brain and Creativity Institute and holder of the David Dornsife Chair in Neuroscience in USC College. “Our work provides the first causal account of the role of emotions in moral judgments,” said co-senior author Marc Hauser, professor of psychology at Harvard and Harvard College Professor. But, Hauser added, not all moral reasoning depends so strongly on emotion. “What is absolutely astonishing about our results is how selective the deficit is,” he said. “Damage to the frontal lobe leaves intact a suite of moral problem-solving abilities but damages judgments in which an aversive action is put into direct conflict with a strong utilitarian outcome.” It is the feeling of aversion that normally blocks humans from harming each other. Damasio described it as “a combination of rejection of the act but combined with the social emotion of compassion for that particular person.” The study will inform a classic philosophical debate on whether humans make moral judgments based on norms and societal rules or based on their emotions. The study holds another implication for philosophy. By showing that humans are neurologically unfit for strict utilitarian thinking, the study suggests that neuroscience may be able to test different philosophies for compatibility with human nature. The Nature study expands on work on emotion and decision-making that Damasio began in the early 1990s and that caught the public eye in his first book, Descartes’ Error (Putnam, 1994). Marc Hauser, whose behavioral work in animals has attempted to identify precursors to moral behavior, then teamed up with Damasio’s group to extend those observations. Other authors on the study were Fiery Cushman and Liane Young of Harvard, Ralph Adolphs of Caltech, and Michael Koenigs and Daniel Tranel of the University of Iowa. Funding for the research came from the National Institutes of Health, the National Science Foundation, the Gordon and Betty Moore Foundation, and the Guggenheim Foundation. The mission of the Brain and Creativity Institute is to study the neurological roots of human emotions, memory and communication and to apply the findings to problems in the biomedical and sociocultural arenas. The institute brings together technology and the social sciences in a novel interdisciplinary setting.

AREA OF KNOWLEDGE:

Natural Sciences

“A theory is a fantasy constrained by fact.” (S. Bastian)

• What is the role of creativity in the sciences?
• What knowledge, if any, will always remain beyond the capabilities of science to investigate or verify?
• What could be meant by “I have been steeped in science all my life, now I am ready to pray”? (Stephen Hawking)
• Does the social context of scientific work affect the methods and findings of science?

Study: 16 Percent of U.S. Science Teachers Are Creationists
By BOB HOLMES

May 21, 2008 — Despite a court-ordered ban on the teaching of creationism in U.S. schools, about one in eight high-school biology teachers still teach it as valid science, a survey reveals. And, although almost all teachers also taught evolution, those with less training in science -- and especially evolutionary biology -- tend to devote less class time to Darwinian principles.

US courts have repeatedly decreed that creationism and intelligent design are religion, not science, and have no place in school science classrooms. But no matter what courts and school boards decree, it is up to teachers to put the curriculum into practice. "Ultimately, they are the ones who carry it out," says Michael Berkman, a political scientist at Pennsylvania State University in University Park. But what teachers actually teach about evolution and creationism in their classrooms is a bit of a grey area, so Berkman and his colleagues decided to conduct the first-ever national survey on the subject.

'Not Shocking'

The researchers polled a random sample of nearly 2,000 high-school science teachers across the U.S. in 2007. Of the 939 who responded, 2 percent said they did not cover evolution at all, with the majority spending between 3 and 10 classroom hours on the subject. However, a quarter of the teachers also reported spending at least some time teaching about creationism or intelligent design. Of these, 48 percent -- about 12.5 percent of the total survey -- said they taught it as a "valid, scientific alternative to Darwinian explanations for the origin of species".

Science teaching experts say they are not surprised to find such a large number of science teachers advocating creationism. "It seems a bit high, but I am not shocked by it," says Linda Froschauer, past president of the National Science Teachers Association based in Arlington, Virginia. "We do know there's a problem out there, and this gives more credibility to the issue."

Better Training

When Berkman's team asked about the teachers' personal beliefs, about the same number, 16 percent of the total, said they believed human beings had been created by God within the last 10,000 years. Teachers who subscribed to these young-Earth creationist views, perhaps not surprisingly, spent 35 percent fewer hours teaching evolution than other teachers, the survey revealed.

The survey also showed that teachers who had taken more science courses themselves -- and especially those who had taken a course in evolutionary biology -- devoted more class time to evolution than teachers with weaker science backgrounds. This may be because better-prepared teachers are more confident in dealing with students' questions about a sensitive subject, says Berkman, who notes that requiring all science teachers to take a course in evolutionary biology could have a big impact on the teaching of evolution in the schools.

Copyright © 2008 ABC News Internet Ventures

The lost art of the letter
Physics World: January 2007

The Internet is affecting not only how scientists communicate, but also how future science historians will have to work, says Robert P Crease

Until quite recently, letters were the most common way – and often the only way – for scientists to communicate informally with each other. It is not surprising therefore that science historians have long relied on letters as invaluable sources of information. A dramatic illustration concerns the now-famous meeting between Werner Heisenberg and Niels Bohr in Nazi-occupied Denmark in September 1941 during which the two physicists, talking in private, sought to eke out the other's view on progress towards a nuclear bomb. At first, the principal account of the mysterious visit came from a letter that Heisenberg sent in 1955 to the German science writer Robert Jungk. But among Bohr's papers were several drafts of letters that Bohr wrote but never sent to Heisenberg after reading the latter's account of the meeting. In 2002, when the Bohr family made the drafts public, the letters served as a corrective to Heisenberg's version, showing it to be deceitful and self-serving.

Roles of letters

Now that e-mail has replaced letter writing as the principal means of informal communication, one has to feel sorry for future science historians, who will be unable to use letters and telegrams to establish facts and gauge reactions to events. In addition to the Copenhagen episode, another example of the role of letters is Stillman Drake's startling conclusion, based on a careful reading of Galileo's correspondence, that the Leaning Tower event actually happened. And of all the reactions to the discovery of parity violation in 1957, the simplest and most direct expression of shock came from Robert Oppenheimer. After receiving a telegram from Chen Ning Yang with the news, Oppenheimer cabled back: "Walked through door."

Letters are also useful to historians because the character of scientists can often be revealed more clearly in informal communications than in official documents. Catherine Westfall, who has composed histories of both the Fermilab and Argonne national laboratories, likes to point out that letters often reveal leadership styles in striking ways. "[Former Fermilab director] Robert R Wilson knew he was making history and was ironically self-conscious," she once told me. "Leon Lederman [another Fermilab director] told jokes, [while former Argonne director] Hermann Grunder wrote letters that were really never-ending to-do lists."

Historians also use letters to reconstruct thought processes. We could not hope to understand the development of quantum mechanics, for instance, without studying the vigorous exchanges of letters between the likes of Bohr, Dirac, Heisenberg, Pauli and others as they thrashed out the theory in the 1920s. Indeed, the historian David Cassidy decided to write his biography of Heisenberg only after accompanying the physicist's widow to her attic and seeing her drag out a trunk of Heisenberg's personal letters, adding that he could not have completed the biography without them. Cassidy also said that the way to understand Heisenberg's behaviour during the Third Reich is to study his nearly weekly letters to his mother.

Internet impact

Historians at the American Institute of Physics (AIP), who are working on a project to document the history of physics in industry, have encountered hints of how the Internet and computers are transforming scientific communication.

E-mail is, of course, cheaper and encourages quicker thought, and it introduces a peculiar blend of the personal and professional. The AIP historians have also detected a decline in the use of lab notebooks, finding that data are often stored directly into computer files. Finally, they have noted the influence of PowerPoint, which can stultify scientific discussion and make it less free-wheeling; information also tends to be dumbed down when scientists submit PowerPoint presentations in place of formal reports.

Generally, though, these new communications techniques are good for scientists, encouraging rapid communication and stripping out hierarchies. But for historians, they are a mixed blessing. It is not just that searching through a hard disk or database is less romantic than poring over a dusty box of old letters in an archive. Nor is it that the information in e-mails differs in kind from that in letters. Far more worrying is the question of whether e-mail and other electronic data will be preserved at all.

One can lose letters, of course, a classic case being much of Planck's correspondence thanks to an Allied bomb in the Second World War. But the challenges of electronic preservation are more extensive and immediate. As AIP historian Spencer Weart notes: "We have paper from 2000 BC, but we can't read the first e-mail ever sent. We have the data, and the magnetic tape – but the format is lost." Weart is fond of quoting RAND researcher Jeff Rothenberg's remark that "it is only slightly facetious to say that digital information lasts forever – or five years, whichever comes first", meaning that information lasts only if regularly migrated to another format.

This problem has inspired various programmes to foster the preservation of electronic documentation. One is the Persistent Archives Testbed Project – a collaboration between several US institutions to develop a tool to archive electronic data (slac.stanford.edu/history/projects.shtml). Another is the Dibner–Sloan History of Recent Science and Technology Project (authors.library.caltech.edu/5456) that seeks not only to digitally archive important documents, but also to enlist the scientists involved to put these in a historical context.

The critical point

Technology, from pencils to computers, has transformed not only the nature and content of communication, but also the practices that rely on it. Electronic communication is changing not only science, but also science history. Historians of the future will have to rely on other kinds of data than their precursors, and tell the story of science differently. There is no going back, as is illustrated once again by the Bohr–Heisenberg episode. Had the Web existed when Bohr wrote his invaluable draft letters to Heisenberg, his correspondence may well have not been preserved. Yet when the Bohr family decided to make the drafts publicly available, where did they put the material? On the Web.

About the author

Robert P Crease is chairman of the Department of Philosophy, State University of New York at Stony Brook and historian at the Brookhaven National Laboratory, e-mail rcrease@notes.cc.sunysb.edu

Daniel Dennett's Darwinian Mind: An Interview with a 'Dangerous' Man

by Chris Floyd

The outspoken philosopher of science distills his rigorous conceptions of consciousness, and aims withering fire at the dialogue between science and religion.

In matters of the mind—the exploration of consciousness, its correlation with the body, its evolutionary foundations, and the possibilities of its creation through computer technology—few voices today speak as boldly as that of philosopher Daniel Dennett. His best-selling works—among them Consciousness Explained and Darwin’s Dangerous Idea—have provoked fierce debates with their rigorous arguments, eloquent polemic and witty, no-holds-barred approach to intellectual combat. He is often ranked alongside Richard Dawkins as one of the most powerful—and, in some circles, feared—proponents of thorough-going Darwinism.

Dennett has famously called Darwinism a "universal acid," cutting through every aspect of science, culture, religion, art and human thought. "The question is," he writes in Darwin’s Dangerous Idea, "what does it leave behind? I have tried to show that once it passes through everything, we are left with stronger, sounder versions of our most important ideas. Some of the traditional details perish, and some of these are losses to be regretted, but...what remains is more than enough to build on."

Consciousness has arisen from the unwilled, unordained algorithmic processes of natural selection, says Dennett, whose work delivers a strong, extensive attack on the "argument from design" or the "anthropic principle." But a world without a Creator or an "Ultimate Meaning" is not a world without creation or meaning, he insists. When viewed through the solvent of Darwinism, he writes, "the ‘miracles’ of life and consciousness turn out to be even better than we imagined back when we were sure they were inexplicable."

Dennett’s prominence does not rest solely on his high public profile in the scientific controversies of our day; it is also based on a large body of academic work dealing with various aspects of the mind, stretching back almost 40 years. Dennett has long been associated with Tufts University, where he is now Distinguished Arts and Sciences Professor and director of the Center for Cognitive Studies. Boston-born, Oxford-educated, he now divides his time between North Andover, Massachusetts, and his farm in Maine, where he grows hay and blueberries, and makes cider wine. In this exclusive interview with Science & Spirit, Dennett talks about his ideas on consciousness, evolution, free will, and the "slowly eroding domain" of religion.

Science & Spirit: Can you give us an overview of your ideas on consciousness? What is it? Where does it come from? Where might it be going?

Dennett: The problem I have answering your question is that my views on consciousness are initially very counterintuitive, and hence all too easy to misinterpret, so any short summary is bound to be misleading. Those whose curiosity is piqued by what I say here are beseeched to consult the long version carefully. Aside from my books, there are dozens of articles available free on my website, at www.ase.tufts.edu/cogstud. With that caveat behind us (and convinced that in spite of it, some people will leap on what I say here and confidently ride off with a caricature), I claim that consciousness is not some extra glow or aura or "quale" caused by the activities made possible by the functional organization of the mature cortex; consciousness is those various activities. One is conscious of those contents whose representations briefly monopolize certain cortical resources, in competition with many other representations. The losers—lacking "political clout" in this competition—quickly fade leaving few if any traces, and that’s the only difference between being a conscious content and being an unconscious content.

There is no separate medium in the brain, where a content can "appear" and thus be guaranteed a shot at consciousness. Consciousness is not like television—it is like fame. One’s "access" to these representations is not a matter of perceiving them with some further inner sensory apparatus; one’s access is simply a matter of their being influential when they are. So consciousness is fame in the brain, or cerebral celebrity. That entails, of course, that those who claim they can imagine a being that has all these competitive activities, all the functional benefits and incidental features of such activities, in the cortex but is not conscious are simply mistaken. They can no more imagine this coherently than they can imagine a being that has all the metabolic, reproductive, and self-regulatory powers of a living thing but is not alive. There is no privileged center, no soul, no place where it all comes together—aside from the brain itself. Actually, Aristotle’s concept of a soul is not bad—the "vegetative soul" of a plant is not a thing somewhere in the plant; it is simply its homeostatic organization, the proper functioning of its various systems, maintaining the plant’s life. A conscious human soul is the same sort of phenomenon, not a thing, but a way of being organized and maintaining that organization. Parts of that organization are more persistent, and play more salient (and hence reportable) roles than others, but the boundaries between them—like the threshold of human fame—are far from sharp.

S&S: What are the implications of all this for the notion of free will and moral choice?

Dennett: The implications of all this for the notion of free will are many. I have come to realize over the years that the hidden agenda for most people concerned about consciousness and the brain (and evolution, and artificial intelligence) is a worry that unless there is a bit of us that is somehow different, and mysteriously insulated from the material world, we can’t have free will—and then life will have no meaning. That is an understandable mistake. My 1984 book, Elbow Room: the Varieties of Free Will Worth Wanting, set out to expose this mistake in all its forms and show how what really matters in free will is handsomely preserved in my vision of how the brain works. I am returning to this subject in my next book, with a more detailed theory that takes advantage of the tremendous advances of outlook in the last 15 years.

S&S: What then of religion, or, more specifically, of the relationship between religion and science? Stephen Jay Gould speaks of "Non-Overlapping Magisteria," where the two realms of knowledge—or inquiry—stay within their own spheres, operating with mutual respect but maintaining a strict policy of non-interference. Is this possible, in your view? Is it even desirable?

Dennett: The problem with any proposed detente in which science and religion are ceded separate bailiwicks or "magisteria" is that, as some wag has put it, this amounts to rendering unto Caesar that which is Caesar’s and unto God that which Caesar says God can have. The most recent attempt, by Gould, has not found much favor among the religious precisely because he proposes to leave them so little. Of course, I’m certainly not suggesting that he should have left them more.

There are no factual assertions that religion can reasonably claim as its own, off limits to science. Many who readily grant this have not considered its implications. It means, for instance, that there are no factual assertions about the origin of the universe or its future trajectory, or about historical events (floods, the parting of seas, burning bushes, etc.), about the goal or purpose of life, or about the existence of an afterlife and so on, that are off limits to science. After all, assertions about the purpose or function of organs, the lack of purpose or function of, say, pebbles or galaxies, and assertions about the physical impossibility of psychokinesis, clairvoyance, poltergeists, trance channeling, etc. are all within the purview of science; so are the parallel assertions that strike closer to the traditionally exempt dogmas of long-established religions. You can’t consistently accept that expert scientific testimony can convict a charlatan of faking miracle cures and then deny that the same testimony counts just as conclusively—"beyond a reasonable doubt"—against any factual claims of violations of physical law to be found in the Bible or other religious texts or traditions.

What does that leave for religion to talk about? Moral injunctions and declarations of love (and hate, unfortunately), and other ceremonial speech acts. The moral codes of all the major religions are a treasury of ethical wisdom, agreeing on core precepts, and disagreeing on others that are intuitively less compelling, both to those who honor them and those who don’t.
The very fact that we agree that there are moral limits that trump any claim of religious freedom—we wouldn’t accept a religion that engaged in human sacrifice or slavery, for instance—shows that we do not cede to religion, to any religion, the final authority on moral injunctions. Centuries of ethical research and reflection, by philosophers, political theorists, economists, and other secular thinkers have not
yet achieved a consensus on any Grand Unified Theory of ethics, but there is a broad, stable consensus on how to conduct such an inquiry, how to resolve ethical quandaries, and how to deal with as-yet unresolved differences. Religion plays a major role as a source of possible injunctions and precepts, and as a rallying point for public appeal and organization, but it does not set the ground rules of ethical agreement and disagreement, and hence cannot claim ethics or morality as its particular province. That leaves ceremonial speech acts as religion’s surviving domain. These play a huge role in stabilizing the attitudes and policies of those who participate in them, but the trouble is that ceremony without power does not appear to be a stable arrangement—and appearances here are all important. Once a monarch is stripped of all political power, as in Great Britain, the traditions and trappings tend to lose some of their psychological force, so that their sole surviving function—focusing the solidarity of the citizenry—is somewhat undercut. Whether or not to abolish the monarchy becomes an ever less momentous decision, rather like whether or not to celebrate a national holiday always on a Monday, instead of on its traditional calendar date. Recognizing this threat of erosion, religious people will seldom acknowledge in public that their God has been reduced to something like a figurehead, a mere constitutional monarch, even while their practices and decisions presuppose that this is so. It is seldom remarked (though often observed in private, I daresay) that many, many people who profess belief in God do not really act the way people who believed in God would act; they act the way people would act who believed in believing in God. That is, they manifestly think that believing in God is—would be—a good thing, a state of mind to be encouraged, by example if possible, so they defend belief-in-God with whatever rhetorical and political tools they can muster. They ask for God’s help, but do not risk anything on receiving it, for instance. They thank God for their blessings, but, following the principle that God helps those who help themselves, they proceed with the major decisions of their lives as if they were going it alone. Those few individuals who clearly do act as if they believed in God, really believed in God, are in striking contrast: the Christian Scientists who opt for divine intervention over medical attention, for instance, or those who give all their goods to one church or another in expectation of the Apocalypse, or those who eagerly seek martyrdom. Not wanting the contrast to be so stark, the believers in belief-in-God respond with the doctrine that it is a sin (or at least a doctrinal error) to count on God’s existence to have any particular effect. This has the nice effect of making the behavior of a believer in belief-in-God and the behavior of a believer in God so similar as to be all but indistinguishable. Once nothing follows from a belief in God that doesn’t equally follow from the presumably weaker creed that it would be good if I believed in God—a doctrine that is readily available to the atheist, after all—religion has been so laundered of content that it is quite possibly consistent with science. 
Peter de Vries, a genuine believer in God and probably the funniest writer on religion ever, has his hyper-liberal Reverend Mackerel (in his book The Mackerel Plaza) preach the following line: "It is the final proof of God’s omnipotence that he need not exist in order to save us." The Reverend Mackerel’s God can co-exist peacefully with science. So can Santa Claus, who need not exist in order to make our yuletide season more jolly.

Looking Back at the End of Science
More than a decade after its original publication, does the prophecy of a controversial book still ring true?
By John Horgan, March 1, 2008

When I began writing about science twenty-six years ago, I believed in what Vannevar Bush, founder of the National Science Foundation, called "the endless frontier" of science. I started questioning that myth in the late 1980s, when physicists like Stephen Hawking declared they were on the verge of a "final theory" that would solve all their field's outstanding mysteries. That astonishing claim provoked me to wonder whether not just physics but science as a whole might end. After years of reading and talking to scientists about this topic, I concluded that the era of great scientific discovery might already be over, in the following sense: Scientists will never again achieve insights into nature as profound as the atomic theory of matter, quantum mechanics, relativity, the big bang, evolution by natural selection, and DNA-based genetics. Scientists will extend, refine, and apply this knowledge, but there will be no more great revolutions or revelations.

That was the basic message of The End of Science, published in 1996. The science establishment was not amused. My book was denounced by dozens of Nobel laureates, the White House science advisor, the heads of the Human Genome Project, the British minister of science, and the editors of Nature, Science, and Scientific American, where I was working at the time.

What Nineteenth-century Physicists Really Thought

Most critics merely indulged in chest-thumping declarations of faith in scientific progress, but some have offered reasonable objections. By far the most common is some variation of, "Oh come on, that's what physicists thought at the end of the nineteenth century." Actually, most physicists then were wrestling with profound questions, such as whether atoms really exist. The historian of science Stephen Brush has called the alleged Victorian calm in physics "a myth." Moreover, even if some scientists wrongly predicted science was ending in the past, why does that mean all future predictions must be wrong?

Science itself tells us that there are limits to knowledge. Relativity theory prohibits travel or communication faster than light. Quantum mechanics and chaos theory constrain the precision with which we can make predictions. Evolutionary biology reminds us that we are animals, shaped by natural selection not for discovering deep truths of nature but for breeding. The greatest barrier to future progress in science is its past success. Postmodern philosophers hate this comparison, but scientific discovery resembles the exploration of the Earth. We are now unlikely to discover something truly astonishing like the lost continent of Atlantis or dinosaurs on a secluded island. In the same way, scientists are unlikely to discover anything surpassing natural selection, quantum mechanics, or the big bang.


Unanswerable Questions and Ironic Science

Another common objection to my argument is that science still poses many profound questions. True, but some of these questions may be unanswerable. The big bang theory poses a very obvious and deep question: Why did the big bang happen in the first place, and what, if anything, preceded it? The answer is that we don't know, and we will never know, because the origin of the universe is too distant from us in space and time. Scientists' attempts to solve these mysteries often take the form of what I call ironic science, unconfirmable speculation that resembles philosophy or theology rather than genuine science.

My favorite example of ironic science is string theory, which for more than twenty years has been the leading contender for a unified theory of physics. Unfortunately, strings are so small that you'd need an accelerator one thousand light years around to detect them. String theory also comes in so many versions that it can accommodate virtually any data. Critics call this the Alice's Restaurant problem. That's a reference to the Arlo Guthrie verse: "You can get anything you want at Alice's Restaurant." But of course a theory that predicts everything really predicts nothing.

Taking on Chaoplexity

The physicist and Nobel laureate Robert Laughlin grants that we might have reached "the end of reductionism," which identifies the basic particles and forces underpinning the physical realm. Nevertheless, he insists that scientists can discover profound new laws by investigating complex, emergent phenomena, which cannot be understood in terms of their individual components. Laughlin is merely recycling rhetoric from the fields of chaos and complexity, which are so similar that I lump them under a single term, chaoplexity. Chaoplexologists argue that advances in computation and mathematics will soon make fields like economics, ecology, and climatology as rigorous and predictive as nuclear physics.

The chaoplexologists have failed to deliver on any of their promises. One reason is the notorious butterfly effect. To predict the course of a chaotic system, such as a climate, ecology, or economy, you must determine its initial conditions with infinite precision, which is of course impossible. The butterfly effect limits both prediction and explanation, and it suggests that many of the chaoplexologists' grand goals cannot be achieved. (A short numerical sketch of this point appears below.)

What about Applied Science?

The physicist Michio Kaku wrote recently that "the foundations of science are largely over" because we have discovered the basic laws ruling physical reality. But Kaku insists that we can manipulate these laws to create an endless supply of new technologies, medicines, and other applications. In other words, pure science might be over, but applied science is just beginning. Kaku compares science to chess. We've just learned the rules, and now we're going to become grand masters.

Applied science obviously has much further to go, and it's hard to know precisely where it might end. That's why The End of Science focused on pure science. But I doubt the predictions of techno-optimists like Ray Kurzweil that advances in genetics, nanotech, and other fields will soon make us immortal. I'd have more confidence that scientists could solve senescence if they'd had more success with cancer. After Richard Nixon declared a "war on cancer" in 1971, cancer mortality rates actually rose for two decades before declining slightly over the past fifteen years, mostly because of a decline in smoking. I hope someday researchers will find a cure that renders cancer as obsolete as smallpox. But given the record of cancer research so far, isn't it a bit premature to talk about immortality?
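The butterfly effect Horgan invokes above can be made concrete with a short, self-contained sketch. The code below is not part of Horgan's article; it uses the logistic map, a textbook chaotic system, to show how an initial measurement error of one part in ten billion grows until two forecasts of the "same" system bear no resemblance to each other.

```python
# A minimal illustration of sensitive dependence on initial conditions,
# using the logistic map x -> r * x * (1 - x) with r = 4.0, a standard
# chaotic example (not the climate or economic models Horgan discusses).

def logistic_trajectory(x0, r=4.0, steps=60):
    """Iterate the logistic map starting from x0 and return the trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

true_run = logistic_trajectory(0.2)             # the "real" system
forecast = logistic_trajectory(0.2 + 1e-10)     # initial condition off by 1e-10

for step in (0, 10, 20, 30, 40, 50):
    gap = abs(true_run[step] - forecast[step])
    print(f"step {step:2d}: difference between runs = {gap:.2e}")

# The gap grows from 1e-10 to order 1 within a few dozen iterations, so a
# long-range "prediction" would require impossibly precise initial data.
```

The same qualitative behaviour is what defeats long-range prediction in the far messier systems, such as climates, ecologies, and economies, that the chaoplexologists hoped to tame.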


Brain Science Is Just Beginning!

The British biologist Lewis Wolpert once told me that my argument is "absolute rubbish." He was particularly upset by a chapter entitled "The End of Neuroscience." Neuroscience, he declared, is "just beginning!" Actually, neuroscience has deep roots. Galvani showed two centuries ago that nerves emit and respond to electric current, and a century ago Golgi, Cajal, and others began unraveling the structure and function of neurons. The claim that neuroscience is "just beginning" reflects not the field's actual age but its output. Although neuroscientists have acquired increasingly powerful tools for probing and modeling the brain, they have failed to produce a compelling theory of the mind. Nor have researchers winnowed out pre-existing theories. Theories of the mind never really die. They just go in and out of fashion. Some prominent neuroscientists, such as the Nobel laureates Eric Kandel and Gerald Edelman, still think the best theory of the mind is Freudian psychoanalysis.

Our best hope for a breakthrough is to crack the neural code, the set of rules or algorithms that transforms electrical pulses emitted by brain cells into perceptions, memories, and decisions. But recent research suggests that each brain may operate according to many different neural codes, which keep changing in response to new experiences. Some leading neuroscientists, such as Christof Koch, worry that the neural code might be too complex to fully decipher.

Will the End of Science Be Self-fulfilling?

Some critics worry that my predictions might become self-fulfilling by discouraging young people from becoming scientists. To be honest, I worry about this problem too, especially now that I teach at a science-oriented school. I tell my students that, even if I'm right that the era of profound discoveries has ended, there is still much meaningful work to do, especially in applied science. They can develop better treatments for AIDS or cancer or schizophrenia. They can invent cleaner, cheaper sources of energy, or devise computer models that give us a better understanding of global warming. They can even help us understand why we fight wars and how we can avoid them.

I also urge my students to question all big, ambitious theories, including mine. The only way to find out how far science can go is to keep pushing against its limits.

JOHN HORGAN is a science journalist and Director of the Center for Science Writings at the Stevens Institute of Technology, Hoboken, New Jersey.


Tuesday 15 January 2008

The tyranny of science

Scientific evidence is being repackaged as ‘The Science’: a superstitious dogma used to hector us on everything from sex to saving the planet.

Frank Furedi

Each month, Frank Furedi picks apart a really bad idea. This month he challenges the moralisation of science, and the transformation of scientific evidence into a new superstitious dogma.

Scientists at one of Rome’s most prestigious universities, La Sapienza, are protesting against a planned visit by Pope Benedict XVI this Thursday. The Pope is due officially to open the university’s academic year, but some of the professors of science at the university are not happy. In a letter to the university’s rector, 67 lecturers and professors said it would be ‘incongruous’ for the Pope to visit given his earlier comments on Galileo; while he was still Cardinal Joseph Ratzinger, the Pope said that the Catholic Church’s trial of the great Italian astronomer was ‘reasonable and just’. So, university staff want to block a visit by a religious leader in the name of defending scientific truth and integrity.

This is a striking story: today, it frequently seems as if scientific authority is replacing religious and moral authority, and in the process being transformed into a dogma. At first sight, it appears that science has the last word on all the important questions of our time. Science is no longer confined to the laboratory. Parents are advised to adopt this or that child-rearing technique on the grounds that ‘the research’ has shown what is best for kids. Scientific studies are frequently used to instruct people on how to conduct their relationships and family life, and on what food they should eat, how much alcohol they should drink, how frequently they can expose their skin to the sun, and even how they should have sex. Virtually every aspect of human life is discussed in scientific terms, and justified with reference to a piece of research or by appealing to the judgment of experts.

Of course, as in the past, science still invites criticism and scepticism. Indeed, its authority is continually scrutinised and subjected to a deeply moralistic anti-scientific critique. Scientific experimentation and innovation - for example, in the areas of stem cell research, cloning and genetic modification - are stigmatised as ‘immoral’ and ‘dangerous’. Moreover, many wonder if there are hidden agendas or interests behind scientific studies, especially those that are used to justify moral or political campaigns. Many people understand that last year’s scientific advice is often contradicted by new findings further down the line. Others are anxious about the rapid pace of scientific advance: they worry about the potential for destruction that might be unleashed by developments in genetic manipulation or nanotechnology.


Many greens blame science and technology for contributing to environmental degradation and to global warming. Indeed, one of the puzzling features of our time is this: the relentless expansion of the authority of science is paralleled by a sense of distrust about science. Anyone old enough to recall the public’s enthusiasm for scientific breakthroughs in the 1950s and 60s will be struck by the more begrudging and even fearful acceptance of science today. The attitude of Western society towards science is intensely contradictory. In the absence of political vision and direction, society continually hides behind scientific authority - but at the same time it does not quite believe that science has the answers, and it worries about the potential rotten fruits of scientific discovery. Yet whatever misgivings people have about science, its authority is unrivalled in the current period.

The formidable influence of scientific authority can be seen in the way that environmentalists now rely on science to back up their arguments. Not long ago, in the 1970s and 80s, leading environmentalists insisted that science was undemocratic, that it was responsible for many of the problems facing the planet. Now, in public at least, their hostility towards science has given way to their embrace and endorsement of science. Today, the environmental lobby depends on the legitimation provided by scientific evidence and expertise. In their public performances, environmentalists frequently use the science in a dogmatic fashion. ‘The scientists have spoken’, says one British-based campaign group, in an updated version of the religious phrase: ‘This is the Word of the Lord.’ ‘This is what the science says we must do’, many greens claim, before adding that the debate about global warming is ‘finished’. This week, David King, the former chief scientific adviser to the UK government, caused a stink by criticising extreme green ‘Luddites’ who are ‘hurting’ the environmentalist cause. Yet when science is politicised, as it has been under the likes of King, who once claimed that ‘the science shows’ that global warming is a bigger threat than terrorism, then it can quite quickly and inexorably be converted into dogma, superstition and prejudice (1).

It is the broader politicisation of science that nurtures today’s dogmatic green outlook. Today, religion and political ideologies no longer inspire significant sections of the public. Politicians find it difficult to justify their work and outlook in the vocabulary of morality. In the Anglo-American world, officials now promote policies on the grounds that they are ‘evidence based’ rather than because they are ‘right’ or ‘good’. In policymaking circles, the language of ‘right’ and ‘wrong’ has been displaced by the phrase: ‘The research shows…’ Moral judgments are often edged out even from the most sensitive areas of life. For example, experts use the language of medicine rather than morality to tell young teenagers that having sex is not so much ‘bad’ as bad for their emotional health.

So pervasive is the crisis of belief and morality that even religious institutions are affected by it. Fundamentalists no longer simply rely on Biblical texts to affirm their belief in the Creation; today, the invention of ‘creation science’ by Christian fundamentalists in the US is symptomatic of the trend to supplement traditional belief with scientific authority.
Likewise, the anti-abortion movement no longer restricts itself to morally denouncing a medical procedure which they consider to be evil. Now they increasingly rely on scientific and technical expertise to advance their cause. They argue that having an abortion is bad for a woman’s health and is likely to cause post-abortion trauma. The question ‘when does life begin?’ was once a moral issue, bound up in competing views of morality, rights and human consciousness. Today anti-abortion activists appeal to medical research and use a narrowly scientific definition of ‘when life begins’: they argue that because ‘the evidence’ shows that fetuses can survive at 24 weeks, then this demonstrates the unquestionable beginning to life (2).

Despite its formidable intellectual powers, science can only provide a provisional solution to the contemporary crisis of belief. Historically, science emerged through a struggle with religious dogma. A belief in the power of science to discover how the world works should not be taken to mean that science itself is a belief. On the contrary, science depends on an open-ended orientation towards experimentation and the testing of ideas. Indeed, science is an inherently sceptical enterprise, since it respects no authority other than evidence. As Thomas Henry Huxley once declared: ‘The improver of natural knowledge absolutely refuses to acknowledge authority as such.’ ‘[S]cepticism is the highest of duties’, said Huxley; ‘blind faith the unpardonable sin’. That is why Britain’s oldest and most respectable scientific institution, the Royal Society, was founded on the motto: ‘On the word of no one.’ The message conveyed in this motto is clear: knowledge about the material world should be based on evidence rather than authority. The critical spirit embodied in that motto is frequently violated today by the growing tendency to treat science as a belief that provides an unquestionable account of the Truth. Indeed, it is striking that the Royal Society recently dropped the phrase ‘On the word of no one’ from its website, while its former president, Lord May, prefers to use the motto ‘Respect the facts’ these days (see The Royal Society’s motto-morphosis, by Ben Pile and Stuart Blackman).

Many religious leaders, politicians and environmentalists have little interest in engaging in the voyage of discovery through scientific experimentation. Instead they often appear to be in the business of politicising science, or more accurately, moralising it. For example, Al Gore has claimed that scientific evidence offers (inconvenient) Truths. Such science has more in common with the art of divination than the process of experimentation. That is why science is said to have a fixed and unyielding, and thus unquestionable, quality. Frequently, Gore and others will prefix the term science with the definite article, ‘the’. So Sir David Read, vice-president of the Royal Society, recently said: ‘The science very clearly points towards the need for us all - nations, businesses and individuals - to do as much as possible, as soon as possible, to avoid the worst consequences of climate change.’ Unlike ‘science’, this new term - ‘The Science’ - is a deeply moralised and politicised category. Today, those who claim to wield the authority of The Science are really demanding unquestioning submission.

The slippage between a scientific fact and moral exhortation is accomplished with remarkable ease in a world where people lack the confidence to speak in the language of right and wrong. But turning science into an arbiter of policy and behaviour only serves to confuse matters. Science can provide facts about the way the world works, but it cannot say very much about what it all means and what we should do about it. Yes, the search for truth requires scientific experimentation and the discovery of new facts; but it also demands answers about the meaning of those facts, and those answers can only be clarified through moral, philosophical investigation and debate. If science is turned into a moralising project, its ability to develop human knowledge will be compromised.
It will also distract people from developing a properly moral understanding of the problems that face humanity in the twenty-first century. Those who insist on treating science as a new form of revealed truth should remember Pascal’s words: ‘We know the truth, not only by reason, but also by the heart.’


Frank Furedi’s Invitation To Terrorism: The Expanding Empire of The Unknown has just been published by Continuum Press.


(1) The war on hot air, Guardian, 12 January 2008
(2) See Abortion: stop hiding behind The Science, by Jennie Bristow, 22 October 2007
Reprinted from: http://www.spiked-online.com/index.php?/site/article/4275/


The freedom to say 'no'
Why aren't there more women in science and engineering? Controversial new research suggests: They just aren't interested.
By Elaine McArdle | May 18, 2008

WHEN IT COMES to the huge and persistent gender gap in science and technology jobs, the finger of blame has pointed in many directions: sexist companies, boy-friendly science and math classes, differences in aptitude. Women make up almost half of today's workforce, yet hold just a fraction of the jobs in certain high-earning, high-qualification fields. They constitute 20 percent of the nation's engineers, fewer than one-third of chemists, and only about a quarter of computer and math professionals. Over the past decade and more, scores of conferences, studies, and government hearings have been directed at understanding the gap. It has stayed in the media spotlight thanks in part to the high-profile misstep of then-Harvard president Larry Summers, whose loose comment at a Harvard conference on the topic in 2005 ultimately cost him his job.

Now two new studies by economists and social scientists have reached a perhaps startling conclusion: An important part of the explanation for the gender gap, they are finding, is the preferences of women themselves. When it comes to certain math- and science-related jobs, substantial numbers of women - highly qualified for the work - stay out of those careers because they would simply rather do something else. One study of information-technology workers found that women's own preferences are the single most important factor in that field's dramatic gender imbalance. Another study followed 5,000 mathematically gifted students and found that qualified women are significantly more likely to avoid physics and the other "hard" sciences in favor of work in medicine and biosciences.

It's important to note that these findings involve averages and do not apply to all women or men; indeed, there is wide variety within each gender. The researchers are not suggesting that sexism and cultural pressures on women don't play a role, and they don't yet know why women choose the way they do. One forthcoming paper in the Harvard Business Review, for instance, found that women often leave technical jobs because of rampant sexism in the workplace. But if these researchers are right, then a certain amount of gender gap might be a natural artifact of a free society, where men and women finally can forge their own vocational paths. And understanding how individual choices shape the gender balance of some of the most important, financially rewarding careers will be critical in fashioning effective solutions for a problem that has vexed people for more than a generation.

A few years ago, Joshua Rosenbloom, an economist at the University of Kansas, became intrigued by a new campaign by the National Science Foundation to root out what it saw as pervasive gender discrimination in science and engineering. The agency was spending $19 million a year to encourage mentoring programs, gender-bias workshops, and cooperative work environments. Rosenbloom had no quarrel with the goal of gender equity. But as he saw it, the federal government was spending all that money without any idea what would work, because there was no solid data on what caused the disparity between men and women in scientific fields. To help answer the question, Rosenbloom surveyed hundreds of professionals in information technology, a career in which women are significantly underrepresented.
He also surveyed hundreds in comparable careers more evenly balanced between men and women. The study examined work and family history, educational background, and vocational interests. The results were striking. The lower numbers of women in IT careers weren't explained by work-family pressures, since the study found computer careers made no greater time demands than those in the control group. Ability wasn't the reason, since the women in both groups had substantial math backgrounds. There was, however, a significant difference in one area: what the men and women valued in their work. Rosenbloom and his colleagues used a standard personality-inventory test to measure people's preferences for different kinds of work. In general, Rosenbloom's study found, men and women who enjoyed the explicit manipulation of tools or machines were more likely to choose IT careers - and it was mostly men who scored high in this area. Meanwhile, people who enjoyed working with others were less likely to choose IT careers. Women, on average, were more likely to score high in this arena.


Personal preference, Rosenbloom and his group concluded, was the single largest determinative factor in whether women went into IT. They calculated that preference accounted for about two-thirds of the gender imbalance in the field. The study was published in November in the Journal of Economic Psychology. It may seem like a cliche - or rank sexism - to say women like to work with people, and men prefer to work with things. Rosenbloom acknowledges that, but says that whether due to socialization or "more basic differences," the genders on average demonstrate different vocational interests. "It sounds like stereotypes," he said in an interview, "but these stereotypes have a germ of truth."

In the language of the social sciences, Rosenbloom found that the women were "self-selecting" out of IT careers. The concept of self-selection has long interested social scientists as an explanation for how groups sort themselves over time. Since human beings are heterogeneous, self-selection predicts that when offered a menu of options and freedom of choice, people will make diverse choices and sort themselves out in nonrandom ways. In other words, even given the same opportunities, not everybody will do the same thing - and there are measurable reasons that they will act differently from one another.

The concept of self-selection sets off alarms for many feminists. It seems to suggest that women themselves are responsible for the gender gap. It can also be an excuse for minimizing the role of social forces, including discrimination in the classroom and the workplace. But self-selection has also emerged as the chief explanation in other recent studies of gender imbalance, including a long-term survey done by two Vanderbilt researchers, Camilla Persson Benbow and David Lubinski.

Starting more than 30 years ago, the Study of Mathematically Precocious Youth began following nearly 2,000 mathematically gifted adolescents, boys and girls, tracking their education and careers in ensuing decades. (It has since been expanded to 5,000 participants, many from more recent graduating classes.) Both men and women in the study achieved advanced credentials in about the same numbers. But when it came to their career paths, there was a striking divergence. Math-precocious men were much more likely to go into engineering or physical sciences than women. Math-precocious women, by contrast, were more likely to go into careers in medicine, biological sciences, humanities, and social sciences. Both sexes scored high on the math SAT, and the data showed the women weren't discouraged from certain career paths. The survey data showed a notable disparity on one point: that men, relative to women, prefer to work with inorganic materials; women, in general, prefer to work with organic or living things. This gender disparity was apparent very early in life, and it continued to hold steady over the course of the participants' careers.

Benbow and Lubinski also found something else intriguing: Women who are mathematically gifted are more likely than men to have strong verbal abilities as well; men who excel in math, by contrast, don't do nearly as well in verbal skills. As a result, the career choices for math-precocious women are wider than for their male counterparts. They can become scientists, but can succeed just as well as lawyers or teachers. With this range of choice, their data show, highly qualified women may opt out of certain technical or scientific jobs simply because they can.
These studies looked at different slices of the working world, but agree that in a world in which men and women both have freedom of choice, they tend to choose differently. They have a provocative echo in the conclusions of Susan Pinker, a psychologist and columnist for the Toronto Globe and Mail. In her controversial new book, "The Sexual Paradox: Men, Women, and the Real Gender Gap," Pinker gathers data from the journal Science and a variety of sources that show that in countries where women have the most freedom to choose their careers, the gender divide is the most pronounced. The United States, Norway, Switzerland, Canada, and the United Kingdom, which offer women the most financial stability and legal protections in job choice, have the greatest gender split in careers. In countries with less economic opportunity, like the Philippines, Thailand, and Russia, she writes, the number of women in physics is as high as 30 to 35 percent, versus 5 percent in Canada, Japan, and Germany. "It's the opposite of what we'd expect," says Pinker. "You'd think the more family-friendly policies, and richer the economy, the more women should behave like men, but it's the opposite. I think with economic opportunity comes choices, comes freedom."

If the gender gap in many fields has its roots in women's own preferences, that raises a new line of questions, including the most obvious: Why do women make these choices? Why do they prefer different kinds of work? And what does "freedom of choice" really mean in a world that is still structured very differently for men and women?

For example, the choice to drop out of high-paying finance careers appears to be driven by the longer hours required in those jobs, says University of Chicago economist Marianne Bertrand, who studied the career tracks of the school's MBA graduates. Women who want families eventually decide to walk away from the career, at least temporarily. "I've gone from the glass ceiling to thinking, if these jobs weren't 70 hours a week, women might not need to take so much time off," she says.

Benbow and Lubinski, at Vanderbilt, found that high-achieving women often pick their careers based on the idea that they'll eventually take time off, and thus avoid fields in which that absence will exact a larger penalty. In humanities or philosophy, for instance, taking a year or two off won't affect one's skill set very much. But in quickly evolving technical fields, a similar sabbatical can be a huge career setback.

Beneath those structural questions, though, women still seem to make choices throughout their lives that are different from men's, and it is not yet clear why. Rosenbloom, the economist behind the IT study, says little research has been done on how interests are formed. "We don't know the role of mentors or experience or socialization," he says.

To some sociologists and many feminists, the focus on self-selection is a troubling distraction from bigger questions of how society pushes girls and boys into different roles. Rosalind Chait Barnett, at the Women's Studies Research Center at Brandeis, says that boys and girls are not, at root, different enough for such clear sorting to be seen as a matter of "choice." "The data is quite clear," she says. "On anything you point to, there is so much variation within each gender that you have to get rid of this idea that 'men are like this, women are like that.' "

Sorting through the various factors is extremely challenging, all the researchers agree, and the issue is as complex as the individuals making each career decision. These findings on self-selection only open new areas of inquiry. They do suggest, however, that if the hard-fought battle for gender equality has indeed brought America to a point where women have the freedom to choose their career paths, then the end result may be surprising - and an equal-opportunity workforce may look a lot less equal than some had imagined.

Elaine McArdle is a Cambridge writer. Her first book, "The Migraine Brain," coauthored with Harvard neurologist Dr. Carolyn Bernstein, will be published in September by Free Press.


AREA OF KNOWLEDGE:

Human Sciences

“Under the most rigorously controlled conditions of pressure, temperature, humidity and other variables, the organism will do exactly as it pleases.” (Anonymous)

• What are the main difficulties human scientists confront when trying to provide explanations of human behavior?
• In what ways might the beliefs and interests of human scientists influence their conclusions?
• What kinds of explanations do human scientists offer?
• How might the language used in polls, questionnaires, etc. influence the conclusions reached?



The New Science of Fear: Can It Predict Bravery at 13,500 Feet?
By Jeff Wise
Illustration by Tim Hower
Published in the August 2008 issue of Popular Mechanics.


Man, I wish I were anyplace but here.


It's 8 am on a clear, cool morning at a quiet airstrip in eastern Long Island, N.Y. The trees lining the runway are sharp as jewels against the crisp blue sky. Duncan Shaw, the jump instructor, hands me a heavy jumpsuit, a pair of gloves and a leather cap. In a few minutes we'll be riding in a Cessna 207 up to 13,500 ft.—then jumping out of it. It's a beautiful day, and I understand rationally that I have little to fear. Yet I am filled with an almost overwhelming sense of dread.


Shaw reviews the procedure for the sky dive, then we climb into the plane. As the Cessna climbs steeply I try to concentrate on my breathing, but I'm suffering through the standard checklist of fear: My heart is pounding, my mouth is dry, my stomach is churning. The plane levels off, and the door slides open to the rushing wind. The ground seems to be so very far below. I think of that sky-diving cliché, the petrified first-timer clinging white-knuckled to the edge of the door. Am I going to be like that? The question is of more than personal interest. I'm taking part in a study on how people handle acute stress, conducted by Lilianne Mujica-Parodi, director of the Laboratory for the Study of Emotion and Cognition at Stony Brook University. As I edge toward the door, I'm wearing a clutch of electrical sensors. Soon I'll be a data point—or a big red splotch on the landscape.

A few weeks before the jump I check into the research wing of Stony Brook University Hospital to undergo two days of testing so that Mujica-Parodi can get a baseline reading of my stress level. She's particularly interested in a brain region called the amygdala, part of the system that regulates the fear response. She believes that if she can understand the dynamics of the amygdala, she will have found the holy grail of stress research: a way to predict how a person will perform under alarming conditions by examining him beforehand in the calm of a laboratory. That ability would be of obvious interest to the military, which is why the Navy is backing her research. It could also prove useful in recruiting and training anyone else who might face danger in the line of duty.

After I check into the hospital, I'm outfitted with electrodes that measure my heart rate; periodically, the staff collect blood and saliva samples. On the second morning, I lie inside a functional magnetic resonance imaging (fMRI) scanner for 45 minutes and stare up at a screen that displays pictures of faces. Some are scowling; others wear neutral expressions. I don't feel any conscious emotional response—they're just faces. But it's my brain's activity that Mujica-Parodi is interested in—specifically, the interaction between the excitable, irrational amygdala and the prefrontal cortex, which gives rise to conscious thought and intentional action. "The brain works as a control-system circuit, like a thermostat in your home, with a negative feedback loop," she says. "When I show you a neutral face, the amygdala says, 'What's that, is it dangerous?' And then the inhibitory component [the prefrontal cortex] kicks in and says, 'You know what, it's not. Calm down.'" When there is a real threat, the cortex jacks up the amygdala's response.

Mujica-Parodi thinks that a big part of determining who handles acute emotional stress well lies in how the feedback loop is coupled. In a tightly coupled home-heating system, the furnace quickly turns on when the temperature drops below an optimal level and turns off again when the house is warm enough. In a loosely coupled system, the house gets much too cold before the furnace kicks in, and then it runs at full blast until the house is sweltering. (A toy simulation of this contrast appears after the article.)

My fMRI session tells Mujica-Parodi that my negative feedback loop is tightly coupled. The "furnace" of my amygdala fires up quickly, but doesn't stay on longer than it needs to. If her interpretation is correct, I should prove stress-resilient in frightening situations. In layman's terms, I'll be brave. But there's no way of knowing for sure until I jump.

Up in the Cessna, Shaw and I crab-walk to the door; I'm strapped tight to his front. The wind howls past as we put our feet over the edge.
I stick my head into the slipstream, and the cold air pushes my cheeks into wobbly pancakes. There's nothing below us but air, and all the psychological theorizing in the world does nothing to allay the sheer terror I'm feeling—or, to put it in neurological terms, it does nothing to improve my cortical dominance over my amygdala. Shaw's voice comes from behind me: "Feet under the plane … head back … ready … set … arch!" We push away, and we're falling, tumbling, my mind flooded with too much sensation to process. I glimpse the plane, high above. Within moments, we reach terminal velocity and settle into a stable horizontal position. I can see for miles. It's beautiful!

Five minutes later, my feet touch down. I go inside the on-site laboratory and take off my jumpsuit and vest and, as two white-coated technicians look on, pull electrodes from my bare torso. A burly phlebotomist steps into the room, bearing sharp needles to take blood. I don't mind. In fact, I'm elated. It's great to be alive.

Mujica-Parodi later e-mails me a couple of tables and charts, and I call her to discuss what they mean. One chart shows my heart rate as measured before, during and after the jump, as well as a corresponding figure for "sympathetic dominance." This number reflects the degree to which my amygdala–prefrontal cortex loop has cranked up the volume on my fight-or-flight circuitry, which modulates involuntary stress responses like trembling, dry mouth, rapid breathing and machine-gun heart rate.

At rest, safe and sound in my hospital room before the sky dive, my sympathetic dominance measured 4.9—about the same as that of most people participating in the study. What was more meaningful was my condition riding up in the plane before the jump, when my sympathetic dominance was rated at 5, significantly less than the average of 6.5. When I was in free fall, it spiked to 13.4, about twice the average. (In other words—aaaaargh!) Then, back on the ground, it declined from its high of 13.4 to 8.4 within 15 minutes.

"What this means is that you have good threat selectivity," Mujica-Parodi says. "You're kind of like a rubber band, in that when you go up, you come back down right away. You're conserving your sympathetic dominance for when it's actually needed."

These results, Mujica-Parodi says, mirror those of my fMRI session. It's not that I stayed cool when I was plummeting toward earth—"You were in actual danger," she says, so "a strong excitatory response was appropriate"—but that when I wasn't falling I suppressed the fear response and conserved my energy. The upshot: I might do well at keeping calm in the face of lethal
danger, as most firemen and policemen do. More important, my results seem to reinforce Mujica-Parodi's theory, which could mean that in the future recruiters for the military and law enforcement will have a way to screen applicants for the most suitable training and job assignments. Our conversation turns back to the sky dive. "Would you go again?" Mujica-Parodi asks. "I think so," I tell her. But not right now. Maybe in a few months. At the moment, I'm happy just to keep my amygdala quietly hibernating.
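The tight-versus-loose coupling that Mujica-Parodi describes with her thermostat analogy can be sketched in a few lines of code. The simulation below is not part of the article or of her research; it is a minimal toy model of a bang-bang heater, in which the width of the hysteresis band stands in for how loosely the feedback loop is coupled.

```python
# Toy model of Mujica-Parodi's thermostat analogy: a "tightly coupled"
# controller reacts to small deviations from the set point, while a
# "loosely coupled" one lets the house swing far too cold and then far
# too hot before it responds.

def run_thermostat(band, steps=200, setpoint=20.0):
    """Bang-bang heater with hysteresis `band` around the set point.
    Returns the min and max temperature reached once the cycle settles."""
    temp, heater_on = 15.0, False
    history = []
    for _ in range(steps):
        if temp < setpoint - band:
            heater_on = True
        elif temp > setpoint + band:
            heater_on = False
        temp += 0.8 if heater_on else -0.5   # heating vs. heat loss per step
        history.append(temp)
    settled = history[50:]                   # ignore the warm-up transient
    return min(settled), max(settled)

for label, band in [("tightly coupled", 0.5), ("loosely coupled", 5.0)]:
    low, high = run_thermostat(band)
    print(f"{label:16s}: temperature swings between {low:.1f} and {high:.1f} C")
```

The tightly coupled controller hovers near its set point, while the loose one overshoots in both directions: a rough analogue of a stress response that switches on too late and then stays on too long.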


AREA OF KNOWLEDGE:

History

“We are the product of the past but not the completed product.” (C. Gustavson)

• What are the implications of Henry Miller’s claim that “The history of the world is the history of a privileged few”?
• How far can we speak with certainty about anything in the past?
• Are value judgments a fault in the writing of history?
• Why study history?
• If truth is difficult to prove in history, does it follow that all versions are equally acceptable?

What history is

by Professor Alun Munslow

How do historians, at least in the anglophone West, make history? By that I mean what consequences flow from the fact that all the events and processes in 'the past' are 'turned' by the historian into that narrative we call history? The debate on this 'narrative' or 'linguistic turn' - the recognition that history is a narrative about the past written in the here and now, rather than some distanced mirror of it - has been a significant issue within the profession for several years. What are some of the consequences that flow from this view of history as a narrative about the past constructed by the historian in the present? Much of the debate on viewing history as the narrative construction of the historian is about whether this judgement distorts what history is and what historians do, and how it reflects upon the objectivity and truth-seeking nature of the exercise. As a writer of history it is my conclusion that the linguistic turn - the essential element in the postmodern challenge to a view of history founded solely on the empirical-analytical model - is no threat to the study of the past. This is not because it does not fundamentally change how we think about history - I think it can - but it offers the opportunity to redefine what we do and broaden the scope of our activities.

How do historians gain historical knowledge? It is assumed in every historical narrative that form always follows content. What this means is that the historical narrative must always be transparent in referring to what actually happened according to the evidence. As Voltaire said, and most conventionally trained historians might still agree, 'too many metaphors are hurtful...to truth, by saying more or less than the thing itself' (quoted in White 1973: 53). To get at the thing itself, objectivity is the aim and this demands the referential language of historians. With this aim and this tool we can infer the realities beneath the misleading world of appearances. To the Western modernist Enlightenment-inspired mind objectivity and the historical narrative must remain compatible. The nineteenth century European critique of that vision, particularly in the work of Hegel and Nietzsche, moved beyond how knowledge is derived, to concentrate more on how it is represented, and the effects the process of representation has upon the status and nature of our knowledge. This debate about knowing and telling within the European-American discipline of history continued into the last century in the work of many historian-philosophers: initially Benedetto Croce and R.G. Collingwood, then, among many others, Carl Hempel, Ernest Nagel, Patrick Gardiner, and later Michel Foucault, Jacques Derrida and Hayden White.

The study of the past has never been static. The practice of history has witnessed many shifts and turns in the way it is thought and undertaken. Since the 1960s, for example, the discipline of history has experienced a 'social science turn', a 'cliometric' or 'statistics turn', a 'women's history turn', a 'cultural history turn' and so on. These are not novelties that have come and gone. Each has remained a significant way for historians to reflect upon and write about change over time. But, in all this one thing has apparently not altered. This is the epistemology of history. In spite of this rich variety of methodological developments or shifts and turns of interest, the foundational way historians 'know' things about the past has been unchallenged.
Despite the use of statistics, the new themes (society, women, gender, culture) and the application of fresh concepts and theories, there remain two steady points in the historian's cosmos: empiricism and rational analysis. As the product of the European eighteenth-century Enlightenment, the empirical-analytical model has become the epistemology for undertaking the study of the past. However, since the 1960s and 1970s something has changed at this epistemological level. Doubts about the empirical-analytical as the privileged path to historical knowing have emerged. This has not happened in history alone, of course. In all the arts, humanities, social sciences, and even the physical and life sciences the question is increasingly being put, how can we be sure that empiricism and inference really do get us close to the true meaning of the past? In history how can we trust our sources - not because they are forgeries or missing, but because of the claims empiricism is forced to make about our ability not only to find the data, but also just as importantly represent their meaning accurately?

It is not an abstract or scholastic philosophical question to ask, where does meaning come from in history? Is it the past itself? Is its meaning simply ushered in by the historian? Is the historian merely the midwife to the truth of the past? Or is the historian unavoidably implicated in the creation of a meaning for the past? Does the past contain one true meaning or several? Is there one story to be discovered or several that can be legitimately generated? I think most historians today would agree on the latter analysis. The difference comes over the consequences of that implication, and what it does for truth. In other words, is it the historian who provides the truth of the past as she represents it rather than as she finds it?

This is the essence of the postmodern challenge, the turn to the narrative-linguistic and its implications. What makes this turn more significant than the others is that it demonstrates a deeper change in our views concerning the conditions under which we create historical knowledge. In other words it has challenged history not with new topics or methods as such, but by confronting the discipline's empirical-analytical foundations. The linguistic turn in history, of course, continues to rely on the empirical-analytical model, but it extends our epistemology to include its narrative-linguistic representation, the form we give to the past within our texts, and it accepts history as an essentially literary activity, one that is self-evidently authored.

The emphasis now is less on history as a process of objective discovery and report; rather, it accepts history's unavoidably fictive nature, that is, its literary constructedness. By this I mean recognising the figurative assumptions that underpin authorial activity in creating the text and which are already (in a pre-empirical sense) and necessarily brought to the historical field, often determining the selection of evidence and its most likely meaning. This is a process that is revealed by the complex analysis of authorial activity. Postmodern history, because it is a literary as much as an empirical project, recognises it cannot escape its authorship. In other words, the past is not just re-interpreted according to new evidence but also through self-conscious acts of re-writing as well. Thus it is that history and the past cannot coincide to the extent that the former, whether we like it or not, is principally a narrative about the latter. Arguably there are no original centres of meaning to be found outside the narrative-linguistic. Data in and of itself does not have given meaning.
Though empirical and analytical, postmodern history deliberately draws our attention to the conditions under which we create knowledge - in the case of history, its nature as a series of forms, or turns perhaps, of a realist literature. In a very real sense the postmodern challenge forces us to face up to the highly complex question of how we know things about the past and what we, as moral beings, do as a result. In other words, it extends the remit of history to include the historian's pre-narrative assumptions and how we translate those
assumptions figuratively as we construct our strategies of narrative explanation. Postmodern historians thus ask many fresh questions. Are facts best thought of as events under a description? Is all data ultimately textual and, if so, what are its implications? Should history be written primarily according to literary rules and, if so, what are they? What is the significant difference between literal and figurative speech in history, and how does it create historical meaning? How do we distinguish the historical referent of a discourse from its constructed, i.e., its ideological, meaning? Can history ever exist beyond discourse? And the very big question: is history what happened, or what historians tell us happened? All these have to be addressed when we do history; to ignore them is to do only half the job.

October 2001

ARMS AND THE MAN
What was Herodotus trying to tell us?
By Daniel Mendelsohn, The New Yorker, April 28, 2008

In the figure of the Persian king Xerxes, Herodotus achieved a magisterial portrait of an unstable despot, an archetype that has plagued the sleep of liberal democracies ever since.

History—the rational and methodical study of the human past—was invented by a single man just under twenty-five hundred years ago; just under twenty-five years ago, when I was starting a graduate degree in Classics, some of us could be pretty condescending about the man who invented it and (we'd joke) his penchant for flowered Hawaiian
shirts. The risible figure in question was Herodotus, known since Roman times as “the Father of History.” The sobriquet, conferred by Cicero, was intended as a compliment. Herodotus’ Histories—a chatty, dizzily digressive nine-volume account of the Persian Wars of 490 to 479 B.C., in which a wobbly coalition of squabbling Greek city-states twice repulsed the greatest expeditionary force the world had ever seen—represented the first extended prose narrative about a major historical event. (Or, indeed, about virtually anything.) And yet to us graduate students in the mid-nineteen-eighties the word “father” seemed to reflect something hopelessly parental and passé about Herodotus, and about the sepia-toned “good war” that was his subject. These were, after all, the last years of the Cold War, and the terse, skeptical manner of another Greek historian—Thucydides, who chronicled the Peloponnesian War, between Athens and Sparta, two generations later—seemed far more congenial. To be an admirer of Thucydides’ History, with its deep cynicism about political, rhetorical, and ideological hypocrisy, with its all too recognizable protagonists—a liberal yet imperialistic democracy and an authoritarian oligarchy, engaged in a war of attrition fought by proxy at the remote fringes of empire—was to advertise yourself as a hardheaded connoisseur of global Realpolitik. Herodotus, by contrast, always seemed a bit of a sucker. Whatever his desire, stated in his Preface, to pinpoint the “root cause” of the Persian Wars (the rather abstract word he uses, aiti , savors of contemporary science and philosophy), what you take away from an initial encounter with the Histories is not, to put it mildly, a strong sense of methodical rigor. With his garrulous first-person intrusions (“I have now reached a point at which I am compelled to declare an opinion that will cause offense to many people”), his notorious tendency to digress for the sake of the most abstruse detail (“And so the Athenians were the first of the Hellenes to make statues of Hermes with an erect phallus”), his apparently infinite susceptibility to the imaginative flights of tour guides in locales as distant as Egypt (“Women urinate standing up, men sitting down”), reading him was like—well, like having an embarrassing parent along on a family vacation. All you wanted to do was put some distance between yourself and him, loaded down as he was with his guidebooks, the old Brownie camera, the gimcrack souvenirs—and, of course, that flowered polyester shirt. A major theme of the Histories is the way in which time can effect surprising changes in the fortunes and reputations of empires, cities, and men; all the more appropriate, then, that Herodotus’ reputation has once again been riding very high. In the academy, his technique, once derided as haphazard, has earned newfound respect, while his popularity among ordinary readers will likely get a boost from the publication of perhaps the most densely annotated, richly illustrated, and user-friendly edition of his Histories ever to appear: “The Landmark Herodotus” (Pantheon; $45), edited by Robert B. Strassler and bristling with appendices, by a phalanx of experts, on everything from the design of Athenian warships to ancient units of liquid measure. (Readers interested in throwing a wine tasting à la grecque will be grateful to know that one amphora was equal to a hundred and forty-four kotyles.) The underlying cause—the aiti —of both the scholarly and the popular revival is worth wondering about just now. 
It seems that, since the end of the Cold War and the advent of the Internet, the moment has come, once again, for Herodotus’ dazzlingly associative style and, perhaps even more, for his subject: implacable conflict between East and West.

Modern editors, attracted by the epic war story, have been as likely as not to call the work "The Persian Wars," but Herodotus himself refers to his text simply as the publication of his historiē—his "research" or "inquiry." The (to us) familiar-looking word historiē would to Herodotus' audience have had a vaguely clinical air, coming, as it did, from the vocabulary of the newborn field of natural science. (Not coincidentally, the cradle of this scientific ferment was Ionia, a swath of Greek communities in coastal Asia Minor, just to the north of Halicarnassus, the historian's birthplace.) The word only came to mean "history" in our sense because of the impact of Herodotus' text. The Greek cities of Ionia were where Herodotus' war story began, too. These thriving settlements, which maintained close ties with their mother cities across the Aegean to the west, began, in the early sixth century B.C., to fall under the dominion of the rulers of the Asiatic kingdoms to the east; by the middle of the century, however, those kingdoms were themselves being swallowed up in the seemingly inexorable westward expansion of Persia, led by the charismatic empire builder Cyrus the Great. The Histories begins with a tale that illustrates this process of imperialist digestion—the story of Croesus, the famously wealthy king of Lydia. For Herodotus, Croesus was a satisfyingly pivotal figure, "the first barbarian known to us who subjugated and demanded tribute from some Hellenes" but who nonetheless ended up subjugated himself, blinded by his success to the dangers around him. (Before the great battle that cost him his kingdom, he had arrogantly misinterpreted a pronouncement of the Delphic oracle that should have been a warning: "If you attack Persia, you will destroy a great empire." And he did—his own.) The fable-like arc of Croesus' story, from a deceptive and short-lived happiness to a tragic fall arising from smug self-confidence, admirably serves what will turn out to be Herodotus' overarching theme: the seemingly inevitable movement from imperial hubris to catastrophic retribution. The fall of Croesus, in 547 B.C., marked the beginning of the absorption of the Ionian Greeks into the Persian empire. Half a century later, starting in 499, these Greeks began a succession of open rebellions against their Persian overlords; it was this "Ionian Revolt" that triggered what we now call the Persian Wars, the Asian invasions of the Greek mainland in 490 and 480. Some of the rebellious cities had appealed to Athens and Sparta for military aid, and Athens, at least, had responded. Herodotus tells us that the Great
King Darius was so infuriated by this that he instructed a servant to repeat to him the injunction “Master, remember the Athenians!” three times whenever he sat down to dinner. Contemporary historians see a different, less personal motive at the root of the war that was to follow: the inevitable, centrifugal logic of imperialist expansion. Darius’ campaign against the Greeks, in 490, and, after his death, that of his son Xerxes, in 480-479, constituted the largest military undertakings in history up to that point. Herodotus’ lavish descriptions of the statistic-boggling preparations—he numbers Xerxes’ fighting force at 2,317,610 men, a figure that includes infantry, marines, and camelriders—are among the most memorable passages of his, or any, history. Like all great storytellers, he takes his sweet time with the details, letting the dread momentum build as he ticks off each stage of the invasion: the gathering of the armies, their slow procession across continents, the rivers drunk dry, the astonishing feats of engineering—bridging the Hellespont, cutting channels through whole peninsulas—that more than live up to his promise, in the Preface, to describe erga th masta, “marvellous deeds.” All this, recounted in a tone of epic grandeur that self-consciously recalls Homer, suggests why most Greek cities, confronted with the approaching hordes, readily acceded to Darius’ demand for symbolic tokens of submission—“earth and water.” (In a nice twist, the defiant Athenians, a great naval power, threw the Persian emissaries into a pit, and the Spartans, a great land force, threw them down a well—earth and water, indeed.) And yet, for all their might, both Persian expeditions came to grief. The first, after a series of military and natural disasters, was defeated at the Battle of Marathon, where a fabulously outnumbered coalition of Athenians and Plataeans held the day, losing only a hundred and ninety-two men to the Persians’ sixty-four hundred. (The achievement was such that the Greeks, breaking with their tradition of taking their dead back to their cities, buried them on the battlefield and erected a grave mound over the spot. It can still be seen today.) Ten years later, Darius’ son Xerxes returned to Greece, having taken over the preparations for an even vaster invasion. Against all odds, the scrappy Greek coalition—this one including ultraconservative Sparta, usually loath to get involved in Panhellenic doings—managed to resist yet again. It is to this second, far grander conflict that the most famous Herodotean tales of the Persian Wars belong; not for nothing do the names Thermopylae and Salamis still mean something today. In particular, the heroically suicidal stand of the three hundred Spartans—who, backed by only a couple of thousand allied troops, held the pass at Thermopylae against tens of thousands of Persians, long enough for their allies to escape and regroup farther to the south—has continued to resonate. Partly, this has to do with Herodotus’ vivid description of the Greeks’ feisty insouciance, a quality that all freedom fighters like to be able to claim. On hearing that the Persians were so numerous that their arrows would “blot out the sun,” one Spartan quipped that this was good news, as it meant that the Greeks would fight in the shade. (“In the shade” is the motto of an armored division in the present-day Greek Army.) 
But the persistent appeal of such scenes, in which the outnumbered Greeks unexpectedly triumph over the masses of Persian invaders, is ultimately less a matter of storytelling than of politics. Although Herodotus is unwilling to be anything but neutral on the relative merits of monarchy, oligarchy, and democracy (in a passage known as the “Debate on Government,” he has critical things to say about all three), he ultimately structures his presentation of the war as a kind of parable about the conflict between free Western societies and Eastern despotism. (The Persians are associated with motifs of lashing, binding, and punishment.) While he isn’t shy about portraying the shortcomings of the fractious Greek city-states and their leaders, all of them, from the luxury-loving Ionians to the dour Spartans, clearly share a desire not to answer to anyone but their own leaders. Anyone, at any rate, was preferable to the Persian overlord Xerxes, who in Herodotus’ narrative is the subject of a magisterial portrait of corrupted power. No one who has read the Histories is likely to forget the passage describing the impotent rage of Xerxes when his engineers’ first attempt to create a bridge from Asia to Europe across the Hellespont was washed away by a storm: after commanding that the body of water be lashed three hundred times and symbolically fettered (a pair of shackles was tossed in), he chastised the “bitter water” for wronging him, and denounced it as “a turbid and briny river.” More practically, he went on to have the project supervisors beheaded. Herodotus’ Xerxes is, however, a character of persuasive complexity, the swaggering cruelty alternating with childish petulance and sudden, sentimental paroxysms of tears: it’s a personality likely to remind contemporary audiences of a whole panoply of dangerous dictators, from Nero to Hitler. One of the great, unexpected moments in the Histories, evoking the emotional finesse of the best fiction, comes when Xerxes, reviewing the ocean of forces he has assembled for the invasion, suddenly breaks down, “overcome,” as he puts it to his uncle Artabanus (who has warned against the enterprise), “by pity as I considered the brevity of human life.” Such feeling for human life, in a dictator whose casual indifference to it is made clear throughout the narrative, is a convincing psychological touch. The unstable leader of a ruthlessly centralized authoritarian state is a nightmare vision that has plagued the sleep of liberal democracies ever since Herodotus created it.

Gripping and colorful as the invasions and their aftermaths are, the Greco-Persian Wars themselves make up just half of the Histories—from the middle of Book 5 to the end of the ninth, and final, book. This strongly suggests that Herodotus' preoccupation was with something larger still. The first four and a half books of the Histories make up the first panel of what is, in fact, a diptych: they provide a leisurely account of the rise of the empire that will fall so
spectacularly in the second part. Typically, Herodotus gives you everything you could conceivably want to know about Persia, from the semi-mythical, Oedipus-like childhood of Cyrus (he’s condemned to exposure as a baby but returns as a young man, disastrously for those who wanted him to die), to the imperial zenith under Darius, a scant two generations later. (Darius, who had a talent for unglamorous but useful administrative matters—he introduced coined money, a reliable postal system, and the division of the empire into manageable provinces called satrapies—was known as “the shopkeeper.”) From book to book, the Histories lets you track Persia’s expansion, mapped by its conflicts with whomever it is trying to subjugate at the time. In Book 1, there are the exotic Massagetae, who were apparently strangers to the use, and abuse, of wine. (The Persians—like Odysseus with the Cyclops—get them drunk and then trounce them.) In Book 2 come the Egyptians, with their architectural immensities, their crocodiles, and their mummified pets, a nation whose curiosities are so numerous that the entire book is devoted to its history, culture, and monuments. In Book 3, the Persians come up against the Ethiopians, who (Herodotus has heard) are the tallest and most beautiful of all peoples. In Book 4, we get the mysterious, nomadic Scythians, who cannily use their lack of “civilization” to confound their would-be overlords: every time the Persians set up a fortified encampment, the Scythians simply pack up their portable dwellings and leave. By the time of Darius’ reign, Persia had become something that had never been seen before: a multinational empire covering most of the known world, from India in the east to the Aegean Sea in the west and Egypt in the south. The real hero of Herodotus’ Histories, as grandiose, as admirable yet doomed, as any character you get in Greek tragedy, is Persia itself. What gives this tale its unforgettable tone and character—what makes the narrative even more leisurely than the subject warrants—are those infamous, looping digressions: the endless asides, ranging in length from one line to an entire book (Egypt), about the flora and fauna, the lands and the customs and cultures, of the various peoples the Persian state tried to absorb. And within these digressions there are further digressions, an infinite regress of fascinating tidbits whose apparent value for “history” may be negligible but whose power to fascinate and charm is as strong today as it so clearly was for the author, whose narrative modus operandi often seems suspiciously like free association. Hence a discussion of Darius’ tax-gathering procedures in Book 3 leads to an attempt to calculate the value of Persia’s annual tribute, which leads to a discussion of how gold is melted into usable ingots, which leads to an inquiry into where the gold comes from (India), which, in turn (after a brief detour into a discussion of what Herodotus insists is the Indian practice of cannibalism), leads to the revelation of where the Indians gather their gold dust. Which is to say, from piles of sand rich in gold dust, created by a species of—what else?—“huge ants, smaller than dogs but larger than foxes.” (In this case, at least, Herodotus’ guides weren’t necessarily pulling his leg: in 1996, a team of explorers in northern Pakistan discovered that a species of marmot throws up piles of gold-rich earth as it burrows.) One reason that what often looks like narrative Rorschach is so much fun to read is Herodotus’ style. 
Since ancient times, all readers of Herodotus, whatever their complaints about his reliability, have acknowledged him as a master prose stylist. Four centuries after Herodotus died, Cicero wondered rhetorically “what was sweeter than Herodotus.” In Herodotus’ own time, it’s worth remembering, the idea of “beautiful prose” would have been a revolutionary one: the ancient Greeks considered prose so debased in comparison to verse that they didn’t even have a word for it until decades after the historian wrote, when they started referring to it simply as psilos logos, “naked language,” or pedzos logos, “walking language” (as opposed to the dancing, or even airborne, language of poetry). Herodotus’ remarkable accomplishment was to incorporate, in extended prose narrative, the fluid rhythms familiar from the earlier, oral culture of Homer and Hesiod. The lulling cadences and hypnotically spiralling clauses in each of his sentences—which replicate, on the microcosmic level, the ambling, appetitive nature of the work as a whole—suggest how hard Herodotus worked to bring literary artistry, for the first time, to prose. One twentiethcentury translator of the Histories put it succinctly: “Herodotus’s prose has the flexibility, ease and grace of a man superbly talking.” All the more unfortunate, then, that this and pretty much every other sign of Herodotus’ prose style is absent from “The Landmark Herodotus,” whose new translation, by Andrea L. Purvis, is both naked and pedestrian. A revealing example is her translation of the Preface, which, as many scholars have observed, cannily appropriates the high-flown language of Homeric epic to a revolutionary new project: to record the deeds of real men in real historical time. In the original, the entire Preface is one long, winding, quasi-poetic sentence, a nice taste of what’s to come; Purvis chops it into three flat-footed sections. Readers who want a real taste of Herodotean style can do a lot worse than the 1858 translation of George Rawlinson (Everyman’s Library; $25), which beautifully captures the text’s rich Homeric flavor and dense syntax; more recently, the 1998 translation by Robin Waterfield (Oxford World’s Classics; $10.95) loses the archaic richness but, particularly in the opening, gives off a whiff of the scientific milieu out of which the Histories arose. But in almost every other way “The Landmark Herodotus” is an ideal package for this multifaceted work. Much thought has been given to easing the reader’s journey through the narrative: running heads along the top of each page provide the number of the book, the year and geographical location of the action described, and a brief description of that action. (“A few Athenians remain in the Acropolis.”) Particularly helpful are notes running down the side of each page, each one comprising a short gloss on the small “chapters”

into which Herodotus’ text is traditionally divided. Just skimming these is a good way of getting a quick tour of the vast work: “The Persians hate falsehoods and leprosy but revere rivers”; “The Taurians practice human sacrifice with Hellenes and shipwreck survivors”; “The story of Artemisia, and how she cleverly evades pursuit by ramming a friendly ship and sinking it, leading her pursuer to think her a friendly ship or a defector.” And “The Landmark Herodotus” not only provides the most thorough array of maps of any edition but is also dense with illustrations and (sometimes rather amateurish) photographs—a lovely thing to have in a work so rich in vivid descriptions of strange lands, objects, and customs. In this edition, Herodotus’ description of the Egyptians’ fondness for pet cats is paired with a photograph of a neatly embalmed feline.

For all the ostensible detours, then, the first four and a half books of the Histories lay a crucial foundation for the reader's experience of the war between Persia and Greece. The latter is not the "real" story that Herodotus has to tell, saddled with a ponderous, if amusing, preamble, but, rather, the carefully prepared culmination of a tale that grows organically from the distant origins of Persia's expansionism to its unimaginable defeat. In the light of this structure, it is increasingly evident that Herodotus' real subject is not so much the improbable Greek victory as the foreordained Persian defeat. But why foreordained? What, exactly, did the Persian empire do wrong? The answer has less to do with some Greek sense of the inevitability of Western individualism triumphing over Eastern authoritarianism—an attractive reading to various constituencies at various times—than it does with the scientific milieu out of which Herodotus drew his idea of historiē. For Herodotus, the Persian empire was, literally, "unnatural." He was writing at a moment of great intellectual interest in the difference between what we today (referring to a similarly fraught cultural debate) call "nature vs. nurture," and what the Greeks thought of as the tension between physis, "nature," and nomos, "custom" or "law" or "convention." Like other thinkers of his time, he was particularly interested in the ways in which natural habitat determined cultural conventions: hence the many so-called "ethnographic" digressions. This is why, with certain exceptions, he seems, perhaps surprisingly to us, to view the growth of the Persian empire as more or less organic, more or less "natural"—at least, until it tries to exceed the natural boundaries of the Asian continent. A fact well known to Greek Civ students is that the word barbaros, "barbarian," did not necessarily have the pejorative connotations that it does for us: barbaroi were simply people who didn't speak Greek and whose speech sounded, to Greek ears, like bar-bar-bar. So it's suggestive that one of the very few times in the Histories that Herodotus uses "barbarian" in our sense is when he's describing Xerxes' behavior at the Hellespont. As the classicist James Romm argues, in his lively short study "Herodotus" (Yale; $25), for this historian there is something inherently wrong and bad with the idea of trying to bleed over the boundaries of one continent into another. It's no accident that the account of the career of Cyrus, the empire's founder, is filled with pointed references to his heedless treatment of rivers, the most natural of boundaries. (Cyrus dies, in fact, after ill-advisedly crossing the river Araxes, considered a boundary between Asia and Europe.) What's wrong with Persia, then, isn't its autocratic form of government but its size, which in the grand cycle of things is doomed one day to be diminished. Early in the Histories, Herodotus makes reference to the way in which cities and states rise and fall, suddenly giving an ostensibly natural principle a moralizing twist: I shall . . . proceed with the rest of my story recounting cities both lesser and greater, since many of those that were great long ago have become inferior, and some that are great in my own time were inferior before. And so, resting on my knowledge that human prosperity never remains constant, I shall make mention of both without discrimination.

The passage suggests that, both for states and for individuals, a coherent order operates in the universe. In this sense, history turns out to be not so different from that other great Greek invention—tragedy. The debt owed by Herodotus to Athenian tragedy, with its implacable trajectories from grandeur to abjection, has been much commented on by classicists, some of whom even attribute his evolution from a mere note-taker to a grand moralist of human affairs to the years spent in Athens, when he is said to have been a friend of Sophocles. (As one scholar has put it, “Athens was his Damascus.”) Athens itself, of course, was to become the protagonist of one such tragico-historical “plot”: during Herodotus’ lifetime, the preëminent Greek city-state travelled a Sophoclean road from the heady triumph of the Persian Wars to the onset of the Peloponnesian War, a conflict during which it lost both its political and its moral authority. This is why it’s tempting to think, with certain classical historians, that the Histories were composed as a kind of friendly warning about the perils of imperial ambition. If the fate of the Persians could be intended as an object lesson for the Athenians, Herodotus’ ethical point is much larger than the superiority of the West to the East.

Only a sense of the cosmic scale of Herodotus' moral vision, of the way it grafts the political onto the natural schema, can make sense of his distinctive style, of all the seemingly random detours and diversions—the narrative equivalents of the gimcrack souvenirs and brightly colored guidebooks and the flowered shirts. If you wonder, at the beginning of the story of Persia's rise, whether you really need twenty chapters about the distant origins of the dynasty to which Croesus belongs, think again: that famous story of how Croesus' ancestor Gyges assassinated the rightful king and took the throne (to say nothing of the beautiful queen) provides information that allows you to fit Croesus' miserable ending into the natural scheme of things. His fall, it turns out, is the cosmic payback for his ancestor's crime: "Retribution would come," Herodotus says, quoting the Delphic
oracle, “to the fourth descendant of Gyges.” These neat symmetries, you begin to realize, turn up everywhere, as a well-known passage from Book 3 makes clear: Divine providence in its wisdom created all creatures that are cowardly and that serve as food for others to reproduce in great numbers so as to assure that some would be left despite the constant consumption of them, while it has made sure that those animals which are brutal and aggressive predators reproduce very few offspring. The hare, for example, is hunted by every kind of beast, bird, and man, and so reproduces prolifically. Of all animals, she is the only one that conceives while she is already pregnant. . . . But the lioness, since she is the strongest and boldest of animals, gives birth to only one offspring in her entire life, for when she gives birth she expels her womb along with her young. . . . Likewise, if vipers and the Arabian winged serpents were to live out their natural life spans, humans could not survive at all.

For Herodotus, virtually everything can be assimilated into a kind of natural cycle of checks and balances. (In the case of the vipers and snakes he refers to, the male is killed by the female during copulation, but the male is “avenged” by the fact that the female is killed by her young.) Because his moral theme is universal, and because his historical “plot” involves a world war, Herodotus is trying to give you a picture of the world entire, of how everything in it is, essentially, linked. “Link,” as it happens, is not a bad word to have in mind as you make your way through a text that is at once compellingly linear and disorientingly tangential. He pauses to give you information, however remotely related, about everything he mentions, and that information can take the form of a three-thousand-word narrative or a one-line summary. It only looks confusing or “digressive” because Herodotus, far from being an old fuddy-duddy, not nearly as sophisticated as (say) Thucydides, was two and a half millennia ahead of the technology that would have ideally suited his mentality and style. It occurs to you, as you read “The Landmark Herodotus”—with its very Herodotean footnotes, maps, charts, and illustrations—that a truly adventurous new edition of the Histories would take the digressive bits and turn them into what Herodotus would have done if only they’d existed: hyperlinks. Then again, Herodotus’ work may have presaged another genre altogether. The passage about lions, hares, and vipers reminds you of the other great objection to Herodotus—his unreliability. (For one thing, nearly everything he says about those animals is wrong.) And yet, as you make your way through this amazing document, “accuracy”—or, at least, what we normally think of as scientific or even journalistic accuracy, “the facts”—seems to get less and less important. Did Xerxes really weep when he reviewed his troops? Did the aged, corrupt Hippias, the exiled tyrant of Athens now in the service of Darius, really lose a tooth on the beach at Marathon before the great battle began, a sign that he interpreted (correctly) to mean that he would never take back his homeland? Perhaps not. But that sudden closeup, in which the preparations for war focus, with poignant suddenness, on a single hopeless old has-been, has indelible power. Herodotus may not always give us the facts, but he unfailingly supplies something that is just as important in the study of what he calls ta genomena ex anthr p n, or “things that result from human action”: he gives us the truth about the way things tend to work as a whole, in history, civics, personality, and, of course, psychology. (“Most of the visions visiting our dreams tend to be what one is thinking about during the day.”) All of which is to say that while Herodotus may or may not have anticipated hypertext, he certainly anticipated the novel. Or at least one kind of novel. Something about the Histories, indeed, feels eerily familiar. Think of a novel, written fifty years after a cataclysmic encounter between Europe and Asia, containing both real and imagined characters, and expressing a grand vision of the way history works in a highly tendentious, but quite plausible, narrative of epic verve and sweep. 
Add an irresistible anti-hero eager for a conquest that eludes him precisely because he understands nothing, in the end, about the people he dreams of subduing; a hapless yet winning indigenous population that, almost by accident, successfully resists him; and digressions powerfully evoking the cultures whose fates are at stake in these grand conflicts. Whatever its debt to the Ionian scientists of the sixth century B.C. and to Athenian tragedy of the fifth, the work that the Histories may most remind you of is “War and Peace.”

And so, in the end, the contemporary reader is likely to come away from this ostensibly archaic epic with the sense of something remarkably familiar, even contemporary. That cinematic style, with its breathtaking wide shots expertly alternating with heart-stopping closeups. The daring hybrid genre that integrates into a grand narrative both flights of empathetic fictionalizing and the anxious, footnote-prone self-commentary of the obsessive, perhaps even neurotic amateur scholar. (To many readers, the Histories may feel like something David Foster Wallace could have dreamed up.) A postmodern style that continually calls attention to the mechanisms of its own creation and peppers a sprawling narrative with any item of interest, however tangentially related to the subject at hand. Then, there is the story itself. A great power sets its sights on a smaller, strange, and faraway land—an easy target, or so it would seem. Led first by a father and then, a decade later, by his son, this great power invades the lesser country twice. The father, so people say, is a bland and bureaucratic man, far more temperate than the son; and, indeed, it is the second invasion that will seize the imagination of history for many years to come. For although it is far larger and more aggressive than the first, it leads to unexpected disaster. Many commentators ascribe this disaster to the flawed decisions of the son: a man whose bluster competes with, or perhaps covers for, a certain hollowness at the center; a leader
who is at once hobbled by personal demons (among which, it seems, is an Oedipal conflict) and given to grandiose gestures, who at best seems incapable of comprehending, and at worst is simply incurious about, how different or foreign his enemy really is. Although he himself is unscathed by the disaster he has wreaked, the fortunes and the reputation of the country he rules are seriously damaged. A great power has stumbled badly, against all expectations. Except, of course, the expectations of those who have read the Histories. If a hundred generations of men, from the Athenians to ourselves, have learned nothing from this work, whose apparent wide-eyed naïveté conceals, in the end, an irresistible vision of the way things always seem to work out, that is their fault and not the author’s. Time always tells, as he himself knew so well. However silly he may once have looked, Herodotus, it seems, has had the last laugh. ART: ADRIEN GUIGNET, “XERXES AT THE HELLESPONT” (1845)/AKG-IMAGES



134

AREA OF KNOWLEDGE:

The Arts

“Art is a lie that makes us realize the truth.” (Picasso)

• Is originality essential in the arts?
• Are the arts a kind of knowledge, or are they a means of expressing knowledge?
• What is “good art” and “bad art”?
• What is the purpose of art?
• Is there anything in art that can be universally considered beautiful?
• Do all of the arts have certain features in common?




Art and (Wo)man at Yale
By Michael J. Lewis, The Wall Street Journal, April 24, 2008

Has any work of art been more reviled than Aliza Shvarts's senior project at Yale? Andres Serrano's photograph of a crucifix suspended in his own urine did not lack for articulate champions. Nor did Damien Hirst's vitrine with its doleful rotting cow's head. But Ms. Shvarts's performance of "repeated self-induced miscarriages" has left even them silent. According to her project description, she inseminated herself with sperm from voluntary donors "from the 9th to the 15th day of my menstrual cycle . . . so as to insure the possibility of fertilization." Later she would induce a miscarriage by means of an herbal abortifacient. (Or so she claimed; whether she actually did any of this remains unclear.)


Ms. Shvarts may have, as she asserts, intended her project to raise questions about society and the body. But she inadvertently raises an entirely different set of questions: How exactly is Yale teaching its undergraduates to make art? Is her project a bizarre aberration or is it within the range of typical student work, unusually startling perhaps but otherwise a fully characteristic example of the program and its students? A traditional program in studio art typically begins with a course in drawing, where students are introduced to the basics of line, form and tone. Life drawing is fundamental to this process, not only because of the complexity of the human form (that limber scaffolding of struts and masses) but because it is the object for which we have the most familiarity -- and sympathy. Students invariably bristle at the drawing requirement, wishing to vault ahead to the stage where they make "real art," but in my experience, students who skip the drawing stages do not have the same visual acuity, and the ability to see where a good idea might be made better. Following this introduction, students might specialize in painting, sculpture or such newer media as photography or video. A rigorous college art program provides a strong vertical structure, so that students take a sequence of ever more challenging courses in the same medium. Most undergraduate programs culminate in a senior show, a high-spirited and uneven romp in which students' clever ideas race far ahead of their execution and workmanship. It was for just such a show that Ms. Shvarts's project was, so to speak, conceived. It is often said that great achievement requires in one's formative years two teachers: a stern taskmaster who teaches the rules and an inspirational guru

who teaches one to break the rules. But they must come in that order. Childhood training in Bach can prepare one to play free jazz and ballet instruction can prepare one to be a modern dancer, but it does not work the other way around. One cannot be liberated from fetters one has never worn; all one can do is to make pastiches of the liberations of others. And such seems to be the case with Ms. Shvarts. ***

In "My Life Among the Deathworks," the sociologist Philip Rieff coined the term "deathworks" to describe works of art that celebrated creative destruction, and which posed "an all-out assault upon something vital to the established culture." He argued that the principal artistic achievements of the 20th century were such deathworks, which, however lovely or brilliant, served primarily to negate or transgress the existing culture, rather than to affirm or celebrate it. He did not live to see Ms. Shvarts's piece, but one suspects that he would have had much to say. Mr. Rieff was especially interested in those who treated their bodies as an instrument of art, especially those who used them in masochistic or repugnant ways. By now, it is hardly an innovation to do so. Nearly two generations have passed since Chris Burden had a bullet fired into his body. It is even longer since the Italian artist Piero Manzoni sold tin cans charmingly labeled Merde d'artista, which contained exactly that. Even Ms. Shvarts's central proposition -- that the discomfort we feel at the word miscarriage is itself a species of linguistic oppression -- is a relic of the highly politicized literary theory of the late 1980s. As she wrote in an op-ed published in last Friday's Yale Daily News: "The reality of miscarriage is very much a linguistic and political reality, an act of reading constructed by an act of naming -- an authorial act. It is the intention of this piece to destabilize the locus of that authorial act, and in doing so, reclaim it from the heteronormative structures that seek to naturalize it." In other words, one must act to shatter the rigid lattice of categories that words impose upon us. Although the accompanying jargon is fashionable (or was a few years ago), it is essentially a portentous recycling of the idea behind Marcel Duchamp's 1917 urinal, which became a "Fountain" when he declared it so. Immaturity, self-importance and a certain confused earnestness will always loom large in student art work. But they will usually grow out of it. What of the schools that teach them? Undergraduate programs in art aspire to the status of professional programs that award MFA degrees, and there is often a sense that they too should encourage the making of sophisticated and challenging art, and as soon as possible. Yale, like most good programs, requires its students to achieve a certain facility in drawing, although nowhere near what it demanded in the 1930s, when aspiring artists spent roughly six hours a day in the studio painting and life drawing, and an additional three on Saturday. Given the choice of this arduous training or the chance to proceed immediately to the making of art free of all traditional constraints, one can understand why all but a few students would take the latter. But it is not a choice that an undergraduate should be given. In this respect -- and perhaps only in this respect -- Ms. Shvarts is the victim in this story. Mr. Lewis is Faison-Pierson-Stoddard Professor of Art at Williams College. His latest book is "American Art and Architecture" (Thames & Hudson). URL for this article: http://online.wsj.com/article/SB120900328811040439.html

Bubbles, Booms, and Busts: The Art Market in 2008
By Tyler Cowen, The New York Sun, July 10, 2008
http://www.nysun.com/arts/bubbles-booms-and-busts-the-art-market-in-2008/81581/

How can a dead, stuffed shark be worth $12 million? Yes, that's how much Damien Hirst's famous shark — showcased in his artwork "The Physical Impossibility of Death in the Mind of Someone Living" — sold for in 2005. The buyer was a wealthy financier named Steve Cohen. If you are wondering, price competition has not vanished from the marketplace: Eddie Saunders, a British electrician, caught a shark and displayed it in his shop two years before Mr. Hirst's project hit the market. Mr. Saunders later tried to sell his dead shark for about one-fifth the price, and probably that sum was negotiable downward. The offer received lots of press coverage but no interest on the buying side. If you think that one inanimate shark is as good as another, your understanding of the art market is, as they say, dead in the water. Mr. Saunders's piece just didn't have the same quality or cachet. (Although Mr. Saunders did claim his shark was more handsome.) Most important, it's not just about the work of art; rather, the value placed on a particular work derives from how it feels to own that art. Most art dealers know that art buying is all about what tier of buyers you aspire to join, about establishing a self-identity and, yes, getting some publicity. The network of galleries and auction houses spends a lot of its time, money, and energy giving artworks just the right image. Remarkably, buyers support the process in the interest of coming out on top, rather than fighting it and trying to get the lower prices.

Damien Hirst, 'For the Love of God' (2007). Handout / 2007 Getty Images.

That process has long been murky to outsiders, but we have a new guide to the exotic world of dead sharks, Brillo boxes, and Jeff Koons puppy sculptures. In "The $12 Million Stuffed Shark: The Curious Economics of Contemporary Art" (Palgrave Macmillan, 272 pages, $24.95), published earlier this year in England, and coming out in America in September, Don Thompson provides the single best guide to both the anthropology and the economics of contemporary art markets. This book is fun and fascinating on just about every page.

And as for that $12 million shark, Mr. Thompson points us to the $500 million-a-year salaries that are common in finance and hedge funds. If that's your income, the $12 million is about five days of your labor. About $20 billion of contemporary art is sold annually, which is comparable to the yearly sales of Nike. In other words, it only sounds like a lot of money. Maybe, given the cachet of owning such a famous piece, the real question is why the Hirst shark didn't go for more, but try economics here, too. One
complication involves the cost of storing, maintaining, and insuring the piece, given that the shark is already rotting, the fluid is ultimately unstable, and the container may someday leak. That's more than a few days' worth of headache, even if you delegate the actual legwork to an assistant. In other words, if you are a wealthy buyer, in terms of the foregone value of your time, you may have just spent some more money. And if the work ends up collapsing or being ruined, you look like a fool. Should we think such purchases are silly or noble? Many people recoil from the contemporary art market as the home of pretension and human foible, but as expensive pursuits go, the art market is a relatively beneficial one. The dead shark cost $12 million to buy but, of course, it didn't cost nearly that much to make. So the production process isn't eating up too many societal resources or causing too much damage to the environment. For the most part, it's money passing back and forth from one set of hands to another, like a game — and, yes, the game is fun for those who have the money to play it. Don't laugh, but we do in fact need some means of determining which of the rich people are the cool ones, and the art market surely serves that end. Most accomplished works of art end up in museums and are eventually accessible to the public; Mr. Hirst's shark last fall went on view at New York's Metropolitan Museum of Art for a three-year visit. Someday, if past behavior of major collectors is any guide, a permanent donation will likely follow. The associated tax deduction drains the Treasury, but this process is cheaper than having our government spend more on direct support of the arts, as is the case in Western Europe. It's a good deal all around. If you're wondering why there's such a boom in contemporary art today, that's because of competition, too. It's no longer possible to put together a world-class collection of Rembrandts or Botticellis; in fact, it can be hard to find a good one at all, at any price. But if you love contemporary art, or at least think you do, you can scale the heights of the market and leave your mark on the world as a collector. Today's richest got to where they are by having this emphasis on reaching no. 1; we should expect that same psychology to carry over to the art market. If the price is having to buy a stuffed shark rather than a shining Madonna, so be it; the world is growing more secular anyway. From this book, you'll also learn how to bid at an auction (inexperienced buyers start too soon in the process), how auctioneers entertain a crowd (they count on the non-buyers to keep the buyers interested), and why art critics don't matter much anymore. If the magazine Art in America pays $200 for a review article, why listen to that writer? We have a much richer and generally more accessible guide to the value of art — namely the market itself. So do you think that the art market might take a tumble, now that the rest of the economy is sliding into recession? Probably not. New classes of buyers from Russia and Asia and the Middle East are likely to keep things going. The art market followed the stock market down a bit after the 1987 crash, but we've yet to see the same dip this time around. Just filling the provincial art museums scattered around China, or under construction, will employ a lot of Western artists for many years. Willem de Kooning once said of the famed art dealer Leo Castelli: "You could give him two beer cans and he could sell them." 
Jasper Johns promptly went out and, for Castelli, he painted, sculpted, and created lithographs of two beer cans. These works have since been sold for millions; Mr. Johns's "White Flag" went for $20 million to the Metropolitan Museum of Art. It's been estimated that the "Mona Lisa" — if it ever made its way to the market — might sell for up to $10 billion, arguably to status-hungry official buyers in Qatar, Abu Dhabi, or Dubai. Damien Hirst is now worth more than Dali, Picasso, and Warhol were at the same age, put together. The point isn't whether, in aesthetic terms, he deserves that compensation. The question is whether this way of organizing the art market makes overall sense. What Mr. Thompson shows us is that, if you really understand human nature, and know just how much wealth is out there, it's hard to imagine things being any other way.

Image: damien-hirst-shark.jpg (504x326 pixels), http://rawartint.files.wordpress.com/2007/10/damien-hirst-shark.jpg


140

AREA OF KNOWLEDGE:

Ethics

“Integrity without knowledge is weak and useless, and knowledge without integrity is dangerous and dreadful.” (Samuel Johnson)

• What is the difference between a fact and a value?
• Do we have specific moral intuitions that can give us knowledge?
• Can we deduce moral laws using reason, or is the concept of knowledge inappropriate in morality?
• Are people capable of altruism, or are they in some sense always selfish?



How Did Honor Evolve? The biology of integrity
By David P. Barash, The Chronicle Review, from the issue dated May 23, 2008
http://chronicle.com/temp/reprint.php?id=x4b3czpc6vjdzhjb8mz4mnctyxsn1fhz
To paraphrase Shakespeare's Falstaff, "honor pricks us on." And although Sir John famously concludes "I'll none of it," the reality is that for most people, honor is more than a "mere scutcheon." Many colleges have honor codes, sometimes elaborated into complex systems: The list includes small colleges (e.g., Gustavus Adolphus, Haverford), large universities (e.g., the University of Virginia, Texas A&M), Ivies like Dartmouth and Princeton, sectarian institutions like Brigham Young, science-tech (Caltech) as well as liberal-arts (Reed) colleges, and, with particular solemnity, the three military academies. The code at West Point is especially terse and predictably directive: "A cadet will not lie, cheat, or steal, nor tolerate those who do." The first "three commandments" — thou shalt not lie, cheat, or steal — speak for themselves. Of particular interest for our purposes, however, is that fourth admonition: "nor tolerate those who do." (Sure enough, Prince Hal shows himself true to this martial virtue when he eventually — and for many of us, hurtfully — turns away from Falstaff, showing that as king he disowns Fat Jack's dishonorable behavior.) Doesn't it stand to reason that everyone would be intolerant of violators? After all, when someone lies, cheats, or steals, it hurts the rest of us while making a mockery of society itself (cue Immanuel Kant, and his categorical imperative). The "fourth commandment" should, therefore, be altogether logical and hardly need specifying. The problem for theorists — if not for the "naturally intolerant" — is that blowing the whistle on liars, cheaters, or thieves is likely to impose a cost on the whistle-blower, while everyone else benefits from her act of conscience. Why not mind your own business and let someone else do the dirty work? Isn't that why we have police: to, as the word suggests, police the behavior of others, at least in part so we don't have to do so ourselves? A conceivable explanation is that if no one else perceives the transgression or, similarly, if no one else is willing to do anything about it, then perhaps the miscreant will get away with it, whereupon everyone — including you — will be worse off. In short, turning in a violator could be a simple act of self-aggrandizement, if the cost of doing so is less than the shared penalty of keeping silent. Another possibility, of course, is that people are indeed predisposed to ignore code violations, which is precisely why the "fourth commandment" exists — because otherwise malefactors would be tolerated. Yet another, and of particular interest to evolutionists, is that people are, at least on occasion, inclined to do things that are detrimental to their personal benefit so long as their actions are sufficiently beneficial to the larger social unit. That process, known as "group selection," has a long and checkered history in biological theory. Since natural selection should consistently reward selfish acts, how to explain the existence of morality that induces people to behave, as Bertolt Brecht puts it in The Threepenny Opera, "more kindly than we are"? These days, evolutionary explanations lean heavily on kin selection (also known as inclusive fitness theory, whereby apparent altruism at the level of bodies can actually be selfishness playing itself out at the level of genes), and on reciprocity, essentially "you scratch my back, I'll scratch yours."
But there is also the possibility that beneficent acts are biologically generated by a payoff enjoyed by the group, of which the altruist is a member. At one point in The Descent of Man and Selection in Relation to Sex, Darwin gave impetus to the group selectionists: "Although a high standard of morality gives but a slight or no advantage to each individual man and his children over the other men of the same tribe, … an increase in the number of well-endowed men and advancement in the standard of morality will certainly give an immense advantage to one tribe over another." But just because Darwin said it doesn't make it true. The problem is that even if people "well endowed" with morality provide their "tribe" with an "immense advantage," those same people run the risk of being immensely disadvantaged within their group if such endowment equates to spending time, energy, or money on behalf of others, or running risks that help the larger social unit while hurting the altruist. As a result, although group selection — along with its companion concept, "the good of the species" — was uncritically accepted for about a century, it has been deservedly out of favor for several decades, displaced by the understanding that selection operates most effectively at the lowest possible level: individuals or, better yet, genes.

Or does it? Maybe reports of group selection's demise have been greatly exaggerated. Various mathematical models now suggest that under certain stringent conditions, selection could (at least in theory) operate at the level of groups. Some of the most promising formulations involve so-called multilevel selection, in which natural selection operates in simultaneous, nested baskets: among genes within individuals, among individuals within groups, among groups within species, and presumably among species within ecosystems, among ecosystems on the planet Earth, and (why not?) among planets in galaxies, and galaxies in the universe. Received wisdom these days is that if a behavior is costly for the individual, it is unlikely to evolve, regardless of whether it is beneficial for the group or the species. Nonetheless, even if we grant that group selection has probably been inconsequential when it comes to changing gene frequencies, that does not mean that selection at the group level hasn't been instrumental in shaping human psychology, producing some pro-social tendencies via cultural evolution rather than its genetic counterpart.

And so we return to honor codes, violators thereof, and those who turn them in. Notably, along with the fertile mathematical modeling, there has been a flurry of experimental simulations by economists and social psychologists showing that under certain conditions, people are inclined to engage in "third-party punishment." That is, they will punish cheaters, even at distinct cost to themselves. If two people are playing, say, prisoner's dilemma, and one cheats ("defects," in game-theory terminology), the other may well defect in turn; that is the basis of the celebrated "tit for tat" strategy, which is selfish, or at least self-protective, and therefore not perplexing. Especially interesting for those of us who contemplate honor systems, however, are those third-party simulations in which an observer is given the chance to reward or punish defectors. To an extent that has surprised most biologists, third-party punishment is doled out quite enthusiastically, with the self-appointed adjudicators willingly absorbing a cost in order to police the behavior of others.

Devotees of group selection (some of whom evidence an almost religious zeal, perhaps because they have been wandering in the biological wilderness since the mid-1960s) have seized on these results as demonstrating how human moral psychology may well have been shaped by an urge to benefit one's group, even at substantial personal expense. Such behavior can also be explained, however, by what evolutionary theorists call the "three R's": reputation, reciprocation, and retribution. Be moral, and your reputation will benefit (and thus, your fitness); you might also profit from the reciprocal morality of others.
And if you are seen as immoral, you run the risk of painful retaliation. A parallel suite of inducements could generate third-party punishment, including an inclination to turn in honor-code violators, even if, paradoxically, society also takes a dim view of the "snitch" or "stool pigeon." That raises the problem of who administers the rebuke to code violators. Ideally it should be everyone, but that simply opens the opportunity for yet more defection, on the part of individuals who stand back and let others do the dirty work. One answer is for punishment to be meted out not only toward defectors, but toward anyone who refrains from punishing them. Next step, then, is to punish those who won't punish those who defect, and so on, ad infinitum. In the close-formation battle phalanxes favored by the Roman legions, each foot soldier carried a sword in his right hand and a shield in his left. Hence, each legionnaire depended on the man alongside to provide protection via the other's shield. Desertion in battle was a capital offense, with punishment to be administered on the spot; moreover, anyone who failed to kill a deserter was himself subject to immediate death! ("A legionnaire will not run away during battle, nor tolerate those who do." Nor will he tolerate those who tolerate those who do.) There are other responses to observing a cheater, including being more likely to cheat oneself, that may go a long way toward explaining all the officially orchestrated intolerance of cheating. Bad enough if one man breaks ranks and runs; worse yet if that induces everyone else to do the same. Consider a familiar circumstance in which the transgression, and the penalty for tolerating a transgressor, are both considerably less drastic: how difficult it is to wait at a crosswalk when all those around you are jaywalking.
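The laboratory games described above follow a standard template: players decide how much to contribute to a shared pot, keeping whatever they hold back, and a bystander can then pay a fee out of her own winnings to fine a free-rider. The short Python sketch below only illustrates that logic; the payoff numbers, the multiplier, and the player roles are invented for this example and are not taken from any experiment the article cites.

# Toy public-goods round with third-party punishment.
# All parameters below are illustrative, not experimental values.

def public_goods_round(contributions, endowment=10, multiplier=1.6):
    """Each player keeps whatever she does not contribute; the pot is
    multiplied and split equally, so free-riding pays individually."""
    share = sum(contributions) * multiplier / len(contributions)
    return [endowment - c + share for c in contributions]

def third_party_punish(payoffs, defector, punisher, fee=2, fine=6):
    """A bystander pays a fee to fine the defector: costly for the
    'unpaid policeman,' costlier still for the cheater."""
    punished = payoffs[:]          # copy so the original round is preserved
    punished[punisher] -= fee
    punished[defector] -= fine
    return punished

# Players 0 and 1 cooperate fully; player 2 free-rides.
payoffs = public_goods_round([10, 10, 0])
print("before punishment:", [round(p, 2) for p in payoffs])
print("after punishment: ", [round(p, 2) for p in third_party_punish(payoffs, defector=2, punisher=0)])

Even in this toy round the free-rider comes out ahead until punished, and the punisher ends up worse off than the cooperator who stayed quiet, which is precisely the puzzle the article poses.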

The consistent results of third-party-punishment experiments — willingness or eagerness to enforce group norms, even when doing so is costly to the "enforcers" — can be seen as revealing something nasty and spiteful about people: They will go out of their way, even enduring personal financial loss, just to be mean to someone else. But in addition to this glass-half-empty interpretation, there is a half-full counterpart. The fact that people will punish a cheater in a so-called public-goods situation, even if doing so may be costly, is evidence for a kind of altruism. By maintaining social norms at their own cost, "punishers" are unpaid policemen, making a kind of selfless citizen's arrest. Although such behavior is admittedly rare at crosswalks, it is clear that people are readily inclined to turn on cheaters, as anyone who has ever bridled at the boor who breaks into line at a ticket window or supermarket checkout can readily attest. In those cases, and unlike jaywalking, code violations impose a disadvantage on those who wait responsibly in their queue. Yet, even here, it is tempting to swallow one's annoyance and hope that someone else will step up and enforce the norm for everyone else. There is much to be said for having such enforcers around, not only because the possibility of someone's acting on the cadet's fourth commandment probably makes cheating less likely in the first place, but also — and not least — because someone else is thereby charged with the task. It remains possible that third-party punishment, including but not limited to honor-code enforcement, derives, at least in part, from group selection. I doubt it, however, since reputation, reciprocation, and retribution (each acting on individuals) promise to explain such behavior more parsimoniously as well as more convincingly. But there can be no doubt that our species is notably "groupy." Nearly always, we interact with the rest of society, are expected to cooperate yet are tempted to cheat, and rely on the cooperation of others yet are vulnerable to their defection. The basic concept of society assumes give-and-take, a social contract whereby individuals make a deal, with each forgoing certain selfish, personal opportunities in exchange for gaining the benefits of group association. Lions, too, are groupy, although they pose some intriguing dilemmas — notably a failure to punish those who violate the leonine honor code. These animals are often understandably cooperative, at least within the pride; it helps, for example, to have more than one hunter to kill a large and potentially dangerous animal, such as a water buffalo. There is also a shared benefit if several lions are available to defend the kill from hyenas, as well as to keep other prides in their place. At the same time, some lions are less likely than others to hold up their end of the social contract. The laggards are typically tolerated, however, presumably because they yield an overall benefit to the rest of the group (or maybe because no one is willing to step up and upbraid the cheater). Most likely, having lots of lions — even if some of them spend most of their time just lyin' around — means that your pride is less likely to be pushed around by others, or by packs of hungry hyenas. Or take the phenomenon known to biologists as "reproductive skew," whereby the dominant members of certain bird flocks permit subordinates to be part of the group — and even, on occasion, to reproduce — so long as they don't get too carried away. 
The alpha birds profit by having the lesser-ranked individuals around, essentially as cannon fodder when it comes to defending the nest, bringing in food, and so forth, while the subordinates find it in their interest to subordinate themselves so long as they can glean some crumbs from the alphas' table. Among birds, this may also include subordinates' being allowed to lay a few eggs, but just a few. In these cases, however, misbehavior is policed by the dominant individuals, not by other subordinates' blowing the whistle on their excessively enthusiastic colleagues.

Honor-code violations constantly threaten to undermine even the most pro-social of contracts, whether among animals or people. Subordinate birds regularly attempt to lay more eggs than the dominants would like; lazy and cowardly lions are more frequent than even Bert Lahr in The Wizard of Oz might imagine; and there is a reason why, as George Orwell once noted, the real test of character is how you treat someone who has no possibility of doing you any good. The problem, which boils down to reconciling personal selfishness with public benefit, has occupied many of the great thinkers in social philosophy. These days it also drives empirical research — albeit mostly based on survey reports or laboratory simulations — while also tickling the imagination of evolutionary biologists, whether theory-minded or field-oriented. And it bears a complicated and as yet unresolved relationship to the fraught matter of honor-code violations, not only what motivates the perpetrators, but also what inspires those who turn them in. Back to Falstaff's cynical ruminations: Is it honor that really pricks them on, or is it group selection, genetically mediated inclinations, fear, guilt, social compulsion, or what?

David P. Barash is an evolutionary biologist, a professor of psychology at the University of Washington.

Learning to Lie

Kids lie early, often, and for all sorts of reasons—to avoid punishment, to bond with friends, to gain a sense of control. But now there’s a singular theory for one way this habit develops: They are just copying their parents.

By Po Bronson, New York Magazine. Published Feb 10, 2008

In the last few years, a handful of intrepid scholars have decided it’s time to try to understand why kids lie. For a study to assess the extent of teenage dissembling, Dr. Nancy Darling, then at Penn State University, recruited a special research team of a dozen undergraduate students, all under the age of 21. Using gift certificates for free CDs as bait, Darling’s Mod Squad persuaded high-school students to spend a few hours with them in the local pizzeria. Each student was handed a deck of 36 cards, and each card in this deck listed a topic teens sometimes lie about to their parents. Over a slice and a Coke, the teen and two researchers worked through the deck, learning what things the kid was lying to his parents about, and why. “They began the interviews saying that parents give you everything and yes, you should tell them everything,” Darling observes. By the end of the interview, the kids saw for the first time how much they were lying and how many of the family’s rules they had broken. Darling says 98 percent of the teens reported lying to their parents. Out of the 36 topics, the average teen was lying to his parents about twelve of them. The teens lied about what they spent their allowances on, and whether they’d started dating, and what clothes they put on away from the house. They lied about what movie they went to, and whom they went with. They lied about alcohol and drug use, and they lied about whether they were hanging out with friends their parents disapproved of. They lied about how they spent their afternoons while their parents were at work. They lied about whether chaperones were in attendance at a party or whether they rode in cars driven by drunken teens.

Being an honors student didn’t change these numbers by much; nor did being an overscheduled kid. No kid, apparently, was too busy to break a few rules. And lest you wonder if these numbers apply only to teens in State College, Pennsylvania, the teens in Darling’s sample were compared to national averages on a bevy of statistics, from academics to extracurriculars. “We had a very normal, representative sample,” Darling says.

For two decades, parents have rated “honesty” as the trait they most wanted in their children. Other traits, such as confidence or good judgment, don’t even come close. On paper, the kids are getting this message. In surveys, 98 percent said that trust and honesty were essential in a personal relationship. Depending on their ages, 96 to 98 percent said lying is morally wrong. So when do the 98 percent who think lying is wrong become the 98 percent who lie?

It starts very young. Indeed, bright kids—those who do better on other academic indicators—are able to start lying at 2 or 3. “Lying is related to intelligence,” explains Dr. Victoria Talwar, an assistant professor at Montreal’s McGill University and a leading expert on children’s lying behavior. Although we think of truthfulness as a young child’s paramount virtue, it turns out that lying is the more advanced skill. A child who is going to lie must recognize the truth, intellectually conceive of an alternate reality, and be able to convincingly sell that new reality to someone else. Therefore, lying demands both advanced cognitive development and social skills that honesty simply doesn’t require. “It’s a developmental milestone,” Talwar has concluded.

This puts parents in the position of being either damned or blessed, depending on how they choose to look at it. If your 4-year-old is a good liar, it’s a strong sign she’s got brains. And it’s the smart, savvy kid who’s most at risk of becoming a habitual liar. By their 4th birthday, almost all kids will start experimenting with lying in order to avoid punishment. Because of that, they lie indiscriminately—whenever punishment seems to be a possibility. A 3-year-old will say, “I didn’t hit my sister,” even if a parent witnessed the child’s hitting her sibling. Most parents hear their child lie and assume he’s too young to understand what lies are or that lying’s wrong. They presume their child will stop when he gets older and learns those distinctions. Talwar has found the opposite to be true—kids who grasp early the nuances between lies and truth use this knowledge to their advantage, making them more prone to lie when given the chance. Many parenting Websites and books advise parents to just let lies go—they’ll grow out of it. The truth, according to Talwar, is that kids grow into it. In studies where children are observed in their natural environment, a 4-year-old will lie once every two hours, while a 6-year-old will lie about once every hour and a half. Few kids are exceptions.

By the time a child reaches school age, the reasons for lying become more complex. Avoiding punishment is still a primary catalyst for lying, but lying also becomes a way to increase a child’s power and sense of control—by manipulating friends with teasing, by bragging to assert status, and by learning he can fool his parents. Thrown into elementary school, many kids begin lying to their peers as a coping mechanism, as a way to vent frustration or get attention. Any sudden spate of lying, or dramatic increase in lying, is a danger sign: Something has changed in that child’s life, in a way that troubles him. “Lying is a symptom—often of a bigger problem behavior,” explains Talwar. “It’s a strategy to keep themselves afloat.” In longitudinal studies, a majority of 6-year-olds who frequently lie have it socialized out of them by age 7. But if lying has become a successful strategy for handling difficult social situations, a child will stick with it. About half of all kids do—and if they’re still lying a lot at 7, then it seems likely to continue for the rest of childhood. They’re hooked. "My son doesn’t lie,” insisted Steve, a slightly frazzled father in his mid-thirties, as he watched Nick, his eager 6-year-old, enthralled in a game of marbles with a student researcher in Talwar’s Montreal lab. Steve was quite proud of his son, describing him as easygoing and very social. He had Nick bark out an impressive series of addition problems the boy had memorized, as if that was somehow proof of Nick’s sincerity. Steve then took his assertion down a notch. “Well, I’ve never heard him lie.” Perhaps that, too, was a little strong. “I’m sure he must lie some, but when I hear
it, I’ll still be surprised.” He had brought his son to the lab after seeing an advertisement in a Montreal parenting magazine that asked, “Can Your Child Tell the Difference Between the Truth and a Lie?” Steve was curious to find out if Nick would lie, but he wasn’t sure he wanted to know the answer. The idea of his son’s being dishonest with him was profoundly troubling. But I knew for a fact his son did lie. Nick cheated, then he lied, and then he lied again. He did so unhesitatingly, without a single glimmer of remorse.

Nick thought he’d spent the hour playing a series of games with a couple of nice women. He had won two prizes, a cool toy car and a bag of plastic dinosaurs, and everyone said he did very well. What the first-grader didn’t know was that those games were really a battery of psychological tests, and the women were Talwar’s trained researchers working toward doctorates in child psychology.

One of Talwar’s experiments, a variation on a classic experiment called the temptation-resistance paradigm, is known in the lab as “the Peeking Game.” Through a hidden camera, I’d watched Nick play it with another one of Talwar’s students, Cindy Arruda. She told Nick they were going to play a guessing game. Nick was to sit facing the wall and try to guess the identity of a toy Arruda brought out, based on the sound it made. If he was right three times, he’d win a prize. The first two were easy: a police car and a crying baby doll. Nick bounced in his chair with excitement when he got the answers right. Then Arruda brought out a soft, stuffed soccer ball and placed it on top of a greeting card that played music. She cracked the card, triggering it to play a music-box jingle of Beethoven’s Für Elise. Nick, of course, was stumped.

Arruda suddenly said she had to leave the room for a bit, promising to be right back. She admonished Nick not to peek at the toy while she was gone. Nick struggled not to, but at thirteen seconds, he gave in and looked. When Arruda returned, she could barely come through the door before Nick—facing the wall again—triumphantly announced, “A soccer ball!” Arruda told Nick to wait for her to get seated. Suddenly realizing he should sound unsure of his answer, he hesitantly asked, “A soccer ball?” Arruda said Nick was right, and when he turned to face her, he acted very pleased. Arruda asked Nick if he had peeked. “No,” he said quickly. Then a big smile spread across his face.

Without challenging him, or even a note of suspicion in her voice, Arruda asked Nick how he’d figured out the sound came from a soccer ball. Nick cupped his chin in his hands, then said, “The music had sounded like a ball.” Then: “The ball sounded black and white.” Nick added that the music sounded like the soccer balls he played with at school: They squeaked. And the music sounded like the squeak he heard when he kicked a ball. To emphasize this, his winning point, he brushed his hand against the side of the toy ball.

This experiment was not just a test to see if children cheat and lie under temptation. It was also designed to test a child’s ability to extend a lie, offering plausible explanations and avoiding what the scientists call “leakage”—inconsistencies that reveal the lie for what it is. Nick’s whiffs at covering up his lie would be scored later by coders who watched the videotape. So Arruda accepted without question the fact that soccer balls play Beethoven when they’re kicked and gave Nick his prize. He was thrilled. Seventy-six percent of kids Nick’s age take the chance to peek during the game, and when asked if they peeked, 95 percent lie about it. But sometimes the researcher will read the child a short storybook before she asks about the peeking. One story read aloud is The Boy Who Cried Wolf—the version in which both the boy and the sheep get eaten because of his repeated lies. Alternatively, they read George Washington and the Cherry Tree, in which young George confesses to his father that he chopped down the prized tree with his new hatchet. The story ends with his father’s reply: “George, I’m glad that you cut down the tree after all. Hearing you tell the truth instead of a lie is better than if I had a thousand cherry trees.” Now, which story do you think reduced lying more? When we surveyed 1,300 people, 75 percent thought The Boy Who Cried Wolf would work better. However, this famous fable actually did not cut down lying at all in Talwar’s experiments. In fact, after hearing the story, kids lied even a little more than normal. Meanwhile, hearing George Washington and the Cherry Tree—even when Washington was replaced with a nondescript character, eliminating the potential that his iconic celebrity might influence older kids—reduced lying a sizable 43 percent in kids. Although most kids lied in the control situation, the majority hearing George Washington told the truth.

The shepherd boy ends up suffering the ultimate punishment, but the fact that lies get punished is not news to children. Increasing the threat of punishment for lying only makes children hyperaware of the potential personal cost. It distracts children from learning how their lies affect others. In studies, scholars find that kids who live in threat of consistent punishment don’t lie less. Instead, they become better liars, at an earlier age—learning to get caught less often. Ultimately, it’s not fairy tales that stop kids from lying—it’s the process of socialization. But the wisdom in The Cherry Tree applies: According to Talwar, parents need to teach kids the worth of honesty, just like George Washington’s father did, as much as they need to say that lying is wrong.

The most disturbing reason children lie is that parents teach them to. According to Talwar, they learn it from us. “We don’t explicitly tell them to lie, but they see us do it. They see us tell the telemarketer, ‘I’m just a guest here.’ They see us boast and lie to smooth social relationships.” Consider how we expect a child to act when he opens a gift he doesn’t like. We instruct him to swallow all his honest reactions and put on a polite smile. Talwar runs an experiment where children play games to win a present, but when they finally receive the present, it’s a lousy bar of soap. After giving the kids a moment to overcome the shock, a researcher asks them how they like it. About a quarter of preschoolers can lie that they like the gift—by elementary school, about half. Telling this lie makes them extremely uncomfortable, especially when pressed to offer a few reasons why they like the bar of soap. Kids who shouted with glee when they won the Peeking Game suddenly mumble quietly and fidget. Meanwhile, the child’s parent usually cheers when the child comes up with the white lie. “Often, the parents are proud that their kids are ‘polite’—they don’t see it as lying,” Talwar remarks. She’s regularly amazed at parents’ seeming inability to recognize that white lies are still lies.

When adults are asked to keep diaries of their own lies, they admit to about one lie per every five social interactions, which works out to one per day, on average. The vast majority of these lies are white lies, lies to protect yourself or others, like telling the guy at work who brought in his wife’s muffins that they taste great or saying, “Of course this is my natural hair color.” Encouraged to tell so many white lies and hearing so many others, children gradually get comfortable with being disingenuous. Insincerity becomes, literally, a daily occurrence. They learn that honesty only creates conflict, and dishonesty is an easy way to avoid conflict. And while they don’t confuse white-lie situations with lying to cover their misdeeds, they bring this emotional groundwork from one circumstance to the other. It becomes easier, psychologically, to lie to a parent. So if the parent says, “Where did you get these Pokémon cards?! I told you, you’re not allowed to waste your allowance on Pokémon cards!” this may feel to the child very much like a white-lie scenario—he can make his father feel better by telling him the cards were extras from a friend.

Now, compare this with the way children are taught not to tattle. What grown-ups really mean by “Don’t tell” is that we want children to learn to work it out with one another first.
But tattling has received some scientific interest, and researchers have spent hours observing kids at play. They’ve learned that nine out of ten times, when a kid runs up to a parent to tell, that kid is being completely honest. And while it might seem to a parent that tattling is incessant, to a child that’s not the case—because for every time a child seeks a parent for help, there are fourteen instances when he was wronged but did not run to the parent for aid. So when the frustrated child finally comes to tell the parent the truth, he hears, in effect, “Stop bringing me your problems!” By the middle years of elementary school, a tattler is about the worst thing a kid can be called on the playground. So a child considering reporting a problem to an adult not only faces peer condemnation as a traitor but also recalls the reprimand “Work it out on your own.” Each year, the problems they deal with
grow exponentially. They watch other kids cut class, vandalize walls, and shoplift. To tattle is to act like a little kid. Keeping their mouth shut is easy; they’ve been encouraged to do so since they were little.

The era of holding back information from parents has begun. By withholding details about their lives, adolescents carve out a social domain and identity that are theirs alone, independent from their parents or other adult authority figures. To seek out a parent for help is, from a teen’s perspective, a tacit admission that he’s not mature enough to handle it alone. Having to tell parents about it can be psychologically emasculating, whether the confession is forced out of him or he volunteers it on his own. It’s essential for some things to be “none of your business.”

The big surprise in the research is when this need for autonomy is strongest. It’s not mild at 12, moderate at 15, and most powerful at 18. Darling’s scholarship shows that the objection to parental authority peaks around ages 14 to 15. In fact, this resistance is slightly stronger at age 11 than at 18. In popular culture, we think of high school as the risk years, but the psychological forces driving deception surge earlier than that.

In her study of teenage students, Darling also mailed survey questionnaires to the parents of the teenagers interviewed, and it was interesting how the two sets of data reflected on each other. First, she was struck by parents’ vivid fear of pushing their teens into outright hostile rebellion. “Many parents today believe the best way to get teens to disclose is to be more permissive and not set rules,” Darling says. Parents imagine a trade-off between being informed and being strict. Better to hear the truth and be able to help than be kept in the dark. Darling found that permissive parents don’t actually learn more about their children’s lives. “Kids who go wild and get in trouble mostly have parents who don’t set rules or standards. Their parents are loving and accepting no matter what the kids do. But the kids take the lack of rules as a sign their parents don’t care—that their parent doesn’t really want this job of being the parent.”

Pushing a teen into rebellion by having too many rules was a sort of statistical myth. “That actually doesn’t happen,” remarks Darling. She found that most rules-heavy parents don’t actually enforce them. “It’s too much work,” says Darling. “It’s a lot harder to enforce three rules than to set twenty rules.” A few parents managed to live up to the stereotype of the oppressive parent, with lots of psychological intrusion, but those teens weren’t rebelling. They were obedient. And depressed.

“Ironically, the type of parents who are actually most consistent in enforcing rules are the same parents who are most warm and have the most conversations with their kids,” Darling observes. They’ve set a few rules over certain key spheres of influence, and they’ve explained why the rules are there. They expect the child to obey them. Over life’s other spheres, they supported the child’s autonomy, allowing them freedom to make their own decisions. The kids of these parents lied the least. Rather than hiding twelve areas from their parents, they might be hiding as few as five.

In the thesaurus, the antonym of honesty is lying, and the opposite of arguing is agreeing. But in the minds of teenagers, that’s not how it works. Really, to an adolescent, arguing is the opposite of lying.

When Nancy Darling’s researchers interviewed the teenagers from Pennsylvania, they also asked the teens when and why they told the truth to their parents about things they knew their parents disapproved of. Occasionally they told the truth because they knew a lie wouldn’t fly—they’d be caught. Sometimes they told the truth because they just felt obligated, saying, “They’re my parents, I’m supposed to tell them.” But one important motivation that emerged was that many teens told their parents the truth when they were planning on doing something that was against the rules—in hopes their parents might give in and say it was okay. Usually, this meant an argument ensued, but it was worth it if a parent might budge. The average Pennsylvania teen was 244 percent more likely to lie than to protest a rule. In the families where there was less deception, however, there was a much higher ratio of arguing and complaining. The argument enabled the child to speak honestly. Certain types of fighting, despite the acrimony, were ultimately signs of respect—not of disrespect. But most parents don’t make this distinction in how they perceive arguments with their children. Dr. Tabitha Holmes of SUNY–New Paltz conducted extensive interviews asking mothers and adolescents, separately, to describe their arguments and how they felt about them. And there was a big difference. Forty-six percent of the mothers rated their arguments as being destructive to their relationships with their teens. Being challenged was stressful, chaotic, and (in their perception) disrespectful. The more frequently they fought, and the more intense the fights were, the more the mother rated the fighting as harmful. But only 23 percent of the adolescents felt that their arguments were destructive. Far more believed that fighting strengthened their relationship with their mothers. “Their perception of the fighting was really sophisticated, far more than we anticipated for teenagers,” notes Holmes. “They saw fighting as a way to see their parents in a new way, as a result of hearing their mother’s point of view be articulated.” What most surprised Holmes was learning that for the teens, fighting often, or having big fights, did not cause them to rate the fighting as harmful and destructive. Statistically, it made no difference at all. Certainly, there is a point in families where there is too much conflict, Holmes notes. “But we didn’t have anybody in our study with an extreme amount of conflict.” Instead, the variable that seemed to really matter was how the arguments were resolved. It will be many years before my own children become teenagers, but having lying on my radar screen has changed the way things work around the Bronson household. No matter how small, lies no longer go unnoticed. The moments slow down, and I have a better sense of how to handle them. Just the other day, my 6-year-old son, Luke, came home from school having learned a new phrase and a new attitude—quipping “I don’t care” snidely, and shrugging his shoulders to everything. He repeated “I don’t care” so many times I finally got frustrated and demanded to know if someone at school had taught him this dismissive phrase. He froze. And I could suddenly intuit the debate running through his head—should he lie to his dad, or rat out his friend? Recognizing the conflict, I told him that if he learned the phrase at school, he did not have to tell me who taught him the phrase. Telling me the truth was not going to get his friends in trouble. “Okay,” he said, relieved. 
“I learned it at school.” Then he told me he did care, and he gave me a hug. I haven’t heard it again. Does how we deal with a child’s lies really matter down the road in life? The irony of lying is that it’s both normal and abnormal behavior at the same time. It’s to be expected, and yet it can’t be disregarded. Dr. Bella DePaulo of the University of California, Santa Barbara, has devoted much of her career to adult lying. In one study, she had both college students
and community members enter a private room equipped with an audiotape recorder. Promising them complete confidentiality, DePaulo’s team instructed the subjects to recall the worst lie they ever told—with all the scintillating details. “I was fully expecting serious lies,” DePaulo remarks. “Stories of affairs kept from spouses, stories of squandering money, or being a salesperson and screwing money out of car buyers.” And she did hear those kinds of whoppers, including theft and even one murder. But to her surprise, a lot of the stories told were about when the subject was a mere child—and they were not, at first glance, lies of any great consequence. “One told of eating the icing off a cake, then telling her parents the cake came that way. Another told of stealing some coins from a sibling.” As these stories first started trickling in, DePaulo scoffed, thinking, “C’mon, that’s the worst lie you’ve ever told?” But the stories of childhood kept coming, and DePaulo had to create a category in her analysis just for them.

“I had to reframe my understanding to consider what it must have been like as a child to have told this lie,” she recalls. “For young kids, their lie challenged their self-concept that they were a good child, and that they did the right thing.” Many subjects commented on how that momentous lie early in life established a pattern that affected them thereafter. “We had some who said, ‘I told this lie, I got caught, and I felt so badly, I vowed to never do it again.’ Others said, ‘Wow, I never realized I’d be so good at deceiving my father, I can do this all the time.’ The lies they tell early on are meaningful. The way parents react can really affect lying.”

Talwar says parents often entrap their kids, putting them in positions to lie and testing their honesty unnecessarily. Last week, I put my 3-year-old daughter in that exact situation. I noticed she had scribbled on the dining table with a washable marker. Disapprovingly, I asked, “Did you draw on the table, Thia?” In the past, she would have just answered honestly, but my tone gave away that she’d done something wrong. Immediately, I wished I could retract the question. I should have just reminded her not to write on the table, slipped newspaper under her coloring book, and washed the ink away. Instead, I had done just as Talwar had warned against. “No, I didn’t,” my daughter said, lying to me for the first time. For that stain, I had only myself to blame.

Additional reporting by Ashley Merryman.

In Defense of the Beta Blocker

Is this a performance drug that could actually increase the fairness of Olympic contests? By Carl Elliott. The Atlantic Online, August 20, 2008

Nobody seemed terribly surprised when two North Korean athletes tested positive for performance enhancing drugs at the Olympics last week. By now, stories of disgraced athletes sound familiar almost to the point of tedium. But if you had the patience to read beyond the headlines, you might have noticed
something unusual about this particular scandal—namely, the nature of the banned drug the athletes were using. That drug was propranolol, and the athletes using it were pistol shooters. Propranolol is not exactly a cutting-edge performance enhancer. If you are familiar with propranolol, it is probably because you (or your parents) take it for high blood pressure. Its value as a performance enhancer comes from its ability to mask the effects of anxiety, such as the tremor that might cause one’s hand to shake when aiming a pistol. That propranolol can improve athletic performance is clear, and not just for pistol shooters. Whether it ought to be banned is a more complicated question. Propranolol comes from a class of drugs known as beta blockers, which lower blood pressure by blocking particular sympathetic nervous system receptors. These receptors also happen to be the ones that get activated in times of fear or anxiety, which is why beta blockers are useful as performance enhancers. A beta blocker can keep a person’s hands from trembling, his heart from pounding, and his forehead from beading up with sweat. It can also keep his voice from quavering, which is why shy people sometimes sneak a beta blocker before giving a big speech or a public presentation. Beta blockers do not directly affect a person’s mental state; taking a beta blocker before firing a pistol is not like taking a Valium, or tossing back a shot of Jack Daniels, because beta blockers do not alleviate anxiety so much as block the outward signs of anxiety. A pistol shooter on beta blockers will still be nervous, but his nervousness will be less likely to make his hand tremble. Beta blockers seem to be especially good performance enhancers when the performance in question involves an anxiety-producing public setting. This is because a large part of the anxiety of performing in public comes from the worry that one’s anxiety will become outwardly obvious. Most people who worry about public speaking, for example, aren't worried that they'll flub their lines, trip and fall as they approach the podium, or deliver an hour-long speech on television with their pants unzipped. They worry that their anxiety will become apparent to the audience. They're terrified that their hands will tremble, that their voices will become
high-pitched and quivering, and that beads of sweat will appear on their foreheads and upper lip, like Richard Nixon trying to explain Watergate. This is why beta blockers are so useful; people who have taken a drug that blocks the outward effects of their anxiety become less anxious—not because the drug is affecting their brain, but because their worst fears are not being realized.

Beta blockers have been around since the 1960s, but it took a while before anyone noticed how useful they were for performance anxiety. Probably the first performers to start using them widely were musicians, especially classical musicians, whose hands can get clammy or tremble during a concert performance. In the mid-’70s, a team of British researchers tested the effects of a beta blocker on the performances of skilled violinists and other string musicians. They made sure that the musicians were playing under maximally stressful conditions by booking them in an impressive concert hall. They also invited the press to attend, and recorded all the sessions. The musicians were asked to perform four times each, twice on placebo and twice on beta blockers, and their performances were scored by professional judges. Not only did the musicians tremble less on the beta blocker, they also performed better. Usually the improvement was minimal, but for a handful of musicians it was dramatic.

From a competitive standpoint, this is what makes beta blockers so interesting: they seem to level the playing field for anxious and non-anxious performers, helping nervous performers much more than they help performers who are naturally relaxed. In the British study, for example, the musician who experienced the greatest benefit was the one with the worst nervous tremor. This player's score increased by a whopping 73%, whereas the musicians who were not nervous saw hardly any effect at all.

One of the most compelling arguments against performance enhancing drugs is that they produce an arms race among competitors, who feel compelled to use the drugs even when they would prefer not to, simply to stay competitive. But this argument falls away if the effects of the drug are distributed so unequally. If it's only the nervous performers who are helped by beta blockers, there's no reason for anyone other than nervous performers to use them. And even if everyone did feel compelled to use beta blockers, it's unlikely that anyone would experience untoward health effects, because beta blockers are safe, cheap, and their effects wear off in a few hours. So unlike users of human growth hormone and steroids, users of beta blockers don’t have to worry about their heads growing or their testicles shrinking. You don’t even have to take them regularly. All you have to do is take a small, 10 mg tablet about an hour before your performance.

Beta blockers are banned in certain sports, like archery and pistol shooting, because they're seen as unfairly improving a user’s skills. But there is another way to see beta blockers—not as improving someone’s skills, but as preventing the effects of anxiety from interfering with their skills. Taking a beta blocker, in other words, won’t turn you into a better violinist, but it will prevent your anxiety from interfering with your public performance. In a music competition, then, a beta blocker can arguably help the best player win. Does the same hold true for pistol shooting? That beta blockers generally help pistol shooters seems clear. It's even been demonstrated in a controlled study.
A group of Swedish researchers found that the performance of a group of shooters was improved by an average of about 13% upon the administration of beta blockers. The improvement was deemed to be the result of the effect of the beta blocker on hand tremor. (What was unclear from the study is whether the beta
blocker helped nervous shooters more than calm ones, and whether its effect would have been any different if the shooters had performed in a stress-inducing public competition, like the London musicians.) Even assuming that the effect was the same for the Swedish shooters as it was for the London violinists, however, it’s not obvious whether or not the drug should be banned. The question is whether the ability to perform the activity in public is integral to the activity itself. For some sports, being able to perform under stress in front of a crowd is clearly a crucial part of the game. Back in March, when the Davidson College basketball team was making its amazing run through the NCAA tournament, for example, the real thrill came from the ice-water-in-the-veins performance by shooting guard Steph Curry. The Davidson games were so unbearably intense that I thought my head would explode just from watching, yet it was always at the point of maximum tension that Curry’s 3-pointers would start dropping miraculously through the net. In a sport like basketball, where a player’s performance in public under pressure is critical to the game, taking a drug that improves public performance under pressure would feel like cheating. So the question for pistol shooting is this: should we reward the shooter who can hit the target most accurately, or the one who can hit it most accurately under pressure in public? Given that we’ve turned big-time sports into a spectator activity, we might well conclude that the answer is the second—it is the athlete who performs best in front of a crowd who should be rewarded. But that doesn’t necessarily mean that that athlete is really the best. Nor does it mean that using beta blockers is necessarily a disgrace in other situations. If Barack Obama decides to take a beta blocker before his big stadium speech at the Democratic Convention next week, I doubt his audience will feel cheated. And if my neurosurgeon were to use beta blockers before performing a delicate operation on my spine, I am certain that I would feel grateful.
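The "leveling the playing field" claim made earlier in the piece, that a drug blocking only the outward signs of anxiety helps shaky performers far more than calm ones, can be made concrete with a toy calculation. The sketch below is purely illustrative: the scoring model and every number in it are invented rather than taken from the British or Swedish studies described above.

# Toy model of the 'leveling' argument: score = underlying skill minus
# an anxiety-driven penalty. All numbers are invented for illustration.

def score(skill, anxiety, on_beta_blocker=False, tremor_cost=0.8):
    """Model the drug as suppressing most of the anxiety penalty
    without adding anything to underlying skill."""
    penalty = anxiety * tremor_cost
    if on_beta_blocker:
        penalty *= 0.1  # outward signs (tremor, quaver) largely blocked
    return skill - penalty

performers = {"calm virtuoso": (90, 5), "nervous virtuoso": (90, 40)}
for name, (skill, anxiety) in performers.items():
    off = score(skill, anxiety)
    on = score(skill, anxiety, on_beta_blocker=True)
    print(f"{name:17s} off drug: {off:5.1f}  on drug: {on:5.1f}  gain: {on - off:4.1f}")

In this picture the drug raises no one's underlying skill; it simply stops anxiety from hiding it, so the nervous performer gains a great deal while the calm one gains almost nothing. That is the sense in which, on the essay's argument, a beta blocker could help "the best player win" rather than fuel an arms race.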

Why I had to lie to my dying mother

American writer Susan Sontag was terrified of death. She beat cancer in the 1970s, and again in the 1990s, but third time around she wasn't so lucky. In a tender account of her final illness, her son David Rieff recalls how he colluded with his mother's fantasy that she wasn't dying - and what this ultimately cost him after she had gone.

David Rieff, The Observer, Sunday May 18, 2008

When my mother Susan Sontag was diagnosed in 2004 with myelodysplastic syndrome, a precursor to a rapidly progressive leukaemia, she had already survived stage IV breast cancer in 1975 that had spread into the lymph system, despite her doctors having held out little hope of her doing so, and a uterine sarcoma in 1998. 'There are some survivors, even in the worst cancers,' she would often say during the nearly two years she received what even for the time was an extremely harsh regime of chemotherapy for the breast cancer. 'Why shouldn't I be one of them?'

After that first cancer, mutilated but alive (the operation she underwent not only removed one of her breasts but the muscles of the chest wall and part of an armpit), she wrote her defiant book Illness as Metaphor. Part literary study, part polemic, it was a fervent plea to treat illness as illness, the luck of the genetic draw, and not the result of sexual inhibition, the repression of feeling, and the rest - that torrid brew of low-rent Wilhelm Reich and that mix of masochism and hubris that says that somehow people who got ill had brought it on themselves. In the book, my mother contrasted the perennial stigma attached to cancer with the romanticising of tuberculosis in 19th-century literature (La bohème and all that). In the notebooks for the book that I found after her death, I discovered one entry that stopped me cold. 'Leukaemia,' it read, 'the only "clean" cancer.' Clean illness, indeed. My poor mother: to think of what awaited her.

So terrified of death that she could not bear to speak of it, my mother was also obsessed with it. Her second novel was actually called Death Kit and ends in an ossuary. She was an inveterate visitor of cemeteries. And she kept a human skull on the ledge behind her work table, nestled among the photographs of writers she admired (there were no family pictures) and various knick-knacks. 'Would I think about it differently if I knew whether the skull had been a man or woman?' she wrote in one of her journals. Obsessed with death, but never resigned to it: that, at least, is how I always thought of her. It gave her the resolve to undergo any treatment, no matter how brutal, no matter how slim her chances. In the 1970s, she gambled and won; in 2004, she gambled and lost.

Seventy-one is not age 42, and awful and often lethal as breast cancer is, remissions are not uncommon even in advanced cases. But the pitiless thing about myelodysplastic syndrome is that unlike breast cancer and many other cancers, including some blood cancers, it does not remit. If you are diagnosed with MDS, as my mother quickly discovered to her horror, there really is only one hope - to receive an adult stem-cell transplant in which the ill person's defective bone marrow is replaced with cells from the bone marrow of a healthy person. What made my mother's situation even worse was that even at the most experimentally oriented hospitals, it was rare for such transplants to be performed on any patient over 50.
And as far as I could find out, as I surfed the web trying to get up to speed on MDS (an act that can give you a false sense of having understood: information is not knowledge), successes in patients beyond their mid-sixties were rarer still. In other words, my mother's chances of survival were minuscule. Given such a prognosis, I suppose that my mother might have decided simply to accept that she was dying. But my
mother was about as far from Dr Elisabeth Kübler-Ross's famous and influential (not least among doctors themselves) five-stage theory of dying - denial, anger, bargaining, depression and finally acceptance - as it was possible for a human being to be. She had been ill for so much of her life, from crippling childhood asthma to her three cancers. And death was no stranger to her; she had been surrounded by it in the cancer hospitals where she was treated, in the Aids wards of New York in the 1980s where she saw three of her closest friends die, and in war zones such as North Vietnam and Sarajevo. But no amount of familiarity could lessen the degree to which the idea of death was unbearable to her. In her eyes, mortality seemed as unjust as murder. Subjectively, there was simply no way she could ever accept it.

I do not think this was denial in the 'psychobabble', Kübler-Ross sense. My mother was not insane; she knew perfectly well that she was going to die. It was just that she could never reconcile herself to the thought.

So to those who knew her well, there was nothing surprising about her decision to go for the transplant. Life, the chance to live some years more, was what she wanted, she told her principal doctor, Stephen Nimer, who had warned her of the physical suffering a bone marrow transplant entailed, not 'quality of life'. In this attitude, she never wavered, though in fact just about everything that could go wrong after the transplant did go wrong, to the point where at her death her body, virtually from the inside of her mouth to the bottoms of her feet, was covered in sores and bruises. But I do not believe that even had she been able fully to take in from the beginning how much she was going to suffer, that she would not still have rolled the dice and risked everything for even a little more time in this world - above all, more time to write. In her mind, even at 71, my mother was always starting fresh, figuratively as well as literally turning a new page. For a writer as ambivalent, to put it gently, about her own American-ness as she had been since childhood, it was that most American of attributes - the refutation of F Scott Fitzgerald's quip that 'there are no second acts in American lives'.

But if my mother was staunch in her commitment to trying to survive at any cost, she understood perfectly well just how dire an MDS diagnosis really was. On that question, even the most cursory look at the relevant sites on the web allowed for no doubt whatsoever. In those very first days after she realised that she was ill again, she was simply in despair. But my mother's desire to live was so powerful, so much stronger in her than any countervailing reality, that without denying the lethality of MDS, she willed herself into believing that she could again be the exception as she had been when stricken with breast cancer three decades before. Was this denial, à la Kübler-Ross? I can see how it could be described this way, but I don't believe so. My mother's refusal to accept death was not one 'stage' in the process leading first to acceptance and then (perhaps conveniently for the care givers who could parse their patients' deaths in this way?) to extinction itself. It was at the core of her consciousness. She was determined to live because she simply could not imagine giving in, as she put it to me once, long before her final cancer, to the imperative of dying. I suppose, as was once said of Samuel Beckett, that her quarrel too was with the Book of Genesis.
But she could not keep up this determination to fight for her life against all odds on her own. That was where the people closest to her came in, where, without immodesty, for it was a position I found it almost unbearable to be in, I came in. In order for her to believe that she would be cured, my mother needed to believe that her loved ones were convinced of this as well. Virtually from the onset of her illness, what I felt she wanted from me - she never said this explicitly but the message was clear enough - was to find hopeful things to say about her prospects. She wanted optimistic or, at least, less pessimistic ways of construing even bad news, and - a kind of moral cheerleading, I suppose, and support for her hope, belief, call it what you will - that despite her advanced age and the spectacularly difficult cytogenetics of her specific case that she would be special, as she often put it, one more time and beat the odds. If I am being honest, I cannot say that I ever really thought my mother had much chance of making it. But equally, it never really occurred to me but to do whatever I could to buttress and abet her in her belief that she could survive. In those first few weeks after she was diagnosed, but before she made the decision to go to the Fred Hutchinson Cancer Centre in Seattle, Washington, to receive the transplant, I did keep wondering whether, given the fact that her chances were so poor and she was going to suffer so much, perhaps I should be candid with her. But she so plainly did not want to hear this that I never really came close to doing so.

She was so afraid of dying. At the time, and even knowing what I know now, I thought that it would have been simply impossible for her to resign herself to extinction as Kübler-Ross insisted most patients eventually did. (Though I wonder if the doctor drew this from data or instead saw what she wanted to see in her own patients, rejecting the lessons of those who did not fit her template?) Instead of dying in physical agony, I thought, my mother would have died in psychological terror, abject and inconsolable as she was in the first few days after her diagnosis until she righted herself. And, of course, there was always the very distant possibility that she would make it - the reason her doctors acquiesced in her desire to go ahead with the transplant. Given that those were the choices I saw, it was possible, though by no means easy, for me to opt for not being honest with her and for in effect concocting lawyer's brief after Jesuitical argument in support of what my mother so plainly wanted to hear. Cheerleading her to her grave was the way I sometimes thought of it. Believing one has no choice is not the same thing as believing one is doing the right thing. And that's the question: did I do the right thing? My doubts will never leave me, as well they shouldn't, but my answer cannot be entirely straightforward. I am convinced that I did what, implicitly, she was asking of me. Obviously, I know that it is not uncommon for parents to refuse to speak about their own deaths to their children. But I was in her hospital room in Seattle when, months after the transplant, by which time she could not roll over in bed unassisted and was hooked up to 300 metres of tubes infusing the chemicals that were keeping her alive but could do nothing to improve her condition, her doctors came in to tell her that the transplant had failed and the leukaemia was now full-blown. She screamed out in surprise and terror. 'But this means I'm dying,' she kept saying, flailing her emaciated, abraded arms and pounding the mattress. So do not tell me she knew all along. The awful paradox is that it was seeing the depths of her fear and witnessing her refusal to accept what was happening to her - until the last two weeks of her life, when she did know she was dying, even if there was no acceptance, she kept asking for some new, experimental treatment - that reassures me somewhat that the choice I made was defensible. I use the word advisedly. A choice that involves complicity in the decision to undergo that much physical pain, however willingly undertaken on her part and willingly abetted on mine, surely can never simply be called right. It wasn't as if I was the one undergoing all that suffering. Of course, I was out of my depth. We all are, I think, since nothing really prepares you for the mortal illness of a loved one. Talking about it, thinking about it, abstractly trying to conceive of it, even dealing with it with people who are less close to you - none of those things seems in the end to be central to what you have to do. So I fall back on the phrase that kept going around in my head during the nine months of my mother's dying: 'She has the right to her own death.' But it is one thing to believe, as I did and do, that my mother owed neither me nor anyone else anything whatever with regard to the matter of her death, and quite another to pretend that the decisions which she took and the way she involved me in those decisions came without a cost. 
By choosing - if it even was as volitional as that - to go to her grave refusing until literally the last two weeks before she died to accept, let alone admit to anyone else that she was dying, my mother made it impossible for those close to her to say goodbye properly. It was impossible even to tell her - in a deep way, I mean - that I loved her because to have done so would have been to say: 'You're dying.' And if that wasn't on, then there was no chance whatsoever of real conversations about the past since all she really wanted to focus on was the future, on 'all the things I need to do when I finally get out of this hospital bed', as she often put it while lying on that bed out of which she never would rise. Would it have been easier for me? Certainly. But what might have been helpful to me after she was gone would have been terrifying to her. I felt I had to defer to her. But it was not easy then and in some ways, three-and-a-half years after my mother's death, it is even harder now. At the time, I understood that, in order to be of help to her, I had to not think about what I was virtually certain would happen - that not only would she not survive but she had little hope of dying what is sometimes called 'a good death', if that even exists. My hunch in any case is that all this talk of good deaths has little to do with the dying and everything to do with consoling their loved ones and, for that matter, the doctors and nurses who treated them.

But, for me at least, not thinking about what I knew meant, to some extent, not thinking at all, because if I were really thinking all the time and allowing myself to be fully alert, I just never could have pulled it off. There was a certain numb comfort there. For I did want to have certain kinds of conversations with her, did want to tell her things and ask her others. Not thinking made the knowledge that these would probably never take place bearable as well. Her dying made everything seem trivial, weightless by comparison. It is less easy to be reconciled to now. I am anything but convinced that I am a good analyst of my own motives, but I have wondered since I wrote my memoir of my mother's death why I did so. I have never been confessionally inclined, and during my mother's illness, I very consciously chose not to take any notes because I thought that to do so would be to seek and perhaps gain a measure of detachment I neither wanted nor felt entitled to. And for a long time after my mother died, I believed that I would not write anything. I still believe that I would not have done so had I been able to say goodbye to my mother properly. I am not talking about what in the United States is called 'closure', the idea that somehow there is a way of drawing a psychological line under an event and, as the expression goes, of 'moving on'. I don't believe there is any such thing and, if there is, it is not available to me. But I do not pretend to have served anyone but myself. Memoirs, like cemeteries, are for the living.

Susan Sontag and son
Born 13 January 1933 in New York City. Her father, Jack Rosenblatt, died when she was five, and her mother, Mildred Jacobsen, married Nathan Sontag. They move to Los Angeles.
1949 Admitted to the University of Chicago. Continues graduate study at Harvard, St Anne's College, Oxford and the Sorbonne.
1950 Sontag marries Philip Rieff, a young teacher at Chicago, after a 10-day courtship. They divorce in 1958.
1952 David Rieff is born in Boston, Massachusetts, the only son of Susan and Philip.
1978-89 David Rieff works as a senior editor at Farrar, Straus and Giroux with authors including Joseph Brodsky, Philip Roth - and Susan Sontag.
Sontag's books include four novels; a collection of short stories, I, etcetera (1977); several plays, and eight works of non-fiction, including On Photography (1976) and Regarding the Pain of Others (2003). Rieff's books include Slaughterhouse: Bosnia and the Failure of the West (1995) and At the Point of a Gun: Democratic Dreams and Armed Intervention (2005).
· Swimming in a Sea of Death: A Son's Memoir is published by Granta, £12.99.
guardian.co.uk © Guardian News and Media Limited 2008

What Should a Billionaire Give – and What Should You? - New York Times

http://www.nytimes.com/2006/12/17/magazine/17charity.t.html?pagewanted=print

December 17, 2006

What Should a Billionaire Give – and What Should You?
By PETER SINGER

What is a human life worth? You may not want to put a price tag on it. But if we really had to, most of us would agree that the value of a human life would be in the millions. Consistent with the foundations of our democracy and our frequently professed belief in the inherent dignity of human beings, we would also agree that all humans are created equal, at least to the extent of denying that differences of sex, ethnicity, nationality and place of residence change the value of a human life. With Christmas approaching, and Americans writing checks to their favorite charities, it’s a good time to ask how these two beliefs — that a human life, if it can be priced at all, is worth millions, and that the factors I have mentioned do not alter the value of a human life — square with our actions. Perhaps this year such questions lurk beneath the surface of more family discussions than usual, for it has been an extraordinary year for philanthropy, especially philanthropy to fight global poverty. For Bill Gates, the founder of Microsoft, the ideal of valuing all human life equally began to jar against reality some years ago, when he read an article about diseases in the developing world and came across the statistic that half a million children die every year from rotavirus, the most common cause of severe diarrhea in children. He had never heard of rotavirus. “How could I never have heard of something that kills half a million children every year?” he asked himself. He then learned that in developing countries, millions of children die from diseases that have been eliminated, or virtually eliminated, in the United States. That shocked him because he assumed that, if there are vaccines and treatments that could save lives, governments would be doing everything possible to get them to the people who need them. As Gates told a meeting of the World Health Assembly in Geneva last year, he and his wife, Melinda, “couldn’t escape the brutal conclusion that — in our world today — some lives are seen as worth saving and others are not.” They said to themselves, “This can’t be true.” But they knew it was. Gates’s speech to the World Health Assembly concluded on an optimistic note, looking forward to the next decade when “people will finally accept that the death of a child in the developing world is just as tragic as the death of a child in the developed world.” That belief in the equal value of all human life is also prominent on the Web site of the Bill and Melinda Gates Foundation, where under Our Values we read: “All lives — no matter where they are being led — have equal value.”

We are very far from acting in accordance with that belief. In the same world in which more than a billion people live at a level of affluence never previously known, roughly a billion other people struggle to survive on the purchasing power equivalent of less than one U.S. dollar per day. Most of the world’s poorest people are undernourished, lack access to safe drinking water or even the most basic health services and cannot send their children to school. According to Unicef, more than 10 million children die every year — about 30,000 per day — from avoidable, poverty-related causes. Last June the investor Warren Buffett took a significant step toward reducing those deaths when he pledged $31 billion to the Gates Foundation, and another $6 billion to other charitable foundations. Buffett’s pledge, set alongside the nearly $30 billion given by Bill and Melinda Gates to their foundation, has made it clear that the first decade of the 21st century is a new “golden age of philanthropy.” On an inflation-adjusted basis, Buffett has pledged to give more than double the lifetime total given away by two of the philanthropic giants of the past, Andrew Carnegie and John D. Rockefeller, put together. Bill and Melinda Gates’s gifts are not far behind. Gates’s and Buffett’s donations will now be put to work primarily to reduce poverty, disease and premature death in the developing world. According to the Global Forum for Health Research, less than 10 percent of the world’s health research budget is spent on combating conditions that account for 90 percent of the global burden of disease. In the past, diseases that affect only the poor have been of no commercial interest to pharmaceutical manufacturers, because the poor cannot afford to buy their products. The Global Alliance for Vaccines and Immunization (GAVI), heavily supported by the Gates Foundation, seeks to change this by guaranteeing to purchase millions of doses of vaccines, when they are developed, that can prevent diseases like malaria. GAVI has also assisted developing countries to immunize more people with existing vaccines: 99 million additional children have been reached to date. By doing this, GAVI claims to have already averted nearly 1.7 million future deaths. Philanthropy on this scale raises many ethical questions: Why are the people who are giving doing so? Does it do any good? Should we praise them for giving so much or criticize them for not giving still more? Is it troubling that such momentous decisions are made by a few extremely wealthy individuals? And how do our judgments about them reflect on our own way of living? Let’s start with the question of motives. The rich must — or so some of us with less money like to assume — suffer sleepless nights because of their ruthlessness in squeezing out competitors, firing workers, shutting down plants or whatever else they have to do to acquire their wealth. When wealthy people give away money, we can always say that they are doing it to ease their consciences or generate favorable publicity. It has been suggested — by, for example, David Kirkpatrick, a senior editor at Fortune magazine — that Bill Gates’s turn to philanthropy was linked to the antitrust problems Microsoft had in the U.S. and the European Union. Was Gates, consciously or subconsciously, trying to improve his own image and that of his company?

This kind of sniping tells us more about the attackers than the attacked. Giving away large sums, rather than spending the money on corporate advertising or developing new products, is not a sensible strategy for increasing personal wealth. When we read that someone has given away a lot of their money, or time, to help others, it challenges us to think about our own behavior. Should we be following their example, in our own modest way? But if the rich just give their money away to improve their image, or to make up for past misdeeds — misdeeds quite unlike any we have committed, of course — then, conveniently, what they are doing has no relevance to what we ought to do. A famous story is told about Thomas Hobbes, the 17th-century English philosopher, who argued that we all act in our own interests. On seeing him give alms to a beggar, a cleric asked Hobbes if he would have done this if Christ had not commanded us to do so. Yes, Hobbes replied, he was in pain to see the miserable condition of the old man, and his gift, by providing the man with some relief from that misery, also eased Hobbes’s pain. That reply reconciles Hobbes’s charity with his egoistic theory of human motivation, but at the cost of emptying egoism of much of its bite. If egoists suffer when they see a stranger in distress, they are capable of being as charitable as any altruist. Followers of the 18th-century German philosopher Immanuel Kant would disagree. They think an act has moral worth only if it is done out of a sense of duty. Doing something merely because you enjoy doing it, or enjoy seeing its consequences, they say, has no moral worth, because if you happened not to enjoy doing it, then you wouldn’t do it, and you are not responsible for your likes and dislikes, whereas you are responsible for your obedience to the demands of duty. Perhaps some philanthropists are motivated by their sense of duty. Apart from the equal value of all human life, the other “simple value” that lies at the core of the work of the Gates Foundation, according to its Web site, is “To whom much has been given, much is expected.” That suggests the view that those who have great wealth have a duty to use it for a larger purpose than their own interests. But while such questions of motive may be relevant to our assessment of Gates’s or Buffett’s character, they pale into insignificance when we consider the effect of what Gates and Buffett are doing. The parents whose children could die from rotavirus care more about getting the help that will save their children’s lives than about the motivations of those who make that possible. Interestingly, neither Gates nor Buffett seems motivated by the possibility of being rewarded in heaven for his good deeds on earth. Gates told a Time interviewer, “There’s a lot more I could be doing on a Sunday morning” than going to church. Put them together with Andrew Carnegie, famous for his freethinking, and three of the four greatest American philanthropists have been atheists or agnostics. (The exception is John D. Rockefeller.) In a country in which 96 percent of the population say they believe in a supreme being, that’s a striking fact. It means that in one sense, Gates and Buffett are probably less self-interested in their charity than someone like Mother Teresa, who as a pious Roman Catholic believed in reward and punishment in the afterlife. More important than questions about motives are questions about whether there is an obligation for the rich to give, and if so, how much they

should give. A few years ago, an African-American cabdriver taking me to the Inter-American Development Bank in Washington asked me if I worked at the bank. I told him I did not but was speaking at a conference on development and aid. He then assumed that I was an economist, but when I said no, my training was in philosophy, he asked me if I thought the U.S. should give foreign aid. When I answered affirmatively, he replied that the government shouldn’t tax people in order to give their money to others. That, he thought, was robbery. When I asked if he believed that the rich should voluntarily donate some of what they earn to the poor, he said that if someone had worked for his money, he wasn’t going to tell him what to do with it. At that point we reached our destination. Had the journey continued, I might have tried to persuade him that people can earn large amounts only when they live under favorable social circumstances, and that they don’t create those circumstances by themselves. I could have quoted Warren Buffett’s acknowledgment that society is responsible for much of his wealth. “If you stick me down in the middle of Bangladesh or Peru,” he said, “you’ll find out how much this talent is going to produce in the wrong kind of soil.” The Nobel Prize-winning economist and social scientist Herbert Simon estimated that “social capital” is responsible for at least 90 percent of what people earn in wealthy societies like those of the United States or northwestern Europe. By social capital Simon meant not only natural resources but, more important, the technology and organizational skills in the community, and the presence of good government. These are the foundation on which the rich can begin their work. “On moral grounds,” Simon added, “we could argue for a flat income tax of 90 percent.” Simon was not, of course, advocating so steep a rate of tax, for he was well aware of disincentive effects. But his estimate does undermine the argument that the rich are entitled to keep their wealth because it is all a result of their hard work. If Simon is right, that is true of at most 10 percent of it. In any case, even if we were to grant that people deserve every dollar they earn, that doesn’t answer the question of what they should do with it. We might say that they have a right to spend it on lavish parties, private jets and luxury yachts, or, for that matter, to flush it down the toilet. But we could still think that for them to do these things while others die from easily preventable diseases is wrong. In an article I wrote more than three decades ago, at the time of a humanitarian emergency in what is now Bangladesh, I used the example of walking by a shallow pond and seeing a small child who has fallen in and appears to be in danger of drowning. Even though we did nothing to cause the child to fall into the pond, almost everyone agrees that if we can save the child at minimal inconvenience or trouble to ourselves, we ought to do so. Anything else would be callous, indecent and, in a word, wrong. The fact that in rescuing the child we may, for example, ruin a new pair of shoes is not a good reason for allowing the child to drown. Similarly if for the cost of a pair of shoes we can contribute to a health program in a developing country that stands a good chance of saving the life of a child, we ought to do so. 
Perhaps, though, our obligation to help the poor is even stronger than this example implies, for we are less innocent than the passer-by who did nothing to cause the child to fall into the pond. Thomas Pogge, a philosopher at Columbia University, has argued that at least some of our affluence comes at the expense of the poor. He bases this claim not simply on the usual critique of the barriers that Europe and the United

States maintain against agricultural imports from developing countries but also on less familiar aspects of our trade with developing countries. For example, he points out that international corporations are willing to make deals to buy natural resources from any government, no matter how it has come to power. This provides a huge financial incentive for groups to try to overthrow the existing government. Successful rebels are rewarded by being able to sell off the nation’s oil, minerals or timber. In their dealings with corrupt dictators in developing countries, Pogge asserts, international corporations are morally no better than someone who knowingly buys stolen goods — with the difference that the international legal and political order recognizes the corporations, not as criminals in possession of stolen goods but as the legal owners of the goods they have bought. This situation is, of course, beneficial for the industrial nations, because it enables us to obtain the raw materials we need to maintain our prosperity, but it is a disaster for resource-rich developing countries, turning the wealth that should benefit them into a curse that leads to a cycle of coups, civil wars and corruption and is of little benefit to the people as a whole. In this light, our obligation to the poor is not just one of providing assistance to strangers but one of compensation for harms that we have caused and are still causing them. It might be argued that we do not owe the poor compensation, because our affluence actually benefits them. Living luxuriously, it is said, provides employment, and so wealth trickles down, helping the poor more effectively than aid does. But the rich in industrialized nations buy virtually nothing that is made by the very poor. During the past 20 years of economic globalization, although expanding trade has helped lift many of the world’s poor out of poverty, it has failed to benefit the poorest 10 percent of the world’s population. Some of the extremely poor, most of whom live in sub-Saharan Africa, have nothing to sell that rich people want, while others lack the infrastructure to get their goods to market. If they can get their crops to a port, European and U.S. subsidies often mean that they cannot sell them, despite — as for example in the case of West African cotton growers who compete with vastly larger and richer U.S. cotton producers — having a lower production cost than the subsidized producers in the rich nations. The remedy to these problems, it might reasonably be suggested, should come from the state, not from private philanthropy. When aid comes through the government, everyone who earns above the tax-free threshold contributes something, with more collected from those with greater ability to pay. Much as we may applaud what Gates and Buffett are doing, we can also be troubled by a system that leaves the fate of hundreds of millions of people hanging on the decisions of two or three private citizens. But the amount of foreign development aid given by the U.S. government is, at 22 cents for every $100 the nation earns, about the same, as a percentage of gross national income, as Portugal gives and about half that of the U.K. Worse still, much of it is directed where it best suits U.S. strategic interests — Iraq is now by far the largest recipient of U.S. development aid, and Egypt, Jordan, Pakistan and Afghanistan all rank in the Top 10. Less than a quarter of official U.S. development aid — barely a nickel in every $100 of our G.N.I. 
— goes to the world’s poorest nations.

Adding private philanthropy to U.S. government aid improves this picture, because Americans privately give more per capita to international philanthropic causes than the citizens of almost any other nation. Even when private donations are included, however, countries like Norway, Denmark, Sweden and the Netherlands give three or four times as much foreign aid, in proportion to the size of their economies, as the U.S. gives — with a much larger percentage going to the poorest nations. At least as things now stand, the case for philanthropic efforts to relieve global poverty is not susceptible to the argument that the government has taken care of the problem. And even if official U.S. aid were better directed and comparable, relative to our gross domestic product, with that of the most generous nations, there would still be a role for private philanthropy. Unconstrained by diplomatic considerations or the desire to swing votes at the United Nations, private donors can more easily avoid dealing with corrupt or wasteful governments. They can go directly into the field, working with local villages and grass-roots organizations. Nor are philanthropists beholden to lobbyists. As The New York Times reported recently, billions of dollars of U.S. aid is tied to domestic goods. Wheat for Africa must be grown in America, although aid experts say this often depresses local African markets, reducing the incentive for farmers there to produce more. In a decision that surely costs lives, hundreds of millions of condoms intended to stop the spread of AIDS in Africa and around the world must be manufactured in the U.S., although they cost twice as much as similar products made in Asia. In other ways, too, private philanthropists are free to venture where governments fear to tread. Through a foundation named for his wife, Susan Thompson Buffett, Warren Buffett has supported reproductive rights, including family planning and pro-choice organizations. In another unusual initiative, he has pledged $50 million for the International Atomic Energy Agency’s plan to establish a “fuel bank” to supply nuclear-reactor fuel to countries that meet their nuclear-nonproliferation commitments. The idea, which has been talked about for many years, is widely agreed to be a useful step toward discouraging countries from building their own facilities for producing nuclear fuel, which could then be diverted to weapons production. It is, Buffett said, “an investment in a safer world.” Though it is something that governments could and should be doing, no government had taken the first step. Aid has always had its critics. Carefully planned and intelligently directed private philanthropy may be the best answer to the claim that aid doesn’t work. Of course, as in any large-scale human enterprise, some aid can be ineffective. But provided that aid isn’t actually counterproductive, even relatively inefficient assistance is likely to do more to advance human well-being than luxury spending by the wealthy. The rich, then, should give. But how much should they give? Gates may have given away nearly $30 billion, but that still leaves him sitting at the top of the Forbes list of the richest Americans, with $53 billion. His 66,000-square-foot high-tech lakeside estate near Seattle is reportedly worth more than $100 million. Property taxes are about $1 million. Among his possessions is the Leicester Codex, the only handwritten book by Leonardo da Vinci still in private hands, for which he paid $30.8 million in 1994. Has Bill Gates done enough? 
More pointedly, you might ask:

if he really believes that all lives have equal value, what is he doing living in such an expensive house and owning a Leonardo Codex? Are there no more lives that could be saved by living more modestly and adding the money thus saved to the amount he has already given? Yet we should recognize that, if judged by the proportion of his wealth that he has given away, Gates compares very well with most of the other people on the Forbes 400 list, including his former colleague and Microsoft co-founder, Paul Allen. Allen, who left the company in 1983, has given, over his lifetime, more than $800 million to philanthropic causes. That is far more than nearly any of us will ever be able to give. But Forbes lists Allen as the fifth-richest American, with a net worth of $16 billion. He owns the Seattle Seahawks, the Portland Trailblazers, a 413-foot oceangoing yacht that carries two helicopters and a 60-foot submarine. He has given only about 5 percent of his total wealth. Is there a line of moral adequacy that falls between the 5 percent that Allen has given away and the roughly 35 percent that Gates has donated? Few people have set a personal example that would allow them to tell Gates that he has not given enough, but one who could is Zell Kravinsky. A few years ago, when he was in his mid-40s, Kravinsky gave almost all of his $45 million real estate fortune to health-related charities, retaining only his modest family home in Jenkintown, near Philadelphia, and enough to meet his family’s ordinary expenses. After learning that thousands of people with failing kidneys die each year while waiting for a transplant, he contacted a Philadelphia hospital and donated one of his kidneys to a complete stranger. After reading about Kravinsky in The New Yorker, I invited him to speak to my classes at Princeton. He comes across as anguished by the failure of others to see the simple logic that lies behind his altruism. Kravinsky has a mathematical mind — a talent that obviously helped him in deciding what investments would prove profitable — and he says that the chances of dying as a result of donating a kidney are about 1 in 4,000. For him this implies that to withhold a kidney from someone who would otherwise die means valuing one’s own life at 4,000 times that of a stranger, a ratio Kravinsky considers “obscene.” What marks Kravinsky from the rest of us is that he takes the equal value of all human life as a guide to life, not just as a nice piece of rhetoric. He acknowledges that some people think he is crazy, and even his wife says she believes that he goes too far. One of her arguments against the kidney donation was that one of their children may one day need a kidney, and Zell could be the only compatible donor. Kravinsky’s love for his children is, as far as I can tell, as strong as that of any normal parent. Such attachments are part of our nature, no doubt the product of our evolution as mammals who give birth to children, who for an unusually long time require our assistance in order to survive. But that does not, in Kravinsky’s view, justify our placing a value on the lives of our children that is thousands of times greater than the value we place on the lives of the children of strangers. Asked if he would allow his child to die if it would enable a thousand children to live, Kravinsky said yes. Indeed, he has said he would permit his child to die even if this enabled only two other children to live. 
Nevertheless, to appease his wife, he recently went back into real estate, made some money and bought the family a larger home. But he still remains committed to giving away as much as

possible, subject only to keeping his domestic life reasonably tranquil. Buffett says he believes in giving his children “enough so they feel they could do anything, but not so much that they could do nothing.” That means, in his judgment, “a few hundred thousand” each. In absolute terms, that is far more than most Americans are able to leave their children and, by Kravinsky’s standard, certainly too much. (Kravinsky says that the hard part is not giving away the first $45 million but the last $10,000, when you have to live so cheaply that you can’t function in the business world.) But even if Buffett left each of his three children a million dollars each, he would still have given away more than 99.99 percent of his wealth. When someone does that much — especially in a society in which the norm is to leave most of your wealth to your children — it is better to praise them than to cavil about the extra few hundred thousand dollars they might have given. Philosophers like Liam Murphy of New York University and my colleague Kwame Anthony Appiah at Princeton contend that our obligations are limited to carrying our fair share of the burden of relieving global poverty. They would have us calculate how much would be required to ensure that the world’s poorest people have a chance at a decent life, and then divide this sum among the affluent. That would give us each an amount to donate, and having given that, we would have fulfilled our obligations to the poor. What might that fair amount be? One way of calculating it would be to take as our target, at least for the next nine years, the Millennium Development Goals, set by the United Nations Millennium Summit in 2000. On that occasion, the largest gathering of world leaders in history jointly pledged to meet, by 2015, a list of goals that include: Reducing by half the proportion of the world’s people in extreme poverty (defined as living on less than the purchasing-power equivalent of one U.S. dollar per day). Reducing by half the proportion of people who suffer from hunger. Ensuring that children everywhere are able to take a full course of primary schooling. Ending sex disparity in education. Reducing by two-thirds the mortality rate among children under 5. Reducing by three-quarters the rate of maternal mortality. Halting and beginning to reverse the spread of H.I.V./AIDS and halting and beginning to reduce the incidence of malaria and other major

diseases. Reducing by half the proportion of people without sustainable access to safe drinking water. Last year a United Nations task force, led by the Columbia University economist Jeffrey Sachs, estimated the annual cost of meeting these goals to be $121 billion in 2006, rising to $189 billion by 2015. When we take account of existing official development aid promises, the additional amount needed each year to meet the goals is only $48 billion for 2006 and $74 billion for 2015. Now let’s look at the incomes of America’s rich and superrich, and ask how much they could reasonably give. The task is made easier by statistics recently provided by Thomas Piketty and Emmanuel Saez, economists at the École Normale Supérieure, Paris-Jourdan, and the University of California, Berkeley, respectively, based on U.S. tax data for 2004. Their figures are for pretax income, excluding income from capital gains, which for the very rich are nearly always substantial. For simplicity I have rounded the figures, generally downward. Note too that the numbers refer to “tax units,” that is, in many cases, families rather than individuals. Piketty and Saez’s top bracket comprises 0.01 percent of U.S. taxpayers. There are 14,400 of them, earning an average of $12,775,000, with total earnings of $184 billion. The minimum annual income in this group is more than $5 million, so it seems reasonable to suppose that they could, without much hardship, give away a third of their annual income, an average of $4.3 million each, for a total of around $61 billion. That would still leave each of them with an annual income of at least $3.3 million. Next comes the rest of the top 0.1 percent (excluding the category just described, as I shall do henceforth). There are 129,600 in this group, with an average income of just over $2 million and a minimum income of $1.1 million. If they were each to give a quarter of their income, that would yield about $65 billion, and leave each of them with at least $846,000 annually. The top 0.5 percent consists of 575,900 taxpayers, with an average income of $623,000 and a minimum of $407,000. If they were to give one-fifth of their income, they would still have at least $325,000 each, and they would be giving a total of $72 billion. Coming down to the level of those in the top 1 percent, we find 719,900 taxpayers with an average income of $327,000 and a minimum of $276,000. They could comfortably afford to give 15 percent of their income. That would yield $35 billion and leave them with at least $234,000. Finally, the remainder of the nation’s top 10 percent earn at least $92,000 annually, with an average of $132,000. There are nearly 13 million in this group. If they gave the traditional tithe — 10 percent of their income, or an average of $13,200 each — this would yield about $171 billion and leave them a minimum of $83,000.
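Singer's bracket-by-bracket arithmetic is easy to check. The short Python sketch below simply re-runs the calculation using the figures quoted above (taxpayer counts, average incomes and suggested donation fractions); the size of the "nearly 13 million" remaining top-10-percent group is inferred here from the article's statement that the top 0.01 percent contains 14,400 tax units, so treat it as an approximation rather than an official statistic.

# Re-running the donation scheme with the Piketty-Saez figures quoted above.
# Each bracket: (label, number of tax units, average income in dollars, fraction donated)
brackets = [
    ("top 0.01%",        14_400,  12_775_000, 1 / 3),
    ("rest of top 0.1%", 129_600,  2_000_000, 1 / 4),
    ("rest of top 0.5%", 575_900,    623_000, 1 / 5),
    ("rest of top 1%",   719_900,    327_000, 0.15),
]

# "Nearly 13 million" remaining top-10% tax units, inferred from 14,400 being 0.01%.
top10_total_units = 14_400 * 1_000
rest_of_top10 = top10_total_units - sum(units for _, units, _, _ in brackets)
brackets.append(("rest of top 10%", rest_of_top10, 132_000, 0.10))

total = 0.0
for label, units, avg_income, share in brackets:
    donated = units * avg_income * share
    total += donated
    print(f"{label:>17}: about ${donated / 1e9:4.0f} billion")

print(f"Total from the top 10% of U.S. taxpayers: about ${total / 1e9:.0f} billion")
# Taking the U.S. share as half of a worldwide scheme, as Singer does,
# gives roughly double this figure, about $808 billion a year.

The point of the exercise is only to show that the headline totals in the next paragraph follow directly from the bracket figures the article reports.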

You could spend a long time debating whether the fractions of income I have suggested for donation constitute the fairest possible scheme. Perhaps the sliding scale should be steeper, so that the superrich give more and the merely comfortable give less. And it could be extended beyond the Top 10 percent of American families, so that everyone able to afford more than the basic necessities of life gives something, even if it is as little as 1 percent. Be that as it may, the remarkable thing about these calculations is that a scale of donations that is unlikely to impose significant hardship on anyone yields a total of $404 billion — from just 10 percent of American families. Obviously, the rich in other nations should share the burden of relieving global poverty. The U.S. is responsible for 36 percent of the gross domestic product of all Organization for Economic Cooperation and Development nations. Arguably, because the U.S. is richer than all other major nations, and its wealth is more unevenly distributed than wealth in almost any other industrialized country, the rich in the U.S. should contribute more than 36 percent of total global donations. So somewhat more than 36 percent of all aid to relieve global poverty should come from the U.S. For simplicity, let’s take half as a fair share for the U.S. On that basis, extending the scheme I have suggested worldwide would provide $808 billion annually for development aid. That’s more than six times what the task force chaired by Sachs estimated would be required for 2006 in order to be on track to meet the Millennium Development Goals, and more than 16 times the shortfall between that sum and existing official development aid commitments. If we are obliged to do no more than our fair share of eliminating global poverty, the burden will not be great. But is that really all we ought to do? Since we all agree that fairness is a good thing, and none of us like doing more because others don’t pull their weight, the fair-share view is attractive. In the end, however, I think we should reject it. Let’s return to the drowning child in the shallow pond. Imagine it is not 1 small child who has fallen in, but 50 children. We are among 50 adults, unrelated to the children, picnicking on the lawn around the pond. We can easily wade into the pond and rescue the children, and the fact that we would find it cold and unpleasant sloshing around in the knee-deep muddy water is no justification for failing to do so. The “fair share” theorists would say that if we each rescue one child, all the children will be saved, and so none of us have an obligation to save more than one. But what if half the picnickers prefer staying clean and dry to rescuing any children at all? Is it acceptable if the rest of us stop after we have rescued just one child, knowing that we have done our fair share, but that half the children will drown? We might justifiably be furious with those who are not doing their fair share, but our anger with them is not a reason for letting the children die. In terms of praise and blame, we are clearly right to condemn, in the strongest terms, those who do nothing. In contrast, we may withhold such condemnation from those who stop when they have done their fair share. Even so, they have let children drown when they could easily have saved them, and that is wrong. Similarly, in the real world, it should be seen as a serious moral failure when those with ample income do not do their fair share toward relieving global poverty. 
It isn’t so easy, however, to decide on the proper approach to take to those who limit their contribution to their fair share when they could easily do more and when, because others are not playing their part, a further donation would assist many in desperate

need. In the privacy of our own judgment, we should believe that it is wrong not to do more. But whether we should actually criticize people who are doing their fair share, but no more than that, depends on the psychological impact that such criticism will have on them, and on others. This in turn may depend on social practices. If the majority are doing little or nothing, setting a standard higher than the fair-share level may seem so demanding that it discourages people who are willing to make an equitable contribution from doing even that. So it may be best to refrain from criticizing those who achieve the fair-share level. In moving our society’s standards forward, we may have to progress one step at a time. For more than 30 years, I’ve been reading, writing and teaching about the ethical issue posed by the juxtaposition, on our planet, of great abundance and life-threatening poverty. Yet it was not until, in preparing this article, I calculated how much America’s Top 10 percent of income earners actually make that I fully understood how easy it would be for the world’s rich to eliminate, or virtually eliminate, global poverty. (It has actually become much easier over the last 30 years, as the rich have grown significantly richer.) I found the result astonishing. I double-checked the figures and asked a research assistant to check them as well. But they were right. Measured against our capacity, the Millennium Development Goals are indecently, shockingly modest. If we fail to achieve them — as on present indications we well might — we have no excuses. The target we should be setting for ourselves is not halving the proportion of people living in extreme poverty, and without enough to eat, but ensuring that no one, or virtually no one, needs to live in such degrading conditions. That is a worthy goal, and it is well within our reach. Peter Singer is the Ira W. DeCamp professor of bioethics at the Center for Human Values at Princeton University. He is the author of many books, including most recently “The Way We Eat: Why Our Food Choices Matter.”

Copyright 2006 The New York Times Company Privacy Policy

AREA OF KNOWLEDGE:

Mathematics

“All is number.” (Pythagoras)

• What does calling mathematics a language mean?
• What is the significance of proof in mathematical thought?
• Is mathematical proof certain?
• If math did not exist, what difference would it make?
• Why do many mathematicians consider their work to be an art form?

Annals of Science: Numbers Guy: Reporting & Essays: The New Yorker

ANNALS OF SCIENCE

NUMBERS GUY
Are our brains wired for math?
by Jim Holt
MARCH 3, 2008

According to Stanislas Dehaene, humans have an inbuilt “number sense” capable of some basic calculations and estimates. The problems start when we learn mathematics and have to perform procedures that are anything but instinctive.

O

ne morning in September, 1989, a former sales representative in his mid-forties entered an examination room with Stanislas Dehaene, a young neuroscientist based in Paris. Three years earlier, the man, whom researchers came to refer to as Mr. N, had sustained a brain hemorrhage that left him with an enormous lesion in the rear half of his left hemisphere. He suffered from severe handicaps: his right arm was in a sling; he couldn’t read; and his speech was painfully slow. He had once been married, with two daughters, but was now incapable of leading an independent life and lived with his elderly parents. Dehaene had been invited to see him because his impairments included severe acalculia, a general term for any one of several deficits in number processing. When asked to add 2 and 2, he answered “three.” He could still count and recite a sequence like 2, 4, 6, 8, but he was incapable of counting downward from 9, differentiating odd and even numbers, or recognizing the numeral 5 when it was flashed in front of him. To Dehaene, these impairments were less interesting than the fragmentary capabilities Mr. N had managed to retain. When he was shown the numeral 5 for a few seconds, he knew it was a numeral rather than a letter and, by counting up from 1 until he got to the right integer, he eventually identified it as a 5. He did the same thing when asked the age of his seven-year-old daughter. In the 1997 book “The Number Sense,” Dehaene wrote, “He appears to know right from the start what quantities he wishes to express, but reciting the number series seems to be his only means of retrieving the corresponding word.” Dehaene also noticed that although Mr. N could no longer read, he sometimes had an approximate sense of words that were flashed in front of him; when he was shown the word “ham,” he said, “It’s some kind of meat.” Dehaene decided to see if Mr. N still had a similar sense of number. He showed him the numerals 7 and 8. Mr. N was able to answer quickly that 8 was the larger number—far more quickly than if he had had to identify them by counting up to the right quantities. He could also judge whether various numbers were bigger or smaller than 55, slipping up only when they were very close to 55. Dehaene dubbed Mr. N “the Approximate Man.” The Approximate Man lived in a world where a year comprised “about 350 days” and an hour “about fifty minutes,” where there were five seasons, and where a dozen eggs amounted to “six or ten.” Dehaene asked him to add 2 and 2 several times and received answers ranging from three to five. But, he noted, “he never offers a result as absurd as 9.” In cognitive science, incidents of brain damage are nature’s experiments. If a lesion knocks out one ability but leaves another intact, it is evidence that they are wired into different neural circuits. In this instance, Dehaene theorized that our ability to learn sophisticated mathematical procedures resided in an entirely different part of the brain from a

rougher quantitative sense. Over the decades, evidence concerning cognitive deficits in brain-damaged patients has accumulated, and researchers have concluded that we have a sense of number that is independent of language, memory, and reasoning in general. Within neuroscience, numerical cognition has emerged as a vibrant field, and Dehaene, now in his early forties, has become one of its foremost researchers. His work is “completely pioneering,” Susan Carey, a psychology professor at Harvard who has studied numerical cognition, told me. “If you want to make sure the math that children are learning is meaningful, you have to know something about how the brain represents number at the kind of level that Stan is trying to understand.” Dehaene has spent most of his career plotting the contours of our number sense and puzzling over which aspects of our mathematical ability are innate and which are learned, and how the two systems overlap and affect each other. He has approached the problem from every imaginable angle. Working with colleagues both in France and in the United States, he has carried out experiments that probe the way numbers are coded in our minds. He has studied the numerical abilities of animals, of Amazon tribespeople, of top French mathematics students. He has used brain-scanning technology to investigate precisely where in the folds and crevices of the cerebral cortex our numerical faculties are nestled. And he has weighed the extent to which some languages make numbers more difficult than others. His work raises crucial issues about the way mathematics is taught. In Dehaene’s view, we are all born with an evolutionarily ancient mathematical instinct. To become numerate, children must capitalize on this instinct, but they must also unlearn certain tendencies that were helpful to our primate ancestors but that clash with skills needed today. And some societies are evidently better than others at getting kids to do this. In both France and the United States, mathematics education is often felt to be in a state of crisis. The math skills of American children fare poorly in comparison with those of their peers in countries like Singapore, South Korea, and Japan. Fixing this state of affairs means grappling with the question that has taken up much of Dehaene’s career: What is it about the brain that makes numbers sometimes so easy and sometimes so hard?

D

ehaene’s own gifts as a mathematician are considerable. Born in 1965, he grew up in Roubaix, a medium-sized industrial city near France’s border with Belgium. (His surname is Flemish.) His father, a pediatrician, was among the first to study fetal alcohol syndrome. As a teen-ager, Dehaene developed what he calls a “passion” for mathematics, and he attended the École Normale Supérieure in Paris, the training ground for France’s scholarly élite. Dehaene’s own interests tended toward computer modelling and artificial intelligence. He was drawn to brain science after reading, at the age of eighteen, the 1983 book “Neuronal Man,” by Jean-Pierre Changeux, France’s most distinguished neurobiologist. Changeux’s approach to the brain held out the tantalizing possibility of reconciling psychology with neuroscience. Dehaene met Changeux and began to work with him on abstract models of thinking and memory. He also linked up with the cognitive scientist Jacques Mehler. It was in Mehler’s lab that he met his future wife, Ghislaine Lambertz, a researcher in infant cognitive psychology. By “pure luck,” Dehaene recalls, Mehler happened to be doing research on how numbers are understood. This led to Dehaene’s first encounter with what he came to characterize as “the number sense.” Dehaene’s work centered on an apparently simple question: How do we know whether numbers are bigger or smaller than one another? If you are asked to choose which of a pair of Arabic numerals—4 and 7, say—stands for the bigger number, you respond “seven” in a split second, and one might think that any two digits could be compared in the same very brief period of time. Yet in Dehaene’s experiments, while subjects answered quickly and accurately when the digits were far apart, like 2 and 9, they slowed down when the digits were closer together, like 5 and 6. Performance also got worse as the digits grew larger: 2 and 3 were much easier to compare than 7 and 8. When Dehaene tested some of the best mathematics students at the École Normale, the students were amazed to find themselves slowing down and making errors when asked whether 8 or 9 was the larger number. Dehaene conjectured that, when we see numerals or hear number words, our brains automatically map them onto a number line that grows increasingly fuzzy above 3 or 4. He found that no amount of training can change this. “It is a basic structural property of how our brains represent number, not just a lack of facility,” he told me. In 1987, while Dehaene was still a student in Paris, the American cognitive psychologist Michael Posner and colleagues at Washington University in St. Louis published a pioneering paper in the journal Nature. Using a scanning technique that can track the flow of blood in the brain, Posner’s team had detailed how different areas became active in language processing. Their research was a revelation for Dehaene. “I remember very well sitting and reading this paper, and then debating it with Jacques Mehler, my Ph.D. adviser,” he told me. Mehler, whose focus was on determining the abstract organization of cognitive functions, didn’t see the point of trying to locate precisely where in the brain things happened, but Dehaene wanted to “bridge the gap,” as he put it, between psychology and neurobiology, to find out exactly how the functions of the mind—thought, perception, feeling, will—are realized in the gelatinous three-pound lump of matter in our skulls. 
Now, thanks to new technologies, it was finally possible to create pictures, however crude, of the brain in the act of thinking. So, after receiving his doctorate, he spent two years studying brain scanning with Posner, who was by then at the University of Oregon, in Eugene. “It was very strange to find that some of the most exciting results of the budding cognitive-neuroscience field were coming out of this small place—the only

place where I ever saw sixty-year-old hippies sitting around in tie-dyed shirts!” he said.

Dehaene is a compact, attractive, and genial man; he dresses casually, wears fashionable glasses, and has a glabrous dome of a head, which he protects from the elements with a chapeau de cowboy. When I visited him recently, he had just moved into a new laboratory, known as NeuroSpin, on the campus of a national center for nuclear-energy research, a dozen or so miles southwest of Paris. The building, which was completed a year ago, is a modernist composition in glass and metal filled with the ambient hums and whirs and whooshes of brain-scanning equipment, much of which was still being assembled. A series of arches ran along one wall in the form of a giant sine wave; behind each was a concrete vault built to house a liquid-helium-cooled superconducting electromagnet. (In brain imaging, the more powerful the magnetic field, the sharper the picture.) The new brain scanners are expected to show the human cerebral anatomy at a level of detail never before seen, and may reveal subtle anomalies in the brains of people with dyslexia and with dyscalculia, a crippling deficiency in dealing with numbers which, researchers suspect, may be as widespread as dyslexia.

One of the scanners was already up and running. “You don’t wear a pacemaker or anything, do you?” Dehaene asked me as we entered a room where two researchers were fiddling with controls. Although the scanner was built to accommodate humans, inside, I could see from the monitor, was a brown rat. Researchers were looking at how its brain reacted to various odors, which were puffed in every so often. Then Dehaene led me upstairs to a spacious gallery where the brain scientists working at NeuroSpin are expected to congregate and share ideas. At the moment, it was empty. “We’re hoping for a coffee machine,” he said.

Dehaene has become a scanning virtuoso. On returning to France after his time with Posner, he pressed on with the use of imaging technologies to study how the mind processes numbers. The existence of an evolved number ability had long been hypothesized, based on research with animals and infants, and evidence from brain-damaged patients gave clues to where in the brain it might be found. Dehaene set about localizing this facility more precisely and describing its architecture. “In one experiment I particularly liked,” he recalled, “we tried to map the whole parietal lobe in a half hour, by having the subject perform functions like moving the eyes and hands, pointing with fingers, grasping an object, engaging in various language tasks, and, of course, making small calculations, like thirteen minus four. We found there was a beautiful geometrical organization to the areas that were activated. The eye movements were at the back, the hand movements were in the middle, grasping was in the front, and so on. And right in the middle, we were able to confirm, was an area that cared about number.”

The number area lies deep within a fold in the parietal lobe called the intraparietal sulcus (just behind the crown of the head). But it isn’t easy to tell what the neurons there are actually doing. Brain imaging, for all the sophistication of its technology, yields a fairly crude picture of what’s going on inside the skull, and the same spot in the brain might light up for two tasks even though different neurons are involved. “Some people believe that psychology is just being replaced by brain imaging, but I don’t think that’s the case at all,” Dehaene said. “We need psychology to refine our idea of what the imagery is going to show us. That’s why we do behavioral experiments, see patients. It’s the confrontation of all these different methods that creates knowledge.”

Dehaene has been able to bring together the experimental and the theoretical sides of his quest, and, on at least one occasion, he has even theorized the existence of a neurological feature whose presence was later confirmed by other researchers. In the early nineteen-nineties, working with Jean-Pierre Changeux, he set out to create a computer model to simulate the way humans and some animals estimate at a glance the number of objects in their environment. In the case of very small numbers, this estimate can be made with almost perfect accuracy, an ability known as “subitizing” (from the Latin word subitus, meaning “sudden”). Some psychologists think that subitizing is merely rapid, unconscious counting, but others, Dehaene included, believe that our minds perceive up to three or four objects all at once, without having to mentally “spotlight” them one by one. Getting the computer model to subitize the way humans and animals did was possible, he found, only if he built in “number neurons” tuned to fire with maximum intensity in response to a specific number of objects. His model had, for example, a special four neuron that got particularly excited when the computer was presented with four objects. The model’s number neurons were pure theory, but almost a decade later two teams of researchers discovered what seemed to be the real item, in the brains of macaque monkeys that had been trained to do number tasks. The number neurons fired precisely the way Dehaene’s model predicted—a vindication of theoretical psychology. “Basically, we can derive the behavioral properties of these neurons from first principles,” he told me. “Psychology has become a little more like physics.”

But the brain is the product of evolution—a messy, random process—and though the number sense may be lodged in a particular bit of the cerebral cortex, its circuitry seems to be intermingled with the wiring for other mental functions. A few years ago, while analyzing an experiment on number comparisons, Dehaene noticed that subjects performed better with large numbers if they held the response key in their right hand but did better with small numbers if they held the response key in their left hand. Strangely, if the subjects were made to cross their hands, the effect was reversed. The actual hand used to make the response was, it seemed, irrelevant; it was space itself that the subjects unconsciously associated with larger or smaller numbers. Dehaene hypothesizes that the neural circuitry for number and the circuitry for location overlap. He even suspects that this may be why travellers get disoriented entering Terminal 2 of Paris’s Charles de Gaulle Airport, where small-numbered gates are on the right and large-numbered gates are on the left. “It’s become a whole industry now to see how we associate number to space and space to number,” Dehaene said. “And we’re finding the association goes very, very deep in the brain.”
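The tuned “number neurons” at the heart of the model described above lend themselves to a small illustration. The sketch below is a toy under assumed parameters (Gaussian tuning curves, arbitrary width and noise values, a read-out by the most active unit); it is not a reconstruction of the Dehaene–Changeux network, and every name and constant in it is invented for the example.

```python
import math
import random

# Toy "number neurons": each unit fires most strongly when the number of objects
# presented matches its preferred numerosity. The Gaussian tuning shape, width,
# and noise level are illustrative assumptions, not parameters from the model
# described in the article.

PREFERRED_NUMBERS = [1, 2, 3, 4, 5]   # one unit per preferred numerosity
TUNING_WIDTH = 0.5                    # sharper tuning -> more reliable subitizing
NOISE_SD = 0.05                       # trial-to-trial variability in firing

def firing_rate(preferred, presented):
    """Noisy response of a unit tuned to `preferred` when `presented` objects are shown."""
    rate = math.exp(-((presented - preferred) ** 2) / (2 * TUNING_WIDTH ** 2))
    return rate + random.gauss(0.0, NOISE_SD)

def estimate(presented):
    """Read the estimate off the most active unit (winner-take-all)."""
    responses = {p: firing_rate(p, presented) for p in PREFERRED_NUMBERS}
    return max(responses, key=responses.get)

if __name__ == "__main__":
    for n in (1, 2, 3, 4):
        trials = [estimate(n) for _ in range(1000)]
        print(f"{n} objects -> judged correctly on {trials.count(n) / 1000:.0%} of trials")
```

With small numerosities and sharp tuning, the winner-take-all read-out is essentially always right, which is the behavior the passage describes as subitizing; letting the tuning widen for larger preferred numbers would reproduce the merely approximate estimates we make beyond three or four objects.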

Last winter, I saw Dehaene in the ornate setting of the Institut de France, across the Seine from the Louvre. There he accepted a prize of a quarter of a million euros from Liliane Bettencourt, whose father created the cosmetics group L’Oréal. In a salon hung with tapestries, Dehaene described his research to a small audience that included a former Prime Minister of France. New techniques of neuroimaging, he explained, promise to reveal how a thought process like calculation unfolds in the brain. This isn’t just a matter of pure knowledge, he added. Since the brain’s architecture determines the sort of abilities that come naturally to us, a detailed understanding of that architecture should lead to better ways of teaching children mathematics and may help close the educational gap that separates children in the West from those in several Asian countries.

The fundamental problem with learning mathematics is that while the number sense may be genetic, exact calculation requires cultural tools—symbols and algorithms—that have been around for only a few thousand years and must therefore be absorbed by areas of the brain that evolved for other purposes. The process is made easier when what we are learning harmonizes with built-in circuitry. If we can’t change the architecture of our brains, we can at least adapt our teaching methods to the constraints it imposes.

For nearly two decades, American educators have pushed “reform math,” in which children are encouraged to explore their own ways of solving problems. Before reform math, there was the “new math,” now widely thought to have been an educational disaster. (In France, it was called les maths modernes, and is similarly despised.) The new math was grounded in the theories of the influential Swiss psychologist Jean Piaget, who believed that children are born without any sense of number and only gradually build up the concept in a series of developmental stages. Piaget thought that children, until the age of four or five, cannot grasp the simple principle that moving objects around does not affect how many of them there are, and that there was therefore no point in trying to teach them arithmetic before the age of six or seven.

Piaget’s view had become standard by the nineteen-fifties, but psychologists have since come to believe that he underrated the arithmetic competence of small children. Six-month-old babies, exposed simultaneously to images of common objects and sequences of drumbeats, consistently gaze longer at the collection of objects that matches the number of drumbeats. By now, it is generally agreed that infants come equipped with a rudimentary ability to perceive and represent number. (The same appears to be true for many kinds of animals, including salamanders, pigeons, raccoons, dolphins, parrots, and monkeys.) And if evolution has equipped us with one way of representing number, embodied in the primitive number sense, culture furnishes two more: numerals and number words. These three modes of thinking about number, Dehaene believes, correspond to distinct areas of the brain. The number sense is lodged in the parietal lobe, the part of the brain that relates to space and location; numerals are dealt with by the visual areas; and number words are processed by the language areas. Nowhere in all this elaborate brain circuitry, alas, is there the equivalent of the chip found in a five-dollar calculator.

This deficiency can make learning that terrible quartet—“Ambition, Distraction, Uglification, and Derision,” as Lewis Carroll burlesqued them—a chore. It’s not so bad at first. Our number sense endows us with a crude feel for addition, so that, even before schooling, children can find simple recipes for adding numbers. If asked to compute 2 + 4, for example, a child might start with the first number and then count upward by the second number: “two, three is one, four is two, five is three, six is four, six.” But multiplication is another matter. It is an “unnatural practice,” Dehaene is fond of saying, and the reason is that our brains are wired the wrong way. Neither intuition nor counting is of much use, and multiplication facts must be stored in the brain verbally, as strings of words. The list of arithmetical facts to be memorized may be short, but it is fiendishly tricky: the same numbers occur over and over, in different orders, with partial overlaps and irrelevant rhymes. (Bilinguals, it has been found, revert to the language they used in school when doing multiplication.) The human memory, unlike that of a computer, has evolved to be associative, which makes it ill-suited to arithmetic, where bits of knowledge must be kept from interfering with one another: if you’re trying to retrieve the result of multiplying 7 × 6, the reflex activation of 7 + 6 and 7 × 5 can be disastrous. So multiplication is a double terror: not only is it remote from our intuitive sense of number; it has to be internalized in a form that clashes with the evolved organization of our memory. The result is that when adults multiply single-digit numbers they make mistakes ten to fifteen per cent of the time. For the hardest problems, like 7 × 8, the error rate can exceed twenty-five per cent.

Our inbuilt ineptness when it comes to more complex mathematical processes has led Dehaene to question why we insist on drilling procedures like long division into our children at all. There is, after all, an alternative: the electronic calculator. “Give a calculator to a five-year-old, and you will teach him how to make friends with numbers instead of despising them,” he has written. By removing the need to spend hundreds of hours memorizing boring procedures, he says, calculators can free children to concentrate on the meaning of these procedures, which is neglected under the educational status quo.

This attitude might make Dehaene sound like a natural ally of educators who advocate reform math, and a natural foe of parents who want their children’s math teachers to go “back to basics.” But when I asked him about reform math he wasn’t especially sympathetic. “The idea that all children are different, and that they need to discover things their own way—I don’t buy it at all,” he said. “I believe there is one brain organization. We see it in babies, we see it in adults. Basically, with a few variations, we’re all travelling on the same road.” He admires the mathematics curricula of Asian countries like China and Japan, which provide children with a highly structured experience, anticipating the kind of responses they make at each stage and presenting them with challenges designed to minimize the number of errors. “That’s what we’re trying to get back to in France,” he said. Working with his colleague Anna Wilson, Dehaene has developed a computer game called “The Number Race” to help dyscalculic children. The software is adaptive, detecting the number tasks where the child is shaky and adjusting the level of difficulty to maintain an encouraging success rate of seventy-five per cent.

Despite our shared brain organization, cultural differences in how we handle numbers persist, and they are not confined to the classroom. Evolution may have endowed us with an approximate number line, but it takes a system of symbols to make numbers precise—to “crystallize” them, in Dehaene’s metaphor. The Mundurukú, an Amazon tribe that Dehaene and colleagues, notably the linguist Pierre Pica, have studied recently, have words for numbers only up to five. (Their word for five literally means “one hand.”) Even these words seem to be merely approximate labels for them: a Mundurukú who is shown three objects will sometimes say there are three, sometimes four. Nevertheless, the Mundurukú have a good numerical intuition. “They know, for example, that fifty plus thirty is going to be larger than sixty,” Dehaene said. “Of course, they do not know this verbally and have no way of talking about it. But when we showed them the relevant sets and transformations they immediately got it.” The Mundurukú, it seems, have developed few cultural tools to augment the inborn number sense.

Interestingly, the very symbols with which we write down the counting numbers bear the trace of a similar stage. The first three Roman numerals, I, II, and III, were formed by using the symbol for one as many times as necessary; the symbol for four, IV, is not so transparent. The same principle applies to Chinese numerals: the first three consist of one, two, and three horizontal bars, but the fourth takes a different form. Even Arabic numerals follow this logic: 1 is a single vertical bar; 2 and 3 began as two and three horizontal bars tied together for ease of writing. (“That’s a beautiful little fact, but I don’t think it’s coded in our brains any longer,” Dehaene observed.)

Today, Arabic numerals are in use pretty much around the world, while the words with which we name numbers naturally differ from language to language. And, as Dehaene and others have noted, these differences are far from trivial. English is cumbersome. There are special words for the numbers from 11 to 19, and for the decades from 20 to 90. This makes counting a challenge for English-speaking children, who are prone to such errors as “twenty-eight, twenty-nine, twenty-ten, twenty-eleven.” French is just as bad, with vestigial base-twenty monstrosities, like quatre-vingt-dix-neuf (“four twenty ten nine”) for 99. Chinese, by contrast, is simplicity itself; its number syntax perfectly mirrors the base-ten form of Arabic numerals, with a minimum of terms. Consequently, the average Chinese four-year-old can count up to forty, whereas American children of the same age struggle to get to fifteen. And the advantages extend to adults. Because Chinese number words are so brief—they take less than a quarter of a second to say, on average, compared with a third of a second for English—the average Chinese speaker has a memory span of nine digits, versus seven digits for English speakers. (Speakers of the marvellously efficient Cantonese dialect, common in Hong Kong, can juggle ten digits in active memory.)
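The adaptive behavior attributed to “The Number Race” above (adjusting difficulty so the child keeps succeeding about three-quarters of the time) can be sketched with a standard weighted staircase rule. This is a generic illustration, not the actual algorithm in Wilson and Dehaene’s software; the difficulty scale, the step size, and the simulated “player” are invented for the example.

```python
import random

# A generic weighted staircase of the kind the passage describes: difficulty is
# nudged up after a correct answer and dropped more sharply after an error, so
# performance settles near the target success rate. This is an illustrative
# sketch, not the algorithm used in "The Number Race" itself.

TARGET_SUCCESS = 0.75   # the encouraging success rate mentioned in the article

def next_difficulty(level, was_correct, step=1.0, lo=0.0, hi=10.0):
    """Update a difficulty level (0 = easiest, 10 = hardest) after one trial.

    With an up-step proportional to (1 - TARGET_SUCCESS) and a down-step
    proportional to TARGET_SUCCESS, the level drifts toward the point where
    the player answers correctly about 75% of the time.
    """
    if was_correct:
        level += step * (1 - TARGET_SUCCESS)   # small nudge harder
    else:
        level -= step * TARGET_SUCCESS         # larger drop easier
    return max(lo, min(hi, level))

if __name__ == "__main__":
    # Simulated player whose chance of success falls as difficulty rises
    # (an invented stand-in, purely to show the staircase converging).
    level = 5.0
    for _ in range(500):
        p_correct = max(0.05, min(0.95, 1.0 - level / 10.0))
        level = next_difficulty(level, random.random() < p_correct)
    print(f"difficulty settled around {level:.1f} "
          f"(where the simulated player succeeds roughly 75% of the time)")
```

A weighted up/down rule of this form converges on whatever success rate the ratio of its step sizes encodes, which is why a seventy-five-per-cent target is easy to maintain automatically; the real software may well use a different adaptation scheme.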

In 2005, Dehaene was elected to the chair in experimental cognitive psychology at the Collège de France, a highly prestigious institution founded by Francis I in 1530. The faculty consists of just fifty-two scholars, and Dehaene is the youngest member. In his inaugural lecture, Dehaene marvelled at the fact that mathematics is simultaneously a product of the human mind and a powerful instrument for discovering the laws by which the human mind operates. He spoke of the confrontation between new technologies like brain imaging and ancient philosophical questions concerning number, space, and time. And he pronounced himself lucky to be living in an era when advances in psychology and neuroimaging are combining to “render visible” the hitherto invisible realm of thought.

For Dehaene, numerical thought is only the beginning of this quest. Recently, he has been pondering how the philosophical problem of consciousness might be approached by the methods of empirical science. Experiments involving subliminal “number priming” show that much of what our mind does with numbers is unconscious, a finding that has led Dehaene to wonder why some mental activity crosses the threshold of awareness and some doesn’t. Collaborating with a couple of colleagues, Dehaene has explored the neural basis of what is known as the “global workspace” theory of consciousness, which has elicited keen interest among philosophers. In his version of the theory, information becomes conscious when certain “workspace” neurons broadcast it to many areas of the brain at once, making it simultaneously available for, say, language, memory, perceptual categorization, action-planning, and so on. In other words, consciousness is “cerebral celebrity,” as the philosopher Daniel Dennett has described it, or “fame in the brain.”

In his office at NeuroSpin, Dehaene described to me how certain extremely long workspace neurons might link far-flung areas of the human brain together into a single pulsating circuit of consciousness. To show me where these areas were, he reached into a closet and pulled out an irregularly shaped baby-blue plaster object, about the size of a softball. “This is my brain!” he announced with evident pleasure. The model that he was holding had been fabricated, he explained, by a rapid-prototyping machine (a sort of three-dimensional printer) from computer data obtained from one of the many MRI scans that he has undergone. He pointed to the little furrow where the number sense was supposed to be situated, and observed that his had a somewhat uncommon shape. Curiously, the computer software had identified Dehaene’s brain as an “outlier,” so dissimilar are its activation patterns from the human norm. Cradling the pastel-colored lump in his hands, a model of his mind devised by his own mental efforts, Dehaene paused for a moment. Then he smiled and said, “So, I kind of like my brain.”

ILLUSTRATION: JOOST SWARTE
