The Literary Supplement 2012



Contents

Running Through Murakami’s 1Q84, by Christine Ann Hurd
Cultural Agents, by Julia Leitner
The Poetry of Politics, by Olivia Zhu
The Doomsday Diaries, by Daniel Gross
The ICC at 10: Deterrent or Distraction?, by Elsa Kania
Grasping at the Grail, by Lena Bae
Discovering New Worlds, by Eli Kozminsky

From The Editor

The Harvard Political Review strives to produce insightful and original articles in a pithy, precise package. However, some subjects require a far broader scope and a greater degree of research to provide truly effective analysis. Thus the HPR has compiled its Literary Supplement, a new project devoted to lengthier, more thorough pieces. Long-form articles of this sort have become a burgeoning platform for journalism; they nestle comfortably between rapid-fire blogging and the latest 300-page policy hardback. Such discursive pieces are equally rewarding in the pages of those thick pamphlets at the bottom of the news rack as they are on an iPad, or even on a smartphone, in the palm of your hand. In a similar way, the HPR hopes to provide a novel avenue for writers to pursue subjects at length and in depth.


Harvard’s campus publications have saturated the features, blog, and literary platforms with spectacular student content. The HPR plans to extend this excellence to the arena of long-form, well-researched investigations with its Literary Supplement. This inaugural online edition features writers delving into topics ranging from the novels of Haruki Murakami to Ronald Dworkin’s moral philosophy. We believe this new format will allow the HPR to cover more complex and nuanced issues, as well as deliver even deeper, more comprehensive articles to our readers.

-Eli Kozminsky

Books & Arts


Running Through Murakami’s 1Q84
A more accessible, yet less striking novel.
Christine Ann Hurd

Haruki Murakami: the Japanese author who has become the darling of disaffected college students in the English-speaking world. His surrealist writings, spanning more than two decades, deal with lonely protagonists, mysteriously silent women who dress in navy blue, and a smattering of autobiographical themes and objects. Almost all of his novels take place in Japan, and yet some Japanese writers have deemed him “buttery” for painting a picture of the country as a sort of American foil. However, his authorial success is formidable: he has sold millions of copies worldwide, most recently of his epic novel 1Q84, and he is perennially predicted to be one of the top contenders for the Nobel Prize in Literature. But I want to posit that the original surrealist formula he so lovingly crafted seems to be fading as the years pass. While the sheer heft of the Orwellian pun—where the Q stands for question (a play on the Japanese pronunciation of the number nine; phonetically “kew”)—has raised his profile in reviews and coverage, 1Q84 stands not as some Infinite Jest masterpiece indicative of the author’s style, but rather as a diluted proportion of surreal to real that makes it more accessible, yet less striking.

1Q84 revolves around the perspectives of two characters who travel to the alternate reality “1Q84.” The world appears identical except for the machinations of the Little People, a group of magical dwarves who speak through the leader of a religious cult, and the presence of an extra moon. The female protagonist is Aomame, a gym instructor by day who moonlights as an assassin of sexual abusers. Her soul mate is Tengo, a cram-school teacher who aspires to be a novelist and has brought about the wrath of the Little People by ghostwriting a story by 17-year-old Fuka-Eri detailing their existence and manipulation. Despite only knowing each other for a few brief years in elementary school, they are pulled together 20 years later in 1Q84 by something describable only as “destiny.”

The supporting cast includes a dowager who pays Aomame to kill sexual abusers; Tengo’s door-to-door subscription-collecting father; and Fuka-Eri, the stock mysterious girl who is incarnated in some form or another in Murakami novels. The final actor is the aesthetically bankrupt Ushikawa, who, in the pay of the cult whose leader Aomame kills, adds an omniscient perspective as he spies on the protagonists. In 1Q84’s 925 pages there are plots by the Little People to destroy Tengo and Aomame, exhibitions of the group’s magical powers (most notably the “Air Chrysalis,” a sort of centrifuge-cocoon they spin to separate the soul into two parts), and the gradual reveal of how Fuka-Eri knows of their secret existence. The overarching theme, however, is that Tengo and Aomame are meant to be together (spoiler alert: they are indeed together by tale’s end), which raises the question of why Murakami chose to write such a long book about a simple story with a happy ending.

MURAKAMI’S FORMULA

A “standard” Haruki Murakami novel contains three parts: “otherworldliness,” loneliness, and a bizarre depiction of sex. Mixed within those three domains are pet topics drawn from the author’s life, such as jazz (he owned a jazz club before becoming a full-time writer), cats, smoking (try finding a Murakami novel that doesn’t mention a “slim, gold lighter”), and a myriad of American pop culture references. For example, his 2004 work After Dark creates an “otherworldly” feel by detailing a young girl’s lonely chain-smoking, coffee-drinking forays into nighttime Japan, as well as her beautiful sister’s abduction into a TV screen by a sexual deviant.



Kafka on the Shore, published in 2002, features a villain who travels to the real world through a special stone; fish rain from the sky at an old man’s command; and the runaway, ever-lonely protagonist not only has sex with his mother and his sister, but also travels through a magical forest to a type of purgatory. At first sight, 1Q84 seems to be the poster child for the oft-cited “magic realism” genre most associated with Gabriel García Márquez à la One Hundred Years of Solitude. In this genre, the banal chewing of a spaghetti dinner mixes with surreal elements such as prophetic dreams, normally inaccessible wonderlands (e.g., Narnia), or abnormal creatures. So when Aomame hikes her skirt and climbs down the access exit of a Japanese superhighway from the “real” 1984 into 1Q84 to the soundtrack of Janáček’s “Sinfonietta,” a frequent reader of Murakami can only guess that this is the beginning of a parallel descent into something as sinister as the novel’s namesake.

But calling the plot of 1Q84 “otherworldly” is almost a misnomer compared to the insanity of Murakami’s prior forays. In his 1985 novel Hard-Boiled Wonderland and the End of the World, the author creates a world inside the protagonist’s consciousness that serves as a study in Jungian archetypes. The main character lives within the Walls of The Town, interacting with The Librarian, The Colonel, and The Gatekeeper. The sheer amount of capitalization should be an indicator that Murakami intends the reader to view these characters not as familiar but as something foreign. In Kafka on the Shore, Murakami ascends to the height of what could only be called “crack fiction” by having the villain dress up as Johnnie Walker of whiskey fame, as well as a sort of abstract spirit who takes the shape of KFC’s colonel. In contrast, the only foreign elements in 1Q84 are the head of the cult Aomame is sent to kill (the Leader) and the gods with whom he communes (the Little People). This creates a strange accessibility that makes it seem as if Murakami is babying the reader. Likewise, the often invisible and hard-to-believe-as-sinister Little People seem like plot-device pests, only there to prevent the two romantic leads from uniting.

In regard to loneliness, 1Q84 bears the most similarity to earlier Murakami novels. Each of the characters throughout the book is a study in Murakami’s self-practiced isolation. Aomame spends hundreds of pages hiding alone in an apartment reading Proust’s Remembrance of Things Past, exercising on a stationary bike, and sitting on the balcony with cocoa, hoping to catch a fleeting glimpse of Tengo. Tengo is in a similar rut in that he only maintains contact with his married girlfriend once a week for necessary sex.


The rest of his hours are spent stewing over his childhood, his loss of Aomame, or his troubled relationship with his now comatose father, or sometimes talking with Fuka-Eri, a highly inaccessible character who speaks in fragments. Ushikawa, arguably the most interesting of the three main characters, is only ever alone during the book, staking out Tengo and Aomame from an unfurnished apartment, smoking cigarette after cigarette, all while taking verbal abuse from Murakami as the most hopelessly ugly character the author has ever created.

The long tracts of days going by without human interaction are nothing new for Murakami. The protagonist of the author’s 1995 book Wind-Up Bird Chronicle is perfectly content to stay holed up in his house; one of the most important scenes is his sojourn in the bottom of a well (unsurprisingly, alone). The protagonists of both Kafka on the Shore and its 1987 predecessor Norwegian Wood are happy to spend hours reading by themselves in a sort of autodidactic haze. In the 1999 work Sputnik Sweetheart, Sumire is the archetypal lonely writer, chain-smoking with unkempt hair and spending hours in front of her typewriter attempting to channel Jack Kerouac.

But neither “otherworldliness” nor loneliness can hold a candle to the unorthodox sexual practices of Murakami’s characters. Whether in the brothel of After Dark, the Oedipal theme of Kafka on the Shore, or the racy dreams of Wind-Up Bird Chronicle, Murakami will delve into some form of incestuous or age-difference deviance. In 1Q84, however, the references to sex fall into an “awkward” category rather than a surreal one. Aomame engages in “all-night orgies” with a female friend as a sort of biological imperative, and the last hit she is to perform is against a cult leader who has sex with underage girls, including his own daughter. Tengo has a repetitive dream of an unknown man sucking on his mother’s breast, and a sexual encounter with Fuka-Eri is somehow explained away as “he was really having sex with Aomame, with Fuka-Eri only as a surrogate vessel.” This scene even “won” a Bad Sex Award from The Guardian. Yet sex takes a backseat to the romantic quest of Tengo and Aomame to reunite after the few years they spent together in elementary school. While in earlier works sex seems to be the goal of lonely characters in love with troubled women, in 1Q84 it is stuffed in as idle observations about the characters’ physical appearance (the New York Times reviewer aptly summarized it as an unnatural obsession with breasts) and seems wholly unnecessary to the plot. Instead of the passion of young love as in Norwegian Wood or 1992’s South of the Border, West of the Sun, we have Tengo grappling to find a young girl attractive and Aomame’s taste for slightly balding older men.



IGNORING THE MOLD

We now have a formula, along with some observations about how 1Q84 both fits and departs from it. While from a review-page summary of 1Q84 a would-be reader might think the novel filled with Murakami’s trademark sense of abnormality, most of it dabbles in the mundane to a staggering degree. The detail given to food consumption could rival Brian Jacques’ Redwall series; the musings of Aomame and Tengo focus very little on their world and mostly on each other; and the references to characters listening to songs, reading books, walking from point A to point B, or merely staring at the night sky (albeit one with two moons) make 1Q84 seem downright domestic in comparison to former works. Moreover, while all the characters spend most of the novel alone, the crushing sense of loneliness and detachment of his other works is absent in the two main characters, as the novel gives off a certainty that they will be together no matter what may come. “People are drawn to Murakami’s writing because he focuses on loneliness and isolation and how an unfulfilled desire for connection and love drives people to various forms of psychosis,” aptly proffers writer Grace Jung. That procession from loneliness to extreme action simply does not exist as it did in Murakami’s previous novels, and thus the loneliness comes across as merely boring instead of useful to the story.

Finally, sex is hardly used in the destructive way most readers have come to expect of Murakami. Many times, the female interest will end up crying after sex, or will disappear, or will engage in some other distancing act. In 1Q84 sex is either businesslike or remote. Aomame has sex in order to satisfy a physical craving. The Leader has sex with underage girls because the Little People tell him to. Tengo has sex with Fuka-Eri while in a state of paralysis so that Aomame can immaculately conceive. There is no passion, no angst, nothing at all to suggest that having sex had any emotional impact, as opposed to Murakami’s other novels, where such experiences galvanize the characters. Thus, in conclusion, we have a novel that would rather exist in the banal than the otherworldly, and that avoids both the crushing low of loneliness and the high of sex. The question we must ask now is why.

THE RUNNING NOVELIST

“Nothing in the real world is as beautiful as a person about to lose consciousness,” writes Murakami in his 2008 memoir, What I Talk About When I Talk About Running. The quote refers to the grueling 26 miles he ran from Athens to Marathon, his first run of, shall we say, marathon proportions. Murakami’s first literary marathon came with the publication of Wind-Up Bird Chronicle, which hovers just over 600 pages. Interspersed within the story of a young man who encounters surreal dreams and a finding-yourself moment at the bottom of a well is a somewhat gruesome and grueling account of a young Japanese man running military missions in China.



Murakami, who considers himself a “running novelist,” could then compare 1Q84 to the longest run he ever completed: a 62-mile ultramarathon. Yet that ultramarathon, while a prodigious feat that should elicit admiration, left Murakami with a sort of depression and lack of enthusiasm for writing. During the race, he repeated a mantra telling himself that he was a machine and could not feel; he described his mind as quiet; he called the experience “almost philosophical or religious.” Most strikingly, he writes that he had “stepped into a different place,” much as Aomame and Tengo do in 1Q84. Murakami too became more introspective, and he “no longer considered running the point of life.”

Murakami alludes to the challenge of the novelist as a constant battle to keep the flame of talent alight through age: “As youth fades, that sort of freeform vigor loses its natural vitality and brilliance.” In Japan, Murakami writes, the act of composing novels is viewed as toxic or unhealthy. Many of his countrymen ask him whether he will be able to continue his craft as he ages, because there is an understanding that writing is antisocial and dangerous. Indeed, Murakami says that the reason he runs is to offset the inherently unhealthy nature of writing: “like it or not, a kind of toxin that lies deep down in all humanity rises to the surface.” According to him: “Those of us hoping to have long careers as professional writers have to develop an autoimmune system of our own that can resist the dangerous (in some cases lethal) toxin that resides within. Do this, and we can more efficiently dispose of even stronger toxins. In other words, we can create even more powerful narratives to deal with these. But you need a great deal of energy to create an immune system and maintain it over a longer period. You have to find that energy somewhere, and where else to find it but in our own basic physical being?”

As Murakami ages, it is perhaps impossible to expect the same type of incisive, dramatic surrealism on which he originally made his name. He mentions that he now chugs along ever more slowly, accepting the slowing of movement with grace. Running, published in 2008, hints at a Murakami who has indeed “stepped over” in his writing. 1Q84 is not a longer form of his punchier pieces but rather an ultramarathon that focuses on the more beatific side of religion than the fires of hell. While Murakami might have accepted the decay of his autoimmune system against the toxins of his craft, it is hard for the reader to do the same.


The divide between the real and the fantastic grows sickeningly closer with every passing work. Even the names of each world have metamorphosed: in Hard-Boiled Wonderland, the fantastic “otherworld” of the protagonist’s consciousness is literally named “The End of the World,” while in 1Q84 the difference between settings is merely the changing of one digit to a letter. The deep, crushing loneliness has now been replaced by a calmer, older version—the young man grown up, in a world he knows is different from the one in his past, full of lurking threats and of two sides to every person instead of one.

Thus, 1Q84 is meant for a different audience than Murakami’s earlier works. It is not necessarily for those who felt his previous novels stood alone amongst authors as a dive into the darker parts of the soul. It is not necessarily for those who relish reading about a world so different from our own as to be escapist. At the end of the book, you don’t really want to travel to the mystical world Murakami has thrown his characters into. You want to head back home and go to bed during the December of your day. The audience that 1Q84 will attract is a new group of readers, one not as familiar with Murakami’s more experimental works and one that appreciates the escapism of the routine of everyday life. Reading about Aomame in her imposed exile, cooking and reading Proust, is comforting; you cheer for Ushikawa to leave so that Aomame and Tengo can be united...happily, for a change in Murakami’s novels.

Murakami is finally branching out into a type of writing that, while he has not explored it as thoroughly, is simultaneously compatible and incompatible with his former works. It is compatible in that he is able to expose traces of the ideas that made him popular to a larger audience in a more accessible format. It is incompatible in that there are fewer of the incredibly memorable scenarios that define his earlier works. The haunting journey into dark sewers filled with monsters, and the man who reads the dreams of beasts in his subconscious, are eschewed in favor of the quiet musings of two contemplative characters whose pace rocks slowly through 1Q84. The only question that remains is whether this is a conscious decision. Has Murakami watered himself down in order to appeal to more readers? Has he accepted a slower pace as he ages? I think the answer lies somewhere in the nebulous region of the latter, in that Murakami seems to write for himself. He gave up his steady career to explore the “toxin” of writing, and even though he might be prone to a weakening immune system, his work remains true to the man at the moment. •


Cultural Agents
Art as an engine for change.
Julia Leitner

You hear the phrases constantly. “The system is failing.” “We just can’t go on like this anymore.” So imagine this. No, that’s it. Just imagine. What would it look like? What would we do differently? And then, instead of crowding your mind with the clutter of institutions and what is and is not possible, employ some of that finger-painting recklessness they taught in first grade. Play. What are the possibilities?

“You as an artist are expected to come up with your own rules and your own definition of what art is,” explains Pedro Reyes, an artist from Mexico City. “All spaces are open to innovation in the sense that I don’t think there is one single space of human activity that cannot be re-imagined.” Art provides the perfect test kitchen for solving the world’s troubles, and artists around the world are beginning to harness (or release) the energy of art into the world of activism and social change. Examples of this agency are being collected by Cultural Agents, an initiative at Harvard University that recognizes the work of extraordinary artists and organizations around the world that employ art to effect change. “You can’t be a citizen unless you can put things together in new ways, imagine new possibilities,” says Harvard Professor Doris Sommer, the founder of Cultural Agents. “These kinds of interventions have existed for a long time, and what we need is to raise awareness so this looks like a possible future activity for more people.”

The works of artists and architects such as Alfredo Jaar and Pedro Reyes illustrate cultural agency and arts interventionism. These are individuals who have been trained in arts and architecture and have enjoyed international recognition. A great body of their work incorporates an element of social responsibility. “I often think of what I do, that if it wasn’t called art, it would still be relevant,” explains Reyes. Reyes works with a plethora of media—from a TV puppet series whose protagonists are Karl Marx and Adam Smith, to pyramidal, vertical parks constructed in Mexico City. Works such as these address a range of social and environmental issues. He has worked with the Guggenheim Museum in New York and in diverse locations across the United States, Europe, and Latin America.

Reyes lists social dynamics and interaction as a key material in his work: “Works of art which are not finished until there is some input from the public.” The diverse backgrounds of artists such as Reyes and Jaar, who is now based in New York but fled Chile in the 1980s to escape the Pinochet regime, help them address problems on multiple levels. The interventions are pointed, purposeful, and micro, but they make a huge splash.

Take, for example, the project Lights in the City by Alfredo Jaar. It took Jaar’s artistic eye to bring attention to the fact that approximately 15,000 people sleep on the streets every night in Montreal, one of the wealthiest and most frigid cities in North America. In 1999 the city of Montreal invited Jaar to create a Public Intervention. Jaar, to date, has staged over sixty of what he calls Public Interventions: individual art projects that collaborate with cities or groups in order to creatively address social issues. He has created interventions in locations as diverse as the Finnish archipelago, Santiago de Chile, and Milan, addressing issues ranging from immigration to conservation. When Jaar arrived in Montreal for one of seven investigative visits, he had no preconceptions of his project. During an investigative visit to the city, he chanced on the invisible beings who haunt the streets. Through interviews, Jaar found that the poor and homeless in Montreal felt marginalized and dehumanized. He visited three different shelters near the Old Montreal district; consistently, the residents of these temporary homes asked him not to photograph them. Many explained, “This is what hurts. We are invisible. We are treated like urban furniture.”

How did Jaar take this understanding and go beyond the standard work of both artists and NGOs? He was able to imagine something completely different. What distinguishes the work of cultural agents from that of social workers, traditional artists, and socially manipulative ad-men is the ability “to go beyond what is expected,” explains Harvard Professor Francesco Erspamer, co-teacher of an undergraduate class titled “Cultural Agents.” From the French Revolution to Obama’s 2008 campaign, Erspamer says, “the fact that art is effective at a social level is clearly understood.”



Then it’s a matter of how to employ that idea. So Jaar transformed a symbol of Quebec’s parliamentary history and culture into a visually stunning site and a socially painful reminder of the city’s unwillingness to address an overlooked population. The cupola of the Marché Bonsecours vaults over the historic district of Montreal. Formerly the seat of the Canadian Parliament, the building was abandoned by that governmental body after five destructive fires. Now tourists weave in and out of the designer shops and chic cafés on the ground floor, while above, the Cupola remains an empty beacon over the elegant neighborhood.

In 1999, for a period of six weeks, the Cupola would light up a fiery red at random intervals. Suddenly, while diners enjoyed summer nights on a patio or pedestrians ambled the European rues, one hundred thousand watts of red light would brilliantly illuminate the Cupola. “The red suggests fire: the fire that destroyed this cupola five times in its history,” explains Jaar of his project. “This time it is another kind of fire that is destroying that tower. It’s a fire that is actually destroying the society, to allow 15,000 people to go homeless every night.” Each time a person entered a shelter, the sky would light up with the shameful red fire. Ignited by the Cupola, the press and media spread the conflagrant outrage about the thousands of homeless who weathered freezing winters in the northern city. Although the polemic Intervention was prematurely taken down by the mayor of Montreal, it brilliantly drew attention to the hushed-up issue of homelessness in the city and painted a glaring portrait of the invisible residents without showing their faces. This was no exclusive gallery showing of portraits of homelessness. This brought the issue straight to all Montreal residents.

Cultural Agents shows that the work of artists such as Alfredo Jaar is not a singular phenomenon. Not only has Jaar been working for decades now, but the idea that art creates effective interventions has become more and more salient in a world stagnated by old institutions. Though Professor Erspamer emphasizes his role as a theorist in the operation of Cultural Agents, he also notes, “I am Italian, and Italy has been on the verge of disaster since forever.” Acknowledging that Italy’s rich artistic tradition might be its salvation if applied to creative governing is the first step. Erspamer illustrates by pointing to the front page of the Harvard Gazette. Disembodied hands hold up a cardboard sign with the headline: “With jobs in short supply, Harvard analysts discuss what is needed to spur the economy.”


“What this shows is that despite the fact that they are discussing what is needed to spur the economy, part of this sentence is already taken as granted,” Erspamer explains. “The discussion is not open, but is already trying to find the solution to spur the economy, as if to spur the economy is the solution. Which could well be. But that is not even talked about. And I think that’s what Cultural Agents and the use of art and literature within political economics and the social environment offer: on the contrary, they do not take anything as granted.”

Using a similar notion, Pedro Reyes sought an alternative way to address gun mortality in a city in northwestern Mexico, instead of taking it for granted that the only way to fight gang violence is with police violence. In 2008 the Botanical Garden in the city of Culiacán commissioned artists to do interventions in the park. The city had one of the highest rates of death by gunshot in Mexico. Working with public- and private-sector partners, Reyes launched an advertising campaign announcing that citizens of Culiacán could turn in guns in exchange for coupons for household appliances. He called the project Palas por Pistolas (Guns into Shovels). 1,527 weapons were turned over to the campaign. In a public exhibition, Reyes dismantled the arms and melted them down. He made 1,527 shovels. With these shovels, Reyes continues to work with citizens of Culiacán and international efforts to plant 1,527 trees. Reyes re-imagined the horrors of drug violence, transforming violence into environmentalism and guns into shovels. The symbolic value of the project is poignant and manifold. It is also much more than a mere metaphor. As Reyes explained, Palas por Pistolas requires “a physical action as well as a psychological transformation.”

This is one way that art creates impact: it catches us off guard. Nevertheless, we are accustomed to art in prepackaged form. We like museums, places where art can be contained. We walk into a house of culture and expect to leave unchanged, perhaps a little more thoughtful or dazed from spending hours in a dark space. We expect art to play with metaphor, and we can appreciate a clever refiguring of a common theme or icon. But we rarely expect to be provoked to the point of action, or even interaction, with art. “I think museums are built as fridges, which are spaces of perfectly controlled environment where works are preserved for posterity. And I think of museums more as ovens where you cook a reality,” says Reyes. If you are cooking something in the oven, you have to watch it, and the work begins to transform.


Picture from Reyes’ exhibit, Palas por Pistolas

For instance, last summer Reyes partnered with the Guggenheim to create an intervention for museum-goers in Brooklyn. Designed by Reyes with the help of 70 volunteer therapists, the Sanatorium project admitted ticketholders as “patients” for two hours spanning four different sessions of therapy. Meditation, light therapy, sharing secrets: the “works” all aimed at transformation and intimate encounters. The Sanatorium made therapy available to those who wouldn’t normally have access, the way museums give us access to Rembrandts we can’t afford to put on our own walls. It also revealed the action of artwork and the psychological impact that culture can have. Reyes has taken it as his prerogative as an artist to play with the museum space. Our most basic concepts of culture are being transformed. Within the “fridge” of the museum, the most established institution of culture, Reyes is showing how art can effect real change. To Reyes, the arts are an intriguing realm where play is allowed. There is no passive art connoisseur; there are spect-actors.

With this point of view, how do we begin to study art? The phenomenon of Cultural Agents carries into education: not merely what kinds of artists we are teaching, but also how we are teaching. Artists are reimagining the role of art, and so teachers must begin to envisage a new form of education. The humanities can no longer be stagnant, passive studies of a text or artwork. Are objects of art more than just passive objects? What happens when humanists begin to interpret these kinds of works?

In the late 1990s, Professor Doris Sommer trailblazed the Cultural Agents Initiative when she recognized a “crisis” in humanist education. She questioned why humanities education was being devalued, why humanities departments across the nation were being cut, and she asked herself what she could do.

She identified and pioneered the academic study of artistic agency, relating civic responsibility to engaging with and interpreting the process of artistic production. After establishing the field of study, Professor Sommer once again questioned her work as an educator and interpreter within the field of arts activism. “I hold myself accountable,” said Professor Sommer. “What am I doing as a cultural agent? Is it enough to study brilliant people? Or do brilliant people give you a responsibility to be creative and accountable in your own everyday work?” Her response, as she explains in her new book, The Work of Art in the World: On Humanistic Education and Civic Agency, to be published later this year, was PRE-Texts, an educational tool that has been implemented internationally with schoolchildren and teachers of various ages. Based on a program in Perú, PRE-Texts aims to enable teachers to grow. Rather than replicating a pre-packaged tool, the program encourages innovation and recreation of its principles.

At its core, PRE-Texts takes great works of literature, from the likes of Julio Cortázar, Jorge Luis Borges, Ralph Ellison, and Ray Bradbury, and invites participants to explore the text through artistic creation and interpretation. With the help of visiting artists and facilitating teachers, students “play” with literature. They make books, portraits, and poems; they seek out alternative texts; they act, sing, and rap. Using the text as a point of departure, students learn to interpret and question through creation. An essential activity throughout the workshop is asking, “What did we do?” Students are not just asked leading questions. They ask the text questions and learn to imagine answers. This kind of interpretation mimics the work of cultural agents like Jaar and Reyes, who have shown that art is not passive. They use creativity to imagine new possibilities. In a field such as the humanities, where there seems to be little room for innovation, this kind of energy is vitalizing.

PRE-Texts has also been astoundingly successful. The Ministry of Education in Mexico has adopted the program. It has expanded to new schools in Boston, Puerto Rico, Mexico, and Colombia. Educators and students alike express enthusiasm, demonstrate increased collaboration, and expand their appreciation for arts and literature. “This is my way of turning my own everyday work into something more creative and more responsive,” says Professor Sommer.



Without being an artist herself, she was able to take lessons from the great theorists and artists she studied and taught and spin them into an act of cultural agency. As she told herself, and as she tells her students, “If you’re going to get anything done you’ve got to be unreasonable.” PRE-Texts is one manifestation of Professor Sommer’s civic engagement. Her role as an educator continues to inspire students and to open possibilities for the program—such as the unlikely opportunity to develop a similar intervention in Zimbabwe.

“So we’re going to do some art activities and this is going to make students learn?” Although Harvard University senior Naseemah Mohamed values self-expression through dancing, when she began talking with Professor Sommer about implementing an arts intervention in middle schools in Zimbabwe, she was skeptical. “I never feel more like myself than when I’m dancing,” says Mohamed. A native of Bulawayo, Zimbabwe, she has spent the last two summers conducting thesis research in schools there. Last year, she enacted a creative program that is now supported by the Ministry of Education. For next year, she has received a fellowship to learn flamenco dancing in Spain, and in the future she plans to pursue higher degrees in education policy.

Naseemah not only had to overcome her own qualms about the project; she also had to convince educators in Zimbabwe. In an education system where colonial education and corporal punishment still hold sway, the possibility of staging a successful education program using art-based learning seemed dubious. On top of that, the fact that midway through the nine-week program the students would go on vacation cast a shadow on Naseemah’s summer project. The 21-year-old pushed on. She brought in five artists to work with students between the ages of 14 and 20. The artists collaborated with the teachers to lead workshops and work through Chinua Achebe’s difficult text Things Fall Apart, in English, with the students. The students began to learn literature through interacting with the arts and arts projects. They acted, sang, rapped, painted, and wrote poetry. At the end, the students presented an exhibition. “The agency that the students gained in the classroom shocked all of us,” remembers Naseemah, who is currently translating her findings and data into a thesis. “I hypothesized that when you bring in art, teachers begin to not only recognize the individuality and the humanity of the students they are working with, but they also begin to appreciate the work that they are doing.”


Not only did the students increase their reading comprehension skills, but the relationship between students and teachers also began to change. The arts intervention began addressing some of the fundamental problems of the Zimbabwean education system, as well as the widespread social pessimism Naseemah observed during her research. This pessimism characterized education as ineffective and useless, an attitude that harms both students and those considering careers as teachers. Naseemah’s intervention used art to imagine a new possibility for these students and teachers. Clearly the institutionalized form of education was not working. Injecting the education system with a dose of creative energy gave students the ability to imagine better possibilities, and it made an impact on educators and students alike.

Art not only presses certain pressure points in society that can cause widespread and diverse reactions; it has also revolutionized teaching. What does this mean for the future? Will art help to solve economic problems? Perhaps. For now, it is an inspiring new horizon that merits attention. Why? Because it is always changing. At the very heart of artists like Jaar and Reyes, and educators such as Professors Sommer and Erspamer and Naseemah Mohamed, is the idea that art continues to revolutionize, develop, and change. Politics, education, immigration, environmentalism: all are fields that need to be re-imagined. There are even areas we don’t yet recognize that could use a disruptive artistic intervention. Cultural Agents plays to the idea that in the arts nothing is assumed. Art surprises. It delights. It innovates and takes risks. Art imagines new solutions; it pushes us beyond repetition of past mistakes. Art and fiction allow us to test out ideas and imagine the impossible. Art holds a mirror to society (a diagnosis, you could say). Cultural Agents recognizes those artists who take that cultural diagnosis one step further and stage an intervention. As Professor Sommer points out, the “family resemblance” between cultural agents is strong and widespread across art and education. Gaining the tools to recognize those resemblances is the first step. “You can recognize those family resemblances in other things that people are doing. Fine,” says Professor Sommer. “Then you have done a first level of intellectual work, identifying what other people are doing. But what if you can recognize that those things are done in any field you can imagine? Then what? What responsibility does that visit on you?” •


The Poetry of Politics
Bringing the lyrical to the presidential.

Olivia Zhu

Poetry is unquestionably linked with speechwriting. The most memorable and effective addresses use literary devices to make policies palatable to the public. Yet many speeches rely on ornamental poetry alone. Though a clever turn of phrase or words that strike emotional chords might move audiences briefly, poeticism is most potent when paired with meaningful content. That is to say, a little bit of flair—in all its different rhetorical and literary manifestations—makes a speech artistic and powerful. A speech with strong content and policies that is overwhelmed with imagery and alliteration, then, is equally ineffective: its message is lost. Style is substance in political speechwriting, in that presidents must be entertainers in order to be effective explainers. Poetic techniques are intrinsically tied to the success of public addresses, and content can only be successfully disseminated when tied to attractively phrased sound bites.

Beyond the fact that poetry is present and important in speeches is the idea that not all literary devices are the same. Certain common techniques, like rampant alliteration, poignant imagery, and apt metaphors, are found in countless speeches, ranging from Abraham Lincoln’s renowned Gettysburg Address to Barack Obama’s 2008 New Hampshire primary concession speech. But other, more anomalous devices can make speeches just as potent and memorable. Consider the antimetabole in John F. Kennedy’s inaugural address, when he famously pronounced, “Ask not what your country can do for you—ask what you can do for your country”: one of the more unique instances of poetry in presidential speaking. The particular types of phrasing and cadence that can keep a speech forever in the nation’s memory are hard to pinpoint, but it is patent that poetry affects listeners on a visceral level.

As such, the longevity and popularity of the most famous addresses in American history are due to rhythm and rhyme, applied in good measure, good context, and good taste: a difficult blend to produce, but one important to producing speeches that move public audiences while contributing to a president’s written legacy.

SPEECH AS POETRY

“The basic purpose of political rhetoric is to ‘move men to action or alliance,’” said former Kennedy and Johnson speechwriter Richard Goodwin. Indeed, the main goal of presidential speeches is rarely education. Persuasion and agenda setting are paramount, and words must captivate audiences with short attention spans and shorter memories. State of the Union and inaugural addresses are perhaps the best examples of inspiring speeches, but almost every widely covered or moderately important address is sure to contain sections dedicated solely to poignancy and appeals to emotion. In the words of Clinton speechwriter Jeff Shesol, “the words that excite us are also the words that can change us—words that stretch our national sense of self, that make us believe we really can end Jim Crow and win a war and put a man on the moon.” His words indicate that poetry facilitates the lofty speech so necessary to communicate a message across the vast diversity of America; furthermore, the emotional reaction triggered by a speech incites citizens to action. Regardless of creed, ethnicity, age, or gender, common themes of patriotism and unity are easily understandable and achievable, all thanks to a few choice words and artistic phrasing.

Poetry is vital to speechwriting for yet another, more pragmatic reason.



Because imagery, alliteration, and other literary devices strike emotional chords through evocative pictures or sounds, they allow easy, instinctive communication of ideas between teams of speechwriters and the men they write for. That is to say, when speechwriters have limited access to a president, or have to collaborate on an address, poetry is the common language that permits disparate parts of a speech to finally coalesce. More importantly, a president can take the words of a speechwriter—foreign as they might be, since others wrote them—and immediately understand the emotion and communicate it to an audience, because of the universality of poetry. Poetry is practical.

In fact, poetry is an inherent duty of speechwriters. According to Ben Stein, a speechwriter for Richard Nixon, “good speechwriting is the ability to make the prosaic poetic.” Indeed, it seems that some aides have an intrinsic talent: they wax literary even when speaking casually and impromptu about their own experiences. There is no need for hours spent crafting a speech with the perfect versed line when writers like former Clinton aide Jordan Tamagni can create poetry on the fly, almost unintentionally. While talking about the principles of speechwriting in January 2012, she pointed out that without well-reasoned arguments, “rhetoric hangs like wet laundry on a line.” Interestingly enough, without her colorful simile, perhaps her point would not have been as memorable.

Poetry is so important to speechwriting that some speechwriters have actually been carefully selected poets. Professional poets were sometimes hired by speechwriting staffs as special consultants, while in other administrations one speechwriter might be designated the “staff poet.” Even Franklin Delano Roosevelt relied on calling “poet Archibald MacLeish, who served as librarian of Congress during the 1940s, or some other close adviser, to come in and lend a hand.” One of Nixon’s other speechwriters, William Gavin, had the unofficial duty of being in charge of “rich, velvety, rippling sort of stuff.”

Yet the legacy of speeches belongs not to speechwriters but to the presidents they serve. Thus, the poetry carefully worked into speeches is meant to create lines that will be remembered for generations. When poetry and structure are left behind in favor of conversational speech, however, the results rarely contribute to a thrilling and historic address. Take President Clinton’s speech on Nov. 13, 1993, on “What Dr. Martin Luther King Jr. Would Say If He Were Alive Today”: he “disregarded his notes” when giving the speech, which was about national violence, and used colloquialisms.


Though the speech was initially lauded by the press, over time “the president’s words seemed to vanish from the national consciousness.” Ad-libbing seems to have been a trend with Clinton, confirmed Tamagni—perhaps a reason why there are so few memorable quotes associated with his time in office unrelated to his impeachment. Compared to the countless indelible remarks of Abraham Lincoln and John F. Kennedy, both of whom relied heavily on carefully cultivated speeches, Clinton’s remarks were perhaps so ephemeral because they were full of popular sayings, jocularity, and bonhomie, but contained little of the poignancy of poetry.

Presidents and their staffs are well aware that a historical legacy is memorialized not only in successful policy action, but in the speeches that spurred those actions. Tamagni recalled being scolded by advisor Rahm Emanuel after he read a draft of one of her speeches for Clinton; he asked if she was deliberately trying to write an address that was not memorable. Almost half a century earlier, F.D.R. realized the importance of writing strong speeches as well. His adviser Robert Sherwood remembered that his president “knew that all those words would constitute the bulk of the estate he would leave posterity and that his ultimate measurement would depend on the reconciliation of what he said with what he did.”

American history is rife with memorable speeches and lines that, when analyzed, are replete with poetry. Lincoln’s “better angels of our nature,” Obama’s “Yes We Can,” and Kennedy’s “Ask not what your country” are all examples of lines that remain in the national consciousness. They are repeated, mantra-like, in history books and by nostalgic pundits. A speech’s memorability depends on the conciseness and emotional impact of its content, which makes poetry integral to successful addresses. In just a few short words, poetry enables complicated or abstract ideas to fuse in a unique, brief form that allows an audience to quickly grasp and remember connections between speech points and overarching themes.

PUTTING METAPHORS TO WORK

Much of the discussion of poetry’s utility so far has been in an overarching sense; a closer look at varying literary techniques provides a more textured picture of how words and syntax impact presidential speeches. Sometimes subtle, poetic devices add value to a speech in multifarious ways, just as they might contribute to prose and, of course, poetry. Abraham Lincoln, long revered for penning some of America’s most defining words, was an expert at using a vast range of literary ornaments that ultimately made his speeches more idiosyncratic and complex.


Now we are engaged in a great civil war, testing whether that nation, or any nation so conceived and so dedicated, can long endure. We are met on a great battle-field of that war. We have come to dedicate a portion of that field, as a final resting place for those who here gave their lives that that nation might live. It is altogether fitting and proper that we should do this.
-Abraham Lincoln, Gettysburg Address

Former Kennedy speechwriter and fellow wordsmith Ted Sorensen said of his predecessor: “Lincoln avoided the fancy and artificial. He used the rhetorical devices that the rest of us speechwriters do: alliteration (‘Fondly do we hope—fervently do we pray’; ‘no successful appeal from the ballot to the bullet’); rhyme (‘I shall adopt new views so fast as they shall appear to be true views’); repetition (‘As our case is new, so we must think anew, and act anew’; ‘We cannot dedicate, we cannot consecrate, we cannot hallow this ground’); and—especially—contrast and balance (‘The dogmas of the quiet past are inadequate to the stormy present’; ‘As I would not be a slave, so I would not be a master’; ‘In giving freedom to the slave, we assure freedom to the free’).” The key takeaway from Sorensen’s analysis is not only that Lincoln used a great variety of poetic devices, but also that he used them to add to a speech rather than to embellish it.



Each instance of metaphor had meaning; there was no alliteration just for alliteration’s sake. Thus, it appears that poetic devices are not interchangeable. A highfalutin allegory means nothing to an audience that does not understand it, while vivid imagery will evoke memories and feelings easily. Of course, there is always disagreement over which poetic devices are the most effective. Former George W. Bush speechwriter John McConnell remembered that he and the rest of the speechwriting staff once realized “this really great line rhymed, and then it was absurd,” because everyone in the room initially thought they “were Churchill, but [they] ended up Dr. Seuss.” By contrast, Sorensen felt that the rhyme “I shall adopt new views so fast as they shall appear to be true views” helped Lincoln make his point in his letter to Horace Greeley. The key word here is, of course, letter. Spoken aloud—at least in modern days—a blatant rhyme seems egregiously childlike. McConnell also suggested that it is important to be careful when using alliteration, possibly because an excessive amount is too obvious and negates any potential benefit of a more subtle delivery. After all, who would want to listen to a silly speech stuffed with susurrations? The general rule of using poetic devices in speechwriting seems to be that all good things come in moderation. Said McConnell, one should only use “some nice illustrations if they add to the point” of the address—even though a picture is worth a thousand words, it is not worth using a thousand words to paint a picture.

While appropriate use of poetic language can contribute to an address, equally important to speeches are rhythm and timing. A politician’s poor delivery can forever maim a speech, depriving it of its potential impact and grandeur. Take, for example, President George H.W. Bush, who, according to his speechwriter Mark Davis, “rarely got involved in the style” and “would have served himself a little better if he had.” Because speeches are written in a poetic style, “what may appear, on the page, to be an incomplete or run-on sentence” might nonetheless “achieve a compelling cadence or rhythm that works well when spoken”; it is thus the duty of the speaker to enunciate and emote. Speechwriters help presidents by building applause lines into speeches and keeping sentences short, similar to lines in a poem.

Applause lines are, in fact, a key technique. Analogous to stanza breaks, pauses set after politically loaded comments are unmistakably strategic in terms of politicking, as moments for applause are often engineered so that it would be uncouth for an opposing party to refrain from applauding.


Former Carter speechwriter James Fallows describes such a break, following a section on taxes and income inequality in Obama’s 2011 State of the Union speech, as another “let’s dare the Republicans not to cheer for this” line. In addition to the political rationale behind pauses, it is important to remember that brief breaks between ideas are calculated for audiences hearing the speech in the room or at home, allowing listeners to process information and silently agree.

MODERNIZING TECHNIQUE

The applause line, however, is a recent invention, spurred by the advent of television and institutionalized media coverage. After all, the past fifty years have seen a marked change in speech types, since they have marked a “conversational age, not an oratorical age.” There was no need for Washington, Lincoln, and Jefferson to include applause lines in their speeches, which were intended for print distribution. Now, says McConnell, “the State of the Union turns into four hundred applause lines”—all of which are noted in official White House transcripts and documented and analyzed by bloggers like Fallows.

There have been other developments in speechwriting that parallel the development of media technology and the structuring of news delivery, making short, catchy phrases that somehow encapsulate policies and platforms all the more important. The sound bite takes such a significant role in political reporting because the quantity of speeches has increased even as coverage of any individual speech has dropped, a trend dating from the early 1900s onward. After all, “until the early 20th century, American presidents addressed themselves chiefly to the other branches of government, not to the people—and even then, most communications were written rather than spoken.” Lincoln is a prime example of a man who lived in an age when “oratory was important political entertainment; but with no broadcasting, his words reached large audiences outside the immediate vicinity only by print. His speeches were published in the newspapers of the day and composed by him with that in mind.” That is not the case today. There has been a systematization of speechwriting, as aides often have to produce “one to three speeches a day” to cope with the media burden of the modern age. The speed of speechwriting and speech giving has increased even more rapidly over the past few years:



“Gerald Ford… delivered a speech on average every six hours in 1976…. Jimmy Carter… [added] 9,873 single-spaced pages to the Public Papers of the Presidents of the United States. Ronald Reagan increased this bulk with another 13,000 pages, and Bill Clinton, in his first year as president, spoke publicly three times as often as Reagan did in his first 12 months.” Quickly churning out speeches wears on aides, as many are consigned to writing dry or unimportant addresses for minor events. For inaugural addresses or State of the Union speeches, each writer takes a section or writes in teams, which is detrimental to the unique poetic voice that would have existed with a single author. Nixon famously complained that his speechwriters were not on par with those of Woodrow Wilson; his staff retorted that Wilson wrote his own speeches. Many presidents did, until the advent of television.

“BE SINCERE, BE BRIEF”

Franklin Delano Roosevelt once proclaimed his rules for speakers: “Be sincere; be brief; be seated.” Similarly, Jordan Tamagni said the three questions she asked herself while proofreading speeches were “Is it pompous? Is it soulless? Is it endless?” It seems that, with the inclusion of poetry, speeches cannot be pompous if literary devices are used carefully; speeches cannot be soulless if poetry inspires Americans to action; and speeches cannot be endless in an age where television necessitates quick, poem-like lines. Perhaps poetry is so associated with presidential speechwriting because its subtleties, complexities, passions, and humanity so parallel the political nature of the executive office. John McConnell, summarizing the tenets of great speechwriting, suggested prose that allows for “feeling that can be felt not because you announced it, but because you conveyed it”—precisely the sentiment resulting from the skilled application of poetry to presidential speeches. •



The Doomsday Diaries Survivalism's many stripes. Daniel Gross CRYSTAL BALL “You get the word passed to you. 'Hey, uh – make sure your food and water and ammo last – we've been cut off.' “You're like, uh, that's not good – I'm kind of in a desert. There aren't a ton of things around here that we could use. It's like going back to Boy Scout days.” In 2003, Andrew Wark's Marine unit was invading southern Iraq. He recalls it with a casual ramble that seems almost practiced. Within a day or two, he tells me, his unit was resupplied. But that was long enough for him to sense he'd been on the edge of what normally seems like a pretty orderly, if not quite relaxing, life. Leon Ordnoff hadn't met Andrew yet, but in 2003, he was in Iraq too. He talks fast. “You know, most of my twenties I was in a war zone. I did four tours of Iraq, one of Kosovo, one in Africa. Africa, we got left behind for like four or five days. There was an embassy bombing. The ships got spooked and took off and just left us there.” Leon means the two truck bombs that killed hundreds of people in Kenya and Tanzania on August 7, 1998. It was still three years before one of their likely perpetrators, Osama bin Laden, would become known to all of America. That day, two countries saw how abruptly violence can rip through a city. Leon saw the fragility of a supply line. “You just never really know, nobody has a crystal ball and can say what's going to happen. You know what I mean? Make sense?” He doesn't wait for an answer. Andrew met Leon in a bar when they were the only two combat vets on base in Japan. Back then, they wouldn't have called themselves survivalists. They probably didn't even know what 'survivalism' was, but they would. At the time, combat experience just gave them something to talk about. A few years later they were out, civilians.


Andrew ended up doing communications design, which is close to what he did in the Marines. Leon stuck even closer to his military experience, with a Fresno-based gig at the Department of Defense. Even in 9-to-5 jobs, a little further from camouflage and carbines, societal breakdown still seemed possible. “In the Marines,” Leon says, “the thing that was always secure was pretty much food and weaponry. When I got out we didn't have an armory we could go to. We didn't have a supply tent where we could go and just say, hey, let me have some food.” “I said, let's get a plan together just to be safe. We were voicing our opinions on some websites and forums and stuff, and we realized, hey, we're usually the topic-setters on all this stuff. We're usually leading the discussion, we're usually answering the questions. Let's just start our own website.” When they did, they joined dozens of 'survivalist' or 'prepper' organizations nationwide that train to live through whatever's coming. Theirs is called Tactical Survival, and it's a Southern California group for figuring out how to survive if bad stuff happens. Their logo on meetup.com is a skull and crossbones. When you ask exactly what the bad stuff is, Leon and Andrew aren't evasive so much as elaborately nonspecific. There are earthquakes, tornadoes, hurricanes, power failures. They'd all demand a lot of you – keeping yourself fed, sheltered, defended. Make sense? So Tactical Survival members go on hikes, study for their ham radio exams, and shoot targets in the desert. They test out gear and take classes at REI. This makes them sort of like the Boy Scouts, if Boy Scouts had houses and spouses and lacked a national organizing body. You might think of them as a group of expert amateurs in a range of survival skills. That's what ham radio, for example, is all about – you become certified, through detailed federal exams, to broadcast on amateur 'ham' frequencies.


The question is: experts in what? It's unlikely that any two would share the same training, partly because preppers come from vastly different backgrounds, and partly because each one is prepping for something slightly different. It's important to note, therefore, that none of this is perfectly defined. One will tell you about invasion and nuclear attack, and another will talk about economic collapse. Some preppers expect imminent and unprecedented disaster, while others, like Leon and Andrew, just say you'd take out insurance on your car – why not on your survival? But well-defined or not, there are conferences, magazines, and blogs aplenty that thrive on whatever impulse survivalists have in common. And they do seem to share something. All of the preppers I spoke to felt they belonged to some kind of movement, maybe a growing one, defined by common concerns. They'll disagree about what to call it, maybe. (For simplicity's sake, I'm using 'prepper' and 'survivalist' interchangeably here.) Self-perception, as much as outside definition, can classify you into survivalism – as it would in a political party or religion or hobby. You make yourself up.

MAKING IT UP The media, of course, are fans of any story that involves the end of the world. But they don't tell it particularly well. Every so often a journalist hears about survivalism and tells readers: “Area Grandma Likes Preserving Vegetables – and Guns”; “Guy Prepares for Disaster That Probably Won't Happen”; “Hard Times, But Not For Gas Mask Salesmen.” Many of them link today's survivalism to the fallout shelters of midcentury America. That's a bit of a challenge, though, because it seems like preppers become preppers when they understand themselves as such. Cold War-era disaster preparation, the kind that shows high school kids trying out 'Duck and Cover,' wasn't a private self-identifier as much as a public recommendation. Survivalism attracts the media because of its easy paradoxes and surface fascination. That's somewhat understandable, because it makes for killer hooks – as in the heading of “The Yuppie Survivalists” in Details magazine: “They live next door to you, not in bunkers. Young, successful, urban 'preppers' are stockpiling for the apocalypse—and they think you’re crazy not to.” These kinds of articles are happy to oblige the notion that survivalism is a rising trend. To a magazine, the only thing more interesting than a weirdo is a whole pile of weirdos. Practically, it's quite hard to know whether survivalism really is growing.

It would mean tallying a mindset, one that's particularly challenging to explain. It's more than counting bomb shelters, as you might have done in 1960. It is clear, however, that the efficiency and anonymity of the internet have made prepper voices louder and more widespread in the past decade. To be fair, “The Yuppie Survivalists” did a decent job illustrating that you can live in a city, enjoy moderation in wine and politics, work in IT or engineering or furniture design, and still be a survivalist. I spoke to a woman in Tactical Survival named Chris May, who spends her workday trying to kill cancer cells in petri dishes. She voted for Obama, but doesn't care much for politics. “There's no such thing as a typical survivalist,” she said. “Get a group of us together and you can't get us to agree on why we need to prepare. I have been in unproductive heated arguments over whether the group focus should be firearms training or 18th century primitive farming skills.” When I asked her in an email about her survival plan, she sent me 1,200 words describing emergency items like 'bug-out bags' (BOBs) and solar car chargers. Another member was particularly concerned about chemicals and processing in the food supply. He wanted to stay anonymous. He's worked too many jobs to name, but most of them – like car repair and special effects – required improvisation and mechanical work. Because of that, for him as for many preppers, survivalism can be kind of fun. The Details article managed to capture some of this kind of diversity. Like most media accounts, it wasn't wrong. Just playful. When Andrew read it, he said: “Actually, not a bad story. But being in a high rise in the event of something really bad happening isn't a great idea. No power, stairs blocked...now what? 2nd floor, jump it. 22nd floor? Ouch.” When you look at the world through survivalist lenses, you're sensitized to what seem like societal weak links. The electricity-dependent credit card machines. The gasoline supply chain. The tall buildings. Then you think about how you'd respond if the links gave out. It's atypical, yes, to look at a high-rise and think about escape plans. The main thing is, it's not playful. Details says survivalists would call me crazy for failing to prepare, but Andrew saves crazy for a different group – a particular breed of survivalists. “I've been asked if we can teach how to make bombs. And I'm like, what? Huh? Like I even know how to do that. Sometimes Leon and I will talk to a guy at a function, and then go outside and smoke a cigarette and go, that guy. Was crazy. And I go, yeah. That guy was crazy.”



PEOPLE LIKE ME There are a lot of kinds of survivalists, and most of them want you to know that. It's clear that preppers consider it important that you understand what they're not. The one you're talking to isn't on the fringe – but there is a fringe to not be on. Joe Nobody sees three species. He's the author of a few bestsellers on survival preparation, like Holding Your Ground, a guidebook about fortifying your home against attack. When Joe's not writing, he teaches courses on survival preparation in Texas. Like Leon, he works in private defense. As an author, he stays anonymous because of his work with government agencies. He did his classifications for me on the phone from Texas.

1) Anarchists. Those preparing and even eager for the fall of society. Often marked by radical religion or conspiracy theory; exhibit innate distrust of authority. May resemble the original 'mountain men' of the 1800s. Likely the smallest category; one would do well to avoid them.

2) Telepreppers. Those who prepare in front of audiences, as on The Discovery Channel. Like Bear Grylls (“Man vs. Wild”), they pride themselves on hands-on skills, knowledge of nature, and creative MacGyverism in the face of short-term survival scenarios. Often found (deliberately) lost in the woods, stuck in an avalanche, or treading water after their kayak capsized.

3) People Like Me. Those who desperately hope the government will not fall, but judge the likelihood of such an event as high enough to warrant cautionary steps. Appear to be the vast majority; reasonable folks with reasonable concerns.

Like Joe, Leon won't go near what he calls the 'radical, conspiracy-minded survivalist.' He's slightly less repulsed by the 'pastime survivalist' – the armchair commandos who like talking a good game, but won't spend a buck on real gear. Then there's the 'housewife prepper' – who sure stocks up on food, but might not be so into the firearms. Leon's careful not to denounce any of them, but it's clear from his tone that these approaches don't make that much sense to him. He wants to talk survival with people like himself: people who'll stock food and water, grab some gear and also some guns, and carry a “realistic” head filled with know-how. Classification is important partly because survivalists tend to be taxonomists – people who rate and classify survival gear and disaster scenarios, and like classifying each other as much as they dislike being miscategorized themselves. To be precise, most people interested in this stuff would object to using 'survivalist' and 'prepper' interchangeably.


There's good reason for their unease: survivalism is, almost literally, a loaded word with a very loaded past. If you ever need to construct a Mark-I submachine gun, for instance, you might turn to a handy volume from the '70s called The Poor Man's James Bond. It's a book written by Kurt Saxon, formerly Don Sisco, with illustrated instructions to make homemade explosives and weapons. On the opening page of the revised edition are two columns. The left column describes the descent of 1980s society into crime, drug use, and moral complacency – in the face of which each citizen needs to take the law into his own hands. The right describes the stealthiest sources for weapons ingredients, like hobby chemical companies that advertise in Popular Science. Saxon claims to be the inventor of the word 'survivalist.' Saxon isn't known to be a violent man, but James Oliver Huberty was. He sensed impending collapse, by nuke or Depression or government action. He stockpiled food and firearms in the early 1980s. In 1984, he killed 21 people at a McDonald's in San Ysidro, California, before a police sniper killed him. He used three guns and took more than an hour to do all this. Newspaper reports called him a survivalist. Most people gave up trying to explain why someone would do something so unfathomable. Explicable or not, his title of 'survivalist' became part of the massacre's public narrative. Were men like Saxon and Huberty survivalists? Maybe. They're certainly part of the reason that many preppers won't use the word. A sociologist named Richard Mitchell calls this the Huberty problem. In his book about survivalism and modernity, he recognizes the difference between Huberty and preppers who aren't violent. “So why is he granted a place in this narrative? Because Huberty and stories such as his have come to define survivalism in the public mind.” According to Mitchell's research, only two-thirds of the survivalists he surveyed owned firearms. Yet, as he points out, when most people think of survivalism, they think primarily of guns. Mitchell's greatest insight may be that meaning in society is built of competing narratives. One man's survivalist is another man's lunatic; it depends who tells the story better. In those narratives, our minds learn by association – and we learn to accept ourselves by differentiation. History, then, is heavy for most survivalists. Participation is by definition symbolic, and by classifying we try to change the symbols around, to lighten history a little. The media have had their own trouble classifying survivalists. For lack of a better place, the Details article ended up in the Culture & Trends/Career & Money section.


The New York Times placed its 2008 article “Duck and Cover: It's the New Survivalism” in Fashion & Style. (That article, by the way, also captured some of the diversity of today's survivalists – but only as a surprising shift away from vaguely described “doomsday measures once associated with the social fringes.”) When we run out of phyla for our everyday experiences, it seems, we just stretch the old ones to make them fit. Classifications help define who we'll listen to and who we won't – whose stories are told and whose are ridiculed. Most of the time, our shorthand for this kind of sorting is 'sanity.' The ones we call insane are the easiest to ignore. Before Tactical Survival, Leon and Andrew were part of a different group. It was more hierarchical, Andrew says, and less into firearms. When a few of the members wanted to learn to shoot, the pair started teaching classes of their own. The group leader got upset. “Saying like, we don’t want to become militant – and I'm like, well I'm not really trying to become militant. People just want to know how to hit a deer. I'll teach you that. He ended up telling us we had to leave the group because we were kind of on the fringe or something.” Every prepper is on someone's fringe. “A lot of those people came and joined our group. That first group was a little bit out there for me.” As it happens, Kurt Saxon, author of The Poor Man's James Bond, also pointed a finger at what he thought was crazy. In his narrative about societal disintegration, he pointed at Huberty and away from himself. I have to wonder who Huberty might have pointed at, if someone had asked him about craziness. Perhaps his actions were truly insensible, even to himself – or maybe he found a way to rationalize immense violence as a response to his vision of societal disintegration. To Saxon, Huberty was disintegration. Crazy, of course, is in the glinting eye of the beholder. Which is why people feel justified ignoring Kurt Saxon – he's on the fringe.

THE SOCIOLOGIST AND THE SURVIVALIST When he was forty-two years old, Richard Mitchell wept against the dashboard of his car. He'd spent the weekend in Idaho with Eleen Baumann, his research partner, and a group of white supremacists who also bore the name survivalist. The leader of the gathering had spoken of killing Jews and black people. He had taught them to do it with bits of heavy wire. On Easter Sunday, packing up in his trailer, Rich heard children playing a game outside in the dust. “Slit his throat! He's a Jew! He's a Jew!” they had yelled.

Mitchell is a sociologist, and this was what he studied. His task was to listen, to understand, and that weekend he couldn't do it. As with Huberty, these supremacists were survivalists by practice. They prepared for societal disintegration; they shared common concerns. But did this make them survivalists by name? Rich's tears were data not about survivalists, but about himself. They described the difficulty of integrating that weekend into the rest of his life. And they raise the question: if you call them survivalists, are you inadvertently condemning preppers of a different stripe? The ones who might also have wept at the sight of such hatred? Rich needed a new way to define and maybe explain survivalism. What he came up with became his book Dancing at Armageddon: Survivalism and Chaos in Modern Times. His is the best outsider's text you'll find on survivalists, driven by fieldwork stories and interspersed with the voices of heady but insightful social theorists like Max Weber and Georg Simmel. Its thesis is succinct: survivalism is much more than a sideshow curiosity; it is a particular type of storytelling, and it can tell us a lot about the modern world. Preppers design stories about how things could end, and illustrate those stories through their skills and their stockpiles. “In survivalist scenarios what you have is what you need,” Rich said in 2001. Preppers talk about water shortages when they show you their water stash, he says; they talk about fallout when they show you their shelter. In a way, survivalism is optimistic – life can't be overwhelming, because it's possible to prepare for it. In other words: stories define survivalism, and survivalists design their worlds to fit their stories. It's as if the disorder is real once you've found a cure.



This is a theory you can throw at just about any prepper, because every prepper who talks to you will explain why his preparation made sense given the state of things. Leon has his stories about mitigation in the face of risks; Kurt Saxon has his about weapons in the face of societal disintegration. The question is, is Rich calling them mere storytellers? If so, he's contradicting himself – it would make them sideshow storytellers. But I suspect that he isn't. Because, first of all, nuclear war or food shortages are real threats, if hard to measure. More importantly, Rich's book insists that survivalism is an adaptive and inventive response to modernity, not irrelevant escapism. Rich, in this schema, is a storyteller too, and not necessarily a more successful one. The limits of his story are part of the story; he knows his task is to fail the best he can at explaining survivalists. Psychiatry aside, we tend to deem a man sane if his choices are in proportion to the actual world. Except we take in the actual world through stories – news articles, road maps, rumors, government reports. Without them, all I can know is the street I live on. Which means that a man is sane if his actions are in proportion with the stories he listens to. If he listens to different stories than I do, then who's sane? Do we decide to define sanity by the stories we choose to listen to? That's a circle: we choose the right stories, and therefore are sane; we are sane, and therefore know how to choose the right stories. None of this is to say that there aren't real, empirical risks that we can see in a real, empirical world. There are. Rather, it's to say that our margin of error in trying to know the empirical world is greater than the difference between Rich and Leon and Joe and me.


And if I say Kurt Saxon and supremacists fall beyond that margin of error, that they exist in some category of their own, it's mostly because I feel like thinking that. It's hard to justify it more tightly than that. It all breaks down, I think, because Rich's theory is in the first place a people theory before it's a prepper theory. Simply put, people tell stories and like their world to correspond with their yarns. And if we can find our footing in a world so relative, then maybe survivalists can be storytellers without being mere.

LISTENING “Is that what you want me to call you in the article?” I ask. “Ranger Man?” “Jason's okay too,” he says. Ranger Man is driving in Maine to a quiet spot where he can “shoot up that armor vest I blogged about on Wednesday.” He runs SHTFblog.com. It stands for “Shit Hits The Fan” – a favorite acronym among preppers that denotes survival mode. “I guess I'd describe it as sort of, a rational, reasonable, personal and societal preparedness blog. It's not over-the-top, you know I'm not talking about how to EMP-proof your 1968 Dodge.” It turns out the blog rakes in a good deal of ad revenue, and of late he's been able to pay two people to write for him. Better than most American newspapers. “What do you study down there?” he asks me. “English.” “Oh yeah? I was an English major in college.” Jason throws out an 'English major-ey' theory that he thinks I'd like.


You know zombie movies? They're just a cover for working out survivalism in a socially acceptable way. He wrote a blog post about it: “It’s easier for many people to joke with others about a zombie invasion and entering survival mode than it is to talk about what life might be like if a real, deadly flu pandemic hit, what would happen if grocery store shelves went empty and the electricity ran out, with no foreseeable help to come.” When we manage to talk about it, he's suggesting, we do it through social codes and rituals. Otherwise we'd say nothing. If zombie movies can teach us something or help us communicate, it's because they offer an alternative to the daily life we know. And in that sense, Jason sounds a lot like Rich. “Survivalism, like art,” Rich wrote in Dancing at Armageddon, “promises what rational life does not, grandiloquent symbolic means of making a difference, personally and morally, in modern times.... Imaginary sides are drawn, rules set, action consequent and lasting.” Talking about survival, then, imagines over the weaknesses in society. Perhaps it can help us fix the weaknesses. If not, it helps us ignore them. There are plenty of things to say about both Jason's theory and Rich's theory, if you believe them. A few of them are: Dawn of the Dead. The Time Machine. Red Dawn. 1984. Atlas Shrugged. Star Wars. The Matrix. Lord of the Rings. Fight Club. Fahrenheit 451. The Road. Butch Cassidy & The Sundance Kid. The Twilight Zone. I don't mean that survivalist narratives should be classified as fiction, like the ones above. Rather, I mean that we've liked to imagine alternate worlds and harsh futures for a long time. Survivalists are not the only ones who'd think that on the frontiers of the West, on the flight deck of a spaceship, on the streets of dystopia, the rules are different, the dangers and the deeds are bigger. It's not a coincidence that Joe Nobody's first book of fiction is compared to a Western on the cover. It's on these kinds of frontiers that people have worked out their ugly fears and wispy dreams. As in The Twilight Zone, these scenarios can be implausible, but they're often close enough to make us wonder. Jason is a news junkie. He likes Drudge Report, Fox News, CNBC, Al Jazeera English, MSNBC. And BBC and NPR – the usual stuff, he calls it. Like many of the other preppers, he's impressionistic, perhaps vague, when he's asked what exactly he's preparing for. Unlike Leon and Andrew, he thinks the chance for collapse is just way higher today than it used to be. Much of his concern has come from the headlines. Y2K, which “although it didn't really materialize, really highlighted for me how reliant we are on fragile infrastructure.” 9/11 right after that.

Hurricane Katrina. H1N1. “Kind of just a culmination. Got me thinking, what if two weeks from now...yada yada yada.” ... He has a wife and two young kids. When I ask him about anonymity, he has two reasons: one, that if his name's out there people will think him crazy. To explain the second, he mentions a classic episode of The Twilight Zone. It's called “The Shelter,” and it follows a lone Cold War-era prepper in a neighborhood that thinks he's paranoid. Spoiler alert: when the sirens start to echo through their sliver of suburbia, the neighbors come knocking on the shelter's heavy doors. Ranger Man, like the guy in The Twilight Zone, is stocked for his family, and that's all. So Jason doesn't tell the neighbors. He says preppers would call that op-sec. Operational security. Yada yada yada. Jason – Ranger Man – makes fun of everything. But at the moment he's entirely serious. It's his sincerity, actually, and the sincerity of people like him, that makes it so easy to call all of this an elaborate charade – to say survivalism is a way to elevate absurdity and child's play. We can read survivalism as storytelling, the way Rich does, or we can read it as crazy talk. Is there some alternative? Can we refuse to read it? Maybe – as if the goal instead is to refuse analysis, reject theorizing. As if all you needed to know in the first place is that people like to gossip, yammer, blather, write – and that all anybody can do in response is listen carefully, or not. Jason stays on the phone with me for a few more minutes. He says he's parked at the gravel pits somewhere in Maine, way out in the willy waggs. “For me,” he's saying, “the blog just started out for self-entertainment. Like tongue-in-cheek, a certain amount of truth to it but over-the-top too. Because I have a very dry sense of humor. As people started reading it I realized I should be a little more conscious of what I was presenting. It starts off kind of fun and entertaining and it becomes something you gotta do. It's work.” When we're done talking, he'll trudge across the gravel to pepper a Kevlar vest with speeding bits of lead. Eventually he'll convert that into words, which will tell the readers of his blog whether the bulletproof vest is any good or not. If I could listen to him do all this, I'd hear the crunch of boots on gravel, the explosion of gas out of a metal rod, the hollow clatter of bullet shells on stone, the hard strike of lead into body armor. And after a few moments maybe a soft impressed whistle or a disappointed sigh, which would tell me what he thought of it all. •



The International Criminal Court at 10 Deterrent or Distraction? Elsa Kania The International Criminal Court (ICC) was envisioned as, in the words of its first and current Chief Prosecutor Luis Moreno-Ocampo, the achievement of a “dream”: an imperative of creating an international system of justice. Established under the Rome Statute, which entered into force in 2002, the ICC’s creation was seen as a groundbreaking development after a century in which, as stated in the Statute’s Preamble, “millions of children, women and men have been victims of unimaginable atrocities that deeply shock the conscience of humanity.” That cry of “never again” became manifest in this institution “determined to put an end to impunity for the perpetrators of these crimes and thus to contribute to the prevention of such crimes.” Today, however, that dream and the ideal of “international justice” remain in peril. In the ten years since the Rome Statute took effect, the International Criminal Court has struggled to come into its own. Facing formidable practical obstacles and a challenging political context, the Court has completed only a single trial in that time. Nonetheless, the commitment to justice enshrined in its charter has deeply shaped attitudes towards human rights within the international arena. Paradoxically, the Court has displayed limited institutional capacity yet extensive symbolic influence. In this respect, the ICC has come to play an influential, and at times transformative, political role in the world today, most recently in response to the Arab Spring. The ICC has sought to serve as both an instrument of justice and a deterrent of crimes against humanity. The principles it embodies reveal a clearer picture of newly institutionalized global norms of human rights.


Contradictory and often amorphous expectations for and limitations upon state behavior have come into conflict with the traditional standard of sovereignty. Indeed, the evolution of the Court powerfully illustrates the forces and trends at play in an international system devoid of mechanisms of authority or enforcement that are independent of its constituents.

ORIGINS On a basic level, the International Criminal Court represents the fusion of the authority and legitimacy of multilateral institutions with the compelling need to prosecute past, and prevent future, crimes against humanity. Such “internationalism” dates back to the League of Nations and has long held appeal. In practice, however, institutions like the United Nations or the European Union have faced serious crises of legitimacy. Chief among these has been the strong resistance of nations to the prospect of losing sovereignty, traditionally a foundation of the international order. The idea of an agency with an international mandate for addressing war crimes and crimes against humanity dates back to World War II. Military tribunals were then put in place by the Allied Powers in Nuremberg and Tokyo, and international criminal tribunals were later established for the former Yugoslavia and, after its genocide, for Rwanda. The progressive extension of a symbolic global mandate to prosecute war crimes, crimes against humanity, and now the crime of aggression corresponds with the emergence of a theoretical conception of “human rights” as enshrined in the United Nations’ Universal Declaration of Human Rights.


Despite the international community’s failure to answer the call of “never again” and prevent the genocides in Rwanda and the former Yugoslavia, criminal tribunals once again took a prominent role in trying leaders responsible for the gravest atrocities, such as former Serbian president Slobodan Milosevic and Bosnian Serb leader Radovan Karadzic in the former Yugoslavia, and Théoneste Bagosora in Rwanda. These international criminal tribunals have played a key role in societal reconciliation. However, some argue that they only accentuate preexisting resentments and grievances, fueling further violence and bloodshed. Nonetheless, the precedent they set is at the very least a small step towards creating a truly just and peaceful world.

LIMITATIONS AND HOSTILITY The International Criminal Court has long been subject to controversy and, at times, hostility, all while displaying limited success. A sustained challenge has been the lack of international consensus around the Court. Although it has come to possess immense symbolic value, the ICC has not received sufficient practical support or maintained credible or consistent enforcement. Indeed, chief among the challenges and complexities that the ICC has faced are the ambiguity and at times outright hostility of an international community that created the Court but seeks to control and manipulate it. The inherent tension between peace and justice, along with the jurisdictional restrictions and limitations placed upon the ICC, weakens the Court despite the scope and ambition of its mandate. Today, 120 countries are States Parties to the Rome Statute of the International Criminal Court. Yet some of the world’s major powers—namely the U.S., Russia, China, and India—are not among them and, as such, are not within its jurisdiction. The mandate of the International Criminal Court has been particularly weakened by the refusal of the U.S. to ratify the treaty for fear that U.S. nationals could be prosecuted. This paradox—and, some might say, hypocrisy—might seem surprising. After all, in the words of former President Clinton, the United States’ “strong support for international accountability and for bringing to justice perpetrators of genocide, war crimes, and crimes against humanity” might lead one to believe that the international system’s reigning hegemon would see the ICC as a key tool in furthering these aims. Indeed, in 2000, upon initially signing the Rome Statute, President Clinton expressed his support of the ICC, despite reservations, saying “we wish to remain engaged in making the ICC an instrument of impartial and effective justice in the years to come.”

However, deeper concerns preempted U.S. support of the Court, a loss that has itself been among the ICC’s critical weaknesses. The United States—along with China, Iraq, Israel, Libya, Qatar, and Yemen—voted against the Statute before then signing on and ultimately ‘unsigning.’ In Senate hearings held at the time, Senator Rod Grams called the ICC “a monster that must be slain,” and Senator John Ashcroft similarly denounced the ICC as “a clear and continuing threat to the national interest of the United States.” Specifically, the ICC was seen as a threat to U.S. sovereignty. Clinton initially expressed the concern “that when the court comes into existence, it will not only exercise authority over personnel of states that have ratified the treaty, but also claim jurisdiction over personnel of states that have not.” Indeed, under the Bush administration, U.S. policy took a turn towards hostility, due in part to deep concern that the ICC might be used to prosecute Americans deployed in Iraq and Afghanistan. In 2002, the U.S. ‘unsigned’ the Rome Statute, formally conveying to the United Nations that it did not intend to ratify the treaty; it has not since considered itself bound by the ICC’s mandate. Bilateral immunity agreements (BIAs), which prevent Americans from being transferred to the ICC’s custody, were established to further protect the U.S. from this monster. The American Servicemembers’ Protection Act of 2002 used the threat of cutting military aid to penalize countries that were unwilling to sign such immunity agreements. The Nethercutt Amendment, sponsored by former U.S. Representative George Nethercutt, currently a resident fellow at Harvard’s Institute of Politics, allowed the U.S. to leverage economic aid against countries that did not agree to these exemptions, including through cutting humanitarian programs. Herein lies one of the central dilemmas—that, in order to avoid violating considerations of sovereignty, states must voluntarily submit themselves to the ICC’s jurisdiction—yet those who most fear prosecution are least likely to do so. Such resistance to the ICC has undermined its authority. Over time, U.S. hostility has evolved into a fundamental ambiguity. The U.S. refrained from vetoing the Security Council resolution referring the situation in Darfur to the ICC, choosing instead to abstain. Around this time, Congress also began to repeal anti-ICC sanctions that had formerly been imposed. But the damage had been done.



The moral weight and power of the U.S., directed not at supporting this fledgling institution but at evading the aegis of its authority, seems to have sent a powerful message to the world—that justice is not for all.

A FALSE START? In July 2003, Luis Moreno Ocampo swore, "I solemnly undertake that I will perform my duties and exercise my powers as Prosecutor of the International Criminal Court honourably, faithfully, impartially and conscientiously, and that I will respect the confidentiality of investigations and prosecutions." Mr. Moreno Ocampo, a former prosecutor from Argentina, had been involved in the “Military Junta” trial of top military commanders for mass killings and human rights abuses during Argentina’s “dirty war.” Soon after his election, in April 2003, Mr. Moreno Ocampo said, "I deeply hope that the horrors humanity has suffered during the 20th century will serve us as a painful lesson, and that the creation of the International Criminal Court will help us to prevent those atrocities from being repeated in the future." However, his record—and that of the Court—has been mixed in this regard. From the Court’s creation, it was widely recognized that the personality and capability of the Chief Prosecutor would determine its future. In the words of Edmond Wallenstein, a Dutch diplomat who had been involved in the Court’s establishment, “Inevitably the prosecutor will be the public face of the institution.” Unfortunately, this “face” leaves behind a controversial and contested legacy. Over the course of his nine-year term as Chief Prosecutor, Moreno Ocampo has come under heavy criticism for his behavior. He has been faulted for a perceived lack of focus and commitment, and the Court has hardly thrived under his tenure. Clashes of personality with other senior officials have further compromised the Court’s efficacy and caused many employees to leave. Scandals abound. In 2006, Christian Palme, the ICC’s head of public relations, filed a complaint against Moreno Ocampo, asserting that the chief prosecutor had sexually abused a female journalist while in South Africa. Moreno Ocampo was ultimately cleared of the allegations, and he fired Palme for making them. After Palme appealed to the International Labour Organization, the ILO concluded in 2009 that he had acted rightly; Palme was awarded damages as well as his salary through the end of his initial contract. Moreno Ocampo was strongly criticized for abuse of power for personally involving himself in Palme’s dismissal.


Independent of personality and publicity, the Court’s record is mixed. It took more than six years after the ICC opened in 2002 for its first trial to begin: that of Thomas Lubanga, a militia leader from the Democratic Republic of Congo, charged with war crimes including the use of child soldiers in ethnic fighting in the Ituri region of eastern Congo. He was accused of forcing thousands of children, some only eight years old, to enlist in militias, where they were drugged, trained to kill, steal, or mutilate civilians, and used for sex by militiamen. The pace of the Court in prosecuting Lubanga’s case—slow, almost absurdly so—as well as missteps on the part of the prosecutor made what should have been a triumph for the Court a near-disaster. Proceedings were twice halted by judges, who called for Lubanga’s release because of the prosecution’s mistakes and lack of cooperation. Judges said the prosecution’s handling of evidence amounted to “wholesale and serious abuse” of the process. Beyond this awkward beginning, the lack of success of the International Criminal Court in producing tangible results has caused many to become frustrated and disillusioned. Two individuals indicted by the ICC, Sudanese President Omar al-Bashir and Joseph Kony, the leader of the Lord's Resistance Army (LRA), are currently evading arrest; al-Bashir has even been able to travel to ICC signatories such as Kenya without being arrested. Perhaps, in retrospect, these indictments were unwise. Targeting sitting heads of state and powerful actors in lawless regions without a sustainable mechanism for enforcement, the ICC may have, in effect, set itself up for failure. Shifting its focus to prosecuting lower-ranking but equally guilty players in these conflicts and atrocities could be more feasible, allowing the Court to build a record of results, a foundation of legitimacy, and a reputation for efficacy from which to target more senior figures.

REVIVED ROLE The value and viability of the ICC, both practically and symbolically, have been tested by the Arab Spring. The Court’s growing importance as a perceived deterrent and mechanism for accountability has given it a central role in controversial events. In his speech at the Kennedy School’s Forum last fall, Mr. Moreno-Ocampo raised the question of whether the referral of Gadhafi by the Security Council represented a political decision or a recognition of the ICC’s growing influence.


The ICC headquarters in The Hague, Netherlands

Whether as “a new justice trend” in which atrocity crimes will not be tolerated, or as a “normal UNSC political decision” in which the ICC is used as a “tool” “to implement a political decision based on the interest of its members,” the ICC has played a central role in recent events in Libya. The new and expanded importance of the ICC on the international stage may continue to test its ability to navigate these challenges. The institutional characteristics of the ICC inherently leave room for ambiguity. Moreno Ocampo has emphasized, “This is only an emergency court…We are not here to replace national judicial systems. We will act only when they need us.” However, which “emergencies” bring about this “need” has never been clear, and the comparative advantage of the International Criminal Court over other mechanisms of justice, such as national courts, must be considered. Gadhafi’s death under unclear circumstances could be investigated as a potential war crime, and his son Seif al-Islam’s impending trial may become a source of tension.

Here, the disproportionate regional focus of the International Criminal Court has further detracted from its legitimacy and has caused it to be perceived as an instrument of Western neocolonialism. Thus far, in the ICC’s six cases, only African nations have been targeted, with leaders indicted from Sudan, Uganda, and the Congo. Although, to some extent, there is a basis for this focus, given that many human rights abuses are indeed committed in this region of the world, the perception of injustice hinders the potential for justice. “We believe that the ICC has no jurisdiction on these issues,” Libya’s deputy foreign minister, Khaled al-Khiam, said of Seif al-Islam’s case. “We see the international criminal court as targeting African states.” Until it can shake this negative image, the ICC will not be seen as fully legitimate on the world stage. In the future, even simple steps could change the course of the Court. Currently, the ICC is reported to be considering prosecuting crimes committed in Afghanistan and Somalia, among other nations. Taking on a non-African case could carry immense symbolic value in breaking with this perceived trend.



JUSTICE OR PEACE In general, and particularly through the Arab Spring, the U.S. and others have expressed concerns that the potential for prosecution for war crimes or crimes against humanity serves as an incentive for dictators to cling to power longer, perhaps, than they otherwise would. Nonetheless, the ICC has simultaneously been presented as an essential instrument of justice and accountability. This dilemma, whether the International Criminal Court serves as an essential deterrent or, by establishing perverse incentives for political leaders, perpetuates and even exacerbates atrocity, must be considered in looking towards the future of the ICC. Many activists have consistently encouraged the Security Council to refer Bashar al-Assad to the ICC for the ongoing crackdown in Syria, yet it has thus far failed to do so. Navi Pillay, the UN high commissioner for human rights, has also recently recommended that the Security Council refer the current situation in Syria to the International Criminal Court. As bloodshed continues, this question of whether justice would complement or counteract efforts for peace becomes particularly relevant. In Yemen as well, the International Criminal Court has played a symbolic yet far from practical role. Tawakul Karman, the Yemeni Nobel Peace Prize laureate, speaking in a keynote address at Harvard Arab Weekend in 2011, emphasized that her only two demands were that Saleh’s assets be frozen and that he be indicted by the ICC. Saleh officially stepped down in December in accordance with the terms of a deal brokered by Saudi Arabia via the Gulf Cooperation Council (GCC), a deal that included immunity from prosecution. Karman rejected this alternative as contrary to the will of the Yemeni people and as creating a sense of impunity that emboldened Saleh to persist in human rights abuses. Indeed, Saleh has remained a presence in the country. While in the U.S. for medical treatment for injuries sustained in an attack, he has vowed to return and to play a future role in Yemen’s politics. Moreover, he is not far from the reins of power, as his son and several nephews retain influential positions in the army. That Saleh has faced no consequences for the massacre of thousands of peaceful protesters, including via indiscriminate shelling of civilian neighborhoods, seems morally repugnant. The change of government in the absence of regime change, as has also been the case in Egypt, limits the potential for change. In New York, Saleh was recently met by protesters chanting “I.C.C., not N.Y.C.!” as he emerged from the Ritz-Carlton; one of them was arrested for throwing a shoe at him.



A NEW BEGINNING? 2012 marks the beginning not only of the Court’s second decade but also, perhaps, of a new era, as the ICC may have the chance for a fresh start under a new regime. Moreno-Ocampo is expected to step down as head of the Office of the Prosecutor (OTP) in mid-2012, and the search for his successor has already concluded with the selection of his current deputy as the favored, and only, candidate to replace him. The unanimous selection of Fatou Bensouda of Gambia, formerly Deputy Prosecutor, has been met with acclaim. She has extensive experience with the ICC and, before joining, had been a legal adviser and trial attorney at the international tribunal that prosecuted leaders of the 1994 Rwanda genocide. Throughout the process, she was strongly supported by the majority of the Court’s African members. However, this support could cut both ways. African countries have thus far been the sole targets of the Court’s cases, and the Court has, as a result, come under pressure and scrutiny as unjustly biased. Ms. Bensouda may face particular pressures as she continues to pursue the cases now pending, all in Africa. In response to the question of how she, as Chief Prosecutor, would deal with criticisms directed from Africa, she said firmly, “My origin, being an African, has nothing to do with my mandate.” Nonetheless, although her capability and experience make her the best candidate for the job, the political pressures and dynamics will inevitably be complicated for her as well. As a fresh face with a new start, she will have a unique opportunity to reinvent the Court while continuing to confront its ongoing challenges. Today, the ICC is clearly still finding its way as an institution. The combination of immense symbolic value and limited practical efficacy has produced deep frustration, yet it also leaves room for equally great growth. The contradictions of its institutional structure, particularly the nations that have exempted themselves from its jurisdiction, remain substantive challenges to its authority and legitimacy. These tensions must be resolved, but, more importantly, the Court must continue to move forward. A new prosecutor and an expanding role in the international system may yet guide the International Criminal Court towards playing the more productive and effective role, as a tool for justice and conflict transformation, that it was envisioned to play and can still fulfill. •


Grasping at the Grail Truth and Morality in Dworkin’s Justice for Hedgehogs Lena Bae The ancient Greek poet Archilochus wrote: “The fox knows many things, but the hedgehog knows one big thing.” As the intellectual historian Isaiah Berlin later explained in his 1953 essay The Hedgehog and the Fox, Archilochus’ words indicate the way in which different people see the world. Berlin lumped thinkers such as Dante, Plato, Hegel, Dostoevsky, and Nietzsche into the category of hedgehog, as those who relate everything to a single coherent principle or system of understanding. He thought Shakespeare, Erasmus, Goethe, and Joyce, on the other hand, were foxes, more likely to see the world as composed of many unconnected, perhaps self-contradictory fragments. Foxes rule the world today. By and large, people are uncomfortable ascribing the workings of the universe to a single coherent system, or living by the thought that we can fit our various opinions and beliefs together into a perfectly compatible whole. However, it is the hedgehog and its one big idea that Ronald Dworkin, a prominent professor of law and philosophy at New York University and regular contributor to the New York Review of Books, defends in his recent work, Justice for Hedgehogs. Dworkin wants to persuade the reader that our ideas about ethics, morality, and politics are interconnected in a single network of values, and that, moreover, there is truth to be sought within this network. This is a pretty big idea. Dworkin, who has a long career of advancing extremely interesting, often controversial arguments in moral philosophy and legal theory, appears to be on the losing side today, both in popular culture and in the academy. Religious fundamentalism aside, people are hesitant to argue that their convictions about abortion or war or torture, deep and stirring though they may be, are full-blown truths. The notion of objective truth has ceased to be a serious topic among philosophers and scholars of the humanities, replaced by a safer, kinder, and foxier pluralism that embraces the potential for different, conflicting values to exist. Politicians speak not of groping toward what democracy really means, but of negotiating between the values that inevitably clash in the real world.

THE REALM OF VALUES Despite our philosophically fragmented climate, Dworkin makes a strong argument for the interconnectedness of value through an important re-conceptualization of what value is. The key principle Dworkin uses to draw the boundaries around his terrain is what he describes as the “metaphysical independence” of value, an insight he draws from the philosopher David Hume. This somewhat daunting term points to the distinction between facts of the real world and the normative opinions we hold about them. According to this principle, concrete realities tell us nothing about which of our normative beliefs are correct. While we may prove our physical existence by pointing to the atoms that constitute such a thing as you or me, there are no such “morally charged particles” that make the moral beliefs we hold true or false. According to Dworkin, only a substantive moral argument can do the job—an argument, that is, standing solely upon further moral claims. This is the idea of the hedgehog. Such an argument will depend on values outside of the particular case, and ultimately stand or fall on the coherence of one’s framework of values. The truth of a moral claim depends on that of another moral basis, and thus, writes Dworkin, “the argument ends when it meets itself, if it ever does.” So moral values have nothing to do with the physical world, and everything to do with other moral values. How exactly, then, should we conceptualize them? According to Dworkin, we can see moral values as “interpretive concepts.” These are concepts that we can agree touch upon something dear to us, even as we disagree about how exactly they should be characterized or identified. We agree enough about the idea of equality, for example, that when we argue about disparities in income, we recognize we’re talking about the same concept. Yet we differ when it comes to what shape this notion should take. Our disagreement is not merely superficial; we really do disagree about what equality entails. And many of us are always negotiating conflicts between values in our own heads as well.



In Dworkin’s hedgehog view, however, if we are true to our values (like equality and liberty), they will not and cannot conflict. For instance, say your friend asks you for your thoughts on a project that she believes will be a huge success. You don’t think she’s going to be the next Mark Zuckerberg, but you know telling the truth will be brutal. Is there a conflict between your values of honesty and kindness? Dworkin doesn’t think so. According to him, we search for the right answer by drawing upon our other convictions, thus re-interpreting our values of kindness and honesty and dissipating the conflict. So William James was mistaken, on this view, when he lectured that “some part of the ideal must be butchered” as one goes through life. The foxes are wrong: values do not conflict. Personally, I am left uncertain. In the bulk of the book, Dworkin examines a variety of values such as liberty, equality, and morality, but he reinterprets them by returning again and again to the principle of dignity, which he designates as fundamental. Arguably, in a world where values did rub against each other at their concrete meeting-edges, one could do the same: sacrifice bits of these ideals in an effort to make the choice that best reflected some deeper principle, whether dignity or something else. The point I believe Dworkin makes most persuasively, though, is that whichever explanation of how values interact sounds more compelling, we should not assume that values stand in conflicting relationships. Moreover, the fact that we are initially pulled in different directions does not mean there is no morally superior decision. For instance, what if I have thoroughly considered existing arguments for and against affirmative action, and neither side seems more convincing than the other? I am, says Dworkin, “entitled without more ado to declare that I am uncertain.” Uncertainty is different from the popular conclusion that there is no right answer. Indeterminacy, which declares that a right answer cannot even exist, requires a strong positive case in order to be available as an option. In our own day, politicians and philosophers alike cringe at claiming that one community’s values are “more true” or its political institutions are “more just.” However, Dworkin argues that claiming indifference about truth is indeed an abdication of moral responsibility. This independence of morality is a very powerful principle. For example, one popular worry it dismisses concerns the origins of our moral values. Doesn’t it worry you that you might only believe abortion to be permissible because you were raised by blue-blooded liberals? And that, were you raised in a Mormon community instead, you might recoil at despoiling the sanctity of life? If we accept the sovereignty of morality, the question of origins need not worry us.


Dworkin argues that while the sources of our moral opinions may be contingent, whether or not these opinions are true is an entirely different matter. I might believe that abortion is wrong because of my upbringing, but whether my view is true or not is an issue with which I will still have to struggle. One must trace out, to the best of one’s ability, how a particular view fits alongside the rest of one’s moral values in order to ascertain truth. “Morality stands or falls on its own credentials,” Dworkin argues, and “can neither be vindicated nor impeached” except through convincing moral argument.

ARGUING WITHIN MORALITY Even if we disagree about moral values, and even if this disagreement is in part influenced by our surroundings, such dissonance does not mean that there is no truth in our values. The “Archimedean” argument that one can simply debunk the entire moral enterprise from a position outside of morality is one that Dworkin has long criticized (see, for example, his 1996 article “Objectivity and Truth: You’d Better Believe It”). Values only exist in a moral realm: in order for a moral conclusion to have any substance, it must be made on moral terms. According to Dworkin, there is no metaethical position from which to debunk morality. Dworkin offers us an illustration via four students discussing abortion. Student A argues that abortion is wrong. She states: “Everyone always has a categorical reason (a reason that exists regardless of personal interests) to condemn abortion.” Student B argues that sometimes, abortion is morally required. “In cases of rape,” he argues, “everyone always has a categorical reason to abort.” Student C interrupts: “Actually, there is no categorical reason either way. No one has to condemn or embrace abortion. It’s always permissible but never mandatory.” Finally, Student D dismisses the rest: “You are all wrong. Abortion is never morally prohibited, required, or permissible.” D’s view, which follows from an Archimedean skepticism, is not a real option in Dworkin’s account. The author argues that the important aspect of each speaker’s argument is the conclusion each reaches about the morality of abortion. While D has attempted to make a statement that says nothing about morality, he has taken the position that there are no categorical reasons for or against abortion, just like the morally-speaking C. According to Dworkin, this amounts to a moral position. It therefore makes no sense for D to argue that moral stances do not exist in this case, because he has effectively taken one. This is a controversial claim that ruffles the feathers of meta-ethicists in philosophy departments across the country.


In a more striking case, Dworkin argues that given the independence of value, we cannot claim there has been moral progress throughout history. The judgment that moral progress has occurred is based on the judgment that the past practices we now condemn are immoral. This is a claim Dworkin emphasizes needs a moral argument of its own. “Foxes rule the world today. By and large, For example, that people did not people are uncomfortable ascribing the workings recognize slavery’s immorality due to false empirical beliefs about of the universe to a single coherent system.” human beings works to “assume rather than support” the conviction that slavery is wrong. We still his “initial unease grew steadily to something need some independent moral argument that our approaching panic” as he grasped Dworkin’s view views today are somehow better, “and that indethat “meta-ethics is largely a sham.” Indeed, many pendent judgment of improvement, on its own, is of us will still feel that while D’s statement might all we could mean by progress.” have the same consequence as C’s, it is of a differDworkin acknowledges we might find usent species of claims. What is persuasive, though, ing the term “truth” uncomfortable, and that is that D’s statement needs just as much of a subwe would rather edge toward a friendlier “most stantive argument as any of the others’. reasonable.” However, Dworkin believes this is not a useful move. He argues that “any alternate OBJECTIVE TRUTH endorsing term for interpretive judgments would have to signify, if it is to fit what we think, exactly Despite his belief in genuine moral disagreewhat ‘true’ signifies: unique success.” This works if ment, Dworkin still holds that objective truth we take up Dworkin’s suggestion that we see truth exists. Throughout the book, the non-philosopher as an interpretive concept like morals, thus creatwill likely question what exactly the author means ing a broader conception embracing all domains by “objective” truth. As one illustration, Dworkin of inquiry, from science to morality. The succinct asserts that something can be true even if no one definition of truth that Dworkin presents is sufthought so. As another, he claims the morality or ficiently abstract to cover both domains (I think immorality of something resides in the thing itself, you might have removed the definition of truth as instead of in our subjective feelings about the matunique success, do you think we need to put it back ter. in for this to make sense?). However, we habitually Yet from my reading, Dworkin’s defense of attribute to truth the characteristics it most clearly this bare kind of objective truth seems to sit someevidences in a domain like science: that of matchwhat uneasily alongside his idea that all moral ing reality. We have already seen that according values are interpretive concepts. After all, a subto Dworkin, practical reality does not play a role jective and contingent community of individuals in our moral arguments. Perhaps this only shows ultimately defines and interprets such concepts. the unhelpful attributes we have lodged onto the Dworkin does not justify the jarring relationidea of truth, as a result of our long readiness to tie ship between the contingency of those interprethe domains of facts and value together. We would tive concepts with his thought that truth exists have good cause, then, to think about truth and independently of what we subjective individuals reasonableness under an alternate, more abstract think. 
Upon consideration, I wonder whether for concept. Dworkin, objectivity is better understood as a conceptual quality, rather than characterizing a MORAL RESPONSIBILITY separate ontology of "objective truth." Perhaps the seeming friction between concept and truth eviDespite this talk of truth versus most reasondences instead objectivity's ultimate dependence ableness, moral disagreements “all the way down” on concept. I hesitantly imagine Dworkin to feel can exist. You and I might make moral claim after that while there may be an objective truth to jusmoral claim, and still stand at a crossroads. But tice without us humans having formed a concept of while I might not be able to bring you around to justice, what exactly that truth is does depend on how I understand things, Dworkin argues that the shape of that concept.


Despite this talk of truth versus most reasonableness, moral disagreements “all the way down” can exist. You and I might make moral claim after moral claim, and still stand at a crossroads. But while I might not be able to bring you around to how I understand things, Dworkin argues that we can still strive for something more important: coming to and practicing our beliefs responsibly. “Two people who both reason responsibly and find conviction in what they believe will reach different conclusions about what is right and wrong. But they will share the belief that there is a getting it right and a getting it wrong about what is right and wrong.” Even if a third person challenges that shared belief, “We must each believe what we responsibly believe.” Morally responsible people “act out of rather than in spite of their convictions,” says Dworkin.

Addressing our sociological contingency, the author admits we all have “unstudied moral convictions.” Moral responsibility means that we must examine and interpret these convictions with the principles of overall coherence and personal resonance in mind. Through this process, we can turn our initially “unformed, compartmentalized, abstract, and therefore porous” convictions into a denser, broader filter through which we can make decisions that ring true for ourselves.

Dworkin’s conception of moral responsibility raises the question: can people at moral odds converge on truth, instead of circling one another round and round? Dworkin demands two things of us as morally responsible individuals: coherence among our moral values, and authenticity in seeking those convictions that “grip us strongly enough to play the role of filters when we are pressed by competing motives.” Yet I wonder: to what exactly are we being authentic? One can easily imagine values that link snugly with one another, but with what moral core are they aligning or breaking? And if we each manage to polish a set of values that both cohere and make personal sense, does anything guarantee we will do more than enforce the inherited, contingent convictions that may lie within that moral core? Dworkin does admit, “[w]e cannot escape a sense of the airiness and contingency of our interpretive convictions because we know that other people do think what we cannot think and that there is no lever of argument that we can press to convince them.” “Still,” it is important to note, “for all that, we are left only with uncertainty, not nihilism.”

DWORKIN’S RE-CONCEPTUALIZATIONS

By re-conceptualizing what we really mean when talking about values, and distinguishing the realm of value from that of concrete reality, Dworkin casts new light on the ways we have long seen concepts such as truth and morality, all the while critiquing such views. In my own opinion, the most significant and successful is his attack against critical Archimedeanism—the view that one can stand outside morality to discredit the entire enterprise. By distinguishing uncertainty (not being convinced of the greater veracity of one side over the other) from indeterminacy (the non-existence of a right answer), Dworkin has discredited the latter as a default position, demanding of those who advance it a good reason for their beliefs. For Archimedean skepticism to be successful, it must be both independent of morality and pertinent to it, conditions that are impossible to fulfill together. We can, for example, argue that we understand morals through a certain kind of nerve in the back of our brain. This is a claim that is indeed independent of morality. But it bears no relevance to how we could make sense of what is right or wrong—only to how neurons fire about the cranium.

Removing indeterminacy as a default position has massive implications in a variety of domains, including within the personal ethics of our own lives, an issue to which Dworkin devotes a good chapter of the book. The author advances his own view of what a life lived well entails. We value human lives well lived because, like a work of art, “they too embody a performance: a rising to the challenge of having a life to lead. The final value of our lives is adverbial, not adjectival…. It is the value of a brilliant dance or dive when the memories have faded and the ripples died away.” Dworkin acknowledges some may feel differently, and that this sense of life’s objective value may well be a myth. Yet the unavailability of an Archimedean perspective means one cannot simply fall back upon the claim that life is devoid of meaning. Dworkin admonishes, “you need just as strong a set of value judgments to support your nihilism as others need to support their very different intuitive sense”: that is, an argument about what would be needed for life to be meaningful, and why those conditions cannot be met. Thus, adds Dworkin, “Nihilism so earned has its own dignity.”

Although Dworkin fights for the existence of truth throughout the book, he freely admits we may never grasp that grail. We may continue to disagree in fundamental ways about fundamental moral questions. Yet, when faced with the question of why we should not simply drop truth, Dworkin’s answer rings with the kind of dignity of conviction that perhaps reflects his own belief about what a life lived well entails. Holding onto truth “keeps before us the deepest philosophical challenge of this domain: to make sense of the idea that there is unique success to be had in inquiry, even when that inquiry is interpretive rather than empirical or logical, even when that inquiry admits no demonstration and promises no convergence.” In Justice for Hedgehogs, Ronald Dworkin puts up a solid fight to preserve this challenge for generations to come. •


Discovering New Worlds
Charles C. Mann’s 1491 & 1493
Eli Kozminsky

Charles C. Mann, author of the 2005 study 1491: New Revelations of the Americas Before Columbus and the subsequent 2011 volume 1493: Uncovering the New World Columbus Created, has a clever method for dealing with the controversial nomenclature surrounding any discussion of the pre-Columbian Americas: he calls peoples by the names their present-day members use for themselves, and stays true to the historically self-described titles used by individuals and groups that existed in the past. So “Native American” is rendered as the less misleading and more widely self-identified “Indian,” while “Christopher Columbus” appears as “Cristóbal Colón.” But two terms Mann makes a repeated point of contesting in 1491 and 1493 are “New World” and “globalization.” Simply put, the “New World” is not so new—not even close; “globalization” is not just about economic integration. These are the respective and heterodox theses of the two books. And in making them, Mann radically alters our conception of the Americas before Colón, while redefining the world that came after this fateful encounter.

Mann is a journalist focusing on scientific subjects for publications like Wired and The Atlantic, as well as the co-author of several general-audience books on topics ranging from aspirin to 20th-century physics. As such, he draws on legions of experts across academic fields in addition to his own research to inform his enquiries—though not without frequent clashes with scholarly orthodoxy and popular knowledge. What emerges from Mann’s 1491 and 1493, two serial studies as exhaustive as they are accessible, is an important new view of our world, radically reconceived as a place vastly older and more interconnected than previously realized.

THE PRISTINE MYTH

1491 is not for the sentimental reader. The elementary school view that the Western Hemisphere before the arrival of European explorers was an Edenic paradise meagerly populated by primitive bands of natives is wrong—very wrong. As Mann notes, “the Americas were immeasurably busier, more diverse, and more populous than researchers had previously thought. And older, too.” And the beloved Bering Strait story of how these populations migrated to the Americas in the first place, via a mammoth bridge of ice, is similarly misguided: “In 1997 the theory abruptly came unglued,” writes Mann. Archaeologists uncovered traces of human habitation in what is now Chile that date back over twelve thousand years—at a distance from the Bering Strait implying arrival before the opening of the “ice-free corridor.” Faced with this new evidence, the academic jury is still out.

Such notions are all part of what University of Wisconsin geographer William Denevan derides as the “pristine myth,” which envisions the Americas as uncultivated and largely vacant of pre-existing inhabitants. But thanks to an abundance of new research—archaeological, biological, and geographic—this myth is being steadily dismantled, and a vastly altered mosaic of the Americas before 1492 is emerging from the strata of time. “It seems incumbent on us to take a look,” argues Mann. His book 1491 compiles a prodigious and strikingly lucid survey. Delving into the thriving and advanced world preceding Columbus, as well as its mystifying demise, the author makes a compelling case for reevaluating our notions of the pre-Columbian Americas.

AN OLDER, MORE SOPHISTICATED WORLD

In the first place, these pre-Columbian societies were far older and more sophisticated than previously believed by scholars and schoolchildren alike. The Olmec, for example, who ruled what is now Mexico circa 1800 B.C., devised the concept of zero—perhaps their greatest accomplishment—but also, Mann documents, “invented a dozen different systems of writing, established widespread trade networks, tracked the orbits of the planets, created a 365-day calendar (more accurate than its contemporaries in Europe), and recorded their histories in accordion-folded ‘books’ of fig tree bark paper.” These were no slouches by any current or historical measure, to say the least, and 1491 brims with such factoids to prove it.


Meanwhile in North America, trading systems had already quilted most of the sprawling continent “for more than a thousand years” with goods like mother-of-pearl and copper. And indigenous technological advancements were awe-inspiring to early European explorers. For instance, when Tsenacommacah Chief Powhatan captured John Smith of Pocahontas fame in Virginia’s Tidewater region, the colonist and self-promoter broke his own gun: Smith wanted to ensure the natives would not discover his European weaponry’s inferiority to the Indians’ lethally effective bows and arrows.

To the south, the half-dozen societies of Mesoamerica (including the brainy Olmec) had already developed almost “three-fifths of the crops now in cultivation,” according to one recent estimate, including many beans, tomatoes, peppers, and, perhaps most vitally, maize. Geneticists still cannot crack these Indians’ secrets to breeding this last crop—a feat accomplished before the birth of Christ. In terms of agriculture alone, the achievements of these pre-Columbian inhabitants are at least on par with those of their contemporaries in ancient Mesopotamia. Would that they find room amidst the ziggurat-filled pages of high-school history textbooks.

TERRAFORMING

Not only were the inhabitants of the Americas before 1492 anything but “primitive”; the environment in which they lived was also a far cry from the unruly biosphere described in popular conceptions of the New World. In fact, the deliberate environmental impact of these natives on the Americas was far more profound than most scholars had realized. As Mann shows, indigenous groups across both continents were intentionally transforming their environments before Colón, primarily using controlled fires to tailor nature to fit their needs. Opening a copy of 1491, one is confronted with a map of the Americas demarcating Indian environmental projects across the two continents. I was stunned to see my home, Central Pennsylvania, traversed by the red-orange dashes denoting pyro-modification. “Rather than domesticate animals for meat,” the author explains, “Indians retooled ecosystems to encourage elk, deer, and bear.” So when Europeans began exploring the Americas, they encountered woodlands more like English parks than Edenic wilderness; nature had been so tactically tamed that European carriages could reportedly be driven straight through the forests.


Bison were even being “imported” to the East Coast at this time by strategically burning pastures all the way from New York to Georgia, the natives using these man-made paths to rustle herds far from their natural Midwest habitats. What’s more, large-scale engineering operations had rerouted whole waterways in North America. The same is true for South America, which had long been consigned by mainstream scholars to primal jungle status. But 1491 cites University of Pennsylvania archaeologist Clark Erickson, who characterizes the Amazon as a “built environment,” forged by the flames that created rich terra preta (Portuguese for “black earth”). What emerged was a teeming, chaotic garden. “Rather than adapt to Nature, they created it,” marvels Mann. “They were in the midst of terraforming the Amazon when Columbus showed up and ruined everything.” Indians had been administering the environment of the Western Hemisphere for thousands of years before 1492; with these managers wiped out, nature sprawled into the thicket of Denevan’s “pristine myth.” It was a wilderness brought about by the European encounter, not its absence.

AMERICAN AUTOPSY

If the Americas up to 1491 were such a vibrant, long-established realm, why did this world so swiftly vanish? A theory (or maybe mantra) one has likely heard invoked is “guns, germs, and steel.” Pulitzer Prize-winning scientist and author Jared Diamond’s 1997 book, aptly titled Guns, Germs, and Steel, originally advanced this reductive, albeit illuminating, formula to explain what he calls “historical inequalities” between more dominant regions like Europe and less developed ones such as Africa. It is a rule Diamond thinks holds no less for the Americas of the 15th century. Take the components of conquistador Francisco Pizarro’s successful campaign against the Inka: “Those factors included Spanish germs, horses, literacy, political organization, and technology (especially ships and weapons),” he writes. All are “proximate,” or most immediate, causes, as Diamond duly qualifies, but in general, “those technological and political differences as of A.D. 1500 were the immediate cause of the modern world’s inequalities. Empires with steel weapons were able to conquer or exterminate tribes with weapons of stone and wood.” Case closed.

Except that, contra Diamond’s account, Mann counters that Indian “societies were destroyed by weapons their opponents could not control and did not even know they had,” i.e., germs. What happened to guns and steel in 1491?


Drawing on the research of anthropologist Henry Dobyns, who scoured Spanish traveler accounts of Pizarro’s campaigns, Mann shows that the weapon of mass destruction in this account was in fact smallpox (with some infighting on the side). The Inka, then, “were not defeated by steel and horses,” concludes Mann, “but by disease and factionalism.” University of Texas historian Alfred W. Crosby, who coined the term “Columbian Exchange” in his 1972 text of the same name, came to conclusions similar to Mann’s in his 1986 publication Ecological Imperialism (which turned a much less popular phrase than his earlier book). “European emigrants and their descendants are all over the place,” wrote Crosby in the book’s prologue, “which requires explanation.” In short, he surmised that even though European conquerors were usually better equipped than their resisters, the more long-term, salient edge lay in epidemiology. It was guns, steel, but, in the end, mostly germs. In Hernando de Soto’s conquest of present-day Florida, for example, it was the conquistador’s pigs—not his troops—that sealed the fate of the region, de Soto’s infectious livestock serving as what the author calls an “ambulatory meat locker” for diseases like measles and smallpox.

QUANTIFYING THE TRAGEDY

As insightful as this bio-historical analysis may be, it is also profoundly funereal. Says Mann, “In our antibiotic era, how can we imagine what it means to have entire ways of life hiss away like steam? … It seems important to try.” This imagining is of particular importance inside academia, for rather political reasons: these historical theories have become a matter of deep academic feuding. Consider the bickering “Counters” found in 1491; that is, the “High Counters” and the “Low Counters.” These are the names of two opposed camps within the academy, each comprising an intellectual phalanx of scientists and historians, engaged in a prolonged, at times “vehemently personal” war over…counting. The numbers in question are the population figures of the Americas before European colonization.

Until quite recently, the Low Counters’ account of two sparsely populated continents was predominant, establishing a view of the pre-Columbian Western Hemisphere as devoid of any teeming civilizations and dotted by transient bands of natives. They see it as implausible that supposedly titanic populations could collapse in so short a time; more likely, the European invaders never encountered that many residents in the first place.

But with fresh biological and historical research in their arsenal, the High Counters have been advancing an increasingly accepted account of the Americas prior to 1492 as bristling with human life. European diseases drastically reduced this burgeoning populace, a hemispheric, epidemiological extinction with no parallel. In this story, history can at last be squared with the emerging trail of statistical figures. 1491 seconds the revisionist arguments of the High Counters.

THE OLD “NEW WORLD”

So what is Mann’s aim behind these engrossing revelations about the Americas? His project, I think, is to give readers a new lexicon, not merely changing the conversation but rethinking its very vocabulary. As he states bluntly in 1491, “the Western hemisphere should perhaps no longer be described as the ‘New World’”; Britain, for example, was entombed in glaciers until approximately 12,500 B.C. Across the Atlantic, meanwhile, “people were thriving from Alaska to Chile while much of northern Europe was still empty of mankind and its works.” In truth, the only “New World” here is the lost one Mann has sweepingly illuminated in this book. But then again, it is not really new—we were just stuck in the old, elementary school ways of thinking about the Americas before Colón.

This simplicity is in many ways understandable. As Mann notes, reconstructions of the pre-Columbian past rely on “arguing from silence,” working around the lamentable absence of voices to recount this era, its diverse peoples, vibrant cultures, and sophisticated societies. Silence, however, must not serve as license to forget or mischaracterize the past. In this way, 1491 does encyclopedic justice to the Americas before the Europeans.

...

There is, however, a “New World” in Mann’s chronicle—it just came into existence after 1492 and not a quaver before. This New World, the one Colón incidentally created rather than “discovered,” is the subject of Mann’s succeeding installment, 1493. While 1491 may be the more historically interesting of the two, its counterpart is more germane to today’s world. The former is overflowing with information, but the latter has the far more pertinent argument, one that reconsiders the past, present, and future of globalization.

The debate on globalization has long had two viciously opposed sides: on one end, the “economists and entrepreneurs,” and on the other, the “environmental activists, cultural nationalists, labor organizers, and anti-corporate agitators.” These two groups came to blows at the Molotov cocktail-lit protests of the World Trade Organization’s 1999 meeting in Seattle.


The truth is, predictably, somewhere between these two ideological extremes. Global exchange has been making humanity richer for centuries, as Mann notes, but it carries with it the baggage of environmental desecration and political upheaval. This “inconceivably complex progress of life on this planet” is what the author eventually dubs a “fractured celebration,” bringing global gains accompanied by intense turbulence. In probing the subject of globalization, 1493 seamlessly traces this complex process from the first encounters between European explorers and indigenous Americans, through the exchanges crossing the Atlantic and Pacific Oceans, to the modern slave trade. Along the way, Mann lays the groundwork for a drastically deeper theory of globalization, widening our analytical aperture from mere markets to a more comprehensive and instructive outlook.

A BIOLOGICAL PHENOMENON

Mann’s qualified view of globalization stems largely from his conception of the phenomenon itself. “Newspapers usually describe globalization in purely economic terms,” he notes, “but it is also a biological phenomenon; indeed, from a long-term perspective it may be primarily a biological phenomenon.” It is here that 1493 departs from conventional studies of worldwide interconnectedness and says something particularly novel. In the past, many prominent studies of globalization have espoused a narrow view of the subject. Take a passage from the contentious 2002 book Globalization and Its Discontents by Nobel Prize-winning economist Joseph Stiglitz, a longtime left-wing critic of global economic policymaking:

“What is this phenomenon of globalization that has been subject, at the same time, to such vilification and such praise? Fundamentally, it is closer integration of the countries and peoples of the world which has been brought about by the enormous reduction of costs of transportation and communication, and the breaking down of artificial barriers to the flow of goods, services, capital, knowledge, and (to a lesser extent) people across borders.”

Stiglitz’s very conception of globalization is economic and institutional in nature—less than half of Mann’s aforementioned picture of worldwide exchange. It is no wonder Stiglitz’s critique sets its sights on the “Washington Consensus” policies of the WTO, the IMF, and the World Bank.


In his view, these institutions have given the developing world the short end of the market’s stick, with the policy-making process in the thrall of Western (primarily American) neoliberal influence. Stiglitz’s critique is provocative—but incomplete; he has settled on only economic globalization. For Mann, the phenomenon of globalization concerns more than just markets; it unites the planet’s previously discrete hemispheres into a single global organism. Some 250 million years ago, recounts the author, Pangaea split, creating separate hemispheres with “wildly different suites of plants and animals.” But it only took one eccentric Genoese explorer to reverse the process. “Colón’s signal accomplishment was, in the phrase of historian Alfred W. Crosby, to reknit the seams of Pangaea,” declares Mann. This encounter would bring tomatoes to Italy, oranges to the United States, chocolate to Switzerland, and chili peppers to Thailand. Along with this, an “invisible wave” of upheaval would wash across the newly united world as new diseases went global and cultural conflicts were exacerbated. And this cataclysm would annihilate the vibrant, teeming, and ultimately lost world of 1491 in the process.

The project of 1493 is in large part informed by Crosby’s seminal and svelte book The Columbian Exchange. “The first step to understanding man is to consider him as a biological entity which has existed on this globe, affecting, and in turn affected by, his fellow organisms, for many thousands of years,” Crosby argued. Once man is placed in this “proper spatial and temporal context,” it becomes clear that “the most important changes brought on by Columbian voyages were biological in nature.” Taking his cue from Crosby, Mann pinpoints 1493 as the first year of a “new biological era” following Colón’s voyage, dubbing it “the Homogenocene.” The term echoes the homogenizing character of this process, which brought ecologically isolated places into a more uniform global mixture—Pangaea reknit, as per Crosby. Thus, concludes Mann, “Columbus’s voyage did not mark the discovery of the New World, but its creation.”

COLUMBIAN EXPLOSION

The means to this globalized end lie in what Mann heralds as “the remarkable role of exchange” between all corners of the planet, and in the epochal impacts of those exchanges. This “exchange” is not just the trading of goods and services. For instance, it was not the Spanish colonists themselves who made the decisive difference when they first crossed the Atlantic Ocean and landed at La Isabela, bearing their commodities and technology. Rather, the European ecosystem that “poured from hulls of Colón’s vessels” would christen the globalization process.


Earthworms, mosquitoes, cockroaches, honeybees, dandelions, and a litany of new viruses and bacteria invaded Hispaniola in 1492 in the first truly global exchange since the continents ruptured.

Across the Pacific Ocean, 1493 details the exchanges between the Spanish Empire and China. “Part of the reason China is the world’s most populous nation is the Columbian Exchange,” Mann points out. Thanks to the exploits of the Spanish Empire, worldwide access to maize, potatoes, and sweet potatoes planted the seeds for a population boom in Asia. On the downside, however, avalanches of silver from the hellish mines of Potosí, in present-day Bolivia, would fuel heinous bouts of economic and political unrest in China.

In Europe, the globalization kick-started by Colón put the continent on the path toward international ascendancy, thanks in no small part to, of all things, potatoes. “Compared to grains, tubers are inherently more productive,” the author explains. Growing underground, the potato could expand to unseen sizes without collapsing like a wheat stalk. This “exotic” import from the Western Hemisphere effectively doubled the European food supply. Famine was all but eradicated in potato country, and standards of living skyrocketed from what would today be considered third-world metrics. “At long last,” Mann writes, “the continent could, with the arrival of the potato, produce its own dinner.” The effects of the initial Spanish voyages for “God, glory, and gold,” it seems, had metastasized into a thriving global phenomenon.

Undoubtedly the most brutal but perhaps most fascinating exchange documented in 1493 is what Mann considers “the foundational institution of the modern Americas.” He is talking about the modern slave trade, the first globalized system of labor. Yet it would entail consequences far beyond just the economics of plantation production. Specifically, the book deals with the long-overlooked “meeting of red and black”—Africans and Indians—that formed a surreptitious but mighty undercurrent in the globalized Americas. This encounter occurred primarily in maroon communities, settlements of fugitive slaves embedded alongside indigenous populations across the tropical areas of the Western Hemisphere. After the initial migration of Homo sapiens out of eastern Africa 70,000 years ago, Mann thinks the next turning point in mankind’s global journey was “the transatlantic slave trade,” an exchange necessitated by the plantation economy, but one whose implications would reach far beyond its immediate causes.

“Much of the great encounter between the two separate halves of the world thus was less a meeting of Europe and America than a meeting of Africans and Indians—a relationship forged both in the cage of slavery and in the uprisings against it,” he writes. This “complex interplay” accordingly occurred without European oversight, which regrettably also means outside the annals of most histories. But “arguing from silence” is music to Mann’s ears. What follows is more a gripping intermezzo than a digression, detailing the clandestine world of the maroons and their resistance to European domination, as they allied with Indians and even pirates to maintain their freedom. If you’re looking for indigenous “agency”—assertions of human autonomy against colonial domination—the pages of 1493 are overflowing with material. It is a war that rages even today, though primarily on paper: descendants of these original maroons continue to wage legal battles for their land against commercial invaders in the Amazon. These raiders come hacking for açaí berries and hearts of palm, products that probably sit on your supermarket’s shelves. The global slave trade may be at an end in most places, but we all still wriggle, somehow, in its tumultuous, globalized wake. As with the rest of the expansive scope of subjects covered in 1493, such dynamic interactions between peoples only become apparent by widening one’s view of globalization and embracing Mann’s much broader conception.

...

The global fissure of the continents was tectonically slow; it was the rapid, pivotal integration revealed in 1493 that has had a seismic impact on the last five centuries and that should duly define popular notions of “globalization.” The phenomenon extends far beyond just trade. Our perpetually new world is also one with a far deeper, richer past than many had previously imagined—a newly revealed world of an irretrievable past, masterfully presented in 1491. But what ultimately made the world “new” after this loss was the advent of globalized, galvanizing exchange, christened by the misguided voyage of three Spanish vessels in 1492 that triggered biological, commercial, and human exchange across the Earth. Rightly understood, it is a concept as expansive as the planet it covers. And humanity shows little sign of halting Colón’s course. To better understand our world, we would do well to heed Mann’s revelations and look upon the Earth as a far more ancient, more interconnected planet. •
