The Gadfly
WINTER 2013

editor-in-chief: ethan edwards
managing editor: dan wallace
associate editors: jackson arn, evan garnick, jacob "kobi" goodwin, mark antony gorenstein, krishna hegde, jay hyun kim, alex robertson
art & layout editor: alejandra oliva

all art by alejandra oliva


TABLE of CONTENTS

letter from the editor, ethan edwards
antilochus and the ‘fool’, gabe rusk
feminism in romance, alejandra oliva
can hobbes' leviathan withstand the class struggle?, virgilio urbina lazardi
dewey and the mooc, mark antony gorenstein
of goats and men, david beale & max nelson
the genealogy of explosions, jackson arn



LETTER from the EDITOR

ethan edwards

The Gadfly has always existed to promote philosophical thought in a world that needs it. Yet don’t we have enough philosophy? There already exist far too many books and thinkers to possibly study within a single lifetime. Even with this immense amount of content, it seems unthinkable that any argument on a philosophical subject could reach a permanent answer. Yet in a world slouching towards panopticism and an ever-stronger reinforcement of the power structure, we still need it. To paraphrase Heidegger, writing at perhaps a less desperate time, what we need now is less philosophy and more thinking. We must strive to think through and critically examine our changing world as it evolves, or else we risk becoming totally unaware, accepting all that is handed to us by increasingly secretive authorities. You are unlikely to find articles in the present volume that are true, that solve the problems they set out to solve. However, if they allow us to think, to see the world in a new light previously concealed from us, to wake us from our lazy acceptance, then The Gadfly has done its job.



ANTILOCHUS and the ‘FOOL’

queues and the problem of fairness

gabe rusk


In Book XXIII of the Iliad, Achilles calls for a heroic chariot race to honor the death of Patroclus. As the race intensifies, the ever-glistening Antilochus, one of five entrants, attempts to overtake Menelaus. A “narrow” bridge or place has formed on the track where only a single chariot can pass. As Menelaus approaches the bridge, Antilochus races up to his side. Menelaus calls out: “Antilochus, thou art driving recklessly…rein in thy horses! Here is the way straitened, but presently it will be wider for passing; lest haply thou work harm to us both by fouling my car.” Antilochus forges on, causing Menelaus, originally in front of him, to pull back and eventually take third to Antilochus’s disputed second place.

Moral theorist Malcolm Murray characterizes an event like this as the Narrow Bridge Game. The Narrow Bridge Game assumes that two chariots or two cars both approach a narrow space. How ought the two chariots proceed? Only one chariot can pass or disaster will ensue. If neither chariot passes, then both will ostensibly lose. Both can’t stay and both can’t go. The Narrow Bridge Game necessarily requires a choice between one chariot and another: one will have to wait and one will have to speed ahead.

A seemingly non-arbitrary and objective solution would be first in and first out, with time as the intuitively objective measuring stick. Whichever chariot first arrives at the narrow bridge ought to pass first. Chariot A arrived first and thus deserves to pass first; Menelaus arrived first and thus should pass first. This ‘first in’ and ‘first out’ concept of fairness is the conceptual foundation of a queue, or a line. A queue is like this aforementioned competitive race, a sort of consensual competition through a fixed place and time for a scarce service or resource. For the chariot drivers, the scarce resource is an award. For the queuer, it could be anything from time spent for a bank deposit to the last ride on Splash Mountain. First in and first out.
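Stated in computational terms, the principle is simply that service order equals arrival order. The sketch below (in Python, with invented names and arrival times used purely for illustration; nothing in it is drawn from the sources discussed in this essay) shows a queue that admits whoever arrived first and serves strictly in that order.

```python
from collections import deque

# Illustrative arrivals: (name, arrival time in minutes). The values are
# invented for this sketch; FIFO only cares about their relative order.
arrivals = [("Individual A", 0.0), ("Individual B", 3.0), ("Antilochus", 3.5)]

# Build the queue in order of arrival time: first in...
queue = deque(sorted(arrivals, key=lambda person: person[1]))

# ...and first out: service order is exactly arrival order.
while queue:
    name, arrived = queue.popleft()
    print(f"{name} (arrived at t={arrived}) is served next")
```

A “skip” is any departure from this service order, which, as the rest of the essay argues, is precisely the violation that queuers resent more than the waiting itself.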


In his 1987 paper, Perspectives on Queues: Social Justice and the Psychology of Queueing, Dick Larson (or Dr. Queue, as he is known) examined the fairness of “first in, first out” (FIFO) queueing. Dr. Queue argued that a queue’s fairness is derived from the systematic quality of FIFO. Order in line is determined by time of arrival. Queues are appealing because of this purported objective fairness. It is fairly difficult to dispute time as such: if Individual A arrives three minutes before Individual B, then Individual A is before Individual B. Yet there is no guarantee that Individual B will play along: Antilochus did not observe FIFO, leading the queue on the track to self-destruct. Queues and FIFO need two instrumental elements to function appropriately: cooperation and compliance. If a queue is indeed based on first in and first out, then the observers of the line must be just that: cooperators and compliers. If they do not cooperate or comply, then the line can’t function and ceases to exist. Famed moral philosopher David Gauthier would call the conditions needed to form a queue “morals by agreement.”

Two to three years of your life will be spent “agreeing,” cooperating, and complying with queues. Americans alone will spend 37 billion hours in a queue this year. If billions of hours depend upon “morals by agreement,” it is not entirely surprising that queuers are more likely to complain about violations of the queue than about the amount of time they spend in line. Scholars of “queue theory” have found that “skips” in line are vastly more detrimental to the perception and disposition of the queuer than other external factors, such as time. Social scientists and sociologists came to this conclusion after extensive research and the collection of personal testimony at such places as airports and sporting events.

Even putting this evidence aside, there are philosophical reasons to predict that a queuer would become more angered by “skips” than by duration of time. FIFO is a moral agreement stemming from a strong contractarian argument in the line of 17th-century political philosopher Thomas Hobbes. “Skips” break the contract all queuers agreed to. A contractarian argument for establishing what one ought or ought not do arises from an agreement between rational individuals in constant competition. FIFO is a contractarian agreement about what one ought or ought not do in a functioning queue. Hobbes argues that this “rational bargain” has to occur to squelch the “natural condition of war” or competition.





The natural condition of war is derived from inherent scarcity. An individual perceives others in some “least potential” of competition “for the goods that he needs for survival or for well-being.” This problem of competition instills a preference, such that the individual prefers to dominate preemptively so as to gain an early advantage in the scarcity battle.

Scarcity in lines can vary. Tickets to a rugby game are a scarce resource. A line is formed to objectively give tickets in order of arrival. First in and first out with a ticket. A line is also about scarce time. At a bank there is no fear that the bank will run out of money; a line in this case is a battle over scarce time. Individual A has limited time, as does Individual B. First in line at the bank and first out with the least amount of time spent.

If both scarcity and competition exist in a system, then the individual prefers to dominate regardless of whether the game is zero-sum. This preference is necessarily instilled in all individuals and would correspondingly lead to the “natural condition” and the eventual outbreak of war. Preemption by all parties, together with human error, inevitably leads to escalation. To mitigate this natural condition of war, Hobbes posits that some ‘Article of Peace’ or moral agreement must be made.

Moral agreements are derived from what Gauthier posits as choices that are “fully voluntary” and ex ante (before the event). The choice cannot be a parametric choice, meaning that if there are other actors or, in the case of The Iliad, other chariots involved, the choice does not and cannot revolve around the single interests of one actor. Individual A doesn’t dictate the rules in the competition for the scarce resource, be it a rugby ticket or time spent at the bank. The choice shifts from parametric to strategic so that the actor’s “behavior” is “but one variable among others” and thus, “his choice must be responsive to the expectations of others’ choices, while their choices are similarly responsive to their expectations [i.e. ex ante].” This voluntary ex ante agreement would require cooperation or other voluntary cooperative parties. In this sense Kurt Baier writes that “the very raison d'être of a morality is to yield reasons which overrule the reasons of self-interest in those cases when everyone's following self-interest would be harmful to everyone.” The yielding to others in a competitive state requires some acknowledgement of constraint. This constraint is necessarily voluntary and ex ante, or others could not be expected to do the same.




If Antilochus did not agree before the fact to FIFO, then Menelaus would not expect the norm to hold. Menelaus had assumed in his strategic choice that Antilochus would willingly and preemptively let the first driver into the narrow bridge, as he, too, was expected to allow. Menelaus is a “fair” or “just” person because he internalized and affirmed the constraint and effort to promote mutual benefit. A person is fair in a moral agreement when she accepts and expects others to constrain themselves while simultaneously constraining her own optimal choices.

Take the classic novel Lord of the Flies. Stranded on a desert island, a group of young boys realize within hours of their isolation that they need “rules.” All of the boys try to talk at once, but that doesn’t allow for conversation. They decide that whoever holds the conch has the right to speak. This is a competition in which all the boys wish to speak (a scarce resource), but they make a voluntary agreement to limit their actions according to possession of the conch. For those who know the story, this “peace agreement” doesn’t last long; who speaks becomes a battle where might is right. The boys of Lord of the Flies and Antilochus neither cooperate nor comply with constraint and thus reject the moral agreement.

Antilochus and the boys are Hobbes’s “Fool.” The Fool is the individual who refuses to cooperate and comply with a moral agreement. The Fool is the queue skipper, the proverbial line cutter, that egoist who cuts you in line at Subway. The Fool speaks for his Five-Dollar Footlong:

The Foole hath sayd in his heart, there is no such thing as Justice; and sometimes also with his tongue; seriously alleaging, that every mans conservation, and contentment, being committed to his own care, there could be no reason, why every man might not do what he thought conduced thereunto: and therefore also to make, or not make; keep, or not keep Covenants, was not against Reason, when it conduced to ones benefit…

Fools are pervasively irksome, not just because they are cutting you in line but because they are rejecting the implicit moral agreement that you have complied with. The Fool makes a parametric choice that ignores the line as a whole and the reason the line exists in the first place. Yet the pervasiveness of the Fool is growing. A modern Fool is not just a Fool who cuts and abolishes FIFO. A modern Fool is a Fool who believes she has earned her pedestal next to Antilochus. The modern Fool is now delusional.


Famed philosopher and amateur t-shirt model Slavoj Zizek has a favorite joke about such a Fool. A man believes he is a piece of grain. He is taken to rehabilitation or an asylum, where doctors finally convince him he is indeed not a seed. Immediately after leaving he runs back inside in terror, exclaiming that there is a chicken outside the door that is trying to eat him. “Dear Fellow,” says his doctor, “you know very well that you are not a grain of seed, but a man.” “Of course I know that,” replies the patient, “but does the chicken?” The modern Fool is not even aware of her own reality.

If FIFO is contingent on compliance and cooperation, modern queues that are determined by monetary access complicate fairness, as we can see from two examples: the fastpass at Disney World and priority boarding at airports. Both the queues for rides at theme parks and the queues for boarding a plane were originally dependent on time. First in and first out. This has technically not changed. “First” still has a connotation of just deserts. Individual A is first in line but not necessarily the “first” individual to arrive. Merit is no longer derived from time but, in some cases, from proportional monetary compensation. An individual can buy or earn her way to a further place in line, or even to the front of the line. Ryanair is particularly infamous for its charges on early boarding. Yet the Fool remains.

Ryanair and Disney would probably argue that every individual has the choice to purchase a fastpass or upgrade to priority boarding at the terminal; however, choice does not equate to ability. Disney, which had a FIFO system for fastpasses, is now looking to change it so that queuers get fastpasses based on which resort (and which price tier) they are staying at. While even in the most basic of lines there are those who could possibly get to said line faster or earlier, time seems to be a much more probable equalizer than financial prowess. Individual A and Individual B both have the choice to purchase a fastpass and both have a choice to upgrade their boarding position. Queues still reflect some sense of purported fairness. Individual A earned that position. Yet the ability to act on those choices is not necessarily equal. Structural poverty and the lack of social mobility problematize the fairness of queues in the future. Irrespective of having the same choices, not everyone has the same ability to act on those choices.



Even if time and monetary compensation are both earned, the former is more likely to be a fair standard. Yet even this ‘new’ queuing standard isn’t guaranteed to repel all Fools. “Skips” have taken on a more perverse form. Reports have surfaced of mothers hiring handicapped individuals at $1,000 a day to accompany their children at Disney World so as to receive VIP Tour standing and even “skip” the fastpass holders. These Fools worry about the chicken outside the door, proclaiming they are indeed complying and cooperating with the agreed-upon rules!

Fools exist in every devolution of the queue and will necessarily continue to do so. So long as queues depend on cooperation and compliance with moral agreements, there will always be dissenters like Antilochus. There will be those who fight “Articles of Peace.” Regardless, the queue will remain a Petri dish of fairness in the chariot race that is the competition between moral agents. As timeless wrestling icon and incidental philosopher Mr. T once proclaimed: “I pity the Fool.”





FEMINISM in ROMANCE

the modern woman's role in the novel

alejandra oliva


Today’s romance novel is not your grandmother’s bodice-ripper set in Regency England. Instead, now that the Twilight-sparked Paranormal Romance phase is (thankfully) over, the market has settled into a trend unhelpfully called “New Adult Contemporary.” These books have all the angst (teenage or otherwise) of the Twilight series, sans vampires, werewolves, witches, etc. They generally tell the tale of the meeting of two middle-class, white protagonists of opposite genders with troubled histories. Sparks, both erotic and emotional, fly. Will their separate, but equally horrifying, traumas make it impossible for them to survive as a couple? Or will they be able to overcome their pasts and find lasting love?

It is almost a trope to deride romance novels of any kind as formulaic and cheap “bad” literature; however, these books comprise the bulk of not only “women’s fiction,” but fiction in general on the market today. In fact, romance novels make up 16.7% of the market share of books in the U.S. How can a genre so universally derided as cheap and trashy be so popular? While it’s easy to simply blame the mindless cud-chewing of media consumers, there’s a much more complex, nuanced reasoning behind the social status of romance novels. There’s a strong case to be made that both the derision and the dominance of the genre are based in the overt white-masculinity of canonical literature.

Most of the books in the romance market are written by women – your Danielle Steeles, your Nora Roberts, and so on – and for women: 91% of romance novel readers are women. This is an explicit choice – editors view submissions by men to trade-paperback (read: the books you find at a drug store) and women’s imprints skeptically. In fact, most women’s lit imprints only have one or two men on the backlist, generally in the “cozy” genres (think James Herriot: a big-city vet arrives in a small town to be charmed by its citizens, meets and marries a sweet girl).


Where detective and spy novels written for men by John Grisham and the like are equally formulaic, they’re seen as somewhat more literary – their film adaptations get wide commercial releases as opposed to the Lifetime Movie treatment, their authors are more celebrated, their covers are less likely to be pastels. However, this not only indicates that, all other things (quality of writing, strength of plot and the like) held equal, men’s writing will be taken more seriously than women’s; it is also one of the factors that underlies the overwhelming white-maleness of the canon. Of course you have the whole “ancient” canon, from the times when women were considered unworthy of the art of letters, but even in more modern times there is a plethora of not just white male authors, but misogynistic white male authors. You have Hemingway and Mailer and Roth, while Didion and Sontag and Nin are relegated to the outfield, just barely covering the representational bases of the curriculum. And so after years spent in high school and college classrooms being fed the lit-department version of “good literature,” it is only a matter of course that women need the escapism.

Not the escapism usually attributed to the readers of romance novels, mind you – it’s not necessarily an escape to an idealized world of fairytale romance, but rather to a world where women’s stories are told in complex, nuanced ways. This escape, although caused by the very masculinity of the books considered canonically worthy, is viewed as frivolity, evidence that women readers cannot read (or write) anything beyond the flighty, the fluffy, the romantic. The constant masculinity of “highbrow” canonical literature leaves little recourse but the occasional escape into “lowbrow” women’s writing for even the most snobbish female reader.

So what exactly are these feminine representations that women are escaping to? As mentioned above, the bodice-rippers of the past are long gone. A bodice-ripper, for those unfamiliar with the genre, is the old formula for the romance novel, intended to make female enjoyment of sex less taboo by removing the heroine’s control of the situation: a virginal teenaged protagonist living between 200 and 500 years ago is married to or kidnapped by a virile superhunk who rapes her, which she ends up enjoying; she eventually “tames” him, grooming his rough exterior into her one true love. Instead, today’s romance novel protagonist is a little older: the ages in New Adult Contemporary range from college students to solidly middle-aged divorcees.





She’s usually financially independent, professionally or creatively successful, and much less likely to be a virgin. These books, as the sub-genre name suggests, are also much less removed historically speaking – while there are still any number of Regency and Elizabethan romances published each year, New Adult Contemporary is set roughly in the present, with iPhones and mortgages. While rape is still a major theme in many of these books, it is never used as the catalyst for a relationship – instead it’s typically the dark shadow lurking in the protagonist’s past.

Generally the protagonist is also working on some kind of professional or personal project – renovating a hotel, expanding a boutique business – and these are intended to serve as placeholders, something to keep her busy as she heals. However, the very professions these heroines hold are indicators of the conflicting signals of femininity these books espouse. Female protagonists are rarely doctors, lawyers or bankers (unless part of their psychological trauma involves “being married to their work”), but instead labor on the more domestic end of the career spectrum: florists, caterers, “small business owners,” interior decorators and the like. Today’s romance novel heroine begins the book far more independent than her sisters from fifty or so years ago, but remains tied to the apron strings of traditional women’s careers.

Her traumas are also modern in their expression: she has been raped, she has had an abortion she bitterly regrets, she has been violently abused by a parent or an ex-boyfriend, she has been cheated on or ignored or taken for granted by a now- or soon-to-be-ex-husband. As a result, she never feels beautiful – there’s always a bit too much fat around her midsection, she’s just a little bit too old, or, worse yet, she just doesn’t see herself as worthy of love. Her issues have slowly become internalized, coloring her views of herself and her desirability. The project she undertakes is usually framed as a way for her to focus on herself. There may even be a weight loss or fitness subplot, framed as a way to self-focus, but which in reality serves as a way for the reader to comprehend her desirability to the male romantic lead.

These traumas, and the self-improvement projects that follow them, serve to add complexity to what are otherwise cookie-cutter characters. Authorial intent lies in giving female characters perceived independence, and giving them a means of not only supporting themselves, but also of creating the perception of a full life, one that does not require a man to make it complete.




This independence has a thin veneer of basic feminism – these are independent women who have survived trauma, who are focused on themselves, and who don’t require the male touch to heal their traumas or validate their choices or their bodies. However, despite all the best efforts of the author to provide the reader with an “acceptably” feminist-approved modern woman to identify with, she has internalized many patriarchal concepts, especially those that pertain to her desirability and her conduct. She never goes so far as to openly desire a man, or the white picket fence and the 2.5 kids, but some of her pain stems from having missed out on these things in some way.

Her male counterpart is also damaged goods, albeit in a very different way. His struggles rarely impact his self-image; they usually just make him very sad – he suffers from girlfriends who had abortions without his knowledge, PTSD, a gold-digging wife, and so on. He swaggers into the storyline, steals “melting” kisses in corners of restaurants and next to parked cars, has abs and biceps of steel. He doesn’t need rescuing; he has hermetically sealed himself away like Superman in his Fortress of Solitude. His pain manifests itself in a supremely masculine way, as isolation and withdrawal. Much like the female protagonist, he is also independent – financially, emotionally and sexually. Moreover, he is generally more sexually experienced than she is: while she is certainly not a virgin, he’s been around the block a few more times than she has. He is, in short, the uber-protagonist: the embodiment of all the traits our modern female character is striving for. This is the elementary foundation of sexism: woman is an imperfect copy of man that still needs polishing. The fact that this representation of women is found in books written for and by them smacks of internalized misogyny – a failure to meet the challenge of creating complex female characters without making them weak, and attractive male characters without making them overwhelmingly strong.

The male and the female protagonist meet, of course, and sparks fly, typically in the form of some very passionate kisses that are followed by one or both participants abruptly disappearing from the room and retreating into aroused confusion. Things come to a head both emotionally and erotically the first time that the couple has sex.


The portrayal of sex in romance novels is among the most female-centered depictions in modern mass media – it’s something that generally goes both ways, far from the typical one-sidedness of pornography, and it can be explicit – hardly a romantic comedy’s tangled sheets and heavy breathing. Depictions of sex in romance novels are often – although not exclusively – narrated by the female protagonist, and do not balk at showing female pleasure. Women consent, ask for condoms, sigh, moan, orgasm, and, most interestingly, direct their partners (although not too much – part of the male protagonist’s allure is that he’s pretty good at what he’s doing). Oral sex, when included, is usually given and received by both partners, although not always in the same scene, and in the books where narrative duties switch off, both are depicted as being equally invested in providing pleasure to their partner.

These depictions of sexual encounters are revolutionary in their egalitarianism. This is a far cry from the original rape-fantasies of bodice-rippers, but also diametrically opposed to the modern-day depiction of sexuality in films. There, the woman is always the giver, and the depiction of feminine pleasure is taboo. (Feminist icon Ryan Gosling even argued that his 2010 film, Blue Valentine, was originally given an NC-17 rating by the MPAA because of its depictions of oral sex performed on a female.)

However, after the first tumble in the sheets, the relationship generally runs amok – some sort of major discord is sown, and intimacy is cut off, increasing sexual and emotional tension until a confrontation is inevitable. This confrontation of the strong female and the strong male protagonist often results in a battle of wills that, for the book to be a true romance novel, must be won by the woman, asserting the traditional values of family and togetherness.

Susan Elizabeth Phillips, a wildly prolific romance author and feminist, describes the central fantasy of the romance genre as a fantasy of female empowerment. For Phillips, the modern female reader must strive to fulfill so many roles that balance and control are impossible. To see a protagonist in a romance novel triumph over a man who is stronger and more capable than she is, by the simple fact of her beauty and caring, is so empowering that it creates for the reader the illusion of control in an uncertain environment. Folklorist Linda J. Lee likens this to the Beauty and the Beast myth, where the “beast” must be tamed in order to create a loving partner.



While the first hint of domesticity comes with the first sexual encounter, there is still more terrain to be traversed before the final goal of a caring, human partner. Lee likens romance novels to female “quest” books, where the heroine must confront not only her own demons, but her partner’s often more obscure demons as well. The framing of the romance novel as a quest puts the female protagonist in the archetypal “hero” role, and in fact, she is often helped along by other gender-bent iterations of traditional quest-story archetypes – the helper, often an older neighbor or the heroine’s mother; the shadow or trickster figure, a rival for the man’s attentions; and so on. This quest is empowering not only in its completion, but in the female protagonist’s active role in completing it. She is not only thrust into a center stage typically occupied by a man – granted, her goal is quite different from that of the usual mythological hero, but it is a search nevertheless – but she also takes an active role in finding the lost husband, in taming the beast.

However, what does this “taming” entail for the man in question? Generally speaking, modern romance novels end much the same way as any Jane Austen book – not always with wedding bells, but often with some cozy cohabitation, or exhilarated confessions of feelings. In short, they end with the very triumph of the “family values” and love-and-marriage that have been the feminine ideal since the patriarchy determined that the home is a woman’s sphere. The end result is a somewhat conflicting view of women’s roles – she is active, independent, empowered, but also ready, at the drop of a hat, to submit to her societally mandated place.

Romances have come a long way in terms of more accurately representing the romantic life of today’s woman, but they also remain firmly entrenched in the societal values of the past: the proverbial white picket fence with 2.5 kids and a loving spouse. This tension between old-fashioned patriarchal values and more modern ideas of independence and sexual liberation is not easily resolved, and in presenting personal and financial independence as “not quite enough,” these novels continue to propagate romance and traditional values as the ultimate key to happiness and self-fulfillment.



can HOBBES’ LEVIATHAN WITHSTAND the CLASS STRUGGLE?

virgilio urbina lazardi

Beseeching a revered political philosopher to “take a side” in local, partisan bickering taking place more than three centuries after his death may appear, at first glance, to be a somewhat frivolous endeavor. Considering, moreover, the scope of Thomas Hobbes’ towering contribution to political philosophy, riffling through the pages of Leviathan in order to find a justification for a campaigning politician’s proposal seems akin to using a sledgehammer to swat a pesky fruit fly. Yet these “innocuous” thought experiments frequently reveal previously unexplored limits of a thinker’s analytical framework, thereby exposing certain strengths as well as weaknesses that enrich our understanding of said thinker’s oeuvre. This sort of enrichment, I believe, can be found by imputing to Hobbes a response to one of Joe Lhota’s retorts to Bill de Blasio during the latest campaign season, in which Lhota accused de Blasio’s proposal of unjustly waving the flag of “class warfare.” How does the category of “(socio-economic) class” fit in Hobbes’ contractualist theory? Does his description of the Sovereign point to how to adequately reconcile competing economic interests in the body politic, or is this question one that he inadequately answers in his treatise? (Note: Henceforth, I shall use the gendered pronouns “he, him and his” in reference to the Sovereign, if only to reinforce the (gendered) personification of the Hobbesian Leviathan.)

In this brief article, I wish to put forth the belief that Hobbes on the whole fails to properly account for the extent, impact and importance of class struggle because he traces the origin of a legitimate state from the standpoint of an ahistorical (meta)subject, whose rationality is derived from an altogether rigid conception of “human nature.”




Hobbes’ methodological individualism hampers his ability to consider whether the individual’s position within a class – that is, the individual’s position within a social mode of production, the individual’s relationship to the means of production – has certain grave implications for the role of the state. To do so, I will attempt a brief reconstruction of Hobbes’ State of Nature, since the State of Nature acts as a benchmark for him to evaluate the Social Contract. I will thereafter critique Hobbes’ supposed “solution” to the State of Nature on the grounds that this “solution” hardly considers the distortions to political power that arise from a class-conditioned reality (class society). Lastly, I will deduce whether Hobbes would in the end endorse de Blasio’s proposal, though I hope to show from the preceding discussion that a simple endorsement (or opposition) is unsatisfactory.

As stated above, Hobbes begins his analysis by examining the position of an individual divorced from the context of civil society. Hobbes grants this “metasubject” a basic disposition toward self-preservation. To be able to sustain himself in this ahistorical State of Nature, this individual must have the right to (i) appropriate whatever is necessary for his physical sustenance, (ii) protect himself from all external threats that would deprive him of either the means to his sustenance, his “natural freedom,” or (what amounts to the same thing) his life, and (iii) establish whatever conditions may be necessary to ensure his survival. Translated into the language of the “laws of nature” that Hobbes dictates in Leviathan, this set of rights awards all individuals in the State of Nature a right to everything, a right to self-defense and a right to petition for “Peace.” Yet insofar as all individuals are relatively equal in their natural endowments, Hobbes believes that the equal distribution of these rights will inevitably turn a State of Nature into a perpetual war of all against all, in which every subject’s life, let alone whatever property they can secure, is in jeopardy.

The conversion of the State of Nature into a State of War is unavoidable as a result of the unenforceability of covenants, by which individuals could establish boundaries to their “right to all.” Since the State of Nature lacks an overarching authority that, with a credible threat of force, could guarantee the fulfillment of these contracts, the “law of nature” to create said covenants is rendered a proverbial dead letter.





Even if material interests did not in fact overlap, the fear of unjust encroachment is sufficient to sustain an atmosphere of distrust (“diffidence”), in which embarking on an unprovoked, preemptive strike is a rational course of action for each subject. Without a universal third party, without the operability of contracts, and therefore with a constant fear of foreign encroachment, every individual has an unmitigated right to all, which effectively ensures that each one has a secure right to nothing. Hence the pronouncement that human life is naturally “solitary, poor, nasty, brutish and short.”

The formation of the Sovereign is thus understood by Hobbes as a way of breaking out of this negative equilibrium; in other words, as an exercise of the aforementioned right to “endeavor for Peace.” Notice how this formation altogether precludes the existence of classes. If enforceable property rights (though not necessarily bourgeois property rights) are nonexistent in the State of War, any substantive economic association is likewise impossible. Indeed, the only relations of production to speak of before the Social Contract are between Hobbes’ metasubjects and whatever little they can secure for themselves. This means that whatever political structure these individuals construct in the process of leaving the State of Nature will not take into explicit consideration the influence that economic stratification or positioning may have on the maintenance of a well-functioning social totality. This is not the case for all contractualist theories of the state. Locke, Rousseau and an assortment of other thinkers certainly could conceive of uneven land ownership – or other forms of class formation or class consciousness – before the rise of civil society. Such an exclusion of the economic dimension from the picture will eventually handicap Hobbes’ political project, though this can only be decided after I describe his paradigmatic Leviathan.

What is the Leviathan tasked with upon his creation? What responsibilities do the formerly free subjects have to their new awe-inspiring Arbiter? Hobbes affirms that the Social Contract is solely among the subjects themselves, who compact to form a single body (composed of an individual or a group of individuals) that remains in the original boundless State of Nature. John Rawls’ characterization of the act as an authorization is perhaps a more appropriate descriptor. The individuals alienate the entirety of their natural rights to a corpus that they entrust, for perpetuity, with the enforcement of civil laws.


The strength for this enforcement is to be supplied by the willing body of subjects, who henceforth are compelled to lend themselves (and all that they can offer) to the state’s necessities. After all, the Sovereign can accept no “diet” in the carrying out of his duty. All of this is to say that, for the subjects not to fall into the State of War once again, the Sovereign’s authority must be (i) absolute, (ii) indivisible and (iii) unquestionable. He stands above the laws of which he is simultaneously the author, executor and reviewer, though, as Hobbes is quick to point out, in the performance of these tasks he is merely fulfilling the obligation set upon him by his formerly warring subjects. His power over life, property and other social institutions is similarly unlimited, even though all of these things only come to exist (at least, in a relatively stable fashion) with his appearance. For the Sovereign’s guarantee to uphold the rule of law renders operable what was formerly inoperable in the State of Nature: the ability to form covenants. With this step, the analysis can be said to finally incorporate economic formations, or the possibility of different modes of production.

Here is where Hobbes’ Leviathan starts to encounter certain conceptual difficulties. With the emergence of modes of production comes the possibility of classes, the aggregation of which may begin to undermine both the Sovereign’s imperative as well as his “supreme” standing. Ideally, the Sovereign’s task is to enact legislation that is on the whole beneficial to the Commonwealth, benefit being here defined as the capacity for the Sovereign’s subjects to pursue a “commodious” living. However, Hobbes does not make clear what the Sovereign is to do when certain economic arrangements cause conflict that is not explicitly formulated in the language of political discourse. While the state can in principle reorganize the playing field in which economic actors interact (as the realignment of property relations is within the Sovereign’s prerogative), its directive to do so is muddled when there exists a fundamental, structural disunity of extra-political interests within his populace. To take an example: if, as Karl Marx argued in Capital, the conflict over the working day between the sellers of labor-power and the owners of the means of production in a capitalist mode of production is “between equal [politically recognized] rights… [where] force decides,” how should the Hobbesian Sovereign intervene? Can the Sovereign ascertain a general interest (from which to determine the “benefit”) in the presence of “equal rights”?





The problem of the “confused directive” is then further compounded when we take into consideration the possibility that the economic compulsions brought to the surface by the class struggle can in effect subordinate the state into becoming a class instrument. Particularly orthodox Marxists often bandy about the phrase “the executive committee of the ruling class” when describing the state. While I find this formulation overly reductive, the grain of truth embedded in the statement is particularly noteworthy in this discussion. The mighty absolutist Sovereign may, for the sake of preserving himself, have to side consistently with arrangements beneficial to a specific class in order to avoid financial ruin. This is especially pressing if competition between sovereigns is brought into the analysis. Can a Sovereign truly be considered to be in the state of “natural freedom” if, for the sake of an example, his survival relies on the compound accumulation of capital or the predominance of feudal arrangements? Can a class assert implicit control over the state merely by the economic weight it gains through its “covenants”? This would seem to defeat the very purpose of the Sovereign, at least as Hobbes wishes to envision him.

While Hobbes deals clearly with the issue of political factionalism by asserting that the Sovereign’s authority in the political realm is absolute, he is unable to confront economic factionalism effectively. Because Hobbes’ political edifice is built in abstraction from the material reproduction of civil society – this done to accommodate the individualistic methodology – the category of class suddenly becomes troublesome, if not insurmountable. If class struggle continues to rage in civil society, can Hobbes’ Sovereign truly be said to have removed mankind from the State of War?

To end this line of questioning, I would like to return to the conflict between Lhota and de Blasio that inspired this short investigation. De Blasio’s scheme for funding a somewhat neglected segment of public education depends upon the taxation of a small but extremely wealthy portion of New York’s inhabitants. There is nothing in Hobbes that would deny the Sovereign the ability to tax the rich. The Sovereign can feed himself in whichever ways he pleases, and his requests cannot be denied by anyone on the basis of private legal ownership (since, as I stated above, the Sovereign is not bound to the laws he enforces).



Whether or not the enterprise to fund preschool is worth the supposed loss of “business confidence,” “investment incentive,” or whatever other economic repercussions lie in store is a question that the Sovereign will have to answer through lenses that are more difficult for Hobbes to process: those of economic inequality. For in making that decision, the Sovereign will have thrown a stone in the class war that characterizes capitalist societies.



DEWEY and the MOOC

a pragmatist approach to education and technology

mark antony gorenstein

American public education maintains a tense and superficial relationship with technology. Teachers assign PowerPoint projects and occasionally shuttle their students to a computer lab, but still remain within the limits of dated curricular conceptions. While tools like the projector, Scantron machine, and photocopier have reduced the burdens of teaching, uses of technology have largely conformed to existing pedagogy. Other "innovations" intended to foster student engagement, like the interactive whiteboard and clicker response system, often feel out of place in a classroom, or even downright gimmicky. Recent technological developments have fundamentally altered society, and public education has failed to adapt accordingly. A vision of the school as a shelter from the technological saturation of the modern age conflicts with the need to practically prepare students to engage with this ever-morphing present.

Pragmatist philosopher John Dewey diagnosed similar faults in the public schools of America following the Industrial Revolution that redefined his age. Science, through the mechanized, controlled industrial processes it enabled, began to radically impact the daily lives of citizens. Dewey saw a need for educational reform consonant with a transformation of American society he viewed as "so rapid, so extensive, [and] so complete." He notes that when industrial activity stood exposed and centered on the household, children acquired discipline and an understanding of production's role within society through their occupational contributions. Mechanization and division of labor abstracted the productive process, eliminating the practical training of children and its corresponding educative benefits.




With household instruction waning, Dewey considered public schooling the force capable of unifying scientific progress with society's ideals by "effecting the transfiguration of the mechanics of modern life into sentiment and imagination." Mouse clicks now move mountains, as digital systems direct production with a hyper-efficiency unparalleled in human history. Facebook replaces in-person social interaction, Wikipedia compiles the world's knowledge, and yet today's conceptions of the curriculum hardly differ from those of Dewey's time. The unrelenting march of innovation in today's age demands a reimagining of the school's role in society.

Within his system, Dewey privileges education as a process uniquely disposed to shaping individuals and the societies they inhabit. Dewey sought a philosophical approach closer to lived experience than the theoretical and assumption-ridden endeavors of his predecessors. He viewed philosophy not as a mere intellectual exercise, but as a practice that matters and is capable of improving lives. Understandably, then, Dewey defined the goal of education and the final measure of its value as "its use and application in carrying on and improving the common life of all." According to Dewey, a fluid educational environment that facilitates individual experience of shared knowledge is necessary to the maintenance and improvement of our democracy.

Although gatekeepers and censors do limit access to certain content, the Internet has, on the whole, enabled an unprecedented expansion of the average citizen's access to information. When advancements in printing and distribution during the industrial age allowed information to spread beyond a privileged class, Dewey proclaimed that "knowledge is no longer an immovable solid; it has been liquefied. It is actively moving in all currents of society itself." No longer limited by the physical constraints of ink and paper, knowledge now flows (mostly) freely through fiber optic cables and across continents. Those with a connection can traverse unbelievable quantities of material conveniently indexed by Google and curated by various online services.

Massive Open Online Courses, or MOOCs, have recently emerged to provide free access to high-quality educational content once only available to the students of elite universities. Varying in scope and subject matter, these courses simulate the traditional classroom experience with video lectures, discussion components, peer-graded assignments, and examinations.


One MOOC provider, Coursera, has registered more than 4 million users since its launch and has seen enrollment in individual courses surpass 100,000 students. Browsing through the catalogues of Coursera or blogs like Open Culture, I have previously located several seemingly interesting MOOCs, like "Exploring Beethoven's Piano Sonatas" and "Introduction to Mathematical Philosophy", though I have yet to complete an entire course. My usual pattern involves excitedly registering, seriously working through the material for a few weeks, and eventually falling behind and giving up on the course entirely.

Despite average completion rates of less than seven percent, MOOC proponents point to success stories like that of 15-year-old Mongolian student Battushig Myanganbayar, who garnered attention and eventually an offer of admission from MIT after earning a perfect score in the university's Circuits and Electronics online course. Although MOOCs allow students to proceed at their own pace through a series of modules, their lack of guidance and individualized instruction limits the likelihood of success to a highly motivated minority; as such, these courses are not a sustainable model for the education of a varied populace. A number of factors could account for the difficulty of keeping myself and thousands of other MOOC adopters engaged: lack of personal investment, the absence of incentive, and the novelty of online education all stand as barriers to the widespread adoption of this model. As modern technology continues to democratize knowledge, active examination of the methodologies underlying education becomes imperative. MOOCs and the classroom environments they replicate currently fail, as did the traditional schools of Dewey's time, "to take into account the diversity of capacities and needs that exist in different human beings. ... [providing] a uniformed curriculum for all."

In elaborating his educational philosophy, Dewey distinguishes the traditional schools of his time from those experimenting with progressive modes of education. He identifies the traditional model with specialization, rigid standardization, and quantification of student performance, all aimed at efficiently transmitting pre-determined information.




Rather than treating students' efforts as misguided approaches to be discarded, a progressive educator continually assesses the capabilities and needs of individual students, directing their growth according to their potentials at particular moments. The science of the traditional school consists in "perpetuating the present order," while the progressive school seeks growth, movement, and change that builds upon existing possibilities.

Certain experiments in education harness the content made available by modern technology for a more experiential and personalized approach. The "flipped classroom" model involves students learning material provided by their teacher or from a source like Khan Academy as homework, freeing up class time for the completion of related assignments under the teacher's guidance. A classroom experience involving action and creation, with a teacher directing a student's efforts, more closely resembles the continuous, personal world of a child that traditional methods of education so upset. According to Dewey, a child "goes to school, and various studies divide and fractionalize the world for him. ... Facts are torn away from their original place in experience and rearranged with reference to some general principle." A flipped model can reduce the boundary between a child's experience outside of a classroom and the learning accomplished in schools.

Dewey understood the benefits of organizing knowledge into discrete and logical subjects, but denied the value of attempting to transmit knowledge in this abstract form to children. A teacher's role involves determining the tendencies characteristic of growth, consulting the relevant body of knowledge, and modifying a student's immediate environment to facilitate further progress. While not a substitute for individual experiences, the logical collection of a study serves, to Dewey, "as a guide to future experiences; it gives direction; it facilitates control; it economizes effort, preventing useless wandering, and pointing out the paths which lead most quickly and most certainly to a desired result."

The method just outlined precisely describes the motivating principle behind adaptive learning platforms like the one produced by Knewton. Knewton acquires massive quantities of data related to a student's interaction with academic material and attempts to gauge the student's progress along a set curriculum. As its store of data grows, Knewton refines its machine learning algorithms to account for the various paths past students have taken, contextualizing student progress to decide future instruction.
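The paragraph above describes the general mechanism of adaptive platforms rather than any published algorithm. Purely as an illustration (this is not Knewton's actual method; the skills, threshold, and update rule are invented assumptions), the Python sketch below tracks a per-skill mastery estimate from a student's responses and picks the next unmastered skill in a set curriculum.

```python
# A deliberately simple sketch of the adaptive-learning idea described above.
# The update rule is an exponential moving average of answer correctness;
# real platforms use far richer models.

LEARNING_RATE = 0.3      # how strongly the latest answer moves the estimate
MASTERY_THRESHOLD = 0.8  # above this, move the student to the next skill

def update_mastery(current_estimate, answered_correctly):
    """Nudge the estimate toward 1.0 on a correct answer, toward 0.0 otherwise."""
    target = 1.0 if answered_correctly else 0.0
    return current_estimate + LEARNING_RATE * (target - current_estimate)

def choose_next_skill(mastery, curriculum):
    """Return the first skill in the set curriculum not yet mastered."""
    for skill in curriculum:
        if mastery.get(skill, 0.0) < MASTERY_THRESHOLD:
            return skill
    return None  # curriculum complete

# Hypothetical curriculum and a short run of observed responses.
curriculum = ["fractions", "decimals", "percentages"]
mastery = {}
responses = [("fractions", True), ("fractions", True), ("fractions", False),
             ("fractions", True), ("decimals", True)]

for skill, correct in responses:
    mastery[skill] = update_mastery(mastery.get(skill, 0.0), correct)

print(mastery)
print("next skill to practice:", choose_next_skill(mastery, curriculum))
```

Real systems replace this moving-average update with statistical models trained on data from many past students, but the loop is the same: observe a response, revise the estimate, choose the next activity.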


Expanding the scope of this methodology beyond the activities available on a particular platform would allow teachers to determine a child's location in the growth-process and direct further development. Technology freed from the limitations of the traditional curriculum can realize a progressive vision of education.

If those driving educational policy are not prudent, much of the populace will remain, as Dewey perceived, "mere appendages to the machines which they operate," largely because their education provided "no opportunity to develop [their] imagination and [their] sympathetic insight as to the social and scientific values found in [their] work." Absent is a large-scale, well-funded effort to develop new educational methods and systematically investigate their implications. Technology properly employed provides insight into the developmental track of individual students and can generate appropriate activities and materials; these tools enhance rather than supplant the educator's role, empowering teachers to maximally impact their students and society. Radical social change requires a science of progressive education whose experiments and hypotheses aim to guide forthcoming generations through the pathways of human knowledge, producing citizens able to engage critically with their ever-changing world.





of GOATS and MEN

david beal & max nelson

Scene: It is just before three in the morning on a long, hot Friday in late August. MILLICENT, 13, is sleeping over at the house of her friend PETUNIA, also 13. It is the last weekend before eighth grade begins. Millicent is lost in a deep sleep; Petunia is awake, reading Walpole by candlelight. Suddenly Millicent, stirred by a nocturnal thought, bolts awake.

Millicent: There is, I maintain, something ethically dubious about the recent flood of goat-related YouTube videos.

Petunia: That’s quite a vague statement. To which videos are you referring?

Millicent: For the sake of argument, I’ll limit myself to the compilation videos of goats yelling like humans. For instance, have you seen the edit of Taylor Swift’s song “Trouble” with a screaming goat laid over the chorus?

Petunia: Who hasn’t? I’m still a little confused by your charge, though. Is your claim that it’s ethically dubious to make the videos? To upload them? To watch them? Where do you place the blame?

Millicent: You misunderstand me. I didn’t mean to point the finger at any individual. It’s not an action I want to evaluate, but a thing—or at least a loose category of things.

Petunia (putting down her book): Very well. Why, then, do you find the videos “ethically dubious”?

Millicent: I was getting there. First: whereas you can see—and laugh at—the goat in the video, there’s no way for the goat to see you. You’ve heard, I’m guessing, Montaigne’s question: “When I look at my cat, how do I know that she is not playing with me?” Well, we know that the (non-human) animals we look at in person are able, at the very least, to look back at us. What’s more, we suspect that animals look at us a bit like we look at the world around us: as something foreign, strange, or unknowable. This gives us a kind of common ground with animals, and goat videos, it seems to me, do away with the basis for that common ground. The problem isn’t that viewers are laughing at the goats; it’s that they’re depriving the goats of the means to challenge them in response.



Petunia: I am not convinced. As you say, we know that the animals we look at face-to-face see us in return, but we have little way of knowing what they see when they look back at us—let alone whether that thing corresponds in any meaningful way to what we see when we look at them. It seems to me that it’s you who has to decide that, when it looks at you, the goat is really giving you some kind of challenge—and if all you’re doing is projecting your own insecurities onto a goat’s facial expressions…well, you could do that just as easily with pixelated goats as you could with flesh-and-blood ones.

Millicent: Very well; I concede the point. But you can’t deny that the YouTubèd goats are, more often than not, objects of ridicule. The videos might not all be called “Stupid Goats Making Stupid Noises” (although one, with 2,540,558 views, is), nor do they all contain outright abuse (like “Weird Goats Compilation,” in which two audibly guffawing off-camera men give a goat a cigarette), but there’s some derogatory thread running through nearly every one of them.


Petunia: Perhaps—but the humor, like all humor, depends on incongruity more than anything else. Each video tends to gawk at the goats for their strange, woolly goat-ness, and then re-casts them in a distinctly human situation, whether it’s yelling like a man or singing backup in a pop song. (This is taken to an extreme in, for instance, “Goat Eating Dinner,” which seems to have 132,408 views—peanuts on YouTube, but larger than the population of most US cities.) Making animals seem at once extra-animal and extra-human is a well-trod comic device: it’s there in everything from Coolidge’s Dogs Playing Poker to Eddie Murphy’s Dr. Dolittle. Are these “ethically dubious,” too?


Millicent: Surely not. Dr. Dolittle is a masterpiece. But that’s partly because it never asks us to, as you put it, gawk at the animals in question. What really irks me about the goat videos is the way they reduce their subjects to clownish spectacles, and convert those spectacles into YouTube ad revenues.

Petunia: In that case, imagine another site—let’s call it GooTube—run by a community of earnest goat-video enthusiasts committed to keeping their site completely ad-free and non-profit. On GooTube, no goat has to fear having its image converted into capital. How would you evaluate the moral content of, say, “The Ultimate Goat Edition Supercut” if it were posted on GooTube instead of YouTube?

Millicent: I don’t think I understand the experiment. The concept of a website that doesn’t convert information into capital seems to me like a plain contradiction. It would have to exist somewhere outside the internet, where the mere act of making something public is enough to transform it into marketing data (to say nothing of third-party tracking cookies).

Petunia: I’m not sure that this argument, at least as you’ve phrased it, has the force of a logical contradiction—surely it’s possible to at least imagine a world containing a global shared computer network, but in which no data was mobilized for the production of capital—but never mind that. Isn’t this point rather incidental? If you do in fact believe that these videos “reduce their subjects to clownish spectacles,” wouldn’t the economic exploitation be icing on the cake? Surely the use of animals as spectacle would, in your view, be ethically irresponsible whether or not the spectacle in question was converted into capital!

Millicent: Well, yes. What I suppose I am getting at is that, in letting ourselves think of an animal’s physically defining characteristics—the traits that mark it as a seal, a pig or, in this case, a goat—as essentially silly or ridiculous, we demote those animals from independent, autonomous beings to our playthings, our clowns, and our sources of entertainment.

37


our consciences when it comes to the inhumane treatment of animals on a broad scale. If I may be allowed the comparison, it’s not far from the spirit of the nineteenth-century American minstrel show, which seemed designed to submerge suffering in whimsy, to neutralize ugly truths. Incidentally, or maybe not so incidentally, these shows were the viral videos of their day, with minstrels adjusting and “remixing” their acts as they moved from town to town. Petunia: I find the slavery comparison a bit melodramatic. You’re implying that all animals automatically suffer, and that the principle effect of silly YouTube videos is to make their suffering seem less ethically urgent to us. Millicent: I’m not saying that all animals suffer all the time. In fact, goats probably have it better off than most of the cows that are now being raised for slaughter in factory farms. Nor am I saying that the only effect of the videos is to make animals less dignified in our eyes and make their inhumane treatment more palatable; if anything, that’s one of their unintended byproducts. If someone sees a goat video and it brightens their day, or if we see a goat video and it prompts this conversation, those are also byproducts. Petunia: You have a point. What, though, do you mean by “inhumane”? Are we expected to treat non-human animals as if they were human?


Millicent: Not at all. Animal rights discourse ought not simply try to remake animals in humankind’s own image—in that case, we’d only have transferred the subjugation from a physical level to a metaphorical one. (Your “Goat Eating Dinner” video is a good example.) Instead, I think, the goal should be to stress the value of animal life as it actually is—however strange and unfamiliar and (literally) inhuman. I like Richard Klein’s suggestion that “a dog should die like a dog,” without all the rituals traditionally associated with human mourning. And a goat, we might add, should live like a goat.

Petunia: When you use the word “humane,” then, do you mean that we ought to treat animals in a manner that does justice to our own humanity?




Millicent: A slightly better suggestion—I’d still be wary, though, of making matters so human-centric. To say that factory farming, for instance, is inhumane because it fails to do justice to the humanity of the participants would seem, in my view, to miss the point. What is at stake is not just the “humanity” (whatever that might mean) of the men and women who build, maintain and demand such farms, but the lives of the animals in their charge. In the same way, if I were debating whether or not to commit a murder, I might want to ask myself not only whether, in committing the crime, I’d be under-valuing my own existence (presumably, by lowering it to an animal-like level), but also whether I’d be under-valuing my potential victim’s existence. The case is even more extreme when it comes to animals, since associating “humanity” with moral worth seems like a kind of implicit devaluation of animal life.

Petunia: And yet you’re awfully quick to use words like “value” and “worth,” which certainly seem like categories of human morality. Are these supposed to be designations that transcend species, the way some might want to say they transcend race, class, and culture within humanity?

Millicent: Well…

Petunia: It sounds like you’re implying the existence of a (divine?) third party that confers value on humans and goats alike.

Millicent: Let’s not put the cart before the horse!

Petunia: Or the goat. But at the very least, it seems as if you’d need to assume that the value animals possess is a simple fact about the way the world is, and that animals don’t merely have value to us. But doesn’t that call for a kind of Nagelian view from nowhere, some vantage point from which we can get a clear picture of the world—and of our place in it—uncolored by our own ways of seeing or patterns of thought?


Millicent: I think we’re still getting a little ahead of ourselves. Let’s grant that there exist such things as values in the world, but let’s stay agnostic as to whether those things apply to all living denizens of the world, or just some subset thereof. You object, rightly, that I don’t have the right to set myself up as an acknowledger of value in non-human animals without first proving that value is a property those animals possess simpliciter. I might reply that, at least in my interactions with animals, I’m a giver of value. I just have no way of knowing whether other animals are also value-givers—or if so, what kind of value they give, whether a goat gives a different kind of value than a sheep, whether some goats give different kinds of value than others, what kinds of consciousness an animal has to have before it can confer value on other animals, and so forth. And it’s precisely this uncertainty that means we can’t be passive consumers of goat videos—or, for that matter, cat videos, hamster videos, badger videos, public zoos, safari tours, aquariums, Sea World exhibits, Zookeeper, David Attenborough documentaries, petting zoos, circuses, flea circuses, Meerkat Manor, meat, horse races, dog races, cockfights, Jean Painlevé films, displays of hunting trophies, the puppies in Macy’s windows at Christmas, lobster tanks, or even Dr. Dolittle. We ought to be aware of how we’re being trained to look at animals; what’s more, we ought to be equipped to distinguish between animals as they exist as independent, conscious entities and animals as they appear to us—or as they’re represented to us. And that’s becoming harder and harder to do.

Petunia: All well and good, Millie. But what do you mean by “conscious”?

Millicent: That, Petunia, is a question for another sleepover.

Millicent lies back down and falls asleep. Tallow cascades in thick folds down the wick of Petunia’s candle; she returns to her novel as the flame laggardly extinguishes itself. In a few moments, the light quivers and glints, entrusting its final breath to the rising sun.

CURTAIN



the GENEALOGY of EXPLOSIONS

the hollywood director as tragic artist

jackson arn


The question, “How would the great thinkers of the past regard modernity?” always makes for an entertaining thought-experiment, an academic’s version of the “if you could have dinner with one historical figure” game. After a while, the examples begin to sound less like the subjects for articles than the setups for jokes. There’s a special glee in thinking of Leonardo onboard United Airlines, or Baron Haussmann strolling through Williamsburg. And who could resist the image of Savonarola contemplating, with scholarly rigor, the latest issues of Maxim and Penthouse?

The life of Friedrich Nietzsche, arguably the greatest philosopher of the 19th century, seems made for this kind of half-serious speculation. Perhaps the fault is Nietzsche’s – he seems to have had the misfortune of being born fifty years too early, thereby missing out on both World Wars, the October Revolution, and the new demagoguery of mass communication. Read alongside other would-be prophets (de Tocqueville, Marx, Orwell), Nietzsche’s eye for the future seems astoundingly keen. His relevance to 20th century history is so great that he is often blamed, somewhat nonsensically, for its greatest tragedies; by trading Christianity for atheism, it’s said, he welcomed and encouraged the godless crimes of Hitler, Stalin, and Pol Pot. It’s as if he saw the future – saw it, then traveled back to his own time and did nothing to change it.

In the journals that eventually would be compiled under the title The Will to Power, Nietzsche makes a series of observations about the state of art and entertainment that seem ahead of their time, even by his standards:

The art of the terrifying, in so far as it excites the nerves, can be esteemed by the weak and exhausted as a stimulus: that, for example, is the reason Wagnerian art is esteemed today. It is a sign for one’s feeling of power and well-being how far one can acknowledge the terrifying and questionable character of things; and whether one needs some sort of “solution” at the end.

Despite the mentions of a defunct art form, despite the strange peppering of italics and semicolons, Nietzsche’s vision of art seems resoundingly modern – and not only because Wagnerian opera has been called the closest precursor to film, the primary art form of our time. The strong emphasis on terror, excitation, suspense, and moral ambiguity seems highly relevant to entertainment as we know it, almost like a laundry list of the terms critics use to praise a movie. Perhaps most important of all, though, Nietzsche is fascinated by a work of art’s potential for mass appeal. As he watched music and literature morph from elitist pleasures to popular spectacles, he must have wondered how art itself would change – once again, his concern anticipates the present state of entertainment, particularly film, with its self-imposed divisions into highbrow and lowbrow, indie and blockbuster. Ever the prophet, Nietzsche was writing about the cinema decades before it was invented.

Here, we have what may be the ultimate “what if they were alive today?” game, pairing modernity’s favorite scapegoat with one of its key anxieties: the new power of entertainment created by the invention of moving pictures.

Too ambitious to limit himself to attacking one age at a time, Nietzsche begins his discussion of art in The Will to Power by targeting no less a figure than Aristotle. In his Poetics, Aristotle carefully lays out an influential theory of catharsis, in which the tragic play becomes a tool for spectators to vent their frustration and despair, so that they can live calmly in the real world. In too much of a hurry to waste his time addressing this point by point, Nietzsche spends about three sentences empirically refuting the father of Western science, logic, literary criticism, and political science. No theatergoer, he insists, really feels pacified when the curtain falls; the intense drama he has just witnessed is only a “tonic” that introduces him to sensations he didn’t know he had. Whether this is a fair refutation of classical Greek drama is beside the point – thinking about the art forms of his own time, Nietzsche realized that Aristotelian catharsis was obsolete.

More than a century later, its obsolescence seems horrifically obvious in the face of Clockwork Orange-inspired home invaders in the 1970s, Fight Club-inspired brawlers in the early 2000s, and the maniac who painted his face like the Joker’s before he shot up a movie theater. But this brings up an important question: how would Nietzsche, or anyone else, account for the overall lack of engagement, the extraordinary laziness of our generation? As movies keep getting more and more exciting, why don’t we take to the streets? More urgently, why are the criminally insane the only people who do?

For Nietzsche, the concept of artistic catharsis takes on its most dangerous form when it mingles with Christianity, particularly the concept of providence, the inevitable “happy ending” that confirms God’s omnipotence and omniscience. Christian doctrine and practice are filled with variations on the idea that human suffering leads to heavenly rewards; think of the trials of Job and Moses, or the priestly vow of abstinence from bodily pleasures. In Michelangelo’s The Last Judgment, the mortals who suffered the most while they were alive stand closest to Jesus Christ in the afterlife, symbolizing the great reward they are about to receive. While Nietzsche despised this notion on almost every level, he found the Christian promise of an eternal heaven so seductive and so pervasive that he spent the better part of his life exploring the ways it trickled into even the most original-sounding theories and the most secular-seeming art.

The existence of a heaven, he believed, while appealing to those who already hated life, destroyed life’s value for the strong, the beautiful, and the creative. Artists, whether Christian or not, had been rendered almost incapable of producing art and literature that could truly accept the presence of evil in the world: thanks to the pervasiveness of the doctrine of providence, they could only conceive of evil, pain, and hate as stepping stones to some greater good. Worse, their understanding of beauty had become almost childish. Since evil was only a momentary distraction from the eternal, inevitable good, no beautiful thing could possibly be evil, too; thus, the tastes of the Christian artist (and the audiences she entertained) could never extend beyond the “pretty and the dainty.”

When one considers the popular literature of the nineteenth century, it’s hard not to think that Nietzsche had a point.

The two bestselling novels of his lifetime were Harriet Beecher Stowe’s Uncle Tom’s Cabin and Maria Susanna Cummins’ The Lamplighter. Both feature sweet, delicate female protagonists who experience a few hundred pages of fear and misery before finding happy endings that last them the rest of their lives. Long before Clement Greenberg, Nietzsche, reacting to the flood of treacly thrillers and spurred on by his intense dislike for providence, had outlined a sophisticated theory of kitsch, perfect for talking about Hollywood.

When one considers how much society has changed – socially, politically, and otherwise – in the last century and a half, it is surprising how well Nietzsche’s theory of kitsch continues to describe the Hollywood blockbusters that make hundreds of millions of dollars around the world. First and foremost, it’s surprising because the directors, actors, producers, and screenwriters who make them (with a few exceptions) would be deeply offended at the suggestion that they were perpetuating Christian doctrine of any kind. Yet Nietzsche’s theory never took heed of intentionality to begin with. People write, act, and direct, unaware of the unwritten rules they are following, even – and especially – when they think they are being revolutionary. For all the complaints of nihilism, Satanism, and anarchism (the “isms” have changed somewhat since Nietzsche’s time, when “atheism” was the preferred term of abuse), the mainstream American cinema remains an unknowing, obedient prisoner to the constraints of endings, and happy endings in particular.

Christopher Nolan’s three Batman films are surely some of the best examples of this paradox, not because they are high art, or even particularly good, but simply because they were wildly successful, grossing approximately 2.5 billion dollars between 2005 and 2012, and attracting audiences in numbers beyond Nietzsche’s wildest imagination. Critics, too, were enthusiastic in their praise: reviews cited Nolan’s remarkable bravery in reimagining a campy, clichéd superhero franchise with “darker” characters, “bloodier” violence, and (the most frequent, and most tiresome, catchphrase) “morally ambiguous” events. Even the negative reviews, like David Denby’s in the New Yorker, or Armond White’s in the New York Press, accepted without question the films’ unprecedented, nihilistic darkness, though they interpreted it as a fault, not a strength. But the trilogy ends with the protagonist, Bruce Wayne, having saved his city from destruction, escaped the trauma of his past, and found love in Anne Hathaway’s character. Is this the ending of a nihilistic work?


Only a year after the release of the final chapter, it already seems clear that Christopher Nolan’s Batman films, far from being revolutionary, conform to the usual rules of storytelling that Nietzsche disliked so intensely. In 2008’s The Dark Knight, the second film of the trilogy, the principal villain is the Joker, memorably played by the late Heath Ledger as a gleeful anarchist who only wants to “watch the world burn.” The Joker’s plan, if he can be said to have one, is to transform the city’s hero, Harvey Dent, from a just man into a frightening ethical relativist, deciding life and death with the flip of a coin. On the strength of this plotline, and a lot of graphic violence thrown in for good measure, The Dark Knight was quickly, perhaps a little hysterically, judged to be a metaphor for the moral uncertainty of the Bush era, and a bold challenge to the usual black-and-white depictions of heroes and villains in Hollywood films.

Five years later, the Joker’s antics already seem strangely tame. In no small part, this is the fault of the third installment of the trilogy, The Dark Knight Rises, which ends not only with Bruce Wayne’s survival, after he risks his own life to save his city from a nuclear explosion (martyrdom, another key Christian trope!), but with his victory over the trauma that motivated him to become Batman in the first place and, with it, over all the moral ambiguity that went into the earlier films. When I watched The Dark Knight Rises in theaters and saw Bruce return from the dead, cheered on by the audience, I remember feeling cheated. At the end of his trilogy, Nolan had shied away from seriously confronting evil, preferring to treat it as an irritating stepping stone along the path to a happy ending.

Perhaps at least part of the reason that people don’t take to the streets is that the films on which they spend millions are too timid to be anything other than vaccines against their passion and energy. It barely matters that the punches are louder and the death count higher in recent blockbusters than in Uncle Tom’s Cabin – in their basic structure, they are embarrassingly alike. And so the doctrine of providence, along with the passive lifestyle it encourages, lives on.

Isn’t the tendency of contemporary film criticism to emphasize the movies’ unprecedented grittiness and violence an indication of this age’s narcissism, not its nihilism? It seems hilariously petty, for instance, to praise the recent Daniel Craig-starring James Bond films for being more “serious” than the “cartoonish” adventures headed by Roger Moore in the seventies, as Manohla Dargis of the New York Times did several years ago. The formula for action movies hasn’t changed in fifty years; only the number of gallons of fake blood spilled has gone up. Just as before, the hero must live through the pain and violence he sees, affirming the supremacy of law, order, and justice. If anything, this formula is even more entrenched in film than it was in sentimentalist novels 150 years ago, since the commercialization of the film industry practically requires the protagonist to survive evil so that she can populate a sequel or spin-off. The rules of kitsch, as Nietzsche described them so clearly in The Will to Power, aren’t going anywhere.

If there’s any difference at all between low culture in his time and low culture now, it’s this: kitsch nowadays doesn’t know that it’s only kitsch. To deny this, to interpret modernity’s forms of entertainment as somehow distinct from those of past generations, seems like the ultimate delusion of grandeur – “today we’re facing unbelievable violence and hatred, unlike anything humanity’s seen before!” It’s time to recognize how little art’s substance has changed, and start looking for something better.

If there is a way to escape Christian providence as it manifests itself in the cinema, it would have to embrace evil with an intensity that makes the false grittiness of Casino Royale or The Dark Knight obvious. But perhaps the kind of cinema Nietzsche would like already exists, in the work of directors like David Lynch and Abbas Kiarostami, who eschew plot and tackle evil without ever providing a “solution” to it. Think of the infamous rape scene in Blue Velvet, simultaneously ugly in its refusal to censor the action and exquisitely beautiful, with its evocative colors and rich score. Or consider The Wind Will Carry Us, a film that records scenes of cruelty and kindness with the same anthropological fascination. Kiarostami’s masterpiece has no particular ending at all; one feels that the film could keep going forever if the credits hadn’t started rolling. Perhaps these directors, and a handful of others, answer Nietzsche’s call for a full-fledged tragic artist, who can move past the limitations of the happy ending and combine the good and the evil in the world into a single artistic vision.

If Nietzsche were brought back from the dead and taken to a movie theater, he’d thunder predictably about the banality of entertainment, the pernicious influence of the morality of the weak. Then again, he might admit it was worth it if he could get his hands on a copy of Mulholland Drive.

