Gadfly Spring 2014



EDITOR-IN-CHIEF
Dan Jacob Wallace

MANAGING EDITOR
Alejandra Oliva

EDITORS
Myriam Amri, Caleb Fischer, Jacob Goodwin, Paul Helck, Ethan Herenstein, Daniel Listwa, Sera Schwarz

ART AND LAYOUT
Alejandra Oliva


IV. “FELT ABSENCE”: THE THREAT OF THE OTHER

Sera Schwarz

IX. FACING EVIL

Mikaila Read

XIV. AN EXISTENTIAL HISTORY OF THE MIRROR SELFIE

Charles Dalrymple-Fraser

XIX. MUSICA HUMANA

Alejandra Oliva

XXIV. INTERVIEW WITH ZACH WEINERSMITH

Daniel Listwa

XXX. ETHICAL INHERITANCE: SUPPLEMENTING THE BOOK

Ben Rashkovich

XXXVIII. YOU ARE (PROBABLY NOT) A COMPUTER SIMULATION

Dan Jacob Wallace



“FELT ABSENCE”[1]: THE THREAT OF THE OTHER

Sera Schwarz

Philosophers have long been in the business of privileging the “One” over the “Many”—of privileging, say, the general over the particular, or the part over the whole, or (what comes to much the same) what is over what becomes other. It is little wonder, then, that the “I think, I am” has long served as their “first and most certain” principle[2]: to insist on the primacy of the self just is to insist on the primacy of the singular. And it is as little wonder that Shakespeare—whose writing, on his own admission, “gives to airy nothing a local habitation and a name”[3]—stages as profound a response to them as he offers us in Othello. So this seems to me a play that presses the possibility that “we are endlessly separate, for no reason,” and that “the will to all and the will to nothingness… have the same core: a failure to acknowledge individuality”[4]; which is to say that it is a play that is nothing if not attentive to the insufficiencies of monological (i.e., maximally singular) thought.

[1] I borrow “felt absence” from Bianca’s complaint to Cassio, her absent lover, in Othello, IV.iii.178ff.

[2] Cf. Descartes’ Principles of Philosophy, I.7, where “this piece of knowledge—I am thinking, therefore I exist—is the first and most certain of all to occur to anyone who philosophizes in an orderly way.”

[3] I quote from Shakespeare’s Theseus in A Midsummer Night’s Dream, V.ii.16–18.

[4] Quoted from Cavell’s The Claim of Reason (Oxford University Press: 1999), pp. 369–371.



It seems to me, in fact, that what Othello himself fails to recognize in time—but what Iago, for his part, knows only too well all the while—is precisely that the I can emerge only so long as there is something that is not-I; that something can be mine only so long as some other thing is not; and so that my very identity as an individual is enmeshed in, and emergent from, architectures that are themselves “external” to my mind[5].

Consider, then, the character of Othello. This is a man for whom independence and internal coherence are of the very highest value—a man, in fact, that no less than the Venetian senate will recall as one “all-in-all sufficient…whose solid virtue/ the shot of accident nor dart of chance/ could neither graze nor pierce.” So we find that he stands firm in the belief that his appearance wears his essence “on its sleeves” (as when he insists that “my title and my perfect soul [will] manifest me rightly”); that he is unwilling to admit of even the slightest uncertainty (so that “to be once in doubt” is, on his view, “once to be resolved”); that he is overwhelmed by all that is ambiguous and indeterminate (“What needs this iterance?”; “These stops of thine fright me the more”); and that he is inclined to admire all that is simple, stable, and self-identical (as when he imagines his ideal world as that “of one entire and perfect chrysolite”[6]). We find that he is, in fact, far more invested in asserting his self-conception than is any other character in the play. So his very first speech is a disquisition on his “honor,” “proud fortune,” and naturally “unhoused, free condition”; and most of those that follow it are saturated with personal identifiers (as, e.g., “I,” “me,” “my”). So, too, we frequently find him reading himself into everything and everyone he is not—as when, in an early scene of the play, he greets Desdemona as “my fair warrior” (which leads the ever-attentive Iago to observe that “our general’s wife is now the general”); or when he exclaims, after Iago begins airing suspicions of Desdemona’s infidelity, that “her [Desdemona’s] name...is now begrimed and black/as mine own face.”[7]

Othello, then, insists on the singular; but Iago insists as vociferously on difference. He is a modern Janus, is “nothing if not critical,” regards himself as what is not anyone else—as what must so pursue only a “peculiar end.” “I am,” he informs us, “not what I am.” And he is, in fact, a character that does seem less an “is” than a “becoming other” (and less a character than a characteristic expression of the spirit of the negative).

[5] As, e.g., my physical body, or social and historical context, or spatio-temporal “situatedness,” etc.

[6] Quoted from Othello, I.ii.37; III.iii.191; V.ii.183; III.iii.39; and V.ii.176, respectively.

[7] Ibid., II.i.182; III.iii.392.



So he delights in improvising his schemes, which are always “yet confused”; finds “sport” in developing his tales and motives in the very act of telling and enacting them; imagines man as what is always divided into “reason” and “sensuality,” and as what, because of this, is always as changeable as his will; and encourages Othello to imagine nature as what tends toward “erring from itself,” i.e., as what tends toward destabilizing any and all natural orders. In fact, this is, I suspect, just why Shakespeare casts him as Othello’s “ensign” (from the Latin en + signum—which translates as “sign, token, a characteristic mark of a particular thing”). Iago lives and breathes division: so it is only fitting that his occupation obliges him to “show out a flag and sign of love/ which is indeed but sign”—just as it is only fitting that, in his everyday affairs, his “words and…performances are no kin together.”

But this is, of course, precisely why Othello is so easily deceived by him. For Othello believes in the identity of thought and thing. He believes, that is, that speech (and appearances more generally) always and only correspond to what is spoken of (or what exists); and so that words, not their speakers, have meanings—which he need only receive to adequately perceive. He cannot, then, so much as conceive of duplicity: what is not-whole, what is only an aspect of a thing, is well beyond his comprehension. And this means that he can only stumble blindly through Iago’s decisively dialogical speech—in the course of which what is absent means as much as what is present, and what is mentioned obscures what is meant, and what is left unspoken resounds more loudly than what is said[8]. So it is not long before his “tranquil mind” seethes with contradictions, begins to suspect itself of being but one against many (Othello: “I think my wife be honest, and think she is not...I think thou art just, and I think thou art not.” Iago: “Sir, you are eaten up with passion. I do repent me that I put it in you.”).

And yet what Othello slowly begins to recognize is, I think, that his sense of self filters in from the outside—that what he is is all but inseparable from his relationships with what is other. So he discovers that to distrust Desdemona’s love is to distrust the coherence of his own narrative (i.e., is to drain “the fountain from the which my current runs”, so that “chaos comes again”)—that “to say that he loses Desdemona’s power to confirm his image of himself…is to say that he loses his grasp of his own nature…[that] he no longer has the same voice in his history.”[9] Still, it strikes me that Othello only succeeds in appreciating the implications of all this in the final scene of the play—this, the scene, of course, of his suicide. I might, by way of defending this, recall his very last speech, in which he casts himself as both the “malignant…turban’d Turk” and the “Venetian” that he has undone; and I might suggest that it is this doubling of his self, this splintering of his whole into opposed parts, that drives him to take the life that will no longer—can no longer—be his own.

[8] And Iago often seems to encourage this by exploiting demonstratives and indexicals.

[9] Quoted from Cavell’s “The Stake of the Other” in Disowning Knowledge (Cambridge: 2003), p. 130.



I might suggest that it is here that Othello recognizes (too late!) that to be something is not enough, that to be is always to be recognized by others; and that this is just why he asks all those assembled to remember him as he would remember himself when they “these unlucky deeds relate.”[10] But what I would like to press, in any event, is that it is in the moments of his death that Othello is first shaken by the ontological force of “stories”—so that it is here that he first comes face-to-face with what is developed, not discovered; what is performed, not (only) perceived; and so with what is, in all this, as much the work of an author as it is of an audience.[11]

[10] “Relate” means, of course, to recount—but also to relate something to something else.

[11] This will go a long way towards explaining why Iago is styled as a playwright; why Othello is cast as a storyteller; why Desdemona is wooed by his being so, and wishes she could be “such a man”; and why the play closes with Lodovico’s promising that he will “this heavy act with heavy heart relate.”



FACING EVIL

Mikaila Read

Past philosophers, theologians, psychologists and the like have understood evil as a privation of good, a hyperbolic expression of the given actions or events we find wrong, and something equivalent to an active, outside, supernatural force. I argue that there is an inextricable link between violence and that which we readily refer to as “evil,” and that there is a problem with the application of such a term to human beings and human character. Put simply, violence has become a staple of evil, but the commission of violence does not warrant ascriptions of “evil” to persons as much as it warrants ascriptions of “evil” to the violent acts themselves. It is important to note here that lifting the charges of evil against humanity is not equivalent to morally excusing or forgiving the moral atrocities that those we identify as “evil” may have committed or continue to commit. It neither dismantles our rights to punish them nor limits our ability to discriminate against certain actions and behaviors in principle or for the sake of societal order and safety. Instead, conceptualizing evil in action abides in the understanding that evil is captured in experience. It is not a quality, but an occurrence, and what we most readily take as evidence of evil is always found in action, not human genetics, not science and not religion. When we reflect on some of the greatest “evil-doers” of all history, names like Adolf Hitler, Joseph Stalin, and Chairman Mao rush to mind. The response is almost automatic, and with the horrific nature of the offenses their names carry, most would not hesitate to call them evil. In fact,



most people believe the presence of evil within these men to be so blatantly obvious that no defense is needed for the ascription, but this tendency in scrutiny reveals something unique about how we’ve come to view and define evil. Now, I am in no way dismissing or belittling the tragedy that resulted as a consequence of these men’s actions; they speak of the greatest moral atrocity, and I can conceive of no way these men could make up for all the suffering they’ve caused. What I am pointing to, instead, is how the events and individuals we consider the most evil are those that incorporate violence and disturb us to tremendous degrees. And so, for something to be considered evil, it must carry with it some power to shock or disturb. Violence best satisfies this need due to its physicality and the unquestioned malevolent connotations accompanying it, which has caused a melding of violence and evil to take place.

This melding is one that Dr. Michael H. Stone observes in his book, The Anatomy of Evil. He notes, “actions that are unpleasant or even outright criminal—but that don’t wound anyone physically or emotionally—are seldom spoken of as evil...To rank as evil, something else is required: the element of shock and horror that touches the public in a special way.” Stone later presents readers with numerous case studies (some almost too grotesque to read in full) in contrast to terrible crimes of embezzlement, fraud, and the like—crimes that likewise incorporate the defilement of the innocent and that are base at their very core, reflecting the same narcissism and lack of empathy for persons beyond one’s self that come to light in violent crimes. Stone asks why it is we do not so readily regard these men as evil. The answer is the one already given: we are less sensitized to crimes and actions that do not incorporate violence, and this minimizes our compulsion to cry “evil.”

Consider also the difference in how we view crimes of passion versus those of imaginative serial killers or other psychologically-disturbed repeat offenders. The man who stumbles unexpectedly upon his wife having an affair with a close friend, and in his rage kills both his wife and her lover, is not seen as evil. Though we may not excuse him for the crime of passion, we understand him more easily than we do the serial killer. It’s the Patrick Batemans of the world, the offenders who construct elaborate plans to harm their victims or seek out unique, creative ways of inflicting harm, that truly disturb our minds. These are the sorts of men (and women) we denominate “evil.” Evil, therefore, is perturbing. Evil is horrific, but most of all, evil is violent, and this is how the collective conscience has best come to recognize it. To demonstrate this point further, I challenge the reader to provide any example of evil independent of the notion of violence, bearing in mind that violence is not exclusively physical in nature.

The tendency to rank good and bad deeds by way of their power to shock us is surpassed only by the greater tendency to allow the extension of evil as an ascription to deeds to the performers of said deeds.



The heavier the moral weight of an action, meaning the more shocking the consequences of an action, the more hastily we apply these titles and classifications to the performer of said action. However, this is, as already noted by the term “hastily,” rash and insensible. When we employ the word “evil” with unshakable confidence as an incriminating ascription, “we have to keep in mind that evil is only part of human character in a derivative sense. The primary sense is undeserved harm suffered by people.” When we isolate the variables of evil, reducing down to the most fundamental level, the remainder is always an act. The root of evil, therefore, lies in action, not man himself. What we take most commonly as evidence of evil—perhaps, a “bad will” or the intention to harm others for its own sake—is always manifest in action. It is the commission of violence itself that directs us to call something, and then someone, “evil.”

When we sacrifice the presumed right to point to persons as evil incarnate, we do not likewise lose the right to incarcerate or punish them. Nor do we lose our rights to withhold forgiveness of their deeds. We only deny ourselves the implementation of a framework that would fundamentally limit our understanding of evil through the sheer indulgence of emotional cri de coeur. Recognizing the will as legitimately compromised in certain instances of evil does not excuse the act itself, nor demand forgiveness on the victim’s part for the offender. Instead, recognizing the will as legitimately compromised serves our understanding of what influences run common through the commission of evil.

In our engagements with evil, or the moments in which we presume we bear witness to evil, there exists a sort of sacred respect for the emotional responses that might otherwise be condemned for undermining proper reasoning. Even now the discussion of excusing the morally compromised will may spark much outrage. This response is emotional and grounded in the false presumption that understanding evil, and admitting that a moral will may truly be subverted, somehow demands that acts of evil be forgiven. This is an error illuminated in the paper “Excusing The Inexcusable? Moral Responsibility And Ideologically Motivated Wrongdoing,” written by Dr. Geoffrey Scarre of Durham University. “Excusing an act,” he explains, “is not the same as forgiving it,” and:

It is not true that to understand all is to forgive all. Learning more about a person’s motivations may do nothing to assuage our wrath or present his behavior in a more forgivable light; it may only confirm the depths of his depravity...Unless we try to uncover an agent’s motivations, we risk our moral judgments erring through ignorance.

Forgivable status is not determined by moral excusability, and neither is the moral excusability of an act determined by whether or not it is forgivable or forgiven. Dr. Sam Harris echoes the logic of this stance in a similar argument through his recent book, Free Will, which aims to challenge the notion of free will and explores the repercussions of the claim that free will is illusory.



Worry arises here that “an honest discussion of underlying causes of human behavior appears to leave no room for moral responsibility.” However, Harris notes how eliminating our conception of complete free will does not eliminate our right to take measures against those who pose a threat to society. He writes, “if we could incarcerate earthquakes and hurricanes for their crimes, we would build prisons for them as well...Clearly, we can respond intelligently to the threat posed by dangerous people without lying to ourselves about the ultimate origins of human behavior.” Understanding the causal influences of behavior through the lens of neurological processes does not erode our right to incarcerate. What it does erode is our conception of evil as an expression of free will, and the assumed sobriety of the assessment that violent criminals are evil, which warrants their condemnation.

Again, removing titles of evil need not be mistaken for forgiveness. Recognizing instances of evil as complex phenomena in the human experience does not mean we must wholly excuse agents of the concept of evil, but such a recognition should lead us to reprieve them. Given the undeniable connection between violence and evil, as well as the acknowledgement of the power that emotion and situation hold in influencing our definition and commission of evil, we must be cautious in our response to wrongdoing. We can perhaps justify the incarceration of first-time and repeat offenders in the name of order and safety, but we cannot turn a blind eye to the elements that led them to commit their crimes. We must seek greater understanding of the underlying causes of evil and take measures against their inspiring future evil. The greatest means of limiting moral atrocities, and one that reeks less of moral contradiction in practice, is always the preventative, not the exterminative. The devastation brought by violent crimes survives in our emotional upset, not in those who commit them. And a shared power to commit evil, accompanied by the recognition that there exist legitimate impairments to the moral will, does not revoke our right to punish. In all our understanding of evil, we must not let our eyes become fixed so steadfastly on the right to punish. We must not lose sight of an equal right to pardon.



AN EXISTENTIAL HISTORY OF THE MIRROR SELFIE

Charles Dalrymple-Fraser

In a technical sense, the mirror-selfie has been around as long as the camera. But it is only through a recent surge in popularity that mirror-selfies have attracted critical public notice. Today, the mirror-selfie is often denigrated as an egotistical hobby for teenagers, and the term readily calls to mind an image of pubescents posing in a public restroom with their smartphones balanced precariously in hand. Yet, for all the disdain which has accompanied the recent outpouring of mirror-selfies, there has been little discussion as to why they have become so popular, and why they have become the subject of such ridicule.

In high school, I had a pink streak dyed into my hair, in support of breast cancer awareness. I recall sitting in the barber’s chair, in front of a mirror, and being asked what part of my hair I would like to dye. I recall looking into the mirror and projecting various streaks of pink onto the head of hair I saw there, and then deciding that the bangs on the right side made for the best-looking dye job. On the way out, I posed for a photograph for the same program. I saw the photo on Facebook a few days later and nearly choked: Why would anyone have let me do such a horrendous thing? It looked terrible where it was. I looked terrible. The next day, as I was walking down the street and contemplating a masking dye, I caught my reflection in a passing car window, and then in a storefront: I looked fine. Reassured, I wore that streak for a month; yet the remaining experience always felt slightly off.



As the following years passed by, I would come to reflect on that incident and despair: “If only I had known about the mirror-selfie.”

I. THE WORLD AS A MIRROR

The world is a mirror. From gleaming puddles of water to polished rocks, from storefront windows to the mirror itself, we are constantly bombarded by our reflections. It is not natural for individuals to experience themselves in the world as others would perceive them: the only holistic representations we tend to have of ourselves are reflections. Accordingly, as we develop from infancy, we develop a self-image which accords with our reflections: if I try to visualize myself—the way I look—the projection maps onto the image I would see in a mirror. It is only natural that I develop a representation of myself as that image which has appeared to me throughout my lifetime. Social psychologists have long recognized this phenomenon.

II. SOCIETY AS A MIRROR

In a different way, society too is a mirror. It is a famous tenet of existentialism that we are defined as individuals only through our actions, and that we come to know ourselves only through the mediation of others. It is only through writing that I can be an author. It is only through others considering me kind that I can know myself to be kind. And so, society is a mirror in that it reflects ourselves back at us, and we cannot perceive ourselves without its mediation. Granted, society does not always reflect back to us the image of ourselves we expect: it is startling to be told that you are cruel when you think you are kind, to be told you are disrespectful when you think yourself couth. We conceive of that which we would like to be and attempt to act in a way which will bring that about, but we are not always successful. Those who have experienced such a dissonance in their own lives can easily reflect on the despair which results: there seems to be an important way in which you have failed to accomplish what you set out to do, failed to become what you idealized being, and failed to be what you think you are. A similar sort of dissonance seems to motivate the mirror-selfie.

III. PHOTOGRAPHY AS TRANSLATION

The act of taking a person’s photograph can be seen as an act of translation in two parallel senses. The first is an act of translation in the sense of translating one’s actions into meaning or definition, as briefly introduced in the above section: a photograph is a mediating medium, through which I can see myself as others see me. The second is an act of translation in the quasi-geometric sense of a non-inverted representation: again, a photograph of me is a representation as others see me, rather than the inverted reflection in the mirror.



Here, a clear parallel can be drawn to the general dissonance described above. Individuals attempt to appear in ways which reflect their ideal self-concept, as they strive to become what they wish to be. In part, this notion extends to the physical presentation of oneself: I try to present myself in the way I want to be perceived, the way I want to be. So, when I look into a changing-room mirror, or make a streak in my hair, I project onto that reflection—or the representation I maintain—the style I wish to wear and to present. The problem is obvious: the concept I have of my physical self is an inversion of the individual that others see. Where I perceive my hair as combed to the right, it is really combed to the left; where I am used to my nose slanting to one side, it really slants to the other. Accordingly, when I am presented with a photograph of myself, there is frequently a bout of dissonance: this is not the individual I consider myself to be.

This sort of photograph-induced dissonance largely explains why many people tend not to like their photographs being taken: the person in the photograph does not look like the person they are used to seeing in the mirror—the person they carry about in their self-concept. The picture is rejected as unattractive. Yet, in this growing technological age, as social networks boom and interpersonal relations become global, photographs are being increasingly used to represent and identify the individual online and in the world. More and more, individuals are represented through photographs, their actions captured for others to see, to define. And, more and more, individuals are made to contend with their photographic existences, and their unreversed existence in the world. It is out of this need to contend with the reign of photographs that mirror-selfies find their place.

IV. MIRROR-SELFIES AS PARADOX

Perhaps fortunately, as photographs established their online importance, cameras became more affordable and globally available. This availability granted individuals the ability to better control the ways in which they are represented through self-portraiture. In particular, the advent of the mirror-selfie has given camera owners the immediate ability to present themselves in the way they want to be seen. The result is an intimate statement of vulnerability. In taking a mirror-selfie, photographers do two main things. The first is to buffer against the dissonance which arises when they are confronted with a non-reversed photograph of themselves: both by taking control of the photographic process and by implementing the mirror, individuals are better able to present a photograph which is consistent with their internal self-representation. However, as a second consequence of this, they become more vulnerable.



In presenting themselves to the world as they believe they should be seen, or how they want to be seen, individuals open themselves to the potential of refutation and dissonance: in buffering against dissonance, the mirror-selfie presents the individual’s self-concept for the world to validate or criticize. In this way, the employment of the mirror-selfie as an existential safeguard from dissonance paradoxically increases the risk of dissonance. For, while individuals may better control photographic output and encounter fewer true photographs of themselves, the very act of taking a selfie involves a tacit recognition that the photograph is not an authentic representation. Accordingly, those seeking a buffer from dissonance seek also to ignore the fact that they are taking a mirror-selfie. Otherwise, they must be confronted by the inauthenticity of the very act, and hence be open to dissonance again. So the photographer plays a game of make-believe, pretending that the mirror-selfie is reality.

Here, we might find cause to reject the mirror-selfie: people taking mirror-selfies are not vain or engaging in vacuous activity; they are deluding themselves. They are sacrificing authenticity for comfort, reality for illusion. Taking a mirror-selfie is a comfortable psychological tool, and it helps us to control our appearances in the world. But, we must be careful not to become trapped in the fantasy it creates. The reality is that we exist in the world independent of our mere aspiration, and our interactions with others do not occur through a mirror. To live in ignorance of this fact is to live in ‘bad faith,’ as Sartre called it. Bad faith is the habit of deceiving ourselves into thinking that we do not have the freedom to make choices, for fear of the potential consequences of making a choice; it is treating oneself as an object more than as a conscious being in society—as the content of a photograph more than as a living being embedded in society. Had I the technology at the time, a mirror-selfie of the misplaced streak would have adorned my Facebook. But, in doing so, I would have run the risk of collapsing myself into that false narrative. The healthier and more authentic solution would have been to accept the consequences of my actions and work toward authenticity, toward recognizing myself as a being in society and a body in the world, and to recognize that my perceptions of myself do not constitute the whole reality of myself. Where I disliked the photograph, it was not because I disliked myself, but because I did not understand myself. This speaks to the challenge of treating oneself both as photographer and as photographed, as an object that is in the world as much as in the picture.

V. CONCLUSION

Translating the mirror-selfie through an existentialist lens provides some insight into its popularity, as well as reasons to resist its upswing. In this light, the mirror-selfie is not merely a vain and vacuous hobby, but a powerful act of vulnerable self-expression and self-representation of the self as being in the world, as seeking concord between act and interpretation. Mirror-selfies, we find, are psychological tools in the fight against existential dissonance. Yet, at the same time, they may be taken out of bad faith, and we run the risk of losing ourselves to the fictions they create. This groundwork now established, a new ethics of mirror-selfies may follow.



MUSICA HUMANA

Alejandra Oliva

When the world began, the first thinkers looked up at the sky and saw perfect, immovable crystalline spheres holding the universe together. Each one of these planetary orbs represented a single pitch, like a finger against the rim of fine glass. To avoid the chaos of a sweetly screaming universe, this music was put out of human reach. Pythagoras, who gave his name to this carefully ordered universal system, said that the resulting chord, if ever encountered by human ears, would prove unbearably beautiful and perfect. This celestial song was known as musica universalis: the music of the universe. It found its counterpart in musica humana – the music of the human self. Boethius, writing at the beginning of the medieval period, proposed that this musica humana was a song that harmonized body and soul. Immoral acts flung that music into discord, but a return to a third type of music, one that is audible, known as musica instrumentalis, could restore harmony to the soul. In this way, we find three planes of existence carefully intertwined, a grand polyphonic symphony: human echoing universe, balanced and set right by the restorative properties of harmonious sound.

The intangibility and emotional power of music have made it a cultural gateway to an altered state – whether that be enlightenment or death. Legends speak of music that is unlistenable due to its great power, driving men mad or bringing them to love. More modern times give us the five-pitch melody that grants us access to the brightly glowing interior of an alien spaceship.



The mysticism accorded to music, particularly in antiquity, corresponds to these three threads described by Boethius – external, audible music shifting the internal human music, a seismic shift in the human finally leading to a small rift in the universe that one might simply walk through.

In medieval times, church officials invested substantial time and energy in controlling the types of music created and used for liturgical purposes. Even though the religious year came to be governed and marked by the different plainchants echoed throughout various cycles of worship, the church saw a danger present in this integral part of their ritual. If hymns soared too gracefully toward Gothic arches, Catholic officials believed that parishioners would end up worshipping the music itself – musica instrumentalis overtaking and shifting a musica humana that had been carefully cultivated to revere God alone. Monks, nuns, and peasants alike would be moved to false religious fervor for the all-enveloping, omnipotent omnipresence of music. Saint Augustine distrusted the “greater religious fervor” and the “more ardent flame of piety” he experienced when hearing liturgical music. However, the church, like St. Augustine, often wielded music so that “by indulging the ears, weaker spirits may be inspired with feelings of devotion.” Music was not only the way monks marked the passages of time daily, but also the way that they ushered each other, and their flock, into eternal life. It was both a distraction from God – one of a myriad of passageways into hell – and a shortcut to him.

The gods and heroes of the Greeks also sang. Orpheus, the consummate musician, is granted special access to hell thanks to his gift of song – he charms Persephone, and softens the heart of Hades. His song carves a unique passage for himself and his lover, Eurydice, and they become the only mortals granted a way out of the Underworld. However, many of the other singers of Greek mythology, such as the sirens, ease the way into the underworld rather than out of it. Encountered by Odysseus, they are terrifying seductresses, there to be avoided or outwitted.

There is no homecoming for the man who draws near them unawares and hears the Sirens’ voices: no welcome from his wife, no little children brightening at their father’s return. For with their high clear song the Sirens bewitch him, as they sit there in a meadow piled high with the moldering skeletons of men, whose withered skin still hangs upon their bones.

Their song is less a portal and more of an impediment – those men who hear are not transported, but instead rot away forever in a single place – their effects the very antithesis of Odysseus’ quest. The sirens’ island home, which Odysseus sails past, is located near the edge of the world and the entrance to the underworld, and as such, the sirens are the guardians of it. However, they are not Homer’s creation alone, but appear over and over again in the Greek mythological canon as foreshadowers of death. They are the handmaidens of Persephone, Queen of the Underworld, and appear often on funerary artifacts from the time period. The tomb of Sophocles was adorned with depictions of sirens.



Euripides has Helen call to them in his play of the same name, begging them to accompany her in her grief at the overthrow of Troy.

Ye Sirens, Earth’s virgin daughters, winged maids, come, oh! come to aid my mourning, bringing with you the Libyan flute or pipe, to waft to Persephone’s ear a tearful plaint, the echo of my sorrow, with grief for grief, and mournful chant for chant, with songs of death and doom to match my lamentation.

Here they are welcome companions, not tricksters or seductresses. They become a helpmeet to Helen’s grief, equals in mourning, echoing her musica humana with their own musica instrumentalis of death and doom. Their presence is healing: in helping Helen express her grief, their song allows her to reach the ear of Persephone and, in doing so, to help usher the souls of the Trojans safely into the Underworld – and to bring Helen’s own grief into balance through its musical expression. In fact, Plutarch, the historian, describes the song of the sirens as “far from being inhuman and murderous.” Instead, it “inspires in the souls emigrating from the earth to the underworld, errant after death, oblivion for that which is transient and a love for that which is divine.” The souls, like Eurydice in reverse, are “captivated by the harmony of their song, follow it and bind themselves to it,” the song serving as a portal not only to the underworld, but to a love for it. The sirens fundamentally alter the musica humana of the dead to release the need for a body from their sense of balance. Those ushered into Hades by the song of the sirens are the peaceful, happy dead.

While the idea of music as a channel to the Other Side may seem rooted in antiquity, it found a revival and a staunch advocate in Sir Oliver Lodge, a scientist of the Victorian Era. Lodge was heavily invested in the nascent field of electromagnetic communications, and was on the verge of having a quite promising career – he made the first long-distance radio transmission a year before Marconi, the more commonly recognized inventor of the radio. However, his work was largely discredited after World War I. In this war, he lost his youngest son, Raymond. After his son’s death, he began to communicate with his son through a housemaid who claimed to be a medium. The book he wrote on the subject, Raymond: Or, Life and Death, shows heartbreaking evidence of a man taken advantage of by his grief – the first third of the book consists solely of correspondence with Raymond before his death, so that readers might come to know him better. While it is easy to write off substantial portions of the book as tragic bunk, there remain elements of Lodge’s narrative that are less easy to brush off. After some time communicating with Raymond through a medium, Lodge and his family instead held “impromptu” sessions, where they would feel Raymond appear in their midst during casual family gatherings. During one of these sessions, the family realized that music, in particular, affected Raymond: “He seemed to wish to listen to the music, and kept time with it gently. And after a song was over that he liked, he very distinctly and decidedly applauded.”



In later conversations, Raymond communicated to his father that he lived in a parallel dimension through which radio and sound waves would travel. This parallel dimension is called “Summerland,” and much as Lodge’s prewar research showed, it was sound waves that travelled best between our world and Summerland, particularly in musical form. Returning to Boethius, the musica instrumentalis of Raymond’s sisters at the parlor piano called to some deeper musica humana that remained an integral part of Raymond, even after death. Summerland remains a tantalizing reality to many a dusty corner of the Internet – it is a heaven free of discrimination, all long teatimes and green lawns, and the scientific discrediting of Lodge has conspiratorial undertones about it. The musical bridge between it and the world of the living provided Lodge with much-needed comfort during his lifetime. His theory, carefully constructed around its one lifesaving virtue of allowing him to communicate with his dearly departed son, is based on what was then cutting-edge science, and a long-standing cultural precedent. Music holds a powerful place within humanity’s mythological self-concept: through music we are connected to the stars and to our ancestors; through music we can ferry ourselves across uncrossable rivers and back again.



INTERVIEW WITH ZACH WEINERSMITH

Daniel Listwa

One need not look any further than the items in one’s pocket to see that the progress of technology has facilitated a continued explosion of choice and customization. In the past, technological constraints implied that a particular household generally had only a single choice of telephone service provider. Now, the advent of the cell phone allows one to choose from a whole menu of providers. The phenomenon is widespread. From online dating sites enabling people to browse through millions of potential significant others, to 3D printing allowing the complete customization of almost anything, the trend toward greater personalization of experience continues to grow.

What if the tendency toward tech-driven personalization were to spread to the sphere of government, breaking its monopoly in such a way that each individual were allowed to freely choose what sort of government system to be a part of? This is the “what if” behind Zach Weinersmith’s new book, Polystate: A Thought Experiment in Distributed Government. In his book, cartoonist and writer Weinersmith, best known for his daily web-comic Saturday Morning Breakfast Cereal (SMBC), describes a world in which typical geographically-bound nations as we know them (or “geostates” as Weinersmith calls them) are replaced by “polystates,” which are simply collections of “anthrostates.” To quote Weinersmith, an anthrostate is “a set of laws and institutions that govern the behavior of individuals, but which do not govern a behavior within geographic behavior.” In other words, while a fascist living in a democratic geostate would have to abide by the democracy’s laws, in a polystate a fascist could choose to live in a fascist anthrostate.



While the laws of the fascist state will apply to her, they may not apply to her neighbors, who may be citizens of a social democracy or communist state. Citizens would be regularly given the opportunity to change anthrostates, allowing them to experiment with forms of governance and easily escape the reign of a government they do not agree with. This is in stark contrast to the modern geostate, where even if one can change government, it is with great difficulty. The implications explored in Polystate are enormous. Just take the growth of North Korea, for example. As Weinersmith writes, “It is hard to imagine that he [Kim Jong-un] would have this larger population if any of his citizens could have freely switched to any other government.”

I had the opportunity to sit down (virtually) with Zach and talk a bit about his new book, anarcho-capitalists, robot judges, and the ethics of technological growth. The following is an excerpted version of our conversation.

Daniel Listwa: Your comics have earned you a reputation for coming up with creatively plausible situations and exploring their implications. Polystate is not an exception in this regard, but it is quite different in form. What led you to delve into this new medium?

Zach Weinersmith: I’m actually working on a few science fiction projects, and this universe was one of them. I started to write the book, and at some point realized I was basically writing a series of dialogues about how the system in question might work. That seemed too boring for fiction, but just boring enough for a fun extended essay. Also, it was fun.

DL: Can you give us any clues on your future projects?

ZW: I’d rather not reveal too much, since it’s all very sketchy right now. But, I’m working on a few books, some for adults, some for kids.

DL: You suggest categorizing Polystate as a work of speculative “Poli-fi,” relating it to the speculative science fiction of writers like H.G. Wells and George Orwell. In what sense do you consider your book’s premise to be speculative, as opposed to simply counterfactual?

ZW: Here was my thought – there are a lot of trends that suggest what I called “discretization of experience.” That is, we more and more expect to be (at least superficially) allowed to make choices about everything. And, I think this’ll only continue as technology and affluence increase. For example, right now you don’t get too much choice over what your house looks like, unless you’re rich. And, even if you’re rich, you’re still limited by what builders and designers can realistically do. Now, imagine if some of the projects to create 3D house-building come along? If that gets cheap enough, every homeowner is going to expect to have a unique special house. I think if you look around you’ll see this trend everywhere in terms of the arts, sexual expression, family structure, politics, and so on. We all want to be different. So, it occurred to me that we may perhaps at some point expect lots of choice from government as well.



But, of course, choice of government is a lot more complicated than choice of potato chip (the options for which are also skyrocketing!). So, I wanted to explore one way that might work.

DL: Is it fair to group technological trends, like the rise of 3D printing, with what seem to be more ethically driven issues, like freedom of sexual expression?

ZW: I think it’s overly abstract to say there’s no relation between technology and ethics. For some reason, technology is often treated as unrelated to big philosophical questions. There’s an extent to which it’s a fair distinction, because one is tangible and the other is abstract. However, at least at a pragmatic level (and arguably at a higher level) technology DOES affect things like ethics, metaphysics, justice, and so on. For example, if technology can supply everyone with cheap food, the question, “Do I steal a loaf of bread to feed my family?” is mooted. It’s not undone as a valuable question, but it becomes less interesting. In the case of 3D printing, well, it’s similar to the food thing. 3D printing potentially could drastically lower scarcity of all sorts of things (from phones to condoms to housing to art). The extent to which we say that this has no ethical component seems to me to be a bit silly. I mean, suppose someone said, “I will dedicate my life to making fusion reactors viable.” That seems to me to be an ethical decision. It’s a decision to act to increase the happiness and choice of other people. I don’t think you would say, “Well, that engineer is just improving technology. This doesn’t affect ethics.”

DL: But surely the pursuit of technological advance is not motivating, for example, the pursuit of LGBT rights?

ZW: I don’t think the line is so clear. In fact, I think it’s no coincidence that societal affluence and social tolerance are correlated. In some ways, I think tolerance is a luxury good society has only recently been able to afford. Group cohesion is always more important when you have many people at higher risk. Now that we’re richer (and there are lots of us), fracturing isn’t so big a deal. In fact, within reason, I think it’s great. It means people have more pleasure and more consent. That’s pretty close to objectively good, to my mind. I think technology not only can have normative aspects, but technology always does have normative aspects. LGBT rights are, of course, more important than 3D printers, from the perspective of social justice. HOWEVER, consider whether LGBT rights would be so far along, without (for example) the development of cheap telephony, or cheap photocopying.

DL: Government in a polystate is different from government as we know it in today’s world. What’s the common thread that makes both these concepts of government the same type of institution?



ZW: Somewhat out of necessity, I took a very narrow view of government, defining it in terms of coercive power. I don’t necessarily mean coercive in a negative sense (after all, I like when serial killers are coerced into cages), but rather in the sense that all governments share this one thing in common – in the areas over which they rule, they claim a right over the use of force. Even in the cases where you have a personal right to use force (say, self-defense), it is at the dispensation of the government. Of course, at least in western democracies, governments don’t use force most of the time. But, there is in some sense a threat of force behind all law. This is not a whole description of government. It is an abstract description that was useful for discussion. Of course, real governments always contain context. Laws must be interpreted. Societies have different unwritten social rules. So, my definition had to do more with what is common between governments than a perfect definition of government, which is probably not possible.

DL: Is there at all a political agenda behind Polystate?

ZW: Whether any of this sheds light on government as we know it, I don’t know. I tried mostly to stick to the speculation and not inject much in the way of politics. The book is what it is. There isn’t a hidden perspective.

DL: A lot is left to be negotiated between anthrostates. You suggest Artificial Intelligence would help streamline arbitration. Are you suggesting that legal judgments would be rendered by computers?

ZW: Good question! I think that’s one of the biggest problems with the proposal. AI was essentially a cheat to explain why it might work. That said, to answer your question, there is no reason in principle why an AI couldn’t arbitrate many things. You can certainly imagine simple cases where something like AI is already used – for example, international banking systems which use algorithms to calculate various exchange rates. Okay, it’s not exactly C3PO, but it is in some sense out of human hands. In addition, there are lots of cases where humanity would probably benefit from some machine assistance in the dispensation of justice. For example, there have been compelling studies showing that judges’ decisions can be affected by when and if they eat.

DL: Sounds a bit I, Robot to me.

ZW: Yeah, the idea of a strong AI dispensing justice on humans is a bit freaky for my taste. How that’d work would depend on how AI develops in the future.

DL: What’s to stop an anthrostate from turning into a de facto geostate? If I’m a democratic socialist, I’d probably want to be surrounded by like-minded people.



ZW: I think it probably would be the case. People tend to gather with similar people, for obvious reasons. But, I don’t see that as a problem for the polystate system. Remember, the rule is just that you can’t make the claim, “Anyone in area X abides by rule set Y.” So, if a group of anarcho-capitalists have gathered to form a town, that’s fine. They just can’t claim that Karl the Communist also has to obey their anthrostate rules while in town.

DL: But couldn’t the anarcho-capitalists take the next step and declare themselves a geostate?

ZW: As you say, a society could set up its own geostate by just declaring it. How common that’d be is probably just a cultural matter. I mean, it’s technically true that you and I could start a cult in Vermont, declare ourselves lords of the kingdom of The Free State of Weinersmith-Listwa, and then face the consequences. But, it’s unlikely. It’s possible that the same would be true in a polystate. After all, if one anthrostate declares a geostate, it’s a land grab from every single other society.

DL: Wouldn’t it be in the interest of some anthrostates to implement geostate-like immigration laws? Imagine communities of “democracy raiders” that join democracies en masse only to form a majority, drain the state coffers, and move on to the next state.

ZW: Ha! I love the raiders idea. I address some similar stuff, somewhere in the book. But, the basic idea is that at equilibrium (always a dangerous phrase), anthrostates should anticipate these concerns. If you’re worried that roving coffer-drainers will rove your way, you have options. For example, you could say, “Only citizens who’ve been here for X years can vote.”

DL: Doesn’t that suggest that whether geostates arise or not is more than just a cultural matter? If the institutions associated with geostates offer advantages over a “true” polystate, would polystates as you envision them survive?

ZW: Yeah, I think it’s a totally fair point. I still think the question is cultural, in this sense – what is permissible will depend on how loyal everyone is to polystatism. This may be more profound than you think. Consider how you would react if Australia invaded Micronesia. In a certain sense, why should you care? It’s got a population comparable to Tuscaloosa, Alabama. It’s nowhere near you. There’s no chance Australia will try to hurt the USA. BUT, culturally, you feel that it is not right for sovereign states to have their sovereignty violated on a whim. You would expect your government (which doesn’t clearly have a dog in the fight) to do something. I think that’s culture, and I think it’s conceivable you’d get something similar in a polystate. Whether it’d actually work? Nah, probably not. I mean, this is speculative after all.



ETHICAL INHERITANCE: SUPPLEMENTING THE BOOK

Ben Rashkovich

I.

Harry Potter is a boy. He has green eyes, black hair, and a lightning-bolt-shaped scar on his forehead. Anyone who has read a Harry Potter book (or seen a movie, but we shall put other media aside for the time being) would nod his head at these statements. True, true, true. If I ask this person how he knows what I have said about Harry Potter to be true, he would have no problem convincing me. “Simply flip open the first book! You’ll see; the chapter itself is titled ‘The Boy Who Lived’!” And so on for the rest of my claims. One way we readers ascertain what is “true in fiction,” regardless of how philosophers model systems for it, is by pointing to the text itself. An unreliable narrator can try to deceive, an autobiographer may attempt to glamorize, but a book cannot lie about its own contents.

Harry Potter grows up in England. It rains quite a lot in England, and more than in New York. Therefore, it must have rained quite a lot during Harry Potter’s childhood. Some might hesitate at this. Intuitively, we would think to respond that this is true. How do we know? No matter how thoroughly one has read the Harry Potter series, one has no chapter or scene at which to point. Never does Hermione say to Ron, “Ah, it rains so much here in England!” Never does Dumbledore complain to Hagrid, “New York has better weather!” Never, even, does the narrator indicate that it rains often as a matter of setting the scene.



The text itself, then, is not the only source of truth with regards to fiction. We must implicitly understand things like this—that Harry Potter has only one heart, two eyes, one liver, that he breathes air, but also that England once had colonies in North America and other continents, that it took part in many wars, that tea is a cultural staple there, and so on—by tapping into another well. To many concerned with the philosophy of fiction, the answer seems clear: we look to the author. J.K. Rowling knows, or believes, that it rains more in England than New York. Since we have no reason to distance Harry Potter’s England from ours, we can reasonably assert that for Harry Potter, it rained a lot.

For the aforementioned philosophers, this problem of “background information inheritance” often plays second fiddle to the investigation of how we determine truth values semantically (or from the words of the text alone) when we speak of fiction. Nevertheless, briefly familiarizing ourselves with some individual approaches will give us a bit of context to work off of. The philosopher David Lewis says in “Truth in Fiction” that “the proper background” of information for a fiction lies “in the community where the fiction originated: the beliefs of the author and his intended audience.” He soon modifies this by looking to “overt beliefs” and “belief worlds,” focusing on what people generally know as opposed to what only certain specialists might be aware of. Gregory Currie, in “Fictional Truth,” theoretically diverges from Lewis but practically echoes him. Currie brings in the notion of an “informed reader,” positioning us as figures with somewhat greater responsibility and agency. It is up to us to “infer the beliefs of the author” in a reasonable manner, making logical jumps (such as from “dragons exist” to “unicorns exist,” Currie says, although Darwin might disagree) while assuming “that the author’s beliefs are as close to being conventional as the explicit content of the text will allow.” The problematic of the “conventional” aside, we see that Lewis and Currie essentially come together. When we read in between the lines, we rely on the person who wrote them to guide us through.

Not every philosopher interested in fiction comes to this same formulation; many, though, arrive at an essence identical in practice. My aim is neither to prove nor disprove this conclusion, but rather to complicate it by questioning the core. For, as we can see in Lewis, Currie, and others, this notion draws on one fundamental assumption: the primacy of the author.

II.

Harry Potter is black. I expect this claim of mine would meet a great deal more hesitance, if not outright criticism and disagreement. In my experience, most people conceive of Harry Potter as white—certainly I heard of no outcry over his depiction on the book covers or in the films.

31


But does that mean it is true to say he is white, and false to say he is black? How would a reader prove to me the falsity of my claim? No section of the text, again, unmistakably indicates Harry’s race. “What a rude white boy you are,” Professor Snape never snarls; “His white hand trembled as he gripped his acceptance letter to Hogwarts” did not make it to the final draft. Some might point to what they see as “implicit” textual evidence, like Harry’s speech patterns, family and living circumstances in the historical moment of the fiction, or even his very name. Rowling diversifies Hogwarts students with rather obvious culture-specific names like Parvati Patil and Cho Chang, to be sure, but it would be unfair, negatively discriminatory in fact, to assume everyone with a “British” name is white. Indeed, one piece of feedback on this essay pointed out Harry’s pale, green-eyed and dark-haired characterization as evidence of his Caucasian ancestry, but this is similarly unfounded. “Pale” does not only apply to Caucasians—it is our minds and imaginations that make this leap, carried by something beyond Rowling’s words. The same kind of rebuttal works against the other claims as well: they are assumptions, at best grounded in one’s expectations, at worst in one’s ignorance. Nothing in the text itself determines Harry’s race.

We must then move on to our inheritance—the background information we glean from the realm of the author’s community, according to Lewis, Currie, and others. The dubious “implicit” textual evidence built up from assumption falls, actually, into this category, as it relies on linking the textual with the cultural. All that is left is the author herself, if we seek to determine the truth-value of my original statement. Let us hold off on that course for a moment.

Say we go to J.K. Rowling and ask whether she intended Harry Potter to be black or white. All of a sudden—precisely because we have been examining a line of high stakes, rather than an unimportant one, like how many hairs Harry has on his head—this question of truth in fiction swells beyond a merely semantic interest, beyond words determining true and false. We acquire the angle of ethics. Of right and wrong. We recognize the importance of our systems. It matters, seriously and deeply, who or what defines Harry Potter’s race. Consider a child who grows up without any biographical knowledge of J.K. Rowling, and upon reading the Harry Potter series, conceives of Harry as black. Can we—should we—say that this child is unequivocally incorrect? The question of authorship, whether the author of a work has any power over the truth-values we try to dissect, affects our own agency, individuality, imagination, and prejudices.

III.

A vexing dilemma now faces us. How can we account for the assumptions we inherit about a text—it rains in England more than in New York, for example—in an ethical way, not merely a semantic way?



Before continuing, it strikes me as necessary to briefly clarify the terms “meaning” and “truth in fiction.” As overly precise distinctions often admit more confusion than understanding, let us simply say that the two terms communicate roughly the same idea from different perspectives: the former from a literary critic’s, the latter from a linguistic philosopher’s. “Meaning” holds greater cultural capital than “truth in fiction” (“truth” is too broad for us here), for among its valences it counts “deeper meanings” and “human truths” and the like—what a work can tell us about life, and all that. However, works also have elemental meanings, the links of intertwining events that compose the story on a fundamental scale. Sentence X, at the most basic and pre-interpretative level, means that Harry did A, then Ron said B, then C happened. This is the level of meaning we will operate on, for it determines everything else.

One possible solution to our problem is doing away with the author’s intentions. By delocalizing the source of meaning and minimizing the intrusion of social biases, truth reaches a more egalitarian state. We would derive meaning, not from the author at all, but from the text alone, its form and structures and aesthetic presence.

Monroe Beardsley and W. K. Wimsatt Jr., in “The Intentional Fallacy,” take the stance that meaning cannot at all sprout from the author’s intentions as they manifest apart from the text; if the author intends to convey A, but instead conveys B, then B is what we readers receive, and therefore what matters in the work. “The poem belongs to the public,” they declare, and as “the judgment of poems is different from the art of producing them,” readers and critics must adhere to the material at hand. Beardsley and Wimsatt distinguish between what is internal to the work—the syntax, diction, and grammatical flourishes, the lexical genealogies and semantic traditions, as “the meaning of words is the history of words”—and what is external, namely biographical facts and opinions unexpressed in the text itself. The internal is public; the external, private. The author’s intentions are impossible to discern from a work itself, and so do not influence any truths expressed by the text. The author’s desires, while perhaps biographically enlightening or interesting as curios, are therefore irrelevant to any critical analysis of the work. Artists, authors among them, must take responsibility for what they say over what they meant to say.

Although Beardsley and Wimsatt do not here provide us with a map of background information inheritance directly, they do open up a methodology that seems concise, semantically sound, and ethical. If we put aside the prejudices of the author, we can get on our way to discovering meaning without interference. If we accept that authorial intent is irrelevant because it is both impossible to determine from the text and unnecessary to examine in search of meaning, then we cannot base the belief world of background information on the author’s community. Perhaps we might say that, therefore, anything not formally communicated in the text is neither true nor false.
We might assume Harry has only one heart, but because it is so unrelated to the movements of the work that it goes unmentioned, its truth-value remains ambiguous. Perhaps… But probably not. The fictional world of Harry Potter is not a swirling morass of neither-truths-nor-falsehoods; intuitively we simply know that gravity works in the Dursleys’ household the same way it does in our own, that the sky is blue, and so on. These truths are not explicit in the text, but they are nevertheless true for the fiction.

Perhaps we might say that anything not formally communicated in the text is, instead, the same as it is in our world? This accounts for the above problems, certainly. But no—this would have us readers constantly searching for referents to our world until the text presents the divergence. For example, when Hagrid delivers baby Harry to Privet Drive, we witness the descent of his flying motorcycle; not until later do we expressly learn this motorcycle had been charmed to fly. Because of this, according to the aforementioned theory, we would be searching for some scientific way a flying motorcycle correlates with our own world for quite a long time. It will not do.

The farther we stand from the author, the more tyrannical he or she becomes. Why should one single person hold dominance over all truth in a world—even if that person created it? Should we follow Beardsley and Wimsatt to their logical end, separating the writer and the written until the former disappears on the horizon, then all we would have left to supplement the text is ourselves. Roland Barthes, a French philosopher and literary theorist, suggests this as a necessary move in his seminal essay, “The Death of the Author.” “The author is a modern figure,” he says, but “it is language which speaks, not the author.” Because “every text is eternally written here and now,” and “the enunciation has no other content (contains no other proposition) than the act by which it is uttered,” we must look to the words alone. The author holds no authority other than what we socially ascribe to him or her. Barthes advocates deferring to the reader instead, as “the space on which all the quotations that make up a writing are inscribed without any of them being lost… someone who holds together in a single field all the traces by which the written text is constituted.” The reader is the authority; the reader judges the universe of truths.

Before we accept this philosophy, we must subject it to the test of ethics. The role of truth-determinist carries a great deal of power—if truth acts as a communal good. However, Barthes’s position leads to chaos, fractal individualism, and vacuous meaning; while he does not speak for reader-response theory, his extreme dependence on the receiver emphasizes fragmentation and disjunction over all else. My experiences are not yours: our truths will remain at odds, with no bridge for connection. “The birth of the reader must be at the cost of the death of the Author,” he ends his essay mightily. Barthes thus calls for us to destroy the tower of Babel that is the meaning shining dormant beneath art, to cast off truth into the void by preventing the flow of its very lifeblood: external evaluation.
If everyone controls the text’s meaning entirely, then no truth of the fiction can arise. Harry Potter is white, black, dreaming, abducted by aliens—he can be anything, because for Barthes, truth sprouts from the reader’s purely subjective beliefs. Regardless of how we feel about this understanding, we must remember that ethics is macroscopic morality, the code of society. Subjectivism admits no collaboration on a shared reality, and so no joining of truths. To impede the union of individual truths, as Barthes wants, is to destroy societal truth: an unethical act by its very definition. Where, then, can we turn?

IV.

When pondering how to navigate the inheritance of background information with a mind towards ethics, we dispensed with the author as a limiting force, one that shrinks what truth in fiction can be. Why should J.K. Rowling’s imagination, not expressed in the pages of Harry Potter but kept in her head, lord over the minds of all readers everywhere? For surely those unaware of her intentions, as we all necessarily are to some degree, can still evaluate statements of truth in Harry Potter? Surely we can conceive of a circumstance in which Harry Potter is indeed black, without violating the ethics of textual meaning? Rather than killing the author as Barthes chants us on, we might look to another option: reconfiguring her. Or, we might instead say, seeing her clearly for the first time.

I know a great deal about J.K. Rowling, and can find out nearly anything I wish with the Internet at my fingertips. I know very little of Shakespeare, though, and even Google cannot help me much. How is it, then, that both authors work the same way regarding our understanding of their texts? Scholars debate whether a given poem was written by Shakespeare or Marlowe; The Cuckoo’s Calling, first published under a pseudonym, joined Rowling’s corpus only once her authorship leaked to the press. In both cases, the author is disconnected from the historical individual. None but literary historicists and biographers care when and where Rowling lived as a child: her role as an author revolves around her works, the relationships between them, and the meanings they hold. Select details of the individual’s life certainly affect the formation of the author, but these always require interpretative moves and assumptions of causation on our part. An author is not an occupation, but an entity apart from the individual.

In response to Barthes’ essay, Michel Foucault swooped in with “What Is an Author?” As he often does, Foucault gestures in a number of directions throughout; his description of the “author function” in particular will interest us here. Starting off, Foucault notes that the relationship between “the proper name and the individual named” differs from that of “the author’s name and what it names,” as “the author’s name serves to characterize a certain mode of being of discourse,” and “seems always to be present, marking off the edges of the text.” An author is not a person, he says, but the subject of a mode of discourse, which applies only to certain categories of texts (literature and science, namely).
The crux of his argument lies with the notion that the author function “does not develop spontaneously… It is, rather, the result of a complex operation that constructs a certain being of reason that we call ‘author.’” He summarizes: “the author does not precede the works; he is a certain functional principle by which, in our culture, one limits, excludes, and chooses.” The author is socially constructed, and thus her intent is both retroactively applied and produced by the community.

This route seems to do away with any and all problems we have faced throughout. Once we realize that we deal with a fictionalized apparatus rather than a historically situated and self-contained individual, the difficulties melt away. Beardsley and Wimsatt’s diatribe against the intentional fallacy of critics is foundationally correct, in that we can never truly know what Joanne Rowling thought when she wrote a certain chapter of a Harry Potter book. Authorial intent signifies something different when we understand that the author comes into being alongside the text, however: intent is meaning, and meaning is intent. The words themselves are the intent.

A formulation for background information inheritance appears as well: we cannot, indeed, rely on the belief world of the historical individual’s community, but our own community’s beliefs about the belief world upon which the text-and-author floats can give us the best grounding for our intuitions. This method passes the ethics test as well, because it acknowledges social dynamics rather than denying them. Some statements are communally determined as true or false, but we can change those values by changing our culture: they are fixed neither in the text nor in the author’s head. Today, Harry Potter does seem to be white. In twenty years, this may no longer be the case. What might sound like a merely semantic distinction makes a great deal of difference, as it accounts for the layers of construction and malleability that hold truth in place. As always for Foucault, and rightly so, truth is a function, a commodity, a production, of the power relationships within a community. Truth in fiction is no different.





YOU ARE (PROBABLY NOT) A COMPUTER SIMULATION

Dan Jacob Wallace

Are you a computer simulation? Oxford-based philosopher Nick Bostrom argues that it’s highly probable that you are. And a team of physicists has recently cited empirical evidence suggesting that you might be. I think most of us would file this topic under Fun to Think About, but the simulation hypothesis is coming from the sorts of thinkers we’re supposed to take seriously, and has been getting attention from mainstream news sources, most recently the New York Times. It’s generally reported with at least a face-saving dose of decide-for-yourself skepticism, but, still, I think that the growing popularity of the argument merits at least a little serious weighing in from the philosophical community ― particularly in an era in which it’s becoming a kind of moral transgression to “deny science” (whatever that really means) or even to not “fucking love science” (whatever that means). That said, let’s consider whether the simulation argument is as convincing as its many supporters make it out to be. (It’s not.)

In his 2002 paper, “Are You Living in a Computer Simulation?”, Bostrom argues that at least one of the following propositions is true:

Proposition 1 (P1): The human species is very likely to go extinct before reaching a “posthuman” stage;

Proposition 2 (P2): Any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof);

Proposition 3 (P3): We are almost certainly living in a computer simulation.
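
To see why rejecting P1 and P2 is supposed to leave P3, it can help to make the counting logic explicit. The sketch below is my own toy illustration, not anything drawn from Bostrom’s paper, and every number and variable name in it is invented for the purpose: it simply assumes that some fraction of civilizations like ours reaches a posthuman stage and that each such civilization runs some number of ancestor simulations, then asks what fraction of all human-like observers would be simulated.

# Toy illustration (mine, not Bostrom's notation) of the counting logic behind
# the trilemma. Suppose each "real" civilization contains H observers, a
# fraction f_p of civilizations reaches a posthuman stage, and each posthuman
# civilization runs n_sims ancestor simulations of H observers apiece.
def simulated_fraction(f_p: float, n_sims: float) -> float:
    """Fraction of all human-like observers who live inside a simulation."""
    # H cancels out: simulated observers per civilization = f_p * n_sims * H,
    # while unsimulated observers per civilization = H.
    return (f_p * n_sims) / (f_p * n_sims + 1)

# Even a tiny chance of reaching posthumanity swamps the count with simulated
# minds, provided simulations get run at all (both inputs are made up):
print(simulated_fraction(f_p=0.001, n_sims=1_000_000))  # roughly 0.999
print(simulated_fraction(f_p=0.001, n_sims=0))          # 0.0

Whether that arithmetic should move anyone is a separate question, and it is the one the rest of this essay presses: the division is trivial; the trouble lies in the numbers we feed into it.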



We’ll get more into the ideas attached to the term “posthumanity” as we go along, but, for now, we’ll take it to refer to beings whose intelligence has evolved to a degree sufficient for their being able to produce computer simulations of minds like ours.

Before considering the argument more closely, some important notes on how we should evaluate it. Bostrom recommends that we assign roughly equal subjective probability to each proposition. It’s “subjective,” because, given the information available to us, the best we can do is make an informed guess. Contrast this with a fair coin flip, where you can be nearly certain that the result will be either heads or tails. Bostrom’s argument isn’t like this ― it’s not a three-sided die, so that each proposition gets 33.3% probability. In an FAQ on his webpage, Bostrom points out that each of us will have our own intuitive response to the argument. He himself assigns the probability of P3 to be “roughly in the 20% region, perhaps, maybe,” because we don’t have any evidence that these propositions are true or false. Those are better odds, by the way, than you have for throwing a seven (16.67%) with two fair dice! And way better than rolling a two (about 2.8%).

As for me, I’m hesitant to assign any probability at all to P3, given our lack of information about the possible outcomes. With a coin toss, we have statistical reasons (some of them well correlated with physical facts about the coin) to assign heads 50%, and near-zero, if anything, to, say, the coin spontaneously combusting or getting carried off by a passing hummingbird. But let’s set these concerns aside and evaluate the argument on its own terms. Even then, I assign near-zero probability to P3, because there are assumptions that must be worked through before we even get to the simulation argument. One of the most crucial of these is that human consciousness is substrate-independent, and to that assumption I assign very low subjective probability.

Substrate independence (Sub-I, for short) is simply a fancy way of saying that minds like ours can exist on some substrate other than living neural tissue; for example, in computer hardware. I’m going to follow Bostrom’s lead and include another assumption within the Sub-I concept, which is that consciousness can somehow be coded into a computer program. Bostrom assumes that Sub-I is true, and points out that many thinkers these days agree with him. It’s true that many do, but it’s also true that many don’t; nor is it the case that opinions about Sub-I’s truth are founded on experimental evidence. Instead, arguments in favor usually go something like this: We are currently making things that, to the outside observer, exhibit the sorts of behaviors that we associate with conscious entities; as technology achieves the appropriate level of complexity, these things will begin to not just simulate conscious behavior, but to be genuinely conscious.

Let’s consider this more closely. P3 refers to the possibility that you, me, and the world we live in are quite literally the products of a computer program written by a computer programmer from an advanced civilization.
This isn’t the Matrix, where we have a living physical body that’s hanging out somewhere. And it’s not a Brain-in-the-Vat-type thought experiment designed to help us test our theories about knowledge against skepticism. No, what P3 states is that our minds literally just are computer programs in run mode, likely housed in planet-sized computers (note that Bostrom spends a fair amount of his paper arguing for another assumption: that the computing power required to simulate our universe is physically possible).

We have no reason to believe, however, that the complexity of the vivid inner mental world that you and I experience every day could be produced by a computer program. This notion of complexity is important. There are those who argue that the difference between the consciousness of, say, a thermometer and an intelligent human is merely a matter of complexity. Somewhere in between would be something like the children’s toy Furby. Indeed, the toy’s creator, Caleb Chung, has vehemently argued that the Furby has conscious experience “at its level,” and can “feel and show emotion” (Radiolab, Season 10, Episode 1, “Talking to Machines”). I’ll grant the showing, via engineered imitation, but I emphatically reject that a Furby literally has an internal, first-person experience of anything at all, at any level at all; exactly no more or less than does a thermometer or a rock.

I think Chung is in the minority on the view that the Furby feels emotion, but, for those who accept Sub-I, the idea is generally that, if a computer mind can be mapped out with the right level of complexity, consciousness will arise, though it need not initially be as sophisticated as human consciousness. Note, however, that, unless you grant that the behavior of something like a Furby or Siri counts as a simple form of consciousness, what’s getting more complex as these machines develop is the computer program, and not consciousness itself. There’s no clear line where consciousness starts. Perhaps this is why some bite the bullet and call Furby-like machines conscious.

At any rate, we are very far from having reached the programming complexity required to produce anything close to human-like consciousness, but, the idea goes, given how quickly technology is developing, it just seems to make sense ― to feel right ― that it will eventually happen. Nor have our creations reached even a low bar ― for example, the Turing Test ― for consistently appearing to be intelligent (which may or may not mean conscious, depending on how you define these terms).

You might respond that, given that most of us currently accept materialism/physicalism to be the case, so that mental states just are physical brain states, there is no reason to think that consciousness couldn’t arise from a well-designed computer simulation of our neural machinery. I’m not so sure. What’s the difference between introducing information (e.g., instructions on how to bake a cake) into a living human brain, inputting it into a computer, or writing it on a piece of paper? It seems to me that inputting information into a computer is much closer to writing it onto a piece of paper than it is to introducing it to a human brain.



A key difference from paper, of course, is that the computer is capable of autonomous algorithmic processing (i.e., “following” our instructions), and has the capacity to change its behavior over time in response to new information. It can “learn.” For example, you could tell a computer to add another “1” indefinitely, or to keep track of the words most employed by a particular user, and thus treat one user differently from another. With a stable enough computer, perhaps it could engage in a kind of expansive self-replication over thousands of years, becoming exponentially more complex and, if properly programmed to begin with, eventually achieving a conscious state. Perhaps this could happen. Perhaps not. Unfortunately, there’s currently no way to test this.

Compound this with the fact that most thinkers (though not all) agree that we really don’t have a clear account of what consciousness even is, much less where it is or how it works. Still, there are those who believe, as a matter of faith, that something like the above process will eventually take place. Perhaps it will, perhaps it won’t.

I have similar gripes about foundational assumptions in math and physics, and am glad to know that there are reputable scientists whose intuitions agree with mine. For example, I’m skeptical about many notions surrounding infinity, and so is MIT-based physicist Max Tegmark. In his book, Our Mathematical Universe, he writes, “I remember distrusting infinity already as a teenager, and the more I’ve learned, the more suspicious I’ve become,” and points out that the notion is rarely questioned because “we haven’t discovered good alternatives.” Indeed, Tegmark suspects that infinity is the “fundamentally flawed assumption at the very foundation of physics.” Of course, we all know that current physics doesn’t add up; what’s interesting here is that Tegmark is pointing to an uncontroversial idea as the culprit. Consider also the physicist Sir Roger Penrose, who has been working for the past ten years on a book called Fashion, Faith, and Fantasy in the New Physics, and who has referred to string theory as a matter of “fashion,” quantum mechanics “faith,” and cosmic inflation “fantasy” (Science Friday, April 4, 2014). To be clear, Tegmark does believe ― or, I would say, has faith ― in math’s ability to describe everything, including conscious experience; and Penrose is by no means suggesting that we should abandon physics. It just happens that their intuitions guide them in different directions than do the intuitions of many other equally intelligent and informed thinkers.

I note all this in order to reflect on what it means for subjective probability. Perhaps, you might argue, the scientist’s intuitions about simulation are more educated than ours. But consider that, when deciding upon which expert to follow, we are adding yet another layer of subjective probability: To which expert do I assign the greatest likelihood of being right? (Probably the one that confirms my pre-existing opinions.) None of this is to say, however, that compelling evidence for simulation couldn’t be discovered.
For one thing, as Bostrom points out in his FAQ, our programmers could make our true situation known to us. We could also develop indirect evidence by creating a successful simulation ourselves. I’ll comment more on this below, but first, let’s consider the aforementioned team of physicists who claim to have found evidence that simulation might be true. Their paper, “Constraints on the Universe as a Numerical Simulation,” cites Bostrom’s hypothesis as a motivation for their research. That paper also motivated me to write this article, because of the attention it, and in turn Bostrom’s simulation argument, has been receiving from mainstream news sources.

The idea is that we run simulations already, which exhibit certain anomalies. Over time, these simulations will likely get sophisticated enough to include minds like ours. Still, because such a simulation’s structure would be finite, it would exhibit anomalies similar to those exhibited by current simulations. They claim to have found such anomalies in cosmic rays. As you can guess at this point, I am suspicious of these assumptions about future simulations. And, interestingly, Bostrom, in his FAQ, rejects the idea that evidence for simulation would come from glitches or anomalies in the programming. The sort of evidence Bostrom would like to see is a successful running of substrate-independent conscious experience; current simulations are far from that, and it won’t be possible until humans have evolved to a posthuman stage. Perhaps, however, we can nudge that evolutionary process along. It turns out that this is something Bostrom is pushing for. More on that in a moment.

You might be wondering at this point, “Who cares if we’re a simulation? What’s the difference?” Well, living in a simulation would come with implications. Before getting into these, I should point out that Bostrom, on his FAQ page, insists that simulation has no relation to religion. This is actually quite true in the sense that simulation doesn’t require the existence of a god or gods. Still, when we consider what simulation involves, it does start to take on a quasi-religious flavor, and certainly would have implications for both believers and nonbelievers. Consider that, if you are a simulation, you have no immortal soul, but you do have a consciousness not bound to ‘this plane.’ And you have a creator, a purpose for having been created, and, most significantly, you now have the potential to live on indefinitely. That is, your mind now exists in a computer program, and it’s up to the programmer whether the program is terminated, thus ending your existence, or, even better, whether you are transferred to a posher program.

Note, too, that there are moral implications that come along with simulation, which function at two levels. First, there is the moral character of our programmers. Why would such an advanced civilization ― one that undoubtedly has access to many times the destructive power we do, yet has avoided extinguishing itself ― allow for the creation of a world like ours, a world with so much suffering? Perhaps it would effectively prohibit such a thing. How we respond to this question, by the way, might lead us to give more weight to P2.



The other moral level involves the question: If it’s just a simulation, what does it matter what we do? This has been addressed, for example, by Robin Hanson, who wrote a paper called “How to Live in a Simulation.” The gist is that, if we determine that we might be simulated, we might then concern ourselves with figuring out how to impress our programmers so that our situation may be selfishly optimized. Add to this the likelihood that, due to limited computing resources, many simulated animals and humans wouldn’t actually have conscious minds, though we can’t know which. I’ll leave that discussion to those who believe they might live in a simulation.

We see, then, that simulation being true could be good news for the atheist (or the dedicated sinner) who yearns for immortality. And the surest way to prove it is to create our own successful simulations (we could also, we assume, upload our minds into our own simulations to live on after bodily death). But we can’t do that as mere humans. Enter Transhumanism, a proactive path to posthumanity and immortality. Back in 1998, Bostrom co-founded the nonprofit World Transhumanist Association (WTA), which changed its name to Humanity+ in 2006. Their mission is to advocate the elevation and expansion of human capacities through the ethical application of technology. In other words, they wish to influence the goals of scientists, technicians, and public policymakers towards efforts that facilitate the transition of humanity into a posthuman stage. Solving death is a fundamental goal here. See, for example, the article in the organization’s magazine, H+, “Our ‘GooglePlex Action’ for Radical Life Extension,” by Alexey Turchin, which features images of Transhumanists picketing Google offices with signs that read “immortality now,” and in which he writes, “I can easily envision a moment a decade from now when 10 or 20 percent of representative seats in a large country will be held by transhumanists.”

Of course, this sort of radical thinking does not count against Bostrom’s simulation hypothesis, but it does reveal something notable about subjective probability. Once you accept a proposition as true, other propositions follow. Out of that first proposition, particularly when it is one for which you lack properly qualified evidence, there grows an increasingly fragile web of ideas and theories, each strand more far-fetched than the last. So, before considering the subjective probability that you’d assign to the three propositions of Bostrom’s argument, consider closely the other layers of opinion and assumption involved. As for myself, given my discomfort with the assumptions that underlie it, I assign very low, practically zero, subjective probability to the proposition that I am a computer simulation. In fact, I would bet serious money on its being false.



CONTRIBUTORS

MYRIAM AMRI is a junior in the dual BA program between Columbia University and Sciences Po Paris. She is majoring in sustainable development and is interested in political theory as well as the intersection between Western and Islamic thought.

CHARLES DALRYMPLE-FRASER is a current student and future educator, studying philosophy at the University of Toronto. When not deconstructing cultural artefacts, Charles can be found contemplating the identity and care of dementia patients.

CALEB FISCHER is a first-year student at Columbia College. He plans to double major in mathematics and philosophy. He’s focused on the study of metaphysics and epistemology, and wants to figure out the interplay among existence, knowledge, and truth.

PAUL HELCK is a senior at Columbia College majoring in Philosophy, as well as the current moderator of the Undergraduate Forum. He has never been, and most likely never will be, in a boy band.

JACOB (KOBI) GOODWIN is a first-year student in the Joint Program between Columbia and the neighboring Jewish Theological Seminary, where he majors in Jewish Thought and South Asian Studies. One day he will likely split his time between the US and the north Indian state of Jammu & Kashmir.

ETHAN HERENSTEIN is a sophomore in Columbia College, majoring in Philosophy. Born and raised on Long Island, Ethan is now happily situated in Morningside Heights, where he rarely oversleeps his early classes.

DANIEL LISTWA (CC’15, Economics-Philosophy, Concentration in Business Management) is devoted to logic, but concerned for his existential freedom. Daniel finds his research in Decision Theory and Philosophy of Science surprisingly applicable to his role as Editor-in-Chief of the Columbia Economics Review and is working to develop the K1 Project, a center for the study of nuclear-related issues, with the hope of encouraging rational decision making.


ALEJANDRA OLIVA does something different with her hair every six months, and as such, is prone to thinking about any number of things. She is a junior, studying sociology and creative writing, and buys too many books.

BEN RASHKOVICH is a junior at Columbia College, studying English Literature and Creative Writing. He’s always considered himself a staunch empiricist and intersubjectivist, but recently he’s been struggling with the possibility of an objective Platonic universe. These are real problems.

MIKAILA READ is in the final stages of earning her Bachelor’s degree in philosophy from Eastern Washington University, and will be entering a Master’s programme in philosophy at Durham University this autumn. Her expected area of specialization is applied phenomenology, and she plans to later pursue her PhD and seek a professorship in the field.

SERA SCHWARZ (BC ‘15) still inclines towards thinking that the force of the philosophical pursuit issues from its being a global, rather than an (exclusively) local, effort. She is especially interested in the history of Western metaphysics; in the history of philosophical skepticism, particularly as this is informed by, and expressive of, certain (explicit and implicit) theories of justification; and in questions proper to “value theory”, broadly construed (as this extends to (meta-)ethics, moral psychology, and social and political philosophy).

DAN JACOB WALLACE (Columbia University, GS ‘15; Philosophy) enjoys the broad, how-it-all-hangs-together approach to philosophy. He’s currently thinking a lot about social group ontology and the group-individual dynamic, epistemology, philosophy of mind, political philosophy, and the interconnection of these areas.

A NOTE ON THE DESIGN OF THIS ISSUE

This issue was laid out in Baskerville, a typeface designed in 1757 in England by John Baskerville and considered transitional between older styles and more modern ones such as Didot, whose italic capitals are used for titling in this issue. The script font used is Copperplate. All images used as illustration in this issue come from Kunstformen der Natur, a book of prints by the German biologist Ernst Haeckel that illustrates not only a variety of marine life discovered by Haeckel during his lifetime, but also his belief in the symmetry and order underlying all of nature.
