BRIGHT IDEAS No. 6 Editor’s Note


“Man’s greatest hubris is not believing he can create God; it’s believing he can control the God he creates.” — Anastasio Sevilla

In the summer of 2010, I set out on a 1,500-mile bicycle tour from Los Angeles to Seattle to promote my first novel, We’re Getting On. The book, which I started in 2007, was a reactionary, reductivist text exploring my fears about our increasingly digitized culture. And the tour—during which we slept in tents at organic farms and didn’t use our phones—was an attempt to recreate the conditions of the text. In the narrative, a young tyrant named Daniel coerces a group of people to follow him into the desert, where they attempt to abandon technology. Things don’t go well. After all, if you set out to purge your life of all human inventions, you eventually discard language, and you end up lying on a rock in the baking sun having forgotten what a sentence is.

I bring up We’re Getting On, and the six-week bike trek we embarked on to publicize it, because in recent months I’ve reëngaged with the ideas that initially shocked me into writing the novel. Back in 2005, I heard a conversation on NPR between Ira Flatow and the futurist Ray Kurzweil that shook me to my core. Within the next 20 years, Kurzweil postulated, we will have synthesized enough of the brain digitally that we’ll have software that can teach itself how to be intelligent. And it isn’t too many steps from there to uploading your consciousness to the cloud. A decade on, his predictions are proving astoundingly prescient. In 2012, Larry Page personally hired him to work on natural-language recognition at Google. This fall, under Kurzweil’s oversight, Google will release a chatbot capable of near-human-level speech.

My knee-jerk reaction, in the face of rapid change, is to disengage intellectually and physically. Before our brains are connected to the internet, I thought a decade ago (and continued to think until this spring), I’ll just move into the woods with Blessing and live like some pre-industrial farmer. But that tendency to withdraw, as romantic as it seems on paper, is not only infeasible; it’s dangerous.

My transition away from Luddism and toward machine worship (or at least machine acquiescence) was sudden and profound. Last fall, The New Yorker profiled Nick Bostrom, the founder of the Future of Humanity Institute at Oxford University. Bostrom has dedicated his life to raising the tech world’s awareness of the dangers of the looming artificial-intelligence explosion. Before reading his seminal text, Superintelligence: Paths, Dangers, Strategies, I assumed that Bostrom’s goal was to convince computer scientists to abandon their quest for human-level machine intelligence. I couldn’t have been more wrong about his intention. Nor could I have imagined that, reading his work this spring on vacation in Taiwan, I would undergo the single largest intellectual paradigm shift of my life. Like a global-warming skeptic who’s just watched a polar bear cub drown in the middle of the Arctic Sea, or a Tutsi listening to a call for genocide on Radio-Télévision Libre des Mille Collines, I could suddenly see, with terrifying clarity, an impending disaster. But hitched to that thought was the feeling that I could do something to avert the crisis.

If you think I sound completely insane, I don’t blame you. In my darker moments, I feel like a lunatic. But if you haven’t familiarized yourself with recursively self-improving artificial superintelligence (A.S.I.), whether out of doubt or fear, turn to page 98. Our resident futurist, Chris Robinson—who’s explored dream recording and the future of virtual reality in past editions—turns his attention in this issue toward A.S.I.


Through the lens of cinema’s imaginative blind spots, he delves into the history of how filmmakers have portrayed intelligent machines, and why their frequently shortsighted representations—from Metropolis to Ex Machina—are inadvertently leading us toward the apocalypse.

A.S.I., as you’ll see in Robinson’s essay, is an existential threat far surpassing nuclear weapons. But it’s also a predicament filmmakers are uniquely capable of dramatizing before we design it. Envision, for instance, a world in which, during the 1920s—free from the political context of World War II—we could’ve imagined in advance the damage we would inflict on Hiroshima and Nagasaki. Would we have abandoned developing the atomic bomb? Probably not. But we might at least have considered the consequences of dropping one on civilians. And perhaps we would’ve agreed in advance, as a unified world, on how to ensure we didn’t destroy the planet.

That’s where we are with A.S.I. As with time travel, we’ve tinkered intellectually with the concept for more than a century, imagining the technology and its potential impact on our lives. But unlike time travel, A.S.I. impends. By the end of the century, at the very latest, there’s a 90 percent likelihood that machines will surpass humans in general intelligence. And the world will change fundamentally. Whether the transition is utopian or cataclysmic, as you’ll see in Robinson’s essay, remains to be determined. But artists, I venture to say, have a duty to help depict the aftermath—and, in so doing, guide our progress. In fact, our future depends on it.

Sorry for sounding like a madman. But we’ve never taken any shit more seriously than we’re taking this.

James Kaelan


