Technophobia – digital broadsheet


tech• no• pho• bi• a

Published annually • 02 May 2016 • ROI €2.10 • NI £1.25

FEAR OF A.I — UNIQUE TALKS ON ARTIFICIAL INTELLIGENCE

10-06-2016

The long-term future of AI // And what we can do about it

Fear of A.I // 2016. Talks from leading minds on possible outcomes of artificial intelligence.

Sean Cummins - Editor [HUMAN]

WHAT WAS ONCE just a figment of the imagination of some of our most famous science fiction writers, artificial intelligence (AI) is now taking root in our everyday lives. We’re still a few years away from having robots at our beck and call, but AI has already had a profound impact in more subtle ways. Weather forecasts, email spam filtering, Google’s search predictions, and voice recognition, such as Apple’s Siri, are all examples. What these technologies have in common are machine-learning algorithms that enable them to react and respond in real time. There will be growing pains as AI technology evolves, but the positive effect it will have on society in terms of efficiency is immeasurable. AI isn’t a new concept; its storytelling roots go as far back as Greek antiquity. However, it was less than a century ago that the technological revolution took off and AI went from fiction to very plausible reality. Alan Turing, British mathematician and WWII code-breaker, is widely credited as being one of the first people to come up

with the idea of machines that think in 1950. He even created the Turing test, which is still used today as a benchmark to determine a machine’s ability to “think” like a human. Though his ideas were ridiculed at the time, they set the wheels in motion, and the term “artificial intelligence” entered popular awareness in the mid-1950s, after Turing died. American cognitive scientist Marvin Minsky picked up the AI torch and co-founded the Massachusetts Institute of Technology’s AI laboratory in 1959, and he was one of the leading thinkers in the field through the 1960s and 1970s. He even advised Stanley Kubrick on “2001: A Space Odyssey,” released in 1968, which gave the world one of the best representations of AI in the form of HAL 9000. But it took a couple of decades for people to recognize the true power of AI. High-profile investors and physicists, like Elon Musk, founder of Tesla, and Stephen Hawking, are continuing the conversation about the potential for AI technology.

The rise of the personal computer in the 1980s sparked even more interest in machines that think.

"It can only be attributed to human error."

While the discussion occasionally turns to potential doomsday scenarios, there is a consensus that when used for good, AI could radically change the course of human history. And that is especially true when it comes to big data.

The very premise of AI technology is its ability to continually learn from the data it collects. The more data there is to collect and analyze through carefully crafted algorithms, the better the machine becomes at making predictions. Not sure what movie to watch tonight? Don’t worry; Netflix has some suggestions for you based on your previous viewing experiences. Don’t feel like driving? Google’s working on a solution for that, too, racking up the miles on its driverless car prototype. Nowhere has AI had a greater impact in the early stages of the 21st century than in the office. Machine-learning technologies are driving increases in productivity never before seen. From workflow management tools to trend predictions and even the way brands purchase advertising, AI is changing the way we do business. In fact, a Japanese venture capital firm recently became the first company in history to nominate an AI board member for its ability to predict market trends faster than humans. Big data is a goldmine for businesses, but companies are practically drowning in it. Yet it’s been a primary driver for AI advancements.



SPEAKER

Inside OpenAI

Elon Musk’s Wild Plan to Set Artificial Intelligence Free

OpenAI is a billion-dollar effort to push AI as far as it will go.

THE FRIDAY AFTERNOON news dump, a grand tradition observed by politicians and capitalists alike, is usually supposed to hide bad news. So it was a little weird that Elon Musk, founder of electric car maker Tesla, and Sam Altman, president of famed tech incubator Y Combinator, unveiled their new artificial intelligence company at the tail end of a weeklong AI conference in Montreal this past December. But there was a reason they revealed OpenAI at that late hour. It wasn’t that no one was looking. It was that everyone was looking. When some of Silicon Valley’s most powerful companies caught wind of the project, they began offering tremendous amounts of money to OpenAI’s freshly assembled cadre of artificial intelligence researchers, intent on keeping these big thinkers for themselves. The last-minute offers—some made at the conference itself—were large enough to force Musk and Altman to delay the announcement of the new startup. “The amount of money was borderline crazy,” says Wojciech Zaremba, a researcher who was joining OpenAI after internships at both Google and Facebook and was among those who received big offers at the eleventh hour.

How many dollars is “borderline crazy”? Two years ago, as the market for the latest machine learning technology really started to heat up, Microsoft Research vice president Peter Lee said that the cost of a top AI researcher had eclipsed the cost of a top quarterback prospect in the National Football League—and he meant under regular circumstances, not when two of the most famous entrepreneurs in Silicon Valley were trying to poach your top talent. Zaremba says that as OpenAI was coming together, he was offered two or three times his market value. OpenAI didn’t match those offers. But it offered something else: the chance to explore research aimed solely at the future instead of products and quarterly earnings, and to eventually share most—if not all—of this research with anyone who wants it. That’s right: Musk, Altman, and company aim to give away what may become the 21st century’s most transformative technology—and give it away for free.

Zaremba says those borderline crazy offers actually turned him off, despite his enormous respect for companies like Google and Facebook. He felt like the money was at least as much an effort to prevent the creation of OpenAI as a play to win his services, and it pushed him even further towards the startup’s magnanimous mission. “I realized,” Zaremba says, “that OpenAI was the best place to be.” That’s the irony at the heart of this story: even as the world’s biggest tech companies try to hold onto their researchers with the same fierceness that NFL teams try to hold onto their star quarterbacks, the researchers themselves just want to share. In the rarefied world of AI research, the brightest minds aren’t driven by—or at least not only by—the next product cycle or profit margin. They want to make AI better, and making AI better doesn’t happen when you keep your latest findings to yourself. This morning, OpenAI will release its first batch of AI software, a toolkit for building artificially intelligent systems by way of a technology called “reinforcement learning”—one of the key technologies that, among other things, drove the creation of AlphaGo, the Google AI that shocked the world by mastering the ancient game of Go. With this toolkit, you can build systems that simulate a new breed of robot, play Atari games, and, yes, master the game of Go. But game-playing is just the beginning. OpenAI is a billion-dollar effort to push AI as far as it will go. In both how the company came together and what it plans to do, you can see the next great wave of innovation forming. We’re a long way from knowing whether OpenAI itself becomes the main agent for that change. But the forces that drove the creation of this rather unusual startup show that the new breed of AI will not only remake technology, but remake the way we build technology.

AI Everywhere

Silicon Valley is not exactly averse to hyperbole. It’s always wise to meet bold-sounding claims with skepticism. But in the field of AI, the change is real. Inside places like Google and Facebook, a technology called deep learning is already helping Internet services identify faces in photos, recognize commands spoken into smartphones, and respond to Internet search queries. And this same technology can drive so many other tasks of the future. It can help machines understand natural language, the natural way that we humans talk and write. It can create a new breed of robot, giving automatons the power to not only perform tasks but learn them on the fly. And some believe it can eventually give machines something close to common sense—the ability to truly think like a human.

“If you share research without restriction, bad actors could grab it before anyone has ensured that it’s safe.”

But along with such promise comes deep anxiety. Musk and Altman worry that if people can build AI that can do great things, then they can build AI that can do awful things, too. They’re not alone in their fear of robot overlords, but perhaps counterintuitively, Musk and Altman also think that the best way to battle malicious AI is not to restrict access to artificial intelligence but to expand it. That’s part of what has attracted a team of young, hyper-intelligent idealists to their new project. OpenAI began one evening last summer in a private room at Silicon Valley’s Rosewood Hotel—an upscale, urban, ranch-style hotel that sits, literally, at the center of the venture capital world along Sand Hill Road in Menlo Park, California. Elon Musk was having dinner with Ilya Sutskever, who was then working on Google Brain, the company’s sweeping effort to build deep neural networks—artificially intelligent systems that can learn to perform tasks by analyzing massive amounts of digital data, including everything from recognizing photos to writing email messages to, well, carrying on a conversation. Sutskever was one of the top thinkers on the project. But even bigger ideas were in play.

Sam Altman, whose Y Combinator helped bootstrap companies like Airbnb, Dropbox, and Coinbase, had brokered the meeting, bringing together several AI researchers and a young but experienced company builder named Greg Brockman, previously the chief technology officer at the high-profile Silicon Valley digital payments startup Stripe, another Y Combinator company. It was an eclectic group. But they all shared a goal: to create a new kind of AI lab, one that would operate outside the control not only of Google, but of anyone else. “The best thing that I could imagine doing,” Brockman says, “was moving humanity closer to building real AI in a safe way.” Musk was there because he’s an old friend of Altman’s—and because AI is crucial to the future of his various businesses and, well, the future as a whole. Tesla needs AI for its inevitable self-driving cars. SpaceX, Musk’s other company, will need it to put people in space and keep them alive once they’re there. But Musk is also one of the loudest

voices warning that we humans could one day lose control of systems powerful enough to learn on their own.

The trouble was: so many of the people most qualified to solve all those problems were already working for Google (and Facebook and Microsoft and Baidu and Twitter). And no one at the dinner was quite sure that these thinkers could be lured to a new startup, even if Musk and Altman were behind it. But one key player was at least open to the idea of jumping ship. “I felt there were risks involved,” Sutskever says. “But I also felt it would be a very interesting thing to try.”

Breaking the Cycle

Emboldened by the conversation with Musk, Altman, and others at the Rosewood, Brockman soon resolved to build the lab they all envisioned. Taking on the project full-time, he approached Yoshua Bengio, a computer scientist at the University of Montreal and one of the founding fathers of the deep learning movement. The field’s other two pioneers—Geoff Hinton and Yann LeCun—are now at Google and Facebook, respectively, but Bengio is committed to life in the world of academia, largely outside the aims of industry. He drew up a list of the best researchers in the field, and over the next several weeks, Brockman reached out to as many on the list as he could, along with several others.


Many of these researchers liked the idea, but they were also wary of making the leap. In an effort to break the cycle, Brockman picked the ten researchers he wanted the most and invited them to spend a Saturday getting wined, dined, and cajoled at a winery in Napa Valley. For Brockman, even the drive into Napa served as a catalyst for the project.

“An underrated way to bring people together are these times where there is no way to speed up getting to where you’re going,” he says. “You have to get there, and you have to talk.” And once they reached the wine country, that vibe remained. “It was one of those days where you could tell the chemistry was there,” Brockman says. Or as Sutskever puts it: “the wine was secondary to the talk.”

By the end of the day, Brockman asked all ten researchers to join the lab, and he gave them three weeks to think about it. By the deadline, nine of them were in. And they stayed in, despite those big offers from the giants of Silicon Valley. “They did make it very compelling for me to stay, so it wasn’t an easy decision,” Sutskever says of Google, his former employer. “But in the end, I decided to go with OpenAI, partly because of the very strong group of people and, to a very large extent, because of its mission.”

Musk is one of the loudest voices warning that we humans could one day lose control of systems powerful enough to learn on their own.

The deep learning movement began with academics. It’s only recently that companies like Google and Facebook and Microsoft have pushed into the field, as advances in raw computing power have made deep neural networks a reality, not just a theoretical possibility. People like Hinton and LeCun left academia for Google and Facebook because of the enormous resources inside these companies. But they remain intent on collaborating with other thinkers. Indeed, as LeCun explains, deep learning research requires this free flow of ideas. “When you do research in secret,” he says, “you fall behind.”

As a result, big companies now share a lot of their AI research. That’s a real change, especially for Google, which has long kept the tech at the heart of its online empire secret. Recently, Google open sourced the software engine that drives its neural networks. But it still retains the inside track in the race to the future. Brockman, Altman, and Musk aim to push the notion of openness further still, saying they don’t want one or two large corporations controlling the future of artificial intelligence.

All of which sounds great. But for all of OpenAI’s idealism, the researchers may find themselves facing some of the same compromises they had to make at their old jobs. Openness has its limits. And the long-term vision for AI isn’t the only interest in play. OpenAI is not a charity. Musk’s companies could benefit greatly from the startup’s work, and so could many of the companies backed by Altman’s Y Combinator. “There are certainly some competing objectives,” LeCun says. “It’s a non-profit, but then there is a very close link with Y Combinator. And people are paid as if they are working in the industry.”

Cade Metz - Writer
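The toolkit described in the article above is built around reinforcement learning, which the piece name-checks but never unpacks. The sketch below is not OpenAI's software; it is a minimal, self-contained illustration of the underlying idea, tabular Q-learning on an invented five-state corridor, where an agent learns from reward alone that walking right pays off.

```python
import random

def train_q_learning(n_states=5, episodes=300, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning on a toy corridor: the agent starts at state 0
    and earns a reward of 1 only for stepping onto the final state."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n_states)]  # Q[state][action]; 0 = left, 1 = right
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # Epsilon-greedy: mostly exploit current estimates, sometimes explore.
            a = rng.randrange(2) if rng.random() < eps else max((0, 1), key=lambda x: q[s][x])
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            # Core update: nudge Q toward reward plus discounted best future value.
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = train_q_learning()
# After training, "right" should score higher than "left" in every non-terminal state.
policy = ["right" if q[s][1] > q[s][0] else "left" for s in range(4)]
```

The same act-observe-update loop, scaled up with deep neural networks in place of the table, is the family of techniques that drove systems like AlphaGo.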


tech• no• pho• bia Fear of A.I | 2016
Bord Gáis Energy Theatre • Friday June 10 • Saturday June 11 • Doors 9am / €40


ARTICLE


It’s Your Fault Microsoft’s Teen AI Turned Into Such a Jerk

It’s just a reflection of who we are. If we want to see technology change, we should just be nicer people.

Davey Alba - Writer

IT WAS THE unspooling of an unfortunate series of events involving artificial intelligence, human nature, and a very public experiment. Amid this dangerous combination of forces, determining exactly what went wrong is near-impossible. But the bottom line is simple: Microsoft has an awful lot of egg on its face after unleashing an online chat bot that Twitter users coaxed into regurgitating some seriously offensive language, including pointedly racist and sexist remarks. On Wednesday morning, the company unveiled Tay, a chat bot meant to mimic the verbal tics of a 19-year-old American girl, provided to the world at large via the messaging platforms Twitter, Kik and GroupMe. According to Microsoft, the aim was to “conduct research on conversational understanding.” Company researchers programmed the bot to respond to messages in an “entertaining” way, impersonating the audience it was created to target: 18- to 24-year-olds in the US. “Microsoft’s AI fam from the internet that’s got zero chill,” Tay’s tagline read. But it became apparent all too quickly that Tay could have used some chill. Hours into the chat bot’s launch, Tay was echoing Donald Trump’s stance on immigration, saying Hitler was right, and agreeing that 9/11 was probably an inside job. By the evening, Tay went offline, saying she was taking a break “to absorb it all.” Some of her more hateful tweets started disappearing from the Internet, deleted by Microsoft itself. “We have taken Tay offline and are making adjustments,” a Microsoft spokesperson wrote in an email to WIRED. The Internet, meanwhile, was puzzled. Why didn’t Microsoft create a plan for what to do when the conversation veered into politically tricky territory? Why not build filters for subjects like, well, Hitler? Why not program the bot so it wouldn’t take a stance on sensitive topics?

Yes, Microsoft could have done all this. The tech giant is flawed. But it’s not the only one. Even as AI is becoming more and more mainstream, it’s still rather flawed too. And, well, modern AI has a way of mirroring us humans. As this incident shows, we ourselves are flawed.

‘This is an example of the classic computer science adage: garbage in, garbage out.’

How Tay speaks

Tay, according to AI researchers and information gleaned from Microsoft’s public description of the chat bot, was likely trained with neural networks—vast networks of hardware and software that (loosely) mimic the web of neurons in the human brain. Those neural nets are already in wide use at the biggest tech companies—including Google, Facebook and, yes, Microsoft—where they’re at work automatically recognizing faces and objects on social networks, translating online phone calls on the fly from one language to another, and identifying commands spoken into smartphones. Apparently, Microsoft used vast troves of online data to train the bot to talk like a teenager. But that’s only part of it. The company also added some fixed “editorial” content developed by a staff, including improvisational comedians. And on top of all this, Tay is designed to adapt to what individuals tell it. “The more you chat with Tay the smarter she gets, so the experience can be more personalized for you,” is how Microsoft’s site describes Tay. In other words, Tay learns more the more we interact with her.

It’s similar to another chat bot the company released over a year ago in China, a creation called Xiaoice. Xiaoice, thankfully, did not exhibit a racist, sexist, offensive personality. It still has a big cult following in the country, with millions of young Chinese interacting with her on their smartphones every day. The success of Xiaoice probably gave Microsoft the confidence that it could replicate it in the US.

Given all this, and looking at the company’s previous work on Xiaoice, it’s likely that Tay used a living corpus of content to figure out what to say, says Dennis R. Mortensen, the CEO and founder of x.ai, a startup offering an online personal assistant that automatically schedules meetings. “[The system] injected new data on an ongoing basis,” Mortensen says. “Not only that, it injected exact conversations you had with the chat bot as well.” And it seems there was no way of adequately filtering the results. Unlike the hybrid human-AI personal assistant M from Facebook, which the company released in August, there are no humans making the final decision on what Tay would publicly say.
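Microsoft has not published Tay's internals, so the snippet below is a deliberately crude stand-in: a toy bot whose only "learning" is to store what users tell it and echo it back. Even at this scale it reproduces the failure mode the article describes; with no filter, whatever goes into the corpus comes back out.

```python
class ParrotBot:
    """Toy chat bot that 'learns' by storing user messages and echoing them
    back later. This is NOT Tay's real architecture (Microsoft has not
    published it); it only illustrates the garbage-in, garbage-out dynamic."""

    def __init__(self, blocklist=()):
        self.corpus = ["hello!", "tell me more"]   # seed "editorial" content
        self.blocklist = {w.lower() for w in blocklist}

    def chat(self, message):
        # Learn from the user unless the message trips the filter.
        if not any(w in message.lower() for w in self.blocklist):
            self.corpus.append(message)
        # Reply with the most recently learned phrase.
        return self.corpus[-1]

naive = ParrotBot()
naive.chat("badword is great")          # poisoned: stored and echoed verbatim
filtered = ParrotBot(blocklist=["badword"])
filtered.chat("badword is great")       # filtered: never enters the corpus
```

Tay's real pipeline was vastly more sophisticated, but the underlying hazard is the same: a system that adapts to raw user input inherits that input's worst habits unless something screens it first.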



A Map of the Brain Could Teach Machines to See Like You

It is imagination that allows us to predict future events and use that to guide our actions.

Emily Singer - Writer

TAKE A THREE-YEAR-OLD to the zoo, and she intuitively knows that the long-necked creature nibbling leaves is the same thing as the giraffe in her picture book. That superficially easy feat is in reality quite sophisticated. The cartoon drawing is a frozen silhouette of simple lines, while the living animal is awash in color, texture, movement and light. It can contort into different shapes and looks different from every angle. Humans excel at this kind of task. We can effortlessly grasp the most important features of an object from just a few examples and apply those features to the unfamiliar. Computers, on the other hand, typically need to sort through a whole database of giraffes, shown in many settings and from different perspectives, to learn to accurately recognize the animal. Visual identification is one of many arenas where humans beat computers. We’re also better at finding relevant information in a flood of data; at solving unstructured problems; and at learning without supervision, as a baby learns about gravity when she plays with blocks. “Humans are much, much better generalists,” said Tai Sing Lee, a computer scientist and neuroscientist at Carnegie Mellon University in Pittsburgh. “We are still more flexible in thinking and can anticipate, imagine and create future events.”

An ambitious new program, funded by the federal government’s intelligence arm, aims to bring artificial intelligence more in line with our own mental powers. Three teams composed of neuroscientists and computer scientists will attempt to figure out how the brain performs these feats of visual identification, then make machines that do the same. “Today’s machine learning fails where humans excel,” said Jacob Vogelstein, who heads the program at the Intelligence Advanced Research Projects Activity (IARPA). “We want to revolutionize machine learning by reverse engineering the algorithms and computations of the brain.” Time is short. Each team is now modeling a chunk of cortex in unprecedented detail. In conjunction, the teams are developing algorithms based in part on what they learn. Next summer, each of those algorithms will be given an example of a foreign item and then required to pick out instances of it from thousands of images in an unlabeled database. “It is a very aggressive time-frame,” said Christof Koch, chief scientific officer of the Allen Institute for Brain Science in Seattle, which is working with one of the teams. Koch and his colleagues are now creating a complete wiring diagram of a small cube of brain—a million cubic microns, totaling one five-hundredth the volume of a poppy seed. That’s orders of magnitude larger than the most extensive complete wiring map to date, which was published last June and took roughly six years to complete.

“We still don’t have a complete listing of the parts that make up the cortex, what the individual cells look like, how they are connected.”



By the end of the five-year IARPA project, dubbed Machine Intelligence from Cortical Networks (Microns), researchers aim to map a cubic millimeter of cortex. That tiny portion houses about 100,000 neurons, 3 to 15 million neuronal connections, or synapses, and enough neural wiring to span the width of Manhattan, were it all untangled and laid end-to-end. No one has yet attempted to reconstruct a piece of brain at this scale. But smaller-scale efforts have shown that these maps can provide insight into the inner workings of the cortex. In a paper published in the journal Nature in March, Wei-Chung Allen Lee—a neuroscientist at Harvard University who is working with Koch’s team—and his collaborators mapped out a

wiring diagram of 50 neurons and more than 1,000 of their partners. By pairing this map with information about each neuron’s job in the brain— some respond to a visual input of vertical bars, for example—they derived a simple rule for how neurons in this part of the cortex are anatomically connected. They found that neurons with similar functions are more likely to both connect to and make larger connections with each other than they are with other neuron types. While the implicit goal of the Microns project is technological—IARPA funds research that could eventually lead to data analysis tools for the intelligence community, among other things—new and profound insights into the brain will have to come first. Andreas Tolias, a

neuroscientist at Baylor College of Medicine who is co-leading Koch’s team, likens our current knowledge of the cortex to a blurry photograph. He hopes that the unprecedented scale of the Microns project will help sharpen that view, exposing more sophisticated rules that govern our neural circuits. Without knowing all the component parts, he said, “maybe we’re missing the beauty of the structure.”

Processing Units

The convoluted folds covering the brain’s surface form the cerebral cortex, a pizza-sized sheet of tissue that’s scrunched to fit into our skulls. It is in many ways the brain’s microprocessor. The sheet, roughly three millimeters thick, is made up of a series of repeating modules, or microcircuits, similar to the array of logic gates in a computer chip. Each module consists of approximately 100,000 neurons arranged in a complex network of interconnected cells. Evidence suggests that the basic structure of these modules is roughly the same throughout the cortex. However, modules in different brain regions are specialized for specific purposes such as vision, movement or hearing.

Scientists have only a rough sense of what these modules look like and how they act. They’ve largely been limited to studying the brain at smaller scales: tens or hundreds of neurons. New technologies designed to trace the shape, activity and connectivity of thousands of neurons are finally allowing researchers to analyze


how cells within a module interact with each other; how activity in one part of the system might spark or dampen activity in another part. “For the first time in history, we have the ability to interrogate the modules instead of just guessing at the contents,” Vogelstein said. “Different teams have different guesses for what’s inside.” The researchers will focus on a part of the cortex that processes vision, a sensory system that neuroscientists have explored intensively and that computer scientists have long striven to emulate. “Vision seems easy—just open your eyes—but it’s hard to teach computers to do the same thing,” said David Cox, a neuroscientist at Harvard who leads one of the IARPA teams.

‘One challenge will be dealing with the enormous amounts of data the research produces: 1 to 2 petabytes of data per cubic millimeter of brain.’

Each team is starting with the same basic idea for how vision works, a decades-old theory known as analysis-by-synthesis. According to this idea, the brain makes predictions about what will happen in the immediate future and then reconciles those predictions with what it sees. The power of this approach lies in its efficiency—it requires less computation than continuously recreating every moment in time. The brain might execute analysis-by-synthesis any number of different ways, so each team is exploring a different possibility. Cox’s team views the brain as a sort of physics engine, with existing physics models that it uses to simulate what the world should look like. Tai Sing Lee’s team, co-led by George Church, theorizes that the brain has built a library of parts—bits and pieces of objects and people—and learns rules for how to put those parts together. Leaves, for example, tend to appear on branches. Tolias’s group is working on a more data-driven approach, where the brain creates statistical expectations of the world in which it lives.
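Stripped to a caricature, the analysis-by-synthesis theory described above is a predict-then-reconcile loop. The toy below is invented purely for illustration (it resembles none of the teams' actual models): it keeps a one-number internal model of the world, predicts each incoming signal, and spends effort only on the prediction error, which shrinks as the model improves.

```python
def analysis_by_synthesis(signal, rate=0.5):
    """Toy predict-and-reconcile loop: keep a running internal estimate,
    predict each incoming value, and correct the estimate by a fraction
    of the prediction error. Returns the per-step absolute errors."""
    estimate = 0.0           # internal model of the world (a single number)
    errors = []
    for observed in signal:
        prediction = estimate              # synthesis: what we expect to see
        error = observed - prediction      # analysis: mismatch with reality
        estimate += rate * error           # reconcile: update the model
        errors.append(abs(error))
    return errors

errs = analysis_by_synthesis([1.0] * 8)
# On a constant input the error halves each step: 1.0, 0.5, 0.25, ...
```

That shrinking error is the efficiency argument in miniature: once the predictions are good, reconciling them with reality costs almost nothing.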

His team will test various hypotheses for how different parts of the circuit learn to communicate. All three teams will monitor neuronal activity from tens of thousands of neurons in a target cube of brain. Then they’ll use different methods to create a wiring diagram of those cells. Cox’s team, for example, will slice brain tissue into layers thinner than a human hair and analyze each slice with electron microscopy. The team will then computationally stitch together each cross section to create a densely packed three-dimensional map that charts millions of neural wires on their intricate path through the cortex. With a map and activity pattern in hand, each team will attempt to tease out some basic rules governing the circuit. They’ll then program those rules into a simulation and measure how well the simulation matches a real brain. Tolias and collaborators already have a taste of what this type of approach can accomplish. In a paper published in Science in

Andreas Tolias and collaborators mapped out the connections among pairs of neurons and recorded their electrical activity. The complex anatomy of five neurons (top left) can be boiled down to a simple circuit diagram (top right). Injecting electrical current into neuron 2 makes the neuron fire, triggering electrical changes in the two cells downstream, neurons 1 and 5 (bottom).



ISSN 7291-5713

