Mission Critical: Perception and Cognition

Page 1

Volume 3, No. 4 • November 2013 • AUVSI • 2700 S. Quincy St., Suite 400, Arlington, VA 22206, USA

Perception and Cognition

Inside This Issue:

Getting a Feel for Haptics
Can You Pass the Turing Test?
Robots That Sense More Like Humans




CONTENTS

On the Cover
6  Brain in a Box: European Project Kicks Off With Ambitious Goal: Understand and Replicate the Human Brain

4  Essential Components: Perception and Cognition News
10 State of the Art: Thinking About Perception and Cognition Around the World
12 Do You Feel Like I Do? Robots Leverage Haptics for a More Human Touch
16 Technology Gap: From the Mouths of Bots: Natural Language Learning in AI
19 Q & A: Katsu Yamane, Disney Research
20 Timeline: Artificial Intelligence: A Timeline
22 Uncanny Valley: The Turing Test
24 Testing, Testing: Getting Robots to Perceive More Like Humans
26 Spotlight: How Susceptible Are Jobs to Automation?
29 End Users: Ghostwriter: Algorithms Write Books

On the Cover: Kinova Research’s Jaco robotic arm leverages haptic technology to be able to grasp objects as delicate as an egg. Photo courtesy Kinova Research. Page 12.

Mission Critical is published four times a year as an official publication of the Association for Unmanned Vehicle Systems International. Contents of the articles are the sole opinions of the authors and do not necessarily express the policies or opinion of the publisher, editor, AUVSI or any entity of the U.S. government. Materials may not be reproduced without written permission. All advertising will be subject to publisher’s approval and advertisers will agree to indemnify and relieve publisher of loss or claims resulting from advertising contents. Annual subscription and back issue/reprint requests may be addressed to AUVSI.



Editor's Message

Editorial
Vice President of Communications and Publications, Editor: Brett Davis, bdavis@auvsi.org
Managing Editor: Danielle Lucey, dlucey@auvsi.org

Contributing Writers: Rich Tuttle, Ashley Addington

Advertising
Senior Business Development Manager: Mike Greeson, mgreeson@auvsi.org, +1 571 255 7787

A publication of AUVSI

President and CEO: Michael Toscano
Executive Vice President: Gretchen West

AUVSI Headquarters
2700 S. Quincy St., Suite 400, Arlington, VA 22206 USA
+1 703 845 9671, info@auvsi.org, www.auvsi.org

How Can We Have Robots Think Like Us? And Do We Really Want Them To?

Brett Davis

This issue of Mission Critical tackles some big issues in the world of robotics: How do robots and unmanned systems perceive the world around them? And, having perceived that, how do they respond?

If you take a look at the Timeline story beginning on Page 20, you can see that the modern idea of artificial intelligence has taken several twists and turns since researchers first began thinking about how machines think. We spend a little time in this issue talking about the Turing Test, posited by Alan Turing, which concludes that if a machine can fool a human into thinking it’s not a machine, then it’s intelligent. We also spend some time looking at how researchers have moved beyond that idea, instead pushing toward massive data sets that computers can sift through to, say, beat Garry Kasparov at chess or Ken Jennings at “Jeopardy!” As Ken Ford, CEO of the Institute for Human and Machine Cognition, says on Page 23, people didn’t learn how to fly until they quit trying to fly like birds and instead discovered the laws of aerodynamics.

Perception is a key part of this issue. Beginning on Page 12, writer Rich Tuttle takes a look at haptics, or the science of touch, and how it can lead to robots that are better equipped to navigate the world around them. Beginning on Page 24, we also take a look at how knowledge databases can help robots perceive the world around them by giving them an idea of what to expect when they roll into an office or classroom. That idea continues in the Q & A on Page 19, where a Disney research institute uses a similar idea to help robots interact with people.

The idea of replicating the way humans think hasn’t gone away, however. It has just gotten more sophisticated. Beginning on Page 6, we take a look at a major new European initiative to replicate a human brain inside a supercomputer and then figure out how it works. That could lead to better ways to fight brain disease as well as the creation of new types of computers and robots that could be more intelligent. The United States also has kicked off a new research project to help understand the brain.

So, in the near term, computers — and by extension, some robots — will probably think along the lines of Deep Blue or Watson, the IBM supercomputers that rely on massive databases. In the future, however, they might think more like humans. Rather than being programmed, they can learn, and they can do that more efficiently. Coupled with new sensors, such as fingertips that can actually feel, mobile robots of the future could be useful in ways we can only imagine today.



Brain Scans Digitally Remastered in MRI

Virginia Tech’s Carilion Research Institute has found a way to make better use of brain scans with the help of computer imaging.

Researchers are using real-time functional magnetic resonance imaging, which allows real-time thought to be immediately transformed into action by noninvasively capturing dimensions of activity in the brain. The hope is that this technology can lead to better treatments for a variety of brain disorders through simple mind-reading capabilities.

“Our brains control overt actions that allow us to interact directly with our environments, whether by swinging an arm or singing an aria. Covert mental activities, on the other hand — such as visual imagery, inner language or recollections of the past — can’t be observed by others and don’t necessarily translate into action in the outside world,” Stephen LaConte, an assistant professor at the Virginia Tech Carilion Research Institute, said in a press release.

In the study, scientists used whole-brain imaging to observe how subjects think when given a particular command. Subjects who were in greater control of their thoughts produced better brain scans than those who simply let their minds wander.

“When people undergo real-time brain scans and get feedback on their own brain activity patterns, they can devise ways to exert greater control of their mental processes,” said LaConte. “This, in turn, gives them the opportunity to aid in their own healing. We want to use this effect to find better ways to treat brain injuries and psychiatric and neurological disorders.”

Stephen LaConte (right) and members of his lab. Photo courtesy Virginia Tech.

Click on this QR code to see a video presentation of this research.

The Key to Teaching Computers to See? Thinking Like a Computer

Researchers at the Massachusetts Institute of Technology have discovered that the key to successful object recognition is to rework recognition algorithms so that researchers can view the software from the computer’s perspective. The new process translates the objects being processed and identified into a mathematical view, then translates them back into an image for recognition.

This allows researchers to better understand why recognition software has a success rate of only 30 to 40 percent. With the goal of creating a smaller error margin, computer recognition software can then continue to make breakthroughs in the realm of artificial intelligence.

The main object detection approach studied is HOG, or histogram of oriented gradients. HOG breaks an image into several pieces, identifies the gradients in each piece, and labels the resulting feature pattern. “This feature space, HOG, is very complex,” says Carl Vondrick, an MIT graduate student in electrical engineering and computer science. “A bunch of researchers sat down and tried to engineer, ‘What’s the best feature space we can have?’ It’s very high dimensional. It’s almost impossible for a human to comprehend intuitively what’s going on. So what we’ve done is built a way to visualize this space.”

The MIT students hope the visualization will be a useful research tool for better understanding how algorithms and recognition software intertwine and will improve students’ research experience.
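For readers curious what a HOG feature space looks like in practice, the short sketch below is our own illustration, not the MIT team’s code: it assumes the open-source scikit-image library and computes HOG features for a test image, then renders them back as an image, the same kind of round trip between pixels and mathematical view that the researchers describe.

```python
# A minimal sketch (assumes scikit-image is installed; not the MIT code)
# showing how HOG features are computed and visualized.
import numpy as np
from skimage import data, color
from skimage.feature import hog

image = color.rgb2gray(data.astronaut())   # any grayscale test image

# HOG splits the image into cells, builds a histogram of gradient
# orientations per cell, then normalizes over blocks of cells.
features, hog_image = hog(
    image,
    orientations=9,            # number of gradient-direction bins
    pixels_per_cell=(8, 8),
    cells_per_block=(2, 2),
    visualize=True,            # also return a rendering of the feature space
)

print(features.shape)          # a single high-dimensional feature vector
# hog_image is the feature space translated back into image form, which is
# the kind of visualization the MIT work describes.
```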


Essential Components

Center for Brains, Minds and Machines Founded

The National Science Foundation has funded a new research center at the Massachusetts Institute of Technology that will study artificial intelligence. The Center for Brains, Minds and Machines is an interdisciplinary study center that will focus on how the human brain can be replicated in machines.

The center will be a multi-institution collaboration, with professors from MIT, Harvard and Cornell, to name a few. The center will also have industry partners, such as Google, IBM, Boston Dynamics, Willow Garage and Rethink Robotics. Research will focus on how the human body can be further integrated into computers, with topics revolving around vision, language and motor skills, circuits for intelligence, the development of intelligence in children and social intelligence.

“Those thrusts really do fit together, in the sense that they cover what we think are the biggest challenges facing us when we try to develop a computational understanding of what intelligence is all about,” says Patrick Winston, the Ford Foundation Professor of Engineering at MIT and research coordinator for CBMM.

With the research interests so closely linked, the likelihood of progress is higher, according to the researchers: all of the senses work together in the body for it to be able to understand and grasp its surroundings. The idea for the center was launched on MIT’s 150th anniversary in 2011, and the funding will be distributed over the next five years.

“We know much more than we did before about biological brains and how they produce intelligent behavior. We’re now at the point where we can start applying that understanding from neuroscience, cognitive science and computer science to the design of intelligent machines,” says Tomaso Poggio, the Eugene McDermott Professor of Brain Sciences and Human Behavior at MIT.

The center will also play a key role in the new BRAIN Initiative, an effort by federal agencies and private partners to better understand how the brain works. For more on that effort, see the story beginning on Page 6.

Illustration courtesy Christine Daniloff/MIT.

The Human Brain Easily Tricked by Artificial Finger

A recently published study has found that the brain does not need multiple senses to decide whether an artificial finger belongs to the body, a finding that is a major breakthrough for neuroscience research.

In an experiment conducted by Neuroscience Research Australia, participants held an artificial finger with their left hand that was placed above their right index finger. The participants’ hands were anesthetized so that they went numb and sensation in the joints was removed, and their vision was blocked. When the participants were then allowed to look while both fingers were moved simultaneously, the brain instantly accepted the artificial finger as its own.

“Grasping the artificial finger induces a sensation in some subjects that their hands are level with one another, despite being 12 centimeters apart,” Prof. Simon Gandevia, deputy director of NeuRA, said in a press release. “This illusion demonstrates that our brain is a thoughtful, yet at times gullible, decision maker. It uses available sensory information and memories of past experiences to decide what scenario is most likely. …”

This finding gives a new understanding of how the brain identifies its own body. Unlike past experiments that focused on the brain’s association with the five main senses, this experiment showed that muscle receptors are a key component in communication with the brain.



An image of a neuron cluster, which the Human Brain Project hopes to understand and replicate. Image courtesy HBP.

BRAIN IN A BOX
European Project Kicks Off With Ambitious Goal: Understand and Replicate the Human Brain
By Brett Davis



A new European Commission initiative that kicked off in October seeks to unravel one of the greatest challenges facing science: simulate the human brain to be able to understand how it works and replicate it. The results could help develop new computing technologies that would finally allow computers and robotic systems to have “brain-like” intelligence, meaning they could learn and think much the way we do.

“What we are proposing is to establish a radically new foundation to explore and understand the brain, its diseases and to use that knowledge to build new computer technologies,” says Henry Markram, a professor at École Polytechnique Fédérale de Lausanne and coordinator of the project, in a project video.

The Human Brain Project was launched under the EC’s Future and Emerging Technologies program. It will involve thousands of researchers from more than 130 research institutes and universities around the world and is intended to be a decade-long effort. The ramp-up phase, which just kicked off in October and runs through March 2016, has been funded at 54 million euros; the overall effort has been earmarked about 1 billion euros by the European Commission.

Put simply, the main goal is to create an artificial replica of a human brain in a supercomputer. The project consists of three broad areas: neuroscience, aimed at understanding how the brain works; medicine, aimed at battling the diseases that can affect it; and computing, aimed at creating electronic versions of brain processes.

“It’s an infrastructure to be able to build and simulate the human brain, objectively classify brain diseases and build radically new computing devices,” Markram says. The human brain is able to perform computations that modern computers still can’t, all while consuming the same energy as a light bulb.

“One of the Human Brain Project’s most important goals is to develop a completely new category of neuromorphic computing systems,” says a project video. “Chips, devices and systems directly inspired by detailed models of the human brain.” As Karlheinz Meier, codirector of the project’s neuromorphic computing effort, puts it in one project video, “What we build is physical models of human circuits on silicon substrates.” Such systems will transform industry, transportation systems, health care and “our daily lives,” the video says.

The effort kicked off at a conference held 6 to 11 Oct.

at the campus of Switzerland’s EPFL. The plan is to launch six research platforms and test them for the next 30 months. The platforms will be dedicated to neuroinformatics, brain simulation, high-performance computing, medical informatics, neuromorphic computing and neurorobotics. Beginning in 2016, the platforms are to be available for use by both Human Brain Project scientists and other researchers around the world. The resources will be available on a competitive basis, similar to the way astronomers compete to use large telescopes.

13 Areas

The three main focus areas are further divided into 13 sub-areas, which include neuromorphic computing and neurorobotics.

The computing effort will develop the Neuromorphic Computing Platform, a supercomputer system that will run brain model emulations. The system will consist of two computing systems, one in Heidelberg, Germany, and one in Manchester, England. “Platform users will be able to study network implementations of their choice, including simplified versions of brain models developed on the Brain Simulation Platform or generic circuit models based on theoretical work,” says the project’s website.

The neurorobotics effort is led by the Technische Universität München, or Technical University of Munich, and EPFL, along with Spain’s University of Granada. It will provide a platform for taking brain models and plugging them into a high-fidelity simulator that includes a simulated robot. Researchers can take behaviors based on the human brain model, apply them to the robot, and see if they work and what happens in the brain model when they are carried out. “We consider this a starting point for a completely new development in robotics,” says Alois Knoll of Munich. “It will be much more powerful than anything we have had before in robotics simulation.”

The computing part of the project won’t try to develop classical artificial intelligence. “The challenge in artificial intelligence is to design algorithms that can produce intelligent behavior and to use them to build intelligent machines,” the project website says.



It doesn’t matter if the algorithms are realistic in a biological sense, as long as they work. The brain project, however, wants to create processors that actually work like the human brain.

“We will develop brain models with learning rules that are as close as possible to the actual rules used by the brain and couple our models to virtual robots that interact with virtual environments. In other words, our models will learn the same way the brain learns. Our hope is that they will develop the same kind of intelligent behavior,” the project says on its website. “We know that the brain’s strategy works. So we expect that a model based on the same strategy will be much more powerful than anything AI has produced with ‘invented’ algorithms.”

The resulting computer systems would be different from today’s computers in that they won’t need to be programmed, but instead can learn. Where current computers use stored programs and storage areas that contain precise representations of specific bits of information, the system the project hopes to create will rely on artificial neurons modeled after human ones, with all their built-in capabilities and weaknesses.

“Their individual processing elements — artificial neurons — will be far simpler and faster than the processors we find in current computers. But like neurons in the brain, they will also be far less accurate and reliable. So the HBP will develop new techniques of stochastic computing that turn this apparent weakness into a strength — making it possible to build very fast computers with very low power consumption, even with components that are individually unreliable and only moderately precise,” the website says.

Ultimately, such systems could be available for daily use, according to project researchers. They could be standalone computers, integrated into other systems, “even as brains for robots.”
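The stochastic-computing idea the project describes can be illustrated with a toy example. The sketch below is our own illustration, not HBP code: a value is carried as the fraction of 1s in a random bit stream, so a single unreliable bit barely matters, and multiplication reduces to a bitwise AND.

```python
# Toy illustration of stochastic computing (not from the Human Brain Project).
import random

def bitstream(p, n=10_000):
    """Encode a value p in [0, 1] as a random stream of n bits with P(bit = 1) = p."""
    return [1 if random.random() < p else 0 for _ in range(n)]

def decode(stream):
    """Recover the value as the fraction of 1s in the stream."""
    return sum(stream) / len(stream)

a, b = bitstream(0.8), bitstream(0.5)
product = [x & y for x, y in zip(a, b)]   # a single AND gate multiplies the values

print(f"decoded product ~ {decode(product):.2f} (exact answer 0.40)")
# Flipping a few bits at random barely changes the decoded value, which is why
# imprecise, low-power components can still compute usefully in this scheme.
```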

Competition

European researchers aren’t the only ones interested in delving into the mysteries of the human brain. The White House has announced a somewhat similar effort. Earlier this year, it announced Brain Research through Advancing Innovative Neurotechnologies — the BRAIN Initiative — which also seeks to replicate brain structures and functions.

An image of the brain, still poorly understood. Image courtesy HBP.

“Such cutting-edge capabilities, applied to both simple and complex systems, will open new doors to understanding how brain function is linked to human behavior and learning and the mechanisms of brain disease,” says the White House news blog.

The effort is launching with more than $100 million in funding for research supported by the National Institutes of Health, DARPA and the National Science Foundation. Foundations and private research institutions are also taking part, including the Allen Institute for Brain Science, which plans to spend about $60 million a year on projects related to the initiative, and the Kavli Foundation, which plans to spend $4 million a year over the next decade, according to the White House.

NIH has announced the initial nine areas of its research, which will be funded at $40 million in 2014. They include generating a census of brain cell types, creating structural maps of the brain and developing large-scale neural network recording capabilities. DARPA plans to allocate $50 million to the work in 2014, mainly with an eye toward creating new information processing systems and mechanisms that could help warfighters suffering from posttraumatic stress, brain injury and memory loss.
The NSF plans to spend $20 million to work toward molecular-scale probes that can sense and record the activity of neural networks; help make advances in systems to analyze the huge amounts of data that brain research can create; and understand how thoughts, emotions, memories and actions are represented in the brain. The Allen Institute, founded in 2003 by Microsoft cofounder Paul Allen, has launched a 10-year initiative to understand “neural coding,” or a study of how information is coded and decoded in the mammalian brain, according to the White House. It is also a formal partner in the European Human Brain Project. Brett Davis is editor of Mission Critical.


An infographic of the BRAIN Initiative. Image courtesy the White House.



THINKING ABOUT PERCEPTION AND COGNITION AROUND THE WORLD

Researchers around the world are finding ways to create machines that can better sense their environment and react to the people around them, ranging from robotic harp seals that aid elderly patients to golf courses that know when they need to be watered.

UNITED KINGDOM
Robotic seals have been shown to help increase the quality of life and cognitive activity of patients with dementia in a new U.K. clinical study. Paro, a robotic harp seal made in Japan, interacts with patients with artificial intelligence software and sensors that allow it to move, respond to touch and sound and display different emotions.

CALIFORNIA
Google researchers have made a breakthrough for recognition software on mobile and desktop computers. The machine vision technique can recognize more than 100,000 different types of objects in photos in a matter of minutes.

EVERYWHERE
Facebook has set up a team to develop applications based on “deep learning,” the same technique Google’s software uses, to improve the news feeds of its users by giving them more relevant content and better-targeted advertising.

SPAIN, PORTUGAL
A new, smarter way to water golf courses has arisen in Spain and Portugal. The EU WaterGolf project intends to save water and find a smarter way to keep the playing greens green. New wireless technology laced throughout a course will suggest irrigation parameters using 3-D mapping, drainage data and weather forecasts.

ITALY
Smart homes are now becoming realities with new technologies capable of making everyday tasks new and futuristic. Companies like Italy’s Hi Interiors are creating concepts to wildly change every aspect of a home, such as the HiCan, or high-fidelity canopy, a bed built with portable blinds, Wi-Fi, an entertainment system and a projector screen that emerges from the foot of the bed.



GERMANY
Bielefeld University has begun analyzing the body language of customers in bars with a new robotic bartender, James, short for Joint Action in Multimodal Embodied Systems. James is capable of making eye contact with customers and receiving drink orders as well as delivering them with his one arm and four fingers. No word yet on whether James is a good listener or if he will cut off customers who have had too many.

HUNGARY
The Hungarian Academy of Sciences and Eötvös Loránd University have found that dogs interact better with robots when the robots are socially active towards them. PeopleBot, a human-sized robot, got along better with canines when it behaved the way a human would behave.

ISRAEL
Scientists working with the Centers for Disease Control and Prevention are finding ways to use artificial intelligence to help prevent the next global flu outbreak. A group of artificial intelligence researchers, including some from Tel Aviv University, is composing algorithms based on past outbreak data to recognize key properties of dangerous new flu strains.

JAPAN
Epsilon, a Japanese rocket that relies heavily on artificial intelligence for its final safety checks before takeoff, recently launched into space. The new software allowed the rocket to take off with only eight people at the launch site instead of the usual 150.

INDIA
IPsoft has created a humanoid robot capable of answering 67,000 phone calls and 100,000 emails every day. “She” handles the office’s dirty work, can solve IT diagnostic problems and can be seen and interacted with on a customer’s computer screen.



Do You Feel Like I Do?

Robots Leverage Haptics for a More Human Touch
By Rich Tuttle

Haptics and robots were made for each other. Haptics, the science of touch, allows robots to feel as well as see, making them more effective at jobs they do today, like some kinds of surgery, and potentially able to do things they don’t do today, like aerial refueling.

To control a remote system, “whether it’s in space or just next door, you’d like to be able to interact with things in the same way you interact with things in the real world, so in that sense haptics is a very well suited technology for robotic telemanipulation,” says Jason Wheeler, head of Sandia National Laboratories’ Cybernetics group.

One way to interact is with tactile sensors. They feel what a robot finger, for instance, is touching and prompt the robot, or its human operator, to react accordingly. But such sensors either haven’t existed until recently or have been too costly to be embedded in the robots that researchers have been working with so far, says Mark Claffee, principal robotics engineer at iRobot.

Now, with sensor technology advancing, and with computers becoming more capable and cheaper, iRobot and others are closing in on robots that, “at least on some level, understand how to adjust themselves to better interact with objects,” Claffee says. “And they’re going to do that through tactile sensing and through haptic-type feedback, either to themselves or to a human operator.”

Kinova Research’s Jaco robotic arm, which the company is fitting to wheelchairs to assist those with mobility problems. Photo courtesy the company.

Haptic Challenge

One big program taking advantage of such technology is DARPA’s Robotics Challenge, or DRC, which aims to develop robots that can help victims of natural or man-made disasters. Among other things, the DRC robots will have to be dexterous, which implies an ability to feel.

iRobot and Sandia have supplied robotic hands to teams involved in the DRC. The teams are also getting Atlas robots from Boston Dynamics. The hands and other systems will compete on these robots in a series of tasks in December at the Homestead-Miami Speedway. Other teams that have been developing robots from scratch also will compete.

iRobot’s hand has three fingers and Sandia’s has four, but both feature a “skin” with embedded tactile sensors. Sandia’s has fingerprints to help in gripping and fingernails — “that’s how you can pick up a flat key from a surface,” says George “Sandy” Sanzero, manager of Sandia’s Intelligent Systems, Robotics and Cybernetics Department.

iRobot and Sandia developed the hands for another DARPA program, Autonomous Robot Manipulation-Hardware, or ARM-H. When that program ended a couple of years ago, DARPA decided to use these hands for the DRC, says Sandia’s Wheeler.

iRobot’s Claffee says hands are “the critical interface between the robot system and the world around it.” He says a robot in the competition typically won’t relay information to a human operator, but rather will “understand where it has touched an object and how hard it is touching it at [a specific] point on the hand, and use its own software to understand what the right grasping strategy is.” That would be too much information to try to relay to a human operator, he says.

“Our vision in terms of haptics and tactile sensing is let the robots understand the sensory input that’s coming in to them and make decisions for themselves on how to adjust their grasp, or how to move their fingers to get a more stable grasp on the object.”
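A toy sketch of that idea follows. The sensor values, threshold and gain are invented for illustration and are not iRobot’s or Sandia’s; the point is simply that the hand can close the loop on its own tactile readings rather than streaming them to an operator.

```python
# A made-up grasp-adjustment loop: tighten or relax grip based on tactile readings.
TARGET_PRESSURE = 0.6   # desired normalized contact pressure (illustrative)
TOLERANCE = 0.05
GAIN = 0.5

def adjust_grip(finger_pressures, grip_force):
    """One control step: correct the commanded grip force from average fingertip pressure."""
    avg = sum(finger_pressures) / len(finger_pressures)
    error = TARGET_PRESSURE - avg
    if abs(error) <= TOLERANCE:
        return grip_force                       # stable grasp, no change needed
    return max(0.0, grip_force + GAIN * error)  # proportional correction

force = 0.2
for readings in ([0.10, 0.15, 0.12], [0.35, 0.40, 0.30], [0.58, 0.62, 0.60]):
    force = adjust_grip(readings, force)
    print(f"tactile avg {sum(readings)/3:.2f} -> commanded grip force {force:.2f}")
```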

Researching Haptics

Canada’s Kinova Research is linking tactile sensors and robotics in another way — to help those in wheelchairs. Its robotic manipulator arms, fixed to wheelchairs, increase the mobility of people with upper spinal cord injuries, for example. Fitted with new tactile sensors, high-tech manipulators like Kinova’s Jaco and Mico will be even more effective, says Francis Boucher, chief business development officer of the Montreal company. The sensor is the product of work by Kinova and the University of Quebec’s École de Technologie Supérieure. Prof. Vincent Duchaine of ETS says it’s “probably the most sensitive tactile sensor.” It can feel everything from a gentle breath to “very high forces, so it has a very wide range.”

Carnegie Mellon University in Pittsburgh is using Kinova’s Mico in a program for several national agencies called Smart and Connected Health. The idea, according to a recent government announcement, is to spur “next-generation health and healthcare research through high-risk, high-reward advances in the understanding of and applications in information science, technology, behavior, cognition, sensors, robotics, bioimaging and engineering.”

Mico is “a beautiful piece of technology,” says Sidd Srinivasa, associate professor of robotics and director of the Personal Robotics Lab at Carnegie Mellon. “We’re going to be building cutting-edge technology for this robot arm that will hopefully very soon reach real people who need this.” He also recognizes that while humans take for granted their ability to touch and feel, it’s “incredibly hard” for robots, because they “don’t have anywhere close to the resolution or fidelity of haptic sensing that humans do.”

“The interaction between mind and fingertips in a human is a wonderful and, at present, not-duplicable feat,” says Frank Tobe of The Robot Report.

Henrik I. Christensen, KUKA Chair of Robotics at Georgia Tech, illustrated this interaction in an experiment. He showed that it takes a person about five seconds to strike a match. But with fingertip anesthesia, it takes about 25 seconds and a lot of fumbling. “There’s no haptic feedback,” Christensen said in a recent TED presentation. “You have your regular muscle control and everything else. I’ve just taken away your fingertip feeling. We’re incredibly [reliant] on this fingertip sensing.”

But Srinivasa says human-like fingertip feeling for robots isn’t likely to be developed soon. This means that while it’s important to continue to develop better haptic technology, it’s also important to come up with ways to more effectively compensate for robots’ relative lack of touch. Pressure sensors, for instance, could help fill the gap.

Srinivasa hypothesizes that humans use this very technique, inferring forces through deformation. In other words, “when I fold a piece of paper, if I press too hard, it deforms more than if I press less, and that perceptual channel, deformation channel, is acting as a proxy for the haptic channel. It’s the same thing when you’re operating on squishy stuff. If the stuff squishes, then you know that you’re exerting some amount of force.” He says, “Humans are really good at compensating for missing channels with other channels,” and the lab is trying to get robots to do the same thing.

Tactile sensors are one way to close the haptic loop. Another is to put sensors on a robot’s human operator. Cambridge Research and Development of Boston has developed a “linear actuator” called Neo that’s about the size of a watch and that a person can wear on a headband or armband. There’s no vibration or force feedback in remote surgery or other uses, “and adaptation takes mere minutes as the brain rapidly associates the Neo’s pressure application with the sense of touch,” according to the company. Cambridge CEO Ken Steinberg says surgeons and others “can operate [a] robot freely and they can feel whatever the robot’s feeling.”

Steinberg sees all kinds of applications, including aerial refueling. Today, he says, a refueling boom is guided visually by an operator from a tanker to the plane being refueled. With Cambridge’s technology, a boom operator would have “feeler gauges” to make a more deft contact with the other plane. “It’s not purely a visual experience,” he says. “It’s also a haptic feedback experience.” The same technique might be even more appealing if both the tanker and the plane being refueled were robots, Steinberg says. The operator in that case could be on the ground.

Rich Tuttle is a longtime aerospace and defense journalist and contributor to AUVSI’s Mission Critical and Unmanned Systems magazines.

The back of DARPA’s Atlas robot, which will use haptics research to help develop robots that could aid in the wake of disasters. Photo courtesy DARPA.





Technology Gap

From the Mouths of Bots
Natural Language Learning in AI

IBM’s Watson can beat out any human competitor on “Jeopardy!” It can mine patient data to help doctors make more accurate diagnoses. It can even analyze thousands of pages of financial data published every day to make more informed investment choices. But does Watson need a teenager to help translate phrases like OMG?

Natural language learning is a big challenge in artificial intelligence, but it has seen some success with applications like speech-to-text typing programs. Researchers at IBM wanted to take that a step further with Watson, introducing it to slang phrases. And the initial results weren’t quite what researchers bargained for.

Eric Brown, the researcher in charge of the project, introduced Watson to Urban Dictionary, a wiki-style online lexicon of common — and oftentimes not so common — vernacular inputted by visitors to the site. The intention was to make Watson understand that OMG meant “Oh, my God,” and that “hot mess” doesn’t mean there was an accident in the kitchen.

Watson loaded onto a smartphone. Photo courtesy Jon Simon/Feature Photo Service for IBM.

The actual result was that Watson accidentally picked up a potty mouth. “Watson couldn’t distinguish between polite language and profanity — which the Urban Dictionary is full of,” said Brown in an interview with Fortune magazine. “Watson picked up some bad habits from reading Wikipedia as well. In tests, it even used the word bulls--- in an answer to a researcher’s query.” After that incident, the 35-person team working on the project had to construct a filter to wash Watson’s proverbial mouth out with soap. They also scrapped Urban Dictionary from its memory entirely.

Natural Language at the Press of a Button

Arguably the natural language computing technology most familiar to consumers today is Apple’s Siri personal assistant. Siri has its origins in an artificial intelligence project by SRI International, developed for DARPA, that Apple bought in 2010 for $200 million. The actual voice of Siri was recently revealed to be Susan Bennett, a voice-over actress living in Atlanta, Ga. Apple has not confirmed this; however, audio forensics experts have verified the voice is hers. For four hours a day in July 2005, she read phrases documenting the English language’s myriad vowel sounds, consonants, blends, diphthongs and glottal stops.

However, simply recording a person’s voice and playing it back is not how Siri works. The first step is understanding the user’s command. When you press the home button on an iPhone and speak, your voice gets digitally coded and the signal gets relayed through a cell tower to a cloud server that has speech models ready to analyze the noise. The phone also performs a local speech evaluation to identify whether the command can be handled by the phone itself, like cueing up a song stored on the device; if that’s the case, it informs the cloud that the signal is no longer needed. Otherwise, a server compares the sounds in the speech pattern to its series of known human language sounds, runs them through a language model to estimate words and then determines what the most probable command might mean.

The second step, Siri’s response, comes in the form of computer-generated speech, which leverages much of the same knowledge as analyzing the speech. And although this process can sound rather dry, like Watson, Siri is also prone to interesting replies.

“There were many conversations within the team about whether it … should have an attitude,” says Norman Winarsky, vice president of SRI International, talking about the pre-Apple work on Siri to The Wall Street Journal. And sometimes this sass is intended to make those hip to artificial intelligence smirk. For instance, if a user tells Siri, “Open the pod bay doors” — a reference to a command given to the malcontent sentient computer HAL 9000 of “2001: A Space Odyssey” — the program will respond, “We intelligent agents will never live that down, apparently.”
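The local-versus-cloud decision described above can be sketched in a few lines. Everything below is a toy stand-in, not Apple’s implementation; the dictionary lookup and keyword check merely mark where the on-device check, the server-side acoustic and language models, and the command ranking would sit.

```python
# Toy sketch of a two-stage voice-command pipeline (illustrative only).
LOCAL_COMMANDS = {"play song": "playing from the on-device library"}

def local_intent(text):
    """On-device evaluation: can the phone handle this without the cloud?"""
    return LOCAL_COMMANDS.get(text)

def cloud_interpret(text):
    """Stand-in for the server-side acoustic model, language model and command ranking."""
    if "weather" in text:
        return "It is sunny today."
    return "Sorry, I did not understand."

def handle_utterance(recognized_text):
    # In the real pipeline the audio is digitized and relayed to a server; here the
    # already-recognized text is passed in directly to keep the sketch runnable.
    local = local_intent(recognized_text)
    if local is not None:
        return local                          # the cloud request is no longer needed
    return cloud_interpret(recognized_text)   # most probable command, spoken back

print(handle_utterance("play song"))
print(handle_utterance("what is the weather"))
```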






Q&A

Katsu Yamane

Senior Research Scientist at Disney Research

Katsu Yamane is senior research scientist at Disney Research in Pittsburgh, Pa., where he has worked since 2008. His main area of research is humanoid robot control and motion synthesis. In a recent study, he worked on ways for robots to perceive that humans are handing them objects.

A test subject interacts with a robot that can receive an object handed to it. Image courtesy Disney Research.

Q: What problems does this research help solve?

A: This research helps robots make physical interaction with humans natural. Motion-planning algorithms are becoming quite powerful, but they still have to spend a long time to generate robot motions compared to normal human reaction time. Robots would have to react to human motions much more quickly to make interactions natural.

Q: By using a database of human motions, do you sidestep the need for greater perception or intelligence on the part of the robot?

A: Yes, that’s exactly the idea of this research. Obviously, humans can react to other persons’ motions instantly, and humans expect the same speed when they interact with robots. By using a human motion database, we can leverage the human motion planning process as a “black box” and just use its results.

Q: Is the robot able to learn to accept differently shaped objects, or is this mostly focused on allowing it to know when it is being offered something and to synchronize its motion?

A: We are currently focusing on recognizing the handoff motion and synchronizing the robot arm motion. However, if the different object shapes result in different arm motions, the robot can recognize those shapes. In the future, we could combine this technique with a vision system to recognize the shape of the object being handed.

Q: What is the benefit of “teasing” the robot as shown in a video?

A: This demo was just to demonstrate that the robot can quickly react, even if the human motion changes abruptly.

Q: Can you describe the hierarchical data structure that you developed? How does that work, and how does it help the robot?

A: The data structure is an extension of the classical binary tree data structure from information theory. Conventionally, this data structure has been used for quickly searching and sorting data that can be easily ordered, such as text. Human motion, on the other hand, is in a very high-dimensional space and therefore not easy to order. We developed a method for organizing human poses into a binary data structure. The data structure allows the robot to search for poses in the database that are similar to what it is seeing right now.

Q: What robot motions and components would you like to add in the future?

A: We would like to add natural hand motions to grab the handed object. We would also like to have the robot do additional tasks after receiving an object, such as putting it into a bag.

Q: Is there a need for robots to have haptic sensors, or would that help?

A: Haptic sensors would certainly help the robot recognize that the object is indeed in the hand.

Q: What other technology advances could aid in this research or in its eventual use by robotics in a variety of roles (in a factory, in a home, etc.)?

A: Computer vision technology to recognize the human motion and object shape would be essential to put this research into practical use, because we can’t use motion capture systems in such environments.

Q: Why is Disney Research interested in this work?

A: My expertise is in motion synthesis and control of humanoid robots. We are interested in exploring autonomous, interactive robots and physical human-robot interaction using whole-body motions.

Q: Why is this research important for the future?

A: If robots are to work in factories and homes in the future, interactions with humans must be intuitive, seamless and natural. By learning from human-human interaction, we can model how humans interact, which in turn makes the robot motions and interactions look natural to humans.



TIMELINE

Artificial Intelligence: A Timeline

1950: Mathematician and codebreaker Alan Turing devises the Turing Test, which involves a computer attempting to trick a person into believing it is another human.

1951: Christopher Strachey writes one of the first machine learning game programs, using checkers. Studies found that the use of games helped scientists learn how to train computers to think for themselves.

1956: The term artificial intelligence is coined, and the field is recognized as an academic discipline, at a conference at Dartmouth College.

1966: MIT’s Joseph Weizenbaum creates one of the earliest natural language processing programs, ELIZA. The program took users’ answers and processed them into scripts with human-like responses. Versions of the program are still available today.

1969: The Stanford Research Institute conducts experiments with Shakey, one of the first mobile robot systems. It had the ability to move and observe its environment as well as do simple problem solving.

1979: Jack Myers and Harry Pople at the University of Pittsburgh develop the INTERNIST knowledge-based medical diagnosis program, which was based on clinical knowledge. It is able to make multiple diagnoses related to internal medicine.

1985: The first autonomous drawing program, AARON, created by Harold Cohen, is demonstrated at the AAAI national conference.

1993: Ian Horswill advances behavior-based robotics with Polly, the first robot capable of navigating with the use of vision. Polly was able to move at a speed of 1 meter per second.

1995: ALVINN, or Autonomous Land Vehicle In a Neural Network, steers a car coast-to-coast under computer control. ALVINN is a semiautonomous perception system that could learn to drive by watching people do it.

1996: Chess champion Garry Kasparov defeats IBM’s Deep Blue computer in a chess match. In 1997, an upgraded Deep Blue defeats Kasparov.

2000: The Nomad robot explores remote parts of Antarctica looking for meteorite samples. Nomad autonomously finds and classifies dozens of terrestrial rocks and five indigenous meteorites.

2005: Honda’s ASIMO robot gains the ability to walk at the same gait as a human while delivering trays to customers in a restaurant setting.

2011: IBM’s Watson defeats the two greatest “Jeopardy!” champions, Brad Rutter and Ken Jennings. Watson is an artificially intelligent computer capable of answering questions in natural language. It was developed by David Ferrucci’s team in IBM’s DeepQA project. The same year, Apple releases Siri (Speech Interpretation and Recognition Interface) for the first time on the iPhone 4S. Siri is a spinoff of DARPA’s CALO project, which stands for Cognitive Assistant that Learns and Organizes.



The Turing Test
Party Game, Turned Philosophical Argument, Turned Competition

Illustration courtesy iStock Photo.

The Turing Test, introduced by legendary mathematician and codebreaker Alan Turing, is a means of determining a machine’s ability to exhibit intelligent behavior that is at least equal to a human being’s. As introduced in 1950, the test involves a person who is communicating with another person and a machine. Both attempt to convince the subject that they are human through their responses. If the subject can’t tell the difference, then the computer wins the game, dubbed the Imitation Game, which was based on a party game of the time. The Turing Test has been held up as the very definition of artificial intelligence, although it arguably hasn’t been met yet.


That’s not for lack of trying. Work on Turing’s idea led, in the 1960s, to the development of “chatbots,” or computer programs that would communicate back and forth with human interrogators. The best known is ELIZA, developed in 1966, which replicated the communication of a psychotherapist. One modern descendant of ELIZA is Apple’s Siri, which may have helped give you traffic directions this morning.

A yearly competition, the controversial International Loebner Prize in Artificial Intelligence, seeks to find the best of such chatbots and has been rewarding them since 1991. So far, every winner has taken home a bronze medal and $4,000. Should any program fool two or more judges when compared to two or more humans, the competition will then begin requiring “multimodal” entries that incorporate music, speech, pictures and videos. Should a computer program win that, by fooling half the judges, its creators will win a $100,000 grand prize and the competition will end.

This year’s winner — not of the big prize, but of another bronze medal — is a chatbot named Mitsuku, programmed by Steve Worswick of Great Britain, who told the BBC that he initially built a chatbot to attract users to his dance music website, only to discover they were more interested in parrying with the chatbot.

On a blog entry on Mitsuku’s website in September, after the award was announced, he noted that winning even the annual competition has its benefits — people are paying attention. “Today has been a bit of a strange day for me,” he wrote. “I usually have around 400-500 people a day visit this site, and at the time of writing this, I have had 9,532 visitors from all corners of the globe, as well as being mentioned on various sites around the net. I was even interviewed by the BBC this morning.” In prepping for the prize, he wrote that he had been working to remove Mitsuku’s “robotic” responses to some questions, along the lines of, “I have no heart, but I have a power supply” or, “Sorry, my eye is not attached at the moment.” Funny, but not likely to fool humans into thinking she’s a real girl.

Birdwatching

The Turing Test has been controversial throughout its history. Ken Ford, CEO of the Institute for Human and Machine Cognition, a not-for-profit research institute of the Florida University System, says the test was part of a “maturing process” for the field of AI.

“AI went through a maturing process where the initial goals for AI were sort of philosophical and posed by Alan Turing, in some ways, as a thought experiment. … They weren’t technical goals,” he says. “He was arguing with philosophers about the possibility of an intelligent machine. The philosophers said it’s not possible; we would never admit it.”

In the early days, “the focus was very much on building mechanical simulacrums of human reasoning. It became a notion of building a machine whose dialogue was indistinguishable from a human. I believe that was a mistake. At this point, very few serious AI researchers believe they are trying to pass the Turing Test. The Turing Test is both too easy and too hard at the same time,” he says.

“Early flight pioneers tried to mimic birds. The laws of aerodynamics were never discovered by birdwatching. The blind mimicry of the behavior of the natural system without understanding the underlying principles usually leads one astray. We didn’t have flight until we built things that didn’t fly like birds or look like birds. This is very much analogous to AI.”

Ford says now, as the industry matures, “we are more in the Wright brothers stage. We are going in that direction rather than birdwatching and bird building. If we could build an AI that always passed the Turing Test, and it could do it perfectly, I can’t imagine what use it would be.”

In recent days, there has been sort of a Turing Test in reverse. The quirky Twitter feed of @Horse_ebooks, thought to be a malfunctioning “spambot” that for years appeared to fire off random passages of electronic books, was revealed to instead be the work of two artists from the website BuzzFeed. ABC News reported that thousands who followed the account “were surprised to learn today that the account was not run by a spambot or a robot but by two human beings as an elaborate ‘conceptual art installation.’”

Further Advances Several deep-pocket companies today are funding what they hope will be breakthroughs in AI, which initially may resemble chatbots such as Mitsuku, albeit much smarter ones. In the most recent example, Microsoft cofounder Paul Allen recently announced that he had hired Oren Etzioni as executive director of the Allen Institute for Artificial Intelligence in Seattle. Etzioni was previously director of the Turing Center at Seattle neighbor the University of Washington. In a press release, Allen said he hired Etzioni because “he shares my vision and enthusiasm for the exciting possibilities in the field of AI, including opportunities to help computers acquire knowledge and reason.”



From Top to Bottom

Getting Robots to Perceive More Like Humans

Aptima’s Cognitive Patterns architecture enables robots to make sense of their environment, much like humans do. This robot has Cognitive Patterns integrated onto it for a test. Photo courtesy Aptima.

Where are you? It’s a simple question for a human to answer, but passing that knowledge on to a robot has long been a complex perception task. Cognitive Patterns aims to leverage what researchers already know about how humans perceive their environment and exploit that information when building autonomous systems. The prototype software, developed by Massachusetts-headquartered company Aptima Inc., leverages the open-source Robot Operating System, or ROS, to enable this kind of processing on any type of platform.

“One of the things that’s not particularly commonly known outside of cognitive science … is that humans perceive and really think about very selective aspects of the world and then build a whole big picture based on things we know,” says Webb Stacy, a cognitive scientist and psychologist working for Aptima.

In essence, humans derive a large portion of their sense of where they are from their brains, not from their surroundings. Past experiences feed future expectations about, for instance, what objects might be in a room called an office or a cafeteria. “You don’t have to really pick up information about exactly what a table, phone or notebook looks like, you already kind of know those things are going to be in an office, and as a result the process of perceiving what’s in an office or making sense of the setting is as much coming from preexisting knowledge as it is directly from the senses,” he says. This way of perceiving is called top-down, bottom-up processing.

Machine perception, however, is typically driven by what Stacy terms bottom-up processing, where data from sensors are streamed through mathematical filters to make sense of what an object might be and where a robot is in relation to it. Cognitive Patterns is revolutionary, says Stacy, because it provides a knowledge base for a robot: the robot can combine that knowledge base with visual data provided by sensors to extrapolate information about its environment, in a fashion more akin to top-down, bottom-up processing.

Aptima secured funding for Cognitive Patterns through a small business innovative research grant from DARPA, and Stacy stresses that what the company is doing is practical and can be done without a multimillion-dollar budget. The first phase of the project was limited to simulated models, whereas the second phase put the software on a robot.
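A minimal sketch of what “combining a knowledge base with sensor data” can mean in code follows. The room priors, object labels and detector scores are invented for illustration and are not Aptima’s; the point is that the same ambiguous bottom-up evidence is resolved differently depending on top-down expectations about the room.

```python
# Toy top-down, bottom-up fusion: prior knowledge about a room type weights
# noisy detector scores. All numbers below are made up for illustration.
ROOM_PRIORS = {
    "office":    {"desk": 0.9, "phone": 0.7, "whiteboard": 0.5, "oven": 0.01},
    "cafeteria": {"table": 0.9, "tray": 0.6, "oven": 0.4, "whiteboard": 0.05},
}

def classify(detector_scores, room_type):
    """Weight bottom-up detector scores by top-down expectations for the room."""
    priors = ROOM_PRIORS[room_type]
    fused = {
        label: score * priors.get(label, 0.1)   # unexpected objects get a small prior
        for label, score in detector_scores.items()
    }
    return max(fused, key=fused.get)

# Bottom-up evidence alone is ambiguous between "desk" and "table" ...
scores = {"desk": 0.55, "table": 0.50, "oven": 0.20}
# ... but knowing which room the robot is in tips the decision.
print(classify(scores, "office"))      # -> desk
print(classify(scores, "cafeteria"))   # -> table
```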


Testing, Testing

“When you move from simulation to physical hardware, you’re dealing with an awful lot of uncertainty,” he says. “So it’s one thing in the simulation to say, here are some features that get presented to the system for recognition. It’s another thing on a real robot wandering around perceiving things.” However, Stacy says using ROS makes it easier to go from simulation to real-world application. In phase two, Aptima integrated the cognitive aspects of its software with visual information inputted by sensors built by iRobot. To test the prototype, Aptima secured the use of a local Massachusetts middle school, so the company could test the principles of its software in a real-world environment. For this second phase, Aptima’s DARPA agent worked at the Army Research Laboratory. This allowed the company to integrate Cognitive Patterns with a robotic cognitive model the ARL had, called the SubSymbolic Robot Intelligent Controlling System, or SS RICS. That program combines language-based learning with lower level perception. Stacy says Cognitive Patterns aligns somewhere between the two. The system has an operator interface that Stacy says allows the user to communicate with the robot on a high level. “The operator might say, ‘I don’t have a map. Go find the cafeteria and see if there are kids there,’” he says. “Now that’s a very high level of command, because for a machine there’s all kinds of stuff that

it needs to figure out there. … And one of the really interesting things we did here is we looked to see if we couldn’t generate new knowledge whenever we encountered something that we didn’t understand.”
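As a rough illustration of why that command counts as “high level,” the sketch below expands one instruction into the chain of subtasks a robot would have to work out for itself. The knowledge entries and task wording are hypothetical; they are not drawn from Cognitive Patterns or SS RICS.

```python
# Hypothetical expansion of one high-level operator command into the subtasks
# a robot must infer for itself. Knowledge entries and wording are invented.

KNOWLEDGE = {
    "cafeteria": {"likely_contains": ["tables", "trays", "kids"],
                  "likely_near": ["the gym", "the main hallway"]},
}

def plan(command):
    """Turn a high-level instruction into an ordered list of subtasks,
    filling the gaps with prior knowledge about the named place."""
    target = next((place for place in KNOWLEDGE if place in command.lower()), None)
    if target is None:
        return ["ask the operator to clarify the target"]
    hints = KNOWLEDGE[target]
    return [
        "explore toward " + " and ".join(hints["likely_near"]),
        "classify each room by whether it contains " + ", ".join(hints["likely_contains"][:2]),
        f"once a {target} is found, scan for kids and report back",
    ]

for step in plan("Go find the cafeteria and see if there are kids there."):
    print("-", step)
```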

The operator can place augmented reality, or AR, tags on certain objects, and they act as a proxy for using computer vision to recognize those objects. The robot is then able to learn those features and apply them to future scenarios. For instance, the team did this once when a robot using Cognitive Patterns was in a room with a weapons cache, so the next time it entered a similar scenario it would have a cognitive model on which to base its perceptions. The operator can also tell the robot to ignore certain surroundings or label special things in its environment, such as determining what a student desk is, because that object is a blend of a traditional school chair and a separate desk.

This higher level of communication benefits the operator, says Stacy, because, aside from software developers, most operators won’t care about the detailed features being extracted from the robot’s camera system.

Finding New Uses

Now Aptima is working on another prototype as a follow-on to its DARPA contract: implementing Cognitive Patterns on prosthetics. The company is developing an intelligent prosthetic through a contract with the Office of Naval Research and the Office of the Secretary of Defense, in which a prosthetic would be controlled by the neurosignatures of a person’s brain, interfacing the limb with the user’s mind.

Through collaboration with a prosthetics expert, Aptima is using its software to let the arm communicate with its user in a more natural way, such as through muscle twitches.

“The arm’s going to have sensors on it, and so it’s going to be doing the same kind of thing the robot is doing with Cognitive Patterns, which is perceiving its environment, knowing the things it can reach for,” he says.

Perfecting Perception

The end goal of this work is to address a current disconnect over how robots should perceive their environments: the machine vision community is pushing optimal mathematical models to perfect bottom-up processing, while the cognitive science community is seeking the best cognitive model to get a robot to think, says Stacy. “We are starting to see the need to hook up with each other, and Cognitive Patterns really is the intersection between those two. … If that happens, we’ll have robots that can really see and understand the world the way that humans do, and that’s been elusive for a long time in robotics.”

Click on this QR code to see a video on Cognitive Patterns.



The Future of Employment: How Susceptible are Jobs to Automation?

By Ashley Addington

Personal assistant robots, like PAL Robotics’ REEM-H1, could be used for service jobs in the future. Photo courtesy PAL Robotics.



Spotlight

A new Oxford study has emerged addressing how the job market could change in the decades ahead.

In the study, conducted by Carl Benedikt Frey and Michael A. Osborne and entitled “The Future of Employment: How Susceptible are Jobs to Computerisation,” the researchers broke down how jobs are likely to evolve and what that evolution could mean for workers around the world. “Before I was even conducting this study, I was really interested in how technology was advancing, especially in the job market,” Osborne said.

The conclusion: Over the next two decades, 47 percent of jobs across the more than 700 occupations examined could be affected by automation. The jobs most at risk are low-level jobs that don’t require higher education. Jobs such as librarians, telemarketers and database managers could be among the first to be replaced by automation, the researchers say. Jobs that require creativity and higher education, such as those in health care, media, education and the arts, would be less likely to be handled by computers because of the need for spontaneous thought.

“People need to be aware of how important education is. People with jobs that require more education will be least likely to lose their job,” Osborne said.

Now, for the first time, low-wage manual jobs are also at risk. Jobs in agriculture, housekeeping, construction and manufacturing could be some of the areas handed over to computers and robots. Jobs that require perception and manipulation are safer, because they involve interacting with other human beings and because much of the work in these careers changes on a daily basis. It would be close to impossible to program a computer with every possible scenario that can arise in dealings between individuals.

“Most management, business and finance occupations, which are intensive in generalist tasks requiring social intelligence, are largely confined to the low-risk category,” the study says. Science and engineering jobs are also not at risk, because they require a large amount of creative intelligence. The higher the salary and education demands, the less likely a job is to be taken over by a machine.

“Awareness is essentially what we have tried to produce from this study. We want to make more people aware of what the future potentially holds and that education is the key for people to keep their careers,” Osborne said.

Jobs in transportation are also at risk due to the automation of vehicles. Sensor technology has improved vastly over the past few years, which has led to increased safety and security in vehicles and has provided enough data to help engineers work past longstanding problems in robotic development. “These will permit an algorithmic vehicle controller to monitor its environment to a degree that exceeds the capabilities of any human driver. Algorithms are thus potentially safer and more effective drivers than humans,” the study says.

As with any technological change, for every job that changes there is the possibility of new growth in other areas. For computers and robots to do particular jobs, someone must set up and monitor them to make sure everything functions as intended, and jobs will be added to assist and manage the technology. “First, as technology substitutes for labor, there is a destruction effect, requiring workers to reallocate their labor supply; and second, there is the capitalization effect, as more companies enter industries where productivity is relatively high, leading employment in those industries to expand,” the study says.

Computerization has the potential to change the job market and society by being more thorough and completing most tasks faster. Algorithms can make decisions without bias, allowing them to reach conclusions more quickly than human operators. “Even though the job market is changing, there is no need to panic. The best thing people can do for themselves is to stay current with technology and be thoroughly educated,” Osborne said.

There are plans to continue the Oxford study, focusing on the wide range of jobs that have the potential to change, which ones are most susceptible and how the transition should be handled.
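Frey and Osborne fit a probabilistic classifier to expert-labeled occupations; the toy scorer below only illustrates the underlying intuition that jobs leaning heavily on the “bottleneck” traits of perception and manipulation, creative intelligence and social intelligence come out at lower risk. The occupations, scores and formula here are invented and are not the study’s model.

```python
# Toy illustration of the study's intuition: the more a job depends on the
# three "bottleneck" traits, the lower its automation risk. The occupations,
# 0-1 scores and formula are invented; this is not Frey and Osborne's model.

BOTTLENECKS = ("perception_manipulation", "creative_intelligence", "social_intelligence")

OCCUPATIONS = {
    "telemarketer":      {"perception_manipulation": 0.1, "creative_intelligence": 0.1, "social_intelligence": 0.3},
    "truck driver":      {"perception_manipulation": 0.5, "creative_intelligence": 0.1, "social_intelligence": 0.2},
    "registered nurse":  {"perception_manipulation": 0.8, "creative_intelligence": 0.5, "social_intelligence": 0.9},
    "research engineer": {"perception_manipulation": 0.4, "creative_intelligence": 0.9, "social_intelligence": 0.6},
}

def automation_risk(traits):
    """Crude score: risk falls as the average bottleneck score rises."""
    return 1.0 - sum(traits[b] for b in BOTTLENECKS) / len(BOTTLENECKS)

for job, traits in sorted(OCCUPATIONS.items(), key=lambda kv: automation_risk(kv[1]), reverse=True):
    print(f"{job:18s} estimated risk: {automation_risk(traits):.2f}")
```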

Click on this QR code to read “The Future of Employment: How Susceptible are Jobs to Computerisation.”



Unmanned Systems eBrief

Bringing you the latest global news and information about unmanned systems. Send an email to eBrief@auvsi.org to begin receiving AUVSI’s Unmanned Systems eBrief.



End Users

Ghostwriter: Algorithms Write Books

Philip Parker, a marketing professor at the international graduate business school INSEAD, is also a published author, a very published author. Published so many times, in fact, that he makes prolific writers like Shakespeare or Stephen King seem downright lazy. Parker has penned more than one million “report length” books, which he self-publishes as paperbacks or print-on-demand books.

Unlike Shakespeare or Stephen King, they tend not to be about love or fear. Instead, they might be about such topics as “The 2007-2012 Outlook for Tufted Washable Scatter Rugs, Bathmats and Sets That Measure 6-Feet by 9-Feet or Smaller in India” or “The 2009-2014 World Outlook for 60-Milligram Containers of Fromage Frais.” There are some dictionaries in there too, and medical sourcebooks and textbooks. He also writes poetry: 1.4 million poems so far.

He doesn’t produce these articles, books and poems just to see his name on the shelf, but rather to serve very small, niche markets, particularly in the developing world, that traditional publishers have ignored because they are so small.

Parker doesn’t sit and crank these out himself, at least not in the traditional book-writing sense. Instead, he uses a series of computer programs to compile information on a given topic and then combines it into book form. The above-mentioned book titles might seem extremely arcane, but if you’re a small business in a developing country serving a specific market, he noted in a recent Seattle TED Talk, “you can’t purchase market data for some of these products.” It’s out there, but no traditional publisher is going to package it.

“I discovered there is a demand. It’s a very narrow demand, but the key problem is the author,” he said. “Authors are very expensive. They want food and things like that. They want income.”

The methodology for each type of book is studied and then copied into algorithms. As for how they work, “it depends on the subgenre,” he writes. “For example, the methodology for a crossword puzzle follows the logic of that genre. Trade reports follow a completely

different methodology. The methodology of the genre is first studied, then replicated using code.”

The programs are able to provide “the whole value chain” of the publishing industry, he says, by automating the “collecting, sorting, cleaning, interpolating, authoring and formatting.”

Parker says he began working on the programs-as-writer idea in the 1990s, trying various techniques, but “around 1999-2000 I was able to create the first commercial applications. The idea came from being able to do genres that otherwise would not have happened.” He hit on the economic model of “selling very expensive, high-end market research studies, completely computer generated, to subsidize the creation of language learning materials, mathematics books, etc. for the underserved languages.”

“We’ve published over about a million titles, most of them are high-end industry studies, but a lot of them are language learning,” he said in the talk while showing a video of what it looks like when a computer writes a book (many Web pages pop up in quick succession).

Parker’s system, EVE, is capable of creating textbooks for students in Africa who have never seen one in their language, collating weather reports for farmers in remote locations, even helping develop video games for agriculture extension agents so they can plan planting regimens for regions they have never seen. Much of the poetry is aimed at helping non-English speakers learn the language. It’s a way of corralling existing data in new ways to serve tiny markets.

The practice could be used in a variety of ways, he noted. Say, for example, you’re a football player and you don’t like your physics book because you can’t relate to it. “Why not have a football player physics book? Why not have a ballet dancer’s physics book?” he asked.

Going forward, he’d like to do even more, such as by having “virtual professors” who can “diagnose, teach and write original research in fields that do not have enough scientists or researcher[s] available. Fun stuff.”
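Parker has not published his code, so the snippet below is only a schematic, in Python, of the pipeline he describes: collect the data, clean and interpolate it, then author and format it from a genre template. The product, figures and wording are invented.

```python
# Schematic of a template-driven "authoring" pipeline of the kind Parker
# describes: collect data, clean and interpolate it, then render it into
# genre-shaped prose. Product, figures and wording are invented.

raw_data = {"product": "tufted washable scatter rugs",
            "country": "India",
            "sales_musd": {2024: 118.0, 2025: None, 2026: 131.0}}  # fictitious sales, $M

def clean_and_interpolate(series):
    """Fill simple one-year gaps by averaging the neighbouring years."""
    years = sorted(series)
    for i, year in enumerate(years):
        if series[year] is None and 0 < i < len(years) - 1:
            series[year] = (series[years[i - 1]] + series[years[i + 1]]) / 2
    return series

def author_report(data):
    """The 'authoring and formatting' step: drop the numbers into a genre template."""
    sales = clean_and_interpolate(data["sales_musd"])
    first, last = min(sales), max(sales)
    growth = (sales[last] - sales[first]) / sales[first] * 100
    return (f"The {first}-{last} Outlook for {data['product'].title()} in {data['country']}: "
            f"latent demand is projected to grow from ${sales[first]:.1f} million to "
            f"${sales[last]:.1f} million, a rise of {growth:.1f} percent over the period.")

print(author_report(raw_data))
```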



MAXIMIZE YOUR VISIBILITY

BECOME A CORPORATE MEMBER TODAY

DISCOUNTS
• Exhibits
• Sponsorships
• Advertising
• Event registration
• Complimentary job listings on AUVSI’s career center

ACCESS
• Listings online and in Unmanned Systems magazine
• Members-only networking, education and VIP events
• AUVSI Day on Capitol Hill
• Local chapters around the world
• Members-only online content
• AUVSI‘s online community

KNOWLEDGE
• Unmanned Systems magazine subscription
• Unmanned Systems e-Brief subscription
• Advocacy Action Alerts
• Knowledge resources

Join today at www.auvsi.org/Join

