Artificial Intelligence


Artificial Intelligence: Can Machines Think, and What Makes Us Human?




Big Brains

The brains in the business: looking at automata, coding, programming, mathematics, computing and human evolution.

Claude Elwood Shannon

Claude Elwood Shannon (April 30, 1916 – February 24, 2001) was an American mathematician, electronic engineer, and cryptographer known as "the father of information theory".

Shannon is famous for having founded information theory with one landmark paper that he published in 1948. However, he is also credited with founding both digital computer and digital circuit design theory in 1937, when, as a 21-year-old master's degree student at the Massachusetts Institute of Technology (MIT), he wrote his thesis demonstrating that electrical applications of Boolean algebra could construct and resolve any logical, numerical relationship. It has been claimed that this was the most important master's thesis of all time. Shannon contributed to the field of cryptanalysis for national defense during World War II, including his basic work on code breaking and secure telecommunications.

Charles Babbage

Charles Babbage, FRS (26 December 1791 – 18 October 1871) was an English mathematician, philosopher, inventor and mechanical engineer who originated the concept of a programmable computer. Considered a "father of the computer", Babbage is credited with inventing the first mechanical computer that eventually led to more complex designs. Parts of his uncompleted mechanisms are on display in the London Science Museum. In 1991, a perfectly functioning difference engine was constructed from Babbage's original plans. Built to tolerances achievable in the 19th century, the success of the finished engine indicated that Babbage's machine would have worked. Nine years later, the Science Museum completed the printer Babbage had designed for the difference engine.

Ada Lovelace

Augusta Ada King, Countess of Lovelace (10 December 1815 – 27 November 1852), born Augusta Ada Byron and now commonly known as Ada Lovelace, was an English mathematician and writer chiefly known for her work on Charles Babbage's early mechanical general-purpose computer, the analytical engine. Her notes on the engine include what is recognized as the first algorithm intended to be processed by a machine. As a young adult, she took an interest in mathematics, and in particular Babbage's work on the analytical engine. Between 1842 and 1843, she translated an article by the Italian mathematician Luigi Menabrea on the engine, which she supplemented with a set of notes of her own. These notes contain what is considered the first computer program — that is, an algorithm encoded for processing by a machine. Ada's notes are important in the early history of computers. She also foresaw the capability of computers to go beyond mere calculating or number-crunching while others, including Babbage himself, focused only on these capabilities.

Alan Mathison Turing

Alan Mathison Turing (23 June 1912 – 7 June 1954) was a British mathematician, logician, cryptanalyst, and computer scientist. He was highly influential in the development of computer science, giving a formalization of the concepts of "algorithm" and "computation" with the Turing machine, which can be considered a model of a general-purpose computer. Turing is widely considered to be the father of computer science and artificial intelligence. During World War II, Turing worked for the Government Code and Cypher School (GC&CS) at Bletchley Park, Britain's code-breaking centre. For a time he was head of Hut 8, the section responsible for German naval cryptanalysis. He devised a number of techniques for breaking German ciphers, including the method of the bombe, an electromechanical machine that could find settings for the Enigma machine. After the war, he worked at the National Physical Laboratory, where he created one of the first designs for a stored-program computer, the ACE. In 1948 Turing joined Max Newman's Computing Laboratory at Manchester University, where he assisted in the development of the Manchester computers and became interested in mathematical biology. He wrote a paper on the chemical basis of morphogenesis, and predicted oscillating chemical reactions such as the Belousov–Zhabotinsky reaction, which were first observed in the 1960s.

Turing's homosexuality resulted in a criminal prosecution in 1952, when homosexual acts were still illegal in the United Kingdom. He accepted treatment with female hormones (chemical castration) as an alternative to prison. Turing died in 1954, just over two weeks before his 42nd birthday, from cyanide poisoning.

John von Neumann

John von Neumann (December 28, 1903 – February 8, 1957) was a Hungarian-American mathematician and polymath who made major contributions to a vast number of fields, including mathematics (set theory, functional analysis, ergodic theory, geometry, numerical analysis, and many other mathematical fields), physics (quantum mechanics, hydrodynamics, and fluid dynamics), economics (game theory), computer science (linear programming, computer architecture, self-replicating machines, stochastic computing), and statistics. He is generally regarded as one of the greatest mathematicians in modern history.

Von Neumann was a pioneer of the application of operator theory to quantum mechanics and of the development of functional analysis, a principal member of the Manhattan Project and the Institute for Advanced Study in Princeton (as one of the few originally appointed), and a key figure in the development of game theory and the concepts of cellular automata, the universal constructor, and the digital computer. His mathematical analysis of the structure of self-replication preceded the discovery of the structure of DNA, and he worked out key steps in the nuclear physics involved in thermonuclear reactions and the hydrogen bomb. Von Neumann wrote 150 published papers in his life: 60 in pure mathematics, 20 in physics, and 60 in applied mathematics. His last work, an unfinished manuscript written while in the hospital and later published in book form as The Computer and the Brain, gives an indication of the direction of his interests at the time of his death.

Sir Geoffrey Jefferson

Sir Geoffrey Jefferson (1886–1961) was a British neurologist and pioneering neurosurgeon. He was educated in Manchester, England, obtaining his medical degree in 1909. He became a fellow of the Royal College of Surgeons two years later. He married in 1914 and moved to Canada. On the outbreak of World War I, he returned to Europe and worked at the Anglo-Russian Hospital in Petrograd, Russia, and then with the Royal Army Medical Corps in France. After the war, he returned to Manchester, working at the Salford Royal Hospital. It was here, in 1925, that Jefferson performed the first successful embolectomy in England. By 1934, he was a neurosurgeon at the Manchester Royal Infirmary, becoming the UK's first professor of neurosurgery at the University of Manchester five years later. The Jefferson fracture, which he was the first to describe, was named after him. Manchester Royal Infirmary also honors Jefferson with the Jefferson Suite, a training area in their Medical Education Campus. He was awarded the Lister Medal in 1948 for his contributions to surgical science. The corresponding Lister Oration, given at the Royal College of Surgeons of England, was not delivered until 1949, and was titled 'The Mind of Mechanical Man'. The subject of this lecture was the Manchester Mark 1, one of the earliest electronic computers, and Jefferson's lecture formed part of the early debate over the possibility of artificial intelligence.

Charles Robert Darwin

Charles Robert Darwin, FRS (12 February 1809 – 19 April 1882) was an English naturalist. He established that all species of life have descended over time from common ancestors, and proposed the scientific theory that this branching pattern of evolution resulted from a process that he called natural selection, in which the struggle for existence has a similar effect to the artificial selection involved in selective breeding. Darwin published his theory of evolution with compelling evidence in his 1859 book On the Origin of Species, overcoming scientific rejection of earlier concepts of transmutation of species. By the 1870s the scientific community and much of the general public had accepted evolution as a fact. However, many favored competing explanations, and it was not until the emergence of the modern evolutionary synthesis from the 1930s to the 1950s that a broad consensus developed in which natural selection was the basic mechanism of evolution. In modified form, Darwin's scientific discovery is the unifying theory of the life sciences, explaining the diversity of life.


The Brain

[Figure: labelled diagram of the left and right hemispheres, marking anatomical features (frontal, parietal, temporal and occipital lobes; central, precentral, postcentral and lunate sulci; precentral and postcentral gyri; longitudinal and Sylvian fissures; cerebellum, brain stem, spinal cord, thalamus, basal ganglia, hippocampus, amygdala, optic chiasm, infundibulum, pons, midbrain, mirror neuron) together with the popular "left brain / right brain" attributes: the left hemisphere annotated as well organized, goal oriented, quiet, precise, logical, rational, analytical, scientific, mathematical, verbal, factual, detail-minded, realistic, ordered, sequential and linear, a planner and critical thinker who likes to finish projects; the right hemisphere as impulsive, intuitive, psychic, holistic, random, creative, artistic, disorganised, unpredictable, emotional, visual, musical, erratic, guided by feelings, shapes, groups, patterns, fantasy and personality, and seeing the big picture.]

What The Brain Controls

One of the major scientific questions about the brain is how it can translate the simple intent to perform an action--say, reach for a glass--into the dynamic, coordinated symphony of muscle movements required for that action. The neural instructions for such actions originate in the brain's primary motor cortex, and the puzzle has been whether the neurons in this region encode the details of individual muscle activities or the high-level commands that govern kinetics--the direction and velocity of desired movements.

Now, Robert Ajemian and his colleagues, analyzing muscle function in monkeys, have created a mathematical model that captures the control characteristics of the motor cortex. It enabled the researchers to better sort out the "muscles-or-movement" question. The researchers described their model in an article in the May 8, 2008, issue of the journal Neuron, published by Cell Press.

Researchers have been thwarted in their efforts to measure and model the neural control of complex motions because muscle forces and positions constantly change during such motions. Also, the position sensors, called proprioceptors, in joints and muscles feed back constantly changing signals to the neurons of the motor cortex.

Ajemian and colleagues overcame these complexities by simplifying the experimental design. Rather than asking monkeys to carry out complex movements, they trained the animals to push on a joystick in different, specified ways to move a cursor on a screen to a desired target. This use of isometric force greatly simplified the measurements the researchers needed to make to define muscle and joint action.

As the monkeys carried out the isometric tasks, the researchers analyzed the patterns of muscle activations that corresponded with the isometric forces in different directions and at different postures. They then developed a model that enabled them to test hypotheses about the relationship between neuronal activity that they measured in the animals' motor cortex and the resulting actions. They said that their "joint torque model can be tested at the resolution of single cells, a level of resolution that, to our knowledge, has not been attained previously." They concluded that their model "suggests that neurons in the motor cortex do encode the kinetics of motor behavior."

"This model represents a significant advance, because it is strikingly successful in accounting for the way that the responses of individual [primary motor cortex] neurons vary with posture and force direction," commented Bijan Pesaran and Anthony Movshon in a preview of the article in the same issue of Neuron. "The results of Ajemian et al's analysis provide strong evidence that it is useful to think of the output of [primary motor cortex] neurons in terms of their influence on muscles. Their model, in effect, defines a 'projection field' for each [primary motor cortex] neuron that maps its output into a particular pattern of muscle actions." Pesaran and Movshon commented that "perhaps we should set aside the somewhat artificial dichotomy between muscles and movements, between the purpose and its functional basis, and recognize that the activation pattern of motor cortex neurons does two things--it specifies for the peripheral motor system both what to do and how to do it."

The researchers include Robert Ajemian, McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA; Andrea Green, Universite de Montreal, Canada; Daniel Bullock, Department of Cognitive and Neural Systems, Boston University, Boston, MA, and Center of Excellence for Learning in Education, Science, and Technology, Boston, MA; Lauren Sergio, York University, Toronto, Canada; John Kalaska, Universite de Montreal, Canada; and Stephen Grossberg, Department of Cognitive and Neural Systems, Boston University, Boston, MA, and Center of Excellence for Learning in Education, Science, and Technology, Boston, MA.
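A rough way to picture the "projection field" idea quoted above is as a weight vector that maps each neuron's firing rate into a pattern of joint torques. The sketch below is a hypothetical, deliberately simplified linear illustration of that idea, with invented numbers; it is not Ajemian and colleagues' actual model.

```python
import numpy as np

# Hypothetical sketch: each motor-cortex neuron contributes, via a column of
# weights (its "projection field"), to a pattern of joint torques.
# All sizes and values here are made up for illustration only.
rng = np.random.default_rng(0)

n_neurons, n_joints = 5, 2
W = rng.normal(size=(n_joints, n_neurons))    # column j = projection field of neuron j
rates = rng.uniform(0, 50, size=n_neurons)    # firing rates in spikes/s (invented)

torques = W @ rates                           # torque pattern produced by this activity
print("projection field of neuron 0:", W[:, 0])
print("predicted joint torques:", torques)
```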


What Is 'Human'?

Rat People

The unfortunate 'rat people' of Pakistan could provide the answer. Armand Leroi investigates.

Travel the Grand Trunk Road between Lahore and Islamabad, and you come to the city of Gujrat. For at least 100 years, but perhaps for centuries, it has been, though is no longer, a depository for children with microcephaly. These days, most chuas are itinerant beggars. Most people I asked supposed that there are about 1,000 chuas in the Punjab, but no one really knows. Where do they come from?

There is, inevitably, a local myth to account for the origins of the chuas. Infertile women, the story runs, come to the shrine to ask the saint to intercede on their behalf, to give them children. This he does, but only at a price: the first-born child would be a chua. That child has to be given back to the shrine, where it would be raised, and live, as an acolyte. Should she fail to do so, all future children will be born chuas as well.

The Pakistan government has banned microcephalics from the shrine. Yet women still go there to petition the saint. At least some of them still believe the myth. Educated Pakistanis know better. Dismissing the Curse of Shua Dulah as mere superstition, they have a better theory: that chuas aren't born, but made. Priests, chua-masters, or perhaps even parents, they say, purposefully deform healthy infants by placing pots or metal clamps on their heads and so retard the growth of the brain.

There are several reasons for believing that microcephaly in the Punjab is not caused by clamping. The first is simply that no one, or at least no one I spoke to, seems to have actually seen it. The source of the allegation always seems to be an untraceable relation in an unreachable village. The second is that it is probably biologically impossible. The brain of an infant grows for the first nine years of life and the skull has gaps - sutures - to accommodate that growth. Should these sutures seal prematurely, as they do in certain rare genetic conditions, the result is not microcephaly but rather death, as the brain is forced through the hole at the base of the skull, so compressing the spinal cord.

Microcephaly is a rare disorder in Britain. No one seems to know precisely how common it is in the Asian community of north England, but it was common enough to attract the attention of Geoff Woods, a geneticist working at Leeds University. He found that it ran in families. That implied that its cause was genetic; it was caused by a mutation. Or, more precisely, several. By the late 1990s, the disorder had been mapped to deficiencies in at least six different genes.

It is easy to see why peculiar theories of the origins of microcephaly have proliferated in Pakistan. To the untrained eye, the occurrence of the disorder is hard to explain. Healthy parents may have microcephalic children; microcephalic parents - there are a few - may have healthy children. To a geneticist, however, this merely speaks of recessive mutations. A child will only have microcephaly if it has inherited two copies of the mutant gene, one from each parent, who are its carriers.

The discovery of the microcephaly genes was important. It instantly told us something about how the human brain grows. But the true beauty of this work is that it has told us something even more profound: how the human brain has evolved. In the last three million or so years, the human brain has approximately trebled in size. This change, remarkable in its extent and speed, must have been caused by mutations - advantageous mutations - that swept through the populations of our ancestors as they wandered, generation after generation, across the African veldt. That such mutations must exist has long been obvious. The problem has been how to find them.

One way to do this is to compare our genome with that of our nearest relative, the chimpanzee. That's now easily done. The chimp genome was sequenced in 2005. To find the genes that matter to human evolution (the genes that make us different from an ape, that make us human) it should be just a matter of lining the two genomes up side-by-side and looking for the differences. But genomes are vast. Chimps and humans each have about three billion nucleotides in their genomes - 99 per cent of those may be identical, but that still leaves about 30 million differences. Most of those are unimportant, the background noise of genomic evolution. But some matter: which?

Therein lies the importance of microcephaly. The discovery of genes that control the growth of the brain immediately suggested that these genes might also have changed in the last six million years since we last shared an ancestor with chimps. And so it proved: of the four microcephaly genes that have been found, three bear the hallmarks of rapid evolution. To be sure, chimps have versions of these genes, but the human version is different. So different, in fact, that their evolution must have been driven by natural selection.

It is hard to overstate the beauty of this result. Ever since Aristotle, philosophers have wondered: what makes us different from the beasts? What makes us human? The answers that they have supplied - that man is a political animal, a thinking animal, a naked animal, a tool-making, tool-using animal - answers that, for all the aphoristic pleasure they provide, are essentially meaningless if not blatantly false, can now be discarded. Now, when we ask "What makes us human?" we can answer: this gene and that one... and that one. We can write the recipe for making a human being. Or, at least, we can begin to.

There is a bittersweet irony in the discovery that the genes underlying a disorder as disabling as microcephaly should also have been responsible for the thing that we, as a species, are most proud of: our brains. Yet for all the intellectual fascination of these discoveries, we should not neglect one more thing that they have given us: a way to meliorate the disease that pointed to their discovery.


Human Beings

In a speech given in London in 1862, George Francis Train claimed that Africans were inferior to whites on the ground that black people were incapable of blushing. When the American businessman went on to maintain that God had made Africans "the servant of the Anglo-Saxon race", the audience cheered. But it was not just revealed religion that endorsed the belief in a hierarchy of "races" with white Anglo-Saxons at the top. So did the science of the day. The idea that black people were incapable of blushing was, Joanna Bourke tells us, "heavily debated by scientists such as Sir Charles Bell, Charles Darwin and others expert in physiognomy".

What It Means to Be Human: Reflections from 1791 to the Present

Women were similarly assessed by experts in physiognomy, with the late 18th-century Swiss pastor and scientist Johann Kaspar Lavater, chief founder of the putative science, attributing to the female face a capacity for dissimulation, which demonstrated that women "are what they are only through men". Lavater also supported his belief in hierarchy by invoking religion. But here again the appeal to God was followed by an appeal to science, with a promoter of physiognomy writing in the 1880s that, whereas in the past the study of faces had been based on a belief in a divine plan, now "the argument of design is superseded by the principle of evolution".

The idea of racial hierarchy has a distinguished pedigree. Kant believed that "the Negroes of Africa have by nature no feeling that rises above the trifling", while Voltaire promoted a version of the pre-Adamite theory according to which Jews were remnants of an older, pre-human species. Auguste Comte, one of the founders of positivism – a movement that had a formative influence on John Stuart Mill and George Eliot, among others – was a supporter of phrenology who believed social science should be based on physical laws. Implementing Comte's programme, the criminologist Cesare Lombroso argued that law-breakers were reversions to ape-like species that could be identified by facial characteristics and the shapes of their heads. Techniques based on these ideas were used in courts in a number of European countries before the first world war and through the interwar period, and it was only with the defeat of nazism that the theories were discredited.

There was not much opposition to "racial science" within science itself. Like religion, science is commonly the servant of power. The hierarchies that so many scientists imagined to be rooted in biology were reflections of social structures that have since been challenged, and in some degree altered, as the balances of power in society have shifted. The interplay between power relations and ideas is an inexhaustibly interesting area of inquiry, but cultural history has been neglected in English-speaking countries, with many historians disdaining it as a type of dilettantism and historically illiterate philosophers analysing concepts as if they come from nowhere. Despite these obstacles the history of ideas has some notable practitioners, including Joanna Bourke. Fear: A Cultural History (2006) and Rape: Sex, Violence, History (2007) range over the whole of culture and are infused with an acutely observant intelligence. They are examples of that rarest of things – deeply scholarly books that are a joy to read.

In What It Means to Be Human Bourke addresses what, from one point of view, must be the biggest subject of all – the question of human identity. She starts by noting that distinctions between humans and animals are not fixed or impermeable. "The boundaries of the human and the animal turn out to be as entwined and indistinguishable as the inner and outer layers of a Möbius strip." Marking these boundaries is not a neutral exercise in establishing the facts – it is an exercise of power, which can be contested.

One of the protagonists in Bourke's story is "An Earnest Englishwoman", an unknown correspondent who wrote a letter to the Times in 1872 entitled "Are Women Animals?" protesting against the exclusion of women from full humanity in English law. The Earnest Englishwoman's intervention is important for Bourke, since it reveals a far-reaching truth: "The question 'who is truly human?' depends largely on the power of the law and judicial practice."

Throughout the period she deals with – which begins with 1791, she tells us, because it was then that the 1789 Declaration of the Rights of Man "saw its first trial by fire, sword and rifle" in slave revolts on the French colony of Haiti – many different criteria were used to fix the boundary between human and non-human. Self-consciousness, language-use, tool-making and genetic inheritance were invoked, but whatever definition was adopted ended up endorsing laws that excluded some from full humanity. This was not by accident, since excluding others was the point of making these distinctions.

Bourke shows in absorbing detail how ideas of what is human and what animal have been deployed as weapons in ongoing conflicts. Uncovering the origins of debates about sentience and welfare, the ethics of cross-species transplantation and the peculiar logic whereby theories of human rights can be used to justify the practice of torture, she ranges across the cultural lexicon. Moving from Kafka's talking ape Red Peter to the sexual politics of Victorian anthropology, cannibalism to cosmetic surgery, Bourke's range of reference is astonishing. The result is a book that will amaze and entrance as much as it enlightens and instructs.

As Bourke writes, "To understand the instability of definitions of who is truly human, we need history." The panoramic view she presents of that instability is almost overwhelming. Yet reading What It Means to Be Human, I couldn't help thinking that the postmodern approach she adopts leads her to bypass the stubborn intractability of human conflict. Her method of analysis is a variant of deconstruction, a powerful tool in a number of contexts. It underpins her critique of human rights – "a volatile principle on which to base ethics", as she rightly observes – and her decisive conclusion, "The autonomous, self-willed 'human' at the heart of humanist thinking is a fantasy, a chimera." But when it denies the reality of anything that might be described as human nature, postmodernism creates a chimera of its own.

Bourke illustrates this danger when she espouses "negative zoélogy" – a heuristic technique for the study of humans modelled on negative theology, which refrained from ascribing any definite attributes to God. Together with other postmodernist tools, she believes, negative zoélogy "provides a way of playing with difference", making possible "a politics that is committed to the uniqueness of all life forms". The trouble is that while we may know nothing of God we know a good deal about human behaviour. Well before the financial crisis got seriously under way, it was possible to foresee the re-emergence of xenophobia and attacks on minorities in Europe and the rise of the apocalyptic right in America. Postmodernists – in this respect at one with liberal humanists – will say that these are specific historical practices, so they can be changed and transcended. Of course they should be resisted, but toxic reactions of these kinds are evidence of enduring human traits. Politics is not play, and when there are sudden, large-scale dislocations in material security it is a safe bet that things will pretty soon turn nasty.

When they deconstruct prevailing categories of thought, postmodernists perform a valuable service – not least by deflating the pretensions of science to explain human beings in terms of physiological laws. That doesn't mean the human world is radically indeterminate and can be remade according to whatever human beings decide. History discloses patterns of behaviour that – precisely because they recur in very different historical contexts – testify to permanent human vulnerabilities and flaws. As Bourke seems to accept when she recounts the exchange, this may have been the message of an unidentified man interviewed by an American journalist in Rwanda not long after the genocide in which around a fifth of the country's population was killed. The man, who is described only as "a pygmy", asked the journalist if he had read Wuthering Heights, and went on to endorse what he described as the principle of the book – the idea that all humanity must unite together in the struggle against nature, "the only way for peace and reconciliation". After a pause, the journalist observed: "But humanity is part of nature, too." Unfazed, the "pygmy" replied, "That is exactly the problem."


What makes us human?



What does it mean to be human? And can science illuminate the answers?

A star-studded panel of scientists gathered to discuss those heady themes last night at the World Science Festival in New York City. Here are their answers in convenient nutshell form:

Marvin Minsky, artificial intelligence pioneer: We do something other species can't: We remember. We have cultures, ways of transmitting information.

Daniel Dennett, cognitive scientist: We are the first species that represents our reasons, and can reason with each other. "The planet has grown a nervous system," he said.

Renee Reijo Pera, embryologist: We're uniquely human from the moment that egg and sperm fuse. A "human program" begins before the brain even begins to form.

Patricia Churchland, neuroethicist: The structure of how the human brain is arranged intrigues me. Are there unique brain structures? As far as we can understand, it's our size that is unique. What we don't find are other unique structures. There may be certain types of human-specific cells — but as for what that means, we don't know. It's important not only to focus on us, but to compare our biology and behavior to other animals.

Jim Gates, physicist: We are blessed with the ability to know our mother. We are conscious of more than our selves. And just as a child sees a mother, the species' vision clears and sees mother universe. We are getting glimmers of how we are related to space and time. We can ask, what am I? What is this place? And how am I related to it?

Nikolas Rose, sociologist: Language and representation. We are the kind of creatures that ask those questions of ourselves. And we believe science can help answer. We've become creatures that think of ourselves as essentially biological — and I think we're more than biological creatures. I'm not sure biology has answers.

Ian Tattersall, anthropologist: It's not "what is human," but what is unique: our extraordinary form of symbolic cognition.

Francis Collins, geneticist: What does the genome tell us? There's surprisingly little genetic difference between human and chimpanzee. Yet clearly we're different. There's brain size and language. A language-related gene, FoxP2, evolved most rapidly in the last few million years. How did we develop empathy? Appreciate our mortality? And we should admit that there are areas that might not submit to material analysis: beauty, inspiration. We shouldn't dismiss these as epiphenomenal froth.

Harold Varmus, physiologist: Intrigued by our ability to generate hypotheses and make measurements.

Paul Nurse, cell biologist: Is excited about the ability of science to answer this question.

Antonio Damasio, neuroscientist: The critical unique factor is language. Creativity. The religious and scientific impulse. And our social organization, which has developed to a prodigious degree. We have a record of history, moral behavior, economics, political and social institutions. We're probably unique in our ability to investigate the future, imagine outcomes, and display images in our minds. I like to think of a generator of diversity in the frontal lobe – and those initials are G-O-D.

What does it mean to be human? I'll save my own answer for later, with one caveat: this is a semantically tricky question. It's really several questions: What is unique to humanity now, what will be unique about humanity in the future, and what is important about humanity.

The Presence Of A Soul

The Near Death Experience

A near-death experience happens when quantum substances which form the soul leave the nervous system and enter the universe at large, according to a remarkable theory proposed by two eminent scientists. According to this idea, consciousness is a program for a quantum computer in the brain which can persist in the universe even after death, explaining the perceptions of those who have near-death experiences.

Dr Stuart Hameroff, Professor Emeritus at the Departments of Anesthesiology and Psychology and the Director of the Centre of Consciousness Studies at the University of Arizona, has advanced the quasi-religious theory. It is based on a quantum theory of consciousness he and British physicist Sir Roger Penrose have developed, which holds that the essence of our soul is contained inside structures called microtubules within brain cells. They have argued that our experience of consciousness is the result of quantum gravity effects in these microtubules, a theory which they dubbed orchestrated objective reduction (Orch-OR). Thus it is held that our souls are more than the interaction of neurons in the brain. They are in fact constructed from the very fabric of the universe - and may have existed since the beginning of time.

The concept is similar to the Buddhist and Hindu belief that consciousness is an integral part of the universe - and indeed that it is really all there may be, a position similar to Western philosophical idealism. With these beliefs, Dr Hameroff holds that in a near-death experience the microtubules lose their quantum state, but the information within them is not destroyed. Instead it merely leaves the body and returns to the cosmos.

Dr Hameroff told the Science Channel's Through the Wormhole documentary: 'Let's say the heart stops beating, the blood stops flowing, the microtubules lose their quantum state. The quantum information within the microtubules is not destroyed, it can't be destroyed, it just distributes and dissipates to the universe at large. If the patient is resuscitated, revived, this quantum information can go back into the microtubules and the patient says "I had a near death experience".' He adds: 'If they're not revived, and the patient dies, it's possible that this quantum information can exist outside the body, perhaps indefinitely, as a soul.'

The Orch-OR theory has come in for heavy criticism by more empirically minded thinkers and remains controversial among the scientific community. MIT physicist Max Tegmark is just one of the many scientists to have challenged it, in a 2000 paper that is widely cited by opponents, the Huffington Post reports. Nevertheless, Dr Hameroff believes that research into quantum physics is beginning to validate Orch-OR, with quantum effects recently being shown to support many important biological processes, such as smell, bird navigation and photosynthesis.

Human Behaviour

The behavior of humans is studied by the academic disciplines of psychiatry, psychology, social work, sociology, economics, and anthropology. Human behaviour is experienced throughout an individual's entire lifetime. It includes the way they act based on different factors such as genetics, social norms, core faith, and attitude.

Behaviour is impacted by certain traits each individual has. The traits vary from person to person and can produce different actions or behaviour from each person. Social norms also impact behaviour. Humans are expected to follow certain rules in society, which conditions the way people behave. There are certain behaviours that are acceptable or unacceptable in different societies and cultures. Core faith can be perceived through the religion and philosophy of that individual. It shapes the way a person thinks and this in turn results in different human behaviours. Attitude can be defined as "the degree to which the person has a favorable or unfavorable evaluation of the behavior in question." Your attitude highly reflects the behaviour you will portray in specific situations. Thus, human behavior is greatly influenced by the attitudes we use on a daily basis.

Six years ago, I jumped at an opportunity to join the international team that was identifying the sequence of DNA bases, or "letters," in the genome of the common chimpanzee (Pan troglodytes). As a biostatistician with a long-standing interest in human origins, I was eager to line up the human DNA sequence next to that of our closest living relative and take stock. A humbling truth emerged: our DNA blueprints are nearly 99 percent identical to theirs. That is, of the three billion letters that make up the human genome, only 15 million of them—less than 1 percent—have changed in the six million years or so since the human and chimp lineages diverged. Evolutionary theory holds that the vast majority of these changes had little or no effect on our biology. But somewhere among those roughly 15 million bases lay the differences that made us human. I was determined to find them. Since then, I and others have made tantalizing progress in identifying a number of DNA sequences that set us apart from chimps.
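The "line the sequences up and count the differences" idea reads directly as code. The sketch below uses made-up toy sequences purely to illustrate how percent identity is counted; it is not part of the chimpanzee genome project's actual pipeline.

```python
# Illustrative only: count mismatches between two already-aligned DNA
# sequences and report percent identity. Toy sequences, not real genome data.
human = "ATGGCGTACCTTAGA"
chimp = "ATGGCGTACCTAAGA"

assert len(human) == len(chimp), "sequences must be aligned to the same length"

differences = sum(1 for h, c in zip(human, chimp) if h != c)
identity = 100 * (1 - differences / len(human))

print(f"{differences} difference(s) out of {len(human)} bases "
      f"({identity:.1f}% identical)")
```

On real genomes the same counting is done over roughly three billion aligned positions, which is why a 99 per cent identity still leaves tens of millions of candidate differences to sift through.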


Humanity Evolution

Evolution is the change in the inherited characteristics of biological populations over successive generations. Evolutionary processes give rise to diversity at every level of biological organisation, including species, individual organisms and molecules such as DNA and proteins. Life on Earth originated and then evolved from a universal common ancestor approximately 3.8 billion years ago. Repeated speciation and the divergence of life can be inferred from shared sets of biochemical and morphological traits, or by shared DNA sequences. These homologous traits and sequences are more similar among species that share a more recent common ancestor, and can be used to reconstruct evolutionary histories, using both existing species and the fossil record. Existing patterns of biodiversity have been shaped both by speciation and by extinction.

Charles Darwin was the first to formulate a scientific argument for the theory of evolution by means of natural selection. Evolution by natural selection is a process that is inferred from three facts about populations: 1) more offspring are produced than can possibly survive, 2) traits vary among individuals, leading to differential rates of survival and reproduction, and 3) trait differences are heritable. Thus, when members of a population die they are replaced by the progeny of parents that were better adapted to survive and reproduce in the environment in which natural selection took place. This process creates and preserves traits that are seemingly fitted for the functional roles they perform. Natural selection is the only known cause of adaptation, but not the only known cause of evolution. Other, nonadaptive causes of evolution include mutation and genetic drift.

In the early 20th century, genetics was integrated with Darwin's theory of evolution by natural selection through the discipline of population genetics. The importance of natural selection as a cause of evolution was accepted into other branches of biology. Moreover, previously held notions about evolution, such as orthogenesis and "progress", became obsolete. Scientists continue to study various aspects of evolution by forming and testing hypotheses, constructing scientific theories, using observational data, and performing experiments in both the field and the laboratory. Biologists agree that descent with modification is one of the most reliably established facts in science. Discoveries in evolutionary biology have made a significant impact not just within the traditional branches of biology, but also in other academic disciplines (e.g., anthropology and psychology) and on society at large.

Consciousness

Consciousness is the quality or state of being aware of an external object or something within oneself. It has been defined as: subjectivity, awareness, sentience, the ability to experience or to feel, wakefulness, having a sense of selfhood, and the executive control system of the mind. Despite the difficulty in definition, many philosophers believe that there is a broadly shared underlying intuition about what consciousness is. As Max Velmans and Susan Schneider wrote in The Blackwell Companion to Consciousness: "Anything that we are aware of at a given moment forms part of our consciousness, making conscious experience at once the most familiar and most mysterious aspect of our lives."

Philosophers since the time of Descartes and Locke have struggled to comprehend the nature of consciousness and pin down its essential properties. Issues of concern in the philosophy of consciousness include whether the concept is fundamentally valid; whether consciousness can ever be explained mechanistically; whether non-human consciousness exists and if so how it can be recognized; how consciousness relates to language; whether consciousness can be understood in a way that does not require a dualistic distinction between mental and physical states or properties; and whether it may ever be possible for computers or robots to be conscious.

In recent years, consciousness has become a significant topic of research in psychology and neuroscience. The primary focus is on understanding what it means biologically and psychologically for information to be present in consciousness—that is, on determining the neural and psychological correlates of consciousness. The majority of experimental studies assess consciousness by asking human subjects for a verbal report of their experiences (e.g., "tell me if you notice anything when I do this"). Issues of interest include phenomena such as subliminal perception, blindsight, denial of impairment, and altered states of consciousness produced by psychoactive drugs or spiritual or meditative techniques.

In medicine, consciousness is assessed by observing a patient's arousal and responsiveness, and can be seen as a continuum of states ranging from full alertness and comprehension, through disorientation, delirium, loss of meaningful communication, and finally loss of movement in response to painful stimuli. Issues of practical concern include how the presence of consciousness can be assessed in severely ill, comatose, or anesthetized people, and how to treat conditions in which consciousness is impaired or disrupted.

The Human Body

The composition of the human body can be looked at from the point of view of either mass composition or atomic composition. To illustrate both views, the human body is ~70% water, and water is ~11% hydrogen by mass but ~67% hydrogen by atomic percent. Thus, most of the mass of the human body is oxygen, but most of the atoms in the human body are hydrogen atoms. Both mass-composition and atomic-composition figures are given below.

Almost 99% of the mass of the human body is made up of the six elements oxygen, carbon, hydrogen, nitrogen, calcium, and phosphorus. Only about 0.85% is composed of another five elements: potassium, sulfur, sodium, chlorine, and magnesium. All are necessary to life. The remaining elements are trace elements, of which more than a dozen are thought to be necessary for life, or play an active role in health (e.g., fluorine, which hardens dental enamel but seems to have no other function). Not all elements which are found in the human body in trace quantities play a role in life. Some of these elements are thought to be simple bystander contaminants without function (examples: caesium, titanium), while many others are thought to be active toxins, depending on amount (cadmium, mercury, radioactives). The possible utility and toxicity of a few elements at levels normally found in the body (aluminum) is debated. Functions have been proposed for trace amounts of cadmium and lead, but these are almost certainly toxic in amounts normally found in the body. There is evidence that one element normally considered a toxin (arsenic) is essential in ultratrace quantities, even in mammals. Some elements that are clearly used in lower organisms and plants (arsenic, silicon, boron, nickel, vanadium) are probably needed by mammals also, but in far smaller doses. Two halogens used abundantly by lower organisms (fluorine and bromine) are presently known to be used by mammals only opportunistically. However, a general rule is that elements found in active biochemical use in lower organisms are often eventually found to be used in some way by higher organisms.

The average 70 kg adult human body contains approximately 6.7 x 10^27 atoms and is "composed of" 60 chemical elements. In this sense, "composed of" means that a trace of the element has been identified in the body. However, at the finest resolution, most objects on Earth (including the human body) contain measurable contaminating amounts of all of the 88 chemical elements which are detectable in nearly any soil on Earth. The number of elements thought to play an active positive role in life and augmentation of health in humans and other mammals is about 24 or 25.

The relative amounts of each element vary by individual, the largest contributor being differences in the fat/muscle/bone body composition ratio from person to person. The human body is ~65% water, and water is ~11% hydrogen by mass but ~67% hydrogen by atomic percent.


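The two hydrogen figures quoted above follow from the formula of water alone: hydrogen is a small share of water's mass but two of every three of its atoms. A quick back-of-the-envelope check, an illustrative calculation with rounded atomic masses rather than anything from the original text:

```python
# Check the quoted figures for water (H2O) using rounded atomic masses
# (H ≈ 1.008 g/mol, O ≈ 15.999 g/mol).
m_H, m_O = 1.008, 15.999

mass_fraction_H = 2 * m_H / (2 * m_H + m_O)   # share of water's mass that is hydrogen
atom_fraction_H = 2 / 3                       # 2 of the 3 atoms in H2O are hydrogen

print(f"hydrogen by mass:  {mass_fraction_H:.1%}")   # ~11.2%
print(f"hydrogen by atoms: {atom_fraction_H:.1%}")   # ~66.7%
```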
A Theory Of Evolution, For Robots

CHENNAI, India -- If you can't program a robot to fly, then program it so it will figure out how to fly without your help.

Krister Wolff and Peter Nordin, two scientists at the Chalmers University of Technology in Sweden, have designed a winged robot capable of learning flight techniques. What they've come up with is a robot equipped with small motors allowing it to manipulate its meter-long, balsa-wood wings in different directions. A computer program feeds the robot random instructions, which let it develop the concept of liftoff on its own. But it remains only a concept; it hasn't gone anywhere yet. Still, no one is complaining.

"Evolution has created numerous flying creatures, like the house fly," Nordin said. "Birds and insects can do things with their wings which airplanes can't dream of, and a lot of research indicates that future airplanes need to have more flexible wings to achieve better performance. We wanted to take a first step in that direction with artificial evolution."

Scientists still do not fully understand the mechanics of insect flight, especially those aspects controlling balance and motion. As recently as a couple of years ago, what was known about the bumblebee indicated it should not be able to fly. And yet it does. An elegant way around the lack of understanding could be to just give up on understanding altogether and let the machines learn for themselves.

Genetic programming is one way to approach this complex problem. Using this technique, Wolff and Nordin evaluated the instructions that were best at producing liftoff. Successful ones were paired up, and "offspring" sets of instructions were generated by swapping instructions randomly between successful pairs. These next-generation instructions were then sent to the robot and evaluated before breeding a new generation.

"Evolution in this case means selecting and breeding from a population of wing designs," said Inman Harvey, a senior researcher in the Evolutionary and Adaptive Systems group at the University of Sussex. "The problem is working out how good each design is, so as to decide which become 'parents' of the next generation of designs. In the natural world this is simple: The fly that crashes doesn't get to have babies. In artificial evolution, we must either create each design and test it for real in some test rig, or use a simulation. (This would) have to be sophisticated, tricky and expensive if we want to try and catch all the aerodynamic effects and the flexing of materials under load."

Cheating -- in this case, standing on its wings -- was one of the first concepts the robot grasped. The bot also used a couple of books lying nearby to pull itself up. Finally, the bot picked up a more effective flapping technique, in which it rotated its wings 90 degrees and raised them before twisting them back to a horizontal plane. Despite that, it could not manage to get airborne. Provided with more powerful electrical motors in relation to its weight, Wolff and Nordin give the bot a reasonable chance of flying.

But how feasible is it to create a robot that picks up flying all by itself? "To solve the problem, 'How do I fly in a controlled way if I'm heavier than air?' is a different and very interesting scientific challenge," said David Corne from the University of Reading. "Put simply, two issues need to be solved: the engineering issue (the ability for joints to move fast and flexibly in a variety of ways) and the control issue (precisely what sequences of movements are required to get up and stay up, etc.)."

"Wolff and Nordin's inspired work shows what anyone really knowledgeable in evolutionary computation will admit: The control issue can be solved by evolutionary computation," Corne said. "What stops us at the moment from coming up with robots which truly fly like birds is (a lack of) failed designs. It would cost millions in crashed robotic gore and the time needed to constantly make new ones with their revised control strategies. And that's not to mention the various engineering challenges which would need solving to make robotic wings flexible and fast enough for bird-like flight. Safety becomes something of an issue, too, when you have robots flying about, most of which (since that's how evolution works) are pretty bad at it."

So it won't be easy, and it won't be cheap. But given "adequate funding," Nordin said that free-flying robots could be dotting the skies in the next three years.
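The loop the article describes (evaluate instruction sequences, keep the successful ones, breed offspring by randomly swapping instructions between pairs, then test the new generation) can be sketched generically. The instruction names, population sizes and fitness function below are invented stand-ins for the robot and its lift measurements, not Wolff and Nordin's actual system.

```python
import random

# Generic sketch of the evolutionary loop from the article: evaluate,
# select the best, breed by swapping instructions between pairs, repeat.
INSTRUCTIONS = ["wing_up", "wing_down", "twist_left", "twist_right", "hold"]
GENOME_LEN, POP_SIZE, GENERATIONS = 12, 30, 40
random.seed(1)

def fitness(genome):
    # Stand-in for "how much lift did this instruction sequence produce?":
    # here we simply reward alternating up/down strokes, for demonstration.
    return sum(1 for a, b in zip(genome, genome[1:])
               if {a, b} == {"wing_up", "wing_down"})

def crossover(parent_a, parent_b):
    # Offspring are built by randomly picking each instruction from one parent.
    return [random.choice(pair) for pair in zip(parent_a, parent_b)]

population = [[random.choice(INSTRUCTIONS) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for generation in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]   # the "fly that crashes" is dropped
    offspring = [crossover(random.choice(survivors), random.choice(survivors))
                 for _ in range(POP_SIZE - len(survivors))]
    population = survivors + offspring

best = max(population, key=fitness)
print("best fitness:", fitness(best))
print("best instruction sequence:", best)
```

In the real experiment the fitness score came from measuring the lift the physical robot produced, which is exactly why, as Harvey notes, evaluating each design is the expensive part.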





Automata

Automatons

The Digesting Duck by Jacques de Vaucanson was hailed in 1739 as the first automaton capable of digestion. An automaton (plural: automata or automatons) is a self-operating machine. The word is sometimes used to describe an old-fashioned robot, more specifically an autonomous robot. An alternative spelling, now obsolete, is automation.

Autonomous robots are robots that can perform desired tasks in unstructured environments without continuous human guidance. Many kinds of robots have some degree of autonomy. Different robots can be autonomous in different ways. A high degree of autonomy is particularly desirable in fields such as space exploration, cleaning floors, mowing lawns, and waste water treatment. Some modern factory robots are "autonomous" within the strict confines of their direct environment. It may not be that every degree of freedom exists in their surrounding environment, but the factory robot's workplace is challenging and can often contain chaotic, unpredicted variables. The exact orientation and position of the next object of work and (in the more advanced factories) even the type of object and the required task must be determined. This can vary unpredictably (at least from the robot's point of view). One important area of robotics research is to enable the robot to cope with its environment, whether this be on land, underwater, in the air, underground, or in space.

A fully autonomous robot has the ability to gain information about the environment, work for an extended period without human intervention, move either all or part of itself throughout its operating environment without human assistance, and avoid situations that are harmful to people, property, or itself unless those are part of its design specifications. An autonomous robot may also learn or gain new capabilities, such as adjusting strategies for accomplishing its task(s) or adapting to changing surroundings. Autonomous robots still require regular maintenance, as do other machines.

As watchmaking developed in the Age of Enlightenment in the eighteenth century, so did the art of creating mechanical people and animals. Jacques Vaucanson created numerous working figures, including a flute player, which actually played the instrument, in 1738, plus the duck of 1739. The gilded copper bird could sit, stand, splash around in water, quack and even give the impression of eating food and digesting it. The automaton known as the "Draughtsman-Writer" was built by Henri Maillardet, a Swiss mechanician of the 18th century who worked in London producing clocks and other mechanisms. It is believed that Maillardet built this extraordinary automaton around 1800, and it has the largest "memory" of any such machine ever constructed: four drawings and three poems (two in French and one in English).

Man vs Machine

What is the difference between men and machines, and what does it mean to be human? And if we can answer that question, is it possible to build a computer that can imitate the human mind? In 1949 the neurosurgeon Geoffrey Jefferson argued that a mechanical mind could never rival human intelligence because it could never be conscious of what it did: the machine's decisions would be based not on thought or emotion but purely on signals and programs.

Human behavior refers to the range of behaviors exhibited by humans and which are influenced by culture, attitudes, emotions, values, ethics, authority, rapport, hypnosis, persuasion, coercion and/or genetics. The behavior of people (and other organisms or even mechanisms) falls within a range, with some behavior being common, some unusual, some acceptable, and some outside acceptable limits. In sociology, behavior in general is considered as having no meaning, being not directed at other people, and thus is the most basic human action. Behavior in this general sense should not be mistaken with social behavior, which is a more advanced action, as social behavior is behavior specifically directed at other people. The acceptability of behavior is evaluated relative to social norms and regulated by various means of social control.

When we say that today's rapidly changing technology is set to transform the way we live in unimaginable ways, we should remember that people thought much the same thing in earlier centuries – whether in the time of the clockwork revolution in the eighteenth century or as a result of the scientific advances of the Industrial Revolution in the Victorian era. People often misjudge the effects of technology: Turing, for example, predicted that machines would be passing as human by the end of the twentieth century.

In theoretical computer science, automata theory is the study of mathematical objects called abstract machines or automata and the computational problems that can be solved using them. Automata comes from the Greek meaning "self-acting". Automata theory is also closely related to formal language theory. An automaton is a finite representation of a formal language that may be an infinite set. Automata are often classified by the class of formal languages they are able to recognize.

A finite-state machine (FSM) or finite-state automaton (plural: automata), or simply a state machine, is a mathematical model of computation used to design both computer programs and sequential logic circuits. It is conceived as an abstract machine that can be in one of a finite number of states. The machine is in only one state at a time; the state it is in at any given time is called the current state. It can change from one state to another when initiated by a triggering event or condition; this is called a transition. A particular FSM is defined by a list of its states and the triggering condition for each transition. The behavior of state machines can be observed in many devices in modern society which perform a predetermined sequence of actions depending on a sequence of events they are presented with. Simple examples are vending machines, which dispense products when the proper combination of coins is deposited; elevators, which drop riders off at upper floors before going down; traffic lights, which change sequence when cars are waiting; and combination locks, which require the input of combination numbers in the proper order.
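A minimal sketch of such a machine, using the textbook coin-operated turnstile rather than an example from the text above: two states, two events, and a table of transitions.

```python
# Minimal finite-state machine sketch (illustrative only): a coin-operated
# turnstile. As described above, the FSM is defined by its states and the
# triggering event for each transition.
TRANSITIONS = {
    ("locked", "coin"): "unlocked",    # inserting a coin unlocks the turnstile
    ("locked", "push"): "locked",      # pushing while locked does nothing
    ("unlocked", "push"): "locked",    # passing through locks it again
    ("unlocked", "coin"): "unlocked",  # extra coins are accepted but wasted
}

def run(events, state="locked"):
    """Feed a sequence of events to the machine and return the states visited."""
    history = [state]
    for event in events:
        state = TRANSITIONS.get((state, event), state)  # unknown events leave the state unchanged
        history.append(state)
    return history

print(run(["coin", "push", "push", "coin"]))
# ['locked', 'unlocked', 'locked', 'locked', 'unlocked']
```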


The Engine

The Difference Engine

A difference engine is an automatic mechanical calculator designed to tabulate polynomial functions. The name derives from the method of divided differences, a way to interpolate or tabulate functions by using a small set of polynomial coefficients. Both logarithmic and trigonometric functions, functions commonly used by both navigators and scientists, can be approximated by polynomials, so a difference engine can compute many useful sets of numbers. The historical difficulty in producing error-free tables by teams of mathematicians and human “computers” spurred Charles Babbage’s desire to build a mechanism to automate the process.

A little bit about the original Analytical Engine: in December 1837, the British mathematician Charles Babbage published a paper describing a mechanical computer that is now known as the Analytical Engine. Anyone intimate with the details of electronic computers will instantly recognize the components of Babbage’s machine. Although Babbage was designing with brass and iron, his Engine has a central processing unit (which he called the mill) and a large amount of expandable memory (which he called the store). The operation of the Engine is controlled by a program stored on punched cards, and punched cards can also be used to input data.

This trial portion of the Difference Engine is one of the earliest automatic calculators and is a celebrated icon in the pre-history of the computer. Charles Babbage was a brilliant thinker and mathematician. He devised the Difference Engine to automate the production of error-free mathematical tables. In 1823 he secured £1,500 from the government and shortly afterwards he hired the engineer Joseph Clement. The Difference Engine was designed to perform fixed operations automatically. During its development Babbage’s mind leapt forward to the design of the Analytical Engine, which, using punched cards, could be programmed to calculate almost any function. This design embodied almost all the conceptual elements of the modern electronic computer.

The project collapsed in 1833 when Clement downed tools. By then, the government had spent over £17,000 to build the machine - equivalent to the price of two warships. The collapse of the venture was traumatic for Babbage and, in old age, he became embittered and disillusioned. Historians have suggested that the design was beyond the capability of contemporary technology and would have required greater accuracy than contemporary engineering could have provided. However, recent research has shown that Clement’s work was adequate to create a functioning machine. In fact, the scheme foundered on issues of economics, politics, Babbage’s temperament and his style of directing the enterprise.

The Differential Engine of Charles Babbage

A numerical table is a tool designed to save the time and labour of those engaged in computing work. The oldest tables which are preserved were compiled in Babylon in the period 1800-1500 B.C. They were intended to be used for the transformation of units, and for multiplication and division, and they were inscribed in cuneiform on pieces of clay. During the second century A.D., Claudius Ptolemy in Alexandria created his theory about the motions of the heavenly bodies in a work which later came to be known by the name of the Almagest. Its tables were to form one of the Ancient World’s most important astronomical documents, and they contained all the necessary tables for the calculation of eclipses as well as various kinds of ephemeris, that is to say tables which specified the positions of the heavenly bodies during a particular period, e.g. each day for a whole year.

During the first half of the thirteenth century Ptolemy’s tables caught the attention of King Alphonso the Wise of Castile. He then gathered together a great number of scholars in Toledo who were given the task of calculating a new collection of astronomical tables. The reason for this endeavor was said to be that King Alphonso, who was interested in astronomy, had

discovered many errors in Ptolemy’s tables. The work began some time in the 1240s and took about ten years to complete. The tables produced were later known as the Alphonsine Tables. The vast costs involved were paid for by the king, whose name soon spread with the copies of the tables throughout the European scientific world. Besides the Babylonian tables, Ptolemy’s work and the Alphonsine Tables, a great deal of toil went into the production of many other numerical tables of different kinds during this period.

It seems a real miracle that the first digital computer in the world, which embodied in its mechanical and logical details just about every major principle of the modern digital computer, was designed as early as the 1830s. This was done by the great Charles Babbage, and the machine was the Analytical Engine.

In 1834 Babbage designed some improvements to his first computer, the specialized Difference Engine. In the original design, whenever a new constant was needed in a set of calculations, it had to be entered by hand. Babbage conceived a way to have the differences inserted


mechanically, arranging the axes of the Difference Engine circularly, so that the result column should be near that of the last difference, and thus easily within reach of it. He referred to this arrangement as the engine eating its own tail, or as a locomotive that lays down its own railway. But this soon led to the idea of controlling the machine by entirely independent means, and making it perform not only addition, but all the processes of arithmetic at will, in any order and as many times as might be required.
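The method of differences that these engines mechanised is easy to sketch in modern terms. A minimal illustration in Python, using a polynomial of my own choosing (f(x) = x² + x + 1) rather than any table Babbage actually computed, shows how the whole table emerges from repeated addition once the initial differences are set up:

# Tabulate f(x) = x*x + x + 1 by repeated addition (method of differences).
# For a degree-2 polynomial the second difference is constant.
def difference_table(n):
    f0, f1, f2 = 1, 3, 7          # f(0), f(1), f(2)
    d1 = f1 - f0                  # first difference at the start
    d2 = (f2 - f1) - d1           # constant second difference
    value = f0
    values = [value]
    for _ in range(n):
        value += d1               # add the first difference...
        d1 += d2                  # ...then update it by the second difference
        values.append(value)
    return values

print(difference_table(5))        # [1, 3, 7, 13, 21, 31]

Only additions are needed after the setup, which is why columns of adding wheels were enough to tabulate polynomials, and hence approximations to logarithms and trigonometric functions.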


Analytical Engine

Charles Babbage’s original Analytical Engine design



Artificial Intelligence

Replicating The Human Brain

To state that the human brain has capabilities that are, in some respects, far superior to those of all other known objects in the cosmos would be uncontroversial. The brain is the only kind of object capable of understanding that the cosmos is even there, or why there are infinitely many prime numbers, or that apples fall because of the curvature of space-time, or that obeying its own inborn instincts can be morally wrong, or that it itself exists. Nor are its unique abilities confined to such cerebral matters. The cold, physical fact is that it is the only kind of object that can propel itself into space and back without harm, or predict and prevent a meteor strike on itself, or cool objects to a billionth of a degree above absolute zero, or detect others of its kind across galactic distances.

But no brain on Earth is yet close to knowing what brains do in order to achieve any of that functionality. The enterprise of achieving it artificially – the field of “artificial general intelligence” or AGI – has made no progress whatever during the entire six decades of its existence.

Despite this long record of failure, AGI must be possible. That is because of a deep property of the laws of physics, namely the universality of computation. It entails that everything that the laws of physics require physical objects to do can, in principle, be emulated in arbitrarily fine detail by some program on a general-purpose computer, provided it is given enough time and memory.

So why has the field not progressed? In my view it is because, as an unknown sage once remarked, “it ain’t what we don’t know that causes trouble, it’s what we know that just ain’t so.” I cannot think of any other significant field of knowledge where the prevailing wisdom, not only in society at large but among experts, is so beset with entrenched, overlapping, fundamental errors. Yet it has also been one of the most self-confident fields in prophesying that it will soon achieve the ultimate breakthrough.

In 1950, Alan Turing expected that by the year 2000, “one will be able to speak of machines thinking without expecting to be contradicted.” In 1968, Arthur C Clarke expected it by 2001. Yet today, in 2012, no one is any better at programming an AGI than Turing himself would have been.

This does not surprise the dwindling band of opponents of the very possibility of AGI. But the other camp (the AGI-imminent one) recognises that this history of failure cries out to be explained – or, at least, to be rationalised away.

The very term “AGI” is an example of one such rationalisation, for the field used to be called “AI” – artificial intelligence. But AI was gradually appropriated to describe all sorts of unrelated computer programs such as game players, search engines and chatbots, until the G for “general” was added to make it possible to refer to the real thing again, but now with the implication that an AGI is just a smarter species of chatbot.

Another class of rationalisation runs along the general lines of: AGI isn’t that great anyway; existing software is already as smart or smarter, but in a non-human way, and we are too vain or too culturally biased to give it due credit. This gets some traction because it invokes the persistently popular irrationality of cultural relativism, and also the related trope: “we humans pride ourselves on being the paragon of animals, but that pride is misplaced because they, too, have language, tools … And self-awareness.” Remember the significance attributed to the computer system in the Terminator films, Skynet, becoming “self-aware”?

That’s just another philosophical misconception, sufficient in itself to block any viable approach to AGI. The fact is that present-day software developers could straightforwardly program a computer to have “self-awareness” in the behavioural sense – for example, to pass the “mirror test” of being able to use a mirror to infer facts about itself – if they wanted to. As far as I am aware, no one has done so, presumably because it is a fairly useless ability as well as a trivial one.

Perhaps the reason self-awareness has its undeserved reputation for being connected with AGI is that, thanks to Gödel’s theorem and various controversies in formal logic in the 20th century, self-reference of any kind has acquired a reputation for woo-woo mystery. And so has consciousness. And for consciousness we have the problem of ambiguous terminology again: the term has a huge range of meanings. At one end of the scale there is the philosophical problem of the nature of subjective sensations (“qualia”), which is intimately connected with the problem of AGI; but at the other end, “consciousness” is simply what we lose when we are put under general anaesthetic. Many animals certainly have that.

AGIs will indeed be capable of self-awareness – but that is because they will be General: they will be capable of awareness of every kind of deep and subtle thing, including their own selves. That does not mean that apes who pass the mirror test have any hint of the attributes of “general intelligence” of which AGI would be an artificial version. Indeed, Richard Byrne’s wonderful research into gorilla memes has revealed how apes are able to learn useful behaviours from each other without ever understanding what they are for: the explanation of how ape cognition works really is behaviouristic.

Ironically, that group of rationalisations (AGI has already been done/is trivial/exists in apes/is a cultural conceit) are mirror images of arguments that originated in the AGI-is-impossible camp. For every argument of the form “you can’t do AGI because you’ll never be able to program the human soul, because it’s supernatural,” the AGI-is-easy camp has the rationalisation: “if you think that human cognition is qualitatively different from that of apes, you must believe in a supernatural soul.”

“Anything we don’t yet know how to program is called ‘human intelligence’,” is another such rationalisation. It is the mirror image of the argument advanced by the philosopher John Searle (from the “impossible” camp), who has pointed out that before computers existed, steam engines and later telegraph systems were used as metaphors for how the human mind must work. He argues that the hope that AGI is possible rests on a similarly insubstantial metaphor, namely that the mind is “essentially” a computer program. But that’s not a metaphor: the universality of computation follows from the known laws of physics.

Some have suggested that the brain uses quantum computation, or even hyper-quantum computation relying on as-yet-unknown physics beyond quantum theory, and that this explains the failure to create AGI on existing computers. Explaining why I, and most researchers in the quantum theory of computation, disagree that that is a plausible source of the human brain’s unique functionality is beyond the scope of this article.

That AGIs are “people” has been implicit in the very concept from the outset. If there were a program that lacked even a single cognitive ability that is characteristic of people, then by definition it would not qualify as an AGI; using non-cognitive attributes (such as percentage carbon content) to define personhood would be racist, favouring organic brains over silicon brains. But the fact that the ability to create new explanations is the unique, morally and intellectually significant functionality of “people” (humans and AGIs), and that they achieve this functionality by conjecture and criticism, changes everything.

Currently, personhood is often treated symbolically rather than factually – as an honorific, a promise to pretend that an entity (an ape, a foetus, a corporation) is a person in order to achieve some philosophical or practical aim. This isn’t good. Never mind the terminology; change it if you like, and there are indeed reasons for treating various entities with respect, protecting them from harm and so on. All the same, the distinction between actual people, defined by that objective criterion, and other entities, has enormous moral and practical significance, and is going to become vital to the functioning of a civilisation that includes AGIs.

For example, the mere fact that it is not the computer but the running program that is a person raises unsolved philosophical problems that will


become practical, political controversies as soon as AGIs exist – because once an AGI program is running in a computer, depriving it of that computer would be murder (or at least false imprisonment or slavery, as the case may be), just like depriving a human mind of its body. But unlike a human body, an AGI program can be copied into multiple computers at the touch of a button. Are those programs, while they are still executing identical steps (i.e. before they have become differentiated due to random choices or different experiences), the same person or many different people? Do they get one vote, or many? Is deleting one of them murder, or a minor assault? And if some rogue programmer, perhaps illegally, creates billions of different AGI people, either on one computer or on many, what happens next? They are still people, with rights. Do they all get the vote?

Furthermore, in regard to AGIs, like any other entities with creativity, we have to forget almost all existing connotations of the word “programming”. Treating AGIs like any other computer programs would constitute brainwashing, slavery and tyranny. And cruelty to children too, because “programming” an already-running AGI, unlike all other programming, constitutes education. And it constitutes debate, moral as well as factual. Ignoring the rights and personhood of AGIs would not only be the epitome

of evil, but a recipe for disaster too: creative beings cannot be enslaved forever.

Some people are wondering whether we should welcome our new robot overlords and/or how we can rig their programming to make them constitutionally unable to harm humans (as in Asimov’s “three laws of robotics”), and/or prevent them from acquiring the theory that the universe should be converted into paperclips. That’s not the problem. It has always been the case that a single exceptionally creative person can be thousands of times as productive, economically, intellectually, or whatever, as most people; and that such a person, turning their powers to evil instead of good, can do enormous harm. These phenomena have nothing to do with AGIs. The battle between good and evil ideas is as old as our species and will continue regardless of the hardware on which it is running.

The issue is: we want the intelligences with (morally) good ideas always to defeat the evil intelligences, biological and artificial; but we are fallible, and our own conception of “good” needs continual improvement. How should society be organised so as to promote that improvement? “Enslave all intelligence” would be a catastrophically wrong answer, and “enslave all intelligence that doesn’t look like us” would not be much better.

One implication is that we must stop regarding education (of humans or AGIs alike) as instruction – as a means of transmitting existing knowledge unaltered, and causing existing values to be enacted obediently. As Karl Popper wrote (in the context of scientific discovery, but it applies equally to the programming of AGIs and the education of children): “there is no such thing as instruction from without … We do not discover new facts or new effects by copying them, or by inferring them inductively from observation, or by any other method of instruction by the environment. We use, rather, the method of trial and the elimination of error.” That is to say, conjecture and criticism. Learning must be something that newly created intelligences do, and control, for themselves.

I am not highlighting all these philosophical issues because I fear that AGIs will be invented before we have developed the philosophical sophistication to understand them and to integrate them into civilisation. It is for almost the opposite reason: I am convinced that the whole problem of developing AGIs is a matter of philosophy, not computer science or neurophysiology, and that the philosophical progress that will be essential to their future integration is also a prerequisite for developing them in the first place.

The lack of progress in AGI is due to a severe log jam of misconceptions. Without Popperian epistemology, one cannot even begin to guess what detailed functionality must be achieved to make an AGI. And Popperian epistemology is not widely known, let alone understood well enough to be applied. Thinking of an AGI as a machine for translating experiences, rewards and punishments into ideas (or worse, just into behaviours) is like trying to cure infectious diseases by balancing bodily humours: futile because it is rooted in an archaic and wildly mistaken world view. Without understanding that the functionality of an AGI is qualitatively different from that of any other kind of computer program, one is working in an entirely different field.
If one works towards programs whose “thinking” is constitutionally incapable of violating predetermined constraints, one is trying to engineer away the defining attribute of an intelligent being, of a person: namely creativity. Clearing this log jam will not, by itself, provide the answer. Yet the answer, conceived in those terms, cannot be all that difficult. For yet another consequence of understanding that the target ability is qualitatively different is that, since humans have it and apes do not, the information for how to achieve it must be encoded in the relatively tiny number of differences between the DNA of humans and that of chimpanzees. So in one respect I can agree with the AGI-is-imminent camp: it is plausible that just a single idea stands between us and the breakthrough. But it will have to be one of the best ideas ever.


The machine has no feelings, it feels no fear and no hope ... it operates according to the pure logic of probability. For this reason I assert that the robot perceives more accurately than man - MAX FRISCH



Realistic Replicas

Can Machines Think

‘I propose to consider the question “Can machines think?”’ Not my question but the opening of Alan Turing’s seminal 1950 paper which is generally regarded as the catalyst for the modern quest to create artificial intelligence. His question was inspired by a book he had been given at the age of 10: Natural Wonders Every Child Should Know by Edwin Tenney Brewster. The book was packed with nuggets that fired the young Turing’s imagination, including the following provocative statement:

“Of course the body is a machine. It is vastly complex, many times more complicated than any machine ever made with hands; but still after all a machine. It has been likened to a steam machine. But that was before we knew as much about the way it works as we know now. It really is a gas engine; like the engine of an automobile, a motor boat or a flying machine.”

If the body were a machine, Turing wondered: is it possible to artificially create such a contraption that could think like he did? This year is Turing’s centenary so would he be impressed or disappointed at the state of artificial intelligence? Do the extraordinary machines we’ve built since Turing’s paper get close to human intelligence? Can we bypass millions of years of evolution to create something to rival the power of the 1.5kg of grey matter contained between our ears? How do we actually quantify human intelligence to be able to say that we have succeeded in Turing’s dream? Or is the search to recreate “us” a red herring? Should we instead be looking to create a new sort of machine intelligence different from our own?

Last year saw one of the major landmarks on the way to creating artificial intelligence. Scientists at IBM programmed a computer called Watson to compete against the best the human race has to offer in one of America’s most successful game shows: Jeopardy! It might at first seem a trivial target to create a machine to compete in a general knowledge quiz. But answering questions such as: “William Wilkinson’s An account of the principalities of Wallachia and Moldavia inspired this author’s most famous novel” requires a very sophisticated piece of programming that can return the answer quickly enough to beat your rival to the buzzer. This was in fact the final question in the face-off with the two all-time champions of the game show. With the answer “Who is Bram Stoker?” Watson claimed the Jeopardy! crown.

Watson is not IBM’s first winner. In 1997 IBM’s supercomputer Deep Blue defeated reigning world chess champion Garry Kasparov. But competing at Jeopardy! is a very different test for a computer. Playing chess requires a deep logical analysis of the possible moves that can be made next in the game. Winning at Jeopardy! is about understanding a question written in natural language and quickly accessing a huge database to select the most likely answer in as fast a time as possible. The two sorts of intelligence almost seem perpendicular to each other. The intelligence involved in playing chess feels like a vertical sort of intelligence, penetrating deeply into the logical consequences of the game, while Jeopardy! requires a horizontal thought process, thinking shallowly but expansively over a large database. The program at the heart of Watson’s operating system is particularly sophisticated because it learns from its mistakes.
The algorithms that select the most likely answers are tweaked by Watson every time it gets an answer wrong so that next time it gets a similar question it has a better chance of getting it right. This idea of machine learning is a powerful new ingredient in artificial intelligence and is creating machines that are quickly doing things that the programmers hadn’t planned for. Despite Watson’s win, it did make some very telling mistakes. In the


category ‘US cities’ contestants were asked: “Its largest airport is named for a world war two hero; its second largest for a world war two battle.” The humans responded correctly with “Where is Chicago?” Watson went for Toronto, a city that isn’t even in the United States.

It’s this strange answer that gives away that it is probably a machine rather than a person answering the question. Getting a machine to pass itself off as human was one of the key hurdles that Turing believed a machine would need to pass in order to successfully claim the realisation of artificial intelligence. With the creation of the Loebner prize in 1991, monetary prizes were offered for anyone who could create a chatbot that judges could not distinguish from the chat of a human being. Called the Turing test, many working in AI regard the challenge as something of a red herring. The Loebner prize, in their opinion, has distorted the quest and has proved a distraction from a more interesting goal: creating machine intelligence that is different from our own.

The AI community is beginning to question whether we should be so obsessed with recreating human intelligence. That intelligence is a product of millions of years of evolution and it is possible that it is something that will be very difficult to reverse engineer without going through a similar process. The emphasis is now shifting towards creating intelligence that is unique to the machine, intelligence that ultimately can be harnessed to amplify our very own unique intelligence.

Already the descendants of Deep Blue are performing tasks that no human brain could get anywhere near. Blue Gene can perform 360 trillion operations a second, which compares with the 3 billion instructions per second that an average desktop computer can perform. This extraordinary firepower is being used to simulate the behaviour of molecules at an atomic level to explore how materials age, how turbulence develops in liquids, even the way proteins fold in the body. Protein folding is thought to be crucial to a number of degenerative diseases so these computer simulations could have amazing medical benefits.

But isn’t this number-crunching rather than the emergence of a new intelligence? The machine is just performing tasks that have been programmed by the human brain. It may be able to completely outperform my brain in any computational activity but when I’m doing mathematics my brain is doing so much more than just computation. It is working subconsciously, making intuitive leaps. I’m using my imagination to create new pathways which often involve an aesthetic sensibility to arrive at a new mathematical discovery. It is this kind of activity that many of us feel is unique to the human mind and not reproducible by machines.

For me, a test of whether intelligence is beginning to emerge is when you seem to be getting more out than you put in. Machines are human creations yet when what they produce is beginning to surprise the creators then I think you’re getting something interesting emerging.

Exciting new research is currently exploring how creative machines can be in music and art. Stravinsky once wrote that he could only be creative by working within strict constraints: “My freedom consists in my moving about within the narrow frame that I have assigned myself for each one of my undertakings.” By understanding the constraints that produce exciting music, computer engineers at Sony’s Computer Science Laboratory in Paris are beginning to produce machines that create new and unique forms of musical composition. One of the big successes has been to produce a machine that can do jazz improvisation live with human players. The result has surprised those who have trained for years to achieve such a facility.

Other projects have explored how creative machines can be at producing visual art. The Painting Fool is a computer program written by Simon Colton of Imperial College. Not everyone likes the art produced by the Painting Fool but it would be anaemic art if they did. What’s extraordinary is that the programmes in these machines are learning, changing and evolving, so that very soon the programmer no longer has a clear idea of how the results are being achieved and what the machine is likely to do next. It is this element of getting more out than you put in that represents something approaching emerging intelligence.

For me one of the most striking experiments in AI is the brainchild of the director of the Sony lab in Paris, Luc Steels. He has created machines that can evolve their own language. A population of 20 robots are first placed one by one in front of a mirror and they begin to explore the shapes they can make using their bodies in the mirror. Each time they make a shape they create a new word to denote it. For example the robot might choose a name for the action of putting the left arm in a horizontal position. Each robot creates its own unique language for its own actions.

The really exciting part is when these robots begin to interact with each other. One robot chooses a word from its lexicon and asks another robot to perform the action corresponding to that word. Of course the likelihood is that the second robot hasn’t a clue. So it chooses one of its positions as a guess. If they’ve guessed correctly the first robot confirms this and if not shows the second robot the intended position. The second robot might have given the action its own name, so it won’t yet abandon its choice, but it will update its dictionary to include the first robot’s word. As the interactions progress the robots weight their words according to how successful their communication has been, downgrading those words where the interaction failed.

The extraordinary thing is that after a week of the robot group interacting with each other a common language tends to emerge. By continually updating and learning, the robots have evolved their own language. It is a language that turns out to be sophisticated enough to include words that represent the concept of “left” and “right”. These words evolve on top of the direct correspondence between word and body position. The fact that there is any convergence at all is exciting but the really striking fact for me is that these robots have a new language that they understand yet the researchers at the end of the week do not comprehend until they too have interacted and decoded the meaning of these new words.

Turing might be disappointed that in his centenary year there are no machines that can pass themselves off as humans but I think that he would be more excited by the new direction artificial intelligence has taken. The AI community is no longer obsessed with reproducing human intelligence, the product of millions of years of evolution, but rather in evolving something new and potentially much more exciting.

Marcus du Sautoy is Simonyi professor for the public understanding of science and a professor of mathematics at the University of Oxford.
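A toy version of the naming game described above can be sketched in a few lines of Python. The population size, the scoring rule and the number of rounds below are illustrative guesses of mine, not details of the actual Sony experiment:

import random

# Toy "naming game": agents invent words for actions and converge by use.
ACTIONS = ["left_arm_up", "right_arm_up", "bow"]

class Robot:
    def __init__(self):
        self.lexicon = {a: {} for a in ACTIONS}      # action -> {word: score}

    def word_for(self, action):
        words = self.lexicon[action]
        if not words:
            words["w%d" % random.randrange(10_000)] = 1.0   # invent a new word
        return max(words, key=words.get)

    def update(self, action, word, success):
        words = self.lexicon[action]
        words[word] = words.get(word, 1.0) + (1.0 if success else -0.5)

robots = [Robot() for _ in range(20)]
for _ in range(5000):
    speaker, hearer = random.sample(robots, 2)
    action = random.choice(ACTIONS)
    word = speaker.word_for(action)
    # The hearer guesses among the actions it currently associates with that word.
    guesses = [a for a in ACTIONS if word in hearer.lexicon[a]]
    success = bool(guesses) and random.choice(guesses) == action
    speaker.update(action, word, success)
    hearer.update(action, word, True)    # hearer is shown the intended action and adopts the word

print([r.word_for("bow") for r in robots[:5]])   # preferred words tend to align over time

Even in this crude sketch, repeated interaction plus reinforcement of successful words tends to push the group towards a shared vocabulary, which is the core of the effect Steels reports.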


Who is Superior?

“Who would have thought by 2001, you would have four computers in your kitchen?” - Dr Rodney Brooks, director, MIT AI Lab

Will our hyper-intelligent coffee makers in 2050 suddenly decide to kill us like HAL in 2001? Or will humans be made redundant by a legion of intelligent machines?

No. Firstly, Dr Brooks and Mr Kurzweil believe that we will not wake up one day to find our lives populated with all manner of artificially intelligent devices.

Referring to Spielberg’s movie AI, in which a company creates a robot that bonds emotionally like a child, Dr Brooks says: “A scientist doesn’t wake up one day and decide to make a robot with emotions.”

Despite the rapid advance of technology, the advent of strong AI will be a gradual process, they say. “The road from here to there is through thousands of these benign steps,” Mr Kurzweil says.

In fact, they point out that artificial intelligence already pervades our lives, and machines will gradually become more intelligent and more pervasive. Fuel injection systems in our cars use learning algorithms. Jet turbines are designed using genetic algorithms, which are both examples of AI, says Dr Rodney Brooks, the director of MIT’s artificial intelligence laboratory. Every cell phone call and e-mail is routed using artificial intelligence, says Ray Kurzweil, an AI entrepreneur and the author of two books on the subject, The Age of Intelligent Machines and The Age of Spiritual Machines.

“We have hundreds of examples of what I call narrow AI, which is behaviour that used to require an intelligent adult but that can now be done by a computer,” Mr Kurzweil says. “It is narrow because it is within a specific domain, but the actual narrowness is gradually getting a bit broader,” he adds.

The near future

Right now, Dr Brooks says that artificial intelligence is about at the same place the personal computer industry was in 1978. In 1978, the Apple II was a year old and Atari had just introduced the 400 and 800. The choice of personal computers was pretty limited and what they could do was also relatively limited by today’s standards.

The metaphor may undersell AI’s successes. AI already is used in pretty advanced applications including helping with flight scheduling or reading X-rays.

But the popular conception of AI as seen with HAL in 2001, Commander Data in Star Trek, and David in the film AI, is not far away, Mr Kurzweil says. Within 30 years, he believes that we will have an understanding of how the human brain works that will give us “templates of intelligence” for developing strong AI. And Dr Brooks says that by 2050, our lives will be populated with all kinds of intelligent robots.

Sounds outlandish? “Who would have thought by 2001, you would have four computers in your kitchen,” he says, pointing to the computer chips in our coffee makers, refrigerators, stoves and radios.

Talking Robots

A Japanese company has developed the world’s first artificial intelligence “chat robots” to teach English.

SpeakGlobal’s online “robots” - which appear as male or female manga-style characters - look and make gestures that are identical to those of a human, speak aloud and can hold an interactive conversation with the student.

Developed primarily for the domestic market for people who want to learn to speak English, the technology can be adapted for any language around the world - although humans in the teaching profession may be less than delighted at the prospect.

“While many English conversation schools and online schools exist, some simply cannot afford this luxury,” the Kobe-based company said. “As well, the actual speaking time in such lessons is limited, to average about 10 minutes per one-hour session. In the case of beginner-level learners, it is considerably less.”

The English language is an important part of the school curriculum in Japan and is obligatory for six years in the public school system. But the number of Japanese who say they are actually comfortable conversing in English is very low, due in part to the focus on mastering reading and grammar in school and the lack of opportunities to practice speaking with native-English speakers. The result was the sudden growth in private language schools in the 1980s and 1990s, although Japan’s economic problems in recent years have seen many of those companies go under.

SpeakGlobal’s system, in place since late August, gives one-on-one spoken language instruction whenever the student requires, for a fee that the company says is modest. Access to one of the teacher robots starts at $15 per month, instead of the more usual fee of $300 a month for time with a human teacher at a private school.

Students are able to converse with their android teacher by speaking into a microphone that uses Dragon Naturally Speaking, recognized as one of the most advanced speech-recognition technologies in the world.

“With a lineup of various artificial intelligence chat robots, nearly any student learning English can find a suitable teacher to converse with at home, work or in school, 24/7,” the company said. “This is a turning point in the way English speaking is taught and practiced in Japan, Asia and around the world.”

Artificial Intelligence

If the brains behind a scientific initiative known as Russia 2045 are to be believed, life is about to get very, very interesting.

The promotional video for the group, which aims to create technology that can “download” the knowledge in a human brain, is like a trailer for a Hollywood sci-fi blockbuster — the booming intonations of a British announcer, dramatic, synthesized music and shots of the cosmos that make you feel like you’re entering hyperspace in the Millennium Falcon.

It is, in other words, not the type of thing you’d expect from a group that hopes to get the world comfortable with a future of synthetic brains and of “thought-controlled avatars” that would make your next business trip to Milwaukee or Tokyo wholly unnecessary. Instead of a “chicken in every pot,” they promise an “android robot servant for every home.”

In an e-mail, the project’s founder, Dmitry Itskov, described this vision in detail: “The creation of avatars will change everything in our societies: politics, economics, medicine, health care, food industry, construction methods, transportation, trade, banking, etc. The whole architecture of society will be transformed, there will be an increase in its self-organization, people will unite to fight the biggest and most universal problem of humankind — that of death.”

Whatever the viability of such claims, there’s little doubt that the pace of innovation is going to lead us into interesting places, and perhaps sooner than we think. The cost of high-powered computing drops ever lower, video games grow increasingly realistic, and, thanks largely to Apple’s voice-activated personal assistant Siri, people find more reasons to consult their mobile devices before the person sitting next to them.


Many have lamented that these communication breakthroughs have made us isolated. Texting is the new talking, or so the theory goes. The prospect of a robot that can take over the brain of your wife or best friend upon death? That takes fears of human social isolation to a whole new level.

So what happens when we don’t even have to get off the couch to go to a parent-teacher conference or have lunch with a client living 6,000 miles away? What if we can “transfer” our brains to an avatar before we die? What about robots that possess human-level intelligence?

Intelligence: the new frontier

So far, the widely held social-isolation theory has proved false. We may have reason to worry, but we’re worrying about the wrong thing: it’s not isolation, but intelligence, that is likely to change our world in fundamental ways.

“Almost every study I’ve ever seen has shown a neutral to positive effect [of connected devices on social interaction],” said Keith Hampton, a professor of communications at Rutgers University. “It doesn’t minimize the exceptions, but all the data suggests that people who use these things are more engaged in public life than others.”

Consider the following. If, a decade ago, someone asked you what would happen if we could all share information, photos and personal revelations with all of our friends, in real time, the answers might tend toward the negative — if not apocalyptic. The end of privacy. The end of intimacy. The end of the world as we know it. The reality of Facebook, of course, has demonstrated otherwise. There are downsides to any technology, Facebook included. But its convenience and utility have overtaken other concerns. We’ve adapted, and adapted quickly.

“It’s like in medicine,” said Nick Bostrom, the director of The Future of Humanity Institute at Oxford University. “Anesthesia was once seen as moral corruption. A heart transplant seemed obscene. We tend to think about things in a different mode, a different frame of mind, before we are actually using it. The future is often a projection screen where we cast our hopes and fears.”

Are We Ready?

It’s impossible to know how we’ll all react, but history does provide some clues. To get a sense of the potential hazards and dilemmas of more advanced technology, Charles Isbell, a professor of interactive computing at Georgia Tech, pointed to the “Media Equation,” a communication theory developed by two Stanford researchers in the 1990s. The research found that people interact with technology in ways similar to how they interact with other people. In one test, subjects were “tutored” by a computer and were then asked to evaluate the computer’s performance as they would a human tutor. Those who filled out the evaluation on the computer that “tutored” them were more positive than those who completed it on paper or at a different computer. As crazy as it sounds, people were less likely to hurt that computer’s feelings. Take a computer that’s as witty and brilliant as your best friend and the potential outcomes become more consequential. “In the future, when your ‘best friend’ Siri suggests that you buy something, and it turns out not to be the right thing, do you get to sue Apple?” Isbell asked.

In the not-so-distant future, such scenarios are possible. “The ability of those things to read facial expression and speak in a certain tone — it will be orders and orders of magnitude greater,” Isbell said. “[As with Facebook], the impact will be both profound and mundane.”

The implications go beyond commerce. Today, “social search” — providing search results based on data from others in your social networks — is in its infancy. Rutgers’ Hampton fears that social search could roll back some of the biggest social benefits born of the Internet. “People who do more online have more diverse social networks and broader access to information,” Hampton said. “It facilitates trust, tolerance and access. If your search for unique information is constrained by your social interaction, the access to unique information declines. People we are close to are very much like us. We have a greater risk of creating silos of information.”

Technology of increasing intelligence only makes that possibility more real. “We are all snowflakes, but we’re pretty predictable snowflakes once you figure out what type of snowflake you are,” Isbell said. As computer-aided predictive analysis gets more and more refined, a robot or device could use it to push us toward a pre-determined outcome, one that may not be in our best interest. Think about the computerized bartender that, once you hand over your credit card, mines Internet data and learns that you just lost your job. “Would you like another?” could become more calculated than convivial.

If history is any guide, it’s reasonable to think that the shock of major technological breakthroughs will be mitigated by the assimilation of all the incremental advances that came before it. The more valid question before us, then, is how to prepare for a day when machine intelligence becomes so sophisticated that its knowledge is used against us. And “against us” doesn’t mean some Orwellian, Terminator-type reality. It’s far more subtle, and far less sexy, than that. If a device can learn and has far greater memory capacity and recall than we do, it could process huge stores of data to better predict our behavior. It could then tailor its own behavior to achieve a desired result. And that’s even before we get to so-called super-intelligence, a theoretical reality where computers use their processing power to learn more quickly, and think bigger thoughts, than the humans that created them.

The very beginnings of such technology are beginning to appear in daily life. The Port Authority of New York recently announced plans to install hologram-like avatars at New York airports. The “female” avatars are expected to be motion-activated and give travelers basic information like the location of a bathroom. In their current form, the avatars aren’t interactive, but the Port Authority hopes that someday they will be able to answer a range of questions.

Resistance Is Futile

Ray Kurzweil, a futurist and creator of optical recognition technology — the type that converts scanned documents to editable text — predicts that we’ll have “strong” artificial intelligence by 2029. He believes that “singularity,” or the point where technology transcends human intelligence, is not some science fiction dream. His “law of accelerated returns” posits that because computing power expands exponentially, advances in fields that rely on computing power — like biotechnology and materials science — will also rapidly increase.

It’s the theory behind the “2045” date in Itskov’s ambitious project. Based on his own understanding of technological advancement, Itskov said that “at about 2045, humanity must enter a certain mode of evolutionary singularity, beyond which it becomes difficult to make predictions. In short, many exciting developments await us in the middle of this century, and all of them, inevitably, will be linked to the developments of new technology.”

Kurzweil said we have nothing to fear by it. “This is not an alien invasion from Mars. This is just expanding our intelligence. We have outsourced our personal and historical memories to the ‘cloud.’ It’s expanding already.”

It will have its downsides — “Fire cooks our food and also can burn down your house,” he said — but those can be addressed by devising “rapid response” systems that can counteract those who use technology for nefarious purposes. Trying to prevent, or “opting out” of, such advancements is a misguided, and futile, strategy.


Today’s Technology

Asimo

In what Honda claims is world-first technology, its Asimo robot is now able to move without being controlled by an operator. A significant improvement in intelligence and the physical ability to adapt to situations takes Asimo another step closer to practical use in an office or public space, according to Honda.

The robot was introduced in 2000 and has steadily developed to the point where it can run and walk on uneven slopes and surfaces, climb stairs, and reach for and grasp objects.

For the latest version, a new system continuously evaluates the input from multiple sensors, predicts the situation and then determines the behaviour of the robot, meaning that Asimo is now capable of responding to the movement of people and the surrounding situations. This technology also enables it to recognise faces and voices.

It also has strengthened legs, an expanded range of leg movement and a newly developed control technology that enables Asimo to change landing positions mid-movement. Its hands have tactile and force sensors embedded in the palm and in each finger. Combined with the object recognition technology, Asimo now possesses greater dexterity, such as picking up a container of liquid and twisting off the cap, or holding a soft paper cup without squashing it. One exciting prospect of this technology is that Asimo is now capable of making sign language expressions with its hands.

Honda has also introduced a new task-performing robot arm. It was developed by applying multi-joint and posture control technologies developed for Asimo, and can be controlled remotely to perform tasks in places which are difficult for people to access, such as under rubble at earthquake sites.

If you find Asimo’s capabilities hard to believe, check out its performances at www.world.honda.com/ASIMO/

Nao

Nao (pronounced “now”) is an autonomous, programmable humanoid robot developed by Aldebaran Robotics, a French startup company headquartered in Paris. The robot’s development began with the launch of Project Nao in 2004. On 15 August 2007, Nao replaced Sony’s robot dog Aibo as the robot used in the Robot Soccer World Cup (RoboCup) Standard Platform League (SPL), an international robotics competition. The Nao was used in RoboCup 2008 and 2009, and the NaoV3R was chosen as the platform for the SPL at RoboCup 2010.

The Nao Academics Edition is available for universities and laboratories for research and education purposes, and is projected for public distribution by 2011. In October 2010, the University of Tokyo purchased 30 Nao robots for their Nakamura Lab, with hopes of developing the robots into active lab assistants.

In the summer of 2010, Nao made global headlines with a synchronized dance routine at the Shanghai Expo in China. In December 2010, a Nao robot was demonstrated doing a stand-up comedy routine, and a new version of the robot was released, featuring sculpted arms and improved motors. In December 2011, Aldebaran Robotics released the Nao Next Gen, featuring enhanced software, a more powerful CPU and HD cameras.

Humanoid

Realistic-looking humanoids by Hanson Robotics are the first stage in highly developed human-looking robots with sensors and capacities to think, process, and mimic humans.

In Japan and in North America and Western Europe, scientists are working hard on creating robots that not only perform utilitarian functions such as construction, assistance, and cleaning, but who also act as personal assistants and provide entertainment. The purpose of these humanoids varies — some humanoids are there for amusement, such as the 2010 HRP-4C Japanese humanoid that can sing and dance; others can be used to help carry things, such as the Asimo; and others, called Gemini and Telenoids, can be used as proxies to conduct meetings and be virtual travelers for their hosts who control them remotely. Robots are even becoming able to construct and hold conversations, and to walk, even run, like human beings.

BigDog is the alpha male of the Boston Dynamics robots. It is a rough-terrain robot that walks, runs, climbs and carries heavy loads. BigDog is powered by an engine that drives a hydraulic actuation system. BigDog has four legs that are articulated like an animal’s, with compliant elements to absorb shock and recycle energy from one step to the next. BigDog is the size of a large dog or small mule; about 3 feet long, 2.5 feet tall and weighing 240 lbs.

BigDog’s on-board computer controls locomotion, servos the legs and handles a variety of sensors. BigDog’s control system keeps it balanced, navigates, and regulates its energetics as conditions vary. Sensors for locomotion include joint position, joint force, ground contact, ground load, a gyroscope, LIDAR and a stereo vision system. Other sensors focus on the internal state of BigDog, monitoring the hydraulic pressure, oil temperature, engine functions, battery charge and others.

In separate tests BigDog runs at 4 mph, climbs slopes up to 35 degrees, walks across rubble, climbs a muddy hiking trail, walks in snow and water, and carries a 340 lb load. BigDog set a world’s record for legged vehicles by traveling 12.8 miles without stopping or refueling.

The ultimate goal for BigDog is to develop a robot that can go anywhere people and animals can go. The program is funded by the Tactical Technology Office at DARPA. More BigDog videos are available at www.YouTube.com/BostonDynamics.
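The balance-keeping control loop described for BigDog can be illustrated, very loosely, by a generic feedback controller. Everything in this Python sketch - the gains, the read_gyro() and set_hip_torque() stand-ins - is hypothetical and is not Boston Dynamics’ software:

import time

# Illustrative balance loop: read tilt, command a correction, repeat.
KP, KD = 12.0, 1.5              # proportional and derivative gains (made up)
previous_error = 0.0

def read_gyro():
    return 0.02                 # placeholder: body pitch in radians

def set_hip_torque(value):
    print("torque command:", round(value, 3))

for _ in range(5):              # a real controller would loop continuously
    error = 0.0 - read_gyro()               # target is zero pitch
    derivative = error - previous_error
    set_hip_torque(KP * error + KD * derivative)
    previous_error = error
    time.sleep(0.01)            # roughly a 100 Hz control step

The point of the sketch is only the shape of the loop: sensors are read, an error from the desired posture is computed, and actuator commands are issued many times a second as conditions vary.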


What makes your robots so much goddamn better than human beings? - Well, they are not irrational or potentially homicidal maniacs, for starters. - I, Robot



The Future Robots In Our Lives

In recent years the mushrooming power, functionality and ubiquity of computers and the Internet have outstripped early forecasts about technology’s rate of advancement and usefulness in everyday life. Alert pundits now foresee a world saturated with powerful computer chips, which will increasingly insinuate themselves into our gadgets, dwellings, apparel and even our bodies.

Yet a closely related goal has remained stubbornly elusive. In stark contrast to the largely unanticipated explosion of computers into the mainstream, the entire endeavor of robotics has failed rather completely to live up to the predictions of the 1950s. In those days experts who were dazzled by the seemingly miraculous calculation ability of computers thought that if only the right software were written, computers could become the artificial brains of sophisticated autonomous robots. Within a decade or two, they believed, such robots would be cleaning our floors, mowing our lawns and, in general, eliminating drudgery from our lives.

Obviously, it hasn’t turned out that way. It is true that industrial robots have transformed the manufacture of automobiles, among other products. But that kind of automation is a far cry from the versatile, mobile, autonomous creations that so many scientists and engineers have hoped for. In pursuit of such robots, waves of researchers have grown disheartened and scores of start-up companies have gone out of business. It is not the mechanical “body” that is unattainable; articulated arms and mechanisms adequate for manual work already exist, as the industrial robots attest. Rather it is the computer-based artificial brain that is still well below the level of sophistication needed to build a humanlike robot.

Nevertheless, I am convinced that the decades-old dream of a useful, general-purpose autonomous robot will be realized in the not too distant future. By 2010 we will see mobile robots as big as people but with cognitive abilities similar in many respects to those of a lizard. The machines will be capable of carrying out simple chores, such as vacuuming, dusting, delivering packages and taking out the garbage. By 2040, I believe, we will finally achieve the original goal of robotics and a thematic mainstay of science fiction: a freely moving machine with the intellectual capabilities of a human being.

Reasons for Optimism

In light of what I have just described as a history of largely unfulfilled goals in robotics, why do I believe that rapid progress and stunning accomplishments are in the offing? My confidence is based on recent developments in electronics and software, as well as on my own observations of robots, computers and even insects, reptiles and other living things over the past 30 years.

The single best reason for optimism is the soaring performance in recent years of mass-produced computers. Through the 1970s and 1980s, the computers readily available to robotics researchers were capable of executing about one million instructions per second (MIPS). Each of these instructions represented a very basic task, like adding two 10-digit numbers or storing the result in a specified location in memory. In the 1990s computer power suitable for controlling a research robot shot through 10 MIPS, 100 MIPS and has lately reached 50,000 MIPS in a few high-end desktop computers with multiple processors. Thus, functions far beyond the capabilities of robots in the 1970s and 1980s are now coming close to commercial viability.

The Chess Champion

Chess fans remember many dramatic chess matches in the 20th century. I recall being transfixed by the interminable 1972 match between challenger Bobby Fischer and defending champion Boris Spassky for the World Chess Championship. The most dramatic chess match of the 20th century was, in my opinion, the May 1997 rematch between the IBM supercomputer Deep Blue and world champion Garry Kasparov, which Deep Blue won 3½–2½.

I was invited by IBM to attend the rematch. I flew to New York City to watch the first game, which Kasparov won. I was swayed by Kasparov’s confidence and decided to go back to Houston, missing the dramatic second game, in which Kasparov lost—both the game and his confidence.

While this victory of machine over man was considered by many a triumph for artificial intelligence (AI), John McCarthy (Sept. 4, 1927–Oct. 24, 2011), who not only was one of the founding pioneers of AI but also coined the very name of the field, was rather dismissive of this accomplishment. “The fixation of most computer chess work on success in tournament play has come at scientific cost,” he argued. McCarthy was disappointed by the fact that the key to Deep Blue’s success was its sheer compute power rather than a deep understanding, exhibited by expert chess players, of the game itself.

AI’s next major milestone occurred last February with IBM’s Watson program winning a “Jeopardy!” match against Brad Rutter, the biggest all-time money winner, and Ken Jennings, the record holder for the longest championship streak. This achievement was also dismissed by some. “Watson doesn’t know it won on ‘Jeopardy!’,” argued the philosopher John Searle, asserting that “IBM invented an ingenious program, not a computer that can think.”

In fact, AI has been controversial from its early days. Many of its early pioneers overpromised. “Machines will be capable, within 20 years, of doing any work a man can do,” wrote Herbert Simon in 1965. At the same time, AI’s accomplishments tended to be underappreciated. “As soon as it works, no one calls it AI anymore,” complained McCarthy. Yet it is recent worries about AI that indicate, I believe, how far AI has come.

In April 2000, Bill Joy, the technologists’ technologist, wrote a “heretic” article entitled “Why the Future Doesn’t Need Us” for Wired magazine. “Our most powerful 21st-century technologies—robotics, genetic engineering, and nanotech—are threatening to make humans an endangered species,” he wrote. Joy’s article was mostly ignored, but in August 2011 Jaron Lanier, another widely respected technologist, wrote about the impact of AI on the job market. In the not-too-far future, he predicted, it would just be inconceivable to put a person behind the wheel of a truck or a cab. “What do all those people do?” he asked. Slate magazine ran a series of articles in September 2011 titled “Will Robots Steal Your Job?” According to writer Farhad Manjoo, who detailed the many jobs we can expect to see taken over by computers and robots in the coming years, “You’re highly educated. You make a lot of money. You should still be afraid.”

In fact, worries about the impact of technology on the job market are not only about the far, but also the not too far future. In a recent book, Race Against The Machine: How the Digital Revolution is Accelerating Innovation, Driving Productivity, and Irreversibly Transforming Employment and the Economy, by Erik Brynjolfsson and Andrew McAfee, the authors argue that “technological progress is accelerating innovation even as it leaves many types of workers behind.” Indeed, over the past 30 years, as we saw the personal computer morph into tablets, smartphones, and cloud computing, we also saw income inequality grow worldwide. While the loss of millions of jobs over the past few years has been attributed to the Great Recession, whose end is not yet in sight, it now seems that technology-driven productivity growth is at least a major factor. The fundamental question is whether Herbert Simon was right, even if his timing was off, when he said “Machines will be capable ... of doing any work a man can do.” While AI has been proven to be much more difficult than early pioneers believed, its inexorable progress over the past 50 years suggests that Simon may have been right. Bill Joy’s question, therefore, deserves not to be ignored. Does the future need us?



Turing Test And Robotics

The Reality

Will this summer be remembered as a turning point in the story of man versus machine? On June 23, with little fanfare, a computer program came within a hair’s breadth of passing the Turing test, a kind of parlour game for evaluating machine intelligence devised by mathematician Alan Turing more than 60 years ago. This wasn’t as dramatic as Skynet becoming self-aware in the Terminator films, or HAL killing off his human crew mates in 2001, A Space Odyssey. But it was still a sign that machines are getting better at the art of talking – something that comes naturally to humans, but has always been a formidable challenge for computers.

Turing proposed the test – he called it “the imitation game” – in a 1950 paper titled “Computing machinery and intelligence”. Back then, computers were very simple machines, and the field known as Artificial Intelligence (AI) was in its infancy. But already scientists and philosophers were wondering where the new technology would lead. In particular, could a machine “think”?

Turing considered that question to be meaningless, so proposed the imitation game as a way of sidestepping it. Better, he argued, to focus on what the computer can actually do: can it talk? Can it hold a conversation well enough to pass for human? If so, Turing argued, we may as well grant that the machine is, at some level, intelligent.

In a Turing test, judges converse by text with unseen entities, which may be either human or artificial. (Turing imagined using teletype; today it’s done with chat software.) A human judge must determine, based on a five-minute conversation, whether his correspondent is a person or a machine.


In the main room sits the interrogator, who cannot see the human or the robot; both are hidden behind walls. The interrogator puts questions to each and tries to work out which is the human and which is the robot. If the robot is mistaken for the human, it passes the Turing test. One criticism is that the test is limited precisely because the interrogator cannot see either participant: appearance and the ability to perform physical human tasks also matter when trying to replicate human intelligence more fully.

The Turing Test

The Turing test is a test of a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of an actual human. In the original illustrative example, a human judge engages in a natural language conversation with a human and a machine designed to generate performance indistinguishable from that of a human being. All participants are separated from one another. If the judge cannot reliably tell the machine from the human, the machine is said to have passed the test. The test does not check the ability to give the correct answer; it checks how closely the answer resembles typical human answers. The conversation is limited to a text-only channel such as a computer keyboard and screen so that the result is not dependent on the machine’s ability to render words into audio. The test was introduced by Alan Turing in his 1950 paper “Computing Machinery and Intelligence,” which opens with the words: “I propose to consider the question, ‘Can machines think?’” Since “thinking” is difficult to define, Turing chooses to “replace the question by another, which is closely related to it and is expressed in relatively unambiguous words.” Turing’s new question is: “Are there imaginable digital computers which would do well in the imitation game?” This question, Turing believed, is one that can actually be answered. In the remainder of the paper, he argued against all the major objections to the proposition that “machines can think”. In the years since 1950, the test has proven to be both highly influential and widely criticized, and it is an essential concept in the philosophy of artificial intelligence.

In one room is the human, answering the questions to the best of their ability.

In the other room is the robot, pretending to be human and answering the questions as convincingly as possible.
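To make the protocol above concrete, here is a small sketch in Python of one round of the imitation game. The function names, the five-question limit and the toy judge and respondents are all invented for illustration; they are not part of Turing's paper or of any real contest such as the Loebner Prize.

    import random

    def run_imitation_game(judge_ask, judge_guess, human, machine, n_questions=5):
        # One round of the imitation game (sketch).
        # judge_ask(history)       -> next question for a given hidden respondent
        # judge_guess(transcripts) -> label ('A' or 'B') the judge believes is the human
        # human, machine           -> callables mapping a question to an answer string
        # Returns True if the machine is mistaken for the human, i.e. it "passes" this round.
        players = {'A': human, 'B': machine} if random.random() < 0.5 else {'A': machine, 'B': human}
        transcripts = {'A': [], 'B': []}                  # the walls: the judge sees only labels
        for _ in range(n_questions):
            for label, player in players.items():
                question = judge_ask(transcripts[label])
                transcripts[label].append((question, player(question)))
        human_label = 'A' if players['A'] is human else 'B'
        return judge_guess(transcripts) != human_label    # judge picked the machine as the human

    # Toy usage: a judge that asks one fixed question and guesses at random.
    ask = lambda history: "What does it feel like to be tired?"
    guess = lambda transcripts: random.choice(['A', 'B'])
    human_player = lambda q: "Heavy eyes, slow thoughts, and too much coffee."
    machine_player = lambda q: "Tiredness is not something I experience."
    print(run_imitation_game(ask, guess, human_player, machine_player))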

8.6 Million Robots

There are 8.6 million active robots in the world at present (according to the IEEE). However, the notion of a world where machines can mimic human behaviour is nothing new. Society has forever been fascinated with the notion of helpful, diligent robots assisting mankind in our day-to-day lives. From Star Wars’ C-3PO to Disney Pixar’s WALL-E, the ideal robot is the cute, obedient servant who can do tasks on command. The International Federation of Robotics predicts that in 2012 alone, sales of robots will increase by 20%. The question is, with the latest cybernetic creations being publicised in the press and entering human trials, are we fast becoming too obsessed with creating technology to do important jobs for us? Robots are now being designed to replace some of the most important professions within our society.


Smart Robots

Turing expected machines to be playing his imitation game convincingly by the end of the 20th century. One hundred years after Alan Turing was born, his eponymous test remains an elusive benchmark for artificial intelligence. Now, for the first time in decades, it’s possible to imagine a machine making the grade.

By the mid-1980s, the Turing test had been largely abandoned as a research goal (though it survives today in the annual Loebner prize for realistic chatbots, and momentarily realistic advertising bots are a regular feature of online life). However, it helped spawn the two dominant themes of modern cognition and artificial intelligence: calculating probabilities and producing complex behavior from the interaction of many small, simple processes. Unlike the so-called brute-force computational approaches seen in programs like Deep Blue, the computer that famously defeated chess champion Garry Kasparov, these are considered accurate reflections of at least some of what occurs in human thought. As of now, such probabilistic and connectionist approaches inform many real-world artificial intelligences: autonomous cars, Google searches, automated language translation, the IBM-developed Watson program that so thoroughly dominated at “Jeopardy!”. They remain limited in scope — “If you say, ‘Watson, make me dinner,’ or ‘Watson, write a sonnet,’ it explodes,” said Stanford computer scientist Noah Goodman — but they raise the alluring possibility of applying them to unprecedentedly large, detailed datasets.

Turing was one of the 20th century’s great mathematicians, a conceptual architect of modern computing whose code breaking played a decisive part in World War II. His test, described in a seminal dawn-of-the-computer-age paper, was deceptively simple: if a machine could pass for human in conversation, the machine could be considered intelligent.

Artificial intelligences are now ubiquitous, from GPS navigation systems and Google algorithms to automated customer service and Apple’s Siri, to say nothing of Deep Blue and Watson — but no machine has met Turing’s standard. The quest to do so, however, and the lines of research inspired by the general challenge of modeling human thought, have profoundly influenced both computer and cognitive science. There is reason to believe that code kernels for the first Turing-intelligent machine have already been written.

“Two revolutionary advances in information technology may bring the Turing test out of retirement,” wrote Robert French, a cognitive scientist at the French National Center for Scientific Research, in an Apr. 12 Science essay. “The first is the ready availability of vast amounts of raw data — from video feeds to complete sound environments, and from casual conversations to technical documents on every conceivable subject. The second is the advent of sophisticated techniques for collecting, organizing, and processing this rich collection of data.” “Is it possible to recreate something similar to the subcognitive low-level association network that we have? That’s experiencing largely what we’re experiencing? Would that be so impossible?” French said.

When Turing first proposed the test — poignantly modeled on a party game in which participants tried to fool judges about their gender; Turing was cruelly persecuted for his homosexuality — the idea of “a subcognitive low-level association network” didn’t exist. The idea of replicating human thought, however, seemed quite possible, even relatively easy. The human mind was thought to be logical. Computers run logical commands. Therefore our brains should be computable. Computer scientists thought that within a decade, maybe two, a person engaged in dialogue with two hidden conversants, one computer and one human, would be unable to reliably tell them apart.

That simplistic idea proved ill-founded. Cognition is far more complicated than mid-20th-century computer scientists or psychologists had imagined, and logic was woefully insufficient in describing our thoughts. Appearing human turned out to be an insurmountably difficult task, drawing on previously unappreciated human abilities to integrate disparate pieces of information in a fast-changing environment. “Symbolic logic by itself is too brittle to account for uncertainty,” said Goodman, who models intelligence in machines. Nevertheless, “the failure of what we now call old-fashioned AI was very instructive. It led to changes in how we think about the human mind. Many of the most important things that have happened in cognitive science” emerged from these struggles, he said.


“Suppose, for a moment, that all the words you have ever spoken, heard, written, or read, as well as all the visual scenes and all the sounds you have ever experienced, were recorded and accessible, along with similar data for hundreds of thousands, even millions, of other people. Ultimately, tactile and olfactory sensors could also be added to complete this record of sensory experience over time,” wrote French in Science, with a nod to MIT researcher Deb Roy’s recordings of 200,000 hours of his infant son’s waking development. He continued, “Assume also that the software exists to catalog, analyze, correlate, and cross-link everything in this sea of data. These data and the capacity to analyze them appropriately could allow a machine to answer heretofore computer-unanswerable questions” and even pass a Turing test.

Artificial intelligence expert Satinder Singh of the University of Michigan was cautiously optimistic about the prospects offered by data. “Are large volumes of data going to be the source of building a flexibly competent intelligence? Maybe they will be,” he said. “But all kinds of questions that haven’t been studied much become important at this point. What is useful to remember? What is useful to predict? If you put a kid in a room, and let him wander without any task, why does he do what he does?” Singh continued. “All these sorts of questions become really interesting. In order to be broadly and flexibly competent, one needs to have motivations and curiosities and drives, and figure out what is important,” he said. “These are huge challenges.”

Should a machine pass the Turing test, it would fulfill a human desire that predates the computer age, dating back to Mary Shelley’s Frankenstein or even the golems of Middle Age folklore, said computer scientist Carlos Gershenson of the National Autonomous University of Mexico. But it won’t answer a more fundamental question. “It will be difficult to do — but what is the purpose?” he said.

Citations: “Dusting Off the Turing Test,” by Robert M. French, Science, Vol. 336, No. 6088, April 13, 2012; “Beyond Turing’s Machines,” by Andrew Hodges, Science, Vol. 336, No. 6088, April 13, 2012.


A computer would deserve to be called intelligent if it could deceive a human into believing that it was human. - Alan Turing



DARPA
The Competition

DARPA aims to create those advancements through competition. The agency is giving teams only two years to pull off the construction and operation of robots that are more capable and more flexible than any built before. The machines will have to operate mostly on their own in areas where communication with human operators may be spotty or intermittent—perhaps only good enough to convey broad commands such as “drive that vehicle” or “remove this pile of rubble.” DARPA calls this capability supervised autonomy, and it’s a big challenge in itself that will require advances in artificial intelligence—even without DARPA’s added requirement for extreme flexibility.

DARPA is still determining the specific tasks the robots will have to perform, but they are likely to include driving a vehicle, getting out of the vehicle to climb over rubble, opening a door, climbing a ladder, using power tools to break down a wall, and other basic repair tasks. That means the robots will have to incorporate sophisticated planning capabilities as well as the ability to automatically balance and remain stable while traveling over a variety of surfaces, and most likely decide for themselves how to use dexterous appendages (like hands) to operate tools designed for humans. While DARPA stresses that the winning bots don’t have to be humanoid, it seems probable that they will at least approximate human appearance, since they will have to operate in human-created environments and use our tools. Arms, legs, hands, and optical sensors spaced for 3D vision seem par for the course.

Seven already-selected teams on Track A, from Carnegie Mellon University, Virginia Tech, NASA, and elsewhere, are tasked with building the actual robots. Those teams came together at yesterday’s meeting, from which the press was barred. There are also 11 Track B teams that will be working on software to run the robots. That includes groups from Lockheed Martin, the University of Kansas, RE2, and others. Teams in both tracks are receiving DARPA funding to complete their work. And there’s Track C of the DRC, which provides for open competition among teams from around the world that want to compete in creating software for driving a humanoid robot. If accepted, teams will have access to an open-source DRC Simulator being developed through the Open Source Robotics Foundation, or OSRF. The simulator is a virtual environment, still in beta testing, that will incorporate computer models of robots and sample environments in which they can operate. Team members will be able to log in to upload software to simulated robots, test code, and see how well the virtual bots can handle DARPA-assigned tasks. The DRC Simulator builds on tools that have already been in development by OSRF, including its Robot Operating System and Gazebo simulator. “When everyone has access to good tools that handle the basics of programming a robot,” OSRF CEO Brian Gerkey tells PM, “we’ll have a much broader base of engineers inventing robot applications. And that’s what we really need: more good ideas for what robots can do in our lives.”

A DRC qualifying event in May 2013 will pit Track B and Track C teams head-to-head. A pared-down field will compete in a Virtual Robotics Challenge in June. Up to six winning teams will then be assigned their own Government Furnished Equipment (or GFE) robot, otherwise known as ATLAS.
ATLAS is in development by Boston Dynamics, which is famous for developing BigDog, a four-legged robotic pack animal capable of scrambling after soldiers through just about any terrain with their gear. ATLAS’s predecessor robot, Pet-Proto, already displays a disturbingly human-like ability to clomp up and down stairs on two legs and climb over obstacles with the help of two armlike appendages. “ATLAS and Pet-Proto are quite different,” Boston Dynamics founder and president Marc Raibert tells PM. ATLAS should be even more capable—for example, by incorporating hands. ATLAS also includes 28 hydraulically actuated joints and a sensor array for a head that includes a laser range finder and 3D cameras. At 180 pounds and 69 inches tall, the machine is roughly the same size and weight as a man. Like its predecessor, however, it will rely on a tether for external power. It also runs hot, requiring cooling water to circulate through its body at the rate of 2 gallons per minute.

The winning Track B and Track C teams will each get up to $750,000 to continue their work, leading up to a physical challenge in which robots will have to perform real-world tasks in what DARPA terms an authentic disaster scenario. Up to eight top teams could win $1 million each in that challenge, which will put them in the running for a $2 million top prize to be awarded in a second physical challenge. Now, if you’re independently wealthy and feel like financing your own team, the challenge allows for a go-for-broke Track D that invites teams to build their own hardware and software for competition in the two physical challenges. All of this promises to dramatically advance the state of the art in robotics. “The field of robotics has just scratched the surface so far,” says Raibert. “You ain’t seen nothing yet!”

The Department of Defense’s Advanced Research Projects Agency (DARPA) is moving ahead at full steam in its quest to develop, or spur the development of, humanoid robots that it can use for its own purposes — in this case, disaster response in areas too dangerous for humans but in need of a human touch (DARPA cites the Fukushima nuclear reactor meltdown as one example). On Thursday, DARPA announced it would begin immediately accepting admissions from the general public for the latest phase in its funding competition to build such robots, the DARPA Robotics Challenge (DRC), a contest that actually started back in April but was at that time restricted to entries by teams capable of building actual hardware robots. Now DARPA is opening the door to anyone, accepting admissions through February 2013 of “virtual robots” created using a free open-source software program, the DRC Simulator, that DARPA has made available for download on its DRC website.

“One of DARPA’s goals for the Challenge is to catalyze robotics development across all fields so that we as a community end up with more capable, more affordable robots that are easier to operate,” said Gill Pratt, the program manager for the competition, in a statement posted on DARPA’s news website on Thursday. “The value of a cloud-based simulator is that it gives talent from any location a common space to train, design, test and collaborate on ideas without the need for expensive hardware and prototyping. That opens the door to innovation.”

In all, DARPA is running four different phases of the competition, the first two of which, Tracks A and B, the agency has already concluded, with DARPA agreeing to sponsor a total of 18 different teams (seven teams in Track A and 11 teams in Track B) fielded by companies and research institutions around the country. No surprise, Carnegie Mellon University in Pittsburgh, Pennsylvania — an institution renowned for its pioneering robotics research, among other hi-tech advances — was behind three of the teams chosen to receive funding in the first two rounds. Carnegie Mellon’s “Tartan Rescue Team,” which was led by Tony Stentz, director of the National Robotics Engineering Center, netted $3 million from DARPA in Track A and could receive up to an additional $1 million.
The team is working on a robot called CHIMP (CMU Highly Intelligent Mobile Platform). The final grand prize of the competition is worth $2 million and will be awarded following the final challenge event in December 2014. But before that, contestants will have to duke it out in a June 2013 “virtual challenge event” in which they’ll get to write the software to control a full-size humanoid robot supplied specifically for the challenge, the aptly named Government-Furnished Equipment (GFE) robot, developed by robotics firm Boston Dynamics.
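The DRC Simulator described above builds on the Robot Operating System (ROS) and the Gazebo simulator, and teams interact with simulated robots by writing software against those tools. As a rough illustration of what that kind of code looks like, here is a minimal ROS node in Python that streams a velocity command to a simulated vehicle. The node name and the /cmd_vel topic are conventional ROS examples, not interfaces documented for the DRC Simulator itself, so treat this as a sketch of the style rather than working challenge code.

    #!/usr/bin/env python
    # Minimal rospy node: publish a constant forward velocity at 10 Hz.
    # Assumes a ROS installation and a simulated robot listening on /cmd_vel.
    import rospy
    from geometry_msgs.msg import Twist

    def main():
        rospy.init_node('drc_sim_driver_sketch')          # hypothetical node name
        pub = rospy.Publisher('/cmd_vel', Twist, queue_size=10)
        rate = rospy.Rate(10)                             # 10 Hz control loop
        cmd = Twist()
        cmd.linear.x = 0.5                                # drive forward at 0.5 m/s
        while not rospy.is_shutdown():
            pub.publish(cmd)
            rate.sleep()

    if __name__ == '__main__':
        main()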


Binary Battle Warfare

American forces have deployed robots equipped with automatic weapons in Iraq, the first battlefield use of machines capable of waging war by remote control. A US division of British defense company QinetiQ revealed that the 3rd Infantry Division, which is based south of Baghdad, purchased three Talon Sword robots for operations in Iraq.

Sword robots are a modified version of the track-wheeled bomb disposal devices in use around the world. Soldiers operate the robots with a specially modified laptop, complete with joystick controls and a “kill button” that terminates their functions if they go awry. According to the industry magazine Defence News, the US military has 80 remote-controlled armoured robots on order, but funding constraints have delayed delivery of all but a fraction of that number. The devices are armed with M240 machine guns or .50 calibre rifles and are likely to be most valuable during raids on suspected enemy compounds. Commanders can minimise casualties by putting a machine in a situation where there is a high risk of an ambush or a booby trap-style explosion. “Anytime you utilise technology to take a US service member out of harm’s way, it is worth every penny,” said John Saitta, a consultant on the project. “These armed robots can be used as a force multiplier to augment an already significant force in the battle space.” While the concept of robots at war conjures images of Arnold Schwarzenegger’s Terminator movie character, current models are more mundane. Far from taking on human characteristics, the robots look like small stripped-down tanks. The Sword’s uses are limited by the quality of the terrain and the intensity of the battlefield mission. At just over £100,000 per unit, the comparatively low cost is a boon for a Pentagon struggling to rein in operational expenditure.

Operations in Iraq and Afghanistan have seen constant technological innovation by the US military. Last month the US air force displayed an unmanned drone, the Reaper, that is capable of dropping 1.5 tons of laser-guided bombs. Until recently the field of surveillance has seen the fastest spread of robots. Airborne drones come in all shapes and sizes. Drones as small as a 12-inch classroom ruler are used to feed footage back to military Tactical Operations Centres (TOCs). No longer just the radio room, TOCs are now multimedia hubs providing a range of battlefield views to commanders. It’s a role set to grow as the US military acquires new offensive capabilities, ushering in a new era of war by remote control.

The US military yesterday announced the death of an al-Qa’eda leader responsible for one of the civil war’s most significant attacks, the destruction of the al-Askari shrine in Samarra. A statement said Haitham al-Badr, the top al-Qa’eda leader in Salahuddin province, was killed during an operation that mopped up 80 radicals last week. Iraq’s political woes grew deeper yesterday as Prime Minister Nouri al-Maliki rejected the resignations of six Sunni Muslim cabinet members, including deputy premier Salam al-Zobaie. The ministers reiterated their intention to quit as their Accordance Front party withdraws from the national unity government.

With military recruitment a constant struggle, the U.S. Army is coming up with a new way to fill out its ranks: it is going to build them. This week, the Army begins a “drive-off” to see which contractor will provide up to 1,000 bomb-clearing robots by year’s end, with a possible follow-up order for 2,000 more. The requirement is for a remote-controlled, wireless robot that weighs 50 pounds or less “to be used for Improvised Explosive Device (IED) detection and identification,” according to the Pentagon’s solicitation. IEDs have accounted for 48.5% of the 3,270 U.S. troops killed in action in Iraq. Finding — and disarming — such roadside bombs before they detonate is one way to curb the bloodshed. “You send out a robot to interrogate these things to see if it is, in fact, a roadside bomb or if it’s just trash,” Army Colonel John Castles of the 82nd Airborne’s 2nd Brigade Combat Team said from Iraq last week. “They’re a huge benefit to what we’re trying to do.”

This is an “urgent” requirement, the military notes, and so there won’t be any of those lengthy development phases common to military hardware. In fact, the Army wants the first pair of robots delivered within 10 days of the contract award, expected to happen Sept. 14. This week, several contenders are putting their machines through their paces, running them over and around rocks, through rough terrain and water, and ensuring that each robot can peer into, and under, vehicles — and then let its human operator know what it has found. The need is so pressing that the Pentagon is eliminating many of the hoops suppliers usually have to jump through. This time around, instead of filling in forms and submitting paperwork to qualify as a bidder, those interested in participating merely have to register at this week’s competition. Among the front-runners is iRobot, the same Massachusetts-based company that makes the Roomba vacuum cleaner.

Robots are playing an ever-increasing role in the war. iRobot, for example, has about 1,000 of its PackBots, ranging in price from $80,000 to $150,000, in Iraq scoping out IEDs, buildings and other places too dangerous for flesh-and-blood troops. Other companies have robotic Iraq veterans too. In Defense News, Kris Osborne reports that Exponent, a California firm, has had its MARCbot series since April 2004. They cost about $10,000 apiece, weigh 25 lbs and can be used at night. Meanwhile, Defense News says that Foster-Miller, a Massachusetts company, may propose a lighter-weight version of its current 115 to 140 lb TALON model to try to win the bid.

Nevertheless, like troops, robots can be wounded — and even damaged beyond repair. Some end up at Baghdad’s Camp Victory, at the Joint Robotics Repair Facility. “A lot of times, they send this guy down and the insurgents are waiting,” says one of the guys who fixes them. “They shoot at it because they know it is effective.” But robots, thankfully, have no next of kin.

We all know that robots can be produced en masse and programmed to kill. According to robot expert Noel Sharkey, who spoke at the UK’s leading defense study institution, RUSI, 4,000 robots were deployed in the Iraq War, but they were all “dumb machines with very limited sensing capability.” Apparently, they can’t tell the difference between civilians and terrorists. Not very useful. But the next generation will be an improvement. The US will be spending a whopping $24 billion on unmanned systems technology by 2013, and I’m guessing that at least a fraction of this cost is going towards robot intelligence—i.e. programming them to figure out who to kill, what weapon to use, and when to take one for the team. Plus we’ve just gotten word of a security warning about a new generation of robot armies hailing from countries like India, China, Israel, and Russia. This could be the forecast of a serious all-out bot-on-bot international war. Stay tuned.




Machines will be capable, within twenty years, of doing any work that a man can do. - Herbert Simon.


The Drone

At first glance the X-47B might look like the portly little brother of the B2 Stealth Bomber, but among unmanned aerial vehicles it’s the new badass on the block your mom warned you about. Built by Northrop Grumman, the X-47B is the Navy’s newest UAV, and the first true robotic fighter in existence. Unlike the Predator and Reaper drones currently in operation in the skies of Iraq and Afghanistan that are controlled by pilots on the ground, the X-47B is fully automated–it flies itself. From takeoff, to making turns, to landing, no human is involved–the entire flight is handled by the aircraft autonomously. We covered its historic maiden test flight on February 4th of this year. On March 1st it flew a second time–on March 4th a third. The second and third flights are impressive achievements for the Navy and attest to the robustness of the new jet. The tests bring the fighter closer to the goal of achieving aircraft carrier deployment and retrieval by 2013. Assuming it passes all the required testing, the X-47B will represent a major achievement for U.S. aerial combat operations by coupling an intelligent, automated strike aircraft with the reach of the Navy’s aircraft carrier fleet.

Modeled after the B2, its tailless design makes it more difficult to detect by radar. It has a ceiling of 40,000 ft, a 4,500 lb weapon load capacity, and it can travel at high subsonic speeds. It also has a range of 2,100 nautical miles, approximately the distance between Washington D.C. and San Francisco, and superior to that of the F-18 Hornet.

The initial test flight took the X-47B to a maximum altitude of 5,000 feet and a maximum speed of 180 knots. They also tested its ability to land at a precise point to simulate the requirements of hooking a wire on the deck of an aircraft carrier, albeit a completely still one – it nailed its target perfectly. The second and third flights were meant to push the envelope, bringing the X-47B up to 7,500 feet and 200 knots on the second flight, lasting 39 minutes, and 7,500 feet and 180 knots on the third flight, lasting 41 minutes. They also tested the robotic aircraft’s ability to maintain a steady course in the face of turbulence and changing crosswinds – something it’s going to need to do very well if it’s going to be landing on the not-so-steady decks of aircraft carriers.

X-47B

When the X-47B joins the ranks of U.S. military combat operations it will be joining a mechanized army of 7,000+ UAVs and 2,000+ ground robots already on the battlefield. Seen from a broad perspective, the X-47B represents just one step in the inevitable march towards automating war. With the advent of long-range missiles, soldiers are already receding from the front. Soon they will be replaced with tireless, fearless soldiers who don’t need to eat or sleep and have absolute loyalty – unless they malfunction or get hacked by some computer whiz for whom video games just don’t cut it anymore. War could be waged nonstop. How this tips the balance of power between “haves” and “have-nots” will be something for the whole world to watch.

The X-47B looks like something out of a sci-fi space opera; more Cylon raider than fighter jet. The analogy is not so far-fetched. It recently passed a series of airworthiness evaluations and the next step will be testing autonomous landings aboard an aircraft carrier sometime next year. The significance may be lost on some. Among all the tasks that confront pilots, none, including air-to-air combat, is more stressful than landing an aircraft on a moving surface, smaller than most parking lots, that is pitching in all three dimensions. Further, the complex mix of men and machines on crowded carrier decks makes them among the most dangerous work spaces in existence. An aircraft that will manage these tasks with very limited human guidance is a major technological accomplishment.

Feats such as these have sparked the imagination of many and raise questions about the progress of autonomy in weapons. Lately, the blogosphere has come alight with speculation about the development of autonomous robots on the battlefield. David Betz from the King’s College War Studies department has suggested that technology, policy and military practice are all leading towards a future of autonomous robots in the conduct of warfare. Steve Metz of the US Army War College has drawn similar conclusions, noting the growing challenge of recruiting (and affording) sufficient numbers of troops to deal with the many challenges confronting the United States. LSE professor Christopher Coker’s Waging War Without Warriors discusses so-called “Transhuman Warfare” in terms reminiscent of James Cameron’s Terminator franchise.

We have to be clear – there is an enormous difference between today’s Predator B drone aircraft and autonomous robots. UAVs, while capable of taking off and landing by themselves, and flying unassisted to patrol areas, are not true autonomous robots. In flight, they are under the full control of humans and they can neither identify a target nor launch a weapon on their own. Indeed, while the issue of targeted killings by drones is worthy of debate, there is no functional difference between an airstrike conducted by a manned F-16 and that of a Predator B: a human being pulls the trigger in both instances. It just so happens that with a UAV, that human being is located thousands of miles away, rather than being on the scene. In both cases, the result is the same.

Further, we should acknowledge that modern militaries have a legitimate interest in robotics. The economic and social costs of warfare are spiraling ever greater, and while we may decry the use of force, our governments continue to see great utility in employing it for a growing variety of purposes. Even now many normally opposed to war in general are demanding that the international community “do something” about the situation in Syria. The cost of soldiering is increasingly expensive, however. Recent reports estimate that it costs between US$850,000 and US$1.4 million per soldier per year to support Afghan operations, and the separate bill for training and social benefits of each soldier is equally as large. Rising costs have basically led to smaller forces. As Metz argues, robots may be a way of dealing with this problem. (As an aside, John Ellis’ classic, The Social History of the Machine Gun, makes a similar argument in terms of the introduction of automatic weapons in the late 19th century – higher rates of fire enabled smaller forces to take on larger enemies in colonial conflicts.)

In fact, humans have their own “laws of robotics” for warfare – the principles of ius in bello, “justice in war” or “just means”, which date back to St. Augustine. Along with the related principles of ius ad bellum (justice of war, or just cause), they comprise five principles:

Distinction (knowing the difference between combatants and noncombatants);

Proportionality (balancing military objectives against the damage operations will cause);

Military necessity (keeping the employment of force at the lowest levels possible);

Fair treatment of Prisoners of War; and

“Just” weapons (rape as a weapon is evil, for example).

Now, these are often acknowledged more in the breach than in their actual application, but Western military forces have been paying increasing attention to them in the last twenty years. The point is, military robots would have to be programmed with these principles in their ethical governors. This is not simply a matter of some “wishy-washy” goal for conducting “humane warfare”. It strikes me that the first three principles pose specific problems for those seeking to devise an ethical governor, as they all involve intangibles. While the combatant/non-combatant distinction may seem the easiest, in contemporary urban “hybrid” conflicts the ability to identify the enemy is the most difficult problem confronting soldiers. Moreover, any ethical algorithm could be “hacked” by opponents by acting outside of the programmed parameters, much in the way modern-day insurgents take advantage of contemporary laws of war.

Target identification is central to achieving military missions as well as keeping friendly forces safe. In the air and at sea, and for perimeter defense, this problem is considerably simpler than that confronted by combat soldiers. The identification of the enemy is frequently impossible before combat begins because of camouflage, concealment, and deceptive tactics. Armed robots were introduced into Iraq but were never used in action, for reasons that appear to be linked to targeting friendly forces. In the sci-fi literature, Isaac Asimov introduced his famous “Three Laws of Robotics”, which were developed to keep humans safe from much stronger, intelligent machines: A robot may not injure a human being or, through inaction, allow a human being to come to harm. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws. But in the case of militarized robots, unless every nation conducted warfare exclusively with them, these rules clearly would not apply (indeed, they would be dangerous to apply, for entire robot armies could be stopped in their tracks by human shields, and subsequently destroyed in place). Military robots need to know when to kill and when not to (a Terminator is simply a system for programmatic genocide – not within the policy demands of any Western military).

Both proportionality and necessity require the soldier to weigh immediate advantages against future contingencies. Because this judgment entails weighing present circumstances against a multitude of uncertain future possibilities, the process is inherently subjective. Neither laws of probability nor regression curves tell us what the right course of action is. The roboticist Ronald Arkin maintains that a hypothetical ethical governor would do this job better, not being subject to fatigue and the vagaries of emotion. Yet emotion is the most important factor that informs both proportionality and necessity: it is the sympathetic impulse, our ability to put ourselves into the shoes of the “other”, which makes the judgment possible at all. AI researcher Douglas Hofstadter has made a similar observation about the battle between Stanford’s “Stanley” car and Carnegie Mellon’s “H1” entry in the DARPA challenge.

Last, to a certain degree, we already have autonomous “defense systems”, at least at sea. The Aegis Combat System, which is a combination of missiles, guns, radars and command and control software, can be placed on full automatic for ship defence, requiring no human input. Apparently, this has never been done in an actual operational setting. The shooting down of Iran Air Flight 655 in 1988 by the USS Vincennes was an example of how the complex Aegis system can lead to a so-called “normal accident”. While the system was under human direction, operators became confused as to the identity of the aircraft it was tracking, mistaking the Iranian Airbus for a taxiing Iranian Air Force F-14.

Far be it from me to place boundaries on the possibilities of artificial intelligence (AI). Indeed, a succession of computers, from Deep Blue’s victory over chess master Garry Kasparov to Watson’s victory in Jeopardy against both Brad Rutter and Ken Jennings, indicates the amazing advances of AI. The Defense Advanced Research Projects Agency (DARPA) sponsors a series of contest challenges that have resulted in cars that can navigate themselves across vast distances in the deserts of the southwest United States. Yet all of these examples, including that of the Aegis system, are, to my mind, highly limited problem sets, different from the challenges presented in the complexity of land combat.

Many assume that the technological progress of systems like the X-47B identifies the trend of inevitable development of autonomous robots. This technological determinism, however, is belied by the human challenges that are at the heart of war. The idea of autonomous military robots seeks to solve some of those human issues. However, for the moment at least, AI is not up to the challenge of solving the complex problems posed by target identification and the ethical judgment necessary to engage in an act of force. As Captain Kirk reminds us in the classic Star Trek episode A Taste of Armageddon, “Death, destruction, disease, horror. That’s what war is all about. That’s what makes it a thing to be avoided.” Because war is so “unsafe”, it is best practiced by humans.
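To make the "ethical governor" idea discussed above concrete, here is a deliberately naive sketch in Python of a rule layer that must approve a proposed engagement before any action is taken. The fields, thresholds and checks are invented for illustration; this is not Arkin's ethical governor or any fielded system, and estimating quantities such as combatant identity or expected civilian harm is exactly the part the essay argues machines cannot yet do.

    from dataclasses import dataclass

    @dataclass
    class ProposedStrike:
        target_is_combatant_confidence: float   # 0..1, from an (assumed) target-identification step
        expected_civilian_harm: float           # estimated noncombatant casualties
        expected_military_value: float          # estimated contribution to the mission
        weapon_is_permitted: bool               # e.g. not an indiscriminate or banned weapon

    def ethical_governor(strike: ProposedStrike,
                         min_id_confidence: float = 0.95,
                         max_harm_per_value: float = 0.1) -> bool:
        # Return True only if every encoded rule is satisfied; otherwise hold fire.
        if not strike.weapon_is_permitted:                              # "just" weapons
            return False
        if strike.target_is_combatant_confidence < min_id_confidence:   # distinction
            return False
        if strike.expected_military_value <= 0:                         # military necessity
            return False
        # proportionality: civilian harm must stay small relative to military value
        return strike.expected_civilian_harm <= max_harm_per_value * strike.expected_military_value

    print(ethical_governor(ProposedStrike(0.99, 0.0, 5.0, True)))   # True: all checks pass
    print(ethical_governor(ProposedStrike(0.60, 0.0, 5.0, True)))   # False: identification too uncertain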


Robots In Warfare
Which Robot Will Win?
Robot Vs. Robot

The dawn of the 21st century has been called the decade of the drone. Unmanned aerial vehicles, remotely operated by pilots in the United States, rain Hellfire missiles on suspected insurgents in South Asia and the Middle East. Now a small group of scholars is grappling with what some believe could be the next generation of weaponry: lethal autonomous robots. At the center of the debate is Ronald C. Arkin, a Georgia Tech professor who has hypothesized lethal weapons systems that are ethically superior to human soldiers on the battlefield. A professor of robotics and ethics, he has devised algorithms for an “ethical governor” that he says could one day guide an aerial drone or ground robot to either shoot or hold its fire in accordance with internationally agreed-upon rules of war.

But some scholars have dismissed Mr. Arkin’s ethical governor as “vaporware,” arguing that current technology is nowhere near the level of complexity that would be needed for a military robotic system to make life-and-death ethical judgments. Clouding the debate is that any mention of lethal robots floods the minds of ordinary observers with Terminator-like imagery, creating expectations that are unreasonable and counterproductive. If there is any point of agreement between Mr. Arkin and his critics, it is this: lethal autonomous systems are already inching their way into the battle space, and the time to discuss them is now. The difference is that while Mr. Arkin wants such conversations to result in a plan for research and governance of these weapons, his most ardent opponents want them banned outright, before they contribute to what one calls “the juggernaut of developing more and more advanced weaponry.”

Photo captions: Ronald Arkin, a robotics expert and ethicist at the Georgia Institute of Technology, has proposed that warrior robots with an “ethical governor” function could preserve more civilian lives than human soldiers can. Wendell Wallach, at Yale U., says Mr. Arkin’s proposals can be misleading, in part because the technology he discusses doesn’t yet exist. The Falcon HTV-2 drone, still under development, is designed to fly at speeds up to 13,000 miles per hour; that makes the roboticist Noel Sharkey nervous because “the technology’s getting so fast that it really reduces the window in which humans can make decisions” to abort a strike. The Patriot missile defense system is one of a number of “fire and forget” technologies already being used by the military; in 2003, Patriot missiles twice shot down fighters that were mistakenly identified as enemy planes, and critics of lethal autonomous systems fear that such mistakes could become more common.

Mr. Arkin, who has more than a quarter-century of experience performing robotics research for the military, says his driving concern is the safety of noncombatants. “I am not a proponent of lethal autonomous systems,” he says in the weary tone of a man who has heard the accusation before. “I am a proponent of when they arrive into the battle space, which I feel they inevitably will, that they arrive in a controlled and guided manner.”


“Someone has to take responsibility for making sure that these systems ... work properly,” he continues. “I am not like my critics, who throw up their arms and cry, ‘Frankenstein! Frankenstein!’” Nothing would make him happier than for weapons development to be rendered obsolete, says Mr. Arkin. “Unfortunately, I can’t see how we can stop this innate tendency of humanity to keep killing each other on the battlefield.”

Thrill of Discovery

The early days of robotics research were frustrating for scientists and engineers because of the machines’ sensory and computational limitations. Things started to get interesting, Mr. Arkin recalls, as researchers made gains in areas like autonomous pathfinding algorithms, sensing technology, and sensor processing. “I was very enthralled with the thrill of discovery and the drive for research and not as much paying attention to the consequences of, ‘If we answer these questions, what’s going to happen?’” he says. What was going to happen soon became apparent: robotics started moving out of the labs and into the military-industrial complex, and Mr. Arkin began to worry that the systems could eventually be retooled as weaponized “killing machines fully capable of taking human life, perhaps indiscriminately.” His “tipping point” came in 2005 at a Department of Defense workshop, where he was shown a grainy, black-and-white video recorded by a gun camera on a U.S. Apache attack helicopter hovering above a roadside in Iraq.

Trust the military to come up with high-tech weapons that bring the world to its knees – this newest robotic snake from Israel already looks menacing on its own, and should you decide to throw caution to the wind and blow it up, do beware that it will just split into slightly smaller robotic snakes. After all, individual segments of this robotic snake will be self-contained, complete with a brain, sensors, motors and batteries. All segments were specially designed to work together, forming a long, stealthy snake, but can also be independent of one another when the situation calls for it. Different segments can also be configured with alternate payloads, so a snake like this can always crawl into a building, breaking up into smaller parts for each of them to pursue their own mission in various segments of the building. Scary…

Military robots come in an astonishing range of shapes and sizes. DelFly, a dragonfly-shaped surveillance drone built at the Delft University of Technology in the Netherlands, weighs less than a gold wedding ring, camera included. At the other end of the scale is America’s biggest and fastest drone, the $15m Avenger, the first of which recently began testing in Afghanistan. It uses a jet engine to carry up to 2.7 tonnes of bombs, sensors and other types of payload at more than 740kph (460mph). On the ground, robots range from truck-sized to tiny. TerraMax, a robotics kit made by Oshkosh Defense, based in Wisconsin, turns military lorries or armoured vehicles into remotely controlled or autonomous machines. And smaller robotic beasties are hopping, crawling and running into action, as three models built by Boston Dynamics, a spin-out from the Massachusetts Institute of Technology (MIT), illustrate. By jabbing the ground with a gas-powered piston, the Sand Flea can leap through a window, or onto a roof nine metres up. Gyro-stabilisers provide smooth in-air filming and landings. The 5kg robot then rolls along on wheels until another hop is needed—to jump up some stairs, perhaps, or to a rooftop across the street. Another robot, RiSE, resembles a giant cockroach and uses six legs, tipped with short, Velcro-like spikes, to climb coarse walls. Biggest of all is the LS3, a four-legged dog-like robot that uses computer vision to trot behind a human over rough terrain carrying more than 180kg of supplies. The firm says it could be deployed within three years.


Whose War?

The Brave New Battle

With the rise of drones, human beings may no longer be essential to the conduct of war.

In the game of life and evolution there are three players at the table: human beings, nature, and machines. I am firmly on the side of nature. But nature, I suspect, is on the side of the machines.
—George Dyson in Darwin Among the Machines

If you want to understand how human beings stack up next to machines in the conduct of modern warfare, consider this: In World War II, it took a fleet of 1,000 B-17 bombers—flown, navigated, and manned by a crew of 10,000 men—to destroy one Axis ground target. American bombs were so imprecise that, on average, only one in five fell within 1,000 feet of where they were aimed. Aerial bombing was a clumsy affair, utterly dependent on the extraordinary labor of human beings. Just one generation later, that was no longer true. In the Vietnam War, it took thirty F-4 fighter-bombers, each flown and navigated by only two men, to destroy a target. That was a 99.4 percent reduction in manpower. The precision of attack was also greatly enhanced by the first widespread use of laser-guided munitions.

After Vietnam, humans’ connection to air war became more attenuated, and less relevant. In the Gulf War, one pilot flying one plane could hit two targets. The effectiveness of the human-machine pairing was breathtaking. A single “smart bomb” could do the work of 1,000 planes dropping more than 9,000 bombs in World War II. By the time the United States went to war in Afghanistan and Iraq, one pilot in one plane could destroy six targets. Their weapons were guided by global positioning satellites orbiting thousands of miles above the surface of the earth. And increasingly, the pilots weren’t actually inside their planes anymore. The historical trend is sobering. As aircraft and weapons have become more precise, human beings have become less essential to the conduct of war. And that may suit the military just fine.

Human Beings: Necessary for War?

In 2009, the Air Force released its “Flight Plan” for unmanned aircraft systems, a big-picture forecast about how the service will fight wars by the year 2047. It dutifully points out that humans currently remain “in the loop” on strike missions—that is, they still actually fly airplanes. But within the next five to ten years, the Air Force intends that one pilot will control four aircraft. He or she will not sit in a cockpit, or even in a seat thousands of miles away made up to look like one. The pilot will communicate with the fleet via a computer terminal and a keyboard, maybe even a smartphone. After issuing a flight plan, the aircraft will be responsible for completing many important aspects of the mission unassisted: taking off, flying to the target, avoiding detection by adversaries. The Air Force’s goal is for one human controller and a fleet of drones to be able to attack thirty-two targets with near-perfect precision.

It seems implausible that the U.S. military would deliberately reduce the warrior’s role in war to the point that people become mere monitors of autonomous, man-made technology. But this is precisely where the evolutionary trend has been heading ever since the 1940s. Autonomy is the logical endpoint of a century of technological progress. And since taking human beings out of the loop means making them safer, it is an attractive goal.

Inspired By Nature

Designers are taking their inspiration from nature, a field of research known as biomimicry. They are building drone “spiders” to climb up tree trunks, skitter to the end of a branch, and then “perch and stare” at their surroundings. The Air Force’s Wasp III, a collapsible prop-plane, is modeled after a soaring bird. It weighs only a few pounds, and its wings are made of a foam composite. The Wasp patrols from above using an internal GPS and navigation system, as well as an autopilot. The Wasp can function autonomously from take-off to landing. AeroVironment, based in Simi Valley, California, this year unveiled its “nano-hummingbird,” a spy-craft with a 6.5-inch wingspan and a video camera in its belly. The Defense Department gave Boston Dynamics, a leading robotics company, a contract to design a Cheetah-like robot, capable of running up to 70 miles per hour, as well as a humanoid robot, called Atlas, that will walk upright, climb, squeeze between walls, and use its hands.

The Consequences Of Autonomous Warfare

While there is a tremendous amount of money and thought going towards the construction of new drones, comparatively less attention is being paid to managing the consequences of autonomous warfare. The proliferation of drones raises profound questions of morality, hints at the possibility of a new arms race, and may even imperil the survival of the human species. Many of the most important policy judgments about how to adapt the machines to a human world are being based on the assumption that a drone-filled future is not just desirable, but inevitable. This dilemma is not restricted to the battlefield. Civilian society will eventually be deposited in this automated future, and by the time we’ve arrived, we probably won’t understand how we got there, and how the machines gained so much influence over our lives.

This is the fate that Bill Joy, the co-founder and former chief scientist of Sun Microsystems, described in his dystopian essay “Why the Future Doesn’t Need Us,” published in Wired magazine in 2000. Joy’s fear—as controversial now as it was then—is that human beings will sow the seeds of their own extinction by building machines with the ability to think for themselves, and eventually to reproduce and destroy their creators. It is essentially the same nightmare that James Cameron imagined in the Terminator series. Joy begins his essay with an unwillingness to accept that human beings would ever consciously allow this to happen. But then he visits a friend, the futurist Danny Hillis, who co-founded Thinking Machines Corporation. Hillis tells Joy the future will not be announced with a Hollywood bang but that “the changes would come gradually, and that we would get used to them.”

The military has followed this path, gradually adjusting as it pushes humans out of certain tasks that a generation ago would never have been handed over to machines. The robots and nanobots that Joy imagined exist today as unmanned aerial vehicles, more commonly known as drones. The Air Force studiously avoids the term drone—and encourages others to do the same—because it connotes a single-minded insect or parasite that is beyond the control of people. Drone operators prefer “remote piloted aircraft,” which reminds us that as independent as the missile-wielding flying robot might seem, there is always a human being at the end of its digital leash. That is, of course, until the human becomes passive to the swarm.

By 2047, animal-like machines will be practically indistinguishable from their sentient counterparts. In fact, Joy predicts that the potential for a crossover will come as soon as 2030. By then, what he calls “radical progress in molecular electronics—where individual atoms and molecules replace lithographically drawn transistors”—will allow us “to be able to build machines, in quantity, a million times as powerful as the personal computers of today.” Machines will process information so powerfully and so quickly that, in effect, they will begin to learn. “Given the incredible power of these new technologies,” Joy asks, “shouldn’t we be asking how we can best coexist with them?”

In any case, it is not an overstatement to say that the people building and flying these unmanned machines are wrestling now with the very fundamentals of what it means to be human. And while senior military officials and policymakers swear up and down that humans will always have at least a foot in the loop, and that the military would never deploy robots that can select and attack targets on their own, the evidence suggests otherwise.


Control the ultimate killing machine on your PS3 controller!



Real Life Iron Man
The Cyborg

The success of the experiment brings a step closer the possibility of creating a “bionic man” as envisaged by science fiction writers and the popular 1970s television series The Six Million Dollar Man. Pierpaolo Petruzziello was able to wiggle the fingers of the robotic hand, make a fist and hold objects, controlling the artificial limb via electrodes attached to the stump of his left arm. The 26-year-old was even able to feel needles being jabbed into the hand, which he said felt almost like flesh and blood even though it was not attached directly to his body. “It felt almost the same as a real hand,” he told a press conference in Rome, where the breakthrough was announced. “It’s a matter of mind, of concentration. When you think of it as your hand and forearm, it all becomes easier.” The Italian scientists behind the project said it was the first time a patient had been able to make such complex movements using his mind to control a biomechanical hand connected to his nervous system.


Mr Petruzziello, who now lives in Brazil, was given the use of the bionic hand for a month last year, but advances in technology will be needed before such prosthetic limbs can be attached to patients permanently. His progress in mastering the use of the limb was monitored by neurologists at Rome’s Campus Bio-Medico, a university and hospital that specialises in health sciences. After Mr Petruzziello recovered from the microsurgery he underwent to have the electrodes implanted in his arm, it only took him a few days to master the use of the robotic hand, said team leader Paolo Maria Rossini. By the time the experiment was over, the hand obeyed the commands it received from his brain in 95 per cent of cases. It was the longest time electrodes had remained connected to a human nervous system in such an experiment, said Silvestro Micera, one of the engineers on the team. Independent experts said the experiment was an important step forward in melding the human nervous system with a prosthetic limb. “It’s an important advancement on the work that was done in the mid-2000s,” said Dustin Tyler, a biomedical engineer at the VA Medical Center in Cleveland, Ohio. “The important piece that remains is how long beyond a month we can keep the electrodes in.”

A paralysed man has high-fived his girlfriend using a robotic arm controlled only by his thoughts. Tim Hemmes, who was paralysed in a motorcycle accident seven years ago, is the first participant in a clinical trial testing a brain implant that directs movement of an external device. Neurosurgeons at the University of Pittsburgh School of Medicine in Pennsylvania implanted a grid of electrodes, about the size of a large postage stamp, on top of Hemmes’s brain over an area of neurons that fire when he imagines moving his right arm. They threaded wires from the implant underneath the skin of his neck and pulled the ends out of his body near his chest. The team then connected the implant to a computer that converts specific brainwaves into particular actions.

Hemmes first practised controlling a dot on a TV screen with his mind: the dot moved right when he imagined bending his elbow, and thinking about wiggling his thumb made it slide left. With practice, Hemmes learned to move the cursor just by visualizing the motion, rather than concentrating on specific arm movements, says neurosurgeon Elizabeth Tyler-Kabara of the University of Pittsburgh in Pennsylvania, who implanted the electrodes. After this initial training, Hemmes navigated a ball through a 3D virtual world and eventually controlled the robotic arm, all with his mind. The electrode grid was removed after the 30-day trial. The team is now recruiting people for a trial of a more sensitive electrode grid that detects messages from individual neurons, rather than a group. They plan to implant two electrode patches, one to control arm movements and another for fine hand motion. The ultimate goal is to allow paralysed people to move individual fingers on a robotic hand.
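The cursor-control training described above is, at bottom, a decoding problem: map features of the recorded neural signal to an intended movement. Purely as an illustration of that idea, the sketch below fits a ridge-regression decoder on synthetic calibration data and then decodes a cursor velocity for a new sample; the channel count, features and training procedure are assumptions made for the example, not details of the Pittsburgh trial.

    import numpy as np

    # Hypothetical decoder sketch: map a vector of per-channel neural features
    # (e.g. band power from the electrode grid) to a 1-D cursor velocity.
    rng = np.random.default_rng(0)
    n_channels, n_samples = 32, 500

    X = rng.normal(size=(n_samples, n_channels))          # calibration features, one row per instant
    true_w = rng.normal(size=n_channels)
    y = X @ true_w + 0.1 * rng.normal(size=n_samples)     # intended cursor velocity during calibration

    lam = 1.0                                             # ridge penalty keeps weights stable on noisy data
    w = np.linalg.solve(X.T @ X + lam * np.eye(n_channels), X.T @ y)

    new_features = rng.normal(size=n_channels)            # features from the next instant of imagined movement
    print("decoded cursor velocity:", new_features @ w)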

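To make the idea of "detect the signal, then decide how much force to add" concrete, here is a minimal sketch of a proportional assist loop in Python. It is an illustration only: the function names, gains, thresholds and units are invented for this sketch and are not CYBERDYNE's actual control algorithm.

```python
# Illustrative sketch of a proportional assist loop: a skin sensor reading is
# turned into an intent estimate, which is scaled into a motor torque.
# All names, gains and limits below are assumptions made for illustration.

def estimate_intent(sensor_sample, baseline=0.05):
    """Crude intent estimate: how far the muscle signal rises above rest."""
    return max(0.0, abs(sensor_sample) - baseline)

def assist_torque(sensor_sample, assist_gain=10.0, torque_limit=30.0):
    """Map the estimated intent to an assisting motor torque, capped for safety."""
    torque = assist_gain * estimate_intent(sensor_sample)
    return min(torque, torque_limit)

# Even a weak muscle signal produces a usable assisting torque; a strong one
# is clipped at the safety limit.
for sample in [0.02, 0.08, 0.20, 0.60]:
    print(f"sensor {sample:.2f} -> assist {assist_torque(sample):.1f} N.m")
```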
The HAL exoskeleton is currently only available in Japan, but the company says it has plans to eventually offer it in the European Union as well. The company will rent (no option to buy at this time) the suits for about $1,300 per month (including maintenance and upgrades), according to the company’s site, which also says that rental fees will vary: Health care facilities and other businesses renting the suits will pay about three times as much as individuals. The site does not explain why, and the company could not be reached for comment.

CYBERDYNE is not the only company developing exoskeleton technology. The U.S. Army is in the very early stages of testing an aluminum exoskeleton created by Sarcos, a Salt Lake City robotics and medical device manufacturer (and a division of defense contractor Raytheon), to improve soldiers' strength and endurance. The exoskeleton is made of a combination of sensors, actuators and controllers, and can help the wearer lift 200 pounds several hundred times without tiring, the company said in a press release. The company also claims the suit is agile enough to play soccer and climb stairs and ramps.

But there are still many kinks that must be worked out before HAL or any other exoskeleton becomes part of everyday life. Exoskeletons work in parallel with human muscles, serving as an artificial system that helps the body overcome inertia and gravity, says Hugh Herr, principal investigator for M.I.T.'s Biomechatronics Group, which is developing a light, low-power exoskeleton that straps to a person's waist, legs and feet. Wearers' feet go into boots attached to a series of metal tubes that run up the leg to a backpack. The device transfers the backpack's payload from the back of the wearer to the ground.

One of the difficulties in developing exoskeletons for health care is the diversity of medical needs they must meet. "One might have knee and ankle problems, others might have elbow problems," Herr says. "How in the world do you build a wearable robot that accommodates a lot of people?"

There are also concerns about the exoskeleton discouraging rehabilitation by doing all of the work of damaged limbs that might benefit from even limited use. "If the orthotic does everything," Herr says, "the muscle degrades, so you want the orthotic to do just the right amount of work."

Power efficiency could also become an issue, given that the HAL moves thanks to a number of electric motors placed throughout the exoskeleton. The problem with electrical power is that you have to recharge, says Ray Baughman, professor of chemistry and director of the University of Texas at Dallas's NanoTech Institute. Baughman and his colleagues have been developing substances that serve as artificial muscles, converting chemical energy into mechanical work, that may someday be able to move prosthetic limbs and robot parts. Their goal is to avoid the downtime inherent in motor-powered prosthetics that must be recharged.

Makes you appreciate Iron Man's strength and agility all the more.


Romantic Robots

Can Robots Get A Break?

Science fiction author Isaac Asimov created the three laws of robotics in his short story "Runaround." But these are mainly aimed at protecting humans from robots. Do robots have rights, too? What happens if robots become a large part of society? How will people treat them? Will humans hold themselves superior to their creations? Will they balk at the idea of robots taking the place of one of the partners in a romantic relationship?

Many roboticists believe that now is the time to begin thinking about the moral and ethical questions posed by humanity's development of robots. South Korea, after all, plans to have a robot in every house by 2020. This is a far cry from the chicken in every pot envisioned by Herbert Hoover's campaign during the 1928 United States presidential election.

It's a good thing, then, that South Korea is at the forefront of thinking about robot ethics. In fact, the country announced in March 2007 that it had assembled a panel to develop a Robot Ethics Charter, a set of guidelines for future robotic programming. It will deal with the human aspects of human-robot interaction -- like safeguards against addiction to robot sex -- as well as explore ways to protect humans and robots from suffering abuse at the hands of one another [source: National Geographic].

The South Koreans aren't the only ones thinking about robots' rights. In 2006, future robot issues were brought up as part of a conference on the future commissioned by the British government. Among the issues discussed were the potential need for government-subsidized healthcare and housing for robots, as well as robots' role in the military [source: BBC]. These considerations do not need to be addressed immediately, but as robots become increasingly life-like, these issues will almost certainly come into play. Designers are already working on robotic skin that can produce life-like facial expressions. Others are developing robots that can hold conversations and mimic human emotions.

It may be very difficult for many people to overcome the idea of a human-robot couple. In 1970, Dr. Masahiro Mori wrote an article for Energy magazine in which he describes the "uncanny valley," a phenomenon where people grow uncomfortable with technological beings the more human-like they become. People build robots that have human qualities to help them complete human tasks, but once these robots start to look and act like humans, people start to be turned off by them [source: Mori].

With these and other features, robots of the future will present a great many challenges as they integrate into human society. And in the face of such challenges, perhaps the idea of human-robot marriages isn't so scandalous after all. That is, if the robot is just as willing to get married as the human.

Can Robots Be Conscious?

If a robot is produced that behaves just like one of us in all respects, including thought, is it conscious or just a clever machine, asks Prof Barry C Smith, director of the Institute of Philosophy.

Human beings are made of flesh and blood - a mass of brawn and bone suffused with an intricate arrangement of nerve tissue. They belong to the physical world of matter and causes, and yet they have a remarkable property - from time to time they are conscious.

Consciousness provides creatures like us with an inner life: a mental realm where we think and feel and have the means to experience sights and sounds, tastes and smells by which we come to know about the world around us. But how can mere matter and molecules give rise to such conscious experiences? The 17th Century French philosopher, Rene Descartes, thought it couldn't. He supposed that in addition to our physical make-up, creatures like us had a non-material mind, or soul, in which our thinking took place.

For Descartes, the non-material mind was uniquely human. He denied that animals had minds. When they squealed with what we considered to be pain, this, he thought, was just air escaping from their lungs. Animals were mere mechanisms. And even if we created a clever mechanical doll that replicated all our movements and reactions, it would not be capable of thinking because it would lack the power of speech.

These days few of us would deny our animal natures or accept that all other animals lacked consciousness. Besides, the idea of an immaterial soul makes it hard to understand how the mental world could have any effect on the physical world, and for that reason many contemporary philosophers reject mind-body dualism. How could something that had no material existence move our limbs and respond to physical inputs? Surely it is the brain that is responsible for controlling the body, and so it must be the brain that gives rise to our consciousness and decision making. And yet many of the same thinkers would agree with Descartes that no machine could ever be conscious or have experiences like human beings.

Carbon Creatures

We can no longer rely on Descartes' criterion for deciding which beings could think. Nowadays computers can make use of language, and synthesised speech improves all the time. It was the potential for computers to use language and respond appropriately to questions that led Alan Turing, the mathematician and wartime code-breaker, to propose a test for machine intelligence. He imagined a person sitting in a room, communicating by computer screen with two others in different rooms. She could type in questions and receive answers, and if she could not tell which of the respondents was a person and which was a computer, she had no reason to treat them differently. If she was prepared to treat one as intelligent, she should be prepared to treat the other as intelligent too. This is known as the Turing Test, and if the situation is arranged carefully, computer programs can pass it.

The original Turing Test relies on not being able to see who is sending the replies to questions, but what if we extended the test and installed the computer programme in a life-like robot? Robotics has developed rapidly in the last decade and we now see machines that move and behave like humans. Would such a display of life-like behaviour, combined with appropriate responses to questions, convince us that the machine was not only clever but also conscious? Here we need to draw a distinction between our thinking that the robot was conscious and it actually being conscious. We may be tempted to treat it as a minded creature, but that doesn't mean it is a minded creature.
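The structure of the test described above can be sketched in a few lines of Python. This is only a toy: the two respondents are canned stand-ins invented for illustration, so it shows the protocol (hidden labels, a judge who must guess which respondent is human), not a serious attempt to pass the test.

```python
# Minimal sketch of the imitation-game protocol. The respondents are placeholder
# functions, not real conversational programs.
import random

def machine_reply(question):
    return "I'd rather not say."            # placeholder "machine" respondent

def human_reply(question):
    return "That depends on the weather."   # placeholder "human" respondent

def run_round(question, judge):
    # Hide which respondent is which by shuffling their labels.
    respondents = [("A", machine_reply), ("B", human_reply)]
    random.shuffle(respondents)
    answers = {label: reply(question) for label, reply in respondents}
    guess = judge(question, answers)        # judge names the label they think is human
    truth = next(label for label, fn in respondents if fn is human_reply)
    return guess == truth

# A judge who cannot tell the difference can only guess; over many rounds the
# machine "passes" if the judge does no better than chance (about 50%).
naive_judge = lambda question, answers: random.choice(list(answers))
correct = sum(run_round("Do you enjoy poetry?", naive_judge) for _ in range(1000))
print(f"judge correct in {correct / 10:.1f}% of rounds")
```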

Last Mystery

Those who study machine consciousness are trying to develop self-organising systems that will initiate actions and learn from their surroundings. The hope is that if we can create or replicate consciousness in a machine, we would learn just what makes consciousness possible. Researchers are far from realising that dream, and a big obstacle stands in their way. They need an answer to the following question: could a silicon-based machine ever produce consciousness, or is it only carbon creatures with our material make-up that can produce the glowing technicoloured moments of conscious experience? The question is whether consciousness is more a matter of what we do or what we are made of.

Consciousness may be the last remaining mystery for science, but to some extent it has been dethroned from the central role it used to occupy in the study of the mental. We are learning more and more from neuroscience and neurobiology about how much of what we do is the result of unconscious processes and mechanisms. And we are also learning that there is no single thing that consciousness is. There are different levels of consciousness in humans, and much of our thinking and decision making can go on without it.

It's worth remembering that the only convincing experience of consciousness we have is our own. We are each aware of our own inner lives, but have only indirect access to the inner mental lives of others. Are the people around me really conscious in the way I am, or could they all be zombies who walk and talk and act like humans although there is nobody home? And this creates a twist in our story. For if we managed to produce a robot that behaved just like one of us in all respects, that might be a proof not of the consciousness of a robot or machine, but instead a convincing demonstration of how much we could manage to do without consciousness.


Intelligence

Artificial Intelligence

Artificial Intelligence (AI) is usually defined as the science of making computers do things that require intelligence when done by humans. AI has had some success in limited, or simplified, domains. However, the five decades since the inception of AI have brought only very slow progress, and early optimism concerning the attainment of human-level intelligence has given way to an appreciation of the profound difficulty of the problem.

What Is Intelligence?

Einstein said, "The true sign of intelligence is not knowledge but imagination." Socrates said, "I know that I am intelligent, because I know that I know nothing." For centuries, philosophers have tried to pinpoint the true measure of intelligence. More recently, neuroscientists have entered the debate, searching for answers about intelligence from a scientific perspective: What makes some brains smarter than others? Are intelligent people better at storing and retrieving memories? Or perhaps their neurons have more connections, allowing them to creatively combine dissimilar ideas? How does the firing of microscopic neurons lead to the sparks of inspiration behind the atomic bomb? Or to Oscar Wilde's wit?

Uncovering the neural networks involved in intelligence has proved difficult because, unlike, say, memory or emotions, there isn't even a consensus as to what constitutes intelligence in the first place. It is widely accepted that there are different types of intelligence -- analytic, linguistic, emotional, to name a few -- but psychologists and neuroscientists disagree over whether these intelligences are linked or whether they exist independently from one another.

The 20th century produced three major theories on intelligence. The first, proposed by Charles Spearman in 1904, acknowledged that there are different types of intelligence but argued that they are all correlated: if people tend to do well on some sections of an IQ test, they tend to do well on all of them, and vice versa. So Spearman argued for a general intelligence factor called "g," which remains controversial to this day. Decades later, Harvard psychologist Howard Gardner revised this notion with his Theory of Multiple Intelligences, which set forth eight distinct types of intelligence and claimed that there need be no correlation among them; a person could possess strong emotional intelligence without being gifted analytically. Later, in 1985, Robert Sternberg, the former dean of Tufts, put forward his Triarchic Theory of Intelligence, which argued that previous definitions of intelligence are too narrow because they are based solely on intelligences that can be assessed in IQ tests. Instead, Sternberg believes types of intelligence are broken down into three subsets: analytic, creative, and practical.

Dr. Gardner sat down with Big Think for a video interview and told us more about his Theory of Multiple Intelligences. He argues that these various forms of intelligence wouldn't have evolved if they hadn't been beneficial at some point in human history, but what was important in one time is not necessarily important in another. "As history unfolds, as cultures evolve, of course the intelligences which they value change," Gardner tells us. "Until a hundred years ago, if you wanted to have higher education, linguistic intelligence was important. I teach at Harvard, and 150 years ago, the entrance exams were in Latin, Greek and Hebrew. If, for example, you were dyslexic, that would be very difficult because it would be hard for you to learn those languages, which are basically written languages." Now, mathematical and emotional intelligences are more important in society, Gardner says: "While your IQ, which is sort of language logic, will get you behind the desk, if you don't know how to deal with people, if you don't know how to read yourself, you're going to end up just staying at that desk forever or eventually being asked to make room for somebody who does have social or emotional intelligence."

Big Think also interviewed Dr. Daniel Goleman, author of the bestselling "Emotional Intelligence," and spoke with him about his theory of emotional intelligence, which comprises four major poles: self-awareness, self-management, social awareness, and relationship management.

Intelligence

Quite simple human behaviour can be intelligent, yet quite complex behaviour performed by insects is unintelligent. What is the difference? Consider the behaviour of the digger wasp, Sphex ichneumoneus. When the female wasp brings food to her burrow, she deposits it on the threshold, goes inside the burrow to check for intruders, and then, if the coast is clear, carries in the food. The unintelligent nature of the wasp's behaviour is revealed if the watching experimenter moves the food a few inches while the wasp is inside the burrow checking. On emerging, the wasp repeats the whole procedure: she carries the food to the threshold once again, goes in to look around, and emerges. She can be made to repeat this cycle of behaviour upwards of forty times in succession. Intelligence -- conspicuously absent in the case of Sphex -- is the ability to adapt one's behaviour to fit new circumstances.

Mainstream thinking in psychology regards human intelligence not as a single ability or cognitive process but rather as an array of separate components. Research in AI has focussed chiefly on the following components of intelligence: learning, reasoning, problem-solving, perception, and language-understanding.
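The contrast above can be caricatured in a few lines of Python: a fixed routine that restarts from the top whenever its trigger reappears, versus behaviour that takes new circumstances into account. The loop structure and step names are invented purely for illustration.

```python
# Toy contrast between a fixed Sphex-like routine and adaptive behaviour.
# The "experimenter" who moves the food back is simulated by resetting the trigger.

def sphex_like(max_cycles=40):
    """Fixed routine: restarts whenever food is found at the threshold."""
    cycles = 0
    food_at_threshold = True
    while food_at_threshold and cycles < max_cycles:
        cycles += 1                 # drag food to threshold, inspect burrow...
        food_at_threshold = True    # experimenter moves the food back each time
    return cycles

def adaptive(max_cycles=40):
    """Adaptive behaviour: remembers the burrow was just inspected and carries the food in."""
    cycles = 0
    already_inspected = False
    while not already_inspected and cycles < max_cycles:
        cycles += 1
        already_inspected = True
    return cycles

print(sphex_like())  # 40 -- stuck repeating the routine until the experimenter gives up
print(adaptive())    # 1  -- behaviour adjusted to the new circumstances
```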

Learning

Learning is distinguished into a number of different forms. The simplest is learning by trial-and-error. For example, a simple program for solving mate-in-one chess problems might try out moves at random until one is found that achieves mate. The program remembers the successful move, and the next time the computer is given the same problem it is able to produce the answer immediately. The simple memorising of individual items -- solutions to problems, words of vocabulary, etc. -- is known as rote learning.

Rote learning is relatively easy to implement on a computer. More challenging is the problem of implementing what is called generalisation. Learning that involves generalisation leaves the learner able to perform better in situations not previously encountered. A program that learns past tenses of regular English verbs by rote will not be able to produce the past tense of e.g. “jump” until presented at least once with “jumped”, whereas a program that is able to generalise from examples can learn the “add-ed” rule, and so form the past tense of “jump” in the absence of any previous encounter with this verb. Sophisticated modern techniques enable programs to generalise complex rules from data.
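The difference between rote learning and generalisation can be made concrete with a toy example in Python. The tiny training set and the single "add -ed" rule below are deliberate simplifications chosen for illustration, not a real learning algorithm.

```python
# Toy contrast between rote learning and generalisation for English past tenses.

training_pairs = [("walk", "walked"), ("look", "looked"), ("play", "played")]

# Rote learner: memorises exactly what it has seen and nothing more.
rote_table = dict(training_pairs)

def rote_past_tense(verb):
    return rote_table.get(verb)     # returns None for any verb never memorised

# Generalising learner: has extracted the regular rule from the examples.
def generalised_past_tense(verb):
    return verb + "ed"              # applies the learned "add -ed" rule to new verbs

print(rote_past_tense("jump"))          # None: "jumped" was never presented
print(generalised_past_tense("jump"))   # "jumped": the rule covers the unseen verb
```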


Reasoning

To reason is to draw inferences appropriate to the situation in hand. Inferences are classified as either deductive or inductive. An example of the former is "Fred is either in the museum or the café; he isn't in the café; so he's in the museum", and of the latter "Previous accidents just like this one have been caused by instrument failure; so probably this one was caused by instrument failure". The difference between the two is that in the deductive case, the truth of the premisses guarantees the truth of the conclusion, whereas in the inductive case, the truth of the premiss lends support to the conclusion that the accident was caused by instrument failure, but nevertheless further investigation might reveal that, despite the truth of the premiss, the conclusion is in fact false.

There has been considerable success in programming computers to draw inferences, especially deductive inferences. However, a program cannot be said to reason simply in virtue of being able to draw inferences. Reasoning involves drawing inferences that are relevant to the task or situation in hand. One of the hardest problems confronting AI is that of giving computers the ability to distinguish the relevant from the irrelevant.

Problem-solving

Problems have the general form: given such-and-such data, find x. A huge variety of types of problem is addressed in AI. Some examples are: finding winning moves in board games; identifying people from their photographs; and planning series of movements that enable a robot to carry out a given task. Problem-solving methods divide into special-purpose and general-purpose. A special-purpose method is tailor-made for a particular problem, and often exploits very specific features of the situation in which the problem is embedded. A general-purpose method is applicable to a wide range of different problems. One general-purpose technique used in AI is means-end analysis, which involves the step-by-step reduction of the difference between the current state and the goal state. The program selects actions from a list of means -- which in the case of, say, a simple robot, might consist of pickup, putdown, moveforward, moveback, moveleft, and moveright -- until the current state is transformed into the goal state.
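A minimal sketch of means-end analysis for the toy robot mentioned above: at each step, choose the action that most reduces the difference between the current state and the goal state. The two-dimensional grid world and the difference measure are assumptions made for illustration; the movement names are taken from the text.

```python
# Means-end analysis on a toy grid: repeatedly pick the action whose result
# is closest to the goal, until current state equals goal state.

ACTIONS = {
    "moveforward": (0, 1),
    "moveback":    (0, -1),
    "moveleft":    (-1, 0),
    "moveright":   (1, 0),
}

def difference(state, goal):
    """Simple difference measure: Manhattan distance between states."""
    return abs(goal[0] - state[0]) + abs(goal[1] - state[1])

def means_end_analysis(state, goal):
    plan = []
    while state != goal:
        # Choose the action that most reduces the difference to the goal.
        name, delta = min(
            ACTIONS.items(),
            key=lambda item: difference((state[0] + item[1][0], state[1] + item[1][1]), goal),
        )
        state = (state[0] + delta[0], state[1] + delta[1])
        plan.append(name)
    return plan

print(means_end_analysis((0, 0), (2, -1)))
# ['moveback', 'moveright', 'moveright'] -- each step shrinks the remaining difference
```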

Perception

In perception the environment is scanned by means of various sense-organs, real or artificial, and processes internal to the perceiver analyse the scene into objects and their features and relationships. Analysis is complicated by the fact that one and the same object may present many different appearances on different occasions, depending on the angle from which it is viewed, whether or not parts of it are projecting shadows, and so forth. At present, artificial perception is sufficiently well advanced to enable a self-controlled car-like device to drive at moderate speeds on the open road, and a mobile robot to roam through a suite of busy offices searching for and clearing away empty soda cans. One of the earliest systems to integrate perception and action was FREDDY, a stationary robot with a moving TV 'eye' and a pincer 'hand' (constructed at Edinburgh University during the period 1966-1973 under the direction of Donald Michie).

Understanding Language

A language is a system of signs having meaning by convention. Traffic signs, for example, form a mini-language, it being a matter of convention that, for example, the hazard-ahead sign means hazard ahead. This meaning-by-convention that is distinctive of language is very different from what is called natural meaning, exemplified in statements like 'Those clouds mean rain' and 'The fall in pressure means the valve is malfunctioning'.

An important characteristic of full-fledged human languages, such as English, which distinguishes them from, e.g., bird calls and systems of traffic signs, is their productivity. A productive language is one that is rich enough to enable an unlimited number of different sentences to be formulated.

It is relatively easy to write computer programs that are able, in severely restricted contexts, to respond in English, seemingly fluently, to questions and statements, for example the Parry and Shrdlu programs described in the section Early AI Programs. However, neither Parry nor Shrdlu actually understands language. An appropriately programmed computer can use language without understanding it, in principle even to the point where the computer's linguistic behaviour is indistinguishable from that of a native human speaker of the language (see the section Is Strong AI Possible?).

What, then, is involved in genuine understanding, if a computer that uses language indistinguishably from a native human speaker does not necessarily understand? There is no universally agreed answer to this difficult question. According to one theory, whether or not one understands depends not only upon one's behaviour but also upon one's history: in order to be said to understand, one must have learned the language and have been trained to take one's place in the linguistic community by means of interaction with other language-users.
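A toy pattern-matching responder makes the point about use without understanding concrete. This sketch is not Parry or Shrdlu; the patterns and canned replies are invented for illustration. It maps surface patterns straight to templated responses, which is exactly why its fluent-looking output involves no understanding at all.

```python
# Toy restricted-context responder: surface pattern -> canned reply.
import re

RULES = [
    (r"\bhello\b|\bhi\b", "Hello. What would you like to talk about?"),
    (r"\bi feel (.+)",    "How long have you felt {0}?"),
    (r"\bweather\b",      "I hear the forecast is changeable."),
]

def respond(utterance):
    text = utterance.lower()
    for pattern, template in RULES:
        match = re.search(pattern, text)
        if match:
            return template.format(*match.groups())
    return "Tell me more."   # fallback keeps the conversation moving

print(respond("Hello there"))
print(respond("I feel uneasy about robots"))
print(respond("What do you make of quantum theory?"))
```

Within its narrow context the output can seem conversational, but the program manipulates strings it has no grasp of, which is the distinction the paragraph above draws.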


By Austin Weight
austin.c.w@hotmail.co.uk

