

Features

Sensor Readings

Robots were created thousands of years ago, but their progeny have yet to reveal their true calling. That may happen within the next decade or two, especially if the maker community and the robotics industry keep growing at current rates.

A scene from 2001: A Space Odyssey, in which the computer that controls the spacecraft, HAL, is being shut down

Rendezvous with schema

Robotics trends: Robots are increasingly involved in human activities, moving out of the factories and into people’s homes


It’s often been referred to as “the robotics revolution”, and although not everyone would reach for such ominous epithets, it’s difficult to find anyone who disagrees with the underlying idea that robotics is increasingly pervasive in society and will only become more so. It’s certainly not a passing fad: a short-lived infatuation would have been and gone long ago. For generations, the fascination with robots has found some kind of affirmation, and usually that affirmation can be measured in monetary terms, in that people have proved there’s money in robotics. Even as a non-commercial hobby, the so-called “maker community” of today are the inheritors of something profound and enduring, and all the signs are that robotics will keep growing both as a stimulating leisure activity and as a profitable industry.

“I think we should be very careful about AI. If I had to guess at what our biggest existential threat is, it’s probably that. So we need to be very careful”
Elon Musk, CEO, Tesla and SpaceX

editorial@roboticsandautomationnews.com

Next stop, space. With enough nuclear fuel available on Earth to power millions of space probes for eons; with the moon and the planets waiting to be explored; with many of the world’s rocket launch sites waiting to be commercialised and turned into spaceports; and with a boom in space tourism predicted, what may come is an expansion in space exploration by the millions of robotics tinkerers who want to see the surface of Mars through their own robot’s visual sensors. Robotics, as many Hollywood movies have shown, has yet to reach its true zenith.

In Arthur C Clarke’s book Rendezvous with Rama, a mysterious spaceship is observed travelling towards Earth, which sends a manned mission to study it. When they arrive on the alien vessel, the human astronauts see mechanical creatures, apparently performing maintenance tasks. Clarke’s implication seems to be that space exploration by advanced civilisations is most likely to involve robots. Which it already does: NASA sent its first robots into space in the 1970s, and its first humanoid robot in 2011. Robots may have come a long way since their earliest ancestors.

www.roboticsandautomationnews.com

You’re the first, the last

The first recorded example of a robot is considered to be the mechanical bird made by Archytas, in Ancient Greece, about 2,400 years ago. Archytas’ “pigeon” was propelled by steam and, as well as being one of the first examples of automata, it is one of the earliest known experiments with flight.

Automated machines were of course prevalent in the first Industrial Revolution. But the modern era of robotics could be argued to have begun in earnest in the 1940s, with the invention of programmable computers and artificial intelligence. In the early part of that decade, Isaac Asimov was formulating the Three Laws of Robotics, coining the word “robotics” in the process. A few years later, Norbert Wiener came up with the principles of cybernetics. But both were intellectuals who were not known for making things themselves; they thought about them and articulated them.

By the end of the 1940s, however, William Grey Walter of the Burden Neurological Institute, in England, had built a pair of three-wheeled, tortoise-like robots which could follow light to find their own way to a charging point when they were low on battery power. It would be almost another decade before the first useful, complex, programmable robot was built – the template for the robotic arm now ubiquitous across all types of industries. Invented by George Devol, the original robotic arm, the Unimate, ultimately found success in Japanese car factories, though the first recorded sale of a Unimate was to US car giant General Motors, in 1961. Devol, along with his company president Joseph Engelberger, went on to sign deals with GM as well as Kawasaki Heavy Industries, deals which saw the development of similar but more advanced robotic arms based on the Unimate.

The inventor of the two-armed robot seems somewhat lost in the fog of competition between robotics companies, as at least two companies claim to have developed the world’s first dual-armed robot.
Rethink Robotics probably has the strongest claim: its Baxter dual-armed collaborative robot was launched in 2012. ABB claims its YuMi robot (left) is the “world’s first truly collaborative robot”. However, YuMi was launched in 2015, so ABB’s claim to be first depends on how you interpret the word “truly”.

Meanwhile, away from industry and semantics, all manner of inventions and innovations have taken place since Devol’s Unimate first came into existence. In space exploration, NASA integrated robotic arms into Viking 1 and 2; launched in 1975, the space probes successfully landed on Mars a year later. NASA also launched a humanoid astronaut, the Robonaut, into space in 2011, where it still operates on the International Space Station, and will soon be sent on its first space “float”, since it doesn’t appear to have legs – not that the expression “space walk” makes much literal sense anyway.

While Asimov is credited with inventing the word “robotics”, the word “robot” was said to be first used by Czech writer Karel Čapek in his play Rossum’s Universal Robots. Čapek said his brother had suggested the word, from “robota”, denoting servitude. The first automated human-like mechanical robot is said to have been built by Friedrich Kaufmann, of Germany: his humanoid soldier with a trumpet was constructed as far back as 1810. However, even with the cleverness of Honda’s Asimo and the many unnervingly human-like robots in existence today, no humanoid robot has convincingly and consistently proved itself adept at tasks that the average human being could do with ease. It could be said, therefore, that humanoid robots have a long, long way to go.

Far out, man

One of the main challenges in building a humanoid robot is not really the hardware: there are enough materials, and enough gifted artists and engineers, to create a robot that appears life-like and to build large computers that can number-crunch at blistering speeds. The really difficult challenge is the software. The mysteries of the human brain – or rather the human mind – will probably take centuries to fully deconstruct, if they are ever understood at all. But enough is known about the human operating system that a lot of computer software is already doing jobs previously done by humans. From automated phone voices that can perform complex tasks in banking, to robots serving customers in Japanese department stores, artificial intelligence software and software-driven robots are in many places. Robots like Pepper can also be used to serve drinks and snacks to guests at parties. All of this is useful, and no doubt saves a lot of money for large organisations – especially the telephone automation systems – but it doesn’t pass the Turing Test threshold that would mean a robot could be considered human-like, and do much the same work as a human.
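Those automated phone systems are, at heart, simple state machines: a menu of prompts, with transitions keyed to the caller’s keypresses. A minimal sketch in Python, in which every menu state and prompt is invented for illustration (real IVR systems are far more elaborate):

```python
# Toy model of an automated banking phone menu: a finite-state machine
# mapping (state, keypress) -> next state. All states and prompts here
# are hypothetical.

MENU = {
    "main":      {"prompt": "Press 1 for balances, 2 for transfers.",
                  "1": "balances", "2": "transfers"},
    "balances":  {"prompt": "Reading your balance...", "0": "main"},
    "transfers": {"prompt": "Enter the destination account.", "0": "main"},
}

def route(state, keypress):
    """Return the next menu state, staying put on unrecognised input."""
    return MENU[state].get(keypress, state)

# A caller pressing 2 then 0 visits the transfers menu and returns home.
state = "main"
for key in ("2", "0"):
    state = route(state, key)
print(state)  # main
```

The whole trick, and the limitation the article describes, is visible in `route`: the machine never understands anything, it only looks up transitions in a table.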
In the virtual world, it’s possible that computers could convincingly simulate humans. But in the physical world, ask a robot – humanoid or otherwise – to sew a button on your shirt, or do the ironing, and you’re effectively placing a very heavy burden on a whole world of programmers and their computing machines. The computers may be up to the task, but the software is still not quite there. Big data, powerful compute capabilities, massive storage capacity, and the schema that maps it all out are all flourishing, but it’s difficult to say how long it will be before a quantum leap is made and humanoid robots can be classed as androids, much less human-like.

The first working artificial intelligence programs were developed in the 1950s, but they were mainly designed to play chess and other games with a relatively small number of clearly defined rules. It’s only now that humanoid robots are claimed by their makers to be able to engage in less formulaic, more fluid, complex conversation, for example, and to understand – or compute – cues given by body language to perceive emotions. These are among the claims made by Aldebaran, the developer of the Pepper and Nao robots, and many other companies are now making a similar case for their humanoid robots. And the exponential growth in intelligence – or computing power – required to reach a level approaching the Turing standard is being provided by the cloud.
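The central promise of cloud robotics – that a skill learned by one robot becomes instantly available to every robot sharing the same network – can be sketched in a few lines of Python. Nothing here is a real cloud-robotics API; the robot names and the grip-force rule are made up. It is just the shape of the idea: a shared store that every robot writes to and reads from.

```python
# Toy sketch of shared robot knowledge: a single cloud-side store of
# skills, keyed by name. Any robot that "learns" a skill publishes it;
# every other robot on the same cloud can then perform it immediately.

class CloudKnowledgeBase:
    def __init__(self):
        self._skills = {}

    def publish(self, name, skill):
        self._skills[name] = skill

    def lookup(self, name):
        return self._skills.get(name)


class Robot:
    def __init__(self, cloud):
        self.cloud = cloud

    def learn(self, name, skill):
        # Learning locally is publishing globally: that is the whole point.
        self.cloud.publish(name, skill)

    def perform(self, name, *args):
        skill = self.cloud.lookup(name)
        if skill is None:
            raise LookupError(f"no robot on this network knows {name!r}")
        return skill(*args)


cloud = CloudKnowledgeBase()
alpha, beta = Robot(cloud), Robot(cloud)

# alpha learns a (made-up) grip-force rule; beta never does, yet can use it.
alpha.learn("grip_force_newtons", lambda mass_kg: mass_kg * 9.81 * 1.5)
print(beta.perform("grip_force_newtons", 2.0))  # about 29.43
```

A production system would also need versioning, trust, and conflict resolution for the shared store – which is exactly where the “provided it’s not used for evil” caveat, discussed later in this article, comes in.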


Companies such as Brain Corporation and Neurala are creating cloud computing services for robotics, as are organisations such as Robo Brain, developed by Cornell University, and RoboEarth, funded by the European Union. These initiatives will inevitably reach a point where it would be difficult, if not impossible, to tell a human from a robot, the subtle differences in physiology notwithstanding. Even the physiology question could be answered by the scientists who have already created muscle tissue in a laboratory, but that’s another story.

The origins of the universe’s OS

IBM is thought to have been the first company to write a programming language designed with industrial robotics and automation in mind. Simply called “A Manufacturing Language”, or AML, it was an integrated solution – an IBM minicomputer with the language installed – that enabled programmers to create application programs. Now, IBM has Watson, the AI computer used in a range of settings from game shows to healthcare. But perhaps more significantly, IBM has developed a “brain-inspired computer and ecosystem”. The second generation of its “cognitive computing” chip was unveiled last year; the company says the brain chip has 1 million programmable neurons, 256 million programmable synapses, and 4,096 neurosynaptic cores. It says: “IBM’s long-term goal is to build a neurosynaptic chip system with ten billion neurons and one hundred trillion synapses, all while consuming only one kilowatt of power and occupying less than two liters of volume.”

IBM also launched the world’s first 7-nanometre chip last month. Moreover, the company owns SoftLayer, the cloud infrastructure company which hosted the first cloud-based robotics challenge in 2013, organised by the Open Source Robotics Foundation. So IBM looks well placed for cloud robotics of the humanoid kind. Then there’s Google Cloud.
Already part of the networking infrastructure for its autonomous robotic cars, Google Cloud is trying to position itself as a leading player in the robotics market. The company is said to be developing an open-source cloud robotics operating system which aims to be the ubiquitous platform for robots and robotic systems everywhere, allowing developers to create applications that can run on many different robots. The now well-known Robot Operating System (ROS) came out of Stanford University, so it shares the same roots as Google, which also grew out of Stanford; the two are inextricably linked. ROS itself was developed primarily at Willow Garage, the robotics lab founded by Scott Hassan, an early architect of Google’s search engine, and Google has supported work to make its Android smartphone operating system compatible with ROS.

The beautiful – or the beastly – thing about cloud robotics is that when one robot on a network learns something, all the other robots on the same network can simultaneously acquire the same knowledge. Provided the information is correct and not used for evil, that may well be a good thing. That it’s technically amazing is beyond question, but such power in the increasingly human-like hands of robots could potentially be a threat to humans and humankind itself.

We come in peace, probably

Elon Musk, he of Tesla and SpaceX fame, has declared artificial intelligence to be humanity’s greatest existential threat. Probably. “I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that. So we need to be very careful,” said Musk. “I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish.”

Artwork by Rafael Vallaperde from Lightfarm Studio. The image is titled The Verge and was inspired by the book Rendezvous with Rama, by Arthur C. Clarke

“Success in creating AI would be the biggest event in human history … it might also be the last”
Professor Stephen Hawking

And he’s not alone. No less a mind than Stephen Hawking, the scientist celebrated for his work on black holes and the origins of the universe, has expressed trepidation about mankind’s inevitably shared future with artificial intelligence. Writing in The Independent newspaper, Professor Hawking said: “Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks. In the near term, world militaries are considering autonomous-weapon systems that can choose and eliminate targets; the UN and Human Rights Watch have advocated a treaty banning such weapons.

“Looking further ahead, there are no fundamental limits to what can be achieved: there is no physical law precluding particles from being organised in ways that perform even more advanced computations than the arrangements of particles in human brains.

“An explosive transition is possible, although it might play out differently from in the movie [Transcendence]: as Irving Good realised in 1965, machines with superhuman intelligence could repeatedly improve their design even further, triggering what Vernor Vinge called a ‘singularity’ and Johnny Depp’s movie character calls ‘transcendence’.”

In the movie Transcendence, Johnny Depp plays a tech genius who builds an artificially intelligent, sentient computer, with devastating consequences. Irving Good, meanwhile, was a colleague of Alan Turing, the British computer scientist after whom the Turing Test – to see if a computer can pass for a human – is named.
Vernor Vinge is a computer scientist and author who popularised the concept of “the singularity”, outlined in his 1993 essay “The Coming Technological Singularity”: the hypothetical point at which the human era ends and super-human artificial intelligence takes over, which he apparently estimated to be 30 years away. That leaves humans with about eight years before they become full-time pets for robots.

As well as Hawking et al, others are joining the chorus of disapproval at the indifference that lawmakers and other regulators are apparently showing towards artificial intelligence. Ryan Calo, an assistant professor at the University of Washington School of Law, said: “Technology has not stood still. The same private institutions that developed the Internet, from the armed forces to search engines, have initiated a significant shift toward robotics and artificial intelligence. The widespread distribution of robotics in society will, like the internet, create deep social, cultural, economic, and of course legal tensions.”

All powerful technologies have challenged society down the ages, but making like the Luddites – who smashed the machinery that took their jobs during the Industrial Revolution in the 19th century – clearly would not work. Making like the maker community just might.
