THE ROBOT AS MAPMAKER
One group working to make autonomous mobile robots an everyday reality is the Perceptual Robotics Laboratory (PeRL) at the University of Michigan. Led by Dr. Ryan Eustice, the lab focuses on the piece of the puzzle dealing with navigation and mapping, functions fundamental to autonomous movement. The faculty-student teams at PeRL develop the algorithms and software by which robots absorb and process perceptive data to model their environment, locate themselves accurately within that map, and determine how to move, act and react as situations require.

Their efforts typically result in mathematical papers and code: the papers present the algorithms that capture the processes allowing robots to perform certain functions, and the code implements those algorithms. Published on the lab website (robots.engin.umich.edu/), most of the work is openly available for other researchers to learn from, build upon, or copy and use in their own research and development efforts.

Current PeRL projects include: a self-driving car, in cooperation with Ford Motor Company; a free-swimming hull inspection robot for the Office of Naval Research (ONR); active safety “situational awareness” technology that gives robots the ability to detect, assess and handle dangers; and, in partnership with the Naval Engineering Education Center, a variety of efforts to improve the autonomy of unmanned land and air vehicles. While the projects focus on very different robots and pursue very different goals, they also share significant common threads.

“Even though the applications seem very different, from self-driving cars to underwater robots and aerial drones, there is a lot of commonality between them when it comes to mapping, navigation and also what we call perception – in which the robot builds a meaningful model of the environment and kind of ‘understands’ the world around it through perceptive data,” says Dr. Eustice. “Much of the mathematical work we do in my lab is actually a general framework that, essentially, can be applied across different domains.”

The self-driving car and the underwater vehicle provide a good example of this connection. Some ten years ago, PeRL began work for the ONR on a fully autonomous hull inspection robot, the prime mover for the project being the Navy’s desire to replace human divers in such dangerous tasks as inspecting warship hulls for limpet mines. The development platform for this work is a free-swimming underwater vehicle on loan from the Navy, which PeRL has brought to the point of being able to function in environments about which it has no prior information.
Dr. Ryan Eustice, Head of the Perceptual Robotics Laboratory (PeRL), University of Michigan
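To picture what such a shared framework might look like, consider a minimal, hypothetical Python sketch – not PeRL’s actual code, with all names invented – in which one generic navigate-and-map loop is written against abstract motion and sensor models, so that a car, an underwater vehicle or a drone need only plug in its own concrete models:

from abc import ABC, abstractmethod

class MotionModel(ABC):
    @abstractmethod
    def predict(self, pose, control):
        """Predict the next pose from the current pose and a control input."""

class SensorModel(ABC):
    @abstractmethod
    def observe(self, pose):
        """Return features visible from `pose` (camera, sonar, lidar, ...)."""

class Odometry2D(MotionModel):
    # Toy planar dead-reckoning: pose is (x, y), control is (dx, dy).
    def predict(self, pose, control):
        return (pose[0] + control[0], pose[1] + control[1])

class LandmarkSensor(SensorModel):
    # Toy sensor: reports offsets from the robot to a few fixed landmarks.
    def __init__(self, landmarks):
        self.landmarks = landmarks
    def observe(self, pose):
        return [(lx - pose[0], ly - pose[1]) for lx, ly in self.landmarks]

def nav_map_step(pose, world_map, control, motion, sensor):
    """One generic move-perceive-map step, shared across domains."""
    pose = motion.predict(pose, control)             # dead-reckon
    for dx, dy in sensor.observe(pose):              # perceive
        world_map.add((pose[0] + dx, pose[1] + dy))  # grow the map
    return pose, world_map

# The same loop runs unchanged whatever concrete models are plugged in.
pose, world_map = (0.0, 0.0), set()
sensor = LandmarkSensor([(2.0, 1.0), (5.0, 4.0)])
for control in [(1.0, 0.0), (1.0, 1.0)]:
    pose, world_map = nav_map_step(pose, world_map, control, Odometry2D(), sensor)
print(pose, sorted(world_map))

The design choice this illustrates is separation of concerns: the loop that fuses motion and perception into a map never changes; only the domain-specific models do.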
Since its job is to locate hull surface issues, the inspection robot necessarily uses the vessel under survey as its position reference – a logical approach, but challenging in that the robot neither knows where it is the first time around nor what the vessel looks like. So, like an ancient explorer mapping an unknown coastline a mile at a time, on its first survey the robot collects ship imagery bit by bit and knits it all together to form a picture of the whole. In a process known as simultaneous localization and mapping (SLAM), the robot collects this imagery with underwater camera and sonar equipment, supplemented by a periscope camera looking above the waterline, to build a map of the vessel in real time.

This map is the robot’s memory of the ship; it allows the robot to determine its location, to store a properly organized visual record of the hull surface, and to recognize the vessel in the future. The robot notes all distinguishing features of the surface under inspection – including names and numbers, which it can read – and uses them collectively as a fingerprint to identify the ship. In this way, it can distinguish even sister ships that appear to be exactly alike (a matching process sketched below).

When it returns to survey the vessel at a later time, it matches the new camera and sonar imagery to its database and retrieves the map made during its previous visit; the existing map provides reference points the robot uses to determine its position along the hull. The new data acquired on the subsequent survey is used to update and refine the existing map, accurately noting the type and location of any differences from the last inspection, such as dents, marine growth or attached mines. It also notes exactly the distance and area it has covered, so that its client (‘operator’ being an outdated word for this new human-robot
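As a rough illustration of that fingerprint matching – a hypothetical Python sketch with invented feature names, not the lab’s implementation – each surveyed vessel can be stored as a set of distinguishing hull features, and a new survey identified by how strongly its features overlap a stored map:

def identify_vessel(survey_features, vessel_maps, min_matches=3):
    """Return the name of the stored map that best matches the survey,
    or None if nothing matches well enough (a first-time survey)."""
    best_name, best_score = None, 0
    for name, stored in vessel_maps.items():
        score = len(survey_features & stored)  # shared distinguishing features
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= min_matches else None

# Two sister ships share most features but differ in hull markings.
vessel_maps = {
    "hull_17": {"weld_seam_a", "intake_b", "marking_DD-17", "dent_03"},
    "hull_18": {"weld_seam_a", "intake_b", "marking_DD-18", "patch_11"},
}
new_survey = {"weld_seam_a", "intake_b", "marking_DD-18", "growth_02"}
print(identify_vessel(new_survey, vessel_maps))  # -> hull_18, not its sister

# Once identified, the retrieved map is refined with the new pass, so
# anything not seen on the previous visit (dents, growth, attached objects)
# stands out before being folded into the map.
new_since_last_visit = new_survey - vessel_maps["hull_18"]
vessel_maps["hull_18"] |= new_survey
print(sorted(new_since_last_visit))  # -> ['growth_02']

In practice the features would be image and sonar descriptors rather than labels, but the principle is the same: enough shared detail singles out one hull, even among otherwise identical sister ships.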