Mechanix Illustrated




CONTENTS

SPACE TECH
Astro Mike Series: A Science Fiction Monster • 12
By Mike Massimino
Improvising in Space • 19
Engineering a Path to Space • 26
On Robotics, Teleoperation, and Force Feedback • 30
By Mike Massimino
Tom Sheridan on Mike Massimino and Human-Machine Systems • 35
Our Eyes and Hands in the Solar System • 38
By Luis F. Peñin
Mechanix Illustrated Classics • 56
General Electric's "Machine Man" exploits force feedback

ENERGY
Out Of Thin Air • 58
Artificial photosynthesis and the promise of a sustainable energy future
Moonshots With Naveen Jain • 68
The Moon: Persian Gulf of the solar system? By John Schroeter

HISTORY OF TECHNOLOGY
A Brief Early History of Unmanned Systems • 72
By H.R. (Bart) Everett
The Turbine-Powered Firebirds • 84
Jet engine technology comes down to Earth By David W. Temple


CONTENTS, Continued

SPECIAL FOCUS
Self-Driving Cars: Intelligent Cars • 98
By Hod Lipson and Melba Kurman
Electronic Highway of the Future • 113
A look back on looking ahead By Melba Kurman
Toyota's Concept-i • 124
Philosophy, passion, and purpose converge with a host of new and emerging technologies to offer a glimpse of tomorrow's car today By John Schroeter
BMW's i Inside • 140
A glimpse of the future of the automotive interior By John Schroeter
Comparing Computing Architectures for ADAS and Autonomous Vehicles • 142
By Marcus Levy, Drue Freeman, and Rafal Malewski
Continental Brings Augmented Reality Head-Up Displays • 155
By John Schroeter
Autonomous Driving: How Safe Is Safe Enough? • 162
By Dr. Gill Pratt
Big Data and Your Driverless Future • 171
By Evangelos Simoudis, Ph.D.
Communications in a Driverless World • 178
By Sudha Jamthe


CONTENTS, Continued

HAPTICS
The Future of the Touchscreen Is . . . Touchless • 182
Haptics brings the sense of touch to the virtual world By John Schroeter

AUTOMOTIVE ENGINEERING
An Introduction To Automobile Aerodynamics • 192
More than meets the eye By Joseph Katz, Ph.D.

HABITAT TECHNOLOGY
Hanging Out in Space • 208
Living aboard an asteroid-tethered tower
Mechanix Illustrated Classics • 219
Yesterday's cities of tomorrow

ENGINEERING PRACTICE
The Path Of Invention • 221
By John Hershey, Ph.D.

mechanixillustrated.co • Fall 2017

Mechanix Illustrated® is published by Technica Curiosa™, a division of John August Media, LLC. Subscriptions to all Technica Curiosa titles, including Popular Astronomy®, Mechanix Illustrated®, and Popular Electronics®, are free with registration at www.technicacuriosa.com/register. For editorial and advertising inquiries, contact john@technicacuriosa.com.

Link references throughout this magazine: Website link • Free download link • Video link • Audio link • Facebook link • Twitter link • Purchase link. Additionally, all text in blue is a live link.

Editor-in-Chief John Schroeter
Associate Editor Michaelean Ferguson
Art Director Eduardo Rother
Design & Production ER Graphics

Except where noted, all contents Copyright © 2017 John August Media, LLC. The right to reproduce this work via any medium must be secured with John August Media, LLC.



FROM THE EDITOR

By John Schroeter

A Brief History of Mechanix Illustrated ...and a Personal Note

Wilford Hamilton "Captain Billy" Fawcett (1885–1940) was a wild man with an adventurist's spirit and a wicked sense of humor. Inspired by the Spanish-American War, he ran away from home at the age of 16 to join the Army. He rose to captain during World War I and, at the same time, began writing for the Army publication, Stars and Stripes. This experience would lead him to venture out on his own into publishing with the scandalously irreverent humor magazine Captain Billy's Whiz Bang. Its naughtiness notwithstanding, the magazine provided much needed comic relief following the horrors of the Great War and ultimately served to launch Fawcett's publishing empire.

In 1928, he introduced what would eventually become his company's flagship title, Mechanix Illustrated. Originally titled Modern Mechanics and Inventions, it was retitled Modern Mechanix and Inventions, then shortened to Modern Mechanix, before settling on Mechanix Illustrated in 1938.

Timeline: 1919 • 1928 • 1934 • 1938



Two more name changes would follow. In 1984, CBS bought Fawcett Publications and renamed the magazine Home Mechanix, signaling a new editorial direction. The second came in 1993, when the title was acquired by Time Inc., which drove this repositioning home by rechristening it Today's Homeowner. This new name completed the shift away from science and mechanics to the more lucrative home improvement market. Or was it not so lucrative? In 2001, Time Inc. shuttered the title following the March/April issue.

But that's not where the story ends. The editorial shift that began in 1993 coincided with a change of my own when I abandoned a career in the semiconductor industry to launch the first of my new publishing company's magazines, Fingerstyle Guitar. Fingerstyle Guitar was a magazine dedicated to a small but passionate niche in the guitar world, and one of the first publications to include a companion CD. The magazine also showcased the world's great guitar makers, taking readers inside the inspiring shops of master luthiers where the aroma of spruce shavings intermingled with the intoxicating bouquet of French polish. There's still nothing better! Interestingly, many of the best luthiers came to their trade from careers in mechanical engineering, and I remain fascinated by the ingenious ways in which they incorporate engineering practices into their guitarmaking craft (more about that in a future issue).

Timeline: 1939 • 1945 • 1950 • 1955



I've got to back up yet another 20 years, though, to when the publishing bug first bit. I have my father to thank for that. He was a master machinist at JPL, where he worked on the "shoot and hope" Ranger program. Its mission, designed to inform the Apollo program, was to obtain the first up-close images of the surface of the Moon and send them back to Earth just before impact.

One day in 1973 he brought home a box of old Mechanix Illustrated magazines from the 1950s that a coworker was cleaning out of his office. To this young teen, he might as well have brought home the Moon. I consumed those musty magazines with their yellowed pages filled with an unbridled excitement about the future. Stories about jet age-inspired concept cars, spacecraft that were still more science fiction than science fact, and other wondrous fare transported my imagination via this amazing look back on looking forward.

It had to have been a magical time, I thought, because the spirit of innovation that permeated these pages seemed all but gone, at least in my 1973 world. At that time, the energy crisis had imposed an absurdly low speed limit, Vietnam was lost, and the Watergate scandal was unfolding. Muscle cars had given way to Vegas and Pintos. Skylab was launched, but the Apollo program was finished. Not a great decade for science, industry, or culture.

Fast-forward to the present, and we find ourselves rushing headlong

Timeline: 1956 • 1959 • 1962 • 1967



into a sensational new era—a period of unprecedented innovation and possibility. Today the exciting rise of robotics, the nascent but rapidly ascendant field of machine learning, the burgeoning democratization of space, the massive disruptions coming in energy, the resurgent maker ethic, and many other developments are converging and conspiring to redefine virtually every aspect of life here on Earth—and beyond. And with it comes a new chapter in the history of this storied title. We like to say here that the future ain't what it used to be, that it's actually so much more.

Yet as we look forward, we will also pay homage to yesterday's visionaries. While we'll cover, for example, the latest self-driving car technologies, we'll also look back at the ingenious and auspicious beginnings of autonomous vehicles. Across all the subject matter areas we address, we will always endeavor to inform, inspire, and instruct with content that yields as many new insights as it does practical takeaways—particularly for tech entrepreneurs.

Lastly, I'm particularly excited to feature Mike Massimino on the cover of our inaugural issue, as he exemplifies the heights to which a career in mechanical engineering can take anyone who possesses a passion such as his. I hope that Mike's and other stories here will serve to inspire you as much as this iconic magazine inspired me all those years ago.

MI

Timeline: 1970 • 1979 • 1984 • 2017



SPACE TECH

ASTRO MIKE SERIES
A Science Fiction Monster

On March 1, 2002, I left Earth for the first time. I got on board the space shuttle Columbia and I blasted 350 miles into orbit. It was a big day, a day I'd been dreaming about since I was seven years old, a day I'd been training for nonstop since NASA had accepted me into the astronaut program six years earlier. But even with all that waiting and planning, I still wasn't ready. Nothing you do on this planet can ever truly prepare you for what it means to leave it.

Space Shuttle Columbia begins its 27th flight in the pre-dawn hours. Image courtesy of NASA.

By Mike Massimino




Our flight, STS-109, was a servicing mission for the Hubble Space Telescope. We were a crew of seven, five veterans and two rookies, me and my buddy Duane Carey, an Air Force guy. We called him Digger. Every astronaut gets an astronaut nickname. Because of my name and because I'm six feet three inches, everybody called me Mass.

Ours was going to be a night launch. At three in the morning, we walked out of crew quarters at Kennedy Space Center to where the Astrovan was waiting to take us out to the launchpad. This was only the second shuttle mission since the terrorist attacks of 9/11, and there were helicopters circling overhead and a team of SWAT guys standing guard with the biggest assault rifles I'd ever seen. Launches had always had tight security, but now it was even more so.

Digger was standing right next to me. "Wow," he said, "look at the security. Maybe it's a 9/11 thing."

I said, "I don't know. I think they're here to make sure we actually get on."

I was starting to get nervous. What had I signed up for? I could swear that one of the SWAT guys was staring at me—not at potential terrorists, but right at me. It felt like his eyes were saying, Don't even think about running for it, buddy. It's too late now. You volunteered for this. Now get on my bus.

We got on and rode out to the launchpad, everything pitch-black all around us. The only light on the horizon was the shuttle itself, which got bigger and bigger as we approached, the orbiter and the two solid rocket boosters on each side of that massive rust-orange fuel tank, the whole thing lit up from below with floodlights.



Image credit: Jeffrey Schifman, Columbia Engineering.



After suiting up, the STS-125 crew members exit the Operations and Checkout Building to board the Astrovan, which will take them to the launchpad of Space Shuttle Atlantis on the mission to service the Hubble Space Telescope. On the right (front to back) are astronauts Scott Altman, commander; Megan McArthur, Andrew Feustel, and Mike Massimino, all mission specialists. On the left (front to back) are astronauts Gregory C. Johnson, pilot; John Grunsfeld, and Michael Good, both mission specialists. Image courtesy of NASA.

The driver pulled up to the launchpad, let us out, then turned and high-tailed it out of the blast zone. The seven of us stood there, craning our necks, looking up at this gigantic spaceship towering 17 stories high above the mobile launcher platform. I'd been out to the shuttle plenty of times for training, running drills. But the times I'd been near it, there was never any gas in the tank, the liquid oxygen and liquid hydrogen that make rocket fuel. They don't put it in until the night before, because once you add rocket fuel it turns into a bomb.

The shuttle was making these ungodly sounds. I could hear the fuel pumps working, steam hissing, metal groaning and twisting under the extreme cold of the fuel, which is hundreds of degrees below zero. Rocket fuel burns off at very low temperatures, sending huge billows of smoke




pouring out. Standing there, looking up, I could feel the power of this thing. It looked like a beast waiting there for us.

The full realization of what we were about to do was starting to dawn on me. The veterans, the guys who'd flown before, were in front of me, high-fiving each other, getting excited. I stared at them like, Are they insane? Don't they see we're about to strap ourselves to a bomb that's going to blow us hundreds of miles into the sky?

I need to talk to Digger, I thought. Digger's a rookie like me, but he flew F-16 fighter jets in the Gulf War. He's not afraid of anything. He'll make me feel better. I turned to him, and he was staring up at this thing with his jaw hanging down, his eyes wide open. It was like he was in a trance. He looked the way I felt. I said, "Digger." No response. "Digger!" No response. "Digger!" He shook himself out of it. Then he turned to me. He was white as a ghost.

People always ask me if I was ever scared going into space. At that moment, yes, I was scared. Up to that point I'd been too excited and too busy training to let myself get scared, but out there at the launchpad it hit me: Maybe this wasn't such a good idea. This was really dumb. Why did I do this? But at that point there was no turning back.

When you're getting ready to launch, you have this big rush of adrenaline, but at the same time the whole process is drawn out and tedious. From the bottom of the launch tower, you take an elevator up to the launch platform at 90 feet. You make one last pit stop at a bathroom up there—the Last Toilet on Earth, they call it—and then you wait. One at a time, the ground crew takes each astronaut across the orbiter access arm, the gangway between the tower and the shuttle itself. You can be out on the platform for a while, waiting for your turn. Finally they come and get you, taking you across the arm into a small white room where they help you put on your parachute harness. Then you wave goodbye to your family on the closed-circuit camera and go in through the




shuttle hatch. You enter on the mid-deck, where the crew's living quarters are. Up a small ladder is the flight deck. Neither is very big; it's pretty cozy inside the shuttle. Four astronauts, including the pilot and commander, sit on the flight deck for launch. They get windows. The remaining three sit on the mid-deck.

Once you're inside, the ground crew straps you in. They help you affix your helmet to your orange launch and entry suit. You check your oxygen, check your gear. Then you lie there. If you're on the mid-deck like I was, there aren't any windows, so there's nothing to look at but a wall of lockers. You're there for a few hours waiting for everything to check out. You chat with your crewmates and you wait. Maybe play a game of tic-tac-toe on your kneeboard. You're thinking that you're going to launch, but you can't be sure. NASA's Launch Control Center will cancel a flight right up to the last minute because of bad weather or anything questionable with the spaceship, so you never really know until liftoff.

Once it's down to about an hour, you glance around at your buddies like, Okay, looks like this might actually happen. Then it gets down to 30 minutes. Then 10 minutes. Then one minute. Then it gets serious. With a few seconds left, the auxiliary power units start. The beast that terrified you out on the launchpad? Now that beast is waking up. At six seconds you feel the rumble of the main engines lighting. The whole stack lurches forward for a moment. Then at zero it tilts back upright again, and that's when the solid rocket boosters light and that's when you go.

There's no question that you're moving. It's not like, Oh, did we leave yet?

Image courtesy of NASA.




Image courtesy of NASA.


No. It's bang! and you're gone. You're going 100 miles an hour before you clear the tower. You accelerate from 0 to 17,500 miles an hour in eight and a half minutes. It was unreal. I felt like some giant science fiction monster had reached down and grabbed me by the chest and was hurling me up and up and there was nothing I could do about it.

Right after we launched, I realized that all the training we'd had on what to do if something went wrong during launch—how to bail out, how to operate the parachutes, how to make an emergency landing—I realized that all those years of training were completely pointless. It was just filler to make us feel okay about climbing into this thing. Because if it's going down, it's going down. It's either going to be a good day or it's going to be a bad day, and there is no in-between. There are emergency placards and safety signs all over the interior of the shuttle, telling you what to do and where to go. That stuff is there to give you something to read before you die.

After about a minute, once the initial shock passed, this feeling came over me. I had a sensation of leaving. Like, really leaving. Not just goodbye but adios. I'd been away from home before, on vacations and road trips, flying out to California, going camping in East Texas. But this time, I was leaving behind my home, this safe haven I'd known my whole life, in a way that I never had before. That's what it felt like: truly leaving home for the first time.

It takes eight and a half minutes to make it into orbit. Eight and a half minutes is a long time to sit and wonder if today is going to be the day you get it. You can't say much because your mic is live and you don't want to get on the comm and say anything stupid that might distract people. It's not the time to try to be clever. You just keep lying there, looking at your buddies, listening to the deafening roar of the engines, feeling the shuttle shake and shudder as it fights to break out of the Earth's atmosphere. You get up to three g's for about two and a half minutes at the end and you feel like you weigh three times your body weight. It's like you have a pile




of bricks on your chest. The whole thing can be summed up as controlled violence, the greatest display of power and speed ever created by humans.

As you're leaving the Earth's atmosphere, the bolts holding you to the fuel tank blow. You hear these two muffled explosions through the walls of the shuttle—fump! fump!—and then the fuel tank is gone and the engines cut and the whole thing is over as abruptly as it began. The roar stops, the shuddering stops, and it's dead quiet. All you hear are the cooling fans from some of the equipment gently whirring in the background. Everything around you is eerily, perfectly still. You're in space.

Once the engines cut and you're in orbit, the shuttle's no longer accelerating. Your perception is that you've come to a complete stop. You're moving at 17,500 miles per hour, but your inner ear is telling your brain that you're perfectly still; your vestibular system works on gravity, and without any gravity signals coming in, the system thinks you're not moving. So you have this sensation like you're lurching forward but then you come to a stop when the engines cut. You feel like you're sitting straight up in a dining room chair, except that you're still strapped down flat on your back. It's completely disorienting.

The first thing I did was ask myself, Am I still alive? It took me a moment to answer. Yes, I'm still alive. We'd made it, safely. It took me a minute or two to get my bearings. Then, once I felt acclimated, it was time to go to work. I reached up and took my helmet off and—just like I'd watched Tom Hanks do in Apollo 13—I held it out and let it go and floated it in the air in front of me, weightless. MI

Excerpted from Spaceman by Mike Massimino. Copyright © 2016 by Mike Massimino. Excerpted by permission of Crown Archetype, a division of Random House LLC. All rights reserved. No part of this excerpt may be reproduced or reprinted without permission.




SPACE TECH

ASTRO MIKE SERIES
Improvising in Space

Like many who were inspired by the Apollo 11 lunar landing, Mike Massimino has fostered a lifelong fascination with space—and astronauts. Unlike many, he managed to join the rarefied ranks of NASA's astronaut corps. While Apollo 11's primary objective was to answer President John F. Kennedy's call for a manned lunar landing and return, Massimino's two Hubble-servicing missions were designed to keep scientists looking far beyond the moon, into the deepest recesses of space—to the beginning of time itself.



Astronaut Mike Massimino works with the Hubble Space Telescope in Atlantis' cargo bay. Image courtesy of NASA.




Astronauts Mike Massimino (facing) and Michael Good are about to be submerged in the waters of the Neutral Buoyancy Laboratory (NBL) near NASA's Johnson Space Center. Massimino and Good are attired in training versions of their Extravehicular Mobility Unit (EMU) spacesuit. SCUBA-equipped divers (out of frame) are in the water to assist the crew members in their rehearsal, intended to help prepare them for work on the Hubble Space Telescope. Image courtesy of NASA.


Both shuttle missions, STS-109 (flown by Columbia in March 2002, Columbia's last mission prior to the disastrous re-entry of STS-107) and STS-125 (flown by Atlantis in May 2009), were to deliver a series of upgrades and repairs to the Hubble Space Telescope, which launched aboard the shuttle Discovery on April 24, 1990. Thanks to those missions—and the remarkable work performed over the course of multiple and sometimes harrowing spacewalks—the Hubble will remain in operation long past its slated 2014 end, well into 2020. If all goes well, Hubble's boosted longevity will provide a comfortable overlap with its successor, the James Webb Telescope, the joint operation of which, even if short-term, could contribute additional and potentially exciting new insights.

Hubble's history is one that alternates between derision—it was deployed with epic defects—and delirium. Its equally epic accomplishments range from reading the spectrum of an alien planet's atmosphere to the realization that the expansion of the universe is accelerating, yielding the discovery of dark energy. And that's to say nothing of the thousands of astounding images Hubble has returned—images that reveal the very birth of ancient galaxies. "We've actually seen an object that emitted its light about 13 billion



years ago," says Hubble senior scientist Dave Leckrone. "Since the universe is 13.7 billion years old, that's its infancy, the nursery. From the nearest parts of our solar system to further back in time than anyone has ever looked before, we've taken ordinary citizens on a voyage through the universe."

Fortunately, Hubble's creators anticipated technical difficulties with the Greyhound-bus-sized telescope, as well as the development of new technologies that would enable it to see even farther. As such, it was designed to be serviced in space by spacewalking astronauts who'd rendezvous with Hubble in its low Earth orbit via the space shuttle. In fact, five such servicing missions were planned and carried out with great success and to the delight of the scientific community, cementing Hubble's magnificent legacy.

But these missions weren't exactly walks in the park. As Mike Massimino attests, nothing is easy in space. Massimino's engaging space memoir, Spaceman, relates the servicing events in such vivid and gripping detail that you'll imagine yourself in his space boots as he executes his EVAs. One such EVA was particularly consequential, as it exposed the reality that you simply cannot plan for everything that might occur in space. It happened during the May 2009 STS-125 mission—the final visit to the Hubble be-


Astronaut Mike Massimino, STS-125 mission specialist, practices repairing Hubble Space Telescope hardware during a training session at NASA's Johnson Space Center. Image courtesy of NASA.



fore the shuttle fleet would be retired. The repair involved Hubble’s Space Telescope Imaging Spectrograph (STIS). In addition to taking detailed pictures of celestial objects, the STIS acts like a prism in that it separates light from the cosmos into its constituent colors, yielding an electromagnetic “fingerprint” of the observed object, revealing details about its temperature, chemical composition, density, and motion. What’s more, the STIS aids in the detection of black holes. As the Space Telescope Science Institute explains it, “Light emitted by stars and gas orbiting the center of a galaxy appears redder when moving away from us (redshift), and bluer when coming toward us (blueshift). STIS looks for redshifted material on one side of the suspected black hole and blueshifted material on the other, indicating that this material is orbiting an object at very high speeds.” Needless to say, the STIS was a vital piece of equipment. At the time it failed, the STIS accounted for 30% of the research being done with the telescope. And it was broken. To make matters worse, this was one piece of equipment that wasn’t built to be serviced, let alone taken apart. Yet serviced it would

The moment Mike Massimino snaps the handrail. For more on the STIS repair event, watch the video here. Videocaps courtesy of NASA.




be, and the job fell to Massimino. Getting at the failed unit meant removing a panel situated behind a handrail secured by four hex screws. Removing them would be a piece of cake. It was the 111 tiny screws that lie beneath the handrail that were concerning. When Massimino engaged the last of the four handrail screws, his pistol-grip power tool just spun in the screw head. The drill bit was going round and round, but nothing was happening. So much for the piece of cake.

"Stripping a screw on Earth," Massimino says, "while annoying, is not a game-over situation. You just pop down to the hardware store, and they've got extractor bits and tools designed to deal with the situation. We were prepared for the small screws to get stripped, but nobody had thought it would happen with the big ones. We didn't have any of the right tools, and the closest hardware store was a long way away."*

While not exactly an Apollo 13 moment, it did nonetheless call for some serious and potentially dangerous improvising. And failure at this point meant a significant mission failure. The situation set the people on the ground at Goddard scrambling for a solution. Within an hour, they managed to rig up a test with the backup handrail (yes, they had one!) hooked up to a torque wrench along with a digital fish scale for measuring how many pounds of force it would take to break the handrail loose with one screw holding it in. The answer was "60 pounds linear at the top of the handhold." The biggest worry was the possibility of debris that might go flying with it, as its sharp edges could easily puncture Massimino's space suit. For this, they resorted to an equally low-tech solution: tape placed over the screw head. "As I looked at that handrail," Massimino recalls, "attached to this $100 million instrument inside this $1 billion telescope, I yanked it hard and bam! It came off. Clean. No debris. No punctured space suit."*
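For readers who want to attach numbers to the redshift/blueshift measurement described earlier for the STIS, the sketch below applies the standard non-relativistic Doppler relation to a made-up pair of wavelength readings. It is a back-of-the-envelope illustration of the principle only, not the instrument's actual data-reduction pipeline, and the wavelength values are invented for the example.

```python
# Illustrative only: the Doppler relation behind redshift/blueshift
# velocity estimates. Values are invented; this is not STIS software.

C_KM_S = 299_792.458  # speed of light, km/s

def radial_velocity_km_s(observed_nm: float, rest_nm: float) -> float:
    """Radial velocity from a wavelength shift (positive = receding)."""
    return C_KM_S * (observed_nm - rest_nm) / rest_nm

# Hydrogen-alpha rest wavelength is 656.28 nm. Gas orbiting a massive,
# compact object appears redshifted on one side and blueshifted on the
# other; the size of the velocity split hints at the central mass.
print(radial_velocity_km_s(657.6, 656.28))   # about +600 km/s (receding side)
print(radial_velocity_km_s(654.9, 656.28))   # about -630 km/s (approaching side)
```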




Massimino notes that the very idea of repairing the STIS meant they'd be attempting something in space that had never been done before. "How were we going to do it?" he wondered. "Could we do it? The reason we run through these tasks so many times on the ground is not simply to learn how to do the job right, but to find out everything that might go wrong. Depending on the complexity of the space walk, so many potential problems can occur. The last thing you want is to encounter a problem you didn't think of or hadn't prepared a solution for."*

Massimino, now a professor of professional practice in mechanical engineering at Columbia, translates this experience to the classroom. "I tell my students that even when you're studying for an exam or working a problem, that's preparation. In preparation for doing something operational, playing a game, playing an instrument—whatever the event might be—you're not always sure how it's going to go. You practice, and sometimes things go just as you expect them to, but that rarely happens. Likewise, you encounter problems when you train for a spacewalk. Chances are, you're not going to encounter the same problems you trained for when you get to space. Any time you do something, you're going to find another problem. We try to find as many of them as we can, and you try to prevent what you can, but you're never going to catch all of them. So ultimately what we practice is how to solve problems. And when you do encounter a problem in space, you can use your checklist, you can talk to the expert in the field, there's an expert in the control center who knows about the situation you've encountered, you know how to describe a problem, you know how it should work, you know what the limitations are. But the big takeaway is that you work through so many problems in the course of your training that the skill you actually develop is not in solving specific problems, but how to work a problem generally—how to solve it on the fly. That's really what you get from training to fly in space." MI

* Passages marked with an asterisk were excerpted or adapted from Spaceman by Mike Massimino. Copyright © 2016 by Mike Massimino. Excerpted by permission of Crown Archetype, a division of Random House LLC. All rights reserved. No part of this excerpt may be reproduced or reprinted without permission.




SPACE TECH

ASTRO MIKE SERIES
Engineering a Path to Space

"Even as a kid," Massimino recalls, "I liked math and science. When I was first thinking about what to do for college I considered physics, but then I discovered engineering. When I got to Columbia, that's what I chose. At the time, I wasn't thinking about being an astronaut or working for NASA; I just thought it would be an interesting field to study.

Canadarm2 in action during STS-114 in 2005. Image courtesy of NASA.




It was after I'd been working for a couple of years at IBM that I became interested in doing something with the space program." That interest led him to graduate studies at MIT, studying under the legendary Tom Sheridan, a pioneer of robotics and remote-control technologies. "I specialized in the field of human factors," he says, "which is understanding how the brain responds to different stimuli and how to account for them in your designs. It's fascinating. I was working specifically on the problem of humans and robots cooperating in space, which meant I'd be working with astronauts. It turns out that it was a good choice; it worked out for me. I used that to find a way to fold it into a career in the space program."

If Massimino's early robotics work with NASA proved anything, it was the value of astronauts. "Astronauts can think on the spot, improvise solutions, communicate abstract thoughts," he explains. "Robots can't. If you design a robot to do A, B, and C, and then you get to space and it turns out the robot needs to do X or Y or Z, you're out of luck. If you have a person with a human brain operating hands with opposable thumbs, you can shift gears on the fly, work the problem, devise a solution . . . The most valuable piece of equipment you can have in space is a person."* Nevertheless, robotics is equally indispensable in space. And Massimino's integration of human factors to improve robotic operations would make it doubly valuable.

Original image courtesy of Canadian Space Agency.




NASA had been working on a remote-controlled mechanical arm that would come to be called the Shuttle Remote Manipulator System (SRMS). Built in Canada, it was given the nickname Canadarm. It would be used to deploy, capture, and repair satellites, position astronauts, maintain equipment, and move cargo in and out of the shuttle's payload bay.

But as Massimino observed the system's simulations, he noticed that the view of the arm's movement was frequently occluded, which forced its operators to rely upon digital readouts to obtain the arm's spatial coordinates. That presented numerous operational and cognitive challenges when trying to perform delicate work in real time. "It was an incredibly convoluted and counterintuitive way to manipulate this arm," Massimino noted. But the challenge was exactly what he had dealt with in his graduate work at MIT. His conclusion was that the control system for this robotic arm needed better human factors.

Massimino was also able to draw from his experience working on the robot arm at McDonnell Douglas, where he designed the video display that had first flown on STS-69 in 1995 on the shuttle Endeavour. Called the RMS Manipulator Positioning Display, it had significantly improved the shuttle's robotic toolset, and quickly became the standard for robot-arm operators on shuttle missions.


The Canadarm guides NASA astronaut Michael Massimino toward the cargo bay of Space Shuttle Columbia during STS-109. Image courtesy of NASA.



With the Space Shuttle Columbia in limited natural light, astronauts James Newman (right) and Michael Massimino share the Canadarm platform to work on the Hubble Space Telescope (HST) during the flight's second of five scheduled space walks. A thin slice of reflected sunlight and airglow can be seen at Earth's horizon. Image courtesy of NASA.

A "robot guy," Massimino recognized that this work would eventually get him to space, as well. And the next step in that process presented itself with the Canadarm2, which was being developed for the Space Station. Massimino, who became deeply involved in the project, explains, "The arm was controlled by astronauts inside the shuttle while they looked outside into the payload bay of the orbiter through the aft flight deck windows. They manipulated it via two hand controllers: a left-handed one for translations (XYZ motions), and a right-handed one for rotations (roll, pitch, and yaw motions). Flying the arm required a fair amount of training and skill and was one of the major jobs an astronaut performed on the space shuttle."*

Once in space, Massimino found himself at the end of that robot arm as he performed his numerous EVAs, giving him the unique distinction of having worked in space with equipment he helped design. MI

* Passages marked with an asterisk were excerpted or adapted from Spaceman by Mike Massimino. Copyright © 2016 by Mike Massimino. Excerpted by permission of Crown Archetype, a division of Random House LLC. All rights reserved. No part of this excerpt may be reproduced or reprinted without permission.



SPACE TECH

ASTRO MIKE SERIES
On Robotics, Teleoperation, and Force Feedback

By Mike Massimino

In the mid-1980s when I was looking into grad school, I was reading about the space program and it was clear to me that human factors would be an important issue—particularly when you have robots doing certain things and astronauts doing other things; there would have to be some sort of division of labor. How do you split up tasks between astronauts versus robots? That was an interesting question to me.




I was also intrigued by how a person could control a robot over very large distances—distances so great that they introduce a time delay. When you're controlling something in space from Earth or something on another planet from Earth or from another spaceship, you have a transmission time delay; it takes a long time to get a signal there and back. For example, the Mars rover is controlled from the ground, and because the signal is sent from Earth, it takes 20 minutes or so to get there. Then you have to wait another 20 or 30 minutes to see what happened. That's not a good way to operate. I wanted to investigate ways to cut down the effect of that time delay.

And that involved a study of the senses. I was taking a neuroscience class at MIT along with my robotics courses, and I learned that the brain is fascinating in the ways it receives and processes information in order to make decisions. And the overwhelming sensory organ for taking information in when you're driving a car, flying an airplane, or controlling a robot is your eyeballs: you take in a tremendous amount of information through your eyes. But you also get information from other places when you're flying an airplane or driving a car. For example, you can get audio feedback. When you're shifting a car you can hear the engine revving. When somebody honks a horn or an alarm goes off—whatever it might be—you get auditory information. We also get information through the tactile sense, through vibrations or other forms of haptics that you can feel on your hands.

The motivation behind teleoperation systems is to enable humans to interact with remote environments, environments that might be dangerous or otherwise inaccessible, by providing the operator with sensory feedback to emulate the experience as though they were actually present at the remote site.




But in the context of teleoperation of robots, most people were just giving you another visual display to look at. In space, the last thing you want is something more to have to look at: your eyes are already busy looking out the window, watching a video monitor, or any number of other things, so to introduce yet another gauge to look at doesn't always work.

This led me to investigate sensory substitution—that is, substituting one sense for another to perceive force information. Think, for example, of Braille: blind people can't see the words, but they can feel them with their fingers. Likewise, since we can't give operators direct force feedback because of the time delay, the idea I had was to provide that information through another sensory modality. Whenever you manipulate an object with your hand, pull a lever, or twist a knob, the amount of resistance it gives you is something you can feel. It's instantaneous, and you can react to it right away. The brain knows automatically how to read those signals and adjust accordingly to apply more force or less. But if you're manipulating an object remotely via a robot, there's a time delay between the signals the robot is sending to you and the commands you're sending to it. Consequently, you might push too hard or not hard enough based on wrong information about what's happening on the other end, and the object you're manipulating becomes unstable and you start knocking into things.

The approach I took to solving this problem was an "auditory display" that would deliver a series of sounds that, based on the amplitude, direction, and frequency of the sound, give the operator an idea of the force being exerted through the hand controller. A complementary design was a vibrotactile display realized through a wearable set of devices that would vibrate on a person's hand when a force was detected, also providing information about the direction and magnitude of the force. This work culminated in my thesis at MIT, "Sensory Substitution for Force Feedback in Space Teleoperation," with Tom Sheridan as my advisor, and was ultimately embodied in a pair of patents (see the sidebar that follows).
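To make the idea concrete, here is a minimal sketch of the kind of mapping a sensory-substitution display performs: a sensed contact force is turned into a vibrotactile cue (which vibrator to drive and how strongly) and an auditory cue (pitch and loudness). The thresholds, scale factors, and function names are invented for illustration and are not taken from Massimino's thesis or the patents described in the sidebar that follows.

```python
import math

# Invented scaling constants for illustration; not values from the thesis or patents.
MAX_FORCE_N = 20.0     # force at which the cues saturate
BASE_FREQ_HZ = 200.0   # tone pitch at zero force
FREQ_SPAN_HZ = 600.0   # added pitch at full-scale force

def vibrotactile_cue(fx: float, fy: float):
    """Pick a vibrator by force direction and drive it by force magnitude.

    Assumes four vibrators worn on the operator's hand, one per direction
    (+x, -x, +y, -y); the returned intensity is normalized to 0..1.
    """
    magnitude = math.hypot(fx, fy)
    intensity = min(magnitude / MAX_FORCE_N, 1.0)
    if abs(fx) >= abs(fy):
        vibrator = "+x" if fx >= 0 else "-x"
    else:
        vibrator = "+y" if fy >= 0 else "-y"
    return vibrator, intensity

def auditory_cue(fx: float, fy: float):
    """Encode force magnitude as the pitch and loudness of a tone."""
    magnitude = math.hypot(fx, fy)
    level = min(magnitude / MAX_FORCE_N, 1.0)
    return BASE_FREQ_HZ + FREQ_SPAN_HZ * level, level

# Example: a 6 N contact force directed mostly along -y
print(vibrotactile_cue(1.5, -6.0))   # ('-y', ~0.31)
print(auditory_cue(1.5, -6.0))       # (~385 Hz, ~0.31)
```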




Massimino's Sensory Substitution Patents

Mike Massimino's patents with Tom Sheridan, 5,451,924 (1995) and 5,619,180 (1997), describe feedback apparatuses for an operator to control an effector that is remote from the operator, enabling interaction with a remote environment via a local input device manipulated by the operator. The abstract reads, "Sensors in the effector's environment are capable of sensing the amplitude of forces arising between the effector and its environment, the direction of application of such forces, or both amplitude and direction. A feedback signal corresponding to such a component of the force is generated and transmitted to the environment of the operator. The signal is transduced into a vibrotactile sensory substitution signal to which the operator is sensitive. Vibration-producing apparatuses present the vibrotactile signal to the operator. The full range of the force amplitude may be represented by a single, mechanical vibrator. Vibrotactile display elements can be located on the operator's limbs, such as on the hand, fingers, arms, legs, feet, etc. The location of the application of the force may also be specified by the location of a vibrotactile display on the operator's body. Alternatively, the location may be specified by the frequency of a vibrotactile signal."


A • An electronic circuit that can be used to amplify the signal generated by a force-sensing resistor.

B • Vibrotactile display elements arrayed on an operator's hand.


MI



D • Schematic representation of a number of force-sensing resistors arrayed on another effector.

C • A representation of an embodiment of the invention using an auditory display.



SPACE TECH

ASTRO MIKE SERIES
Tom Sheridan on Mike Massimino and Human-Machine Systems

Astronaut Mike Massimino was my Ph.D. student at MIT, where I was a professor in the Mechanical Engineering Department and directed a laboratory called Human-Machine Systems, where Mike did his doctoral research. I was delighted to have Mike in my lab. He was well-liked by the other grad students and was an extremely hard worker. We served NASA as well as other government agencies and private firms in performing research on teleoperation in space and undersea, commercial aviation, highway safety, nuclear power, and medical applications.

For a certain class of students who are willing to take intellectual risks, the field of man-machine systems engineering is rewarding. I mention intellectual risks because the field is relatively young and not well-defined, in contrast to many classical engineering disciplines that date back to Newton and earlier. Human-machine systems (preferred over "man-machine systems" these days) is an inherently interdisciplinary activity, mixing systems engineering with applied experimental psychology. Our concern lies with the biomechanical aspects of human performance in critical tasks, but even more with the cognitive aspects, where the human operator is supervising computer-based automation. Examples of tasks include piloting aircraft and spacecraft, driving automated cars, operating nuclear power plants, controlling robots, and performing critical technical operations in hospitals and military environments. The discipline goes by older handles such as "human-factors engineering" and newer ones such as "human-system integration."

The discipline had its beginnings around the time of the Second World War, when the sophistication of aircraft, tanks, ships, and




submarines as well as mechanized gunnery demanded a disciplined approach to fitting the human to the machine. Classical fields of engineering are based on equations characterizing the laws of physics and well-known properties of materials. In contrast, human-systems engineering is largely empirical, based on a body of experiments, much as in medicine. A number of graphical and mathematical models are gradually being developed (see T. Sheridan, Models of Human-System Interaction: Philosophical and Methodological Considerations, with Examples. John Wiley, 2017).

Human-automation interaction is today's most challenging problem area, where engineers and managers are constantly faced with questions of what to automate, in consideration of efficiency, safety, and cost. More and more, the human is becoming a supervisor of robots and automation, analogous to the way a supervisor of skilled workers in a factory plans, communicates with, and directs the activities of human subordinates. This new sub-field, called "supervisory control," is concerned with how the programmed intelligence of multiple computers interacts with the intelligence of the human supervisor.

Mike was one of three astronauts who got graduate degrees in my lab. The others were Dan Tani, a Japanese American who had missions on STS-108 (2001), STS-120 (2007), and STS-122 (2008); and Nick Patrick, a British-American who flew on STS-116 (2006) and STS-130 (2010). MI

For another view into supervisory robotic control, see Brainwaves Correct Robot Mistakes In Real Time.




SPACE TECH

ASTRO MIKE SERIES
Our Eyes and Hands in the Solar System

By Luis F. Peñin

Where human and robot interaction is concerned, the whole is definitely greater than the sum of the parts. Human-machine systems augmented by teleoperation enable activities in space that are unmatched by either humans or robots working independently. We explore the field's colorful history and future directions.




The Precursors

The first robot to operate beyond Earth's atmosphere was the mechanical arm—a surface-soil sampling scoop—on board the Surveyor 3 spacecraft, which landed on the Moon on April 20, 1967. Although limited in its capabilities, it flawlessly fulfilled its mission on the Moon's surface, collecting samples of rocks and dust to be later processed on board. Data resulting from the processing were transmitted to Earth for further analysis.

Surveyor 3 on the Moon over two years after its 1967 landing. Image courtesy of NASA.




Surveyor 3's robotic arm worked by executing simple, blind, pre-programmed movements and actions. Simple, but powerful enough to enable access to the detailed composition of the Moon's surface for the first time.

Soviet engineers answered the American accomplishment in 1970 and again in 1973 with the roving robots Lunokhod 1 and Lunokhod 2, respectively. These were the first vehicles to move upon an extraterrestrial body. Looking back, it is amazing that the Lunokhod 2 spent four months roving the Moon's surface, traveling a total of 37 kilometers. Both craft were controlled remotely from Earth, using what is commonly known as teleoperation technology. There is little information about how this was accomplished, but one can imagine the difficulties and challenges the Soviets had to overcome in controlling a vehicle from a distance of 385,000 km—and across a rough terrain of rocks and rilles.

In 1976, following the path blazed by the Surveyor's robot arm—and raising the bar in response to the Soviets' lunar activities—NASA put a new pair of mechanical arms to work, this time on Mars, carried by the Viking 1 and Viking 2 spacecraft.

Top: First panoramic view by Viking 1 from the surface of Mars, captured on July 20, 1976. Bottom, left to right: Surveyor 3, Lunokhod rover, and Viking. Images courtesy of NASA.




These were also remotely controlled from Earth—more than 350 million kilometers away—using existing images to select the samples to be taken from the Martian surface, and programming the robotics at a distance to perform the tasks.

Having demonstrated the successful application of robots in planetary exploration, the space robotics field took another leap forward with the Shuttle Remote Manipulator System (SRMS), also known as Canadarm. First deployed on board the space shuttle Columbia in 1981 (the STS-2 mission) and used until its retirement 30 years later with the final shuttle mission in 2011, it was the first system to pair human astronauts with robots to work cooperatively in space. With a 15-meter length and six rotating joints, the arm was controlled by the crew from inside the shuttle via a pair of complementary joysticks. The SRMS enabled new mission objectives, including capturing stranded satellites, putting new satellites in orbit, repairing the Hubble Telescope, and moving astronauts from one place to another. It also facilitated the building of the International Space Station, into which bigger robot relatives were introduced.
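The distances quoted above translate directly into the command-and-confirm delays that make teleoperation beyond Earth orbit so difficult. As a quick back-of-the-envelope check (light-time only; onboard processing and ground-network latency add more), here is the arithmetic:

```python
# Minimum round-trip signal delay implied by the distances cited above.
C_KM_S = 299_792.458  # speed of light in vacuum, km/s

def round_trip_delay_s(one_way_km: float) -> float:
    """Light-time for a signal to travel out and back over a one-way distance."""
    return 2.0 * one_way_km / C_KM_S

print(round_trip_delay_s(385_000))      # Moon (Lunokhod): ~2.6 seconds
print(round_trip_delay_s(350_000_000))  # Mars at the Viking-era figure above: ~2,335 s (~39 minutes)
```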

Teleoperation Technology in Space

Modern teleoperation technology originated in the late 1940s with the advent of the nuclear age, driven by the need to manipulate radioactive materials at a distance through protective barriers. Initially these systems were completely mechanical, coupling master and slave manipulators. Through the years, the technology was improved and distances were increased with the introduction of electrical control systems.

In space, teleoperation technology is necessitated by the fact that humans cannot currently reach destinations beyond the Moon. But with the help of teleoperated mechanical devices, humans can extend our eyes and hands far beyond our physical reach or presence. Working in Earth orbit, where humans already enjoy a permanent outpost, the motivations for using teleoperation lie in improving safety and productivity, as well as reducing operational costs. There are many risks in launching a manned mission, not the least of which are extra-vehicular activities (EVAs).




Shuttle Remote Manipulator System (SRMS). Image courtesy of NASA.

Teleoperation combines the capabilities of a robot to work and survive in a remote and hostile environment with the flexibility and adaptability of a human mind, whose body can enjoy the comforts of a typical office environment, either on the ground or within a pressurized spacecraft. In the case of the SRMS, the arm was controlled from inside the space shuttle by astronauts looking through the windows and at the video feeds from the SRMS cameras. It was operated by two joysticks,




one for commanding the speed of the manipulator tip in any direction within its 15-meter reach, and the other for orienting the tip while keeping its position in space. It also allowed the movement of the arm's six joints independently for trickier maneuvers.

But the use of teleoperation is also a matter of operational costs. The use of teleoperated robots reduces the resources and time needed to prepare and perform manned operations, particularly the EVAs. Typically, the time needed to prepare an EVA is about five times the amount of time needed to perform the EVA itself. Moreover, dedicated EVAs require months of costly rehearsal on the ground in buoyancy tanks that simulate the absence of gravity.
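A minimal sketch of the two-joystick scheme described above: one stick commands translation rates of the arm's tip, the other commands rotation rates, and the two are merged into a single six-element velocity command. The axis conventions, full-scale rates, and names here are invented for illustration; this is not flight software.

```python
from dataclasses import dataclass

# Invented full-scale rates for illustration (not actual SRMS values).
MAX_LINEAR_M_S = 0.1     # tip translation rate at full stick deflection
MAX_ANGULAR_DEG_S = 2.0  # tip rotation rate at full stick deflection

@dataclass
class StickInput:
    """Normalized deflection in [-1, 1] on each of three axes."""
    x: float
    y: float
    z: float

def tip_rate_command(translation_stick: StickInput, rotation_stick: StickInput):
    """Combine both sticks into a six-element tip velocity command.

    Returns (vx, vy, vz) in m/s and (roll, pitch, yaw) rates in deg/s.
    """
    linear = tuple(MAX_LINEAR_M_S * v
                   for v in (translation_stick.x, translation_stick.y, translation_stick.z))
    angular = tuple(MAX_ANGULAR_DEG_S * v
                    for v in (rotation_stick.x, rotation_stick.y, rotation_stick.z))
    return linear, angular

# Example: translate slowly along +z while yawing gently
print(tip_rate_command(StickInput(0.0, 0.0, 0.4), StickInput(0.0, 0.0, -0.25)))
# ((0.0, 0.0, 0.04), (0.0, 0.0, -0.5))
```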

Space Telerobotics Beginnings

Having realized the tremendous advantages brought by remotely controlled robot arms, NASA moved to develop the technologies further. In 1985, the space station was on the drawing boards of NASA engineers. Those were ambitious times in which diverse projects were conceived to develop fully autonomous space robotic systems the likes of which did not yet exist even in more Earth-bound applications. One such project was the Flight Telerobotic Servicer (FTS). Conceived to be developed with a budget of nearly $300 million, it aimed to incorporate robotics into this embryonic space station. However, these projects failed to come to fruition; the technology was just not sufficiently mature to support the ambitious goals of a completely autonomous system. Even today, with all the developments that have ensued since, these systems are still unrealizable.

The failures did, however, bring about a turning point for the community working on space robotics: Out of the fiasco of big and completely autonomous systems, a new paradigm emerged. In 1990, NASA announced its Space Telerobotics Program with the declared objective, "To develop, integrate, and demonstrate the science and technology of remote manipulation such that by the year 2004, 50% of the EVA-required operations on orbit and on planetary surfaces may be conducted telerobotically." This change in policy and direction, which was also adopted by other space agencies,




ROTEX Telerobotic Experiment in Spacelab-D2 Mission. Images courtesy of DLR.

was pivotal in the revival of space teleoperation technology. Curiously enough, it was the Europeans who took the lead in the space teleoperation domain. The ROTEX experiment, developed by the German Aerospace Agency (DLR), flew aboard the space shuttle during the Spacelab-D2 mission (STS-55) in April 1993. It succeeded in teleoperating from Earth a manipulator arm that was involved in several experiments, including the capture of a floating object—despite a round-trip time delay of six seconds for the communication signals from Earth to the shuttle and back. The ROTEX project closed a 12-year space robotics development gap that had stood between it and the first flight of the SRMS, ushering in a new and exciting era.

New techniques were developed to perform direct control of a manipulator at a distance with a large time delay between the command and the visual confirmation that the movement has taken place. It is easy to understand the difficulty introduced by this delay. Imagine, for example, typing on a keyboard with the words appearing on the screen six seconds later. Any error or attempt at correction would wreak havoc. Of course, you could just type a letter and wait six seconds to type the next, but then the system would be totally useless. Several solutions were envisioned, such as the use of predictive


Several solutions were envisioned, such as the use of predictive displays—that is, simulators that show the operator what would happen on the remote side before the information gets back to them. Of course, such simulators need to have knowledge of the environment in advance and must also be extremely accurate. A further refinement is teleprogramming, in which the tasks are rehearsed beforehand using a high-fidelity simulator or mock-up of the environment. High-level commands are then extracted for execution by the remote robot in an autonomous manner, with some kind of correction or adaptation applied, depending on the environment.
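To make the predictive-display idea concrete, here is a minimal sketch in Python of the control flow involved; the one-dimensional arm model, the six-second delay, and all names are purely illustrative and are not drawn from any flight software.

from collections import deque

ROUND_TRIP_DELAY = 6   # seconds, as in the ROTEX experiment described above
TICK = 1               # one operator command per second, for simplicity

class ArmModel:
    """Trivial one-dimensional stand-in for the remote manipulator."""
    def __init__(self):
        self.position = 0.0
    def apply(self, velocity_cmd):
        self.position += velocity_cmd * TICK

predicted = ArmModel()   # local simulator: updates instantly, drives the display
actual = ArmModel()      # stands in for the real arm in orbit
in_flight = deque()      # commands still travelling to the remote side

def operator_step(velocity_cmd, t):
    predicted.apply(velocity_cmd)                        # what the operator sees now
    in_flight.append((t + ROUND_TRIP_DELAY / 2, velocity_cmd))
    while in_flight and in_flight[0][0] <= t:            # commands that have arrived
        _, cmd = in_flight.popleft()
        actual.apply(cmd)

for t in range(10):
    operator_step(0.1, t)
    print(t, round(predicted.position, 2), round(actual.position, 2))

The predicted state leads the real one by the one-way travel time, which is exactly the gap a predictive display papers over for the operator.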

Robots in the Space Stations

Today, the International Space Station (ISS) is one of the most important reference platforms in space robotics. And with robots coming to it from all over the world, the station comprises a truly international infrastructure. Outside the station there are several manipulator arms, heirs in concept and operation to the SRMS, though more evolved. The most impressive example is the Space Station RMS (SSRMS), also known as Canadarm2. It has a length of more than 15 meters and features a symmetric structure with an "elbow" in the middle that allows either end (hand or shoulder) to be used for attachment to the station. It quite literally "walks" from attachment point to attachment point. It is teleoperated from inside the ISS by the astronauts and was a key element in the early stages of the ISS construction. The SSRMS is complemented by a more compact but equally complex and precise robot called the Special Purpose Dexterous Manipulator (SPDM). It is like a torso with two 3.5-meter manipulator arms of seven joints each. It can be grasped and carried by the SSRMS to work in different locations on the ISS, and it has a set of special tooling on its torso and hands that allows it to perform very complex tasks outside the space station. An important aspect of the real-time teleoperation of these manipulator robots by the astronauts is maintaining the right "tele-proprioception." Proprioception is a medical term that refers to the ability to know the positions of one's own arms and legs.


Above: Japan's RMS on the Japanese Experiment Module (also known as Kibo) on the ISS. Right: Artist's rendering of the European Robotic Arm on the ISS. Images courtesy of NASA.

You cannot perform a task if you don't really know where your arm is with respect to your body, and where its movement is headed. Correct visual cues are key to maintaining tele-proprioception. Think of the difficulty of performing a precise task through a mirror, in which movements are reversed. The Japanese also have their own 10-meter-long RMS fixed to the Japanese Experiment Module (also known as Kibo) on the ISS to provide external servicing, with a small but fine 2-meter arm on its end to perform precise operations. For their part, the Europeans have developed the European Robotic Arm (ERA), completed years ago and soon to find its way to the station, where it will be attached to the Russian segment.


There have been efforts over the years to integrate robots within the pressurized modules of the ISS, but with little success so far. This may be due either to astronauts' potential resistance to mechanical companions in their quarters or to the fact that there is no real need for robots to perform repetitive tasks inside the modules. So far only the Robonaut has traversed this virtual border, and only in an experimental context. The Robonaut has a human-like torso with anthropomorphic features and two arms with corresponding hands, fingers included. The second generation, called R2, flew to the ISS in 2011 to test its capabilities as a robotic astronaut assistant, following commands issued by its human crewmates. It is still in the early phases of demonstrating its potential use in future manned operations.

Robonaut 2, pictured here in the International Space Station's Destiny laboratory during a round of testing for the first humanoid robot in space. Image courtesy of NASA.

"Floating" Robots

The SRMS, the manipulator arm anchored to the payload bay of the space shuttle, played a key role in what is known in the space business as "on-orbit servicing"—that is, the inspection of large space infrastructures and the capture and/or repair of malfunctioning satellites already in orbit. The SRMS had to be operated directly from a manned spacecraft such as the shuttle, but why not consider robots able to perform these kinds of tasks from unmanned spacecraft? The idea of servicing satellites in situ with a robotic spacecraft is especially appealing for satellites in geostationary orbit.


This orbit is particularly crowded because its orbital period is the same as that of Earth's rotation: geostationary satellites maintain a constant position in the firmament with respect to a fixed Earth-based reference. Telecommunication satellite operators literally fight to have a slot in the orbit over the Earth region they want to cover with their services. Any failure of these satellites can mean losing a precious and sometimes irreplaceable asset. And at 36,000 km—far from manned space outposts in low Earth orbit (around 500 km) and beyond the radiation belts—these satellites aren't easily accessed. Why not, then, design and operate robotic missions capable of reaching a failed satellite, repairing it in orbit, and eventually moving it to a "graveyard" orbit? The robotic spacecraft could remain in reserve along with other such satellites, standing by to be recalled for duty. This idea has been on the drawing boards of national space agencies for quite some time, but it was not until 1997 that the first real satellite-servicing experimental mission was flown. The National Space Development Agency of Japan (NASDA), subsequently rebranded as the Japan Aerospace Exploration Agency (JAXA), launched its Engineering Test Satellite VII (ETS-7) into a 550 km orbit with numerous robotics experiments on board.

LEFT: Artist's impression of the ETS-7 satellites with details of the on-board robotic arm and experiments. RIGHT: Teleoperation (by the author) of the ETS-7 robot arm (seen in the top right image) from JAXA's Tsukuba Space Centre using a force-feedback joystick. Images courtesy of JAXA.


Its objective was to demonstrate the capture of one satellite by another spacecraft carrying a robot controlled from Earth. Its manipulator arm was 2 meters long, with six rotating joints able to assume any 3D configuration, and it was fitted with two cameras. The two satellites were launched together and later separated in orbit to carry out the rendezvous and capture experiments. In 1998, after several attempts, the first capture of one satellite by another unmanned satellite was achieved—a major milestone in the short history of space robotics. The ETS-7 robotic arm was always teleoperated from Earth, overcoming the fundamental problem of five to seven seconds of round-trip delay between a command issued from the ground control station and the reception of the image and telemetry indicating the outcome of the command. Many other telerobotic experiments were performed, mimicking potential future repair operations, such as deploying mechanisms or inserting and extracting equipment. Also, the first ever direct teleoperation from the ground of a space robot using force-reflection technology was performed. Making the operator feel in his hand the contact force of the manipulator with the environment was already in use in other teleoperation domains, such as submarine applications, but not in space and not with the added time-delay challenge. Force feedback is one method of achieving as much "telepresence" as possible for the human operator. The greater the sense of "being there," the more productive and efficient the operation. Other contributors to telepresence are visual displays and the use of master manipulator hand controllers, similar to those of the remote manipulator though smaller in size, emulating an extension of the operator's own arm.

Example of the predictive display used for the teleoperation of the ETS-7 robot arm from the ground. Images courtesy of JAXA.


Force feedback is essential; consider, for example, the difficulty of trying to grasp a glass without feeling the contact in your hand. Contact information is so relevant to object manipulation that it can improve the effectiveness of an operation even if conveyed through another sensory medium, e.g., audio or tactile cues. This approach is called sensory substitution, and it was the subject of research by astronaut Mike Massimino. Force sensory substitution has been shown decisively to reduce operation time compared with operations lacking force information. It is especially suited to conditions where direct force feedback is not possible or convenient, and it has the further advantage of not physically affecting the operator's manipulation task or overloading his visual channel. Commercial developments of in-orbit satellite servicing have been attempted in Europe in the last decade. Of note is the DEOS (Deutsche Orbitale Servicing Mission) project developed by EADS Astrium GmbH (now Airbus Defence and Space GmbH) and led by the German Aerospace Agency (DLR), whose aim was to demonstrate the robotic capture and safe de-orbiting of a failed satellite. Another example is the SMART-OLEV mission promoted by Orbital Satellites Services, developed by a consortium of European companies (SSC from Sweden, SENER from Spain, and Kayser-Threde from Germany), and sponsored by a geostationary satellite operator seeking to extend the lifetime of its fleet. After the success of DARPA's Orbital Express mission in 2007, which demonstrated on-orbit refueling and ORU (Orbital Replaceable Unit) transfer, the concept has recently made a comeback in the US. On one side, DARPA has established the Robotic Servicing of Geostationary Satellites (RSGS) program, with Space Systems Loral (SSL) as the prime contractor. In parallel, Orbital ATK is developing its own Commercial Servicing Vehicle (CSV) in collaboration with Intelsat, a telecommunications operator.
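As a rough illustration of the sensory substitution idea described above, the sketch below maps a measured contact force to an audio cue instead of a mechanical one; the thresholds, force range, and pitch values are invented for the example and do not come from Massimino's experiments.

MAX_EXPECTED_FORCE = 20.0   # newtons; assumed sensor range
BASE_PITCH_HZ = 220.0       # tone heard at first contact
MAX_PITCH_HZ = 880.0

def force_to_tone(force_newtons):
    """Return (frequency_hz, volume) for a measured contact force, or None for silence."""
    if force_newtons <= 0.5:                        # below threshold: treat as free motion
        return None
    level = min(force_newtons / MAX_EXPECTED_FORCE, 1.0)
    frequency = BASE_PITCH_HZ + level * (MAX_PITCH_HZ - BASE_PITCH_HZ)
    volume = 0.2 + 0.8 * level
    return round(frequency, 1), round(volume, 2)

# A grasp ramping from free motion to firm contact:
for f in (0.0, 0.4, 2.0, 8.0, 18.0):
    print(f, "N ->", force_to_tone(f))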

Explorers of Other Worlds

Space robots are especially well suited to the exploration of other celestial bodies, including planets, moons, asteroids, and even comets, where humans still cannot leave their footprint.


The Sojourner rover takes a stroll on Mars' rocky surface in 1997. Image courtesy of NASA.

Rovers are designed to move along and inspect the surfaces of these bodies, deploy instruments and measurement devices, and eventually take samples for analysis. In 1997, a new vehicle—a descendant of the Lunokhod—started moving on the surface of Mars. On the Fourth of July, the Sojourner rover from the Pathfinder mission began its wandering across Mars' rocky surface. With six independent wheels and weighing just 11.5 kg, Sojourner spent more than 2,000 hours on the surface of the Red Planet, moving at a maximum speed of 0.4 miles per hour around the lander mother spacecraft, without any umbilical connecting them. A total travelled distance of 100 meters does not seem impressive compared to the 37 km of the Lunokhod 25 years prior, but Mars is on average around 225 million km away, and the communication round trip takes between 20 and 40 minutes. (Besides that, the tiny Sojourner rover measured only 65 cm!)
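The quoted delay is easy to check with a back-of-envelope light-travel-time calculation (Python, using the average distance cited above):

SPEED_OF_LIGHT_KM_S = 299_792        # kilometres per second
avg_distance_km = 225e6              # average Earth-Mars distance cited in the text

one_way_min = avg_distance_km / SPEED_OF_LIGHT_KM_S / 60
print(f"one way: {one_way_min:.1f} min, round trip: {2 * one_way_min:.1f} min")
# roughly 12.5 minutes one way and about 25 minutes round trip; the 20-40 minute
# spread in practice comes from the Earth-Mars distance varying along their orbits.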


Three larger mechanical brothers of Sojourner colonized Mars in the following years. The Mars Exploration Rovers (MER), called Spirit and Opportunity, landed in 2004 in two different places. Expected to operate for only 90 days, Spirit wandered over Mars' surface for seven years, while Opportunity has covered almost 45 km and is still alive and well 13 years later! Finally, in 2012 the car-size rover Curiosity, almost 3 meters long and 2.7 meters wide, started its own journey on Mars. It carries a multitude of cameras and measurement instruments, along with a 2.1-meter-long robotic arm, and has covered more than 15 km to date. Round-trip delays of seconds in Earth orbit can be cleverly overcome to allow direct teleoperation, but 20 minutes is simply too much. Consequently, all the Martian rovers are operated under another teleoperation control scheme known as "supervisory control" (see the classic text by T.B. Sheridan, Telerobotics and Human Supervisory Control, The MIT Press, 1992).

Shown here are three generations of Mars exploration rovers. Image courtesy of NASA.


Under this scheme, robots do not receive continuous commands but rather are able to achieve their goals with some degree of autonomy. For example, a rock of interest is identified by the scientists and a potential path is suggested by the engineers. The robot then moves autonomously to the target, following the suggested path but reacting to any unforeseen obstacles or events along the way. Of course, there are many degrees of supervisory autonomy, depending on the "intelligence" of the robot.
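A minimal sketch of that supervisory loop, in Python, might look like the following; the one-dimensional waypoints, the trivial "step aside" re-plan, and all names are illustrative rather than actual rover flight code.

def supervisory_drive(suggested_path, hazards):
    """Follow waypoints uplinked from the ground, deviating locally around hazards."""
    log = []
    for waypoint in suggested_path:
        if waypoint in hazards:
            waypoint = waypoint + 1                       # trivial local re-plan
            log.append(f"hazard detected, detour to {waypoint}")
        log.append(f"reached {waypoint}")                 # on-board motion control assumed
    log.append("goal reached; telemetry sent to Earth")
    return log

# One plan is uplinked; no further commands are needed during the traverse.
for line in supervisory_drive(suggested_path=[10, 20, 30, 40], hazards={30}):
    print(line)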

Different levels of relationship between human operator and task, from manual control through supervisory control to fully automatic control. From T.B. Sheridan's Telerobotics and Human Supervisory Control, The MIT Press, 1992.

In the case of the MER rovers, they could autonomously travel nearly 200 meters toward targets given from the ground, at an average speed of about 30 meters per hour, slowed only by hazard-avoidance software that causes them to pause every 10 seconds, look around for about 20 seconds, and then move on again. Although less well known, it is fair to say that the Vikings' robot arms had a very worthy descendant in the Phoenix robotic arm, which operated attached to a fixed lander on the surface of Mars in 2008. With a length of 2.5 meters, it was able to dig trenches a half-meter deep. So far, the US is the only country to have placed a working robot on Mars.


Top: The Phoenix robotic arm displays a full scoop of soil excavated from the Martian polar terrain. Bottom: Yutu, China's first Moon rover, imaged by the Chang'e 3 lander.

The Europeans made their first try in 2003 with the Beagle 2 lander and its robot arm, but the spacecraft was never heard from after its descent. The next attempt is planned for 2020, with a rover on board the ExoMars mission. Recently the Moon has again attracted much attention for robotic exploration. A Chinese rover named Yutu, launched in December 2013, made the first soft landing on the Moon since 1976 and was the first rover to operate on the lunar surface since the Soviet Lunokhod 2 ceased operations on May 11, 1973. In the commercial realm, the Google Lunar XPRIZE has triggered competition among teams across the globe to put a rover on the Moon, have it travel at least 500 meters, and transmit high-definition video and images. The first to complete this mission before the end of 2017 will claim a $20 million prize. The clock is ticking . . .

The Future

The history of teleoperation and robots in space may be brief, but it has also been intense. Without these international efforts, many endeavors, such as building the ISS or exploring Mars, would simply not have been possible. Robots will continue to be our eyes and hands in the cosmos, as there will always be places that we cannot physically reach.


Of course, robots will continue to improve their autonomous capabilities, reducing the need for direct human control. And these capabilities will be fundamental to exploring the distant reaches of the solar system. But at the same time, there will always be applications in space in which the combination of humans with remote-controlled robots cannot be matched by either working independently. And that is the beauty of teleoperation! MI

About the Author

Luis F. Peñin holds a Ph.D. in teleoperation and robotics and started his 20-year career with a post-doc research position at the Japan Aerospace Exploration Agency (JAXA). There he performed the first ever force-reflection teleoperation from the ground of a space robot, aboard the ETS-7 satellite. For more information, see his IEEE paper. He co-founded the aerospace company DEIMOS Space in 2001, where he held various positions before becoming head of flight engineering. During this period he participated in many different studies, projects, and missions for the European Space Agency (ESA) in the area of exploration and space transportation, a highlight being ESA's IXV re-entry demonstrator. Dr. Peñín also worked at ESA, where he contributed to the development and commissioning of the European GALILEO Global Navigation Satellite System (GNSS) satellites. He is currently with the Spanish aerospace company SENER, leading the development of ESA's Proba-3 precision formation-flying mission. He has published more than 70 technical papers and reports in the space domain and has co-authored a Robotics Fundamentals textbook (in Spanish) for McGraw-Hill. He is a senior member of the American Institute of Aeronautics and Astronautics (AIAA) and of the Institute of Electrical and Electronics Engineers (IEEE).



SPACE TECH

ASTRO MIKE SERIES

General Electric’s “Machine Man” Exploits Force Feedback

While more terrestrially oriented than Massimino's robotic machinations, GE's Pedipulator (officially called the Cybernetic Anthropomorphous Machine, or CAM)—a "walking truck" developed for the military—was an early and ambitious attempt at human-machine collaboration on a large scale. Of note is the linkage between the operator and the machine, which consisted of hydraulically controlled levers that provided the operator with force feedback, lending a "sensation" of the mechanical limbs. Unfortunately, the controls were fatiguing and it was, as one might imagine, a nightmare to maintain. Consequently, it was never deployed, but it did make for a great Mechanix Illustrated cover story!


MI

September 1964




ENERGY

Out Of Thin Air

Artificial Photosynthesis and the Promise of a Sustainable Energy Future

"All this happens swiftly, in silence, at the temperature and pressure of the atmosphere, and gratis: dear colleagues, when we learn to do likewise we will be sicut Deus [like God], and we will have also solved the problem of hunger in the world." —Primo Levi

By John Schroeter


Ever since Antoine Lavoisier (1743-1794), the father of modern chemistry, discovered that so-called "fixed air" contains carbon and oxygen—carbon dioxide—scientists have labored to understand its properties and harness its potential. It goes without saying that without CO2, life on this planet would simply not exist. But could it also be our undoing? Can too much of a good thing be, well, too much of a good thing? By some estimates, the amount of CO2 in our atmosphere has increased nearly 50% over the past 50 years. But it's not all bad news. The rising levels of CO2 have delivered increased crop yields, improved water use efficiency, and enabled a better quality of life for many around the world. These benefits, however, are also attended by a dark side: the increased acidification of the oceans, diminished air quality, and a receding of the polar ice caps, to name a few of carbon's ill effects. Whatever one's position on climate change, there are transcendent aspects to addressing this matter that are also intrinsic to the goals of generating sustainable energy sources and improving the human condition.


JCAP scientist Dr. Jack Baricuatro studying mechanisms of the CO2 reduction reaction in the surface science laboratory at JCAP.



The efforts, technologies, and innovations dedicated to reducing CO2 levels to a carbon-neutral stasis promise wide-ranging benefits that go well beyond the battling of greenhouse gases. As such, this science has immeasurable and fundamental value to humanity. The people at JCAP agree. JCAP is the Joint Center for Artificial Photosynthesis. It's one of the Department of Energy's "energy innovation hubs" focused on developing the fundamental scientific foundations that can then be translated into the practical technologies needed to solve large-scale energy problems. JCAP was founded in 2010 with an initial focus on water splitting—separating water into oxygen and hydrogen. In late 2015, the organization entered its second phase: the scientifically challenging and exciting goal of building the scientific foundation for CO2 reduction through the discovery of new mechanisms and materials and the understanding of complex processes. Today, JCAP operates as a partnership among multiple institutions, including the California Institute of Technology, Lawrence Berkeley National Laboratory, UC Irvine, UC San Diego, and the SLAC National Accelerator Laboratory. Together, these organizations are rethinking the science that will affect the entire energy cycle. Dr. Ian Sharp, one of JCAP's scientists, puts it this way: "Really, we're tackling one of the most pressing challenges that faces humanity." But let's back up a bit. Artificial photosynthesis? This is actually not a new idea. The notion of artificial photosynthesis was suggested as early as 1912, by the Italian chemist Giacomo Ciamician. Driven by social concerns of the day, he proposed the abandonment of fossil fuels in favor of radiant energy provided by the sun and captured by photochemical devices.


An impressionistic look at photosynthesis: at left, the oxygen-evolving complex in photosystem II (Yachandra/Yano lab); at right, electronic energy transfer in photosystem II's light-harvesting complex as simulated by supercomputers at NERSC, the National Energy Research Scientific Computing Center. Learn more about NERSC here.



And in proselytizing on the idea, he assured his audiences that such a change would "not be harmful to the progress and to human happiness." Photosynthesis in nature, of course, relies upon a series of interactions beginning with the absorption of light. As a refresher, chlorophyll—from the Greek for "green leaf"—absorbs visible light in the blue portion of the electromagnetic spectrum (460 – 480nm), followed by the red portion (650 – 750nm). The green portion (490 – 530nm), however, is reflected, giving plants their familiar color. The process of photosynthesis, then, occurs as a light-dependent chemical reaction that first splits water into its constituent parts, then combines the resulting hydrogen with ambient carbon dioxide to produce glucose, expelling oxygen as the byproduct. Artificial photosynthesis draws its inspiration from these same processes, but uses them to generate fuels, like hydrogen and hydrocarbons. There are two main areas of research in artificial photosynthesis: water splitting to generate hydrogen, and CO2 reduction to make energy-dense fuels. The big idea behind artificial photosynthesis is that it allows for the conversion of solar energy into chemical energy, i.e., the storage of solar energy in chemical bonds. But generating enough fuel to power a society instead of a single plant poses some big challenges for scientists. An integrated cell for the solar-driven splitting of water consists of multiple functional components and couples various photoelectrochemical (PEC) processes at different length and time scales.
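One common way to put a number on such a device's performance is its solar-to-hydrogen (STH) efficiency. The sketch below uses the standard definition (photocurrent density times the 1.23 V water-splitting potential times the Faradaic efficiency, divided by the incident solar power, typically 100 mW/cm2 for standard sunlight); the example current value is illustrative only.

def sth_efficiency(current_density_ma_cm2, faradaic_efficiency=1.0,
                   solar_power_mw_cm2=100.0):
    """Standard solar-to-hydrogen benchmark: j x 1.23 V x eta_F / P_in."""
    return (current_density_ma_cm2 * 1.23 * faradaic_efficiency) / solar_power_mw_cm2

# About 8.1 mA/cm2 at unity Faradaic efficiency corresponds to roughly 10% STH,
# the commercial-viability threshold discussed below.
print(f"{sth_efficiency(8.1):.1%}")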


Solar-driven water-splitting processes. Image courtesy of Angewandte Chemie.



The overall solar-to-hydrogen (STH) conversion efficiency of such a system depends on the performance and materials properties of the individual components, as well as on the component integration, overall device architecture, and system operating conditions. While hydrogen is abundant on Earth, it is almost always found as part of another compound, e.g., H2O, and therefore it must be separated. There are numerous methods of splitting water, but they require energy, and so far none of these methods is cost-effective when compared to hydrogen generation from fossil fuels. Cost reduction, therefore, is a primary technical challenge for renewable hydrogen production. Which brings us back to nature's methods. The imitation of the models and processes found in nature is called biomimicry, a phrase coined by Otto Schmitt, whose "Schmitt Trigger"—a staple of electronic circuits—emulates the system of nerve propagation in squids. Artificial photosynthesis likewise takes a biomimetic approach in seeking to replicate a natural process in a controllable, efficient, and scalable way. But it isn't easy. Scientists and technologists have struggled mightily to replicate nature's ways of capturing, converting, and storing solar energy in ways that have economic value. For example, the photovoltaic cell exploits solar energy, but it is a use-it-or-lose-it proposition: a cost-effective way of storing the electricity the cells produce remains a substantial challenge. In photosynthesis, the plant actually locks solar energy within the chemical bonds of the glucose molecule it creates. Now, as remarkable as this process is, even if we were able to fully tap the power of photosynthesis, it still wouldn't be good enough to satisfy the economic drivers for large-scale energy production. While the plant might be happy enough, it turns out that natural photosynthesis is not terribly efficient. In fact, it is quite inefficient: of the total solar spectrum that is incident upon the leaf, nearly half falls outside chlorophyll's active range of 400 – 700nm. And of the in-range photons, nearly a third are lost to incomplete absorption; not all photons score a hit on the chloroplasts. When all such degradations are accounted for—and there are several other factors—we end up with something on the order of 1% – 3% efficiency. Not


exactly the stuff of commercial-grade scalability. The minimum solar-to-chemical conversion efficiency required for commercial viability is closer to 10%. The challenge, then, is to improve upon nature’s process, which means absorbing more light at a wider range of wavelengths, ideally across the entire solar spectrum. That is precisely what the researchers at JCAP are doing. And they’re making tremendous progress. Professor Harry Atwater, JCAP’s director, explains, “JCAP, over the last five years, has had a mission to develop a solar fuels generator that was more efficient than natural photosynthesis. I’m happy to say that in producing a 10% solar-driven device, it was able to accomplish its goals. It’s these scientific advances that will eventually bring about a renewable carbon cycle for our economy. And to that end, we are in the second year of our five-year CO2 reduction project, in which we will parlay these advances.” And the excitement at JCAP is palpable. “What’s really exciting,” says Dr. Adam Weber, “is the fact that we can produce something, in an ideal world, just from the air; from the sunlight that’s hitting the Earth, we can produce something that can feed into your car. No longer do we have to think about digging up oil.” To accomplish that work, Dr. Charles McCrory, a JCAP alum currently working at the University of Michigan, explains, “First, we develop light absorbers to capture solar energy. Second, we develop catalysts to facilitate the chemical reactions, and lastly, we develop membranes to separate the resulting chemical fuels from the oxygen which we’re also producing at the same time.” The catalysts can be classified into two general categories. Dr. Ian Sharp explains, “One of these is heterogeneous catalysts, which are made from solid materials like metals and oxides of metals. The other would be molecular catalysts. These molecular catalysts are ones that can be finely tuned by chemists to mimic natural processes. These molecules can support reactions such as the combining of two protons


Electrochemical reduction of CO2 by bimetallic alloys. Image courtesy of American Chemical Society.



High-throughput theory and experimental discovery of new materials. Image courtesy of National Academy of Sciences.

and two electrons to form hydrogen gas. For the more complex fuels that we would like to generate in the future, such as methane and methanol, these catalysts have to have very complex configurations to enable each and every step of that reaction to be done with the highest degree of efficiency and minimize the energy that we have to input.” The current focus of JCAP is on the development of heterogeneous catalysts for CO2 reduction and its “sister” oxidation reaction– oxygen evolution reaction. Not surprisingly, JCAP is also involved in materials discovery. “By the discovery of new materials,” Dr. Sharp says, “we’re enabling new technologies that no one predicted.” Dr. John Gregoire explains, “A lot of materials absorb sunlight, but we need a material that not only absorbs it, but also harnesses that solar energy to perform specific chemical reactions. And we need materials that can do this all at once so that we can integrate those materials into a high-efficiency device. If we can discover the right set of materials and successfully integrate them, we’re really creating the ultimate renewable technology. But with that great promise


comes the great challenge of finding these materials that are not presently known.” Another constraint is that these materials have to be very inexpensive and easily obtained in large quantities. “And currently,” Gregoire notes, “our only source for elements is the Earth. So we need earth-abundant materials to include in this device to make it scalable and manufacturable.” Gregoire explains that over the decades of solar fuels research, practically every element from the periodic table has been considered, but none of them is good enough. “That means we need to combine the elements in different ways that no one has ever done before. To make and test all those combinations, you need very, very fast experiments.” And for that reason, JCAP is built for speed. It turns out that JCAP is not alone in this endeavor: there are thousands of researchers the world over working to crack this grand challenge. JCAP is joined in this global effort by the Max Planck Institute in Germany, the Ministry of Education, Culture, Sports, Science and Technology (MEXT) in Japan, and the Korea Center for Artificial Photosynthesis (KCAP), among others—all of whom are making gains in artificial photosynthesis research. Additionally, the $20M NRG COSIA Carbon XPRIZE will “challenge the world to reimagine what we can do with CO2 emissions by incentivizing and accelerating the development of technologies that convert CO2 into valuable products.” That’s a lot of activity. “And hence,” Gregoire says, “we have a high-throughput experimentation group to do that discovery work. We work with our colleagues to generate ideas faster than traditional experiments can test them.


JCAP scientist Dr. Aniketa Shinde performing fast screening tests of new materials in the high-throughput laboratory at JCAP.



Inkjet-printed libraries of new materials.

And then we can invent instruments and invent techniques to perform those experiments a hundred to a thousand times faster than traditional methods." In the end, JCAP is compressing a development process that spans discovery, translational, and operational device phases from many months to a matter of weeks. This is enabled not only by the high-throughput techniques, but also by the interdisciplinary nature of the energy innovation hubs, where scientists and engineers work together to translate discoveries into device prototypes. Gregoire adds, "When we invent the tools, we get to synthesize materials that no one in the world has ever synthesized before. So being able to discover new things every day is really exciting." Some of the hallmark technological achievements of JCAP include the synthesis of high-performance materials using a scalable technique based on inkjet printing, the optimal integration of those materials so the device works even better than the sum of its parts, and the establishment of comprehensive device modeling to guide the development of solar fuels prototypes. Indeed, a key part of the technology development is prototyping


these new devices and benchmarking their performance levels to determine how active each of the new components is. Dr. Charles McCrory explains that JCAP’s unique catalyst benchmarking process yields more reliable results. “There are thousands of researchers working on this problem, and each one is working under slightly different conditions. What we do is make apples-to-apples comparisons of all the different catalysts that are being developed throughout the world. We study the catalysis and the activity under the exact same conditions. That’s important, because if you’re trying to determine how fast a car can go, the track conditions matter. So we’re testing the speed of each car on the exact same track.” JCAP’s rapid pace of progress can be attributed to its organizational principles. JCAP is a multidisciplinary hub that is designed to efficiently cross-pollinate know-how across its four research thrusts: electrocatalysis, photoelectrocatalysis, integration of materials, and test bed prototypes and modeling. As Dr. McCrory attests, “One of the best things about working at JCAP is you get to work with people from the wide spectrum of backgrounds—physicists, chemists, chemical engineers, mechanical engineers. And you can leverage off their expertise, as well as the main facilities we have here to solve problems in unique ways.” JCAP’s efforts to take artificial photosynthesis from the lab to practical devices that can be used for the large-scale production of energy hold tremendous promise. Professor Atwater sums it up: “The ability to bridge the divide between systems that operate on a very small scale to technologies that can be scaled up into large systems for generating fuel from sunlight will ultimately allow us to produce the energy we need—and do so in a carbon neutral way.” MI


Rapid screening of inkjet-printed libraries of materials.


ENERGY

The Moon: Persian Gulf of the Solar System?

Later this year, Naveen Jain, CEO of Moon Express, expects to launch the world's first private commercial initiative to unlock the vast hidden resources on the moon—resources spanning magnesium to platinum to titanium to helium-3—and ultimately develop a space colony there to support mining operations, particularly for helium-3. What's so special about helium-3?

By John Schroeter


Today, nuclear power plants rely upon a nuclear reaction—fission—to produce heat, which turns water into steam, which in turn drives a turbine to produce electricity. The downside to this process is a byproduct called radioactive waste. Nuclear fusion, on the other hand, joins light isotopes together—the same process that fuels the sun. And helium-3 fusion is clean: the reaction generates essentially no neutrons, and, consequently, it produces no radioactive waste. Not only is such fusion clean, it is astoundingly efficient. "Imagine," Jain says, "replacing a coal train more than a kilometer long, loaded with 5,000 tons of coal, with just 40 grams of helium-3. Just 25 tons of helium-3 could power the United States and Europe combined for a year! And with more than a million tons of the stuff on the moon, we


could keep up this pace for 40,000 years. Energy problem solved.” But we’ve got to go to the moon to get it. Thank Earth’s magnetic field for helium-3’s scarcity. Helium-3 is emitted by the sun and scattered throughout our solar system by the solar winds. That wind, however, is repelled by the Earth’s magnetic field; only a tiny amount of He-3 makes it through our atmosphere in the form of cosmic dust. The moon, however, has a weak magnetic field and no atmosphere. That makes it a fertile receptor of everything the solar wind blows its way, hence the massive deposits of helium-3. All that remains is mining it and transporting it to Earth. The good news is that the mining technology and know-how already exist, and, of course, robotic machines would perform the work. The processes involved in separating helium-3 from its ore are

Image courtesy of the European Space Agency, ESA / Foster + Partners


equally straightforward, and easily and economically accomplished. In fact, one analyst suggested that the total investment would be comparable to building a major transcontinental pipeline—but one with a vastly more productive payback. And we’d get a permanent lunar base in the bargain—a base that could serve every need from resupply to launches to training to unimagined potential for scientific discovery. What’s more, the moon has massive amounts of water locked up in ice that can not only be used in the production of rocket fuel (water being composed of hydrogen and oxygen—rocket fuel’s essential ingredients), but also to provide crucial life support. Just like the bits that fuel the internet economy, water is the oil for the space economy. Jain adds, “I am thrilled that our children are getting to experience the same excitement that we witnessed in the 1960s with the Apollo program. There was a tremendous spirit of optimism and great excitement around the planting of the American flag on the surface of the moon. But this shouldn’t be just an American dream. I would love to see the American dream become a global dream.” To this end, Jain—who views the moon as Earth’s eighth continent—observes that entrepreneurs don’t have boundaries. “Entrepreneurs work with everyone who believes in them, who believes in the cause,” he says. “Moon Express is funded by entrepreneurs from all over the world—entrepreneurs from China, Russia, India, Germany, France. We all believe that this is possible, and we all came together to make it happen. Capital is not patriotic. Capital goes where the opportunities are. Boundaries are created by politicians for their own purposes. Entrepreneurs don’t create boundaries; they expand them far beyond any visible horizon on Earth or beyond.” MI To learn more of Jain’s vision, check out our new eBook series, Moonshots: The Great Entrepreneurs Riff on the Technology Innovations that are Shaping Life on Planet Earth—and Beyond.



HISTORY OF TECHNOLOGY

By H.R. (Bart) Everett

The past two decades have seen exponential growth in the burgeoning field of unmanned systems, foretelling what may one day be viewed as the biggest paradigm shift in the evolution of mankind. Military drones have forever changed the conduct of war, providing persistent surveillance, enhanced command and control, and precision strike capabilities, while ground robots play life-saving roles in neutralizing landmines and improvised explosive devices. On the civilian side, self-driving cars, trucks, buses, and even bicycles are no longer wishful fantasy, already mingling with manned vehicles on congested roadways. Thousands of autonomous material-handling robots work tirelessly around the clock, fetching millions of products in Amazon's massive fulfillment warehouses,


Figure 1 • New Zealand inventor Alban J. Roberts stands behind the wireless controller for his 15-foot dirigible during an indoor demonstration in 1912.



with airborne and street-level drones already envisioned for doorstep delivery. These are indeed exciting times! Contrary to popular belief, however, interest in robots and drones dates back much further than most people realize, as the United States and other countries have been employing unmanned military systems for more than a century. One of the earliest attempts at airborne force projection, for example, involved free-flight balloons introduced during Austria's 1849 siege of Venice in the First Italian War of Independence. Each of these 23-foot-diameter balloons trailed a long copper wire that remotely triggered the release of a bomb over its target. As supporting technologies evolved, a number of wirelessly directed airships appeared, such as the 22-foot dirigible of inventor Albert Leo Stevens and electrical engineer Mark O. Anthony, who put on a two-hour demonstration at a blimp hangar in Hoboken, NJ, in February 1909. The following year, Raymond Phillips demonstrated a 20-foot model of a Zeppelin dirigible inside the London Hippodrome, astonishing his audience with aerial maneuvers and flashing lights. New Zealand inventor Alban J. Roberts developed a similar radio-controlled model for theatrical performances in 1912 (Figure 1). Another model airship, built by Christopher Wirth of Nuremberg, Germany, was demonstrated at a Berlin circus in 1913.

Figure 2 • A number of onboard subsystems automatically controlled the Fu-Go balloon’s altitude to keep it in the jet stream, release its incendiaries upon arrival, and then cause the entire configuration to self-destruct.


The inherent stability of these lighter-than-air systems relative to primitive fixed-wing aircraft of the time provided a distinct advantage for remote control, but their susceptibility to ambient air currents made the battery-powered craft impractical for outdoor operation. This issue, plus their relatively short battery life and the range limitations of early wireless gear, meant that demonstrations were confined to indoor venues and thus ill-suited for military use. Several attempts to remotely operate full-size airships powered by gasoline engines ensued but were found to be more or less impractical. Years later during WWII, Japan took an entirely different approach, purposely electing to harness the power of reasonably predictable winds aloft for sustained propulsion over long distances, without the need for radio control. Military engineers reasoned that the 200mph winter jet stream, about which the rest of the world knew very little, could theoretically push at least 10 percent of their weaponized free-flight balloons across the Pacific Ocean in about 3 days. A sustained attack against North America began in November 1944, during which some 9,300 incendiary Fu-Go balloons were launched (Figure 2), of which an estimated 900 to 1,000 crossed the Pacific and made landfall. While there were 361 confirmed arrivals in the United States and Canada, the winter rainy season substantially reduced the threat of wildfires. From the very beginning, military applications for unmanned systems were eagerly pursued in hopes of obtaining government funds for both development and future sales. In the maritime domain, the precursor to unmanned surface vehicles (USVs) was the fire ship, a vessel of opportunity loaded with combustible materials and set afire to bear down upon the enemy, driven by prevailing winds and tides. Its Achilles heel was the unpredictable trajectory of the burning hulk (which could reasonably be outmaneuvered

74

*

*

• mechanixillustrated.co • Fall 2017

Figure 3 • A) Elevation view of the remote-controlled sailboat, armed with a spar torpedo, as proposed by Werner Siemens in 1870. B) Plan view of the pneumatically actuated tiller mechanism, controlled by varying the pressure in the rubber bag via a trailing air hose (not shown).



unless the target ships were at anchor), compounded by the inherent lack of surprise. Remote control offered obvious advantages, and applying such a solution to a boat in two dimensions was far more forgiving than doing so with an airship in three. Accordingly, in 1862, Captain W.H. Noble of the Royal Navy proposed (but never pursued) an electrically steered surface craft as a means for guiding a fire ship toward an enemy fleet. Just two years later, Captain Giovanni Luppis of the Austrian Navy unsuccessfully approached British civil engineer Robert Whitehead about steering an unmanned spar-torpedo boat, using long ropes from shore to control the rudder. The first practical demonstration of a remotely guided unmanned surface vehicle has its origins in an 1870 design proposed by a Prussian artillery officer named Werner Siemens (Figure 3), to be pneumatically steered via a trailing air hose. Soon to be the founder of a huge industrial conglomerate that would bear his name, Siemens constructed a far more practical working prototype just two years later, which by 1874 could be electrically controlled via a single-conductor tether, the return circuit being through the seawater. Rudder deflection of the steam-powered vessel was achieved using polarized relays (invented by Siemens) and by reversing the control current on shore (Figure 4), with the rudder returning amidships whenever the current ceased. In the absence of operator input, the innovative remote steering mechanism even provided for automatic heading stabilization, based upon an onboard magnetic compass. The most significant 19th-century USV was developed by Nikola Tesla, a Serbian immigrant to the United States and one of the most prolific inventors of all time.


Figure 4 • A typical polarized-relay configuration employed a pair of permanent-magnet armatures a and d to actuate either of two sets of electrical contacts a-b-c or d-e-f, depending on the direction of the coil current flowing between binding posts R.
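The logic of that single-wire steering scheme is simple enough to sketch in a few lines of Python; the angles, gains, and names below are invented for illustration and are not taken from Siemens' design.

def rudder_command(control_current_amps, compass_error_deg):
    """Map shore-side tether current (and the compass, when idle) to a rudder angle."""
    if control_current_amps > 0:
        return +15.0                  # polarized relay pulls the rudder to starboard
    if control_current_amps < 0:
        return -15.0                  # reversed current: rudder to port
    # No operator input: rudder essentially amidships, trimmed by the onboard compass
    return max(-5.0, min(5.0, -0.5 * compass_error_deg))

for current, heading_error in [(+0.2, 0.0), (-0.2, 0.0), (0.0, 8.0), (0.0, 0.0)]:
    print(current, heading_error, "->", rudder_command(current, heading_error))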



Figure 5 • Tesla’s radiocontrolled boat, approximately 4 feet long by 3 feet high, employed a series of storage batteries E to power propulsion motor D and steering motor F (adapted from US Patent No. 613,809).

In 1897, he constructed a radio-controlled "telautomaton" in the form of a boat (Figure 5), which was privately demonstrated for investors at the first Electrical Exposition at Madison Square Garden in September of the following year. Tesla's pioneering work was significant for two reasons: 1) it appears to have been the first reduction to practice of a radio-controlled unmanned system; and 2) it provided an effective means of exercising such radio control over multiple subsystems via multiplexing, versus a single binary on-off function. But in 1897, some seven years prior to the invention of even the most simplistic "diode-valve" vacuum tube by Professor Ambrose Fleming in 1904, how was radio control even possible? The answer traces back to an unusual but simple component devised by the French physicist Edouard Branly, who published his findings in 1890. The device consisted of a small glass tube, filled with metal filings and capped at both ends by conductive plates, which came to be known as a "filings tube." In 1894, the English physicist Oliver Lodge demonstrated an application of Branly's filings tube as a mechanical radio-frequency (RF) detector, which he called a "coherer."


It would prove to be a huge technological innovation that ushered in a whole new era of possibilities. The secret to the coherer's success as an RF detector was Branly's observation that the metal filings inside the non-conductive tube tended to clump together (or cohere, as Lodge put it) in the presence of an RF signal, causing the impedance across coherer terminals A and B in Figure 6 to decrease significantly. In essence, the coherer was an RF-actuated single-pole switch. In practice, the broad-band RF signal from a spark-gap transmitter was received by the antenna at upper left and passed to ground through coherer C. The resulting impedance drop increased the current flow from battery B1 through the coil of sensitive relay R, drawing in the relay armature to close its contacts, thereby allowing battery B2 to activate power relay S for the control output. The fly in the ointment was that once clumped together by the RF signal, the metal filings inside the coherer tended to stay clumped even after the RF signal ceased, meaning neither relay R nor relay S would drop out. To get around this problem, an electromechanical clapper (typically a doorbell without the bell) was wired in parallel with relay S and positioned so as to vibrate the coherer assembly and shake loose the filings. (As long as an RF signal was being received, the metal filings remained cohered.) Besides being the first to apply radio control to an unmanned vehicle, Tesla's further contribution was to demultiplex multiple commands from the received binary pulse train, such as start, stop, reverse, turn left, or turn right. That same year, Englishmen Ernest Wilson and C.J. Evans reportedly operated slow-moving boats on the Thames River using a coherer-based radio-receiver design that controlled steering only. Lacking Tesla's multiplexing scheme, they instead used two separate transmitters and receivers for deflecting the rudder either left or right,


Figure 6 • Schematic diagram of an 1896 Marconi receiver design using coherer C as an RF detector (decohering clapper not shown), forming a binary on-off controller via electrical relays R and S.



with two orthogonal antenna dipoles to achieve signal discrimination (Figure 7). On March 26, 1898, the pair received UK patent No. 7,382 entitled “Improvements in Methods of Steering Torpedoes and Submarine Boats,” and US Patent No. 663,400 entitled “Methods of Controlling Mechanisms by Means of Electric or Electromagnetic Waves of High Frequency” on December 4, 1900. Note the proposed military application of remote torpedo guidance in the UK patent title, which was a topic of considerable interest around the turn of the century. Having viewed the previously mentioned concept for a shore-guided surface craft proposed by Captain Giovanni Luppis in 1864 to be impractical, British engineer Robert Whitehead had developed instead a cylindrical torpedo driven by compressed air, completed in 1866. Lacking gyro stabilization, this forerunner of modern torpedoes suffered from poor accuracy, for which wire-guided and later radio-control solutions seemed to offer great promise. A plethora of attempts to create practical dirigible (steerable) torpedoes, both wire-guided and wireless, were soon undertaken by inventors all over the world. The first such device to become an actual service weapon was conceived by a Union naval officer named John Lewis Lay, who had developed the Wood-Lay spar torpedo during the American Civil War. The term torpedo originally denoted what is now more commonly called a mine, essentially a floating or submerged explosive device that detonated when struck by a ship or was remotely triggered via an electric cable from shore. Having resigned his commission after the war, Lay was hired to design the underwater defenses of Callao, Peru, in preparation for an anticipated Spanish attack. While thus engaged laying down a submerged defensive network of electrically fired torpedoes (i.e., anchored mines), Lay was struck by a sudden inspiration. The “torpedo planter” in which he was embarked carried a torpedo


Figure 7 • The radio-control scheme of E. Wilson and C.J. Evans employed two separate receivers with their antenna pairs 1-2 and 6-7 arranged orthogonally (i.e., one vertical and one horizontal) for two-channel selectivity (redrawn from US Patent No. 663,400).



attached to a reel of insulated wire, and as the small boat advanced, this trailing wire leading back to shore was paid out by rotating the reel. Lay reasoned that if the boat were provided with motive power, he could remotely control it from shore with electrical signals sent over the trailing wire. In this way, the torpedo could seek out an enemy vessel instead of waiting to be hit by one. Furthermore, a cylindrical "auto-mobile" (self-powered) torpedo that ran beneath the surface was an even better delivery option than an exposed boat. Upon completion of his Peruvian venture in 1867, Lay returned to his hometown of Buffalo, NY, to pursue his idea. Key innovations were: 1) remote electrical steering via polarized relays and a single-conductor tether; and 2) propulsion from pressurized "carbonic acid" (liquefied carbon dioxide) expanding from its liquid to its gaseous state. This approach, suggested in 1869 by Walter Hill of the Naval Torpedo Station, yielded more potential energy for the same size storage flask than the compressed air used in Whitehead's torpedo. While Lay's first two prototypes proved impractical (Figure 8), he successively improved the design through 30 subsequent iterations over the next 20 years, filling orders from the United States, Egypt, Peru, and Russia. Meanwhile, inspired by the dangers associated with his dirigible-airship attempts in the late 19th century, the Spanish inventor Leonardo Torres-Quevedo sought a more practical means for wireless development, using a test surrogate in a marine environment.


Figure 8 • Lay Torpedo No. 2 at the Naval Torpedo Station, Newport, RI, circa 1892, where it was known as Station Lay No. 1 or, as marked here, Lay Torpedo No. 1. Courtesy of US Navy.



is on display at the Torres-Quevedo Museum at the Civil Engineering Faculty of the Polytechnic University of Madrid (Figure 9). The Telekino control circuitry expanded upon Tesla's earlier distributor design by increasing the number of rotary-switch positions and providing separate decoder disks for propulsion and steering (Figure 10). Decoder disk H was used to select one of five possible propeller speeds, including stop, whereas disk T provided five preset helm positions on either side of "rudder amidships." Interestingly, Torres-Quevedo chose to conduct initial Telekino testing of this maritime surrogate for his envisioned dirigible airship on a surrogate of its own, in the form of a three-wheeled land vehicle. Unfortunately, no pictures of the latter are known to exist. In 1904, preliminary experiments using a small electric boat were run at the Royal Country House Lake in Madrid, with a maximum control range of about 250 meters.

Figure 9 • The original Telekino feasibility-demonstration hardware on display at the Torres-Quevedo Museum at the Civil Engineering Faculty of the Polytechnic University of Madrid. Courtesy of Antonio Perez-Yuste.

Figure 10 • The Telekino employed two motor-driven decoder disks H (propulsion) and T (steering), which were positioned in response to the output of a rotary switch, which was incremented by ratchetand-pawl mechanism L in response to pulses detected by the receiver (redrawn from Spanish Patent No. 33,041).


The next demonstration, using the electrically powered launch Vizcaya, took place in the Estuary of Bilbao near Algorta on March 5, 1905, followed by a well-advertised public demonstration in the port of Bilbao on November 7 (Figure 11). An impressive standoff range of 2 kilometers was achieved, witnessed by a very large and enthusiastic crowd of attendees. Unable to obtain further support from the Spanish government, however, Torres-Quevedo abandoned all further pursuit of radio control. The previously mentioned three-wheeled test surrogate employed by Torres-Quevedo in 1904, which had an effective range of just 20 to 30 meters, appears to be the first known example of a radio-controlled unmanned ground vehicle (UGV). Practical military applications in the ground domain did not come into play until WWI, spurred by the devastating stalemate of trench warfare. At least two tethered UGVs were introduced by the French in 1915. The wire-guided Crocodile Schneider Torpille Terrestre (Figure 12a) carried a 40-kilogram internal charge for attacking German barbed-wire and concrete casemate defenses. Its lackluster performance during operational testing through June 1916 was eclipsed by higher expectations for the newly introduced battle tanks. The similar Aubriot-Gabet Torpille Electrique explosive charge carrier was powered by a single electric motor mounted on its upper rear deck (Figure 12b). This instantiation appears to have been unsteerable,


Figure 11 • The rather bulky Telekino apparatus, similar to that shown in Figure 9, has just been relocated from the dock to the stern of Vizcaya at Abra de Bilbao, September 6, 1906. Courtesy of www.torresquevedo.org.

Figure 12 • A) Several Crocodile Torpille Terrestre (Type B) land torpedoes are lined up outside the Schneider plant in Le Creusot, France. B) The French Aubriot-Gabet Torpille Electrique land torpedo featured a third track angled upward at front center, probably to flatten barbed-wire defenses. Courtesy of Ministere de la Guerre.



which was in keeping with the nature of its intended target, the nearby and very long opposing trenches of the enemy. The trailing tether was described in a postwar article in Scientific American as providing power for the drive motor and being paid out from the vehicle versus dragged over the ground. While both the Torpille Terrestre and the Torpille Electrique were reportedly tested in battle but never series produced, they nonetheless set the stage for numerous improved UGVs that would see extensive use in WWII. MI

About the Author

Commander (Ret.) H.R. (Bart) Everett is the former technical director for robotics at the Space and Naval Warfare Systems Center Pacific in San Diego, CA. In this capacity he has served as technical director for the Idaho National Laboratory (INL) Advanced Unmanned Systems Development Program funded by the Office of the Secretary of Defense (OSD), technical director for the Army's Mobile Detection Assessment Response System (MDARS) robotic security program, and chief engineer for the USMC Ground Air Robotic System (GATORS). He is the former director of the Office of Robotics and Autonomous Systems (SEA-90G), Naval Sea Systems Command, Washington, DC, and has been active in the field of robotics for over 50 years, with personal involvement in the development of over 40 mobile robotic systems, with an emphasis on sensors and autonomy. He has published more than 125 technical papers and reports (including several books) and has 21 related patents issued or pending. He serves on the Editorial Board for Robotics and Autonomous Systems magazine and is a member of AUVSI, IEEE, and Sigma Xi. This article draws from his book, Unmanned Systems of World Wars I and II, MIT Press, 2015. Find him on Twitter: @HRBartEverett.




HISTORY OF TECHNOLOGY

The Turbine-Powered Firebirds
Jet engine technology comes down to Earth

From 1949 to 1961, General Motors staged a total of eight shows which spotlighted products from each of its varied divisions.

Those from 1953 to 1956 are the most remembered, since those shows featured a variety of what were then known as "dream cars." Today, we refer to them as concept cars.


The dream cars were the drawing card for the millions of people who ventured to the General Motors Motorama, an admission-free, extravagant auto show held in major cities across the United States.

BY DAVID W. TEMPLE



General Motors of Canada also sponsored a nearly identical version in major cities in Canada, but utilizing some of what were then one-year-old dream cars for the shows held there from 1954 through 1957. Among the many dream cars produced for the auto-show circuit were a series of experimental types which went beyond the typical concept cars; they were serious, advanced research cars powered by an alternative to GM's straight-six and OHV V-8s of the day—the turbine engine. These cars were known as the Firebird I, Firebird II, and Firebird III. A brief revival of the GM Motorama took place for the 1959 model year, in which the 1958 GM Firebird III was exhibited. It was also shown at many other venues, including the GM of Canada Motorama. By this time, the construction of new dream cars had been greatly reduced for multiple reasons, discussion of which is beyond the purpose of our story.

Turbine Research Begins at GM

Soon after jet engines began to enter into general use in military aircraft after the end of World War II, the idea of adapting the technology for automotive use began to be explored. The British had already built and tested the world's first turbine-powered car, the Rover J.E.T., shortly before the emergence of the GM Firebird. The first of the GM turbine cars was the 1953 GM Firebird, which was also known as the XP-21 Firebird. (XP-21 was its internal designation prior to a formal name being chosen.) Later, it was relabeled as the Firebird I after its successor, the Firebird II, was built.

Harley Earl, the vice president of GM Design, posed with the Firebirds for this photograph taken in 1958 at GM's Mesa, AZ, proving grounds. Earl retired from GM at the end of the year. He established GM's styling department in 1927. Credit: GM Media Archive.


It resembled a jet-powered fighter plane, and indeed, its styling was largely influenced by the shape of the delta-winged F4D Skyray interceptor flown by the U.S. Navy. GM's styling boss, Harley Earl, explained the origin of the Firebird's design in an article he authored for the Saturday Evening Post titled "I Dream Automobiles." He wrote, "The Firebird tickles me because of its origin. In our 1953 Motorama the spotlight model of the dream cars was the Le Sabre, and just after it had been first shown to company officials, I was on an airplane trip. I picked up a magazine and noticed a picture of a new jet plane, the Douglas Skyray. It was a striking ship, and I liked it so well that I tore out the picture and put it into my inside coat pocket. Subsequently a traveling companion, also a GM officer, stopped at my seat to congratulate me on Le Sabre. 'But,' he added, 'now what will you do for next year?' At that moment, I had absolutely nothing in mind. But I patted the pocket where the picture of the Skyray was tucked away. 'I have it right here,' I said. I was joking. I was merely answering his banter in kind. Then, bingo, I decided I had kidded myself into something. The result, as you may have seen, is that the Firebird is an earth-bound replica of the Skyray airplane." The radical airplane-like styling for the Firebird was not simply about getting noticed; it served to underscore the fact that a turbine engine—basically an aircraft jet engine—powered it. Furthermore, its unconventional appearance was intended to imply that practical turbine engines for automobiles were considered to be years away, at best, from being used for production automobiles.

The Firebirds are part of the collection of the GM Heritage Center and continue to be shown at various auto shows and museums across the country. Credit: GM Media Archive.


The original Firebird was also expected to provide an opportunity to test the little-understood area of aerodynamics for land vehicles. A near-exact scale model of the research car was sent to the California Institute of Technology for extensive wind tunnel testing to establish the best shape for the Firebird's body as well as to determine the optimal brake-flap angles and the amount of negative angle of attack for the wings. In 1953, turbine engine research with the GT-300 had been underway at the GM Research Laboratories Division for several years. A turbine engine was originally expected to be tested for use in heavy-duty trucks and buses, but then Harley Earl thought an automobile should be used as the research medium. Simply put, the Firebird was a research vehicle to help determine whether or not a turbine engine could be used to provide economical and satisfactory performance in an automotive application. The Firebird I's more advanced GT-302 Whirlfire Turbo-Power engine and its chassis design were the responsibility of Charles McCuen, GM vice president and general manager of the GM Research Laboratories Division, and William Turunen, who had delved deeply into the potential of the turbine engine for automotive use. Bob McLean, a Cal Tech graduate who had an aeronautical background, was placed in charge of the Firebird's overall design. The nose of the Firebird contained a 35-gallon fiberglass fuel tank, and just behind the cockpit sat the two-part gas turbine engine consisting of the gasifier and power sections connected by a flexible shaft. The gasifier section was analogous to the engine and torque converter pump of a conventional automobile, and the power section substituted for the torque converter turbine, transmission, and rear axle gears. Unlike a jet engine, which propels an aircraft by the expulsion of the exhaust, the turbine engine of the Firebird had to have its exhaust gases funneled through a power turbine connected directly to the car's rear wheels via a transmission.

The Firebird I had a single deployable headlight. Credit: GM Media Archive.


The gasifier section was composed of the compressor rotor and a gasifier turbine wheel, each attached to a common shaft. Air entering the compressor was pressurized to 3 ½ times atmospheric pressure (14.7 psi at sea level) prior to entering the two combustion chambers, where the gas temperature soared to 1,500 degrees Fahrenheit. The hot gas blasting from the gasifier turbine ran the second turbine—the power section turbine—connected to the Firebird's rear wheels through a two-speed planetary transmission. The GT-302, which idled at 8,000 rpm, was rated at 370 hp at 26,000 revolutions per minute of the gasifier turbine and 13,000 rpm of the power turbine—well beyond the RPMs of an automobile gasoline engine but actually relatively low revs for a turbine engine. Its lower rotational speed reduced the stresses imposed on the moving parts, thus increasing reliability. Even so, the stresses applied were plenty high; a gasifier turbine blade-tip speed could be as high as 1,000 mph, placing a 3,000-pound pull on each lightweight blade. The 775-pound weight of the GT-302 turbine engine accounted for 31% of the Firebird's total weight of 2,500 pounds. These numbers implied a theoretical top speed of over 200 mph! The suspension system of the Firebird was composed of a double wishbone and torsion bars in front, while the rear received a DeDion type. Split brake flaps on the trailing edges of the wings, controlled with switches on the steering wheel that activated aircraft-type actuators, helped to slow the Firebird from higher speeds. Eleven-inch-diameter brake drums were mounted outside the wheels rather than inside them to help dissipate the heat generated from braking. Furthermore, a total of 16 gauges provided important performance data to the driver of the Firebird.

The 1953 Firebird I was shown at the 1954 GM Motorama show circuit. This photo was taken at the Waldorf Astoria's grand ballroom in January of that year. Credit: GM Media Archive.


This aft view of the Firebird I shows it with its speed brakes in the deployed position. The GT-302 Whirlfire turbine engine powered it. Split brake flaps on the trailing edges of its "wings," controlled with switches on the steering wheel that activated aircraft-type actuators, also helped to slow the Firebird from higher speeds. Credit: GM Media Archive.

Three-time Indy 500 winner and engineer Mauri Rose was a consultant to GM. One of his assignments was to evaluate the performance of the Firebird at GM's test track in Mesa, Arizona. Engineers expected the car would easily surpass the record set by the experimental Rover J.E.T. However, before Rose could put the Firebird through high-speed runs, Charles McCuen decided to perform some tests himself and almost got killed in the process. The Firebird accelerated slowly, but once its turbine engine reached high rpm it began to accelerate quickly. As McCuen approached the far turn on the test track, the Firebird was accelerating at a high rate; letting off the accelerator did virtually nothing to slow the speeding car because the turbine engine did not provide engine braking as in a conventional automotive engine. There was very little time to apply the brakes. The car skidded underneath the 41-inch-high guard rail and tumbled several times. McCuen survived only because of the Firebird's built-in headrest and safety harness. He recovered just enough from his injuries to return to work for a while but then took an early retirement at age 63 in 1955 and lived another 20 years. The fiberglass-bodied Firebird was repaired in time for the opening of the 1954 GM Motorama at the Waldorf Astoria. Though Mauri Rose did test drive the GM Firebird I, high-speed trials were not attempted again, thus leaving the true top speed as only an educated guess. In a report written by Rose for the April 1954 Motor Life, he stated,


“… the steering was absolutely true. The car wanted to behave. It wanted to keep going straight ahead. It was perfectly stable.” He also said with a note of strong conviction, “With absolute sincerity I can say that the car itself is an outstanding job from both styling and engineering standpoints.” The original Firebird had some deficiencies—not at all surprising with a new, advanced project. It was somewhat noisy, it had poor fuel mileage, and the exhaust temperature was very high at roughly 1,000 degrees Fahrenheit. These problems would be mitigated with the next Firebird turbine car.

1956 GM Firebird II

General Motors built upon the lessons learned with the Firebird I when it designed its next turbine-powered research car—two of which were built—the 1956 Firebird II (XP-43). It was "the first American gas turbine passenger car specifically designed for family use on the highway," according to a GM-issued press release. (The claim was technically correct. Chrysler Corporation tested a turbine engine in a nearly stock 1954 Plymouth Belvedere, a car adapted for a turbine engine rather than designed from the start for turbine power.) However, the new Firebird II went far beyond the aircraft-like Firebird I. It was used to test experimental suspension and braking systems, too. The Firebird II also pioneered the concept of futuristic electronic highways in which cars of the future might be controlled electronically for speed, direction, and spacing interval in order to eliminate driver error in the operation of an automobile. Today the concept has reemerged as the self-driving car.

Three-time Indy champion Mauri Rose was hired by GM as a consultant. He tested the 1953 Firebird I at the company's proving grounds in Mesa, AZ. Note the swing-open canopy. It could be opened from either side or completely detached. Credit: GM Media Archive.


Another innovation tested with just one of the Firebird IIs was titanium as a potential alternative for body construction. This one was nonfunctional and was used as the display car for the 1956 GM Motorama. The other one, built with a fiberglass body, served as the actual research vehicle. Dr. Lawrence Hafstad, the vice president of the GM Research Laboratories staff, was assigned to lead the engineering team in charge of designing the engine and chassis of the Firebird II. Among the advances made with this experimental car over the previous one was its GT-304 Whirlfire, which had a more efficient regenerator that recycled 80% of the exhaust heat wasted in the GT-302 of the Firebird I. As a result, fuel economy improved to almost that of the average piston engine of the day. Another was lower-temperature exhaust gases, which traveled through a set of stainless-steel pipes running through the rocker panels and onward to ports on top of the rear fenders; exhaust temperature was reduced to nearly the same as any other automobile's. Noise was also reduced to nearly that of a conventional car through the use of a silencer built into the nose of the Firebird II. The GT-304 was a less powerful engine—producing only about half as much power, at 200 gross horsepower at 28,000 rpm—and it had to carry more than double the weight its predecessor did. As a test bed for a turbine-powered family car, it also lacked the sleekness of the first Firebird.

Other Ideas Tested on the Firebird II

This photograph of the 1956 Firebird II (functional version) was taken at GM’s proving grounds in Mesa, AZ, where the “highway of tomorrow” concept was tested. The FB II was equipped with air conditioning, much to the relief of anyone sitting underneath the experimental car’s canopy. Credit: GM Media Archive.


Other advanced features tested on the Firebird II involved its starting procedure, highway of tomorrow control systems, air conditioning, chassis, and braking systems. Starting the Firebird II’s turbine engine was done by inserting a magnetic key and depressing the starter button. A Delco-Remy motor then brought the gasifier section



up to 4,000 rpm, enough to make ignition automatic. The starter then continued to assist up to 15,000 rpm. Design for Dreaming, a movie shown at the 1956 GM Motorama to explain the concept of automatically controlled highways of the future, included a scene of a family enjoying a vacation drive in a contemporary setting, but soon they encounter a frustratingly congested highway and begin to dream of what might be 20 years later in the year 1976. In their dream, the family was traveling in a Firebird II on the radar-controlled highway of tomorrow, free of traffic jams. Presented was the idea that such convenience would be possible with a flip of a switch to activate an automatic control system, allowing electronic impulse-emitting metal strips embedded in the road surface to communicate with electronic pick-up coils placed inside the pair of cone-like projections on the front of the Firebird II. Electronic signals controlled steering, speed, and braking through the car's onboard computer, which freed the occupants to talk, play games, watch television, or just watch the scenery. Occupants could communicate with control towers along the "Auto-Way" to obtain directions, find motel vacancies, make reservations, or get other information, while the control tower operator could communicate with passengers by flashing messages on the two TV screens in the car, or through voice communication. As soon as the driver entered the roadway, the control tower operator could check the fuel level and engine operation of the car and synchronize speed and direction while the driver manually positioned the car over the metal strips. If anything was found to be amiss with the vehicle at any time along the way, the car could be guided automatically to a safe place out of traffic.

GM's Moraine Products Division designed the experimental all-metal "Turbo-X" brakes for the Firebird II. It was composed of cast-iron disks rotating with the car's wheels and a set of metal-lined pads. Applying the hydraulic brakes squeezed the disk between a movable pad on the inboard side and a fixed pad on the outboard side. From the author's collection.


The "highway of tomorrow" systems of the Firebird II were only simulated on the car for a time. About two years after the 1956 GM Motorama, functional systems were installed on the road test car and successfully demonstrated to the press by GM engineers. The sloping nose of the Firebird II contained a set of oil-cooling fins located immediately behind the electronic sensors in the cone-like projections, all fitted in deep recesses. Behind this equipment were what looked like turbine blades, which served to prevent larger objects from getting into the engine air inlet. The experimental car's small headlights retracted into the body when turned off, leaving only the turn signal/parking lamps exposed. When the headlights were turned on, they extended outward several inches and emitted a strong beam of light. A set of flaps in front opened automatically to allow heat to escape. Also in back were two 10-gallon fuel pods over which the rear fenders flared outward. Taillights appeared to be absent in daylight as they were housed in a large reflector that created a chromed effect; however, they had the appearance of a glowing jet exhaust pipe at night. The entire trunk floor rose like a freight elevator to fender height to eliminate the need to lean over for access to the trunk. Inside the trunk were eight pieces of fitted luggage as well as twin 12-volt batteries. The frame of the running car had to be built rigidly enough to prevent the clear, bubble-like canopy from cracking. Flip-up panels on the canopy opened when the magnetic key was inserted into a slot on the car's body side panels to ease ingress and egress from the car.

The functional Firebird II was equipped with many gauges to monitor the performance of its various systems. They informed the driver of data such as the temperature of critical bearings, turbine inlet temperature, gasifier rpm, fuel nozzle pressure, fuel pump pressure, regenerator hydraulic pressure, etc. Credit: GM Media Archive.


The functional Firebird II was also equipped with air conditioning and a heater. Air conditioning was vital because the bubble canopy created a greenhouse effect. It was especially needed when the highway of tomorrow systems were tested at the Mesa, Arizona, proving grounds. Also, a set of three flaps in the center of the canopy could be opened to cool the interior. An experimental air-oil suspension system designed by the Delco Division was installed in the road test car. The Delco-Matic air-oil suspension units replaced conventional shock absorbers and springs. A cushion of air provided soft springing, and a hydraulic leveling system compensated for light or heavy loads to keep the car level. According to GM, the Firebird II was the first American car to have leveling in both the front and rear. When the car was moving, the leveling system switched off and provided a smooth ride with air cushioning. The Moraine Products Division designed the experimental all-metal "Turbo-X" brakes for the Firebird II. It was composed of cast-iron disks rotating with the car's wheels and a set of metal-lined pads. Applying the hydraulic brakes squeezed the disk between a movable pad on the inboard side and a fixed pad on the outboard side.

The More Advanced Firebird III

Turbine engine research moved ahead with the 1958 Firebird III (XP-73). It proved to be the last of its kind, though for 1964 GM trotted out the Firebird IV, which was said to be turbine-powered. In fact, it was a nonfunctional show car without an engine. According to GM's booklet Flight of the Firebirds, Harley Earl "envisioned an entirely different type of car, 'which a person may drive to the launching site of a rocket to the moon,'"


The nonfunctional Firebird II with its body made of titanium was shown on the 1956 GM Motorama show circuit, as well as other auto show venues throughout the 1950s. This photo is believed to have been taken at the GM Motorama held at the Pan Pacific Auditorium in Los Angeles. From the author’s collection.



when he considered the styling for the next turbine car. For the times, the styling, which included a twin-bubble canopy and multiple tail fins, certainly fit Earl's vision. Contained within the car's fiberglass body were a regenerative gas turbine GT-305 and a separate two-cylinder, 10-horsepower aluminum engine to run the electrical and hydraulic accessories consisting of steering, braking pumps, brake flaps, air suspension, and air conditioning systems. The new engine was 25% lighter, was more compact, developed 225 horsepower at 33,000 rpm gasifier speed, and provided a 25% increase in fuel economy compared to the GT-304. The engine, transmission, and differential were mounted as a unit behind the passenger compartment. Its trans-axle included a Hydra-Matic transmission mounted directly to the differential case. Building upon the highway of tomorrow research done with the Firebird II, this car received a revised system called "Autoguide." Other advanced gadgetry of the Firebird III included a functional "Cruisecontrol" to automatically maintain a constant speed, as well as a "Unicontrol" system for driver control of steering, acceleration, and braking. It was operated with a swivel stick accessible from either seat. Moving it engaged servos controlled by three analog computers which compensated for too much driver input, such as a sudden turn at high speed. Pushing it left or right steered the Firebird III; a forward push or a backward push caused the car to accelerate or brake, respectively. Rotating the handle 20 degrees in either direction engaged reverse, and an 80-degree rotation in either direction engaged "park."

Access to the interior of the Firebird III was via an ultrasonic key. Just pointing the ultrasonic key at the door caused it to swing upward and forward; the side panel and bubble canopy were joined as one unit to form the door. A booklet detailing the FB III pointed out that one could step into the car without stooping and be seated in a comfortable lounge-chair seat. The upholstery was originally red but was changed to black sometime later. Credit: GM Media Archive.


In GM's booklet Imagination in Motion—Firebird III, the aspects of research being done with the car were encapsulated by the phrase "human engineering," which was explained this way: "Automotive engineers have long recognized an area of development known as human engineering … In this car, the driver has been viewed as a challenge rather than as a limitation to automotive engineering possibilities. Here is an opportunity to use new simplified control devices, to provide improved air-conditioned comfort, and the armchair ride of an entirely new high-pressure air-oil hydraulic suspension system." The Firebird III was the first completely electronically controlled car. Yet another area of research conducted with the Firebird III was an anti-lock braking system. Though such systems are commonplace today, it was quite high-tech for the day. The 11×4-inch "Turb-Al" brake drums were cast into the alloy wheels and faced with iron. Brake shoes were composed of sintered metallic linings. Cast-in cooling passages between the drum and wheel brought cooling air in through the hub and spun it out through the slots. At speeds above 30 mph, the airbrake flaps were deployed to aid in braking. Additionally, a "grade retarder" using oil-cooled friction disks on the rear axle shafts also went into action above this speed. The Firebird III's experimental suspension system was composed of solid axles anchored to the sub-frame with four control arms on each, which reduced the car's overall height and kept the wheels perpendicular to the road at all times to improve handling. Its air-oil springs at the front and rear were interconnected so as to cause vertical forces acting on a front wheel to be simultaneously applied to the rear wheel in order to suppress pitching motions for a smoother ride. The car's air-oil unit, operating at 3,000 psi, had a variable spring constant which gave a strong spring action when the car was heavily loaded and a relatively weak spring action when lightly loaded. Height-control valves maintained a fixed road clearance, regardless of the load carried (within design limits, of course).

The GT-305 was installed in the Firebird III from underneath the experimental car. Credit: GM Media Archive.


The September 1963 issue of Mechanix Illustrated featured Tom McCahill’s road test of Chrysler’s Gas Turbine Car—a “cyclone in a tin can ...that just may be the car of tomorrow.” Read it here.

The 1958 GM Firebird III was the only concept car on exhibit at the 1959 and 1961 GM Motorama. This photo was taken at the 1959 show at the Waldorf Astoria when the car was painted "Lunar Sand." Credit: GM Media Archive.

Turbine research continued at GM with a series of heavy-duty trucks during the 1960s, but this alternative power system has yet to enter production for any automobile, even in the 21st century. However, it got close in the 1960s. Chrysler Corporation also performed research on the use of turbines in automobiles. At one point it planned to produce and sell 500 turbine-powered Dodges for the 1966 model year, but the plan was halted due to the issuance of new regulations by the federal government regarding emissions from automobiles. Chrysler worked on that problem for some time, but when facing bankruptcy around 1980, the research ended. MI

David W. Temple is a freelance automotive photojournalist specializing in vintage cars. His work has appeared in Auto Restorer, Car Collector, Cars & Parts, Collectible Automobile, and many others over the past 30 years. Temple has also authored four books, including Full Size Fords: 1955-1970 and The Cars of Harley Earl. At the time of publication of this story, he is authoring Chevrolets of the Fifties. Much of this story was excerpted from the author’s book, Motorama: GM’s Legendary Show and Concept Cars.


BUY


SPECIAL FOCUS Self-Driving Cars

• Cameras, radar, sensors, laser, and 3D maps combine in the Volvo Concept 26 to bring about a new era of road travel. For more on Volvo innovations click here. Image courtesy of Volvo.

A car commercial of the future shows a driverless minivan tearing around the curves of an unfamiliar, mist-shrouded mountain road, its front seats notably empty. Children sleep trustingly inside as the minivan confidently steers itself around a sharp bend. Suddenly, a dark object drops out of the air and plummets toward the windshield. The camera pans across the vehicle’s vacant front seats and then


By Hod Lipson and Melba Kurman



back to the sleeping children. A booming commercial-ready voice-over asks, “Is it just a plastic bag or a big rock?” On screen, the driverless minivan doesn’t hesitate. It drives confidently into the path of the unidentified airborne object that (after a dramatic pause) wafts high into the misty air before floating gently down to land at the side of the road. “It was just a plastic bag after all,” the voice intones. “When it comes to safe driving, our cars know the difference.” Driverless cars are developing at such a rapid rate that such a commercial could soon be reality. Google, Tesla, and Uber (and maybe even Apple) are successfully designing and testing robotic cars that use software and sensors to navigate traffic. Big car companies, not to be left behind, have responded to the threat to their long-entrenched incumbency by creating R&D divisions in Silicon Valley and purchasing robotics and software


Image courtesy of Bosch Mobility Solutions. For more on Bosch innovations, click here.



startups to speed up the development of their in-house driverless car technology. In the coming decades, the automotive industry will become a new battlefield, as traditional car companies and software companies will compete (or perhaps cooperate) to sell driverless cars. Keen vision and fast physical reflexes—previously the sole domain of biological life forms—will become standard automotive features, marketed alongside a vehicle’s miles per gallon. Consumers will benefit, as intelligent driverless vehicles profoundly improve the ways in which people and physical goods move around the world. Car accidents caused by human error will no longer claim more than a million lives worldwide each year. Cities will replace unsightly parking lots with parks, mixed-income housing, and walkable neighborhoods. Fleets of self-driven cabs will efficiently pick people up and drop them off, and the carbon-spewing traffic jams of rush hour will become a thing of the past. But that’s the second half of the story. Let’s take a few steps back and examine the current state of affairs. Once upon a time, cars and computers lived in separate universes. Beginning in the 1980s, software began to gradually creep into vehicles as automotive companies discovered the safety benefits of built-in automated “driver-assist” technologies such as anti-lock braking systems, parking assistance, and automatic lane-keeping warnings. Today the average human-driven car boasts an impressive amount of sophisticated software. In fact, an average new car might run 100 million lines of code—that’s twice as many lines of code as were in the Windows Vista operating system (50 million lines) and nearly 10 times as much code as a Boeing 787 airliner (roughly 15 million lines of code). For a fuller comparison on codebases, click here. The fact that modern mainstream vehicles are essentially computers on wheels raises an obvious question: what’s taking so long? If cars already help us steer, brake, and park, why aren’t some of the one billion or so vehicles that roam our planet’s roads fully driverless right now? Part of the answer involves


a formidable practical challenge, that of software integration. To create the computerized safety features that consumers have come to expect, automotive companies purchase driver-assist software modules from a chain of different suppliers. Since these software modules are not designed and coded from the ground up to work with one another, they exchange data at a rate that’s sufficiently fast to support the braking and steering decisions made by a human driver, but not fast enough to handle the additional processing that would be needed to provide full-blown artificial intelligence. Speaking of artificial intelligence, another reason driverless cars are not yet a widely used mode of transportation is because of a subtle but critical distinction that roboticists make between a machine that is automated and one that’s autonomous. While the typical new car might be highly automated, it is not yet autonomous, meaning fully self-guided. The driver-assist software systems that grace today’s cars do not have the ability to make their own driving decisions. Driverless cars are


Image courtesy of Volvo.



a sterling demonstration of the fact that steering a car through traffic, a seemingly simple activity that has been mastered by billions of teenagers and adults the world over, is actually an exquisitely complex demonstration of advanced robotics.

Teaching Machines to See

Since the 1950s, carmakers and roboticists have tried and failed to create cars that could liberate humans from the dangerous tedium of driving. In a research conundrum known as Moravec's paradox, roboticists have learned that it is more difficult to build software that can emulate basic human instincts than to build software that can perform advanced analytical work. Ironically, the art of creating artificial intelligence has illuminated the vast and uncharted genius of human instinct, particularly our near-miraculous mastery of perception, or what philosophers call "scene understanding." Sometimes human perception misfires, as anyone who has eagerly pulled out a wax apple from a bowl of false fruit can attest. Most of the time, however, our lightning-quick ability to correctly classify nearby objects and react to them appropriately guides us gracefully through most situations. Since the dawn of academic artificial intelligence research, computer scientists have attempted to build machines that demonstrate scene understanding. While efforts to create software with human-scale artificial perception have fallen short of the mark, computer scientists have succeeded admirably at creating advanced software programs that perform extraordinarily well at a highly specific task. Factory work has been automated for decades as the bolted-down mechanical arms of industrial robots, guided by highly responsive software-based controls, assess and calibrate hundreds of system variables without missing a beat. In 1997 IBM's Deep Blue demonstrated that a computer can outmaneuver the world's best human chess masters. Yet, building a mobile and fully autonomous robot such as a driverless car that can handle a tricky left turn during rush hour is an achievement


that has eluded artificial intelligence researchers for decades. Why can a computer play chess, but can’t tell the difference between a friendly wave from a passing pedestrian and a stern “STOP” hand signal from a traffic cop? Because computers aren’t very good at guiding machines through unstructured environments; in other words, robots can’t think on their feet. In the case of cars, two interrelated factors considerably complicate the task of programming an autonomous vehicle: one, the challenge of building software and a visual system that can match the level of performance of human reflexes, and two, the fact that a driverless car needs software intelligent enough to handle unexpected situations, or what roboticists call “corner cases.” For most of the 20th century, software programs have been confined to analytical work. Part of the problem has been mechanical, that computer hardware was too primitive to support the rapid calculations and vast amount of data inputs needed to simulate human perception. In addition, for most of their history, computers have been large, fragile, and (by today’s standards) laboriously slow. As a result, programmers had to write software code that was “parsimonious,” meaning it ran off of very little data and could function efficiently on a meager amount of available memory and computing power. It didn’t help that until the advent of desktop computing, access to precious high-powered hardware and software was carefully doled out to a small universe of academic and industrial researchers. The prevailing mode of artificial intelligence software that developed in the midst of these constraints was built on the notion of structured logic. If the field of artificial intelligence research were a tree, its mighty trunk would branch into two large forks, one fork being a school of thought that favors structured logic, and the other an alternative approach called “machine learning.” From the 1960s through the 1990s, the structured logic approach, called “symbolic AI,” ruled the roost in university computer science departments. The other fork of the tree, machine learning software, was relegated to the sidelines as an interest-


ing but not very elegant way to create artificial intelligence. Symbolic AI is essentially a recipe that instructs the computer through a precise, written set of rules. Using symbolic AI, researchers tried to emulate human intelligence by anticipating and writing a rule to address every single possible situation a program (or robot) might later encounter, essentially flattening the chaotic, three-dimensional world into elaborate sets of if/then statements. Rule-based AI had such a strong grip on the field of artificial intelligence research that in 1988, in a series of interviews on public television, noted comparative mythology scholar and anthropologist Joseph Campbell quipped, "Computers are like Old Testament gods; lots of rules and no mercy." (Campbell, The Power of Myth). For most of the 20th century, Campbell's description was a fitting one. Software based on structured logic works well for automating activities that have a finite number of "moves" or take place within a confined environment; for example, playing chess or overseeing a repetitive task on an assembly line. While rule-based AI remains a powerful analytical tool that's in widespread use today, it proved to be of limited value for automating environments brimming with corner cases, the unpredictable situations beyond the reach of pre-determined rules. Real-world environments are irrational, shaped by an infinite number of ever-shifting rules. Despite its limitations, as recently as 2007, pioneers of self-driving car research used rule-based code to build their vehicles' computer vision systems, resulting in software that was wooden and inflexible. To automate a driverless car that could roll with the punches on public streets and highways, some other sort of software was needed. Two interrelated forces broke the stalemate. One was Moore's Law, the simultaneous rapid growth and drop in price of computing power. The second disruptive force was the rapid maturation of "deep learning" software. If you recall the giant two-pronged tree of artificial intelligence,


deep learning software is a sub-branch of machine learning. Unlike symbolic AI, machine learning software does not use formal logic to give artificial life to software, but instead uses algorithms to model statistical relationships between selected phenomena. To create a machine learning application, the role of the human programmer is to define a body of data, choose the appropriate algorithm to parse that data, and then “train” the software to perform a particular task. Training involves using a process of trial and error in which the learning algorithm adjusts the weight assigned to the variables in a statistical model until that model performs at a satisfactory level. Sometimes called “bottom-up” AI, a machine learning program needs large amounts of data in order to be trained and refined. Because of its gluttonous appetite for data and inexpensive computing power, machine learning languished in the sidelines for most of the 20th century. As early as 1957, the principles that underlie modern deep learning were demonstrated by computer scientist Frank Rosenblatt, who built a machine he called “The Perceptron,” a giant contraption made of transistors and colored lights that learned to recognize simple shapes. Modern deep learning was dramatically launched in 2012 at an annual image-recognition competition. A team of researchers from the University of Toronto designed a novel machine learning program that demonstrated record-breaking levels of accuracy in recognizing selected objects in pictures pulled at random from the internet. Their network, a deep learning program called SuperVision, examined thousands of digital images and labelled them with a rate of 85% accuracy,


• Neural network pioneer Frank Rosenblatt, left.



approximately half the error rate of the competitors. Powerful deep learning networks such as SuperVision could flourish once digital images became abundant and processing power became cheap. Deep learning belongs to a sub-subbranch of machine learning called neural networks. A deep learning network is made up of hundreds of layers of grid-like arrays of artificial “neurons.” The process works as follows: A digital image is copied into the first layer of the deep learning neural network. Then the image is handed to the next internal layer of the network, which detects small patterns or “visual features,” marks those features, and hands the image off again to the following layer. At the end of its journey through dozens (or even hundreds) of layers that make up the network, the information in the original image has been transformed into visually unrecognizable—but statistically meaningful—patterns that correspond to the notion of a cat, a dog, a stop sign, or anything else the network has been trained to identify. The job of the human programmer is to provide ample examples of the object the network is learning to “see,” and the training algorithm takes care of the rest. The training algorithm improves the performance of the deep learning network by repeatedly reducing the weight of the individual artificial neurons that have been responsible for making wrong decisions and increasing the weight of neurons that made correct decisions. The fact that a deep learning network can transform an image of a cat into a simple yes/no answer to the question “is this a cat?” simply by using a series of tiny steps sounds miraculous. Yet the process works surprisingly well. At the same image-recognition competition where SuperVision crushed its competition, three years later another artificial neural network won the 2015 competition by identifying objects in digital images with a level of accuracy greater than that of an average human. Teaching a software program to recognize cats or shoes by showing it thousands and thousands of random digital images sounds of little practical value for automating the task of driving.
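To make the two ideas above concrete, the layer-by-layer hand-off and the penalize-and-reinforce weight adjustment, here is a minimal sketch in Python. It is not the SuperVision network or any production system; the layer sizes, random weights, toy image, and single last-layer training nudge are all invented for illustration.

```python
import numpy as np

# Toy "deep" network: an image (a grid of numbers) is handed from layer to
# layer, each layer transforming it a little, until a single yes/no score
# ("is this a cat?") comes out the far end. All sizes and data are invented.
rng = np.random.default_rng(0)
image = rng.random(32 * 32)                       # stand-in for a small digital image
layer_sizes = [image.size, 128, 64, 1]
weights = [rng.normal(0.0, 0.1, (m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x, weights):
    for i, w in enumerate(weights):
        x = x @ w
        if i < len(weights) - 1:                  # hidden layers pick out "features"
            x = np.maximum(0.0, x)                # keep only positive responses (ReLU)
    return 1.0 / (1.0 + np.exp(-x[0]))            # squash to a 0..1 score

label = 1.0                                       # pretend a human labeled this image "cat"
score = forward(image, weights)

# Training nudge: weights that pushed the score the wrong way are reduced,
# weights that pushed it the right way are reinforced (a crude correction
# applied to the last layer only, for brevity).
error = score - label
hidden = image
for w in weights[:-1]:
    hidden = np.maximum(0.0, hidden @ w)          # recompute the last hidden activations
weights[-1] -= 0.1 * error * hidden.reshape(-1, 1)

print(f"score before: {score:.3f}, after: {forward(image, weights):.3f}")
```

A real network pushes corrections like this through every layer, over millions of labeled images; the point here is only that "training" boils down to repeated small adjustments of numeric weights.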


Yet deep learning software has proven to be the long-awaited missing link in the evolution of artificial life. A deep learning network can be trained to spot objects that are commonly found by the side of the road, such as bicycles, pedestrians and construction sites. This is why Google’s autonomous vehicles continue to prowl the streets, gathering data to build a giant motherlode of driving experiences. Once a deep learning network classifies what’s near the vehicle, it hands off that insight to the car’s operating system. The operating system—a suite of different types of onboard software modules—learns to react appropriately to a wide variety of driving situations until the driverless vehicle attains human-level artificial perception and reflexes.
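That hand-off can be sketched as a tiny, hypothetical function: the perception network reports a label and a distance, and the driving software decides what to do about it. The labels, distances, and braking rule below are invented for illustration, not drawn from Google's or any carmaker's software.

```python
# Hypothetical hand-off from perception to planning: the network says what it
# "sees" and how far away it is; the driving software picks a response.
def plan_action(detection: dict, speed_mps: float) -> str:
    label, distance_m = detection["label"], detection["distance_m"]
    stopping_distance = speed_mps ** 2 / (2 * 6.0)   # rough braking at ~6 m/s^2
    if label in {"pedestrian", "cyclist", "stopped_car"} and distance_m < stopping_distance + 10:
        return "brake"
    if label == "construction_zone":
        return "slow_and_change_lane"
    return "continue"

# Pretend outputs from the deep learning network (invented values).
print(plan_action({"label": "cyclist", "distance_m": 18.0}, speed_mps=14.0))      # brake
print(plan_action({"label": "plastic_bag", "distance_m": 12.0}, speed_mps=14.0))  # continue
```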

Artificial Senses and Reflexes Meanwhile, while AI research was revitalized by abundant data and cheap computing power, a parallel revolution was taking place in the onboard hardware devices that serve as a vehicle’s artificial eyes, ears, hands, and feet. Driverless cars track their


• Google's autonomous vehicles continue to prowl the streets, gathering data to build a giant motherlode of driving experiences.



location with a GPS device that can identify a car’s whereabouts to within a few feet. The GPS is supplemented with another device called an Inertial Measurement Unit (IMU), a multipurpose device that contains acceleration and orientation sensors and keeps track of the car’s progress on a stored, onboard high-definition digital map. One of the IMU’s functions is to augment the GPS by keeping track of how far and fast the vehicle has travelled from its last known physical location. The IMU also serves as the equivalent of a human inner ear, so it senses if the car is tipping dangerously far in any direction and informs the car’s guiding software to correct the situation. Radar sensors, once so giant and delicate they had to be mounted on a standing tower, are now tiny enough to be installed on the sides of the car. Radar sensors send electromagnetic waves to sense the size and velocity of physical objects near the vehicle. Another key optical sensor is perhaps the most iconic, the cone-shaped LiDAR (laser radar) devices that graced the tops of Google’s driverless Priuses. A LiDAR device creates a three-dimensional digital model of the physical environment that surrounds a driverless vehicle by “spray painting” the vicinity with spinning laser beams of light. As these beams of light land on nearby physical objects and bounce back, the LiDAR device times their journey and calculates the object’s distance. The data generated by a LiDAR is fed to onboard software that puts together the information into a digital model called a “point cloud.” A point cloud depicts the shape of the physical world outside the car in real time. One of the major drawbacks of using laser beams to build a digital point cloud is that the spinning lasers do not recognize and record color. Another shortcoming of LiDAR is that despite modern high-speed processors, generating a point cloud is a relatively time-intensive process more suited to surveying static environments (such as geological formations) than for use in emergency driving situations. For these reasons, LiDAR is teamed with several digital cameras that are mounted on different parts of the driverless car.
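Two pieces of arithmetic in that description are simple enough to show in miniature: the dead reckoning an IMU supports between GPS fixes, and the time-of-flight calculation behind every LiDAR return. All numbers below are invented; this is a sketch of the math, not of any particular device.

```python
import math

# Dead reckoning: advance the last known GPS fix using the speed and heading
# that the IMU and wheel sensors report. All values are invented.
x, y = 100.0, 250.0              # last GPS fix, meters in a local map frame
speed = 12.0                     # m/s
heading = math.radians(30.0)     # direction of travel
dt = 0.1                         # seconds since that fix

x += speed * dt * math.cos(heading)
y += speed * dt * math.sin(heading)
print(f"estimated position: ({x:.2f}, {y:.2f}) m")

# LiDAR ranging: time a laser pulse's round trip, halve it, and multiply by
# the speed of light; each such return becomes one point in the point cloud.
C = 299_792_458.0                # speed of light, m/s
round_trip_s = 200e-9            # invented 200-nanosecond echo
distance_m = C * round_trip_s / 2
print(f"object distance: {distance_m:.1f} m")   # roughly 30 m
```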


Digital camera technology has benefitted tremendously from Moore's Law, as cameras have shrunk in size and cost while the resolution of digital images has increased. A digital camera gathers light particles called "photons" through its lens. It stores the photons on a silicon wafer in a grid of tiny photoreceptor cells. Each photoreceptor cell absorbs its appropriate share of photons. To store the light energy, each photoreceptor translates the photons into electrons, or electrical charges; the brighter the light, the larger the number of photons and, ultimately, the stronger the captured electrical charge. At this point, the visual data captured in a digital image can be transformed into a nomenclature a computer can understand: a pixel, or "picture element," represented by a number. The grids of numbers are fed directly into a deep learning network, then to the car's onboard operating system, which integrates data from all over the vehicle, analyzes it, and chooses an appropriate response.
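That last step, an image becoming a grid of numbers the software can digest, looks roughly like this; the 3×3 "sensor readout" is made up for illustration.

```python
import numpy as np

# Each photoreceptor's captured charge is read out as a number: brighter
# light, bigger number. The resulting grid of numbers IS the digital image.
raw_charges = np.array([[ 12,  80, 200],
                        [  5, 150, 255],
                        [  0,  60,  90]], dtype=np.uint8)   # invented 3x3 readout

pixels = raw_charges.astype(np.float32) / 255.0   # scale to 0..1 before feeding the network
print(pixels)   # this grid of numbers is what the deep learning network "sees"
```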


• LiDAR operation. Image courtesy of LeddarTech.



If visual sensors and navigational devices act as the equivalent of human senses, another innovative family of hardware devices called "actuators" serves as mechanical hands and feet. Before the digital era, actuators were mechanical contraptions that used hydraulic or mechanical controls to physically pull, push, or otherwise manipulate a particular machine part. Modern actuators are electronically linked with the car's software subsystem by an onboard local network called a "drive-by-wire" system. The drive-by-wire system uses a controller area network (CAN) bus protocol that zips data around at a rate of approximately one megabit per second so all the moving parts of the car can communicate with the software subsystems and react when necessary. A body of knowledge called controls engineering is applied to maintain the smooth functionality of the various hardware devices and mechanical systems on a driverless vehicle. One set of controls oversees the vehicle's route-planning and navigation controls by using algorithms to parse the data from the GPS and IMU devices, rank several possible outcomes, and select the optimal route. Another family of controls (sometimes known as "low-level controls") uses feedback loops to maintain the system's equilibrium and pull the system back to a pre-set, steady state. Early feedback controls used in factory machinery were mechanical affairs—cables or pulleys or valves—that regulated a machine's speed, moisture, or temperature. Modern feedback controls are digital, relying on software that reacts to inflowing sensor data and responds by applying control algorithms to maintain the system's optimal level of performance. Moore's Law will continue to improve the performance of sensors and onboard artificial intelligence software. A car's artificial eyes—its onboard digital cameras and other types of visual sensors—can already see farther than human eyes, even in the dark. Individual driverless cars will share data and experiences with each other, creating a giant pool of shared collective knowledge of roads, speed limits, and dangerous situations.
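As a sketch of the "low-level controls" idea described above, here is a bare-bones proportional feedback loop that nudges vehicle speed back toward a set point. Real drive-by-wire controllers are far more sophisticated; the gain, loop rate, and toy response model below are invented.

```python
# Bare-bones proportional feedback control: measure, compare to the set point,
# and command a correction proportional to the error, over and over.
set_speed = 25.0     # m/s, the desired steady state
speed = 20.0         # current measured speed (invented)
kp = 0.5             # controller gain (invented)
dt = 0.1             # loop period in seconds

for _ in range(50):                  # five seconds of control
    error = set_speed - speed        # how far we are from equilibrium
    throttle = kp * error            # control action sent to the actuator
    speed += throttle * dt           # crude stand-in for how the car responds

print(f"speed after 5 s of control: {speed:.2f} m/s")   # close to the set point
```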


As they gain experience, when the correct reaction means the difference between life and death, driverless cars will save lives by quickly and calmly ranking several possible outcomes and choosing the optimal one. In time, driverless cars will boast artificial perception that's more rapid and fluid than that of the most alert, sober, and skilled human driver.

The Road Ahead

As the hardware and software near human-level physical dexterity and perception, what barriers remain to the widespread adoption of driverless vehicles? Ironically, technological barriers will prove to be relatively straightforward compared to the potential obstacles that could be posed by people, organizations, and politics. To combat the pernicious creep of special interests, intelligent regulation will be needed. One formidable challenge will be the definition and enforcement of transparent and rigorous safety standards for driverless car software and hardware. Car and technology companies, consumers, and state and federal legislators will need to set aside their individual agendas and work together to implement a framework for safety testing similar to those used by the aviation industry. In the United States, the USDOT has taken some cautious steps to address driverless vehicles but so far has stayed away from specific guidance, preferring to leave that up to the individual states. In addition to safety, individual freedom and privacy will be at stake. While driverless cars will save millions of lives that would have been lost to car accidents, new dangers and costs will emerge. A driverless car, like a computer, could be hacked remotely and steered off the road, or its passengers hijacked and driven somewhere against their will. As driverless taxis are installed with facial recognition software, customer privacy could be compromised or, at best, sold out to corporations who will launch merciless and intrusive marketing campaigns. Other hardships will be economic, as millions of truck and taxi drivers all over the world will lose their jobs. Personal mobility could become a human rights issue.

Today’s battles over the ethics of doling out internet bandwidth to deep-pocketed corporations will pale in comparison to the battles that will be fought over control of driverless car software. Authoritarian governments may attempt to impose a kind of “physical censorship” on targeted individuals, limiting their travel to a short list of pre-approved destinations and preventing them from driving together with people whose names appear on government watchlists.

Another political risk is that, in order to buy themselves time to improve their in-house robotic capabilities, hard-lobbying car companies will attempt to convince legislators to enforce a gradual approach to automated driving in the name of “safety.” If this happens, the USDOT, the federal agency that oversees automobiles, could be convinced to delay the development and testing of full-on transportation robots in favor of vehicles that are only incrementally more automated. On the other hand, if car companies are able to rapidly master the art of robot building and can find an appealing business model in commercializing fully autonomous vehicles, the automotive industry could become a passionate advocate of driverless cars and instead lobby for their commercialization.

Although cars and computers have been around for nearly a century, the two have not yet fully embraced one another… until now. Driverless cars are the brilliant product of the union of automotive and computer technologies. As artificial intelligence technology finally matures, intelligent, driverless vehicles will save time and lives and create new opportunities for cities and businesses, but only if legal and regulatory frameworks can keep up. MI

About the Authors

Melba Kurman writes and speaks about disruptive technologies. Hod Lipson is a roboticist and professor of engineering at Columbia University. They are co-authors of Driverless: Intelligent Cars and the Road Ahead.


SPECIAL FOCUS Self-Driving Cars

A Look Back on Looking Ahead

Electronic Highway of the Future

“…you settle back to enjoy the ride as your car adjusts itself to the prescribed speed. You may prefer to read or carry on a conversation with your passengers—or even catch up on your office work. It makes no difference for the next several hundred miles as far as the driving is concerned.”

So begins the story entitled “Electronic Highway of the Future,” published in the January 1958 issue of Electronic Age, unveiling an early vision of a driverless future.

By Melba Kurman

Credit: Radio Corporation of America (RCA).


Long before there were driverless cars, there were dreams of automated highways. One of the first high-profile exhibitions of what was dubbed “hands-free, feet-free driving” took place at the 1939 World’s Fair in Queens, New York. In a dazzling demonstration of creative marketing, General Motors Corporation (now known simply as “GM”) designed and built the Futurama, an exquisitely detailed, small-scale depiction of what driving would be like in the year 1960. Fairgoers waited in long lines to ride through the Futurama’s miniature utopian landscape, where radio-controlled cars guided themselves on automated highways. The Futurama was long on vision and short on technical details.
To build working prototypes of radio-controlled cars, in the early 1950s GM teamed up with another leading-edge company of the day, Radio Corporation of America (later known as RCA). The modern driverless vehicle is essentially a transportation robot that carries most of its intelligence on board, relying on GPS signals, stored digital maps, data from onboard sensors, and intelligent software to find its way. Back then, however, computers of the day were too large to be mounted on a moving vehicle and too slow to process vast reams of visual and sensor data in real time. Thus began the era of the Electronic Highway.

The first Electronic Highway consisted of a miniature car that was guided by signals from a pattern of wires laid on the floor of an RCA laboratory in Princeton, New Jersey. This small-scale laboratory prototype soon blossomed into a real-life Electronic Highway, a full-scale, 400-foot-long stretch of public highway built outside of Lincoln, Nebraska. On the Electronic Highway in Nebraska, specially-equipped cars used a system of radio controls, buried electronic circuits, and electrical wires to drive in a straight line and keep a safe distance from one another.

While the Electronic Highway was a stellar example of bold and elegant engineering, it was not scalable. The necessary electronic components cost too much to be installed into the United States’ growing network of federal and state highways as RCA and GM had originally envisioned. After spending more than a decade trying to make their vision come to life, both RCA and GM moved on to commercially greener pastures. Today, all that remains of the Electronic Highway is the lingering misperception that autonomous vehicles require a pricey and elaborate high-tech infrastructure. In reality, the opposite is true.
Credit: General Motors.


As the science of computer vision continues to advance at breakneck speed, the most important component of physical highway infrastructure will not be buried electronic circuits or pricey traffic-control technology designed for human drivers, but low-tech white lane markers that help cue the autonomous vehicle’s visual system.

Following is the full text of the 1958 article.

Driving will one day be foolproof, and accidents unknown, when science finally installs the...

Electronic Highway of the Future

Passing the above sign as you enter the superhighway, you reach over to your dashboard and push the button marked “Electronic Drive.” Selecting your lane, you settle back to enjoy the ride as your car adjusts itself to the prescribed speed. You may prefer to read or carry on a conversation with your passengers—or even to catch up on your office work. It makes no difference for the next several hundred miles as far as the driving is concerned.

Fantastic? Not at all. The first long step toward this automatic highway of the future was successfully illustrated by RCA and the state of Nebraska on October 10, 1957, on a 400-foot strip of public highway on the outskirts of Lincoln.
Both unequipped vehicles and a test car with special receiving equipment were used to show the immediate uses and the ultimate possibilities of electronic highway control. Coupled to a series of RCA experimental detector circuits buried in the pavement were a series of lights along the edge of the road. In the test car were special RCA radio receivers and audible and visual warning devices to simulate automatic steering and brake controls.

In a series of tests, the installation at Lincoln proved its ability to:
• Provide automatic warning to a driver following too closely behind another vehicle,
• Indicate to a driver the presence of a parked vehicle or other obstacle in the highway ahead,
• Guide a car accurately along its traffic lane even under conditions of zero visibility for the driver,
• Cause remote operation of warning lights ahead at points of merging traffic, or along the roadside for any distance ahead of or behind a moving vehicle unequipped with special equipment.

The demonstration, observed by nearly 100 state and federal highway officials, representatives of automobile manufacturers, and the press, made two major points. First was the fact that the various elements of the system can be used immediately in conjunction with arrays of roadside and intersection lights to increase driving safety under present conditions without requiring special equipment on cars or trucks. Second was the clear indication that the system as a whole can be developed without major technical complications into a fully automatic highway traffic control system.

Driving in the Future

Operating the system of the future will be as simple as it seems fantastic. From beneath the pavement, electrical signals will radiate from buried wires to be picked up by the tiny transistorized receivers built into the car. On one frequency will come the
signals from the guidance cable, controlling the power steering mechanism to keep the car in its lane. Signals on another frequency will warn of obstructions in the highway half-a-mile or a mile ahead—perhaps a stalled vehicle, or a highway maintenance crew at work. Whatever the cause, signals, picked up by another receiver in the car, will operate automatic controls that reduce your speed by letting up the accelerator, apply the power brakes, or guide the car automatically into the next lane to pass the obstruction. Operating on a third frequency, the special highway receiver on the dashboard will pick up signals from a buried antenna and cut off the standard car radio to make an announcement of its own: “Exit Number Three for Pittsburgh area is five miles ahead. Connections at this exit with Routes 19 and 28. Please watch roadside signs for further directions.” Approaching the exit the radio will again cut in with an announcement supplementing the roadside signs. “Exit Number Three for Pittsburgh area is two miles ahead. Motorists for Exit Three please switch off electronic drive and move to exit lane at extreme right.” The motorist will simply push a button to switch the car to manual control. He will then move into the right lane to approach the exit and turn off the superhighway.

How it Came About

Behind the Lincoln demonstration, and the further developments that are likely to come from it, is a story of imagination and enterprise in two widely separated locations. One is RCA’s David Sarnoff Research Center in Princeton, NJ. The other is the State Capitol building at Lincoln, Nebraska.

The electronic highway control system is itself the conception and development of an RCA Laboratories team including Leslie E. Flory, George W. Gray, and Winthrop S. Pike, working under the direction of Dr. Vladimir K. Zworykin, honorary vice president of
RCA. It is based on a concept demonstrated by Dr. Zworykin and his associates at the Center in 1953. At that time, the principles were successfully applied in a small-scale system in which wires laid in a pattern on the laboratory floor were used to guide and control a miniature car. It was a sufficiently fascinating “toy” to inspire a feature article by writer John Lear in Collier’s magazine.

The article sparked the imagination of Leland M. Hancock, traffic engineer in the Nebraska Department of Roads, and of his director, L.N. Ress, state engineer. The decision was made to experiment with various aspects of the system in actual highway installations in the vicinity of Lincoln. To accomplish their purpose, the Nebraska officials turned to the RCA Laboratories group at Princeton for the novel electronic equipment that was needed.

The opportunity that presented itself was the construction of a new main intersection of US Route 77 and Nebraska Highway 2 on the outskirts of Lincoln. As the pavement was laid, the necessary wiring was buried, preparing the ground for the experimental work, the results of which were demonstrated on October 10.

How it Works

What the observers saw at the first public demonstration was a system comprising three basic elements:
• A sequence of detectors installed at intervals slightly greater than car
length along the road, capable of reacting to the passage of cars;
• A radio warning system for following vehicles, controlled by a signal from the detectors;
• A guidance system to keep each vehicle centered in its lane.

Cars with radio receivers run tests on highway equipped with buried detector circuits at Lincoln, Nebraska. RCA scientists are continuing developmental work on the system.

The detectors consist of rectangular loops of wire, six by 20 feet, buried in the pavement in the traffic lane, and coupled to an associated circuit at the edge of the road. The loop carries a voltage from a high-frequency power line. Whenever a vehicle passes over the loop, the result is a variation in current which is detected by the roadside circuit. The circuit then produces an output signal that controls an indicating device such as a warning light, and at the same time switches on the radio warning system for following vehicles.

The radio warning system is simply a transistor switch and an antenna that extends back beneath the pavement from each of the detector loops for any desired length. When the detector responds to a passing vehicle, the transistor switch is closed, causing the antenna to radiate a signal. This continues for a certain interval of time after the vehicle has passed, so that the antenna radiates, in effect, a “radio tail warning” behind cars as they move along the highway. The tail warning signals can be used to actuate warning lights along the side of the road, or they may be picked up by following cars with appropriately tuned receivers that can be used ultimately to activate automatic control of brakes.

Warning devices include left-right indicator (below mirror) and buzzer and warning lights for obstacles such as other vehicles or maintenance crews.

The guidance system is a cable laid down the center of the traffic lane be-
neath the pavement, carrying a signal current of yet another radio frequency. To use the continuous signal, a car is equipped with small receiving antennas mounted at either end of the front bumper or on both front fenders. These antennas are connected to a different receiver which responds when it receives two signals of differing intensities. As long as the car is centered over the cable, nothing happens. As soon as the vehicle moves to either side, one signal increases in intensity while the other decreases, causing a response in the receiver. In the demonstration, the effect was shown on a meter in the test car; in the ultimate system, it would be used for automatic control of the steering mechanism.
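The differential comparison that the 1958 receiver performed with analog circuitry can be restated in a few lines of modern code. The sketch below is purely illustrative and is not part of the original RCA system: it assumes two hypothetical signal-strength readings, one per bumper antenna, and turns their imbalance into a steering correction.

```python
def steering_correction(left_signal: float, right_signal: float,
                        gain: float = 1.0, deadband: float = 0.02) -> float:
    """Turn two antenna signal intensities into a steering correction.

    While the car is centered over the guidance cable the two intensities
    match and nothing happens; as the car drifts, one signal grows while the
    other shrinks, and the imbalance steers the car back toward the cable.
    Positive output means "steer right" in this toy convention.
    """
    total = left_signal + right_signal
    if total == 0.0:
        return 0.0                    # no cable signal detected at all
    imbalance = (right_signal - left_signal) / total
    if abs(imbalance) < deadband:     # effectively centered: no response
        return 0.0
    return gain * imbalance           # steer toward the stronger signal


# Example: the car has drifted left, so the right antenna reads stronger.
print(steering_correction(left_signal=0.4, right_signal=0.6))  # 0.2 -> steer right
```

In the ultimate system described in the article, such an output would drive the power-steering mechanism; in the 1957 demonstration it simply moved a meter needle.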

A “Compatible” System

Perhaps the most telling point of the demonstration in the eyes of highway experts was the extent to which the electronic system is compatible with present highway conditions, having many applications in the near future without requiring any equipment in cars. The point was demonstrated simply. As cars entered the test strip, a warning light automatically flashed on over a merging traffic sign 400 feet ahead, alerting drivers approaching on an adjacent road. Then, as each car moved over the buried loops, a light beside each loop was turned on automatically, tracing the passage of the car through the test area.

Dr. Zworykin pointed out that the use of lights along the roadside could provide a visual substitute for the “radio tail warning,” serving vehicles without any receiving equipment. In this way, a vehicle moving along in the fog might light a series of lights several hundred feet behind it, warning approaching cars and preventing rear-end collisions in fog banks—a type of accident that has involved as many as 30 cars at one time on the New Jersey Turnpike.

Introducing the System

First stage: No auxiliary equipment in vehicles, but systems of warning lights activated by the buried detector units. This could be done immediately, perhaps beginning with isolated portions of the highway system.

Second stage: Some cars equipped with signal detectors which would give the driver a dashboard indication of the distance of clear roadway ahead, even under conditions of zero visibility.

Final stages: Automatic vehicle control would also be introduced on a step-by-step basis, beginning with installed equipment that would make use of signals from the guidance cable and buried antenna systems to control steering and braking.

***

Ultimately, automatic control devices would sense suitable opportunities for passing and would change routes in response to a program pre-set on an electronic computer in the vehicle. The driver would have to take over only as he left the high-speed road system. MI

About the Authors

Thanks to Melba Kurman and Hod Lipson for their introduction to this feature. Melba Kurman writes and speaks about disruptive technologies. Hod Lipson is a roboticist and professor of engineering at Columbia University. They are co-authors of Driverless: Intelligent Cars and the Road Ahead. Also see their feature in this magazine, Intelligent Cars.



SPECIAL FOCUS Self-Driving Cars

Toyota’s Concept-i

Philosophy, passion, and purpose converge with a host of new and emerging technologies to offer a glimpse of tomorrow’s car today

By John Schroeter

Cars aren’t what they used to be—and they certainly aren’t what they’re going to be. Which means we find ourselves at a hinge in automotive history. Just how far and fast that hinge swings is a matter of debate, but what’s not subject to debate is the unprecedented coming together of technologies that are enabling—if not driving—the massive disruptions occurring in the auto industry right now.

And while there’s no question that these disruptions will alter the way we drive, do they also signal a wholesale departure from a century of car culture? Or could it possibly mean a return to it? You know, the days of driving for the pure pleasure of it.

It seems the Sunday drive I remember as a kid with the family in the Buick wagon is now a relic of a bygone era. Societal changes have had a lot to do with that (who has time for a Sunday drive anymore, to say nothing of paying the fuel costs?), but maybe driving around in cars just isn’t as fun as it used to be. Has the automobile come to a point of simply performing a utilitarian function? The vast majority of cars on the road today would suggest so.

So what, then, do we really have to look forward to? Well, let’s take a look ahead and find out. But first, perhaps a look back at looking ahead will provide some helpful perspectives. Our case in point is the February 1951 issue of Popular Science, which featured a short article about a concept vehicle—a “Preview
Car”—dubbed the LeSabre. A product of Harley Earl’s fabled studio, it took its styling cues from jet aircraft (we’d just entered the Jet Age, after all), with a tail fin motif borrowed from Earl’s Cadillac Sixty Special of 1948, a feature inspired by the twin vertical stabilizers of the P-38. “You can’t buy this beauty,” opened the copy, although Earl appropriated the car as his personal daily driver, “but you can expect to see many of its features turning up on cars in the years to come. Now being completed by General Motors, the hand-built, super-styled car will serve as a rolling laboratory for engineers to test out new ideas in design and equipment. Its special high-compression (10 to 1) V-8 engine plus supercharger is expected to deliver 300 hp. Built-in hydraulic wheel jacks, electric seat warmers, and a rain-sensitive switch that will automatically put up the top are just a few of today’s novelties that may become commonplace tomorrow. After all, an earlier 1938 GM version was the first U.S. car to boast an electric top, curved-glass windows, and push-button door latches—now taken for granted.”

It was indeed an exciting prospect in 1951. And it did indeed take some time for many of its innovative features to find their ways into production cars. Not even the name LeSabre would appear on a production vehicle until 1959! Not much has changed in the 66 years since the original LeSabre— or the 79 years since Earl’s 1938 Buick Y-Job, briefly referenced in the Popular Science story and widely regarded as the seminal concept car. But then again, that’s why they’re called concept cars; they’re about showcasing what could be. Which brings us to Toyota’s radical Concept-i—a vision of the automotive future 13 years out. And what a vision it is. Aside from the astounding array of new technology that the vehicle exhibits, at its heart is a fundamentally redefined relationship between car and driver, man and machine. In short, it’s a vision of the automobile as partner. There’s a certain inevitability about the technological future, but there’s also a certain ineffability about it that makes its articulation somewhat challenging. That’s why we have science fiction writers. But the Concept-i is no fiction. A tangible expression of the future in the here and now, it reels in far-flung potential and brings it up close and personal. It’s a car where command and control give way to conjuring and collaborating: functions appear seemingly out of the ether and sublimate when no longer needed. It’s a car that begs to be driven, yet simultaneously appears to need no help from a biological unit of any kind. It’s a car that even when standing still is a study in motion. Such contradictions are to be expected at the edges of disruptions. Intrigued, I sat down with the car’s creators, Ian Cartabiano, chief designer at Toyota’s California-based Calty studio (Calty being a contraction of California and Toyota), and Project Design Manager William

Chergosky, who was responsible for much of the UX and interior design, to learn more. Both men are longtime veterans of the car design craft and have worked together on myriad projects spanning the exotic to the more domesticated likes of the Camry. Perhaps that’s one reason why when sharing the experience of designing the Concept-i, they frequently finish each other’s sentences. Indeed, in the process of conceptualizing the Concept-i, it is obvious that they developed a unity of both mind and purpose—an attribute that is equally and obviously manifested in the result.

Gentlemen, first of all, congratulations on a tremendous achievement with the Concept-i. Are you happy with the outcome and the reception it’s received? IAN: We’re very happy with the outcome. We really pushed hard for two years to debut this car at CES 2017. We wanted to be out there with a message about the future of the automobile that’s different from everyone else’s. And frankly, that could have gone either way, but our message seems to have resonated. Still, I’m a little surprised

at the reception, how positive it has been. People came up to us at CES to tell us how the car makes them feel when they see and experience it, which is what we had in mind when we created it. It’s a really great connection. And the fact that the car got seven “Best of CES” awards, well, I’ll be honest, it feels good. Broadly speaking, how would you characterize this vehicle? IAN: Concept-i is Calty’s vision of what a future Toyota and a future fun-to-drive vehicle can look like in the year 2030. It was a vision that connects driver with car through a very advanced and intelligent user interface and AI, but in a friendly, warm, engaging, and intriguing way. That was the big overarching statement of the car. We’re envisioning a future car that has an autonomous feature, but where fun-to-drive is still key—the driver is always in control. It’s a positive, forward-looking vision of the future. Before we get too far into the future, let’s back up a bit. Take us to the genesis of the project. IAN: Bill and I actually spent months debating philosophy with the design team. There were no sketches made until we got the story down. We were still fleshing it out as we were developing the car, but

basically we wanted to get the philosophy, the direction, and the vision clear before anybody started creating. Good strategy! IAN: Yeah! It was a very different design approach for us. And we think it served us well. At CES, you said when you saw the car that everything rings true. That was our goal. Everything connects from start to finish. It’s a really unified message. When you spend the time getting the concept correct, everything else follows. BILL: We did define a very clear future. I don’t want to say it was easy, but ultimately our roadmap became simple to measure—does it achieve our goals or doesn’t it? What were the guiding principles and form language that steered the design?

Early interior concepts.


IAN: Two things came up at the beginning of the project. First, we came up with the keyword “kinetic warmth.” That was important because that keyword defined the entire project—everything you see in the car from the AI to the interior design to the exterior design to the color of the materials and to the finish of the graphics. Kinetic warmth stands for something that feels energetic and alive and moving—that’s the kinetic part. The warmth is a unique element; it brings something to the car that is humanistic, friendly, and a little bit magical in a way. And then we sought to combine those attributes in creating something that’s different from what everybody else has been doing for CES and other auto shows. BILL: When we looked at where the industry was going, when they show “the future” we always see an aluminum box with maybe four wheels, like a pod. You get in and you’re shut off from society; you’re

surrounded by screens, you get shuttled from point A to point B, and that’s it. We did not want to make a laptop or a cellphone on wheels. We wanted to remove the tyranny of the black screen on the dashboard or anywhere else in the vehicle. We’re already surrounded by smartphones and laptops and screens all day, even when working at

home. So we thought, let’s create a space where we’re not flooded by data. IAN: Exactly. And consistent with that, in addition to the concept of kinetic warmth, we came up with a different way to design that we called “design from the inside out.” Usually when we design a car we have a packaging; we do the exterior shape and we do the interior design that matches the exterior shape, and then if there’s time, the user interface and graphics. But in the case of this car, once we developed the kinetic warmth keyword, we said we’re going to do this car differently; we’re going to design it from the inside out. When you see the car in real life, as compared with photos, you can really get a sense of that inside-out movement, that constant connective tissue of surface that links the interior to the exterior to the wheels, back to the cabin and constantly looping around the car. Those are big, broad objectives. How did you work this out at the drawing board? IAN: We actually started with the AI-driven user interface, which we call Yui. We came up with that first. We thought long and hard about what type of communication agent should represent our advanced AI. In thinking about something that is warm and lively and engaging and friendly but also simple and universal, we went through a lot of variations and a lot of crazy shapes, but in the end we went back to the humanistic art form of animation. We started with a simple 2D graphic, but we animated it so it looked like it was alive. Think of the UI as comprising an outer ring and an inner

circle. The outer ring represents the body, and the inner circle represents the soul. But it also represents the car and the driver inside, and the way the two interact. The UI communicates through voice but also through imagery, and in doing so, we made an inanimate object feel alive. That was the starting point. Ultimately, though, it’s about what kind of future are we going to show; what is Toyota going to be in the future? Driving is important, fun-to-drive is important. 2030 sounds really far away, but it’s only 13 years away. BILL: We wanted to plant a flag to signal a very different direction with this vehicle. This is not just a tool that gives you information and moves you from A to B. It’s an immersive experience meant to not only improve the driving experience, but to improve your life. Most of the talk around AI in the context of cars is about autonomous operation. But you’re inverting that model, making the driver experience inside the car the focus of your attention. IAN: Yeah, I’m glad you got that. But in the end, we’re a car company. There is a lot of talk about “cars are going to disappear,” or everything is going to be about sharing. Maybe that can happen in San Francisco and Los Angeles and New York. But there are still large parts of the country and the world where driving is still essential; it’s still the way to get from place to place. But certainly there are times

when you don’t want to drive. Bill and I have the world’s worst commute from opposite directions getting to work. I don’t love driving on the 405 Freeway. I’d rather not. But there are times when I love it and I want to have that experience, whether it’s driving through the desert or driving up the Pacific Coast Highway to Monterey. That’s still a fun activity; it’s an exploring activity. It’s even therapeutic. We don’t want to lose that.

That’s good news. Tell me more about the human-machine interface you set out to create.

BILL: It’s about demonstrating metaphorically the relationship between the human and the car. As you enter the vehicle, the center console “grows” out of the floor surface. We call that the hand—it’s the initial interaction point you have with Yui. You put your hand over it and the surface grows and reaches up and you touch it and it touches you as well, via a novel haptics system. At that moment, you sense its “life force.” The light it emits shoots up to the instrument panel, wakes it up, the head-up display activates, and the car comes to life. It’s not unlike meeting a person and shaking hands; there’s an interaction and moment of recognition that occurs. That’s the idea we’re trying to convey with Yui.

IAN: You’ll also notice the white ceramic appearance to everything inside the car. There’s also a heavy use of a pattern that is meant to convey this sense of technology with movement that spills out toward the edges. And by using gold in the pattern with the white ceramic, you get this nice interplay between futuristic and ancient materials. We didn’t want the typical chrome or aluminum or anything cold and sterile. The combination

of the gold and the white ceramic conveys that sense of human craft. It looks like a human hand made the car. It looks like a piece of ceramic sculpture. I can see that this same philosophy carried forward to the exterior. IAN: Absolutely. As an example, you saw the car winking, right? We used a simple headlamp graphic to convey personality and create a bond with the owner, the driver, the passenger. But also you saw that there were no distinct headlamps on the face of the car. The lighting on the front, the messaging on the side, the messaging on the rear, just like the interior, magically comes from nowhere. It’s all through really cool technology that we developed to have the lighting come from behind the paint. And like the interior, when the car comes to life, the lamps open up, and as you approach it, it greets you with a wink. When you start driving faster, the shape of the lamps angle in; they get a little more set and aggressive. BILL: This is not at all the idea of a linear progression from where we see technology today. As I mentioned earlier, technology today means we’re all inundated with devices and information. Our thesis is that in the future, technology becomes smarter, and we’re not looking to bury you in information, like some sort of Mission Control. So out of that white ceramic you see the meters and messaging

appearing and disappearing in a very cool and engaging way.

IAN: What makes technology approachable is not only making it humanistic and warm but also bringing back the wow. That’s when people get excited about tech and welcome it into their world. Yes, it helps them, but it still has that magical element to it. And that idea fed into the design language.

Speaking of welcoming new technology, with the advent of the autonomous vehicle, there’s a lot of debate about the future fate of the steering wheel. How did this consideration get worked out in your design philosophy?

IAN: Against the grain of what a lot of futurists think, the steering wheel in Concept-i doesn’t go away. That’s a unique point in our vehicle. This vehicle represents a vision of the future where driving is still a passion. And there’s a functional element to it as you transfer from mode to mode. Whether it’s autonomous Level 2, 3, 4, or 5, you need to be able to grab the wheel. The time it takes for the wheel to deploy in these open-close systems might be too slow and you may actually end up getting in an accident. So there are these functional and emotional reasons for the wheel to always be there.

The wheel in the Concept-i also plays host to some interesting UI technologies.

BILL: Yes, and contextually, the information that is displayed on the wheel is only there when you need it, as are the audio and climate control functions. This was actually inspired by a Japanese concept called “omotenashi”—service on demand; it just appears seamlessly. The technology comes up and goes away when you don’t need it. That magical element was a great way to break away from the linear path of today to this expressive, kinetic future that we envision.

IAN: It’s about service that is so perfect that it is almost invisible. It’s basically hospitality at the

right time, when you need it, in the right amount, served in a beautiful way. And when you’re done, it disappears. That leads me to ask about another invisible feature: the haptics you’ve built into the user experience. BILL: It’s a safety technology we pioneered. We’ve built haptic feedback into the shoulder of the seat, and it will actually physically tap you and give you an alert when, for example, something is going on in your blind spot or behind the car when you’re backing up. IAN: And the type of tapping, its resonance, speed, and pitch, depends on the severity or the type of alert it’s generating. The tapping is actually an ultrasound pulse whose frequency we can tune to give a unique, distinctive tapping feel appropriate to the situation. If it’s tapping hard and fast on your shoulder, you know there’s some immediacy to what it’s requiring of you. If it’s just a gentle, soft tap, it might be something very minor. The car can also assist you as you’re dozing off, through another technology we’ve built into the car. If the car sees that you’re starting to doze off, it will tap you and ask you questions and keep you awake and engaged. So you’ve also integrated biometric sensors with the haptic tapping, enabled, I presume, by deep learning technology that monitors the driver’s physical state? IAN: Exactly. The car has five cameras in the interior. They track your eye movement, your blinking rate, your respiration rate. The seat can take your temperature, your pulse rate. There are also audio sensors that can tell if you’re talking aggressively or with anger or frustration. If your behavior behind the wheel is erratic or otherwise of concern, Yui will ask if this might be a good time for it to take over. And it can be helpful in other ways. For example, it knows your schedule and

knows that you’ve had a long day and that it’s been nine hours since you’ve eaten lunch. Through your social media feed or searches or conversations with Yui—or even conversations with friends in the car—it remembers that you want to try this new restaurant on the way home. So there are many ways the car can become a partner. BILL: Yui can actually engage in a conversation with you. For example, if you’re going on a road trip, a 10-hour drive, it’ll play the license plate game with you. It’s all about keeping the driver focused and alert. Yui can do that. IAN: Going back to the autonomous capability, a key point is that Yui will ask you if it can take over. We still want the driver to be in control. If all else fails, the car will take over. It will not let you crash. Driving is not fun if you don’t feel safe. All of our autonomous technologies, and the things the people at TRI [Toyota Research Institute] are working on—a zero-accident future—are first and foremost about safety. We think that’s another way where this concept of car as partner can improve your life—by actually saving your life. Let’s move to the exterior of the vehicle. The inside-out design motif manifests itself first in the glass. BILL: Yes. The glass is really just a plane to keep the outside from getting in, but the surfacing is all a singular, unified shape, like the human body where your arm connects to the torso, your torso to your waist. It’s not delineated into separate parts. So while the glass just serves a function to keep things out, sculpturally, it is one singular element. Which brings us to the aerodynamic principles that steered the design. It’s gorgeous, but to what extent does form follow function?

How did you manage or balance the tradeoffs between aesthetics and aerodynamic performance? IAN: The aerodynamic performance is excellent because it is a very simple shape. I don’t want to compare it to the Prius but there is an ideal profile for aerodynamic performance, and this car follows that profile, which is not only for a low drag coefficient, but also for high-speed stability. BILL: Another important point on the topic of form following function is that this car seats four adults comfortably. A lot of times show cars are so cheated; not many people can fit in them, or if you can, you have to really cram yourself into them. We packaged this car around four adults. So when you get in the car and the doors close, it’s a wonderful space to be in. I actually found it to be soothing and calming to be in the car during CES. It was a nice serene environment to try to escape the chaos of CES! I know just what you mean! Your design objectives were very ambitious and it is evident that you’ve achieved them. But it’s interesting that in all this conversation, we’ve not spoken about the drivetrain, which seems almost a secondary consideration…

IAN: Right now all we’re saying is it is a zero emissions vehicle. Fair enough! So for the moment what’s under the hood is staying under the hood. So which of the Concept-i features will we see start making their way into production vehicles first? IAN: Well, this vehicle represents a vision. We’re not going to put this car into production. But a lot of the things that you saw and experienced are being researched and developed right now, so you may see some of those things in the near future. And we know that the future is coming! MI

SPECIAL FOCUS Self-Driving Cars

Is the ultimate driving machine set to become the ultimate lounging machine? With visions of an autonomous driving experience—complete with built-in bookcase, 4K video monitor, and on-demand services courtesy of BMW’s Connected Cloud architecture—BMW’s i Inside Future exhibit at CES 2017 suggests this isn’t your father’s BMW.

BMW’s i Inside

A Glimpse of the Future of the Automotive Interior

The exhibit was also the occasion for debuting BMW’s HoloActive Touch, an innovative system that shows us the future of interactivity. It integrates three key components: a micro-mirror array that projects a holographic image into space from an LCD screen below, an ultrasonic haptic array to provide tactile feedback to your interactions with the virtual holographic controls, and an Intel RealSense gesture camera mounted in the dashboard to track your in-air gestures within the holographic/haptic zone, hence the name HoloActive Touch. These three elements work in concert to create an interactive but contactless “surface” that enables driver control over a plethora of dynamic functions spanning climate control to infotainment to navigation—even your Amazon shopping experience.

Conceptual drawing of what would become the “sculptural” expression of BMW’s i Inside exhibit.

Besides the convenience it will offer drivers, the system is designed specifically to reduce distractions in the human-machine interface. The haptic technology, developed in conjunction with the University of Tokyo, is a key enabling element in this new picture of interactivity; combining it with the holographic element provides a whole new complement—and interactive experience—to the head-up display. Read more about head-up display in this issue.

An enticing automotive sculpture plays host to a panoply of new technologies, including a demonstration of BMW’s HoloActive Touch haptics system.

While the bookcase might not materialize as a production feature, look for HoloActive Touch to make its way into production vehicles in the next couple of years. Learn more about haptics in this issue. MI

SPECIAL FOCUS Self-Driving Cars

Comparing Computing Architectures for ADAS and Autonomous Vehicles

By Markus Levy, Drue Freeman, and Rafal Malewski

Image courtesy of NXP Semiconductors.

Together with connected car technology and new energy vehicles, advanced driver-assistance systems (ADAS) and autonomous vehicle systems have moved to the forefront of innovation in the automobile industry. Research, development, and production activities are significantly increasing due to demand for greater traffic safety, enhanced passenger and driver comfort, and ultimately improved driving efficiency. Over 90% of the innovation in these areas is directly related to electronics, semiconductors, and the increasingly sophisticated software solutions enabled by the electronics and processing capability. With so many options becoming available for
these computing architectures, it’s important that the industry establish a standard that will allow system designers to select the appropriate hardware and software components.

Ever since the first successful autonomous vehicle experiments in the late 1980s in both the US [1] and Germany [2], engineers have been focusing their efforts on improving the perception and cognition of self-driving vehicles. Early work at Carnegie Mellon focused primarily on vision and computing platforms, making use of TV cameras, early versions of LiDAR, sonar, and several state-of-the-art onboard computers [3]. By 1989, these researchers were experimenting with rudimentary neural networks and artificial intelligence for directional control in response to the sensor inputs [4].

Today, nearly 30 years since these pioneering efforts, the basic principles of autonomous driving research remain largely the same: perception, cognition, and control. These core activities are often described as “sensing,” “thinking,” and “acting.” Sensing still depends heavily on vision and LiDAR, though radar has become an important additional component. There is now some debate in the industry over whether it will be possible to replace LiDAR entirely with a lower-cost multi-camera, multi-radar system or, alternatively, whether the expected cost reductions in LiDAR that will come with solid-state technology, along with the improved aesthetics of not having radar sensors embedded in the front of the car, will make camera and radar systems less important. Vehicle-to-Vehicle and Vehicle-to-Infrastructure communication (together, V2X) can provide additional sensory input to the self-driving
vehicle. The thinking and acting functions are also continuing to develop, requiring ever more powerful state-of-the-art processing capability (typically massively parallel processors) evolving toward machine learning and artificial intelligence.

In order to better understand the requirements for ADAS and autonomous vehicle processing systems, a distinction must first be made between the different levels of autonomous driving [5]. While the distinction between the levels is sometimes fuzzy, the levels themselves have been standardized by SAE International (the Society of Automotive Engineers). Each subsequent level requires exponentially increasing compute performance, an increasing number of sensors, and an increasing level of legal, regulatory, and even philosophical approvals.

Level-0 | These driver-assistance systems have no vehicle control but typically issue warnings to the driver. Examples of Level-0 systems include blind-spot detection or simple Lane Keeping Assistance (LKA) systems that don’t take control of the vehicle. These systems require reasonably sophisticated sensor technology and enough processing capability to filter out some false positives (alerting the driver to a hazard that is not real). However, since the ultimate vehicle control remains with the driver, the processing demands are not as great as with the higher levels of autonomy.

Level-1 | The driver is essentially in control of the vehicle at all times, but the vehicle is able to temporarily take control from the driver in order to prevent an accident
or to provide some form of convenience. A Level-1 automated system may include features such as Adaptive Cruise Control (ACC), Automatic Electronic Braking (AEB), parking assistance with automated steering, and LKA systems that will actively keep the vehicle in the lane unless the driver is purposely changing lanes. Level-1 is the most popular form of what can reasonably be considered “self-driving technology” in production today. Since Level-1 does not control the vehicle under normal circumstances, it requires the least amount of processing power of the autonomous systems.

With Level-1 systems and above, the avoidance of false positives becomes an important consideration in the overall system design. For example, the sudden erroneous application of the brakes by a vehicle on the freeway when the car does not need to slow down or stop could actually introduce a far greater safety hazard than the one it was meant to avoid. One of the techniques being deployed to help avoid false sensor readings is the use of sensor fusion—fusing inputs from various disparate sensors like camera and radar to stitch together a more accurate picture of the vehicle’s environment.
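As a toy illustration of the sensor-fusion idea just described, the sketch below combines a camera detection and a radar detection of the same object and raises a braking alert only when the fused confidence clears a threshold. It is a deliberately simplified, hypothetical example; production systems use far more elaborate probabilistic filters (Kalman filters, occupancy grids, and the like), and the class names, weights, and thresholds here are illustrative only.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """One sensor's report of an obstacle ahead (hypothetical format)."""
    distance_m: float    # estimated range to the object
    confidence: float    # 0.0 (probably noise) .. 1.0 (certain)

def fuse(camera: Detection, radar: Detection,
         camera_weight: float = 0.4, radar_weight: float = 0.6) -> Detection:
    """Blend two independent estimates into a single, more reliable one.

    Radar is weighted more heavily for range; the fused confidence is high
    only when both sensors agree that something is really there."""
    distance = camera_weight * camera.distance_m + radar_weight * radar.distance_m
    confidence = camera.confidence * radar.confidence
    return Detection(distance, confidence)

def should_brake(fused: Detection, min_confidence: float = 0.5,
                 braking_distance_m: float = 30.0) -> bool:
    """Suppress false positives: act only on confident, nearby detections."""
    return fused.confidence >= min_confidence and fused.distance_m <= braking_distance_m

# Example: the camera is unsure (glare), but the radar sees the object clearly.
fused = fuse(Detection(28.0, 0.3), Detection(31.0, 0.9))
print(should_brake(fused))   # False: one weak sensor alone does not trigger the brakes
```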

Level-2 | The driver is obliged to remain vigilant in order to detect objects and events and respond if the automated system fails to do so. However, under a reasonably well-defined set of circumstances, the autonomous driving systems execute all of the accelerating, braking, and steering activities. In the event that a driver feels the situation is becoming unsafe, they are expected to take control of the vehicle and the automated system will deactivate immediately. Tesla’s Autopilot is a well-known example of Level-2 autonomy, as was the ill-fated Comma One by Comma.ai [6].

Level-3 | Within known, limited environments, such as freeways or closed campuses, drivers can safely turn their attention away from the driving task and start
reasonably doing other activities. Nonetheless, they must still be prepared to take control when needed. In many respects Level-3 is believed to be one of the more difficult levels to manage, at least from the perspective of human psychology. When a Level-3 system detects that it will need to revert control of the vehicle back to the driver, it must first be able to capture the driver’s attention and then effect a smooth handover before relinquishing control. This is a remarkably difficult thing to do and could take as long as 10 seconds before a driver who was engrossed in some non-driving related task is fully ready to take over. Many, though not all, automakers have announced plans to skip Level-3 entirely as they advance their autonomous driving capabilities.

Level-4 | The automated system can control the vehicle in all but a few conditions or environments, such as severe weather or locations that have not been sufficiently mapped. The driver must enable the automated system only when it is safe to do so. When enabled, driver attention is not required, making Level-4 the first level that can genuinely be called fully autonomous in the sense that a driver or passenger would have a reasonable expectation of being able to travel from one destination to another on a variety of public and private roads without ever having to assume control of the vehicle. This is the level of autonomy that most automakers, and companies like Waymo, are working toward as a first step when they are making claims of having fully autonomous cars on the road in the next three to five years. The sensing, thinking, and control functions must all be extremely powerful and sophisticated in a fully functional Level-4 vehicle, though to
some extent, the ability to “fence off” particularly difficult roads or prohibit operation in severe weather allows for some limitations in the system’s sensing and learning capability. At least some of the Level-4 vehicles in development will likely be produced without any driver control capabilities at all.

Level-5 | Other than setting the destination and starting the system, no human intervention is required; the automated system can drive to any location where it is legal to drive and make its own decisions. True Level-5 autonomous driving is still a number of years out, as these systems will require extremely robust capabilities for every activity: sensing, thinking, learning, and acting. The perception functions must avoid false positives and false negatives in all weather conditions and on any roads, whether those roads have been properly mapped out in advance or not. Most likely, Level-5 vehicles will not have any driver controls.

While a future in which self-driving vehicles are ubiquitous may still be many years away, there are already hundreds of companies deploying thousands of engineers to develop ADAS and autonomous vehicle systems across all levels of autonomy. Existing production ADAS systems and autonomous systems already rely on sophisticated sensing in order to accurately perceive the vehicle’s environment, and on powerful real-time processing. In addition, because these systems are handling so much data, automakers have started transitioning from traditional In-Vehicle-Networking architectures like CAN to higher bandwidth protocols like Ethernet for transporting the signals and data around the vehicle. This situation becomes magnified as sensor fusion becomes the norm and sensor input data must be routed to a central processing unit in order to comprehensively analyze the vehicle’s position and surroundings. Of course, this will place even more demand on the

system’s processing capabilities and makes the decision on which processing platform to use all the more important.
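To see why classic CAN runs out of headroom for sensor data, consider the back-of-the-envelope comparison below. The camera resolution, frame rate, and bit depth are illustrative assumptions, not figures from any particular production system.

```python
# Rough, illustrative comparison of sensor data rates versus network capacity.
# Camera parameters are assumptions chosen only to show the order of magnitude.

CAN_BUS_BPS  = 1_000_000        # classic CAN, roughly 1 Mbit/s
ETHERNET_BPS = 1_000_000_000    # an automotive Gigabit Ethernet link

width, height  = 1280, 800      # pixels (assumed camera)
frames_per_sec = 30
bits_per_pixel = 16             # raw, uncompressed

camera_bps = width * height * frames_per_sec * bits_per_pixel

print(f"One raw camera stream: {camera_bps / 1e6:,.0f} Mbit/s")
print(f"CAN bus capacity:      {CAN_BUS_BPS / 1e6:,.0f} Mbit/s")
print(f"Gigabit Ethernet:      {ETHERNET_BPS / 1e6:,.0f} Mbit/s")
# About 491 Mbit/s for a single camera -- hundreds of times what CAN can carry,
# which is why high-bandwidth sensor data is moving onto Ethernet links.
```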

Comparing Compute Platforms for ADAS and Autonomous Systems

Every ADAS system, and therefore every level of autonomous driving, will require some form of complex embedded computing architecture. These architectures could contain a variety of CPU cores, GPUs, and specialized hardware accelerators. An example of such a computing platform is NVIDIA’s multi-chip solution, DRIVE PX 2, which includes two mobile processors and two discrete GPUs. Contrast this with NXP’s BlueBox, which incorporates an automotive vision/sensor fusion processor and an embedded compute cluster (eight ARM processor cores, an image processing engine, a 3D GPU, etc.). Both companies claim that their devices have the capability for full autonomous driving.

How can an ADAS or autonomous driving system engineer compare these two platforms, as well as platforms from Analog Devices, Qualcomm, Intel, Mobileye, Renesas, Samsung, Texas Instruments, or other semiconductor suppliers? Ultimately the production computing platform must be selected based on factors that include compute performance, energy consumption, price, software compliance, and portability (such as the application programming interface, or API). Identifying the potential compute performance and energy consumption of a heterogeneous embedded architecture is an overwhelming and inaccurate task with the currently available benchmarks, as they focus either on monolithic application use cases or on isolated compute
operations. Real-world scenarios on these architectures require an optimal utilization of the available compute resources in order to accurately reflect the application use case. This often implies the proper balancing of the compute tasks across multiple compute devices and separate fine-tuning for their individual performance profiles, which in turn requires intimate knowledge of the architectures of individual compute devices and of the heterogeneous architecture as a whole. To support this effort, key semiconductor vendors and automotive OEMs are working with EEMBC to develop a performance benchmark suite that assists both in identifying the performance criteria of the compute architectures and in determining the true potential of the architectures for various levels of autonomous driving platforms. EEMBC is an industry association focused on developing benchmarks for various embedded applications, including ADAS.

ADAS is highly reliant on vision processing. A typical application use case involves multiple camera images that are normalized and stitched into a seamless surround view, and certain objects of interest (e.g., pedestrians, road signs, obstacles) are then detected in the resulting images. The processing of this information is done in a pipeline flow composed of discrete elements. Hence, the overall goal of this compute benchmark is to measure the complete application flow from end to end as well as the discrete elements (using micro-benchmarks). This allows the user to comprehend the performance of the system as a whole, and it supports using the benchmark as a detailed performance analysis tool for SoCs and for optimizing the discrete elements (Figure 1).
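A sketch of how such an end-to-end-plus-micro-benchmark harness might be organized is shown below. It is not the EEMBC suite itself; the stage names (`color_space_convert`, `dewarp`, `stitch`, `detect_objects`) are hypothetical placeholders for the real pipeline elements, and only the timing structure is the point.

```python
import time

def run_stage(name, fn, data, timings):
    """Run one discrete pipeline element and record its micro-benchmark time."""
    start = time.perf_counter()
    result = fn(data)
    timings[name] = time.perf_counter() - start
    return result

def benchmark_pipeline(frames, stages):
    """Measure the complete flow end to end as well as each discrete element.

    `stages` is an ordered list of (name, function) pairs standing in for
    color-space conversion, de-warping, stitching, and object detection."""
    timings = {}
    start = time.perf_counter()
    data = frames
    for name, fn in stages:
        data = run_stage(name, fn, data, timings)
    timings["end_to_end"] = time.perf_counter() - start
    return data, timings

# Hypothetical no-op stages standing in for the real compute kernels:
stages = [
    ("color_space_convert", lambda d: d),
    ("dewarp",              lambda d: d),
    ("stitch",              lambda d: d),
    ("detect_objects",      lambda d: d),
]
_, timings = benchmark_pipeline([b"frame0", b"frame1"], stages)
print(timings)   # per-stage micro-benchmarks plus the end-to-end figure
```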

Optimizations in a Benchmark?


One of the key points for a real-world ADAS benchmark is to enable the user to determine the true potential of the compute architecture, instead of relying on benchmark numbers from software that will typically be biased toward the reference platform it was developed on. This true potential discovery is only possible if the code is optimized for the given architecture; hence, vendors can implement manual optimizations using a common interface that demonstrates their processor's baseline performance. Optimizations following the standard API will be verified by EEMBC, by code inspection, and by comparative testing with a randomly generated input sequence. If users want to utilize proprietary extensions, it becomes an unverified custom version but will ultimately demonstrate the full potential of the intended architecture.

Even following the standard benchmarking methodology, there are still many possible combinations to distribute the micro-benchmarks across compute devices. Ultimately, the benchmark will serve as an analytical tool that automatically tracks the various combinations and allows the user to make tradeoffs for performance, power, and system resources. In other words, for devices that are power constrained, highest performance is not always optimal. Each compute device in an ADAS architecture has a certain power profile, and by adding that information to the optimization algorithm, it can be adjusted to find the best performance at a defined power profile.

Figure 1: Simplified example of ADAS processing flow with discrete pipeline elements distributed among various compute devices. Video stream input splits into two parallel sections for color space conversion (Bayer) and de-warping (to remove distortion); the Join stage stitches the 2D polygons back together, creating a top-down view. The stitching stage is followed by a neural network that performs object detection. In a complete ADAS application this output would then be used to make decisions on which the vehicle would act.
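The kind of power-aware tradeoff described above can be illustrated with a brute-force sketch: enumerate every assignment of pipeline stages to compute devices and keep the fastest one whose total power stays under a budget. The per-device timing and power numbers below are invented for illustration and are not measurements of any real SoC.

from itertools import product

# Hypothetical per-stage execution time (ms) and added power (W) on each device.
PROFILE = {
    "cpu": {"color_conv": (6.0, 1.5), "dewarp": (5.0, 1.5), "stitch": (8.0, 2.0), "detect": (40.0, 3.0)},
    "gpu": {"color_conv": (2.0, 4.0), "dewarp": (1.5, 4.0), "stitch": (2.5, 5.0), "detect": (9.0, 8.0)},
    "dsp": {"color_conv": (3.0, 1.0), "dewarp": (2.5, 1.0), "stitch": (4.0, 1.5), "detect": (25.0, 2.0)},
}
STAGES = ["color_conv", "dewarp", "stitch", "detect"]

def best_assignment(power_budget_w):
    # Exhaustively try every stage-to-device mapping; keep the fastest within budget.
    best = None
    for devices in product(PROFILE, repeat=len(STAGES)):
        time_ms = sum(PROFILE[d][s][0] for d, s in zip(devices, STAGES))
        power_w = sum(PROFILE[d][s][1] for d, s in zip(devices, STAGES))
        if power_w <= power_budget_w and (best is None or time_ms < best[1]):
            best = (dict(zip(STAGES, devices)), time_ms, power_w)
    return best

if __name__ == "__main__":
    for budget in (8.0, 12.0, 20.0):
        assignment, t, p = best_assignment(budget)
        print(f"budget {budget:5.1f} W -> {t:5.1f} ms at {p:4.1f} W  {assignment}")

A real tool would of course account for parallel execution and measured profiles rather than simple sums, but the principle of trading performance against a defined power envelope is the same.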


The Tangible Intangibles


Besides the benchmark performance comparisons addressed above, automakers and system engineers must also consider a number of soft factors when selecting computing platforms for ADAS and autonomous driving development, such as relationship, roadmap alignment, and the ability to enable a broader software ecosystem. For example, BMW has recently announced a partnership with Intel and Mobileye. Likewise, Audi and Tesla have both established partnerships with NVIDIA. These tie-ups may give certain players advantages over others, even in the event that benchmarking performance is comparable.

In addition, while some automakers are developing their own in-house software capability, many of the players in the rapidly evolving autonomous driving software ecosystem are startups. These new companies are developing everything from driving-control applications to abstraction layers between the hardware and the application layers. A few are even developing complete vehicle operating systems. While most software startups attempt to remain hardware-agnostic, at least in theory, the reality is that many only have the resources to develop their systems on one or two hardware platforms. Once the vehicle system engineers at the automaker or a major Tier 1 supplier choose a particular computing platform, they are in effect pre-defining the software ecosystem to which they will potentially have access.

Another important consideration in the selection of the computing architecture is the overall automotive capability of the processor supplier. Certainly quality and device reliability are parts of this equation.

Today, nearly 30 years since these pioneering efforts, the basic principles of autonomous driving research remain largely the same: perception, cognition, and control.


While qualifying ICs to AEC Q100 is no longer the barrier to entry it used to be, ADAS and autonomous driving are safety-critical applications, and, as such, compliance with functional safety standards (ISO 26262) is an important criterion when selecting vendors for production programs.

But automotive capability goes beyond quality. Supply chain reliability should be equally important when evaluating systems intended for production vehicle programs. This means understanding not only where parts are manufactured, but also how much control the supplier has over those manufacturing locations, how quickly alternate sources can be brought online, and how much priority you will be afforded in the event of a supply chain disruption. Many people in the automobile industry have experienced business disruptions resulting from such natural disasters as floods, earthquakes, and fires. No supplier is 100% immune from such forces majeures, but the truly automotive-capable suppliers understand how to deal with these situations and have risk mitigation plans in place to protect their automotive customers.

No Right Answer, but There Is a Right Process

Clearly there is no one right answer to the question: What is the best compute architecture for ADAS and autonomous driving? The right processing platform will very much depend on a number of system-specific technical requirements centered on processing power and energy consumption. Having the best possible benchmark is an ideal place to start the technical part of the evaluation. But beyond the benchmark, it is critical that system engineers also take a longer-term view of the broader ecosystem they wish to enable and of the enterprise-level capabilities of the suppliers under evaluation. In this way, system engineers can transition rapidly and smoothly from being at the forefront of automotive innovation to being at the very core of tomorrow's production vehicles. MI


References

[1] The Defense Advanced Research Projects Agency-funded Autonomous Land Vehicle project at Carnegie Mellon University.

[2] Mercedes-Benz together with the Bundeswehr University Munich.

[3] Kanade, T., & Thorpe, C. (1986). CMU strategic computing vision project report: 1984–1985. Retrieved from http://repository.cmu.edu/robotics

[4] Pomerleau, D. A. (1989). ALVINN: An autonomous land vehicle in a neural network. Retrieved from http://repository.cmu.edu/robotics

[5] https://en.wikipedia.org/wiki/Autonomous_car#Classification

[6] Abuelsamid, S. (2016). Lessons from the failure of George Hotz and the Comma One semi-autonomous driving system. Forbes. http://www.forbes.com/sites/samabuelsamid/2016/11/01/thoughts-on-george-hotz-and-the-death-of-the-comma-one/2/

About the Authors

Markus Levy is president of EEMBC, which he founded in April 1997. As president, he manages the business, marketing, press relations, member logistics, and supervision of technical development. Mr. Levy is also president of the Multicore Association, which he co-founded in 2005. In addition, Mr. Levy chairs the IoT Developers Conference and Machine Learning DevCon. He was previously founder and chairman of the Multicore Developers Conference, a senior analyst at In-Stat/MDR, and an editor at EDN magazine, focusing on processors for the embedded industry. Mr. Levy began his career in the semiconductor industry at Intel Corporation, where he served as both a senior applications engineer and customer training specialist for Intel's microprocessor and flash memory products.


He is the co-author of Designing with Flash Memory, the one and only technical book on this subject, and received several patents while at Intel for his ideas related to flash memory architecture and usage as a disk drive alternative.

Drue Freeman is a 30-year semiconductor veteran. He advises and consults for technology companies ranging from early-stage startups to multi-billion dollar corporations, and for financial institutions investigating the implications of the technological disruptions impacting the automotive industry. Previously, Mr. Freeman was sr. vice president of global automotive sales & marketing for NXP Semiconductors. He has participated in numerous expert panels and spoken at various conferences on Intelligent Transportation Systems. Mr. Freeman helped found and served on the Board of Directors for Datang-NXP Semiconductors, the first Chinese automotive semiconductor company, and spent four years as VP of automotive quality at NXP in Germany. He is an Advisory Board Member of BWG Strategy LLC, an invite-only network for senior executives across technology, media, and telecom, and a member of Sand Hill Angels, a group of successful Silicon Valley executives and accredited investors who are passionate about entrepreneurialism and the commercialization of disruptive technologies.

Rafal Malewski leads the Graphics Technology Engineering Center at NXP Semiconductors, a GPU-centric group focused on graphics, compute, and vision processing for the i.MX microprocessor family. With over 16 years of experience in embedded graphics and multimedia development, he spans the full vertical across GPU hardware architecture, drivers, middleware, and application render/processing models. Mr. Malewski is also the EEMBC chair for the Heterogeneous Compute Benchmark working group.



SPECIAL FOCUS Self-Driving Cars

Continental Brings Augmented Reality To Head-Up Displays

By John Schroeter

Whenever you glance down to read an instrument cluster, look at a map, or check to see who just texted you, in that brief moment that your eyes are off the road, your vehicle may have traveled tens of meters. In those instances, you are, for all practical purposes, driving blind. Consequently, thousands of people die and hundreds of thousands more are injured each year in crashes involving distracted driving. It follows then that eliminating a good portion of such distractions could likewise eliminate a corresponding number of accidents.


That's the idea behind the Head-up Display (HUD), at least in part. To this end, the HUD projects information exactly where it is needed: directly in the driver's line of sight. Through a virtual image "floating" just over the hood, drivers are served a stream of important information, including speed, warning signals, even indicator arrows for navigation—all without the driver having to avert his gaze from the road ahead.

Realizing such a human-machine interface (HMI) experience requires a holistic engineering approach, and that's exactly what Continental has applied. The company's augmented reality-enhanced HUD system is enabled by, and integrates, a multitude of technologies that span myriad sensors (camera, LiDAR, biometric, ultrasonic), next-generation HUD technology (through Continental's partnership with DigiLens), the application of machine learning (enabling autonomous functions), GPS positioning, and even cloud connectivity for serving up digital map data, current traffic conditions, and other vital information. As you might imagine, the rapidly escalating volume of data coming into a vehicle now also requires commensurate instrumentation and processing power, as well as tight integration with ADAS systems, in order to digest and deliver that information in real time to the driver in a cohesive and non-distracting way.

Let's focus on just a few of those technologies.

AR-HUD

Continental has entered into a strategic partnership with DigiLens, a developer of ultrathin augmented reality "holographic" HUD systems. We'll explore their technology in depth in a future issue, but one of the breakthroughs they've achieved is shrinking the unit by a factor of three compared to where the state of the art has been to date, freeing up extremely valuable real estate within the vehicle.

Looking under the hood, the HUD unit's graphical elements are generated with the help of a digital micromirror device (DMD)—the same technology used in digital cinema projectors.


The core of the picture generation unit (PGU) is an optical semiconductor comprising a matrix of several hundred thousand tiny mirrors, which can be tilted individually by ±10–12° into reflecting (on, or bright) and non-reflecting (off, or dark) states via electrostatic fields. Each mirror represents one or more pixels in the projected image, with the number of mirrors corresponding to the image resolution. To generate a color image, the micromirror matrix is lit by a set of three LEDs—red, green, blue—firing in rapid succession and in a time-sequential manner. The human eye effectively "averages" the three color frames, creating the impression of a fully and continuously colored picture. The image is then projected onto the windshield, as opposed to a screen.

The HUD, though, is essentially the output device in this system. It takes its cues from Continental's AR-Creator, the Grand Central Station for all the signal data coming into the vehicle.
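A toy calculation shows why the rapid red-green-blue sequence reads as a single color: if the eye integrates over one frame period, the perceived color is roughly the time-weighted average of the three sub-frames. The duty cycles below are arbitrary illustration values, not DigiLens or Continental parameters.

# Field-sequential color: the DMD shows R, G, and B sub-frames in rapid
# succession; over one frame period the eye integrates them into one color.

def perceived_color(duty_cycles, led_colors):
    # Time-weighted average of the sub-frames over one frame period.
    total = sum(duty_cycles)
    return tuple(
        sum(duty * channel[i] for duty, channel in zip(duty_cycles, led_colors)) / total
        for i in range(3)
    )

# Fraction of the frame period each LED is lit (illustrative values only).
duty_cycles = [0.5, 0.3, 0.2]
led_colors = [(255, 0, 0), (0, 255, 0), (0, 0, 255)]   # pure R, G, B sub-frames

print(perceived_color(duty_cycles, led_colors))   # ~ (127.5, 76.5, 51.0), a warm orange-brown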

AR-Creator

Jennifer Wahnschaff, Vice President, Instrumentation and Driver HMI for the Americas, explained the AR-Creator to us this way:


"The AR-Creator takes all of the information coming from driver-assistance systems, all of the sensors that are in the vehicle for adaptive safety systems—cameras, LiDAR, radar, wheel-speed sensors—along with GPS, digital map data, and other sources, and brings them all together to deliver the right information to the driver at the right time. Whether it's alerting a drifting driver with a lane-departure warning, or detecting that the driver is getting drowsy, or whether it's providing information for navigation, or if the vehicle has adaptive cruise control and is monitoring the distance to the car ahead, all of these things are crucial to safety, and that's really what AR-Creator is about."

Wahnschaff adds that the system also provides a handshake to the driver. "With all these new technologies that are coming into safety systems, we have to have a way of communicating with the driver and building a level of trust so that the driver understands what the vehicle knows and what the vehicle doesn't know. And by providing this in an individual way, it not only helps the driver to become more secure, but also more trusting in the technology."

The system's look-ahead capability is especially interesting.


While the AR-HUD is displaying information only about 7.5 meters in front of the driver's view, the vehicle's sensors are projecting out much farther.

eHorizon

If you're familiar with the popular traffic-navigating app Waze, Continental's eHorizon takes the human element out of the equation, relying instead on information from the vehicle's sensors and the infrastructure on and around the roadway. In the future, as V2V technologies begin to proliferate, eHorizon will exploit them as well, to provide even better driver updates. About V2V, Wahnschaff notes the availability of the communications infrastructure in Europe, where traffic information is brought in over the radio system. "That traffic information could also be integrated into the navigation system and propose a different route for the driver to get around a breakdown, or construction, or other situation."

eHorizon integrates topographical and digital map data with sensor data, namely GPS receivers, for predictive control of vehicle systems. Future events, such as the uphill incline after the next corner, are exploited at an early stage in order to optimize the vehicle's response. eHorizon interprets map and sensor data and automatically adapts the engine and transmission management.


As to the balance between cloud-based and local processing, there will always be trade-offs to consider in how much of the processing burden is going to be placed on the OEM and the vehicle itself. Continental's approach is to support both models, but Wahnschaff is quick to point out that as more and more information is being processed—particularly GPS and map data—eHorizon looks to a healthy degree of connectivity. "Our philosophy," Wahnschaff says, "is we adapt to the environment that we have in the vehicle, making the information increasingly accurate and sophisticated as infrastructure, sensors, and connectivity allow." And, of course, when 5G is deployed in a few short years, it is going to change everything, to which Wahnschaff muses, "But then we'll be waiting for 6G and 7G and 8G!"
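As a rough sketch of the predictive idea behind eHorizon (the actual product logic is Continental's and far more involved), a controller could look a few hundred meters ahead in the map's elevation profile and bias gear selection or coasting before the terrain arrives. All values below are made up for illustration.

# Hypothetical illustration: derive a powertrain hint from the road gradient ahead.

def upcoming_gradient(elevation_profile_m, position_m, lookahead_m=300, step_m=100):
    # Average grade (%) over the next lookahead_m metres of the map's elevation profile.
    i = int(position_m // step_m)
    j = min(i + int(lookahead_m // step_m), len(elevation_profile_m) - 1)
    if j <= i:
        return 0.0
    rise = elevation_profile_m[j] - elevation_profile_m[i]
    return 100.0 * rise / ((j - i) * step_m)

def powertrain_hint(grade_percent):
    if grade_percent > 3.0:
        return "hold lower gear before the climb"
    if grade_percent < -3.0:
        return "lift off early and let the car coast"
    return "no change"

# Elevation samples every 100 m from the digital map (made-up numbers).
profile = [120, 121, 122, 130, 142, 155, 160, 158, 150, 140]

for pos in (0, 200, 500):
    g = upcoming_gradient(profile, pos)
    print(f"at {pos:4d} m: grade ahead {g:+5.1f}% -> {powertrain_hint(g)}")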

Projecting Navigation

Perhaps the most elegant manifestation of Continental's AR-HUD is its ability to project navigation graphics apparently directly onto the surface of the road ahead. This is no small feat. How does one go about rendering a 3D element in the 3D world that is dynamically fixed to the roadway?

Click here to watch AR-Creator HUD navigation in action, as seen through the windshield of Toyota’s Concept-i.


Continental has managed to isolate this function, independent of the movements of the vehicle. And on top of that, it, too, must look ahead. Continental's Stefan Wentzel, who showed us the system at CES 2017, explains, "In this system we are integrating sensors and a camera that normally supports lane-departure detection and warning, or recognition of obstacles, pedestrians, bicyclists, etc. The AR-Creator combines information from the camera's image stream and the maps data. One of the challenges was that the camera only looks 60 to 70 meters ahead, but with the AR-HUD, we have to provide a picture up to 200 meters down the road. So we extend the view of the camera in a virtual way. The other challenge we met is that GPS data is not always as precise as needed; there is always a difference of a few meters between where it says you are and where you actually are. So we correct the GPS data by overlaying the streaming camera image. You have to put the two together. And now that we know where we really are, we can extend the field of view."

Which brings us to the last hurdle in realizing this remarkable technology. Continental's engineers had to find a way to calculate the 3D objects within the driver's view in such a way that they could actually forecast them—that is, before they become visible. Wentzel continues, "For example, as the sensors measure the roadway, if you calculate the new objects as they appear, the car has already driven some part of the way. That means you've got to calculate well into the future, extrapolating the data to forecast what's down the road, relying again on this combination of maps, GPS, and camera data, because otherwise it's too late."

Taken together, the elements comprising AR-HUD yield a whole that is greater than the sum of its parts. And to fully appreciate that, it's got to be experienced. Come 2020, when this technology gets rolled out into production vehicles, you just might be able to. Jennifer Wahnschaff sums it up: "I'm really excited about this technology. I enjoy driving the test vehicles. When you're in a new environment it's great to have as much information as possible for figuring out where you need to go. And reducing driver distraction by presenting this information in such a simple way, well, it is just so beneficial." MI



SPECIAL FOCUS Self-Driving Cars

Autonomous Driving: How Safe Is Safe Enough?

By Dr. Gill Pratt

Thoughts by Toyota Executive Technical Advisor and CEO of Toyota Research Institute.


My views here reflect findings from a few key research projects that we and our partners have been conducting this past year at Toyota Research Institute (TRI). To provide a bit of context, TRI’s mission is focused on artificial intelligence and includes four goals:



• First, to greatly enhance vehicle safety and someday create a car incapable of causing a crash.
• Second, to greatly increase mobility access for those who cannot drive.
• Third, to heavily invest in robotics to move people not just across town, but in their home, from room to room.
• And finally, to accelerate discovery in materials science by applying techniques from artificial intelligence and machine learning.

My thoughts are framed by a question designed to offer clarity and provoke discussion on just how complicated this business of autonomous mobility really is. The question I'd like to explore with you is: How safe is safe enough?

Society tolerates a lot of human error. We are, after all, "only human." But we expect machines to be much better. Last year, there were about 35,000 fatalities on US highways—all involving vehicles controlled by human drivers. Every single one of those deaths is a tragedy. What if we could create a fully autonomous car that was "as safe, on average" as a human driver? Would that be safe enough?


In other words, would we accept 35,000 traffic fatalities a year in the US at the hands of a machine if it resulted in greater convenience, less traffic, and less impact on the environment? Rationally, perhaps the answer should be yes. But emotionally, we at TRI don't think it is likely that being "as safe as a human being" will be acceptable. However, what if the machine was twice as safe as a human-driven car and 17,500 lives were lost in the US every year? Would we accept such autonomy then? Historically, humans have shown nearly zero tolerance for injury or death caused by flaws in a machine. And yet we know that the artificial intelligence systems on which our autonomous cars will depend are presently and unavoidably imperfect.

So…how safe is safe enough? In the very near future, this question will need an answer. We don't yet know for sure. Nor is it clear how that standard will be devised. And by whom. And will it be the same globally?

One standard that is already in place is SAE International J3016, revised just last September, which defines five levels of driving automation. I want to review this standard with you because there continues to be a lot of confusion in the media about it.



All car makers are aiming to achieve Level 5, where a car can drive fully autonomously under any traffic or weather condition, in any place, and at any time. I need to make this perfectly clear: This is a wonderful goal. However, none of us in the automobile or IT industries are close to achieving true Level 5 autonomy. Collectively, our current prototype autonomous cars can handle many situations. But there are still many others that are beyond current machine competence. It will take many years of machine learning, and many more miles than anyone has logged of both simulated and real-world testing, to achieve the perfection required for Level 5 autonomy.

But there is good news. SAE Level 4 autonomy is almost Level 5, but with a much shorter timetable for arrival. Level 4 is fully autonomous except that it only works in a specific Operational Design Domain, like the MCity test facility on the campus of the University of Michigan. Restrictions could include limited areas of operation, limited speeds, limited times of day, and only when the weather is good. When company A, or B . . . or T says it hopes to have autonomous vehicles on the road by the early 2020s, Level 4 is the technology they are probably referring to.

Image courtesy of SAE International.
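A Level 4 system's Operational Design Domain can be thought of as a set of gates that current conditions must pass before the autonomy is allowed to engage. The sketch below is a schematic illustration with made-up limits, not any manufacturer's actual policy.

from dataclasses import dataclass

@dataclass
class Conditions:
    inside_geofence: bool   # e.g., within a mapped campus such as MCity
    speed_limit_kph: float
    hour_of_day: int
    weather: str            # "clear", "rain", "snow", "fog"

def within_odd(c: Conditions) -> bool:
    # Illustrative ODD gate: area, speed, time of day, and weather must all pass.
    return (
        c.inside_geofence
        and c.speed_limit_kph <= 50
        and 6 <= c.hour_of_day <= 20
        and c.weather == "clear"
    )

print(within_odd(Conditions(True, 40, 14, "clear")))   # True: autonomous mode may engage
print(within_odd(Conditions(True, 40, 22, "clear")))   # False: outside permitted hours
print(within_odd(Conditions(True, 40, 14, "snow")))    # False: weather out of domain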


TRI believes it is likely that a number of manufacturers will have Level 4 autonomous vehicles operating in specific locations within a decade. Level 4 autonomy will be especially attractive and adaptable for companies offering Mobility as a Service in such forms as ridesharing and carsharing, as well as inner-city last-mile models. In fact, Mobility as a Service may well offer the best application for bringing Level 4 to market sooner rather than later.

Moving down the ladder, Level 3 is a lot like Level 4 but with an autonomous mode that at times may need to hand off control to a human driver who may not be paying attention at the time.

Hand off, of course, is the operative term—and a difficult challenge. In Level 3, as defined by SAE, the autonomy must ensure that if it needs to hand off control of the car it will give the driver sufficient warning. Additionally, Level 3 autonomy must also ensure that it will always detect any condition requiring a handoff. This is because in Level 3, the driver is not required to oversee the autonomy and may instead fully engage in other tasks. The term used by SAE when the vehicle's system cannot handle its dynamic driving tasks is a request to intervene.

Photo courtesy of University of Michigan.


The challenge lies in how long it takes a human driver to disengage from their texting or reading once this fallback intervention is requested, and also in whether the system can ensure that it will never miss a situation where a handoff is required. Considerable research shows that the longer a driver is disengaged from the task of driving, the longer it takes to reorient.

Furthermore, at 65 miles per hour, a car travels around 100 feet every second. This means that to give a disengaged driver 15 seconds of warning at that speed, the system must spot trouble about 1,500 feet away, or about 5 football fields ahead. That's extremely hard to guarantee, and unlikely to be achieved soon. Regardless of speed, a lot can happen in 15 seconds, so ensuring at least 15 seconds of warning is very difficult. In fact, it is possible that Level 3 may be as difficult to accomplish as Level 4.
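The arithmetic behind that 1,500-foot figure is simple enough to check in a few lines; the same calculation also shows how quickly the required sensing range grows with speed and warning time (the speeds below are just sample values).

FEET_PER_MILE = 5280.0
SECONDS_PER_HOUR = 3600.0
FOOTBALL_FIELD_FT = 300.0

def warning_distance_ft(speed_mph, warning_s):
    # Distance travelled during the warning period, in feet.
    feet_per_second = speed_mph * FEET_PER_MILE / SECONDS_PER_HOUR
    return feet_per_second * warning_s

for mph in (35, 65, 75):
    d = warning_distance_ft(mph, 15)
    print(f"{mph} mph, 15 s warning: {d:6.0f} ft  (~{d / FOOTBALL_FIELD_FT:.1f} football fields)")

At 65 mph this works out to roughly 1,400–1,500 feet, consistent with the figure quoted above.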


This brings us to Level 2, perhaps the most controversial level right now because it's already here and functioning in some cars on public roads. In Level 2, a vehicle handoff to a human driver may occur at any time with only a second or two of warning. This means the human driver must be able to react, mentally and physically, at a moment's notice. Even more challenging is the requirement for the Level 2 human driver to always supervise the operation of the autonomy, taking over control when the autonomy fails to see danger ahead. It's sort of like tapping on the brake to disengage adaptive cruise control when we see debris in the road that the sensors do not detect. This can and will happen in Level 2, and we must never forget it.

Human nature, not surprisingly, remains one of our biggest concerns. There are indications that many drivers may either under-trust or over-trust a system. When someone over-trusts a Level 2 system's capabilities, they may mentally disconnect their attention from the driving environment and wrongly assume the Level 2 system is more capable than it is. We at TRI worry that over-trust may accumulate over many miles of handoff-free driving. Paradoxically, the less frequent the handoffs, the worse the tendency to over-trust may become. And there is also evidence that some drivers may deliberately test the system's limits, essentially misusing a device in a way it was not intended to be used.

This is a good time to address situational awareness and mental attention. It turns out that maintaining awareness while engaged in monitoring tasks has been well studied for nearly 70 years.



Research psychologists call it the "vigilance decrement." During World War II, it became clear that radar operators looking for enemy movement became less effective as their shift wore on, even if they kept their eyes on the task. In 1948, Norman Mackworth wrote a seminal paper called "The breakdown of vigilance during prolonged visual search." The experiment he performed used a clock that had only a second hand, which would occasionally and randomly jump by two seconds. It turns out that, even if you keep your eyes on the Mackworth clock, as the graph below shows, your performance at detecting two-second jumps will decrease in proportion to how long you do it.

Okay, so how do you think you would do at this task for two hours?

Figure: Results of the Mackworth (1948) clock test. Subjects: 25 RAF and 25 Naval personnel; 12 signals per 30 minutes in a 2-hour test following training. The graph plots the proportion of signals detected (%) against time on task (30 to 120 minutes), with detection declining over the course of the watch (the vigilance decrement) relative to alerted detection.


Are you likely to remain vigilant for a possible handoff of the Level 2 car's autonomy? Does this body of evidence mean that Level 2 is a bad idea? Some companies have already decided the challenges may be too difficult and have decided to skip Levels 2 and 3.

As it turns out, we are finding evidence that some things—texting not included—seem to reduce the vigilance decrement. We are finding that some mild secondary tasks may actually help maintain situational awareness. For example, long-haul truck drivers have extremely good safety records, comparatively. How do they do it? Perhaps because they employ mild secondary tasks that help keep them vigilant. They talk on two-way radios and may scan the road ahead looking for speed traps. And I bet almost all of us have listened to the radio as a way of staying alert during a long drive. Experts have divided opinions on whether that is a good idea or a bad one.

What we do know for sure is that as we move forward toward the ultimate goal of full autonomy, we must strive to save as many lives as possible in the process, because it will take decades for a significant portion of the US car fleet to function at Level 4 and above.



That's why TRI has been taking a two-track approach, simultaneously developing a system we call Guardian, designed to make human driving safer, while working on Level 2 through Level 5 systems that we call Chauffeur. Much of the work in hardware and software that we are developing to achieve Chauffeur is also applicable to Guardian, and vice versa. In fact, the perception and planning software in Guardian and Chauffeur are basically the same. The difference is that Guardian only engages when needed, while Chauffeur is engaged all of the time during an autonomous drive.

One can think of anti-lock brakes, vehicle stability control, and automatic emergency braking as early forms of Guardian. When it arrives, it will be a hands-on-the-wheel, eyes-on-the-road, only-when-needed system that merges vehicle and human situational awareness. In Guardian, the driver is meant to be in control of the car at all times except in those cases where Guardian anticipates or identifies a pending incident and briefly employs a corrective response. Depending on the situation, Guardian can alert the driver with visual cues and audible alarms, and if necessary, influence or control speed and steering.
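One way to picture the relationship Pratt describes between Guardian and Chauffeur is as a single perception-and-planning stack wrapped in two different engagement policies. This is only a schematic reading of his description, not TRI's software; the risk threshold and data fields are invented for illustration.

def perceive_and_plan(sensor_frame):
    # Shared stack (stand-in): estimate risk and propose a safe maneuver.
    risk = sensor_frame.get("collision_risk", 0.0)
    return {"risk": risk, "maneuver": "braking" if risk > 0.7 else "nominal"}

def chauffeur_step(sensor_frame):
    # Chauffeur: the autonomy is engaged for the whole drive.
    plan = perceive_and_plan(sensor_frame)
    return {"actor": "autonomy", "action": plan["maneuver"]}

def guardian_step(sensor_frame, driver_input):
    # Guardian: the human drives; the autonomy intervenes only when risk is high.
    plan = perceive_and_plan(sensor_frame)
    if plan["risk"] > 0.7:
        return {"actor": "autonomy", "action": plan["maneuver"], "alert": "audible + visual"}
    return {"actor": "driver", "action": driver_input}

print(chauffeur_step({"collision_risk": 0.2}))
print(guardian_step({"collision_risk": 0.2}, driver_input="steady"))
print(guardian_step({"collision_risk": 0.9}, driver_input="steady"))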


Like Yui, our Concept-i agent, Guardian employs artificial intelligence and becomes smarter and smarter through both first-hand data-gathering experience and intelligence shared via the cloud. Over time, we expect Guardian's growing intelligence will allow it to sense things more clearly, process and anticipate more quickly, and respond more accurately in a wider array of situations.

Every year cars get safer. One reason is because every year, automakers equip vehicles with higher and higher levels of active safety. In ever-increasing numbers, vehicles are already being entrusted to sense a problem, choose a course of action, and respond, assuming, for brief periods, control of the vehicle. And that brings me back to the Concept-i.



At TRI, we think that Yui, the Concept-i agent, might not only be a way to engage the driver and provide useful advice; we think it might also be a way to promote the driver's continued situational awareness, using mild secondary tasks to promote safety. We've only begun our research to find out exactly how that would work. Perhaps Yui could engage the driver in a conversation that would reduce the vigilance decrement the way talking on the two-way radio or looking for speed traps seems to do with truck drivers. We think the agent might even be more effective, because Yui would be coupled to the autonomy system, which would be constantly monitoring the car's environment, inside and out, merging human and vehicle situational awareness. We're not sure, but we aim to find out.

Toyota is involved in many aspects of making future cars safer and more accessible. Yui and Concept-i represent a small part of that work. But it has the potential for being more than a helpful friend. It may have the potential to become the kind of friend that looks out for you and keeps you safe—a guardian, as well as a chauffeur.

Our goal is to someday create a car that will never be responsible for causing a crash, whether it is driven by a human being or by a computer. And Concept-i may become a key part of that plan. MI

About Dr. Gill Pratt, CEO, Toyota Research Institute, Inc. (TRI)

Before joining Toyota, Dr. Gill Pratt served as a program manager in the Defense Sciences Office at the US Defense Advanced Research Projects Agency (DARPA) from January 2010 through August 2015. Dr. Pratt's primary interest is in the field of robotics and intelligent systems. Specific areas include interfaces that significantly enhance human/machine collaboration, mechanisms and control methods for enhanced mobility and manipulation, low impedance actuators, and the application of neuroscience techniques to robot perception and control. He holds a Doctor of Philosophy in electrical engineering and computer science from the Massachusetts Institute of Technology (MIT). His thesis is in the field of neurophysiology. He was an associate professor and director of the Leg Lab at MIT. Subsequently, he became a professor at Franklin W. Olin College and, before joining DARPA and then Toyota, was associate dean of Faculty Affairs and Research. Dr. Pratt holds several patents in series elastic actuation and adaptive control.




SPECIAL FOCUS Self-Driving Cars

Big Data and Your Driverless Future

By Evangelos Simoudis, Ph.D.


Early in 2016, GM invested $500M in ridesharing company Lyft and reportedly paid close to $1 billion to acquire Cruise Automation, a 40-employee, Silicon Valley-based, venture-funded startup that was developing driverless vehicle technology. A few months earlier, Toyota had announced that it would invest $1 billion in artificial intelligence research, one of the key technologies making driverless vehicles possible.



These moves by the world's largest automakers, along with related moves by other incumbents, show that the automotive industry is starting to understand that it can be disrupted in ways that we haven't seen in the 100 years since the invention of the automobile. The disruption will come as we try to address important societal and urban challenges through our approaches to mobility. It will be the result of changing attitudes toward car ownership, technology innovations, and business model innovations. The disruption will be catalyzed by the Autonomous, Connected, and Electrified (ACE) vehicle in conjunction with a variety of on-demand mobility services under a hybrid model that blends car ownership with on-demand car access. Big data coming from inside and outside the ACE vehicle, combined with machine intelligence technologies used for the exploitation of this data, are key ingredients in next-generation mobility. Together they offer a unique, and still overlooked, value-creation opportunity in a driverless future.

What is causing a company like GM to pay such a sum to acquire a tiny startup, for Toyota to commit such a large amount to artificial intelligence research, and for almost every major automaker to establish a technology center in Silicon Valley? Try to imagine commuting in a driverless vehicle during rush hour in a city like Los Angeles. In addition to being freed from the stress caused by rush hour traffic and being able to use the commuting time in any way you like, you will not have to worry about the issues associated with arriving at your destination: Where can I find parking? How close to my destination will I be if I park in a particular location? Will I need to drive, or could I walk to arrive at my next meeting on time?

Answers to these questions impact the customer experience, and particularly the experience younger consumers are starting to expect. And because of these changing expectations, consumers will impact the business of the incumbent automotive and transportation industries. With their recent actions, the automotive industry incumbents are attempting to acquire or develop the technology that is rapidly becoming table stakes for next-generation mobility and thus address an emerging disruption risk that a driverless future will bring.


But billion-dollar acquisitions and investments, though impressive, are not sufficient by themselves to address the disruption risk the incumbent automotive industry is facing. The industry will need to:

• Transform from designing and manufacturing vehicles to offering transportation experiences that will be based on insights derived from understanding the changes in consumer expectations and preferences;

• Adopt different approaches to innovation. They must devise new corporate innovation strategies that combine technology innovations with the right business model innovations, as startups in Silicon Valley, Israel, and China are doing routinely, and make the appropriate organizational and cultural changes in order to make the transformation successful;

• Designate big data and its exploitation using machine intelligence as a strategic imperative and invest heavily and over a long period in these technologies, in which they have little or no competence.

My book, The Big Data Opportunity In Our Driverless Future, attempts to answer seven questions relating to big data and machine intelligence in the context of driverless vehicles and the mobility services they enable. Today's vehicles are collections of electromechanical components that are controlled by a large number of microprocessors running disparate but increasingly complex software. ACE vehicles are robots on wheels. As such, not only do they rely on more complex software, but they are also generators and consumers of big data. Incumbent automakers and the startups working with ACE vehicles—and the mobility services they enable—don't have experience with the types of data generated in these environments. The seven questions addressed by the book aim at helping them create a roadmap to successfully take advantage of the opportunities provided by big data and its exploitation using machine intelligence. These questions are:

1. Is there a real threat to the automotive industry from ACE vehicles and the mobility services offered around them?

* Big Data •

2. Is big data associated with ACE vehicles and next-generation mobility a strategic area that deserves the automotive industry's attention—and particularly the attention of incumbents?

3. Can big data and machine intelligence be leveraged and used as an effective and long-term advantage by the automotive industry incumbents to provide personalized transportation solutions in addition to enabling driverless vehicle navigation?

4. Could the incumbent automotive industry benefit from making big data and machine intelligence strategic imperatives even if they don't change their business models and adopt ACE vehicles and associated mobility services, or do they need to transform more radically?

5. What do the automotive incumbents, and particularly the automotive OEMs, need to do in order to capitalize on this big data advantage?

6. How would big data and machine intelligence enable the offering of better transportation experiences, including sharing of ACE vehicles, and thus address social and urban problems?

7. Will future consumers of mobility services attach more brand value to traditional automotive companies and brands (e.g., Ford, Chevrolet, Lexus), to technology brands offering such services (e.g., Apple, Google, Amazon), or to the mobility providers themselves (e.g., Uber, Lyft)?

One of the areas discussed in the book in the process of addressing these seven questions is how technology and business model innovations address certain societal and urban challenges that are starting to disrupt the automotive and transportation industries. Autonomous and driverless vehicles require innovations in four technology areas: hardware, software, connectivity, and big data, including high-definition maps. These vehicles are big data platforms, and they are supported by big data platforms. But big data is the one area where the automotive industry incumbents in particular have the least experience, even though it is the one that can impact a broad set of outcomes, from vehicle navigation to vehicle personalization, fleet optimization, provision of personalized consumer transportation solutions, and many others.


The Mercedes-Benz autonomous F 015 concept car doubles as a rolling lounge. Image courtesy of Mercedes-Benz.

But while we are often in awe of the technological innovations in ACE vehicles, we have not paid as much attention to the business model innovations that are enabled by these technologies. This is because for the longest time vehicle ownership was the de facto model. Vehicle ownership involves a limited set of well-understood business models. One could buy a vehicle or lease it under a relatively long-term contract. Car rental was considered only as part of long-distance travel, even when only a small amount of driving was involved in such travel, and taxis were the only viable alternative to mass transit in cities.

The connected vehicle started to enable new business models in many industries that work around the automotive industry.


For example, states are starting to experiment with taxing consumers by the mile traveled, insurance companies are starting to roll out usage-based policies, the entertainment industry has adopted streaming radio and video subscription models, and the advertising industry is employing online advertising on vehicle maps to monetize services and content. However, the arrival and fast consumer acceptance of on-demand mobility services, such as ride-hailing, ridesharing, carsharing, and others, enabled by software, big data, and smartphone technologies, is emerging as the real business model disruptor of the automotive industry. For example, the dynamic pricing (surge pricing) business model offered by ridesharing services is enabled by technologies such as location-based services (big data) software and GPS, the sophisticated analysis of traffic data, analysis of consumer demand data, and analysis of driver-supplied data. Similarly, carsharing service companies like Zipcar are testing a business model to charge drivers per mile driven rather than per hour.

On-demand mobility services enable consumers to have access to vehicles without owning them and, more broadly, shape the future of mobility while addressing a variety of societal and urban challenges. This is starting to lead to a big shift from the notion that puts car ownership at the center to one that puts car access at the center, with access provided by on-demand mobility services. In fact, because of the advantageous economics offered by ACE vehicles, companies offering ride-hailing, ridesharing, and carsharing will be the first adopters of these vehicles.

The automotive industry as we know it today emerged from a group of startups, many of which were established in Detroit, a city that at the turn of the 20th century was an innovation cluster quite similar to today's Silicon Valley. Over the years, the industry has weathered many challenges, several due to economic downturns, most recently in 2009, and others due to regulation and globalization. It always found a way to succeed through innovation and come out stronger.


However, its overall business model has never been challenged in the past 100-plus years of its existence the way it is being challenged today, by both startups and large corporations that are newcomers to the automotive industry. Today, it again has the opportunity to disrupt in the coming driverless and on-demand mobility future. This time, however, in order to succeed it will need to make fundamental changes and, among its strategic options, recognize and take advantage of the profound and long-term opportunities provided by the broad exploitation of big data.

About the Author

Evangelos Simoudis is a recognized expert on big data strategies and corporate innovation. He has worked in Silicon Valley for 25 years as a venture investor, entrepreneur, and corporate executive. He is the co-founder and managing director of Synapse Partners, a venture firm that invests in early-stage startups developing big data applications. Evangelos is advising several global corporations on their big data strategies and startup-driven innovation. He has also served as partner and managing director at Apax Partners and Trident Capital. In 2012 and 2014 he was named a top investor in online advertising.


Prior to his venture and advisory career, Evangelos served as President and CEO of Customer Analytics and as Vice President of Business Intelligence at IBM. He serves on the advisory boards of Caltech's Center for Information Science and Technology, the Brandeis International School of Business, New York's Center for Urban Science and Progress, and SAFE's Autonomous Vehicle Task Force. Evangelos earned a Ph.D. in computer science from Brandeis University and a BS in electrical engineering from Caltech.



SPECIAL FOCUS Self-Driving Cars

By Sudha Jamthe

Late nights, as I drive home after teaching Cognitive IoT at Stanford Continuing Studies, my loyal companion on the road is a fleet of Google's self-driving cars, now known as Waymo. They drive defensively at 25 mph, with a human at the ready to take control if the car disengages into manual mode. What strikes me most about the driverless car is its silence and its seemingly oblivious disconnect from the social fabric of communication on the road.


Driverless car pilots are focused on such things as computer vision and deep learning, teaching the car the necessary "skills" to comprehend its surroundings, and creating algorithms that enable it to make better driving decisions on the road. The cars are learning the rules of the road, enabled by object detection and classification, in order to safely navigate a crowded roadway comprising human-driven cars, pedestrians, road signs, traffic lights, and the occasional puddle or pothole. As for the autonomous car's ability to react to humans inside the car, that's coming, too.

Baffling a Driverless Car with Honks, Nods, and Eye Contact

Recently, US Representative Gregg Harper asked Ford at a US House subcommittee hearing what a self-driving car would do if he honked at it. The answer is that the car would ignore it. Car manufacturers have not thought about how a car will react to honking or any other social communication—the kinds of things that make up the norm of our driving lives. There are so many nuances to honking that it is seemingly impossible for a car to apply machine learning to understand the context of a honk in order to properly react to it. A driver might honk in excitement when they spot a friend in a neighboring car, or they may honk in road rage at a car cutting them off. A whole group of cars might honk at a group of people rallying for a cause they want to support. "Honk if you love driverless cars!"

When we suddenly encounter a jaywalking pedestrian, we human drivers perform more than mere object recognition. When another human crosses our path in an unexpected place, we notice them, note their expressions, make interpretations about their intent, and then make a decision to stop or slow down, perhaps even with empathy. In other words, we make subjective judgments. This is another example of a class of object recognition that we cannot teach an autonomous car, however much progress we might be making in affective computing—that area of research where we teach machines empathy in order to simulate human emotions. I might let the person pass in the middle of the road if I suspect it is someone carrying a heavy shopping bag across from an apartment building that may be far away from the next legal pedestrian crossing. Another person may make the same interpretation but choose not to stop, because they do not want to encourage the jaywalker's unsafe behavior. So beyond the contextual interpretation, our cognitive reasoning influences our decision-making as well.

Human drivers communicate with each other and with bicyclists on the road through an extensive and subtle system of nods, eye contact, and hand waves. Yield signs are a great example of where we not only follow the rules of the road to give right of way, but also keep ourselves safe by signaling a car entering the highway to merge in front of us, or by nodding when a driver lets us turn left.


The classic case where the self-driving car struggles is determining how to merge when it comes upon an unexpected construction zone. The bottom line is that there is a great deal of unwritten communication that transpires between human drivers by which we navigate such potentially chaotic situations. Can we teach a driverless car the delicate dance we perform when negotiating ending lanes and construction signs, as well as the nonverbal permissions we take and grant as we get back into single file with other cars?

Machine Communication with Other "Things"

All the foregoing notwithstanding, today's driverless cars are smart enough to avoid the accidents commonly caused by the human carelessness that leads to 35,000 traffic fatalities each year in the US alone. They can predict the actions of cars ahead of them at speeds incomprehensible to humans, as was recently demonstrated by Tesla. The Tesla car predicted, based on sensor data and its computer vision, that two cars ahead of it were about to stop, saving the driver from an accident. Cars can quietly communicate with roads and traffic lights using an exchange of sensor data and predict and react to changing road conditions with more alertness than humans can ever hope for. Humans can't see around corners, but driverless cars can. The many sensors that enable driverless cars can save us in sudden foggy conditions or when a thin sheet of ice invisible to the human eye is forming on the road as we drive. And equally interesting, those sensors may be borne by the car's passengers (and occasional drivers) as well.

Human Wearables Become Our Communication Proxy

Renault has recently partnered with Sensoria Fitness of Seattle to sync its biosensor-enabled smart socks and other garments with Renault's Sport Motor app to help race car drivers track their heart rate during a race and in training. They're also using the technology to learn more about how a race car driver moves his or her feet on the pedals. This makes the wearable a viable and fascinating proxy for communicating with the car using IoT sensor signals. We're seeing this kind of technology being integrated into the car as well, with biosensors—complete with haptic feedback—embedded into the car seats.

Transforming All Road Stakeholders into Cognitive IoT

What automotive AI lacks in human cognition, it makes up for in agile decision-making.


We cannot expect real-time conscious communication between an autonomous vehicle and a human, at least not at high speeds for decisions that can impact safety. This has convinced me that all the elements involved in the social fabric of our roadway communication need to be enhanced by cognitive IoT. The road, traffic lights, parking spots, wearables, and the car's AI all have to process the huge volume of real-time sensor and intent data, and improve their capabilities through machine learning, to enable safe and effective communication with the driverless car.

The road can alert the car about a pothole or icy conditions. The parking spot can inform the car about space availability—and even take payment using blockchain. Traffic lights can signal the car to go on through an empty road. Humans can communicate using wearables that predict their intent using AI models. Taken together, these capabilities enable a symphony of cognitive IoT devices empowered by facial recognition, the tracking of driver behaviors, and a coming together of data and sensor sources providing information about road and traffic conditions—all feeding one another in symbiotic concert. Indeed, the driverless car is the catalyst that is set to accelerate the realization of cognition in city infrastructure, wearables, and other cars.

On our path to this driverless world, sharing the road with self-driving cars, my "drive" home is only going to get better with the likes of Waymo and its fellow travelers, all sharpening their AI, hungry to learn our communication methods. The "silence" of driverless cars will be broken as we progress from fragmented human communications not understood by cars to intelligent, nuanced cognitive communication from the many things on the road. MI

About the Author Sudha Jamthe, CEO of IoTDisruptions.com, globally recognized technology futurist, and keynote speaker, is the author of 2030 The Driverless World: Business Transformation from Autonomous Vehicles and three IoT books. She brings 20 years of digital transformation experience in building organizations, shaping new technology ecosystems, and mentoring leaders at eBay, PayPal, Harcourt, and GTE. She has an MBA from Boston University and teaches IoT Business at Stanford Continuing Studies. Sudha also aspires to bring cognitive IoT and autonomous vehicles together.



HAPTICS

The Future of the Touchscreen Is Touchless

Haptics Brings the Sense of Touch to the Virtual World

If seeing is believing, then feeling has got to be downright convincing. But feeling something that really isn't there? Ultrahaptics brings the long-neglected sense of touch to the virtual world. And now you can add this to your own products and customer experiences.

By John Schroeter


beneath the seats that pulsed shots of air right onto the ankles of the audience members, quite realistically simulating the feel of scurrying mice as they were virtually let loose in the theater. I still remember the screams of surprise. Things have come a long way since then. Air jets and vortices have given way to a new generation of technologies, including ultrasound, which enables highly nuanced tactile effects—haptics—and now promises to revolutionize user experiences in Augmented and Virtual Reality. Haptics is the science of touch. Ultrahaptics, the company, is taking haptics to a whole new realm, creating that sense of touch in midair. The applications of touch are as limitless as sight and sound: think virtual force fields, touchless dials complete with the clicking feel of detents, holographic buttons, sliders, and switches with which you can control a myriad of devices—your music, thermostat, lighting, your car’s infotainment system ...pretty much anything. You can now interact in a natural, intuitive way with any virtual object. Founded in 2013, Ultrahaptics grew out of research conducted by Tom Carter as a student at the University of Bristol in the UK.


Ultrahaptics’ technology uses ultrasound to generate a haptic response directly onto the user’s bare hands. Gesture controls can be used to operate infotainment systems, such as in-car audio and connected-car applications, more intuitively. Watch Ultrahaptics in action here.



There he worked under the supervision of computer science professor Sriram Subramanian, who ran a lab devoted to improving human-computer interaction. Subramanian, who has since moved to the University of Sussex, had long been intrigued by the possibilities of haptic technologies but hadn’t brought them to fruition for want of solving the complex programming challenges. That’s where Carter comes in. With the fundamental programming problems solved, the company’s solution works by generating focused points of acoustic radiation force—a force generated when the ultrasound is reflected onto the skin—in the air over a display surface or device. Beneath that surface lies a phased array of ultrasonic emitters (essentially tiny ultrasound speakers), which produce steerable focal points of ultrasonic energy with sufficient sound pressure to be felt by the skin. Using proprietary signal processing algorithms, the array of ultrasonic speakers or “transducers” generates the focal points at a frequency of 40kHz. The 40kHz frequency is then modulated at a lower frequency within the perceptual range of feeling in order to allow the user to feel the desired haptic sensation. Ultrahaptics typically uses a frequency from 1–300Hz, corresponding to the peak sensitivity of the tactile receptors. Modulation of the frequency is


How will the driving of the future look? Bosch presented its vision at CES 2017 with a new concept car. Alongside home and work, connectivity is turning the car into the third living space. The concept car includes gesture control with haptic feedback. Developed with Ultrahaptics, the technology uses ultrasound sensors that sense whether the driver’s hand is in the correct place and then provide feedback on the gesture being executed. Image courtesy of Bosch. For more on Bosch innovations, click here.



one of the parameters that can be adjusted by the API to create different sensations. The location of the focal point, determined by its three-dimensional coordinates (x, y, z), is programmed through the system’s API. Beyond creating a superior computer-human interaction with more intuitive, natural user experiences, the technology is also finding applications in use cases spanning hygiene (don’t touch that dirty thing) to accessibility (enabling the deaf and blind) to creating safer driving experiences. Really, the possibilities are endless: if you can control an electronic device by touch, chances are you can go touchless with haptics.
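For readers who want to see the arithmetic behind “steerable focal points,” here is a minimal sketch of the general phased-array idea, written in Python. It is not Ultrahaptics’ algorithm or API; the array size, pitch, speed of sound, and 200Hz modulation are assumed values chosen purely for illustration.

    import numpy as np

    SPEED_OF_SOUND = 343.0    # m/s in air at room temperature
    CARRIER_HZ = 40_000       # the ultrasonic carrier discussed above
    MODULATION_HZ = 200       # within the 1-300Hz range the skin can perceive

    # A hypothetical 16 x 16 grid of emitters on a 10.5 mm pitch in the z = 0 plane.
    pitch = 0.0105
    coords = (np.arange(16) - 7.5) * pitch
    gx, gy = np.meshgrid(coords, coords)
    emitters = np.stack([gx.ravel(), gy.ravel(), np.zeros(gx.size)], axis=1)

    def phase_offsets(focal_point):
        """Phase lead (radians) per emitter so every wave arrives at the focus in step."""
        distances = np.linalg.norm(emitters - np.asarray(focal_point), axis=1)
        return (2 * np.pi * CARRIER_HZ / SPEED_OF_SOUND) * distances

    def drive_signals(focal_point, t):
        """Per-emitter drive at time t: a phase-shifted carrier whose amplitude is
        modulated at a tactile frequency so the focused energy can be felt."""
        envelope = 0.5 * (1.0 + np.sin(2 * np.pi * MODULATION_HZ * t))
        return envelope * np.sin(2 * np.pi * CARRIER_HZ * t + phase_offsets(focal_point))

    # Aim a single focal point 20 cm above the center of the array.
    samples = drive_signals((0.0, 0.0, 0.20), t=0.001)

Changing MODULATION_HZ, or driving several focal points with different modulation rates, is the plain-vanilla version of what the article describes as giving each point of feedback its own “feel.”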

A Sound Foundation

Ultrasound, in terms of its physical properties, is nothing more than an extension of the audio frequencies, lying beyond the range of human hearing, which generally cuts off at about 20kHz. As such, ultrasound devices operate with frequencies from 20kHz on up to several gigahertz. Ultrahaptics settled on a carrier frequency of 40kHz for its system. Not only can humans not hear anything above 20kHz, we can’t


feel them, either. The receptors within human skin can only detect changes in intensity of the ultrasound. The 40kHz ultrasound frequency must therefore be modulated at far lower frequencies that lie within the perceptual range of feeling, which turns out to be a fairly narrow band of about 1-400Hz. As to how we feel ultrasound, haptic sensation is the result of the acoustic radiation force that is generated when ultrasound is reflected. When the ultrasound wave is focused onto the surface of the skin, it induces a shear wave in the skin tissue. This in turn triggers mechanoreceptors within the skin, generating the haptic impression. (Concern over absorption of ultrasound is mitigated by the fact that 99.9% of the pressure waves are fully reflected away from the soft tissue.)

Feeling is Believing

Ultrahaptics’ secret sauce, as you might imagine, lies in its algorithms, which dynamically define focal points by selectively controlling the respective intensities of each individual transducer to create fine haptic resolutions, resolving gesture-controlled actions with fingertip accuracy.


How does Ultrahaptics’ midair touch with ultrasound work? The modulated ultrasound waves, which are precisely controlled, are transmitted from an array of transducers such that the resulting interference pattern creates focal points in midair, indicated here in green. See the full video here.



When several transducers are focused constructively on a single point—a point being defined by its x, y, z coordinates—the acoustic pressure increases to as much as 250 pascals, which is more than sufficient to generate tactile sensations. The focal points are then isolated by the generation of null control points everywhere else. That is, the system outputs the lowest intensity ultrasound level at the locations surrounding the focal point. In the algorithm’s final step, the phase delay and amplitude are calculated for each transducer in the array to create an acoustic field that matches the control point, the effect being that ultrasound is defocused everywhere in the field above or below that controlled focus point.

Things get more interesting when modulating different focal points at different frequencies to give each individual point of feedback its own independent “feel.” In this way the system is not only able to correlate haptic and visual feedback, but a complete solution can attach meaning to noticeably different textures so that information can be transferred to the user via the haptic feedback.

The API gives the ability to generate a range of different sensations, including:

• Force fields: a use case for this could include utilization in domestic appliances, for example a system warning that a user is about to put their hand on a hob that is not completely cool.

• Haptic buttons, dials, and switches: these are particularly interesting in the automotive industry, where infotainment controls, for example, can be designed to be projected onto a user’s hand without the driver having to look at the dashboard.

• Volumetric haptic shapes: in a world where virtual and augmented reality could become part of our everyday lives, one of the missing pieces of the puzzle is the ability to feel things in a virtual world. Ultrahaptics’ technology can generate different shapes, giving a haptic resistance when users, immersed in a virtual world, are expecting to feel an object.

• Bubbles, raindrops, and lightning: the range of sensations that can be generated is vast; it can range from a “solid” shape to a


sensation such as raindrops or virtual spiders. As well as being of interest to the VR gaming community this is also something that will be extremely interesting for location-based entertainment. These sensations are generated by modulation of the frequency and the wavelength of the ultrasound, and these options are some of several parameters that can be adjusted by the API to create different sensations. The location of the focal point, determined by its three-dimensional coordinates, is also programmed via the system’s API.

Gesture Tracking Gets Touchy

Gesture control, of course, requires a gesture tracking sensor/controller—for example the Leap Motion offering, which Ultrahaptics has integrated into its development and evaluation system. The controller determines the precise position of a user’s hands—and fingers—relative to the display surface (or hologram, as the case may be). Stereo cameras operating with infrared and augmented to measure depth provide high-accuracy 3D spatial representation of gestures that “manipulate” the active haptic field. The system can use any camera/sensor; the key is its ability to reference the x, y, z coordinates through the Ultrahaptics API.


In the Interest of Transparency

Another key component of a haptic feedback system is the medium over which the user interacts—in this case, a projected display screen or device, beneath which is the transducer array. The chief characteristic of the display surface/device is its degree of acoustic transparency: the display surface must allow ultrasound waves to pass through without defocusing and with minimum attenuation. The ideal display would therefore be totally acoustically transparent. The acoustics experts at Ultrahaptics have found that a display surface perforated with 0.5mm holes and 25% open space reduces the impact on the focusing algorithm while still maintaining a viable projection surface. In time, we may see acoustic metamaterials come into play. By artificially creating a lattice structure within a material, it is possible to correct for the refraction that occurs as the wave passes through the material. This would enable the creation of a solid material that permits a selected frequency of sound to pass through it. A pane of glass manufactured with this technique would provide the perfect display surface. It has also been shown that such a material could enhance the focusing of the ultrasound by acting as an acoustic lens. But, again, we’ll have to wait for this; acoustic metamaterial-based solutions are only beginning to emerge. In the


meantime, surface materials that perform well include woven fabrics, such as those that would be used with speakers; hydrophobic acoustic materials, including the range from Saati Acoustex, which also protect from dust and liquids; and perforated metal sheets.

Generating 3D Shapes

Ultrahaptics’ system is not limited to points, lines, or planes; it can actually create full 3D shapes—shapes you can reach out to touch and feel such as spheres, pyramids, prisms, and cubes. The shapes are generated with a number of focal points projected at (x, y, z) positions, which move as their locations are updated at the chosen refresh rate. Ultrahaptics continues to investigate this area to push the boundaries of what can be achieved.
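To make the refresh-rate idea concrete, here is a small, purely illustrative sketch (the function name, geometry, and 200Hz refresh rate are assumptions, not Ultrahaptics’ software) that recomputes a ring of focal points each frame to suggest a curved surface hovering above the array:

    import numpy as np

    def ring_of_focal_points(center, radius, n_points, frame, refresh_hz=200):
        """Return n_points (x, y, z) focal points on a horizontal ring; the ring
        rotates slightly each frame so the hand feels a continuous surface."""
        start = 2 * np.pi * frame / refresh_hz
        angles = start + np.linspace(0.0, 2 * np.pi, n_points, endpoint=False)
        cx, cy, cz = center
        return [(cx + radius * np.cos(a), cy + radius * np.sin(a), cz) for a in angles]

    # One frame's worth of targets: an 8-point ring, 4 cm across, 20 cm above the array.
    targets = ring_of_focal_points(center=(0.0, 0.0, 0.20), radius=0.02, n_points=8, frame=0)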

Evaluation & Development Program

For developers looking to experiment and create advanced prototypes with 3D objects or AR/VR haptic feedback, the Ultrahaptics


Evaluation Programme includes a development kit (UHEV1) providing all the hardware peripherals (transducer array, driver board, Leap Motion gesture controller) and software, as well as technical support to generate custom sensations. The square form factor transducer platform comprises an array of 16×16 transducers—a total of 256 ultrasound speakers—driven by the system’s controller board, a system architecture consisting of an XMOS processor for controlling the transducers. The evaluation kit offers developers a plug and play solution that will work in conjunction with any computer that operates Windows 8 or above, or OSX 10.9 or above. In short, the kit has everything you need to implement Ultrahaptics in your product development cycle. The UHDK5 TOUCH development kit is available through Ultrahaptics distributor EBV and includes a transducer array board, gesture tracking sensor, software suite, and fully embedded system architecture (microprocessor and FPGA). The Sensation Editor library includes a range of sensations that can be configured to individual design requirements as well as enabling the development of tailored sensations using the software suite provided. MI



AUTOMOTIVE ENGINEERING

An Introduction To Automobile Aerodynamics

More Than Meets the Eye

Sometime early in the 20th century, motorized vehicles became a reality, and the race to improve road infrastructures and vehicle speed had begun. Transportation speeds rapidly increased, and when legislators observed an open field for imposing new restrictions, speed limits were invented.

By Joseph Katz, Ph.D.


Figure 1 • Increase of vehicle total drag and tire rolling resistance on a horizontal surface, versus speed (measured in a tow test of a 1970 Opel Record).


In most cases passenger safety, fuel saving, and environmental concerns were cited (which all sound politically correct). It turns out that the science of aerodynamics is directly tied to all of these elements, and most of us intuitively relate higher speeds to reduced fuel economy. The science of automotive aerodynamics, however, is not limited to external aerodynamics: it includes elements such as engine cooling, internal ventilation, air conditioning, aerodynamic noise reduction, high-speed stability, dirt deposition, and more. In the following discussion, for the sake of brevity, we’ll focus on external aerodynamics. To demonstrate the effect of aerodynamics on vehicles, let us start with a simple example: the drag force (resisting motion), which also drives the shape and styling of modern vehicles. The forces that a moving vehicle must overcome are the tire rolling resistance, the driveline friction, elevation, vehicle acceleration changes, and also

(Figure 1 plot: resistance force in newtons versus speed in km/h, from 0 to 140 km/h, showing the tire rolling resistance curve and the combined aerodynamic drag + rolling resistance curve.)



aerodynamics. Let us assume that the vehicle moves along a flat surface at a constant speed and the external forces are limited to the tire friction and to the aerodynamic drag. Such an experiment is described in Fig. 1, where the data was obtained from a towing test. A careful examination of the data in this figure reveals that the aerodynamic drag increases with the square of the velocity, while all other components of the drag force change only marginally. Therefore, engineers devised a non-dimensional number, called the drag coefficient (CD), which quantifies the aerodynamic sleekness of the vehicle configuration. The definition of the drag coefficient is:

CD = D / (0.5 ρ U² S)

where D is the drag force, ρ is the air density, U is vehicle speed, and S is the frontal area. One of the nice aspects of this formula is that the coefficient doesn’t change much with speed, and it basically represents how smoothly the vehicle slices through the oncoming airstream. Recall that the power (P) to overcome the aerodynamic resistance is simply the drag (D) times velocity (U ), so we can write:

P = D • U = CD • 0.5 ρ U³ S

This means that if we drive our car twice as fast as our neighbor, then we need a bigger engine that delivers eight times more power (assuming similar vehicles). These are exactly the arguments that led to the infamous 55 mph speed limits back in 1974! By the way, using a similar formula to the drag coefficient, a lift coefficient (CL) can be defined, indicating how much aerodynamic lift is created by the vehicle’s shape. So, if driving power requirements and fuel consumption reduction depend strongly on a vehicle’s drag coefficient times its frontal area, what is the order of magnitude of CD? The table in Fig. 2 shows the range of the above coefficients for a range of typical configurations. In this figure, the first configuration represents a streamline-shaped body, and a drag coefficient in the range of 0.025 to 0.040 can be


Figure 2 • Range of the lift and drag coefficients (based on frontal area) for generic ground vehicle shapes.

    Configuration                          CL       CD
    1  Low drag body of revolution         ~0       0.04
    2  Low drag vehicle near the ground    0.18     0.15
    3  Generic automobile                  0.28     0.35
    4  Prototype race car                  -3.00    0.75


expected (and the value of 0.04 is shown in this table). Also, for such a symmetric body, far from the ground, no lift force is expected. Keeping a streamlined shape, but bringing it close to the ground and adding wheels increases the drag to a level of CD = 0.15, but the long boat tail is impractical for most vehicles. Also note that this geometry produces a significant level of lift. For practical sedan configurations (#3), both the drag and lift increase significantly, beyond the level of the streamlined shape. Finally, a high downforce prototype race car shape is added to demonstrate the extreme range of the drag and lift coefficients. The high downforce (negative lift) for such race cars is needed for better tire adhesion (resulting in faster laps), but not necessarily faster maximum speeds. The large increase in drag is a result of the increased negative lift (i.e., nothing comes for free).
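A quick back-of-the-envelope calculation shows why these coefficients matter so much at highway speeds. The snippet below simply plugs numbers into the drag and power formulas given earlier; the frontal area of 2.2 m² and the air density of 1.2 kg/m³ are assumed round values, and CD = 0.32 is a typical modern sedan figure.

    # Aerodynamic drag power, using D = CD * 0.5 * rho * U^2 * S and P = D * U.
    rho = 1.2   # air density, kg/m^3 (assumed)
    cd = 0.32   # drag coefficient of a typical sedan
    s = 2.2     # frontal area, m^2 (assumed)

    def aero_drag_power_kw(speed_kmh):
        u = speed_kmh / 3.6                     # km/h -> m/s
        drag_newtons = cd * 0.5 * rho * u**2 * s
        return drag_newtons * u / 1000.0        # watts -> kilowatts

    for v in (50, 100, 120):
        print(f"{v:>3} km/h: {aero_drag_power_kw(v):4.1f} kW spent on aero drag alone")
    # Doubling the speed from 50 to 100 km/h multiplies this power by eight (2 cubed).

At the higher speeds the aerodynamic share grows far faster than rolling resistance, which is consistent with the trend measured in the towing test of Fig. 1.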


Figure 3 • Schematic description of the airflow over the centerline of a generic automobile.


Next, with the aid of Fig. 3, let us speculate about the relation between a vehicle’s shape and the resulting lift and drag coefficients. First, it appears that flow above the vehicle moves faster than below it, and if it follows the curved shape of the vehicle, we call it attached flow. However, at the back of the vehicle, the flow cannot follow the sharp downward turn, and so this region is called “separated flow.” At this point one must remember the theories of the Swiss scientist Daniel Bernoulli (1700-1782), who postulated that at higher speeds the pressure is lower. Therefore, the pressure on the upper surface of the automobile shape in Fig. 3 will be lower than on its lower surface, resulting in lift. Also at the front, the airflow almost stops and the frontal pressure is higher than in the back, where (because of the flow separation) it is low due to the higher velocity at the rear edge of the roof. This very short discussion attempts to describe the origins of lift




Figure 4 • Effect of adding a rear wing to a ground vehicle.


and drag due to the pressure distribution over the vehicle. However, one must remember that in a very thin layer (called the boundary layer, denoted δ in Fig. 3) near the vehicle surface there is a so-called “skin friction” which also adds to the drag coefficient (but its contribution to CD in automobiles is usually very small). In many passenger cars, rear wings or spoilers are added to increase downforce (or reduce lift). This interaction can be demonstrated when mounting a rear wing to the generic ellipsoid shape of Fig. 4 (having a smooth underbody). The expected streamlines, and the partial flow separations at the rear, are depicted in the upper part of this figure. When an inverted wing is added at the back, the flow under the ellipsoid accelerates as a result of the lower base pressure (at the back), induced by the wing. The higher speed


causes more downforce on the body, apart from the downforce created by the wing itself. Furthermore, on many occasions, the high-speed flow created near the wing partially reattaches the flow on the body, reducing the area of flow separation. This simple example demonstrates why proper mounting of a rear wing can increase the downforce of a vehicle by more than the expected lift of the wing itself!

Methods Used for Evaluating Vehicle Aerodynamics

Evaluation of vehicle aerodynamics and corresponding refinements are a continuous process and an integral part of automotive engineering, not limited to the initial design phase only. Typical analysis and evaluation tools used in this process may include wind tunnel testing, computational prediction, or track testing. Each of these methods may be suitable for a particular need. For example, a wind tunnel or a numeric model can be used during the initial design stage prior to the vehicle being built. Once a vehicle exists, it can be instrumented and tested on the track.

Computational Methods

The integration of computational fluid dynamic (CFD) methods into a wide range of engineering disciplines is rising sharply, mainly due to the positive trends in computational power and affordability. One of the advantages of these methods, when used in the automotive industry, is the large body of information provided by the “solution.” Contrary to wind tunnel or track tests, the data can be viewed, investigated, and analyzed over and over, after the “experiment” is concluded. Furthermore, such virtual solutions can be created before a vehicle is built and can provide information on aerodynamic loads on various components, flow visualization, etc. A typical solution depicting the surface pressures on the body of a race car and the direction of some streamlines is shown in Fig. 5. Such information, as noted, can be used by engineers to improve vehicle performance, as in reducing drag, or increasing downforce


(for race cars). While the computational methods appear to be the most attractive, computational tools are not perfect and they require highly knowledgeable aerodynamicists to run and interpret those computer codes.

Figure 5 • Typical results of CFD showing surface pressure distribution and streamlines near an open-wheel race car. Image courtesy of TotalSim, US.

Wind Tunnel Methods

Wind tunnels offer the luxury of testing in a highly controlled environment and with a variety of instrumentation which need not be carried on the vehicle. Also, if the vehicle hasn’t yet been built, smaller scale models can be tested. Wind tunnels were used extensively for airplane development, but the use of aeronautical wind tunnels for


automotive testing introduced two concerns. The first is the small clearance between the vehicle underbody and the stationary floor of the test section; the second is related to how to mount the rotating wheels. One of the solutions is to use “moving ground,” which is a thin but strong belt running on the floor (and also turning the wheels) at the same speed as the air. Such a facility (Windshear in NC) is shown in Fig. 6, where full-scale vehicles can be tested. See the strut on the side, which holds the car in position and also measures the forces required to hold it there.

Figure 6 • A sedan, as mounted in the Windshear wind tunnel test section. Note the sliding belt under the car, simulating the moving road. Image courtesy of Windshear, Inc.

Track Testing

Some of the difficulties inherent to wind tunnel testing are simply nonexistent in full-scale aerodynamic testing on the track. Rolling wheels, moving ground, and wind tunnel blockage correction are all resolved, and there is no need to build an expensive smaller scale model. Of course a vehicle must exist, the weather must cooperate, and the costs of renting a track and instrumenting a moving vehicle must not upset the budget. Because of the above-mentioned advantages, and in spite of the uncontrolled weather and cost issues, this form of aerodynamic testing has considerably improved in recent years. One of the earliest forms of testing was the coast-down test to determine the drag of a vehicle. In spite of variation in atmospheric conditions and inconsistencies in tire rolling resistance, reasonable incremental data can be obtained. With the advances in computer and sensor technology, by the end of the 1990s the desirable forces, moments, or pressures could be measured and transmitted via wireless communication at a reasonable cost.


Generic Automobile Shapes and Aerodynamics

Figure 7 • Vortex flow on some generic automobile shapes: [a] slanted upper surface, [b] “three-box” body (with separated-flow regions), [c] tapered lower surface, [d] basic venturi, [e] flow near the A-pillar (A-pillar vortex and oscillating mirror wake).

The next question is how a vehicle’s shape affects its aerodynamics. Prior to answering this with typical drag or lift coefficients, let us look at some generic trends, as depicted in Fig. 7. For example, when slanting the rear upper surface of a generic body (Fig. 7a), the upper air swirls near the sides and creates two vortices, as shown. This vortex-dominated flow is present for a slant-angle range of 10° to 30°



(slant angle is measured relative to a horizontal line). Usually, such a vortex structure creates drag and also lift because of the high velocity under the vortices. Another typical pattern of flow separation, frequently found on three-box-type sedans, is depicted in Fig. 7b. In this case, a separated-flow bubble, with locally recirculating flow (vortex), is observed in the front, along the junction between the bonnet and the windshield. The large angle created between the rear windshield and trunk area results in a second, similar recirculation area. One can see this on a rainy day when the water droplets are not blown away as the car moves faster. When introducing a slanted surface to the lower aft section of the body (as in Fig. 7c), a similar trend can be expected, but now the lift is negative because of the low pressure on the lower surface. This principle can be utilized for race cars, and for moderate slant angles (less than 15˚) an increase in the downforce is observed. In the racing circuits, such upward deflections of the vehicle lower surface are usually called “diffusers.” However, a far more interesting case is when two side plates are added to create an underbody tunnel, sometimes called venturi (Fig. 7d). This geometry can generate very large values of negative lift, with only a moderate increase in drag. Furthermore, the downforce created by this geometry increases with smaller ground clearances, and also when pitching the vehicle’s nose down (called rake). A closer look at the flow near a road car may reveal more areas with vortex flow, and as an example, the A-pillar area is shown in Fig. 7e. The main A-pillar vortex is responsible for water deposition while driving in the rain, and in addition, the rear view mirror creates an oscillating wake. This vortex flow near the rear view mirror is also responsible for vortex noise during high-speed driving.

Passenger Cars

After the short discussion on generic shapes, let us return to typical passenger car shapes. Possible variants offered by a particular manufacturer may have one of the generic shapes depicted in Fig. 8.


Figure 8 • Generic shapes of most popular passenger cars. Typical drag coefficients are provided in Table 1.

Table 1 • Drag and lift coefficients of typical passenger cars and the effect of opening the windows. Note that front (CLf) and rear (CLr) lift is provided only for two cases.


The reported aerodynamic data usually depends on measuring methods and facilities. For example, most manufacturers will test full-scale vehicles on the road or in a wind tunnel (but data may be affected by using or not using moving ground, or by environmental effects in coast-down testing, etc.). In most cases, though, a station wagon will have slightly less drag than the sedan or a well-designed hatchback (see the slant angle problem in Fig. 7). Also, the flow usually separates behind the windshield of open-top cars (convertibles), which explains why their drag is typically higher. Lastly, SUVs are based on existing trucks and have a boxy shape and edgy corners, and consequently, their drag is the highest.

Also, the conventional wisdom that driving with windows closed and air conditioning on saves fuel is based on the fact that opening the windows increases the vehicle’s drag. Typical incremental drag coefficient numbers comparing a vehicle with fully closed and fully opened windows are also shown in Table 1. The largest increment

    Vehicle        CD          CLf      CLr      ΔCD (open window)
    Sedan          0.32        0.067    0.114    ~0.05
    Wagon          0.30                          ~0.04
    Hatchback      0.31                          ~0.03
    Convertible    0.40        0.011    0.143
    SUV            0.40-0.50                     ~0.06



Table 2 • Computed breakdown of the drag components on a typical sedan.


    Feature                     ΔCD
    Bodywork                    0.050
    Rear view mirror            0.015
    Rear surfaces               0.085
    Engine bay                  0.024
    Cooling                     0.048
    Underbody + chassis         0.085
    Front wheel + suspension    0.025
    Rear wheel + suspension     0.023
    Total drag coefficient      0.355

is with boxy shapes, as shown for the SUV. Also, opening just one window at lower speeds will create low-frequency pressure fluctuations (buffeting), which can be quite annoying.

It is also interesting to investigate which part of the vehicle contributes to the overall drag (and how much). This is not a simple question, because such a breakdown of the total drag is difficult to measure experimentally and may depend on the CFD method used (when evaluating numerically). Some estimated numbers, based on computations, are presented in Table 2 (for a typical sedan, as at the top of Fig. 8). Note that the most dominant contributors are the underbody and the rear surfaces (behind the rear window and trunk).

Aerodynamics is often applied to improve comfort in open-top vehicles. Even at moderate speeds, aerodynamic buffeting (pressure fluctuations) caused by opening a window or the sunroof of a sedan can create considerable discomfort. As an example, the reversed flow behind the windshield of a convertible car is depicted in Fig. 9a. In this case, the unsteady reverse flow can blow the driver’s hair into his/her face, interfering with concentration, or simply blow away items inside the car. A typical solution is a moveable screen or a rear wind deflector that blocks the reverse flow path (see Fig. 9b). Such devices can be controlled automatically, rising at speed and retracting at low speeds. Such wind deflectors can also be mounted at the top of the windshield, as shown in Fig. 9c. By redirecting the flow over the whole open top of the vehicle, the unpleasant wind gusts are eliminated. Such a method is quite simple and effective at


Figure 9 • Aerodynamic devices aimed at improving comfort: A rear wind deflector behind the driver of an open-top car (b) or at the top of the windshield (c).



low speeds but will increase drag at higher speeds. Finally, let us look at an example demonstrating the unpredictability of aerodynamics. Pickup trucks were designed for work, and naturally their aerodynamics is not ideal. Because of consumer demand there are single/dual cabin and short/long bed versions (see


Figure 10 • Schematic depiction of the flow field above a single cabin (a) and extended cabin (b) pickup truck. Opening the tailgate lowers the streamlines.


model shapes in Fig. 10). Interestingly, in most cases, lower drag numbers were measured for the longer cabin with the shorter bed. In addition, lowering the tailgate actually increases drag—the opposite of what is expected! Before trying to explain, let us observe some experimental drag coefficient numbers (lift is usually not provided). Typical drag coefficient numbers for such pickup trucks are about CD ~ 0.45 to 0.50. In this particular case, the drag numbers are shown in Table 3. These results are representative of most pickup trucks, where lowering the tailgate has minimal effect and in most cases even (slightly) increases the drag. The tonneau is a simple cover of the truck


Table 3 • Typical drag coefficients of pickup trucks and the incremental effects of tailgate and tonneau.


    Configuration            Baseline CD    Tailgate Down    Tonneau
    Single cabin truck       0.483          +1.0 %           -7.0 %
    Extended cabin truck     0.472          +1.8 %           -5.7 %


bed so that the upper surface is flat, and it seems to lower the drag in most cases. Next we can prove that in aerodynamics, anything can be explained. Let us observe the streamline following the cabin’s rooftop. Referring to Fig. 10, it appears that for the short bed and extended cabin (b), this streamline is positioned above the rear tailgate, which is placed within the separation bubble behind the cabin. As a result, less drag is expected and the tailgate open/closed position may have less effect (the numbers above show an increase of 1.8% drag, but in some cases similar reduction in drag is reported). For the short cabin (a) and the long bed, the separation bubble behind the cabin is shorter and the rooftop streamline may hit the tailgate area. When lowering the tailgate, the rooftop streamline could be displaced lower, which results in a faster flow over the cabin, and hence lower pressure behind and above the cabin. An increase in drag and lift is expected, in spite of the gain realized by lowering the tailgate! MI

For Further Reading

Sumantran, V., and Sovran, G., Vehicle Aerodynamics, SAE PT-49, SAE International, Warrendale, PA, 1996.
Hucho, W.-H., Aerodynamics of Road Vehicles, 4th edition, SAE International, Warrendale, PA, 1998.
Katz, J., Race Car Aerodynamics: Designing for Speed, 2nd edition, Bentley Publishers, Cambridge, MA, 2006.
Milliken, W.F., and Milliken, D.L., Race Car Vehicle Dynamics, SAE International, Warrendale, PA, 1995.
Katz, J., Automotive Aerodynamics, Wiley and Sons, Hoboken, NJ, 2016.

About the Author

Dr. Joseph Katz is Professor of Aerospace Engineering at SDSU, San Diego, CA, where he pursues a wide variety of research interests. This article is based on Chapter 7 of his book, Automotive Aerodynamics.



HABITAT TECHNOLOGY

Nanotechnology could provide the very high-strength, low-weight fibers that would be needed to build the cable of a space elevator. Image by artist Pat Rawling.

The phrase “from the ground up” may one day give way to “from the asteroid down.” At least that’s the way the speculative architects at Clouds AO see it. The firm’s founder, Ostap Rudakevych, is advocating for the concept of “atmosphere as site,” challenging terra firma’s fundamental place in architecture. His ideas do indeed posit a


Living aboard an asteroid-tethered tower



radical change in the relationship between living space and outer space. But is it possible? Many think so. Rudakevych’s ideas are essentially a hack of two big concepts that NASA has actually been exploring with considerable energy. The first is the notion of a space elevator. While an idea that has been much travelled in science fiction, current and emerging technologies appear to bring the idea into the realm of feasibility. In short, if one were to extend a cable from Earth at the equator straight up into space to a distance of 35,000 kilometers (~22,000 miles), the cable would be held in tension, thanks to the forces exerted by Earth’s rotation. And because the cable would be taut, it could be used as a track to transport materials—and people—into space: a space elevator. The cable, of course, would have to be exceptionally strong to support even its own weight. Carbon nanotubes, the strongest and stiffest of wonder materials yet fabricated, may or may not be up to the job. Such nanotube cables, if they can be fabricated without strength-compromising defects, hold great promise as they have


been constructed with a length-to-diameter ratio of 132,000,000:1—just what you need for such a span. The second big idea is NASA’s Asteroid Redirect Mission (ARM). Originally planned for 2021 but now apparently a victim of budget cuts, the robotic mission would visit a large near-Earth asteroid, collect a large boulder from its surface, and redirect it into a stable orbit around the moon where it could be explored further.

Orbital mechanics for an asteroid-suspended structure: the geosynchronous orbit matches Earth’s sidereal rotation period of one day. The tower’s position in the sky traces out a path in a figure-8 form, returning the tower to exactly the same position in the sky each day. Ground trace annotated with 24-hour segments corresponding to the tower’s position over a specific geographic feature. Credit: Clouds Architecture Office.

An asteroid is relocated into a geosynchronous orbit and affixed with supporting cables to support the tower below. Credit: Clouds Architecture Office.

Rudakevych speculates that if this much can be done, then eventually an asteroid could ultimately be redirected into Earth orbit. And if we were able to nab one with sufficient mass—an Itokawa-class asteroid, for example (less than a kilometer across)—then we could also drop a cable down from it to suspend a structure, which could span the last 20 miles (well


below the Van Allen radiation belts) and nearly scrape the Earth’s surface (a groundscraper?). Of course, moving an asteroid of such size would require a great deal of energy, or a very long time. It could, perhaps, in the absence of the ARM program, be redirected by orbiting a satellite around it as a tractor or through the use of a solar sail. Rudakevych’s firm has proposed—and designed—an asteroid-suspended structure they call the Analemma (after the figure-eight pattern the sun traces in the sky over the course of a year, an effect of Earth’s orbit). The building—a 20-mile high tower—would be placed in an eccentric geosynchronous orbit, which would allow it to travel between the northern and southern hemispheres on a daily loop about the equator. The ground trace for this pendulum tower would be a figure eight, where the tower would move at its slowest


The upper reaches of Analemma would extend beyond the troposphere. Credit: Clouds Architecture Office.

speed at the top and bottom of the figure eight, allowing the possibility for the tower’s occupants to interface with the planet’s surface at these points. The proposed orbit is calibrated so that the slowest part of the tower’s trajectory occurs over New York City. We recently visited with Rudakevych to learn more.

What was the genesis of this project?

Earth’s surface is subject to various stresses, such as earthquakes, floods, landslides, tsunamis, etc. Over half of the world’s population lives in large cities, 60% of which are located on coasts and thus vulnerable to effects from sea level rise. We’ve been studying various strategies for degrounding architecture as a way of addressing the dangers and costs related to these potential calamities.

What are the major technology problems yet to solve in order to realize this vision?


We are certainly cognizant of the various technological challenges in realizing this project—challenges such as material strength, connectivity to utility systems, atmospheric drag, existing satellite paths, space junk, terrorism, Earth’s oblong distortion from pole to equator, etc. Some of the issues with the project as proposed can be ameliorated by changing the orbit type from a geosynchronous orbit to a geostationary orbit. A stationary orbit would obviate the ground speed, atmospheric drag, and connectivity issues; however, the tower would be limited to equatorial locations.

Analemma would pass over a variety of landscapes during the course of its daily orbit. Parachutes offer a quick way down to the surface. Credit: Clouds Architecture Office.

What are the physics involved in suspending such a structure from an asteroid?


The concept depends on well-known orbital mechanics and is essentially a tethered satellite system. The center of mass for the combined system would need to be found and placed into geo orbit (see The Universal Orbital Support System). But the physical stresses involved would be enormous and beyond the strength of materials currently available.

Unrolled Orbital Path: Chart showing a typical daily cycle for an inhabitant of Analemma. Business is conducted at the lower end of the tower (F), while sleeping quarters are approximately 2/3 of the way up. Devotional activities are scattered along the highest reaches (A, B, D), while surface transfer points (G) take advantage of high topography. The size and shape of windows changes with height to account for pressure and temperature differentials. The amount of daylight increases by 40 minutes at the top of the tower due to the curvature of the Earth. Credit: Clouds Architecture Office.
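As a rough sanity check on the “geo orbit” mentioned in that answer, the altitude follows from requiring the orbit to keep pace with Earth’s sidereal rotation. This is a textbook calculation with standard constants, not anything taken from Clouds AO’s analysis:

    import math

    MU_EARTH = 3.986004418e14     # Earth's gravitational parameter, m^3/s^2
    SIDEREAL_DAY = 86_164.1       # seconds in one sidereal day
    EARTH_RADIUS = 6_378_000.0    # equatorial radius, m

    omega = 2 * math.pi / SIDEREAL_DAY             # required angular rate, rad/s
    radius = (MU_EARTH / omega**2) ** (1.0 / 3.0)  # from omega^2 * r = mu / r^2
    altitude_km = (radius - EARTH_RADIUS) / 1000.0
    print(f"geostationary altitude ≈ {altitude_km:,.0f} km")   # about 35,786 km

That is the roughly 22,000-mile figure quoted earlier; an asteroid-plus-tower system would need its combined center of mass parked roughly at this radius, with the hanging structure extending far below it.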


Prefabricated units are hoisted up and plugged into an extendable core which is then clipped onto the supporting cable. Credit: Clouds Architecture Office.

The Analemma would face many of the same challenges anticipated by the space elevator concept, not the least of which is space debris. This tower would be operating in a pretty cluttered orbit…

Jerome Pearson, who has brought much scientific rigor to the space elevator concept, notes the problem of interference with all other spacecraft and debris in Earth orbit: every satellite and every piece of debris would eventually collide with it. We’d definitely have to clean up the space. Notwithstanding, the tower would be a game changer for how we think about satellites. In the future when tethered systems are in place, it’s conceivable that instrument packages now being tossed into orbit could instead be fixed to tethers. The top of the tower could contain a platform for satellites currently operating within the proposed zone.

How would power be supplied to the tower?

Analemma would get its power from space-based solar panels. Installed above the dense and diffuse atmosphere, these panels would have constant exposure to sunlight, with a greater efficiency than conventional photovoltaic installations. Water would be filtered and recycled in a semi-closed loop system, replenished with condensate captured from clouds and rainwater.

How would residents travel between the tower and Earth’s surface?


Diagram showing a linkage-based ground interface to maximize contact with the surface while maintaining a constant velocity. See it in motion here. Credit: Clouds Architecture Office.

Personal drones, such as those in development by Airbus and Ehang, would allow people to move between the tower and the surface. There are six transfer points arranged along the path, located in places where the topography is higher to meet the underside of the building. There is a linkage at each station that maximizes contact with the ground, allowing people and goods to be transferred on and off the tower. We’ve since thought that a linear rail


type of transport could provide a more efficient interface with the surface, where the tower would engage with a car on a rail, allowing for people to get on and off the tower. People could also connect to the tower using large drones.

View of Analemma passing above buildings in midtown Manhattan. Credit: Clouds Architecture Office.

How would you imagine life on board such a structure? Obviously the experience would be quite different at the various altitudes along its 20-mile height…

It wouldn’t be much different from life in tall buildings existing now, like the ultra-high residential towers in Manhattan. The most recent ones are built as self-contained worlds with full amenities, such as in-house gyms, restaurants, pools, etc. Some even include small studio apartments on the lower floors for service staff (maids, cooks, butlers) that work on maintaining the luxury apartments of the ultra-rich


on the upper floors. Towers are increasingly being designed as self-contained worlds detached from their surroundings.

Click here for an enlarged view of the diagram. Credit: Clouds Architecture Office.

While researching atmospheric conditions for this project, we realized that there is probably a tangible height limit beyond which people would not tolerate the extreme conditions. For example, while there may be a benefit to having 45 extra minutes of daylight at an elevation of 20 miles, the near vacuum and -40°C temperature would prevent people from going outside without a protective suit. Then again, astronauts have continually occupied the space station (in orbit ~250 miles above Earth) for decades, so perhaps it’s not so bad?

• • •

As to when such a structure could be realized, Jonathan McDowell, an astrophysicist at the Harvard-Smithsonian Center for Astrophysics, puts it out 200 years. Not only are there fundamental technology issues to resolve and invent, but hazards abound. Rudakevych notes, “If the tower got tall enough, it could reach escape velocity and break off and spin through space. As it spun it would create its own gravity—a spectrum of gravity across the length of the tower from its central point.” Or, to posit another potential disaster scenario, McDowell adds, “In the event of a tether snap near the asteroid, the loosed cable could whip around the Earth, wrapping itself over the entire globe 1.2 times. The impact and collapse of a 20-mile building wouldn’t be good news, either.” Then again, as Arthur C. Clarke once said, “The only way to discover the limits of the possible is to go beyond them into the impossible.” MI

Learn more about Analemma and other Clouds Architecture Office projects at www.cloudsao.com.



HABITAT TECHNOLOGY

Yesterday’s Cities of Tomorrow

In June 1957—60 years ago—Mechanix Illustrated imagined a sun-filled, solar-powered life in a glassteel dome. Inspired by the so-called “Googie” style of architecture that yielded the many Atomic Age structures that dotted the western highways of the ’40s and ’50s, the Mechanix Illustrated cover anticipated the iconic LAX Theme Building as well as its contemporary, the Seattle Space Needle, built for the 1962 World’s Fair. The Jetsons also made their debut in 1962 with a vision of life 100 years out in a future replete with a Googie-oozing Orbit City skyline. But while sky-high, both the Jetsons’ cityscape and the Space Needle were decidedly Earth-bound.


June 1957



July 1963

Like the fin-laden cars that drew their inspiration from the Jet Age, they didn’t get airborne, either. The optimism of the age notwithstanding, visions of the future also had a shadow side: 1962 was also the height of the Cold War. While many were looking upward, some were looking down—way down—to the development of underground cities to provide shelter in the event of a nuclear war. The July 1963 issue of Mechanix Illustrated featured a vision of such a city, complete with detailed descriptions of the elaborate infrastructures that would make life underground bearable.


MI


ENGINEERING PRACTICE

By John Hershey, Ph.D.

The Path of Invention

Inventing is a great way to spend your time. Not only does it give you something intellectually exciting to do, it may result in a benefit to society and to you personally. Please know that I am not an attorney or a patent agent, and I do not presume to give you legal advice. But I am a named inventor on over 200 patents, having worked for General Electric’s




R&D Center in upstate New York for a couple of decades with some really good researchers and patent attorneys. I learned the patent system and the elements of inventive behavior. I turned this experience into the book The Eureka Method: How to think like an Inventor, published by McGraw-Hill. I wrote the book for a new generation of inventors, for people like you, who have ideas but questions about the patent system and the path to invention.

The Invention Mindset

It’s in our DNA. When we use a device or practice a method, we often wonder how the device could be made cheaper, faster, lighter, or the method more efficient, less onerous—in short, improved. Edison is often credited with putting it simply and directly: “There’s a way to do it better. Find it.” Improvement inventions do just that. Generally this class of invention is motivated by evolution of materials, chemicals, electronics, and all the other kit parts of modern systems. It is quite rare to find any product or method that cannot be improved by applying an evolved item or process step. The one invention I have seen that seems to have stayed still is Mr. Hooker’s “Animal Trap,” US patent 528671, issued November 6, 1894. As described by Mr. Hooker:

“The object of the present invention is to provide, for catching mice and rats, a simple, inexpensive and afficient [sic] trap adapted not to excite the suspicion of an animal, and capable of being arranged close to a rat-hole, and of being sprung by the animal passing over it when not attracted by the bait.”

The figure below is from his patent. Does it look familiar? Why has his invention persisted in a form so close to the original? There are probably two reasons. First, the parts required to make it are inexpensive, and a manufacturer can buy in bulk and celebrate the blessings of scale. Second, the invention continues to work well. Quite simply, the trap has not needed to evolve because the mouse has not evolved. As an aside, consider the (scary) case where the pest, or threat, does evolve, as with



various healthcare-associated infections (HAI), such as E. coli strains. See this remarkable story about how bacteria transform into superbugs. Many improvement inventions generally proceed with changes that leave most of the design and components the same. The addition of ridges to the paperclip is an excellent example. In his patent (US patent 1654076, issued December 27, 1927) the inventor, Mr. Griffith, said: “My invention relates to improvements in paper clips of the type known generally as ‘Gem’ clips and it has for its general object to provide a clip in which the portions of the surfaces which are adapted to contact with the sheets of paper or other like material held thereby are roughened so as to more firmly hold and grip such sheets.” The patent’s third figure, at right, should still look quite familiar today. Let’s now take a brief look at inventions that do not focus on improvements but rather are characterized by combinations of elements to form a new and useful system or method. As our


example, consider the digital camera and the internet. Seems like a natural mix for new services. Today, using a camera and the internet together is second nature to smartphone users. But look at the data in the table on the next page. It spans 13 years and enumerates the number of issued US patents up through a filing year having the terms “internet,” “digital camera,” and “internet & digital camera” in either their claims, their title, or their abstracts. See how long it took before the idea broke loose. Occasionally the results of combining different components are as singularly dramatic as was the movement combining individual coffee pods and coffee makers and the movement to combine the national treasure global positioning system (GPS) with stored maps and computer displays. For such highly impactful combinations we are tempted to assign the term “innovation” rather than invention. Whether there is a bright line that can be drawn between invention and innovation is something I’ve often wondered about without success. Bill Walker in Wired wrote the following about the two words:



    Filing Year    "Internet"    "Digital Camera"    "Internet" & "Digital Camera"
    2002           17861         2620                147
    2001           14978         2039                120
    2000           11081         1463                67
    1999           7031          950                 37
    1998           4225          621                 21
    1997           2297          337                 10
    1996           874           166                 2
    1995           199           84                  1
    1994           47            47
    1993           23            35
    1992           14            29
    1991           8             23
    1990           8             18

“People often use the words ‘invention’ and ‘innovation’ interchangeably. This is not only incorrect, but misses a few key subtleties in meaning that can change a conversation. Invention is about creating something new, while innovation introduces the concept of ‘use’ of an idea or method. While this difference is subtle, and these words are listed in every thesaurus that I checked as synonyms of each other, they are definitely not 100% interchangeable. An invention is usually a ‘thing,’ while an innovation is usually an invention that causes change in behavior or interactions.”


He may be on to something, but for the moment, as far as my being able to define innovation, I'll have to respond with Associate Supreme Court Justice Potter Stewart's comment on his inability to define hardcore pornography: "I know it when I see it."
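For readers curious how counts like those in the table above (and the bird feeder table later in this article) are produced, here is a minimal sketch of the idea in Python. It assumes the patent records have already been gathered into simple in-memory dictionaries; the field names and sample entries are hypothetical stand-ins, not a real USPTO interface.

from collections import Counter

# Hypothetical stand-ins for issued-patent records; in practice the "text"
# field would hold the patent's title, abstract, and claims merged together.
patents = [
    {"filing_year": 1999, "text": "A digital camera that uploads images over the internet"},
    {"filing_year": 1999, "text": "Method for caching internet content"},
    {"filing_year": 1995, "text": "Compact digital camera housing"},
]

internet_count, camera_count, both_count = Counter(), Counter(), Counter()

for patent in patents:
    text = patent["text"].lower()
    has_internet = "internet" in text
    has_camera = "digital camera" in text
    year = patent["filing_year"]
    if has_internet:
        internet_count[year] += 1
    if has_camera:
        camera_count[year] += 1
    if has_internet and has_camera:
        both_count[year] += 1

# Cumulative totals up through each filing year, as in the table above.
running_net = running_cam = running_both = 0
for year in sorted(set(internet_count) | set(camera_count)):
    running_net += internet_count[year]
    running_cam += camera_count[year]
    running_both += both_count[year]
    print(year, running_net, running_cam, running_both)

A real search would of course run the same membership test against the USPTO's full-text records rather than a three-item list, but the counting logic is no more complicated than this.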

Incubating Patentable Ideas

So, how do you come up with patentable ideas? I don't have perfect knowledge, but I do know how you may be able to attract them. The following are five ways that may work for you.

1 • Scaling Up

When an enterprise decides to grow quickly (the term "exponential" is often loosely applied), there will be many opportunities to simplify or substitute components of the original enterprise architecture. For example, when a business franchise goes from local to regional, from regional to nationwide, and from nationwide to worldwide, its logistical methods and implements will almost certainly have to change.



Get a head start on the thinking: envision where and how this might be done, and what improvement or combination inventions might be attractively posed to the business chieftains once you have secured intellectual property protection. The multipliers offered by scale can make even a small individual saving too valuable to ignore.

2 • Analogies—an Agar for More Answers than Questions

I remember how enthralled I was when I first learned of imaginary numbers, such as the number which, when squared, yields minus 4. The idea was remarkable to me because it seemed to imply an imbalance between the number of questions and the number of answers, the former exceeding the latter. After I mastered imaginary numbers, the balance seemed restored and the initial glow dimmed. But over time I have come to suspect that there may indeed be a great imbalance between the number of questions and the number of answers, and that it lies in a preponderance of answers. George de Mestral (1907–1990) was an electrical engineer who invented Velcro and named it by juxtaposing "vel" from velour (French for velvet) and "cro" from crochet (French for hook). As the story goes, he came to invent Velcro after observing the burrs he acquired on a nature walk.


I am a strong advocate for looking for analogies and seeing whether they engender patentable ideas, for there is a seemingly endless supply of analogies, and they are found everywhere. On a personal note, I was studying urban multipath when an analogy to cryptography seemed to arise. It led to a cryptographic paradigm in which the channel becomes the cryptovariable, and to a number of derivative patents. I also incorporated the concept in a text I wrote for McGraw-Hill, Cryptography Demystified, published in 2003.

3 • Government Regulations—Especially in the Communication Arts

We're all familiar with E-ZPass®, wireless fire alarms, baby monitors, cordless phones, Bluetooth®, and the panoply of other wireless gadgets that have mushroomed to fill catalogs and storefronts. How did they come about? They came about because people wanted them. Wireless means convenience. But how? How may I just transmit signals? Don't I have to apply to the government for permission? Pay a fee? File some forms? Get a license? The wonderful answer is "no." In the early 1980s the government made a proactive, pro-public move that energized the regulation-to-technology transition.




In 1981 the Federal Communications Commission (FCC) adopted a Notice of Inquiry [the bolding is mine]: "… for the authorization of certain types of wideband modulation systems. The Inquiry is unusual in the way that it deals with a new technology. In the past, the Commission has usually authorized new technologies only in response to petitions from industry. However, in the case of spread spectrum, the Commission initiated the Inquiry on its own, since its current Rules implicitly ban such emissions in most cases, and this prohibition may have discouraged research and development of civilian spread spectrum systems. As the next step in this proceeding, we are proposing in this Notice of Proposed Rulemaking rules that would authorize the use of spread spectrum under conditions that prevent harmful interference to other authorized users of the spectrum. We anticipate that this authorization will stimulate innovation in this technology, while meeting our statutory goal of controlling interference." And stimulate innovation it did! It was a prime example of inventors jumping on a technology to unleash all sorts of devices that consumers felt, or soon came to feel, they needed. The message: stay aware of inchoate regulation. It may open the gates to those who are ready.


4 • Gaming the System

The year 1895 was interesting for a number of reasons. In that year the Supreme Court declared the federal income tax unconstitutional, a Constitutional defect later remedied by the 16th Amendment. Also in that year it became harder for players to game the game of baseball: the infield fly rule was adopted. Before then, a pop-up near the infield could often yield two outs, depending on the number of outs, the runners on base, and the psychology of gaming the runners by either catching or purposely not catching the fly ball. Gaming—or counter-gaming—the system has been a national pastime for years, and it's a great way to conceive of inventions, especially in the security arts, where they are sorely needed. In 1888, an Inspector Bonfield told a Chicago Herald reporter: "It is a well known fact that no other section of the population avail themselves more readily and speedily of the latest triumphs of science than the criminal class. The educated criminal skims the cream from every new invention, if he can make use of it." It's not only mankind that games the system. There is the invention-inspiring squirrel.



There are very many patented bird feeders, so many that a bird feeder invention is termed a "crowded art." In 2006 I made some counts of the number of issued US patents that had the phrase "bird feeder" in their claims, title, or abstract. Of those, I counted the ones that also specified the word squirrel. Searching over the decade of the '90s, I found the data shown in the table below.

Filing Years    "Bird Feeder"    "Bird Feeder" & "Squirrel"
1990-1994       133              19
1995-1999       127              21

Why do we find such a strong coupling between bird feeder inventions and squirrels? I believe it is an incidental societal stasis. People stay enamored of birds, and squirrels continue to game the bird feeders, thereby bringing out the inventiveness of people as they continually seek to counter-game the squirrels.

5 • Recognize that Invention Never Stops

Why is the replacement of incandescent bulbs by arrays of LEDs in traffic lights such a large infrastructure undertaking? The answer derives from the impressive number of utility advantages that accrue through the improvement invention of substituting LED technology for incandescent light technology in traffic signals. Consider:
• Operating LED traffic signals requires less electrical power, on the order of a fifth of what it takes to run an incandescent signal.
• LED signals are brighter than the incandescent bulbs they replace.
• LED signals require less maintenance than incandescent signals.
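To put the first of those advantages in rough, back-of-the-envelope terms (the wattages and duty cycle here are illustrative assumptions, not figures from this article): if a 100 W incandescent signal head is replaced by a 20 W LED head, the one-fifth figure above, and the head is treated as lit around the clock, the saving is about 80 W × 8,760 h ≈ 700 kWh per head per year. An intersection with a dozen heads would then save on the order of 8,000 kWh annually, before the longer service life and reduced maintenance are even counted.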

But is there a downside to invention? And do the contingencies call for further invention? Consider the following, taken from the story "Energy-saving Traffic Lights Blamed in Crashes":

"Many communities have switched to LED bulbs in their traffic lights because they use 90 percent less energy than the old incandescent variety, last far longer and save money. Their great advantage is also their drawback: They do not waste energy by producing heat . . . Illinois authorities said that during a storm in April, 34-year-old Lisa Richter could see she had a green light and began making a left turn. A driver coming from the opposite direction did not realize the stoplight was obscured by snow and plowed into Richter's vehicle, killing her."

When major changes in infrastructure begin, think ahead and try to outthink the planners. Might there be problems? How could they be addressed using your ideas?




Some Advice to New Technologists Ready to Invent

Congratulations on your education and recognition in your technical art of choice. You have successfully emerged from a primarily academic environment set in the culture of engineering. You have seen how engineering has provided many benefits to society, and you may be eager to invent, to improve and augment the remarkable technology we all enjoy. I have a few suggestions that may ease your progress. Many of them relate to the differences between the academic environment and the culture of invention practitioners. This is the hardest bridge for many technologists to cross, probably because a formal technical education and a technical research job targeted at generating valuable company intellectual property, such as patents, both seem as though they should share a common culture. They don't.

1 • Learn the Basics of the Patent Process and its Laws

Failure to know and abide by the patent process guidelines may cost you the benefit of your intellectual inspirations and efforts. For example, within academia, prompt sharing of ideas and knowledge is the expected conduct. Sharing your ideas for something you may wish to patent, however, may constitute a disclosure that bars you from receiving a patent.


Another surprise is that a patent grants only negative rights. Your patent prohibits others from doing such things as making or using your invention, but it does not necessarily guarantee that you may make or use your invention yourself, as it may include components that are themselves patented and require a license to be used.

2 • Move in a Timely Fashion to Protect your Invention

The United States used to be a "first to invent" nation so far as patent priority was concerned. This meant that if you were, and could prove that you were, the first to conceive of an invention and diligently pursued its enablement, then you stood a good chance of being awarded a patent. That has changed: the US has joined the rest of the world in becoming a "first to file" nation, a much simpler arrangement. If you are the first to file your invention with the US Patent and Trademark Office (USPTO), then you will most likely have patent issuance priority over others who may have invented the same thing before you. Thus, speed is of the essence. Here this consideration illuminates a difference between the academic world and the domain of corporate research: in academia, many researchers are loath to publish their work until it has been refined and polished, with the relevant questions asked and answered.



In the patent world, so long as what you propose is useful, novel, and not obvious, and is taught so that someone of ordinary ability in the field can make and use your invention, you have most likely met the bar for receiving a patent. Somewhat surprisingly, there is no requirement that you understand why your invention works! It reminds me of the Mongol ruler Genghis Khan, who advised his people not to "bathe or wash clothes in running water during thunder." Good practical advice, but nowhere near an understanding of the physics of the danger.

3 • Crediting Others

An application for a patent requires identification of the inventors, where an inventor is someone who has made a meaningful contribution to at least one of the patent's claims. The claims are the most essential parts of the patent, as they delineate exactly what the patent covers and protects. By regulation, if someone significantly contributes to only a single claim of an application, that person must be listed as an inventor, a status coequal with that of an inventor who may have done all of the rest. It is difficult for some inventors to have to include others, as this seems to them an admission that their efforts were incomplete.


And for highly educated inventors, it is often difficult to accept that a patent and a peer-reviewed paper are very different things in spite of their many seeming similarities. The patent lists inventors; the paper lists authors. Inclusion or exclusion of someone from a paper's list of authors is not a legal matter but one of courtesy and culture. In a patent, the list of inventors must be scrupulously accurate, neither including non-inventors nor excluding actual inventors. Further, in a paper the order of the listed authors may connote relative prestige or weight of contribution; in a patent, the order of the listed inventors is meaningless. For example, suppose A and B became aware of a newly discovered electronic phenomenon and conceived a new device exploiting it. Their boss C directed them to construct a prototype and provided funding for the R&D. D joined them to do computer simulation, and E helped construct a prototype. F worked out a theory to explain why the device works. When the patent application was filed, only A and B likely qualified as inventors.

4 • Practice Clear Writing

Sounds simple and gratuitous, doesn't it? Yet picking at the nuances, the subtle ambiguities, and the vagueness that infect most of our writing is exactly what the legal profession ably practices when taking a stand against a patent's enforcement. I remember having to memorize long lists of prepositions in junior high school.



The memory of this came flooding back when I read of a finding of non-infringement in trials regarding US patent 4761290, "Process for making dough products." One of the patent claim's elements contained the wording: "heating the resulting batter-coated dough to a temperature in the range of about 400° F. to 850° F. for a period of time ranging from about 10 seconds to 5 minutes." I have bolded the preposition "to." Do you see the problem? The Court of Appeals for the Federal Circuit upheld a district court's finding of non-infringement. Judge Friedman ruled as follows, where I have bolded the preposition "at": "The sole issue in this appeal is the meaning of the following language in a patent claim: 'heating the resulting batter-coated dough to a temperature in the range of about 400° F. to 850° F.' The question is whether the dough itself is to be heated to that temperature (as the district court held), or whether the claim only specifies the temperature at which the dough is to be heated, i.e., the temperature of the oven (as the appellant contends). We agree with the district court that the claim means what it says (the dough is to be heated 'to' the designated temperature range) and therefore affirm."


5 • See and Hear the World Outside of the Lab

As you move from an academic environment into the larger world, remember that only a small fraction of its inhabitants are technically oriented. Take time to listen, to understand what people are interested in, to learn about their current fantasies and fears. Find out what they need and what they think they want. Engineers and technicians are often at the lower end of the emotional scale. Try to appreciate that a glass may be half full for some and half empty for others, and not offer just the sterile engineer's observation that the glass is twice as big as it needs to be. Best of luck! MI

About the Author
John Hershey, Ph.D. (electrical engineering) is a named inventor on over 200 US patents and the author or coauthor of nine books. He served in the CIA and the Department of Commerce for 15 years and in GE R&D for two decades. Dr. Hershey has served on the adjunct faculty at the University of Colorado, Boulder; Rensselaer Polytechnic Institute; and Union College, Schenectady, New York. He also served for five years as a program evaluator for ABET (Accreditation Board for Engineering and Technology), and is an elected fellow of the IEEE.


We hope you enjoyed this special edition of Mechanix Illustrated.

We'd love to hear your thoughts and learn more about what you'd like to see in future issues. Contact us here. In the meantime, please subscribe—it's FREE. And while you're at it, check out our other titles.

Stay up to date with fresh articles, events, and new eBooks by visiting and bookmarking www.TechnicaCuriosa.com. Please share this news with your friends and associates.



Issuu converts static files into: digital portfolios, online yearbooks, online catalogs, digital photo albums and more. Sign up and create your flipbook.