

Here’s How the Chip Shortage Will End Old fabs come to the rescue P. 5

The Pandemic’s Winners And Losers Lessons from a giant “forced experiment” P. 22

An Antenna for the Far Reaches JPL’s radical design for a Europa probe P. 34

The Brain of A Tiny Hunter

FOR THE TECHNOLOGY INSIDER AUGUST 2021




VOLUME 58 / ISSUE 8

AUGUST 2021

22 What We Learned From the Pandemic
The shock forced us to adapt; some innovations will persist. By Michele Acuto, Shaun Larcom, Ferdinand Rauch & Tim Willems

28 Lessons From a Dragonfly’s Brain
Modeling small, efficient neural networks may help researchers optimize missile defense systems. By Frances Chance

34 An Antenna Made for an Icy, Radioactive Hell
To assure the antenna survives on Europa, JPL’s engineers started from scratch. By Nacer E. Chahat

40 A Circuit to Boost Battery Life
A digital version of this usually analog circuit will save your phone’s battery. By Keith A. Bowman

5 News: Chip Shortage Endgame (p. 5), High-Altitude Balloons (p. 8), Heat Energy Storage (p. 11)

14 Hands On: A little bit of custom automation makes for a better workbench.

18 Crosstalk: Numbers Don’t Lie (p. 18), Internet of Everything (p. 20), Macro & Micro (p. 21)

48 Past Forward: IBM and the PC Revolution

ON THE COVER: Photo by Antagain/Getty Images




James W. Cortada at the IBM offices in Cranford, N.J., in the late 1970s.

BACK STORY


The Essential Question

How many IBM PCs can you fit in an 18-wheeler? That, according to historian James W. Cortada, is the most interesting question he’s ever asked. He first raised the question in 1985, several years after IBM had introduced its wildly successful personal computer. Cortada was then head of a sales team at IBM’s Nashville site. “We’d arranged to sell 6,000 PCs to American Standard. They agreed to send their trucks to pick up a certain number of PCs every month. So we needed to know how many PCs would fit,” Cortada explains. “I can’t even remember what the answer was, only that I was delighted that I thought to ask the question.”

Cortada worked in various capacities at IBM for 38 years. (Above, he’s standing in the parking lot of IBM’s distinctive building in Cranford, N.J., designed by Victor Lundy.) After he retired in 2012, he became a senior research fellow at the University of Minnesota’s Charles Babbage Institute, where he specializes in the history of technology. That transition might seem odd, but shortly before he joined IBM, Cortada had earned a Ph.D. in modern European history from Florida State University. And he continued to research, write, and publish during his IBM career.

This month’s Past Forward (an extended version of which is available online) describes the 1981 launch of the IBM PC. It’s drawn from Cortada’s authoritative history of Big Blue: IBM: The Rise and Fall and Reinvention of a Global Icon (MIT Press, 2019). “I was able to take advantage of the normal skills of a trained historian,” Cortada says. “And I had witnessed a third of IBM’s history. I knew what questions to ask. I knew the skeletons in the closet.” Even before he started the book, a big question was whether he’d reveal those skeletons or not. “I decided to be candid,” Cortada says. “I didn’t want my grandsons to be embarrassed about what I wrote.”

JAMES W. CORTADA





CONTRIBUTORS

 KEITH A. BOWMAN Bowman, a principal engineer and manager in the processor research team at Qualcomm, has been battling drooping voltage in processors for much of his career. He invented a circuit that’s been doing that for all of Qualcomm’s Snapdragon processors since 2016, and he calls it “one of the most significant things I’ve ever done.” Bowman hopes the circuit will pair with future digital low-dropout voltage regulators to save even more power, which he discusses in this issue [p. 40].

 NACER E. CHAHAT Chahat is a senior antenna engineer at NASA’s Jet Propulsion Laboratory. He, along with other JPL engineers, took on the “impossible” task of designing an antenna for missions to Europa. The moon’s frigid surface temperature, and a radioactive bombardment from Jupiter, mean that conventional antenna designs would fail quickly. In “An Antenna Made for an Icy, Radioactive Hell,” he explains how the engineers built an antenna that could survive Jupiter’s most intriguing moon.

 FRANCES CHANCE Chance is a principal member of the technical staff at Sandia National Laboratories and a computational neuroscientist by training. Chance’s current research, as she describes in this issue [p. 28], applies her knowledge of nervous systems toward the development of neuro-inspired algorithms and architectures. Her admiration of the aeronautical prowess of dragonflies began when she was a small child; she always found them too tricky to catch.

 FERDINAND RAUCH Rauch, an economist at the University of Oxford, and his fellow economists, Shaun Larcom, at the University of Cambridge, and Tim Willems, at the International Monetary Fund, studied how a 2014 disruption on the London Underground forced commuters to find better ways to get to work. Larcom got Michele Acuto from the University of Melbourne to help broaden this approach. Read their findings in “What We Learned From the Pandemic” [p. 22].


EDITOR IN CHIEF Susan Hassler, s.hassler@ieee.org EXECUTIVE EDITOR Glenn Zorpette, g.zorpette@ieee.org EDITORIAL DIRECTOR, DIGITAL Harry Goldstein, h.goldstein@ieee.org MANAGING EDITOR Elizabeth A. Bretz, e.bretz@ieee.org SENIOR ART DIRECTOR Mark Montgomery, m.montgomery@ieee.org PRODUCT MANAGER, DIGITAL Erico Guizzo, e.guizzo@ieee.org SENIOR EDITORS Evan Ackerman (Digital), ackerman.e@ieee.org Stephen Cass (Special Projects), cass.s@ieee.org Jean Kumagai, j.kumagai@ieee.org Samuel K. Moore, s.k.moore@ieee.org Tekla S. Perry, t.perry@ieee.org Philip E. Ross, p.ross@ieee.org David Schneider, d.a.schneider@ieee.org Eliza Strickland, e.strickland@ieee.org DEPUTY ART DIRECTOR Brandon Palacio, b.palacio@ieee.org PHOTOGRAPHY DIRECTOR Randi Klett, randi.klett@ieee.org ONLINE ART DIRECTOR Erik Vrielink, e.vrielink@ieee.org NEWS MANAGER Mark Anderson, m.k.anderson@ieee.org ASSOCIATE EDITORS Willie D. Jones (Digital), w.jones@ieee.org Michael Koziol, m.koziol@ieee.org SENIOR COPY EDITOR Joseph N. Levine, j.levine@ieee.org COPY EDITOR Michele Kogon, m.kogon@ieee.org EDITORIAL RESEARCHER Alan Gardner, a.gardner@ieee.org ADMINISTRATIVE ASSISTANT Ramona L. Foster, r.foster@ieee.org CONTRIBUTING EDITORS Robert N. Charette, S ­ teven ­Cherry, Charles Q. Choi, Peter Fairley, Maria Gallucci, W. Wayt Gibbs, Mark Harris, Jeremy Hsu, Allison Marsh, Prachi Patel, Megan Scudellari, Lawrence Ulrich, Emily Waltz EDITOR IN CHIEF, THE INSTITUTE Kathy Pretz, k.pretz@ieee.org ASSISTANT EDITOR, THE INSTITUTE Joanna Goodrich, j.goodrich@ieee.org DIRECTOR, PERIODICALS PRODUCTION SERVICES Peter Tuohy MULTIMEDIA PRODUCTION SPECIALIST Michael Spector ASSOCIATE ART DIRECTOR, PUBLICATIONS Gail A. Schnitzer ADVERTISING PRODUCTION +1 732 562 6334 ADVERTISING PRODUCTION MANAGER Felicia Spagnoli, f.spagnoli@ieee.org SENIOR ADVERTISING PRODUCTION COORDINATOR Nicole Evans Gyimah, n.gyimah@ieee.org EDITORIAL ADVISORY BOARD, IEEE SPECTRUM Susan Hassler, Chair; David C. Brock, Robert N. Charette, Ronald F. DeMara, Shahin Farshchi, Lawrence O. Hall, Jason K. Hui, Leah Jamieson, Mary Lou Jepsen, Deepa Kundur, Peter Luh, ­Michel Maharbiz, Somdeb Majumdar, Allison Marsh, Carmen Menoni, Sofia Olhede, Wen Tong, Maurizio Vecchione EDITORIAL ADVISORY BOARD, THE INSTITUTE Kathy Pretz, Chair; Qusi Alqarqaz, Philip Chen, Shashank Gaur, Lawrence O. Hall, Susan Hassler, Peter Luh, Cecilia Metra, San Murugesan, Mirela Sechi Annoni Notare, Joel Trussell, Hon K. Tsang, Chenyang Xu MANAGING DIRECTOR, PUBLICATIONS Steven Heffner EDITORIAL CORRESPONDENCE IEEE Spectrum, 3 Park Ave., 17th Floor, New York, NY 10016-5997 TEL: +1 212 419 7555 FAX: +1 212 419 7570 BUREAU Palo Alto, Calif.; Tekla S. Perry +1 650 752 6661 DIRECTOR, BUSINESS DEVELOPMENT, MEDIA & ADVERTISING Mark David, m.david@ieee.org ADVERTISING INQUIRIES Naylor Association Solutions, Erik Henson +1 352 333 3443, ehenson@naylor.com REPRINT SALES +1 212 221 9595, ext. 319

REPRINT PERMISSION / LIBRARIES Articles may be photocopied for private use of patrons. A per-copy fee must be paid to the Copyright Clearance Center, 29 Congress St., Salem, MA 01970. For other copying or republication, contact Managing Editor, IEEE Spectrum. COPYRIGHTS AND TRADEMARKS IEEE Spectrum is a registered trademark owned by The Institute of Electrical and Electronics Engineers Inc. Responsibility for the substance of articles rests upon the authors, not IEEE, its organizational units, or its members. Articles do not represent official positions of IEEE. Readers may post comments online; comments may be excerpted for publication. IEEE reserves the right to reject any advertising.

IEEE BOARD OF DIRECTORS PRESIDENT & CEO Susan K. “Kathy” Land, president@ieee.org +1 732 562 3928 Fax: +1 732 981 9515 PRESIDENT-ELECT K.J. Ray Liu TREASURER Mary Ellen Randall SECRETARY Kathleen A. Kramer PAST PRESIDENT Toshio Fukada VICE PRESIDENTS Stephen M. Phillips, Educational Activities; Lawrence O. Hall, Publication Services & Products; Maike Luiken, Member & Geographic Activities; Roger U. Fujii, Technical Activities; James E. Matthews, President, Standards Association; Katherine J. Duncan, President, IEEE-USA DIVISION DIRECTORS Alfred E. “Al” Dunlop (I); Ruth A. Dyer (II); Sergio Benedetto (III); Manfred “Fred” J. Schindler (IV); Thomas M. Conte (V); Paul M. Cunningham (VI); Miriam P. Sanders (VII); Christina M. Schober (VIII); Rabab Kreidieh Ward (IX); Dalma Novak (X) REGION DIRECTORS Eduardo F. Palacio (1); Barry C. Tilton (2); Jill I. Gostin (3); Johnson A. Asumado (4); James R. Look (5); Timothy T. Lee (6); Jason Jianjun Gu (7); Antonio Luque (8); Alberto Sanchez (9); Deepak Mathur (10) DIRECTOR EMERITUS Theodore W. Hissey IEEE STAFF EXECUTIVE DIRECTOR & COO Stephen Welby +1 732 562 5400, s.p.welby@ieee.org CHIEF INFORMATION OFFICER Cherif Amirat +1 732 562 6017, c.amirat@ieee.org PUBLICATIONS Steven Heffner +1 212 705 8958, s.heffner@ieee.org CHIEF MARKETING OFFICER Karen L. Hawkins +1 732 562 3964, k.hawkins@ieee.org CORPORATE ACTIVITIES Donna Hourican +1 732 562 6330, d.hourican@ieee.org MEMBER & GEOGRAPHIC ACTIVITIES Cecelia Jankowski +1 732 562 5504, c.jankowski@ieee.org STANDARDS ACTIVITIES Konstantinos Karachalios +1 732 562 3820, constantin@ieee.org EDUCATIONAL ACTIVITIES Jamie Moesch +1 732 562 5514, j.moesch@ieee.org GENERAL COUNSEL & CHIEF COMPLIANCE OFFICER Sophia A. Muirhead +1 212 705 8950, s.muirhead@ieee.org CHIEF FINANCIAL OFFICER Thomas R. Siegert +1 732 562 6843, t.siegert@ieee.org TECHNICAL ACTIVITIES Mary Ward-Callan +1 732 562 3850, m.ward-callan@ieee.org MANAGING DIRECTOR, IEEE-USA Chris Brantley +1 202 530 8349, c.brantley@ieee.org IEEE PUBLICATION SERVICES & PRODUCTS BOARD Lawrence O. Hall, Chair; Sergio Benedetto, Edhem Custovic, Stefano Galli, James Irvine, Clem Karl, Hulya Kirkici, Fabrizio Lombardi, Aleksandar Mastilovic, Sorel Reisman, Gaurav Sharma, Isabel Trancoso, Maria Elena Valcher, Peter Winzer, Bin Zhao IEEE OPERATIONS CENTER 445 Hoes Lane, Box 1331 Piscataway, NJ 08854-1331 U.S.A. Tel: +1 732 981 0060 Fax: +1 732 981 1721

IEEE SPECTRUM (ISSN 0018-9235) is published monthly by The Institute of Electrical and Electronics Engineers, Inc. All rights reserved. © 2021 by The Institute of Electrical and Electronics Engineers, Inc., 3 Park Avenue, New York, NY 10016-5997, U.S.A. Volume No. 58, Issue No. 8. The editorial content of IEEE Spectrum magazine does not represent official positions of the IEEE or its organizational units. Canadian Post International Publications Mail (Canadian Distribution) Sales Agreement No. 40013087. Return undeliverable Canadian addresses to: Circulation Department, IEEE Spectrum, Box 1051, Fort Erie, ON L2A 6C7. Cable address: ITRIPLEE. Fax: +1 212 419 7570. INTERNET: spectrum@ieee.org. ANNUAL SUBSCRIPTIONS: IEEE Members: $21.40 included in dues. Libraries/institutions: $399. POSTMASTER: Please send address changes to IEEE Spectrum, c/o Coding Department, IEEE Service Center, 445 Hoes Lane, Box 1331, Piscataway, NJ 08855. Periodicals postage paid at New York, NY, and additional mailing offices. Canadian GST #125634188. Printed at 120 Donnelley Dr., Glasgow, KY 42141-1060, U.S.A. IEEE Spectrum circulation is audited by BPA Worldwide. IEEE Spectrum is a member of the Association of Business Information & Media Companies, the Association of Magazine Media, and Association Media & Publishing. IEEE prohibits discrimination, harassment, and bullying. For more information, visit https://www.ieee.org/web/aboutus/whatis/policies/p9-26.html.


THE LATEST DEVELOPMENTS IN TECHNOLOGY, ENGINEERING, AND SCIENCE

AUGUST 2021

LIESA JOHANNSSEN-KOPPITZ/BLOOMBERG/GETTY IMAGES

The coronavirus pandemic led to a chip supply squeeze that halted auto industry production. (Pictured here is a German Volkswagen assembly line in April.)

SEMICONDUCTORS

How and When the Chip Shortage Will End    Old nodes are the key SAMUEL K. MOORE

Historians will probably spend decades picking apart the consequences of the COVID-19 pandemic. But the shortage of chips that it’s caused will be long over by then. A variety of analysts agree that the most problematic shortages will begin to ease in the third or fourth quarter of 2021, though it could take much of 2022 for the resulting chips to work their way through the supply chain to products. The supply relief will not be coming from the big, national investments in the works right now by South Korea, the United States, and the European Union but from older chip fabs and foundries running processes far from the cutting edge and on comparatively small silicon wafers.




CHIP DEMAND BY REVENUE (U.S. $, billions)
Automotive: $39.5 | Industrial: $41.6 | Communications infrastructure: $36.3 | Wireless: $126.7 | Consumer: $60.1 | Computing: $126.7
The auto industry is a relatively small chip end user, but it’s growing fast. [Source: IDC]

CHIP PROCESS TECHNOLOGY (share of installed capacity)
Mature: 54% | Mainstream: 18.5% | Advanced: 17.5%
Cars rely on chips made using mature manufacturing processes—40 nanometers and older. Those processes make up most of the installed capacity. [Source: IDC]

NUMBER OF 200-MM FABS IN OPERATION
2020: 212 | 2022: 222

200-MM CAPITAL EQUIPMENT SPENDING
2020: U.S. $3 billion | 2021: $4.6 billion | 2022: $4 billion
These mature chips are generally made on 200-millimeter wafers. There are few new fabs of this kind, but companies are investing in equipping the old ones. [Source: SEMI]

Before we get into how the shortage will end, it’s worth summing up how it began. With panic, lockdowns, and general uncertainty rolling across the globe, automakers canceled orders. However, those conditions meant a big fraction of the workforce re-created the office at home, purchasing computers, monitors, and other equipment. At the same time entire school systems switched to virtual learning via laptops and tablets. And more time at home also meant more spending on home entertainment, such as TVs and game consoles. These factors, the 5G rollout, and continued growth in cloud computing quickly hoovered up the capacity automakers had unceremoniously freed. By the time carmakers realized people still wanted to buy their goods, they found themselves at the back of the line for the chips they needed.

At US $39.5 billion, the auto industry makes up less than 9 percent of chip demand by revenue, according to market research firm IDC. That figure is set to increase by about 10 percent per year through 2025. However, the auto industry—which employs more than 10 million people globally—is something both consumers and politicians are acutely sensitive to, especially in the United States and Europe.

Chips for the automotive sector are made using processes intended to meet safety criteria that are different from those meant for other industries. But they are still fabricated on the same production lines as the analog ICs, power management chips, display drivers, microcontrollers, and sensors that go into everything else. “The common denominator is [that] the process technology is 40 nanometers and older,” says Mario Morales, vice president, enabling technologies and semiconductors at IDC. This chip manufacturing technology was last on the cutting edge nearly 15 years ago. Lines producing chips at these old nodes represent a full 54 percent of installed capacity, according to IDC.

Today these old nodes are typically used on 200-millimeter wafers of silicon. To reduce cost, the industry began moving to 300-mm wafers in 2001, but much of the old 200-mm infrastructure remained and even expanded. Despite the auto industry’s desperation, there’s no great rush to build new 200-mm fabs. “The return on investment just isn’t there,” says Morales. What’s more, there are already many legacy-node plants in China that are not operating efficiently right now, but “at some point, they will,” he says, further reducing the incentive to build new fabs. According to SEMI, the industry association for the electronics manufacturing and design supply chain, the number of 200-mm fabs will go from 212 in 2020 to 222 in 2022, about half the expected increase of the more profitable 300-mm fabs.

Adding capacity to existing 200-mm fabs makes more sense than building new ones, and there are indications that’s happening. According to SEMI’s Christian Gregor Dieseldorff, more than 40 companies will increase capacity by more than 750,000 wafers per month from the beginning of 2020 to the end of 2022. The long-term trend to the end of 2024 is for a 17 percent increase in capacity for 200-mm facilities. Spending on equipment for these fabs is set to rise to $4.6 billion in 2021 after crossing the $3 billion mark in 2020 for the first time in years, SEMI says. But then spending will drop back to $4 billion in 2022. In comparison, spending to equip 300-mm fabs is expected to hit $78 billion in 2021.

The chip shortage is happening simultaneously with national and regional efforts to boost advanced logic chip manufacturing. South Korea announced a plan worth $450 billion over 10 years, the United States is proposing legislation worth $52 billion, and the EU could plow up to $160 billion into its semiconductor sector. Chipmakers were already on a spending spree. Globally, capital equipment for semiconductor production grew 56 percent year on year through April 2021, according to SEMI. The organization’s 3 June 2021 World Fab Forecast indicates that 10 new 300-mm fabs will start operation in 2021 with 14 more coming up in 2022. “The push for building IC capacity around the world will certainly drive fab investment of the current decade to a new high,” says Dieseldorff, who is senior principal for semiconductors at SEMI. “We expect to see record spending and more new fab announcements in the next few years.”

One potential hiccup on the road to ending the shortage is that some of the skyrocketing demand appears to be from customers that are double-ordering to bulk up on inventory, says Jim Feldhan, president of Semico Research. “I don’t know of any product that needs twice the amount of analog” as the year before, he says. But manufacturers “don’t want a 12-cent part to hold up a 4K television,” so they’re stocking up.

The auto industry needs to do more than just stock up, according to Bharat Kapoor, lead partner, Americas, in the high-tech practice of global strategy and management consulting firm Kearney. To keep future shortages at bay, the chip industry and auto executives need a more direct connection going forward so signals about supply and demand are clearer, he says.
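A quick compounding sketch of that growth projection, assuming IDC’s roughly 10 percent annual growth applies uniformly to the $39.5 billion base (the year-by-year path below is an extrapolation, not IDC’s own forecast):

```python
# Back-of-the-envelope projection of automotive chip demand,
# assuming ~10 percent annual growth on the $39.5 billion base cited above.
base_billion = 39.5
growth = 0.10  # assumed constant annual growth rate

for years_out in range(5):
    demand = base_billion * (1 + growth) ** years_out
    print(f"year +{years_out}: ${demand:.1f} billion")
# +0: $39.5B, +1: $43.5B, +2: $47.8B, +3: $52.6B, +4: $57.8B
```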

SEMICONDUCTORS

Big Blue Gets Small      IBM’s 2-nanometer chip is a world’s first BY DEXTER JOHNSON

IBM has become the first in the world to introduce a 2-nanometer-node chip. The company claims that this new chip will improve performance by 45 percent using the same amount of power—or that it will use 75 percent less while maintaining the same performance level—as today’s 7-nm-based chips. To give some sense of scale, with 2-nm technology, IBM could put 50 billion transistors onto a chip the size of a fingernail.

The foundation of the chip is nanosheet technology, in which a transistor is made up of three stacked horizontal sheets of silicon, each only a few nanometers thick and completely surrounded by a gate. Nanosheet technology is poised to replace so-called FinFET technology, named for the finlike ridges of current-carrying silicon that project from the chip’s surface. FinFET has more or less reached its life expectancy at the 7-nm node. If it were to go any smaller, transistors would become difficult to switch off: Electrons would leak out, even with the three-sided gates.

You can’t help but sense a bit of one-upmanship in IBM’s development after Taiwan Semiconductor Manufacturing Co. decided to stay with FinFETs for its next-generation process, the 3-nanometer node. While IBM’s manufacturing partner, Samsung, does plan to use nanosheet technology for its 3-nm-node chips, IBM outdid it both by using nanosheets and going down another step to a 2-nm node.

Another first was IBM’s application of extreme-ultraviolet lithography (EUV) patterning to the front end of line, where the individual devices (such as transistors, capacitors, and resistors) are patterned in the semiconductor. In this latest step in its evolution, EUV patterning has made it possible for IBM to produce variable nanosheet widths from 15 nm to 70 nm.

IBM expects this chip design will be the foundation for future systems for both IBM and non-IBM chip players, and the potential benefits of these advanced 2-nm chips will be exponential for today’s most advanced semiconductors. Closer to most of us is what IBM expects this to do for our laptops and portable devices—including quicker processing in applications, easier language translation, and faster 5G or 6G connections. For those who find daily phone charging annoying, 2-nm-node chips will quadruple cellphone battery life versus 7-nm-node chips. The company says the new chips could let users charge their devices only every third or fourth day, rather than every night. IBM also anticipates that this may affect autonomous cars by providing faster object detection and reaction.

All of this sounds promising, and it may not be that far off. According to Mukesh Khare, vice president of hybrid cloud at IBM Research in Albany, N.Y., 2-nm-node chips could be rolling out of fabs as early as 2024.

This row of 2-nanometer-scale transistor components lies at the heart of IBM’s new 2-nm chip technology, which will reduce data-center energy consumption and extend battery lifetimes for consumer devices.
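As a rough sanity check on that 50-billion-transistor figure, here is a back-of-the-envelope density calculation; the “fingernail-size” die area below is an assumed 150 square millimeters, not a number from IBM.

```python
# Rough density check for IBM's 2-nm claim.
transistors = 50e9
die_area_mm2 = 150.0  # assumed "fingernail-size" die area; not from IBM

density = transistors / die_area_mm2
print(f"~{density / 1e6:.0f} million transistors per square millimeter")  # ~333 million/mm^2
```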




CLIMBING FOR A BETTER VIEW
Whether for surveillance or tracking climate change, new balloons promise higher resolution than satellites, wider coverage than drones, and lower costs than either.

[Infographic: an altitude scale, in kilometers, runs from consumer drones (0.1 km maximum), light aircraft, helicopters, and military observation drones, past passenger jets at cruise altitude, up to the stratospheric balloons of Sierra Nevada Corp., Raven Aerostar, Artemis, Sceye, Radiance Technologies, and World View (roughly 18 to 25 km), and on to low Earth orbit satellites (typical minimum of 500 km). A second axis, labeled radius in kilometers, runs from 100 to 400 km.]

SIERRA NEVADA CORP. (San Clemente, Calif.): Defense contractor Sierra Nevada has previously carried out stratospheric balloon tests for the U.S. Southern Command, according to an FCC filing, to “provide a persistent surveillance system to locate and deter narcotic trafficking and homeland security threats.” This recent balloon flight also tested high-altitude networking, operating from a U.S. Navy base on an island near San Diego.

WORLD VIEW (17 western U.S. states): World View’s high-altitude (25-kilometer) balloons are being tested over 17 western U.S. states, streaming high-resolution video and data to a mobile command center. Some of its federal customers include NASA and the U.S. military.

AEROSPACE

Stratospheric Balloons Take Monitoring and Surveillance to New Heights    These eyes in the sky fly above drones and below satellites BY MARK HARRIS


Alphabet’s enthusiasm for balloons deflated earlier this year, when it announced that its high-altitude Internet company, Loon, could not become commercially viable. But while the stratosphere might not be a great place to put a cellphone tower, it could be the sweet spot for cameras, argue a host of high-tech startups. The market for Earth-observation services from satellites is expected to top US $4 billion by 2025, as orbiting cameras, radars, and other devices monitor crops, assess infrastructure, and detect greenhouse gas emissions.

Illustration by StoryTK


RAVEN AEROSTAR (Stanley, N.M.): Raven Aerostar makes the stratospheric balloons that many companies use to test their equipment. It also files FCC applications itself, including this one to test communications equipment under a Pentagon contract for Alion, a defense contractor.

SCEYE (Roswell, N.M.): Sceye is building a stratospheric airship that is solar powered and uncrewed. In May, the company completed a test flight to 19.5 km. Intended applications include providing broadband communications, as well as detecting methane and carbon dioxide emissions.

RADIANCE TECHNOLOGIES (Baltic, S.D.): Radiance is working on directed-energy weapons, hypersonics, and other defense technologies. For this test, it is developing ranging signals and algorithms to help the navigation of high-altitude balloon platforms.

ARTEMIS (Huntsville, Ala.): Artemis is developing a small synthetic-aperture radar that can punch through clouds, dust, and smoke to provide high-resolution imagery day or night. The original FCC application for this series of experiments cited a U.S. Department of Defense contract.

PARTS OF A HIGH-ALTITUDE BALLOON: a superpressure envelope containing helium; solar panels, which enable long-duration operation; and a payload with flight control, communications, and observation systems.

Low-altitude observations from drones could be worth billions more. Neither platform is perfect. Satellites can cover huge swaths of the planet but remain expensive to develop, launch, and operate. Their cameras are also hundreds of kilometers from the things they are trying to see, and often moving at tens of thousands of kilometers per hour. Drones, on the other hand, can take supersharp images, but only over a relatively small area. They also need careful human piloting to coexist with planes and helicopters.

Balloons in the stratosphere, 20 kilometers above Earth (and 10 km above most jets), split the difference. They are high enough not to bother other aircraft and yet low enough to observe broad areas in plenty of detail. For a fraction of the price of a satellite, an operator can launch a balloon that lasts for weeks (even months), carrying large, capable sensors.

Unsurprisingly, perhaps, the U.S. military has funded stratospheric balloon tests across six Midwest states to “provide a persistent surveillance system to locate and deter narcotic trafficking and homeland security threats.” But the Pentagon is far from the only organization flying high. An IEEE Spectrum analysis of applications filed with the U.S. Federal Communications Commission reveals at least six companies conducting observation experiments in the stratosphere. Some are testing the communications, navigation, and flight infrastructure required for such balloons. Others are running trials for commercial, government, and military customers. The illustration above depicts experimental test permits granted by the FCC from January 2020 to June 2021, together covering much of the continental United States. Some tests were for only a matter of hours; others spanned days or more.

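To see why roughly 20 kilometers is such a sweet spot for coverage, a simple line-of-sight calculation helps; the figures below are a back-of-the-envelope geometric estimate, not numbers from the FCC filings.

```python
# Distance to the horizon from altitude h (ignoring atmospheric refraction):
# d ≈ sqrt(2 * R * h), where R is Earth's mean radius.
import math

R_EARTH_KM = 6371.0

def horizon_km(altitude_km: float) -> float:
    return math.sqrt(2 * R_EARTH_KM * altitude_km)

for label, h in [("consumer drone (0.1 km)", 0.1),
                 ("passenger jet (10 km)", 10),
                 ("stratospheric balloon (20 km)", 20),
                 ("LEO satellite (500 km)", 500)]:
    print(f"{label:>30}: horizon ~{horizon_km(h):.0f} km")
# A balloon at 20 km can, in principle, see out to roughly 500 km;
# a drone at 100 meters sees only about 36 km.
```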



BRAIN-COMPUTER INTERFACES

Imagine There’s No QWERTY Virtual longhand is faster than mentally moving a cursor BY EMILY WALTZ

Motionless, speechless communication directly from the brain to the computer just got a lot faster thanks to the ancient art of handwriting. Researchers at Stanford University have developed algorithms that can translate thoughts about handwritten messages into typed sentences on a computer. The technique enables typing-by-brain communication at a rate more than twice as fast as that of previous experiments, in which subjects imagined moving a cursor around a digital keyboard. The Stanford study involved a 65-year-old man who, after a spinal cord injury, had an electrode array implanted in his brain. The scientists published their results recently in the journal Nature.

For years, researchers have been experimenting with ways to enable people to directly communicate with computers using only their thoughts, without verbal commands, hand movement, or eye movement. This kind of technology offers a life-giving communication method for people who are “locked in” from brain-stem stroke or disease, and unable to speak. Successful brain-computer interface (BCI) typing approaches have typically involved a person imagining moving a cursor around a digital keyboard to select letters. Meanwhile, electrodes record brain activity, and machine-learning algorithms decipher the patterns associated with those thoughts, translating them into the typed words.

Typing by thought, according to new research, may be quickest via virtual handwriting rather than via imagining typing on a virtual keyboard.


Illustration by Emily Cooper


The fastest of these previous typing-by-brain experiments allowed people to type about 40 characters, or 8 words, per minute.

In the new system, by contrast, the participant, who had been paralyzed for about a decade, imagines the hand movements he would make to write sentences. “We ask him to actually try to write—to try to make his hand move again, and he reports this somatosensory illusion of actually feeling like his hand is moving,” says Frank Willett, a researcher at Stanford who collaborated on the experiment. A machine-learning algorithm then decodes the brain patterns associated with each letter, and a computer displays the letters on a screen. The participant was able to communicate at about 90 characters, or 18 words, per minute. By comparison, able-bodied people close in age to the study participant can type on a smartphone at about 23 words

per minute, the authors say. Adults can type on a full keyboard, on average, at about 40 words per minute.

In building the system, the Stanford researchers repurposed a machine-learning algorithm that was originally developed for speech recognition. The deep-learning algorithm, called a recurrent neural network, trained over the course of a few hours to recognize the participant’s neural activity when he imagined handwriting sentences in English. That wasn’t much data, compared with the tens of thousands of hours of audio data typically used to train neural networks. “We only had the opportunity to collect maybe 100 to 500 different sentences that we could ask the participant to write,” Willett says. “So we took those sentences and chopped them up into individual letters and rearranged them into an infinite number of different sentences, and we found that that really helped teach these algorithms.”

The team also borrowed another tool from speech recognition—a hidden Markov model—to help label the relevant data and decode when, exactly, the man was writing a letter and when he wasn’t. The algorithms, in their current form, have to be trained for and customized to each participant. As a next step, Willett says he hopes to reduce the amount of initial training time and come up with a way for the algorithms to automatically recalibrate.
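The study’s code isn’t reproduced in the article; purely as an illustration of the kind of decoder described above—a recurrent network turning per-time-step neural features into character probabilities—here is a minimal PyTorch sketch. The channel count, character set, and layer sizes are assumptions, and the hidden-Markov-model labeling step the team used is not shown.

```python
# Illustrative sketch only: an RNN mapping neural-activity features to
# per-time-step character probabilities, in the spirit of the Stanford decoder.
import torch
import torch.nn as nn

N_CHANNELS = 192   # assumed number of neural features per time step
N_CLASSES = 31     # assumed character set: letters plus a few punctuation marks

class HandwritingDecoder(nn.Module):
    def __init__(self, hidden_size: int = 256, num_layers: int = 2):
        super().__init__()
        self.rnn = nn.GRU(N_CHANNELS, hidden_size,
                          num_layers=num_layers, batch_first=True)
        self.readout = nn.Linear(hidden_size, N_CLASSES)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, channels) -> (batch, time, character logits)
        out, _ = self.rnn(x)
        return self.readout(out)

# Toy usage: two seconds of fake 100-Hz features from one trial.
model = HandwritingDecoder()
fake_features = torch.randn(1, 200, N_CHANNELS)
print(model(fake_features).shape)  # torch.Size([1, 200, 31])
```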

ENERGY STORAGE

White-Hot Blocks as Renewable Energy Storage?     Thermal batteries could provide a cheap and simple option BY PRACHI PATEL

In five years, operating a coal or natural-gas power plant is going to be more expensive than building wind and solar farms. In fact, according to a new study by Bloomberg New Energy Finance, in many regions of the world building a new solar farm is already cheaper than operating coal and natural gas plants. Yet a full shift to intermittent energy sources desperately calls for low-cost, reliable energy storage that can be built anywhere. Some nascent startups believe the answer lies in the process that lights up toaster coils by electrically heating them to scorching temperatures.

Antora Energy in Sunnyvale, Calif., wants to use carbon blocks for such thermal storage, while Electrified Thermal Solutions in Boston is seeking funds to build a similar system using conductive ceramic blocks. Their visions are similar: Use excess renewable electricity to heat up the blocks to temperatures of over 1,500 °C, and then turn it back to electricity for the grid when needed. To beat the cost of the natural-gas plants that today back up wind and solar, storing energy would have to cost around US $10 per kilowatt-hour. Each startup says its heating system will meet that price. Lithium-ion batteries, meanwhile, are now at approximately $140/kWh, according to a recent study by MIT economists, and could drop below $100/kWh in only a few years—around which point they start to become cost-competitive with fossil fuels.

Justin Briggs, Antora’s cofounder and chief science officer, says he and his cofounders David Bierman and Andrew Ponec, who launched the company in 2018, considered several energy-storage technologies to meet that goal. These included today’s dominant method, pumped hydro, in which water pumped to a higher elevation spins turbines as it falls, and the similar new gravity storage method, which involves lifting 35-tonne bricks and letting them drop.

CONTINUES ON PAGE 46

Blocks made from graphite or ceramics may be a promising medium for thermal storage of renewable energy.
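The article continues elsewhere in the issue, but the appeal of sensible-heat storage is easy to see with rough numbers. The specific heat and conversion efficiency below are assumptions for illustration, not figures from Antora or Electrified Thermal Solutions.

```python
# Rough sensible-heat estimate for one tonne of graphite block (illustrative only).
mass_kg = 1000.0
cp_kj_per_kg_k = 1.7          # assumed average specific heat over the temperature swing
t_hot_c, t_cold_c = 1500.0, 300.0
conversion = 0.4              # assumed heat-to-electricity conversion efficiency

heat_kwh = mass_kg * cp_kj_per_kg_k * (t_hot_c - t_cold_c) / 3600.0
print(f"~{heat_kwh:.0f} kWh of heat stored, "
      f"~{heat_kwh * conversion:.0f} kWh returned as electricity per tonne")
```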



THE BIG PICTURE

A Big Bet on Crypto In June, Sotheby’s conducted a first for the renowned brokerage: an online auction for nonfungible tokens (NFTs). NFTs have drawn huge interest from the cryptocurrency community as a method for recording ownership of a digital file, such as a piece of art, using the Ethereum blockchain. Critics of NFTs, however, point out that although they record ownership of the file, other people are still able to view, copy, and download the image, GIF, or song—not to mention the fact that Ethereum transactions consume an immense amount of energy. Even so, there’s enough interest around NFTs for some of them to command staggering prices. CryptoPunk 7523, shown here at an in-person media preview day, eventually sold for an eye-watering US $11,754,000 at Sotheby’s Natively Digital: A Curated NFT Sale online auction. The piece was created by technologists Matt Hall and John Watkinson under the joint name Larva Labs. PHOTOGRAPH BY TIMOTHY A. CLARY/AFP/GETTY IMAGES





TECH TO TINKER WITH

Finally, a bench that helps you build.

This Huge Workbench Gives You a Hand A little bit of automation and a lot of space makes projects easier BY JEREMY S. COOK


As an avid experimenter and builder of random contraptions—and who isn’t the best at putting his tools away and normally has multiple projects in various stages of completion—I often run out of work space. So I decided to build a new workbench. One that would be better, not just because it was bigger but because it would be smarter. A bench that could automatically assist me in getting things done!

In my garage I previously had two main work spaces: a 183-by-76-centimeter butcher block that also houses a small milling machine, and a custom 147-by-57-cm work space with a built-in router that pops out as needed. Though this space is generous by most standards, it seems I always needed “just a bit” more. After some consideration, I purchased a 2x4basics custom workbench kit (which provides the bench’s heavy-gauge structural resin supports) and lumber to form the main structure, and then cut slabs of chipboard to form a top and a bottom surface. I decided on building a 213-by-107-cm bench. This was the largest space that I could reasonably reach across and also fit in my garage without blocking movement. The 2x4basics kit came with shelves, providing space for plastic storage boxes.

At this point, I thought I was done, because surely this bench would be simply something that I built and used—a background thing that needs no more mention than a screwdriver or hammer would. As it turns out, I can’t leave well enough alone.

The initial tweaks were small. To enhance the bench’s storage, I added magnets on which to hang various tools, and augmented my existing storage cases with 3D-printed dividers. Then I added an eyebolt for my air compressor—a fabulous tool for its roughly US $40 price—to keep it at the ready for blowing off excess material. Toward the back of the bench rests a hot-air gun and a soldering station, as well as my bag of other electrical tools.

Then things got more complex. I added a DIY solder squid—a block with four flexible arms that I use to hold components in place while soldering—with a concrete base and an automatic solder fume extractor. Yes, my solder squid is made out of concrete, via a 3D-printed mold—though that last refinement is perhaps optional. You could make nearly the same sort of brick using a plastic storage container. Heavy, cheap, and nonconductive, concrete is the perfect base material for such a device, and for arms you simply need to stick a few coolant lines in while the concrete cures. Two of the arms have alligator clips attached, one has a larger clamp, and the third has an old PC fan, recycled for my fume extractor.

The solder squid [left] uses an EZ Fan board and a motion sensor to control a fan. The bench lights are controlled using an Arduino Nano [far right] inserted into another custom board, the Grounduino [middle], which also provides a dedicated space for the recommended large capacitor when driving addressable LED strips.

I automated the fan by hooking up a rechargeable battery, a USB charger board, and a passive infrared (PIR) motion sensor. When activated by soldering movements, the PIR sensor turns the fan on with the help of a leftover original EZ Fan transistor board. (I created the EZ Fan board to control add-on cooling fans for Raspberry Pi computers, and now sell an even slimmer version.) This means that I don’t ever have to remember to turn the fan on or off: It just comes on when it senses that I’m soldering. I normally keep it plugged into a USB port that provides power, but there is also a battery inside for when a USB port isn’t available.

For light, I initially just used a linkage-based desk lamp with a powerful three-lobe LED bulb. But why stop there? Why not apply strips of LEDs to the underside of the overhead storage? I did just that, pulling out a strip of 12-volt nonaddressable LEDs and powering them with a simple wall power adapter. This gave things a constant glow, but it was only a matter of time until addressable LEDs made an appearance, which would let me illuminate different zones as desired.

Three infrared sensors that detect motion are spaced along the bench so that my work zone is always automatically illuminated.

I mounted one PIR sensor at the end of a piece of pipe and one in the middle, and then strung a strip of WS2812B RGB addressable lights along the length. I attached this to the overhead shelves with pipe hangers, which let me adjust the lighting angle as needed to complement the static white LEDs. To control both the addressable LEDs and the nonaddressable strip, I used an Arduino Nano plugged into another utility board of my own creation, the Grounduino, and connected another PIR sensor to it, giving me three sensors along the length. The Grounduino provides screw terminals for hooking wires to the Nano and, as the name suggests, five extra ground connections (and five extra 5V connections as well). It also has built-in accommodation for the recommended capacitor that others often forget to use with WSx addressable LED lights. Probably the biggest challenge here was actually fishing the various wires through the length of pipe, but in the end it worked quite well.

Three segments of addressable LEDs turn on based on which PIR sensor is triggered, while the 12V nonaddressable strip is powered via a FQP30N06L metal-oxide-semiconductor field-effect transistor (MOSFET) under control of the Arduino (the power required is just a little on the high side for an EZ Fan board). A push-button control lets me alter the brightness of the strips using pulse-width modulation. If I was starting from scratch, I’d use a single LED voltage, as my setup currently has two power transformers (12V and 5V). Hindsight is 20/20, though it’s very possible this project isn’t quite done yet.

I use open-source Home Assistant software to turn on house lights over Wi-Fi, and a homemade ESP8266 contraption to link the same system to my garage door, so why not my bench lights? The Grounduino and Nano were good choices here, but with an ESP8266, I could potentially automate everything and/or control it all with my phone if needed… However, for now at least, I can finally fit my projects, and my tools, on one bench!

Illustrations by James Provost
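The bench firmware isn’t published here, and the author’s controller is an Arduino Nano; purely to illustrate the control logic—three PIR-triggered lighting zones on a WS2812B strip, plus PWM dimming of the 12-volt strip through the MOSFET—here is a rough MicroPython-style sketch for an ESP8266-class board. All pin numbers, zone sizes, and the button handling are assumptions, not the author’s actual firmware.

```python
# Rough MicroPython sketch of the bench-light logic described above
# (assumed pins and zone sizes; the real build uses an Arduino Nano).
from machine import Pin, PWM
import neopixel, time

NUM_LEDS = 90                      # assumed WS2812B strip length
ZONES = 3                          # one zone per PIR sensor
LEDS_PER_ZONE = NUM_LEDS // ZONES

strip = neopixel.NeoPixel(Pin(4), NUM_LEDS)       # WS2812B data pin (assumed)
pirs = [Pin(p, Pin.IN) for p in (12, 13, 14)]     # three PIR outputs (assumed)
white_12v = PWM(Pin(5), freq=1000)                # gate of the MOSFET driving the 12 V strip
button = Pin(0, Pin.IN, Pin.PULL_UP)              # brightness push button (active low)

levels = [64, 128, 255]            # brightness steps cycled by the button
level_idx = 2

def show_zones(active, brightness):
    """Light each zone whose PIR has seen motion; leave the rest dark."""
    for zone in range(ZONES):
        color = (brightness, brightness, brightness) if active[zone] else (0, 0, 0)
        for i in range(zone * LEDS_PER_ZONE, (zone + 1) * LEDS_PER_ZONE):
            strip[i] = color
    strip.write()

while True:
    if button.value() == 0:                        # button pressed
        level_idx = (level_idx + 1) % len(levels)
        time.sleep_ms(300)                         # crude debounce
    brightness = levels[level_idx]
    active = [pir.value() == 1 for pir in pirs]    # PIR outputs go high on motion
    show_zones(active, brightness)
    # Scale the 12 V nonaddressable strip with the same brightness via PWM.
    white_12v.duty(int(brightness * 1023 / 255))
    time.sleep_ms(50)
```

Swapping the Nano for a Wi-Fi-capable board like this is also what would open the door to the Home Assistant integration the author muses about.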


SHARING THE EXPERIENCES OF WORKING ENGINEERS

From Spacecraft to Sensor Fusion    Disparate jobs let Iverson Bell create a flexible skill set BY DANIEL P. DERN

RANDY FIELDS

It’s easy to set up multiple video feeds in all kinds of locations. Merging those feeds together, combining them with other information, and putting them into context is a tougher challenge, especially when your customers include first responders and intelligence agencies. But that’s the job that Cubic Corp. hired Iverson Bell III to do in late 2020, when they chose him “to lead the team responsible for transforming the company’s Unified Video project into a more full-featured video/communications platform, including AI/machine learning and support for distributed sensors and more data types,” says Bell.

Bell had previously been at Northrop Grumman, doing spacecraft design and testing, and had researched electrodynamic tethers, including for use with CubeSats, as part of his postdoc work with Brian Gilchrest at the University of Michigan, where Bell earned his master’s and Ph.D. Moving to his role at Cubic’s Hanover, Md., location might seem like a big jump between two very different specialties, but Bell was able to leverage common skills and other experiences in dealing with complex data handling and processing. For example, Bell created sophisticated data-analysis models during an internship at the Johns Hopkins University Applied Physics Laboratory; performed software testing along with electrical-system integration and testing for the James Webb Space Telescope program at Northrop Grumman; and has been an agile lead manager on other software development.

The path to his current job and project, according to Bell, reflects a mix of exposure, opportunities, and risk-taking—both by him and by people in charge of him—plus, of course, a lot of hard work.

“My undergraduate engineering focus at Howard [University, in Washington, D.C.,] was applied electromagnetics, like antennas and waveguides—so there was lots of math involved. And I was also interested in signal processing, like how antennas convert data into a digital signal. Then, in grad school at the University of Michigan, my concentration was electromagnetics, and I got a lot of exposure to remote sensors, radars, and other cool stuff in more depth. “I chose good topics by pure luck,” says Bell. Bell attributes his interest in science and engineering to early exposure through books, other media, and family. “I read books like Marshall Brain’s How Stuff Works series, and watched the Discovery Channel,” he recalls. “My mother is a pediatrician. And she was a chemistry undergrad, and cooked—and cooking was chemistry. And my older sister has been a role model—she’s a civil engineer—and I was exposed to her taking classes. When I got to high school and was liking math and science, she asked me, ‘What do you want to major in?’” “Be ready to take risks and learn as you go,” Bell advises. “For example, when I was part of the group conducting electrical tests on the Webb telescope, we were working with the mechanical engineering and test team, which I hadn’t been exposed to previously. It was something to learn. “It helps when folks are willing to take a chance on you—so if you’re in a position to employ someone, take that chance on them…. I have a passion for mentorship,” says Bell. “I want to help improve education for younger people, expose them to STEM topics.... But my interest is shifting from individual mentoring to wanting to help address the larger policy and curriculum aspects of the problem.” Bell is enjoying the shift from the process of building spacecraft—“where the process takes time and you often get only one chance for things to work”—to delivering a real-time service. It’s “very different to see teams working on high-end leading-edge engineering at a very fast pace,” he says. “It’s like changing the plane’s engine while you’re flying it.”



OPINION, INSIGHT, AND ANALYSIS

THEY JUST KEEP GETTING BIGGER In each successive era, the biggest ships have gotten even bigger, but the length-to-beam ratio rose only up to a certain point. Narrower designs incur less resistance and are thus faster, but the requirements of seaworthiness and of cargo capacity have set limits on how far the slimming can go.

VIKING LONGSHIP LBR: 4.4 Length: 23.3 meters Width: 5.3 meters

TYPICAL RANGES OF LENGTH-TO-BEAM RATIOS 2–4: Small to midsize planing powerboats 3–4: Small to midsize sailboats, motor yachts 4–6: Large, efficient long-range cruisers and racing monohulls 6–10: Large freighters, cruising trimarans, cruising catamarans, and large sailing monohulls 10–16: Fast-cruising catamarans, trimarans, and racing multihulls Over 16: Racing multihulls Length/beam ratio (LBR) = WL/B (WL = waterline length; B = maximum beam at the waterline)

SANTA MARÍA LBR: 3.45 Length: 19 meters Width: 5.5 meters

NUMBERS DON’T LIE BY VACLAV SMIL

A Boat Can Indeed Be Too Long and Too Skinny The length-to-beam ratio still has practical limits

In comparison with Moore’s Law, the nonsilicon world’s progress can seem rather glacial. Indeed, some designs made of wood or metal came up against their functional limits generations ago. The length-to-beam ratio (LBR) of large oceangoing vessels offers an excellent example of such technological maturity. This ratio is simply the quotient of a ship’s length and breadth, both measured at the waterline; you can think of it as the expression of a vessel’s sleekness. A high LBR favors speed but restricts maneuverability as well as cargo hold and cabin design. These considerations, together with the properties of shipbuilders’ materials, have limited the LBR of large vessels to single digits.

FLYING CLOUD LBR: 5.4 Length: 69 meters Width: 12.7 meters

SS GREAT BRITAIN LBR: 6.4 Length: 98 meters Width: 15.4 meters

If all you have is a rough wickerwork over which you stretch thick animal skins, you get a man-size, circular or slightly oval coracle—a riverboat or lake boat that has been used since antiquity from Wales to Tibet. Such a craft has an LBR close to 1, so it’s no vessel for crossing an ocean, but in 1974 an adventurer did paddle one across the English Channel.

Building with wood allows for sleeker designs, but only up to a point. The LBR of ancient and medieval commercial wooden sailing ships increased slowly. Roman vessels transporting wheat from Egypt to Italy had an LBR of about 3; ratios of 3.4 to 4.5 were typical for Viking ships, whose lower freeboard—the distance between the waterline and the main deck of a ship—and much smaller carrying capacity made them even less comfortable. The Santa María, a small carrack captained by Christopher Columbus in 1492, had an LBR of 3.45. With high prows and poops, some small carracks had a nearly semicircular profile. Caravels, used on the European voyages of discovery during the following two centuries, had similar dimensions, but multidecked galleons were sleeker: The Golden Hind, which Francis Drake used to circumnavigate Earth between 1577 and 1580, had an LBR of 5.1.

Little changed over the following 250 years. Packet sailing ships, the mainstays of European emigration to the United States before the Civil War, had an LBR of less than 4. In 1851, Donald McKay crowned his career designing sleek clippers by launching the Flying Cloud, whose LBR of 5.4 had reached the practical limit of nonreinforced wood; beyond that ratio, the hulls would simply break.

But by that time wooden hulls were on the way out. In 1845 the SS Great Britain (designed by Isambard Kingdom Brunel, at that time the country’s most famous engineer) was the first iron vessel to cross the Atlantic—it had an LBR of 6.4. Then inexpensive steel became available (thanks to Bessemer process converters), inducing Lloyd’s of London to accept its use as an insurable material in 1877.

SS SERVIA LBR: 9.9 Length: 157 meters Width: 15.9 meters

Illustration by John MacNeill

RMS TITANIC LBR: 9.6 Length: 269.1 meters Width: 28.1 meters

SOURCES: SOME FAMOUS SAILING SHIPS AND THEIR BUILDER DONALD MCKAY, R.C. MCKAY, 1928; CALEDONIAN MARITIME RESEARCH TRUST

MSC GÜLSÜN LBR: 6.5 Length: 399.9 meters Width: 61.5 meters

In 1881, the Cunard Line’s SS Servia, the first large trans-Atlantic steel-hulled liner, had an LBR of 9.9. Dimensions of future steel liners clustered close around that ratio: 9.6 for the RMS Titanic (launched in 1912); 9.3 for the SS United States (1951); and 8.9 for the SS France (1960, two years after the Boeing 707 began the rapid elimination of trans-Atlantic passenger ships).

Huge container ships, today’s most important commercial vessels, have relatively low LBRs in order to accommodate packed rows of standard steel container units. The MSC Gülsün (launched in 2019), the world’s largest, with a capacity of 23,756 container units, is 1,312 feet (399.9 meters) long and 202 feet (61.5 meters) wide; hence its LBR is only 6.5. The Symphony of the Seas (2018), the world’s largest cruise ship, is only about 10 percent shorter, but its narrower beam gives it an LBR of 7.6.

Of course, there are much sleeker vessels around, but they are designed for speed, not to carry massive loads of goods or passengers. Each demi-hull of a catamaran has an LBR of about 10 to 12, and in a trimaran, whose center hull has no inherent stability (that feature is supplied by the outriggers), the LBR can exceed 17.
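The sidebar’s ratios follow directly from the listed dimensions, treating the listed length and width as the waterline figures:

```python
# Length-to-beam ratios (LBR = waterline length / maximum beam) for the ships
# listed in the sidebar, using the listed dimensions as waterline figures.
ships = {
    "Viking longship": (23.3, 5.3),
    "Santa María": (19.0, 5.5),
    "Flying Cloud": (69.0, 12.7),
    "SS Great Britain": (98.0, 15.4),
    "SS Servia": (157.0, 15.9),
    "RMS Titanic": (269.1, 28.1),
    "MSC Gülsün": (399.9, 61.5),
}

for name, (length_m, beam_m) in ships.items():
    print(f"{name:>16}: LBR = {length_m / beam_m:.2f}")
# 4.40, 3.45, 5.43, 6.36, 9.87, 9.58, 6.50 — matching the sidebar's rounded values.
```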



CROSSTALK

INTERNET OF EVERYTHING  BY STACEY HIGGINBOTHAM

Cozy Futurism We don’t need a Jetsons future, just a sustainable one

For decades, our vision of the future has been stuck in a 1960s-era dream of science fiction embodied by The Jetsons and space travel. But that isn’t what we need right now. In fact, what if our vision of that particular technologically advanced future is all wrong? What if, instead of self-driving cars, digital assistants whispering in our ears, and virtual-reality glasses, we viewed a technologically advanced society as one where everyone had sustainable housing? Where we could manage and then reduce the amount of carbon in our atmosphere? Where everyone had access to preventative health care that was both personalized and less invasive?

What we need is something called cozy futurism, a concept I first encountered while reading a blog post by software engineer Jose Luis Ricón Fernández de la Puente. In the post, he calls for a vision of technology that looks at human needs and attempts to meet those needs, not only through technologies but also cultural shifts and policy changes.

Take space travel as an example. Much of the motivation behind building new rockets or developing colonies on Mars is wrapped up in the rhetoric of our warming planet being something to escape from. In doing so, we miss opportunities to fix our home rather than flee it. But we can change our attitudes. What’s more, we are changing.

Climate change is a great example. Albeit slowly, entrepreneurs who helped build out the products and services over the tech boom of the past 20 years are now searching for technologies to address the crisis. Jason Jacobs, the founder of the fitness app Runkeeper, has created an entire media business called My Climate Journey to find and help recruit tech folks to address climate change. Last year, Jeff Bezos created a US $10 billion fund to make investments in organizations fighting climate change. Bill Gates wrote an entire book, How to Avoid a Climate Disaster: The Solutions We Have and the Breakthroughs We Need.

Mitigating climate change is an easy way to understand the goals of cozy futurism, but I’m eager to see us all go further. What about reducing pollution in urban and poor communities? Nonprofits are already using cheap sensors to pinpoint heat islands in cities, or neighborhoods where air pollution disproportionately affects communities of color. With this information, policy changes can lighten the unfair distribution of harm. And perhaps if we see the evidence of harm in data, more people will vote to attack pollution, climate change, and other problems at their sources, rather than looking to tech to put a Band-Aid on them or mitigate the effects—or worse, adding to the problem by producing a never-ending stream of throwaway gadgets. We should instead embrace tech as a tool to help governments hold companies accountable for meeting policy goals.

Cozy futurism is an opportunity to reframe the best use of technology as something actively working to help humanity—not individually, like a smartwatch monitoring your health or self-driving cars easing your commute, but in aggregate. That’s not to say we should do away with VR goggles or smart gadgets, but we should think a bit more about how and why we’re using them, and whether we’re overprioritizing them. After all, what’s better than demonstrating that the existential challenges facing us all are things we can find solutions to, not just for those who can hitch a ride off-world but for everyone? I’d rather be cozy on Earth than stuck in a bubble on Mars.



MACRO & MICRO  BY MARK PESCE

Cloud Computing’s Dark Cloud The electricity consumed is growing unsustainably

How much of our computing now happens in the cloud? A lot. Providers of public cloud services alone take in more than a quarter of a trillion U.S. dollars a year. That's why Amazon, Google, and Microsoft maintain massive data centers all around the world. Apple and Facebook, too, run similar facilities, all stuffed with high-core-count CPUs, sporting terabytes of RAM and petabytes of storage. These machines do the heavy lifting to support what's been called "surveillance capitalism": the endless tracking, user profiling, and algorithmic targeting used to distribute advertising.

All that computing rakes in a lot of dollars, of course, but it also consumes a lot of watts: Bloomberg recently estimated that about 1 percent of the world's electricity goes to cloud computing. That figure is poised to grow exponentially over the next decade.


Bloomberg reckons that, globally, we might exit the 2020s needing as much as 8 percent of all electricity to power the future cloud. That might seem like a massive jump, but it's probably a conservative estimate. After all, by 2030, with hundreds of millions of augmented-reality spectacles streaming real-time video into the cloud, and with the widespread adoption of smart digital currencies seamlessly blending money with code, the cloud will provide the foundation for nearly every financial transaction and user interaction with data.

How much energy can we dedicate to all this computing? In an earlier time, we could have relied on Moore's Law to keep the power budget in check as we scaled up our computing resources. But now, as we wring out the last bits of efficiency from the final few process nodes before we reach atomic-scale devices, those improvements will hit physical limits. It won't be long until computing and power consumption are once again strongly coupled—as they were 60 years ago, before integrated CPUs changed the game.

We seem to be hurtling toward a brick wall, as the rising demand for computing collides with decreasing efficiencies. We can't devote the whole of the planet's electricity generation to support the cloud. Something will have to give. The most immediate solutions will involve processing more data at the edge, before it goes into the cloud. But that only shifts the burden, buying time for rethinking how to manage our computing in the face of limited power resources. Software and hardware engineering will no doubt reorient their design practices around power efficiency. More code will find its way into custom silicon. And that code will find more reasons to run infrequently, asynchronously, and as minimally as possible.

All of that will help, but as software progressively eats more of the world—to borrow a now-famous metaphor—we will confront this challenge in ever-wider realms. We can already spy one face of this future in the nearly demonic coupling of energy consumption and private profit that provides the proof-of-work mechanism for cryptocurrencies like Bitcoin. Companies like Square have announced investments in solar energy for Bitcoin mining, hoping to deflect some of the bad press associated with this activity. But more than public relations is at stake. Bitcoin asks us right now to pit the profit motive against the health of the planet. More and more computing activities will do the same in the future. Let's hope we never get to a point where the fate of the Earth hinges on the fate of the transistor. n





What We Learned From the Pandemic
BY MICHELE ACUTO, SHAUN LARCOM, FERDINAND RAUCH & TIM WILLEMS


Most of all, it taught us how to adapt under pressure



LIFE IS A HARD SCHOOL: First it gives us the test and only then the lesson. Indeed, throughout history humanity has learned much from disasters, wars, financial ruin—and pandemics. A scholarly literature has documented this process in fields as diverse as engineering, risk reduction, management, and urban studies. And it's already clear that the COVID-19 pandemic has sped up the arrival of the future along several dimensions. Remote working has become the new status quo in many sectors. Teaching, medical consulting, and court cases are expected to stay partly online. Delivery of goods to the consumer's door has supplanted many a retail storefront, and there are early signs that such deliveries will increasingly be conducted by autonomous vehicles.

On top of the damage it has wreaked on human lives, the pandemic has brought increased costs to individuals and businesses alike. At the same time, however, we can already measure solid improvements in productivity and innovation: Since February 2020, some 60 percent of firms in the United Kingdom and in Spain have adopted new digital technologies, and 40 percent of U.K. firms have invested in new digital capabilities. New businesses came into being at a faster rate in the United States than in previous years.

We propose to build on this foundation and find a way to learn not just from crises but even during the crisis itself. We argue for this position not just in the context of the COVID-19 pandemic but also toward the ultimate goal of improving our ability to handle things we can't foresee—that is, to become more resilient.

To find the upside of emergencies, we first looked at the economic effects of a tidy little crisis, a two-day strike that partially disrupted service of the London Underground in 2014. We discovered that the approximately 5 percent of the commuters who were forced to reconsider their commute ended up finding better routes, which they continued to use after service was restored. In terms of travel time, the strike produced a net benefit to the system because the one-off time costs of the strike were less than the enduring benefits for this minority of commuters.

Why had commuters not done their homework beforehand, finding the optimal route without pressure? After all, their search costs would have been quite low, but the benefits from permanently improving their commute might well have been large.


Here, the answer seems to be that commuters were stuck in established yet inefficient habits; they needed a shock to prod them into making their discovery.

A similar effect followed the eruption of a long-dormant Icelandic volcano in 1973. For younger people, having their house destroyed led to an increase of 3.6 years of education and an 83 percent increase in lifetime earnings, due to their increased probability of migrating away from their destroyed town. The shock helped them overcome a situation of being stuck in a location with a limited set of potential occupations, to which they may not have been well suited.

Icelandic volcano eruption of 1973.




As economists and social scientists, we draw two fundamental insights from these examples of forced experimentation. First, the costs and benefits of a significant disruption are unlikely to fall equally on all those affected, not least at the generational level. Second, to ensure that better ways of doing things are discovered, we need policies to help the experiment's likely losers get a share of the benefits.

Because large shocks are rare, research on their consequences tends to draw from history. For example, economic historians have argued that the Black Death plague may have contributed to the destruction of the feudal system in Western Europe by increasing the bargaining power of laborers, who were more in demand. The Great Fire of London in 1666 cleared the way, literally, for major building and planning reforms, including the prohibition of new wooden buildings, the construction of wider roads and better sewers, and the invention of fire insurance.

History also illustrates that good data is often a prerequisite for learning from a crisis. John Snow's 1854 Broad Street map of cholera contagion in London was not only instrumental in identifying lessons learned—the most important being that cholera was transmitted via the water supply—but also in improving policymaking during the crisis. He convinced the authorities to remove the handle from the pump of a particular water source that had been implicated in the spread of the disease, thereby halting that spread.

Four distinct channels lead to the benefits that may come during a disruption to our normal lives: Habit disruption occurs when a shock forces agents to reconsider their behavior, so that at least some of them can discover better alternatives. London commuters found better routes, and Icelandic young people got more schooling and found better places to live.


Selection involves the destruction of weaker firms so that only the more productive ones survive. Resources then move from the weaker to stronger entities, and average productivity increases. For example, when China entered world markets as a major exporter of industrial products, production from less productive firms in Mexico was reduced or ceased altogether, thus diverting resources to more productive uses.

Weakening of inertia occurs when a shock frees a system from the grip of forces that have until now kept it in stasis. This model of a system that's stuck is sometimes called path dependence, as it involves a way of doing things that evolved along a particular path, under the influence of economic or technological factors.

China enters world markets as major exporter of industrial products.

The classic example of path dependence is the establishment of the conventional QWERTY keyboard standard on typewriters in the late 19th century and computers thereafter. All people learn how to type on existing keyboards, so even a superior keyboard design can never gain a foothold. Another example is cities that persist in their original sites even though the economic reasons for founding them there no longer apply. Many towns and cities founded in France during the Roman Empire remain right where the Romans left them, even though the Romans made little use of navigable rivers and the coastal trade north of the Mediterranean that became important in later centuries. These cities have been held in place by the man-made and social structures that grew up around them, such as aqueducts and dioceses. In Britain, however, the nearly complete collapse of urban life after the departure of the Roman legions allowed that country to build new cities in places better suited to medieval trade.

Coordination can play a role when a shock resets a playing field to such an extent that a system governed by opposing forces can settle at a new equilibrium point. Before the Great Boston Fire of 1872, the value of much real estate had been held down by the presence of crumbling buildings nearby. After the fire, many buildings were reconstructed simultaneously, encouraging investment on neighboring lots. Some economists argue that the fire created more wealth than it destroyed.

The ongoing pandemic has set off a scramble among economists to access and analyze data.

Although some people have considered this unseemly, even opportunistic, we social scientists can't run placebo-controlled experiments to see how a change in one thing affects another, and so we must exploit for this purpose any shock to a system that comes our way. What really matters is that the necessary data be gathered and preserved long enough for us to run it through our models, once those models are ready. We ourselves had to scramble to secure data regarding commuting behavior following the London metro strike; normally, such data gets destroyed after 8 weeks. In our case, thanks to Transport for London, we managed to get it anonymized and released for analysis.

In recent years, there has been growing concern over the use of data and the potential for "data pollution," where an abundance of data storage and its subsequent use or misuse might work against the public interest. Examples include the use of Facebook's data around the 2016 U.S. presidential election, the way that online sellers use location data to discriminate on price, and how data from Strava's fitness app accidentally revealed the sites of U.S. military bases. Given such concerns, many countries have introduced more stringent data-protection legislation, such as the EU General Data Protection Regulation (GDPR). Since this legislation was introduced, a number of companies have faced heavy fines, including British Airways, which faced a £183 million fine for poor security arrangements following a 2018 cyberattack. Most organizations delete data after a certain period. Nevertheless, Article 89 of the GDPR allows them to retain data "for scientific or historical research purposes or statistical purposes" in "the public interest." We argue that data-retention policies should take into account the higher value of data gathered during the current pandemic.

The presence of detailed data is already paying off in the effort to contain the COVID-19 pandemic. Consider the Gauteng City-Region Observatory in Johannesburg, which in March 2020 began to provide governmental authorities at every level with baseline information on the 12-million-strong urban region. The observatory did so fast enough to allow for crucial learning while the crisis was still unfolding. The observatory's data had been gathered during its annual "quality of life" survey, now in its 10th year of operation, allowing it to quantify the risks involved in household crowding, shared sanitation facilities, and other circumstances. This information has been cross-indexed with broader health-vulnerability factors, like access to electronic communication, health care, and public transport, as well as with data on preexisting health conditions, such as the incidence of asthma, heart disease, and diabetes. This type of baseline management, or "baselining," approach could give these data systems more resilience when faced with the next crisis, whatever it may be—another pandemic, a different natural disaster, or an unexpected major infrastructural fault. For instance, the University of Melbourne conducted on-the-spot modeling of how the pandemic began to unfold during the 2020 lockdowns in Australia, which helped state decision-makers suppress the virus in real time.


When we do find innovations through forced experimentation, how likely are those innovations to be adopted? People may well revert to old habits, and anyone who might reasonably expect to lose because of the change will certainly resist it. One might wonder whether many businesses that thrived while their employees worked off-site might nonetheless insist on people returning to the central office, where managers can be seen to manage, and thereby retain their jobs. We can also expect that those who own assets few people will want to use anymore will argue for government regulations to support those assets. Examples include public transport infrastructure—say, the subways of New York City—and retail and office space.

One of the most famous examples of resistance to technological advancements is the Luddites, a group of skilled weavers and artisans in early 19th-century England who led a six-year rebellion smashing mechanized looms. They rightly feared a large drop in their wages and their own obsolescence. It took 12,000 troops to suppress the Luddites, but their example was followed by other "machine breaking" rebellions, riots, and strikes throughout much of England's industrial revolution.


The Great Boston Fire of 1872.

retain data “for scientific or historical research purposes or statistical purposes” in “the public interest.” We argue that data-retention policies should take into account the higher value of data gathered during the current pandemic. The presence of detailed data is already paying off in the effort to contain the COVID-19 pandemic. Consider the Gauteng City-Region Observatory in Johannesburg, which in March 2020 began to provide governmental authorities at every level with baseline information on the 12-million-strong urban region. The observatory did so fast enough to allow for crucial learning while the crisis was still unfolding. The observatory’s data had been gathered during its annual “quality of life” survey, now in its 10th year of operation, allowing it to quantify the risks involved in household crowding, shared sanitation facilities, and other circumstances. This information has been cross-indexed with broader health-vulnerability factors, like access to electronic communication, health care, and public transport, as well as with data on preexisting health conditions, such as the incidence of asthma, heart disease, and diabetes. This type of baseline management, or “baselining,” approach could give these data systems more resilience when faced with the next crisis, whatever it may be—another pandemic, a different natural disaster, or an unexpected major infrastructural fault. For instance, the University of Melbourne conducted on-the-spot modeling of how the pandemic began to unfold during the 2020 lockdowns in Australia, which helped state decision-makers suppress the virus in real time.


Resistance to change can also come from the highest levels. One explanation for the low levels of economic development in Russia and Austria-Hungary during the 19th century was the ruling class's resistance to new technology and to institutional reform. It was not that the leaders weren't aware of the economic benefits of such measures, but rather that they feared losing their grip on power and were content to retain a large share of a small pie.

Clearly, it's important to account for the effects that any innovation has on those who stand to lose from it. One way to do so is to commit to sharing any gains broadly, so that no one loses. Such a plan can disarm opposition before it arises. One example where this strategy has been successfully employed is the Montreal Protocol on Substances That Deplete the Ozone Layer. It included a number of measures to share the gains from rules that preserve the ozone layer, including payments to compensate those countries without readily available substitutes who would otherwise have suffered losses. The Montreal Protocol and its successor treaties have been highly effective in meeting their environmental objectives.

COVID-19 winners and losers are already apparent. In 2020, economic analysis of social distancing in the United States showed that as many as 1.7 million lives might be saved by this practice.

The conventional QWERTY keyboard.

However, it was also found that about 90 percent of the life-years saved would have accrued to people older than 50. Furthermore, it is not unreasonable to expect that younger individuals should bear an equal (or perhaps greater) share of the costs of distancing and lockdowns. It seems wise to compensate younger people for complying with the rules on social distancing, both for reasons of fairness and to discourage civil disobedience.

We know from stock prices and spending data that some sectors and firms have suffered disproportionately during the pandemic, especially those holding stranded assets that must be written off, such as shopping malls, many of which have lost much of their business, perhaps permanently. We can expect similar outcomes for human capital. There are ways to compensate these parties also, such as cash transfers linked to retraining or reinvestment.

There will almost certainly be winners and losers as a result of the multitude of forced experiments occurring in workplaces. Some people can more easily adapt to new technologies, some are better suited to working from home or in new settings, and some businesses will benefit from less physical interaction and more online communication. Consider that the push toward online learning that the pandemic has provided may cost some schools their entire business: Why would students wish to listen to online lectures from their own professors when they could instead be listening to the superstars of their field? Such changes could deliver large productivity payoffs, but they will certainly have distributional consequences, likely benefiting the established universities, whose online platforms may now cater to a bigger market.

We know from the history of the Black Death that, if they're big enough, shocks have the power to bend or even break institutions. Thus, if we want them to survive, we need to ensure that our institutions are flexible. To manage the transition to a world with more resilient institutions, we need high-quality data, of all types and from various sources, including measures of individual human productivity, education, innovation, health, and well-being. There seems little doubt that pandemic-era data, even when it's of the most ordinary sort, will remain more valuable to society than that gathered in normal times. If we can learn the lessons of COVID-19, we will emerge from the challenge more resilient and better prepared for whatever may come next. n

Editor’s note: The views expressed are the authors’ own and should not be attributed to the ­International Monetary Fund, its executive board, or its management.




Lessons From a Dragonfly's Brain
Evolution built a small, fast, efficient neural network in a dragonfly. Why not copy it for missile defense?

BY FRANCES CHANCE



IN EACH OF OUR BRAINS, 86 billion neurons work in parallel, processing inputs from senses and memories to produce the many feats of human cognition. The brains of other creatures are less broadly capable, but those animals often exhibit innate aptitudes for particular tasks, abilities honed by millions of years of evolution. Most of us have seen animals doing clever things. Perhaps your house pet is an escape artist. Maybe you live near the migration path of birds or butterflies and celebrate their annual return. Or perhaps you have marveled at the seeming single-mindedness with which ants invade your pantry.

Looking to such specialized nervous systems as a model for artificial intelligence may prove just as valuable, if not more so, than studying the human brain. Consider the brains of those ants in your pantry. Each has some 250,000 neurons. Larger insects have closer to 1 million. In my research at Sandia National Laboratories in Albuquerque, I study the brains of one of these larger insects, the dragonfly. I and my colleagues at Sandia, a national-security laboratory, hope to take advantage of these insects' specializations to design computing systems optimized for tasks like intercepting an incoming missile or following an odor plume. By harnessing the speed, simplicity, and efficiency of the dragonfly nervous system, we aim to design computers that perform these functions faster and at a fraction of the power that conventional systems consume.

LOOKING TO A dragonfly as a harbinger of future computer systems may seem counterintuitive. The developments in artificial intelligence and machine learning that make news are typically algorithms that mimic human intelligence or even surpass people's abilities. Neural networks can already perform as well—if not better—than people at some specific tasks, such as detecting cancer in medical scans. And the potential of these neural networks stretches far beyond visual processing. The computer program AlphaZero, trained by self-play, is the best Go player in the world. Its sibling AI, AlphaStar, ranks among the best Starcraft II players.

Such feats, however, come at a cost. Developing these sophisticated systems requires massive amounts of processing power, generally available only to select institutions with the fastest supercomputers and the resources to support them. And the energy cost is off-putting. Recent estimates suggest that the carbon emissions resulting from developing and training a natural-language processing algorithm are greater than those produced by four cars over their lifetimes.

But does an artificial neural network really need to be large and complex to be useful? I believe it doesn't. To reap the benefits of neural-inspired computers in the near term, we must strike a balance between simplicity and sophistication.

Which brings me back to the dragonfly, an animal with a brain that may provide precisely the right balance for certain applications. If you have ever encountered a dragonfly, you already know how fast these beautiful creatures can zoom, and you've seen their incredible agility in the air. Maybe less obvious from casual observation is their excellent hunting ability: Dragonflies successfully capture up to 95 percent of the prey they pursue, eating hundreds of mosquitoes in a day. The physical prowess of the dragonfly has certainly not gone unnoticed. For decades, U.S. agencies have experimented with using dragonfly-inspired designs for surveillance drones. Now it is time to turn our attention to the brain that controls this tiny hunting machine.

WHILE DRAGONFLIES MAY not be able to play strategic games like Go, a dragonfly does demonstrate a form of strategy in the way it aims ahead of its prey's current location to intercept its dinner. This takes calculations performed extremely fast—it typically takes a dragonfly just 50 milliseconds to start turning in response to a prey's maneuver. It does this while tracking the angle between its head and its body, so that it knows which wings to flap faster to turn ahead of the prey. And it also tracks its own movements, because as the dragonfly turns, the prey will also appear to move. So the dragonfly's brain is performing a remarkable feat, given that the time needed for a single neuron to add up all its inputs—called its membrane time constant—exceeds 10 ms. If you factor in time for the eye to process visual information and for the muscles to produce the force needed to move, there's really only time for three, maybe four, layers of neurons, in sequence, to add up their inputs and pass on information.

It takes the dragonfly only about 50 milliseconds to begin to respond to a prey's maneuver. If we assume 10 ms for cells in the eye to detect and transmit information about the prey, and another 5 ms for muscles to start producing force, this leaves only 35 ms for the neural circuitry to make its calculations. Given that it typically takes a single neuron at least 10 ms to integrate inputs, the underlying neural network can be at most three layers deep.

Could I build a neural network that works like the dragonfly interception system? I also wondered about uses for such a neural-inspired interception system. Being at Sandia, I immediately considered defense applications, such as missile defense, imagining missiles of the future with onboard systems designed to rapidly calculate interception trajectories without affecting a missile's weight or power consumption.






The model dragonfly reorients in response to the prey’s turning [upper left]. The black circle is the dragonfly’s head, held at its initial position. The solid black line indicates the direction of the dragonfly’s flight; the dotted blue lines are the plane of the model dragonfly’s eye. The red star is the prey’s position relative to the dragonfly, with the dotted red line indicating the dragonfly’s line of sight. On the upper right, the figure shows the dragonfly engaging its prey. Below are three heat maps of the activity patterns of neurons at the same moment; the first set represents the eye, the second represents those neurons that specify which eye neurons to align with the prey’s image, and the third represents those that output motor commands.


sile’s weight or power consumption. But there are civilian applications as well. For example, the algorithms that control self-driving cars might be made more efficient, no longer requiring a trunkful of computing equipment. If a dragonfly-inspired system can perform the calculations to plot an interception trajectory, perhaps autonomous drones could use it to avoid collisions. And if a computer could be made the same size as a dragonfly brain (about 6 cubic millimeters), perhaps insect repellent and mosquito netting will one day become a thing of the past, replaced by tiny insect-zapping drones! TO BEGIN TO answer these questions, I created a simple neural network to stand in for the dragonfly’s nervous system and used it to calculate the turns that a dragonfly makes to capture prey. My three-layer neural network exists as a software simulation. Initially, I worked in Matlab simply because that was the coding environment I was already using. I have since ported the model to Python. Because dragonflies have to see their prey to capture it, I started by simulating a simplified version of the dragonfly’s eyes, capturing the minimum detail required for tracking prey.

Although dragonflies have two eyes, it's generally accepted that they do not use stereoscopic depth perception to estimate distance to their prey. In my model, therefore, I did not include both eyes. Nor did I try to match the resolution of a dragonfly eye. Instead, the first layer of the neural network includes 441 neurons that represent input from the eyes, each describing a specific region of the visual field—these regions are tiled to form a 21-by-21-neuron array that covers the dragonfly's field of view. As the dragonfly turns, the location of the prey's image in the dragonfly's field of view changes. The dragonfly calculates turns required to align the prey's image with one (or a few, if the prey is large enough) of these "eye" neurons. A second set of 441 neurons, also in the first layer of the network, tells the dragonfly which eye neurons should be aligned with the prey's image—that is, where the prey should be within its field of view.

Processing—the calculations that take input describing the movement of an object across the field of vision and turn it into instructions about which direction the dragonfly needs to turn—happens between the first and third layers of my artificial neural network. In this second layer, I used an array of 194,481 (21⁴) neurons, likely much larger than the number of neurons used by a dragonfly for this task. I precalculated the weights of the connections between all the neurons into the network.




While these weights could be learned with enough time, there is an advantage to "learning" through evolution and preprogrammed neural network architectures. Once it comes out of its nymph stage as a winged adult (technically referred to as a teneral), the dragonfly does not have a parent to feed it or show it how to hunt. The dragonfly is in a vulnerable state and getting used to a new body—it would be disadvantageous to have to figure out a hunting strategy at the same time. I set the weights of the network to allow the model dragonfly to calculate the correct turns to intercept its prey from incoming visual information.

What turns are those? Well, if a dragonfly wants to catch a mosquito that's crossing its path, it can't just aim at the mosquito. To borrow from what hockey player Wayne Gretzky once said about pucks, the dragonfly has to aim for where the mosquito is going to be. You might think that following Gretzky's advice would require a complex algorithm, but in fact the strategy is quite simple: All the dragonfly needs to do is to maintain a constant angle between its line of sight with its lunch and a fixed reference direction. Readers who have any experience piloting boats will understand why that is. They know to get worried when the angle between the line of sight to another boat and a reference direction (for example due north) remains constant, because they are on a collision course. Mariners have long avoided steering such a course, known as parallel navigation, to avoid collisions. Translated to dragonflies, which want to collide with their prey, the prescription is simple: keep the line of sight to your prey constant relative to some external reference.

However, this task is not necessarily trivial for a dragonfly as it swoops and turns, collecting its meals. The dragonfly does not have an accurate internal gyroscope (that we know of) that will maintain a constant orientation and provide a reference regardless of how the dragonfly turns. Nor does it have a magnetic compass that will always point north. In my simplified simulation of dragonfly hunting, the dragonfly turns to align the prey's image with a specific location on its eye, but it needs to calculate what that location should be.
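The bearing-holding rule itself takes only a few lines to express. The sketch below is a toy two-dimensional version under assumptions of my own (a fixed world x-axis as the external reference and an arbitrary navigation gain); it is not the author's Matlab or Python model.

```python
import math

def line_of_sight_angle(pursuer, prey):
    """Bearing from pursuer to prey, measured against a fixed external
    reference direction (here, the +x axis of the world frame)."""
    return math.atan2(prey[1] - pursuer[1], prey[0] - pursuer[0])

def wrap(angle):
    """Wrap an angle difference into [-pi, pi]."""
    return math.atan2(math.sin(angle), math.cos(angle))

def turn_command(prev_los, new_los, nav_gain=3.0):
    """Heading change that counters drift in the line-of-sight angle.

    If the bearing to the prey is rotating, turn in the same direction,
    scaled by nav_gain, so that the bearing is driven back toward a
    constant value: the parallel-navigation condition for a collision
    course with the prey.
    """
    drift = wrap(new_los - prev_los)
    return nav_gain * drift
```

At each time step a simulated pursuer adds turn_command(...) to its heading and flies forward at constant speed; once the line of sight stops rotating, pursuer and prey are on a collision course, which is exactly the mariner's warning sign turned into a hunting strategy.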

The third and final layer of my simulated neural network is the motor-command layer. The outputs of the neurons in this layer are high-level instructions for the dragonfly's muscles, telling the dragonfly in which direction to turn. The dragonfly also uses the output of this layer to predict the effect of its own maneuvers on the location of the prey's image in its field of view and updates that projected location accordingly. This updating allows the dragonfly to hold the line of sight to its prey steady, relative to the external world, as it approaches.

It is possible that biological dragonflies have evolved additional tools to help with the calculations needed for this prediction. For example, dragonflies have specialized sensors that measure body rotations during flight as well as head rotations relative to the body—if these sensors are fast enough, the dragonfly could calculate the effect of its movements on the prey's image directly from the sensor outputs or use one method to cross-check the other. I did not include this possibility in my simulation.

To test this three-layer neural network, I simulated a dragonfly and its prey, moving at the same speed through three-dimensional space. As they do so, my modeled neural-network brain "sees" the prey, calculates where to point to keep the image of the prey at a constant angle, and sends the appropriate instructions to the muscles. I was able to show that this simple model of a dragonfly's brain can indeed successfully intercept other bugs, even prey traveling along curved or semi-random trajectories. The model dragonfly does not quite achieve the success rate of the biological dragonfly, but it also does not have all the advantages (for example, impressive flying speed) for which dragonflies are known.
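As a companion sketch, here is a stripped-down stand-in for the model's sensory front end and motor output. Only the 21-by-21 grid of eye neurons comes from the article; the field-of-view span, the gain, and the direct proportional mapping from image offset to turn command are assumptions of mine that collapse the large middle layer into a single step.

```python
GRID = 21          # the 21-by-21 array of "eye" neurons described in the article
FOV_DEG = 120.0    # assumed span of the modeled field of view, per axis

def prey_image_cell(az_err_deg, el_err_deg):
    """Map the prey's direction relative to the dragonfly's heading onto the
    eye grid. Positive azimuth error means the prey is to the right of the
    heading; positive elevation error means it is above. Returns (row, col),
    or None if the prey falls outside the modeled field of view."""
    half = FOV_DEG / 2.0
    if abs(az_err_deg) > half or abs(el_err_deg) > half:
        return None
    col = round((az_err_deg + half) / FOV_DEG * (GRID - 1))
    row = round((el_err_deg + half) / FOV_DEG * (GRID - 1))
    return row, col

def motor_command(prey_cell, fixation_cell, gain_deg_per_cell=2.0):
    """Stand-in for the output layer: turn rates (azimuth, elevation) that
    drive the prey's image toward the desired fixation cell."""
    d_col = fixation_cell[1] - prey_cell[1]
    d_row = fixation_cell[0] - prey_cell[0]
    # Turning toward the prey shifts its image toward the fixation cell,
    # so the command opposes the image's offset from that cell.
    return -gain_deg_per_cell * d_col, -gain_deg_per_cell * d_row
```

In the full model the fixation cell is not fixed: it is recomputed each step, using the predicted effect of the dragonfly's own turn, so that the line of sight stays constant in the external frame.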



MORE WORK IS needed to determine whether this neural network is really incorporating all the secrets of the dragonfly's brain. Researchers at the Howard Hughes Medical Institute's Janelia Research Campus, in Virginia, have developed tiny backpacks for dragonflies that can measure electrical signals from a dragonfly's nervous system while it is in flight and transmit these data for analysis. The backpacks are small enough not to distract the dragonfly from the hunt. Similarly, neuroscientists can also record signals from individual neurons in the dragonfly's brain while the insect is held motionless but made to think it's moving by presenting it with the appropriate visual cues, creating a dragonfly-scale virtual reality. Data from these systems allows neuroscientists to validate dragonfly-brain models by comparing their activity with activity patterns of biological neurons in an active dragonfly. While we cannot yet directly measure individual connections between neurons in the dragonfly brain, I and my collaborators will be able to infer whether the dragonfly's nervous system is making calculations similar to those predicted by my artificial neural network. That will help determine whether connections in the dragonfly brain resemble my precalculated weights in the neural network.


We will inevitably find ways in which our model differs from the actual dragonfly brain. Perhaps these differences will provide clues to the shortcuts that the dragonfly brain takes to speed up its calculations.


DRAGONFLIES COULD ALSO teach us how to implement "attention" on a computer. You likely know what it feels like when your brain is at full attention, completely in the zone, focused on one task to the point that distractions seem to fade away. A dragonfly can likewise focus its attention. Its nervous system turns up the volume on responses to particular, presumably selected, targets, even when other potential prey are visible in the same field of view. It makes sense that once a dragonfly has decided to pursue a particular prey, it should change targets only if it has failed to capture its first choice. (In other words, using parallel navigation to catch a meal is not useful if you are easily distracted.) Even if we end up discovering that the dragonfly mechanisms for directing attention are less sophisticated than those people use to focus in the middle of a crowded coffee shop, it's possible that a simpler but lower-power mechanism will prove advantageous for next-generation algorithms and computer systems by offering efficient ways to discard irrelevant inputs.

The advantages of studying the dragonfly brain do not end with new algorithms; they also can affect systems design. Dragonfly eyes are fast, operating at the equivalent of 200 frames per second: That's several times the speed of human vision. But their spatial resolution is relatively poor, perhaps just a hundredth of that of the human eye. Understanding how the dragonfly hunts so effectively, despite its limited sensing abilities, can suggest ways of designing more efficient systems. Returning to the missile-defense problem, the dragonfly example suggests that our antimissile systems with fast optical sensing could require less spatial resolution to hit a target.

THE DRAGONFLY ISN'T the only insect that could inform neural-inspired computer design today. Monarch butterflies migrate incredibly long distances, using some innate instinct to begin their journeys at the appropriate time of year and to head in the right direction. We know that monarchs rely on the position of the sun, but navigating by the sun requires keeping track of the time of day. If you are a butterfly heading south, you would want the sun on your left in the morning but on your right in the afternoon. So, to set its course, the butterfly brain must read its own circadian rhythm and combine that information with what it is observing.

Other insects, like the Sahara desert ant, must forage for relatively long distances. Once a source of sustenance is found, this ant does not simply retrace its steps back to the nest, likely a circuitous path. Instead it calculates a direct route back. Because the location of an ant's food source changes from day to day, it must be able to remember the path it took on its foraging journey, combining visual information with some internal measure of distance traveled, and then calculate its return route from those memories. While nobody knows what neural circuits in the desert ant perform this task, researchers at the Janelia Research Campus have identified neural circuits that allow the fruit fly to self-orient using visual landmarks. The desert ant and monarch butterfly likely use similar mechanisms.
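Nobody knows how the desert ant's neurons do this, as the article notes, but the computation its behavior implies, classic dead reckoning, fits in a few lines. The outbound legs below are invented for the example.

```python
import math

def integrate_path(steps):
    """Path integration: accumulate each outbound leg, given as
    (heading_radians, distance), into a net displacement from the nest."""
    x = y = 0.0
    for heading, distance in steps:
        x += distance * math.cos(heading)
        y += distance * math.sin(heading)
    return x, y

def home_vector(steps):
    """The direct route back: reverse the net displacement and report it
    as (heading_home_radians, distance_home)."""
    x, y = integrate_path(steps)
    return math.atan2(-y, -x), math.hypot(x, y)

# A meandering foraging trip still collapses into a single straight shot home.
outbound = [(0.0, 5.0), (math.pi / 2, 3.0), (math.pi, 1.0)]
print(home_vector(outbound))   # heading of about -2.50 rad, distance 5.0
```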

This backpack that captures signals from electrodes inserted in a dragonfly’s brain was created by Anthony Leonardo, a group leader at Janelia Research Campus.

Such neural circuits might one day prove useful in, say, low-power drones. And what if the efficiency of insect-inspired computation is such that millions of instances of these specialized components can be run in parallel to support more powerful data processing or machine learning? Could the next AlphaZero incorporate millions of antlike foraging architectures to refine its game playing? Perhaps insects will inspire a new generation of computers that look very different from what we have today. A small army of dragonfly-interception-like algorithms could be used to control moving pieces of an amusement park ride, ensuring that individual cars do not collide (much like pilots steering their boats) even in the midst of a complicated but thrilling dance.

No one knows what the next generation of computers will look like, whether they will be part-cyborg companions or centralized resources much like Isaac Asimov's Multivac. Likewise, no one can tell what the best path to developing these platforms will entail. While researchers developed early neural networks drawing inspiration from the human brain, today's artificial neural networks often rely on decidedly unbrainlike calculations. Studying the calculations of individual neurons in biological neural circuits—currently only directly possible in nonhuman systems—may have more to teach us. Insects, apparently simple but often astonishing in what they can do, have much to contribute to the development of next-generation computers, especially as neuroscience research continues to drive toward a deeper understanding of how biological neural circuits work. So next time you see an insect doing something clever, imagine the impact on your everyday life if you could have the brilliant efficiency of a small army of tiny dragonfly, butterfly, or ant brains at your disposal. Maybe computers of the future will give new meaning to the term "hive mind," with swarms of highly specialized but extremely efficient minuscule processors, able to be reconfigured and deployed depending on the task at hand. With the advances being made in neuroscience today, this seeming fantasy may be closer to reality than you think. n



There’s no planned mission to send a lander to Europa, but this artist’s rendition gives a sense of what one such lander might look like, including the new antenna design necessary for staying in touch with Earth.

Illustration by Marek Denko/Noemotion


An Antenna Made for an Icy, Radioactive Hell
JPL's all-metal design can withstand Europa's brutal environment
BY NACER E. CHAHAT



Europa, one of Jupiter's Galilean moons, has twice as much liquid water as Earth's oceans, if not more. An ocean estimated to be anywhere from 40 to 100 miles (60 to 150 kilometers) deep spans the entire moon, locked beneath an icy surface over a dozen kilometers thick. The only direct evidence for this ocean is the plumes of water that occasionally erupt through cracks in the ice, jetting as high as 200 km above the surface. The endless, sunless, roiling ocean of Europa might sound astoundingly bleak. Yet it's one of the most promising candidates for finding extraterrestrial life.

Designing a robotic lander that can survive such harsh conditions will require rethinking all of its systems to some extent, including arguably its most important: communications. After all, even if the rest of the lander works flawlessly, if the radio or antenna breaks, the lander is lost forever. Ultimately, when NASA's Jet Propulsion Laboratory (JPL), where I am a senior antenna engineer, began to seriously consider a Europa lander mission, we realized that the antenna was the limiting factor. The antenna needs to maintain a direct-to-Earth link across more than 550 million miles (900 million km) when Earth and Jupiter are at their point of greatest separation. The antenna must be radiation-hardened enough to survive an onslaught of ionizing particles from Jupiter, and it cannot be so heavy or so large that it would imperil the lander during takeoff and landing. One colleague, when we laid out the challenge in front of us, called it impossible. We built such an antenna anyway—and although it was designed for Europa, it is a revolutionary enough design that we're already successfully implementing it in future missions for other destinations in the solar system.

Currently, the only planned mission to Europa is the Clipper orbiter, a NASA mission that will study the moon's chemistry and geology and will likely launch in 2024. Clipper will also conduct reconnaissance for a potential later mission to put a lander on Europa. At this time, any such lander is conceptual. NASA has still funded a Europa lander concept, however, because there are crucial new technologies that we need to develop for any successful mission on the icy world. Europa is unlike anywhere else we've attempted to land before. For context, so far the only lander to explore the outer solar system is the European Space Agency's Huygens lander. It successfully descended to Saturn's moon Titan in 2005 after being carried by the Cassini orbiter.


The antenna team, including the author [right], examine one of the antenna’s subarrays. Each golden square is a unit cell in the antenna.

Much of our frame of reference for designing landers—and their antennas—comes from Mars landers. Traditionally, landers (and rovers) designed for Mars missions rely on relay orbiters with high data rates to get scientific data back to Earth in a timely manner. These orbiters, such as the Mars Reconnaissance Orbiter and Mars Odyssey, have large, parabolic antennas that use large amounts of power, on the order of 100 watts, to communicate with Earth. While the Perseverance and Curiosity rovers also have direct-to-Earth antennas, they are small, use less power (about 25 W), and are not very efficient. These antennas are mostly used for transmitting the rover's status and other low-data updates.

These existing direct-to-Earth antennas simply aren't up to the task of communicating all the way from Europa. Additionally, Europa, unlike Mars, has virtually no atmosphere, so landers can't use parachutes or air resistance to slow down. Instead, the lander will depend entirely on rockets to brake and land safely. This necessity limits its size—too heavy and it will require far too much fuel to both launch and land. A modestly sized 400-kilogram lander, for example, requires a rocket and fuel that combined weigh between 10 and 15 tonnes. The lander then needs to survive six or seven years of deep space travel before finally landing and operating within the intense radiation produced by Jupiter's powerful magnetic field.

We also can't assume a Europa lander would have an orbiter overhead to relay signals, because adding an orbiter could very easily make the mission too expensive.




expensive. Even if Clipper is miraculously still functional by the time a lander arrives, we won’t assume that will be the case, as the lander would arrive well after Clipper’s official end-of-mission date. I’ve mentioned previously that the antenna will need to transmit signals up to 900 million km. As a general rule, less efficient antennas need a larger surface area to transmit farther. But as the lander won’t have an orbiter overhead with a large relay antenna, and it won’t be big enough itself for a large antenna, it needs a small antenna with a transmission efficiency of 80 percent or higher—much more efficient than most space-bound antennas. So, to reiterate the challenge: The antenna cannot be large, because then the lander will be too heavy. It cannot be inefficient for the same reason, because requiring more power would necessitate bulky power systems instead. And it needs to survive exposure to a brutal amount of radiation from Jupiter. This last point requires that the antenna must be mostly, if not entirely, made out of metal, which is more resistant to ionizing radiation. The antenna we ultimately developed depends on a key innovation: The antenna is made up of circularly polarized, aluminum-only unit cells—more on this in a moment—that can each send and receive on X-band frequencies (specifically, 7.145 to 7.19 gigahertz for the uplink and 8.4 to 8.45 GHz for the downlink). The entire antenna is an array of these unit cells, 32 on a side or 1,024 in total. The antenna is 32.5 by 32.5 inches (82.5 by 82.5 centimeters), allowing it to fit on top of a modestly sized lander, and it can achieve a downlink rate to Earth

JPL engineers, including the author [bottom row on left], pose with a mock-up of a Europa lander concept. The model includes several necessary technological developments, including the antenna on top and legs that can handle uneven terrain.

Let's take a closer look at the unit cells I mentioned, to better understand how this antenna does what it does. Circular polarization is commonly used for space communications. You might be more familiar with linear polarization, which is often used for terrestrial wireless signals; you can imagine such a signal propagating across a distance as a 2D sine wave that's oriented, say, vertically or horizontally relative to the ground. Circular polarization instead propagates as a 3D helix. This helix pattern makes circular polarization useful for deep space communications because the helix's larger "cross section" doesn't require that the transmitter and receiver be as precisely aligned. As you can imagine, a superprecise alignment across almost 750 million km is all but impossible. Circular polarization has the added benefit of being less sensitive to Earth's weather when it arrives. Rain, for example, causes linearly polarized signals to attenuate more quickly than circularly polarized ones.

Each unit cell, as mentioned, is entirely made of aluminum. Earlier antenna arrays that similarly use smaller component cells include dielectric materials like ceramic or glass to act as insulators. Unfortunately, dielectric materials are also vulnerable to Jupiter's ionizing radiation. The radiation builds up a charge on the materials over time, and precisely because they're insulators there's nowhere for that charge to go—until it's ultimately released in a hardware-damaging electrostatic discharge. So we can't use them. As mentioned before, metals are more resilient to ionizing radiation. The problem is they're not insulators, and so an antenna constructed entirely out of metal is still at risk of an electrostatic discharge damaging its components. We worked around this problem by designing each unit cell to be fed at a single point. The "feed" is the connection between an antenna and the radio's transmitter and receiver. Typically, circularly polarized antennas require two perpendicular feeds to control the signal generation. But with a bit of careful engineering and the use of a type of automated optimization called a genetic algorithm, we developed a precisely shaped single feed that could get the job done. Meanwhile, a comparatively large metal post acts as a ground to protect each feed from electrostatic discharges.

The unit cells are placed in small 16-by-16 subarrays, four subarrays in total. Each of these subarrays is fed with something we call a suspended air stripline, in which the transmission line is suspended between two ground planes, turning the gap in between into a dielectric insulator. We can then safely transmit power through the stripline while still protecting the line from electric discharges that would build up on a dielectric like ceramic or glass. Additionally, suspended air striplines are low loss, which is perfect for the highly efficient antenna design we wanted.
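Assuming the 1,024 cells are spread uniformly across the 82.5-centimeter aperture (the article does not give the exact spacing), a quick calculation relates the array's geometry to the downlink wavelength and shows the textbook phase progression a uniform array would apply to steer its beam. This is the generic relation, not JPL's beamforming implementation.

```python
import math

C = 299_792_458.0        # speed of light, m/s
APERTURE_M = 0.825       # 82.5-centimeter antenna, from the article
CELLS_PER_SIDE = 32      # 32 x 32 = 1,024 unit cells
FREQ_HZ = 8.425e9        # X-band downlink, mid-band

wavelength = C / FREQ_HZ
spacing = APERTURE_M / CELLS_PER_SIDE   # assumes uniformly spaced cells
print(f"cell spacing ~{spacing*100:.2f} cm, or {spacing/wavelength:.2f} wavelengths")

def phase_step_deg(steer_deg):
    """Cell-to-cell phase increment that reinforces the signal at steer_deg
    off boresight and cancels it elsewhere (uniform linear array relation)."""
    return math.degrees(2.0 * math.pi * (spacing / wavelength)
                        * math.sin(math.radians(steer_deg)))

print(f"{phase_step_deg(10.0):.1f} degrees per cell to steer 10 degrees off boresight")
```

Keeping the spacing below a wavelength is what lets the cells reinforce the signal in one chosen direction while canceling it in others, at least over modest steering angles.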



Put together, the new antenna design accomplishes three things: It's highly efficient, it can handle a large amount of power, and it's not very sensitive to temperature fluctuations. Removing traditional dielectric materials in favor of air striplines and an aluminum-only design gives us high efficiency. It's also a phased array, which means it uses a cluster of smaller antennas to create steerable, tightly focused signals. The nature of such an array is that each individual cell needs to handle only a fraction of the total transmission power. So while each individual cell can handle only a minuscule amount of power, each subarray can handle more than 6 kilowatts. That's still low—much lower than the megawatt-range transmissions from Mars rovers—but it's enough for the modest downlink rates I mentioned above. And finally, because the antenna is made of metal, it expands and contracts uniformly as the temperature changes. In fact, one of the reasons we picked aluminum is because the metal does not expand or contract much as temperatures change.

When I originally proposed this antenna concept to the Europa lander project, I was met with skepticism. Space exploration is typically a very risk-averse endeavor, for good reason—the missions are expensive, and a single mistake can end one prematurely. For this reason, new technologies may be dismissed in favor of tried-and-true methods. But this situation was different because without a new antenna design, there would never be a Europa mission. The rest of my team and I were given the green light to prove the antenna could work.

Designing, fabricating, and testing the antenna took only 6 months. To put that in context, the typical development cycle for a new space technology is measured in years. The results were outstanding.


This exploded view [above left] of an 8-by-8 subarray of the antenna shows the unit cells [top layer] that work together to create steerable signal beams, and the three layers of the power divider sandwiched between the antenna’s casing. The power divider [above right] for an 8-by-8 subarray splits the signal power into a fraction that each unit cell can tolerate without being damaged.

Our antenna achieved the 80 percent efficiency threshold on both the send and receive frequency bands, despite being smaller and lighter than other antennas. It also doesn't require a delicate gimbal to point it toward Earth. Instead, the antenna's subarrays act as a phased array capable of shaping the direction of the signal without reorienting the antenna.

In order to prove how successful our antenna could be, we subjected it to a battery of extreme environmental tests, including a handful of tests specific to Europa's atypical environment. One test is what we call thermal cycling. For this test, we place the antenna in a room called a thermal chamber and adjust the temperature over a large range—as low as –170 °C and as high as 150 °C. We put the antenna through multiple temperature cycles, measuring its transmitting capabilities before, during, and after each cycle. The antenna passed this test without any issues.

The antenna also needed to demonstrate, like any piece of hardware that goes into space, resilience against vibrations. Rockets—and everything they're carrying into space—shake intensely during launch, which means we need to be sure that anything that goes up doesn't come apart on the trip. For the vibration test, we loaded the entire antenna onto a vibrating table. We used accelerometers at different locations on the antenna to determine if it was holding up or breaking apart under the vibrations. Over the course of the test, we ramped up the vibrations to the point where they approximate a launch.

Thermal cycling and vibration tests are standard tests for the hardware on any spacecraft, but as I mentioned, Europa's challenging environment required a few additional nonstandard tests. We typically do some tests in anechoic chambers for antennas. You may recognize anechoic chambers as those rooms with wedge-covered surfaces to absorb any signal reflections.




An anechoic chamber makes it possible for us to determine the antenna's signal propagation over extremely long distances by eliminating interference from local reflections. One way to think about it is that the anechoic chamber simulates a wide open space, so we can measure the signal's propagation and extrapolate how it will look over a longer distance. What made this particular anechoic chamber test interesting is that it was also conducted at ultralow temperatures. We couldn't make the entire chamber that cold, so we instead placed the antenna in a sealed foam box. The foam is transparent to the antenna's radio transmissions, so from the point of view of the actual test, it wasn't there. But by connecting the foam box to a heat exchange plate filled with liquid nitrogen, we could lower the temperature inside it to –170 °C. To our delight, we found that the antenna had robust long-range signal propagation even at that frigid temperature.

The last unusual test for this antenna was to bombard it with electrons in order to simulate Jupiter's intense radiation. We used JPL's Dynamitron electron accelerator to subject the antenna to the entire ionizing radiation dose the antenna would see during its lifetime in a shortened time frame. In other words, in the span of two days in the accelerator, the antenna was exposed to the same amount of radiation as it would be during the six- or seven-year trip to Europa, plus up to 40 days on the surface. Like the anechoic chamber testing, we also conducted this test at cryogenic temperatures that were as close to those of Europa's surface conditions as possible. The reason for the electron bombardment test was our concern that Jupiter's ionizing radiation would cause a dangerous electrostatic discharge at the antenna's port, where it connects to the rest of the lander's communications hardware.

Each unit cell [above left] is pure aluminum. Collectively, they create a steerable signal by canceling out one another’s signals in unwanted directions and reinforcing the signal in the desired direction.

The antenna [above right] had to pass signal tests at cryogenic temperatures (–170 °C) to confirm that it would work as expected on Europa’s frigid surface. Because it wasn’t possible to bring the temperature of the entire anechoic chamber to cryogenic levels, the antenna was sealed in a white foam box.

Theoretically, the danger of such a discharge grows as the antenna spends more time exposed to ionizing radiation. If a discharge happens, it could damage not just the antenna but also hardware deeper in the communications system and possibly elsewhere in the lander. Thankfully, we didn't measure any discharges during our test, which confirms that the antenna can survive both the trip to Europa and the work it will do there.

We designed and tested this antenna for Europa, but we believe it can be used for missions elsewhere in the solar system. We're already tweaking the design for the joint JPL/ESA Mars Sample Return mission that—as the name implies—will bring Martian rocks, soil, and atmospheric samples back to Earth. The mission is currently slated to launch in 2026. We see no reason why our antenna design couldn't be used on every future Mars lander or rover as a more robust alternative—one that could also increase data rates to 4 to 16 times those of current antenna designs. We also could use it on future moon missions to provide high data rates.

Although there isn't an approved Europa lander mission yet, we at JPL will be ready if and when it happens. Other engineers have pursued different projects that are also necessary for such a mission. For example, some have developed a new, multilegged landing system to touch down safely on uncertain or unstable surfaces. Others have created a "belly pan" that will protect vulnerable hardware from Europa's cold. Still others have worked on an intelligent landing system, radiation-tolerant batteries, and more. But the antenna remains perhaps the most vital system, because without it there will be no way for the lander to communicate how well any of these other systems are working. Without a working antenna, the lander will never be able to tell us whether we could have living neighbors on Europa. n




BY KEITH A. BOWMAN

A Circuit to Boost Battery Life

ALL-DIGITAL VERSIONS OF THE LOW-DROPOUT VOLTAGE REGULATOR WILL SAVE TIME, MONEY, AND POWER

You've probably played hundreds, maybe thousands, of videos on your smartphone. But have you ever thought about what happens when you press "play"?

The instant you touch that little triangle, many things happen at once. In microseconds, idle compute cores on your phone's processor spring to life. As they do so, their voltages and clock frequencies shoot up to ensure that the video decompresses and displays without delay. Meanwhile, other cores, running tasks in the background, throttle down. Charge surges into the active cores' millions of transistors and slows to a trickle in the newly idled ones.

This dance, called dynamic voltage and frequency scaling (DVFS), happens continually in the processor, called a system-on-chip (SoC), that runs your phone and your laptop, as well as in the servers that back them. It's all done in an effort to balance computational performance with power consumption, something that's particularly challenging for smartphones. The circuits that orchestrate DVFS strive to ensure a steady clock and a rock-solid voltage level despite the surges in current, but they are also among the most backbreaking to design.

Photo-illustration by Edmon de Haro
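To see what DVFS amounts to in software terms, here is a minimal, hypothetical sketch: a governor that picks the slowest frequency-voltage pair that still meets the demand of the moment. The operating-point table is invented for illustration; real SoCs expose similar tables through their power-management firmware.

```python
# A minimal sketch of the DVFS idea: pick the lowest (frequency, voltage)
# operating point that still meets the workload's demand. The table below is
# hypothetical, with voltages in roughly the 0.45-0.95 V range discussed later.
OPERATING_POINTS = [  # (clock in MHz, supply voltage in volts)
    (300, 0.45),
    (800, 0.60),
    (1500, 0.75),
    (2400, 0.95),
]

def pick_operating_point(required_mhz: float) -> tuple[int, float]:
    """Return the slowest point that satisfies the demand (saving power),
    or the fastest one if nothing is fast enough."""
    for mhz, volts in OPERATING_POINTS:
        if mhz >= required_mhz:
            return mhz, volts
    return OPERATING_POINTS[-1]

print(pick_operating_point(700))    # background task -> (800, 0.6)
print(pick_operating_point(2000))   # video decode    -> (2400, 0.95)
```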



[Block diagram: an input voltage rail (VIN) feeds four cores through a head switch and LDOs; Core 1 connects directly through its head switch, while Cores 2, 3, and 4 each connect through an LDO driven by its own reference voltage.]

Low-dropout voltage regulators (LDOs) allow multiple processor cores on the same input voltage rail (VIN) to operate at different voltages according to their workloads. In this case, Core 1 has the highest performance requirement. Its head switch, really a group of transistors connected in parallel, is closed, bypassing the LDO and directly connecting Core 1 to VIN, which is supplied by an external power management IC. Cores 2 through 4, however, have less demanding workloads. Their LDOs are engaged to supply the cores with voltages that will save power.
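The rail-sharing policy described above can be sketched in a few lines of Python. This is an illustration of the idea only, not vendor firmware; the dropout value is a placeholder.

```python
# A hedged sketch of the rail-sharing policy: the shared rail VIN must satisfy
# the hungriest core, which then bypasses its LDO via the head switch; the
# remaining cores drop VIN down to their own targets through their LDOs.
LDO_DROPOUT = 0.10  # volts; illustrative value, real dropouts vary by design

def assign_supplies(core_targets_v):
    """core_targets_v: desired supply voltage per core, in volts."""
    vin = max(core_targets_v)              # rail voltage set by the external PMIC
    plan = []
    for target in core_targets_v:
        if target >= vin - LDO_DROPOUT:
            # The LDO can't regulate this close to VIN: close the head switch.
            plan.append(("head switch", vin))
        else:
            plan.append(("LDO", target))
    return vin, plan

vin, plan = assign_supplies([0.95, 0.60, 0.70, 0.50])
print("VIN =", vin, "V")
for core, (mode, volts) in enumerate(plan, start=1):
    print(f"Core {core}: {mode} at {volts} V")
```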

[Schematics: the analog LDO uses a reference voltage, an op amp, and a single power PFET between VIN and the core's VDD; the digital LDO uses a reference voltage, a clocked comparator, control logic, and a bank of power PFETs between VIN and VDD.]

The basic analog low-dropout voltage regulator [left] controls voltage through a feedback loop. It tries to make the output voltage (VDD) equal to the reference voltage by controlling the current through the power PFET. In the basic digital design [right], an independent clock triggers a comparator [triangle] that compares the reference voltage to VDD. The result tells control logic how many power PFETs to activate.

That's mainly because the clock-generation and voltage-regulation circuits are analog, unlike almost everything else on your smartphone SoC. We've grown accustomed to a near-yearly introduction of new processors with substantially more computational power, thanks to advances in semiconductor manufacturing. "Porting" a digital design from an old semiconductor process to a new one is no picnic, but it's nothing compared to trying to move analog circuits to a new process. The analog components that enable DVFS, especially a circuit called a low-dropout voltage regulator (LDO), don't scale down like digital circuits do and must basically be redesigned from scratch with every new generation.

If we could instead build LDOs—and perhaps other analog circuits—from digital components, they would be much less difficult to port, saving significant design cost and freeing up engineers for other problems that cutting-edge chip design has in store. What's more, the resulting digital LDOs could be much smaller than their analog counterparts and perform better in certain ways. Research groups in industry and academia have tested at least a dozen designs over the past few years, and despite some shortcomings, a commercially useful digital LDO may soon be within reach.

A typical system-on-chip for a smartphone is a marvel of integration. On a single sliver of silicon, it integrates multiple CPU cores, a graphics processing unit, a digital signal processor, a neural processing unit, an image signal processor, as well as a modem and other specialized blocks of logic. Naturally, boosting the clock frequency that drives these logic blocks increases the rate at which they get their work done.



But to operate at a higher frequency, they also need a higher voltage. Without that, transistors can't switch on or off before the next tick of the processor clock. Of course, a higher frequency and voltage come at the cost of power consumption. So these cores and logic units dynamically change their clock frequencies and supply voltages—often ranging from 0.95 to 0.45 volts—based on the balance of energy efficiency and performance they need to achieve for whatever workload they are assigned: shooting video, playing back a music file, conveying speech during a call, and so on.

Typically, an external power management IC generates multiple input voltage (VIN) values for the phone's SoC. These voltages are delivered to areas of the SoC chip along wide interconnects called rails. But the number of connections between the power-management chip and the SoC is limited, so multiple cores on the SoC must share the same VIN rail. They don't have to all get the same voltage, though, thanks to the low-dropout voltage regulators. LDOs, along with dedicated clock generators, allow each core on a shared rail to operate at a unique supply voltage and clock frequency. The core requiring the highest supply voltage determines the shared VIN value. The power-management chip sets VIN to this value, and this core bypasses the LDO altogether through transistors called head switches. To keep power consumption to a minimum, other cores can operate at a lower supply voltage.

Software determines what this voltage should be, and analog LDOs do a pretty good job of supplying it. They are compact, low cost to build, and relatively simple to integrate on a chip, as they do not require large inductors or capacitors. But these LDOs can operate only in a particular window of voltage. On the high end, the target voltage must be lower than the difference between VIN and the voltage drop across the LDO itself (the eponymous "dropout" voltage). For example, if the supply voltage that would be most efficient for the core is 0.85 V, but VIN is 0.95 V and the LDO's dropout voltage is 0.15 V, that core can't use the LDO to reach 0.85 V and must work at 0.95 V instead, wasting some power. Similarly, if VIN has already been set below a certain voltage limit, the LDO's analog components won't work properly, and the circuit can't be engaged to reduce the core supply voltage further. However, if the desired voltage falls inside the LDO's window, software enables the circuit and activates a reference voltage equal to the target supply voltage.
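That window arithmetic is easy to check directly. In the illustrative snippet below, the 0.15-V dropout and the 0.85-V/0.95-V example come from the text, while the lower VIN limit is a made-up placeholder, since the article doesn't give one.

```python
# Illustrative check of the analog LDO's usable window, using the article's
# example numbers. VIN_MIN_FOR_LDO is a placeholder; the article only says the
# analog circuitry stops working "below a certain voltage limit."
DROPOUT_V = 0.15
VIN_MIN_FOR_LDO = 0.65   # hypothetical lower limit, volts

def can_use_ldo(vin: float, target: float) -> bool:
    if vin < VIN_MIN_FOR_LDO:
        return False                 # analog parts won't regulate at all
    return target <= vin - DROPOUT_V  # must clear the dropout voltage

# The article's example: target 0.85 V on a 0.95 V rail with 0.15 V dropout.
print(can_use_ldo(vin=0.95, target=0.85))  # False -> core runs at 0.95 V instead
print(can_use_ldo(vin=0.95, target=0.70))  # True  -> LDO engaged, power saved
```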


How does the LDO supply the right voltage? In the basic analog LDO design, it's by means of an operational amplifier, feedback, and a specialized power p-channel field effect transistor (PFET). The latter is a transistor that reduces its current with increasing voltage to its gate. The gate voltage to this power PFET is an analog signal coming from the op amp, ranging from 0 volts to VIN. The op amp continuously compares the circuit's output voltage—the core's supply voltage, or VDD—to the target reference voltage. If the LDO's output voltage falls below the reference voltage—as it would when newly active logic suddenly demands more current—the op amp reduces the power PFET's gate voltage, increasing current and lifting VDD toward the reference voltage value. Conversely, if the output voltage rises above the reference voltage—as it would when a core's logic is less active—then the op amp increases the transistor's gate voltage to reduce current and lower VDD.

A basic digital LDO, on the other hand, is made up of a voltage comparator, control logic, and a number of parallel power PFETs. (The LDO also has its own clock circuit, separate from those used by the processor core.) In the digital LDO, the gate voltages to the power PFETs are binary values instead of analog, either 0 V or VIN. With each tick of the clock, the comparator measures whether the output voltage is below or above the target voltage provided by the reference source. The comparator output guides the control logic in determining how many of the power PFETs to activate. If the LDO's output is below target, the control logic will activate more power PFETs. Their combined current props up the core's supply voltage, and that value feeds back to the comparator to keep it on target. If it overshoots, the comparator signals to the control logic to switch some of the PFETs off.
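A behavioral sketch helps show how this bang-bang loop regulates. The toy simulation below (component values invented for illustration, and far simpler than a real transistor-level design) adds or removes one power PFET per LDO clock tick, as described above.

```python
# A behavioral sketch (not a transistor-level model) of the digital LDO loop:
# once per LDO clock tick, the comparator adds or removes one power PFET
# depending on whether VDD is below or above the reference.
C_OUT = 2e-9          # output capacitance, farads (illustrative)
I_PER_PFET = 1e-3     # current each activated PFET supplies, amperes
TICK = 10e-9          # LDO clock period, seconds
V_REF = 0.60          # target core voltage, volts

def simulate(i_load, n_ticks=200, vdd=0.60, n_on=0, n_pfets=16):
    trace = []
    for _ in range(n_ticks):
        # Comparator decision, then control logic adjusts the PFET count.
        if vdd < V_REF:
            n_on = min(n_on + 1, n_pfets)
        else:
            n_on = max(n_on - 1, 0)
        # Charge balance on the output capacitor over one tick.
        vdd += (n_on * I_PER_PFET - i_load) * TICK / C_OUT
        trace.append(vdd)
    return trace

quiet = simulate(i_load=2e-3)
busy = simulate(i_load=8e-3)
print(f"VDD at 2 mA load after 200 ticks: {quiet[-1]:.3f} V")
print(f"VDD at 8 mA load after 200 ticks: {busy[-1]:.3f} V")
```

Run it and the output hovers around the reference with a small limit-cycle ripple, the behavior whose downside is discussed below.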




Neither the analog nor the digital LDO is ideal, of course. The key advantage of an analog design is that it can respond rapidly to transient droops and overshoots in the supply voltage, which is especially important when those events involve steep changes. These transients occur because a core's demand for current can go up or down greatly in a matter of nanoseconds. In addition to the fast response, analog LDOs are very good at suppressing variations in VIN that might come in from the other cores on the rail. And, finally, when current demands are not changing much, they control the output tightly, without constantly overshooting and undershooting the target in a way that introduces ripples in VDD. These attributes have made analog LDOs attractive not just for supplying processor cores, but for almost any circuit demanding a quiet, steady supply voltage.

However, there are some critical challenges that limit the effectiveness of these designs. First, analog components are much more complex than digital logic, requiring lengthy design times to implement them in advanced technology nodes. Second, they don't operate properly when VIN is low, limiting how low a VDD they can deliver to a core. And finally, the dropout voltage of analog LDOs isn't as small as designers would like. Taking those last points together, analog LDOs offer a limited voltage window at which they can operate. That means there are missed opportunities to enable LDOs for power saving—ones big enough to make a noticeable difference in a smartphone's battery life.

Digital LDOs undo many of these weaknesses: With no complex analog components, they allow designers to tap into a wealth of tools and other resources for digital design. So scaling down the circuit for a new process technology will need much less effort. Digital LDOs will also operate over a wider voltage range. At the low-voltage end, the digital components can operate at VIN values that are off-limits to analog components. And in the higher range, the digital LDO's dropout voltage will be smaller, resulting in meaningful core-power savings.

But nothing's free, and the digital LDO has some serious drawbacks. Most of these arise because the circuit measures and alters its output only at discrete times, instead of continuously. That means the circuit has a comparatively slow response to supply voltage droops and overshoots. It's also more sensitive to variations in VIN, and it tends to produce small ripples in the output voltage, both of which could degrade a core's performance.

Of these, the main obstacle that has limited the use of digital LDOs so far is their slow transient response. Cores experience droops and overshoots when the current they draw abruptly changes in response to a change in their workload. The LDO response time to droop events is critical to limiting how far voltage falls and how long that condition lasts.

[Waveform sketch: the output voltage overshoots and droops around the reference voltage; the LDO sampling clock switches to a higher frequency during the droop.]

When a core’s current requirement changes suddenly it can cause the LDO’s output voltage to overshoot or droop [top]. Basic digital LDO designs do not handle this well [bottom left]. However, a scheme called adaptive sampling with reduced dynamic stability [bottom right] can reduce the extent of the voltage excursion. It does this by ramping up the LDO’s sample frequency when the droop gets too large, allowing the circuit to respond faster. SOURCE: S.B. NASIR ET AL., IEEE INTERNATIONAL SOLID-STATE CIRCUITS CONFERENCE (ISSCC), FEBRUARY 2015, PP. 98–99.


Conventional cores add a safety margin to the supply voltage to ensure correct operation during droops. A greater expected droop means the margin must be larger, degrading the LDO's energy-efficiency benefits. So, speeding up the digital LDO's response to droops and overshoots is the primary focus of the cutting-edge research in this field.

Some recent advances have helped speed the circuit's response to droops and overshoots. One approach uses the digital LDO's clock frequency as a control knob to trade stability and power efficiency for response time. A lower frequency improves LDO stability, simply because the output will not be changing as often. It also lowers the LDO's power consumption, because the transistors that make up the LDO are switching less frequently. But this comes at the cost of a slower response to transient current demands from the processor core. You can see why that would be, if you consider that much of a transient event might occur within a single clock cycle if the frequency is too low. Conversely, a high LDO clock frequency reduces the transient response time, because the comparator is sampling the output often enough to change the LDO's output current earlier in the transient event. However, this constant sampling degrades the stability of the output and consumes more power.
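The adaptive-sampling scheme described in the figure caption above, and in more detail below, tries to get the best of both settings. Here is a toy model of the idea: sample slowly while the output sits near its target, and switch to a much faster sampling clock the moment extra comparators see the output leave a small window. All thresholds and component values are invented, so only the qualitative comparison matters.

```python
# Toy model of adaptive sampling frequency with reduced dynamic stability:
# sample slowly while VDD is near target, but let extra "panic" comparators
# switch the loop to a much faster clock when VDD leaves a small window.
SLOW_TICK, FAST_TICK = 50e-9, 5e-9   # seconds between comparator samples
WINDOW = 0.01                        # volts; outside this band, sample faster
C_OUT, I_PER_PFET, V_REF = 20e-9, 1e-3, 0.60

def tick(vdd, n_on, i_load, dt):
    n_on = min(n_on + 1, 16) if vdd < V_REF else max(n_on - 1, 0)
    vdd += (n_on * I_PER_PFET - i_load) * dt / C_OUT
    return vdd, n_on

def worst_droop(adaptive, total_time=10e-6):
    vdd, n_on, t, vmin = V_REF, 2, 0.0, V_REF
    while t < total_time:
        i_load = 8e-3 if t > 2e-6 else 2e-3              # load step at 2 µs
        fast = adaptive and abs(vdd - V_REF) > WINDOW    # droop/overshoot detectors
        dt = FAST_TICK if fast else SLOW_TICK
        vdd, n_on = tick(vdd, n_on, i_load, dt)
        vmin, t = min(vmin, vdd), t + dt
    return V_REF - vmin

print(f"worst droop, fixed slow clock: {worst_droop(False) * 1e3:.0f} mV")
print(f"worst droop, adaptive clock  : {worst_droop(True) * 1e3:.0f} mV")
```

With these made-up numbers, the adaptive run shows a markedly smaller worst-case droop than the fixed slow clock, echoing the improvement reported below.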


[Measured waveforms: for a 1.4 mA core-current step, a basic digital LDO shows a 210 mV droop that settles in 5.8 µs; with adaptive sampling and reduced dynamic stability, the droop is 90 mV and settles in 1.1 µs.]


[Circuit diagram: a digital LDO (reference voltage, clocked comparator, control logic, power PFETs between VIN and VDD) augmented with an analog assist loop coupling the output back toward the PFET gates; the internal node is labeled VSSB.]

An alternative way to make digital LDOs respond more quickly to voltage droops is to add an analog feedback loop to the power PFET part of the circuit [top]. When output voltage droops or overshoots, the analog loop engages to prop it up [bottom], reducing the extent of the excursion. SOURCE: M. HUANG ET AL., IEEE JOURNAL OF SOLID-STATE CIRCUITS, JANUARY 2018, PP. 20–34.

[Plots: the output voltage VOUT and internal node VSSB versus time in microseconds, with and without the analog assist.]

The gist of this approach is to introduce a clock whose frequency adapts to the situation, a scheme called adaptive sampling frequency with reduced dynamic stability. When voltage droops or overshoots exceed a certain level, the clock frequency increases to more rapidly reduce the transient effect. It then slows down to consume less power and keep the output voltage stable. This trick is achieved by adding a pair of additional comparators to sense the overshoot and droop conditions and trigger the clock. In measurements from a test chip using this technique, the VDD droop reduced from 210 to 90 millivolts—a 57 percent reduction versus a standard digital LDO design. And the time it took for voltage to settle to a steady state shrank to 1.1 microseconds from 5.8 µs, an 81 percent improvement.

An alternative approach for improving the transient response time is to make the digital LDO a little bit analog. The design integrates a separate analog-assisted loop that responds instantly to load current transients. The analog-assisted loop couples the LDO's output voltage to the LDO's parallel PFETs through a capacitor, creating a feedback loop that engages only when there is a steep change in output voltage. So, when the output voltage droops, it reduces the voltage at the activated PFET gates and instantaneously increases current to the core to reduce the magnitude of the droop. Such an analog-assisted loop has been shown to reduce the droop from 300 to 106 mV, a 65 percent improvement, and overshoot from 80 to 70 mV (13 percent).

Of course, both of these techniques have their drawbacks. For one, neither can really match the response time of today's analog LDOs. In addition, the adaptive sampling frequency technique requires two additional comparators and the generation and calibration of reference voltages for droop and overshoot, so the circuit knows when to engage the higher frequency. The analog-assisted loop includes some analog components, reducing the design-time benefit of an all-digital system.

Developments in commercial SoC processors may help make digital LDOs more successful, even if they can't quite match analog performance. Today, commercial SoC processors integrate all-digital adaptive circuits designed to mitigate performance problems when droops occur. These circuits, for example, temporarily stretch the core's clock period to prevent timing errors. Such mitigation techniques could relax the transient response-time limits, allowing the use of digital LDOs and boosting processor efficiency. If that happens, we can expect more efficient smartphones and other computers, while making the process of designing them a whole lot easier. n



NEWS

WHITE-HOT BLOCKS AS RENEWABLE ENERGY STORAGE? CONTINUED FROM PAGE 11

In the end, heating carbon blocks won for its impressive energy density, simplicity, low cost, and scalability. The energy density is on par with lithium-ion batteries at a few hundred kilowatt-hours per cubic meter, hundreds of times higher than pumped hydro or gravity, which also "need two reservoirs separated by a mountain or a skyscraper-sized stack of bricks," Briggs says. Antora uses the same graphite blocks that serve as electrodes in steel furnaces and aluminum smelters. "[These] are already produced in 100-million-tonne quantities so we can tap into that supply chain," he says. Briggs imagines blocks roughly the size of dorm fridges packed in modular units and wrapped in commonly used industrial insulating materials.

"After you heat this thing up with electricity, the real trick is how you retrieve the heat," he says. One option is to use the heat to drive a turbine. But Antora chose thermophotovoltaics, solar-cell-like devices that convert infrared radiation and light from the glowing-hot carbon blocks into electricity. The price of these semiconductor devices drops dramatically when made at large scale, so they work out cheaper per watt than turbines. Plus, unlike turbines, which work best when built big, thermophotovoltaics perform well regardless of power output.

Thermophotovoltaics have been around for decades, but Antora has developed a new system. Richard Swanson, one of the company's advisors, was an early pioneer of the technology in the late 1970s. The devices had been converting heat into electricity at efficiency percentages stuck in the 20s until the Antora team demonstrated a world record of more than 30 percent in 2020. They did that by switching from silicon to higher-performance, III-V semiconductors, and by using tricks like harnessing lower-energy infrared light that otherwise passes through the semiconductor and is lost. Antora's system recuperates that heat by placing a reflector behind the semiconductor to bounce the infrared rays back to the graphite block.
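As a rough sanity check on those energy-density figures, the sensible heat stored in a hot block is just density times specific heat times temperature swing. The values below are generic handbook-style numbers for graphite and an assumed temperature swing, not Antora's specifications.

```python
# Sensible-heat storage: E = rho * c_p * dT. Property values are approximate
# handbook numbers for graphite; the temperature swing is a guess, since the
# article doesn't state Antora's operating range.
rho = 1800.0      # kg/m^3, typical bulk graphite density
c_p = 1.5e3       # J/(kg*K), rough average specific heat over a wide range
delta_t = 1500.0  # K, assumed swing between "charged" and "discharged" blocks

joules_per_m3 = rho * c_p * delta_t
kwh_per_m3 = joules_per_m3 / 3.6e6
print(f"thermal storage density ~ {kwh_per_m3:.0f} kWh per cubic meter")
# Roughly 1,100 kWh/m^3 of heat; at the ~30 percent heat-to-electricity
# conversion mentioned above, that lands in the few-hundred-kWh/m^3 range
# the story cites.
```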


The technology has caught on. Antora has received early-stage funding from ARPA-E, and was part of the 2017 Activate entrepreneurial fellowship program. The company was also selected for the Shell GameChanger Accelerator, administered by NREL, the U.S. National Renewable Energy Laboratory. More recently, it has gotten funding from venture capitalists and the California Energy Commission to scale up its technology, and will build a pilot system at an undisclosed customer site in 2022.

Electrified Thermal Solutions, which is part of Activate's 2021 cohort and was founded in 2020, takes a slightly different approach. The company's cofounders, Joey Kabel and Daniel Stack, chose ceramic blocks as their thermal storage medium. Specifically, they picked honeycomb ceramic blocks used today to capture waste heat in steel plants. Since ceramics don't conduct electricity, the bricks are doped to make them conductive so that they can be electrically heated to 2,000 °C. Stack says they plan to target a wide market for that stored heat.

They could use it to drive a gas turbine for electricity, or to run any other high-temperature process, such as producing cement and steel. The duo is still working out some technical challenges, such as keeping the ceramic from oxidizing and vaporizing over time. Eventually the system should have a lifetime of 20-plus years, another big advantage over batteries. They are now building a benchtop prototype, Kabel says, but the final full-scale system will look like a large grain silo that should store about 600 kilowatt-hours per cubic meter, matching Antora's energy density.

It will be a few years before either company is ready to build a full-scale installation. If they can prove themselves, though, either or both could pave the way for a cost-effective storage technology for the 21st-century electrical grid. "We want to decarbonize the industrial and electric sector by replacing the combustion process with a renewable heating system," Stack says. n

JOURNAL WATCH

Building DNA Logic

Biological circuits, made of synthetic DNA, are still early-stage technology but have already been mooted as tests for diagnosing cancer and identifying internal trauma, including brain injuries. A team led by Renan Marks, an adjunct professor of the Faculty of Computing at the Universidade Federal de Mato Grosso do Sul, in Brazil, initially created a software program called DNAr, which researchers can use to simulate various chemical reactions and subsequently design new biological circuits. Now they've developed a software extension for the program, called DNAr-Logic, that allows scientists to describe their desired circuits at a high level. The software takes this high-level description of a logical circuit and converts it to chemical-reaction networks that can be synthesized in DNA strands.
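To give a flavor of what compiling logic into chemistry means, a single idealized reaction A + B → C behaves like an AND gate when the presence of a species stands for logical 1. The toy mass-action simulation below is a generic illustration, not the encoding DNAr-Logic actually generates.

```python
# Toy mass-action simulation of an AND-like reaction A + B -> C.
# Output species C accumulates only when both inputs are present.
def and_gate(a0: float, b0: float, k: float = 1.0, dt: float = 0.01, steps: int = 2000) -> float:
    a, b, c = a0, b0, 0.0
    for _ in range(steps):
        rate = k * a * b          # mass-action kinetics
        a -= rate * dt
        b -= rate * dt
        c += rate * dt
    return c

for a0, b0 in [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]:
    print(f"A={a0:.0f}, B={b0:.0f} -> C ≈ {and_gate(a0, b0):.2f}")
```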

Marks says an advantage of his team's new software extension is that it will allow scientists to focus more on designing the circuits, rather than worry about the calculations and details of the chemical chain reactions. "They can design and simulate [biological circuits] using DNAr-Logic without previous knowledge in chemistry and without writing hundreds of reactions—and differential equations needed to simulate its dynamic behavior—by hand," he says. Marks adds that his group was able to use DNAr-Logic to design some synthetic biological circuits capable of generating up to 600 different reactions. "I plan to use DNAr as a framework to assist in researching and developing new circuits based on algorithms that can help health professionals diagnose illnesses faster and be more effective in health treatments," Marks says. "The software lifts the burden of chemical reactions details from the scientist's shoulders." —Michelle Hampson


New York, NY (MSG Entertainment Group, LLC) – Director, Systems Engineer - Provide direct support to the VP of Corporate Application for the organization’s human resource information systems, including system maintenance, updates, administration, as well as implementation, configuration and support of cloud based human capital-related software development and integrations. Develop, create, and modify general computer applications software or specialized utility programs. Analyze user needs and develop software solutions. Min Req: Bachelor’s degree in Information Technology, Management Information Systems, or a related technical field and 5 years of corporate system applications, systems integration and Oracle Cloud Fusion HCM experience. Experience must include working with HCM Extract, Oracle Fusion Middleware tools, Oracle Service-Oriented Architecture, Fast Formulas, analyzing and designing new enhancements for IT landscapes and preparing and conducting requirements walk-throughs in order to clarify requirements by using systems to drill down explicit and implicit requirements. Qualified applicants send resumes to: Emily Pantofel, Job Code: DR512, MSG Entertainment Group, LLC, 2 Penn Plaza, New York, NY 10121.

Associate/Assistant Professor in Ocean and Coastal Environment (Ref. No.: FST/CEE/AAP/06/2021) The Department of Civil and Environmental Engineering (CEE) of the University of Macau (UM) invites applications for the position of Associate/Assistant Professor in Ocean and Coastal Environment whose research focus should be in ocean environment, ocean ecology, coastal and offshore related disciplines for the newly established Centre for Regional Oceans (CRO). CEE offers both undergraduate and postgraduate programmes. Currently, it has 21 full-time academic staff, around 230 undergraduates and 110 graduate students. UM is among the top 1% in ESI rankings in Engineering. In the THE World University Rankings, the Engineering and Technology programme is ranked among the 126th–150th. CRO is a newly established research centre with the purpose of advancing the ocean-related research of UM. CRO is expected to deliver impactful research in ocean environment, ocean ecology, coastal and offshore engineering. The candidates must have an earned PhD degree in related areas. Preference will be given to candidates with specialties in ocean and coastal environment. Applicants should visit https://career.admo.um.edu.mo/ for more details, and apply ONLINE.

TAP. CONNECT. NETWORK. SHARE. Connect to IEEE–no matter where you are–with the IEEE App. Stay up-to-date with the latest news

Schedule, manage, or join meetups virtually

Get geo and interest-based recommendations

Read and download your IEEE magazines

Create a personalized experience

Locate IEEE members by location, interests, and affiliations



HISTORY IN AN OBJECT

A Tool for Modern Times

On 12 August 1981, at the Waldorf Astoria Hotel in midtown Manhattan, IBM unveiled its entrant into the nascent personal computer market: the IBM PC. With that, the preeminent U.S. computer maker launched another revolution in computing.


BY JAMES W. CORTADA

Soon the world began embracing little computers by the millions, with IBM dominating those sales. Other companies, including Apple and Tandy Corp., were already making personal computers, but no other machine carried the revered IBM name. IBM’s essential contributions were to position the technology as suitable for wide use and to set a technical standard.

Rivals were compelled to meet a demand that they had all grossly underestimated. As such, IBM had a greater impact on the personal computer’s acceptance than did Apple, Compaq, Dell, and even Microsoft. And yet IBM’s lead in this new market would last only a few years, and the eclipse of the IBM PC came to mirror the decline of Big Blue. n

FOR MORE ON THE HISTORY OF THE IBM PC, SEE spectrum.ieee.org/pastforward-aug2021

MARK RICHARDS/COMPUTER HISTORY MUSEUM



IEEE & ME

One Program. Seven Reasons

1. Affordability: The buying power of a large IEEE membership base helps keep costs down with affordable group rates. Compare your existing coverage to our group rates and see if you can save.

2. Portability: Benefits are portable and go with you wherever you go (they are not tied to any employer).

3. Value: Plans are customized to the specific needs of IEEE members, and rigorously monitored to be sure they continue to offer quality and value.

4. Advocacy: You can always rely on advice from IEEE Member Group Insurance Program coverage professionals, if you have questions.

5. Trust: For more than 55 years, the IEEE Member Group Insurance Program has delivered tailored plans to members and their families.

6. Stability: IEEE Member Group Insurance Program only works with companies showing stability and bearing the highest rankings from industry standards.

7. Convenience: Access plans, rates, enrollment and more online 24/7!

As an IEEE member, there are many advantages of the IEEE Member Group Insurance Program. Explore your benefits today!* For more information** about the IEEE Member Group Insurance Program, Call 1-800-493-IEEE (4333) or visit IEEEinsurance.com/Reasons *Please check the website for plan availability in your region, as coverage may vary or may only be available in some areas of the United States (except territories), Puerto Rico and Canada (except Quebec). This program is administered by Mercer Consumer, a service of Mercer Health & Benefits Administration LLC. For life and health insurance plans, Mercer (Canada) Limited, represented by its employees Nicole Swift, Pauline Trembly and Suzanne Dominico, acts as broker with respect to residents of Canada. For the Professional Liability Insurance Plan, Marsh Canada Limited acts as the insurance broker with respect to residents of Canada. **For information on features, costs, eligibility, renewability, limitations and exclusions.

The IEEE Member Group Insurance Program is Administered by Mercer Health & Benefits Administration LLC, 12421 Meredith Drive, Urbandale, IA 50398 AR Insurance License #100102691 • CA Insurance License #0G39709

In CA d/b/a Mercer Health & Benefits Insurance Services LLC

92878 (8/21) Copyright 2021 Mercer LLC. All rights reserved.


MATLAB SPEAKS DEEP LEARNING

With MATLAB,® you can build and deploy deep learning models for signal processing, reinforcement learning, automated driving, and other applications. Preprocess data, train models, generate code for GPUs, and deploy to production systems.

mathworks.com/deeplearning

Semantic segmentation for wildlife conservation.

©2021 The MathWorks, Inc.


