Commemorative Issue®
Why the AI Revolution Is Only the Beginning: page 4
Intel's Star Chip Architect on What Comes Next: page 17
Laying Down a Law: Gordon Moore's Original Essay: page 20
Winter 2019 Free An Intel Publication
This commemorative magazine celebrates the 50th anniversary of Intel and the longevity of Moore's Law. In tribute, we have designed it in the style of Electronics magazine in 1965, the year Gordon Moore's famous original essay was published.
Table of Contents
Why the AI Revolution Is Only the Beginning, by John Markoff ............... 4
We're Living in the New Era of Computing, by Sam Madden .................... 8
A Way Around Moore's Law? Intel's Chiplet Ace Explains, by Jonah Freedman .. 11
How Robots Got Better at Becoming Your Partner, by Anca Dragan ............. 14
Intel Chip Architect Jim Keller on What Comes Next, by Walden Kirsch ....... 17
Laying Down a Law: The Essay That Started It All, by Gordon Moore .......... 20
Why Moore Is Still Our Guide

When Intel co-founder Gordon Moore wrote his now-famous essay in 1965, he had a vision of a future where virtually every activity would interact with computing. That day has arrived. We're at the starting line of a generation-defining digital transformation where a data-centric world is driving demand for all kinds of devices powered by computing, from cars to appliances to personal communication and everywhere in between.

We celebrated Intel's 50th anniversary in 2018, and we're even more excited about what the future brings. But we wouldn't be where we are today without Moore's vision. That's why I couldn't be more pleased to welcome you to this commemorative magazine, which pays homage to his original essay, faithfully reprinted from the April 19, 1965 issue of Electronics. Your eyes don't deceive you: This entire magazine is designed in that retro format. It's a subtle nod to our past, and a celebration of how profoundly computing technology has changed the world over the last half-century.

But this is no reboot. We're again looking into the future. In the pages that follow, you'll also read about some of the latest developments in artificial intelligence, robotics, sensor computing and more, all written by some of the most innovative minds in tech and academia today. Much like Moore, these are our visions of what is possible with data and technology, and how its applications will continue to change our world for the better. It's proof positive that the next 50 years will be just as exciting as the last 50.

I sincerely hope you enjoy this magazine as much as we enjoyed putting it together. And as for the journey ahead, we can't wait to take it with you.
Dr. Murthy Renduchintala
Group President, Technology, Systems Architecture & Client Group
Chief Engineering Officer
Intel Corporation
The Revolution Is Bigger Than You Know
AI technology has already had a profound effect on chip-making, but there are bigger surprises on the way.
By John Markoff
The artificial intelligence revolution isn't coming—it's already here. The dramatic success of AI has transformed Silicon Valley over the past half-decade. Seemingly every shade of tech company—from startups all the way up to the dominant players—is pursuing big data, powerful pattern recognition algorithms and specialized silicon accelerators. What's less clear is whether the AI renaissance is the future or simply a step on a path toward something even more radical.

Several years ago, I attended a meeting of semiconductor industry partners at Stanford University. Much of the event was taken up with hand-wringing about the impending end of Moore's Law. For decades, the industry designed exponentially more powerful computer processors and memories—and like clockwork, the advances made possible the creation of entire new industries with each successive microchip generation. Video game consoles, personal computers, MP3 players and smartphones all came in quick succession as Moore's Law drove down the cost of silicon chips. Not only did processors get exponentially faster, their cost fell nearly as fast. But as the chip makers approached fundamental atomic limits, there were increasing concerns that innovation would stall.

Over dinner that night at Stanford, however, I met a young computer architect who was overjoyed by the end of what he thought of as a "free ride." Enthusiastically, he mused, "Now it's our turn!" In his view, it would soon be possible to explore radical new computer design ideas. It turned out that young man saw the future.

It's been more than 60 years since Cornell University psychologist Frank Rosenblatt unveiled the Perceptron, a single-layer neural network that was meant to be the beginnings of a computing device that would model the human brain. Newspaper accounts at the time reported that the Perceptron would soon lead to devices that would "walk, talk, see, write, reproduce itself and be conscious of their own existence." In the years since Rosenblatt's invention, there has been a metronome-like cycling back and forth between specialized computer hardware and general purpose processors. With the advent of the commercial microprocessor in the
AI capability has found its way into processors over the past decade. That in turn has changed how they’re engineered.
early 1970s, and the relentless cadence of Moore's Law, special purpose hardware was soon made obsolete by the exponential improvements in computing speed and memory.

Along the way, there have been repeated efforts to take ideas from the world of neuroscience and use them to inspire computing designs. For example, in the mid-1980s, MIT computer scientist Danny Hillis simulated the parallelism found in the brain and designed the CM-1 supercomputer that was based on 65,536 1-bit processors. Several years later, Caltech physicist Carver Mead—who originally coined the phrase "Moore's Law"—teamed up with Federico Faggin, one of the designers of Intel's first microprocessor. Their firm was early to explore the potential of neural networks.

The slowing of Moore's Law and the emergence of powerful machine intelligence techniques has led to an explosion of new design ideas as chip makers, computer makers, car makers and even toy makers have moved to capitalize on their ability to blend powerful pattern recognition systems, big data and inexpensive computing systems. Put that through the lens of the impact of Deep Learning systems, powerful silicon-based pattern-recognition systems that are transforming both the computing world as well as the entire economy. In all, we're experiencing a shift that foretells, at a minimum, the arrival of the era of intelligent machines—and there's possibly an even more radical transformation of computing on the horizon.

Intel has, in many ways, been a bellwether for that changing landscape, beginning with the strategic acquisition of reprogrammable chip-maker Altera in 2015 and continuing with a rapid succession of other AI acquisitions, such as Nervana™, Movidius, Mobileye and Vertex.AI. But the giant chipmaker is also running with a wave of other innovative tech companies that are designing their own artificial intelligence accelerators, such as Amazon, Apple, Google, Microsoft and Nvidia.

Neural engines have become key components of silicon design, which now can enable applications such as autonomous driving.
The scale of the impact of the silicon design renaissance can be seen in Apple's recent addition of a "neural engine" to the microprocessor used to power the newest iPhone. As I've previously reported, the newest version of the specialized processor occupies only a small area on Apple's chip, but it has more equivalent computing power than the world's fastest supercomputer just two decades ago. That processor in particular is a good example of how AI accelerators are being used to drastically improve the performance of everyday consumer applications such as photo processing, video games, augmented reality and speech recognition. In a similar fashion, Intel's acquisition of Mobileye promises a wave of new silicon that will help make driving safer and create a path toward self-driving vehicles.

The pace of AI-focused design has accelerated significantly as the semiconductor industry has pushed ever closer to atomic limits. It's really only been since 2006 that AI technology found its way into processors, dramatically accelerating the speed of neural networks. And in many ways, AI is still in its early days with massive amounts of infrastructure still in development.

Since then, the practical applications of speech recognition and computer vision have come to the fore as the most advanced AI techniques have found their way into an ever-widening consumer market. That in turn has resulted in a remarkable proliferation of new processor designs, making possible—among other new markets—a world of increasingly intelligent "things."

Where will this new wave of silicon end? I've been around Silicon Valley long enough to understand that there are inevitably technology surprises around every corner. The new freedom realized by processor architects has raised the possibility of even more radical departures in computer design and given new opportunities to those who were until recently seen as heretics.
Researchers at MIT's Center for Bits and Atoms are working on machines that can assemble their own parts, and perhaps themselves.
Not long ago, I had breakfast with Neil Gershenfeld, a physicist who runs the Center for Bits and Atoms at MIT. Gershenfeld is a deep critic of the modern field of computer science, but he believes now is the time to create a new generation of intelligent machines that not only blur the distinction between computing and storage, but "action" as well. Taking his inspiration not just from biology, but from "reality," he has begun to design computing devices that erase the line between computation and mechanical action.

Gershenfeld's lab, working alongside NASA and the Pentagon, is designing silicon "assemblers" that may one day represent the future of both computing and manufacturing. Such next-generation manufacturing systems, he asserts, will completely change both worlds. If he's right, the next computing wave may be far more profound than thinking machines. Several years ago, Gershenfeld sat with Intel co-founder Gordon Moore and showed him a chart that represented how his idea is on the same exponential curve that has driven the semiconductor industry for the last half-century.

"Personal fabrication will mean that anybody will be able to make almost anything, anywhere," Gershenfeld said recently. "That will fundamentally change what is work, what is education, what are supply chains and how we organize society."

If he's right, the enthusiasm for AI currently sweeping Silicon Valley could be just a step on the way to something even more earth-shattering.

John Markoff has written about technology and science since 1977. He was a reporter for The New York Times for 30 years and was part of a team that won a Pulitzer Prize for Explanatory Reporting in 2013 for its coverage of automation. He is currently a visiting scholar at the Center for Advanced Study in the Behavioral Sciences at Stanford University.
The views, assumptions and opinions expressed in this article are those of the author and do not necessarily reflect those of Intel.
We're Living in the New Era of Computing
A decade ago, the world wasn't quite ready for sensor computing. That's all about to change.
When I was a graduate student at the turn of the millennium—way back before smartphones and smart cars were a thing, and it was still considered cool to use Comic Sans in PowerPoint presentations—many computer scientists, including me, were captivated by the idea of what was called “sensor computing” or “sensor networking.” With the availability of low-power processors, wireless radios and a variety of sensors, it became possible to use networks of battery-powered devices to measure and sense the world. In my doctorate work, the applications of this technology included habitat monitoring on remote islands, measuring light and humidity in redwood forests, recording vibration on the Golden Gate Bridge and tracking vehicles as they drove through the desert. We built early prototypes of these systems, but ultimately we were limited by the hardware and networks of the time. Most deployed sensor systems could do little more than collect simple measurements like temperature, humidity
By Sam Madden
or occupancy of an environment a few times per minute. That limited the real-world impact of the technology, despite tremendous interest in academic circles.

Modern computing devices, however, provide a lot more muscle toward these audacious goals—they're dramatically more powerful, and can capture high-resolution, high-rate video and audio, as well as high-precision location data from GPS and a variety of other ranging and positioning technologies. These sensors, coupled with the ubiquity of high-bandwidth wireless networks, energy-efficient processors on phones and cars, and new algorithms for making sense of this data, have sped up the ability to deliver on the promise of sensor computing. In short, the muscle we were waiting for finally showed up.

In this new age of sensor computing, we'll see computers even further integrated into our lives than they already are, and perhaps at a faster pace than we expected. Some of these shifts are already happening. For example, sensors and algorithms are enhancing safety features in cars to the point where truly autonomous automobiles are expected to arrive within the next few years. They're also present in industrial applications, where a combination of high-resolution sensors and sophisticated algorithms can improve efficiency and optimize production. In healthcare, biotech and industrial engineering, companies like Novartis
Advances in sensor technology have accelerated to where truly autonomous cars are projected to be only a few years away.
and BASF are beginning to apply similar advanced imaging techniques and advanced algorithms to synthesize and predict the properties of new compounds and materials.

There has been plenty written about how AI will change the modern world. Let's look instead at the major shifts that will power these changes. First, let's consider the new algorithms for processing data—particularly the techniques for perceiving patterns and objects in images and other rich sensor data. Second, take the rise of powerful sensing and "edge" computing devices in phones, cars and other mobile devices. There is a shift towards more heterogeneous processing, which is being driven by the effectiveness of hardware specialized to take on these perceptual tasks. These new technologies are complementary, and they drive a cycle of innovation similar to how the internet drove the rapid development of server-side computing technology. That, in turn, enabled massive new software businesses built around ubiquitous connectivity to thrive. We're about to experience a similar effect now.

On the software side, the way these new applications make use of data is dramatically different than the simple processing we saw in sensing applications a decade ago. In particular, there has been a boom in the availability of massive multimedia datasets and in software systems capable of storing this data. Those have allowed new, so-called "deep" neural networks: machine-learning techniques that allow algorithms to excel at certain perceptual tasks, such as recognizing objects, reading signs or converting speech to text—in many cases, performing as well as or better than humans. These capabilities are critical in these new applications where we depend on computers to interact with humans in natural and humanistic ways, such as driving cars, processing medical imagery or operating robotic equipment alongside humans.

On the hardware front, these new applications also have driven a number of technological shifts, requiring new storage and low-power processing technologies for use in edge devices, as well as a diversity and heterogeneity of processing technology to power perceptual algorithms.
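To make this concrete, here is a minimal sketch of the kind of perceptual model described above: a pretrained convolutional network labeling a single image. The choice of ResNet-18 and the file name are illustrative assumptions, not a reference to any particular deployed system.

```python
# A minimal sketch of a "deep" perceptual model: a pretrained
# convolutional network assigning a class label to one image.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()  # inference only: the network is used purely for perception

# Standard ImageNet preprocessing: resize, crop, scale and normalize.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = preprocess(Image.open("street_scene.jpg")).unsqueeze(0)  # hypothetical file
with torch.no_grad():
    logits = model(image)
print("predicted class index:", logits.argmax(dim=1).item())
```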
Hardware, data and AI work in a complementary cycle where advancements in one area can help in the evolution of the other two.
Consider autonomous vehicles: There may be several general-purpose CPUs on a car, responsible for overall planning and coordination of processing. There may also be a number of specialized processors—such as GPUs, Google's tensor processing unit (TPU) or Intel's Nervana—for very efficient evaluation of deep nets to process video and audio to identify objects, recognize scenes or interpret speech. Server-side technology used to coordinate and share data between vehicles will in turn use a variety of processing technologies, including high-power server-class processors as well as more advanced versions of the specialized processors to run the perceptual algorithms on the cars themselves.
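A toy sketch of that division of labor follows; the interfaces are hypothetical stand-ins rather than a real automotive stack, with perception delegated to an accelerator-style component and planning kept in general-purpose code.

```python
# Illustrative split between specialized perception and general-purpose
# planning. Both functions here are stubs standing in for real systems.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    confidence: float

def perceive_on_accelerator(camera_frame) -> list[Detection]:
    # Stand-in for a deep net evaluated on a GPU/TPU/NPU-class device.
    return [Detection("pedestrian", 0.97), Detection("stop_sign", 0.91)]

def plan_on_cpu(detections: list[Detection]) -> str:
    # General-purpose logic: fuse perception results and pick a maneuver.
    if any(d.label == "pedestrian" and d.confidence > 0.9 for d in detections):
        return "brake"
    return "cruise"

frame = object()  # placeholder for a real camera frame
print(plan_on_cpu(perceive_on_accelerator(frame)))  # -> "brake"
```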
This new hardware, the data processing and collection techniques, and the perceptual algorithms are locked in a virtuous cycle. New deep perceptual algorithms work because of the availability of both massive amounts of data and powerful specialized hardware. These algorithms enable new applications—such as autonomous driving and personalized healthcare—that produce even more massive amounts of data and further the need for specialized hardware. And that in turn makes the algorithms better.

This cycle of technology behind this new era of computing applications helps make our world a safer, more connected place. It creates exciting new businesses and alters the way many traditional enterprises—from hospitals to car makers to insurance companies—operate. These changes also bring to fruition the vision of sensor computing that inspired me as a young graduate student. Even if I had to learn on my own that Comic Sans wasn't, in fact, all that cool.
Samuel Madden is a professor of electrical engineering and computer science at MIT’s Computer Science and Artificial Intelligence Laboratory. He is also a founder of several companies, including Vertica, a next-generation analytical relational database system, and Cambridge Mobile Telematics, which develops technology to make roads safer by making drivers better. His research interests include databases, networking and sensor computing.
The views, assumptions and opinions expressed in this article are those of the author and do not necessarily reflect those of Intel.
You May Say I’m a Dreamer
Ramune Nagisetty is out to prove there’s more to Moore’s Law. As senior principal engineer in Intel’s Technology and Innovation office, she started leading Intel’s vision on chiplets eight years ago. The timing was no coincidence, as there has been an explosion in different types of computing: wearables, mobile, ambient computing, PC, the cloud and so on. In short, the days of the “one or two sizes fit all” approach are coming to an end. Nagisetty and her team are finding ways to
Intel’s lead on chiplets thinks Moore got it right—even if he couldn’t picture what the future looked like.
address that changing nature of computing by using chiplets and package integration. In other words, instead of shrinking a single chip indefinitely—the central concept of Moore’s Law— chiplets are used for heterogeneous integration, to mix and match different functions to meet a wider variety of product requirements. The implications are, as Nagisetty says, “huge, because it changes how we conduct business and how we create products.”
By Jonah Freedman
She would know all about that. Nagisetty has been at Intel for more than 20 years across multiple workflows, many of which touch directly on silicon. But she’s also been mapping the future, not just in her strategy roles, but also in the advancement of women in the tech industry. We caught up with her to talk about some of those future visions, including why chiplets are a path forward, what other developments are on the horizon and what advice she’d give young women aspiring to a career in engineering.
When Gordon Moore wrote his original essay in 1965, do you think he foresaw something like chiplets, which is almost a way around what we now call Moore's Law?

I don't think Gordon Moore was ever expecting to create a "law," and he certainly didn't expect his paper would have a longevity of more than 50 years. But there are a few sentences in it where he actually predicted this direction we're going in now. He didn't use exactly the same words that we use today, but he did write, "It may prove to be more economical to build large systems out of smaller functions, which are separately packaged and interconnected. … The availability of large functions would allow the manufacturer of large systems to design and construct a variety of equipment, both rapidly and economically." Essentially, this is what we're doing with chiplets. So, yes, I would say this is an evolution of Moore's Law.
It sounds like you're suggesting he indeed almost envisioned technology like chiplets as a game-changer.

Somewhat. Ironically, I think science fiction authors are better at predicting the future than technologists and engineers. I remember some quote from way back when people weren't really sure what the practical use of a PC would be, and someone speculated they might be used for collecting recipes for cooking. I remember even being in a meeting where people were asking, "Why would anyone want a camera in a phone?" As technologists, we're always thinking in terms of constraint. Picturing the future often requires the imagination of people who are outside the industry who don't think in any constrained kind of way. So in many ways, the dreamers have been better at predicting the future than the actual technologists.

As technologists, we're always thinking in terms of constraint. Picturing the future often requires the imagination of people who are outside the industry.

Speaking of the future: Besides chiplets, what are some other emerging technologies you're really excited about?

I'm still really interested in wearable computing, which I spent five years working on. There will be a time when everyone will have adopted wearable devices, and I think it's going to be the really big breakthrough in understanding how we live, how we grow and how our health changes. That's something that will potentially take a while, but I think it's just a matter of time. I was working with university researchers to create flexible, wearable sensors that could be integrated into smart Band-Aids, which could be used to, for example, monitor a healing wound or measure blood oxygenation while a person is exercising. Our health monitoring is very narrow right now, and there's a next generation of developments in wearables that will teach us a lot about human physiology and how varied it is from person to person and in different types of conditions.

Let's shift to women in technology. There are more women in prominent positions in the industry these days, but that's been a big shift from the beginnings of Silicon Valley. You made your way up the ladder through two-plus decades in tech. What has your experience been like?

I've encountered a lot of hurdles, but I don't necessarily always attribute it to being a woman. I have a natural inclination to work against the boundaries of the status quo, and being a woman is just a part of that. Things have improved, but honestly, Intel has given me tons of opportunities for both technical and personal development. In my 23 years here, I would say I've had at least three, if not four careers, and I think that speaks a lot to how interesting it is to work in tech. That said, I know I've been fortunate. So part of my work has also been in mentorship activities, and a lot has changed there, too. In 2006, we started the Women's Principal Engineers Forum. Today, it's called the Women's Principal Engineers and Fellows Forum because now we actually have women Fellows. So there have certainly been strides. One of the things we'll face in the future is being able to find the highly talented workforce that a company like Intel relies on. And if we're only tapping into half the adult working population, we're going to come up short. So this is turning into not just the right thing to do, but a business imperative as well.

Right, as you mentioned, you've been involved in a lot of professional organizations for women in tech, including the Society of Women Engineers. Is there a collective sense the playing field is beginning to even out?

The biggest indicator I have that things have shifted is comparing my experience to my mother's. She was an engineer at Ford Motor Company and had master's degrees in physics, electrical engineering and computer science,
Nagisetty began her career at Intel in 1995 as a transistor engineer. Today, she leads the company’s vision around chiplet technology.
yet she still faced really formidable challenges. At that time, there was very little awareness of any kind of issue and there was really no desire to change. In just one generation, my experience and hers have been totally different. My generation has benefited from women like my mother who led the way and had a much harder time. I think the trend will continue, and I think the next generation will benefit from what's going on right now. Part of that is the change in awareness and the desire for change. Industry leaders—both at Intel and beyond—have talked about a desire for change. It's out in the open and it's being talked about everywhere.

I don't think enough people recognize engineering as a creative endeavor. Part of it is imagining the future and part of it is creating the future.
You mentioned your mentorship activities. What advice do you give to young women who are interested in pursuing engineering and competing?

I always encourage young women and minorities to pursue engineering, not just because there are so many opportunities, but also because your career will really change over time and you won't get bored. I don't think enough people recognize engineering as a creative endeavor. Part of it is imagining the future and part of it is creating the future. I think women and girls are often attracted to things that seem very creative and they don't look at engineering that way. That's a common misperception that has to stop. The simple fact is you can work across disciplines, such as biology and neuroscience. Computing is now really woven into the fabric of our lives, and it will only become even more so. It's a discipline that looks across those adjacent disciplines, and that's really a powerful way to inform the work and the direction. That's also where the most interesting opportunities are. Engineering might be one of the most creative career paths anyone can embark on.
My Collaborator Is a Robot
Back in the 1970s, Freddy the Robot was the celebrity in residence at the University of Edinburgh’s Department of Artificial Intelligence. A look at an early demo shows him hard at work: A scientist drops a bag of parts in front of him. Freddy then identifies the jumbled-up parts via computer vision and starts using his hand to unscramble them, pick them up one by one and assemble them into a “product.” Amazingly, what emerges is a shiny, yellow toy car. Fifty years later, I’m in awe of Freddy, completely humbled
Modern machines are better at perception and interaction, thanks in part to Moore’s Law.
by the scientists and engineers who put him together. In my lab at UC Berkeley, I stare at the robot arm I have (we call it “Archie”), and think back to the months it took us to write its planner (the decision-making engine that powers its behavior), controller (the system that decides how to spin its motors to make the planned behavior a reality) and perception system (Archie’s “eyes,” which enable it to figure
By Anca Dragan
out what objects are in front of it), so it can pretty much do the same thing Freddy was already doing so well back then.

So what's changed? When I take a closer look at our system, it's doing something remarkably similar to what Freddy did. But if I put a coffee mug in front of Archie, he easily figures out it's a mug. The same goes for my watch. These are more weirdly shaped objects, but computer vision has come a long way since those early days. Rather than relying on rules or looking for a specific color against a different background, today's vision systems extract their own ways of detecting objects by finding patterns in datasets of millions of images. That it's possible to process all these images is a testament to the new computing power we've gained. That a machine can identify useful patterns is to some extent due to new algorithms, but to a large extent it's also enabled by our ability to quickly iterate on these algorithms by trying them out and seeing what they come up with. What used to be a large-scale experiment even in the 1990s is today the equivalent of our students' coffee break.

Even better, Archie doesn't just move objects around for only one task, but can move objects between arbitrary poses while avoiding collisions. This used to be very hard because arms tend to have seven degrees of freedom, so they have to reason in not three or four, but seven dimensions. The problem becomes exponentially harder with the number of dimensions, and only in the '90s did we finally create algorithms powerful enough to do battle with this kind of
Because it can process physical prompts, “Archie” can infer it’s holding a coffee cup too high above the ground for human comfort.
complexity and win.

When I started graduate school in 2009, even with this significant progress, I'd still have to wait three to four seconds for a robot arm to figure out how to move a bottle from A to B. And even then, it would come up with some totally inefficient, unnatural solution—the arm would go side to side for no apparent reason, snaking its way through a maze that wasn't really there. Now, Archie ponders for milliseconds and figures it out. What's more, he moves efficiently, with no more snaking back and forth, a little more like a human arm. I'd like to believe that those of us who worked on randomized and trajectory optimization-based motion planning get the credit, but 10 years of Moore's Law sure helped us.
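For readers curious what randomized motion planning in seven dimensions actually looks like, here is a minimal sketch in the spirit of an RRT (rapidly-exploring random tree). The joint limits, step size and collision check are placeholder assumptions, not the planner running on Archie.

```python
# Minimal RRT sketch: grow a tree of collision-free configurations in a
# 7-dimensional joint space until it reaches the goal region.
import numpy as np

DOF = 7                            # states live in R^7, one value per joint
LIMITS = np.array([np.pi] * DOF)   # assumed symmetric joint limits [-pi, pi]
STEP = 0.1                         # how far the tree extends per iteration

def in_collision(q):
    # Placeholder: a real planner would check the arm's geometry at q
    # against the scene. Here we pretend an obstacle sits near the origin.
    return np.linalg.norm(q) < 0.2

def rrt(start, goal, iters=5000, goal_tol=0.2, seed=0):
    rng = np.random.default_rng(seed)
    nodes, parents = [np.asarray(start, float)], [None]
    for _ in range(iters):
        # Sample a random configuration, occasionally biased toward the goal.
        target = goal if rng.random() < 0.1 else rng.uniform(-LIMITS, LIMITS)
        # Extend the nearest tree node a small step toward the sample.
        near = int(np.argmin([np.linalg.norm(n - target) for n in nodes]))
        direction = target - nodes[near]
        q_new = nodes[near] + STEP * direction / (np.linalg.norm(direction) + 1e-9)
        if in_collision(q_new):
            continue
        nodes.append(q_new)
        parents.append(near)
        if np.linalg.norm(q_new - goal) < goal_tol:
            path, i = [q_new], near     # walk parent pointers back to start
            while i is not None:
                path.append(nodes[i])
                i = parents[i]
            return path[::-1]
    return None

path = rrt(np.full(DOF, -1.0), np.full(DOF, 1.0))
print(f"found a path with {len(path)} waypoints" if path else "no path found")
```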
Because Archie moves a little more naturally, I have an easy time being close to him, which means that at times, Archie and I can actually work together. For now, our collaboration is limited to clearing up the table and sorting stuff into bins. But in the future, hopefully we'll tackle more interesting tasks—Archie might be able to assist with what will be difficult for me when I'm older.

Archie uses extra compute cycles to make predictions of where my arm will move next, and to be sure to stay out of my way. He becomes more careful if I start acting in a weird way that doesn't fit with his predictive model of me. He sometimes guides me in what to put where as we sort out the objects. He can also sacrifice some efficiency here and there to become more deliberate and transparent about what he's doing, slowing down and exaggerating his motion by using a simple mental model of me and what I can see and predict. That way, he makes sure I'm able to understand what he's up to and can make some room for him. And he lets me know when something is too heavy for his rather flimsy motors and when he needs my help.

Next, my student Andrea walks into the lab. She sees Archie carrying her coffee mug, and she's unhappy he's holding it so high above the table—if he drops it, it will break.

In her research in robotics, Anca Dragan puts a heavy emphasis on ensuring robots and people can work together.

She pushes Archie's arm down. Instead of him popping right back up, Archie infers he must have been doing the task wrong, immediately replanning his movement to go lower. The robots of 50 years ago were rigid in both their control, as well as the rules they would follow. Archie is compliant and adapts to human input, as well as what this input communicates about the rules he should follow and objectives he should achieve.
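The mug episode can be written down as a toy calculation. The single "height" feature, the candidate heights and the learning rate are assumptions for exposition, not the learning rule actually running on Archie; the point is only that a push is treated as evidence about the objective, so it updates a cost weight and the robot replans.

```python
# Toy sketch: a physical correction updates the robot's objective.
def plan_height(w_height, w_effort=1.0):
    """Pick the carrying height (in meters) minimizing the current cost."""
    candidates = [h / 100 for h in range(5, 41)]          # 5 cm to 40 cm
    # Trade off the riskiness of height against the effort of dipping low.
    return min(candidates, key=lambda h: w_height * h + w_effort * (0.40 - h))

w_height = 0.5                        # initially the robot barely weights height
h = plan_height(w_height)
print(f"planned height: {h:.2f} m")   # 0.40 m: the effort term dominates

# Andrea pushes the arm down 15 cm. Interpreting the correction as evidence,
# the robot increases the weight on the height feature and replans.
observed_h, rate = h - 0.15, 5.0
w_height += rate * (h - observed_h)   # weight grows by 0.75, to 1.25

h = plan_height(w_height)
print(f"replanned height: {h:.2f} m") # 0.05 m: the height cost now wins
```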
Archie stands on the shoulders of not just Freddy, but the work of every robotics researcher who has come since, and the work of those behind Moore's Law. Together, they enabled advances in perception, planning and optimal control, handling uncertainty, learning from prior experiences, and the ability to collaborate with and learn from people.

We've come a long way from the days of Freddy the Robot, but we still have a long way to go. Some things that used to be entire research agendas are now commodities: You can set up an autonomous car to drive on a track in a matter of days, for instance. However, you can't easily get that same car to properly negotiate merging onto the highway in heavy traffic with other people. And highly dexterous manipulation in unstructured environments remains something that requires a lot of task-specific training or engineering.

Overall, we're making progress towards robots becoming more and more capable at doing the things we tell them to, but we're not quite there yet. When we do get there, new research challenges will emerge, like figuring out what we should tell them to do in the first place!
Anca Dragan is an assistant professor of robotics at UC Berkeley and runs the InterACT Lab, where she focuses on algorithms for human-robot interaction. She is also on the leadership committees for the Berkeley Artificial Intelligence Research Lab and the Center for HumanCompatible AI.
The views, assumptions and opinions expressed in this article are those of the author and do not necessarily reflect those of Intel.
Transformation Is on the Way
Intel's chief chip architect offers his take on unlocking new innovation.
When Intel announced in April 2018 it had hired Jim Keller as a senior VP to lead its silicon engineering work, it wasn’t small news in the tech world. AnandTech’s headline called Keller a “CPU design guru.” VentureBeat went one further, calling Keller a “rock star chip architect.” Keller is now the general manager of Intel’s Silicon Engineering Group. Why did Intel hire him? As chief engineering officer Murthy Renduchintala wrote at the time, “Jim is one of the most respected microarchitecture design visionaries in the industry,” who will help Intel speed our work to “fundamentally change the way we build silicon.” After a series of senior leadership roles in both the x86 and
By Walden Kirsch
ARM worlds, why did Keller come to Intel? And why now? That’s where we started our recent conversation.
How important is Intel's IP?

Our crown jewels are our engineers. They've created lots of valuable technical assets, but every year it changes. IP depreciates pretty fast. So you have to keep reinventing and reinvigorating it. To put different kinds of IP together successfully takes a lot of work. That's a hard problem to solve, and the industry has been working on it for a while. I think Intel started doing that later than some others. But that transformation is well on the way. What we have to do is get great at it. I think that's a technical change, not a culture change.
From your perspective, what does Intel need to do differently?

First I have to say, everywhere I go at Intel, I meet really great people. We have technical expertise, talent and energy everywhere I go. Comparing yourself to yourself, you get in your own bubble that way. There's places where we are the best thing that's ever happened, and maybe we're even doing the best possible, but we have to make sure we're super clear about that. We also need to continue to prioritize sustainable innovation. That's something Intel has done and will continue to do in the future and something for which we can be counted on. Other companies have peaks and inflections they hit, but they struggle to repeat them. Intel has the know-how, talent and track record to do this, which is something that will enable us to take our leadership into the future, which will be a new era of computing innovation.
Jim Keller joined Intel in April 2018 as senior VP in charge of silicon engineering after spending time innovating for Tesla, AMD and Apple.
How was your experience at Apple?

That was weirdly fun. Apple was a very dynamic place. It was very much a leadership-focused company. It was very aggressive. They really reach for things, but they also really believe in focus. One of Steve Jobs' famous quotes was, "Why screw up two things when you can only get one thing right?" There's a lot of trying to figure out what the market needs, and what's the best tradeoff between performance and cost, as opposed to what the best thing is we could possibly do. The interesting thing about the best you can do is that it helps you push the envelope.

So what does a Jim Keller goal look like in general?

I like to set a goal to be clear and simple. You might have 100 great people and 100 opinions. That's a lot of stuff, and boiling it down to a simple goal is not easy. Having clear goals is not an easy thing, but it's the way you do great products.

When you joined, Murthy said that one of the reasons Intel was delighted to have you is that it's entering a world of heterogeneous processes and architectures. Help us understand what that means.

The pillars of our business today are PCs and servers. We tend to build PCs around a processor, and then we leverage that processor in servers, and then we build servers of very dense processors and memory systems. Heterogeneous computing means that now we have CPUs for running games and some graphics. We have media codecs for doing video encoding and decoding. So that's a different kind of computing. Now we're building AI computing devices, and they're different too. So the range of computing types is expanding. In the future, we may build products on different pieces of silicon and pull them together into a heterogeneous system. The complexity of that, as you build each kind of computing system, goes up. Somehow or other, Moore's Law just keeps on going. So I've decided not to worry about it for the rest of my life.

In the future, we may build products on different pieces of silicon and pull them together into a heterogeneous system. The complexity of that, as you build each kind of computing system, goes up.

We'd be remiss if we didn't ask you about Moore's Law. How do you look at Moore's Law, and specifically what do you see as changing and what's not?

I've been working on computers for almost 40 years. Everybody has always been predicting Moore's Law will end in 10 to 15 years, which has since been reduced to five to 10 years. That whole time, right? But somehow or other, it just keeps on going. Moore's Law is essentially an assertion that we will continue to shrink technology and raise performance. At different junctures, different things had to happen from a manufacturing perspective. And Intel has participated as a leader in many of those inventions. So go ahead, think about Moore's Law. It'll continue. The press will write about the end of it over and over. And in 10 years we'll be thinking, boy, that was really hard what we did. Now it's going to be "over" again. That's been the truth for a long time. So I'm not that worried about Moore's Law.
How It All Began
Laying Down a Law
In 1965, Intel's founding father made a prediction that defined how chips would be manufactured for the next half-century. Here's his original essay.
By Gordon Moore, Intel Co-Founder
This article originally appeared in Electronics, Volume 38, Number 8, April 19, 1965
The future of integrated electronics is the future of electronics itself. The advantages of integration will bring about a proliferation of electronics, pushing this science into many new areas.

Integrated circuits will lead to such wonders as home computers—or at least terminals connected to a central computer—automatic controls for automobiles, and personal portable communications equipment. The electronic wristwatch needs only a display to be feasible today.

But the biggest potential lies in the production of large systems. In telephone communications, integrated circuits in digital filters will separate channels on multiplex equipment. Integrated circuits will also switch telephone circuits and perform data processing.

Computers will be more powerful, and will be organized in completely different ways. For example, memories built of integrated electronics may be distributed throughout the machine instead of being concentrated in a central unit. In addition, the improved reliability made possible by integrated circuits will allow the construction of larger processing units. Machines similar to those in existence today will be built at lower costs and with faster turn-around.

Present and Future

By integrated electronics, I mean all the various technologies which are referred to as microelectronics today as well as any additional ones that result in electronics functions supplied to the user as irreducible units. These technologies were first investigated in the late 1950s. The object was to miniaturize electronics equipment to include increasingly complex electronic functions in limited space with minimum weight. Several approaches evolved, including microassembly techniques for individual components, thin-film structures and semiconductor integrated circuits.

Each approach evolved rapidly and converged so that each borrowed techniques from another. Many researchers believe the way of the future to be a combination of the various approaches.

The advocates of semiconductor integrated circuitry are already using the improved characteristics of thin-film resistors by applying such films directly to an active semiconductor substrate. Those advocating a technology based upon films are developing sophisticated techniques for the attachment of active semiconductor devices to the passive film arrays.

Both approaches have worked well and are being used in equipment today.

The Establishment

Integrated electronics is established today. Its techniques are almost mandatory for new military systems, since the reliability, size and weight required by some of them is
achievable only with integration. Such programs as Apollo, for manned moon flight, have demonstrated the reliability of integrated electronics by showing that complete circuit functions are as free from failure as the best individual transistors.

Most companies in the commercial computer field have machines in design or in early production employing integrated electronics. These machines cost less and perform better than those which use conventional electronics. Instruments of various sorts, especially the rapidly increasing numbers employing digital techniques, are starting to use integration because it cuts costs of both manufacture and design.

The use of linear integrated circuitry is still restricted primarily to the military. Such integrated functions are expensive and not available in the variety required to satisfy a major fraction of linear electronics. But the first applications are beginning to appear in commercial electronics, particularly in equipment which needs low-frequency amplifiers of small size.

Reliability Counts

In almost every case, integrated electronics has demonstrated high reliability. Even at the present level of production—low compared to that of discrete components—it offers reduced systems cost, and in many systems improved performance has been realized.

Integrated electronics will make electronic techniques more generally available throughout all of society, performing many functions that presently are done inadequately by other techniques or not done at all. The principal advantages will be lower costs and greatly simplified design—payoffs from a ready supply of low-cost functional packages.

For most applications, semiconductor integrated circuits will predominate. Semiconductor devices are the only reasonable candidates presently in existence for the active elements of integrated circuits. Passive semiconductor elements look attractive too, because of their potential for low cost and high reliability, but they can be used only if precision is not a prime requisite.

Silicon is likely to remain the basic material, although others will be of use in specific applications. For example, gallium arsenide will be important in integrated microwave functions. But silicon will predominate at lower frequencies because of the technology which has already evolved around it and its oxide, and because it is an abundant and relatively inexpensive starting material.

Reduced cost is one of the big attractions of integrated electronics, and the cost advantage continues to increase as the technology evolves toward the production of larger and larger circuit functions on a single semiconductor substrate. For simple circuits, the cost per component is nearly inversely proportional to the number of components, the result of the equivalent piece of semiconductor in the equivalent package containing more components. But as components are added, decreased yields more than compensate for the increased complexity, tending to raise the cost per component. Thus there is a minimum cost at any given time in the evolution of the technology. At present, it is reached when 50 components are used per circuit. But the minimum is rising rapidly while the entire cost curve is falling (see graph below). If we look ahead five years, a plot of costs suggests that the minimum cost per component might be expected in circuits with about 1,000 components per circuit (providing such circuit functions can be produced in moderate quantities). In 1970, the manufacturing cost per component can be expected to be only a tenth of the present cost.
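The shape of this argument is easy to reproduce with a simple model: take a circuit's cost to be a fixed packaging overhead plus a silicon-area term that grows with component count, divide by a yield that decays with each added component, and spread the result over the components. The parameters below are illustrative assumptions chosen only so the minimum lands near the essay's figure of 50 components; they are not historical cost data.

```python
# Per-component cost with packaging overhead P, area cost a per component,
# and per-component yield y: the P/n term falls with integration while the
# 1/y**n term grows, producing a minimum at a finite component count.
P, a, y = 1.0, 0.05, 0.995

def cost_per_component(n: int) -> float:
    return (P + a * n) / (n * y**n)

best = min(range(1, 301), key=cost_per_component)
print(best, round(cost_per_component(best), 4))  # minimum falls near n = 50
```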
Costs and Curves
The complexity for minimum component costs has increased at a rate of roughly a factor of two per year (see graph on next page). Certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least 10 years. That means by 1975, the number of components per integrated circuit for minimum cost will be 65,000. I believe that such a large circuit can be built on a single wafer.
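The arithmetic behind the 1975 projection is a straight extrapolation: ten further annual doublings multiply the minimum-cost complexity by $2^{10} = 1024$. Taking the 1965 figure as roughly $2^{6} = 64$ components, consistent with the log-scale graph the essay refers to, gives

$$N_{1975} \approx 2^{6} \times 2^{10} = 2^{16} = 65{,}536 \approx 65{,}000,$$

the number quoted above.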
Two-mil Squares

With the dimensional tolerances already being employed in integrated circuits, isolated high-performance transistors can be built on centers two thousandths of an inch apart. Such a two-mil square can also contain several kilohms of resistance or a few diodes. This allows at least 500 components per linear inch or a quarter million per square inch. Thus, 65,000 components need occupy only about one-fourth a square inch.
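The density figures follow directly from the two-mil spacing:

$$\frac{1\ \text{inch}}{0.002\ \text{inch}} = 500 \ \text{per linear inch}, \qquad 500^{2} = 250{,}000 \ \text{per square inch}, \qquad \frac{65{,}000}{250{,}000} \approx 0.26 \ \text{square inch},$$

which is the "about one-fourth a square inch" claimed above.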
On the silicon wafer currently used, usually an inch or more in diameter, there is ample room for such a structure if the components can be closely packed with no space wasted for interconnection patterns. This is realistic, since efforts to achieve a level of complexity above the presently available integrated circuits are already underway using multilayer metalization patterns separated by dielectric films. Such a density of components can be achieved by present optical techniques and does not require the more exotic techniques, such as electron beam operations, which are being studied to make even smaller structures.

Increasing the Yield

There is no fundamental obstacle to achieving device yields of 100%. At present, packaging costs so far exceed the cost of the semiconductor structure itself that there is no incentive to improve yields, but they can be raised as high as is economically justified. No barrier exists comparable to the thermodynamic equilibrium considerations that often limit yields in chemical reactions; it is not even necessary to do any fundamental research or to replace present processes. Only the engineering effort is needed.

In the early days of integrated circuitry, when yields were extremely low, there was such incentive. Today ordinary integrated circuits are made with yields comparable with those obtained for individual semiconductor devices. The same pattern will make larger arrays economical, if other considerations make such arrays desirable.
Heat Problem

Will it be possible to remove the heat generated by tens of thousands of components in a single silicon chip?

If we could shrink the volume of a standard high-speed digital computer to that required for the components themselves, we would expect it to glow brightly with present power dissipation. But it won't happen with integrated circuits. Since integrated electronic structures are two-dimensional, they have a surface available for cooling close to each center of heat generation. In addition, power is needed primarily to drive the various lines and capacitances associated with the system. As long as a function is confined to a small area on a wafer, the amount of capacitance which must be driven is distinctly limited. In fact, shrinking dimensions on an integrated structure makes it possible to operate the structure at higher speed for the same power per unit area.

Day of Reckoning

Clearly, we will be able to build such component-crammed equipment. Next, we ask under what circumstances we should do it. The total cost of making a particular system function must be minimized. To do so, we could amortize the engineering over several identical items, or evolve flexible techniques for the engineering of large functions so that no disproportionate expense need be borne by a particular array. Perhaps newly devised design automation procedures could translate from logic diagram to technological realization without any special engineering.

It may prove to be more economical to build large systems out of smaller functions, which are separately packaged and interconnected. The availability of large functions, combined with functional design and construction, should allow the manufacturer of large systems to design and construct a considerable variety of equipment both rapidly and economically.

Linear Circuitry

Integration will not change linear systems as radically as digital systems. Still, a considerable degree of integration will be achieved with linear circuits. The lack of large-value capacitors and inductors is the greatest fundamental limitation to integrated electronics in the linear area.

By their very nature, such elements require the storage of energy in a volume. For high Q it is necessary that the volume be large. The incompatibility of large volume and integrated electronics is obvious from the terms themselves. Certain resonance phenomena, such as those in piezoelectric crystals, can be expected to have some applications for tuning functions, but inductors and capacitors will be with us for some time.

The integrated r-f amplifier of the future might well consist of integrated stages of gain, giving high performance at minimum cost, interspersed with relatively large tuning elements.

Other linear functions will be changed considerably. The matching and tracking of similar components in integrated structures will allow the design of differential amplifiers of greatly improved performance. The use of thermal feedback effects to stabilize integrated structures to a small fraction of a degree will allow the construction of oscillators with crystal stability.

Even in the microwave area, structures included in the definition of integrated electronics will become increasingly important. The ability to make and assemble components small compared with the wavelengths involved will allow the use of lumped parameter design, at least at the lower frequencies. It is difficult to predict at the present time just how extensive the invasion of the microwave area by integrated electronics will be. The successful realization of such items as phased-array antennas, for example, using a multiplicity of integrated microwave power sources, could completely revolutionize radar.
This magazine includes forward-looking statements relating to Intel. All statements that are not historical facts are subject to a number of risks and uncertainties, and actual results may differ materially. Please refer to Intel’s most recent earnings release, 10-Q and 10-K filings for the risk factors that could cause actual results to differ.