6 THINGS YOU SHOULD KNOW ABOUT AI FROM EXPERTS IN THE FIELD

RESEARCHERS FROM CARNEGIE MELLON UNIVERSITY'S COLLEGE OF ENGINEERING SHARE WHAT THEY HAVE LEARNED ABOUT ARTIFICIAL INTELLIGENCE WHILE WORKING IN THE FIELD: FROM WHAT LED TO THE EXPLOSION OF AI APPLICATIONS, TO WHERE IT COULD HAVE THE BIGGEST IMPACT IN THE FUTURE, TO AREAS STILL RIPE FOR DISCOVERY.

1
THREE THINGS CAME TOGETHER TO ALLOW THE WIDESPREAD IMPLEMENTATION OF AI: INCREASED COMPUTATIONAL POWER, STORAGE CAPACITY, AND DATA.

Artificial intelligence research began in the 1950s with theoretical exploration, leading up to the advent of deep neural networks: AI systems loosely based on the way our brains work, in which computations are made through a series of interconnected nodes arranged in tens or even hundreds of layers. Three key advances in computing enabled deep neural networks to work remarkably well, and AI exploded. "After deep neural networks were introduced, it was like floodgates opened; AI research and applications abounded," says Kumar Bhagavatula, director of CMU-Africa and an electrical and computer engineering professor who has worked in AI for over 30 years. "Three technological breakthroughs converged at once: computational power in the form of GPUs, increased storage capacity that enabled cloud computing, and the collection of tons of data through sensors and IoT devices. The availability of hardware is really what enabled AI to be implemented in such a broad way."

2
AI IS NOT MAGIC.

Artificial intelligence does not magically discover what isn't already there. AI is built on concepts that engineers, mathematicians, and computer scientists already know (math, statistics, signal processing, and optimization) put together in a way that can handle bigger data and a broader scope. "AI is not magic. It cannot create something from nothing and is built on concepts we already know," says Liz Holm, a professor of materials science and engineering. "The results are also not magic: the information is already in the data, and AI is a way of getting it out. Sometimes it does that better than humans because we think differently, but it is not making anything up; it's only finding things that are hard for us to see."

3
BIG DOES NOT ALWAYS MEAN BETTER FOR DATA.

Access to more data is one reason why AI has been able to solve many problems that humans cannot. But just because a lot of data is available doesn't mean it is better. There are times when data doesn't exist, when it is costly to obtain and label, or when there's more noise than signal, which renders much of the data useless. Researchers are finding ways to make small data meaningful by designing algorithms that work with small data and get more from less. "More data is not always better," says ECE assistant professor Yuejie Chi. "It is if the data quality is good, but one issue with big data is that it can be very messy, and you might have a lot of missing data. Big data problems also involve a lot of computation, so we want to minimize the computational complexity of the algorithm by doing more with less data."
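Chi's point about messy data can be seen in a toy sketch: if missing sensor readings are carelessly treated as zeros, an average is badly biased, while simply skipping the gaps recovers a sensible estimate. The readings below are invented for illustration; this is a minimal sketch of the idea, not any particular method from her research.

```python
# Sensor readings where None marks a missing value -- the kind of
# messy, incomplete dataset the section describes.
readings = [21.0, None, 20.5, None, 22.0, 21.5]

# Naive approach: treat missing values as 0. The average is dragged down.
naive = sum(r if r is not None else 0.0 for r in readings) / len(readings)

# Mask out the gaps instead: average only the values actually observed.
observed = [r for r in readings if r is not None]
masked = sum(observed) / len(observed)

print(round(naive, 2))   # 14.17 -- biased by the fake zeros
print(round(masked, 2))  # 21.25 -- uses only real data
```

The same care applies at scale: how an algorithm handles gaps and noise often matters more than how much raw data it sees.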
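The layered computation described in section 1 (interconnected nodes stacked in many layers) can be sketched in a few lines of plain Python. The layer sizes and random weights here are made up for illustration; real deep networks learn their weights from data and stack far more layers.

```python
import random

random.seed(0)

def layer(inputs, weights, biases):
    """One layer: each node takes a weighted sum of all inputs (the
    'interconnected nodes'), then applies a ReLU nonlinearity."""
    return [max(0.0, sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def forward(x, network):
    """Pass the input through every layer in turn; a deep network
    simply stacks tens or hundreds of such layers."""
    for weights, biases in network:
        x = layer(x, weights, biases)
    return x

# A tiny random network: 3 inputs -> 4 hidden nodes -> 2 outputs.
sizes = [3, 4, 2]
network = [
    ([[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)],
     [0.0] * n_out)
    for n_in, n_out in zip(sizes, sizes[1:])
]

print(forward([1.0, 0.5, -0.2], network))  # two non-negative output values
```

The "breakthrough" hardware in Bhagavatula's quote matters because these weighted sums are exactly the kind of arithmetic GPUs run in parallel at enormous scale.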