MILESTONES IN AI HISTORY
Over more than seventy years, artificial intelligence has advanced from the stuff of theory and science fiction to being stuffed into everyone’s pockets. Here are ten milestones that have ushered us to our ubiquitous AI reality.
1950
Alan Turing, godfather of computer science and artificial intelligence, conceives the Turing Test, aimed at determining whether a machine can exhibit intelligent behavior indistinguishable from that of a human.
1950
Author and biochemist Isaac Asimov imagines the future of AI in his sci-fi story collection I, Robot, devising the Three Laws of Robotics to prevent our future sentient creations from turning on us.
1955
Computer scientist John McCarthy coins the term artificial intelligence in advance of a conference at Dartmouth College where top scientists would debate the merits of rules-based programming versus the creation of artificial neural networks.
1969
Shakey the Robot becomes the first mobile robot able to make decisions about its own actions by reasoning about its surroundings and building a spatial map of what it sees before moving.
1981
Businesses start to buy into narrower applications of AI, with Digital Equipment Corporation deploying a so-called “expert system” that configures customer orders and saves the company millions of dollars annually.
1997
IBM supercomputer Deep Blue defeats world chess champion Garry Kasparov in a hyped battle between man and machine.
2002
Autonomous vacuum cleaner Roomba from iRobot becomes the first commercially successful robot designed for use in the home, employing simple sensors and minimal processing power to perform a specialized task.
2005
Five autonomous vehicles complete the DARPA Grand Challenge off-road course, sparking major investment in self-driving technology by Google (whose project became Waymo), Tesla, and others.
2011
Apple introduces Siri, a voice-controlled virtual assistant that puts ground-breaking AI into the pockets of iPhone users.
2018
Self-driving cars finally (and legally) hit the road when Waymo launches its self-driving taxi service in Arizona.
“…this technology is doing to us,” says Bellamkonda. “We can’t have technologists just say: ‘I created this, I’m not responsible for it.’ This will be a profound change at Emory—an intentional decision to put AI specialists and technologists not in one place, but to embed them across business, chemistry, medicine, and other disciplines, just as you would with any resource.”
While the AI.Humanity initiative has just begun, there are already several projects at Emory that have shown the potential of bringing ethics and a wide range of disciplines to bear when developing, implementing, and responding to AI in different settings.
Let’s look at four Emory examples that might serve as models for conscientious progress as AI and machine learning become even more commonplace in our day-to-day lives. Perhaps putting the human heart in AI will not only lead to a more efficient, equitable, and effective deployment of this technology, but it might also give humanity better insight and more control when the machines really do take over.
Interesting Intersections
FOR MANY PEOPLE WHO WORK IN THE HUMANITIES, the advent of the digital age—the continuous integration of computers, the internet, and machine learning into their work and research—has been incidental, something they’ve merely had to adapt to. For Emory’s Lauren Klein, it was the realization of her dream job.
Klein grew up a bookworm who was also fascinated with the Macintosh computer her mother had bought for the family. But she spent much of her career searching for a way to combine reading and computers. Then came the advent of digital humanities—the study of the use of computing and digital technologies in the humanities. Specifically, Klein keyed into the intersection of data science and American culture, with a focus on gender and race. She co-wrote a book, Data Feminism (MIT Press, 2020), a groundbreaking look at how intersectional feminism can chart a course to more equitable and ethical data science.
The book also presents examples of how to use the teachings of feminist theory to direct data science toward more equitable outcomes. “In the year 2022, it’s not news that algorithmic systems are biased,” says Klein, now an associate professor in the departments of English and Quantitative Theory and Methods (QTM). “Because they are trained on data that comes from the world right now, they cannot help but reflect the biases that exist in the world now: sexism, racism, ableism. But feminism has all sorts of strategies for addressing bias that data scientists can use.”
Klein’s joint appointment between the English department and QTM is an example of the cross-pollination designed to foster thoughtful collaboration around new technologies. “She’s bringing a humanistic critique of the AI space,” says Cliff Carrubba, department chair of QTM at Emory. “A social scientist would call that looking at the mechanism of data collection. Each area has an expertise. Humanists have depth of knowledge of history and origins, and we can merge that expertise with other areas.”
Klein’s current research includes compiling an interactive history of data visualization from the 1700s to the present, a quantitative analysis of abolitionism in the 1800s, and a dive into census numbers that failed to note “invisible labor,” or work that takes place in the home. In addition, she is co-teaching a course at Emory called Introduction to Data Justice. The goal is to help students across disciplines come to grips with the concepts of bias, fairness, and discrimination in data science, and with how those concepts play out when datasets are used to train AI.
“It’s a way of thinking historically and contextually about these models in a way that humanists are best trained to do,” says Klein. “It’s a necessary complement to the work of model development, and it’s thrilling to bring these areas together. To me, the most exciting work is interdisciplinary work.”