Discover Experiential AI
Responsible AI for human-centric solutions
Welcome to the Institute for Experiential AI at Northeastern University!
On April 6, 2022, we celebrated the launch of our institute in Boston with a series of keynote speakers, expert panels, an AI career fair, and research poster sessions for the AI projects supervised by our faculty. More than 700 people attended in person and virtually, and publications including The Wall Street Journal, Boston Business Journal, and Datanami covered the event. You can find a complete list of videos from the event and related articles in the Appendix section of this e-book.
What is Experiential AI? It is human-centric AI that leverages human intervention and feedback and focuses on making solutions work in real settings. We believe the advancement of AI as a science will be driven by understanding what works and what does not in real applications dealing with imperfect inputs and conditions. We believe that understanding the art of practicing AI is a powerful way to drive the research agenda, including the basic research agenda in the field, in meaningful and impactful ways.
The Institute for Experiential AI drives AI research, applications, and education across Northeastern University campuses internationally to advance the science and practice of Experiential AI. We work with partner companies, public institutions, and academia on problems that deliver value through applied solutions and support education at the undergraduate, graduate, and professional levels by coupling supervised apprenticeships and experiential education in the context of real-world AI-solution delivery.
Emphasizing AI ethics and Responsible AI through multi-disciplinary approaches to high-impact business and societal problems, we help partners build a foundation of responsible and ethical use of AI in industry, government, society, and academia. This is one of the most important ways to advance trust in AI: combining solutions that actually work with careful attention to ensuring the practice is carried out responsibly.
Our AI Solutions Hub develops value-driven solutions for partners by focusing on their pressing data and AI challenges, delivering value directly through innovative working solutions. Through our Responsible AI Advisory Service, we co-develop and implement a framework for practicing Responsible AI, with services including algorithmic audits, ethical risk assessments, and an independent, virtual AI Ethics Board. And through data science leadership, we work hand in hand with industry partners to create applied action plans that unlock the value of data, deliver innovative training that generates a top-quality talent pipeline, and support lifelong education goals by upskilling existing employees.
One of the early projects demonstrating the success of this new model is our collaboration with Sun Life, a partner at the Roux Institute. Not only did we work with Sun Life to create a catalog of AI opportunities for their business, but we also tackled the highest-priority project in that catalog at the AI Solutions Hub and, in parallel, ran a practicum course built around the same problem, bringing together learners from Northeastern University and from Sun Life. The project became a successful illustration of how the various parts of the model work together.
We invite you to explore the impact of Experiential AI, our institute, and the work of our amazing faculty members in the following pages, and we look forward to working with you in the future.
Usama Fayyad Executive Director, Institute for Experiential AI
INTRODUCTION TO THE INSTITUTE
The Institute for Experiential AI provides Responsible AI research and services to produce AI systems that benefit individuals, institutions, society, and nature at large. It encompasses all the ethical, legal, and technical aspects of developing and deploying beneficial AI technologies, ensuring that AI systems do not cause harm, interfere with human agency, discriminate, or waste resources, among other things.
As part of these services, we have formed a unique worldwide AI Ethics Board (AIEB) to provide top-level and independent guidance to help organizations develop and deploy AI responsibly. Composed of experts in multiple disciplines from academia and across industries, the AIEB aims to ensure diversity on multiple dimensions, including gender and nationality. Our list of board members includes a core group of Northeastern University faculty and a majority of experts from other organizations.
We look forward to helping you incorporate AI Ethics into your AI strategy and governance goals and build an innovative responsible AI practice.
Ricardo Baeza-Yates, Ph.D. Director of Research Institute for Experiential AI
The Institute for Experiential AI includes a diverse team of more than 90 faculty members from all Northeastern University colleges—and we continue to grow.
The institute comprises faculty from across the colleges, spanning computer science, engineering, statistics, math, and business to social science, psychology, ethics, and law. The collaborative spirit and cross-disciplinary approach are core strengths of the Institute for Experiential AI.
Together with our sister Northeastern institutes, such as the Roux Institute, the Ethics Institute, and the Network Science, Cybersecurity & Privacy, Wireless IoT, and Experiential Robotics institutes, our multidisciplinary faculty works to generate relevant research rooted in real solutions to real-world problems. We’re also leading initiatives across the university in AI + health, AI + life sciences, and AI + climate and sustainability. In addition to our focus on research areas in AI, we’re working to make AI trustworthy and responsible by taking humans into account in our models. Jointly, we can maximize our impact, advance the science of AI, and tackle grand societal challenges.
I am incredibly excited about Northeastern University’s commitment to AI and the many amazing projects underway now and still to come.
Jennifer Dy Director of AI Faculty Institute for Experiential AI
93 total faculty members representing all 9 colleges at Northeastern University (62 core members and 31 affiliate members)
20 current EAI postdocs
$50M in active grant funding for core members
2,750+ registrations at 20 seminars and 2 conferences (May ’21-’22)
2,900+ university-wide corporate partners
[Map: the Northeastern Global Network, with EAI hub locations marked]
Discover Experiential AI Event Attendance:
438 NU-affiliated attendees (faculty, students, staff, etc.)
157 external partners at the inaugural event
130+ job seekers at the career fair
Northeastern University President Joseph E. Aoun and David Roux, managing partner at BayPine, began the event with a discussion about the opportunity to create impact and shift the higher education model to one that gives enterprises and companies a seat at the table from the start.
“You need people in the loop at design time, at testing time, and at governance and oversight time.”
David Roux Managing Partner, BayPine
“Industry now is generating research that is equal to, and in some domains superior to, what is happening in higher education.”
Joseph E. Aoun President, Northeastern University
“If we are going to maximize our positive impact in the world, we need to be deep and broad in AI… You don’t solve problems in the world with technology alone. It takes a university and its partners to solve problems.”
David Madigan Provost, Northeastern University
It’s tempting to think of artificial intelligence (AI) as some boundless frontier of pure possibility, but AI’s true potential lies in the way humans and machines complement each other.
This is clear in the era of Big Data, where the abundance of digital devices produces too much information for humans to process. Similarly, the machines we’ve employed to make sense of that data are ill-equipped for creative or context-sensitive uses. For Steve Johnson, founder and co-director of Notable Systems, context and creativity fall under the umbrella of “human judgment.”
In Johnson’s view, software that artfully blends machine work with human judgment can solve complex problems better than purely artificial systems can. Uses of AI such as driverless vehicles, document analysis, customer service, and medical diagnosis all work best with a human in the loop.

Johnson’s company uses AI to automate the capture of valuable bits of information from innumerable documents—a typically costly process. He estimates a more automated approach could cut costs by up to 90 percent while liberating human creativity to focus on more pressing tasks.
“Systems developers who insist on keeping humans out of the loop are limiting the imaginable capabilities, or, worse, they’re consigned to committing errors that are actually avoidable.”
Steve Johnson Founder and Co-director of Notable Systems
Humans better recognize the contexts behind algorithmic tasks. They can alter inputs to account for ethical considerations and physical details, like the idiosyncrasies of handwriting, which computers are notoriously bad at deciphering.
Reserving more creative, context-sensitive decisions for humans—such as spotting new applications or designing for better performance—gives AI practitioners and downstream users the best of both worlds.
For an audio innovator like Bose, the AI revolution is only possible because of the digital one that preceded it. In fact, AI may turn out to be every bit as transformative as the iPod, streaming music, or wireless speakers.
Powered by reams of digital data, AI enables new kinds of listening experiences uniquely attuned to your environment. And with a human in the loop, developers can uncover new insights that were almost impossible to see in the digital era. As Bose CEO Lila Snyder explained in her keynote speech at Discover Experiential AI, it’s all about using data to create new experiences that are enjoyable and useful.
Noise-cancelling technology is a prime example. The technology has been around for a few decades, stuck in a binary state: You can either turn it on or off.
Bose’s AI-powered Active Sense technology can monitor a listener’s environment to trigger noise cancellation only when appropriate, tuning out racket and tuning in important sounds like sirens and nearby voices. The same tech can potentially override volume knobs on home audio systems automatically—a blessing to anyone who hates having to turn the volume up to hear dialogue. Other applications are limited only by one’s imagination.
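Bose has not published the internals of Active Sense, so any code can only gesture at the idea. Here is a minimal, purely hypothetical Python sketch: it assumes an upstream classifier has already labeled each short frame of ambient audio, and it simply maps those labels and the ambient loudness to a cancellation strength while passing safety-critical sounds through.

# Hypothetical sketch of context-aware noise control. This is NOT Bose's
# actual Active Sense implementation (whose details are not public); it
# assumes an upstream classifier labels short frames of ambient audio.

PASS_THROUGH = {"siren", "alarm", "nearby_speech"}  # sounds the listener should hear

def anc_level(frame_label: str, noise_db: float) -> float:
    """Return noise-cancellation strength in [0, 1] for one audio frame."""
    if frame_label in PASS_THROUGH:
        return 0.0                          # let safety-critical sounds through
    if noise_db < 40:
        return 0.2                          # quiet room: light cancellation
    return min(1.0, (noise_db - 40) / 40)   # scale up with ambient loudness

# Example: a loud train carriage versus an approaching siren.
print(anc_level("broadband_noise", 75))  # ~0.88: strong cancellation
print(anc_level("siren", 75))            # 0.0: the siren passes through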
The secret truth about audio is that it’s not usually heard how musicians and engineers intend. Fine details like listener position, speaker distance, room acoustics, and sound system fidelity make for wildly different experiences. But AI allows engineers to disaggregate sound data to emulate ideal environments, transforming a humble living room into a symphony hall.

“We can render and recreate the music in that environment so you feel like you’re sitting there, but we can’t do that without AI and without data.”

Lila Snyder CEO of Bose Corporation
For Snyder, the next revolution in audio is all about hearing what you want to hear. “But,” she said, “we can’t do that without AI and without data.”
How do we separate AI hype from AI reality? That was one of the questions put to a panel that brought together C-level executives, including Sammy Assefa, Ph.D., Nenshad Bardoliwalla, Gaia Bellone, Steve Johnson, and Vipin Mayar. Touching on the strategic role of AI in their respective industries, as well as how higher education can help build bridges between them, the panel offered some much-needed clarity to the topic.
AI attracts attention because it can do things no human can do, so it’s understandable that hype would follow. For Nenshad Bardoliwalla, Chief Product Officer at DataRobot, the hype only underscores the need for human involvement.
“What we’ve found is that the people side of the equation is really about framing the problem and helping people understand what AI can and cannot do.”
Nenshad Bardoliwalla CPO at DataRobot
What complicates things is that, for industry, the risk assessment is beginning to shift in favor of AI: Many businesses can’t afford not to invest in it.
Gaia Bellone, chief data scientist at Prudential Financial, comes from an industry defined by extraordinarily complex data, and it’s only getting more complicated. Whereas an accuracy rate of, say, 80 percent used to be acceptable, it’s now completely unacceptable.
Other panelists agreed, and it’s not just in finance where we see this. Increasingly, companies need to be careful about what they do with their data and how they use it to inform everything from product design to customer service.
As the best tool we have for making sense of an increasingly complex marketplace, AI may one day be able to steer the ship, but until then, decision-makers will need to be as diligent as ever. AI with a human in the loop offers the best of both worlds: game-changing data analytics combined with measured, thoughtful human oversight.
When it comes to artificial intelligence, positive outcomes depend on measured, thoughtful inputs. But without a way to contextualize data, results can cause disappointment or harm. Adding to the challenge, outcomes often hinge on hidden hyperparameters that can have a downstream impact on social subgroups and marginalized populations.
Mike Culhane understands this better than many. He’s the founder and CEO of Pepper Global, a financial services firm specializing in lending and asset servicing for small companies. As a lender, Pepper uses AI to help make credit decisions. A successful outcome depends, first and foremost, on good predictions. That means the machine learning models and algorithms the company uses need to have good, clean data from the get-go.
But a successful outcome is not only measured in dollars; companies have ethical responsibilities to follow when designing and implementing public-facing algorithms. So how do you ensure they align with social expectations?
Machines need people to contextualize data and protect against its most harmful biases. This human-machine feedback loop is known as Experiential AI: a cutting-edge approach that puts people first while unlocking the true potential of Big Data. It is the core philosophy of the Institute for Experiential AI at Northeastern University (EAI).
Pepper Global is just one of many companies that have realized the power of AI, using it to source talent, predict outcomes, and facilitate enterprise-level decisions. But they’re also aware of the limitations.
“You really need to have that human-level interface with the machines to make sure you’re actually really going down the path that you want to go down.”
Mike Culhane CEO of Pepper Global
Can we trust AI? Most experts say no… at least not yet. If we’re to create AI that is worthy of our trust, we need to incorporate ethics into each stage of the design, development, and deployment. Doing so will be tricky because it hinges on whether or not we can insert qualitative values into inherently quantified systems. Can it be done? That’s what a core group of experts sought to answer at Discover Experiential AI.
The discussion began with a definition of trust: When you trust something, you have faith that the object of your trust will not harm you. Clearly, we can’t say that about AI. Not yet. There’s just not enough transparency, accountability, or interpretability. Data, too, is often biased at the very point of collection.
Public auditing of algorithms may help—but that’s no simple task either, given that companies don’t generally want to share their information. And it’s not enough to simply audit an algorithm post hoc: if a system is not designed with interpretability, fairness, or human values in mind, there’s almost no chance it will stand up to ethical scrutiny. Given these limitations, how can we solve this problem?
“We need a multi-pronged solution to [AI ethics]. Education is a piece of it. The law is a piece of it. But laws are slow to pass, and in the meantime people are being harmed.”
Tina Eliassi-Rad Khoury College of Computer Sciences
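To make “audit” less abstract, here is a minimal Python sketch of one widely used check, the disparate-impact ratio (the “four-fifths rule”), run over hypothetical lending decisions. A real algorithmic audit examines far more than this single statistic, but the sketch shows how a qualitative concern can become a measurable test.

# Sketch of one basic audit check: the disparate-impact ratio. The group
# labels and decisions below are hypothetical; real audits go much deeper.
from collections import defaultdict

def disparate_impact(records):
    """records: iterable of (group, approved) pairs.
    Returns (min/max selection-rate ratio, per-group rates)."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in records:
        totals[group] += 1
        approvals[group] += int(approved)
    rates = {g: approvals[g] / totals[g] for g in totals}
    return min(rates.values()) / max(rates.values()), rates

decisions = ([("A", True)] * 80 + [("A", False)] * 20
             + [("B", True)] * 50 + [("B", False)] * 50)
ratio, rates = disparate_impact(decisions)
print(rates)        # {'A': 0.8, 'B': 0.5}
print(ratio < 0.8)  # True: flags a potential disparate-impact concern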
EAI believes in applying AI in ways that center the human experience. That’s what it means to be “experiential.” It requires bridge-building between industry and academia, between research and practice, and between theory and application. The openness provided by such an approach is perhaps the only way to build trust in systems that, whether we like it or not, are here to stay.
EAI core faculty members and guest speakers who participated in the panel included Professor Tina Eliassi-Rad and Associate Professor Christo Wilson from Northeastern University, Professor Helen Nissenbaum from Cornell Tech, and Professor Cynthia Rudin from Duke University. It was moderated by EAI Director of AI Faculty Jennifer Dy and Associate Professor John Basl.
“Transparency is necessary but not sufficient. If a system wasn’t designed with things like interpretability or fairness and considerations of ethics and values from the beginning, you’ve already failed... Success in terms of ‘trustworthy’ starts at the point of conception.”
Christo Wilson Khoury College of Computer Sciences
Cansu Canca’s AI Ethics Lab was one of the first to help practitioners put the ethics of artificial intelligence into practice. So naturally, she knows you can’t just flip a switch and turn AI on overnight.
Canca, AI ethics lead for the Institute for Experiential AI, said implementing AI responsibly means finding the best design solutions and navigating policy for business practices. That conversation begins with defining what exactly ethical AI means within a given discipline, and the differing answers from fields such as law, medicine, and finance help frame a reliability structure that supports collaboration between AI systems and humans.
“Putting together different stakeholders with different backgrounds is extremely important to establishing ethical AI,” said Canca, a philosopher in the tech world.
Canca started that conversation with a diverse group of experts at the institute’s launch event. Her panel, “Can We Trust AI?” hashed out the ethical issues of assigning human values of trust to algorithms when designing AI systems to improve human life and human intelligence.
“While AI has great potential to technically help us solve a number of ethical issues that currently exist in our manual world, if we use it wrongly or put systems in place that are just amplifying harm, we are going to get ourselves into bigger trouble,” said Canca. “We need to make sure that we use AI systems to make our lives better than they are.”
Cutting-edge technologies are finding their way into commercial applications at record speed. But as the gap between machine learning (ML) research and actual deployment shrinks, data scientists are realizing that the state-of-the-art models that work in controlled experiments aren’t always ready for prime time. In the real world, these ML models can exploit statistical coincidences in the data that don’t generalize beyond controlled experiments, capturing and amplifying social biases like negative attitudes about race and gender.
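How a model comes to lean on a statistical coincidence is easy to demonstrate. The Python sketch below is a synthetic, hypothetical example (it assumes NumPy and scikit-learn are installed): a “shortcut” feature happens to track the label almost perfectly during training, then breaks at deployment, and accuracy collapses with it.

# Synthetic demonstration of a spurious correlation ("shortcut learning").
# Assumes NumPy and scikit-learn; all data here is invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
y = rng.integers(0, 2, n)                   # true labels
signal = y + rng.normal(0, 2.0, n)          # weak but genuine signal
shortcut_train = y + rng.normal(0, 0.1, n)  # coincidental near-perfect proxy
X_train = np.column_stack([signal, shortcut_train])
model = LogisticRegression().fit(X_train, y)

# At deployment the coincidence breaks: the shortcut becomes pure noise.
X_test = np.column_stack([signal, rng.normal(0.5, 1.0, n)])
print("train accuracy:", model.score(X_train, y))  # high: model leans on the shortcut
print("test accuracy: ", model.score(X_test, y))   # far lower: it didn't generalize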
But now, data scientist Silvio Amir, a core faculty member at the Institute for Experiential AI, is working to change that.
“These limitations raise broader questions about what it means to assign human values like trust to machines,” said Amir, assistant professor at the Khoury College of Computer Sciences, who develops natural language processing, ML, and information retrieval methods for text in social media and electronic health records. “How should we design AI systems to improve human life and human intelligence? How can we ensure transparency and accountability in AI systems?”
Amir is posing these questions to the institute’s leaders to broaden participation and inclusion during the conceptualization and implementation of AI systems. On the new frontier known as Experiential AI, he said humans will be the ones to guide, validate, and augment AI technology so it runs accurately and responsibly in the world. And that means putting our trust in the Institute’s data scientists to provide meaningful solutions that positively impact individuals and society.
“I believe the Institute for Experiential AI is well positioned to make this happen and become a leader in this area, given the diversity of its faculty members in complementary disciplines,” said Amir. “That’s critical given the increasingly important role of AI in society.”
At Discover Experiential AI, ten deans representing the nine colleges at Northeastern University (plus the library) gathered to discuss how AI affects each of their domains. The conversation spanned virtually every aspect of human interest, revealing the interdisciplinary foundations of artificial intelligence as it currently stands and how progress in both ethical and practical terms will depend on honing that approach.
Northeastern Deans Discuss an Interdisciplinary Community
Scaling solutions in an engineering context requires ever more sophisticated means of processing the accumulated data. From robotics to autonomous vehicles to design optimization, data—and, perhaps more importantly, the interpretation of data—allows engineers to identify new solutions and glean insights that were previously unimaginable. As Gregory Abowd, dean of the College of Engineering, said, “Engineering pervades AI and vice versa, and AI needs much more than engineering to be successful.” It needs students and thinkers who are cognizant of the social, political, and economic contexts of applied AI. That requires a cross-disciplinary approach to education.
“We need to educate people to understand the tools of AI but understand the social, political, economic context in which those solutions reside.”
Gregory Abowd Dean, College of Engineering
Any basic web crawler can search the internet and extract text for language models to process. More challenging for algorithms is gleaning useful and accurate information from library collections. Dan Cohen, dean of the Libraries at Northeastern, pointed to text extraction as a persistent challenge common in research settings. Handwritten documents, daguerreotype photographs, library volumes written in hundreds of different languages—these are huge challenges for AI not trained on the nuances of handwriting and iconography. That said, great strides have been made in solving some of the hardest problems. The library provides a challenging playground for cutting-edge AI research.
According to PwC, AI is expected to drive a $15.7 trillion (26%) increase in global GDP by 2030. But there’s no one particular home for all that value. As David Fields, dean of the College of Professional Studies, said, it’s at the intersections where fields come together that we find the most practical applications. From an educational perspective, those include human resources, leadership, project management, professional administration, and communications studies, to name a few. Within the College of Professional Studies at Northeastern, students are able to follow a number of tracks with specialties in AI. “Students are learning from practitioners in the field,” said Fields, “so that we can mesh learning outcomes and the skills and competencies that we know they’re going to need when they graduate.”
Advanced algorithms have been used in legal settings for decades. James Hackney, dean of the School of Law, pointed to ways AI has transformed how lawyers do their jobs. Perhaps the most transformative shift has been in legal discovery, where AI is used to perform the menial work of ferreting out the most useful information in reams of documents. Hackney also spoke about how AI has transformed people’s experiences with the legal system—both positive and negative. Biased data, mistranslations, and algorithmic errors have contributed to legal decisions that greatly impact human lives. Rendering more just outcomes for participants will require practitioners to target critical areas for collaboration between humans and machines.
“I take it as my mission as dean to train the next generation of AI specialists where lawyers work as co-creators as well as AI practitioners.”
James Hackney Dean, School of Law
Acknowledging the transformative power of AI, Beth Mynatt, dean of the Khoury College of Computer Sciences, stressed how AI needs to become more than what it currently is. “AI is much more than the modern infrastructure of our daily lives,” she said. “It’s also the societal infrastructure.” Data may be everything in AI, but it will still “carry every aspect of discrimination and privilege” in our society and hide it inside a black box. For that reason, computer scientists need to do a better job setting expectations and focusing on the huge ethical and technical challenges that lie ahead. That means looking to other domains for help and imagining new forms of collaboration.
“By partnering with AI researchers and ourselves embracing AI research, CAMD fields move beyond the critique of the often unintended ill effects of AI to become partners in making sure human experience is being enhanced, not compromised, and social problems are being solved, not created.”
Elizabeth Hudson Dean, College of Arts, Media and Design
“[We are working] to address one of the major challenges in the development and deployment of AI systems... the lack of a robust ethics ecosystem.”
Uta Poiger Dean, College of Social Sciences and Humanities
Uta Poiger, dean of the College of Social Sciences and Humanities, argued that the humanities are uniquely positioned to critique AI while also proposing new applications for the technology. She stressed the need for community, and acknowledged the tendency for that word to be abused. Educational programs focused on community stand a better chance of formulating outcomes that reflect society—not just in academia but also in industry, philanthropy, and public policy. “I believe that with greater attention to community, we can formulate educational programs as well as define problems in such a way that we will be able to solve one of the key problems that we still have, and that is the problem of diversity.”
Spanning biology, genetics, social science, and the environment, human health couldn’t care less about the fiefdoms of academia, so any attempt to understand it should be interdisciplinary from the start. Carmen Sceppa, dean of the Bouvé College of Health Sciences, explained how AI technologies help bridge those divides by supplying robust data and digital assistance in clinical and research settings. It promises to make sense of complex data sets while pointing to solutions that reach well beyond personal health—encompassing the entire span “from cells to society,” including drug discovery, therapeutic targeting, behavior tracking, patient outcomes, and public health.
“Health is complicated and messy. The quick encounter between an individual and a healthcare provider is not enough to understand the context in which health is happening or not happening... AI-enabling technologies will help address these issues by enabling individuals to impact their own health.”
Carmen Sceppa Dean, Bouvé College of Health Sciences
Hazel Sive, dean of the College of Science, also pointed to data as the cornerstone of AI. “Most scientific data sets are huge,” she said. “And often they are not properly analyzed.” The human genome, disease risk, ecosystems, climate change, particle physics—all these areas of study depend on sophisticated analysis of many terabytes worth of data, and that’s increasingly only possible through AI. Similarly, the sciences are making AI more ethical and trustworthy. In many cases, that means going back to the very scientific and mathematical principles underlying the algorithms used by AI systems.
Emery Trahan, interim dean of the D’Amore-McKim School of Business, sees AI as a key element of the digital convergence, where success depends on combining data analytics with human judgment. “Business is the epitome of experiential,” he said. In that world, AI partnerships are needed to solve business problems and improve the human condition. In particular, Trahan pointed to natural language processing to help make sense of lengthy or redundant information and decision-making models that can help businesses optimize outcomes. But to avoid exacerbating human biases or oversights, AI systems in business contexts still depend on a collaborative approach.
Patient records, clinical trials, lab results, protein structuring, medical inventory: In health science, the great promise of AI is to make sense of data, forging new insights for drug discovery, clinical care, medical research, and public health. At Discover Experiential AI, representatives from industry and academia sat down to discuss how AI can help the industry unlock the potential buried in massive data sets.
Algorithms are difficult to understand—not merely for the patients whose outcomes are affected by them, but also for the doctors, nurses, and other healthcare professionals who depend on them. Democratizing these systems will require a careful approach, one which places the algorithm within the context of a broader, human-driven system. It may also require an ability to sift through the hype.
“Five years ago, AI was not mainstream in biotech. In some ways, it’s because we haven’t done a very good job defining what AI is and how it can help. Having an institute like EAI, which can put a shape to it and be cross-disciplinary, is an amazing thing.”
Laurent Audoly Founder and CEO, Parthenon Therapeutics
For example, Laurent Audoly, Director for AI + Life Sciences at EAI, expressed some reservations about whether AI, which is inherently context-dependent, is ready to make the leap to generalizable, disease-agnostic recommendations. Similarly, Jared Auclair, director of the Biopharmaceutical Analysis Laboratory at Northeastern, articulated a concern that medical practitioners who don’t understand enough about the inner workings of predictive algorithms are effectively flying blind. So what can be done about this knowledge gap?
Melissa Landon, Chief Strategy Officer at Cyclica, and Aileen Huang-Saad, director of Life Sciences and Engineering at the Roux Institute, see it as a cultural problem. AI shows how interdisciplinary the world has become, so shouldn’t the deployment of AI also be interdisciplinary? That’s going to require a societal shift, one that begins at the educational level. What better way to plant the seeds for the future than with an institution as multidisciplinary as EAI?
In 2005, Rai Winslow founded the legendary Institute for Computational Medicine at Johns Hopkins University, where he set out to prove that modeling is vital to the future of medicine. Now, as director of life science and medicine research at Northeastern University’s Roux Institute, he’s paving the way for caregivers to translate those theoretical results into medical applications.
“AI and machine learning are poised to play a huge role not only in life sciences, but in medical research,” said Winslow.
Modeling-based approaches in the clinic will enable better medicine and better diagnoses. And that’s where the “experiential” piece of AI comes into play.
Five years ago, there wasn’t a hospital in the country capable of collecting continuous and rapidly evolving data in real time. Today, health care settings are processing high-frequency data from wearable patient monitors that constantly track vital signs and behavior changes. This moment-to-moment data on the physiological state of patients lets clinicians make live predictions about a patient’s health trajectory. Medicine, said Winslow, is becoming a computational science.
Giving caregivers advanced warning about patients headed towards a negative outcome will let doctors intervene as early as possible. But methods are only as good as the data from which they learn—and that data can have inherent biases.
“Ultimately, it’s the caregiver who needs to decide on those recommendations and put them into action, so we need to help them to understand the strengths and weaknesses of the predictions we make,” Winslow said. Because, as he likes to say, AI is just another member of the health care team.
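Winslow’s actual models are far more sophisticated than anything shown here, but the shape of the idea fits in a short Python sketch. The thresholds below are simplified, hypothetical illustrations, not clinical guidance: streaming vitals go in, a risk score comes out, and a human caregiver decides what to do with it.

# Hypothetical early-warning sketch over streaming vital signs. Thresholds
# are invented for illustration; they are not clinical guidance and not
# Winslow's models. The alert goes to a human, who makes the decision.

def warning_score(heart_rate: float, resp_rate: float, spo2: float) -> int:
    """Crude risk score: higher means the patient needs closer attention."""
    score = 0
    score += 2 if heart_rate > 120 or heart_rate < 45 else 0
    score += 2 if resp_rate > 24 else 0
    score += 3 if spo2 < 90 else (1 if spo2 < 94 else 0)
    return score

# Simulated monitor stream: (heart rate, respiratory rate, SpO2) per minute.
stream = [(82, 16, 98), (95, 18, 96), (118, 22, 93), (127, 26, 89)]
for minute, vitals in enumerate(stream):
    s = warning_score(*vitals)
    if s >= 4:
        print(f"minute {minute}: score {s} -> alert caregiver")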
“Human-in-the-loop is a phrase I first heard from [inaugural executive director of the Institute for Experiential AI] Usama Fayyad in the early days of the institute,” said Winslow. “I’ve never envisioned AI being anything but human-in-the-loop. It is invariably human-in-the-loop.”
AI for Societal Impact Panel Foresees Huge Potential Using AI
The largest humanitarian organization in the world, the United Nations (UN) World Food Programme, has begun using AI, and it was Ozlem Ergun who made it happen. When the distinguished professor in Northeastern’s College of Engineering began working with the UN’s food assistance program almost a decade ago, all functions of its logistics center were siloed. Every department, from those deciding who should receive food to how they received it, operated separately. Ergun’s nimble mathematical models identified an optimized supply chain setup so decision-making could work in concert across all departments. The UN’s first data project has saved millions of dollars and sped up food delivery to the individuals most in need by identifying, organizing, storing, and sharing crucial data seamlessly across the organization.

“There’s enough food to feed everyone in the world. The key is knowing how to actually get it to the right people.”

Ozlem Ergun Distinguished Professor, College of Engineering, Northeastern University
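The mathematical core of such work can be glimpsed in a toy transportation problem: minimize shipping cost subject to supply and demand. The Python sketch below uses SciPy’s linear-programming solver on invented numbers; Ergun’s UN models are vastly richer, but the principle of coordinated, optimized allocation is the same.

# Toy transportation problem (assumes SciPy). Two warehouses, two regions;
# all numbers are invented. Decision variables x[w][r] = tons shipped from
# warehouse w to region r, flattened as [x00, x01, x10, x11].
from scipy.optimize import linprog

cost = [4, 6, 5, 3]                   # shipping cost per ton on each route

A_ub = [[1, 1, 0, 0], [0, 0, 1, 1]]   # each warehouse ships at most its stock
b_ub = [120, 80]                      # tons available at warehouses 0 and 1

A_eq = [[1, 0, 1, 0], [0, 1, 0, 1]]   # each region gets exactly what it needs
b_eq = [90, 70]                       # tons needed in regions 0 and 1

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)
print(res.x)    # optimal tonnage per route, e.g. [90, 0, 0, 70]
print(res.fun)  # minimum total shipping cost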
“While the big picture of climate science and the need for action is well understood, there are still gaps... that machine learning and computer vision, network science, graphical methods, agent-based models, and integrated physics-AI-human systems are well-positioned to address.”
Auroop Ganguly Director of the Sustainability & Data Sciences Lab, Northeastern University
Models built by Auroop Ganguly help cities remain resilient under extreme weather conditions. Ganguly is a professor of civil and environmental engineering at Northeastern and director of the Sustainability & Data Sciences Lab. He is advancing our knowledge on behalf of the National Science Foundation, the Department of Defense, the Department of Homeland Security, NASA, the city of Boston, and even his own Boston-based climate analytics startup, risQ.
“We’d like to use natural language processing technologies to help healthcare providers make sense of and utilize this data to improve patient care.”
Byron Wallace Creator of Trialstreamer
Byron Wallace’s natural language processing methods aggregate massive datasets from clinical journals and from notes in electronic health records. His AI tool, Trialstreamer, now housed at the Institute for Experiential AI, has already compiled more than 800,000 searchable clinical trials; it updates automatically and detects patterns in the text to help assess disease risk or the efficacy of drugs.
Algorithms created by Sarah Ostadabbas, director of the Augmented Cognition Laboratory, analyze infant poses to help diagnose autism spectrum disorder and cerebral palsy. Her group is the first to publicly release datasets that adapt existing tools to close the data gap between infants and adults, which she hopes will one day be used to track the movement of babies in every baby monitor out there.
Ostadabbas’ team also analyzes how bats fly to help roboticists design flying drones.
Khoury College of Computer Sciences Associate Professor Alina Oprea builds algorithms that detect cyber threats, both broadly and within individual organizations, faster and more accurately than existing defenses. Her projects, funded by the Defense Advanced Research Projects Agency (DARPA), are called P-CORE and Portfiler.
“These attacks equate to financial losses, loss of sensitive information, and critical infrastructure threats. The question that motivates my research is, can we use AI to solve these security problems?”
Alina Oprea
Associate Professor, Khoury College of Computer Sciences
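The published details of P-CORE and Portfiler are beyond this overview, but one common building block for this kind of defense is unsupervised anomaly detection over network-flow features. The Python sketch below (features and traffic invented for illustration; assumes NumPy and scikit-learn) fits a detector on normal flows and flags a port-scanning host.

# Illustrative anomaly detection on network flows; not P-CORE or Portfiler.
# Assumes NumPy and scikit-learn; features and traffic are invented.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# Columns: bytes sent, connection duration (s), distinct ports contacted.
normal = np.column_stack([
    rng.normal(5_000, 1_000, 500),
    rng.normal(30, 10, 500),
    rng.integers(1, 5, 500),
])
# A scanning host: tiny, short-lived flows that touch many ports.
suspicious = np.array([[200, 0.5, 120], [150, 0.4, 200]])

detector = IsolationForest(random_state=0).fit(normal)
print(detector.predict(suspicious))  # [-1 -1]: flagged as anomalous
print(detector.predict(normal[:3]))  # mostly 1: consistent with baseline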
1. Industry leaders understand the value of AI but are troubled by some of its ethical and logistical pitfalls.
2. Experiential AI with a “human in the loop” offers the best of both worlds: human oversight with algorithmic insight.
3. AI is being used to solve problems and improve life in cybersecurity, consumer tech, healthcare, and more.
4. AI solutions developed at EAI can analyze countless clinical trials, identify cyber-attacks more quickly, and diagnose complex behavioral disorders.
5. For businesses, the risk assessment is beginning to shift in favor of AI, but without a way to contextualize data, results may cause disappointment or even harm.
6. It’s at the intersections between academic disciplines where we find the most promising and challenging conditions for AI research.
7. People are often far removed from the design and development of the algorithms they use, resulting in a downstream impact on vulnerable populations.
8. State-of-the-art machine learning models tend to capture and amplify social biases like negative attitudes about race or gender.
9. To build trustworthy AI, organizations need to incorporate ethics into each stage of design, development, and deployment.
10. Focusing on the hardest AI problems offers researchers a chance to get ahead of unforeseen challenges.
Don Peppers Discusses How AI Is Perfectly Suited to Improve Customer Relationships
Panel: The Inherent Value of Trustworthy AI
Discover Experiential AI Virtual Photo Album
Northeastern University Launches The Institute For Experiential AI
Toward Trustworthy AI: Bridging The Trust Gap Between Humans And Intelligent Machines
Advancing AI With Data And Machine Learning: What Else Is Needed?
10 Key Questions Every Company Should Ask Before Using AI
Can We Trust AI — and Is That Even the Right Question?
World’s first chief data officer launches AI research center in Boston
The Most Cutting-Edge Technological Innovation in the Future Might Just Be... Humans
How AI Could Propel the Next Generation of Noise-Canceling Headphones
The Morning Download: Weekly AI Insights, AI R&D - An Interview with Dr. Usama Fayyad
Language Models Fail to Say What They Mean or Mean What They Say (VentureBeat)
The Institute for Experiential AI researches and develops human-centric AI solutions that leverage machine technology to extend human intelligence. We solve core research problems centered on building real solutions in real contexts to make reliable and responsible AI that works effectively and cooperatively with humans.