
HUMANS AT THE HEART OF AI

Though the trend toward AI is well documented, seeing it in action has reinvigorated debate around some fundamental questions. How should humans focus on remaining at the heart of business as the AI revolution unfolds? Are fears of AI’s abilities to usurp human leadership and ingenuity legitimate?

The rise of generative artificial intelligence, such as ChatGPT, may present more questions than answers today, but one axiom for the age of AI is clear to experts in the Darden community: Tapping into essential human qualities like adaptation and collaboration is critical for people to continue thriving in business as AI develops.

Like most major advances in technology, artificial intelligence has been in the works for quite a while — since the development of the first computers and programming languages in the 1950s. Computer scientist John McCarthy is credited with coining the term “artificial intelligence” in 1955. Aspects of AI have been an accepted part of the average American’s everyday life for years with applications like Grammarly, virtual assistants like Alexa, and the emergence of AI-generated videos, photos and audio (sometimes used as tools to spread false information).

However, when OpenAI’s ChatGPT-3.5 emerged, the reaction was different. The possibility of computers being able to learn from experience and context, perform cognitive functions, solve problems and imitate creativity became very real, very fast. Despite a recent scientist-led attempt to pause further development in the interest of treading carefully, most AI observers believe the cat is out of the bag and further surprise developments are all but inevitable. A leaked Google memo in May, titled “We Have No Moat, And Neither Does OpenAI,” revealed the extent to which that belief is true. The memo stated that people around the world have access to open-source generative AI models, and that the power of many small contributions from the crowd is allowing open-source AI to advance much more rapidly than what Google or OpenAI can deliver with months of work and millions of dollars.

Heading into uncharted territory without a map is overwhelming, but members of the Darden community can navigate by a few guiding principles.

Survival Of The Adaptable

Though unique, the AI revolution is similar to prior technological revolutions or environmental upheavals. What’s required in the face of great change is adaptation: England’s peppered moths evolved from light to dark during the Industrial Revolution to blend in with soot, and workplaces shifted rapidly to virtual work during the COVID-19 pandemic.

Change, of course, isn’t easy.

“We know from management scholarship that people often resist change. With change comes feelings of uncertainty. These feelings affect things we care about, such as creativity and openness to new ideas,” said Professor Lillien Ellis, an expert on creativity and innovation.

Successful change often starts with new learning.

Ellis sees much of the rising tension in attitudes toward generative AI stemming from a lack of understanding. “Historically, our frameworks for thinking about artificial intelligence have been built by the media,” she said. “Without subject-matter expertise, it’s difficult to understand exactly how this technology works, how we feel about it and the impact it has on the world.” Ellis said that, considering the circumstances, it makes sense that many are so uneasy about AI, but “what we need is more education on how it works and how to lead the people who work with it. That’s absolutely something we can offer at Darden.”

Successful organizations of the future will be highly adaptive, and “you can't have a highly adaptive organization unless you've got highly adaptive people,” said Professor Emeritus Ed Hess, Batten Executive-in-Residence Emeritus at Darden. Hess, an expert on organizational and individual high performance, has authored 15 books, including Humility Is the New Smart: Rethinking Human Excellence in the Smart Machine Age and his latest, Own Your Work Journey! The Path to Meaningful Work and Happiness in the Age of Smart Technology and Radical Change.

Hess said that highly adaptive people “learn, unlearn and relearn at the speed of change.” Highly adaptive learners bring their best thinking, listening, learning, exploring and collaborating selves to work every day. Doing so requires people to take ownership of their egos, minds, bodies and emotions, he said. While the business world has traditionally focused on hard skills, skills that enable the highest levels of learning — like critical thinking, reflective listening and emotional intelligence — have become the valuable assets human beings bring to the table that computers can’t (yet).

“We've reached the point now where soft skills are going to be mission critical for human excellence,” said Hess.

STOP, COLLABORATE AND LISTEN

“Survival of the fittest is not going to work,” Hess warned, “because the fittest is not going to be any individual going forward. It’s going to be all teamwork.”

Teamwork in the age of AI means working not only with fellow humans but also with AI in the form of human-technology collaboration. “The question isn’t, ‘Does it replace us or not?’” said Ellis. “It’s, ‘How do we collaborate with it effectively?’ It’s a tool. It is not a replacement for human beings.”

Ellis said leaders should expect resistance to change and prepare to manage it. “That means mindful leadership, designing a healthy organizational culture, and not underestimating the role employees and industry partners play in helping or hindering the growth of an organization when applying these new technologies.”

Ethical Complications

The degree to which people can successfully collaborate with new AI capabilities is complicated by significant ethical considerations like data privacy, copyright and intellectual property issues, and biases inherent in AI.

AI chatbots’ “hallucinations,” in which they fabricate false information (in one infamous case, a chatbot adopted a love-struck persona and advised a user to leave his wife), are clearly a problem. More subtle complications arise with bias, which AI tools regularly exhibit along lines such as gender and race. Generative AI’s responses are not based on thoughts, feelings and assumptions like a human’s. Rather, it responds based on datasets that come from humans, and humans have biases. Even as technologists work to mitigate these factors, they remain an open challenge to successful collaboration between humans and the technology.

Questions also abound about whether AI’s use of creative works constitutes copyright infringement or intellectual property theft. “The ethics of AI-generated creative work are complex — not just because we don’t fully understand the training models; we’re also still learning about the human side. Is there a human artistry behind generative work?” said Ellis.

Many creative professionals and artists feel violated by generative AI that may use their work without consent as part of the process to create written, verbal or visual composites. Laws written before the rise of AI will be applied to address some messy territory, such as the common prompt to generate something “in the style of” a known artist. At the end of the day, AI feels something like free creative labor, but the dataset it works from originated in very human work. If the humans behind AI’s generative work are not compensated, there may be little incentive for people to create truly new, original work.

Alignment Is Everything

For anyone who has seen the “Terminator” movies, AI can conjure images of Skynet rising up in our collective imagination. However, techno-optimists are focused on how to align the capabilities of AI with human interests. Darden alumnus Alexander Shashko (MBA ’17) is a product manager within Google Research working on AI and machine learning. If you’ve ever used the Google Maps feature that finds more sustainable routes that use less fuel, you’ve seen his team’s work. “I’m trying to be optimistic here,” he said of AI’s impact in the future. “This notion of AI alignment is becoming very important.”

AI alignment seeks to design goals for AI systems that advance what human society wants to achieve. The scary part is that unforeseen consequences are, well, unforeseen. But Shashko feels relatively optimistic about tech’s ability to mitigate unintended negative outcomes, if a thoughtful and deliberate approach is taken to developing AI systems to align with the interests of society.

Shaping Responsible Leaders In The Age Of AI

When David M. LaCross (MBA ’78), founder of Risk Management Technologies, and his wife, Kathleen O. LaCross, made a historic $44 million gift to Darden, they had the School’s role in the future of AI in mind. Part of the investment is designated toward a future Darden research center or initiative on AI and machine learning. “This may sound like an exaggeration, but I think it will be more impactful on people’s lives than the internet itself,” said LaCross. He sees Darden as uniquely positioned to use the case method to help students and future leaders learn to manage the intersection of AI technology and human interests.

In his vision, students will be able to exercise a high level of discernment regarding the best solution to use for any given problem, manage the transition from raw research to application, and do so in an ethical manner. It’s a tall order, but one that LaCross believes Darden can deliver with its focus on teams, collaboration and group discussion. “That’s how I benefited most from being here,” he said. “It’s a tremendous, learned skill to be able to debate civilly and collectively advance the solution.”

LaCross believes AI will become a specialty of its own at Darden, similar to the finance, marketing and operations tracks.

“I feel very strongly it needs to be embedded into the core curriculum of Darden. I have a sense of urgency about it that is really acute,” he said. “It’s just the magnitude and pace of how the technology is evolving, and how impactful it will be on humans on so many different dimensions. We need business leaders to be full participants in it.”

GREAT POWER, GREAT RESPONSIBILITY

“The stakeholders of this new technology are humanity itself,” said Professor Anton Korinek. His current research analyzes the implications of artificial intelligence for business, the economy and the future of society.

“How to govern these systems is a decision that should be made by humanity as a whole, not just by a small lab somewhere on the West Coast or by the shareholders of a big corporation,” said Korinek.

Government regulation is in order, and policymakers need to play catch-up. According to Korinek, the first step is to develop the capability to monitor who is training which systems with what capabilities. Then the public and private sectors may be able to have a productive conversation about how to regulate them.

Though Korinek sees himself as an AI optimist, he urges everyone to take the potential existential threats seriously. Worst-case scenarios like human extinction or, slightly less dramatic, AI taking over all the jobs and the economy imploding, aren’t out of the question. Anyone touching AI, he says, has profound responsibility.

“If you are developing AI,” he said, “you have a real responsibility to ensure that your creations steer clear of these bad outcomes. If you are a business that is actively deploying AI systems, you need to be aware that you are contributing to the process of automation and to the erosion of the value of human labor, and you have the responsibility to actively advocate for systemic reforms that provide people with more security; for example, something like a universal basic income.”

Speeding Into The Unknown

“Knock, knock. Who’s there?

Artificial.

Artificial who?

Artificial you glad I didn’t say ‘AI’?”

— ChatGPT-4 (prompt: Please write me a knock-knock joke about AI)

Amara’s Law, the adage credited to scientist Roy Amara, says, “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.” If knock-knock jokes are any kind of a bellwether, AI isn’t quite ready for primetime as a replacement for human cognition. “Will it drastically change how we do work tomorrow? I don’t think so,” said Shashko. “But over the next 10, 20 years, will it have more significant effects on the type of work we’re doing? Yes.”

The best advice most experts seem to have when it comes to AI is: Don’t panic, but do keep your head on a swivel. Even the creators of generative AI systems don’t fully understand how the technologies work.

“Researchers are continuously surprised by what these systems can do,” said Korinek. “They’re crossing their fingers and hoping for the best. But it’s not like they fully understand how this works, why it works and what to expect from it next.”

AI and Education: Could Every Child Have an Effective, Affordable Tutor?

Anne Trumbore, chief digital learning officer of Darden’s Sands Institute for Lifelong Learning, doesn’t look at AI and see a threat to human work. She sees a tool that could lift human potential and achievement across the globe through the power of education. Trumbore shared her vision in a recent editorial in The Conversation, republished here in part.

Imagine a private tutor that never gets tired, has access to massive amounts of data and is free for everyone. In 1966, Stanford philosophy professor Patrick Suppes did just that when he made this prediction: One day, computer technology would evolve so that “millions of schoolchildren” would have access to a personal tutor. He said the conditions would be just like the young prince Alexander the Great being tutored by Aristotle.

Now, ChatGPT, a new artificial intelligence-powered chatbot with advanced conversational abilities, may have the capability to become such a tutor. ChatGPT was trained on huge amounts of data covering a wide range of topics and can pass graduate school exams. As a researcher who studies how computers can be used to help people learn, I think ChatGPT can be used to help students excel academically. However, in its current form, ChatGPT shows an inability to stay focused on a single task, let alone one as demanding as tutoring.

Computers As Tutors

Suppes was a pioneer of a field called “computer-assisted instruction.” He developed some of the earliest educational software. That software provided individual instruction via computer and led students to have better test results than those who didn’t use the program. I worked for Suppes in developing software and other online programs from 2004 to 2012.

Since then, experiments in building “intelligent tutors” to help students have driven advances in artificial intelligence, social networks and computer hardware. And today, the abilities of ChatGPT to write essays, answer philosophical questions and solve computer coding problems may finally achieve Suppes’ goal of truly personalized tutoring via computer.

I asked ChatGPT why it would be a good tutor, and it responded: “I am a good tutor for a student because I am able to understand and respond to a wide range of questions and prompts, and I can provide detailed and accurate information on a wide range of subjects. Additionally, I am able to communicate this information in a clear and easy-to-understand manner, making it accessible for students of all levels of understanding. Because I am a machine-learning model, I can also provide a large amount of information quickly and efficiently, which can be especially helpful for students who are trying to learn a lot of new material in a short amount of time.”

By contrast, Siri said, “Hmm … I don’t have an answer for that. Is there something else I can help you with?”

Recent research on the use of chatbots in college courses shows that how chatbots are used matters in encouraging students to learn more deeply. For instance, AI chatbots enabled significant improvements in learning in a graduate course on cloud computing at Carnegie Mellon. These learning gains occurred when the chatbots asked students to build on an existing argument or to provide more information about a claim they had made. In this case, the chatbot asked the student a question, rather than vice versa.

Many educators are worried about students learning less with ChatGPT since it can be used to cheat on assignments and papers. Others are worried about ChatGPT giving wrong answers or spreading misinformation.

Yet the history and research of intelligent tutors show that using the right design to harness the power of chatbots like ChatGPT can make deeper, individualized learning available to almost anyone.
