POINT OF VIEW
Ethan Mollick, Professor, The Wharton School
Automating Creativity: Using AI as a Creative Engine

AN ASTONISHING CHANGE is occurring in the landscape of human creativity: there is now evidence that AI can help make us more innovative, if we use it properly. The core irony of generative AIs like ChatGPT and Bing is that they were supposed to be all logic and no imagination. Instead, we get AIs that make up information, engage in (seemingly) emotional discussions and are intensely creative. That last fact makes many people deeply uncomfortable.

To be clear, there is no one definition of creativity, but researchers have developed a number of tests that are widely used to measure people's ability to come up with diverse and meaningful ideas. The fact that many of these tests were flawed wasn't that big a deal until, suddenly, AIs were able to surpass all of them. ChatGPT-4 now beats 91 per cent of humans on a variation of the Alternative Uses Test for creativity and exceeds 99 per cent of people on the Torrance Tests of Creative Thinking. We are running out of creativity tests that AIs cannot ace.

While these psychological tests are interesting, applying human tests to AI can be challenging. There is always the chance that the AI has previously been exposed to the results of similar tests and is simply repeating answers (although the researchers in these studies have taken steps to minimize that risk). And, of course, psychological testing is not necessarily proof that AI can actually come up with useful ideas in the real world.

Yet we have learned from three recent academic papers, all of which are available at SSRN.com, that AI really can be creative in settings with real-world implications. Each paper directly compares AI-powered creativity with human creative effort in controlled experiments.

The first major paper is from my colleagues at Wharton: "Ideas Are Dimes a Dozen: Large Language Models for Idea Generation in Innovation." In it, they describe how they staged an idea-generation contest, pitting ChatGPT-4 against students in a popular innovation class that has historically led to many start-ups. The researchers, Karan Girotra (Cornell University) and colleagues, used human judges to assess idea quality and found that ChatGPT-4 generated more, cheaper and better ideas than the students. Even more impressive from a business perspective, the purchase intent measured by outside judges was higher for the AI-generated ideas. Of the 40 best ideas rated by the judges, 35 came from ChatGPT.