A.I. vs. Human Creativity: Where Do We Go From Here?
By Charlie Warzel
“Because I was born believers. If you can beat him—legend that thing. If you have only one hand, don’t just watch a marathon. First—marathon.”
This text is part of an experiment by Wieden+Kennedy. The agency trained a neural network with seven years’ worth of its work for Nike and let it generate its own text.
It was cutting edge three years ago; imagine where the technology could go today. Can natural language processing come to dominate creative advertising? Can it outstrip human creatives when measured on output?
How does adland sit with A.I. at the moment?
Most sources agree that A.I. holds massive potential and that its worldwide market, across sectors, is expanding at a ferocious pace. While A.I. has been part of advertising technology for several years, from sentiment analysis to audience clustering, it is now playing a more pivotal role in the creative process.
Technologies like IBM Watson’s Advertising Accelerator use A.I. to create multiple personalized digital ads for all sorts of media requirements and audiences. The power of technologies like this allows for optimization at scale in any industry, by microtargeting demographics, psychographics, purchase triggers, customer journeys and various KPIs like conversion, video view rate and app downloads.
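To make the mechanics concrete, here is a minimal sketch of how this kind of platform might assemble personalized variants by crossing audience segments with purchase triggers and formats. The segment names, triggers and the build_variant helper are invented for illustration; none of this reflects IBM's actual tooling.

```python
from itertools import product

# Hypothetical inputs: audience segments, purchase triggers and ad formats.
# The names are invented; the point is how quickly a few creative inputs
# multiply into a large pool of personalized variants to optimize against.
SEGMENTS = ["young professionals", "parents", "retirees"]
TRIGGERS = ["price", "convenience", "social proof"]
FORMATS = ["banner", "15s video", "native article"]

def build_variant(segment: str, trigger: str, fmt: str) -> dict:
    """Assemble one ad variant from a segment, a purchase trigger and a format."""
    return {
        "segment": segment,
        "trigger": trigger,
        "format": fmt,
        "headline": f"A {fmt} for {segment}, leading on {trigger}",
    }

# Crossing three short lists already yields 27 distinct variants to test.
variants = [build_variant(s, t, f) for s, t, f in product(SEGMENTS, TRIGGERS, FORMATS)]
print(len(variants), "variants generated")
```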
A fully developed case study in this area is the Ad Council's Covid Vaccine Education Initiative, carried out in the U.S. in 2021. The project's aim was to increase vaccine uptake among the population, buttressed by IBM's creative A.I. technologies on the optimization side. As the campaign progressed, IBM identified four key barriers to overcome: the safety of the vaccines, the sheer speed at which they were made, mistrust toward Congress, and other miscellaneous conspiracy theories.
Based on these uptake barriers, IBM’s tech was able to tailor messaging at scale to convince audiences to get the jab. The campaign directed 39.6 percent more people to visit GetVaccineAnswers.org than standard creative.
In similar fashion and in a completely different sector, Vanguard used Persado's NLG A.I. platform to disseminate personalized ads on LinkedIn. The financial services industry exists in a heavily regulated advertising environment, so uniqueness is a priceless commodity for brands in this space. Persado was able to personalize Vanguard's LinkedIn ads and test them at scale in a similar way to IBM's software, in order to optimize messaging on a person-to-person basis. In a sector where the front lines move in meters, not miles, Vanguard was able to boost conversion by 15 percent via the platform.
All very exciting stuff. The issue, however, is that these technologies are essentially doing iterative A/B testing at hyperspeed, not creative development from scratch. As groundbreaking as this optimization technology is, the promised land of an A.I. that can create an idea, the genesis of the strategic and creative process, is not yet here.
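As a rough illustration of what A/B testing at hyperspeed means in practice, the sketch below runs a simple epsilon-greedy loop over four ad variants. The click-through rates are invented and the logic is a generic bandit, not the proprietary optimization any of these platforms actually run.

```python
import random

# Invented click-through rates for four ad variants; in a live platform these
# are unknown, and discovering them from real traffic is the whole job.
TRUE_CTR = {"variant_a": 0.010, "variant_b": 0.018, "variant_c": 0.012, "variant_d": 0.025}

clicks = {v: 0 for v in TRUE_CTR}
impressions = {v: 0 for v in TRUE_CTR}
EPSILON = 0.1  # fraction of traffic reserved for exploring weaker variants

def choose_variant() -> str:
    """Mostly exploit the best-observed variant, occasionally explore."""
    if random.random() < EPSILON or not any(impressions.values()):
        return random.choice(list(TRUE_CTR))
    return max(TRUE_CTR, key=lambda v: clicks[v] / impressions[v] if impressions[v] else 0.0)

for _ in range(100_000):  # each pass simulates one ad impression
    v = choose_variant()
    impressions[v] += 1
    clicks[v] += random.random() < TRUE_CTR[v]

for v in TRUE_CTR:
    print(v, impressions[v], round(clicks[v] / impressions[v], 4))
```

Over enough impressions the loop shifts budget toward the best-performing variant, which is optimization, not ideation; the variants themselves still have to come from somewhere.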
Or is it? The case for GPT-3 and DALL-E
OpenAI's GPT-3 has been viewed as something of a messianic moment for the A.I. sector. Trained on 570 GB of text, or just shy of 1 trillion words, sourced from internet pages deemed to be of sufficiently high linguistic quality, the A.I.'s ability to produce cogent and relevant writing off the back of a short human prompt was astounding.
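For a sense of how thin the human input can be, here is a minimal sketch of prompting a GPT-3-style completions endpoint. It assumes OpenAI's legacy completions API, the text-davinci-003 model name (both since superseded) and an API key in the environment; treat it as illustrative rather than a current integration guide.

```python
import os
import requests

API_KEY = os.environ["OPENAI_API_KEY"]  # assumes a key is set in the environment

def complete(prompt: str, max_tokens: int = 120) -> str:
    """Send a short human-written prompt and return the model's continuation."""
    resp = requests.post(
        "https://api.openai.com/v1/completions",  # OpenAI's legacy completions endpoint
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"model": "text-davinci-003", "prompt": prompt, "max_tokens": max_tokens},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["text"]

print(complete("Write a one-line tagline for a running shoe built for rainy cities:"))
```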
The ability to instantaneously generate new, interesting and cohesive content looks, on the face of it, like a strategist's and creative's dream. There are pitfalls, however. The first serious drawback GPT-3 faces stems from the language source on which it is built: the internet.
A.I., a play put on for a three-night run at the Young Vic in 2021, was built around an innovative production technique: the cast created a script live using GPT-3, then performed the results. While the A.I.'s ability to generate uncannily accurate, human-sounding text lends itself well to this structure, it repeatedly cast one of the production's Middle Eastern actors, Waleed Akhtar, in negative stereotypical roles, such as a terrorist or a violent criminal. In this sense, GPT-3's reliance on the internet as the source of its language patterning creates serious problems; it effectively acts as a mirror to the internet's ugly underbelly.
The second and more commonly encountered problem with using GPT-3 as an ideation tool is the uncontrolled randomness and unpredictability of its generated content.
While some companies, partnering with OpenAI's API program, have developed software that reins in GPT-3's more chaotic outputs to focus on media- and performance-focused tasks (similar to IBM's program), purely creative experiments have generated telling results.
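The randomness being reined in here is, at bottom, a sampling decision. The toy example below shows how a temperature parameter reshapes a model's next-word probabilities: low temperatures act like guardrails that keep output predictable, high temperatures invite exactly the chaos described above. The vocabulary and scores are invented.

```python
import math
import random

# Invented next-word scores (logits) for a toy vocabulary.
LOGITS = {"swoosh": 2.0, "run": 1.5, "shadow": 0.3, "marathon": 0.1, "believers": -0.5}

def sample_next_word(logits: dict, temperature: float) -> str:
    """Softmax with temperature: low values concentrate probability on likely
    words, high values flatten the distribution and let unlikely words through."""
    scaled = {w: score / temperature for w, score in logits.items()}
    total = sum(math.exp(v) for v in scaled.values())
    probs = {w: math.exp(v) / total for w, v in scaled.items()}
    return random.choices(list(probs), weights=list(probs.values()))[0]

for t in (0.2, 1.0, 2.0):
    print(f"temperature={t}:", [sample_next_word(LOGITS, t) for _ in range(10)])
```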
Adweek experimented with Copysmith, a copy-generation site with a GPT-3 integration, to produce ideas for ads based on brand names alone—the results, while certainly inventive, were not necessarily award hopefuls.
Brand: Ford
A.I. ad idea: “What if we did something with a skydiving car chase?”
Brand: Lay’s potato chips
A.I. ad idea: “What if we performed an experiment with a vending machine that would only accept your shadow as payment?”
Brand: Adweek
A.I. ad idea: “What if we did something with a giant red button that when pushed, would break some news?”
A.I. programs like DALL-E 2, Stable Diffusion and Midjourney, specializing in image generation based on text inputs, run into the same issue. A 2022 project run by London-based agency 10 Days sought to use Midjourney to create A.I.-generated campaigns around 10 enterprise brands, including Ray-Ban, KFC and GymShark. The only inputs were the brand names, and six genre-based words to power the A.I.’s algorithm (such as “noir” or “cinematic”). The results were both impressive and shocking.
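Because Stable Diffusion's pipeline is openly available, a 10 Days-style setup can be sketched in a few lines with the Hugging Face diffusers library. Only "noir" and "cinematic" come from the project described above; the remaining style words, the prompt template and the model checkpoint are assumptions, and the checkpoint name may have changed since.

```python
# A rough sketch of brand-plus-genre prompting with Stable Diffusion via the
# Hugging Face diffusers library; needs a GPU plus the torch and diffusers packages.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16  # illustrative checkpoint
).to("cuda")

brand = "KFC"
# Six genre-style words; only "noir" and "cinematic" are quoted in the article.
genre_words = ["noir", "cinematic", "moody", "neon", "wide-angle", "grainy"]
prompt = f"advertising key visual for {brand}, " + ", ".join(genre_words)

image = pipe(prompt).images[0]  # the pipeline returns PIL images
image.save("concept.png")
```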
The issue, then, for strategists and creatives looking to leverage A.I., be it for idea, copy or art generation, is that current models like DALL-E and GPT-3 must either be hemmed in by guardrails that increase focus but drastically decrease creative output and potential, or be let loose like the proverbial pig out of the sty, to see which way they run, come what may.
This state of affairs is reflected in the views of the marketers trying to use this tech, and even of those selling it. 10 Days, the performers at the Young Vic, and various creative technologists interviewed on the topic all believe that these iterations of A.I. tech are brilliant starting points that provide provocative creative springboards, but are simply too inclined to randomness to be relied upon to create a cogent campaign from scratch. Even Jasper.ai, a GPT-3-powered copywriting tool, reassures its users that “A.I. isn’t here to replace you. Every Batman needs a Robin. Jasper will be your best assistant.”
What can be done?
A marriage of that kind of performance marketing technology with the raw creative potential of A.I.s like GPT-3 and DALL-E 2 could provide answers.
Panagiotis Angelopoulos, chief data scientist at Persado, says GPT-3's strength is volume, not quality: the vastness of the dataset it was trained on means its creative linguistic scope is huge, but it is not equipped to adapt text to a particular reader or medium.
IBM's Advertising Accelerator, and software like it, is the mirror image: its entire architecture is precisely engineered toward constantly tailoring messaging through hyperspeed A/B testing, making it an A.I. that can truly learn, but without the unbridled creative potential of GPT-3.
A potential solution would be to use GPT-3 as a starting point and work with its closely guarded API to significantly widen the range of possible data inputs, to include the typical strategic starting points for a campaign: audience demographics, psychographics, buying behaviors, seasonality and so on.
If NLP A.I. is eventually to become a viable foundation stone for an “ideas machine,” the content it generates needs to be informed by more than just its enormous lexicon; contextual information is also required.
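In practice that could be as simple as folding the strategic brief into the prompt before the generator ever sees it. The sketch below is hypothetical: the field names and brief contents are invented, and nothing here reflects how OpenAI's API is actually extended or gated.

```python
# Hypothetical: wrap a strategic brief around the creative ask so the language
# model generates against campaign context rather than its lexicon alone.
BRIEF = {
    "audience": "first-time investors, 25-34, risk-averse",
    "psychographics": "values security, distrusts jargon",
    "buying_behavior": "researches heavily before committing",
    "seasonality": "tax-return season, Q1",
}

def build_prompt(brief: dict, ask: str) -> str:
    """Prefix the creative ask with the strategy inputs a planner would normally hold."""
    context = "\n".join(f"- {k.replace('_', ' ')}: {v}" for k, v in brief.items())
    return f"Campaign context:\n{context}\n\nTask: {ask}\n"

print(build_prompt(BRIEF, "Propose three campaign ideas for a low-cost index fund."))
```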
The debate on A.I. outside of advertising—is it real creativity?
Even if we are eventually able to build an NLP/image generation A.I. that can “learn,” respond to new data inputs and use insight sources to generate content that doesn't meander into meaninglessness, there is a wider debate about whether these technologies can ever produce work that is genuinely “creative.”
Anna Ridler, an artist who uses creative technology to inform her work, summarized its limitations: “A.I. can’t handle concepts: collapsing moments in time, memory, thoughts, emotions—all of that is a real human skill, that makes a piece of art rather than something that visually looks pretty.” A frequently cited example is the astronaut-riding-a-horse experiment. Ask a tool like DALL-E to depict an astronaut riding a horse and the results are convincing; reverse the prompt to a horse riding an astronaut and you get back essentially the same image. The causal dynamic between the actors in a given scenario is not captured; the nuance is missed.
A potential solution to this can be found in GAN (generative adversarial networks), a clever way of training an image generator by working through two networks—one which generates imagery repeatedly, and a second which tries to classify those images as either real (an actual picture created by a human), or fake (generated by the A.I.). This goes on until the second network is fooled roughly 50 percent of the time, thereby indicating a level of plausibility for how “human” the first network’s images are.
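For readers who want the mechanics, here is a heavily condensed PyTorch sketch of that two-network game. The architectures are toy stand-ins, not any production image GAN, and the “real” batch is random noise purely to make the step runnable.

```python
import torch
from torch import nn

latent_dim, img_dim = 64, 28 * 28

# Toy stand-in networks; real image GANs use much larger convolutional models.
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, img_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(img_dim, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

def train_step(real_images: torch.Tensor) -> None:
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # Discriminator: label real images as real, generated images as fake.
    fake_images = G(torch.randn(batch, latent_dim))
    d_loss = loss_fn(D(real_images), real_labels) + loss_fn(D(fake_images.detach()), fake_labels)
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator: push the updated discriminator to call its fakes real.
    g_loss = loss_fn(D(fake_images), real_labels)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# One step on a stand-in batch; at equilibrium the discriminator is right only
# about half the time, which is the 50 percent threshold described above.
train_step(torch.randn(16, img_dim))
```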
What GANs do has been termed a form of transformational creativity (creating something in a completely novel style), versus the combinational creativity displayed by A.I.s like DALL-E 2 (where the generated images are trained to be in the same style as those in the training dataset).
This conundrum, while framed here in terms of art generation, also applies to the current state of language generation. Even if, after years of innovation and development, it turns out these forms of A.I. cannot be creative in a human sense, a question still remains: if what they create is virtually indistinguishable from what we can create, then from the point of view of a consumer viewing an advert, does it even matter?
And finally—a GPT-3-generated BBH short story
The only human-written line in the story below is the first sentence. The rest was written by Novel.ai, a GPT-3 story generator, using its Arthur Conan Doyle creative module.
BBH was an extremely loud advertising agency. The sound of the buzzers had been a familiar sound to me for many years—I had heard it so often that it had become a part of my life as much as the clank of the coffee maker or the sound of traffic outside. But now my world had come to an end. I was no longer listening to BBH; but the sound itself, the noise of those metal boxes, had somehow remained with me. It continued on and on in my ears, without any of its former meaning—devoid entirely of power to disturb me or to move me.
The owner of BBH had died suddenly. His death had occurred while he sat at his desk and worked late into the night. The cause of death was unknown. A few days later, three young men from BBH were found dead in their beds. One had been stabbed through his heart by a blood-stained dagger; another lay face downward upon his pillow, pierced through the back of his skull. The third man had fallen out of bed onto the floor, his neck broken. It appeared that they had all been asleep when they had been killed. The police found no sign of foul play. Everyone had apparently died peacefully in his sleep. The whole incident was strange and mysterious.