
AI Imitates Art

From painting to rap music, artificial intelligence is invading the art world. Creatives are coping with the fallout.

By Kendall Polidori


Singer-songwriter Nick Cave didn’t like what he was hearing.

“This song is bullshit, a grotesque mockery of what it is to be a human,” he exclaimed. He was responding to what ChatGPT came up with when asked to compose original music and lyrics in his style.

It wasn’t just the aesthetics—Cave disapproved of the very principle of a machine mimicking art, labeling it “replication as travesty.”

Fair. But he might be missing the point.

Artificial intelligence, or AI, has been around for decades, but ChatGPT’s public release in November 2022 alerted the general population to the technology’s reach into creative fields, such as music, visual arts and writing.

ChatGPT writes original lyrics in the style of any recognized musician in seconds, turns out a writerly blog post on any topic almost instantly, and generates fresh images in the style of professional painters, graphic designers and photographers.

It’s impressive enough to worry a lot of creatives, but Maya Ackerman would tell them not to fret. She’s CEO and co-founder of WaveAI, which has one platform to write lyrics and another to create melodies.

“The best way to really understand these tools is to use them,” Ackerman says. “These machines are not meant to replace people. Try to make something good with it.”

Image intelligence

With text-to-image platforms—including DeepAI, Fotor, Dezgo, OpenAI’s DALL-E, Midjourney and AiArtist—users type in keywords or descriptions of images they want.

They might ask for “an illustration of a hound dog playing a red Gibson Les Paul.” Within minutes, images inspired by the request appear.

The AI systems were trained on a large collection of artists’ work, such as paintings by the likes of Vincent van Gogh and Pablo Picasso. That inspired Oregon-based artist Erin Hanson to type the words “an oil painting in the style of Erin Hanson” into one of the platforms. She was stunned by the results.

“It’s beautiful, but only because it’s based on beautiful artwork created by human beings,” she says.

Hanson is among thousands of artists whose work has been used to train AI systems like Stable Diffusion—without her permission. Unlike most, she’s researched the copyright laws and found she has the right to “prevent the use of his or her name as author of any work of visual art” that she did not create.

She hopes the option to opt out of data collection will become widespread. At press time, Stable Diffusion V3 was enabling artists to opt out of the use of their images.

While painters and photographers wrestle with AI issues, the technology is also becoming a factor in video work.

This landscape by Erin Hanson exemplifies a style called open impressionism that she helped originate.

Courtesy of Erin Hanson

GENERATIVE AI TOOLS HELP OVERCOME VERY SPECIFIC SONGWRITING CHALLENGES.

Video assistants

Shane Verkest, a freelance production assistant and video editor, has been using AI in his work with the medium.

“I think this will create more accessibility for people to create things and shift the model where maybe you don’t need a super big budget anymore,” Verkest says. “The things that kids can make from their bedroom on their cellphones are about to get cooler.”

Verkest doesn’t fear AI will take his job, but he understands other artists’ concern. He hopes the technology will work in tandem with creatives.

While he considers AI-generated artwork impressive, he feels there’s a hollowness to it. “There’s still going to be a place for human art,” he concludes.

Still, AI-generated art is winning competitions, and AI comic books are prompting officials to reconsider the merits of copyright protection. That success also raises the question of whether AI art is going to erase human art.

Artist and gallery owner Erin Hanson is campaigning to protect artwork from the encroachment of AI.

Courtesy of Erin Hanson

“I don’t think so,” Hanson says, “but if it did, it would be stealing the emotional content of other artists’ work. When you are in front of an original oil painting, it has more of an emotional impact than the identical canvas print sitting right next to it.”

That divide between the real and the synthetic can occur in music, too, but artists can perhaps overcome it by using AI as a tool.

Generated tunes

While completing a Ph.D. in computer science at the University of Waterloo in Ontario, Ackerman decided to take opera lessons for fun. Within a year, she was singing semi-professionally and soon developed a strong desire to write her own music.

But creating a song didn’t come easily. “I was kind of permanently stuck in this very narrow creative space,” Ackerman says. So, she learned to produce and record other people’s music.

Then in 2015, as a professor of computer science at Florida State University, she discovered the field of computational creativity—the intersection of artificial intelligence, cognitive psychology, philosophy and the arts.

“I realized I could build tools, using these concepts of generative AI, that would help me with very specific songwriting challenges I was facing,” she says.

So, Ackerman created WaveAI, a music platform that currently hosts two systems: LyricStudio and MelodyStudio.

LyricStudio is used by millions of artists and creators, 15% of whom are professionals. The system guides users through the songwriting process by learning the user’s emotions and style and then offering suggestions for lyrics. Ackerman uses it nearly every day, and she notes that the artist drives the system.

MelodyStudio appeared on WaveAI in early February and resembles LyricStudio but instead guides artists through the process of writing melodies. The AI doesn’t write songs in their entirety; it serves only as a creative tool to help musicians break through writer’s block, Ackerman says.

One example is American hip-hop recording artist and producer Curtiss King, who had a No. 1 iTunes hip-hop album made in collaboration with LyricStudio.

“I was struggling with writer’s block,” King says. “I’m a purist when it comes to my writing process, but it was interesting because LyricStudio would give me certain prompts that I wouldn’t have thought of.”

He used the platform to spark new ideas for verses and help come up with rhymes. But LyricStudio wasn’t his first AI tool. He had previously used one that generates beats and chord progressions.

Like many of his peers, King was skeptical at first, but he wanted something that could assist in his musical process—not replace it.

Since discovering LyricStudio, King has used many different platforms, including ChatGPT and DALL-E. An independent, DIY artist, he says AI acts as his unofficial employee.

But AI could also have a dark side for working musicians and other artists.

The sound of AI

MUSICIANS CAN USE AI MUSIC PLATFORMS TO HELP WRITE TUNES AND LYRICS. HERE ARE LUCKBOX’S TOP 5:

LyricStudio provides endless lyric prompts to help spark new ideas.

Songmastr can automatically master any song (WAV or MP3) to an uploaded reference track.

AIVA acts as an electronic composer.

Amper Music creates songs from millions of individual samples and thousands of purpose-built instruments.

Boomy helps create original songs that artists can monetize.

Replacement theory

The programs that help create music could also push songwriters aside. The recording industry would be “happy to get rid of human authors,” says Daniel Gervais, a professor at Vanderbilt University Law School. If labels have the chance to produce music for free and not have to pay royalties, they’ll do it, he notes.

But Ackerman offers a counterpoint: Users should adapt to the systems and understand they exist strictly to open up creative possibilities. “You are the star and it’s the helper,” she says. “It’s flipping the paradigm, which is opposite of how AI is often described to us.”

But many creatives aren’t convinced AI is on their side, and they’re resorting to legal action to protect their interests.

Getty Images, a British-American visual media company, has initiated legal proceedings in London against Stability AI, saying the company has infringed upon its copyright for “millions of images.”

Separately, three visual artists have filed a class-action lawsuit in U.S. federal court against Stability AI, DeviantArt and Midjourney. The plaintiffs claim the companies violated copyright law by using their images to train AI.

A Stability AI spokesperson has been quoted as saying “anyone that believes that this isn’t fair use does not understand the technology and misunderstands the law.”

Fair use creates exceptions to copyright for purposes “such as criticism, comment, news reporting, teaching and research,” says Daliah Saper, a trademark, copyright and media attorney in Chicago.

Whether a given use is legal depends on how much of the original is used and what effect that use has on the market for the underlying work, she says.

“If you look at what is protected by copyright, the Supreme Court says that the work produced must be the result of creative choices,” Gervais says. “I don’t think a machine can be creative, at least not in the way that a human can be creative.”

The overarching question is who owns the work? The current cases are interesting, Saper says, because the platforms must pull from existing sources to create the new artificial work.

“Should the underlying artist be in a position to authorize that database from extracting or being inspired by or using their work?” Saper says. “There’s no definitive answer.”

During an earnings conference call in early February, Warner Music Group (WMG) CEO Robert Kyncl addressed AI and copyrights.

“It falls into four buckets,” Kyncl said. Those include using copyrighted material to train AI, sampling copyrighted material for new and remixed AI content, using AI to support creativity, and protecting the work of artists and songwriters from being diluted or replaced by AI-generated content. Kyncl said Warner identifies and tracks content on consumption platforms to spot copyrighted material and compensate copyright holders.

THERE’S A HOLLOWNESS TO AI-GENERATED ART.

But what’s the fallout from the legal activity? If the companies involved in the lawsuits are found liable, Saper says, it will be the end of them and their current business models.

“The outcome is going to have to be licensing,” she continues. “The software technology is going to have to license the database of content from those companies that are aggregating the content, just like music used to be.”

In a February op-ed on the website Music Business Worldwide, Michael Nash, executive vice president and chief digital officer for Universal Music Group, notes the similarities between AI’s rise and the ascent of Napster and unlicensed music sharing more than 20 years ago.

“At that time, it was copyright law that saved the day, ensuring that artists and labels were protected,” he says.

The Supreme Court doesn’t hear fair use cases often, but it heard arguments in one last October, and Gervais says the forthcoming opinion may redefine fair use as early as this spring.

No matter what shape that decision takes, AI tools aren’t going away. So, artists may want to prepare.

For King, that means realizing it’s too late to worry about the existence of AI. Instead, it’s vital to keep a cautious eye on the technology to ensure it aligns with society’s ethics.

Gervais agrees. “What made us special on this planet as a species is we can do this so-called higher mental faculties stuff that no other species could,” he says. “Now we are creating machines that compete with us there.”

“From what we’re seeing, [AI] is enhancing, not replacing, creativity. But eventually, yeah, looking forward to 100 years from now, it could do the whole creative job.”

–OpenAI CEO Sam Altman, during an interview for Greylock’s Greymatter podcast
