
Ghost in the Machine


By Alan Galbraith

I was born with a curse. I have an eye for design, but little illustrative ability. Painting and drawing courses have always ended with a platitude that went something like, “well, at least you’re good at <insert any talent other than illustration>”. So, when I heard of a new tool that allows one to generate images from text, I was intrigued. Thus began my exploration into the world of artificial intelligence-generated art. While most are using this new tool to create reality-bending images of purple cows eating pancakes on Mars, my automotive enthusiast background steered me towards creating motorsports-related images. But before I dive into the process of making some of these images, let’s look under the hood at the engine driving it all, generative artificial intelligence programs, and what got us to this point.

Most of us are used to searching the internet for an image, let’s say a “yellow balloon,” for example. In the early days of search engines, once you hit “find” the computer would scour the internet for items captioned as “yellow balloon”. This would work great if every picture on the internet were captioned, and captioned correctly. However, most are not. So, example pictures were fed to search engines telling them “This IS a yellow balloon”, and the programs would then compare the pixels that make up the example yellow balloon picture and look for similar images in cyberspace. The results would return images of yellow balloons, but also lemons. A better, but not perfect, system. Stepping from finding existing images to creating new ones, the computer programming and math involved gets really deep, really fast. Mathematical and statistical models take over. Terms that I don’t pretend to understand, like “variational autoencoder” and “generative adversarial networks”, become the norm. These programs are fed as many photos and text descriptions as possible on an ongoing basis, continuously refining their “understanding” of images and allowing them not only to find pre-existing images but to create new ones.

These technologies are relatively new, with many developed in the mid-2010s. Developed in parallel were programs that deciphered “natural language” text-based inputs. The integration of the two led to programs that, when fed the text prompt “yellow balloon,” would return a newly created image of a yellow balloon. Which is great if you like balloons. But what if you are a motorhead like me? Well, start your engines.

Throughout my life I’ve been exposed to all manner of art depicting automobiles: concept drawings, event/racing posters, brochures, magazine spreads, advertising and models. When I discovered the generative AI program Midjourney (Midjourney.com), my first thought was to release some of the automotive images that were locked in my head by my lack of artistic talent. One of the outstanding – and some would say controversial – features of Midjourney is the ability to create images in the style of a particular artist by entering plain text descriptions. What if Art Fitzpatrick or Van Kaufman, known for their 1960s Pontiac brochures, painted a scene with a white 2017 Mercedes-AMG GT in it? What would a drawing of a vintage Ferrari at a racetrack done by futurist Syd Mead look like? Thirty seconds after typing in the words “Syd Mead drawing of a vintage Ferrari at a racetrack”, I had my answer, and the answer was amazing. The algorithms behind Midjourney took the text and, “knowing” the style of Syd Mead and what a vintage Ferrari looks like, created a stunning scene that looks like it came from Syd’s own drafting table.

Which is where some of the controversy comes in. Some would argue that when naming a particular artist to emulate, the programs are doing nothing but ripping off the works of those artists. There is certainly some merit to this thinking. However, emulation, if not straight-up plagiarism, is not new in the world of creative expression. How many guitar players emulate the styles of those that have come before them? Wasn’t Stevie Ray Vaughan’s style just an ever-so-slight twist on Jimi Hendrix’s? Prince lifted pieces of James Brown’s style wholesale for his act. Painters would gather with the express intent of copying each other’s styles, which led to what we know today as different “movements”. No one looks down on Monet for being like other painters in the Impressionist movement, or Charles and Ray Eames for incorporating abstractionist elements into their mid-century modern industrial designs. I’ll leave you to reach your own conclusion on the ethics of generating visual art, music, film, architecture, or any other creative output in the style of an existing artist. But if you ever wished to have a Salvador Dali painting of your car, that can be made real with just a few keystrokes.

One of the less controversial aspects of AI art generation is creating art not based on an existing artist’s work, but rather in general styles. Prompts like “Character Design, Double Exposure Shot, Vintage Bugatti filled with drivers and people in fancy clothes” or “Abstract illustration of a 1966 Ford GT40” yield wildly differing results. Each image generated leads to new ideas for text descriptions. It’s much like playing with the knobs on a synthesizer keyboard, creating new and unexpected sounds that can lead to inspiration for a new song. The programs let you pick images you like and make new iterations of them, emphasizing certain elements or mutating them at random. Reaching the image you envisioned can sometimes take dozens of iterations. Hours can pass in search of just the right combination. A little skill in Photoshop can go a long way toward combining elements from different images into the intended vision.

Some may argue that this cuts traditional artists and illustrators out of employment at their craft. I would maintain that AI art generation is imprecise at best. It’s unlikely that it will replace the human give-and-take of employing an artist to render a scene and make revisions based on a mutual understanding of the desired output. The AI technology just isn’t refined enough to fully conceptualize the nuances of human artistic vision. The programs behind it only understand how to piece together bits and pieces of what they have been fed. It’s a pretty good mimic, but hardly a replacement for human creativity.

One thing AI art generation does do is bring the ability to create images home to those who lack illustrative abilities themselves. I encourage you to try it and see for yourself its strengths and limitations. Be prepared for hour upon hour at your keyboard. You might even want to consider clearing some wall space in your garage for your newly created masterpieces.
