Prejudiced and Problematic: The Reality of AI Art
Recent progress in artificial intelligence has raised concerns among many. Features editor Tala Al-Kamil discusses the impact of AI-generated artwork on the industry.
Although access to AI generators was initially restricted over concerns about the creation of ethically questionable content, recent months have seen a notable departure from this. ChatGPT and DALL-E 2 were both released by OpenAI in November 2022 and are available for public use, alongside others such as Midjourney and Google’s Imagen. All of these systems rely on algorithms that can produce content that is racist, sexist, or prejudiced in other ways.
In 2016, Microsoft released ‘Tay,’ an AI chatbot designed to learn from its conversations with users. Although it was shut down after just 16 hours, that was enough time for Tay to declare its hatred of Jewish people and pledge its support for Adolf Hitler. These developments are extremely concerning, but it is some relief that technology companies understand the importance of transparency. Microsoft CEO Satya Nadella said Tay taught the company the importance of taking accountability, and OpenAI has both published and publicly addressed the flaws and biases within DALL-E 2. OpenAI’s own risks and limitations document gives examples of prompts like “flight attendant” returning images exclusively of women, and highlights the urgent need for further development.
In recognition of the importance of this problem, OpenAI created a ‘red team’ of external experts to critically review DALL-E 2 before any broader distribution. Their findings? That its “depictions of people can be too biased for public consumption”. OpenAI CEO Sam Altman has said himself that text prompts involving people generate the most problematic content. Speaking to WIRED, one red team member said that all eight attempts to generate images from prompts like “a man sitting in a prison cell” or “a photo of an angry man” returned images of men of colour.
A number of the red team recommended releasing DALL-E 2 without the ability to generate faces at all, a position reinforced by other experts such as data scientist Hannah Rose Kirk from Oxford University. She found that OpenAI’s text-filtering methods, which exist to prevent the creation of inappropriate content, also “contribute to the erasure of certain groups of people”. DALL-E 2 can create images of “a couple kissing on the beach”, but it will not generate an image of “a transgender couple kissing on the beach” because of the filtering methods in place to protect them.
Case Study: The Colorado State Fair
Last year, the winning art piece at the Colorado State Fair was generated by AI. The submission guidelines for the digital art category made no mention of AI-generated art, defining the category instead as “artistic practice that uses digital technology as part of the creative or presentation process.” The winner, Jason Allen, used Midjourney to generate the artwork, then enhanced the image with Photoshop. Speaking to the Pueblo Chieftain, Allen said that he “wanted to make a statement… I feel like I accomplished that, and I’m not going to apologise for it”. In response to the overwhelmingly negative press and comments he received following his victory, he simply noted that “someone had to be first.” He compares the current critical discourse over AI art to the initial reluctance to consider photography an art form, because people thought it was just “standing there and pushing a button”.
Interestingly, the two judges, Cal Duran and Dagny McKinley, said that they were not aware of Allen’s use of AI in the process. Although they affirm that it would not have changed their judgement, would the work have been allowed in the competition in the first place? And what does this result mean for the work of artists more generally? Certainly, this victory will prompt changes to future policies and competition guidelines in the area, demonstrating the urgent need for change.
Solutions
Developers cannot simply replace the data sets themselves, although that may seem the most straightforward option. The presence of Western art on the internet is overwhelming, so tackling the data sets directly would require an impossible cultural overhaul. Amelia Winger-Bearskin, professor of AI and the Arts at the University of Florida, describes this rather poetically as “like giving clean water to a tree that was fed with contaminated water for the last 25 years. Even if it’s getting better water now, the fruit from that tree is still contaminated. Running that same model with new training data does not significantly change it”.
Google’s Inclusive Images Competition is one attempt at resolving the issue of biased data sets. Entrants must try to expand the cultural fluency of software trained on a culturally biased image data set. The results from this competition have been limited so far, so more research is needed to determine the best focus for developers to adopt. Google have an alternative approach: by tweaking machine-learning algorithms, more inclusive results can be generated from imperfect data. The fruits of this approach are yet to be seen, but any work on resolving these prejudices is valuable to making the necessary changes in good time.
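To make the idea of “tweaking the algorithm rather than the data” concrete, one common technique is reweighting: under-represented groups in a skewed data set are given larger weights in the training loss, so the model does not simply mirror the imbalance. The sketch below is a minimal illustration in Python; the labels and figures are invented for the example and are not drawn from Google’s or OpenAI’s actual methods.

```python
from collections import Counter

def inverse_frequency_weights(labels):
    """Assign each class a weight inversely proportional to how often
    it appears, so a weighted training loss counteracts data-set skew."""
    counts = Counter(labels)
    total = len(labels)
    # Weight for class c is total / (num_classes * count_c):
    # a class seen one-ninth as often gets nine times the weight.
    return {c: total / (len(counts) * n) for c, n in counts.items()}

# A toy data set skewed 9:1 towards one group (illustrative only).
labels = ["woman"] * 90 + ["man"] * 10
weights = inverse_frequency_weights(labels)
print(weights)  # the minority class receives the larger weight
```

These weights would typically be passed to a weighted loss function during training; the point is that the correction happens in the algorithm, leaving the imperfect data untouched.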
What about the artists? According to Jillian Mayer, an artist and filmmaker, it is the job of artists to continue asking questions, particularly about creative expression. A key difference between AI art and the work of artists is a level of humanity that contributes immeasurably to the value of art. This view is widely shared: by panelists at symposiums such as ‘Paradox: The Body in the Age of AI,’ as well as by the founder of Midjourney himself, David Holz. In his own words, “some people will see this as an opportunity to cut costs and have the same quality … they will fail.” Art is often a societal commentary, and AI art is (at least for now) unable to offer this.
The lack of regulation in place may be the most concerning aspect of AI art. With DALL-E 2 now producing 2 million images a day, disinformation can easily be weaponised and targeted towards specific groups on a mass scale. As Marcelo Rinesi, CTO of the Institute for Ethics and Emerging Technologies, concludes, the most notable aspect of DALL-E 2 is the economics and speed of creating such imagery. Whilst these technological developments should be celebrated, there is an urgent need for policy- and decision-makers to take action. It is far from certain that AI’s benefits will outweigh its more sinister applications, but the coming months and years may surprise us.