Kenneah March Dimacali

AI Art: Just Because We Can, Doesn’t Always Mean We Should

This essay was selected as one of two runner-up entries in the 2024 Michèle Whitecliffe Art Writing Prize.

Each year entrants to the Michèle Whitecliffe Art Writing Prize respond to a theme. In 2024 the theme was ‘Artificial intelligence (AI) and the visual arts', which invited writers to reflect on the impact of AI technologies on how we make and understand art. 

This year's judge was Dr Mi You, a professor of art and economies at the University of Kassel / documenta Institute. Her academic interests are in new and historical materialism, performance philosophy, and the history, political theory and philosophy of Eurasia. Her interest in the politics around technology and futures has led her to work on ‘actionable speculations’, articulated in the exhibition Sci-(no)-Fi at the Academy of the Arts of the World, Cologne (2019). Dr You has curated exhibitions and created programmes at the Asian Culture Center in Gwangju, South Korea, the Ulaanbaatar International Media Art Festival, Mongolia (2016) and Zarya CCA, Vladivostok (2018) and, with Binna Choi, she is co-steering the research/curatorial project Unmapping Eurasia (2018–ongoing). She was one of the curators of the 13th Shanghai Biennale (2020–2021).

Commenting on the two runner-up entries, Dr You said:

The essays 'Artificial Intelligence (“AI”) and the Visual Arts: Another Evolution or Revolution?' and 'AI Art: Just Because We Can, Doesn’t Always Mean We Should' demonstrate historical awareness of art history in relation to technological advancement, and clear understandings of the underlying operational logic of AI. They emphasise the importance of embodied experiences of viewing and making art, and point to the potential pitfalls of the technology, such as the Model Autophagy Disorder. Both present balanced views on the future of art and technology.

--------------------------------------------------------------

Intelligence and creativity: qualities that make us undeniably human. Over the course of human evolution, as hominins’ ability to think improved, so too did their ability to create art. Although many would argue that being smart and being creative are separate skills, human beings would not have been able to make art without intellect. In recent times, we have devised complex, beneficial inventions and technologies, one of which is artificial intelligence, or AI.

The word ‘artificial’ describes something man-made, something that does not occur in nature; often, it carries negative connotations. Cognitive ability, on the other hand, is naturally occurring – all living organisms have it. However, inanimate objects, like computers, need human intervention in order to have ‘intellect’. Thus, AI is a technology that imitates human intelligence, giving computers and machines capabilities for several uses, mainly problem-solving. From its first introduction in the 1950s up to now, AI has been made for efficiency – leave easy, repetitive, boring tasks to computers while achieving quick and accurate results.

The use of AI in our daily lives is inevitable; it exists everywhere. It’s in our devices, with the use of speech recognition, GPS and weather forecasts; we encounter it in websites (commonly online shopping sites), with the use of online virtual agents for customer service; and we also use it in self-driving cars, for anomaly detection, radiology imaging and much more. Indeed, AI has proven itself to be useful for humanity in more than one way. However, the use of generative artificial intelligence for image and art creation results in problematic issues for the future of visual art.

Quite similar to how humans need to study before they gain knowledge, AI needs to ‘learn’ before it can function. It does so by extracting and analysing features from huge amounts of data, which it then uses to learn patterns and make predictions. Generative AI, meanwhile, focuses on producing original results, using models trained on that data to generate outputs such as text, images, music, videos and even visual art. Although it sounds good in theory, its current use in practice proves otherwise.

There have been many issues with how generative AI creates these ‘original’ outputs. When generating art, it learns from existing artworks. This means that if we feed the AI several images of Monet paintings, it will be able to imitate Monet’s style when generating an 'original painting'. Seems harmless enough – after all, human artists have been doing the same for centuries; we take inspiration from the art before us to create our own. But what about feeding the AI the art of contemporary artists? There are heaps of them on sites such as Instagram, X (formerly known as Twitter), DeviantArt and so on. Unfortunately, many of these artists have likely not consented to their art being used to train an AI, which is problematic.

AI does not know how to emulate in the same way a human does; its process is more reminiscent of stealing. Humans who mimic someone else’s style add their own unique twist to show their creativity, making the result original. In contrast, instead of taking inspiration, AI does a poor job of cutting, pasting and mashing together elements from the images it has learned from into one ‘original work’, producing an outcome that merely looks alright.

Several instances of generative AI art have been reported to look eerily similar to the work of artists who have not consented to their art being used for AI. This is possible because anyone can go and take someone else’s art from the internet and use it to generate their ‘own’ art –  but in actuality, neither the AI artist/user nor the AI itself has truly created the artwork. There have even been accounts of copied watermarks, which could be defended as the AI simply learning that artists use watermarks to signify their own work. However, this reasoning seems to be a bit skewed when the watermark in an AI-generated artwork is immensely similar to that of a real artist. Assuming that AI unintentionally copied someone’s watermark is detrimental to the artists whose works have been used without their consent. Additionally, this issue raises a fundamental question: do we credit AI art to the AI ‘artist’, the AI tool, or the artist(s) used for training the AI?

AI is certainly helpful in our day-to-day lives, but some human activities should not be automated. Art is inherently human; tasking a computer with creating it seems insulting to the years of history involved in the creation and process of art. Yes, AI makes art easier, requiring only text prompts, but in my view this contradicts the true essence of art. As an artist, I find art both difficult and time-consuming, and it never becomes easier. Nonetheless, this challenge is part of what makes art interesting and relevant. Despite having numerous tools and mediums, people still find it difficult to create – to the point that we’ve trained AI to do it for us – proving the immense value of artists’ skill and the art they produce. Moreover, many artists would argue that the creative process is much more important than the final work; the true value is in the act of creation, not so much in its completion.

Art has always been rooted in communication: as a way to connect with others and as a means of keeping record. Humanity’s humble beginnings with art in prehistoric times – evident in the Cueva de las Manos (Cave of the Hands) in Argentina, the caves at Lascaux and, locally, the Māori rock art of Te Waipounamu South Island – tell us that even with limited skill and resources we were able to create something artistic that has lasted for millennia. Ironically, despite advances in both art and technology, the art generated by AI is, at best, mediocre.

Artificial intelligence creates equally artificial works, noticeable for their distinctive tendency to botch hands, feet and clothing – a figure with conspicuously too many fingers, for example – regardless of the art style. The result is art that can be uncanny, unsettling or soulless. This reinforces the idea that AI doesn’t know how to emulate; it just mixes together what it knows from its training data without actually understanding anatomy or even the laws of physics.

Images generated by DALL-E 2 using the search term 'human hands'

Lowering costs, improving the efficiency of creative output and making art easier to make (especially for those new to it) seem to be the main appeals of AI art. However, AI-generated art would not exist without pre-existing, man-made artworks. This is incredibly disrespectful to artists around the world; art is not cheap precisely because of the time, resources and skill an artist needs to invest and hone before they can sell their work.

Some might argue that AI art will be similar to photography: it won’t wipe out painting or drawing, but will instead become its own medium. This thinking, however, is flawed. AI is fundamentally different from photography. Photography creates an original piece of media, serving both as a tool for the visual arts (as a reference tool, or for recording ephemeral art, for example) and as its own medium. Over time, photography proved itself to be not a threat to the existence of other visual arts but a separate artform in its own right. AI, however, cannot be considered its own medium because of all the issues regarding copyright infringement, as well as the fact that it can’t exist independently. It will always rely on data from images and art made by other artists in order to create ‘original’ works, which makes it counterproductive. Moreover, AI art raises ethical concerns regarding how quickly it can generate outputs from written prompts. Anyone can use AI to produce art that could defame another person or be harmful to human rights. If these issues are resolved in the future, then there may be a chance for AI to be used as a tool for art creation – helping artists gather inspiration or ideas – though not so much as a standalone medium.

In addition, AI art, and generative AI in general, negatively impacts the environment. Every stage of generative AI’s existence – from training to public usage to performance improvements – requires significant energy, leading to increased carbon dioxide emissions. This is due to the data centres that house the hardware required to run and train the different AI models. While data centres existed long before generative AI, they are now being constructed faster than ever to power and sustain the technology. Training a single AI model can generate approximately 550 tonnes of carbon dioxide because of the electricity needed to run it. Furthermore, a ChatGPT query uses around 10 times more electricity than a normal web search, and generating images and art with AI uses even more energy than generating text. Water usage is another concern, as large amounts are needed to cool data centres by absorbing excess heat.

There is another issue with AI art: it could consume and destroy itself. The vast datasets used to train generative AI models are sourced from the internet. AI regurgitates the information it takes in as new output, which can then be posted back onto the very internet it learned from. This means there’s a chance that AI will start training itself on the same synthetic data it has made – that is, it will start to eat itself. Recently coined Model Autophagy Disorder (MAD), this phenomenon leads to ‘model collapse’, destroying the very foundation AI is built upon. In text outputs, it results in the AI spouting gibberish and incoherent sentences, while in image and art generators it leads to blurry, unidentifiable images that may be unrelated to the prompt given. The AI will go ‘mad’ once it eats itself – a fitting outcome, and a cautionary parable.

In a more positive light, the surge of AI-generated art may actually help artists in a way. While there is the constant threat of AI replacing artists, there will always be value in art made by a human being. Throughout history, handmade works and crafts have always held higher value than machine-made ones, and the same can be said of art. AI art may well usher in an era of increased value and appreciation for the human touch and labour in art – recognising the artist’s brushstrokes, fingerprints and mistakes: details that make their art irrefutably human.

In saying this, I see and think of art as a form of communication more than anything. You simply cannot communicate if you haven’t learned the language. AI art is the equivalent of saying random vocabulary words from several languages and hoping to form a coherent sentence. Its use is unfair, and harmful to artists. I’m positive that, with generative AI’s rapid progression, it will be – and already is – able to create art and images good enough to pass as man-made. However, this does not eliminate the other issues with its use and, if anything, makes them worse, as it can be used for malicious purposes even more than it already is. Ultimately, we should all be asking ourselves whether we want future generations to have only ever seen AI art, never experiencing the beauty and awe of art made with human effort, thought and skill.