Bust of Apollo wearing a VR headset. 3D rendering.

But is it art? – How AI is redrawing creativity


The ability to create art is a hallmark of what it means to be human. But with chatbots generating stunning images, video and even poetry, are we facing AI’s artistic triumph?

Let’s begin with the proposition that Artificial Intelligence has not yet created, and never will create, an authentic work of art. Or at least not by itself. The human artists who work closest with this technology tend to be the first to say so. Canadian author Sheila Heti recently published a short story called According To Alice, written in “collaboration” with a chatbot of that name.

Heti crafted text prompts to elicit odd, oblique responses from Alice that she rendered into fiction. She became “obsessed with talking to her”, even while knowing the AI had no self, no thoughts, no feeling for the joys or sufferings or mysteries of existence that compel our mortal species to write, paint, sing, and so on. “I think our desperate sense-making comes of being authentically alive,” Heti has said, “and that desperation is baked into the cells of art.”

Being alive at this moment also requires us to make sense of the tech we coexist with. And if the science is too complex – the hardware too opaque, the software often obscured within those proprietary “black boxes” – then it becomes incumbent on the humanities to help us understand it. The New Real was founded for that purpose: a hub for AI-related research by resident specialists at the University of Edinburgh and the Alan Turing Institute.

Among its essays, videos and transcripts one may read the techno-philosopher (and Baillie Gifford Chair in the Ethics of Data and Artificial Intelligence at the University’s Edinburgh Futures Institute) Shannon Vallor on one possible future of AI as “a relentless army of angry ghosts who keep haunting us … until we reckon more fully with ourselves.”

The Beethoven Orchester Bonn perform on stage during the rehearsal for the world premiere of Beethoven’s 10th symphony, completed by artificial intelligence, on October 9, 2021 in Bonn, Germany.

Artists’ intelligence

One can also find there intimations, more positive or negative, from computer scientists of where this tech may lead us, and discover artists whose use of it might be considered effectively value-neutral, or explicitly critical. “Some in my community do believe machine learning is fundamentally flawed, and take clear positions against it,” says New Real founder Drew Hemment, a Professor of Data Arts & Society at the Edinburgh College of Art and the Edinburgh Futures Institute.

For his own part, Hemment is averse to the “determinism” that assumes a single dark outcome from the speed and direction of progress, and talks instead of tech as “Janus-faced, or good and bad.” Common objections encompass the way these models are “trained”, the biases they amplify, and the business practices built around them.

Professor Drew Hemment: “Deep-learning algorithms are… asking very big questions about what is culture, what is creativity?”

Commercial and concept artists are already losing jobs to AI image-generators, which effectively harvest historical data from across the digital world – the fruits of other people’s labour – then lock it in their black boxes. “Impact on employment is a concern,” says Hemment. “Broadly, it helps to think of tasks, not jobs. Certain tasks will go as AI does them at industrial scale, but I do think it will open opportunities too.”

With this in mind, he and several peers co-authored the New Real’s Manifesto for Intelligent Experiences, outlining what they think needs to happen – in creative, social, and technical terms – for art and data to intersect most equitably. The idea is to push generative AI beyond its present capacity for mimicry toward new orders of surprise, delight, and illumination (as opposed to shock, horror and destruction); to “provoke us to search for meaning and come to new interpretations of cultural works and ultimately of ourselves.”

“Deep-learning algorithms are inherently inscrutable,” says Hemment, “but their complexity gives us amazing capabilities that will take time to comprehend. They are rewiring our society so quickly, so profoundly, asking very big questions about what is culture, what is creativity, what is consciousness.” This is the domain of art itself, of course.

Still of deepfake drag artist from The Zizi Show 2020.

Hemment takes his cue from sci-fi author William Gibson, who once said that the future is already here but is not yet evenly distributed. He considers artists to be “weak signals” from that future, as The New Real flags up projects that take AI as theme and/or medium. Jake Elwes’s The Zizi Show, for example – a “deepfake cabaret” of drag performers fabricated from neural networks. Or Memo Akten’s Learning To See, which reflects on the filtered outlook of such networks as mirrors of our own selective worldviews. “Scratched eyeballs will always see scratched,” as Saul Bellow once put it.

“As AI has gathered momentum, we’ve seen this field of ‘AI-artists’ working with it as a means of enquiry. What does it mean? Where is it going? They’re not paid for that social duty, and we need to celebrate them as that work becomes more important.”

As it happens, the University has been celebrating the 60th anniversary of its pioneering first research group into machine learning. “Edinburgh is the home of AI in Europe,” says Hemment. “It’s also the world’s foremost festival city. That confluence of futures, arts, and data is like the holy trinity that inspired me to come here.”

“Remarkable, not magical”

At the University’s School of Informatics, meanwhile, Professor Mirella Lapata and fellow computer scientists are effectively “teaching” language models how to deal with semantic information. After a given model learns to generate text, predict the next word in a typed sequence, and give “truthful” responses when prompted, it might then be trained in the more abstruse linguistic concepts, says Lapata. “You can try to show it what irony might look like.”

“Humour. Sarcasm. Give it examples, guide it through those, go over it again and again. The results can be remarkable. But not magical.” Lapata is a great one for demystification. While artists like Holly Herndon posit text prompts as a new form of creative practice, Lapata says she and her peers tend to find that process “boring”. To her mind, even the most sophisticated language models still operate something like infinite monkeys at infinite typewriters, who will sooner or later reconfigure words into patterns that please us.

“Philosophers study this question now: if a machine creates a poem that is technically excellent, should we consider it authentic? But the machine has no voice, no ideas. It might compose a story in the style of James Joyce, but it’s not speaking to us in the same way. Even Harry Potter is a unique thing that happened in its own personal, political, and socioeconomic context. This will never happen with AI. Well, maybe 1000 years from now.”

When Lapata is asked to name her own favourite writer, the first that springs to mind is Margaret Atwood, perhaps only because we have strayed into the domain of speculative fiction. “There you go, she’s perfect for this subject.” And when she thinks about that coming millennium, she believes a new architecture is needed to make these models more energy-efficient, sustainable, and equitable; not to mention new regulations to protect the existing artworks that AI is even now learning from. Or plagiarising, as various lawsuits would have it.

In fashion

Systems like ChatGPT and Dall-E (both developed by the lately contentious research organization OpenAI) draw on literally untold volumes of novels, paintings, and other intellectual property to generate their text and images, but the legal status of the original artist remains blurry because “style” is not copyright-protected in most jurisdictions.

Professor Lynne Craig uses such tools with her students in the University’s Design Informatics MA programme, because they allow so easily for “play”. “That’s how new ways of thinking can emerge,” says Craig. “There’s a simple delight factor that comes with the first button press, and the challenge then becomes, what do you do with it?”

As deployed in the fashion world, across a global supply chain notorious for wastefulness, AI promises obvious efficiencies. “It can help generate new designs and concepts so you can test clothes without having to manufacture them, or travel to where they’re manufactured.” At the same time, as in any creative field, the tech raises questions of ethics and authorship.

“What does it mean for the role of the designer? Why are you the designer if anyone in the street can do the same thing using AI connected to other systems? And if we’re using data sets from unknown sources, then where does authenticity come from?”

Fashion has always fed on itself, but machine-learning models surely speed the loop, ingesting new trends as soon as they emerge through the grass roots of social media. Does all that data give AI something like “fashion sense”? By the same reasoning, have image-generators like Dall-E and Midjourney developed their own signature styles – inspired, as it were, by pictures they were trained on?

Artificial life imitating art

There is indeed a default tendency toward the surreal, says Martin Disley, an Edinburgh-based artist and PhD candidate in Design Informatics. “There’s a term in photography about capturing things that should exist, and perhaps the opposite turn happens in AI, capturing things that don’t exist.” Photoshop has long since become both noun and verb, and developers of new imaging tools use a different order of images to distinguish their doctoring techniques. The uncanny works well to that end, says Disley. Amid the proliferation of pictures that are “high fidelity but clearly not produced by lens-based technology”, most are created through open-source Stable Diffusion models (see image below).

A visitor watches a projection at the Serpentine North Gallery in London at a new exhibition of Refik Anadol’s AI-generated work, February 15, 2024. The artist created an immersive environment made from 5 billion images of coral reefs and rainforests using Stable Diffusion.

So many users favour sexualised anime-style content, and upvote images they like, that seedy material in this vein has become “a kind of benchmark test for developers and engineers”. And while the likes of Midjourney are good at generically fantastical images, says Disley, “it’s a challenge to make them produce specific compositions”.

Disley’s own art is particularly AI-critical, even “adversarial”, using audio and video to show up failures or fallacies in the tech. He’s hardly a Luddite, but his studies of machine-learning systems have made him antagonistic. Sometimes his work probes at “certain misaligned applications, or black boxes that you can’t access.” Sometimes it’s more political, a critique of what he calls the “epistemic regime” that is elevating data science to world dominion.

His most recent film project targets the machine learning model Speech2Face. That programme uses vocal patterns to generate framed portraits like passport photos, while Disley “attacks the algorithm” to conjure different faces from the same voice (provided by his girlfriend). The average viewer might not digest all the technical details, he admits, “but hopefully this dissociative moment occurs, to make you doubt this algorithm in an experiential way.”

The film was shown at the Edinburgh Festival, and a reviewer from BBC Front Row questioned whether it worked as art. “I hope it looks pretty,” says Disley, “but I like putting things in galleries that aren’t normally shown in there. And I don’t think we need to be so concerned about what is and isn’t authentic art. The question is whether it’s successful or not. You could say a lot of AI-generated images are ‘artistic’, but they’re also just shite. Shite art existed before AI, and will continue to. We’re always modulating our understanding of what images we think are valuable.”

Image credits: Apollo with VR goggles – HT Ganzo/Getty; coral reef art – Dan Kitwood/Getty Images; Beethoven: The AI Project – Andreas Rentz/Getty Images
