King Of England Charles III

The rapid rise of AI art

Image credit: MidJourney

Has humanity unwittingly entered a radical new era of art and artistic expression? That’s the suggestion being circulated in creative communities and online forums as a new breed of powerful artificial intelligence emerges from the shadows.

Generative AI art has exploded onto the scene over the past few months through advanced online platforms like DALL-E 2, Midjourney and Stable Diffusion, which enable anyone with access to a smartphone or PC to create highly polished art by typing in simple text instructions.

Sophisticated algorithms have learnt to mimic the specific styles, colours and brushstrokes of renowned artists, enabling users to instantaneously create their own unique versions of masterpieces by the likes of Van Gogh, Dali, Turner or Monet.

The technology can bring outlandish and otherworldly creations to life in super-realistic detail. Type in ‘Cookie Monster climbing the Shard’ and you’ll see the children’s TV character incongruously scaling the tower. Type ‘Taylor Swift commanding a legion of the undead’ and a disturbing image of the pop star will appear as if conjured from the bowels of hell itself.

The endless possibilities have sparked an avalanche of memes on social media, thrusting the topic of generative AI into the spotlight and raising some fundamental questions in the process: if a machine makes art, is it real art or just the result of complex calculations? And what does the technology mean for human artists working in video games, music, films or TV? Are their hard-won creative skills being devalued and their jobs put in jeopardy?

AI Made This Painting. Engineering.

If a machine makes art, is it real art or just the result of complex calculations?

Image credit: MidJourney

The dilemma strikes at the heart of the artistic profession because AIs are ‘trained’ on millions of images, many of them copyrighted works by real artists who don’t have the ability to opt out. While some artists are prepared to accept this sacrifice in exchange for the creative avenues opened up by the technology, others claim it amounts to little more than theft.

Karla Ortiz, an illustrator and board member of the Concept Art Association (CAA), tells E&T: “The first time I heard about these tools, I was actually quite curious. But the more I found out about how they are created, what kind of data they not only use, but need, to generate results, I started becoming much more hesitant, to the point where now I cannot in good conscience or good faith suggest to anybody in my industry to use these tools, whether they are a concept artist, an art director; not anybody.”

The concept of using AI to make art might seem revolutionary, but experiments programming computers to mimic human creativity in fact date back several decades.

One of the earliest examples of an autonomous picture creator was developed in 1973 by the artist Harold Cohen. The ‘Aaron’ system used algorithms to instruct a computer to draw specific objects with the irregularity of freehand drawing. Some commands generated forms the artist said he could not have come up with, mimicking real artistic decision-making.
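Cohen never published Aaron's internals in full, but the flavour of the approach – deterministic drawing rules perturbed by controlled randomness – can be sketched in a few lines of Python. The function name and `wobble` parameter below are illustrative inventions, not Cohen's actual code:

```python
import random

# A toy 'freehand' line: follow a straight path between two points,
# but add small random offsets so the stroke looks hand-drawn rather
# than ruled. A guess at the flavour of Aaron-style drawing rules,
# not Harold Cohen's actual algorithm.

def freehand_line(start, end, steps=50, wobble=0.02, seed=42):
    rng = random.Random(seed)
    pts = []
    for i in range(steps):
        t = i / (steps - 1)
        # point on the ideal straight line from start to end
        x = (1 - t) * start[0] + t * end[0]
        y = (1 - t) * start[1] + t * end[1]
        if 0 < i < steps - 1:  # pin the endpoints in place
            x += rng.gauss(0, wobble)
            y += rng.gauss(0, wobble)
        pts.append((x, y))
    return pts

# One stroke from (0, 0) to (1, 1); feed several of these to any
# plotting library and the result reads as a sketch, not a ruler line.
pts = freehand_line((0.0, 0.0), (1.0, 1.0))
```

Compose enough rules like this – for outlines, shading, placement – and a machine starts making marks its programmer did not anticipate, which is exactly the effect Cohen described.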

Fast-forward to the 2000s, and innovation accelerated thanks to the development of computer coding resources for artists, open-source projects and the public availability of vast datasets, like ImageNet, that could be used to train algorithms to catalogue photographs and identify objects.

Recent improvements in AI, specifically a class of technology known as generative AI, have moved the needle by combining complex deep-learning techniques that mimic the workings of the human brain with massive computing power.

Platforms like DALL-E 2, Midjourney and Stable Diffusion exploit neural networks trained on huge image datasets to detect underlying features and patterns and, based on user text prompts, create similar content without being a carbon copy. For example, the text prompt ‘Engineering and technology’ input into Midjourney produced a selection of images, one of which you can see opposite.
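Systems such as Stable Diffusion belong to a family called diffusion models, which generate images by starting from pure random noise and repeatedly 'denoising' it toward something that matches the prompt. The tiny sketch below is a cartoon of that loop, not a real model: a fixed target vector stands in for what the trained neural network would actually predict from a text prompt.

```python
import random

# Cartoon of diffusion-style generation: begin with noise, then take
# many small steps toward the signal a trained network would predict.
# The fixed 'target' here stands in for that learned, prompt-conditioned
# prediction; real systems learn it from millions of images.
rng = random.Random(0)

target = [1.0, -1.0, 0.5, 0.0]           # the 'image' to recover
x = [rng.gauss(0, 1) for _ in target]    # start from pure noise

for step in range(100):
    # each denoising step removes a fraction of the estimated noise
    x = [xi - 0.1 * (xi - ti) for xi, ti in zip(x, target)]

# residual error shrinks by a factor of 0.9 per step, so after 100
# steps the noise has all but vanished
error = max(abs(xi - ti) for xi, ti in zip(x, target))
```

In a production system the fixed target is replaced by a neural network's noise estimate, recomputed at every step and steered by the user's text prompt.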

It’s still early days for generative AI, and systems sometimes struggle to convincingly render certain features, such as human or animal body parts, or written content, which is often garbled. Nevertheless, they have already proved their ability to rival human art, duping even experienced art critics.

An AI-generated work titled ‘Théâtre D’opéra Spatial’ won the digital art category at the Colorado State Fair last summer, even though the artist Jason M Allen neglected to reveal that machine learning was behind his creation.

“Art is dead, Dude,” he told the New York Times following the award, insisting that he didn’t break any rules, although many artists were furious.

Dr Mark Wright, director of the Foundation for Art and Creative Technology at Liverpool John Moores University, tells E&T: “AI art has some history to it, but these amazing deep-learning systems and convolutional networks seem to have produced a step change in competence, which is really remarkable. Where before artists had to be embedded with scientists or technical people to achieve anything using AI, today anyone can achieve results.”

Despite the fledgling status of the technology, many artists are using it to enhance their work and come up with ideas for illustrations and concept art. Some have even adopted the moniker ‘AI collaborator’ to describe their co-dependent relationship with the software.

One vocal supporter is Belgian artist and machine-learning researcher Xander Steenbrugge, whose art video ‘Voyage through Time’ was created using 36 consecutive phrases in Stable Diffusion to define its imagined prehistoric landscape of dinosaurs.

Rather than simply adopt the ‘vanilla’ version of the platform, Steenbrugge says he ‘hacks’ the open-source code to alter the logic and introduce subtle changes.  

“I usually have an intention of what I’d like to create,” he explains, “and when I start exploring and the AI model enters into that cycle, coming back with certain visuals, based on what it produces, I iterate, adopt certain things that worked well and adjust the code. I feel like when I’m creating there is a second creative agent in the process. It’s a new and interesting paradigm we’re seeing here.”

Tinkering with the code highlights how working with text-to-image generators doesn’t have to be zero-effort; it can require talent, practice and time. Allen’s lengthy artistic process for ‘Théâtre D’opéra Spatial’ involved exploring a special prompt to create hundreds of images. “After many weeks of fine-tuning and curating my gems, I chose my top three and had them printed on canvas after upscaling with Gigapixel AI,” he said in a post on the chat forum Discord.

US-based professional illustrator Keith Rankin confesses to being “shocked” at the jump in quality enabled by the latest update to Midjourney, including its ability to accurately replicate existing human art. His experiments using the tool have included darkly evocative artworks inspired by the likes of Salvador Dali and René Magritte.

“Right now, I see AI as a tool for creating references or generating ideas, but that could change quickly,” he says, predicting a scenario in the near future whereby AI is used to automatically fill in the majority of frames in animation, or where artists are able to make entire films on their own, each frame rendered like a highly detailed painting. “Project that further and further into the future and the possibilities are even more overwhelming,” he adds.

If AI can function as a catalyst to accelerate certain artistic processes, it could equally carry out tasks normally reserved for human artists, with potentially damaging impacts for the profession. AI has already proved itself superior to people at many tasks, so why would the art world be any different?

Abhishek Gupta, founder and principal researcher at the Montreal AI Ethics Institute, says: “There is an obvious implication for the livelihood of artists, in particular those who rely heavily on funding their creative pursuits through commissioned art like book covers, illustrations, and graphic design. An erosion of avenues to commercial gain for their hard work is sure to have the twin effect of depressing existing artists’ financial means and discouraging newer artists who want to pursue the field as a full-time career.”

“One can’t help but feel like it’s a matter of time until our hard-earned skills are no longer needed,” adds Dan Eder, a senior video-game character artist working in California, who says many of his peers have expressed disdain for this new trend. Rather than resist change, however, Eder believes artists will have to adapt and “find ways to bring their unique talents to the table in a way that machines simply aren’t able to achieve”.

A super-human capacity for number crunching is one thing, but the way AI art platforms are trained to recognise and reproduce the works of real artists, living or dead, has triggered an angry backlash among designers.

Millions, and sometimes billions, of images are scraped from the web and other sources to train models to identify and replicate patterns in data, many of them made by people and copyright-protected in one way or another.

If essentially anyone can produce accurate imitations of ‘real’ art and companies can create and sell explicit knockoffs of living designers, it raises serious legal and moral questions.

The potential damage is greater now that developers have access to APIs to embed these art generators into apps and websites, paying a fee to the platform based on the number and size of images its customers produce.

“This type of AI isn’t just training to be an artist replacement; even more egregious, it wants to be your replacement using your own work,” says the CAA’s Ortiz. “When people call it the democratisation of art, I see it as bringing art theft to the masses. That’s a bold statement, but these new technologies are improving all the time.”

From a legal perspective, the jury is still out on whether these systems are capable of infringing copyright or if artists have any legal claim over the models, or the content they create. In the US, AI researchers, start-ups and tech giants typically claim image use is covered by ‘fair use’ doctrine, which aims to encourage the use of copyright-protected work to promote freedom of expression.

While that might hold true when training models using other people’s data, it might not hold if the generated content threatens the market for the original art. For example, instruct an AI trained on Damien Hirst’s art to ‘produce a Damien Hirst painting’, then sell the piece at auction and there’s an obvious intention to compete with the artist.

A first-of-its-kind class-action lawsuit against the AI system behind GitHub Copilot, a feature designed by Microsoft and OpenAI to help programmers write code faster, could provide some much-needed clarity on the future legal landscape for generative AI.

Beyond the legal quandaries, a starting point for artists who suspect they’ve been copied is finding out if their work has been used to train AIs. OpenAI has refused to share the image data DALL-E 2 was trained on, but Stable Diffusion’s code is open-source, and it shares details of the database of images used to train its model.

Expressive Colourful

The prompt for this ‘art’ was ‘engineering, technology, expressive, colourful, detailed, sky, satellites, space, buildings, artificial intelligence created this’

Image credit: MidJourney

AI Painted This. BAME.

For this hyperrealistic piece, the prompt was ‘AI painted this, BAME, engineering, technology, extreme detail’

Image credit: MidJourney

AI Created This Painting Technology

This image included the prompts ‘Artificial intelligence created this painting, technology, world, abstract, detailed’.

Image credit: MidJourney

In a push for greater transparency and control, the artist collective Spawning launched the website Have I Been Trained?, which allows artists to search some 5.8 billion images used to train models, including Stable Diffusion and Imagen.

Users can opt in or opt out of training, set permissions for how their style and likeness is used, and offer up their own training models to the public. Stability AI, the firm behind Stable Diffusion, says it is now using the tool for artist opt-in/opt-out, and also working with the Content Authenticity Initiative, which is trying to promote adoption of an open industry standard for content authenticity and provenance.

Nathan Lile, responsible for marketing and PR at Stability AI, says: “Our current stance is that transformer architecture [neural network architecture] learns first principles and does not replicate any of the training materials.”

E&T also contacted Midjourney and OpenAI for this feature.

Steenbrugge counts himself among those artists willing to ‘opt in’ and share his portfolio in exchange for the potential creative benefits on offer. “I feel like being against this is a bad strategy,” he says. “The big benefit is people can much more quickly iterate and make variations and remixes of other people’s works. Rather than staring blind at the copyright issues, the upsides are also really large.”

Fellow designer Rankin says improving transparency “is the right step” to see how AI training works and where images are being pulled from. The next move, he says, would be to credit or compensate artists in some way when an image draws from a specific piece. In the future, he envisages the introduction of “more curated data sets, or community data sets”, that acknowledge artists’ concerns.

Criticism of generative AI also extends to the potentially harmful nature of the content generated. A cursory browse through the Midjourney Discord channel, where art is produced, reveals a heavy user preference for images of scantily clad young women. That’s harmless enough, but what about the potential to use these platforms for more violent or abusive imagery, or deepfakes? Midjourney’s rules state that users should not “create images or use text prompts that are inherently disrespectful, aggressive, or otherwise abusive”, and a team of moderators vet content.

The way AI models are trained to make decisions and produce desired outputs can also cause them to reinforce or amplify societal prejudices and stereotypes, like racism, sexism and ableism.

Early tests by OpenAI and its team of external researchers found that DALL-E 2 leant towards generating images of white men by default and overly sexualised images of women, and that it reinforced racial stereotypes.

The company has since implemented new mitigation techniques designed to generate more diverse images. An internal evaluation found users were 12 times more likely to say images included people of diverse backgrounds, and further changes are in the pipeline.

As researchers, lawmakers and the creative industry continue to grapple with the complex implications of this fast-moving field, some are forming ideas of how platforms should evolve to become ‘fairer’ to artists, more ethical and produce safer output.  

Through a three-pronged approach, Gupta believes AI art platforms should provide strong disclaimers to users on potential copyright issues, and “adopt recourse mechanisms so that artists who would like to remove their work have a chance to do so”. He also believes they should invest in content moderation and safety teams to respond to user complaints and flags on any harmful content generated. Before a body of case law or regulations is established to guide the actions of developers and users, “co-developing norms and practices with the artistic community will be essential to maintain an ethical approach”, he adds.

Given the jaw-dropping potential of machine-generated art, even at this early stage, what can we expect in the near future – will human art become less interesting and fade into the background? What are the implications if it becomes impossible to tell one from the other?

Mhairi Aitken, an ethics research fellow specialising in digital innovation at the Alan Turing Institute, explains: “As these tools become ubiquitous and as their capabilities to produce realistic images become more advanced, it will become ever more difficult to accurately and reliably identify which images are ‘real’, and which are generated by AI. This leads to significant risks to democracy, both through the potential for fake images to be reported as real, and through increasing scepticism about the authenticity of real images.”

Much more than just a smart tool for spreading memes, generative AI raises the stakes for creativity and for society as a whole. 

AI art reviewed

By Dr Giles Hansen Sutherland

When I was approached to write this brief critique of artworks produced by AI, my first instinct was to say ‘no’. After all, what I know about AI, algorithms, foundation models and ever-increasing parameter sizes could be written on the back of a postage stamp. I felt out of my depth.

After all, wouldn’t it be like writing about painting while knowing nothing about the materials used to create it or the long and complex history surrounding it: the artists, the models, the inspiration, the patronage, the symbolism, the complex intertwining of politics, culture, and society; the codes and the conventions?

But curiosity got the better of me. After I made it clear to the commissioning editor that my area of expertise lay elsewhere, she generously agreed to an open brief. “Approach the work as if it was created by a human, be as critical as you like. But if you want to incorporate the AI element into your critique, that would be great,” she told me.

I’ve taken a bit of time to look at the images that were sent to me for comment. My first impression (a reaction I rely on a lot in my work as an art critic) was the expected predictability of the imagery. It was as if the technology being used to create the images was somehow implicit in the fields of reference the AI systems drew on in the first place. So, futurist AI tech being used to create futurist, super-realist sci-fi vistas of worlds within worlds; or half-machine, half-human replicants with impossibly well-proportioned, impossibly beautiful female faces. I’m not much of a fan of these genres, although Blade Runner is without doubt one of the best films ever made. And the Terminator series has many moments.

But let’s take a few steps back for a moment. When critiquing human-created visual art, I’m always acutely aware of the person behind it, even if they are dead. This person has or had sensitivities and feelings, I tell myself. I must be as kind and constructive as possible. But what are the parameters on commentary where no-one can be offended?

I’ve gone back to look again at these images and a second perusal often results in a more considered response. For a technology in its relative infancy these images are quite remarkable, given the fact that no human hand has been involved in the final stage of composing the image. They are complex, engaging and, inevitably, original, although the graphic origins are the result of billions or trillions of image-permutations (including those created by human artists) found online. 

These particular images were created using the AI at Discord (a text-to-image model), using prompts and keywords such as ‘engineering’, ‘brushstrokes’, ‘hyper-realistic’ and ‘AI’.

According to The Economist (which used an AI-generated image for its 7 June 2022 edition), this new generation of AIs is “adaptable in ways that earlier AIs were not, perhaps because at some level there is a similarity between the rules for manipulating symbols in disciplines as different as drawing, creative writing and computer programming.” Does this augur well for our creative future? As in every such debate, the ideas are nuanced and the outcomes unpredictable. Some view ever more sophisticated foundation models as a benign aid to human creativity, while others see them as an all-encompassing threat eclipsing what it means to be human.

One definition of what separates us from our closest mammalian relatives is our ability to create art, as every other dividing factor has been knocked down like skittles. Now even the apparently unique quality of human creativity has been challenged.

Show me some of the imagery from the next generation of AI, and I’m sure I will not be able to tell the difference between the output of a human biological neural network and one based on chip technology, working at trillions of computations per second.

Will such advances ever bring us another Shakespeare, James Joyce, Mozart or Michelangelo?

The banality of AI art

By JJ Charlesworth

The images that AI platforms produce so readily tell us, more than anything, about the human-made culture of images off which these machines so easily feed. Painting is an antique medium, but Midjourney doesn’t – if these examples are representative – know a great deal about it. It knows about photorealistic painting, of the kind used by high-end fantasy illustrators and artists, such as in BAME and Very detailed and intricate. It has learned something about the loose, flatly layered technique only really achievable in acrylic paint because (I suspect) this is a style learned and circulated by tens of thousands of amateur and trained illustrators and artists who promote their work online, including in the burgeoning NFT space. It has even learned to produce the kind of stroke achievable with knife-applied oil paints, as in engineering_technology_expressive_brushstrokes.

Are they any good? To answer that, we might have to think back through how we got to the kind of visual culture we have today. Twenty-first-century mass culture is unrivalled in its production of visual fantasies. Back in the 1980s, a teenager like me would have to go to a bookshop to look at the strange and fabulous imaginings of ‘fantasy artists’ – the weird, wonderful alien worlds of Roger Dean, or the lurid, hyperrealist ‘sword and sorcery’ fantasies of painters such as Boris Vallejo, or the epic sci-fi cover art of Chris Foss. But now this culture has exploded, through Photoshop and CGI, the internet, and the relentless growth in fantasy and science-fiction markets. There’s a vast ocean of visual fantasies out there, instantly available, when once it was the preserve of introspective teenagers and nerdy subculture obsessives.

That huge background resource of images – you could almost call it our visual ‘unconscious’ – underpins what we see in these Midjourney-produced images. But what we also find in them is the limit of that visual imagination. These images of cyborg women, for example, are recognisable ‘tropes’, endlessly repeated in sci-fi culture ever since Yul Brynner revealed his android innards in Westworld, or Star Trek’s art department came up with the Borg. (Human-android hybrids always have one android eye, never both!) The clusters of screens and cables that colonise the head of the female figure in Art and Technology go all the way back to the freakish techno-horror of Shinya Tsukamoto’s 1989 cult classic Tetsuo: The Iron Man.

These tropes – both in terms of technique and of visual content – are ubiquitous and generic. They are competent and mediocre. But then – this is the irony – humans make art which is just as generic and repetitive. Imagination in art, human or otherwise, is the capacity to come up with something different, by reflecting on what already exists.

What’s the worst of the bunch? It’s lucky that you can’t hurt an AI’s feelings (not yet, anyway), because these splashy, colourful portraits of King Charles look like the paintings sold in those galleries you sometimes find in department stores. What’s dreadful is not the mimicry of brushstrokes or the rendering (at least he’s recognisable), it’s that there’s no realisation of how absurd the monarch looks; his crown tilted precariously, his features coloured as if his grandchildren had just got him with the facepainting kit. AI can clearly learn what painting should look like. But it will still need humans to teach it to look, think and (just maybe) learn to judge the good from the bad.


Expressive Brushstrokes


Image credit: MidJourney

King Of England Charles III


Image credit: MidJourney

Technology And Art


Image credit: MidJourney

World Of Engineering And Technology


Image credit: MidJourney

Technology And Art


Image credit: MidJourney

Art And AI


Image credit: MidJourney

Very Detailed And Intricate


Image credit: MidJourney

Expressive Brushstrokes


Image credit: MidJourney

A critic’s view

By Charlotte Mullins

A world within a world; a man with glass eyes; cerise clouds above a factory – looking at these images I feel as if I am judging a lacklustre school art display. There’s a degree of competence in the rendering of people and buildings, and in the application of light and shade, but the artist trips themself up each time with a melting face or wonky nose. Perhaps this is the nub of the problem, because there isn’t an artist at all behind these images but an AI program called Midjourney.

Digital art is nothing new. Artists in the 1960s experimented with computer algorithms and created what is now known as generative art. What was once a collaboration between artist and machine has now become the machine ‘thinking’ and ‘painting’ for itself. If the greatest artists show us deep truths and speak to our emotions, can AI replicate this? Will programs like Midjourney make artists redundant? Looking at this crop of images – generated by the commissioning editor by typing in a few directional keywords – the answer is an emphatic ‘no’.

The least successful are the images that try too hard to look like gestural paintings. The ‘brushstrokes’ of the turbulent sky above the factory [1] do not flow and the light is all over the place. If the Sun is setting why is the top of the cloud white? As for the image of King Charles III [2], he seems to have taken on the grey curls of his mother, so the best thing to do is gloss over it.

The images that look like illustrations for dystopian films have more going for them. I like the unexpected feathery nature of the propeller [3] and the light source accurately catches the gleam along the right edge of the machinery. The bulbous factory pumping out clouds of steam [4] could be a storyboard image for a new Blade Runner film.

Several images splice women with machine parts [5,6], but the diptych with gold coils of Tolkienesque tracery [6] offers absolutely nothing in terms of emotional connection. The old man with the clouded machine eyes [7] seems a better offering until we scrutinise the nose, when the realism falls apart.

Perhaps the most successful image is the world within a world [8]. You can imagine it on a T-shirt at Glastonbury or a poster for a Coldplay concert. The silhouette of a standing figure against a distant skyline with clouds forming a world around them at least has something you can get your teeth into. But this feels more infinite monkey theorem than a glimmer of what William Blake conjured in his poem ‘Auguries of Innocence’ as the ability to hold infinity in the palm of your hand.

None of these images come close to even a minor work by a professional artist. AI has yet to grasp art’s ability to conflate time and space, compress emotion and reveal hidden universal truths. The robotic artist Ai-Da hasn’t cracked this either. Perhaps we should heed what her namesake, the mathematician Ada Lovelace, said in 1843: ‘The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform.’

Each program may be trained by crunching through billions of images but the spark of creativity, of originality, is missing. For now.
