Saturday, March 01, 2025

A Thousand Voices Ask - 'Is This What We Want?'

Artificial Intelligence is in the news on an almost daily basis, and a challenging new album release could mark the beginning of a counter-revolution. More than a thousand British artists have contributed to an unusual work of art which, through silence, screams a unified message: that AI, driven only by corporate profit, presents an existential threat to the creative arts.

The human race has a long history of creating machines to do our work for us. From the wheel to the device that you’re using to read these words, every technological advancement has enabled us to reach further, to move faster, to achieve more.

Until recently, the greatest limitation of machines was their innate lack of creativity. A digital synthesiser could produce any sound and emulate any instrument, but only when programmed by a human. Visual arts could be copied but not conceived. A living, thinking, feeling person was always in the driving seat.

There’s no doubt that technology has improved our quality of life but is AI just another technology, or is something different happening? To better understand, let’s take a look at what AI actually is, how it’s built and how it works.

AI is artificial in that it is completely man-made, and intelligent in that it can perform mental tasks akin to human reasoning. Task-specific AI, able to do one job very well, has been around for a long time, for example in customer service chatbots and the recommendation features of Spotify, Netflix and Amazon.

Straight out of the box, an AI can’t do anything. It’s like a newborn baby with a brain packed full of neurons which aren’t yet connected to anything. An AI learns through external stimuli and how it achieves its objectives is mysterious, sometimes radically innovative. This doesn’t make the AI intelligent in the human sense, only lucky. Human input is still required to guide the AI towards the desired result.
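
To make that idea concrete, here is a deliberately tiny, purely illustrative sketch in Python (not drawn from any real product) of a single artificial ‘neuron’. Its connections start as random noise, and repeated exposure to human-chosen examples nudges them towards the result a human wants:

```python
# A toy "neuron" learning to turn inputs into a desired output.
# Purely illustrative: real AI systems use millions of these connections.
import random

# The "newborn brain": connection strengths start as meaningless random numbers.
weights = [random.uniform(-1, 1) for _ in range(2)]
bias = random.uniform(-1, 1)

# External stimuli chosen by a human: inputs and the result we want (a simple AND rule).
examples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

learning_rate = 0.1
for _ in range(1000):                          # repeated exposure to the examples
    for inputs, target in examples:
        output = bias + sum(w * x for w, x in zip(weights, inputs))
        prediction = 1 if output > 0 else 0
        error = target - prediction            # how far from the human-defined goal?
        # Nudge each connection towards the desired result.
        weights = [w + learning_rate * error * x for w, x in zip(weights, inputs)]
        bias += learning_rate * error

print(weights, bias)  # the learned "connections" now encode the rule
```

Scale that loop up to billions of connections and examples and you have the essence of modern machine learning: the machine finds its own route, but a human still defines the destination.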

The goal of AI research is to create ‘general intelligence’ which can reason and imagine like a human. Our current stage of evolution towards this is ‘Large Language Models’, or LLMs, which learn by absorbing huge volumes of content from the internet. LLMs can produce a facsimile of a conversation, but they are still limited by the rules of their creator, as has been reported recently with China’s DeepSeek.
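
The underlying trick can be shown with a toy example. The sketch below is illustrative only; real LLMs learn from billions of documents and predict fragments of words rather than whole words. It ‘absorbs’ a single sentence and then produces a facsimile of it, one plausible next word at a time:

```python
# A toy next-word predictor: the same idea as an LLM, shrunk to a few lines.
import random
from collections import defaultdict

corpus = "the band played and the crowd sang and the band played on".split()

# "Training": count which word tends to follow which.
following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

# "Generation": keep picking a plausible next word, one at a time.
word = "the"
output = [word]
for _ in range(8):
    word = random.choice(following.get(word, corpus))
    output.append(word)

print(" ".join(output))  # a facsimile of the text it absorbed, nothing more
```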

You might have used tools such as ChatGPT to write your essays, reports, presentations and homework assignments. The education world is torn: on one hand trying to work out how to stop students using AI, and on the other hand enabling teachers to use AI to set and grade assignments.

What happens when we train an AI not just with text data but with images? We get a machine such as DALL-E, Craiyon or Canva. These models have been fed millions of artworks and can produce results in any conceivable style, leading artists to wonder whether their own work had been consumed and, if so, what compensation was due. People began to push back against the likes of Adobe, whose cloud-based software had a clause in the small print suggesting that it could use all of your creations to train its AI models.

Many photographers make a living through stock imagery. An editor who wants a particular photo would trawl through the stock libraries for something suitable and then pay for its use. In less time than it takes to look through the first page of stock images, that editor could type “a young executive pointing at a spreadsheet and laughing” into an AI art engine and produce a unique image with no copyright restrictions. Typing that prompt into the Craiyon engine produces the following selection of images.
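
For a sense of how little effort this takes for anyone comfortable with a few lines of code, here is a hedged sketch using OpenAI’s Python SDK; the SDK, model name and account setup are assumptions for illustration, and in practice Craiyon’s web page needs no code at all:

```python
# Illustrative only: assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# The same prompt an editor might once have searched a stock library for.
result = client.images.generate(
    model="dall-e-2",    # model choice is an assumption for this sketch
    prompt="a young executive pointing at a spreadsheet and laughing",
    n=4,                 # ask for a small selection to choose from
    size="512x512",
)

for image in result.data:
    print(image.url)     # each URL points to a freshly generated image
```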


You might notice a flaw that creeps into AI models: they exhibit the same biases as the people who trained them.

Visual art was an easy, quick win for AI. Another market, however, was difficult for AI to break into until OpenAI launched Sky, a voice for its chatbot that sounded so much like Scarlett Johansson that many people couldn’t tell the difference.

Creators are joining the war on both sides. Artists such as Grimes, Charli XCX, Charlie Puth, Troye Sivan and Sia are allowing the use of their voices. Others are pushing back, showing us a future in which there is no longer a place for human creators.

The new album release, ‘Is This What We Want?’, coincides with the closing of a UK government consultation on changes to copyright law, in which a waiver for AI firms is the preferred option. This change in the law could make it easier for AI companies to train their models without licensing the training data.

The album, released on 25th February 2025, is a collection of recordings of empty studios and performance spaces, representing a possible impact on the music industry. AI generated music could easily be used in films, advertising, social media, corporate presentations and more, taking away the livelihood of musicians and resulting in fewer artists and restricted choice for the consumers of art. In its rush to be perceived as supporting innovation in AI, the British government risks causing more problems for the creative industries.

The list of artists who have supported this work is simply too long to reproduce here, but it includes Annie Lennox, Billy Ocean, Ian Broudie, Imogen Heap, Jamiroquai, Jimmy Somerville, Julian Lloyd Webber, Kate Bush, Kim Wilde, Martyn Ware, Mystery Jets, New Order, Pet Shop Boys, Robert Fripp, Sam Fender, Scouting For Girls, Simon Le Bon, Tasmin Archer, The Clash, Tori Amos, Toyah Willcox, Yard Act and Yusuf / Cat Stevens. The track listing spells out a clear message: “The British government must not legalise music theft to benefit AI companies.”

A UK government statement reads: “As it stands, the UK’s current regime for copyright and AI is holding back the creative industries, media and AI sector from realising their full potential – and that cannot continue. That’s why we have been consulting on a new approach that protects the interests of both AI developers and rights holders and delivers a solution which allows both to thrive.”

Is it possible to reach a compromise that artists, AI providers and consumers will all be happy with? If history can teach us anything about the relentless march of innovation, then there is only one time to act, and that time is now. Supporting both established and new artists and live music venues is more important than ever. AI might be able to generate a facsimile of a voice or create royalty-free ‘music’ for a video, but it’s unlikely to replace the visceral, thrilling, shared experience of a live gig in an intimate venue or the grand scale of a performance at a major concert hall.


One thing is certain: if the fears of the artists who created ‘Is This What We Want?’ are realised, then we will all regret the day we chose to sit by and do nothing.

More information about the album and the artists involved can be found at their website, and all profits from the album are being donated to the charity Help Musicians.


Peter Freeth

@genius.photo.pf

Images: 'Is This What We Want', craiyon.com, Peter Freeth


