Why Artists Don’t Use AI
On AI narratives and why artists don’t consider AI for their work.
What happens when AI ethics can’t compete in the marketplace?
AI experts can’t stop talking about copyright and AI ethics, arguing for the importance of licensing training data.

But ethical alternatives only matter if the AI solution actually works. Right?
So far, the best (and most used) models have been trained on “publicly available data” under the umbrella of “fair use”. For example:
Suno is accused of using copyrighted music to train its models.
Sora is trained on publicly available content, licensed content, and content generated by human reviewers.
Competing is challenging for those training on licensed data, as it is often scarce and typically sourced from stock libraries (rather than commercial music or films).
This can limit both the diversity and the quality of the generated output.
How much can ethical options fall short of consumers’ expectations before they become irrelevant?
Further, AI is increasingly influencing our everyday lives through products in the marketplace (like chatbots or AI software for creatives).
Accordingly… small startups, large corporations, and (even) artists are all subject to market forces, which affect each of these stakeholders in different ways.
Several initiatives (such as public regulation or private certifications) are being introduced to prevent abuse.
In this post, I explore the narratives and power dynamics behind why many artists still hesitate to embrace AI.
Why don’t most artists even consider AI for their work?
Finally, in recent weeks, we’ve seen major moves in the music industry:
Spotify partners with major labels to develop AI music products.
Udio teams up with Universal Music Group to create an AI streaming platform.
Stability AI and Universal Music Group partner to develop AI music creation tools.
Hence, major labels are now actively developing AI solutions—an involvement that could reshape the industry and redefine the narratives around AI art.
And, most importantly, this marks a new era, opening the possibility of training high-quality models on licensed commercial music.
AI models are no longer constrained to the style of stock libraries.
Public Regulation
The private development of AI through the free market carries the risk that leading companies determine, de facto, the very definition and nature of AI. The economic incentives are huge.
Public investment in AI could lead to centralized systems that governments may use for reelection or social control.
In both scenarios, it seems reasonable to establish mechanisms of democratic control over such concentrations of economic and political power.
Will this kill innovation? Perhaps. But maybe that’s exactly the point. Everyone calls it a bug—but for regulators, it’s a feature.
Be aware of your bubble.
Private Certifications
Social trust is essential for AI adoption, and building it demands a strong ethical approach.
AI is currently being rolled out through the free market, where intervention is challenging.
If it’s difficult to agree on new mechanisms of democratic control, why not promote certifications around ethics and AI?
Note the parallel with buying cage-free or free-range eggs.
Rather than adding new restrictions, bring in new incentives.
The goal of AI certifications is to add value to products developed following ethical principles.
Ethically sourced, grassfed, organic AI music eh? — Kuraido (@MailliwNos).
The problem is that consumers may place limited value on certifications, prioritizing instead AI solutions that are effective and reliable.
Narratives
Marketing efforts are also contributing to the social definition of AI.
In a context where AI technology is still underdeveloped and society is unprepared for its critical adoption, leading companies are pushing their (optimistic) vision of AI to the world.
This (legitimate) push to increase their share prices is creating hype.
And fear.
Because a (pessimistic) narrative of concern has also emerged, often fueled by media and publishers seeking clicks.
This surprisingly well-coordinated combo is actually driving people apart.
Power Asymmetries
The current AI market is full of power asymmetries, which influence how artists perceive AI-generated works.
Example 1: Can your artist friend build an AI-based installation?
No, because she does not even understand the building blocks of this technology, and she is concerned it could threaten her work.
Artists often face pushback for using AI, as it is still not widely accepted in many creative circles.
Example 2: Can you train an AI model based on your artworks?
Yes, if you know how to code, can read scientific papers, and can afford expensive hardware.
Only a few people can imagine and develop new AI systems.
And those few have the capacity to (ab)use their dominance to define what AI (art) means.
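To ground Example 2, here is a minimal sketch of what “training a model on your own artworks” can involve. It is illustrative only: it assumes PyTorch and torchvision, a hypothetical local folder of images (./my_artworks), and it fine-tunes a small pretrained classifier rather than a generative model (which would require far more code, data, and hardware).

```python
# Illustrative sketch: fine-tuning a pretrained vision model on your own artworks.
# Assumes PyTorch + torchvision; paths, labels, and hyperparameters are hypothetical.
import torch
from torch import nn, optim
from torchvision import datasets, models, transforms

# Preprocessing expected by ImageNet-pretrained backbones.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Your artworks, organized as ./my_artworks/<label>/image.jpg (hypothetical layout).
dataset = datasets.ImageFolder("./my_artworks", transform=preprocess)
loader = torch.utils.data.DataLoader(dataset, batch_size=8, shuffle=True)

# Start from a pretrained backbone; replace the final layer to match your labels.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(dataset.classes))

criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=1e-4)

# A short fine-tuning loop; real training needs a GPU, validation, and more epochs.
model.train()
for epoch in range(3):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: last batch loss {loss.item():.3f}")
```

Even this toy example presumes fluency with Python, machine-learning tooling, and (for anything serious) a GPU. That is precisely the barrier most artists face.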
A New Era
Major music labels are now actively involved in shaping the development of AI.
Chronology of recent news…
2 October 2025: Financial Times reports upcoming AI licensing deals involving major labels.
16 October 2025: Spotify partners with major labels.
29 October 2025: Udio partners with Universal Music Group.
30 October 2025: Stability AI partners with Universal Music Group.
Interestingly, Spotify is a streaming platform.
And Udio is becoming a streaming service, as it has discontinued its download feature.
Stability AI, on the other hand, is focusing on professional music creation tools.
Based on these announcements, major labels appear focused on:
Using AI to engage (super)fans through (interactive AI) streaming services.
Enabling new forms of music creation (with AI) for artists.
Yet what comes next remains uncertain, especially as the Financial Times reported that major labels are also in talks with other companies.
Disclaimer. The views expressed are my own and do not reflect the opinions or positions of my employer.

