AI Music is Generative and Uncanny
In this interview, I discuss how artists are using AI, and how AI misfires could shape AI music.
AI is contributing to the history of music, just like synths, autotune, or the radio once did.
Musicians are experimenting…
With different formats: interactive installations and generative albums.
With different sounds: uncanny and multi-genre music.
This interview was originally published in Spanish by La Vanguardia.
What have been the biggest trends you've found in your research?
The dominant narrative tends to simplify things: you give an AI model a text, and it generates a song. But most artists already know how to make music—they're not looking for AI to do it for them. Instead, they see it as a new tool. They want to perform with it, like any other instrument. They want to create interactive artistic installations, both physical and virtual. Or explore generative music—compositions that can evolve and change based on the listener's interaction.
So AI is more of an added value to existing music practice?
Exactly. It’s being used to do things that weren’t possible before. What interests me most is that innovative dimension: not using AI to replicate what we already have, but to push the boundaries of art and create entirely new musical forms.
In your report, you compare this to the arrival of autotune and synthesizers.
Yes. We’re in a historical moment of transition, and it’s hard to draw final conclusions. But it’s not the first time we’ve faced similar challenges. For example, when radio began broadcasting recorded music instead of live performances, musicians’ unions and the industry pushed back. Yet in the long run, it did not necessarily hurt music—it transformed it. The same happened with synthesizers, and later with autotune. Today’s debates around AI echo those earlier moments. I’m not saying the outcome will be the same, but we’re definitely in the middle of a transition.
What are the most innovative ways musicians are using AI?
One of the most compelling trends is what I'd call an uncanny sound. These sounds feel almost familiar, but also strange. The surrealists explored this long ago—think of Dalí's melting clocks, inspired by Freud and the unconscious. AI can generate sonic textures that live in that ambiguous space, and many artists are drawn to that. It opens up a new aesthetic frontier.
We’re seeing something similar in image and video as well.
Absolutely. With visuals, it’s even more obvious—AI-generated faces with three eyes, hands with six fingers. These are mistakes, technically speaking, but they also define a new style. It’s reminiscent of how distorted electric guitars—originally a byproduct of technical limitations—helped birth rock music. Artists took what was meant to be an error and made it central to their sound. We might be witnessing something similar with AI today.
How do artists transform those “errors” into an aesthetic?
Some lean into the lack of human intention as a creative strategy. Take DADABOTS, for instance. They generate infinite death metal loops without human input—sometimes as conceptual critique, sometimes as humor. They're pushing the idea of removing the human from the process to its extreme, exploring what that means for creativity.
That reminds me of the lo-fi trend—a deliberate embrace of imperfections.
Exactly. Lo-fi music is designed to evoke a nostalgic, aged feel, often featuring vinyl crackles and a worn, vintage sound. Another example is the 8-bit sound of early video games. Those explore an aesthetic rooted in historical artifacts. In the same way, today’s uncanny AI sounds might one day be symbols of our time—potentially even nostalgic in the future.
And what about fake songs mimicking Bad Bunny or other popular artists? How legal is that?
Some platforms are starting to block that kind of content, and it’s unclear whether there’s sustained public interest in it. But culturally, it’s forcing us to ask new questions.
Of those questions, which do you find most interesting?
What fascinates me most is the emergence of a powerful underground scene. Artists are using AI not just to make music, but to reflect on society, technology, and culture—turning the very challenges AI introduces into material for that reflection. That's where I see the most potential—it's an incredibly exciting moment artistically.
Is this comparable to autotune’s rise in trap music? Does AI democratize music-making?
To some extent, yes. But it’s not as simple as just using a new tool. The first artist who used autotune in trap was a visionary. We’re in that early, exploratory stage now—figuring out what’s possible and what might eventually have real cultural impact.
At Sónar+D, you’ve seen many AI-music projects. Which stood out?
There have been many remarkable examples. GANTASMO’s work (see video below) uses uncanny aesthetics as a compositional tool. The Barcelona Supercomputing Center’s voice-mapping project can transform any voice into Maria Arnal’s. But many installations there were also exploring the boundaries of interactivity.
What do you think is the most promising use of AI in art?
Using AI not just as a tool, but as a medium. Like creating a generative song that the audience can interact with and influence in real time. That completely redefines the traditional relationship between artist and listener.
Where do you think this is all headed?
What I want to understand is whether AI can make a lasting cultural impact. I started researching this about a decade ago, when AI was mostly used to analyze music—like identifying instruments or genres. Then came generation: not just analyzing, but creating. We’ve now spent ten years developing these tools. What interests me most today is how AI will integrate into art and society—and whether it will truly reshape culture.
Disclaimer. The views expressed are my own and do not reflect the opinions or positions of my employer.
Image credits: "Mori Uncanny Valley" by Smurrayinchester is licensed under CC BY-SA 3.0.