Music as Software
Tero Parviainen on the web as a medium for generative music.
He brings generative music to life through interactive web experiments.
Through Counterpoint, the studio he runs with Samuel Diggins, he has collaborated with brands like Roland and Moog, and with AI music pioneers like YACHT, Holly Herndon, and Mat Dryhurst. This work has appeared at Ars Electronica, CTM Festival, and SFMOMA. Yet his name remains relatively unknown.
His online installations let you experience generative music concepts by recreating classic tools (tapes, loops, or delays) using JavaScript. Beyond the technical execution, a refined sense of design runs through everything he creates. His visuals are beautiful and inspiring.
From recreating iconic works by Reich, Riley, and Eno to reviving Laurie Spiegel’s Music Mouse online, his projects are a crash course in the history of generative music. These web-based sketches, as he calls them, are educational and fun. They allow you to explore complex concepts in a hands-on, interactive way.
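The tape-loop experiments mentioned above lend themselves to a few lines of JavaScript: two copies of the same loop played at slightly different durations drift out of phase and eventually realign, the idea behind Reich's phasing pieces. A back-of-the-envelope sketch; the durations are illustrative, not taken from any actual piece or from Parviainen's own code:

```javascript
// Two loops of the same material at slightly different durations drift out
// of phase and eventually realign. Durations are in seconds.

// Time until the shorter loop has gained exactly one full cycle on the longer.
function realignTime(shorter, longer) {
  return (shorter * longer) / (longer - shorter);
}

// Relative phase offset (0..1) between the two loops at time t.
function phaseOffset(shorter, longer, t) {
  const cyclesA = t / shorter;
  const cyclesB = t / longer;
  return (cyclesA - cyclesB) % 1;
}

// A 1.00s loop against a 1.01s loop: maximally out of phase halfway through,
// back in sync after realignTime(1.0, 1.01) seconds.
const t = realignTime(1.0, 1.01);
```

With a 1% difference in loop length, the cycle takes 101 seconds, which is why these pieces unfold so gradually.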
After following his work for many years, I finally have the pleasure of interviewing Tero Parviainen.
Q: Most of your work lives online as interactive websites rather than as static tracks on streaming platforms. Could you share how your background led you to this intersection of software and sound, and how you define your artistic practice today?
I have a software engineering background. I spent the early stages of my professional life as a generalist software consultant, before deciding about a decade ago that I needed a change. I felt a strong pull towards music and the arts more generally, so that’s where I went. It wasn’t really a career change as much as a decisive concentration of all my focus and energy on this one thing: applying the software skills I’d picked up to an area of personal interest.
Because of this path, the center of gravity for my creative identity is as a software maker, more so than a musician or an artist. Software is my primary craft. Projects tend to start as ideas of interactions, systems, procedures, and protocols, and the music then springs from that initial conception. The natural end result of this kind of process is rarely a static track; it’s usually something adaptive, reactive, or simply open and unbounded by time. A lot of the time it’s a web app, as I find the web to be the optimal medium for casual interactive music experiences. It’s open, standardised, cross-platform, and free, and it has very capable primitives for both visuals and audio.
Q: Could you walk us through one specific project? From the initial idea to the technical and musical decisions that shaped the final result?
The Digital Electronium was our first collaboration with Yuri Suzuki, back in 2019. He had an idea about building a new version of The Electronium, a mythical generative music machine from the 1960s, conceived and made by Raymond Scott. This was a machine Scott worked on for decades, including for some time at Motown Records as their head of electronic music.
Yuri had secured permission from the Raymond Scott estate, and received a pile of Scott’s sketches and documents from his friend Mark Mothersbaugh, who also owns the only (semi-)functional original Electronium machine today. We proceeded to study all that material, to learn how the original machine might have worked.
Yuri didn’t want the digital version to be an exact copy, but to make use of modern day affordances where it made sense and honoured the spirit of the original machine. What he did not want to do, however, was strip back and simplify. A big part of the Electronium’s aesthetic as a piece of kit is that it looks like an airplane cockpit. It has hundreds of knobs, switches, and buttons arranged in multiple panels. We tried to reproduce this rather faithfully as a UI, spread over three large interconnected touchscreen displays. This was the main design constraint on the audio backend: all these hundreds of controls had to do something meaningful, as this was designed to be poked at and enjoyed by casual users - visitors to an exhibition at the Barbican where it was to be presented.
The audio system powering that UI runs in Ableton Live. Within it we have a bunch of Max patches, as Max for Live devices. They communicate with the touchscreens over OSC, doing control flow and generative patterning. Audio DSP is synths and effects in Ableton - some stock, some plugins.
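The OSC link between the touchscreens and the Max patches is a simple binary protocol. A minimal sketch, in JavaScript, of what such a control message looks like on the wire per the OSC 1.0 specification; the address pattern and value here are made up for illustration, not taken from the Electronium's actual message scheme:

```javascript
// Minimal OSC 1.0 message encoder (floats and strings only).
// Address pattern and arguments are illustrative.

// OSC strings are null-terminated and padded with nulls to a multiple of 4 bytes.
function padString(s) {
  const len = Buffer.byteLength(s, 'ascii') + 1;   // +1 for the terminator
  const buf = Buffer.alloc(Math.ceil(len / 4) * 4); // zero-filled
  buf.write(s, 0, 'ascii');
  return buf;
}

function encodeOscMessage(address, args) {
  // The type tag string starts with ',' followed by one letter per argument:
  // 'f' for a 32-bit float, 's' for a string.
  const tags = ',' + args.map(a => (typeof a === 'number' ? 'f' : 's')).join('');
  const parts = [padString(address), padString(tags)];
  for (const a of args) {
    if (typeof a === 'number') {
      const b = Buffer.alloc(4);
      b.writeFloatBE(a, 0);                         // floats are big-endian
      parts.push(b);
    } else {
      parts.push(padString(a));
    }
  }
  return Buffer.concat(parts);
}

// e.g. a knob on one of the touchscreen panels sending a normalised value:
const msg = encodeOscMessage('/panel/3/knob/17', [0.42]);
```

In practice one would hand this buffer to a UDP socket (or use a library like node-osc); the point is just that each of those hundreds of controls maps to an address pattern plus a value.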
So most of the generative stuff in there is what you might call good old-fashioned generative: algorithms in Max. But there’s also a bit of generative AI in there too. Scott’s machine had a “counterpoint” section, whose purpose was to generate automatic contrapuntal accompaniment to whatever patterns you had punched in. Our version of this was powered by Coconet, an open model by our friends at Google Magenta, which is a convnet trained to inpaint four-part harmony. So whenever you engage the counterpoint section, we run a Coconet inference to generate a unique harmonisation.
Q: Your work also reflects a strong interest in wellbeing music and the history of generative music.
Yes, a good chunk of the work we’ve done at Counterpoint has been around making music systems in the service of wellbeing: psychedelic therapy, sleep, spa treatments, exercise. This is a natural fit for adaptive music systems, because the music is there to facilitate potentially unpredictable situations and may need to readjust in the moment. Software can do that. Working in these contexts also often feels prosocial, because you’re getting to contribute to embodied practices that help people regulate their nervous systems in an increasingly mad world. It’s music as medicine.
The interest in generative music history has been about self-education, first and foremost. When I made the decision to concentrate on music ten years ago, my strategy was to try and learn from the masters. I wanted to try and figure out what they had done, and how, and why. Studying the work of people like Brian Eno, Laurie Spiegel, and Terry Riley helped me understand how to think about music as systems, which gave me an angle that made sense to me. We’ve continued this in our projects with Yuri Suzuki, which have given us opportunities to explore Raymond Scott’s Electronium, as well as classic music tech by Roland and Moog.
Q: Finally, like many other OG musicians in our community, you also spent time working in the NFT space. How do you reflect on that period now?
Yeah, that was an interesting time. It’s kind of receded into the distance with the whole pandemic fever dream. But for a while there, there was a lot of generative art happening on blockchains. A lot of artists were able to make a living out of it outside of institutional support. A few got very rich, but many more were just able to support themselves. That was a bit of an anomaly. We’ve done a few pieces too, with people like Boreta, Aaron Penne, Bright Moments, and Amon Tobin.
The scene is still going strong, but now it’s become more of a true believer’s affair. There’s a lot less money going around, and the upside of that is a lot of the scamminess and speculation has washed away too. I remain cautiously bullish on crypto and blockchains in general. There’s a lot of deep thinking on decentralisation and distributed trust happening in that world, and that seems like something we should be thinking about in a world where it’s increasingly clear there are no adults in charge.
Q: NFTs are an interesting concept when tied to generative digital art, as they represent a unique, fixed snapshot of a system that is otherwise dynamic and changing.
It seems like an appropriate medium for code-based art, for sure. In communities like Art Blocks the model is that as an artist, you make an algorithm that’s capable of producing many iterations of an artwork. Sometimes an infinite number of them. Then for everyone who buys one, the algorithm is run with a different random seed, so everyone gets to own their unique piece that no one has seen before. Tyler Hobbs coined the phrase “long-form generative art” for this kind of thing.
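The seed-per-buyer model described above can be sketched in a few lines of JavaScript. The seed values and output parameters below are invented for illustration, not Art Blocks' actual interface; the PRNG is mulberry32, a small deterministic generator often used in generative art:

```javascript
// One algorithm, many outputs: each buyer's token seed deterministically
// selects a unique iteration of the piece. Parameters are illustrative.

// mulberry32: a tiny seeded PRNG returning values in [0, 1).
function mulberry32(seed) {
  return function () {
    seed = (seed + 0x6d2b79f5) | 0;
    let t = Math.imul(seed ^ (seed >>> 15), seed | 1);
    t = (t + Math.imul(t ^ (t >>> 7), t | 61)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

// Same seed in, same piece out -- the artwork is the algorithm plus the seed.
function generatePiece(tokenSeed) {
  const rand = mulberry32(tokenSeed);
  return {
    palette: ['dawn', 'dusk', 'neon'][Math.floor(rand() * 3)],
    density: rand(),          // e.g. how many strokes to draw
    tempo: 40 + rand() * 80   // e.g. BPM for an audiovisual piece
  };
}

// Two different owners get two different -- but reproducible -- pieces:
const a = generatePiece(0xdeadbeef);
const b = generatePiece(0xcafebabe);
```

The on-chain version of this idea stores the algorithm itself, so any browser can regenerate the owner's exact iteration from the token's seed at any time.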
The Rituals collection we worked on for Aaron Penne and Boreta is an example of this. It’s a slow, meditative, audiovisual piece with 1000 versions owned by different people. When it was launched, there was a special event in LA where you could get one, and it was revealed to you as you sat alone in a room with a big screen and sound system. But beyond that, it lives forever as code on the Ethereum blockchain and runs in a web browser.
Disclaimer. The views expressed in this content are those of Tero Parviainen and do not reflect my own opinions or those of my employer. Only the introduction was written by me.



