patten is the musical project of London-based multidisciplinary artist Damien Roach. He has made tour visuals for Caribou and graphic design for Daphni, exhibited visual artwork at the Venice Biennale and Tate Modern, and built a community around club night and online forum 555-5555, which now serves as a label and platform for his music.

His album Mirage FM was released in spring 2023. It extends his interest in new technologies, using text-to-audio AI systems to generate hours of sample material that form the basis of the record. Described by Roach as “crate digging in latent space”, Mirage FM asks fundamental questions about the nature of music production in the nascent age of AI.

The first part of this interview was conducted after the release of Mirage FM. In autumn 2023, patten returned with his second text-to-audio AI album, Deep Blue (the seventh patten album in the last three years), and the second part was conducted following its release.


The first thing that struck me about Mirage FM is how quick the whole process has been.

Yeah, it's true. I didn’t plan to make this album; it just sort of happened. I released four albums between autumn 2019 and the end of 2020, and they're all really different from each other. I think that it's been interesting for me over the past years to just do things on my own terms and step out of a slower, out-of-date system of thinking about what it is to have a practice. All of those antiquated processes are very tied to essentially change-averse institutions like the majority of record labels and commercial art galleries.

One fundamental way Mirage FM challenges that notion of practice is the use of text-to-audio AI systems like Riffusion to generate your source material.

I've been working with AI-assisted systems in my design, creative direction and visual artwork for a while, and have been thinking a lot about all the potentials they seem to offer up. Mirage FM was the first time I used AI in terms of sound and music. As soon as I saw Riffusion, I thought it was going to be everywhere, but that hasn’t really happened.

I was really struck by the potential of it as a sample source. I generated all of the audio that became the raw material for the album in December 2022 in about 36 hours without sleeping much, and then it was only a couple of weeks working on it non-stop before I'd done the record. Now looking back with a bit of distance I'm seeing how intense that was. It was really fast!

I was playing around with Riffusion before speaking to you. One of the hardest things I found was knowing what to type into the text box. I could have summoned anything but was immediately confronted by the limits of my imagination. How did you try to break out of the patterns that might have guided what you searched for to begin with?

I think that's the problem with search. The problem with search, or the text box, or the prompt, is how do you engineer a chance meeting between yourself and some new data that you didn’t know you were looking for, or that you have no idea you'd be interested in? How do you engineer serendipity within the space of search?

It reminds me of the early days of file sharing. The way peer-to-peer sites like Soulseek or Napster worked is that you'd find a user who had really good stuff and then go on a journey into the unknown via their collection. I remember searching for Stereolab or Tortoise or something, and then finding all this other stuff in that person’s archive.

Engineering this kind of moving without a fixed goal in sight is so important. Sometimes new tools or unknown processes can be an inspiring way to break out of your own habits and leave your usual pathways behind.

One thing I find interesting is that the prompts in text-to-audio and text-to-image AI are words. In these systems, language has become the primary creative tool once more.

You’re right, there's something quite weird about the main interface being words and language. I've been describing these prompts as being like spells or incantations. I'm really drawn to language as a tool, and the idea of 'magic words'. It's back to abracadabra and conjuring things up with language. Access is a thing here though. A lot of friends who don’t have English as their first language have felt pretty locked out because of the default to that one dialect. You have to see it as this very familiar set of barriers being replicated too.

You also used to work with collage. Is there an analogy between sampling, cutting and pasting, and working with text-to-audio materials as ways of creating new ways of thinking about things?

Yeah, definitely. I think the strange frictions between jammed-together materials have always had a big gravitational pull for me. It's like inside that friction there's the spark for something to flow out. More than new, just something that's emotionally or conceptually resonant. Surfing the friction.

It's funny, this reminds me of an old track title of mine - 'Words Collided' - obviously a play on 'worlds collided'. With Mirage FM I've really zoomed in on those collisions and frictions and that alchemic potential.

When we spoke in 2016, you were also talking a lot about hacking formats, and interrogating form as well as content. You often take a creative approach to the mode of delivery as much as what is being delivered.

100%! With Mirage FM I made videos for every single track on the album. There are 21 videos up on YouTube as its own thing. And with the Bandcamp download you get this ExpansionPack with bonus samples, totally different unreleased versions of the videos, HD images, text, a PDF zine… There are so many different ways of doing things… it’s just about pushing away from ‘the way things are’. There’s no real reason for a lot of these formulaic structures in releasing music and art, and maybe more importantly in how we do life. They’re just habits that have stuck, and a lot of it is well overdue being outright rejected, or at the very least, questioned.

You’ve played with club nights and the 555-5555 forum as modes of production too.

In some small way, what I’m trying to do with that club night, the forum and everything I put out is to point at paths not just for my own work, but ones that open something up for the way that things can happen out there in a wider sense. I'm trying to make frameworks that could enhance or extend what's possible. Poking at the edges of what's possible is how change happens, and the potential for change is worth fighting for.

Bringing this back into the AI conversation, has the concept of latent space influenced the way you think about disciplines interacting beyond the silos of art, design and music?

Yeah, I think the opportunity to hack the canon, or to rewire where things sit within a cultural topology, is important. With these GenAI tools like DALL-E, ChatGPT, and Stable Diffusion, it's like people have been given a key to directly access all of these modes of production. There's so much potential in making these very mutated forms that are hyper individualised. It's really an incredible moment in that sense. It's about utilising that and harnessing it to do whatever you want.

One of the reasons for making and putting this thing out was to talk about that. To say look, this is possible, what is it that we want to do with this stuff? It's a question and an invitation. Obviously just because there's something that a new system can do, it doesn't invalidate other ways of doing things. No way! It's just another space that we can explore.

How do you feel the record has been received? When the story becomes about the way something is made, it can draw attention away from the work itself.

It's a good point. An important part of apprehending the album is having a sense of what it is and how it's been made, and so I'm into wondering what it's like for somebody who bumps into it on a playlist and has no idea. What I hope is that there's a kind of essence in the material itself that transcends the technical processes that have gone into making it.

On the other hand, I've tried to put something together that would ask questions regardless of its specific context and in a practical sense mark out a territory for thinking through and experiencing those questions in a non-linguistic way. I was talking to someone recently and they said they heard UK garage music in a few places on the album where I hadn’t really perceived it. I feel like there are a bunch of access points woven into what the album 'is' materially - like in its flexibility and openness, as well as the videos - that hopefully allow it to have a life outside of having that knowledge of how and why it came to be.

You are also releasing a CD-R with extra material and developing a live show too. Speaking of formats, it’s like the record is no longer a fixed entity, but can feed off itself to keep living in the world.

Exactly. In the same way that the ethos behind the starting point of Mirage FM was looking at history, looking at all recorded music, looking at aesthetics as a material, a data set that can be drawn from to make completely new things, I'm trying to think about what happens next. Like you said, if the album is fixed, how can that process continue? I'm really into the idea of refashioning, remodelling and not being reverent to the idea of a fixed artefact, but instead forcing things to stay liquid and full of potential.


In the spirit of liquidity, we revisited this conversation with patten later in the year, following the release of his second AI-assisted album, Deep Blue. The following questions build on the initial interview with another six months of experience to draw from.


How has the potential of text-to-audio technology evolved since you started working on Mirage FM in December 2022?

It’s evolving all the time in technical terms, but the potential remains the same - the closing gap between a person’s ideas and their realisation, and the possibility to radically hack established systems of thought, meaning, and categorisation.

Has this informed the way you approached making Deep Blue? What did you do differently?

The funny thing is that I haven’t actually had any opportunities to work with these tools outside of making these projects. It’s been a crazy year. I made some music videos and the live show visuals for Jayda G’s album and summer tour. I made videos for Daphni. Exhibitions under my own name. And I started a PhD in AI, perception, and creativity a few weeks back. There’s always a lot going on, so I have to do things in really focused bursts.

I made all the material for Mirage FM in three days and then spent a few weeks making the album from that material. The next chance I got to dig in on the tools I had a particular idea about using text-to-audio AI in a really ‘human’ way. I was drawn to jazz as a framework for this because it’s all about human expression articulated through deviations from expectation. Thinking about how AI could be used to make really expressive music by these quite trad indicators was something I felt really pulled towards. It’s not Riffusion this time. I’m using more customised notebooks at the moment. There’s a million other things I’d like to try out soon. There’s actually a whole other album I made after finishing Deep Blue that I debuted live at an AI symposium at the BEK in Bergen, Norway a few weeks ago - an AV thing called ‘AIR’.

Have you seen any change in the way music produced with AI tools has been received by the industry (and listeners) in the last year?

I’m just shocked there’s not more widely released exploration in this area. The fact that I’ve made the first two text-to-audio AI albums ever in short breaks between doing a bunch of other stuff, separated by half a year, is really surprising to me. It's been nearly a year now and it's like 'where are all the text-to-audio AI albums?'

In our first interview (above) you said: "One of the reasons for making and putting this thing out was ... to say look, this is possible, what is it that we want to do with this stuff?" What have you learnt from doing this and what do you consider to be the most exciting possibilities for musicians in this field moving forwards?

I guess there are lots of speeds in play all at once. The tech is moving fast, but the wider landscape of making is maybe a bit slower. This will definitely change once these systems get integrated into existing DAW environments. But personally, I’m set on digging a lot further into the strange portal of possibility these tools have opened up. It’s been a wild ride so far, and there’s lots more to come very soon.