This is the second part of my interview with Ezra Buchla. If you haven’t already, you can read part 1 here.
At first I only wanted to ask him about his experience with the AE Modular system which he had ordered a few months ago. But as we were talking we naturally touched on some more general topics about the current commercial landscape of modular synths and about making and appreciating music in general.
As a DSP programmer by trade, in which direction would you like to see DSP-based modular synth modules evolve in the future? Are there any exciting projects or developments you’re following closely or looking forward to?
To be honest, I have mixed feelings about the proliferation of small computers in modular synthesizers (and elsewhere). On the one hand, it's pretty awesome that embedded computing is so accessible right now. On the other hand, it seems like a strange allocation of resources to have a Eurorack system with 20 computers in it or something, all of which are similarly spec'd but with different panel art and firmware. Not every problem needs to be solved by throwing a 100 MHz processor at it. :)
My tendency is to want my digital devices to be maximally flexible and reusable. There's a huge number of used smartphones and laptops in the world, available basically for free...
So I like the fact that there are open-source digital modules and that there is a large, active community engaged in modifying them (e.g., Mutable Instruments). Any digital module that isn't open-sourced seems weird to me.
My long association with monome.org speaks to our shared design values with regard to functional minimalism and openness. Last year they released "norns," which I was instrumental in designing and implementing; it is their take on the portable, Linux-based desktop sound processor. The upcoming "crow" will be a complementary unit for interfacing with analog hardware.
At NAMM 2019 we’ve seen some big companies getting into the modular world. The Korg Volca Modular in particular came as a big surprise, and it introduces the “masses” to west coast synthesis. How do you feel about this “east coast vs. west coast” synthesis divide, and do you even think about it when you are making music?
For what it's worth, I think it's a silly marketing conceit, and always was. Neither Don nor Bob appreciated the terms. Just say what you mean! (Sequenced, generative, keyboard-based, FM, additive, subtractive, whatever.)

I chose to disengage from the music tech industry in my daily life a couple of years ago and am now working in the hearing industry, which is a lot more fulfilling to me. I’m not sure I like the “toy” aspect of where things are heading, and especially how many little computers are now being used and sitting in your rack. You know, there’s a $25 digital board behind a nice aluminium panel, and it becomes this collector or fashion item that is really marketed towards a certain class of affluent people who seem to mostly just collect these things. These are just not my values, and especially with these digital boards you’re engaging with a global supply chain that artificially deflates the cost of the components. That’s just not great for the world. We are now living in a world where people in developed countries throw away their phone every three years. That’s why I like the purely analog designs, which have much less impact. I would like to do more with fewer computers.
I can see how true that is. There are so many new Eurorack modules coming out all the time, and everyone has to have them. And then on YouTube you don’t really find many people who actually do interesting or even musical things with them.
Yeah, and I don’t really consider myself a musician nowadays, although I did for the ten years when that was my job. Nowadays I don’t feel like releasing electronic music because it feels like a very saturated world. I much prefer to do live performances.
I think it’s really cool, too, that the synthesizer market empowers people to create their own sounds. That’s what this is really about: you can create and compose anything you like for your own enjoyment, in real time, with these machines, and that’s the magic of them. At the same time I think it’s a poor fit with the traditional recording industry and its record release cycles. So it’s not appealing to me, for instance, to release ambient synthesis music as a professional package, because to me it feels unnecessary. It just feels strange to say, here is a recording made by a professional and you should value it more than other recordings made by non-professionals, while at the same time saying, here is a machine that you can buy to make your own ambient music that sounds just like it [was made by a professional]. It is a strange and ego-driven world where you’re trying to have both. And people are pushing their own creations out there at this breakneck pace that, to me, discourages reflection and listening with presence. That is what I appreciate most in some aspects of ambient or minimalist music: that it’s ephemeral, that it exists in time and is a conscious experience of time and not just the clicking of the next YouTube link.
That’s interesting that you talk about your love for ambient and minimalist music. Where does this interest come from, and what are your main musical influences?
The term “ambient” is a strange one, and most of what is called ambient music I don’t care for; it’s too tame for me. But I like minimalism as a compositional philosophy, and ambient music in the sense of Satie’s music or Feldman’s music, which is about really experiencing duration. That’s what I grew up listening to. Probably my most formative experiences were the many John Cage and Feldman performances, and Éliane Radigue and Steve Reich, which I experienced as a young person. Those were very formative for me in terms of how to approach process-driven music, because the interesting thing synthesizers can do is create autonomous musical processes that provide a different way of accessing the same sort of compositional space that Feldman or Steve Reich were addressing. I was enrolled in a conservatory as a string performance major and composition major, and whether it’s acoustic instruments or electronic ones, those were always the compositional frameworks that I was interested in, and they still interest me.
But I also played a lot in bands, and I’m very influenced by punk music and metal and all kinds of stuff. In those frameworks the synthesizer has a different role, which is to make a sonic object that is impactful and that is maybe hard to make with other tools. For instance, I used to tour with a band where I was mostly singing, and I also played this tiny modular that was basically just a triangle core and some feedback - a dual-oscillator setup that produced really aggressive sounds which were very physical when you cranked it up in a big room. So I would turn it on for like a second - [vroom] - and that’s what it did.
But these are all just tools like a cello is a tool and you can play Feldman or Xenakis with it [laughs].
So probably many people who are getting into modular synthesizers nowadays aren’t composition majors or may not have much musical education at all (myself included!). What advice would you give these people? What could they study or experiment with?
I don’t know, really. I think you should just be aware of what your interests are and what kind of experiences you find enriching. That’s really the only advice I would give anyone, whether they’re a musician or not. One thing that I never understood is this desire, or the attitude that I see expressed a lot, of “How do I make this kind of music on my synthesizer?” or even “Which synthesizer do I have to buy to make this kind of music?”. This baffles me. I kind of understand it, but to me it misses the point. To me, music is a communicative medium, even if just to communicate with myself, to express through sound how I perceive the universe - kind of like a lens through which I see what’s happening right now. And then you can transfer that experience to other people, and that’s what I’m interested in. I never thought, “Oh, I want to make something that sounds like a Kraftwerk song.” I just never thought that. So maybe this is very cultural. The environment that I grew up in was very much this sort of experimental sound community in the Bay Area, and that music community was very much shaped by listening practices, maybe even more so than by sound production. If you look at the history of electronic music and how, for instance, Pauline Oliveros writes about music, she doesn’t write about how to produce a sound, she writes about how to listen to a sound. The same with John Cage and others of my role models: they were all more interested in listening than in sound production. And that’s what I still continue to do.
So my compositional advice is to just listen! But to do this in a really attentive way. This may sound obvious, but I think a lot of people aren’t listening in a critical way, you know? It’s genuinely difficult to really listen to what you’re doing. Extraordinarily difficult.
On the same topic, I recently met Sam Aaron, who created Sonic Pi, a programming language environment mainly for teaching kids music and programming at the same time. And I asked him, “How do you teach this to kids?” And he said, “You don’t. You just show them the command that plays a note at 440 Hz and then you tell them that this number can go up and it can go down. Then you add the command for a pause, and you’ve just given them the tools to play all of western classical music.”
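(In Sonic Pi those two primitives are `play` and `sleep`, with pitch given as a MIDI note number. The link between that number and a frequency in hertz is just the equal-temperament formula; here is a minimal Python sketch of it - the function names are my own, not Sonic Pi’s API:)

```python
import math

def midi_to_hz(note):
    """Equal-tempered frequency for a MIDI note number (A4 = note 69 = 440 Hz)."""
    return 440.0 * 2 ** ((note - 69) / 12)

def hz_to_midi(freq):
    """Inverse mapping: frequency in Hz to a (fractional) MIDI note number."""
    return 69 + 12 * math.log2(freq / 440.0)

# "The number can go up and it can go down":
print(midi_to_hz(69))            # 440.0 (A4)
print(midi_to_hz(81))            # 880.0 (one octave up)
print(round(hz_to_midi(440.0)))  # 69
```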
And that’s what it was like for me, too. I had a lot of music education, but the most educational single memory that I recall was actually sitting down at the piano with my dad as he asked me to play just one note. My dad had actually taken piano lessons from David Tudor, who was a great John Cage interpreter and an electronic composer in his own right. So these exercises were like, “Play and hold this one note, and then listen until you can’t hear it anymore.” And that was very impressive, because when you pay really close attention it’s very hard to say when you’re no longer hearing the sound on a piano. Another exercise was: how softly can you play the note? A lot of Feldman’s piano pieces have the direction “as softly as possible”. Softness isn’t just about volume, but also about touch - how softly can you touch the keys? Those things together give you an appreciation of the incredible sensitivity of human hearing, whose dynamic range spans about a factor of a million between the strongest and the weakest stimulus that you can perceive.
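(Read as a ratio of sound-pressure amplitudes - my assumption here - that factor of a million is the roughly 120 dB dynamic range usually quoted for human hearing:)

```python
import math

# A pressure-amplitude ratio of one million, expressed in decibels.
# Amplitude ratios use the 20*log10 convention (intensity would use 10*log10).
ratio = 1_000_000
db = 20 * math.log10(ratio)
print(db)  # 120.0
```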
So my music education was very non-traditional, and some of the pedagogical techniques that were most influential for me were not traditional ear training or music theory, but a practice rooted in deep listening and experimental performance. Another person I like a lot is W. A. Mathieu, who wrote a book called “Harmonic Experience” which has a lot of meditative training exercises, a lot of singing. This is how I would approach talking to kids about sound and music. I have taught violin and piano to young kids, but never synthesizers. Only once did I play an old Buchla with my nephew, who’s one and a half. He was just fascinated by the fact that you can turn a knob and the sound goes up and down. That was enough for him, and it was an almost magical experience.
Thank you Ezra for this very interesting chat!
If you haven’t already, please also check out and listen to this piece that Ezra performed with the AE Modular, his viola and tenor guitar:
Here are some links to composers and performers that Ezra talked about:
You can read part 1 of this interview here.
(Interviewer: Carsten Eckelmann, Skype interview from 10th February 2019)
Ezra Buchla is an accomplished violinist and musician who creates soundscapes using just his violin and a computer running the SuperCollider audio programming environment. He is the son of the late Don Buchla, who was instrumental in designing the Buchla music systems and through his work defined what some people call “West Coast Synthesis”. Ezra is also a founding member of The Mae Shi, an experimental rock band from Los Angeles.
We were surprised and very pleased when - out of the blue - we received an order for the AE Modular Standard Rack 1 from Ezra, so we wanted to find out what his thoughts are now that he has had some time to play with it.
The following is my transcript (accurate in sense, if not word for word) of some written communication and a subsequent Skype interview with Ezra. It is quite long, so we’ll release it in two parts.
Please also check out and listen to this piece that Ezra performed with the AE Modular, his viola and tenor guitar:
How did you find out about AE Modular and, given that you obviously have access to much more expensive gear, why did you choose to buy a system from us?
I saw it mentioned on the `lllllll.co` forum and was intrigued by the overall design, the motivations and some of the specific modules (like the NYLE filter), and the price was low enough to justify satisfying my curiosity. I’m not a big gear head and I mostly work with computers; that’s why I worked a lot with the monome.org project, mainly on the "norns" machine. But I was curious about the sound-producing components, especially the oscillators and the NYLE filter. I really love that filter, it’s totally bizarre: the resonance knob goes to self-oscillation at 12 o’clock, and you can bleed different inputs depending on the output conductance. When I contacted Robert I was impressed with his passion and the way he talks about his system. So I wanted to evaluate it for the analog sound sources, because I wanted to hear whether they have more character than just a clean sound. And it was just what I expected - really grungy - so I’ll probably run my viola and the guitar or voice through it just to give it some character.
Please be honest and let us know what you think about the AE Modular system. Would you use it for one of your performances?
Sure, I used it in a performance back in October, which went great.
The system I have is quite limited because I was mostly interested in checking out the oscillators and filters, and in routing audio through it. For example, I have no sequencing capabilities at all! I tend to want to do sort of strange things with musical event generation, which are best realized on a computer. So I have been intending to build my own 5 V programmable sequencer interface, but of course haven't had time.
I'm also not a big MIDI user... so at the moment, the system is basically a source of static textures / drones. This is totally fine with me, and I've spent enough time with it to have some ideas about what few things to add to optimize it for that role...
Which improvements would you like to see, which drawbacks do you see?
The main drawback for me is just the fragility of the sockets themselves. But I see this has been addressed in recent revisions. :)
There are a number of oddities and strange behaviors, but I don't actually see this as a drawback per se. Like, a lot of outputs aren't buffered (?) and adding e.g. capacitance to them (like by touching with a finger) can affect other outputs in the circuit in strange ways.
(That's sort of a guess... honestly, I haven't even engaged with the system in a "technical" mindset at all. I haven't hooked it up to a scope or even looked under the panels. For now, I'm happy to keep it that way :) )
Would you recommend it to other musicians or students of modular synthesis?
Absolutely, though of course this would depend on the person's needs or intentions. To some extent it actually seems a little challenging for beginners. For a practicing musician who is new to synthesizers, road-worthiness is important and limited signal flow is okay, so semi-modular designs make sense.
But if someone is already familiar with synthesis paradigms (and digital systems can easily provide this experience), or wants to dive in at the "deep end," then I would definitely recommend the AE Modular system.
Which modules would you like to see most in the AE environment?
I could go on of course, ha.
At the AE Modular forum we have now started a fun little series of weekly challenges which are supposed to inspire people to produce and record small pieces with their systems. The challenges are also a bit of a puzzle, like producing sound without using an oscillator, or producing spacey ambient drones. What challenge would you set us?
So these are almost like scores, right? I did a lot of this work as a composition student, like scores for modular or instructions for modular. I have mixed feelings about it. I don’t really care for the prescriptiveness of the challenges, but I can see how it would be helpful for someone who is starting out and wants to sit down every day and do something, but isn't sure how to go about it. It helps to have a goal in mind. Maybe in a pedagogic sense, it seems cool, but it’s not really for me.
The most interesting thing for me is that the way the AE system is designed, I believe, cuts a lot of corners. I’m guessing there are unbuffered outputs and things like that. It has these strange behaviours that deviate from the sort of ideal that someone might have learned in a software environment, and that’s a really valuable thing to have - especially if you have never experienced this tactile environment where you are interacting with a real electric circuit and not just a number that’s copied over when you turn a knob.
Maybe that is a challenge that I would set: “Try to find a behaviour that you can’t explain.”
What would your “stranded on a desert island” music setup look like?
I actually don’t have a lot of stuff in my studio. I have this Italian viola from 1850 which is really nice, and if that were the only instrument I could take, I would be more than happy. I also have another, very modern viola with a carbon-fibre body and an electric pickup which I really like. I also have a tenor guitar which I use a lot, a really nice audio interface and some distortion pedals. There’s a Moog Rogue which is really fun. That, and the monome stuff of course, and a few weird circuits that I’m working on is really most of what I have in my studio. Probably that’s already too much for a deserted island, but it’s not really a lot.
There’s this interview which I really like with Robert Henke, one of the co-founders of Ableton, where someone asked him about plugins and he replied, “You know, in Ableton or really any DAW you can have as many compressors as you want. You can serialise them, sidechain them, you can do stuff that no one even imagined being able to do in the ‘80s, not even in a million dollar studio. And now on your computer you can have all of this basically for free.” This is a really great antidote to the gear-chasing mentality. You know, doing more with less is the essence of art. For instance, the early Beatles records were recorded with one dynamic microphone in the middle of the room, in mono.
A lot of my favorite music is recorded with pretty minimal means. My favorite artist, Alastair Galbraith in New Zealand, really does a lot with a violin, a guitar, tape loops and his voice.
So my deserted island setup would probably be just my viola and if I could bring my computer that would be great, too.
When playing a violin, you are able to add expression and emotion to your sound quite naturally and spontaneously via fine tactile input. Modular synths, however, are often controlled without this kind of “touch”. How do you feel about this? Do you miss certain controller types for modular synths, or is the often rather algorithmic approach to modular music-making a welcome change or even a challenge? In another interview you said that you mostly use software to program sound (was it Csound?). Do you think that you can achieve a similar expressiveness in a live situation with program code as with knobs and faders?
I've mostly used SuperCollider for the last 10-15 years. I personally tend to use electronic sound structures as environments / processes that proceed sort of autonomously, sometimes in response to acoustic signals.
In both analog and digital systems, expressivity and depth of control are totally achievable. The kind of analysis-driven processes that I gravitate towards are much easier to achieve in the digital domain (e.g., with trivial access to frequency-domain / pitch / timbre analysis). On the other hand, direct response to gesture is fundamentally an analog thing.
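(As a concrete illustration of that “trivial access” to frequency-domain analysis, here is a minimal, self-contained Python sketch - a naive DFT written for clarity, not anything Ezra uses; a real system would call an FFT library - that recovers the pitch of a test tone:)

```python
import cmath
import math

def dft_magnitudes(signal):
    """Naive DFT magnitude spectrum for bins 0 .. N//2 (O(N^2), fine for a demo)."""
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(n // 2 + 1)]

def estimate_pitch(signal, sample_rate):
    """Frequency in Hz of the strongest non-DC DFT bin."""
    mags = dft_magnitudes(signal)
    peak = max(range(1, len(mags)), key=mags.__getitem__)  # skip the DC bin
    return peak * sample_rate / len(signal)

# 1024 samples at 8192 Hz: bin width is exactly 8 Hz, so 440 Hz lands on bin 55.
sr, n = 8192, 1024
tone = [math.sin(2 * math.pi * 440 * t / sr) for t in range(n)]
print(estimate_pitch(tone, sr))  # 440.0
```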
There's a bit of a paradox in control design: you can make a system that is flexible and patchable and highly responsive, but controlling responsivity takes practice. In an experimental environment it's cool to change up the gesture -> music linkage, but in, I dunno, more "traditional" music forms it actually seems helpful to have a sensitive but constrained set of gestures and to practice intensively with them (consider the theremin, for example).
But yeah, for myself, real time control is not actually a big concern, since compositionally I tend towards static structures / minimalism in any case.
This is the end of part 1 of this interview.
Please come back for part 2 where we will talk about the current state of the industry around modular synthesizers, appreciating music and Ezra’s main artistic influences.
This is a link to the projects that Ezra participated in:
Ezra also has a Bandcamp page:
And he is a founding member of The Mae Shi, an experimental rock band from Los Angeles: