A Conversation With Hans Tutschku

By Nathan Park, February 15, 2024

Hans Tutschku is a German composer and a professor at Harvard University’s Studio for Electroacoustic Composition. A pioneer in the field of electronic music composition, he has collaborated with Karlheinz Stockhausen and is a member of the Ensemble for Intuitive Music Weimar. He studied at renowned institutions including the Royal Conservatoire in The Hague and IRCAM in Paris. Professor Tutschku gives workshops internationally, and his compositions have earned numerous awards, including prizes from Bourges, the Prix Ars Electronica, and the Prix Musica Nova.

Interview conducted and condensed by Nathan Park.


Q. Can you share your journey into electroacoustic music? What initially drew you to this field, and how did your early experiences shape your approach to composition?

A. I was about to turn 16, having studied the piano, and I attended a concert of music by Karlheinz Stockhausen. I had no idea what to expect—I had never heard anything from the 20th century. The experience completely blew me away. I felt that this form of music really spoke to me. While the classical repertoire was interesting, it wasn’t necessarily what I wanted to pursue.                                                                                                                 
After the concert, I approached Michael von Hintzenstern, one of the musicians who had played a small analog synthesizer during the performance. Without hesitation, I asked him if he could explain the synthesizer to me. He looked at me, then at the synthesizer, and said, “That’s not something we can do in ten minutes. Why don’t you take it home and try it out on your own?”
It was exciting. I’m from Weimar, a small town, and back then it had maybe 60,000 inhabitants. My parents were musicians, so people knew each other. He wasn’t handing the instrument to a total stranger, but it was still a surprising gesture. I skipped school for two days, messing around with the synthesizer and headphones, trying to figure out how it worked. When I returned it, he asked me to show him what I had discovered. After I demonstrated it, he was impressed and offered to work together. That was the beginning of a collaboration that has lasted 42 years—we’re still playing together today.
Six weeks after that encounter, we had our first concert. Since then, electronic music has been one of my central activities. I’m also interested in theater, photography, pottery, and other arts, but my main focus is electronic composition.
I’ve written some purely instrumental pieces, but my passion lies in the possibilities and extended sound palettes that technology can add to traditional instruments.

Q. Could you describe your compositional process when working with electroacoustic music? How do you integrate technology and traditional compositional techniques to create your pieces?

A. That’s evolved a lot over the years. My training in Germany was very structured, with a focus on formal composition techniques like serial music. My early pieces were highly calculated—I’d sit at a desk and work out the proportions. The first part should be this many minutes, the second part that many seconds, and so on. The durations of sounds and textures had to respect those calculations.
In 1994, I moved to France and lived there for almost ten years. The French school of acousmatic music is very different. In the 1950s, there was a clear divide between the German and French approaches. The French approach is much more focused on the evolution of sounds. You listen to the sounds first, then make formal decisions. The length of something can’t be decided beforehand. This gradually shifted my approach to composition.
Today, I think I’m working in three areas: improvisation, composition with extended electronics, and pure acousmatic composition, where no live performers are involved.
Electroacoustic music allows more freedom in creating spatial sound. In a concert hall, when we install many speakers in a surround setup, the audience becomes part of the sonic field, creating an immersive experience. This is very different from a typical instrumental recital, where the audience is more passive. With multichannel spatialization, we can create experiences where the audience feels like they’re part of the soundscape.
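To give a flavor of what multichannel spatialization can involve, here is a deliberately simple Python sketch that places a mono sound on a ring of loudspeakers surrounding the audience, crossfading with equal-power gains between the two nearest speakers. It illustrates only one basic technique, not Tutschku’s own concert setups, and the speaker count and angles are arbitrary examples.

```python
# Toy sketch: pan a mono source around a ring of loudspeakers using
# equal-power gains between the two nearest speakers. Speaker count and
# source angles are arbitrary; this is not a description of any real setup.
import numpy as np

def ring_gains(azimuth_deg, num_speakers=8):
    """Gains for each speaker in a circular array, for a source at `azimuth_deg`."""
    spacing = 360.0 / num_speakers
    pos = (azimuth_deg % 360.0) / spacing        # source position in "speaker units"
    left = int(np.floor(pos)) % num_speakers     # nearest speaker on one side
    right = (left + 1) % num_speakers            # nearest speaker on the other side
    frac = pos - np.floor(pos)                   # how far between the two
    gains = np.zeros(num_speakers)
    gains[left] = np.cos(frac * np.pi / 2)       # equal-power crossfade
    gains[right] = np.sin(frac * np.pi / 2)      # (gains squared always sum to 1)
    return gains

# A source moving around the audience, sampled at a few angles:
for angle in (0, 45, 90, 200):
    print(angle, np.round(ring_gains(angle), 2))
```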
When improvising with musicians, we’re inventing musical structures in real time, listening and reacting to each other. This improvisational energy strongly influences my compositions. I often think of my compositions from a theatrical perspective, as I also studied theater. So theater and drama are important to me. I’m not necessarily thinking that one sound represents a young lady and another represents someone else, but I’m thinking about layers, textures, their agency and relationships—how they can surprise, attract, or continue.
Also, when it comes to electronic music, I’m more interested in compositions that aren’t beat-based. The moment we introduce a beat, the way we listen shifts. When there’s a steady pulse, our body naturally taps or nods along, and our ears stop being as alert because we’ve entered a loop of expectation—we know when the next beat is coming. Of course, there are forms where the beat drops out and something shifts, but you still have an idea of where it’s headed. I prefer to keep listeners on their toes, where they have to stay attentive the entire time.
I do still use music theory, but not in the traditional sense. I think a lot about pedal tones and harmonies, though it’s not the kind of harmony that moves from a dominant 7th chord to a tonic. It’s more about how pitches emerge and reappear, prolonging certain textures. Many of my pieces last 20 minutes or more, a longer time span than we’re used to with radio or popular music, which is typically only a few minutes long. I enjoy diving into a listener’s mind for a longer period, allowing them to discover something deeper over time.

Q. Can you describe some of the programs, technology, and physical equipment that you often use in your compositions?

A. Technology evolves constantly, which is great but also brings challenges. I have compositions from the 90s that are no longer performable because the hardware is obsolete. That’s a real shame, considering the effort put into those works.
In 2010, I started using iOS to create pieces for instruments with iPads or iPods. I wanted to see if musicians could engage with the electronics during the early stages of the rehearsal process. But Apple frequently changes its programming tools and audio requirements, making it difficult to keep up, so I eventually left that platform.
I’ve worked with programs like Pure Data, Max MSP, and SuperCollider—these are programming languages for audio. Max MSP allows me to extend what traditional instruments can do.
A piece I’m particularly fond of is Sparks, which I wrote in 2019 for piano and electronics as part of the FluCoMa research project at the University of Huddersfield, UK. In this piece, I integrated an AI-based polyphonic pitch detector that I programmed within Max MSP. This allows me to detect up to four simultaneous pitches. I did not want to rely on a special piano with built-in MIDI sensors; I simply place a microphone inside any grand piano to capture the pitches being played.
The electronics are then generated in real-time based on those detected pitches. What’s really fascinating is that the entire electronic layer of the piece responds dynamically to the performance. The electronics are built from a vast collection of samples that I recorded and processed myself—many of them being piano sounds, prepared piano sounds, and transformed variations. But they are not arranged into a fixed ‘tape’.
The piece is structured into 33 presets. Each of them dictates a different relationship between the acoustic piano and the electronic responses, creating a unique interaction between performer and electronics in each section of the piece. No two performances of Sparks are ever exactly the same, as the electronics evolve in response to the subtle nuances of each performer.
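As a very rough illustration of the kind of pipeline Tutschku describes, the Python sketch below detects a few prominent pitches in an audio frame and maps them through a “preset” that defines an electronic response. It is not the piece’s actual Max MSP patch: the AI-based detector is replaced here by a naive FFT peak picker, and the preset names and transposition ratios are invented for the example.

```python
# Illustrative sketch only: a naive FFT peak picker stands in for the piece's
# AI-based polyphonic pitch detector, and the presets below are invented.
import numpy as np

SR = 48000     # sample rate in Hz
FRAME = 4800   # analysis window: 100 ms at 48 kHz

def detect_pitches(frame, max_pitches=4, rel_threshold_db=20.0):
    """Return up to `max_pitches` prominent peak frequencies (Hz) in one frame."""
    windowed = frame * np.hanning(len(frame))
    mags = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(frame), 1.0 / SR)
    mags_db = 20.0 * np.log10(mags + 1e-12)
    floor = mags_db.max() - rel_threshold_db   # ignore bins well below the loudest one
    peaks = [i for i in range(1, len(mags_db) - 1)
             if mags_db[i] >= floor
             and mags_db[i] > mags_db[i - 1]
             and mags_db[i] >= mags_db[i + 1]]
    peaks.sort(key=lambda i: mags_db[i], reverse=True)   # strongest peaks first
    return sorted(freqs[i] for i in peaks[:max_pitches])

# Each "preset" maps the detected pitches to an electronic response; here a
# response is simply a set of transposition ratios applied to those pitches.
PRESETS = [
    {"name": "halo",    "ratios": [0.5, 1.0, 2.0]},
    {"name": "shimmer", "ratios": [1.5, 3.0]},
]

def respond(detected_hz, preset):
    """Frequencies the electronics would play for this preset."""
    return [f * r for f in detected_hz for r in preset["ratios"]]

if __name__ == "__main__":
    # Stand-in for a live piano frame: two tones at 220 Hz and 330 Hz.
    t = np.arange(FRAME) / SR
    frame = 0.6 * np.sin(2 * np.pi * 220 * t) + 0.4 * np.sin(2 * np.pi * 330 * t)
    pitches = detect_pitches(frame)
    print("detected:", [float(round(p, 1)) for p in pitches])   # -> [220.0, 330.0]
    print("response:", [float(round(f, 1)) for f in respond(pitches, PRESETS[1])])
```

In the piece itself, each of the 33 presets defines a far richer relationship than a list of transposition ratios; the sketch only shows the shape of the detection-to-response chain.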
For hardware, I used to rely on samplers by E-Mu and Akai in the 1990s and 2000s. They have now been replaced by large, dedicated patches running in Max MSP.

Q. Throughout your experiences as an educator, are there any common challenges that beginners face when learning to compose electronic music? How do you suggest they overcome these problems?

A. One common challenge is that students are often overwhelmed by the vast array of modules and options available in programs like VCV Rack. My advice is to limit yourself—start with a few modules and experiment with them. As you become more familiar with those, you can gradually explore more options.
I also encourage my students to analyze their favorite songs and try to recreate the sounds they hear. Limiting yourself to a specific set of tools forces creativity. I’ve had personal experiences with this. For example, I tried to imitate the frequency modulation capabilities of a Yamaha DX7 synthesizer using an analog synth. Physically, it wasn’t possible, but in the process of trying, I discovered new possibilities.
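For readers unfamiliar with the DX7, its sound engine is built on frequency modulation (FM) synthesis, in which one oscillator’s output modulates the frequency of another. The short Python sketch below shows the basic two-operator form; the parameter values are arbitrary examples, meant only to suggest the kind of spectra that are difficult to reproduce with a basic analog patch.

```python
# Minimal two-operator FM synthesis sketch (the principle behind the DX7),
# shown only as an illustration; all parameter values are arbitrary examples.
import numpy as np

SR = 48000  # sample rate in Hz

def fm_tone(carrier_hz, ratio, index, dur=1.0, sr=SR):
    """Return y(t) = sin(2*pi*fc*t + index*sin(2*pi*fm*t)), with fm = ratio*fc."""
    t = np.arange(int(dur * sr)) / sr
    modulator = np.sin(2.0 * np.pi * ratio * carrier_hz * t)
    return np.sin(2.0 * np.pi * carrier_hz * t + index * modulator)

# A non-integer ratio yields inharmonic, bell-like partials; a higher
# modulation index produces a brighter, denser spectrum.
bell = fm_tone(carrier_hz=220.0, ratio=1.4, index=5.0, dur=2.0)
```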
And if you make that a regular practice, you’ll naturally gain more confidence in your ability to interact with technology in a way that sparks creativity and produces sonic results. The key is understanding that technology itself is not the solution—it’s simply a tool.
What’s really important is being creative with whatever tools you have at hand. Then, when something new comes along, you’ve already developed the mindset of, “Oh, that’s nice—I can add this to my toolkit.” But your creativity doesn’t rely on having the latest technology. It’s about what you can do with the tools available to you.

Q. I listened to your piece Remembering Japan and enjoyed it a lot. You incorporate many vocal samples and performances—can you describe your experiences recording and incorporating live performers? How do the places you visit and the people you meet influence your compositions?

A. Traveling is one of my favorite activities, and it deeply influences my music. My first visit to Japan in 2001 was transformative—it’s one of the most different cultures I’ve encountered. Although I’ve learned many languages, I didn’t speak Japanese, and it felt like a completely different world.
I always carry a portable recording device with me to capture interesting sounds. I then take those recordings back to the studio, where they help shape the emotional tone of my compositions. However, it’s important to be respectful when recording in different cultures. Sometimes, you can’t just record people, as it could feel intrusive. I also often work with musicians to record local music.
In 2014, I had the chance to return to Japan for three months. I was particularly interested in classical Japanese music and the koto. I met with a priest at a private temple in Tokyo, who performed a ceremonial chant called Shomyo. He gave me a long introduction to the process, and I was allowed to set up my microphone by the altar. The ceremony lasted 45 minutes, and it was deeply moving—I was sitting there, overwhelmed.
I couldn’t bring myself to incorporate those recordings for two years. They felt too precious. Eventually, I used them in a one-hour composition, which I completed in 2022. I’m returning to Japan this year for the piece’s Japanese premiere at the Gakuen School of Music in Tokyo. I’m curious to see how the Japanese audience will respond, as they’re much more familiar with these sounds than I am.