
INTERVIEW: SOUND MIND

MUSIC ON YOUR MIND

Providing an insight into the inner state of the human brain when listening to music, and presenting a creative audio/visual spectacle to boot, Sound Mind is a neurofeedback installation like no other. We caught up with its builders to learn more…

We all know how music makes us feel inside, but what if we could peer inside our heads and see, visually, how our brain waves react to different forms of music? Part scientific exercise and part artistic installation, Sound Mind has been designed to lift the lid on precisely this all-important interaction. Conceived by dBs Institute graduate Mark Doswell, and serving as the major project for his Innovation in Sound MA, Sound Mind ‘paints’ the activity in the human brain via LED lighting arrayed across a large dome-like structure. Based in Bristol, Mark enlisted teammates Rory Pickering and Jim Turner to construct this futuristic dome. Doswell crafted a slick signal chain, beginning with a consumer-aimed electroencephalogram (EEG), running through a brainwave-organising application, into Ableton Live and then out to a light-controlling workstation, which triggers different LED lighting states across the structure.

Keen to dig further into this fascinating foray into brain/music interfacing, we spoke to Mark and his team to find out more…

AMI: Hi guys, firstly what was the starting point for Sound Mind, and had the world of neurofeedback been of interest to you generally?

Mark Doswell: There were a few starting points, really. I was surprised to find out that there were consumer-grade EEGs available on the market; there was one being used to aid meditation and therapy. After that, I discovered a third-party app called Mind Monitor, which allows you to send OSC (Open Sound Control) messages that you can then pick up inside software like Max/MSP or Max for Live (inside Ableton Live). Both of these things were quite exciting to me.

I’d built a biosensor before to use on plants, in the hope of making music with them. I used it on myself at one point, then started to wonder what other biosignals you could use in a musical context. I played around with my heartbeat and galvanic skin response before deciding it’d be cool to scan human brainwaves.
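A quick aside for the technically curious: the OSC hand-off Mark describes can be reproduced outside of Max for Live too. Below is a minimal Python sketch, using the python-osc library, that listens for Mind Monitor’s alpha band-power messages. The /muse/elements/alpha_absolute address follows Mind Monitor’s published naming conventions, while the port number is an assumption you would match to your own app settings.

```python
# A minimal sketch (not the Sound Mind code) of receiving Mind Monitor's
# OSC stream in Python with the python-osc library. The address follows
# Mind Monitor's published naming; the port is an assumption -- match it
# to whatever is configured in the app.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

def on_alpha(address, *channel_values):
    # Mind Monitor sends one absolute alpha band-power value per EEG channel.
    print(address, channel_values)

dispatcher = Dispatcher()
dispatcher.map("/muse/elements/alpha_absolute", on_alpha)

# Listen on the same port Mind Monitor is configured to send to.
server = BlockingOSCUDPServer(("0.0.0.0", 5000), dispatcher)
server.serve_forever()
```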

AMI: At what point did the Sound Mind project find its feet then, and how did the team come together?

Mark: I met Rory at Hackspace, and Jim is an old friend of mine. Hackspaces are cool creative places which are equipped with laser cutters and 3D printers. They’re great for facilitating ideas. I started talking to Rory about my idea of illuminating a brain via EEG, and he explained how he typically makes light installations. Then we became collaborators.

Rory Pickering: I’d been building a few things using LEDs and I’d always wanted to do something with music. I heard Mark’s idea and just thought it sounded very cool. For quite a while we were talking about building a literal brain that sits above somebody’s head. Over time we realised it didn’t need to be quite so literal. It’s more an abstract representation.

Mark: Studying at dBs forces you to get stuff done, and the fact that we had this deadline, as it became my major project, meant we had a motivating force. The innovation course was great, and it was really useful for showing me what Max/MSP was capable of.

Rory: I’d never heard of dBs before getting involved with this project, but they were very encouraging, and facilitated our mad idea. I was quite impressed by the space and the people.

AMI: So what are we seeing when we’re watching the colours light up? Are they representing emotional responses?

Rory: So we had five channels of incoming data (corresponding to the brainwave bands); the hardest part was mapping these to different visual parameters. The data stream that indicates excitement, for example, we might map to a visual parameter that is indicative of that state of mind, like a strobe effect, or the speed at which some kind of LFO in the visuals is scaled. We used several different programs per track, and we’d change the mapping for different songs, so you get quite interesting results. It also varies depending on the person.
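As an illustration of the kind of mapping Rory describes, here is a short Python sketch that scales normalised band-power readings into visual parameters. Every band name, range and value in it is hypothetical, chosen per song exactly as he notes, and not taken from the Sound Mind patch itself.

```python
# An illustrative sketch (not the team's actual code) of the mapping step:
# scaling a normalised brainwave band value into a visual parameter such
# as a strobe rate or LFO speed. All names and ranges are hypothetical.

def scale(value, out_min, out_max):
    """Linearly map a 0-1 band-power reading into a parameter range."""
    value = max(0.0, min(1.0, value))  # clamp noisy readings
    return out_min + value * (out_max - out_min)

# Hypothetical per-song mapping: which band drives which visual parameter.
mapping = {
    "beta":  lambda v: scale(v, 0.5, 12.0),  # strobe rate in Hz
    "alpha": lambda v: scale(v, 0.1, 2.0),   # LFO speed for slow colour washes
}

reading = {"beta": 0.72, "alpha": 0.31}      # one frame of incoming data
params = {band: fn(reading[band]) for band, fn in mapping.items()}
print(params)  # e.g. {'beta': 8.78, 'alpha': 0.689}
```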

Mark: At the moment, we do know that alpha waves are more active during a music-listening session, or during relaxation or meditation, so we can demonstrate this. It’s also true that gamma waves are more likely to appear when stressed. We were focusing on emphasising this, but then we realised that the best approach was to balance the science with art. We wanted to make it a creative installation ultimately.

Sound Mind is not mapped to brain *regions* yet. So, if you’re processing a certain element of music, like rhythm, the left hemisphere of your brain should probably be the most active, but the lights won’t reflect that location for now. This is something we’re looking at doing for the next iteration though.

AMI: So Rory and Mark were responsible for the concept and technical set-up, and Jim was tasked with building the structure itself?

Jim Turner: Yeah, I designed the structure of it. I was throwing ideas out to Mark and Rory over a weekend. The whole thing was made on a very low budget, so we had to be creative to make it look impressive and give it an angularity, to display the ideas we had. Overall it took three to six months.

Mark: Over half of that time was spent deciding where to go with the structure. We didn’t want to do anything that had been done before shape-wise, which made it quite challenging.

AMI: What was the first test, and I guess a big question is how do participants interface with it?

Mark: So we use the Muse headband; it’s designed for meditation but is a four-channel EEG, and it’s surprisingly reliable. There are a lot of academic papers written on it, so we used that as our brain-scanner. That was going to my phone, which was running an app called Mind Monitor, which renders the incoming EEG data. That’s sent via OSC to Ableton Live to automate some Max for Live devices, and from there out to the video mapping and light projection suite MadMapper.

Rory: It was Mark’s girlfriend who first tried it out. She recorded her brainwaves into Ableton Live, so then we had a recording to work with. Even though we were bending the DAW to a new purpose, it did become our main way of organising the control data, whereas the visuals were determined by MadMapper, taking the MIDI from Live.
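For a rough picture of that last hop, here is a hedged Python sketch using the mido library to push a recorded 0–1 control value out as the 0–127 MIDI CC messages a mapper like MadMapper can listen for. The port name and CC number are purely illustrative assumptions, not details from the team’s setup.

```python
# A hedged sketch of sending control data on as MIDI CCs, using the mido
# library. The virtual port name and CC number are hypothetical.
import mido

out = mido.open_output("To MadMapper")  # assumed virtual MIDI port name

def send_band(cc_number, band_value):
    """Scale a 0-1 brainwave control value into the 0-127 MIDI CC range."""
    cc_value = int(round(max(0.0, min(1.0, band_value)) * 127))
    out.send(mido.Message("control_change", control=cc_number, value=cc_value))

send_band(20, 0.72)  # e.g. a band reading of 0.72 becomes CC 20, value 91
```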

AMI: Were there any big surprises, and how responsive was it?

Mark: One caveat to using Sound Mind was that you had to close your eyes; any eye movements would make little jumps or artefacts. I had a conversation with Alan Harvey, a neuroscientist who did a great TED Talk called ‘Your Brain on Music’, which was very inspiring. He told us to make sure the subject’s eyes were closed.

It differs from conventional neurofeedback, because usually you’d be getting that data back in real time and you’d learn to control your brainwaves. With Sound Mind the participants are getting it later; the audience is watching this happen in real time and getting an insight into what’s going on in the subject’s brain.
