Interactive Designer Portfolio


The Bartlett School of Architecture, UCL
MArch Design for Performance and Interaction
Interactive Architecture Lab

Interactive Designer: Portfolio

Researchers: Carlotta Bianchi, Jocelyn Murray
Supervisors: Paul Bavister, Felix Faire, Luca Dellatorre


carlottabianchi@outlook.com +44 (0)7888494371


Table of Contents

03 Prototype 1

04 Prototype 2

05 Prototype 3

06 Polluted Sounds Intro

07 Thesis: Research Intro

08 Air Quality Sensor

09 Sound Structure

10 Data Sonification

11 Spatial Sound

12 Global Sound Engine

13 Digital to Physical

14 Sound Design

15 Sound: Emotional

16 Sound: Data Sonification

17 Sound: Sonic Texture

18 Sound: Interaction

19 Visual Aid

20 Projection Mapping

21 Hackney Bridge

22 The Crypt: Design

23 The Crypt: Layout

24 The Crypt: Layout

25 The Crypt: Storyboarding

26 The Crypt: Film

27 Press

28 Appendix


Prototype 1 : Breath

Making the Invisible Audible: Breath

Can breath frequencies offer a window into our mental and climate health?

A look at breath frequencies in relation to human mental health. Investigating different frequencies of breath. Recording different states of human breathing: exploring the differences between anxious and calmer breathing states. Using sound as a medium to investigate how our mental health and the health of our environment can be explored through breath frequencies.

‘Breath’: a sonic collection of breaths recorded in different states (calm vs anxious).
‘Breath’ / audiovisual: https://vimeo.com/388564421

If breath can give us a look into our mental health, can wind frequencies tell us something about the health of the inhabited environment?


Prototype 2: Invisible Visible

Making the Invisible Visible: Exploring the interaction between: air, materials, and the body

How does the lack of air make us feel about the environment we inhabit?

Investigating how the invisibility of air can become visible by the effect it can have on humans and the environment. Using our body as a medium to explore this relationship.


Prototype 3: Invisible Audible

Making the Invisible Audible: Soundscape of a room

How can we recreate the sense of being in a room through sound? Our first step towards making the invisible audible was recording a binaural soundscape of a room. We binaurally recorded multiple human movements and air-induced interactions from a point in space and then replayed them in the same room from the same point. After all of our experiments, we decided to focus on using sound as a medium to communicate air quality: a first step towards making the invisible audible, which gave birth to Polluted Sounds. Binaural recording devices allow the listener to experience the recorded sounds from a spatial perspective.

Binaural Sketch

Binaural recording is a method of recording sound that uses two microphones, arranged with the intent of creating a 3-D stereo sound sensation for the listener of actually being in the room with the performer. We recreated different sounds within a room and recorded them binaurally to transport the listener to our workspace in Here East.

The project asks its occupants: is it possible to understand the health of our environment through our ears?

Listen to our binaural soundscape via the link below: https://soundcloud.com/user-731852678/binaural-soundscape

Here East UCL Studios, London, UK


Polluted Sounds: Introduction

Polluted Sounds

Polluted Sounds is a spatial project that allows occupants to immerse themselves in inhabitable data through sound. Air quality in our urban environments is critical to public health, yet often neglected and ignored. Polluted Sounds is a sonic journey that aims to shed light on the positive side effects that COVID-19 had on air quality in urban areas. The project aims to communicate the narrative of London’s air quality through an empathic, immersive sonic experience.

The translation of the invisible and intangible agents of the environment into a psychophysiological sonic timeline allows the body to navigate this global issue through sound, space and time. Basing the sound design on research in emotional valence, the sonic language created aims to bring the audience to understand changes in air quality through positive and negative arousal. The translation of such a fundamental issue for human and environmental health challenges common communicative languages by creating a sound bath of unconscious or conscious understanding.

Listen to our sonic album for London’s air quality from 2017 to 2020 via the link below: https://soundcloud.com/user-731852678/2017-2020a

Air pollution levels decreased from 2017 to 2020.*

The project asks its occupants: is it possible to understand the health of our environment through our ears?


Thesis: Research Intro

The site-specificity of immersive auditory representations: interpreting data through sound in art installations

Rationale

This research asks: how can the use of space in site-specific sonic environments enhance the communication and the experience of an immersive auditory representation of data for a non-specialised audience? To approach this question, this thesis investigates the synergy between the process of auditory representation of data and the use of space in site-specific sonic environments (both digital and physical). The intersection of these two fields forms the foundation of Polluted Sounds, a case study project aiming to translate air quality data into an immersive sonic experience.

Given the extreme global uncertainty caused by COVID-19, and the impact it has and will have in shaping the future of cultural and artistic shared experiences, this thesis takes a speculative approach towards how the fields of auditory data representation and site-specific sound installation might adapt to these current challenges. Through this investigation, we analyse the landscape of these rich fields and pave the way for future exhibition scenarios for Polluted Sounds. A critical analysis of the field of auditory data representation is carried out to identify Polluted Sounds’ position in the field and why this case study makes space for its existence. The literature review explores the validity of contemporary sonification exercises as current global soundscapes adapt to the ever-changing environment, questioning the validity of sonic interpretations used in the arts and in urban cities to communicate and understand today’s inhabited environment. In parallel, the thesis discusses the interactions between space and sound.
An overarching analysis of how spatial contexts shape and enhance sonic performances allows the researcher to gain the necessary knowledge to design and tailor the auditory representation in a digital or physical environment, or perhaps in a hybrid of the two. Theory is applied to practice by taking a speculative approach towards Polluted Sounds’ future development, taking into account that the thesis is written during the development phase of the design project and will shape its evolution.

Case Study: NASA Orbit Pavilion by STUDIOKCA


Polluted Sounds: Prototype 1

Live Air Quality Sensor

Our first approach was to build a live air quality sensor with an Arduino, measuring CO2 levels. We wanted to gather the data ourselves and bring awareness to air quality in the open and closed spaces of our inhabited environment.

1.0

We built a live air quality sensor and brought it to different locations around London: Gatwick Airport, the Olympic Park, and G20 in Here East, Hackney Wick. This experiment allowed us to understand the importance of bringing awareness to air quality in our local inhabited environment as we measured the different levels of air pollution around the city.

Gathering air quality data first-hand

Prototyping with a live air quality sensor measuring CO2 levels in real time with an Arduino. Listen to our sonic album for London’s air quality via the link below: https://soundcloud.com/user-731852678/london-uk-live-air-quality-sensor
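As an illustration of the data-gathering step, the sketch below shows how CO2 readings streamed from an Arduino over serial might be parsed and banded in Python. The "CO2:<ppm>" line format and the ppm thresholds are assumptions for illustration, not the project's actual values.

```python
def parse_co2_line(line):
    """Extract a ppm value from an assumed serial line like 'CO2:412'."""
    line = line.strip()
    if not line.startswith("CO2:"):
        return None
    try:
        return int(line.split(":", 1)[1])
    except ValueError:
        return None

def band(ppm):
    """Rough indoor-air bands (assumed thresholds, for illustration only)."""
    if ppm < 600:
        return "good"
    if ppm < 1000:
        return "moderate"
    return "poor"

if __name__ == "__main__":
    # In the live prototype these lines would arrive from the Arduino's
    # serial port; here we simulate a few readings.
    for raw in ["CO2:412", "CO2:980", "CO2:1450", "noise"]:
        ppm = parse_co2_line(raw)
        if ppm is not None:
            print(ppm, band(ppm))
```

In the installation itself the parsed values would feed the sonification rather than be printed.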


Polluted Sounds: Prototype 2

Prototyping with a live air quality sensor measuring CO2 levels in real time with an Arduino. With the aim of gathering the data first-hand and investigating the air quality of the open and closed spaces we inhabited, we measured air quality levels around different urban locations in London and in our studio in Here East, London. The dome structure was designed to be situated in different locations around London as a means of engaging the local audience on the topic by sonifying the data in real time. The dome structure was inspired by Bernhard Leitner’s Sound Spaces: conceiving sound as a constructive material, as an architectural element that allows sound and space to construct an immersive sonic experience.

Sound Spaces, Bernhard Leitner, 1995

Sound Structure

Can we use sound as a medium to engage a local audience on the health of the inhabited environment?

Trafalgar Square, London

Hyde Park, London


Polluted Sounds: Data Sonification

Data Sonification

Our first approach to sonification as an exercise, to make an audience understand the difference between good and bad air quality, was to sonify air quality values for four different locations. This was our first approach to the sound design, whereby we designed a sound for four different cities with the aim of crafting unique timbral structures, or harmonic identities, for each pollutant.

How can we translate air quality data into sound?

Listen and view our sonic album for London’s air quality via the links below: https://soundcloud.com/user-731852678/sets/air-quality-sonification https://vimeo.com/412856653

Ryoji Ikeda, Datamatics


Spatial Sound

Spatial Sound

Experimenting with sonic tools to create immersive sound experiences

Spatialised audio, as described by Willits (2017), “allows the use of space as a part of the music”. The directionality and speed of sound sources are fundamental components in the use of spatial audio, where the virtual sound source is mapped to create movement and different velocities. In the context of Polluted Sounds, directionality has been successful in triggering emotional responses in the audience and in the communication process of the air quality data. We played our sonic language on an 8-channel surround sound setup and binaurally recorded it to share the enhanced experience with an audience through headphones.

Binaurally recording an 8-channel surround sound experience, Here East Studios UCL, London, UK

Vector Based Amplitude Panning
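Vector-based amplitude panning, named above, places a virtual source between a pair of speakers by inverting the matrix of speaker direction vectors and normalising the resulting gains to constant power. A minimal two-dimensional sketch (the speaker and source angles are arbitrary examples, not the installation's layout):

```python
import math

def vbap_pair_gains(src_deg, spk1_deg, spk2_deg):
    """2-D VBAP: gains for two speakers whose weighted sum points at the source."""
    def unit(deg):
        r = math.radians(deg)
        return (math.cos(r), math.sin(r))

    p = unit(src_deg)
    l1, l2 = unit(spk1_deg), unit(spk2_deg)
    # Solve g1*l1 + g2*l2 = p by inverting the 2x2 speaker-direction matrix.
    det = l1[0] * l2[1] - l1[1] * l2[0]
    g1 = (p[0] * l2[1] - p[1] * l2[0]) / det
    g2 = (l1[0] * p[1] - l1[1] * p[0]) / det
    # Normalise so the two gains carry constant total power.
    norm = math.hypot(g1, g2)
    return g1 / norm, g2 / norm
```

A source aimed straight at one speaker yields gains of (1, 0); a source midway between a symmetric pair yields equal gains.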


An Interactive Webspace

Global Air Quality Sound Engine

Can we translate air quality into an immersive sonic experience in the digital realm?

A global real-time translator anyone can access to listen to their local sonification, available from anywhere in the world. We adapted to the digital realm by building a globally accessible interactive web space. We used auto-generative tools to build a sonic engine that could sonify air quality anywhere in the world based on the user’s closest air quality sensor: a hybrid interaction between machine-generated sounds and ad-hoc sound files responding in real time to API air quality data.

Webspace link: https://polluted-sounds.glitch.me
Concept video: https://vimeo.com/417241545

Process Diagram: A globally accessible sound engine for your local air quality
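The hybrid behaviour described above, switching between machine-generated sound and ad-hoc sound files, can be sketched as a simple dispatch on the incoming reading. The band cut-offs, file name, and frequency mapping below are hypothetical placeholders, not the engine's actual values:

```python
def route_reading(pm25):
    """Return a playback instruction for one PM2.5 reading (ug/m3).

    Clean air plays a pre-composed layer; higher readings switch to
    machine-generated drones whose pitch tracks the pollution level.
    All thresholds and names here are illustrative assumptions.
    """
    if pm25 < 10:
        return {"kind": "file", "name": "calm_layer.wav"}
    if pm25 < 35:
        return {"kind": "synth", "freq_hz": 220 + 4 * pm25}
    return {"kind": "synth", "freq_hz": 55 + pm25}

if __name__ == "__main__":
    # In the web space the reading would come from the nearest sensor's
    # API response; here we simulate three readings.
    for reading in (5, 20, 50):
        print(reading, route_reading(reading))
```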


Transition to Physical Space

Digital to Physical Space

What does the space look like? How can the experience be catered to the individual?

Sugar Studios, London, UK - Render made in C4D

How does a user experience the sound journey? What does it look like? Thinking about how to transition the sonic poem of air quality from the digital realm into a physical space.


Sound Design: Process

Sound Design

1. Research in Emotional Valence
2. Data Sonification with Max 8
3. Producing Sonic Textures
4. Interactive Element: Arduino

Contact Mic Recordings


Sound Design: 1 Emotional Valence

Sound Design: Emotional Valence

From research within our thesis, we looked at the psychophysiological response to certain qualities in sound. We looked at one research paper in particular that stood out based on the sample size of participants, the accuracy of the results from statistical comparisons, and the accuracy of the methods used to test the psychophysiological response to sound. This research article is called: Emotion in Motion: A Study of Music and Affective Response, by Javier Jaimovich & Benjamin Knapp.

Beach Boys

Nina Simone

Beach Boys Clip: https://soundcloud.com/user-731852678/beach-boys-clip Nina Simone Clip: https://soundcloud.com/user-731852678/nina-simone-clip

The research showed that the most common positive response was to the song “Good Vibrations” by the Beach Boys, and the strongest negative response was to “I Get Along Without You Very Well” by Nina Simone. We took these results as the basis for how to evoke a negative or positive response in our sound design. A chord progression was taken from the chorus of each song and stretched using PaulStretch to keep the harmonic valence. We then wove these tracks into our final sound, which you can listen to from the links on the “Polluted Sounds” introduction page. You can listen to the stretched clips by clicking on the links to the left.


Sound Design: 2 Data Sonification

Sound Design: Data Sonification

How can air quality data be transcribed into sound? The process of converting data into sound is always somewhat arbitrary. We therefore wanted to build a machine that could do the conversion and take the more arbitrary decisions out of this side of the process. We approached the sound design process of data sonification through a piece of software called Max 8. In this software we were able to create a ‘patch’ that automatically generated sounds based on the air quality data for each year. The differing PM2.5 values altered the frequency built into the patch. The software processed the data for each year and ran it through a keyboard and synth to play a certain tone. Sine, square, and triangle oscillators were then used to enhance the sonification.

Listen to our sonification of four cities’ air quality via the link below: https://soundcloud.com/user-731852678/sets/air-quality-sonification
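As a sketch of this sonification step, the snippet below maps a PM2.5 value to an oscillator frequency and renders a sine tone as raw samples, roughly analogous to what the Max 8 patch does. The base frequency and step size are assumed values, not those of the actual patch:

```python
import math

SAMPLE_RATE = 44100  # samples per second

def pm25_to_freq(pm25, base_hz=110.0, step_hz=8.0):
    """Map a PM2.5 value (ug/m3) to an oscillator frequency.
    The linear mapping and its constants are illustrative assumptions."""
    return base_hz + step_hz * pm25

def render_sine(freq_hz, seconds=0.5):
    """Render a sine tone as a list of float samples in [-1, 1]."""
    n = int(SAMPLE_RATE * seconds)
    return [math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE)
            for i in range(n)]

if __name__ == "__main__":
    # A more polluted year maps to a higher-pitched tone.
    for year, pm25 in [(2017, 25.0), (2020, 10.0)]:
        print(year, pm25_to_freq(pm25))
```

Square and triangle oscillators, as used in the patch, would only change the waveform function, not the data mapping.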


Sound Design: 3 Sonic Textures

Sound Design: Sonic Textures

We wanted to craft each and every sound you hear. To create atmospheric sounds, we found the best way to evoke the sound we wanted was through a contact mic. A contact mic is able to record sounds very close up, or even on materials, and magnify them. We found that generating movement on a drying rack with a small metal tool had the right amount of reverb and give for the sound we wanted to create.

You can hear a clip of the contact mic recordings via the link below: https://soundcloud.com/user-731852678/contact-mic-clip


Sound Design: 4 Interaction / Arduino

Interaction: Arduino

How can we make data immersive? Can we design an inhabitable sonic experience?

Brainstorm sketch

The sonic interaction was triggered by ultrasonic sensors, which were also used to modulate the volume of the sound based on physical proximity to the sound source. The sensors’ readings were run through Max to alter each time zone based on the user’s proximity to the sensor in real time. There were four ultrasonic sensors, set up through an Arduino. We exploited this interaction as a medium to create an inhabitable sonic timeline. This interaction was not designed to communicate the sonification of the air quality itself; rather, this element of the installation was designed to allow a visitor to navigate the space and tailor its performance to their position. This allowed more than one visitor to experience the installation simultaneously, as the sound was isolated and its performance tailored to each visitor.

Sound Design Process in Max for Live
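The proximity-to-volume behaviour described above can be sketched as a simple distance-to-gain curve. The near and far ranges below are illustrative assumptions, not the installation's calibrated values:

```python
def proximity_gain(distance_cm, near_cm=30.0, far_cm=300.0):
    """Map an ultrasonic distance reading to a 0..1 volume gain.

    Closer than near_cm plays at full volume; beyond far_cm is silent;
    the gain fades linearly in between. Both ranges are assumed values.
    """
    if distance_cm <= near_cm:
        return 1.0
    if distance_cm >= far_cm:
        return 0.0
    return 1.0 - (distance_cm - near_cm) / (far_cm - near_cm)

if __name__ == "__main__":
    # Simulated readings from one of the four sensors.
    for d in (10, 165, 400):
        print(d, proximity_gain(d))
```

In the installation the equivalent scaling ran inside Max, with one such mapping per time zone.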


Visual Aid : Touch Designer

Visual Aid: Touch Designer

How can we visualize the sonic poem of air quality? What does the experience look like? How can we complement the sounds we created? We went through many iterations of designing sound-reactive visuals that could be projected as an aid to the understanding of air quality. We did this in the software TouchDesigner. Some examples of the outputs of our audio-reactive visuals are shown to the right. The projection illustrates a series of particles designed to be audio-reactive to the sound, which are then applied to the screens surrounding the sound system.

Here East Studios UCL, London, UK - C4D Render


Testing: Projection Mapping

Projection Mapping

Setting up at Hackney Bridge, London, UK

Sketch / Projection Mapping Techniques

Sugar Studios, London, UK - Render in C4D


Testing: Hackney Bridge

Prototype 1: Hackney Bridge

The first space where we situated our installation was Hackney Bridge, where we designed a linear sonic timeline in which time zones interacted in real time with human presence, allowing a visitor to navigate the space and tailor its performance to their movements.

The sonic interaction was triggered by ultrasonic sensors, which were also used to modulate the volume of the sound based on physical proximity to the sound source. We experimented with different visual aids to complement the understanding of our sonic narrative. We used film as a medium to represent a visual narrative of London’s air quality changing over time as the sound performed, situating it on the wall using projection mapping. We also used lights to give the sound spaces the ability to visually communicate when they were on and off. This test was not very successful because the space ended up being too reverberant and the sound was lost in space, which resulted in a weaker sonic performance. Because the space was small, we couldn’t isolate sound, and therefore the installation could respond to only one person at a time.

Hackney Bridge Studios, London, UK


The Crypt Sketch

The Crypt: Design Thinking

We were given the opportunity of a week-long residency at The Crypt Gallery under the New Church of St Pancras on Euston Road, London. We used this opportunity to set up the installation and film an immersive experience of it, as the installation was closed to visitors. Our aim was to use the unique architectural particularities of the space to our advantage. We situated four sound spaces in different isolated locations in the crypt and designed a site-specific experience. The sound in The Crypt was very dry and in tune with our sonic experience, which proved a success.

The Crypt: Design Thinking - Sketchbook


The Crypt / Situated Sound Spaces

Setup for The Crypt

Translating the emotional and sensorial aspects of our sonic language into an immersive environment. Making air quality an immersive experience by translating London’s changing air quality from before to after COVID-19 into an inhabitable psychophysiological sonic timeline. The sonic interaction was triggered by ultrasonic sensors, which were also used to modulate the volume of the sound based on physical proximity to the sound source.


The Crypt: Map Experiments

Site-specific design

Thinking about how to situate the different time zones in the Crypt, each time zone representing one year, from 2017 to 2020.

Use of color and pulsing patterns as an aid to complement the understanding of the feeling of good versus bad air quality communicated through sound: associating yellow and faster pulsing patterns with bad air quality, and blue and slower pulsing patterns with good air quality.

Final Situated Map


The Crypt: Storyboarding

Storyboarding

Time zone 1 / 2017 Air Highly Polluted

Time zone 1 / 2017 Air Highly Polluted

Time zone 2 & 3 / 2018, 2019 Pollution Levels Decreasing

Time zone 4 / 2020 Purer Air Quality


The Crypt: Film

The Crypt: Film

Polluted Sounds Experience Film: https://vimeo.com/505843343
Polluted Sounds, The Making Of: https://vimeo.com/502182448


Project Press

PRESS: AI Artist of the Week x AI FOR GOOD

Article link below: https://www.facebook.com/AIforGood/posts/featured-aiartist-of-the-week-pollutedsounds-by-carlotta-bianchi-jocelynmurra/707724413132824/


Appendix

Polluted Sounds Ephemeral Sketches 29

Pathway 1: AI Vs. Human Colors 33

Pathway 1: Not So Simple 33

Pathway 2: Triadic (VR Performance) 34 - 35

Pathway 2: Intersecting Dualities 36

Pathway 3: Splash 37

Pathway 3: Life Vibrations 38

Pathway 3: The Unheard 39


Polluted Sounds Sketches

Physical Speculative Sound Setup

Using the skeleton structure of an umbrella to create a portable sonic tool which senses live air quality through an Arduino.

A sound space designed to translate the air quality of different cities in real time. Using the umbrella structure to isolate sound as an individual navigates through space to engage with the sonic performance of different locations.

The Sonic Umbrella


Polluted Sounds Sketches

Physical Speculative Sound Setup


Pathway 1 / Machine Learning

Mapping the Machine: CoColor Project

Experimenting with the AttGAN deep learning algorithm to generate an AI color spectrum. A machine learning experiment with the COCO dataset: a text-image based model that generates images by sourcing the billions of images fed into the system. Challenging the model’s perception of colors sourced from images, which doesn’t work with colors as humans see them. We tested how sensitive the outputs are to changes in the input sentences by changing some of the most attended words in the text descriptions.

Findings: the generated images are modified according to the changes in the input sentences, showing that the model can catch subtle semantic differences in the text description and change the image accordingly. For example, the color orange could only be achieved by inputting ‘orange carrot’ and by repeating the word ‘red’ or ‘yellow’ up to 24 times.

CoColor Project Video: https://vimeo.com/390362789

AI COLORS Vs. HUMAN COLORS


Pathway 1 / Machine Learning

Not So Simple

Experimenting with the machine learning algorithm “img2txt”, which takes images and converts them into sentences. What happens when you input an image of text? Conclusion: not a successful algorithm for understanding text in images. Here is a link to a video of the test: https://vimeo.com/498993952


Pathway 2 / Triadic

Triadic

Exploring the use of virtual reality as a means of translating movement data from the human body into visual information in a 3D virtual environment. This visual information has the potential to be used as an improvisational, training and performative tool in dance.


Pathway 2 / Triadic

Triadic: A Virtual Reality Performance

A new choreographic tool for education: training the idea of switching between states through a sonic trigger; solo research vs. group-oriented movement. Capturing dance information in a spatial setting by giving specific tasks to three participants: one participant sees the visual information through the headset, while the other two participants are the visual information, which is tracked and visualized as a triangular movement in virtual reality. Challenging the senses for interaction: breaking down the use of a virtual reality headset with the aim of creating new interactions through sensory movements. Sound as a trigger for a task: when the music changes, the participants are invited to seek an interactive movement between them, finding the ‘triangle’ with the limited information they have access to.

The Entangled

Participant A

Participant B

Participant C

VR headset: visual Information

Joystick: tracked movement

Joystick: tracked movement

Developments: one-meter accuracy when the participants are seeking the triangular alignment. Outside this range, the joysticks vibrate and the VR visuals change color to red.


Pathway 2 / Intersecting Dualities

Intersecting Dualities

This project questions what it means to be a performer. Who is watching whom? Both participants become the ‘performer’ by trying to work together. Using the design software Unity to create a movement connection in virtual reality, the wands are used to record movements in time; once aligned with the movements of another user, a different color and sound are rewarded to the users to encourage collaboration. The users cannot see each other’s movements, which creates an interesting challenge. You can see a video about the project here: https://vimeo.com/498993094


Pathway 3 / Invisible to Visible

Splash

Creating a sonic performance based on daily routine objects. Using a contact mic to record the sounds produced by those objects, the sounds are then used to create visuals in TouchDesigner. Once combined, they are continuously altered by how the objects are being played, to create different sounds and visuals. Video below:

https://vimeo.com/498992321


Pathway 3 / Life Vibrations

Life Vibrations

Everything is made up of atoms, and we know atoms are constantly in motion. This underlying motion is a vibration: vibrating, oscillating, or resonating at various frequencies. With our bodies, we are sensitive to the frequencies and vibrations life is creating. The human body itself vibrates at different frequencies, very low frequencies called infrasonic waves. Different organs of the human body produce different resonance frequencies: the heart’s resonance frequency is ~1 Hz, the brain’s is ~10 Hz, and blood circulation is about 0.05 to 0.3 Hz.

This project aimed to record the frequencies of daily life: in the living room, walking to school, in the classroom, and so on. What does the body come in contact with on a daily basis? The frequencies were recorded and later transposed into a sound using the same frequencies. The visual aid pictured on the right is a representation of the frequencies your body comes in contact with, the records representing potential sound yet to be tapped into, all around us, blended into our daily routine.

You can listen to the daily frequency translation in the link below: https://soundcloud.com/user-731852678/live-vibrations

Is sound just music? Is it the wind that rushes past our ears in the morning? Is it the vibrations we feel or can’t feel? Is it frequencies within nature?
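Transposing infrasonic body frequencies into audible sound, as described above, amounts to shifting a frequency up by whole octaves until it clears the threshold of hearing. A minimal sketch, assuming the usual textbook 20 Hz lower limit of hearing:

```python
def transpose_to_audible(freq_hz, audible_min=20.0):
    """Double an infrasonic frequency octave by octave until it is audible.

    Doubling preserves pitch class, so the transposed tone is an octave
    relative of the original body rhythm. Returns (frequency, octaves).
    """
    octaves = 0
    while freq_hz < audible_min:
        freq_hz *= 2.0
        octaves += 1
    return freq_hz, octaves

if __name__ == "__main__":
    # Resonance frequencies quoted above: heart ~1 Hz, brain ~10 Hz.
    for name, f in [("heart", 1.0), ("brain", 10.0)]:
        print(name, transpose_to_audible(f))
```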


Pathway 3 / Invisible to Visible

The Unheard

The Unheard focuses on translating sustainable materials into new mediums of communication: visuals into sound and sound into motion. The project translates objects (such as materials used in design and architecture, and fabrics used in fashion) into different audio based on textures and colors. Using TouchDesigner, we create a new way to express the sound through particle movements.

Process:
• Analyse the brightness of every pixel
• Map vertical pixels to frequency
• Divide into the horizontal timeline of the audio

The Unheard Making Of Video: https://vimeo.com/390362789
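The three-step process above can be sketched as a small additive-synthesis routine: each image column becomes one grain of the audio timeline, with vertical pixel position mapped to frequency and pixel brightness weighting each partial. The sample rate, frequency range, and grain length below are illustrative assumptions, not the project's settings:

```python
import math

SAMPLE_RATE = 8000  # samples per second (kept low for a quick sketch)

def column_to_samples(column, f_min=200.0, f_max=2000.0, seconds=0.05):
    """Render one image column (brightness values 0..1, top to bottom)
    as a short additive-synthesis grain; higher rows map to higher pitch."""
    rows = len(column)
    n = int(SAMPLE_RATE * seconds)
    out = []
    for i in range(n):
        t = i / SAMPLE_RATE
        s = 0.0
        for r, brightness in enumerate(column):
            # Top row (r == 0) maps to f_max, bottom row to f_min.
            f = f_max - (f_max - f_min) * r / max(rows - 1, 1)
            s += brightness * math.sin(2 * math.pi * f * t)
        out.append(s / rows)  # average keeps the amplitude bounded
    return out

def image_to_audio(image):
    """Divide the image into a horizontal timeline: one grain per column."""
    cols = len(image[0])
    audio = []
    for c in range(cols):
        audio += column_to_samples([row[c] for row in image])
    return audio
```

A 2x2 test "image" of brightness values is enough to exercise the whole pipeline.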

