Remix: Revealing Unintended Arrangements


Iuri Trombini


Music, like architecture, comprises a series of relational components. But the presence of some objects within larger compositions is not always apparent. It is within the latency of these systems that new knowledge potentials can emerge. This project investigated the architectural implications of remixing music, focusing on the untapped potential within generative design processes. How might this investigation shed light on the relationship between rigidly controlled and idiosyncratic production methods? What role do subjectivity and imperfection play during the translation of a design concept?

A remix is a different way of seeing: an alternative perspective on something familiar. It is a process that develops over time in order to create something new from an existing composition. It takes multiple iterations to identify how various components of the source material can be rearranged and what emerges from each version. A remix offers two expressions of a single set of parameters with completely different constructs: the intended whole and the unexpected resultant, made up of the same parts yet completely independent.

To test this contention, the goal of the study was to generate a representation of music using a digital formal language, and then alter that form in order to remix the source song. To generate geometry from audio, sound first had to be translated into a coded language of Cartesian coordinate points. The initial step was to identify a song composed of varying structural elements. These are visible in its sound-waves, which map amplitude and frequency, or loudness and pitch. A song with a diverse organizational structure produced sound-waves with a greater range of inflections. These waves were translated through the Processing software in order to generate a vector map and a tessellation structure representative of the song.

Once all the coordinate points were generated by the Processing software and a digital form was constructed, the procedure of remixing the song could begin. The remixing of the source music was accomplished by maintaining a link between the representative digital geometry and the song's original sound-waves. The form was altered by translating the vector points of the surface geometry through scripted Grasshopper definitions. In turn, the linked sound-waves in the original music would change in response to the vector translation. The edited sound-wave was then output to yet another data-processing application capable of generating a playable music file. An isolated portion of the song was remixed numerous times in order to produce a series of new sound-wave files. Each iteration provided insight into the logic behind the unique audio remix. An investigation into the relationship between translation procedures and code revealed the latent characteristics embedded within the audio data.

This project emphasized the importance of learning through applied research. Technological experimentation, such as intentionally remixing a known condition, enables designers to embrace mistakes as moments of discovery. The project was initiated with the intent of moving beyond the expected outcomes produced by the tools within the discourse. The remix also highlights the influence these emergent conditions can have on design thinking and creative processes.


Audio file with a sampling rate of 44,100Hz [at 44,100Hz, every second of sound contains 44,100 frames]
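As a quick check on that figure, a minimal Processing sketch (frame count borrowed from the Matlab step later in this project) converts a frame count back into seconds:

//A minimal sketch relating frame count to duration at 44,100Hz.
int sampleRate = 44100;                       //frames per second of audio
int nFrames = 132303;                         //frame count from the later Matlab export
float seconds = nFrames / (float) sampleRate;
println(seconds + "s of audio");              //prints roughly 3.0s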

//Processing
...
AudioPlayer player;
...
player = minim.loadFile("audio.mp3");
...
for(int i = 0; i < player.mix.size() - 1; i++){
  if(hvalue < abs(player.mix.get(i))){
    hvalue = abs(player.mix.get(i));
  }
}
...
if(hvalue > threshold){
  float xp = random(-300,300);
  float yp = random(-300,300);
  float zp = random(-300,300);
  points.add(new PVector(xp,yp,zp));
}
...
//The Processing script reads the audio file and calculates the sound-wave. A height threshold was applied to the wave; if the sound-wave exceeded the limit, a vector point was created.
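For context, a self-contained version of the same idea is sketched below, assuming the Minim audio library. The names hvalue, threshold, and points follow the excerpt above; the threshold value itself is an assumption, since the project's actual limit is not given:

import ddf.minim.*;

Minim minim;
AudioPlayer player;
ArrayList<PVector> points = new ArrayList<PVector>();
float threshold = 0.8;  //assumed value

void setup(){
  size(600, 600, P3D);
  minim = new Minim(this);
  player = minim.loadFile("audio.mp3");
  player.play();
}

void draw(){
  //find the peak amplitude in the current mix buffer
  float hvalue = 0;
  for(int i = 0; i < player.mix.size(); i++){
    hvalue = max(hvalue, abs(player.mix.get(i)));
  }
  //a sufficiently loud frame yields a random point inside a 600-unit cube
  if(hvalue > threshold){
    points.add(new PVector(random(-300,300), random(-300,300), random(-300,300)));
  }
}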

...
PVector newcoor = points.get(i);
xr = xr + newcoor.x;
yr = yr + newcoor.y;
zr = zr + newcoor.z;
...
for(int j = 0; j < points.size(); j++){
  PVector coor1 = points.get(i);
  PVector coor2 = points.get(j);
  line(coor1.x,coor1.y,coor1.z,coor2.x,coor2.y,coor2.z);
}
...
//Vectors connect the points into a larger tessellation.

From sound to form. Processing script illustrating how an audio file is translated to geometry.
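The connection pass above elides its outer loop; a hedged reconstruction of the full pairwise pass, assuming the same points list, might read:

//Connect every pair of points once to form the tessellation lines.
void drawTessellation(ArrayList<PVector> points){
  for(int i = 0; i < points.size(); i++){
    for(int j = i + 1; j < points.size(); j++){
      PVector coor1 = points.get(i);
      PVector coor2 = points.get(j);
      line(coor1.x, coor1.y, coor1.z, coor2.x, coor2.y, coor2.z);
    }
  }
}

Starting the inner loop at i + 1 draws each connection only once; whether the original script did this is not shown.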


[Figure: sound-wave segments extracted at 10s, 20s, and 30s, each 3 seconds long. Wherever the wave exceeds the threshold, the if(hvalue > threshold){...} rule above adds a random vector point, producing lists such as:
{58.86682, 261.95416, -175.94633} {-103.25432, -86.52399, -277.08905} {146.26437, -214.20078, -164.91255} {-231.62123, -271.1164, -104.46861}
{1.770844, -285.21014, 131.7225} {-220.65128, -272.89496, -182.0471} {62.892975, -254.93588, -47.722397} {89.785432, 124.154821, -23.976231}
{71.26866, 56.766548, 2.598952} {188.254363, -74.68543, 12.59892} {-61.08667, 86.634766, 2.747559} {151.46701, -146.37161, 23.91745}]

Sound-wave segment: 3 seconds of extracted sound

Processing tessellation. Vectors are informed by the inflection of the sound-wave. A threshold is determined in Processing; if the height of the wave exceeds the threshold, a new vector is created.


//Processing
...
//export vector points
{58.86682, 261.95416, -175.94633}
{-103.25432, -86.52399, -277.08905}
{146.26437, -214.20078, ...
...
message = coor1.x + "," + coor1.y + "," + coor1.z + ";";
String ip = "127.0.0.1";
int port = 6400;
udps.send(message, ip, port);
...
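A minimal sketch of the sending side, assuming the hypermedia.net UDP library for Processing (the library is not named in the excerpt, but the udps.send() call matches its API):

import hypermedia.net.*;

UDP udps;

void setup(){
  udps = new UDP(this);  //outbound socket on any free port
}

//stream one vector point to the listener as "x,y,z;"
void sendPoint(PVector coor1){
  String message = coor1.x + "," + coor1.y + "," + coor1.z + ";";
  String ip = "127.0.0.1";  //Grasshopper listening on the local machine
  int port = 6400;          //port taken from the excerpt above
  udps.send(message, ip, port);
}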

[Figure: Rhino/Grasshopper definition: get data from port → interpolate pts → mesh from lines → branch list → wb_framing.]

Example of 15 seconds of sound represented via Processing and Grasshopper.
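The "get data from port" step above was a Grasshopper component; the sketch below shows the equivalent receiving logic as a plain Processing UDP listener, offered only as an assumption to make the data flow concrete:

import hypermedia.net.*;

UDP udpr;

void setup(){
  udpr = new UDP(this, 6400);  //listen on the port the exporter targets
  udpr.listen(true);
}

//hypermedia.net callback: fires once per incoming datagram
void receive(byte[] data, String ip, int port){
  String message = new String(data);  //e.g. "58.86682,261.95416,-175.94633;"
  String[] xyz = split(message.replace(";", ""), ',');
  PVector p = new PVector(float(xyz[0]), float(xyz[1]), float(xyz[2]));
  println(p);  //the Grasshopper definition would interpolate these points
}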


Resultant structure informed by sound-waves.

Resultant surface informed by structure.

Sound-wave into form. Construction example of a 35s audio sample.


Geometry and the sound-wave are linked, allowing for a simultaneous response to deviations.
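The linkage can be pictured as a shared index between wave samples and vector points. The sketch below is a hypothetical illustration of that idea (the names wave and points and the z-to-amplitude mapping are assumptions), not the project's actual Grasshopper definition:

float[] wave;                //original sound-wave samples, one per linked point
ArrayList<PVector> points;   //geometry generated from the wave

//deviating a point writes the change back into the linked sample
void deviatePoint(int i, PVector offset){
  PVector p = points.get(i);
  p.add(offset);                             //move the geometry
  wave[i] = constrain(p.z / 300.0f, -1, 1);  //assumed mapping back to amplitude
}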

Deviation Process. Manipulation of the sound through form.


Remixed Form. Result of the deviation between form and sound-wave.


%Matlab
%import x and z points from the deviated sound-wave
>> soundwave = dlmread('deviated_points.txt');
>> plot(soundwave);
...
0.228644669055939 0.209326460957527 0.215430155396462 0.223212376236916 0.201818898320198 0.160893589258194 0.138950780034065 0.144871369004250 0.148106321692467 0.130375072360039
0.0573747977614403 0.0690328702330589 0.0808435305953026 0.0935087129473686 0.106021299958229 0.111239969730377 0.112796410918236 0.118076115846634 0.124393448233604 0.127842038869858
0.0923184901475906 0.0866420492529869 0.0618610195815563 0.0323801375925541 0.0236213263124228 0.0327158421278000 0.0388195440173149 0.0356456190347672 0.0318613238632679 0.0374767296016216
...
132303 points found

%132303 points at a 44,100Hz sampling rate correspond to 3 seconds of audio
>> audiowrite('deviated_sound.wav', soundwave, 44100);
>> sound(soundwave, 44100);
...

Original sound-wave

Rhino: extract points → save points as deviated_points.txt

From form to sound. After deviating the form, the new sound-wave points are extracted and exported to Matlab, where they are re-plotted as a sound-wave that can be played and written as an audio file.
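The project used Matlab for this step. As a hedged alternative in the project's other language, the same write-out can be sketched in Processing with the standard javax.sound.sampled classes; the writeWav helper and its names are illustrative:

import javax.sound.sampled.*;
import java.io.*;

//Write float samples in [-1,1] to a 16-bit mono WAV at 44,100Hz.
void writeWav(float[] samples, String path) throws IOException {
  byte[] pcm = new byte[samples.length * 2];
  for(int i = 0; i < samples.length; i++){
    int s = (int)(constrain(samples[i], -1, 1) * 32767);  //scale to 16-bit PCM
    pcm[2*i]     = (byte)(s & 0xff);                      //little-endian low byte
    pcm[2*i + 1] = (byte)((s >> 8) & 0xff);               //little-endian high byte
  }
  AudioFormat fmt = new AudioFormat(44100, 16, 1, true, false);
  AudioInputStream ais = new AudioInputStream(
      new ByteArrayInputStream(pcm), fmt, samples.length);
  AudioSystem.write(ais, AudioFileFormat.Type.WAVE, new File(path));
}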


Time iteration - 3s deviation on a 15s song


Formal representation of 3s of music with no deviation.


3s Remix #1


3s Remix #2

