Wind Architecture Studio 2016 Michael Mack - Part A + B


// INTERACTIVE NARR[AIR]TIVE _WIND ARCHITECTURE | STUDIO 05 2016 SEMESTER 2 _MASTERS OF ARCHITECTURE _MICHAEL MACK 584812


// A.00.00 - CONTENTS
// A.01.01 - INTRODUCTION | WIND ARCHITECTURE
// A.02.01 - A BRIEF HISTORY OF INFLATABLE ARCHITECTURE
// A.03.00 - PRECURSORS > A.03.01_PERFORMATIVE ARCHITECTURE STUDIO 2011
// A.03.00 - PRECURSORS > A.03.02_MOVIE TASK 1: "THE BUBBLE"
// A.04.00 - KITE WORKSHOP | PETER LYNN > A.04.01_DESIGN > A.04.02_MATERIALITY > A.04.03_CONSTRUCTION+REFINEMENT > A.04.04_AFTERWORD
// A.05.00 - PART A BIBLIOGRAPHY
// B.01.01 - DESIGNING FOR USERS
// B.01.02 - DIGITAL NARRATIVE
// B.01.03 - CURATING THE EXPERIENCE
// B.01.04 - INTERFACING
// B.02.00 - PROTOTYPING > B.02.01_SKETCH01 : ARDUINO | SENSORS > B.02.02a_DETECTING USERS > B.02.02b_SKETCH02 : KINECT | GRASSHOPPER | FIREFLY | ARDUINO > B.02.03_COLLABORATION, RESEARCH, AND ELECTRICAL NONSENSE
// B.04.00 - PROPOSAL > B.04.01_WARFA[IR] + PA[I]RASITE > B.04.02_REFLECTION ON FIRST PROPOSALS > B.04.03_MATERIAL STUDIES
// B.05.00 - PROPOSAL II > B.05.01_CHRYSA[IR]LIS | DESIGN
// B.06.00 - PART B BIBLIOGRAPHY
// C.01.00 - MOVIE > C.01.01_SKETCH 03: UNITY | ANDROID SDK | VIRTUAL REALITY



// A.01.01 - INTRODUCTION | WIND ARCHITECTURE

Participants of this studio design, make, and put to use - fly - state-of-the-art kites and other experimental inflatable structures. The studio begins with kite-flying lessons and an intensive prototyping workshop led by world-leading kite designer Peter Lynn in collaboration with maker and writer Simon Feidin. It continues by using generative and parametric design, physical and digital simulation, hand crafting and digital fabrication to build speculative prototypes of architectural structures that can be supported or animated by air. Selected designs are produced at full scale and used on location with real publics. The resulting performances are documented and exhibited as moving images. Past and emerging examples of air-supported structures take the form of personal garments, individual shelters, mobile guerrilla installations, large-scale building skins, power-generating installations and means of transportation. Their uses range from emergency rescue shelters to dance performances, and their sites span the broad range from indoor environments to the stratosphere. The broad blend of skills acquired when making such structures can be useful in many other forms of design, now and into the future. - Stanislav Roudavski, MFA/MArch, MSc CABD, PhD Cantab. Assisted by Alex Holland BEnv, MArch.



// A.02.01 - A BRIEF HISTORY OF INFLATABLE ARCHITECTURE

The development of pneumatic and inflatable structures spans several decades. Initially, inflatable structures were utilised for their structural capabilities and efficiencies. Not only could they provide an unsupported span over large areas, but their material flexibility meant they could also be transported, deployed, and removed as needed 1. Eventually, inflatable structures began to explore other aspects of design. Haus-Rucker-Co developed a number of small pneumatic installations, such as Oase No. 7, which engaged people to influence and experience their own environments rather than just become passive observers 2. Formally, these designs sought to critique the architectural scene as a whole, commenting on the contemporary styles of Modernism and Brutalism at the time. Groups such as Ant Farm also used these projects to assess the American culture of mass media and consumerism, producing a number of events, movies and performances that explored the intersection of architecture and media art 3. However, due to the increasing price of plastic production in the 1970s, further experimentation was largely halted. The resurgence of inflatable projects, however, has devolved into inflatable pavilions and whimsical light shows. While inflatables were once used to critique both architectural and social/political conditions, a large number of modern projects simply trade on spectacle and novelty.

RIGHT // HAUS-RUCKER-CO OASE NO.7





// A.03.00 - PRECURSORS > A.03.01_PERFORMATIVE ARCHITECTURE STUDIO 2011

The first experience of inflatables was underwhelming. The inflated structure from the Performative Architecture Studio 2011, while large and still a feat of making, felt lacking in some significant respects. Personally, the most engaging aspects were the process of inflation, as the structure writhed and bulged on the floor, the act of climbing into the structure itself, and its eventual collapse and deflation. It was later revealed that the white surfaces were used as a screen for the projection of an agent-based artwork which changed with the number of people within the structure. Users could also use a lamp to 'agitate' the structure, which would react by switching through a series of states. This certainly added a degree of interaction that is lacking in many modern-day inflatable projects.

FAR LEFT // PERFORMATIVE ARCHITECTURE STUDIO 2011 STRUCTURE
LEFT (TOP TO BOTTOM) // HOT AIR - THE FLYING MAST; CONNECTOR - MMW; SILVER BEAN - ANISH KAPOOR; INFLATABLE PAVILION - BIG



// A.03.00 - PRECURSORS > A.03.02_MOVIE TASK 1: "THE BUBBLE"

Using collective footage from everyone, the studio was tasked with creating a short movie highlighting our experience of the structure. I took this as an opportunity to emphasise two actions: crawling through the hole into the structure, and directly interacting with the surface of the material through touch. I also wanted to highlight the qualities of reflection brought about by the clear plastic. The result was "The Bubble", a short horror movie trailer featuring short clips and fast cuts to emphasise the disorientating act of climbing into the structure. The music and the background rustling and gasping sounds crescendo into a variety of quick, dizzying pans. The trailer ends with an extended clip looking through the structure, letting the lines of reflective light dance in front of the camera, obscuring and blurring vision inside. In reflection, the conversion of experience to movie raises an interesting question of representation. While "The Bubble" used the same footage as others to create a suspenseful quality, other films focused on different aspects such as whimsical interactions, the act of inflation and deflation, or experiential qualities. In authoring a movie, the simple ordering of short clips, combined with basic video editing techniques such as colour grading and audio and video composition, is enough to generate a variety of stories through representational techniques.

RIGHT // STILLS FROM "THE BUBBLE"




// A.04.00 - KITE WORKSHOP | PETER LYNN

As part of our learning objectives, the studio took part in a workshop with kite-maker, engineer, and inventor Peter Lynn. The goal of the workshop was not only to learn how to make his design for a single skin, single line (SSSL) kite, but to distill the underlying design process behind using wind. This workshop was crucial in providing a fundamental understanding of the design of kites and inflatable structures.



// A.04.00 - KITES > A.04.01_DESIGN

"For a kite to fly on a single line, it must, as the most basic condition, have some way to detect which way is up." - Peter Lynn 4

A fundamental understanding of how kites stay in the air is vital to its transposition into further designs. The primary forces acting on a kite, and by extension any flying object, are its weight due to gravity, the tension of the tethering line, which can be broken into horizontal and vertical components, and the aerodynamic force, which is broken into the two components of lift and drag. Without a proper engineering background, it is difficult to fully utilise the aerodynamic force equations. However, several key points can still be extracted. Firstly, for a kite to maintain stable flight, the sum of all the forces must equal zero; for design, this means the weight must be low enough for lift to support it. Secondly, for the kite to be angled upwards so it can be affected by the wind, its centre of mass must lie below its centre of pressure. Thirdly, the weight-to-area ratio is not linear, so kites will not simply 'scale': larger kites are heavier relative to the amount of aerodynamic force they generate. The rest of the design of the kite relies on prototyping and Peter's personal experience. The design we were making had been iterated over three years, and many of the changes were small, centimetre-scale adjustments made to optimise the kite. The symmetry of any design, however, is paramount to its success, and a number of measures need to be taken to minimise errors in both the material and the construction process.


P (DYNAMIC PRESSURE) = 0.5 ρ V²
L (LIFT) = 0.5 ρ V² A C_L
D (DRAG) = 0.5 ρ V² A C_D
L / D (LIFT-TO-DRAG RATIO) = C_L / C_D

WHERE:
V = WIND VELOCITY (m/s)
A = AREA OF KITE (m²)
ρ = DENSITY OF AIR (APPROX. 1.229 kg/m³)
C_L = COEFFICIENT OF LIFT (APPROX. 1.0)
C_D = COEFFICIENT OF DRAG (APPROX. 1.0)

ABOVE // EQUATIONS FOR FORCES
RIGHT // DIAGRAM OF FORCES ON KITE
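The force balance can be checked numerically. Below is a minimal Python sketch of the lift and drag equations and the "sum of forces must equal zero" condition for stable flight; the kite area, wind speed and mass used in the example are hypothetical, not measurements from the workshop kite.

```python
# Illustrative force check for a kite, following the equations above.
RHO = 1.229   # density of air, kg/m^3
G = 9.81      # gravitational acceleration, m/s^2

def lift(v, area, cl=1.0):
    """Aerodynamic lift: L = 0.5 * rho * V^2 * A * C_L."""
    return 0.5 * RHO * v**2 * area * cl

def drag(v, area, cd=1.0):
    """Aerodynamic drag: D = 0.5 * rho * V^2 * A * C_D."""
    return 0.5 * RHO * v**2 * area * cd

def can_fly(v, area, mass_kg, cl=1.0):
    """Stable flight requires lift to at least support the kite's weight."""
    return lift(v, area, cl) > mass_kg * G

if __name__ == "__main__":
    v, area, mass = 5.0, 1.0, 0.2   # hypothetical: 1 m^2 kite, 5 m/s wind, 200 g
    print(f"lift = {lift(v, area):.2f} N, drag = {drag(v, area):.2f} N")
    print("flies:", can_fly(v, area, mass))
```

A sketch like this also makes the scaling point above concrete: doubling the linear dimensions quadruples the area (and lift) but tends to increase mass faster, so larger kites need proportionally stronger wind.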






// A.04.00 - KITES > A.04.02_MATERIALITY

Ripstop nylon is a reinforced nylon fabric which is highly resistant to rips and tears. Thicker threads are woven into the fabric at regular intervals in a cross-hatch which reinforces the material. The ripstop nylon used for kite-making comes in various weights, and is coated to give it zero porosity, meaning no air or water can pass through. While these properties make it ideal for kite and wind-based applications, there are still several issues with the material. For example, the cross-hatch weaving of the threads is rarely exactly perpendicular, resulting in asymmetry in the fabric itself. In addition, while ripstop nylon is strong along the weave, it stretches significantly on the diagonal to the cross-hatch pattern, exacerbating the issue of inconsistent weaving. To deal with these inconsistencies, kite patterns must be nested and rotated carefully on the material, so that the strength of the weave is oriented in the correct direction. For further application of this material in other designs, consideration needs to be given to designs consisting of multiple panels over a large span of area: the orientation of the material will have to be considered to reduce stretch in areas where it is unfavourable.

TOP // PETER LYNN DEMONSTRATING NESTING AND HOT KNIFE CUTTING
BOTTOM LEFT // DIAGRAM SHOWING RIGID AND STRETCH ORIENTATIONS
BOTTOM RIGHT // NESTING TO OPTIMISE RIGIDITY IN KITE


// A.04.00 - KITES > A.04.03_CONSTRUCTION+REFINEMENT

The actual construction process was hindered mostly by sewing skill. Due to the material's stretch, there were several cases where an area of stretchier material had to be sewn onto one with less stretch. This caused a number of 'registration' problems, with mismatched lengths of material during the sewing process. Pinning was also not advised, as it created holes in the fabric. Even with optimal nesting, there may still be cases where mismatched weaves of material need to be sewn together, and this needs to be identified and compensated for in the process. With regards to the bridles, their design for SSSL kites is quite simple. As a base point, the bridles all need to be the same length in order to even out the wind pressure on the fabric. The length of each bridle can then be adjusted to compensate for any errors in the kite's construction. Similar methods can be adapted to multi-line kites by identifying appropriate tethering points and allowing for adjustments after initial testing.

RIGHT // MATERIAL COMPENSATION IN SEWING
BOTTOM RIGHT // BRIDLE ADJUSTMENT




// A.04.00 - KITES > A.04.04_AFTERWORD

The kite workshop identified a number of issues that may arise in further designs. An aspect of kite making that was discussed but not explored was ram air kites. They work on principles similar to mechanically inflated structures and offer potential for further exploration. Most of the limitations of ram air kites revolve around geometry and design: aspects such as long appendages or large flat surfaces should generally be avoided, as airflow within the kite's structure causes instability. Ram air kites and similar designs can, however, also be inflated mechanically, allowing for a much more consistent environment for prototyping, or even for a final outcome. Due to a lack of experience and knowledge in the development of kites, it is difficult to develop this specific mode of kite to any significant extent. Therefore, other avenues of exploration must be taken in order to utilise inflatables for a project of this scale.



// A.05.00 - PART A BIBLIOGRAPHY

1. "Tensile Architecture, Tensioned Membrane Structure Company History - Birdair Inc." Birdair Inc, accessed 28 July, 2016, http://www.birdair.com/about/company-history.
2. "Haus-Rucker-Co," Spatial Agency, accessed 27 July, 2016, http://www.spatialagency.net/database/haus-rucker-co.
3. "Ant Farm," Spatial Agency, accessed 27 July, 2016, http://www.spatialagency.net/database/why/political/ant.farm.
4. "Why Kites Don't Fly," Peter Lynn Himself, accessed 14 August, 2016, http://www.peterlynnhimself.com/Why_Kites_Dont_Fly.php.

Images
1. Haus-Rucker-Co, Oase No. 7, http://www.arquilecturas.com/2012/06/inner-world-innenwelt-projects-of-haus.html
2. MMW, Connector, http://www.mmw.no/connector/
3. Flying Mast, Anca Trandafirescu, 'Hot Air', http://www.flyingmast.com/?tag=inflatablestructures
4. Anish Kapoor, 'Cloud Gate,' http://www.phaidon.com/resource/anishkapoor-p452.jpg
5. BIG, 'Inflatable Pavilion,' http://www.archdaily.com/791253/big-designed-inflatablepavilion-lights-up-roskilde-festival




// B.01.01 - DESIGNING FOR USERS

Through the development of the digital age, humans have moved from symbolic machine languages, to text-based code, to the visual and graphic user interfaces of the modern age. As technology has evolved, so too has the role of the architect. As designers, architects have a responsibility to enhance and actively encourage the interaction between user and space. Traditionally, this involved understanding how elements such as natural light or views of the outside encouraged certain behaviours. However, the expansion into the digital realm has raised the question of how design can both utilise and interface with the digital to create new architectures that engage the user and enhance experience. Architectural installations such as Philip Beesley's "Epiphyte Chamber" explore the combination of media art and interactive mechatronics to amplify the experience of space 1. This project therefore aims to explore the interplay between physical and virtual interfaces, and how narrative can be used to direct users through interactive storytelling in an interactive installation.

LEFT // EPIPHYTE CHAMBER 1


// B.01.02 - DIGITAL NARRATIVE



Narrative refers to a series of events conveyed through a medium such as text, spoken word, or moving images. It can range from poems and literature, to live performances, to video games. Writers use a wide range of literary devices to describe the story of a protagonist to the reader. Similarly, film-makers and animators create 'scenes' for each of these events, utilising cameras, lighting and framing to direct attention and evoke emotion from viewers. In traditional architecture, narrative is conveyed through a number of means, from conceptual and process drawings to the experience of movement through and use of the space 2. However, these all treat users as 'disembodied observers' 3, with no impact on the story being told. While these are by no means inferior methods of storytelling, far more interesting to architecture is the prospect of interactive narrative. Video games are prime examples of how users interact with an environment in order to progress through a narrative. The amount of interactivity varies greatly between individual games, and depends on the designer's intent to allow users to affect world states. Some game narratives are similar to movies, where users play out the role of a protagonist and any deviation from the story line results in a fail state. However, games such as Heavy Rain incorporate choice mechanics which allow players to make story-altering decisions that may result in the deaths of certain characters, drastically changing the ending of the narrative.

LEFT // HEAVY RAIN GAME PLAY SHOWING 'CHOICE' MECHANIC 2



// B.01.03 - CURATING THE EXPERIENCE

The primary issue when creating interactive narrative is predicting the behaviour of users, which is, for the most part, unpredictable 4. Known as the "Narrative Paradox" 5, interactive narratives need a system in place that can deal with the inherent problems of allowing interactive freedom within the traditional notions of story-telling. This is usually handled through forms of non-linear progression. In video games, this is usually accomplished through branching narratives, which contrast with linear narrative by allowing players to make story-changing decisions at key points within the game. In doing so, players are allowed to change the world-state, and different endings are reached based on the events triggered by the player. In the example to the right from the game 'Mass Effect', players are given the option to send one member of their team to rescue other characters from the story. If they choose to send a member, which is a detriment to their progression at the time, and the member they send is loyal, they are able to rescue the other characters, who then contribute to other aspects of the story line. Otherwise, both the member sent to rescue the remaining characters and those characters themselves are killed. It should be noted that all these events are still designed within the game world. The choices the user can make are controlled, yet the user is given the impression of an emergent narrative based on their decisions to trigger certain events. Users given the ability to interact with the world-state therefore become active performers within the world, accentuating experience and immersion in the story.

RIGHT // MASS EFFECT BRANCHING STORY LINE CHART 3
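The branching structure described here can be sketched as a small world-state model in Python. The decision point loosely mirrors the rescue-mission example, but the state keys, outcomes and ending labels are simplified, hypothetical stand-ins, not the actual game's logic:

```python
# Minimal branching-narrative sketch: a player choice mutates a world state,
# and the ending is derived from that state. Names/outcomes are hypothetical.

def rescue_mission(world, send_member, member_is_loyal):
    """One decision point: optionally send a team member to rescue hostages."""
    if send_member and member_is_loyal:
        world["hostages_saved"] = True
        world["member_alive"] = True
    elif send_member:
        world["hostages_saved"] = False
        world["member_alive"] = False   # a disloyal rescuer: everyone is lost
    else:
        world["hostages_saved"] = False
        world["member_alive"] = True    # safe choice, but the hostages are lost
    return world

def ending(world):
    """The reachable ending is a pure function of the accumulated world-state."""
    if world["hostages_saved"]:
        return "best ending"
    return "tragic ending" if not world["member_alive"] else "neutral ending"

state = rescue_mission({"hostages_saved": False, "member_alive": True}, True, True)
print(ending(state))   # different choices at the branch reach different endings
```

Even this tiny example shows the "controlled choice" point in the text: every path is pre-authored, yet the player experiences the ending as a consequence of their own decision.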





[FEEDBACK DIAGRAM LABELS // VIRTUAL WORLD | PHYSICAL WORLD | INITIAL CONDITIONS > USER INTERACTION > INPUT FROM USER > DECISION PROCESS > ENVIRONMENT STATE CHANGE > EMERGENT BEHAVIOUR > FEEDBACK TO ENVIRONMENT > SYSTEM RESPONSE TO INTERACTION]


// B.01.04 - INTERFACING

Creating interaction between a user and a virtual space is not a new concept. Text-based inputs and graphic interfaces, such as those on smart phones and other portable electronic devices, have ingrained basic interactions with the digital into society. Digital systems such as those used in transport infrastructure also link users to directorial virtual systems 6. However, all of these systems translate digital information into the medium of the screen: a two-dimensional text and graphical interface. Mixed reality, which includes both augmented and virtual reality, has begun to merge the physical and virtual worlds to create environments where users can comprehend both physical and digital spaces at once; this will be expanded on later. The issue then becomes how to create a tangible relationship between the virtual and the physical. One solution being explored is based on the non-linear narrative events described previously, and a feedback system between user inputs and virtual decision-making. Borrowing ideas from the Real-Time Object Tracking Systems project from the Interactive Architecture Lab 7, the diagram to the left demonstrates how user interaction in the physical world would feed into virtual systems, whose decisions can in turn affect the real-world environment. With certain parameters established, users can experience physical feedback through interaction with digital systems.

LEFT // FEEDBACK DIAGRAM BETWEEN VIRTUAL AND PHYSICAL WORLDS


// B.02.00 - PROTOTYPING > B.02.01_SKETCH01 : ARDUINO | SENSORS

Thus, for the first prototyping task, I began to explore what types of analog inputs could be utilised, and what interfaces permit the transfer of information. Arduino is a microcontroller platform that uses open-source hardware and software for controlling digital and physical devices. It features a number of input and output pins which can be connected to a wide range of readily available sensors and actuators. The benefit of the Arduino is that it is easy to learn and use through its integrated development environment (IDE). Through this interface, simple scripts can be written that read and write data to and from the Arduino. In the prototype shown to the right, an ultrasonic distance sensor is used to read the distance from the sensor and write a voltage to a DC motor, allowing for variable speed. If the user came within a certain distance of the sensor, a buzzer would play Ride of the Valkyries. This same script was later adapted to power a servo and a stepper motor, and to use light and sound sensors in similar ways. While very rudimentary, it introduced the basic scripting skills and interfaces needed to develop this further. With additional power supplies for higher-voltage hardware, or various other inputs using different communication protocols, the Arduino can potentially be used to control much more robust systems.

TOP RIGHT // PSEUDOCODE FOR READING AND WRITING IN ARDUINO
BOTTOM RIGHT // ARDUINO SETUP WITH ULTRASONIC DISTANCE SENSOR AND DC MOTOR WITH 3D PRINTED FAN



// LOOP START
//   READ DISTANCE (cm) FROM ULTRASONIC DISTANCE SENSOR
//   CONDITION 1: DISTANCE < 90.00 cm
//     TRUE  > REMAP DISTANCE (0-90) TO MOTOR SPEED (60-255) > WRITE MOTOR SPEED
//     FALSE > CONDITION 2: DISTANCE < 100.00 cm
//       TRUE  > SET TO MAX MOTOR SPEED (255) > WRITE MOTOR SPEED > WRITE HIGH VALUE TO BUZZER PIN
//       FALSE > CONDITION 3: DISTANCE > 100.00 cm
//         TRUE > WRITE TO CONSOLE "OUT OF RANGE"
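The sensor-to-motor logic of this sketch can be outlined in Python. Pin reads and writes are replaced by plain return values so the control flow can be inspected on its own; the remap range follows the pseudocode, though the exact mapping direction (nearer means slower or faster) is an assumption:

```python
def remap(x, in_min, in_max, out_min, out_max):
    """Equivalent of Arduino's map(): linearly rescale x between two ranges."""
    return (x - in_min) * (out_max - out_min) / (in_max - in_min) + out_min

def respond(distance_cm):
    """Return (motor_speed, buzzer_on, console_message) for one sensor reading,
    following the three conditions of the loop: <90 cm remaps distance to a
    variable motor speed; 90-100 cm runs the motor flat out and sounds the
    buzzer; beyond that the reading is reported as out of range."""
    if distance_cm < 90.0:
        return int(remap(distance_cm, 0, 90, 60, 255)), False, None
    if distance_cm < 100.0:
        return 255, True, None
    return 0, False, "OUT OF RANGE"
```

On the Arduino itself the first return value would go to `analogWrite()` on the motor pin and the second to the buzzer pin; here they are simply returned for testing.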


// B.02.00 - PROTOTYPING > B.02.02a_DETECTING USERS

While Arduino sensors are simple to use, they are limited in the amount and type of data they can send, so their utility is constrained to specific, generally smaller-scope circumstances. The Microsoft Kinect, however, is a piece of hardware that facilitates interaction between users and interfaces using a combination of an RGB camera and a depth sensor, which together track users in three-dimensional space within the camera's range. The Kinect can therefore easily be used through software to develop interactive installations such as 'Cell' by James Alliban and Keiichi Matsuda 8. This installation tracked the users who stood in front of the screen and mapped onto them a series of randomly assigned personalities from online profiles, as a comment on constructed personas. This precedent shows the capabilities of the hardware, but doesn't tackle the problem of feedback: beyond the novelty of making objects on the screen move, the user is given no incentive for continual interaction that couldn't equally be witnessed in another user's interaction. However, this hardware does provide a good starting point for working with motion sensors.

RIGHT // "CELL" INSTALLATION BY JAMES ALLIBAN AND KEIICHI MATSUDA 4





// READ KINECT SKELETAL TRACKER
// TRACK TRIGGER BOXES TO USER WIRE MODEL
// ASSIGN HITBOXES TO USERS' HANDS
// FOR EACH HAND (LEFT, RIGHT):
//   WAIT FOR HITBOX REGISTRATION (HAND IN TRIGGER BOX)
//     TRUE > DURATION > ASSIGNED TIME VALUE FOR TRIGGER?
//       FALSE > RESET
//       TRUE  > USER HITBOX FOLLOWING TRIGGER BOX PATH?
//         FALSE > RESET
//         TRUE  > HAND CONDITION MET
// LEFT AND RIGHT CONDITIONS BOTH MET (AND) > WRITE TRUE TO FIREFLY + ARDUINO


// B.02.00 - PROTOTYPING > B.02.02b_SKETCH02 : KINECT | GRASSHOPPER | FIREFLY | ARDUINO

The base functions of the Kinect and its inbuilt skeletal tracker allow users to be tracked within the camera's range. This could be used for location-based triggers, such as moving to a specific location to activate a narrative event, or for simple tracking functions to maintain the location of a user within the environment. However, to extend the functionality of the Kinect, this prototype explores the use of physical gestures. Just as smart phones have ingrained the 'swipe' gesture into society, common gestures such as pushing or pulling are motions already associated with generating results. Using the Kinect, it is possible to attach both hit boxes and trigger boxes to the user's skeletal model. Hit boxes were attached to the model's hands, while triggers were placed 300mm in front of the model. Using scripted logic that prevented 'accidental' triggering through a number of conditionals, as shown on the left, the script was set up to detect a 'pushing' motion. For this prototype, Firefly was used to output a signal through to the Arduino. OpenCV (Open Source Computer Vision) is a series of libraries enabling computer vision. This, coupled with further functionality from Processing, also allows for more general capture of movement and facial features through regular camera streams. Together with the smaller-scale sensors of the Arduino and the Kinect, these technologies enable a variety of movement data to be transferred from the real world into the virtual.

TOP LEFT // PSEUDOCODE FOR SKELETAL TRACKING WITH KINECT
BOTTOM LEFT // SKELETAL TRACKING AND HIT BOX RECOGNITION
BOTTOM RIGHT // OPEN CV + PROCESSING
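The conditional chain behind the push gesture can be sketched in plain Python, independent of the Kinect SDK. The 300mm trigger offset comes from the text; the dwell time and box tolerance are illustrative assumptions, and per-frame tracking data is modelled as simple tuples:

```python
# Simplified push-gesture detector: both hand hitboxes must enter their trigger
# boxes, dwell there for a minimum duration, and then follow the trigger box
# path, before a TRUE is emitted (in the prototype, via Firefly to the Arduino).

TRIGGER_OFFSET_MM = 300   # triggers sit 300 mm in front of the skeletal model
MIN_DURATION_S = 0.5      # hypothetical dwell time to avoid accidental triggers

def hand_in_trigger(hand_pos, trigger_pos, tolerance_mm=100):
    """True if a hand hitbox overlaps its trigger box (axis-aligned check)."""
    return all(abs(h - t) <= tolerance_mm for h, t in zip(hand_pos, trigger_pos))

def push_detected(left_samples, right_samples, dt):
    """Each *_samples item is (hand_pos, trigger_pos, following_path).
    Fires only when BOTH hands dwell in their triggers long enough AND
    follow the trigger box path, mirroring the AND condition in the chart."""
    for samples in (left_samples, right_samples):
        dwell = sum(dt for pos, trig, _ in samples if hand_in_trigger(pos, trig))
        if dwell < MIN_DURATION_S:
            return False                       # accidental brush: reset
        if not all(follow for _, _, follow in samples):
            return False                       # hand strayed off the path: reset
    return True                                # write TRUE downstream
```

In the actual sketch the per-frame positions would come from the Kinect skeletal tracker inside Grasshopper, with Firefly passing the boolean on to the Arduino.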



// B.02.00 - PROTOTYPING > B.02.03_COLLABORATION, RESEARCH, AND ELECTRICAL NONSENSE

Having satisfactorily explored methods of user input through motion, the next step was to understand how the virtual system could change the physical environment. We needed to investigate ways of sending signals or commands that produced immediate feedback for the user in real time. As the first Arduino prototype had utilised variable fan speed, this was the starting point for research. The primary issue was how to safely control large fan equipment using the microcontrollers. The top circuit diagram to the right shows the theoretical connection of 240 V AC to PWM using a MOSFET transistor controlled via Arduino. However, with our lack of knowledge of proper electronics, working with high voltages was potentially lethal. DMX512 (Digital Multiplex), a standard digital communication protocol for stage lighting and effects, provided a potential solution: DMX libraries exist for both Arduino and Processing which allow signals to be sent through those interfaces. However, the main restriction in this case was cost per unit versus the relative control granted. One DMX-controlled fan cost upwards of $1000 per unit, and the amount of air generated wouldn't necessarily create the immediate physical feedback required, so further research into DMX systems was discounted. The solution then became to use a series of motors acting as pulleys, which could pull and release lengths of cord attached to the fabric. While relatively simple, this fulfilled the need to quickly actuate the skin, expanding and contracting it as needed. Communication between each of the motors and an Arduino board would, however, have to be achieved wirelessly. Fortunately, wireless modules exist for microcontrollers, facilitating this communication.

TOP RIGHT // CIRCUIT DIAGRAM FOR 240V A.C TO PWM
BOTTOM RIGHT // SCHEMATICS FOR WIRELESS CONNECTIONS TO ARDUINO NANO BOARDS 5





// B.04.00 - PROPOSAL > B.04.01_WARFA[IR] + PA[I]RASITE

These two proposals were developed as a culmination of all the prior research. All of the information from the investigation into interactivity was shared with Tony Yu, who in turn shared research into Processing, Arduino sensors, and the wireless communication suggested on the previous page. As such, these two proposals reflect this shared knowledge, yet focus on two different aspects of new media. WARFA[IR] (Warfare), by Tony Yu, was a proposal for an installation utilising gameplay elements over a table-top grid of inflatable voxels. Users could use gestures to affect an agent-based simulation projected onto the gameboard, which would in turn affect how the voxels rose or fell. My proposal, PA[I]RASITE (Parasite), was an organic inflatable which users could interact and play with. The installation space would detect things like where people were in the space, or how fast they were moving. A state machine would then govern the behaviour of the Parasite, changing its shape or colour depending on a series of conditions. Both of these installations used similar pulley systems to achieve immediate responsive movement, and relied on user interaction to stimulate reactions from the system. In both cases, a simple detail (over page) could be multiplied depending on the scale of the installation.

TOP RIGHT // CONCEPTUAL DRAWING FOR WARFA[IR] 6
BOTTOM RIGHT // CONCEPTUAL DRAWING FOR PA[I]RASITE 7




// B.04.00 - PROPOSAL > B.04.02_REFLECTION ON FIRST PROPOSALS

Both these projects fit the criteria we had set out to achieve. With regards to Parasite, I feel it would have successfully created an interactive installation space. If successfully implemented, users would have been able to experience feedback between their movements and gestures within the space and the actions of the inflatable. As suggested earlier, the response in the physical environment, and the reaction of users to these changes, are paramount in determining the success of the installation. In reflection, however, both proposals rely on mechanical systems that are situational and somewhat unique to their purpose. For example, the frame and pulley rig system could not easily be translated into another inflatable project. While this does not necessarily diminish the value of the experience of the installation, it does limit the scope of what is achievable with electronics and actuated systems. To further the development of the proposal, we began looking into ways in which we could give the material additional properties. The aim therefore shifted from imposing movement onto the material through a centralised structural system, to giving the material its own properties aided by actuated electronics. In theory, a material with ingrained mechanics can be transposed and developed independently of its function.

TOP // BEHAVIOURAL SIMULATION MESH WITH KANGAROO
BOTTOM LEFT // PULLEY DETAIL FOR PA[I]RASITE
BOTTOM RIGHT // PULLEY DETAIL FOR WARFA[IR]



// B.04.00 - PROPOSAL > B.04.03_MATERIAL STUDIES

Many systems with locally actuated behaviour already exist, accomplished in a number of different ways. The example to the top right uses strands of shape-memory alloy which coil when heated 9. These strands can be woven into fabric to make it bend to a desired shape, mimicking muscle-like movement. The other two examples use pneumatics as actuators. The Self-Actuating Origami project uses pneumatics to trigger a series of hinges which can be folded or unfolded to change form 10; these hinges can be interlinked into a modular system in order to control a larger structure. The Aegis Hyposurface project uses pneumatic pistons attached directly to a number of interconnected panels to give the impression of a continuous triangulated surface 11. These three projects show how locally controlled systems can produce a wide range of behaviours and states, yet are not restricted to one configuration of use. However, while these projects demonstrate the benefit of this material-based approach, none of them are suited to soft structures, due to the hardware requirements of complex pneumatic piston systems and heated coils. In addition, simple tube systems such as those used in the Self-Actuating Origami project are unfeasible because they only provide on/off states. Therefore, with respect to the proposal for this inflatable, stepper motors, which can be controlled to a fine degree of accuracy via a spooling system, are much better suited.

TOP LEFT // SELF-ACTUATING ORIGAMI BASED MATERIAL 8
TOP RIGHT // ROBOTIC FABRIC 9
BOTTOM // AEGIS HYPOSURFACE 10

- 44 -


- 45 -


- 46 -


// B.05.00 - PROPOSAL II > B.05.01_CHRYSA[IR]LIS | DESIGN

The design for CHRYSA[IR]LIS (Chrysalis) centres around an individually controlled module which can be multiplied and scaled as required. Each panel is equipped with a 3D printed stepper motor mount which can be sewn in between adjoining panels. A spool attached to each motor connects to the adjacent motor mount to form a continuous chain of pulleys, each of which can be individually controlled via a wireless module. Each nano controller can control up to 4 independent motors, all of which can be controlled via a single networked Arduino unit with wireless adaptors. LEFT // DIAGRAM OF CONSTRUCTION MIDDLE // ACTUATOR MOTION RIGHT // PROTOTYPE WITH 3D PRINTED MOUNT
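The addressing scheme implied above, with up to four motors per nano controller coordinated from a single networked unit, can be sketched in a few lines. This is a Python stand-in for illustration only; the one-motor-per-panel mapping and grid sizing are assumptions, not details from the proposal:

```python
from math import ceil

MOTORS_PER_CONTROLLER = 4  # from the proposal: each nano drives up to 4 motors

def motor_address(motor_index: int) -> tuple:
    """Map a global motor index to (controller_id, channel 0-3)."""
    return divmod(motor_index, MOTORS_PER_CONTROLLER)

def controllers_needed(rows: int, cols: int) -> int:
    """Nano controllers required for a rows x cols panel grid,
    assuming one motor per panel (a simplifying assumption)."""
    return ceil(rows * cols / MOTORS_PER_CONTROLLER)
```

The networked Arduino would then broadcast `(controller_id, channel, steps)` triples over the wireless link, leaving each nano responsible only for its four local motors.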

- 47 -


// B.05.00 - PROPOSAL II > B.05.02_CHRYSA[IR]LIS | PROTOTYPE This prototype was constructed to test the movement of a single panel. By manually pulling on a cord attached to the opposite side of the panel, this test confirms that the imagined movement of the panel can be achieved. While the surface area remains the same due to the inflation, the tightening of the panel itself is enough to create movement in the fabric’s skin once this behaviour is applied across the adjacent panels. This material behaviour could then be simulated through Grasshopper and Kangaroo. The example to the right shows the initial test simulations of the actuated movement applied across a grid of these panels. By controlling the degree to which each panel contracts, larger-scale patterns can be developed across the fabric’s skin. BELOW // MOVEMENT TEST PROTOTYPE RIGHT // SIMULATION OF BEHAVIOUR USING KANGAROO 11
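The idea of driving a larger-scale pattern by varying each panel's contraction can be prototyped outside of Grasshopper as well. The toy Python function below (not the Kangaroo simulation itself, just an illustrative stand-in) assigns each panel in a grid a contraction factor that falls off with distance from a focal point, the kind of field that would then drive the per-panel spool lengths:

```python
import math

def contraction_field(rows, cols, focus_r, focus_c, radius):
    """Assign each panel a contraction factor in [0, 1] that falls off
    linearly with distance from a focal panel -- a stand-in for the
    larger-scale patterns driven across the fabric's skin."""
    field = []
    for r in range(rows):
        row = []
        for c in range(cols):
            d = math.hypot(r - focus_r, c - focus_c)
            row.append(max(0.0, 1.0 - d / radius))
        field.append(row)
    return field
```

Replacing the focal point with a tracked user position would turn the same field into the retraction and attraction behaviours explored in the next section.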

- 48 -


- 49 -


// B.05.00 - PROPOSAL II > B.05.03_CHRYSA[IR]LIS | BEHAVIOUR The behaviour of the Chrysalis could then be developed through a simple state machine system. By determining a number of criteria for interaction, such as ‘user enters tracking area’, a number of reaction states, and therefore positions of the inflatable, can be established. The state machine on the right demonstrates a number of different events and causes which contribute to the narrative of the experience. Using a form similar to that used for the Parasite, these behaviours could be simulated by actuating certain panels to create movement in the overall form. The first example behaviour, ‘defensive’, shows the inflatable retracting from an arbitrary point in space representing a user. The lower example shows the installation reacting to a person standing close and moving to interact with them, demonstrating ‘affection’. By catering for a range of possible actions through testing and extending the state machine, and given the relative randomness of each unique user’s actions, every experience will be seemingly unique. TOP RIGHT // EXAMPLE STATE MACHINE FOR BEHAVIOUR RIGHT // RETRACTION FROM MOVING POINT - ‘DEFENSIVE’ 12 BOTTOM RIGHT // ATTRACTION TO SINGLE USER - ‘AFFECTION’ 13
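A state machine of this kind is straightforward to prototype in code. The Python sketch below uses the states named in the text (‘defensive’, ‘affection’) together with hypothetical events and transitions for illustration; the real transition table would come from the diagram:

```python
# Transition table: (current_state, event) -> next_state.
# States 'defensive' and 'affection' come from the text; the 'idle' and
# 'curious' states and all event names are illustrative assumptions.
TRANSITIONS = {
    ("idle", "user_enters_tracking_area"): "curious",
    ("curious", "user_approaches_quickly"): "defensive",
    ("curious", "user_stands_close"): "affection",
    ("defensive", "user_retreats"): "idle",
    ("affection", "user_leaves"): "idle",
}

class Chrysalis:
    def __init__(self):
        self.state = "idle"

    def on_event(self, event: str) -> str:
        """Advance the machine; unknown (state, event) pairs leave
        the current state unchanged."""
        self.state = TRANSITIONS.get((self.state, event), self.state)
        return self.state
```

Each resulting state would map to a target contraction pattern for the panels, so the sequence of user-triggered events, rather than a fixed script, drives the narrative.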

- 50 -


- 51 -



// B.05.00 - PROPOSAL II > B.05.04_REFLECTION The development of material properties, rather than a single installation, proved to be a far more interesting proposal. However, many aspects of the proposal remain unresolved. As raised in the feedback, many physical hardware problems such as cable management, assembly, and component efficiency still lack a degree of refinement. In addition, a number of other considerations would need to be taken into account in order to further explore this design. These range from understanding how the construction could be simplified and scaled up, to how each individual panel could potentially affect neighbouring panels in order to develop emergent behaviours. Other actuator systems would also need to be investigated to determine the best method of creating individually moving panels without the need for somewhat clunky mechanical fixings. While the project itself successfully proposes the idea of an interactive installation which enhances the experience through integration with digital technologies, issues of implementation still inhibit the feasibility of the Chrysalis. LEFT // CONCEPT DRAWING FOR CHRYSA[IR]LIS 14


// B.06.00 - PART B BIBLIOGRAPHY 1. Philip Beesley, “Epiphyte Chamber”, Interactive Installation, Museum of Modern and Contemporary Art 2013, http://philipbeesleyarchitect.com/sculptures/1312_MMCA_EpiphyteChamber 2. Sophia Psarra, Architecture and Narrative: The Formation of Space and Cultural Meaning (Routledge, 2009). 3. Mark Owen Riedl and Vadim Bulitko, “Interactive Narrative: An Intelligent Systems Approach,” AI Magazine 34, no. 1 (2012), http://www.cc.gatech.edu/~riedl/pubs/aimag.pdf 4. Magy Seif El-Nasr, “Interactive Narrative Architecture Based on Filmmaking Theory,” International Journal on Intelligent Games and Simulation 3, no. 1 (2004). 5. Sandy Louchart and Ruth Aylett, “Managing a Non-linear Scenario - A Narrative Evolution,” Virtual Storytelling: Using Virtual Reality Technologies for Storytelling 3805 (2005). 6. William Mitchell, e-topia (The MIT Press, 2000).

7. Menglin Wang, “Real-Time Object Tracking Systems,” Interactive Architecture Lab, written 20 August 2016, http://www.interactivearchitecture.org/real-time-object-tracking-systems.html 8. Filip Visnjic, “Cell [openFrameworks, Kinect],” Creative Applications, written 30 October 2011, http://www.creativeapplications.net/openframeworks/cell-openframeworks-kinect/ 9. Rebecca Kramer, “Robotic Fabric turns any surface even clothing into an automaton,” ecouterre, viewed 28 August 2016, http://www.ecouterre.com/robotic-fabric-turns-any-surfaceeven-clothing-into-an-automaton/robotic-fabric-perdue-3/ 10. John Paulson et al., “Self-Actuating Origami-based material,” newelectronics, viewed 29 August 2016, http://www.newelectronics.co.uk/electronics-news/self-actuated-origami-basedmaterial-can-be-both-stiff-and-pliable/116574/ 11. Mark Burry, “Aegis Hyposurface,” Mark Burry Professional and Academic Research, viewed 29 August 2016, https://mcburry.net/aegis-hyposurface/

Images 1. Philip Beesley, “Epiphyte Chamber”, Interactive Installation, Museum of Modern and Contemporary Art 2013, http://philipbeesleyarchitect.com/sculptures/1312_MMCA_EpiphyteChamber 2. Heavy Rain - Video Game Screenshot, Quantic Dream, Published by Sony Entertainment 2010, https://jetsetnick.wordpress.com/2011/07/18/the-narrative-impact-of-heavy-rain/ 3. Mass Effect Wiki, “Suicide Mission Flowchart”, http://masseffect.answers.wikia.com/wiki/Suicide_mission_flowchart 4. Keiichi Matsuda and James Alliban, “Cell”, Interactive Installation, Alpha Ville Festival 2011, http://km.cx/projects/cell/ 5. Tony Yu, Arduino Communication Diagrams, University of Melbourne, Wind Architecture: Masters of Architecture Studio 5, 2016.


6. Tony Yu, “WARFA[IR]”, Digital Painting, University of Melbourne, Wind Architecture: Masters of Architecture Studio 5, 2016. 7. Tony Yu, “PA[I]RASITE”, Digital Painting, University of Melbourne, Wind Architecture: Masters of Architecture Studio 5, 2016. 8. John Paulson et al., “Self-Actuating Origami-based material,” newelectronics, viewed 29 August 2016, http://www.newelectronics.co.uk/electronics-news/self-actuated-origami-basedmaterial-can-be-both-stiff-and-pliable/116574/ 9. Rebecca Kramer, “Robotic Fabric turns any surface even clothing into an automaton,” ecouterre, viewed 28 August 2016, http://www.ecouterre.com/robotic-fabric-turns-any-surfaceeven-clothing-into-an-automaton/robotic-fabric-perdue-3/ 10. Mark Burry, “Aegis Hyposurface,” Mark Burry Professional and Academic Research, viewed 29 August 2016, https://mcburry.net/aegis-hyposurface/ 11. Tony Yu, Kangaroo Simulations of Material Behaviour, University of Melbourne, Wind Architecture: Masters of Architecture Studio 5, 2016. 12. Tony Yu, ‘Defensive’ Behaviour Simulation, University of Melbourne, Wind Architecture: Masters of Architecture Studio 5, 2016. 13. Tony Yu, ‘Affection’ Behaviour Simulation, University of Melbourne, Wind Architecture: Masters of Architecture Studio 5, 2016. 14. Tony Yu, “CHRYSA[IR]LIS”, University of Melbourne, Wind Architecture: Masters of Architecture Studio 5, 2016.


// C.01.00 - MOVIE

Yet, through other digital media, technologies such as virtual and augmented reality are exploring other ways for users to interact with design. ‘Augmented City’ (2010), a short movie by Keiichi Matsuda, explores a city in which the physical and virtual are combined into a single, continuous whole 2. Meanwhile, in the video game industry, virtual reality technology provides users with fully immersive gaming environments to build and destroy. By actively relying on feedback between the user and the interface, it is easy to imagine how immersion and engagement can be achieved far more readily than with static, non-responsive design. RIGHT // KEIICHI MATSUDA - AUGMENTED CITY 3D 5



// C.01.00 - MOVIE > C.01.01_SKETCH 03: UNITY | ANDROID SDK | VIRTUAL REALITY The studio was again tasked with generating a movie based on the footage from the workshop. However, there was a lack of footage that could be used to form a coherent narrative which I felt properly captured the essence of the design process. Instead, I decided to explore different representation methods, which eventuated in a virtual reality test through the Unity game engine. A virtual model could be built which stores a number of trigger points that can be interacted with either through motion sensors or through an augmented reality system using a mobile device or tablet camera. A UI could then be developed which would open animations or videos showing aspects of the design which could not ordinarily be seen. This could also be adapted to show drone or 360° footage if applicable. For this, I got three components working. Firstly, the camera in the game engine responds to the gyroscope in the mobile device in order to look around. Secondly, a UI is tracked to the screen in the form of a button toggling between Google Cardboard and single-viewer modes. Lastly, a single plane reads the input from the rear camera of the phone and uses it as a surface texture. While rudimentary in appearance, this opens potential for a number of different applications, and can easily be combined with any of the previous prototypes to develop a sequence of interactive events. The next step in developing these ideas, however, is understanding when and how data should be read and written, and what unique opportunities are opened by integrating this technology with inflatable design.

- 58 -


- 59 -

