Adaptable Space.
Matthew Hunter.
An investigation into analytic spaces with the capacity to adapt to the changing demands of human behaviour and social interaction.
ACKNOWLEDGEMENTS
Tam Nguyen, Client & Supervisor, UNSW Faculty of Built Environment
Stephen Peter, Lecturer, UNSW Faculty of Built Environment
Table Of Contents.
ACKNOWLEDGEMENTS
ABSTRACT
HYPOTHESIS
TABLE OF CONTENTS
1.0 INTRODUCTION
2.0 CONCEPTUAL FRAMEWORK
2.1 Background Information
2.2 Precedent Studies
Building Around the Mind
OpenFloor
Tunable Sound Cloud
Lost in Space
Human Activity Analysis: A Review
Enhancing Social Interaction in Elderly Communities
2.3 Theoretical Framework
Perception of the Senses through Computer Vision
Adaptable Environments
Architecture Capable of Enhancing
3.0 PROTOTYPE
3.1 Timeline
3.2 Methodology
3.2.1 Interpreting Sensory Data
Blob Tracking
Kinect
3.2.2 Sorting Sensory Data
Observation Analyses
Calculating Velocity
Calculating Permutation
Calculating Proximity
Calculating Point of Intersection
Identifying Social Interactions
3.2.3 Prototype Responds to Enhance Social Interactions
Prototype 1.0
Prototype 2.0
4.0 DISCUSSION
5.0 CONCLUSION
REFERENCE LIST
APPENDIX
ABSTRACT
HYPOTHESIS
The Adaptable Space project is an investigation into analytic environments with the capacity to adapt to the changing demands of their occupants.
Spaces can be dynamically adjusted in real time to redefine and enhance a space for social interactions. Digitally perceiving human activities and social interactions appears possible through the collection and analysis of sensory data. The experiments outlined within this report demonstrate a component of the desired system, which will be examined in further research and investigations.
Two mechanical prototypes were developed during the course of this project. These prototypes propose to develop an architecture capable of sensing and perceiving human movements and social interactions, with the objective of encouraging social dynamism. The prototypes aim to recognise and anticipate social interactions between 2 to 6 occupants. The experiments and research conducted examine studies and methods for collecting and interpreting computer vision sensory data. The skeleton tracking capabilities of the Microsoft Kinect are employed to develop a system capable of analysing components of human movement, deriving data such as the occupants' centre of mass and velocity. The architecture responds to prompt and stimulate social interactions by adjusting its spatial configuration to manipulate the circulation of its occupants.
Introduction.
The core purpose of architecture has always been to facilitate human needs. As needs evolve, so do the demands on a space. The project aims to explore the possibilities of a dynamically adaptable architecture that evolves with the individual and social needs of its occupants. It aims to "unfreeze architecture, to make it a fluid, vibrating, changeable backdrop for the varied and constantly changing modes of life" (Zuk 1970).

The purpose of this project is to develop an architectural system that reduces the environmental impact of a structure by increasing the life cycle of the building. The production of waste, use of energy, greenhouse gas emissions and depletion of natural resources associated with building construction and demolition have huge environmental implications (Australian Bureau of Statistics 2003). According to the Australian Bureau of Statistics, Australian construction and demolition of buildings contribute "eight million tonnes" of waste to landfill per year. This waste can be significantly reduced by increasing the architecture's longevity through expanding its capacity to adapt to evolving demands. In 1973, Norman Foster future-proofed the Willis Faber & Dumas headquarters building by anticipating changes in technology. At the time everyone was using typewriters, and Foster designed the building to be "wired for change" by ensuring that it was flexible enough to accommodate the technologies of "a future which was essentially unknown" (Foster 2007).

The 'Adaptable Space' project takes this approach to design by developing a flexible spatial configuration capable of completely transforming itself to facilitate adaptability. The project proposes a mechanised prototype of a dynamic architecture that facilitates adaptation by enhancing and extending the activities of its occupants. The prototype aims to develop an environment capable of perceiving human movements and interactions through the collection and sorting of computer vision sensory data. The focus of the prototype is to examine opportunities for implementing calculated perceptions in developing an adaptable environment that is capable of enhancing its social dynamics.
The objective of the space is to enhance social dynamism by guiding and encouraging identified and anticipated interactions. The system also seeks to recognise and remove instances of social isolation by prompting isolated occupants to interact. It decomposes components of human movement to extract the parameters of position and velocity, which help the system establish each occupant's direction of movement, acceleration, forward facing direction and centre of mass. The intention of the prototypes is to develop a system capable of recognising the social interactions of 2 to 6 occupants, and of determining the intensities of these interactions using time in conjunction with these parameters. The velocities and positions of all tracked occupants are calculated to project and anticipate potential intersections where occupants may meet and initiate an interaction.

The architecture responds to prompt users to interact, and to enhance interactions, by adjusting its spatial configuration to manipulate the way in which people circulate throughout the space. It links isolated occupants by generating passageways to others in isolation, or to those who are already taking part in an established interaction. The configuration is composed of a single transformable form that is capable of redefining the space by imitating established architectural mechanisms: walls, ceilings, openings and passageways.

There is great potential for further implementing analytic systems capable of perceiving human activities and interactions within many types of architectural applications. By embedding perception within architectural systems, architects can design to facilitate adaptability by sensing change rather than anticipating it. Perception of activities and interactions could be utilised to optimise thermal, visual, lighting and acoustic conditions, and to promote sharing and collaboration in space. The role of these sorts of intelligent systems will be paramount in anticipating change in the architectural design of the future.
Conceptual Framework.
Figure 1.0 - Prototype 1.0 Rack & Pinion Gear
Figure 1.1 - Prototype 1.0 Rack & Pinion Gear
Background Information

This section presents research conducted to conceptualise the ideas and direction of this project. The framework examines a series of precedent studies closely aligned to this research, ranging from literature to art installations. These studies have informed the project on both technological and conceptual levels.

Precedent 01: "Building Around the Mind" provides an insight into how systems of architecture and the physical environment can influence the operation of the human mind (Anthes 2009). The article examines a series of observational analyses that present the latest findings in brain research. This study suggests ways in which architectural systems can be implemented to enhance social interactions.

Precedent 02: "OpenFloor" is an interactive art installation that examines the ways in which humans interact with their physical environment (Cazan 2011). The installation tracks human movement using blob tracking. This influenced the technological direction of the prototype experiments outlined within the methodology.

Precedent 03: "Tunable Sound Cloud" is a research project that focuses on performance-based architecture with the capacity to tune itself to enhance and manipulate acoustic performance (Mani 2009). The project presents some key conceptual ideas that are relevant to the objective of the 'Adaptable Space' project: to develop an architectural system with the ability to perceive and adapt.

Precedent 04: "Lost in Space" is a research project framed as an investigation into how spatial situations can be manipulated to affect the behaviour and movement of humans occupying the space. The project experiments with 'smart materials' and their ability to change in physical form and structure; the materials used and discussed in the experiments range from latex rubber to electro-active polymer. It examines the possibilities for implementing these 'smart materials' within dynamic spaces (Gassmann & Muxel 2011). This research informed experiments conducted in building the prototype outlined in the methodology.

Precedent 05: "Human Activity Analysis: A Review" is a literature review that presents theories and methods for digitally perceiving human social interactions and activities using sensory technologies (Aggarwal & Ryoo 2011).

Precedent 06: The paper "Enhancing Social Interaction in Elderly Communities" is based on a research project that examines a perceptive architecture targeted at maintaining the physical, mental and emotional wellbeing of elderly people. The project's approach to forming a perception of a space's social dynamics in order to encourage and enhance social dynamism has significantly informed the conceptual framework and methodology of the 'Adaptable Space' project.

These studies provide a structure for supporting the underpinning theories developed in the following Theoretical Framework. This platform helps respond to the research question: Can the human senses be perceived digitally through computer vision to create adaptable environments capable of enhancing social interactions?
Precedent Study 01
Building Around the Mind
Description: 'Building Around the Mind', by Emily Anthes, was published in the April/May 2009 edition of Scientific American. Emily Anthes is a freelance science and health writer whose work has been published in a vast assortment of distinguished publications. She has a master's degree in science writing from MIT and a bachelor's degree in the history of science and medicine from Yale, where she also studied creative writing (Anthes 2011).
Problem/Concept/Issue: The article examines the possibilities of brain research in helping us better understand how the human brain responds to its physical environment, suggesting ways to "craft spaces that relax, inspire, awaken, comfort and heal" (Anthes 2009). The article presents a series of experiments that draw connections between the physical environment and the way in which the human brain functions. The experiments observe and compare the responses of people and their ability to process information under different environmental circumstances. In 2007, Professor Joan Meyers-Levy from the University of Minnesota reported that "ceiling height affects the way you process information". She conducted an experiment in which 100 people were randomly assigned to rooms with either an eight or 10 foot ceiling, where they were subjected to a series of tests. Participants were given a list of 10 sports and asked to sort them into categories. The results suggested that people in the room with higher ceilings experienced a more abstract thought process: they responded with much more abstract categories, such as "challenging" sports or "sports they would like to play", whilst those subjected to the lower ceilings responded with "more concrete groupings, such as the number of participants on a team" (Anthes 2009). In 2000, environmental psychologist Nancy Wells, from Cornell University, conducted studies suggesting that views of a natural setting help improve mental focus. Wells and her colleagues followed 7 to 12-year-old children before and after a family move, and discovered a link between the children exposed to views of natural settings and gains on a standard test of attention (Anthes 2009). The findings suggest the following:
- Lighter, brighter spaces with full-spectrum lighting increase alertness and help guard against depression.
- Access to views of a natural setting helps improve mental focus.
- Rooms intended mainly for relaxation should feature darker colours, dimmer lighting, and fewer sharp edges on furniture and bookshelves (sharp edges activate the part of the brain that alerts us to danger).
- Lower ceilings improve performance in detail-oriented tasks, whereas high ceilings encourage abstract creative thought.
Relationship and Benefit: The findings of the experiments and research presented in this paper respond to the question "how can we utilize the rigorous methods of neuroscience and a deeper understanding of the brain to inform how we design" (Edelstein, cited in Anthes 2009, p. 55). This paper influenced the ideas and direction of the project by demonstrating ways in which architecture can manipulate our thoughts and social interactions. This provides a platform for developing an architectural system capable of perceiving and adjusting itself to manipulate and enhance social interactions.
Precedent Study 02
OpenFloor
Description: The OpenFloor project is an interactive floor projection installation that tracks human movement using blob tracking. The floor projection interacts with its occupants by projecting animated dynamic elements that respond to their positions. OpenFloor was created by Vlad Cazan, who at the time was a 4th year undergraduate at Ryerson University (Toronto, Canada) studying Radio & Television Arts; Vlad is now a research assistant at Ryerson University. The installation took place in April 2010, and he has since released the packaged source files to an open source repository (Cazan 2011).
Problem/Concept/Issue: The installation is framed as a dynamic floor projection. The lack of documentation and available material suggests that the installation is conceptually unclear; the project seems to lack some substance, with the only stated objective being to understand "the ways people interact with the outside world, when they are not expecting it." Nevertheless, Vlad has successfully created an advanced system capable of measuring and interacting with human movements by integrating the advanced blob tracking algorithms of the OpenCV (Open Source Computer Vision) libraries into his own application.
Relationship and Benefit: An abundance of interactive projection installations have emerged in recent years. The significance of this particular project lies in the fact that Vlad has released it as an open source project, giving novice programmers such as myself access to the source code and technological methodology. This methodology has provided an insight into the limitations of blob tracking algorithms for tracking people within a space: they are heavily reliant on controlled lighting conditions, which are not guaranteed in the exhibition space. This persuaded the 'Adaptable Space' methodology to steer away from implementing blob tracking, inclining the project towards employing the skeleton tracking capabilities of the Microsoft Kinect.
Precedent Study 03
Tunable Sound Cloud
Description: The 'Tunable Sound Cloud' is a research project and installation in the form of a responsive ceiling element that adjusts its form to enhance the acoustic performance of the space. The structure is made of an array of triangulated hinged modules, with each point on the array being controlled by a pulley system and servos. Controlled through Grasshopper and Rhino, the Tunable Sound Cloud is actuated with an Arduino micro-controller and servo motors. It is an ongoing collaborative research project led by Toronto-based designer and architect Mani Mani. Mani's research and work has been published, reviewed and exhibited internationally. This particular project was exhibited in September 2009.

Problem/Concept/Issue: The 'Tunable Sound Cloud' aims to examine the connection between music of the past and the architecture that facilitated its performance at the time. It focuses on responsive systems in architecture that can be adjusted and optimized to meet the sound requirements of a specific musical performance. The project demonstrates a sophisticated and innovative approach to building a dynamic architectural system (Mani 2009).

Relationship and Benefit: This study is technologically relevant to my research, as it presents an alternative approach to developing a dynamic ceiling structure. The system relies heavily on gravity to pull points of the structure downwards, and this demonstrates the extent of the dynamics achievable from such a system. The aim of my project is to develop a much more dynamic structure, which persuaded my project to steer away from such a system. The project nevertheless presents some key conceptual ideas that are relevant to the objective of the 'Adaptable Space' project: to develop an architectural system with the ability to perceive and adapt.
Precedent Study 04
Lost in Space
Description: "Lost in Space" is a research study and series of experiments that attempts to find new possibilities for dynamic spaces with the ability to change their physical form and structure. The project is still a work in progress and is currently in the very early stages of its research, which will eventually lead to uncovering more sophisticated systems of dynamically changing architecture. It was exhibited in Germany in 2010. The project was conducted by Florian Gassmann and Andreas Muxel, both proven academics teaching as visiting professors at various universities across Germany. Florian is a qualified architect and works at the Institute of Design at the Faculty of Architecture at the University of Applied Sciences in Cologne, Germany. His research focus deals with 'the basics of physical space and its influence on human behaviour'. Andreas is an interaction and interface designer for the MARS-Exploratory Media Lab at the Fraunhofer Institute for Media Communication. His work explores the mixture of digital code and physical material, and the man-machine interface.
Problem/Concept/Issue: The project is an investigation into how spatial situations can be manipulated to affect the behaviour and movement of humans occupying the space. It looks in particular at the parameters within an environment that stimulate human behaviours and the ways in which people respond and interact with these (Gassmann & Muxel 2011). The research experiments with smart materials, and the parameters explored seem to be more focused on materiality than on spatial situations. The smart materials experimented with and discussed range from latex rubber to electro-active polymer. The project is still very much unresolved and has a long way to go before it achieves its outlined scope of works.
Relationship and Benefit: The scope of this project is very much in line with the research I am conducting: I am investigating systems of architecture with the ability to analyse and facilitate the changing needs of their occupants. The project provides an insight into theories of how spatial configurations can be manipulated to affect the behaviour and movement of occupants. Its implemented workflow for operating the mechanical prototype is the same as the one proposed by the 'Adaptable Space' project, which proposes a physical prototype controlled by a parametric Grasshopper model and driven by stepper motors. The authors have provided a detailed, documented methodology for this workflow, which has significantly influenced the methodology of the 'Adaptable Space' project.
Precedent Study 05
Human Activity Analysis: A Review
Description:
Relationship and Benefit:
This precedent study is based on an essay that examines various research papers on human activity recognition. The study was conducted by academics J. K. Aggarwal and M. S. Ryoo of The University of Texas at Austin and the Electronics and Telecommunications Research Institute. The paper was published in Volume 43, Issue 3 (April 2011) of the renowned journal ACM Computing Surveys (CSUR).
The ideas presented in the paper support developing theories and methods for digitally perceiving human social interactions and activities through sensory technologies. The objective of my research is to create a system capable of reconstructing a "high-level" human activity, such as a social interaction. The paper suggests that achieving this sophisticated level of reconstruction requires a string of atomic-level sub-activities, which it categorises into "four different levels: gestures, actions, interactions, and group activities". Body position and gesture help describe actions or activities, which then allows us to understand interactions involving two or more persons and/or objects, which in turn leads into group activities. This hierarchical logic can be directly applied to measuring human activities within the 'Adaptable Space' project.
Problem/Concept/Issue: The objective of this paper is to provide a complete overview of state-of-the-art human activity recognition techniques for computer vision. It discusses various types of approaches designed for the recognition of different levels of activities. The paper describes the hierarchical approach the human mind uses to measure human activity: analysing gestures to understand actions, then using actions to perceive interactions and group activities. These sub-activities can be processed and interpreted through this same hierarchical approach, modelling a high-level activity as a string of atomic-level sub-activities using techniques of computer vision (Aggarwal & Ryoo 2011). This is very significant to my project, as it provides a useful insight into how human position tracking and body posture analysis can be used to reconstruct the interactions and activities occurring in a space.
Precedent Study 06
Enhancing Social Interaction in Elderly Communities
Description: The paper 'Enhancing Social Interaction in Elderly Communities' is based on a research project that is currently in its early stages of development. The paper was co-written by academics Joshua J. Estelle (Computer Science and Engineering), Ned L. Kirsch (Dept. of Physical Medicine and Rehabilitation) and Martha E. Pollack (Computer Science and Engineering) of the University of Michigan, USA, in April 2006.
Problem/Concept/Issue: The paper examines a perceptive architecture targeted at maintaining the physical, mental and emotional wellbeing of elderly people. Social participation and relationships are identified as important contributing factors in achieving this, and the objective of the project is to reduce social isolation by enhancing the social interactions of elderly people. The project presents an innovative approach to maintaining social participation: a wireless sensor network tracks the location and co-location of the elderly residents of an assisted living facility, and this sensory data is used to construct a model of the facility's social network. The model allows the system to perceive occurring social isolation and prompt 'users' to participate in activities and interact with other occupants.
Relationship and Benefit: This project embodies the relevant idea of using sensory data of occupants' movements to form a perception of a space's social dynamics, and of using that perception to encourage and enhance social dynamism. The system identifies instances of social isolation and provides the occupants concerned with suggestive prompts to encourage and enhance social interactions. This approach of enhancing social dynamics by recognising and removing social isolation has considerably informed the methodology. The described method of gaining a perception of occurring social dynamics by constructing a model of the space's social network has also significantly informed the conceptual framework and methodology of the 'Adaptable Space' project.
Theoretical Framework.
Can the human senses be perceived through computer vision to create adaptable environments capable of enhancing social interactions?

Perception of the Senses through Computer Vision

Human beings have a multitude of senses. Although we are consciously unaware of it, the human mind is constantly organizing and interpreting sound, speech, touch, smell, taste and sight sensory information to construct a perception of our surroundings. The primary sensory device that humans employ for measuring human activity and interactions is sight (Marr 1982). Through vision we are able to interpret social interactions and activities by reading the body language and actions of individuals. Using this hierarchical approach, the mind is able to model a high-level activity as a string of atomic-level sub-activities. This hierarchy of sub-activities is discussed in the paper 'Human Activity Analysis: A Review' (Aggarwal & Ryoo 2011):
- Gestures/Body Position: Gestures are elementary movements of a person's body part, and are the atomic components describing the meaningful motion of a person. 'Stretching an arm' and 'raising a leg' are good examples of gestures (Aggarwal & Ryoo 2011).
- Actions: Actions are single-person activities that may be composed of multiple gestures organized temporally, such as 'walking', 'waving', and 'punching' (Aggarwal & Ryoo 2011).
- Interactions: Interactions are human activities that involve two or more persons and/or objects. For example, 'two persons fighting' is an interaction between two humans, and 'a person stealing a suitcase from another' is a human-object interaction involving two humans and one object (Aggarwal & Ryoo 2011).
- Group Activities: Finally, group activities are the activities performed by conceptual groups composed of multiple persons and/or objects. 'A group of persons marching', 'a group having a meeting', and 'two groups fighting' are typical examples (Aggarwal & Ryoo 2011).

These sub-activities can be processed and interpreted through this same hierarchical approach using techniques of computer vision (Aggarwal & Ryoo 2011). Blob tracking algorithms, and more recent forms of gesture recognition technology such as the Microsoft Kinect, are allowing components of human movements and interactions amongst subjects to be analysed and decomposed. By linking this decomposed data to real world studies and observation analyses of human activities, perceptions of the occurring activities and interactions can be formed.
These perceptions provide an insight into the nature of the occurring individual activities and social interactions.
Adaptable Environments

Adaptability in architecture generally refers to the ability of a space to be flexible enough to accommodate the changing demands of its occupants (Fox & Kemp 2009, p. 96). This traditionally refers to adaptable systems that are performance based and focused on optimizing the requirements of a building, for example an adjustable louver system to encourage ventilation. Another example is Building Automation Systems (BAS). BAS are computerised intelligent control systems, designed to control everything from the "lighting, climate, security and entertainment" of the building (Fox & Kemp 2009, p. 98). These systems are typically motivated by convenience and energy-use optimisation. Although these types of systems have the ability to adapt to changing demands, they generally rely on a schedule (Daintree Networks 2010) or on occupants manually manipulating the system controls. They are not interactive in the sense of being able to sense and form a perception that is shaped by "learning, memory and expectation" (Gregory 1987, p. 598–601) of their occupants and environment (Fox & Kemp 2009, p. 96). They seek to detect and react, but do not perceive.

Recent forms of adaptable environments are emerging that are capable of forming perceptions of occupant presence and environmental conditions. The 'Nitrogen Logic Automation Controller' project is a home automation and lighting control system using the Microsoft Kinect: "a powersaving automation controller that can run standalone in simple automation systems" (Nitrogen 2011). Unlike motion or occupation sensors, the system uses the skeleton tracking capabilities of the Microsoft Kinect, which is able to detect presence rather than motion. The project proposes a system capable of using gesture to perceive the activities and interactions of its occupants (Nitrogen 2011). This application demonstrates an environment facilitating adaptability by constructing a perception to enhance and extend the activities of its occupants.
Architecture Capable of Enhancing

The Oxford Dictionary defines the notion of enhancing as an act of "intensifying, increasing, or further improving the quality, value, or extent of" a state of being (Oxford Dictionary Online 2011). Recent forms of perceptive architecture are emerging to address the need to enhance and extend the activities of their occupants. The primary targets for enhancing and extending activities in the past have been 'the military, the elderly, and the handicapped' (Fox & Kemp 2009, p. 122).
Perceptive architectures targeted at reducing the social isolation of elderly people through the enhancement of social interactions are discussed in the paper 'Enhancing Social Interaction in Elderly Communities' (Estelle, Kirsch & Pollack 2006). The paper explores technologies aimed at measuring the social networks of aged care facilities to identify occurring instances of social isolation. The described system uses wireless sensor networks to track the location and co-location of elderly residents. It uses this sensory data to construct a model of the social network, which allows the system to perceive occurring social isolation and prompt 'users' to participate in activities and interact with other occupants. This system proposes an architecture with the ability to sense and perceive information, which allows it to adapt by prompting and enhancing social interactions. This approach to enhancing is achieved by manipulating the behaviour of occupants through an architectural system that prompts users to respond in certain ways (Estelle, Kirsch & Pollack 2006).

Manipulative environments are discussed in the article "Building Around the Mind" (Anthes 2009), published in the April/May 2009 edition of Scientific American. The article explores the possibilities of brain research in helping us better understand how the human mind responds to its physical environment. It examines a series of observation analyses and experiments that link the physical environment to the way in which the human brain functions. The experiments observe and compare the responses of people and their ability to process information under different environmental circumstances. The studies provide an insight into how architecture can influence the way in which the human mind functions, and how it can enhance particular activities and social interactions. The studies examined in this article suggest the following:
- Spaces intended for maintaining focus should feature lighter colours, brighter lighting and sharp edges; these activate the part of the brain that alerts us to danger (Anthes 2009).
- High ceilings encourage abstract creative thought (Anthes 2009).
- Lower ceilings improve performance in detail-oriented tasks (Anthes 2009).
- Spaces intended to calm and relax their occupants should feature darker colours, dimmer lighting and fewer sharp edges on furniture (Anthes 2009).
Prototype
Timeline.

Week 01: Finalize proposals. Design and build progress blog.
Week 02: Commence research. Conduct precedent studies. Arduino & Grasshopper experiments. Interim prototype design.
Week 03: Continue research. Conduct precedent studies. Arduino, Firefly, Grasshopper experiments. Refined project proposal.
Week 04: Continue research. Conduct precedent studies. Conduct observation analysis. Refined project proposal. Arduino, Firefly, Grasshopper & stepper motor experiments. Research on human behaviour and interactions.
Week 05: Continue research. Conduct precedent studies. Conduct observation analysis. Observation analysis / research applied to matrix. Construction of prototype. *Order all necessary parts for final model.
Week 06: Continue research. Conduct precedent studies. Prototype fabrication – build and present interim model in seminars. Human behaviour and interaction matrix complete. *Presentation.
Week 07: Prototype fabrication – develop interim model. Documentation for report. Commence report – abstract, hypothesis, introduction and methodology.
Week 08: Prototype fabrication – develop interim model. Documentation. Report – conceptual framework and methodology.
Week 09: Prototype – prepare for fabrication. Documentation for report. Report – methodology. Start compiling showreel.
Week 10: *1st report draft due. Prototype fabrication – finalise laser cutting. Documentation for report. Report – methodology. Refine showreel. Gather all catalogue material.
Week 11: Prototype fabrication – finalise electronics. Documentation for report. Report – methodology. Refine catalogue material. Prepare showreel.
Week 12: Prototype fabrication – build model. Gather catalogue materials. Put together portfolio. Refine showreel. Report – methodology.
Week 13: Prototype complete – hardware. Refine showreel. Report – conclusion / discussion. Website catalogue materials due.
Week 14: Report sent off to print. *Printed catalogue materials due. Film prototype for showreel.
Week 15: *Report due. *Showreel due. *Portfolio due. Prototype complete – programmed matrix.
Week 16: Prepare for exhibition.
Week 17: Exhibition.
Microsoft Kinect Skeleton Tracking
Methodology.
The following experiments propose to develop an architecture capable of sensing and perceiving human movement, with the objective of enhancing the social dynamics of its space. The prototype aims to recognise and anticipate social interactions between 2 to 6 occupants. The system analyses components of human movement, utilising the centre of mass and velocity of occupants to project and anticipate potential social interactions. This is determined by calculating the paths of intersection where occupants are likely to meet. Social interactions are determined using the velocity, position and time parameters, which allow the system to establish the proximities and angles at which occupants are facing. This allows the system to reconstruct and test whether occupants are facing, and are within proximity of, one another. This is measured against previously identified interactions and time, which allows the system to construct a model of the space's social network. The system is then able to recognise the intensities of interactions and instances of social isolation occurring amongst occupants. The architecture responds to nurture and stimulate identified interactions, and to prompt occupants experiencing social isolation to interact with others. This is achieved by manipulating the circulation amongst occupants. The structure's form morphs to mimic established architectural elements, linking occupants by creating openings and passageways. The architecture attempts to recognise links between interacting occupants. It responds to enhance the social dynamics by encouraging interaction amongst those who have participated in interactions the least.
[Workflow diagram (Input, Process, Output): Microsoft Kinect skeleton tracking → OSCeleton via OpenNI → Firefly OSC listener component → Grasshopper / Rhinoceros (data sorted and constructed) → Firefly Serial Port Write → Arduino micro-controller → EasyDriver board (12V) → stepper motors]
Figure 1.3 - Microsoft Kinect
1. Interpreting Sensory Data
To develop a system capable of understanding components of human movement and social interactions, a method of collecting and analysing appropriate sensory data must first be established. Based on the research conducted in 'Perception of the Senses through Computer Vision', it is apparent that the reconstruction of a high-level activity, such as a social interaction, requires a string of atomic-level sub-activities. As stated previously, these activities are categorised into "four different levels: gestures, actions, interactions, and group activities" (Aggarwal & Ryoo 2011). Body position and gesture help describe actions or activities, which allows us to understand interactions involving two or more persons and/or objects. The focus of the prototype is to utilise components of body movement to construct a model of the analysed space's social network. The system should be capable of extracting the centre of mass and velocity of occupants circulating throughout a space. This data would enable the system to further extract parameters such as acceleration, direction of travel and forward facing direction. The following experiments examine methods for collecting and analysing this data through computer vision.
Experiment: Blob Tracking

Computer vision technologies can be applied within this hierarchical approach to sense and perceive sub-activities. The most established method of digital vision tracking is blob detection, which works by detecting changes in differential regions of an image. Blob detection is capable of measuring approximate human movements within a space, but is limited in regards to measuring gesture.
Through utilising open source software, a number of blob tracking programs were experimented with, the main ones being OpenTSPS (Toolkit for Sensing People in Spaces) and OpenFloor (which tracks humans on the floor). Both integrate the OpenCV (Open Source Computer Vision) library, which uses a 'blob' tracking algorithm to sense people moving around a space. They were fairly similar in regards to the reliability of collected data, but OpenTSPS was more user-friendly, as it allows thresholds to be manually adjusted to suit the applied space. OpenTSPS communicates data such as the persistent id, age, centre of mass, contours (the shape of the blob) and velocity through an OSC/TUIO server. I was able to access this data inside Grasshopper (a parametric modelling plugin for Rhinoceros) via gHowl (a set of components which extend Grasshopper's ability to communicate and exchange information with other applications). This method was fairly reliable for measuring human movements within a relatively simple scene, but became a problem when people clustered together and when new objects were introduced to the scene. I was able to communicate the persistent id, age and centre of mass to Grasshopper via the UDP receive component in gHowl, and to visualise the movements of people within Rhinoceros. The potential of utilising the parameters of velocity, age and contours to create a much more sophisticated analytic system is promising.
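The same OSC stream can be read by any OSC listener, not only gHowl. As a point of reference, the Python sketch below uses the python-osc library to listen for TSPS person messages; the "/TSPS/personUpdated" address pattern, the argument order and the port number are assumptions for the sketch and should be checked against the TSPS settings in use.

    # Minimal sketch: receiving TSPS person data over OSC with python-osc.
    # Address pattern, argument order and port are assumed, not verified.
    from pythonosc.dispatcher import Dispatcher
    from pythonosc.osc_server import BlockingOSCUDPServer

    def person_updated(address, *args):
        # assumed order: persistent id, age, centre of mass x/y, velocity x/y
        pid, age, cx, cy, vx, vy = args[:6]
        print(f"person {pid}: centre=({cx:.2f}, {cy:.2f}), velocity=({vx:.2f}, {vy:.2f})")

    dispatcher = Dispatcher()
    dispatcher.map("/TSPS/personUpdated", person_updated)
    BlockingOSCUDPServer(("127.0.0.1", 12000), dispatcher).serve_forever()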
Experiment: Kinect

Microsoft released the Kinect in late November 2010. The Kinect technology enables advanced gesture recognition, facial recognition and voice recognition. I was immediately intrigued by the device and its potential for measuring activities within a space. Prior to the release of the Microsoft Kinect SDK, OpenNI was the primary framework for digitally perceiving data from the Kinect. OpenNI provides a platform for voice and voice-command recognition, hand gestures and body motion tracking, with very thorough data streams for tracking human movements. Following the release of the Kinect, numerous open source software packages tailored to access the OpenNI data streams emerged, the most prominent being OSCeleton. OSCeleton takes Kinect skeleton data from the OpenNI framework and outputs the coordinates of the skeleton's joints via OSC messages. I was able to access this data using the UDP receive component from gHowl in Grasshopper. Although the data streams from the Kinect are much more thorough and reliable than traditional blob tracking algorithms, the standard skeleton tracking of the Kinect (Microsoft SDK) only supports the tracking of up to 6 people, which for the scenario of the exhibition is not ideal.
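For reference, OSCeleton's joint messages can likewise be consumed by any OSC listener. The sketch below collects joints per tracked user; the "/joint" layout of (joint name, user id, x, y, z) follows OSCeleton's published message format, while the port number and the use of the torso joint as a centre-of-mass proxy are assumptions for the sketch.

    # Sketch: accumulating OSCeleton skeleton joints per tracked user.
    # OSCeleton emits "/joint" messages as (joint_name, user_id, x, y, z);
    # the listening port must match the one OSCeleton was started with.
    from collections import defaultdict
    from pythonosc.dispatcher import Dispatcher
    from pythonosc.osc_server import BlockingOSCUDPServer

    skeletons = defaultdict(dict)  # user_id -> {joint_name: (x, y, z)}

    def on_joint(address, name, user, x, y, z):
        skeletons[user][name] = (x, y, z)
        if name == "torso":  # torso used here as a rough centre-of-mass proxy
            print(f"user {user}: centre of mass ~ ({x:.2f}, {y:.2f}, {z:.2f})")

    dispatcher = Dispatcher()
    dispatcher.map("/joint", on_joint)
    BlockingOSCUDPServer(("127.0.0.1", 7110), dispatcher).serve_forever()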
Figure 1.4 - Extracted parameters of human movement diagram
2. Sorting Sensory Data
The gathered sensory data containing the positions of occupants is fed into a matrix, which sorts and constructs the incoming information. Using this data, the matrix is able to extract parameters such as velocity and proximity, which give an indication of the occupants' forward facing angles, acceleration and direction of travel. This allows the system to draw relationships and links to the nature of the interactions occurring amongst occupants. The objective is to use this information to manipulate the space to encourage and enhance social dynamism.
The focus of the following analyses is to decompose the human movements and body language that influence the perception of the human mind when determining the occurrence of a social interaction. The results of the observation analyses suggest that a number of contributing factors determine an occurring social interaction, the primary being the proximity and forward facing direction of subjects.
Experiment: Observation Analyses
The following observation analyses aim to decompose the body movements of interacting subjects, with the objective of linking parameters of movement (including position, forward facing angle, acceleration, direction of travel and time) to occurring social interactions. The scope of this project aims to develop a system capable of:
- Recognising and determining the intensities of occurring social interactions
- Projecting and anticipating potential social interactions
- Recognising instances of social isolation occurring amongst occupants
The system then constructs a model of the space's social network by recording identified interactions to a database. The database records information such as the number of identified interactions, along with details of these interactions including the amount of time over which they occurred. The database allows the system to form a perception of the space's social interactions that is shaped by "learning, memory and expectation" (Gregory 1987, p. 598–601).

All observation analyses were conducted on the grounds of the University of New South Wales for ethical reasons. These analyses were conducted on both participating and non-participating subjects. Precautions were taken to ensure that the analyses were conducted in a non-biased manner and that the privacy of participants was respected at all times.

Observation 01
Type: Non-participating subjects
Location: Lawn opposite Red Centre, University of New South Wales
Time of Day: 13:05
Environment: Outdoors, warm sunny day, lots of passing pedestrian traffic
Number of Occupants: 3 in total
Occupant descriptions: [i] Male, Asian, late teens to early twenties; [ii] Male, Caucasian, late teens to early twenties; [iii] Female, Asian, late teens to early twenties
Observation: Subjects [i] and [ii] sitting on ledge of lawn engaging in conversation. Subject [iii], walking along University Mall in the direction of Anzac Parade, approaches subjects [i] and [ii]. Subject [iii] remains standing whilst conversation continues for 30 seconds. Conversation ends and subject [ii] stands. Subjects [ii] and [iii] depart. Subject [i] remains sitting, and [ii] and [iii] continue to walk along University Mall towards Anzac Parade.
Observation assumptions: Scenario 01: Subject [ii] was meeting [iii] for a pre-arranged engagement. Scenario 02: Subject [iii] is friends with subject [ii] and encountered [ii] by coincidence; they could have made plans or just have been travelling in the same direction.

Observation 02
Type: Non-participating subjects
Location: Physics lawn, University of New South Wales
Time of Day: 14:30
Environment: Outdoors, clear sunny day, minimal pedestrian traffic
Number of Occupants: 4 in total
Occupant descriptions: [i] Male, Caucasian, early twenties; [ii] Male, Caucasian, early twenties; [iii] Female, Caucasian, early twenties; [iv] Female, Caucasian, early twenties
Observation: All subjects sitting on lawn beside a tree next to Science Rd, surrounded by bags and engaged in conversation. Subject [ii] is eating a sandwich. Subjects [iii] and [iv] are laughing and seem very happy. Subject [i] is very quiet and timid, whilst [ii] seems to be talking a lot and is the centre of attention. Conversation continues for 16 minutes. Subjects rise to their feet. Subject [iv] brushes grass off her pants. Subjects walk towards University Mall, maintaining walking distance until they reach University Mall, where they all depart in separate ways.
Observation assumptions: Subjects are engaged in friendly conversation, sitting on the lawn eating lunch. It was not clear whether any of the subjects were engaged in relationships; their general body language and their departing in separate ways suggest that they were not.

Observation 03
Type: Non-participating subjects
Location: Library Level 8, University of New South Wales
Time of Day: 11:30
Environment: Indoors, congested, surrounded by lots of people
Number of Occupants: 8 in total
Occupant descriptions: [i] Male, Middle Eastern, late teens to early twenties; [ii] Male, Middle Eastern, late teens to early twenties; [iii] Male, Caucasian, late teens to early twenties; [iv] Male, Caucasian, late teens to early twenties; [v] Male, Caucasian, late teens to early twenties; [vi] Female, Middle Eastern, late teens to early twenties; [vii] Female, Middle Eastern, late teens to early twenties; [viii] Female, Middle Eastern, late teens to early twenties
Observation: Group of 3 subjects [i], [vi] and [vii] sitting at a library study station, engaged in conversation. Subjects [vi] and [vii] are both staring at subject [i]'s laptop in deep concentration. Occupants [ii], [iii], [iv], [v] and [viii] arrive from the lift lobby area. These occupants are very loud and disruptive within the library environment. Subjects engage in a 15 minute conversation, during which [i] and [iv] continue to act in a loud, obnoxious manner. Male subjects [ii], [iii], [iv] and [v] depart in the opposite direction, whilst female subject [viii] remains standing, talking to subjects [vi] and [vii].
Observation assumptions: I was able to listen in on this conversation, which may have affected the following assumptions. The initial subjects [vi] and [vii] seem to be critiquing a piece of work by subject [i]. This is interrupted as the predominantly male group approaches. All subjects engage in conversation, where the body language and dialogue of male subjects [i] and [iv] suggest that they are trying to impress female subjects [viii], [vi] and [vii]. The females are engaged in a sub-conversation and do not seem interested in the male subjects. The males depart whilst the females and subject [i] continue to discuss what appears to be a piece of work by subject [i].
Calculating Velocity

Both of the tracking methods outlined in 'Interpreting Sensory Data' are able to track the centre of mass of each occupant. The centre of mass is communicated to Grasshopper as X, Y and Z coordinates (GitHub Sensebloom Repository 2011). From this, the system is able to determine each occupant's rate of change in movement (both speed and direction of travel), also known as velocity. Velocity is calculated by subtracting the position x of a person at point in time t from their position at time t + ∆t, then dividing this by the increment of time ∆t:

Velocity = (x(t+∆t) – x(t)) / ∆t (The Physics Classroom 2011)

Assuming that occupants are moving around the space in a forwards direction, the angle at which an occupant is facing can then be calculated by connecting the initial point x(t) with the terminal point x(t+∆t). The data then contains the following set of parameters:
- centre of mass
- acceleration
- direction of travel
- forward facing direction
- time
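To make the velocity calculation concrete, the following Python sketch derives velocity, speed and a forward facing heading from two consecutive centre-of-mass samples; the coordinates and time step are hypothetical values, not measured data.

    import math

    def velocity(p_prev, p_curr, dt):
        """Velocity = (x(t + dt) - x(t)) / dt, computed per axis."""
        return tuple((c - p) / dt for p, c in zip(p_prev, p_curr))

    def facing_angle(p_prev, p_curr):
        """Heading in the XY plane, assuming occupants move forwards:
        the angle of the vector from the previous to the current position."""
        dx, dy = p_curr[0] - p_prev[0], p_curr[1] - p_prev[1]
        return math.degrees(math.atan2(dy, dx))

    # Hypothetical example: an occupant moves from (1.0, 2.0) to (1.3, 2.4)
    # metres over a 0.5 second sampling interval.
    v = velocity((1.0, 2.0), (1.3, 2.4), 0.5)       # (0.6, 0.8) m/s
    speed = math.hypot(*v)                           # 1.0 m/s
    heading = facing_angle((1.0, 2.0), (1.3, 2.4))   # ~53.1 degrees
    print(v, speed, heading)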
Calculating the Permutation

To develop a system capable of calculating interrelationships amongst occupants, some form of calculation for determining all possible combinations of interactions first needs to be established. This was resolved by applying a permutation to the set of identified occupants. For example, given the following set of occupants moving through the space:

{A,B,C}, n = 3

we want to test every possible intersection:

{A,B}; {A,C}; {B,A}; {B,C}; {C,A}; {C,B}

The possible ordered arrangements are calculated by n!/(n−2)! = 3!/1! = 6. At first this appeared correct, but it then emerged that every combination was being calculated twice (for example {A,B} and {B,A}), and there should in fact only be three values in the set: {B,A}; {C,A}; {C,B}. The number of unordered pairs is n!/(2!(n−2)!) = 3. A script was then developed (see appendix) to output the following culling pattern, which can be operated on the data set:

True, True, False, True, False, False
{A,B}; {A,C}; {B,A}; {B,C}; {C,A}; {C,B}

Proximity: As people move throughout the space, the system continually calculates their proximity using the above permutation calculation, by measuring the distance between each pair of occupants. For example, if A = {10,2,0}, B = {20,5,0} and C = {20,15,0}, the distances between pairs are:
{B,A} = 10.4
{C,A} = 16.4
{C,B} = 10
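In code, the culled pattern amounts to taking each unordered pair exactly once. A minimal Python sketch of the pairing and proximity calculation, using the example coordinates from the text:

    import math
    from itertools import combinations

    occupants = {"A": (10, 2, 0), "B": (20, 5, 0), "C": (20, 15, 0)}

    # combinations() yields each unordered pair once, matching the culled
    # set described above: ("A", "B"), ("A", "C"), ("B", "C")
    for a, b in combinations(occupants, 2):
        d = math.dist(occupants[a], occupants[b])  # Euclidean distance (Python 3.8+)
        print(f"{{{a},{b}}} distance = {d:.1f}")
    # {A,B} distance = 10.4
    # {A,C} distance = 16.4
    # {B,C} distance = 10.0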
Figure 1.5 - Point of intersection Diagram
Point of Intersection: As occupants move throughout the space, the system calculates and projects the anticipated points of intersection, or shared focal points, amongst occupants. It uses their velocity (both position and speed) to calculate the projected paths along which occupants are likely to intersect. This is determined by calculating the angles between the velocity vectors and then applying the law of sines,

a / sin(A) = b / sin(B) = c / sin(C) = 2R

where R is the radius of the circumscribed circle of the triangle.
The system calculates this for every possible combination by applying a permutation on the set of vectors. This allows the system to anticipate the likely intersections of people moving around the space, and to determine the opportunities for social interactions. The system then calculates whether or not occupants will actually meet at their projected point of intersection, using their distance to the intersection point and their speed. This is calculated by dividing the projected distance of travel by the speed in meters per time interval, which returns the time that it would take each occupant to reach the projected intersection. These values are compared, and if the difference is within a specified threshold then the system determines that the occupants will intersect:

Time = Distance / Rate of Movement

For example, two occupants are on a path to intersect. Occupant 01 needs to travel 4 meters, whilst Occupant 02 needs to travel just 3 meters. Occupant 01 is travelling at a rate of 1.2 meters per second, whilst Occupant 02 is travelling at 1 meter per second.

Occupant 01: 4 meters / 1.2 meters per second = 3.33 seconds
Occupant 02: 3 meters / 1 meter per second = 3 seconds

IF difference between occupants < nominated threshold THEN Intersection = True ELSE Intersection = False
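Expressed as a small function, with the comparison threshold as an assumed tuning value (the report does not nominate one):

    def will_meet(dist_a, speed_a, dist_b, speed_b, threshold=0.5):
        """Compare each occupant's time to the projected intersection point;
        if they would arrive within `threshold` seconds of one another, the
        intersection is treated as likely."""
        t_a = dist_a / speed_a  # Time = Distance / Rate of Movement
        t_b = dist_b / speed_b
        return abs(t_a - t_b) < threshold

    # Worked example from the text: 4 m at 1.2 m/s vs 3 m at 1.0 m/s.
    print(will_meet(4, 1.2, 3, 1.0))  # |3.33 - 3.0| = 0.33 s -> True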
Social Interaction Identified

The system performs numerous calculations to determine whether occupants are engaged in social interactions. A number of contributing factors influence this calculation, the primary being the proximity and forward facing direction of occupants. These parameters are compared against time and previously identified interactions, which assists the system in forming a perception. For example:

IF occupants remain within proximity of one another AND are facing within a positive view range THEN
  Start counter
  IF counter > predetermined time THEN
    Interaction identified
  END
END

The identified interactions are recorded to a database to assist the system in understanding the space's social network, as well as the intensities of occurring interactions. These calculations also examine whether occupants purposely moved towards each other, by comparing the initial projection of time to the actual time taken to reach the point of intersection. Together, these calculations give an indication as to whether occupants are engaging in an interaction.
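A sketch of how this identification rule might be implemented, assuming hypothetical values for the proximity, view range and duration thresholds (the project's actual values are not fixed here):

    import math
    from itertools import combinations

    PROXIMITY = 1.5      # metres; assumed threshold
    VIEW_RANGE = 60.0    # degrees either side of the heading; assumed
    MIN_DURATION = 3.0   # seconds before a pairing counts as an interaction

    timers = {}          # (id_a, id_b) -> accumulated face-to-face time

    def bearing(p, q):
        """Angle of the vector from point p to point q, in degrees."""
        return math.degrees(math.atan2(q[1] - p[1], q[0] - p[0]))

    def facing_each_other(pa, ha, pb, hb):
        """True if each occupant's heading points within VIEW_RANGE of the other."""
        da = abs((bearing(pa, pb) - ha + 180) % 360 - 180)
        db = abs((bearing(pb, pa) - hb + 180) % 360 - 180)
        return da < VIEW_RANGE and db < VIEW_RANGE

    def update(occupants, dt):
        """occupants: {id: (position, heading)}; call once per frame with
        elapsed time dt. Returns the pairs identified as interacting."""
        interactions = []
        for a, b in combinations(sorted(occupants), 2):
            (pa, ha), (pb, hb) = occupants[a], occupants[b]
            if math.dist(pa, pb) < PROXIMITY and facing_each_other(pa, ha, pb, hb):
                timers[(a, b)] = timers.get((a, b), 0.0) + dt
                if timers[(a, b)] >= MIN_DURATION:
                    interactions.append((a, b))  # record to the social network model
            else:
                timers.pop((a, b), None)  # reset the counter once the pair separates
        return interactions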
Figure 1.7 - Surface being translated to physical prototype from Grasshopper
3. Prototype Responds to Enhance Social Interactions

These experiments propose an architecture capable of utilising perceived sensory data to generate an architectural configuration capable of encouraging and enhancing social dynamism. Through manipulating the way in which occupants circulate throughout the space, the space nurtures and stimulates occurring interactions, whilst encouraging interactions amongst those experiencing social isolation. To achieve such a system, the architecture itself must be able to manipulate its configuration.

The first step was to devise a dynamic architectural form capable of morphing to accommodate the desired spatial configurations of the occupants. Figure 1.6 shows the first attempt at designing such a system. As the research developed, this design became unfeasible, the form being impracticable to manipulate as a dynamic structure. A system for controlling a much simpler form was then developed. Figures 1.7 and 1.8 demonstrate an experiment in which a grid of points is projected onto a surface using the generative modelling plugin 'Grasshopper' for Rhinoceros. Once the points are projected onto the surface, the distance between the projected points and their origins can be calculated. These measurements can then be actuated on physical devices using components of Firefly and the Arduino micro-controller. Firefly is a set of software tools developed to bridge the gap between Grasshopper, the Arduino and other data sources: "It allows near real-time data flow between the digital and physical worlds". The Arduino is an "open-source electronics prototyping platform based on flexible, easy-to-use hardware and software", able to control devices such as servos and stepper motors, which are capable of driving the desired system. Figure 1.7 demonstrates Firefly and the Arduino controlling 3 servos, which are synced to a Grasshopper model.
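To illustrate the projection step outside of Grasshopper, the sketch below computes the drop distance from a grid of origin points to a stand-in target surface and converts each distance into stepper motor steps. The surface function, grid spacing and steps-per-metre calibration are all hypothetical.

    import math

    def target_surface(x, y):
        """Stand-in for the Grasshopper target surface: a gentle dome."""
        return 0.3 + 0.1 * math.sin(x) * math.cos(y)

    STEPS_PER_METRE = 5000  # assumed rack-and-pinion calibration

    def motor_steps(grid_size=3, spacing=0.25):
        """For each grid point, measure the distance from the origin plane
        down to the surface and convert it into a motor step count."""
        steps = {}
        for i in range(grid_size):
            for j in range(grid_size):
                drop = target_surface(i * spacing, j * spacing)
                steps[(i, j)] = round(drop * STEPS_PER_METRE)
        return steps

    print(motor_steps())  # e.g. {(0, 0): 1500, ...}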
Figure 1.6 - Early concept prototype paper model
Figure 1.8 - Early concept prototype render
PROTOTYPE 1.0

The next stage was to develop a system to drive the grid of control points using motors. Different methods of controlling various pulley systems were experimented with, but they did not perform as desired, as they relied on gravity to pull points of the structure downwards. A rack and pinion gear system was then devised (see Fig 3.6), composed of a laser cut frame of Perspex to guide the shafts and hold the motors in place. Before the full scale model was built, an interim scaled-down model was developed to test the devised system. Due to the nature of the system, a set of custom parametric pinion and shaft parts was designed within the parametric programming environment of Grasshopper. The first set, as seen in Fig 3.7, was a failure, but after further research into the mathematics behind the gear, a system was developed that worked very effectively (see Fig 3.8 & Fig 3.9). The gear system was driven by a set of four stepper motors, controlled by the Firefly components developed for Grasshopper. A custom Firefly firmata was adapted and developed from the 'Modified Firefly Firmata to Control a Stepper Motor with a Potentiometer' by Jason K Johnson, March 18, 2011 (see appendix). The firmata was edited to receive data sent from Firefly to drive multiple stepper motors. A custom Grasshopper definition was also developed to send data for the multiple motors through the Firefly 'serial write' component.
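For illustration, the sketch below mimics the serial write step in plain Python using the pyserial library, packing one target step count per motor into a single line. The comma-separated wire format, port name and baud rate are assumptions for the sketch, not the actual Firefly firmata protocol.

    import serial  # pyserial

    # Open the Arduino's serial port; the port name and baud rate are
    # placeholders and must match the connected board's configuration.
    ser = serial.Serial("COM3", 9600, timeout=1)

    def send_targets(step_counts):
        """Send one integer step target per stepper motor, newline-terminated.
        The receiving firmata would parse this line and step each motor."""
        line = ",".join(str(int(s)) for s in step_counts) + "\n"
        ser.write(line.encode("ascii"))

    send_targets([120, -40, 300, 0])  # one value per motor (four in Prototype 1.0)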
When the model was set up to be laser cut, the 0.15 mm width of the laser beam was not accounted for. This meant that some of the junctions were not as tight as desired and, as a result, had to be filled with plastic strips to counter the 0.3 mm (2 x 0.15 mm) gap. Standard acrylic was used in the prototype for demonstration purposes. Acrylic proved too fragile, which was especially evident in the bending of the shafts.

The materiality of the morphing ceiling surface itself is fundamental to the execution of this project. This proved a difficult task, with the system requiring the surface material to stretch and retract by about 200-300%. The most promising material discovered during the research was the Super-Elastic Plastic from 'Inventables'. The plastic is incredibly soft and stretches to about eight times its original size without ripping; unfortunately, at $140 per square metre it is too expensive for this project. A spandex material was tested as a less expensive alternative. It performed well when stretched diagonally against the course of the thread, but failed for the purposes of the prototype when stretched along the course of the thread. A latex rubber material was purchased towards the end of these experiments. This material was initially avoided due to misleading advice from a local retail outlet, who suggested that precast thin sheets of latex were rare to find and would have to be manually cast, and that the material would tear quite easily and was not suitable for this project. Further research into the properties of the material revealed that latex rubber is rated as considerably stretchier than the spandex previously experimented with. A sheet of 1000 x 1000 x 0.25 mm latex was sourced internationally from a latex fashion outlet in the UK.

The intention of this interim prototype was to develop and test the materials and mechanisms for a component of the desired system, which is investigated further in Prototype 2.0.
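The 200-300% stretch requirement can be sanity-checked with a rough geometric estimate: when a membrane segment spanning two neighbouring control points is pulled down at its midpoint, its length grows approximately as two straight segments to the displaced point. The span and deflection values below are illustrative assumptions, not measurements from the prototype.

#include <cmath>
#include <cstdio>

int main() {
    double span  = 200.0; // mm between neighbouring control points (assumed)
    double depth = 300.0; // mm of midpoint deflection (assumed)

    // Approximate the deflected membrane as two straight segments
    // from each support down to the displaced midpoint.
    double half      = span / 2.0;
    double stretched = 2.0 * std::sqrt(half * half + depth * depth);
    double stretchPct = (stretched / span - 1.0) * 100.0;

    std::printf("flat: %.0f mm, deflected: %.1f mm, stretch: %.0f%%\n",
                span, stretched, stretchPct); // roughly 216% for these values
    return 0;
}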
PROTOTYPE 2.0
This experiment proposes the components of Prototype 1.0 at a larger scale. The prototype was initially designed as a module for a system of units, aiming to develop two modules for the exhibition, each with a 600 x 800 mm frame containing a grid of 2 x 3 motors. Due to the uncertainty of possible complications, the prototype was designed in stages: Stage 1 was to develop the initial module unit, whilst Stage 2 aimed for two units. Each module was to be suspended from the ceiling in an array, so the extent of this prototype relied heavily on the exhibition space and its capacity to suspend the modules. During an inspection of the gallery space, it became clear that suspending multiple modules from the plasterboard ceiling could be problematic: the weight of each module's frame would likely exceed the limitations of the plasterboard. I came to the realisation that the final prototype would likely be just one module. The module was re-evaluated to see how well it would demonstrate the desired system as a single entity. I concluded that 2 x 3 motors would not be sufficient to articulate this, and the frame was redesigned to mount a grid of 3 x 3 motors. After testing Prototype 1.0 it was clear that the single pinion shafts of 6 mm acrylic were very flimsy and would not be strong enough to morph the prototype's form. To resolve this issue, the frame and gear system was redesigned in 10 mm acrylic instead of 6 mm; however, the quotes received for laser cutting the 10 mm system were almost triple those for the 6 mm, and the 10 mm frame would have nearly doubled in weight. After much thought and consideration, I devised a way of strengthening the pinion shafts using 6 mm acrylic: the shafts were reinforced with support strips of acrylic, attached using custom laser cut dowel pins and an acrylic binding agent. This seemed the most sensible solution, as the system would be cheaper and lighter, and the reinforced shafts would be 18 mm thick and therefore much stronger than the proposed 10 mm system. The frame and gear system was then sent to be laser cut. Precautions were taken to ensure that all of the tight-fitting junctions of the frame and pinion shafts were offset to account for the width of the laser beam. Upon collecting the laser cut sheets of acrylic, it was discovered that
the junctions should have been offset slightly less than the advised 0.15 mm. This meant that I had to manually file back all 44 slots in the frame, along with all of the dowel connecting tabs. I unfortunately managed to snap one of the motor supports while trying to force it into the frame; luckily, acrylic glue is a miracle binding agent and I was able to resolve this with ease. I glued the dowels and supports to the shafts, and it was immediately evident that these would be more than strong enough for the purpose of this prototype. They were left to dry overnight, and in the morning I discovered that the glue had melted the shafts to the supports so tightly that the teeth of the rack gears would no longer fit into the pinion slots. This was resolved by slightly sanding back the teeth of the rack gear, which left the teeth an opaque white colour. The next stage was to fabricate a more reliable and permanent system for the electronics, using a prototyping board rather than the breadboard, which was proving unreliable and a pain to set up. I needed to plug in nine EasyDrivers to drive the nine stepper motors of the final prototype. Each EasyDriver requires a total of nine inputs/outputs, of which only three could be shared (power, ground, and the ground for direction/step). The system needed to be portable in the sense that it could be easily assembled within the exhibition space; male headers and female plugs seemed the most appropriate way of achieving this. The pins of the male headers were soldered into the EasyDriver boards and the female plugs were attached to the stepper motors and Arduino leads. I initially used a 'vero' prototyping board, which has parallel strips of copper cladding running in one direction all the way across one side of the board. With all the intersecting pins, using this would have required drilling out over 90 breaks in the tracks, which I was not prepared to do. I then purchased a perfboard which, like the vero board, contains a grid of predrilled holes spaced 2.54 mm apart, but has singular pads of copper rather than copper tracks, joined using solder or wires. This board also proved unsuitable, as it would have been far too messy with leads going everywhere. I was finally able to source a prototyping board with a conductive trace layout similar to the breadboard; by linking its tracks with solid core wire I could share both the main power and ground, greatly reducing the number of wires plugged into the board. The adapted Firefly firmata and Grasshopper definition were re-devised to drive the nine stepper motors instead of four.
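The kerf compensation at issue reduces to a one-line calculation: the laser removes material from both edges of a slot, so a slot drawn at nominal width comes out wider than intended. The sketch below follows the report's own arithmetic (a 0.3 mm gap from 2 x 0.15 mm); the sheet thickness is an assumption, and as noted above, in practice the required offset was slightly smaller than the advised beam width.

#include <cstdio>

int main() {
    double beam     = 0.15; // advised laser beam width in mm (from the text)
    double material = 6.0;  // acrylic sheet thickness in mm (assumed)

    // Both edges of a slot lose material to the beam, so an uncompensated
    // slot opens up by roughly 2 * beam (the 0.3 mm gap observed above).
    double drawnSlot = material - 2.0 * beam; // offset both edges inward for a tight fit

    std::printf("for a %.1f mm sheet with a %.2f mm beam:\n", material, beam);
    std::printf("  draw slots at %.2f mm wide\n", drawnSlot);
    return 0;
}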
Discussion.
Perceiving Sensory Data
The technologies investigated for tracking human movements were the Open Toolkit for Sensing People in Spaces (OpenTSPS) and the Microsoft Kinect. OpenTSPS was able to track large groups of people, but could not provide accurate data for individual occupants, which made it unsuitable for developing an advanced system capable of perceiving social interactions. Whilst the skeleton tracking capabilities of the Kinect only made it possible to perform indirect, centre-of-mass tracking on up to 6 occupants, this technology provided very accurate data, which was more appropriate to the objective of this project. Through tracking the centre of mass using the Kinect, we were able to extract each occupant's proximity, forward-facing direction, acceleration and direction of travel. This allowed the system to construct a model of the space's social network and form a level of perception, shaped by "learning, memory and expectation" (Gregory 1987). This perception could have been enhanced through utilising more communicative forms of sensory information such as gesture, facial expressions and speech; these will be examined in further investigations. The calculated perception was computed in the Grasshopper programming environment, which at the time seemed the most logical choice given that the physical model was being actuated using the Firefly components of Grasshopper. In retrospect, this perception could have been developed externally in a more appropriate programming environment, as Grasshopper did not perform ideally with these heavy calculations in real time.
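The kinematic parameters described here fall out of finite differences over successive centre-of-mass samples. The sketch below illustrates the approach (it is not the project's Grasshopper definition): velocity, speed, direction of travel and acceleration are derived from three consecutive frames. The sample positions and the 30 Hz frame interval are assumed values for illustration.

#include <cmath>
#include <cstdio>

struct Vec2 { double x, y; };

int main() {
    // Three consecutive centre-of-mass samples for one occupant (assumed values).
    Vec2 p0 = {1.00, 2.00}, p1 = {1.05, 2.02}, p2 = {1.12, 2.03};
    double dt = 1.0 / 30.0; // frame interval in seconds

    // Velocity by finite difference over each pair of frames.
    Vec2 v1 = {(p1.x - p0.x) / dt, (p1.y - p0.y) / dt};
    Vec2 v2 = {(p2.x - p1.x) / dt, (p2.y - p1.y) / dt};

    double speed   = std::sqrt(v2.x * v2.x + v2.y * v2.y); // magnitude of velocity
    double heading = std::atan2(v2.y, v2.x);               // direction of travel

    // Acceleration as the change in velocity between frames.
    Vec2 a = {(v2.x - v1.x) / dt, (v2.y - v1.y) / dt};

    std::printf("speed %.2f m/s, heading %.2f rad, accel (%.2f, %.2f) m/s^2\n",
                speed, heading, a.x, a.y);
    return 0;
}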
Dynamic Architectural Prototype
The mechanised prototype embodied the calculated perception to develop an adjustable spatial configuration capable of manipulating the circulation and interactions of its occupants. The objective of this manipulation was to encourage and enhance social dynamism. The extent of this enhancement was restricted by the limited awareness of the gained perception: the current system was capable of anticipating and identifying social interactions, as well as recognising instances of social isolation. This could be expanded to better facilitate enhancement with a more insightful understanding of the occupants' activities and social interactions. The digital prototype was developed within Grasshopper, which actuated the physical prototype using Firefly and the Arduino microcontroller. Grasshopper was the most appropriate platform, with the Firefly components providing extensive capabilities for communicating with the Arduino and controlling motors. Translating the dynamic Grasshopper form to the prototype was achieved by rationalising the surface as a grid of segments, which allowed it to be broken down and actuated through a physical model. This method of projecting points to translate a physical form was an effective way of averaging out a pre-generated form, but was not always a true representation of the desired form. In hindsight, the system could have benefitted from a slightly different approach: instead of averaging out the form, it could attempt to simplify and replicate it by taking the highest and lowest points that define the form and syncing these with the nearest control points. The segments of the physical prototype's form were controlled using a custom laser cut acrylic rack and pinion gear system. This was effective in demonstrating the operation of the prototype, but would not be viable at an architectural scale. The form was driven by a grid of 9 motors, which ideally should have been 16 to 25 to achieve the desired spatial configuration; nevertheless, it was more than adequate as a proof of concept for demonstrating a component of the desired system. The next stage in developing this conception would be to investigate the materials and mechanics required to operate the proposed model as a full scale architectural system. These investigations would experiment with DC gear motors or industrial brushless motors instead of stepper motors, and would look into implementing light and sound to further develop the manipulative environment.
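The averaging rationalisation described above can be expressed in a few lines. The sketch below is illustrative (the sample surface and grid sizes are assumptions): fine surface samples are averaged into each control cell, so the coarse actuator grid approximates the generated form, which is the behaviour the prototype exhibited; the extrema-snapping alternative proposed in hindsight would instead pick the highest or lowest sample in each cell.

#include <cstdio>

int main() {
    const int FINE = 12, COARSE = 3; // fine surface samples per side; 3 x 3 control grid
    double surf[FINE][FINE];
    // Fill an illustrative sample surface (a simple ramp).
    for (int i = 0; i < FINE; ++i)
        for (int j = 0; j < FINE; ++j)
            surf[i][j] = 0.1 * (i + j);

    // Average the fine samples falling inside each control cell, so the
    // coarse actuator grid approximates the generated form.
    const int CELL = FINE / COARSE;
    for (int ci = 0; ci < COARSE; ++ci) {
        for (int cj = 0; cj < COARSE; ++cj) {
            double sum = 0.0;
            for (int i = 0; i < CELL; ++i)
                for (int j = 0; j < CELL; ++j)
                    sum += surf[ci * CELL + i][cj * CELL + j];
            std::printf("control (%d,%d) -> %.3f\n", ci, cj, sum / (CELL * CELL));
        }
    }
    return 0;
}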
Conclusion.
The results of the conducted experiments suggest that it is feasible to develop an analytic environment capable of sensing and perceiving human social interactions. The experiments achieved perception to the extent of identifying occurring interactions and instances of social isolation amongst 2 to 6 occupants. The system was capable of analysing components of human movement, including the subjects' centre of mass and velocity, which allowed for the further extraction of parameters such as acceleration, direction of travel and forward-facing direction. Using this data, the project was able to develop a system capable of recognising and determining the intensities of occurring social interactions, projecting and anticipating potential social interactions, and recognising instances of social isolation amongst occupants. This allowed the system to construct a model of the space's social network and form a level of perception, shaped by "learning, memory and expectation" (Gregory 1987). The prototypes developed throughout this investigation demonstrated an architectural system capable of manipulating its spatial configuration to redefine the conventions of space. The spatial configuration was able to harness the gained perception of its occupants' interactions to encourage and enhance social dynamism by guiding and manipulating the occupants' circulation. Removing social isolation proved to play an important role in enhancing these dynamics.

The perception attained within these experiments was limited in the sense that it did not consider more communicative forms of human movement such as gesture or facial expression. The calculated perception was based entirely on vision and would be enriched by integrating additional sensory data such as sound or dialogue in conversation. The next stage for this project is to employ this additional sensory data to develop a much more intelligent understanding of the occupants' activities and interactions. A more comprehensive perception of the occupants' activities and interactions would increase the architecture's currently restricted ability to enhance. This prototype is intended as a conception of the desired system, which will be examined in further research and investigations. That work would entail developing a system capable of enhancing space by addressing desires for public or private space, optimising thermal, visual, lighting and acoustic conditions, and promoting sharing or collaboration in space. It would also investigate the materials and mechanics required to operate the proposed model as a full scale architectural system, and would examine beyond form into implementing light and sound to further develop the manipulative environment.
Reference List
Aggarwal, J & Ryoo, M 2011, 'Stochastic Representation and Recognition of High-level Group Activities', International Journal of Computer Vision (IJCV), 93(2):183-200, June 2011.

Aggarwal, J, Ryoo, M, Chen, C & Roy-Chowdhury, A 2010, 'An Overview of Contest on Semantic Description of Human Activities (SDHA) 2010', International Conference on Pattern Recognition (ICPR) Contests, August 2010.

Aggarwal, J & Ryoo, M 2011, 'Human Activity Analysis: A Review', ACM Computing Surveys (CSUR), 43(3), April 2011.

Anthes, E 2009, 'Building around the Mind', Scientific American, April 2009.

Anthes, E 2011, accessed 20 October 2011, <http://emilyanthes.com/about>

Arduino 2011, accessed 22 September 2011, <http://www.arduino.cc/>

Australian Bureau of Statistics 2003, Construction and the environment, accessed 20 October 2011, <http://www.abs.gov.au/ausstats/abs@.nsf/Previousproducts/1301.0Feature%20Article282003?opendocument&tabname=Summary&prodno=1301.0&issue=2003&num=&view=>

Cazan, V 2011, accessed 28 August 2011, <http://www.vladcazan.com/projects/openfloor/>

Daintree Networks 2010, 'The Value of Wireless Lighting Control', accessed 20 October 2011.

Foster, N 2007, 'Norman Foster's Green Agenda', 2007 DLD Conference, Munich.

Fox, M & Kemp, M 2009, Interactive Architecture, Princeton Architectural Press.

Gassmann, F & Muxel, A 2011, accessed 18 August 2011, <http://space.andreasmuxel.com/>

GitHub Sensebloom Repository, accessed 28 September 2011, <https://github.com/Sensebloom/OSCeleton>

Gregory, R 1987, 'Perception', in Gregory & Zangwill (eds) 1987, pp. 598-601.

Mani, M 2009, accessed 18 August 2011, <http://www.fishtnk.com/2009/09/28/tunable-sound-cloud/>

Marr, D 1982, Vision: A Computational Investigation into the Human Representation and Processing of Visual Information, pp. 3-7.

Nitrogen Logic 2011, accessed 20 October 2011, <http://www.nitrogenlogic.com/products/automation_controller.html>

Nitrogen Posterous 2011, accessed 20 October 2011, <http://nitrogen.posterous.com/>

Payne, A & Johnson, JK 2011, accessed 22 September 2011, <http://www.fireflyexperiments.com/>

Physics Classroom 2011, accessed 28 September 2011, <http://www.physicsclassroom.com/Class/1DKin/U1L1d.cfm>

Xia, L, Chen, C & Aggarwal, J 2011, 'Human Detection Using Depth Information by Kinect', International Workshop on Human Activity Understanding from 3D Data in conjunction with CVPR (HAU3D), Colorado Springs, CO, June 2011.

Zuk, W & Clark, R 1970, Kinetic Architecture, Van Nostrand Reinhold, New York.
Appendix - Code
Custom Arduino Firefly Firmata:

/* [ARDUINO + FIREFLY]
THIS IS A MODIFIED FIREFLY FIRMATA TO CONTROL A STEPPER MOTOR WITH A POTENTIOMETER
By Jason K Johnson, March 18, 2011. Visit: www.fireflyexperiments.com for more info
Adapted for this project to drive nine stepper motors from Grasshopper / Firefly serial data.

// Drive one Stepper Motor with a Potentiometer using the EasyDriver v4.3 board by Sparkfun _ info: http://schmalzhaus.com/EasyDriver/
// This uses the Arduino EasyDriver.h library. In the example I am using a 1.8 degree stepper with both the MS1 and MS2 pins set to HIGH (for 1/8 step resolution).
// Clock-wise around EasyDriver v4.3: MOTOR Input [A1-Yellow, A2-White, B1-Blue, B2-Red; connected to a 4 wire Stepper Motor]; MS2 to 5V; GND; M+ to 5V; DIR to Pin 3; STEP to Pin 2; GND; MS1 to 5V
// Potentiometer: Black to GND; Middle to AnalogIn 1; Red to 5V */

#include <EasyDriver.h> // download the library here: http://www.arduino.cc/cgi-bin/yabb2/YaBB.pl?num=1251509480
// copy txt to your Arduino > Libraries folder; then rename them EasyDriver.h and EasyDriver.cpp
// Built upon the MotorKnob example: http://www.arduino.cc/en/Reference/Stepper

#define BAUDRATE 9600 // Set the Baud Rate to an appropriate speed, 9600 is recommended
#define BUFFSIZE 256  // buffer one command at a time

////// ED_v4 Step Mode Chart ////// http://danthompsonsblog.blogspot.com/2010/05/easydriver-42-tutorial.html
// MS1 MS2 Resolution
// L   L   Full step (2 phase)
// H   L   Half step
// L   H   Quarter step
// H   H   Eighth step
// 5V jumpers into MS1 and MS2 to set them as "H"
////////////////////////////////////

// DIR / STEP pin pair for each of the nine EasyDrivers
int DIR  = 3;  int STEP  = 2;
int DIR2 = 5;  int STEP2 = 4;
int DIR3 = 7;  int STEP3 = 6;
int DIR4 = 9;  int STEP4 = 8;
int DIR5 = 11; int STEP5 = 10;
int DIR6 = 25; int STEP6 = 24;
int DIR7 = 27; int STEP7 = 26;
int DIR8 = 27; int STEP8 = 26; // NOTE: duplicates the DIR7/STEP7 pins in the original listing; almost certainly a transcription error - motor 8 needs its own pin pair
int DIR9 = 29; int STEP9 = 28;

// Stepper(int number_of_steps, int dir_pin, int step_pin) - 200 steps per revolution
Stepper stepper  (200, DIR,  STEP);
Stepper stepper2 (200, DIR2, STEP2);
Stepper stepper3 (200, DIR3, STEP3);
Stepper stepper4 (200, DIR4, STEP4);
Stepper stepper5 (200, DIR5, STEP5);
Stepper stepper6 (200, DIR6, STEP6);
Stepper stepper7 (200, DIR7, STEP7);
Stepper stepper8 (200, DIR8, STEP8);
Stepper stepper9 (200, DIR9, STEP9);

char buffer[BUFFSIZE];  // this is the double buffer
uint16_t bufferidx = 0; // unsigned integer index into the buffer
uint16_t p1, s1, p2, s2, p3, s3, p4, s4, p5, s5, p6, s6, p7, s7, p8, s8, p9, s9; // position / speed pair for each motor
int readCounter = 0;
int prev1 = 0; int prev2 = 0; int prev3 = 0; int prev4 = 0; int prev5 = 0;
int prev6 = 0; int prev7 = 0; int prev8 = 0; int prev9 = 0; // previous position of each motor

/*============ GLOBAL VARIABLES ============*/
char *parseptr;
char buffidx;

int APin0 = 0; // declare all Analog In pins
int APin1 = 0; int APin2 = 0; int APin3 = 0; int APin4 = 0; int APin5 = 0;

int DPin2 = 0; // declare all Digital In/Out pins
int DPin3 = 0; int DPin4 = 0; int DPin5 = 0; int DPin6 = 0; int DPin7 = 0;
int DPin8 = 0; int DPin9 = 0; int DPin10 = 0; int DPin11 = 0;
int DPin24 = 0; int DPin25 = 0; int DPin26 = 0; int DPin27 = 0; int DPin28 = 0; int DPin29 = 0;

int writecounter = 0; // declare the write counter

/*============ SETUP() This code runs once ============*/
void setup() {
  // set every DIR / STEP pin as a digital output
  pinMode(2, OUTPUT);  pinMode(3, OUTPUT);  pinMode(4, OUTPUT);  pinMode(5, OUTPUT);
  pinMode(6, OUTPUT);  pinMode(7, OUTPUT);  pinMode(8, OUTPUT);  pinMode(9, OUTPUT);
  pinMode(10, OUTPUT); pinMode(11, OUTPUT); pinMode(12, OUTPUT); pinMode(13, OUTPUT);
  pinMode(24, OUTPUT); pinMode(25, OUTPUT); pinMode(26, OUTPUT); pinMode(27, OUTPUT);
  pinMode(28, OUTPUT); pinMode(29, OUTPUT);
  Serial.begin(BAUDRATE); // Start serial communication
}

/*============ LOOP() This code loops ============*/
void loop() {
  serialread();               // read and act on any incoming motor data
  if (writecounter == 1500) { // every 1500th loop, send the sensor values back
    writecounter = 0;
    serialwrite();
  }
  writecounter = writecounter + 1;
}

/*============ SERIAL WRITE FUNCTION() ============*/
void serialwrite() {
  // READ SENSORS + BUTTONS FROM ARDUINO
  APin0 = analogRead(0); APin1 = analogRead(1); APin2 = analogRead(2); // Read analog input pins
  APin3 = analogRead(3); APin4 = analogRead(4); APin5 = analogRead(5);

  DPin2 = digitalRead(4); // Read digital input pins
  DPin4 = digitalRead(7);
  DPin7 = digitalRead(8);

  // Sending Sensor Data (comma separated) to Serial / GH
  Serial.print(APin0); Serial.print(",");
  Serial.print(APin1); Serial.print(",");
  Serial.print(APin2); Serial.print(",");
  Serial.print(APin3); Serial.print(",");
  Serial.print(APin4); Serial.print(",");
  Serial.print(APin5); Serial.print(",");
  Serial.print(DPin4); Serial.print(",");
  Serial.print(DPin7); Serial.print(",");
  Serial.print(DPin8); Serial.print(",");
  Serial.println("eol");
}

/*============ SERIAL READ FUNCTION() ============*/
void serialread() {
  char c; // holds one character from the serial port
  if (Serial.available()) {
    c = Serial.read();     // read one character
    buffer[bufferidx] = c; // add to buffer
    if (c == '\n') {
      buffer[bufferidx + 1] = 0; // terminate it
      parseptr = buffer;         // offload the buffer into a temp variable

      // parse the position / speed pair for each of the nine motors, moving past each "," as we go
      p1 = parsedecimal(parseptr); parseptr = strchr(parseptr, ',') + 1;
      s1 = parsedecimal(parseptr); parseptr = strchr(parseptr, ',') + 1;
      p2 = parsedecimal(parseptr); parseptr = strchr(parseptr, ',') + 1;
      s2 = parsedecimal(parseptr); parseptr = strchr(parseptr, ',') + 1;
      p3 = parsedecimal(parseptr); parseptr = strchr(parseptr, ',') + 1;
      s3 = parsedecimal(parseptr); parseptr = strchr(parseptr, ',') + 1;
      p4 = parsedecimal(parseptr); parseptr = strchr(parseptr, ',') + 1;
      s4 = parsedecimal(parseptr); parseptr = strchr(parseptr, ',') + 1;
      p5 = parsedecimal(parseptr); parseptr = strchr(parseptr, ',') + 1;
      s5 = parsedecimal(parseptr); parseptr = strchr(parseptr, ',') + 1;
      p6 = parsedecimal(parseptr); parseptr = strchr(parseptr, ',') + 1;
      s6 = parsedecimal(parseptr); parseptr = strchr(parseptr, ',') + 1;
      p7 = parsedecimal(parseptr); parseptr = strchr(parseptr, ',') + 1;
      s7 = parsedecimal(parseptr); parseptr = strchr(parseptr, ',') + 1;
      p8 = parsedecimal(parseptr); parseptr = strchr(parseptr, ',') + 1;
      s8 = parsedecimal(parseptr); parseptr = strchr(parseptr, ',') + 1;
      p9 = parsedecimal(parseptr); parseptr = strchr(parseptr, ',') + 1;
      s9 = parsedecimal(parseptr);

      // drive each motor by the difference between the new and previous position:
      // set speed in rpm (200 to 600?); step 0 to 1600 for 360 degrees; remember the previous value
      stepper.setSpeed(s1);  stepper.step(p1 - prev1);  prev1 = p1; // (this first block follows the pattern of motors 2-9; it was lost from the original listing)
      stepper2.setSpeed(s2); stepper2.step(p2 - prev2); prev2 = p2;
      stepper3.setSpeed(s3); stepper3.step(p3 - prev3); prev3 = p3;
      stepper4.setSpeed(s4); stepper4.step(p4 - prev4); prev4 = p4;
      stepper5.setSpeed(s5); stepper5.step(p5 - prev5); prev5 = p5;
      stepper6.setSpeed(s6); stepper6.step(p6 - prev6); prev6 = p6;
      stepper7.setSpeed(s7); stepper7.step(p7 - prev7); prev7 = p7;
      stepper8.setSpeed(s8); stepper8.step(p8 - prev8); prev8 = p8;
      stepper9.setSpeed(s9); stepper9.step(p9 - prev9); prev9 = p9;

      bufferidx = 0; // reset the buffer for the next read
      return;        // return so that we don't trigger the index increment below
    }
    // didn't get a newline, need to read more into the buffer
    bufferidx++; // increment the index for the next character
    if (bufferidx == BUFFSIZE - 1) { // if we get to the end of the buffer, reset for safety
      bufferidx = 0;
    }
  }
}

uint32_t parsedecimal(char *str) {
  uint32_t d = 0;
  while (str[0] != 0) {
    if ((str[0] > '9') || (str[0] < '0')) // stop at the first non-digit ('9' appears garbled as '50' in the original listing)
      return d;
    d *= 10;
    d += str[0] - '0';
    str++;
  }
  return d;
}
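For reference, the firmata above expects each serial line from the Grasshopper "serial write" component to carry nine position/speed pairs as comma-separated integers terminated by a newline. The specific values below are illustrative:

// One message as parsed by serialread(): p1,s1, p2,s2, ... p9,s9 followed by '\n'
// e.g. "800,400,650,400,900,350,0,400,1200,300,400,400,760,350,320,400,90,400\n"
// Each pN is an absolute step target (0 to 1600 maps to one revolution);
// the firmata steps each motor by pN - prevN, so only the change is driven.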
Permutation Code (Grasshopper VB script):

Dim k As Integer = x - 1
Dim j As Integer = 1
Dim i As Integer
Dim ki As Integer
Dim ji As Integer

For i = 0 To y Step 1
  If k > 0 Then
    For ki = 1 To k
      Print(True)
    Next
    For ji = 1 To j
      Print(False)
    Next
    k = k - 1
    j = j + 1
    i = i + 1
  End If
Next
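Traced by hand, the script above emits a triangular True/False culling pattern: for x = 3 occupants it prints True, True, False, True, False, False, which reads as a triangular mask over the flattened occupant-pair cross-reference so that each unordered pair is considered once. The following C++ transliteration (my rendering, under that reading; the y bound is an assumed value) can be used to check the pattern:

#include <cstdio>

// Transliteration of the Grasshopper VB permutation script for x occupants.
// Prints a triangular True/False culling pattern over the flattened pair grid.
int main() {
    int x = 3, y = 8; // y bounds the loop as in the VB original (assumed value)
    int k = x - 1, j = 1;
    for (int i = 0; i <= y; ++i) {
        if (k > 0) {
            for (int ki = 1; ki <= k; ++ki) std::printf("True\n");
            for (int ji = 1; ji <= j; ++ji) std::printf("False\n");
            k = k - 1;
            j = j + 1;
            i = i + 1; // the VB script also advances i inside the loop body
        }
    }
    return 0; // for x = 3: True, True, False, True, False, False
}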
Prototype Frame >
Appendix - Grasshopper Definitions
Translating Form to Physical Model >
Projecting intersection
Validating Projected intersection >
Initiating Arduino Communication ^
Communicating Data to Arduino / Stepper Motors >
Communicating Data to Arduino / Stepper Motors ^
Incoming Kinect data >
Calculating Permutation >
Calculating Proximity ^