First International Conference on Critical Digital: What Matter(s)?
18-19 April 2008, Harvard University Graduate School of Design, Cambridge MA 02138 USA

The purpose of Critical Digital is to foster a dialogue about digital media, digital technology, and design, and to challenge the basis of contemporary digital media arguments. The intention is to identify, distinguish, and offer a critique of current trends, tendencies, movements, and practices in digital culture. Critical Digital provides a forum for discussion and enrichment of the experiences in this discourse. Through diverse activities, symposia, competitions, conferences, and publications, Critical Digital supports a dialogue that challenges what is rapidly becoming the de facto mainstream. What is digital? Why should design be (or not be) digital? How have practitioners and schools been using digital media?

The theme of the first conference is What Matter(s)? As the current theoretical discourse in architecture seems to elude digital phenomena, a crucial critical discussion is emerging as a means to address, understand, clarify, and assess the elusive nature of this discourse. Issues related to virtuality, ephemerality, continuity, materiality, or ubiquity, to name a few, while originally invented to explain digital or computational phenomena, are utilized today in the context of a design that is still traditionally material-based. What is the nature of their use? Is materiality subject to abstract digital concepts? Is the digital buildable? What matters? As we progress to think and design for the built environment, interactive space, and the body, what materializations are actually emerging? What physical manifestations and manifestos are to be promoted?

Critical Digital presents and hosts scholarly participation under the theme What Matter(s). Intentionally, the provocation is for both critical writings and projective works which address the issue of the digital within the contemporary design discourse. Cultural changes based on the fast evolution of digital technologies are continuously developing and affecting all of our activities as professionals, academics, and citizens. Digital culture has affected our notions as inhabitants and creators of a built environment, changing the way we conceive, transform, and produce space. In the first place, digital design and production processes are simulating and integrating material and environmental conditions, while addressing innovative methods of conception and physical realization of ideas at all scales. This has opened rich areas of research in design and important cross-pollinations and multidisciplinary approaches that reinforce and expand the connections between practitioners, industry, and academia. It is a challenge to creativity, rigor, and exploration, but also a product of an increasingly complex understanding of what design is, of what designers can produce, and of their relation to the material and physical conditions of the built environment.
It is also fundamental to understand how the development of digitally enhanced products and spaces is affecting our experiences at all scales. New kinds of relationships and communications have quickly become available, and they imply new models of interaction with the built environment, mediated through digital devices and embedded computation. This also calls for a highly critical and multidisciplinary approach to design, in order to engage the complex phenomena and the fast development of technology without losing sight of what matters: the “substance”.

Translations, transformations, and transportations of What Matter(s) in design are being called into question. We have been looking for positions, projects, and proposals which address the value of the digital in our design cultures. What Matter(s) is an event which invites people interested in bridging or debunking issues of digital material/virtual culture. What matter(s) in terms of work, process, and thought is curated, published, and debated in an open format at the Graduate School of Design of Harvard University on April 18 and 19, 2008.
Acknowledgements

Critical Digital was an idea originally conceived by doctoral students in the Advanced Studies Program at Harvard University's Graduate School of Design. Doctoral students (now graduated or graduating) Carlos Cardenas, David Gerber, Jie Eun Huang, Scott Pobiner, Yasmine Abbas, and Neyran Turan started the idea of a conference as a means to address the various and diverse issues raised by the digitalization of architecture. Their efforts led to a series of symposia held at the GSD in 2005 and 2006. I would like to thank them all for their invaluable contribution to the intellectual foundation of this effort. Last year, some of the current students in the Doctor of Design program took on the idea and developed it into the form of a conference. Jan Janclaus, Teri Rueb, Dido Tsigaridi, Nashid Nabian, Zenovia Toloudi, and Taro Narahara are the original organizers of this conference, who have contributed numerous hours in meetings, discussions, and ideas about organizational matters, administrative issues, and presentation formats. I thank them all for their spirit, work, and inspiring ideas. Jock Herron, also a doctoral student, recently joined the group, and I thank him for his enthusiastic support. Three faculty members of the Department of Architecture, Ingeborg Rocker, Mariana Ibanez, and Jeannette Kuo, have been offering fresh and innovative ideas to the conference and have joined the group as moderators and organizers. I would like to thank them for their time, effort, and inspiration.
Session 1a: Pedagogy
Moderators: Jeanette Kuo and Teri Rueb

Yehuda E. Kalay, The Impact of Information Technology on Architectural Education in the 21st Century ... 3
Bob Giddings and Margaret Horne, The Changing Patterns of Architectural Design Education in the UK ... 7
Dominik Holzer, Embracing the Post-digital ... 17
Anastasia Karandinou, Leonidas Koutsoumpos, and Richard Coyne, Hybrid Studio Matters: Ethnomethodological Documentary of a Tutorial ... 23
Paolo Fiamma, D.I.G.I.T.A.L. Defining Internal Goals In The Architectural Landscape ... 35
Tim Schork, Option Explicit – Scripting as Design Media ... 41
Session 1b: Process in Design
Moderators: Mariana Ibanez and Jock Herron

Tomasz Jaskiewicz, Dynamic Design Matter[s]: Practical considerations for interactive architecture ... 49
Gun Onur and Jonas Coersmeier, Progressions in Defining the Digital Ground for Component Making ... 57
David Celento and Del Harrow, CeramiSKIN: Biophilic Topological Potentials for Microscopic and Macroscopic Data in Ceramic Cladding ... 65
Emmanouil Vermisso, ‘Digitality’ controlled: paradox or necessity? ... 77
Sherif Morad, Building Information Modeling and Architectural Practice: On the Verge of a New Culture ... 85
Oliver Neumann, Digitally Mediated Regional Building Cultures ... 91
David Harrison and Michael Donn, Using Project Information Clouds to Preserve Design Stories within the Digital Architecture Workplace ... 99
Christian Friedrich, Information-matter hybrids: Prototypes engaging immediacy as architectural quality ... 105
Theodore Dounas, Algebras, Geometries and Algorithms, Or How Architecture fought the Law and the Law Won ... 111
Session 2a: Digital Culture
Moderators: Nashid Nabian and Dido Tsigaridi

Panos Parthenios, Analog vs. Digital: why bother? ... 117
Julian Breen, The Medium Is the Matter: Critical Observations and Strategic Perspectives at Half-time ... 129
Daniel Cardoso, Certain assumptions in Digital Design Culture: Design and the Automated Utopia ... 137
Branko Kolarevic, Post-Digital Architecture: Towards Integrative Design ... 149
Ingeborg Rocker, Versioning: Architecture as series? ... 157
Katerina Tryfonidou and Dimitris Gourdoukis, What comes first: the chicken or the egg? Pattern Formation Models in Biology, Music and Design ... 171
Session 2b: Tools
Moderators: Taro Narahara and Kostas Terzidis

Sawako Kaijima and Panagiotis Michalatos, Simplexity, the programming craft and architecture production ... 181
Aya Okabe, Tsukasa Takenaka, and Jerzy Wojtowicz, Beyond Surface: Aspects of UVN world in Algorithmic Design ... 195
Orkan Telhan, Towards a Material Agency: New Behaviors and New Materials for Urban Artifacts ... 205
Bernhard Sommer, Generating topologies: Transformability, real-time, real-world ... 213
Josh Lobel, The representation of post design(v.) design(n.) information ... 221
Serdar Asut, Rethinking the Creative Architectural Design in the Digital Culture ... 229
Jerry Laiserin, Digital Environments for Early Design: Form-Making versus Form-Finding ... 235
Yanni Loukissas, Keepers of the Geometry: Architects in a Culture of Simulation ... 243
Simon Kim and Mariana Ibanez, Tempus Fugit: Transitions and Performance in Activated Architecture ... 245
Session 3a: Critical Space
Moderators: Dido Tsigaridi and Jan Jungclauss

Edgardo Perez, The Fear Of The Digital: From The Elusion Of Typology To Typologics ... 255
Francisca M. Rojas, Kristian Kloeckl, and Carlo Ratti, Dynamic City: Investigations into the sensing, analysis and application of real-time, location-based data ... 267
Ole B. Jensen, Networked mobilities and new sites of mediated interaction ... 279
Gregory More, The Matter of Design in Videogames ... 287
Joseph B. Juhász and Robert H. Flanagan, Do Narratives Matter? Are Narratives Matter? ... 295
Jock Herron, Shaping the Global City: The Digital Culture of Markets, Norbert Wiener and the Musings of Archigram ... 301
Session 3b: Process in Design
Moderators: Ingeborg Rocker and Zenovia Toloudi

Dimitris Papanikolaou, From Representation of States to Description of Processes ... 311
Rodrigo Martin Quijada, Reality-Informed-Design (RID): A framework for design process ... 319
Sergio Araya, Algorithmic Transparency ... 325
Sotirios Kotsopoulos, Games with(out) rules ... 337
Magdalena Pantazi, Using Patterns of Rules in the Design Process ... 345
Session 4: Critical Reflection
Moderators: Teri Rueb and Kostas Terzidis

Anthony Burke, Reframing “intelligence” in computational design environments ... 355
Mahesh Senagala, Deconstructing Materiality: Harderials, Softerials, Minderials, and the Transformation of Architecture ... 363
Erik Conrad, Rethinking the Space of Intelligent Environments ... 373
Lydia Kallipoliti and Alexandros Tsamis, The teleplastic abuse of ornamentation ... 381
Neri Oxman, Oublier Domino: On the Evolution of Architectural Theory from Spatial to Performance-based Programming ... 392
Sha Xin Wei, Poetics of performative space ... 401
Pedagogy
Moderators: Jeanette Kuo and Teri Rueb
Papers:
Yehuda E. Kalay, The Impact of Information Technology on Architectural Education in the 21st Century
Bob Giddings and Margaret Horne, The Changing Patterns of Architectural Design Education in the UK
Dominik Holzer, Embracing the Post-digital
Anastasia Karandinou, Leonidas Koutsoumpos, and Richard Coyne, Hybrid Studio Matters: Ethnomethodological Documentary of a Tutorial
Paolo Fiamma, D.I.G.I.T.A.L. Defining Internal Goals In The Architectural Landscape
Tim Schork, Option Explicit – Scripting as Design Media
The Impact of Information Technology on Architectural Education in the 21st Century

Yehuda E. Kalay, Department of Architecture, University of California, Berkeley, U.S.A., kalay@berkeley.edu
Abstract

Architecture is a technology-intensive discipline. It uses technology—both in the process of designing and in its products—to achieve certain functional, cultural, social, economic, and other goals. In turn, technology transforms the discipline. The importance of technology to the discipline and to the practice of architecture has been demonstrated again and again throughout history. In the 21st century, the advent of computer-aided design, computer-assisted collaboration, construction automation, “intelligent” buildings, and “virtual” places promises to have as much of an impact on architectural design processes and products as earlier technological advances have had. Like most other early adoptions of a technology, the first uses of computing in the service of architecture mimicked older methods: electronic drafting, modeling, and rendering. But this rather timid introduction is changing rapidly: new design and evaluation tools allow architects to imagine new building forms, more responsive (and environmentally more responsible) buildings, even radically new types of environments that blend physical with virtual space. Communication and collaboration tools allow architects, engineers, contractors, clients, and others to work much more closely than was possible before, resulting in more complex, more innovative, and more effective designs. Understanding and shaping this transformation is the basis of architectural education in the 21st century.

1. The effects of technology on Architecture

Architecture is a technology-intensive discipline. It uses technology—both in the process of designing and in its products—to achieve certain functional, cultural, social, economic, and other goals. In turn, technology transforms the discipline. The importance of technology to the discipline and to the practice of architecture, and through them to society as a whole, has been demonstrated again and again in history: the adaptation of the Etruscan keystone arch enabled Roman engineers to build extremely strong and durable bridges, and led them to invent the dome as early as 27 BC. The invention of the flying buttress allowed 12th-century Master Builders to replace the Romanesque's massive walls with the relatively thin and tall walls and soaring vaulted ceilings of the Gothic cathedral. The invention of perspective and scale drawings in the 15th century radically transformed the practice and products of architecture, and created the office of the Architect, as distinct from the Master Builder. Henry Bessemer's invention of mass-produced steel in 1855, coupled with Elisha Otis' invention of the safety elevator in 1853 and Werner von Siemens' invention of the electric elevator in 1880, allowed architects such as Daniel Burnham to design and build skyscrapers as early as 1902.

Thus, technological innovations—often several of them coming together at the same time—have always had a significant impact on the discipline and practice of architecture. In the 21st century, the advent of computer-aided design, computer-assisted collaboration, computer-assisted construction technologies, computer-controlled buildings, and Internet-based “virtual” places promises to have as much of an impact on architectural design processes and products as these earlier technological advances have had. Like most other early adoptions of a technology, the first uses of computing in the service of architecture mimicked older methods: electronic drafting, modeling, and rendering. But this rather timid introduction is changing rapidly: new design and evaluation tools allow architects to imagine new building forms, more responsive (and environmentally more responsible) buildings, even radically new types of environments that blend physical with virtual space. Communication and collaboration tools allow architects, engineers, contractors, clients, and others to work much more closely than was possible before, resulting in more complex, more innovative, and more effective designs. Understanding and shaping this transformation is the basis of architectural education in the 21st century.

2. Computing in Architecture

The introduction of computer technology has provided architects with new affordances and has begun to displace previous design technologies. It is obvious that the efficiency, control, and intelligence made possible by computational tools are increasingly essential to architectural practices. But it is less obvious how this technology has begun to influence the discipline and the practice of architecture, the society they serve, and therefore the education of architects.

The effect of new technologies on our lives is rarely guided by reflection. Rather, we adapt to the changing technological context. The effect of these adaptations eventually becomes known, but by then our lives, practices, and environments have been irreversibly changed—often with unintended consequences. The advent of the automobile at the beginning of the 20th century is a case in point: it was intended to be a means of transportation, not much different from the horse and carriage that preceded it (indeed, it was called a “horseless carriage”). Its inventors and users never imagined that this purely technological device would lead to the creation of suburbs, freeways, shopping malls, drive-throughs (and drive-bys), to global dependence on oil along with its political consequences, global warming, and more.

The transformation of the computer itself provides further evidence of this socio-technological phenomenon: when general-purpose computers were first introduced after World War II, they were giant monochrome machines tended by people wearing white shirts and black ties. They cost millions of dollars, filled entire rooms, and were used for such tasks as accounting and statistical calculations. They were icons of governmental power: there was nothing “personal” about them. The counterculture of the 1960s loathed them. Only 50 years later, with the help of many technological improvements, the computer came to be seen as empowering people rather than suppressing them. Its use has proliferated to every aspect of our lives, changing them immeasurably from what they were in the 1960s.

Four decades after the first introduction of computers into architecture, their effects on the discipline, the practice, and the products of Architecture are becoming evident. As educators and researchers, we must examine the premises and purposes of the new technologies so that we may assess what has been displaced and adapted, what has been gained, and what is new, and predict and direct the future of our discipline along with the affordances of the new technologies. It is not enough to assume that our new design technologies help architects to work more intelligently, more responsibly, more effectively, and more collaboratively. Rather, it is necessary to understand that design technologies also change our perceptions and influence our work, sometimes (often?) with unintended consequences.

3. Two paradigms

Two paradigms can serve to illustrate the relationship between a technology, its affordances, and a practice. The first, which I call ‘a square peg in a round hole,’ describes the problem of adapting a new technology to a current practice. As the new technology is introduced into the practice, a dysfunctional relationship can develop between the tools and a task. This occurs either because the task is poorly understood or because the process of displacing a traditional technology is largely one of the substitution of tools—ones with the wrong affordances. This inappropriate use of the technology results in a poorer practice. An example of this was readily seen in the early uses of CAD tools in the design process, where they introduced false rigor and instilled misplaced confidence in what were, at best, tentative design decisions. The answer to this dysfunction was to ‘round off’ the square peg—making tools that better fit the (perceived) needs of the practice.

The second paradigm, which I call ‘the horseless carriage,’ is characterized by the shifting perception of a practice as it transforms in relationship to a new technology. In using the term ‘horseless carriage’ to describe the automobile in the early 20th century, the task of transportation was wrongly understood through the lens of a previous technology, without realizing that the practice of travel had changed dramatically. Understanding this paradigm requires asking how the affordances provided by the new technologies change the design practice and its products. Do we understand how having more precision early in the process affects the reasoning about options? Do we understand how communication via digital media fundamentally changes the culture of a practice? How the emergence of virtual ‘places’ changes our work, study, and entertainment practices (as well as our conception of ourselves)? How does knowledge once invested only in the designer, but now ingrained in the tools, affect the practice?

Architects have had only a few decades of experience with computational tools. It is therefore not surprising that the fit between the affordances offered by computing technology and design practice is still problematic, and its consequences are not yet well understood. But it is already clear that such consequences are significant.

4. Architecture's New Media

In the 21st century, the ‘environment’ in Environmental Design, more than ever before, will not be limited to the physical environment alone. The digital revolution has augmented, replaced, and ‘remixed’—to borrow a New Media term—that which is strictly ‘physical’ and that which is strictly ‘virtual.’ Over the past two decades we have become accustomed to communicating, shopping, conducting business, being educated, and being entertained in an ever more transparent mix of physical and virtual environments. Our physical spaces are becoming more ‘aware’ of our human presence and activities, in ways that range from doors that open automatically when we approach, to elevators that position themselves at the most heavily used floors, to surveillance systems, ‘smart’ homes, and ‘intelligent’ roadways.
The materials from which we build the physical environment have become more sophisticated, not merely in their physical properties, but also in their ability to respond and adapt to changing conditions. And communication tools like the Internet, coupled with object-aware (rather than graphics-aware) databases, are changing the way Architecture is practiced. No longer is a small architectural office limited to practicing within a radius of 100 miles of its physical location: it is now commonplace to collaborate on projects located on other continents, with partners spanning the globe, in a design practice that spans 24 hours a day.

5. The impact of computing on architectural education

At the dawn of the 21st century we are faced with unique environmental, economic, social, and political challenges, such as global warming, globalization, diminishing natural resources, and aging populations. At the same time we also have at our disposal opportunities such as ubiquitous information technologies, new materials and building practices, new learning methodologies, and a knowledge-based economy. Addressing these challenges and opportunities requires new approaches to education in general, and to environmental design education in particular.

While traditional values, methods, and practices are still important, architects, planners, and landscape architects who will practice in the 21st century (much like other professionals) need to know more, about more issues, than ever before. They will need to have a better understanding of the impact that diminishing natural resources will have on physical, social, economic, and political environments; of the means they can employ when designing new environments; and of the new kinds of products they will be asked to design. More importantly, they will need to understand the impact these developments will have on their professions, on the global environment, and on the societies they serve. Faculty and students will have to assume a leadership role in research and design that will direct these developments along social, cultural, and professional values.

Schools of architecture, therefore, have the responsibility of understanding these changes, and of guiding them, while educating new generations of architects who will use these methods, tools, and practices to change the environment in which we live. That is our challenge, as educators and researchers. If we don't rise to meet it, others (such as the vendors of CAD tools) will, and they will shape our discipline and our environments for many years to come. Understanding the implications of these new affordances requires a new approach to architectural education: an education that views the changes from the point of view of the horseless carriage, rather than the square peg in the round hole.
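Kalay's contrast between graphics-aware and object-aware databases can be made concrete with a small sketch. The following Python fragment is purely illustrative; the class names and attributes are assumptions for the example, not the schema of any actual CAD or BIM product. A drafting system stores bare geometry, while an object-aware model stores semantically rich elements from which drawings, schedules, and analyses can all be derived.

from dataclasses import dataclass

# Graphics-aware record: the database knows only coordinates.
# Whether these lines depict a wall, a duct, or a property
# boundary is left entirely to the human reader.
@dataclass
class Line:
    x1: float
    y1: float
    x2: float
    y2: float

# Object-aware record: the database knows what the element *is*,
# so schedules, cost estimates, and energy models can query it.
@dataclass
class Wall:
    start: tuple[float, float]
    end: tuple[float, float]
    height_m: float
    thickness_m: float
    material: str
    fire_rating_h: float

    def area_m2(self) -> float:
        (x1, y1), (x2, y2) = self.start, self.end
        length = ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
        return length * self.height_m

wall = Wall((0, 0), (6, 0), height_m=3.0, thickness_m=0.2,
            material="concrete", fire_rating_h=2.0)
print(f"Wall area for the finishes schedule: {wall.area_m2():.1f} m2")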
The Changing Patterns of Architectural Design Education in the UK

Bob Giddings, BA BArch MA DPhil RIBA FCIOB FFB, School of the Built Environment, Northumbria University, UK, bob.giddings@northumbria.ac.uk
Margaret Horne, BSc MPhil CEng MBCS CITP, School of the Built Environment, Northumbria University, UK, m.horne@northumbria.ac.uk

Abstract

Digital technologies have been introduced to students of architecture for over two decades, and at present it could be argued that students are producing some of the highest quality designs, and some of the most interesting forms, ever to come from University Schools. The value of computer aided design (CAD) is also being demonstrated in architectural practice, with high profile, large budget, bespoke and iconic buildings designed by internationally renowned architects. This paper reviews the changing patterns of architectural design education and considers the contribution digital technologies could make to buildings with more commonplace uses. The study offers a perspective on different kinds of buildings and considers the influence that emerging technologies are having on building form. It outlines digital technologies, alongside students' application of them in architectural design, and considers the role they could play in the future in developing a shared architectural language. It is suggested that some of the biggest opportunities for future research will be in the design of external spaces, often a neglected part of architectural design education.

1. Introduction

Success in architectural education has often been defined as mastering three-dimensional (3D) thinking. Students have constantly struggled with this concept and their intellectual development has invariably been hampered by representational methods. Traditionally, the only methods of representing three-dimensional thinking have been by drawing – perspective, axonometric, isometric – and by constructing physical models. Both can suffer from the ambiguity of deciding whether they are design or presentation tools, which is something that students may take some while to appreciate. Drawings have the disadvantage of not being true three-dimensional representations, but merely two-dimensional representations of three-dimensional situations. They are also limited by a single viewpoint. This can be carefully selected by the student to suggest a spatial quality in the design that does not really exist. Models also have disadvantages. Generally, they are viewed from above, which produces less impact than the human viewpoint; and they imply a neatness in the environment that cannot be replicated in practice (Giddings and Horne, 2002). Both have two further disadvantages. Structure, materials and construction can be shown in a superficial way, as these forms of representation do not depend on their accuracy. Yet perhaps the greatest impediment has been the effect on the design process. Drawings and models can involve considerable time investment by students. Thus testing can take longer than the developing thought processes. Students also test their ideas less often than is desirable because of the time and effort needed to generate further drawings and models. This tends to lead to inflexibility in the students' progress and a willingness to accept a less favourable solution because the drawings and models for it have already been completed. As higher standards of presentation are constantly being required, the design process generally becomes curtailed at a relatively early stage. The design is fixed and the remaining project time is spent on presentation. This means that design time is only a proportion of total project time, and proposals that are not fully resolved can be offered as final solutions.

2. Development of digital technologies

Digital technologies have been introduced to students of architecture for over two decades. As computer aided design evolved, it witnessed several changes in research direction: from CAD which stood for computer aided drafting and simulated the use of drafting tools, to CAD which enabled the production of photo-realistic models of buildings, to CAD which is now primarily concerned with design exploration and collaborative design activity (Duarte, 2005). Computer aided design today encompasses technologies based on purely geometrical principles, technologies based on building elements (building information modeling (BIM)), technologies incorporating animation, interactivity and immersiveness (virtual reality (VR)) and, more recently, augmented reality (AR). In the context of this paper, digital technologies encompass the ever developing range that is now available for architectural design. More progressive Schools of Architecture have long embraced the challenges that technology has brought and continue to evaluate the opportunities it offers to the design process (Petric and Maver, 2003). Yet for many students, CAD still means computer aided drafting and an electronic medium for a traditional process. Despite the fact that digital media have been used in architectural design studios at universities for over twenty years, architectural students are generally taught to draw the same way their tutors learned – with traditional media first (Goldman, 2005). However, the new generation of digital technologies is providing electronic three-dimensional representations that can be quickly assembled, recorded, tested and adjusted. Design and presentation methods can be harmonized, reducing the traditional prolonged presentation period. Such representations are enabling the development and testing of designs to be faster and more accurate, and students can now quickly and accurately produce designs to a much more sophisticated level. It could be argued that students are now producing some of the highest quality designs, and some of the most interesting forms, ever to come from University Schools.

3. Student projects

Digital technologies now have a well established research track record (Asanowicz, 1998; Achten et al, 1999). However, effective integration into the academic curriculum is an ongoing challenge to academics, who have to balance the expectations of increasingly computer-literate students with the demands of curricula devised to meet the needs of professional practice (Horne and Hamza, 2006). Nonetheless, some interesting student projects are emerging that demonstrate appropriate application of digital technologies for both design and presentation. Students who fully understand the functionality of the tools are beginning to harness their capabilities to enhance design exploration. The following student projects have been selected to illustrate how students are using computer aided design tools in three different contexts.

The first project is a regeneration scheme in Newcastle upon Tyne, UK. The students favoured the use of computer technology to explore their designs, and one of the results can be seen in figure 1.
However, the studio programme stipulated that freehand sketching, scaled drawings and physical scale models were to be used in the presentation. The 3D computer model enabled interaction with, and navigation around, the design during its formulation, contributing greatly to the decision making process; and photo-realistic images and pre-programmed fly-throughs were used for presentation purposes. Being able to navigate around a developing model facilitates the generation of ideas, increases the time spent on design analysis and reduces the focus on final presentation. However, in this case, time was lost through the duplication of electronic media with traditional techniques. It may be difficult to understand why this duplication occurred, but there appears to be something deep in the psyche of architectural tutors that causes them to insist on the replication of their own design education process. There was also a fear of black-box technology, and the last vestiges of a master-pupil relationship in which the master must be seen to have greater skill and knowledge than the pupil.
Figure 1. Student Image: Oliver Jones 2005

Figure 2 shows a student project which was centred on both the design of a building and its production phase. The project was based around modular / volumetric construction, where many of the components would be made in factory conditions. The student considered structure and construction during the early stages of the design process and developed the computer model to enable a potential client to understand how the modules would fit together and the building be assembled. In such circumstances, it might be surprising that freehand sketching was the preferred method for the initial conceptual design and construction method, followed by the creation of a highly detailed interactive computer model. This could be because it is still difficult to perceive the precise nature of the computer model as a tool for creating and manipulating an embryonic concept. It can be considered that the fluidity and ambiguity generated by the freehand pencil unlocks the possibilities of lateral thought in a way that cannot be achieved with digital devices (Latham, Swenarton and Sampson, 1998).
Figure 2. Student Image: Christopher Burkitt 2006
Figure 3 illustrates a student project that was designed in three dimensions from the conceptual design stage. However, it demonstrated two further concerns that have been articulated by architectural tutors. The first concern is that the library of standard components in building information modeling software may lead to uncritical design decisions. In this case, the student was challenged to explore the creation of customised components, curved structures and non-standard forms – parameters that may well not be replicated in practice. The second concern is that although the emerging technology of building information modeling is fostering a closer relationship between designer and fabricator (Scheer, 2006), this technology is abstracting the designer further from the reality of buildings and their materials than ever before. One outcome could be that the designer increasingly passes design decisions on to specialist suppliers, losing control over the integrity of the design in the process.
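The difference between instancing a fixed catalogue part and authoring a customised component, as the student was challenged to do, can be sketched in a few lines. This Python sketch is a hypothetical illustration only; no BIM package's actual API is implied, and the sine-bulge profile is simply an assumed stand-in for a non-standard panel geometry.

import math

def panel_profile(width_m, bulge_m, samples=7):
    """Plan profile of a facade panel as (x, offset) points.
    A flat catalogue panel is simply the bulge_m = 0.0 case."""
    pts = []
    for i in range(samples):
        t = i / (samples - 1)
        # sine bulge: zero at both edges, maximum at mid-panel
        pts.append((t * width_m, bulge_m * math.sin(math.pi * t)))
    return pts

# Varying one parameter yields a family of non-standard panels for
# the designer to judge critically, rather than a single default part.
for bulge in (0.0, 0.1, 0.2, 0.3):
    profile = panel_profile(1.2, bulge)
    depth = max(off for _, off in profile)
    print(f"bulge {bulge:.1f} m -> max panel depth {depth:.2f} m")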
Figure 3. Student Image: Craig Peel 2005

4. Implications for Practice
High profile, large budget, bespoke and iconic designs by internationally renowned architects are also demonstrating the value of digital media. Designs such as the Eden Project (Figure 4) have demonstrated the possibilities opened up by digital technologies and offer inspiration to an increasingly computer-literate student population who are excited by non-rectilinear geometry that can convey the essence of a building through its form (Giddings and Horne, 2002).
Figure 4. Eden Project, Nicholas Grimshaw & Partners

The future for these kinds of projects seems assured. Digital technologies have enabled extraordinary new buildings to be designed and built. Forms of nature are providing inspiration for increasingly complex built forms, and architects are rediscovering the joy of sculpting unusual geometries (Novitski, 2000). The computer's processing capability in structural analysis and in production and fabrication techniques is a major contributor to the changing shape of architecture. Architects are exploring the new possibilities opened up by CAD software and by modern analysis and simulation methods (Mitchell, 2000).
Figure 5. Digital Model of City Hall, London

Whilst it must be acknowledged that the high-profile, well-publicised iconic buildings are receiving considerable attention, it can be argued that such projects occupy only a very small proportion of the built environment. It is likely that the vast majority of architectural graduates will design buildings with more commonplace uses, and these constitute the majority of built form and new construction. Figure 6 shows an exaggerated historical view of an iconic building framed by contextual buildings (Lozano, 1990). Traditionally, the latter were constructed from custom and practice, rarely involved architects, and fitted into their localities.
Figure 6. Iconic Building framed by Contextual Buildings

However, as the 20th century progressed, the scale of these developments and the loss of local builders meant that what had been contextual buildings increasingly became part of the design and construction industry. Before the end of that century, concern was already being expressed about their appearance. It has been considered that one of the major reasons for the destruction of our towns and cities in visual terms is that ordinary buildings of recent years are each trying to draw attention to themselves (Sahai, 1991). At the other end of the spectrum, the demand for cheapness in the production of the built environment has generated the dull and mundane. The national pervasiveness of some sectors, such as speculative house building, has produced a consensus lamenting the lack of regional distinctiveness in domestic design. There has been a continuing search for a shared architectural language.

The likelihood is that digital information models based on building elements will become increasingly used for the design of these buildings. Much of the development has been directed towards ease-of-use, and to a large extent this has been achieved. The benefits have already been considerable: use as a design tool rather than a drafting tool, reduced time in presenting the information, relationships between building elements, instant schedules for doors, windows and so on, and many more. However, every advance also carries dangers. In the hands of current graduates, such systems can assist subtle building design. The first danger is therefore related to a totally electronic design education. As Craig Peel's tutors pointed out, unless all the benefits of the traditional education can be translated into electronic media, graduates may become increasingly detached from the nature of buildings, and just work within a virtual world. Another fear is that as the tools become easier to use, the ease-of-use that was so welcomed by architects may be their downfall. If systems become so simple that anybody can use them, then anybody will use them. Clients whose primary interest is generating floor space may feel that a BIM system could replace the architect, especially if the system has a standard library of components. Some clients may employ unqualified assistants to press the buttons, or may even undertake the process themselves. This scenario is not encouraging for increased design quality and interesting building forms, which require intuition, spontaneity and exploration, as well as geometrical precision. In the hands of unqualified practitioners, this fear can easily become reality, and the built environment could turn into an ever growing incoherent array of catalogue buildings (Martins et al, 2007).

In the meantime, research is ongoing to develop computational tools which enable design exploration in an intuitive way rather than a rigid parametric way. This may involve some quite sophisticated computer skills, and most architects may be unwilling to become engaged at that level. Nevertheless, it is an attempt by the developers of computer aided design technologies to match their tools to the concepts around which designers wish to develop their skills (Aish, 2005) and to decrease the remoteness between designer and artifact. Research presents the challenges of teaching computational geometry to architectural students, and proposes a multi-level pedagogical scheme introducing associative geometry and parametric modeling/design into architectural design education (Iordanova, 2007; Burry, 2007).

5. Conclusions and observations

This paper has been submitted to foster debate on architectural design education which is incorporating digital technologies. It has raised some concerns about the future use and application of such technologies from the architects' perspective, whilst proposing that the three-dimensional computer modeling technologies emerging for current architectural students offer a truly 3D form of design exploration. Architectural education that encourages the use of digital technologies is beginning to enable students to combine simple rectilinear shapes with sophisticated curved surfaces and non-symmetrical forms, alongside consideration of both the construction and fabrication stages. The divergence in architectural practice at the turn of the 21st century, into those who do boxes and those who do blobs (Mitchell, 2000), may be beginning to end with a generation of students who wish to see the two techniques converging, based on an architectural education which continues to strive for the integration of design and building technology. Digital technologies are now enabling a move from drafting and visualisation to the generation and optimisation of design, and will play an increasingly important role in the design of different building types. One of the central debates is whether these new processes can make a significant contribution to establishing a shared architectural language or whether they will generate an ever growing incoherent array of catalogue buildings.

6. Future research

The authors of this study support the school of thought that those architectural practices who are looking beyond the drafting and visualization solutions offered by digital technology are finding that they are changing work practices in the course of this interaction (Tomassian and Marx, 2006). An emerging generation of students who can understand how digital technologies can be applied to architectural design, as well as anticipate future applications, is important for the development of architectural practice and its restructuring in the digital era. Arguably, some of the biggest opportunities for digital technologies can be found in the design of external spaces, as it is even more difficult to evaluate proposals for external spaces than those for buildings. In addition to modeling the external spaces themselves, technologies are now developing that can simulate a range of environmental conditions generated by the designs.
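The "associative geometry and parametric modeling" referred to above (Aish, 2005; Iordanova, 2007) can be illustrated with a minimal Python sketch. The louvred-bay example and all names in it are assumptions for illustration, not the API of any parametric package: the essential idea is simply that downstream geometry is defined as a function of upstream parameters, so a change to one driving value propagates through the model instead of requiring redrawing.

import math

def facade_louvres(bay_width_m, count, rotation_deg):
    """Associative model of a louvred bay: every louvre's position,
    depth and plan shadow are *derived* from three driving values."""
    spacing = bay_width_m / count
    depth = 0.9 * spacing                      # louvre depth tracks spacing
    theta = math.radians(rotation_deg)
    return [{"x": i * spacing,
             "depth": depth,
             "plan_shadow": depth * math.cos(theta)}
            for i in range(count)]

# Change one upstream value and the dependent geometry updates;
# nothing is redrawn by hand.
for count in (6, 10):
    bay = facade_louvres(bay_width_m=3.6, count=count, rotation_deg=30)
    print(f"{count} louvres: spacing {3.6 / count:.2f} m, "
          f"plan shadow each {bay[0]['plan_shadow']:.2f} m")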
7. Acknowledgements

Undergraduate students Oliver Jones, Christopher Burkitt and Craig Peel, whose projects appear in this paper.

8. References

Giddings, B. and Horne, M. Artists' Impressions in Architectural Design. London: Spon Press, 2002.
Duarte, J.P. Proceedings of the 23rd Conference of the Association for Education and Research in Computer Aided Architectural Design in Europe, Lisbon, 7, 2005.
Petric, J. and Maver, T. "Virtual reality, rapid prototyping and shape grabbing, a new generation of tools in the architecture design studio", Proceedings of the 21st Conference on Education in Computer Aided Architectural Design in Europe, Graz University of Technology, 2003, pp. 13-16.
Goldman, G. Campus Technology, Innovation: three-dimensional electronic, paperless, digital architecture design studio, New Jersey Institute of Technology, 2005.
Asanowicz, A. "Approach to computer implementation in architectural curriculum", Computerised Craftsmanship (eCAADe Conference Proceedings), Paris, 24-26 September 1998, pp. 4-8.
Achten, H., Roelen, W., Boekholt, J., Turksma, A. and Jessurun, J. "Virtual Reality in the Design Studio: the Eindhoven Perspective", Architectural Computing from Turing to 2000. Proceedings of the 17th Conference on Education in Computer Aided Architectural Design in Europe, Liverpool, 15-17 September 1999, pp. 169-177.
Horne, M. and Hamza, N. "Integration of virtual reality within the built environment curriculum", ITcon, Special Issue Architectural Informatics, Vol. 11, May 2006, pp. 311-324, http://www.itcon.org/2006/23
Latham, I., Swenarton, M. and Sampson, P. Sketchbook 12.05.98 Terry Farrell and Partners. London: Right Angle Publishing Ltd., 1998.
Scheer, D.R. "The ten bytes of architecture? Some thoughts on architectural culture in the age of simulation", AECbytes Viewpoint #22, 2006.
Novitski, B.J. "Fun with computer aided modeling clay", Architecture Week, p. T3.1, 2000.
Mitchell, W. "Square ideas", New Scientist, 52, 2000.
Lozano, E. Community Design and the Culture of Cities. Cambridge: Cambridge University Press, 1990.
Sahai, V. "Search for a shared language", RIBA Journal, February 1991, pp. 41-45.
Martins, B., Koutamanis, A. and Brown, A. "Predicting the future from past experience: a reflection on the fundamentals of CAAD", Proceedings of the 24th Conference on Education in Computer Aided Architectural Design in Europe, Frankfurt, Germany, 26-29 September 2007, pp. 523-532.
Aish, R. "From intuition to precision", Proceedings of the 23rd Conference of the Association for Education and Research in Computer Aided Architectural Design in Europe, Lisbon, 2005, pp. 11-12.
Iordanova, I. "Teaching Digital Design Exploration: Form Follows", International Journal of Architectural Computing, vol. 5, no. 4, pp. 685-702, 2007.
Burry, J. "Mindful Spaces: Computational Geometry and the Conceptual Spaces in which Designers Operate", International Journal of Architectural Computing, vol. 5, no. 4, pp. 611-624, 2007.
Tomassian, R. and Marx, J. "Digital practices: digital technology and architecture white paper", ACADIA, 2006.
Embracing the Post-digital

Dominik Holzer, Spatial Information Architecture Laboratory (SIAL), RMIT University, Melbourne, Australia, dominik.holzer@rmit.edu.au

Abstract

This paper discusses ways for designers to reconnect their design methodologies with the process of making. It takes a critical standpoint on the way architects have integrated digital tools and computational processes in their design over the past three to four decades. By scrutinising the support designers can derive from their virtual design-space, it debates how far this may be complemented by sensory information-feedback from the physical design-space. A studio-based design project is used to illustrate how students have approached this issue to address aspects of building performance in a post-digital way. Moving between digital and physical models without difficulty, the students were able to study the effects of geometrical changes on sustainability performance in real time.

1. Introduction

Decades after its initial use in the media and its adoption in everyday life, the description ‘digital’ still maintains a notion of newness, correctness or even ‘betterness’. The idea that every little piece of information we use, the medium that carries it and even the tools that process it can be based on a binary logic of ones and zeros adds a notion of order and control. In addition, we associate ‘digital’ with speed, precision and unlimited possibilities for sharing standardised information with others. For architects, there has been a substantial transition in dealing with the ‘digital’. Whilst they first embraced computational tools to digitally reproduce their traditional (manual) work-methodology in the early CAD days, architects have since engaged in a more philosophical discourse about the digital. This has consequently led to a move away from materiality and a deeper engagement with the virtual to address the challenges of the information age [1]. As a by-product of this transition, which started in the mid 1990s and was assisted by modelling software borrowed from ‘digital’ animation, architects began to experiment with shapes and structures that did not follow conventional rules of materiality and physics. This movement was soon proclaimed as ‘blob architecture’, and it was later described as a ‘happy accident’ [2]. In retrospect, few projects from this era have been realised. The main reason for this may be the level of complexity of these projects, combined with a lack of knowledge by architects of how their non-standard designs would perform and how to actually build them. At the same time it illustrates the fixation of many architects on the visual aspects of design and their appetite for the spectacular.

In parallel to the development mentioned above, we have seen a drastic change in the way designers and design-consultants work. The hegemony of paper-based design and physical model-making has gradually been taken over by the possibilities offered by digital tools for drafting, representation and simulation. These developments have certainly brought benefits to designers and their consultants in augmenting their capabilities. Over the past decades, engineers have become confident in the use of digital processes to analyse and test specific aspects of building performance [3]. Architects, on the other hand, have become more dependent on engineers, as their own digital modelling tools do not allow them to derive basic feedback about the performance of their design. This, and the lack of sensory information-feedback from haptic investigation with physical models, has led to architects increasingly isolating themselves from the process of making.

2. Design heuristics

Back in the early 1970s, when Negroponte first described the use of computational tools in the arts, he noted that the creative thinking of a designer can be affected by the ‘machine’, and he explored how human-machine interaction can assist in a plethora of decision-making processes in a design environment [4]. As a consequence of his observations, Negroponte urged designers to draw a distinction between heuristics of form and heuristics of method, and to find ways of taking advantage of digital technology to pursue both of these. Whilst heuristics of form relates more to an investigation of space, geometry and structural systems, heuristics of method implies a far deeper investigation of how creative design processes unfold, how they can be made explicit, and how they can be shared with others. In this ‘digital’ age, architects seem rather to investigate heuristics of form through digital means to assist their drawing. This is taken to the point where the morphological explorations of some architects push the limit of geometrical experimentation for the purpose of testing ‘what is possible’. Some investigations into the use of digital processes as form-generators have had positive side-effects on the development of the architectural profession. By introducing aspects of ‘time’ to their design methodologies, architects and design-researchers have increasingly become involved in ‘thinking in processes’ and the exploration of dynamic systems. This has led to a diverse design culture which adopts techniques and methods of form finding from various backgrounds through the support of digital processing and simulation [5]. The inclusion of computational tools for morphogenesis has allowed designers to learn to let go of total control over their design process and to allow the computer to surprise them with unexpected results. In addition, the more playful use of design software has enabled us to generate a plethora of design variations for comparison and selection. The designer's perception of the end-result of his/her investigation has shifted: we are no longer pursuing the idea of producing ‘the perfect design’, but are now able, with little or no extra effort, to produce a series of possible solutions to choose from.

Notes
1. Perella, S. "Hypersurface architecture and the question of interface", Interfacing Realities, V2 Publishers, Rotterdam, p. Z23, 1995.
2. Silver, M. "Towards a Programming Culture in Design Arts", AD Programming Cultures: Architecture, Art and Science in the Age of Software Development, John Wiley & Sons, London, pp. 5-11, 2006.
3. Coenders, J.L. and Wagemans, L.A.G. "Structural Design Tools: The next step in modelling for structural design", Responding to Tomorrow's Challenges in Structural Engineering, Volume 92, pp. 306-307, Budapest, 2006.
4. Negroponte, N. The Architecture Machine, MIT Press, Boston, Massachusetts and London, England, 1970.
5. Kolarevic, B. "Digital Architectures", Eternity, Infinity and Virtuality: Proceedings of the 22nd Annual Conference of the Association for Computer-Aided Design in Architecture, Washington D.C., 19-22 October 2000.
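Holzer's point above, that designers now cheaply produce "a series of possible solutions to choose from" rather than one perfect design, can be sketched in a few lines of Python. The scoring function is a deliberately crude assumption made for illustration, standing in for whatever performance measure a designer actually cares about:

import random

random.seed(1)  # reproducible illustration

def score(variant):
    """Toy performance measure: reward floor area, penalise envelope.
    A real project would substitute daylight, structure, cost, etc."""
    w, d, h = variant
    floor_area = w * d
    envelope = 2 * h * (w + d) + w * d
    return floor_area / envelope

# Generate a plethora of variations...
variants = [(random.uniform(6, 18), random.uniform(6, 18),
             random.uniform(3, 9)) for _ in range(200)]

# ...then compare and select, keeping several candidates rather than
# committing prematurely to a single 'perfect' answer.
shortlist = sorted(variants, key=score, reverse=True)[:3]
for w, d, h in shortlist:
    print(f"w={w:.1f} d={d:.1f} h={h:.1f} -> score {score((w, d, h)):.3f}")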
Why is this such an important step, and why do we need to progress beyond it? The way we have previously applied digital processes for our design in CAD has rather alienated us from our intuitive design-methods. Most of the current standard design-tools are of a prescriptive nature as they ask users to perform tasks in a certain way. We are now in search for tools that will allow us to interact more intuitively with our digital and non-digital design-space. After interviewing a multitude of expert designers over three decades, Lawson asserts that computational tools can only become real design partners in our profession if they link into cognitive processes that support our creative design thinking6. Central to this is the ability of juggling different ideas simultaneously and to confidently deal with uncertainty. Polanyi argues that emotional affection is often crucial to the development of hunches and informed guesses in creative acts of discovery. In this context he describes in more detail how acts of discovery involving conceptual and sensory information lead to the build-up of what he calls ‘tacit’-knowledge that is highly personal7. Lawson has picked up this argument and he has researched its relevance in design practice. 3.Defining the ‘Post-digital’ Having to work with a computer tool that does not represent knowledge the way you do may cause considerable interference in your thinking8 We can follow from the above statement that the designer’s interaction with computational software is a highly personal matter. If we individually learn to understand and develop the rules of engagement between our own design thinking and the support we derive from digital processes we can progress the status quo to develop our own distinctive design methodology and foster it by digital means. In this process we are neither stigmatising the use of one nor the other, but we are exploring our personal boundaries for applying them in synergy as a matter of course. This is a post-digital approach. 4.Studio-based investigation In order to test the above theory, the author has conducted a design-studio at RMIT University where students were asked to explore the ‘aesthetics of performance’ and to investigate the various implications of using both analogue and digital design methods to achieve this goal. During the first half of the semester, architecture students developed their personal design methodology on the basis of simple small-scale projects. The purpose of these tasks was for students to gain ‘rule-of thumb feedback’ about rudimental design questions related to structural, acoustic, and environmental performance. The students were asked to advance the aesthetic and formal aspects of their individual projects with each of the above performance criteria in mind. By doing so, the focus did not lie in the formal definition of the end result, but on the process of negotiating and integrating performative aspects of building design in a concurrent way. Each 6 7 8
see Lawson, B. How Designers Think Elsevier, Architectural Press, Oxford, 2006 see Polanyi, M. The Tacit Dimension, Doubleday, Garden City, New York, 1967 see Lawson, B. What Designers Know , Elsevier Architectural Press, Oxford, 2004
Each student had to find his/her own balance in applying digital 3D design tools, scripting, physical modeling, or a combination of the three to achieve this goal. Nearly all students chose to address this task mainly through physical model making. When asked for a reason, they responded that they were unable to find adequate digital tools that would assist them in testing performance quickly and intuitively enough to act as a design driver for generating their simple structures and shapes (figure 1).
Figure 1. Simple structural and acoustic models

In the second half of the semester, students worked on a building-related sustainability project which included the generation of sun-shading options for achieving maximum daylight entry with minimum solar gain for a façade. Once the students were aware of the basic implications the various shading options bring to bear, they were encouraged to start designing with shading performance in mind. This implies a step away from understanding shading as a technical add-on to a façade, towards creating shading options which strongly influence the appearance of a building. At the beginning of the exercise most students again relied on their physical model-making skills to gain tacit knowledge about the relation between shading options, sun angles and the shadows that were cast. They built basic models from cardboard and placed a spot-light to simulate correct sun angles. While progressing from basic to more elaborate models, they found their design method to be limited by the time constraints of building new physical models and the lack of precise performance feedback from them. The observation could be made that once students had reached a point where they wanted to explore more complex, non-repetitive shading options, or shading devices for irregularly shaped buildings, they were willing to extend their investigation into the virtual world. In contrast to the earlier investigation with structural and acoustic models, they did find digital tools for testing environmental performance (in particular Ecotect™) that allowed them to intuitively connect their design thinking to knowledge representation. In many cases this occurred by first re-creating their latest physical model computationally, in order to compare the virtual model to the physical one. This was undertaken to gain confidence in the accuracy of the tool they were using and in their capability to simulate a real-life scenario computationally. Once this was achieved, students continued their design process by creating several versions of their models and testing their shading performance and material usage in real time. This way they could extract valuable information about the effects of geometrical alterations to optimize building performance in an iterative process (figure 2).
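The kind of iterative, performance-driven versioning the students carried out can be suggested with a minimal Python sketch. This is an illustration only, not the studio's actual workflow or any simulation software's API: the south-façade geometry, the solar altitudes and the candidate overhang depths are all assumed values, and the shading rule is the elementary one that a horizontal overhang of depth d casts a shadow of depth d·tan(altitude) down the façade when the sun lies in the façade's vertical plane.

import math

def shaded_fraction(depth, window_height, sun_altitude_deg):
    # Fraction of a south-facing window shaded by a horizontal
    # overhang, sun assumed in the facade's vertical plane:
    # shadow cast down the facade = depth * tan(altitude).
    shadow = depth * math.tan(math.radians(sun_altitude_deg))
    return min(1.0, shadow / window_height)

WINDOW_H = 2.0      # window height in metres (assumed)
SUMMER_ALT = 70.0   # midday summer solar altitude (assumed)
WINTER_ALT = 25.0   # midday winter solar altitude (assumed)

# Version the geometry as the students did: vary the overhang depth,
# weigh summer shading against winter daylight entry, and note the
# material used (proxied here by the overhang depth itself).
for depth in (0.2, 0.4, 0.6, 0.8, 1.0, 1.2):
    summer = shaded_fraction(depth, WINDOW_H, SUMMER_ALT)
    winter = shaded_fraction(depth, WINDOW_H, WINTER_ALT)
    print(f"depth {depth:.1f} m: summer {summer:.0%} shaded, "
          f"winter {winter:.0%} shaded")

Each run gives the same kind of immediate, comparable feedback per geometry version that the students valued in the digital tools.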
Figure 2. Sustainability project, digital exploration

The uptake of digital technology as a design driver varied from student to student, as did the goals that could be achieved by it. Whereas some students used their digitally augmented models to have more options to choose from, others used them to refine one specific design solution, and others again used them to extract bills of quantities in order to compare material usage to shading efficiency. In all cases, the immediacy of gaining feedback from daylight analysis under varying conditions was of greatest importance in advancing the design. As a final step, students used rapid prototyping to produce a physical model once the digital investigation had given them satisfying results. The step back from the digital to a physical model appeared to be necessary to assure them of the validity of their exercise. The physical model still seemed to reveal aspects that might otherwise have been overlooked in its virtual counterpart. When asked about the reason for this, students argued that their bodily movement around the model, while simultaneously analysing their design as a whole, offered them a better understanding of the outcome.

5. Conclusions

What mattered in developing a project in a post-digital way was the following: students could instantly comprehend the sustainability task, produce hands-on physical models with simple materials, and address performance issues by positioning physical spot-lights, and then managed to reproduce the models virtually and to run basic daylight-analysis software. Through instant versioning and by flipping back and forth between the analogue and the digital models they advanced their design with constant performance checks, and finally they were able to compare the effects of geometrical changes on the building performance in real time (figure 3).
Figure 3. Sustainability project: transition physical-digital-physical

Acknowledgement: Work from studio participants: Chris Seaman, Yuhong Wrong, Kate Milligan, and Allison Claney
Hybrid Studio Matters
Ethnomethodological Documentary of a Tutorial

Anastasia Karandinou
University of Edinburgh, Scotland
A.Karandinou@sms.ed.ac.uk

Leonidas Koutsoumpos
University of Edinburgh, Scotland
L.Koutsoumpos@sms.ed.ac.uk

Richard Coyne
University of Edinburgh, Scotland
Richard.Coyne@ed.ac.uk

Abstract
This paper looks into the electronically augmented, or 'hybrid', contemporary environment through the spatial and temporal thresholds or 'seams' that it encompasses. Electronically augmented environments have been studied increasingly within the past few years. The question of how architects respond to the new spatial conditions, how they interpret and design space, is a major emerging issue. Within these broad questions, we conducted an ethnomethodological analysis of a particular example environment, the architectural design studio, through the documentation and analysis of an episode in an architectural tutorial. The analysis of this case study is based upon the seams, thresholds or ruptures that occur between different media. We argue that the shift or transition from one medium to another can be smooth and unnoticed, whereas in other instances it shifts completely the centre of attention, the flow of the tutorial, or the perception of the means (and other elements) engaged. The transitions occurring within the recorded tutorial are studied in relation to the notions of engagement, immediacy and continuity. We consider that these three notions bring forth the complexities, conflicts and richness of the hybrid environment that the tutorial recording reveals.

1. The immaterial, the fusion and the seam
As argued in many contemporary texts, the space we inhabit is increasingly constituted by non-visible or non-material elements, some of which are closely related to new technologies. By 'technologies' we mean both those that emerge within the conventional construction of a building (either as part of its infrastructure or as electromagnetic waves) and also the portable, mobile, temporal technologies, such as small functional devices. Both kinds of new technologies create spatial qualities and affect the way we perceive spaces and their interconnections. The electromagnetic field, for example, although not visible, does define territories, connections, thresholds and edges. Jonathan Hill refers to this condition as 'electromagnetic weather'1, and he looks particularly into the way that it defines the place and thresholds of the home. The electromagnetic weather's relation to 'home-ness' is mainly considered in two ways: the electromagnetic weather enters the physical home (through electronic devices) and shifts the way the home is connected to the 'outside' or to the 'unknown'; the electromagnetic weather, though, can also re-create a 'homely' condition elsewhere, since the devices that control the electromagnetic 'thresholds' of the home are portable.2 In the same way that the electronic technologies create home-ness, they can recreate other spatial qualities too, such as places for work, teaching, learning, communicating, etc.
William Mitchell looks into contemporary 'hybrid' environments by opposing them to the 'virtual reality fascination' of the recent past. Mitchell argues3 that the debates about new technologies, virtual spaces and their potential or limitations are being replaced by discussions and research about the fusion of the new media with their physical environment. One of the examples that he gives is that of a young researcher in a library, surrounded by books, connected to the Internet through his portable computer, taking pictures of interesting pages with a digital camera, and getting guidance through his mobile phone from his supervisor or colleagues. As Mitchell argues, "[t]he challenge… is to start thinking like creative fusion chefs – to create spaces that satisfy important human needs in effective new ways, and that surprise and delight us through digitally enabled combinations of the unexpected."4 This quest for fusion uses a language of continuity, viscosity, or flow, and its 'collapse' implies a condition which wishes to conflate any possible oppositions, such as the observer with the observed, the human with the machine, the subject with the object.5 This conflation transforms previously discrete dualities into smooth ambiguity, a blurring of meanings. Of course, this conflation also becomes responsible for rupturing and unsettling the interface; in the manner of a collapse, it becomes a major contributor to rupture and distress.6 In contrast to this ambition towards the smooth, there are numerous examples of tools and technologies whose use seems to rely on the obviousness of the seam, a conspicuous and distressed relationship between performer and instrument. The ideals of 'smart architecture' and tangible computing, in blending and merging, can be distressed by resistance: an architecture that does not only facilitate but fights and provokes in an articulation of space by event.7 Within this context, our paper explores the seams or transitions between digital and physical media within an ordinary architectural tutorial, and the issues that the observation of these transitions brings forth.

2. Methodology
In order to examine the way that the members of an educational activity carry out their everyday tasks, we recorded an all-day tutorial of the 1st year architectural design studio, functioning as participant-observers. A selected segment of the recorded video was then studied within an interdisciplinary focus group that includes researchers from Human Geography, Computer Science, Sociology and Architecture. In order to represent the footage in textual form, it has been transcribed through the techniques of conversation analysis.8 We follow Garfinkel's consideration of ethnomethodology as the study of the methods that people use in accomplishing their mundane everyday activities, "by paying to the most commonplace activities of daily life the attention usually accorded extraordinary events, seek to learn about them as phenomena in their own right."9 In this sense we are not interested in an extreme or extraordinary hi-tech studio situation, but in the mundane level of an ordinary design studio class (here the first year design studio of the University of Edinburgh, Scotland). Ethnomethodological studies have been emphatically distinctive in giving an analysis 'from within' of what is actually going on in social activities.
By focusing on the micro-practices of people's lives they inform us about details that our habitual seeing of the world actually overlooks.10 Methodologically, the use of a concrete example is not a mere application of existing theory but, as emphasized by Flyvbjerg, stands as a fundamental generator of knowledge.11 By focusing on the minimal level of what people actually do when they perform everyday activities, it is possible to go beyond the stereotypical view that considers technology as a
'mere' tool outside the substance of the project itself. In our case, new technologies are part of a procedure and of a discussion involving all aspects of being, doing and interconnecting things.

3. The situation of the tutorial
Jonathan (tutor) and Mark (student) are having a tutorial about the design of a small house. They are sitting in front of Mark's desk: on the desktop there is a drawing board, and on top of it an open notebook with lots of sketches and notes; on it, a technical pen, a pair of eyeglasses and an open laptop with a 3D model. The tutorial is nearing its end and Jonathan makes specific comments on how Mark should continue working. Jonathan rotates the 3D model while talking, and Mark takes notes of Jonathan's comments in his notebook:
Figure 1. The scene

Tutor (Jonathan): you need to address the rest of the site now too (1) and, yea, put your service ((M starts writing something on his notebook)) ((J gestures generally over the site in front of the screen)) ((J puts his hand on the laptop's trackpad and starts rotating the 3D model)) spaces in:: (here), em.. (yea), and that's what you do on that wall, I guess you ((J zooms in the model)) ((M writes on his notebook)) put your service spaces in here::: the bathrooms, the toilets, the storage rooms, whatever else you need back here. ((J zooms even more and looks inside the room))

Student (Mark): yea:: I was thinking also:: the traffic would probably be quite a bit of a ((M stops writing)) (reprobation) on that back wall, so not that suitable for bedrooms. ((M points towards the screen)) ((J, who has zoomed too close to the building, gets stuck for a couple of seconds in between the two floors))

T: yea, = ((zooming in and out in the computer model)) = (3) that's useful tool, I think that I would (…) probably do a quick physical model

S: yes, it might be useful..

T: and then em..

S: I can probably make, e:: you were saying to do wax model=

T: =I think you (should) just do massy models right now=

S: =do you recommend maybe considering doing (.) some cast models

T: I don't thi:nk so (.) I don't think you need to (.) cause that sounds like it's going take lots of time and effort to me (.) I would just do it in wooden blocks (.) and and then (.) [start working on your paths] ((J takes his hands off the computer and sits back))

S: [Chop some woods from the workshop]

T: in wood and then: and paths like this that go around it (.) you know (1) because you're going to spend all day sort of working on this thi:ng and trying to reso:lve it, I think (.5) ((J points towards the computer screen))

S: = it is kind of more manageable idea (probably) (…) so probably start with (this) (…)

T: I think you le:a:ve this now and you'll come back to this later o:n, but, em: ((J points towards the computer screen)) ((M writes on his notebook)) you leave that and you see what you (need to)

S: to change (…) ((J nods his head, affirming))

Figure 2. Interaction with the physical model.

T: to clarify a little bit, and the:::n (.) cause there is an imme::diacy, right?, like (1.5) ((J turns to his left and takes a small carton model)) = when:: there is the front area you going to mo:ve and shift things= ((J gestures with the model in hand)) ((J puts the carton model back on his left)) I like working on the computer as we:ll actually (.) I can find it in visualizing pretty well e ((J points towards the computer screen))

S: (…) that's something I was trying to (…) (in hand) and then (…) study the structure (…) especially in plans and sections, so (…) that's (why) (…)

T: the other thing you've got to do is (.) speaking of sections (.) is draw sections, a::::nd ((J flips to the previous page of Mark's notebook, where there are some sketches of sections)) and re:al sections, not just (0.5) you kind of need to move beyond this = ((J points towards the sketchy sections in the notebook)) = and get to (.75) heights (.5) and actually to sca:le=

Figure 3. Showing the sections in the notebook.

S: but I could probably (.) could I not do it fro::m (.) [here] ((M flips back the page to the one that was open before)) ((M points towards the computer model))

T: [from here] ((J points towards the computer model))

Figure 4. Cutting a section on the 3D model.

Figure 5. Close-up of the section.

T: yeah I mean you can (.) you can cut sections (.) you can cut sections and then they can ((Mark puts his hand on the computer's trackpad)) ((J makes a gesture like a cut)) print them o:ff=

S: =in sketch form (.) and then [I can] ((Mark clicks on the section tool at the toolbar))

T: [yeah] (.) print them o:ff and then sketch on top of them maybe but you should have some re:al (0.5) ((Mark clicks over a surface that automatically creates a section of the building))

S: so, probably work from here and (…) ((Mark points at the computer screen, and then takes his hands from the computer)) ((J nods, affirming))
4. Inquiring into the situation
In the above situation we see five major seams in the transitions between different media:
1) the first occurs at the beginning of the dialogue, where Jonathan puts his hands on the trackpad and starts to navigate the model;
2) the second occurs when Jonathan leaves the computer and sits back while describing the new model that Mark should make;
3) the third is when Jonathan reaches for the carton model and leaves it behind;
4) the fourth is when Jonathan flips the page of Mark's notebook and comments on his sketchy sections;
5) and the fifth is when Mark puts his hand on the trackpad and makes the section of the 3D model.
There are further minor seams that we discuss later on. Because of the limited space of this paper, instead of dealing with each of the five seams separately, we organize our comments under three themes that summarize our inquiry: engagement, immediacy and continuity.

4.1 Engagement
A hot medium is one that extends one single sense in 'high definition.' High definition is the state of being well filled with data. … Hot media are, therefore, low in participation, and cool media are high in participation or completion by the audience.12
The five abovementioned seams happen between three different media: the computer, the physical model and the notebook. During the situation there appears to be a cycle that moves from the computer to the physical model, to the notepad, and back to the computer again. If we accept McLuhan's distinction between hot and cool media, we could argue that the computer is the 'hottest' medium of the three. The glossy high-definition screen is packed with visual information that is deliberately created to attract the eye, with photo-realistic textures and colors. On the other hand, the physical model is the 'coolest' medium, since it is arguably a more limited and abstract representational medium, allowing a plethora of interpretations. The reason for mentioning this distinction is not the mere categorization of the media, but to consider the way in which Jonathan and Mark become engaged with them to different degrees. According to McLuhan, hot media allow less participation, since the density of their information leads to a passive stance.13 In the case of the computer, the shiny screen seems to dominate the setup of the scene of the tutorial. The engagement of the parties, though, is asymmetrical, since only one of them leads the navigation within the model and controls what appears on the screen. Since the view of the virtual model is actually flat (a sequence of perspective images of an imaginary 3D space), the one who has control of the trackpad (or the mouse) leads both observers through the virtual space. Here we can, again, make a distinction between the medium as it is experienced by the one who navigates and by the one who watches; for the one who watches it is even more passive, since he cannot navigate virtually exactly as he would wish to. In our case study, the control of the navigation passes back and forth between the student and the tutor. The control of this navigation is quite closely related to the sequence of issues emerging within the discussion; in some sense, and to some extent, the one who controls the trackpad controls the flow of the discussion too. A characteristic shift (or seam) within the tutorial dialogue related to this is the moment when the student replies to the inquiry of the tutor (about making a drawing of a section) by 'cutting' the virtual model and thus showing its section. At that moment the student takes control of the trackpad, offers a new input and navigates the ensuing dialogue. We could argue, thus, that the computer, and particularly immersive virtual environments, are 'hotter' or 'cooler' according to whether one is actively engaged in them or is a passive receiver of their alluring images. Here arises also the issue of 'enjoyment': the tutor, while suggesting to the student that he try something out with a physical model, says (as if admitting something not generally obvious or accepted) that he actually enjoys working with computer models himself too. Although it is traditionally believed that physical manual work provides the enjoyment of making, in our dialogue the opposite is also evident: the computer model is not only a tool for dis-engaged work, but an immersive tool like many others. At this stage the discussion develops around the medium itself, whether physical or electronic, and not the space. The two participants have moved back, facing the computer as a tool 'present at hand'.
The medium, though, still affects the discussion: although they are no longer moving within the virtual space, the one who controls the computer seems to dominate the dialogue at that moment.
4.2. Immediacy
A weave always weaves in several directions, several meanings and beyond meaning. A network-stratagem, and thus a singular device. Which? A dissociated series of 'points', red points, constitutes the grid, spacing a multiplicity of matrices of generative cells
whose transformations will never let themselves be calmed, stabilized, installed, identified in a continuum. Divisible themselves, these cells also point towards instants of rupture, discontinuity, disjunction.14

The last seams, towards the end of the discussion, which negotiate how the section would be produced, reveal interesting points about how the two parties construct notions of immediacy. When Jonathan pointed out that Mark should make some sections, he asked for a 'real' section, asking for a shift from the sketchy sections towards 'proper' drawings with more accurate heights and 'actual' scale. Mark's response was to flip the page back to the original drawing, an action anticipating that he was going to take control of the situation. He asks whether he could do this from the computer, a question accompanied by a definitive gesture towards it. Although Jonathan understood immediately what Mark wanted to say, since, as we already mentioned, they repeated the same phrase 'from here', Mark took control of the trackpad and executed the section 'right there' in 'real time'. This seam makes explicit a suggestion about a hybrid section that used the computer-generated section (cut out of the 3D model), which was then elaborated further by hand or through some other means. Nevertheless, this section was not a smooth transition between the digital and the manual but rather a stratagem of discontinuity, rupture and overlap of the two media, revealed in the phrase 'print them off and sketch on top of them'. Furthermore, another issue emerges here: arguably, the section is generally made to present an abstract aspect of space that cannot be visualized otherwise. Here, though, Jonathan and Mark undertook a converse operation: since the 3D model already existed, they rotated it to see it from all sides or from within. However, Jonathan still asked for a section and Mark did offer it to him, to address the need for a different kind of projection of the space. A question raised here is whether the tutor asks for a section because there is not enough information and detail presented through the computer 3D model, or because there are other aspects of space that a photo-real 3D image cannot provide but a conventional section can. At the same time, when Jonathan asked for the sections, we saw that he did not take hold of the trackpad and cut a section of the model himself. This fact shows, first of all, that sometimes the tutor may not be familiar with the specific software that the student is using, and is therefore unable to perform operations that in a physical model would be obvious: cut bits off, or add a piece of cardboard to make a point, as he did while describing how Mark should make the model with wooden blocks and the paths from card fragments lying on the desk. Secondly, it shows that Jonathan was less comfortable performing radical changes on Mark's digital model (a common practice in physical media). Even if such a change were attempted, there is always a variety of risk-reducing strategies deploying commands such as 'Save As' or 'Go Back'.
This is also related to issues of authority, but also to the notion of 'damage' and the value of the artifact: the student's model is sometimes not only a means or a tool for investigating and designing the space, but also a valuable object, because of the time and effort spent on it; because of the investment of care and engagement; and because it is made by the student and has a meaning that is immediate to him.
4.3. Continuity
What interests us in operations of striation and smoothing are precisely the passages or combinations: how the forces at work within space continually striate it, and how in the course of its striation it develops other forces and emits new smooth spaces.15
The shift from the space of the designed house to the space of the tutorial is evident at the stage of the dialogue where the tutor starts, for the first time, talking about the medium itself, saying "It is a useful tool." The shift from the 'virtual' space to the actual physical space of the studio seems to occur in several stages, two of which have special significance. The first is when the discussion (verbally) moves from one environment to the other. The second is when the hand movement accomplishes a similar shift some seconds later. While the tutor was, in some sense, situated within the designed house, navigating within it and exploring its spatial qualities (in order to understand it and comment on its qualities to the student), he started commenting about the medium itself, with remarks about the usefulness of the tool. At that particular moment we could argue that he moved back to the physical environment of the studio, whereas previously he was immersed in and engaged with the 'virtual' space of the house. At the moment that he started discussing the medium itself and its several aspects in relation to the designing process (such as the time needed to make it, the degree of precision necessary at each stage of the student's investigation, etc.), he turned the software and the computer into an object 'present at hand' lying in front of them.13 Nevertheless, by following the movement of the tutor's hands, we could argue that this transition actually occurs later on: when he stopped navigating within the virtual space, left the trackpad and sat back, and also started indicating with his hands the computer as an object itself. Although he first made the shift by starting to comment on the medium, he was still half immersed within the virtual space, still navigating within the house. If we were not listening to the dialogue we would still think that he was ontologically immersed within the designed house, imagining himself walking within it. It is not a simple matter to identify the transition from the virtual computer model to the physical space, and there are moments when we are not fully here or there, not fully immersed within the physical or the virtual environment. Although the transition is perhaps obvious at the key moments described in this analysis, such thresholds of transition also occur in more subtle ways. This is frequently revealed through the way language is used, regarding oneself either as fully immersed or as fully aware of the environment in which one is situated. One could argue that we inhabit such transitions or seams, since these are practically always present, in different degrees of consciousness and at different intensities. Sometimes a medium attracts our attention and we treat it as a distant object of reflection. At other times we are immersed in the medium and are oblivious to it.

5. Concluding remarks
In this paper we argued for the significance of the seam, against the presupposition that considers hybrid environments as fused spaces characterized by smooth homogeneity. One approach to resisting the imperative towards seamless interfaces is to counteract it with the concept of perturbation: a time-based disturbance of the surface that may tear, rupture and otherwise fracture it. The continuity implied in designing spatial form can be troubled, resisted and ameliorated by an articulation of time.16 Such temporal seams we found to be present in the study of the tutorial between Jonathan and Mark in the architectural design studio.
Although the 'seams' between the experiences of different media can often remain unnoticed, they can also bring forth to our perception the nature of the medium itself. The
transition from a medium as a tool 'ready to hand' to one 'present at hand' (and the other way round) brings our attention to the qualities of the medium and also to the relation between the medium and the substance itself: the relation between the medium as a medium and the medium as a 'message'. The transition between media, we argue, should not be regarded as a smooth sequence-shot in a film. Instead of conveying a scene as one continuous time sequence, filmmakers use the element of temporal disruption, as in montage, of switching from one spatial location to another. These cuts may nevertheless convey an overall sense of smooth temporal flow. In the situation that we studied, our interest lies with the seams between the different sequences. In this sense, we promote the enculturation not of 'fusion chefs' but rather of film editors who focus on the montage and on the way that the cuts or seams articulate a whole image, without necessarily conflating one scene with another.17
7. Endnotes
1 Jonathan Hill, Immaterial Architecture (London: Routledge, 2006). p.23
2 Portable computers and mobile phones, for example, can re-create a homely feeling or atmosphere elsewhere, far from the physical home, by connecting the user to his familiar faces, activities, etc. The 'homely' here is not based upon the physical space but rather upon the links, connections, possibilities of communication and electronic objects or databases available to the user.
3 in his essay "After the Revolution: Instruments of Displacement"
4 William Mitchell, "After the Revolution: Instruments of Displacement," in Disappearing Architecture: From Real to Virtual to Quantum, ed. G. Flachbart and P. Weibel (Blackwell Synergy, 2005).
5 Pedro Rebelo and Richard Coyne, "Resisting the Smooth: Time-Based Interactive Media in the Production of Distressed Space" (paper presented at Digital Design: 21st eCAADe Conference, Graz, Austria, 2003).
6 Richard Coyne, Pedro Rebelo, and Martin Parker, "Resisting the Seamless Interface," International Journal of Architectural Computing 4, no. 2 (2004).
7 Bernard Tschumi, Event-Cities 3: Concept vs. Context vs. Content (Cambridge, Mass.; London: MIT Press, 2004).
8 Here we have used the conversation analysis transcript techniques offered by Emanuel Schegloff in his online transcription project, http://www.sscnet.ucla.edu/soc/faculty/schegloff/TranscriptionProject/index.html [accessed 23.03.2007]
9 Harold Garfinkel, Studies in Ethnomethodology (Englewood Cliffs, N.J.: Prentice-Hall, 1967). p.1 [my italics]
10 This does not mean that ethnomethodology ignores the wider context (social, political, economical, material, or immaterial) of the micro-practice. On the contrary, it suggests that the context, whether material or immaterial, does not merely surround the activities: the context and the activity co-exist. Michael Lynch, Scientific Practice and Ordinary Action: Ethnomethodology and Social Studies of Science (Cambridge: Cambridge University Press, 1993). p. 29
11 Bent Flyvbjerg, Making Social Science Matter: Why Social Inquiry Fails and How It Can Succeed Again (Cambridge: Cambridge University Press, 2001). Flyvbjerg also mentions the role of ethnomethodology as an "especially interesting" view within the wider hermeneutic-phenomenological strand of sociology.
12 Marshall McLuhan, Understanding Media: The Extensions of Man (Oxon: Routledge, 2001/1964). p. 24-25
13 Ibid. p. 25
14 Jacques Derrida, "Where the Desire May Live (Interview)," in Rethinking Architecture, ed. Neil Leach (London: Routledge, 1997). p. 332
15 G. Deleuze, A Thousand Plateaus: Capitalism and Schizophrenia (Athlone Press, 1987). p. 551
16 Coyne, Rebelo, and Parker, "Resisting the Seamless Interface."
D.I.G.I.T.A.L. Defining Internal Goals In The Architectural Landscape

Paolo Fiamma
Civil Engineering Department - University of Pisa, Italy
paolo.fiamma@ing.unipi.it

Abstract
The digital factor is a challenge to regain the meaning of Design in Architecture, in addition to evaluating its possible extensions and transformations. Digital could be an answer to the actual needs of architectural design: Architecture should be digital because digital is profitable. Digital could help us understand architectural design as "verified conception" through the concept of computational modelling: Architecture should be digital because digital works with, and not against, design tradition. Digital could enhance the didactic dimension, which is really important for students: Architecture should be digital because digital is current. Digital offers cognitive and ontological value for the design and new skills for the designer: Architecture should be digital because digital is a catalyst of the new and of creativity. Digital reshapes constructed architecture, introducing new aesthetic paradigms: Architecture should be digital because digital is the mental landscape that serves as reference point for the current theoretical phase of Architecture… There are several answers to the question "Why should Architecture be digital?", but without rigor and a critical dimension there can be no digital benefit in the architectural landscape, and the main risk is that "representation" prevails over "the fact".

1. Introduction
The digital factor is a challenge to regain the meaning of Design in architecture, in addition to evaluating its possible extensions and transformations. To sum up, we could say that the value of the Digital for Designing is relevant to the procedural iter (methodological aspect), to the dynamics of knowledge (cognitive aspect) and to the concept (ontological aspect). Some preconceptions block the education to this value in architecture: 1) Considering the Digital just a "tool", thereby excluding it from direct relevance and preventing it from having real effects in architectural theory and practice. This misunderstanding probably comes from the early approaches to computer technology as an assistive technology that would enhance architectural designers. The scope of the early engagements was captured in the unhappy phrase "computer-aided architectural design". Today, the concept of "totally computer-mediated architectural design" has a totally different significance, with different implications and potentialities (Mahalingam G. 2003). 2) Making the contribution of the Digital coincide only with the aesthetic component of a specific type of building (these shapes are currently undergoing a lot of criticism). Critical training, starting in the didactic sphere, is therefore of vital importance. The digital "resources" for design have always affected our ways of thinking, and designers studying design techniques have also disclosed a unique view of reality and, at the same time, a "way of being" towards the meaning of design itself (Heinsen G.L. 1995). For many years now, computational models have been evolving: from simple tools for the implementation of working methods to agents in the genesis of a potential transformation of design conception and practice.
Critics often highlight that today we are witnessing the birth of a new International Style (typical of a so-called "digital" – or "transgenic", "blobby", "liquid" – architecture), a limit of which is especially its cultural position with respect to: the specific features of a place; the permanence of shapes; the search for tradition and memory; the value of history (the ancient "Genius Loci"); and the theoretical and practical approach to the "Triade Vitruviana" – Firmitas, Utilitas, Venustas – the basis of the meaning of the architectural typologies.
2. Methodological value
At first glance, it would be possible to assert that Architecture should be digital because everything already is (and even more will be) digital… It is a problem of "citizenship": belonging, language and communication. Digital for Architecture as, and after, Digital for the World. (Or, from a different point of view: could "non-digital Architecture" survive in a digital world?) But this would be, obviously, an "external" reason. In reality there are several "internal" reasons. In our times, architectural design has become so complex that traditional ways of managing the design process are no longer sufficient. Design should be not merely a process, but a co-evolution of efforts and events in various places and times, both synchronous and asynchronous (Anders P., Jabi W. 2003); we know, instead, that the design process is – de facto – fragmentary, owing to the numerous "actors", phases, environmental contexts and cultural backgrounds (Wix J. 1997; Eastman C., Siabiris 1995; Jacobsen, oth. 1997). This leads to considering Design as a set of isolated and sequential compartments. All this introduces inefficiencies of cost and/or timing, or leads to buildings that do not attain the required performances and, locally, produces buildings that after some time are unliveable (Carrara G., Kalay Y. 1994; Carrara G., Fioravanti A. 2002). Digital could be an answer to the actual needs of architectural design: we could say that Architecture should be digital because it is profitable. The Digital allows us, on the one hand, to open up new fields of investigation and, on the other hand, if well considered, to take the meaning of designing back to its original value. This value goes with, and not against, tradition.
3. The architectural design as "verified conception"
Design has always played a key role: that of being a "verified conception". In the past, a conception was implicitly verified as one designed and built to the professional standards that had been developed by tradition over the centuries (Gucci N. 1998). Nowadays, the designing conception is verified (in an "a priori" manner), before the building is erected, through the modelling of its structural features: an interactive process running across different disciplines and iterating between the time of the ideational expression and the time of its verification, based on a wide-ranging concept of digital modelling (not just pure geometrical modelling). How can the Digital assert and promote this way of working? An example: we know that the BIM methodology allows the creation of a virtual model using object-oriented elements of construction (pillars, beams, bricks, fixtures...). The value of digital BIM is to be a logical-functional model of the future construction, one that permits us to obtain in an "unicum" not only the possibility of studying the different phases of construction, of simulating the thermo-physical properties, the cost, etc., but also the generation of everything in a combined manner. It is not just a change of procedural method (building directly a constructive, object-oriented model from which the documents – views, sections… – are derived afterwards, instead of the opposite), but a cognitive change: the design concept needs to be integrated from an early stage.
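As a minimal sketch of this cognitive shift, the following Python fragment illustrates the object-oriented logic behind a BIM element: the model is built once from constructive objects that know their own properties, and the "documents" (quantities, costs, plan outlines) are derived from it rather than drafted independently. The class, its fields and the cost figure are illustrative assumptions and do not reproduce any actual BIM software's API or schema.

from dataclasses import dataclass

@dataclass
class Wall:
    # An object-oriented construction element: it carries its own
    # dimensions, material and unit cost, so schedules and views are
    # derived from the one model instead of being drawn separately.
    length: float      # metres
    height: float      # metres
    thickness: float   # metres
    material: str
    unit_cost: float   # cost per cubic metre (assumed figure)

    def volume(self):
        return self.length * self.height * self.thickness

    def cost(self):
        return self.volume() * self.unit_cost

    def plan_view(self):
        # A derived 'document', not an independent drawing.
        return f"plan: rectangle {self.length:.1f} x {self.thickness:.2f} m"

model = [Wall(6.0, 3.0, 0.30, "brick", 180.0),
         Wall(4.0, 3.0, 0.30, "brick", 180.0)]
print("total cost:", sum(w.cost() for w in model))
for w in model:
    print(w.plan_view())

Because every view and quantity is generated from the same objects, a change to the model propagates to all its documents in a combined manner, which is precisely the cognitive inversion described above.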
4. A didactic experience
In my course ("Technical Architecture II" in the Civil Engineering degree), the significance of this designing method has been explored. The students were invited to design a single-family villa. Contrary to what they were used to doing, namely beginning with the design of a plan, elevations, and sections, they were asked to use a 3D object-oriented approach, using constructive elements, and trying, in addition, to translate into the construction of the digital model the operational mode that would characterize the construction of the actual building.
In their final written reports, the students consistently describe a radical challenge to their designing practice, with respect to the "normal" design procedures within generic C.A.D. software. Basically, students write: - "To have a different way of reasoning": the fact of being invited to conceptualize the project from a constructive system and subsystem perspective, first in their minds and then as a virtual model, had a direct effect in terms of understanding "the building object", as they had never experienced before. - "To understand everything in a clear and total way": they experienced how this procedure had the benefit of leading them to think about how to immediately solve the relationships between constructive elements, so as to avoid leaving many aspects of the building unsolved or incomplete. Please note that the students were familiar with the design of a building of this type (they were already attending other courses). Nevertheless, they reported a rediscovery of the practice, not only at the project level but also at the constructive level of the future building.
5. The digital: cognitive and ontological value for the design
5.1 A concept of space
The digital dimension generates new cognitive spaces for the designer, "in" the designer, and for the designing concept (Saggio A. 2005; Novak K. 1999; Mitchell W. 1999). The new technology is dramatically changing our approach to design. This change involves as much our conceptual and designing potentials as their implementation. The digital dimension affects the way designers work through the generation of new modes of perceiving, conceiving and imagining space and of relating to it – and "in it" – in a new way (Pea R.D. 1994; Winograd T. 1995). The digital dimension allows for and boosts the growth of a new dynamic "environment", half natural and half artificial, in which the design concept comes to life, and in which form and matter can even be replaced by digital information, by virtual entities, by augmented objects (think, for example, of rich virtualized collaborative environments).
5.2 New meanings of interactivity
The digital dimension becomes a new "environment" (Johnson A. 2001; Maher M. L., P. Frost 2003; Brown M. 1999) in which a new concept of an unusual form of interaction between the designer (subject) and the design (object) is born and grows. An example: immersive and augmented spaces are new working spaces for designers. One can see that this increases and improves the process of knowledge (knowability and creativity, and therefore also design) to such an extent as to allow the cognitive processes themselves to move from a symbolic-reconstructive mode to a perceptive-motor mode, which is the inborn cognitive method of the human organism (Forte F. 1997). To sum up, we are moving beyond vision to "experience" the design before it is built. This interaction has always been believed in throughout the history of the arts and of design. The main aspect was to overcome the concept of the Renaissance Window, towards a dynamic relationship between the subject who observes and the object that is observed (e.g., the design visions of F. Brunelleschi, L.B. Alberti, D. Velázquez, R. Magritte, P. Picasso).
5.3 New skills for the designer?
We could say that digital also increases our designing abstraction skills, in two main aspects: - From the theoretical side, the possibility of discovering a kind of visualization of our mental space. - From the technical side, the possibility of rendering visible some portions of space that would otherwise usually lie beyond our perception (Eloueni A. 2001).
The conquest of a new cognitive space leads to the conception of a new architectural space. In architecture, changing the environment where design takes place involves a dramatic change in the way the architectural space is conceived of and designed.
6. Constructed (digital) architecture
In the last two decades, a common conceptual base of the architectural vanguard experiences has been a concept of architectural form in which digital information (as a set of ever-moving, ever-changing data) becomes shape and vice versa (think, for example, of the architecture "of the immaterial"). After centuries, shapes and buildings "move". Digital reshapes constructed architecture, overcoming the assumption that Architecture is confined to the study of the inert, of everything "static". In addition, digitally constructed architecture is often an architecture "of relation": it does not keep its distance from the onlooker; it asks the onlooker to enter the building. This architecture is no longer defined only by the space it offers but also by the number and features of the services it supplies, by its ability to change as quickly as possible, to be open to anything without contradictions. The building becomes, in itself, a service whose value is related to its ability to fulfil a given number of requirements.
7. Conclusion
Today, the debate on the relationship between Digital and Architecture is truly topical and totally open, and it will become ever more necessary, important and unavoidable: not just because the digital "resources" increase day by day, and not just because they involve several potential architectural aspects (digital thinking, pedagogy, projects, practices, representation, design, visualization, fabrication, tools…), but because these aspects, all together, are generating – de facto – a single Digital Dimension for Architecture. Probably, some contrary positions hide a "fear": could "the digital" be absorbed by the discipline (Architecture), or vice versa? This question looks like the core of the majority of local cultural (and "strategic") positions, but it often reveals a weak cultural position in the architectural landscape: a problem of "identity". In order to avoid the fading away of the discipline (Architecture), the problem is not the situation the discipline faces, but what the discipline actually is: its identity (the identity of the cultural positions inside it). A cultural position with a strong identity is almost always open to new situations, looks for a comparison, and is able, after a critical recognition, to include the positive factors of the new situations and to exclude the negative ones. We know that Architecture is the genetic "expression" of human understanding of reality: Architecture interprets, includes and stimulates the cognition models within human lived reality. The change of the cognition models is strongly related to the change of the paradigms of Architecture, and vice versa. It is knowledge itself that is "represented" in the architectural object. These happenings are more evident during specific times that we could name "phases of revolution". Today we are living through a phase of digital revolution, and we know that new technology has always been a catalyst for new ideas in architecture. After some decades it is possible to assert that Digital enhances design creativity in Architecture, rather than just work production (thinking, for example, of digital animation: we could understand "movement" as a dynamic force, a creative resource, not just as a sequence of pictures). From an aesthetic point of view, contemporary architecture probably cannot "not be digital". Architecture should be digital, because digital is the actual theoretical phase of Architecture. Renaissance Architecture (influenced by the "new science" of Perspective) has
been transformed to be "perspectivable" itself (object/subject of perspective). The Architecture of "Funzionalismo" (the Modern Movement) was transformed to be "industriable" (object/subject of industry): objective, separated, mechanical, not just standard-built (the axonometric representation was its announcement). Today (part of) Architecture would change to be "digitable" (object/subject of the digital): to include the dynamics, the connection, the interactivity of the digital paradigm (hyper-surfaces, algorithms, splines… as a new language). We could say that the Digital is the mental landscape that serves as reference point for the actual phase of Architecture. From a theoretical point of view, the digital factor for architectural design evolves between two opposite risks: - The digital as an obstacle to reality: the power of modeling as a resource for evaluation could suffocate the concept itself, preventing creativity… - The Digital as a Land of Utopia, idealizing creativity: on the one hand because digital has become a link between what can be conceived and what can be built through "file-to-factory" processes; on the other hand, because the projects could remain only exercises in software. The risk could be a fashion leaning towards aestheticism, which makes the representation ("marketability") prevail over "the fact" – over the "substance" – it refers to (the correctness and operation of the architectural design). The shape of the building stands out, prevailing over its function and over the consistency of the constructive decisions. Thus, the Digital in architecture risks reducing its message to pure form (think of the possibility of obtaining physical form from a digital entity using a 3D printer). For the future, it is hoped that the discussion about the digital for design in Architecture will not be about its orthodoxies; what matters instead is its effectiveness. The challenge for designed Architecture is to try to take hold of the new digital paradigm, but without taking it, tout court, as an "objective" parameter by which to assess the achievement of a new quality of built Architecture.
8. References
AA.VV. ACM Transactions on Graphics, Official Proceedings of the International Conference ACM SIGGRAPH 2006, Boston, MA. ACM, New York, 2006.
Gianfranco Carrara, Antonio Fioravanti. A Theoretical Model of Shared Distributed Knowledge Bases for Collaborative Architectural Design, Strategic Knowledge and Concept Formation III Conference Proceedings, Sydney, 2001.
Charles Eastman. Building Product Models: Computer Environments Supporting Design and Construction, CRC Press, Boca Raton, FL, 1999.
Wassim Jabi, Peter Anders. Official Proceedings of the ACADIA 22 Conference, Connecting: Crossroads of Digital Discourse, B.S. University, Indianapolis, Indiana, 2003. {Ganapathy Mahalingam (ibidem)}
Natale Gucci. Some Reflections about the Design Process of Construction, Italian Engineer, official magazine of the National Italian Committee of Engineers, Dario Flaccovio Editore S.r.L., Palermo, Italia, 1998.
George L. Heinsen. Note of Design Lesson in Architecture, La Editrice Artistica Bassanese e la Scuola Remondini, Bassano del Grappa, Italia, 1995.
William J. Mitchell. E-topia: Urban Life, Jim – But Not As We Know It. The MIT Press, Cambridge, MA, 1999.
Antonino Saggio. Interactivity at the Centre of Avant-Garde Architectural Research. Arch'it Digital Review of Architecture, online at http://architettura.supereva.com/, 2005.
Uddin M. Saleh. Digital Architecture, McGraw Hill, New York, 1999.
Option Explicit – Scripting as Design Media

Tim Schork
Spatial Information Architecture Laboratory
School of Architecture and Design
Royal Melbourne Institute of Technology University
GPO Box 2476V, Melbourne, VIC 3001, Australia
tim.schork@rmit.edu.a
Abstract
In practice, the domains of architecture and computation have traditionally been perceived as distinct. Computation and its associated technologies, such as computers and software applications, have primarily only been applied to the domain of architecture. The aim of this paper is to reconsider the relationship between these domains. In moving away from separate entities and towards a synthesis of architecture and computation, this paper explores, through a series of case studies, the potential and the challenges of the rich creative space that this synthesis opens up for architectural design.
1. Introduction
"Architects tend to draw what they can build, and build what they can draw"1. In this statement William J. Mitchell describes to a great extent the predominant use of computer technology and software applications in architecture. They are still largely used as 2-dimensional drawing tools that mimic conventional modes of production in order to maximize efficiency. While this uptake of computer technology has caused a skill change in the profession, the attitude towards it as an applied tool has to a great extent remained the same, keeping the domain of computation hidden within the black box of the 'tool'. Architecture stays passive in the use of technology and merely consumes whatever tools are provided. In contrast to this, I suggest an alternative attitude, one that questions the status quo use of computer technology and explores its capacity for extension and customization to one's own needs. In this scenario computer technology can bring with it not just a skill change but, more importantly, an attitude change in the use and role of computers. Here the designer takes ownership of the generic tool by opening up the black box and inserting his domain knowledge into it, making it a specific tool that caters to his design intentions. This approach considers concepts and theories that are integral to the domain of computation and have useful applications in the domain of architecture. An indicator of this synthesis is the currently increasing engagement of architects with scripting, a subset of computer programming, in order to design their own sets of tools for design, analysis, visualisation and documentation. This use of scripting as design media marks a different way of thinking and working architecturally, while drawing on a thirty-year history, reaching back, for instance, to the work of others such as John Frazer.2 Thus the computer can become more than a 'drawing tool', and instead becomes a 'design tool'.
This synthesis needs also to be considered in conjunction with a series of wider cultural shifts. Outside the discipline of architecture, technology affects nearly every aspect of our social and cultural lives. We are generally comfortable with technology and regard it as an integral part of our everyday life and work. We are accustomed to the idea that technology not just helps and serves us, but that we can modify it to meet our needs. Examples of this are the programming of DVD recorders and mass cultural phenomena such as Google™, Blogs, Wikis or MySpace™ websites. As Mitchell states, “computers have become consumer items; and computing is now a mass medium”3. Within the discipline, architecture’s re-incorporation of aspects from theories of mathematics and science has expanded the disciplinary boundaries. This knowledge has led to a shift of architectural design from form to process, extending the existing architectural repertoire. This is characterised by the transformation from 3D representational models of architectural form to more performative models of processes. The difference between the two can be illustrated by a simple example. In a representational model the geometry is something explicit that is based on implicit rules. In procedural models this relationship is inverted, meaning that implicit geometry is based on a network of interconnected entities which are described by a system of explicit design rules. These design rules define the relationships between individual parts and enable a constant negotiation of the geometry with its context.
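This inversion can be sketched in a few lines of Python (used here for neutrality rather than any particular CAD scripting language; the façade-mullion scenario and its values are invented for illustration). The representational version stores the geometry itself; the procedural version stores only the rule, so the geometry renegotiates itself whenever its context changes.

# Representational: the geometry itself is stored as explicit data.
explicit_mullions = [0.0, 0.5, 1.0, 1.5, 2.0]

# Procedural: only the rule is stored; the geometry is regenerated
# whenever the context (here, the facade width) changes.
def mullion_positions(facade_width, max_spacing=0.5):
    count = max(1, int(facade_width / max_spacing))
    return [i * facade_width / count for i in range(count + 1)]

print(mullion_positions(2.0))  # reproduces the explicit model above
print(mullion_positions(3.2))  # same rule, new context, new geometry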
2. Case Study – Screenresolution
Screenresolution is a case study that investigated how scripting can be a useful design medium, exploring the creative design space that exists at the overlap of architecture and computation. The project was undertaken by MESNE4 in collaboration with seven interior design students at the School of Architecture and Design at RMIT University. The intention of this seminar was to produce a 1:1 prototype of an architectural screen made from non-standard components, collectively designed by the entire group of students. By developing a series of strategies to converge the abstract model (the code) and the physical model (the materialisation), the project further explored the synthesis of generative design processes and fabrication techniques. Throughout the semester the students were required to post their scripts to an open source script library, so that they could be shared among all members of the design team. The final ‘master script’ used to generate the screen was compiled from this library, giving the project not a single author but a collective one.
2.1. Decomposition and recomposition
At the beginning of the seminar students were introduced to rule-based design systems and were asked to investigate naturally occurring surface tessellations for their underlying ‘generative code’. The task was not merely to produce ‘bonsai architectures’5, small-scale representational models of something already existing, but aimed to discover the geometric rules and the internal logic of their structuring processes. This required the mental task of decomposition and recomposition, which is inherent to both design and scripting. As with good design, scripting generally improves through a process of drafting and redrafting and requires clear thinking. When working with the technique of scripting, one first decomposes the design intent into a series of rules and parameters, then transcribes them into a series of functions, and recomposes them by bringing these entities back into relationship with one another when running the script. This approach reflects Burry’s description of generative design. He notes “With a generative approach to creative
production it is possible for there to be a clear distinction between what is generated and what generates, between the code and the resultant objects.”6
Figure 1. Geometric studies generated by scripting
After creating a design schema and establishing a common and generic component-based assemblage, each student investigated a particular area of individual design interest, such as colour, porosity and light transmission. All these design intentions were encoded as a set of instructions in each individual component in order to define how the component could adapt to its environment. The environment set up for this generative system was a manually created, simple 3D model in Rhino™ that consisted of two variable NURBS surfaces. These surfaces defined the position of the screen in the exhibition space, and the components were instantiated between them. The screen also needed to fulfill several programmatic requirements, which were located by a series of points in the 3D digital model. The proximity of a component to the field of influence of each locator was used as an input to the generative system that would trigger a response in the component geometry. For example, if a screen component was within the range of influence of a locator for multimedia projections, a closed component type was generated, while proximity to a locator that defined the area for a proposed lounge caused a colouration of the component. The final configuration of each component is an accumulative response that synthesizes the influence of all the different locators within the exhibition space.
Figure 2. Colouration of components based on proximity to lounge locators
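The locator logic described above lends itself to a compact sketch. The following Python fragment is a hedged illustration under assumed names and an assumed linear falloff, not the students’ actual RhinoScript; it accumulates the influence of several locators into the state of one component.

```python
import math

def influence(component_xy, locator_xy, radius):
    """Return a 0..1 falloff: 1 at the locator, 0 at or beyond its radius."""
    d = math.dist(component_xy, locator_xy)
    return max(0.0, 1.0 - d / radius)

def configure(component_xy, locators):
    """Accumulate all locator influences into one component state."""
    state = {"type": "open", "colour": 0.0}
    for kind, xy, radius in locators:
        w = influence(component_xy, xy, radius)
        if kind == "projection" and w > 0.0:
            state["type"] = "closed"                    # projections demand a closed component
        elif kind == "lounge":
            state["colour"] = max(state["colour"], w)   # lounge proximity drives colouration
    return state

# Hypothetical locator layout for one exhibition space.
locators = [("projection", (2.0, 5.0), 3.0), ("lounge", (8.0, 1.0), 4.0)]
print(configure((3.0, 4.5), locators))
```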
2.2. Component based authorship
The technique used in this project is a combination of RhinoScript™ and Visual Basic for Applications™ (VBA) in order to create a design tool that is able to generate a series of different design versions. This design tool is then used to iteratively explore possible configurations of the proposed screen within the exhibition space.
During the early design phase the group was given two requirements that needed to be considered and implemented in the setup of the generative code. The first requirement was that the screen needed to be produced out of a flat sheet material, which imposed a number of constraints on the geometry of each individual component. The second requirement was that each component had to be fabricated with an available numerically controlled (NC) flatbed cutter. This meant that no component within the entire assemblage could be bigger than the maximum bed size of the cutter. This incorporation of material and fabrication constraints into the generative script and its geometric outcomes facilitates an interface between the design concepts and the material practice of architecture, and makes it possible to use the information contained within these models directly for fabrication.
Figure 3. Series of small scale prototypes
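The bed-size constraint is straightforward to express in code. Below is a hedged sketch with hypothetical dimensions (the paper does not state the actual bed size) that validates flattened components against the cutter.

```python
BED_WIDTH, BED_DEPTH = 1200.0, 900.0   # assumed cutter bed size in millimetres

def fits_cutter(flat_width, flat_depth):
    """A flattened component must fit the NC bed in at least one orientation."""
    return ((flat_width <= BED_WIDTH and flat_depth <= BED_DEPTH) or
            (flat_depth <= BED_WIDTH and flat_width <= BED_DEPTH))

def oversized(components):
    """Return every generated component the flatbed cutter cannot produce."""
    return [c for c in components if not fits_cutter(c["w"], c["d"])]

parts = [{"id": 1, "w": 450.0, "d": 300.0}, {"id": 2, "w": 1500.0, "d": 200.0}]
print(oversized(parts))   # component 2 exceeds the bed in both orientations
```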
2.3. Converging modes of representation
With this knowledge of material and fabrication constraints in mind, the development of the generative code of the screen was constantly informed by the simultaneous making of physical prototypes that tested, for example, the connection details between individual components in the assemblage, the properties of the cardboard material, or the fabrication constraints of the flatbed cutter. These physical prototypes, incorporating both the design intentions and the fabrication constraints, created a feedback loop between all the different modes of representation of the design. This constant moving back and forth between the different modes of representation of a design (the scripted text, the visual geometry and the physical material) enriches our understanding and the quality of the designed object. The final screen is a unique object and consists of an array of non-standard components. As a field, these components act collectively to express properties of porosity, colour, and the interplay of light and shadow. This collection of properties generates a moment in a continuous state of change.
Figure 4. View of 1:1 prototype showing variation of individual components
Conclusion
This paper has presented how a group of students collectively designed a screen made of non-standard components, and why the use of scripting as a design medium marks a different way of thinking and working architecturally. On reflection, we can draw several conclusions from this specific case study. Firstly, the overlap created through the synthesis of aspects of the domains of architecture and computation supports design thinking. Once the scripted design tool was compiled, the group of students explored the design space that opened up through this overlap and continuously generated a vast number of possible designs. This allows designers to “ask ‘What if?’ more often, and about more kinds of assumptions”7, creating the opportunity to investigate more design alternatives. Secondly, it illustrates how scripting can foster collaboration and act as a lingua franca among the members of a design team, thereby becoming a catalyst for innovation within design. The project challenges the proverb that ‘too many cooks spoil the broth’. Lastly, the paper shows how scripting creates an interface between design and fabrication processes. This interface bridges the representational gap between the abstract (virtual) and the concrete (materialization), leading to more buildable design outcomes. Can scripting overcome Mitchell’s statement about the production of architecture from the beginning of this paper?
Endnotes
1 Mitchell, W. J., "Roll over Euclid: How Frank Gehry Designs and Builds", in Frank Gehry Architect, J. Fiona Ragheb (ed.), New York, NY: Solomon R. Guggenheim Foundation, 2001, pp. 352-63.
2 Frazer, J., An Evolutionary Architecture, London: Architectural Association, 1995.
3 Mitchell, W. J., Liggett, R. S., Kvan, T., The Art of Computer Graphics Programming, New York: Van Nostrand Reinhold Company, Inc., 1987.
4 MESNE is an architecture practice, founded in 2005 by the author and Paul Nicholas, www.mesne.net
5 Ursprung, P. (ed.), Herzog & de Meuron: Natural History, Basel: Lars Müller Publishers, 2006.
6 Burry, M., Burry, J., Dunlop, G., Maher, A., "Paramorphs: An Alternative Generative System", in Proceedings of the Second International Conference on Generative Systems in the Electronic Arts, CEMA, Melbourne, 2001.
7 McCullough, M., "20 Years of Scripted Space", in Architectural Design 76, no. 4, 2006, pp. 12-15.
Process in Design
Moderators: Mariana Ibanez and Jock Herron
Papers:
Tomasz Jaskiewicz, Dynamic Design Matter[s]: Practical considerations for interactive architecture
Gun Onur and Jonas Coersmeier, Progressions in Defining the Digital Ground for Component Making
David Celento and Del Harrow, CeramiSKIN: Biophilic Topological Potentials for Microscopic and Macroscopic Data in Ceramic Cladding
Emmanouil Vermisso, ‘Digitality’ controlled: paradox or necessity?
Sherif Morad, Building Information Modeling and Architectural Practice: On the Verge of a New Culture
Oliver Neumann, Digitally Mediated Regional Building Cultures
David Harrison and Michael Donn, Using Project Information Clouds to Preserve Design Stories within the Digital Architecture Workplace
Christian Friedrich, Information-matter hybrids: Prototypes engaging immediacy as architectural quality
Theodore Dounas, Algebras, Geometries and Algorithms, Or How Architecture fought the Law and the Law Won
Dynamic Design Matter[s]
Practical considerations for interactive architecture
Tomasz Jaskiewicz
TU Delft, the Netherlands
t.j.jaskiewicz@tudelft.nl

Abstract
This paper explores the concept of interactive architecture. The first section formulates a daring vision of a radically new kind of architecture. The second section elaborates this vision by proposing a generic approach to practically accomplishing the theoretical concept formulated at the outset. Opportunities and threats that emerge from this vision and approach are discussed in the third section, and in sections four and five the proposed approach is brought to practical application and illustrated with a number of experimental building component examples that together include all the features necessary to create a complete large-scale architectural object. All projects and explorations were conducted as part of the Hyperbody group’s research at the Delft University of Technology and were inspired by the group’s director, Prof. Kas Oosterhuis.

1. Interactive architecture – vision statement
Interactive architecture is a new concept in architectural design, rooted in the global development of information technology. Unlike other recent developments, which result from the employment of digital technologies and exponentially increase the number of tasks we have to deal with daily, interactive architecture has the potential to provide solutions to the complex problems and information overload that have emerged under the new conditions of the digital era. The development of radically new architectural qualities has become a necessity. Changes in lifestyles and other cultural developments now occur faster, are more radical and, like never before, are highly unforeseeable. At the same time, today’s global society calls for architectural solutions capable of sustaining themselves in their dynamic spatial, social and natural environments. Since contemporary ways of life are evolving so rapidly in all their aspects, there is an urgent necessity for architectural spaces to be enhanced in ways that allow them to carry on an active dialogue with their fluctuating content: to deal dynamically with the changing needs of social groups, as well as to serve particular individuals directly. This trend forces architects to design flexibly, taking into account the potential emergence of new spatial requirements that cannot be anticipated before the building is actually in use and that can change dynamically over time. It has been commonly acknowledged that architecture has to be sustainable. This means that it has to sustain itself by its own means, instead of being sustained by external forces. The environment in which architecture has to perform consists of numerous layers. These layers (the natural, social, cultural and many others) form local and global ecologies of all possible kinds, creating very rich networks of dependencies and relations. If the demands towards architecture coming from any factor of such intricate ecologies abruptly change under certain conditions, architecture may have to either adapt itself in order to accommodate those new demands, or give feedback to its environment to reconsider its requirements. In either case, this means that buildings have to develop an ability to react to unpredictable and rapid changes in demands for specific functional
qualities. Serious consideration of these trends in architectural design leads to the concept of a new kind of architecture which treats architectural matter as a dynamic substance. Such a new kind of architecture has to be efficient. It has to perform efficiently in its energy consumption, its economic feasibility and its structural aspects. It also has to deal efficiently with its surroundings and, most importantly, with providing the necessary spatial conditions. In other words, architecture has to satisfy the various demands of its users and its environment. The difficulty is that those demands are never constant; they change over time and depend on numerous variable conditions. Additionally, we have to acknowledge that many of these demands cannot be met, for reasons of a logical or technical nature. Therefore, within its ability to adapt, architecture also has to be able to deal with its own limits. It should go beyond the level of performance of a fully obedient savant. Buildings and their spaces should provide more sophisticated behaviour than sheer response to our needs, simply because such unconditioned responsiveness cannot provide solutions in complex spatial situations. If linear problem solving doesn’t provide satisfactory answers, solutions need to be found in a creative manner. For that reason, in addition to adapting to changing conditions, built spaces should at the same time pro-actively counter-affect our lifestyles and activities. They have to actively negotiate with all external demands. In other words, they have to become interactive.1 The notion of interactive architecture is commonly oversimplified: it is used to refer to buildings and built spaces which are capable of simple responsive adaptations and spatial customizations of various kinds. Indeed, the automation of adaptive and customizable qualities of buildings leads to the creation of architecture which is active at the core of its nature. However, only the consistent replacement of the linear logics that guide their behaviour with an ability to reason and learn results in true interactivity: the creation of spaces which are able to maintain a dialogue with their users, not only responding to their demands but pro-actively engaging themselves in all kinds of featured spatial activities. Interactive buildings are to be more than just simple customizable spaces. They will possess all the features of traditional architecture; in addition, however, they will develop a subtle will of their own. They will serve us pro-actively, creatively coming up with features that depend on their constantly gathered, updated and validated knowledge. Interactive architecture will provide unprecedented experiences and aesthetics, which are not one-time, singular states but evolving processes. Architecture will become an active medium, mediating between individuals, between entire social groups, and with other phenomena that have never been mediated with before.

2. Architecture as an open system – approach
The new vision of architecture and architectural design leads to a theory that treats architectural constructs not as finite singularities but as multidimensional multiplicities, offering a wide range of solutions for varying environmental conditions and for flexible socio-spatial ecologies.
This concept would in all probability have remained an entirely utopian speculation were it not for the most recent possibilities emerging from the application of computers and other digital technologies in architectural practice. As a result of employing these new technological means, we are now witnessing an ongoing, radical shift in architectural design methodology. Firstly, linear and centrally steered “top-down” designing is being replaced or supplemented by the creation of parametric, procedural and relational digital design models. Consequently, those models become final design products themselves, making traditional, fixed plan drawings less important or even
completely redundant. Secondly, mass production is being replaced by mass customization of building elements, made possible by data-driven manufacturing technologies. In many cases it no longer makes a significant difference from the manufacturer’s perspective whether a building consists of repetitive components or whether every single element is fully unique in its form. Thirdly, the use of the latest digital technologies for sensing, data processing and actuating buildings practically opens up a vision of architecture that can be flexible and adaptive not only during the design process but also after being built.2 The other consequence of employing such dynamic, multidimensional architectural design matter, both virtually and physically, is that time is naturally included as one of the considered dimensions. In this way, architectural multiplicities can be comprehended as dynamically steered processes unfolding over time. Normally, such processes begin as virtual constructs and gradually become materialized as parts of the built environment. The approaches employed to create such processes vary. On the one hand there are reductionist ones, starting from an infinite number of dimensions and gradually imposing constraints on the project, thereby reducing its dimensionality. On the other hand, one can start with a limited number of dimensions and gradually increase it throughout the project’s development, leading to a design process that resembles in its mechanisms the evolution or natural growth of living organisms. In either case, architectural design methods have to be practically reinvented from the beginning. In place of designing singular spatial forms, architects are now expected to design processes that produce architectural spaces. The nature of these architectural systems can be very diverse. However, provided that the aim of architecture is to be interactive, these processes have to be designed in an open and extensible manner.3 Open systems can be defined in opposition to the traditionally more common closed systems, which, despite the ease with which they can be optimized to reach clearly defined optima or solve predefined problems, are not capable of self-improvement. Open systems, on the contrary, can evolve over time in non-predetermined ways, which gives them a very high degree of adaptability to unforeseen circumstances. Complex adaptive systems are a specific kind of open system consisting of a high number of autonomous elements. Even though each of these elements may be driven by very simple, linear logics, the system as a whole is likely to exhibit very intricate properties. It can develop an ability to learn and to perform simple reasoning processes. It can also easily be expanded and altered by increasing the number of elements or by changing the behaviours of all or a selection of its parts. Those features make complex systems ideal for architectural applications, which normally deal with a very high number of variable factors and undetermined, variable dimensions.4

3. Spatial consequences – opportunities and threats
It is impossible to foresee exactly what the long-term consequences of applying the theory of developing interactive architecture as an open, complex system are going to be.
It definitely leads to radically different ways not only of creating and thinking about architectural spaces, but also of using them. If we look at the three basic qualities of architecture originally defined by Vitruvius5, we may realize that each of them undergoes a radical transformation when architectural spaces come to be considered interactive. The beauty [venustas] of interactive architecture will lie not just in its one-time appearance and proportion; it will emerge
from the manner in which proportions are maintained over a changing architectural form in relation to its likewise changing environment. The functionality [utilitas] of architecture will reach a wholly different level, since it will be fully dependent on changing functional demands. We will no longer evaluate the functionality of buildings as a definite quality, but will judge their efficiency in addressing unpredictably appearing functional needs. The reliability of architecture, which normally derives from its firmness [firmitas], will probably be the most difficult of the three qualities for interactive architecture to fulfil. It will most likely take many experimental developments to eventually reach a fully reliable level of interactive architectural performance. However, although it is easy to think of many discouraging scenarios, perpetuated by science fiction films of the sort of 2001: A Space Odyssey or The Matrix, this should not hinder us from trying to advance the quality of our spatial environment. What may be the real difficulty, however, is that the implementation of interactive architecture requires radical changes in the way we culturally deal with architecture. The perception of architecture has often been based on confidence in a static model of society. Architectural monuments, from the tombs of Egyptian kings to modern churches, have been designed with the intention of everlastingly conveying firm meanings and symbols. These symbols, however, devalue faster than any of their creators might have expected. Therefore, the method by which these values are expressed in architecture has to change. We have to become accustomed to the idea of change as a permanent element of the spatial environment.

4. Prototyping the new kind of spaces – investigating practical solutions
New milestone developments of human civilization are normally brought into common use gradually. Such is also the case with interactive architecture. Its creation cannot take place instantly, because of the very wide range of difficult problems that need to be solved before architecture can reach true interactivity. The least of them are technological; the most difficult relate to theoretical, cultural and social questions. In any case, these questions can only be answered by testing the new concepts in practice. For this, however, a general methodology best suited to the development of interactive buildings first needs to be defined. Yet in order to propose a new design methodology, we first need to know the actual nature of the things to be designed. As argued earlier in this paper, interactive architecture has to be comprehended as a system. Architects already faced this task at the beginning of the 20th century. The “smart home” concept is probably the most common contemporary evolution of early modernist attempts at creating buildings as machines. In all these concepts, the elements of buildings are centrally controlled in a predefined manner. This approach leaves no space for any further evolution of such creations, which practically rules out the idea of interactivity. Developing a system instead as an open network of interconnected elements makes it highly extensible and adaptable. Applied to architecture, this means that buildings can be seen as networks of interrelated building components.
However, the most important feature of open systems is that they are capable of developing what is often referred to as “swarm intelligence” (figure 1), a holistic theory of intelligent behaviour that can emerge from systems consisting of a high number of simple interconnected agents. In the case of buildings, those agents
are the active building components communicating with each other, with people using the building spaces and with their environment.6
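This mechanism can be illustrated with a minimal, hedged sketch (plain Python with invented values, not Hyperbody’s control code): each component follows one simple linear rule, averaging with its two neighbours and a sensed target, yet the field as a whole settles into a coherent state without any central controller.

```python
def step(states, neighbours, target):
    """One update: each agent nudges its value toward its neighbours and a sensed target."""
    new = []
    for i, s in enumerate(states):
        local = sum(states[j] for j in neighbours[i]) / len(neighbours[i])
        new.append(s + 0.5 * (local - s) + 0.1 * (target - s))
    return new

# Ten components in a ring, each communicating only with its two neighbours.
states = [float(i % 3) for i in range(10)]
ring = {i: [(i - 1) % 10, (i + 1) % 10] for i in range(10)}
for _ in range(50):
    states = step(states, ring, target=1.0)
print([round(s, 2) for s in states])   # the field converges without central control
```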
Figure 1. Swarm “intelligence” in nature.
In finding practical ways of creating complex systems, a lot can be learned from nature. In fact, every living being, seen as composed of countless smaller and simpler elements, is a complex system in itself. If we trace the processes that form living organisms, it is obvious that none of them was initially shaped in all its intricacy. They always start with a single cell which multiplies itself numerous times. When a critical mass is reached, cells start to differentiate; they begin to form tissues and organs. Buildings can be designed and created in an analogous manner, originally consisting of a small number of components and gradually increasing in complexity as a higher number of more advanced components is added. What’s more, with the process-driven approach, the division between the design, construction and maintenance of buildings becomes very vague. Those three phases no longer have to be carried out in sequence; they can be in many respects simultaneous or even mixed with each other.7 To validate the presented approach, several concepts have been tested and prototyped at 1:1 scale in numerous case projects. New materials became the sensing and actuating organs of buildings, and digital information processing worked as their neural system. This allowed the creation of unprecedented adaptive and performative qualities within the designed spatial installations.

5. Interactive building elements
An interactive building can be composed of diverse active elements. The most interesting ones for feasible experimentation at real scale are those which build up “membranes” that separate distinct building spaces. In traditional terms, membranes would be building facades and internal divisions. For interactive spaces it is important that a membrane can create either a connection or a boundary between separated spaces. What’s more, it can actively generate and modify various spatial conditions on its two sides. An interactive membrane (figure 2) can provide many functional features. Applied at a small scale, apart from its primary role as a space-organizing object, it can provide seating elements, light sources, atmosphere creation, ambient and direct communication, active monitoring, acoustic control, access control and much more. Applied at a larger scale, the range of possible functional implementations becomes even greater, potentially giving the hypermembrane the role of a communication medium not only between individuals but between entire social groups. What is most important, however, is that all these features are to be
provided in a dynamic manner, as an intelligent derivative of the information gathered by the interactive membrane from its surroundings. The process governing the behaviour of the membrane will constantly improve its logic, given its ability to learn from precedent situations and their effects on the environment and people. Applications of the interactive membrane idea can vary in scale and technique. Hyperbody has formulated a number of projects which will explore this concept in its many aspects and forms, ultimately providing the knowledge to implement the hypermembrane commercially and to solve a wide variety of complex spatial problems.
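Such a learning loop might be sketched as follows; this is an illustrative toy in Python with hypothetical response types and comfort scores, not the membrane’s actual process. Responses to precedent situations are scored, and the best-scoring precedent is reused when a similar situation recurs.

```python
import random

memory = {}   # situation -> (best_response, best_score)

def respond(situation):
    """Reuse the best remembered response; otherwise explore a new one."""
    if situation in memory and random.random() > 0.2:
        return memory[situation][0]
    return random.choice(["open", "half-open", "closed"])

def learn(situation, response, score):
    """Keep a response only if it outperformed the remembered precedent."""
    best = memory.get(situation, (None, float("-inf")))
    if score > best[1]:
        memory[situation] = (response, score)

# One interaction cycle, with a made-up comfort score standing in for real sensors.
situation = ("crowded", "daylight")
action = respond(situation)
learn(situation, action, score=0.7)
print(memory)
```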
Figure 2. Interactive membrane
From the technical point of view, a membrane can consist of a number of connected autonomous segments which can serve as spatial divisions, but also as seating places, display stands, or sound and light sources. In this way it can provide adaptive spatial conditions by learning how to respond to variable information coming directly and indirectly from the participants of a space. A number of technical solutions needed to be tested in order to select building techniques providing the best balance between range of movement, structural stiffness, stability, strength, slimness, the possibility of dynamic openings in the surface, aesthetic qualities and cost. Several options have already been tested within other projects related to the hypermembrane development, although limited financing and a lack of open access to preferred technologies hinder the investigation of many potentially applicable solutions. Developed as parts of various projects, these solutions include active pneumatic elements, dynamic flexible surfaces and separated objects visually appearing as one whole, as well as several hybrid solutions (figures 3, 4). The kinetic features of such systems can be supplemented with light, sound, colour and information display using new kinds of digitally driven active materials.
Figure 3. Technical solutions for membrane implementations, concepts
Figure 4. Technical solutions for membrane implementations, case projects.
Another potentially interactive feature of buildings is their structure. Due to technical constraints, two sorts of structural adaptation can be developed. Structural reconfiguration (figure 5) can take place if changes in the structural geometry are not needed more often than at weekly or bi-weekly intervals. In that case static elements can be rapid-manufactured using CNC technologies and added to the structure, while redundant elements are removed. In the case of dynamic spatial adaptation, dynamic structural components can be employed to allow instant changes in the structure without affecting the structural topology (figure 6). A third group of interactive investigations at Hyperbody covers furniture-scale elements, which can freely populate building spaces but still operate as parts of the building system.
Figure 5. Reconfigurable building structure.
Figure 6. Dynamic structure concept, implemented in Muscle Tower project
6. Future of interactive architecture
If current research on and implementation of interactive architecture proves successful, in a more distant prospect this development may become the foundation of what Marcos Novak refers to as “neuroarchitecture”. Future architects may use nanotechnology to create complex systems at the level of neurons and atomic particles. Those constructs will be able to operate in a manner very close to the behaviour of biological living organisms.8

Endnotes
1 Oosterhuis, K., Cook, P., Architecture Goes Wild, 010 Publishers, 2001.
2 Jaskiewicz, T., "Process Driven Architecture", ASCAAD 2007 conference proceedings, 2006.
3 De Landa, M., Intensive Science and Virtual Philosophy, 2002.
4 Kaneko, K., Life: An Introduction to Complex Systems Biology, Berlin/Heidelberg: Springer-Verlag, 2006.
5 Vitruvius, The Ten Books on Architecture, Dover Publications, 1960.
6 Wolfram, S., A New Kind of Science, Wolfram Media, Inc., 2003.
7 Flake, G. W., The Computational Beauty of Nature, The MIT Press, 1998.
8 Novak, M., "Transvergence: Finite and Infinite Minds", in K. Oosterhuis, L. Feireiss (eds.), Game Set and Match II: The Architecture Co-laboratory on Computer Games, Advanced Geometries, and Digital Technologies, Rotterdam: Episode Publishers, 2006.
Progressions in Defining the Digital Ground for Component Making
Onur Yüce Gün
KPF NY, Computational Geometry Specialist
onuryucegun@alum.mit.edu
Jonas Coersmeier
Büro NY, Partner / Design Critic, Pratt Institute
jonas@burony.com
Abstract
The terms digital and computation, once regarded as emergent understandings in design, have become commonly known and used in recent years. The transformation of techniques from analog to digital has shifted both the understandings and the products of design. Digital design exploration has exposed designers to variety and richness, an increasing number of digital tools have become easily accessible, and design thinking in both practice and academia has been transformed as a result. Computation, via the increasing power and speed of processing, allows masses of information to be handled at once. Once this power was used to inform the discrete pieces of a design, “component making” quickly became one of the trends in architectural design. The idea of components transformed the enclosing forms of architecture into subdivision surfaces which act as fields for components to aggregate on. While there has been great interest in creating variety via the manipulation of components as individual members, the characteristics of the surfaces have been overlooked through the common use of parametric (UV) subdivision. This paper takes a critical look at current component field generation techniques and focuses on alternative methods of transforming a surface into a digital ground for component aggregation. A series of studies addresses various pitfalls of component design and application on software-dictated UV subdivision surfaces. The studies aim to release the component design logic from being software-specific through the creation and use of customized digital tools and scripts.
1. Variety
Variation plays a crucial role in both the conceptual and construction phases of architectural design. In nature, variation is the outcome of specific conditions, so individual members of a group may differ in characteristics such as topology and functioning1. Similar ideas apply in architecture: the discrete pieces of a building are designed, manufactured and built depending on their function and their structural behavior. Computation brings ease to the generation and control of variety. Emergent design tools feature easy-to-use component compiling techniques. Once created, components can be aggregated on parametrically subdivided surfaces. Components embody a certain intelligence: they can be manipulated by internal parameters (such as numerical values) as well as driven by behavioral commands, responding to external entities (forces, neighboring elements, proximities and the like). While such an approach exposes designers to a richness of
possibilities, it also promotes geometrical repetition and monotony. Most UV-based subdivision engines fail to go beyond repetitive polygonal compositions, as they imply regular two-dimensional subdivisions. Moreover, overcrowding and the imperceptible deviation of members demand the definition of criteria for filtration. Variety is neither something that can simply be dismissed from design nor something that can be pushed to the utmost limit without control. The following studies take a critical look at the foundations of current component-making techniques and offer systems in which variety can be better understood, designed and generated.
2. Progressions in component logic
2.1. Poromorph: Basic understanding in component design
Poromorph is a five-day research study carried out in 2004, as a test user, in an alpha version of Bentley’s Generative Components. While testing the new parametric design environment, the study aimed to create a component using solid modeling techniques and Boolean operations, and to test the behavioral changes of components under regular parametric (UV) distribution on a surface. Generative Components, by default, offers a workflow for transforming surfaces into polygonal grids. A point placed on a surface can easily be turned into a point cloud, in which the locations of the points on the surface are defined by fractional parameters. A series of decimal numbers between zero and one enables a regular distribution of the points. This point cloud implies a quadrilateral polygonal grid. The individual polygons in the resulting grid are used as cellular entities, and the components are distributed on top of them. Poromorph takes this workflow as a precept, as the base component is built on a quadrilateral polygon. Five individual solids, whose thicknesses are defined by global parameters, feature parametrically adjustable pores. Variation is achieved mostly via the UV parameters, with an additional parameter derived from the steepness values along the surface. Poromorph exhibits slight variation and behavioral changes driven by parameters and fulfills the very basic ideas about components, but fails in the creation and control of variation (Figure 1).
Figure 1. Poromorph: Variation in solid forms.
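The fractional-parameter workflow can be sketched generically. The snippet below is a plain-Python stand-in, not GC’s actual API: a surface function is evaluated at regular UV fractions between zero and one, yielding the point cloud that implies the quadrilateral grid.

```python
def uv_grid(surface, nu, nv):
    """Sample a parametric surface at regular fractional (u, v) parameters in [0, 1]."""
    us = [i / (nu - 1) for i in range(nu)]
    vs = [j / (nv - 1) for j in range(nv)]
    return [[surface(u, v) for v in vs] for u in us]

# A stand-in surface: a gentle paraboloid over the unit square.
surface = lambda u, v: (u, v, 0.25 * (u - 0.5) ** 2 + 0.25 * (v - 0.5) ** 2)

grid = uv_grid(surface, 5, 5)   # a 5 x 5 point cloud implying a quad grid
print(grid[0][0], grid[4][4])
```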
The default surface subdivision workflow of Generative Components, which plays a great role in understanding algorithmic system design, can quickly become a restrictive, dictating technique: while the design ideas, forms and surfaces change, the logic of subdivision never changes, and the perceivable variety of members ends up being virtual. Stretching a surface affects the components on its periphery, but this does not necessarily imply a functional or topological change.

2.2. Alternative Subdivision of Surfaces
The default workflow in Generative Components also dictates a one-to-one relationship between the underlying polygonal grid and the component. The system does not easily allow the user to break the flawless continuity of a series of components distributed along a surface. A later study focused on these limiting aspects of applying components directly onto UV subdivision surfaces, aiming to define ways of applying components unevenly and discontinuously. While skipping some polygons during the component distribution process overcomes the flawless repetition, further subdivision of selected members generates more deviation in scale. A selective recursive subdivision approach (Figure 2) helps to realize both ideas: not all members are divided to the full recursion depth, and not all polygons are used as component bases. A drastic scale deviation occurs between the different recursion depth levels. Components with scale-driven properties start to show a wider range of possibilities, including deviations in topological characteristics. While the approach produces a valid solution to the initial problems, the products end up being mechanical rather than natural. With no intermediate conditions regarding scale, the system fails to create fluid transitions; harmony and continuity are interrupted.
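A minimal sketch of the selection idea, in generic Python with an invented random selection rule (the original study’s criteria are not specified here): quads recurse unevenly, and some leaf polygons are skipped as component bases.

```python
import random

def subdivide(quad, depth):
    """Recursively split a quad into four; only some children recurse further."""
    (x, y, w, h) = quad
    if depth == 0 or random.random() < 0.4:   # selection rule: stop early at random
        return [quad]
    halves = [(x, y, w / 2, h / 2), (x + w / 2, y, w / 2, h / 2),
              (x, y + h / 2, w / 2, h / 2), (x + w / 2, y + h / 2, w / 2, h / 2)]
    return [leaf for q in halves for leaf in subdivide(q, depth - 1)]

random.seed(7)
leaves = subdivide((0.0, 0.0, 16.0, 16.0), depth=3)
bases = [q for q in leaves if random.random() > 0.25]   # skip some polygons entirely
print(len(leaves), "cells,", len(bases), "used as component bases")
```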
Figure 2. Selective recursive subdivision of quadrilateral polygons
2.3. Reversing the Component Logic: Nanjing train station
Another preconception about components is the acceptance of field populations as the only way of aggregating them. The conventional understanding dictates a workflow in which a
compiled component is aggregated on a surface via certain placement rules. The surface is thus always accepted as a mandatory input for any kind of component population. The Nanjing Train Station competition entry of Kohn Pedersen Fox New York overturns this understanding. The preliminary massing model of the train station reveals the design intentions: the five-hundred-meter-long platforms lying between the fifteen train tracks are sheltered by individual surfaces of continuous fluid form. The canopies rise and tear apart in the middle to accentuate the main concourse. The entrances on the north and south are highlighted by projections. While the train station presents a global harmony, the individual surfaces carry specific identities, as they are generated from smaller bits of information: various configurations, deformations and transformations of simple “S” curves, driven by global rule sets and internal parameters for local adaptations, define the characteristics of the surfaces (Figure 3). Generated as a derivative of a series of discrete, invisible components (the “S” curves), the surface this time becomes the product of a series of components.
Figure 3. Invisible components influencing the form
3. User defined subdivision techniques
3.1. Inspiration: Studio at Pratt Institute
In the light of the concerns above, in the second-year undergraduate studio we ran at Pratt Institute in spring 2007 we aimed to propose an alternative way of thinking about the creation of digital grounds for designing with components. While supporting the studio with the necessary theoretical background for the students’ development in the conceptual understanding of design computation, we also helped the students develop their technical skills in the use of a parametric modeling environment. From a range of possibilities, and considering the students’ experience and the strengths of the platforms, we used Rhino and Generative Components throughout the semester. The studio ran through stages of studies which encouraged students to develop analytical systems for generating architectural envelopes and enclosures by deriving data from the project site. Students were expected to develop systems driven by their design concepts and to solidify their ideas by creating geometrical aggregation models. The activities taking place around the given site in the Bowery, New York, became the medium for the students to start with, as they recorded and mapped these activities. While mapping, students used line drawings and point clouds to express and re-construct the connections
between activities and to relate them to their design concepts. This geometrical composition, derived from the site, was used as an input for compiling polygonal meshes, which played the role of the digital ground and formed the backbone for their further studies in the studio.

3.2. Mesh as a Digital Design Ground
A polygonal mesh is a composition of three different geometrical entities: first the vertices, a collection of points in space; second the edges, which connect these vertices; and third the faces, which fill in between the edges2. In computer graphics, meshes are generated by different algorithms taking different geometrical entities as inputs. A group of points as well as a surface can serve as the input for a mesh algorithm. The characteristics of the output are likewise defined by the algorithm: a mesh may be composed of only triangular polygons, or only quadrilateral polygons, or may be a product of various concave and convex polygonal compositions. Perceived as a digital ground for component making, the polygonal mesh takes on a new character. The mesh is no longer accepted merely as a digital representation of an enclosing geometry, but is considered a space constructor composed of interconnected responsive parts. With this logic, the elements of the mesh become “cells”; vertices, edges and faces become part of the design idea. Instead of representing a global form, they become placeholders for various architectural components. A polygonal mesh, depending on the algorithm of origin, may create a polygonal composition whose members change tremendously in scale. Polygons of various sizes address resolution and density. When controlled sensibly, this capability becomes a design understanding: while the placement of points in a Delaunay triangulation3 algorithm may define resolution by activity, a mesh extracted from a bent surface may indicate curvature. Such approaches and understandings may be used together or independently (Figure 4).
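As a sketch of the point-cloud-to-mesh step, the following Python fragment uses SciPy’s Delaunay triangulation on a hypothetical activity-driven point set; the library call is real, but the data and the stand-in height field are invented for illustration.

```python
import numpy as np
from scipy.spatial import Delaunay

# Hypothetical activity map: denser points where more site activity was recorded.
rng = np.random.default_rng(1)
hotspot = rng.normal(loc=[0.3, 0.7], scale=0.05, size=(40, 2))
background = rng.uniform(0.0, 1.0, size=(20, 2))
points = np.vstack([hotspot, background])

# Project onto a stand-in surface (a height field), then triangulate in the plane.
heights = 0.2 * np.sin(3 * points[:, 0]) * np.cos(2 * points[:, 1])
vertices = np.column_stack([points, heights])   # 3D vertex set for the mesh
tri = Delaunay(points)                          # faces group vertex indices into triangles

print(vertices.shape[0], "vertices,", len(tri.simplices), "triangular faces")
# Each row of tri.simplices indexes three vertices: one cell for one component.
```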
Figure 4. Point cloud projected on a surface is then triangulated with the Delaunay algorithm
A mesh delivers the availabilities related to variety, scale and topology, and breaks the limitations dictated by UV subdivision techniques. However, realizing this conceptual understanding requires custom digital tools that are not yet available in the software packages.

3.3. Essence: The Tool
Generative Components, while bringing ease to the application of repetitive actions, is not an open platform, since it does not accept geometric information imported via conventional import and export commands. Since it is a dynamic parametric platform, all geometry has to be constructed and manipulated in GC itself. However, it accepts data sets, which makes it possible to regenerate models created in other software platforms via numerical references in GC.
McNeel’s Rhino, by default and through open source materials, features many different meshing algorithms, which enable the designer to design and configure meshes in the desired way. But to use meshes as digital grounds for component population, a certain level of scripting knowledge is a prerequisite. Given the different strengths of the two platforms, and the interest in re-interpreting the digital ground for component making, the motivation emerges for a set of custom tools that create a smooth workflow across platforms. To enable the transformation of a Rhino mesh into a digital generative ground in Generative Components, a series of scripts was prepared. The meshes modeled in Rhino are transformed into two numerical data sets: the first stores the coordinate values of the mesh vertices, while the second stores the way the vertices are grouped to generate the mesh polygons. The data sets are exported into spreadsheets and then read into GC via customized scripts. While the vertices are represented as individual points, each mesh polygon becomes a responsive generative base on which component systems can be applied. The digital ground in GC now presents a vast amount of variety in scale, size, density and curvature compared to the basic UV surface subdivision systems existing in GC (Figure 5).
Figure 5. Variety in scale: Digital ground in GC
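The two-data-set hand-off is easy to sketch. Here is a hedged Python illustration (the authors’ actual scripts target Rhino and GC; the file names and the toy mesh are hypothetical): vertex coordinates go into one table, vertex groupings into another.

```python
import csv

# A toy mesh: vertex coordinates, and faces as groups of vertex indices.
vertices = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.2), (1.0, 1.0, 0.0), (0.0, 1.0, 0.1)]
faces = [(0, 1, 2), (0, 2, 3)]

# Data set 1: one row of x, y, z per vertex.
with open("vertices.csv", "w", newline="") as f:
    csv.writer(f).writerows(vertices)

# Data set 2: one row of vertex indices per polygon.
with open("faces.csv", "w", newline="") as f:
    csv.writer(f).writerows(faces)
# GC-side scripts would rebuild points from data set 1 and group them via data set 2.
```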
3.4. Design Cycle
The proposed idea of compiling components that change typologically depending on scalar deviations was tested during the studio sessions at Pratt. Once the mesh is imported into GC, the designer creates components that are driven by internal and external parameters but mainly dependent on scale: the components become surface elements at high curvature values, become less sophisticated in detail as they get smaller, and become articulated spatial entities as they get larger (Figure 6).
Figure 6. Architectural interpretation: Bradley Rothenberg (Pratt Institute)
The test grounds, once populated by components, can be evaluated and then manipulated by means of the initial mesh algorithm. As an evaluation and re-generation cycle develops across platforms, the design becomes concept-driven rather than falling into the trap of being software-specific.
4. Epilogue: Further Investigation
The sequence of studies, concluding with the final mesh creation approach, proposes an open system for the creation of digital grounds for designing with components. The richness of the approach lies in this openness: a mesh can be generated in many different ways, by many different algorithms. This availability releases the designer from the limitations of software-dictated design understandings. In the specific case of mesh generation, the design concept defines which meshing algorithm to choose, and the selected method affects the organizational characteristics of the polygonal mesh; thus the surface and spatial compositions and subdivisions become products of design intentions4. Work in progress includes the development of several scripts to classify and group certain mesh polygons depending on design intentions. Orientation, proximity to attraction points, scale or material properties can all be considered as possible influencing drivers for design. While some studies focus on different techniques of point cloud creation as an input for meshes, parallel studies aim to develop custom tools to ease the manipulation of meshes as digital grounds in GC.

Sources:
1 Hensel, M., Menges, A. and Weinstock, M., Techniques and Technologies in Morphogenetic Design, Great Britain: Wiley-Academy, 2006.
2 Pottmann, H., Asperl, A., Hofer, M. and Kilian, A., Architectural Geometry, United States of America: Bentley Institute Press, 2007, pp. 381-396.
3 de Berg, M., van Kreveld, M., Overmars, M. and Schwarzkopf, O., Computational Geometry: Algorithms and Applications, Germany: Springer, 2000.
4 Terzidis, K., Algorithmic Architecture, Great Britain: Architectural Press, 2006, p. 25.
CeramiSKIN
Biophilic Topological Potentials for Microscopic and Macroscopic Data in Ceramic Cladding
David Celento, RA
Assistant Professor, Department of Architecture, Digital Design/Digital Fabrication
The Pennsylvania State University, USA
dcelento@psu.edu
Del Harrow
Assistant Professor, School of Visual Arts, Ceramics
The Pennsylvania State University, USA
dbh13@psu.edu
Keywords: ceramic cladding systems, biophilia in architecture, digital design, digital fabrication, mass-customization.
Abstract:
CeramiSKIN is an inter-disciplinary investigation examining recursive patterns found in organic matter. Through the use of digital capture and translation techniques, these biophilic systems may serve as topological generators for structural and ornamental consequences well suited to mass-customizable ceramic cladding systems for architecture. Digital information is acquired through laser scanning and confocal electron microscopy, then deformed using particle physics engines and parametric transformations to create a range of effects promulgated through digital fabrication techniques. This inquiry is primarily concerned with two questions:
• Is it possible that natural systems may be digitally captured and translated into biophilic structural forms and/or ornamental effects that may foster beneficial responses in humans?
• Since natural orders eschew rigid manifold geometries in favor of compound plastic shapes, is it possible to fabricate mass-customized, large-scale biophilic ceramic cladding from organic digital data?
1. Critical Nature of Nature:
A recent article by Rivka Oxman convincingly argues that Digital Architectural Design (DAD) is substantively different from Computer Aided Design (CAD) due to new ways of design conceptualization and the possibilities inherent in versioning. She summarizes DAD as comprising the following models: formation, generative, and performance.1 CeramiSKIN is primarily concerned with Oxman’s generative model, using biophilic data. Specifically, we are exploring the possibilities inherent in the study of natural systems, enabled by increasingly advanced and accurate analysis within the scientific community. This endeavor intends to explore both structural and ornamental effects that can be manipulated in a three-dimensional design environment to create digitally fabricated ceramic forms. Data will be gathered at both microscopic and macroscopic levels as generative information for formal systems in architectural cladding. Natural structures received much attention in architectural and engineering circles during the past century following the publication of On Growth and Form by D’Arcy Wentworth Thompson in 1917.2 Numerous experiments and structures related to natural forms were explored, involving such things as soap bubbles (Matzke, 1945; Lewis, 1949),3 thin shell structures (Pier Luigi Nervi, 1891-1979), and polyhedrons
(Buckminster Fuller, 1895-1983); however, the tools for analysis (compared to today) were somewhat crude and speculative. Among other written works on the topic, Peter Pearce published Structure in Nature is a Strategy for Design4 in 1980, while Finding Form: Towards an Architecture of the Minimal was written by the venerable Frei Otto5 in 1995, suggesting continued relevance for designers. In light of the increasingly sophisticated data-gathering techniques in the scientific community, as well as the ability to translate this information directly into three-dimensional forms, natural systems and biophilic exploration offer significant possibilities for a number of aspects of design, from structure to ornament.
Figure 1. Nassellarian Skeleton, “On Growth and Form”, D’Arcy Wentworth Thompson
1.1. Digital Design—Novelty, Discovery, and Possibility:
The mediated nature of digital design and the complexity of current formal explorations have generated various positions regarding agency in attempts to categorize, comprehend, and create novel forms. The preoccupation with agency attenuates other meaningful explorations outside of architecture which offer significant opportunities for designers as well as potential benefits for users. Thus, the current emphasis on novelty may be somewhat limiting, as it partially veils numerous other inquiries that may offer insight into such aspects as structural optimization, durability, engagement, and possibly even value, to name a few. For example, the sciences have utilized digital techniques less for purposes of novelty than with a teleological emphasis on analysis and comprehension. The resulting discoveries in the sciences extend at least a century of architectural and engineering investigations in this area, with far greater knowledge today. Due to the increased clarity of recent scientific analyses and the production of digitally manipulatable data, these investigations are worthy of more attention in architectural spheres than they currently receive. Current digital investigations divide themselves into various didactic camps, but in reality these categories resist tidy characterization due to varying degrees of overlap. Thus, they are often employed in a less exclusive fashion than they are portrayed. Oxman’s categories are well-defined, but we respectfully propose alternative labels and one additional category that might more accurately suggest the processes at work:
1) Willful (in lieu of Formation): Complex shapes enabled by digital tools through conscious parametric manipulation. Highly responsive to cultural trends, as commented on by Sylvia Lavin.6
2) Consequential (in lieu of Generative): Includes algorithmic and generative strategies, often involving iterative mathematical and logical processes. See Terzidis.7
3) Embedded (in lieu of Performance): Dynamic, data-driven solutions involving parametric updating of information. Represented largely by Building Information Modeling (BIM). See Eastman, et al.8
4) Constructive (added to Oxman’s categories): Digital fabrication offers opportunities for most digital strategies to be physically produced directly from digital data by computer numerically controlled (CNC) machinery, thus representing a common meeting ground for all forms of digital design. This domain has historically been a significant part of the engineering discipline, but has been largely outside of architectural pedagogical models. This interdisciplinary area resembles, and offers benefits similar to, design/build models inspired by such enterprises as the Jersey Devil9 and the Rural Studio,10 and even the goals of the Bauhaus. Digital fabrication in architecture is explored in depth through writings and examples by Schodek, Bechthold, et al.,11 Callicott,12 and Aranda Lasch.13 Kieran Timberlake propose that this domain will offer significant advantages for those interested in engaging with the opportunities of this emergent area.14
These categories are bounded by current technologies, interests, and fashions; however, as computational power increases and societal trends evolve, other forms of digital creation will propagate. Continued exploration of emergent possibilities benefits designers far more than the institutionalization of boundaries around known strategies.

1.2. The Possibilities of Biophilia in the Digital Design Realm:
The Harvard biologist Edward O. Wilson, commenting on the natural world, uses the term biophilia to describe “the connections that human beings subconsciously seek with the rest of life.”15 Going beyond this observation is James Wise, an environmental psychologist from Washington State University, who suggests that it is the fractal patterns in nature that are primarily responsible for beneficial human responses. Further, Wise believes that these natural patterns can be mathematically reproduced with the same beneficial effects as those in nature,16 an intriguing idea for those pursuing consequential design explorations. To date some recent architectural works have incorporated biophilic strategies, such as the 1999 BMW soap bubble pavilion by Bernhard Franken, but most precedent in this area is limited to historical ornamental effects, including non-representational Islamic patterning. Even the ornamental panels of Louis Sullivan feature inspired organic reinterpretations of these non-representational Islamic forms, which have translated into ceramics quite well.
Figure 2. Terra Cotta Ornament, Guaranty Building, Buffalo, NY, Louis Sullivan, 1896
The artist Neil Forrest has also explored biophilic patterning in his contemporary work. This strategy of Arabesque patterning features repeating geometric forms that often echo the forms of plants and animals.
Figure 3. Braided Space, Neil Forrest
Figure 4. Braided Space, Neil Forrest, detail
2. CeramiSKIN Project Status: CeramiSKIN was recently accepted by the EKWC (European Ceramic Work Centre) in the Netherlands for an upcoming thirteen-week collaborative residency that concludes in the spring of 2009. The intention of the residency is for an artist and architect to collaborate closely throughout the project while exploring new avenues for architectural ceramics. <www.ekwc.nl> Our work will consist of explorations in porcelain as an architectural cladding system utilizing microscopic and macroscopic digital data to construct mass-customizable structural and ornamental effects. At the time of this writing, the project is in the initial development phase with limited, but promising, findings. The work will feature a variety of scalar studies, as well as a large-scale fabrication using ceramic cladding to be exhibited with nineteen other team projects from the past four years. We're particularly interested in the rich history of terra cotta and porcelain in architectural cladding systems. The use of 3-d scanning, simulated particle physics deformation, and parametric transformations will create surfaces that both evoke the historical use of ceramics in architecture and develop new territory in terms of structural promise, assembly techniques, and complexity of pattern, texture, and ornament. We also intend to explore a range of ceramic surface possibilities including glazing, digitally printed patterning, stained/colored clays, and metallic lusters, adding additional complexity to the ceramic surfaces.
3. Brief Historical Narrative of Ceramic Cladding:
Figure 5. Glazed Bricks, Gates of Ishtar, 6th century B.C.
Fired ceramics are certainly the oldest human-made architectural cladding material, and ceramic tiles have been made for at least 4,000 years. While "Egyptian paste," a technique for creating a glassy surface on ceramic, was developed about 7,000 years ago, this surface would have been susceptible to corrosion due to its high sodium content and was impossible to form in units larger than a few centimeters. Glaze, as a mixture of silica and fluxes in a fluid suspension applied to a clay object, became common in the Han dynasty in China (c. 200 BC), but may have developed earlier in Iran (as early as 900 BC). Durable glazed brick construction appeared in the Gates of Ishtar in Babylon and the Gates of Persepolis in Iran in the 6th century BC. The functional advantages of ceramic cladding are its durability and resistance to corrosive forces. The major innovations involved in the use of porcelain were increased durability and a high level of translucency in the glaze due to the interface between the clay and glaze surface. The primary material limitations of ceramics for architectural applications are their weight, their relatively low tensile strength, and their tendency to deform and crack in the forming process. To exploit the material advantages of ceramics while minimizing their structural limitations, ceramic tiles tend to be either adhered with mortar to a stone or concrete substrate or hung on a metal structure using a clipping system.
3.1. Ceramic Cladding Hung over Metal Structure: "The New York Skyline—which, without exaggeration, is the most wonderful building district in the world—is more than one half architectural terra-cotta."
—The New York Times, May 14, 1911
The use of architectural terra cotta developed along with the increasingly pervasive use of iron and steel in construction beginning in about 1775 (Thomas Paine's cast iron bridge—conceived as a monument to the American Revolution—was fabricated and exhibited in England in 1791). Terra-cotta cladding, usually in the form of large blocks hung on steel clips at the back side, was employed. The functional advantages of terra cotta cladding include resistance to weathering, oxidation, and fire, as well as lighter weight than stone. Complex
ornamental relief could be carved into the wet clay at a much lower cost in tooling and labor than carving hard stone. Terra cotta was used both unglazed and glazed—with a remarkable ability to imitate stone—and is seeing renewed popularity as cladding in work by Pei and others.
3.2. Issues Involved with the Sectional Assembly of Ceramic Cladding: Functional Necessity, Ornamental Use, and Technical Issues. The size of individual tiles is a function of the weight and structural properties of the material. Larger units have historically been difficult to fabricate, as larger pieces of clay tend to deform in the drying and firing process. Tiled surfaces tend to accumulate irregularities due to human error. Grid patterns are the easiest to realize, but more complex patterns have been achieved both through clever workarounds in assembly and through innovations in high-level mathematics.17 The physicist and Harvard graduate student Peter Lu has described a conceptual breakthrough that occurred around 1200 CE, when tile patterns were "reconceived as tessellations of a special set of equilateral polygons". This allowed near-perfect repeating patterns to be developed over large surfaces. In the construction of the Sydney Opera House, several models were tested for adhering the glazed tiles to the surface of the ferroconcrete shell. Finally, the concrete sections were cast into beds of ceramic tile and the complete sections were raised into place. In both of these scenarios the underlying problem is minimizing the conditions that contribute to irregularity due to human error in the application of repeated units over large and complex surfaces. The problem is compounded when working with complex curved surfaces; these are problems that digital design and digital fabrication may more readily solve.
3.3. CeramiSKIN Tectonics: There are significant advantages to working with tile units that are hung with mechanical connections rather than adhesive mortar. Assembly involves less possibility of human error—the connection points in the metal framework determine the absolute location of each tile. Broken or cracked units can be replaced easily over the life of the building, and "floating" units permit greater degrees of expansion and contraction. CeramiSKIN intends to develop a system in which grouped sections of tiles are joined together with flexible mechanical and adhesive connections to form larger units, which are then attached to a metal framework with mechanical connections. This method combines the working advantages of firing smaller individual tiles, which are inherently less susceptible to deformation in the firing process, with the on-site assembly advantages of larger units, and thus fewer total units to assemble on-site. Typically, ceramic cladding systems using mechanical connections have been developed only for flat planar surfaces with tile units arranged in a simple grid. Digital modeling can accurately map connection points between metal structure and cladding units, allowing for the development of complexity in the overall curvature of surfaces and in the variation and patterning of tile units.
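The mapping of connection points described above can be sketched in a few lines of scripting. The following minimal Python sketch is not the project's actual code: the surface function and grid dimensions are invented for illustration, whereas the project would derive its surfaces from scan data. It simply samples a doubly curved surface on a regular (u, v) grid to produce absolute anchor locations for a metal framework.

import math

# Minimal sketch: sample connection points for tile units on a doubly
# curved surface. Surface function and grid counts are illustrative.

def surface(u, v):
    """A gently vaulted surface over the unit square (meters)."""
    x, y = 3.0 * u, 2.0 * v
    z = 0.4 * math.sin(math.pi * u) * math.sin(math.pi * v)
    return (x, y, z)

def connection_points(nu, nv):
    """Absolute anchor locations for an nu-by-nv grid of tile units."""
    return [surface(i / (nu - 1), j / (nv - 1))
            for i in range(nu) for j in range(nv)]

for x, y, z in connection_points(6, 4):
    print(f"{x:6.3f} {y:6.3f} {z:6.3f}")

Because every anchor is computed from the same surface definition, revising the surface regenerates all connection points consistently—the property that makes mechanically hung cladding workable on curved geometry.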
4. Ceramic Cladding: Forming Techniques and Technology: A few basic forming techniques—with minor variations—have persisted throughout the history of architectural ceramics. The majority of ceramic tiles have been formed in a gypsum-based (plaster) mold. Plastic clay is pressed into the mold, which wicks moisture from the contact surface of the clay, drying it enough for the clay to shrink slightly and release from the mold. The clay dries as water evaporates, resulting in shrinkage of up to 15%. Clay contains two types of water: non-chemical, which will evaporate through drying; and chemical, which is only driven off during the firing process at about 1000°F. During the firing process molecules in the clay
reorient themselves, resulting in increased density, durability, and (at temperatures above 2200°F) the formation of a glass. Irregularities in the forming process are often invisible in the unfired clay but result in major structural defects in the fired piece. Contemporary industrially produced tiles are typically made using a "dry pressing" technique, in which clay containing only its chemical water is compacted into a mold at extremely high pressure. This allows for very low water content, even density, and very little shrinkage or deformation in the firing process. However, molds for dry pressing are expensive to produce and can consist of a maximum of only two parts, meaning the pressed forms cannot be very complex.
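Because shrinkage must be anticipated when sizing molds, the scale factor is worth making explicit. A minimal sketch, assuming the uniform linear shrinkage of up to 15% cited above (a real clay body's rate would be measured from test firings):

# Minimal sketch: oversize a mold to compensate for linear shrinkage.
# The 15% figure is the upper bound cited in the text.

def mold_dimension(fired_dimension, linear_shrinkage=0.15):
    """Wet-clay (mold) size needed to yield a target fired size."""
    return fired_dimension / (1.0 - linear_shrinkage)

# A tile meant to fire to 300 mm must be formed at about 353 mm.
print(round(mold_dimension(300.0), 1))  # -> 352.9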
4.1. CeramiSKIN Production: The European Ceramic Work Centre has developed as a "hands-on," highly experimental ceramic production facility. Very little of the production process has been standardized, in order to allow for a highly customized approach to the realization of each project. For our project this provides significant advantages over a traditional factory, in which mechanized processes limit the range of possible forms. The EKWC has developed innovative methods for reducing shrinkage and deformation of clay that do not require an investment in complex and expensive molds. Additions of fired and ground clays and fiber binders reduce shrinkage and increase strength in the unfired clay. It is important for us to be able to produce more complex multi-part molds quickly and inexpensively. More complex molds will allow us to create more detailed and higher relief, as well as tile units that fit together more precisely. The tiles will create interlocking systems not only as a surface of two-dimensional polygons but will also fit precisely on the "Z axis" as three-dimensional polyhedrons. Because of the cost and highly skilled labor involved in producing each mold using traditional processes, a relatively small number of molds has typically been used to create a corresponding number of units repeated over a surface. We're interested in using digital fabrication as a way of making more complex molds and larger numbers of unique molds. This is made possible by direct milling of mother molds—a mold of a mold. The sections of a mold can be made quickly by pouring plaster into the negative form of a mold section CNC-milled directly out of a foam block. The addition of binders and fillers to the plastic clay makes high-pressure forming and additional investment in mechanical jigging unnecessary.
5. Surface Treatments and Printing Technology: Our focus is on glazed surfaces fired between 2100°F and 2200°F. Glazing on tiles allows for a wide range of saturated colors and a surface that is highly resistant to corrosion, oxidation, and discoloration and permits easy cleaning. This temperature range takes advantage of a strong fit between the clay and glaze surface—minimizing the possibility of surface defects that become visible over time—and reduces the additional stresses and deformation in the fired units that may only appear at higher firing temperatures. In addition to this high-temperature glazed surface we'll also employ a new technique using digitally printed ceramic glazes that are fired at a lower temperature on top of the base glaze. This allows for additional layers of ornamental and graphic complexity in the cladding surface.
5.1 CeramiSKIN Glazing and Patterning: Recent innovations in ceramic printing technology make possible the direct printing of ceramic decals using inkjet printers retrofitted with ceramic inks (http://www.easyceramicdecals.com/). These were previously produced using screen printing, which involved a large investment in tooling. By using direct-printed decals we'll be able to experiment with a large range of possible surfaces and effects quickly, with a small investment in tooling and a greater degree of freedom to create mass-customized effects. In addition to printed decals we'll also develop surfaces using patterns composed of screen-printed metallic luster, whose constituent materials and processes have evolved from Persian luster-painted ceramics.
6. Technologies and Work in Progress: We intend to utilize the following technologies to explore an expansive range of transformative design solutions for ceramic skins in architectural cladding systems. Software:
Geomagic, RealFlow, Maya, Blender
Hardware:
Laser scanning, MRI, Confocal Electron Microscope, ZCorp/Dimension Rapid Prototyping, CNC milling.
6.1 Laser Scanning: Laser scanning allows for accurate and complex form capture, which may be manipulated into generative digital material. Organic forms will be explored at a scale of 1:1 for ornament.
Figure 6: Torso laserscan. Konica Minolta Vivid 700. Celento
6.2 Confocal Electron Microscope: Electron microscopes permit magnification of up to 2 million times with a resolution of 0.2 nanometers and generate digital data. Such data will permit investigation of organic material and orders that are entirely unfamiliar in daily life.
Figure 7: Confocal microscopic scan of Lily (PSU)
Figure 8: Conversion to vector information (Flash)
Figure 9: Compositing of confocal data with torso scan
6.3 RealFlow: Software-based particle physics permits complex alterations based upon fields that define characteristics such as flow, gravity, and mass, which is useful for sophisticated ornamental deformations.
Figure 10: RealFlow fluid particle simulation.
Figure 11: RealFlow fluid particle rapid prototype (ZCorp)
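RealFlow itself is a commercial package, but the underlying idea of field-driven particle deformation can be sketched generically. The following toy integrator is not RealFlow's solver, and all field constants are invented; it simply displaces a set of points under gravity and a radial flow field, of the kind used here to derive ornamental deformations:

import math

# Toy sketch of field-driven particle deformation (not RealFlow's
# solver). The resulting point cloud could deform a surface.

def step(p, v, dt=0.01, g=-9.8, k=2.0):
    """One explicit Euler step under gravity plus a radial flow field."""
    x, y, z = p
    r = math.hypot(x, y) or 1e-9
    ax, ay, az = k * x / r, k * y / r, g   # radial push + gravity
    vx, vy, vz = v[0] + ax * dt, v[1] + ay * dt, v[2] + az * dt
    return (x + vx * dt, y + vy * dt, z + vz * dt), (vx, vy, vz)

# Drop a small grid of particles and integrate for one second.
particles = [((i * 0.1, j * 0.1, 1.0), (0.0, 0.0, 0.0))
             for i in range(5) for j in range(5)]
for _ in range(100):
    particles = [step(p, v) for p, v in particles]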
6.4 Maya: Maya enables complex form generation and parametric deformations through scripting, permitting exploration of complex transformative designs.
Figure 12: Parametric transformations by Harris and Jaubert. Random Rules, S'06, Harvard GSD, Professors Monica Ponce de Leon (Office dA) and David Celento.
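Maya exposes this kind of parametric manipulation through its scripting interface. A tool-agnostic sketch of the idea follows; the twist deformation and its constants are illustrative assumptions, not the studio's actual scripts:

import math

# Sketch of a scripted parametric deformation: rotate each point about
# the z axis in proportion to its height (a simple twist).

def twist(points, angle_per_unit_height):
    out = []
    for x, y, z in points:
        a = angle_per_unit_height * z
        out.append((x * math.cos(a) - y * math.sin(a),
                    x * math.sin(a) + y * math.cos(a), z))
    return out

# A vertical strip of points, twisted a quarter turn per unit height.
strip = [(1.0, 0.0, k / 10.0) for k in range(11)]
twisted = twist(strip, math.pi / 2)

Varying the single parameter angle_per_unit_height regenerates the whole family of forms, which is the essence of the parametric transformations shown in Figure 12.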
7. Conclusion: At this early stage in the process, the technical challenges of data translation have been largely reconciled and final form generation is in progress. The results thus far suggest that a biophilic strategy for form generation is entirely realizable and offers significant opportunities. Next steps will involve translation of data to CNC formwork and ceramics casting, first at smaller scales for testing, then in larger scalar studies at the EKWC in the summer of 2008. Final results from this study will be presented at an EKWC exhibition in the summer of 2009.
8. Endnotes:
1 see Oxman, R., "Digital Architecture as a Challenge for Design Pedagogy: Theory, Models, Knowledge and Medium", Design Studies, Vol. 29, No. 2, March 2008, pp. 99-120. DAD models may be summarized as: 1) Formation - topological studies resulting from animation and parametric design directed by the author. 2) Generative - rule-based (and generally parametric) computational mechanisms that produce topological results, similar to prior "paper-based" architectural pursuits of natural orders and shape grammars. 3) Performance - potentials for going beyond simulation, analysis, and evaluation (as in Building Information Modeling - BIM), which Oxman suggests will eventually create dynamically modifiable results.
2 see Wentworth Thompson, D., On Growth and Form (orig. 1917), editions: Cambridge University Press, 1942; Dover, 1992; Cambridge University Press, 1992.
3 see Dodd, J. D., "An Approximation of the Minimal Tetrakaidecahedron", American Journal of Botany, Vol. 42, No. 6 (June 1955), pp. 566-569.
4 see Pearce, P., Structure in Nature is a Strategy for Design, MIT Press, 1980.
5 see Otto, F., Finding Form: Towards an Architecture of the Minimal, Edition Axel Menges, 1995.
6 see Lavin, S., "In a Contemporary Mood", in Hadid, Z. and Schumacher, P., Latent Utopias, Wien: Springer-Verlag, 2002, pp. 46-47.
7 see Terzidis, K., Algorithmic Architecture, Architectural Press, 2006, pp. 7-8.
8 see Eastman, C., Lee, G., Sacks, R., "Specifying parametric building object behavior (BOB) for a building information modeling system", Automation in Construction, Vol. 15, No. 6, November 2006, pp. 758-776.
9 see Branch, S., Palladino, M., Devil's Workshop: 25 Years of Jersey Devil Architecture, Princeton Architectural Press, 1997. See also http://www.jerseydevildesignbuild.com/
10 see Oppenheimer Dean, A., Hursley, T., Rural Studio: Samuel Mockbee and an Architecture of Decency, Princeton Architectural Press, 2002. See also http://cadc.auburn.edu/soa/rural%2Dstudio/
11 see Schodek, D., Bechthold, M., Griggs, K., Kao, K., Steinberg, M., Digital Design and Manufacturing: CAD/CAM Applications in Architecture and Design, John Wiley and Sons, Inc., 2005.
12 see Callicott, N., Computer-Aided Manufacture in Architecture: The Pursuit of Novelty, Architectural Press, 2001.
13 see Aranda, B., Lasch, C., Pamphlet Architecture 27: Tooling, Princeton Architectural Press, 2005.
14 see Kieran, S., Timberlake, J., Refabricating Architecture: How Manufacturing Methodologies are Poised to Transform Building Construction, McGraw-Hill Professional, 2003.
15 see Wilson, E. O., Biophilia, Harvard University Press, 1984.
16 see Wise, J., "Biophilia in Practice: Buildings that Connect People with Nature", Environmental Building News, July 2006.
17 see Lu, P., Steinhardt, P., "Decagonal and Quasi-Crystalline Tilings in Medieval Islamic Architecture", Science 315, 2007, pp. 1106-1110.
‘Digitality’ Controlled: Paradox or Necessity?
The emergent need for a codification of design rules within digital design methodologies, with references to Classical Architecture and the work of Claude Perrault and Gaudi.
Emmanouil Vermisso
ArchitRecture, Greece
archi_trek@hotmail.com, evermiss@syr.edu
Abstract
In view of a possible a-historical development of an architecture that is solely reliant on technology, this paper attempts to address the need for a set of working rules for digital processes which are at once flexible and controlled. As examples, we have re-considered Classicism within the current temporal context and in relation to available technologies and methods, by looking at how the Classical system was appropriated by theorists and architects like Claude Perrault and Antoni Gaudi.
1. Introduction
1.1. The emergence of 'Digitality'
'Digital Design' as a concept in architecture emerged during the '90s, although it was probably conceived earlier, through research in computer science begun in the '70s. It is a post-postmodernist vogue, and as such it initially demonstrated a sort of 'eclectic' nature, in the sense of its fluidity and lack of a fixed theoretical doctrine. The intuitive and therefore amusing manipulation of form made possible by CAD software and hardware rapidly established digital design as a universal modus operandi, first in academia and eventually in practice, notwithstanding the potential risks of adopting such a (theoretically) 'loose' approach to architecture. Many samples of contemporary architectural design exhibit what may be coined 'Digitality': a logic of design and fabrication that is inherent in the design process. This involves spline geometry, and therefore constitutes a more complex formal manipulation than architects have used before. More and more, Digitality is becoming pertinent to avant-garde design; this paper attempts to clarify how it may be embedded within the design methodology, and why it is possibly beneficial to situate it within a logical system of operation.
1.2. The encoding of Digitality in design
Digitality is observed in various fields, but architecturally it is materialized through advances in software and fabrication technologies, which have helped promote a new way of thinking among students and practitioners alike. The representation of buildings has become more visually sophisticated and resolved, while in terms of construction, innovative methods are pursued to accommodate the complexities introduced in the design. Bearing in mind that digital design is a product of a technologically intense and precise environment, the absence of guiding rules for its application would seem paradoxical. Such guidelines have existed in the past, in almost every architecturally active period, as early as the Classical Architecture of Greece and later on during the Enlightenment and the Industrial Revolution. Digitality as a result of computer technology infiltrating design may be analogous to the impetus on behalf of Enlightenment theorists (like Perrault) to reconfigure architecture relative to their contemporary scientific and technological progress.
2. Logical Systems of design rules
2.1. Classical Architecture & Claude Perrault's rationale of a new rule doctrine
The Classical proportional system uses a 'module'1 and further subdivisions ('minutes') to determine the size of all parts in a building in relation to each other. The clear rules present in ancient Greek architecture were further used during the Renaissance, bearing certain modifications as far as the extent of a module's subdivision was concerned. Unfortunately, measured drawings by later architects, like Palladio and Serlio, were not always entirely correct. For this reason, Claude Perrault2 decided, while translating Vitruvius3, to write a new treatise that would clarify the Classical system by situating it within the scientific rhetoric of the Enlightenment, and which could be used by others. Perrault perceived theory as 'rational method', and this is what he attempted to provide. Perrault's project was important, but his perception of it contradicted the very essence of Classical architecture: according to Wittkower, proportion during the Renaissance was linked to cosmological order, making architecture a quasi-mystical profession with almost metaphysical4 connotations; in addition, quoting Pérez-Gómez, 'In Renaissance treatises, the ontological foundation of architecture is always mathematical, that is, based on the numerical proportions present in the disegno'. The metaphysical nature of architectural theory was advocated by architects like Blondel. Nevertheless, the epistemological revolution of the 17th century and the Enlightenment, of which Perrault was a part, started what Pérez-Gómez5 (1983) calls the 'functionalization of architectural theory', a process that reversed the status of architecture from an 'ars liberalis' back to a mechanical craft. The desire of Perrault to reformulate theory with reference to the new science6 is further demonstrated during the 18th and 19th centuries, with the creation of measurement systems and the appearance of 'natural constants'. The scientific society was looking for ways to facilitate the exchange of ideas across the various fields and also between people and countries. In view of increased industrialization, France developed the metric system to help deal with the needs for exchange between currencies and also within the same currency, as well as to regulate taxation and expenditure in the empire already charted by Napoleon7. In general, 'measure' as a concept gained an unprecedented epistemological status within the scientific domain. It was then, for example, that the Golden Section acquired its new status as a natural constant. The process of this reorganization saw the emergence of statistics as a discipline. However, due to the way that probability is measured, standards of comparison were adjusted, distorting the true perception of reality: 'Thus, though we have become accustomed to thinking of the normal as the correct, if unexceptional, it practically and conceptually replaced the role of the ideal'8. This type of scientific rhetoric founded on statistics presents a possible drawback: systems of control often generalize (the 'normal') and so may compromise a quest for the optimum (the 'ideal'). It is evident that Perrault wished to impose a contemporary system of rules, albeit one that proved to be one-sided, because it was strongly influenced by the intellectual context of his time.
This context caused a drift away from the archaic ideals of Classicism and the inherently religious connotations of Classical ornament, which referred to sacrificial ritual in the time of Vitruvius (Hersey9 1988). An example of Perrault's built work is the eastern wing of the Louvre (his most important edifice) where, according to his own, more rational and scientific thinking about architectural proportions, he introduced a series of paired columns and ample intercolumniations10. The
façade, elegant as it is, is juxtaposed with the rest of the Louvre, which is more influenced by Baroque architecture, and it is certainly more austere than Bernini's original proposal. At this point, the importance of maintaining a correct framework within which the rules of a newly designed system may operate is undeniable.
2.2. Antoni Gaudi and the use of second order geometry in the Sagrada Familia
Another architect who challenged conventional thinking by using complex organic shapes and new geometries in design was Antoni Gaudi (1852-1926). Gaudi designed the Sagrada Familia in the Gothic and Classical styles combined. In this process he used second order geometry, that is, ruled surfaces (helicoids, hyperbolic paraboloids and hyperboloids of revolution), to arrive at the final shapes of the building (Figs. 2, 3). He had possibly foreseen the unfinished nature of the building during his final years, and used this type of geometry as a codex for his successors, who could apply it toward understanding the design and completing it (Mark Burry11 2004).
Figures 2, 3. The 'hyperboloid of one sheet' is a type of ruled surface used by Gaudi in the design of the Sagrada Familia; rendering of a nave window from the Sagrada Familia, resulting from Boolean subtractions of ruled surfaces from a notional solid (Burry 2004)
Considering the 'plasticity' of his forms, and the inadequacy of conventional drawings to convey information about the building, the existence of a legible logic behind the design becomes essential for further analysis. Gaudi used this system in the same way the 'modules' were originally used in Classicism. Nevertheless, unlike Classical architects, he probably lacked any aesthetic preconceptions12. An important question is whether Gaudi used ruled surface geometry to reach predetermined shapes he had chosen, or whether the shapes were the result of the geometry he used systematically. In other words, it becomes an issue of how far he could foresee what he was making. But, among other reasons, Gaudi is likely to have used a certain system to safeguard his authority over the design in the years to come. The building would be completed, but in such a way that left no margin for arbitrary alterations by his successors13. If one considers the context within which Gaudi operated, one where digital technology was something utopian (computers appeared more than twenty-five years after his death), it is remarkable that he decided to use the language of mathematics to ensure the continuity of his project. As Mark Burry observes, the geometries involved in the Sagrada Familia challenge the currently available computer hardware and software even today.
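The ruled character of these surfaces can be stated precisely. For the hyperboloid of one sheet, the standard implicit form and its straight-line generators are (textbook formulas, included here for reference):

\[
\frac{x^2}{a^2} + \frac{y^2}{b^2} - \frac{z^2}{c^2} = 1,
\]

with rulings

\[
\mathbf{r}(\theta, t) = \big(a(\cos\theta \mp t\sin\theta),\; b(\sin\theta \pm t\cos\theta),\; ct\big), \quad t \in \mathbb{R},
\]

where the two sign choices give the surface's two families of straight lines; every point lies on one line from each family. This is why such surfaces can be set out with straight guides or templates, and why Gaudi's geometric codex could be communicated unambiguously to his successors.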
2.3. Synthetic Classicism: a new way to look into Classical design
The abandonment of Classicism today on one hand, and the intuitive versatility of digital forms on the other, encouraged me to re-examine Classical moldings by looking at the performative qualities of NURBS curves, in contrast to the conventional representation of profiles as segments of conical sections. In 'The Theory of Moldings', C. Walker14 (1926) demonstrates the possibility for different moldings to produce very similar shadows (Fig. 4). Based on this assumption, some tests were designed, which produced a series of 'hybrid moldings' from the resulting profiles (Fig. 5). These were ultimately fabricated digitally with a laser cutter.
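The morphing operation behind Figures 4 and 5 can be sketched compactly. The following minimal Python sketch uses Bezier evaluation as a simplified stand-in for the NURBS curves actually used, and the control-point coordinates are invented for illustration:

# Minimal sketch: morphing a molding profile by moving control points.
# Profile data and displacement are illustrative, not Walker's sections.

def de_casteljau(ctrl, t):
    """Evaluate a Bezier curve at parameter t from its control points."""
    pts = list(ctrl)
    while len(pts) > 1:
        pts = [((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
               for (x0, y0), (x1, y1) in zip(pts, pts[1:])]
    return pts[0]

# Hypothetical control polygon for profile 'a' (an ovolo-like curve).
profile_a = [(0.0, 0.0), (0.2, 0.8), (0.8, 1.0), (1.0, 1.0)]

# Derive a new profile 'd' by displacing one control point.
profile_d = list(profile_a)
profile_d[1] = (0.35, 0.55)

# Sample both curves for comparison or export to a CAD tool.
samples_a = [de_casteljau(profile_a, i / 20) for i in range(21)]
samples_d = [de_casteljau(profile_d, i / 20) for i in range(21)]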
Figure 4. Classical profiles that have been digitally morphed by moving their control points (the original ‘a’ and new profiles ‘d’ and ‘e’ are shown here).
Figure 5. A number of profiles resulting from moving the control points of the original profile 'a' (profiles 'a', 'd' and 'e' are shown above).
Figure 6. Renderings of uniformly extruded profiles a, b, c, d, e, f, g, i, j
The digital model 'ade' of the lofted surface is sliced into layers parallel to the profile (axis x), and each layer is then laser cut and re-assembled. The molding was sliced into 98 layers for fabrication: sections taken at 3/16" (equal to the thickness of the cardboard used in the laser cutter) produced a molding 18 inches long (Fig. 7). Although the laser cutter was initially used to physically model the complex double curvatures of the moldings, it can initiate a new series of experiments in which the tools formally determine the final shape. For instance, depending on the thickness of the material used, the same digital model can produce laser-cut models of different lengths. This way, the process becomes embedded
within the end product, displaying a system of rules that modify the design by establishing a link between hardware and software.
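The link between material thickness and model length noted above reduces to simple arithmetic; a minimal sketch (sheet thickness and layer count from the text; the helper name is ours):

# Minimal sketch: sectional fabrication arithmetic. 98 layers of 3/16"
# cardboard yield a molding roughly 18" long; a thicker sheet stretches
# the same digital model.

def molding_length(n_layers, sheet_thickness):
    """Length of the assembled molding along the slicing axis."""
    return n_layers * sheet_thickness

print(molding_length(98, 3 / 16))   # -> 18.375 (about 18")
print(molding_length(98, 1 / 4))    # same model cut in 1/4" stock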
Figure 7. Snap-shots of the laser-cut molding that resulted from the combination (surface loft) of profiles a, d and e
The new moldings challenge Walker's conventional definition of 'a uniformly prolonged section…'15; the molding 'ade' is the resultant surface from lofting profiles a, d and e. Perhaps this is the point at which these new forms liberate themselves from strictly classical rules towards a more fluid, Art Nouveau-esque perception; as a result, a set of rules beneficial to these experiments may be one that is relative to the degree of curvature applied. This is, in turn, dependent on the movement of control points on the profile, and so the greater their freedom, the more fluid the result and possibly the deeper the shadows produced.
3. Conclusion
3.1. The Age of Cybernetics?
With his book 'Architectural Principles in the Age of Humanism'16, Wittkower (1949) hoped to encourage the development of 'new and unexpected solutions'17, without however suggesting any particular course of action. The book's impact was purely due to the uncritical attitude of students who adopted its ideas. It nevertheless created the impression of a present that is promising, in a non-specific way18. Wittkower's view on proportion was dual: it was a matter of aesthetics (subjective) and therefore not to be imposed; however, as a symbolic form it allowed access to natural order and so could be beneficial in design. We have long since moved into the age of Historicism (in which history affects the evolution of things, including architecture) and may also be entering a new age where digital technologies promote a fluid, amorphous, a-stylistic architecture (according to C. Hight, this may be 'the Age of Cybernetics').
3.2. Possibilities for new paradigms
We have considered a possible rule-making process for the purpose of consciously steering current design towards a non-random path. The way these rules are deployed stems partly from previous cases of systematic design attempts and partly from current issues within design, such as the closer linking of design and construction. The reasons for having organized design methodologies in the past have varied, depending on the time or the architect in question. By examining the context within which those design systems were formulated, we find some of these reasons are still valid today. The work of both Perrault and Gaudi is informative of how they tried to apply a 'controlled' design, each in his own way. In the case of Gaudi, the limited legibility of his system (perhaps too innovative for its time) has required a team of specialists to decipher the geometry of an incomplete building.
In a more recent attempt to re-examine the Classical language, I have demonstrated another possible way to think about Classicism and digitality in a complementary way. The reasons I offer are not that different from those of the architects before me, albeit raised within a more dynamic context of architectural development. It took almost a century for Perrault's and Gaudi's ideas to be embraced and properly interpreted, whereas architecture is now changing on a daily basis. For this reason, we should be fully conscious of the large amount of information we produce, so that it may be productively analyzed by our successors. This is not about a mere classification process, but about the urge to produce architecture worth classifying. In this enquiry towards rule-finding, a few issues may dictate the result: What type of system are we looking for? The ancient Classical system was based on proportion, and therefore aesthetics (although there is also the sophisticated effect of shadow to consider). Gaudi's system is purely geometrical, while the hybrid moldings emphasize performance. Naturally, proportion is embedded into all of the above systems, regardless of their focus. The creation of new rules aims to improve the perception of digital design. These may in fact not be 'rules' in the conventional sense, but a framework of thought within which one can refer to combined resources: a 'synthetic' way of considering things. How much should a system restrict the architect? As with all mechanisms of control, there is always the chance of excessive limitation that inverts the desired result; consequently, certain latitude should be allowed during the early stages of every new idea or system to let it develop intuitively before establishing universal guidelines. What is more, by applying the type of 'synthetic' logic mentioned above, such an intellectual framework becomes open-ended, and is no longer restrictive of, but complementary to, the creative genius of the architect. Along these lines, it is valuable to encourage specialized study, yet to apply it within a context that can benefit from specialists in other fields too; in other words, to embrace a collective appreciation of specialization which counteracts the disadvantages of isolated focused study as identified by Buckminster Fuller19 (1981). Reyner Banham saw a similar concern to reconfirm the design of his own time during the '60s: 'What this generation sought was historical justifications for its own attitudes, and it sought them in two main areas of history - the traditions of Modern Architecture itself…and the longer traditions of Classicism'20. Today, we are still evaluating - or at least should be - the development of design. It is always useful in such cases to trace continuities in the design traditions. Digital design, however, was born out of technology and is therefore independent of previous systems; hence a strong impulse to justify it historically21. In the end, it is a case of being critical. The notion of digital design is suggestive of a contextual approach to methodology; despite (and thanks to) its lack of historical references, digitality as a mode of cognition may rely on several contexts, one of which is that of natural processes ('biomimesis') or environmental performance. As a possible example of such thinking, the molding research tries to tackle the issue of performance by studying light with reference to an existing architectural language.
Regardless of any theoretical or stylistic tendencies, the establishment of a digital framework may simply rely on Architecture's inherent process of 'natural selection', which is fostered by ongoing discourse.
Endnotes
1 A 'module' was the average dimension used to determine the proportion of a Classical building, and was equal to the diameter of the base of the column shaft.
2 Claude Perrault (Paris 1613-1688) was the architect of the eastern range of the Louvre in Paris, and is best known for his translation of the Ten Books of Vitruvius. He also achieved success as a physician and anatomist.
3 Vitruvius's 'Ten Books' being the only written account of this period's architecture, this system would have evolved far more had it been more sufficiently documented.
4 The Golden Section (mathematically represented by the number Φ=1.618…) has always been divisive between architects as mystical and irrational, but also as an admirable measure of harmony that is mathematically represented in nature.
5 Pérez-Gómez, A: 1983, Architecture and the Crisis of Modern Science, MIT Press, Cambridge, Mass., pp. 3-14.
6 Perrault's training as a doctor justifies his adherence to a more scientific consideration of architecture and brings forward a similar opposition within the medical field during the Renaissance, between those who adopted the ancient readings and tradition of Asclepios, and the innovators of modern anatomy.
7 Hight, C: 2008, 'Measured Response' in Architectural Principles in the Age of Cybernetics, Routledge, UK, p. 145.
8 Ibid., p. 145.
9 Hersey, GL: 1988, The Lost Meaning of Classical Architecture: Speculations on Ornament from Vitruvius to Venturi, MIT Press, Cambridge, Mass., pp. 1-10.
10 From Pérez-Gómez's introduction to Perrault's 1993 [1683] 'Ordonnance for the Five Kinds of Columns after the Method of the Ancients', The Getty Center for the History of Art and the Humanities, Santa Monica, CA, pp. 5-8.
11 Burry, M: 2004, 'Virtually Gaudi' in Digital Tectonics, Wiley-Academy, UK, pp. 23-33.
12 In a sense, this makes him similar to contemporary 'digital' designers, who are often unaware of the end product, since this is largely based on the construction techniques (digital and not).
13 In the same way, Gaudi had inherited a Gothic revival plan by Francesco del Villar, which he could not bypass, and had to work with it.
14 Walker, CH: 2007 [1926], Theory of Moldings, W. W. Norton & Company.
15 Ibid.
16 In the Age of Humanism, the world remains unchanged.
17 Wittkower, R: 1971 [1949], Architectural Principles in the Age of Humanism, W. W. Norton & Company.
18 See Prolegomenon from 'Architectural Principles in the Age of Historicism', RJ Van Pelt & CW Westfall, 1991, Yale University Press.
19 Fuller criticizes specialization as being an 'Ideology' in the Marxist sense of a means imposed by authority to acquire control over the intellectually privileged. Excessive breakdown of a body of knowledge creates divisions within the professional and academic fields, so that each one cannot perceive the true value of his individual contribution. I am confident this can be reversed by creating think tanks from various specialists that will introduce innovative thinking in architectural issues.
20 From Hight, C: 2008, 'A Mid-Century Renaissance' in Architectural Principles in the Age of Cybernetics, Routledge, UK, p. 71.
21 Architecture is still linked to its historicist influences; however, perhaps the 'digital' age has more to demonstrate as founding principles besides history, such as the environment and performance that is based on state-of-the-art technology.
Building Information Modeling and Architectural Practice: On the Verge of a New Culture
Sherif Abdelmohsen
Georgia Institute of Technology, USA
sherif.morad@gatech.edu
Abstract
The introduction of machine-readable tools for architectural design, which focus not just on mere geometry or presentation but on the richness of information embedded computationally in the design, has impacted the way architects approach and manipulate their designs. With the rapid acceleration in building information modeling (BIM) as a process which fosters machine-readable applications, architects and other participants in the design and construction industry are using BIM tools in full collaboration. As a trend which is already invading architectural practice, BIM is gradually transforming the culture of the profession in many ways. This culture is developing new properties for its participants, knowledge construction mechanisms, resources, and production machineries. This paper puts forward the assumption that BIM has caused a state of transformation in the epistemic culture of architectural practice. It appears that practice in the architecture, engineering and construction (AEC) industry is still in this phase of transformation, on the verge of developing a new culture. The paper attempts to address the properties of such an emerging culture, and the new role architects are faced with to overcome its challenges.
1. Transformations in architectural practice: How is BIM different?
Conventional methods of manual drafting through sketching and digital drafting through computer-aided design (CAD) representation techniques have been used extensively by architects in the design process. In such methods, these types of representations are typically "logical pictures"1 of things: analogies rather than copies of reality, used to intentionally and explicitly inform about specific key aspects of the design in hand. They can be considered 'abstractions' of reality, but ones which hold within them a plethora of interpretations with varying degrees of ambiguity. The architect or designer in this case has control over 'what to show' or 'what to highlight' in his design, as the drawback of these drafting methods as non-machine-readable representations helps conceal specific points of interest. With the introduction of BIM in the architectural profession, the notion of the 'logical picture' in this sense became slightly different in meaning. The accuracy and degree of detail that BIM tools claim to maintain take machine-readable representations to the 'copy' end of the spectrum more than the 'analogy' end: to a virtual mimic of the elements, processes and mechanisms in a building rather than just a mere depiction. BIM was introduced as the latest generation of object-oriented CAD systems. It built on strategic limitations of previous systems. These limitations revolved around two key issues: previous systems produced representations interpretable only by humans and not computers, and they required endless effort to comprehend the full geometrical detail of those representations in order to adequately provide construction information, thus making them highly open to errors.
1 see Langer, S., "The study of forms", in An Introduction to Symbolic Logic, New York, Dover, 1967, p. 24.
BIM built on the main concept of the 'virtual building'1, which describes a single consistent source that captures all the information about the 'intelligent objects' in the represented building. BIM tools allow for extracting different views from a building model. These views are consistent, as each object instance is defined only once. Because BIM relies on machine-readable representations, data from virtual models can be used in many ways. Benefits such as spatial conflict checking, consistency within drawings, automatic generation of bills of material, energy analysis, cost estimation, ordering and tracking can be achieved. Other benefits include the integration of these analyses to provide feedback for architects and engineers from other disciplines, informing them of the effects of design alterations while designing, rather than post facto2.
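The "single consistent source" idea can be illustrated with a toy model: objects are defined once, and every downstream view (schedules, quantities) is derived from the same instances, so views cannot drift apart. A hedged Python sketch follows; the object schema is invented and far simpler than any real BIM data model (e.g., IFC):

from dataclasses import dataclass

# Toy illustration of "define once, derive many views".

@dataclass
class Wall:
    id: str
    length_m: float
    height_m: float
    material: str

model = [
    Wall("W1", 6.0, 3.0, "brick"),
    Wall("W2", 4.5, 3.0, "brick"),
    Wall("W3", 4.5, 3.0, "concrete"),
]

def bill_of_materials(walls):
    """Aggregate wall area per material from the single model."""
    bom = {}
    for w in walls:
        bom[w.material] = bom.get(w.material, 0.0) + w.length_m * w.height_m
    return bom

# Any number of views read the same instances; editing W2 once
# updates every derived schedule consistently.
print(bill_of_materials(model))  # {'brick': 31.5, 'concrete': 13.5}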
2. Properties of a new culture
The mechanisms of interaction among participants in the BIM process (how they "sit together at one table" early in the process to develop a project design, and how they "construct together" the project electronically as a "virtual building") have led to a revolution in project delivery. The culture of architectural practice is thus changing. Mechanisms of practice, participant roles and communication, and knowledge construction are all being reshaped in a way that is challenging for both practice and participants. BIM has created a paradigm shift that has caused a state of transformation in the epistemic culture3 of architectural practice.
2.1. Problem solving contexts
Different mindsets, machineries and mechanisms are being produced to cope with the rapidly developing territory of BIM. For example, the idea of variations in design problem solving contexts has been influenced by BIM. Traditionally, resources for finding clues to design "solutions" were usually embedded in freehand design sketches and prescriptive execution drawings4, in the experience and design expertise of talented architects5, and in the intuition of multidisciplinary teams working on projects. All these factors tended to vary elements of the design strategy in order to come up with "satisficing" solutions6. These resources usually constituted "communal stocks of knowledge" and "experiential registers" where the "narrative culture" of the architectural community contributes to the problem solving process through "blind variations"7.
1 see Seletsky, P., "Questioning the Role of BIM in Architectural Education: A Counter-Viewpoint", AECbytes Viewpoint #27, 2006.
2 see Eastman, C., Teicholz, P., Sacks, R. and Liston, K., BIM Handbook: A Guide to Building Information Modeling for Owners, Managers, Designers, Engineers and Contractors, Wiley, 2008.
3,7 see Knorr Cetina, K., Epistemic Cultures: How the Sciences Make Knowledge, Cambridge, MA: Harvard University Press, 1999.
4 see Schön, D., The Reflective Practitioner: How Professionals Think in Action, London, Temple Smith, 1983.
5 see Cross, N., "The Expertise of Exceptional Designers", in N Cross and E Edmonds (eds), Sydney, Australia, 2003, pp. 23-35.
6 see Simon, H. and March, J., Organizations, New York, Wiley, 1958.
With BIM, whether or not resources for problem solving contexts have remained unchanged is a controversial issue. Viewed as a tool which automates various stages of the design process, BIM is seen as enclosing the process in a closed universe, where systematic variations are operationalized within the detailed technical specifics of the mechanism of the digital tool alone. Viewed as a process in itself, however, which emphasizes the underlying logic of design, BIM is seen as fostering creative thinking, enhancing paths of self discovery, and enabling the synthesis of knowledge through the digital tool. But knowledge resources have shifted to "registers" within the intelligent tool rather than "experiential registers"1.
2.2. Knowledge construction
Participants in the architecture, engineering and construction industry are moving from being active "epistemic agents"2 who produce the knowledge of that culture throughout the process to being passive agents. The produced knowledge is embedded within the "single model", where all participants contribute to a pool of knowledge, thus melting down individual identities within the knowledge construction process. Copyright, data management and database transaction issues become highly critical. Contributions to a specific project or to knowledge databases that inform future projects become identity-anonymous in a way that may require a redefinition of roles and responsibilities. Although architects would argue against this idea from a "master architect ego" point of view, no single individual or group of individuals can be identified as producing the resulting knowledge, and the only epistemic agent becomes the extended building information model. The architect's role in this sense can alternate between the "master builder" and the "team participant". It is up to the architect, then, to define the main emphasis of his role in the process of knowledge construction.
2.3. Knowledge representation
An important shift has also occurred in the design and construction industry, where the focus has deviated from skilled artistic techniques, catchy perspective drawings, and individual talents toward representations of information and unified methods of presentation, exchange formats, and model attributes for the sake of collaboration and coordination between different participants. Personal "master architect" talents and techniques have blurred due to the global response to the concepts of the "virtual model" and the "single building model repository". The architect has to rethink how to define his identity as a designer in this global and anonymous repository, and representation techniques are definitely no longer the proper way to express identity. This consequently implies cutting down the amount of artistic strokes, intentionally ambiguous sketches and approximate drawings that are open to interpretation. The architect has to predefine a lot of elements and attributes before attempting to sketch a 3D volume. Ideally, the virtual output is quickly and accurately executed, associated with a whole set of information-rich by-products. On the other hand, from a cognitive standpoint, understanding the modeling tool's parameters, mechanisms and attributes introduces a burden to the creative process of design. The architect has to model something whose final output or shape he cannot yet foresee, but with a relatively high degree of precision beforehand.
1,2 see Knorr Cetina, K., Epistemic Cultures: How the Sciences Make Knowledge, Cambridge, MA: Harvard University Press, 1999.
2.4. Mechanisms and data exchange
Emphasis has inverted within the discipline from care of the object to care of the self, where BIM systems work on the designs of architects, consultants and various collaborators, only to absorb them into a closed universe1. Within this "single universe", the focus has become more involved in controlling, improving, monitoring and understanding the internal processes and components within the system. The culture becomes concerned with self-analyzing the 'design', and reconstructing it internally 'within the BIM tool' to produce negative images of registers through sign-oriented systems. The design elements (walls, spaces, doors, windows, etc.) enter the BIM system as these objects, but get transformed internally into signs that are absorbed and reconstructed via purely internal interpretations, in terms of other 'meaningless' variables, neutral file formats and extensions, rather than relations close to the registered design elements. This is done for the purpose of satisfying the internal tool mechanisms, which demand exchangeable formats in order to build up the single building model. The internal tool mechanisms thus have more control over the design element in hand.
2.5. Scope of expertise
Throughout the many transformations that have influenced the architectural profession, the role of the architect as "master builder" has fluctuated between having full control over design, construction methods and roles, and having a narrower focus on design aspects, thus being isolated from the decision making process in other disciplines. BIM has presented a new challenge for the architect's scope of expertise. Architects are not only dealing with design complexities, but are facing the steep learning curve of computational tools which have become essential for engaging in BIM projects. The process of learning new tools is different from that of conventional CAD tools. It is now more a process of learning how to integrate BIM-authoring tools with concept design tools, analysis and simulation tools, facility management tools and others in a seamless fashion that serves the purpose of the design strategies at the same time. The question then becomes: where does the expertise required of the architect lie? In this sense, expertise does not reside in design expertise2 alone or computational tool expertise3 alone, but in the strategic management4 of both design and tool strategies across all design phases.
1 see Knorr Cetina, K., Epistemic Cultures: How the Sciences Make Knowledge, Cambridge, MA: Harvard University Press, 1999.
2 see Cross, N., "The Expertise of Exceptional Designers", in N Cross and E Edmonds (eds), Sydney, Australia, 2003, pp. 23-35.
3 see Bhavnani, S. and John, B., "The Strategic Use of Complex Computer Systems", in Human-Computer Interaction, Volume 15, 2000, pp. 107-137.
4 see Sanguinetti, P. and Abdelmohsen, S., "On the Strategic Integration of Sketching and Parametric Modeling in Conceptual Design", in proceedings of ACADIA 2007, Halifax, Nova Scotia, Canada, 2007, pp. 242-249.
3. The expanded role of the architect
The role of the architect is therefore now expanding and being redefined. The architect himself is going through a stage of transformation, where he has to extend the scope of his expertise, regulate the mechanisms of knowledge construction and representation, define
resources for problem solving contexts, and manage methods of data exchange in this new "BIM-enabled culture", while maintaining the role of the "master orchestrator" or "key team participant". Although this places more load on the architect, it is perhaps ironically significant at the same time: the architect needs to be deeply involved in the development and research of BIM technology, and to remain in continuous communication with BIM-authoring software vendors. The architect needs to ensure that, while meeting the requirements of clients and specialists, what matters is not only efficiency but the larger goal of design, so that creativity and extended horizons of design imagination are not sacrificed to mere business requirements. In this sense, this new "BIM-enabled culture" may allow architects to maintain the balance by regaining control and management of the architecture, engineering and construction process. At the same time, BIM should not divert architects from the fact that they are both technologists and artists. Architects' designs should meet objective performance criteria and solve problems, but architects still have the privilege of 'making meaning'1. Through their creative work, they tend to understand culture, society, history and time, and to reflect these through buildings that represent signs of people's social values, beliefs and feelings. It is not all about clients' needs, and architects should keep that in mind in order to know what "gives things meaning" in their designs. BIM may have influenced the culture of the architectural profession profoundly, but if tackled properly by architects, it could allow the master architect with the universal view to establish the agenda yet again for the AEC process.
References
Langer, S., "The study of forms", in An Introduction to Symbolic Logic, New York, Dover, 1967, pp. 21-44.
Seletsky, P., "Questioning the Role of BIM in Architectural Education: A Counter-Viewpoint", AECbytes Viewpoint #27, 2006. [http://www.aecbytes.com/viewpoint/2006/issue_27.html]
Eastman, C., Teicholz, P., Sacks, R. and Liston, K., BIM Handbook: A Guide to Building Information Modeling for Owners, Managers, Designers, Engineers and Contractors, Wiley, 2008.
Knorr Cetina, K., Epistemic Cultures: How the Sciences Make Knowledge, Cambridge, MA, Harvard University Press, 1999.
Schön, D., The Reflective Practitioner: How Professionals Think in Action, London, Temple Smith, 1983.
Cross, N., "The Expertise of Exceptional Designers", in N Cross and E Edmonds (eds), Expertise in Design, Creativity and Cognition Press, University of Technology, Sydney, Australia, 2003, pp. 23-35.
Simon, H. and March, J., Organizations, New York, Wiley, 1958.
Bhavnani, S. and John, B., "The Strategic Use of Complex Computer Systems", in Human-Computer Interaction, Volume 15, 2000, pp. 107-137.
Sanguinetti, P. and Abdelmohsen, S., "On the Strategic Integration of Sketching and Parametric Modeling in Conceptual Design", in proceedings of the Association for Computer Aided Design in Architecture International Conference (ACADIA 2007), Halifax, Nova Scotia, Canada, 2007, pp. 242-249.
Shore, B., Culture in Mind: Cognition, Culture, and the Problem of Meaning, Oxford University Press, 1996.
1 see Shore, B. Culture in Mind: Cognition, Culture, and the Problem of Meaning, Oxford University Press, 1996.
Digitally Mediated Regional Building Cultures
Oliver Neumann
School of Architecture and Landscape Architecture, University of British Columbia, Canada
neumann@oliverneumann.com
Abstract
Designs are complex energy and material systems and products of diverse cultural, economic, and environmental conditions that engage with their extended context. This approach relates architecture to the discourse on complexity. The design research described in this paper introduces an extended definition of ecology that expands the scope of design discourse beyond the environmental performance of materials and types of construction to broader cultural considerations. Parallel to enabling rich formal explorations, digital modeling and fabrication tools provide a basis for engaging with complex ecologies within which design and building exist. Innovative design applications of digital media emphasize interdependencies between new design methods and their particular context in material science, economy, and culture. In British Columbia, influences of fabrication and building technology are evident in the development of a regional cultural identity that is characterized by wood construction. While embracing digital technology as a key to future development and geographic identity, three collaborative digital wood fabrication projects illustrate distinctions between concepts of complexity and responsiveness and their application in design and construction.
1. Responsiveness and interaction
Architectural designs must be understood as complex energy and material systems and as products of diverse cultural, economic, and environmental conditions that engage with their extended context. This approach applies equally to design at the scale of objects, buildings, and cities, and it relates architecture to the discourse on complexity. The design research described in this paper introduces an extended definition of ecology that expands the scope of design beyond the environmental performance of materials and types of construction to broader cultural considerations. In addition to enabling rich formal explorations, digital modeling and fabrication tools provide a basis for engaging with the complex ecologies within which design and building exist. Digital modeling with parametric features, in particular, offers responsive design methods that facilitate engagement with a range of dynamic parameters. By establishing relationships between elements of a design that are analogous to mathematical equations, element parameters can be manipulated while constraints and dependencies between elements are maintained. The resulting dynamic models are able to respond to changes and offer a high degree of flexibility and coordination. These processes apply to everyday considerations of design, fabrication, and construction as well as to conceptual explorations of dynamic conditions.
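The dependency logic described above can be made concrete in a short sketch. The following Python fragment is purely illustrative and not drawn from the projects discussed in this paper; names such as Param, roof_span and max_beam_spacing are invented. Changing one input re-evaluates every dependent parameter while the declared fabrication constraint continues to hold:

import math

class Param:
    """A design parameter: either a fixed input or derived from other parameters."""
    def __init__(self, value=None, rule=None):
        self.value = value   # fixed input value
        self.rule = rule     # function computing the value from other Params

    def get(self):
        return self.rule() if self.rule else self.value

# Independent inputs.
roof_span = Param(12.0)        # metres
max_beam_spacing = Param(0.9)  # metres; a fabrication constraint

# Dependent parameters, analogous to equations between elements.
beam_count = Param(rule=lambda: math.ceil(roof_span.get() / max_beam_spacing.get()) + 1)
beam_spacing = Param(rule=lambda: roof_span.get() / (beam_count.get() - 1))

# Manipulating one element propagates through the dependency network.
roof_span.value = 15.0
print(beam_count.get(), round(beam_spacing.get(), 3))  # -> 18 0.882

Dedicated parametric modelers resolve such dependency graphs incrementally and at far larger scale, but the principle is the same: constraints are preserved while parameters vary.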
The responsiveness of parametric modeling and related digital design techniques broadens possibilities for the integration of diverse references into the design process. By extending the range of aspects and parameters considered in the design process, digital design encourages innovative design responses to regionally specific conditions. As novel rereadings and exploitations of existing contexts, such designs emphasize interdependencies between new design methods and their particular ecology. Using an expanded definition of ecological design grounded in its material, economic, and cultural context, material expressions become significant references for innovative design and production. Therefore, particular modes of production and craft play a central role in design grounded in ecology, with interdisciplinary collaborations in design, building, and research reflecting discursive engagements with a cultural environment in flux. While modern science often relies on an anthropocentric understanding of the environment, ecological design is grounded in interrelationships with "a repertoire of operatives affected by time, patterns of connectivity, and changing populations of multiple components."1 Accordingly, an extended definition of ecology expands the scope of design beyond the environmental performance of materials and types of construction to broader cultural considerations. In a common approach that focuses on the natural environment, ecological design incorporates "a complex set of interdependent interactions…, which must be regarded dynamically."2 However, the expanded definition of ecology outlined here looks beyond environmental considerations to include cultural, social, and ecological factors. Tradition and craft are two reference points for an ecological design that acknowledges the need to position digital explorations in architecture as part of a "hybrid discipline"3 with collaborations of architects, engineers, material scientists, companies and practitioners. Innovative ecological design approaches embrace digital technology as a key to future development and geographic identity, as aspects of place now include interrelated natural and man-made conditions, including social, cultural, economic, and technological factors. In this framework, spatial concepts are informed by the logic of fabrication and methods of assembly. A reciprocal relationship between technology, space, and locale suggests that the introduction of new technology generates new spatial concepts. While situating context-specific design at the intersection of local and global influences has been a common theme since the early 20th century, when industrialization and the increase of mass-produced building materials promoted a sense of regionalism as a reconciliation of local and global influences, modern applications of technology have often been treated as independent of space and place. With the development and incorporation of digital technology into the design process, however, design interventions can be integrated into their respective context.
2. Digital technology and regional formation
Design that engages in the conditions of a location requires attentiveness to the dynamic particularities rather than to a generalized concept of place. In British Columbia, where influences of fabrication and building technology are evident in the development of a regional cultural identity, modern architecture and industrial design resulted from the integration of international and local influences. While plywood furniture design combined a modernist sensitivity and fabrication methods with local influences, modern West Coast post-and-beam architecture synthesized cultural influences from Europe with local craft and the geographic conditions of climate and topography.
Today, as standardization and mass production have given way to mass-customization processes, digital fabrication technology offers an opportunity for an architectural culture that simultaneously looks to global developments and to the particularities of the local context. Geography takes on a broader definition that encompasses the social, economic, cultural, and technological factors of a given locality, and their global influences. In the Pacific Northwest, this particularly applies to wood construction.
In British Columbia, wood figures as a primary characteristic of place in several ways: the particular B.C. ecology, with forestry as a principal economic factor; the development and use of wood products and their industries and craftsmanship; and available contemporary wood fabrication technologies. In the face of limited wood supply and growing pressure from foreign producers, the British Columbia lumber industry depends on technology to enhance the value of wood products in order to remain competitive. Global economic competitiveness requires the ability to export ideas, technological know-how, and sophisticated building techniques. The focus on invention in building methods through technology in design is therefore critical in a global competition where finite resources, growing environmental awareness and a focus on the preservation of natural resources are central aspects of a regional building culture that is environmentally sensitive and economically viable in the long term. Three recent wood fabrication projects illustrate aspects of the integration of architecture into its extended context. Situated in British Columbia, the projects explore the potential of digital modeling and fabrication technologies to contribute to regionally specific designs. While each of the design-build projects engages a range of conditions concerning the relationship of ecological design, digital media, and fabrication technologies, the projects are categorized to foreground particular aspects of the mediation of regional building through digital media, design tools, and logics of fabrication. Digital design and fabrication tools are used to generate "forms that can be built effectively, ... (to) achieve a balance between aesthetic intrigue, innovation and efficiency in new structural forms"4 as a direct reflection of the ecological conditions that constitute their extended context. Borne out of a comprehensive procedure that identifies and explores a range of social, economic, and environmental factors that figure into the generation of architecture, the regionally specific structures are seen as expressions of regional formation rather than as built forms autonomous of their production processes.
2.2. Naramata Roof Structure: digital fabrication - concepts and application
Figure 1. Completed roof structure
Digital fabrication tools such as CNC beam processors provide a direct link between computer-aided modeling and physical form. These digital output devices allow the direct translation of conceptual models into built form and promote the evolution of practical aspects of traditional wood building methods. The ensuing processes allow the development of culturally responsive designs and buildings that explore the dynamic relationship of technology and culture, economy and landscape. The resulting spatial organizations and formal expressions demonstrate an evolving architecture rooted in complex ecologies. The Naramata Roof Project explores CNC wood fabrication technologies for the design, fabrication, and assembly of a small roof structure for farm use in Naramata, British Columbia. The roof is intended as a protected meeting and workspace for the farm
community. The project includes research on wood joinery and wood structures, as well as the design of the roof, preparation of the construction site, the fabrication of roof components, and the assembly of the wood structure at the site. The realization of the project included the use of a variety of media, fabrication, and construction methods. The Naramata Roof Structure highlights the potential of digital wood fabrication technologies for the design and fabrication of a context-specific project. The research also illustrates effects of the translation from digital design media to building, as the particular conditions for the use of digital design media, wood fabrication software, and wood fabrication technology, and for the assembly of the structure at the site, all contribute to the built project. A particular focus of the project was the translation of the design concept, developed using digital modeling and wood fabrication software, into the built structure. Material tolerances, assembly sequences, and the accuracy of the digital fabrication process, as well as the limitations of manual construction methods under site conditions, constituted limiting factors for the project. Designed as a structural truss, the Naramata Roof Project fuses the principles of a truss and a space frame. Deploying a mirrored array of three-dimensional components to form the overall roof, the structure is designed to form and operate as a three-dimensional configuration rather than an array of planar structures linked together with sheathing or cladding. The project illustrates that mass-customization processes using digital design media and wood fabrication technology allow for the material- and time-efficient translation of spatially complex designs. Variations of joints and building configurations that respond to site, program requirements, and available materials can be generated without compromising the efficiency of the fabrication process. Similarly, the research project reveals distinctions between the conceptual and spatial potential of digital design and fabrication technologies' flexibility and responsiveness and their actual application in design, fabrication, assembly, and construction, reflecting the limitations of each phase of the design and building process.
Figure 2. Digital modeling, CNC fabrication, and assembly
Initially, design concepts were developed using Rhinoceros 3-D modeling software and physical model studies. To prepare data for wood fabrication, the digital design models were then imported into Cadwork wood fabrication software. Subsequently, the fabrication data was used to fabricate the components of the wood roof structure on a Hundegger K2 CNC beam processor. In a final step, the pre-fabricated components of the wood structure were assembled, and additional OSB sheathing, aluminum flashing, and roofing were prepared and installed at the site.
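Seen as a data flow, this workflow is a chain of translations in which each stage narrows the design to what the next tool accepts. The sketch below is a hypothetical outline of that chain; the stage names echo the tools mentioned above, but all functions and data types are invented for illustration:

from dataclasses import dataclass

@dataclass
class DesignModel:       # conceptual model, e.g. from a 3-D modeler
    members: list

@dataclass
class FabricationData:   # machine-ready description for a CNC beam processor
    toolpaths: list

def to_fabrication_data(model: DesignModel) -> FabricationData:
    """Translate the design model into fabrication-software terms;
    joinery, lumber sizes and tool limits are resolved at this stage."""
    return FabricationData(toolpaths=[("cut", m) for m in model.members])

def fabricate(data: FabricationData) -> list:
    """Drive the CNC beam processor over the generated toolpaths."""
    return [f"component({op}:{member})" for op, member in data.toolpaths]

def assemble(components: list) -> str:
    """Site assembly; sheathing, flashing and roofing follow manually."""
    return f"structure with {len(components)} prefabricated components"

model = DesignModel(members=["chord-a", "chord-b", "web-1", "joint-3"])
print(assemble(fabricate(to_fabrication_data(model))))

The point of the staging is that information only flows forward once each translation has resolved the constraints of its downstream tool, which is exactly where the limiting factors described above appear.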
The particular context of the Naramata Roof Project includes the conditions at the site, program requirements, available design and fabrication methods, building materials, and methods of assembly. As a group design and building project, exchange and interaction between members of the research team were also central contributing factors to the quality of the design and building. Similarly, the integration of the project into its particular context of design and fabrication illustrates both potentials and limitations for the design and building process. The translation of the digital design model into Cadwork fabrication software takes into consideration the available tool setup of the Hundegger K2 4-axis router used for the fabrication of roof components. The roof configuration and joint geometry are adjusted to acknowledge the particular tool set of the available 4-axis router. In addition to avoiding compound angles, processable lumber sizes are considered in the design revisions in Cadwork, with the choice of lumber sizes and quality directly affecting the joinery design. The positioning of connecting bolts is coordinated with available drill bits, router sizes, and tool path limitations. In addition, repeated joints and wood configurations are copied during the design process in Cadwork to promote efficiency in the fabrication process by minimizing the generated fabrication data and related fabrication times.
2.3. Loon Lake Cabin: environment and economy
Figure 3. Project diagram and main elevation
The Loon Lake Cabin, built with Solid-Wood-Wall panels, explores the potential of mass-customized Solid-Wood-Wall technology for a regionally specific design sensitive to its ecology. The project focuses on the spatial, economic, and environmental implications of the panel construction method. Material characteristics and spatial configurations particular to Solid-Wood-Wall construction are investigated in the context of the British Columbia building culture and the particular cultural, economic, and environmental conditions of the region and site. The compact design of the cabin references existing campground architecture and is intended to maintain the character of the campgrounds while introducing contemporary wood building methods.
Figure 4. Solid-Wood-Wall panels, concept elevation + plan configuration
The project for a cabin built with Solid-Wood-Wall panels is a collaboration between the University of British Columbia Research Forest, the Hundegger Maschinenbau GmbH based
in Germany, the UBC School of Architecture and Landscape Architecture, the UBC Centre for Advanced Wood Processing, and participating engineers. The project includes research on the spatial and environmental implications of Solid-Wood-Wall construction methods. Digital wood fabrication, material characteristics, environmental performance and spatial configurations particular to Solid-Wood-Wall construction are explored to identify the relevance of Solid-Wood-Wall construction in the context of the British Columbia building culture and the particular economic and environmental conditions of the region. As a building method that uses large amounts of wood, Solid-Wood-Wall construction responds to the current environmental and economic necessity of utilizing large quantities of beetle-infested wood in British Columbia within a short time period. Given that some areas of British Columbia will see up to 80 percent of their mature pine trees killed by the beetle infestation in the near future, Solid-Wood-Wall construction offers an opportunity to utilize the affected wood for building components with high environmental performance and to avoid the economic and environmental damage from dying forests in the region. With its inherent sustainable and structural qualities, the new construction product and method can address current environmental and economic challenges.
2.4. Loon Lake Outdoor Theater Roof: context and expression
Figure 5. Initial space-truss configuration and perforated horizontal plywood diaphragm
The Loon Lake Outdoor Theater Roof takes large-scale CNC fabrication technologies as a starting point for innovative wood construction methods that build on the B.C. wood building tradition and the existing forestry industry to promote sustainable wood building designs through material efficiency and efficiency of assembly. Given this focus, the design research is intended to contribute to the transformation of the B.C. wood industry from a resource-based to a technologically sophisticated, knowledge-based economy. The design is both rooted in its local circumstances and reflective of larger processes, addressing technology's potential to generate designs consistent with the conditions of their extended context. While the design aims to satisfy specific needs related to program, climate, and locale, the project considers a scale beyond the site and immediate context of its intervention, referencing complex processes that equally influence and are affected by the design. The roof project illustrates digital fabrication technology's potential to generate designs consistent with the conditions of the place of their intervention. The design research project uses CNC timber framing software and CNC fabrication technology to build a material-efficient wood roof structure that meets the requirements of the local program and site conditions while exploring forms and expressions particular to digital fabrication technologies. As a product of a multidisciplinary collaborative design process including architects, engineers, material scientists, digital fabricators and software developers, the roof reflects
the diversity of aspects implicated by the use of digital wood fabrication technologies. Utilizing existing columns, the roof provides cover for a small open-air stage. As a light and open structure, the roof preserves important visual relationships from the outdoor theater seating to the surrounding forest and lake beyond. A visually perforated roof plane promotes the integration of activities on the stage into the surroundings, with sunlight and shadows from the surrounding trees animating the roof structure and stage below.
Figure 6. Project site + revised roof configuration
Similar to the previously described projects, the theater roof project explores technical, spatial, and cultural aspects of CNC wood fabrication. References for the project are technological innovation and the formal expression of contemporary wood structures. The roof project illustrates how spatial concepts are informed by the logic of fabrication and methods of assembly. With a focus on material efficiency and contemporary interpretations of wood-to-wood joinery, the roof design explores the materially and structurally efficient use of small wood members in a non-hierarchical space-truss configuration. The lightweight, material-efficient roof is designed to minimize the number of necessary wood members and to simplify the wood-to-wood joints. Hung from existing columns, the space-truss configuration with a perforated horizontal plywood diaphragm for rotational stability responds to the load conditions in the trusses, with the underside of the structure shaped to reflect the moment forces in the structure. Equally, the roof design introduces a scale independent of the size and resolution of the wood structure. While the structural logic of the project responds to the forces in the roof and to the orientation of the stage towards the audience, the design references the complexity of the project's extended context by introducing a graphic pattern autonomous of the size and resolution of the wood structure.
3. Digitally mediated regional designs
As collaborative design-build projects that depend on the cooperation of structural engineers, wood scientists, software developers, and fabricators, the multidisciplinary projects illustrate the complex relationships and conditions particular to design grounded in its broader ecological context. Forestry, wood building materials, and wood building methods are essential aspects of the cultural context of the Northwest. The collaborative projects are focused on promoting the integration of digital techniques into regional design and construction practice, and on illustrating how digital modeling and fabrication can contribute to the conception of new building methods and spaces. By exploring how digital fabrication leads to conceptual explorations and form-finding processes that influence existing design and construction practices, the research projects contribute to the necessary transformation of the regional
wood industry in the Northwest from a resource-based economy to an economy based on knowledge and global competition. This paper has illustrated that CNC fabrication technologies can produce new spatial and material expressions consistent with the notion of complex environments. Given the capacity to efficiently create ever-smaller building modules and spatially complex building components, CNC-fabricated wood building elements can be designed to meet the specific and changing requirements of individual building projects while promoting efficiency of material use and assembly. CNC fabrication processes allow the efficient application of mass-customization technologies and the exploration of formal and spatial conditions corresponding to ideas of complexity and to the openness, individuality and self-expression of contemporary living conditions. With their inherent sustainable and economic characteristics, contemporary wood products and fabrication and production methods can be used to generate site-specific designs as part of an ecologically sensitive building culture. While the architecture generated using contemporary CNC wood fabrication technology currently benefits from the import of technology and software developed in Europe, its application is not limited to revisiting familiar configurations, structures, and traditional joinery. Rather, in the projects described here, contemporary fabrication technology provides a basis for design explorations specific to the economic and cultural context of British Columbia, with its existing wood building culture and focus on sustainability and the natural environment. While the design research projects illustrate concepts corresponding to the complexity of the context of design, they also highlight the limitations of translating abstract conceptual frameworks into material-based applications. Recognizing that "the key advantage of advanced technology" is to provide "greater freedom in the design process, rather than more flexibility or open-endedness in the finished product,"5 the projects, consistent with their focus on contemporary wood building methods, acknowledge the particular characteristics of wood design and construction and point to an architecture grounded in the ecology of its extended context.
Endnotes
1 see Easterling, K. Organization Space: Landscape, Highways, and Houses in America, Cambridge and London: MIT Press, 1999.
2 see Yeang, K., "A Theory of Ecological Design", in Rethinking Technology, Braham, W. and Hale, J. (eds), London and New York: Routledge, 2007, pp. 388-95.
3 see Leach, N., Turnbull, D., Williams, C., "Introduction", in digital tectonics, Leach, N., Turnbull, D., Williams, C. (eds), London: Wiley Academy, 2004, pp. 4-12.
4 see Shea, K., "Directed Randomness", in digital tectonics, Leach, N., Turnbull, D., Williams, C. (eds), London: Wiley Academy, 2004, pp. 88-101.
5 see Jones, W. "Towards a Loose Modularity", Praxis, Issue 3, 2001, p. 21.
Using Project Information Clouds to Preserve Design Stories within the Digital Architecture Workplace
David Harrison, Victoria University of Wellington, New Zealand, david.harrison@stress-free.co.nz
Michael Donn, Victoria University of Wellington, New Zealand, michael.donn@vuw.ac.nz
Abstract
During the development of an architectural design a series of design stories form. These stories chronicle the collective decision-making process of the diverse project team. Current digital design processes often fail to record these design stories because of the emphasis placed on the concise and accurate generation of the virtual model. This focus on an all-encompassing digital model is detrimental to design stories because it limits participation, consolidates information flow and risks editorialisation of design discussion. Project Information Clouds are proposed as a digital space for design team participants to link, categorise and repurpose existing digital information into comprehensible design stories in support of the digital building model. Instead of a discrete tool, the Project Information Cloud is a set of principles derived from a proven distributed information network, the World Wide Web. The seven guiding principles of the Project Information Cloud are simplicity, modular design, decentralisation, ubiquity, information awareness, evolutionary semantics and context sensitivity. These principles, when applied to the development of existing and new digital design tools, are intended to improve information exchange and participation within the distributed project team.
1. Preserving design stories within Project Information Clouds
Design stories form within architectural projects through the interweaving of design conversation, decisions and outcomes. These design stories are valuable in determining a project's current state, and they increase the accessibility of information within the design team. Unfortunately, current digital architectural design tools emphasise the production and communication of outcomes ahead of the preservation of conversations and decisions. To resolve this shortcoming, the concept of Project Information Clouds is proposed as a means of digitally recording and maintaining these design stories. The Project Information Cloud is not a discrete entity but a set of principles. These principles, when applied to the development of existing and new digital design tools, are intended to improve information exchange and participation within the distributed project team. The principles that comprise the Project Information Cloud are derived from concepts similar to those that fostered the World Wide Web. Although the architecture, engineering and construction (AEC) industry was slow to adopt digital design processes, it is now undergoing rapid digital evolution. This digital migration was both a response to and an enabler of increased information processing demands. Hampering the recording of design stories during this evolution was the disconnect between the tools used to communicate and those used to record design outcomes. Whilst digital communication through email and the Web has significantly improved the quantity of project communication1, this data often fails to be directly or indirectly associated with the digital
design outcome in any structured way. Likewise, whilst the digital tools used to model architecture can record design outcomes in exacting detail, they do so in a closed, virtual environment devoid of real context. Not only does this closed environment restrict participation, it also limits the ability of those interacting with the model to comprehend design decisions. Subsequently, whilst the AEC industry currently has powerful tools for communicating vast amounts of data and recording virtual outcomes in exacting detail, it lacks a digital vocabulary for weaving these two distinct information streams into coherent and maintainable design stories.
2. Deriving value from digital architecture's design stories
Architecture is as much about personalities and decisions as it is about the eventual built form2. A project has multiple design story threads, each one a subset of the personalities, decisions and outcomes contained in the overall design. The understanding of these design stories is instrumental in enabling a project team to collaborate effectively during the course of the design and construction process. Whilst of limited value at the moment of project conception, these stories appreciate over the building's life-cycle to fulfill the role of decision-making aids and historical learning resources. Traditionally, design stories were established through direct participation and narrated to others should the need arise. Digital design is eroding these bonds through its ability to break down geographic constraints and consolidate project information around tightly controlled, data-rich models. This has led to more distributed and efficient design processes. However, it has reduced the ability of all design participants to comprehend and in some cases take part in ongoing design stories. Ironically, in an effort to improve efficiency and distribution, digital design tools may in fact be degrading the underlying strength of the design process. The project team must be able to digitally establish, reinforce and derive value from design stories. Therefore, they must be able to participate in the linking, categorisation and repurposing of all project information, whether it be a complex virtual model, a conventional plan or a digital message. In order for this to take place there needs to be a shift in the way design participants treat their digital archives. Digital design artifacts cannot continue to be isolated and shielded from other project data. Instead, these data points and their associated meta-data should be considered part of a larger network, which when viewed as a whole forms a Project Information Cloud. There are two challenges to overcome if discrete project data is to be treated as part of this larger meta-network. The first is the organisational and legal constraints which accompany any professional exchange of data. Whilst a Project Information Cloud will need to respect the ownership and privacy requirements of existing data, the contributed meta-data used in the establishment of design stories should be considered property of the collective project team. Communal ownership is an essential element of this meta-layer because it ensures all parties are free to copy, preserve and build upon existing digital stories in perpetuity. The second and perhaps more difficult challenge is to overcome the dominant trend within digital architecture to record all design outcomes within a single, complex and highly regulated digital building model.
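What such a communal meta-data layer might hold can be suggested with a small sketch. The record type below is hypothetical, not a format proposed by the authors: it links two existing project artifacts, attaches participant commentary, and leaves the artifacts themselves untouched wherever they are stored.

# Hypothetical meta-data record for a Project Information Cloud.
# The artifacts stay at their original, access-controlled locations;
# only the communally owned link layer is shown here.

from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class StoryLink:
    source: str                      # URI of any project artifact
    target: str                      # URI of a related artifact
    relation: str                    # free, evolving taxonomy term
    author: str
    comment: str = ""
    created: datetime = field(default_factory=datetime.utcnow)

# A fragment of a design story: an email thread explains a model revision.
link = StoryLink(
    source="mailto:archive/2008-03-12-glazing-thread",
    target="http://example.org/project/model/revision/42",
    relation="justifies",
    author="structural-engineer",
    comment="Glazing bar spacing changed after thermal analysis discussion.",
)
print(link.relation, "->", link.target)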
3. Why digital building models compromise design stories
To efficiently manage the increased amounts of project information, the current trend in digital architecture is to build increasingly complex and information-dense virtual models. The premise of this trend is that the more comprehensively and accurately a virtual outcome can be modeled, the more efficiently the project team will be able to manage the information and processes associated with it. This objective has seen the traditional notion
of Computer Aided Architectural Design (CAAD) evolve into the concept of the Building Information Model3 (BIM). Unlike CAAD, which at its core is a digital extension of the drafting table4, BIM accurately records the analytical and semantic characteristics of an architectural design within a highly structured, semi-intelligent digital model. BIM is not a fundamentally new idea; it draws much of its technical inspiration from Product Model technologies proven within the aerospace, shipbuilding and manufacturing industries. This combination of CAAD and Product Model results in an architectural information modeling tool capable of utilising semantic data structures to create efficient and versatile working environments5. However, to attain these benefits the design team must consolidate all significant architectural information around a single, highly structured BIM. Regrettably, by establishing this concise and complex point of truth, the ability of all participants to accurately record and comprehend design stories is diminished.
3.1 Complexity reduces participation
Participation is important to design stories because architecture is the physical representation of a collective decision-making process6. BIM imposes process and knowledge barriers to participation due to its dependence on a single, complex data structure. In an effort to ensure the digital building model's integrity, the authority to manipulate the data is restricted. Even when permission is granted, participants must understand and be capable of using the complicated software interfaces which govern the building model7. This participation bottleneck means the project team generally relies on selected participants to funnel relevant design data and decisions into the BIM. Owing to their status in the project team and close association with the digital building model, the role of digital shepherd generally falls to the architect. The architect is undoubtedly appreciative of this fact, as it reinforces their place as the project's information and decision-making hub. Unfortunately, those who take on this role can consciously or subconsciously filter out information vital to the recording and comprehension of design stories.
3.2 Rigid centralisation leads to editorialisation
Compounding BIM's participation bottleneck is its rigid and often proprietary data structure. This limits the type and quantity of information capable of being stored within the digital building model. Whilst this enables consistency and efficiency, it often requires third-party information to be editorialised and associated with a foreign semantic system before it can be included within the project BIM. This manipulation can potentially lead to degradation of the design stories through editorialisation and confusion. Vendors of BIM are aware of these data storage limitations and are continually extending the semantic structures within their products8. However, this semantic extension occurs at the risk of increased complexity, and with the knowledge that no rigid structure can handle all potential data or semantic needs during the telling of design stories.
3.3 Virtual accuracy confuses practical reality
Accuracy within an architectural project is crucial, but it is equally important to know where inaccuracies and tolerances lie. Architecture ultimately manifests itself in the physical environment, and it is important for the project team to understand where, how and why the physical form deviates from its virtual blueprint.
Traditional design representation depicted an abstract and partial description of the intended built form. In contrast, BIM's capacity to depict a highly accurate, yet ultimately idealised, virtual truth risks impeding the ability of design participants to comprehend or accept the discrepancies between the virtual and
physical realms. This issue becomes more pronounced as rapid design changes and construction inconsistencies are introduced into the process. If those administering the BIM cannot keep pace with these changes, then information will be lost, incorrect decisions will be made and the design stories will suffer. It is possible that BIM implementations will eventually evolve to account for the issues raised in this discussion. However, it is highly unlikely that within the foreseeable future a single digital building model will efficiently or accurately capture a project's design stories. Therefore, to ensure accurate recording of the design stories, the Project Information Cloud must exist as a distinct yet supporting element to BIM.
4. Learning from the Web to create the Project Information Cloud
Attempting to accurately record design stories using BIM highlights the inherent problem of using a centralised, highly structured data model to capture decentralised, unstructured decision making. A better means of capturing such data is to establish a distributed Project Information Cloud to which all participants can contribute equally. Fortunately, many of the underlying principles and technologies necessary to create such a space already exist within the World Wide Web. The Web is the most successful distributed digital information network currently in existence. This success stems from the ability it gives anyone to create and link to relatively unstructured data in meaningful ways. The AEC industry has not ignored the Web, but it has yet to embrace the Web's full potential within the architectural design process. As in every industry, the availability of the Web and email has revolutionised the speed and distance across which project teams can communicate and exchange data. However, the actual processes of the industry itself have yet to be considerably influenced by the Web's principles or technologies. Project intranets have been adopted in a limited fashion within the AEC industry. However, these have primarily acted as digital extensions of traditional filing cabinets rather than as a new methodology for collaborative design9. Whilst these tools can be valuable management and auditing aids, their centralised nature and the fact that they are controlled by one group of design participants generally relegate their role to that of digital document manager for a specific project team or organisation. If implementations of the Project Information Cloud are to be based on similar technologies, they must overcome these shortcomings. This can be achieved by adhering to a common set of principles which emphasise decentralisation and ubiquitous data formats that all participants can utilise. Establishing a common set of principles will ensure that design stories can be created and openly syndicated amongst the distributed project team.
5. The principles of the Project Information Cloud
For the Project Information Cloud to be established, seven guiding principles should inform the methodologies and technologies that constitute it: simplicity, modular design, decentralisation, ubiquity, information awareness, evolutionary semantics and context sensitivity. These principles are inspired by the concepts that have driven the development of the World Wide Web10, yet reflect the objective of the Project Information Cloud to be a common, distributed environment for exchanging design meta-data and preserving cohesive design stories.
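Before each principle is elaborated below, a toy sketch can suggest how several of them might surface in software. The class and its methods are invented for this illustration and are not a proposed implementation:

# Hypothetical sketch of how the principles might surface in code.

from collections import defaultdict

class CloudNode:
    """One peer in a decentralised information cloud; no node is central."""
    def __init__(self, name):
        self.name = name
        self.links = []                       # simplicity: plain link records
        self.subscribers = defaultdict(list)  # information awareness

    def add_link(self, link, tags):
        # evolutionary semantics: tags are free-form and may change over time
        self.links.append({"link": link, "tags": set(tags)})
        for tag in tags:
            for notify in self.subscribers[tag]:
                notify(link)                  # syndicate the change

    def view(self, role_tags):
        # context sensitivity: show only what suits a participant's role
        return [entry["link"] for entry in self.links
                if entry["tags"] & set(role_tags)]

node = CloudNode("engineer-workstation")
node.subscribers["structure"].append(lambda link: print("notified:", link))
node.add_link("email/2008-03-12 -> model/rev42", ["structure", "glazing"])
print(node.view(["structure"]))

Because every node carries the same simple structure, any component could be swapped for an independently developed one, which is the intent of the modular design principle.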
The principles of simplicity, modular design and decentralisation are intended to ensure implementations of the Project Information Cloud are capable of accommodating the largest
and most fragmented project teams. The principle of simplicity aims to ensure that the underlying data formats and structures that form the Cloud's fabric are easy to understand and replicate. This principle will ensure that a broad range of digital design tools can evolve to interact with this space and the design stories it contains11. The principle of modular design aims to ensure that undue influence cannot be exerted by a single participant or software vendor. To achieve this, any component of the Project Information Cloud should be replaceable by a similar, independently developed component. The centralisation of digital information is a key inhibitor to storing design stories within the Building Information Model. To avoid this problem, the principle of decentralisation declares that the Cloud cannot be formed around, or rely upon, a specific digital information source. Within this space all points of data are of equal significance, to ensure scalability and equal participation by all project members. The principles of ubiquity, information awareness, evolutionary semantics and context sensitivity are intended to promote the intelligent distribution of design information throughout the project team. The principle of ubiquity should influence the nature of the digital information exchanged: rather than stipulating data formats, the emphasis of the Project Information Cloud should be on identifying the most common formats available within each project team. As this data is referenced in the Cloud, the principle of information awareness ensures that changes are efficiently syndicated throughout the design team. The principle of evolutionary semantics states that the taxonomy of the Project Information Cloud must be capable of changing12. This will assist in meeting the diverse and shifting classification requirements of the design stories. Finally, the principle of context sensitivity ensures that design team participants are presented only with information appropriate to their role or the project's current state. Through the embodiment of these seven principles, implementations of the Project Information Cloud can succeed in digitally recording a project's design stories.
6. Conclusion
Design stories are a valuable outcome of the architectural process. Despite this, project teams lack the ability to easily weave digital information streams into cohesive design stories. The current trend towards centralised Building Information Models has further degraded design stories, as these models impose barriers to participation and rigid semantic data structures. The concept of a Project Information Cloud is proposed as a means of allowing participants to record design stories within a meta-data layer that inherits properties of the World Wide Web. By learning from the underlying lessons of the Web, the AEC industry can position itself to evolve its digital methodologies and tools. This will enable the formation of Project Information Clouds. Once in place, these clouds should improve the project team's ability to digitally record design discussion and its relationship to the Building Information Model. It is envisaged that the Project Information Cloud will provide AEC professionals with a more capable means of utilising their design stories for problem solving and collaboration.
Endnotes
1 see Aragon, Patrick. 2006. Reinventing Collaboration Across Internal and External Project Teams. http://www.aecbytes.com/viewpoint/2006/issue_28.html (accessed 3 March, 2007).
2 see Kvan, Thomas. "Collaborative Design: What is it?" Automation in Construction 9, no. 4 (2000): 409-15.
3 see D'Agostino, Bruce, Marisé Mikulis, and Mark Bridgers. Eighth Annual Survey of Owners. FMI/CMAA, 2007.
4 see Willis, Daniel, and Woodward, Todd. "Diminishing Difficulty - Mass Customization and the Digital Production of Architecture." Harvard Design Magazine 23 (2005): 71-83.
5 see Ibrahim, Mary. "To BIM or Not to BIM, This is Not the Question." Paper presented at the Communicating Space(s) 24th eCAADe Conference, Volos, Greece, 2006.
6 see Cooper, Graham, Cerulli, Cristina, Peng, Chengzhi, and Rezgui, Yacine. "Tracking Decision-Making During Architectural Design." ITcon (2005): 125-39.
7 see Kiviniemi, A, M Fischer, and V Bazjanac. "Multi-Model Environment: Links Between Objects in Different Building Models." Paper presented at the CIB W78 22nd International Conference on Information Technology in Construction, Dresden, Germany, 2005.
8 see Amor, Robert, Ying Jiang, and Xiaofan Chen. "BIM in 2007 - Are We There Yet?" Paper presented at Bringing ITC Knowledge to Work, Maribor, Slovenia, 2007.
9 see Al-Reshaid, K, and N Kartam. "Improving Construction Communication: The Impact of Online Technology." Paper presented at CIB W78, Vancouver, Canada, 1999.
10 see Berners-Lee, Tim. 1998. Principles of Design. http://www.w3.org/DesignIssues/Principles.html (accessed August 10, 2007).
11 see Berners-Lee, Tim, and Mendelsohn, Noah. 2001. The Rule of Least Power. http://www.w3.org/2001/tag/doc/leastPower.html (accessed March 20, 2008).
12 see Mathes, Adam. "Folksonomies - Cooperative Classification and Communication Through Shared Metadata." Computer Mediated Communication - LIS590CMC (2004).
Information-matter hybrids
Prototypes engaging immediacy as architectural quality
Christian Friedrich, Delft University of Technology, The Netherlands, h.c.friedrich@gmail.com
Abstract
'Immediate Architecture' is an exploratory investigation into possibilities of immediate interactive and constructive interaction with the built environment supported by digital technologies. The aim is to realize interactive, reconfigurable architectural objects that support their informational and material reconfiguration in real-time. The outcome is intended to become a synergetic amalgam of interactive architecture, parametric design environment, and automated component fabrication and assembly. To this end, computational and material strategies are developed to approach the condition of immediate architecture and are applied in real-world prototypes. A series of developed techniques are presented, ranging from real-time volumetric modeling, behavioral programming and a meta-application protocol to streaming fabrication and dynamic components for interactive architecture.
1. Introduction
The mass transition from analogue to digital media, the digital revolution, took place a decade ago. In contemporary architectural praxis, as in everyday life, digital technology has become ubiquitous; we participate in the ongoing evolution of the digital tools and devices which are now part of our environment. Digital media radically changed the relation between information and the material carriers used for its communication and storage, in effect changing how we situate both informational and material artifacts. The multimodal interfaces between the informational and the material domain provided by digital technologies introduce more flexible superpositions of one onto the other, in adaptive, addressable manners. Information can be wirelessly communicated around the globe in an instant, stored in various media, processed in ubiquitous devices, and written back into the material environment as streams of light, sound or energy. Users can route how sensory data gathered from the environment is interpreted, processed and communicated, and can control fabrication tools which mold and forge materials with heat, laser beams, drills, band-saws and printers. Since architecture is all about informing the material formation of our built environment, applications of computational techniques have already radically affected architectural practice. In the design phase, successive generations of parametric design applications, intelligent computing techniques, and scripting environments make tools for handling design complexity more accessible to the architectural profession. Collaborative environments facilitate the exchange of information between partners in the building process. Non-standard production techniques allow file-to-factory production of mass-customized building elements. Interactive, kinetic architectures preserve a part of the fluidity of the parametric model, the adaptation of which becomes an open-ended conversation between users and acting buildings. Digital technologies put higher performance at the hand of designers. They offer an extensible, shared medium for creating and realizing higher-complexity designs with higher precision at higher speeds in economically feasible manners. Digital technologies start to integrate the design process and the built environment.
Digital media can sense the built environment, be involved in material construction, compute and propagate, and be interfaced with the bodies of users and designers. It is remarkable how little the design processes of digital architects are yet influenced by live-mapped sensory data of a project's environmental context, and how little the stakeholders are integrated into design and decision making. How few buildings are, after their realization, still affected in their
material configuration by the computational processes that were involved in their design and construction.
2. Immediate Architectures
'Immediate Architectures' is an exploratory investigation into possibilities of immediate constructive interaction with the built environment supported by digital technologies. The aim is to realize interactive, reconfigurable architectural objects that support their informational and material reconfiguration in real-time. The outcome is intended to become a synergetic amalgam of interactive architecture, parametric design environment, and automated component fabrication and assembly. These are to be supported by computational and material strategies that are developed to approach the state of immediate architecture and applied in real-world prototypes. Multi-linearity in architectural design processes and in encounters with interactive environments is a major theme of the contemporary discourse on digital architecture. The next step is to close the big loop and include production and reconfiguration, to reach a process where the building is not just informed by emergent processes but in its entirety undergoes an open-ended emergent process. Immediate Architecture is, by virtue of collapsing the phased timeline of the architectural process into a singularity in time, a radical challenge to conventional notions of architecture. The digital design and fabrication toolbox is then used as a device for linking spatial experience immediately to action, to design, to production. Digital design environments and fabrication devices are then applied to orchestrate in real-time the concurrent, simultaneous operations of usage, design, planning, fabrication and construction. In this combination of interactive architectures and digital design environments with computer-controlled production techniques, the designer's dialogue with the built environment may reach an unprecedented state of immediacy. To this goal, a series of techniques are developed, providing a tool framework for handling real-time architecture throughout the mentioned fields. The techniques do not divide the design and construction process into phases; they orchestrate a state of now, which can be directed by designers as well as users. In order to approach this state of immediacy, unifying principles of digital architecture have to be found and applied as a basis for the realization of prototypes. These principles should allow for handling the informational domain, modeling information, executing code, and having a spatial representation which goes beyond the description of surfaces. They should also offer ways of binding the informational structure to the actuating and sensory elements of the material structure, as well as into the fabrication and assembly process of new elements.
3. Executable Topologies
Non-standard reconfigurable, interactive structures are an inherently topological challenge: in the geometric descriptions of the smallest parts, in the conception of parametric elements with a variable number of parts and connections, in the structure of assemblies of elements, in the behavioral code running in each computing core, in the behavioral relations between cores, between elements, and between elements and users, and in the topological changes of the built structure by interactive kinetics or reconfiguration. Executable topologies are the project's code.
They can be generated, computed, produced and assembled; they define the production network and are the basic description of the interactions which are the foundations for emergent behavior. This topological continuum of structures and relationships can be encoded as a Graph, interactions as BehaviorGraphs, spatial relationships as SpaceGraphs. With executable topologies, the Building is the Information Model.
3.1. Spatial representation: SpaceGraphs
For the prototypes, representations of space are developed for the state of immediacy. In contrast to conventional architectural geometry, where demands are fit into a chosen structure, with these representations the structure is an outcome of an abstract description
of demands. In real-time exploration, it becomes possible to model the topology and the geometry of the prototype at the same time. SmartVolumes is a method for deriving structural topologies immediately from the spatial constellation of point-clouds. Modeling the behavior of the point-cloud nodes is combined with volumetric design exploration based on three-dimensional (additively weighted) Voronoi power diagrams. The method combines fast calculation of three-dimensional weighted Voronoi power diagrams with a volume-dependent feedback loop, resulting in a real-time interactive modeling tool. This tool, named SmartVolumes, has been integrated into the modeling environment BehaviourLinks, where the interaction between parametric volumes and other entities can be further elaborated through behavioral linkages. Applications of SmartVolumes in urban design and architectural design are described, implications of the use of Voronoi diagrams for architectural modeling and environments are discussed, and directions of consecutive developments are indicated.
Figure 4. In-game screenshots of the SmartVolumes modeler
SmartVolumes is a tool for finding structures and generating geometries based on volumetric and behavioral demands. It supports designers in exploring the complexity of geometries based on Voronoi diagrams: in a three-dimensional Voronoi diagram, the location of points, the volumes of cells, the faces of cell surfaces, the edges of these facets and the endpoints of these edges are all implicitly related to each other. Each change to the generative point cloud simultaneously affects structure, building physics, details, aesthetics and other performances of the design. These complex interrelations demand a tool to efficiently explore possible solution spaces. SmartVolumes is intended to meet these demands. SpaceGraphs represent design space as a network of its partitioning into generic elements: nodes, edges, facets and cells. SpaceGraphs are a description of the spatial relationships of generated, actuated or sensed sets of elements, which facilitates the generation of aware informational or material structures. As such they are a generic tool for architectural design exploration and for informing the behavior of interactive environments. SmartVolumes is a first exploration into SpaceGraphs, to be used for real-time conceptual design exploration.
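The mechanics behind such a tool can be suggested with a compact sketch. The fragment below is not the SmartVolumes implementation; assuming a coarse sampled volume, it only illustrates the two ingredients named above: cells assigned by additively weighted power distance, and a feedback loop that adjusts each point's weight until its cell volume approaches a target. The grid resolution and the gain factor are invented.

# Illustrative weighted Voronoi (power diagram) assignment with a
# volume-dependent feedback loop over a sampled design volume.

import itertools

points = [(2.0, 2.0, 2.0), (7.0, 5.0, 5.0), (4.0, 8.0, 3.0)]
weights = [0.0, 0.0, 0.0]
targets = [300.0, 400.0, 300.0]     # desired cell volumes, in grid cells

def power_distance(p, site, w):
    # additively weighted: squared distance minus the site's weight
    return sum((a - b) ** 2 for a, b in zip(p, site)) - w

def cell_volumes():
    volumes = [0] * len(points)
    for p in itertools.product(range(10), repeat=3):   # 10x10x10 sample grid
        nearest = min(range(len(points)),
                      key=lambda i: power_distance(p, points[i], weights[i]))
        volumes[nearest] += 1
    return volumes

for step in range(20):              # feedback: grow or shrink cells via weights
    for i, v in enumerate(cell_volumes()):
        weights[i] += 0.05 * (targets[i] - v)          # proportional correction

print(cell_volumes())               # volumes move towards the targets

In an interactive tool the same loop runs continuously, so dragging a point or changing a target volume immediately re-partitions the design space.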
Figure 3. Point-clouds modeled with BehaviourLinks, their SpaceGraphs, and surfaces resulting from queries of the SpaceGraph network
Figure 3 shows the outcome of a SpaceGraph query, which traverses the network according to types of relations and specified types of nodes in the neighborhood. The query returns design features, in this case a set of facets that form a surface.

3.2. Processing: BehaviourLinks
Figure 2. BehaviourLinks examples: particle-spring system, parametric surface
BehaviourLinks sits at the intersection between parametric design environments and programming. It defines interactions between conceptual entities. These entities can represent architectural concepts, but also digital interfaces to sensors, users, and exchange data. By defining conceptual nodes and laying behavioral links, users can grow a parametric diagram of the design. Its shape, structure, and visualization originate from the behavioral design rules and decisions made by the user, as well as from the feedback negotiation between swarming nodes.

3.3. Information Modeling: XiGraph
The chosen information model allows for real-time changes and is based on the semantic network paradigm, structuring design data by defining connections and building networks. This generic kind of data structure can be meaningfully organized and mapped by users. To this end, the XiGraph library provides a generic basis.
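As a rough illustration of this paradigm (a sketch, not the XiGraph library's actual API), a semantic network can be encoded as a set of typed links with observers that are notified as links change, so that several tools stay synchronized in real-time. All class and method names here are assumptions introduced for the example.

```python
# A minimal sketch of a semantic-network information model: design data as
# (subject, relation, object) links, with observers notified on change.
from collections import defaultdict

class SemanticNet:
    def __init__(self):
        self.links = set()                        # (subject, relation, object)
        self.observers = defaultdict(list)        # subject -> callbacks

    def relate(self, subject, relation, obj):
        self.links.add((subject, relation, obj))
        for callback in self.observers[subject]:  # push the change in real-time
            callback(subject, relation, obj)

    def neighbours(self, subject, relation=None):
        return [o for s, r, o in self.links
                if s == subject and (relation is None or r == relation)]

    def watch(self, subject, callback):
        self.observers[subject].append(callback)

net = SemanticNet()
# A design tool and a fabrication tool both watch the same design entity:
net.watch("tower", lambda s, r, o: print(f"update: {s} {r} {o}"))
net.relate("tower", "has_height", 120)            # both tools see the change
net.relate("tower", "made_of", "module_A")
print(net.neighbours("tower"))
```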
Figure 1. XiGraph: principle and real-time exchange
XiGraph is a meta-application protocol which allows for the concurrent modeling and exchange of information structures via a semantic network. In this network, the data structures present in various data sources, including professional design software and custom-made tools, can be represented, related, and updated in real-time. The modeling of connections and transformations between various data structures in design computing, previously the domain of experts, can with protocols like this be handled by novices and within the design process. XiGraph is a basis for the implementation of BehaviourLinks and a way of connecting it to existing off-the-shelf software.

4. Material topologies
Interaction with material architecture takes place on two strata. The first is construction and constructive reconfiguration, which currently is a long-term interaction but is becoming faster, more accessible, more integrated, and increasingly non-linear. The second is dynamic interaction with kinetic and medial structures, which can already take place in real-time but always has limited, pre-defined degrees of freedom of expression. As the state of immediacy is approached, these two strata can be expected to eventually converge.
Figure 6. Streaming fabrication experiment
Networked Performative Modules are prototypes for interactive assemblies that are actively involved in the continuation of their design process. They are to be put together from
modular cells, which can differ in form, structure, material, and affordances. A module can be merely structural, or it can actuate, contain sensors, sound equipment, or processors. It could act as a projector, an interface, or a screen. Non-standard production modes, combined with just-in-time concurrent production of architectural elements, bind the digital design models directly into ongoing fabrication processes. In this continuum of concurrent design and production, file-to-factory [F2F] production evolves towards an open-ended streaming fabrication.

5. Information-Matter Hybrids
Immediate architecture is architecture coming together in real-time. Users can explore the design space and encounter, entwined in the physical structure, behavior which emerges from the hybridization of material elements and streams of information. Architecture becomes an activity, immediate to the building-body, whose configuration has become open-ended. Architectural praxis is no longer focused on obtaining the building-object. The building-object is fiction, a fleeting asymptote towards which the open-ended process is steered. The building is no longer a series-of-one, but becomes the event of ongoing material and informational reconfiguration, driven by our bodily experience and the feedback of the material structure.

References
Friedrich, HC: 2007, SmartVolumes - Adaptive Voronoi power diagramming for real-time volumetric design exploration, in Lecture Notes in Computer Science, VSMM07 proceedings (forthcoming), Springer Verlag, Berlin.
Oosterhuis, K: 2003, HyperBodies - Towards an E-motive Architecture, Birkhäuser Publishers for Architecture, Basel.
Oosterhuis, K, Friedrich, HC, Jaskiewicz, T and Vandoren, D: 2007, Protospace Software, in H.A.J. de Ridder and J.W.F. Wamelink (eds.), Second International Conference World of Construction Project Management, TU Delft, The Netherlands.
Migayrou, F (ed.): 2003, Architectures Non Standard, Editions du Centre Pompidou, Paris.
Sheil, R: 2005, Transgression from drawing to making, in Architectural Research Quarterly (arq), vol 9, no 1, Cambridge University Press, Cambridge, United Kingdom.
Algebras, Geometries and Algorithms, Or How Architecture Fought the Law and the Law Won
Theodoros Dounas
Aristoteleio University of Thessaloniki, Greece
An architect is often required to deal with a restrictive piece of building code in his/her practice, especially in traditional and hence protected environments. The paper examines the algorithmic nature of such a building code, and in particular the Presidential Decree governing the design and architecture of traditional housing in the Old Town, "Ano Poli", in Thessaloniki, Greece. The nature of the constraints and descriptions the Decree contains is algorithmic, meaning that the descriptions of the constraints are procedural, with a specific start and a specific finish for a house design. The problem with such descriptions in a law is that, although an architect can develop his/her own interpretations of the traditional language of the area, or even trace his/her designs using shape grammars derived from traditional buildings preserved until today, the final result cannot be approved for a building permit, since it does not comply with the Presidential Decree. We suggest that such legislation should be algebraic in nature and not algorithmic, since algebras allow a measure of freedom in the development of an architectural language while also permitting the restriction of scale, height, and so on. This coupling of architectural design freedom with effective restriction of the metrics of new buildings, contained in algebraic systems, can be shown to be much more effective than the established algorithmic system.
The Decree's content comprises regulations concerning the volume, form, and use of new buildings in the protected and conserved built environment of "Ano Poli" in Thessaloniki. The Decree was developed in 1979 by a team of architects inside the Ministry of Built Environment in Greece, with the purpose of protecting the sensitive traditional area of "Ano Poli", where a large number of traditional houses (the number varies from 2000 to 4000 depending on sources) were still intact. The area needed special protection since its colorful and traditional character was threatened by the rapid development of the building industry in the '60s and '70s. Tools were developed during the period 1978 to 1979 to assess the rules comprising the traditional architecture of the area. These rules were extracted from a detailed observation and interpretation of the rules embodied in the buildings surviving at the time. They were then codified as "algorithmic tools" in the text of the Presidential Decree, encapsulating constraints, directions, restrictions, and vectors for the architecture of new constructions in the area of Ano Poli.
The tools used in the Decree are design constraints and descriptions of the new buildings. These constraints and descriptions provide an algorithmic approach to new designs that unfortunately is not universal but is a collection of algorithmic parts. This partiality has one direct and one indirect result: 1. Various inconsistencies and conflicts arise in new designs and buildings: during the design of the whole building, a form imposed in one part of the building can clash with the form or use imposed by the Decree in another part. 2. The Decree's application during the last 30 years has been ensured by a committee of architects, both freelancers and public servants.
With changes in the composition of the committee over the years, opinions within the committee on the algorithmic nature of the Presidential Decree have changed, often leading to conflicts with decisions made by previous members. The result is that the architecture of new buildings in the
traditional area of Ano Poli has changed over time; various languages or styles of architecture are evident and can be traced back to specific configurations of the committee from 1979 onwards. These architectural styles conform with the letter of the law, but they clash with the stated goal of the Presidential Decree, namely to protect the traditional area of Ano Poli, since this variety of styles ultimately amounts to a cacophony of styles and forms. These conflicts can be very damaging to the design process (and of course to the final building), since the architect designs almost like an automaton following the algorithm of the building code, without any chance of developing contemporary designs. Furthermore, the conflicts the Decree has hidden inside its collection of algorithmic parts, coupled with the low resolution its text has in defining overly specific rules of form, can bring the design procedure to a halt and abort it with no alternative exit.
In contrast, an algebraic approach to defining constraints can serve the purpose of enabling the architectural interpretations individual architects make of the traditional architecture of the area, while restricting the geometrical features of the outcome via simple universal rules: addition, subtraction, intersection, multiplication, and so on. After developing an algebra specific to the traditional architecture seen in Ano Poli, while maintaining the constraints of the established Presidential Decree, we try to generalize the process of extracting an algebra from a building code, or constructing one if extraction is impossible. It is in this generalization that we try to answer the questions of "architectural ethics" that emerge: 1. Are computing tools robust enough to represent and direct our intervention in sensitive and protected environments? 2. Why and how should we use computing tools like geometric and shape algebras inside generic building codes?
Even though it may appear a strange coupling, matching computational tools (in the area of regulation and legislation) with the formal language of the traditional architecture of an area can provide a new materiality: since we no longer build with the techniques and materials of a past era, the architect would now treat forms as a traditional material. From this treatment of "Form" as a material, one can assess the ecosystem that algebraic computing tools provide for the designer. Instead of handling pure representation only, the designer can establish structural connections between forms, in the manner of the shape grammar paradigm. The crucial question is whether establishing such an algebra inside such a specialized building code would improve the stated goals of the legislation. In our opinion, merely creating an algebra would not provide any clear benefits; but an algebra that takes into account the traditional architecture of the area of Ano Poli, and is feature-complete concerning said architecture, would provide clear goals and descriptions for designers and architects to follow when they design a new building for the region. This feature-completeness can only be achieved when one considers the most basic structures and forms of the traditional architecture, without describing them in a unique and singular manner. Thus, instead of restricting the design of new buildings, one would be able to infer new designs that are close relatives of the traditional architecture in question, even though no direct equivalent traditional building survives today.
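The contrast the paper draws can be sketched in a few lines of code. The rules and limits below are invented stand-ins for illustration, not the Decree's actual content: the first fragment renders the "algorithmic" reading (a fixed procedure whose partial parts can clash and halt the design with no alternative exit), the second the algebraic alternative (free composition of forms under universal operations, with the legislation constraining only measurable outcomes such as height).

```python
# Illustrative stand-ins only -- not the actual rules of the Presidential Decree.
# The "algorithmic" reading: an ordered procedure with a fixed start and finish.
def decree(house):
    house["roof"] = "pitched, tiled"        # one algorithmic part fixes a form...
    if house["floors"] > 2:                 # ...which clashes with another part
        raise ValueError("no permit: conflicting parts, no alternative exit")
    house["bay"] = "projecting closed balcony"
    return house

print(decree({"floors": 2}))                # passes the procedure
# decree({"floors": 3}) would abort the design process with no way forward.

# The algebraic alternative: compose volumes freely under a universal operation
# (here, union); legislation restricts metrics, not architectural language.
from dataclasses import dataclass

@dataclass(frozen=True)
class Box:
    w: float
    d: float
    top: float          # height of the volume's top above ground

def union(design, box):
    """Composition operation of the algebra: add one volume to the design."""
    return design | {box}

def permitted(design, height_limit=7.5):    # the limit is an assumed value
    return max((b.top for b in design), default=0.0) <= height_limit

design = union(union(set(), Box(6, 8, 3.2)), Box(4, 6, 6.4))
print(permitted(design))                    # True: free form, bounded metrics
```

In the algebraic reading, any composition an architect derives, for instance via a shape grammar, remains permissible so long as its metrics stay within the legislated bounds.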
References
Duarte, J.P.: 2001, Customizing Mass Housing: A Discursive Grammar for Siza's Malagueira Houses, PhD dissertation, MIT.
Knight, T.W.: 1991, Designing with Grammars, CAAD Futures Digital Proceedings 1991.
Maver, T.W.: 1995, CAAD's Seven Deadly Sins, CAAD Futures Digital Proceedings 1995.
Martens, B. et al.: 2007, Predicting the Future from Past Experience: A Reflection on the Fundamentals of CAAD, 25th eCAADe Conference Proceedings, Frankfurt am Main (Germany), 26-29 September 2007, pp. 523-531.
Penttilä, H.: 2006, Describing the changes in architectural information technology to understand design complexity and free-form architectural expression, ITcon, Vol. 11 (2006), pg. 395.
Digital Culture
Moderators: Nashid Nabian and Dido Tsigaridi
Papers:
Panos Parthenios - Analog vs. Digital: why bother?
Jack Breen and Julian Breen - The Medium Is the Matter: Critical Observations and Strategic Perspectives at Half-time
Daniel Cardoso - Certain assumptions in Digital Design Culture: Design and the Automated Utopia
Branko Kolarevic - Post-Digital Architecture: Towards Integrative Design
Ingeborg Rocker - Versioning: Architecture as series?
Katerina Tryfonidou and Dimitris Gourdoukis - What comes first: the chicken or the egg? Pattern Formation Models in Biology, Music and Design
Analog vs. Digital: why bother?
The role of Critical Points for Change (CPC) as a vital mechanism for enhancing design ability.
Panagiotis Parthenios
Harvard Graduate School of Design
panos@parthenios.com
Abstract
Architects take advantage of a broad palette of tools and media for design, analog and digital, because each tool has its own strengths and weaknesses and provides an additional value - an added level of vision - to the architect. This closely relates to the notion of Critical Points for Change (CPC), a contribution this study makes towards a better understanding of the uniqueness of the conceptual design process. CPC are crucial moments when the architect suddenly becomes able to "see" something which drives him to go back and either alter his idea and refine it, or reject it and pursue a new one. They are crucial parts of the design process because they are a vital mechanism for enhancing design. The right choice and smooth combination of design tools, analog and digital, is critical for the design outcome. Using multiple tools allows the designer to overcome the possible influences and limitations imposed by a single tool. The current and evolving landscape is characterized by co-existence, complementarity, and evolution of tools. The answer to the pseudo-dilemma of analog or digital is: both.
Keywords. Conceptual design, design process, tool, analog, digital.
1. Introduction
The war between analog and digital in the field of design is definitely not new. Designers have a tendency to pick a side and fight for what they believe is the "right" way to perform design. "Now that we have digital, why do we still need analog?" or "Analog is perfect. Why did we ever need digital?" Moreover, since the transition from one medium to the other is (still) not trouble-free, why do we bother switching tools? Why don't we find which tool is convenient for each one of us and stick with it? Why bother? This paper seeks to shed light on the role that analog and digital tools play during conceptual design by explaining the mechanism of the Critical Points for Change. The critical question of this study is whether the two worlds (analog and digital) are antagonistic and whether one of the two is going to prevail. The following two short cases are part of a series of case studies conducted as part of the doctoral study Conceptual Design Tools for Architects (Parthenios, 2005) at Harvard Graduate School of Design. They are included in this paper not with the intention to propose a model of the design process, but in order to illustrate some of the findings.
2. Case Study A
In the first case study we can observe what tools the architect and her team used to perform conceptual design, and why the design process is not linear. Audrey, a senior architect at Stubbins Associates, worked with two junior architects on a 6,000 m2 research lab. She began with small sketches in her sketchbook, which analyzed and filtered the information that the client had given. The first sketches were very simple and represented the basic requirements of the project. They included thoughts, questions, solutions, forms, and ideas. Gradually these sketches became geometric attempts to capture the main concept, and in the next stage they adopted a bigger, common scale on tracing paper. The beauty of this initial step of conceptual design lies in the freedom and ambiguity that allow the architect to address anything she wants, in no particular order or hierarchy (Fig. 1).
Figure 1. Sketches at different stages.
When Audrey reached a concept that she believed had good potential, she asked her two team members to take the space requirements that the client had given them in Excel spreadsheets, analyze them, and translate them into geometry. This was done in AutoCAD 2D, with simple rectangles that represented each module and led to some primitive plan layouts (Fig. 2).
Figure 2. Translation of space requirements into geometry.
After accomplishing a satisfying layout of the plans which matched the main idea in the sketches, Audrey wanted to see how it would look in 3D. She let the two team members play individually in 3D and explore a number of variations. They used SketchUp to create simple digital 3D models. They would print screenshots of the models and hang them on the wall so that everyone on the team could look at them without necessarily having to meet, and Audrey would often stop by, overlay a piece of tracing paper, and sketch on them. At some point, while presenting the digital 3D model to the board of her firm, Audrey realized that "I knew what I wanted the building to do but it was not really doing it". While trying to discover where the dissonance was, one of the team members reminded Audrey of a sketch she had made a few days earlier and had left aside. It turned out to be a more suitable solution, which they developed further and based their design on. Altering the main idea meant that they had to go back and redo the layout in AutoCAD, along with new sketches and new digital 3D models. The satisfactory result of this process progressed to the next level, which was building a physical 3D model (Fig. 3).
Figure 3. Physical 3D model (left) and digital 3D model (right).
The physical model gave Audrey an additional level of vision and allowed her to understand more aspects of the design: "It is not the same as having a piece there that you can break, stick things on, or take them off; it's not a tangible thing". The new medium triggered alterations, which meant the architects had to go back again and update the AutoCAD drawings, the sketches, and the digital 3D model.
3. Case Study B
In the second case study we can compare how the same architect used different tools and media in three projects, and what the effect was on each design. Robert, a senior associate principal at KPF, designed three office towers in Asia. His office works with a comparative method during the conceptual design stage (Fig. 4).
Figure 4. Comparative method.
They usually generate a lot of alternatives that investigate formal characteristics, relationships of the program components with one another, plan layouts, and space efficiency, and they come up with a few favorable schemes that they present to the client. Lately, the office has started to use the computer more as a generative tool than just a deploying tool. For all three of the projects in this case study, the goals and the starting point were quite similar. Robert would start with some sketches in order to explore an idea about the tower and its relationship to the site and the program. Because the towers are three-dimensional forms, the goal is to get from that sketch to a model where the tower can be seen in three dimensions in relationship to its context. In tower A no computer tools were used for conceptual design; in the next two towers, B and C, computer modeling in Rhino played a crucial role in helping Robert and his team to explore a number of design schemes (Fig. 5).
Figure 5. Towers A (paper), B and C (ZCorp)
When Robert designed tower A in 2000, he was not using computers during conceptual design, so all the exploration was done through sketches and physical models. He started
with a few sketches, which were followed by hand drawings, pencil on straight edge. These led to physical 3D models made out of paper, which were placed on the context model in order to determine how the tower related to the site and the surrounding buildings. During the exploration, the initial square plan of the tower was slightly deformed to give it some directionality, and later the sharp corners were chamfered in order to comply with the city regulations. Tower B was the result of a combination of the contextual response to the site and the mayor's interest in having an iconic building that would resemble a magnolia, the city flower of Shanghai. Robert started with the idea of having the plant form at the ground level, which would rotate as it rose, and this rotational aspect could give it some organic feel. After a few sketches he used Rhino to make a 3D model of the tower and capture the form he liked. Moreover, he had just learned how to use Rhino, and among the first commands he had learned were how to twist and how to loft a curve. "This initial scheme was very much a result of playing with new software." For this project his team used many small physical 3D models, initially out of paper and later out of resin (ZCorp, directly from Rhino). In tower C we can see a wider variety of tools and means deployed during conceptual design, and a more mature use of digital tools (Rhino). Robert and his team used sketches, small physical 3D models out of clay and paper, Rhino for digital modeling, and subsequently ZCorp-printed 3D models. They explored a large number of alternatives using all of the above means at the same time, moving back and forth between the tools and producing three different complete scenarios which they presented to the client.
4. Conceptual design process
The design process is open-ended, and problems and solutions cannot be clearly identified and separated. This set of problems and solutions cannot be broken into parts that can be solved separately; it has to be treated as a whole. Moreover, there is no perfect, "right" solution, only preferred, better ones. This means that the architect needs to be able to decide when to stop exploring different ideas and select one to carry on into development as the main concept. As we saw in Case Study A, during this hunt for a better solution there are often back-and-forths, switches between different media and tools, and constant questioning of ideas through comparisons, tests, and rejections. The conceptual design process is not linear. Furthermore, according to the data gathered from the 242 architects who participated in the Survey on Tools for Conceptual Design (Parthenios, 2005), most architects tend to explore two to three ideas before they choose the "one" and move on to design development. No matter how many ideas the architects explore, they often feel the need to go back and revise their design (as Audrey did in Case Study A). Switching between different media and tools creates loss and duplication of information and forces them to re-enter information; nevertheless, they choose to do it even if that causes delays. By a two-to-one margin, respondents used multiple software packages (rather than just one), despite frustration with data exchange between tools.
5. Sub-Processes: Levels of Refinement
Conceptual design is not a linear process. It consists of sub-processes which are individual but interact with each other. Each sub-process has its own unique value and grants the architect an additional level of vision. The sub-processes correspond to the types of media and tools used during conceptual design. For example, in Case Study A we can distinguish four separate sub-processes, which play a valuable role during decision making: a) sketching; b) 2D CAD; c) 3D digital modeling; and d) 3D physical modeling (Fig. 6).
Figure 6. Sub-Processes and Critical Points for Change (CPC).
In this example, only when Audrey used a digital 3D model was she able to see an aspect of the design (which sketches and 2D CAD could not reveal) and decide that she had to go back and change the main idea. Going back entails a manual update of the design with new sketches and new CAD drawings. Similarly, only when the architects built a physical 3D model were they able to see another aspect of their design that needed to be altered; they decided to go back again and make the appropriate changes. Then again they had to re-input information in new CAD drawings, a new digital 3D model, and new sketches (Fig. 1). These sub-processes operate as levels of refinement for the conceptual design process. Each level functions as a filter for narrowing down the number of explored alternatives. The tools used in Case Study B define four levels of refinement: a) simple sketches; b) rough small paper 3D models; c) digital 3D models in Rhino; and d) printed 3D models (ZCorp) (Fig. 7).
Figure 7. Levels of Refinement.
Robert, in Case Study B, utilized only the first two levels in Tower A, since he was not using computers at that time. The result was a simpler design based on an extrusion of a skewed square. No one can claim that the designs of Towers B and C, which exploited four levels of refinement, are better - after all, what is better design? What we can observe, though, is that the added levels provided the platform for more exploration and allowed the architect to investigate forms which he could not have investigated (and did not investigate) by hand. Additionally, in Case Study B we encountered an example of how the selection of tools affects the design outcome. Robert arrived at the design of Tower B because of a particular command of the software he was using. Software tools are not innocent anymore. Undoubtedly, proficiency with software reduces its effect on design solutions.
6. Critical Points for Change
The levels of refinement are closely interrelated with the notion of Critical Points for Change. These are moments when the architect "sees" something that drives him to go back and either alter his idea or start with a new one (as was clearly illustrated in Case Study A). They are crucial parts of the design process because they are a vital mechanism for enhancing design. They either trigger alterations that refine the design solution or provoke the architect to reject the idea and pursue a better one. Often a new level of refinement will provoke a CPC. Through the help of a new tool, the architect becomes able to "see" something that was not visible before and can decide to go back and a) alter the design idea, b) abandon it and begin from scratch, or c) abandon it and pick an idea that had been discarded or left "inactive". Moreover, half of the architects who participated in the Survey on Tools for Conceptual Design (Parthenios, 2005) reported that several times they had changed their minds and gone back, even if they had already proceeded to the design development stage.
Even though CPC might look like irregularities that make the conceptual design process inefficient, the truth is that they are absolutely necessary for a creative, genuine course of design exploration. Besides, the desired outcome does not emerge on the first try. Architects need to explore a number of ideas until they can choose the optimal one. Tools for conceptual design should not attempt to disguise or underestimate the Critical Points for Change. On the contrary, the tools should assist the architect in CPC cases in six ways:
a) Reveal CPC cases earlier in the process.
b) Provoke the emergence of more CPC cases.
c) Encourage deeper exploration of each alternative by offering additional levels of vision and understanding.
d) Support the architect in the dilemma of whether to alter an idea or abandon it and start again from scratch.
e) Organize all the different ideas and present a broad palette of them.
f) Integrate the different media and tools in order to reduce the inefficiencies that CPC cases cause.
7. Co-existence of analog and digital tools
The gap between analog and digital, and also between 2D and 3D, is a challenge that technology needs to address. Nevertheless, the subsequent inefficiencies, delays, and duplication of information are merely one side of the story. Since transitions from one medium to the other are not (yet) univocal, the ineluctable ambiguity tolerates mistakes and unexpected results (Fig. 8).
Figure 8: From 2D to 3D and from analog to digital
Due to the open-ended nature of the design process, which often relies on trial and error, these accidental discoveries can lead to the covetable "eureka!" that is every designer's goal. "Capture accidents. The wrong answer is the right answer in search of a different question" (Mau, 1998). Renee Cheng points out that "any tool is more powerful if it is part of a cycle of digital and analog, going back and forth, rather than a linear progression from sketching first, then digital modeling, with no return" (Laiserin, 2008). Pierluigi Serraino has written that "form follows software" (Serraino, 2003). This couldn't be more of a valid argument: as some examples from the case studies in my doctoral research demonstrate, the series of commands that each piece of software embraces, along with its accessibility, makes the tool a potentially influential factor in design. Computer tools have become so powerful that they affect the design outcome. Bernard Tschumi believes in the co-existence of the analog and digital worlds because they supplement each other. In the case study of the Museum of Sao Paolo (Parthenios, 2005), Tschumi explains that he needs both the analog and the digital version of the same section because they allow him to "see" different things; each is an additional level of perception. With the computer image "I get more information, it allows me to see very important things about the concept." He can better understand issues like density, relationships, hierarchies, and
dimensions. “I don’t know the dimensions, I know the ideas. Once I get to this” [the computer 2D section-rendering] “I get a sense of the dimensions. I realize the building, for example, is not that high." The computer model works as a checking mechanism for the hand-drawn sketches (Fig. 9).
Figure 9: Digital and Analog section, Bernard Tschumi.
8. Analog vs. Digital
Without a doubt, computer tools have become exceptionally powerful and play an increasingly crucial role in conceptual design, not only in terms of productivity but, more importantly, in creative, substantial ways which touch the essence of conceptual design. At the same time, the results of my doctoral research reveal that analog tools are still irreplaceable, no matter what digital tools are available, even among younger architects. Regardless of age, years of experience, size of firm, or types of projects, 60% of the architects who participated in the survey (and use computers to design) identified pencil and paper as their favorite tools, with 80% starting their design process on paper. As a result, the answer to the pseudo-dilemma of analog or digital is: both. Designers simply use a plethora of analog and digital tools in many different, innovative ways. As Jerry Laiserin points out, "no single tool provides the best solution for representing any design idea. In fact, exploring design ideas through multiple tools helps insulate designers from the subtle influences (and/or limitations) provided (and/or imposed) by the affordances of any single medium or tool." Thus, we should not be looking for conflict between analog and digital media; instead, the right words to describe the current and evolving landscape are co-existence, complementarity, and evolution of tools. The goal should be a fine-tuned tool, or set of tools, that will truly assist the architect during conceptual design and will be so transparent that the architect can focus on the design and not on the tool.
References
Parthenios, P.: 2005, Conceptual Design Tools for Architects, Harvard Design School, Cambridge, MA.
Mau, B.: 1998, An Incomplete Manifesto for Growth.
Laiserin, J.: 2008, Doin' the DEED (AEC Insight Column), Cadalyst, Jan. 2008, pp. 41-42.
Serraino, P.: 2003, Form Follows Software, Proceedings of ACADIA 2003, Indianapolis.
The Medium Is the Matter
Critical Observations and Strategic Perspectives at Half-time
Jack Breen
Delft University of Technology, The Netherlands
j.l.h.breen@bk.tudelft.nl
Julian Breen
Utrecht University, The Netherlands
j.d.breen@gmail.com
Abstract
This paper critically re-views the professional impact and functionality of the pervasive digital 'matter' we have come to believe we can no longer do without. On the basis of a playful exploration of the first 'half-century' of our digital age, an attempt is made to draw new perspectives for the next 'level' of our digital culture, in a broader (multi)media perspective and, more specifically, in the domains of Architecture. To stimulate an open-minded 'second-half' debate, the paper puts forward some potentially promising (and hopefully provocative) conceptions and strategies for imaginative interface applications and game-based architectural study initiatives. Furthermore, the paper proposes the establishment of a new cultural platform for the exchange of Critical Digital hypotheses and the evolvement of visionary design concepts through creative digital innovation, with the (inter)active involvement of older and younger team-players…
Introduction
The 'Digital Age' was heralded with an air of optimism and opportunity. This initial technological and cultural positivism, dominated by the promises and expected benefits of an all-encompassing digital culture, has pervaded to this day. As a consequence, the digital discourse has hardly been 'critical'. As inevitable as the cycles of development and implementation of computer-based media may now seem, it is worth realising that, on a cultural time-scale, they have actually only been 'around' relatively briefly… In fact, if we were to think of our Digital Age in terms of a Digital Century, we should realise that we now find ourselves more or less half way! So, as we approach the beginning of our second half-century, it seems justified to have a time-out… Time to count the blessings, but also the shortcomings, of our increasingly digitised culture and to think clear-headedly, but imaginatively, about where we might – and indeed should – be heading in the (near) future… What are to be the 'winning strategies' for the 'second half'?
From digital cult to new media culture
The first thirty years of our digital century were characterised by fascination on the one hand and technical innovation on the other. In the Sixties and Seventies the computer was still considered an exponent of a 'brave new world'. The issue of computation was associated with scientific rationalism but was also
surrounded by starry-eyed expectations and hopes for a better global (indeed: universal) society. This optimism was at the same time tempered by the looming spectre of an impending 'anti-society', centrally controlled by a predominant Big Brother… After the mind-expanding Sixties and Seventies it was no longer just the media specialists and scientists, but the public at large that started considering the digital revolution as the Next Big Thing. The new media culture (or cult?) was proclaimed with a passion. "Digital" became a kind of mantra for innumerable devotees. The slogan: "Thanks to the Computer we are now (or: will soon be) able to …… ."
After the early, 'heroic' research-and-development phase, the eighties witnessed the true flowering of the digital revolution as a popular movement. Over a period of a mere twenty years the steady proliferation of digital technology took place via a series of iterative loops. From clean-room mainframe environment, via still relatively specialised text processor, personal computer and eventually desktop publisher for the masses, the resulting technological and cultural shift was not only extremely swift but also seemingly inevitable. Some of the rapidly evolving permutations of this process, which have shaken up the way we work (and indeed play):
- From 'analogue' to digital;
- From hard to 'soft' technologies;
- From mass media to 'personalised' tool;
- From traditional skill to 'choice-based' interface;
- From complex calculation to 'easy' manipulation;
- From 'centralised' communication to networked environment.
The effects of these cycles of development, implementation, upgrading, mainstreaming and hybridisation have not been entirely painless. After the gold rush era, whereby new 'generations' of digital applications were hurriedly brought onto the market, we now find ourselves collectively lumbered with operationally, ergonomically and aesthetically 'poor' instruments.
The 'matter' of media
Over the years our conceptions of (digital) media have been changing constantly. Early Media gurus (notably McLuhan) saw the medium not merely as a 'messenger', but as the Message (even the Massage).1 Scholars of architectural design began to see computer-based media not just as 'tools', but increasingly as the Method.2 We would argue that, in the present cultural time frame and technological state-of-the-art, digital media – as such – are no longer 'the thing that matters'. They have evolved from being the driving forces 'between', to becoming the base condition, the Matter on which new concepts are (to be) built… 'Digital or not' is hardly an issue any longer. Indeed, from a user point-of-view, the fixation on digitalisation has become almost irrelevant and even distracting. What is needed is a more imaginative kind of instrumentality that can facilitate a new cultural avant-garde! This means that next generations of design media applications will need to redress the notorious drawbacks of the machinery of today. To name a few:
- The limitations of the archaic typewriter/screen interface;
- The 'flatness' of orientation and lack of spatial 'presence';
- The untamed 'much-ness' of information and absence of mapping;
- The visual impersonality and lack of aesthetic sensibility;
- The insufficient compatibility of media: virtual and physical;
- The lack of spontaneity in design modelling and sketching;
- The constraints upon physical manifestations of ideas in the 'real world'.
Digitised systems have nonetheless become the Modus Operandi of the late twentieth and early twenty-first century and have had a profound impact on various fields of creative enterprise, such as Architecture… After an extensive period of fine-tuning existing conventions, it is time for inventions. What is needed is a wholly new conception of the physical interface, which can stimulate the imagination and concretisation of new realities.
Virtual architectonics
From the late Eighties onward, institutes of architectural education embraced the (somewhat dubious) motto: "Think Digital!" For established centres of learning this was largely a matter of 'keeping up' and not losing the 'edge' – and consequently status – to newer institutes and younger members of faculty. According to some, this fixation on digitalisation on all levels threatened the foundations of the 'art' of teaching architecture (in a technological as well as an intellectual sense). In some cases the preoccupation with 'all things digital' admittedly led to pseudo-systematisation and a painful simplification of the true issues of design. However, after the initial growing pains had subsided, most architectural schools proved able to absorb computation as an added value and increasingly became 'laboratories' for new applications. Consequently, computer-literate graduates have contributed considerably to changing the face of architectural practice…
The architectural discipline was initially slow to catch on (with computation primarily used for structural and budgetary calculation and subsequently for two-dimensional computer-aided drafting). This changed as the fascination for spatial modelling and texture-mapped renderings spread through the international design community. As a consequence, CAAD arguably contributed to a new kind of – 'global' – architectural vocabulary, characterised by complex geometries and sleek (im)materialisation concepts. The mere fact that hitherto 'inconceivable' architectural configurations could now be modelled accurately and visualised seductively (at least in the computer) resulted in new waves of 'solid liquid' architectures and other 'hypermodern' form experiments. The problem in many cases was that the built results tended to be considerably less convincing in real reality than in their 'virtually real' design environments – particularly on the level of detailing and materialisation – unless a very considerable budget was available.
The various guises of computer-aided architectural design have arguably given a meaningful impulse to contemporary aesthetics, but have not led to a new, universal 'style'. On the contrary, we currently find ourselves confronted with an array of formal agendas. At the same time, a shift has been taking place in the field of design media… Whereas, only a few years ago, computer-generated 'special effects' would not fail to impress, there is now a tendency to shake off the constraints of standardised software and to adopt a more personal 'signature'. 'Old school' techniques are becoming credible again, but now: enhanced by the digitised formats of emerging media and other art forms.
At the same time we witness a trend towards moving away from purely digital architectonics, back to the physical world, particularly in the field of computer-based physical modelling and manufacturing. Some topical issues:
- Imaginative and dynamic visualisation and testing (design);
- Intelligently produced architectonic components (technology);
- Contemporary forms of architectural ornamentation (aesthetics);
- Improved understanding and structuring of information (research);
- Cross-disciplinary mixing of media platforms (communication).
The question: what kinds of design development and representation applications might be expected to 'matter' in the future?
Second-half perspectives…
As we anticipate being called out for the second half of our digital century, we must be acutely aware of the paradigm-shifts that are in the air and lend an ear to the fresh ideas of younger 'team-players' who have grown up with the established matter of our digital culture. This does not mean that our tactics should necessarily deviate from what we have been doing up to now… On the contrary, there is considerable opportunity to sharpen existing strategies whilst at the same time giving room to fresh ideas and innovations via combined approaches of:
- Fine-tuning proven instruments, formats and operations;
- Renewed attention towards earlier ambitions that have drifted off the critical radar;
- Wholly new conceptions and applications.
To contribute to the Critical Digital agenda discussion we would like to propose some 'moves', which we will discuss briefly. Some themes we consider worthy of exploration – and expansion – in the near future:
- Playful Media (learning from game formats and game theory);
- Interactive Interfaces (developing personalised input/output alternatives);
- Gesture-based Connectivity (towards more tangible forms of interaction);
- New 3D Visions (creating new impulses for spatial experiencing).
Playful Media:
Whilst the collective attention in the 'first half' was mainly on the transition from 'analogue' to digital, we are now confronted with a situation where 'digital' has become ubiquitous to such an extent that it is a 'given' we need hardly be aware of any more. At the same time, the steadily progressing digital evolution has altered our working methods and conceptions in various fields of enterprise, from commerce to the arts. As we are inclined to consider architecture as an art form, we might study and learn from other artistic disciplines, such as moviemaking (cinematographic approaches, sequencing and animation), theatre (physical expression, interaction, improvisation) and music (rhythm, harmonic variation, but also digital recording and sampling). These may expand the 'palette' of architecture (traditionally making use of drawings, models, pictures and symbols). But perhaps the most meaningful phenomenon to consider is contemporary game culture…
Digital games have emerged as a 'serious medium'. The operational cleverness, interactivity and elegance of particular digital game applications can give new incentives to the practice and education of architecture. In this context, the concept of design as 'playing with ideas', visually and interactively, might generate new 'game-worlds' in an architectural context. On the basis of pedagogical game situations, students and professionals would be able to get actively involved in explorations of design composition and perception issues, using digital media imaginatively and with a stimulating measure of competitiveness. Such 'designerly' studies may not only be thought-provoking for the participants but, if intelligently organised as experimental studies, may lead to a better understanding of the issues of the discipline in the context of education and research. Linking up with gaming communities and the creative industries may create a 'laboratory' for new techniques and manipulations and the evolvement of computer interfaces relevant to the architectural field.
Interactive Interfaces:
The proliferation of 'soft' technologies has been so successful that in many ways software has become the new hardware: an intrinsic 'cultural commodity', to such an extent that one might argue it is too important to leave its development to (big) business… The digital applications we constantly work with are hardly user-friendly. Instead of interacting freely with 'personalised tools', we find ourselves struggling with the limitations of mass products that no longer fit our present digital era, and are certainly not adequate for the applications of the future… Arguably, recent waves of 'Apple-chique' products have led to a much-needed stylistic upgrading of many of our digital utensils, but the question remains as to whether, on the whole, the effects have been more than skin deep…
Essentially, digital working (and gaming) environments are based on (pre)determined choice formats. In daily practice, the supposedly 'easy' manipulations prove to be cumbersome and irritating rather than stimulating. A lot of this has to do with the rigidity of many existing software packages and the serious limitations of the human-computer keyboard/mouse/screen interface. One important limitation to creative interaction is arguably the typewriter interface, which in many ways can be considered a 'living fossil'. Ergonomically developed for specialised 'blind' typists, it has become a fixed standard, used extensively by people with hardly any typing skills and for specialised applications that might demand an altogether different set-up. The keyboard is hardly optimal for new generations of users and for specialised applications, such as film editing, architectural modelling and graphic design, to name but a few, which should benefit from wholly different kinds of tactile formats. At the same time there is considerable 'software-hardship' amongst users having to resort to seemingly endless file searching or antiquated keyboard-codes. Whilst the software applications might be complex in themselves, on an interface level there should be 'interactive simplicity': affording ultra-easy selection of preferred options, via changeable 'touch & see' screens, rather than the standard keyboard and mouse. An important added value would be the integration of sound: not 'total' sound recognition, but the most preferred actions ("select all", "copy", "paste" etc.) being activated via voice request (rather than: 'command').
As a consequence, what is now a push-button keyboard should evolve into a truly dynamic 'what-you-want-is-what-you-get' interface console, able to 'morph' into any
number of preferred guises at will, such as: mixing desk, editing table, sketch board, graphic composer, musical instrument, as well as touch-sensitive typewriter keyboard.
Gesture-based Connectivity:
As professional interfaces still tend to be conservative in their design and slow in development, changes might be expected to come from the game and entertainment industries. Some years ago, expectations were high concerning virtual/physical interfaces. As high-end VR formats 'flopped', expectations were downgraded considerably. Nonetheless, it may be worth reviving some of the tactile concepts of the recent past and cultivating physically responsive forms of interaction. From the outset, it has been the game industry, rather than the computer manufacturers, that has been active in the research and development of controllers for various kinds of virtual activities (such as: playing tennis; skiing; driving a motorbike; gunning someone down). Although truly touch-sensitive 'glove' interfaces have not become a great success, it may be time to revitalise the concept: to explore the opportunities of motion- and gravitation-sensitive gizmos, making optimal use of eye-hand (iHand?) coordination. Other visual language formats might be beneficial for the study of architectural design conceptions, whereby 'real movements' could underscore and enhance design-driven enquiry and intellectual discourse. An indicator of how things might develop can be seen in the Nintendo Wii interface, in which hand and body motion – essentially gesturing – is the key to a more spontaneous physical interaction with software-driven applications. The end is not in sight, and developing specialised formats for architectural applications and other 'artistic' operations would be a worthwhile challenge for the 'second half', particularly for younger players. An extra advantage of such a development would be that it might liberate architectural designers and scholars, who now feel 'tied down' by their computers. We should look for opportunities that make such digitally based activity more appealing: a critical and physical workout while you work!
New 3D Visions:
As standard screen interfaces have serious shortcomings (particularly in architecture) due to their visual flatness, lack of orientation and limited 'presence', it may be opportune to rekindle certain spatial interaction concepts of the recent past, as the technological state-of-the-art develops, offering new opportunities for virtual spatiality. A new development which might be of considerable interest for the 'digital architecture' community is 3D cinema. As formats such as 3-D IMAX and REAL D come into view, experts and students should get seriously involved. Whilst the techniques will initially target a broader 'cinema' audience, one might envisage perspectives for productions targeting more specialised groups and potentially even becoming tools for the design studio of the future. As regards architecture as a discipline, such techniques may offer an important stimulus, through the exploration and evocation of important architectural precedents, via 3D model-based studies and the pursuit of new forms of architecture. One possibility might be the construction of simulated spatial building environments (virtual building labs), for the benefit of didactic demonstrations or even interactive design 'in situ'.
From experimental to new mainstream, such 3D presentation platforms might stimulate the systematic generation of new or imagined forms of architecture, on the basis of varying parameters, conventions, climates, environments or tastes. An added benefit should be a more imaginative approach to the representation of architectural 'realities' in movies and games.
Towards a Digital Critical Salon
What is called for is a new 'free place' for the exchange, development and evaluation of Digital Critical ideas: a 'stamping ground' for different parties to get involved in the advancement of digital matter. A place where the acknowledged and the upcoming can meet to discuss meaningful issues in a stimulating setting… What we envisage is a kind of return of the Salon. A cultural meeting place where connoisseurs and artists (the established and the avant-garde) can meet, present work, exchange opinions and judgements, discuss the course ahead and set the targets and ambitions for following meetings worth looking forward to. What such a 'Critical Digital Salon' might achieve is to bring media-players together on the basis of clearly defined themes and targeted experiments: not only concerning questions of 'interface' and 'operation', but particularly concerning the cultural impacts and meanings of existing and foreseeable digital applications in the arts and particularly architecture… The Salon should set tasks (as in a good game situation) with challenges and constraints, with incentives and rewards, calling for visionary and/or analytical 'architectures' using existing, emerging or experimental media. The Salon might connect with sponsors active in affiliated fields or industries, in order to offer stipends and rewards for the effectuation of projects to be exhibited and debated at forthcoming meetings. Some suggestions for potential themes, on the basis of recent experiences:
- Dietrological Architectures3 (unravelling architectural artefacts, to 'get behind' the workings of fundamental architectural issues);
- Information and Data Spaces ('taming' data overloads, creating immersive, stimulating interfaces, like virtual exhibitions or museum spaces4);
- Reviving Formal Articulation (addressing contemporary design issues, notably the return of Ornamentation with the influx of CAAM techniques5);
- Groundbreaking Spatial Scenarios (free-form architectural explorations for new or imagined environments or conditions, suitable for 3D screening).
All in all, it is a matter of teaming up 'old dogs' and fresh team-players within a stimulating and competitively challenging, creative interplay, whereby: the Medium is the Matter and the Game is the Method…
Endnotes
1 The idea of the medium as the ‘message’ was introduced by Marshall McLuhan in: Understanding Media: The Extensions of Man, McGraw-Hill, New York, 1964. Not averse to the occasional pun, a subsequent book by McLuhan, with Quentin Fiore and Jerome Agel, was titled: The Medium is the Massage, Bantam, New York, 1967.
2 The role of design media as a form of ‘method’ in design and research was explored in: Jack Breen: The Medium is the Method, Media approaches to the designerly enquiry of architectural compositions, in: Architectural Design and Research: Composition, Education, Analysis, C. Steenbergen et al (ed.), THOTH publishing, Bussum, The Netherlands, 2000.
3 A novel by Don DeLillo inspired the concept of ‘dietrological’ studies in architecture: Underworld, Scribner, 1997. Excerpt: “There is a word in Italian. Dietrologia. It means the science of what is behind something. A suspicious event, the science of what is behind an event” (italics by the authors).
4 The idea of a ‘Virtual Media Museum’ was introduced in a conference workshop by Jack Breen at the 6th conference of the European Architectural Endoscopy Association and published in the Proceedings: Spatial Simulation and Evaluation, Bratislava, 2003.
5 The notion that computer-aided modeling and manufacturing techniques may bring about new forms of ornamentation is investigated at the Delft Faculty of Architecture in an ongoing ‘Ornamatics’ course. Ambitions and results have been presented at recent eCAADe meetings and published in the Proceedings of the 2006 and 2007 conferences, plus the forthcoming 2008 conference.
Certain assumptions in Digital Design Culture
Design and the Automated Utopia
Daniel Cardoso Llach Massachusetts Institute of Technology, United States dcardoso@mit.edu
Abstract
Much of the research effort in computational design for Architecture today aims to automate or bypass the production of construction documents as a means of freeing designers from the sticky and inconvenient contingencies of physical matter. This approach has yielded promising questions and applications, but is based on two related assumptions that often go unnoticed and that I wish to confront: 1. Designers are more creative if the simulations they rely on engage only with the superficial aspects of the objects they design (rather than with their structural and material-specific behaviors), and 2. The symbolic 3-D environments available in current design software are the ideal media for design because of their free nature as modeling spaces. These two assumptions are discussed both as cultural traits and in their relation to digital design technologies. The work presented is a step towards the far-sighted goal of answering the question: how can computation enable new kinds of dialogue between designer, design media and construction in a design process? Concretely, this paper proposes a critical framework for discussing contemporary digital design practices as a continuity, rather than a rupture, of a long-standing tradition in architecture of separating design and construction.
1. Perfect Slaves
June, 1829. In his heartfelt rebuttal of Thomas Carlyle’s Signs of the Times, Timothy Walker, a Harvard lawyer and self-proclaimed “America’s attorney,” contends that “Machines are to perform all the drudgery of man, while he is to look on in self-complacent ease”. He asserts that once the corporeal necessities of man are satisfied by machinery “there would be nothing to hinder all mankind from becoming philosophers, poets, and votaries of art.” (Walker 1831: 123, emphasis added)
Figure 1. Steam Engine (Malone 2005)
More than a hundred years later, in May of 1966, Stephen A. Coons, an MIT mechanical engineer and early promoter of numerically controlled machinery as creative aids, described digital fabrication devices to an audience of artists and designers as “perfect slaves” that are to perform the dirty work of dealing with materials while the artist or designer is free to “concentrate fully in the creative act” (Coons 1966: 11, emphasis added). These two propositions belong to different moments in history and describe different technological realities. Walker’s was the world of textile mills, factories and steam engines; his Defence of Mechanical Philosophy is an ode to the promises of a nascent technological society. In contrast, Coons’ world is post-war US America, a world in which computers and other technologies developed mainly for military purposes were starting to be massively, optimistically assimilated by consumer markets in the United States. In his talk he speaks of digital computers and numerically controlled devices ambiguously as “compliant partners”, as an “appropriate kind of slaves” and as “magic instruments of creative liberation”; Coons’ talk is an ode to the promises of the information society, the number-crunching digital computers and their potential for faster, better and cheaper manufacturing. Despite the differences, the ideology the two seem to promote is surprisingly consistent: Walker, like Coons, believes that the manipulation of physical materials is a stage that should be hidden as an unnecessary and undesired part of human existence. They both dismiss the physical, the material, as something dirty and abject, while the purely mental is ennobled. A working hypothesis of this paper is that a common trait underlies Walker’s and Coons’ discourses:
We maintain, that the more work we can compel inert matter to do for us, the better will be for our minds, because the more time shall we have to attend to them. (…) A certain portion of labor, then, must be performed expressly for the support of our bodies. But at the same time, as we have a higher and nobler nature, which must also be cared for, the necessary labor spent upon our bodies should be as much abridged as possible, in order to give us leisure for the concerns of this better nature. [Walker 1831: 124, emphasis added].
This “higher and nobler nature” that Walker refers to in his Defence of Mechanical Philosophy is consistent with what, more than a century later, Coons refers to when he utters the word “creativity”; we could describe Coons’ definition of creativity as the process of, or the ability to, operate in a clean and unconstrained world of ideas and symbols. The correspondence between the thoughts of these two men1 perhaps sheds light on the culture and assumptions that underlie the digital tools that we use for designing today. Walker’s references to the nobility of intellectual contemplation, and Coons’ view of CAD/CAM as a “perfect slave”2, seem to be instances of the same primal separation in western thought between the physical and the mental, the mind and the body, and, perhaps more importantly for us architects and designers, design and construction.
Figure 2. A portrayal of early fabrication optimism (Technion, exact date unknown).
Part of the significance of Coons’ speech for our understanding of current design culture resides in that it marks the moment when simulation technologies and computer-controlled machinery, used during the war to outsmart enemies overseas, started to be conceived in a crucially different way: as magic instruments of creative liberation (Coons 1966: 11). Once these resemblances between Coons’ early computer-age claims on design and creativity and the hopes of the emerging nineteenth-century industrial order are exposed, three relevant issues emerge. First, the fact that the portrayal of these environments builds on a utopian desire to deprive design of its physicality by having machines enable a presumably seamless transit of symbolic forms into matter through machinery3; second, the role of design technologies as active shapers of the cultural notion of creativity and design; and third, the inference that the persistent idea of the perfect slave in Coons’ and Walker’s discourses might be the indication of a long-standing trait in design and architectural thought of dividing design and construction. In the next section I will follow up on this inference to propose a preliminary draft of a genealogy of the design/construction divide.
2. An old trait
1 As Staudenmaier notes, women’s perspectives on technology are likely to be quite different from men’s (Rothschild 1983).
2 Coons’ enthusiasm for computers and machinery is not an isolated manifestation of technological faith in his time; it is part of a generalized optimism about technological progress that emerged in US American society after the Second World War, an enthusiasm which, as Paul N. Edwards suggests, can be situated in terms of the realization of a dream of a closed and clean world of purely symbolic manipulation. For an immersion in the role of technology and computing in the construction of post-war American “closed world discourse”, see (Edwards 1996).
3 The phrase Automated Utopia is used in (Marx 2000: 185).
In his Ten Books of Architecture, the Renaissance architect and scholar Alberti established the distinction between lineamenta and structura. This distinction became a powerful dialectic device, and anticipated the future of the architect’s relation to construction. Although there is no full agreement on what Alberti precisely meant by lineamenta, his writings give clues about it being very close to the sphere of representation, and more specifically to the building’s ground plan, where the architect embeds all of his knowledge and judgment. In the words of Lang (1965), “The lineamenta derives from the mind, and the structura from nature”. For Alberti lineamenta means design; a domain in which “all the ideas of the architect are incorporated”. In Alberti’s vision the materialization of the architect’s design, its structura, should be performed by a skilled craftsman: there is a clear divide between the ‘organizational’ sphere (the sphere of the architect’s reasoning) and a physical sphere (conventionally associated with the engineer) that has persisted until today. The dichotomy between the mental (design) and the physical (construction) is not a consequence of Coons’ computer, or Walker’s mill. Cultural practices, economic interests and human obsessions are crystallized and embodied in design technologies, rather than being originated by them. The analogies that we have traced between the computerized slave of Coons, Walker’s mechanical utopia and the “skilled craftsman” of Alberti should cast a shadow of doubt on the technological determinism that sometimes seems to underlie digital design discourse, by exposing its nature as a re-interpretation of long-standing desires and obsessions. Alberti’s role as a Renaissance scholar and architect is an early, yet distinct, precedent of the “gentleman architect” that according to historian of construction Jacques Heyman would appear later in England during the second half of the seventeenth century. Hence Alberti’s distinction must be read keeping in mind that at his time the modern social role of the architect did not exist as we understand it today. The roles of the master builder and of the architect would become definitely separate at a later moment in history. Heyman locates this professional split in the figure of Christopher Wren, a professor of astronomy in seventeenth-century England who “had never worked on a building site” but who faced the challenge of efficiently managing the reconstruction of many buildings in London after the great fire of 1666.4 Because of the scale and number of the works commissioned, Wren’s architectural practice was forced to deal with construction in a way that differed from the medieval practice of the masons. Taking physical and organizational distance from the medieval master builder, Wren avoided the direct manipulation of materials by using contractors. This marked the emergence of new social roles, new models of authorship, and new economic schemes in construction, giving origin, as this author argues, to the modern architect, characterized by his social role of mediator between the client and the builder (Heyman 2003).
Architecture treatises would play a major role in the consolidation of the modern architect by rationalizing practices that had been the sole domain of the craftsmen; in The Projective Cast: Architecture and Its Three Geometries, Robin Evans discusses how eighteenth-century French treatises of stereotomy were instrumental in the process of rationalization of technique that resulted in the specialization of the technical and
4 According to Heyman, rather than through the traditional process of apprenticeship, Wren’s knowledge of architecture had been acquired during a one-year stay in Paris, where he observed salient buildings and monuments, and through books (he significantly mentions a 1512 Latin edition of Alberti’s Ten Books of Architecture).
intellectual roles in construction; Evans discusses how this process was often a source of tensions among the different social groups involved (Evans 2000).
Figure 3. A seventeenth-century treatise on stonecutting, from (Evans 2000)
Not exclusive to architecture, the division between design and construction resonates with a more general division between the mental and the material, and is described by many authors as a defining feature of modern western thought and society. Tim Ingold has suggested that this dichotomy is so engrained in our culture that “we are inclined to use it as a window through which to view practices of all kinds, past and present, Western and non-Western, human and animal” (Ingold 2001). Other authors, like Helmreich (1998), have noted both the platonic undertone of this division, and its resemblance to the Cartesian dualism between mind and body. Having exposed a preliminary, and somewhat arbitrary, draft of a genealogy of the design/construction divide in architecture, I will discuss in the following section how this trait is visible in a few instances of today’s culture of digital design, often in assumptions that go unnoticed in the language we use to talk about our practice as designers.
3. Scaffolds and Shields
In a session of a graduate research workshop on digital design and fabrication conducted at MIT’s Department of Architecture in the Spring of 2007, one of the staff members put forward a view of the role of computers and machinery in design in terms strikingly similar to those of the examples discussed above.
The visual, the surface, is most of what architectural training is about; that’s what the architects are really good at (…) the other part [the “physical Layer”] is in the back, and that’s what we really want to automate, and that’s what most of
computing is about. This is how the system becomes a scaffold for truly creative work.
The concept of creativity in this proposition stops at the visual, and is implicitly linked to the “soft” and “human”. The role of construction, in contrast, falls outside the sphere of the creative, and is linked to the “hard” and automatable. Its role as a “scaffold” for creativity renders it a passive rather than an active participant in the generation of designs. Computation is conceived as a shield to protect the designer from these contingencies. As a result, materials become in this proposition mere receptacles of ideal forms. Particularly worthy of discussion in this model is the opposition between a “visual layer” to which “design” and “creativity” are confined and a “physical layer” that is the domain of computers and machinery. In this distinction the visual is associated with the intuitive, the imaginative and the creative, and the physical with the objective, functional and “technical”. As in Walker’s and Coons’ propositions, the physical is to be hidden, out of the sight of the designer, in the hands of the perfect slave. This persistent rhetoric places materials and technical knowledge outside the sphere of design and creativity; in the words of Bromell, it “empties the mechanic’s physical labor of value so that it may be replaced, with no loss, by the efficiencies of mechanism” (Bromell 1993: 43); it is significant that the same rhetoric manifests itself time and again, always rendering design as an exclusively mental activity. A general examination of current trends in digital design culture, both in academia and in practice, will expose, with exceptions, a striving by designers to impose the fluid geometry of forms created in 3-D environments onto the less forgiving world of materials and gravity. This can be seen in the increasing number of practices that place themselves as bridges between the complexity of digitally generated shapes and the reality of materials. While I worked on sections of this document, Fabian Scheurer, a German architect and computer scientist, gave a lecture to the Design and Computation Group at MIT’s School of Architecture. Designtoproduction, his practice, is defined by him as “a consultancy for the digital production of complex designs”; furthermore, he described his team as integrating “specialist knowledge from various fields to help architects, designers, engineers, and manufacturers bridge the gap between idea and realization” (emphasis added)5. A few weeks later Paul Seletsky gave a lecture in the same room, to the same audience, in which he presented an agenda for a methodological integration of performance data in the early stages of conceptual design, a sort of “consortium” project involving schools and software development teams such as Gehry Technologies. The project involves finding a pre-rational and performance-based approach to digital design; his argument drew largely on the convenience of blurring the line between analysis systems (Ecotect) and design systems (Digital Project, Rhino). In this section I have discussed some instances of how the design/construction divide is visible in current digital design culture. Even though the software’s “signature” often becomes a strong driver of the design process, this fact goes unnoticed under the illusion of presumably free-form design. This in turn results in a displacement of authorship and in the emergence of new social roles. What questions do these changes pose for architectural education?
5 FOC (Freedom of Creation) is a product design firm that defines its practice in the same terms.
Can we speak of the graphic mannerisms of screen-based design practices as a new canon, comparable to the classic canon of French beaux-artism that Le-Duc would deem empty of meaning? (Le-Duc 1990) Can we devise new premises for computational design systems? Premises in which perhaps a dynamic dialogue between materials and form is a defining feature of our aesthetic judgments on architecture and therefore of our definition of “design”?6
6 As a final note, two scenarios for reflection and action ought to be mentioned: 1) Generative Fabrication: the search for an integrated computational framework for both generative design exploration and prototype fabrication. The role of bottom-up design generation is in this context considered in addition to, and as an alternative to, top-down design methodologies where computer programs or manual labor are used to subdivide an initial shape into a set of distinct building components. This framework would enable exploration of the potential of material- and device-specific design grammars to act both as machine-readable information and as platforms for creative design exploration (Cardoso 2008). 2) Objects to Design With: a pedagogical perspective that aims at exposing the logic behind design technologies by enabling students to script their own design tools and to alter existing ones. The international series of workshops Computational Design Solutions, taught in collaboration with Kenfield Griffith, Pablo Herrera and John Snavely, is a step towards building this empowerment in young students of architecture.
Figures 4, 5 and 6. Exploring Generative Fabrication (by the author)
Figure 7. Students’ work at the CDS Chile, guided by the author
4. Conclusion
A persistent view of machinery as a “perfect kind of slave”, and its relevance to digital design discourse, is exposed and discussed critically in terms of the separation of design from construction it elicits. An examination of the genealogy of this cultural trait reveals that, rather than originating it, the popularization of computers has redefined it and embodied it in entirely new ways in Computer-Aided Design and Manufacturing (CAD/CAM) tools. Inadvertently abiding by this separation fixes the place that materials and construction knowledge occupy in a design process, promoting a view of design representations as clean conduits, and of construction materials as passive receptacles of ideal forms. If we as architects, researchers, authors and educators are to adopt an active, informed, and critical stance towards the tools and environments where design takes place today, the nature of these as cultural objects needs to be debated and
examined, and their underlying rules and technicalities opened and exposed. Design tools, and particularly computing, are not neutral and embody a discourse that has cultural, political and social dimensions.
Endnotes
Marx, Leo 2000 The Machine in the Garden: Technology and the Pastoral Ideal in America. Oxford University Press.
Malone, Patrick M. 2005 Surplus Water, Hybrid Power Systems, and Industrial Expansion in Lowell. IA: The Journal of the Society for Industrial Archeology. <http://www.historycooperative.org/journals/sia/31.1/malone.html> (23 Mar. 2008).
Walker, Timothy 1831 Defence of Mechanical Philosophy. North American Review 33:1 (1831).
Coons, Stephen 1966 Computer, Art & Architecture. Art Education, Vol. 19, No. 5 (May 1966), 9-11.
Rothschild, Joan 1983 Machina Ex Dea: Feminist Perspectives on Technology. Teacher’s College Press.
Loukissas, Yanni N.d. The Cultural History of the Digital Fabrication Studio. Unpublished, Department of Architecture, Massachusetts Institute of Technology.
Edwards, Paul N. 1996 The Closed World: Computers and the Politics of Discourse in Cold War America. Cambridge, MA: MIT Press.
Technion Institute of Technology [Exact Date Unknown] ComputerAidedManufacturing. Electronic document. http://pard.technion.ac.il/archives/Researchers/ComputerAidedManufacturing.jpg
Alberti, Leon Battista 1988 On the Art of Building in Ten Books. Trans. Joseph Rykwert with Neil Leach and Robert Tavernor. Cambridge: MIT Press.
Lang, S. 1965 De Lineamentis: L. B. Alberti’s Use of a Technical Term. Journal of the Warburg and Courtauld Institutes, Vol. 28 (1965), pp. 331-335. Stable URL: http://links.jstor.org/sici?sici=00754390%281965%2928%3C331%3ADLLBAU%3E2.0.CO%3B2-Q
Heyman, Jacques 2003 Wren, Hooke and Partners. Paper presented at the First International Congress on Construction History, Madrid, 2003.
Evans, Robin 2000 The Projective Cast: Architecture and Its Three Geometries. Cambridge: MIT Press.
Ingold, Tim 2001 Beyond Art and Technology: The Anthropology of Skill. In Anthropological Perspectives on Technology. Michael Brian Schiffer, ed. Pp. 17-31. Amerind Foundation Publication.
Helmreich, Stefan 1998 Silicon Second Nature. Berkeley and Los Angeles, CA: University of California Press.
Bromell, Nicholas Knowles 1993 By the Sweat of the Brow: Literature and Labor in Antebellum America. University of Chicago Press.
Le-Duc, Viollet 1990 The Architectural Theory of Viollet-le-Duc: Readings and Commentaries. Hearn, M.F., ed. Cambridge: MIT Press.
Cardoso, Daniel 2008 Generative Fabrication. Unpublished, Department of Architecture, Massachusetts Institute of Technology.
Herrera, Pablo 2008 Arquitectura y Programacion. http://arquitecturayprogramacion.blogspot.com/2008/02/rhinoscripting-workshoplima-2008.html
Post-Digital Architecture: Towards Integrative Design
Branko Kolarevic
University of Calgary
Abstract
In this paper, an alternative vision of integrated design is proposed that is more open, fluid, pliable, and opportunistic in its search for collaborative alliances and agendas. This alternative approach is referred to as integrative design, in which methods, processes, and techniques are discovered, appropriated, adapted, and altered from “elsewhere,” and often “digitally” pursued. The designers who engage design as a broadly integrative endeavor fluidly navigate across different disciplinary territories, and deploy algorithmic thinking, biomimicry, computation, digital fabrication, material exploration, and/or performance analyses to discover and create a process, technique, or a product that is qualitatively new.
Introduction
Concepts such as integrated practice and integrated design have gained prominence in architecture over the past several years as relatively new paradigms. What is usually meant by these terms is a multidisciplinary, collaborative approach to design in which various participants from the building industry – architects, engineers, contractors, and fabricators – participate jointly from the earliest stages of design, fluidly crossing the conventional disciplinary and professional boundaries to deliver an innovative product at the end. Integrated design and integrated practice have emerged as a result of several initially unrelated, organic, bottom-up developments within the building industry. At one end, the (re)emergence of complexly shaped forms and intricately articulated surfaces, enclosures, and structures has brought about, out of necessity, a close collaboration from the earliest stages of design among architects, engineers, and builders. The binding agent of the resulting disciplinary and professional integration was the various digital technologies of design, analysis, and production that provided for a fairly seamless and fluid exchange of information from conception to construction, often defying the existing ossified legal structures that clearly delineate professional and disciplinary responsibilities. At the same time, building information modeling (BIM) has emerged as a technological paradigm promising a way to encode comprehensively all the information necessary to describe the building’s geometry, enable various analyses of its performance (from the building physics point of view), and directly facilitate the fabrication of various components and their assembly on site. BIM, as a technological platform, however, demands a structural redefinition of the existing relationships within the industry, if the various players are to fully realize the potential of better, faster, more direct exchanges of information. In other words, BIM’s message is that the integration of information within the industry requires process-wise and structural integration of the various disciplines and professions comprising the highly fractured building industry today.1
An equally important development over the past decade was the emergence of the design-build enterprises that, through the way in which they are structured, inherently imply close integration of design and building. The principal motivation behind them is a reduction in the substantial inefficiencies that exist due to the fractured nature of the industry, and the implied, profit-motivated desire for integration. The separate paths towards integrated design and practice stemming from the expansion of design-build within the industry, the introduction of building information modeling as an enabling technology, and the emergence of increasingly complex building forms are increasingly converging, leading many to believe that integration within the industry is an inevitable outcome as architecture, engineering, and construction enter a “post-digital” age, i.e. as the digital technologies become increasingly transparent in their use. While the higher degrees of integration promise buildings that are better, faster, and cheaper to design and construct, the challenge is to avoid closed systems of integration and to keep integrative tendencies as open as possible, conceptually and operationally. While integrated design could be understood as a well-defined (and thus closed) constellation of related disciplines and professions, integrative design, I believe, implies a much more open, more pliant conceptual and structural platform on which architecture could continue to develop in its post-digital stage as it embraces ideas, concepts, processes, techniques, and technologies that were until recently considered to be within the domains of “others.”
A Brief History of Disintegration
Architecture and building were once “integrated.” For centuries, being an architect also meant being a builder. Architects were not only the masters of geometry and spatial effects, but were also closely involved in the construction of buildings. The knowledge of building techniques was implicit in architectural production; inventing the building’s form implied inventing its means of construction, and vice versa. Design and production, architecture and construction, were integrated – one implied the other. The master masons of the Middle Ages were in charge of all aspects of buildings, from their form to the production techniques used in their construction. They had the central, most powerful position in the production of buildings, stemming from their mastery of the material (stone in most cases) and its means of production. As the palette of materials broadened and the construction techniques became more elaborate, the medieval master masons evolved into master builders (or architects) who would integrate increasingly multiplying trades into an increasingly more complex production process. The tradition of master builders, however, did not survive the cultural, societal and economic shifts of the Renaissance. Leon Battista Alberti wrote that architecture was separate from construction, differentiating architects and artists from master builders and craftsmen by their superior intellectual training. With Alberti’s elevation of architects over master builders came the need to externalize information (so it could be communicated to tradesmen) and the introduction of orthographic abstractions, such as plan, section and elevation,2 into the currency of building. Architects no longer had to be present on site to supervise the construction of the buildings they designed.
The rifts between architecture and construction started to widen dramatically in the mid-nineteenth century when “drawings” of the earlier period became “contract documents.” Other critical developments occurred, such as the appearance of the general contractor and the professional engineer (first in England), which were particularly significant for the development of architectural practice as we know it today. The relationships between architects and other parties in the building process became defined contractually, with the aim of clearly articulating the responsibilities and potential liabilities. The consequences were profound. The relationship between an architect (as a designer of a building) and a general contractor (as an executor of the design) became solely financial, leading to what was to become, and remain to this day, an adversarial, highly legalistic and rigidly codified process. The architect’s role on the construction site, instead of shaping the building (as master builders once did), became contractual administration, i.e. the verification of the contractor’s compliance with the given contractual construction documents. The design was split from the construction, conceptually and legally. Architects detached themselves from the act of building. The twentieth century brought increasing complexity to building design and construction, as numerous new materials, technologies and processes were invented. With increased complexity came increased specialization, and the emergence of various design and engineering consultants for different building systems, code compliance, etc. The disintegration was thorough and deep but, fortunately, reversible, as shown by the various developments within the industry over the past decade, briefly discussed earlier.
Reintegrating out of Necessity
Over the past decade we have seen in architecture the (re)emergence of complexly shaped forms and intricately articulated surfaces, enclosures, and structures, whose design and production were fundamentally enabled by the capacity of digital technologies to accurately represent and precisely fabricate artifacts of almost any complexity. The challenges of constructability left designers of new formal and surface complexities – whether “blobs” or intricately patterned “boxes” – with little choice but to become closely engaged in fabrication and construction, if they were to see their projects realized. Building contractors, used to the “analog” norms of practice and prevalent orthogonal geometries and standard, repetitive components, were reluctant to take on projects they saw as apparently unbuildable or, at best, as having unmanageable complexities. The “experimental” architects had to find contractors and fabricators capable of digitally-driven production, who were often not in building but in shipbuilding. They had to provide, and often generate directly, the digital information needed to manufacture and construct the building’s components. So, out of sheer necessity, the designers of the digitally-generated, often “blobby” architecture became closely involved in the digital making of buildings. A potentially promising path to integrated design emerged.
In the process of trying to address the material producibility of digitally conceived complex forms, the “experimental” architects discovered that they had the digital information that could be used in fabrication and construction to directly drive the computer-controlled machinery, making the time-consuming and error-prone production of drawings unnecessary. In addition, the introduction and integration of digital fabrication into the design of buildings enabled architects to almost instantaneously produce scale models of their designs using processes and techniques identical to those used in the industry. Thus, a valuable feedback
mechanism between conception and production was established, providing a hint of the potential benefits that the integration of design and production could bring. This newfound ability to generate construction information directly from design information, and not the complex curving forms, is what defined the most profound aspect of much of the formally expressive architecture that has emerged over the past decade. The close relationship that once existed between architecture and construction – what was once the very nature of architectural practice – has reemerged as an unintended but fortunate outcome of the new, closely coupled, digital processes of design and production. Builders and fabricators become involved in the earliest phases of design, and architects actively participate in all phases of construction. In the new digitally-driven processes of production, design and construction are no longer separate realms but are, instead, fluidly amalgamated. The fission of the past is giving way to the digital fusion. As observed by Toshiko Mori, “The age of mechanical production, of linear processes and the strict division of labor, is rapidly collapsing around us.”3
The Digital Integration
An effective digital exchange of information is vital to the realization of the new integrative capacity of architecture, engineering, and construction. The ability to digitally generate and analyze the design information, and then use it to directly manufacture and construct buildings, fundamentally redefines the relationships between conception and production – it provides for an informational continuum from design to construction. New synergies in architecture, engineering and construction start to emerge because of the use of digital technologies across the boundaries of various professions. In this scenario, the digital model of the building (a building information model) becomes the single source of design and production information that is generated, controlled and managed by the building design team. It encodes all the information needed to manufacture and construct the building. Layers of information are added, abstracted and extracted as needed throughout the design and construction, as architects, engineers, contractors and fabricators work in a collaborative fashion using a single digital model from the earliest stages of design. Such a model of production requires that all tools for design, analysis, simulation, fabrication and construction be integrated into a single, cohesive digital environment that can provide information about any qualitative or quantitative aspect of a building under design or construction.4 An example of the integrated application of the multiplicity of information about a project can be seen in the proliferation of ecological and biological design considerations surfacing in contemporary architecture in relation to the greater availability of information about natural and human circumstance. For example, the current interest in building performance as a design paradigm is largely due to the emergence of sustainability as a defining socioeconomic issue and to the recent developments in technology and cultural theory. Within such an expansive context, building performance is being understood by some architects very broadly, across multiple realms, from financial, spatial, social and cultural to purely technical (structural, thermal, acoustical, etc.).
The issues of performance (in all their multiple manifestations) are increasingly considered not in isolation or in some kind of linear progression but simultaneously, in an integrated fashion, and are engaged early on in the conceptual stages of the project, by relying on close collaboration between the many parties involved in the design of a building.
In such a highly “networked” design context, digital quantitative and qualitative performance-based simulations are used as a technological foundation for a new, comprehensive, highly integrated approach to the design of the built environment.
Integration Varieties
The emergence of “integrated practice” and “integrated design” as promising professional and disciplinary futures was driven primarily by two parallel, mutually reinforcing developments: the emergence of design-build as a fundamental, structural change in the building industry, and building information modeling (BIM) as an information technology platform that can support new collaborative modes of working. Design-build, however, is only one way of actualizing the emerging professional synergies of digitally-driven modes of production. A more interesting possibility is the structuring of building teams as dynamic, geographically-distributed digital networks of design and production expertise, which change fluidly as the circumstances of the project or practice require. Architects will increasingly find themselves working in an environment of multidirectional digitally-mediated exchange of knowledge among various members of design and construction teams. In the emerging fluid, heterogeneous processes of production, the digital data, software and various manufacturing devices will be used in different ways by different members of the building team, who will often operate in different places and in different time zones. Two different trajectories are often pursued: the horizontal one that integrates different disciplines across the same scale, and the vertical one that integrates similar disciplines across different scales. In the context of building design, a horizontal strategy would mean the integration of architecture, engineering, and construction, and a vertical one would mean the integration of industrial design, architecture, and urban design, for example. (Other, cross-axial combinations are also possible.) In light of the technologically-enabled changes, innovative practices with cross-disciplinary expertise are forming to enable the design and construction of new formal complexities and tectonic intricacies. Front, Inc. from New York is perhaps the most exemplary collaborative practice to emerge over the past decade; acting as a type of free agency, they fluidly move across the professional and disciplinary territories of architecture, engineering, fabrication and construction, and effectively deploy new digital technologies of parametric design, analysis, and fabrication. Similarly, entrepreneurial enterprises, such as designtoproduction from Zurich, Switzerland, have identified an industry niche in the translation of model-scale prototypical designs into full-scale buildings. Design firms, such as SHoP Architects and LTL Architects in New York and Studio Gang Architects from Chicago, have integrated in-house design and production in many of their projects. Meanwhile, integrated fabrication specialists such as 3form, Inc. in Salt Lake City, A. Zahner Company in Kansas City, and Octatube in Delft, the Netherlands, represent an industry-oriented broadening to engage the emerging innovative design processes directly and more effectively through close collaboration with designers.
Broadening Integrated Design
Integrated design should be much more open, fluid, pliable, and opportunistic in its search for collaborative alliances and agendas. This alternative approach is referred to
as integrative design, in which methods, processes, and techniques are discovered, appropriated, adapted, and altered from “elsewhere,” and often “digitally” pursued. The distinction between being integrated and being integrative may seem minor, but I think it is rather significant, as it implies a fundamentally different attitude towards collaboration, which need not be limited to the professions and disciplines comprising the building industry (or the particular scale of building). The designers who engage design as a broadly integrative endeavor fluidly navigate across different disciplinary territories, and deploy algorithmic thinking, biomimicry, computation, digital fabrication, material exploration, and/or performance analyses to discover and create a process, technique, or a product that is qualitatively new. Scientific and engineering ideas become starting points of the design investigation. For example, concepts such as minimizing waste are engineering tactics that are increasingly applied to architecture from the outset of design projects. Other engineering concepts, such as optimization, are finding favor too, not just in budgetary considerations and fabrication procedures, but also in formal and organizational strategies. As discussed earlier, greater attention is given to the analyses of simulated building performance as essential feedback criteria in the design process. Mathematics and geometry are re-embraced as a rich source of ideas in articulating form, pattern, surface and structure in architecture, and collaborations with mathematicians are increasingly sought out. For example, the expansive, patterned surfaces of the Federation Square building in Melbourne, designed by Lab Architecture Studio, are based on what is known in mathematics as pinwheel aperiodic tiling, enabling the designers to apply different scales of the same pattern across the building as needed. Daniel Libeskind proposed a patterned skin based on fractals for the extension he designed (with Cecil Balmond of Arup) for the Victoria & Albert Museum in London. There are other notable examples in which patterning is based on mathematics. For example, Voronoi tessellation5 is a particularly popular patterning algorithm today (a brief computational sketch follows below). Many of the patterning schemes can be extended from a two-dimensional to a three-dimensional realm and emerge from basic mathematical operations in order to achieve complex results. A simple patterning scheme was used by Cecil Balmond and Toyo Ito in their design for the Serpentine Pavilion in London (2002) to produce a complex-looking outcome. The apparently random patterning that wraps the entire pavilion is produced by incremental scaling and rotation of a series of inscribed squares, whose edges were extended and trimmed by the pavilion’s unfolded box shape to create a beautiful, seemingly irregular-looking pattern of alternating voids and solids. The random-looking “bird’s nest” structural pattern of the National Stadium in Beijing, China (2008), designed by Herzog & de Meuron, is also based on a relatively simple set of rules. The nearby National Aquatics Center (2008), designed by PTW Architects from Australia, is a simple box that features a seemingly complex three-dimensional bubble patterning. Its geometric origin is the so-called Weaire-Phelan structure6, an efficient method of subdividing space using two kinds of cells of equal volume: an irregular pentagonal dodecahedron and a tetrakaidecahedron with 2 hexagons and 12 pentagons.
This regular three-dimensional pattern was sliced with a non-aligned, i.e. slightly rotated, rectilinear box to produce the seemingly irregular patterning effect on the exterior.
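To make the patterning idea concrete, the following is a minimal sketch, not taken from any of the projects above, of how a Voronoi tessellation might be driven computationally; the facade dimensions, seed count, and use of SciPy are illustrative assumptions:

```python
import numpy as np
from scipy.spatial import Voronoi

# Voronoi tessellation as a patterning device: scatter seed points over
# a facade-like rectangle and decompose it into cells, each cell being
# the region closer to its seed than to any other seed. The 12 m x 4 m
# panel and the 40 seeds are arbitrary, illustrative choices.
rng = np.random.default_rng(seed=1)
points = rng.uniform(low=[0.0, 0.0], high=[12.0, 4.0], size=(40, 2))

vor = Voronoi(points)
print(f"{len(vor.points)} seeds produced {len(vor.vertices)} cell vertices")

# Inspect a few bounded cells (regions touching infinity contain -1).
for region_index in vor.point_region[:3]:
    vertex_ids = vor.regions[region_index]
    if -1 not in vertex_ids:
        print(np.round(vor.vertices[vertex_ids], 2))
```

Varying the seed distribution (regular, jittered, or density-driven) is typically how designers tune such a pattern between order and apparent randomness.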
Science, mathematics, and engineering are not the only domains that are explored for potential ideas. Designers and researchers are increasingly looking for inspiration in nature to discover new materials and new material behaviors, so that buildings (or rather, building enclosures) can respond dynamically to changing environmental conditions. In addition to mimicking the intricate, complex appearance and organization of patterned skins and structures in nature, their behavior is also being investigated for possible new ideas about the performance of building skins and structures. In such “form follows performance” strategies, the impulse is to harness the generative potential of nature, where evolutionary pressure forces organisms to become highly optimized and efficient (nature produces maximum effect with minimum means). A nature-imitating search for new ideas based on biological precedents – often referred to as biomimicry or biomimetics7 – holds much promise as an overarching generative driving force for digitally driven contemporary architecture.8 What is interesting to note is that in integrative design, the deployed digital technologies become much less important than the “borrowed” operational metaphors. For example, the integration of time as a dimension in design thinking is manifested today in very different ways, from weathering, the need to adapt to change, movable parts and reconfigurable assemblies, to time-based modeling of geometric forms using animation software. Biomimicry has become a fertile ground for new ideas of integrating nature in design by imitating or taking inspiration from its systems, processes, and elements to address particular design issues (such as sustainability). These developments are part of the perceived broader shift towards integrative design as an emerging trajectory for architecture in its post-digital phase.
Acknowledgements
This paper contains excerpts from chapters written by the author that appeared in the “Architecture in the Digital Age: Design and Manufacturing” book, edited by the author, and “Performative Architecture: Beyond Instrumentality,” co-edited with Ali Malkawi. Also included are excerpts from the chapter titled “Manufacturing / Material / Effects,” co-authored with Kevin Klinger, that will appear in the forthcoming book titled “Manufacturing Material Effects: Rethinking Design and Making in Architecture,” co-edited with Kevin Klinger.
Endnotes
1
For a discussion of the structural shifts within the building industry, see Branko Kolarevic (ed.), Architecture in the Digital Age: Design and Manufacturing, London: Spon Press, 2003, and Branko Kolarevic and Ali Malkawi (eds.), Performative Architecture: Beyond Instrumentality, London: Spon Press, 2005.
2 Ironically, architecture’s disassociation from building started in the late Renaissance with one of its most celebrated inventions – the use of perspective representation and orthographic drawings as a medium of communicating the information about buildings.
3 Toshiko Mori (ed.), Immaterial/Ultramaterial: Architecture, Design, and Materials, New York: George Braziller, Inc., 2002, p. xv.
4 The challenge is (and has been for almost four decades of computer-aided design) how to develop an information model that facilitates all stages of building, from conceptual design to construction (and beyond, for facilities management), and provides for a seamless digital collaborative environment among all parties in the building process.
5 Voronoi diagrams are named after the Russian mathematician Georgy Voronoi, who studied the general n-dimensional case of the conceptually simple decomposition scheme in 1908. In Voronoi tessellation, the decomposition of space is determined by distances to a specified discrete set of objects (points) in space.
6 The Weaire-Phelan structure is a complex three-dimensional structure devised in 1993 by Denis Weaire and Robert Phelan, two physicists based at Trinity College in Dublin, Ireland.
7 The term biomimetics refers to man-made processes, substances, devices, or systems that imitate nature. It was coined by Otto Herbert Schmitt (1913–1998), an American engineer and biophysicist, best known for establishing the field of biomedical engineering. Velcro, the hook-loop fastener, is perhaps the
best-known example of material biomimetics: it was created in 1948 by George de Mestral, a Swiss engineer, who was interested in how the hooks of the burrs clung to the fur of his dog.
8 Imitating forms and structures found in nature also has a long history in architecture: Joseph Paxton’s Crystal Palace was allegedly inspired by the lily pad’s structure.
Versioning: Architecture as series?
Ingeborg M. Rocker
Graduate School of Design, Harvard University, Cambridge, USA. irocker@gsd.harvard.edu
Abstract
This paper investigates the role of versioning in contemporary theory and the practice of design. The introduction of computation done by computers allowed for complex mathematical calculations and their visualization, which were for a long time simply too complex to undertake. Today, differential calculus – underlying most interactive 3D modeling software – has significantly informed the production and conceptualization of architecture. The upshot of this transformation is that we are now witnessing a shift from an architecture of modularity towards an architecture of seriality, of design versions. The core idea of versioning exceeds simple variation between different parameterized design iterations; versioning also operates at the micro-scale, within the structure and aesthetic of digital design itself.
1. From Modularity to Seriality
To understand the shift from an architecture of modularity towards an architecture of seriality, we will first focus on the idea of modularity. In 1926 Walter Gropius suggested:
“The systematic preparation for a rational housing construction will serve the development of dwelling. […] The creation of types […] is an effective tool to create, better and more cheaply, a new manifold of products through industrial production. Therefore it is logical to serve the masses’ everyday need through standardization. Each individual has the choice between the side-by-side developing types. The buildings and their furnishings will vary in their totality corresponding to the number and difference of the inhabitants, while their constitutive elements will remain the same.” Grundsätze Bauhaus Dessau, W. Gropius 1926.
In 1926 the Bauhaus postulated that new industrial production techniques would significantly change the modes of designing and producing architecture – and ultimately the modes of dwelling. The creation of prototypes and the introduction of modularity seemed then the key to success. While modernist architecture was based on a classical mechanistic paradigm, which envisioned a machine for dwelling assembled out of a kit of parts, today’s architecture is increasingly informed by the trans-classical Turing machine – the digital computer. In 1936, the model of the Turing Machine [Fig. 1], developed by the English mathematician Alan Turing, laid the theoretical foundation for computing.1 In principle, the machine could be physically assembled out of a few components: a table of instructions that held the rules for its operations, and a reading and writing head that operated according to those rules, writing 0s or 1s – the code – on an infinitely long tape. The moving head followed three modes of operation: writing, scanning, erasing.
Figure 1: Alan Turing, Turing Machine, 1936. Image source: Andrew Hodges, Alan Turing: The Enigma, Simon and Schuster (New York), 1983, pp 98–9.
Though basic, Turing considered his machine universal in the sense that it could simulate all other (calculable) machines. It had the unprecedented ability to emulate divergent and multivalent processes, and it could even initiate recursive processes – processes that would repeat themselves in a self-similar way. Turing did not envision his machine as a practical computing technology, but rather as a thought experiment that could provide – independent of the formalism of traditional mathematics – a precise definition of a mechanical procedure: an algorithm.2 A computer program is essentially an algorithm that determines and organizes the computer’s sequence of operations. For any computational process, the algorithm must be rigorously defined through a finite series of well-defined steps that determine the order of computation.
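Such a rule-table machine is small enough to sketch in a few lines of code. The following is a minimal illustration, not Turing’s own formulation; the rule table (a machine that scans right, overwriting 0s with 1s until it reads a blank) is a hypothetical example:

```python
# A minimal Turing machine sketch: a rule table maps (state, symbol) to
# (symbol to write, head movement, next state). Writing, scanning and
# erasing are all just writes of a different symbol.

def run_turing_machine(rules, tape, state="start", head=0, max_steps=100):
    cells = dict(enumerate(tape))          # sparse tape: index -> symbol
    for _ in range(max_steps):
        symbol = cells.get(head, "_")      # "_" stands for a blank cell
        if (state, symbol) not in rules:   # no applicable rule: halt
            break
        write, move, state = rules[(state, symbol)]
        cells[head] = write                # write the new symbol
        head += {"R": 1, "L": -1}[move]    # move the head one cell
    return "".join(cells[i] for i in sorted(cells))

# Hypothetical rule table: scan right, replacing every 0 with 1.
rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
}
print(run_turing_machine(rules, "0101"))   # prints "1111"
```

The point of the sketch is Turing’s universality claim in miniature: the machine itself never changes; only the rule table – the program – does.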
Obviously, the formalization of the design process in order to capture it algorithmically at first introduced a level of control into architecture which countered the influence of intuition. Algorithmic design, design with the computer, seemed more about creating mechanisms and codes for control than about design. The hype of the digital medium, along with the fascination for cybernetics and information theory throughout the 1950s and 1960s, was therefore quickly followed by disenchantment. The ‘algorithmization’ of design processes seemed to most architects overly constraining and stifling of creativity. It was only with the spread of personal computers and mostly calculus-based interactive software that computer-based architectural production and construction proliferated and eventually began to challenge the profession’s inherited logics of operation. This was in the 1990s, more than 50 years after the invention of Turing’s machine and 70 years after Gropius’ call for prototypical, industrially mass-produced building components that would assemble into unique buildings. With the introduction of digital media, the conception of modularized architecture constructed out of nearly identical, industrially mass-produced components – and thus Gropius’ thesis – has been challenged. Today, with the use of the computer and calculus-based software, architecture can instead be conceptualized as a series. A series is a framework of parameters designed by the architect, within which a variety of design versions may be realized. Each of these design versions is unique and yet also part of the series. The parts assembling each of the series’ designs are no longer necessarily mass-produced but could rather be mass-customized. The argument is not new but is frequently overlooked. Already Greg Lynn wrote in his introduction to the 2nd edition of Folding in Architecture (2004) about the centrality of the digital medium and the role of the differential calculus for rethinking architecture as a series. Retracing Lynn’s debate on the role of calculus back to 1993, when the first edition of Folding in Architecture was published, this paper wants to shift attention from the highly formal-driven use of the digital medium associated with ‘literal folding’ or ‘elegance’ to arrive at a critical assessment of differential calculus and its potential to challenge traditional modes of designing, producing, and constructing architecture.
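The notion of a series as a parameter framework can be illustrated with a small sketch; the two panel parameters and their value ranges below are hypothetical stand-ins for whatever framework an architect might actually fix:

```python
from itertools import product

# A series as a parameter framework: the architect fixes the parameters
# and their ranges; every combination yields one unique design version
# within the same family. Width and curvature are hypothetical examples.

def panel_version(width, curvature):
    """Describe one mass-customizable panel in the series."""
    return {"width": width, "curvature": curvature,
            "label": f"panel_w{width}_c{curvature}"}

widths = [1.0, 1.2, 1.4]        # metres
curvatures = [0.0, 0.1, 0.2]    # 1/metre

series = [panel_version(w, c) for w, c in product(widths, curvatures)]
for version in series:
    print(version["label"])      # nine unique versions, one series
```

Each version is unique, yet none exists outside the framework: the series, not any single object, is what the designer authors.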
2. Differential calculus: Folding elegant architectures?

Gilles Deleuze's book The Fold: Leibniz and the Baroque,3 first published in France in 1988, began to spread its influence throughout the American architecture scene in 1993, when it was first translated into English and quoted in the Architectural Design issue Folding in Architecture.4 Greg Lynn, the guest editor of this issue, intended to rethink architecture by countering both the Deconstructivists' discontinuity and Venturi and Scott Brown's heterogeneity. Lynn advocated instead the continuous, the continuously differentiated, for which Deleuze provided the philosophical argument and, perhaps more importantly, the figure of 'the fold,' which was to preoccupy architects for the next decade – resulting in folding cities, folding grounds, folding buildings, folding interiors, folding done by anybody, anywhere, at any time. What was the fascination that emanated from Deleuze's reading of Leibniz?
For Lynn, a young architect working with the latest software based on differential calculus, Deleuze's reading of Leibniz gave birth to a new logic, that of the "integration of differences within a continuous yet heterogeneous system."5 It was 1684 when Leibniz, a well-known German politician and philosopher, propagated his idea of differentiation with "Nova Methodus pro Maximis et Minimis, Itemque Tangentibus, quae nec Fractas nec Irrationales Quantitates Moratur, et Singulare pro illis Calculi Genus."6 With differentiation, Leibniz invented a method that could calculate and thus comprehend the rates of change of curves and figures. Differential calculus was soon applied to the graphing of physical phenomena of movement, or the graphing of curves for the construction of ship and bridge designs.7 Besides its practical application, Leibniz's differential calculus also had philosophical implications, as it could analyze and thus allow the comprehension of nature as a 'continuous variation,' as a 'continuous development of form.' In Deleuze's reading of Leibniz, differences were hereby no longer thought of in terms of separate entities, but rather in terms of a continuous differentiation according to contingencies – a process Deleuze termed folding. Folding in architecture stands, according to Lynn, for a flexible organization in which dynamic relations replace fixed coordinates, as the logics of curvilinearity depicted by Leibniz's differential calculus underlie the system.8 [Fig. 2] Perhaps most notably, with software based on differential calculus, architectural forms changed "from fragmented polygonal rectilinearity towards smooth continuous splinal curvilinearity, […] subverting both the modernist box and its deconstructionist remains."9
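Leibniz's central operation – comprehending the rate of change of a curve – reduces, in numerical form, to a difference quotient over an ever smaller interval. The sketch below is a generic illustration of that idea, not a reconstruction of any software discussed here; the test curve is arbitrary:

```python
def derivative(f, x, dx=1e-6):
    """Rate of change of f at x: the difference quotient over a tiny dx."""
    return (f(x + dx) - f(x - dx)) / (2 * dx)  # central difference

curve = lambda x: x**3 - x      # a continuously varying figure
print(derivative(curve, 1.0))   # ~2.0, the slope of the tangent at x = 1
```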
Figure 2: Greg Lynn, Differential Curvature, 1999.
Besides Lynn, one other advocate of Folding in Architecture was Lynn's teacher and employer, Peter Eisenman. At first glance, however, Eisenman's triangulated, Form*Z-generated architectures [Fig. 3] of the early 1990s had – at least superficially – next to nothing to do with folding, curvilinearity, or Leibniz's differential calculus.
Figure 3: Peter Eisenman, Emory University Center for the Arts, Atlanta, 1991–1993, folding model, folding bars, Form*Z.

And yet its author Eisenman insisted at length on the dramatic implication of Leibniz's mathematics for architecture, explaining folding in topological terms: "A folded surface maps relationships without recourse to size or distance; it is conceptualized in the difference between a topological and a Euclidean surface. A topological surface is a condition of mapping without the necessary definition of distance. And without the definition of distance there is another kind of time, one of a nomadic relationship of points. These points are no longer fixed by X, Y, Z coordinates; they may be called x, y, and z but they no longer have a fixed spatial place. In this sense they are without place, they are placeless on the topological ground. […] Here the topological event, the dissolution of figure and ground into a continuum, resides physically in the fold; no longer in the point or the grid."10 Folding is seen by Eisenman as a possibility of changing the conception of space, as it draws attention to that which is commonly overlooked: the coordination of space and architecture. Eisenman refers particularly to the most dominant of the coordination devices used in architecture: the so-called Cartesian grid, the ubiquitous Cartesian coordinate system, which was initially developed to allow for the algebraization of geometric forms, combining mathematical calculation and visualization – two otherwise entirely distinct methods – in one machine. The mathematician Gaspard Monge elaborated in 1799 a descriptive geometry at the École Polytechnique that radicalized elementary geometry. [Fig. 4] Monge's invention was a three-dimensional spatial construct that would allow him to map complex geometries and their distribution and relation in space through orthographic projections onto reference planes. The mathematicians Monge, Poncelet (continuity principle), and Brianchon (principle of dualism) utilized the Cartesian coordinates for the mapping and generation of complex curves and surfaces. The so-called Cartesian grid served hereby merely as an abstract machine that would ease the placement of points in space.
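Monge's construction can be caricatured in code: a point in space, given by three coordinates, is mapped onto the reference planes simply by dropping one coordinate per orthographic projection. The plane labels below are our shorthand, not Monge's terminology:

```python
# Orthographic projection onto reference planes (an illustrative sketch):
# plan (XY), front elevation (XZ) and side elevation (YZ).
def project(point):
    x, y, z = point
    return {
        "plan (XY)":      (x, y),   # drop z
        "elevation (XZ)": (x, z),   # drop y
        "elevation (YZ)": (y, z),   # drop x
    }

for plane, coords in project((3.0, 4.0, 12.0)).items():
    print(plane, coords)
```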
Figure 4: The Cartesian coordinate system of Monge (right) and Durand's interpretation thereof (left)

Today Monge's invention serves as a tool to reinforce and consolidate standardization and stasis – an idea initiated by Monge's colleague, the architect J. N. L. Durand, whose interpretation of Monge's grid differed widely from the abstract machine the mathematician had imagined as a tool for mapping movement, dynamic relationships, and curvilinear complexities. In Durand's hands Monge's imaginary grid turned into a universal planning grid, according to which walls and columns were positioned and extruded in a net-like layout. Instead of utilizing the coordinate system as an abstract numerical mapping device, Durand produced gridded architecture by literally building out the coordinate system's units. With Durand's interpretation, the abstract Cartesian grid transformed from an abstract machine into an architectural determinism that unfolded along the grid's predetermined lines and was represented along the grid's orthogonal projection planes. Contemporary architectural drawing conventions still prefer the representations long ago established through Durand's institutionalized depictions – that is, the section, elevation, and floor plan. With the aid of the Cartesian grid, Durand's determinism eagerly coordinated – even typified – the real, thereby greatly ignoring the abstract machine's potential for speculation and for the analysis of the a priori indeterminable. Today, with every usage of standard architectural software – given that it is based on the Cartesian grid – inherited standards, norms and associations become automatically re-installed. Every digital, discrete image is characterized through uniform subdivisions, a finite Cartesian grid of cells – the so-called pixels. Facing the ubiquitous inscriptions of griddedness, it seems all the more necessary to remember that all standards are arbitrary mental constructs. Any standardization is arbitrary, as any tool and software preference is arbitrary – and hence potentially open for change. Eisenman advocates this change, suggesting that the fold could replace the Cartesian grid and hence the comprehension and production of space associated with it.
3.0 Folding versus gridding

Folding, the process of differentiation based on Leibniz's differential calculus, turns in Eisenman's hands into the fold, a formal tectonic thought to be capable of changing not only traditional viewing conventions, but also inherited conceptions of space. [Fig. 5] The fold seems – at least to Eisenman – a perfect device with which to play his games of confusing the imaginary with the real, and the real with the imaginary. The fold presents an alternative to the grid of Cartesian descent, as it presents a challenge – if not a catastrophe – for architecture's planimetric means of representation, which simply cannot cope with the spatial complexities characteristic of the fold. With the new means of representation, new realms of architectural thought and production become possible, as the designer is liberated from the constraints of traditional models of representation.
Figure 5: Peter Eisenman, Emory University Center for the Arts, Atlanta, 1991–1993, folding auditorium, Form*Z, chipboard model.
Eisenman writes that the moment in which "space does not allow itself to be accessed through gridded planes"11 is the moment in which the architect realizes that the process of imaging was always already present in the process of design and its realization – and thus inscribed itself into the material substance of architecture. If architecture in the past was literally informed through the so-called Cartesian grid, so it is in the present, as Eisenman further speculates, informed through the 'fold,' alias differential calculus: "Leibniz turned his back on Cartesian rationalism, on the notion of effective space, and argued that in the labyrinth of the continuous the smallest element is not the point but the fold. If this idea is taken into architecture it produces the following argument. Traditionally, architecture is conceptualized as Cartesian space, as a series of point grids. […] In mathematical studies of variation, the notion of object is changed. This new object is for Deleuze no longer concerned with the framing of space, but rather a temporal modulation that implies a continual variation of matter. The continual variation is characterized
through the agency of the fold: 'no longer is an object characterized by an essential form.' He calls this form of an object 'object event.'"12 Eisenman depicts here what I first termed versioning13 in 2002 – thinking of design no longer as a single entity characterized by an essential form, but rather as a series. Each design-event is hereby comprehended as a unique, intricate version of a whole series of possible designs – all characterized through continuous similarities rather than clearly defined differences. In this sense, folding could have been interpreted by Eisenman as the divergence from the Modernists' mechanical, component-based design and construction technique. Eisenman's own architectural solutions nevertheless remained literal folds, which never realized the potential of his own readings, never inquired into the role of differential calculus for the physical realization of his architecture. Even so, regardless of the literalness of his folding architectures and cities, Eisenman challenged the perception and comprehension of space. Eisenman's architectures destabilize, displace, and make all those accustomed to orthogonal Cartesian space simply sick.
4.0 Parametric design: Architecture as series

While in 1993 the architects included in the seminal AD issue were merely compelled by the potential of the concept of folding, it was not until recently that a more critical – and yet still perfunctory – assessment of the concept, and of its underlying logic of differential calculus, has occurred. Greg Lynn thus thinks today of his designs as a series, as versions. Consequently, what is most interesting to him is less the literal production of folded architecture than Leibniz's differential calculus and its ability to fuse the hierarchy of parts and whole, producing a deeply modulated whole as well as infinitesimal variation among parts. The Embryological House (1999) is a highly diagrammatic version of an architectural series, but it is also one of the few examples in architecture to date that explicitly conceptualizes the idea of differential calculus. Lynn's Embryological House [Fig. 6] is a series of one-of-a-kind houses that are customized for individual clients.
Figure 6: Greg Lynn, Embryological House, 1999.

The houses claim to be adaptable to a full range of sites and climates. Lynn describes the Embryological House as a strategy for the invention of domestic space that engages contemporary issues of variation, customization and continuity, flexible manufacturing and assembly – overall an ambitious concept, which never left the stage of a sketchy proposal. A system of geometrical limits liberates an exfoliation of endless variations: "I design not just one or two of the Embryological House instances. […] It is shocking how few architects can get this because they are so used to thinking of design as a once-and-for-all problem and not serially. Most architects want to understand the Embryological House experiment as a search for an ideal house – as if the whole collection of houses […] was a conceit to then select the best one. They are all equivalent. I love them all equally as if they were my children […]. The design problem was not the house, but the series, the entire infinitesimally extensive and intensive group."14 At the prototyping stage of the Embryological House, Lynn developed six instances exhibiting a unique range of domestic, spatial, functional, aesthetic and lifestyle constraints. In the project description he emphasizes that: "There is no ideal or original Embryological House, as every instance is perfect in its mutation. The formal perfection does not lie in the unspecified, banal and generic primitive, but in a combination of the unique, intricate variations of each instance and the continuous similarity of its relatives. The variations in specific house designs are sponsored by the subsistence of a generic envelope of potential shape, alignment, adjacency and size between a fixed collection of elements. This marks a shift from a Modernist mechanical kit-of-parts design and construction technique to a more vital, evolving, biological model of embryological design and construction."15
Lynn proposes that identity, signature, and meaning tend to move today through series rather than single objects. Calculus is the mathematics for defining these kinds of ensembles. The Coffee & Tea Towers [Fig. 7] Lynn designed for Alessi test this premise as an ensemble of mass-produced one-of-a-kind objects. The set is designed with three modules of container: large and medium-sized pots for hot water, coffee, tea and milk, and small containers for cream, sugar and lemon juice. These containers share the same form at their edges so that they can be combined in various radial arrangements. The pots are designed by combining nine differently shaped curves. The vessels are formed of thin titanium sheets using heat and pressure.
Figure 7: Greg Lynn: Alessi Coffee & Tea Towers, 2000-2003.
Up until now, it is product designers and car designers who have more readily grasped the concept that the design problem of a series, rather than of a single design, is the issue today. Perhaps it is for this reason that Lynn has tested his concepts with product designs for Alessi, rather than with architecture for a client.
5.0 Versioning: Cross scalar seriality
The core idea of versioning, however, goes far deeper than simple variation between different parameterized design iterations. Both the power and the limitations of versioning-as-series are demonstrated by Lynn's work, which suggests the possibility of limitless parameterization across a series. Versioning, however, also operates at the micro-scale, within the structure of digital design itself. The key to a deeper understanding of the thorough infiltration of the versioning idea into contemporary aesthetics is the long delay between Leibniz's discovery of calculus and its common usage in design. The sinuous curve and biomimetic form are not in themselves difficult to accomplish without computers. What is difficult is to make them mathematically malleable and variable. This is where higher-order thinking comes in: it is not just curves themselves, but curves of curves, recursive functions as they literally inform computer-generated form. The computer's power lies in its tireless looping – its ability to perform millions of operations in a single second and to shift and recalculate functions continuously. It is not just "the curve" which characterizes digital form, but the parameterized curve – the folding and shifting of the two-dimensional curve across a third dimension. [Fig. 8] Not the curve alone, but also its derivatives and inflection points – the core of calculus-based mathematical analysis – are what inform digital form-making at its deepest level.
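As a sketch of this curve analysis at work – our illustration, with an arbitrary test curve, not the authors' software – the following fragment locates inflection points by scanning a curve for sign changes of its second derivative:

```python
def second_derivative(f, x, dx=1e-4):
    """Numerical second derivative via the central difference formula."""
    return (f(x + dx) - 2 * f(x) + f(x - dx)) / dx**2

def inflection_points(f, lo, hi, samples=999):
    """Scan [lo, hi] for sign changes of f'' - the inflections of the form."""
    step = (hi - lo) / samples
    xs = [lo + i * step for i in range(samples + 1)]
    return [
        0.5 * (a + b)                     # midpoint of the bracketing pair
        for a, b in zip(xs, xs[1:])
        if second_derivative(f, a) * second_derivative(f, b) < 0
    ]

curve = lambda x: x**3 - 3 * x              # inflects at x = 0
print(inflection_points(curve, -2.0, 2.0))  # ~[0.0]
```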
Figure 8: Rocker-Lange Architects, cross scalar seriality, 2008.
Versioning is at the core of digital form itself; its signature and its authenticity derive from the parameterized repetition which gives computer-generated design its characteristic combination of tightly disciplined structure and formal variability. It is not just the new calculus-powered curvaceousness that is characteristic of a digitally informed age; it is also the disciplined groundwork of order that underpins the whole operation – the rhythm of a powerful Turing machine that drives the versioning at the heart of the digital aesthetic. Is designing architecture then nothing other than curve analysis? Not at all. The aim of versioning is certainly not to excel in the use of differential calculus, nor to turn architecture into curve sketching – it is rather a case-sensitive tool for the critical assessment of highly specialized needs, which could range from environmental factors to personal preferences. And furthermore, versioning is not just the key to a hyper-individuality of design, but is rather a crucial component of digital production.
Figure 9: Rocker-Lange Architects, Explorations in cross scalar seriality, 2008.
For the last two years, my research in algorithmic architecture at the Harvard Graduate School of Design16 has focused on versioning understood as cross-scalar and cross-functional seriality. One focus of the study is the role of differential calculus in the generation of surface, in which "surface" may hereby be considered at once structural and ornamental, functioning and signing.
The intent is to design and control the surface, highlighting infinitesimally scaled components. By embedding the idea of versioning into the scale of the design itself – that is, into the gradients which repeat and dissolve across a surface – a loss of modularity in favor of the infinitesimal component occurs. Architecture is viewed neither as fragmented nor contradictory, but rather as an integrative, intensive whole. With computer-controlled manufacturing techniques, infinitesimal variations and "one-of-a-kind customized variety"17 are now possible. This is why code matters.

Endnotes
1 Alan M. Turing, "On computable numbers, with an application to the Entscheidungsproblem," Proc. London Math. Soc., Ser. 2, 42 (1936–37), pp. 230–65.
2 The word 'algorithm' etymologically derives from the name of the 9th-century Persian mathematician Abu Abdullah Muhammad bin Musa al-Khwarizmi. The word 'algorism' originally referred only to the rules of performing arithmetic using Hindu-Arabic numerals, but evolved via the European-Latin translation of al-Khwarizmi's name into 'algorithm' by the 18th century. The word came to include all definite procedures for solving problems or performing tasks.
3 Gilles Deleuze, The Fold: Leibniz and the Baroque, foreword and translation by Tom Conley, Minneapolis: University of Minnesota Press, 1993. Originally published in French as Le Pli: Leibniz et le baroque, Paris: Les Éditions de Minuit, 1988.
4 Greg Lynn, "Folding," in: Folding in Architecture, guest-edited by Greg Lynn, Architectural Design, vol. 63, no. 3–4, 1993.
5 Greg Lynn, "Architectural Curvilinearity: The Folded, the Pliant and the Supple," in: Folding in Architecture, guest-edited by Greg Lynn, Architectural Design, vol. 63, no. 3–4, 1993, p. 8.
6 Gottfried Wilhelm Leibniz, "A New Method for Maxima and Minima, as Well as Tangents, Which Is Impeded Neither by Fractional nor by Irrational Quantities, and a Remarkable Type of Calculus for This," 1684.
7 The other great discovery of Newton and Leibniz – closely related to the finding of the differential calculus – was the finding of areas under curves: the integral calculus. Leibniz approached this problem by thinking about the area under a curve as a summation (∫) of the areas of infinitely many infinitesimally thin rectangles (dx) between the x-axis and the curve.
8 Greg Lynn, "Architectural Curvilinearity: The Folded, the Pliant and the Supple," in: Folding in Architecture, guest-edited by Greg Lynn, Architectural Design, vol. 63, no. 3–4, 1993, p. 11.
9 Ingeborg M. Rocker, "Calculus-based form: an interview with Greg Lynn," in: Architectural Design, vol. 76, no. 4, 2006, pp. 88–95.
10 Peter Eisenman, "Folding in Time: The Singularity of Rebstock," in: Folding in Architecture, guest-edited by Greg Lynn, Architectural Design, vol. 63, no. 3–4, 1993, p. 25.
11 Peter Eisenman, "Visions unfolding: Architektur im Zeitalter der elektronischen Medien," in: Peter Eisenman, Aura und Exzess: Zur Überwindung der Metaphysik der Architektur, Wien: Passagen Verlag, 1995, pp. 203–225, here p. 214. "Der Raum läßt sich nicht weiter durch gerasterte Ebenen erschließen."
12 Peter Eisenman, "Folding in Time: The Singularity of Rebstock," in: Folding in Architecture, guest-edited by Greg Lynn, Architectural Design, vol. 63, no. 3–4, 1993, p. 24.
13 Ingeborg M. Rocker, "Versioning: Evolving Architectures – Dissolving Identities; Nothing is as Persistent as Change," in: Architectural Design, vol. 72, no. 5, 2002, pp. 10–17. See in this context also my discussion with Greg Lynn about his Embryological House series and his Tea Pot series for Alessi in: Ingeborg M. Rocker, "Calculus-based form: an interview with Greg Lynn," in: Architectural Design, vol. 76, no. 4, 2006, pp. 88–95.
14 Greg Lynn in a conversation with Ingeborg Rocker, February 2005. See for further details: Ingeborg M. Rocker, "Calculus-based form: an interview with Greg Lynn," in: Architectural Design, vol. 76, no. 4, 2006, pp. 88–95.
15 Greg Lynn, FORM, project description for the Embryological House.
16 The research took place in cooperation with Miranda Callahan, co-author of this text's section "Versioning: Cross scalar seriality," and Christian J. Lange, Rocker-Lange Architects.
17 Greg Lynn, "Introduction," in: Folding in Architecture, Architectural Design, 2nd revised edition, London: Wiley-Academy, 2004, pp. 8–13.
What Comes First: the Chicken or the Egg? Pattern Formation Models in Biology, Music and Design

Katerina Tryfonidou, Dipl. Architect, Lecturer, Washington University in St. Louis. Greece. katerina@post.harvard.edu
Dimitris Gourdoukis, Dipl. Architect, Lecturer, Washington University in St. Louis. Greece. object.e.architecture@gmail.com

Abstract
The popular saying that wonders whether the egg comes before the chicken or vice versa implies a vicious circle in which all the elements are known to us and one simply succeeds the other in a totally predictable way. In this paper we will argue, using arguments from fields as diverse as experimental music and molecular biology, that development in architecture, with the help of computation, can escape such a repetitive motif. On the contrary, by employing stochastic processes and systems of self-organization, each new step can be a step into the unknown, where predictability gives way to unpredictability and controlled randomness.

01. Music
The Greek composer and architect Iannis Xenakis, in his book Formalized Music,1 divides his works – or better, the methods employed in order to produce his works – into two main categories: deterministic and indeterministic models. The two categories, deriving from mathematics, refer to whether or not randomness is involved in the compositional process. As Xenakis himself explains, "in determinism the same cause always has the same effect. There's no deviation, no exception. The opposite of this is that the effect is always different, the chain never repeats itself. In this case we reach absolute chance – that is, indeterminism."2 In other words, a deterministic model does not include randomness and therefore will always produce the same output for a given starting condition. Differential equations, for example, tend to be deterministic. On the other hand, indeterministic or stochastic processes involve randomness, and will therefore produce different outputs each time the process is repeated, given the same starting condition. Brownian motion and Markov chains are examples of such stochastic mathematical models. Xenakis' compositional inventory includes processes from both categories.3 As said above, the use of stochastic models in composition results in a process that produces a different outcome each time it is repeated. For example, using Brownian motion4 (see figure 01: particles generated using Brownian motion5) in order to create the glissandi of the strings means that the glissandi are generated through a process that includes randomness; if we try to generate them again we will get a different output. At the same time, all the different results of the process will share some common characteristics. With that in mind, one would expect such a musical composition to vary – at least in some aspects – each time it is performed. However, that is not the case with Xenakis' works. While he employed stochastic processes for the generation of several parts of his scores, he always "translated" his compositions into conventional musical notation, with such detail that he left no space at all for the performer to improvise or to approach the composition in a different way. In other words, the generation of the score involves randomness to a great extent, but the score is finalized by the composer so that each time it is performed it remains the same.
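As an illustration of the stochastic glissando idea – a sketch of ours, not Xenakis' actual procedure – a Brownian walk in pitch space produces a different contour on every run while preserving the family's shared character:

```python
import random

def brownian_glissando(start_pitch, steps, step_size, seed=None):
    """A Brownian walk in pitch space: each step adds a random increment."""
    rng = random.Random(seed)
    pitch, contour = start_pitch, [start_pitch]
    for _ in range(steps):
        pitch += rng.gauss(0.0, step_size)   # random (Gaussian) increment
        contour.append(pitch)
    return contour

# Two runs from the same starting condition diverge - the mark of an
# indeterministic (stochastic) process in Xenakis' sense.
print(brownian_glissando(60.0, 5, 1.0))
print(brownian_glissando(60.0, 5, 1.0))
```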
Figure 01: Brownian motion. Object-e architecture: space_sound. 2007.

What is maybe even more interesting is that Xenakis did compose scores that are different each time they are performed. However, those scores usually employ deterministic mathematical models – models that do not include randomness. In those cases the situation is inverted: the generation of the score is deterministic, but the performance may vary. An example of the latter case is Duel, a composition based on game theory.6 The composition is performed by two orchestras guided by two conductors, and is literally a game between the two that in the end yields a winner. Each conductor has to select, for each move, one of seven options predefined by the composer. A specific scoring system is established, and the score of each orchestra depends on the choices of the two conductors.7 The result of this process is that each time the composition is performed, the outcome is different. Therefore, a deterministic system with seven specific and predefined elements produces a result that varies in each performance of the score. To make things even more complicated, the seven predefined musical elements were composed by Xenakis with the use of stochastic processes. To summarize the structure of Duel: Xenakis generated seven different pieces using stochastic processes – pieces, therefore, that include randomness. Those pieces, however, were finalized by the composer into a specific form. Then the pieces are given to the conductors, who are free to choose one for each move of the performance. The choice of each conductor, however, is not random: "… it is [not] a case of improvised music, 'aleatory', to which I am absolutely opposed, for it represents among other things the total surrender of the composer. The Musical Game accords a certain liberty of choice to the two conductors but not to the instrumentalists; but this liberty is guided by the constraints of the Rules of the Game, and which must permit the music notated by the score to open out in almost unlimited multiplication."8 So the choices of each conductor are based upon the strategy he follows in order to win the game, and consequently upon the choices of the other conductor. Therefore the final performance of the score is different each time. Xenakis was quite specific in his decisions regarding the use of deterministic or indeterministic processes. In most cases he employs models from both categories in each composition. More importantly, while he names his music "stochastic music," the stochastic part – the place where randomness occurs – is always internal to the process and totally controlled by the composer. The final result – the score that reaches the performer, or even more the listener – is always specific. Even in the cases where the outcome may vary, it still does so within a set of predefined solutions already predicted by the composer.
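The structure of Duel can be caricatured in a few lines of code: stochastically generated material (here, a random payoff table standing in for the seven pre-composed pieces) played out through deterministic strategies. The scoring and the strategies below are invented for the sketch; they are not Xenakis' actual rules.

```python
import random

N_PIECES = 7
rng = random.Random()   # unseeded: the "pieces" differ on each run

# Hypothetical payoff table: payoff[i][j] is conductor A's gain when A
# chooses piece i against B's piece j (zero-sum between the two).
payoff = [[rng.randint(-3, 3) for _ in range(N_PIECES)] for _ in range(N_PIECES)]

def a_reply(b_last):    # A deterministically maximizes the payoff...
    return max(range(N_PIECES), key=lambda i: payoff[i][b_last])

def b_reply(a_last):    # ...while B deterministically minimizes it
    return min(range(N_PIECES), key=lambda j: payoff[a_last][j])

a_move = b_move = score = 0
for _ in range(10):     # ten "moves" of the performance
    a_move, b_move = a_reply(b_move), b_reply(a_move)
    score += payoff[a_move][b_move]
print("conductor A's final score:", score)
```

The randomness is sealed inside the generation of the material; the performance itself is a deterministic game whose outcome nevertheless varies from run to run.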
02. Life Science
The study of morphogenesis, one of the most complex and amazing research topics of modern biology, aims to provide understanding of the processes that control the organized spatial distribution of cells during the embryonic development of an organism – in other words, how, starting from a single fertilized egg, through cell division and specialization, the overall structure of the body anatomy is formed.9 According to Deutsch and Dormann,10 there is a continuum of different approaches to the problem: at one end there are theories of preformation, and at the other, systems of self-organization. The concept of preformation assumes that any form is preformed and static; any new form is therefore always a result of a combination of already existing forms. Taking a different approach, self-organization implies a de novo pattern formation that is dynamic and develops over time. Morphogenesis in the self-organization model depends on the interaction between the initial cells or units. Preformation is a top-down idea, while self-organization is a bottom-up system. In both cases, research uses computation as the necessary medium for the simulation of the biological processes. We will argue that, according to the latest research in biology, morphogenesis can be approached as a process which involves both the notion of preformation and that of self-organization. During morphogenesis, cells proliferate and specialize, i.e. they choose which subset of proteins to express. But how do cells know when to divide or where to specialize? How do cells know where to become skin or bone, or how many fingers they should form? It turns out that the key to understanding morphogenesis is the way cells sense and respond to their environment. Cells obtain information about their environment by using proteins embedded in their membrane to sense specific "message" proteins located around them. When such a "message" protein binds to a membrane protein, the cell "receives" the message and acts accordingly.11 Therefore, during morphogenesis there is a constant interaction, through such "message" proteins, between each cell and its neighboring cells. This interaction helps cells understand where in the body they are located, when they should divide, and when they need to specialize into some particular type of cell. This function – or better, sequence of functions – has been the focus of scientific research for the past two decades. Nowadays, molecular biology can accurately describe many steps of the reactions between proteins and how they are related to cell specialization.12 It has been shown that these reactions follow specific physical laws which can be described by mathematical models. For example, given a pair of proteins, the properties of the resulting interactions are known. Because of the physical laws being applied, the model of the function of cells is a deterministic one, since it is made of many elementary interactions between proteins that have well-defined inputs and outputs. Taking this notion a step further, one could argue that the function of the cells implies the idea of preformation; that is, from two predefined elements only one possible combination can occur. In a way, the deterministic rules that the reactions of the proteins follow can be seen as a model of preformation, where there is only one output given a specific input. Although the reactions between the proteins inside the cell follow specific, well-defined rules, there is a great degree of unpredictability in the life and function of each cell. Why can science not predict exactly what the next "moves" of the cells will be, and thus control all the functions in a (human) body?
Although the nature of the outcome of the interaction between two proteins has been studied and analyzed, it is not possible to define deterministically when and where this interaction will take place, since proteins move randomly in space and can interact only if they come into proximity and under the proper relative orientation. This is true for proteins inside and outside the cell. Furthermore, it is not possible to define the exact location of neighboring interacting cells, when each cell will sense the presence of a "message" protein, when and how much the cell will respond to this signal by secreting its own "message" proteins, and when its neighbors will sense this signal. Given the number and complexity of the functions in each cell, as well as the vast possibilities of interactions with its neighboring cells, the large number of processes that could potentially happen cannot be expressed by deterministic models. Since there is, to a certain degree, randomness in cellular functions, science turned to stochastic models in order to explain them. That is, instead of deterministic mathematical models, scientists use models that incorporate probabilities, in order to include the large number of possible actions. Brownian motion, for example, is the stochastic model that
describes the movement of particles in fluids, and is therefore used to describe the movement of proteins inside the cell. Stochastic processes can describe the spatial and temporal distribution of interactions inside cells and between neighboring cells. To understand the importance of the stochastic component of cell function, here is another example: even though monozygotic twins have exactly the same DNA, they look similar but not identical. If the cell-response system were purely deterministic, then babies that have the same DNA should look identical. Nevertheless, such twins look very much alike, but they are not identical. The small differences in their physical appearance occur because of the stochastic nature of protein motion and interaction during morphogenesis. Even though the initial information was the same, and even though the outcome of protein reactions follows deterministic rules, the exact location of cells and proteins can only be described in a stochastic mode. The stochastic part of cellular functions could, in a different framework, be seen as a model of self-organization. For people outside the biological scientific community, the introduction of randomness into research seems particularly intriguing. Instead of a process of preformation (specific aspects of cell function that can be described by deterministic models), in the self-organizational model cell function results in something different that cannot be described by deterministic rules. Cell functions depend on the fact that each cell is part of a whole. Together with their neighbor cells, they react to external stimuli and to a large extent define the characteristics of the whole, as part of a bottom-up process. Deutsch and Dormann focus on the absence of distinction between organizer and organized in self-organized systems: "In self-organized systems there is no dichotomy between the organizer and the organized. Such systems can be characterized by antagonistic competition between interaction and instability."13 To a certain extent, cell functions acquire a self-organizational character, because the transformations depend on the interaction of the cells with each other. There are many examples like the above in molecular biology to make the point that both deterministic and stochastic processes are used to describe the phenomena of life. As an overall assumption, many of the microscopic phenomena in life science follow deterministic rules, but the macroscopic outcomes can be described only in a stochastic way. Following this thought, we argue that models of preformation and self-organization, as described above, can exist simultaneously in a system. The case of cell-cell interaction in general, and of morphogenesis in particular, depicts the complex processes that occur and highlights which part of the process suggests a deterministic, preformed model and which part follows a stochastic model of self-organization.
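The twins observation can be caricatured computationally. In the sketch below – an illustration under our own assumptions, not a biological model – the same deterministic "DNA rule" is driven by different random protein encounters, yielding similar but never identical outcomes:

```python
import random

def develop(dna_rule, seed, steps=1000):
    """Deterministic rule + stochastic encounters -> a developed trait."""
    rng = random.Random(seed)
    trait = 0.0
    for _ in range(steps):
        encounter = rng.gauss(0.0, 1.0)   # random protein motion
        trait += dna_rule(encounter)      # deterministic reaction to it
    return trait

same_dna = lambda signal: 0.01 * signal if signal > 0 else 0.0

twin_a = develop(same_dna, seed=1)
twin_b = develop(same_dna, seed=2)        # same rule, different encounters
print(twin_a, twin_b)   # similar in magnitude, never identical
```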
03. Design
The two cases we have already examined – Xenakis' work in musical composition and the study of morphogenesis in molecular biology – are both dependent to a great extent on the same medium: computation. Xenakis used the computer as the means to transform mathematical models into music almost from the beginning of his career. At the same time, it would be impossible for researchers today to study the extremely complex phenomena involved in the development of life without the use of the computer. The use of the computer, of course, has also become one of the main driving forces behind design today. The encounter of computation with design happened rather late, and in the beginning took the form of an exploration of the formal possibilities that software packages were offering. That initial – maybe "immature" but still experimental – approach soon gave way to a widely generalized dominance of digital means in every aspect related to architecture: from design strategies to the construction industry. However, using the computer in an architectural context does not necessarily mean that we are taking advantage of the opportunities and the power that computation has to offer. More often than not, the use of computers in architecture today serves the purpose of the
“computerization” of already predefined processes and practices – aiming usually to render them more efficient or less time consuming. That might be convenient, it does not promote however the invention of new ways to think about architecture. As Kostas Terzidis notes, “computerization is the act of entering, processing or storing information in a computer… [while] … computation is about the exploration of indeterminate, vague, unclear and often ill-defined processes”14. While automating and mechanizing every-day architectural tasks may be useful, the true gain for architecture in relation to digital media lies in the understanding of what a computational design process can really be. Only this way we can use computers in order to explore the ‘unknown’; in order to invent new architectures. We believe that the already mentioned examples in biology and the music of Xenakis can offer the means to better understand where these creative possibilities of computation are laying. Of course computation has numerous applications in several different fields, the selection of those two specific cases as guidelines however, has a very specific motivation: The biological approach is providing scientific, highly developed techniques that have been tested thoroughly and at the same time can show us how computation and digital tools can become the bridge between complex processes taking place in the physical world and the way that space is created. Xenakis’ work on the other hand, is an example of computational techniques used outside their strict scientific origins; that is in order to compose a musical score. Therefore they can provide insights about how those methods can be used in order to create an art-form. The work of Xenakis is pointing out one of the most import aspects that computation is bringing into (architectural or musical) composition: the introduction of randomness. One can argue of course that architects always had to take decision based on chance. However humans are not really able of creating something totally random. For example if we ask somebody to draw a random line; between the decision to draw a line and the action of drawing the line, there are several different layers that affect the result: ones idea of what a line is, ones’ idea of what random is, the interpretation of the phrase “draw a random line” etc. On the contrary computers are very good at producing randomness (see figure 02: stochastic algorithm positioning and scaling a box randomly). If we program a computer to draw a random line, then the computer will simply draw a line without anything external interfering between the command and the action. The ability to produce randomness, combined with the ability to perform complex calculations is defining the power of computational stochastic processes. And as we have already seen with the work of Xenakis, randomness can be controlled. Therefore the architect/programmer can specify specific rules or define the range within which the stochastic process will take place and then the computer will execute the process and produce results that satisfy the initial conditions. The advantage of such a process lays in the fact that through randomness the architect can be detached from any preconceptions that he or she may have about what the result should be, and therefore it will become easier to generate solutions initially unpredicted. 
By defining the rules and letting the computer generate the results, we open ourselves to a field of almost endless possibilities; designers can produce results that they could not even imagine at the beginning of the process, while still maintaining control of the process and the criteria that should be met.
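In the spirit of figure 02, a stochastic process of this kind takes only a few lines; the bounds and size range below are hypothetical parameters standing in for the designer's rules:

```python
import random

def scatter_boxes(n, extent=100.0, min_size=2.0, max_size=10.0, seed=None):
    """Position and scale n boxes at random, within the designer's rules."""
    rng = random.Random(seed)
    boxes = []
    for _ in range(n):
        size = rng.uniform(min_size, max_size)   # random scaling
        x = rng.uniform(0.0, extent - size)      # random position,
        y = rng.uniform(0.0, extent - size)      # kept inside the site
        boxes.append((x, y, size))
    return boxes

for box in scatter_boxes(5, seed=42):
    print(box)   # the same seed reproduces the run; no seed, a new result
```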
Figure 02: Stochastic distribution. Object-e architecture: space_sound. 2007.

The most important aspect to realize – and Xenakis' work makes it easier to see – is that the power of algorithms used in composition lies in the process and not in the final result. If it is a building that we need to produce, then a building might be produced in the end, as a musical composition was produced in the end by Xenakis. The importance, however, moves from the final result to the process that we use to generate it; the architect no longer designs the building, but the process that generates it. To point it out once again: computation defines a process, not an output. And exactly because it allows us to focus on the process and not on the results, those results can be unexpected.
Figure 03: student: Josie Kressner

While Xenakis' work emphasizes the importance of process over final output, along with the stochastic properties of algorithms, the example from biology, when applied to architecture, highlights another important aspect raised by the use of algorithms: that of self-organization. Architectural tradition, starting with the Renaissance, is heavily based upon the idea of the "master": an architect with a specific vision, which he or she materializes through his or her designs, subsequently creating a style. Self-organization, however, implies a totally different idea: the architect does not actualize through design something that he or she has already conceived. On the contrary, the architect creates the rules, specifies the parameters and runs the algorithm; the output is defined indirectly. Through computation and after many iterations, even the simplest rules can provide extremely complex results, which are usually unpredictable. Moreover, by a simple change in the rules something totally different may arise. The top-down idea of architecture, with the architect at the top level and his or her creations at the bottom, is inverted: the process begins from the bottom. Simple elements interact with each other locally, and through the iterative application of simple rules, complex patterns start to emerge. The architect no longer designs the object, but the system that will generate the final output. An example of a pattern formation model with self-organizational properties is that of cellular automata, which are used extensively in several different fields, and lately also in architecture. A cellular automaton is a self-organized system where complex formations arise as a result of the interaction and the relations between the individual elements. The
simplicity of the model, combined with its ability to produce very complex results and to simulate a very wide range of different phenomena, makes it a very powerful tool that allows the architect to be disengaged from the creation of a specific output and to focus instead on the definition of a process (see figure 04: a one-dimensional cellular automaton and a surface generated by the application of the rules).
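A minimal one-dimensional cellular automaton, in the spirit of figure 04, can be written as follows; the rule number and grid size are arbitrary choices for the illustration. Each cell's next state follows only from its local neighborhood, and stacking the successive generations yields the surface-like field:

```python
def step(cells, rule=30):
    """One generation of an elementary CA: rule bits index neighborhoods."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

row = [0] * 31
row[15] = 1                          # a single seeded cell
for _ in range(16):                  # iterate the simple local rule
    print("".join(".#"[c] for c in row))
    row = step(row)                  # complex pattern emerges bottom-up
```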
Figure 04: student: Lauren Matrka

The possibilities arising for architecture are virtually infinite: from the creation of self-organized and self-sustained ecosystems to the study and planning of urban growth. In place of externally imposed "sustainable" rules, we can have internally defined rules that form the generative process. In place of applying external "planning" strategies to cities, we can study urban entities as organisms, as systems that grow following specific rules defining the interaction between their elements. Yet, as noted in the example of protein interaction, self-organization is not encountered on its own. It always functions together with other, deterministic or preformed, systems – the same way that Xenakis used indeterministic processes in relation to deterministic ones. What stochastic processes and self-organization offer to architecture are the means to engage the unknown, the unexpected; the means to move away from any preconceptions that define what architecture is or should be, towards processes that explore what architectures can be. As Marcos Novak writes, "by placing an unexpected modifier x next to an entity y that is presumed – but perhaps only presumed – to be known, a creative instability is produced, asking, 'how y can be x?'"15 In that sense, by placing models of self-organization next to models of preformation, or stochastic processes next to deterministic processes, we are not only inventing new systems or new architectures, but also discovering new qualities of the systems that we already know – or thought we knew.
Endnotes
1 See Xenakis, I. Formalized Music: Thought and Mathematics in Composition. New York: Pendragon Press, 1992.
2 See Varga, B.A. Conversations with Iannis Xenakis. London: Faber and Faber Limited, 1996, p. 76.
3 In the "deterministic" category of Xenakis' work fall compositions like Akrata, Nomos Alpha and Nomos Gamma, while examples of the "indeterministic" approach can be found in compositions like N'Shima (Brownian motion) and Analogique (Markov chains).
4 Brownian motion in mathematics (also the Wiener process) is a continuous-time stochastic process. In physics it is used to describe the random movement of particles suspended in a liquid or gas.
5 Figures 1–2: from the project space_sound, Object-e architecture, 2007. Figures 3–4: student work, School of Architecture, Washington University in St. Louis.
6 Game theory is a branch of applied mathematics that studies behavior in strategic situations, where an individual's success in making choices depends on the choices of others.
7 For a detailed description of Duel see Xenakis, I. Formalized Music: Thought and Mathematics in Composition. New York: Pendragon Press, 1992, pp. 113–122.
8 Xenakis, I. Letter to Witold Rowicki. See Matossian, N. Xenakis. New York: Taplinger Publishing Co., 1986, pp. 164–165.
9 http://en.wikipedia.org/wiki/Morphogenesis, 03/22/08.
10 See Deutsch, A. & Dormann, S. Cellular Automaton Modeling of Biological Pattern Formation. Boston: Birkhäuser, 2005.
11 See Sadava, D., et al. Life: The Science of Biology. Freeman Company Publishers, 2006.
12 See Lodish, H., et al. Molecular Cell Biology. Freeman Company Publishers, 2007.
13 See Deutsch, A. & Dormann, S. Cellular Automaton Modeling of Biological Pattern Formation. Boston: Birkhäuser, 2005.
14 See Terzidis, K. Expressive Form: A Conceptual Approach to Computational Design. New York: Spon Press, 2003, p. 67.
15 See Novak, M. "Speciation, Transvergence, Allogenesis: Notes on the Production of the Alien." AD vol. 72, no. 3, Reflexive Architecture, Spiller, N. (ed.). London: Wiley-Academy, 2002, p. 65.
Tools
Moderators: Taro Narahara and Kostas Terzidis
Papers:
Sawako Kaijima and Panagiotis Michalatos – Simplexity, the programming craft and architecture production
Aya Okabe, Tsukasa Takenaka, and Jerzy Wojtowicz – Beyond Surface: Aspects of UVN world in Algorithmic Design
Orkan Telhan – Towards a Material Agency: New Behaviors and New Materials for Urban Artifacts
Bernhard Sommer – Generating topologies: Transformability, real-time, real-world
Josh Lobel – The representation of post design(v.) design(n.) information
Serdar Asut – Rethinking the Creative Architectural Design in the Digital Culture
Jerry Laiserin – Digital Environments for Early Design: Form-Making versus Form-Finding
Yanni Loukissas – Keepers of the Geometry: Architects in a Culture of Simulation
Simon Kim and Mariana Ibanez – Tempus Fugit: Transitions and Performance in Activated Architecture
Simplexity, the Programming Craft and Architecture Production

Sawako Kaijima, Computational Design Researcher, Adams Kara Taylor, London, UK. sawako@akt-uk.com / Kaijima@gmail.com
Panagiotis Michalatos, Computational Design Researcher, Adams Kara Taylor, London, UK. panmic@akt-uk.com / panhimic@hotmail.com

Abstract
In recent years, digital design tools have become prevalent in the design community, and their capacity to manipulate geometry has grown into a trend among architects to generate complex forms. Working as computational design consultants in an engineering firm, between architecture and engineering, we often come across the problems generated by a superficial use of digital tools in both disciplines and the incapacity of the current system to cope with their byproducts. Here we will discuss the problems we see with the current system and the opportunities opened up by digital design tools. Two guiding concepts are simplexity [the desire to fine-tune and build a system that yields a solution to a specific design problem by collapsing its inherent complexity] and defamiliarization [a side effect of having to represent things as numbers]. Both can affect the designer as an individual who chooses to engage with digital media, as well as the production system in which he/she is embedded, since he/she will have to find new channels of communication with other parties. To demonstrate our strategy and the obstacles faced, we will examine our involvement in the development of a computational design solution for a small house designed by Future Systems architects.
1. Introduction
During the past two decades we have witnessed an ever-increasing invasion of information technologies and digital media into the field of architecture production. For hard socioeconomic reasons, digital design tools have become ubiquitous in the field. As Jane Jacobs writes, "The conflict between the process of adding new work to old and the guilds' categories of work was a constant source of wrangling in medieval European cities." The same struggle exists today, and will certainly continue into the future.1 And as Hannah Arendt notes in The Human Condition: "if the human condition consists in man's being a conditioned being for whom everything, given or man-made, immediately becomes a condition of his further existence, then man 'adjusted' himself to an environment of machines the moment he designed them."2 Therefore, debating the question of why digital technology should be used becomes insignificant, as it stems from a mere resistance to the currently established structure. What should be debated is how architecture should readapt to the diffusion of digital technology by adding to the knowledge accumulated until now. Today, most of the parties involved in architecture production utilize various software tools in order to perform some of their tasks. Though software tools generally try to facilitate production within each party, they share a common underlying representation of information in digital form. The fact that a consistent representation of the
problems, intentions and solutions enclosed within each domain exists is significant, since it sets the ground for a rearrangement of the production system to occur. As architects focusing on computational design, standing between the two domains of architecture and engineering, we will present our observations and our effort to pierce the complex considerations posed by the new media.
Figure 1. Computational design in AKT projects.
2. Programming Simplexity

2.1. Complexity and the emergence of the cliché
Figure 2. Iterative Function System tree
One of the side effects of the rapid adoption of digital design tools, at least in the domain of architecture, was a fascination with geometry and the possibilities of producing prolific forms. There lies the misconception that advanced tools give advanced results, and the confusion between the logically possible on one side and the technically, technologically and economically possible on the other. This turn to the complex was justified as a way to create "novel" forms. Does the use of digital design tools guarantee a reduction in the use of clichés or formal biases in architecture? Probably not, as attested by the endless series of organic and biomorphic things that constitute the bulk of production of many architecture schools. There is a trend in the architecture community to visually manipulate "complex" forms without intellectually engaging with the concepts underneath digital computation, which yields problematic conditions, especially at the interfaces with other disciplines. One reason for this is that current software prioritizes visual information and is based on metaphors of previous processes, like drafting and mechanical assembly. Another reason is the adoption of easy algorithms that produce "spectacular" results with minimum effort and intellectual investment.3 In this respect the concept of complexity has acquired an emblematic function, becoming the master signifier for a whole school of computational design.4 These ill effects are partly the result of an attempt to use programming in order to construct a pseudo-science of design and hence justify, and present as rational, inherently irrational choices. We will try to show that programming cannot be the basis for a design science, since programming itself is a craft. Moreover, there is a complementary concept to complexity, borrowed from system theory, which can help us reframe the aims, possibilities, aesthetics and ethics of computational design.

2.2. Simplexity
2.2. Simplexity

Figure 3. simplexity in a particle system.
Simplexity is the complementary term to complexity that we employ in order to show a different path one can follow when using digital media in design. We saw how the use of simplistic rule systems with only stylistic, visual considerations gives rise to the clichés of complexity. There is, however, a whole class of algorithms that deal with simplification instead of proliferation. These are usually more complex and require stronger engagement with the concept lying underneath the computation. Simplification that produces meaningful results and renders a complex system more accessible to human thought, or more efficient, is harder to achieve. This is because filtering, reduction, selection, and abstraction are procedures that require intelligent and responsible choices, as well as some way to refer to and operate on the totality of a system. The simplicity that emerges out of such an intricate and complex set of rules is called simplexity. Many of these algorithms require from the designer a deeper understanding of the system. Simplexity is the aesthetics of a programming craft that results from thorough fine-tuning of the design system and expresses the will to understand and make decisions rather than to proliferate and decorate.
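To make the contrast with the proliferating IFS concrete, here is a hedged sketch of a simplifying algorithm of the kind Figure 3 alludes to: a particle system reduced by repeatedly merging the closest pair of particles while conserving mass and centre of mass. The merging criterion, the particle counts, and the brute-force pairwise search are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
pos = rng.random((200, 2))     # 200 particles scattered in the unit square
mass = np.ones(200)

def merge_closest(pos, mass):
    """Merge the closest pair into one particle, conserving total mass and
    the mass-weighted centroid (the 'responsible choice' of the reduction)."""
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                    # ignore self-distances
    i, j = np.unravel_index(np.argmin(d), d.shape)
    m = mass[i] + mass[j]
    c = (mass[i] * pos[i] + mass[j] * pos[j]) / m
    keep = np.ones(len(mass), dtype=bool)
    keep[[i, j]] = False
    return np.vstack([pos[keep], c]), np.append(mass[keep], m)

while len(mass) > 20:          # reduce 200 particles to 20 representatives
    pos, mass = merge_closest(pos, mass)
```

Deciding what the merged representative should preserve, here mass and centroid, is exactly the kind of intelligent, responsible choice that simplification demands.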
2.3. Programming as a craft
The fact that programming is a craft becomes apparent if one considers that a good computer scientist is not necessarily a good programmer, and a good programmer is not necessarily a good designer. The first is about theoretical knowledge; the second is associated with experience and technical artisanship; the third includes the ability to make aesthetic judgments, ethical considerations, and tectonics. Programming is never taught directly but is rather mastered through practice, while the theoretical principles of programming, mathematics, and formal languages can be taught.

2.4. Defamiliarization

One aspect of the programming craft is the necessity to choose a numerical base representation of objects. This forces the designer/programmer to select the properties or aspects of objects relevant to the problem considered, to decide how best to represent them numerically, and retroactively to reframe the problem he or she has to solve. This can have profound effects on how one perceives and operates on objects. Karatani, in Architecture as Metaphor, notes that: "The flaw in Euclid's work lies in his reliance on perception, or natural language, and in his inference of the straight line and point."5 "From the moment Descartes defined points as coordinates of numbers, the point and line segment in geometry became an issue of numbers."6 We see here that the use of a numerical representation for points was the decisive move that led to modern differential geometry and the break with intuition. We recognize an affinity between this side effect of choosing a numerical representation and the concept of defamiliarization, a technique widely used in art that was described by Viktor Shklovsky at the beginning of the 20th century: "Tolstoy makes the familiar seem strange by not naming the familiar object. In describing something he avoids the accepted names of its parts and instead names corresponding parts of other objects. For example, in 'Shame' Tolstoy 'defamiliarizes' the idea of flogging in this way: 'to strip people who have broken the law, to hurl them to the floor, and to rap on their bottoms with switches,' and, after a few lines, 'to lash about on the naked buttocks.'"7 The aim is to achieve a specific effect: slowing down perception and halting the automatic, habitual interpretation of things. Here we consider defamiliarization not as a technique but as a necessity implicit in having to choose a numerical representation. Being forced to think of an object as a set of quantities8 means that one has to momentarily abandon the habitual perception of a problem and the automated solutions one has developed through training in architecture schools9. Defamiliarization is not exclusive to architects: engineers have to defamiliarize many well-established concepts when dealing with finite element models, for example, and when they have to build some intelligence into these models. Similar processes can probably be described in each of the disciplines involved. Hence the pervasiveness of digital media is a contingency that at the moment affects the whole established chain of production of the built environment.
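Endnote 9's example, the window reframed as a distribution of light and apertures, can be made concrete with a toy sketch. Everything below is an illustrative assumption (a 10x10 facade grid, a random target light map, a crude neighbourhood light model); it only demonstrates how the numerical reframing replaces "designing a window" with satisfying a distribution.

```python
import numpy as np

rng = np.random.default_rng(0)
target = rng.uniform(0.2, 0.8, size=(10, 10))   # desired relative light per zone

apertures = np.zeros((10, 10), dtype=bool)      # which facade cells are open
light = np.zeros((10, 10))                      # light delivered so far

# Greedily open the aperture serving the most under-lit zone; an opening
# contributes strongly to its own zone and weakly to its eight neighbours.
for _ in range(40):
    i, j = np.unravel_index(np.argmax(target - light), light.shape)
    apertures[i, j] = True
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            ii, jj = i + di, j + dj
            if 0 <= ii < 10 and 0 <= jj < 10:
                light[ii, jj] += 0.5 if (di, dj) == (0, 0) else 0.1
```

Nothing in this fragment "is" a window; the habitual object has dissolved into quantities and criteria, which is the defamiliarizing move the text describes.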
Each discipline has a chance to influence the system by the way it chooses to make use of digital media, reconfiguring itself and subsequently its ways of communicating with the others.

3. Simplexity in the system of Production

The idea of simplexity can hence be discussed in the larger context in which the design community is embedded, that is, the architecture production system. It seems to us that the current system is not adequate at adapting to the context shaped by recent technological developments; yet, as individuals within the system, it is impossible to design or set the rules for the whole system. Considering the current socio-economic condition as complexity, simplexity of the architecture production system should emerge as the result of the partial pressures of the subjects on the system, in order for the system to move in a direction rather than remain static but rigid, or dissolve into incoherence. Simplexity in this context is therefore a responsible will and strategic decision-making that is firmly located with individuals. Yet, as a community, we see the importance of re-examining the architect's role, as well as the existing production system, in order for the practice to engage with the current context.

3.1. Architecture as contingent planning

Karatani, in Architecture as Metaphor, discusses how the term architecture has been conceived in philosophy as the realization of design qua idea. In order to deconstruct this idea, however, he refers to architecture as a practice that is an event par excellence, exposed to contingencies. "Nothing is less relevant to the reality of architecture than the idea that it is the realization of design qua idea. Far more critical factors are involved, such as the collaboration with other staff members and the dialogue with and persuasion of the client. The design as initially conceived is invariably destined to be transformed during the course of its execution."10 The current architecture production system inherits its structure from the time of the industrial revolution, when the division of labor in manufacturing was developed in order to adapt to the technological developments of that time. The system is often conceptualized as a tree structure in which the complex entities composing the production are divided into units [architect, engineer, fabricator, etc.] and linear construction phases are used to measure the progress of projects [Design Development, Tender Documentation, etc.]. Partly due to the relative isolation of the architectural design process in this highly refined production chain, it seems that today many architects have themselves forgotten about the contingent character of architectural production and have instead embraced the misconception that Karatani attempts to dispel. We see architectural practice as contingency planning that aims to realize the design not as an idea tainted by the realization process, but as a problematic field enriched by contingencies. Each contingency then creates new rules in the system and fine-tunes the previous ones.
Figure 4. tree structure diagrams of building system
3.2. Rigidity of the system

The tree structure is a hierarchical structure that operates efficiently when fed clear goals about what to execute. In this type of structure, any unexpected scenario is a disturbance, such as a communication short circuit in the system11. For example, we often "freeze" the architectural form so that the next parties in the system can develop their solutions. In this case, it becomes difficult to integrate the concerns raised by the following parties back into the given formal solution. Moreover, when a unit fails to reach a solution with the given frozen form, the system needs to propagate up the tree structure to wherever the cause of the failure may have occurred, and re-trace that path hoping that the following units will not fail in the second run. Since architecture by nature is exposed to such scenarios, the production system ends up a tremendously complex and laborious process of trial and error. We can observe two ways in which current practice tries to improve the wastefulness of this trial and error. One approach tries to speed up the trial-error loop using digital computers; the other tries to stiffen the tree structure in a military manner by introducing a role such as the project manager. Both approaches, however, seem devastating for a creative community. We believe the reconfigured partial subjects of the architectural project need to find new ways to connect with each other, reconfiguring well-established communication patterns and transgressing boundaries along the way. Perhaps we should allow the organizational structure of a project to pass through a phase of complexity, and even incoherence, before the parties involved can negotiate a new functional equilibrium12.

4. Case Study: Clyde Lane House

4.1. Context

One of the ongoing projects in which we are involved as computational design consultants within a structural engineering office is Clyde Lane, a small house in Dublin designed by Future Systems. Our attempt here is to illustrate the aspects discussed above in the context of a real architectural project. The focus of the project is a roof designed by the architects as a doubly curved surface [a trimmed NURBS surface] dipping in the middle to form an atrium. For this roof design, the main communication has taken place among the architects, the engineers, and ourselves, the computational design team, which develops custom software tools to solve problems specific to the project. All software and interfaces were implemented in C++ using OpenGL and the Intel Math Kernel Library.
4.2. Setting up the problem

The initial concern of the architects was to distribute the roof openings so as to approximately follow the lighting requirements derived from programmatic constraints. The first step of each project is to render such architectural concerns suitable for computational representation. At this stage, we also try to define the problem as generically as possible, so that other considerations and contingencies can be dealt with at later stages. In this case, the main problem was not the opening shape itself but the density and directionality of the opening distribution pattern.

4.3. Reparameterization algorithm

We therefore opted for an algorithm that can generate a mapping / reparameterization of any surface topology that complies with given scaling and directional conditions expressed as a pair of orthogonal vector fields. A good candidate for the input vector field, from an engineering perspective, is the pair of principal stress eigenvectors taken from a finite element analysis of the given surface13. This field is chosen under the assumption that, for a large enough surface with fine material continuity, preserving material continuity along the principal stresses will result in some structural efficiency, while at the same time revealing and imprinting something of the structural behavior of the form. In order to locally control the scaling of the pattern we employed an algorithm described by Ray N., Chiu Li W., Levy B., Sheffer A., and Alliez P. called Periodic Global Parameterization14. This particular algorithm has the further advantage of being quasi-isometric15, which means that the resulting quads show small variation in size, a desirable trait of panelization solutions. This comes at the expense of introducing singularities [branching and T nodes]. These singularities, however, might have a positive impact on the way the designer understands and expresses the underlying geometry. The invariance of the algorithm to rotations of the field by π/2 makes it suitable for eigenvector fields, such as principal stress and curvature, where the sense of the vectors is irrelevant. In addition, because the result of the algorithm is a global reparameterization of the given surface, we can easily map any pattern onto it using the resulting coordinate systems.
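The engineering input to this reparameterization, the principal stress cross field, reduces at each surface point to an eigen-decomposition of the local stress tensor. The authors' implementation was in C++ on top of a finite element solver; the fragment below is only a hedged Python sketch of that per-point step, assuming a plane-stress state (sxx, syy, sxy) is already available from the analysis.

```python
import numpy as np

def principal_directions(sxx, syy, sxy):
    """Principal stress directions of a 2D plane-stress state: the unit
    eigenvectors of the symmetric stress tensor. The pair is orthogonal
    and defined only up to sign and swap, which is why the algorithm's
    invariance to pi/2 rotations of the field matters."""
    S = np.array([[sxx, sxy], [sxy, syy]])
    w, v = np.linalg.eigh(S)      # eigenvalues ascending, columns = eigenvectors
    return v[:, 1], v[:, 0]       # major and minor principal directions
```

Evaluating this at every node of the analysis mesh yields the pair of orthogonal vector fields that the Periodic Global Parameterization takes as input.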
Figure 5. map of opening density.
Figure 6. principal stress lines under self weight.
Figure 7. effect of different fields on reparameterization.
4.4. Integrating Contingencies

The project has so far faced three major contingencies, each leading to dramatic formal changes; yet the expression of the architectural requirement as density [lighting conditions] and of the engineering one as directionality [material continuity along principal stress directions] were both preserved throughout the project. Initially, the roof had a Star of David pattern, whose triangles guarantee a planar solution for the strut problem. [Star of David pattern, framing solution]
Figure 8. initial pattern for frame solution.
First event: an insurance issue related to waterproofing [circular pattern, GRP solution]. For this event, where the structural solution and opening geometry had to change completely, we did not have to intervene in the algorithm itself but only change the mapped pattern, which was a trivial exercise. Fine-tuning the form by following principal stress directions simply shifted from placing struts along integral lines to placing holes between them, which results in a similar material continuity condition.

Second event: the neighbors of the house appealing for a lower roof height [change in global geometry, ellipsoidal pattern]. This change in the global geometry had virtually no effect on us, as the algorithm is capable of handling any input surface geometry. However, the architects decided at this point that the lower patterns would be elliptical openings. This meant that special consideration had to be taken for some holes to align with the principal curvature directions, so that the flat glass panels for the openings do not deviate too much from the outer surface.

Third event: limitations on available means of transportation [introduction of joints]. The incorporation of mechanical joints in the roof implied additional considerations, because the areas around the joints would appear as solid zones on the surface. First we tried to influence the position and shape of the joints so that they would fall in areas of minimum bending moments and, in addition, disturb the pattern less. This was not entirely possible due to the position of the internal walls, so in the end there was a clash between the pattern and the joints. To accommodate this, we had to alter the control field of the patterning so that it blends smoothly around the joints. This was achieved by taking advantage of another field, the distance map16. This is a scalar field that has zero values at boundary edges and increases further away from them. The gradient of this field gave us the desired alignment vectors near the joints, and the normalized value of the map gave us
the weight by which we blended the gradient field into the principal stress field, so that alignment to the edges fades with distance from them.
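The blending just described can be sketched as follows. This is a hedged Python paraphrase, not the project's C++ code, and it performs a naive linear blend of direction vectors; a production version would blend angles in a 4θ, cross-field representation to respect the π/2 symmetry of the field.

```python
import numpy as np

def blended_field(stress_dir, boundary_grad, d_norm):
    """Blend boundary alignment into the stress field. d_norm is the
    distance map normalized to [0, 1] (0 on joint/boundary edges), so
    near a joint the field follows the edge and, with distance, it
    relaxes back to the principal stress directions."""
    w = np.clip(d_norm, 0.0, 1.0)[..., None]        # per-point blend weight
    v = (1.0 - w) * boundary_grad + w * stress_dir
    n = np.linalg.norm(v, axis=-1, keepdims=True)
    return v / np.maximum(n, 1e-12)                 # renormalize to unit length
```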
Figure 9. truck fitting of roof parts.
Figure 10. effect of joints on structural behaviour.
Figure 11. distance map and its gradient and tangent fields.
Figure 12. field construction and results.
The way in which we coped with these changes did not rely on trial and error, where one remodels everything from scratch. Instead, we added the information gained at each stage to our program and developed functions that extract the desired solutions17. Because we opted for an algorithm whose solution is controlled through vector fields, the design problem itself was transformed into the complementary problem of constructing a vector field that best negotiates the project constraints.
Figure 13. optimized mapping.
5. Conclusions

In this paper we discussed some of the effects of the introduction of digital design tools into the architectural building process. We tried to map out a potential way of taking advantage of the current situation, not only in order to cope with the problems caused by the arbitrary use of these tools, but also to increase the adaptability of the system as a whole and the quality of the solutions it yields. To do this we had to examine the implications of programming as a craft and the effect of defamiliarization with given design problems. In addition, the concept of simplexity, operational at the level of the designer and in the system as a whole, helped us better frame our aims and decisions in dealing with such problems. The case study of the Clyde Lane house by Future Systems gave us a chance to apply some of these concepts and to seek a deeper engagement with digital media by the parties involved in the design process. The biggest obstacles in the project did not come from the contingencies of the design itself and its realization. They came from the rigidity of the current system of production, as well as from the ambiguous role of computational techniques in design and of computational design consultancy as a practice. Computational design is still a signifier without content, and for that reason it is not yet clear to the other parties what it is for or how they can take advantage of it. Eventually it should not become a support discipline for architects, but rather dissolve into and affect all the involved parties, in the same way that IT technologies have disappeared by becoming ubiquitous in the design and building process. We hope that in the future this signifier will express a desire to develop a deeper understanding of digital representations and processes, rather than a set of tools to automate and speed up existing processes.
6. Endnotes and References
1 Karatani K.: Architecture as Metaphor, Cambridge, Massachusetts and London, England: The MIT Press, 2001.
2 Arendt H.: The Human Condition, Chicago: University of Chicago Press, 1958.
3 Iterative function systems are a good example in point, since what was intended to describe natural forms has become a tool to reproduce their visual appearance, even when the actual building process is going to be radically different from energy-efficient organic growth. So while in nature the biomorphic is a necessity, or an expression of biochemical and mechanical processes, in architecture it becomes a liability.
4 Zizek S.: The Parallax View, Cambridge, Massachusetts and London, England: The MIT Press, 2006, p. 37.
5 Karatani K.: Architecture as Metaphor, Cambridge, Massachusetts and London, England: The MIT Press, 2001.
6 Ibid.
7 Shklovsky V.: Art as Technique, reprinted in: Lodge D.: Modern Criticism and Theory: A Reader, London: Longmans, 1988, pp. 16-30.
8 Choosing a numerical representation is closely related to the problem considered. Consider, for example, different ways of representing color by numbers: 1. a computationally efficient linear system [RGB]; 2. a production system [CMYK]; 3. a perceptual system [LAB]; 4. a scientific system [intensity, frequency, polarization]; 5. an intuitive system [HSV]; 6. a symbolic system [a table of named colors]. The fact that translations between these numerical representations are not exact indicates that, although they all refer to what we call color, they deal with different physical, social,
economical, and cognitive aspects of it. For example, using a CMYK representation, with its incapacity to represent the full range of humanly perceptible colors, one can efficiently use printing ink, with economic and environmental benefits.
9 For example, in this context one is not asked to design a window, but a distribution of light [interior condition] and visual contact [exterior condition], and a distribution of apertures that meets these criteria. By trying to represent the window numerically, one eventually has to abandon the idea of a window altogether, along with the established and habitual techniques of thinking in windows and designing windows.
10 Karatani K.: Architecture as Metaphor, Cambridge, Massachusetts and London, England: The MIT Press, 2001.
11 Alexander C.: Notes on the Synthesis of Form, Cambridge, Massachusetts and London, England: Harvard University Press, 1999.
12 Examples of defamiliarization and adaptable systems can be found in how people adapt to contingencies and how people conceptualize such phenomena. Masato Sasaki, a psychologist, documented the way in which a person who became motion-impaired below the shoulders by an accident developed a way to put on a pair of socks. After five months of trial and error, the time required for the subject to complete the task was reduced dramatically. In order to understand how the subject developed and mastered the skill, Sasaki decomposed the sock-wearing movement into smaller sub-movements and analysed them so as to conceptualize this complex phenomenon of adaptation. In the end, Sasaki conceptualized the subject's technique as the development of a coordination [Bernstein, Nikolai Aleksandrovich] of movements, where two or more movements are combined and executed simultaneously. This coordination was developed through a combination of conscious thinking about movement [intellectual engagement], practice [skill development], and the body's natural tendency to economize energy and find efficient paths between movements [athletic ability]. This example illustrates the concept of simplexity from both sides: the one who develops simplexity in his skill, and the one who observes the complex process of mastering the skill and conceptually simplifies it in order to understand such phenomena. See: Sasaki M.: Layout and Affordance, in Design Gengo, Okude N., and Goto T. (eds.), Tokyo: Keio University Press, 2002, pp. 127-150.
13 Felippa C.: Introduction to Finite Element Methods (ASEN 5007), Department of Aerospace Engineering Sciences, University of Colorado, Boulder, 2006.
14 Ray N., Chiu Li W., Levy B., Sheffer A., and Alliez P.: Periodic Global Parameterization, ACM Transactions on Graphics, 2006.
15 For a definition of isometric mapping see: Kreyszig E.: Differential Geometry, New York: Dover Publications, 1991.
16 An algorithm for the computation of the distance map over meshes is described in: Wang C., Wang Y., Tang K., Yuen M.: Reduce the Stretch in Surface Flattening by Finding Cutting Paths to the Surface Boundary, Department of Mechanical Engineering, Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong.
17 Michalatos P., Kaijima S.: Design in a Non-Homogeneous and Anisotropic Space, IASS Conference Proceedings, 2007.
Beyond Surface
Aspects of UVN World in Algorithmic Design

Aya Okabe, The University of British Columbia, ayaokabe@interchange.ubc.ca
Tsukasa Takenaka, The University of British Columbia, ttsukasa@interchange.ubc.ca
Jerzy Wojtowicz, The University of British Columbia, jerzy@post.harvard.edu

Abstract
The need for architects to develop their own computational tools is becoming increasingly evident. In this paper, we introduce our design tool, the 'UVN generator', which is based on an algorithmic process combining the potential of scripting with the flexibility of traditional 3D surface modeling. Our attempt at combining the two served us well in exploring new ground for design. New conditions were explored and observed in three case studies, named 'on a surface', 'between surfaces', and 'on a new ordered surface', referring to the place where the scripts were run. In the design projects presented in our case studies, we focus on the system behind the generation of complex, expressive, biomimetic, yet humanistic shapes. This challenge to find a new ground for computational design enables us to pose our critical question: 'What algorithmic design potential may lie beyond basic surfaces?'
1. Introduction

The introduction of the algorithmic design process into architecture continues to produce interesting results. This type of architectural design "might be aligned with neither formalism nor rationalism, but with intelligent form and traceable creativity"1. An algorithm is generally understood as a sequence of exact instructions proceeding from an initial state, through successive transformations, up to its termination. Due to its prescriptive character, some see the algorithmic method as restricting design creativity. However, the algorithmic design process is not necessarily predictable or deterministic. Through the deployment of variables, the algorithm offers an explosion of possibilities, while explorations of randomness or fuzzy logic can contribute to new, unpredictable, and rich design conditions. "The intellectual power of an algorithm lies in its ability to infer new knowledge and to extend certain limits of human intellect"2. Our paper builds on the initial premise of Kostas Terzidis that algorithmic design radically differs from conventional design. "The human designer may be constrained by quantitative complexity and may be unable to construct unpredictability since that would negate a designer's intellectual control"3. By reconsidering, challenging, and extending algorithmically conceived surfaces in architecture, we hope to contribute to the emergence of the space-edge notion in new, algorithmic design. Combining scripting with traditional surface modeling, we illustrate new and intuitive aspects of algorithmic design. We discuss our UVN geometry generator, which facilitates running given scripts on any selected objects, describe the related design method, and illustrate its deployment in three schematic case study projects.
2. Methodology
2.1. Why UVN?

The need for architects to develop their own tools is increasingly evident. For those who translate design ideas into precise geometric form using traditional modeling applications there is a growing operational ease, but "behind the scenes" there is always a series of scripts making it all possible. Scripts can easily repeat dynamics that compile many components and complex relationships. However, when you generate architectural form with scripts, you often need to write complicated algorithms using exact mathematical formulas. Sooner or later, without a broad understanding of and engagement with algebraic or mathematical disciplines, you confront your limitations in drawing your imagined, complex form. Apart from the possible "generate and test" strategy, to enter critical design loops we need to apply digital tools more simply and immediately. Is there a way to use both the flexibility of applications in making complex shapes and the potential of scripting? How could these two be combined in the design process? In response to those questions we proposed the idea of the 'UVN Generator'. We named it for its main task of shape translation: from the XYZ coordinate system into our UVN coordinate system. This is very important, as it facilitates running given scripts on the selected object using the UVN coordinate system.
Figure 1. A script running on XYZ coordinates
With this hybrid approach, we can now generate extremely complex shapes in a familiar way, without constraints; then we can construct complex geometric relationships and parameter values on the object using scripts. Our assumption is that we can broaden the shape possibilities by running scripts on UVN coordinates that would usually run on XYZ coordinates. (Figure 1. A script running on XYZ coordinates) In other words, running scripts on the space-edge allows us to create new expressions. Furthermore, by adding the N coordinate to the UV coordinates, we can design the space-edge as 'space design', not just 'surface design'.

2.2. Technique for the UVN generator

The UVN coordinates are based on the UV coordinates generated by a perspective projection system, where we define the N coordinate as the normal direction from an arbitrary point on the UV surface. Using this notion, we developed a script for the purpose of translation from XYZ coordinates to UVN coordinates, and call it the 'UVN generator'. This is illustrated in the alignment-of-cubes script, which initially runs on XYZ coordinates and is then translated to UVN coordinates. (Figure 2. A script running on UVN coordinates)

Figure 2. A script running on UVN coordinates
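The following is a hedged sketch of the idea, not the actual RhinoScript implementation: a toy analytic surface stands in for a modeled NURBS surface, and the translation from UVN to XYZ is simply an evaluation of the surface point plus an offset along the local unit normal.

```python
import numpy as np

# A paraboloid patch with an analytic normal stands in for the NURBS surface.
def point_at(u, v):
    return np.array([u, v, 0.1 * (u * u + v * v)])

def normal_at(u, v):
    du = np.array([1.0, 0.0, 0.2 * u])          # tangent in the u direction
    dv = np.array([0.0, 1.0, 0.2 * v])          # tangent in the v direction
    n = np.cross(du, dv)
    return n / np.linalg.norm(n)

def uvn_to_xyz(u, v, n):
    """Core of a 'UVN generator': world position = surface point at (u, v)
    offset by n along the local unit normal."""
    return point_at(u, v) + n * normal_at(u, v)

# A script written against UVN: origins for a 5x5 grid of cubes hovering
# half a unit above whatever surface supplies the two evaluators.
origins = [uvn_to_xyz(u, v, 0.5) for u in range(5) for v in range(5)]
```

Any script written against (u, v, n) triples can then be 'dropped onto' whichever surface supplies the two evaluators.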
3. Case Studies

We explored three types of process applied to the space-edge: the basic processes 'on a surface' and 'between surfaces', and a more advanced process based on the two, 'on a new ordered surface'. We present those processes with a few variations, indicating their potential for architectural design.

3.1. Case One : On a Surface

3.1.1 Proceeding with scripts

The first case study illustrates a process with a series of scripts running 'on a surface'. After you form a space, the algorithmic process starts with the 'UVN Generator', which translates points of your drawn space into points in UVN coordinates. Once those point data are stored in a point-cloud database, the programming can take the next steps, 'point relations and unit design', for running scripts on a UVN surface. The process can be seen in a demonstration project, StadiumX in Warsaw, where a script was run on the toroidal enclosure surface. By running the script, the surface becomes a medium, not only a wall, reflecting the architects' imagination, drawn from the dynamic, energetic, and clamorous images of a stadium. (Figure 3. Running scripts on a simple surface) (Figure 4. StadiumX-Warsaw by Tsukasa Takenaka, vdsEURO, 2007)

Figure 3. Running a script on a simple surface
Figure 4. StadiumX-Warsaw by Tsukasa Takenaka, vdsEURO, 2007
3.1.2 How do scripts work on a surface?

The 'point relations and unit design' step offers many components and complex relationships to a surface by adding selected 'unit function' code. A 'unit function' in turn can store any of
Figure 6. Wall-study by Aya Okabe, UBC, 2008
Figure 5. Basic unit variations
your 2D/3D unit drawings based on the point 'Pt' values that come from the point-cloud database. (Figure 5. Basic unit variations) The test project is demonstrated below. Our intention was to turn a curved wall surface into an unpredictable and rich design condition using a simple 'Leaf' unit. The 'Leaf' unit is made of nine points connected by curves and surfaces. The result is a repeating and aggregating order with complex geometric relationships between units. The wall's character becomes unique. Its screen-like, perforated surface is expressive, yet with the intuitive softness of transitions found only in nature. It is radically different from the conventional, inorganic forms produced with the predictable logic of conventional design. This biologically inspired system suggests a potential parallel between design creativity and creation in nature. (Figure 6. Wall-study by Aya Okabe, UBC, 2008)

3.2. Case Two : Between Surfaces

The main difference between the first and second cases is the location where the scripts are run. (Figure 7. UV coordinates and ß coordinates) In Case One, scripts run directly on a single surface, while in Case Two, a series of scripts runs between two or more surfaces, which are not necessarily identical, as they can have distinct shapes. The field where the scripts run is defined by the UV coordinates and a new ß coordinate that has a direction between the surfaces, instead of the N coordinate.

Figure 7. UV coordinates and ß coordinate
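The ß coordinate just defined can be sketched in the same hedged, toy form as before: two assumed analytic surfaces stand in for the modeled inner and outer skins, and ß interpolates between them. This is an illustration of the idea, not the project scripts.

```python
import numpy as np

def inner(u, v):
    return np.array([u, v, 0.1 * (u * u + v * v)])        # assumed inner skin

def outer(u, v):
    return np.array([u, v, 2.0 + 0.05 * (u * u + v * v)]) # assumed outer skin

def uvb_to_xyz(u, v, beta):
    """ß replaces N: instead of offsetting along a normal, the third
    coordinate interpolates linearly between the two guiding surfaces."""
    return (1.0 - beta) * inner(u, v) + beta * outer(u, v)

# A ring of structural elements spanning the gap at mid-depth (ß = 0.5).
ring = [uvb_to_xyz(np.cos(t), np.sin(t), 0.5)
        for t in np.linspace(0.0, 2.0 * np.pi, 24)]
```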
Figure 8. Structural-study by Tsukasa Takenaka, vdsEURO, 2007
The second hypothetical case fosters communication and interaction between surfaces in design to create expressive surface behavior. The results were explored through a structural study illustrated in Figure 8. (Figure 8. Structural-study by Tsukasa Takenaka, vdsEURO, 2007) In this study, the space between outside and inside was designed with ring-shaped 'structural elements' and both inner and outer 'facade elements', which were designed using human-scale components. This type of process allows structures to be integrated and combined with the inner and outer facades. Since all elements and their relationships are determined by scripts, a rich and diverse range of possible structural solutions can be rapidly explored. The process is initiated by selecting both guiding surfaces, followed by the generation of space structures.

3.3. Case Three : On a New Ordered Surface

3.3.1 Embedding a new order on a surface

This final case study builds on the two cases presented above. In order to develop surface behavior further, we take the following steps in the script design:
a. select any object
b. make a UV grid with N or ß coordinates
c. give a new order to the grid generated in step b
Figure 9. A new ordered surface
Figure 10. Rhythm variations
Figure 11. A study for ‘Biennale Pavilion’ by Tsukasa Takenaka, 2008
d. run a selected unit script on the new ordered surface

To achieve this, we developed a script for making a 'new ordered grid', which introduces the notion of rhythm to a surface. This function is controlled by any rhythm running on a UVN grid or a UVß grid, generating deformation in each direction. In this way, interesting results can be clearly observed in the difference between a normal UVN surface and a new ordered surface. Even when the surface shapes and the script running on those surfaces are the same, different expressions can be generated. (Figure 9. A new ordered surface) Moreover, this process can embed a new order on any selected surface: not only on simple surfaces, but also on very complex ones. This means we can ultimately generate new ordered surface behavior beyond the predictable. The study project for the 'Biennale Pavilion' was to derive a hidden rhythm from the landscape and embed it on the surface, to be traced out by our unit script. (Figure 10. Rhythm variations) It is of central significance for the sense of the design process that this type of new ordered surface can respond to a rhythm from the site environment. This is a particularly interesting point, as the outcome of the design can have an underlying harmonious relationship to its site. (Figure 11. A study for 'Biennale Pavilion' by Tsukasa Takenaka, 2008)

3.3.2 Random-grid generation

On the other hand, there is a powerful potential for disorder, 'randomness', in design. Randomness can have a parametric range between order and disorder that controls the resulting appearance. Using those parameters, we can generate and express surface behavior beyond the predictable. Another aspect of randomness is that a random occurrence depends on time; but using a database function to store the randomness as a point-cloud automatically allows the same randomness to recur. Therefore, with techniques for handling it, 'randomness' becomes one of the materials we can now use extensively in design. The roof study used randomness as a design element together with a mathematical formula. The design intention for the project was to make a continuous roof located between an urban and a forested area. The study started by demonstrating the occurrence of randomness in the U, V, and UV directions. (Figure 12. Roof studies with random deformation) To control the occurrence of randomness more precisely, the project used mathematical formulas to control its
Figure 12. Roof studies with random deformation
Figure 13. Deformation formula
Figure 14. A roof-study by Aya Okabe, UBC, 2008
appearance. (Figure 13. Deformation formula) The result of the study shows inspiration for a continuous transformation from a perfectly aligned, ordered space to an ambiguous and expressive space that can connect and progressively negotiate between the urban area and nature. (Figure 14. A roof-study by Aya Okabe, UBC, 2008)
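A hedged sketch of both ideas, the rhythm of 3.3.1 and the stored randomness of 3.3.2, follows. The grid size, amplitudes, and sine rhythm are illustrative assumptions, and the fixed seed plays the role the text assigns to the point-cloud database, letting the same randomness recur on every run.

```python
import numpy as np

rng = np.random.default_rng(42)    # fixed seed: the "stored" randomness,
                                   # reproducible like the point-cloud database

U, V = np.meshgrid(np.linspace(0, 1, 20), np.linspace(0, 1, 20))

rhythm = 0.05 * np.sin(6 * np.pi * U)              # ordered, periodic deformation
disorder = 0.3                                     # 0 = pure order, 1 = pure noise
jitter = disorder * 0.05 * rng.standard_normal(U.shape)

U_new = np.clip(U + rhythm + jitter, 0.0, 1.0)     # the 'new ordered grid'
# Unit scripts are then run on (U_new, V) instead of the regular (U, V).
```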
4. Future Work

There are a few central and potentially critical characteristics tangent to this method to be investigated further in the near future. The multiplicity of possible designs calls for the exploration of a 'generate and test' strategy in order to limit the explosion of possibilities. With the deployment of 'Beyond Surface' scripts as design tools there is potential for interpretations of intuited biological processes. Further investigation of nature-inspired systems in architecture could lead to the emergence of organic and potentially biomorphic forms and textures in our designs. Processes of self-organization, nested structures, and patterns in a variety of programmed interpretations of intuited biological processes need much more work. Digitally controlled ordered surfaces, responsive to and interactive with the environment, should also be explored further.
5. Summary and Conclusion

The need for architects to develop their own computational tools is becoming increasingly evident. For those who translate design ideas into precise geometric forms using traditional modeling applications there is clear operational ease; yet "behind the scenes" there is always a series of scripts making it all possible. Architects, like painters who develop their own style of brush, should step into managing their scripting languages and customizing their own digital design tools. Our attempt at combining the potential of scripting with the flexibility of traditional surface modeling also enables us to explore a new ground - beyond the conventional surface. We observed the results of surface behavior while constructing our projects. We, as designers, amplified our imagination using this computational method, and the surface could reflect and
restore our imagination, making it more expressive and biomimetic, yet not without humanistic complexity. As an outcome, observers can now be aware of a fundamental essence and creative quality beyond basic physical reality. Apart from the technical interests and functional possibilities, what attracted us was that algorithmically made surfaces drew our attention back to the hidden sense behind their shapes. As a result, the ordered surface became a new ground, as well as a new design medium reflecting environments, perhaps starting to connect the architecture of the natural with that of the artificial.
6. Endnotes
1 Terzidis K., Algorithmic Architecture, Oxford: Architectural Press, 2006, p. XII.
2 Terzidis K., Algorithmic Architecture, Oxford: Architectural Press, 2006, p. 65.
3 Terzidis K., Algorithmic Architecture, Oxford: Architectural Press, 2006, p. 103.
7. References
Rutten D., RhinoScript 101, Robert McNeel & Associates.
Towards a Material Agency:
New Behaviors and New Materials for Urban Artifacts

Orkan Telhan, MIT Mobile Experience Lab, USA. otelhan@mit.edu

Abstract
As computationally augmented materials find their applications in architectural practice, we observe a new kind of material culture shaping architectural discourse. This is a kind of material intelligence that is not only introducing a richer vocabulary for designing more expressive, responsive, and customizable spaces, but also encouraging us to think of new ways to contextualize the technical imperative within today's and tomorrow's architectural design. It becomes important not only to discuss and extend the technical vocabulary of computational materials in relation to other disciplines that are also concerned with 'designing intelligence', but also to tie the research to a broader discourse that can respond to it from multiple perspectives. In this paper, I present a position on this emerging field and frame my work in two main threads: 1) the design of new materials that can exercise computationally complex behaviors, and 2) the design of new behaviors for these materials that tie them to higher-level goals connected to social, cultural, and ecological applications. I discuss these research themes in two design implementations and frame them in an applied context.
1. Introduction

Research in computationally augmented materials is an emerging field that is bringing together different perspectives, techniques, and expertise from ubiquitous computing, human-computer interaction, digital fabrication, and material science, with the aim of developing a new discourse on 'material intelligence'. As shape-changing alloys, elastic polymers, electronic textiles, and electro-mechanically programmable surfaces find applications in architecture, we can, on the one hand, observe a rich vocabulary for designing more expressive, responsive, and customizable spaces and, on the other hand, find new ways to critically reflect on the behaviors of these materials and discuss the role of material intelligence within architectural design. It is important not only to extend this technical vocabulary in relation to other disciplines that are also concerned with designing intelligent behaviors, but also to contextualize the technical imperative within architecture's broader discourse, where these behaviors resonate with everyday uses. One recent practice in material research focuses on the design of smart composites in which a variety of materials, combined with electro-mechanical components, are designed to actualize computational behaviors. We observe a generation of responsive, interactive, task-driven materials that can sense and act on user or environmental inputs. We witness smart columns repartitioning spaces1, smart skins covering buildings that filter water2, and interventions on facades that can collect, harvest, and store energy from a variety of natural resources3. In this paper, I present my position on this emerging material culture in two ways: first, within the actual design of the materials, by pointing out new directions for material research that will allow it to accomplish more elaborate tasks; and, secondly, in the way the material research can be tied to higher-level goals that have social, cultural, and
ecological implications. I discuss these two threads within the context of the two design implementations presented in the following sections.
2. Designing New Computational Materials

Current trends in 'material intelligence' introduce a multitude of approaches and implementations of responsive materials that respond to users and/or the environment with low-level behaviors. Physically or computationally augmented materials introduce new design affordances that utilize a number of basic responses to stimuli4. By registering state changes, materials can start building memories (e.g., remembering when they are touched, actuated, or transformed), respond to user input, and transform the physical input to another form of energy5. One important dimension of this research lies in the design of composites that can combine a set of low-level behaviors and are also equipped with a concept of 'agency': an internal monitoring system that can allow them to be 'aware' of their own functionalities and eventually manage these low-level behaviors for accomplishing higher-order tasks.
2.1. Material Agency

The computational sciences, ranging from cybernetics, cognitive science, artificial intelligence, and robotics research to bio-engineering, already share a rich discourse on alternative ways to conceive artificial agencies. From chess-playing automata to today's self-monitoring supercomputers, the quest for machine agency has traversed many histories; but since the earliest times, architecture's reaction to machine agencies has almost always informed its technical practice and shaped its design methods6. While feedback and process control systems (e.g., thermostats) immediately found their way into everyday usage, self-organizing generative grammars are finding ways to design smart geometries and fabrication techniques for realizing once impossible architectures7. Today, a subtle yet vital paradigm shift in computational agency design marks an era of data-driven, information-processing system designs, in which pattern recognition and statistical and probabilistic learning methods, combined with signal processing techniques, provide new perspectives on the design of artificial agencies. As these numerical methods are prevalently used to construct new kinds of adaptive, self-regulating systems with different levels of agency, they inspire the design of material systems that can deliver more complex behaviors and fulfill elaborate tasks. Smart composites can now include additional electronic components, memories, and computational instructions that allow them to reflect on their own behavior, revise it for changing conditions, and benefit from a long-term relationship with their user and/or environment, which provides continuous input to shape the current behavior. While the computational intelligence of the system can also lie in an external infrastructure that monitors and organizes the behavior of a number of units, it is possible to encapsulate the agency within the composite material itself. Depending on how the composite is used within the architectural design, either as a set of tiles applied onto a surface or as standalone units attached to an existing architecture, the material can not only regulate its own behavior, but also communicate with other units to organize more collaborative tasks.
2.2. Material Learning

The field of machine learning in computer science, in particular, identifies new strategies for designing autonomous systems that can 'learn' from their interaction with users and/or the environment to exercise goal-driven behaviors. Computational systems can be trained for different conditions and have the ability to make informed guesses based on the current state of their knowledge, acquired over the history of their interaction. While these technologies already have everyday uses in optimizing today's communication networks or improving the path-planning algorithms of autonomous house-cleaning robots8, their implications for architectural design, especially for computationally augmented material research, are still very under-explored. As a new direction in composite material design, it is now possible to discuss a new generation of self-aware, user-aware companion-materials that can inform both the design and the production process of building technologies, and to explore new roles for material-based agencies in today's architectural discourse. As such materials mimic, adapt, and demonstrate an ability to learn in different capacities over time, they can not only extend the existing behaviors of places assigned by their initial architectural program, but also allow their users to have a continuous influence on the look, feel, and behavior of the inhabited spaces.
3. Designing New Material Behaviors

One of the main motivations behind the research on learning systems is to design systems that can show the ability to alter behavior in changing conditions. Their mode of operation includes adaptive processes such as feedback loops, self-calibration, and optimization cycles that allow them to exercise autonomy in adjusting their behaviors according to the different interaction patterns they have with the environment. Unlike systems that follow a prescribed plan of actions, learning systems demonstrate a 'bottom-up intelligence' that primarily utilizes the interaction with the world as the basis of their action9. While these systems are still pre-programmed for executing certain tasks, their behaviors are left 'underspecified' to allow them to revise their actions for changing stimuli and perform behaviors beyond their prescribed roles10.
3.1. Task-Driven vs. Goal-Driven Materials

One interesting consequence of research into learning materials is the possibility of designing goal-driven materials. While today's computationally augmented materials are designed for highly specialized low-level tasks, it is not difficult to imagine the design of an underlying material agency that can combine simple behaviors and choose a relevant strategy, an action plan for the given goal, while continuously revising it based on the current affordances. For example, a unit of composite material may consist of three layers of behaviors with which it can control the amount of light it allows to pass through a given block (e.g., a shutter or diaphragm), filter the incoming air, and store incoming sunlight on photovoltaic panels. While the material can execute each task in a timely fashion, if it is equipped with a 'goal' that involves the execution of a particular combination of tasks, an internal monitoring system can evaluate the best time to execute each task instead of doing
them simply one after the other. It can compute when it becomes meaningful to let more light in and store it or use that energy to filter the air. If there is a sudden need for clean air, it can even borrow extra energy from another unit on a different façade of the building that has benefited from the afternoon sun. Here, the underlying goal can be maintaining an optimum condition for the unit, such as maximizing its own lifetime while producing its own power to clean the air. However, the unit can also act with its own agency even within a broader system, a material network, which can perform a larger set of goals by coordinating many kinds of specialized materials that have specific capabilities, logistical advantages and expertise within the system. Eventually, material ecosystems can be designed to be in charge of the heating, cleaning, energy generation and heat management processes of the entire architecture while they are supervised by large scale information and energy networks that form the urban infrastructure.
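The thought experiment above can be caricatured in a few lines of code. This is a hedged sketch, not a proposed controller: the three tasks come from the text, while every threshold, the normalized sensor readings, and the function name are illustrative assumptions.

```python
# A goal-driven composite unit: rather than running its tasks in sequence,
# an internal monitor picks the one that best serves the overall goal
# (stay operational, produce own power, clean the air) given current state.
# All inputs are assumed to be normalized to [0, 1].

def choose_task(sunlight, air_quality, battery):
    if air_quality < 0.4 and battery > 0.2:
        return "filter_air"      # urgent need for clean air: spend stored energy
    if sunlight > 0.6 and battery < 0.9:
        return "store_energy"    # harvest while the sun is available
    if sunlight > 0.8:
        return "shade"           # once charged, avoid overheating the space
    return "idle"                # conserve resources, extend unit lifetime
```

A network of such units could extend the same logic with energy borrowing between facades, as the text imagines, by letting the monitor query neighbouring units before choosing.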
3.2. Materials with Computational Experience

Wiener suggests that "a learning machine must be programmed by experience."11 To avoid a "literal-minded" system, he argues that one needs to program into the system the capability to evaluate its successes and failures, and even to learn from its opponents' performances in order to revise its set of goals. It would be an important advantage for a material composite to benefit from a long-term relationship with its users and/or the environment. Similar to the way a traditional material reflects change over time in its physical appearance (e.g., the wearing out of metal or wood), composite materials can make use of the history of the tasks they execute over time and the information they gather from the environment, applying them through a number of different adaptation strategies that customize their behavior for a given location. Here, 'material experience' would lie not only in a material's capacity to alter behavior in response to external conditions, but also in response to its internal conditions. As the material starts to age and is no longer able to deliver some of its tasks at full potential, it can slowly re-prioritize its goals while waiting to be replaced. For example, when the active components with mechanical actuation are eventually worn out, the material can shift its focus and begin to use its resources for its passive parts, such as energy collection, preservation, and storage.
4. Design Cases

My current research on 'material agency' involves exploring the electro-mechanical and optical properties of materials (e.g., fiber optics, LEDs, various sensors and actuators, etc.) and looking at ways to design architectural media that can exercise learning behavior to adjust responses, functionalities, or services for different needs. The work has applications ranging from the design of public interfaces and alternative display technologies to programmable surfaces, and extends to writing new behaviors for these systems that enable them to respond to different interaction modalities (e.g., vision, audio, touch, etc.) with users and the environment.
4.1. Self-aware Surface

As a first attempt towards a possible design of material agency, I have built a three-dimensional, foldable, programmable surface, which explores the potential of self-aware structures that can adapt physically not only to the surrounding architecture, but also to changing environmental conditions and user needs. This is a system made of modular composite units, which include two layers of LED arrays for visual output, a mechanical infrastructure binding the units with hinges and sensors, and an electronics layer that controls the sensing, power, and data management for the visual output (Figures 1 and 2). As a public interface integrated into physical space, the project explores the communicative aspects of architectural elements and their role in an urban semiotic system, as discussed by Venturi and others12. Unlike traditional static billboards or Times Square-style public (broadcast) media façades, the interface creates a different kind of relationship with the architectural space by showing greater adaptability in its form and content, utilizing the input it receives from users and the environment. It offers a modular, reconfigurable surface that explores alternative physical configurations, allowing its users to create new spaces around it. Being able to compute its current geometry with the help of sensors, the interface can switch between different modes (e.g., from public to private, from single user to multi-user) and engage with users by self-adjusting the presentation of its content (Figures 3 and 4). Users can control the type of information, its flow, direction, and rhythm by physically interacting with the programmable surface, folding and bending it into different geometries. As an urban artifact, the display blurs the distinction between a display and an interface, exercising an intelligence not only for regulating its own actions (being an input and output device at the same time), but also for responding to changing environmental lighting conditions to optimize its current output. Being equipped with light sensors, each composite unit can measure the amount of ambient light reflected on its surface to calibrate its brightness, optimizing the contrast level and saving energy by dimming itself when it detects that no one has been around for a certain period of time.
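The self-calibration just described amounts to a small control rule per composite unit. The sketch below is a hedged paraphrase with assumed constants (standby level, idle timeout, gain), not the display's firmware.

```python
def display_brightness(ambient, idle_seconds):
    """Per-unit brightness rule: track ambient light (normalized to [0, 1])
    to keep contrast roughly constant, and dim to a standby level when no
    one has been detected for a while. All constants are assumptions."""
    if idle_seconds > 120:
        return 0.05                            # near-off standby, saves energy
    level = 0.3 + 0.7 * min(ambient, 1.0)      # brighter room -> brighter LEDs
    return min(level, 1.0)
```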
While the object in this instance is a continuous form and not made of tiles or composite units, I explored a self-organizing agency by segmenting the surface into
a grid of sections, each responsible for sensing input on a particular section of the surface and for activating the fiber-optic cables distributed in that section for visual output. I have experimented with how different sections of the object can coordinate with each other and work towards a given goal, such as directing users to a particular area, drawing them towards a particular activity, etc. As different sections coordinate with each other to work on common goals, I have studied ways to utilize the sensory input for designing high-level behaviors (e.g., a hide-and-seek game) and to use the fiber-surface's agency as a social catalyst that activates the urban space around it14.
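As a hedged illustration of such coordination (the grid size, goal position, and gradient rule are hypothetical, not taken from the project), the sections might agree on a shared activation map whose light gradient leads users towards a goal area:

def activation_map(grid_w, grid_h, goal, falloff=0.25):
    """Brightness per section: a gradient that rises towards the goal section."""
    gx, gy = goal
    levels = {}
    for x in range(grid_w):
        for y in range(grid_h):
            dist = abs(x - gx) + abs(y - gy)      # Manhattan distance on the grid
            levels[(x, y)] = max(0.0, 1.0 - falloff * dist)
    return levels

# Each section lights its fiber-optic bundle in proportion to its level,
# producing a visible gradient that draws passers-by towards the goal area.
levels = activation_map(6, 4, goal=(5, 1))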
5. Conclusion
As an extension of ongoing responsive material research, the design of new 'material agencies' promises new ways to make use of the knowledge and expertise learned from the low-level material behaviors that are beginning to find more applications in architectural design. Materials that have the capacity to learn from their interaction with the environment, and those which can demonstrate a 'computational experience' by utilizing this learning over a period of time, suggest new ways to design the high-level behaviors that are increasingly demanded of architectural design. Especially by tying low-level responses back to the design of energy-saving, environmentally friendly, sustainable architectural practices, it is possible to identify new relationships between material behaviors, information, inhabitants, and the lived space: as augmented materials are programmed to work on achieving high-level goals (e.g., energy harvesting, pollution monitoring, etc.), they will not only be able to support individual users, but also take on responsibilities in improving urban-scale problems (e.g., reducing CO2 emissions) as they coordinate with each other and with other information and material networks. While we are still in the process of designing new material behaviors and questioning their relevance to today's and tomorrow's architectural practices, given the emerging social, cultural, and ecological conditions, I believe that a confluence of computational theories and material practices will not only extend the discussions of an emerging discourse on material intelligence but also inform the design of new spaces and improve the ways we experience them.
Figure 1. Composite unit detail.
Figure 2. Composites connected with hinges.
Figure 3. Self-aware display in private mode.
Figure 4. Self-aware display in public mode.
Endnotes
1 Omar Khan, "Open Columns: Responsive elastomer constructions for patterning the space of inhabitation" (paper presented at the workshop on Transitive Materials: Towards an Integrated Approach to Material Technology, 9th International Conference on Ubiquitous Computing, Innsbruck, Austria, 16-19 September 2007).
2 Geoff Manaugh, "Jellyfish House" (12 April 2007), http://www.worldchanging.com/archives/006457.html.
3 In "Grow", Samuel Cabot Cochran employs "thin film photovoltaics with piezoelectric generators and screen printed conductive ink encapsulated in ETFE fluoropolymer lamination to harvest energy from wind," http://s-m-it.com/#grow_target.
4 Marcelo Coelho, "Programming the Material World: A Proposition for the Application and Design of Transitive Materials" (paper presented at the workshop on Transitive Materials: Towards an Integrated Approach to Material Technology, 9th International Conference on Ubiquitous Computing, Innsbruck, Austria, 16-19 September 2007).
5 Cochran, "Grow."
6 Gordon Pask, "Aspects of Machine Intelligence," in Nicholas Negroponte, Soft Architecture Machines (Cambridge: MIT Press, 1976), 6-31.
7 John Frazer, "New Tools," in An Evolutionary Architecture (London: Architectural Association, 1995), 23-64.
8 Robotics Primer Workbook, section on learning, http://roboticsprimer.sourceforge.net/workbook/Learning.
9 Rodney A. Brooks, "Intelligence Without Representation" (preprints of the Workshop in Foundations of Artificial Intelligence, Endicott House, Dedham, MA, June 1987).
10 Omar Khan, "Underspecified Architecture Performatives" (paper presented at the 19th European Meeting on Cybernetics and Systems Research, Vienna, Austria, March 25-28, 2008).
11 Norbert Wiener, Cybernetics: or Control and Communication in the Animal and the Machine, 2nd ed. (Cambridge: MIT Press, 1965), 169-180.
12 Robert Venturi and Denise Scott Brown, Architecture as Signs and Systems (Cambridge: Harvard University Press, 2004).
13 Ongoing research project at MIT Mobile Experience Lab. Design Team: Federico Casalegno, Orkan Telhan, Sergio Araya, Hector Oilet, Guz Gutmann.
14 William H. Whyte, City: Rediscovering the Center (New York: Doubleday, 1988).
Generating topologies
Transformability, real-time, real-world
Bernhard Sommer
Hyperbody, TU Delft, The Netherlands
b.sommer@tudelft.nl
Abstract
Customization is a contemporary trend which should not be ignored by architecture. Increasing demand and decreasing resources will necessitate the reuse and the sharing of space. Transformability will facilitate these tasks. On the basis of a case study, this paper demonstrates the technical feasibility of a continuously transformable structure, which enables the transformation and manipulation not only of shape, but of topological qualities as well. Moreover, this fully and universally transformable architecture can be seen not only in the context of customization, but also as a further development of architecture as a discipline.
1. Customizing the future environment
Form in the digital age has finally lost its functional reference. Where in the mechanical age you could read functionality off the classical machine, electronic devices started to hide away how they work. As the computer finally infiltrated the man-made environment, shape lost its iconographic meaning. Of course, one could always enclose, and thus hide, the functional parts of a machine. The significant difference now is that any type of machine can be reduced to one kind, the computer. This technological leap in turn offers sheer unlimited possibilities for customization and for combinations of the original functions. The gadget industry is thriving. From washing machines to cars, from television sets to airplanes, nothing escapes the digital age. Eventually, with the mobile phone and its increasing number of functions, few human beings can be found without a digital device. The concept of customization thus proves successful. This is at least partly because of the resulting ability to adjust, on the one hand, to societies that are in themselves diverse and changing and, on the other, to the greater cultural diversity of societies to be dealt with at the global level – another consequence of the phenomenon commonly referred to as globalization.
"Observers of current and future trends" predict that "the nature of working and living will change drastically such that society will require completely new types of structures"1. An increasingly busy world economy and the rise of the so-called emerging markets result in more construction activity. Concrete is already referred to as "the most widely used substance on the planet after water"2. Its production process releases large amounts of carbon dioxide. Given the political situation, it seems quite impossible to address these issues in a general, standardised way. Yet, assuming that a good portion of construction activity is the result of remodelling, reconfiguring or modifying buildings, this might be another field where concepts of customization are suited to contribute to solutions. While building technology is permanently upgraded, and the optimization of heating and air-conditioning systems, water and energy supply, security systems and the ergonomics of lighting systems is at least possible, the core competence of architecture is rarely
considered: providing beauty and functionality through spatial configurations, forms, and relations. Specializing space can be seen as one of architecture's primary goals. Thus, customizing architecture will particularly deal with transforming space: changing size and form and, even more importantly, correlations, dependencies, connections. Transformable structures could be the sustainable answer to an unexpected and unpredictable future.
2. Space not form – what to customize
In the current architectural discourse, actual transformations of built structures, if they are at stake at all, are usually restricted to homeomorphic change. However, there is a long tradition of radical transformation in architecture, decisively contributing to the functionality of any building. Each time we open or close a door, a window or an operable wall, we change a building's topology (fig. 1). These changes can be extended by more far-reaching topological transformations, like adding or subtracting handles, changing the orientation of a surface or tying and untying knots (fig. 2).
fig. 1: topological change when opening a door
The specific form of an object, especially of a machine part, can be extremely powerful. A screw, a nozzle, a car – they can all only function by their very shape. In architecture, the topological structure seems to be more important than the specific shape. Various functions can be accommodated within the same shape; yet some topological changes might become necessary when transforming a church into a residential building. Still, even this relation will rarely be as compelling and as definite as the form-function relation of mechanical elements. As Bill Hillier puts it: "space does not direct events, but it shapes possibilities."3 When Greg Lynn argues in Animate Form4 that the form of a building should be optimized like the hull of a boat, he pursues a task which architecture, as a spatial art, will only be able to perform metaphorically. Although his design generates spatial patterns, it does so as a byproduct of form. Form and space are not given the same attention. Thus the final outcome of these patterns, and with it the related possibilities, are left to randomness, prejudice and tradition – conceived by the subconscious rather than as a well-considered act. In the second and third of his three spatial laws, Hillier links spatial structures to social behaviour.5 The first law refers to the "construction of space", which is "the generation of spatial patterns by means of walls, apertures, etc."6 As with the "configurations" he refers to, these means can be interpreted as a certain range of topological qualities. His interest and argument do not consider change, though. In the context of his research the
unchanged within a cultural context seems to be of higher significance. By searching for an "inequality genotype" common to, e.g., a wide range of floor plans from different ages, Hillier identifies "some kind of cultural principle for giving different social relations and activities a spatial form"7. Focused on urbanistic problems rather than design aspects, he also does not look into a more detailed level, where other topological qualities, such as windows, or possible change, such as the opening and closing of doors, would be of importance. Still, he gives evidence of a significant relation between topological qualities and society, and even function. This might be surprising in the context of Animate Form. However, the definition of architecture as a spatial art goes back to the 19th century.8 The relation between function, society and configuration can be demonstrated with the Kings Road House in West Hollywood9. The significance of this house is not to be seen in a fancy form, which it does not have. Its importance lies within its changed program. Besides kitchen, bathrooms and garage, its main rooms are not specialized according to traditional functions like parlour, office or library. They are labeled by the names of its inhabitants. It was a remarkable attempt towards a customized, individualized space. The house was designed to accommodate two friendly couples. It assigned each of the inhabitants their own room, each couple a courtyard, a bathroom and an outdoor sleeping basket on the roof, and one kitchen for all together. Each couple's area has its own entrance. As the kitchen, which again has its own entrance from outside, can be accessed from both areas, it is not the structure of a semi-detached house. Its program is still unthinkable for commercial housing development plans. Its topological structure is customized for a unique social program, which would still be seen as unusual. If we ask how architecture could contribute to customization, the question is what architecture specializes and by what means. Of course, there is an architectural language, signs and forms, which are equally involved in specializing space and greatly involved in producing rich architectural experiences.10 But, as with the example of the church converted to a residential building, their contribution to its functional specialization is less significant. For some design tasks the very shape is crucial, e.g., an auditorium. A specific function, assigned to a certain space, requires a certain topology. This topological structure, however, must be given the right scale and proportion. Topology, scale and proportion specialize the space in question. Where topology defines the relation and structure of spaces, scale and proportion assign specific volumes to spaces. Customizing spaces, not once and for all, but again and again, thus calls for the transformability of these three qualities.
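The topological reading sketched in this section lends itself to a simple computational representation. As an illustrative aside (not drawn from Hillier or from this paper's project, and with hypothetical room names), a plan can be modeled as an adjacency graph whose edges appear and disappear as doors are opened and closed, leaving the rooms' geometry untouched:

class Plan:
    """A plan as an adjacency graph; open doors are the edges."""

    def __init__(self, rooms):
        self.rooms = set(rooms)
        self.doors = {}  # frozenset({room_a, room_b}) -> is_open

    def add_door(self, a, b, is_open=False):
        self.doors[frozenset((a, b))] = is_open

    def toggle(self, a, b):
        key = frozenset((a, b))
        self.doors[key] = not self.doors[key]

    def connections(self):
        """Current topology: only open doors count as connections."""
        return {key for key, is_open in self.doors.items() if is_open}

plan = Plan(["court", "room_r", "room_s", "kitchen"])
plan.add_door("court", "room_r", is_open=True)
plan.add_door("room_r", "kitchen")
plan.toggle("room_r", "kitchen")  # opening the door changes the plan's topology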
3. Beyond customization – implications of continuously variable structures
An issue becoming evident with the example of the converted church is that of iconographic form. As stated above, functionality itself does not necessarily generate a self-referential form. It is significant for the digital age that icons often refer to outdated products, like the mail envelope as an icon for e-mail applications or a vintage telephone receiver to depict the functionality of the call button on mobile phones. However, a building capable of continuously changing its topology will also be able to change its shape, and thus its iconographic meaning. Freeing architecture from function-related form will be another merit of transformable architecture, opening new realms of communicating spaces. Time, essential to any transformation process, is another issue. A continuously variable structure radically changes this dimension's effect on architecture, which usually is tied to the beholder's or user's movements. The analogy of architecture and music was often
stated and searched for. It is one of architecture's old topics and can be traced back to the fifth century, and it was still an important inspiration to many 20th-century architects.11 Whatever methods were developed to respond to sound and compositions, music is strictly subject to time's inevitable linearity. Only the continuous transformation of spatial qualities, real-time, real-world, will allow for fusing these arts. In static architecture, the effects of time can be experienced only through a specific, subjective viewpoint. A transformable architecture will enable objective, time-based spatial concerts. The analogy between music and architecture, though, is just one aspect of a greater realm of ephemeral design patterns. Nature has also been, and still is, a source of inspiration for architectural design. The Alps were an inspiration for the roof structure of the Olympic Stadium in Munich as well as, 40 years later, for Massimiliano Fuksas's New Trade Fair in Milan. By means of traditional architecture, the beauty of more ephemeral natural phenomena could rarely be exploited. The cloud analogy of COOP Himmelb(l)au's BMW World is strictly metaphorical and merely commits the design to massive cantilevered structures and lightweight construction. Diller & Scofidio succeeded in realizing a cloud-like experience with their Blur Building for the Swiss Expo 2002. Of course, their interactive design approach can hardly be called traditional. Unfortunately, the actual interactive component of this project, the Braincoat, could not be realized. A permanent transformation of social spaces would have taken place.12 However, on the level of physical space the Blur Building is limited to one design inspiration. Transformable architecture can realize a wider range of ephemeral designs. Such a design concept is not necessarily conceived by the same person, or by a person at all. As the structure is redesigned over and over again, starting from opening and closing holes without changing enclosed volumes to restructuring all topological qualities, several layers of design can be found: the primary design, being the construction of the transformable structure itself; the secondary design of the generated topological and geometric properties; and the tertiary design of the number and quality of additional holes, bumps and forms. The secondary design defines and connects volumes. The tertiary design has its substantial impact on the enclosing elements. As such a structure has to be fully programmable, it can well be applied to interactive concepts. Where the primary design in its initial state would be architecture, the redesign of the secondary and tertiary level could be called meta-architecture. With Transformer 3 the author realized an actually transformable construction, Desert Cloud being its meta-architecture, its program. The project was another contribution to the cloud analogy13. Literal change and a translucent lightweight construction, relying only on tension and pneumatic elements, were to render ephemeral qualities to physical form and space. However, all change would have to take place within one given topological structure. The spatial structures themselves would thus not be subject to transformation and the ephemeral. The range of possibilities is still not as universal as a fully customizable architecture should be.
In analogy to the computer – the universal machine, which has replaced or been integrated into nearly every tool – it is tempting to create prototypes of a universal architecture that integrates into nearly any spatial environment.
4. ConVarSys 5 - generating topologies, real-world, real-time
On the basis of ConVarSys 5, a specific project of the author, this paper will show and study the feasibility of a topology-generating structure, not only enabling homeomorphic change but also allowing for the generation of different spatial topologies.
4.1 The material components – from Generic Detail to Generating Detail
New design strategies deal with increasingly abstract design methods, such as parametric modelling, behaviour models or genetic algorithms14. Generic Details can thus be developed at a conceptual stage, when the specific geometric form is not yet designed15. In this context, ConVarSys 5 will be a built representation of a parametric model. Its details are constantly transformed, actualizing and informing the structure in real time – a Generating Detail. The architecture itself becomes abstract, freed from the handicap of a shape defined beforehand. It is ready to be customized according to the user's needs. The starting point for ConVarSys 5 is its universal detail. Universal in this context means that only one element is used throughout the construction of the whole structure. The universal detail sets the constraints for developing the script behind the parametric model, while being itself constrained by matters of feasibility and producibility. One of these matters is the limited availability of actuators, and the economic rule to employ the least possible means. Actuators are the actively changing elements within a transformable structure. Ideally, form change is subject to only one transformation principle. Without anticipating the universal detail's final appearance, it is envisioned as a building block which can be horizontally and vertically displaced against its neighbour. The transformation process is driven by one of the simplest possible kinds of actuators, the linear actuator. In previous projects the author could show the wide range of transformations possible by relying exclusively on linear actuators16. The possible translation of each building block is first limited to a certain range, which will be given by the constraints of its physical counterpart. It can be further restricted by other design considerations. The script allows for the input of two parameters related to the building blocks: the proportion and scale of the block itself, and the range of translation expressed as a proportion of the building block's width and height. Tube-like strings are lined up from these basic building blocks, as each block is always connected, and in the same way, to two opposing neighbours. While they are never disconnected, gaps and openings between the tubes are possible, creating a sufficient potential for transformations. The building blocks will subsequently be enabled to perform all the additionally necessary tasks, forming a structural building skin. A mechanism of the universal detail ensures fixation and release from string to string, where it is necessary to transfer dead loads and live loads. An arbitrary number of strings can be arranged side by side to form a field. The fields can be piled up, according, of course, to their bearing capacity.
4.2 The virtual components – SpaceSeeds and SpacePower
To fully exploit the possibilities of such a construction, it is essential to enable interaction and programming, not by controlling the location of single building blocks, but by a mechanism which generates, configures, separates and unifies space. A virtual component is thus introduced to the script, the SpaceSeed.
Each SpaceSeed is assigned a certain, changeable range of influence, the SpacePower.
According to the location of the SpaceSeed and the "force" of its SpacePower, the blocks recede, keeping a minimum distance, eventually opening or closing holes and resulting in different topological objects (fig. 2).
fig. 2: material and virtual components
Space between the strings can thus be "inflated" or "deflated" by concerted translational procedures through the actuators. Spaces are in-formed, combined and separated, opened and closed, knotted and straightened. A great topological variety can be achieved (fig. 3).
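The project's scripts themselves (written on the Virtools platform) are not reproduced here; as a minimal, hedged sketch of the rule just described, with an invented falloff function and numeric values, each block might recede from every SpaceSeed within reach of its SpacePower, clamped to the actuator's feasible translation range:

import math

def displace_block(block_pos, seeds, max_translation=0.5):
    """Push a block away from every SpaceSeed within its SpacePower radius."""
    x, y = block_pos
    dx = dy = 0.0
    for (sx, sy), power in seeds:          # power = the seed's SpacePower radius
        dist = math.hypot(x - sx, y - sy)
        if 0.0 < dist < power:
            push = (power - dist) / power  # stronger push closer to the seed
            dx += (x - sx) / dist * push
            dy += (y - sy) / dist * push
    # Clamp to the physically feasible range of the linear actuator.
    dx = max(-max_translation, min(max_translation, dx))
    dy = max(-max_translation, min(max_translation, dy))
    return x + dx, y + dy

seeds = [((2.0, 1.0), 1.5), ((5.0, 3.0), 2.0)]   # (position, SpacePower)
new_pos = displace_block((2.5, 1.2), seeds)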
fig. 3: topological variety
The structure is conceived and simulated using the Virtools software development platform, which has been successfully applied to process data and control the actuators of dynamic structures at Hyperbody, a sub-department of TU Delft's Faculty of Architecture led by Prof. Kas Oosterhuis. A full representation of ConVarSys 5 is scripted. The script allows for the output of the values of translational displacement. A permanent stream of data coordinates and actuates the performance of each building block.
5. Consequences – running the structure
By means of universally transformable structures like ConVarSys 5, space becomes fully programmable and open to customization as well as optimization. The elements of architecture are not designed but emerge from the behaviour pattern of the universal building blocks – either directly from the interaction between material building blocks and virtual SpaceSeeds or by assigning additional behavioural information. Any opening, such as a door or a window, is a direct consequence of the configuration of SpaceSeeds. A stair can be generated either by placing the SpaceSeeds in a specific manner
or by limiting the translation between building blocks so that they only produce reasonable ratios of rise and tread. A parapet or railing is generated by additional information: after input of the parameters of gap width and parapet height, any hole – within a certain range, or generated by a certain SpaceSeed – which exceeds the given gap width will produce a parapet of the given height (fig. 4)17.
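As a hedged encoding of this rule (the parameter names and numeric values are invented for illustration, not taken from the project), the parapet amounts to a simple conditional attached to every generated hole:

GAP_WIDTH_LIMIT = 0.6   # holes wider than this need a parapet (metres, assumed)
PARAPET_HEIGHT = 1.0    # height of the generated parapet (metres, assumed)

def postprocess_holes(holes):
    """Attach a parapet to every hole that exceeds the given gap width."""
    elements = []
    for hole in holes:
        if hole["width"] > GAP_WIDTH_LIMIT:
            elements.append({
                "type": "parapet",
                "at": hole["position"],
                "height": PARAPET_HEIGHT,
            })
    return elements

holes = [{"position": (3, 0), "width": 1.2}, {"position": (7, 2), "width": 0.4}]
parapets = postprocess_holes(holes)   # only the first hole gets a parapet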
fig. 4: a generated parapet
The SpaceSeeds can be placed or moved, again according to a script. They can gather to form specific shapes. This script can be designed to consider and interact with a specific user's behaviour, customizing the spatial environment. As a consequence of customization, not only change is enabled but, paradoxically, also its contrary: durability. By memorizing a specific configuration, environments can be recalled, just as the seat of a Mercedes-Benz memorizes your preferred settings. Given a ConVarSys structure, this can be done at any place in the world. Through change, even durability can be taken to another level, ensuring intimate spatial qualities to its inhabitant, wherever he or she is taken by the dynamics of globalization.
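A sketch of this memory mechanism, under the assumption of a simple place_seed interface which the paper does not specify:

MEMORY = {}

def memorize(user, seeds):
    """Store a user's preferred SpaceSeed configuration."""
    MEMORY[user] = list(seeds)

def recall(user, structure):
    """Re-apply a stored configuration on any ConVarSys-like structure."""
    for position, power in MEMORY.get(user, []):
        structure.place_seed(position, power)   # assumed structure interface

memorize("inhabitant_a", [((2.0, 1.0), 1.5), ((5.0, 3.0), 2.0)])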
6. Conclusion
ConVarSys 5, even if its material part is not yet realized, shows the technical feasibility of a continuously transformable structure which enables the transformation and manipulation not only of shape but also of topological qualities. As has been demonstrated, these qualities are essential to architecture. Customization of architecture thus has to address spatial configurations as well as scale and proportion. With ConVarSys 5, this is achieved by introducing virtual elements, the SpaceSeeds. The effort itself can be seen not only in the context of customization, but also as a further development of architecture as a discipline, enabling the realization of many age-old dreams, such as the analogy of music and architecture, a wider range of natural inspirations, or a fully interactive real-world architecture.
Endnotes
1 Durmisevic, E. Transformable building structures: design for disassembly as a way to introduce sustainable engineering to building design & construction. Doctoral thesis, Delft University of Technology, 2006, p.8
2 Anon., "Concrete proposals needed" The Economist, 19 December 2007. Print edition via http://www.economist.com/ (accessed on 11 Jan 08).
3 Hillier, B. Space is the machine, London: Space Syntax, 2007, p.155. Electronic edition via http://www.spacesyntax.com (accessed on March 17, 2008)
4 Lynn, G. Animate Form, New York: Princeton Architectural Press, 1999
5 Hillier, B. "The Nature of the Artificial: the Contingent and the Necessary in Spatial Form in Architecture" Geoforum vol. 16 no. 2, 1985, pp.163-178
6 ibid. p.165
7 ibid. p.173
8 Schmarsow, A. Das Wesen der architektonischen Schöpfung: Antrittsvorlesung, gehalten in der Aula der K. Universität Leipzig am 8. November 1893, Leipzig: Hiersemann, 1894.
9 The Kings Road House now houses the MAK Center L.A. and is open to the public. Its address is 835 North Kings Road, West Hollywood, CA 90069
10 The work of FOA shows that topological qualities contribute no less to the richness of architectural experience.
11 Jormakka, K., Schürer, O. and Kuhlmann, D. Basics: Design Methods, Basel: Birkhäuser, 2007, p.10
12 Garcia, M. "Otherwise Engaged: New Projects in Interactive Design" Architectural Design vol. 77 no. 4, 2007, pp.45-53
13 Noever, P. ed. Schindler by MAK. München: Prestel, 2005, p.56
14 Kilian, A. Design Exploration through Bidirectional Modeling of Constraints, Ph.D. thesis, Massachusetts Institute of Technology, 2006
15 Oosterhuis, K., Bier, H., Albers, C. and Boer, S. "File to factory and real time behaviour in ONL-architecture" iA, vol. #1, 2007, pp.8-21.
16 Sommer, B. "Transformer: steuerbare Formveränderung einer konstruktiven Gebäudehülle" arch+, vol. 154/155, 2001, pp.120-123 and Sommer, B., Häuplik, S. and Aguzzi, M. "Inflatable Technologies: from dream to reality" in ECCOMAS, Structural Membranes 2007: III International conference on textile composites and inflatable structures. Barcelona: CIMNE, 2007.
17 Note the different proportion of the building blocks compared to fig. 3.
The representation of post design(v.) design(n.) information
Josh Lobel jlobel@mit.edu Massachusetts Institute of Technology 77 Massachusetts Avenue, Cambridge, MA, USA
Abstract. Attempts to address interoperability issues in digital design information have become stilted. The lack of any real success is indicative more of the questions asked than of the solutions proposed. If design information is the progenitor of design representation, and representation is a method by which to encode, store, and distribute design information, then the issues associated with digital design information can be seen as special cases of the general problems associated with communication. Considering a representation by asking 'What is the information that needs to be communicated?' and 'With whom is this information being communicated?' may provide a better perspective from which to assess specific technological problems such as software interoperability. The goal of this paper is a call to attention – an exercise in critical thought and a provocation. Can re-conceptualizing the problems with the representation and interoperability of digital design information as generic problems of communication offer insight on novel solutions? A brief overview of the challenges posed to interoperability is presented along with current and past efforts to address this issue. An alternative methodology for the communication of design information via process rather than state descriptions is proposed, followed by a summary conclusion.
"In each period of our history, design and communication have evolved synchronously with the technology of the time. Each new medium has extended our sense of reality and each has looked to its predecessor for language and conventions, referencing and adapting its characteristics until its unique capabilities can be explored and codified." – Muriel Cooper, 1989.
In order to engage in a critical, digital discourse we must understand, or at least agree upon, a context in which the discourse is to take place. Material matter(s) embody context-independent characteristics and a preferred state of existence based on their physical properties; conversely, digital matter(s) are completely context-dependent and have no "natural" or desired state. Any digital discourse must therefore be framed with respect to the particular problem needing to be solved. A 'problem' in this domain is the
discrepancy, or gap, between the desired state of a system and its existing state. In regard to design problems, design tools are the mechanisms by which to bridge this gap through the manifestation of design information. The determination of a design tool's appropriateness or usefulness can be considered with respect to the facility with which it allows designers to achieve their design goals. I believe it is helpful to begin with an examination of the 'deliverables' of a design process: the design documentation. Effectively all contemporary architectural design documentation attempts to capture design information in the form of a state description1. As such, design documents (including digital / physical drawings, models, visualizations, bills of materials, etc.) are considered the representation of a set of physical and spatial relationships which comprise the design intent. All design documents, material or digital, aim to communicate design information through a process of encoding, storing, and distributing a set of commonly agreed upon signs and symbols – what may be referred to as the architectural vocabulary. If communication is the main driver of documentation, then the effectiveness and fidelity of the communicative effort can be judged by the consistency and conformity of design information across a set of design documents, and between the documents and the built work. Communication theory defines information as "…a measure of one's freedom of choice when one selects a message." [1] Information in a design document is therefore a measure of the relative ambiguity present in a given document. In an industry heavily reliant on a communicative enterprise as overwhelmingly ambiguous as the visual2, the difficulty of disambiguating design information through annotated, graphical state descriptions becomes exceedingly clear. In short, everyone reads the drawings differently3.
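Shannon's measure makes this ambiguity claim concrete. As an illustrative aside (the reading probabilities are invented, not drawn from any study), if a drawing admits n plausible readings with probabilities p_1, …, p_n, its ambiguity can be quantified as the entropy

H = -\sum_{i=1}^{n} p_i \log_2 p_i

so a detail that two trades read in two equally likely ways carries one bit of unresolved choice, while a fully disambiguated detail carries zero.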
It is important to make the distinction between the act of design and the result of design. To design – as a verb, as the act of creating, planning, or calculating in service of a predetermined end4 – presumes certain associative leaps and intuitive design decisions which should not be unduly constrained in advance. However, a design – as a noun, as the expected result of a particular process – can, should, and must be analytically rationalized into a series of discrete procedures from which it can be derived. This of course presupposes a direct link between a design process and the processes by which the design is materialized. In a profession increasingly reliant on the use of digital design tools for the automated generation and rationalization of form, the contractual separation5 of a design goal from the means and methods by which it is to be realized retards the entire Architectural, Engineering, and Construction (AEC) industry. It is no longer productive for design descriptions to be contractually isolated from the processes of fabrication and assembly. As Dennis Shelden noted in Tectonics, Economics and the Reconfiguration of Practice: The Case for Process Change by Digital Means, our "Capabilities for the geometric expressions of form - enabled by advances in digital media - have moved beyond the capacities of 'conventional' project descriptions to effectively capture and process project intentions into building…A key aspect of this catalytic force is the potential for directly repurposing information through various stages of project definition and execution." (emphasis added) [3].
It is the potential to directly repurpose information that characterizes the greatest opportunities for design via digital media, "extending our sense of reality" and providing insight on how to free digital technology from the "language and conventions" of its material predecessors [4].
"First, the taking in of scattered particulars under one Idea, so that everyone understands what is being talked about…" – Plato, Phaedrus, 265D
By moving away from material-dependent design manifestations, Computer Aided Design (CAD) established a means for capturing, storing, and processing the information necessary to re-present a design object as explicit relationships between abstract-symbolic entities. This critical difference was noted by its creator, Ivan Sutherland, in 19756. Twenty years later, this distinction was restated more abstractly: "An analog medium transfers shape to produce an analogue of one physical arrangement in another, analogous one. But digital media transform physical form into conceptual structure. A shape or color is converted into a number whose symbol is then inscribed on a ledger so that it can subsequently be ascertained by a machine or a person. The material out of which this ledger is constructed is incidental to the information stored, unlike the constitutive material defining an analog medium." [5]
Current initiatives aimed at achieving greater performance, tighter construction tolerances, greater formal complexity, increased sustainability, and reduced environmental impact – currently generalized under the title BIM (Building Information Modeling) – will only become a reality when the design ledger becomes truly inconsequential. Currently, the most significant impediment to achieving this goal is found in the 'I' of BIM: the transformational axis which portends the means by which a building can be mapped to a model and a model to a building7. Rather than wasting time, money, and effort on reformatting existing data to satisfy the requirements of particular software8, discrete sets of information should be filtered from an overall set of project data as needed, allowing varying design representations to be created on-the-fly without sacrificing the consistency and conformity of the overall design information.
The degree to which digital information may be repurposed is directly related to the technological independence, or interoperability, of the information, which is typically determined by the data structure which houses the information. The majority of attempts to resolve this problem can be categorized as follows: committee-based, standards-based, market-based, and open-source. Committee-based solutions such as the Initial Graphics Exchange Specification (IGES) and the STandard for the Exchange of Product model data (STEP) have suffered from the retarding effects of bureaucratic decision-making, slowing their ability to keep pace with rapid changes in technology. Attempts by commercial geometry kernel providers to create industry-standard data structures have failed because the readily available, high-quality products (ACIS, Parasolid, etc.) are roughly equal. Market-based approaches by software vendors, in the form of all-in-one CAD/CAM/CAE packages such as CATIA (and now the AutoDesk suite of products), result in prohibitively expensive software and licensing costs and the need for dedicated experts to operate the software, with no guarantee of the software being the best choice for every job. Because no single obvious standard has emerged for digital modeling, affiliate programs through which software developers encourage third parties to develop additional software functionality via plug-ins and APIs (Application Programming Interfaces) have not been widely effective. One of the most recent and heavily supported forays into this field is the Industry Foundation Class (IFC) system9.
To a large extent, IFC has been built on the STEP framework; however, it could arguably be included within any one of the aforementioned categories, increasing its vulnerability to failure. The common aspect of all these approaches is the consideration of the state description as the only means by which to communicate design information. Rather than more of the same, a critical re-evaluation of the means and methods by which we communicate design information is needed. Fabian Scheurer poses the question nicely in his essay Getting complexity organized: Using self-organisation in architectural construction: "What is a reasonable quantity of explicit information for a specific design, and how does one communicate it in a reasonable fashion?" [7]. If we abstract the problem of capturing, storing, and distributing design information as a general problem of communication, then we can ask two very fundamental questions: 'What are we trying to communicate?' and
'With whom are we trying to communicate?' Stated this way, the pre-conditions of what and with whom provide the criteria with which we can determine the most effective way to subdivide, or filter, a given set of information, and the most appropriate method for describing that information (either as a state description, a process description10, or both). For example, the type of information and how it is communicated will differ when transferred from person to person, person to machine, or machine to machine.
"After all, nothing is more fundamental in design than formation and discovery of relationships among parts." – Bill Mitchell, 1995.
Contemporary generative design techniques such as 'scripting' encode explicit relational rule-sets in high-level computer languages that can, to a certain extent, be read by people as well as compiled by computers into executable machine code. These scripts capture a post-rationalized set of steps (a rule-schema) that facilitates the derivation of structured data (visualized by most design software in the form of geometry) from a given set of variables and pre-determined conditions. Because the relationships between the variables and conditionals must be explicitly and logically embedded in the rule-schema, and each derivation of the script is dependent on the values assigned to the independent and dependent variables – its context – these methods are inherently parametric; change a variable, re-run the script, and the entire system will be re-evaluated accordingly. In addition, because such schema can be stated as step-wise procedures from an existing state to a desired state, they can always be stated in a technologically independent format. Correlated with increases in the use of scripting are increases in the availability of proprietary design software, much of which is packaged with its own individual scripting language. Assuming the trend toward generative techniques spills over into mainstream practice, the necessity to consider, rationalize, and explicitly capture in schema the processes and relationships by which a design is derived will also increase. Process descriptions may be able to provide a more consistent means for communicating design information in a less ambiguous way. Seen this way, design solutions are not singular entities but discrete instances of a set of evaluated design rules in a particular context. Changes to the context only affect the derivation, not the design itself. A state description can be used in conjunction with the process description to check that the design information was communicated correctly, providing a validation for consistency and conformance. The precedents of design-pattern approaches (from which object-oriented programming owes much of its development) and extensible rule schema11 provide a working basis for the encapsulation and dynamic re-combination of discrete rule-sets.
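As a minimal sketch of this distinction (the stair rule and its parameters are invented for illustration, not drawn from any standard): a process description is an executable derivation, and the state description it produces can be checked for conformance against the original intent.

import math

def derive_stair(total_rise_m, max_riser_m=0.18, tread_m=0.28):
    """Process description: step-wise derivation of a stair's geometry."""
    steps = math.ceil(total_rise_m / max_riser_m)  # smallest legal step count
    riser = total_rise_m / steps                   # equal risers
    return {"steps": steps, "riser": riser, "tread": tread_m}

# Derived state for one context; changing the context changes only the
# derivation, not the rules themselves.
state = derive_stair(total_rise_m=3.0)

def conforms(state, total_rise_m, tolerance=1e-6):
    """State-description check: does the derived state realize the intent?"""
    return abs(state["steps"] * state["riser"] - total_rise_m) < tolerance

assert conforms(state, 3.0)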
The implementation of such an approach is non-trivial. Mapping information between proprietary data structures is notoriously difficult, and information loss is typically the rule, not the exception. Even some proponents of IFC acknowledge that it is just one of many options and that it may not be the best framework under all circumstances12. Another challenge is that certain descriptions of space, such as mathematical function-based descriptions like NURBS, are not supported by all digital design tools and must be transformed into faceted approximations before they can be read into non-NURBS programs. This may suffice for visualization, but can lead to significant errors in fabrication and construction. A promising approach would be a general-purpose specification (such as XML) that provides a framework for the encapsulation and extensibility of design rules, meaning that rules could be rewritten, re-ordered, and added on-the-fly without requiring the entire schema to be redefined.
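A hedged sketch of such an extensible schema in code (the rule registry, domain tags, and rules of thumb are invented, not an existing specification): tagging each rule with a domain lets any party derive a domain-specific representation from the same overall rule-set.

RULES = []  # the overall schema: (domain, name, rule_function)

def rule(domain, name):
    """Register a design rule under a domain tag; new rules add on-the-fly."""
    def register(fn):
        RULES.append((domain, name, fn))
        return fn
    return register

@rule("structural", "beam_depth")
def beam_depth(span_m):
    return span_m / 20.0        # rough rule of thumb, purely illustrative

@rule("cost", "beam_cost")
def beam_cost(span_m, rate=120.0):
    return span_m * rate        # invented unit rate

def derive(domain, **context):
    """Domain-specific representation: evaluate only that domain's rules."""
    return {name: fn(**context) for d, name, fn in RULES if d == domain}

structural_view = derive("structural", span_m=8.0)   # the engineer's view only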
Theoretically, the procedural description of a design could contain domain-specific rules (for such domains as structural, electrical, cost analysis, etc.) to be invoked only by those parties interested – producing domain-specific design representations and reducing the informational clutter with which any one part of the project team would need to be concerned. Specification frameworks also provide for the validation of rule-sets with respect to each other, allowing contradictory rules to be quickly identified and addressed. A corollary of this approach could be the automated translation between software-specific scripts. A direct translation from one scripting language to another may prove prohibitively difficult, depending on the degree of differentiation between program-specific data structures. For example, there may not be a useful method for translating a script from a NURBS-enabled surface modeler such as Rhinoceros (RhinoScript) to a non-NURBS, constructive solid geometry based program such as AutoCAD (AutoLISP). It must be pointed out, though, that most if not all digital fabrication technology already performs such transformations of input geometry in order to derive machine tool paths; at a certain level, then, we have already accepted such approximations13. If design schema were written in a 'universal' meta-scripting language, a more direct mapping from these scripts to proprietary languages could be achieved. However, it is uncertain whether the industry would support the adoption of yet another layer to the design and documentation process, and further research needs to be done.
Conclusion
At a time when a great deal of contemporary architectural discourse is devoted to assessing the role of the architect, the capability and use of digital technology in architectural design have acted to further remove – rather than re-center – architects from a direct connection with the artefacts of their toil. Architects continue to adopt tools created for and by other industries, based on the desire to produce geometrically complex forms and to better manage project information. In addition to expanding the technical boundaries of traditional design tools14, this adoptive approach has led to an exponential increase in the amount of information generated by the AEC industry. The amount of attention that issues relating to communication, such as interoperability, have received, relative to the effectiveness of the solutions, is indicative of the lack of a critical, digital discourse. The continual proliferation of digital tools employed in the design, fabrication, and construction of architectural projects has exacerbated the issue of technological interoperability, which can restrict or prohibit the sharing of digital information amongst project teams. By exploiting the fundamentally unique characteristics of digital media, architects may be able to reposition themselves not just as process consumers, but as process creators, re-establishing the link between thinking and making. While the solution presented here may at best be a schematic proposal, I feel it is important enough to warrant further consideration. More importantly, the focus of digital discourse, both in academia and the profession, needs to be re-centered on how we communicate digital design information.
References
[1] Shannon, Claude E. (1949) The Mathematical Theory of Communication (Urbana: University of Illinois Press).
[2] Simon, H.A. (1996) The Sciences of the Artificial, 3rd ed. (MIT Press), p.210.
[3] Shelden, D. (2006) "Tectonics, Economics and the Reconfiguration of Practice: The Case for Process Change by Digital Means" in Architectural Design v.76 no.4, Programming Cultures: Art and Architecture in the Age of Software (July/August 2006), pp.82-87.
[4] Cooper, Muriel (1989) "The New Graphic Languages" in Design Quarterly v.142, Design and Computers, pp.4-17.
[5] Binkley, Timothy (1997) "The Vitality of Digital Creation" in The Journal of Aesthetics and Art Criticism v.55 no.2, Perspectives on the Arts and Technology (Spring 1997), pp.107-116.
[6] Sutherland, I. (1975) "Structure in Drawings and the Hidden-Surface Problem" in Reflections on Computer Aids to Design and Architecture, N. Negroponte, ed. (New York), pp.73-85.
[7] Scheurer, Fabian (2007) "Getting Complexity Organized: Using Self-Organization in Architectural Construction" in Automation in Construction 16, pp.78-85.
[8] Simon, H.A. (1996) The Sciences of the Artificial, 3rd ed. (MIT Press), p.210.
Endnotes
1 According to Simon, state descriptions "…characterize the world as sensed; they provide the criteria for identifying objects, often by modeling the objects themselves." [2]
2 The field and study of shape grammars and visual calculating is rooted in the fundamental ambiguity of the multiple ways in which we can see, interpret, and work with the visual world. For more on this topic see Stiny, George (2006) Shape: Talking about Seeing and Doing. MIT Press: Cambridge, MA.
3 For more on the problematic nature of communication see "The Conduit Metaphor" by Michael Reddy in Metaphor and Thought, Ortony, A. (ed.) (1993). Cambridge University Press: England. 2nd edition.
4 Paraphrased from the Merriam-Webster Unabridged Online definition 1.g.
5 According to the American Institute of Architects document B101-2007, Standard Form of Agreement Between Architect and Owner, section 3.6.1.2, the designer is barred from explicitly specifying the means and methods by which their projects are to be built. Regardless of how integrally the process may be bound to the product, the discretion is left to the builder to choose their preferred methods so long as the final outcome reasonably matches the design documents.
6 "To a large extent it has turned out that the usefulness of computer drawings is precisely their structured nature and that this structured nature is precisely the difficulty in making them. I believe that the computer-aided design community has been slow to recognize and accept this truth. An ordinary draftsman is unconcerned with the structure of his drawing material. Pen and ink or pencil and paper have no inherent structure. They only make dirty marks on paper. The draftsman is concerned principally with the drawings as a representation of the evolving design. The behavior of a computer-produced drawing, on the other hand, is critically dependent upon the topological and geometric structure built up in the computer memory as a result of drawing operations. The drawing itself has properties quite independent of the properties of the object it is describing." [6]
7 This suggests seeing BIM not as a tool (as many producers of architectural software purport), but as an organizational strategy for storing and distributing project data. This consideration proposes a framework through which all representations of a project truly become selective partial-orderings from the overall set of project data.
8 An overwhelming majority of the effort expended on the analysis of projects which have already been created in a digital format is devoted to input preparation and geometry redefinition specific to each analysis program. See Bazjanac, V. (2001) "Acquisition of Building Geometry in the Simulation of Energy Performance", Proceedings of Building Simulation '01, Seventh International IBPSA Conference, pp.305-311.
9 IFC was developed by the International Alliance for Interoperability (IAI), an international consortium of commercial companies and research organizations founded in 1995. The actual development work is carried out by a six-member group known as the Model Support Group. Software applications must become "IFC Compliant" in order to import and export IFC files from their native data structure to the IFC-standard data structure. Also, the object-oriented nature of IFC allows third-party users to create new entities not currently defined in the IFC model, called "proxies".
10 A characterization of "…the world as acted upon; [process descriptions] provide the means for producing or generating the objects having the desired characteristics." [8]
11 Such as shape grammars and XML (eXtensible Markup Language).
12 See "The IFC Building Model: A Look Under the Hood", online article; Khemlani, L. AECbytes Feature, March 30, 2004. http://www.aecbytes.com/feature/2004/IFCmodel_pr.html
13 This would be another problem well worth further investigation.
14 See section 'Expanding Design Boundaries', pp.405-408, in Chapter 15 of Digital Design Media, 2nd edition; Mitchell, William J. and McCullough, Malcolm (Wiley & Sons, 1995).
Rethinking the Creative Architectural Design in the Digital Culture
Serdar ASUT
Anadolu University, Department of Architecture, Eskisehir, TURKEY
sasut@anadolu.edu.tr
Abstract
This paper tries to examine the effects of emerging digital tools on architectural design. Digital tools are not only practical instruments used for drawing; they also affect design thinking. As the tools used in architectural design are mostly commercial, one can say that design thinking, the identity of the design, and the creativity of the designer are defined by the companies which develop these tools. Therefore architects have to be able to manipulate these tools and personalize them in order to free their design thinking and creativity. This paper addresses open source development as a way to redefine creativity in the architecture of digital culture.
Keywords: Design Tools, Digital Culture, CAD Software, Open Source
1. Introduction
A tool is an extension of the human mind. While doing any kind of particular activity, several types of tools are always used. It is easier to observe this claim within physical activities, for instance in what a carpenter physically does. It is almost impossible to do his job with bare hands, and the quality of his craft depends on the quality of the tools he uses. The same relation holds for any kind of mental activity, such as thinking, imagining and interpreting. In such activities, not only physical tools but also conceptual ones are used. Designing is an activity which incorporates both physical and conceptual tools. In an architectural design process, besides physical tools such as all the drawing and drafting instruments, several conceptual tools such as the shape grammar and library are used. These conceptual tools are the ones the designer uses to abstract and comprehend the design problem, mentally reconstruct, figure out and resolve it, and thus generate the design idea. And the physical tools are the ones used to visualize and realize the design. There are several reasons that make design thinking dependent on the physical tools used. First of all, the design process includes visual thinking. Therefore all of the visual elements that are created during the process influence the designer's thinking. Design is not a linear process which focuses on the target, but a netlike path which includes instantaneous feedbacks and coincidental decisions. As Nigel Cross puts it, designers are solution-led, not problem-led; for designers it is the evaluation of the solution that is important, not the analysis of the problem1. And what causes these feedbacks and decisions, and constitutes the evaluation, are the visual elements such as drawings, sketches, models, etc. that are created by using the physical tools. Besides, beholding the effects of tool shifts on design thinking is more distinctive when considering the conceptual tools as well, such as the shape economy, shape grammar and library, the concepts, the relations, the parameters and the rules, because these are the tools that the designer mentally uses to generate the idea, construct the relations and evaluate the solution. Thinking of the conceptual tools as well, one can say that the identity of the design and the creativity of the architect depend on the tool used.
1 see Cross N. Designerly Ways of Knowing, London: Springer, 2006.
2. Emerging Digital Tools in Architectural Design
We are now in the midst of a period which brings about a momentous change in the tools used in architectural design, as in other fields. Digital tools – to call them by a general name – are the gifts of this new paradigm, which is called the information society or the digital culture. To name a few: computers, cell phones, networks, software, vectors, graphs, codes and algorithms are some examples. What makes them momentous is that they serve as both physical and conceptual tools in architectural design.
2.1 How Digital Tools Take Place in the Design Process
The most regarded scope of the digital medium in practice today is its representative aspect. As Peter Szalapaj puts it, design is a subject that requires not only the creation and development of design ideas, but also, increasingly in contemporary architectural practice, the effective expression of these ideas within computing environments by people2. Such a visual rhetoric is needed and used by architects in order to demonstrate their ideas. The digital medium, as a drawing, drafting and representation tool, has a great potential to effectively express the architect's idea through photo-realistic renders, fly-through animations, and so on. However, what is more important for design thinking is that representation is significant not only as a means of expressing the designer's idea to someone else, but also for the feedbacks within the design process. Such representations visualize the design decisions and let the designer rethink, reconsider and remodel them. The design process is such a netlike path, as mentioned before. One important genre of representation is the physical models created by several rapid prototyping technologies such as Selective Laser Sintering (SLS), Stereolithography (SLA), 3D Printing (3DP), and so on. Creating physical models for the representation of a design is not a new idea. 3D models have been used as the complementary medium of 2D drawings for hundreds of years, and they are still in use. Even though digital environments are able to provide virtual 3D representations, the existence of a physical 3D model is still considered influential. And rapid prototyping leads to manufacturing technologies, which are another important aspect of digital design. The digital environment makes not only the design but also the manufacturing process of complex systems easier. Beyond the forms of the buildings, as building systems, construction technologies, cost and structural analysis get more complex, the integration of design and manufacturing processes becomes more necessary. And a design process operated within the digital medium is more amenable to being integrated with manufacturing, since it provides a common language and environment for the techniques and actors of all phases. Besides, as Yehuda E. Kalay mentions, computational methods aimed at facilitating collaboration have focused primarily on assisting the communicative aspects of collaboration in A/E/C (Architecture, Engineering, Construction)3. As the industry becomes a more complex and interdisciplinary field, involving actors from different disciplines who work on different phases and purposes and take part at different periods, such collaborations are vital. The digital environment is able to provide these collaborations between disciplines, just as it is used for design collaboration between the members of a design team.
2 see Szalapaj P. Contemporary Architecture and the Digital Design Process, Massachusetts: Architectural Press, 2005.
3 see Kalay Y.E. "The Impact of Information Technology on Design Methods, Products and Practices", Design Studies Vol 27 No. 3, May 2006, pp. 357-380.
These qualities of the digital environment are being put to practical use in architecture, and the methods, actors and concepts are being redefined accordingly. The definitions given so far, however, are incomplete and describe a primitive, insufficient condition. The missing point is that, beyond being a practical tool, the digital medium assists the human mind and takes part in the design process as an actor, a collaborator of the human designer. That is, it does not only execute the designer's idea; it also generates ideas. As Kostas Terzidis puts it, these "idea generators", based on computational schemes, have a profound ability not only to expand the limits of human imagination but also to point out the potential limitations of the human mind4. However, in order to let the medium generate ideas and expand the limits of human imagination, the designer has to go beyond mouse-based applications and understand the mental processes of the digital environment. In parallel with research in Artificial Intelligence, the possibility of a computer that designs is being improved: if the designer's mental process can be defined within a computational logic that a computer can operate, then the computer can design. Even though the computer is not the only actor in this progression, it is one of the main agents that define design thinking, being both a physical and a conceptual tool. In other words, the designer imagines, thinks and acts within a context defined by the qualities of the digital tool in use.
2.2 CAD Software as a Translator between the Designer and the Digital Tool
A designer thinks through visual tools. He/she creates visual representations in order to observe a design idea physically; once created, these representations allow the designer to examine, rethink and reconfigure the decisions he/she has made. The things used to create these representations are visual entities (2D and 3D), a shape economy or a geometrical language: the designer thinks by operating on these visual entities. The digital environment, however, operates in a different language, one of information defined by codes. Basically, even though a computer can simulate human thinking, it uses switches that turn on and off to perform tasks that are arithmetically defined by codes and algorithms. However complicated the task, it can be operated digitally as long as it is defined by step-by-step algorithms. In other words, a computer can simulate human thinking for even very complex tasks, as long as the entities humans think with are translated into the ones a computer operates on. In design, that translator is CAD software. CAD software translates the language of the designer into the language of the digital tools, and vice versa, and thereby provides an environment in which designer and digital tools can collaborate and communicate. As in conventional design methods, in a CAD process the designer inputs data through visual elements; the software translates the data into the digital language, lets the digital tool operate on it, and visualizes the result by retranslating it back for the designer. The CAD software therefore takes a vital role in the interaction between the designer and the digital environment: the way the designer uses the digital environment, and the way he/she thinks, are largely defined by the software used. A minimal sketch of this translation round trip appears below.
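The round trip can be illustrated schematically. The sketch below is a deliberate toy, not a description of any actual CAD package: the Line entity, the fixed-point integer buffer, and the encode/operate/decode functions are hypothetical stand-ins for the designer's geometric language, the machine's arithmetic language, and the two directions of translation.

```python
# A deliberately schematic sketch of CAD software as a two-way translator.
# All names here are hypothetical illustrations, not a real CAD API.
from dataclasses import dataclass

@dataclass
class Line:
    """A visual entity in the designer's geometric language."""
    x1: float
    y1: float
    x2: float
    y2: float

def encode(line: Line) -> list[int]:
    """Translate the designer's entity into the machine's language:
    a flat buffer of integers (here, fixed-point coordinates)."""
    return [round(v * 1000) for v in (line.x1, line.y1, line.x2, line.y2)]

def operate(buffer: list[int], dx: int, dy: int) -> list[int]:
    """The digital tool works only on encoded data: an arithmetic,
    step-by-step operation (here, moving the line)."""
    return [buffer[0] + dx, buffer[1] + dy, buffer[2] + dx, buffer[3] + dy]

def decode(buffer: list[int]) -> Line:
    """Retranslate machine data back into a visual entity the
    designer can examine, rethink and reconfigure."""
    x1, y1, x2, y2 = (v / 1000 for v in buffer)
    return Line(x1, y1, x2, y2)

# Round trip: designer's input -> digital operation -> visual feedback.
moved = decode(operate(encode(Line(0.0, 0.0, 3.0, 4.0)), dx=1000, dy=0))
print(moved)  # Line(x1=1.0, y1=0.0, x2=4.0, y2=4.0)
```

However trivial, the sketch makes the point visible: the operation itself happens only in the encoded language, and the designer sees only what the retranslation returns.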
Most of the software used in architectural design today is commercial. One can therefore say that the tools we use are produced commercially, and that the way we think and design is, indirectly, defined commercially. Each piece of software carries specific characteristics of the producing firm's commercial identity; today it is not difficult for an expert to tell which software was used in a design process simply by evaluating the outcome. This signifies a problem if we still want to define architectural design as a creative act in the future. Thinking about future scenarios, William J. Mitchell warns that one possibility is that a few large software developers will dominate the CAD market, treat libraries of shape construction procedures as proprietary intellectual property, and thus define the shape universes that architects can explore5. Therefore, architects have to be able to manipulate these tools and utilize them as collaborators, as idea generators that can expand the limits of human imagination and support the designer's creativity. Otherwise they will only be exploring the shape universes defined by CAD developers. This paper addresses open source development within the network culture as an answer to this necessity.
3. The Network Culture, WEB 2.0 and Open Source
The network culture today means much more than the early applications of the internet. Under the concept of WEB 2.0, the net is the place that includes all forms of information and communication, evaluates itself thanks to consumers who are also producers, and is the environment of social organization and individual participation. Manuel Castells characterizes internet culture by a four-layer structure: the techno-meritocratic culture, the hacker culture, the virtual communitarian culture, and the entrepreneurial culture6. Within this definition, this paper is concerned with the virtual communitarian and, mainly, the hacker layers. Eric S. Raymond defines hackers as artists, tinkerers, problem solvers and experts, not as the term is now abused by journalists to mean a computer criminal7. Such a complaint is appropriate, since even the dictionary defines a hacker as someone who hacks into other people's computer systems8. According to The Jargon File9, however, a comprehensive compendium of hacker slang in the public domain to be freely used, shared and modified, this is the definition of a cracker, not a hacker. In fact, a hacker manipulates programmable systems and enjoys doing so, generates software and freely shares it in the public domain, creates communities both in the physical world and on the network, and works to free information. As Castells puts it, the hacker culture is, in its essence, a culture of convergence between humans and their machines in a process of unfettered interaction. It is a culture of technological creativity based on freedom, cooperation, reciprocity, and informality10. This is what WEB 2.0 provides; or rather, the applications just mentioned introduce the concept of WEB 2.0, which refers to the emerging phase of the network culture: the platform which enables and is generated by individual participation, is the medium of collective intelligence, and accommodates all means of communication and information, enabling users to freely share, use and modify them. One of the major applications of sharing is the sharing of software code, called open source development. A hacker creates software, puts it and its source code in the public domain, and others freely use, modify and redistribute it. In this way each user creates a different version of the software according to his/her own needs, abilities and creativity. What is important about open source development is that software produced this way is not commercial, is not bound to particular individuals or institutions, and can be modified according to the user's needs and desires.
4 see Terzidis K. Algorithmic Architecture, Massachusetts: Architectural Press, 2006.
5 see Mitchell W. J., "Foreword", in Terzidis K., Expressive Form: A Conceptual Approach to Computational Design, New York: Spon Press, 2003, pp. vii-viii.
6 see Castells M. The Internet Galaxy, New York: Oxford University Press, 2003.
7 see Raymond E. S. The Cathedral & the Bazaar: Musings on Linux and Open Source by an Accidental Revolutionary, California: O'Reilly Media, 2001.
8 Cambridge Advanced Learner's Dictionary, Cambridge University Press, http://dictionary.cambridge.org
9 The Jargon File, http://catb.org/jargon/
10 see Castells M. The Internet Galaxy, New York: Oxford University Press, 2003.
Open source is thus able to provide unique, personal digital tools for any kind of user. It can provide tools that act as idea generators and collaborators of the architect, save the architect from being bound to commercial software developers, and support the creativity of architectural design in the digital culture. The point to be discussed, therefore, is how to generate open source architectural design software. To answer this, one must examine how hackers work and the knowledge they use, and redefine the architect as an expert of this emerging knowledge. Certainly our traditional body of knowledge will no longer be enough to redefine the architect's position consistently in the digital culture. In fact the task is similar to the skills architects perfected in the past, like spinning the pencil while drawing a line, or using different types of rulers and bars to catch proper angles. Now our tools are digital ones, and we need to manipulate and modify them as was done with pencils and rulers in the past. Architectural knowledge, therefore, has to include programming, writing code and solving algorithms, drawing on computer science, cognitive science, artificial intelligence and so on.
4. Suggestions for Further Study
The expansion of this framework needs to be defined very carefully, because it requires a definition of architecture and of the architect totally different from today's predominant understanding. Besides the expanded knowledge of architecture, the definition of the architect as a social actor needs to be deconstructed. In an environment of collective intelligence, the architect can no longer tend to be a hero or a star. His knowledge has to be free, and his creativity has to be based on collectivity. He has to give up the idea of being responsible for all phases of the process and every single detail of the design. The architect, like anyone and anything in the digital culture, is just one node on the net. This node is tied to other nodes by several bonds and paths; all these nodes and ties create a complex structure that enables networked living. Architecture is a practice of this symbiotic living, over which people with common interests collaborate, share information and generate solutions.
5. References
Castells M. The Internet Galaxy, New York: Oxford University Press, 2003.
Cross N. Designerly Ways of Knowing, London: Springer, 2006.
Kalay Y.E. "The Impact of Information Technology on Design Methods, Products and Practices", Design Studies Vol 27 No. 3, May 2006, pp. 357-380.
Raymond E. S. The Cathedral & the Bazaar: Musings on Linux and Open Source by an Accidental Revolutionary, California: O'Reilly Media, 2001.
Szalapaj P. Contemporary Architecture and the Digital Design Process, Massachusetts: Architectural Press, 2005.
Terzidis K. Algorithmic Architecture, Massachusetts: Architectural Press, 2006.
Terzidis K. Expressive Form: A Conceptual Approach to Computational Design, New York: Spon Press, 2003.
The Jargon File, http://catb.org/jargon/
Digital Environments for Early Design
Form-Making versus Form-Finding
Jerry Laiserin
The LaiserinLetter™
USA
jerry@laiserin.com
Abstract
Design ideas, like scientific theories, are falsifiable hypotheses subject to testing and experimentation and—if need be—replacement by newer ideas or theories. Design ideas also are known through distributed cognition, in which a mental construct and an external representation complement each other. Representations may be categorized along the axes 2D-3D and Analog-Digital, plus a proposed third axis from Form-Making to Form-Finding. In Form-Making, the mental construct component (of distributed cognition) arises before the representation. In Form-Finding, representation arises before the mental construct. All media of representation have different affordances. Certain media and representations afford Form-Making more so than Form-Finding; and vice versa. Design educators, students and practitioners will benefit from conscious, systematic choice of media and methods that afford an appropriate range of Form-Making and Form-Finding behavior when proposing and testing design ideas.
1. Design hypotheses and media
Austrian-British philosopher Karl Popper redefined thinking about science by casting scientific theories as "falsifiable hypotheses."1 In this view, the scientific method involves testing any theory via experiments to determine whether that theory is false. As theories survive the experimentation/falsification process, they gain greater acceptance and usefulness. By analogy, design ideas can be thought of as falsifiable hypotheses about possible solutions to design problems. Instead of the scientific method, design ideas are tested and experimented on in the design process. An architectural design idea, such as a parti, may be tested against the building program or brief; a massing model against the zoning envelope; or an enclosure design against the desired building energy performance. Design ideas that survive this process — are not falsified by the building requirements against which they are tested — gain acceptance and usefulness to the designer for subsequent phases of design. Over the past two decades, numerous digital tools, both software and hardware, have emerged to help designers formulate, express, visualize and test design ideas. At the earliest stages of the design process, these digital tools complement, rather than displace, traditional analog tools and methods such as sketching on paper or building physical models by hand. However, this coexistence of electronic and paper-based tools tends to alter designers' perception of both media. Today's designers are free to use a wide range of media and interfaces to represent their design ideas. In 1991, William Mitchell and Malcolm McCullough, both then at Harvard, laid out a framework for understanding these media in their ground-breaking book, Digital Design Media.2 Grouping media according to what I call axes of representation, Mitchell and McCullough classified representations from 2D to 3D and from Analog to Digital. Thus, paper-based drawings are 2D Analog, while conventional CAD (or CADD) is 2D Digital, and so on. Furthermore, each of these modes of representation can be translated to the others. For example, paper drawings can be scanned to CAD files, and CAD can be plotted to paper; physical models can be 3D-scanned to 3D Digital models, and 3D Digital files can be
"printed" via rapid prototyping techniques; yet all these representations and media are different ways of expressing and viewing the same design ideas. Architects and design professionals have enjoyed this analytic framework for design media and modes of representation for two decades. However, that framework does not explicitly accommodate classification of media by their suitability to support early design. I propose adding to Mitchell and McCullough's two axes of representation a third axis from form-making to form-finding, as in Figure 1. Form-making, loosely defined, is a process of inspiration and refinement (form precedes analysis of programmatic influences and design constraints), versus form-finding as (loosely) a process of discovery and editing (form emerges from analysis). Extreme form-making is not architecture but sculpture (perhaps, folly)—form without function. Extreme form-finding also is not architecture but applied engineering—form exclusively determined by function.
Figure 1 - Axes of representation
Known architectural design methodologies fall between these extremes. Although this framework is not intended for architectural criticism, it can be argued from this position that many canonical works result from design processes optimally balancing form-making and form-finding—e.g., the work of Louis Kahn.
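One way to make the proposed third axis concrete is to treat each medium as a point in the three-axis space of Figure 1. The following sketch is illustrative only; the media listed and their coordinate placements are assumptions chosen for demonstration, not measurements from the text.

```python
# Illustrative only: media as points in the three-axis space of Figure 1.
# Coordinate placements are assumed for demonstration, not measured.
from dataclasses import dataclass

@dataclass
class Medium:
    name: str
    dim: float      # 0.0 = 2D ........ 1.0 = 3D
    digital: float  # 0.0 = Analog .... 1.0 = Digital
    finding: float  # 0.0 = Form-Making 1.0 = Form-Finding

MEDIA = [
    Medium("paper sketch",       0.0, 0.0, 0.2),
    Medium("conventional CAD",   0.0, 1.0, 0.3),
    Medium("physical model",     1.0, 0.0, 0.5),
    Medium("algorithmic script", 1.0, 1.0, 0.9),
]

def nearest(target: Medium, media=MEDIA):
    """Pick the medium whose affordances sit closest to a desired
    point in the representation space (squared Euclidean distance)."""
    return min(media, key=lambda m: (m.dim - target.dim) ** 2
               + (m.digital - target.digital) ** 2
               + (m.finding - target.finding) ** 2)

# e.g. a designer wanting a digital, 3D, strongly form-finding medium:
print(nearest(Medium("desired", 1.0, 1.0, 1.0)).name)  # algorithmic script
```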
2. Representation and cognition
Form-making and form-finding are more rigorously defined with respect to designers' ways of knowing. Distributed cognition posits that knowing occurs not solely as mental constructs but is distributed in external representations as well. Such representations include maps, charts, graphs, diagrams, sketches, drawings, models and so on. Further, cognition of complex, social-behavioral phenomena such as the operation of naval vessels may not reside with any individual but is distributed among crew members3—each of whom knows a portion of the whole as well as a mental model of which other portions are known by which crewmates. Thus, representations are tools for knowing. As with all tools, these representations have affordances—the qualities of an object, or an environment, that allow an individual to perform action.4 This is a rigorous way of expressing the common-sense notion "to someone with a hammer, the whole world looks like a nail." In other words, a fundamental property of the hammer is that it affords nail-driving. If affordances determine or constrain the potential for action within or upon representations (in their role as tools), and if representations embody a necessary component of knowing (as constituents of distributed cognition), then knowing is determined or constrained by the choice of representation. Representations that seem well-suited to their subjects (i.e., their affordances are appropriate to desired actions with or on their subjects) are deemed to be "handy" (ready-to-hand, zuhanden5). Thus, maps are handy for wayfinding, graphs for mathematics and floor plans for building. Representations that seem ill-suited to their subjects are not handy, but merely present-at-hand (vorhanden6). Such presence awareness disrupts knowing the tool-in-action and is experienced as "breakdown."7 Designers seeking, e.g., a consistently handy representation for designing at campus scale may have difficulty choosing between GIS systems (handy at regional scale) and CAD systems (handy at building scale). Distributed cognition and the affordances of design media thus correspond to our everyday understanding of "design": designs are identified with their representations, which in turn are identified with the mental constructs of designers. A designer who "has a design" asserts the existence of both a mental construct and an external representation. To attain the condition is_a_design, then, there must exist both a mental construct and an external representation.
3. Time of design
Further, there must exist some "time of design," Td, at which both conditions (mental construct, external representation) obtain. For any ∆T>0 (i.e., after Td) the condition is_a_design exists. For any ∆T<0 (i.e., before Td) the condition is_a_design does not exist, which means absence of mental construct, representation or both. Three possible sequences of action lead from a condition of "no design" to is_a_design as ∆T approaches 0 (time advances to Td): the mental construct arises first, the external representation arises first, or they arise simultaneously. Limits of human perception may imply that simultaneity is the more prevalent sequence; however, probability suggests simultaneity is a special case.
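The argument admits a compact formalization. Writing M(t) for "a mental construct exists at time t" and R(t) for "an external representation exists at time t" (symbols introduced here for convenience; they do not appear in the cited sources), the definitions above become:

```latex
\mathrm{is\_a\_design}(t) \iff M(t) \land R(t),
\qquad T_d = \inf\{\, t : M(t) \land R(t) \,\} \\[4pt]
\text{form-making:}\;\; \exists\, t < T_d : M(t) \land \lnot R(t) \\
\text{form-finding:}\;\; \exists\, t < T_d : R(t) \land \lnot M(t) \\
\text{simultaneity:}\;\; \forall\, t < T_d : \lnot M(t) \land \lnot R(t)
```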
Cases or design methodologies in which mental constructs arise first can be labeled form-making: the designer has "an idea," then sketches or otherwise represents the idea. Frank Lloyd Wright's "shaking one out of [his] sleeve"8 also corresponds to form-making—mental construct precedes representation. Cases or design methodologies in which representation arises first are labeled form-finding: designers doodle, pore over images or otherwise manipulate representations that are not yet "design"—but may represent programmatic influences and design constraints. Designers then "get the idea."9 Objets trouvés, algorithmic methods10,11 or aleatoric techniques of designers such as Peter Eisenman also are form-finding: representation precedes mental construct. However, it has been suggested that experienced designers, with greater knowledge of likely and/or possible designs, employ doodling strategies that inherently imply form.12,13 Special cases in which mental construct and external representation arise simultaneously (or in such close temporal proximity as to be indistinguishable14) correspond to such scenarios as Merleau-Ponty's intentional arc15 or Schön's reflection-in-practice,16 plus practice examples as cited above re Louis Kahn. Note that an alternative formulation collectively emerged from a 2004 workshop discussion on "The Designer as Tool Builder"17 in which the present author was a participant. Contrasts were drawn between "pre-rationalized form" (in which data precedes geometry) and "post-rationalized form" (in which geometry precedes data). In the present argument, pre-rationalized form therefore corresponds to form-finding, whilst post-rationalized form corresponds to form-making.
4. Form follows software?
Pierluigi Serraino, a Berkeley-based architect and theorist, has written extensively on the theme that "form follows software."18 In my view, Pierluigi's assessment is that different software tools afford different ways of design thinking and design expression. What all of this means for practicing designers and design students is that no single tool can provide the best solution for representing any design idea. In fact, a disciplined process of exploring design ideas through multiple tools can help insulate designers from the subtle influences (and/or limitations) provided (and/or imposed) by the affordances of any single medium or tool. Many of the most proficient designers instinctively recognize this situation and consciously exploit it as part of their process for testing design ideas. In his 2005 doctoral dissertation at Harvard, Athens-based architect Panagiotis "Panos" Parthenios examined many of these issues in considerable depth19 (full disclosure: I had the privilege of serving on Panos' dissertation committee). Panos' research included several elegant case studies of both skilled designers and design students at work. In one of these studies, an experienced project architect can be observed deliberately shifting her focus back and forth from sketches to CAD to study models and digital 3D to address emerging and evolving design issues as they came to hand. This process is confirmed by a 2004 survey of 240 computer-using designers included as an appendix to Panos' thesis.20 Regardless of age or years of experience, size of firm or types of projects, 60% of respondents identified pencil and paper as their favorite tool, with 80% starting their conceptual design process on paper—this despite the fact that all respondents identified themselves as computer-using designers. The runners-up to paper and pencil among tools/media for early design included SketchUp, 3D physical models, 3DStudio Viz/Max, AutoCAD, ArchiCAD, formZ, Revit and Photoshop.
Other favored conceptual design tools ranged from the expected, such as Maya, Rhino and Vectorworks, to surprises such as Microsoft Excel and Word (for conceptual design!). Adobe InDesign and Illustrator were not mentioned (perhaps because the Adobe Creative Suite was being revamped at the time of the survey) and Autodesk Impression had not yet been introduced to the market.
5. Choosing tools
By a 2:1 margin, survey respondents preferred to work with multiple software packages for conceptual design, despite expressed frustration with data exchange among the tools. Sixty-three percent believed computer tools allow them to design "better"; 22% said faster, but not better; 10% saw no change; and only 3% perceived the impact of software on conceptual design as negative. Among the aspects of their work that respondents saw as improved by digital tools were: visualization, communication, exploring more alternatives, exploring more complex geometry, improved perception, more organized thinking and "getting inspired."21 Renée Cheng, Head of the School of Architecture in the College of Design at the University of Minnesota, says that students today "come in with great fluency in digital tools, so that schools of architecture no longer need to teach computer skills or specific software." However, she also notes that "students don't choose tools well and often stick with them too long — they get stuck, but don't always know enough to know that they're stuck." The teacher's job, then, is "to push students to use different tools and media… [in order] to ask different questions about the design." In fact, according to Cheng, "any tool is more powerful if it is part of a cycle of digital and analog, going back and forth, rather than a linear progression from sketching first, then digital modeling, with no return." She encourages her students to do the same with 3D models and digital tools, "3D printing the model, then sectioning it on the band saw, modifying and gluing it back together before remodeling it in the computer."22
6. Critical Implications
Design educators, students and practitioners will benefit from conscious, systematic choice of media and methods that afford an appropriate range or "space" of Form-Making versus Form-Finding and Analog versus Digital media, strategies and behaviors when proposing and testing design ideas, as shown in Figure 2.
Figure 2 - Space of design media
These issues raise critical questions in the following areas, each of which warrants further study:
• Software development: Interfaces catering to different designers, different design methodologies and/or different aspects of design problem-solving.
• Software selection: Practitioners' choices among available software tools to support personal preferences of individual designers, design workflows across multiple designers or preferred methodologies within design firms.
• Curriculum development: Design schools' development and evaluation of curricula accommodating form-making and form-finding across n-dimensions of analog and digital media. Whither academic sequences, individual courses, problems and pedagogical methods? Must schools expose students to the full curricular space?
• Pedagogy: Design communication inherently is form-finding: recipients of design communication see (the sender's) design representation first, then develop mental constructs in response. Implications for teaching, the desk crit, pinup and so on?
• Design Practice: How do individual designers recognize and play to (or off) personal strengths (or weaknesses) relative to affordances of different media and demands of different design problems? How do design firms balance and/or blend the inductive skills/processes of form-makers with the design editing skills of form-finders?
• Human-Computer Interaction (HCI): Many software developers seek machine-readable design data transfers via building information modeling (BIM), industry foundation classes (IFCs) and so on. However, media of representation best suited to machine-readability may lack affordances for human-accessibility. Must one be sacrificed for the other?
• Mediated Collaboration: Various media of design collaboration afford different qualities of human-to-human interaction. If one manifestation of distributed cognition is team knowing, then to what extent should media of representation for collaborative technologies be chosen by their affordances (e.g., supporting the emergence of shared context)?
7. Conclusion
Design ideas, as scientifically falsifiable hypotheses, depend for their development and testing on the affordances of the media of representation chosen by their designers. The range or "space" of such media considered by designers must encompass all axes of representation: from 2D to 3D; Analog to Digital; and from Form-Making to Form-Finding. Exploration of all critical implications of this condition may lead to a future state in which designers embrace the capacity and opportunity to design their own tools with affordances appropriate to their intended design methods.23
Endnotes
1 Popper, K., The Logic of Scientific Discovery, 1934 (as Logik der Forschung; English translation 1959)
2 Mitchell, W. J. and McCullough, M., Digital Design Media, 1991
3 Hutchins, E., Cognition in the Wild, 1995
4 Gibson, J.J., "The Theory of Affordances" in Perceiving, Acting, and Knowing, Shaw, R. and Bransford, J., eds., 1977
5 Heidegger, M., The Basic Problems of Phenomenology, 1936
6 Heidegger, M., op. cit.
7 Winograd, T., and Flores, F., Understanding Computers and Cognition, 1986
8 Storrer, W.A., The Frank Lloyd Wright Companion, 1974
9 Buxton, B., Sketching User Experiences, 2007
10 Terzidis, K., Expressive Form, 2003
11 Terzidis, K., Algorithmic Architecture, 2006
12 Gross, M., private communication to the author at DCC04, 2004
13 Cheng, R., private correspondence with the author, 2008
14 Cheng, R., op. cit.
15 Merleau-Ponty, M., Basic Writings, 2003
16 Schön, D., The Reflective Practitioner, 1983
17 Hesselgren, L., et al., discussion topic in DCC04 workshop: "The Designer As Tool Builder And The Limits To Parametrics," 2004
18 Serraino, P., History of Form*Z, 2002
19 Parthenios, P., Conceptual Design Tools for Architects, 2005
20 Parthenios, P., op. cit.
21 Parthenios, P., op. cit.
22 Cheng, R., interviewed by the author for Cadalyst magazine, 2007
23 Kolarevic, B., private communication to the author at Innovation in AEC, 2005
Keepers of the Geometry
Architects in a Culture of Simulation
Yanni Loukissas
Massachusetts Institute of Technology
yanni@mit.edu
Summary
"Why do we have to change? We've been building buildings for years without CATIA?" Roger Norfleet, a practicing architect in his thirties, poses this question to Tim Quix, a generation older and an expert in CATIA, a computer-aided design tool developed by Dassault Systemes in the early 1980s for use by aerospace engineers. It is 2005 and CATIA has just come into use at Paul Morris Associates, the thirty-person architecture firm where Norfleet works; he is struggling with what it will mean for him, for his firm, for his profession. Computer-aided design is about creativity, but also about jurisdiction, about who controls the design process. In Architecture: The Story of Practice, architectural theorist Dana Cuff writes that each generation of architects is educated to understand what constitutes a creative act and who in the system of their profession is empowered to use it and at what time. Creativity is socially constructed, and Norfleet is coming of age as an architect in a time of technological but also social transition. He must come to terms with the increasingly complex computer-aided design tools that have changed both creativity and the rules by which it can operate. In today's practices, architects use computer-aided design software to produce three-dimensional geometric models. Sometimes they use off-the-shelf commercial software like CATIA, sometimes they customize this software through plug-ins and macros, sometimes they work with software that they have themselves programmed. And yet, conforming to Larson's idea that they claim the higher ground by identifying with art and not with science, contemporary architects do not often use the term "simulation." Rather, they have held onto traditional terms such as "modeling" to describe the buzz of new activity with digital technology. But whether or not they use the term, simulation is creating new architectural identities and transforming relationships among a range of design collaborators: masters and apprentices, students and teachers, technical experts and virtuoso programmers. These days, constructing an identity as an architect requires that one define oneself in relation to simulation. Case studies, primarily from two architectural firms, illustrate the transformation of traditional relationships, in particular that of master and apprentice, and the emergence of new roles, including a new professional identity, "keeper of the geometry," defined by the fusion of person and machine. Like any profession, architecture may be seen as a system in flux. However, with their new roles and relationships, architects are learning that the fight for professional jurisdiction is increasingly a fight for jurisdiction over simulation. Computer-aided design is changing professional patterns of production in architecture, the very way in which professionals compete with each other by making new claims to knowledge. Even today, employees at Paul Morris squabble about the role that simulation software should play in the office. Among other things, they fight about the role it should play in promotion and firm hierarchy. They bicker about the selection of new simulation software, knowing that choosing software implies greater power for those who are expert in it. Architects and their collaborators are in a continual struggle to define the creative roles that can bring them professional acceptance and greater control over design. New technologies for computer-aided design do not change this reality; they become players in it.
The full text of this essay is printed in: Simulation and its Discontents, Sherry Turkle (ed.) Cambridge: MIT Press, 2008.
Tempus Fugit
Transitions and Performance in Architecture
Simon Y. Kim
Massachusetts Institute of Technology
simonkim@mit.edu
Mariana B. Ibanez
Harvard University
mibanez@gsd.harvard.edu
abstract
Meaning in architecture has isotropic instances of realization: one that can unfold during the design process, and one that can be layered onto the artifact of the building; its components and forms constitute a communication flow that emerges from an abstract form of description to its physicality. The internal cognition of this condition situates the subject as the third element, one that identifies the meaning from the extant building to its proxy meaning. In this manner, narrative and aesthetics perform the actualizations (the spatial and physical sequences) so that the occupant may understand its implications.1 Architecture is thus a one-directional flow of information: the building is an inert object from which meaning is derived, and its physicality is static. Even in process-driven design, the synthesis of the many and the ordered is evident in the materiality of the architectural manifestation; the building, although presented as a result of process, cannot be separated from the reading of the generative operations.2 Rather than continue in this manner of constructing meaning from an extensive coding (joining a concept to an object) or from the instantiation (producing one from a larger field of possibilities) of a version, we suggest a dialectic that is bi-directional, or even multi-nodal; that is, continually self-renewing in meaning and material configuration with the active participation of the occupant. This representation is one that is time-based.
1. architecture and physical computing
Physical computing is defined here as a designed environment that responds to input. This input can be analog (input from the physical world of human occupation or of atmosphere) or digital (input from other computers, networks, or signals). Through the integration of physical computing in the conception of the design process, the work of architects in manipulating space with tectonic logic and material affect not only expands; new territories are found. Architecture gains another medium in computation that is critical not only in the making of digital form or information modeling but in the reassembly of spatial logic. As physical computing and cybernetics become increasingly ubiquitous in the practice of building, architecture, as a design discipline, must become a participant in their deployment.3 For physical computing, disciplinary tools must be applied for it to become an Activated Architecture in which we can shape and assess the polemic of its results.
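Reduced to its skeleton, such an environment is a loop: sample analog and digital inputs, transform them, and drive an output back into the space. The sketch below shows only that skeleton; the sensor, signal and actuator functions are hypothetical placeholders rather than references to any particular hardware platform.

```python
import random
import time

# Hypothetical placeholders for hardware I/O; any real platform
# (microcontroller, building management bus, etc.) would supply these.
def read_analog_sensor():    # input from physical occupation or atmosphere
    return random.uniform(0.0, 1.0)

def read_network_signal():   # input from other computers and networks
    return random.choice([0, 1])

def drive_actuator(level):   # output back into the designed environment
    print(f"actuator level: {level:.2f}")

# The basic loop of an Activated Architecture: analog and digital
# inputs are merged and transformed into a dynamic response.
def run(cycles=5):
    for _ in range(cycles):
        occupancy = read_analog_sensor()
        signal = read_network_signal()
        response = 0.5 * occupancy + 0.5 * signal  # a trivial mapping
        drive_actuator(response)
        time.sleep(0.1)

run()
```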
1.1 transitions
Since the incorporation of time-based and agent-based simulation as a mechanism for the production of design4, it has become apparent that a shift in architectural paradigm might be related to a technologically based ideology. In a recent lecture at MIT, Peter Eisenman suggested through his idea of lateness that in the late moments of an epoch, in the highly mannered, baroque, obscured work that resists legibility before a paradigm shift, the signs of the new are already present5. In a similar manner, Patrik Schumacher, in his introduction to Zaha Hadid's MAK catalogue, states that innovation does not occur as a completely new act, but always exists in relation to a current or historic model.6 In this context, we are positioning a scenario where what is deemed 'new' is not arriving from the late machinations of the critically disturbed, against-the-grain 'late'. The onset of ubiquitous computing, and of what we call Activated Architecture, is not linked to or archived only in the work of prior generations of architects. It is the purview of the physicists, the engineers and the cyberneticists alongside architects and designers: those who advance design in their respective fields. Here the argument may move into the re-alignment of a critical discourse of architecture applied to physical computing, or into the dissolution of this outmoded field of architecture into one of networked collaborations. We are interested in trying to find an architectural polemic, a scenario where the efforts are focused on achieving an architecturally meaningful format exchange. Here we construct a link from Gottfried Semper to Gordon Pask, not as a fictive device but as an inquiry of legacy based on motivation and method. Both were instructors and practitioners of architecture, establishing a new role for architectural production and evaluation, while aware of their positions in architecture in a Kantian vector of history.7
2. Semper, Pask and transformations
2.1. representation and simulation – a formal to temporal transformation
For Semper, the idea of expression in architecture lies not in ornamentation but in a transformation of a material system, from the original logic of its assembly, into an object of representation. This transformation into Urmotiv is a vehicle that holds a prior tectonic method but is transformed by a new technology.8 The reinforcement here is that it is through the enactment of primitive social action that architecture was constructed: the temporal, process-based ritual of weaving, knotting, and joining is a continual re-enactment made material. This affirmation of the roots of architecture was a response to a crisis in the deterioration of aesthetic criteria in ornament, consolidated in what Semper presented as Practical Aesthetics.9 For Pask, digital simulation is the domain of a new reality, as it no longer needs reification from an external source; the simulation has its own set of rules and patterns, making it independent. Rather than a linear, one-to-one mapping of a stimulus and a predetermined response, there is now a first order and a second order, a system aware of being observed and of the other observer system, derived from feedback loops. This is the teleology of an observed system toward the cognition of the observing system. This evolved into the use of language, of communication, called Conversation Theory: one level has a set of goals, the other a set of actions.
The response mechanism is not a direct, anticipatory mapping; instead the system has a higher-order goal. This Paskian Environment had a capacity for boredom, a feedback loop establishing a level of interaction rather than a simple reaction enforced upon the system. This idea of system participation, rather than a regulated manner of information and control, is what differentiates Pask when applied to his architecture studio. The phenomenology of Geist in observer and observed becomes fulfilled by subjective recognition of itself.
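This capacity for boredom can be caricatured in a few lines of code, and the caricature clarifies the distinction being made. The sketch below is an interpretation for illustration, not a reconstruction of any actual Pask device: the system habituates to repetitive input and, once 'bored', rewrites its own response rule instead of repeating a fixed stimulus-response mapping.

```python
import random

class PaskianEnvironment:
    """A toy second-order system: it observes how it is being engaged
    and revises its own response rule when interaction grows repetitive.
    An interpretation of Pask's 'capacity for boredom', not a replica."""

    def __init__(self):
        self.rule = lambda x: x          # current response mapping
        self.last = None
        self.boredom = 0.0

    def respond(self, stimulus):
        # First order: habituation - repeated stimuli raise boredom.
        self.boredom = self.boredom + 0.4 if stimulus == self.last else 0.0
        self.last = stimulus
        # Second order: once bored, the system rewrites its own rule,
        # seeking a new level of interaction rather than a fixed reply.
        if self.boredom > 1.0:
            gain, offset = random.uniform(0.5, 2.0), random.uniform(-1, 1)
            self.rule = lambda x, g=gain, o=offset: g * x + o
            self.boredom = 0.0
        return self.rule(stimulus)

env = PaskianEnvironment()
for s in [1, 1, 1, 1, 2, 2, 2, 2]:       # a repetitive occupant
    print(round(env.respond(s), 2))
```

A predetermined stimulus-response table would print the same value for every repeated input; here the fourth repetition provokes the environment into changing the terms of the exchange.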
2.2 ordering and organization
For Pask and Semper, there was a need for new criteria to describe existing information. Semper's first interest was in a taxonomy of architectural style and all its constituent elements, rather than a historical ordering. Style for Semper was not ornamental vocabularies applied in arbitrary fashion, but a careful re-assembly of meaning from the first industrial methods in what he calls primitive culture. This was an attempt to place his Practical Aesthetics as a comparative science, so that the function of the examined parts was separated from the whole. Here, the industry of textiles is socially informed in the weaving of reeds or branches, in the joining of surfaces by knots serialized into a seam that establishes continuity among disparate pieces, and in the wreath as the original work of art. The origins of these techniques are then the deductive drivers for the persistence of their appearance in industrial work: industry derived as an evolutionary transformation of their process, not as a materialist idea of the spiritual or the ritual aspect of craft. Pask's Interactions of Actors Theory10 continues his work on second-order environments. In his attempts to discard the notion of user and machine, Pask introduces process as "Concepts persist minimally as stable dynamic resonating triples linked in the Borromean manner."11 This can be read as the interdependence of process and product in the continual maintenance of a dynamic stability, so that a tripartite Borromean knot topology is maintained. Any product or process has a simultaneous switching impact on all other products and processes. For his Paskian environments, this can have an impact on architecture that is truly participative, beyond simplistic, linearly mapped behaviours.12
2.3 architectural implications
Semper's project of the ideal museum was to display his divisions of irreducible elements of process: the hearth, the walls, the terrace and the roof. These are based on having their own unique interior operation, not a form: the hearth is molded, which is done in ceramics; the walls are spatial dividers, woven in textile and autonomous from structure; the terrace is made by joinery and carpentry; and the roof is a mode of stereotomy, the measure of volumes in masonry.13 Each of the four elements was to be separated into its own quadrant, according to its industrial operations and outcomes. Semper also understood the nature of experiencing artwork over time and from varying positions,14 although the object has no motility outside of the viewer's active gaze and its own artistic affects. Pask's work in architecture, particularly as a collaborator on Cedric Price's Fun Palace15 and in his own Colloquy of Mobiles installation for the ICA in 196816, was equally careful in its definitions and behaviours, but his organizational scheme occupied the opposite pole. Control, information, and feedback were vigilantly maintained, but Pask was interested in the new meaningful relationships to be explored by the self-organizing system itself.
Whereas Semper cultivated an archivist manner, identifying persistent elements over time and material assembly, Pask was interested in carefully specifying the initial state of the system and then allowing it to manufacture its own logic of connections, even proposing that the system grow new wire connections over time in response to its own impulses.17 What can be concluded here is that architectural discourse, in this meeting of two designers, can be maintained within a device of representation made of indivisible elements rendered meaningful in social action. For Pask, those elements are inextricably linked, held in a dynamic equilibrium represented in participative behaviour. The validity of the system is in the mutability of predetermined responses for an adaptive collective experience.
Semper's identification of representation is similar: in his systematic classification of primitive technique and material transformed in industrial action, he makes a scenario through the primacy of process over form.
3. Activated Architecture: spatial and temporal representations
"… The role of the architect here, I think, is not so much to design a building or city as to catalyze them; to act that they may evolve." Gordon Pask18
3.1. physical computing, a design method
Applying this criticality, physical computing is currently defined as a designed environment that responds to input.19 Brought into the designed environment, which we instantiate as Activated Architecture, there is a transformation of the input that creates a dynamic response back to the occupant. This information is organized by the microprocessor into an exchange with an output device. In fact, this information is without value until linked with an agent of expression that is made meaningful to the other. It is this application of motive and its conveyance that is potentially troubled. The issue of what is perceived or interpreted is a historical one that can be addressed by intention without ambiguity. With the intervention of a Stil, a release can be found in the idea of electronics having no dominant or privileged viewpoint. The interesting question is what input and what output create a meaningful relationship with the occupants. The observer, formerly the passive recipient of the building's output, is now wholly active in initializing the response designed into the building. Buildings now have a narrative structure that can be construed as inbuilt. In electronics, bits are organized to represent a mediated reality: messages sent in bits from the sensors to the microprocessor are represented and translated in programming as an output that in turn can be sent back into the system.
3.2. activated architecture, conformal computing, networked urbanism
This is an architectural environment that is embedded and responsive. Its design and its execution are linked, becoming more than cyclic: what is generative is equally generated within its constant and dynamic feedback. The critical issues in conformal computing are those of representation and meaning as applied not only to space but to time. The sequence of actions within the system, the delay and patterning of information transfer for legible visualization, and the extensibility from local systems to global networks are isotropic in the communication of meaning.
3.3. continuous present – feedback as a spiral design
Activated architecture therefore surpasses traditional built form because of its constant negotiation of inputs and designed outcomes; it is an architecture that is charged and renewing. It continuously maintains a connection from the initial design phases to its materialization.
The design input is still accessible through the output via the feedback. Inhabitable space becomes an indeterminate design experience, not a fixed, designed environment; the process remains active. This is not a continuation of the classical idea of a form determined from a fixed system, as evidenced in a feedback loop. Rather, we prefer the schema of a spiral, which Semper and Pask would both understand as a figure of advancement: in each cyclical motion, the trajectory moves perpendicularly, so that at each 'return' there is a transformative shift. For Semper, the binding of architecture is maintained when it moves from rope and wood to iron as the same cyclic node on the spiral; Pask would have this spiral constantly challenged, so that a system response at one time would differ at another, based on the impulses within the system itself. In this context, we are presenting a scenario where the interchange of design as a process and as an advancement in technique and representation is a continually temporal one. The efforts are focused on achieving a format exchange: a reality that occurs in digital format, and a virtuality that occurs as a physical condition. The interesting aspect of this framework is that, contrary to the traditional model, one cannot occur without the other. This interaction is characterized by the fact that our physical virtuality requires processing capabilities; it can only exist in tandem with computational intelligence. Therefore, the separation between mediums as discrete or isolated instances is replaced by a multi-nodal model, one that places physical computing at the core of architectural design.
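The distinction between the loop and the spiral can be stated computationally: a loop applies the same mapping at every return, while a spiral displaces the mapping itself at each cycle. A minimal sketch follows; the particular update rule is an arbitrary assumption chosen for legibility, not a claim about any specific system.

```python
def loop(inputs):
    """Classical closed feedback: the same mapping at every return."""
    return [2 * x for x in inputs]

def spiral(inputs, gain=2.0, shift=0.25):
    """Spiral feedback: each cycle perpendicularly displaces the rule,
    so the 'same' input never meets the same response twice."""
    out = []
    for x in inputs:
        out.append(gain * x)
        gain += shift   # the transformative shift at each 'return'
    return out

print(loop([1, 1, 1, 1]))    # [2, 2, 2, 2] - a fixed system
print(spiral([1, 1, 1, 1]))  # [2.0, 2.25, 2.5, 2.75] - self-renewing
```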
an Activated Architecture prototype by the authors.
1 see Evans, Robin. The Projective Cast: Architecture and its Three Geometries, Cambridge: The MIT Press, 1995. This is expanded in the legibility of pure geometry in the centralized church as described by Wölfflin and Wittkower. The encoding of form to be removed from the historical motivation of earlier pagan structure is resolved within the symbolic meaning of the circle.
2 see Schumacher, Patrik. "Introduction" in Zaha Hadid Architektur, Noever (ed.), Vienna: Hatje Cantz Verlag, 2003, where he discusses the multi-valent possibilities of complex relationships opened by digital simulation as a provocation against previously inert or singular geometry.
3 see Kabat, Jennifer. "The Informalist", Wired, issue 9.04, 2001. Recent publications have pointed to a neutrality or evacuation of the centralized architect/design schema in favor of a peripheral, collaborative network of agents. One of the more compelling arguments for this is the work of Cecil Balmond, interviewed here.
4 see DeBiswas, Kaustuv and Kim, Simon. "A Multi-Agent Framework to Morph between Non-Commensurable Perspectives: Game City" in Cybernetics and Systems 2008, Vienna: Austrian Society for Cybernetic Studies, Trappl (ed.), 2008. We implemented a virtual environment where numerous agents with simple rules, in biased preference orders, could continually change into evolving urban conditions.
5 Eisenman, Peter. "Beyond the Index", MIT Lecture Series, March 1, 2007. The discussion of late style was in reference to the work of Edward Said, On Late Style: Music and Literature Against the Grain. Said is interested in the conflict and contradiction in the work of artists (specifically of Beethoven as reviewed by Adorno) in their final years, as well as in seeds of interruption within epochal transitions.
6 Schumacher, Patrik. Zaha Hadid Architektur.
7 Herrmann, Wolfgang. Gottfried Semper, In Search of Architecture, Cambridge: MIT Press, 1984, shows his self-criticality in response to Riegl and Schnaase. For Pask this is asserted in his discussions of beauty here: http://www.cyberneticians.com/video/pask-on-conv-and-beauty2.mov In Critique of Pure Reason, Kant wrote of the sensus communis in every person that is an internal knowledge of the external world, but one that is shaped by the pervading culture of that time. It is in these environments that what is perceived and understood is systematized and made understandable. Small increments in knowledge and industry can then be reordered by a heroic or genius act to dramatically shift that environment to another level of purpose.
8 Hvattum, Mari. "Gottfried Semper: Between Poetics and Practical Aesthetics", Zeitschrift für Kunstgeschichte vol. 64, no. 4, 2001, pp. 537-564. "…rhythmic patterns were slowly translated into the domain of art and craft, into the rhythmic movement of weaving and the symbolic gathering performed by the knot…. the primordial motifs of art – the knot, the bead, the wreath - are indeed mimetic representations of ritual acts…"
9 Semper, Gottfried. Style in the Technical and Tectonic Arts or, Practical Aesthetics, trans. Mallgrave, Los Angeles: Getty Trust Publications, 2004.
10 see Pask, Gordon. Interactions of Actors (IA), Theory and Some Applications, unpublished, 1992.
11 Pask, Gordon. Interactions of Actors (IA), Theory and Some Applications, unpublished, 1992.
12 see Pask, Gordon. "Cybernetics", http://www.cybsoc.org/gcyb.htm
13 see Rykwert, Joseph. "Semper and the Conception of Style", in Gottfried Semper und die Mitte des 19. Jahrhunderts, Vogt, Reble, Fröhlich (eds.), Basel: Birkhäuser Verlag, 1974, pp. 67-81.
14 see Semper, Gottfried. "The Attributes of Formal Beauty" in Gottfried Semper, In Search of Architecture, Herrmann, Cambridge: MIT Press, 1984. "… so that the observer experiences its total content by moving around it… which together presents the mind an overall image. It is totally wrong to imagine an art object is perceived in a single unit of time, an assumption that has harmed mainly architecture." p. 221. This is contrasted to the work of his contemporary, J.N.L. Durand, where architecture is not a formal device of order and composition, but an active process of craft informed by changes in technology and material.
15 see Price, Cedric. "Gordon Pask", Systems Research, Volume 10, Issue 3, London: John Wiley & Sons, Ltd, 1993, pp. 165-166.
16 Pask, Gordon. "Colloquy of Mobiles", ICA exhibition, 1968. http://www.medienkunstnetz.de/works/colloquy-of-mobiles/
17 see Cariani, Peter. "To evolve an ear. Epistemological Implications of Gordon Pask's Electrochemical Devices", Systems Research, Volume 10, Issue 3, London: John Wiley & Sons, Ltd, 1993, pp. 19-33. Pask had instituted an idea of chemical computation whereby the system could adaptively change its circuitry, in a process of bio-technology, to grow metal filaments. The devices and associated control equipment enable the programming of many inputs and outputs. This is done with charged electrodes being introduced to a solution of copper sulphate; the resulting growth of new connectivity can be used not only to construct new sensors but to reorganize the system's own logic of circuitry. The implication is that engaging the system would never be the same way twice.
18 Pask, Gordon. "Foreword", Evolutionary Architecture, John Frazer, London: Architectural Association, 1995, pp. 6-7.
19 An effort is made here to dismantle the idea of electronic componentry as simply applied over the architecture as a layer, enabling it as another medium of control. While this is still valid as MEMS (microelectromechanical systems) become more prevalent, it retains a classical model of architecture as form-giver and electronics as a craft.
Critical Space
Moderators: Dido Tsigaridi and Jan Jungclauss
Papers:
Edgardo Perez, The Fear Of The Digital: From The Elusion Of Typology To Typologics
Francisca M. Rojas, Kristian Kloeckl, and Carlo Ratti, Dynamic City: Investigations into the sensing, analysis and application of real-time, location-based data
Ole B. Jensen, Networked mobilities and new sites of mediated interaction
Gregory More, The Matter of Design in Videogames
Joseph B. Juhász and Robert H. Flanagan, Do Narratives Matter? Are Narratives Matter?
Jock Herron, Shaping the Global City: The Digital Culture of Markets, Norbert Wiener and the Musings of Archigram
THE FEAR OF THE DIGITAL
FROM THE ELUSION OF TYPOLOGY TO TYPOLOGICS
EDGARDO PEREZ MALDONADO
IN_TROPE Architecture-Design
Intrope01@gmail.com
Professor of Architecture, School of Architecture, University of Puerto Rico, Rio Piedras, San Juan, PR
Abstract
It might seem that architecture has been forced to choose, once again, between two worlds of existence. One of them might be the construction of the tangible; the other, a "formal fantasy" that will never reach a legitimate status among the "tectonic" or the "structural". This vague spectrum has confirmed the fear of losing typology as a proof, of losing a foremost validation for architecture. But one could see the virtual as a possibility to generate a structure of discourses and interactive tactics to reformulate the typological. This means that the virtual could transcend the so-called "graphic" stigma and actually produce the discourses and spatial strategies to radicalize typology and move towards a radicalization of content.
The fear of the digital
The digital design process has revealed a consistent distance from the typological referent. On the one hand, this "elusion" of the referent might be linked to the tendencies of many current typologies to be defined by forces outside the architectural. On the other hand, the digital has opened a space for the construction of discourses (constructs) which describe and inscribe dissident architectures that disrupt themselves from typological continuance. On more than a few occasions, this evasion is seen as a "stigma" that challenges prevalent discourses relating to the ways an architectural object is conceived and resolved. The "fear" of these digital selves of architecture is exacerbated in many instances by the fact that the discourses involving the digital reveal morphological determinisms that do not involve an experiential content. But as suspicious as these determinisms might seem, there has been a plausible radicalization of the image of architecture and its technology. This intersection reveals the potentials that reside in this elusion from the typological. If the digital process and devices
have manifested the capacity to articulate dissident architectural images and discourses, then there is a potential to propose alternate processes, rituals and perceptions in space. One of the most intriguing consequences of digital design in the last twenty years has perhaps been the emergence of images of architecture that do not necessarily manifest a lineage that could be easily traced within spatial or constructive traditions [Figure 1]. The radicalization of the image of form might well be one of the most compelling effects of the digital design process in architecture. In this (r)evolution of the architectural image, new discourses (imagery and language) have come to describe the new breed of buildings that, in many cases, exist only in the virtual, and those that have radicalized construction processes and technologies in order to reach a material existence. In both cases, architects have engaged in the task of "naming" these new breeds of images with discourses that are not inscribed in the traditional structures of our discipline. An example of this might be described by the biological metaphors associated with digital design, where the design process is described as an autonomous biological process [Figure 2], or the inclusion of technologies from other disciplines, like motion capture, in order to define/materialize these possible architectures. In a way, the digital has become an extension of the human conceptual capacity where both the images and technologies of architecture are being reformulated. But the "genesis and exegesis" for such architectures has been articulated by and within the digital realm itself; such a notion describes the state of dissidence of the digital with respect to a typological framework. Such a condition might suggest a transgression from a discourse of typological continuity.
The elusion of typology
Typological recurrence is understood as a pre-defined codification of the image and its spatial/constructive configurations. In many aspects, the typological today might be considered a contingent discourse as well. These "typological codes" achieve their signification by a convention of their discourse. It is imperative as well to recognize that recurrence in the typological established the continuity of the meanings of the architectural object [Figure 3]. The very same meanings that in our contemporary condition might seem more fragmented than continuous1. A reason for such might lie in the fact that contemporary typologies, in many cases, do not reveal an experiential content. Nonetheless, today the recurrence of such typologies is perpetuated by consumption or real estate. The banalization of programs and construction has eroded much of the experiential possibility in typological content.
Ironically, traditional recurrence of types was purged in many ways of the experiential at the end of the 18th century. "The fascination with encyclopedism, taxonomies, comparative studies, different kinds of measured observations and like" brought forth the typological science2. "If we study the pages of the Recueil," writes Vesely, confronting Durand's typological science, "what unfolds before us is not a history of architecture but a collection of systematically selected examples organized into a comparative survey similar to comparative studies and taxonomies of contemporary science"3. In many ways typological recurrence was turned into a "mécanisme de la composition". This is the very condition of "recurrence" that the digital escapes; this condition implies an elusion from the referent. This "evasion" of the referent has been revealed in other cultural and social dimensions as well. In such emancipation is where the "fear of the other", that is, the digital self of architecture, might reside. There is a resistance to the supposed loss of typological continuance, even though, as we have seen, such continuance is not as linear as one might think. Many of our current spatial or constructive configurations resulting from digital experimentations have not yet come to be conventionalized. They have not revealed specific social contracts to structure their meanings. Their aesthetic quality defies conventional connections with notions of decorum or proportion, quintessential values derived from typological recurrence. This is perhaps where the "fear" of such elusion resides, which in fact refers to the resistance to acknowledge digital architecture's coexistence with other traditions in the discipline. This brings forth the notion that typology might not be the only and foremost validation of the architectural object. This condition reiterates the idea of simultaneity of paradigms. In this context, the digital process has opened the conceptual space where paradigms are created. A similar statement might be said of Modern Architecture, the difference being that the radicalization of social and material reality occurred before a radicalization of the image of architecture had taken place. On the other hand, Modern Architecture looked upon Classical Antiquity to establish universal values as a prevalent discourse to validate its aim; hence a link with tradition might not be explicit but could also be traced.
The radicalization of the image and form revealed in the digital design process originated in the representation and visualization of architecture, a field that has become the forming ground of the images of its technology as well [Figure 4]. That is perhaps where the potential of digital architecture might reside: not only in the radicalization of the image of architecture but also in the radicalization of its content.
If the typological is no longer our referent, a critical intersection is revealed: if typological recurrence is eluded, then the reformulation of current typological codes might be triggered. In this context, the digital might be understood not only as an extension of the architect's conceptual framework but also as a strategic allocation of technology within space to enhance our perceptions and experiences of space. This might suggest a shift from typology to typologics.
From typology to typologics
The notion of typologics departs from the experiential nature of digital technology. It recognizes the potentials of the enhancement of the sensorial within space as a mechanism to articulate potential spatial discourses in proposed or existing contexts. But it is also born from a preoccupation with the limitations of the embodiment of the digital itself. At the moment it might seem that we are mostly confined to "interiorized" results or to limited scales. But do these results have the capacity to be expanded to attend to latent possibilities within larger environments, such as the urban? Typologics might suggest an economy of plural perceptions that focuses on the intersection of space and technology in order to re-signify a pre-existing typological content. The implication in this premise is that the recurrence of the typological code might be reformulated in content along with its image. These reformulations seek experiences triggered by the strategic allocation of technology, not only as means of materialization, but also as experiential devices within the given environment. In this context, this merging might well be informed by the inherent or proposed rituals of a given environment. Both architecture and the digital are superimposed into an experiential assembly that might generate more than a visual perception: a spatial experience that proposes an alternate interpretation of a specific typological referent. The value of the architectural element might be transcribed at the scale of pre-existing spatial contracts. If the walls of a classroom are individually wired with impact sensors, a student can throw a ball at the wall and create a particular musical note [Figure 5]. Then other walls could be orchestrated into a musical device where the experience of music could be integrated with physical dynamics. The discourse of "classroom" is then dismantled and reconfigured into a new interpretation [Figure 6]. If the unit of the pre-existing typology, in this case a "classroom", is re-signified, the experiential content of a "school" could be reconfigured in its image, its form and content.
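The classroom scenario can be made concrete with a short sketch. The following Python fragment is purely illustrative and not part of the Music Space Project: the panel grid, the pentatonic note mapping, and the force-to-velocity scaling are all assumptions introduced here.

```python
# Hypothetical sketch: map impact positions on a sensorized wall to musical notes.
# The 4x4 panel grid and the pentatonic note mapping are illustrative assumptions.

PENTATONIC = [60, 62, 64, 67, 69]  # MIDI note numbers (C major pentatonic)

def note_for_impact(panel_row, panel_col, grid_cols=4):
    """Return a MIDI note for an impact on a given wall panel."""
    panel_index = panel_row * grid_cols + panel_col
    return PENTATONIC[panel_index % len(PENTATONIC)]

def velocity_for_force(force_newtons, max_force=50.0):
    """Scale impact force to MIDI velocity (1-127): harder throw, louder note."""
    return max(1, min(127, int(127 * force_newtons / max_force)))

if __name__ == "__main__":
    # A thrown ball hits panel (2, 3) with roughly 20 N of force.
    print(note_for_impact(2, 3), velocity_for_force(20.0))
```

The point of the sketch is only that the mapping from spatial contract (a wall panel) to experiential content (a note) is a few lines of logic; the architectural work lies in deciding where such mappings belong.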
The idea of "meta-tectonics" is expanded from this spatio-technological intersection. Digital explorations have questioned the very idea of "statics" in tectonic assemblies. As Kurt W. Forster describes, "numerous recent projects are no longer based on the millenary dialectic of supports and weights"4. Instead the idea of "structure" has mutated into the realm of actuating architectural assemblies. The notions of "skin" are perhaps strengthened in this stance. The "structurality" of the components becomes radicalized into dynamic entities that support but also respond in an "organic" fashion [Figure 7]. Levels of permeability might be achieved by the "interactivity" of the architectural element and the subject. The presence of the subject might very well be registered by the response of the element. The suggestion is that the building gains its life with the occupant. Perhaps many other associations are derived. The image and content of the structure is transfigured into an alternate discourse, or expands the pre-existing discourse for that matter [Figure 8]. The notion of the "digitized ritual" springs to mind as well. The re-articulation of non-experiential contracts by means of the digital device gains relevance. As we have seen already, many cultural rituals have been "virtualized". Consumption, education and sex are perhaps some of the most notable examples of digital rituals today. But in a way these provide for an alternate spatiality that is parallel to physical public life. The rituals have been individualized by an alternate "public engagement". The interface remains their only spatial constraint. Digital rituals escape the dependency on certain spatial configurations. If this is the case, then non-experiential transactions could be re-articulated with multivalent experiences [Figure 9]. The case could be that a particular typology is no longer necessary in order for the subject to partake in the ritual. Instead an alternate spatial configuration could be proposed. It is possible that by means of the hybridization of form and content, virtual interactions might displace "consumption" as an omni-experience and open the way for multidimensional public interactions and the "spatialization" of infrastructure [Figure 10]. In a way, the question of typologics is a question of the further applicability of the digital device in architecture. It is the preoccupation with the tangible effects that we as architects might be able to translate into larger contexts such as the urban or the suburban. The direction is to transgress the limitations of "interiorization" or the limitation of scale and reach the integration of the digital as an option to the "literalness" of many architectural experiences. It also raises serious questions about the agency of the subject vis-à-vis the autonomy of the digitally augmented experience of the element or space. The extension of the digital might remain a parallel universe in which divergent individual or public dimensions will coexist but never be experienced simultaneously. Perhaps we do not need them to.
As we continue to expand on the possibilities of the digital in our discipline we must test its latent capacities to affect our public experience [Figure 11]. The effects of recent explorations might reverberate with palpable effect at an urban scale. This is particularly critical from an ecological standpoint. The sophistication of digital design has presented us with images of architecture that defy convention. The advancements in digital fabrication have also presented significant shifts in constructive technology. The pace at which the idea becomes form has been accelerated. The numerical ornament has reformulated the surface once more. This means that the limitation that once confined digital architecture to the realm of the "graphic" is no more. But within this emancipation there is also the critical recognition of the digital as a conceptual extension of the human mind. This brings the idea of a new craftsmanship. What role does craftsmanship play in the discourse of the digital? The digital is not exempt from banal applications (or duplications for that matter) and it will not presuppose the auto-sufficiency of architecture. The critical recognition here is that craftsmanship will remain in the architect's conceptual capacity. This is perhaps better explained by Heidegger: "Because the essence of technology is nothing technological, essential reflection on technology and decisive coming to terms with it must happen in a realm that is, on the one hand, akin to the essence of technology and, on the other, fundamentally different from it. Such a realm is art"5.
Endnotes
1 see Vesely, D., Architecture in the Age of Divided Representation: The Question of Creativity in the Shadow of Production, Cambridge: MIT Press, 2004.
2 see Laplace, A Philosophical Essay on Probabilities, p. 4; S. Villari, J.N.L. Durand (1760-1834): Art and Science of Architecture, New York: Rizzoli, 1990.
3 see Vesely, D., Architecture in the Age of Divided Representation: The Question of Creativity in the Shadow of Production, Cambridge: MIT Press, 2004, p. 242.
4 see Forster, K. W. (essay) "Architecture, Its Shadow and Its Reflections", Metamorph, Focus. 9th International Architecture Biennale, Vol. 1, 2004, p. 2.
5 see Heidegger, M., "Vorträge und Aufsätze", in: D. Vesely, Architecture in the Age of Divided Representation, Cambridge, Massachusetts: MIT Press, pp. 282-315.
Figure 1: Surface Study. IN_TROPE. 2005.
Figure 2: Computer Rendering. Marcos Novak. 2000
Figure 3: Typological Rendition.
Figure 4: Torre de Valencia Antena. Madrid. IN_TROPE. 2007.
Figure 5: Music Space Project. IN_TROPE. 2006.
Figure 6: Music Space Project. IN_TROPE. 2006.
Figure 7: Mediateque. IN_TROPE. 2007.
Figure 8: Mediateque. IN_TROPE. 2006.
Figure 9: Wal*Mart Project. IN_TROPE. 2006.
Figure 10: Wal*Mart Project. IN_TROPE. 2006
Figure 11: Wal*Mart Astrodome Project. IN_TROPE. 2006
Dynamic City
Investigations into the sensing, analysis and application of real-time, location-based data
Francisca M. Rojas, MIT senseable city lab, Cambridge, MA, USA, fmr@mit.edu
Kristian Kloeckl, MIT senseable city lab, Cambridge, MA, USA, kloeckl@mit.edu
Carlo Ratti, MIT senseable city lab, Cambridge, MA, USA, ratti@mit.edu
Abstract
Over the past decade, our cities have been blanketed with digital bits. Unlike the old electromagnetic, unidirectional waves, these bits are bidirectional – they communicate – and are thus tied to human activities. Our hypothesis is that by analyzing these bits we can gain an augmented, fine-grained understanding of how the city functions - socially, economically and yes, even psychologically. Some preliminary results from different projects recently carried out at MIT senseable city lab are discussed below.
1. Introduction
1.1 Digital traces
Over the past decade, our cities have been blanketed with digital bits. Unlike the old electromagnetic, unidirectional waves, these bits are bidirectional – they communicate – and are thus tied to human activities. Almost everything you do produces some form of digital data: using the Internet from the campus café locates you through the WiFi antenna, doing a Google search or ordering a book from Amazon informs countless algorithms of your interests, talking on your cell phone registers at the nearest telecom tower or switch, using your CharlieCard to take the T from Harvard Square to other stations in Boston keeps a record of your origin and destination patterns, and riding in a GPS-enabled taxicab can produce a precise map of your travel route. In a way, these bits are incidental: they are produced and captured in the everyday acts of urban life, and as such, they reveal the multiple and complex facets of the city and its inhabitants in detail and in real-time. As Bruno Latour laments, "As soon as I purchase on the web, I erase the difference between the social, the economic and the psychological, just because of the range of traces I leave behind."1 This paper discusses a series of projects from MIT's senseable city lab that demonstrate how today's digital bits of human activity can be captured and analyzed in their intensity, frequency, rhythm, and reach. All four projects fall under what Sheller and Urry2 have termed the "new mobilities paradigm": an approach to understanding social actions by exploring the relationships between the material and
the mobile. The first project is iSPOTS, which documents the occupation of spaces on the wireless MIT campus via WiFi logs. The second, Real Time Rome, expands this idea by overlaying real-time cell phone, taxi and bus data on maps of Rome to capture urban dynamics as they occur. As an extension of this, the WikiCity project considers how a city can perform as a real-time, open source system where its inhabitants produce and share digital information about the functioning and experience of the city. And, finally, the New York Talk Exchange expands the scale of our understanding of urban life by visualizing the real-time telecom flows between New York City and the rest of the world. Together, these projects exemplify the potential of analyzing urban life from the district to the urban to the global scale using incidental, real-time, and mobile digital traces. For urban planners and designers, these accumulations of digital traces are valuable sources of data in capturing the pulse of the city in an astonishing degree of temporal and spatial detail.3 Yet this condition of the hybrid city - which operates simultaneously in the digital and physical realms - also poses difficult questions about privacy, scale, and design, among many others. These questions must be addressed as we move toward achieving an augmented, fine-grained understanding of how the city functions - socially, economically and yes, even psychologically.
1.2 Data from a dynamic city: some considerations
Issues to consider in this emerging type of urban analysis are:
Who controls the digital traces? Mobility data is often captured by private or semipublic interests who provide the services that intersect with digital functions, such as telecommunications companies, transit authorities, and large institutions. Access to this data for analysis is a challenge, though arguably a worthy endeavor if it results in more accurate and reliable decision making in urban management. Even so, the prospect of a "WikiCity" also exists, where individuals themselves can control and distribute the information generated by their urban experiences.4
Which scale of digital data is appropriate for urban analysis? While there is much discussion about protecting individual privacy in the age of digital surveillance, is there such a thing as collective privacy? If we relate the location of cell phone calls (at an aggregated level) with GPS to capture the patterns of mobility at a neighborhood scale – for understanding the dynamic relationship between pedestrians and transit service, for example – then should we be concerned about knowing the real-time locations of groups of people in a particular area of the city?
Once we have captured the incidental flows of digital traces produced by the everyday functioning of the city, the question becomes: how best to visualize and represent this information? Visualizing digital data has many purposes: it enables the information to be grasped by a broader audience, since the data in raw numeric format is both overwhelming and disorienting; it helps identify patterns of activity for further analysis; and this pattern analysis then feeds back into the structuring and processing of the data itself and drives subsequent visualizations. In dealing with large datasets we must set priorities, emphasizing certain aspects while omitting others, and must think systematically about how the data is instrumental to answering the questions we are interested in exploring. Combining a design vision with an understanding of the theory to be tested allows us to translate very complex datasets into engaging images of real-time urban dynamics. These accessible images of the city are possible due to the computational tools that help us to handle and
interpret the data. In turn, not only do urban planning and design professionals and academics benefit from an enhanced understanding of the city, but so do urban residents, who can capture this representational information via their own digital devices and use it for their own on-the-fly decision-making. That said, we must also consider the ways in which we can establish perhaps more meaningful connections between digital information and physical space and objects. Once we have captured the abstract dynamics of an urban context through digital information, how can this be reintegrated with the physical space experienced by its inhabitants? The increasing diffusion of mobile devices into the urban fabric has opened up new ways of accessing information individually while on the move and in urban environments. But how can this information be accessed in a collective manner? What interfaces are needed to complement and augment the experience of the public realm of the city with digital information? In a recent discussion, Manuel Castells emphasized the increasing disjuncture between virtual and physical public spaces due to the individualized experience of the former (what he calls the "my hypertext" experience).5 This concern leads us to wonder whether the built environment itself should be embedded with sensing and responsive technologies (for a presumably collective and public experience) or if individuals themselves should carry the sensors and devices to manipulate and access the city's digital traces. Our belief is that it should be a combination of the two, where people use their digital devices to interact with and manipulate the digital interfaces of the city's public spaces to produce collective expressions of urban culture. These questions have all emerged in the process of realizing the four projects discussed below. While the projects, for now, are representations and tools for analyzing the dynamics of our urban environments at multiple scales, our vision at the MIT senseable city lab is that these tools will soon travel with us in our mobile devices and in the public spaces of our cities.
2. Projects
2.1 The visible campus: iSPOTS (2005)
The iSPOTS project monitors and collects data on WiFi usage on the Massachusetts Institute of Technology campus. It uses and analyzes LOG files from the Institute's Internet service provider in order to better understand the daily working and living patterns of the academic community, which, we argue, are changing as a result of the emergence of WiFi itself. The access to a widespread and robust wireless network combined with high rates of laptop ownership has untethered the MIT community from its computer labs, offices and even classrooms, thus producing a more flexible and dynamic use of space. iSPOTS reveals the complex and dispersed movement of individuals in real-time and thus helps to answer questions such as: which physical spaces are preferred for work in the MIT community? How could future physical planning of the campus suit the community's changing needs? Which location-based services would be most helpful for students and academics? Figure 1 shows the classification of buildings on the MIT campus based on the similarity of WiFi usage profiles. Since MIT's wireless network is composed of over 2,800 access points and is one of the largest of its kind, it not only offers a privileged environment for this research, but also serves as a test bed for entire cities.6
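The kind of analysis behind Figure 1 can be sketched in a few lines. The following is a minimal illustration, not the lab's actual method: the toy usage profiles, the cosine-similarity measure, and the grouping threshold are all assumptions.

```python
# Hypothetical sketch of grouping buildings by the similarity of their daily
# WiFi usage profiles. Profile data, similarity measure, and threshold are
# illustrative assumptions, not the iSPOTS implementation.
import math

def normalize(profile):
    """Scale a connection-count profile to unit length."""
    norm = math.sqrt(sum(x * x for x in profile)) or 1.0
    return [x / norm for x in profile]

def cosine(a, b):
    """Cosine similarity of two unit-length profiles."""
    return sum(x * y for x, y in zip(a, b))

def group_buildings(profiles, threshold=0.95):
    """Greedily group buildings whose usage profiles are nearly parallel."""
    groups = []  # each group: (representative profile, [building names])
    for name, profile in profiles.items():
        p = normalize(profile)
        for rep, members in groups:
            if cosine(rep, p) >= threshold:
                members.append(name)
                break
        else:
            groups.append((p, [name]))
    return [members for _, members in groups]

# Toy profiles: WiFi session counts in six four-hour bins over a day.
profiles = {
    "library":   [2, 1, 20, 40, 35, 10],   # daytime-heavy
    "classroom": [1, 1, 18, 38, 30, 5],
    "dormitory": [30, 25, 5, 8, 20, 35],   # evening-heavy
}
print(group_buildings(profiles))  # -> [['library', 'classroom'], ['dormitory']]
```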
Figure 1. iSPOTS: classification of buildings on the MIT campus based on the similarity of WiFi usage profiles.
2.2 The visible city: Real Time Rome (2006)
The visualizations of Real Time Rome synthesize data from various real-time networks. We interpolate the aggregate mobility of people according to their mobile phone usage and visualize it synchronously with the flow of public transit, pedestrians, and vehicular traffic. By overlaying mobility information onto geographic references of Rome we reveal the relationships between fixed and fluid urban elements. These real-time maps help us understand, for example, how neighborhoods are used in the course of a day and how the distribution of buses and taxis correlates with densities of people. By examining the resulting visualizations, users can react to the shifting urban environment.7 Figure 2 helps us to understand how the movement patterns of buses and pedestrians overlap in the Stazione Termini neighborhood of Rome. This is a functional map of the city that asks how well transit service correlates with the patterns of movement of pedestrians. This image captures the changing positions of public buses, indicated by yellow points, and the relative densities of mobile phone users, represented by the red areas. If the tail on a yellow point is long, this means that the bus is moving fast. Areas colored a deeper red have a higher density of pedestrians.
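A minimal sketch of the two overlays just described might look as follows; the grid resolution, the intensity cap, and the tail-length scaling are assumptions for illustration, not the project's actual rendering pipeline.

```python
# Hypothetical sketch of the functional map's two layers: a density grid of
# aggregated phone activity (the red areas) and speed-scaled tails for buses
# (the yellow points). Grid size, cap, and sample data are assumptions.

def density_grid(call_events, cell_size=0.005, cap=50):
    """Bin (lat, lon) call events into cells; return cell -> red intensity 0..1."""
    counts = {}
    for lat, lon in call_events:
        cell = (int(lat / cell_size), int(lon / cell_size))
        counts[cell] = counts.get(cell, 0) + 1
    return {cell: min(n, cap) / cap for cell, n in counts.items()}

def bus_tail_length(speed_kmh, seconds=30):
    """Length (km) of the trail behind a bus marker: faster bus, longer tail."""
    return speed_kmh * seconds / 3600.0

calls = [(41.901, 12.501), (41.901, 12.502), (41.902, 12.501), (41.899, 12.498)]
print(density_grid(calls))
print(bus_tail_length(36.0))  # a 36 km/h bus leaves a 0.3 km tail
```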
Figure 2. Real Time Rome: a functional map
Combining real-time cell phone data with geographic references not only presents functional information for the efficient management of a city, but also has the potential to reveal the pulse and mood of the city at certain moments. Figure 3 measures the density of cell phone calls being made in Rome during the World Cup final match on July 9, 2006 between Italy and France. It is essentially an emotional map of a moment in the city, captured at a scale and immediacy that cannot be approached by more conventional (or analog) methods of survey or observation.
Figure 3. Real Time Rome: an emotional map
2.3 The visible city in the city: WikiCity Rome (2007)
The WikiCity Rome project builds upon Real Time Rome by combining the real-time information being generated by the city itself with a means for people to access the data they themselves produce over the course of their various activities. In this way, the visualization of real-time dynamics becomes more than a reflective analysis; it is a new tool for people to synchronously navigate the ever-changing urban landscape. The occasion for the implementation of this real-time tool was Rome's Notte Bianca, an all-night urban festival comprised of hundreds of events in enclosed and open-air spaces throughout the city and involving two million participants. The WikiCity interface displayed a satellite image of the city of Rome overlaid with the intensity of cellphone communication happening throughout the city at that given time. This conveyed the overall flow of people, along with labels that announced and described events and their sites, the location and speed of public transport buses, and a news feed from onsite journalists (Figure 4). We gathered all of this information as real-time, location-based data that we then fed into a visualization program designed for effective readability and comprehension. A timeline at the bottom indicated the current time while simultaneously disclosing a collection of images taken from the events.8
Figure 4. WikiCity Rome
WikiCity Rome was projected at large scale onto a building in a public square in Rome (Figure 5). While individual interaction with the data was excluded in this way, this type of interface had two interesting benefits. First, a public display results in a very low access barrier for a large part of the audience, since no particular knowledge or even possession of individual electronic devices is required. Second, in the case of an audience member's incomprehension of the visualization, advice can be easily sought from fellow participants. The public display acts as a form of triangulation, encouraging social interaction via consultation of the data in a shared manner. The WikiCity interface is thus a tool for shared decision making and a collective form of blending the virtual and physical public spaces of the city.
Figure 5. A large-scale projection of WikiCity Rome
2.4 The visible world: New York Talk Exchange (2008)
The New York Talk Exchange (NYTE) project is composed of three visualizations and was produced for the MoMA exhibit Design and the Elastic Mind. Each visualization, of which we are including two in this paper, reflects on a different aspect of telecommunications flows between New York and the rest of the world in order to answer the following questions: How does the city of New York connect to other cities around the world? With which cities does New York have the strongest ties? How do these relationships shift with time? And, how does the rest of the world reach into the neighborhoods of New York?9
Globe Encounters (Figure 6) visualizes in real time the volumes of Internet data flowing between New York and cities around the world. It represents the continuous flow of IP data going through the New York backbone via the trajectories of data packets travelling between cities on a 3-D representation of a spinning globe. The trajectories glow more or less according to the amount of data being exchanged between New York and other cities. In this way, this visualization shows New York's global connections to the world, illustrating globalization as it happens. Because the IP data is updated continuously, it is immediately observable how the fountain of data flowing into and out of New York changes over time, and thus where New York's connectivity is oriented during different times of the day.
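The glow scaling described above can be sketched as a simple mapping from data volume to arc opacity. This is an assumed reading, not the exhibited code; the logarithmic scale and the sample volumes are invented for illustration.

```python
# Hypothetical sketch: arc brightness proportional to the volume of IP data
# exchanged with each city. Log scaling and sample volumes are assumptions.
import math

def glow_alpha(bytes_exchanged, max_bytes):
    """Scale a data volume to an arc opacity in 0..1 (log scale for readability)."""
    if bytes_exchanged <= 0:
        return 0.0
    return min(1.0, math.log1p(bytes_exchanged) / math.log1p(max_bytes))

volumes = {"London": 9.2e9, "Toronto": 3.1e9, "Seoul": 8.0e8}
peak = max(volumes.values())
for city, v in volumes.items():
    print(city, round(glow_alpha(v, peak), 2))
```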
Figure 6. NYTE: Globe Encounters
World Within New York zooms inside New York City's five boroughs and explores how global connections vary from neighborhood to neighborhood, illustrating what can be described as a "globalization from the bottom." It shows how different neighborhoods reach out to the rest of the world via the AT&T telephone network. The city is divided into a grid of square pixels where each pixel is colored according to the regions of the world wherein the top connecting cities are located. The widths of the color bars represent the proportion of calls exchanged with each neighborhood. Encoded within each pixel is also a list of the first-ranking world cities that account for 70% of the communications with that particular area of New York. This representation allows for a detailed reading of each particular area of the city and provides a sense of the patterns of connections that exist throughout the city. For example, areas colored green reveal patterns of calls exchanged with South America, primarily clustered around upper Manhattan and the Bronx. Being able to obtain detailed, neighborhood-level data, which goes further than taking the entire city as the unit of analysis, reveals great variation between and within boroughs in the city's connections to the rest of the world via its telecommunications infrastructure. While we all know that New York has strong connections with London, this data reveals that the city is also tied to Toronto, Kingston and Seoul.
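The per-pixel computation described here, ranking cities until 70% of a neighborhood's call volume is covered, can be sketched as follows. The call-volume figures are invented; only the 70% coverage rule comes from the text.

```python
# Hypothetical sketch of the per-pixel ranking: keep the smallest set of
# cities that accounts for 70% of one neighborhood's call volume.
# All data values are invented for illustration.

def top_cities(call_volumes, coverage=0.70):
    """Return the first-ranking cities covering `coverage` of total call volume."""
    total = sum(call_volumes.values())
    ranked = sorted(call_volumes.items(), key=lambda kv: kv[1], reverse=True)
    selected, cumulative = [], 0.0
    for city, volume in ranked:
        selected.append(city)
        cumulative += volume
        if cumulative / total >= coverage:
            break
    return selected

# Toy pixel in upper Manhattan: call minutes exchanged with each city.
pixel = {"Santo Domingo": 420, "San Juan": 180, "London": 90, "Toronto": 60, "Seoul": 50}
print(top_cities(pixel))  # -> ['Santo Domingo', 'San Juan']
```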
Figure 7. NYTE: World Within New York
3. Conclusion
Without a doubt, we now inhabit a hybrid city composed of the digital and the physical, where the tradeoffs between visibility, privacy and understanding must be considered. Using digital technologies to sense the city provides an unprecedented level of detail for the understanding of urban dynamics, as suggested by the senseable city lab projects discussed in this paper. The fine-grained, incidental data that can be gathered from cell phones, public transportation and citizens themselves can open up many new opportunities for experiencing and managing the city, if done in conscientious and democratic ways. Real-time maps generated from digital data now accompany the actual events happening in the physical city, describing it and making it accessible to people who can subsequently base their decisions on this synchronized information. It is an approach that begins to bring together the often disjoined spaces of the virtual and the physical.
1 Latour, Bruno. 2007. Beware, your imagination leaves digital traces. Times Higher Literary Supplement, April 6, 2007.
2 Sheller, Mimi and John Urry. 2006. The new mobilities paradigm. Environment and Planning A 38: 207-226.
3 Crang, Mike and Stephen Graham. 2007. Sentient Cities: Ambient intelligence and the politics of urban space. Information, Communication & Society 10 (6): 789-817.
4 Calabrese, Francesco, Kristian Kloeckl and Carlo Ratti. 2007. WikiCity: real-time urban environments. IEEE Pervasive Computing 6 (3): 52-57.
5 From a presentation to MIT's "Responsive City Initiative" at the Department of Urban Studies and Planning on March 19, 2008.
6 Sevtsuk, Andres and Carlo Ratti. 2005. iSPOTS: How Wireless Technology is Changing Life on the MIT Campus. In CUPUM '05: The Ninth International Conference on Computers in Urban Planning and Urban Management. London, UK.
7 Rojas, Francisca, Francesco Calabrese, Filippo Dal Fiore, Sriram Krishnan, and Carlo Ratti. 2007. Real Time Rome. In Urban_Trans_Formation, The Holcim Forum for Sustainable Construction. Shanghai, China.
8 Calabrese, Francesco, Kristian Kloeckl and Carlo Ratti. 2007. WikiCity: real-time, location-sensitive tools for the city. In CUPUM '07: The Tenth International Conference on Computers in Urban Planning and Urban Management.
9 Rojas, Francisca M., Clelia Caldesi Valeri, Kristian Kloeckl, Carlo Ratti, eds. 2008. NYTE: New York Talk Exchange. Cambridge, MA: SA+P Press.
Networked Mobilities and New Sites of Mediated Interaction
Ole B. Jensen
Department of Architecture and Design, Aalborg University, Denmark
obje@aod.aau.dk
ABSTRACT
This paper takes its point of departure in an understanding of mobility as an important cultural dimension of contemporary life. The movement of objects, signs, and people constitutes material sites of networked relationships. However, as an increasing number of mobility practices make up our everyday life experiences, movement is much more than travel from point A to point B. The mobile experiences of contemporary society are practices that are meaningful and normatively embedded. That is to say, mobility is seen as a cultural phenomenon shaping notions of self and other as well as the relationship to sites and places. Furthermore, an increasing number of such mobile practices are mediated by technologies of tangible and less tangible sorts. The claim in this paper is that by reflecting upon the meaning of mobility in new mediated interaction spaces we come to test and challenge established dichotomies as less fruitful ways of thinking. The paper concludes with a research agenda for unfolding a 'politics of visibility', engaging with the ambivalences of networked mobilities and mediated projects, and critically challenging taken-for-granted interpretations of networked mobilities.
1. Introduction
This paper takes its point of departure in an understanding of mobility as an important cultural dimension of contemporary life. The movement of objects, signs, and people constitutes material sites of networked relationships. However, as an increasing number of mobility practices make up our everyday life experiences, movement is much more than travel from point A to point B. The mobile experiences of contemporary society are practices that are meaningful and normatively embedded. The claim in this paper is that by reflecting upon the meaning of mobility in new mediated interaction spaces we come to test and challenge established dichotomies as less fruitful ways of thinking. In the creation and design of new interaction spaces applying urban technology there is a potential for conceptual critique, but also for discussing mediated sites of interaction as venues for new meaningful social interaction and relationships shaping new ways of thinking about the political. Issues of democracy, multiple publics, and new mobile (electronic and material) agoras point towards a critical re-interpretation of contemporary politics of space and mobility. By studying networks of mediated activities in what we might call 'semiotic rich settings', such critical potential for 'thinking mobilities' as well as 'designing for flow' is enhanced. In this paper we want to discuss two cases. In the first case, we will look into how the interactive pavilion 'NoRA', exhibited at the 2006 Architecture Biennale of Venice, constitutes a performative space of mediated interaction. In the second case, 'MAUTS', we explore the meaning of mobile robots in transit spaces, asking if such new mobile technologies can increase mediated interaction and thus potentially challenge the mono-functional understanding of transit spaces as waiting spaces.
2. Framing networked mobilities and new sites of mediated interaction
In understanding the importance of mediation, global-local interactions, networks, and the distributions of meaning and mediated discourses, new ways of thinking about mobilities are called for.
In particular, a critical awareness of how such technologies shape the foreground/background attention of social agents seems crucial. By studying embedded technologies and 'ambient environments' we increase our knowledge about the overlayering of the material environment with digital technologies. The presence of GPS, mediated surfaces, mobile agents (robots), RFID and other technologies that all relate to contemporary mobility practices adds a different dimension to the notion of movement and constitutes new arenas and tools for identity construction and social interaction (as well as, of course, commercial exploitation and state control). Analysts of the contemporary situation point to the fact that the previous obsession with the 'virtual' and cyberspace, where technology took off, as it were, from the physical environment, has come to be replaced with a beginning awareness of the importance of the location, the placement and the situated.1 Rather than working within separate domains, new media and technologies overlay the physical world of places, houses and infrastructures, thus creating a new situation where the physical placement of social agency and the technology at hand becomes crucial. Much of this engagement with technologies we find in sites of transit and mobility. As we move across cities utilizing numerous networked technologies to navigate, coordinate and facilitate our trajectories, potentials for new experiences might occur in these new sites of mediated interaction. Needless to say, new means of control and power also loom within the potential of the new 'augmented spaces'.2 In exploring 'what matters', this paper argues that it is important to understand how the networked technologies relating to contemporary urban mobility offer potentials for transgressing mobility as 'waste of time' or instrumentalism, at the same time as they are power-laden and oscillate between state control and market consumerism. The challenge for a social science engaging with design is to analyse and discuss networked urban mobility as 'more than A to B'.3 Travel can be a positive experience and we need not consider it pure cost, to paraphrase Kevin Lynch.4 Mobility is a cultural phenomenon too often just seen through the eyes of planners trying to 'fix' congestion, accidents and so-called 'environmental externalities', without realising that mobility is culture. Simultaneously, such a discussion should try to point at the third space for meaningful social interaction mediated by networked technologies that goes beyond state control and market commercialism. Furthermore, the critical dimension to this discussion is also to address the issue of 'cui bono?' As mobility is a differentiated social phenomenon, new networked sites of interaction potentially favour some groups whilst disfavouring others. What really matters is how to empower people by exploring the potentials of the new mediated technologies. But being critical also means to problematise the taken-for-granted notion that infrastructures always host instrumental practices and that they are generic 'non-places'.5 Thinking critically about the meaning of mobility in new networked sites of interaction thus has to do with uncovering power issues as well as with stretching the mind towards wider and more inclusive ways of comprehending everyday life mobility.6
Here we shall argue for a 'politics of visibility' in the sense that new experiments and explorations of augmented spaces and mediated networks become crucial if we are to discuss the pros and cons of these often 'invisible' technologies.7 In terms of not only thinking mobilities (analysis) but also designing for flows (intervention), it seems pertinent to explore the opportunities for transit spaces to become more than venues for instrumental mobility practices. The issue of how to develop and design 'public domains' in these spaces, with the help of multiple layered technologies, is at the forefront here. The temporality and often 'messy' character of transit spaces need to be seen as sites of interaction between multiple publics and social groups – the very definition of public domain.8 Today's spaces of mobility are 'rooms' in which we live much of our life. Beyond that, the meaning of mobility is more than circulation, as it becomes a culturally significant practice. Therefore sites of mobility, and the infrastructures facilitating them, should become sites of cultural production, enhanced experience and democratic pluralism.9 Meeting points, exchanges and flows of communication may be commercial and less oriented towards building public spheres (like the commercial billboards alongside the urban freeway). However, this does not rule out a potential for re-thinking the relation of infrastructures to notions of the public realm. The moving urbanite engages with multiple mobile and electronic agoras during travel. We are linked-in-motion and thus not just passively being shuffled across town. Being-on-the-move is a contemporary everyday life condition in the city and should as such be re-interpreted. Therefore we must comprehend places as sites of interaction and media flows that only become 'places' in so far as flows of people, ideas, symbols, goods and material either positively flow 'into' these nodes in the network, or conversely, for all sorts of reasons, do NOT flow into the nodes. The new mediated spaces unfold between many different normative ways of engaging with the social production of mobility and interaction. Assessing whether they are socially inclusive or exclusive, environmentally sound or unsound, creative or mindlessly reproducing established ways of thinking, liberating to yet unseen communities of practice or just cementing the established lines of power and social order, is an open agenda to be explored. However, this short discussion hopefully suggests that we should aim at establishing an analytical frame capable of addressing issues of differential mobility and power, the meaning and potential of performative urban environments, and the relationship between interaction and public domains, in order to relate the analytical issue of 'thinking mobilities' to the interventionist ambition of 'designing for flows'. In the next section we shall look into two short cases / projects that may open up some of these discussions.
3. Learning from NoRA
In the first case, we look into how the interactive pavilion 'NoRA' (Northern Research Application), built by Architecture and Design students, exhibited at the 2006 Architecture Biennale of Venice and since then in Skagen and Aalborg, Denmark, constitutes a performative space of mediated interaction (see Figure 1 and the NoRA Web site http://www.aod.aau.dk/staff/bsth/nora/ for more information on the project). At the Biennale in Venice an area of about 35 m2 was occupied mainly as an exhibition space, with the integrated technologies activated as a local generator and attractor. NoRA addresses issues of global connection through five online cameras that are always able to track the local site, accessed through a webpage. The architecture of NoRA becomes the eye of the local society as it mediates the local and the global by switching fixity and flows.
Notes
1 Crang, M. & S. Graham "Sentient Cities: Ambient Intelligence and the politics of urban space", Information, Communication & Society, vol. 10, no. 6, 2007, pp. 789-817; Manovich, L. "The poetics of augmented space", Visual Communication, vol. 5(2), 2006, pp. 219-240; McCullough, M. Digital Ground: Architecture, Pervasive Computing, and Environmental Knowing, Cambridge, Mass.: MIT Press, 2004.
2 See Manovich, L. "The poetics of augmented space", Visual Communication, vol. 5(2), 2006, pp. 219-240.
3 Urry, J. Mobilities, Cambridge: Polity Press, 2007.
4 Lynch, K. Good City Form, Cambridge, Mass.: MIT Press, 1981.
5 Augé, M. Non-places: Introduction to an anthropology of supermodernity, London: Verso, 1995.
6 Jensen, O. B. and T. Richardson Making European Space: Mobility, Power and Territorial Identity, London: Routledge, 2004.
7 Crang, M. & S. Graham "Sentient Cities: Ambient Intelligence and the politics of urban space", Information, Communication & Society, vol. 10, no. 6, 2007, pp. 789-817.
8 Hajer, M. & A. Reijndorp In Search of New Public Domain, Rotterdam: NAi Publishers, 2001.
9 Calabrese, L. M. "Fine Tuning: Notes from the Project", in Mobility: A Room with a View, Houben, F. & L. M. Calabrese (eds.), Rotterdam: NAi Publishers, 2003, pp. 343-362.
Figure 1: NoRA and the 'satellites' transmitting flows to the pavilion (Photo: Bo Stjerne Thomsen)
The technologies of NoRA track the movement of people in the surrounding environment through infrared cameras and filter the local movements of people into sound and light. The changes in movements around the building are furthermore attached to three satellite units at the corners of the building, each initiating soundscapes from the local surroundings. The light colours on the building are determined by the surrounding movements and are coherent with the changing sound pattern. Thus NoRA is a reactive space, always acting towards a changing context, both according to the changing light and sound from movements and according to the main soundtrack slowly adapted from the location (a minimal sketch of this sensing-to-output loop appears at the end of this section). Through the sensor technology and the interaction with the building, visitors enter a feedback loop with the urban setting, establishing a temporary urban environment generated from the flow of local actors and maintained through live recordings. NoRA was a window into exploring different dimensions of networked mobilities and mediated interaction10 that lead to the following issues for critical discussion and future research:
- The making of prototypes for new interactive urban artefacts and sites of interaction facilitating new public domains
- Exploring the potentials in building new mobile, plural and open platforms for social interaction mediated by technological artefacts and urban architecture
- Exploring the potential for transgressing the commercialized media scapes of urban architecture that we know in the guise of urban ads and electronic facades communicating the gospel of consumption
10 Jensen, O. B. & B. S. Thomsen "Performative Urban Environments – Increasing Media Connectivity", paper for the conference 'Media City – Media and Urban Space', Bauhaus Universität, Weimar, Germany, November 10-12, 2006.
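The sensing-to-output loop described above can be sketched schematically. This is a speculative reconstruction, not NoRA's actual control code: the HSV colour mapping and the satellite trigger threshold are assumptions of the sketch.

```python
# Hypothetical sketch of NoRA's feedback loop: motion intensity from infrared
# cameras drives the building's light colour and triggers soundscapes at the
# corner satellites. The colour mapping and threshold are assumed, not NoRA's.
import colorsys

def light_colour(motion_intensity):
    """Map overall motion intensity (0..1) to RGB: calm blue to agitated red."""
    hue = 0.66 * (1.0 - motion_intensity)  # 0.66 = blue, 0.0 = red
    return colorsys.hsv_to_rgb(hue, 1.0, 1.0)

def active_satellites(corner_intensities, threshold=0.4):
    """Indices of corner satellite units whose local motion exceeds the threshold."""
    return [i for i, v in enumerate(corner_intensities) if v > threshold]

# One frame: overall motion 0.8, plus per-corner readings from three satellites.
print(light_colour(0.8))
print(active_satellites([0.2, 0.7, 0.5]))  # -> [1, 2]
```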
4. Mobile Agents in Urban Transit Spaces (MAUTS)
In the second case, MAUTS (Mobile Agents in Urban Transit Spaces), we explore the meaning of mobile robots in transit spaces, asking if such new mobile technologies can increase mediated interaction and thus potentially challenge the mono-functional understanding of transit spaces as mere waiting spaces. The MAUTS project has, as opposed to NoRA, not been fully implemented, and in this paper we shall only be able to reflect upon a pilot test made in December 2007 in the transit space of Kennedy Arkaden, Aalborg, Denmark. The project takes its point of departure in an awareness that increasing global transportation activity and congestion have increased the time we spend in transit. Whether it is waiting in line for security checks in airports or delays due to train service interruption, most perceive transit time as unproductive, boring, but unfortunately unavoidable in contemporary urban environments. The project is set to explore whether there is a potential for adding both economic value and experiences to transit time by thinking differently about transit. Adding mobility and manipulation capacities in terms of mobile agents (robots) to urban transit spaces and allowing them to interact with humans will create new and interesting spaces that are productive, educational, safe and potentially enjoyable. It is explored whether urban transit spaces endowed with mobile agents may enhance traditional service functions such as guidance and provide information about the functional properties of the environment. Value may also be added by providing an experience that, for example, makes waiting in the line for a security check enlightening. Mobile agents may provide the waiting person with educational information about destinations to be reached, or even interact with the waiting person in a playful and unforeseeable manner. While doing so, they may advertise services and products, or track, monitor and survey persons, thereby providing mobile security information for e.g. airport spaces. The project develops methods and technologies for the construction of cognitive mobile agents for transit spaces, able to evolve and grow their capacities in close interaction with humans in an open-ended fashion. A pilot test was made in December 2007 in the transit space of Kennedy Arkaden, Aalborg, Denmark. In this experiment we let a mobile robot dressed in a Santa Claus costume (nicknamed 'Santabot' by the research team, the experiment having taken place in a public shopping arcade in December) track and interact with people (see Figure 2). Tracking was the main issue here, as there are a number of difficulties in getting the robot to follow and approach people in a non-intimidating fashion (a minimal sketch of such an approach behaviour follows the list below). Speaking of 'interaction' hereafter is perhaps too much, as Santabot is the 'dumb' first-generation mobile agent in a transit space (only capable of tracking and movement). Thus one might surmise that a second-generation MAUTS could be overlaid with one-way information technologies (e.g. traffic information). A third-generation MAUTS could add interactive technologies (e.g. games and interactive communication) and thus become the 'digital co-passenger' of the future.
Figure 2: Santabot in interaction, Kennedy Arkaden (image from pilot study video)
From the initial stage of this project we find the following issues of relevance to a critical exploration into networked mobilities and new sites of mediated interaction:
- Explore the meaning of transit spaces and contemporary urban mobility by using mobile agents
- Challenge the existing generic and uninspiring transit spaces, but also address the issue of power (e.g. surveillance potentials related to mobile agents)
- Explore the notion of 'mobility as culture' and that places orchestrating mobility therefore should be understood in the light of their potential for generating meaningful experiences, social interaction and public domains
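The approach behaviour that made tracking difficult can be sketched as a simple proportional controller. This is an illustrative reconstruction, not the pilot's actual software; the comfort distance, speed cap, and gain are invented constants.

```python
# Hypothetical sketch of a non-intimidating approach behaviour: move toward a
# detected person, but stop at a comfortable distance and cap the speed so the
# robot never closes in abruptly. All constants are illustrative assumptions.

COMFORT_DISTANCE = 1.5  # metres the robot keeps from a person
MAX_SPEED = 0.4         # metres/second, deliberately slow
GAIN = 0.5              # proportional gain on the distance error

def approach_speed(distance_to_person):
    """Proportional controller: slow down as the comfort distance is reached."""
    error = distance_to_person - COMFORT_DISTANCE
    if error <= 0:
        return 0.0  # close enough: stand still rather than crowd the person
    return min(MAX_SPEED, GAIN * error)

for d in (4.0, 2.0, 1.5, 1.0):
    print(f"{d} m away -> {approach_speed(d):.2f} m/s")
```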
5. Discussion and concluding remarks This paper has been looking into critical issues and important research to de done more than actual ‘findings’. Thus the paper ends with a pledge for critical research into the new mediated sites of interaction and we suggest a number of important research agendas to follow up on: -
-
-
Unfold a ‘politics of visibility’ (i.e. projects that help making visual the invisible issues and problems of new mediated spaces) Engage with the ambivalences of networked mobilities and mediated projects (i.e. focus both on the potential for enrichment of experiences as well as the questions of social exclusion) Critically challenge of taken for granted interpretations of networked mobilities (i.e. that mobility is an instrumental act of moving from A to B and therefore all transit spaces are instrumental and generic) Explore new theoretical concepts (i.e. move beyond dichotomies of local/global, virtual/physical, space/place, sedentary/nomad) Discuss the division between utopian and dystopian perspectives, asking how digital technologies may empower mobile social agents? Explore prototypes for new interactive urban artefacts and sites of interaction facilitating new public domains Explore the potential for transgressing the commercialized media scapes of urban architecture (i.e. urban ads and electronic facades communicating the gospel of consumption)
- Critically discuss if these technologies and performative urban environments can contribute to socially inclusive designs
- Critically discuss if we can move beyond commercial exploitation and state control, and whether mediated interaction spaces become even more meaningful and culturally enriching if we enhance the technological networks of such spaces
Conducting research into these issues is 'what matters'!
The Matter of Design in Videogames

Greg More
School of Architecture + Design, RMIT University, Australia.
gregory.more@rmit.edu.au

Abstract
What is videogame matter? This essay examines the matter of videogames in relationship to architectural design, and advances a definition that videogame matter is: meta, modular, indexical, and distributive. These attributes support an argument that the materiality of the videogame has a markedly different set of properties than the matter of the physical world. A definition of videogame matter is critical to understanding the value of design within virtual environments, which then aids architects and designers utilizing the immersive environments of the videogame for representation, design and collaboration.

1. Introduction
The real is produced from miniaturised units, from matrices, memory banks and command models – and with these it can be reproduced an indefinite number of times. It no longer has to be rational, since it is no longer measured against some ideal or negative instance. It is nothing more than operational. In fact, since it is no longer enveloped by an imaginary, it is no longer real at all. It is hyperreal, the product of an irradiating synthesis of combinatory models in a hyperspace without atmosphere.
Jean Baudrillard, Simulations.1

This essay begins in a space without atmosphere. In this space, if atmosphere were to exist, it would need to be constructed. Equally, if matter were present, it would need to be designed. This is the space of the videogame: a contemporary technology that presents matter and atmosphere purely for experience, and it is the information of experience that shapes this space. Philosopher Bernard Stiegler says information is not immaterial; rather it is a 'transitional state of matter'.2 Therefore videogames can be understood as systems of transitional matter that migrate experience between physical and digital realms, real and virtual spaces, and from user(s) to system. The hyperreal matter of the videogame is not the matter that we know from the real world – it is designed quite differently.

In this essay I examine the matter of videogames in connection to architectural design. The videogame becomes an increasingly important technology as more people inhabit virtual environments for recreation and work. However, the value of architectural design in these environments is not clearly defined. Rather than study the spatial and architectural qualities of videogames, this essay promotes an examination of the matter that constitutes these worlds: what are these synthetic environments made of? First, this essay outlines connections between architecture and videogame space, and the technological association between designers and the mechanism of the videogame. Secondly, I outline a series of attributes that define videogame matter: meta, modular, indexical, and distributive. Thirdly, I present a series of projects that engage in the relationship between architecture and videogame space to further elaborate the earlier established definition of matter.
2. Architecture and the videogame
Architects are advancing the use of videogames within the architectural discipline. For example, the publication Space Time Play3 presents a comprehensive survey of architects working with videogames, and an extensive review of videogame titles of interest to designers and architects alike. Videogame theorists considering the spatiality of videogames suggest that the spaces of videogames are best understood as allegories of space.4 Also, for videogame designers the primary role of architecture - in a game - is to support game-play, with secondary roles of allusion and atmosphere.5 Whether understood from the perspective of a creative discipline, by theoretical examination, or in the production of game titles, videogames are replete with spatial qualities and therefore are complex spaces for inquiry. This complexity is compounded by the ubiquitous term videogame, which combines a range of media types and terminology associated with real-time digital environments: computer games, massive multiplayer online environments, synthetic worlds, first person shooters, virtual environments, etc. In this essay the term videogame refers to contemporary videogame technologies that allow users to experience - through first or third person views - agency within space.

Architectural projects incorporating videogames are usually modifications of existing videogame titles. Developers of game titles (for example Epic's Unreal Tournament, Valve's Half-Life 2, etc.) enable user communities to generate content for their platforms by supplying tools to modify and create environments.6 When a videogame is repurposed for an architectural project it explicitly couples an architectural design with its associated videogame technology. This rewards the architectural potentials of the repurposed technology but diminishes design critique. In this situation design has built-in obsolescence, as videogame matter depreciates at the rate of Moore's law. Videogame environments have predetermined matter-models defined by the developers of the game title. For example, Linden Lab's Second Life and Epic's Unreal Tournament III both allow users to modify and create environments, but have completely different systems to deliver and experience materiality. The user has agency to redefine space, but limited potential to affect the matter within the environment. Therefore any consideration of matter within videogames has to acknowledge this technological condition: the mechanism of the videogame is a system designed prior to its architectural appropriation, and therefore its matter is pre-engineered.

3. Attributes of matter
I have isolated four attributes to define the matter of the videogame: meta, modular, indexical, and distributive. These attributes are platform independent and make connections between matter, videogame technologies and the screen-based experience of digital materiality.

Firstly, matter in the videogame should be considered as meta-matter, indicating that it is an abstraction from real matter. A similar condition exists in contemporary architecture, where the algorithmic simulation of matter creates a substitute materiality. This architectural meta-matter then becomes machine readable for the direct manipulation of physical matter through machine tooling, robotics and other manufacturing processes. However, matter in the videogame has no further translation to become real. Its pure experience is through the videogame, and in that sense this matter is Baudrillard's hyperreal.
Secondly, videogame matter is modular, and is derived from an object paradigm. This is best explained by understanding the structure of the code that drives the technology of the videogame: object-oriented programming (OOP). With OOP, discrete units of programming logic are encapsulated in 'objects'. Light, sound, geometry, even physics are all designed as specific types of videogame objects. Since the attributes of the objects are alterable and parameterized, objects can change state, allowing for endless variation. Videogame matter, in its modularity, creates a recombinant concept of materiality built from atoms of logic.
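A minimal, engine-agnostic sketch of this object paradigm may help (generic Python, not any particular engine's API; the class and attribute names are illustrative):

    class GameObject:
        """A discrete unit of videogame matter with parameterized attributes."""
        def __init__(self, **attributes):
            self.attributes = dict(attributes)

        def set_state(self, **changes):
            # Alterable, parameterized attributes let one object take endless states.
            self.attributes.update(changes)

    # Light, geometry, even physics are all just typed objects...
    light = GameObject(kind="light", colour=(1.0, 0.9, 0.8), intensity=0.7)
    wall = GameObject(kind="geometry", mesh="wall.obj", material="concrete")
    physics = GameObject(kind="physics", gravity=-9.81, friction=0.4)

    # ...and re-parameterizing or recombining them yields new materialities.
    light.set_state(intensity=0.2, colour=(0.2, 0.2, 0.9))  # a dusk variant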
Thirdly, the appearance of videogame matter is an index of its rendering technology. Triangular polygons are the geometric building blocks of synthetic worlds - all topologies within videogame environments descend into facetted geometries. This is the requirement of the rendering engines that synthesize software and hardware capabilities, and translate scene graphs of geometry into 60 frames of imagery per second. Although the surface of any form is a collection of triangular facets, a series of surface effects enlivens these planar elements so that they seem anything but reductive: for example, texture mapping (applying imagery onto the surface), bump mapping, and the advanced techniques of normal and parallax mapping. Videogame matter has a visual indexicality that is tied to each videogame engine and its developer's intellectual property. Unique bindings of geometry and projective representational techniques mean the collapse from digital space to screen-space is indexical.

Fourth, and finally, the telos of videogame matter is that it is distributive. It is meta, modular, indexical and distributed to be experienced - synchronously or asynchronously - by one or many. Distributive matter can be generated client side (at the node), server side (centralized), or a combination of both. Being distributive raises a novel relationship between matter and its ownership and access. Matter becomes connected to digital rights management (DRM). When one purchases a videogame title or enters into an End User Licensing Agreement (EULA), one is agreeing to respect the intellectual property of this distributable and synthetic matter.

4. Videogame Architecture(s)
Having outlined four attributes of digital matter, I now present a series of projects developed to question the relationship between architecture and videogame space, including the use of videogame environments in design studios. These projects are presented to contextualize the definition of videogame matter.

4.1. Meta spaces, beta places
With the Meta Island Beta project I examine a relationship between physical space and digital space, specifically connecting a gallery installation to a transforming videogame environment. Meta Island Beta (Figure 1) presents an island as a space prior to identity, based on Wu Cheng'en's 1590s tale Journey to the West (commonly known as Monkey). Islands in the digital and synthetic environments of videogames are geographies without geology, ultimately located in the flatness of an infinite ocean, where the meta-concepts of gameplay and virtual inhabitation inform the silhouette, undulation and edge of the land form.
Figure 1. Meta Island Beta.

Meta Island Beta presents a landscape sculpture with embedded electronics that senses the light levels of the gallery space and drives a videogame environment, which presents an island sitting on the horizon of a virtual ocean. This island is constituted of 1024 individually addressable modular cells, which are constantly reconfiguring and reforming themselves in response to the light levels of the gallery. This videogame environment is custom built using the programming language Python and the Panda3D game engine. Every five minutes a new island is created through a series of formative meta processes. Firstly, initial formation: the island is given a mathematical distribution (Gaussian) with a series of peaks informed by the information transmitted from the light sensors. Secondly, erosion: the island form is eroded utilising an algorithm to simulate hydraulic and atmospheric erosion. Finally, vegetation: the erosion process redistributes simulated soil deposits, and where soil is of sufficient depth and above the water line, peach trees (Monkey King gained immortality from the Peaches of Heaven) are planted onto the surface of the island. During a day around 288 beta islands are generated by the installation, each unique and derived from a combination of physical and digital processes.
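Read as an algorithm, the three formative processes might be sketched as follows. This is an illustrative reconstruction from the description above; the 32 x 32 grid (1024 cells), the constants and the one-step erosion rule are assumptions, not the installation's actual Panda3D code.

    import math, random

    N = 32                                # 32 x 32 = 1024 addressable cells
    SEA_LEVEL, SOIL_FOR_TREE = 0.2, 0.15  # assumed thresholds

    def make_island(light_level):
        # 1. Formation: Gaussian peaks, scaled by the gallery's sensed light.
        peaks = [(random.uniform(4, N - 4), random.uniform(4, N - 4),
                  light_level * random.uniform(0.5, 1.0)) for _ in range(3)]
        h = [[sum(a * math.exp(-((x - px) ** 2 + (y - py) ** 2) / 40.0)
                  for px, py, a in peaks) for x in range(N)] for y in range(N)]
        # 2. Erosion: shave material off each cell and redeposit it as soil
        #    (a crude stand-in for the hydraulic/atmospheric simulation).
        soil = [[0.25 * h[y][x] for x in range(N)] for y in range(N)]
        h = [[0.75 * h[y][x] for x in range(N)] for y in range(N)]
        # 3. Vegetation: plant peach trees where soil is deep and above water.
        trees = [(x, y) for y in range(N) for x in range(N)
                 if h[y][x] > SEA_LEVEL and soil[y][x] > SOIL_FOR_TREE]
        return h, trees

    heights, peach_trees = make_island(light_level=0.8)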
4.2. FPS Architecture
Concrete Falls - A Thousand Lines of Sight examines the relationship between architecture, landscape and the space of the videogame (Figure 2). It presents an architecture realized for videogame environments and promotes architectures that are formally defined by the activity of the game space. By using trajectories, sightlines and boundary defenses as design generators, this project creates an architectural memorialization of the First Person Shooter (FPS) videogame genre. Fragments of Berlin's historical border are re-enacted as a site of enquiry: a place of the oppositional and defensive gaze. Like the tourist climbing the viewing towers of the old West, the lust for the gaze motivates the viewer to ascend and see over the wall. In A Thousand Lines of Sight two self-similar tower-walls, placed symmetrically on either side of the border, gaze at each other as part of a novel architectural assemblage.

Figure 2. Concrete Falls - A Thousand Lines of Sight.

This project explores the concept of a solid/void model that carves the activity of the game space into the architecture. Matter becomes mutable to game play. The architecture has been developed parametrically, allowing the serially deforming ramp to be reconfigured in shape and character. A thousand lines of sight are distributed on three trajectories that travel up the ramp, replicating the FPS positions of crouching, standing and jumping. The solid/void model makes salient the effect of carving out the lines of sight and is archaeologically reminiscent of a fortress or bunker. Texture baking techniques are used to give each surface in the game environment complex lighting effects - allowing the light to filter through the apertures into the interior of the tower-wall. For this project the Unreal Tournament game environment allowed the detailed surface textures and complex geometries to be experienced within a real-time environment. In its static form, as if arrested by Medusa's gaze, the resultant architecture provides a memorial to the interaction of the game-space.

4.3. Atomistic constructions
In recent years I have been directing digital design studios using Linden Lab's Second Life as a design context for architecture and interior design students (Figure 3). Second Life presents a persistent virtual world where inhabitants can create, exchange and even sell objects. For designers, Second Life offers a robust platform to experiment within a persistent collaborative online design space.
Figure 3. Second Life, RMIT University's Ormond Island.

In Second Life designers use a set of tools to create their virtual designs directly in the environment. These tools are limited when compared to typical architectural software, but guarantee that all modeling is compatible with the environment. Objects created are called prims (short for primitives), and through combinations of prims designers can achieve more complicated spatial compositions. Second Life embraces the concept of atomistic construction for user-generated content,7 where simple - easy to generate - objects can be used in combination to create complexity. The matter of Second Life is highly distributive. Located server-side, all objects and spaces are streamed to the client when required. Second Life is less sophisticated in its rendering of space when compared to other videogame technologies. This is because, as a technology, Second Life's success is based on the efficient delivery of its distributive environments, allowing as many people as possible to access these spaces with minimal computer specifications and across a series of operating systems.
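A toy sketch of atomistic construction, with plain Python standing in for the in-world tools (Second Life's actual interface and scripting language are not shown here): simple parameterized primitives linked into a more complex composite.

    def prim(shape, position, size):
        """One primitive building block (a 'prim')."""
        return {"shape": shape, "position": position, "size": size}

    def link(*prims):
        """Group prims into a composite object, as builders do in-world."""
        return list(prims)

    # A crude table from five boxes: one top and four legs.
    table = link(
        prim("box", (0.0, 0.0, 1.0), (2.0, 1.0, 0.1)),   # top
        *[prim("box", (x, y, 0.5), (0.1, 0.1, 1.0))      # legs
          for x in (-0.9, 0.9) for y in (-0.4, 0.4)]
    )
    print(len(table), "prims")  # complexity built from simple atoms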
5. Coda
The experience of materiality through the videogame is intrinsically shaped by the mechanics of the videogame. I would argue, extending Stiegler's idea that information is matter in transition, that the videogame offers a unique experience of matter by coupling information with energy. Ironically, the synthetic worlds of the videogame require the physical world mechanisms of computers, networks and servers. A critique of digital matter is not just about the phenomenon of the videogame in architectural design: the more we migrate to persistent online environments, the more we engage an economics of design where digital matter has real world implications. The further we invest in the space of the videogame, be it online worlds, massively multiplayer environments, or multi-user educational spaces, the more we need to understand that digital matter has an associated cost. I have articulated the concept of videogame matter to better understand the material qualities of synthetic environments. If we consider the matter of the videogame as meta, modular, indexical, and distributive, then we engage in a matter-model that is unique to the technological conditions of these synthetic worlds. The unique combinations of matter, form and interaction availed in the space of the videogame will lead to new formulations of architecture - an architecture, however, whose matter is always in transition.

Endnotes
1 Baudrillard, J., Simulations, trans. Foss, P., Patton, P., and Beitchman, P. (New York: Semiotext(e), 1983), 3.
2 Crogan, P.: 'Information, says Bernard Stiegler somewhat aphoristically, is not immaterial but "a transitional state of matter."' Stiegler, B., Aimer, s'aimer, nous aimer: Du 11 septembre au 21 avril [To Love, To Love Oneself, To Love Ourselves: From 11 September to 21 April] (Paris: Editions Galilée, 2003), 68.
3 von Borries, F., Walz, S., Böttger, M. (eds.), Space Time Play (Basel: Birkhäuser, 2007).
4 Aarseth, E., 'Allegories of Space: The Question of Spatiality in Computer Games', in CyberText Yearbook 2000, eds. Markku Eskelinen and Raine Koskimaa (Jyvaskyla, Finland: Research Centre for Contemporary Culture, 2001), 152-71.
5 Adams, E., 'The Role of Architecture in Videogames', Gamasutra, http://www.gamasutra.com/features/20021009/adams_pfv.htm
6 Jack M. Balkin defines this as the third kind of freedom within virtual worlds - the freedom to design together - where both players and developers build and enhance the game space together. The first two freedoms are: Freedom to Play (players' ability to participate in the world), and Freedom to Design (developers' ability to plan and maintain a virtual world). From Balkin, J.M., 'Law and Liberty in Virtual Worlds', in The State of Play (New York: New York University Press, 2006), 86.
7 Ondrejka, C., 'Escaping the Gilded Cage: User-Created Content and Building the Metaverse', in The State of Play (New York: New York University Press, 2006), 165.
Do Narratives Matter? Are Narratives Matter?
Joseph B. Juhász
University of Colorado USA
juhaszj@colorado.edu

Robert H. Flanagan
University of Colorado USA
Robert.Flanagan@cudenver.edu

Abstract
Narratives – public and private – are the stuff of design. This commonplace truism is often forgotten in the buzz, boom and confusion surrounding the development of digital media; digital media seem to offer a "virtual" alternative to such stuff; as such, the current Romance with Digital Media is nothing but a weak revival of primitive mentalism.

Narrative Composition
Narratives – public and private – are the stuff of design. Written, verbal, and graphic, narratives cross artistic boundaries to define creative vision. While all narratives are compositions, not all compositions are narratives; uniqueness is ascribed by narrative, and composition is embedded in its process of construction – performance composed in time. Time is the common constructive thread of performance, contributing dimensional awareness through scripted authorship: coordinated, articulated, and defined compositional actions.

What is a narrative? A narrative is a story transmitted. In a very general sense, it is a story told. The willing suspension of disbelief makes it possible for the story to cohere without the constraints of "reality". Like any story, just about any narrative will have the explicit or implicit structure: beginning-end-middle. Again, like any story, the narrative will have embedded in it the explicit or implicit voice of a speaker, which then provides the point of view – the narrator. These elements are almost always found in any story – but their lack does not necessarily invalidate the idea that a tale has been told, or rather transmitted. The medium matters – but it is a story, even if there are no words in it – and even (or especially) if there are no words possible to paraphrase it.

Architecture deals with stories told by dwellings. As dwellings reverberate with the narratives of their creation, their curators ponder the time before the "big bang". That secret, however, is secure from detection in the realm of knowledge occupied by the narrative: the dimension occupied by all things imagined. While the mechanics of creation (of form and space) may be the obsession of the seeker, the truth is forever concealed from post-creation logical deduction.

Cubism and Futurism
Goethe ascribed the temporal metaphor to architecture – "(it) is as frozen music" – however the commonplace application of the temporal narrative in the design process is predominantly a twentieth-century phenomenon. Formal variations are
defined early on in Corbusier’s “Purism” and Wright’s “organic architecture” and Sant’Elia’s power stations.
Cubism is at the fulcrum of narrative awareness in the modern era; the narratives of Picasso and Braque in 1906, through selective dimensional negation of the pictorial, free up partial dimensional channels to introduce the temporal – to script temporal symbolic narratives reconstructed with "silly little cubes". Notably, the mechanics of cubist temporal narration heralded and foreshadowed related upheavals in science and other forms of rational discourse. The Italian Futurists spin this temporality into a Romance with the Synthetic – again foreshadowing The Age of Synthetics. Synthetics are toxic imitations of originals, the toxicities of which are not evident until the consequences of consuming the toxin become irreversible. This is the Romance with the Power Plant – you see the energy that is cheaper and cleaner than horsepower, but you can't see the cancer – thus Marinetti, Sant'Elia and others.

The surreal
The constraints of "reality" become lifted in art with Surrealism – let's simply stutter, say: Dali and DaDa. Here the irrationality of dreams enters and distorts time, chronology, and narrative voice and structure. In architecture Moholy-Nagy could be thought of as an exemplar.

The Virtual
With PopArt the idea that in the age of mechanical duplication we have reached a state where copies are superior to originals becomes a kind of dogma. This is the apotheosis of the romance with the synthetic, and where theorists of virtual architecture remain stuck to this here day. Here, nylon is revealed – or let us say was revered – as purer than silk. The toxicity of nylon itself in the final apotheosis becomes a kind of Apocalyptic Positive. Here too the Romance with the immaterial reaches co-apotheosis. The narrative runs aground as purely self-referential. The virtual becomes identified with the mental – the immaterial essence beloved of the Hellenists. She is the ghost coming back to haunt us in her last haunt: The Virtual.

Experience repossessed
Oddly, perhaps, it is architecture rather than the other High Arts which first foretells of The End of the Affair with synthetics: Wright's Guggenheim as The House of Art – followed by Kahn's Kimball as its vault, and Linn's Big Book as its dirge – ring the death knell of nylon, as much as The Titans of titanium still sold(i)er on bravely against the flood. Wright's Guggenheim, the Kimball and the 'Nam Mem' are pure narratives embedded in time. They brought about Green And Whole foods – partly
synthetic as they may be – yet whispering of "wind power" and fluttering silk kerchiefs.

Digital Negation of Narrative Structure
In its primal essence, a narrative is an enactment, not a spiritual or mental representation, even if it is suppressed or even at times somewhat implicit; a narrative that is not an enactment is not a narrative, it is not even the ghost of a narrative – it is a Platonist or Neo-Platonist wishdream – it is what someone totally alienated from their own experience would want a wishdream to be, were it the case that the "wishes" (understood as mental phenomena rather than as enactments) that you dream (understood as mental phenomena rather than as enactments) could come true. A story told is not merely ink on paper. As the story infolds and unfolds and outfolds, teller and hearer participate in synesthetic sensations and implicit actions – enactments, performances. Such "images" are not the ghosts of sensations.

From the cubists through the age of Pop and Post-Pop the hidden agenda was to substitute "mind" for "soul" – this synthetic, thoroughly modern creation, the mind, was then believed to be superior to The Soul. There may be a ghost in the machine – but if he is there, he is made of stuff; there is no ghost in any machine that is utterly immaterial. Thus "virtual" environments, "virtual" media, "virtual" design identify phenomena that are the unfortunate legacy of Hellenism. The Hellenistic preoccupation with "origin" as bereft of polluting matter is the Spirit or Soul we inherit from the Apostle Paul. This distortion of reality and of experience is the direct great-grandfather of the notion VIRTUAL. It is a return to a primitive mentalism: thus, the current Romance with Digital Media is nothing but a weak revival of primitive mentalism. So, what's the matter with "virtual" "environments"? They're really no matter at all – or, even worse, they only matter in that they distract us from what matters: narratives.

In The Narrative (as, let us say, in The Guggenheim, or the Kimball, or the 'Nam Mem') the physical enactment supervenes its virtual representation. It is the beginning of the end of The Virtual and the End of The Beginning of repossessing experience – repossessing the narrative, repossessing architecture. The printing ink and electronic vibration spilled and wasted on discussions of "The Virtual Designer" is a feeble and futile whimper of a last stand for Soul – understood Hellenistically as essence removed or distilled or primary from matter. It is a sad and pitiable attempt to mechanize the soul, denying architecture its role (and soul) as performance, determined or patterned by a set of instructions and validated and proven by the structured enactment of the narrative.

The Ghosts' Last Stand
In 1989 the Iron Curtain collapses and AutoCAD manifests. Here, the narrative is being dictated by the ghost in the machine. The End. In 1989 the narrative is no longer able to be written by The Author – it is dictated to the computer scribe. It signals the end of the architect being able to articulate and edit the narrative of the hand drawing. There is a continual progression of this encroachment by technology, which further removes the author from the compositional tactics of creation.
"The Story of AutoCAD" is that it is simply a cheap toxic substitute for the original. It's not better – it is merely cheaper, and highly toxic. Nylon = cheap toxic silk. At this point we have an admission of defeat and at least an implicit sense of inferiority. It is indeed The End of the Affair. The idea of the automated creative artist – finally – may have sold on Wall Street, but is not doing well on Main Street. Flash Gordon's Last Stand. Draw! At its best it is anonymous design-build that is the antithesis of these ghosts – here the building returns to the ideal of enactment, without drawings-as-such, much less simulations.

Narratives and Matter
We alter the future with narratives – written, graphic, verbal, multisensory, synesthetic. These can be abstractions like the recording of a thought in the present (like an instruction) which will affect the future. This is like a deed, a promise, a ManiMephist(g)o. A contemporary analysis of narratives begins by asking: what is the nature of this bequest or legacy? Is it matter, does it matter? (Is time therefore an illusion?) (What IS the future of this illusion?) In this sense the instructions that constitute a deed transcend matter – which however does not make such instructions "virtual". The questions about the transcendent quality of instructions take us directly to the questions surrounding perception, questions like: are thoughts in the present capable of transcending matter? Or are thoughts from the past, the future, or some alternate reality transcending the present, "a radio beacon effect"? Questions of magic, wizardry and religion: wizardry is man-made whereas magic is natural; in the man-made world duplicates exist – in nature they do not; thus human creations, duplicates, are a revolt against nature. Hallucinations, the counterpoint to the "sane" narrative, surround drug states, head trauma, or visions (e.g. of the Blessed Virgin Mary). Traditional Psychiatry wouldn't differentiate between these – but they are different, and oddly enough, contemporary Psychiatry differentiates between Hallucinations and Visions – structured narratives of place and being. Issues surrounding active and passive take us to questions surrounding imagining and memory. Memory and imagining: where do dreams and narratives come from? Is there such a thing as an original thought? How is this related to the necessity of forgetfulness and imperfection? Yet, the score survives.

Structured Narratives of Place
As we have said, Goethe made the observation that "architecture is as frozen music"; he identifies architecture as performance - it is an art in that it is temporal (like music) but "moves more slowly." Although, of course, the process of decomposition in architecture is merely slowed to a crawl. While architecture is more like a frozen ritual than it is like frozen music, both engage the compositional techniques of layered narratives (duration and speed) and structured symbolic language. The space between is the clearinghouse for all things real and imagined, where facts and dreams are reassembled in the Surreal Freudian dimension of legends and survival, the land of fairytales and faith, of hunger and desire, where all things function without the constraints of the absolute logic of the law of the excluded middle.
All things real and imagined are synthesized in synesthetic-kaleidoscopic-zoetropic assemblages where syllogistic logic is seduced. Of course, faith and belief are necessary to the survival of the species. How would it be possible to live with the reality of death staring us in the face at every moment? What if dreams were not forgotten and memories not repressed? What if perversion and not "normal" desire were unleashed? Is our survival not dependent on the denial of reality and the synthesis of the imagined, idealism sheltered from actuality? Faith is the "logical explanation" of the space between, the place occupied by what is imagined and that which is known - the realm of the dream maker.

Narrative Architecture, Flesh and Dwelling
In the narrative the deed becomes flesh and dwells among us – or rather, in architectural design it becomes our Dwelling Place. It is the cover we need for separation from nature (Wizardry over Magic). Wizardry as legerdemain separates us from the magic participatory and time-less narrative of nature. We fall into Wizardry from Magic. At its best, then, architecture is a form of wizardry – it is a recipe for legerdemain, a sleight-of-hand performance of the trick (or art). Yet – where does the trick itself come from? What is the Origin of Tricks? Here we are in the realm of imagining and memory properly so called; we are beyond and through the doors of perception into the realm of the transcendent (not the mental, not the spiritual). The dream, not understood as mental phenomenon but rather as a suppressed enactment, comes, well, from The Future – since it cannot come from either the present or the past. Miller of Repo Man was right: there has to have been a Time Before – and before, there was not what there is now. Thus, change comes from the future. We repossess the dream; it's a Chevy Malibu (not a stainless steel DeLorean), it's a time machine. The original thought is the radio signal from The Future. It is matter. It matters. The deed then does not turn us into puppets or mechanical robots. We are flesh and we are blood and we are bones. The illusion of the soul should not be exchanged for the illusion of a mechanized soul or a mechanical man. The response to mentalism or dualism is neither simplistic monism nor simplistic materialism, and Virtual Environments share the worst of both of those impossible worlds. A narrative in time, on the contrary, recovers the flesh of the dwelling place.

Beacons, Transcending Matter
When Beethoven dies, his score survives; when The Imperial Hotel is demolished, its score survives. The performance survives by transcending temporal restraints; the persistent memory of the narrative is residual. After all, art is what we have left of The Sacred in a Secular World. Notably, narratives only exist and persist in a flat world (the cubist dilemma); if you imagine the world to be round, burdening narrative intention with extraneous dimensionality, its narratives disappear, to be replaced by Celebrity Talk – "Romance with Digital Media."
Shaping the Global City: The Digital Culture of Markets, Norbert Wiener and the Musings of Archigram

John Herron, Jr.
Graduate School of Design, Harvard University USA
jherron@gsd.harvard.edu

Abstract
The contemporary "built environment" as conceived by designers – be it actual or virtual; be it architecture, landscape, industrial products or, more purely, art – is increasingly generated using powerful computational tools that are shaping the culture of the design professions, so much so that the phrase "digital culture" aptly applies. Designers are rightly inclined to believe that the emerging contemporary landscape – especially in thriving global cities like New York, London and Tokyo – has recently been and will continue to be shaped in important ways by digital design. That will surely be the case. However, design does not exist in a material vacuum. Someone pays for it. This essay argues that the primary shaper of global cities today is another "digital culture", one defined by the confluence of professions and institutions that constitute our global financial markets. The essay explores the common origins of these two cultures – design and finance; the prescient insights of Archigram into the cybernetic future of cities; the spatial implications of nomadic "digitized" capital; and the hazards of desensitizing – in many ways, dematerializing – the professional practices of design and finance. The purpose of the essay is not to establish primacy of one over the other. Especially in the case of urban design, they are interdependent. The purpose is to explore the connection.

Overview
"Cities, like dreams, are made of desires and fears, even if the thread of their discourse is secret, their rules are absurd, their perspective deceitful and everything conceals something else."
- Italo Calvino, Invisible Cities

While much attention is rightly being focused on contemporary digital culture and design, this paper explores the origins and implications of a closely related, but quite different digital culture – a largely antecedent one spawned by financial theorists, market specialists and computer engineers in the 1970s – that has had a defining impact on the identity, purpose, culture and physical presence of our largest global cities, including New York, Tokyo and London. This essay argues that the convergence of finance and digital technology that began in the 1970s was both central to the shaping of global cities today and anticipated by very few urban seers, Cedric Price and Dennis Crompton of Archigram among the few. The paper concludes with two short speculations: first, on the implications of nomadic capital – integral to the digital culture of open financial markets – for major urban centers; and, second, on the common hazard of dematerializing the practice of digital design and digital finance.
My aim is to remind critics exploring the relationship between digital culture and the built environment that a parallel digital culture – the digitized culture of global financial markets – has had a powerful shaping impact on cities as we know them today and, presumably, on the design profession itself. Somewhat alien and largely invisible to the design community, this close digital cousin is too powerful to ignore.

Norbert Wiener, digital technologies and financial theory
In 1994, at a centennial symposium celebrating the 100th anniversary of Norbert Wiener's birth, Nobel financial economist Robert Merton credited Wiener's pioneering work on continuous-time stochastic processes – mathematical models keying off of Brownian motion – as the foundation supporting modern portfolio finance in the 1970s. Nobelist Paul Samuelson and leading finance theorists Andrew Lo and Stephen Ross also spoke on Wiener's behalf. More relevant for our purpose – and, on the surface, a surprising participant – Charles S. Sanford, Jr., the chairman of Bankers Trust Company, presented a paper co-authored with Daniel Borge entitled "The Risk Management Revolution". Sanford noted that he was "here today…to help you honor Norbert Wiener, whose work provided a mathematical foundation for many of the financial models that have fueled the changes now taking place."1 The chairman of what some considered the world's most innovative trading house knew first-hand that computationally sophisticated financial techniques were re-shaping global financial markets and, in the process, shifting the fates of major financial centers throughout the world.

The link between Norbert Wiener and global finance is appreciated by specialists with an interest in history, but is otherwise obscure. The link between global finance and the relative prosperity, the demographics, the culture and – quite literally – the skyline of our major cities is visibly evident. In New York City, one of the most important variable revenue lines in the annual budget consists of tax receipts generated by Wall Street bonuses.2 In Tokyo, Shun'ichi Suzuki governed the city from 1979 until 1995 and explicitly positioned it as a world city – sekai no Tokyo, as the intellectual Koda Rohan had put it earlier3 – a city whose global influence, whose very identity, was principally derived from its economic stature and investing power. London used financial deregulation, its so-called Big Bang in the 1980s, to distance itself decisively from other European financial capitals like Paris, Frankfurt and Amsterdam. This all had a direct bearing on urban design. Three of the largest urban developments in each of these cities during the past thirty years – Battery Park in New York, Shinjuku West in Tokyo and Canary Wharf in London – physically signify the material importance of global financial markets. An analysis of commercially significant urban and regional peripherals – entertainment, residential real estate valuations, restaurants, philanthropy, etc. – would reveal the same dependence on global finance, particularly in New York and London.

As noted above, this was made possible by a convergence between financial theory and information technology which had begun in earnest some two decades earlier. Four cybernetic innovations independently developed in the 1970s have combined to create the basic command-and-control architecture – the fundamental information and feedback mechanisms generalized by Norbert Wiener – of the global financial system today.
Three of the innovations – the relational database, the large-scale integrated circuit and the Ethernet – are innovations in information theory and computer engineering. The fourth – the Black-Scholes option pricing model – is an innovation in applied economic theory.
The first three provide a framework for collecting, manipulating and communicating data – converting bits into useful information. The fourth provides an analytic framework for using the information to determine whether or not market prices are in equilibrium or if there are arbitrage opportunities that traders can exploit – acting as financial versions of Maxwell's Demon creating dynamic equilibriums both across and within markets throughout the world... at least in theory.
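To make the fourth innovation concrete, here is a compact sketch of the Black-Scholes call-price formula in its standard textbook form (standard-library Python only; the inputs at the bottom are illustrative numbers, not data from any market discussed here).

    from math import log, sqrt, exp, erf

    def norm_cdf(x):
        """Standard normal cumulative distribution function."""
        return 0.5 * (1.0 + erf(x / sqrt(2.0)))

    def black_scholes_call(S, K, T, r, sigma):
        """European call price: spot S, strike K, maturity T in years,
        risk-free rate r, volatility sigma."""
        d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
        d2 = d1 - sigma * sqrt(T)
        return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

    # If the market quotes a different price, the gap is a candidate
    # arbitrage signal - Maxwell's Demon at work, at least in theory.
    model_price = black_scholes_call(S=100, K=105, T=0.5, r=0.05, sigma=0.2)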
While each of these innovations is compelling in its own right, it is the interplay between them that interests us here. More flexible and efficiently codified databases permitted higher transaction volumes that, in turn, required more computing power to process. Faster and more reliable communication protocols permitted more distributed processing that, in turn, required increasingly supple database architecture. Sophisticated arbitrage finance required much faster processing power, real-time database access and secure, 'fat pipe' global communications links. Data was transformed into information by the first three innovations and into actionable models for allocating risk capital by the fourth. Over the past two decades, R&D on all four innovations has effectively been supported by leading securities firms whose prospects have depended increasingly on their global trading and risk management activities.

Simple technological determinism – the stirrup changing the political landscape of medieval Europe – is a fun party game, but the powerful and sustained interplay between seemingly unrelated technologies – spinning and weaving in 18th century England and India – offers richer insights into historical developments. In this case, the combination of better databases, faster processing capacity, more efficient communication protocols and more sophisticated financial analytics created a financial revolution when paired with global deregulation and higher market volatility throughout the world. The virtues of this are contestable, but the radical impact on our global cities is undeniable.

City planning and cybernetics: RAND and Archigram
Between 1966 and 1975, New York City spent several hundred million dollars (nearly $3.0 billion in 2007 dollars) on a series of ill-fated, cybernetically derived and secretively administered projects run by the NYC RAND Institute, a joint venture of the RAND Corporation and New York City.4 Supported by Mayor Lindsay, the Institute's mission was to beat the sword of defense department information systems into the ploughshares of urban analytics. Sophisticated computer models predicated on Wiener's feedback-and-control principles were designed to help the Lindsay administration manage the city more efficiently. By virtually all accounts, the ambitious effort was a very expensive failure, and the program was shuttered by Abe Beame in 1975 just as the city was entering its darkest fiscal crisis – the era of "Ford to City: 'Drop dead!'" as the Daily News starkly reported.

While New York City was wasting millions of dollars trying to get a handle on its future, the loose architectural collaborative of Archigram was displaying prescient visions of London's future for free. Ironically, both Archigram and RAND viewed urban life through a cybernetic lens. In New York, technically gifted planners tinkered with complex computer models designed to track, simulate and forecast information flows. In London, Archigram took an entirely different tack and critiqued the planning profession with partially polemical, partially playful, often prescient projects –
Plug-In City, Fun Palace, Living City are examples – that, in retrospect, contained the metaphoric seeds of the cybernetic urban future that was beginning to unfold. Although financial markets, specifically, and capitalism, more generally, were off Archigram's radar screen – or, if on, more as vice than virtue – cybernetics was a central thrust, especially for Plug-In City, Peter Cook's iconic vision of the city as a restless network of constant, but actively modulated flows. As a complement to Plug-In City, fellow Archigramist Dennis Crompton designed Computer City to provide the infrastructural pipes and monitors – the system of information feedback and control in Wiener's terminology – needed to make Plug-In City conceptually viable. In 1966, the BBC succinctly summarized Archigram's mutatis mutandis take on the cybernetic city: "Gadgets are less important than the new ability to understand and control a hundred or a thousand different things, all happening at once."5 Given Archigram's fascination with information flows, the step to understanding money as little more than a digital message, a debit or credit entry tracked in a computer bank, is a short one. The future was there to be seen, not spelled out, but made suggestively accessible through a remarkable Gestalt intended for imaginative viewers with open minds.

The Archigram influence does not end there. In 1969, New Society published "Non-Plan: An Experiment in Freedom", a provocative article directly attacking the top-down, command-and-control planning establishment in England and most everywhere else at the time. Jointly written by Reyner Banham, Peter Hall, Paul Barker and Archigramist Cedric Price, the article posed the simple question: "could things be any worse if there were no planning at all?"6 Recalling the article three decades later, Barker drew a straight line from the concept of "Non-Plan" to the development of London's monumental Canary Wharf.7 Alfred Sherman, an "ex-communist turned Tory", used the "Non-Plan" concept to convince his boss, the new Prime Minister Margaret Thatcher, that the Docklands should be treated as an "enterprise zone" – in effect, a series of "small Non-Plan zones" where government intervention on development projects would be minimal.

Spatial gravity and the digital culture of nomadic finance
Saskia Sassen has astutely observed that even in an age of distributed communications, the modern-day capital markets – deeply cybernetic phenomena, as we have discussed – are so complex that the principals and their accountants and their lawyers and their computer specialists need to work closely together to manage transactions.8 At the moment – whether it be Silicon Valley, Marunouchi, the City of London or Wall Street – place still matters. Even in William Mitchell's "city of bits", place retains its commercial gravity. But, when bodies move fast, gravitational relationships change. While Sassen makes a strong case for the importance of place, she also highlights the urban vulnerability caused by gravitational shifts in economic geography. Amsterdam once dominated London as a financial center. In 1949, Shanghai was the leading financial center in Asia, not Tokyo or Hong Kong. And urban competition is not just between national capitals. Leading cities within countries compete as well.
During the past four decades Montreal has been supplanted by Toronto, Osaka by Tokyo, Calcutta by Bombay, Melbourne by Sydney, Rio de Janeiro by Sao Paulo, and virtually every American city by New York as the primary node in the highly charged circuitry of global finance. More recently, finer-scale, intra-regional competition has emerged.
Greenwich, Connecticut, a leafy – gold leafy – residential suburb of New York City, is now considered the "hedge fund capital of the world".9 Talented traders using a combination of advanced cybernetic communications, real-time data management and state-of-the-art analytics have created a financial 'neighborhood' in Greenwich that rivals Wall Street in certain markets. Unfortunately for New York, Greenwich is outside its tax jurisdiction. The hedge funds clustered in Greenwich – some of the largest hedge funds in the world – demonstrate both the sustained importance of place, in general, and the vulnerability of any place, in particular, especially a place dependent on digitized, global finance. The risk for erstwhile prospering financial centers like New York is that Greenwich is not an isolated example: Newport Beach is home to PIMCO, the largest bond fund in the world; Setauket, New York, on Long Island, is home to Renaissance Technologies, one of the largest and most successful hedge funds in the world; and resort towns like Sarasota, Aspen, Marion, Jackson Hole and Coral Gables are home to sophisticated global fund managers.

While financial capital – as the phrase "flight capital" suggests – has been relatively mobile for several centuries, financial talent has been far stickier, far more tied to a specific place. This is less true today, primarily due to cybernetic technologies that "re-spatialize" economic geography rather than "de-spatialize" it. New York is a robust beneficiary today, but should be wary of its overdependence on global finance. The cybernetic financial technologies that powered New York out of the doldrums of the 1970s are increasingly capable of empowering a critical mass of nomadic capitalists – mimicking the nomadic nature of capital – to seek new places, new tax and regulatory regimes, outside the control of the larger city but close enough to benefit from it. And these days, "close enough" for the most successful cybernetic capitalists is measured by the range of a private jet, not just a limousine.

The hazards of dematerializing the culture and practice of design and finance
While genealogy is not destiny, the digital technologies of finance and design share some common DNA that deserves attention. As a contemporary example, the architectural modeling tools that helped Meyer and Schooten design the headquarters for the ING bank in Amsterdam share some common origins with the financial trading models used by the bankers inside the building who are allocating risk capital throughout the world. The shared roots are partially technical – processors, compression algorithms, vector graphics – but they are also cultural. The software user, be she an architect or an options trader, is professionally abstracted from a tactile connection to what is being created. Symbols representing mathematical relationships intervene – shapes in the case of the architect, formulas in the case of the trader. These symbols are generated, analyzed and shared by specialists employed in subtly alienating work cultures that defy materiality. The cultural 'real estate' they share is an LCD screen, not the ground outside. Some good things can come from this, of course. The ING headquarters is architecturally clever, engaging and Green. The ING traders provide needed liquidity throughout the developing world.
But, seemingly so different in terms of surface culture, the two digital professions – the architects and the traders – share a certain alienation, a certain unworldly separation, because what they create on their screens is computationally powerful but sensually remote. There are profound material consequences in the work performed – buildings get built, markets move – but the creative experience for the designer and the trader can be strikingly cerebral and solipsistic.
Technical facility is privileged in each profession over social imagination and purpose. As employees hover intently at their computer stations, some design studios and architectural firms bear more than a modest resemblance to the trading floors of large securities firms. The work itself is similarly abstract and disengaged from the material world. What looks convincing on a computer screen – a particularly elegant shape or a compelling arbitrage opportunity – can deceive the fabricator unless she can meaningfully position it in the contextual messiness of what we loosely call the "real world", that increasingly foreign territory just beyond the computer screen.

The point is not to trade the processor for the T-square or the abacus. Computing power is not the problem. It is a neurological liberator. The concern is that the cerebral enrichment of computing experience is often sense-depriving. Taken to a solitary extreme, what we trumpet as "social computing" becomes oxymoronic. We should be wary of any digital culture – be it design or finance – that desensitizes us too much; that seduces us with anesthetizing digital creations that coolly distance us from the material world with its feverish animal spirits and unknowable unknowns. An immediate and pressing example of digital finance gone badly awry – of being fundamentally unhinged from the way real people act and behave – is the securitization of sub-prime mortgages, in which the underlying assets were improperly appraised and the creditworthiness of borrowers was evaluated using mechanical credit-scoring algorithms based on unverified data. Computationally fabricated and defensible in moderation, these financial structures violated common sense as they grew in scale and complexity. Although fault remains at issue, the leaks in Gehry's otherwise invigorating Stata Center at MIT have surprised few building engineers. Computationally derived shapes, however magical, are hard to bind. The more dematerialized the professions of design or finance become in actual practice, the wider the likely divide between practitioner intent and client outcome.

Conclusion
Although the two cultures of design and finance share common tools – the processors, the graphics cards, the computational algorithms, the technocratic apparatus – they are largely alien to each other. As the digital culture of design evolves in centrality and complexity, it behooves design practitioners to keep the influence and reach of their seemingly distant digital cousin squarely in mind. While Gilles Deleuze may have been too singular when he asserted that "the operation of markets is now the instrument of social control",10 digital financial markets are having an undeniable impact on the shape – and fates – of global cities today and our built environment more generally. Architects and urban designers increasingly devoted to their own richly digital culture would do well not to ignore the more arcane digital culture of global finance. When it comes to what actually gets built, where and by whom, digital markets play a commanding, if opaque, role. Identified by Castells and examined more closely by Sassen, their influence is waxing, not waning. Conversely, it is important to remember the provocations and prescience of Archigram in the 1960s. They let their imaginations loose on The City, and in the process saw trends and developments that traditional urban analysts missed.
As designers, they were better positioned to "connect the dots" in a forward-looking Gestalt that trumped more conventional planning techniques based on quantified model building.
The design professions have the social license to make conceptually intuitive leaps that challenge, refine, engage, disturb, violate and, on occasion, even embrace the conventional wisdom. The more experientially remote and computationally cerebral the practice of design becomes, the harder it will be for designers to conjure fresh visions that reach beyond the display screen into the messy flux of life outside the digital box. A digital design culture that loses touch with its deeply tactile roots will have lots more byte than bite.
Endnotes
1 Sanford, Jr., Charles S. & Borge, P. Daniel, "The Risk Management Revolution", in The Legacy of Norbert Wiener: A Centennial Symposium, Jerison, David et al. (eds.), American Mathematical Society, 1997.
2 Hevesi, Alan, The Impact of Wall Street on Jobs and Tax Revenues, New York State Office of the Comptroller, 2004.
3 Schulz, Evelyn, "The Past in Tokyo's Future: Koda Rohan's Thoughts on Urban Reform and the New Citizen in Ikkoku no shuto (One Nation's Capital)", in Japanese Capitals in Historical Perspective: Place, Power and Memory in Kyoto, Edo and Tokyo, Nicolas Fiévé and Paul Waley (eds.), London: Routledge/Curzon, 2003.
4 Light, Jennifer, "Cybernetics and Urban Renewal", in From Warfare to Welfare: Defense Intellectuals and Urban Problems in Cold War America, Baltimore: Johns Hopkins, 2003.
5 Sadler, Simon, Archigram: Architecture without Architecture, Cambridge: MIT, 2005.
6 Banham, Reyner et al., "Non-Plan: An Experiment in Freedom", New Society, 20 March 1969.
7 Barker, Paul, "Tenth Reyner Banham Memorial Lecture", Journal of Design History, Vol. 12, No. 2, 1999.
8 Sassen, Saskia, The Global City: New York, London, Tokyo, Princeton: Princeton, 2001.
9 Davis, Jonathan, "Amaranth: how to lose $6 billion in a fortnight", The Spectator, 28 October 2006.
10 Deleuze, Gilles, "Postscript on Societies of Control", in The Cybercities Reader, Stephen Graham (ed.), London: Routledge, 2004.
Process in Design
Moderators: Ingeborg Rocker and Zenovia Toloudi
Papers:
Dimitris Papanikolaou, From Representation of States to Description of Processes
Rodrigo Martin Quijada, Reality-Informed-Design (RID): A framework for design process
Sergio Araya, Algorithmic Transparency
Sotirios Kotsopoulos, Games with(out) rules
Magdalena Pantazi, Using Patterns of Rules
From Representation of States to Description of Processes
Dimitris Papanikolaou
Massachusetts Institute of Technology, USA
dimp@mit.edu

Abstract
The introduction of digital technologies in architecture has generated a great amount of hesitation and criticism about the role of design and its relation to the artifact. This confusion seems to stem from the dual nature of design as representation of the form and as a description of its production process. Today architects rush to adopt digital tools to explore complex forms, often without understanding the complexity of the underlying production techniques. As a consequence, architects have been accused of making designs that they do not know how to build. Why is this happening today? It seems that while technology has progressed, the design strategy has remained the same. This paper deals with the following question: what matters in design? The paper will reveal fundamental problems, attempt to answer this question, and suggest new directions for design strategies today. The conclusion of this paper is that digital design should also aim to describe the process of production rather than solely represent form.
1. Introduction
1.1. What is the role of design?
According to Herbert Simon, sciences are classified into natural and artificial1. Natural sciences describe the natural world. Sciences of the artificial describe artifacts of human intervention in the natural world; artifacts are conceived by design. Architecture is a science of the artificial; it is the science that describes edifices that will be built by human intervention in the natural world. The word intervention includes the technology that the human mind will use to create the artifact. The word natural emphasizes that the purpose of the design is to describe something that will be produced in the physical world. Therefore, in architecture there is a close relationship between design description and production means. The question then is: what matters in design? Is it the description of the artifact or the description of the process to make the artifact? But before asking this we should perhaps first inquire into the nature of the artifact: when does the artifact start to exist, during design or during production? To answer that we have to carefully trace the processes that bring the artifact into life; we will call this the value chain. By observing how the structure of the value chain has changed over time we shall be able to draw conclusions about the current role of design.

1.2 The value chain and its role in performance of production
The value chain, a term coined by Michael Porter2, but certainly explored before by Taiichi Ohno3, and later by James Womack and Daniel Jones4, describes the thread of all the processes and resources that are necessary to bring the artifact to life, from design to production.

1. Simon, Herbert A. The Sciences of the Artificial, 3rd edition. The MIT Press, 1996.
2. Porter, Michael E. Competitive Advantage: Creating and Sustaining Superior Performance. Free Press, 1998.
3. Ohno, Taiichi. Toyota Production System: Beyond Large-Scale Production. Productivity Press, 1988.
4. Womack, James P., and Daniel T. Jones. Lean Thinking: Banish Waste and Create Wealth in Your Corporation. Simon & Schuster, 1996.
It starts from conceptualization, proceeds through procurement of raw amorphous matter and transformation of that matter into building components, and ends with assembly of the components to form the actual artifact. At every step of the chain, processes add value to the artifact and gradually turn the amorphous, disordered matter into ordered form. Value-adding processes are the processes which embed design information into the matter5. There are two main types of value-adding processes: transformation processes, which change the form or the state of materials (fabrication) to make parts, and aggregation processes, which put parts together to form larger complexes (assembly). The value chain should not be perceived as a linear structure; instead it is a network, often of significant complexity.
The processed artifact embodies and conveys design information from fabrication processes to assembly processes, forming a communication stream between designers, fabricators, and assemblers. For example, a fabricator that follows the designer's instructions to form two interlocking parts with a peg and a hole explicitly conveys the assembly instruction to the assembler through the form of those two parts. The structure of the value chain greatly affects the design of the artifact because it determines the type and amount of design information that can be embodied and conveyed through the value-adding processes. For example, the physical constraints of the transportation network, the suppliers' resources, the manufacturing tools, and the assemblers' capacity determine the size and shape of the manufactured parts that will flow through the value chain.
Therefore, the position and distribution of the value-adding processes in the value chain is a strategic decision. The more concentrated and the closer the value-adding processes are to the construction site, the less the noise and constraints of the chain affect the artifact. Conversely, the more distant the value-adding processes are from the construction site, the more vulnerable the artifact is to the noise and constraints of the structure of the network. Compare, for example, the probability of failure of producing an artifact whose parts are fabricated by a number of different fabricators located far from the construction site, to that of an artifact whose parts are fabricated by a single fabricator located inside the construction site. Clearly the first case is exposed to higher risk of failure (assuming independent failures, if each of n fabricators fails with probability p, production succeeds with probability (1 - p)^n, which falls quickly as n grows).
It turns out that the position and relationship of the value-adding processes determine the role of design as either explicit or implicit instruction in a value chain. If design is explicit, its purpose is to direct; if design is implicit, its purpose is to indicate. In the previous example it is clear that in the first case the designer needs to explicitly define all design instructions before production starts. In the second case, however, the designer can implicitly define or even modify design instructions during production, since all value-adding processes are on the construction site. The position and relationship between fabrication and assembly processes in the value chain have varied throughout history. A careful observation of their relation reveals important conclusions about the role of design in each case.
1.3 The traditional and the digital value chain
In the traditional value chain, fabrication and assembly took place at the final step of the chain. Both the transformation of raw materials into building components and the assembly of those components into the artifact were handled by the builder on the construction site. The designer would know what, but the builder would know how. The traditional value chain was experience-based: a great number of decisions were made on site. Therefore, design in the traditional value chain was an implicit description of form.

5. According to Simon, the amount of design information that is embedded in an artifact relates to its entropy; entropy measures the amount of uncertainty of information. See Simon, The Sciences of the Artificial, 189.
Figure 1. The traditional value chain

In the digital value chain, fabrication and assembly take place at different steps in the chain. Now the transformation of raw, amorphous materials into building components takes place in the middle of the chain, by the manufacturer, while the assembly of the components takes place at the end of the chain, by the assembler. The designer needs to know both what and how, and to instruct manufacturer and assembler. The digital value chain is knowledge-based: all decisions have to be taken before production starts. Therefore, design in the digital value chain is an explicit description of processes. For example, the assembler cannot use his experience to assemble a number of pre-manufactured parts, because the assembly sequence is already determined by the designer. As a consequence, any mistake in the design process is irreversible once manufacturing of parts has taken place.
Figure 2. The digital value chain

1.4 The artifact of the digital value chain: complex assembly
This paper defines the digital artifact as the product of the digital value chain. From this definition it follows that the digital artifact has a dual aspect: on the one hand, as an object, it is a complex assembly of customized parts; on the other hand, as a process, it is the result of a complex system of collaborating value-adding processes. The relation between these two aspects is that the value processes in the chain follow the assembly description as instructions to produce the artifact.

1.5 The problem of describing the digital artifact
This paper deals with the following question: what matters in the design description of the digital artifact? A proper design description should take into account both aspects of the definition of the digital artifact: description of the assembly structure and description of the value chain structure. A proper design description should therefore provide insight into how difficult the production of a digital artifact is. Difficulty of production is a function of two factors: difficulty of the assembly process and difficulty of the manufacturing process. This brings a new role to designers of the digital value chain: from designers of forms to designers of systems of processes. This paper's thesis is that the role of design in the digital value chain is to explicitly describe the process of production of the artifact rather than implicitly represent the form of the artifact. However, while the value chain has changed, the design strategy has remained the same. As a consequence, designers today design artifacts that they either cannot build or whose cost becomes enormous. Unfortunately, most of these incompatibilities are discovered either during construction or by building physical mockups, with a significant loss in both time and cost, and questionable reliability.
2. Background
2.1. Previous work on description methods
Previous research on understanding assemblability in architecture has focused on two main directions: CAD modeling (3D, 4D) and physical mockups. CAD 3D modeling has been used for modeling assemblies. However, a CAD 3D model represents the final state of the assembly, when all parts have been put together, not the process of putting those parts together. Moreover, the order of constraint delivery in CAD models has nothing to do with the actual constraint delivery of the real assembly. As a consequence, by studying a CAD 3D model the designer cannot tell whether a design is assemblable, nor can he estimate the difficulty of the assembly. CAD 4D modeling has been used for clash detection during the assembly sequence. However, 4D modeling similarly fails to describe the actual constraint delivery between parts. Physical mockups have been used during design development to test assemblability6. However, they incur a significant loss in time and cost. In this fashion, testing is empirical, understanding the solution to the geometrical problem is obscure, and design development becomes intuitive.
If digital manufacturing is a new field in architecture, we should perhaps seek the solution to the problem of description in the disciplines that have long dealt with this field. Manufacturing and industrial management have long dealt with modeling of both assemblies and production systems. Assembly modeling has been thoroughly studied in mechanical engineering and manufacturing using the liaison graph7. The liaison graph is a network whose nodes represent parts and whose connections represent liaisons. An assembly sequence can be explicitly defined as a series of nodes and liaisons. The liaison graph provides a concise and formal method to describe assemblies. For further explanation of Network Analysis methods the reader should refer to the bibliography; it is not the purpose of this paper to explain Network Analysis.

6. Sass, Lawrence, Dennis Michaud, and Daniel Cardoso. "Materializing a Design with Plywood", eCAADe, Frankfurt, Germany, September 2007.
7. Whitney, Daniel E. Mechanical Assemblies: Their Design, Manufacture, and Role in Product Development. Oxford University Press, 2004, 45.
System Dynamics is a methodology coming from Control Theory, originally developed by Jay Forrester8, for studying and managing complex feedback systems. A feedback system is a system in which information from the result of past action is a basis for decisions that control future action. A System Dynamics model is a tripartite network consisting of: states (stocks); processes (flows) that affect states; and decision variables that control processes. System Dynamics has been extensively used in modeling supply chains to evaluate their performance. For further explanation of how the System Dynamics methodology works the reader should refer to the bibliography; it is not the purpose of this paper to explain System Dynamics.

3. Proposal
3.1. Description of the digital artifact as a system of value adding processes
This paper attempts an alternative direction for describing the artifact of the digital value chain. Instead of describing form, the proposed method describes and simulates the network of processes that bring the artifact into life. For example, beginning from a liaison graph we can find a valid assembly sequence and then generate the sequence of processes that bring the artifact into life. Tracing back the predecessors of processes we can reveal the entire value chain. By evaluating the performance of the value chain, designers can gain a better understanding of the complexity and feasibility of their designs.
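As a minimal sketch of this proposal (not the author's implementation), the snippet below builds a liaison graph for a handful of hypothetical planar parts and reads off each part's nodal degree. The part names and normals are invented, and the networkx library is an assumption of convenience; the connection test anticipates Rule 1 of section 4.1, under which two planar parts can share a liaison only if their unit normals have a cross product of zero length (coplanar or parallel) or unit length (perpendicular).

import numpy as np
import networkx as nx

def can_connect(n1, n2, tol=1e-6):
    # Rule 1 (section 4.1): a liaison is possible only if the unit normals
    # have a zero cross product (coplanar/parallel) or a unitary one
    # (perpendicular) -- the constraint imposed by three-axis CNC cutting.
    c = np.linalg.norm(np.cross(n1, n2))
    return c < tol or abs(c - 1.0) < tol

# Hypothetical planar parts with unit normal vectors (illustrative only).
parts = {
    "seat": (0.0, 0.0, 1.0),
    "back": (0.0, 1.0, 0.0),
    "left_side": (1.0, 0.0, 0.0),
    "right_side": (1.0, 0.0, 0.0),
}

# The liaison graph: nodes are parts, edges are candidate liaisons.
G = nx.Graph()
G.add_nodes_from(parts)
for a in parts:
    for b in parts:
        if a < b and can_connect(parts[a], parts[b]):
            G.add_edge(a, b)

# The nodal degree of each part is a rough measure of how many simultaneous
# connections its installation step must achieve (Rule 2).
print(dict(G.degree()))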
4. Methods
4.1. Framework of the proposed model
As an example, this paper demonstrates a methodology that consists of the following steps: first, description of the assembly as a directed network of parts, where the direction of connections indicates constraint delivery; second, determination of the assembly sequence and evaluation of its difficulty using the nodal degree distribution, where nodal degree is the number of connections each node has with others; third, conversion of the assembly sequence into a task sequence; fourth, execution and evaluation of the task sequence in a System Dynamics model of the value-adding processes, in terms of time, cost, and risk. In short, the method evaluates the performance of a system of processes executing a system of tasks. Any artifact that will be fabricated and assembled in a value chain fits this description. The method should be considered a complementary design tool during design development. The following examples apply to assemblies of planar interlocking parts that are manufactured in custom shapes using three-axis CNC routers. Since the parts are planar, each part can be represented by a normal vector perpendicular to its plane. Three-axis CNC routers cut planar parts perpendicularly to their plane, constraining the cuts to have 90-degree bevel angles. Therefore, two parts can have a connection if and only if they are coplanar or perpendicular. The following rules regarding assemblability are briefly presented:
8. Forrester, Jay Wright. Industrial Dynamics. Pegasus Communications, 1961.
Rule 1: two nodes can be connected by a liaison if and only if their normal vectors have a zero or unitary cross product.
Rule 2: the difficulty of an assembly step is determined by the number of links the installed part has with the rest of the subassembly.
Rule 3: a part can be located by another part through one or more liaison connections. If there is more than one liaison connection, their liaison installation vectors must be parallel (a code sketch of this rule follows the experiment setup below).
Rule 4: a subassembly is the sum of two or more parts connected by liaisons. A subassembly can be represented as a single part.
Rule 5: two parts can be connected by a third part whose normal vector equals the cross product of the normals of the two parts.
Rule 6: if more than one liaison ends in a single part, that part can be installed only after all previous parts have been installed.

The value chain is modeled in System Dynamics as a chain of flows and stocks. Flows are controlled by the value-adding processes. Stocks represent the current state of the matter that flows in the chain (inventories). Value-adding processes control flows and change the state of the matter at every step of the chain. The value-adding processes that control the flows are the supplying rate, the manufacturing rate, and the assembling rate; they are connected through shipping rates. The difficulty of the assembly sequence and the properties of the value-adding processes (locations, capacities, etc.) determine the performance of the entire value chain.

4.2 Experiment
The following two experiments illustrate both the problems that designers confront with conventional design tools and the results of the suggested methodology. The first experiment concerns the design, fabrication, and assembly of a chair made from interlocking planar parts. The chair was designed in 3D CAD modeling software (Rhinoceros) and the parts were fabricated from planar plywood sheets on a three-axis CNC router. Modeling of the assembly focused on representing two states of the artifact: the assembled form, where all parts are put together, and the flattened parts in cut-sheets for fabrication. The assembled form seemed to be a valid configuration of the artifact, with no clashes between the solid volumes of the parts. Unfortunately, the assembly process stopped at a certain point; installation of parts became impossible due to conflicts in the installation vectors. The designers had no tools to describe, understand, and evaluate the assembly process.
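The sketch below gives a minimal reading of Rule 3 with hypothetical installation vectors; it checks only the collinearity condition stated above and does not reproduce the author's analysis.

import numpy as np

def parallel(v1, v2, tol=1e-6):
    # Two installation vectors count as parallel when their cross product vanishes.
    return np.linalg.norm(np.cross(v1, v2)) < tol

def satisfies_rule_3(installation_vectors):
    # Rule 3: a part located by more than one liaison is installable only if
    # all of its liaison installation vectors are mutually parallel.
    first = installation_vectors[0]
    return all(parallel(first, v) for v in installation_vectors[1:])

# Hypothetical cases: same-direction vectors pass, skewed vectors jam.
print(satisfies_rule_3([np.array([0, 0, 1]), np.array([0, 0, 1])]))  # True
print(satisfies_rule_3([np.array([0, 0, 1]), np.array([0, 1, 0])]))  # False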
Figure 3. Manual assembly jammed on the eighteenth part

A representation of the assembly with the liaison graph clearly shows that the assembly sequence is in fact impossible, due to installation vector incompatibility between parts. The analysis shows that assembly should jam at the eleventh step, because after that each next part would have to simultaneously connect along nine non-parallel installation vectors with the rest of the assembly. The real assembly jammed later, however, due to the looseness of the notches of the parts.
Figure 4. The liaison graph of the chair

The second experiment concerns the design, fabrication, and assembly of a mockup of a façade panel. Design development took place in parametric 3D CAD modeling software (CATIA). In this case, while the assembly was successful, it proved difficult and took significantly more time than the designer expected. While this example is relatively simple, involving a small number of parts, it clearly demonstrates the lack of tools designers need to understand the assembly process.

Figure 5. The assembled façade panel

A representation of the assembly with the liaison graph shows that while the assembly is possible, two steps in the assembly sequence are highly difficult because they require simultaneous connections. The nodal degree distribution along the actual assembly sequence shows the difficulty of each step as a function of the number of connections that have to be achieved with the rest of the assembled artifact. The nodal degree sequence is then inserted as input into a simple System Dynamics model that represents the assembling process. The model clearly shows that the assembling rate will drop significantly at the 12th and 23rd steps of the assembly sequence.
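A toy stock-and-flow simulation in the spirit of this model is sketched below. The degree sequence is invented (with spikes at steps 12 and 23 to mirror the result above), and the assumption that the assembling rate falls in inverse proportion to nodal degree is an illustration, not the paper's calibrated model.

# Hypothetical nodal degrees along an assembly sequence (invented values).
degree_sequence = [1, 1, 2, 1, 3, 1, 1, 2, 1, 1, 1, 6,
                   1, 2, 1, 1, 1, 1, 2, 1, 1, 1, 5]

parts_waiting = len(degree_sequence)   # stock: parts delivered to the site
parts_installed = 0                    # stock: parts fixed in the assembly
base_rate = 1.0                        # flow: parts per unit time at degree 1
t = 0.0

for step, degree in enumerate(degree_sequence, start=1):
    rate = base_rate / degree          # assembling rate at this step
    t += 1.0 / rate                    # time spent installing this part
    parts_waiting -= 1
    parts_installed += 1
    if degree >= 4:
        print(f"step {step:2d}: degree {degree}, assembling rate drops to {rate:.2f}")

print(f"total assembly time: {t:.1f} time units for {parts_installed} parts")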
Figure 6. Assembly sequence matrix and degree distribution
Figure 7. System Dynamics model of assembly process

5. Results
The presented method succeeded in revealing information that cannot otherwise be studied with typical digital modeling techniques. This information has to do with modeling processes rather than modeling forms. High-risk points in the process were located; identifying them during design would have been valuable to the designers had they followed this methodology. Another benefit of the approach is its high level of abstraction: network-based modeling and System Dynamics modeling can take place during design development without requiring detailed geometric information. This is significant during design development in the digital value chain.

6. Discussion
This paper showed and explained why the role of design has changed in the digital value chain. It illustrated the problems designers face by demonstrating examples. The paper moreover suggested a systemic direction for designers to evaluate design processes from the perspective of the value chain. The method, however, only indicates a direction and by no means constitutes a panacea. Unless designers realize their new role, they will be disconnected from, and stripped of authority in, the digital value chain. This paper concludes by encouraging designers to explore tools from systems theory and integrate them into design development, to better frame and understand problems of digital design for production.

7. Bibliography
Bertalanffy, Ludwig von. General System Theory: Foundations, Development, Applications. George Braziller, 1976.
Dori, Dov. Object-Process Methodology. Springer, 2002.
Forrester, Jay Wright. Industrial Dynamics. Pegasus Communications, 1961.
Forrester, Jay Wright. Principles of Systems. Pegasus Communications, 1968.
Newman, Mark E. J. "The structure and function of complex networks." SIAM Review vol. 45 (2003): 167-256.
Mitchell, W. J., and McCullough, M. Digital Design Media, 2nd ed. Van Nostrand Reinhold, 1995.
Ohno, Taiichi. Toyota Production System: Beyond Large-Scale Production. Productivity Press, 1988.
Porter, Michael E. Competitive Advantage: Creating and Sustaining Superior Performance. Free Press, 1998.
Sass, Lawrence, Dennis Michaud, and Daniel Cardoso. "Materializing a Design with Plywood", eCAADe, Frankfurt, Germany, September 2007.
Schodek, Daniel, Martin Bechthold, James Kimo Griggs, Kenneth Kao, and Marco Steinberg. Digital Design and Manufacturing: CAD/CAM Applications in Architecture and Design. Wiley, 2004.
Shannon, Claude E., and Warren Weaver. The Mathematical Theory of Communication. University of Illinois Press, 1998.
Simon, Herbert A. The Sciences of the Artificial, 3rd edition. The MIT Press, 1996.
Steward, Donald V. Systems Analysis and Management: Structure, Strategy and Design. Petrocelli Books, 1981.
Whitney, Daniel E. Mechanical Assemblies: Their Design, Manufacture, and Role in Product Development. Oxford University Press, 2004.
Womack, James P., and Daniel T. Jones. Lean Thinking: Banish Waste and Create Wealth in Your Corporation. Simon & Schuster, 1996.
8. Acknowledgements
The first experiment (design, fabrication, and assembly of a chair) is a team project in class 4.580: Inquiry into Computation and Design (Prof. Terry Knight, Prof. Lawrence Sass), Massachusetts Institute of Technology, Fall 2006. Team members: Joshua Lobel, Magdalini Pantazi, Dimitris Papanikolaou. The second experiment (design, fabrication, and assembly of a mockup of a façade panel) is an individual project in class 4.592: Special Problems in Digital Fabrication (Prof. Lawrence Sass), Massachusetts Institute of Technology, Spring 2007. Team member: Dimitris Papanikolaou. All figures are property of the author.
Reality-Informed-Design (RID): A framework for the design process
Rodrigo Martin Quijada, Arch. M.S.
School of Architecture / University of Santiago de Chile, USACH
rmartin@lauca.usach.cl

Abstract
The "action" of design is an integration process in which values, information of different kinds, and data lead to a physical object of "design". This integration process is non-linear and aimed at multiple objectives, placing complex requirements on computer programs. RID systems intend to develop a new tool for the design process, using an evolving structure, with the aim of introducing basic levels of "self-awareness" into the design process in order to relate analogue and digital tools. This paper proposes an interpretation of the design process, a model for it, and first ideas for a possible new generation of "self-aware" design software.

1. Design process
In the design process we integrate not only information (data); equally relevant are values, intuition, imagination, and chance. To achieve a fruitful integration between the rational and the emotional or intuitive world, we aim to bind both realities in one meaningful, comprehensive, self-aware whole. Consequently meaning, purpose, usefulness, comprehensiveness, and "self-awareness" are what we integrate throughout a design process. The application of mathematical models to the design process helps us to understand its inner structure and, most importantly, gives us the opportunity to find out how "self-awareness" could be achieved.
Among the traditional models applied to design are optimization models, which lend a useful perspective for describing the design process. The sequence of design actions integrating non-linear variables to be solved, finally obtaining a designed object, can be compared to an optimization process, specifically a logic-deductive one. An enormous tradition of mathematical (and computational) methods supports this way of proceeding. The gradient or hill-climbing1 methods search for a solution using an iterative and heuristic approximation to an optimum. They explore the space of solutions using a neighbor search method and an evaluation function that describes the aim of the search. In the design process we can find a similar behavior in the search for design alternatives, but the "evaluation function" or "design objective" tends to evolve and include new data during the process, and in some cases to be unknown. This "unknownness" is where we should look for the "self-awareness" in the design process. Another important fact about optimization is the way local and global optimality problems are solved in cases of high combinatorial complexity. In these cases there are multiple optimal points in the space of solutions, and it is possible to get trapped in one local optimum, obstructing the search from continuing to the global optimum (Figure 1).

Figure 1. Global versus local optimals
1. Stuart J. Russell and Peter Norvig, "Artificial Intelligence: A Modern Approach" (2nd ed.), Upper Saddle River, NJ: Prentice Hall, 2003, pp. 111-114.
This is common in the design process: the designer chooses a course of action in one specific aspect of the design possibilities, without simultaneously including others, which are then forgotten in the next steps. Consequently the final object of design does not include all the variables needed to be "complete"; it is a "local optimum". Meta-heuristics such as genetic algorithms2 and tabu search3 use mechanisms to avoid getting stuck in local optima: the mutation included in the genetic process, or the diversification of the search. These stochastic mechanisms are very similar to the explorative, random search systems used when a designer cannot advance in the process of design. Only creativity and innovation will then play the role of "self-awareness" as a function to avoid "dead ends" in a design process.
If we applied optimization methods to the design process directly, it would become "automated" design, searching for the best way to solve one specific problem at a time. I propose to focus the optimization analysis not on the object of design but on the process of design itself. The optimization applied to the design process would then be generic and open, not restricted to a specific design object. If we center our optimization efforts on the process of design, some interesting consequences appear:
A) Quantitative and qualitative variables can co-exist in the process, because it is not necessary for them to be combined; it is only necessary to establish a "quantified relation" between them, similar to a "weight matrix of relations".
B) The optimization process should be less deterministic and more explorative. This "fuzzy" mechanism would be easy to integrate into the different parts of the process of design, in an effective way, up to the point at which this integration represents an "awareness" of the process in itself.
C) If the process of design is the optimization objective, rather than the design object, one of the main concerns of this process will be to integrate analogue and digital processes in a coherent way, to achieve consciousness of the process in itself.
We can understand the process of design as a sequence of design acts, incorporating the phases and variables of design and generating design decisions through validation patterns. Each of these steps is composed of a series of design acts adequate to that particular process and designer. The optimization of the process of design will be the organization of tools to support the design acts of the designer (scripts and others) in order to enhance the process of design. But this mechanism will have to be adaptive and evolving, because the definition of the process is impossible to know in advance (a priori). This "a priori" definition can only be guaranteed if the process itself develops a "consciousness" to avoid determinism. Genetic programming4 states that it is possible to define a "program synthesis or program induction", that is, to generate an evolving process in which a specific program for a specific task can be generated rather than directly programmed. So what we could understand as a static list of instructions for a procedure transforms into a group of actions which evolve genetically, in an automated fashion, in order to perform that task. In the same way, we could understand the optimization of the design process as the genetic design of the process itself. But to do this we have to understand the "code of design" and the variations of the process in order to evolve it.
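A minimal sketch of the local-optimum trap and one diversification mechanism is given below; the evaluation function, the restart strategy, and all parameters are invented for illustration and are not the RID system itself.

import math
import random

def hill_climb(f, x, step=0.1, iters=200):
    # Greedy neighbor search: it moves only uphill, so it stops at the first
    # local optimum it reaches -- the trap described above.
    for _ in range(iters):
        candidate = x + random.uniform(-step, step)
        if f(candidate) > f(x):
            x = candidate
    return x

def with_restarts(f, n=20):
    # A simple diversification mechanism in the spirit of the meta-heuristics
    # above: re-seeding the search at random points to escape local optima.
    return max((hill_climb(f, random.uniform(-4.0, 4.0)) for _ in range(n)), key=f)

# A hypothetical "evaluation function" with several local optima.
def f(x):
    return math.sin(3.0 * x) - 0.1 * x * x

best = with_restarts(f)
print(f"best x = {best:.3f}, f(x) = {f(best):.3f}")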
2. Vose, Michael D., "The Simple Genetic Algorithm: Foundations and Theory", Cambridge, MA: MIT Press, 1999.
3. Fred Glover and M. Laguna, "Tabu Search", Norwell, MA: Kluwer, 1997.
4. John Koza, "Genetic Programming: On the Programming of Computers by Means of Natural Selection", Cambridge, MA: MIT Press, 1992.
2. Design code
The "design" patterns that have emerged from dynamical models like fractals are attractive, autonomous, and complex pattern compositions. These design patterns are self-supporting deterministic design dynamics, representing mathematical equations in a very fashionable way. I propose to look at the design process as a dynamic system composed of the equilibrium of three phases, through which the process passes in an a-periodic cycle. Each of these phases represents an interacting equation of the design process. As in a "strange attractor"5, it is impossible to know in what order the process will pass from phase to phase; we can only be sure that it is going to stay inside that approximate trajectory. By this I do not mean merely being sure that it "stays inside", but rather that the process in itself overcomes "dead ends" and short-cut determinism. We need to guarantee that the process stays open to creativity and innovation, is aware of the need for a continuous "push-on" process, and always finds more alternatives than a restrictive dynamic toward obvious ends would offer. These three phases are composed of a group of actions (or scripts) related to the phase of design in which they are involved.

Figure 2. Strange attractor

These three phases can be characterized as:
2.1. Increasing complexity phase
In this phase a design act is confronted with a validation process in which the number of validation patterns is higher than the number of design alternatives. Consequently the probabilistic solution space grows, and the design process increases its complexity. This phase uses an evolutionary exploration to generate new alternatives.
2.2. Stability phase
This phase is characterized by equilibrium between the number of design acts and validation patterns, in which sequential iterations develop design alternatives that explore deeply into every parallel possibility of the design object. In this phase it is possible to use a genetic optimization system, and the parametric development between the different design alternatives, to create synergy between the parallel improvements. This phase creates a "memory" of the decisions taken, developing time-free associations in which every possibility affects all others. In this phase, from the created memory, we should evolve the "self-awareness" function of the design process.
Figure 3. Lorenz attractor
2.3. Inductive-associative phase
In this phase the reduction of design alternatives takes place. This is achieved by association or grouping of similar or equivalent design alternatives, or by direct choice of the designer. The direct choice of the designer, once interacting with the "self-awareness" of the process, makes it possible to start endless new design cycles. This reduction of possibilities can be the end of the process, or it can also be the start of a new cycle.
In this model we can identify the roles of the two principal actors involved: the algorithm must maximize the number of validation patterns, obtaining them from reality and from the process, and the designer must minimize the design alternatives, by association and induction.

5. "Strange attractor": the reference for a non-periodic system in which unpredictability is contained in a limited "space of probabilities".
A second implication is that the model cannot be expressed through a linear equation; it has to be represented by a non-linear and dynamic equation that depends on the states of the process. What we know for sure is that the design process is far from being a simple integration of rules (or restrictions) in a linear process of increasing data inputs. In fact, the world around us is far more multiform, multilateral, meaningful, and comprehensive. Changes in our surroundings are dynamics which evolve constantly. What we call "reality" is not the same after one minute, or one second, or even after one nanosecond. This model should be new in the sense that it does not need to be reconstructed and adapted every time the external conditions change. It should have enough responsiveness, sensitivity, and "self-awareness" capacity to adapt to changing patterns, and the process itself should lead this dynamic. The design "interface" should be able to integrate changing patterns in a way predictive of what we expect the process of design to achieve. The model proposed corresponds to this dynamical form. The three phases develop in a non-sequential form in the design process, depending on the characteristics of the process itself. Every phase is composed of multiple design acts and actions specific to that particular phase. And the process can jump from one phase to another at any moment, depending on the decision of the designer.

3. Designer
First we have to realize that "complexity" is not an outside phenomenon that we can observe. We are part of that phenomenon; we have been "designed" as a constitutive part of it, and its complex patterns are part of us as we are part of them. The action of design is wrongly seen as an action developed by some individuals at specific moments to interact with the context and change it. If we look at the process used to decide a trajectory during a route decision in everyday life, the components of analysis, evaluation, decision, and dynamic evaluation of the decision taken during the course of the action, which feed back into the process, are very similar to those used in the design of an object. The differences are the subject of the process and the complexity of the variables. So almost every way in which we interact with reality uses some or all of the components of this process. What I am saying is that "design acts" are a constitutive part of the consciousness of every living organism, and that the designer cannot be separated from the design process and the "awareness" of this process in itself. So if we intend to understand the process, we first have to overcome the practical separation between designer and design act (or process). The visual process, which is very easy to assume to be an eye-object relation defined by physical data, reveals that only 20% of the electric impulses that activate the visual cortex of the brain come directly from the eye; the rest comes from other parts of the brain6 (with some feedback from the same brain spot). This can be understood as a need to interpret vision before actually seeing: what we call "vision of our surroundings" is mostly a vision of our own interpretation of the world. And what is all of this 80% of "inner vision"?
Basically, a self-construction of reality developed during our lives: our "visual" memory, which represents the "self-awareness" of the visual organic function with respect to the visual process. Some authors refer to this as a protection system of the brain against the enormous amount of data received by the visual process, which is filtered by a memory of "what I recognize and understand", excluding data not yet learned. But since one of the main relations that a designer establishes with the design object is through visual perception, we can call the visual uprising of a not previously seen object "creation".

6. F. Varela and H. Maturana, "El Árbol del Conocimiento", Madrid: Ed. Debate, 1984.
If we assume that we cannot see what we don't understand, creation is first understanding and then bringing into sight. In the design process, we balance our interpretation of reality with reality itself, creating a new structured relationship between them. But a question emerges: where does the need for a design action come from? Why is it necessary to design something, and how does this need transform itself into action? The design act can be seen as a constitutive part of reality, a moment in which reality itself increases its structural level and develops an improvement. So a creative design act is self-awareness of reality, an improvement carried out by a designer.

4. Modeling RID
A comprehensive and dynamic Reality-Informed-Design model (RID)7 would be a new trend in the design world. RID should not be understood as an enhanced CAD tool, since its main aim is not to aid in the design of an object but to inform a design process in its integration with reality. This means establishing a relation between the act of design and the structure with which it interacts. RID design tools should be oriented toward a broad incorporation of a new kind of "data-patterns" and interfaces, most principally the "link-patterns" and "connection-fluxes" of data. The RID framework corresponds to a dynamical process that moves through the space proposed in the model, following the design process developed by the designer. This movement corresponds to an a-periodic cycle composed of the three identified phases of the design process. To achieve all of the above, the structure of the framework should have at least two components: a main structure leading the process direction and constructing its memory; and a smaller group of procedures (scripts), coherent with the phase in which the design is involved, constructing the "design acts".
• Like a machine-learning network, the main structure is composed of a net of logical connectors corresponding to validation patterns. The initial weight matrix of connections is defined by the hypothesis presented by the designer8, establishing a heterogeneous matrix of relations between the different validation patterns. During the process of design, the "weights" of the connections evolve, the number of validation patterns decreases or increases, and the changes are stored. The storage of changes in the main structure constitutes the dynamic sequence of "time-events". We should call this the time-event implications model of the design process (a toy sketch follows this list).
• The second structure of the RID framework corresponds to an evolving group of procedures that establish a direct and ongoing link between "inputs" and "outputs" during all the "time-events" of a design process. These procedures (or scripts) correspond to the relation established between designer and software (i.e., assessment of size and scale, 3D/2D translation, explorative iterations, etc.). The type or family of scripts will be defined by the position-time at which the process sits in the model. We should call these "time-event" singularities.
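The sketch below is a deliberately small reading of the first component: a symmetric weight matrix over a handful of invented validation patterns, updated by design acts and logged as time-events. The pattern names, values, and update rule are all assumptions made for illustration; the paper specifies no such API.

import numpy as np

# Hypothetical validation patterns and an initial weight matrix of relations,
# standing in for the designer's hypothesis (names and values are invented).
patterns = ["structure", "daylight", "cost", "circulation"]
W = np.array([[0.0, 0.6, 0.2, 0.1],
              [0.6, 0.0, 0.3, 0.5],
              [0.2, 0.3, 0.0, 0.4],
              [0.1, 0.5, 0.4, 0.0]])

history = []  # stored sequence of "time-events": the memory of the process

def design_act(W, i, j, reinforcement=0.1):
    # One design decision strengthens the relation between two validation
    # patterns (symmetrically) and is logged as a time-event.
    W = W.copy()
    W[i, j] += reinforcement
    W[j, i] += reinforcement
    history.append((patterns[i], patterns[j], reinforcement))
    return W

W = design_act(W, patterns.index("daylight"), patterns.index("cost"))
print(W)
print(history)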
To support and enhance the "designing act", understood as a sequence of value-oriented design decisions, is a far more strategic aim, one which brings us beyond the representation of the designed products.

7. Rodrigo Martin and Danilo Lagos, "Between Analogue Design and Digital Tools", A+C (vol. 2), Santiago: Universidad de Santiago USACH, 2007, 26-35.
8. Similar to the training phase of a Neural Network, in which the net is presented with training data and the internal weights are adjusted to obtain the wanted result, before new (and unknown) data are presented to the net. In this case, training and operation are mixed together.
If we can thoroughly assess or validate the qualities, purpose, and design consistency during the design process, we begin to understand the complexity and the implications of the "designing act" as a validated and relevant "time-event" design sequence. This will provide the designer with an ongoing "validation assessment process" over his own design decisions. Designers in the very near future will have the opportunity to share new continuous "time-event" design models and "validation-pattern" strategies, which will deepen our understanding of how we make relevant design decisions during the design process. This will bind the design community together in a network by sharing a constantly growing flow of design "time-event" singularities. We want to reach the point at which, much more than to the design product, we are paying attention to the multiple ways of validating the birth of self-awareness inside the design process, which means a new decision dynamic with an increasing independence from design commands: "The design is the process in itself". This awareness of the design process gives us the possibility to link all our "design singularities", creating a global network of design decisions on-line.

6. Figures
1. Rodrigo Martin, "Local and global optimal", 2008.
2. James Gleick, "CAOS la creación de una ciencia", Barcelona: Seix Barral, 1998, 151.
3. K. Alligood, T. Sauer, and J. Yorke, "Chaos, an introduction to dynamical systems", New York: Springer-Verlag, 1996, 547.

7. Endnotes
1. Stuart J. Russell and Peter Norvig, "Artificial Intelligence: A Modern Approach" (2nd ed.), Upper Saddle River, NJ: Prentice Hall, 2003, pp. 111-114.
2. Vose, Michael D., "The Simple Genetic Algorithm: Foundations and Theory", Cambridge, MA: MIT Press, 1999.
3. Fred Glover and M. Laguna, "Tabu Search", Norwell, MA: Kluwer, 1997.
4. John Koza, "Genetic Programming: On the Programming of Computers by Means of Natural Selection", Cambridge, MA: MIT Press, 1992.
5. Robert M. Harnish, "Minds, Brains, Computers", Massachusetts: Blackwell Publishers, 2002.
6. F. Varela and H. Maturana, "El Árbol del Conocimiento", Madrid: Ed. Debate, 1984.
7. Steven Pinker, "How the Mind Works", New York/London: W.W. Norton, 1999.
8. K. Alligood, T. Sauer, and J. Yorke, "Chaos, an introduction to dynamical systems", New York: Springer-Verlag, 1996.
9. Rodrigo Martin and Danilo Lagos, "Between Analogue Design and Digital Tools", A+C Arquitectura y Cultura, vol. 2, Santiago: School of Architecture USACH, 2007, 26-35.
10. James Gleick, "CAOS la creación de una ciencia", Barcelona: Seix Barral, 1998.
Algorithmic Transparency
Sergio Araya
Massachusetts Institute of Technology, School of Architecture, PhD in Design and Computation
77 Massachusetts Avenue, room 9.268
sergio_a@mit.edu

Abstract
This paper describes the procedures developed in the creation of an innovative technique to design and manufacture composite materials with transparency and translucency properties. The long-term objective of this research is to develop a method to design and fabricate architectural elements. The immediate objective is to develop the methodology and procedural techniques to design and manufacture a composite material with controlled, non-homogeneous transparency properties. A secondary objective is to explore different levels of "embedded behavior" or responsiveness by using these techniques to combine different physical material properties in newly designed "smarter" and "responsive" composite materials.
1 Introduction
"Transparency may be an inherent quality of substance, as in a glass curtain wall; or it may be an inherent quality of organization. One can, for this reason, distinguish between a literal and a phenomenal transparency."1

One very evident consequence of the adoption of modern technologies in the contemporary city is the result of the separation of structure and skin, manifest in the proliferation of transparent building envelopes. For some critics this has also implied the vanishing of the façade, or of "architecture's face", as pointed out by Anthony Vidler.2 Several years before, Colin Rowe had established a distinction between literal and phenomenal transparency by comparing façade treatments on Le Corbusier and Gropius buildings. By doing so he left open questions about the use and interpretation of transparency in architecture, as a material physical condition and/or as a spatial or organizational condition. While this paper does not try to respond to or contest any of Rowe's interpretations, it builds from these distinctions; it presents an operational and instrumental approach within a framework initiated from them. This paper elaborates on the possibility of applying a procedural design approach to develop non-homogeneous material properties of transparency and translucency. Fixed definitions of such properties, in which transparency and opaqueness are interpreted as absolute values and pure states, are challenged by actually creating gradual variation and continuous yet heterogeneous performative values. Furthermore, it explains how a different condition, specifically related to material transparency, is developed, which could be categorized as an ambiguous or intermediate stage between the literal and the phenomenal stages defined by Rowe: a multifarious transparency, where these stages can be co-present simultaneously. The method developed here consists of using procedural composition techniques that combine different materials with different material attributes to create new properties. Optical fibers were chosen to take advantage of their conductive qualities.
Optical fibers conduct light from one end to the other based on total internal reflection. This gives the composite another characteristic, different from that of glass or other transparent materials: depth, or freedom of location. It embeds in the component the possibility of spatial depth or spatial transparency, beyond surface depth into volumetric and spatial depth. I will call this deep transparency.
"The figures are endowed with transparency; that is they are able to interpenetrate without an optical destruction of each other. Transparency however implies more than an optical characteristic, it implies a broader spatial order. Transparency means a simultaneous perception of different spatial locations. Space not only recedes but fluctuates in a continuous activity. The position of the transparent figures has equivocal meaning as one sees each figure now as the closer now as the further one"3
A second question raised by Rowe comes from his formalistic analysis, in which formal organization, beyond material conditions, played an essential role in identifying these characteristics. In that regard, this paper presents the layered logics embedded in the creation, development, and fabrication of these elements in a number of prototypes at different degrees of development and with different levels of functionality. This organizational relation creates a second characteristic: divergence, or freedom of coherence. I will call this distributed transparency. In Figure 1, a testing prototype incorporating optical fibers of varying sizes and grid ratios illustrates the effect of material composition on the performative attributes achieved.
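For reference, the standard fiber-optics relations behind this light guiding (these are textbook optics, not taken from the paper): light launched into the fiber core is trapped by total internal reflection whenever it meets the core-cladding boundary beyond the critical angle, and the acceptance cone at the fiber end is set by the numerical aperture:

\sin\theta_c = \frac{n_2}{n_1}, \qquad \mathrm{NA} = n_0 \sin\theta_{\max} = \sqrt{n_1^2 - n_2^2}

where n_1 is the refractive index of the core, n_2 that of the cladding, and n_0 that of the surrounding medium (approximately 1 for air). Rays within this cone propagate to the far end regardless of how the fiber is routed, which is what permits the "freedom of location" described below.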
Figure 1. Material composition and performance

2 Methods: Deep transparency, distributed transparency
The research described in this paper was conceived as a sequential series of explorations combining design routines and CNC fabrication techniques. Custom procedures were developed for these experiments and, learning from each implementation, adjustments and therefore variations within these procedures were pursued in order to exploit properties observed in the resulting prototypes. The outcome of this research is used later, in further implementations, to create three-dimensional display systems. This further development also required investigation of image and signal processing in order to match the right bits of information with the distributed array of pixels produced through the method described in this paper. That specific research, developed by designer Orkan Telhan, will not be included in this presentation and is the subject of its own publication. Detailed here is the research
related to the design and fabrication of the physical implementation of such a system. Part of the investigation and of the prototyping was done in collaboration with designer Hector Ouilhet, especially those aspects of the research related to the adaptation of these principles to the later implementation as a three-dimensional display system. The method presented here decomposes a particular media input into a myriad of small bits, transports each of them through space, and then reassembles them, gaining in the process two degrees of freedom: location, which enables deep transparency, and coherence, which enables distributed transparency. Location, because this method allows transparency to be translated from its origin to a new, possibly distant location: fibers can be bundled, piped, and embedded within a substrate, transporting information/data from one point to a distant other. Coherence, because the information/data has to be reorganized to be reproduced, and this can be done in infinite ways. It can be reassembled consistently with the original input, but it can also be fragmented and distributed, reorganized to encrypt the original input, or simply decomposed into a myriad of bits and scattered through space. This enables the physical routing of the fibers to act as a material computing device, enabling the creation of "image editing filters". As shown in Figure 2, a process of splitting into a number of bits and routing them across a distance takes place. Later, a reassembly of these bits or pixels is required to reconstitute the source.
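A toy sketch of this "freedom of coherence" follows: the same bundle of fibers can reassemble an input pixel grid faithfully or under an arbitrary permutation, scattering or effectively encrypting the image. The grid size, seed, and list-based representation are invented for illustration.

import random

w, h = 4, 3
input_pixels = list(range(w * h))   # pixel indices on the input end

identity = input_pixels[:]          # coherent reassembly: image preserved
scattered = input_pixels[:]
random.seed(42)
random.shuffle(scattered)           # incoherent routing: image scrambled

def route(pixels, mapping):
    # Each fiber carries one pixel from the input end to its mapped
    # position on the output end.
    out = [None] * len(pixels)
    for src, dst in enumerate(mapping):
        out[dst] = pixels[src]
    return out

print(route(input_pixels, identity))    # same order: coherent reassembly
print(route(input_pixels, scattered))   # permuted: distributed transparency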
Figure 2. Organizational spatial transparency, beyond mere physical

The starting point for a matrix distribution is a regular orthogonal grid. A grid is an optimal distribution of the fibers in the matrix material, from a logistical as well as a data-management perspective, but a non-homogeneous and organic distribution was a design driver of this research, specifically targeting the gradual distribution of material properties in the final composite. Gradients, scattering, patterning, and other compositions become available as performative variations. Several layers of transformation were overlaid in order to produce an ambiguous yet defined condition of organization. As in liquid crystals, a nematic phase is sought and achieved: a condition that is in-the-process-of-becoming-but-not-yet, between order and disorder, between fluid and rigid. Both aesthetics and function are affected by this reorganization. Figures 3 and 4 show a composition where both depth (location) and distribution (coherence) are variable and non-homogeneous.
Figure 3. Continuous gradients of material distribution
Figure 4. Spatial organizational transparency, beyond mere physical

Current developments, including some new products available in the market today, have used fiber optics embedded in building materials such as concrete. Most of these designs, though, exploit the properties of the material only through their physical transparency, ignoring the capacities present in the composite material through its organizational transparency, which adds two new degrees of freedom: location (depth) and coherence (order). Furthermore, all these attempts basically recreate properties already present in glass, so these concrete tiles behave similarly to translucent glass. The explorations here intend to expand the range of effects embodied in these components, to enhance the modes in which they affect architectural space.

3 Material Composition
Composite materials are generally made by the combination of two basic materials, a matrix and a reinforcement. The matrix surrounds the reinforcement and fixes it in place, while the reinforcement contributes the mechanical and physical properties that enhance the matrix properties. This method proposes the creation of a non-homogeneous composite material. The material is formed by the distribution of a transparent reinforcement (optical fibers) within a non-transparent matrix. The distribution is done using a custom-developed program that operates on a parametric model, where density, variation, and location are variable parameters. Given the conductivity of the optical fibers, light travels through them in both directions. A system is created in which this potential is used to display a video source, with each fiber mapped to a pixel of the video source. Figure 5 shows a test where a video source is fed to the input end of the fibers, which act as a distributed pixel system. Figure 6 demonstrates the bidirectionality of the conduction: a prototype is placed outdoors and, while being fed a video source
from one end, a person passes by the other end and the silhouettes show through to the video-input side, using only sunlight as the light source.
Figure 5. Distributed transparency for pixel liberation

To operate such a match, a series of transformations is required; they are described below. The procedure implemented consists of a series of scripted functions that manage the different stages of the material design. The project implements five stages of development (a code sketch of the matrix-distribution stage follows the list):
• On-surface distribution (this procedure takes a digitally modeled object and uses an algorithmic distribution to create a pattern on the surface of the object, using local variables and adjustable parameters)
• Unfolded distribution (this procedure takes an approximated, facetted, and unfolded set of shapes and unfolds the distribution pattern based on local positions)
• Optimized distribution (this procedure reorganizes the distribution pattern using a transformation of the original relative positions to absolute positions on a rectangular grid of tag-numbered shapes)
• Matrix distribution (this procedure creates a list of every point or fiber location so it can be read by the parser application, which is later responsible for assigning the correct pixel to each fiber; each fiber location is still relative inside the local surface, in order to relate to the actual distribution, but ordered in a uniform grid in order to relate to the input distribution)
• Uniform input distribution (this procedure takes the total number of fibers in the matrix and optimizes their location in order to create a consistent and uniform grid for the input side)
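As noted above, here is a minimal sketch of the matrix-distribution bookkeeping: each fiber keeps its patch-relative position for the output side while receiving an absolute slot on a uniform input grid. Patch names, counts, and the record layout are invented; the paper's actual parser and data format are not described at this level of detail.

# Fibers per tagged surface patch (hypothetical counts).
patches = {"A": 3, "B": 2, "C": 4}

registry = []   # one record per fiber, readable by a parser application
slot = 0
for patch, count in sorted(patches.items()):
    for local_index in range(count):
        registry.append({
            "patch": patch,          # which unfolded strip the fiber exits on
            "local": local_index,    # relative position inside the patch
            "input_slot": slot,      # absolute position on the uniform grid
        })
        slot += 1

# The parser would use input_slot to assign the correct pixel to each fiber.
for record in registry:
    print(record)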
Figure 6. Dual-sided display performance
Although theoretically only two distributions are needed, input and output, in practice, when the prototypes were scaled up in size and number, some intermediate steps were required. Specifically targeting the later implementation as a display, the complexity of the assembly process and the prefiguration of possible maintenance issues demanded a series of "control screens" in order to properly track and organize the fibers in smaller groups, and also to aid in the error correction process after assembly. This set of solutions is required both to locate and control each fiber position individually and for checkups and maintenance, in case a replacement is needed.
3.1 On-Surface Distribution
The fiber distribution over the object's surface was written as a scripted routine inside a parametric platform, Generative Components. The objective was to facilitate the early design phases, iteration and decision making by controlling certain variables which determine the distribution. The routine requires a digital model to apply the distribution on, of which it is independent, precisely in order to be able to explore different design alternatives. The object's surface is then subdivided so that local areas can be addressed with ad hoc precision. This subdivision is controlled by a set of values that provide both linear and non-linear series, resulting in a non-uniform subdivision system that enhances the ability to specifically target conflicting areas (such as extremely convoluted double-curved moments of the surface). The density of points to be created inside each surface subdivision is controlled by independent variables for its relative U and V values. This implementation used a uniform set of values for each surface subdivision, as it was easier to manage later on the backend side when individually assigning each pixel to a specific fiber.
3.2 Unfolded Distribution
Several tests were performed using CNC techniques, but given the complex form factor of the later prototypes, a 5- or perhaps even 7-axis CNC router would be required to align the spindle normal to the surface at every point and drill accordingly. Given the limitations in time and budget, some prototypes used approximated surfaces and all-parallel milling, some on a 3-axis CNC router. The more complex prototype used a different procedure: an alternative path was delineated, printing a set of patterns which could be used to cover the surface of the object and serve as a template to drill all the holes through which each fiber is passed and fixed. The procedure takes the information of the surface subdivision obtained in the previous stage and projects a new approximated surface using flat triangular facets. This projection is needed in order to unfold the surface of the object, which is a complex double-curved surface and therefore not developable. Each facet is then translated to another plane, aligned and rotated in order to unfold the facetted model in continuous strips, using the V direction as guidance. The local relative position of each fiber is read from the on-surface distribution pattern and then reapplied to locate every point on the new set of unfolded surface subdivisions. Finally, each unfolded strip is tag-numbered in order to facilitate the assembly process later.
3.3 Optimized Distribution
The process by which a light source is focused and directed to one end of the fibers, in order to channel the light through them, requires a packed set of fibers on one end and a distribution of all the tips on the other end, according to certain rules. In order to track and identify each fiber in the system (to individualize each pixel) it is imperative to build a registry that targets both distribution patterns (both ends). For this purpose a distribution set holds each group of fibers as they are located in each surface patch; it creates an isolated and unfolded version of each patch and tags each of them, so that each fiber position can be localized and isolated for checking or maintenance of the installation.
There are two panels, each belonging to one half of the object, which for management as well as fabrication purposes was designed as two complementary halves that lock together with magnetic locks.
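Returning to the on-surface routine of 3.1, a minimal sketch of the non-uniform subdivision idea might look as follows. The exponent-based spacing function and the per-patch density values are assumptions made for illustration; the original routine ran inside Generative Components, not Python.

```python
# Hypothetical sketch of the non-uniform subdivision described in 3.1:
# a non-linear spacing function concentrates subdivisions where the
# surface is most convoluted, and each patch gets its own point density.

def subdivision_values(n, exponent=2.0):
    """Non-linear parameter values in [0, 1]; exponent=1 gives uniform."""
    return [(i / n) ** exponent for i in range(n + 1)]

def patch_points(u0, u1, v0, v1, density_u, density_v):
    """Uniform grid of (u, v) points inside one surface patch."""
    return [(u0 + (u1 - u0) * (i + 0.5) / density_u,
             v0 + (v1 - v0) * (j + 0.5) / density_v)
            for i in range(density_u) for j in range(density_v)]

us = subdivision_values(4)          # denser subdivisions near u = 0
vs = subdivision_values(4, 1.0)     # uniform in v
points = []
for i in range(len(us) - 1):
    for j in range(len(vs) - 1):
        points += patch_points(us[i], us[i + 1], vs[j], vs[j + 1], 3, 3)
```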
3.4 Matrix Distribution
This transformation takes each fiber position on the object and records its relative position in a new, uniform and normalized version of each of the patches. The gaps and spaces are still present as in the original pattern, but the aspect ratio of the patches is now uniform and can be homologated to the media aspect ratio. This new distribution has all panels organized in a continuous grid, where all patches are contiguous, as they now all have the same dimensions. By this method the fibers are packed together, while still maintaining their positions. These new relative positions are recorded and used to manufacture a control panel where the fibers are assembled and packed. They are also recorded by the program by exporting a text file with the location of each fiber on each isolated patch. These locations are based on UV values, which are then read by a parser that interprets these positions in order to send the appropriate bit of information (or pixel, in the case of the Cloud project) to the corresponding fiber.
3.5 Uniform Input Distribution
This final transformation is required because the media input from the projector needs a continuous medium to project onto, or some bits of light would be lost in the gaps. This function basically packs all the fibers together, eliminating the spaces in between and creating a homogeneous bundle of packaged fibers. For this distribution, each fiber exists in a two-dimensional array, as it belongs to a patch based on UV values. The fibers are then rearranged in a one-dimensional (linear) list, where each fiber is listed consecutively and concatenated with the next fiber from the next patch, leaving a "gap" value to indicate the end of one patch and the beginning of the next. The transformation is recorded and passed to the software parser, which needs to locate each fiber in the new arrangement in order to mask and send the proper bit or pixel. Figure 7 shows these transformations.
Figure 7. Matrix transformations
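The registry export of 3.4 and the gap-separated packing of 3.5 can be sketched as below. The file format, the GAP sentinel and the function names are hypothetical; the sketch only illustrates the flattening of per-patch UV arrays into one linear input list.

```python
# Hypothetical sketch of the 3.4/3.5 transformations: export each fiber's
# per-patch UV location to a text registry, then flatten the 2-D patch
# arrays into one linear input list with a "gap" sentinel between patches.

GAP = None  # marks the end of one patch and the start of the next

def export_registry(patches, path="fibers.txt"):
    """patches: {patch_id: [(u, v), ...]} -> one 'patch,u,v' line per fiber."""
    with open(path, "w") as f:
        for pid, fibers in sorted(patches.items()):
            for u, v in fibers:
                f.write("%s,%.4f,%.4f\n" % (pid, u, v))

def pack_input(patches):
    """Concatenate all patches into a single linear fiber list."""
    linear = []
    for pid in sorted(patches):
        linear.extend(patches[pid])
        linear.append(GAP)          # gap value between patches
    return linear[:-1]              # drop the trailing gap

patches = {"P1": [(0.1, 0.2), (0.3, 0.4)], "P2": [(0.5, 0.5)]}
export_registry(patches)
print(pack_input(patches))  # [(0.1, 0.2), (0.3, 0.4), None, (0.5, 0.5)]
```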
4 Discussion and (in)Conclusions
The work presented here is an ongoing research project and therefore no definitive conclusions are drawn; instead, open discussions relating to the design procedures, the material results and the theoretical implications of such practices are opened and offered.
4.1 Hylomorphism and material culture
Widespread among architects and designers, the concept of materiality has become a wildcard used to refer to various conditions, characteristics and attributes of materials and their multiple uses. In general it refers vaguely and ambiguously to such conditions and attributes, implying anything from a consideration in the use and choice of
materials in design to a careful and studied theoretical position regarding materials in a practice that deals both with virtual (abstract) representations and with actual (concrete) objects as products of such practice. Some critics argue that this places the emphasis on merely superficial, sometimes even obvious if not shallow, notions of a perceptual interpretation of materials. It evokes a "heightened sensibility toward the use of materials within an architectural context (…) the applied material qualities of a thing"4. As Fernandez remarks, this notion is usually used to describe evident and explicit properties of materials, and usually circumscribes the description and discussion of such material properties to only "the haptic and visual aspects of materials", neglecting other aspects related to their performance properties. He asks for a deep, specific, and concrete attitude towards materials, in which everything from physical, chemical and mechanical properties to performative and qualitative attributes is carefully considered while choosing and using a specific material. Raoul Bunschoten, on the contrary, describes the interest of his practice in the dynamics of materiality, present at different scales, "from the thinking hand molding a ball of clay" to the "process of human activities, exchanges and emotions at the urban scale". Bunschoten refers to this as the "skin of the earth as a dynamic materiality, and the inhabited space: the second skin"5. He sees these processes as the place where "architectural artifacts (small worlds)" happen, viewing concrete material processes through a conceptual framework of meta-processes that understand the urban and the natural as their place of action, their field of interaction. Architects and designers have traditionally understood materials as the substance with which they work to create and shape their ideas. Katie Lloyd Thomas points precisely to this relation between matter and form in the design process, where material (or matter) "is inert - as that which is given form"6; through design, materials are shaped or formed. Historically the architect's role has been regarded within the discipline as that of "form giver", a definition which relies on the separation of form and matter, as "form is that which can differentiate and matter that which can be differentiated"7. Lloyd Thomas calls this differentiation hylomorphism, and places this dichotomous relationship between matter and form within a historical and philosophical context, where abstract representation has always had a predominant role in the practice of architecture. Lloyd Thomas points to an emergent attitude, not yet radical enough to question these assumptions, but strong and widespread enough to start shifting the balance in contemporary architectural discourse about its material practice; she calls this new attitude "material attention". While this material attention still accepts the hylomorphic paradigm, its proponents try to reestablish some balance in the equation, at least to "replace neglect" with this new attentiveness.
She concludes her essay by asking for (if not expecting) a more radical reinterpretation and implementation of this framework, especially now that, after the soft(ware) digital revolution, we are experiencing the hard(ware) digital revolution, and new technologies such as CNC manufacturing are "recentering the discussion as a link between conceptualization and production, undoing a gap which has been such an important part of the discipline's structuring"8. The massification of new technologies and the emergence of whole new paradigms of material manipulation and making are slowly transforming the ways in which form is thought in relation to the matter it would shape. In the adoption of digital technologies, from the "personal computer to the personal fabricator"9, we are stepping into the field of Digital Craft, where both conceptualization and materialization take place virtually and actually.
To speculate about the implications of these emergent techniques within the practice, this paper presented some explorations in developing techniques within a design process to be implemented using CNC technologies. The aim of this exploration is to investigate possible avenues for further research in applying design methodologies to specifically develop and exploit material attributes and qualities. Figure 8 illustrates the different stages at which computational design contributed to rethinking and exploring material distribution to enable material performativity.
Figure 8. Performative materiality through material distribution
4.2 Transparency distribution (pushing glass)
Engaged in an investigation that set out to exploit material configurations rather than to produce formal design results through materials, the initial explorations were freed from a global design objective and remained at a local physical level. Form in those cases was a resource used to pursue and develop material properties. Shapes and patterns thus became vehicles for addressing the gradual distribution of material conditions, rather than formalistic results. By 1996, William J. Mitchell talked about the future of the city as a networked machine; he used the term "Pulling Glass"10 to refer to the act of wiring physical places, building the Infobahn. Today we all live in such a space, and the spaces, places and Infobahn he prefigured are facts. The transformations of such places, and of human behavior accordingly, have been studied extensively. I will borrow that term to refer to a new possibility, "Pushing Glass": pushing beyond standard conceptions of finite and absolute conditions, of standardized and uniform attributes and qualities, all derived from the modern paradigm of mass production and standardization; and pushing material distribution to enable performative functions beyond the standard inherent physical properties of matter, enabling deep transparencies and distributed spatial transparencies.
4.3 Algorithmic Transparency
This conception of matter as a continuous, gradual allocation of performative functionalities, instead of a simple aggregation of discrete elements with diverse qualities, allows a new understanding of our material practice. Traditionally divergent values can now be seen as converging forces in continuous fields of variation; the relation between interior and exterior, between built object and landscape, can be reinterpreted, or at least challenged. This presents an opportunity for altering and radically challenging our physical environment, the same environment that has been transformed virtually during the last decades, now by affecting its actual physicality, concretely, transforming passive material into active matter. Specifically, in relation
to the explorations entailed in this paper, this means extending standard notions of material properties, such as transparency, to that of algorithmic transparency. To explore transparency distribution, different procedures were scripted and thus several different patterns were produced. Manipulating the distribution allows the creation of distortion or altering effects on the light/image source. Through these ordering procedures, a sort of analog filter for image manipulation can be created, relying on the physical conductive properties of the fibers and their translation in space from their input to their output. The study of distribution patterns isolates every variable parameter involved in the final distribution algorithm (a parameter sketch follows the list):
• Relative position (plane location in relation to the original grid)
• Relative depth (in relation to the plane)
• Relative distance (in relation to their neighbors)
• Continuity (homogeneous distribution versus scattered distribution)
• Size (variation in fiber diameter)
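A hedged sketch of how these five parameters might drive a distribution generator follows; the function signature, the scatter toggle and the rejection test are illustrative assumptions, not the scripted procedures used in the project.

```python
# Hypothetical sketch: the five variable parameters of the distribution
# algorithm gathered into one generator that places fibers accordingly.
import random

def distribute(n, plane_w, plane_h, depth_range, min_dist,
               scatter, diameters, seed=0):
    """Place n fibers; all parameter names are illustrative."""
    rng = random.Random(seed)
    fibers = []
    while len(fibers) < n:
        x, y = rng.uniform(0, plane_w), rng.uniform(0, plane_h)  # position
        if scatter == 0 and any((x - fx) ** 2 + (y - fy) ** 2 < min_dist ** 2
                                for fx, fy, _, _ in fibers):
            continue                       # homogeneous: enforce spacing
        z = rng.uniform(*depth_range)      # relative depth behind the plane
        d = rng.choice(diameters)          # variation in fiber diameter
        fibers.append((x, y, z, d))
    return fibers

bundle = distribute(50, 100, 100, (0, 20), 3.0, scatter=0,
                    diameters=[0.5, 1.0])
```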
These algorithms should be refined, and further research will be conducted in order to explore other potential behaviors derived from the manipulation of these and other parameters.
4.4 Material Computing (machine matters)
A machine, as derived from the Latin root machina, means "an assemblage of parts that transmit forces, motion, and energy one to another in a predetermined manner"1; hence by pushing beyond the physical transparency properties of glass towards a distributed, multifarious transparency, we could think of these artifacts as behaving like computing machines. Starting with the different patterns studied, several potentials appear in terms of material properties and possible embedded behaviors: a potential for extending these transparency properties through larger spatial conditions, exploring light conduction both for sustainable lighting purposes and for performative display reasons; relocating transparency from one space to another; and rethinking transparency through its potential organization (design): encrypted transparency, displaced transparency, scattered transparency, distributed transparency.
5 Acknowledgements
Parts of these explorations are connected in various degrees to investigations conducted by our team at the Mobile Experience Lab, at the Design Laboratory at MIT. I have been very fortunate to have had the chance to collaborate with, and be supported at all times by, a distinguished group of designers coming from different fields. All my gratitude to Orkan Telhan and Hector Ouilhet; to Federico Casalegno, director of the Mobile Experience Lab, for his support and insight during the research; and finally to William Mitchell, for permanently questioning and challenging every step of my research process, for his sparking critiques and comments, and for supporting this work at so many levels.
6 Endnotes and References
1 Colin Rowe, Robert Slutzky, Transparency: Literal and Phenomenal (Perspecta, vol. 8, 1963)
2 Anthony Vidler, Losing Face: Notes on the Modern Museum (Assemblage, no. 9, 1989)
3 Gyorgy Kepes, Language of Vision (Dover Publications, 1995)
4 John Fernandez, Material Architecture (Architectural Press, 2006)
5 Raoul Bunschoten, The Thinking Hand, in Material Matters (Routledge, 2007)
6 Katie Lloyd Thomas, Material Matters: Architecture and Material Practice (Routledge, 2007)
7 Ibid.
8 Ibid.
9 Neil Gershenfeld, FAB: The Coming Revolution on Your Desktop, from Personal Computers to Personal Fabrication (Basic Books, 2006)
10 William J. Mitchell, City of Bits: Space, Place and the Infobahn (MIT Press, 1996)
Games with(out) rules Sotirios D. Kotsopoulos Massachusetts Institute of Technology, USA skots@mit.edu
Abstract
Fifty years after the first attempts to introduce algorithmic methods in design, we have reached a point where we might ask if design has become not a game in which designers play with the notion of rule, but a game where they play according to rules.
1. Introduction
Computational design deals with the construction of systems that use algorithms to produce designs with certain properties, and with the construction of theories that account for the way these designs are produced. The paper examines algorithmic and digital representation, randomness and innovation within the twofold context of design production and design explanation. It is pointed out that digital design corresponds to a specific subdomain of computational design. Digital design confines design within specific conceptual and productive boundaries, which require accurate planning and sophisticated machinery of reproduction. When design is approached via digital means, doing away with ambiguity becomes a primary concern. This is especially true for designers who feel that what they want to do must be determined by what a digital machine can do. Fifty years after the first attempts to introduce algorithmic methods in design we have reached a point where we might ask if design has become not a game in which designers play "with" the notion of rule, but a game where they play "according to" rules. Pre-school children explore the playground by interpreting the scattered toys in unexpected ways. Pleasure does not depend on knowing where the play is leading. Rules are made up and abandoned effortlessly. Grown-ups, on the contrary, determine rules with sincerity. They participate in games in which they have to outwit each other. Sometimes it is pleasant to think of design as a playground, where we make up games without rigid rules and forget the rivalries between winners and losers. The Greeks had placed players under the protection of Mercury (in Latin), or Hermes (Ερμής in Greek). Hermes was also the god of interpretation or herme-neia (ερμηνεία, in Greek). This implied that the interpretation of rules is involved in playing, but also that the interpretation of a course of action leads to the formation of rules. Hermes was widely held responsible for encouraging his followers to cheat by diverging from the established rules. Sincerity would never help a player to win. In the arts, cheating was often titled poetic license. The presence of the "deus ex machina" in Greek drama was acknowledged as poetic license. Today, it is generally accepted that the task of the creative person is not to legislate, but to unsettle. Writers and poets risk the reinterpretation of the rules of language to produce original interplays of words, while painters, sculptors, and architects violate the conventions of visual expression, to be championed by people who strive for visual novelty. Nevertheless, the premise of organizing games without rigid rules does not preclude the possibility that in the name of poetic license one may invent games dedicated to rule setting, like the construction of algorithms or the making of computer programs.
2. Background
Ever since Plato and Moses denounced the making of representations as illusions, the artists of the Western world have been treated with distrust as propagators of errors. Even when modern art reconfigured the relationship among art, imitation and abstraction, artists have been repeatedly blamed for deceiving "truth" and "beauty". In the modern world, the artists were accused of betraying the canons of beauty much as in the Middle Ages the prodigal sons of the church were accused of betraying the rules of God. The folly of the prodigal son consisted in his having used his own rules to please himself rather than following the rules of the church and serving God. At the beginning of the 20th century, Henri Poincare paved the way for artists and mathematicians to communicate through their common admiration of form. Poincare claimed that the foundations of mathematical creation are extra-logical1. He proposed that mathematical and artistic creation has an aesthetic origin: it implies an aptitude to discern and select among constructions the ones that can be of potential use. This does not consist in employing new combinations of existing forms, Poincare said, because calculations made on this basis are of limited interest. Ludwig Wittgenstein treated art as a language able to reflect a state of the artist's mind. Initially, the young Wittgenstein attempted to construct a logical language, defined to reveal unambiguously the facts to which it refers2. He presented these ideas in Tractatus Logico-Philosophicus, claiming to have established "definitive truths". Soon, he abandoned this certainty, which he came to regard as neither possible nor desirable, for what he called language games3. Around the same time, Rudolf Carnap, in the preface of the book Der logische Aufbau der Welt, was attempting to place his logistic approach to philosophy in context with art and architecture4. Carnap took a strong interest in the ideas developed at the Bauhaus, where he used to give lectures to young designers. At the Bauhaus, Paul Klee and Wassily Kandinsky were attempting to realize a formal framework for painters, sculptors and architects. A key aspect of their approach was the effort to "formulate the laws of art as simple rules". The objective was not to reduce design to prescriptive formulas but to arrive at conventions that propel creativity. Klee suggested that one should build one's own system of rules with precision: "We shall try to be exact but not one-sided. What we are after is not form, but function". But Klee also endorsed artists' tendency to diverge from the established order: "It is clear that different functions operating in different elements will lead to sharp divergences. And yet some people would like to deny the artist the very deviations that his art demands"5. Current computational design research concerns the construction of algorithmic processes that generate designs with specific properties. Computational design was introduced in the 60's and 70's in an effort to use formal devices like set theory (Alexander 1964), graph theory (Steadman 1973), Boolean algebra (March 1972), programming languages (Eastman 1970; Mitchell 1974), formal syntax (Hillier et al. 1976), and shape grammars (Stiny and Gips 1972) in design. Computation was employed as a prescriptive or as a descriptive instrument. In prescription, computational rules were applied as prescriptive instructions to provide a norm for the production of designs. In description, the rules were used to map the behavior of designers and to affirm that the claims of some general hypothesis produce the desired results. Computational design aims to provide minimum principles by means of which we can practice and explain design, in three ways: first, by describing the design process explicitly; second, by leading to the implementation of devices with strong generative capacity; and third, by making a design process available for future reference. The descriptive task involves the mapping of the actions of a designer with the aid of rules. The generative task involves their implementation in grammars or computer programs. The reference task involves the assemblage of data structures that can be retrieved by future users. Computational design systems include a calculating and a syntactic-interpretive part. The calculating part provides an environment where calculations of some kind take place. And the syntactic-interpretive part consists of statements assigning practical meaning to the computations. The application of a thought is expressed computationally by the application of a rule. In this way, the thought becomes a step in a calculation. New steps can be added by inserting new rules. A rule specifies that given some condition x, a conclusion y is produced; that is, an objective is accomplished provided that some conditions are satisfied. For example, the dissection of a quadrilateral "room" by a "wall" can be expressed by a rule. The anticipation of this action consists in the ability to know that a specific result can be produced whenever such a "room" is found in a description.

Table 1. Rule-expression, rule-condition, rule-conclusion

rule | condition | conclusion

1 see Poincare H, Science and Method, Dover Publications Inc., 1952, pp. 46-63
2 see Wittgenstein L, Tractatus Logico-Philosophicus, translated by Pears D F and McGuinness B F, Routledge & Kegan Paul Ltd, 1921 (1974)
3 see Wittgenstein L, Philosophical Investigations, translated by von Wright G H, Rhees R, Anscombe G E, Basil-Blackwell, 1953 (1997)
4 see Carnap R, Der Logische Aufbau der Welt, translated as The Logical Structure of the World, Chicago and La Salle Illinois: Open Court, 1928 (2003), xviii
5 see Klee P, The Thinking Eye, New York: Wittenborn, 1961, pp. 59, 82
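The room-dissection example can be made concrete with a small sketch. This is not a shape grammar proper, only a symbolic stand-in: rooms are axis-aligned rectangles, and the threshold, names and data types are assumptions made for illustration.

```python
# Hypothetical sketch of a rule as condition -> conclusion: if a
# quadrilateral "room" wider than a threshold is found in a description,
# dissect it with a "wall" at its midpoint.

def dissect_rule(room, min_width=6.0):
    """condition: room (x0, y0, x1, y1) is wide enough.
    conclusion: two rooms separated by a wall at the midpoint."""
    x0, y0, x1, y1 = room
    if x1 - x0 < min_width:        # condition x not satisfied
        return None
    mid = (x0 + x1) / 2.0          # wall position
    return [(x0, y0, mid, y1), (mid, y0, x1, y1)]   # conclusion y

def apply_until_done(rooms, rule):
    """Apply the rule wherever its condition holds; each application
    is one step in the calculation."""
    done = False
    while not done:
        done = True
        for i, r in enumerate(rooms):
            result = rule(r)
            if result:
                rooms[i:i + 1] = result   # replace the room by its parts
                done = False
                break
    return rooms

print(apply_until_done([(0.0, 0.0, 16.0, 10.0)], dissect_rule))
```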
3. Games without rigid rules
Kant observes (§ 49) that all exemplary artistic creation seems to be a product of rules. These are self-imposed by the artist without conscious attention. Further, Kant notices that an important work of art seems unprecedented because "it discloses a new rule".6 In both cases a kind of free play, similar to that of children, is involved.
6 see Kant I., The Critique of the Power of Judgment, translated by Guyer P and Matthews E, Cambridge University Press, 2000, pp. 194-195
A common criticism of the computational approach to design is that computational models are too rigid and fail to capture the freedom with which designers act and think. Computational models provide a formalized environment for calculation, but actions have to be reduced to expressions of the model language in order to be treated computationally. Computational models – the criticism continues – can represent, at best, only moments in a process characterized by constant flux. Winograd and Flores contrast formal representation with actual human thought, arguing that the projection of human capacities onto computational devices is misleading. Classifications or distinctions caused by formal procedures eliminate certain possibilities, thus causing a specific kind of blindness: "Blindness is not something that can be avoided, but it is something of which we can be aware. The designer [of a system] is engaged in a conversation of possibilities. Attention to the possibilities being eliminated must be in constant interplay with expectations for the new possibilities being created"7. Like most researchers in the area of computational design, I see the existence of a formal component as necessary for productive and explanatory purposes. The selection of the appropriate formal component can have great qualitative and quantitative implications for the modeling of a specific domain. One cannot simply pick a computational apparatus and squeeze the empirical content in. The philosopher Nelson Goodman discloses some of the theoretical implications of descriptions belonging to diverse modeling systems, or "worlds": "In other cases, worlds differ in response to theoretical rather than practical needs. A world with points as elements cannot be a world having points as certain classes of nesting volumes or having points as certain pairs of intersecting lines or as certain triples of intersecting planes. That the points of our everyday world can be equally well defined in any of these ways does not mean that a point can be identified in any one world with a nest of volumes and a pair of lines and a triple of planes; because all these are different from each other. Again the world of a system taking minimal concrete phenomena as atomic cannot admit qualities as atomic parts of these concreta"8. Theoretical considerations like the above obtain practical significance in design, where designers use points, lines, surfaces and solids to construct design descriptions. The process of construction becomes problematic whenever computational systems restrict the use of these elements in unintuitive ways. For example, digital software requires descriptions to be registered in the memory of the computer with a permanent structure. The preservation of the structure becomes a barrier: it does not allow one to diverge from one's initial design approach. The creator of the first CAD system, Ivan Sutherland, explains: "An ordinary draftsman is unconcerned with the structure of his drawing material. Pen and ink or pencil and paper have no inherent structure. They only make dirty marks on paper. The draftsman is concerned principally with the drawings as a representation of the evolving design. The behavior of the computer-produced drawing, on the other hand, is critically dependent upon the topological and geometric structure built up in the computer memory as a result of drawing operations. The drawing itself has properties quite independent of the properties of the project it is describing"9.
Digital design is a specific sub-domain of computational design, where rules and procedures are converted into machine language to be executed by digital machines. The interest in the use of digital machines in design is a cultural phenomenon that characterizes the post-industrial era.

7 see Winograd T. and Flores F., Understanding Computers and Cognition, Addison-Wesley, 1987, pp. 163-179
8 see Goodman N., Ways of Worldmaking, Hackett Publishing Company, Inc., 1978, p. 9
9 see Sutherland I., "Structure in Drawings and the Hidden-Surface Problem", in Reflections on Computer Aids to Design and Architecture, Negroponte N. (ed.), New York: Petrocelli, 1975
The objects that digital design exhibits are conceived and produced digitally, as opposed to analogically or mechanically. Digital design production rests on the assumption that man is greatly impressed by the results of calculations occurring in a specific domain, namely the domain of numerically defined representations. This domain involves calculations with zero-dimensional elements (points and symbols) as opposed to elements of higher dimension (lines, planes, solids). When design is approached via digital means, doing away with ambiguity becomes a primary concern. The program of a digital machine has to assimilate a sequence of execution tasks free of ambiguity and with an unambiguous overall objective. The language in which this meticulous description is assembled does not provide the best means for inducing doubt. But it is within the territory of doubt that a designer conducts the most fruitful experiments. The programmer of a digital machine has to determine in advance, with precision, the possible spaces of configuration which underlie what we see on the screen. In this way, many possibilities are created, but many others are categorically eliminated. The aim of the programmer is not to depict the visual "how" but the measurable "why". In order to build and use this type of knowledge, the programmer has to be an iconoclast: one who has more faith in the realism of the quantities preserved in the memory of the machine than in what is visible. George Stiny, who has written extensively on the properties of visual calculation, observes: "When memory counts more than what I see, it isn't visual calculating. There's a conservation law of some sort to uphold the decisions I've made in the past – to recognize (remember) what I did before and act on it heedless of anything else that might come up"10. Digital descriptions of observable things need to be conserved in the memory of the computer. Therefore, form is expressed as an n-dimensional manifold which can be numerically generated. Also, form, color and other attributes can be detached from objects. And because all observed attributes obtain numerical expression, arbitrary combinations of their values can be inserted into random processes to generate unforeseen results. Geometry can also become indeterminate. The probability of obtaining unexpected formulations can be increased by allowing chance among the elements of a numeric calculation. Since process has become part of the design content, chance has become an indispensable element of the composition. Players, artists and children share the same fascination with chance, but their motivations differ. Players seek fulfillment by taking risks in a pre-determined process which can bring them wealth. Artists and children engage in a nondeterministic process that enriches their experience. Nevertheless, there are no available algorithms for producing masterpieces, as there are none for winning at roulette. In art, chance can be introduced by a lapse, a slip, or by accident, or it can be invoked by loosening control over a process or a material. The inclusion of chance in a sequence of machine instructions should not be confused with non-determinism. In random machine processing, chance makes its involvement present as a surprise. If we were to evaluate the worth of a surprise, we would have to view it as a 'not-easy-to-foresee result' occurring in the course of a sequence of predefined (deterministic) events.
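To make the point about chance concrete, here is a small sketch of inserting randomness among the elements of a numeric description. The attribute names and jitter scheme are illustrative assumptions; note that the seeded generator underlines the text's point, since the "surprise" unfolds within a predefined, deterministic sequence.

```python
# Hypothetical sketch: inserting chance into an otherwise deterministic
# numeric description. Form attributes are detached as numbers, so
# arbitrary random combinations of their values can be generated.
import random

def random_variant(base, jitter, rng):
    """Perturb each numeric attribute of a design description."""
    return {k: v + rng.uniform(-jitter, jitter) for k, v in base.items()}

base = {"width": 10.0, "height": 4.0, "rotation": 0.0}
rng = random.Random(42)            # seeded: the "surprise" is replayable
variants = [random_variant(base, 1.5, rng) for _ in range(5)]
```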
When a work of art is conceived as a surprise, it is called a happening. If we narrow our view to the point at which all events are reduced to happenings, changes become meaningless, since – as in flipping a coin – they are no longer placed within any context.
10 Stiny G, Shape: Talking about seeing and doing, Cambridge, Massachusetts: The MIT Press, 2006, p. 144
Chance makes its involvement present in more interesting ways, as emergence. Emergence is a feature of systems that display qualities not directly reducible to the system's constituent parts. In such systems ‘the whole is greater than the sum of the parts’. Knight observes that systems based on elements of higher than zero dimensions (lines, planes, solids) exhibit strong emergent behaviors11. Further, Stiny shows that within such systems, descriptions are characterized by the absence of permanent subdivisions, and rules can apply unrestricted. The structure of a description ceases to be rigid and becomes “an artifact of computation” guided by the changing priorities of a viewer.
Figure 1. A rule that rotates a square by 180˚
Figure 2. Each time the rule applies on the interior small square, a new square emerges
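A minimal sketch of repeated rule application in the spirit of Figures 1 and 2 follows. The midpoint construction and all names are assumptions standing in for the rule actually drawn in Figure 1; note also the caveat the sketch illustrates: in a symbolic data structure like this the new square must be computed explicitly, whereas in visual calculation with unstructured shapes it simply emerges to the eye.

```python
# Hypothetical sketch: each step applies a rule to the smallest square,
# adding a new square on the midpoints of its edges. The "emergent"
# squares here are explicit records, not emergent percepts.

def midpoint_square(square):
    """square: list of 4 (x, y) corners -> square on its edge midpoints."""
    n = len(square)
    return [((square[i][0] + square[(i + 1) % n][0]) / 2.0,
             (square[i][1] + square[(i + 1) % n][1]) / 2.0)
            for i in range(n)]

squares = [[(0, 0), (8, 0), (8, 8), (0, 8)]]
for _ in range(3):                 # apply the rule three times
    squares.append(midpoint_square(squares[-1]))
for s in squares:
    print(s)
```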
4. Epilogue
If design is to be comprehended as a process, it cannot be dissociated from a set of rules, a method, or a language, thanks to which conclusions and calculations can be verified. The originality of a design lies in the uniqueness achieved through incomplete adherence to the method (μέθοδος in Greek means "following the road"). The reasons for departing from the road can vary. First and foremost, designs are made unique so that others may appreciate the designer's point of view. The departure from the road becomes problematic in systems that restrict the free application of rules by imposing inflexible structures on descriptions. The preservation of structure for a description operates as a barrier: it does not allow one to diverge from one's initial course. An example can be found in the use of digital software packages: they require descriptions to be structured in specific ways. After a description is complete, one can only employ combinations of the prescribed parts. But this – as Poincare points out – leads to calculations of limited creative potential.
11 see Knight T, "Computing with emergence", Environment and Planning B: Planning and Design 30(1), 2003, pp. 125-155
Digital design extends design beyond visual description to abstract spaces of configuration. In order to use this type of knowledge the designer must become a programmer: one who has more faith in the realism of quantification than in the presence of the visible. Fifty years after the first attempts to introduce algorithmic methods in design we have reached a point where we might ask ourselves if design has become not a game in which the designers play “with” the notion of rule, but a game where they play “according to” rules. The risks taken in games played according to rigidly determined rules are limited when compared to the risks that one can take in games that are approached as fields of unrestricted experimentation. Design still remains an open game.
5. Endnotes
Poincare, H. 1952. Mathematical Discovery. In Science and Method. New York: Dover Publications Inc.
Wittgenstein, L. 1921 (1974). Tractatus Logico-Philosophicus, translated by D. F. Pears and B. F. McGuinness. Routledge & Kegan Paul, Ltd.
Wittgenstein, L. 1953 (1997). Philosophical Investigations, translated by G. H. von Wright, R. Rhees, G. E. Anscombe. Basil-Blackwell
Carnap, R. 1928 (2003). Der Logische Aufbau der Welt, translated by R. A. George as The Logical Structure of the World. Chicago and La Salle, Illinois: Open Court
Klee, P. 1961. The Thinking Eye. New York: Wittenborn
Kant, I. 2000. The Critique of the Power of Judgment, translated by P. Guyer and E. Matthews. Cambridge University Press
Winograd, T. and Flores, F. 1987. Understanding Computers and Cognition. Addison-Wesley
Goodman, N. 1978. Ways of Worldmaking. Indianapolis: Hackett Publishing Company, Inc.
Sutherland, I. 1975. "Structure in Drawings and the Hidden-Surface Problem". In Reflections on Computer Aids to Design and Architecture, edited by N. Negroponte. New York: Petrocelli
Stiny, G. 2006. Shape: Talking about seeing and doing. Cambridge, Massachusetts: The MIT Press
Knight, T. 2003. Computing with emergence. Environment and Planning B: Planning and Design 30(1): 125-155
Using Patterns of Rules in the Design Process
MAGDALINI ELENI PANTAZI
Massachusetts Institute of Technology, USA
mpantazi@mit.edu
Abstract
In the past three decades computational processes have been introduced and widely applied in the field of architecture. This has raised questions about the types of strategies that architects apply during the early phase of the design process. The answer to this question has become crucial because computational processes, based on algorithms, use explicit rules, while in traditional ways of designing the role of the rule during the creative phase remains unidentified. If we want to effectively introduce computational processes into design, then the role of the rule in design should be identified. In this paper, I present an experiment in which I examine the patterns of rules that architects use during the exploration of a design idea, from the formation of the design problem towards the design solution. Furthermore, I investigate the role that constraints play in the formulation of these design patterns of rules.
1. Introduction
The most challenging and enjoyable aspect of architectural design occurs during the early phase of design, when the architect is still free to play with concepts and shapes while exploring different ideas to solve a design problem. During this process, a variety of tools and procedures can be used to actualize architectural objects as possible solutions to the design problem. The goal is to develop a representation that can most accurately illustrate the designer's thoughts, while at the same time leaving enough space for further investigation and exploration. In Schon's words, "creative fields are characterized by the generation and manufacture of objects for reflection and evaluation."1 The generation and manufacture of an object, however, is not actualized in the same way in all creative fields. On the contrary, as Robin Evans notes, there exists a "peculiar disadvantage under which architects labor, never working directly with the object of their thought, always working at it through some intervening medium, while painters and sculptors, who might spend some time in preliminary sketches and maquettes, all end up working on the thing itself."2 Throughout the whole design process, architects model an object that is not yet realized, using different kinds of processes and representations in order to illustrate its form and understand its structure. The object comes to life through the model, and the interaction between model and object leads to a constant exchange of information between the two, until the culmination of the design process in the realization of the object. In the above context, architectural design can be perceived as a conversation between the designer's thoughts and the object under construction. This conversation is conducted with the aid of a design medium. Two important features determine the outcome of this conversation: the design process and the object itself. The design process refers to the steps that the designer follows from the initial idea, through its exploration, to the final result. The object, on the other hand, constitutes the vehicle for the exploration, as it reflects the strategies employed by the designer. The object, in other words, is considered part of the architectural design process that enhances design development. The focus of the present paper is on the early phase of the architectural design process: that of exploration and creativity. The scope of the study is to examine the patterns of actions that architects form around constraints during the creative phase
in order to both address a design problem and work towards its solution. In that framework I first pose some questions regarding the strategies that architects employ while designing. I then set up an experiment to examine these questions. Three basic features that characterize the structure of the experiment are then introduced and analyzed: the design problem, the design process and the feedback relationship. Finally, I analyze and discuss the results of the experiment. The early phase of design describes the designer's actions from the introduction of the design problem, through the exploration of possible solution alternatives and their transformations, to the crystallization of a first satisfying design result. A basic feature of the architectural design process at this stage is its undefined and unclear character3: designers seem to proceed to design solutions in a rather ad hoc way that makes the establishment of systematic methods of approaching design problems difficult. This undefined character of the design process proves problematic especially today, when the introduction of new design tools in the field of architecture challenges the traditional ways of designing. While traditional tools and processes are based on the designer's intuition and support the use of implicit actions, the new computational tools are based on very explicit processes and rules. Different design tools impose different design processes, so if we want to use, improve or even invent tools that effectively address the design process, then we need better insight into how designers form their actions and what patterns of actions they follow during the design process. As a first step in this research the following questions are addressed: How do designers formulate the information contained within a design problem? How do designers organize their actions towards a design solution? How do designers move between different solutions? Do they use rules or patterns of rules in these processes? And if so, what kind of rules or patterns of rules do they use? To answer these questions I first need to introduce the design problem, the design process and the feedback relationship.
2. Design Problem
Newell and Simon describe a problem as follows: "A person is confronted with a problem when he wants something and does not know immediately what series of actions he can perform to get it."4 While this definition is true for all problems, certain kinds of problems have characteristics that make the approach to their solution even harder. For example, problems that scientists or engineers deal with are definable and may have solutions that are findable. In these problems the mission is usually clear, such as finding the solution to an equation. Furthermore, an exhaustive formulation can be stated containing all the information the problem-solver needs for understanding and solving the problem, provided that he/she knows how to do it. This is not the case, however, for problems that are ill-defined, ill-structured, or wicked5. These problems have no clarifying traits, nor a single solution. Additionally, the necessary information about the design problem is not, or even cannot be, available to the problem solver. The design problem needs to be structured upon objective and subjective parameters, for
example the program for a building and the personal interpretation of the program, respectively. An example that best illustrates this situation is that of a design competition: a plethora of totally different design solutions is proposed in answer to the same program, the same site, the same time-frame and the same client. It is clear, therefore, that architectural design problems cannot be organized deterministically and that they do not have a unique solution. What do designers tend to do, then, when they seek a solution? Gross et al. argued that constraints play a significant role in design. They described design as an exploration of constraints and argued that constraints provide a knowledge representation that supports reasoning about designs and designing. To test their hypothesis they did not provide empirical evidence; instead, they developed computer software, "the constraint manager," to describe design. Based on the above considerations about the characteristics of the design problem and the importance of constraints in designing, I created two design problems. The design problems had the same objective: the creation of a family house on a rectangular site. While the size of the site was the same in both cases, its location changed: in one case it was placed in a city and in the second case in the countryside. The creation of the house in the city had to follow a specific building code and also respond to the urban structure. The creation of the house in the countryside, on the other hand, was not bound by the same constraints: the site was surrounded by nature and the architects were free from strict building codes.
3. Design Process
As mentioned above, the design problem is strongly related to the design process. Therefore, the ill-defined character of the design problem affects the design process, which cannot follow an explicit path to reach the final product and is characterized by the use of implicit rules. This is sustained by the solution-focused processes that designers use, in contrast to the problem-focused processes of scientists. Lawson's studies on design behavior revealed that architects learn about the problem by trying out solutions so as to achieve the desired result, whereas scientists are more concerned with studying and analyzing the problem to discover the rule. In this framework, throughout the experiment, I investigated if and how architects form and follow rules while designing. I specifically examined the formation of patterns of rules according to different constraint conditions. Furthermore, I explored the methods that architects applied in order to handle the feedback relationship while designing. The term feedback is used to describe the process of reinterpretation of a design: the designer observes a new relation in the produced design, evaluates it against the initial idea or hypothesis, and then alters the design solution.
4. Feedback Relationship
Several studies based on protocol analysis have acknowledged the importance of reinterpretation in the early phase of design and tried to identify mechanisms and tools that support it. Studies have also examined the role of sketching in reinterpretation, as well as the kinds of interactions that architects have with their designs. In a series of papers, Goldschmidt has examined protocols of design involving novice and expert architectural designers. She proposed that the dialectic between arguments of "seeing as" and "seeing that" during the process of
sketching "allows the translation of the particulars of form into generic qualities and generic rules into specific appearances."6 In the same line, Schon and Wiggins suggested that sketching constitutes a visual representation that can potentially be perceived in different ways through a design process that develops along the schema see-move-see. Goel reversed the question and investigated the properties of sketches that allow for reinterpretation. He acknowledged the importance of "lateral transformations" and supported the hypothesis that because sketching constitutes a symbol system characterized by syntactic and semantic denseness as well as ambiguity, it allows lateral transformations to occur. Symbol systems that are non-dense and unambiguous, however, will hamper the exploration and development of alternative solutions and force early crystallization of design development. Goel's conclusion is similar to an observation made by Ivan Sutherland back in 1975. Sutherland's comment concerns reinterpretation relative to the structure of the design in different representational media. He argued that because pencil and paper have no inherent structure, their marks can be decomposed and manipulated in any manner of interest to the designer. An evolving design may thus have alternative descriptions that change from time to time in unanticipated ways. The structure of the computer design, on the other hand, presents an obstacle to all of this, because it is fixed in specific design operations. As discussed above, I conducted an experiment to investigate the design problem, the design process and the feedback relationship through the patterns of actions that architects form around different constraint conditions. In this way, constraints can be considered the driving force that organizes the architects' actions from the formulation of the problem towards the creation of a solution. The types of constraints vary: some are external and relate to the site or to the program, while others are personal or internal and express the architect's preferences. In most cases the combination of the two leads the architect to outline the solution. Constraints alone, however, are not enough to guide architects to the solution. It is also the various patterns of actions that architects form around the different types of constraints that lead the exploration and filter the alternative solutions. The way these patterns are organized and the different groups that they form were under examination throughout the experiment.
5. The Experiment
The aim of the experiment was to investigate design actions in response to different design situations. The feature that defined the difference between these situations was the constraint condition: design problems were divided into less constrained versus more constrained ones. The patterns of action that architects formed so as to address the different constraint conditions were the focus of the experiment. The method selected was a protocol analysis of retrospective reports of the subjects' design thoughts7. The think-aloud verbal reports method, the most typical way of analyzing subjects' cognitive processes, was not employed, because previous work suggested that talking aloud may influence designers' perception of their design actions. The experiment was conducted with the participation of five professional architects and consisted of three tasks: two design tasks and one report task.
In each design task the participants were asked to solve a design problem in a one-hour session. They were provided with a simple diagram presenting the site in which they were asked to locate a family house. Participants were free to use whatever
representational medium they wanted as a tool for design. They were not asked to describe their moves and actions while they were designing, nor were they interrupted during that time. In order to keep track of the process that each architect followed towards solving the design problem, a video camera was used to record the architects' design decisions. The two design tasks took place sequentially within two days. One week after the design tasks were completed, the report task followed. I met with each participant and together we reviewed the process he/she had followed with the aid of the videotapes. More specifically, while watching the videotapes, I asked each participant to describe the moves and the decisions he/she took during the design process. They were asked to remember and report in as much detail as possible what they were thinking as they were designing. Participants were not interrupted with questions during the report, except when a participant skipped a design event without commenting on it, in which case I asked him/her to describe it. The whole session was audiotaped.
6. Protocol analysis
The analysis of the protocol followed these steps. First, the verbal protocol recorded from the design sessions was transcribed. The next step was the analysis of the designs based on the visual representation of the drawings and their verbal descriptions. Every design solution was divided into three segments regarding the formulation of the problem, the organization of the actions as described in the design, and the construction of responsive mechanisms. In the last step, the two design solutions that each participant produced in response to the two design problems were compared according to the above three segmentation categories. In the formulation of the problem, the focus was on how architects interpreted the design problem information and what kinds of groups of actions they created according to it. In this phase a more diagrammatic analysis of the problem was attempted by the architects. In the more-constrained problem, the created diagrams were mostly related to functional features of the problem, and different organizations of groups of spaces were proposed; for example, spaces were organized in terms of private and public, of lighting, or of adequate dimensions of different spaces. In the less-constrained problem, the fact that architects were freer to work towards the solution did not mean that they did not follow certain constraints. On the contrary, taking the site as a point of reference, architects imposed their own personal constraints, which were not only functional but related, for example, to other characteristics such as the view. In both cases, however, the same process was observed: the analysis of the problem was based upon a selection of constraints. From the infinite area of the solution space a set of constraints was selected, and in response to it a general pattern of actions was created that helped the architects develop their thoughts. After the first analysis of the problem, the architects organized their actions by imposing their thoughts on paper and constructing the first synthesis of forms. Spatial relationships determined the solution space and formed the actions that articulated the spaces. In both solution cases, more and less constrained, geometry played a significant role in the organization of patterns of actions.
Grids that organized the provided space, relationships to the boundaries, and the occupied area (figure 1) are some of the geometrical rules that determined the architects' actions. In this segment category, various applications of each constraint were examined
through spatial patterns of actions within the limits of the design solution. This resulted either in the abandonment of solutions that did not satisfy the constraints or in the further refinement of those that did. In this second phase the solution space was narrowed.
Figure 1: Geometrical constraints
The final segmentation relates to the responsive mechanism that architects formed in order to evaluate their actions and work towards the solution. This mechanism described the feedback relationship: it involved a continuous examination of various constraint applications in different solutions, the abandonment or maintenance of which led to a gradual shrinking of the design solution space. Architects created patterns of action according to the facet of the constraint they wanted to examine, and then evaluated the design solution against them. In cases where the patterns of action were conscious – for example, imposing a grid and allocating spaces according to it – the architects were able to better handle the part of the solution under examination and to understand its limits and potentials.

7. Conclusions

In the present paper I attempted to cast some light on the vague field of the architectural design process. In order to do so, I examined the design patterns of action that professional architects employ while exploring a design problem. My main goal was to investigate how these patterns of action facilitate the design process. I found that architects often build their patterns of action around the foundation provided by the constraints inherent in the design problem. The selection of constraints and the associated patterns of action were examined in three phases: the formulation of the problem, the organization of the action as reflected on paper, and the responsive mechanism. The experiment provided evidence that architects develop patterns of action in relation to the problem constraints, which therefore guide the design exploration. More specifically, architects arrive at a final solution by examining intermediate solutions in terms of how well the initial constraints are fulfilled; subsequent solutions are then reached by making the necessary adjustments so as to satisfy pending constraints. In other words, given an infinite solution space, the architect finds an optimal solution by gradually eliminating 'penalty' regions: areas of the solution space that do not satisfy the constraints. Future work will focus on identifying types of constraints, as well as patterns of action that architects use while designing. It is hoped that this work will open new avenues in the field of computation, since it will make possible new design tools that better address architects' needs. Only when the design processes that architects use become clearer can computers really contribute to the exploratory and creative phases of design.
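This gradual elimination of 'penalty' regions can be sketched in code. The following fragment is purely illustrative and was not part of the study; the candidate schemes, the constraint predicates, and the refine step are hypothetical placeholders for whatever representation a future design tool might use.

# Illustrative only: the 'shrinking solution space' described above,
# modeled as iterative constraint filtering. All names are hypothetical.

def explore(candidates, constraints, refine, max_rounds=10):
    """Discard candidates that fall in 'penalty' regions (fail a
    constraint), refine the survivors, and repeat."""
    for _ in range(max_rounds):
        surviving = [c for c in candidates
                     if all(ok(c) for ok in constraints)]
        if not surviving:        # every scheme failed: re-select constraints
            return None
        if len(surviving) == 1:  # solution space reduced to a single scheme
            return surviving[0]
        candidates = [refine(c) for c in surviving]
    return surviving[0]

# Constraints an architect might impose (functional or personal):
constraints = [
    lambda plan: plan["living_area"] >= 40,          # adequate dimensions
    lambda plan: plan["bedrooms_face"] == "garden",  # privacy / view
]
candidates = [
    {"living_area": 35, "bedrooms_face": "street"},
    {"living_area": 48, "bedrooms_face": "garden"},
]
print(explore(candidates, constraints, refine=lambda c: c))
# -> {'living_area': 48, 'bedrooms_face': 'garden'}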
1. Donald Schon, Educating the Reflective Practitioner (New York, NY: Jossey-Bass, 1990).
2. Robin Evans, Translations from Drawing to Building (London: Architectural Association Publications, 1997).
3. Nigel Cross, Designerly Ways of Knowing (London: Springer, 2006).
4. Allen Newell and Herbert Simon, Human Problem Solving (Englewood Cliffs, NJ: Prentice Hall, 1972).
5. Horst Rittel and Melvin Webber, "Dilemmas in a General Theory of Planning," Policy Sciences vol. 4 (1973): 160.
6. Gabriela Goldschmidt, "The Dialectics of Sketching," Creativity Research Journal vol. 4, no. 2 (1991): 128.
7. K. Anders Ericsson and Herbert Simon, Protocol Analysis: Verbal Reports as Data (Cambridge, MA: MIT Press, 1993).
Bibliography
Cross, Nigel. Designerly Ways of Knowing. London: Springer, 2006.
Ericsson, K. Anders, and Herbert Simon. Protocol Analysis: Verbal Reports as Data. Cambridge, MA: MIT Press, 1993.
Evans, Robin. Translations from Drawing to Building. London: Architectural Association Publications, 1997.
Goel, Vinod. Sketches of Thought. Cambridge, MA: MIT Press, 1995.
Goldschmidt, Gabriela. "The Dialectics of Sketching." Creativity Research Journal vol. 4, no. 2 (1991): 123-143.
Goldschmidt, Gabriela. "On Visual Design Thinking: The Vis Kids of Architecture." Design Studies vol. 15, no. 2 (1994): 158-174.
Gross, Mark, Stephen Ervin, James Anderson, and Aaron Fleisher. "Constraints: Knowledge Representation in Design." Design Studies vol. 9, no. 3 (1988): 133-142.
Lawson, Bryan. How Designers Think. Oxford: Elsevier/Architectural Press, 2006.
Newell, Allen, and Herbert Simon. Human Problem Solving. Englewood Cliffs, NJ: Prentice Hall, 1972.
Rittel, Horst, and Melvin Webber. "Dilemmas in a General Theory of Planning." Policy Sciences vol. 4 (1973): 155-169.
Schon, Donald, and Glen Wiggins. "Kinds of Seeing and Their Functions in Designing." Design Studies vol. 13 (1992): 135-156.
Sutherland, Ivan. "Structure in Drawings and the Hidden-Surface Problem." In Reflections on Computer Aids to Design and Architecture, edited by N. Negroponte. New York: Petrocelli/Charter, 1975.
Critical Reflection
Moderators: Teri Rueb and Kostas Terzidis
Papers:
Anthony Burke, Reframing "Intelligence" in Computational Design Environments
Mahesh Senagala, Deconstructing Materiality: Harderials, Softerials, Minderials, and the Transformation of Architecture
Erik Conrad, Rethinking the Space of Intelligent Environments
Lydia Kallipoliti and Alexandros Tsamis, The Teleplastic Abuse of Ornamentation
Neri Oxman, Oublier Domino: On the Evolution of Architectural Theory from Spatial to Performance-based Programming
Sha Xin Wei, Poetics of Performative Space
Reframing "Intelligence" in Computational Design Environments
Anthony Burke
University of Technology, Sydney, Australia
anthony.burke@uts.edu.au
Abstract

This paper seeks to establish a set of principles for understanding intelligent systems in design and architecture, through a review of intelligence as it has been understood over the last 60 years, since Alan Turing first asked the question "can machines think?"1 From this review, principles of intelligence can be identified within the neurophysiological and artificial intelligence (AI) communities that provide a foundation for understanding intelligence in computational architecture and design systems. By critiquing these principles, it is possible to re-frame a productive general theory of intelligent systems that can be applied to specific design processes, while simultaneously distinguishing the goals of design-oriented intelligent systems from the goals of general artificial intelligence research.

1. Why Critique Intelligence?

How can we begin to evaluate claims of intelligence for computational systems in architecture within the current technological environment?2 Equally, how do we distinguish between complex, expert, computational and intelligent systems as the discipline moves into an era of open and highly customized software production and application? Clearly a framework of terms is necessary if we are to answer these questions. Similarly, a taxonomy of system architectures is necessary to determine what elements of a system might be considered essential for intelligence. In architectural terms, intelligence is perhaps typically understood along the lines of this definition by Kas Oosterhuis, who writes, "Intelligence as I use it here is not seen as human intelligence. It is regarded as emergent behavior coming up from the complex interactions of less complex actuators. It seems to be possible to apply the same definition of intelligence to the functioning of our brains, traffic systems, people gathering, and to the growth and shrinking of cities."3 An intelligent system should be seen as distinct from a system that is simply beyond understanding or merely computationally complex; the distinction rests instead on the type of computational architecture and on the goals of each type of system. These goals might include optimizing and managing processes, or returning partnership capacities within a design team, such as generating possible design options or integrating complex information and analysis into the design process. Architects and designers will ultimately use both intelligent and non-intelligent systems within their work. However, as our discipline further evolves with technology, the genealogy of the digital is necessarily becoming more complex, requiring an equal sophistication in its terminology. As we interact more and more with design systems beyond our specific understanding, it could be said that the site of design moves from the object to the organization, and the need for meta-models to assist designers and architects in understanding how to work with a system's particular architecture becomes both practically and theoretically important.
Without an understanding of a system's structure, or a clear definition of its terms, effective communication between various systems structures becomes difficult, as does working with the limits and advantages of the growing number of systems that we are currently engaging. Apart from identifying the foundations for a taxonomy of digital processes, identifying specific properties of an intelligent system in computational design processes is necessary to distinguish the design ambitions of an intelligent system from normative (if not complex) computationally driven work. The large majority of advanced work in architecture today is the result of systems often touted as intelligent, such as parametric modeling, BIM, evolutionary structural optimization (ESO) and so on. Typically, however, these processes rely on functions of high-level computational power within normative programming architectures, displaying processing prowess but few of the hallmarks of intelligence as it is understood in other fields. Similarly, the introduction of autonomous systems to both design and manufacturing places a portion of the design and development process in the metaphoric hands of a system that may or may not have been programmed with disciplinary knowledge in mind. As we generate, auto-generate and assign control of aspects of design processes to complex systems, we are assigning part of our design intelligence to a third party whose assumptions and goals are likely to be different from our own. As such, there are many reasons to consider this an important moment at which to determine a framework for intelligent design systems (IDS). Developments in all manner of technological systems and processes, as well as our changing relation to software within the design process, all contribute to the evolution of customized expert, complex and "intelligent" systems. While complex simulations such as the Earth Simulator project at the Japan Agency for Marine-Earth Science and Technology are not yet intelligent, their capacities for knowledge storage and retrieval, world modelling and behaviour generation might be considered aspects of intelligence that are evolving and converging. It is not difficult to imagine such systems developing rapidly to gain a measure of direct advisory and executive capacity, like that now well advanced in the autonomous weapons and the information and battle-management systems employed by the military. All these aspects point to the evolution of a new relationship with technology, beyond the simple singular operations of technology use today, in which computational intelligence will partner with us, not as a tool or device, but as a sophisticated set of compounded operations that will advise, guide, inform, suggest and empathize with our simplest tasks as well as our most complex goals as architects, designers and researchers.

Defining Intelligence

Definitions of intelligence vary with every discipline. James S.
Albus writes, "Even the definition of intelligence remains a subject of controversy, and so must any theory that attempts to explain what intelligence is, how it originated, or what are the fundamental processes by which it functions."4 The intelligence industry is very active across many different disciplines, including neuroanatomy, neurophysiology, neuropharmacology, psychophysics, and behavioral psychology, not to mention work in artificial intelligence research, robotics, computer science and computer-integrated manufacturing.5 While an exhaustive comparison of intelligence as it is understood across these disciplines is beyond the scope of this paper, there are several characteristics of intelligence that are common to many of
these disciplines, and they illustrate characteristics that may be appropriate to a general model for an intelligent design system. It is necessary to note that intelligence can be understood within the demands of each disciplinary type and is also typically partitioned into general levels. Albus recognizes these levels as corresponding to basic intelligence (the ability to sense the environment, make decisions and control actions), higher intelligence (the capacity to recognize objects, construct a world model and represent knowledge, and to reason and plan for the future), and advanced intelligence (the capacity for perception and understanding, choosing wisely, acting successfully under numerous complex circumstances, and prospering).6 Generally, a low-level intelligence like that of a swarm of bees can be understood as an implicit intelligence, generated from within the architecture of the (biological) system itself. This accounts for the apparently coordinated intelligence of insects and so on, which act on instinct rather than reasoned thought, yet can be said to display intelligent behaviour that increases the survival possibilities of their species. Conversely, high-level intelligence is generally considered explicit, and relies on calling from and reasoning through a knowledge base and knowledge representations or a world model. Interestingly, Turing frames the concept of higher and lower intelligence through the terms sub-critical and super-critical. Sub-critical intelligence in his example is that which produces one response for every single input, something akin to an instinctual response. A super-critical response, however, is the generation of multiple thoughts or outcomes, "…a whole theory, consisting of secondary, tertiary and more remote ideas".7 In Turing's test of machine intelligence in 1950, the common assumption of anthropocentrism is clearly defined through the device of his "imitation game", predicated on the idea that if a computer can fool an impartial human into thinking it is a human through its responses to questions, then that calculating machine should be considered intelligent. This test, while still contentious,8 has motivated and sustained the artificial intelligence communities for decades and continues to provide a cornerstone of intelligence research, even while other types of machine and biological intelligence form a substantial part of the field. In this sense, much AI and neurological research is conducted explicitly on the basis of explaining human/biological intelligence in such a way as to instantiate it in machine behaviour. Manuel De Landa, however, interrogates the possibility of non-human intelligence as an alternative to this anthropocentrism. In War in the Age of Intelligent Machines, De Landa writes a history of technology from the point of view of machines, arguing that "technological development may be said to possess its own momentum, for clearly it is not always guided by human needs."9 In this way, other forms of intelligent system may be possible to envisage, and indeed become necessary to work with, but have until recently been outside the mainstream of intelligence research. Intelligence generally, however, is assumed to encompass both biological and machinic/artificial instances, and any general theory of intelligence should then encompass both these instantiations. Indeed, most discussions of intelligence move easily between biological and artificial or synthetic examples.
Albus, a roboticist and control systems expert, ascribes the creation of intelligence precisely to natural selection and the evolution of survival mechanisms within biology, while seamlessly employing computational hierarchies, modules and frames to break down intelligence into replicable, applicable computational modules.
Jeff Hawkins and Sandra Blakeslee link intelligence specifically to prediction and memory, insisting that intelligence cannot be measured by behaviour: "Intelligence is measured by the predictive ability of a hierarchical memory, not by human-like behaviour."10 The central or peripheral role of behaviour as an indicator of intelligence is key to design systems yet still controversial, as Hawkins and Blakeslee's claim, for example, is refuted directly by the understanding in the AI community that intelligence is "that which produces successful behaviour."11 Turing complicates this further by recognizing that a requirement of intelligence is the return of unexpected results from any interaction with an intelligent calculating machine, noting, "It is probably wise to include a random element in a learning machine."12 Others, such as Pierre Levy, define intelligence not through a computational analogy or a dependence on behaviour, but through a socio-political lens akin to Marshall McLuhan's concept of a global brain and Manuel Castells' formulation of the network society. In Levy's thinking, "We pass from the Cartesian cogito to cogitamus. Far from merging individual intelligence into some indistinguishable magma, collective intelligence is a process of growth, differentiation, and the mutual revival of singularities."13 Intelligence in his case is aligned with an architecture of massive parallel processes creating intelligence through complexity. Levy's assumption is that intelligence is an inevitability of highly interconnected flows of information, foregrounding the anthropocentric aspect of a collective, technologically afforded and shared knowledge base that is part of an anthropological space he identifies as the "knowledge space".14 Interestingly, Levy includes technology as a foundational part of an anthropological definition, while being primarily concerned with the politicization of this knowledge space, linking intelligence to ideology. This allows him to state, for example, "Totalitarianism collapsed in the face of new forms of mobile and cooperative labour. It was incapable of collective intelligence."15 This aspect of intelligence has been developed more recently by Christopher Hight and Chris Perry,16 linking collective intelligence not only to forms of practiced intelligence with roots in the phenomena of emergence and complex information space, akin to Oosterhuis' definition, but also to the political/ideological agenda of Michael Hardt and Antonio Negri's Empire and Multitude.17 Yet another version of intelligence lies in the sophistication of contemporary highly automated manufacturing systems. According to Andrew Kusiak,18 intelligence in design and manufacturing situations is an awareness of many subsystems and, principally, the capacity to integrate a multitude of discontinuous inputs. For Kusiak, computational intelligence is an almost everyday part of the manufacturing and design process, motivating and linking systems such as automated material handling, data storage and retrieval systems, quality management systems, and CAD, CAM and CAPP systems. Kusiak, however, still defers to an anthropocentric model, stating for example that "Computational Intelligence…allows automated systems, e.g. robotics, to duplicate such human capabilities as vision and language processing." In this context, CI operates at the level of integration and protocol interpretation.
For the purposes of initiating this research, Albus provides the most direct and elaborated exposition in his "Outline for a Theory of Intelligence," defining intelligence as "the ability of a system to act appropriately in an uncertain environment, where appropriate action is that which increases the probability of success, and success is the achievement of behavioural sub goals that support the system's ultimate goal."19 Intelligence in these terms includes the integration of a range of essential components, including "behaviour generation, world modelling, sensory processing and value judgment."20 For the purposes of this brief discussion, Albus' definition will serve as a benchmark from which to elaborate some elements of intelligent systems from the broader I.S. community that could help define intelligence in a design system.
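To make the relationship between Albus' four components more concrete, the minimal sketch below composes them into a single decision loop. This is an interpretation for illustration only, not Albus' own formalism; the class, the method names and the design scenario are all hypothetical.

# Illustrative sketch of Albus' four components -- sensory processing,
# world modelling, value judgment and behaviour generation -- composed
# into one loop. An interpretation, not Albus' formalism.

class IntelligentSystem:
    def __init__(self, goal):
        self.goal = goal
        self.world_model = {}  # internal representation of the environment

    def sense(self, observations):
        """Sensory processing: update the world model from observations."""
        self.world_model.update(observations)

    def judge(self, action):
        """Value judgment: estimate how far an action advances the goal."""
        return self.world_model.get(action, 0.0)

    def act(self, options):
        """Behaviour generation: choose the action judged most likely
        to increase the probability of success."""
        return max(options, key=self.judge)

agent = IntelligentSystem(goal="daylight all rooms")
agent.sense({"rotate plan": 0.7, "add courtyard": 0.9, "deepen plan": 0.1})
print(agent.act(["rotate plan", "add courtyard", "deepen plan"]))
# -> add courtyard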
Considering characteristics of Intelligent Design Systems (IDS)

It is clear that AI and design systems will not overlap completely in their understanding of intelligence; however, it is still useful to compare aspects of one system with the other. As an example of some of the differences between IDS and AI systems, we might look at the issue of goals. The goals of design, as they are understood in this paper, are along the lines of creativity and innovation rather than problem solving. Goals within design and architecture in this sense are frequently ill-defined and contingent on typically intuitive explorations of a very broad design or problem space, all the more so in design research. In this sense, one of the characteristic differences of intelligent design systems from Albus's and other definitions is the ability to search out and recognize opportunity and innovation, and to adjust goals as opportunities present themselves. Within this context, "acting appropriately" is contrary to innovation, which involves generating deviations from normative or stochastically average behaviour. Indeed, acting inappropriately is more of a goal of design systems, where explorations that redefine a design or problem space are initially highly desirable. Goals within design are thus very fuzzy, with priorities changing depending on circumstances and results. Within AI definitions, goals are typically understood as highly determined and as coming from outside the intelligent system, whereas in design, goals could be said to be generated from within it. From the definitions above, it is possible to begin to isolate a non-exclusive list of characteristics that could be used to formulate a test for intelligence in computational design systems (encoded as a rough checklist in the sketch following the list). These might include:
The ability to respond to an environmental situation
The ability to deviate from normative or expected behaviour
Interaction with knowledge identities (databases) from inside and outside the design space
Internalized feedback and feed-forward loops for error checking and self-analysis
Integration of both hierarchical and horizontal systems architectures
An exceptionally broad world model, i.e. a set of assumptions about the world on which to base behaviour and value judgments
The capacity to integrate various systems protocols
The generation of behaviour
The ability to generate and evaluate new system goals
The ability to apply value-state variables or value judgments: "Unless machines have the capacity to make value judgments (i.e. to evaluate costs, risks, and benefits, to decide which course of action, and what expected results, are good, and which are bad) machines can never be intelligent or autonomous."21
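As a rough illustration, the characteristics above can be read as a grading rubric for a candidate system. The encoding below is a hypothetical sketch: the criterion names paraphrase the list, and the flat scoring scheme is an assumption of this example rather than a proposal from the AI literature.

# Hypothetical sketch: the characteristics above as a checklist for
# profiling a candidate design system. The flat scoring is an
# assumption of this example, not a proposal from the literature.

IDS_CRITERIA = [
    "responds to environmental situation",
    "deviates from expected behaviour",
    "interacts with knowledge inside and outside the design space",
    "internalized feedback and feed-forward loops",
    "hierarchical and horizontal architectures",
    "exceptionally broad world model",
    "integrates various systems protocols",
    "generates behaviour",
    "generates and evaluates new goals",
    "applies value judgments",
]

def intelligence_profile(capabilities):
    """Report which criteria a system meets and a naive overall score."""
    met = {c: (c in capabilities) for c in IDS_CRITERIA}
    return met, sum(met.values()) / len(IDS_CRITERIA)

met, score = intelligence_profile({"generates behaviour",
                                   "exceptionally broad world model"})
print(f"{score:.0%} of the listed characteristics")  # -> 20% ...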
Conclusion

As computational systems evolve from expert tools to fulfill both advisory and executive roles within generative design practice,22 a critical understanding of what defines intelligence in computational design systems is particularly pressing. Oosterhuis' definition is typical of the application of the term intelligence within current architectural discourse, equating intelligence with emergence or other forms of recognizable pattern or behaviour arising as a consequence of complex interactions or calculations. In this form, intelligence is often
ascribed to any system displaying complexity beyond human comprehension; yet in a general model of intelligent design systems we should be critical of assigning intelligence to phenomena we do not understand. Like Levy's collective intelligence, Oosterhuis' swarm intelligence operates at best at a low level when assessed through current AI definitions, as it lacks attributes of higher or advanced intelligence such as reference to world models, value judgments and so on, and it demonstrates the need for a more sophisticated approach to an evolving and maturing digital genealogy. Through comparison of several key properties of intelligence drawn from AI and neurophysiological research, benchmarks can be created to assess whether an IDS is indeed intelligent or not. Using these characteristics, it will also be possible to begin to sketch out a schema of levels of intelligence and of various systems architectures. Without this taxonomy, meaningful communication across the vast array of systems currently in use will become more difficult. The anthropomorphic measure of intelligence is also far less of an issue in computational design systems than within most AI applications, and IDS are dependent on new paradigms of intelligence in order to evolve. While most researchers agree that when artificial intelligence or intelligent systems do come along they won't be in any recognizable or expected form,23 basic issues such as the inference of intelligence through behaviour or pattern generation remain controversial. Since these basic assumptions remain unclear, it is timely to ask how we might begin to develop a theory of design in the age of intelligent systems. As the design workflow moves from image-based to information-based models, and as architects further develop working processes with the abstractions of script, algorithm and real-time data flows, the need for a common framework to distinguish and evaluate evolving forms of computational processes and partnerships will only become more urgent.
Endnotes

1. See Turing A, "Computing Machinery and Intelligence," Mind, 59, 1950, pp 433-460. http://www.abelard.org/turpap/turpap.htm (accessed 11.01.08)
2. For a general overview of digital design and manufacturing systems see Kolarevic B (ed.), Architecture in the Digital Age: Design and Manufacturing, London: Spon Press, 2003.
3. See Oosterhuis K, "A new kind of Building", in Graafland A and Kavanaugh L J (eds.), Crossover: Architecture Urbanism Technology, Rotterdam: 010 Publishers, 2006, p243.
4. See Albus J, "Outline for a Theory of Intelligence", IEEE Transactions on Systems, Man and Cybernetics, vol 21, no 3, May/June 1991, pp 473-509.
5. Paraphrased from Albus J, "Outline for a Theory of Intelligence", IEEE Transactions on Systems, Man and Cybernetics, vol 21, no 3, May/June 1991, p473.
6. See Albus J, "Outline for a Theory of Intelligence", IEEE Transactions on Systems, Man and Cybernetics, vol 21, no 3, May/June 1991, p474.
7. See Turing A, "Computing Machinery and Intelligence," Mind, 59, 1950, pp 433-460. http://www.abelard.org/turpap/turpap.htm (accessed 11.01.08)
8. John Searle's Chinese Room argument (1980) directly contests Turing's assumptions about the relationship between mind and intelligence, and remains controversial.
9. See De Landa M, War in the Age of Intelligent Machines, New York: Zone Books, 2000, p3.
10. See Hawkins J and Blakeslee S, On Intelligence, New York: Times Books, 2004, p210.
11. See Albus J, "Outline for a Theory of Intelligence", IEEE Transactions on Systems, Man and Cybernetics, vol 21, no 3, May/June 1991, p473.
12. See Turing A, "Computing Machinery and Intelligence," Mind, 59, 1950, pp 433-460. http://www.abelard.org/turpap/turpap.htm (accessed 11.01.08)
13. See Levy P, Collective Intelligence, Cambridge, MA: Perseus Books, 1997, p17.
14. See Levy P, Collective Intelligence, Cambridge, MA: Perseus Books, 1997, p5. For Levy, an anthropological space is "a system of proximity (space) unique to the world of humanity (anthropological), and thus dependent on human technologies, significations, language, culture, conventions, representations, and emotions."
15. See Levy P, Collective Intelligence, Cambridge, MA: Perseus Books, 1997, p3.
16. See Hight C and Perry C (eds.), Collective Intelligence in Design, AD vol 76, no 5, New York: Wiley Press, 2007.
17. See Hardt M and Negri A, Empire, Boston: Harvard University Press, 2000, and Hardt M and Negri A, Multitude, New York: The Penguin Press, 2004.
18. See Kusiak A, Computational Intelligence in Design and Manufacturing, New York: John Wiley and Sons, 2000.
19. See Albus J, "Outline for a Theory of Intelligence", IEEE Transactions on Systems, Man and Cybernetics, vol 21, no 3, May/June 1991, p474.
20. See Albus J, "A Theory of Intelligent Machine Systems", in Intelligent Robots and Systems '91: Intelligence for Mechanical Systems, Proceedings IROS '91, IEEE/RSJ International Workshop, vol 1, 1991, pp 3-9.
21. See Albus J, "Outline for a Theory of Intelligence", IEEE Transactions on Systems, Man and Cybernetics, vol 21, no 3, May/June 1991, p502.
22. Burke A, "After BitTorrent: Darknets to Native Data", in Hight C and Perry C (eds.), Collective Intelligence in Design, AD vol 76, no 5, New York: Wiley Press, 2007.
23. See Hawkins J and Blakeslee S, On Intelligence, New York: Times Books, 2004, p207.
Deconstructing Materiality
Harderials, Softerials, Minderials, and the Transformation of Architecture
Mahesh Senagala
University of Texas at San Antonio
www.mahesh.org

Abstract

This paper presents a deconstructionist close reading of conventional discourses about materiality by forwarding a triadic framework of harderials, softerials and minderials. The discussion draws on the Derridean notion of différance in articulating the fundamental difficulty of understanding materiality. Taking the discourse about materiality into the digital realm, a critical discussion of softerials (BREP solids, polynomial surfaces and isomorphic polysurfaces) and their implications for architecture is presented. Questions about a possible material-envy and materiality-complex in the architectural profession are also raised. The different binary strategies by which softerials are relegated by architects to the secondary status of "media" are exposed.

Preface
Figure 1: Kimbell Art Museum by Louis I. Kahn. Photo by HKCB. CC 3.0 License.[A] "You say to brick, 'What do you want, brick?' And brick says to you, 'I like an arch.'" (Kahn, 2003).
One of the fundamental assumptions about[B] architecture is that it belongs only to the physical and material world. I will question that assumption, as did K. Michael Hays twenty years ago: "That architecture is deeply and inescapably enmeshed in the material world may, on the first reflection, hardly seem a contentious proposition. And yet questions concerning the precise nature of the reciprocal influences between
architectural form and material life—matter and its irreducible heterogeneity in relation to individual subjects—bring opposing theories of architecture and its interpretation into forceful play" (Hays, 1988). Architecture becomes contentious because it lies at the crossroads of the pragmatic, philosophical, political, cultural, and metaphysical realms, all of which have their own agendas and aporias. Any claim of truth by one dimension is easily met with opposing claims from the other directions. New, alternative and seemingly minor developments such as Second Life® and other Massively Multiuser Online Worlds (MMOWs) point to the advent of a new era of digital materiality that calls into question the privileged status of physical materiality and conventional notions of architecture.
Figure 2: "Arlberg," a Second Life® Environment by Amalthea Blanc. CC 3.0 License.[C]

Once we probe it closely, materiality is revealed as a questionable, ambiguous concept that has served as architecture's primary source of legitimacy, legacy, and meaning. We will interrogate some of these notions to reveal materiality's privileged position in the discipline. Any notion of digital materiality—our current subject matter—must also, by implication, deal with the fundamental questions of materiality.

Mother of all Things

Conventional wisdom maintains that that which is not physical is not material. A little probing of the word and the concept behind it reveals that the 'matter' is not so simple or clear. Material is that which matters. The word material comes to the English language via the Latin materialis, from the Indo-European māter, which means mother, or that from which things originate. A material has to have an existence in order for it to be (later on we will consider the Heideggerian view of materiality). Material is that which matters. The question of 'what matters' goes to the question of relevance, resistance, power, and impact. Therefore, a material does not have to be physical to be of consequence.
The notions of "resistance" and "difference" are fundamental to an understanding of the notion of materiality. A material is that which exists not necessarily (or apparently[D]) only in the physical realm (what I call harderial), but also in the mental realm (minderial: all ideas are made up of minderials). Extending this line of thought to the digital world, we could postulate the notion of digital materiality, or softeriality[E]. Before I go any further, allow me to dwell a little on deconstruction, the approach that I would like to use to reveal some aporias in the conventional notions of materiality in architecture.

Deconstruction isn't passé

Jacques Derrida's strategies and activism in questioning, revealing and "shaking" the foundations of textual discourses have been quite valuable despite (or because of) their controversial stance. Deconstruction attained a good measure of notoriety in architectural circles in the eighties and early nineties. Unfortunately, we haven't seen much deconstruction lately. As scholars such as Michael Benedikt (Benedikt, 1991) and Mark Wigley (Wigley, 1993) pointed out, deconstruction has much to offer the world of architecture. My earlier "close readings" of software interfaces and programs go a step beyond buildings and deconstruct digital constructs (Senagala, 2004, 2007). We must be clear: there is no such thing as Derridean deconstruction as a defined set of operations or methods or principles or systems. Derrida himself declared that any effort to define deconstruction is bound to be false (Bernasconi, 1985). In a rare instance of clarity, Derrida unequivocally negates the notion that deconstruction is a method or an analysis or a critique: "deconstruction is neither an analysis nor a critique… It is not an analysis in particular because the dismantling of a structure is not a regression toward a simple element, toward an indissoluble origin. These values, like that of analysis, are themselves philosophemes subject to deconstruction. No more is it a critique, in a general sense or in Kantian sense." (Bernasconi, 1985). So, what I am attempting here could be described more aptly as my own version of deconstruction than as that of Jacques Derrida, despite the intellectual debt that I owe him. To me, deconstruction is a process of interrogation and of shaking foundations, in much the same vein as Derrida's work. The value of deconstruction to architecture is to expose any absolutist, idealist, strongly held notion of a metaphysic, or the perpetuation of a "universal truth" that must be accepted in some unquestionable manner by the disciples of architecture and beyond. Architecture has a rich tradition of prophets, authorities, philosophers, theoreticians, practitioners and gurus who are "certain" of quite a few things. But once we begin to question, carefully interrogate and look closely at how those certainties are constructed, we begin to see a host of shaky foundations, unraveling fasteners, cliques, and power plays often founded upon pedigree or authority. Deconstruction's greatest contribution has been to reveal the latent and suppressed agenda in absolutist valorizations. The formula by which philosophers, theorists and architects usually make their case goes something like this:
1. Select a pair of binary oppositions (physical and digital)
2. Glorify, admire, and purify the physical; abhor, minimize, belittle, and look down on the digital
3. Establish a routine to accentuate this polarity through corroboration, suppression, and the exercise of institutional authority of some kind.
So what does this have to do with materiality? Everything, I submit! When something is privileged, there must be an artful enforcement of a structural framework founded upon binary oppositions, valorizations, and networks of semantic chains that extend from architectural monograph to monograph, text to text and studio to studio across the continents, and that resist probes into the hidden assumptions, subtexts and glossed-over ambiguities.

What matters?

In arguments for harderiality, a privileging of all things physical takes precedence. The digital then takes on a subservient, instrumental or secondary role. The physical becomes the destination while the digital becomes, at best, a vehicle, a medium, a 'mere tool' for achieving the physical. In reality, it may well be that the programmer who writes the program with which architects design and build is creating more "value" than the architect herself. This value may be reflected in higher salaries, higher social standing or greater influence in society, which are the bottom lines. In the case of MMOWs, the value of the software and softerial environments—as measured by market capitalization, capital movement, number of users, intellectual property generated, and other metrics—might far outweigh the value of the harderial architecture in whose creation the software might play a role. What matters? My intention is not to put softeriality on a pedestal. Rather, I am simply deconstructing the conventional valorizations of harderiality and the suppression of softeriality. Elsewhere, I have written about the far-reaching impact of software systems on architectural design and the discipline (Senagala, 2004, 2007). From Derrida, we learn that clarity is possible only within a delimited framework of suppressions and assumptions. Once we begin to see the shakiness of those assumptions and suppressions, the whole notion of clarity unravels. The notion and feeling of clarity are both sources of comfort, as they enable us to rest in the security that they offer. In making this statement, my intent is to say that where there is a sense of certainty and comfort and clarity, there are "fortifications" that protect the interiority of the theoretical construct from an exteriority of uncertainty, undecidability, and ambiguity. So, materiality is a construct, despite our "sense of certainty" that there are material things and that those material things are of primordial importance in architecture. The construct is often founded upon a suppression of the question "what matters and why." Once that question has been posed, the difficulties of asserting the certainty are exposed. What matters is an undecidable matter of complex social, political, economic, and existential discourse.

Materiality with a Différance

Materiality is fundamentally existential. The moment we invoke being, we invoke nothingness. And we owe Jean-Paul Sartre an intellectual debt of gratitude for his profound discussion of Being and Nothingness (Sartre, 1956). Inherent in being is non-being, which is not its opposite but its potentiality, a fundamental différance, to borrow a Derridean notion. That which is is recognized by its différance. That which matters persists in various ways, through resistance and différance. Things exist only in relationship to other things. Other things exist only in relationship to more things.
It would be a futile abstraction to think of the identity of things-in-and-of-themselves, pure, isolated, and unconnected. The meaning of a thing is indefinitely deferred to and drawn from the meaning of other things, which further defer meaning to more
things in a fluctuating and expanding network of unstable relationships. Things attain identity through difference, not through an essence that is somehow intrinsic. Essence is a difference. In other words, the essence of a thing is never "present" in a thing. A fundamental aporia lies in any argument that there is an essential materiality inherently "present" in a thing. Later on we will see how this understanding applies equally well to harderials, softerials and minderials.

Heideggerials

"That which gives things their constancy and pith but is also at the same time the source of their particular mode of sensuous pressure—colored, resonant, hard, massive—is the matter in things. In this analysis of the thing as matter (hyle), form (morphē) is already composited. What is constant in a thing, its consistency, lies in the fact that matter stands together with a form. The thing is formed matter" (Heidegger, 1993). Can something be "formed" and yet be without matter? Can there be matter without form? Has anyone ever witnessed matter without form, or form without matter? Put differently, can there be matter that can be understood through frameworks—a priori or a posteriori—other than geometric form[F]? That which exists is a thing in the sense that Martin Heidegger has given: "on the whole the word 'thing' here designates whatever is not simply nothing" (Heidegger, 1993). By this definition, softerials and minderials also qualify as materials from which things originate. If a thing, as Heidegger defines it, is not an abstraction but a concrete experience, then what is material or matter in distinction to thing? Is materiality an abstraction, and hence a matter of textual discourse? Is there matter that truly matters and matter that does not matter? These questions need merely be posed to comprehend the ambiguities surrounding physical, or any other form of, materiality.

Softeriality

Broadly speaking, softeriality refers to a different kind of matter from which "things originate" differently. Softerials are a new breed of (digital) materials out of which a new world is being produced, not just in architecture but in virtually all fields. Although it is difficult or impossible to define the notion of softeriality precisely, we can sense the intense penumbra of concepts that surround it. Elizabeth Grosz's observations echo the impact of softeriality: "The space, time, logic and materiality of computerization threaten to disrupt and refigure the very nature of information and communication, as well as the nature of space, time, community, and identity" (Grosz, 2001). Softerials are not just geometric beings, albeit they can be manifested in geometric form. The world today is animated by softerials. More than 98% of the United States' financial transaction system exists in softerial form (which means that less than 2% is made up of physical material), moving at the speed of light. Softerials are time-based. Softeriality is rooted, for the most part, in computational intelligence. Softerials are transmissible, translatable, and interactive. Second Life® (www.secondlife.com) is an interesting example of a softerial world complete with its own functioning dual economy (internal and external) as well as an evolving social structure.
Figure 3: A BREP Solid

Of immediate interest to the architectural community is a subspecies of softerials that have a geometric manifestation. Let us look into BREP solids as a case. In simple terms, a BREP solid is defined as a volume completely bounded by planar surfaces with a specific topological structure. Many of the popular CAD programs use BREP solid modeling. BREP solids are interesting softerials. They have a sense of mysterious interiority that they maintain at all times while presenting an exteriority of flat surfaces and sharply defined edge conditions. Their definition arises out of edge and corner conditions, while most of the surface is left uniform and ambiguous. Other than the geometric or gravitational centers of the faces, the rest of the surface remains anonymous unless specifically engaged or interacted with. If subjected to sectional cuts, the solids "heal" and "conceal" along the cuts and maintain the differentiation between inside and outside. Historically, BREP solids sprang from concerns about the limitations of CSG (constructive solid geometry). While CSG is based on the primacy of primitives as a way to build more complex geometric entities, BREP solids are based on the connections between a set of surface elements. What matters in BREP solids is the edge condition or the "periphery," not the center, in a curious inversion of the harderial convention in which the center is privileged over the periphery. The kinds of manipulation, play, geometric negotiation, and materiality that BREP solids afford are unique and different from those of any known harderial. The edginess of BREP solids gives them a specific flavor that no harderial can come close to. BREP solids do not necessarily need to refer to a harderial space. They may refer to economic space or political space or any number of other spaces discussed by Henri Lefebvre (Lefebvre, 1991), and spaces that are unfolding in softerial worlds such as Second Life. This example should suffice to show that the nature of softeriality differs from harderiality but is not opposed to it. The same could be said of polynomial surfaces (splines) and blobs (isomorphic polysurfaces), which offer different material possibilities in their own right[G].
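The boundary-representation idea sketched above can be made tangible in code. The following is a generic, illustrative data structure, not the internal format of any particular CAD package; the class and method names are hypothetical.

# Generic BREP sketch: a solid described entirely by its bounding
# topology (vertices, edges, faces) rather than by volumetric
# primitives as in CSG. Illustrative only; not any CAD system's format.

from dataclasses import dataclass, field

@dataclass(frozen=True)
class Vertex:
    x: float
    y: float
    z: float

@dataclass(frozen=True)
class Edge:
    start: Vertex
    end: Vertex

@dataclass
class Face:
    boundary: tuple  # ordered loop of Edges enclosing a planar patch

@dataclass
class BrepSolid:
    faces: list = field(default_factory=list)

    def edge_counts(self):
        """Count how many faces share each edge: the 'edge condition'
        from which the solid's definition arises."""
        counts = {}
        for face in self.faces:
            for e in face.boundary:
                key = frozenset((e.start, e.end))
                counts[key] = counts.get(key, 0) + 1
        return counts

    def is_closed(self):
        """A watertight solid: every edge is shared by exactly two faces,
        maintaining the differentiation between inside and outside."""
        return all(n == 2 for n in self.edge_counts().values())

In this representation, a sectional cut would amount to recomputing the face loops so that the boundary "heals" and the solid stays closed, echoing the behaviour described above.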
Material-envy or Materiality Complex

In architectural circles there is a definite unease about anything digital. The digital is often the least understood, feared, shunned, quarantined within harderial-dominated curricula, and debated more vigorously than anything else. This stance is ironic when we consider the fact that the entire profession of architecture is founded upon the notion of "knowledge" and not upon physical things! As Greg Lynn pointed out some time ago, "architecture is a profession concerned with the production of virtual descriptions as opposed to real buildings" (Lynn, 1999). So there lies a certain hypocrisy. Many architects value physical things, and yet that physicality is outside the realm of their professional ken! It is that separation, that distance, that impossibility of possession, which holds the tantalizing seduction and pleasure of materiality. So, it is ironic for a profession whose primary legitimacy is based on "virtual descriptions" to be shunning the value of softeriality. There may be some sort of denial and harderial-envy at play here. Kostas Terzidis concurs: "Issues related to virtuality, ephemerality, continuity, materiality, or ubiquitousness, while originally invented to explain digital or computational phenomena, are utilized today in the context of a traditionally still material-based architecture…Is materiality subject to abstract digital concepts? What is (or is not) important?" (Terzidis, 2006). Can architects, as a profession, leave their comfort zone of holding on to harderiality as the legitimizing discourse of architecture? Does this harderial-envy represent a sort of Freudian dynamic of desiring a thing that cannot be controlled, cannot be had, or that is a downright legal taboo? Is the limitation of the professional framework to virtual descriptions, drawings and drawing-like digital databases resulting in a sense of inadequacy, impotence[H] and, perhaps, a repression of the virtual and, consequently, a repression of softeriality? Does this condition reflect what Tafuri called "gymnasts within a prison yard"? Do we accept the condition of limitation, or do we resist the polarization of harderials versus softerials and reconstitute a discipline of architecture that, without inhibitions, embraces softeriality and the architecture of soft realities?

When Medium is Xtra Large: Medium is the Material

One way harderials maintain their privileged status is by relegating softerials to the status of "media." Conventional wisdom states that a design work begins with the formation of an idea in the small but complex neural network of the conditioned human brain. The idea then grows in a medium of drawings, databases, models, and so on, and finally becomes the built work, often its largest manifestation. That is the conventional belief. The separation between medium and end product used to be clearer when physical buildings (steel, brick, stone, concrete, et cetera) were the only anticipated result. The day architects stopped using the heuristic process of building directly on site with bricks and mortar, the day architects started resorting to drawings and other media before the buildings were built, the materiality of the end product ceased to be the primary factor that affected the spatiality and tectonic of the building. Like a mind that is shaped by the experiences of the past, the materiality of a medium is manifest in any building. Today, we find ourselves in many situations where the differences between medium and product simply cease to exist.
Where does a medium end and a building begin? How does the notion of difference play into the discourse about (digital) materiality? What matters? Once the difference between medium and building vanishes, medium becomes the material out of which buildings are made. Medium is the material. This medium is so large
now, larger in scope, impact, dynamism, participation, and potential, that it ceases to be a medium.

Inconclusions: Gymnasts within a prison yard?

When it comes to what matters, the discipline of architecture still privileges harderiality over softeriality. It may be a marginalizing game. Manfredo Tafuri's analogy was brilliant: "how ineffectual are the brilliant gymnastics carried out in the yard of the model prison, in which architects are left free to move about on temporary reprieve" (Tafuri, 1980). Within a limited framework of formal possibilities, architects construct an elaborate system of gymnastics. Architects' notion of value is rooted in notions of well-crafted buildings and a vague metaphysic of the experience of harderial space. The notions of craft, detail and tactility are valorized within a prison that remains distant from what matters and from those to whom it matters. The recent wave of digital fabrication presents a strange conundrum. Is it an unwitting demonstration of privileging the "physical" materiality of harderials over softerials? Or is it a move past the polarization of harderiality versus softeriality by making the digital subservient to the production of the physical? Architecture, as a discipline, seems to be dogged by a love of binary oppositions, while other design disciplines seem more willing not to fall into this binary trap of absolutisms. What difference does softeriality make to the world of architecture? How does it transform not just our conception of materiality, but also the scope and the manner in which we practice, teach, and build works of architecture in harderials and softerials alike? Are softerials just media? Are harderials always the end products? What happens when the medium itself gains more value (by most measures) and has more impact than the end product? Does the end product then become a by-product, an aside of little—albeit boutique—consequence? Should this emerging inversion be reflected in academic and professional bodies, curricula, and licensure? Could we finally ask, without resorting to a harderial reality, "What does a softerial want to be?"

References
1. Benedikt, M. Deconstructing the Kimbell. New York: Lumen Books, 1991.
2. Bernasconi, R., et al., eds. Derrida and Differance. Warwick: Parousia Press, 1985.
3. Derrida, J. Edmund Husserl's Origin of Geometry: An Introduction. Stony Brook, NY: Nicolas Hays, Ltd, 1978.
4. Derrida, J. Writing and Difference. Alan Bass, tr. Chicago: University of Chicago Press, 1978.
5. Grosz, E. Architecture from the Outside. Cambridge, MA: MIT Press, 2001.
6. Hays, K.M. "Editorial." Assemblage, no. 5, Cambridge, MA: MIT Press, February 1988, pp 4-5.
7. Heidegger, M. Basic Writings. New York, NY: Harper Collins, 1993.
8. Kahn, L. Louis Kahn: Essential Texts. Robert Twombly, ed. New York, NY: W.W. Norton, 2003.
9. Lefebvre, H. The Production of Space (trans. D. Nicholson-Smith). Oxford: Blackwell, 1991.
10. Lynn, G. Animate Form. New York: Princeton Architectural Press, 1999.
11. Sartre, J. Being and Nothingness. Hazel Barnes, trans. New York, NY: Philosophical Library, 1956.
12. Senagala, M. "formZ in flatWorld: A Critical Close Reading," in Murali Paranandi, ed., Digital Pedagogy. Columbus, OH: AutoDesSys, February 2007.
13. Senagala, M. "Deconstructing the Software Interface: A Critical Close Reading of AutoCAD," in J. Bermudez and Jose Ripper Kos, eds., International Journal of Architectural Computing (IJAC), Multiscience, V.7, December 2004.
14. Tafuri, M. Theories and History of Architecture. New York, NY: Harper and Row, 1980.
15. Terzidis, K. Algorithmic Architecture. Amsterdam, Boston: Architectural Press, 2006.
16. Venturi, R. Complexity and Contradiction in Architecture. New York, NY: MOMA, 1966.
17. Wigley, M. The Architecture of Deconstruction: Derrida's Haunt. Cambridge, MA: MIT Press, 1993.

A. http://flickr.com/photos/hkcb/399419674. Last accessed 3-24-2008 00.04AM.
B. The famous deconstructionist technique of sous rature, or "under erasure," has been used extensively in the current close reading to communicate the impossibility of fixed or certain meaning. Meaning is undecidable.
C. http://flickr.com/photos/tealthea/2355827908/in/pool-sl_architecture. Last accessed 3-24-2008 00.11AM. Second Life URL: http://slurl.com/secondlife/Arlberg/221/219/166/
D. All this has a strange resonance in Vedic and Buddhist texts, where we find the notion that the whole world is a sort of illusion (māya) and play (nātakam). Certain Vedic and Buddhist texts negate the notion of harderiality and harderialism and any sense of false security to be found in clinging to material "things." From a different perspective, quantum physics has clearly shown that there is nothing so clear about "harderiality" in the universe. Physicists have time and again observed that there are no "things" but only fields of energy that come together to form an illusion of things.
E. I would also like to distinguish between the digital and the virtual. Digital refers to specific technical means of working with data that can be represented, computed, transmitted, translated, and experienced. Digital need not be non-physical. Virtual refers to any potentiality that is latent in any harderials, minderials and softerials. Virtual differs from physical.
F. Derrida's Edmund Husserl's Origin of Geometry: An Introduction is a direct interrogation of Husserlian and phenomenological notions of the certainty of geometry as absolute truth. Husserl had intended to rescue philosophy from the then-increasing claims of phenomenologists that truth lies (pun intended) in intuition and that the world consists of phenomena rather than hard facts and truths "out there." He had set out to show that geometric truth exists outside intuition and experience: that at some point in the history of humankind it had been intuited by some human being, and that this triggered the development of a body of knowledge. Derrida, who wrote a 131-page introduction to the 26-page essay by Husserl, mildly called it an "introduction," although it was what he later came to call "deconstruction." Derrida's interrogation of Husserl's text revealed internal inconsistencies in Husserlian logic. This interrogation might not mean much to architects at large, particularly those embroiled in the pragmatics of practice. But it has implications for architecture and for the various theories and practices that often stem from absolutist notions of the "truth" or "legitimacy" or "ultimate validity" of the geometric notions in their minds. Architects and architectural theorists would rather have it both ways: they wish to base their work on some absolute notions and yet practice in a world of contingency and pragmatism. That would be, to borrow from Derrida, an aporia or a fundamental inconsistency.
G. For an earlier discussion of softerials, see Senagala, M. "Production of Digital Space: On the Nature of Digital Materiality," in H. Pentilla, ed., Proceedings of the eCAADe Conference, 2001.
H. It is a well-known fact that the architectural profession is not well represented in the circles of power, particularly economic and political power. There has been only one architect ever elected to the United States Congress, Ambassador Richard Swett, who went on to write about this problem in his book Leadership by Design: Creating an Architecture of Trust (Greenway Communications, 2005). This may be a consequence of the absence of adequate capital flow through the professional networks that is necessary for people to get elected, exert political influence, and bring about tangible change.
Rethinking the Space of Intelligent Environments
Erik Conrad
Concordia University, Montréal, Canada
erik.conrad@peripheralfocus.net

Abstract
Technologies are not mere exterior aids but interior changes of consciousness that shape the way the world is experienced. As we enter the age of ubiquitous computing, where computers are worn, carried or embedded into the environment, we must be careful that the ideology the technology embodies is not blindly incorporated into the environment as well. As disciplines, engineering and computer science make implicit assumptions about the world that conflict with traditional modes of cultural production. Space is commonly understood to be the void left behind when no objects are present. Unfortunately, once we see space in this way, we are unable to understand the role it plays in our everyday experience. In this paper, I argue that with the realization of the vision of ubiquitous computing, the fields of computer science and engineering reify the dominance of abstract space in real space. A new approach to the design of computing systems is necessary to reembody space. The social nature of the interface allows us to situate it within Henri Lefebvre's notions of space, providing new tools for thinking about how computing practice engages space as well as opening avenues to rematerialize the environment through embodied interaction.

1. Transforming consciousness
New technologies signal transformations of both individual and cultural consciousness. In the early days of computing, the belief that advancements in 'thinking machines' would unfold as quickly as other recent and ongoing technological achievements was widely held. The promise of intelligent machines led to speculation among architects about the exciting prospects of designing intelligent environments. This enthusiasm dwindled as the potential of artificial intelligence proved overstated, due to misconceptions about 1) what is possible to achieve with computers and 2) the nature of intelligence itself. Vestigial traces of these original visions can still be seen in places as diverse as ecological building design and ubiquitous/pervasive computing research, even if no longer connected to their forebears. Although the computing machinery now available is orders of magnitude more powerful than that of the initial dreams and experiments, current applications lack the magic inherent in those earliest proposals. Why is this? I propose that despite a firm grasp of what computers can do, approaches to intelligent environments are still fraught with a misconception about the nature of intelligence. This underlying assumption about what computers should do hampers the potential of a fruitful marriage between architecture and computing. What is needed is a path that bridges concerns in both the design of computing systems and the design of space. Henri Lefebvre's dissection and discussion of space provide a rich framework to contemplate the intersection and overlap of architectural and computer-mediated spaces. Lefebvre believes that any attempt to understand the contemporary world that ignores spatial considerations is partial and incomplete. The meanings that we attribute to space are inextricably bound with our understandings of the world in which we live. Our basic understanding of the world originates from the sensory spatial relationship between our body and the world.
Conversely, the computer is a product of "a nineteenth and early twentieth century scientized approach to the world: that mind is separable from body; that it is possible to understand a system by reducing it to its components and studying these components in isolation (that the whole is no more than the sum of its parts); that the behavior of complex systems can be predicted."1 While useful, I would like to suggest that this is not necessarily the world in which we would like to live. If we combine the field of computing with a different set of underlying assumptions, we may be able to create a richer world. Recent advances in science support philosophies of mind that employ a different relationship between body and thought, organism and environment. Our understanding of space is directly related to our understanding of the space of our body, which has long been sundered in Western culture by the Cartesian duality. If we do not accept this separation, what is the resultant space? This new understanding can change the ways in which we live and imagine the present, including how we can use computational media as a 'tool for thinking' in the precipitant space.

2. Towards the age of ubiquitous computing
Technologies are not mere exterior aids but interior changes of consciousness that shape the way the world is experienced.2 We are currently in the midst of a collective change of consciousness: the age of the information machine. The computer arises from a Western scientific ideology built upon the assumption that the mind is separate from the body. The influence of this assumption is present at all levels of the technology, from the architectural level in the hardware/software split to the reduced set of bodily senses and movements engaged by its interface. This conflict between the abstract and the embodied is beginning to take the stage of the everyday as the digital/informatic realms, which have been inherently abstract, come directly into contact with cultural forms which have traditionally been inherently bodily processes. As we enter the age of ubiquitous computing, where many small computers will be worn, carried or embedded into our everyday environment (as computers 'disappear'), we must be careful that the values they embody are not blindly incorporated into the environment as well. Early development in information technology followed the legacy of industrial interface design. In the early 20th century, as automation replaced humans in the workplace, its goal was to eliminate participation wherever possible. Consideration for the user has lagged behind the need to interact with computers. Computer science has a history of venturing blindly into disciplines, wielding the authority of the capital used to finance its research. For example, years of computer animation research were conducted before any computer scientists had any meaningful interaction with an animator. While research in human-computer interaction has been fruitful in certain areas, such as visual displays, it is not prepared to take on the design of physical spaces. In his book Digital Ground, Malcolm McCullough states, "Notions of what a computer is have not kept pace with realities of how digital systems are applied." The use of computers has evolved from its origins in mainframe computing, with one computer for many users, to the current age of desktop computing, with a one-to-one ratio of computers to users. Recent trends in computing have given rise to a third age of computing, in which many, possibly thousands, of small computers are worn and/or embedded into the environment. In this age of ubiquitous or pervasive computing, the human/computer ratio jumps from 1:1 to 1:1000s. In some ways, it can be argued that the age of ubiquitous computing is well on its way.
The average American already owns twenty or more computers, although most are devices that someone would normally not refer to as a computer. Televisions, VCRs or DVD players, microwave ovens, cell phones, as well as many components of modern automobiles (anti-lock brakes, fuel injection and climate control, for example) contain information processing components. Even today, these computers, often referred to as embedded systems, are being produced at much higher volume (billions/year) than desktop PCs (millions/year). At the current moment, the vast majority of these computers act in isolation. However, in the future, an increasing number of computers embedded in the environment will be networked, and communication, sensing and information processing will disappear into the environment. As information technology becomes part of the social infrastructure, it demands design consideration from a broad range of disciplines. Appropriateness now surpasses performance in importance in technological design. "Appropriateness is almost always a matter of context. We understand our better contexts as places, and we understand better design for places as architecture."3 How does the computer participate in the world it represents? This question illustrates the design challenge that results from the conflict between the "(quintessential) product of engineering" and all of the "spaces" that it inhabits. Computation is a fundamentally representational medium, and as the ways in which we interact with computers expand, so does the importance of attention paid to the duality of representation and participation.4 The focus of this attention, and the place where this conflict is potentially best resolved, is the interface, the point or area in which the person and computer come into contact. Somewhat appropriately, 'context' is a popular topic in current ubiquitous or pervasive computing research. Most early papers, and even some recent ones, make a point to say that context is more than just location.5 What is included in context changes from researcher to researcher, but typical additional variables include time, identity, and the identity of others. Location is often an (x,y) pair, or latitude and longitude if using GPS; sometimes location is specified by building or room (a sketch of such a context record follows below). The overwhelming majority of these research environments are for work settings and are focused on applications such as "How can we tell when a meeting is taking place?", presumably so that it can be recorded. Although it is a step forward that computing has realized the importance of the social, and has begun in its own way, with the aid of interdisciplinary researchers, to understand it in relation to computing, it is primarily focused on work environments. Social and spatial interactions as they relate to the production of capital are important, not the implications of technology on the everyday. However, computing has become part of the ambient, social, and local provisions for everyday life, and as such it becomes important to look at the larger impact of computation on culture. Computing has revolutionized almost every discipline, and is continually increasing its presence in day-to-day life. Yet it reifies an ideology which subordinates the body and physical experience.
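As a deliberately reductive illustration of what such research tends to mean by 'context', a record like the following is often assumed; the field names here are hypothetical stand-ins, not any particular system's schema:

```python
# A hedged sketch of a typical "context" record in ubiquitous-computing
# research: location plus a few of the other commonly cited variables.
# All field names are illustrative assumptions, not an actual system's API.
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional, List

@dataclass
class Context:
    latitude: float                     # location as GPS coordinates...
    longitude: float
    room: Optional[str] = None          # ...or symbolically, by building/room
    time: datetime = field(default_factory=datetime.now)
    identity: str = "unknown"           # who the user is
    others: List[str] = field(default_factory=list)  # identities of others present

# a work-oriented "meeting detector" would simply watch for several
# identities sharing one room at the same time
snapshot = Context(45.497, -73.579, room="EV 7.725", identity="alice",
                   others=["bob", "carol"])
```

What such a record leaves out, of course, is precisely the lived, embodied space with which the remainder of this paper is concerned.

3. A new sense of space for computing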
Lefebvre confronts considerations of space that reside "comfortably enough within the terms of mental (and therefore neo-Kantian or neo-Cartesian) space." His central claim, that space is a social product, directly challenges the predominant "idea that empty space is prior to whatever ends up filling it."6 Lefebvre's re-conceptualization of space is, at least partially, related to his conception of the body and its place in Western culture. "Western philosophy has betrayed the body; it has actively participated in the great process of metaphorization that has abandoned the body; and it has denied the body."7 Lefebvre describes the body, as he does many things, in the form of a triad: perceived–conceived–lived. Introducing a third term into the equation already destabilizes any notion of Cartesian duality. The body, as simultaneous subject and object, "cannot tolerate such conceptual division,"8 and can be liberated through a production of space. This occurs, in part, through the distinction between physical, social and mental space. Lefebvre states:

Social space will be revealed in its particularity to the extent that it ceases to be indistinguishable from mental space (as defined by philosophers and mathematicians) on the one hand, and physical space (as defined by practico-sensory activity and the perception of 'nature') on the other.9

All interactions with computer systems are at some level a social activity. Computation can be both a tool of and a structuring force behind the relationships between people, institutions and practice. Even if one uses a computer in isolation, there is a social interaction present between the user of the system and the designer of the system. A user only knows how to use a computer system through a shared set of social expectations. Empty space thickens when mixed with information, making space itself an interface, and thus part of social space. The unique properties of social space allow it to become the site for reconciliation between the physical and the mental, the concrete and the abstract. Going one step further, social space can be broken down into the triad spatial practice–representations of space–representational space. Lefebvre describes each as follows:10

1. Spatial practice, which embraces production and reproduction, and the particular locations and spatial sets characteristic of each social formation. Spatial practice ensures continuity and some degree of cohesion. In terms of social space, and of each member of a given society's relationship to that space, this cohesion implies a guaranteed level of competence and a specific level of performance.

2. Representations of space, which are tied to the relations of production and to the 'order' which those relations impose, and hence to knowledge, to signs, to codes, and to 'frontal' relations.
3. Representational spaces, embodying complex symbolisms, sometimes coded, sometimes not, linked to the clandestine or underground side of social life, as also to art (which may come eventually to be defined less as a code of space than as a code of representational spaces).

Spatial practice is closely related to perceived space. It is the space secreted by society, recursively reifying it. It falls between daily routine and the infrastructure that allows it: the actual routes and networks that organize the daily routine. Ultimately, it is in spatial practice where the effects of ubiquitous or pervasive computing design will be felt and internalized. Computing is part of the infrastructure that organizes daily life. Representations of space refers to conceived space. It is the space of scientists, architects, urban planners and all who privilege the cognitive over the perceptual or lived. It is the dominant space in our society, and it is the space of contemporary visual and computing cultures. It is a mental space separated from physical space, or abstract space imposed on concrete space. Representational space corresponds to lived space; it is where meaning resides. It is "directly lived through its associated images and symbols."11 It is the passively experienced space, which overlays physical space, and which the imagination is able to change and appropriate. Representational spaces "tend toward more or less coherent systems of nonverbal symbols and signs."12 Embodied interaction moves the design of computing systems from representations of space to representational space, from conceived to lived space. These spaces are not always clearly differentiable; they overlap and intermingle in varying intensities. Lefebvre states that in order to understand these three moments of social space, one can map them to the body: the spatial terms (spatial practice, representations of space, representational space) are analogous to the bodily triad of perceived–conceived–lived.13

Table 1. Lefebvre's spatial and body triads.

  Physical            Mental                      Social
  Spatial Practice    Representations of Space    Representational Space
  Perceived           Conceived                   Lived
Lefebvre seems to imply that these triads are in some ways analogous although different. If social space reconciles the duality of the mental and the physical with a nature that is both abstract and concrete, one may also argue that representational space holds a similar position between spatial practice and representations of space, just as the lived does between the perceived and the conceived. If all interactions with computer systems are social, and the social is the space of embodiment, where physical and mental co-mingle, this is where we should begin to rethink design. The layered interfusion of spaces presented by Lefebvre provides a rich framework for thinking about the possibilities of designing computationally mediated environments as they extend into everyday space while simultaneously reflecting a careful negotiation between technology and human beings.

Notes
1. See Penny, S. "The Virtualisation of Art Practice: Body Knowledge and the Engineering World View," CAA Art Journal, 1997, pp. 30-38.
2. See Ong, W. Orality and Literacy, London: Routledge, 1982.
3. See McCullough, M. Digital Ground: Architecture, Pervasive Computing, and Environmental Knowing, Cambridge, MA: MIT Press, 2004.
4. See Dourish, P. Where the Action Is: The Foundations of Embodied Interaction, Cambridge, MA: MIT Press, 2001.
5. See Abowd, G. and Mynatt, E. "Charting past, present, and future research in ubiquitous computing," ACM Transactions on Computer-Human Interaction, Vol. 7, Issue 1, March 2000, pp. 29-58.
6. See Lefebvre, H. The Production of Space, Malden, MA: Blackwell Publishing, 1974, p. 15.
7. Ibid., p. 407.
8. Ibid.
9. Ibid., p. 27.
10. Ibid., p. 33.
11. Ibid.
12. Ibid., p. 38.
13. Ibid., p. 40.
Teleplasty: The basis of chemosynthetic design
Lydia Kallipoliti
Princeton University / Greece
E-mail: lydiak@princeton.edu
Alexandros Tsamis
Massachusetts Institute of Technology / Greece
E-mail: tsamis@mit.edu
Abstract
Is it possible that psychoanalysis, a discipline that allegedly deals with abstract or invisible entities, and entomology, a discipline that predominantly taxonomizes insects by type, can offer us an insight into the nature of digital design processes and emergent material phenomena? One of Roger Caillois' most controversial psychoanalytic theories, "teleplasty," shows that psychoanalysis and entomology can indeed suggest an alternative perspective on how bodily or other material substances are initially fabricated by insects and how they can further transform. In several of his case studies, Caillois claims alliances between material and psychical structures in his psycho-material teleplastic theorem and eventually questions spatial distinctions: distinctions between geometry and material, purpose and function, cause and effect, between the imaginary and the real. Can digital media help us redefine the static relationship between a window and a wall as an interaction of chemical substances rather than a process of assembling joints and components? Can we perceive material, not as an application to predetermined geometries, but as an inherent condition, a subatomic organization of matter that precedes geometry? The aim of this paper is to problematize such distinctions as a discussion emerging through the prolific use of digital design processes.
From whatever side one approaches things, the ultimate problem turns out in the final analysis to be that of distinction: distinctions between the real and the imaginary, between waking and sleeping, between ignorance and knowledge -- all of them, in short, distinctions in which valid consideration must demonstrate a keen awareness and the demand for resolution.1

Among distinctions in the production of space, one assuredly clear-cut is that between a wall and a window: the former a solid, the latter a void. Always, in the design of exterior building envelopes, this distinction is comprehended through the discrete demarcation of transparent and opaque areas; an envelope is partially pierced, framing perimeters of transparency and opacity. However, in his influential essay "Mimicry and Legendary Psychasthenia,"2 where Roger Caillois problematizes the scientific validity of clean-cut distinctions in the field of entomology, he proposes an alternative definition for a synthetic three-dimensional articulation of solids and voids that he identifies as teleplasty. Caillois, whose work idiosyncratically crossbred literary criticism, social sciences and psychoanalysis with his concurrent studies in biology, mineralogy and geology, particularly examines the process of transformation in certain species of insects on their way to "acquiring the morphological character," arguing that the skin coloring and ornamentation of certain insects could not be interpreted crudely as the defensive reaction of a subject in its survival struggle. Due to pragmatic factors, which many biologists had pointed out at the beginning of the 20th century, predators are not all fooled by homomorphy or homochromy; other senses besides vision guide predators in tracking their prey. Further, inedible species, which would have nothing to fear from predators, are also mimetic.3 In light of this evidence, Caillois maintained that the primary cause of mimetic transformation is fascination. In parallel, it indicates a multivalent disorder between the insect and its environment, or more precisely a pathologic display of amalgamation with space, as the subject enters into a psychology of depersonalization—psychasthenia—and attempts to achieve a biotic desynthesis with its surroundings. Caillois concluded that survival was in effect an "epiphenomenon whose utility appears to be null."4 In the course of the metamorphic procedure, the subject retroverts to a primitive stage of development, turning back to previous life stages, as in the return of a primitive era where the relationship between the subject and the environment was structured in different terms. This return is of such dramatic nature that the subject is diffused in space and time, while it distills entirely different organic behaviors and capacities, giving rise to phenomena that would be inconceivable under normal circumstances.
In regression, there is a loss of functional unity, and the various ego systems, both sensory and executive, operate in an asynchronous fashion.5 But beyond explanations in the sphere of a psycho-spatial turmoil, Caillois was effectively immersed in the nature of the very transformative process of the insects, teleplasty:

Morphological mimicry could be, after the fashion of chromatic mimicry, an actual photography, but of the form and the relief, a photography on the level of the object and not on that of the image, a reproduction in three-dimensional space with solids and voids: sculpture-photography or better teleplasty, if one strips the word of any metapsychical content.6

One could argue that the proclaimed teleplastic articulation of solids and voids defies deeply rooted disciplinary assumptions in regards to form and materiality, as it unearths a decisive distinction: the contours that clearly delimit a separation between solids and voids, windows and walls. In its place, the process of teleplasty puts forward a bottom-up material distribution, a composite deep skin where borders are of no relevance to its spatial composition. Essentially, teleplasty rejects the adjunct process of defining perimeters in a greater master plan, suggesting a morphology "built into the very structure of matter"7 as latent material; an argument which Caillois later elaborates in his 1960 Mask of Medusa, in which such inherent potential is claimed to be embedded into the anatomy of living things.8 The metamorphic creature changes in such a seemingly random manner that one can barely distinguish which of the alterations regard its body and which its texture, which are superfluous and which are meaningful. It resembles an indeterminate mass of distributed, variable substances defying the qualitative presence of organs and systemic functions. One can only assume that for the creature in question, form and material intermingle in unorthodox, non-hierarchical ways, such that certain parts of its body can be excessively, and others inadequately, structural and/or ornamental.
Figure 1: Membracides; Heteronotus Vulnerans. Courtesy of the Natural History Museum, London.

The mechanism of this phenomenon is unclear; the position of bodily organs, as well as their purposefulness, is also unclear. It is crystal clear, however, that if we analyze the insect's pathologic transformation as a psycho-spatial structure, there are no distinct lines between its anatomical structure and its bodily matter; in other words, there is a fusion of form and material. This conceptualization of matter undergoing evolutionary transformations renders a counterpart model of architectural practice to the combinatorial multiplicity and the propagation of complexity through recursive unitary logic. In the latter case, the unit represents a primary monad that can endlessly be repeated in multiple configurations, yielding overall complexity to a system. This logic, although promoting variability, encompasses standardization as a technique for the monad and acknowledges the occurring variability as an effect of a larger system consisting of regular subsystems. With historical underpinnings in the theories of atomism,9 this orthodoxy was introduced intact into the computational processes that began to infiltrate architectural practice in the 1960s. Teleplasty, on the other hand, offers a counterpart to this orthodoxy, putting forward a variability of the bodily organ, the unit itself, that we may tentatively call "psychomaterial" or "chemical change," suspending our thought from combinatorial logic. We may then identify teleplasty as a smectic material state, derivative from the Greek word "σμεκτός," meaning smeared, as in the case of liquid crystals in a mesomorphic phase, where molecules align in series of layers, form alliances and coalesce. In a lecture given at Manchester University in 1952, Alan Turing speculated upon the chemical basis of morphogenesis. He suggested that "a system of chemical substances, called morphogens, reacting together and diffusing through a tissue, is adequate to account for the main phenomena of morphogenesis. Such a system, although it may be originally quite homogeneous, may later develop a pattern or structure due to an instability of the homogeneous equilibrium, which is triggered off by random disturbances."10 The scope of that theoretical paper was to describe how patterns observed in animals could be explained as a result of the interactions between chemical substances operating within a mass of tissue.
What laws are to control the development of this situation? They are quite simple. The diffusion follows the ordinary laws of diffusion, i.e. each morphogen [chemical substance] moves from regions of greater to regions of less concentration, at a rate proportional to the gradient of the concentration, and also proportional to the ‘diffusability’ of the substance.11
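Turing's diffusion law translates almost directly into a few lines of code. The following is a minimal numerical sketch rather than a reconstruction of Turing's own 1952 system: it borrows the later Gray-Scott reaction terms as a standard stand-in for the unspecified "reacting together," while the diffusion term implements exactly the rule quoted above, a flux proportional to the concentration gradient and to each substance's diffusability:

```python
# Minimal sketch of a two-morphogen reaction-diffusion system.
# Reaction kinetics: Gray-Scott (an assumption, not Turing's original
# linear system); diffusion: the law quoted above.
import numpy as np

N = 128                      # grid size (arbitrary choice)
Du, Dv = 0.16, 0.08          # "diffusabilities": u spreads faster than v
F, k = 0.035, 0.065          # feed/kill rates, a spot-forming regime

u = np.ones((N, N)) + 0.02 * np.random.random((N, N))
v = np.zeros((N, N)) + 0.02 * np.random.random((N, N))
# a seeded square plus noise: the "random disturbances" that trigger
# the instability of the homogeneous equilibrium
u[N//2-5:N//2+5, N//2-5:N//2+5] = 0.50
v[N//2-5:N//2+5, N//2-5:N//2+5] = 0.25

def laplacian(a):
    # discrete diffusion: each cell moves toward the mean of its neighbors,
    # i.e. from regions of greater to regions of less concentration
    return (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
            np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4 * a)

for _ in range(10000):
    uvv = u * v * v                          # the nonlinear reaction term
    u += Du * laplacian(u) - uvv + F * (1 - u)
    v += Dv * laplacian(v) + uvv - (F + k) * v

# after enough steps, v holds a stationary pattern of spots or stripes
```

Run long enough, the homogeneous state destabilizes and the field settles into spots or stripes; which pattern appears depends only on the rates and concentrations, echoing the observation reported below that the container's geometry plays no generative role.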
Figure 2: Turing patterns, J. Boissonade, E. Dulos, and P. De Kepper.

The Belousov-Zhabotinsky type reaction, introduced by Belousov in the early 1950s and further investigated by Zhabotinsky in 1964, proved Alan Turing's speculations to be true.12 Wave-like patterns emerged from the catalytic oxidation of malonic acid by potassium bromate.13 By changing the properties of the environment through exposure to different lighting conditions, or by changing the concentration of either substance in the mixture, the system appeared to produce steady states.14 Further investigations by J. Boissonade, E. Dulos, and P. De Kepper in 1995 substantiated the relationship between conditions in the environment and the results of such reactions. Narrow uniform regions, regions of clear spots exhibiting a hexagonal arrangement, striped areas, and areas of intricate mixtures of stripes and spots all coexist in one sample, depending on the variation in concentration of the substances. By varying the section of the container in which the reactions take place, they trigger non-homogeneous pattern formations. They go on to make explicit that the geometry of the container does not participate in any way in this process; its role is to control the concentration of substances within it.15 Phenomena such as these have given rise to theories of self-organization and complexity. The digital has often been criticized as being devoid of physical materiality,16 but such criticism may prove to be premature, and the medium should instead be interrogated for its capacity to redefine materiality.17 The study of Caillois' psychoanalytical approach to mimicry and teleplasty, as well as Turing's mathematical speculations on the chemical basis of morphogenesis, presents alternative tectonic paradigms, still through the investigation of computational models: incremental local adjustments in the case of mimicry, and calculated emergent patterns in the case of Turing's chemical morphogenetic models. As an offspring of this discussion, many questions of relevance to contemporary computational modes of production surface: Can digital media help us redefine the static relationship between a window and a wall as an interaction of chemical substances rather than a process of assembling joints and components?
Can we perceive material, not as an application to predetermined geometries, but as an inherent condition, a subatomic organization of matter that precedes geometry? Caillois' teleplasty and Turing's chemical morphogens suggest an algorithmic process of incremental local adjustments, none of which is individually spectacular or unique, but which produce overall, through a sum of small insignificant details, a spectacular chemosynthetic material topography. Although in mimicry, for instance, the creature assays to match a given setting (or a given other creature), the manner in which this shift is accomplished numerically, pixel by pixel—for instance matching one green dot of its skin with one of the target image and so forth—reveals an infinite number of new possibilities in the course of the morphological character being established (a minimal sketch of this numerical process follows at the end of this section). The collapse of multiple scales of formal adjustments and the parallel, seemingly random adaptation process of diverse members and substances are instances of such local textural complexity. This is a complexity produced not by "genius, inspiration, determination or evolution," but by a modest action of simple substitutions18 "which cannot be caught up in any mystique of creation," to paraphrase Roland Barthes in his celebrated allegory of the ship Argo.19 On the irrelevance of the greater master plan and the resurfacing of a reciprocal topography, Caillois notes that it is not the presence of the elements that is perplexing and decisive, it is their mutual organization, their reciprocal topography.20 Despite the fact that Caillois' analysis concerns the surface articulation of certain species of insects, such alternative definitions are dissident to normative disciplinary assertions and correspond to a current open debate on the emergence of new creative models of production through computation and programming languages in digital media. Yet it is significant to note that if such a discussion is resurfacing after decades of exiling rule-based systems, this is not because new utilities have been scavenged to legitimize these operations, but rather because of the breakthrough of effective new means of expression within the realm of the digital medium. Embedded within the logic of the medium lies also the coming to the forefront of algorithmic functions, which have partially unearthed both material and formal doctrine, meaning that either form is derivative from the innate properties and potential of specific materials or that material is an application to predetermined forms. Far from the illustration of an indexical process, with which many critics would disagree (including the current authors), this paper posits a tentative theorem on teleplasty, on the foundations of Roger Caillois' and Alan Turing's observations, via two design-research projects. Its ultimate scope is to problematize distinctions, especially as these are manifest in a priori conventions for the design of exterior envelopes. In the projects presented, 23° degrees and WindoWall, one could lay claim that in many cases the products necessitate an enhanced degree of tactile and optical engagement from the user, who is urged to discover new ways of spatial occupation and senses of viewing through complex filters—envelopes—that mediate the relationship of exterior and interior in multivalent ways. Teleplastic space can hardly be a given space for consumption; it is nothing but a work in progress, a space which forcefully requires its own reinvention.
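As a minimal sketch of that process, assuming nothing more than a target texture and a long run of individually insignificant substitutions, consider the following; the array sizes and adjustment rate are invented for illustration:

```python
# Hedged sketch of the "incremental local adjustment" reading of mimicry:
# no master plan, only trivial pixel-by-pixel substitutions toward a
# target texture. Data here is random stand-in material.
import numpy as np

rng = np.random.default_rng(0)
target = rng.random((64, 64, 3))   # the setting to be matched
skin = rng.random((64, 64, 3))     # the creature's current surface texture

for step in range(200_000):
    y, x = rng.integers(0, 64, size=2)            # one local site at a time
    # substitute a small fraction of the local color with the target's;
    # each step is insignificant, the accumulated sum is the convergence
    skin[y, x] += 0.1 * (target[y, x] - skin[y, x])
```

Nothing in the loop is spectacular or unique; the "morphological character" is only the accumulated effect of trivial local substitutions.

WindoWall; From Wall to Gradient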
Figure 3: Conventional section of a window and a wall versus WindoWall (Kallipoliti, Tsamis)

WindoWall is a composite component part for an exterior envelope assembly. It aims to eliminate the joints between a window and a wall, as well as to shift the notion of an 'edge' to that of a 'gradient'. WindoWall essentially replaces the assembly, the third mediating piece that joins two elements, with an area of gradient transition in a singular composite surface. By incorporating different properties within the same gradient surface-element, the component retains only those material constituents necessary to yield a specific local performance. The selection of materials, finalized in a combination of thermoplastic polymers, was decisive for the development of the project. From a wide range of polymer composite materials, selection was grounded on the exact mechanical and thermal properties of each, in combination with optical attributes, in order to distribute transparent and opaque areas. For the precise measurement of extrapolated material properties in the new component, we used CES software, an application from the fields of nanotechnology and material science. Finally, we designed and manufactured a numerically computer-controlled electronic device in order to facilitate the fabrication of the complex components that assemble into the WindoWall component.
Mass customization, as a concept, offers a conduit for investigation into the possibilities of a new digital tectonic. Our critique of its contemporary use is based on its adoption of a basic principle, the production of single-purpose parts, a relic from the period of mass production. With WindoWall, we aim to revisit John Ruskin's notion of the "chunk," where structure, infill, window, wall, insulation, ornament, etc. would be dealt with within a piece's variable material properties. We can always substitute the notion of structure and surface with the equivalent structural and non-structural body, and the notion of the window and the wall with a relationship between transparent and non-transparent areas, changing the point of view from parts to properties (the sketch below illustrates this shift).
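A minimal sketch of this "parts to properties" shift, assuming illustrative endpoint property values rather than the project's actual polymer data:

```python
# One surface with continuously varying properties instead of a discrete
# window part joined to a wall part. All property values are assumptions.
import numpy as np

x = np.linspace(0.0, 1.0, 200)            # position across the envelope
# a smooth ramp replacing the wall/window edge with a gradient zone
t = np.clip((x - 0.4) / 0.2, 0.0, 1.0)    # 0 = "wall" end, 1 = "window" end
blend = 3 * t**2 - 2 * t**3               # smoothstep transition

wall  = {"transparency": 0.0, "stiffness": 1.0}   # hypothetical endpoints
glass = {"transparency": 0.9, "stiffness": 0.3}

transparency = (1 - blend) * wall["transparency"] + blend * glass["transparency"]
stiffness    = (1 - blend) * wall["stiffness"]    + blend * glass["stiffness"]
# every point of the envelope now carries its own property mix; the "edge"
# survives only as the region where the gradient is steep
```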
Figure 4: Mold it Device for the production of WindoWall components (Tsamis)

23° degrees; the rupture of the cataclitical space-frame
Figure 5: 23 degrees, cataclisis diagrams (Kallipoliti, Tsamis)
Figure 6: 23 degrees (Kallipoliti, Tsamis)

23° degrees is a design experiment for an exhibition space, redesigning the Crystal Palace. Rather than a design project, 23° degrees is a rule-based strategy for geometrically articulating the topography of complex surfaces. The variable, non-homogeneous material allocation of the canopy emerges from precise rules and constraints that relate to a number of parameters, including program, structure and vision. Methodologically, different design operators were created through the use of computer programming languages in order to control and enhance the local structural capacity of the surface and the direction of vision through distributed apertures (a hypothetical sketch of such rules follows).
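The following sketch shows what a rule-based aperture distribution of this kind could look like; the load field, thresholds, and view target are invented stand-ins for the project's actual operators:

```python
# Hedged sketch of rule-based aperture distribution over a surface:
# simple written rules decide, per cell, whether an aperture opens, how
# it orients toward a view, and how wide it gets. All values illustrative.
import numpy as np

n = 40
ys, xs = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n), indexing="ij")
view_target = np.array([0.5, 1.2])            # where occupants should look
load = np.exp(-6 * (xs - 0.5)**2)             # stand-in structural demand field

apertures = []
for i in range(n):
    for j in range(n):
        if load[i, j] > 0.6:
            continue                          # rule 1: keep loaded zones solid
        d = view_target - np.array([xs[i, j], ys[i, j]])
        angle = np.arctan2(d[1], d[0])        # rule 2: orient toward the view
        size = 0.5 * (1.0 - load[i, j])       # rule 3: open wider where load is low
        apertures.append((xs[i, j], ys[i, j], angle, size))
```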
1. Roger Caillois, "Mimicry and Legendary Psychasthenia" (1936), trans. John Shepley, October 31 (1984): 12-32.
2. Ibid.
3. Ibid.
4. Ibid.
5. Peter L. Giovacchini, "Somatic Symptoms and the Transference Neurosis," International Journal of Psychoanalysis, No. 44 (1963), p. 148.
6. Ibid.
7. Roger Caillois, The Mask of Medusa, trans. G. Ordish (New York: Clarkson N. Potter, 1960), p. 12.
8. Ibid.
9. Democritus, the ancient Greek philosopher who articulated the theory of 'atomism,' taught and wrote that all physical objects are created from different arrangements of atoms and voids that were never created and will have no end.
10. Alan Turing, "The Chemical Basis of Morphogenesis," Philosophical Transactions of the Royal Society of London, Series B, Biological Sciences, Vol. 237, No. 641 (August 14, 1952), p. 37.
11. Ibid., p. 40.
12. See Nicholas G. Rambidi, "Chemical-Based Computing," in Tanya Sienko, Andrew Adamatzky, Nicholas G. Rambidi, and Michael Conrad (eds.), Molecular Computing (Cambridge, MA: MIT Press, 2003), p. 109.
13. Rambidi, "Chemical-Based Computing," in Molecular Computing, p. 109.
14. Ibid.
15. Ibid., pp. 242-246.
16. See Kenneth Frampton, Studies in Tectonic Culture (Cambridge, MA: MIT Press, 1995).
17. See Antoine Picon, "Architecture and the Virtual: Towards a New Materiality," Praxis: Journal of Writing and Building, No. 6 (March 2004), pp. 114-121.
18. Barthes is cited for his model of substitution and nomination by Rosalind Krauss in The Originality of the Avant-Garde and Other Modernist Myths (Cambridge, MA: MIT Press, 1985), p. 2.
19. The exact quote is the following: "To illustrate this notion of structure, Roland Barthes liked to use the story of the Argonauts, ordered by the Gods to complete their long journey in one and the same ship, the Argo, against the certainty of the boat's gradual deterioration. Over the course of the voyage the Argonauts slowly replaced each piece of the ship 'so that they ended with an entirely new ship without having to alter either its name or its form. This ship Argo is highly useful,' Barthes continues. 'It affords the allegory of an eminently structural object, created not by genius, inspiration, determination or evolution, but by two modest actions (which cannot be caught up in any mystique of creation): substitution (one part replacing another as in a paradigm) and nomination (the name is in no way linked to the stability of the parts): by dint of combinations made in one and the same name, nothing is left of the origin: Argo is an object with no other cause than its name, with no other identity than its form.'" In Krauss, The Originality of the Avant-Garde and Other Modernist Myths, p. 2.
20. Caillois, "Mimicry and Legendary Psychasthenia."
Oublier Domino
On the Evolution of Architectural Theory from Spatial to Performance-based Programming
Neri Oxman
PhD Candidate, Design Computation Group
Massachusetts Institute of Technology, Cambridge, MA, USA
neri@mit.edu

Abstract
The conception of the architect as form-giver has dominated the field of architecture since historical times. It is precisely this image which has devalued material practice in the distinction between form and matter consistently inherent in architectural discourse. Recent technological developments in the field of design computation, coupled with environmental concerns and philosophical debates, have contributed to the shift in focus from form, as the exclusive object of design practice, to matter and materials as an alternative approach to the conception of form. Such a shift calls for a reorientation of existing protocols for design generation. Design based upon performance appears to justify and make sensible computational design processes that integrate material properties with structural and environmental constraints. These processes, as demonstrated here, contribute to the elimination of traditional architectural typologies, replacing them with spatial organization driven by need and comfort. This paper proposes a new approach in design where processes of form-generation supporting sustainable design solutions are directly informed by structural and environmental constraints. Computational models are developed and implemented that incorporate data-driven form generation. Fabrication tools and technologies are customized to include material properties and behavior. The projects illustrated in this paper are currently on display at the Museum of Modern Art.

1. Modernism: Forgive and Forget
1.1. From Spatial Canons to Material Practice
From the early writings of modern theoreticians such as Giedion, theories of form have dominated the discourse of architectural modernism.1 Form rather than performance has been the central locus of practice and theoretical discussion. Performance has been significant in its influence upon form, but only as an agency at its service, rather than acting as the origin of creation. This cultural election of form as an indicator of architectural content has not been exclusive. There have been design experimentalists such as Gaudí, Fuller, Prouvé, and Otto for whom research in material and experimentation has been a legitimate field of design activity.2 However, form has been such a dominant factor in the culture of modern architecture that it has also affected related fields in design and architectural research. We currently appear to be in a period in which the contribution of material experimentation is emerging as a new locus of theory and research in architecture. Formal knowledge and the theoretical dominance of spatial typologies have long promoted a mono-functional approach to architectural design, as manifested in the early 1920s with the establishment of the Modern Movement. Le Corbusier's canonical essay, Five Points for a New Architecture, formulated in 1926, established the formal content for pre-subscribed functional and spatial canons.3 Homogeneous distributions of structure and matter were made to serve pre-determined functions in their generic spatial arrangements. These theoretical formulations of generic spatial relationships, and of the dominance of spatial over material or performance conceptions, have long become emblematic of modern architecture.

1.2. Functional Hierarchies: A Pathology
The non-bearing curtain wall façade stands out as a classical illustration of the functional and hierarchical separation of the elements of architecture: formal and material attributes are assigned to the structural and non-bearing components. Through the demonstration of several design explorations which promote the distribution of properties in light of their performance, this paper questions the relevance and authority of such canons in the digital age. In doing so, the paper offers a new approach to design based upon performance, where material properties and behavior are utilized as integrated formal organizations corresponding to a multitude of structural and environmental conditions.

1.3. Organization
The paper is organized as follows: in the following section, theoretical foundations are presented and discussed. In the third section, a series of relevant design experiments is presented. In the fourth and final section, design implications are presented which speculate upon the outcome of such new foundations of theory and practice.

2.0. Material Based Design
2.1. Performance Based Design, Condition Based Programming
Design based upon performance seeks to promote a multi-functional approach to design whereby matter is distributed where needed, responding to its structural, environmental and, indeed, social performance.4 In fostering material integration of architectural elements across various scales, structure and façade are no longer divorced in function and/or in behavior, but rather negotiated through the informed distribution of matter. One significant consequence of design that is informed by performance is the incorporation of material variation: gradients of structural and material effects emerge, modulating their thickness, transparency, porosity and thermal absorption according to their assigned function or desired conditions of stability (structure) and habitation (program). Currently, such technologies that support gradient material distributions are analytical rather than synthetic, and therefore their generative potential is questionable. However, given the possibility to formulate and formalize models enabling these two modes combined, it is possible to conceive of simulation models that integrate performance simulation with design generation.5 Such an integrated approach would support performance-based design.
We introduce arguments for the transformation of architectural design rationale from theories of universal space and programs based on typology to theories of condition-based programming. Such processes allow for the generation of spatial organization based on conditions of habitation, contrary to the traditional typology-based approach. It is proposed here that digital design, as it is now evolving, should serve as a catalyst of this transition, with unique implications for architectural design. A key characteristic of this process of change is the prioritization of the non-standard over repetition, homogenization and standardization.

2.2. The Function of Effect: Towards Material Based Design
Architectural discourse of the last two decades, supported by digital design technologies and coupled with a renewed interest in sustainability, is now offering an alternative to the dominance of mono-functional architectural solutions implicit in the modern tradition. Contrary to a design approach which has promoted divisions of function between the architectural elements (structure, façade, etc.), design based upon performance and conditions of habitation postulates divisions of effect (structural, environmental, etc.).6 Our ability to quantify structural and environmental performance in a building's envelope assumes that differences of use and behavior must be accounted for. Given such ability to predict and respond to performance criteria and desired effects, this paper aims at shifting the practice from design typologies dominated by presubscribed mono-functional programs to condition- (or performance-) based programming.

2.3. Making Difference: Towards Material Based Fabrication
Traditional CAD (Computer-Aided Design) packages assist architects and engineers in tasks of representation (drafting, modeling, etc.). As such, these computational methods offer geometry-authoring tools that range from 2-D vector-based drafting systems to 3-D solid and surface modelers. CAE (Computer-Aided Engineering) packages, parallel to the domain of CAD, are used to evaluate the structural behavior of components and assemblies through simulation and optimization processes such as FEM (Finite Element Method) analysis. CAM (Computer-Aided Manufacturing) includes a range of applications to assist in the manufacturing and/or prototyping of product components. CAM functions such as CNC (Computer Numerically Controlled) code generated to drive machine tools are expanding to become more fully integrated with CAD/CAE solutions. Most advanced CAD/CAE/CAM software packages incorporate associative modeling features, providing the designer with a method of linking dimensions and variables to geometry such that any transformation in a given variable propagates throughout the entire model (a toy illustration follows below). Combined, CAD/CAE/CAM technologies offer a wide range of visualization, simulation and optimization tools. When initiated and informed by pre-determined variables brought about by process or product constraints, the design process carries the potential to become generative: the designer defines a network of constraints parametrically linked to geometrical features in order to generate multiple design options. Rooted within a design culture inspired and evaluated by formal manifestation, computational geometry has become and still remains an elevated science for architectural production.
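The associative principle mentioned above can be shown in a toy form; this is a minimal sketch of the idea, with invented parameter names, not a model of any actual CAD package's API:

```python
# Toy illustration of associative (parametric) modeling: geometry is
# expressed as a function of named driving variables, so changing one
# variable propagates through every dependent element on re-evaluation.
from dataclasses import dataclass

@dataclass
class Params:
    bay_width: float       # a driving dimension
    bays: int              # a driving count
    column_radius: float   # carried along for downstream elements

def column_positions(p: Params):
    # derived geometry: recomputed from the driving dimensions on demand
    return [(i * p.bay_width, 0.0) for i in range(p.bays + 1)]

p = Params(bay_width=6.0, bays=4, column_radius=0.25)
print(column_positions(p))   # [(0.0, 0.0), (6.0, 0.0), (12.0, 0.0), ...]

p.bay_width = 7.5            # one change to one variable...
print(column_positions(p))   # ...propagates to every dependent element
```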
Triggered by computational developments in the fields of digital design and fabrication, we are now witnessing a shift in method questioning the profession's value system. While computational geometry is still central to design practice, computational techniques and technologies enable a shift from form-based design to performance-based design.7 Design processes based on performance rather than pure form have promoted design protocols integrating form, material, structure and environment. From here, the emerging significance of the physical over the virtual is inevitable. In search of both the theory and practice of performance-based design, we are preparing to value distribution over assembly and difference over standardization. Material properties and effects, such as transparency, elasticity, porosity and relative strength, are slowly catching up with Cartesian descriptions of space as we know it. Could we hypothesize that in design, form is to geometry what effect is to material practice? And given that such an assumption occupies a legitimate place in the contemporary discourse, what does it entail in terms of the tools and techniques that govern the intentions and processes behind our designs? Recent initiatives in design computation capitalize upon optimization-based routines using ESO (Evolutionary Structural Optimization). Such methods are based on finite element analysis (FEA) methods capable of optimizing the formal geometry of an object to obtain minimum volume through an iterative design process under even stress-distribution. However, such analytical methods have rarely been used as generative tools allowing the designer to shift freely between representations and allow, by means of optimization, for difference in kind to occur in addition to difference in degree.

3. Demonstrating Material Based Design
The projects below demonstrate the notion of condition-based programming and material-based design8 through the integration of structural and environmental performance data into form-generating processes. Each project explores unique relationships between material organization and behavior, inherent in their form and attributed to the ways and methods in which these objects have been fabricated. It is precisely this interaction between material organization and environmental pressures that is a novel approach to the genesis of form. Combined with this aspiration, the projects postulate our ability as designers to exploit digital fabrication technologies as a generative domain for material production.9 Making is not secondary to form-generation and directly ties to it, beyond its being an agency for production.10

3.1. Monocoque
Monocoque (French for "single shell") stands for a construction technique that supports structural load using an object's external skin. This stands in contrast with using an internal framework (posts and beams) that is then covered with a non-load-bearing skin. The project demonstrates the notion of a structural skin using a voronoi pattern, the density of which corresponds to multi-scalar loading conditions. The distribution of shear-stress lines and surface pressure is embodied in the allocation and relative thickness of the vein-like elements built into the skin. The composite prototype models represent three strategies for the design and fabrication of single shell structures.
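A minimal sketch of how a stress field can drive the density of such a Voronoi skin follows; the stress function and sampling probabilities are illustrative assumptions, not the project's actual analysis data:

```python
# Hedged sketch of a stress-driven Voronoi skin: more seed points are
# drawn where a stand-in stress field is high, so the resulting cell
# walls -- the "vein-like elements" -- densify under load.
import numpy as np
from scipy.spatial import Voronoi   # standard computational-geometry tool

rng = np.random.default_rng(1)

def stress(p):
    # stand-in for FEM surface-pressure results: high near one support point
    return np.exp(-8.0 * np.sum((p - [0.3, 0.7])**2, axis=1))

# rejection-sample seeds with probability proportional to local stress
candidates = rng.random((20000, 2))
keep = rng.random(20000) < (0.05 + 0.95 * stress(candidates))
seeds = candidates[keep]

vor = Voronoi(seeds)   # dense small cells where stress is high
# vor.ridge_vertices now outlines a non-uniform structural pattern whose
# local density follows the loading condition
```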
Figure 1. Monocoque: illustrated are three fabrication approaches

3.2. Raycounting
Raycounting is a method for originating form by registering the intensity and orientation of light rays. 3-D surfaces of double curvature are the result of assigning light parameters to flat planes. The algorithm calculates the intensity, position and direction of one or multiple light sources placed in a given environment and assigns local curvature values to each point in space corresponding to the reference plane and the light dimension (a sketch of this computation follows below). The models explore the relation between geometry and light performance from a computational-geometry perspective. Light performance analysis tools are reconstructed programmatically to allow for morphological synthesis based on the intensity, frequency and polarization of light parameters as defined by the user. The project is inspired by one of the first rapid prototyping technologies from the 1860s, known as photo sculpting. The method was developed with the aim of regenerating accurate 3-D replicas of a given object by projecting multiple prints from different angles and carving them relative to the reference artifact. Photo sculpting employs 2-D projections to regenerate 3-D objects; Raycounting employs 2-D planes, as they are informed by light, to generate form.
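The following is a hedged reconstruction of the Raycounting idea, not the project's actual algorithm: register the intensity of rays from point light sources on a flat plane, then let that scalar field drive local curvature (here simply as a height displacement). Light positions and the displacement rule are assumptions for illustration:

```python
# Register light intensity on a flat reference plane, then displace the
# plane by the registered field to obtain a doubly curved surface.
import numpy as np

lights = np.array([[0.2, 0.3, 1.0], [0.8, 0.7, 0.5]])   # x, y, z of sources

n = 100
ys, xs = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n), indexing="ij")
plane = np.stack([xs, ys, np.zeros_like(xs)], axis=-1)  # flat reference plane

intensity = np.zeros((n, n))
for L in lights:
    d = plane - L                            # ray direction to each plane point
    r2 = np.sum(d * d, axis=-1)              # squared distance to the source
    cos = -d[..., 2] / np.sqrt(r2)           # incidence vs. plane normal (0,0,1)
    intensity += np.clip(cos, 0, None) / r2  # inverse-square falloff * incidence

z = 0.05 * intensity / intensity.max()       # "curvature" as a displacement
surface = plane.copy()
surface[..., 2] = z                          # form generated from light
```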
Figure 2. Raycounting: SLS nylon models (top) and printed resin composites including structural pockets (bottom). 3.3. Subterrain The physical features of a terrain represent the distribution and magnitude of the forces that have brought it about. These elements embody the complex relations between physical matter in its given environment and denote its subterranean force field. The work explores the notion of material organization as it is informed by structural load and environmental conditions. 2-D micro structural tissues are visualized, analyzed and reconstructed in 3-D macro scale prototypes. Computational analysis is used to determine material behavior according to assigned properties and performance such as force, stress, strain, deformation, heat flux and energy such that the tensor properties in atomic scale are kept isomorphic to the morphological pattern. The tissue is then reconstructed using a CNC mill and multiple types of wood composites. Anisotropic in nature, grain directionality and
Anisotropic in nature, grain directionality and layering are informed by the analysis, resulting in a laminated structure that corresponds to a range of loading conditions. Three micro-scale biological tissues (a leaf section, a butterfly wing, and a scorpion paw) are reconstructed using this generative digital protocol.
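A minimal sketch of how such an analysis might drive the lamination, assuming a 2-D plane-stress state per milled region; the ply-fanning rule and all numeric values are illustrative, not the project's protocol:

```python
import math

# Illustrative assumption: align the laminate with the principal stress
# direction so the anisotropic wood is strongest where loads demand it.
def principal_angle(sx, sy, txy):
    """Principal stress direction (radians) of a 2-D stress state."""
    return 0.5 * math.atan2(2.0 * txy, sx - sy)

def layup_for(sx, sy, txy, n_plies=3, spread=10.0):
    """Return ply angles (degrees), fanned around the principal direction."""
    base = math.degrees(principal_angle(sx, sy, txy))
    offset = spread * (n_plies - 1) / 2.0
    return [base - offset + spread * k for k in range(n_plies)]

# Toy stress state at one CNC toolpath region (units hypothetical).
print(layup_for(sx=40.0, sy=10.0, txy=15.0))  # -> [12.5, 22.5, 32.5]
```

Aligning the base ply with the principal stress direction is one plausible reading of how grain directionality could correspond to a range of loading conditions.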
Figure 3. Subterrain: micro-structural biological tissues reconstructed as structural composites (top) and performance analysis (bottom).

3.4. Cartesian Wax
The project explores the notion of material organization as it is informed by structural and environmental performance: a continuous tiling system is differentiated across its entire surface area to accommodate a range of conditions: light transmission, heat flux, and structural support. The surface is thickened locally where it is structurally required to support itself, and modulates its transparency according to the light conditions of its hosting environment. Twenty tiles are assembled as a continuum comprising multiple resin types, rigid and/or flexible. Each tile is designed as a structural composite representing the local performance criteria as manifested in the mixture of resins. The work is inspired by the Cartesian Wax thesis, as elucidated by Descartes in the 1640s. The thesis relates to the construction of self-knowledge and the way in which it is informed by, and reports about, an individual's experience of the physical world. According to Descartes, the knowledge of the wax is whatever survives the various changes in the wax's physical form. That is, the form of the wax embodies the processes that have
generated its final features. Replace the notion of knowledge with that of performance, and the wax's physical form represents the force fields that grant its birth.
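Read computationally, the Cartesian Wax logic could be sketched as a per-tile derivation of thickness, transparency, and resin type from local performance inputs; the ranges, the rigid/flexible threshold, and the light rule below are all illustrative assumptions:

```python
# Hypothetical per-tile rule: thicken under structural demand, modulate
# transparency by the host site's light level, and pick a resin type.
def tile_spec(load, light, t_min=3.0, t_max=25.0):
    """load and light are normalized 0..1 local performance inputs."""
    load = min(max(load, 0.0), 1.0)
    light = min(max(light, 0.0), 1.0)
    thickness = t_min + (t_max - t_min) * load   # thicken where support is needed
    transparency = light                          # assumed rule: admit light at bright sites
    resin = "rigid" if load > 0.5 else "flexible" # assumed threshold between resin types
    return {"thickness_mm": round(thickness, 1),
            "transparency": round(transparency, 2),
            "resin": resin}

# Twenty tiles along a strip with rising load and falling light (toy gradients).
tiles = [tile_spec(load=k / 19.0, light=1.0 - k / 19.0) for k in range(20)]
print(tiles[0], tiles[-1])
```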
Figure 4. Cartesian Wax: full-scale and detail composite image of the final installation
4. Discussion and Contribution
The distinction between form and matter in architectural design has historically valued the formal over the material. As a result, the discourse of architecture has tended to concern itself with formal questions, establishing the role of the architect as form-giver rather than form-finder or form-maker. This privileging of form over matter is profoundly embedded in our practices and methods of working. Performance-based design processes carry significant potential to rethink traditional classifications of function. This paper calls for a reconsideration of functional integration at the level of the building skin: rather than maintaining structural and material partitions between structure and surface (or column and façade), we consider the strategic distribution of matter as assigned to performance. Through the medium of various experiments in performance-based design, the projects presented have demonstrated the significance and potential of such a transformation of design theory. The experiments demonstrate the notion of material-based design formulated here to facilitate informed material distributions and structures at the building scale. Particular emphasis was placed on innovative applications of computation and fabrication technologies that can potentially contribute to the design generation of material organizations. The use of computational geometry and a growing ability to author our own form-generation tools have contributed to processes of design engaged with performance. However, performance-based design is still confined to geometry and form, while the consideration of material-based design as a potential design generator has remained limited and undefined. The paper has introduced some theoretical and technical foundations for material-based design. Finally, the paper has provoked and expressed a will to defy the very classical canons of modernist design production protocols: now that we can control material organization and behavior at construction scales, we can finally break away from typology. Oublier Domino.
Poetics of Responsive Spaces
A Topological Approach to the Design of Responsive Environments1
Sha Xin Wei, Ph.D.
Director, Topological Media Lab, Concordia University, Canada
sha@encs.concordia.ca
Abstract
My project concerns subjectivation, performativity, and embodiment, as inflected by notions of process and field. These questions were inspired by recent work in the margins of experimental performance, sound arts, computational media, and philosophy of process. They are informed by, and critically respond to, Leibniz's continuous substance, Whitehead's "unbifurcated" process ontology, and Petitot's approach to morphogenesis. Beginning with a concern with the materiality of writing, the project explores the ethico-aesthetics of touch and movement, and poetic architecture or installation events as sites for speculative action. The kind of events I describe are collective, co-present, embodied, and a-linguistic. The potential for physical contact is a condition for the collective embodied experiences needed to conduct experimental phenomenology. Our events are designed for four or more participants: three to destabilize dyadic pairing and lower the threshold to improvising being in that space, and a fourth for potential sociality. Having dissolved the line between actor and spectator, we may adopt the disposition of an agent of change, or equally a witness of the event. Relinquishing also a categoreal fixation on objects in favor of continua, we inhabit ambient environments thick with media and matter that evolve in concert with movement or gesture.

Introduction
TGarden is a branching family of responsive playspace events and installations, with curious cousins and descendants. What I discuss in this essay, however, are not particular playspace installations and events like trg, tg2001, or the recent Remedios Terrarium exhibit (2008)2 but some of the passionate critiques and questions motivating the creation of such responsive environments, and the desired qualities of experimental experience that make some of the background and potential for such playspaces still so compelling with respect to that pre-history. Given all the heart, craft, knowledge, and energy that have been poured into making and presenting these installation-events, it's natural to ask: what's at stake? Why should we creators and participant players care about making these playspaces? I hope this essay will encourage some of you to, as Maja Kuzmanovic put it, grow your own worlds weedily and wildly. My interest in these responsive media spaces stems from two intertwined conversations. The first is a series of conversations about agency, language, and hybrid ontology, going back to a seminar on interaction and media (IMG) with Niklas Damiris, Helga Wild, Ben Robinson, Ann Weinstone, Alice Rayner, and other humanist scholars affiliated with Stanford University in California. For three years, 1995-1997, we met every week, reading gem essays smuggled out from the heart of their disciplines, recasting what insights and arguments we found into ways to articulate and maybe work with the hybrid computational/physical "interactive" media that we saw emerging around us. But we put
every label and concept, like "interactive" and "media," in play, and later, from farther afield, "map," "system," "language," "expression," and "human" as well. I was engaged with the boundaries of language, especially with how we fleshy, mortal people could use non-linguistic writing and sketching to trace infinities like complete Riemannian manifolds, the profoundly immeasurable non-self, and the role and the power of the imaginary in our infinitely thick and dense world. The second conversation is a specific set of challenges by two practitioners of experimental theater, Laura Farabough and Chris Salter. They came with 25 and 5 years, respectively, of experimental performance and contemporary theater, and challenged us with examples of avant-garde theatrical practice after Bertolt Brecht: guerrilla theater like Farabough's Beggar's Theater and NightFire in San Francisco, Egypt, and Mexico, and the bloody and complex works of Heiner Müller, William Forsythe, and Pina Bausch, in whose company they learned their craft. They challenged me to make palpable the ideas we were forging from what Ann Weinstone called the wild academy, to make events in which we and our participants could encounter and experiment bodily with these radicalized articulations of agency, self, desire, and action that we were so delightedly cooking up in the IMG. This was a heady and daunting challenge, but nothing that a few tens of millions of dollars and a few dozen fearless artists and engineers couldn't do in 10 years. We lacked the time, the capital, and the discipline. But we did find some fearless artists. So we grew our own networks of artists, from the art group Sponge, which we established in 1997 in San Francisco, and from FoAM, which Maja Kuzmanovic established with Nik Gaffney and Lina Kusaite in 2000 in Brussels.

Starting Questions
Most critically, how can we make events that are as compelling for the people who encounter them as the theater of Brecht, Müller, Bausch, Sankai Juku, and Dumb Type was in its day and for its audiences? In a sense, this is a technical challenge; in other words, it is a challenge to the practice and craft of experimental performance (to what Peter Brook called Holy Theater, as opposed to the Dead Theater of typical rote and commercial performance, and different from the Rough Theater of the street and Commedia dell'arte).3 One of the questions I refined from this very broad challenge was: How can we make a responsive space and event within which initially accidental, unmarked, unrehearsed, ordinary gestures could acquire great symbolic charge? These questions were practical questions of craft, and could only be answered or explored materially, bodily, in physical built spaces and peopled events. However, the way in which we explored them was not by producing commercial shows, but by doing performance research. We made installation-events that straddled the border between closed-shop studio improvisation-experiments with special audiences, and open performances with a public. As it turns out, these questions, though they were forged in a precise context of experimental performance research, resonate far outside the world of digital media art and performance. They are informed by dance, movement, textiles, fabric, musical performance, and visual art, but they are also impelled by a desire to embed such work into public space and everyday space. This is part of the ethico-aesthetic adventure of the work that appeals so much to me.
Now the same questions about the event also have a radical, micro-textural inflection. Could technologies like computational media, real-time sound and video (re-)synthesis, cheap hobbyist sensors, and the like be added to the mise en scene of theater, as Antonin Artaud dreamed, to extend the theater of cruelty in a way that is relevant to us today? This
theater of cruelty would create a theater that would not drop out of our consciousness as soon as we've finished consuming it but would transform those who encounter it as utterly as the plague. By cruelty, Artaud explicitly did not intend the meanness of human hurting human or animal, but the implacability and indifference of matter with respect to our human ego. Stone resists, and a tree greens, and software breaks regardless of what we say. If we desire matter to perform differently, we cannot simply legislate or script it by brandishing a pen alone; we must also manufacture a symbolic material substrate that behaves differently from ordinary matter.

Spiraling Concepts
The rest of this essay will spiral up through a set of concepts: the basic kind of events that I'm considering, a discussion about "representation," a question of performance, then the technologies of performance, then concepts embedded in those technologies, and finally a return of sorts to a transformed notion of event and representation (language). Some people say that ideas are cheap, that making is hard. But we know very well that humans create and rework concepts with just as much effort and rigor and material discipline as the making of a physical installation. It's just that the young domain of media arts and sciences has not enjoyed the luxury of alloying and hammering out concepts as thoroughly as, say, biotechnology or the history of Renaissance Italian literature. Domains of practice that benefit from billions of dollars or centuries of investment can elaborate practices that exploit the making and composition of concepts based on antecedent literatures, intricate dependencies and interrelationships of publication and citation, the social networks that give meaning to concepts, and procedures of evidence and argument and generative logics indigenous to the epistemic culture.4

Events
The kind of events I'm talking about, the kind I'm interested in making, are collective, co-present, embodied, and a-linguistic. These are situations to which people are invited to be physically together, face-to-face, in short, co-present. This is a basic condition of theater, too, and distinguishes theater from, for example, cinema or photography, in that the performer-actor-artist is in the same physical place as the spectator-visitor, so that the spectator can get up and physically lay a hand on the actor to interfere with the action if she or he wants to. This potential for physical contact is a condition for the collective embodied experiences needed to conduct experimental phenomenology, I believe. By design, these situations are collective, with three or more participants, three to destabilize dyadic pairing, with an eye to lowering the threshold for the improvisation of being in that space. I say embodied to mark that the fleshy bodies of the participants essentially move and act together in the co-construction of the event.
The line between actor and spectator is dissolved, so any body may adopt the disposition of an actor as an agent of change in the event, or equally a spectator as a witness of the event. The ambient environment will be thick with media, filled with thick sound, thick video, dense
physical materials, so that people will live in a dense matter that responds and evolves in the course of their activity. All of this activity can be conducted a-linguistically, without the necessity of spoken language. On the other hand, speech is not prohibited; it's just dethroned with respect to the other modalities of coordination among the bodies and media in the space, again as a way to estrange the speaking subject, and render more prominent the material dynamics of the lifeworld on the other side of the veil of the technologies of language. By thickness, I refer not only to perceptual thickness -- density of video and sound textures -- but also to the rich magma of social, imaginative, erotic fields within which people play even in ordinary situations, situations in which we perform without first analyzing and cutting up our experiences into analytic layers: how did I smile? How did I rest my feet on the floor? Did my voice carry or resonate well? Did I stand too close or too far from other people? Did I interrupt or listen or talk over or under other speakers? Is the light too bright? I borrow the term from Clifford Geertz's notion of a sociological/anthropological responsibility to study culture in all of its rich social patterns and dynamics without orthogonalizing it a priori into categories that we would bring to bear on that culture. So this experience should be designed in a pre-orthogonalized way by the designers, and enjoyed by the participants without requiring that they make any cognitive model of their world in order to perform in it. Why? Engineering's power derives from the portability and extensibility of standardized schemas and methods that apply globally over phenomena and life. Our engineered systems are already built on taxonomies that must be navigated by grammars and operated according to rules that discipline our thought and action -- the action of power to discipline humans into docile bodies has radically evolved under the impact not only of informatic technology but of the epistemic matrix that encases our imaginary, for example. These taxonomies rest on fundamentalist distinctions such as signal vs. noise, functional vs. aesthetic, and syntactical vs. non-syntactical (relative to a grammar). It's not enough to side with noise as the opposite of signal, however, or idleness (the vacation) as the opposite of wage-slavery, because that still leaves in force the distinction made by the relevant schema in power.

[Representations of] lifeworld
Perhaps the principal (and only?) loci at which power grips us and with which we grip the world are the patterns and forms of the world. These regularized, normalized, systematized patterns are what we call representations. And our most highly developed form of representation is language, which since Ferdinand de Saussure's semiotics has been axiomatically susceptible to regularization (and subsequent normalization) by linguistics.5 It is language to which most of us have been disciplined since childhood, thanks to the modern democratic impulse. That this generative power can turn to the benefit of non-elite agents is recognized as a threat by the counter-democratic forces that are dismantling the systems of public and higher education in the western nations.
It's for this reason that so much critical energy (Plato, Kant, Foucault, Deleuze, Derrida, Haraway, and so many others) has concentrated on the power of representation to constrain us to think and act in the world in certain ways but not others. I use "power" mindful of Foucault's studies of the genealogy of "madness," the "prisoner," and "sexuality," which put those categories back into play in the contingent currents of history. What's at stake is whether we can create conditions for events in which power is put in play, and its categorical fingers can be unclamped, if only provisionally, from their grip upon our bodies. Power, as Foucault reminds us, is not always signed with the mark of evil (or good for that
matter); it's the generative force, "the force that through the green fuse drives the flower" (to borrow from Dylan Thomas) as well as the blasting cap. To put power in play also means to unclamp the hands and collectivities that wield it against life. And if representation is the grit and grip of power, then one core way to put power in play would be to test the limits of language. Now, mistrusting, examining, and interrogating the limits of language has in fact been one of Modernism's central concerns,6 so we are walking a path well trodden by many, which should assure us that this concern is not peripheral or hermetic, but vital to people whenever they wonder how life is worth living. When Ludwig Wittgenstein wrote at the end of the Tractatus Logico-Philosophicus, "Wovon man nicht sprechen kann, darüber muß man schweigen" (Whereof one cannot speak, thereof one must be silent), he was acknowledging the limits of what could be expressed by propositional language, of the machinery of statements with truth value that could be built with logic into the vast edifice of knowledge that could be articulated in statements like: "Creon, ruler of Thebes, forbids on pain of death anyone to bury Polyneices, who was a traitor to Thebes. Antigone has covered her brother Polyneices' corpse. Therefore Antigone's life is forfeit." or "When the appropriate conditions are satisfied, power must be exercised. Iraq has been tyrannized by a dictator. My nation is founded on principles of self-determination and autonomy. Given the preeminent power of my nation, it follows that, in the name of freedom, it is imperative that my nation liberate Iraq from its dictatorship." -- complexes of statements that are supposed to have the same epistemic weight as: "Suppose there are only a finite number of prime integers, p1 < p2 < ... < pn, where pn is the largest prime. Then consider the integer Z = 1 + p1 * p2 * ... * pn, 1 added to the presumably enormous but finite product of all the prime integers. Z is not divisible by any of the primes p1, ..., pn, yet Z is bigger than pn; so either Z is itself prime, or it has a prime factor larger than pn. Either way there is a prime bigger than pn, which contradicts the assumption that pn was the largest prime integer. Therefore there cannot be a largest prime integer, i.e., there are an infinite number of prime integers." It would be disingenuous of me to dismiss the tremendous constructive power of propositional knowledge. Propositional knowledge is in fact part of the social/legal/economic infrastructure that makes it possible for me to walk out of this door and down the street to buy a copy of the Economist or Libération. It is part of the technoscientific apparatus that allows me to type this essay without thinking about the galaxy of electronic and logical procedures that are being performed to stabilize and transmit my words to you. My purpose is not to diminish the scope and depth of propositional knowledge, which in effect is all we can state about ourselves and our experience, but to play at the limits of propositional language, of language, of sign in general, in fact at the meeting place of sign and matter, which is the symbolic.7
That is what led me to consider creating playspaces of responsive media saturated with symbolic potential in distributions of desiring matter. That is why I thought of the TGarden and its precursor installation-events as phenomenological experiments. Wittgenstein, who like A.N. Whitehead cut his teeth on logic and the foundations of mathematics, and so knew profoundly what he was talking about, also wrote in the Tractatus: "Die Ethik lässt sich nicht aussprechen. (Ethik und Ästhetik sind Eins.)" Ethics cannot be expressed. (Ethics and aesthetics are one.) With this, Wittgenstein expressed several deep insights with characteristic compactness. Even given the rich and ever more
complex web of knowledge that can be expressed in propositional language, such as law and morality -- social norms -- and computer science, matters of ethics and aesthetics cannot be expressed in propositional language because such language cannot express value. Recognizing this, Wittgenstein closed his project on the logical foundation of knowledge, and wrote the Philosophical Investigations, surgically deflating the illusions of the conventional theories of meaning one by one until we are left standing at the door to the only source of meaning, which is life, practice, the lifeworld. Meaning, Wittgenstein observed, cannot come from any set of rules, from correspondence to the world, or from appeal to transcendental objects. (That last observation is pretty obvious after Descartes and Kant.) Meaning comes from contingent use; meaning comes from practice in life. But the lifeworld is external to the span of what language can contain in itself. Jacques Derrida wrote, in Of Grammatology, "Il n'y a pas de hors-texte" (There is no outside-text), meaning not that the world is entirely contained inside the semiotic, but that we cannot ground language's meaning by having it represent faithfully something in a transcendental or exterior world. Context determining the meaning of a text can only be expressed in language itself, so it would be delusional to attempt to ground meaning by believing language homologously represents or faithfully points to some ultimate reality, whether that be the Bible, genes, memes, or bits. So, after Wittgenstein and Derrida, it would be quixotic to try to simplify our lifeworld by reducing how we make meaning and symbolic charge to one thin layer of the world or another, so let's skip by the monuments of cognitivism, and move into the lifeworld, the other to language.

Reality and the Imaginary
What can we do in the lifeworld, then? And what would it take to unmoor power-that-controls and put it in ethico-aesthetic play? One of the basic distinctions we have to address here is the issue of Reality. There's much talk about reality as if it were something pure that we could contaminate, and therefore save. But even if corporate and state power require the conceit that reality is pure and must be protected by opposing it to the virtual, we do not. As Jean Baudrillard observed in Simulacra and Simulation, it is exactly at the moment that our symbolic machines have become so powerful as to threaten to destabilize capitalistic power that power tries to distinguish reality from virtuality, and re-inscribe reality so ferociously. Why? The virtual is that which is not actual, but could be, and understood this way is identical to the potential, a mortal threat to the power that would control. In fact, reality, as Bruno Latour so thoroughly and persuasively argued in We Have Never Been Modern, is always and everywhere radically, inextricably mixed between society and nature, word and thing, symbol and substance. In fact, it's useful to think of reality as everything that is not logically self-contradictory, like a 4-sided triangle, and include the virtual as part of this reality. So, Reality = Potential + Actual. The actual is what is in the here and now, what is the case, whether as configurations of physical matter, or as symbolic patterns like law, business, or systems of value like emotional relations, fashion, and aesthetic tastes.
The potential is what is not the case, but could be, and the imaginary is the collective or individual envisioning of that which is not the case, and of transforming the potential into the actual. So, reality is always already mixed. The challenge is not to define, brand, or package mixed reality, but to mix reality, just as the deepest challenge is not to define the
human, or the citizen, or the psychological or cognitive subject (as AI aspires to do), but to human (adapting from Ann Weinstone). Therefore, what I'll do is not just putter around synchronic representations of mixed reality -- which can be much more than written language, of course, including any map, diagram, schema, or any sign system whatsoever -- but bracket the operation of [Representation of], and move to the arena of improvising, performing, practicing in symbolic, desiring, embodying matter. What in the world could that possibly be like? How can we work not instrumentally but poetically with such material magmas and stay clear of formalizing, disembodying, and desiccating reductions to the informatic or cognitivist abstractions of the lifeworld? Felix Guattari's decades of work with schizophrenics in his clinic La Borde, while deeply informed by the tradition of psychoanalysis of Freud and Lacan, parted from psychoanalysis in a most radical way. Guattari left behind psychoanalysis' aspiration to scientificity, to discovering the truth about the subject's world, and recognized instead that all forms of expression are actually also simultaneously forms of content, that every one of us co-creates the world and co-adapts to the world. Guattari recognized that the schizophrenic is as much a co-structuring agent as the doctors and nurses who ostensibly run the clinic. One of the most illuminating examples in Chaosmosis tells about families who come as a group to sessions in which actors introduce extra characters in filmed events. The participants must revise, improvise, enact, and re-enact their relations for each other and for later viewers. There are vocal and manual gestures or movements whose meanings are not predefined or evident but arise organically, exfoliating in the world in a signifying process that Guattari (and Deleuze) called pathic subjectivation. The subjects later reviewed these events, and narrated for themselves what they saw themselves doing. This is radically different from the subjectivation imposed according to schema by an analyst who announces to his patient: "By the power invested in me from my training as an analyzed Analyst and interpreter of the DSM,8 I declare, 'You are schizophrenic.'" It's one of Guattari's clearest examples of ethico-aesthetic play in the magma of a-signifying semiologies, and of improvisation over rehearsal and experience sedimented over the lifetime and (acknowledging Lacan) beyond the lifetime of the ego. This is not theatrical role-playing, nor everyday activity observed in the wild behind a screen, nor purified laboratory interrogation. There are no blueprints or recipes for any of this kind of playful, rigorous work, and in fact it would be a terrible betrayal to make a method out of it. Much of this articulation has come to me only after many years of working dumbly, so to speak, so I've enjoyed the pleasure of traces of recognition in these writers who wrote incandescently out of the crucible of their own experiences.
Guattari and Artaud resonate well with how I've tried, in very preliminary and partial ways, working with autonomous people and the means at hand, to nurse art research in a studio-lab I established, called the Topological Media Lab (TML).9

Responsive Media Research at the TML
Given these concerns, as I've described them, what's interesting is not so much a matter of taxonomy, schemas, and classifications, or standards and protocols, although those are necessarily part of the robust construction and operation of our playspaces, but the dynamics of processes that stir up, shape, and unshape the material patterns that constitute the lifeworld. The early exercises, studies, and installation-events by Sponge10 dealt with particular questions in performance research: How to
make events that were experientially as powerful as works of avant-garde theater but without resorting to verbal/written language, erasing the distinction between actor and spectator, and relying on thick, physical/computational ambient media. TGarden:TG2001, as built by FoAM and Sponge, was an installation-event that marked a transition and a bifurcation from performance research into a strand of public installation-events and a strand of studio-laboratory research in the Topological Media Lab. I started the TML after leaving Stanford for Georgia Tech in 2001 to take stock of, and strategically extend, some of the technologies of performance according to a particular set of ethical-aesthetic heuristics inspired by continuity, human performance (e.g. the violin), human play (e.g. in water and sand), and non-electronic matter like clay, smoke, or rain. I wanted to make responsive media synthesis engines and gestural instruments, and choreography systems that would allow participants to experimentally co-structure, not interact (!), with co-evolving ambient life in the "real-time" of perceptually concurrent action and the specious present.11 The media engines and instruments that we've developed fall naturally into the areas of calligraphic video, gestural sound, softwear (active materials made of fabric and other soft woven or non-woven matter), and audio-visual instruments (such as DMX-controlled theatrical instruments).

Media Choreography
Media choreography names how, in the approach taken by the Topological Media Lab, the creators of a playspace put all the media together using continuous dynamics and quasi-physics, rather than rules, databases, and procedural logic. This is both an aesthetic and an operational heuristic. Media choreography is a way to relate the synthesis of all the different streams of media in concert with the activities of the people in the common playspace, such that the behaviors (to use an overly anthropocentric term) of the media and the people co-structure one another, and evolve over time according to pre-arranged strategies and latent predilections, contingent activity, and memory of past activity. I appealed to continuous dynamical systems on several grounds: (1) People's experience of the world is continuous. (2) People have sedimented a huge amount of experience with the physical world, so we should leverage it by using quasi-physics models. (3) I wished to see how we could move away from the Judeo-Christian technology of egocentrism and anthropocentrism. A most important common feature of the media choreography of this family of playspaces, from tg2001 (TGarden) to trg, including Time's Up's A Balanced Act, is that the creators specified not a fixed, discrete set or sequence of media triggered by discrete visitor/player actions, but rather a potential range -- a field -- of possible responses to continuous ranges of player actions. But in this family, behavioral tempers, or, to use less animistic terms, climates of response, evolve over macroscopic periods of time (minutes), according to the history of continuous player activity. A subtle difference between an information-theoretic approach to scripting the behavior of a system and the TGarden's quasi-physical approach is that the latter bets on a radically modest approach to computational media as dumb matter.
By dumb I mean (1) free of language, even the formal procedural programming languages that are operationalizations of the logic that I relinquished early in this essay; (2) free of intelligence, the cognitivist approaches of symbolic artificial intelligence; and also (3) free of representations of abstract
structures like hidden Markov statistical models or 3-D polyhedral geometry. One particular research strategy I'm exploring in the TML is to use continuous dynamics to sustain a superposition of contingent and composed potential behavior, and expose these intertwined dynamic processes to the players not through words or discrete tokens, props, or characters, but via the richest available temporal textures of sound and visual imagery. The research heuristic is that this way we can leverage people's bodily intuition by having them play in the media, rather than look at representations of some squiggly shapes projected at some remove from their own flesh. (Representation would rear its head.) To let people play immersed in media, we could have them step into a warm pool of water laced with honey, so why use computational media? Computing the quasi-physics allows the creators to inject a physics that changes according to activity and local history, and responds in ways that resemble but are eerily unlike any ordinary matter. This is analogous to the alienation effect of theater, but not at the level of whole bodies: characters, actors, spectators, plot. Instead, what continuous, dense, topological dynamical systems afford is a micro-fine alienation effect at the level of substrate media such as calligraphic video, gestural sound, and kinetic fabrics imbued with uncanny physics.

A word on method, design heuristic
Indeed it would take a lot of work to build up to macroscopic objects and actions from relatively homogeneous textures and simple dynamics. But I would say that it is not "hard" (the adjective used by Tim Boykett in Riga), but strange and un-idiomatic for all of us who have been trained to the aesthetics and logic of whole bodies and macroscopic human-scale objects like words, props, characters, and conventional game action. After all, to render a character in a novel or play from the raw material of alphabetic text and grammar takes an enormous amount of hard-earned psycho-social knowledge, literary apparatus, and wordcraft. One significant difference between trg and the earlier TGarden, txOom, and tgvu experiments is that in TGarden and tgvu, the metaphorical behavioral state topology is independent of the media state topology, whereas in trg, what the player sees is identical to the behavioral topology. TGarden's state engine evolves through a rather sparse topological landscape with few valleys and peaks, whereas the visual and sound fields are synthesized as densely and temporally finely as possible and as necessary to sustain a rich experience, with micro-dynamics of response that we do not attempt to trace using the state engine. The reason for decoupling the dynamical metaphorical state engine from the media engines was in fact to decouple the evolution of the behavioral response "climate" from the dynamics of the visual and sonic textures, which have to be as rich and tangibly responsive to the players' actions as possible. It seems artistically and compositionally useful to keep these dynamics decoupled from one another. My concern, at least in the context of this essay, is precisely with what possibilities a micro-phenomenology, free of ego and anthropocentrism and indeed of any fixed, a priori objects, can offer toward fresh and refreshing improvised play. Aesthetically, at least for TGarden, this play should take place immanently in as dense an ambient medium as that of ordinary life.
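To make the contrast with trigger-based scripting concrete, here is a toy sketch in the spirit of this media-choreography strategy; the leaky-integrator "climate," its time constants, and the parameter mappings are my illustrative assumptions, not the TGarden engine:

```python
# A continuous "climate" state integrates the history of player activity
# and shades a field of media responses, instead of firing discrete events.

def step_climate(climate, activity, dt=0.05, rise=0.8, decay=0.1):
    """Leaky integrator: climate in [0, 1] warms with activity and cools
    slowly, so the response evolves over long spans rather than per event."""
    d = rise * activity * (1.0 - climate) - decay * climate
    return min(max(climate + dt * d, 0.0), 1.0)

def media_params(climate, activity):
    """Map the continuous state to continuous media parameters: denser
    video grain and brighter sound as the climate warms (assumed mapping)."""
    return {"video_density": 0.2 + 0.8 * climate,
            "sound_brightness": 0.1 + 0.9 * (0.5 * climate + 0.5 * activity)}

climate = 0.0
for t in range(200):                              # ~10 s of simulated frames
    activity = 0.6 if 50 <= t < 150 else 0.05     # a burst of player movement
    climate = step_climate(climate, activity)
print(media_params(climate, 0.05))
```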
So the best approach would be to start with ordinary matter and real fleshy people in common space, and judiciously augment the everyday matter with just enough computational matter to give the event a strange and marvelous cast. This approach, which I nickname "minimax" design -- maximum experiential impact for minimum computational technology -- resonates with the poor theater's choice of a minimalist technology of mise en scene
relative to cinema, a minimalism which in fact is constitutive of its magic.12 However, this apparent inefficiency is in fact endemic not only to "bottom-up" simulations but to all simulations and simulacra. As Humberto Maturana and Francisco Varela pointed out, to be as dense as life, a simulation of an autopoietic system can never operate any faster than that autopoietic system, and can at best run at the speed of life -- so much for the cybernetic fantasy of mastering and replacing the lifeworld by a transcendental, superior simulation of life. As for the theoretical approach, my long-term interest in the TGarden and its sibling responsive playspaces extends beyond the actual events themselves to the mixing of ideas and conflicting ideological commitments from different epistemic cultures. I won't take the space here to pursue this sociologically or anthropologically, but it would be liberating to practice our arts and sciences in a more reflexive way.13 The week after the 'Space and Perception' conference at RIXC in Riga, I participated in a symposium focused on Deleuze, Whitehead and the Transformations of Metaphysics.14 There I realized how to articulate that one could use mathematics as poetry rather than as instrument or measure, or a replacement for God, or an intellectual battering ram. (I must confess, however, to deriving some pleasure from reading Alain Badiou's fearless and fierce polemic about mathematics = ontology.) I agree with Badiou that mathematics is substance, and not merely a description of substance. Shaping mathematics as poietic material in fact differs in kind from using mathematics to describe the universe as physicists see it. Part of trg's charm is its attempt to make palpable a concept of the world (recent quantum-field-theoretic cosmology) by forcibly identifying it with the perceptual field -- a cosmic ambition. The artists could only begin to approximate this by restricting trg to a very compact physical duration and place in Kibla, and by making allegorical simulations in software. Allegory makes the world of difference between depiction and enaction, perception and phenomenology. As for experimental phenomenology, I'm trying to discover and mix together mathematics as materials that are adequate to life. It could be sharply different sorts of poetic matter: continuous topological dynamics, geometric measure theory, or even fancier stuff like noncommutative algebra and étale cohomology. But I choose to start with the simplest symbolic substances that respect the lifeworld's continuous dynamism, change, temporality, infinite transformation, morphogenesis, superposability, continuity, density, and value, and are free of, or at least agnostic with respect to, measure, metric, counting, finitude, formal logic, linguistics (syntax, grammar), digitality, and computability -- in short, of all formal structures that would put a cage over all of the lifeworld. I call these substances topological media.
Simplicity here is not a requirement of the theory (no Occam's razor here) but merely an acknowledgement that I do not understand enough about the lifeworld to bring out fancier stuff yet, of which there is so much more up the wizard's sleeves. The fundamental difference in this approach is to use mathematics as substance in a workmanlike way, patching here and there to see what values ensue, as a trellis for play rather than a carapace, but always sensitive to whether the poetic material accommodates
transfinite, incommensurable, immanent passion. Totalizing carapaces like Wolfram's computational equivalence principle, which at bottom is a transcendental atomic metaphysics founded on making counting sacred, would hammer us into a very sparse ontology. And to a hammer everything is a nail.

What's at stake?
I approached the branching family of playspaces represented by tgarden, txoom, tgvu, and trg as phenomenological experiments of a certain kind, as events based on gesture and movement, rather than language, for people face-to-face in a common place, playing and improvising meaningful micro-relations without language, in thick responsive media. I see these as opportunities for ethico-aesthetic play, to borrow and adapt Guattari's concept of the coming into formation of subjectivity, to engage in biopolitics, radically dispersed into tissue and molecular strata, and reaching far beyond the computational media arts, meeting with experimental impulses in dance, movement, textiles, musical performance, and experimental theater, but also the most speculative initiatives in urban design, science studies, and philosophy. But the ambition here is to conduct even the most philosophical speculation by articulating matter in poetic motion, whose aesthetic meaning and symbolic power are felt as much as perceived.15 I shift the emphasis from spaces of representation to spaces of experience, hence the Topological Media Lab's emphasis on technologies of performance, and on the live event. If we grant ourselves the power and opportunity to experiment with the world at all scales, in all strata, relinquishing all schemas for an object-oriented ontology, to what extent can the blackboxed modes of work, operation, and representation themselves be continuous and transformable sans metric, i.e. topological?

Art all the way down?
If art puts the world in play, puts questions in motion via human and material experience, then art practice could be a mode of material and speculative philosophy. But working in a plenist, unbifurcated world (working with Whitehead's concept of nature recovered whole from the many dualist knives of modernism and postmodernism), I wonder to what extent we can truly suspend, float, and dissolve all distinctions that fracture our being in the world, including the distinction between art and craft. Under capitalism, modern art practice is well served by a distinction between the artist and the executant, the director and the designer, art and craft, theory and practice, and in exchange much commodity art pretends to nothing more than a clever permutation or anamorphic mirror of the actual. But art all the way down could put all relations in play, which implies that how it is produced is as important as what is produced. Therefore it must risk dissolving those distinctions of modern art. FoAM is a good example of an a-modern art organization that tries to work this way with limited access to knowledge and financial capital. However, with the rising star of engineering buoyed by a particularly crude version of pragmatism, there has of course been the counter-cultural revolution aimed at turning the tables on high art, but very often this threatens merely to flip the duality upside down, and manacle art to the categories and norms of engineering and design.16
Given that one of engineering's norms is modularity, I ask: can we alchemically open and critically transform all these blackboxes: "interaction," "program," "information," "bit," "sensor," "cpu," "linguistics," "market," "design," "industry," "body," "ego," "citizen," "machine," "human" ...? Art all the way down means there is no layer below which the socio-technical magma becomes mere machine and craft, the level of the technician who executes the artist's desire. But on the other hand, this means also that we do not reduce conceptual rigor and passionate dreams to a willfully dematerialized, a-historical, anti-intellectual naiveté. It means, for example, that to explore the erotics of the formation and dissolution of object from field has consequences not only at the level of co-present bodies but also at the level of programming language, drawing model, and graphics and dynamics engines. Can the material process of making things collectively be radically non-denumerable, countless, non-computable, non-dimensional, infinite, and yet remain also immanent, embodied, and continuous? Can we make playspaces that evoke not puzzle-solving behavior, but ethico-aesthetic-erotic play, and marvel, or vertigo, or elation? To respect the open, unbounded lifeworld, such a space should not be useful or therapeutic. In fact, that was Guattari's point about psychoanalysis, too. The point would not be to help the participant construct a narrative analogous to the hermeneutic objective of classical psychoanalysis -- "This is what the patient's phobias / psychoses / dreams mean" -- nor to effect a cure -- therapy's arrogant stance with respect to its patient: "You are sick. We will fix you." In a playspace, a participant would not read, interpret, or recount a dream -- a participant would be a dream. Why not just enclose a volume of ordinary space and repeat some experiments like the action art of 40 years ago? With our techniques, a playspace could be charged with latent magic, a heightened potential for charging gestures with symbolic power. A playspace could become a theater for the alchemical transformation of hybrid matter, but not a space for cognitive games, inducing puzzle-solving behavior, nor a bath of raw qualia. An alchemical theater would avoid having "users" and "system" building models of each other. (In the human, such models would be cognitive models.) Our typical model of interaction has been of humans and their proxies engaging in an action-reaction ping-pong. And interaction design, even in its most enlightened mood, has been centered on the human (viz. human-centered design), as if we knew what a human was, and where a human being ends and the rest of the world begins. Since the beginning of the Enlightenment, the automaton has fascinated those members of our species who cannot themselves bear children. One of the most celebrated such automata was the Turk, a chess-playing machine unveiled by Wolfgang von Kempelen in 1770 and toured through the courts of Europe.17 In fact, this chess-playing automaton turned out to be powered by a human chess player hidden inside the box.
This piece of automata history is in fact emblematic of the genealogy of the concept of the software agent as a homunculus, from the ENIAC to the fictive HAL 9000 in "2001," to the agents of SimCity and the customer call-center program that can interpret telephoned speech, as well as John Searle's Chinese Room. But this anthropocentrism is not confined to engineering, of course. Look at Bill Viola's beautiful series of video works, The Passions. If we really take seriously the challenge to pursue art all the way down, and if we are willing to put in play, in suspension, all the putative atoms, objects, and subjects of the world, then I ask you this question: to whom do you owe allegiance: Homo Sapiens Rex, or the world?
Apart from the totalizing and dematerializing power of the Judeo-Christian God, and of informatic and logico-linguistic schemas, essentially the only ethico-aesthetic choice in the West is to start with the self, with Homo Sapiens. We witness the disastrous global ecological and economic consequences of this choice. However, given topology as a way, even a rigorous and precise way, to articulate living, non-denumerable, dense, non-dimensional, open, infinite, and continuous matter, one has the option of choosing the world instead. I use these adjectives precisely for their intertwined technical and poetic values. But this is not going to be a cure-all, a recipe for success. It's an approach to design, a way to think about living in the world, how to shape experience, a disposition with respect to the world, rather than a methodology or a technology.18

Enactment and Enchantment in Living Matter
Dylan Thomas wrote (in 1938):

The force that through the green fuse drives the flower
Drives my green age; that blasts the roots of trees
Is my destroyer.
And I am dumb to tell the crooked rose
My youth is bent by the same wintry fever.

The force that drives the water through the rocks
Drives my red blood; that dries the mouthing streams
Turns mine to wax.
And I am dumb to mouth unto my veins
How at the mountain spring the same mouth sucks.

The hand that whirls the water in the pool
Stirs the quicksand; that ropes the blowing wind
Hauls my shroud sail.
And I am dumb to tell the hanging man
How of my clay is made the hangman's lime....

Allow me to suggest a reverse-allegory and use a piece of the world to stand in for some concepts. This is a patch of sod that I cut out of the earth under a tree outside the RIXC building. Representations, words, are like the blades of grass, individually well formed, discrete. I can pull up this piece of sod and turn it over to reveal the root structure underneath. Yes, there is a network of roots, as we can plainly feel running our fingers through the dirt. However, I draw attention past the blades of grass and their contingently formed roots to the dirt and the moisture in between the roots. It's the continuous, nourishing, dark, loamy stuff in between the discrete structures that materially constitutes the Earth. This moist earth is always and everywhere in continuous transformation. Our discrete structures, our words, syntax, grammars, schemas, and methodologies are the blades and at best the roots. And yes, they are our best ways to grip the earth. But though they are a common supra-individual resource, they are not transcendental. They can only take form in and draw meaning from the earth, and become earth when their life
cycle is finished. Archimedes said, "Give me a place to stand, and I shall move the World." But what if there is no place to stand inside a bubbling chaosmotic soup of infinite inflation? To what extent can we alchemically open and critically transform all of modernity's blackboxes, such as market, machine, or human, if we do not have a place to stand in this age of globalized empire and permanent war?19 Is there any possibility of an immanent resistance for us, not as non-docile bodies, but as resistive and desiring flesh? Yes, I believe, yes, if we take reality already as an amalgam of the potential and the actual, dematerializing, for example by becoming fictive, and rematerializing under the incessant quickening action of our imagination. This affords openings for life in the mud-filled interstices of our technology. A most immanent mode of resistance and weedy generation in those muddy interstices of our technologies of representation is play. Play could be the make-believe, the as if, making fictive, becoming other than what is the case, the art that drives the green fuse all the way down and up again. But in recent years, play has been harried by many who would classify it, barely escaping the nets of those taxidermists who would like to stuff play into the carcass of game. What our playspaces could offer us are not allegories of other worlds, whether cosmological, or political, or religious, or psycho-fictive, but events affording playful processes that open life up to more life. Let me close by suggesting a few senses of play that may merit more careful consideration. There's the play of water lapping against the side of the boat, making the lazy slapping sound that evokes sunlight and fish in the clear water just beyond the reach of your fingers. There's the play, the empty space, between the teeth of interlocking gearwheels, without which the entire assembly of gears would lock up; the teeth guarantee discrete synchrony, but it's the gap that allows movement to be born. And yet, that gap is never a vacuum, because the world's structures are always and everywhere part of the substrate magma of the world. There's play in the sense of continuous, infinite-dimensional variation from any given trajectory that invites arbitrary degrees of novelty. And there's play as the infinite deferral of definition, a passionate sense-making that develops ever more virtuosity re-enchanting the world.
Endnotes

1 For accompanying images, see the website documentation of TGarden, Ouija, and Remedios Terrarium at http://topologicalmedialab.net.
TGarden, Responsive Playspaces, 2001: http://www.topologicalmedialab.net/joomla/main/content/view/8/11/lang,en/
Ouija, Experiment on Collective Gesture in Responsive Media Spaces: http://www.topologicalmedialab.net/joomla/main/content/view/155/11/lang,en/

2 Remedios Terrarium, Fine Arts Gallery, Concordia University, 17 March – 4 April 2008. The main gallery acts as an alchemical vessel mixing multiple species of matter responding to activity inside and outside the space: calligraphic video and sound, plastic cells, structured light, and in certain moments, performers. Other chambers contain sculptural reflections on the terrarium. We compose the exhibition as a two-and-a-half-week-long event breathing according to clocks as well as contingent states. http://topologicalmedialab.net/remediosterrarium/

3 Brook P. The Empty Space. New York: Atheneum, 1968.

4 I find it helpful to think in terms of inequality rather than equality or definition: knowledge ≠ information, concept ≠ representation, abstraction ≠ concept, autonomy ≠ consumer choice. Also, theory ≠ philosophy, model ≠ procedure (problem solving), poetry ≠ rhetoric ≠ theory. I would shy away from abstraction, model, sophist rhetoric, and art "theory"'s word salad, but affirm that we need philosophy, poetry, and concepts if we want a life worth living. See Deleuze G. and Guattari F. What Is Philosophy? European Perspectives. New York: Columbia University Press, 1994.

5 Saussure F. Course in General Linguistics. Tr. Roy Harris. LaSalle, Ill.: Open Court, 1986.

6 One modern root would be Saussure's canonical Course in General Linguistics, but of course we find antecedents in Leibniz's search for a language of mathesis universalis, and Athanasius Kircher's cabalistic formal languages.

7 I thank Patrice Maniglier for teaching me this concept of the symbol.

8 The standard reference for psychiatric disorders is: American Psychiatric Association Task Force on DSM-IV. Diagnostic and Statistical Manual of Mental Disorders: DSM-IV-TR. Washington, DC: APA, 2000.

9 http://topologicalmedia.concordia.ca

10 http://sponge.org

11 Regarding the specious present, see James W. The Principles of Psychology. Cambridge, Mass.: Harvard University Press, 1983 (1890), p. 573. See discussion in Meyer S. Irresistible Dictation: Gertrude Stein and the Correlations of Writing and Science. Writing Science. Stanford, Calif.: Stanford University Press, 2001.

12 See Grotowski J. and Barba E. Towards a Poor Theatre. 1st Routledge ed. New York: Routledge, 2002.

13 Isabelle Stengers has retold the stories of seven scientific disciplines in a way that presents the partial and provisional messiness of science as it is actually practiced. Telling science in this way has both cosmological and political implications, hence the title of her books: Cosmopolitiques. Stengers I. Cosmopolitiques I: La guerre des sciences; L'invention de la mécanique: pouvoir et raison; Thermodynamique: la réalité physique en crise. 2 vols. Vol. 1. Paris: La Découverte / Poche, 2003 (1997). ———. Cosmopolitiques II: Mécanique quantique: la fin du rêve; Au nom de la flèche du temps: le défi de Prigogine; La vie et l'artifice: visages de l'émergence; Pour en finir avec la tolérance. 2 vols. Vol. 2. Paris: La Découverte / Poche, 2003 (1997).

14 Deleuze, Whitehead and the Transformations of Metaphysics, symposium sponsored by the Institute of Philosophy, K.U. Leuven, and the Royal Flemish Academy, Brussels, Belgium, May 25, 2005, with Isabelle Stengers, James Williams, Mick Halewood, Steven Meyer, and about 20 other participants.

15 On felt meaning, see Gendlin E. T. Experiencing and the Creation of Meaning: A Philosophical and Psychological Approach to the Subjective. Evanston, Ill.: Northwestern University Press, 1997.

16 There's a profound difference between discrete approximations to continuous things and things that are discrete from the get-go. One example is the definition of flow by mean curvature, a project that Ken Brakke tried to carry out but could not complete due to deep technical lacunae that could not be patched until Tom Ilmanen's work 20 years later. Elevating the discrete and the computable to universality, via for example Wolfram's principle of computational equivalence or Newell and Simon's symbol-processing hypothesis, excludes more life than it includes. See Brakke K. A. The Motion of a Surface by Its Mean Curvature. Princeton, N.J.: Princeton University Press, 1978. Ilmanen T. Elliptic Regularization and Partial Regularity for Motion by Mean Curvature. Memoirs of the American Mathematical Society, vol. 108, no. 520. Providence, RI: American Mathematical Society, 1994.

17 See Standage T. The Mechanical Turk: The True Story of the Chess-Playing Machine That Fooled the World. London: Allen Lane, 2002.

18 This will be the subject of a book on the genealogy of topological media. For a spirited and beautifully motivated introduction to the mathematical study of proto-metric substance, see Jänich K. Topology. Berlin: Springer-Verlag, 1984 (German original 1980).

19 See Hardt M. and Negri A. Empire. Cambridge, Mass.: Harvard University Press, 2000. ———. Multitude: War and Democracy in the Age of Empire. New York: The Penguin Press, 2004.
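As a gloss on note 16, the continuous object in question can be stated compactly. In the standard formulation (a sketch, not quoted from Brakke or Ilmanen), a family of hypersurfaces F(·, t): Mⁿ → ℝⁿ⁺¹ moves by mean curvature when each point flows in the normal direction with speed equal to the mean curvature:

% Mean curvature flow: H is the mean curvature (sum of principal
% curvatures), nu the outward unit normal, and Delta_{g(t)} the
% Laplace-Beltrami operator of the evolving induced metric.
\[
  \frac{\partial F}{\partial t} = -H\,\nu = \Delta_{g(t)} F
\]
% Simplest exact solution: a round sphere of initial radius R_0 in
% R^{n+1} has H = n/R, so dR/dt = -n/R and the radius obeys
\[
  R(t) = \sqrt{R_0^{\,2} - 2nt}, \qquad \text{vanishing at } t = \frac{R_0^{\,2}}{2n}
\]

Even this simplest solution shows the flow consuming its own domain in finite time; discretizing such a flow faithfully, through the singularities it inevitably develops, is the kind of technical difficulty the note gestures at.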