Adaptive Architecture and Computation Portfolio Marcin Kosicki




MARCIN KOSICKI AAC PORTFOLIO


Genetic Programming in Morphogenesis of Architectural Forms

GENERATION: 100 (-0.7502160933545206 / ((-4.275748497590577 + (((0.5833451812767656 * ((X1 * (0.6837010487240978 (X1 / -0.20618839602994044))) / X2 )) / X2 ) + ((X1 / X2 ) / X2 ))) / (0.6837010487240978 * (1.2884335581974984 * (0.6837010487240978 * (X1 * X1 ))))))

ABOVE: Examples of forms generated by analytical functions. The graphs show tree-based graphical representations of the functions. LEFT: An example of an analytical function f(x1,x2) developed by the genetic programming algorithm.

THE AIM The objective was to create an experimental digital design-space exploration tool that uses principles of genetic programming to create an external building envelope. The main goal was to apply the concept of performance-oriented design based on environmental analysis. The project aimed to support the decision-making process and to evaluate multiple design options during the initial stage of design. THE METHOD The form was developed by a genetic programming algorithm that used symbolic regression to create an analytic function which worked as the genotype. The function was then translated into a 3D surface whose performance was evaluated by a custom-made total solar irradiation algorithm.

The algorithm sought a form that had minimal solar intake in a constrained search space. Initial expressions were formed by randomly combining elements from a primitive set such as mathematical operators, constants, and variables. New equations were then formed by combining previous equations through genetic operations. The algorithm is an example of a stochastic search method. INSPIRATION The inspiration for the method was taken from Chris Williams' famous work for the British Museum Great Court Roof, whose geometry was based on an analytical function. The concept of the ongoing research is to take this approach further and enhance the search for analytic functions by the use of artificial intelligence.


GENOTYPE > PHENOTYPE PSEUDOCODE

1. For every individual there is a fixed 2D grid of initial control points.

2. Each individual has a genotype, which is an analytic function evolved through GP.

3. The function, of the form f(x,y), produces a z-coordinate for every point.

4. A set of node points is created.

5. A mesh is generated based on the node points. It is a discrete representation of the f(x,y) function, which can then be evaluated by the solar irradiation engine.
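The steps above can be sketched as follows. This is a minimal Python stand-in for the original Java implementation; the function names (phenotype, quad_faces) and the regular grid spacing are assumptions made for illustration:

```python
def phenotype(f, nx, ny, dx=1.0, dy=1.0):
    """Map a genotype f(x, y) over a fixed 2D grid of control points,
    producing the node points of the surface (steps 1-4 above)."""
    nodes = []
    for j in range(ny):
        for i in range(nx):
            x, y = i * dx, j * dy
            nodes.append((x, y, f(x, y)))  # z-coordinate from the evolved function
    return nodes

def quad_faces(nx, ny):
    """Connect the node grid into quad faces: the discrete mesh
    that the solar-irradiation engine evaluates (step 5)."""
    faces = []
    for j in range(ny - 1):
        for i in range(nx - 1):
            a = j * nx + i
            faces.append((a, a + 1, a + nx + 1, a + nx))
    return faces
```

For example, `phenotype(lambda x, y: x + y, 4, 3)` yields the twelve node points of a 4 x 3 grid, and `quad_faces(4, 3)` connects them into six quads.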



[Figure: a tree-based representation of the expression (y-1)+(x-1) - (y+1)*(x/2), with node depths 0-3 marked: functional nodes (+, -, *, /) branch to terminal nodes (x, y, constants).]

infix notation: (y-1)+(x-1) - (y+1)*(x/2)
prefix notation: -( +( -(y 1) -(x 1) ) *( +(y 1) /(x 2) ) )

linear prefix representation: - + - Y 1 - X 1 * + Y 1 / X 2

Primitive Set

Kind of Primitive | Example(s)
Functional Set / Arithmetic | + - * /
Functional Set / Mathematical | sin, cos, exp ...
Terminal Set / Variables | x, y
Terminal Set / Constant values | 3, 0.45 ...

char[] expression: a flattened (linear) representation of the trees, which corresponds to listing the primitives in prefix notation but without any brackets. Each primitive occupies one byte.

eval( char[] expression ):
create pc (program counter) and set it to 0,
increment pc by 1 ( pc++ ) for recursion,

GENETIC PROGRAMMING

if expression[pc] is a functional node then
evaluate the arguments and apply the node's function: value = function( eval(expression), eval(expression), ... )
else expression[pc] is a terminal node, so value = expression[pc]
return value
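The eval pseudocode can be written as a small recursive interpreter. This is a hedged Python sketch rather than the original C-style char array; the token set, the ARITY table, and the env dictionary of variable values are assumptions for illustration:

```python
# Recursive evaluation of a flattened prefix expression, mirroring the
# pseudocode above. Only binary arithmetic operators are assumed here.
ARITY = {'+': 2, '-': 2, '*': 2, '/': 2}

def evaluate(expression, env, pc=0):
    """Return (value, next_pc) for the sub-tree starting at expression[pc]."""
    token = expression[pc]
    pc += 1  # pc++ for recursion
    if token in ARITY:               # functional node
        a, pc = evaluate(expression, env, pc)
        b, pc = evaluate(expression, env, pc)
        if token == '+': return a + b, pc
        if token == '-': return a - b, pc
        if token == '*': return a * b, pc
        return a / b, pc             # '/'
    # terminal node: a variable from env, otherwise a numeric constant
    value = env.get(token, None)
    if value is None:
        value = float(token)
    return value, pc
```

Evaluating the linear expression `- + - Y 1 - X 1 * + Y 1 / X 2` with x = 4 and y = 3 gives (3-1)+(4-1) - (3+1)*(4/2) = -3.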

Genetic programming is a branch of genetic algorithms. The main difference between genetic programming and genetic algorithms is the representation of the solution: genetic programming creates computer programs, while GAs create a string of numbers that represents the solution. GP classically creates its programs, as the solution, in the Lisp or Scheme computer languages. They are executed recursively by traversing the program tree.


TREE BASED GENETIC OPERATORS

[Figure: tree-based genetic operators. Crossover: sub-trees of parent1, (x+y)+3, and parent2, (y+1)*(x/2), are exchanged at randomly chosen crossover points to produce an offspring. Mutation: in the parent (y+1)*2 a sub-tree is replaced by a randomly generated sub-tree, x/(x-3), producing the offspring (y+1)*(x/x-3).]

Linear representation (CP = crossover point, MP = mutation point):

parent1: + + X Y 3
parent2: * + Y 1 / X 2
offspring: + + / X 2 + / X 2 3

parent: * + Y 1 2
randomly generated sub-tree: / X - X 3
offspring: * + Y 1 / X - X 3

crossover ( parent1, parent2 ):
len1, len2 = compute the traverse length for parent1 and parent2 from the root node (0)
xo1start = crossover point for parent1 => random int from ( 0, len1 )
xo1end = traverse length to xo1start in parent1
xo2start = crossover point for parent2 => random int from ( 0, len2 )
xo2end = traverse length to xo2start in parent2


lenoff = length of the offspring => xo1start + ( xo2end - xo2start ) + ( len1 - xo1end )
offspring = create the offspring => new char[ lenoff ];
perform the exchange of genetic material with array copies
( source array, src position, destination array, dest position, length ):
copy ( parent1, 0, offspring, 0, xo1start );
copy ( parent2, xo2start, offspring, xo1start, (xo2end - xo2start) );
copy ( parent1, xo1end, offspring, xo1start + (xo2end - xo2start), (len1 - xo1end) );


traverse( char[] buffer, int buffercount ):
if buffer[buffercount] is a terminal node then return( ++buffercount );
if buffer[buffercount] is a functional node then return( traverse( buffer, traverse( buffer, ++buffercount ) ) );
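The traverse and crossover pseudocode can be sketched over a linear prefix array as follows. This is a Python illustration of the same splice logic, not the original char[] implementation; FUNCTIONS (binary operators only) and the rng parameter are assumptions:

```python
import random

FUNCTIONS = {'+', '-', '*', '/'}  # assumed binary functional nodes

def traverse(buffer, i):
    """Return the index just past the sub-tree starting at buffer[i]."""
    if buffer[i] not in FUNCTIONS:      # terminal node
        return i + 1
    # functional node: skip both argument sub-trees
    return traverse(buffer, traverse(buffer, i + 1))

def crossover(parent1, parent2, rng=random):
    """Replace one random sub-tree of parent1 with one of parent2."""
    xo1start = rng.randrange(len(parent1))
    xo1end = traverse(parent1, xo1start)
    xo2start = rng.randrange(len(parent2))
    xo2end = traverse(parent2, xo2start)
    # head of parent1 + donated sub-tree of parent2 + tail of parent1,
    # matching the three array copies in the pseudocode above
    return parent1[:xo1start] + parent2[xo2start:xo2end] + parent1[xo1end:]
```

Mutation is the same splice with a randomly generated sub-tree as the donor: splicing `/ X - X 3` over the terminal `2` of `* + Y 1 2` yields `* + Y 1 / X - X 3`.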


Evolutionary Algorithm (flowchart):

1. Create the initial population (Gen = 0).
2. If the termination criterion is satisfied, stop.
3. Evaluate the fitness of each individual in the population.
4. Set individual = 0 and repeat until individual = M:
   - Select a genetic operation probabilistically.
   - Crossover: select two individuals based on fitness, perform crossover, and insert the offspring into the new population.
   - Mutation: select one individual based on fitness, perform mutation, and insert the mutant into the new population.
   - individual = individual + 1.
5. Gen = Gen + 1; return to step 2.
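The flowchart above corresponds to a main loop along these lines. This is a hedged, generic Python sketch: the fitness, crossover, mutate, select and done callables are placeholders supplied by the caller, not the project's actual code:

```python
import random

def evolve(population, fitness, crossover, mutate, select,
           generations=100, p_crossover=0.9, done=lambda pop, fit: False):
    """Generic GP main loop: evaluate, then fill a new population of the
    same size M by probabilistically chosen crossover or mutation."""
    for gen in range(generations):
        scores = [fitness(ind) for ind in population]
        if done(population, scores):          # termination criterion
            break
        new_population = []
        M = len(population)
        while len(new_population) < M:
            # select a genetic operation probabilistically
            if random.random() < p_crossover:
                a = select(population, scores)
                b = select(population, scores)
                new_population.append(crossover(a, b))
            else:
                a = select(population, scores)
                new_population.append(mutate(a))
        population = new_population
    return population
```

With degenerate operators (crossover returning its first parent, selection always picking the best), the loop simply fills the population with the fittest individual, which makes the control flow easy to check.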




TOTAL SOLAR IRRADIATION SIMULATION: CUMULATIVE SKY APPROACH


1. Sky patches are transformed into points.

2. For every face F in mesh M, pick the part of the sky that is visible to face F.

3. For every face F, create a set S of faces that are positioned "above" face F.

4. For every ray from the centre point of face F to a sky centre point, check if there is an intersection with S.

5. Add the sky patch values from the remaining, unobstructed rays.

6. Perform the calculation for every face of mesh M.



SOLAR IRRADIATION

A computationally efficient approximation of the total solar irradiation on a surface was proposed by Robinson and Stone (2004) in a method called the cumulative sky approach. The method uses a discrete representation of the sky vault due to Tregenza, which has 145 patches. The Perez all-weather luminance distribution model is then used to predict the luminance/radiance at the centroid of each patch, and the result is aggregated over the period of interest.

THE APPLICATION

The principle of the above-mentioned method was used in a custom-made irradiation simulation engine written in Java. It used principles of backward ray tracing to determine which sun rays hit a face and contribute to the total solar irradiation value. As a source of data it uses the EnergyPlus weather format.

PSEUDOCODE

1. Sky patches are transformed into points sPt and the associated irradiation values are initialised.
2. For every face Fn in a mesh M, create a set SsPt containing the points sPt that represent the part of the sky that is visible to face Fn.
3. For every face Fn, create a set Sfn of faces that are positioned "above" face Fn.
4. For every ray R from the centre point of face Fn to every member of SsPt, do a ray-face intersection test with the faces from Sfn.
5. If it does intersect, move to the next SsPt ray check for face Fn.
6. If it doesn't intersect, add the sky patch values associated with SsPt to the cumulative radiation value for face Fn and move to the next ray check for face Fn.
7. Perform the calculation for every face Fn of the mesh M.
8. Return the cumulative irradiation values for every face Fn in the mesh M.
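The accumulation in the pseudocode can be sketched as follows. This is a Python outline in which all geometric tests (visibility, occluder set, ray-face intersection, face centre) are supplied by the caller, since the original engine's Java ray-tracing code is not reproduced here; the function and parameter names are assumptions:

```python
def cumulative_irradiation(faces, sky_points, sky_values,
                           visible, occluders, intersects, centre):
    """Per-face cumulative irradiation, following the steps above.
    Caller-supplied predicates:
      visible(face, sky_pt)    -- is the sky point visible to the face?
      occluders(face)          -- the set Sfn of faces "above" the face
      intersects(p0, p1, face) -- does the segment p0->p1 hit the face?
      centre(face)             -- centre point of the face
    """
    result = []
    for face in faces:                      # step 7: every face of mesh M
        total = 0.0
        c = centre(face)
        for sky_pt, value in zip(sky_points, sky_values):
            if not visible(face, sky_pt):   # step 2
                continue
            blocked = any(intersects(c, sky_pt, other)
                          for other in occluders(face))  # steps 4-5
            if not blocked:
                total += value              # step 6
        result.append(total)                # step 8
    return result
```

With real geometry, `intersects` would be a ray-triangle test and `visible` a hemisphere test against the face normal; here they can be exercised with trivial stand-ins.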



THE PIASUK PROJECT

Physical Computing Workshop, AAC 2015. Team: Ioanna Nika, Athanasios Tsaravas, Marcin Kosicki, Stanislaw Mlynski, Boyana Buyuklieva




Above. Frames depict the action of the end effector of the UR robot. It makes holes in the sand surface with pressurised air, controlled by a custom-made computer algorithm.

Below. Close-ups of the end effector and the sand surface. The process of hole making is captured.



The initial state of a hexagonal CA

SUMMARY The project is an outcome of a physical computing workshop organised in January 2015 at the Bartlett School of Graduate Design. The aim of the project was to design an emergent system inspired by the phenomenon of sand dunes. The system uses a Cellular Automaton to generate movements for a UR robot, which uses air pressure to make patterns in a sandbox. Because of the emergent phenomena of CAs the patterns are unpredictable and, once started, new movements will be generated indefinitely, as long as there is power for both the computer and the robot.

The translation of the CA's state into the robot's movement




Cellular Automata A cellular automaton is a discrete model studied in computability theory, mathematics, physics, complexity science, theoretical biology and microstructure modeling. A cellular automaton consists of a regular grid of cells, each in one of a finite number of states, such as on and off. The grid can be in any finite number of dimensions. For each cell, a set of cells called its neighborhood is defined relative to the specified cell. An initial state (time t = 0) is selected by assigning a state for each cell. A new generation is created (advancing t by 1) according to some fixed rule (generally, a mathematical function) that determines the new state of each cell in terms of the current state of the cell and the states of the cells in its neighborhood. Typically, the rule for updating the state of cells is the same for each cell and does not change over time, and is applied to the whole grid simultaneously, though exceptions are known, such as the stochastic cellular automaton and the asynchronous cellular automaton.

The rule used for the hexagonal CA was simple: a cell is set alive in the next state only if its number of alive neighbours n satisfies 2 <= n <= 3.
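One generation of such a hexagonal CA can be sketched in a few lines of Python. The axial (q, r) coordinate convention for the hex grid is an assumption for illustration; the update rule is exactly the 2 <= n <= 3 rule stated above, applied uniformly to every cell:

```python
# Hexagonal neighbourhood in axial coordinates (q, r): six neighbours.
HEX_DIRS = [(1, 0), (1, -1), (0, -1), (-1, 0), (-1, 1), (0, 1)]

def step(alive):
    """One CA generation: a cell is alive in the next state
    only if it has 2 or 3 alive neighbours (2 <= n <= 3)."""
    # count alive neighbours for every cell adjacent to a live cell
    counts = {}
    for (q, r) in alive:
        for dq, dr in HEX_DIRS:
            cell = (q + dq, r + dr)
            counts[cell] = counts.get(cell, 0) + 1
    return {cell for cell, n in counts.items() if 2 <= n <= 3}
```

Starting from a triangle of three live cells, one step grows the pattern into a six-cell ring, which illustrates how simple local rules produce the emergent patterns the robot traces in the sand.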




Programming a UR robot The movements for the robot were generated in Grasshopper via a plugin called Scorpio, developed by Khaled ElAshry, Vincent Huyghe and Ruairi Glynn. It is a solver which allows easy generation of robotic programs from paths, using inverse kinematics, for Universal Robots. The CA's data is imported directly into the Grasshopper sketch, transformed by the Scorpio package and uploaded directly to the robot through TCP/IP. This enables the simulation of future movements and a direct digital preview of the robot's current position.


AUGMENTED GUIDED ASSEMBLY SUMMARY

The project describes an investigation of an application of mobile augmented reality (MAR) for facilitating the assembly of structures designed by building information modeling (BIM). The goal of the project was to create a prototype tool that could overcome the most common mistakes at each of the four stages in the assembly process as described by Hou et al. (2013), and to develop a low-cost solution to translate geometry and data from a BIM to an AR environment.

AUTHORS: Marcin Kosicki, Boyana Buyuklieva. SOFTWARE: Unity (C#) + Vuforia. Module: City as Interface. Module Leader: Ava Fatah gen. Schieck. Adaptive Architecture and Computation, The Bartlett School of Graduate Design, University College London




Figure 1. Real-time guiding animation

Figure 2. Change of the current element after correct placement

Figure 3. Roof model test: Both the panels and the site were tracked, animation was played for the current one.

Figure 4. Roof model test: The animation was updated in real time.

INTRODUCTION

Current tendencies in architecture allow for buildings that are ever more complex, not only in visual form but also in terms of technical sophistication and sustainability. Today a building can be considered in many dimensions, as afforded by the powerful tools that architects and engineers have available. Despite the extended ability to design with vast information and into intricate detail, a lot of a design's complexity remains trapped in the virtual world. There is a discrepancy between what is virtually possible and physically feasible to construct, despite the tools we have. Although digital fabrication deals with the translation of digital to analog very directly, the manual assembly process is still an indispensable norm in construction because it is a more convenient and cost-efficient convention. In the building and construction industry the idea of Building Information Modelling (BIM) is an attempt to organise complexity in the various stages of a building's cycle (design, construction, maintenance and demolition). The label of BIM is an elusive one, but at its core it involves representing a design as combinations of "objects" that carry their geometry, attributes and relations to other objects. This principle makes BIM an efficient database that became an industry standard [1]. The idea of informed geometry has been around for decades and has also been inherent to some of the first programs for architectural representation. Despite the presence of virtual 3D models which

contain all the data of a project (including the assembly order and location of its constituent elements), two-dimensional line drawings are still the norm when it comes to representing a geometry's data. In the general case, this inefficient use of BIM is mainly because of unintuitive user interfaces [5]. In using BIM information for assembly, the problem is compounded by hardware unavailability and a lack of flexibility on the assembly site. For BIM in a wider context, and especially BIM for assembly, there is a failure to live up to the theoretical potential of the concept because of limitations on its human-computer interaction side. Therefore, a translation into a device which enables more embedded interactions is required. The technology that intuitively fills the gap is augmented reality (AR). AR allows a live direct or indirect view of physical reality, with virtual objects superimposed upon or composited within it. AR's key feature is that it supplements, rather than replaces, the real world [2].

[Figure: taxonomy of an assembly task. Workpiece-related vs non-workpiece-related; physical operations (kinesthetic & psychomotor: observing, grasping, installing) vs information-related activities (cognitive: comprehending, translating, retrieving information in context).]



[Figure: the four stages of the assembly process: refer to the technical specification; obtain the right information; identify the components; place the component and compare against standards.]

The elements had to be placed according to an initial pre-defined specification (in this case, the assembly order). The application developed is flexible enough to accommodate any geometry. To prove this, two models were created and tested. The first was a Soma puzzle, which was chosen because of its complex 3D geometry. The second was a parametric roof structure based on the Elephant House in Copenhagen by Foster+Partners. The application was developed largely in the Unity game-development environment with the aid of the Vuforia SDK.

INTERACTION

When the application is launched, the user is prompted to locate the site by hovering over it. Once it is located, the first element in the assembly order appears as a pulsating virtual object in its correct location on site. Simultaneously, a prompt is given asking the user to hover over all the elements available. When an element is recognised it becomes highlighted.

The user interaction is centered on the current element. When it is tracked, the element begins to pulsate and an animation is triggered showing its allocation to the site. This is a real-time animation that shows the shortest path and simplest rotation for correct placement (Fig. 1). After the user has placed the element as indicated and the system has made a judgment of its correctness, the piece is no longer augmented and the next one in the order of assembly becomes current (Fig. 4). This iterative process continues until the whole structure is assembled.

A design's shift from the virtual to the physical world is a non-evident process, because complex spatial relationships from a purely digital design often require virtual information to make sense of. In addition, another issue with physical assembly arises because the creator and the assembler of a design are rarely the same person. This is problematic because assembly, the process of joining two or more elements based on a technical specification, is not only a kinesthetic/psychomotor process but also a cognitive one [4]. At each stage there is a potential to come across one of the four main issues associated with assembly in construction, as outlined by Hou et al. [4]:

Not being able to find the correct information contained within technical drawings;

Not being able to find the correct component to be assembled;

An incorrect assembly sequence;

Incorrect installation.

Tang et al. [6] studied the effectiveness of AR in assembly tasks and discovered that it reduced error by 82%.

IMPLEMENTATION

An important point of the investigation is the live connection between the site and the elements' relative locations. This is an improvement on the relevant examples, which use only relative position [3] or only an assembly site [7]. The project's main contribution is that it demonstrates that guided manual assembly can be done without 2D technical specifications or complicated hardware. It is an addition to a broad discussion about growing the field of applied AR, and specifically AR in assembly. The latter is the important core of developing AR for construction sites, which would be applicable in a building's construction and demolition life phases.

OUTCOME The aim of the project was to create a working prototype utilising available resources and to explore the use of MAR for assembly. The goal of the MAR assembly task was to correctly place physical elements on specific target locations, according to a pre-defined assembly order.

ACKNOWLEDGEMENTS Our thanks to Ava Fatah gen. Schieck and her kind tutors for their guidance and support.


[Figure: application data flow. An AppManager (TextAsset xmlData; loadXML(); addProperties(); setCurrent(site, 0); setCurrent(puzzle, 0)) coordinates the physical puzzle objects (MultiMarkerVuforia: MultiMarker puzzle0, bool isBeingTracked) and the site objects (ImageTargetVuforia: ImageTarget siteMarker, bool isBeingTracked). Each carries ObjectData (int id, int nextId; bool tracked, current, physical) with render(tracked), fadeInOut(tracked, current) and drawLine(tracked, current, physical) behaviours, plus a mesh (e01). Link objects (mesh; ObjectData startOD, endOD; bool fit) provide animate(tracked, physical), deleteLn(!tracked), ifPlaced(mesh) and setCurrent(fit).]

Above. Application's data flow chart

Below. Assembly procedure augmented by the app.



REFERENCES

1. Azhar, S. (2011). Building information modeling (BIM): trends, benefits, risks, and challenges for the AEC industry. Leadership and Management in Engineering, 11, 241–252.
2. Azuma, R. T. (1997). A survey of augmented reality. Presence: Teleoperators and Virtual Environments, 6(4), 355–385.
3. Elipaz, N., Israel Institute of Technology (2011). Maig, Augmented reality assembly instructor. [Online video] Available from https://www.youtube.com/watch?v=rrFSCRO10Fs
4. Hou, L., Wang, X., Bernold, L., Love, P. E. D. (2013). Using Animated Augmented Reality to Cognitively Guide Assembly. Journal of Computing in Civil Engineering, 439–451.
5. Quirk, V. (2012). "A Brief History of BIM". Archdaily. [Online] Available from http://www.archdaily.com/?p=302490
6. Tang, A., Owen, C., Biocca, F., and Mou, W. (2003). Comparative effectiveness of augmented reality in object assembly. Proc., SIGCHI Conf. on Human Factors in Computing Systems, 73–80.
7. VTT, Technical Research Centre of Finland (2010). Augmented Assembly: Increasing efficiency in assembly work with Augmented Reality. [Online video] Available from https://www.youtube.com/watch?v=vOhiZ37aaww

The project was presented and published in the proceedings of The 4th ACM International Symposium on Pervasive Displays, which took place in Saarbrücken, Germany, on June 10-12, 2015.


THE TANGIBLE TABLE An experimental interface for time-based interactive simulations. SUMMARY A tangible user interface is a user interface in which a person interacts with digital information through the physical environment. Such interfaces are characterised by physical representations that are computationally coupled with underlying digital information and that embody mechanisms for interactive control. Additionally, the physical representations are perceptually coupled to actively mediated digital representations, and the physical state of the tangibles embodies key aspects of the digital state of the system. Five basic defining properties of tangible user interfaces are as follows: space-multiplexed input and output; concurrent access and manipulation of interface components; strong specific devices; spatially aware computational devices; and spatial re-configurability of devices [1].

The project was an early experiment in building and controlling a tangible interface. Its aim was to influence the movements of digital objects that represent flocking behaviour based on Reynolds' Boids algorithm. The positions of physical objects were scanned by a Microsoft Kinect, an affordable 3D scanner, and imported into a custom-made computer program. The program treated them as obstacles for the boids. It gave the user the ability to experiment with human-physical-digital interaction. The aim of the project was to gather the knowledge and experience required for further architecture-specific applications. A similar interface could be used on a daily basis for live pedestrian, solar or wind simulation in the context of the built environment. It could enhance the creativity of architects and make them more aware of the decisions they make during the design process.
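The obstacle response can be illustrated as a single steering term added to a boid's velocity. This is a simplified Python sketch, not the project's actual code; the function name, radius and strength parameters are assumptions:

```python
def steer_away(pos, vel, obstacles, radius=2.0, strength=0.5):
    """Add a repulsion away from scanned obstacle points within `radius`
    to a boid's velocity (one term of a Boids-style update)."""
    vx, vy = vel
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = (dx * dx + dy * dy) ** 0.5
        if 0 < d < radius:
            # push away from the obstacle, harder the closer it is
            vx += strength * dx / (d * d)
            vy += strength * dy / (d * d)
    return (vx, vy)
```

In the full system the obstacle list would come from the Kinect scan each frame, and this term would be summed with the usual cohesion, alignment and separation terms of the Boids model.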

References 1. Mi Jeong Kim, Mary Lou Maher: The impact of tangible user interfaces on spatial cognition during collaborative design. In: Design Studies, Vol 29, No. 3, May 2008




Placement of physical objects

Objects are recognised as obstacles for boids



User's hand picked up by the 3D scanner

Button actions are triggered in response to the user's touch








At the beginning the system needs to be calibrated. This is done by the user selecting 6 reference points.

The top view of the interface. The simulation is running and unobstructed boids are visible.





Computers and creativity in relation to the application of time-based tools in the architectural design process.

Abstract

Computers play an ever more significant role in our lives. Affordable and relatively fast machines in architectural studios lead to a paradigm shift in the way architects can approach the design process. Liberation from the static use of computers, as can be seen in Turner's lecture Breaking the CAD Shell (2009), facilitates the change of architectural design perceived as a plastic art into a time-based medium. The concept, described in Penn's lecture Architecture and Architectural Research (2008), brings time-based simulations on a scale previously unknown and unreachable. The application of tools developed initially in the fields of machine learning and artificial intelligence, like genetic algorithms, neural networks or multi-agent systems, expands designers' ability to engage with their creations. The new methods, in combination with augmented reality systems, give today's architects fundamentally different tools from those that their predecessors were able to use. This essentially changes the role of computers in the design process: it shifts from a sophisticated drawing board to an active assistant that is able to make some of the design decisions on its own. That phenomenon raises the question of creativity. Does it still lie entirely in the hands of architects? Or perhaps it is distributed among humans and machines and, if so, is it evenly distributed? The paper suggests that architects, in order to create better forms for the built environment, have to redevelop some of their concepts of creativity and establish a new connection with the machines that they already have in their studios.




Shift in Architecture

Architecture is traditionally perceived as a plastic art. It is designed by physical manipulation of a plastic medium, often in three dimensions or by two-dimensional drawings. When computers were brought into architectural design they were conceptually used like old-fashioned drawing boards. The dawn of computer-aided design (CAD) standardised the tedious and time-consuming process of drafting, but had no influence on the essence of architectural creation. The development of mathematical functions that described free-form curves and surfaces catalysed the major aesthetic change. In most cases this apparatus gave architects a new means of expression, but conceptually computers were still used as static tools that translated hand-drawn, pre-designed concepts into rigid mathematical models. Machines played a crucial role: they made forms buildable thanks to computer-aided manufacturing and engineering software, but their role was still only to draft. A major change was made when John Frazer (1987) described the concept of "soft modelling". He suggested that instead of developing the geometry of solid forms (as in traditional CAD software) architects should work on the geometry of relationships. In that approach designers are expected to provide information of a higher-level order, a kind of metadata, that controls the creation of the desired form. The metadata, information on how to construct the form, can be computational, so it can be transformed based on predefined rules. The change can be made both by humans and by computer programs. Generally, digital computation uses simulation (usually inspired by a natural phenomenon like evolution) to cope with complex data. Simulations are time-based and are able to produce feedback on a predefined model. Dynamic design by data gives new, previously unknown opportunities: it takes the idea of standard CAD software and pushes it further by adding a new layer of engagement between the designer and their creation.
In computational form finding, architects have to distribute some of their decision power to computers. That phenomenon redefines the role of the computer: it becomes an active assistant. Its responsibility is no longer limited to drafting pre-designed ideas; it has the ability to suggest ideas of its own and, in a conceptual sense, works alongside the architect. This establishes a new human-computer relation which poses the question of creativity. Are computers creative on their own? Are architects' projects creative? Is the process itself creative?

Computers and Creativity

The dawn of Artificial Intelligence (AI) ignited the discussion of whether or not computers can be creative. Since architects are able to implement some algorithms based on AI, they were automatically drawn into that discussion. The first proposition in the debate that took place at University College London on the 14th of November 2014 highlighted the current discourse on computers' creativity. The starting point for the debate indicates that the word "creativity" has to be better specified. According to Boden's



(2004) view, creativity is usually judged by its output. An idea produced by a "creative process" has both fundamental novelty and usefulness. The end result of creativity must in some significant way be different from all previous ideas, either in the context of the creator (egocentric creativity, defined as P-creativity) or in the context of all history (historical creativity, H-creativity). Usefulness means that it has to fulfil some need or purpose. Outcomes of "creativity" often have an element of emotional surprise. The origin of that "deep surprise" is due to the fact that creativity arises when a person transforms his conceptual system so that he is now capable of producing ideas that he could not have had before. Creative ideas are stunning not because they change our assumptions of how things might be, but because they change our assumptions of how things could be. That approach to creativity is disputed by Csikszentmihályi (1988), who points out that the unique feature of scientific discovery is problem finding, not problem solving. According to him, computers simulate some of the rational dimensions of cognition, leaving out the emotional ones, which are crucial for the capacity to formulate a problem in an original way, something much more important than the ability to solve it. Csikszentmihályi claims that as long as machines can only model rational problem solving, they may be better than we are, but they won't be like us. In the debate it was pointed out that the proponents did not want to prove that computers are to exactly copy humans. The knowledge of how human minds work is limited, so it is impossible to build machines that will copy them, or even outperform them in every aspect. If future programs are to match all of humans' creative powers, then future psychology must achieve a complete understanding of the human mind. The time that has passed since the dawn of modern computers in the 1940s and 1950s cannot be compared with millions of years of evolution.
However, the concept of a computer is useful because its programs are effective procedures. Computational theories, not computers as such, are crucial to modern psychology. The computational theory of mind, which perceives the human brain as an information-processing system, tries to explain the conceptual structures in people's minds. The word 'computer' is used as a metaphor for a symbol manipulator described by the Turing Machine (Turing, 1937), and does not mean a modern-day digital computer. Following that concept, the substance in which a particular computation is performed may be silicon, or any unknown material, as well as the neuroprotein working inside human brains. The human brain has unique capabilities. Igor Aleksander, the father of neural networks, admires a still unknown mechanism by which the brain makes surprisingly accurate hunches based on experience. Its capability of generating ideas that might be promising, even before it can be truly understood what the promise is, is exceptional. Another feature is its ability to access knowledge from memory without the necessity of exhaustive searches, and to form associations between unrelated items (Aleksander & Morton, 1990). Those features, traditionally named intuition,




perception, and imagination, are believed to be the catalysts of architectural ideas. The main objection to computers is that they are fully deterministic, so they are irrelevant to humans' creativity. They are believed to follow exact rules, so nothing that was not previously known by the programmer could occur. Unpredictability is said to be the foundation of creativity. Following that idea, chaos theory proves that even very simple local rules can produce unpredicted results that have emergent properties and behaviour. Order can arise in a reverse manner, where small actions are combined until a recognisable pattern of global behaviour emerges. These processes can be open-ended, and when a certain critical mass of complexity is reached, objects can self-organise and self-reproduce. That phenomenon includes the ability to create entities that are not only equal to, but also more complicated than, the initial parent objects (Frazer, 1995). In order to recognise creativity, the creator of an idea has to be conscious of the goal he is looking for. However, most of the mental processes by which people generate novel ideas are unconscious rather than conscious. The crucial factor is self-conscious evaluation of the outcomes of a solution-finding process. A creative system must be able to ask and answer questions about its own ideas. There are programs that can do that. Genetic algorithms, self-transforming problem-solving programs, are able to use rule-changing algorithms based on biological genetics. They map and judge whether their own creations are interesting or not; if so, these are explored further (Boden, 2004). One of the biggest issues when working with computers is the representation of actual data in a computer's "conceptual space". Computers are highly structured but they lack semantics. According to Dreyfus (1986), AI does well in a tightly constrained domain but is ineffective in tasks that require a holistic approach.
Dreyfus' key argument is non-stereotypical story summarisation: the incapacity to succeed in it would cause every machine to fail the Turing Test (Turing, 1950) of intelligence. The failure is caused by the absence of connections between the symbols processed by computers and the denotations of those symbols in the real world. To counter that argument, the learning and pattern-recognition capabilities of the human brain were the inspiration for the creation of neural networks. Neurones are simulated by components arranged in a network. Inputs cause a component to learn to fire when a particular threshold is reached (Alexander & Morton, 1990). Neural networks were used by Elman (1995) to describe natural language as a dynamical system. After training, the network was able to predict the types of words that were to come next in a sentence, putting syntax and semantics in the same space inside a computer program. Elman's work shows that working with computer code focused on processing, rather than structural rules, produces simulations that are much closer to real-world scenarios.
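The firing behaviour described by Alexander and Morton can be sketched as a single threshold unit. This is an illustrative toy with a crude, Hebbian-flavoured weight update, not Elman's network or its actual training procedure:

```python
def fires(inputs, weights, threshold):
    # The unit fires when the weighted sum of its inputs reaches the threshold.
    return sum(i * w for i, w in zip(inputs, weights)) >= threshold

def reinforce(weights, inputs, rate=0.1):
    # Toy learning step: strengthen the weights of the active inputs.
    return [w + rate * i for w, i in zip(weights, inputs)]

w = [0.2, 0.2, 0.2]
pattern = [1, 1, 0]
before = fires(pattern, w, threshold=0.5)   # weighted sum 0.4: does not fire
for _ in range(5):
    w = reinforce(w, pattern)
after = fires(pattern, w, threshold=0.5)    # weights have grown: now it fires
```

Repeated exposure to the same input pattern raises its weights until the unit learns to fire on it, which is the behaviour the quoted passage describes.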



Computers and Architecture

The design process in architecture is a problem far more complex than story summarisation. It was formally described in terms of wicked problems by Rittel and Webber in 1973: the kind of task that has incomplete, contradictory, and changing requirements that are often difficult to recognise. The process includes dynamic factors such as an ill-defined brief, a new site, a different team and subcontractors, and many more. It produces unknown solutions with far-reaching consequences. Following that idea, the creativity associated with architecture is closer to the kind of problem pointed out by Csikszentmihályi. Defining the constraints of the particular site is a crucial starting point for every project. The way in which architects map their conceptual space is decisive when dealing with design challenges. The dimensions of that space are the organising principles which merge and establish mutual connections between relevant domains. As in the field of AI, they define a certain range of possibilities and have a direct influence on the final design. The first attempt to add rationality and organisation to the planning process was made by C. Alexander (1964). The proposal was to treat the design program as a functional analysis of the design problem. Alexander viewed the design problem as a system consisting of two parts: the design form and its context. The problem is defined by the context, which puts demands on the form. The form consists of solutions to the problem, and it is the element of the system over which the designer has control. The design involves both the form and its context, and its goal is to achieve "a good fit" between them. Alexander binds the system pattern of the problem strongly to its solution. Following that model, the problem of complexity is quickly encountered: when a mental model meets the real world, there is no guarantee that a program based on the identified problems is accurate and undistorted.
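Alexander's model can be read computationally as misfit minimisation: the context supplies requirements, and a "good fit" is a form that violates none of them. The sketch below uses entirely hypothetical requirements and form variables, purely to illustrate the structure of the model:

```python
# Hypothetical requirements derived from the context; each judges a candidate form.
requirements = [
    lambda form: form["rooms"] >= 3,             # brief demands at least three rooms
    lambda form: form["glazing"] <= 0.4,         # cap the glazed fraction of the envelope
    lambda form: form["depth"] < form["width"],  # site proportions constrain the plan
]

def misfits(form):
    # "Good fit" is the absence of misfits between form and context.
    return sum(1 for requirement in requirements if not requirement(form))

candidates = [
    {"rooms": 2, "glazing": 0.5, "depth": 4, "width": 6},
    {"rooms": 3, "glazing": 0.3, "depth": 4, "width": 6},
]
best = min(candidates, key=misfits)
```

The weakness Alexander concedes shows up directly: the model is only as good as the list of requirements, and anything not encoded as a variable simply cannot register as a misfit.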
There is always the possibility that some important issues were not included as variables. Alexander himself admits that completeness is not possible, and that the intention of the method is rather to represent the way in which the designer defined the problem. Given the complexity of the requirements, the necessity of information-processing technology is rather obvious, yet Alexander was pessimistic about computers in design. He described them as "a huge army of clerks, equipped with rule books, (...) entirely without initiative, but able to follow exactly millions of precisely defined operations" (Alexander, 1967). When defining Alexander's rule book in a structural manner (like natural-language grammar as recursive rules in Syntactic Structures by Chomsky in 1957), the outcomes are vulnerable to criticism similar to Dreyfus' story summarisation problem. Elman's approach, in which the difficulty lies in creating rules in a way that does not prescribe the result, is a potential solution to that issue. In the field of AI, one of the most active research areas at present is the design of 'hybrid' information-processing systems, combining flexible pattern-matching, evolutionary techniques, sequential processing and hierarchical structures. The crucial point is that such technology, based on rules developed from architectural research, in effect creates emergent forms for the built environment. The way was paved when a new approach to planning was brought up, claiming that the characteristics of a place are based as much on repeating patterns of events as on the consciously designed physical environment. It is suggested that buildings and spaces have subtle qualities resulting from patterns that can be organised into a useful language for planners (Alexander, 1977). It might seem that adopting this approach would lead to a resurrection of holistic ideas from the 1960s, which used forms of component-based rationalisation and techniques that favoured generic approaches, modular coordination and the concept of construction as a set of elements. That led to super-blocks and, through industrialised production, to the homogenisation of the built environment, which was possible because generally all architects used the same tools. In contrast, the idea behind computational design is that planners should develop their own tools for processing patterns that are already known, as well as incorporating elements that reflect their own research and personal style. The growing demand for making buildings ever greener and more sustainable, as well as better integrated into the social context of their users, puts architects in a position where they have to cope with an overwhelming amount of data. New technologies, originating in computer science, have facilitated new means of production. They enable designers to produce a variety of styles and personalised products. When considering that phenomenon in architecture, the results can be closer to pre-industrialisation handcraft than to the regime of super-blocks of the 1960s.
The direct link that might be possible between the architect, the computer code and computer-controlled means of production gives an almost one-to-one relation between the planner, the creation supported by computation, and the product. That new emergent relationship is described by Frazer (1995) as "the electronic craftsmanship".

Conclusion

The founding fathers of modern computation, Alan Turing and John von Neumann, were both principally interested in conceptual computers, in the generative process and in the nature of the living process. In that tradition the computer is a device with the power and speed to meet the requirements of the limits of our imaginations. We need these powers to compress evolutionary time and space so that results can be achieved in reasonable time. The point is not to make computers similar to humans in order to copy their design skills. The point is to design a creative process in which humans and computers complement each other in order to create better, more creative forms for the built environment.

References

ALAN PENN, UNIVERSITY COLLEGE LONDON (2008) Architecture and Architectural Research. [Online video]. December 3rd 2008. Available from: https://itunes.apple.com/pl/podcast/5.-architecture-architectural/id390420445?i=87480745&l=pl&mt=2 [Accessed: December 8th 2014]
ALASDAIR TURNER, UNIVERSITY COLLEGE LONDON (2009) Breaking the CAD Shell. [Online video]. July 2nd 2009. Available from: https://itunes.apple.com/pl/podcast/1.-breakingthe-cad-shell/id390420445?i=87480748&l=pl&mt=2 [Accessed: December 5th 2014]
ALEXANDER, C. (1964) Notes on the Synthesis of Form. Cambridge: Harvard University Press
ALEXANDER, C. (1967) The Question of Computers in Design. Landscape, Autumn, pp. 8-12
ALEXANDER, C. (1977) A Pattern Language: Towns, Buildings, Construction. New York: Oxford University Press
ALEXANDER, I. & MORTON, H. (1990) An Introduction to Neural Computing. Chapman and Hall
BODEN, M. (2004) The Creative Mind: Myths and Mechanisms. London: Routledge
CHOMSKY, N. (1957) Syntactic Structures. Berlin: Mouton de Gruyter
CSIKSZENTMIHALYI, M. (1988) Solving a problem is not finding a new one: a reply to Simon. New Ideas in Psychology, Vol. 6, No. 2, pp. 183-186
DESIGN AS A KNOWLEDGE BASED PROCESS (2014) Of course computers are creative. [Online audio recording]. November 14th 2014. Available from: https://moodle.ucl.ac.uk/pluginfile.php/2617537/mod_folder/content/0/debate1_141114.mp3?forcedownload=1 [Accessed: January 2nd 2015]
DREYFUS, H. & DREYFUS, S. (1986) Mind Over Machine: The Power of Human Intuition and Expertise in the Era of the Computer. New York: The Free Press
ELMAN, J. (1995) Language as a Dynamical System. In: PORT, R. & VAN GELDER, T. (eds.) Mind as Motion. Cambridge, MA: The MIT Press
FRAZER, J. (1995) An Evolutionary Architecture. London: E.G. Bond Ltd
FRAZER, J. (1987) Plastic Modelling - The Flexible Modelling of the Logic of Structure and Space. In: MAVER, T. & WAGTER, H. (eds.) CAAD Futures '87, conference proceedings (Elsevier, 1988), pp. 199-208
RITTEL, H. & WEBBER, M. (1973) Dilemmas in a General Theory of Planning. Policy Sciences, 4, pp. 155-169
TURING, A.M. (1950) Computing Machinery and Intelligence. Mind, 59, pp. 433-460
TURING, A.M. (1937) On Computable Numbers, with an Application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, (2), 42




The New Elephant House, Foster + Partners - Case Study

Abstract

The report reflects on the computational design process behind the New Elephant House at Copenhagen Zoo. It focuses on human-computer interaction in the field of architectural design and traces the effect of the adopted solution on the final form. The project was developed by Foster + Partners in 2003 and built in 2008. It is characterised by two canopy structures, one larger than the other, which stand out from the landscape, while a major part of the building is dug into the earth.



The Goal

Copenhagen Zoo is the biggest cultural institution in the country, attracting 1.2 million visitors every year. Among the over 3,000 animals that people can admire, the Indian elephants are the biggest attraction. The task for the architects was to design a new shelter for the creatures to replace the old structure, which dated back to 1914. The elephants and their social patterns were the focal points of the project. The climate in Denmark, with temperatures dropping to -12°C, forces the animals to live indoors in winter, so providing as much natural light as possible was essential. To meet the demands of both the elephants and the visitors, the building plan was organised around two separate enclosures covered with lightweight, glazed domes, enabling the elephants to live under them or in adjacent paddocks. The geometric complexity of the structure and glazing was developed with custom-made computer programs that were treated as design tools. The programs allowed the architects to 'sketch' early form studies in 3D CAD software, so that they were able to explore more design options and alter the design. The novel workflow of embedding computer programming in the initial design stage enabled the rationalisation of the geometry under fabrication constraints. The authors pointed out that the method allowed them to explore computational approaches that would not have been possible without computers.

Canopy Design Strategy

The architects suggest that their design methodology and unique way of using computers in architecture allowed them to make a proposal that changes the paradigm in architecture. Initially, the architect Norman Foster suggested two canopy structures with a major part of the building dug into the earth. In the next step, the design team conducted creative research through various media, mostly physical models. The necessity to describe the three-dimensional quality of the structures and spaces led the team to use a computer-aided design (CAD) 'sketchy' model. The proposed geometry was based on the arrangement of the elephant spaces and the relation to the neighbouring landscape, and had a complex, double-curved geometry. The desired form raised the issue of expensive fabrication requiring unique, custom-made elements; on the other hand, the project cost constraints demanded repetitive, identical parts that could be manufactured in a conventional way. In order to overcome that



problem and rationalise the complex form, the design team decided to use toroidal geometry, which was not an outcome of computational methods and could have been created through an analogue process. As pointed out by Pottmann (2007), a torus is a surface of revolution generated by revolving a circle in three-dimensional space around an axis coplanar with the circle and usually outside it. The form it creates is commonly described as a "doughnut" or a "tyre" (Fig. 2). In the digital design process, the smooth surface of the torus was then converted into a discrete representation of planar, quadrilateral panels, aligned with each other along their edges and repetitive in the direction of rotation. Additionally, the geometric set-out was based on arcs, which significantly simplified solid and surface offsets. The decision to use toroidal geometry resolved complex structural issues and allowed the components to be manufactured in a traditional way. Each canopy structure in the New Elephant House was based on a different torus with different radii, determined by the specification of each of the two elephant areas. An irregular form was obtained by tilting the torus away from the vertical and cutting it with a horizontal plane (Fig. 3). The angles of inclination were determined by the form of the space created by the two enclosure areas and by the form of the intersection created when the torus was cut by the plane. Finally, all of the centre-lines, beams and glazing elements were orientated according to the mathematical logic of the torus.
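The toroidal set-out can be sketched directly from the parametric equation of the torus. The grid aligned with the circles of revolution yields planar quads, because the two edges running in the direction of rotation are parallel chords. The radii and panel counts below are illustrative, not the project's values:

```python
import math

def torus_point(R, r, u, v):
    # R: primary (revolution) radius, r: secondary (tube) radius,
    # u: angle around the axis, v: angle around the tube.
    return ((R + r * math.cos(v)) * math.cos(u),
            (R + r * math.cos(v)) * math.sin(u),
            r * math.sin(v))

def quad_panels(R, r, n_u, n_v):
    # Discretise the smooth torus into quadrilateral panels aligned with the
    # circles of revolution; each panel has two parallel edges, so it is planar.
    panels = []
    for i in range(n_u):
        for j in range(n_v):
            u0, u1 = 2 * math.pi * i / n_u, 2 * math.pi * (i + 1) / n_u
            v0, v1 = 2 * math.pi * j / n_v, 2 * math.pi * (j + 1) / n_v
            panels.append((torus_point(R, r, u0, v0), torus_point(R, r, u1, v0),
                           torus_point(R, r, u1, v1), torus_point(R, r, u0, v1)))
    return panels

panels = quad_panels(R=20.0, r=8.0, n_u=24, n_v=12)
```

Because the panels repeat identically in the direction of rotation, only one panel shape per ring needs to be fabricated, which is the economy the toroidal set-out buys.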

Generative Design of Structure and Glazing

The complexity and number of configurations to be studied suggested a generative approach. A parametric model was developed by writing a custom computer program, named Structure Generator, by a member of the architectural team with programming skills. In this case computer programming was treated as another design tool, one that liberated the architects from the limitations of standard architectural software and offered a way to explore further design options. The program was a macro, which in computer science is a rule or pattern that maps a specific input into a desired output; macros are generally used to make repetitive tasks in an application less laborious. Here the program took 26 specific numeric input variables that controlled the form of the canopies, including: the number of elements, the structural offset, the primary and secondary radii of the torus, the size, spacing and type of structural members, the extent of the structure, etc. Additionally, an input geometry and several right-angle lines were provided, defining a basic coordinate system that determined the position of the torus in space and its rotation. The outcome of the macro was a parametric model of the roof that generated all of the structural members, glazing components and tables of node points. The model defined a well-constrained search space in which the team were able to evaluate a vast number of possible solutions simply by changing the input parameters. Therefore, despite the complexity of the geometry, the use of computation allowed the creation of many design options that could be visualised, fabricated and tested.

Fig. 4: 1. Initial torus. 2. Revolved circles at the desired spacing. 3. Points from equally divided circles. 4. Quad panels in 3D space. 5. Panels divided by the horizontal plane. 6. Panels unfolded into planar strips.

Figure 4 presents a diagram describing the process of creating the discrete, panelled representation from an initial surface. It was made by the author of this report, using software different from that of the original project, only to visually communicate the principles of the parametric approach adopted at Foster + Partners.
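The workflow, numeric inputs in and a regenerated model out, can be sketched as a deterministic generator function. Everything below is hypothetical: a handful of stand-in parameters instead of the actual 26, and bare node points instead of full structural members and glazing:

```python
import math

def generate_canopy(primary_radius, secondary_radius, n_beams, n_rings,
                    tilt, cut_height):
    # Regenerate node points on a torus tilted away from the vertical,
    # keeping only those above the horizontal cutting plane.
    cos_t, sin_t = math.cos(tilt), math.sin(tilt)
    nodes = []
    for i in range(n_beams):
        for j in range(n_rings):
            u = 2 * math.pi * i / n_beams
            v = 2 * math.pi * j / n_rings
            x = (primary_radius + secondary_radius * math.cos(v)) * math.cos(u)
            y = (primary_radius + secondary_radius * math.cos(v)) * math.sin(u)
            z = secondary_radius * math.sin(v)
            xt, zt = x * cos_t - z * sin_t, x * sin_t + z * cos_t  # tilt the torus
            if zt > cut_height:                                    # cut with a plane
                nodes.append((xt, y, zt))
    return nodes

# Changing only the numeric inputs regenerates a different design option:
option_a = generate_canopy(20, 8, 48, 24, tilt=0.3, cut_height=0.0)
option_b = generate_canopy(26, 10, 48, 24, tilt=0.2, cut_height=2.0)
```

Each call rebuilds the whole model from scratch, which is what made it cheap for the team to enumerate and compare many equally detailed design options.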



Although the New Elephant House roof elements were generated via a digital design process, the key decisions were made by studying physical models. Rapid prototyping technology, especially the adoption of three-dimensional printing, enabled the architects to study landscape options, interior spatial studies and canopy structure options generated by computer programming (Peters, 2008).

Discussion

The authors of the project underlined that the final form would not have been possible without computation and a digital design approach. It is emphasised that the canopy geometry was neither pre-rationalised nor post-rationalised, but that the rationalisation process and the construction-system ideas evolved with the project. One can argue with that statement when reflecting carefully on the workflow described by the authors (Peters, 2008). The decision to base the main geometric concept on a torus was made purely because of the torus' well-known formal properties. It was rather pre-rational, given its deterministic results, linear behaviour and geometric construction. The outcome was based on an analytical solution that mapped the problem's complexity into a manageable subspace. While that approach was acceptable and perfectly sound from a structural point of view, it did not seem to reflect current developments in design and fabrication technologies (Becker, 2007). The computer program made by the team was strictly procedural, and its output was not a product of any emergent behaviour; it did not involve any artificial intelligence. The suggested approach was an example of a static use of computers in architecture. The algorithm did not possess any decision-making power, and there was no room for its creativity. The entire holistic approach stayed on the side of the humans, who formed a master-slave relation with the machine. Conceptually, the role of the computer was to be a sophisticated drafting board rather than an active assistant able to make some of the design decisions on its own. It resembled Alexander's view (1967), rather pessimistic about computers in architecture, which described them as "a huge army of clerks, equipped with rule books, (...) entirely without initiative, but able to follow exactly millions of precisely defined operations". In this case, the army followed exactly the same procedures, regenerating the model based on different parameters.
There was no feedback loop embedded in the algorithm, and all evaluation was made by humans. On the other hand, the design constraints were encoded within a system of associated geometries. As Williams (2004) pointed out, an algorithm is a finite list of well-defined instructions for accomplishing a task; its rules are not ambiguous, and their interpretation is straightforward. When design is done by humans, especially in architecture, the rules are vague, incomplete and contradictory, and require intelligence to decipher. Architectural tasks have changing requirements that are often difficult to recognise. These phenomena, described as wicked problems (Rittel, 1973), require a rare ability to understand and interpret the design intent and translate it into algorithms. The algorithm used in this project did not exist beforehand and was developed incrementally during the project. It used components known from standard computer-aided design software; the added value was to connect them in a logical sequence and automate the process of geometry generation. The advantage of that method was that it assured validity and consistency without the disturbances commonly encountered when dealing with more free-form surfaces. There was no problem of deviation from the original envelope form, which occurs when architects start to play with a geometry. Determinism in that case was a psychological driving factor as much as a practical one. Nevertheless, adopting such a constrained solution space opened up new possibilities for design expression. The architects were able to generate a vast number of equally detailed versions of the building envelope and evaluate them in order to choose the optimum solution. It undoubtedly liberated the team from some laborious calculation tasks and allowed them to focus more on the quality of the design. The adoption of the parametric approach augmented their personal creativity and provided a tool that could easily cope with an overwhelming amount of data. It allowed the architects to focus more on the growing demand for making buildings greener and more sustainable, as well as better integrated into the social context of their users. In that sense, it arguably let pragmatic constraints overplay architectural considerations by interpreting and implementing them directly (Becker, 2007). The principal idea behind computational design gave the designers the ability to develop their own tools for processing already-known patterns, as well as developing elements that reflected their own research and personal style. The new approach, originating in computer science, facilitated new means of production.
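The "static" pattern discussed here, a deterministic generator swept over a constrained parameter space with all evaluation sitting outside the algorithm, can be sketched as follows. Both the generator and the selection criterion are stand-ins, not the project's:

```python
import itertools
import math

def generate(radius, spacing):
    # Stand-in for deterministic model regeneration: same inputs, same output,
    # with no feedback loop inside the algorithm itself.
    n_members = int(2 * math.pi * radius / spacing)
    return {"radius": radius, "spacing": spacing, "members": n_members}

# Enumerate a constrained search space of design options...
options = [generate(r, s) for r, s in itertools.product([18, 20, 22], [0.8, 1.0, 1.2])]

# ...while the evaluation sits entirely outside the generator (a stand-in for
# the human judgement described above): here, the fewest members wins.
chosen = min(options, key=lambda option: option["members"])
```

The algorithm never learns from the evaluation; every judgement, however encoded, remains external to the generation step, which is precisely the master-slave relation criticised above.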
Due to the computational approach, the final design was handed over to the fabricator not via a traditional 2D hardcopy or a 3D digital model, but in a format called a Geometry Method Statement (Peters, 2008). It ensured that simple, generative, geometric rules were reliably transferred between various CAD systems. The fabricators were required to build their own models in the CAD system most suitable for them, which minimised the chance of error when exchanging data in different formats. This was an example of a phenomenon in which the results of computation in architecture can be closer to pre-industrialisation handcraft than to the regime of super-blocks of the 1960s. The direct link established between the architect, the computer code and the computer-controlled means of production gave an almost one-to-one relation between the planner, the creation supported by computation, and the product. This relationship was described by Frazer (1995) as "the electronic craftsmanship".

Conclusion

The project was an example of a response within the broader discourse on adopting digital design methods in architecture, a discourse that has recently become gradually more interesting as it takes us deeper into the effects of human-computer interaction. The presented method used a rather traditional way of interpreting design problems, yet it undoubtedly contributed to the field and grounded the notion of validity for the usage of computation



in architecture. The aim of such careful reflection is obviously not to make computers more similar to humans, or to copy their design skills exactly, but rather to seek a creative process in which humans and computers complement each other in order to create better, more efficient forms for the built environment.

Bibliography

POTTMANN, H., ASPERL, A., HOFER, M. & KILIAN, A. (2007) Architectural Geometry. Exton, PA: Bentley Institute Press
PETERS, B. (2008) The Copenhagen Elephant House: A Case Study of Digital Design Processes. Proceedings of the ACADIA 2008 Conference
ALEXANDER, C. (1967) The Question of Computers in Design. Landscape, Autumn, pp. 8-12
RITTEL, H. & WEBBER, M. (1973) Dilemmas in a General Theory of Planning. Policy Sciences, 4, pp. 155-169
BECKER, M. & DRITSAS, M. (2007) Research & Design in Shifting from Analogue to Digital. In: LILLEY, B. & BEESLEY, P. (eds.) Expanding Bodies: Art, Cities, Environment, Proceedings of the ACADIA 2007 Conference. Halifax: TUNS Press
WILLIAMS, C. (2004) Design by Algorithm. In: LEACH, N., TURNBULL, D. & WILLIAMS, C. (eds.) Digital Tectonics, pp. 78-85. West Sussex, UK: John Wiley and Sons
FRAZER, J. (1995) An Evolutionary Architecture. London: E.G. Bond Ltd

Construction of the roof. Source: www.fosterandpartners.com


