EVOLUTIONARY AUTOMAT_I_©ITY
RUIS DERVISHI
RUIS DERVISHI
EVOLUTIONARY AUTOMAT_I_©ITY
Master's Thesis in Architecture
Advisor: Giovanni Galli
Faculty of Architecture, University of Genoa
TABLE OF CONTENTS

01 THE IDEA
Introduction 6-9
Problem 10-13
Proposed Solution 14-15
Basic Conditions 16-17

02 AUTOMATICITY
Self-Organization Model 20-23
Cellular Automaton 24-26
Conway's Game of Life 27-31

03 EVOLUTION
Evolutionary Computation 34
Evolutionary Algorithm 35-36
The Software 37-38
Galapagos 39-51

04 RESEARCH
Life-Like Cellular Automata Studies 54-64
Choosing The Rule 65-67
The Algorithm 68-69
Solar Studies 70-73
The Algorithm 74-75

05 THE PROJECT
The Site 78-81
Setup 82-83
Final Organism 84-87
Vertical Communication 88-89
The Algorithm 90-91
Cells Connections 92-95
Apartments Design 96-105
Drawings and Renderings 106-111

06 REFERENCE
Sitography
Bibliography
THE IDEA
INTRODUCTION

“We don’t live in nature any more – we put boxes around it. But now we can actually engineer nature to sustain our needs. All we have to do is design the code and it will self-create. Our visions today – if we can encapsulate them in a seed – [will] grow to actually fulfill that vision. [...] One day, who knows, maybe we’ll plant a seed and grow a skyscraper, that has all the nutrients it needs to stay warm, to literally react to our environment, maybe even keep an eye on us, protect us, nurture us. It’s just all in the design.” 1 Andrew Hessel.

These words, spoken by Professor Hessel during an interview on ArchDaily, may sound prophetic. Computers have already changed our lives in many aspects, often improving them considerably. On the other hand, their real potential seems largely unexplored. In many fields computational technology has helped scientists to better understand nature and to use this information to improve or even reinvent our technology. We have safer transportation vehicles, better and faster communication tools, more efficient materials, and so on. In fact we have all of human knowledge at our fingertips; we just need to browse the internet. Architecture has benefited from the computational revolution like many other fields. One could define as “Architecture 1.0” the practice that architects followed for centuries until the early 1960s. With the invention of computer-aided design (CAD) we entered what we can define as “Architecture 2.0”. CAD helped architects to improve their productivity and the quality of their designs, to improve communication through documentation, and to create databases for manufacturing. In the last fifty years computer-aided design has made huge advances. New software and new manufacturing techniques have been developed. Building Information Modeling (BIM) has further increased productivity and information management. 3D computer graphics software allows architects to design and optimize complex shapes.

Luc Schuiten: Vegetal City

BIM design process

Parametric pattern generated in Grasshopper

Today we are approaching a second revolution in architecture. We can call it “Architecture 3.0”. “If you don’t know what Grasshopper is (or think it’s a reference to Kwai Chang Caine), you are already in the wastelands of the digital age. It is one of the leading factors of Architecture 3.0, the second computational revolution for building design. This new phase is shortening the design process from months to days, and allowing a new generation to envision, design, and execute major projects with a single laptop,” says Neil Chambers, CEO and Founder of Brooklyn-based Chambers Design.
LAVA: Digital Origami
“For Architecture 3.0 to come alive,” says Chambers, “energy modeling and other sustainability-minded analyses need to be used as early as the pre-conceptual, conceptual, and schematic phases. Second, programs that evaluate climate, energy, and comfort need to be used as often as Revit or Rhino is used today... Water, biodiversity, and ecology will need to be active design criteria for a true paradigm shift to happen.” “Where current energy modeling systems (eQuest, EnergyPlus, and DOE-2) evaluate one building at a time, Architecture 3.0 wants to evaluate hundreds, if not thousands, of design options for a single component of a building, such as glazing percentages, orientation or solar gain exchange.” “This is only the beginning of Architecture 3.0.” 2

Aedas: Abu Dhabi Investment Council Headquarters, responsive facade
We are already one step beyond so-called “Parametricism”. Parametricism belongs to Architecture 2.0; it has become little more than a theoretical exercise. It is a powerful tool that architects can use to design their buildings, but once the tool has generated thousands of different options the
architect often chooses one based only on his personal taste. This way the architect defines what is “nice” and what is not, forgetting what is necessary for the people who will live in his creation. Using computational design as parametricism does can be misleading. It does not solve any contemporary problem in architecture; it merely creates a new architectonic style. Of course “each architectural style represents an epoch in the history of civilisation and parametricism is the first new epochal style after modernism” 3, but we are moving towards the health age, and computational design as applied until now does not reflect this tendency. It offers no solution for improving life quality through architecture, apart from aesthetic questions, which are always subjective. Architecture 3.0 is the evolution of parametricism. It uses computational design as a tool to improve efficiency and to design sustainable strategies. This is already a huge step forward, but the architect still remains the central figure in the design process. The process too often consists of adaptive refinements to a form conceived by the architect, who certainly shapes it considering the feedback of computational analyses; but since it is a human creation, it will not be the best possible overall solution. Evolutionary Automaticity moves even further than Architecture 3.0. Evolutionary Automaticity creates an organism that responds to certain given inputs and self-creates. The role of the architect here shifts from designing a building under certain given conditions to inputting those conditions into the algorithm, from which the organism is generated automatically. This process aims to mimic natural behaviors; it does not aim to mimic natural forms. In fact, one may not be able to recognize natural shapes in the final organism, even though it would be possible to mimic natural form optimisations by introducing different parameters. This project wants to create a different approach to computational design.
The future described by Andrew Hessel is
J. MAYER H. Architecs: Mirador, Sevilla
Zaha Hadid Architects: Kartal-Pendik Masterplan, Istanbul
Zaha Hadid Architects: Performing arts center, Abu Dhabi
probably far away, but we have every possibility of moving in that direction. Evolutionary Automaticity does not solve, or attempt to solve, any problem of construction technology. Building construction techniques will evolve and change following the evolution of technology. Evolutionary Automaticity does introduce a revolutionary methodology for creating the architecture of the future, which will be economic, ecologic, sustainable, and will fulfill our needs even in highly complex, dense urban environments.

Hernan Diaz Alonso
Notes: 1. AD Interviews, Archdaily: Andrew Hessel 2. Metropolis POV Blog: The Role of Software for Birthing Architecture 3.0 3. Theory against Theory: Patrik Schumacher: The Future is Ready to Start
Chimera Team: Mangal City
PROBLEM

The 20th century is associated with the phenomenon of rapid urbanisation. By 1900, 13% of the world’s population was urban. Over the following decades, improvements in medicine and science allowed higher city densities. According to UN reports, the urban population increased from 220 million in 1900 to 732 million in 1950 (29% of the world’s population). By 2007, 50% of the world’s population was living in cities; further improvements in technology, medicine and disease prevention allowed even larger urban densities. According to the latest predictions, 4.9 billion people, or 60% of the world’s population, are expected to be urban dwellers by 2030 (Table 1). Investigations show significant differences in urban population change between the more developed regions and the less developed regions. The majority of the inhabitants of the less developed regions still live in rural areas, while in the more developed regions the population is already highly urbanized. As development increases, urbanisation is expected to rise further in the future (Table 2). Cities have the potential to reduce poverty and improve living conditions for a large proportion of the population – if they are managed properly and make the most of their advantages. But there is a desperate need for new urban models to tackle the associated social, economic and environmental pressures in a sustainable way. To maximize the potential of cities, upcoming challenges such as inequality, energy, water, garbage, transportation, housing, effective infrastructure, public safety and environmental impact must be addressed. However, the diversity of cities means that a challenge in one city can mean something different for another city. There is no single set of solutions to suit every city and all their residents, with their subjective views on quality of life. Any solution must take local conditions into account. 2
had a population of 500,000 citizens (6th–7th century AD) and was considered to be the second largest city after Baghdad. Today the same city, Istanbul, has become a modern megacity of approximately 11 million citizens connecting Europe with Asia. It is obvious that the location and topography of the area, together with other major factors like the economy, have played a major role in the progress and advancement of several cities through the centuries.

Table 1. Global proportion of the urban population increase. (Source: UN Population Division)

Year    Urban population (million)    Proportion
1900    220                           13%
1950    732                           29%
2005    3,200                         49%
2030    4,900                         60%

Table 2. Differences in urban population rates. (Source: UN Population Division)

        More developed regions              Less developed regions
Year    Population (billion)    Per cent    Population (billion)    Per cent
1900    0.15                                0.07                    14%
2005    0.90                    74%         2.3                     43%
2030    1.00                    81%         3.9                     56%

However, as cities expand beyond their administrative boundaries they lack the financial or jurisdictional capacity to provide the necessary services (planning, water, electricity, sanitation, etc.) to all inhabitants. The administration of the city becomes more complicated and bureaucratic in the less developed countries, where land administration is weak and new technology and necessary spatial tools are not implemented. In the following, a few examples are given of the increased need for services provision in city management due to rapid urbanisation.

Energy insecurity (Figure 2b, 2c) has become a major global issue and the related pollution management is expensive. Energy inadequacy and illegal electricity connections are a common phenomenon in most countries of the world facing the problem of rapid urbanisation, also within Europe – especially within the Eastern European region. The Public Power Corporation’s (PPC) plant in Kozani, Greece has been found to be one of the most polluting in Europe. As reported, PPC will pay up to 2.2 billion Euros a year for carbon emission licenses unless it shifts away from its dependence on lignite. Consumers could expect a rise in electricity bills of 45% by 2013 (Figure 2a).
Overcrowded trains in India
Housing buildings in Hong Kong
As architects we need to focus on the housing problem. Too often architecture has forgotten housing in favor of hyper-expensive, useless iconic buildings. Affordable housing often means identical concrete constructions more than 25 m high. For example, in order to achieve economies of scale in the modern city of Skopje (of only 571,040 citizens) this has recently become the minimum required height prescribed in the building regulations, while in the past planners were accustomed to working with maximum permitted height standards. Safety standards are frequently overlooked for the sake of increased commercial development, with terrible results. Such was the case in some modern constructions following a strong earthquake in L’Aquila, Italy, in April 2009. As reported by Prof. Rangachari of India, humanity has lived with floods for centuries, but the impact of floods was not felt to the same extent in the past as it is now. Construction in stream and river floodplains or close to the coast, or in areas where extensive deforestation has taken place due to rapid urbanisation, presents greater risk of flooding and mudslides. The results are similar whether in India, in the favelas of Sao Paulo, in the unplanned settlements of Europe, in New Orleans or in Asia. Natural disasters, floods, earthquakes and fires, are more difficult to deal with in highly urbanized areas and affect both rich and poor. Rapid population growth leads to an increased need for affordable housing in most cities; the lack of adequate policies leads to rapid informal development. Informal and unplanned development is in fact caused by the phenomenon of rapid urbanisation. As reported in The Economist, “the poor, who seem to prefer urban squalor to rural hopelessness, migrate to the city centers and urban fringe creating slums”. According to UN statistics, one in every three of the world’s city residents lives in inadequate housing with few or no basic services.
The world’s slum population is expected to reach 1.4 billion by 2020. Informal settlements, whether of
Kin Ming estate, Hong Kong
Kowloon, Hong Kong
good or bad construction quality, have a common characteristic all over the world: they do not officially exist. For that reason the government provides nothing, or very little in the best cases. Slums in less developed areas, whether in Latin America, Africa, Asia, ex-Soviet Asia or even in Europe, share a few characteristics: unclear land tenure, poor quality and size of construction, little or no access to services, and violation of land-use zoning. Crime, which flourishes in crowded areas with insufficient job opportunities, is also a common characteristic. Unfortunately, in the slum situation changes are difficult and slow because, as often reported in The Economist, both sides, the city administrations and the slum dwellers, may enjoy benefits in some cases:
– Frequently, many people make money from the informal housing sector.
– Slums provide cheap labour that enables the city to operate.
– The situation may suit the authorities, since the economy of the city is supported and at the same time it is an alternative to the missing social housing policy.
– Politicians or civil servants may be landlords in slum areas.
– Poor rural people or immigrants are offered hope for employment in the formal economy of the city.
– Slums are usually well placed near the city, so if the poor do find jobs they can walk to work.
Informal development is also caused by the spread of the low- or middle-income population to the cities’ outskirts and the surrounding rural lands, either by squatting on rural land or by seeking affordable land to develop self-made housing. This causes an increase in informal real estate markets and a loss of state revenue (through lost permit revenue and taxes), illegal changes in the spatial organisation of land uses and gradual environmental degradation. This sub-urban population commutes to the city centers every day, consuming energy and increasing traffic and pollution problems. 3
Slum, Brazil
Favela, Sao Paulo
Notes: 1. FIG publication n° 48: Rapid Urbanization and Mega Cities 2. The networked society blog : Life in the modern-day megacity 3. FIG publication n° 48: Rapid Urbanization and Mega Cities
PROPOSED SOLUTION

With the rapid increase of urbanization, cities are not just growing in size; they are also growing in complexity. Architects have always tried to solve complex problems through rationalization. Bringing order to complexity was widely accepted as the ideal solution. With modernism in particular, this concept was elevated to a manifesto. In fact, order could be applied (or attempted) only at small scale. When we start to aggregate ordered elements we lose control of the system, ending up in chaos; the more complex the system, the greater the entropy and the faster the transition to chaos. For instance, if we look at Manhattan in plan we recognize a square grid, but if we look at it in the third dimension we see a “delirious” chaotic aggregation of almost standard boxes. If we add to the equation the fourth dimension and people’s behavior, we end up in chaos. Since our cities are becoming bigger and more complex, no architect can deal with this complexity by defining the ideal solution.
Manhattan
Manhattan’s grid plan
As illustrated in the previous chapter, housing in megacities can be divided into two categories. One is self-organized (slums) and has an unpredictable, chaotic organization; the other is designed by architects (high-density modernist buildings) and has an ordered organization. Nature is in constant equilibrium between order and chaos. One of the important findings of modern chaos theory is that seeds of order seem to be embedded in chaos, while seeds of chaos are apparently embedded in order. Systems that are stable in relation to their environment can become unstable. Systems that are unstable can return to stability. The proposition of this thesis is to build a new architectural process that uses computers to create a complex housing organism that mimics natural morphogenesis. The organism must be generated using simple rules and must be able to evolve under certain given conditions. Even in its most complex and apparently chaotic configuration, the organism must keep the initial conditions, to ensure high levels of life quality, sustainability and economy. The architect will define only the initial conditions, which may change from site to site. The algorithm will generate the organism through evolutionary optimizations to match the given conditions and will provide the documentation needed for construction, from floorplans to building information. Computers can handle huge amounts of data and can generate complex solutions from this information. We just need to give the computer the right instructions; the traditional process will therefore be inverted. Computers will no longer assist architects in their creations; architects will assist computers to automatically generate the needed solution.
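The inverted process described above, in which the architect supplies the conditions and the algorithm searches for a form that satisfies them, can be sketched as a minimal evolutionary loop. The Python sketch below is purely illustrative: the toy "organism" (a row of cell heights), the fitness function and every name in it are hypothetical stand-ins for real design criteria such as density and daylight, not the method actually used later in the thesis.

```python
import random

# Toy "organism": a row of 20 cell heights. Fitness rewards density
# while penalizing cells that shade the neighbour on one side --
# a crude stand-in for the daylight condition described in the text.
GENOME_LENGTH = 20

def random_organism():
    return [random.randint(0, 5) for _ in range(GENOME_LENGTH)]

def fitness(organism):
    density = sum(organism)
    # A cell casts "shade" on its neighbour when it is taller by 2+.
    shading = sum(max(0, organism[i] - organism[i + 1] - 1)
                  for i in range(GENOME_LENGTH - 1))
    return density - 2 * shading

def mutate(organism):
    # Nudge one randomly chosen cell up or down by one storey.
    child = organism[:]
    i = random.randrange(GENOME_LENGTH)
    child[i] = max(0, child[i] + random.choice([-1, 1]))
    return child

def evolve(generations=200, population_size=30):
    population = [random_organism() for _ in range(population_size)]
    for _ in range(generations):
        # Keep the fitter half, refill with mutated copies of survivors.
        population.sort(key=fitness, reverse=True)
        survivors = population[:population_size // 2]
        children = [mutate(random.choice(survivors))
                    for _ in range(population_size - len(survivors))]
        population = survivors + children
    return max(population, key=fitness)

best = evolve()
print(fitness(best))
```

The architect's role in this sketch is confined to writing `fitness`; the loop itself never sees an aesthetic preference, only the given conditions.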
People in NY
Romanesco broccoli
The edge of chaos
BASIC CONDITIONS

Since we are moving towards the health age, life quality is becoming the most important factor in today’s society. We need to improve living conditions, so providing quality housing will be one of the biggest challenges for contemporary architecture. This means making sure that everyone can find safe, decent, affordable housing within reach of where they work, shop, study, and play. Evolutionary Automaticity (E.A.) must provide affordable housing for everyone. Affordable does not necessarily mean cheap; unfortunately real estate value too often depends on the market. What we can do to limit the problem is to design prefabricated units that will be assembled on site, reducing construction costs and time. Each unit will nevertheless be built following high manufacturing standards and using high-end materials that last and need little maintenance. Standardization and mass production will still allow the unit cost to be reduced. The units must be the right size: big enough for the number of people they are intended for. The layout of the units must be flexible, and space should be used efficiently. Affordable housing should not automatically be built in the least desirable areas of a city or community. It should be within reasonable reach of shopping, public transportation, recreation, and health and human services. In cities, basic shopping should be within walking distance. E.A. will grow in any given area, reaching the highest possible density while keeping high livability standards and reacting to the surrounding environment. In the final aggregation each unit must have at least one open face for each living space. For example, each living room and bedroom will have at least one glazed face. Even in very dense and complex aggregations each unit will be well ventilated and well illuminated.
Solar analysis will make it possible to generate and evolve the organism in such a way that the highest possible number of units has direct sunlight while maintaining the highest possible density.
Healthy living
Robotic manufacturing
It must provide common or private spaces for most of the units, allowing direct or visual connection between neighbours. These spaces will also serve as green areas for relaxation or social activities. Communication will be provided by vertical cores positioned in the construction plot so as to cover the highest number of units with the minimum number of cores. Each core, however, will serve only the nearest units, allowing easy and direct access. These conditions are only the basic ones; in the future, with more computational power and more efficient software, it will be possible to add more variables to the equation, increasing the performance of the building and obtaining more efficient results.
Habitat 67
Mountain Dwelling
AUTOMATICITY
SELF-ORGANISATION MODEL As mentioned in the previous chapters, the final organism must be spontaneous, it should generate from a few simple rules and grow following given conditions. It will follow no aesthetic tendencies, the final form will be only the result of evolutionary optimizations of a self-organized model. “Self-organization is a process where some form of global order or coordination arises out of the local interactions between the components of an initially disordered system. This process is spontaneous: it is not directed or controlled by any agent or subsystem inside or outside of the system; however, the laws followed by the process and its initial conditions may have been chosen or caused by an agent. It is often triggered by random fluctuations that are amplified by positive feedback. The resulting organization is wholly decentralized or distributed over all the components of the system. As such it is typically very robust and able to survive and self-repair substantial damage or perturbations. Self-organization occurs in a variety of physical, chemical, biological, social and cognitive systems. Common examples are crystallization, the emergence of convection patterns in a liquid heated from below, chemical oscillators, the invisible hand of the market, swarming in groups of animals, and the way neural networks learn to recognize complex patterns. The most robust and unambiguous examples of self-organizing systems are from the physics of non-equilibrium processes. Self-organization is also relevant in chemistry, where it has often been taken as being synonymous with self-assembly. The concept of self-organization is central to the description of biological systems, from the subcellular to the ecosystem level. There are also cited examples of “self-organizing” behaviour found in the literature of many other disciplines, both in the natural sciences and the social sciences such as economics or anthropology. 
Self-organization has also been observed in mathematical systems such as cellular automata. Sometimes the notion of self-organization is conflated with
Bird flock

Fish school
that of the related concept of emergence, because “the order from chaos, presented by Self-Organizing models, is often interpreted in terms of emergence”. Properly defined, however, there may be instances of self-organization without emergence and emergence without self-organization, and it is clear from the literature that the phenomena are not the same. The link between emergence and self-organization remains an active research question.
Patterns
PRINCIPLES OF SELF-ORGANIZATION

The original “principle of the self-organizing dynamic system” was formulated by the cybernetician Ashby in 1947. It states that any deterministic dynamic system will automatically evolve towards a state of equilibrium (or, in more modern terminology, an attractor). In doing so it leaves behind all the non-attractor states of the attractor’s basin, and thus selects that attractor out of all others. Once there, the further evolution of the system is constrained to remain in the attractor. This constraint on the system as a whole implies a form of mutual dependency or coordination between its subsystems or components. In Ashby’s terms, each subsystem has adapted to the environment formed by all other subsystems. The principle of “order from noise” was formulated by the cybernetician Heinz von Foerster in 1960. It notes that self-organization is facilitated by random perturbations (“noise”) that let the system explore a variety of states in its state space. This increases the chance that the system will arrive in the basin of a “strong” or “deep” attractor, from which it then quickly enters the attractor itself. A similar principle was formulated by the thermodynamicist Ilya Prigogine as “order through fluctuations” or “order out of chaos”. It is applied in the method of simulated annealing, used in problem solving and machine learning.
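The “order from noise” principle just described is exactly the intuition behind simulated annealing: early on, random perturbations let the search jump out of shallow basins; as the “temperature” drops, it settles into a deep one. A minimal Python sketch, in which the cost landscape and every parameter value are invented purely for illustration:

```python
import math
import random

def simulated_annealing(cost, start, steps=5000, t_start=5.0, t_end=0.01):
    """Order through fluctuations: accept every improvement, and accept
    worsening moves with a probability that shrinks as the temperature
    cools, so noise drives early exploration and order emerges late."""
    x = start
    best = x
    for step in range(steps):
        # Exponential cooling schedule from t_start down to t_end.
        t = t_start * (t_end / t_start) ** (step / steps)
        candidate = x + random.uniform(-1, 1)
        delta = cost(candidate) - cost(x)
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = candidate
        if cost(x) < cost(best):
            best = x
    return best

# A bumpy one-dimensional landscape: many local basins, one deep one.
bumpy = lambda x: x * x + 3 * math.sin(5 * x)

result = simulated_annealing(bumpy, start=8.0)
print(result, bumpy(result))
```

A plain downhill search started at the same point would stop in the first local basin it met; the noise is what lets the system find a deeper attractor.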
Patterns in nature
DEVELOPING VIEWS

Other views of self-organization in physical systems interpret it as a strictly accumulative construction process, commonly displaying an “S”-curve history of development. As discussed somewhat differently by different researchers, local complex systems for exploiting energy gradients evolve from seeds of organization, through a succession of natural starting and ending phases, inverting their directions of development. The accumulation of working processes which their exploratory parts construct as they exploit their gradient becomes the “learning”, “organization” or “design” of the system as a physical artifact, such as for an ecology or an economy. The mechanism of self-organization is the interaction between the elements and the constraints, which leads to constraint minimization. This is consistent with Gauss’s principle of least constraint. More elements minimize the constraints faster; another aspect of the mechanism is quantity accumulation. As a result, the paths of the elements are straightened, which is consistent with Hertz’s principle of least curvature. The state of a system with the least average sum of actions of its elements is defined as its attractor. In open systems, where there is constant inflow and outflow of energy and elements, this final state is never reached, but the system always tends toward it. This method can help describe, quantify, manage, design and predict the future behavior of complex systems, so as to achieve the highest rates of self-organization and improve their quality, which is the numerical value of their organization. It can be applied to complex systems in physics, chemistry, biology, ecology, economics, cities, network theory and others, where they are present.

Patterns in nature
Water drops on a spider web
SELF-ORGANIZATION IN BIOLOGY

According to Scott Camazine: “In biological systems self-organization is a process in which pattern at the global level of a system emerges solely from numerous interactions among the lower-level components of the system. Moreover, the rules specifying interactions among the system’s
Snake skin
components are executed using only local information, without reference to the global pattern.” The following is an incomplete list of the diverse phenomena which have been described as self-organizing in biology. • Spontaneous folding of proteins and other biomacromolecules. • Formation of lipid bilayer membranes. • Homeostasis (the self-maintaining nature of systems from the cell to the whole organism). • Pattern formation and morphogenesis, or how the living organism develops and grows. See also embryology. • The coordination of human movement, e.g. seminal studies of bimanual coordination by Kelso. • The creation of structures by social animals, such as social insects (bees, ants, termites), and many mammals. • Flocking behaviour (such as the formation of flocks by birds, schools of fish, etc.). • The origin of life itself from self-organizing chemical systems, in the theories of hypercycles and autocatalytic networks. • The organization of Earth’s biosphere in a way that is broadly conducive to life (according to the controversial Gaia hypothesis).
Honeycomb
SELF-ORGANIZATION IN MATHEMATICS AND COMPUTER SCIENCE

As mentioned above, phenomena from mathematics and computer science such as cellular automata, random graphs, and some instances of evolutionary computation and artificial life exhibit features of self-organization. In swarm robotics, self-organization is used to produce emergent behavior. In particular, the theory of random graphs has been used as a justification for self-organization as a general principle of complex systems. In the field of multi-agent systems, understanding how to engineer systems that are capable of presenting self-organized behavior is a very active research area.” 1
Morphogenesis
Notes: 1. Wikipedia
CELLULAR AUTOMATON

How can we create a system that mimics self-organization in biology through computers? It has been shown that some biological processes occur in, or can be simulated by, cellular automata (CA). “A cellular automaton is a discrete model studied in computability theory, mathematics, physics, complexity science, theoretical biology and microstructure modeling. Cellular automata are also called cellular spaces, tessellation automata, homogeneous structures, cellular structures, tessellation structures, and iterative arrays. A cellular automaton consists of a regular grid of cells, each in one of a finite number of states, such as on and off (in contrast to a coupled map lattice). The grid can be in any finite number of dimensions. For each cell, a set of cells called its neighborhood (usually including the cell itself) is defined relative to the specified cell. An initial state (time t=0) is selected by assigning a state to each cell. A new generation is created (advancing t by 1) according to some fixed rule (generally, a mathematical function) that determines the new state of each cell in terms of the current state of the cell and the states of the cells in its neighborhood. Typically, the rule for updating the state of cells is the same for each cell, does not change over time, and is applied to the whole grid simultaneously. The concept was originally discovered in the 1940s by Stanislaw Ulam and John von Neumann while they were contemporaries at Los Alamos National Laboratory. While studied by some throughout the 1950s and 1960s, it was not until the 1970s and Conway’s Game of Life, a two-dimensional cellular automaton, that interest in the subject expanded beyond academia. In the 1980s, Stephen Wolfram engaged in a systematic study of one-dimensional cellular automata, or what he calls elementary cellular automata; his research assistant Matthew Cook showed that one of these rules is Turing-complete.
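The update scheme just described, one fixed local rule applied to every cell simultaneously, fits in a few lines of code. Below is a minimal Python sketch of a one-dimensional (elementary) cellular automaton, here running Rule 110, the rule Cook showed to be Turing-complete; the wrap-around boundary and the row width are arbitrary choices made for the illustration.

```python
def step(cells, rule=110):
    """Advance an elementary (one-dimensional, two-state) cellular
    automaton by one generation. `rule` is Wolfram's rule number:
    bit k of the number is the next state of a cell whose
    left/centre/right neighbourhood encodes k in binary."""
    n = len(cells)
    out = []
    for i in range(n):
        # Periodic boundary: the row wraps around at the edges.
        left, centre, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        k = (left << 2) | (centre << 1) | right
        out.append((rule >> k) & 1)
    return out

# Start from a single live cell and print a few generations.
row = [0] * 15
row[7] = 1
for _ in range(5):
    print("".join(".#"[c] for c in row))
    row = step(row)
```

Nothing in `step` knows about the global pattern; every cell reads only its two neighbours, yet the generations build the characteristic complex structures of Rule 110.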
Wolfram published A New Kind of Science in 2002, claiming that cellular automata have applications in many fields of science. These include computer processors
Different typologies of CA neighbourhoods
A simple Cellular Automaton
and cryptography. The primary classifications of cellular automata, as outlined by Wolfram, are numbered one to four. They are, in order, automata in which patterns generally stabilize into homogeneity, automata in which patterns evolve into mostly stable or oscillating structures, automata in which patterns evolve in a seemingly chaotic fashion, and automata in which patterns become extremely complex and may last for a long time, with stable local structures. This last class is thought to be computationally universal, or capable of simulating a Turing machine. Special types of cellular automata are those which are reversible, in which only a single configuration leads directly to a subsequent one, and totalistic, in which the future value of individual cells depends on the total value of a group of neighboring cells. Cellular automata can simulate a variety of real-world systems, including biological and chemical ones. There has been speculation that cellular automata may be able to model reality itself.

One way to simulate a two-dimensional cellular automaton is with an infinite sheet of graph paper along with a set of rules for the cells to follow. Each square is called a “cell” and each cell has two possible states, black and white. The neighborhood of a cell is the nearby, usually adjacent, cells. The two most common types of neighborhoods are the von Neumann neighborhood and the Moore neighborhood. The former, named after the founding cellular automaton theorist, consists of the four orthogonally adjacent cells. The latter includes the von Neumann neighborhood as well as the four remaining cells surrounding the cell whose state is to be calculated. For such a cell and its Moore neighborhood, there are 512 (= 2^9) possible patterns. For each of the 512 possible patterns, the rule table would state whether the center cell will be black or white on the next time interval. Conway’s Game of Life is a popular version of this model.
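The two neighborhood types can be written out directly. The helpers below are illustrative Python, with hypothetical names, and simply enumerate the cells described in the quote:

```python
def von_neumann_neighborhood(x, y):
    """The four orthogonally adjacent cells."""
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

def moore_neighborhood(x, y):
    """The von Neumann neighborhood plus the four diagonal cells."""
    return [(x + dx, y + dy)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)]

# A Moore neighborhood has 8 cells; together with the center cell's own state
# that gives 2**9 = 512 possible black/white patterns, as stated above.
assert len(moore_neighborhood(0, 0)) == 8
assert len(von_neumann_neighborhood(0, 0)) == 4
assert 2 ** 9 == 512
```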
Another common neighborhood type is the extended von Neumann neighborhood, which includes the two closest cells in each orthogonal direction, for a total of eight. The general equation
A seashell with CA-like patterns
Evolution of various cellular automata from disordered initial states.
for such a system of rules is k^(k^s), where k is the number of possible states for a cell, and s is the number of neighboring cells (including the cell to be calculated itself) used to determine the cell’s next state. Thus, in the two-dimensional system with a Moore neighborhood, the total number of automata possible would be 2^(2^9), or about 1.34×10^154.

It is usually assumed that every cell in the universe starts in the same state, except for a finite number of cells in other states; the assignment of state values is called a configuration. More generally, it is sometimes assumed that the universe starts out covered with a periodic pattern, and only a finite number of cells violate that pattern. The latter assumption is common in one-dimensional cellular automata.

Cellular automata are often simulated on a finite grid rather than an infinite one. In two dimensions, the universe would be a rectangle instead of an infinite plane. The obvious problem with finite grids is how to handle the cells on the edges. How they are handled will affect the values of all the cells in the grid. One possible method is to allow the values in those cells to remain constant. Another method is to define neighborhoods differently for these cells. One could say that they have fewer neighbors, but then one would also have to define new rules for the cells located on the edges. These cells are usually handled with a toroidal arrangement: when one goes off the top, one comes in at the corresponding position on the bottom, and when one goes off the left, one comes in on the right. (This essentially simulates an infinite periodic tiling, and in the field of partial differential equations is sometimes referred to as periodic boundary conditions.)”1
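In code, the toroidal arrangement amounts to nothing more than indexing the finite grid modulo its width and height, so going off one edge re-enters on the opposite edge. A minimal sketch (illustrative Python, hypothetical helper name):

```python
def cell_at(grid, x, y):
    """Read a cell with periodic (toroidal) boundary conditions."""
    h, w = len(grid), len(grid[0])
    return grid[y % h][x % w]

grid = [[1, 0],
        [0, 0]]

# Going off the left edge comes back in on the right:
assert cell_at(grid, -1, 0) == grid[0][1]
# Going off the top comes back in at the bottom:
assert cell_at(grid, 0, -1) == grid[1][0]
```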
Pattern generated with cellular automata
Pattern generated with rule 110
CONWAY’S GAME OF LIFE

“The Game of Life, also known simply as Life, is a cellular automaton devised by the British mathematician John Horton Conway in 1970. The “game” is a zero-player game, meaning that its evolution is determined by its initial state, requiring no further input. One interacts with the Game of Life by creating an initial configuration and observing how it evolves.
RULES

The universe of the Game of Life is an infinite two-dimensional orthogonal grid of square cells, each of which is in one of two possible states, alive or dead. Every cell interacts with its eight neighbours, which are the cells that are horizontally, vertically, or diagonally adjacent. At each step in time, the following transitions occur:
• Any live cell with fewer than two live neighbours dies, as if caused by under-population.
• Any live cell with two or three live neighbours lives on to the next generation.
• Any live cell with more than three live neighbours dies, as if by overcrowding.
• Any dead cell with exactly three live neighbours becomes a live cell, as if by reproduction.
The initial pattern constitutes the seed of the system. The first generation is created by applying the above rules simultaneously to every cell in the seed—births and deaths occur simultaneously, and the discrete moment at which this happens is sometimes called a tick (in other words, each generation is a pure function of the preceding one). The rules continue to be applied repeatedly to create further generations.
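The four transitions can be sketched as a single update step. The Python below is an illustrative version, not code from the thesis; it uses a sparse set of live-cell coordinates, a common way to represent the infinite grid:

```python
from collections import Counter

def life_step(live):
    """One tick of Conway's Game of Life; `live` is a set of (x, y) cells."""
    # Count how many live neighbors every candidate cell has.
    counts = Counter((x + dx, y + dy)
                     for x, y in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 live neighbors; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The "blinker" oscillates between a horizontal and a vertical bar:
blinker = {(0, 1), (1, 1), (2, 1)}
assert life_step(blinker) == {(1, 0), (1, 1), (1, 2)}
assert life_step(life_step(blinker)) == blinker
```

Because births and deaths are computed from the old generation only, the whole grid really is updated simultaneously, exactly as the rules require.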
John Horton Conway
Game of life glider gun
ORIGINS

Conway was interested in a problem presented in the 1940s by mathematician John von Neumann, who attempted to find a hypothetical machine that could build copies of itself and succeeded when he found a mathematical model for such a
Game of life infinite growth
machine with very complicated rules on a rectangular grid. The Game of Life emerged as Conway’s successful attempt to drastically simplify von Neumann’s ideas. The game made its first public appearance in the October 1970 issue of Scientific American, in Martin Gardner’s “Mathematical Games” column. From a theoretical point of view, it is interesting because it has the power of a universal Turing machine: that is, anything that can be computed algorithmically can be computed within Conway’s Game of Life.

Gardner wrote: “The game made Conway instantly famous, but it also opened up a whole new field of mathematical research, the field of cellular automata ... Because of Life’s analogies with the rise, fall and alterations of a society of living organisms, it belongs to a growing class of what are called “simulation games” (games that resemble real life processes).”

Ever since its publication, Conway’s Game of Life has attracted much interest, because of the surprising ways in which the patterns can evolve. Life provides an example of emergence and self-organization. It is interesting for computer scientists, physicists, biologists, biochemists, economists, mathematicians, philosophers, generative scientists and others to observe the way that complex patterns can emerge from the implementation of very simple rules. The game can also serve as a didactic analogy, used to convey the somewhat counter-intuitive notion that “design” and “organization” can spontaneously emerge in the absence of a designer. For example, philosopher and cognitive scientist Daniel Dennett has used the analogue of Conway’s Life “universe” extensively to illustrate the possible evolution of complex philosophical constructs, such as consciousness and free will, from the relatively simple set of deterministic physical laws governing our own universe.
The popularity of Conway’s Game of Life was helped by its coming into being just in time for a new generation of inexpensive minicomputers which were being released into the market. The game could be run for hours on these machines, which would otherwise have remained unused at night. In this respect, it foreshadowed the later popularity of computer-generated fractals. For many, Life was simply a programming challenge: a fun way to use otherwise wasted CPU cycles. For some, however, Life had more philosophical connotations. It developed a cult following through the 1970s and beyond; current developments have gone so far as to create theoretic emulations of computer systems within the confines of a Life board.

Conway chose his rules carefully, after considerable experimentation, to meet these criteria:
• There should be no explosive growth.
• There should exist small initial patterns with chaotic, unpredictable outcomes.
• There should be potential for von Neumann universal constructors.
• The rules should be as simple as possible, whilst adhering to the above constraints.

Pattern generated with Game of Life rules

A pattern of ridges and troughs is formed by a film of varnish that has wrinkled and begun to lift off the substrate.
Game of Life generations in Processing
VARIATIONS ON LIFE

Since Life’s inception, new similar cellular automata have been developed. The standard Game of Life, in which a cell is “born” if it has exactly 3 neighbours, stays alive if it has 2 or 3 living neighbours, and dies otherwise, is symbolised as “B3/S23”. The first number, or list of numbers, is what is required for a dead cell to be born. The second set is the requirement for a live cell to survive to the next generation. Hence “B6/S16” means “a cell is born if there are 6 neighbours, and lives on if there are either 1 or 6 neighbours”. Cellular automata on a two-dimensional grid that can be described in this way are known as Life-like cellular automata.

Another common Life-like automaton, HighLife, is described by the rule B36/S23, because having 6 neighbours, in addition to the original game’s B3/S23 rule, causes a birth. HighLife is best known for its frequently occurring replicators. Additional Life-like cellular automata exist, although the vast
Game of space installation
majority of them produce universes that are either too chaotic or too desolate to be of interest.

A sample of a 48-step oscillator along with a 2-step oscillator and a 4-step oscillator from a 2-D hexagonal Game of Life (rule H:B2/S34)

Some variations on Life modify the geometry of the universe as well as the rule. The above variations can be thought of as 2-D square, because the world is two-dimensional and laid out in a square grid. 1-D square variations (known as elementary cellular automata) and 3-D square variations have been developed, as have 2-D hexagonal and 2-D triangular variations. A variant using non-periodic tile grids has also been made.

Conway’s rules may also be generalized such that instead of two states (live and dead) there are three or more. State transitions are then determined either by a weighting system or by a table specifying separate transition rules for each state; for example, Mirek’s Cellebration’s multi-coloured “Rules Table” and “Weighted Life” rule families each include sample rules equivalent to Conway’s Life.

Patterns relating to fractals and fractal systems may also be observed in certain Life-like variations. For example, the automaton B1/S12 generates four very close approximations to the Sierpiński triangle when applied to a single live cell. The Sierpiński triangle can also be observed in Conway’s Game of Life by examining the long-term growth of a long single-cell-thick line of live cells, as well as in HighLife, Seeds (B2/S), and Wolfram’s Rule 90.

Immigration is a variation that is very similar to Conway’s Game of Life, except that there are two ON states (often expressed as two different colours). Whenever a new cell is born, it takes on the ON state that is the majority in the three cells that gave it birth. This feature can be used to examine interactions between spaceships and other “objects” within the game. Another similar variation, called QuadLife, involves four different ON states.
When a new cell is born from three different ON neighbours, it takes on the fourth value, and
otherwise, like Immigration, it takes the majority value. Except for the variation among ON cells, both of these variations act identically to Life.

A CA is Life-like (in the sense of being similar to Conway’s Game of Life) if it meets the following criteria:
• The array of cells of the automaton has two dimensions.
• Each cell of the automaton has two states (conventionally referred to as “alive” and “dead”, or alternatively “on” and “off”).
• The neighborhood of each cell is the Moore neighborhood; it consists of the eight adjacent cells to the one under consideration and (possibly) the cell itself.
• In each time step of the automaton, the new state of a cell can be expressed as a function of the number of adjacent cells that are in the alive state and of the cell’s own state; that is, the rule is outer totalistic (sometimes called semitotalistic).
There are 2^18 = 262,144 possible Life-like rules, only a small fraction of which have been studied in any detail. In the descriptions below, all rules are specified in Golly/RLE format.”1

The Life-like model has been chosen as the generative model for Evolutionary Automaticity since it has simple rules and it is possible to generate complex organisms from simple starting configurations.

One major change has been made to the Life-like CA: it is natively a two-dimensional formulation, and the algorithm has been extended to make the CA three-dimensional. In a two-dimensional CA an initial state (time t=0) is selected by assigning a state for each cell. A new generation is created (advancing t by 1) that determines the new state of each cell in terms of the current state of the cell and the states of the cells in its neighborhood in the two-dimensional grid. To replicate the automaton in three-dimensional space it is necessary to move each generation along the Z axis. This could also be defined as a 3D structure obtained using the memory of the 2D Life-like CA. Each generation should be moved in the Z positive direction
by the amount of the grid size. Consequently the cells will assume a three-dimensional shape: for instance, if the grid is square the cells will be cubic, but it is possible to set different elementary shapes, such as parallelepipeds, depending on the grid shape and on the clear height needed for the cells. In the case of EA we will take the simplest scenario, assuming a square grid of 7 m × 7 m; the height will be 3.5 m, assuming a 0.4 m slab, therefore each generation will be moved in the vertical direction by 3.5 m (t × 3.5).
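The two steps just described — run a Life-like rule in 2-D, then lift generation t to height z = t × 3.5 — can be sketched as follows. This Python is an illustrative reconstruction, not the Grasshopper definition used in the thesis; `parse_rule` and `extrude` are hypothetical helper names, and only the 3.5 m floor height is taken from the text:

```python
from collections import Counter

def parse_rule(rulestring):
    """Parse a Life-like rulestring, e.g. 'B3/S23' -> ({3}, {2, 3})."""
    birth, survive = rulestring.split("/")
    return ({int(c) for c in birth[1:]}, {int(c) for c in survive[1:]})

def step(live, birth, survive):
    """One outer-totalistic step on a sparse set of live (x, y) cells."""
    counts = Counter((x + dx, y + dy) for x, y in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {c for c, n in counts.items()
            if n in birth or (n in survive and c in live)}

def extrude(seed, rulestring, generations, floor_height=3.5):
    """Stack the 2-D history along Z: generation t sits at z = t * floor_height."""
    birth, survive = parse_rule(rulestring)
    live, cells = set(seed), []
    for t in range(generations):
        cells += [(x, y, t * floor_height) for x, y in live]
        live = step(live, birth, survive)
    return cells
```

The returned (x, y, z) triples are the centers of the box-shaped cells; the 2-D automaton’s memory literally becomes the floors of the 3-D organism.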
Game of Life in 3 dimensions
Notes: 1. Wikipedia
Game of Life in 3 dimensions
EVOLUTION
EVOLUTIONARY COMPUTATION

One of the most critical aspects of a self-organizing cellular automata model is that it will often produce undesired results. It is in fact very difficult to impose the desired conditions (see Basic Conditions) on a self-organizing model and to control its growth. Therefore it is necessary to implement some evolutionary strategies in order to change the starting configuration until the final organism matches the desired conditions.

“In computer science, evolutionary computation is a subfield of artificial intelligence (more particularly computational intelligence) that involves continuous optimization and combinatorial optimization problems. Its algorithms can be considered global optimization methods with a metaheuristic or stochastic optimization character and are mostly applied for black box problems (no derivatives known), often in the context of expensive optimization. Evolutionary computation uses iterative progress, such as growth or development in a population. This population is then selected in a guided random search using parallel processing to achieve the desired end. Such processes are often inspired by biological mechanisms of evolution.”1
Computational intelligence
Notes: 1. Wikipedia
Evolution scheme
EVOLUTIONARY ALGORITHM

“In artificial intelligence, an evolutionary algorithm is a subset of evolutionary computation, a generic population-based metaheuristic optimization algorithm. An evolutionary algorithm uses mechanisms inspired by biological evolution, such as reproduction, mutation, recombination, and selection. Candidate solutions to the optimization problem play the role of individuals in a population, and the fitness function determines the quality of the solutions (see also cost function). Evolution of the population then takes place after the repeated application of the above operators.

Artificial evolution (AE) describes a process involving individual evolutionary algorithms; evolutionary algorithms are individual components that participate in an AE. Evolutionary algorithms often perform well approximating solutions to all types of problems because they ideally do not make any assumption about the underlying fitness landscape; this generality is shown by successes in fields as diverse as engineering, art, biology, economics, marketing, genetics, operations research, robotics, social sciences, physics, politics and chemistry. Techniques from evolutionary algorithms applied to the modeling of biological evolution are generally limited to explorations of microevolutionary processes. The computer simulations Tierra and Avida attempt to model macroevolutionary dynamics.

In most real applications of evolutionary algorithms, computational complexity is a prohibiting factor. In fact, this computational complexity is due to fitness function evaluation. Fitness approximation is one of the solutions to overcome this difficulty. However, a seemingly simple evolutionary algorithm can often solve complex problems; therefore, there may be no direct link between algorithm complexity and problem complexity.
A simple evolutionary algorithm
Simple evolutionary problem
IMPLEMENTATION OF BIOLOGICAL PROCESSES
• Generate the initial population of individuals randomly (first generation).
• Evaluate the fitness of each individual in that population.
• Repeat on this generation until termination (time limit, sufficient fitness achieved, etc.):
1. Select the best-fit individuals for reproduction (parents).
2. Breed new individuals through crossover and mutation operations to give birth to offspring.
3. Evaluate the individual fitness of the new individuals.
4. Replace the least-fit population with new individuals.
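The loop above can be sketched as a minimal genetic algorithm. The Python below is purely illustrative: the toy fitness function (count the 1-bits in a binary genome), the population size and the operator choices are assumptions for the sake of the example, not values from the thesis:

```python
import random

def genetic_algorithm(fitness, genome_len=20, pop_size=30, generations=60):
    random.seed(1)  # fixed seed so the run is repeatable
    # 1. Generate the initial population randomly (first generation).
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # 2. Evaluate fitness and select the best-fit individuals as parents.
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        # 3. Breed offspring through crossover and mutation.
        offspring = []
        while len(offspring) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, genome_len)   # one-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(genome_len)        # point mutation
            child[i] ^= random.randint(0, 1)
            offspring.append(child)
        # 4. Replace the least-fit half of the population with the offspring.
        pop = parents + offspring
    return max(pop, key=fitness)

best = genetic_algorithm(sum)   # fitness = number of 1s in the genome
```

Because the parents are carried over unchanged (elitism), the best fitness in the population can never decrease from one generation to the next.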
The genetic algorithm is the most popular type of evolutionary algorithm. One seeks the solution of a problem in the form of strings of numbers (traditionally binary, although the best representations are usually those that reflect something about the problem being solved), by applying operators such as recombination and mutation (sometimes one, sometimes both). This type of evolutionary algorithm is often used in optimization problems.”1
Biological Chromosomes were the incentive for Genetic Algorithms
Genetic Algorithms working scheme
Notes: 1. Wikipedia
THE SOFTWARE

RHINOCEROS

Rhino is a stand-alone, commercial NURBS-based 3-D modeling software package, developed by Robert McNeel & Associates. The software is commonly used for industrial design, architecture, marine design, jewelry design, automotive design, CAD/CAM, rapid prototyping, reverse engineering and product design, as well as in the multimedia and graphic design industries. Rhino specializes in free-form non-uniform rational B-spline (NURBS) modeling. Plug-ins developed by McNeel include Flamingo (raytrace rendering), Penguin (non-photorealistic rendering), Bongo, and Brazil (advanced rendering). Over 100 third-party plug-ins are also available. There are also rendering plug-ins for Maxwell Render, V-Ray, Thea and many other engines. Additional plug-ins for CAM and CNC milling are available as well, allowing for toolpath generation directly in Rhino. Like many modeling applications, Rhino also features a scripting language, based on the Visual Basic language, and an SDK that allows reading and writing Rhino files directly. Rhinoceros 3D gained its popularity in architectural design in part because of the Grasshopper plug-in for computational design. Many new avant-garde architects are using parametric modeling tools like Grasshopper.

GRASSHOPPER

Grasshopper is a visual programming language developed by David Rutten at Robert McNeel & Associates. Grasshopper runs within the Rhinoceros 3D CAD application. Programs are created by dragging components onto a canvas. The outputs of these components are then connected to the inputs of subsequent components. Grasshopper is used mainly to build generative algorithms. Many of Grasshopper’s components create 3D geometry. Programs may also contain other types of algorithms including numeric, textual, audio-visual and haptic applications.
“Popular among students and professionals, McNeel Associates’ Rhino modelling tool is endemic in the architectural design world. The new Grasshopper environment provides an intuitive way to explore designs without having to learn to script.” AEC Magazine

ECOTECT ANALYSIS

Ecotect Analysis is environmental analysis software that allows designers to simulate building performance from the earliest stages of conceptual design. It combines a wide array of detailed analysis functions with a highly visual and interactive display that presents analytical results directly within the context of the building model, enabling it to communicate complex concepts and extensive datasets in surprisingly intuitive and effective ways.

RABBIT

Rabbit is a plug-in for Grasshopper that simulates biological and physical processes. It provides an easy way to explore natural phenomena such as pattern formation, self-organization, emergence and non-linearity.

GECO

Geco is a set of components which establish a live link between Rhino/Grasshopper and Ecotect to export, evaluate and import geometries.
GALAPAGOS

Galapagos is an evolutionary solver component for Grasshopper. The following lines come from the blog of David Rutten (the inventor of Grasshopper and Galapagos). He explains the principles of evolutionary algorithms applied to problem solving using Galapagos.

“Since we are not living in the best of all possible worlds there is often no such thing as the perfect solution. Every approach has drawbacks and limitations. In the case of Evolutionary Algorithms these are luckily well known and easily understood drawbacks, even though they are not trivial. Indeed, they may well be prohibitive for many a particular problem.

Firstly; Evolutionary Algorithms are slow. Dead slow. It is not unheard of that a single process may run for days or even weeks. Especially complicated set-ups that require a long time in order to solve a single iteration will quickly run out of hand. A light/shadow or acoustic computation for example may easily take a minute per iteration. If we assume we’ll need at least 50 generations of 50 individuals each (which is almost certainly an underestimate unless the problem has a very obvious solution) we’re already looking at a two-day runtime.

Secondly, Evolutionary Algorithms do not guarantee a solution. Unless a predefined ‘good-enough’ value is specified, the process will tend to run on indefinitely, never reaching The Answer, or, having reached it, not recognizing it for what it is.

All is not bleak and dismal however, Evolutionary Algorithms have strong benefits as well, some of them rather unique amongst the plethora of computational methods. They are remarkably flexible for example, able to tackle a wide variety of problems. There are classes of problems which are by definition beyond the reach of even the best solver implementation and other classes that are very difficult to solve, but these are typically rare in the province of the human me-
so-world. By and large the problems we encounter on a daily basis fall into the ‘evolutionary solvable’ category.

Evolutionary Algorithms are also quite forgiving. They will happily chew on problems that have been under- or over-constrained or otherwise poorly formulated. Furthermore, because the run-time process is progressive, intermediate answers can be harvested at practically any time. Unlike many dedicated algorithms, Evolutionary Solvers spew forth a never ending stream of answers, where newer answers are generally of a higher quality than older answers. So even a prematurely aborted run will yield something which could be called a result. It might not be a very good result, but it will be a result of sorts.

Finally, Evolutionary Solvers allow -in principle- for a high degree of interaction with the user. This too is a fairly unique feature, especially given the broad range of possible applications. The run-time process is highly transparent and browsable, and there exists a lot of opportunity for a dialogue between algorithm and human. The solver can be coached across barriers with the aid of human intelligence, or it can be goaded into exploring sub-optimal branches and superficially dead-ends.

THE PROCESS

In this section I shall briefly outline the process of an Evolutionary Solver run. It is a highly simplified version of the remainder of the blog post, and I’ll skip over many interesting and even important details. I’ll show the process as a series of image frames, where each frame shows the state of the ‘population’ at a given moment in time. Before I can start however, I need to explain what the image below means.

What you see here is the Fitness Landscape of a particular model. The model contains two variables, meaning two values which are allowed to change. In Evolutionary Computing we refer to variables as genes. As we change Gene A, the state of the model changes and it either becomes better or worse (depending on what we’re looking for).
So as Gene A
changes, the fitness of the entire model goes up or down. But for every value of A, we can also vary Gene B, resulting in better or worse combinations of A and B. Every combination of A and B results in a particular fitness, and this fitness is expressed as the height of the Fitness Landscape. It is the job of the solver to find the highest peak in this landscape.
Of course a lot of problems are defined by not just two but many genes, in which case we can no longer speak of a ‘landscape’ in the strict sense. A model with 12 genes would be a 12-dimensional fitness volume deformed in 13 dimensions instead of a two-dimensional fitness plane deformed in 3 dimensions. As this is impossible to visualize I shall only use one and two-dimensional models, but note that when we speak of a “landscape”, it might mean something terribly more complex than the above image shows. As the solver starts it has no idea about the actual shape of the fitness landscape. Indeed, if we knew the shape we wouldn’t need to bother with all this messy evolutionary stuff in the first place. So the initial step of the solver is to populate the landscape (or “model-space”) with a random collection of individuals (or “genomes”). A genome is nothing more than a specific value for each and every gene. In the
above case, a genome could for example be {A=0.2 B=0.5}. The solver will then evaluate the fitness for each and every one of these random genomes, giving us the following distribution:
Once we know how fit every genome is (i.e., the elevation of the red dots), we can make a hierarchy from fittest to lamest. We are looking for high-ground in the landscape and it is a reasonable assumption that the higher genomes are closer to potential high-ground than the low ones. Therefore we can kill off the worst performing ones and focus on the remainder:
It is not good enough to simply pick the best performing genome from the initial population and call it quits. Since all the genomes in Generation 0 were picked at random, it is actually quite unlikely that any of them will have hit the jack-pot. What we need to do is breed the best performing genomes in Generation 0 to create Generation 1. When we breed two genomes, their offspring will end up somewhere in the intermediate model-space, thus exploring fresh ground: We now have a new population, which is no longer completely random and which is already starting to cluster around the
three fitness ‘peaks’. All we have to do is repeat the above steps (kill off the worst performing genomes, breed the best-performing genomes) until we have reached the highest peak.
In order to perform this process, an Evolutionary Solver requires five interlocking parts, which I’ll discuss in something resembling detail. We could call this the anatomy of the Solver.
• Fitness Function
• Selection Mechanism
• Coupling Algorithm
• Coalescence Algorithm
• Mutation Factory

FITNESS FUNCTIONS

In biological evolution, the quality known as “Fitness” is actually something of a stumbling block. Usually it is very difficult to say exactly what it means to be fit. It certainly has little or nothing to do with being the strongest, or the fastest, or the most vicious. The reason there are no flying dogs isn’t that evolution hasn’t gotten around to making any yet, it is that the dog lifestyle is supremely incompatible with flying and the sacrifices required to equip a dog with flight would certainly detract more from the overall fitness than flight would add to
it. Fitness is the result of a million conflicting forces. Evolutionary Fitness is the ultimate compromise.
A fit individual is on average able to produce more offspring than an unfit one, so we could say that fitness equals the number of genetic children. A better measure yet would be to count the number of grand-children. And a better measure yet would be to count the allele frequency in the gene-pool of the genes that made up the individual in question. But these are all rather ad-hoc definitions that cannot be measured on the spot. At least in Evolutionary Computation, fitness is a very easy concept. Fitness is whatever we want it to be. We are trying to solve a specific problem, and therefore we know what it means to be fit. If for example we are seeking to position a shape so that it may be milled with minimum material waste, there is a very strict fitness function that leaves no room for argument. Let’s have a look at the fitness landscape again and let’s imagine it represents a model that seeks to encase an object in a minimum volume bounding-box. A minimum bounding-box is the smallest orthogonal box that completely contains any given shape. In the image below, the green shape is encased by two bounding boxes. B has a smaller area than A and is therefore fitter.
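A fitness function of this kind is easy to state in code. The sketch below is an illustrative Python reduction of the bounding-box problem to a single rotation gene (one angle, a 2-D shape), not Galapagos code; `bounding_box_area` is a hypothetical helper name:

```python
import math

def bounding_box_area(points, angle):
    """Area of the axis-aligned bounding box of `points` rotated by `angle`."""
    c, s = math.cos(angle), math.sin(angle)
    rotated = [(x * c - y * s, x * s + y * c) for x, y in points]
    xs = [p[0] for p in rotated]
    ys = [p[1] for p in rotated]
    return (max(xs) - min(xs)) * (max(ys) - min(ys))

def fitness(points, angle):
    # The solver maximizes fitness, so a smaller bounding box must score
    # higher: invert the area.
    return -bounding_box_area(points, angle)

# A unit square rotated 45 degrees needs a bounding box of twice the area,
# so the unrotated orientation is fitter:
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
assert fitness(square, 0.0) > fitness(square, math.pi / 4)
```

There is no room for argument here: for every value of the gene the function returns one number, and that number is the elevation of the fitness landscape at that point.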
When we need to mill or 3D-print a shape, it is often a good idea to rotate it until it requires the least amount of material to be used during manufacturing. For a real minimum bounding-box we need at least three rotation axes, but since that will not allow me to display the real fitness landscape, we will restrict ourselves to rotation around the world X and Y axes. So, Gene A will represent the rotation around the X axis and Gene B will represent rotation around the Y axis. There is no need to allow for rotation higher than 360 degrees, so both genes have a limited working domain. (In fact, since we are talking about orthogonal boxes, even a 0-90 degree domain would suffice.) Behold rotation around a single axis: Every individual tries to maximize its own fitness, as high fitness is rewarded by the solver. And the steepest uphill climb is the fastest way towards high fitness. So if the black sphere represents the location of the ancestral genome, the orange track represents the pathway of its most successful offspring. We can repeat this exercise for a large amount of sample points, which will tell us something about how the Solver and the Fitness Landscape interact:
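The minimum bounding-box fitness function described above can be sketched in a few lines. This is a minimal sketch, not Galapagos code: the four-point sample shape is hypothetical, and only the two rotation genes A and B are modelled.

```python
import math

# Hypothetical sample shape: vertices of an irregular tetrahedron.
SHAPE = [(0.0, 0.0, 0.0), (4.0, 0.0, 1.0), (1.0, 3.0, 0.5), (2.0, 1.0, 2.5)]

def rotate_xy(point, ax, ay):
    """Rotate a point around the world X axis by ax and the Y axis by ay (radians)."""
    x, y, z = point
    # Rotation around X acts in the y/z plane.
    y, z = y * math.cos(ax) - z * math.sin(ax), y * math.sin(ax) + z * math.cos(ax)
    # Rotation around Y acts in the x/z plane.
    x, z = x * math.cos(ay) + z * math.sin(ay), -x * math.sin(ay) + z * math.cos(ay)
    return x, y, z

def bounding_box_volume(points, ax, ay):
    """Volume of the world-aligned box containing the rotated shape.
    This is the fitness value; smaller is fitter."""
    rotated = [rotate_xy(p, ax, ay) for p in points]
    spans = [max(c) - min(c) for c in zip(*rotated)]
    return spans[0] * spans[1] * spans[2]

# Coarse sampling of the 0-90 degree working domain of genes A and B.
best = min(
    (bounding_box_volume(SHAPE, math.radians(a), math.radians(b)), a, b)
    for a in range(0, 91, 5) for b in range(0, 91, 5)
)
```

Sampling the whole domain like this is exactly what an evolutionary solver avoids: instead of evaluating every point, it follows the slopes of the landscape.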
Since every genome is pulled uphill, every peak in the fitness landscape has a basin of attraction around it. This basin represents all the points in model-space that will converge upon that specific peak. It is important to notice that the area of the basin is in no way representative of the quality of the peak. Indeed, a very poor solution may have a large basin of attraction while a good peak might have a small catchment area. Problems like this are typically very difficult to solve, as the solution tends to get stuck in local optima. But we’ll have a look at problematic fitness functions later on. First, let’s have a closer look at the actual fitness landscape for our minimum bounding-box model. I’m afraid it’s not quite as simple as the image we’ve been using so far. I was actually quite surprised how organic and un-box-like the actual fitness landscape for this problem is. Remember, the x-axis rotation is mapped along the Gene A direction and the y-axis rotation along the Gene B direction. So every point on the AB plane represents a unique rotation composed of two angles. The elevation of this point is a direct mapping of the volume of the bounding-box at those two rotation angles:
It would appear that the lowest points in this landscape (the minimum bounding-boxes) are both fewer in number and of a different kind. We only get 8 optimal solutions and they are all very sharp, indicating a somewhat more fragile state. Still, on the whole we have nothing to complain about. All the solutions are of equal quality and there are no local optima at all. We can generalize this landscape to a 2-dimensional graph:
The first thing to notice is that the landscape is periodic. I.e., it repeats itself every 90 degrees in both directions. Also, this landscape is in fact inverted as we’re looking for a minimum volume, not a maximum one. Thus, the orange peaks in fact represent the worst solutions to this problem. Note that there are 16 of these peaks in the entire range and that they are rounded. When we look at the bottom of this fitness landscape, we get a rather different view:
No matter where you end up as an ancestral genome, your blood-line will always find its way to a minimum bounding box. There’s nowhere for it to get ‘stuck’. So it’s really just a question about who gets there first. If we look at a slightly more complex fitness graph, it becomes apparent that this need not be the case:
This fitness landscape has two kinds of solutions. The high quality sharp ones near the bottom of the graph and the low quality flat ones near the top. The basin of attraction is given for both solutions (yellow for high quality, pink for low quality) and you can see that about half of the model space is attracted to the low quality solutions. An even worse example (flipped upright again this time, so high values indicate good solutions) would be the following fitness landscape:
The basins for these peaks are very small indeed and therefore easy to miss by a random sampling of the landscape. As soon as a lucky genome finds the peak on the left, its offspring will rapidly populate the low peak, causing the rest of the population to go extinct. It is now even less likely that the better peak on the right will be found. The smaller the solution basins, the harder it is to solve a problem with an evolutionary algorithm. Another example of a cumbersome problem to solve would be a discontinuous fitness landscape:
Even though there are strictly speaking no local optima, there is also no ‘improvement’ on the plateaus. A genome which finds itself in the middle of one of these horizontal patches doesn’t know where to go. If it takes a step to the left, nothing changes. If it takes a step to the right, nothing changes. There’s no ‘pressure’ in this fitness landscape, so all the genomes will wander about aimlessly, until one of them has the good fortune to suddenly step onto a higher plateau. At this point it will quickly dominate the gene-pool and the wandering starts again until the next plateau is accidentally found. Even worse than this though is a landscape that has a high degree of noise or chaos. A landscape may be continuous and yet feature so much detail that it becomes impossible to make any intelligible pronouncements regarding the fitness of a local patch:
SELECTION MECHANISMS Biological Evolution proceeds by Natural Selection. The ruthless force identified by Darwin as the arbiter of progress. Put simply, Natural Selection affects the direction of the gene-pool over time by regulating who gets to mate. In extreme cases mating is prevented because a specific genome is so unfit that the bearer cannot survive until reproductive age. Another rather extreme case would be sterility. However, there are myriad ways in which Natural Selection can make it difficult or impossible for certain individuals to pass on their genetic footprint. But Natural Selection isn’t the only game in town. For a long time now humans have been using Artificial Selection in order to breed specific characteristics into a (sub)species. When we try to solve problems using an Evolutionary Solver, we always use some form of artificial selection. There’s no such thing as sex or gender in the computer. The process of selection is also much simpler than in nature, as there is basically only one question that needs to be answered: Who gets to mate? Allow me to enumerate the mechanisms for parent selection that are available in Galapagos. This is only a small subset of the selection algorithms that are possible, but they seem to cover the basics rather well. First off, we have Isotropic Selection, which is the simplest kind of algorithm you can imagine. In fact, it is the absence of a selection algorithm. In Isotropic Selection everyone gets to mate:
No matter where you find yourself on this fitness graph, your chances of ending up in a mating couple are constant. You might think that this is a particularly pointless selection strategy as it does nothing to further the evolution of the gene-pool. But it is not without precedent in nature. Take for example wind-pollination or coral spawning. If you’re a sexually functioning member of such a species, you get to play ball come mating season. Another example would be females in a walrus colony. Every female in a colony gets to breed with the dominant male, no matter how fit or unfit she is. Isotropic Selection is certainly not without function either. For one, it dampens the speed with which a population runs uphill. It therefore acts as a safe-guard against a premature colonization of a local -and possibly inferior- optimum. Another mechanism available in Galapagos is Exclusive Selection, where only the top N% of the population get to mate:
If you’re lucky enough to be in the top N%, you’ll likely have multiple offspring. A good analogy in nature for Exclusive Selection would be Walrus males. There are only a few harems to go around and far too many males to assign them all (a harem of one female after all is not really a harem). The flunkies get to sit on the side-line without a single chance to father a walrus baby, doing whatever it is walruses do when they can’t get any action. Another common pattern in nature is Biased Selection, where the chance of mating increases as the fitness increases. This is something we typically see with species that form stable couples. Everyone is basically capable of finding a
mate, but the really attractive individuals manage to get a lot of hanky-panky on the side, thus increasing their chances of becoming genetic founders for future generations. Biased Selection can be amplified by using power functions, which have the effect of flattening or exaggerating the curve.
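The three parent-selection strategies can be sketched as plain functions. This is a hypothetical illustration rather than the actual Galapagos code, and it assumes a fitness function that returns non-negative values.

```python
import random

def isotropic_selection(population, n):
    """Everyone gets to mate: draw n parents uniformly, ignoring fitness."""
    return [random.choice(population) for _ in range(n)]

def exclusive_selection(population, fitness, n, top_fraction=0.25):
    """Only the top N% of the population mate; draw n parents from that elite."""
    ranked = sorted(population, key=fitness, reverse=True)
    elite = ranked[:max(1, int(len(ranked) * top_fraction))]
    return [random.choice(elite) for _ in range(n)]

def biased_selection(population, fitness, n, power=1.0):
    """Mating chance grows with fitness; power > 1 exaggerates the bias,
    0 < power < 1 flattens it."""
    weights = [fitness(g) ** power for g in population]
    return random.choices(population, weights=weights, k=n)
```

Note how Isotropic Selection drops fitness entirely, which is precisely what dampens the uphill rush of the population.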
COUPLING ALGORITHMS Coupling is the process of finding mates. Once a genome has been elected to mate by the active Selection Algorithm, it has to pick a mate from the population to complete the act. There are of course many ways in which mate selection could occur, but Galapagos at the moment only allows one: selection by genomic distance. In order to explain this in detail, I should first tell you how a Genome Map works. This
is a Genome Map. It displays all the genomes (individuals) in a certain population as dots on a grid. The distance between two genomes on the grid is roughly analogous with the distance between the genomes in gene-space. I say roughly
because it is in fact impossible to draw a map with exact distances. A single genome is defined by a number of genes. We assume that all the genomes in a species have the same number of genes (this is not technically a limitation of Evolutionary Algorithms, even though it is currently a limitation of Galapagos). Therefore the distance between two genomes is an N-Dimensional value, where N equals the number of genes. It is not possible to accurately display an N-Dimensional point cloud on a 2-Dimensional screen, so the Genome Map is only a coarse approximation. It also follows that the axes of this graph have no meaning whatsoever; the only information a Genome Map conveys is which genomes are more or less similar (close together) and which genomes are more or less different (far apart). Imagine you are an individual that has been selected for mating (yay). The population is well distributed and you are somewhere near the average (I’m sure you are a wildly original and delightful person in real life, but for the time being try to imagine you are in fact sort of average):
That red dot is you. Who looks attractive? You could of course limit your search of potential partners to your immediate neighbourhood. This means that you mate with individuals who are very much like you and it means your offspring will also be very much like you.
When this is taken to extremes we call it incestuous mating behaviour and it can become detrimental pretty quickly. Biological incest has a nasty habit of expressing unhealthy but recessive genes, but in the digital world of Evolutionary Solvers the biggest risk of incest is a rapid decline in population diversity. Low diversity decreases the chances of finding alternate solution basins and thus it risks getting stuck in local optima. The other extreme is to exclude everyone near you. You’ll often hear it said that opposites attract, but that’s true only up to a point. At some point the genomes at the other end of the scale become so different as to be incompatible.
This is called zoophilic mating and it can be equally detrimental. This is especially true when a population is not a single group of genomes, but in fact contains multiple sub-species, each of which is climbing their own little fitness peak.
You definitely do not want to mate with a member in a different sub-species, as the offspring would likely land somewhere in the middle. And since these two species are climbing different peaks, “in the middle” actually puts you in a fitness valley. It would seem that the best option is to balance in-breeding and out-breeding. To select individuals that are not too close and not too far. In Galapagos you can specify an in-breeding factor (between -100% and +100%, total out-breeding vs. total in-breeding respectively) that allows you to guide this relative offset:
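Mate selection by genomic distance can be sketched as follows. This is a hypothetical illustration, not the actual Galapagos implementation: it maps the in-breeding factor onto an index in the distance-sorted list of candidates.

```python
import math

def genomic_distance(a, b):
    """Euclidean distance between two genomes in N-dimensional gene-space."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def pick_mate(me, candidates, inbreeding=0.0):
    """Pick a mate by genomic distance. inbreeding runs from -1.0
    (total out-breeding: prefer the most distant candidate) to +1.0
    (total in-breeding: prefer the most similar candidate); 0.0 aims
    for the middle of the sorted distance range."""
    ranked = sorted(candidates, key=lambda c: genomic_distance(me, c))
    t = (1.0 - inbreeding) / 2.0          # 0 = nearest, 1 = farthest
    index = round(t * (len(ranked) - 1))
    return ranked[index]
```

With `inbreeding=0.0` the pick lands in the middle of the distance range, which is the "not too close, not too far" balance the text argues for.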
Note that mate selection at present completely ignores mate fitness. This is something that needs looking into for future releases, but even without any advanced selection algorithms the solver still works.
COALESCENCE ALGORITHMS Once a mate has been selected, offspring needs to be generated. On the genetic level this is anything but fun and games. The biological process of gene recombination is horrendously complicated and itself subject to evolution (meiotic drive for example). The digital variant is much more basic. This is partially because genes in evolutionary algorithms are not very similar to biological genes. Ironically, biological genes are far more digital than programmatic genes. As Mendel discovered in the 1860’s, genes are not continuously variable qualities. Instead they behave like on-off switches. Genes in evolutionary solvers like Galapagos behave like floating point numbers, that can assume all the values between two numerical extremes. When we mate two genomes, we need to decide what values to assign to the genes of the offspring. Again, Galapagos provides several mechanisms for achieving this:
In Crossover mating, junior inherits a random number of genes from mommy and the remainder from daddy. In this mechanism gene value is maintained. Blend Coalescence will compute new values for genes based on both parents, basically averaging the values:
It is also possible to add a blending preference based on relative fitness. If mum is fitter than dad for example, her gene values will be more prominent in the offspring:
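Both coalescence mechanisms can be sketched as follows. This is a hypothetical illustration rather than the actual Galapagos code; the crossover here uses a single random cut point, and the fitness-weighted blend is one possible reading of the blending preference.

```python
import random

def crossover_coalescence(mum, dad):
    """Junior inherits a leading run of genes from one parent and the
    remainder from the other; the gene values themselves are preserved."""
    cut = random.randint(0, len(mum))
    return mum[:cut] + dad[cut:]

def blend_coalescence(mum, dad, mum_fitness=1.0, dad_fitness=1.0):
    """Each gene of the child is a fitness-weighted average of the parents'
    genes; with equal fitness this is a plain 50/50 blend."""
    w = mum_fitness / (mum_fitness + dad_fitness)
    return [w * m + (1.0 - w) * d for m, d in zip(mum, dad)]

junior = crossover_coalescence([0.1, 0.2, 0.3, 0.4], [0.5, 0.6, 0.7, 0.8])
```

Note that crossover can only recombine existing values, while blending creates values neither parent had; both shrink diversity over time, which is why mutation is needed.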
Imagine we have two genomes of four genes each. There is no gender and there are no sex-based characteristics in the solver, so the combination of M and D is a completely symmetrical process; Crossover Coalescence, described above, is the mechanism most synonymous with biological recombination.
MUTATION FACTORIES All the mechanisms we have discussed so far (Selection, Coupling and Coalescence) are designed to improve the quality of solutions on a generation-by-generation basis. However, all of them have a tendency to reduce the bio-diversity in a population. The only mechanism which can introduce diversity is mutation. Several types of mutation are available in the Galapagos core, though the nature of the implementation in Grasshopper at the moment restricts the possible mutations to Point mutations only. Before we get to mutations though, I’d like to talk briefly about Genome Graphs. A popular way to display multi-dimensional points on a two-dimensional medium is to draw them as a series of lines that connect different values on a set of
vertical bars. Each bar represents a single dimension. This way we can quite easily display not just points with any number of dimensions, but even points with a different number of dimensions in the same graph:
Here for example we have a genome consisting of 5 genes. This genome is thus a point in the 5-dimensional space that delineates this particular species. When G0 is drawn at ⅓, it means that the value is one-third between the minimum and maximum allowed limits. The benefit of this graph is that it becomes quite easy to spot sub-species in a population, as well as lone individuals. When we apply mutations to a genome, we should see a change in the graph, as every unique genome has a unique graph.
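The bar heights of a Genome Graph are just each gene's value normalised within its allowed limits. A minimal sketch; the 5-gene genome and its limits are hypothetical:

```python
def genome_graph_values(genome, limits):
    """Normalise each gene to [0, 1] between its minimum and maximum allowed
    limits; these are the heights at which the genome's line crosses each
    vertical bar of a Genome Graph."""
    return [(g - lo) / (hi - lo) for g, (lo, hi) in zip(genome, limits)]

# A hypothetical 5-gene genome: the first gene sits at one-third of its domain.
limits = [(0.0, 30.0)] * 5
genome = [10.0, 0.0, 30.0, 15.0, 24.0]
heights = genome_graph_values(genome, limits)
```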
The above modification shows a Point Mutation, where a single gene value is changed. This is currently the only mutation type that is possible in Galapagos. We could also swap two adjacent gene values, in which case we get an Inversion Mutation:
Inversion mutations are only useful when subsequent genes have a very specific relationship. They tend to drastically modify a genome and thus, in most cases, also drastically modify fitness. This is almost always a detrimental operation. Two examples of mutations that cannot be used on a species which requires a fixed number of genes are Addition and Deletion mutations.
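The two mutation types discussed above can be sketched as follows. This is a hypothetical illustration; the 0-1 gene domain of the point mutation is an assumption.

```python
import random

def point_mutation(genome, low=0.0, high=1.0):
    """Replace a single gene with a fresh random value (the only mutation
    type currently possible in Galapagos)."""
    mutant = list(genome)
    mutant[random.randrange(len(mutant))] = random.uniform(low, high)
    return mutant

def inversion_mutation(genome):
    """Swap two adjacent gene values; only meaningful when subsequent
    genes have a very specific relationship."""
    mutant = list(genome)
    i = random.randrange(len(mutant) - 1)
    mutant[i], mutant[i + 1] = mutant[i + 1], mutant[i]
    return mutant
```

Note that the inversion preserves the multiset of gene values while the point mutation introduces a genuinely new value, which is what restores diversity.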
RESEARCH
LIFE-LIKE CELLULAR AUTOMATA STUDIES There are several well-known Life-like CA, therefore “some straightforward generalizations on the behavior of different kinds of rules can be made:
• In all rules where the lowest birth condition is 1 neighboring ON cell, all finite patterns grow at the speed of light in all directions. No still lifes, oscillators or spaceships are possible in these rules. Several have replicators, however. There are 65536 (= 2^16) rules of the B1 type.
• All rules where the lowest birth condition is 2 neighboring ON cells are exploding or expanding in character; this is largely due to the fact that a domino at the corner of a pattern will give rise to a new domino, also located at the corner of the daughter pattern. Spaceships (such as the moon) and oscillators (such as the duoplet) do exist in many of these rules. There are 32768 (= 2^15) rules of the B2 type.
• All rules where the lowest birth condition (if any) is 4 or more neighboring ON cells are stable in character, since no patterns ever grow beyond their initial bounding box. In particular, no spaceships can exist. There are 16384 (= 2^14) rules of the B4+ type.
• In all rules where the lowest birth condition is 0 neighboring ON cells, and the highest survival condition is 8 neighboring ON cells, the vacuum is unstable and will be immediately filled (and remain filled) with ON cells; thus, there are no patterns that remain finite. All of these rules have distinct complementary rules, and they are not commonly studied on their own.
This leaves 16384 rules in which the lowest birth condition is 3 neighboring ON cells, as well as 65536 rules in which the lowest birth condition is 0 neighboring cells, and 8 neighbors is not a survival condition. All chaotic rules must fall in either of these two areas of the rulespace. Most well-studied examples fall in the first one, since for long no commonly available software existed that could simulate the evolution of rules containing B0.”
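A Life-like rule in S/B notation can be simulated in a few lines. A minimal sketch on an unbounded 2D grid; the rulestring parsing and the blinker test pattern are illustrative:

```python
def parse_rule(rulestring):
    """Parse an S/B rulestring such as '23/3' (Conway's Life) into two sets
    of neighbour counts: survival conditions and birth conditions."""
    s, b = rulestring.split("/")
    return {int(c) for c in s}, {int(c) for c in b}

def step(alive, survive, born):
    """One generation of a Life-like CA; alive is a set of ON cells (x, y)."""
    neighbours = {}
    for (x, y) in alive:
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if dx or dy:
                    key = (x + dx, y + dy)
                    neighbours[key] = neighbours.get(key, 0) + 1
    # A cell is ON next generation if it is ON and its count is a survival
    # condition, or OFF and its count is a birth condition.
    return {c for c, n in neighbours.items()
            if (n in survive and c in alive) or (n in born and c not in alive)}

survive, born = parse_rule("23/3")        # Conway's Life
blinker = {(0, 0), (1, 0), (2, 0)}
next_gen = step(blinker, survive, born)   # flips to a vertical blinker
```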
These Life-like CA have been studied as 2D CA, so it was necessary to investigate the behavior of those Life-like rules in 3-dimensional space. These studies were done by setting up a grid of 7 m × 7 m square cells, 11 × 11 cells in total. For each known rule 2 cases were studied. In the first case the CA starts (t=0) from 1 alive cell in the middle of the grid. This gives an idea of how the organism evolves from the simplest possible case. In the second case all the grid cells were set up as genes in the evolutionary solver. There were therefore 121 genes (11×11), each of which could be alive or dead (1 = alive, 0 = dead). This way it is possible to obtain all possible configurations. The next step was to set up the fitness function. The goal for the evolutionary solver was to switch the cells between 1 and 0 and explore the possible configurations in order to achieve the aggregation with the highest density, in other words to have the highest number of cells at generation 11.
Grid setup and genes that define the state for each cell.
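The density fitness described above can be sketched as follows. This is a hypothetical 2D illustration: it uses Conway's 23/3 as the rule, a random one-gene-flip climb as a crude stand-in for the evolutionary solver, and it reads the density goal as the total number of cells in the stacked generations 0-10.

```python
import random

SIZE = 11          # the 11 x 11 grid of genes, one gene per cell
GENERATIONS = 10   # generations 0..10 are stacked to form the organism

def step(alive, survive, born):
    """One Life-like CA generation, restricted to the SIZE x SIZE grid."""
    counts = {}
    for (x, y) in alive:
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                nx, ny = x + dx, y + dy
                if (dx or dy) and 0 <= nx < SIZE and 0 <= ny < SIZE:
                    counts[(nx, ny)] = counts.get((nx, ny), 0) + 1
    return {c for c, n in counts.items()
            if (n in survive and c in alive) or (n in born and c not in alive)}

def density_fitness(genes, survive=frozenset({2, 3}), born=frozenset({3})):
    """genes: 121 zeros/ones, the row-major initial pattern. The fitness is
    the total number of cells over all stacked generations, i.e. the density
    of the resulting organism (assumption about the exact counting)."""
    alive = {(i % SIZE, i // SIZE) for i, g in enumerate(genes) if g}
    total = len(alive)
    for _ in range(GENERATIONS):
        alive = step(alive, survive, born)
        total += len(alive)
    return total

# Crude stand-in for the evolutionary solver: keep a flip if it doesn't hurt.
random.seed(1)
genes = [random.randint(0, 1) for _ in range(SIZE * SIZE)]
best = density_fitness(genes)
for _ in range(100):
    trial = list(genes)
    trial[random.randrange(len(trial))] ^= 1     # toggle one gene (1 <-> 0)
    score = density_fitness(trial)
    if score >= best:
        genes, best = trial, score
```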
The following examples are some of the most well-known or well-studied Life-like CA. These examples use S/B notation for rulestrings, where “S” stands for survival and “B” stands for birth.
Rulestring: 1/1 Name: Gnarl Description: A simple exploding rule that forms complex patterns from even a single live cell.
Generation: 0, Starting Cells: 1, Cells: 1, Blinded Cells: 0, CA Rules: 1/1
Generation: 10, Starting Cells: 1, Cells: 157, Blinded Cells: 0, CA Rules: 1/1
Generation: 10, Starting Cells: 25, Cells: 321, Blinded Cells: 4, CA Rules: 1/1
Generation: 0, Starting Cells: 25 (random), Cells: 25, Blinded Cells: 1, CA Rules: 1/1
Conclusions: This Life-like CA provides high density, but the aggregation has too many flying cells and therefore no structural coherence. The exploding rules will thus be avoided.
Rulestring: 12357/12357 Name: Replicator Description: A rule in which every pattern is a replicator.
Generation: 0, Starting Cells: 1, Cells: 1, Blinded Cells: 0, CA Rules: 12357/12357
Generation: 10, Starting Cells: 1, Cells: 537, Blinded Cells: 80, CA Rules: 12357/12357
Generation: 0, Starting Cells: 28, Cells: 28, Blinded Cells: 1, CA Rules: 12357/12357
Generation: 10, Starting Cells: 28, Cells: 746, Blinded Cells: 100, CA Rules: 12357/12357
Conclusions: The extremely high density and the number of blinded cells emerging from this rule make it impossible to apply to a building, even though the aggregations obtained are very interesting.
Rulestring: 012345678/3 Name: Life without death Description: An expanding rule that produces complex flakes. It also has important ladder patterns.
Generation: 0, Starting Cells: 1, Cells: 1, Blinded Cells: 0, CA Rules: 012345678/3
Generation: 10, Starting Cells: 1, Cells: 11, Blinded Cells: 0, CA Rules: 012345678/3
Generation: 0, Starting Cells: 30, Cells: 30, Blinded Cells: 1, CA Rules: 012345678/3
Generation: 10, Starting Cells: 30, Cells: 846, Blinded Cells: 120, CA Rules: 012345678/3
Conclusions: The emerging behavior of the automaton produces high-density aggregations without particularly interesting shapes.
Rulestring: 1234/3 Name: Mazectric Description: An expanding rule that crystallizes to form maze-like designs that tend to be straighter (i.e. have longer “halls”) than the standard maze rule.
Generation: 0, Starting Cells: 1, Cells: 1, Blinded Cells: 0, CA Rules: 1234/3
Generation: 10, Starting Cells: 1, Cells: 1, Blinded Cells: 0, CA Rules: 1234/3
Generation: 0, Starting Cells: 40, Cells: 40, Blinded Cells: 0, CA Rules: 1234/3
Generation: 10, Starting Cells: 40, Cells: 622, Blinded Cells: 4, CA Rules: 1234/3
Conclusions: This rule creates a very interesting aggregation, with structural coherence and good density; however, the openings in the middle of the organism are too small and the lower floors do not get enough light and ventilation.
Rulestring: 12345/3 Name: Maze Description: An expanding rule that crystallizes to form maze-like designs.
Generation: 0, Starting Cells: 1, Cells: 1, Blinded Cells: 0, CA Rules: 12345/3
Generation: 10, Starting Cells: 1, Cells: 1, Blinded Cells: 0, CA Rules: 12345/3
Generation: 0, Starting Cells: 40, Cells: 40, Blinded Cells: 0, CA Rules: 12345/3
Generation: 10, Starting Cells: 40, Cells: 705, Blinded Cells: 13, CA Rules: 12345/3
Conclusions: Maze is very similar to the previous rule, so the same considerations apply.
Rulestring: 45678/3 Name: Coral Description: An exploding rule in which patterns grow slowly and form coral-like textures. Notes: Since there are no S=0 or B=0/1 conditions, the automaton cannot generate from one starting cell; therefore there are no diagrams explaining this case.
Generation: 0, Starting Cells: 45, Cells: 45, Blinded Cells: 2, CA Rules: 45678/3
Generation: 10, Starting Cells: 45, Cells: 488, Blinded Cells: 65, CA Rules: 45678/3
Conclusions: Interesting behavior but no structural coherence.
Rulestring: 34/34 Name: 3-4 Life Description: An exploding rule that was initially thought to be a stable alternative to Conway’s Life, until computer simulation found that most patterns tend to explode.
Generation: 0, Starting Cells: 42, Cells: 42, Blinded Cells: 1, CA Rules: 34/34
Generation: 10, Starting Cells: 42, Cells: 619, Blinded Cells: 36, CA Rules: 34/34
Conclusions: Very compact and dense aggregation but too many blinded cells.
Rulestring: 4567/345 Name: Assimilation Description: A very stable rule that forms permanent diamond-shaped patterns with partially filled interiors.
Generation: 0, Starting Cells: 51, Cells: 51, Blinded Cells: 2, CA Rules: 4567/345
Generation: 10, Starting Cells: 51, Cells: 967, Blinded Cells: 130, CA Rules: 4567/345
Conclusions: Extremely high density but too many blinded cells and no interesting form generated.
Rulestring: 1358/357 Name: Amoeba Description: A chaotic rule that forms large diamonds with chaotically oscillating boundaries. Known to have quadratically-growing patterns.
Generation: 0, Starting Cells: 45, Cells: 45, Blinded Cells: 2, CA Rules: 1358/357
Generation: 10, Starting Cells: 45, Cells: 644, Blinded Cells: 22, CA Rules: 1358/357
Conclusions: This rule has interesting behavior but leaves no possibility for openings in the center, so it would be difficult to provide direct illumination to all the cells.
Rulestring: 238/357 Name: Pseudo Life Description: A chaotic rule with evolution that resembles Conway’s Life, but few patterns from Life work in this rule because the glider is unstable.
Generation: 0, Starting Cells: 39, Cells: 39, Blinded Cells: 1, CA Rules: 238/357
Generation: 10, Starting Cells: 39, Cells: 538, Blinded Cells: 18, CA Rules: 238/357
Conclusions: Due to its similarity to Conway’s Life it is interesting to study, but the unstable gliders may be a problem in high-rise organisms.
Rulestring: 125/36 Name: 2x2 Description: A chaotic rule with many simple still lifes, oscillators and spaceships. Its name comes from the fact that it sends patterns made up of 2x2 blocks to patterns made up of 2x2 blocks.
Generation: 0, Starting Cells: 43, Cells: 43, Blinded Cells: 1, CA Rules: 125/36
Generation: 10, Starting Cells: 43, Cells: 519, Blinded Cells: 13, CA Rules: 125/36
Conclusions: Fascinating generations, though the spaceships could be a problem.
Rulestring: 23/36 Name: HighLife Description: A chaotic rule very similar to Conway’s Life that is of interest because it has a simple replicator.
Generation: 0, Starting Cells: 43, Cells: 43, Blinded Cells: 1, CA Rules: 23/36
Generation: 10, Starting Cells: 43, Cells: 516, Blinded Cells: 12, CA Rules: 23/36
Conclusions: This rule has very interesting behavior; however, the fact that it has a replicator may contradict the goal that the final organism should be as random and chaotic an aggregation as possible.
Rulestring: 245/368 Name: Move Description: A rule in which random patterns tend to stabilize extremely quickly. Has a very common slow-moving spaceship and slow-moving puffer.
Generation: 0, Starting Cells: 45, Cells: 45, Blinded Cells: 0, CA Rules: 245/368
Generation: 10, Starting Cells: 45, Cells: 632, Blinded Cells: 22, CA Rules: 245/368
Conclusions: Another interesting rule, but the tendency to produce gliders and spaceships may be a problem.
Rulestring: 23/3 Name: Conway’s Life Description: A chaotic rule that is by far the most well-known and well-studied. It exhibits highly complex behavior.
Generation: 0, Starting Cells: 49, Cells: 49, Blinded Cells: 1, CA Rules: 23/3
Generation: 10, Starting Cells: 49, Cells: 480, Blinded Cells: 6, CA Rules: 23/3
Conclusions: The most studied rule. It has good density, creates interesting forms and has good structural coherence. Probably the most fascinating rule.
Rulestring: 24/3 Name: Life_24 Description: A chaotic rule similar to Conway’s Game of Life that exhibits highly complex behavior.
Generation: 0, Starting Cells: 47, Cells: 47, Blinded Cells: 1, CA Rules: 24/3
Generation: 10, Starting Cells: 47, Cells: 468, Blinded Cells: 5, CA Rules: 24/3
Conclusions: This rule is the best compromise between all the factors considered so far. It has good density, it creates interesting forms, it has only a few blinded cells and good structural coherence.
CHOOSING THE RULE Each automaton rule has pros and cons. One of the most influential factors for the choice is the density of the aggregation, even though the structural coherence of the organism also has to be considered. Structural coherence means that there should be no “flying” cells or flying groups of cells. A first evaluation regarding structural coherence had to be made visually. Observing the aggregations, it was decided to select or discard certain rules when they produced aggregations that would be totally impossible in reality. This was the first direct selection. The next step was to evaluate through the algorithm how many flying and blinded cells were generated and compare this number to the total number of cells. A last direct, visual evaluation was made to select the most complex and architecturally “fascinating” rule. A first choice was Conway’s Life rule (S23/B3). This rule is one of the most interesting form generators due to its complexity, relatively high density and good structural coherence, since it generates only a few blinded and flying cells. However, after further investigation of less known Life-like rules, the S24/B3 rule was found. This rule (from now on named Life_24) is very similar to Conway’s Life but has a higher structural coherence and fewer flying and blinded cells. In 3D aggregations it shows a low tendency to generate “Gliders” and “Puffers”. This is important because in environments with no space limitations the organism could otherwise grow extremely large aggregations on top of an extremely small base. The last problem to solve was to avoid blinded or flying cells: all the blinded and flying cells detected were therefore deleted. This did not create a noticeable reduction in density, since the rule does not generate many flying and blinded cells.
To detect the problematic cells, the algorithm was set up to recognize the central point of each face, and then to evaluate how many of these points share the same position in space. Clearly, two neighboring cells have two coincident faces, so taking the centroids of these faces yields 2 coincident points. A flying cell has no coincident face/point with the nearby cells, so its 2 points at a distance of 1.75 m and its 4 points at a distance of 3.5 m from the cell center are not shared with any other cell. The same principle was used for the blinded cells: each blinded cell has 6 coincident faces with the surrounding cells, so instead of 2 single points 1.75 m from the center there are 4 (2 double points), and instead of 4 points 3.5 m from the center there are 8 (Fig.2). Extended research was carried out to investigate the Life_24 rule on larger grid sizes; fig.5 illustrates an example. It is interesting to notice how the pattern repeats in the center of the grid but shows chaotic, random behavior at the edges. Fig.3 illustrates the starting cells, while fig.4 shows the blinded and the flying cells.
Fig.1: An example of “blinded” cell and “flying” cell.
Fig.2: How to evaluate each cell and define if it is a “flying” or “blinded” cell.
Fig.3: Initial pattern
Fig.4: Flying and Blinded cells
Fig.5: Emerging behavior for Life_24, generation 10.
THE ALGORITHM CELLULAR AUTOMATA
CELLS ANALYSIS
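The cell analysis stage, detecting flying and blinded cells from coincident face centroids as described in "Choosing the Rule", can be sketched as follows. This is a hypothetical illustration assuming 7 m × 7 m × 3.5 m box cells indexed on the grid; a face shared by two cells produces a doubled centroid point.

```python
from collections import Counter

# Assumed cell dimensions from the study: 7 m x 7 m in plan, 3.5 m high.
DX = DY = 7.0
DZ = 3.5

def face_centroids(cell):
    """The six face centre points of a box cell indexed by (i, j, k)."""
    i, j, k = cell
    cx, cy, cz = i * DX, j * DY, k * DZ
    return [(cx + DX / 2, cy, cz), (cx - DX / 2, cy, cz),
            (cx, cy + DY / 2, cz), (cx, cy - DY / 2, cz),
            (cx, cy, cz + DZ / 2), (cx, cy, cz - DZ / 2)]

def classify(cells):
    """Count coincident face centroids: a face shared by two cells appears
    twice. 0 shared faces -> 'flying' cell, 6 shared faces -> 'blinded'."""
    counts = Counter(pt for c in cells for pt in face_centroids(c))
    flying, blinded = set(), set()
    for c in cells:
        shared = sum(1 for pt in face_centroids(c) if counts[pt] == 2)
        if shared == 0:
            flying.add(c)
        elif shared == 6:
            blinded.add(c)
    return flying, blinded
```

Both categories are then deleted from the aggregation, as the text explains, since neither a floating cell nor a cell with all six faces covered is acceptable.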
SOLAR STUDIES Definitions DAYLIGHT & SUNLIGHT “Daylight refers to the level of diffuse natural light coming from the surrounding sky dome or reflected off adjacent surfaces. Sunlight, on the other hand, refers to direct sunshine and is very much brighter than ambient daylight. The Sun’s position in the sky varies markedly throughout the day and, when viewed from any particular point, is often obscured by clouds, trees or other buildings. It also experiences significant changes in intensity at different times of the year. Thus it does not make a very reliable light source with which to light the inside of a building. Also, its intensity is such that it can be a significant source of glare when falling on a work surface or reflected off a computer screen. As a result, direct sunlight is rarely included in architectural daylighting calculations.”1 (fig.1) DAYLIGHT FACTOR “The Daylight Factor is defined as the ratio of the illuminance at a particular point within an enclosure to the simultaneous unobstructed outdoor illuminance under the same sky conditions, expressed as a percentage. Once both the Daylight Factor and Design Sky are known, simply multiplying the two together gives the illuminance level (in either lux or foot candles) due to daylight at the point.”1 (Fig.2)
Fig.1: The difference between a sunlit (left) and daylit (right) space.
Fig.2: Factors affecting the penetration of daylight into a space.
Notes: 1. Ecotect guide
Fig.3: An example of BRE Daylight Factors falling off quite quickly when obstructed from direct sky or external reflections.
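The Daylight Factor relation quoted above is just a ratio, and can be applied directly. A minimal sketch (the function names are mine, not Ecotect's):

```python
def daylight_factor(indoor_lux, outdoor_lux):
    # DF = indoor illuminance / simultaneous unobstructed outdoor
    # illuminance, expressed as a percentage.
    return 100.0 * indoor_lux / outdoor_lux

def indoor_illuminance(df_percent, design_sky_lux):
    # Multiplying DF by the Design Sky illuminance recovers the
    # expected daylight level at the point, in lux.
    return df_percent / 100.0 * design_sky_lux
```

For example, 200 lux measured indoors under a 10,000 lux sky corresponds to a Daylight Factor of 2%.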
Redefining the process Generating form through cellular automata's rules makes it impossible to control some of the conditions imposed at the beginning. The automaton will grow following its own rules, without considering the best living conditions. For instance, the automaton doesn't know where the sun comes from or what its path in the sky is, therefore it cannot grow in a way that allows direct illumination for each cell. One way to solve this problem is, again, the evolutionary solver. Just as, for density, the evolutionary algorithm was created to evolve the organism by changing the initial pattern until, after several generations, it reaches the highest density, a similar approach had to be taken for natural illumination. The typical workflow for solar analysis usually consists of 3 phases: 1. Design the model. 2. Perform solar analysis with dedicated software. 3. Change the initial design according to the solar analysis. This workflow is not very efficient, since the first design is not made with the solar analysis in mind, so in the third stage many changes have to be made to the initial design, often adding different elements to match the requirements. The idea for Evolutionary Automaticity is to build this analysis into the algorithm and optimize the final organism through the evolutionary solver until it reaches the best compromise between density and illuminated cells. Ecotect Analysis and Geco were used for this stage. The process is theoretically very simple. The solver changes the initial pattern and aggregates the cells to generate the organism. This aggregation of cells is then exported to Ecotect, where many different analyses can be performed. In the beginning the idea was to run the shading, overshadowing and sunlight hours analysis, but this analysis only calculates direct illumination. Basically, what it does is quantify
Fig.4: Aggregation obtained after a density optimization.
Under_40_hrs/y Fig.5: Sunlight hours analysis result. This image clearly shows the critical aspect of this type of analysis: many cells receive little or no sunlight (for instance, the green faces receive under 40 hours of sunlight per year), even though some of them are at the edges of the aggregation and clearly receive diffuse illumination.
and visualize how many hours of sunlight will fall on a given surface of the model. It is easy to see that the north-facing facades will receive no direct sunlight; since we don't need only direct illumination, this analysis is not the best one. Another very useful tool in Ecotect is the daylight factor analysis, which calculates the diffuse lighting. This is obviously the ideal solution, but unfortunately this type of analysis can only be done on a grid, or on predefined points. Moreover, it takes a very long time to perform the calculation and it is not possible to run it directly from Grasshopper, so it becomes impossible to get direct feedback from the model. For these reasons the best choice is an incident solar radiation analysis. It calculates the direct sunlight and the diffuse daylight radiation and outputs a cumulative value in Wh/m². Ecotect performs an incident solar radiation analysis on every single cell face. Once the analysis is over, the generated data is exported to Grasshopper, where the algorithm evaluates how many total cells and how many illuminated cells there are in the aggregation. These two numbers are weighted and summed. In our case the illuminated cells are weighted at 60% and the total number of cells at 40%, because I believe that good illumination and living conditions are more important than pure density. This is, however, a personal opinion and the values can easily be changed. There is, by the way, a critical problem in this process. Every single solar analysis takes several minutes for small aggregations, but it can easily take hours or even days for large organisms. This slows down the evolutionary solver significantly and makes it almost impossible to reach "the best" solution, since the solver needs hundreds or even thousands of generations to evolve. That's why the tests were done on very small aggregations.
The theoretical approach is nonetheless correct. Hopefully in the future there will be faster computers and better-performing software.
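The 60/40 weighting described above might be sketched as follows. The normalization by a reference cell count is my assumption, added only so the two terms are comparable; the text fixes only the weights.

```python
# Hedged sketch of the density/illumination fitness: illuminated
# cells weighted at 60%, total cell count at 40%. `max_cells` is an
# assumed reference maximum used to normalize both terms to [0, 1].
def fitness(illuminated_cells, total_cells, max_cells,
            w_light=0.6, w_density=0.4):
    return (w_light * illuminated_cells / max_cells
            + w_density * total_cells / max_cells)
```

With this shape, improving illumination at equal density always raises the fitness, which is what the solver needs to drive the evolution.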
Fig.6: Total sunlight hours analysis. This image shows the north-east faces of the organism. It is clear, again, that the blue faces don't receive direct illumination, but many of them certainly receive a high share of diffuse illumination.
Fig.7: Sky component analysis. This analysis shows that the sky component is very high for most of the cells, so they receive high levels of diffuse light.
Fig.8: Randomly generated aggregation fed into the evolutionary solver.
Fig.10: The evolution continues and there are more illuminated cells in a denser organism.
Fig.9: After a few iterations the solver improves the organism.
Fig.11: After 25 generations and several hours the solver starts to reach worse solutions, until it crashes. From this test it is clear that evolutionary solvers are very powerful but, at the same time, still very limited.
THE ALGORITHM SOLAR ANALYSIS
THE PROJECT
THE SITE Chicago
Evolutionary Automaticity needs to develop under difficult and varied conditions, therefore the site for this project had to contain the most varied urban fabrics. Chicago's downtown area is one of the best-suited Western city centers for this project. It has the typical grid plan, but as soon as we look at it in 3 dimensions the "delirious" Chicago immediately appears. Many building typologies can be observed: skyscrapers, mid-height office towers, low residential buildings, tens of empty or parking lots, and different green areas. There are the river and the Lake Michigan harbor. The conditions are as varied as possible, so Evolutionary Automaticity can deal with different difficult conditions and evolve in reaction to the surrounding environment. The project site was chosen because it consists of two small empty lots surrounded by high-rise buildings as well as low-rise buildings and empty lots. This is just a pilot project that
Lake Michigan Chicago Downtown
Intervention Area Chicago River
Chicago Harbor
Fig.1: Chicago
aims to show the real potential of Evolutionary Automaticity. In the future all the empty lots could be invaded by EA, which could theoretically grow indefinitely. Of course it will have to respect all the existing buildings and roads, and it will not grow over the river or the lake.
Empty Lots
Project Site
Chicago River
Chicago Harbor
Fig.2: Downtown area and project site.
[Site plan labels: Embassy Suites Hotel; NBC Tower; River East Center; City View Condominiums; CityFront Place Apartments; Sheraton Chicago Hotel and Towers; Plot_n°1: 130 m; Plot_n°2: 157 m; 115 m]
Setup
The first operation was to set up a square grid with a 7 m x 7 m cell size (Fig.3). This will be the grid for the automata. The total size of the grid is 40 cells by 40 cells, meaning 280 m x 280 m. Unfortunately the grid couldn't be larger, since the available computational power was not sufficient for larger-scale organisms. The next step was to detect the center of each 2D cell and move it vertically by 1.75 m; this way it is possible to associate a 3D cell to each point. Afterwards, only the points within the boundaries of the plots were selected: there were exactly 342 points inside the plots (Fig.4). One more step had to be done: the points around the border of the plots were deleted, because it was necessary to create a walkway (Fig.5). An offset of 7 m was made from the boundary, so the walkways will be at least 3.5 m wide. Substituting the 3D cells for the generative points gives the first visual feedback on the occupation of the plots (Fig.6). As written above, the organism can't grow over the streets, therefore a protection volume is set up. This volume has the same profile as the roads and rises 25 m (Fig.7). It is also necessary to create a protection volume around the buildings, since the EA should not grow too close, or even attached, to the existing buildings (Fig.8).
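The grid setup above can be sketched outside Grasshopper as follows. The point-in-plot predicate is a stand-in for the real boundary test; the thesis then keeps the 342 points inside the plots and removes a 7 m border offset, leaving 232 points.

```python
CELL = 7.0   # cell size in metres
N = 40       # grid is 40 x 40 cells (280 m x 280 m)

def generative_points(inside_plot):
    """inside_plot: predicate (x, y) -> bool standing in for the
    point-in-plot test done in the Grasshopper definition."""
    pts = []
    for i in range(N):
        for j in range(N):
            x, y = (i + 0.5) * CELL, (j + 0.5) * CELL  # 2D cell centre
            if inside_plot(x, y):
                pts.append((x, y, 1.75))  # centre raised by 1.75 m
    return pts
```

A toy square plot of 70 m x 70 m, for example, yields a 10 x 10 patch of generative points, all lifted to z = 1.75 m.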
Grid_7x7 Center_grid_pts
Fig.3: Grid setup
342_generative_pts
Fig.4: Generative points
Walkway
25m height
Roads protection
232_generative_pts
Fig.5: Walkways
Fig.7: Roads protection volume
Buildings protection
Generation: 0; Starting Pts: 232; Cells: 232; CA Rules: None
Buildings protection
Buildings protection
Buildings protection
Fig.6: Generative points
Fig.8: Buildings protection volume
FINAL ORGANISM So far the process is linear and clear. As in the cellular automata studies, the next step is to generate the final aggregation, or final building if one prefers. All the parameters are already in the algorithm; the only difference is that this time the aggregations will be much larger and will start from the initial points in the plots. The final height of the organism will be 36 floors (126 m). Unfortunately this limit is necessary to keep the number of cells relatively low and make the calculations feasible for the computer. Even so, each generation in the evolutionary algorithm took from 35 seconds to 1 minute or more just to regenerate the aggregation, and another 13 minutes or more to calculate the incident solar radiation for each and every aggregation. It took days to get a satisfactory result. Different tests were nevertheless run, because it was interesting to also study high-density aggregations with low illumination and low-density aggregations with high illumination. In any case, what matters in this thesis is not the best solution: as mentioned in the introduction, this thesis wants to demonstrate that a new process in architecture is possible. Therefore one of the optimal organisms was chosen, though surely not the best one.
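One possible reading of the 24/3 growth rule is "a cell survives with 2-4 face neighbors and is born with exactly 3", on a von Neumann (6-face) 3D neighborhood. This interpretation is my assumption for illustration; the exact neighborhood and rule encoding used in the thesis may differ.

```python
# Minimal 3D cellular-automaton step under the assumed 24/3 reading:
# survive with 2-4 face neighbors, born with exactly 3.
OFFSETS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
           (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def step(alive):
    counts = {}
    for (x, y, z) in alive:
        for dx, dy, dz in OFFSETS:
            p = (x + dx, y + dy, z + dz)
            counts[p] = counts.get(p, 0) + 1
    survivors = {c for c in alive if 2 <= counts.get(c, 0) <= 4}
    births = {p for p, n in counts.items() if n == 3 and p not in alive}
    return survivors | births
```

Under this reading a flat 2 x 2 block of cells is stable, since every cell has exactly 2 face neighbors and no empty cell reaches the birth count.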
The following images illustrate some of the most interesting organisms. The incident solar radiation scale runs from 0 to 1,467,000+ Wh/m² in steps of 146,700 Wh/m².
Fig.1, Fig.2: Generation: 35; Starting Cells: 99; Cells: 1959; CA Rules: 24/3
Fig.3, Fig.4: Generation: 35; Starting Cells: 103; Cells: 2056; CA Rules: 24/3
Fig.6, Fig.7: Generation: 35; Starting Cells: 121; Cells: 2081; CA Rules: 24/3
Generation: 35; Starting Cells: 112; Cells: 2112; CA Rules: 24/3
Fig.7: Final Organism
Fig.8: Final Organism, solar analysis (incident radiation scale: 0 to 1,467,000+ Wh/m²)
VERTICAL COMMUNICATION What distinguishes this new approach to architecture is the absence of a superstructure (typical of the Metabolists). This is a major problem for the building. Buildings (particularly high-rise buildings) are usually designed starting from the vertical communication. Vertical connection through the floors is probably the most important aspect of a project: usually the vertical communication is designed first, then the rest of the building is designed around it. In Evolutionary Automaticity this process is not possible, since the aggregations are self-organized and don't assume any kind of vertical connection. It is obviously impossible to run a vertical communication from cell to cell, therefore a different solution was adopted. The idea is to serve the maximum number of cells with the minimum number of cores. Again, the evolutionary solver comes to the rescue. The way to solve the problem is to generate a core that can move within the plot boundaries in 7 m steps; its base moves exactly over the generative grid points (Fig.1), so it is always aligned with the cells. The algorithm analyzes how many cells the core covers in the z-axis direction and how many it covers within a radius of 14 m (3 cells) from its center (Fig.2). Sometimes the highest and densest formations will be on the edges of the organisms, so those cells will be the ones offering the best conditions. Therefore the core will move in search of the conditions described above and of the least illuminated cells. These values are weighted differently in the fitness: the vertical cells contribute 40%, the cells within the radius 30%, and the low-illuminated cells the remaining 30%. This way the core looks for its place considering the highest number of cells over one point, so it doesn't change the initial aggregation too much.
It will consider the highest number of cells within 21 m of its center, to serve the maximum number of cells possible. Last but not least, it will place itself where the highest number of low-illuminated cells is, so those cells can be used for the cores instead of becoming apartments with poor living conditions. Once the first core has reached these requirements, the
y Core
Generative_pts
x
Fig.1: The core can move through the generative points in the plots until it covers the highest number of cells.
Bounding_cylinder R=12m
Core
Cells_inside_core_volume
Cells_inside_cylinder_volume
Generative_pts
Fig.2:
algorithm will generate another core that will do the same, and so on until there are enough cores to cover all the cells.
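The 40/30/30 core-placement fitness described above can be sketched as a single weighted sum. The normalization by the total cell count is my assumption; the thesis fixes only the three weights.

```python
# Hedged sketch of the core-placement fitness: cells directly above
# the core weigh 40%, cells within the service radius 30%, and
# poorly illuminated cells near the core the remaining 30%.
def core_fitness(vertical, in_radius, low_lit, total,
                 w_v=0.4, w_r=0.3, w_l=0.3):
    """vertical: cells covered in z over the core's base point;
    in_radius: cells within the service radius of the core;
    low_lit: low-illuminated cells near the core;
    total: total cells, used here as an assumed normalizer."""
    return (w_v * vertical + w_r * in_radius + w_l * low_lit) / total
```

The evolutionary solver would then move the core over the generative points, keeping the position that maximizes this value before placing the next core.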
Cores
Cells
Elevator
Stairs
Fig.4: Cores in the final organism
Fig.3: Core's design
Fig.5: Core's floorplan
THE ALGORITHM VERTICAL COMMUNICATION
CELLS CONNECTIONS After generating complexity, it was necessary to develop a strategy to connect the cells to the cores, and moreover to create connections between cells to make different apartment solutions. The first thing to do was to organize the connections between the cells. The idea was to have three apartment sizes. The first is obviously a single-cell apartment of 49 m², sized for one or two people. The next size is composed of two cells, thus 98 m², sized for two to four people. The last apartment is composed of 3 cells aggregated in line or in an L-shape; its size is 147 m² and it is meant to accommodate four to six people (Fig.1). This way the organisms will be as varied as possible and will accommodate families of different sizes at different price ranges. The logic of the associations is to join a cell with 3 open faces with a neighboring cell with 1 open face (Fig.3). The cell with 1 open face will contain the living room, the kitchen and the dining room; the cell with 3 open faces will contain the bedrooms (this is further explained in the next chapter). When there are no more cells with one open face, the cells with 3 open faces are connected with neighboring cells with 2 open faces, giving a brighter living area. After all the 3-open-face cells are connected, the logic connects the cells with 2 open faces with the remaining neighboring cells with 1 open face. This connection gives a living area with one open face and a night area with two open faces. Once there are no more neighboring cells with 1 open face, the algorithm connects neighboring cells with 2 open faces. This process is repeated until all the cells are connected and all the apartments are created. Some cells may nevertheless remain disconnected; those will be single apartments.
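The pairing priority just described can be sketched as a greedy pass over the neighbor pairs. This is a simplified stand-in: cells are ids mapped to their open-face count, and the real algorithm also builds triple apartments, which this sketch omits.

```python
# Priority order from the text: 3-open with 1-open, then 3-open with
# 2-open, then 2-open with 1-open, then 2-open with 2-open.
PRIORITY = [(3, 1), (3, 2), (2, 1), (2, 2)]

def pair_cells(open_faces, adjacent):
    """open_faces: dict cell-id -> number of open faces;
    adjacent: list of (a, b) neighbor pairs."""
    paired, apartments = set(), []
    for a_faces, b_faces in PRIORITY:
        for a, b in adjacent:
            if a in paired or b in paired:
                continue  # each cell joins at most one apartment
            if {open_faces[a], open_faces[b]} == {a_faces, b_faces}:
                apartments.append((a, b))
                paired.update((a, b))
    # leftover cells become single-cell apartments
    singles = [c for c in open_faces if c not in paired]
    return apartments, singles
```

For example, with a 3-open-face cell next to a 1-open-face cell and a chain of 2-open-face cells, the first two pair up, two of the chain pair, and the last cell of the chain stays single.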
The last problem to solve was to create a connection from the apartments to the cores. The algorithm was set up to search from the core center in 4 directions, 7 meters at a time (+x; -x; +y; -y).
Single_Apartment
Double_Apartment
Triple_Apartment
Triple_L-shape_Apartment
Fig.1: Apartment typologies
1_Blinded_Face 2_Blinded_Faces 4_Open_Faces 3_Blinded_Faces
3_Open_Faces 2_Open_Faces
1_Open_Face
Fig.2: Types of cells
3_Open_Faces
1_Open_Face 2_Open_Faces
Fig.3: Connection logic
If the algorithm finds a cell center, there is a direct connection between the core and the cell, and the cell is aligned to have its entrance towards the core. If the algorithm finds a terrace, it starts searching again for a cell in the 4 directions as before. If it finds a cell, there is a "terrace connection" (Fig.4): the access to the apartment is from the common or private terrace, and again the apartment is oriented to have its entrance towards the terrace. If the search doesn't find a cell after the terrace, it searches again in all directions: if it then finds a cell, it builds a bridge; if it finds another terrace, it repeats the process until it finds a cell. If the first search finds nothing, it repeats the search until it finds a cell. This process can go on only within a radius of 21 m, 7 meters more than the radius set up for the cores. The next pages illustrate a typical floor plan, specifically floor 23. Fig.7 shows the floor plan with the cell typologies, Fig.8 the floor plan with the cells already connected to create the apartments.
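The core-to-apartment search above can be sketched as a stepped walk in the four directions. The world model (sets keyed by coordinates) is a stand-in for the real Grasshopper geometry, and the connection types follow the text: direct at one step, terrace when a terrace is crossed, bridge when a gap is crossed.

```python
STEP, LIMIT = 7.0, 21.0  # 7 m steps, 21 m search radius
DIRS = [(STEP, 0.0), (-STEP, 0.0), (0.0, STEP), (0.0, -STEP)]

def find_connections(core, cells, terraces):
    """core: (x, y); cells, terraces: sets of (x, y) positions."""
    found = {}
    for dx, dy in DIRS:
        x, y, dist = core[0], core[1], 0.0
        crossed_terrace = crossed_gap = False
        while dist < LIMIT:
            x, y, dist = x + dx, y + dy, dist + STEP
            if (x, y) in cells:
                found[(x, y)] = ("bridge" if crossed_gap
                                 else "terrace" if crossed_terrace
                                 else "direct")
                break
            if (x, y) in terraces:
                crossed_terrace = True  # keep searching past the terrace
            else:
                crossed_gap = True      # empty step: a bridge is needed
    return found
```

So a cell one step away gets a direct connection, a cell behind a terrace a terrace connection, and a cell beyond an empty gap a bridge, all within the 21 m radius.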
Direct_connection
Terrace_connection
Bridge_connection
Bridge_Terrace_connection
Detect_Nothing Detect_Terrace Detect_Cell
Fig.4: Connection typologies
Fig.5: Connection logic process
Fig.6: Legend (Cell, Terrace, Core)
1_Open_Face
3_Open_Faces
Cores
2_Open_Faces
4_Open_Faces
Fig.7: Floor 23, cells typologies
Triple_L-shape_Apartment Core
Single_Apartment
Double_Apartment Triple_Apartment
Fig.8: Floor 23, apartment typologies
APARTMENTS DESIGN The apartments were designed to be as flexible as possible. As mentioned at the beginning of this book, the cells are to be mass-produced through robotic manufacturing. Since there are different possible aggregations of cells, the apartments had to be designed to match all the configurations with just a few standard cells. The single cell was designed to allow access from all four faces. The living and the night areas were organized to offer good illumination even when the cell has 2 blinded faces (the worst possible scenario). In the double apartment, one cell contains the living area and the other the bedroom area. Again, these are designed so that the living area can be well illuminated even with only one open face (worst scenario) and the bedrooms are well illuminated even if the cell has only 2 open faces (worst scenario). For the triple-cell apartments the problem was a bit more complicated. The first and the third cell are the same as in the double apartment. The middle cell of the in-line solution was designed to be mirrored in case only one face is open, so the room can be illuminated. For the L-shape solution, the corner cell was designed to be rotated by 90°, so the room can be oriented towards the open face. The next diagrams explain the design in more detail; at the end of this chapter the floor plans and the diagrams for the detailed design can also be found.
Fig.1: Single apartment design
Fig.2: Single apartment design, terraces configuration
Fig.3: Single apartment design, 2 blinded faces configuration
Fig.4: Single apartment design, 2 blinded faces configuration
Fig.5: Double apartment design
Fig.6: Double apartment design, terraces configuration
Fig.7: Double apartment design, 3 open faces configuration
Fig.8: Double apartment design, 3 open faces configuration
Fig.9: Triple apartment design
Fig.10: Triple apartment design, terraces configuration
Fig.11: Triple apartment design, 4 open faces configuration
Fig.12: Triple apartment design, 4 open faces configuration
Fig.13: Triple apartment design
Fig.14: Triple apartment design, terraces configuration
Fig.15: Triple apartment design, 4 open faces configuration
Fig.17: Single apartment floor plan
Fig.16: Triple apartment design, 4 open faces configuration
Fig.18: Double apartment floor plan
Fig.19: Triple apartment floor plan
Fig.20: Triple L-shape apartment floor plan
Fig.21: Single apartment.
Fig.22: Double apartment.
Fig.23: Triple apartment.
Fig.24: Triple L-shape apartment.
NORTH
FLOOR PLAN LEVEL 23
SECTION A-A' (floor levels at 3.50 m intervals from 0.00 m to 136.50 m above ground, and at 4.50 m intervals down to -18.00 m below)
BIBLIOGRAPHY
• Carpo M., "The Alphabet and the Algorithm", The MIT Press, Cambridge, 2001.
• FIG Commission 3, "Rapid Urbanization and Mega Cities: The Need for Spatial Information Management", FIG, Copenhagen, 2010.
• Frazer J.H., "An Evolutionary Architecture", Architectural Association Publications, London, 1995.
• Khabazi Z., "Generative Algorithms", Morphogenesism Education, 2012.
• Koolhaas R., "Delirious New York: A Retroactive Manifesto for Manhattan", The Monacelli Press, Inc., United States of America, 1994.
• Kondo S. and Miura T., "Reaction-Diffusion Model as a Framework for Understanding Biological Pattern Formation", Science 329: 1616, 2010.
• Mavroudi A., "Simulating city growth by using the Cellular Automata Algorithm", London, MSc AAC, 2007.
• Merks R., Hoekstra A., Kaandorp J., Sloot P., "Models of coral growth: spontaneous branching, compactification and the Laplacian growth assumption", Journal of Theoretical Biology 224, Amsterdam, 2003.
• Mertins D., "Where Architecture Meets Biology", University of Pennsylvania, 2007.
• Morelli L. G., "Computational Approaches to Developmental Patterning", Science 336: 187, 2012.
• Ohmori H., "Computational Morphogenesis: Its Current State and Possibility for the Future", Nagoya University, 2008.
• Roudavski S., "Towards Morphogenesis in Architecture", Multi-Science Publishing, Melbourne, 2009.
• Spyropoulos T., Schumacher P., Burry M., Frazer J., Steele B., "Adaptive Ecologies: Correlated Systems of Living", Architectural Association Publications, London, 2012.
• Tedeschi A., "Architettura parametrica. Introduzione a Grasshopper", Le Penseur, 2011.
• Thompson D. W., "Crescita e forma", a cura di J. T. Bonner, Bollati Boringhieri, Torino, 1992.
• Turing A. M., "The Chemical Basis of Morphogenesis", Royal Society of London, London, 1952.
• Von Neumann J., Burks A.W., "Theory of Self-Reproducing Automata", University of Illinois Press, Urbana and London, 1966.
116 EVOLUTIONARY AUTOMAT_I_©ITY
SITOGRAPHY
• AD Interviews: Andrew Hessel, http://vimeo.com/60863051
• Bargh J. A., "The ecology of automaticity: Toward establishing the conditions needed to produce automatic processing effects", http://www.jstor.org/discover/10.2307/1423027?uid=3738296&uid=2&uid=4&sid=21102461429151
• Callahan P., "What is the Game of Life?", http://www.math.com/students/wonders/life/life.html
• Carpo M., "On the consequences of technical and cultural change in architecture. A postface to The Alphabet and the Algorithm", http://architettura.it/files/20110404/index.htm
• Carpo M., "Pattern Recognition", http://architettura.it/extended/20060305/index.htm
• Chimera, "Mangal City", Sucker Punch, http://www.suckerpunchdaily.com/2010/01/03/mangal-city/
• Chindapol N., Kaandorp J.A., Cronemberger C., Mass T., Genin A., "Modelling Growth and Form of the Scleractinian Coral Pocillopora verrucosa and the Influence of Hydrodynamics", http://www.ploscompbiol.org/article/info%3Adoi%2F10.1371%2Fjournal.pcbi.1002849
• Co-de-iT, "Reefs", http://www.co-de-it.com/wordpress/reefs.html
• Conwaylife.com, "Cellular Automaton", http://www.conwaylife.com/wiki/Cellular_automaton
• Erioli A., "Design, controllo e creatività", Doppiozero, http://www.doppiozero.com/materiali/advancity/design-controllo-e-creativita
• Generative Design, "Conversation with: Prof. John Frazer", http://generativedesign.wordpress.com/2012/01/18/johnfrazer/
• Grigoriadis K., Robles A., Robles P., Shamma I., Fereos P., "Urban Reef", Evolo, http://www.evolo.us/competition/urbanreef-housing-skyscraper-in-new-york/
• Mediumlite, "Emergence and Self-Organizing Systems", http://mediumlite.com/procedural/week11.html
• Nettler J., "Architecture Enters Its Second Computational Revolution, Can You Keep Up?", Planetizen, http://www.planetizen.com/node/59500
• Rybczynski W., "Lost Amid the Algorithms", http://www.architectmagazine.com/design/parametric-design-lost-amid-the-algorithms.aspx
• Schueler G., "The Order/Chaos Relationship in Complex Systems", http://www.schuelers.com/chaos/chaos1.htm
• Schumacher P., "The Future is Ready to Start", Theory Against Theory, http://theoryagainsttheory.wordpress.com/tag/parametricism/
• Schumacher P., "Parametricism - A New Global Style for Architecture and Urban Design", http://www.patrikschumacher.com/Texts/Parametricism%20-%20A%20New%20Global%20Style%20for%20Architecture%20and%20Urban%20Design.html
• Soflin Z., "Data driven architecture", http://zachsoflin.com/2012/04/20/data-driven-architecture-2/#.UePDFkEwc2Q
• Wikipedia, "Cellular automaton", http://en.wikipedia.org/wiki/Cellular_automaton
• Wikipedia, "Self-organization", http://en.wikipedia.org/wiki/Self-organization
RUIS DERVISHI 117
ACKNOWLEDGEMENTS I dedicate this project to my Family. Thank you for all the support given during my life and for giving me the opportunity to come so far. I would like to express my very great appreciation to Professor Giovanni Galli for the valuable and constructive critiques during the development of this thesis and for the great architectural lessons he gave me throughout the university years. I would also like to thank Arturo Tedeschi for the tutoring and help with Grasshopper, and Giovanni Parodi for the constructive advice on the development of this project. A special appreciation goes to Professor Brunetto De Batté for introducing me to architecture and for the amazing teaching he gave me. A special thank you to all my colleagues at LAVA and to the LAVA directors for the great time spent together and for everything I learned working with them. The most special thank you goes to all my friends and university colleagues. Thank you for all the love and support you gave me. The biggest contribution to my personal and professional growth comes from you. It would be impossible to mention everyone, but I would particularly like to thank Arianna, Davide, Matteo, Antonio, Clementina, Alberto and Cecilia for the amazing and unforgettable time spent together. A very special thought goes to Marcello. Thank you my friend, it was a privilege to be part of your life.
“I THINK THE NEXT CENTURY WILL BE THE CENTURY OF COMPLEXITY”
Stephen Hawking, 2000
Creative Commons
2013 Ruis Dervishi.
All rights reserved. No part of this thesis may be reproduced in any form by any electronic or mechanical means (including photocopying, recording, or information storage and retrieval) without the prior written permission of the author, except in the case of brief quotations embodied in critical reviews and certain other noncommercial uses permitted by the Creative Commons conditions.