



Dissertation Thesis PGr. Program Advanced Design: Transdisciplinarity and Innovation in Architectural Design

Algorithmic Systems’ evolution through emergence differentiation

Ravanidou Theodora, January 2017


The copyright of the present work belongs to Ravanidou Theodora and to the Postgraduate Studies Program “Advanced Design: Innovation and Transdisciplinarity in Architectural Design”, AUTh. The research work was presented in the postgraduate study program ‘Advanced Design: Innovation and Transdisciplinarity in Architectural Design’ of the Department of Architecture at the Aristotle University of Thessaloniki in partial fulfillment of the requirements for obtaining a Postgraduate Degree in Architecture.

Polytechnic School of the Aristotle University of Thessaloniki
Department of Architecture
Postgraduate Study Program ‘Advanced Design: Innovation & Transdisciplinarity in Architectural Design’
January 2017



“Algorithmic logic is about the articulation of thoughts and a vague struggle to explore possibilities of existential emergence.” K. Terzidis, Algorithmic Architecture

Thanks to: I would like to warmly thank all members of the teaching group of the PGr. Program Advanced Design for their contribution to the research purpose, the structure of the subject and the research methodology, as well as the external collaborator Zaruka Mano from the Bartlett School of Architecture for his valuable contribution to the original structure and bibliography.



Abstract: The departure point of the present research is the contemporary tendency to incorporate algorithmic tools into design practice, as a method to continuously investigate new possibilities and reevaluate the designer’s position in present and future situations. The algorithm is analyzed through the philosophical background of the science of cybernetics, and not as a concept of information theory. Cybernetics is used as a transdisciplinary field that allows the analysis to be positioned within a wider and more global frame of interest. It is seen, through the philosophy of Gordon Pask, as a new science that deterritorializes concepts from their fields. As a result, through cybernetics we can approach the origins and the extensions of the use of algorithms in design on multiple philosophical and cultural levels. Emergence is the second basic concept through which algorithms are analyzed, classified and criticized. It is considered to be the basic element that explains and controls algorithmic structure and behavior. As an introduction to the research, in the first chapter of the thesis the concept of the algorithm is examined through three different possible scenarios. The first is the algorithm as a logical procedure. Here algorithms are analyzed as a different mode of thought, the soft thought introduced by L. Parisi, contrasted with Andy Clark’s concept of mindware and Dennis Bray’s wetware. The second scenario is the algorithm as an object, where the algorithmic structure is analyzed from an ontological point of view with perspectives from A. Turing, A. Kay, K. Terzidis and L. Parisi. The last scenario, the algorithm as a problem-addressing concept, tries to identify the basic changes that algorithmic use provokes in design methods. In the core of the thesis, the basic element, the algorithm, is analyzed as it successively evolved from first-order to second-order cybernetics and finally to the post-cybernetic era. First-order cybernetics is analyzed through the thermodynamic concept of entropy, second-order cybernetics through the concept



of turbulence, and finally the post-cybernetic era through the concept of control without control. The transitions between orders are examined with the concept of negentropy, developed by L. Brillouin, and with the critiques of B. Bratton and, earlier, A. Turing. The research finally focuses on how emergence occurs through the speculative use of computation in contemporary paradigms. The conclusion of the thesis includes comments on different aspects that arise concerning algorithmic prehension and focuses on the concept of emergence, which is assumed to create a new frame for novelty without control.



/gr/ Synopsis: The trigger for this research is the current practice in design that incorporates algorithmic systems as tools within a process of searching for new possibilities and of investigating the position of architectural design in a contemporary frame of reference. The algorithm is not analyzed as a concept of informatics, but through the prism of cybernetics, which is considered more suitable for conveying the meaning of the use of such a polysemous concept across diverse scientific fields. Cybernetics, discussed as a concept through the texts of Gordon Pask, allows the analysis to be placed in a wider frame of reference by means of a transdisciplinary view, and indicates that design using algorithmic structures has origins and extensions in different philosophical and cultural approaches. The concept of emergence is, together with cybernetics, the second basic pillar of the research, and is used to describe algorithmic systems through the differences they exhibit in rendering a particular mode of structure. As an introduction to the research, the first chapter presents an analytical account of the concept of the algorithm through an attempt to categorize the different views attributed to it by the theorists who have analyzed it, such as Luciana Parisi and Kostas Terzidis. The view of the algorithm within an ontological framework, together with the extensions this view presents, introduces into the research the discussion of incomputable objects as commented on by L. Parisi and Alan Turing, of object-oriented design as initiated by Alan Kay, and of metamodelling systems as analyzed by Felix Guattari.



The thesis analyzes the differences that appeared during the evolution of algorithmic systems in the passage from the first generation of cybernetics, which is described alongside the concept of entropy, to the second generation of cybernetics, with the appearance of the concept of turbulence, and on to the present condition of post-cybernetic control. The transitions between the generations of cybernetics are commented on through the concept of negative entropy (negentropy) as analyzed by Léon Brillouin, and through the critique of early cybernetic systems by Benjamin H. Bratton as well as, earlier, by A. Turing. In each phase the different way in which emergence phenomena appear in such systems is examined, while at the end the thesis focuses on how emergence arises today through the speculative use of digital media.





index/structure:

Introduction: computation, algorithm, cybernetics, emergence, architecture as a system (13)
1. The concept of the algorithm (17)
   a. Algorithm as a logical procedure (17)
   b. Algorithm as an ontological object (19)
      i. Incomputable objects (20)
      ii. Object-oriented design (21)
      iii. Metamodelling systems (22)
   c. Algorithm as problem-addressing concept (23)
      i. Implications on design theory (23)
      ii. Algorithmic systems and design consequences (24)
2. Emergence in 1st order cybernetics (33)
   a. The concept of entropy (33)
   b. Characteristics of 1st order algorithmic structures (35)
3. Emergence in 2nd order cybernetics (39)
   a. The concept of turbulence (40)
   b. The appearance of autopoietic systems (41)
   c. Interactivity and Responsiveness (43)
   d. Machine learning and Artificial Intelligence (44)
4. Emergence and Post-cybernetic control (49)
   a. Critics in production of complexity in post-cybernetic schemes (50)
   b. Control without control (51)
   c. Emergence in chaos theory and complexity theory systems (52)
   d. Emergence in Artificial life systems (52)
5. Embedding Emergence into the practice of Speculative Computation (59)
   a. Speculative computation in Architecture (60)
   b. Emergence in state-of-the-art algorithmic systems (63)
Epilogue-conclusions: redefinition of speculative computation, design extensions in architecture (67)





Introduction:

Gordon Pask, whom Luciana Parisi mentions as a representative of cybernetic thought, interprets the architect’s practice since 1800 as that of a system designer, even though at that time the architect had very firm values and styles to serve. In the article The Architectural Relevance of Cybernetics, he explains this position in detail by commenting on the fact that just as a building is functionally a system (a system of movement, function, siting, connection), so its construction and design are a system. On a larger scale, the way a whole city is built rests on a systematic plan that acts as a prediction and future model for its development. Pask comments that indirectly the architect is always asked for an adaptation of his work to an existing system, a good integration into its environment, so in fact what he has to design is a system. Even though the practice and subject of architecture today is sophisticated in relation to the past of the 1850s, the architect has since then essentially been called upon to serve motion systems, established systems of perception of space, and social systems that had to be adapted to the farm, the villa or the theater he was called to design. Pask refers to projects that had begun using new technological innovations, such as Temple Meads, the Tropical Palace and the Crystal Palace. The solution for them was to be seen as part of the ecosystem of human society. According to Pask, the hypotheses developed there contained the element of cybernetic theory, and the thought that appeared in them was cybernetic thought coming to light.



The concept of cybernetics has been commented on since 1960 by Pask as a newly established science, whose main feature is the interdisciplinarity that arises “when we think of the economy not as economists, of biology not as biologists, of engineering not as engineers” (Pask, 1960). It is a science that characterizes systems not in their epistemological terms but with a global view that examines how they construct, reproduce and develop themselves, with the basic question of how they are self-organized. Through a cybernetic examination, a social, natural or cultural system can find its roots in many scientific fields, and correlations can be found between concepts borrowed from different fields of thought. The reason cybernetics is used in describing algorithmic structures is the effort to define their role in a wider cultural and physical context. This is why the concept of cybernetics is used as an element of analysis in this research, alongside the fact that algorithms have entered design, and especially architectural design, while also being the building block of the digital media that are the new design tools. The discussion of systems designed by an architect, and Pask’s commentary on it, is not at all accidental at a time when algorithmic systems appear as design tools. Obviously, the algorithm is a system that has its own extensions. Given its nature and its rapid evolution and integration into networks of algorithms, it is a subsystem that can contribute to the design of the systems that architecture is concerned with, as the science of designing, assembling and restructuring systems. Deleuze’s folding flows and Bernard Cache’s scales can be matched to architecture through a kind of system folding that shapes and is being shaped. For this difficult operation architecture therefore needs a powerful tool for building systems, and what is available at the given time are algorithms; thus much of modern architectural thinking is built around them. How algorithms are structured as a system will be analyzed in the first chapter, which ends with the design extensions of these systems. Precisely because this is not always apparent, the purpose of this research is in part to make algorithmic practice clearer, more conscious and more perceivable in the way algorithms are used in the design context. The concept of the algorithm will be extensively analyzed later, with extensions of its structure, the theoretical background that surrounds it today, and the philosophical background that is potentially “shaped”



or formed around its use. Therefore, the concept of the algorithm will be analyzed by first focusing on it, then changing the scale to the environment in which it is created, and finally analyzing the effect of its use on the environment that shapes it. The fact that the algorithm is more than its functional structure, as its use has design and physical extensions, drives the research into the philosophical reflection of many theorists who have treated the algorithm as an ontological element. In addition, a review by Alan Kay added a new property to algorithms, that of an object. The algorithm as an object will be analyzed after its philosophical and ontological dimension, as it is a next step in its understanding, one which led to a radically new version of the practice of design, object-oriented design. The purpose of this research is to draw conclusions about the use of digital tools today. Thus, through a sequential description of the development of algorithmic systems, the research focuses on the most advanced form of algorithmic systems, artificial life systems and stochastic programming. In all of the subchapters what is sought is the way in which emergence phenomena appear in algorithmic systems. Emergence is a concept that has received many interpretations focusing on different characteristics, but what is used in this research is the interpretation of Peggy Holman, which identifies emergence as the appearance of order through chaos. More analytically, in combination with other interpretations that envision emergence as the tendencies that appear in systems consisting of a number of units due to the interactions between them, this means that emergence is the creation of behaviors within a system that will guide it: because they function as trends, they will inform and eventually clarify what its structure is. We can link emergence with the elements of novelty that appear in a system.



...



1.

The concept of the algorithm

The purpose of this paper is not to make a historical retrospective of the appearance of the algorithm, as its history is relatively short and has not yet exhibited historical ramifications that would define major points of intersection and separation. For example, there are no algorithmic types that are no longer used, and no obsolete digital signals. Thus, in the theoretically short path of the digital algorithm, what will be studied is its evolution into different kinds of use: the separation of the cases where it is used first in 1st order cybernetics, secondly in 2nd order cybernetics, and especially how it is transformed in the era of post-cybernetic control. The phenomenon of emergence in each case arises as a different possibility and in a different way, and it is the main objective of this research to talk about emergence in modern models of algorithmic structures. Seeing the algorithm as a logical process will be analyzed in the first subsection without many extensions, as it only leads us to perceive it as a series of instructions and a succession of predetermined steps. The algorithm will be studied predominantly in the second subsection, concerning its ontological dimension as analyzed by L. Parisi and K. Terzidis. The third subsection offers ground for discussion of the consequences of its use and its products, as well as its integration into the design process.

a.

Algorithm as logical procedure

As Terzidis notes, in design, algorithms can be used to solve, organize and investigate problems that present increased visual and organizational complexity. In its simplest form an algorithm uses numerical methods to address problems. While we are accustomed to perceiving numbers as discrete units, as elements that separate entities, in computational terms they are in fact used to give an unlimited degree of fragmentation and therefore to offer a theoretical continuity. The basic components of the algorithm (constants, variables, classes, functions and libraries) are linked to each other by grammatical and syntactical rules, enabling the design of logical shapes. K. Terzidis describes the algorithm as a system, a language of communication



between the human mind and the computer. This automatically leads us to think about the networks that produce such a language. Mainly from the human point of view, the discussion of a language of communication opens up a field of investigation as to whether the neurological processes that animate human thought have anything to do with the serial structure, sequence and multiple repetition that are the basic ways of structuring an algorithm. Andy Clark, using the term “mindware”, examines how human perception works; the term is used to parallel, or better to examine, whether similarities develop between the way thought is generated in the human brain and the software that a computer deploys. Dennis Bray has used another term, “wetware”, to describe how operations are performed in simple organisms such as the amoeba; in such early forms of life there is no brain thinking or central control, but a mode of action through chemical reactions that evolve into a basic neural network. What triggers these reactions is first the environment, but by extension also a kind of comparison with past events experienced by the organism, which gives a certainty in anticipating changes and making specific decisions. The algorithm as a logical process is part of a way of thinking, since it enables the designer to create a way of perception and to construct perceptual charts as processes that produce concrete actions. Therefore, we can integrate the algorithmic process into a sort of way of thinking, possibly into what Luciana Parisi develops as “soft thought”, by seeing the algorithmic structure as the neural network that evolves a computational function. This includes the perception of concepts that are not included in human thought, or at least cannot be simulated within its computational capacity, such as incomputable objects or infinity, or that are mostly grasped intuitively.
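To make the notion of the algorithm as a finite series of instructions concrete, the minimal sketch below (in Python, with values and names that are purely illustrative assumptions rather than part of the thesis) expresses a simple rule through the basic components named above: constants, variables, a repetition command and a selection condition.

# Minimal sketch: an algorithm as a finite series of instructions.
# Constants, variables, repetition (while) and selection (if) generate
# a simple "design" output: points along a line, thinned by a rule.

STEP = 0.5          # constant: spacing between candidate points
LIMIT = 10.0        # constant: extent of the line

def generate_points(step=STEP, limit=LIMIT, keep_every=3):
    """Return points on a segment, keeping only every n-th one."""
    points = []
    index = 0
    x = 0.0
    while x <= limit:                  # repetition
        if index % keep_every == 0:    # selection
            points.append((x, 0.0))
        x += step
        index += 1
    return points

if __name__ == "__main__":
    for p in generate_points():
        print(p)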



b.

Algorithm as ontological object

The fact that the algorithm can be treated as something more than a sequence of steps, a series of translation commands for communication with the computer, is something that has been supported by many theorists, including L. Parisi and K. Terzidis. Algorithms acquire a sphere of influence, and precisely because they acquire the ability to interact with the environment in a bidirectional way, they create around them an ontological reference framework that allows them to be examined in a different light. The starting point of an ontological view of algorithms is the appearance of object-oriented design, which will be analyzed in a following subchapter. The basic feature is that an algorithmic structure is examined beyond its static command-sequence structure, and so is the “product” that an algorithm produces. Objects derived from algorithms, named objectiles to denote their hybridity and their differentiation from the static object, are commented on by Parisi as objects that are constantly subject to change. She thus concludes that the object ultimately does not define change; rather, change defines what the object is. Here an ontological paradox appears, where computational objects of discrete and finite steps produce structures that are constantly changing. “Reflective theories for objectiles and blobs are based on the temporal processes and the non-linear or differential interaction of the parties” (Parisi, 2013). Seeing the algorithm as an object is a perspective within a wider philosophical trend of object-oriented thinking. Developed by Graham Harman, it examines how a system is built of independent objects that are not lost in one another: they are distinct, they persist, and are not subject to constant change, but are the basis upon which constant changes occur. The reason why we can look at algorithms from an ontological point of



view will be explained in the sub-chapter on incomputable objects, where the extensions of the algorithm at the design and mental level will be further analyzed. The sub-chapter on object-oriented design will analyze how this problematique appeared, developed and led to a new design perspective with regard to the use of digital media.

i.

Incomputable objects

In the first chapter of Contagious Architecture, Luciana Parisi consistently refers to the algorithm as an object and not at all as a logical process. The concept of incomputable objects is directly related to computation, both by definition and in substance. The algorithm can be said to be precisely the expression of the incomputable, and its reference model. She points to Alan Turing as the first to deal with the problem of the incomputable object, and comments that he tried to transform the limits of computational ability into algorithmic probabilities. This practically means that, for Turing, any unrecognizable object is a chance that persists through the execution of an algorithm. Also, for Turing, the incomputable determines the limits of computability, as there is no combination of rules that can predict whether the calculation of the data will go ahead and whether it will reach an end result. The objects of Graham Harman’s theory, as presented by Parisi, are autonomous and cannot be reduced to their parts unless we consider these parts to be other objects, thereby defining the way they relate to each other. This definition is not at all random when we talk about algorithmic objects, which consist of partial totalities that constitute the general set of the algorithmic object. Relationships are also objects, Parisi emphasizes, and this is fully relevant to the description of the algorithm as an object, since it is a continuous search for and a continuous definition of relationships between elements. (1) The elements of which an algorithm consists are a series of symbolisms, but this serial, sequential structure cannot be considered an entirely predetermined sequence. If we considered the algorithm to be a predetermined set of instructions, the consequence would be to deny any possibility of innovation, and the use of digital media would in fact be a tautology of already-made decisions.
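The limit described here, the impossibility of a rule that predicts whether a calculation will go ahead and reach an end result, is the halting problem. A minimal sketch of Turing's diagonal argument follows; the function halts is hypothetical and exists only for the sake of the contradiction, it is not an available procedure.

# Sketch of the halting-problem argument (hypothetical, for illustration).
# Assume, for contradiction, a total decision procedure existed.

def halts(program, data):
    """Hypothetical rule deciding whether program(data) eventually stops."""
    raise NotImplementedError("no such general procedure can exist")

def paradox(program):
    if halts(program, program):   # if it would stop...
        while True:               # ...then loop forever
            pass
    else:
        return                    # ...otherwise stop immediately

# paradox(paradox) would halt exactly when halts() says it does not:
# a contradiction, so no combination of rules can decide in advance
# whether an arbitrary computation reaches an end result.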



As Parisi also comments, algorithms are not calculators of probabilities but potentiators. Algorithms as real objects are referred to as prehensions of mental and physical data, intangible and physical forms of the computational process of digital media. By prehension L. Parisi implies that algorithms are not simple rules encoding complexity, precisely because prehension allows complexity to enter existing data structures. Thus, the synthesis of algorithmic structures and computation is transformed into speculative reason. And that is why the term speculative currently accompanies the modern use of digital media.

ii.

Object-oriented design

The perception of the algorithm as an object has created a new philosophy in design, object-oriented design. One of the first to create the philosophy of object-oriented programming and introduce its elements into use was Alan Kay, for whom the programming language is oriented towards working with objects rather than with commands. He argued that data, structures, code and commands are an inseparable set, and considered that the computational objects in a piece of software can function as black boxes for the user, who can use them without necessarily knowing what they are and what they contain. He thus developed a rationale that also prompted a more intuitive approach to programming, which nowadays is evident and helps users make use of programming without necessarily being programmers. Kay’s objects can be managed by users and are subject to change, so they can be used in a creative and non-deterministic context. This shift in programming was highlighted by the theoretician Lev Manovich with the comment that it transformed the computer “from being a culturally invisible technology to a new machine of civilization” (Parisi, 2013). Object-oriented design is the basis on which the culture of the digital element evolved. It is the starting point for the creation of interactive software, user-friendly software that allows the rapid use of digital media.
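A minimal sketch of this idea follows; the class and method names are hypothetical and chosen only for illustration. The user manipulates the object through its interface, as a black box, without needing to know how its internal state is represented.

# Minimal sketch of an object as a "black box" in the sense described above.
# The user handles the object through its interface only.

class Panel:
    """A design element whose internal representation stays hidden."""

    def __init__(self, width, height):
        self._width = width      # internal state, not meant to be inspected
        self._height = height

    def scale(self, factor):
        """Change the object without exposing how it stores its geometry."""
        self._width *= factor
        self._height *= factor
        return self

    def area(self):
        return self._width * self._height

# Usage: the object is addressed through messages, not through its internals.
p = Panel(2.0, 3.0).scale(1.5)
print(p.area())   # 13.5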

iii.

Metamodelling systems

The fact that we treat the algorithm as an object is in itself capable of setting a new frame of reference around this concept. In order not to treat



the algorithm as a representation of reality or as an end product, Parisi introduces a transformed version of Felix Guattari’s metamodelling system. She attempts to extend her hypothesis on the substance of the algorithm as an object that surpasses both its mathematical substance and its physical substance as a computational object. Guattari’s concept of metamodelling describes an extra space of unrelated realities. The extra space of algorithmic structures is not created by them, nor does it create them; it responds to random data that is experienced as part of a set. Guattari’s problematique is essentially a critique of the concept of the model, which is perceived as a cybernetic-systemic scheme. The model, for him, is the simulation of a form of perception (appearing on every level: social, cultural, political or aesthetic). Models are therefore deductions of a diagrammatic space created by intersections, and they operate with abstract symbols and signs. This diagrammatic space, which for Guattari is a metamodel, is not a series of prototypes that are repeated; it does not denote a pattern of behavior that is repeated, but describes how, through a certain process, some contingencies outweigh preexisting typologies. All of these features are used and fit into the description of the algorithmic structure as an object. The conclusion of the above metamodelling theory is that eventually any combination of rules that creates a model essentially constructs its own mappings, its own reference points, and automatically its own analytical methodology and theory of approach. Guattari’s basic examples of metamodels are mathematics and software. Just as mathematics is a self-referential system that is not proven by the physical world, so, respectively, the concept of the algorithm also works. The intersection of degraded points (for example, mathematical symbols degraded from the formal, common language system) and objects (degraded from their physical substance) defines the metamodel as both the discovery and the construction of new realities. The metamodel of the algorithmic structure is more than a combination of elements such as algorithmic thinking, functional symbols and digital objects. It goes beyond the concept of a rule and creates a new space-time event of a new reality that combines the material with the intangible and the digital.

c.

Algorithm as problem-addressing concept



The algorithm as a design tool has its own peculiarities, and the way it is used to solve a design theme affects the entire design context. K. Terzidis refers to algorithms as “possible paths leading to possible solutions”. He defines the algorithm as the linguistic expression of a problem that is broken down into linguistic symbolic elements and logical linguistic processes. This way of structuring the algorithm helps its description in steps and its communication to other users for further processing. Its ability to produce a new system to solve a design theme, and the integration of this into the picture of Guattari’s metamodel system, lead to the conclusion that whenever an algorithmic system is constructed, a cognitive map of data is created. The construction of a new system that “understands” data in a way defined by the designer himself implies the shifting of the design object from the final result to the system that produces it. Subsequently, the role of the designer is not only to produce a final result, but to handle all the inputs and extensions that are directed at his design object. He is required to construct a mental scheme before constructing the algorithmic system and before producing its design product, which results indirectly. These features are attributed to design due to the use of algorithms and to the number of possibilities that come as a consequence.

i.

Implications on design theory

The definition of the algorithm according to the origin of the concept, as Terzidis gives it, describes it as a process of addressing a problem through a finite set of instructions that use repetition and selection commands. According to Terzidis, algorithms are not just the integration of a language into computers; the algorithm is a theoretical construct with profound philosophical, social, design and artistic implications. Design through their use is a process of conception, and is identified by Terzidis with the sense of the “plan”, which contains not so much the concept of future design as the concept of the search for archetypes, and is associated with the presocratic philosophical position that nothing is produced from nothing and nothing is lost into nothing. The use of algorithms in design shifts it from a problem-solving



orientation to a problem-addressing orientation. The rationale of problem-solving design is basically based on the initial data of the problem and seeks the optimum among a range of possible solutions. The solution in this case comes from the same field in which the original data and the wording of the problem are grounded. In the problem-addressing version, the intention is to place the problem in a thematic field possibly different from the one that produced it. Algorithmic design allows the possibility of creating a system specific to this problem and of determining how it will adapt to real data and inputs. Especially in terms of architectural design, architecture as a science differs from the rest of the scientific fields in that a problem is addressed not only in its solution but also in its implications. The cause-effect tactic is not routine in architectural practice, where a more open and often intuitive approach occurs, and this is why problem-addressing logic develops compatibility with this field and gives power to algorithmic thinking as a design tool. Design, for many theorists, is oriented towards the ‘virtual’ and the ‘possible’ rather than the ‘actual’ and the ‘existing’. Thus, its nature is undefined, vague and uncertain, and does not necessarily imply the appearance of some form, but rather the appearance of a combination of thoughts that lead to the assumption of a form. “Algorithmic logic is about the articulation of thoughts and a vague struggle to explore possibilities of existential emergence.” (K. Terzidis, 2006)
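As a rough illustration of the contrast drawn here (a sketch under assumed, toy conditions, not Terzidis's own formulation), problem-solving design selects an optimum from a fixed range of candidates, whereas problem-addressing design constructs the system that keeps producing and reshaping candidates as real data arrive.

# Hypothetical sketch contrasting the two orientations described above.

def problem_solving(candidates, score):
    """Pick the best answer from a fixed range of possible solutions."""
    return max(candidates, key=score)

def problem_addressing(generate, adapt, inputs):
    """Build a system: keep producing and reshaping candidates as real
    data and inputs arrive, instead of fixing a single optimum."""
    state = generate()
    for data in inputs:
        state = adapt(state, data)   # the system, not the answer, is designed
        yield state

# Example use with toy functions:
best = problem_solving([1, 4, 2, 9], score=lambda x: -abs(x - 5))
print(best)   # 4, the candidate closest to the target 5

states = problem_addressing(lambda: 0,
                            lambda s, d: s + d,
                            inputs=[1, 2, 3])
print(list(states))   # [1, 3, 6]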

ii.

Algorithmic systems and design consequences

Although the modes of operation of all algorithms could be said to derive from a few basic functional structures, such as repetition and selection, the ways they have been used creatively have developed different algorithmic systems which, in addition to their different command structures, also address different types of conditions and results. The realization that the use of algorithms in design introduces new fields to explore repositions the approach not only to the design result and the process, but also to the core itself around which the object of design



is rotated, as discussed in the previous sub-chapter. Automatically, the need to use the algorithm makes the designer aware of the design extensions resulting from its structure. Obviously, algorithmic systems exhibit very many variations, and any design concern may adopt or create a new algorithmic structure. Beyond an attempt to categorize them clearly based on the structure of their commands, specific examples of algorithmic systems that form the basis for many variants are given below. A basic system that uses only the properties of a shape to produce a structure is the sequential transformation system. The shape can be designed using the basic properties of an object, modifying them with a simple repetition rule. Here the algorithmic structure generates “rhythm, repetition and evolution” (K. Terzidis, 2006). Although K. Terzidis analyzes the algorithmic structure of this system with reference to geometrical transformations, these are general schemes that describe wider concepts. Another common pattern used in system structures is that of Boolean systems. This structure examines the interaction of two elements and is expressed in four different versions: union, intersection, difference of the first from the second (A difference B), and difference of the second from the first (B difference A). It uses the basic logical operations and, or, not. The algorithmic structure can perform operations between sets, and its roots are located in the Boolean algebra developed in 1854. Beyond its simplest form, the system can be applied in conjunction with sets of repetitive processes, leading to great complexity. (2) Continuing with other systems of algorithmic structure, stochastic search uses a random search until a given condition is reached. The commands that make up the structure of the algorithm are here, again, repetition and a selection condition. The randomness introduced is also what gives this particular system the title stochastic. Fractals, cellular automata and versions of L-systems are all systems that, by using iteration and the selection condition, define different ways of developing, each through the use of selection from a certain range of versions to create the next element. The relatively recent agent-based systems, or BDI systems (Belief-Desire-Intention) as reported by L. Parisi, are probability models with a much more complex structure than those mentioned above, operating through the



development of tendencies and behaviors, a process that Parisi mentions as a thought action. In multi-agent systems a plurality of units is programmed to have a specific behavior with respect to neighboring elements. Structures are created only by the interaction between the units, but at the same time the units develop specific tendencies and behaviors. It turns out that algorithmic systems carry specific origins and extensions given the way they are structured. Beyond the fact that the algorithm itself has offered possibilities for processing new types of geometry and for designing in non-Euclidean spatialities, and leaving aside the purely geometric and spatial part, the algorithmic system itself imposes, guides and processes specific data, and the design thus refers not only to visual or sensory parameters but to the wider philosophical context within which the design object will move.
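As a concrete instance of the iteration-plus-selection pattern described above, and of the cellular automata illustrated in the figures that follow, the sketch below runs a one-dimensional cellular automaton; the choice of rule 90 and the grid size are illustrative assumptions only.

# Minimal sketch of a 1D cellular automaton: each new cell is selected
# from a fixed range of versions (the rule table) based on its neighbors,
# and the same step is iterated to grow the structure.

RULE = 90  # illustrative choice; any elementary rule 0-255 works

def step(cells, rule=RULE):
    n = len(cells)
    new = []
    for i in range(n):
        left, mid, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        index = (left << 2) | (mid << 1) | right   # neighborhood as 0..7
        new.append((rule >> index) & 1)            # selection from the rule
    return new

def evolve(width=31, generations=15):
    cells = [0] * width
    cells[width // 2] = 1                          # single seed cell
    history = [cells]
    for _ in range(generations):                   # repetition
        cells = step(cells)
        history.append(cells)
    return history

for row in evolve():
    print("".join("#" if c else "." for c in row))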





example of agent-based system driven by alignment-separation behaviors (project within the framework of the P.Gr. Program Advanced Design)



example of a structure evolution using a cellular automata algorithm (project within the framework of the P.Gr.Program Advanced Design)



example of structure evolution using boolean logic in algorithmic structure. (Fig. 4.11 by K. Terzidis, Algorithmic Architecture)



1st chapter citations:

1. 1.b.i. This is also the beginning of the development of the systemic theories of the 1990s that appeared in algorithmic systems, mainly within the current of parametricism.
2. 1.c.ii. Terzidis classifies this design logic within contrast design theory by referring to the example of Iakov Chernikov, where design is a bottom-up approach that includes the composition of elements using basic combination patterns such as connection, set, penetration, splicing and coupling, which are the language for a Boolean type of design whose way of combining elements is familiar to architectural thinking.



...



2. Emergence in 1st order Cybernetics

a.

The concept of entropy

Entropy is a basic concept that summarizes the way the first generation of algorithms was constructed and developed, and it is analyzed as such by L. Parisi and T. Terranova. It is a concept derived from the second law of thermodynamics and formulated in physics with a specific relationship by Boltzmann. It was later used by Shannon in computer science to denote its equivalent in information science. The laws of thermodynamics appear with the industrial revolution, influenced by the operation of the steam engine and directly linked to a newly emerging social and political context (1). Entropy is defined here through the tendency of a system to constantly return itself to a certain state. Consequently, the system is in a constant search for a point of equilibrium. The energy of a system is constantly being used in its restoration; the system in this sense is presented as closed, and the result that Parisi calls the ultimate triumph of entropy is heat-death: the equilibrium of the energy of the system, the maximum increase of its entropy and its ultimate state of equilibrium, which is in fact its death, as there is no longer any difference in energy to create any further action. Theoretically and philosophically, analyzing a system in such an entropic way leads to an inevitable ending. In 1st order cybernetics, algorithms are built on this logic. In the first generation of cybernetic control, the concept of entropy was kept out of the foreground and the main concern was homeostasis. As Norbert Wiener says about this period, “life is a battle against the lack of organization, chaos and death” (Parisi, Terranova, 2000). So a system is built with a tendency towards introversion against the perpetual tendency of the environment to disturb it. The concept of control arises through the



concept of homeostasis: the negative feedback that occurs as a reaction of a system to external influences. A system searches for mechanisms that, by changing its variables, can restore it to its original state, and this is its most basic and dominant function. According to Parisi, the first generation of cybernetics is concerned with entropy but does not solve its problem. In order for entropy to produce not only useless energy but also productive energy, entropy must be treated as a direct energy, not as an external one. The first to deal with the change in the concept of entropy is the physicist Ludwig Boltzmann (2), who detaches it from its relationship to thermodynamics and steam engines. Entropy in his theory becomes the equivalent of randomness, a view that is more compatible with the later turn of cybernetics. At the same time, Claude Shannon embodies the concept of entropy in computer science. While Léon Brillouin describes information as negentropy, Shannon identifies the two concepts by saying that the more random a message is, the more information it contains. The change in the concept of entropy will be studied in the next chapter. (3) Emergence in such systems arises through the constant search for the equilibrium point of the system. Emergence is essentially a differentiation that the system produces in its attempt to balance. A point of equilibrium in physics is defined as the equal distribution of energy among all the bodies of the system. In an algorithmic system, equilibrium occurs at the end of the simulation. The system is defined by a certain number of constants and variables and is specified to have a final state. The uniqueness of the final state is not necessary in such systems, but it is assumed that there will be one. All the intermediate stages do little more than increase the entropy of the system, just as happens in physics. The more variations occur in the states of a system, the more its entropy increases. Increasing the entropy of the system in a way means an ever-increasing tendency to return to its equilibrium, something that establishes equilibrium even more as the basic characteristic of the system.
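For reference, the standard textbook formulations behind the concepts invoked here, Boltzmann's thermodynamic entropy, Shannon's information entropy and Brillouin's negentropy as the distance from maximal disorder, can be written as follows (these are the established definitions, not formulas introduced by the thesis):

\begin{align}
  S &= k_{B}\,\ln W
      && \text{Boltzmann: entropy of a macrostate with } W \text{ microstates}\\
  H &= -\sum_{i} p_{i}\,\log_{2} p_{i}
      && \text{Shannon: entropy of a message with symbol probabilities } p_{i}\\
  J &= H_{\max} - H
      && \text{Brillouin: negentropy, the information bound in the system's order}
\end{align}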



b.

Characteristics of 1st order cybernetics

As has been shown by the analysis of the concept of entropy, computation in the first generation of cybernetics is a closed system, a “formal language that can describe every biophysical process without the need to act or practice in the external environment” (Parisi, 2013). It is a self-sufficient system and, as with the concept of entropy and the sphere of the thermodynamic system, its future behavior and ultimate conclusion are predictable through a deterministic behavior with predetermined data leading to a certain range of results. These algorithmic systems introduce the concept of feedback, since in every entropic thermodynamic system there is an element that establishes for the system a homeostatic equilibrium, a point of balance. Feedback is used to introduce entropy between input and output, the inputs and the result of the system. Therefore, the final versions respond to the initial state and the initial input data. The realization that such a system of algorithms can process a large volume of data shifts the sphere of interest towards systems with a large multiplicity. In these systems, the elements and the relations between them are structured in complex ways that make their processing or simulation by the human brain impossible, due to the quantitative nature of their complexity. K. Terzidis, in order to separate this capability of an algorithmic system from the possibilities of the human mind, processes and analyzes the concept of complexity by contrasting it with the concept of the periplocus. Complexity, for Terzidis, is a concept that is manageable mainly by the computer, since even when it comes to a simple functional structure, the number of iterations or the size of the structure of the system can only be handled by digital means of great computational power and storage capacity. On the contrary, the concept of the periplocus (coming from the Greek περίπλοκο), which does not translate as perplexity because it does not imply the deliberate distortion suggested by the English term, is structured on a very simple reasoning basis which can present intricacy, but it does not require computational power. As an example of the periplocus, Terzidis refers to the structure of the labyrinth, made by the human mind and characteristic of the



intricate conception it can produce, which has a completely different starting point from the complexity we encounter in digital computing. Emergence in the first generation of cybernetics is a rise of complexity: an end result that the human mind can imagine but cannot itself simulate from the original data that is put into the system producing it. It is a first type of emergence in early algorithmic systems which, although they are closed systems without external adaptations and data flows, marks the beginning of experimentation and of the exploration of the possibilities opened up by algorithmic design. It results from simple relationships developed between the elements of the system, which lead, through computational power, to a result of unprecedented complexity.
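A minimal sketch of the negative-feedback scheme described above, with a hypothetical set point and gain chosen only for illustration: the system measures its deviation from the set point and feeds a correction back, so every run drifts deterministically towards the same equilibrium.

# Minimal sketch of first-order, negative-feedback homeostasis.
# The output is fed back as a correction towards a fixed set point,
# so the closed system converges deterministically to its equilibrium.

SET_POINT = 20.0   # the state the system keeps restoring (illustrative)
GAIN = 0.3         # how strongly feedback corrects the deviation

def simulate(initial_state, disturbance=0.0, steps=50):
    state = initial_state
    trajectory = [state]
    for _ in range(steps):
        error = SET_POINT - state             # deviation from equilibrium
        state += GAIN * error + disturbance   # negative feedback + environment
        trajectory.append(state)
    return trajectory

print(simulate(5.0)[-1])    # approaches 20.0: homeostatic equilibrium
print(simulate(35.0)[-1])   # approaches 20.0 from the other side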



2nd chapter citations:

1. 2.a. According to Luciana Parisi and Tiziana Terranova, the laws of thermodynamics were used by industrial capitalism to construct a system of technological, economic and social order (Heat-Death, p. 6). The second law of thermodynamics is in fact a theoretical framework in which a system continuously absorbs the energy produced in it in order to arrive at homeostasis at its point of equilibrium. The analogy with the capitalist system is that in industrial capitalism the energy released by a unit is eventually absorbed by the system and embodied in it. Perhaps this is why industrial capitalism finally reaches the heat-death point and collapses.
2. 2.a. Boltzmann was a physicist born in 1844 and his investigations concern, among other things, theories of gas movement (gas theory). Claude Shannon and Léon Brillouin are half a century later and were engaged in information science (information theory). The fact that their views are used to describe the concept of entropy, which began as a concept of physics, suggests a tendency to approach concepts in a more interdisciplinary way, which explains why such concepts appear in computer science and also implies the emergence of cybernetics.
3. 2.a. The reason the question was resolved was, according to the writers of Heat-Death, that production is desired by capital and by the forces released during its historical development. Beyond the ultimate destruction of the universe, a perpetual, unlimited and discontinuous process of production is contemplated, where nothing is lost. Production has escaped the thermodynamic logic of the industrial establishment.



...



3. Emergence in 2nd order cybernetics

The passage to the second generation of cybernetic control comes about through the introduction of a new perspective in the overall view of the natural sciences. After the intense study of thermodynamics, a number of theorists came to rethink the entropic viewpoint, a rethinking that would eventually lead to its mutation and result in its absorption into a completely different view. The main turning point in the course of the concept of entropy is its association with information theory by the theoretical computer scientist Claude Shannon. Immediately after this assumption, entropy becomes a new concept; its new perspective transforms it into something completely different and leads the study of systems and interactions to new levels of philosophical and scientific exploration. Entropy came to be treated as a quantity of information, and therefore the increase in entropy that led a system to its self-dissipation, to the state of heat death, now automatically meant an increase in information. The maximum amount of information is what appears to be random. Randomness began to be investigated, and the new vision holds that the unexpected and the random are the maximum gathering of information, which is precisely why something is considered random and cannot be determined. In this process, the theory of chaos and the theory of complexity were developed, mainly after the passage into the third generation of cybernetics, which will be analyzed in the next chapter. As far as 2nd order cybernetic control is concerned, the concept of emergence is less well defined, because new concepts such as autopoiesis are more central within system theories.



a.

The concept of turbulence

As was mentioned in the description of the concept of entropy (chapter 2.a), at the end of the first generation of cybernetic control the notion of entropy comes to deal with the context of randomness and information. This correlation has the effect of changing the way systems and their operation are envisioned. “Avoiding the problem of randomness and entropy is a focus on the concept of an autopoietic system” (Parisi, Terranova, 2000). Thus, the theories of autopoiesis were developed as a special category within second-generation cybernetic control systems. The system is now focused on producing itself. It may collaborate and communicate with many systems, but the basic identity in the 2nd generation is the autopoietic capacity, the capacity of the system to produce itself. According to Parisi, the entropy question was adjusted in the second generation of algorithms in order to find an answer that includes unlimited production of energy, and to exclude the assumption that every system comes to an end through the application of the laws of thermodynamics and their final result of heat death. Thermodynamics and entropy ceased to be a threat to life and were replaced by the principle of a universal production that is capable of incorporating and organizing all endless variations. While classical thermodynamics deals with “structures of decreasing complexity” (Parisi, Terranova, 2000), with systems that are slowly losing their ability to function, as described by Parisi, by contrast the systems of non-equilibrium, the systems of the new thermodynamics that sets aside the question of whether there is a point of equilibrium, concern totalities and increase their complexity in order to regain the capacity to function continuously. So they do not end up in heat death. Shannon, who was the first to link entropy to information, created a turning point, and entropy began to be treated as the thermodynamic cause of a system’s movement towards self-organization, and not as a heat-generating function that ultimately leads to death. Consequently, the concept of chaos was redefined. Instead of being treated, as in the Victorian era, as a disordered state opposed to life and energy, it acquired a positive sign through the thought that it increases complexity and produces new life. In this sense the concept of entropy was absorbed and covered by the



concept of disorder. The new vision, which wants systems to be guided by disorder (turbulence) rather than entropy, is crucial and central to the study of the lack of equilibrium and to the emergence of dissipative systems, systems that have the possibility of dissolution, a possibility that entropic systems did not have. Dissolution is defined as converting into something new or retaining the ability to return to oneself. On this basis, the systems of complexity theory and of chaos theory begin to be studied more intensely, and the considerations coming from the positive sciences also reshaped the principles of a universe that is contingent and nondeterministic in its laws. Parisi associates these theories with the concept of the divergent course of the atom, the slightest deviation from the straight path that ultimately creates life and space. This deviation is what creates differential energy and matter, or what Parisi calls a non-Platonic ontology of becoming. The irreversible time of entropy leads to death, while the irreversible time of deviation is a source of order. The vector of time does not lead to ataxia (αταξία, lack of order), but rather to a different kind of order, unpredictable but coherent, a fluid and disruptive order.
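The behavior invoked here, a simple deterministic rule whose slightest deviation in the starting state grows into a turbulent yet coherent regime, can be illustrated with the logistic map, a standard example from chaos theory; it is used below only as an illustration of far-from-equilibrium order, not as a model taken from the thesis.

# Minimal sketch: the logistic map x -> r*x*(1-x), a deterministic rule
# that, for high r, never settles into equilibrium and amplifies the
# slightest deviation in its initial state into a different trajectory.

def trajectory(x0, r=3.9, steps=20):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = trajectory(0.200000)
b = trajectory(0.200001)   # an almost imperceptible deviation
for t, (xa, xb) in enumerate(zip(a, b)):
    print(f"{t:2d}  {xa:.6f}  {xb:.6f}  diff={abs(xa - xb):.6f}")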

b.

The appearance of autopoietic systems

Since the first generation of cybernetics, the issue of system feedback had already been raised, even if this feedback aimed at restoring the system’s balance. In the second generation of cybernetics, the concept of feedback expands even further and introduces the environment as a determining factor that will affect the system. This shift goes hand in hand with a general extrovert tendency, in which systems are no longer static and no longer engaged in an attempt at continual restoration, and it is associated with the appearance of the concept of disorder as discussed in the previous subchapter. Feedback becomes an element that will play a decisive role in the internal organization of the system. The environment as an entity and as a co-modifier of the structure of a system originally appears in the theory of autopoietic systems. The roots of these theories lie in the science of biology; they have been described in detail by theorists such as Maturana and Varela, and have been commented on by Deleuze and Parisi, employing a great



deal of philosophical reflection. A characteristic feature of the concept of autopoiesis is that it is often referred to as autopoietic enactivism, which denotes an enactment of its conception and implies an organic development of the system through its internal tendencies, which function in its adaptation to its environment. These systems are pioneering for the period in which they appear because they are exterior-referential systems and imply a continuous hetero-configuration: not exactly an interaction, but certainly a communication with structures outside their own. Emergence in autopoietic systems evidently arises from their ability to refer to exterior factors. Any resulting behavior is the outcome of an exogenous factor and of a continuous process of adaptation to environmental conditions. The concept of emergence does not seem to concern design practice here and arises rather incidentally; the interest is shifted to how incoming input elements - feedback loops - will act as metabolizers of external conditions. The value of this type of interaction is precisely that it breaks the closed limits of the first-generation entropic system and opens the way for the evolution of interactive systems that will display emergence from a transformed perspective.

c.

Interactivity and Responsiveness

The double heading of this subchapter suggests the dual character of this type of system, which shares features with autopoietic systems with respect to their exterior-referential capacity. However, they are also a distinct category, since beyond their interactivity and communication with their environment they have the ability to respond. The word response is not accidental in their characterization, as it accurately describes the limit of their communication with the environment. This limit is the response: the transmission of a signal that the environment has been perceived, and nothing more, such as, for example, a reconfiguration of the environment by the system. After the attempt to simulate biophysical structures in the first generation, and then with the organization of the algorithmic system as an autopoietic engine, the concept of the interaction of algorithmic systems with natural data (1) has emerged and been used in recent years. Digital art works as well as architectural projects incorporate physical sensors into their structures.



In this way the collection of the data that will constitute the system’s feedback takes place in situ. Sensors act as actuators and give the system a “‘ubiquitous fervor’ of perceptual data” (Parisi, 2013). Along with the autopoietic system, interaction is another step of the second generation that adds “biophysical contingencies” to the mathematical formulations of algorithms (biophysical contingencies is the term used by Parisi to comment on the inputs of interactivity). It is in fact the first time that scientific considerations from the systems of mathematics and biology find hybrid connections within the field of the design of algorithmic structures. Of course, this type of interactivity and responsiveness system remains somewhat limited within the general second-generation cybernetic perception of the self-organization of biophysical structures through the autopoietic process. Luciana Parisi’s critique of this kind of system, which also relates to their emergence, is that there is no concern or deeper reflection on the inherent power of the algorithms, on the way they are constructed and operate, on the advancement of operating steps through certain rules, and finally on the “indeterminations” of programming. In this way, the opportunity and the ability of all these elements to produce innovation and emergent phenomena in the natural, mathematical or biological structures that are produced are totally lost. In fact, there is still no perception that algorithms are a third element with its own extensions and not just an auxiliary background or an element of interconnection between the systems of higher sciences. The emergence in these systems is again (as in autopoietic systems) the result of exogenous interaction and of a predetermined response to external conditions and changes. The predictability of the future situation is an element inherent in the system, and it is also the main inhibiting factor for the assumption that there is indeed emergence in them. (2) ‘In principle, these procedures will eventually enable the machine (or the software program) to learn and calculate the probabilities of a similar scenario before it actually happens. The cybernetic logic of forecasting the future through models of the past is geared here towards a new level of predictability. It now involves the ability of the software to rewrite the rules that it was programmed for, and of the mathematical model to change as a result of physical interactions’ (Parisi, 2013). In the end, although the structure can be altered and shaped by external



data, which are seemingly undefined, what is actually achieved is a “new kind of predictability” in which the input data for prediction are biophysical responses.
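A minimal sketch of the response-and-nothing-more scheme discussed above; the sensor, thresholds and responses are hypothetical and purely illustrative. The mapping from sensed value to reaction is fixed in advance, so however varied the incoming data, the behavior remains a predetermined response.

# Minimal sketch of a responsive (not emergent) system: a predetermined
# mapping from sensor readings to responses. The environment is sensed,
# a signal is returned, and the rule itself never changes.

import random

def read_sensor():
    """Stand-in for an in-situ sensor reading (e.g. a light level 0-100)."""
    return random.uniform(0.0, 100.0)

def respond(level):
    """Fixed, pre-programmed response: the limit of this kind of system."""
    if level > 70.0:
        return "close shading panel"
    elif level > 30.0:
        return "partially open panel"
    return "open panel fully"

for _ in range(5):
    level = read_sensor()
    print(f"sensed {level:5.1f} -> {respond(level)}")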

d.

Machine learning and Artificial Intelligence

Machine learning has its roots at the very beginning of the digital computer; it was conceived as a concept as early as 1950 by Alan Turing. Thus, theoretically, it could perhaps be placed in the first generation of cybernetics, for reasons not only of time but also of features (3). However, it is commented on as a sub-chapter of the second generation of cybernetics, firstly to separate it from the closed systems of first-generation algorithms, and secondly because emergence in machine learning and artificial intelligence has common traits with the autopoietic and interaction systems analyzed in the previous two subchapters. The creation of artificial intelligence machines uses the logic of “finite sets of commands that can promote large and complex amounts of information” (Parisi, 2013). The basic principle in establishing these rules is that the engine or system being designed is intended to learn from and about its environment. According to Andy Clark, it is basically the design of a form of cognition. The term cognition implies not only the part of the interaction with the environment (analyzed in the previous type of interactive systems) but also the mode of action. The issue of the “consciousness” of the digital medium and of whether a machine can think is one of the key issues Alan Turing addresses in Computing Machinery and Intelligence. The idea that runs indirectly through the extensive reference to opposing views is that even if we cannot call the way a digital computer works thinking, it definitely carries out a logical process, one that is not at all related to the way in which human thought is articulated. The digital machines that will undergo the learning process are, for Turing, discrete-state machines that can represent a large number of states due to their large storage capability. Along with the issue of consciousness that concerns Turing comes also the question of whether the subject of a digital computing machine can be the machine itself. Thus, he argues that if a machine is programmed to build its own programs or to predict the transformation effects of its structure

44


in order to perform its functions more efficiently, then the object of the processes is the very machine. Whether or not we can assume consciousness or perception of such a mode of operation is not the subject of this research, but the conclusion that emerges with regards to emergence in machine learning, is that it is essentially not an innovative phenomenon expected to be attributed in the system. The emergence in this case is associated with the displacement of the design object, since the design center is not the structure we want to produce, but the machine that we will create and how we perceive the environment. These elements bring very close the concept of autopoiesis, mainly on a theoretical level and not practical.
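As a schematic illustration of this displacement of the design object (a deliberately minimal sketch, not a reconstruction of Turing's proposal; the data and thresholds are invented), the following perceptron-style loop shows a program whose processing modifies its own parameters, the "rules" it operates by, in response to feedback from a hypothetical environment:

    import random

    def train(samples, epochs=25, lr=0.1):
        # The program adjusts its own parameters - the "rules" it operates by -
        # in response to feedback about its errors.
        w, b = 0.0, 0.0
        for _ in range(epochs):
            for x, target in samples:
                output = 1 if w * x + b > 0 else 0
                error = target - output
                w += lr * error * x
                b += lr * error
        return w, b

    if __name__ == "__main__":
        # Hypothetical environment: inputs above 0.5 are labelled 1, otherwise 0.
        samples = [(x, 1 if x > 0.5 else 0) for x in (random.random() for _ in range(200))]
        print(train(samples))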



3rd chapter citations :

1. 3.c. His view of digital responsive media as an attempt to join an artificial element with a natural one meets criticism such as that of Mark Hansen, who considers that the architecture of the algorithm is not capable of accounting for the physical dimension of space. He comments that mathematical systems, in relation to biological structures, can only function as abstraction, reducing physical complexity to well-formed forms. These critiques arise from the opposition of the digital to the physical element and end up considering such design a simulation of the biological world, a disconnected scheme of abstraction of the real.
2. 3.c. (predictability) The concept of prediction is commented on by Alan Turing; in the digital and computational domain it mainly concerns practicality rather than the predictability of the system itself. The "universe as a whole", as defined by Laplace, can develop from small mistakes or changes in its original state into a drastically different result in the end. On this basis Turing considers that any computational scheme relating to a mechanistic system must be verifiable and predictable, predictable at every step of its discrete states of operation. (Turing refers to the discrete-state machine as the digital machine whose function is determined by sequences of 0 and 1, the only kind of digital computer we know so far.)
3. 3.d. Parisi also classifies artificial intelligence machines in the first generation of cybernetics, mainly because they were used as operators of complexity for the large amounts of data they would receive from users. Complexity was the main driving force in their initial use, and the main problem was the handling of the concept of feedback and the development of communication and response capabilities …



4. Emergence and Post-cybernetic Control

In all the above phases the operation of algorithmic systems is programmed and specified, either because it is specified how they will develop by themselves, or because the fields from which information will be acquired, and to which there will be some targeted response, are fully defined. The choice of the "place" of the information is itself already information, a kind of mapping, and that is what characterizes the emergence of the previous types of systems. After the finding that the whole apparatus of interactivity, responsiveness or artificial intelligence amounts to a system predefined in its operation, a critique develops that leads to the search for speculative computation and the development of new Artificial Life systems. In his article Outdating A.I.: Beyond the Turing Test, B. Bratton criticizes human-centered approaches to the development of algorithmic systems and machine learning. His analysis is further investigated in the first subchapter. The second subchapter analyzes Artificial Life systems, and it is then explained how emergence appears in Protevi's account of chaos theory and complexity theory, which is the basis for the most advanced models of post-cybernetic control.



a. Critique of the production of complexity in post-cybernetic schemes

After analyzing the mode of operation of the formations developed up to the second generation of cybernetics, some features of the way they are constructed stand out and lead to a skepticism towards them. This skepticism also leads to the condition of post-cybernetic control, where the way of development and the goals set for an algorithm have a different starting point. Especially with regard to the phenomenon of emergence, no changes in the basic approach are observed across all the previous systems. Systems tend to follow a rather deterministic logic of predetermining basic parameters, although with the introduction of interaction a need appears to search for a way of differentiation. Even in this case, however, it is not entirely clear whether this should be considered a genuine hetero-determination of the algorithmic structure and the digital system, or rather an adaptation of it so that it responds to and is informed by its environment. The deterministic logic of the above systems is not stressed here as something negative, but it is criticized by some theorists. Parisi states that these systems do not take into account the elements of the algorithm that classify it as a separate ontological scheme, and also do not use all the possibilities it can have. Regarding the field of artificial intelligence in particular, the ongoing debate revolves around whether or not it is feasible to reach the level of human perception and self-consciousness. Bratton, in his article Outdating A.I.: Beyond the Turing Test, comments that the basic question is not that, but what it means for artificial intelligence to produce a kind of thinking very different from the human one. A "mature" artificial intelligence, for him, does not necessarily simulate human intelligence, and he thinks that the anthropocentric approach heightens the dangers and detracts from the benefits. The artificial intelligence that already exists in our environment, from its use in search engines to industrially produced robotic systems, largely evolves through human-centered research. According to Bratton, the biggest mistake of this research is to mirror the image of the human mind in such systems. Commenting on the Turing Test, he finds in it a continuous effort to assimilate the machine to the human. He considers thinking something more complex than what can be measured or defined in human terms, and he proposes machine thinking, which can also be a way of hetero-determining man and what human thought ultimately is. Locating anthropocentric assumptions from 1950 onwards, even in Turing, he considers that intelligence does not depend on which element reflects "humanity" back to us, and therefore, instead of searching for such reflections, he suggests that there is much more to gain from a more honest and disenchanted relationship with synthetic intelligence.

b. Control without control

The title is borrowed from a subchapter of the article Heat-Death by L. Parisi and T. Terranova. The authors connect the problem around the concepts of entropy and disorder, through cybernetics, with the concept of control. In third-generation cybernetics the whole approach is more bottom-up than in the previous two orders. Complexity arises from the successive combination of simple elements and rules. It is taken into account that the properties of matter evolve over time through a process of self-organization in an unpredictable but coherent way. What drives interest is the consequence of this unpredictability for control: it is examined how the mode of control changes or is modeled upon matter. Thus the questions of post-cybernetics concern how control comes into place and how it is created. By rephrasing this question, the focus turns towards the concept of Artificial Life systems. This concept is evidence that there are elements of self-organization in social, physical and technological production regardless of the environment. The theoretical background of ALife and of the third generation of cybernetics incorporates and synthesizes approaches from chaos theory, complexity theory and evolutionary biology. The denial of entropy as the unique force able to direct a system now appears through the intense use of biological processes at the level of simulation. It may be a source of inspiration for creating new types of control, or rather an attempt to clarify the lack of a decisive control parameter in nature which is to be used in artificial life systems. Thus patterns, forms and algorithmic structures are developed. The question of turbulence is found here, with Deleuzian-type transformations of a fluid space in which there is continuity and plasticity, and which is constructed from the minimum diverging angle from the vertical. The concept of self-organization continues to exist; here, however, self-organization produces complexity and evolutionary dynamics, diametrically opposed to the dominant concept of entropy in first-order cybernetic control. Algorithmic systems are classified as mental schemes in which algorithms construct other algorithms, a concept developed by K. Terzidis under the term meta-algorithms. Meta-algorithms are a sophisticated form of algorithm in which central control is lost, and the discussion is placed around systems that can structure themselves, with a more complex outcome and a great amount of uncertainty about how control is constantly shifted.

c. Emergence in chaos theory and complexity theory systems

Protevi, in his book Political Affect: Connecting the Somatic with the Social, has developed a theory of how emergence occurs in chaos theory systems and complexity theory systems. Systems of this particular type concern the study of post-cybernetic control, as was mentioned in the previous subchapter, since they are not only its interpretive basis but also the synthetic tool of these newer types of systems. It is worth mentioning first Protevi's clarifying comment that ordinary language uses the terms complexity theory and chaos theory for types of systems which scientific language usually calls by another name, the mathematics of non-linear dynamical systems. This is the general category to which all these systems belong, and the separation of the two terms essentially serves to highlight two subcategories.



Chaos theory systems concern the development of unpredictable behaviors, from simple rules, in deterministic, non-linear systems. Complexity theory systems, on the other hand, deal with the emergence of relatively simple functional structures out of complex interactions between the structural elements of a system. Therefore, as Protevi points out, chaos theory moves from the simple to the complex, while complexity theory moves in the opposite direction. Both are completely distinct from random systems. In analyzing the behavior of these different systems, we observe that in complexity systems behavior is organized through the concept of attractors. These attractors may be points (leading to fixed systems), loops (leading to oscillating systems), or fractal-like (leading to turbulent / chaotic systems). Attractors define the behavioral patterns of the system and are one of its three elements. They are described by the other two elements: the phase space, that is, the number of dimensions that define the position of the system's points, and the mathematical manifold, that is, a topological scheme defining the set of points of the system. The areas of phase space are defined by the basins of attraction. Negative feedback loops run through the system to bring it back to its defined attractor behavior and return it to a certain point of reference; when this is repeated, the system becomes a stable system. When the feedback loops are positive, there is no return to the homeostasis of the system; growth or decline is produced and the system is probably led to a new behavior, essentially changing the basins of attraction, or in other words the system's attractors. A new behavior is created through the development of a new attractor layout. Such a point of change is called a bifurcation point. Complexity theory systems are unpredictable rather than undetermined or non-deterministic, and the term 'random', when used for them, is more an epistemological statement about the limits of our prediction. Chaos theory systems, in contrast, are somewhat predictable, because their behavior is defined by simple rules that lead to a complicated result. In random systems, which we separated out above, there are no attractor behaviors, or their attractor behavior is beyond our ability to map. The specificity of complexity theory systems and the development of emergence lie at the moment when a system enters what Protevi calls learning. When a system changes its behavior and is led to a completely new behavior, it enters the phase of developmental plasticity (Developmental Plasticity, West-Eberhard; Diachronic Emergence, Morowitz). The concept of emergence in Protevi is the development of self-organization and of a focusing of systemic behavior through the constraint of the action of the system's units. He examines emergence in terms of the way it engages with the theory of Deleuze-Guattari, defining it as the "production, over time, of functional structures in complex systems that achieve a simultaneous focusing of systemic behavior by constraining the individual behavior of the units" (Protevi, 2009) (1). Emergence in systems, as described by Protevi, is bound to a continuous search for causality (2).
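To make the vocabulary of attractors, basins and bifurcation points concrete, the following sketch (not part of Protevi's text; the logistic map is used only as a standard example of a deterministic non-linear system) iterates a single rule and prints the states each orbit settles onto: a point attractor, then a loop, then fractal-like, chaotic behavior as the parameter r crosses successive bifurcation points:

    def logistic_orbit(r, x0=0.2, transient=500, keep=8):
        # Iterate x -> r*x*(1-x), discard the transient, and report the states
        # the orbit settles onto: a point, a loop, or chaotic behaviour.
        x = x0
        for _ in range(transient):
            x = r * x * (1 - x)
        orbit = []
        for _ in range(keep):
            x = r * x * (1 - x)
            orbit.append(round(x, 4))
        return orbit

    if __name__ == "__main__":
        for r in (2.8, 3.2, 3.5, 3.9):   # crossing successive bifurcation points
            print(r, logistic_orbit(r))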

d. Emergence in Artificial Life Systems

(Langton's theory) Christopher Langton has described the characteristics of Artificial Life in a way that specifies how emergence appears in it. He considers, first of all, that in such a system there cannot be a single program that directs and organizes the rest; according to Langton, in an Artificial Life system there can be neither central control nor central programming. Extending this feature, he notes that there are no rules defining the behavior of such a system as a whole: the system does not have specific rules that are repeated in its subprograms. Therefore, any action that occurs at a level higher than the individual operation of the units is considered an emergence. It is a new behavior, and it is essentially a driving force of the system and not its rule. (Parisi, Terranova, 2000) This decentralization is, according to Parisi, the differentiating element of this new type of cybernetic control, which creates a dividing line with respect to the two previous ones. The system uses self-organization not with the aim of reproducing itself, but of generating differentiation through it. Differentiation does not emerge due to a problem or a genetic algorithm. The concept of emergence replaces the concepts of homeostasis and self-regulation as the driving force of the system; it is what the system seeks in order to be considered a system. Emergence in the third generation of cybernetics stems from unplanned and unverifiable aspects of algorithmic processes, not predicted by the programmer of the simulation.
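A one-dimensional cellular automaton offers a minimal illustration of Langton's conditions (this sketch is not from the cited texts; rule 110 is chosen only as a familiar example): every cell updates itself from purely local information, no central program coordinates the whole, and whatever global patterns appear exist only at a level above the individual rule:

    def step(cells, rule=110):
        # One update of an elementary cellular automaton: each cell is rewritten
        # from its immediate neighbourhood only; no central program directs the whole.
        n = len(cells)
        return [
            (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)
        ]

    if __name__ == "__main__":
        cells = [0] * 40 + [1] + [0] * 40   # a single active cell
        for _ in range(20):
            print("".join("#" if c else "." for c in cells))
            cells = step(cells)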



4th chapter citations :

1. 4.c. Protevi's definition of emergence in a more general context states that emergence is the construction of functional structures over time that achieve a synergic coordination of systemic behavior. He mentions the two different approaches to social systems and emergence. The perspective of the reductionists holds that the social behaviors of a set are no more than a combination of the behaviors of its units. On the contrary, the emergentist perspective he adopts gives more importance to emergence and examines it more systematically. Protevi believes that the concept of emergence is central to talking about how the subject's physiology, and hence political cognition, and the whole philosophy of bodies politic, is formed.
2. 4.c. On this basis, Protevi reports two other types of emergence, reciprocal causality and transverse emergence, where assemblages are created by biological, social or technical factors. He uses this theory of emergence to analyze his theory of bodies politic and the political cognition and affect he examines in the next chapter of his work.



5. Embedding Emergence into the practice of Speculative* Computation

*Note: by the term speculative use of digital media, the concept of stochastic computation is implied. The concept of the stochastic translates as reflection, but not in the way the term is used by Kostas Terzidis, for whom stochastic design is design in constant search of a certain result through a continuous and random search. Instead, the notion of speculation has no specific goal to seek and does not refer to systems seeking equilibrium or balance. It is a continuous search without a predetermined result, for the purpose of searching, combining and recombining data and looking for emergent phenomena. This reflective research feature is the main demand in an era of post-cybernetic control, and that is the point at which the term speculative is used.

"Algorithmic architecture is a case of speculative computation that raises logic and calculation into the power of the incomputable." (Parisi, 2013) After the emergence of the concept of "post-cybernetic control" and the development of digital tools and algorithmic structures, both in terms of their structural and their ontological content, a new term appears in the use of digital media. The concept of emergence is directly related to the notion of speculation in the use of the computer, because at its basis this new term denotes a continuous, and indeed reflective, investigation. The investigation has no specific goal, as it does in a stochastic search; nor is it based on random choice and direction. It is a continuous investigation that is not aimed at something in particular but is constantly looking for something new. This is the reason why we can consider emergence the most central concept of this type of post-cybernetic control structures that arise from the
speculative computation process. The term scripting is associated with speculative computation and allows a wide range of readings. It appears in Mark Burry's AD: Scripting Cultures, where it is thoroughly analyzed as marking a new era in which computation evolves into a culture and, as the title says, not one but many. The plural arises from the fact that the users of digital media are numerous, and it refers to the development of a wide range of different approaches to use. Algorithms as a general notion are now the skeleton, the process that will construct the design system, while the programming languages are the translators of this system. In scripting, the technical is combined with the essential element: the programming languages, with their libraries and the ready-made commands of the respective programs. The term scripting also refers to the user's redesign of program data and design tools. We can consider scripting the basic principle of speculative computation, or better its basis and its tool. M. Burry likens it to writing a script that can be played in many different ways. This new logic breaks the former logic of mass production in design, and the user of the programs, the designer, becomes the new toolmaker and software engineer. (1) One of the basic functions of scripting is that it offers the ability to manipulate a large volume of data, to move in many directions simultaneously and, most importantly, the ability for the human user to work beyond his perceptual capabilities.
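As a rough illustration of this last capability (not drawn from Burry's book; the design rule, parameter names and thresholds below are invented for the example), a few lines of Python can sweep a parameter space far larger than a designer could inspect by hand and filter the variants against a constraint:

    import itertools, math

    def panel_area(width, height, curvature):
        # Invented design rule: the area of a panel bulged by a curvature factor.
        return width * height * (1 + 0.5 * math.sin(curvature))

    widths = [0.5, 1.0, 1.5, 2.0]
    heights = [1.0, 2.0, 3.0]
    curvatures = [i * 0.1 for i in range(20)]

    # The script generates every combination of the design parameters
    variants = [
        (w, h, c, panel_area(w, h, c))
        for w, h, c in itertools.product(widths, heights, curvatures)
    ]
    feasible = [v for v in variants if 2.0 <= v[3] <= 4.0]   # keep only variants meeting a constraint
    print(len(variants), "variants generated,", len(feasible), "feasible")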

a. Speculative computation in Architecture

For L. Parisi, speculative computation does not mean that algorithms are systems that project the present or the past onto the future, but that algorithms introduce distinct indeterminations into realities. On this basis she defines "soft thought", which is one of the categories into which she divides the architecture of thought. Other categories that are directly related to design, system creation and algorithmic structures are interactive thinking,
neuroarchitecture, enactive thought, negative prehension, cybernetic thought, ecological thought, and so on. Soft thought is the basis on which the practice of speculative computation appears in architecture. In defining what soft thought is, by comparison, she states that it denotes the introduction of incomputable probabilities into algorithms and is a real prehension of incomputable objects. Thus, "as a prehension of incomputable objects, soft thought is unthinkable by human perception, yet it is immanent in the human mind and thought" (Parisi, 2013). With this observation L. Parisi emphasizes soft thought not as something completely different, outside the human intellect, but as something inherent in it. Soft thought is another kind of thought that the human brain can grasp, just as it can grasp the notion of the incomputable object. It is a kind of thinking that develops in order to handle such ontologies and objects. In the context of digital architecture, speculation has to do with the way in which algorithms construct space-time realities. A specific example of architecture that makes speculative use of algorithmic structures is the work of R&Sie(n), "I've Heard About", which depicts an urban system composed of data structures. The project suggests that architecture is not about the incorporation of an artist's intelligence into form; it is rather a means of pointing to the infinite potential of an urban landscape that is being developed, transformed and expanded beyond its original state. The urban structure in the project evolves through the transformation of algorithmic structures that occurs due to data recycling, synthesis or decomposition. The system is created under the simultaneous validity of certain protocols concerning: inputs from pre-existing morphological data (such as compact boundaries, natural light, existing dimensions of "resilient cells"), internal structures of chemical reactions (physiology, endocrine exudations, physical emissions), and items of digital information. Although we could understand the algorithmic structure as an executor of specific commands that ultimately produce urban architecture, the R&Sie(n) approach does not treat the algorithmic structure as a design tool but as an additional element of urban composition. As L. Parisi comments on this work, the transformation of the urban structure has to do with the algorithmic conception of the transformation of distinct levels of data. Thus the structure becomes a system of algorithms driven by combinations of possible contingencies that can build multiple, heterogeneous or contradictory scenarios. In essence, the algorithms function as societies of data inputs, changing their structure on the basis of them, so that they escape the logic of reproducing their original state, which is constantly changing over time.

R&Sie(n), "I've Heard About"

b. Emergence in state-of-the-art algorithmic systems (or else, emergence in Speculative Computation)

The emergence of speculative computation lies at the point which Parisi calls soft thought. This is not a kind of thinking that can replace logical thinking, nor an ontological basis of thought. It is a different and self-contained way of thinking that has the ability not to confine itself to the limits of what the human mind can do. This is an attribute associated with emergence, as it introduces non-computable objects and modelless data into our own reality. Parisi insists on the term patternless data, which is not related to the types of algorithms of previous generations that develop systems such as cellular automata, and considers such data a central element of speculative computation. Emergence here does not have to do with an increasing unpredictability; it has to do with the incorporation of the infinite, the unlimited, into the finite. The discovery in speculative computation is that in each binary step and in each binary computation there can be revealed 'endlessnesses', or else unlimited entities, justifying the objects of the results of the algorithms on an ontological basis.



5th chapter citations :

1. 5. Rationalism in design, as Mark Burry introduces it in AD: Scripting Cultures, dominated the modern movement in architecture and was established by CIAM. He believes that it remains prevalent in algorithmic thinking to this day and that we are in a period of transformation towards a new, expanded rationalism. There is, however, a tendency to renegotiate the logical and the sensory, and a tendency towards a greater discussion of the affective in architecture.



Epilogue-conclusions: redefinition of speculative computation, design extensions in architecture

So far, the concept of the algorithm is still the basic concept used to develop computational processes in digital media. Having analyzed the concept through a multitude of lenses, we certainly cannot guarantee its continued existence in future generations of digital use, but we do come to some conclusions as to its processing capabilities. The fact that the development of the digital medium is based on the algorithmic building system supports the theories of the algorithm as an ontological object. If the algorithm were only a cognitive processing machine, the actions performed with it would be purely computational, and we would not see any evolution in the course of the digital element. The fact that this evolution exists points to a greater number of attributes existing in algorithms and classifies them as a way of thinking, a kind of conception: algorithmic prehension. The algorithmic conception is, one could say, an extended form of perception of the human mind; the digital as a co-modifier of human perception, as an extension of the neurological potential of the brain. The algorithmic conception, which involves the concepts of the infinite, of non-computable objects and of the production of metamodels, promotes a different way of perception, which appears to become visible in the post-cybernetic era, where it is understood that algorithmic systems are not control systems or systems that produce control. As discussed through the structure of the research, talking about computation without having first analyzed the concept of the algorithm would in a way be inadequate, since the algorithm is the structural element,
and indeed a very special and powerful one, in the synthesis of any digital computational process. We observe that the analysis of the concept of the algorithm and of its philosophical extensions influences to the maximum how it will be incorporated, how it will be structured and how it will be processed in use. The perspective on the algorithmic structure will inevitably affect the structure of the design itself and will lead to a certain result. This awareness leads to a more critical and up-to-date use of the digital medium, and we must further consider that such a basis is necessary for a contemplative use of the digital element. A typical example of the above conclusion is the object-oriented philosophy of design. As discussed extensively in the first chapter, Kay, seeing the digital world as a world of objects, completed a vision that seeks to use the algorithm as an object, creating a completely different ontology that eventually led to a more direct use of algorithms. The ontology developed, intentionally or not, by both Parisi and Terzidis is an even greater extension of the concept of the algorithm, since it treats it as something strange; Terzidis devotes an entire chapter to otherness, and this leads us to conclude that there are much larger cultural implications of what has been discussed so far. We can conclude that in every design theme one is called to negotiate, besides speculative computation being a way to redefine the approach to the design object, there are also many different algorithmic systems that can be adapted so as to display a consistency in the design logic. Through this consistency there is no need for a deterministic way of thinking but for a thoughtful process of evolving the possibilities given. Thus, for example, stochastic search is, as analyzed, a particular process involving the elements of control and randomness. The use of cellular automata implies a way of designing in which rules of immediate proximity organize and lead to the end result. Agent-
based systems are a different subclass which, in the infinite variations they can display, allow a bottom-up design where endogenous interactions shape the overall system and trends are generated through the divergent behaviors of the system's units (a minimal sketch of this logic follows below). Similarly, all algorithmic systems drive, or rely on, very different design choices that are obviously conducive to specific design approaches. As far as the connection of computation with the physical element is concerned, algorithmic systems can be said to be at an early, out-of-phase stage, judging not only from the systems of interactivity but also from the way robotics research progresses on the larger scale. The digital structure is mainly treated as a design aid for operating the machine. Such approaches automatically exclude all the theoretical discussion of an ontological framework and remove the possibility of a substantial evolution of the digital world. The greatest ally of the algorithm, of course, is technology and its integration into social, physical and cultural structures, but this has so far been conducted in a way that is quite human-centered. It may be necessary, in the practice of speculative computation, to move away from such a perspective to the point where, by focusing on the possibilities of the digital element itself, it will be developed even further and will lead to a real cultural upgrading and not only to a technological development, which is often used to consolidate or emphasize social and cultural gaps. Different considerations in the composition of the digital often lead to its democratization, as was the case with Kay's perspective on object-oriented programming. The perspective of considering that programs are made of linked "objects" that can, in another structure, organize a new shape managed to make programming very popular with a large number of users. Thus, by taking advantage of a vast array of possibilities, a cultural development occurs that can only have a positive sign and may lead to a faster evolution of the subject.
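A deliberately small sketch of the bottom-up logic mentioned above (illustrative only; the cohesion rule, radius and step sizes are arbitrary assumptions): each agent follows a local rule of attraction towards its neighbours plus a small random divergence, and any global clustering is an outcome of the interactions of the units rather than of a central plan:

    import random

    class Agent:
        def __init__(self):
            self.x = random.uniform(0, 100)
            self.y = random.uniform(0, 100)

        def step(self, neighbours):
            # Purely local rule: drift slightly towards the average position of
            # nearby agents, plus a small random divergence.
            if neighbours:
                cx = sum(n.x for n in neighbours) / len(neighbours)
                cy = sum(n.y for n in neighbours) / len(neighbours)
                self.x += 0.05 * (cx - self.x) + random.uniform(-0.5, 0.5)
                self.y += 0.05 * (cy - self.y) + random.uniform(-0.5, 0.5)

    def neighbours_of(agent, agents, radius=15):
        return [a for a in agents
                if a is not agent and (a.x - agent.x) ** 2 + (a.y - agent.y) ** 2 < radius ** 2]

    agents = [Agent() for _ in range(50)]
    for _ in range(200):
        for a in agents:
            a.step(neighbours_of(a, agents))
    xs = [a.x for a in agents]
    # any narrowing of the spread is a bottom-up outcome of local interactions
    print("x-range after simulation:", round(min(xs), 1), "to", round(max(xs), 1))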



Algorithmic architecture has been able to restructure mathematical concepts related to dynamics, such as cellular automata, in order to examine spatial expression. It is undeniable, however, that the geometric forms, and all the consequences they may have for the experiential perception of space, have at this later stage remained rather unexplored as to their influence and extensions. Spatiality finds entirely new forms of expression, with new cultural and social possibilities. By turning to interactive design, technological innovation manages to handle biophysical data as inputs to algorithmic systems. However, it has not yet fully answered these new objects, or the investigation of biological structures in design; it continues to see algorithms as the result of developing data, or as the response to biosynthetic data. The algorithmic object in architecture, in the view of some theorists, has not yet been examined and has not found its proper expression. Indeed, we could argue that in order for the digital to find its natural expression, a much larger exploration and a non-anthropocentric conception are needed, free from previous considerations, even from those developed in the philosophy of digital media to date. The concept of emergence is a basic driving force that determines algorithmic systems and separates their evolution into distinct phases. The way in which it appears obviously highlights very different aspects of the algorithmic structures: the context in which they move, the generations and the philosophical visions through which they are born, and the extensions at which they aim. Emergence, as the appearance of the basic behaviors of a system, is perhaps even more important than the structure itself, its aesthetics and its morphology (in the sense of morphogenesis), because it reveals how the system is conceived and, as it seems, where it ends. Closed algorithmic systems of first-generation cybernetics, although they range over a number of aesthetic and
morphological choices, produce their behaviors in a way that is the essence of their structure: an enclosed structure of production of prescribed steps, which is closer to simulation than to creation. The emergence of interactive systems, responsive systems and autopoiesis is again a mechanistic process. If there is no perception of how a system evolves, it may not be possible to properly assess and grasp the structure that runs through it. Post-cybernetically, when the notion of "control without control" appears in critical theories such as L. Parisi's, a tendency begins to be declared in design practice, possibly not even directly declared by practitioners such as François Roche, which places emergence at the center of the post-cybernetic concept of control. In all of the previous algorithmic systems, the concept of control offers no different perspective as to who defines emergence and where it arises from. We conclude that the main reason why algorithmic structures continue to be used in design is that the potential for emergent phenomena involves additional fields to be explored, which is the main explanation for their use for innovation. Innovation in design is a process that, through stochastic exploration and experimentation, shifts towards a loss of control from the basic poietic, structure-building engine, exploring how control changes and building a network of relationships that will structure the broader design context and the architectural component. The appearance of emergent phenomena shifts from control without control to novelty without control, where the tendencies of a system are not specified from the beginning but appear in the course of the evolution of the design practice, arising through the system itself, which combines digital, intangible and physical elements. Novelty enters as a combinatorial element of emergence, since what is involved, in the stochastic processing of the algorithm, in the appearance of the behaviors that will structure the system is not the reproduction of standards but
the exact meaning of the term: the aspect containing innovation, the stochastic exploration of new capabilities and expanded concepts, which tends to be achieved through the loss of control, through emergence.



Bibliography:

Bratton, Benjamin, Outdating A.I.: Beyond the Turing Test, The New York Times, The Opinion Pages, February 23, 2015
Bray, Dennis, Wetware: A Computer in Every Living Cell, New Haven & London: Yale University Press, 2009
Burry, Mark, AD Primers: Scripting Cultures, Architectural Design and Programming, G.B.: John Wiley & Sons Ltd, 2011
Cache, Bernard, Earth Moves: The Furnishing of Territories, Massachusetts: The MIT Press, 1995
Carpo, Mario, The Second Digital Turn in Architecture, Sydney: University of New South Wales, 2015
Carpo, Mario, The Alphabet and the Algorithm, U.S.A.: The MIT Press, 2011
Castoriadis, Cornelius, The Rising Tide of Insignificancy, translated & edited by anonymous, electronic publication, 2003
Clark, Andy, Mindware: An Introduction to the Philosophy of Cognitive Science, Oxford: Oxford University Press, 2001
Hayles, Katherine, How We Became Posthuman: Virtual Bodies in Cybernetics, Literature and Informatics, Chicago: The University of Chicago Press, 1999
Hight, Christopher, Architectural Principles in the Age of Cybernetics, Routledge, 2007
Kay, Alan, and Adele Goldberg, Personal Dynamic Media, The New Media Reader 26, 1977, 393-404
Langton, Christopher G., Computation at the Edge of Chaos: Phase Transitions and Emergent Computation, Physica D, vol. 42, 1990, 12-37
Maturana, Humberto R., and Francisco J. Varela, Autopoiesis and Cognition: The Realization of the Living, Dordrecht: D. Reidel, 1980
Parisi, Luciana, Contagious Architecture: Computation, Aesthetics and Space, U.S.A.: The MIT Press, 2013
Parisi, Luciana, Symbiotic Architecture: Prehending Digitality, Theory, Culture and Society 26 (March-May 2009), 346-374
Parisi, Luciana, and Tiziana Terranova, Heat-Death: Emergence and Control in Genetic Engineering and Artificial Life, University of Victoria: CTheory Archive, 2000
Pask, Gordon, The Architectural Relevance of Cybernetics, Architectural Design, September 1969, 494-496
Pask, Gordon, An Approach to Cybernetics, London: Hutchinson & Co (Publishers) Ltd, 1961
Pasquinelli, Matteo, Augmented Intelligence Traumas, Lüneburg: meson press
Protevi, John, Political Affect: Connecting the Somatic with the Social, Minneapolis & London: University of Minnesota Press, 2009
Roche, François, I've Heard About . . . (A Flat, Fat Growing Urban Experiment): Extracts of Neighborhood Protocols, Architectural Design 79, no. 4 (July-August 2009)
Roche, François, Stéphanie Lavaux, Jean Navarro and Benoît Durandin, R&Sie(n), I've Heard About …© (a flat, fat, growing urban experiment), Paris: Musée d'Art Moderne/ARC, 2005
Shannon, Claude Elwood, A Mathematical Theory of Communication, U.S.A.: The Bell System Technical Journal, 1948
Shoham, Yoav, and Kevin Leyton-Brown, Multiagent Systems: Algorithmic, Game-Theoretic, and Logical Foundations, electronic version, revision 1.1
Terzidis, Kostas, Algorithmic Architecture, Oxford: Architectural Press, 2006
Turing, Alan, Computing Machinery and Intelligence, Oxford University Press, 1950



annex: Presentation of the Dissertation Thesis on 27/1/2017 at the Postgraduate Program 'Advanced Design: Innovation and Transdisciplinaryism in Architectural Design', AUTh. Online on: prezi.com/bddg7wentwly/thesis-dissertation/

the images of the presentation are works of the workshop "(n)certainties", GSAPP, Fall 2011, with François Roche



