Intelligence and Speculative Prosthetics:
from technoanimism to technofetishism
the creative power of non-human agencies

by Artemis Kyriakou

- ARE YOU INTELLIGENT? -

Research Thesis, February 2018
Supervisor: Maria Voyatzaki
Aristotle University of Thessaloniki
Faculty of Engineering, School of Architecture
CONTENTS

PREFACE
What are we?

CHAPTER 1
Science Fiction explains the inexplicable

CHAPTER 2
Precursors: myths, stories and rumours of artificial beings endowed with intelligence

CHAPTER 3
Frankenstein: "the prescient tale of techno-madness" (1700-1900)

CHAPTER 4
"The machine as an alternative to human brain": From energy to information (1900-1950)

CHAPTER 5
What is intelligence? Ours, or anyone (or anything) else's? AI and the ancient urge to reproduce ourselves (1952-1956)

CHAPTER 6
Building humanity anew: Chatting with machines and reengineering humans to fit the stars (1956-1974)

CHAPTER 7
BOOM! Going wireless in Cyberspace! From myth to reality (1980-1987)

CHAPTER 8
"Geneticists' Dreams": Let's make a body for intelligence to live in. From life-as-it-is to life-as-it-could-be

CHAPTER 9
Technologies to bond with: Let's connect to each other electronically. A global self-synthesising organ bustling with neural intelligence (1993-2001)

CHAPTER 10
Is Big Data the new AI? Self-designing our avatar selves (2000-present)

EPILOGUE
Summing up: Living in extreme times. What's next?

BIBLIOGRAPHY




preface

At the exponential pace at which technology moves, time speeds up, "order" 1 increases, and our world undergoes constant evolution in all its aspects. The evolution of intelligence, and its vast augmentation, has always fascinated humans. On a geological or even evolutionary timescale, the rise of Homo sapiens from our last common ancestor with the great apes happened swiftly. We developed upright posture, opposable thumbs, and, crucially, some relatively minor changes in brain size and neurological organization that led to a great leap in cognitive ability. As a consequence, humans can think abstractly, communicate complex thoughts, and culturally accumulate information over the generations far better than any other species on the planet. These capabilities let humans develop increasingly efficient productive technologies. Our modest advantage in general intelligence has led us to develop language, technology, and complex social organization, and that advantage has compounded over time, as each generation has built on the achievements of its predecessors. If some day we build machine brains that surpass human brains in general intelligence, then this new superintelligence could become very powerful, and the fate of our species would depend on the actions of the machine superintelligence. We do have one advantage: we get to build the stuff. 2

Since the Big Bang, the universe has been in constant evolution and continuous transformation. First there were physical and chemical processes, then biological evolution, and now technological evolution. As we begin to ride the wave into human redesign, the destination is still largely unknown, but the opportunities are almost limitless. Technological evolution, the evolution of artificial intelligence, of tools and techniques, of sciences, of architecture and design, of economy and politics; the whole evolution of our society. Let us praise evolution! It has created a plethora of artifacts of indescribable beauty, complexity, elegance and effectiveness. At the same time it has caused environmental disasters, colonization, advanced capitalism and poverty. But it created human beings with their intelligent human brains, beings smart enough to create their own intelligent technology, and at the same time rational beings capable of critical thought, able to direct evolution onto the right path. It has laid the base on which we can now dream of superintelligent beings, machines and artifacts, a new world with embedded brains everywhere!

Intelligence is obviously an important issue. Literally hundreds of books have been written about it. Throughout human history, philosophers, psychologists, artists, teachers, and more recently neuroscientists and artificial intelligence researchers have wondered about it, have been fascinated by it, and have devoted much of their lives to its investigation. It is said to be important, sensitive and highly mysterious, but what is it really?

1. According to Ray Kurzweil, the rate of progress of an evolutionary process increases exponentially over time. Over time, the "order" of the information embedded in the evolutionary process (i.e., the measure of how well the information fits a purpose, which in evolution is survival) increases. This is how he describes the 'Law of Accelerating Returns.'
2. Nick Bostrom (2014), Superintelligence: Paths, Dangers, Strategies. Oxford University Press.
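The "accelerating returns" described in note 1 is, at its core, compound growth. A minimal Python sketch, assuming an arbitrary 5% gain per generation (an illustrative figure of mine, not Kurzweil's), shows how steadily compounding improvement traces an exponential curve:

```python
# A toy illustration (not from Kurzweil's own models): capability that
# improves by a fixed fraction per generation grows exponentially,
# because each generation builds on the achievements of the last.
capability = 1.0
for generation in range(101):
    if generation % 20 == 0:
        print(f"generation {generation:3d}: capability = {capability:8.1f}")
    capability *= 1.05  # assumed 5% compound improvement per generation
```

After 100 generations the assumed 5% steps have multiplied capability more than 130-fold, which is the intuition behind the claim that "order" increases exponentially.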


Intelligence is obviously difficult to define. It has to do with consciousness, thinking, and memory, along with problem solving, intuition, creativity, language, and learning, but also with perception and sensory-motor skills, the ability to predict the environment, and the capacity to deal with a complex world. But the mistake humans long made was to believe that they stood at the pinnacle of the evolutionary pyramid thanks to their more complex brain, and thus to give definitions that are human-centered, treating intelligence as an exclusively human asset. But is it, really? Although intelligence is most widely studied in humans, it has also been observed in non-human animals and in plants. Are ants intelligent? Are apes intelligent? Could their intelligence be comparable to that of man? Are we more intelligent than the evolutionary process that created us? And, after all, are we willing to broaden our definition of intelligence? And then, what about the algorithm that picks our preferences on Google and Amazon and tells us what to buy? Is that intelligent or not? Can an intelligence create another intelligence more intelligent than itself?

Artificial intelligence, an attempt to study how intelligence works, can be understood as a set of tools and programs that make software "smarter," in such a way that an outside observer would think the output was generated by a human. Artificial intelligence is no longer thought to be a silly sci-fi concept, associated with robot characters in fiction movies. Actually it is not fictional at all, although it often sounds more like a mythical future prediction than a reality. "AI" is a broad topic, ranging from your phone's calculator to self-driving cars to something in the future that might change the world dramatically, and it refers to all of these things that describe our technological reality. As we move forward and technology progresses and evolves, the definition of AI changes, and this seemingly endless progress is transforming the world holistically. Life as we know it will be forever changed. 3

Will artificial intelligence come to exceed that of its creator? Who will hold the power of intelligence after all? These are questions that have been addressed by many scientists through the years, gradually planting the seeds of AI. Many researchers have struggled to solve the mystery behind the process of human thinking, and the greatest problem with the question of artificial intelligence is that we do not fully understand the nature of natural intelligence. The human brain, the very seed from which the idea of artificial intelligence was born, is still among the greatest mysteries we have yet to solve. Yet, if the day does come that hardware and software become as flesh and mind, we will have a new companion in searching for the basic questions of our own existence. Over the past six decades, we have set off technological revolutions that ultimately led to some of today's most advanced technologies. It seems clear that we humans are on a path to a more symbiotic union with the non-human intelligence we have produced. What draws us forward is the temptation of achieving advantageous enhancements to our inborn abilities, and the promise of improvements to the human condition.

But as we stride into a future that will give our machines unprecedented roles in virtually every aspect of our lives, we humans, alone or even with the help of those machines, will need to wrangle some tough questions about the meaning of personal agency and autonomy, privacy and identity, authenticity and responsibility. Questions about who we are and what we want to be. Before the next century is over, human beings will no longer be the most intelligent or capable type of entity on the planet. Actually, let me take that back. The truth of that last statement depends on how we define human. The primary political and philosophical issue of the next century will be the definition of who we are.4 What we think we know about who we are, what we are, and even where we are. Are we human? How human are we going to be in the future to come? The answer to such a question, in times of human, cultural and ecological crises, is as vulnerable as the validity of the question itself.

This research thesis attends to the notion of prostheticization as a contemporary condition of post-industrial human subjectivity. Today, human experience is shaped by an increasing interactivity with various technologies, which act as prostheses, supplementing our natural abilities but also fundamentally changing the way we function. This prosthetic transference involves a simultaneous extension and replacement, and a reciprocal re-construction of the human being and of technology, processes complicated by the dematerialization of technology in the current technological age. This has important implications. In the past, Man may have been regarded as the autonomous master of his tools; today, we observe that Man is both master and functionary of his tools. The inherited structure of the body is ultimately altered by its technological extensions. The human hand is uniquely adapted to make and use tools. Human designs eventually redesign the human. Therefore, "if the human is a question mark, design is the way that question is engaged." 5

The pace of change is accelerating and has been since the inception of invention. Humanity, in a constant struggle to understand its biological capacities, hacks the human body and blurs the boundaries between the human and the inhuman, in an attempt to transfer them, upgraded, onto its environment. Through the power of technology, we humans are radically reshaped. Our environment is undergoing ceaseless design. We literally live inside design, redesigned to be agile. Architecture is becoming agile. We can upgrade our environment while being upgraded ourselves, and surmount evolution. Our time alone may be nearing its end...

The aim is to present timely and valuable insights into how pervasive information technologies are altering the way people live, act, relate to others and think of themselves: the social contexts in which information and communication technologies are intertwined with our lives; the intimate relationship between technology, design and the human; the process of machines becoming us and the everyday changes in human life. A fascinating excursion into the realm of mind-body relationships in the Information Age. In our new era, processes trump products. We are moving away from the world of fixed nouns and toward a world of fluid verbs. A world of becoming. We transform technology and technology transforms us back, along with everything that surrounds us and the whole way we live. We have so far been allowed, as a species, to establish complete domination over the rest of the natural world. But whether our species will still be the dominant race of the future is a matter for debate...6

3. Urban, T. (2015), The AI Revolution: The Road to Superintelligence [Online] Available: https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html [Accessed 15 June 2017]

4. Kelly, K. (2016), The Inevitable: Understanding the 12 Technological Forces That Will Shape Our Future | The future of tech really is an Uber for everything. [Online] Available: https://qz.com/722101/the-future-of-tech-really-is-an-uber-for-everything/ [Accessed 26 June 2017]
5. Colomina, B., Wigley, M. (2017), Are We Human? Notes on an Archaeology of Design. Lars Müller Publishers.
6. Kurzweil, R. (1999), The Age of Spiritual Machines: When Computers Exceed Human Intelligence. Viking Press.



CHAPTER 1

Science Fiction explains the inexplicable

"There is a mythological component, especially with science fiction. It's people looking for answers – and science fiction offers to explain the inexplicable, the same as religion tends to do... If we accept the premise that it has a mythological element, then all the stuff about going out into space and meeting new life – trying to explain it and put a human element to it – it's a hopeful vision. All these things offer hope and imaginative solutions for the future."
- William Shatner

Fig. 1.1 The act of Prometheus stealing the fire of the gods, as given in Pierio Valeriano’s ‘Hieroglyphica‘ (Lyon, 1586).


How do powerful mythic ideas about technologies drive our social understanding and our expectations of them? "From the start, the tangle between fact and fiction has been difficult to unravel..." 7

In Greek mythology, the story of Prometheus (meaning "forethought") holds a special place in popular imagination. Prometheus was a master craftsman. This son of a Titan was regarded as one of the great benefactors of humankind, the bringer of fire and the original teacher of technology and the useful arts to man. Champion of humankind, he created humans by shaping lumps of clay into small figures resembling the gods. Athena admired these figures and breathed on them, giving them life. In this mythology Prometheus would be of vast merit to human society, as he was credited with the creation of humans and therefore of all humanity as well.8

Mythology has always been linked with technology. The successive growth of technology has always been accompanied by mythical stories which included promises with the potential to create momentum or social disruption. These stories have always been empowered by the notion of time, projecting all the way into the future; enhancing hopes, big ideas, great expectations, fears and threats for the future to come. Usually the fears were fed by man's desperately myopic determination to maintain his superiority as a species. But would this be possible? How do myths really affect our lives? Can they predict the future? Are they the medium onto which we project all the potential changes that we would most like to see in the world, promising us evolution? Do they operate as "awakening forces" for both the optimists and the pessimists of this technological explosion? Do they warn us of something?

It is definitely all about intelligence and evolution. Myths bring with them some crucial questions related to the future of humanity. How much will intelligence augment in 1000 years? How will this intelligence affect our environment and ourselves? How much time do we have before computers outpace the human brain in computational power? Myths have somehow contributed to the history of intelligence. Deeply embedded in our collective memory, they have shaped our understanding of technology at every turn, the deep and hidden legacy of cybernetic history. Myths don't contradict the facts; they complement the facts. The rise of the machines was always projected into the future, not into the present or the past.

According to Bernard Stiegler, in his book "Technics and Time: The Fault of Epimetheus," philosophy, from its very origin up until now, has repressed technics as an object of thought. Technics is the unthought. Stiegler argues that "technics" forms the horizon of human existence. This fact has been suppressed throughout the history of philosophy, which has never ceased to operate on the basis of a distinction between episteme and tekhne. The genesis of technics corresponds not only to the genesis of what is called "human" but of temporality as such, and this is the clue toward understanding the future of the dynamic process in which the human and the technical consist. History cannot be thought according to the idea that humanity is the "subject" of this history and technology simply its object. When it comes to the relation between the human and the technical, the "who" and the "what" are in an undecidable relation. Stiegler argues that Heidegger's philosophy fails to adequately grasp that, if there is such a thing as authentic temporality, the only access to it can be via objects, artefacts and, in general, technics, without which access to the past and future is impossible as such. What matters most is the relationship between technics and time.9

Mythologies are remarkable not for their content, but for their form. The basis of the myth appears as fully experienced, innocent and indisputable reality: computers are becoming ever faster; machines are ever more networked; encryption is getting stronger. But at the same time the myth makes a leap; it adds a peculiar form to the meaning. And this form is always emotional. Myths are convincing because they appeal to deeply held beliefs, to hopes, and often to fears about the future of technology and its impact on society. For their adherents, subscribing to the intelligent narratives of the future required more than evidence; it required belief. And myths made it easy to believe. "So powerful was the myth that its own creators kept falling for it." Technology myths have indeed the form of a firm promise: the cyborg will be built; machines that are more intelligent than humans will be invented; the singularity is coming; cyberspace will be free; intelligence will augment exponentially... Faith dressed as science...

7. McCorduck, P. (2004), Machines Who Think (2nd ed.), Natick, MA: A. K. Peters.
8. Prometheus, Myths Encyclopedia. [Online] Available: http://www.mythencyclopedia.com/Pa-Pr/Prometheus.html#ixzz4rI4EkbO7 [Accessed 25 August 2017]

"The only way of discovering the limits of the possible is to venture a little way past them into the impossible. Any sufficiently advanced technology is indistinguishable from magic."
- Arthur C. Clarke

9. Stiegler B., (1998), Technics and Time: The Fault of Epimetheus, Stanford University Press.


CHAPTER 2

Precursors:

myths, stories and rumours of artificial beings endowed with intelligence

"There is intelligence in their hearts, and there is speech in them and strength, and from the immortal gods they have learned how to do things."
- Lattimore, 1951

We live in a world that would seem like "science fiction" to those who preceded us.9 Myths can be depicted as valuable expressions of human experience and symbolic understanding of phenomena that otherwise escape scientific explanation, or as distorted, false, unscientific or indeed "ideological" stories about the world. Modern technology tends to recycle the old myths.10

The history of Artificial Intelligence begins in antiquity, with myths, stories and rumours of artificial beings endowed with intelligence or consciousness by their master craftsmen. Stories about artificially intelligent creatures were born in ancient Greek times. By discovering the true nature of the gods, man has been able to reproduce it.11 The old Greek myths of Hephaestus, the god of technology, and of Pygmalion incorporated the early idea of intelligent robots in Talos, Galatea and Pandora: lifelike metal automatons, appearing as mechanisms that were able to think and feel. The shift from myth to artifact made by Galatea's story brings us to Daedalus, the master craftsman, with his lifelike statues that abounded in the ancient world. These creatures acted as if they were alive, and would often have appeared to be, were it not for the fact that they were made of metal. Apparently, those ideas of artificial intelligence pre-dated the technology that would come to enable it. The myth would soon become a fact...12

9. Rubin, C. T. (2003), Artificial Intelligence and Human Nature | The New Atlantis [Online] Available: http://www.thenewatlantis.com/publications/artificial-intelligence-and-human-nature [Accessed 26 June 2017]
10. Pavlik, J. V. (2010), The Myths of Technology: Innovation and Inequality (Digital Formations). Oxford.
11. Hope, J. (2015), 7 Phases of the History of Artificial Intelligence. [Online] Available: http://www.historyextra.com/article/ancient-greece/7-phases-history-artificial-intelligence [Accessed 23 May 2017]
12. McCorduck, P. (2004), Machines Who Think (2nd ed.), Natick, MA: A. K. Peters.


And it did. Writers continued to speculate..

- Where do we stand today? Who are we today?
- What can we expect in the future? Where are we going to stand then?
- Who or what are we going to be?

This research attempts to overview how intelligence evolved through time, and to examine the various shifts its definition has undergone, directly affected by technological advances and by the changes our environment and we ourselves have undergone. What was the role of the human at each time in relation to the evolved intelligent machines? What was the relationship between humans and things, humans and architecture? How was the nature of the mind affected? What, after all, were the consequences for humankind? Technology's impact and influence on our planet is undeniable. Let's time-travel to the past, let's live through the old myths and learn about how intelligence developed and impacted us and our lives. The chapters are organized chronologically, but the themes may overlap, jumping back and forth in time to illuminate events that happened simultaneously...

‘‘To speak of a history, any history, as though there was but one somehow canonical history is misleading.. Any entity, culture or civilisation carries innumerable, in some ways differing, histories.’’ -Gordon Pask, ‘‘Interactions of Actors’’



CHAPTER 3

Frankenstein

"the prescient tale of techno-madness"

...and suddenly dramatic changes happen in life, and the combination of wonder and fear for our future overwhelms our souls... Can we handle it?

Industries are powered by Watt's engine, machines replace human labour and, with the industrialization of the world, arises the belief that "mechanical principles can govern all phenomena, from the movement of the planets to the beating of the heart..." 13 Robust growth and development take place. Mechanization and mass-production fuel the building industry and urbanization. And before the desirable utopian images of the machines serving people are established and celebrated, an open revolt begins between workers and machinery. And while some inventors are designing increasingly complex, automated machines to supplement, enhance, and even replace human activities, welcoming mechanization as a labor-saving and even lifesaving boon, others express fears of a world of mechanical droids or of being supplanted by automation. Frankenstein is responsible for unleashing the "monster" on the world, and he must pay with his death and that of his creature. What are the benefits and what are the dangers of mechanization and scientific invention? As the dominant race, it is our "duty" to save humanity by eliminating the threat. We, the "luddites"?! 14

Fig. 3.1 Frankenstein - The movie, 1931

13. Monika E. Lewis, Dr. Cady (2007), Frankenstein and the Industrial Revolution: "Powerful Engine." [Online] Available: https://sites.google.com/site/monikalewis02/frankensteinandtheindustrialrevolution [Accessed 23 May 2017]
14. The word "luddite" refers to a person who is opposed to technological change. The term is derived from a group of early 19th-century English workers who attacked factories and destroyed machinery as a means of protest. They were supposedly led by a man named Ned Ludd, though he may have been an apocryphal figure.


The Industrial Revolution, which took place from the 18th to the 19th century, was a period in which dramatic changes in all aspects of life transformed the world holistically. Technology was its greatest aspect, driving the "revolution" from the cottage industry15 to the mechanized age. Since then, every manufacturing process has sped up production, and labour has changed significantly. Better days were about to come...

In the late 19th and 20th century, during the second Industrial Revolution, also known as the Technological Revolution, developments in machines, tools, and computers gave rise to the automatic factory. The manufacturing process changed dramatically when machines forever displaced production by hand. Industrialization and the urgent need for cost-effective methods of production eventually led to the rise of mechanization, the factory system16 and mass-production. The iron and textile industries, along with the development of the steam engine and many other mechanical inventions, played central roles in the Industrial Revolution, which also saw improved systems of transportation, communication and banking. The beginning of the Industrial Revolution was marked by the invention of a plethora of devices that could substitute for manual labour in a precise, and thoroughly inhuman, way.17

Along with the Industrial Revolution came conflicting attitudes towards the benefits of mechanization, as well as a renewed interest in science for many people. On the one side, industrialization brought about an increased volume and variety of manufactured goods and an improved standard of living, while fascinating some people with the idea that the power of science could give life. At the same time it also resulted in often grim employment and living conditions, since some craftspeople were replaced by machines. And along with those threats came the fear of the consequences of attempting to copy the "mechanism" of nature onto those machines. The myths that promoted the machines at the beginning of the Industrial Revolution, through the desirable utopian images of the machines serving people (the positivity of Western rationalism), were to be transformed into stories of threat and fear... By that moment, machines appeared as the "dominant race" against humanity. New myths were about to be written... (with blood?) Those technological inventions were seen as competitors, about to replace and push out human workers. Could they become living entities, surpass human capacities and dominate the whole planet? People's worst nightmares were fuelled by the old myths. A "battle" was about to start between machines and humans...18

15. For hundreds of years, life in Europe focused on agriculture. Most people lived in the country and farmed a small piece of land for the subsistence of their own families. They made most of what they needed, including tools, furniture, and clothing, right at home, and traded for anything that they couldn't produce on their own. Some families earned a bit of extra money by producing surplus goods, especially spun thread and woven textiles, for sale to their neighbors or to traveling merchants, who provided them with raw materials. In this cottage industry, as it was called, household workers set their own schedules and their own pace; did their work by hand, using simple machines, like spinning wheels and weaving looms; and produced only a limited quantity of merchandise. By the middle of the 18th century, however, the merchants were demanding greater production and more profits, and innovations were arising that would soon give them exactly what they wanted and change the face of the world.
16. The factory system was first adopted in Britain at the beginning of the Industrial Revolution in the late 18th century and later spread around the world. The main characteristic of the factory system is the use of machinery, originally powered by water or steam and later by electricity.
17. The Industrial Revolution | Encyclopedia Britannica [Online] Available: https://www.britannica.com/technology/history-of-technology/The-Industrial-Revolution-1750-1900 [Accessed 24 May 2017]
18. Industrial Revolution | HISTORY [Online] Available: http://www.history.com/topics/industrial-revolution [Accessed 24 May 2017]
19. Shelley, M. (1818), Frankenstein; or The Modern Prometheus, United Kingdom: Lackington, Hughes, Harding, Mavor & Jones, pp. 87.


"I doubted at first whether I should attempt the creation of a being like myself or one of simpler organization; but my imagination was too much exalted by my first success to permit me to doubt of my ability to give life to an animal as complex and wonderful as man." 19

Mary Shelley, the writer of the Frankenstein story, was one of those who doubted the ability of machines to replace human labour. Her story implied that the scientist who tries to usurp the role of the creator is punished. Frankenstein, as a "prescient tale of techno-madness," endures today due to its "vivid... message of the dangers of mechanization and the problems of scientific invention" (Sale 16-17). The monster recognizes his power over the scientist, calling Frankenstein his "slave" and commanding: "You are my creator, but I am your master; - obey!" (Shelley 146). The combination of great power and free will that the monster acquires makes Frankenstein afraid of the consequences. The fears towards the first form of "non-human intelligence" are born as soon as humans realize that a "machine" can be stronger than themselves. It is not only a matter of replacement or exclusion, but also a matter of control. For here is the moral dilemma of science presented in concrete and implacable terms. Good can beget evil, and this outcome is only sometimes predictable. It is a story that deals with the separation that has to be maintained between living and non-living things. Both these components surround humans, but mixing one with the other is a serious transgression that results in dire consequences, because the man-made creature escapes the control of its master and turns against him. Frankenstein brings to mind the industrialization of the 19th century that inspired it, as well as prospects of human cloning or sentient robots that are explored in modern science fiction, and may come to pass in the future.20

Fig. 3.2 Frankenstein, or the Modern Prometheus

19. Shelley, M. (1818), Frankenstein; or The Modern Prometheus, United Kingdom: Lackington, Hughes, Harding, Mavor & Jones, pp. 87.
20. McCorduck, P. (2004), Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence. Natick, MA: A K Peters, Ltd.


“I saw the pale student of unhallowed arts kneeling beside the thing he had put together. I saw the hideous phantasm of a man stretched out, and then, on the working of some powerful engine, show signs of life…. supremely frightful would be the effect of any human endeavour to mock the stupendous mechanism of the Creator of the world” -Shelley 24

Fig. 3.3 Frankenstein, Mary Shelley, Book cover by Bernie Wrightson



design: the evolution of technics...

"Design can revolutionize thinking. It's an immediate jolt, or one that happens retroactively – years, even hundreds of thousands of years, later – like a time bomb." 21

Fig. 3.4 First stone tools

The first tools, the cutting instruments, were designed as objects not just made, but thought. Those artificial stones implied design, the application of forethought and an intelligent purpose. Those human artifacts, shaped for the hand of man, gave the human body a new set of abilities to cut into the world. They produced what Prestwich evocatively called "blows applied by design." Envisaging the possibility of a "techno-logy" that would constitute a theory of the evolution of technics, Marx outlined a new perspective. Engels evoked a dialectic between tool and hand that was to trouble the frontier between the inert and the organic. A new alliance between geology and archaeology had literally repositioned human culture within geology itself, detailing the evidence of human inventiveness in the absence of any human remains, in a kind of archaeology of the mind.

The extreme industrialization of the mid-19th century reinforced the sustained attempt to develop and promote a concept of "design" in everyday objects as a necessary response to the massive impact of industrialization on human life. This attempt was made during the dramatic encounter with the design of the very first tools and with a destabilizing sense of human intimacy with apes and the extended organic world. The "history of labor," with handmade implements, stone tools from the Bronze Age to the year 1800, was chronologically displayed at the 1867 Universal Exposition of Art and Industry in Paris. The historical timeline of design objects was seen as an exposition of the mental development of humankind. Design itself was understood as the very principle of human evolution, in an uncritical celebration of progress...22

21. Stiegler, B. (1998), Technics and Time: The Fault of Epimetheus, Stanford University Press.
22. Colomina, B., Wigley, M. (2017), Are We Human? Notes on an Archaeology of Design, Ch. 3: Blows of Design. Lars Müller Publishers, pp. 31.


The exponential acceleration of mechanization in the so-called Industrial Revolution was initiated in England and expanded as a vast interconnected mechanism digesting ever more territory, resources and people. The rapid, massive shift from the energy of humans to that of machines, compounded by the revolution of time and space with the emergence of train networks, and compounded again by the arrival of instantaneous communication by telephone, telegraph, and radio, was itself seen as an all-consuming life-form with its own biological needs and even desires. The acceleration of industrialization was accompanied by an increasingly urgent questioning of what constitutes the human. The word "design" was called on in the 1830s to negotiate between human and machine, a kind of belated echo of industrialization itself. Humans invented tools and artifacts to reinvent themselves. The concept of design remained a nineteenth-century project.

Fig. 3.5 Assembly line - Industrial Revolution

Design was itself understood as a form of education, but the rapid advance of technology created confusion and ignorance. Architects, designers, and all the creatives had to deal with the new circumstances of the era: machines. Who had the control after all? The designer or the industry? The Crystal Palace, the Eiffel Tower, the iron railroad station... The Industrial Revolution and its technological feats did not signify a gain in either intellectual or social status for nineteenth-century architects. Engineers and other technical devisers were the mythical protagonists of capitalist industrialization and the beneficiaries of the middle class's enthusiasm for technology.

Along with the Industrial Revolution came mechanization, which, through the expansion of the railway networks and the increasing mobility and transportation of goods and materials, directly resulted in the growth and congestion of industrial towns and the fast development of their building stock and infrastructure. It directly affected the building industry and the construction process. Along with technology, and the shift from natural resources to mass production, came the extended use of iron and particularly steel. That magical material was a game changer in architecture. Bigger, lighter, more open spaces were about to get built, since growth, urbanization, and the need for space made a vast call for building larger and taller. The growth of heavy industry was significant and brought a flood of new building materials, such as cast iron, steel, and glass, with which architects and engineers devised structures hitherto undreamed of in function, size, and form. Eventually the applications of steel led to the opening up of architectural possibilities in private construction. The application of industrially mass-produced building elements first manifested itself in the construction of large-scale steel structures. Steel made construction less a custom craft and more a wholesale, unskilled endeavor. Mass production was reflected in the building industry as well. Particular technological processes inherently favored particular outcomes. The bias towards high-pressure/high-temperature industrial processes steered places of manufacturing away from humans and toward large-scale, centralized factories, regardless of culture, background, or politics. Taller and taller office buildings were appearing on urban lots; skyscrapers were on their way...

Indeed, the mechanization and rationalization of manufacturing served architects and everyone else as a permanent reminder of the enormous potential of mass production and standardized components. The world fairs, with their demand for huge covered spaces and rapid construction, helped fix international attention on the new methods of building. Henceforth, a central part of the modernist architects' task of redefining their field would deal with the machine and with the rival figure of the engineer, the machine's symbolic master. One ideological strategy had been evolving since the beginning of the century: it took the industrial builder and the engineer as the "noble savages" of the new age.23

Fig. 3.6 Industrial Revolution technology

23. Larson, M. (1993), Behind the Postmodern Facade: Architectural Change in Late Twentieth-Century America. University of California Press, pp. 25.


we: mechanization of humans or humanization of machines?

Fig. 3.7 Homo naledi hands - human evolution

Design has been around since humans have existed. We design ourselves every day. We become human when we add things to ourselves, and our bodies are design because they represent the constant evolution of previous stages. Designing the body, designing the planet, designing life, designing time. The aim of re-designing the human being... We are design, the product of several stratified design levels. This emotional and physical human phenomenology says that design has existed ever since we have. Darwin once suggested that the use of stone tools would have affected the evolution of the shape of human hands, by favoring the hands best suited to manipulate those tools. The human hand becomes uniquely adapted to make and use tools. Hence, the inherited structure of the body is ultimately altered by its technological extensions. The human can actually change the shape of its own organism over countless generations. It is like a hidden mechanism in the body, which tends to redesign itself, directly affected by the designed tools.24

- But when machines take over, in what way is our "shape" changed? Are we redesigned by the machines to adapt to the machines, or to become machine-bodies and tools? How human is the human body after all?

Historically, machines have often been regarded as toys, or as agents of magic, marvel and fantasy. For philosophers, they have served as symbols and metaphors. Since the beginning of the mechanical age and the time of the Industrial Revolution, some have looked to machines to bring about progress toward utopia; others have feared them as the enemies of humanistic values, leading only to destruction. Early modern natural philosophy was characterized by a striking proliferation of machine metaphors. The merging of machines and organisms into integrated systems was born with the notion that the universe could be better explained as machine-like, as a kind of clockwork contraption. This shift was located in the 17th century, with Descartes, who transported the mechanization of the heavens into the heart of life. This deceptively simple idea, that living bodies are literally machines, was about to spread everywhere...

24. Colomina, B., Wigley, M. (2017), Are We Human? Notes on an Archaeology of Design. Lars Müller Publishers.


The scientific revolution was a decisive watershed in bringing together science and technology into a novel network of alliances that would underpin not only the production of knowledge about nature and machines, but also the implementation of human-machine systems in industrial settings. Descartes' "Traité de l'homme" (1630) has been the most sophisticated articulation of the early modern machine-body, a body permeable to technology and to the virtual gaze of machines. In the Cartesian system, all material things – human beings, animals, plants and inorganic nature – are machines, ruled by the same inexorable laws, and so susceptible to analysis by the quantitative methods of mathematics. Descartes had faith that there was no situation in human life, no problem facing mankind, that could not be solved if one applied the infallible, all-encompassing laws of mathematics. Life itself was seen as sheer mechanism. His was the first extended treatment of the body as an automaton, and the first thorough attempt to rewrite life in strictly machine-like terms. His understanding of biology was mechanistic in nature; his scientific work was based on the traditional mechanistic understanding that animals and humans are completely mechanistic automata. Thus a decisive turn was made in the conceptual enmeshing of living things and technical things. Descartes elaborated an extended account of the workings of human bodies based on a close analogy with machines, advancing the most thorough argument for a materialist model of life. It was an event of celebrated significance in the history of physiology, which occupied a pivotal place in the cultural histories of technology and of the human body. In the life sciences, it marked the moment technology became central to the conceptualization of organic life as a complex mechanism subject to physical laws.25

One of the chief obstacles that all mechanistic theories have faced is providing a mechanistic explanation of the human mind; Descartes, for one, endorsed dualism in spite of endorsing a completely mechanistic conception of the material world, because he argued that mechanism and the notion of a mind were logically incompatible. The view of man which emerges from Descartes' philosophical writings is that of an amalgam of two substances: one material and the other immaterial. Man has both body and mind. These two substances interact in some way that remained mysterious even to him. The mind is a spiritual substance, immortal, eternal and free, and the body is a machine, although a very complicated one. Descartes, in his Passions of the Soul and The Description of the Human Body, suggested that the body works like a machine, that it has material properties. The mind (or soul), on the other hand, was described as nonmaterial, not following the laws of nature. This form of dualism proposes that the mind controls the body, but that the body can also influence the otherwise rational mind, such as when people act out of passion. Descartes' dualism was motivated by the seeming impossibility that mechanical dynamics could yield mental experiences.

Hobbes, on the other hand, conceived of the mind and the will as purely mechanistic, completely explicable in terms of the effects of perception and the pursuit of desire, which in turn he held to be completely explicable in terms of the materialistic operations of the nervous system. Following Hobbes, other mechanists argued for a thoroughly mechanistic explanation of the mind, with one of the most influential and controversial expositions of the doctrine being offered by Julien Offray de La Mettrie in his "Man a Machine" (1748).

25. Vaccari, A. (2003), The Body Made Machine: On the History and Application of a Metaphor | Presented at The Flesh Made Text: Bodies, Theories, Cultures in the Post-Millennial Era. School of English, Aristotle University of Thessaloniki, Greece, 9-13 May 2003.


Yet the doctrine that man is a machine was argued most forcefully in 1751, long before the theory of evolution became generally accepted, by de La Mettrie; and the theory of evolution gave the problem an even sharper edge, by suggesting there may be no clear distinction between living matter and dead matter, and by pointing out that one useful way of regarding human beings as mechanisms is that all their functions are capable of being described in a logical, analytical way.26

By the mid-century, opinions about machines and their potentialities were extremely confused. Optimism and belief in progress existed side by side with great despair. Which opinion one held depended largely on one's position in society. In literature, the most optimistic exponent of progress and of technology's unlimited possibilities was Jules Verne. Science opened up all paths before his heroes, who could force any environment to yield to their will. A completely opposite view was that of Samuel Butler in "Erewhon" (1872). He foresaw a human race that had become parasitic upon the machine, making man an "affectionate machine-tickling aphid." In a reversal of La Mettrie's concept of man as machine, Butler depicted machines as human beings with intelligence and initiative.

Along with the dramatic acceleration of industrialization in the mid-nineteenth century, workers were increasingly treated as disposable machine parts, and machines were treated as organisms with an internal life that needed to be preserved. Many writers speculated on the possible demise of the human at the hands of the mechanized world that it had produced. In 1863, Samuel Butler published his polemic "Darwin among the Machines," speculating that the tools that humans had originally deployed as prosthetic extensions of their bodies were now evolving as living species in their own right.

"I am thinking of the modern machine, which is as it were alive, and to which the man is auxiliary, and not of the old machine, the improved tool, which is auxiliary to the man, and only works as long as his hand is thinking." (William Morris, 1886)

The machine was considered to be a human tool that had now become a new life-form, one eventually turning humans into its own tools. Morris was against the enslavement of humans to machines, but not against machines. In his book "News from Nowhere" (1890) he expressed his utopian visions of improved machinery and free workers, while embracing technology and the positive effects it could bring to our lives.

In the years to come, we started talking about evolution. Evolution and revolution of the technological society, heading towards human intelligence. It was obvious that the machinic evolution envisioned by the "creatives" of the era, through their imaginary books, had been moving extremely fast. The "conscious machines" of their dreams would transform into machines that could "reproduce" like living organisms, eventually turning into self-replicating machines. Although the visionary machines would become more human-like, they could also become dangerous for humans. While life under machine rule might have been materially comfortable for humans, the thought of humankind being superseded in the future was just as horrifying as the thought that our distant ancestors were anything other than fully human. Technology itself had become biological, a form of "mechanical life" that was already deploying humans to nourish it.

26. Popper, K. (1978), Of Clouds and Clocks: An Approach to the Problem of Rationality and the Freedom of Man, included in Objective Knowledge: An Evolutionary Approach, pp. 224.
27. Colomina, B., Wigley, M. (2017), Are We Human? Notes on an Archaeology of Design. Ch. 6: News from Nowhere. Lars Müller Publishers.


It was just a matter of time before the machine world would have no need of its human slaves to keep it alive: "we are ourselves creating our own successors... these glorious animals." 27 The possibilities that humans might either become superhuman or manufacture their own demise, like another Frankenstein, were already a subject of public debate. Visions of a technologically upgraded future, with its consequent risks...

Fig. 3.8 Samuel Butler, The Book of the Machines

Earlier in the century, Western culture in general could be characterized as subscribing to optimistic rationalism, a faith in human achievement and steady progress. Recent inventions and advances promised a new century of peace and prosperity. World War I changed the world; faith in humanity and civilization was shaken by the "brutal impersonality of modern machine warfare" (Morgan 2), and the technological boom after WWI was met with much ambivalence. The potential abuses of technology overshadowed the utopian notions that science and technology had inspired before the war.


CHAPTER 4

"The machine as an alternative to human brain"
From energy to information

"Fuelled by the exigencies of war, and drawing upon a diversity of intellectual traditions, a new form of intelligence emerged on Earth." 28

But what if some other physical entity, something that is indisputably a machine (for argument's sake, a computer), displays some intelligent behavior? Are we then prepared to agree that mind is mechanism, and that two kinds of intelligent agents, namely humans and computers, realize the same processes in physical systems?

Imagine if we could analyse the brain, understand its functions and imitate its intelligence! We could handle the future, create our own myths... What if we could make the machines work in favor of humanity? Let's build some brains!

By the time World War II begins, the rise of the machines becomes a fact. The war makes an urgent call for a vast technological augmentation, after the realization that machines can perform complex ballistic calculations faster and more accurately than human "brains." Soon "mechanical brains" begin to "think." Philosophers and psychologists attempt to define the mind, while mathematicians try to turn human logic into rigorous mathematics. Rossum's scientists implying the man-versus-machine dilemma, radar technology, automation, cybernetics, adaptive mechanisms, electronic computers, engineering, mathematics, calculations take the lead. A new model of information in an open system is established. Control and communication come to shift fundamentally. The high-tech war fathers a range of innovations that forever change how humans relate to machines, and especially to computers. WWII triggers fresh thinking...

28. Kurzweil, R. (1990), The Age of Intelligent Machines, Ch.6: Electronic Roots. USA: MIT Press.


Compared to previous wars, World War II had the greatest effect on the technology and devices that are used today. While some technological advancements were made prior to the war, other developments were a direct result of the trials and errors suffered during it. The WWII era housed a great many changes which affected intelligence, as technology played a greater role in the conduct of WWII than in any other war in history, and had a critical role in its final outcome. No war was as profoundly affected by science, maths, and technology as WWII. During World War II, science became mobilized on a grand scale. Numerous laboratories focused on everything from electronics to medical research to psychological testing. By the end of the war, the atomic bomb made it clear that science had "lost its innocence," becoming a critical tool of military power, while at the same time government money was given for research. Scientists became advisors to presidents on the most pressing issues of national and foreign policy. Ever since World War II, the American government has mobilized science, mathematics, and engineering on a vast scale. 29

Formal reasoning: "the process of human thought can be mechanized" 30

Besides the body and its physiology, as discussed previously, the mind and the way it operates have always been a mysterious path. As soon as the mystery could be solved, machines could be made to simulate human thought, upgrading their nature to be closer to that of humans and coming a step closer to the fictional figures of our dreams. The assumption that the process of human thought can be mechanized has always been the driving force towards the dream of AI. For that reason, the study of mechanical or "formal" reasoning has a long history, going back to the 17th century, when Leibniz, Thomas Hobbes and René Descartes explored the possibility that all rational thought could be made as systematic as algebra or geometry.31 These philosophers began to articulate the physical symbol system hypothesis that would become the guiding faith of AI research.

In 1854 George Boole set out to "investigate the fundamental laws of those operations of the mind by which reasoning is performed," inventing Boolean algebra (a small sketch of which follows below). He was particularly keen to ensure that his mathematics really could capture the laws of mental activity. Newtonian physics, meanwhile, which had ruled from the end of the seventeenth century to the end of the nineteenth with scarcely an opposing voice, described a universe in which everything happened precisely according to law: a compact, tightly organized universe in which the whole future depends strictly upon the whole past.32

In the 20th century, the study of mathematical logic provided the essential breakthrough that made artificial intelligence seem plausible. The passion of researchers for solving the mystery of mind and thought came back to the foreground. Building on the previous work of Boole's "The Laws of Thought" and Frege's "Begriffsschrift," Russell and Whitehead presented a formal treatment of the foundations of mathematics in their masterpiece, the "Principia Mathematica," in 1913, which revolutionized formal logic.33, 34

29. Mindell, D., The Science and Technology of World War II | The National Museum of World War II [Online] Available: https://www.nationalww2museum.org/sites/default/files/2017-07/s-t-teacher-and-student.pdf [Accessed 25 May 2017]
30. Berlinski, D. (2000), The Advent of the Algorithm, Harcourt Books.
31. McCorduck, P. (2004), Machines Who Think (2nd ed.), Natick, MA: A K Peters, Ltd, pp. 37-46.
32. History of Formal Reasoning [Online] Available: http://aiinformatique.altervista.org/category/history-and-formal-reasoning/?doing_wp_cron=1503348462.1458170413970947265625 [Accessed 25 May 2017]
33. Berlinski, D. (2000), The Advent of the Algorithm, Harcourt Books.
34. Crevier, D. (1993), AI: The Tumultuous Search for Artificial Intelligence, New York, NY: BasicBooks.
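Boole's claim that laws of reasoning can be written as algebra is easy to make concrete. A minimal Python sketch (my illustration, not drawn from the thesis or from Boole's own notation) checks two classic Boolean identities by exhaustively enumerating truth assignments:

```python
# Enumerate every truth assignment and verify that a proposed "law of
# thought" holds in all of them: Boole's algebra of logic in miniature.
from itertools import product

def is_law(expr, arity):
    """Return True if `expr` evaluates to True under every truth assignment."""
    return all(expr(*values) for values in product([False, True], repeat=arity))

# De Morgan's law: not (p and q)  ==  (not p) or (not q)
print(is_law(lambda p, q: (not (p and q)) == ((not p) or (not q)), 2))            # True

# Distributivity: p and (q or r)  ==  (p and q) or (p and r)
print(is_law(lambda p, q, r: (p and (q or r)) == ((p and q) or (p and r)), 3))    # True
```

Exhaustive checking works because Boolean variables take only two values, so an n-variable law has just 2^n cases to test.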


"Actions are defined by the mind. The mind is driven by thoughts. The thought process is what defines intelligent actions." The development of machines that somewhat mimic humans in their actions was the goal of the era. How could it be possible to mimic the human thought process by artificial means? The previous era's myths were about to become reality. The dream of mimicking the human mind! 35

That dream was firing the hearts and minds of the creatives of the era, who started envisioning the new stars of their stories, brought to life by humans and fuelled by their creators' human intelligence. The myths were actually written and the movies were actually released. The very first name of these stars was to be "robots." And they were determined to spread the message of the era...

1920 was the year when the word "robot" appeared for the first time. It was introduced through the theatre work R.U.R. (Rossum's Universal Robots) by Karel Čapek, referring to the smart machines manufactured by humans and expected to be used as their servants. It was a comedy, partly of science, partly of truth. The odd inventor Mr. Rossum was a typical representative of the scientific materialism of the last century. His desire to create an artificial man in the chemical and biological sense was inspired by a foolish and obstinate wish to prove God unnecessary and absurd. He was a young scientist, untroubled by metaphysical ideas; scientific experiment to him was the road to industrial production. He was not concerned to prove, but to manufacture.36

The man-versus-machine dilemma appeared once more: the play introduced the idea of a robot uprising that wiped out mankind, and it foresaw concerns about widespread technological unemployment as a consequence of automation. Its robots were androids, with a human appearance as well as the ability to think for themselves. In the uprising, the robots killed all the humans except for one, and the play ended with two of them discovering human-like emotions, which seemed to set them up to begin the cycle all over again.

As technology progressed - more specifically computer technology - and machines increasingly started taking over mundane functions, the concept of robots appealed to people. Could computer-controlled machines take over everything from people? Could these machines think like people? As we sometimes use robotic body parts already, how much further can we go? Could one make a robot that looked and behaved like a human? With this came the intriguing threat of robots going out of control, which has since been the subject of numerous science fiction books.38

This theatrical work has been the trigger for many other science fiction works that came to present the dominance of technological power over human nature, trying to eliminate the anthropocentric understanding of the world. Human thoughts were fed by the fear of machines mimicking humans and eventually exceeding them. They presented a world emerging through the darkness of an industrialized society that had already begun to rely more on machines than on man for survival...

35. De Silva, C., (2000), Intelligent Machines: Myths and Realities, London UK: The Book Depository US 36. Klima, I., (2002), Karel Čapek: Life and Work, Catbird Press, pp.78-80 37. Dr. Delahoyde, M., (2001), Karel Capek R.U.R, London Saturday Review interview, [Online] Available: http:// public.wsu.edu/~delahoyd/sf/r.u.r.html [Accessed 18 June 2017] Washington State University. 38. Vegter, W., (2007), Karel Capek: ‘‘Mummy, where do robots come from?’’ [Online] Available: http://wvegter. hivemind.net/abacus/CyberHeroes/Capek.htm [Accessed 25 May 2017]


‘‘Those who think to master the industry are themselves mastered by it; Robots must be produced although they are a war industry, or rather because they are a war industry. The product of the human brain has escaped the control of human hands. This is the comedy of science.’’ 37

Fig. 4.1 Karel Čapek, Rossum’s Universal Robots (R.U.R.)

Fig. 4.2 Rossum’s Universal Robots (RUR), 1938


what robots?

The concept of creating machines that can operate autonomously dates back to classical times, but research into the functionality and potential uses of robots did not grow substantially until the 20th century.39 Throughout history, it has frequently been assumed that robots would one day be able to mimic human behavior and manage tasks in a human-like fashion, but fully autonomous machines only appeared in the second half of the 20th century. Robots, as the new entities of the era, imitating human nature while being artificial, gave birth to the doubts, the fears and the moral hesitation about the future of humanity. In 1950, ‘‘I, Robot’’ was published - a collection of short stories by science fiction writer Isaac Asimov. Asimov was one of several science fiction writers who picked up the idea of machine intelligence and imagined its future. His work was popular, thought-provoking and visionary, helping to inspire a generation of roboticists and scientists. He also imagined developments that seem remarkably prescient - such as a computer capable of storing all human knowledge.40 It was in 1942, under the ‘‘upcoming darkness,’’ that Isaac Asimov formulated his ‘‘Three Laws of Robotics,’’ publishing the science fiction short story ‘‘Runaround’’ in the March issue of ‘‘Astounding Science Fiction:’’

1. A robot may not injure a human being, or through inaction allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.41

The laws proved so compelling that many other science fiction writers adopted them as naturally as gravity - laws that were given, and broken only under the most unusual circumstances. This was the first time scientists had to deal with the ethical part of the field. At that point, AI began to be seen as a serious field of study.
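Read as a specification, the three laws form a strict priority ordering. A toy sketch of how such a lexicographic hierarchy might filter a robot’s candidate actions - entirely our own illustration, with hypothetical predicates, not anything Asimov wrote:

# Lexicographic filtering over candidate actions (hypothetical predicates).
def permitted(actions, harms_human, disobeys_order, harms_self):
    actions = [a for a in actions if not harms_human(a)]   # First Law: absolute
    obedient = [a for a in actions if not disobeys_order(a)]
    actions = obedient or actions                          # Second Law: if satisfiable
    safe = [a for a in actions if not harms_self(a)]
    return safe or actions                                 # Third Law: lowest priority

# 'rescue' endangers the robot itself; 'wait' disobeys a human order.
print(permitted(["rescue", "wait"],
                harms_human=lambda a: False,
                disobeys_order=lambda a: a == "wait",
                harms_self=lambda a: a == "rescue"))
# ['rescue'] - obedience to humans outranks self-preservation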

Fig. 4.3 ‘‘I, Robot’’ by Isaac Asimov

39. Nocks, L., (2007), The Robot: The Life Story of a Technology. Westport, CT: Greenwood Publishing Group.
40. BBC | iWonder, (2015), AI: 15 key moments in the story of Artificial Intelligence [Online] Available: http://www.bbc.co.uk/timelines/zq376fr [Accessed 26 May 2017]
41. Asimov, I., (1950), ‘‘Runaround,’’ I, Robot (The Isaac Asimov Collection ed.) New York City: Doubleday, pp.40


While the myths had been driving our beliefs and shaping our views towards the rise of the machines and the new form of intelligence, cybernetics and its pioneers came to overturn the image of the future. Our fears dressed as hopes. By the time the war ended, the shift of views towards machines was already a fact! From machines of assured destruction to machines of loving grace! This shift was achieved by Norbert Wiener, an eccentric mathematician, who took a set of ideas from electrical engineers and weapons designers, refined them, repackaged them and presented them with a generous gesture to the public! People suddenly started getting enchanted by machines! A turn was made in the way people viewed technology, since they began projecting their hopes and fears onto the future of thinking machines!

From World War II to the mid-1950s, speculation about mind in terms of machine models remained an exceptionally rich, diverse and fascinating field, in which cybernetics took the lead. It was in 1948 that Norbert Wiener adopted the word ‘‘cybernetics,’’ through his first book ‘‘Cybernetics,’’ defining it as the science of control and communication in the animal and the machine.42 There, he revealed the magic of feedback loops, of self-stabilizing systems, of machines that could pursue a purpose and could even self-reproduce, at least in theory. The machine suddenly seemed lifelike! The thinking machine..

Cybernetics, without doubt, was one of the 20th century’s biggest ideas, a veritable ideology of machines born during the first truly global industrial war that was itself fueled by ideology. Its ideas shifted shape several times, adding new layers to its twisted history decade by decade. It was in fact a general theory of machines, a curious post-war scientific discipline that sought to master the swift rise of computerized progress. From the early 1940s, it was about computers, control and security, and the ever-evolving interaction between humans and machines. It was about finding the features that were common to automatic machines and the human nervous system. Brain and machine overlapped in a wide area. The dream of AI was born, with the hope that cybernetics could soon make it come true..

Many attempts and lines of research were pursued in order to come closer to the human brain and the decoding of the way it operated. The goal of the era was the creation of an artificial, constructable brain.43 In 1943, a paper entitled ‘‘Behavior, Purpose and Teleology,’’ by Arturo Rosenblueth, Norbert Wiener and Julian Bigelow, proposed an information-processing model that would allow greater flexibility in hypothesizing about the mind. At the same time, McCulloch and Pitts’ ‘‘A Logical Calculus’’ was startling enough, stating as it did that the laws governing mind should be sought among the laws governing information rather than energy or matter. It was no wonder that the computer began to be called a thinking machine. Cybernetics made some important steps towards the understanding of the brain, by building electromechanical devices that were themselves adaptive. On 13 December 1948, the Daily Herald carried a front-page article entitled ‘‘The Clicking Brain is Cleverer than Man’s,’’ featuring a machine called the ‘‘homeostat’’ built by W. Ross Ashby.
‘‘It was the closest thing to a synthetic human brain so far designed by man, giving the ‘‘promise’’ that the machine would one day be developed into an artificial brain.’’ Soon the rest of the press in Britain and around the world followed suit with articles such as ‘‘The Thinking Machine’’ (January 1949), and by March of the same year Ashby was holding forth on BBC radio on ‘‘imitating the brain.’’

42. Wiener, N., (1948), Cybernetics, or Control and Communication in the Animal and the Machine. Cambridge: MIT Press.
43. Rid, T., (2016), Rise of the Machines: A Cybernetic History, W. W. Norton Company.
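Ashby called the principle behind the homeostat ‘‘ultrastability’’: whenever its essential variables drifted out of bounds, the machine rewired itself at random until it stumbled back into equilibrium. A minimal sketch of that idea, ours alone - the real homeostat was an electromechanical device of four interacting units, not a program:

import random

# A state x evolves under a feedback weight w. If x leaves its
# 'essential' bounds, the system rewires w at random (ultrastability).
x, w = 1.0, 1.5                       # start with destabilizing feedback
for step in range(100):
    x = w * x                         # one pass around the feedback loop
    if abs(x) > 10:                   # essential variable out of bounds
        w = random.uniform(-1, 1)     # random rewiring, as in the homeostat
        x = 1.0                       # perturb and try again
print("settled feedback weight:", w)  # |w| < 1, so x now decays to rest

The machine never knows which wiring is stable; it merely keeps discarding wirings that fail, which is what made the homeostat look eerily lifelike to its observers.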


At much the same time, the neurophysiologist W. Grey Walter built wheeled automatons in order to experiment with goal-seeking behavior: Elmer and Elsie, the first examples of his robot ‘‘tortoises,’’ small electromechanical robots referred to as members of a new inorganic species, Machina speculatrix, modelling a certain form of adaptive behavior, while his book explored the electrical properties of the individual neurons which together make up the brain. From a scientific perspective, the robot tortoises and the homeostat were original examples of adaptive mechanisms, and they were at the forefront of ‘‘brain science’’ in the late 1940s and throughout the 1950s, illuminating the workings of adaptation to an unknown environment..44
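Walter’s tortoises steered by a single photocell: the machine turned and crept forward until its sensor found stronger light, closing a feedback loop between body and environment. A crude sketch of such phototaxis (ours alone - the actual tortoises were analogue valve circuits, not programs):

import math

# A point robot seeks a light at the origin; brightness falls off
# with distance, and the robot turns toward its brighter side.
def brightness(x, y):
    return 1.0 / (0.01 + x * x + y * y)

x, y, heading = 5.0, 3.0, 0.0
for step in range(400):
    left = brightness(x + 0.1 * math.cos(heading + 0.3),
                      y + 0.1 * math.sin(heading + 0.3))
    right = brightness(x + 0.1 * math.cos(heading - 0.3),
                       y + 0.1 * math.sin(heading - 0.3))
    heading += 0.2 if left > right else -0.2   # turn toward the light
    x += 0.05 * math.cos(heading)              # then creep forward
    y += 0.05 * math.sin(heading)
print("final distance to the light:", round(math.hypot(x, y), 2))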

- But what was the role of computers then, in relation to the decoding of human thought? The computer was the tool for the production of intelligent behavior. The rapid growth of computing dates back to the advent of electrical computing at the beginning of the twentieth century. The vision of the world’s first programmable computer took shape in the England of the 19th century, when Babbage’s Analytical Engine45 became a remarkable foreshadowing of the modern computer. Despite Babbage’s inability to finish any of his major initiatives, his concepts of a computer with a stored program, self-modifying code, addressable memory, conditional branching, and computer programming itself still form the basis of computers today. Since then, Babbage has been the first inspirational figure of the century, leading to the creation of the first American programmable computer, the Mark I, completed in 1944 by Howard Aiken of Harvard University.46 By the time the world’s first computer ran the first stored program, the notion of artificial intelligence was born..
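The ingredients that list names - addressable memory, a stored program, conditional branching, even self-modifying code - can be shown working in a few lines. A toy machine of our own devising (not Babbage’s actual design) keeps its program in the same memory it computes on, so one instruction can overwrite another:

# A toy stored-program machine: instructions and data share one memory,
# so the program can branch on data and even rewrite itself.
memory = [
    ("dec", 10),           # 0: decrement the counter at address 10
    ("jnz", 10, 0),        # 1: conditional branch - loop while nonzero
    ("copy", 11, 3),       # 2: self-modification - overwrite instruction 3
    ("jnz", 10, 0),        # 3: replaced by a halt before it ever runs
    None, None, None, None, None, None,
    3,                     # 10: a data cell - the loop counter
    ("halt",),             # 11: an instruction stored as plain data
]

pc = 0
while memory[pc][0] != "halt":
    op = memory[pc]
    if op[0] == "dec":
        memory[op[1]] -= 1
    elif op[0] == "jnz" and memory[op[1]] != 0:
        pc = op[2]
        continue
    elif op[0] == "copy":
        memory[op[2]] = memory[op[1]]
    pc += 1
print("counter:", memory[10])   # 0 - the loop ran, then the program halted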

‘‘We must remember that another and higher science... has given to us in its own condensed language, expressions, which are to the past as history, to the future as prophecy... It is the science of calculation which becomes continually more necessary at each step of our progress, and which must ultimately govern the whole of the applications of science to the arts of life.’’ -Charles Babbage, 1832

44. Pickering, A., (2010), The Cybernetic Brain: Sketches of Another Future, University of Chicago Press.
45. The Analytical Engine was a proposed mechanical general-purpose computer designed by English mathematician and computer pioneer Charles Babbage. It was first described in 1837 as the successor to Babbage’s difference engine, a design for a mechanical computer. The Analytical Engine incorporated an arithmetic logic unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first design for a general-purpose computer that could be described in modern terms as Turing-complete. In other words, the logical structure of the Analytical Engine was essentially the same as that which has dominated computer design in the electronic era.
46. The IBM Automatic Sequence Controlled Calculator (ASCC), called Mark I by Harvard University’s staff, was a general-purpose electromechanical computer that was used in the war effort during the last part of World War II. One of the first programs to run on the Mark I was initiated on 29 March 1944 by John von Neumann, who worked on the Manhattan Project at the time and needed to determine whether implosion was a viable choice to detonate the atomic bomb that would be used a year later. The Mark I also computed and printed mathematical tables, which had been the initial goal of British inventor Charles Babbage for his ‘‘analytical engine.’’


And the time came when the inversion of logic concerning our view of intelligence was established! The basic influence has been Turing’s still dominant model of computation. Ever since the human brain came to be considered the seat of our thoughts, desires and dreams, it has been compared to the most advanced technology possessed by mankind. Through time, popular ‘‘complexity’’ metaphors for the brain have evolved.47

Let us begin with the so-called era of ‘‘computing without computers,’’ when experiments could lead to great conclusions - the time when the theoretical base for computing was set.. The computer revolution was led by Alan Turing and John von Neumann, who both contributed to the realization of computers whose program was stored in memory and could be modified during execution. This idea appeared originally in the form of the Turing Machine, and was given practical realization in the so-called von Neumann architecture of the first electronic computers, such as the EDVAC.48 While this computing design seems natural, even obvious, to us now, it was at the time a significant conceptual leap.49

Turing’s Machine was to become the main theoretical construct in modern computer science. In 1936, the model of the Turing Machine laid the theoretical foundation for computing. Turing did not envision his machine as a practical computing technology, but rather as a thought experiment that could provide a precise definition of a mechanical procedure - an algorithm. Turing’s universal machine could in theory carry out any computing task that any special-purpose automaton could do. It had the unprecedented ability to emulate divergent and multivalent processes. This was the concept of a reprogrammable digital computer..50

How could this new technology be used for the emulation of intelligence? In September 1947, Turing wrote a paper called ‘‘Intelligent Machinery’’ (Turing, 1969), discussing the possible ways in which machinery might be made to show intelligent behavior. The analogy with the human brain was used as a guiding principle, leading to the central issue: ‘‘man as machine.’’ Turing’s metaphor has become the very definition of computation. The metaphors for the brain have entrenched it as the equivalent of Turing’s form of computation, and thus rationalism largely assumed that the human brain was a Turing machine, carrying out Turing computation and controlling its periphery, the human body. In October 1950, with his classic paper ‘‘Computing Machinery and Intelligence,’’ came the question: ‘‘Can machines think?’’ The idea that computers can think suddenly became very attractive. Turing went on to invent the so-called Turing Test, which set the bar for an intelligent machine: a computer that could fool someone into thinking they were talking to another person. He also described an agenda that would in fact occupy the next half century of advanced computer research: game playing, decision making, natural language understanding, translation, theorem proving, and, of course, encryption and the cracking of codes.

47. Pfeifer, R., Bongard, J., (2007), How The Body Shapes The Way We Think: A New View of Intelligence, A Bradford Book, The MIT Press.
48. EDVAC (Electronic Discrete Variable Automatic Computer) was one of the earliest electronic computers. Unlike its predecessor the ENIAC, it was binary rather than decimal, and was a stored-program computer. EDVAC was delivered to the Ballistics Research Laboratory in August 1949.
By 1960 EDVAC was running over 20 hours a day, with error-free run time averaging eight hours. EDVAC ran until 1961, when it was replaced by BRLESC. During its operational life it proved to be reliable and productive for its time.
49. Hsu, S., (2015), Don’t Worry, Smart Machines Will Take Us With Them: Why human intelligence and AI will co-evolve | Nautilus. [Online] Available: http://nautil.us/issue/28/2050/dont-worry-smart-machines-will-take-us-with-them [Accessed 27 May 2017]
50. Rocker, M. I., (July/August 2006), When Code Matters, Programming Cultures: Art and Architecture in the Age of Software | Architectural Design magazine, Vol 76, No 4, pp.20


This list turned out to be the main part of the program that occupied artificial-intelligence researchers for the next two decades.51
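Turing’s abstract machine is simple enough to sketch directly: a tape, a head, and a finite table of rules. A minimal simulator (a textbook-style illustration of ours, not any historical program) whose table increments a binary number written on the tape:

# A minimal Turing machine: (state, symbol) -> (write, move, next state).
# This particular table increments a binary number on the tape.
table = {
    ("right", "0"): ("0", +1, "right"),
    ("right", "1"): ("1", +1, "right"),
    ("right", " "): (" ", -1, "carry"),   # reached the end; go back, carrying
    ("carry", "1"): ("0", -1, "carry"),   # 1 plus carry -> 0, carry moves left
    ("carry", "0"): ("1",  0, "halt"),
    ("carry", " "): ("1",  0, "halt"),    # the number grew a new leading digit
}

tape = dict(enumerate("1011"))            # unbounded tape; blank cells are " "
head, state = 0, "right"
while state != "halt":
    write, move, state = table[(state, tape.get(head, " "))]
    tape[head] = write
    head += move
print("".join(tape.get(i, " ") for i in range(min(tape), max(tape) + 1)).strip())
# prints 1100: binary 11 + 1 = 12

Swapping in a different table yields a different special-purpose machine; encoding the table itself on the tape is what makes the machine universal.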

Fig. 4.4 Alan Turing, Enigma machine, 1952

Fig. 4.5 Elmer and Elsie robots, Walter tortoises

design: designing ideas..

Once designers step away from industrial production and the marketplace they enter the realm of the unreal, the fictional, or what we prefer to think of as conceptual design - design about ideas. It has a short but rich history, and it is a place where many interconnected and not very well understood forms of design happen - speculative design, critical design, design fiction, design futures, antidesign, radical design, interrogative design, design for debate, adversarial design, discursive design, future-scaping, and some design art.52

The shock of the war, the shock of the machine, the shock of the metropolis had anaesthesia in common: the temporary removal of feeling, be it physical or psychological. Thus, design was seen as the design of neglect; the world had developed an ability to watch everything yet do nothing. In ‘‘Experience and Poverty’’ (1933) Walter Benjamin wrote about people returning from WWI poorer in experience, unable to communicate, silent, in shock after feeling the full force of modern technology: ‘‘A generation that had gone to school in horse-drawn streetcars now stood in the open air, amid a landscape in which nothing was the same except the clouds, and, at its center, in a force field of destructive torrents and explosions, the tiny, fragile human body.’’ Feeling was no longer possible. Humans were anaesthetized. This poverty of experience was expressed in modern design. This lack of action was designed..53

During the 20th century, industrialized building was pursued in step with the need to adapt architecture to the era of the machine. The building industry started adopting industrial production methods during the 1920s and 1930s, in a push to solve the housing shortage in the growing towns after the First World War. Serial mass production, time and

53. Colomina, B., Wigley, M., (2017), Are We Human? Notes on an Archaeology of Design, Ch.7: Good Design is anesthetic. Lars Müller Publishers.


cost savings were achieved by designing a limited number of identical building elements to construct slightly different housing types. At the same time there was an attempt to reduce and simplify the number of stages involved in building on the construction site, to increase the employment of unskilled labour and to shorten the completion time. Walter Gropius’s Törten housing estate in Dessau, Germany (1928) is perhaps one of the best-known examples, along with the Hausbaumaschine (‘‘House Building Machine’’) developed and published during the Second World War by Ernst Neufert. This process-oriented initiative differed completely from the approaches of the 19th century in its ambition to change the organisation of the building site instead of just responding to, and borrowing from, the innovations and products developed by other sectors of industry. For the first time the serial mass-production of elements and the use of industrial fabrication methods in the building industry seemed to make sense, to meet the urgent task of reconstruction and the demand for housing in the post-war period, as well as during the following boom years between 1950 and 1970. The ultimate goal of that era was to express the positive attitude of its supporters towards technology and to extend their experiments by proposing a wide-ranging change. In spite of its attempt to move away from a technological utopia, the industrialization of building has remained impregnated with utopian aspirations, such as the reconciliation of nature with technology, the liberation of man from hard work, or the ambition of man to dwell in all parts of the planet..

Proto-modern designers at the turn of the 20th century, like Otto Wagner, Frank Lloyd Wright and Peter Behrens, presented their designs as a kind of interface between human and machine. They engaged the increasingly mechanized world and the new forms of life that it supported, but also tried to protect the rapidly evolving human. Design was a form of defense, the shock absorber. At the end of WWII, the shock of the post-war years was the shock of nuclear annihilation. ‘‘Good design’’ offered the ‘‘good life,’’ a galaxy of happy, self-contained objects for people who did not feel safely contained and couldn’t be sure of life itself. The real function of good design remained anaesthetic, a symptom of a trauma that couldn’t be expressed, a smooth line of defence..54 Design, as a paradoxical gesture, attempted to change the human in order to protect it.

‘‘The feeling of security of our ancestor came in the seclusion and confinement of his cave.. The man of the future does not try to escape the elements. He will rule them. His home is no more a timid retreat: the earth has become his home.. The comfort of the dwelling lies in its complete control of space, climate, light, mood, within its confines.’’ 55

The human had to be discovered again, and the whole point of design should be to offer this gesture of rediscovery to the users, who would paradoxically finally feel human in being at once extended and completed. Le Corbusier presented modernity as a return to a primitive nomadic existence - ‘‘the modern nomad.’’ He spoke about the rebirth of the human body and brain. In ‘‘The Decorative Art of Today’’ (1925) he described design as an ‘‘orthopaedic art,’’ the prosthetic extension of the body with ‘‘artificial limbs.’’ The new body parts transform human capacity but need to be replaced whenever a better tool comes along. The tools themselves are living beings,

54. Colomina, B., Wigley, M., (2017), Are We Human? Notes on an Archaeology of Design, Ch.7: Good Design is anesthetic. Lars Müller Publishers.
55. Schindler, R. M., (1912), Manifesto for ‘‘modern architecture,’’ Vienna.


like a powerful or delicate species of animal of astounding ability. ‘‘We have bred a race of machines to work for us and ‘‘machines beget machines’’ in a relentless evolution of technological capacity that seems miraculous and exceeds our understanding in a dismaying complexity of organs.’’ 56

For Hannes Meyer, ‘‘building is a biological process.’’ His 1926 ‘‘The New World’’ described the transformative effect of the total mechanization of our planet: the victory of man the thinker over amorphous nature..that gives our new world its shape. Nothing is what it was. Machines have expanded bodily and brain functions. Homes have become mobile. People no longer have a homeland. The house is not just a machine for living, but a ‘‘living machine’’ for the ‘‘semi-nomad.’’ He described the house as the biological apparatus serving the needs of body and mind. Also, Buckminster Fuller saw design as the scientific project of providing shelter for the human by engaging directly with its biology. His 1938 investigation of the question ‘‘what is man?’’ argued not only that the human body and brain are inseparable from their ‘‘prosthetic’’ extensions with technologies, but that the human body was itself the first tool, a technology that can and should be modified. Le Corbusier insisted that the human remains the regulating constant of this new ecology of artificial life-forms, as the essential ‘‘compass’’ of design, the scale and the function. ‘‘The human is treated as something fundamental, a given with physiological, emotional, and intellectual needs to be addressed by design.’’ 56 The species is literally being remade in a radical transformation of its biology and mentality, yet the guiding reference point is still its very first gesture: the invention of the first ‘‘artificial limbs’’ to protect its vulnerable naked body and survive in the ‘‘inhuman’’ conditions of nature..

Fig. 4.6 Neufert, Hausbaumaschine

56. Colomina, B., Wigley, M., (2017), Are We Human? Notes on an Archaeology of Design, Ch.9: Human-centered Design. Lars Müller Publishers.


we: our biological capacities as inspiration for technology..

Fig. 4.7 Body proportions - the golden ratio, Ernst Neufert, 1936

The human body that had been thoroughly industrialized in the late 19th-century workplace turned into a mechanism. After the horror of WWI, the ‘‘new man’’ that had already been standardized into an efficient machine part was increasingly portrayed as a traumatizing and traumatized automaton, whether as a biotechnical hybrid of flesh and technology or as a faceless mannequin. In the 20th century, the cosmic human was displaced by the human understood as a technical instrument made of coordinated parts whose motion needed to be integrated into the mechanics of the home. Man was seen ‘‘as measure and goal.’’ It was at the same time, in the 1930s, that Neufert introduced his figures, providing standardized measures for any human activity.57

Since World War II, with the explosion of cybernetics and commodity culture and the growing awareness of the brutal potential of technology in its militaristic forms, the utopian view has collapsed. Enactment or performance have replaced translation as modes for articulating the hinge between body and technology.58 The goal was still the same as in the late 1940s: understanding the human being and its social world, controlling and mimicking it. The early cybernetics of Walter and Ashby directly concerned the brain as an anatomical organ. Ashby’s homeostat was seen as ontological theater - as a model for a more general state of affairs: a world of dynamic entities evolving in performative (rather than representational) interaction with one another; a model of the sort of adaptive processes that might happen in the brain. Ashby’s electromechanical assemblages themselves had, as their necessary counterpart, an unknowable world to which they adapted performatively. As ontological theater, his brain models inescapably return us to a picture of engagement with the unknown.59

At the same time, along with the mystery of the brain, appeared the notion of the body. The body was seen as a natural machine. Machines started inundating our bodies at an increasing rate because the human body was already, in essence, technology. In what sense were we considered to be technology? The post-modern body, still moving on the axis of the contained (Durand 1963), could operate on many different levels.

57. Colomina, B., Wigley, M., (2017), Are We Human? Notes on an Archaeology of Design, Ch.10: The frictionless silhouette. Lars Müller Publishers.
58. Jones, A., (2009), The Body and Technology | Art Journal, Vol.60, No.1 (Spring, 2001). College Art Association, pp.20.
59. Pickering, A., (2010), The Cybernetic Brain: Sketches of Another Future, University of Chicago Press.


Humans were seen as social and cultural constructs, but above all as natural machines of the domestic sphere (Katz 1999). The body, while being the real, great domestic technology, was represented as the emblem of naturalness, although it was artifice to the maximum degree (artificium in Latin means the art of improving a thing). The body worked as a natural machine and as a technology, but was neither machine nor technology; it was much more. And we humans were much more than our bodies. Thus society, seeking psychic reproduction, attempted to create artificial intelligence systems (Collins 1990). AI involved a separation of mind and body, or more precisely the development of a mind without a body.60 We were entering a period of manufacturing creativity. We humans were transforming into the designers of some new species, giving birth to the creatures of the myths we embraced: automatons, golems, androids, robots.

Fig. 4.8 WWI amputees learn to use artificial limbs

60. Katz, J. E., (2002), Machines that Become Us: The Social Context of Personal Communication Technology. New Brunswick, NJ: Transaction Publishers.



CHAPTER 5

What is intelligence?

‘‘Ours. Or anyone - or anything - else’s’’ ai and the ancient urge to reproduce ourselves

What is intelligence? What is it about a system, man-made or begotten (in Warren McCulloch’s phrase), that allows it to behave intelligently? What is consciousness? Learning? Understanding? What kinds of tasks require intelligent behavior?

‘‘Artificial intelligence as the scientific apotheosis of a venerable cultural tradition, the proper successor to golden girls and brazen heads, disreputable but visionary geniuses and crackpots, and fantastical laboratories during stormy November nights. Its heritage is singularly rich and varied, with legacies from myth and literature; philosophy and art; mathematics, science, and engineering; warfare, commerce, and even quackery. I’ve spoken of roads or routes, but in fact it is all more like a web, the woven connectedness of all human enterprise. We harbor that mysterious but ancient urge to reproduce ourselves in some essential but extraordinary way. Artificial intelligence comes blessed with one of the richest and most diverting histories because it addresses itself to something profound and pervasive in the human spirit. The urge to excess and to play is as strong as ever, with contemporary Paracelsuses tweaking the dewlaps of an outraged establishment and contemporary Frankensteins and Babbages being tormented by their own inventions. True to its speculative origins, artificial intelligence poses a set of grave moral questions while, true to its claims to be a science, it promises answers to puzzles about the nature of intelligence. Ours. Or anyone’s - or anything’s - else..’’
- Machines Who Think


AI isn’t a new concept; its storytelling roots go as far back as Greek antiquity. Alan Turing and John von Neumann harbored the hope that the ability to think rationally, the unique asset in dealing with the world, could be captured in a machine. Machines had the ability to ‘‘think’’ like a human. Computers evolved immensely; programs to reason and to play intellectual games like chess were designed. The wheels were set in motion, and the term ‘‘artificial intelligence’’ entered popular awareness. By the 1950s, wide-eyed optimists had appeared, heralding the coming gloom and boom of automation. The idea was a true innovation, conceived long before computers appeared. Soon, another cybernetic myth of mechanized organisms inspired the popular view that man was able to build a superman - a human able to improve on man’s creation. Our minds might be amplified by computers just as our muscles had been amplified by the steam engines of the industrial revolution. We were gradually entering the early world of Artificial Life, with simulated organisms and simulated environments, simulated ‘‘genetic codes’’ and self-replicating patterns. DNA, the designer of life, was a fact! We could translate information! Along with self-reproduction mechanisms and living organisms rose ‘‘the notion of one robot building another robot.’’ The power placed in human hands empowered the visions of another utopian world! Scientists set out to map human nature in order to control and mimic it. Genetics, neurology, pharmacology, information technology and artificial intelligence all joined the party. From the route of imagination, to the route of philosophical inquiry, to the field of artificial intelligence as it has been realized since the development of the digital computer..

There is a long history of trying to build self-reproducing machines: the possibility of one automaton taking raw materials and building another automaton; an automaton physically replicating itself; the design of a self-building, evolving automaton. John von Neumann, one of the greatest mathematicians of the twentieth century, was one of the inventors of the modern digital computer and the inventor of the cellular automaton, an abstract machine that was used as the basis for describing a self-replicating mechanism. The first self-replicating automaton, called the ‘‘Universal Copier and Constructor’’ (UCC), had in its open system machines operating upon one another, constantly modifying each other’s configurations, code and operations, rules: ‘‘The machines were made sustainable by modifying themselves within the inter-textual context of other Universal Turing Machines.’’ Since then, several other abstract machines have been developed, mathematically and in simulation, that could reproduce themselves. Von Neumann believed that biological organisms could be seen as very sophisticated machines: the important part of an organism was not the matter from which it was made, but rather the information it contained. Although his demonstration was made around the 1950s, his self-replicating machine was devised before the discovery of the replication mechanism of DNA - that came only in 1953! Von Neumann also made explicit comparisons between the parts of the computer he proposed and the human nervous system; using terms like memory and control organs to designate certain functions of his new computer, he made the analogy explicit.61
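Von Neumann’s self-replicating constructor ran on a cellular automaton whose cells had 29 possible states - far too large to reproduce here. A much smaller cousin conveys the mechanics: cells on a line, updated in parallel by one local rule. This sketch uses Wolfram’s elementary ‘‘rule 110,’’ our choice of example, not von Neumann’s own automaton:

# An elementary cellular automaton: each cell's next state depends only
# on itself and its two neighbours, via a fixed eight-entry rule table.
RULE = 110                                  # the rule, encoded as 8 bits
cells = [0] * 31 + [1] + [0] * 31           # a single live cell in the middle
for _ in range(16):
    print("".join(".#"[c] for c in cells))
    cells = [(RULE >> (4 * cells[i - 1] + 2 * cells[i]
                       + cells[(i + 1) % len(cells)])) & 1
             for i in range(len(cells))]

Each generation is printed as dots and hashes; a growing lattice of triangles emerges - complex global behavior from a purely local rule, the phenomenon von Neumann exploited at vastly greater scale.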



‘‘There are, first of all, the physical differences, and these aren’t incidental. The components of computers are large, awkward, and unreliable, compared with the miniature and reliable cells of the brain. The human nervous system also shows clear signs of both discrete and continuous behavior, whereas computers must be either discrete (digital) or continuous (analogue).’’ But the major reason for his despair over ever getting a computer to think was the lack of a logical theory of automata. The lack of such a theory, cast in formal, logical terms, prohibited machine builders from ever building machines with much more complexity than was possible in 1951, and such complexity was absolutely essential to the production of anything like intelligent behavior.62

Fig. 5.1 ENIAC (built by J. Presper Eckert and John Mauchly), 1946

The development of the digital computer was celebrated along with the publication of ‘‘Faster than Thought’’ (1953). At that point, the idea of von Neumann that the computer can be seen as identical to the nervous system started being widely expressed.. Visions of a better, automated, computerized, borderless, networked and freer future were voiced. Machines, our own cybernetic creations, would be able to overcome the innate weakness of our inferior bodies, our fallible minds and our dirty politics! Norbert Wiener’s cybernetics described control and stability in electrical networks, Claude Shannon’s information theory described digital signals, and Alan Turing’s theory of computation showed that any form of computation could be described digitally. The close relationship between all these ideas suggested that it might be possible to construct an electronic brain. The thinking-machines dream was ignited again!62, 63 Walter Pitts and Warren McCulloch analyzed networks of idealized artificial neurons and showed how they might perform simple logical functions. They were the first to describe what later researchers would call a neural network.64

61. Rocker, M. I., (July/August 2006), Programming Cultures: Art and Architecture in the Age of Software | When Code Matters | Architectural Design magazine, Vol 76, No 4.
62. McCorduck, P., (2004), Machines Who Think (2nd ed.), Natick, MA: A. K. Peters, Ltd.
63. Russell, S. J., Norvig, P., (2003), Artificial Intelligence: A Modern Approach (2nd ed.), Upper Saddle River, New Jersey: Prentice Hall.
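A McCulloch-Pitts unit is nothing more than a threshold over weighted binary inputs; wired together, such units compute logical functions. A minimal sketch, following the standard textbook formulation rather than the 1943 paper’s own notation:

# A McCulloch-Pitts neuron: fires (1) when the weighted sum of its
# binary inputs reaches the threshold; otherwise stays silent (0).
def neuron(inputs, weights, threshold):
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

AND = lambda x, y: neuron([x, y], [1, 1], 2)   # both inputs must fire
OR  = lambda x, y: neuron([x, y], [1, 1], 1)   # either input suffices
NOT = lambda x:    neuron([x],    [-1],  0)    # an inhibitory connection

for x in (0, 1):
    for y in (0, 1):
        print(x, y, "| AND:", AND(x, y), "OR:", OR(x, y), "NOT x:", NOT(x))

Since such units realize AND, OR and NOT, any Boolean function - and hence, Pitts and McCulloch argued, any logical description of neural activity - can in principle be built from them.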


One of the students inspired by Pitts and McCulloch, Marvin Minsky, was deeply concerned with trying to understand how the brain worked, and with understanding it in terms of neurons, using as a model the on-off cells of the digital computer. ‘‘The brain is an electrical and chemical mechanism, whose organization is enormously complex, whose evolution is barely understood, and which produces complex behavior in response to an even more complex environment.’’ (Minsky) In 1951 he built the first neural net machine, the SNARC.65 He was to become one of the most important leaders and innovators in AI for the next 50 years.62

Parallel to the development of computation was the discovery of the DNA code in the middle of the 20th century, the significance of which had only begun to be realised with the completion of the Human Genome Project.66 The intelligent process that created us evolved through DNA. Evolution, as a master programmer, had been prolific, designing millions of species of breathtaking diversity and ingenuity. And that was just here on Earth. The software programs have all been written down, recorded as digital data in the chemical structure of an ingenious molecule called deoxyribonucleic acid.. This master ‘‘read only’’ memory controls the vast machinery of life. DNA provided a recorded and protected transcription of life’s design from which to launch further experiments. DNA as another designer, the designer of life.

Fig. 5.2 Vitruvian man

64. Artificial neural networks (ANNs) or connectionist systems are computing systems inspired by the biological neural networks that constitute animal brains. Such systems learn (progressively improve performance) at tasks by considering examples, generally without task-specific programming. For example, in image recognition, they might learn to identify images that contain cats by analyzing example images that have been manually labeled as ‘‘cat’’ or ‘‘no cat’’ and using the analytic results to identify cats in other images. They have found most use in applications difficult to express in a traditional computer algorithm using rule-based programming. The original goal of the neural network approach was to solve problems in the same way that a human brain would. Over time, attention focused on matching specific mental abilities, leading to deviations from biology such as backpropagation, or passing information in the reverse direction and adjusting the network to reflect that information. Neural networks have been used on a variety of tasks, including computer vision, speech recognition, machine translation, social network filtering, playing board and video games, and medical diagnosis, among many other domains.
65. SNARC (Stochastic Neural Analog Reinforcement Calculator) is a neural net machine designed by Marvin Lee Minsky. This machine is considered one of the first pioneering attempts at the field of artificial intelligence, and Minsky is well known for his contributions to what is now the MIT Artificial Intelligence Lab.
66. The Human Genome Project (HGP) was an international scientific research project with the goal of determining the sequence of nucleotide base pairs that make up human DNA, and of identifying and mapping all of the genes of the human genome from both a physical and a functional standpoint. It remains the world’s largest collaborative biological project. It originally aimed to map the nucleotides contained in a human haploid reference genome (more than three billion). The ‘‘genome’’ of any given individual is unique; mapping the ‘‘human genome’’ involved sequencing a small number of individuals and then assembling these together to get a complete sequence for each chromosome. The finished human genome is thus a mosaic, not representing any one individual.


Along with the development of the Turing test, which was considered to be a measure of intelligence, the chess program written for the Ferranti Mark 1 67 established games, in 1951, as a field of AI. By the mid ’50s and early ’60s, game programs had eventually achieved sufficient skill to challenge a respectable amateur. It was definitely a big step towards the dream, as game AI would continue to be used as a measure of progress in AI throughout its history..

When access to digital computers became possible in the mid ’50s, a few scientists instinctively recognized that a machine that could manipulate numbers could also manipulate symbols, and that the manipulation of symbols could well be the essence of human thought. This was a new approach to creating thinking machines.68 Symbolic reasoning 69 was spreading and setting its basis in the field of AI, with researchers trying to understand and decode the mystery of human intelligence..

Finally, in 1956, in the small town of Hanover, New Hampshire, ‘‘artificial intelligence’’ was officially launched as a new research discipline. (The Dartmouth Conference of 1956 was organized by Marvin Minsky, John McCarthy and two senior scientists: Claude Shannon and Nathaniel Rochester of IBM.) Top scientists debated how to tackle AI. Some, like the influential academic Marvin Minsky, favoured a top-down approach: pre-programming a computer with the rules that govern human behaviour. Others preferred a bottom-up approach, such as neural networks that simulated brain cells and learned new behaviours.70 Over time Minsky’s views dominated: ‘‘every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it.’’ 71 The participants included Ray Solomonoff, Oliver Selfridge, Trenchard More, Arthur Samuel, Allen Newell and Herbert Simon, all of whom would create important programs during the first decades of AI research. At the conference Newell and Simon debuted the ‘‘Logic Theorist,’’ 72 a program that would eventually prove 38 theorems from Whitehead and Russell’s Principia Mathematica,73 introducing several critical concepts to artificial intelligence,

67. The Ferranti Mark 1, also known as the Manchester Electronic Computer in its sales literature, and thus sometimes called the Manchester Ferranti, was the world’s first commercially available general-purpose electronic computer. It was ‘‘the tidied up and commercialised version of the Manchester computer.’’
68. McCorduck, P., (2004), Machines Who Think (2nd ed.), Natick, MA: A. K. Peters, Ltd.
69. Symbolic artificial intelligence is the collective name for all methods in artificial intelligence research that are based on high-level ‘‘symbolic’’ (human-readable) representations of problems, logic and search. Symbolic AI was the dominant paradigm of AI research from the mid-1950s until the late 1980s. The most successful form of symbolic AI is expert systems, which use a network of production rules. Production rules connect symbols in a relationship similar to an If-Then statement. The expert system processes the rules to make deductions and to determine what additional information it needs, i.e. what questions to ask, using human-readable symbols. Symbolic AI was intended to produce general, human-like intelligence in a machine, whereas most modern research is directed at specific sub-problems. Research into general intelligence is now studied in the sub-field of artificial general intelligence. Machines were initially designed to formulate outputs based on inputs that were represented by symbols. Symbols are used when the input is definite and falls under certainty; when there is uncertainty involved, for example in formulating predictions, the representation is done using ‘‘fuzzy logic.’’ This can be seen in artificial neural networks.
70. BBC | iWonder, (2015), AI: 15 key moments in the story of Artificial Intelligence [Online] Available: http://www.bbc.co.uk/timelines/zq376fr [Accessed 26 May 2017]
71. McCarthy et al., (Aug. 31, 1955), Dartmouth Artificial Intelligence Project Proposal.
72. Logic Theorist is a computer program written in 1955 and 1956 by Allen Newell, Herbert A. Simon and Cliff Shaw. It was the first program deliberately engineered to mimic the problem solving skills of a human being and is called ‘‘the first artificial intelligence program.’’ It would eventually prove 38 of the first 52 theorems in Whitehead and Russell’s Principia Mathematica, and find new and more elegant proofs for some.
73. Principia Mathematica (PM) was an attempt to describe a set of axioms and inference rules in symbolic logic from which all mathematical truths could in principle be proven. As such, this ambitious project is of great importance in the history of mathematics and philosophy, being one of the foremost products of the belief that such an undertaking may be achievable. However, in 1931, Gödel’s incompleteness theorem proved definitively that PM, and in fact any other attempt, could never achieve this lofty goal; that is, for any set of axioms and inference rules proposed to encapsulate mathematics, either the system must be inconsistent, or there must in fact be some truths of mathematics which could not be deduced from them.


including heuristics, list processing and ‘‘reasoning as search.’’ They were convinced at the time that, by using the notion of computation or abstract symbol manipulation, it would soon become possible to reproduce interesting abilities normally ascribed to humans, such as playing chess, solving abstract problems, and proving mathematical theorems. By that time it was official: the 1956 Dartmouth conference was the moment that AI earned its name, its mission, its first success and its major players, and it is widely considered the birth of AI.74
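‘‘Reasoning as search’’ can be stated in a dozen lines: explore a space of states step by step, backtracking at every dead end. A toy sketch of our own - the states are numbers, the ‘‘operators’’ are +1 and *2, and the goal is to derive one number from another:

# 'Reasoning as search': depth-first exploration with backtracking.
def search(state, goal, path, depth):
    if state == goal:
        return path                      # goal reached; report the derivation
    if depth == 0 or state > goal:
        return None                      # dead end; backtrack
    for name, successor in (("+1", state + 1), ("*2", state * 2)):
        found = search(successor, goal, path + [name], depth - 1)
        if found:
            return found
    return None

print(search(1, 11, [], 5))   # ['+1', '*2', '+1', '*2', '+1']: 1->2->4->5->10->11

The Logic Theorist worked in the same shape of space, except that its states were logical formulas and its operators were rules of inference, with heuristics deciding which branches were worth trying first.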

design: adjusting architecture in the era of machines becomes an urgent task....

The influential ‘‘Situationist International’’ group, formed around Guy Debord in Paris in the 1950s, shaped the understanding of the built environment in terms of robotics. As stated in its founding document of 1953: ‘‘The architectural set will be modifiable. Its features will change completely or in part according to the wishes of its users... The emergence of the concept of relativity in Modern thinking allows one to assume the experimental nature of the upcoming culture... On the basis of this versatile civilization, at least initially, architecture will be a means of experimenting with hundreds of ways of changing life, with a look at mythical compositions.’’

Although it did not engage the theoretical architectural ideas of the era, there was also, alongside the intellectual interest in robotic systems, a corporate interest. Market-driven roles therefore emerged that developed the robotics sector and implicated users directly in the real world. The 19th-century dream of ‘‘total design’’ had been realized. The famous slogan of the 1907 Deutscher Werkbund, ‘‘from the sofa cushion to city planning,’’ was updated in 1952 with Ernesto Rogers’s ‘‘from the spoon to the city.’’ The patterns of atoms are being carefully arranged, and colossal artifacts, like communication nets, encircle the planet. Designers have become role models in the worlds of science, business, politics, innovation, art and education, but paradoxically they have been left behind by their own concept. They remain within the same limited range of design products and do not participate fully in the expanded world of design. Ironically, this frees them up to invent new concepts of design..75

74. Crevier, D., (1993), AI: The Tumultuous Search for Artificial Intelligence, New York, NY: BasicBooks.
75. Colomina, B., Wigley, M., (2017), Are We Human? Notes on an Archaeology of Design, Ch.9: Human-centered Design. Lars Müller Publishers.


who are we?

‘‘For all previous millennia, our technologies have been aimed outward, to control our environment. (…) Now, however, we have started a wholesale process of aiming our technologies inward. Now our technologies have started to merge with our minds, our memories, our metabolisms, our personalities, our progeny and perhaps our souls.’’ 76
- Joel Garreau

‘‘The post-modern body, which still moves on the axis of the contained (Durand 1963), operates on many different levels. It is a physical and psychic entity; it is our first extension in space; it is a powerful and natural means of communication; it is an instrument of seduction; it is the first barrier between our subjective world and the objective world; it is the seat of our capacity to work and of our inner dimension.’’ 77 (L’universo del corpo 1999; Marx 1953)

Wiener, in his discussion of human purposes, recognizing feedbacks and larger systems which include the environment, moved closer to an ideal of understanding and, both consciously and effectively, of collaborating with natural processes. Wiener had a lifelong obsession to distinguish the human from the machine. But in ‘‘The Human Use of Human Beings,’’ his intention was to place his understanding of the people/machines identity/dichotomy within the context of his generous and humane social philosophy. The text argued for the benefits of automation to society; it analyzed the meaning of productive communication and discussed ways for humans and machines to cooperate, with the potential to amplify human power and release people from the repetitive drudgery of manual labor, in favor of more creative pursuits in knowledge work and the arts. The risk that such changes might harm society (through dehumanization or subordination of our species) was explored, and suggestions were offered on how to avoid such risk.

Cybernetics had originated from the analysis of formal analogies between the behaviour of organisms and that of electronic and mechanical systems. Artificial intelligence highlighted the potential resemblance between certain elaborate machines and people. The cybernetic machines - such as general-purpose computers - suggested a possibility as to the nature of mind: mind was analogous to the formal structure and organization, or the software aspect, of a reasoning-and-perceiving machine that could also issue instructions leading to actions. Thus the long-standing mind-brain duality was overcome by a materialism which encompassed organization, messages and information in addition to stuff and matter. But the subjective - an individual’s cumulative experience, sensations and feelings, including the subjective experience of being alive - was belittled, seen only within the context of evolutionary theory as providing information useful for survival to the organism. The fact that the metaphor of a sophisticated automaton was so heavily employed made us think of humans as, in effect, machines. How was the machine affecting people’s lives? Or, still more pointedly: who reaped a benefit from it? 78

76. Rinie van Est, (12 April 2015), Intimate Technology: the Battle for Our Body and Behaviour [Online] Available: https://www.nextnature.net/2015/04/intimate-technology/ [Accessed 26 June 2017]
77. Katz, J. E., (2002), Machines that Become Us: The Social Context of Personal Communication Technology. New Brunswick, NJ: Transaction Publishers, pp.72.
78. Wiener, N., (1950), The Human Use of Human Beings: Cybernetics and Society, US: Houghton Mifflin Harcourt, UK: Eyre & Spottiswoode, pp.xix-xx.


CHAPTER 6

Building humanity anew

‘‘chatting with machines and re-engineering humans to fit the stars..’’

‘‘In the age of artificial satellites orbiting above us the very origin of the human needs to be redesigned.’’ Calls for the species to ‘‘build humanity anew.’’
- Lina Bo Bardi, 1958

The years after the Dartmouth conference were an era of discovery, of sprinting across new ground. The programs that were developed during this time were, to most people, simply astonishing: computers were solving algebra word problems, proving theorems in geometry and learning to speak English. Few at the time would have believed that such ‘‘intelligent’’ behavior by machines was possible at all. Imagine a future where your computer chats with you.. The supreme test of intelligence. Such a brilliant vision of a future human-computer interface. Your intelligent computer is your new friend, HAL 9000! An ‘‘anthropomorphic computer’’ with a perfect command of speech, excellent vision and humour was heading towards human-level intelligence. The LISP language, the ELIZA and SHRDLU programs, UNIMATE robots, DENDRAL expert systems, and Shakey. Tough problems are cracked. ARPANET pops in as the precursor to the Internet. Androids are born and the personal computer is established. Government agencies like DARPA pour money into the new field. Researchers express an intense optimism, predicting that ‘‘in from three to eight years we will have a machine with the general intelligence of an average human being’’ (Marvin Minsky in Life magazine, 1970) and that a fully intelligent machine will be built in less than 20 years. And while Walter’s cybernetics is addressed to the brain, Brooks understands it as robotics, Frazer takes it into architecture, and Burroughs transplants it into the domain of altered states and that classic sixties project: the exploration of consciousness..

Another chess game, another confrontation between computer and human intelligence..

- Where is evolution taking us? Will our descendants hurtle through space as relatively unchanged as the humans on the starship Enterprise? Will they be muscle-bound cyborgs? Or will they choose to digitize their consciousness - becoming electronic immortals? -


‘‘Suddenly, a window would open into a vast field of possibilities; the time limits would vanish, and the machines would seem to become humanized components of the interactive network now consisting of oneself and the machine, still obedient but full of suggestions of the master controls of the imagination.’’ 79
- Vladimir Ussachevsky

While humans had been trying to understand how the brain works, studying the natural processes of living systems and simulating them through mechanisms, the relationship between user and machine was upgraded: from manipulation of the machine towards collaboration with it. The information-processing model 80 emerged and was richly productive. There were two ways of pursuing it. One was to imitate humans as closely as possible; the other aimed at producing intelligent behavior in a computer by whatever method - or, as Minsky put it, ‘‘without prejudice toward making the system simple, biological or humanoid’’ (Minsky, 1968).

‘‘2001: A Space Odyssey’’ (1968) by Arthur C. Clarke and Stanley Kubrick was the story that promoted the idea of a computer that could see, speak, hear, and think. All these potentials sparked imaginations and hopes towards the creation of human-like machines. HAL 9000,81 a Heuristically programmed ALgorithmic computer, was presented, depicting the vision of a future human-computer interface.82 In fact, computer interface design started in March 1960, when J. C. R. Licklider published his paper ‘‘Man-Computer Symbiosis.’’ Sculley’s book later aimed to illustrate an interface of the future, beyond mice and menus, and did an excellent job. What would the position of the human be, and what that of the machine?

Fig. 6.1 2001: A Space Odyssey, Stanley Kubrick, 1968

79. Kurzweil, R., (1990), The Age of Intelligent Machines, Ch.9: The Science of Art. USA: MIT Press.
80. The Information Processing Model is a framework used by cognitive psychologists to explain and describe mental processes. The model likens the thinking process to how a computer works. Just like a computer, the human mind takes in information, organizes and stores it to be retrieved at a later time. The theory is based on the idea that humans process the information they receive, rather than merely responding to stimuli. This perspective equates the mind to a computer, which is responsible for analyzing information from the environment. According to the standard information-processing model for mental development, the mind’s machinery includes attention mechanisms for bringing information in, working memory for actively manipulating information, and long-term memory for passively holding information so that it can be used in the future. This theory addresses how, as children grow, their brains likewise mature, leading to advances in their ability to process and respond to the information they receive through their senses. The theory emphasizes a continuous pattern of development, in contrast with cognitive-developmental theorists such as Jean Piaget, who thought development occurred in stages.
81. HAL 9000 is a fictional character and the main antagonist in Arthur C. Clarke’s Space Odyssey series. First appearing in 2001: A Space Odyssey, HAL (Heuristically programmed ALgorithmic computer) is a sentient computer (or artificial general intelligence) that controls the systems of the Discovery One spacecraft and interacts with the ship’s astronaut crew. Part of HAL’s hardware is shown towards the end of the film, but he is mostly depicted as a camera lens containing a red or yellow dot, instances of which are located throughout the ship. In addition to maintaining the Discovery One spacecraft systems during the interplanetary mission to Jupiter (or Saturn in the original novel, published shortly after the release of the film), HAL is capable of speech, speech recognition, facial recognition, natural language processing, lip reading, art appreciation, interpreting emotional behaviours, automated reasoning, and playing chess.
82. Human-Machine Interaction (HMI) is the study of interactions between humans and machines. It is a multidisciplinary field with contributions from Human-Computer Interaction (HCI), Human-Robot Interaction (HRI), robotics, artificial intelligence (AI), humanoid robots and exoskeleton control.


Fig. 6.2 Stanley Kubrick, 2001: A Space Odyssey, HAL, 1968

‘‘In computers everything becomes number: imageless, soundless, wordless quantity.. any medium can be translated into another.. a total connection of all media on a digital base erases the notion of the medium itself. Instead of hooking up technologies to people, absolute knowledge can run as an endless loop.’’
-Kittler

‘‘Digital butlers will be numerous, living both in the network and by your side, both in the center and at the periphery of your own organization, large or small.’’ 83

There were many successful programs and new directions in the late ‘50s and ‘60s. Among the most influential were reasoning as search, natural language and micro-worlds. Machines could see, speak, hear, and think. Artificial Intelligence had the objective of developing computers that could act like humans. Just like the neurons in the brain, the hardware and software of a computer are not themselves intelligent, yet it has been demonstrated that a computer can be programmed to exhibit some of the intelligent characteristics of a human. 84

83. Negroponte, N., (1995), Being Digital, Interface Agents, Alfred A. Knopf, pp. 151-152.
84. De Silva, C., (2000), Intelligent Machines: Myths and Realities, London UK: The Book Depository US, pp. 1-2.


Many early AI programs used the same basic algorithm. To achieve some goal (like winning a game or proving a theorem), they proceeded step by step towards it as if searching through a maze, backtracking whenever they reached a dead end. This paradigm was called “reasoning as search.” Newell and Simon tried to capture a general version of this algorithm in a program called the “General Problem Solver.” 85 Other “searching” programs were able to accomplish impressive tasks like solving problems in geometry and algebra, such as Herbert Gelernter’s Geometry Theorem Prover (1958) and SAINT, written by Minsky’s student James Slagle (1961).86 Other programs searched through goals and subgoals to plan actions, like the STRIPS system developed at Stanford to control the behavior of their robot Shakey,87 the first mobile robot that appeared to “reason” about its actions. 88 An important goal of AI research has also been to allow computers to communicate in natural languages like English. An early success was Daniel Bobrow’s program STUDENT,89 which could solve high school algebra word problems. In the late 1950s, semantic nets 90 for machine translation appeared for the first time, at the University of Cambridge. Later, Joseph Weizenbaum’s ELIZA 91 could carry out conversations that were so realistic that users were occasionally fooled into thinking they were communicating with a human being and not a program. But in fact, ELIZA had no idea what she was talking about. She simply gave a canned response or repeated back what was said to her, rephrasing her reply with a few grammar rules. ELIZA was the first chatterbot, programmed to simulate a dialogue.
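Footnote 91 describes ELIZA’s method as ‘pattern matching’ and substitution; the technique is simple enough to sketch in a few lines. A minimal, hypothetical rule set (not Weizenbaum’s original script), in Python:

import re, random

# A minimal ELIZA-style chatterbot: match a pattern, reflect pronouns,
# and substitute the captured text into a canned response template.
RULES = [
    (r"I need (.*)",      ["Why do you need {0}?", "Would {0} really help you?"]),
    (r"I am (.*)",        ["How long have you been {0}?", "Why do you say you are {0}?"]),
    (r"(.*) mother (.*)", ["Tell me more about your family."]),
    (r"(.*)",             ["Please, go on.", "I see. Can you elaborate?"]),
]

# Reflect first- and second-person words so the echo reads naturally.
REFLECT = {"i": "you", "my": "your", "am": "are", "you": "I", "your": "my"}

def reflect(fragment):
    return " ".join(REFLECT.get(word, word) for word in fragment.lower().split())

def respond(utterance):
    for pattern, responses in RULES:
        match = re.match(pattern, utterance, re.IGNORECASE)
        if match:
            template = random.choice(responses)
            return template.format(*(reflect(g) for g in match.groups()))

print(respond("I need a holiday"))   # e.g. "Why do you need a holiday?"

The sketch shows why ELIZA ‘‘had no idea what she was talking about’’: there is no model of meaning anywhere, only surface rewriting.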

85. General Problem Solver or G.P.S. was a computer program created in 1959 by Herbert A. Simon, J. C. Shaw, and Allen Newell, intended to work as a universal problem solver machine. Any problem that can be expressed as a set of well-formed formulas (WFFs) or Horn clauses, and that constitutes a directed graph with one or more sources (viz., axioms) and sinks (viz., desired conclusions), can be solved, in principle, by GPS. Proofs in the predicate logic and Euclidean geometry problem spaces are prime examples of the domain of applicability of GPS. It was based on Simon and Newell’s theoretical work on logic machines. GPS was the first computer program which separated its knowledge of problems (rules represented as input data) from its strategy of how to solve problems (a generic solver engine). GPS was implemented in the third-order programming language, IPL.
86. Crevier, D., (1993), AI: The Tumultuous Search for Artificial Intelligence, New York, NY: BasicBooks, pp. 51-58, 65-66.
87. Shakey the robot was the first general-purpose mobile robot to be able to reason about its own actions. While other robots would have to be instructed on each individual step of completing a larger task, Shakey could analyze commands and break them down into basic chunks by itself. Due to its nature, the project combined research in robotics, computer vision, and natural language processing. Because of this, it was the first project that melded logical reasoning and physical action. Shakey was developed at the Artificial Intelligence Center of Stanford Research Institute (now called SRI International).
88. McCorduck, Pamela (2004), Machines Who Think (2nd ed.), Natick, MA: A. K. Peters, Ltd., pp. 268-271.
89. STUDENT is an early artificial intelligence program that solves algebra word problems. It is written in Lisp by Daniel G. Bobrow as his PhD thesis in 1964 (Bobrow 1964). It was designed to read and solve the kind of word problems found in high school algebra books. The program is often cited as an early accomplishment of AI in natural language processing.
90. “Semantic Nets” were first invented for computers by Richard H. Richens of the Cambridge Language Research Unit in 1956 as an “interlingua” for machine translation of natural languages.
91. ELIZA is an early natural language processing computer program created from 1964 to 1966 at the MIT Artificial Intelligence Laboratory by Joseph Weizenbaum. Created to demonstrate the superficiality of communication between man and machine, ELIZA simulated conversation by using a ‘pattern matching’ and substitution methodology that gave users an illusion of understanding on the part of the program, but had no built-in framework for contextualizing events. ELIZA was one of the first chatterbots, but was also regarded as one of the first programs capable of passing the Turing Test. ELIZA’s creator, Weizenbaum, regarded the program as a method to show the superficiality of communication between man and machine, but was surprised by the number of individuals who attributed human-like feelings to the computer program, including Weizenbaum’s secretary. Many academics believed that the program would be able to positively influence the lives of many people, particularly those suffering from psychological issues, and that it could aid doctors working on such patients’ treatment. While ELIZA was capable of engaging in discourse, ELIZA could not converse with true understanding.
However, many early users were convinced of ELIZA’s intelligence and understanding, despite Weizenbaum’s insistence to the contrary.
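The “reasoning as search” paradigm described above, stepping forward and backtracking at dead ends, can likewise be sketched. A toy maze search in the same spirit (not the actual General Problem Solver of footnote 85):

# Depth-first search with backtracking over a toy maze: from each state,
# try the legal moves in turn; on a dead end, undo and try the next branch.
MAZE = {                      # adjacency list: state -> reachable states
    "start": ["a", "b"],
    "a": ["dead end"],
    "b": ["c"],
    "c": ["goal"],
    "dead end": [],
    "goal": [],
}

def solve(state, goal, path=None):
    path = (path or []) + [state]
    if state == goal:
        return path                       # reached the goal: return the route
    for next_state in MAZE[state]:
        if next_state not in path:        # avoid revisiting states (cycles)
            route = solve(next_state, goal, path)
            if route:
                return route              # a branch succeeded
    return None                           # dead end: backtrack to the caller

print(solve("start", "goal"))  # ['start', 'b', 'c', 'goal']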


In 1958, John McCarthy (MIT) invented the Lisp programming language.92 He showed that with a few simple operators and a notation for functions, one could build a Turing-complete language for algorithms. In 1959, the American cognitive scientist Marvin Minsky picked up the AI torch and co-founded the Massachusetts Institute of Technology’s AI laboratory, becoming one of the leading thinkers in the field through the 1960s and 1970s. In the late ‘60s, he and Seymour Papert of the MIT AI Laboratory proposed that AI research should focus on artificially simple situations known as micro-worlds. This idea was inspired by successful sciences like physics, where basic principles were often best understood using simplified models.93 Hence the inextricable linkage between computation and physics arose. The shift from energy to information, the immaterialisation of matter, gave birth to information theories. 94 The computability of physical laws underlined the computable nature of all processes. This link between computation and physics has led to the awareness that physical processes are in fact forms of computation: ‘‘All processes, whether they are produced by human effort or occur spontaneously in nature, can be viewed as computations.’’ (Stephen Wolfram, Principle of Computational Equivalence) 95 At the same time, cellular automata 96 were studied as a particular type of dynamical system, and the connection with the mathematical field of symbolic dynamics was established for the first time. Polyautomata theory treated these mechanisms as generative devices: rule-based systems, characterised by simplicity, from which a variety of behavioral patterns could emerge. They were finite-state machines which could be regarded as a class of computer, and as an example of a technique applicable to both natural and artificial phenomena.
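The character of cellular automata as simple rule-based generators of complex patterns is easy to demonstrate. A minimal sketch of a one-dimensional, two-state automaton, using Wolfram’s elementary rule 30 on a small wrapped grid (grid size and step count chosen here purely for illustration):

# Elementary cellular automaton: each cell's next state depends only on
# itself and its two neighbours, via an 8-entry lookup table (rule 30).
RULE = 30
TABLE = {tuple(int(b) for b in f"{n:03b}"): (RULE >> n) & 1 for n in range(8)}

cells = [0] * 31
cells[15] = 1                                    # single live cell in the middle

for _ in range(15):
    print("".join("#" if c else "." for c in cells))
    # wrap around at the edges so every cell has two neighbours
    cells = [TABLE[(cells[i - 1], cells[i], cells[(i + 1) % len(cells)])]
             for i in range(len(cells))]

From one live cell and an eight-entry rule table, an intricate, non-repeating triangle of patterns unfolds: simplicity below, emergent behavior above.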

Fig. 6.3 Shakey the robot

92. Lisp (historically, LISP) is a family of computer programming languages with a long history and a distinctive, fully parenthesized prefix notation. Originally specified in 1958, Lisp is the second-oldest high-level programming language in widespread use today. Lisp was originally created as a practical mathematical notation for computer programs, influenced by the notation of Alonzo Church’s lambda calculus. It quickly became the favored programming language for artificial intelligence (AI) research. As one of the earliest programming languages, Lisp pioneered many ideas in computer science, including tree data structures, automatic storage management, dynamic typing, conditionals, higher-order functions, recursion, the self-hosting compiler, and the read–eval–print loop.
93. Copeland, Jack (2000), Micro-World AI, retrieved 8 October 2008.
94. Information Theory is the mathematical study of the coding of information in the form of sequences of symbols, impulses, etc. and of how rapidly such information can be transmitted, for example through computer circuits or telecommunications channels.
95. Wolfram, S., (2002), A New Kind of Science, ‘Principle of Computational Equivalence’, US: Wolfram Media.
96. A cellular automaton is a discrete model studied in computability theory, mathematics, physics, complexity science, theoretical biology, and microstructure modeling. Cellular automata are used to build parallel computing architectures as well as to model and simulate physical systems. Moreover, they can be used to investigate and reproduce the emergence of patterns (spatiotemporal pattern formation), self-replication and self-similarity properties in natural systems.


As technology progressed (more specifically, computer technology) and machines increasingly started taking over mundane functions, the concept of robots appealed to people. The year 1960 found the United States with about six thousand computers in operation. The world was turning computable all the way, marking a very crucial decade for the field of robotics. Robotic technologies came in to bridge the digital-physical divide, enabling massive information gathering and control to pervade the real world.97 The first digitally operated and programmable robot, the Unimate, was the first industrial robot, installed in 1961 in the US. After that, many important technological innovations in robotics marked the beginning of a great era, firing the expectations for a new type of intelligence and the dreams of an augmented, intelligent world. ‘‘If the universal Turing machine was the reference point used to evaluate the computational capacity of different computers and therefore their classes, the industrial robot represented a reference electromechanical machine, a mechatronic benchmark for robotics.’’ Soon the first android was born: WABOT-1 (1967-1972) was the first full-scale humanoid intelligent robot. It was able to walk, and to grip and transport objects using tactile sensors, while its conversation system allowed it to communicate with a person through an artificial mouth. Also, Stanford Research Institute’s Shakey, completed in 1969, was the first, and is still the only, mobile robot to be controlled primarily by programs that reasoned; the exception that proves the rule. Shakey’s instigators, inspired by the early successes of artificial intelligence research, sought to apply logic-based problem-solving methods to a real-world task. During the 1970s dozens of research labs had robot arms connected to computers. The leap was crazy, and that was only the very beginning! 98

Fig. 6.4 Wabot-1, humanoid robot, 1970

Fig. 6.5 Unimate, the first industrial robot, 1961

97. Nourbakhsh, I., R., (March 2013), Robot Futures, The MIT Press, pp. 4. 98. Moravec, H., (1988), Mind Children: The Future of Robot and Human Intelligence, Harvard University Press, pp. 14.


1977 Star Wars: the concept of true-to-life robots with convincing human emotions is imaginatively portrayed..

Fig. 6.6 The Star Wars Storybook, 1977

‘‘Imagine computers orders of magnitude more powerful and far cheaper than today’s machines. That’s one promise of a field that uses individual molecules as microscopic switches.’’
-David Rotman, 2000 99

And while the pace of robotics and AI was accelerating, a new field called Molecular Computing sprang up, to harness the DNA molecule itself as a practical computing device. In 1973 the work of Stanley N. Cohen and Herbert W. Boyer on DNA and its potential to be cut, joined, and then reproduced created the foundation for Genetic Engineering.100 The evolution of life and intelligence on Earth had finally reached the point where it seemed possible to engender something almost out of nothing. In principle, a universe of possible worlds based on generative principles inherent within nature and the physical universe was considered to be within the realm of the computable, once quantum computing systems became a reality. For the first time, humankind was finally in possession of the power to change and transform the genetic constitution of biological species, which, without a doubt, would have profound implications for the future of life on Earth. This was meant to be a huge step in the long history of the evolution of life and intelligence. Genetic programming, the method of creating a computer program using genetic or evolutionary algorithms, was about to enter computation and intelligent life..101 Algorithms do not simply govern the procedural logic of computers: more generally, they have become the objects of a new programming culture. The imperative of information processing has turned culture into a lab of generative forms that are driven by open-ended rules.

99. Rotman, D., (2000), Intelligent Machines: Molecular Computing, MIT Technology Review. [Online] Available: https://www.technologyreview.com/s/400728/molecular-computing/ [Accessed 25 June 2017]
100. Genetic engineering is the artificial manipulation, modification, and recombination of DNA or other nucleic acid molecules in order to modify an organism or population of organisms.
101. Chu, K., (2004), Archaeology of the Future, in Peter Eisenman ‘‘Barefoot on White-Hot Walls’’, Editorial 53-54, MAK Wien. [Online] Available: http://ehituskunst.ee/karl-chu-archaeology-of-the-future/?lang=en [Accessed 25 May 2017]
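The evolutionary-algorithm idea underlying genetic programming can be made concrete with a minimal sketch: a population of candidate bit-string ‘genomes’ is repeatedly selected, recombined and mutated towards a fitness goal. The all-ones target and the parameter values below are invented purely for illustration:

import random

TARGET = [1] * 20                                  # toy goal: an all-ones genome

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def crossover(a, b):                               # single-point recombination
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.02):                     # occasional random bit flips
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(30)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)     # selection: keep the fittest
    if fitness(population[0]) == len(TARGET):
        break
    parents = population[:10]
    population = parents + [mutate(crossover(*random.sample(parents, 2)))
                            for _ in range(20)]
print(generation, population[0])

No individual is designed; the design emerges from variation and selection, which is precisely the generative capacity the text goes on to describe.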


Whether we are speaking about DNA, bacteria, or stem cell cultures, cultures of sounds and images, time-based cultures, or cultures of spatial modeling, algorithms now explain evolution, growth, adaptation, and structural change. Algorithms, therefore, have become equated with the generative capacities of matter to evolve. It is not by chance that the age of the algorithm has also come to be recognized as an age characterized by forms of emergent behavior that are determined by continual variation and uncertainty. The mode of existence of algorithms no longer merely corresponds to models that simulate material bodies: instead it constructs a new kind of model, which derives its rules from contingencies and open-ended solutions. Generative algorithms are said to dissolve the opposition between mathematics and biology, between abstract models and concrete bodies. Just as matter has an abstract form, so too have software programs become evolutionary bodies. The late twentieth century may one day be known as the dawn age of the algorithm. If so, we wish to be the first to embrace the new rationality that sees space and matter as indistinguishable, as active mediums shaped by both embedded and remote events and the patterns they form.103 It was a fascinating journey with the twentieth-century thinkers, from tech giants and eccentric mathematicians to science fiction writers and counterculture gurus, who have shaped how we understand machines and ourselves. Cybernetics, better seen as a form of life, promoted a way of going on in the world; an attitude. Some inanimate objects could behave like living systems. They could regulate themselves and survive; they could adapt and they could compute; they could reproduce themselves, they could invent. Co-operating, learning and competing, they could evolve rapidly.. 102

Cyborgs, the imaginary figures born through science fiction stories and myths, were about to break the boundaries of imagination, free themselves and make the ‘‘next step in evolution.’’ Would it be necessary to construct a different type of body?

102. Rid, T., (2016), Rise of the Machines: A Cybernetic History, W. W. Norton Company. 103. Sanford Kwinter, Far from Equilibrium: Essays on Technology and Design Culture, ed. Cynthia Davidson (Barcelona: Actar, 2008), 51.


we: naturally born cyborgs

is a biological organism a machine?

“There are now in the world machines that think, that learn and that create. Moreover, their ability to do these things is going to increase rapidly until, in a visible future, the range of problems they can handle will be coextensive with the range to which the human mind has been applied.” 103

Cyborgs: half-machine/half-human replicants. We are creating, and at the same time becoming ourselves, cyborgs. And so this twentieth-century version of our species’ dream, to make intelligent entities that aren’t human, is summarized half facetiously by scientists at MIT as falling into three periods: the classical, the romantic, and the modern. In the classical period, lasting from the early 1950s to the late 1960s, the search was for general principles of intelligence, regardless of the task at hand. Here was the first confirmation that the information-processing model was a rich one: that, although we had no physiological theory of mind (and still don’t), intelligence could nevertheless be understood and expressed precisely enough that a computer could be programmed to behave intelligently. Here, in short, was the news that some other entity could be made to exhibit what heretofore we’d considered our exclusive, identifying property.104 The year was 1960. The paper, titled “Cyborgs and Space,” based on the talk “Drugs, Space and Cybernetics,” proposed a nice piece of lateral thinking. Instead of trying to provide artificial, earth-like environments for the human exploration of space, why not alter the humans so as to better cope with the new and alien demands? 105 “Space travel challenges mankind not only technologically, but also spiritually, in that it invites man to take an active part in his own biological evolution.” 106 Why not, in short, reengineer the humans to fit the stars? In 1960, of course, genetic engineering was just a gleam in science fiction’s prescient eye. And these authors were not dreamers, just creative scientists engaged in matters of national (and international) importance. They were scientists, moreover, working and thinking on the crest of two major waves of innovative research: work in computing and electronic data-processing, and work on cybernetics. The way to go, they suggested, was to combine cybernetic and computational approaches so as to create man-machine hybrids, “artifact-organism systems” in which implanted electronic devices used bodily feedback signals to automatically regulate wakefulness, metabolism, respiration, heart rate, and other physiological functions in ways suited to some alien environment. Manfred Clynes was the first actually to suggest the term “cyborg,” in 1960. The term “cyborg” stood for Cybernetic

104. McCorduck, Pamela (2004), Machines Who Think (2nd ed.), Natick, MA: A. K. Peters, Ltd.
105. Clark, A., (2003), Natural-Born Cyborgs: Minds, Technologies, and the Future of Human Intelligence, Oxford University Press.
106. Clynes, M., E., and Kline, N., S., (1960), Cyborgs and Space, Astronautics, New York Times. [Online] Available: https://


Organism or Cybernetically Controlled Organism; it was a term of art meant to capture both a notion of human-machine merging and the rather specific nature of the merging envisaged. In ‘‘Mechanism and Biological Explanation’’ (1972), Humberto Maturana and Francisco Varela argue that machines and biological forms are very closely related - so close that biologists claim that living organisms are machines. This is not only a pedagogic transfer, but a strict analogy, one that highlights significant symmetries and expresses experimental and theoretical goals. “The machine is characterized by a set of fragmentary actions that serve specific conditions. A machine is the system of alliances that depends on the current components that “make” the machine.. What is important is not the particularity of the part but the specificity of its relationships.” 107 (Humberto Maturana, Francisco Varela) The human body had to reflect through its ‘‘altered physiology’’ all the technological innovations of the era. Any transformed environment required a transformed body.. 108

‘‘For the exogenously extended organizational complex we propose the term “cyborg.” The Cyborg deliberately incorporates exogenous components extending the self-regulating control function of the organism in order to adapt it to new environments.’’ -Rise of the Machines

Fig. 6.7 Cyborg, Kline and Clynes, 1960

Fig. 6.8 Cyborg, Kline and Clynes, 1960

partners.nytimes.com/library/cyber/surf/022697surf-cyborg.html [Accessed 26 June 2017] 107. Saggio, A. (2012) GreenBodies: Give me an Ampoule and I will Live. Rethinking the Human in Technology Driven Architecture. pg. 41-57 108. Schneider, S., (2009), Science Fiction and Philosophy: From Time Travel to Superintelligence, Cyborgs Unplugged, John Wiley & Sons. pp. 171.


design: interfaces of the future

“The computer at home is not a fanciful concept. As the cost of computation lowers, the computer utility will become a consumer item, and every child should have one.” 109

Towards a new generation of architectural work; the dream of the so-called new technologies being incorporated in our everyday architectural life; when there is no question of if, but of how.. A rapidly growing, digitally mediated environment. Post-war consumerism ideas, pop culture images and space travel hardware; concepts of ‘‘expandability’’, mass-produced artifacts and mobile structures.. The designed machines and the hopes for buildings’ immortality. The use of computers to aid the designer, the introduction of intelligent systems in architectural space; sensor networks, processing units, actuator devices. Post-industrialization brought along a new flexibility in design and manufacture. A shift in interfaces occurred, focusing on the dynamic processes of user experience rather than on the physical form. The computer was celebrated as a tool! Nicholas Negroponte, in ‘‘The Architecture Machine’’ (1970), introduced a computerized system that could collaborate more or less symmetrically with the architect in designing buildings. He referred to evolving systems that worked in “symbiosis” with designer and resident, something that would change the making of architecture. Early research on computer-aided design covered early work on human-computer interaction, artificial intelligence, and computer graphics.110 The computer came to pave the way for the ‘‘electronic craftsman.’’ 111 The new technologies stemming from the computer made possible a new facility: one-to-one control of production and assembly equipment. This emergent type was much more geared to change and individuality than the relatively stereotyped productive processes of the First Industrial Revolution. Mass production and mass repetition were, of course, among the unshakable foundations of Modern Architecture. 112

‘‘There are three possible ways in which machines can assist the design process: (1) current procedures can be automated, thus speeding up and reducing the cost of existing practices; (2) existing methods can be altered to fit within the specifications and constitution of a machine, where only those issues are considered that are supposedly machine-compatible; (3) the design process, considered as evolutionary, can be presented to a machine, also considered as evolutionary, and a mutual training, resilience, and growth can be developed.’’
-Negroponte’s “Preface to a Preface”

109. Negroponte, N., (1973), The Architecture Machine: Toward a More Human Environment, US: The MIT Press.
110. Steenson, W. M., (2010), Artificial Intelligence, Architectural Intelligence: The Computer in Architecture 1960-80 | Dissertation proposal. [Online] Available: http://www.girlwonder.com/blog/wp-content/uploads/2010/05/steenson-dissertation-proposal.pdf [Accessed 26 June 2017]
111. Frazer, J., (2011), A Natural Model for Architecture, in Computational Design Thinking | AD Reader, Vol. 74, No. 3, John Wiley and Sons Ltd, pp. 153.
112. Frazer, J., (1995), An Evolutionary Architecture, Architectural Association London, pp. 16.


The period from 1960 to 1980 was significant, as it marked the introduction of computing paradigms to architecture and the beginning of the mainstreaming of computers in architectural practice. The computational shift promoted design process over formal object, with computers used for representation and modelling. Architects had to deal with an informationally complex world. The first CAD/CAM systems, the Sketchpad invention, the first 3D solid modeling program (MAGI SynthaVision) and the simultaneous development of computer hardware were the highlights of the era. The technology saw a transition from mainframes to “minicomputers.” A turn to computation, cybernetics and AI was made. 110 Key figures in this shift were Christopher Alexander, Nicholas Negroponte and Cedric Price. They based their notions of generative machines on work by many of the same figures in technology, including W. Ross Ashby, Gordon Pask, Marvin Minsky, Douglas Engelbart and J.C.R. Licklider. The appearance of generative systems in architecture gave birth to systems that incorporated models of intelligence, interacted with and responded to both designer and end user, and adapted and evolved over time. Interactivity, information feedback, virtuality, and the relationship between organisms and technology were central to the development of cybernetics. It was within this culture that some of the first attempts to create computer technology-enhanced architectural spaces emerged, from the work of Archigram and Cedric Price in Britain to that of Nicholas Negroponte in the US. Architecture and technology first started flirting with each other in the decades of the ‘50s and ‘60s. It all started with the visions of a ‘‘responsive architecture’’, strongly attached to technology, systems, environment, transformability and adaptability. 113 In 1967, Warren Brodey, in his article ‘‘The Design of Intelligent Environments’’, proposed an evolutionary, self-organizing, complex, predictive, purposeful, active environment. He asked: ‘‘Can we teach our environments first complex, then self-organizing intelligence which can ultimately refine into being evolutionary?’’ The idea of soft, responsive architecture also preoccupied Nicholas Negroponte. He suggested that the design process, considered as evolutionary, could be presented to a machine, also considered evolutionary, to allow a mutual training, resilience and growth. He placed high expectations first on computer hardware, then on software through AI. 114 The ‘60s was also the decade of ‘‘unconventional’’, interactive and adaptive architecture. This story began when the Archigram group envisioned an architecture able to provide instant services, automation and comfort, through cybernetic interfaces and robotized systems. In Britain, the Archigram group of architects built almost nothing, but the hypothetical designs featured in Archigram magazine were iconic for this movement. Ron Herron’s fanciful Walking City in 1964 caught the mood, adaptive in the sense that if the city found itself somehow misfitted to its current environment, well, it could just walk off to find somewhere more congenial. Peter Cook’s concept of the Plug-In City was a bit more realistic: the city could continually reconfigure itself in relation to the shifting needs and desires of its inhabitants. This idea, clearly affected by the principles of Metabolism, also embraced Ross Ashby’s ideas of evolutionary design: the city itself as a lively and adaptive fabric for living.115

113. Yiannoudes, S., (2016), Architecture and Adaptation: From Cybernetics to Tangible Computing, Routledge, pp. 2.
114. Negroponte, N., (1973), The Architecture Machine: Toward a More Human Environment, US: The MIT Press.
115. Pickering, A., (2010), The Cybernetic Brain: Sketches of Another Future, University Of Chicago Press, pp. 364.


Fig. 6.9 Walking City, Ron Herron, 1964

Fig. 6.10 Plug-in City, Peter Cook, 1964

Archigram then turned to a more miniaturized and personalized architecture as a robotized servicing system. Some of their projects were Living-1990 (1967), Control and Choice Dwelling (1967) and the Studio Strip (1986). Although these projects described a rigid, mechanically determined architecture, they dreamed of an open-type, responsive and participatory architecture. 116 Echoing Marshall McLuhan’s idea of architecture as an organic medium, an extension of the human body and regulator of environmental perception, architecture was conceived as a cybernetic interface between the human body and the environment, able to respond directly to personal desire through information feedback. This idea was even more radicalized in Peter Cook’s Metamorphosis: Sequence of Domestic Change, part of the Control and Choice Dwelling project, which depicted a prefabricated living room gradually dissolving into an environment where walls became televisual membranes, and inhabitants’ desires were detected by sensor cells. Architecture, from the design of ‘‘hardware,’’ became the “software”: “programs to enable diverse situations in a given space”; a landscape of complex and indeterminate systems transmitting immaterial informational entities. 116

116. Yiannoudes, S., (2011), The Archigram Vision in the Context of Intelligent Environments and Its Current Potential, Conference Paper: 7th International Conference on Intelligent Environments (IE).


“The architectural complex will be modifiable. Its aspect will change totally or partially in accordance with the will of its inhabitants... The appearance of the notion of relativity in the modern mind allows one to surmise the experimental aspect of the next civilization... On the basis of this mobile civilization, architecture will, at least initially, be a means of experimenting with a thousand ways of modifying life, with a view to mythic synthesis.”
-The influential Situationist International group, 1953

In a later attempt to dissolve and dematerialize architecture, Archigram moved towards a completely virtual conceptualization of architecture, with ‘‘places existing only in the mind’’; ‘‘Architecture without architecture.’’ 117 Their projects focused on the shift from the hardware to the software, from the tangible aspects of architecture to its immaterial entities (the system, the transmitted message, the program), ending up later with a completely virtual architecture, as in the Holographic Scene Setter (1969) of Instant City, and the Room of 1000 Delights (1970), both included in the Archigram 9 issue. Simulated mental spaces made of “wires and waves and pictures and stimulant” would recreate the interface for the fulfilment of individual desires and dreams. Archigram’s shift of interest from the collectivity of the megastructure to personalized responsive spaces aimed at creating intelligent ‘‘user-driven’’ environments for people to fulfil and realize their desires and dreams. From artifacts and objects, to human-machine interfaces, software, programs and networks that make up the environments that people inhabit. Their architecture hoped to promote an energetic and participatory behavior of the user towards the architectural space, thus expressing the vast desire of the era for control and continuous change. 118

Fig. 6.11 Instant City, Archigram, 1969

117. Sadler, S., (2005), Archigram: Architecture without Architecture, MIT Press, [Online] Available: http://www.arch.uth.gr/ uploads/courses/443/files/Sokratis_Yannoudes_The_Archigram_vision_in_the_context_of_Intelligent_Environments_and_ its_current_potential.pdf [Accessed 27 June 2017] 118. Yiannoudes, S., (2016), Architecture and Adaptation: From Cybernetics to Tangible Computing, Routledge, pp. 22.


Fig. 6.12 Archigram magazine 9 1/2, 1974

Such an assumption seemed to echo the AmI 119 vision and its applications: the intelligent environments. Since the first attempts to introduce information technology and cybernetic concepts into architectural space, adaptation emerged as a potential property of architectural space. In 1969, Andrew Rabeneck proposed the use of cybernetic technologies in buildings, to make them adapt to the future needs of their occupants and thus aid architects in extending the lifespan of their spaces. In the same year, cybernetics pioneer Gordon Pask, in his article “The Architectural Relevance of Cybernetics,” explained how cybernetics could make buildings learn, adapt and respond according to the aims and intentions of their users. The term adaptation pointed to the idea that architectural space could flexibly adapt to changing conditions and needs, but it also implied a property of what biologists and cyberneticists called adaptive systems: organisms or mechanisms capable of optimizing their operations by adjusting to changing conditions through feedback. Feedback has been a key factor in creating environmentally responsive machines. According to Wiener, relations are governed by four concepts: information, message, feedback, control.117 Along with these developments came the bloom of another sector, found at the intersection of architecture and robotics: digital manufacturing (digital fabrication).120 The origins of its evolution go back to 1952, when MIT researchers connected a computer to a milling machine, creating the first numerically controlled system. Using computer programs rather than engineers, the researchers had the ability to produce aircraft components in more complex forms than could be produced manually. A redefinition of the position of artificial intelligence, robots and computation in architecture appeared, highlighting the potential of computers to perform precise calculations and outperform human intelligence in almost every way (Philippe Morel). Digital fabrication meant the use of digital computers for the specification and communication of design, and the use of computer-controlled machines for the cutting, forming, and assembling of non-standard materials and forms. Architects were envisioned to design and construct their own machines toward deliberate design ends, just as they might script their own software. 121 This vision of the future architect was imagined by engineer and inventor Douglas Engelbart during his research into emerging computer systems at Stanford in 1962. At the dawn of personal computing he imagined the creative mind overlapping symbiotically with the intelligent machine to co-create designs. This dual mode of production would hold the potential to generate new realities which could not be realized by either entity operating alone.116
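Wiener’s four concepts (information, message, feedback, control) reduce, in their simplest case, to a loop in which a measurement of the environment is fed back to correct an action. A toy sketch of such a self-regulating ‘responsive room’, with all values hypothetical:

# A toy cybernetic feedback loop: sense the room, compare with the goal,
# and feed the error back into the heater (proportional control).
set_point = 21.0          # desired temperature (the "aim" of the system)
temperature = 15.0        # measured state of the environment
gain = 0.4                # how strongly the error drives the response

for step in range(12):
    error = set_point - temperature      # information fed back as a message
    heating = gain * error               # control: response proportional to error
    temperature += heating - 0.05 * (temperature - 10.0)   # room gains and loses heat
    print(f"step {step:2d}  temperature {temperature:5.2f}")

# Note: a purely proportional controller settles near, not exactly at,
# the set point; the room "adapts" without any explicit model of itself.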

- Was the dream of immortal buildings reachable? -

119. The field of computer science called Ambient Intelligence (AmI) refers to electronic environments that are sensitive and responsive to the presence of people.
120. Digital manufacturing is the use of an integrated, computer-based system comprised of simulation, three-dimensional (3D) visualization, analytics and various collaboration tools to create product and manufacturing process definitions simultaneously.
121. Gershenfeld, N., (2012), How to Make Almost Anything: The Digital Fabrication Revolution, Vol. 9, No. 6, Foreign Affairs.


the first ai winter 1966-1980

In the ‘70s, AI was subject to critiques and financial setbacks. AI researchers had failed to appreciate the difficulty of the problems they faced. Their tremendous optimism had raised expectations impossibly high, and when the promised results failed to materialize, funding for AI disappeared.122 The limited computer power and the insufficient memory and processing speed were the main reasons for not accomplishing anything truly useful. Hans Moravec argued in 1976 that computers were still millions of times too weak to exhibit intelligence. Another problem appeared in the domain of common-sense knowledge and reasoning. Many important artificial intelligence applications like vision or natural language required simply enormous amounts of information about the world: the program needed to have some idea of what it might be looking at or what it was talking about. This required that the program knew most of the same things about the world as a child did. Researchers soon discovered that this was a truly vast amount of information. No one in 1970 could build a database so large, and no one knew how a program could absorb so much information.123 Along with all these problems and the subsequent inactivity came the end of funding. The agencies which funded AI research (such as the British government, DARPA and the NRC) became frustrated with the lack of progress and eventually cut off almost all funding for undirected research into AI. By 1974, funding for AI projects was hard to find. The blame fell on the unrealistic predictions of many researchers, who were caught up in a web of increasing exaggeration.124

‘‘In no part of the field have discoveries made so far produced the major impact that was promised.’’ -Professor Sir James Lighthill, 1973

122. Crevier, D., (1993), AI: The Tumultuous Search for Artificial Intelligence, New York, NY: BasicBooks, pp. 100-144 123. Moravec, H., (1988), Mind Children, Harvard University Press. 124. Crevier 1993, pp.115. Moravec explains, “Their initial promises to DARPA had been much too optimistic. Of course, what they delivered stopped considerably short of that. But they felt they couldn’t in their next proposal promise less than in the first one, so they promised more.”

CHAPTER 7

BOOM! going wireless in Cyberspace! ‘‘from myth to reality..’’

In the 1980s a form of AI program, the “expert system”, is adopted by corporations around the world. Intelligent action requires not only general principles but also a very large amount of highly diverse knowledge. At the same time, the Fifth Generation computer project intends to build a platform from which artificial intelligence systems can grow, and ultimately to build machines with reasoning capabilities. ‘‘Let’s create massive databases’’ is the motto! The cognitivistic paradigm is established: natural language, knowledge representation and reasoning, proving mathematical theorems, playing formal games (checkers or chess), and expert problem solving. The revival of connectionism (in the form of computational devices) is a fact! Once again, AI achieves success. Cyberneticians aim to create what Boulanger, in the International Association of Cybernetics, called the instruments of a new industrial revolution! By the 1980s, a cybernetic space inside the machines emerges: a mythical place of hope for a freer and better society, and a fierce domain for battle and war. Cybernetics would acquire the very features of those mythical machines that it had predicted since mid-century: self-adapting, ever expanding its scope and reach, unpredictable and threatening, yet seductive, full of promise and hope, and always escaping into the future. And BOOM! Wireless devices come to dominate the world! Transmitters are becoming smaller and are no longer visible! The rise of the personal computer and the invention of the cellphone in the 1980s spark even more interest in machines-that-think. After expert systems and algorithms come the neural networks, in the attempt of the computer to mimic the human brain.. in solving the big problems of mastering the interaction with the real world..

Fig. 7.1 Neuromancer, William Gibson, 1984


The history of the transformation of the world started with a fantasy.. A new space: Cyberspace, from myth to reality! In his 1984 ‘‘Neuromancer,’’ 125 William Gibson pulled at the loose threads of our culture to imagine what would come out. He used the term ‘‘cyberspace’’: a ‘‘consensual hallucination’’ created by millions of connected computers. Gibson, through his novel, presented the new electronic space inside the machines. It remains a vividly imagined allegory for the world of the 1980s, when the first seeds of massive, globalised wealth-disparity were planted, and when the inchoate rumblings of technological rebellion were first felt. In predicting this future, Gibson helped shape our conception of the Internet. Every social network, online game or hacking scandal takes us a step closer to the universe Gibson imagined in 1984.126

the rise of expert systems

Over 50 years ago, the field of artificial intelligence was founded. After an enthusiastic start it quickly became clear that the original goals would be much harder to achieve than anticipated. Since then, artificial intelligence has proceeded in many different directions with a number of research “spin-offs”, but without realizing the goal of achieving general-purpose intelligent computing applications. Maybe the most successful applications of artificial intelligence have been in the areas of search engines and of logical reasoning systems,127 leading eventually to expert systems.128 How can human thinking processes take place in a computer? 129 In the early 1980s, Japan launched its Fifth-Generation Computer Systems Project,130 a well-funded public-private partnership that aimed to leapfrog the state of the art by developing a massively parallel computing architecture that would serve as a platform for artificial intelligence. Information processing and abstract symbol manipulation was the common language used by researchers from different disciplines to formulate their theories. The field started to take off and spread across the United States. Natural-language programs, programs for proving mathematical theorems, for manipulating formulas, for solving abstract problems, for playing formal games, for planning, and for solving real-world problems: the expert systems.

125. Neuromancer is a 1984 science fiction novel by American-Canadian writer William Gibson. It is one of the best-known works in the cyberpunk genre and the first novel to win the Nebula Award, the Philip K. Dick Award, and the Hugo Award. It was Gibson’s debut novel and the beginning of the Sprawl trilogy. The novel tells the story of a washed-up computer hacker hired by a mysterious employer to pull off the ultimate hack.
126. Cumming, E., (28 July 2014), William Gibson: the man who saw tomorrow, The Guardian. [Online] Available: https://www.theguardian.com/books/2014/jul/28/william-gibson-neuromancer-cyberpunk-books [Accessed 30 June 2017]
127. In information technology a reasoning system is a software system that generates conclusions from available knowledge using logical techniques such as deduction and induction. Reasoning systems play an important role in the implementation of artificial intelligence and knowledge-based systems. Logical reasoning is the process of using a rational, systematic series of steps based on sound mathematical procedures and given statements to arrive at a conclusion.
128. In artificial intelligence, an expert system is a computer system that emulates the decision-making ability of a human expert. Expert systems are designed to solve complex problems by reasoning about knowledge, represented mainly as if-then rules rather than through conventional procedural code. The first expert systems were created in the 1970s and then proliferated in the 1980s. Expert systems were among the first truly successful forms of artificial intelligence (AI) software. An expert system is divided into two subsystems: the inference engine and the knowledge base. The knowledge base represents facts and rules. The inference engine applies the rules to the known facts to deduce new facts. Inference engines can also include explanation and debugging abilities.
129. Sendhoff, B., Korner, E., (2009), Creating Brain-Like Intelligence: From Basic Principles to Complex Intelligent Systems, Springer, Berlin.


Instead of trying to create a general intelligence,131 these ‘‘expert systems’’ focused on much narrower tasks. That meant they only needed to be programmed with the rules of a very particular problem. Designed as support tools for decision makers, the expert systems were rule-based programs that made simple inferences from a knowledge base of facts, which had been elicited from human domain experts and painstakingly hand-coded in a formal language. Expert systems succeeded when a problem was sufficiently well structured and its complexity could be controlled. They could answer questions or solve problems about a specific domain of knowledge, using logical rules that were derived from the knowledge of experts. 132 The power of expert systems came from the expert knowledge they contained. They were part of a new direction in AI research that had been gaining ground throughout the ‘70s. “AI researchers were beginning to suspect—reluctantly, for it violated the scientific canon of parsimony—that intelligence might very well be based on the ability to use large amounts of diverse knowledge in different ways,” wrote Pamela McCorduck.133 The great lesson from the 1970s was that intelligent behavior depended very much on dealing with knowledge. Knowledge-based systems and knowledge engineering became a major focus of AI research in the 1980s. They emerged and multiplied everywhere: the field was booming... The 1980s also saw the birth of Cyc,134 in 1984: the first attempt to attack the common-sense knowledge problem directly, by creating a massive database that would contain all the mundane facts that the average person knows. Douglas Lenat, who started and led the project, argued that there were no shortcuts - the only way for machines to know the meaning of human concepts was to teach them, one concept at a time, by hand. The project was not expected to be completed for many decades. By the mid-1980s the cognitivistic paradigm was established, as the classical, algorithm-based, symbol-processing paradigm. “Cognition as computation”: what mattered for intelligence in this approach was the abstract algorithm or the program, whereas the underlying hardware was irrelevant. In this classical perspective of artificial intelligence the human being was placed at center stage, with human intelligence as the main focus.

130. The Fifth Generation Computer Systems was an initiative by Japan’s Ministry of International Trade and Industry, begun in 1982, to create a computer using massively parallel computing/processing. It was to be the result of a massive government/industry research project in Japan during the 1980s. It aimed to create an “epoch-making computer” with supercomputer-like performance and to provide a platform for future developments in artificial intelligence. The term “fifth generation” was intended to convey the system as being a leap beyond existing machines. In the history of computing hardware, computers using vacuum tubes were called the first generation; transistors and diodes, the second; integrated circuits, the third; and those using microprocessors, the fourth. Whereas previous computer generations had focused on increasing the number of logic elements in a single CPU, the fifth generation, it was widely believed at the time, would instead turn to massive numbers of CPUs for added performance.
The project was to create the computer over a ten-year period, after which it was considered ended and investment in a new “sixth generation” project would begin. Opinions about its outcome are divided: either it was a failure, or it was ahead of its time.
131. Artificial general intelligence (AGI) is the intelligence of a machine that could successfully perform any intellectual task that a human being can. It is a primary goal of some artificial intelligence research and a common topic in science fiction and future studies. Artificial general intelligence is also referred to as “strong AI”, “full AI” or as the ability of a machine to perform “general intelligent action”, though some distinguish “strong AI” to refer to cognition.
132. Pfeifer, R., Bongard, J., (2007), How The Body Shapes The Way We Think: A New View of Intelligence, A Bradford Book, The MIT Press.
133. McCorduck, Pamela (2004), Machines Who Think (2nd ed.), Natick, MA: A. K. Peters, Ltd, pp. 229.
134. Cyc was an artificial intelligence project that attempted to assemble a comprehensive ontology and knowledge base of everyday common-sense knowledge, with the goal of enabling AI applications to perform human-like reasoning.
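Footnote 128’s division of an expert system into a knowledge base of if-then rules and an inference engine that deduces new facts fits in a short sketch; the rules below are invented purely for illustration:

# A toy expert system: facts plus if-then rules; the inference engine
# applies rules to the known facts until no new fact can be deduced.
RULES = [
    ({"has fever", "has rash"}, "suspect measles"),
    ({"suspect measles"}, "refer to specialist"),
    ({"has fever"}, "recommend rest"),
]

def forward_chain(facts):
    facts = set(facts)
    changed = True
    while changed:                       # keep firing rules until a fixed point
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)    # rule fires: a new fact is deduced
                changed = True
    return facts

print(forward_chain({"has fever", "has rash"}))
# includes 'suspect measles', 'refer to specialist' and 'recommend rest'

The expertise lives entirely in the hand-coded rules; the engine itself knows nothing about the domain, which is both the strength and the brittleness the text describes.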


As a consequence, the favourite areas of investigation were natural language, knowledge representation and reasoning, proving mathematical theorems, playing formal games like checkers or chess, and expert problem solving. Popular in the 1980s, these models were intended to replace human experts, or take over parts of their tasks, in areas like medical and technical diagnosis, configuration of complex computer systems, commercial loan assessment, and portfolio management. Humans were viewed as symbol-processing systems, as systems that manipulated symbols. Indeed, GOFAI 135 was successful as long as it was concerned with solving problems on the basis of explicit rules, as any knowledge was typically preprogrammed. Computers played chess, solved algebraic problems and manipulated text. As soon as it came to simulating natural behaviour, however (constructing robots that function in an unknown environment), GOFAI failed. The idea of providing a machine with sensors and motors, but keeping the rule-based models of GOFAI, did not lead to anything skilful. ‘‘This procedure was slow and subject to failure when small changes in the environment were not sufficiently taken into account.’’ 136 The early 1980s also saw the revival of a previous discipline: connectionism,137 which tried to model phenomena in cognitive science with neural networks. Neural networks 138 had been around since the 1940s, first suggested as models of biological neural networks (McCulloch and Pitts, 1943). Their reappearance in the 1980s as computational devices was more like a renaissance. A form of neural network could learn and process information in a completely new way. Because they were based on pattern processing rather than symbol manipulation, researchers were hoping that neural networks would be better able to describe natural mental phenomena, after expert systems and related algorithms had failed to do so. The mid-1980s was the time when neural networks became widely used, with the backpropagation algorithm.139 Neural nets and evolutionary algorithms 140 were considered self-organizing, “emergent” methods, because the results were not predictable and indeed often surprising to the human designers of the systems. However, they did not end up solving the big problems of mastering the interaction with the real world either. Artificial intelligence researchers got frustrated. They had to act immediately and help the field expand vastly. Changes had to be made..

135. The term GOFAI—“Good Old-Fashioned Artificial Intelligence”—designates the classical approach (Haugeland, 1985).
136. Anderson, M. L., (2003), Embodied cognition: a field guide. Artificial Intelligence 149: 91–130.
137. Connectionism is a set of approaches in the fields of artificial intelligence, cognitive psychology, cognitive science, neuroscience, and philosophy of mind, that models mental or behavioral phenomena as the emergent processes of interconnected networks of simple units.
138. Neural networks are computational models that are inspired by biological brains, and therefore many of them inherit the brain’s intrinsic ability for adaptation, generalization, and learning.
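Backpropagation, as footnote 139 later describes it, adjusts each weight according to its contribution to the error. In its simplest, single-layer form the idea fits in a short sketch: one sigmoid neuron learning the logical OR function by plain gradient descent (toy data and parameter values assumed here):

import math, random

# One sigmoid neuron trained by gradient descent: the output error is
# propagated back to each weight in proportion to that weight's input,
# i.e. backpropagation in its simplest, single-layer form.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]   # logical OR
weights = [random.uniform(-1, 1) for _ in range(2)]
bias = 0.0
rate = 1.0

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

for epoch in range(2000):
    for inputs, target in data:
        out = sigmoid(sum(w * x for w, x in zip(weights, inputs)) + bias)
        delta = (out - target) * out * (1 - out)      # error times sigmoid slope
        weights = [w - rate * delta * x for w, x in zip(weights, inputs)]
        bias -= rate * delta

for inputs, target in data:
    out = sigmoid(sum(w * x for w, x in zip(weights, inputs)) + bias)
    print(inputs, round(out, 2), "target", target)

Nothing in the trained weights is a symbol or a rule: the learned behavior is a pattern distributed over numbers, which is exactly the contrast with symbol manipulation drawn above.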


design: computer and code; the designer’s extensions

Fig. 7.2 First digital personal computer, 1975

‘‘Today, when architects calculate and exercise their thoughts, everything turns into algorithms! Computation, the writing and rewriting of code through simple rules, plays an ever-increasing role in architecture. Architecture and the tool of computer and programming! The computer has been largely celebrated as a tool for architectural design!’’ 142 The time has come.. increased available software, tremendous computing power and the urgent need for computer code. New tools were needed to generate a new model of the generative architectural process. Generative tools: computer modelling, data structures, transformations, shape processing, drafting and modelling, machine-readable models, simulation, rapid prototyping and 3D printing.. Science was still searching for a theory of explanation, and architecture for a theory of generation. Designers were finally able to use computers and CAD technology and do the work themselves. Engineers were encouraged to experiment with programming, paving the way to today’s use of digital technology. By allowing the automation of complex processes and rule-based means to resolve challenging problems of geometry, economy, representation and construction, the computer acted as an instrument that extended the capacity and ability of the architect to manage problems and forms that would not be possible otherwise. By scripting and programming software, architects could even have the computer carry out complex custom calculations of their own invention, essentially encapsulating the knowledge and formal volition of the architect within the software itself. The computer was no longer used as a tool for representation, but as a medium to conduct computations. Architecture emerged as a trace of algorithmic operations.

139. Backpropagation is a method used in artificial neural networks to calculate the error contribution of each neuron after a batch of data (in image recognition, multiple images) is processed. This is used by an enveloping optimization algorithm to adjust the weight of each neuron, completing the learning process for that case. Backpropagation is sometimes referred to as deep learning, a term used to describe neural networks with more than one hidden layer.
140. In artificial intelligence, an evolutionary algorithm (EA) is a subset of evolutionary computation, a generic population-based metaheuristic optimization algorithm. An EA uses mechanisms inspired by biological evolution, such as reproduction, mutation, recombination, and selection.
141. programming = to think dynamically through time, movement and interaction
142. Chu, K., (July/August 2006), Metaphysics of Genetic Architecture and Computation, Programming Cultures: Art and Architecture in the Age of Software | Architectural Design magazine, Vol. 76, No. 4, pp. 40.


Scripting enabled the designer to manage more aspects of design – an extension of thinking, a way of handling complexity and solving problems. The computer became a tool for hybrid, open processes. ‘‘Architecture is, and always has been, coded.’’ The arrival of computers extended the understanding of code in architecture. Code was now understood as a set of instructions written in a programming language. It stood for ‘source code’, a series of statements written in some human-readable computer programming language, or ‘machine code’, instruction patterns of bits (0s and 1s) corresponding to different machine commands.144 In the 1980s, the extended use of computers and information in the field of architecture led to great steps towards digital construction, paving the way for the introduction of robotics in the field as well. The first machines were created, expanding the range of producible forms, although still limited in scale. This evolution did not signify the passage from the abstract to the prosthetic process, but the ability to turn data into matter and, vice versa, matter into data. In the same decade, the first International Symposium on Industrial Robots (ISIR) took place in Chicago. Its goal was to provide researchers and engineers worldwide the opportunity to present their work and express their ideas on the robotics sector. In direct connection with ISIR, the International Federation of Robotics (IFR) was established as a non-profit organization operating in 15 countries, aiming to disseminate and strengthen the robotics industry worldwide and mobilize social consciousness about robotic technologies. The first robotics cooperative in Japan was initially established as a voluntary organization, promoting the use of this new instrument on an interdisciplinary basis. That new system was about to bring a range of changes.144 At the same time, another field was developing almost in parallel: that of human-computer interaction, within which ‘‘intelligent environments’’ were conceived. Architects studied spaces where computer systems and communication technologies were incorporated harmoniously to enhance day-to-day activity; architecture itself had so far played a secondary role in this field. ‘‘Ubiquitous computation’’145 was originally defined as a general idea of computation fully integrated into everyday objects and activities, located at the intersection of computer science, behavioral science and design science.
143. Rocker, M. I., (July/August 2006), When Code Matters, Programming Cultures: Art and Architecture in the Age of Software | Architectural Design magazine, Vol 76, No 4, pp. 20.
144. IFR International Federation of Robotics (2014), History - IFR International Federation of Robotics. [Online] Available: http://www.ifr.org/history/. [Accessed 30 June 2017]
145. Ubiquitous computing (or ‘‘ubicomp’’) is a concept in software engineering and computer science where computing is made to appear anytime and everywhere. In contrast to desktop computing, ubiquitous computing can occur using any device, in any location, and in any format. A user interacts with the computer, which can exist in many different forms, including laptop computers, tablets and terminals in everyday objects such as a refrigerator or a pair of glasses. The underlying technologies to support ubiquitous computing include the Internet, advanced middleware, operating systems, mobile code, sensors, microprocessors, new I/O and user interfaces, networks, mobile protocols, location and positioning, and new materials.
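To make the notion of code as a generative medium concrete, here is a minimal sketch (a generic rewriting system invented for illustration; the rules and names are assumptions, not any historical CAD script):

```python
# A toy rewriting system: a seed and a handful of rules stand in for the
# designer's code; repeated application generates a form description that
# was never drawn by hand. The rules here are invented for illustration.

RULES = {"A": "AB", "B": "A"}  # hypothetical generative rules

def generate(seed, steps):
    """Rewrite the seed string `steps` times according to RULES."""
    form = seed
    for _ in range(steps):
        form = "".join(RULES.get(symbol, symbol) for symbol in form)
    return form

# Each pass is one more 'generation' of the algorithmic trace.
for step in range(6):
    print(step, generate("A", step))
```

Architecture ‘‘as a trace of algorithmic operations’’ is exactly this trace: the designer authors the rules, and the outcome is computed rather than drawn.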


The built environment and its use evolved in parallel through time, under the agency of computers as design tools and as users. The ultimate goal was interaction! Architects such as John Frazer at the Architectural Association were particularly interested in how Pask’s adaptive systems might be applied to the architectural design process in order to evolve building forms and behaviours. Four of Pask’s projects146 in particular gave hints about how to create richer, more engaging and stimulating interactive environments. The MusiColour Machine (1953) and Machine-a were participant-focused constructional approaches, where the data originated for the first time in the user’s actions, far from the reasoning of predefined information. The Self-Adaptive Keyboard Instructor (SAKI), designed by Pask and Robin McKinnon-Wood in 1956, provided a model of interaction converging into a commonly agreed form of feedback: an architecture that learnt from its users in a manner analogous to what the users learnt from the architecture. Finally, the Colloquy of Mobiles project (1968) had an impact on the environment that reactivated the sensors and re-engineered the actions. This logic largely reflected Pask’s perception of the environment, known as ‘‘Conversation Theory’’: an interaction theory that analyzed man-to-man, machine-to-machine and machine-to-man contacts under a common framework.147 Also, ‘‘Seek’’ (1970, Nicholas Negroponte with the Architecture Machine Group, M.I.T.) was a machine that could run either as a ‘‘cybernetic world model’’ or as a ‘‘behavioural observation and experimental laboratory.’’148 The intention was for the mechanism of Seek to sense the physical environment, influence it, and attempt to come to terms with unexpected local events within this environment. It was all about the strict definitions of performance, conversation, interaction, environment and participation. It was about designing tools that people themselves might use to construct – in the widest sense of the word – their environments and, as a result, build their own sense of agency. It was about developing ways in which people themselves could become more engaged with, and ultimately responsible for, the spaces they inhabited. It was about investing the production of architecture with the poetries of its inhabitants.

146. Gordon Pask was one of the early and most vivacious proponents of cybernetics, the study of control and communication in animal and machine. His particular contribution to the discipline came in his specialisation in what is known as ‘‘second-order’’ cybernetics: frameworks that don’t just account for control and feedback toward the achievement of goals, but also account for ‘‘observers’’ and ‘‘participants’’ in such systems. His focus as both a scientist and theatre writer/producer led him to build, in the 1950s, the Musicolour Machine, a device that (unlike today’s direct-response music-to-colour interfaces) worked like another jazz musician in a band (see further descriptions below). Still operating in hardcore science, he was one of the exhibitors at the Cybernetic Serendipity show at the ICA, London, in the late 60s, curated by Jasia Reichardt, which went on to be the inspiration for many future interaction designers. He is best known for his ‘‘Conversation Theory,’’ which since the 70s and 80s has dominated as the most coherent and potentially productive theory of interaction, encompassing human-to-human, human-to-computer, and computer-to-computer configurations in a common framework. As such he was a frequent collaborator with architects, particularly at the Architectural Association, London, and the Architecture Machine Group (now the Media Lab), Boston.
147. Haque U., (2007), The Architectural Relevance of Gordon Pask, John Wiley & Sons Ltd, University of Vienna, Austria, pp. 54-61.
148. (February 2009), Project One - Systems, Networks and Collaboration, SEEK, [Online] Available: http://norgacs1projectone.blogspot.com.cy/2009/02/seek.html [Accessed 30 June 2017]

Fig. 7.3 MusiColour Machine, Gordon Pask, 1953

Fig. 7.4 An Evolutionary Architecture, J. Frazer

Frazer’s Morphogenesis Project (1989-1996) brought along a new axis of cybernetic incursion into architecture, this time concerning the relation between the architect and the architectural design tools: the development of tools that could adapt to and encourage the architect, while providing new ways of communicating with computers: “Our attempts to improve the software of the user-interface were paralleled by attempts to improve the hardware. The morphogenesis project sought to incorporate the idea that architectural units— buildings, cities, conurbations—grow, quasi-biologically, and adapt to their environments in time.’’149 At the same time, the architect could interfere with this process, in the choice of seed, by selecting certain vectors of evolution for further exploration, and so on. In this way, the computer itself became an active agent in the design process, something the architect could interact with symmetrically, sailing the tides of the algorithms without controlling them – a beautiful exemplification of the cybernetic ontology in action.150

149. Frazer J., (1995), An Evolutionary Architecture, Architectural Association Publications.
150. Pickering, A., (2010), The Cybernetic Brain: Sketches of Another Future, University Of Chicago Press.
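A speculative sketch of the interaction Frazer describes, with invented names and numbers: a numeric ‘‘genome’’ stands in for a building form, and the architect’s selection (simulated here by a random pick) replaces a coded fitness function, steering the evolution without controlling it:

```python
import random

def mutate(genome, rate=0.2):
    # Small random variations play the role of biological mutation.
    return [g + random.uniform(-rate, rate) for g in genome]

def evolve(seed, generations=3, offspring=4):
    genome = seed
    for gen in range(generations):
        variants = [mutate(genome) for _ in range(offspring)]
        # Here the architect would inspect the variants and choose one;
        # a random choice stands in for that act of selection.
        genome = random.choice(variants)
        print("generation", gen, ["%.2f" % g for g in genome])
    return genome

evolve(seed=[1.0, 0.5, 2.0])  # the 'seed' the architect chooses
```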

Fig. 7.5 ‘‘Seek’’ - Nicholas Negroponte, 1970


we: agents extended bodily and mentally in a transforming space..

Gradually, society started seeking psychic reproduction by attempting to create artificial intelligence systems (Collins, 1990). AI involved a separation of mind and body, or more precisely the development of a mind without a body. Machines began inundating our bodies at an increasing rate, and the human body itself came to be considered, in essence, technology. But in what sense are we technology? (Machines That Become Us: The Social Context of Personal Communication Technology) In her ‘‘Cyborg Manifesto,’’ initially published in 1985, Donna Haraway saw the conflation of body and technology as constitutive of the cyborg: a hybrid of machine and organism in which technologies of communication and biotechnologies articulate the polymorphous recrafting of bodies.151 The productivity of Haraway’s theory lies in its postulation that the cyborg, as a creature without origins that forms itself through the confusion of boundaries (between the human and the animal, the natural and the artificial, the body and the mind), is a fiction that nevertheless maps ‘‘our social and corporeal reality’’ and allows us to imagine beneficial couplings which undo identity in terms of mutability. Like the cyborg, the performative body ‘‘has no ontological status apart from the various acts which constitute its reality,’’ and its fluidity of identities ‘‘suggests an openness to re-signification and re-contextualization.’’152 People started turning into extended ‘‘homo-cellular’’ entities; body and brain were molded to adapt to the new realities, the new networks and the whole ‘‘interaction system’’ of the era. The act of extending the body was not only about its appearance, nor about the creation of an exoskeleton or a prosthetic structure simply to upgrade one’s bodily performance. The goal was to frame an entirely new experience of perceiving the world, with the participation of our brain, muscles and senses, every part of our current or future body. Human biology and mentality were profoundly changed in 1983 by the arrival of the cell phone. The small blinking, buzzing and beeping object in our hand might be the single prosthetic device that has done the most to transform the human. This object has become an integral part of the body and brain. A whole new version of our species. The obvious redesign of the human. Perception, social interaction, memory and even thought itself became increasingly cellular. The device was no longer an accessory to human life but the basis of a new kind of life for the years to come..153

151. Jones, A., Batchen, G., Gonzales-Day, K., Phelan, P., Ross, C., Gomez-Pena, G., Sifuentes, R., (2001), The Body and Technology, Art Journal, Vol. 60, No. 1, College Art Association, pp. 28.
152. Butler, J., (1999), Gender Trouble: Feminism and the Subversion of Identity, US, Routledge, pp. 176.
153. Colomina, B., Wigley, M. (2017), Are We Human? Notes on an Archaeology of Design, Ch. 11: Designing The Body, Lars Müller Publishers.

Fig. 7.6 The arrival of the cell phone, Motorola DynaTAC 8000X

‘‘Designers have developed suits that turn corpses into mushrooms and stainless steel machines that chemically liquefy bodies. But the human is also unique in the design of life – the crafting of new forms of mechanical, electronic, and biological life. The ancient search for a thinking machine that would be indistinguishable from a human is nearing its goal. Whole new forms of life are genetically engineered in laboratories. Designing for life and death has been replaced by designing life itself.’’ 153


the second ai winter 1987-1993

The business community’s fascination with AI rose and fell in the 80s in the classic pattern of an economic bubble. The collapse was in the perception of AI by government agencies and investors; the field itself continued to make advances despite the criticism. The term “AI winter” was coined by researchers who had survived the funding cuts of 1974, when they became concerned that enthusiasm for expert systems had spiralled out of control and that disappointment would certainly follow.154 Their fears were well founded: in the late 80s and early 90s, AI suffered a series of financial setbacks. Eventually even the earliest successful expert systems, such as XCON, proved too expensive to maintain. They were difficult to update, they could not learn, they were “brittle.’’ Expert systems proved useful, but only in a few special contexts.155 In the late 80s, the Strategic Computing Initiative cut funding to AI “deeply and brutally.” New leadership at DARPA had decided that AI was not “the next wave” and directed funds towards projects that seemed more likely to produce immediate results.156 Meanwhile, Rodney Brooks and Hans Moravec, researchers from the related field of robotics, argued for an entirely new approach to artificial intelligence.

154. Crevier 1993, pp. 203. “AI winter” was first used as the title of a seminar on the subject for the Association for the Advancement of Artificial Intelligence.
155. McCorduck, P. (2004), Machines Who Think (2nd ed.), Natick, MA: A. K. Peters, Ltd., pp. 435.
156. McCorduck, P. (2004), Machines Who Think (2nd ed.), Natick, MA: A. K. Peters, Ltd., pp. 430-431.



TOWARDS THE EMBODIED INTELLIGENCE

shift: from algorithm-based to embodied intelligence

In the search for alternatives, neighbouring fields of research—philosophy, psychology, neuroscience and evolutionary biology—eventually provided new insights about how intelligence worked in the real world. In the mid-1980s Rodney Brooks suggested that all of this focus on logic, problem solving, and reasoning was based on our own introspection—how we tend to see ourselves and our own mental processes—and that the way artificial intelligence was proceeding was misguided. Instead, he proposed, essentially, that we should forget about symbol processing, internal representation, and high-level cognition, and focus on the interaction with the real world: “intelligence requires a body” was the slogan of the new paradigm of embodied intelligence. With this change in orientation, the nature of the research questions also started to shift; the community became interested in locomotion, manipulation, and, in general, how an agent could act successfully in a changing world. Unlike the cognitivistic view of intelligence, which was algorithm-based, the embodied approach envisioned the intelligent artifact as more than just a computer program: ‘‘It has a body, and it behaves and performs tasks in the real world. It is not only a model of biological intelligence, but a form of intelligence in its own right.’’ As a consequence, many researchers around the world started working with robot-agents. Since then, the nature of the field of artificial intelligence has changed dramatically, with embodiment entering the picture...157

157. Pfeifer, R., Bongard, J., (2007), How The Body Shapes The Way We Think: A New View of Intelligence, A Bradford Book, The MIT Press.
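A toy sketch in the spirit of this shift (not Brooks’ own code): layered reflexes couple sensing directly to action, with no internal world model; a higher-priority layer overrides the default whenever it has something to say:

```python
def avoid(sensors):
    # Priority layer: react to an obstacle directly ahead.
    if sensors["obstacle_ahead"]:
        return "turn_left"
    return None  # nothing to say; defer to lower layers

def wander(sensors):
    # Default layer: keep moving through the world.
    return "move_forward"

LAYERS = [avoid, wander]  # earlier entries take precedence

def act(sensors):
    for layer in LAYERS:
        action = layer(sensors)
        if action is not None:
            return action

print(act({"obstacle_ahead": True}))   # -> turn_left
print(act({"obstacle_ahead": False}))  # -> move_forward
```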


GENETICISTS’ DREAMS: let’s make a body for intelligence to live in!
‘‘from life-as-it-is to life-as-it-could-be’’

‘‘The human mind, if it is to be the physical organ of human reason, simply cannot be seen as bound and restricted by the biological skinbag..’’158

Back to nature for ‘‘bottom-up’’ inspiration: Expert systems can’t crack the problem of imitating biology, they can’t deal with the real world, and they seem to fail in a real, unknown environment. And now what? Neural networks are the key! The top-down approach of pre-programming a computer with the rules of intelligent behaviour has to be replaced by the revival of the bottom-up approach. Neuroscience is capable of explaining the mysteries of human cognition. From symbol processing to the embodied mind thesis. From computer systems to the science of Artificial Life.. This new science is welcomed with celebration across the scientific fields.. Artificial Life has the power to study artificial systems by indicating the similarities they have with natural ones. The early 90s are marked by another turning point in AI: intelligent behavior is recognized to be collaborative as well as single-agent. Another computer explosion takes place. The Blue Brain Project takes a step closer to the decoding of the brain and neurocognition. The Human Genome Project is initiated. The computer is seen as an exploration medium for the synthesizing of life. Evolutionary computation, invisible ubiquitous computing, genetic programming and nature-inspired algorithms. Can brain hacking escape science fiction and become reality? A great era is about to start. “If we want to know what intelligence is, we need to understand how it develops,” Pfeifer said. And that requires a body. The embodiment of intelligence...

158. Clark, A., (2003), Natural-Born Cyborgs: Minds, Technologies, and the Future of Human Intelligence, Oxford University Press.



CHAPTER 8

In the late 80s, several researchers advocated a completely new approach to artificial intelligence, based on robotics. They believed that, to show real intelligence, a machine needed to have a body: it needed to perceive, move, survive and deal with the world. They argued that these sensorimotor skills were essential to higher-level skills like common-sense reasoning, and that abstract reasoning was actually the least interesting or important human skill. The basic idea of embodiment was to give the robot/agent sensory and other capabilities, so as to learn through experience. They advocated building intelligence “from the bottom up.”159 In his 1990 paper “Elephants Don’t Play Chess,” robotics researcher Rodney Brooks took direct aim at the physical symbol system hypothesis, arguing that ‘‘symbols are not always necessary since the world is its own best model. It is always exactly up to date. It always has every detail there is to be known. The trick is to sense it appropriately and often enough.”160 In the 80s and 90s, many cognitive scientists also rejected the symbol-processing model of the mind and argued that the body was essential for reasoning, a theory called the embodied mind thesis.161 At that point, the meaning of the term artificial intelligence started changing, or rather started adopting two meanings: the first implied GOFAI, the traditional algorithmic approach, while the other more generally designated a paradigm in which the goals were to understand biological systems while at the same time exploiting that knowledge to build artificial systems. While originally the field was clearly a computational discipline dominated by computer science, cognitive psychology, linguistics, and philosophy, it then turned into a more multidisciplinary field requiring the cooperation and talents of researchers in many other fields such as biology, neuroscience, engineering (electronic and mechanical), robotics, biomechanics, material sciences, and dynamical systems. Neuroscience entered the public consciousness, and changed the way we talked about ourselves..162 Artificial Life gave access to the domain of ‘‘life-as-it-could-be.’’ It was about explaining existing life and recreating biological phenomena in alternative media. Using insights from biology, it could explore the dynamics of interacting information structures, while generating life-like behavior through mechanisms. The aim was to examine systems related to life, its processes, and its evolution through simulations using computer models, robotics, and biochemistry; also to create alternative life-forms, literally “life made by Man rather than by Nature.”163 The techniques used for the production of AL have been cellular automata, artificial neural networks (ANNs) and complex systems (e.g. Tierra by Tom Ray, a computer program in which self-replicating digital organisms evolve). The notion of embodiment implied that whenever an agent behaved in whatever way in the physical world, it would, by its very nature of being a physical agent, affect the environment and in turn be influenced by it, and it would induce – generate – sensory stimulation.164

159. Moravec (1988, p. 20) writes: “I am confident that this bottom-up route to artificial intelligence will one day meet the traditional top-down route more than half way, ready to provide the real world competence and the commonsense knowledge that has been so frustratingly elusive in reasoning programs. Fully intelligent machines will result when the metaphorical golden spike is driven uniting the two efforts.”
160. Brooks, R. A., (1990), “Elephants Don’t Play Chess” | Robotics and Autonomous Systems, [Online] Available: http://people.csail.mit.edu/brooks/papers/elephants.pdf [Accessed 1 July 2017].
161. Lakoff, George (1987), Women, Fire, and Dangerous Things: What Categories Reveal About the Mind, University of Chicago Press.
162. Pfeifer, R., Bongard, J., (2007), How The Body Shapes The Way We Think: A New View of Intelligence, A Bradford Book, The MIT Press.
163. Langton, C.G. (1992), Artificial Life II, (Interactions between learning and evolution), pp. 487-507, D.H. Ackley and M.L. Littman.
164. Pfeifer, R., Iida, F., (2006), Morphological Computation: connecting body, brain and environment | Conference Paper, AI 2006: Advances in Artificial Intelligence, 19th Australian Joint Conference on Artificial Intelligence, Hobart, Australia.
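To make one of the named techniques concrete, a minimal cellular automaton (a generic textbook example, not a reconstruction of Tierra or any system cited above): each cell updates from purely local rules, yet life-like global patterns emerge:

```python
RULE = 110  # Wolfram's rule 110, a famously rich one-dimensional rule

def step(cells):
    out = []
    for i in range(len(cells)):
        left, mid, right = cells[i - 1], cells[i], cells[(i + 1) % len(cells)]
        index = (left << 2) | (mid << 1) | right  # neighbourhood as 3 bits
        out.append((RULE >> index) & 1)           # look up the rule's answer
    return out

cells = [0] * 31
cells[15] = 1  # a single live cell as the seed
for _ in range(15):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```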


Computers as workstations, towards ‘‘exploration’’
Computers as the new designers of evolution?

AL has not adopted the computational paradigm as its underlying methodology of behavior generation, nor has it attempted to explain life as a “computation.” Instead, computers provided an alternative medium within which to attempt to synthesize and create life. The computer was considered the tool for the manipulation of information. “Exploration” was achieved since the numerical simulation of systems allowed one to “explore” a system’s behavior under a wide range of parameter settings and initial conditions. Computers were capable of simulating physical systems from first principles. Evolutionary computation, using evolutionary algorithms,165 was the key tool for the ‘‘decoding’’ of evolution. The use of ‘‘evolutionary’’ principles for automated problem solving originated in the 1950s. Evolutionary Programming was introduced by Lawrence J. Fogel in the US, while John Henry Holland called his method a genetic algorithm. From the early 90s on they were unified as different representatives of one technology, called evolutionary computing. Also in the early 90s, a fourth stream following the general ideas emerged – genetic programming. Since the 1990s, nature-inspired algorithms have become an increasingly significant part of evolutionary computation. These terminologies denote the field of evolutionary computing and consider evolutionary programming, evolution strategies, genetic algorithms, and genetic programming as sub-areas. At the same time, the unfashionable neural networks were used as models that implemented “brain-style computation.” The role played by neuroscience in this endeavour has changed over time.. Contemporary neuroscience gradually accustomed us to no longer considering the brain as a centralised information-processing unit, or a sort of giant calculator that conformed to the cybernetic representation of intelligence. Cerebral activity, including self-consciousness, appeared instead to originate in a complex set of interactions within networks whose organisation and functioning is irresistibly reminiscent of the Internet.166 Neuroscience, as the study of the (human) brain, studied the physiology of neurons. The Blue Brain Project was an attempt to study how the brain functioned, while serving as a tool for neuroscientists and medical researchers. The computational perspective on neurocognition aimed at understanding how the neural dynamics and neural mechanisms of the brain produce cognition.167 Brain hacking, so to speak, has been a futurist fascination for decades. The idea that we will, inevitably, have chips in our brains and ways to interface directly with computing devices has been a staple of the most seminal cyberpunk works, from William Gibson’s ‘‘Neuromancer’’ to Masamune Shirow’s ‘‘Ghost in the Shell’’ to the Wachowskis’ ‘‘The Matrix.’’

165. In an “evolutionary algorithm,” the purpose may be defined to be the discovery of a solution to a complex problem. It involves a simulated environment in which simulated software “creatures” compete for survival and the right to reproduce. Each software creature represents a possible solution to a problem encoded in its digital “DNA.” Evolutionary algorithms are computer-based problem-solving systems that use computational models of the mechanisms of evolution as key elements in their design. They are adept at handling problems with too many variables to compute precise analytic solutions. (Glossary, The Age of Spiritual Machines)
166. Picon A., (2015), Smart Cities: A Spatialised Intelligence | Architectural Design Primer, Wiley, pp. 95.
167. Gazzaniga M. S., (1995), “Preface,” in The Cognitive Neurosciences, MIT Press, Cambridge, Mass, USA.
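A minimal genetic algorithm of the kind footnote 165 glosses, with an intentionally toy fitness function (counting 1s in the bit-string ‘‘DNA’’; a real application would score a design or a solution instead):

```python
import random

def fitness(genome):
    return sum(genome)  # placeholder objective: prefer genomes full of 1s

def crossover(a, b):
    cut = random.randrange(1, len(a))  # recombination at a random point
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.05):
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in range(16)] for _ in range(20)]
for generation in range(30):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]  # selection: the fitter half survives
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(10)]
    population = parents + children

population.sort(key=fitness, reverse=True)
print("best genome:", population[0], "fitness:", fitness(population[0]))
```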

Fig. 8.1, 8.2 Brain-hacking, The Matrix, an age of Virtual Reality


design: the construction of ‘‘possible worlds’’

‘‘We are now in a position to articulate a more comprehensive theory of architecture, one that is adequate to the demands imposed by the convergence of computation and biogenetics in the so-called Post-Human Era: a monadology of genetic architecture that deals with the construction of possible worlds.’’168

The PC explosion in the 1990s caused a host of digitally designed projects to appear on the architecture scene. William Massie’s concrete formwork, Greg Lynn’s waffle typologies, and various attempts at surface manipulation happened simultaneously with the appearance of a powerful new software player: Autodesk. Since releasing its first version of AutoCAD in 1982, the company had stepped into the realm of 3D modelling and become a strong competitor in the field of digital representation, form-finding, and system development. Since then the architectural community has seen a cascade of software debuts: Graphisoft’s ArchiCAD, Autodesk’s 3D Studio, and Revit Technology Corporation’s Revit. Nemetschek AG appeared in 1997, with CAD/CAM software for CNC machining, and various tools for digital fabrication entered the mainstream in more recent years.169 Meanwhile, computer engineers made several efforts to create systems in domestic spaces, the so-called Intelligent Environments, able to respond to the presence and activities of people in an adaptive and proactive way, supporting and enhancing their everyday living habits. This referred to autonomously functioning systems able to provide automated services, assessing situations and human needs in order to optimize control and performance in architectural space. This visionary project, called Ambient Intelligence, emerged in computer science in the early 1990s, when Mark Weiser suggested the idea of invisible and ubiquitous computing. Wireless networks, intelligent agents, user-friendly interfaces, and artificial intelligence, including neural nets and fuzzy logic, were all employed in order to implement this vision.170 The introduction of ubiquitous computing in architectural space raised several questions about its psychological implications. If physical structures could act, perceive and react, by means of sensors and actuators, to the activities that took place around them, then people might perceive them as entities demonstrating some level of intelligence and agency. The drive to design and construct architecture that could move and interact by means of computational systems implied the idea of the boundary object, a concept that sociologists of technology have recently explored to refer to computational objects (or rather quasi-objects) that challenge the boundary between the animate and the inanimate. This idea could be further associated with the theoretical criticism of modernity’s object–subject or nature–culture divide, and of human–machine discontinuities. In light of these observations, we were led to question the concepts and practices pertaining to computationally augmented architecture.170

168. Sykes K. A., (March 2010), Constructing a New Agenda: Architectural Theory 1993-2009, Princeton Architectural Press, pp. 429.
169. HP, Intel, (December 2014), A History of Technology in the Architecture Office, Architizer, [Online] Available: https://architizer.com/blog/a-history-of-technology-in-the-architecture-office/ [Accessed 15 July 2017]
170. Yiannoudes, S., (2016), Architecture and Adaptation: From Cybernetics to Tangible Computing, Routledge, pp. 3.


Architecture was becoming increasingly dependent on genetic computation: the generative construction and the mutual coexistence of possible worlds within the computable domain of modal space. The underlying ambitions of computation were already apparent: the embodiment of artificial life and intelligence systems, either through abstract machines or through the biomachinic mutation of organic and inorganic substances; hence, the sublimation of physical and actual worlds into higher forms of organic intelligence by extending into the computable domain of possible worlds. Genetic architecture was neither a representation of biology nor a form of biomimesis; instead, its theoretical origins have been traced to John von Neumann’s invention of the cellular automaton and his ‘von Neumann architecture’ for self-replicating systems.168 The von Neumann architecture was a precursor to the architecture of a genetic system.171 According to Karl Chu, in his article ‘‘Metaphysics of Genetic Architecture and Computation,’’ the potential emancipation of architecture from anthropology is already enabling us to think for the first time of a new kind of xenoarchitecture: an information labyrinth or a universal matrix that is self-generating and self-organizing, with its own autonomy and will to being. But what happens when architecture separates itself from human culture? Can it still be considered architecture, or will it become something new? Imagining architecture independent of human involvement makes us see architecture more like nature, a biotechnical hybrid of our own design. In order to break through the barrier of complacency and self-imposed ignorance, what is needed is a radicalisation: the development of a new concept of architecture adequate to the demands imposed by computation and the biogenetic revolution. Together, this convergent synthesis provides the conditions of possibility for the likely emergence of what Kevin Kelly referred to as the coming age of a neo-biological civilization. Genetic Space is simultaneously co-extensive with and in excess of the space of biotechnology; it is the Space of Possible Worlds that are about to be engendered by the phenomenon of universal computation. Architecture needs to keep up with these advances and pass to its next stage of evolution. It has to incorporate the architecture of computation into the computation of architecture.173

‘‘Architecture is literally a part of nature; our description of an architectural concept in coded form is analogous to the genetic code-script of nature. In nature it is only the genetically coded information of form which evolves, but selection is based on the expression of this coded information in the outward form of an organism. The codes are manufacturing instructions, but their precise expression is environmentally dependent. Our architectural model, considered as a form of artificial life, also contains coded manufacturing instructions which are environmentally dependent, but as in the real-world model it is only the code-script which evolves.’’ 172

171. Genetics is a term coined by William Bateson in 1905 to encompass the study of heredity, and ‘gene’ was introduced around the same time by the Danish botanist Wilhelm Johannsen, to account for the units within sex cells that determine the hereditary characteristics. The meanings of both terms, ‘genetics’ and ‘gene’, are sufficiently abstract and general to be used as concepts that have logical implications for architecture without being anchored too explicitly to biology. Implicit within the concept of genetics is the idea of the replication of heritable units based on some rule inherent within the genetic code, and embedded within the mechanism for replication is a generative function: the self-referential logic of recursion.
172. Frazer J., (1995), An Evolutionary Architecture, Architectural Association Publications.


- Instead of building spaces to suit our needs, what about inhabiting pre-existing spaces capable of modifying themselves to fit us? Is this a sci-fi movie scenario, or the progress of a system where genetic architecture and biomechanical engineering work together? -

As we now approach what Ray Kurzweil referred to as the Singularity, the myth of matter is about to be displaced by the myth of information. Contrary to Mies van der Rohe’s remark that ‘‘architecture is the art of putting two bricks together,’’ the emerging conception is that architecture is the art of putting two bits together – at least bits that are programmed to self-replicate, self-organise and self-synthesise into ever more new constellations of emergent relations and ensembles.168 ‘‘The information superhighway is about the global movement of weightless bits at the speed of light.’’174 Architectural design as a special kind of problem-solving process (William Mitchell). The possibilities of this idea of genetic architecture are captivating and could drastically change what we know as architecture..

Fig. 8.3 Genetic Architecture, Possible Worlds, Karl Chu, 2010

173. Chu, K., (2004), Archaeology of the Future, in Peter Eisenman ‘Barefoot on White-Hot Walls’, Editorial 53-54, MAK Wien, [Online] Available: http://ehituskunst.ee/karl-chu-archaeology-of-the-future/?lang=en [Accessed 25 May 2017]
174. Negroponte, N., (1995), Being Digital, Bits and Atoms, Alfred A. Knopf.


we: forging the ‘‘Scripter-Gods’’?

Finally, with the convergence of computation and biogenetics, the world has started moving into the so-called Post-Human Era, which would bring forth a new kind of biomachinic mutation of organic and inorganic substances. Information has been the currency driving all these developments, and nowhere was this more apparent than in the words uttered by Craig Venter: ‘‘The goal is to engineer a new species from scratch.’’175 We are information carriers ourselves, and we swim about in abundant information. Life could be the power to direct the flow of information. The human becomes the machine, the machine becomes the human. We routinely replace parts on the inside of our bodies. Prosthesis transforms us visibly or invisibly. We become ‘‘bladerunners!’’ These replacements and adjustments are not experienced as changes; they are fully integrated into the idea of the ‘‘natural’’ body. The very idea of adding new body parts and removing others has become routine. All the techniques of the artificial body that were once advanced medical experiments have steadily normalized. Prosthetics, neuroprosthetics, reconstructive surgeries, plastic surgeries, even drugs and chemicals are all themselves designed, while contributing to our dramatic redesign. The human is simply not what it used to be. The intertwining of the physical and life sciences is reflected in two technological megatrends: ‘‘biology is becoming technology and technology is becoming biology.’’ The first implies that living systems are increasingly seen as reproducible. Genetically modified bulls, cloned sheep, cultured heart valves and artificially reconstructed bacteria illustrate this trend. It is not only about biological interventions, as IT-based interventions are also emerging in techniques to influence brain processes. A well-known example is the use of deep brain stimulation to reduce severe tremor in patients with Parkinson’s disease. The reverse trend, that of ‘technology becoming biology’, is reflected in artifacts that increasingly appear more lifelike or seem imbued with human behaviors. By bringing into the foreground the hidden reservoir of life in all its potential manifestations through the manipulation of the genetic code, the unmasking or the transgression of what could be considered the first principle of prohibition – the taking into possession of what was once presumed to be the power of God to create life – may lead to conditions so precarious and treacherous as to threaten the future viability of the species Homo sapiens on Earth. At the same time, depending on how humankind navigates the universe of possible worlds that are about to be siphoned through computation, it could once again bring forth a poetic re-enchantment of the world, one that resonates with all the attributes of a premodern era derived, in this instance, from the intersection of the seemingly irreconcilable domains of logos and mythos. Organically interconnected to form a new plane of immanence that is digital, computation is the modern equivalent of a global alchemical system destined to transform the world into the sphere of hyper-intelligent beings. Intelligent machines, intelligent bodies, intelligent environments.176 We suddenly become enhanced humans. The application of genetic and cybernetic technologies to human brains and bodies becomes a reality.

175. Enhanced Humans, Futurism, [Online] Available: https://futurism.com/enhancedhumans/ [Accessed 30 July 2017]


Augmented humans aren’t starry-eyed visions of the future—they’re walking among us right now. From cochlear implants to robotic limbs controlled by our minds, the fields of biotechnology and gene editing allow us to dictate evolution and engineer a new kind of human. These scientific and technological breakthroughs are transforming our bodies and reshaping the future of humanity.175 ‘‘Each human body is highly unique within its own species, marked by idiosyncratic dimensions, fingerprints, ears, eyes, and DNA sequence. The human body is never singular or stable. On the contrary, it is defined by diversity, fluidity and transformation. The human becomes human in changing itself. Darwin said we even designed our nakedness. The body is an artifact, the product of protocols and technologies.’’176 The future of human nature is no doubt genetic engineering. As we begin to contemplate the possibility of intervening in the human genome to prevent diseases, we cannot help but feel that the human species might soon be able to take its biological evolution into its own hands. With gene-editing tools like CRISPR, precise insertions and deletions in a human DNA sequence can be introduced into the body to deactivate mutant genes, replace them with a healthy copy, or mobilize a new gene to fight a disease. ‘‘Playing God’’ is the metaphor commonly used for this self-transformation of the species, which, it seems, might soon be within our grasp. In his book The Future of Human Nature, Jürgen Habermas expresses the view that genetic manipulation is bound up with the identity and self-understanding of the species. Can knowledge of one’s own hereditary factors prove to be restrictive for the choice of an individual’s way of life, and undermine the symmetrical relations between free and equal human beings?177

176. Colomina, B., Wigley, M. (2017), Are We Human? Notes on an Archaeology of Design, Ch. 14: The Unstable Body, Lars Müller Publishers.
177. Habermas J., (2003), The Future of Human Nature, Polity.

Fig. 8.4 Third Hand, Stelarc, 1980-2002


Fig. 9.1 Ping Body, an internet-actuated performance, Stelarc, 1996



CHAPTER 9

TECHNOLOGIES TO BOND WITH
let’s connect to each other electronically!
‘‘a global self-synthesizing organ’’

‘‘The universe of possible worlds is constantly expanding and diversifying thanks to the incessant world-constructing activity of human minds and hands. Literary fiction is probably the most active experimental laboratory of the world-constructing enterprise.’’ -Lubomir Doležel, 1998

The field of AI, now more than half a century old, finally achieves some of its oldest goals. It begins to be used successfully throughout the technology industry, although somewhat behind the scenes. Some of the success is due to increasing computer power, and some is achieved by focusing on specific isolated problems and pursuing them with the highest standards of scientific accountability. Still, the reputation of AI, in the business world at least, is less than pristine. Inside the field there is little agreement on the reasons for AI’s failure to fulfil the dream of human-level intelligence that had captured the imagination of the world in the 1960s. Together, all these factors help to fragment AI into competing subfields focused on particular problems or approaches, sometimes even under new names that disguise the tarnished pedigree of “artificial intelligence.” AI is both more cautious and more successful than it has ever been. The Internet as the Universe of the Adjacent Possible, the expansion of networks, social media and a ubiquitous surveillance culture, supercomputers and Moore’s law, intelligent agents, RoboCup178 and industrial robotics, sophisticated mathematical tools, speech recognition, medical diagnosis and Google’s search engine: all come to establish a new view of things..

178. RoboCup, 1997: a diversified community contributing to advancements in virtual multi-agent AI, expanding the frontiers of collective distributed intelligence at a pace which exceeds physical robot innovations.


1994 was the time when the WWW bomb exploded! The World Wide Web emerged. Star Wars science fiction dreams of 1974 finally became reality.. The power of computation was already evident; in the 70 years since the inception of the Universal Turing Machine, it has ushered in the Information Revolution by giving rise to one of the most significant and now indispensable phenomena in the history of communication: the Internet, or what could also be characterised as the Universe of the Adjacent Possible. Stuart Kauffman defined the Adjacent Possible as the expansion of networks of reaction graphs within an interactive system into neighbourhood domains of connectivity which until then remained in a state of pure potentiality. The Internet marked a new world order, reconfiguring the planet with a virtual, interactive matrix that has become increasingly spatial, intelligent and autonomous: a global self-synthesising organ bustling with neural intelligence.. Characterised by its total anonymity, it allowed for the surfacing of forgotten zones of the psyche. The absence of moral, physical or social repercussions in cyberspace proved to be quite liberating.179

‘‘The adjacent possible is a kind of shadow future, hovering on the edges of the present state of things, a map of all the ways in which the present can reinvent itself. As Steven [Johnson] notes, the adjacent possible “captures both the limits and the creative potential of change and innovation.’’180 -Eddie Smith, 2010

Milestones and Moore’s Law

Supercomputers were gradually entering our lives, equipped with augmented capacities. On 11 May 1997, Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov. The supercomputer was a specialized version of a framework produced by IBM, and was capable of processing twice as many moves per second as it had during the first match (which Deep Blue had lost). The successes were due not to some revolutionary new paradigm, but mostly to the tedious application of engineering skill and to the tremendous power of contemporary computers.181 In fact, Deep Blue’s computer was 10 million times faster than the Ferranti Mark 1 that Christopher Strachey taught to play chess in 1951. This dramatic increase was tracked by Moore’s law, which predicted that the speed and memory capacity of computers would double every two years. The fundamental problem of “raw computer power” was about to be overcome. The leap was crazy!
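A back-of-the-envelope check of that figure, assuming the clean two-year doubling the text cites:

```python
# From Strachey's 1951 chess program to Deep Blue in 1997 is 46 years,
# i.e. 23 doublings at one doubling every two years.
doublings = (1997 - 1951) // 2
print(2 ** doublings)  # 8388608 -- about 8.4 million, the order of "10 million"
```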

179. Chu K., (July/August 2006), Metaphysics of Genetic Architecture and Computation, Programming Cultures: Art and Architecture in the Age of Software | Architectural Design magazine, Vol 76, No 4, pp.39-40. 180. Erlic M., (2016), What is the Adjacent Possible, Medium, [Online] Available: https://medium.com/@Santafebound/whatis-the-adjacent-possible-17680e4d1198 [Accessed 3 August 2017] 181. Kurzweil, R., (2005),The Singularity is Near, Viking Press.


At the same time, a new paradigm called “intelligent agents” became widely accepted in the 90s, when Judea Pearl, Allen Newell and others brought concepts from decision theory and economics into the study of AI. An intelligent agent is a system that perceives its environment and takes actions which maximize its chances of success. By this definition, simple programs that solve specific problems are “intelligent agents,” as are human beings. AI research was redefined as “the study of intelligent agents,” going beyond the study of human intelligence to study all kinds of intelligence.182 It was hoped that a complete agent architecture would one day allow researchers to build more versatile and intelligent systems out of interacting intelligent agents. AI researchers then began to develop and use sophisticated mathematical tools more than they ever had in the past. AI had become a more rigorous “scientific” discipline. Judea Pearl’s highly influential 1988 book brought probability and decision theory into AI. Among the many new tools in use were Bayesian networks, hidden Markov models, information theory, stochastic modeling and classical optimization. Precise mathematical descriptions were also developed for “computational intelligence” paradigms like neural networks and evolutionary algorithms. Algorithms originally developed by AI researchers began to appear as parts of larger systems. AI had solved a lot of very difficult problems, and its solutions proved to be useful throughout the technology industry: data mining, industrial robotics, logistics, speech recognition, banking software, medical diagnosis and Google’s search engine.183 But the field of AI received little or no credit for these successes. Many of AI’s greatest innovations were reduced to the status of just another item in the tool chest of computer science. Nick Bostrom explained: “A lot of cutting edge AI has filtered into general applications, often without being called AI, because once something becomes useful enough and common enough it’s not labelled AI anymore.”

Fig. 9.2 World Wide Web map, 2011

182. Poole, D., Mackworth, A., Goebel, R., (1998), Computational Intelligence: A Logical Approach, Oxford University Press.
183. Olsen, S., (10 May 2004), Newsmaker: Google’s man behind the curtain, CNET.


Therefore, many AI researchers in the 1990s deliberately called their work by other names, such as informatics, knowledge-based systems, cognitive systems or computational intelligence. In part, this might be because they considered their field to be fundamentally different from AI, but the new names also helped to procure funding. In the commercial world at least, the failed promises of the AI winter continued to haunt AI research, as the New York Times reported in 2005: “Computer scientists and software engineers avoided the term artificial intelligence for fear of being viewed as wild-eyed dreamers.” In 2000, the arrival of the social media ‘‘bubble’’ marked the beginning of a promising era. The bubble would keep growing bigger and bigger through the years and, in its explosion, would completely transform the whole world socially, culturally and economically: the way we lived, and ourselves.
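A minimal sketch of the ‘‘intelligent agent’’ definition given above – a system that perceives its environment and takes the action that maximizes its chances of success. The percepts, actions and utility values here are invented purely for illustration:

```python
ACTIONS = ["wait", "advance", "retreat"]

def expected_utility(percept, action):
    # Hypothetical utility table keyed on what the agent senses.
    table = {("clear", "advance"): 1.0,
             ("blocked", "retreat"): 0.8,
             ("blocked", "wait"): 0.5}
    return table.get((percept, action), 0.1)

def agent(percept):
    # Pick whichever action promises the highest expected utility.
    return max(ACTIONS, key=lambda a: expected_utility(percept, a))

for percept in ["clear", "blocked"]:
    print(percept, "->", agent(percept))
```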


design: interactive environments & robotics
/ moving away from extraction to embrace aggregation /
/ from pre-fabrication to robotic systems on-site /

We design the way we look into the universe, we design the way we transport ourselves over great distances, the way we transport data. The Internet has undoubtedly been an important factor in both the technological and the theoretical aspects of architectural creation, presenting a variety of developments. Michael Fox, reviewing the long history of kinetics in architecture, claimed that performance could be optimized by applying the new computing information and proceeding to its physical adoption. As a consequence, the integration of digital sensing and action into corresponding spaces and objects would allow them to redefine themselves and adapt to the new reality. Gradually, the consolidation of computer-controlled robotic systems took place. In such environments, each system interacted not only with the human factor but also with the behaviors of the other systems, connected into a collective set that could be controlled by a primitive logic. Expanding the logic of understanding space as a collective entity with different subsystems of functions, each system individually formed a link, creating a model of decentralized matching and control. This resulted in the transition of control to a bottom-up, emergent process. The rules of responsiveness could be very simple, and the rules of interaction between the systems could be just as simple, while their combination was capable of producing interactions that proved to be emergent and difficult to predict. From this thought came the idea that architectural space itself could consist of robotic systems. Production technologies, coupled with recent software developments, allowed the robotic segments of these systems to become increasingly smaller and more intelligent. This prompted the architectural community to perceive the site itself as being organized within an information network. At the same time, a departure took place from the pre-existing logic of robotic systems towards changing, discrete systems: an “autocatalytic” process through which smart, modular machines built ever smarter and more refined ones. Hypothetically, in the near future, such modular, reconfigurable space would have a big impact on the way one dwelled and on the relationships between the user and the space itself. It was at the discretion of architects and designers to decide how these sections would be assembled and how the compositions would respond to the continuous flow of information between the user and the space.
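A toy illustration of that decentralised logic, with invented numbers: each node follows one simple local rule (drift toward the average of its two neighbours), and the collective settles into a coherent configuration that no central controller computed:

```python
import random

nodes = [random.uniform(0, 10) for _ in range(8)]  # e.g. panel positions
for step in range(20):
    # Every node moves halfway toward the mean of its ring neighbours.
    nodes = [n + 0.5 * ((nodes[i - 1] + nodes[(i + 1) % len(nodes)]) / 2 - n)
             for i, n in enumerate(nodes)]
print(["%.2f" % n for n in nodes])  # states converge without central control
```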


Fig. 9.4 Industrial Robotics, computer-aided manufacturing process

These developments also resulted in the investigation of biomimetics from the point of view of processes, creating modular automated systems that were capable of self-reproduction. Architectural robotics were used to allow buildings to adapt much more holistically and naturally. At the intersection of design came biology and computation. Understanding the processes by which organisms develop, evolve and reproduce has been an invaluable factor in grasping how such small-scale mechanisms could, hypothetically, act within an architectural composition. This study area is called “evolutionary biology”184 and includes development, differentiation and morphogenesis. As Gordon Pask prophesied in An Evolutionary Architecture: “The role of the architect here, I believe, is not so much the design of the building or of the city as to catalyze it, it is to act so that it can evolve.” Subsequently, the individual parts of the system began to tend towards diminishing, to the extent that they would synthesize the matter itself. We were heading towards the end of large-scale robotic architecture. The problem with this view was the scale, as it focused on the building as a synthesis of discrete systems and machines instead of the possibilities of the very matter that composed it. The possible contribution of kinetic systems operating on this scale could spread beyond a clear need-to-fit, and at the same time involve a wide range of human sensory perception. These new interactive systems would add intriguing new levels of parametrization and redefinition to architecture. Adaptability would become much more holistic and act on a very small internal scale.
184. Evolutionary biology is the subfield of biology that studies the evolutionary processes that produced the diversity of life on Earth, starting from a single common ancestor. These processes include natural selection, common descent, and speciation. The discipline emerged through what Julian Huxley called the modern evolutionary synthesis (of the 1930s) of understanding from several previously unrelated fields of biological research, including genetics, ecology, systematics and paleontology. Current research has widened to cover the genetic architecture of adaptation, molecular evolution, and the different forces that contribute to evolution, including sexual selection, genetic drift and biogeography. The newer field of evolutionary developmental biology (“evo-devo”) investigates how embryonic development is controlled, thus creating a wider synthesis that integrates developmental biology with the fields covered by the earlier evolutionary synthesis.


Fig. 9.5 Mesh mould, Gramazio & Kohler, 2014

Technological developments in industrial and building construction and in computational control would continue to expand the parameters of robotic capability, and thus affect the scale at which we perceive and construct environments. This shifting of scale would cause the mechanical paradigm of adaptation to be redefined. In recent years, and particularly from 1980 to 2000 and to date, the development of robotic technology focused on neural networks and robotic behavior; a branch that was originally considered to underestimate human existence through its simplification. These robotic domains then developed to a great extent and managed to penetrate modern reality with a multitude of applications. ‘‘Buildings are information-processing vehicles.’’185 Architecture is an information carrier. We humans are all information carriers too. We process information in our brains and other organs, in turn producing images and sound and leaving other processed matter behind. We are metabolists by nature. Information is always subject to a continuous process of transformation. Buildings are continually absorbing information, processing it and then producing new information. All buildings together play an important evolutionary role in the worldwide process of transforming information. A hyperbody is a programmable building body that changes its shape and content in real time. This building body is the vehicle for processing information. Dynamic architecture, able to move. Animated buildings, continually calculating, persistently fixing their position with regard to other real-time processes in and around them. Architecture becomes a game and the users its players. Architects are the programmers of this game.

185. Oosterhuis, K., Cook, P. (2002), Architecture Goes Wild, E-motive Architecture, Uitgeverij.

89



Fig. 9.6 Ping Body-exploring human-machine interfaces, Stelarc, 1995

90


TECHNOLOGIES TO BOND WITH - LET’S CONNECT TO EACH OTHER ELECTRONICALLY! [WE]

we:

transformed by design, we become what we want, what design wants..

/ from pure restoration to further augmentation beyond biology / ‘‘Computers are our exo-brains, exo-memories, exo-databases..’’ -Kas Oosterhuis

At the end of the second millennium and into the third, a precipitous return to corporeality looped subjectivity back toward the explicit embodiments of the heyday of performance around 1970. The body was dramatically reconceived as nonauthentic, defined through otherness, and specific in its identifications. As the speed and intensity of technologically mediated modes of being have accelerated in recent years, visual theorists have come to recognize that technology not only transforms our ways of doing things; it profoundly conditions our experience of ourselves and others. The question that arises is: is it possible to think polymorphous identities together with the mutability and fallibility of the body? The fast-expanding integration of information technologies into everyday life, the corollary blurring of work and non-work, the perfecting of eco- and biotechnologies that increasingly blur the boundaries between the human and the nonhuman (such as genetic engineering, robotics, reproduction technologies, pharmacology, plastic surgery, and body fitness), and the underlying problematic belief in our ability to predict, control, conquer, and improve nature via technology, confirmed the body to be a materialization open to incessant reconfiguration. The incitement to reconfigure and transform is at once creative and normative, fluid and normalized.186 According to Stelarc, the body is what allows a person to operate and become aware in the world. Through the years, it gradually becomes a site of action, interaction and experimentation. We gradually come to realize that we are no longer merely biological bodies. Rather, the body has become a chimera, a combination of meat, metal and code. The body has become a hybrid and extended operational system, performing beyond the boundaries of its skin and beyond the local space that it inhabits. We can project our physical presence and perform remotely with bodies and machines. Fractal Flesh (1995)187 expressed the idea of a multiplicity of bodies and bits of bodies, spatially separated but electronically connected, 186. Jones, A. (2001), The Body and Technology. Art Journal, Vol. 60, No. 1 (Spring 2001). College Art Association. 187. Fractal Flesh was a performance that took place November 10-11, 1995 at Telepolis, an art and technology festival organized in Luxembourg by the Munich Media Lab. Stelarc plugged himself into muscle-stimulation circuitry controlled by a Mac. The Mac, in turn, was connected, via the Internet, to Paris’s Centre Pompidou, Helsinki’s Media Lab, and Amsterdam’s Doors of Perception conference. By pressing a color-coded 3-D rendering of a human body on a touchscreen, participants at the three sites jolted the artist’s (literally) wired body into action. Blipped across the net through a high-speed link to the computer in the performance space, their gestures triggered Stelarc’s muscle-stimulators; low-level bursts of voltage, zapping through electrodes attached to his limbs, caused both arms and one leg to jerk involuntarily into raised or extended positions.

91



generating recurring patterns of interactive actions at varying scales. Fractal Flesh transformed the artist into history’s first teleoperated human (teleoperation being the remote control of robots by human operators). It also offered a fuzzy premonition of something like the “simstim” technology in William Gibson’s Neuromancer, which enables a hacker to inhabit the sensorium of a remote individual. Stelarc is a self-described “evolutionary alchemist” dedicated to triggering mutations [and] transforming the human landscape. Hence, we were expected to perform in Mixed and Augmented Realities, to seamlessly slip between the actual and the virtual. Actualizing interfaces between bodies, machines and virtual task environments, directly experiencing them and thereby being able to articulate something meaningful, has been what these performances are about. Stelarc imagined a future in which humanity resembled a benign version of Star Trek: The Next Generation’s bionic race, the Borg - a hive world whose inhabitants are “electronically connected, extruding their intelligence from one body to another.” Such a world would render accepted definitions of the human psyche obsolete. “My awareness would neither be all here in this body nor all there in the body [I’m remote-controlling] but sort of interchangeable,” he says. “In Western philosophy, we’re human because we’re individuals. But one can conceive of a body that is the medium of multiple agencies, a host for a multiplicity of selves remotely collapsed into it via the net.” If the body is obsolete, is it only our mind that makes us human? Well, what makes us human is not merely our physical bodies but our social institutions, our political structures, our cultural conditioning and our technologies, which effectively become our external organs. Speaking about a “mind” is problematic in a Platonic, Cartesian or Freudian sense. Stelarc once said: ‘‘The more and more performances I do, the less and less I think I have a mind of my own, or any mind at all in the traditional, metaphysical sense.’’ Asserting that we “possess” a mind generates an unnecessary retro-recursiveness of logic. Rather, our evolutionary architecture generates our operation and awareness of the world. There is no “I” in the way we generally imagine. There is only a body that interacts with other bodies, situated in history. The “I” is a language construct that compresses a more complex interactive situation. What is important is not what’s “in” you or me but rather what happens between us: in the medium of language within which we communicate, in the social institutions within which we operate, in the culture that we have been conditioned by, at this point in time in our history. So this is not an essentialist model of the human but one that allows for a more flexible and fluid unfolding and definition of the human condition. It’s more in the realm of a Deleuzian becoming. And what constructs our identity is no longer our physical presence or location but rather our connectivity. To be curious and creative is to be human. And perhaps what it means to be human is not to remain human at all.188

188. Interview with Stelarc, [Online], Available: http://econtact.ca/14_2/donnarumma_stelarc.html, [Accessed 20 August 2017]

92


TECHNOLOGIES TO BOND WITH - LET’S CONNECT TO EACH OTHER ELECTRONICALLY! [WE]

Fig. 9.7 ORLAN before the cosmetic surgery operation she broadcast to galleries worldwide.

Fig. 9.8 Prosthetic nose and the future of prosthetics and 3D-printing.

Fig. 9.9 A hunt for high tech, Bart Hess, 2007

93



IS BIG DATA THE NEW AI? let’s self-design our avatar selves deep learning-big data-agi

‘‘Artificial intelligence would be the ultimate version of Google. It would understand exactly what you wanted, and it would give you the right thing.’’ -Google co-founder Larry Page, 2000 -The time has come to attempt an intelligence competitive with, or better than, the human at general tasks.. -Is big data the new AI? In the first decades of the 21st century, access to large amounts of data (“big data”), faster computers and advanced machine-learning techniques were successfully applied to many problems throughout the economy. By 2016, interest in AI had reached a “frenzy”. The applications of big data began to reach into other fields as well, such as training models in ecology and various applications in economics. Advances in deep learning, particularly deep neural networks, drove progress and research in image and video processing, text analysis, and even speech recognition, producing astounding results in competition with humans. Robotic systems, robotic arms and personalized construction189 were established, pursuing brain-like intelligence. Robot pets, smart toys, even robots for the home (such as the Roomba vacuum cleaner) appeared, in an attempt to enter our everyday life. The cell phone’s use exploded, with new features constantly upgrading and redefining it. Apple’s iPhone was the first model to carry the Google app with speech recognition: a small device in our pockets, backed by thousands of powerful computers running parallel neural networks, learning to spot patterns in the vast volumes of data streaming in from Google’s many users. We could even talk with Siri, our intelligent personal assistant. The World Wide Web and social media expanded further and caused the biggest transformation that ever happened to society...

94


CHAPTER 10

Big Data is powerful on its own. So is artificial intelligence. What happens when the two are merged? Even though AI technologies have existed for several decades, it is the explosion of data (the raw material of AI) that has allowed it to advance at incredible speeds. It is the billions of searches done every day on Google that provide a sizable real-time data set from which Google learns our typos and search preferences. Siri and Cortana would have only a rudimentary understanding of our requests without the billions of hours of spoken word, now digitally available, that helped them learn our language. Each year the amount of data we produce doubles, and it is predicted that within the next decade there will be 150 billion networked sensors (more than 20 times the people on Earth). This data is instrumental in helping AI devices learn how humans think and feel; it accelerates their learning curve and also allows for the automation of data analysis. The more information there is to process, the more data the system is given, the more it learns and ultimately the more accurate it becomes. Artificial intelligence is now capable of learning without human support. In the past, AI’s growth was stunted by limited data sets, by representative samples of data rather than real-time, real-life data, and by the inability to analyze massive amounts of data in seconds. Today, there is real-time, always-available access to the data and tools that enable rapid analysis. Our technology is now agile enough to access these colossal datasets and rapidly evolve AI and machine-learning applications. Will a computer ever be able to think like a human brain? Some say never, while others say we are already there. Nevertheless, we are at the point where the ability of machines to see, understand and interact with the world is growing at a tremendous rate, and it only increases with the volume of data that helps them learn and understand even faster. Big data is the fuel that powers AI. 190 Artificial Narrow Intelligence was the first goal set at the beginning of the field of AI. The aim was to create a machine intelligence that could equal or exceed human intelligence or efficiency at one specific thing. And it actually happened. ANI is officially in our everyday life; your phone is a little ANI factory. Your email spam filter, Google Translate, the ‘‘recommended for you’’ button on Amazon, the price of your airplane ticket, Google Search, Facebook’s Newsfeed and the world’s best checkers and chess players are now all ANI systems. And those are just in the consumer world. Sophisticated ANI systems are widely used in sectors and industries like the military, manufacturing, and finance (algorithmic high-frequency AI traders account for more than half of equity shares traded on U.S. markets), in expert systems like those that help doctors make diagnoses and, most famously, in IBM’s Watson, which contained enough facts to soundly beat the most prolific Jeopardy champions. ..But while ANI doesn’t have the capability to cause an existential threat, we should see this increasingly large and complex ecosystem of relatively harmless ANI as a precursor of the 189. Personalized construction has existed as an idea in science fiction, in films such as “Star Trek: The Next Generation”. In the last few years, however, scientists and laboratories have moved in that direction. Unlike the now-familiar 3D printers, the new technology promised the automatic creation of integrated operating systems.
In 2001, the Center for Bits and Atoms (CBA) at MIT, funded by the National Science Foundation, was set up to study the boundaries between computational science and the physical sciences, which led to the establishment of fab labs (fabrication laboratories). 190. ‘‘Why AI would be nothing without Big Data,’’ [Online], Available: https://www.forbes.com/sites/bernardmarr/2017/06/09/why-ai-would-be-nothing-without-big-data/ [Accessed 25 August 2017]

95



world-altering hurricane that is on the way. Each new ANI innovation quietly adds another brick onto the road to AGI and ASI. Or, as Aaron Saenz sees it, our world’s ANI systems “are like the amino acids in the early Earth’s primordial ooze” - the inanimate stuff of life that, one unexpected day, woke up.
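The chapter’s central claim, that the more data a system is given the more accurate it becomes, can be made concrete with a minimal sketch. The snippet below is an illustration only, using synthetic data and the scikit-learn library (neither of which the thesis itself cites): the same classifier is trained on growing slices of a dataset, and the test accuracy rises with the amount of data seen.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# synthetic stand-in for a "big data" stream
X, y = make_classification(n_samples=20000, n_features=30,
                           n_informative=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

for n in (100, 1000, 10000):               # growing training slices
    model = LogisticRegression(max_iter=1000).fit(X_train[:n], y_train[:n])
    # accuracy on the held-out set rises as the model sees more data
    print(n, round(model.score(X_test, y_test), 3))
```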

Fig. 10.1 A Lycopodium spore caught by strands of nanofibre, NanoRevolution and Nanotechnology

Fig. 10.2 Google search engine as an ANI example

Fig. 10.3 Siri - our new, intelligent, virtual, personal assistant

96


IS BIG DATA THE NEW AI? LET’S SELF DESIGN OUR AVATAR SELVES [DESIGN]

design:

going viral!

/ from the ‘‘internet of things’’ to the things of the internet / / ‘‘no place’’ for design /

How soon will robots revolutionize architectural creation? As a natural conclusion of this historical retrospective, it follows that the relationship between robotics and architecture is neither at a beginning nor at any end; it is, in a sense, at the end of the beginning. Robotic systems are now being explored both from the point of view of physical integration into a more socially sensitive architecture and from the perspective of the digital production of informed form and materiality. Architectural interest, however, is not exhausted by these two areas. The historical investigation of robotic systems shows that, in every period, they raised many concerns and prompted attempts to investigate their future influence. Another innovation is the creation of FAB Labs191 (fabrication laboratories), which operate as a single network. These laboratories are part of a wider “makers’ movement”, composed of amateur makers working with high technology, who tend to democratize access to modern means of construction. FAB labs have been created worldwide according to a common requirement and, despite their differences in financing and housing, they all have the same basic capabilities. Their operation allows the exchange of projects as well as the movement of people from one workshop to another. The main feature of these workshops is their Internet presence: an online platform for the exchange of designs and for the implementation or adaptation of their parameters in order to individualize the result produced. This practice is a major innovation, considering that, without such an efficient network, these laboratories would individually be inadequate for producing major projects.192 The open-source philosophy aims at the dissemination of technological and social knowledge. Virtual drafting and modelling, as well as physical modelling tools with shared-database properties, so that changes made by one member of the group are perceived by everyone else, allow design, evaluation and dialogue among team members in real time. Users of this system (members of the group) participate physically or virtually in the synergistic design process, creating a continuous feedback cycle from conception to production that knows no geographic boundaries.
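To make the parameter-adaptation idea concrete, here is a minimal sketch. The ShelfBracket design, its parameters and the “platform exchange” are all invented for illustration; fab labs have no single shared API. The point is the pattern: a design is published as data plus a generator, and each user re-runs the generator with personal parameters to individualize the fabricated result.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class ShelfBracket:                 # hypothetical open-source design
    width_mm: float = 120.0
    depth_mm: float = 90.0
    thickness_mm: float = 6.0

def cut_outline(b: ShelfBracket):
    """2D outline (mm) that a laser cutter or CNC router would follow."""
    return [(0, 0), (b.width_mm, 0), (0, b.depth_mm), (0, 0)]

# "publish" the default design to the shared platform as plain data ...
shared = json.dumps(asdict(ShelfBracket()))

# ... a user in another fab lab downloads it and adapts one parameter
mine = ShelfBracket(**{**json.loads(shared), "width_mm": 150.0})
print(cut_outline(mine))
```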

191. Formed in 2009 to facilitate and support the growth of the international fab lab network as well as the development of regional capacity-building organizations. The Fab Foundation is a US non-profit 501(c)(3) organization that emerged from MIT’s Center for Bits & Atoms Fab Lab Program. Its mission is to provide access to the tools, the knowledge and the financial means to educate, innovate and invent using technology and digital fabrication, to allow anyone to make (almost) anything, and thereby to create opportunities to improve lives and livelihoods around the world. Community organizations, educational institutions and non-profit concerns are its primary beneficiaries. 192. Fab Lab Foundation, [Online], Available: http://www.fabfoundation.org/index.php/about-fab-foundation/index.html [Accessed 20 August 2017]

97



Fig. 10.4 ETH Zurich Large-Scale Robotic Fabrication Arch_Tec_Lab

A new reality emerges from the contact of architecture and robotics, one that has to deal with the realization of architectural practice and the object produced.193 Architectural work is understood as a system emerging through a process of sustained feedback. This development continues perpetually in time, while attempting to bridge the gap between nature and technology. What happens when architects become designers of materials and artefacts? What new design protocols are needed? The most important achievement of digital production is the connection of the immaterial logic of computers with the material reality of architecture, where the direct coupling of digital design with architectural production is activated at 1:1 scale. With the use of robotic systems, it is possible to “update” material processes and to merge digital design with material realization. Thus emerges the phenomenon of digital materiality, introduced into architectural terminology a few years earlier. A uniform technological foundation is therefore established in architecture, something that, in the building industry of the early 20th century, was more a vision than a reality. Material logic stems from an understanding of design that is immediately informed by the structural properties of the material, in harmony with new production principles. At the same time, architectural creation arises as the uniting of individual parts, through disengagement from the logic of repeating identical prefabricated elements and the transition to a logic of unlimited variations. Robots, being generic tools, can adjust to every kind of construction and produce an infinite number of differentiated effects. We are in a time when the dialectical relations between code and matter, type and variant, creator and creation, man and machine are gradually blurred. What is design after all? What effect-affect influences does it create? A new epistemological approach arises: the historical abolition of the separation between intellectual work and manual production, between design and implementation.194 193. Kohler, M. (2014), Matthias Kohler: The Design of Robotic Fabricated Architecture | MIT Video. [Online] Available: http://video.mit.edu/watch/goldstein-lecture-13947/. [Accessed 21 July 2017]. 194. Picon, A. (2014), Robots and Architecture: Experiments, Fiction, Epistemology. Architectural Design No 229, Made by Robots: Challenging Architecture at a Larger Scale, pp. 54-59. 195. Lim, J., Gramazio, F. and Kohler, M. (2013), A Software Environment For Designing Through Robotic Fabrication. Open Systems: Proceedings of the 18th International Conference on Computer-Aided Architectural Design Research in Asia (CAADRIA), Hong Kong, pp. 45-54. 196. Gramazio, F., Kohler, M. and Willmann, J. (2014), Authoring Robotic Processes, Architectural Design No 229, Made by Robots: Challenging Architecture at a Larger Scale, pp. 14-22.
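As a hedged illustration of this “logic of unlimited variations” (the design rule, dimensions and command format below are invented, not taken from any cited project), a robotic bricklaying toolpath can assign every brick its own rotation from a simple positional function, so that no element repeats:

```python
import math

def brick_rotation(col, row):
    """Invented design rule: rotation in degrees varies with position."""
    return 20 * math.sin(col * 0.4) * math.cos(row * 0.6)

def toolpath(cols, rows, brick_l=240, brick_h=70, joint=10):
    """Yield (x_mm, y_mm, rotation_deg) placements for a robotic arm."""
    for row in range(rows):
        for col in range(cols):
            yield (col * (brick_l + joint),
                   row * (brick_h + joint),
                   round(brick_rotation(col, row), 1))

for cmd in toolpath(cols=3, rows=2):       # every brick is a variant
    print(cmd)
```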

98


IS BIG DATA THE NEW AI? LET’S SELF DESIGN OUR AVATAR SELVES [DESIGN]

Fig. 10.5 Hylozoic Ground, Robotic fabrication, P.Beesley, 2010

This hybridization of digital and material production extends to all the scales and times of architectural production, where digital design and materiality work in parallel and are produced successively, evolving one another.195 Materiality is informed through two-way procedures, both mental and practical. A continuous feedback is created between the digital and the material, the perceptual and the tactile. The façade of the Gantenbein vineyard in Switzerland, designed by Gramazio and Kohler, is a prime example of the central principle of prosthetic robotic systems in the digital implementation of architecture, creating a set of heterogeneous unit elements.196 Also, with the extended use of robotic systems, a gradual shift takes place on the construction site, since prefabrication tends to become obsolete. The area where the individual parts are produced coincides with the siting of the final project, and the robotic system acts in harmony with the human for its realization. Human and technological means produce architectural results through a collaborative process. Robotic systems enhance human capabilities while enriching architectural practice. The flow of this new architectural process goes from the human mind to the robotic system, transforming the designer into a kind of cyborg whose intentions acquire a material texture through an artificial “body.”197 All in all, architecture is defined as “the mediator between matter and the event.”198 There is a direct shift to biology as a source of inspiration and knowledge. Living organisms are the very solution to problems posed by the environment. Matter tends to adapt to events and act in an adaptive direction in its surroundings. At the same time, human relationships and interactions give birth to spatial correlations. Movement and gestures construct spatiality while reflecting individualities, users’ relationships with space, and the interactions between them, through a set of cultural, social, functional and emotional information as a whole, as one continuum. This is what Carlo De Carli defined as a “spazio primario” at the center of architecture. There is no separation between interior and exterior, small and big. The focus is on the process that led to the production of a space or object, and on the changing relations it subsequently produces. In the same logic, innovative robotic systems do 197. Picon, A. (2014), Robots and Architecture: Experiments, Fiction, Epistemology. Architectural Design No 229, Made by Robots: Challenging Architecture at a Larger Scale, pp. 54-59. 198. Reiser, J. and Umemoto, N. (2006), Atlas of Novel Tectonics, 1st Ed., Princeton: Princeton Architectural Press.

99



not interfere with human relationships but, applied in space, alter the gestures of users and therefore their behavior. The change does not imply substitution. Architecture is the reflection of the prime spaces of which technology is part. Through this idea, we realize that man can function actively in a technology-driven architecture through the relationships that develop within it and through it.199

Fig. 10.6 Micro-Organic and Neoplasmatic Architecture, Steve Pike.

199. Zanolari Bottelli, L. (2012), Wall-E. Rethinking the Human in Technology Driven Architecture.

100


IS BIG DATA THE NEW AI? LET’S SELF DESIGN OUR AVATAR SELVES [DESIGN]

Every organism is, at any given moment, understood as a bridge between two stages of its evolution, not as a completed being. They are not beings but becomings. Architecture, following this course of evolution, and beyond the impact of passing time, tends to change itself through robotic systems in order to increase its degrees of freedom and turn into a continuously evolving system.200 The more non-linear the existence of an architecture is, the less clearly it is defined. Moving away from linearity, both spatial and temporal, increases the degrees of becoming and hence of freedom. ‘‘Becoming’’, as defined by Deleuze, is not a correlation of relationships, resemblance, imitation and identification, but a continuous creation, a concept with its own meaning and content. With the addition of robotic systems, architecture redefines itself, turning partly into an experimental survey of topological geometries, partly into a computational orchestration of robotic material production and partly into a genetic kinematic sculpture of space. The relationships that add meaning to a complex reality are now associated with dynamic, self-evolving entities under the influence of their own interactions. Similar to the world of dreams, a world that is clearly topological, in which, through robotic systems and this new way of perceiving architectural creation as a continuum, one can go from the desirable to the feasible. These ideas are capable of changing the architectural profession both epistemologically and mentally.201 Architecture is considered a parallel extension of our collective metabolism, to which we assign part of our physiology, the protection and memory of social organization. According to Manuel DeLanda, cities (and every architectural extension) are our “mineralized exoskeleton” (DeLanda, 2000). Architecture mediates between us and the environment at the scale of society, in a manner analogous to atomic interaction. Moving away from the obsolete idea of rigid protection and the inert object, it composes a more sensitive mediator of the interrelationships of environmental and cultural influences, as well as a new dynamic object in itself, using form as a means of environmental adaptation and of building open relations. Relationships created and acting within a reality aware of its complexity are no longer interwoven with a set of finite entities making up a morphological vocabulary. They evolve dynamically, taking into account the influence of their own interactions. The idea surrounding this kind of architecture today could be summarized as an interactive environment: complex, animated and “alive” through a process of interdependence with other elements of technology and the environment. The idea of hybridizing the natural with the artificial, the digital with the material and the living with the non-living tends to move to the center of today’s architecture. An architecture “symbiotic” with the ecological whole, which modern robotic systems can enhance. An architecture often incomprehensible from the whole, capable of producing new relationships and taking a more active part than ever in the gatherings that shape contemporary reality.

200. Kelly, K. (2011), What Technology Wants, 1st Ed. UK: Penguin Books. 201. De Landa, M. (2009), Immanent Patterns of Becoming - YouTube. [Online] Available: https://www.youtube.com/watch?v=jKqOic0kx4U. [Accessed 30 July 2017].

101



The ‘‘no place’’ of design and production: We can no longer geographically define the place where the design and production of the architectural object takes place, since it is coordinated by a non-spatial network. The end result may occupy a non-predetermined space, not defined by a grid as in previous architectural examples. The robotic system has the ability to move freely in space, expanding the field that can be structured. Robotic technology has allowed architecture to acquire a new “expanded” dimension, this time in a space-time context. Geographic boundaries are relativized, while distances are nullified through the internet and the possibility of direct physical production from electronic data: a process customized to, and easily perceived by, today’s cybernetically aware human. How dynamically can we integrate technology? In “Out of Control,” Kevin Kelly explores how technological systems have started to mimic the natural, implying the ability of technology to acquire the qualities of the living. One can perceive an interesting parallel with the way in which discoveries on the genetic sequence have allowed the achievement of synthetic life: the point of actual overlap between the technological and the natural system. Technological development today tends to program the living organism, programming beyond software and hardware, in a way similar to nature. Living objects made from living organisms would acquire unique properties that go beyond the form of their construction and enhance the symbiotic dimension of the produced object with the surrounding space. A scientific team at Harvard University came a step closer to this technology by printing live tissues attached to blood vessels, advancing research toward transplanting human organs grown from the individual’s own cells. This aspect of the natural, interacting with the technological, draws the interest of architects and designers with an anti-romantic look through the formalisms of modern science (fractals, DNA, atoms, the relationship between life and matter, topological geometries, moving forms). In this context, the concept of flow becomes fundamental, expressing the continuous mutation of information and involving architecture in current pioneering research, from biology to engineering and its most fertile cutting-edge areas, such as morphogenetics, biomechanics and biotechnology.202

Fig. 10.7 Completed ear and jaw bone structures printed with the Integrated Tissue-Organ Printing System

202. Saggio, A. (2012), GreenBodies: Give me an Ampoule and I will Live. Rethinking the Human in Technology Driven Architecture, pp. 41-57

102


IS BIG DATA THE NEW AI? LET’S SELF DESIGN OUR AVATAR SELVES [DESIGN]

A complete transformation of the way we live brings along huge implications for architecture and design. While a generation ago design concerned itself with its reception in the printed press, now the concern is its reception in social media. The ultimate goal is design going viral. The world of social media is the ultimate space for design, a space where design happens at high speed and by an unprecedented number of people. Social media redefine and restructure physical space. As with the arrival of mass media in the early 20th century, social media redraw once again what is public and what is private, what is inside and what is outside. The Internet and social media are fundamentally redefining the spaces in which we live, our relationship to objects and to each other. They establish a new form of urbanization, redefining the architecture of how we all live together. In the recent film ‘‘Her,’’ a moving depiction of life in the soft, uterine state that is a corollary of our new mobile technologies, the ‘‘her’’ in question is an operating system that turns out to be a more satisfying partner than a person. It seems to fulfil the expectations of Turing, who suggested in 1950 that by the end of the century we would ‘‘speak of machines thinking without expecting to be contradicted’’; that is, he predicted that we would be using an operating system that converses so convincingly that it seems to be human. The film addresses the very real prospect that synthesized voices may soon become so nuanced and lifelike, and artificial intelligence so sophisticated, that a satisfactory relationship between human and computer becomes plausible. The voice of Samantha, a highly evolved operating system, is the automated ideal object of desire for the lonely bachelor Theodore. He lies in bed with Her, chatting, arguing, making love and eventually breaking up, still in bed. All in all, the new media turn us all into inmates, constantly under surveillance, even as we celebrate endless connectivity. We have each become ‘‘a contemporary recluse,’’ as Hugh Hefner put it half a century ago. The fact that people now live in a vast global spider’s web of electronic communication connected to billions of people is a continuation of the human capacity to maximize connectivity and therefore the ability to design. Social media and electronic communication are a new form of urban life. This is not simply an expansion of design. It is a revolution in the capacity to be human and inhuman.

Fig. 10.8 Twitter visualized by flowing data, Open-source design

103



Fig. 10.9 Operating System ‘‘Samantha’’, Her the movie, 2013

104


IS BIG DATA THE NEW AI? LET’S SELF DESIGN OUR AVATAR SELVES [WE ]

we: our online-anonymous-avatar self! / the viral design of self /

Machines are increasingly asking us to demonstrate that we are human. The CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) asks you to confirm a single remarkable statement: ‘‘I am not a robot.’’ Your answer is not the point. The self-adjusting program determines that you are human by monitoring how you interact with the content of the page before and after you click. Everyday life echoes the existential dilemma posed in countless books, movies, and TV series where all-too-human machines cannot be distinguished from humans behaving like robots. The constant labor of proving that you are yourself, with passwords and biometrics offering a thin and fragile defense against the traumatic threat of identity theft, gives way to the labor of proving that you are not yet a machine.203
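A toy sketch can suggest how such monitoring might work; the heuristic below is invented for illustration and is not reCAPTCHA’s actual algorithm. The intuition: humans move a pointer with irregular timing, while scripts tend to be unnaturally fast and regular.

```python
from statistics import stdev

def humanness_score(events):
    """events: (t_seconds, x, y) pointer samples recorded before the click."""
    dts = [b[0] - a[0] for a, b in zip(events, events[1:])]
    jitter = stdev(dts) if len(dts) > 1 else 0.0   # timing irregularity
    total_time = events[-1][0] - events[0][0]
    # crude rule: very fast, perfectly regular interaction looks robotic
    return min(1.0, 10 * jitter + total_time / 5)

bot   = [(0.00, 0, 0), (0.01, 50, 50), (0.02, 100, 100)]
human = [(0.0, 0, 0), (0.4, 30, 12), (0.9, 71, 40), (1.7, 98, 97)]
print(humanness_score(bot), humanness_score(human))   # low vs high
```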

-And who am I? I am who I would like to be.. While the cell phone is turning into an enormous computational force in my pocket, now called a smartphone, the Internet is becoming part of my cognitive toolset. I can ask a question and have the answer in seconds. The brain no longer saves the kind of information that the phone is expected to store and provide. The phone itself is a remarkable condensation of computational power, able to absorb a galaxy of software. The cell phone is less an object and more an unimaginably vast ecology combining unprecedented flows of information and material. It is the extension of my brain, the attachment on my body. It is my shelter, my sense of security and orientation. It provides the interface for accessing the Internet, while being perhaps the most visible tip of the biggest human artifact of all: the global communication-computation system that literally covers the planet in an unthinkably massive material web of webs and plays a huge role in the lives both of those who have some access to it and of those who don’t, yet is only experienced as a kind of ghost. Spatially, I can be turned on and off at any time. The mobile phone is both a connection and a disconnection device.204 203. Colomina, B., Wigley, M. (2017), Are We Human? Notes on an Archaeology of Design, Ch. 16: Design in 2 Seconds. Lars Müller Publishers. 204. Colomina, B., Wigley, M. (2017), Are We Human? Notes on an Archaeology of Design, Ch. 12: Design as Perversion. Lars Müller Publishers.

105



There were no social media before 2000. It all started with Friends Reunited, followed by Friendster, MySpace, Facebook, YouTube, Twitter, WhatsApp, Instagram, and so on. There are social networks for practically everything! Through their multiple channels we not only communicate and collaborate with wider and wider groups; we refashion ourselves. Images, videos, texts, emojis, tweets, memes, comments, posts and reposts are deployed to construct a very precise image, not necessarily matching our real-life person: an avatar launched with seemingly independent thoughts, looks, and actions, a perfected self, perhaps the image of whom we would like to be, that becomes real online. In a strange mirror logic, the phone through which we interact with the world constructs a version of us that is the real us for that world. We then inhabit a kind of hybrid space between the virtual and the real.203 One of the paradoxes of the age of social media and the sharing economy is the extreme cultivation of the sense of self. Self-design has become the main responsibility and activity. As Boris Groys writes: ‘‘With the death of God, design became the medium of the soul, the revelation of the subject hidden inside the human body.’’ Thus design took on an ethical dimension it had not had previously. In design, ethics became aesthetics; it became form. Where religion once was, design has emerged. The modern subject now has a new obligation: the obligation to self-design, an aesthetic presentation as ethical subject. It is not simply that social media are a tool for self-design. Self-design has become media. This designing self is an always fragile work-in-progress, sacrificing all privacy to produce big data in return for a new illusion of independence.

Fig. 10.10 CAPTCHA aims to unmask web bots posing as humans by asking them to recognize words and shapes against a backdrop of noise. Illustration by Cristine Clark.

106


IS BIG DATA THE NEW AI? LET’S SELF DESIGN OUR AVATAR SELVES [WE ]

Fig. 10.11 CAPTCHA to prove you are not a robot!

Fig. 10.12 Instagram App to design ourselves!

Fig. 10.13 Twitter App to be everywhere through social media!

Fig. 10.14 Instagram and Facebook Apps - Social media.

107



EPILOGUE summing up: living in extreme times ‘‘what’s next?’’

What else can one do after a journey like that but look back and look forward?

-From science fiction to reality
-From computers to robotics
-From things that are fabricated to things that are farmed
-From things that are constructed to things that are grown
-From extraction to aggregation
-From pure logic to intuition
-From ‘‘being’’ to ‘‘becoming’’
-From the human body to the cyborg body...
-From restoration of the body to reconfiguration and further enhancement
-From Pistorius to Stelarc
-From physical existence to avatar/immaterial/electronic existence
-From ‘‘life as it is’’ to ‘‘life as it could be’’
-From inanimate things to things with a nervous system
-From craving obedience from our things to valuing autonomy
-From human-centered design to design as perversion

108


EPILOGUE

/ EVOLUTION: NATURAL SELECTION vs TECHNOLOGY? / In our world now, the primary mover of evolutionary change is culture, and its weaponized cousin, technology. Technology now does a lot, and it does it far faster, bolstering our physical skills, deepening our intellectual range, and allowing us to expand into new and more challenging environments. We are currently undergoing a constant augmentation. Our species is not the way it used to be. We are evolving through technology and technology is evolving through us, and we are both evolving at extremely fast rates. In the old days, the DNA days, if you had a pretty cool mutation it might spread through the human race in a hundred thousand years. Today, a new cell phone or a transformative manufacturing process can spread in a week. Our environment is evolving. We design it to be intelligent and fueled with the power of driving its own evolution. It is in a constant state of ‘‘becoming.’’ Nothing is fixed, everything is fluid. We and our designed environment are in a constant interrelationship, which in turn drives the evolution of the whole planet. Evolution now means not just the slow grind of natural selection spreading desirable genes, but everything we can do to amplify our powers and the powers of the things we make: a union of genes, culture, and technology. Undoubtedly, evolution is relentless; when the chance of survival can be increased, it finds a way to make a change, sometimes several different ways. The universal ambition of humanity still remains greater intelligence. No other attribute is so desirable; no other so useful, so varied in its applications, here and on any world we can imagine. Over hundreds of thousands of years, our genes have evolved to devote more and more resources to our brains, but the truth is, we can never be smart enough. We can always get better, and as long as we coevolve with technology, and as long as intelligence broadens its spectrum, there is no specific limit or specific goal to be attained. An intelligence augmenting to infinity.. Artificial Life is now able to find a place directly in the world, presenting itself as something self-evident. It is something living and therefore equal to reality: a new part of it that can be incorporated and create new gatherings; not taking part in some pre-existing continuum, but interacting with and, on occasion, redefining reality through emergent qualities. Intelligence no longer derives from anthropomorphically framed ideas, but from the emergence of self-modifying patterns in material systems or other life forms. Similarly, it is the source of many changes: the source of differentiation of previously known data, materials and intangibles. From this point comes the need to ask questions about this new element that gradually infiltrates and changes the world without shattering it. What is intelligence? It is a process in which things are made to relate to one another. Our individual intelligence is a full-bodied issue that involves all the cells, human and inhuman, and social, cultural and historical influences, enfolded into an ‘‘expanded notion of body’’ akin to the expanded field of design. Intelligence is a loopy process (selection, perception, connection-relatedness, assessment-effect..), a deep algorithmic sequencing. AI has taught us more about human intelligence: that it is not only a function of brain activity, but that general intelligence exists

109



by being embedded in a milieu, an environment, cultural, social and physical, that contains not only other human beings, but also the accumulated human knowledge and artifacts of the past, as well as nature itself. Intelligence literally means ‘‘to choose among what has been gathered.’’ This etymology casts the history of artificial intelligence into a vastly larger landscape, revealing it to be not some hubristic overreaching, but another natural stage in the flow of that most enduring, even noble, of human urges: the passion to gather, organize, and share knowledge so that we all benefit.
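Read as a loop, this description of intelligence (gather, select, relate, assess) can be sketched in a few lines of code. The fragment below is purely schematic: the functions and numbers are invented to show the circular structure, not to model cognition.

```python
def perceive(world):                 # sample the milieu
    return world["signal"]

def select(percepts):                # "choose among what has been gathered"
    return max(percepts)

def relate(choice, memory):          # connect the choice to prior gatherings
    memory.append(choice)
    return sum(memory) / len(memory)

def assess(effect, world):           # the effect feeds back into the milieu
    world["signal"] = [0.9 * s + 0.1 * effect for s in world["signal"]]

world, memory = {"signal": [0.2, 0.8, 0.5]}, []
for step in range(3):                # gather -> choose -> relate -> act, looped
    effect = relate(select(perceive(world)), memory)
    assess(effect, world)
    print(step, round(effect, 3), [round(s, 2) for s in world["signal"]])
```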

Fig. 11.1 Body evolution, beyond evolution, Owen Freeman.

110


EPILOGUE: LIVING IN EXTREME TIMES.. WHAT’S NEXT?

/ FROM PUTTING 2 BRICKS TOGETHER TO PUTTING 2 BITS TOGETHER / Architecture and design have undergone various transformations in an attempt to keep up with technological evolution and the subsequent intelligence explosion. Under this new consideration of things, architecture has to explore, experiment and take a position on the emerging outcome of the assemblages. It is called upon to look beyond the technicalization of things and the natural way in which man perceives them, and to move towards an understanding of the new ecology. To understand what emerges, we need to analyze how it emerges, not the result itself or the parts from which it emerged. For this reason, the understanding and subsequent smooth incorporation of new technologies into reality (architectural and otherwise) will result from understanding the interactions and relationships of direct influence with each discipline and with every element of reality, and, at the same time, from understanding the interactions with the emerging results of other assemblages of reality. We understand in this way the complexity of the reality we are called upon to study and to improve through a system involving chaos. The changes caused in both material and immaterial systems, such as culture, consciousness and societies, are explained in terms of becoming, in which all conditions are in turn subject to external relations.

/ FROM DESIGNING THINGS IN THE ENVIRONMENT, TO DESIGNING LIFE ITSELF / A weird and wonderful world is taking shape around us. Genetics, nanotechnology, synthetic biology, and neuroscience are all challenging our understanding of nature and suggesting new design possibilities at a level and scale never before possible. If we take just one area, biotechnology, and look more closely, we can see that a revolution is well underway. It is no longer about designing the things in the environment around us but about designing life itself, from micro-organisms to humans, yet as designers we devote very little time to reflecting on what this means. Driven by breakthroughs in genetics, animals are cloned and genetically modified to improve their food potential, human babies are designed to order and bred to provide organs and tissue for their siblings, meat is grown in labs from animal cells, an artist has developed bulletproof skin, and a scientist has claimed to have created the first synthetic life form. Being able to design life, both human and animal, is at the core of many of these developments. They have huge consequences for what it means to be human, how we relate to each other, our identity, our dreams, hopes, and fears. Of course, we have always been able to design nature through selective plant and animal breeding, but the difference now is the speed at which these changes manifest themselves and the extreme nature of the changes.

111



Fig. 11.2 Artificial heart design

Fig. 11.3 Seen through a microscope, a cube of synthetic tissue

Fig. 11.4 The body of pills. Drugs as an addition to body and brain.

112


EPILOGUE: LIVING IN EXTREME TIMES.. WHAT’S NEXT?

/ FROM LARGE-SCALE ROBOTIC ARCHITECTURE TO THE SYNTHESIS OF MATTER / From biotechnology to the internet of things, and from artificial intelligence and robotics to networked additive manufacturing and replication, architecture keeps being constantly parametrized and redefined. This material palette provides for the re-composition of the world at scales previously unthinkable, turning living tissue into a plastic medium and imbuing inorganic machines and landscapes with new sorts of practical intelligence. Again, the ultimate and most lasting value of the new material palette is not (only) in the things we can make with it, but in how it allows or forces us to re-adjudicate fundamental questions about who we are, what we are, where we are, when we are: how we are.

/ FROM BODY AS MEASURE TO THE UNSTABLE BODY / The multiple discoveries of previous centuries have caused us humans to see things differently: how we view ourselves, the relationship between mind and body. We cannot know what a body is capable of until we can define its boundaries, and along with the evolution of technology, these boundaries keep blurring. Body and brain keep expanding. Technology is nestling itself within us and between us; it has knowledge about us and can act just like us. Think of brain implants, artificial balancing organs and bio-cultured heart valves. Think of prosthetics, plastic surgery, drugs, gene editing. Everything adds onto the body and brain. Technology collects information about us; smart cameras are able to measure our heart rate by looking at our skin. Some technologies behave just like us: they acquire human traits, exhibit intelligent behavior or touch us with their outward appearances. Chatbots become more lifelike, computer games more realistic, and all kinds of apps are happy to encourage you when you are running or going on a diet. Algorithms know best what to recommend you to buy on Amazon. Your Facebook Newsfeed constantly informs you about your Facebook friends’ day. An explosion of information. You become information. You are not a set of brain data; you are a particular database whose contents are constantly changing, growing, and being updated. And you are not a physical body of atoms; you are a set of instructions on how to deal with and organize the atoms that bump into you.

113



/ FROM [MACHINE VS MAN] TO [MAN-MACHINE HYBRID] - THE SYMBIOSIS / WHO IS THE ROBOT AFTER ALL? We are currently undergoing a constant hybridization of body and machine. There is no doubt that we are investing technologically in the human body. The effect is obviously an enhancement of the technological aspect of the body, making it ever harder to retreat from the intermingling and confusion of the natural and the artificial. The symbiont, the individual-machine that lives both a human and an un-human life, is the symbol of the new structure of social relations. As information technologies become more and more present in our everyday life, human relationships and interactions become an integral part of the reality to which building aggregation is called upon to adapt. Human behaviors generate spatial associations. Movement and gestures construct spatiality while reflecting individualities, users’ relationships with space, and the interactions between them, through a set of cultural, social, functional, emotional and other information as a whole, as one continuum. Through this idea, we realize that man can function actively in a technology-driven architecture through the relationships that develop within it and through it.

Fig. 11.5 Kevin Warwick - the Captain Cyborg.

114


EPILOGUE: LIVING IN EXTREME TIMES.. WHAT’S NEXT?

Fig. 11.6 Stelarc, The Third Hand, Cyborg, 1980

115



Fig. 11.7 Robocop scene - Transhumanism.

Fig. 11.8 Oscar Pistorius, London, 2012, Paralympian ‘‘Bladerunner.’’

116


EPILOGUE: LIVING IN EXTREME TIMES.. WHAT’S NEXT?

/ FROM PISTORIUS TO STELARC - CYBORGS: HUMANS MERGED WITH MACHINES; A HYBRID OF SORTS / What was once the subject of far-out science fiction initially entered reality as a medical tool for restoration, ending up as a tool for further cybernetic enhancement beyond the biological self. From implants to robotics, there is a whole host of emerging technologies that aim to turn people into, technically, cyborgs. And the shift is huge! From artificial-limb technologies, the ‘blades’ of Pistorius, to a company implanting its employees with microchips so they can open doors with the wave of a hand instead of a key. From Harbisson’s antenna to Stelarc, who seeks to amplify his body, not to ‘fix’ any disability, but to further augment its capabilities. He sees the body as an object for possible redesign; as an evolutionary architecture for experimentation with alternate anatomies, beyond any biological boundary. Material reality and society are no doubt being reconceived, but so is the self. Contemporary science and the technologically mediated world, under a more socio-cultural frame, seek to develop a ‘‘panhumanity’’ (Braidotti, 2013) that indicates a global sense of interconnection among all humans, a new global proximity with new forms of wanted and unwanted intimacy. In this way the human organism is an in-between, plugged into and connected to a variety of possible sources and forces, like a machine. The subject is an evolutionary engine, occupied by its own embodied temporality. We are progressively detaching ourselves from ourselves (Smith & Morra, 2006). The Body As An Evolutionary Machine...

Fig. 11.9 Stelarc, The body as an instrument, The Ping Body.

117



/ FROM BRAIN AS A TURING MACHINE TO BRAIN AS WORLD WIDE WEB /

‘‘In my own lifetime I have seen popular ‘‘complexity’’ metaphors for the brain evolve. When I was a young child, the brain was likened to an electromagnetic telephone switching network. Then it became an electronic digital computer. Then a massively parallel digital computer. Could it now be just like the world wide web?’’ (Rolf Pfeifer and Josh Bongard). Even otherwise serious scientists have become enamored of their own complexity metaphors, declaring, for instance, that quantum phenomena and the brain are both so complex that they must be about the same thing. The process continues, and it is picking up speed. Beyond the body-machine symbiosis, some of our best new tools adapt to individual brains during use, thus speeding up the process of mutual accommodation beyond measure. Human thought is biologically and technologically poised to explore cognitive spaces that would remain forever beyond the reach of non-cyborg animals. Our technologically enhanced minds are barely, if at all, tethered to the ancestral realm. The most significant twenty-first-century frontiers are those not of space but of the mind. Our most significant technologies are those that allow our thoughts to go where no animal thoughts have gone before. It is our shape-shifter minds, not our space-roving bodies, that will most fully express our deep cyborg nature. Thanks to electronic devices, human perception is extended. This could determine the future direction of mankind’s evolution, from Homo sapiens to hybrid sapiens. These devices have the power to transform our sense of self, of location, of embodiment, and of our own mental capacities. They impact who, what and where we are.

Fig. 11.10 Black mirror, ‘‘Playtest,’’ an upgraded brain and an augmented reality.

118


EPILOGUE: LIVING IN EXTREME TIMES.. WHAT’S NEXT?

Fig. 11.11 Human Cyborg and the Futurist movement, Mind-controlled drone sensing alpha and beta brainwaves. Tiana Sinclair.

119



/ FROM ICARUS AND DAEDALUS TO.. INEVITABLE TRANSHUMANISM? / Transhumanism is a movement that aims to understand what makes one human, and how we can surpass our natural limitations. It holds that there is an imperative to enhance our capabilities, and that the limitations of those abilities can be overcome. More importantly, it holds that technology and science are the keys to overcoming them. Technologies and medicines that address these limitations are constantly being released, developed, and improved upon. But what does this mean for our society? Are we all heading towards becoming more than human? If we are truly driven to artificially evolve past our biological capacities, then some questions should be asked. Firstly, if we start transforming ourselves into something superior, what rights will these enhanced creatures claim, and what rights will they possess compared to those left behind? Then, what will that world look like? Finally, what will the cost be?

Fig. 11.12 Steve Mann, the father of wearable computing. Augmented Reality glasses.

Fig. 11.13 Neil Harbisson-cyborg artist, has an antenna mounted in the back of his head that lets him hear the sound of colours.

120


EPILOGUE: LIVING IN EXTREME TIMES.. WHAT’S NEXT?

/ FROM MERE RESTORATION TO THE WIDE-OPEN FRONTIER OF AUGMENTATION / The frontier ranges from seeing or even hearing wavelengths of light outside the usual visual spectrum, to accelerating the pace of learning so we can acquire new cognitive skills more quickly, to gaining greater memory capacity. By surprising us with their increasingly rich perspectives, our machines are starting to trigger in us new ways of thinking and imagining - even new ways of dreaming. Will this symbiotic transformation lead to problems? For sure. The meanings of individual identity, personal agency and authenticity will all require recalibration. We might even have to rethink objective reality. There will be abuses and mistakes. But here is the kicker: unlike Darwin’s version, this act of evolution gets to choose its trajectory. We get to define what we want to become, and in so doing not just remake ourselves but reveal ourselves. Doing so wisely will demand deeply considered answers to some profound questions. If we know that our thoughts and creative efforts are being processed by a computer for possible improvement, iteration and accidental public release, will we be as open with others or even with ourselves? Will we take the risk of thinking impossible thoughts - the necessary genesis of every great advance? What societal norms will develop as people consider changing their personalities or ranges of perception? The drive toward biotechnological merger is deep within us; it is the direct expression of much of what is most characteristic of the human species. The task is to merge gracefully, to merge in ways that are virtuous, that bring us closer to one another, make us more tolerant, enhance understanding, celebrate embodiment, and encourage mutual respect. If we are to succeed in this important task, we must first understand ourselves and our complex relations with the technologies that surround us. We must recognize that, in a very deep sense, we were always hybrid beings, joint products of our biological nature and multilayered linguistic, cultural, and technological webs. Only then can we confront, without fear or prejudice, the specific demons in our cyborg closets. Only then can we actively structure the kinds of world, technology, and culture that will build the kinds of people we choose to be.

Fig. 11.14 Aimee Mullins, prosthetic legs. Her disability as an example of posthuman progress.


/ FROM EMBODIMENT TO DISEMBODIMENT? / With so much emphasis on information transmission and digital media, will the physical body itself come to seem unnecessary? Respected scientists such as Hans Moravec speak enthusiastically of a future world in which our mental structures are somehow preserved as potentially immortal patterns of information, capable of being copied from one electronic storage medium to another, becoming our undying partners. In the reducing heat of such a vision, the human body (in fact, any body, biological or otherwise) quickly begins to seem disposable.

Fig. 11.15 Stelarc, Walking Head, pre-programmed motions, 2006.

Fig. 11.16 Andrew Vladimirov, brain hacker, investigating brainwaves and consciousness in pursuit of enhancement. The futurist movement.


/ FROM TOOLS THAT ‘‘HELP US DO’’ TO THINGS THAT ‘‘HELP US THINK’’ TO THINGS THAT ‘‘HELP US BE’’ / Technology has long served as a window into our tangled inner nature. With every advance - from the earliest bone tools and stone hammers to today’s jet engines and social media - our technologies have revealed and amplified our most creative and destructive sides. From the moment we first picked up a hammer, we not only extended ourselves with tools but also began to interlace ourselves into systemic relationships between things. Tool use is where the Internet of Things and People really started. We can no longer discuss human society without describing the intrinsic relationships between people, things and systems; we must take into account all tools, systems and environments. For a long time, while technology was characterised primarily as ‘‘tools to help us do’’, the fear was that machines would turn us into machines. More recently have come ‘‘tools to help us think’’, and with them the opposite fear: that machines might soon grow smarter than us. Today a third wave of technological innovation is starting, featuring machines that do not just help us do or think. They have the potential to help us be.

Fig. 11.17 Sarotis Project, The future of wearable technology, The new sense.


/ FROM TECHNOANIMISM TO TECHNOFETISHISM / ‘Techno-animism’ exhibits a ‘polymorphous perversity’ that resolutely ignores the boundaries between human, animal, spiritual and mechanical beings. The more computed our environment becomes, the further it returns us to our primitive past; it boomerangs us right back to an animistic world view in which everything has a spirit - rocks, plants, animals and men. As all the objects in the world become more responsive, things that were once regarded as dumb become addressable, and that universal addressability - a network of things - creates an enchanted landscape. According to Descola, while naturalism ‘produces actual hybrids of nature and culture which it cannot conceptualize’, animism ‘conceptualizes a continuity between humans and non-humans which it can produce only metaphorically’, that is, by way of ritual or, indeed, literature (Descola, 1996: 89). It is obvious that our era is calling for a new normal: a tension within which new concepts of the human emerge. Design perversions, however, are not simply about the all-too-human pathologies of designers; they are about the construction of the human by design. Perversion comes from the Latin pervertere, ‘‘to turn away’’ - that is, a turning away from normality. According to Rudofsky, ‘‘man has always been bored with his anatomy. He considers it only a point of departure for his creations.’’ Here emerges a concept of fetishism towards technology. A fetish is a human-made object that holds power over others; essentially, fetishism is the attribution of inherent value or powers to an object. Both Marxian and mainstream thought represent technological objects as empowered by their intrinsic properties, which derive from human ingenuity and tend to progress over time. So where do we draw the boundaries between human and non-human?

Fig. 11.18 Black Mirror, ‘‘The Entire History of You’’. A digital implant that records our visual and auditory experiences, allowing us to replay our memories.


/ FUTURE EVOLUTION OF THINGS / Does an implanted machine under our skull, or the mechanisms powering an android, really differ from our new smartphones or fitness trackers? Virtual reality headsets are among the hottest-selling gamer toys. Our cars are our feet, our calculators are our minds, and Google is our memory. Our lives are now only partly biological, with no clear split between the organic and the technological, the carbon and the silicon. Like any other species, we are the product of millions of years of evolution. Now we are taking matters into our own hands. We may not know yet where we are going, but we have already left where we have been.. So where is this taking us? If the augmented age means we are going to be augmented cognitively, physically and perceptually, what is this wonderland going to be like? Thanks to our augmented capabilities, our world will change dramatically: a world of more variety, connectedness, dynamism, complexity, adaptability and beauty. The evolution of the world is a matter of design, and designers have to take that role seriously. Everything changes faster and faster, and in so many different directions that it can seem uncontrollable, as if it is going nowhere. To speculate on the future of evolution, and to grasp its humanitarian and social dimension, is to chart the transition from the dictatorial exploitation of the power these new systems provide towards their social integration and the creation of a new contemporary world. All of this must be viewed through the lens of each science into which the systems are integrated, but also in a more general context: within the reality to which they belong - an ecological reality of interacting systems, in which every addition and change sets off a series of repercussions, and whose constituent encounters are irreducible and constantly evolving. Artificially intelligent systems have been defined as systems with the potential to turn the desired into the feasible: to connect the material world with the fictional and to produce what was born in the imagination. Our bodies, our brains, and the machines around us may all one day merge, as Kurzweil predicts, into a single massive communal intelligence. But if there is one thing natural evolution has shown, it is that there are many paths to the same goal. We are the animal that tinkers ceaselessly with its own limitations. The evolution of evolution travels multiple parallel roads. Whatever marvellous skills CRISPR might provide us ten years from now, people will always keep searching for more..

Fig. 11.19 Black Mirror, ‘‘White Christmas’’. Implanting a device that essentially copies a person’s personality. Technological horror tales.


‘‘we are not at the endpoint of evolution..’’ ‘‘let’s enhance ourselves!’’

Fig. 11.20 Stelarc, Third Hand. Transhumanism and evolution. The merging of man and machine is closer than many care to admit.



bibliography

Asimov, I. (1950), ‘‘Runaround’’, in I, Robot (The Isaac Asimov Collection ed.), New York: Doubleday, pp. 40.
Berlinski, D. (2000), The Advent of the Algorithm, Harcourt Books.
Bier, H. (2012), Reconfigurable Architecture, in Rethinking the Human in Technology-Driven Architecture, pp. 207-211.
Bostrom, N. (2014), Superintelligence: Paths, Dangers, Strategies, Oxford University Press.
Braidotti, R. (2013), The Posthuman, Cambridge: Polity Press.
Brynjolfsson, E., McAfee, A. (2014), The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies, New York: W.W. Norton & Company Inc.
Butler, J. (1999), Gender Trouble: Feminism and the Subversion of Identity, US: Routledge, pp. 176.
Chu, K. (July/August 2006), Metaphysics of Genetic Architecture and Computation, in Programming Cultures: Art and Architecture in the Age of Software | Architectural Design magazine, Vol. 76, No. 4, pp. 39-40.
Clark, A. (2003), Natural-Born Cyborgs: Minds, Technologies, and the Future of Human Intelligence, Oxford University Press.
Colomina, B., Wigley, M. (2017), Are We Human? Notes on an Archaeology of Design, Lars Müller Publishers.
Crevier, D. (1993), AI: The Tumultuous History of the Search for Artificial Intelligence, New York, NY: BasicBooks.
De Landa, M. (1991), War in the Age of Intelligent Machines, New York: Zone Books.
De Silva, C. (2000), Intelligent Machines: Myths and Realities, London, UK: The Book Depository US, pp. 1-2.
Dunne, A., Raby, F. (2013), Speculative Everything: Design, Fiction, and Social Dreaming, United States: The MIT Press.
Fox, M. (2010), Catching up with the Past: A Small Contribution to a Long History of Interactive Environments, Delft: Techne Press.
Frazer, J. (1995), An Evolutionary Architecture, London: Architectural Association Publications.
Frazer, J. (2011), A Natural Model for Architecture, in Computational Design Thinking | AD Reader, Vol. 74, No. 3, John Wiley and Sons Ltd, pp. 153.
Gazzaniga, M. S. (1995), ‘‘Preface’’, in The Cognitive Neurosciences, Cambridge, Mass: MIT Press.
Gershenfeld, N. (2011), How to Make Almost Anything: The Digital Fabrication Revolution.
Gramazio, F., Köhler, M. and Willmann, J. (2014), Authoring Robotic Processes, in Architectural Design No 229, Made by Robots: Challenging Architecture at a Larger Scale, pp. 14-22.
Habermas, J. (2003), The Future of Human Nature, Polity.
Haque, U. (2007), The Architectural Relevance of Gordon Pask, John Wiley & Sons Ltd / University of Vienna, Austria, pp. 54-61.
Jones, A. (2001), The Body and Technology, Art Journal, Vol. 60, No. 1 (Spring 2001), College Art Association, pp. 20.
Jones, A., Batchen, G., Gonzales-Day, K., Phelan, P., Ross, C., Gomez-Pena, G., Sifuentes, R. (2001), The Body and Technology, Art Journal, Vol. 60, No. 1, College Art Association, pp. 28.
Katz, E. J. (2002), Machines that Become Us: The Social Context of Personal Communication Technology, New Brunswick, NJ: Transaction Publishers, pp. 72.
Kelly, K. (1994), Out of Control: The New Biology of Machines, Social Systems and the Economic World, Basic Books.
Kelly, K. (2011), What Technology Wants, 1st ed., UK: Penguin Books.
Klima, I. (2002), Karel Čapek: Life and Work, Catbird Press, pp. 78-80.
Köhler, M., Gramazio, F. and Willmann, J. (2014), The Robotic Touch: How Robots Change Architecture, Zürich: Park Books.

Kurzweil, R. (1990), The Age of Intelligent Machines, Ch. 6: Electronic Roots, USA: MIT Press.
Kurzweil, R. (1999), The Age of Spiritual Machines: When Computers Exceed Human Intelligence, Viking Press.
Kurzweil, R. (2005), The Singularity is Near, Viking Press.
Kwiatkowska, A. (2012), Architectural Interfaces of Hybrid Humans, in Rethinking the Human in Technology-Driven Architecture, pp. 363-371.
Lakoff, G. (1987), Women, Fire, and Dangerous Things: What Categories Reveal About the Mind, University of Chicago Press.
Langton, C. G. (1992), Artificial Life II (Interactions between learning and evolution), pp. 487-507, D. H. Ackley and M. L. Littman.
Larson, M. (1993), Behind the Postmodern Facade: Architectural Change in Late Twentieth-Century America, University of California Press, pp. 25.
Lim, J., Gramazio, F. and Köhler, M. (2013), A Software Environment for Designing Through Robotic Fabrication, in Open Systems: Proceedings of the 18th International Conference on Computer-Aided Architectural Design Research in Asia (CAADRIA), Hong Kong, pp. 45-54.
McCarthy, J. et al. (Aug. 31, 1955), Dartmouth Artificial Intelligence Project Proposal.
McCorduck, P. (2004), Machines Who Think (2nd ed.), Natick, MA: A. K. Peters.
Moravec, H. (1988), Mind Children: The Future of Robot and Human Intelligence, Harvard University Press, pp. 14.
Negroponte, N. (1973), The Architecture Machine: Toward a More Human Environment, US: The MIT Press.
Negroponte, N. (1995), Being Digital, United States: Alfred A. Knopf, Inc.
Nocks, L. (2007), The Robot: The Life Story of a Technology, Westport, CT: Greenwood Publishing Group.
Nourbakhsh, I. R. (2013), Robot Futures, United States: The MIT Press.
Olsen, S. (10 May 2004), Newsmaker: Google’s man behind the curtain, CNET.
Oosterhuis, K. (2003), Hyperbodies: Towards an E-motive Architecture, Switzerland: Birkhäuser - Publishers for Architecture.
Oosterhuis, K., Cook, P. (2002), Architecture Goes Wild, E-motive Architecture, Uitgeverij.
Parisi, L. (2013), Contagious Architecture: Computation, Aesthetics, and Space, United States: The MIT Press.

Pask, G. (1968), An Approach to Cybernetics, Hutchinson and Co.
Pask, G. (1969), The Architectural Relevance of Cybernetics, Architectural Design.
Pavlik, J. V. (2010), The Myths of Technology: Innovation and Inequality (Digital Formations), Oxford.
Pfeifer, R., Bongard, J. (2007), How the Body Shapes the Way We Think: A New View of Intelligence, A Bradford Book, The MIT Press.
Pickering, A. (2010), The Cybernetic Brain: Sketches of Another Future, University of Chicago Press.
Picon, A. (2015), Smart Cities: A Spatialised Intelligence | Architectural Design Primer, Wiley, pp. 95.
Picon, A. (2014), Made by Robots: Challenging Architecture at a Larger Scale | Architectural Design No 229.
Popper, K. (1978), Of Clouds and Clocks: An Approach to the Problem of Rationality and the Freedom of Man, in Objective Knowledge: An Evolutionary Approach, pp. 224.
Reiser, J. and Umemoto, N. (2006), Atlas of Novel Tectonics, 1st ed., Princeton: Princeton Architectural Press.
Rid, T. (2016), Rise of the Machines: A Cybernetic History, W. W. Norton & Company.
Rocker, M. I. (July/August 2006), When Code Matters, in Programming Cultures: Art and Architecture in the Age of Software | Architectural Design magazine, Vol. 76, No. 4.
Russell, S. J., Norvig, P. (2003), Artificial Intelligence: A Modern Approach (2nd ed.), Upper Saddle River, New Jersey: Prentice Hall.
Saggio, A. (2012), GreenBodies: Give me an Ampoule and I will Live, in Rethinking the Human in Technology-Driven Architecture, pp. 41-57.
Kwinter, S. (2008), Far from Equilibrium: Essays on Technology and Design Culture, ed. Cynthia Davidson, Barcelona: Actar, pp. 51.
Schindler, R. M. (1912), Manifesto for ‘‘modern architecture’’, Vienna.
Schneider, S. (2009), Science Fiction and Philosophy: From Time Travel to Superintelligence, Cyborgs Unplugged, John Wiley & Sons, pp. 171.
Shelley, M. (1818), Frankenstein; or, The Modern Prometheus, United Kingdom: Lackington, Hughes, Harding, Mavor & Jones.
Steenson, M. W. (2010), Artificial Intelligence, Architectural Intelligence: The Computer in Architecture 1960-80 | Dissertation proposal.
Stiegler, B. (1998), Technics and Time: The Fault of Epimetheus, Stanford University Press.

Sykes, K. A. (March 2010), Constructing a New Agenda: Architectural Theory 1993-2009, Princeton Architectural Press, pp. 429.
Vaccari, A. (2003), The Body Made Machine: On the History and Application of a Metaphor | Presented at The Flesh Made Text: Bodies, Theories, Cultures in the Post-Millennial Era, School of English, Aristotle University of Thessaloniki.
Warwick, K. (2011), Artificial Intelligence: The Basics, New York: Routledge.
Wiener, N. (1948), Cybernetics, or Control and Communication in the Animal and the Machine, Cambridge: MIT Press.
Wiener, N. (1989), The Human Use of Human Beings, London: Free Association Books.
Witt, A. (2010), A Machine Epistemology in Architecture: Encapsulated Knowledge and the Instrumentation of Design, Transcript Verlag.
Wolfram, S. (2002), A New Kind of Science, ‘Principle of Computational Equivalence’, US: Wolfram Media.
Yiannoudes, S. (2011), The Archigram Vision in the Context of Intelligent Environments and its Current Potential.
Yiannoudes, S. (2016), Architecture and Adaptation: From Cybernetics to Tangible Computing, Routledge, pp. 2.

Online References

BBC | iWonder (2015), AI: 15 key moments in the story of Artificial Intelligence. [Online] Available: http://www.bbc.co.uk/timelines/zq376fr [Accessed 26 May 2017]
Brooks, R. A. (1990), ‘‘Elephants Don’t Play Chess’’ | Robotics and Autonomous Systems. [Online] Available: http://people.csail.mit.edu/brooks/papers/elephants.pdf [Accessed 1 July 2017]
Chu, K. (2004), Archaeology of the Future, in Peter Eisenman, ‘‘Barefoot on White-Hot Walls’’, Editorial 53-54, MAK Wien. [Online] Available: http://ehituskunst.ee/karl-chu-archaeology-of-the-future/?lang=en [Accessed 25 May 2017]
Clynes, M. E., and Kline, N. S. (1960), Cyborgs and Space, Astronautics, New York Times. [Online] Available: https://partners.nytimes.com/library/cyber/surf/022697surf-cyborg.html [Accessed 26 June 2017]
De Landa, M. (2009), Immanent Patterns of Becoming - YouTube. [Online] Available: https://www.youtube.com/watch?v=jKqOic0kx4U [Accessed 30 July 2017]
Delahoyde, M. (2001), Karel Capek, R.U.R., London Saturday Review interview, Washington State University. [Online] Available: http://public.wsu.edu/~delahoyd/sf/r.u.r.html [Accessed 18 June 2017]
Enhanced Humans, Futurism. [Online] Available: https://futurism.com/enhancedhumans/ [Accessed 30 July 2017]
Erlic, M. (2016), What is the Adjacent Possible, Medium. [Online] Available: https://medium.com/@Santafebound/what-is-the-adjacent-possible-17680e4d1198 [Accessed 3 August 2017]
Fab Lab Foundation. [Online] Available: http://www.fabfoundation.org/index.php/about-fab-foundation/index.html [Last accessed 20 August 2017]
History of Formal Reasoning. [Online] Available: http://aiinformatique.altervista.org/category/history-and-formal-reasoning/?doing_wp_cron=1503348462.1458170413970947265625 [Accessed 25 May 2017]
Hope, J. (2015), 7 phases of the History of Artificial Intelligence. [Online] Available: http://www.historyextra.com/article/ancient-greece/7-phases-history-artificial-intelligence [Accessed 23 May 2017]
HP, Intel (December 2014), A History of Technology in the Architecture Office, Architizer. [Online] Available: https://architizer.com/blog/a-history-of-technology-in-the-architecture-office/ [Accessed 15 July 2017]
Hsu, S. (2015), Don’t Worry, Smart Machines Will Take Us With Them: Why human intelligence and AI will co-evolve | Nautilus. [Online] Available: http://nautil.us/issue/28/2050/dont-worry-smartmachines-will-take-us-with-them [Accessed 27 May 2017]
Industrial Revolution | HISTORY. [Online] Available: http://www.history.com/topics/industrial-revolution [Accessed 24 May 2017]
Interview with Stelarc. [Online] Available: http://econtact.ca/14_2/donnarumma_stelarc.html [Accessed 20 August 2017]
Kelly, K. (2016), The Inevitable: Understanding the 12 technological forces that will shape our future | The future of tech really is an Uber for everything. [Online] Available: https://qz.com/722101/the-future-of-tech-really-is-an-uber-for-everything/ [Accessed 26 June 2017]
Kohler, M. (2014), Matthias Kohler: The Design of Robotic Fabricated Architecture | MIT Video. [Online] Available: http://video.mit.edu/watch/goldstein-lecture-13947/ [Accessed 21 July 2017]
Mindell, D., The Science and Technology of World War II | The National Museum of World War II. [Online] Available: https://www.nationalww2museum.org/sites/default/files/2017-07/s-t-teacherand-student.pdf [Accessed 25 May 2017]
Lewis, M. E., Cady, Dr. (2007), Frankenstein and the Industrial Revolution: ‘‘Powerful Engine’’. [Online] Available: https://sites.google.com/site/monikalewis02/frankensteinandtheindustrialrevolution [Accessed 23 May 2017]
Project One - Systems, Networks and Collaboration, SEEK. [Online] Available: http://norgacs1projectone.blogspot.com.cy/2009/02/seek.html [Accessed 30 June 2017]
Prometheus, Myths Encyclopedia. [Online] Available: http://www.mythencyclopedia.com/Pa-Pr/Prometheus.html#ixzz4rI4EkbO7 [Accessed 25 August 2017]
Rinie van Est (12 April 2015), Intimate Technology: the Battle for Our Body and Behaviour. [Online] Available: https://www.nextnature.net/2015/04/intimate-technology/ [Accessed 26 June 2017]
Rotman, D. (2000), Intelligent Machines: Molecular Computing, MIT Technology Review. [Online] Available: https://www.technologyreview.com/s/400728/molecular-computing/ [Accessed 25 June 2017]
Rubin, C. T. (2003), Artificial Intelligence and Human Nature | The New Atlantis. [Online] Available: http://www.thenewatlantis.com/publications/artificial-intelligence-and-human-nature [Accessed 26 June 2017]
Sadler, S. (2005), Archigram: Architecture without Architecture, MIT Press. [Online] Available: http://www.arch.uth.gr/uploads/courses/443/files/Sokratis_Yannoudes_The_Archigram_vision_in_the_context_of_Intelligent_Environments_and_its_current_potential.pdf [Accessed 27 June 2017]
The Industrial Revolution | Encyclopedia Britannica. [Online] Available: https://www.britannica.com/technology/history-of-technology/The-Industrial-Revolution-1750-1900 [Accessed 24 May 2017]
Urban, T. (2015), The AI Revolution: The Road to Superintelligence. [Online] Available: https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html [Accessed 15 June 2017]
Vegter, W. (2007), Karel Capek: ‘‘Mummy, where do robots come from?’’ [Online] Available: http://wvegter.hivemind.net/abacus/CyberHeroes/Capek.htm [Accessed 25 May 2017]
‘‘Why AI would be nothing without Big Data’’. [Online] Available: https://www.forbes.com/sites/bernardmarr/2017/06/09/why-ai-would-be-nothing-without-big-data/ [Accessed 25 August 2017]


Photo Credits

Fig. 1.1: The fire-theft, Valeriano, Hieroglyphica, 1586, http://unurthed.com/2008/01/29/campbell-on-the-heros-deed/
Fig. 3.1: Frankenstein, the movie, 1931, http://pastors.com/created-monster/
Fig. 3.2: Mary Shelley, Frankenstein or the Modern Prometheus, book cover, https://deimos-remus.deviantart.com/art/Frankenstein-Or-the-modern-Prometheus-459708359
Fig. 3.3: Bernie Wrightson’s comics adaptation of the novel Frankenstein, https://listas.20minutos.es/lista/mejores-novelas-para-iniciarse-en-la-lectura-424361/
Fig. 3.4: First stone tools, human evolution, https://humanevolutionb36.weebly.com/cultural-evolution.html
Fig. 3.5: Illustration, assembly line, Industrial Revolution, https://www.dakkadakka.com/dakkaforum/posts/list/740516.page
Fig. 3.6: Industrial Revolution technology, http://technologyscince.blogspot.gr/2013/02/history-of-technology.html
Fig. 3.7: Homo naledi hands, human evolution, https://steemit.com/steemiteducation/@amity123/human-evolution-and-the-discovery-of-new-human-species-human-naledi
Fig. 3.8: Samuel Butler, The Book of the Machines, https://utopiaordystopia.com/2014/05/25/the-kingdom-of-machines/
Fig. 4.1: Karel Čapek, Rossum’s Universal Robots, R.U.R., http://ionai.srogershome.com/2017/04/09/artificial-intelligence-is-everywhere/
Fig. 4.2: Rossum’s Universal Robots (R.U.R.), BBC TV, 1938, https://i.pinimg.com/originals/54/af/34/54af34054cec87243d02278f7cf89030.jpg
Fig. 4.3: Isaac Asimov, I, Robot, https://www.brainpickings.org/2017/11/13/mind-body-ted-ed/
Fig. 4.4: Alan Turing, the Enigma machine, https://paquetteetpangloss.wordpress.com/2017/01/13/la-machine-enigma/
Fig. 4.5: Grey Walter, Elmer and Elsie robots, the tortoise robots, 1951, https://www.timetoast.com/timelines/technology-history-533b1f8b-42a4-44bb-99f5-f8b74eb2cad1
Fig. 4.6: Neufert, Hausbaumaschine, https://niepokoje.wordpress.com/2014/09/23/maszyna-do-budowania/bol5/
Fig. 4.7: Body proportions - the golden ratio, Bauentwurfslehre by Ernst Neufert, 1936, http://www.designanduniverse.com/articles/golden_ratio1.php
Fig. 4.8: WWI amputees learn to use their artificial limbs, https://medicsinww1.wordpress.com/loss-of-limb-2/
Fig. 5.1: Electronic Numerical Integrator and Computer (ENIAC), 1946, http://luxfon.com/historical/24462-august-1947-man-working-with-early-computer.html
Fig. 5.2: Leonardo da Vinci, Vitruvian Man, 1490, https://www.vectorstock.com/royalty-free-vector/the-vitruvian-man-vector-94736
Fig. 6.1: Stanley Kubrick, 2001: A Space Odyssey, 1968, http://inthemorningmag.com/23315-2/
Fig. 6.2: Stanley Kubrick, 2001: A Space Odyssey, HAL 9000, 1968, http://www.alexhorovitz.com/HAL_9000_AI/
Fig. 6.3: Charles Rosen, Shakey the robot, 1966-1972, https://www.sri.com/work/timeline-innovation/timeline.php?timeline=computing-digital#!&innovation=shakey-the-robot
Fig. 6.4: Wabot-1, humanoid robot, 1970-1973, http://www.humanoid.waseda.ac.jp/booklet/kato_2.html
Fig. 6.5: George Devol, Unimate robot, 1961, https://www.robotics.org/joseph-engelberger/unimate.cfm
Fig. 6.6: Geraldine Richelson, The Star Wars Storybook, 1978, https://www.goodreads.com/book/show/318188.The_Star_Wars_Storybook#
Fig. 6.7: Kline and Clynes, The Cyborg, 1960, http://cyberneticzoo.com/bionics/1960-cyborg-kline-and-clynes-american/
Fig. 6.8: Kline and Clynes, The Cyborg: Man Remade To Live In Space, Life magazine, 1960, http://includemeout2.blogspot.gr/2013/05/the-cyborg-man-remade-to-live-in-space.html
Fig. 6.9: Ron Herron, Walking City, 1964, http://indexgrafik.fr/archigram/
Fig. 6.10: Peter Cook, Plug-in City, 1964, https://www.archdaily.com/399329/ad-classics-theplug-in-city-peter-cook-archigram
Fig. 6.11: Archigram, Peter Cook, Instant City, 1969, http://www.frac-centre.fr/index-des-auteurs/rub/rubprojets-64.html?authID=44&ensembleID=113&oeuvreID=536
Fig. 6.12: Archigram magazine 9 1/2, 1974, https://monoskop.org/Archigram
Fig. 7.1: William Gibson, Neuromancer, https://www.heyuguys.com/deadpool-tim-miller-fox-neuromancer/nueromancer/
Fig. 7.2: Ed Roberts, first personal computer, 1975, https://www.gettyimages.com/event/yrs-since-ibm-launched-first-personal-computer-104582913
Fig. 7.3: Gordon Pask, MusiColour machine, 1953, http://iasl.uni-muenchen.de/links/GCAII.3e.html
Fig. 7.4: J. Frazer, An Evolutionary Architecture, 1990, https://www.scribd.com/document/39485615/An-Evolutionary-Architecture
Fig. 7.5: Nicholas Negroponte, ‘‘Seek’’, Software Information Technology, 1970, http://www.fondation-langlois.org/html/e/page.php?NumPage=541
Fig. 7.6: Cellphone arrival, Motorola DynaTAC 8000X, https://admin.mashable.com/wp-content/uploads/2014/03/dynatac-promo.jpg
Fig. 8.1, 8.2: The Matrix, brain hacking, https://www.extremetech.com/extreme/254816-eeg-virtual-reality-matrix-just-around-corner
Fig. 8.3: Karl Chu, Genetic Architecture, Possible Worlds, 2010, http://kinch-d.com/Consulting-Possible-World
Fig. 8.4: Stelarc, Third Hand, 1980-2002, http://artetecnologia-uerj.blogspot.gr/2013/07/stelarc-e-orlan-o-corpo-como-hospedeiro.html
Fig. 9.1: Stelarc, Ping Body, 1996, https://mediascapes2010floratsai.wordpress.com/2010/01/26/stelarc/
Fig. 9.2: World Wide Web map, 2011, https://www.andrewhazlett.com/culture-disrupted/
Fig. 9.3: Boris Goldstein and Vadim Sakharov, Artificial Intelligence and the Neuroscience of the Brain, 2016, http://www.information-age.com/human-touch-consultative-intelligence-123467190/
Fig. 9.4: Industrial robotics, computer-aided manufacturing examples, http://keywordsuggest.org/gallery/149998.html
Fig. 9.5: Gramazio & Kohler, Mesh Mould, 2014, http://gramaziokohler.arch.ethz.ch/web/e/forschung/221.html
Fig. 9.6: Stelarc, Ping Body, 1995, http://v2.nl/events/body-and-nature
Fig. 9.7: Orlan, plastic surgeries, https://www.theguardian.com/artanddesign/2016/jan/15/orlan-i-walked-a-long-way-for-women
Fig. 9.8: Fripp Design, prosthetic nose and the future of 3D printing, https://www.3ders.org/articles/20131108-the-future-of-prosthetics-3d-printed-nose-ear-and-eye.html
Fig. 9.9: Bart Hess, Hunt for High-Tech, 2007, http://barthess.nl/hunt-for-high-tech.html
Fig. 10.1: Nanofiber and NanoRevolution, nanotechnology, https://www.revolutionfibres.com/2015/09/your-friday-dose-of-beautiful-nanotechnology/
Fig. 10.2: Google search engine, www.google.com
Fig. 10.3: Siri app as a personal assistant, https://techbrunch.me/tag/siri/
Fig. 10.4: Large-scale robotic fabrication, Arch-Tech-Lab, ETH Zurich, http://www.dfab.ch/events/new-robotic-fabrication-laboratory-sets-worldwide-standards-for-large-scale-robotic-fabrication-in-architecture/
Fig. 10.5: Philip Beesley, Hylozoic Ground, robotic fabrication, 2010, http://www.philipbeesleyarchitect.com/sculptures/0929_Hylozoic_Ground_Venice/
Fig. 10.6: Steve Pike, Micro-Organic Architecture, AD magazine, Neoplasmatic Architecture, https://www.slideshare.net/mariamnastratichuk/architectural-design-ad-neoplasmatic-design
Fig. 10.7: Wake Forest Baptist Medical Center, Tissue-Organ Printing System, 2016, http://www.kurzweilai.net/regenerative-medicine-scientists-print-replacement-tissue
Fig. 10.8: Yoan Blanc, Twitter visualized by flowing data, http://opendesignnow.org/index.html%3Fp=321.html
Fig. 10.9: Spike Jonze, Her, the movie, 2013, http://www.designworklife.com/2014/01/28/the-technology-and-interfaces-of-spike-jonzes-her/
Fig. 10.10: CAPTCHA, 2003, https://en.wikipedia.org/wiki/CAPTCHA
Fig. 10.11: CAPTCHA, 2003, https://en.wikipedia.org/wiki/CAPTCHA
Fig. 10.12: Instagram app, http://www.wearandcheer.com/7-most-popular-apps-of-all-time-that-your-phone-is-incomplete-without/
Fig. 10.13: Twitter app in social media, https://twitter.com/
Fig. 10.14: Instagram and Facebook app logos, www.instagram.com, www.facebook.com
Fig. 11.1: Owen Freeman, Body Evolution, ‘‘Beyond Evolution’’, https://www.nationalgeographic.com/magazine/2017/04/are-we-evolving-illustrations-stand-alone/
Fig. 11.2: Courtesy Texas Heart Institute, artificial heart, https://media.timetoast.com/timelines/history-of-medicine-3d71a21b-f28b-4a40-ab02-607855173f95
Fig. 11.3: Clive Cookson, synthetic tissue, 2016, https://www.ft.com/content/92cc1f70-f737-11e5-96db-fc683b5e52db
Fig. 11.4: Brian Christie, The Body of Pills, https://www.behance.net/gallery/26372193/pill-man
Fig. 11.5: Kevin Warwick, https://www.coventrytelegraph.net/news/coventry-news/coventry-scientist-kevin-warwick-says-3023380
Fig. 11.6: Stelarc, The Third Hand, 1980, http://stelarc.org/?catID=20265
Fig. 11.7: Robocop movie scene, https://heiscomingblog.files.wordpress.com/2015/05/cyborg2.jpg
Fig. 11.8: Oscar Pistorius, Blade Runner, London 2012, https://www.theblaze.com/news/2012/12/13/watch-paralympian-bladerunner-oscar-pistorius-outrun-a-horse-in-a-foot-race
Fig. 11.9: Stelarc, The Body as an Instrument, https://oss.adm.ntu.edu.sg/n1404172l/research-critique-iii-virtual-bodies-in-telematic-space/
Fig. 11.10: Black Mirror, ‘‘Playtest’’, http://www.prettylittlebookhead.com/2016/11/black-mirror-season-3-episode-2.html
Fig. 11.11: Tiana Sinclair, human cyborgs and the futurist movement, mind-controlled drone, 2015, https://edition.cnn.com/2015/10/26/tech/gallery/futurists/index.html
Fig. 11.12: Steve Mann, wearable computer, the augmented reality glasses, 1980-1990, https://en.wikipedia.org/wiki/Wearable_computer
Fig. 11.13: Neil Harbisson, the world’s first cyborg artist, https://www.theguardian.com/artanddesign/2014/may/06/neil-harbisson-worlds-first-cyborg-artist
Fig. 11.14: Aimee Mullins, prosthetics, https://hannamawbey.wordpress.com/2012/05/02/prosthetics-3-aimee-mullins/
Fig. 11.15: Stelarc, The Walking Head, 2006, http://stelarc.org/?catID=20244
Fig. 11.16: Human cyborgs and the futurist movement, Andrew Vladimirov, brain enhancement, https://edition.cnn.com/2015/11/06/tech/pioneers-futurists/index.html
Fig. 11.17: Sarotis wearable technology, Bartlett School, Interactive Architecture Lab, 2016, http://www.interactivearchitecture.org/sarotis-the-new-sense.html
Fig. 11.18: Black Mirror, ‘‘The Entire History of You’’, https://hyperallergic.com/174281/black-mirror-punishes-and-rewards-passive-viewing/
Fig. 11.19: Black Mirror, ‘‘White Christmas’’, https://io9.gizmodo.com/that-moment-when-black-mirrors-christmas-episode-got-su-1750706400
Fig. 11.20: Stelarc, Transhumanism and Evolution, https://heiscomingblog.wordpress.com/2015/05/26/the-rich-will-become-immortal-cyborg-gods-while-the-poor-will-die-out-says-the-nwo-transhumanists/



- ARE YOU INTELLIGENT? -



