Volume #28 Preview


Archis 2011 #2 Per issue € 19.50 (nl, b, d, e, p) Volume is a project by Archis + amo + c-lab…

VOLUME 28 THE INTERNET OF THINGS TO BEYOND OR NOT TO BE

Amelia Borg Bart-Jan Polman Ben Cerveny Ben Schouten Carola Moujan Christiaan Fruneaux Cloud Lab Deborah Hauptmann Dietmar Offenhuber Dimitri Nieuwenhuizen Ed Borden Eduard Sancho Pou Edwin Gardner Hiroshi Ishiguro James Burke Jeroen Beekmans Joop de Boer Juha van 't Zelfde Justin Fowler Ken Sakamura Lara Schrijver Lorna Goulden Marcell Mars Mark Dek Mark Shepard Mette Ramsgaard Thomsen Nina Larsen Nortd Labs Ole Bouman Philip Beesley Ruairi Glynn Scott Burnham Shintaro Miyazaki Stephen Gage Timothy Moore Tomasz Jaskiewicz Tuur van Balen Usman Haque Vincent Schipper

INTERNET OF THINGS With Trust Design #2 and Tracing Concepts


Touching the Interspace

Domestic Robocop

The interface defines our orientation in the world. Touch seems the natural way to go, but how does it influence our own notion of being? Carola Moujan suggests that 'interspace' is the new realm for designers.

Carola Moujan

The word touch is on everyone's lips these days. It generally refers to tangible devices and interfaces, a trend that possibly started with Steven Spielberg's 2002 movie Minority Report, and of which the iPhone is the seminal example. The spectacular commercial success of Apple's smartphone proved to the world that there is something in touch that significantly reduces the gap between humans and computers, and indeed interacting with objects through direct contact undoubtedly increases user pleasure. Some critics, however, such as Don Norman, have pointed out the inefficiency of tactile interaction, going as far as calling tangible devices 'a step backwards in usability'. Norman believes 'natural interfaces are not natural': they trigger random and unwanted actions, do not rely on consistent interaction protocols, present scalability problems, and so on. He argues that a clear protocol should be adopted to make them fully functional, just as happened with visual interfaces. Norman's essays bring a critical perspective to the current tactile craze. This raises a question, however: if tangible devices are unreliable, inconsistent, unpredictable, and overall less efficient than previous systems, why are people willing to pay (much) more and learn how to use them – no matter how intuitive they might be? What is it that makes them so pleasurable to use? And, importantly, would they remain as pleasurable if they were more functional?

The pitfall in Norman's argument is that he puts visual and tactile interfaces on the same level. In other words, he implies that a tactile interface should work just as a visual one does; and it is true that in most tangible interfaces as they exist today, the role of touch is restricted to the hand only, and envisioned merely from a functional perspective – i.e., as a replacement for pointers and mice in command execution. This is a mechanical understanding that overrides the most powerful affordances of haptics which, I argue below, are not connected to function, but to experience.


The Bipolar Nature of Touch

Most of the time when discussing touch one thinks of the hand and its ability to grasp things. This, however, is a very narrow view of what this sense really is. The experience of touch concerns the whole body, as skin sensations of temperature and humidity, pressure from internal organs, or experiences of movement and weight also belong to it. James J. Gibson calls this global understanding of touch a haptic system, describing it as a bipolar device through which an individual simultaneously gathers information about the surrounding environment and about their own body. The dual nature of touch has interested thinkers from different disciplines throughout history. Philosophers such as Husserl, for instance, have pointed out that touch is where the limit between 'what is me' and 'what is not me' lies, for it is through touch that a body becomes my body – in other words, it is the localization, through touch, of sensations as such that makes us aware of having a body of our own. On the other hand, Aristotle – who gave touch a lot of thought – noted that, unlike the other senses, the experience of touch is fusional: touch does not distinguish between 'a touching subject' and 'a touched object', both actors playing both roles simultaneously. Closer to us in time, Australian filmmaker and theorist Cathryn Vasseleu underlines two seemingly contradictory aspects of touch: one is 'a responsive and indefinable affection, a sense of being touched as being moved'; the other is 'touching as a sense of grasping, as an objective sense of things, conveyed through the skin'. While the first of these implies a form of openness, the second expresses 'the making of a connection, as the age-old dream of re-appropriation, autonomy and mastery', and 'is defined in terms of vision'. This distinction is of major importance in relation to haptic design; what Vasseleu's remarks suggest is that, of the two aspects of touch, only one can be considered 'truly tactile', the other being somehow 'visual' in nature. Stated plainly: depending on whether we adopt the 'tactile' perspective (touch as being moved – an open passage) or the 'visual' one (touch as grasping – a sense of control), the quality of the outcome will be very different. In one case, subject and object are on the same level and the goal is open; in the other, there is domination of one part over the other and the goal is a specific outcome – a pre-determined 'function'.

© Keiichi Matsuda


© Chris Woebken



Of Touch and Power




The intrinsically dynamic property of touch, which is feeling and acting simultaneously, implies an active form of perception that is different from a passive reception of stimuli. Although in all sensual activity both passiveness and action are present, in touch the second is paramount. Therefore designing for touch implies a call to action on the participant; it enables them to drive the experience while remaining self-centered. To further clarify, let us analyze what happens in the participant's body. Two anticipation films will help illustrate the point.

The first, Keiichi Matsuda's Domestic Robocop (2010), is an animated movie showing a vision of an 'augmented' future in which media has completely saturated physical space. Direct bodily contact with objects has disappeared, replaced by a visual representation of the hands which, quite paradoxically, conveys an impression of vintage imagery, as if the user's gestures no longer belonged to the realm of natural movements but were a simulacrum of what humans used to do in a distant past. In other words, in the world of Domestic Robocop users do not touch objects themselves, but rather touch the image of touching them. One no longer grabs a real kettle; instead one grabs the kettle as an icon, as a gate towards concealed information. The act of touching remains present, but in the form of a simulation: we have replaced 'the real thing' (touching) with a simulation of touch. Considered from the tactile perspective, instead of being augmented this situation could be called reduced reality.

But don't get me wrong: I am not arguing against the concept of augmented reality (although I would certainly go for a change of name). I am critiquing simulation, a 'visual', autocratic approach to interaction which surreptitiously makes humans subservient to machines. Simulation is autocratic because it forces the participant into a single point of view (the one 'reality' it is supposed to recreate). This has two major implications: first, the reductive one I mentioned earlier – losing a dimension, exchanging the real for the fake; second, the necessity to comply with the image's demands, which can be huge. In Domestic Robocop, for instance, the body is used as the image's 'control panel' – it makes the image system work, activating the different variations and possibilities of the 'film' being shown. Attention is focused on what the image 'does' or 'does not do', following a predetermined program which pushes the participant to carry through a specific choreography.

Nanofutures

The succession of movements generates a particular quality of sensations which, despite its major impact on the aesthetic experience, is not acknowledged in the design outcome. John Dewey defined the notion of the artificial as being what happens whenever 'there is a split between what is overtly done and what is intended'. In this sense we can say that the system presented in Domestic Robocop is truly artificial not because machines or cutting-edge technology are involved, but because of this split – the simulation of touch that suppresses real touch.

We could instead envision truly natural ways of embedding and accessing data, ways that start from the participant's gestures instead of imposing gestures onto them. This approach is well illustrated by Chris Woebken's Nanofutures: Sensual Interfaces (2007). According to Anthony Dunne, the piece – presented in MoMA's 2008 exhibition Design and the Elastic Mind – is a reaction to current views on nanotechnology, which are primarily related to its capacity to improve the functional characteristics of existing materials (e.g., increased resistance, reduced weight). Instead, Woebken explored nanotechnologies as new design materials in their own right. In particular he focused on 'smart dust' – a hypothetical system of multiple tiny microelectromechanical systems (MEMS) – trying to imagine the type of product that might emerge from this technology and how it could transform the very notion of interaction. Nanofutures: Sensual Interfaces shows an office worker interacting with his desktop computer through an interface made out of blocks of seeds (the seeds representing smart dust). The user breaks the blocks apart, spreads the seeds, plays with them. While the seed interface still fulfills a functional goal – sharing, breaking, mining data – it is actually the sensual quality of the manipulations that strikes the viewer. Beyond function, one would want to work with them merely for the tactile pleasure they would provide.


© Thierry Galimard

In his book Hertzian Tales, Anthony Dunne introduced the concept of the 'post-optimal object'. For Dunne, 'design research should explore a new role for the electronic object, one that facilitates more poetic modes of habitation'. Considering that technical and semiotic functionality have already attained optimal levels of performance, Dunne argues that the challenge for designers of electronic objects now is to 'provide new experiences for everyday life'. In that sense Nanofutures is a good example of how touch can radically change the way we relate to objects, opening up new possibilities for post-optimal designs.




La Fracture Numérique, Une épaisseur d'illusion, 2009

© La Fracture Numérique



© Elias Sfaxi

Touch and Interspace

With the development of ubiquitous computing, architecture has become sensitive. Spaces are now capable of responding to our actions, often in the form of images incorporated into the built environment. A new spatial category – paradoxical, unstable, neither totally material nor fully digital – is born. Let us call it interspace. Through the articulation of brick-and-mortar and electrons, interspaces create a new perception of reality. The bodily implication intensifies the impression of reality these illusionary environments convey; freed from mediation devices such as the mouse and keyboard, we internalize those spaces, as their transformation, sometimes even their generation, happens through our bodies. Just as in any other architectural experience, touch plays a determinant role here, for it is through touch that all experiences of space are shaped. Subsequently, if we want to create meaningful spatial experiences using digital media – experiences in which the images and the built space are bound together in such a way that we do not perceive them as separate elements but rather as parts of an organic whole – then the design ought to be touch-driven.

In practice this is not always the case. Here again we could oppose the 'visual' to the 'tactile', as many interspaces today are vision-driven. Within this conception the piece is considered a 'living painting' or 'living movie' and the hosting space is reduced to a mere support for the images – a screen. Once again we have lost a dimension: what was originally three-dimensional (a space) has become flat (a screen). Conversely, interspaces designed through a tactile approach feel more real, because through touch a physical connection with the body is created, enabling new forms of inhabitation instead of the contemplative type of experience described above. A great variety of forms can emerge from this perspective, for there are multiple possible tactile strategies. One example is the fog curtain used as a projection support by the Parisian collective La Fracture Numérique (a team composed of a video artist and an architect) in their 2009 piece Une épaisseur d'illusion. As the participant walks through the curtain, images are projected upon it. Beyond its symbolic role in relation to the installation's theme (illusion), it is the physical contact with the fog, a caress-like sensation on the skin, that creates a feeling of immersion in a new spatial dimension. Within interspaces participants are the inflexion point, the place where multiple dimensions converge. Architects and designers have a choice when addressing this particular role: either pursuing a controlled, predetermined effect, or defining an operative mode that enables open responses and challenges conventional notions of reality. It is in this second option that the true aesthetic potential of interspaces lies, for by questioning the idea of an objective 'reality' – upon which we continue to live in spite of scientific evidence – these interspaces can open up new ways of experiencing and understanding space. And it is precisely along those lines that they fulfill a specific role left open by previous languages: the transformation of the material world into a less rigid, more fluid environment.




© PBAI

Detail of a breathing column. Hylozoic Soil, '(in)posición dinámica,' Festival de México, Laboratorio Arte Alameda/Ars Electronica Mexico, Mexico City, 2010.



Breathing column prototype model.


The Importance of Random Learning
Hiroshi Ishiguro interviewed by Cloud Lab

In April 2010 Cloud Lab visited the Asada Synergistic Intelligence Project, a part of the Japanese Science and Technology Agency's ERATO program. In an anonymous meeting room surrounded by cubicles we met with Hiroshi Ishiguro to talk about the future of robotics, space and communications. Ishiguro, an innovator in robotics, is most famous for his Geminoid project, a robotic twin he constructed to mimic his every gesture and twitch.

Dressed in a black uniform, Ishiguro's presentation is as matter-of-fact as his surroundings: robots will be everywhere in the future and he wants to make sure the future of communications is as human-centric as possible. Splitting his time between leading the Socio-Synergistic Intelligence Group at Osaka University and his position as Fellow at the Advanced Telecommunications Research Institute, his research interests are tele-presence, non-linguistic communication, embodied intelligence and cognition.

CL If a robot has a human-like appearance, do people expect human-like intelligence?

HI If a robot has a human-like appearance, then yes, people expect human-like intelligence. The robot is a hybrid system, a mix of controlled and autonomous motion. For instance, eye and shoulder movements are autonomic. We are always moving in a kind of unconscious movement. That kind of movement is automatic, and the conversation we're having is dependent on these movements.

At the same time, we can connect the voice to an operator on the internet, so we can have a natural conversation. I can recognize this android as my own body, and others recognize it as me. But others can adopt my body and learn to control it as well. A robot is a very good tool for understanding humans, but they're not easy to make. Human-like robots can be so complicated we cannot use the traditional understanding of robotics. In order to realize a surrogate, or a more human-like robot, we need other tools. For example, the Honda Asimo uses a very simple motor, essentially a rotary motor – but it's not human. The human is actually a series of linear actuators. With this kind of actuator we can make a more complex, more human robot…

In a traditional process, we would train or develop each part and then put them together into an integrated robot. In this project we train the entire system at the same time. If the robot has a very complicated body it is difficult to properly control [and coordinate] all its movements. Therefore the robot needs help from a caregiver, a mother – in this case, my student. My student is teaching the robot how to stand up. This way we can understand which actuators are important for standing up.

There are very big differences between robotic and human systems. For instance, the human brain only needs one watt of energy while a supercomputer requires 50,000 watts. Why do we have this big difference? The reason is that the human brain makes good use of noise. I can try to explain how the biological system uses noise. At a molecular level everything is a gradient, but for the computer we are suppressing noise and expending energy – we are making the binary mistake. That system takes a lot of energy. In traditional engineering, the most important principle is how to suppress noise. The next, more intelligent or complex system will figure out how to utilize noise, like a biological system. We are working with biologists and we have developed this fundamental equation. We call it the Yuragi formula, which means biological fluctuation. A [kinematic skeleton] is the traditional system. But if we have a very complicated robot, if a robot moves in a dynamic environment, we can't develop a model for that environment. If we watch a biological system, for instance insects or humans, we see a model that can respond to a dynamic world and control a complicated body. We don't know how many muscles we have, yet we learn to use our bodies very well. We are using noise [to learn and adapt], for instance Brownian noise (though the biological system employs many different kinds of noise).
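Ishiguro does not write the fluctuation formula out in the conversation. In the attractor-selection literature associated with this research program it is commonly summarized in a form like the following – a paraphrase for illustration, not necessarily the exact equation his group uses:

    \dot{x} = f(x) \cdot A + \eta

Here f(x) is the deterministic dynamics pulling the state x toward an attractor, A is an 'activity' value measuring how well the system is currently performing, and \eta is noise. When activity is high the deterministic term dominates; when activity collapses, noise takes over and the system searches at random – precisely the use of noise as a resource that Ishiguro describes.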

CL When the computer suppresses noise and expends energy, it is expending a lot of energy.

HI So we need to modify our models to incorporate noise. This creates a kind of balance-seeking, where noise and the model control together. If the model fails, then noise takes over. The robot doesn't need to know how many legs or sensors it has. It needs to start with random movements, both small and large. We can apply these same ideas to a more complicated robot. We have developed a robot with the same bone structure and muscle arrangement as a human, yet with such a complicated robot we still cannot solve the inverse kinematics equations that determine movement. Instead we control it through random movements. The robot can estimate the distance between its hand and a target. If the distance is long, the robot will begin to randomly move its arm around in a large-scale noise pattern, across all actuators. Eventually it will find the target and the noise will be suppressed. It does this without ever knowing its own bone structure. We can relate this to the human baby. A baby has many random movements; it looks like a noise equation. Yet it develops a series of behaviors that allow it to control its own body. Babies run these noise-based automatic behaviors. Employing this we can build a more human-like surrogate.

CL As architects we are curious whether responsiveness – or the feeling of presence – is something that can be integrated into architecture and space?

HI [Robotics researchers] call it body proprioception and it is quite important for everything. Appearance is also important. The human relationship is based on human appearance. Basically we want to see a beautiful woman, right? Appearance is very important for everything; that is why I started the android project. Until now robot researchers only focused on how to move the robot and did not design its appearance. Every day you check your face, not your behavior. They are very different.

CL Do you think the robot can be emotive, resembling the human? Can it be expressive without having the physical character of the human being? For example, Aibo (Sony) or Asimo (Honda)?

HI Emotion, emoting, objective function, intelligence, or even consciousness is not the objective. I function, I'm subjective, [so] you believe I'm intelligent, right? Where is the function of consciousness or emotion? We believe by watching your behavior that you have consciousness or emotion. She has emotions and believes I have emotions; therefore we just believe that we have emotions and consciousness. Following from that, the robot can have emotion, because it can have eyes. Can you have drama in robotics? I worked on the robot drama I am the Worker by Oriza Hirata. We used the robots as actresses and actors in scenarios with human actors. The robot actors don't need to have a human-like mind. The director's orders are very precise, like 'move forty centimeters in four seconds'. But actually we can feel human emotions in the heart when watching this drama. I think that is the proper understanding of emotion, consciousness and even the heart.
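The noise-driven reaching strategy Ishiguro describes in this exchange – large random arm movements that are suppressed as the hand approaches the target – is concrete enough to sketch in code. The toy Python loop below is an illustration only: the seven-joint 'arm' and all constants are invented stand-ins, not his group's implementation.

    import numpy as np

    rng = np.random.default_rng(0)
    target = np.array([0.8, -0.3, 0.5])   # desired hand position
    joints = rng.uniform(-1, 1, size=7)   # seven joint angles, arbitrary start

    def hand_position(q):
        # Invented stand-in for forward kinematics: the robot never solves
        # this mapping, it only observes where its hand ends up.
        return np.tanh(q[:3] + 0.5 * q[3:6]) * q[6]

    for step in range(20000):
        distance = np.linalg.norm(hand_position(joints) - target)
        if distance < 0.05:
            break                          # target found, noise suppressed
        # Noise amplitude scales with the remaining distance: large-scale
        # flailing far from the target, vanishing jitter close to it.
        trial = joints + rng.normal(0.0, 0.2 * distance, size=joints.shape)
        if np.linalg.norm(hand_position(trial) - target) < distance:
            joints = trial                 # keep movements that helped

The point is the one Ishiguro makes: the controller converges without ever representing its own 'bone structure'; the only model is the observed distance to the goal.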











CL What is the limit case of the technology then? If you are no longer physically present in a space your robot can do anything. Is this kind of freedom a goal?

CL In terms of working method, the laboratory is a very controlled space – but what feedback have you been getting in terms of robot deployment in spaces that sponsor good interactions, for instance, in malls, hospitals, large spaces, small spaces, etc.?

HI The goal is ultimately to understand humans. On the other hand we can’t stop technological development in the near future. We have to seriously consider how we should use this technology as a society. My goal is still to really think through these issues of technology, which is still far behind the real understanding of the human body. Today we don’t see this kind of humanoid robot in a city, but we see many machines, for instance, the vending machines found in the Japanese rail system. The vending machine talks, says hello. It’s impossible to stop the advancement of this kind of technology; that is human history. The robots we develop always find their source in humanity. We walk, so locomotion technologies are important. We manipulate things with our hands, so manipulation is important. We are not replacing humans with machines, but we are learning about humans by making these machines.

HI The real fundamentals come from the field, in interactive robots. We are getting a lot of feedback. In order to have this kind of system, we need sensors. We can't just use the sensors from the robots; it is not enough to [compute and plan] out the necessary activities. We have developed our own sensor networks with camera/laser-scanners and our system is pretty robust. The importance of teleoperation comes through in field testing. People ask the robot difficult questions and that is natural. Before that I developed some autonomous robots, but I gave up that [research direction] and focused on teleoperation, which is good for collecting data. Using teleoperation the robot can gather data on how people behave and react. Then we can gather the information and make a truly autonomous robot.

CL Our behavior is very different depending on the space. We operate differently from space to space. Is it not something designed into the robot?

CL In a sense, the body is the last frontier of innovation. Despite many technologies (for instance, the rapid incorporation of cell phones for communications) extending the human body, the actual manipulation of the body itself remains taboo. There is a debate on the ethics of changing bodies. With whom do you identify in this debate: the engineer, the philosopher, the priest?

HI My main interest is the human mind and why emotional phenomena appear in human society. Robots reflect and explore that human society. My next collaboration is with a philosopher – actually two post-docs from philosophy. I am trying to develop a model of social relationships. I believe we can model human dynamics – we cannot watch just one person to understand emotion, right? We need to watch the whole society and develop models of that society – that is very important. Today we don't have enough researchers approaching the human model in robotics, trying to establish the relationship between robots and human beings.

CL Parallel to evolution, the child robot you were showing us had to be physically trained to move by a trainer.

HI That is development, not evolution.

CL But it is employing a certain kind of machine learning, so that as it is trained over time it can perform these functions by itself. Do you not see that as a kind of evolution?

HI That is development. Evolution is different. For example, a robot would have to be designed through genes – that is evolution. But even for the developmental robot, you have to give it a kind of gene, its program code.

CL Have you experimented with genetic programming?

HI We are using genetic programming, but only in the context of very simple creatures, such as insects. Our main purpose is to have a more human-like robot and we cannot simulate the whole process of human evolution.

CL What do you think are the limits of the Geminoids? Technology is always extending our capabilities, but is the Geminoid extending us or are we still extending it?

HI People typically expect the Geminoid to be able to manipulate something. Actually, the Geminoid is just for communication. Physically it is weak. The actuators themselves are not powerful enough to manipulate much. The Geminoid is a surrogate, whereas the manipulation of objects can be accomplished with another mechanism.




Is this Kaizen Ishiguro in person or his humanoid double?

HI That is why we developed telecommunication. If we control the robot we control the situation. We can gather information and develop more autonomous robots. We are in a gradual development process for the developing robot… Evolutionary processes are important and should happen, but evolution is driven by humans. In my laboratory we are building a new robot; we are improving the robot. That is the evolution. Evolution is quite slow. The current evolution of humanity is done through technology. By creating new technologies we can evolve. We can evolve rapidly.



CB2 is a 'soft' robot that is actively trained by a human 'mother'. CB2 has pneumatic actuators and over 200 active sensors, including two cameras and microphones. It is autonomous, but largely a data-gathering mechanism for figuring out which actuators are important when engaged in complex behaviors (walking, getting up). As shown in the kinematic structure, CB2 has a total of fifty-six actuators; the eyeballs and eyelids are driven by electrical motors, since quick movements are required for these parts, while the other body parts are driven by pneumatic actuators. A joint driven by a pneumatic actuator has mechanical flexibility in its control thanks to the high compressibility of air. The pneumatic actuators mounted throughout the whole body enable CB2 to generate flexible whole-body movements. Although the mechanism is different from a human's, it can generate human-like behavior (though with a more limited range of movements than a human's).


Meeting in the Middle
Ruairi Glynn interviewed by Vincent Schipper

Interactivity seems like a banal, pretty straightforward conception. The multifaceted nature of what interactivity means and offers to architects, designers, and society as a whole needs to be reassessed. Old paradigms of interactivity are rooted in a mechanistic conception while new emerging ideas present alternatives. Instead of interactive versus non-interactive, we should think of the relation as a gradient. The text below is the outcome of a conversation. It was first conceived of as an article; however, Ruairi Glynn pointed out that this might be a little contradictory to the theme. Interactivity, he says, is all about conversations. This piece is an exercise in meeting in the middle.

Performative Ecologies: Dancers. Exhibition – 'VIDA 11.0, Concurso Internacional de Arte y Vida,' Madrid Art Fair 2009. A family of performing creatures swing and illuminate patterns with their tails to compete for visitors' attention. As they perform they observe and learn from the response of people by assessing their attention levels using facial recognition. A gestural conversation develops as both robot participant and human participant adapt and learn about each other's gestures.

VS You are both an interaction designer and an architectural designer; tell me how these relate in your mind. How does one design 'interaction', particularly in an architectural context?

RG Yes, before I moved to architectural design, I was an interaction designer first, almost a decade ago now. In the years I worked in the interaction design industry there was never any conversation on what constitutes 'interaction'. Thinking about it now, it was extraordinary that the question never cropped up in all those years, not even during my education in so-called 'human computer interaction design'. Just to start, the word 'interaction' in the industry was a buzzword for selling to clients. In fact in early web publishing, levels of interactivity were crudely measured by how many different types of media you were using; it was a question of whether you had sound, video, images and hyperlinks. So it was really almost a perverse kind of understanding of interaction, based on a number of media types and mechanisms or buttons people could press.

When I went on to study architecture what was immediately important were people's interactions with each other, primarily, and that architecture acted as the interface. So architecture was not itself interactive but it was a space for interaction to take place. This made me start to re-think software as the interface between interactions rather than being actually interactive itself.

A particularly interesting time for thinking about social interactions and networks was the 1960s. This was a time when architecture became a great deal more adaptive, responsive, mobile, democratic and open source, as it were. There was a counter-culture to the top-down deterministic model of architectural progress needing to be overseen by governments or town planners. It was actually about people taking responsibility for their own space, customizing, negotiating and conversing with larger networks. So that is an interesting model to compare with the story of software design, which for most of the past century had been built on centralized models of development. When the internet arrived, hackers harnessed the power of distributed independent developers and created cultures of open source, peer to peer and social networking with global reach. Where once the masses would buy software from corporations and consume it, there is now enormous bottom-up activity allowing potentially anyone to compete in the market, to harness the creative potential of thousands of developers and build stable, open systems that challenge corporate models of production. Since our lives are very much determined by the protocols given to us, the freedom and ability to challenge existing models is terribly important.

With all of that in mind, the question that I came to was: if software and architecture are predominantly interfaces between interactions rather than interactive themselves, what makes something interactive? Certainly I think I am interactive. I think we're interacting here talking to each other. I also interact with my dog, which, even if it's less verbal and more gestural, is a rich interaction. The natural world is saturated in interactions, so there's plenty of inspiration there for us to reflect on. Ultimately I'm asking: can we build machines that don't simply execute a set of commands predictably but instead enter into conversations with people? And if so, can these conversations create aesthetically pleasing and useful applications in the built environment? The conversations I've been looking at are gestural rather than verbal, defined by occupation, orientation and expression. In my work it's not metaphorically a dance, it actually is a dance between robotic installations and human occupants.



This has led me to make some distinctions between automatic, reactive and interactive modes of driving my architecture. For something to interact it must participate, and I characterize participation as involving three interrelated processes. First, a participant needs to be able to propose or generate 'stuff' itself, to be able to offer something to a conversation with other participants. It then needs to observe the success of whatever it is offering to that conversation, so it needs some kind of goals to measure how well it's doing. Finally, it must be able to adapt or learn from its successes and failures, evolving over the course of the conversation. Let us say my goal is to make you smile: then I look at your facial expressions while we are talking and I adapt my verbal and nonverbal actions as I get to know you. When I get you to smile I learn about you, but I also learn as much when you don't smile. It is a process in which multiple participants act and react in an ongoing exchange. It is unpredictable and negotiated. It is always in a state of transformation and inherently ambiguous in the best sense of the word. Such distinctions help us to start thinking about how we might build objects, installations, and buildings that are active participants rather than just a medium through which information travels between people or machines.
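Glynn's three processes map directly onto a minimal learning loop. The Python sketch below is an invented illustration (a real installation would read attention from cameras or facial tracking rather than this simulated observer): it proposes gestures, observes whether they draw a response, and adapts its preferences accordingly.

    import random

    gestures = ["sway", "nod", "spin", "flash"]
    weights = {g: 1.0 for g in gestures}   # the installation's current beliefs

    def observer_responds(gesture):
        # Stand-in for the human participant, who happens to prefer 'nod'.
        return random.random() < {"sway": 0.2, "nod": 0.7,
                                  "spin": 0.1, "flash": 0.3}[gesture]

    for turn in range(200):
        # 1. Propose: mostly offer what has worked, sometimes try something new.
        if random.random() < 0.1:
            gesture = random.choice(gestures)
        else:
            gesture = max(weights, key=weights.get)
        # 2. Observe: did the offering succeed against the goal?
        success = observer_responds(gesture)
        # 3. Adapt: reinforce gestures that drew a response, decay the rest.
        weights[gesture] = max(0.1, weights[gesture] + (1.0 if success else -0.2))

Even this crude loop is negotiated rather than scripted: put a different observer in front of it and a different repertoire 'evolves', which is what separates it from a reactive light switch.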

VS This really comes down to the issue – famously brought up by Pask – having to do with the difference between communication and conversation, right? When we talk about inter-human communication, the idea that there is always a flaw in the transference of meaning is almost taken for granted. However, when we talk about the transfer of information we begin to assume that there is some sort of perfection in information itself, which is then communicable. With this in mind I was wondering if you could talk about your idea of 'inherent ambiguity'.

RG To get right into it: you can have a mechanistic view of the world; this is very Newtonian. You can think about Descartes, who is rumored to have built an automaton doll of his daughter, believing we could build machines that are real, lifelike representations of the human body. There is that view. And it is hardly surprising that when the guys who built the first computers and the first robots were putting these things together and saw the power of logic and binary, they were heavily influenced by that same particular mechanistic view of the world, which I think is problematic. I understand their thinking though, as they were in a difficult situation. They had to try and make machines with very unreliable technologies. These guys would really struggle to send simple command messages between components and between machines, so I salute them first and foremost. The issue was that there was a huge amount of noise, so they needed to devise all sorts of code protocols that would get their information between different places accurately and reliably, and that in itself was a great achievement. The result of that endeavor is the world of telecommunications, the internet and so on. The issue, however, really isn't that there was anything wrong with these achievements, but that a particular way of making machines reliable – by eradicating noise, eradicating potential ambiguity – became engrained within the conceptual model of human-computer interaction. Human-to-human interaction models were not pursued because these didn't work within a mechanistic model. They were too ambiguous. The engineering challenge of sending and receiving ones and zeroes strongly influenced the model for how people and machines communicate with each other. It was highly reductive and highly predictable. Interaction designers, software designers and the whole industry are responsible for treating and understanding people a little like machines. I recently saw a promotional video for Microsoft's new phone operating system in which they talk about experiential design; this is a recognition that people all experience things differently. However, there is still a widespread belief that you can design something everyone will experience the same, which I think is a bit daft. Luckily there is hope from another discipline. Between the 1950s and 1970s early artificial intelligence had a really mechanistic view of how the human mind functioned. But there is this wonderful counterpoint in the 1980s from a guy named Rodney Brooks, who actually built robots that taught themselves to walk around. The idea was: rather than command a robot on how to walk, why not just put it on the ground, give it a bunch of legs and let it kick its legs around until it learned to get itself to a location or goal. They found out that by doing this the robots would work out how to get from A to B with a lot less computation. The systems were not only simpler, but better at walking, more robust, and cheaper to build. All because, rather than the scientists believing they knew best and therefore should do all the thinking for the machine, they let the machine do some of the work itself, just by giving it the ability to generate its own behaviors, adapt and learn, etc. These are great examples of built systems that have the capacity to participate with the world around them. Of course Gordon Pask built participatory machines in the 1950s and 1960s, such as the Colloquy of Mobiles, but for the most part the model for developing machine behavior has been mechanistic rather than social.

In the history of software design there was a transition from typing in command-line instructions towards WIMP graphical user interfaces, but essentially the underlying master-slave model is identical and we have pretty much kept with that right up until today, the only real difference being the resolution, the number of colors on the screen, things bouncing, etc. – all rather trivial stuff. So the models many designers are using today are inherited from early on in computer science, going back to the days of punch cards. The artificial intelligence example is an important precedent because there was this sudden paradigm shift in the way roboticists conceived of designing computational systems intended to engage with the built environment. Today we have two types of AI: old AI, which is a top-down, logic-driven, formulaic, mechanistic model, and this bottom-up behavioral model, as it is sometimes called. Actually both have been found to be useful, so they have pretty much met in the middle at the moment. Something very similar will happen to the way we build computational systems for architecture and more widely. Robotics has always been about software and hardware meeting the real world and seeing the results – where ones-and-zeroes logic meets the messy, chaotic world we inhabit. The fact that the built environment is becoming saturated with computation means we need to seriously think about how we conceive the models we use to drive these systems. Do we just follow the predictive software model or do we look for other opportunities to harness the world's glorious ambiguity?

I choose to build machines that deal with aesthetic goals. I am interested in how I can attract people to these ideas. So by making machines whose goal is to learn how to attract and keep an observer's attention, I am hoping I can attract people's attention to the wider implications.


VS The aspiration to attract more people to the ideas seems to imply that there is not much attention for these concepts at present. To what extent is that true?

RG When I meet new students for the first time they are excited about these technologies, but their ways of thinking are heavily informed by the software they have grown up with. So their way of conceiving architectural interaction is based heavily upon the software models we've discussed. Frankly, I need to totally break them down and build them up from the ground again by asking simple questions first, such as 'is a light switch interactive?' And if a light switch isn't, is an installation interactive in which a particular light pattern appears in the room when you stand on a particular floor panel? We would never talk of turning a light on and off, a sensor-driven automatic door opening, or a thermostat as being interactive, yet they are within the same kind of conceptual model as much of the work that gets called interactive art or architecture. There seems to be a real laziness in the use of terminology and a lack of real conceptual interrogation within the architectural community, as well as within the arts and design community as a whole. But by asking these fundamental questions, by making some distinctions, you open up the very fertile territory between reactivity and interactivity. I don't think it is an issue of one being better or worse than the other. It is more like there's a gradient of opportunities between the two, and we (my students and I) try to operate in that gradient between reactive and interactive architectural design. Doing so requires students to learn things like programming algorithms, adaptive computation, machine learning and so on, which is immensely empowering because they no longer have to rely on software given to them. They can build the software and hardware themselves in order to challenge the protocols the industry currently offers.

There's a lot of discussion about computational optimization in architectural design. I just organized FABRICATE with my colleague Bob Sheil, a conference all about the making of digital architecture. 'Optimization' was probably the most frequently used word over the two days. It was all about the optimization of material, form and so on, which is all very interesting, but another discussion is needed: the optimization of behavior, of the systems that will drive our built environment. If you talk to anyone who has ever lived or worked in a building with a central server running the entire building, you always find anecdotes about its ridiculous nuances, such as lights that come on automatically when it gets dark, which isn't very useful when you are trying to give a PowerPoint presentation and the lights ought to be off. Often the systems are so locked down there is little you can do to change them. This is where the arrogance of the designer directly impacts inhabitants in a frustrating and even dictatorial manner. If as designers we could allow some loss of control, allowing novel adaptive systems to operate, we could conceivably optimize how buildings respond to the activity within and around them. As the context of a building changes – whether it is a change in the number of people using it, in climate, or certainly in the buildings around it – you suddenly start to design systems that respond more directly to people's needs over the entire lifespan of the architecture.


VS One of the many important issues you brought up was that of overcoding for a solution. If we take the example of Gordon Pask's self-organizing chemical computers, or that of Rodney Brooks for that matter, there seems to have been a move toward the idea that a function need not necessarily be explicitly programmed (we need not necessarily program something for it to carry out a specific action). This seems to have been marginalized in the 1990s, perhaps from a fear that a computer may do something we may not want it to do, with the idea that 'here is what it purely needs to do and it cannot do anything apart from what we are asking it to do'.

RG Right. In this world there are things that need to do what they are supposed to do. For example, you want a dialysis machine or a pacemaker to work the way they are expected to work. But then there are plenty of things that don't really need to be entirely predictable, or at least we can give them the opportunity to perhaps surprise us. To, God forbid, even be cleverer than us. So I think the designer's role in all this is to be able to make judgments about what sorts of things are probably better off working predictably and what things might be improved by giving them some capacity to experiment a little and learn and adapt and so on. There are obvious aesthetic opportunities, but equally there are opportunities to explore how our environment might conserve energy and resources generally.


Performative Ecologies: Dancers in linear arrangement. Exhibition – 'Emergencia,' São Paulo 2008. Four Dancers suspended in a darkened room await visitors, searching the room for people to perform to. While they wait, they will occasionally turn to each other and perform gestures, discussing the dances they have evolved over the course of the exhibition.

VS So do you think that for us to be able to properly interact with a building it becomes necessary for us to be able to perceive some sort of intelligence in it?

RG I am not sure we always need to be aware, or at least constantly aware, that something is interacting with us. Do we need to know a building is adapting and optimizing lighting or air conditioning? It's worth stating that intelligence is an observed attribute and not something that can be mathematically or otherwise proven. We can characterize things with levels of intelligence that are computationally very simple. I'll give you a really lovely example. Roboticist and polymath William Grey Walter built some very simple robots in the 1940s that would run around and follow light. They all had these cute behaviors and were hugely popular not just with scientists but also with the public. All they did was follow light, but the environment in which they were placed was complex enough for them to appear to have complex behavior. And so observers attributed levels of intelligence, saying things like 'oh God, it's alive' or 'it appears to be shy'. Thus very simple reactive things responding to the complexity of the world can give us extraordinarily different and engaging behaviors. In time the issue will be that those behaviors become predictable and lose novelty. But they can be wonderful, and there are plenty of examples of reactive systems being both delightful in an aesthetic sense and useful in a functional sense. There are also plenty of examples of things that are simple automata that are really delightful and very functional. Yet there is this interesting question: does giving things the ability to participate, propose, adapt and learn – in a sense giving them more of a capacity to surprise us – actually give us a sense that they are more intelligent? I would imagine it probably does. If we observe something that really does respond and learn about us, we build a closer relationship to it. There are plenty of opportunities for these things to be embedded, ubiquitous and highly interactive. So yes, they may be invisible, but others might be visible architectural features too. It is a multi-layered, multi-scale ecology of systems, some of which we engage very directly and others that float in the ether, so to speak. It is about designing that ecology, and a large part of that ecology will be developed by other industries. So one of the pressing questions on the minds of architects and designers is: where and at what level of that ecology do we start to have a role? There's no reason why architects can't be the ones making the hardware and software as well as leading the debate. If we don't, someone else will, and architects will just have to accept what they get given. Our lives are very much determined by the protocols given to us, as well as the critical freedom and ability to challenge existing models.
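Grey Walter's tortoises show how little machinery such 'lifelike' behavior needs. Below is a minimal Python sketch of that reactive rule (the geometry and constants are invented for illustration): steer toward whichever of two light sensors reads brighter, and keep rolling forward.

    import math

    light = (5.0, 3.0)                 # position of the lamp
    x, y, heading = 0.0, 0.0, 0.0      # robot pose

    def brightness(px, py):
        # Light falls off with squared distance from the lamp.
        return 1.0 / (1.0 + (px - light[0])**2 + (py - light[1])**2)

    for step in range(500):
        # Two photocells mounted to the left and right of the heading.
        left = brightness(x + math.cos(heading + 0.5), y + math.sin(heading + 0.5))
        right = brightness(x + math.cos(heading - 0.5), y + math.sin(heading - 0.5))
        heading += 2.0 * (left - right)   # turn toward the brighter side
        x += 0.1 * math.cos(heading)      # and keep moving forward
        y += 0.1 * math.sin(heading)

There is no memory, no model and no goal representation here; the 'shyness' or 'curiosity' observers attribute to such a machine lives entirely in the coupling between a trivial rule and a complex environment.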






Photo: Reuters/Dylan Martinez



Anti-government protesters during demonstrations in Tahrir Square, Cairo, February 2011



